This repo contains the companion code for video explanations and walkthroughs from my YouTube channel @johnnycode. If the code and videos helped you, please consider:
ChromaDB is the vector database with the lowest learning curve: you can start testing semantic searches on a vector database within minutes. Everything runs locally and is free; there is no need to sign up for a cloud account or to learn LangChain first.
How to start using ChromaDB's multimodal (image) semantic search on a vector database. We'll load some images and query for objects that appear in them. We'll also cover how to use the Where metadata filter to narrow the search results to more relevant matches.
How to vectorize data into ChromaDB embeddings as fast as possible by combining the power of your NVIDIA CUDA GPU with Python's multiprocessing capability. We'll use multiprocessing to 1) launch a producer process on the CPU that handles the workload of reading and transforming the data, and 2) launch a consumer process that vectorizes the data into embeddings on the GPU.
Improve your semantic searches with vector embeddings from one of the best LLMs out there. With just a few code changes, we'll swap ChromaDB's out-of-the-box local model for the Gemini Pro embedding model.
This tutorial shows how to persist a ChromaDB database in Google Colab by creating the database on your Google Drive. This is part of my Recipe Database tutorial series at RecipeDB Repo.