Vector Search RAG Tutorial - Combine Your Data with LLMs with Advanced Search
Learn how to use vector search and embeddings to easily combine your data with large language models like GPT-4. You will first learn the concepts and then create three projects.
✏️ Course developed by Beau Carnes.
💻 Code: github.com/beaucarnes/vector-...
🔗 Access MongoDB Atlas: cloud.mongodb.com/
🏗️ MongoDB provided a grant to make this course possible.
⭐️ Contents ⭐️
⌨️ (00:00) Introduction
⌨️ (01:18) What are vector embeddings?
⌨️ (02:39) What is vector search?
⌨️ (03:40) MongoDB Atlas vector search
⌨️ (04:30) Project 1: Semantic search for movie database
⌨️ (32:55) Project 2: RAG with Atlas Vector Search, LangChain, OpenAI
⌨️ (54:36) Project 3: Chatbot connected to your documentation
🎉 Thanks to our Champion and Sponsor supporters:
👾 davthecoder
👾 jedi-or-sith
👾 南宮千影
👾 Agustín Kussrow
👾 Nattira Maneerat
👾 Heather Wcislo
👾 Serhiy Kalinets
👾 Justin Hual
👾 Otis Morgan
👾 Oscar Rahnama
--
Learn to code for free and get a developer job: www.freecodecamp.org
Read hundreds of articles on programming: freecodecamp.org/news
What kinds of projects do you plan to make with Vector Search?
Currently making a Discord chatbot with long-term memory
Currently making a product recommendation project for the organisation I'm working for (an e-commerce platform)
THIS COURSE IS AMAZING!!!!!!!!!!!!!!!
For right now I am going to try to create a RAG project using the Google MakerSuite LLM, which is free. If I am able to create it, am I allowed to share the GitHub repo's link?
I want to create a marketplace to match job posts with applicants. I would like both the job creators and the job seekers to be able to submit their requirements via a chatbot (e.g. ChatGPT) as well as a structured form. So, ideally, I'd like the LLM to push the postings into the DB and also call an API function to pull the potential matches from the postings against the applicant requirements. Do you think this solution could work?
Whoa, you're teaching! This is the first time I've ever seen one from you.
Thanks for the video tutorial. Helped me to understand the core ideas used in this technology!
This is brilliant. Thanks so much from a grateful student at the School Of Code
It would really help everyone if you followed the best practices for using your tokens/logins safely: the old "practice what you preach." Many of your viewers might not really know how to do that, and they NEED to do it. I appreciate that it makes your video less expository and is a burden in terms of prep.
That was awesome. I learnt a lot 🎉
best video of the year ❤
Where is the code for project two available? In the GitHub repository it is different. Thanks!
Awesome 🎉
Great content!
The files for project two in the Github repository do not match this video. Could you kindly verify the files please? Thanks
I understand MongoDB sponsored this, but I'd really have appreciated an explanation of WHY someone should choose MongoDB vs. other options. Same with the embedding model: WHY use the Hugging Face model vs. OpenAI's Ada? There are so many different options for the vector store and the model, so a tutorial that deep-dives into this decision is super important.
It was touched on: MongoDB allows you to store the vectors alongside the original data (i.e. in the same document), which means you can filter out documents that you don't want to use in your vector search before you run a vector query; and Hugging Face is free when starting out, while OpenAI's API costs money.
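To make the "filter before you search" point concrete, here is a minimal sketch of an Atlas `$vectorSearch` aggregation pipeline with a pre-filter, assuming pymongo. The index name, field names, and the genre filter are hypothetical; adjust them to your own collection and vector search index.

```python
# Sketch of an Atlas $vectorSearch aggregation stage that pre-filters documents
# before the similarity search runs. Index name ("vector_index"), embedding
# field ("plot_embedding"), and the "genres" filter are illustrative only.
def build_filtered_vector_pipeline(query_embedding, genre):
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",        # name of your Atlas vector search index
                "path": "plot_embedding",       # field holding the stored embedding
                "queryVector": query_embedding, # embedding of the user's query text
                "numCandidates": 100,           # candidates considered before ranking
                "limit": 5,                     # results actually returned
                "filter": {"genres": genre},    # pre-filter: only documents of this genre
            }
        },
        {"$project": {"title": 1, "plot": 1, "_id": 0}},
    ]

pipeline = build_filtered_vector_pipeline([0.1, 0.2, 0.3], "Sci-Fi")
# With a live connection you would then run:
# results = collection.aggregate(pipeline)
```

Because the filter runs as part of the same stage on the same documents, there is no second system to keep in sync with the vector store.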
The thing with OpenAI, Claude, and so on is that you are at the mercy of the suppliers. The most obvious concern is that if, for any reason, OpenAI, Claude, and the like had downtime, or their servers were not responsive, your business would absolutely be affected. Take OpenAI as an example: the OpenAI library gets updated very frequently, and they provide an API rather than a model. So you are absolutely at their mercy when they decide to change endpoints, decommission old models, and so on. You are also at the mercy of their pricing.

There's nothing wrong with just using OpenAI's API; you just have to position your business well. If you're just an integrator, then all is good, but if you're an AI consultancy firm, then it makes sense to have your own model tuned for a specific task, e.g. Mistral's mixture of experts. It is also cheaper to make a leaner model and host it yourself.

Why was MongoDB chosen? Because they are the sponsor, obviously. It doesn't really matter for now what DB you are using, because it's just a tutorial. However, if you're really going into production, it is perfectly OK to have specific DBs for specific tasks.

Lastly, it's all about the use case; no one has infinite money to burn. There's only a small or a big budget to use. If your wallet is deep, then use OpenAI for everything. If your wallet is shallow, then you should provision resources correctly.
OpenAI is paid
Well this is freecodecamp. The place to get started.
Thank you for the wonderful explanation.
AMAZING!!!!!!!!!!!!!!!!!!!
Hi, thanks for the video! What about follow-up questions in RAG? Example:
Q: Suggest some movie with Johnny Depp
A: ...
Q: What year was it filmed?
A: ...
Wow, great video, thank you! How does this compare to just using the ChatGPT API for semantic search within our data?
That's why he's the GOAT
For people having trouble with loading the sample data: from the main screen, click the project drop-down menu at the top and choose "View all projects". On the Overview screen there is a right-pointing arrow labeled "View database deployments". There you will see your Cluster0; click it. On the next screen, on the right side, you will see the buttons "Connect", "Configuration", and "..."; click the dots button to see "Load sample dataset".
Thanks, that was really helpful! I want to create a marketplace to match job posts with applicants. I would like both the job creators and the job seekers to be able to submit their requirements via a chatbot (e.g. ChatGPT) as well as a structured form. So, ideally, I'd like the LLM to push the postings into the DB and also call an API function to pull the potential matches from the postings against the applicant requirements. Do you think this solution could work with the vector search / RAG approach you've shown here?
There's a lot missing. I get that this is basic, but the metadata is crucial, and 90% of people will be using cosine similarity, especially in RAG systems. Great video, by the way. It's awesome that you take time out to help others.
Thank you for the course! I have a question: how can I search across data in multiple languages? Would I have to create embeddings for every language, even though it's the same data (i.e. "house" in English and "casa" in Spanish have the same meaning)? I want to be able to search in any language.
dude.. you are a bomb!!
Fantastic source of information! Learnt a lot 🤓
Recently getting into Data Science/ML. Do you guys recommend any resources to learn more about vectors for programming?
Where is the sample_data used in project 2? Doesn't seem to be in the repository that is linked
Have you got the sample_data?
Let's go
Is the accuracy of the documents retrieved influenced by the user's query? For instance, you mentioned using "imaginary characters from outer space at war" as a user query at 25:14. Would employing a more detailed query, such as "Please, I need to find all the imaginary characters from outer space at war in the collected data, could you do that for me, please?" result in better or worse outcomes?
Guys, please make a video with open-source LLM APIs, like PaLM or Hugging Face. Please..
Agreed. Nice video, but calling OpenAI APIs is not practical for most folks trying to learn anything.
I cannot for the life of me find the .py and .txt files for projects two and three.
Hi, I loved this session. I want to have my own embedding server. Can you please make a video on this? I want it based on an open-source LLM. Please guide. 🙏🙏🙏🙏
Please commit the latest code to git; the .txt files are missing.
Would you be able to point me to some tutorials that achieves the same thing as Project 2, but without using langchain? The query_data function from that tutorial is pretty mysterious, and I'd love to learn what's happening behind the scenes.
You can generate vector embeddings by calling the REST APIs exposed by vendors like Hugging Face, OpenAI, etc. One thing to note: these vendors employ rate limiting at their end, basically throttling the number of requests you can make to their APIs per second. You need to buy a subscription accordingly, depending on your requirements.
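When you do hit those rate limits, the usual pattern is to retry with exponential backoff. Here is a minimal sketch; `send` is any callable returning an `(http_status, body)` tuple, so the helper can be exercised without a real network call. In practice you would swap in a `requests.post` wrapper around the Hugging Face or OpenAI endpoint; the names here are illustrative.

```python
import time

# Retry-with-exponential-backoff sketch for a rate-limited embedding API.
# `send` is injected so the logic can be tested without hitting a real service.
def post_with_retry(send, max_retries=3, base_delay=1.0, sleep=time.sleep):
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:                       # not rate-limited: done
            return status, body
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))  # backoff: 1s, 2s, 4s, ...
    return status, body                         # still throttled after retries

# Fake endpoint that returns 429 twice, then succeeds with an "embedding".
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, [0.1, 0.2])

status, body = post_with_retry(fake_send, sleep=lambda s: None)
```

Injecting the `sleep` function keeps tests fast; production code just uses the default `time.sleep`.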
May I ask why you did not use spaCy to create vectors, but LLM models instead?
Hi. Could you be so kind as to add the three TXT files mentioned in project #2? They are mandatory for completing the example. Thanks.
Have you got the txt files, please send :)
How does this compare to Qdrant and Weaviate?
How did you choose the dimension while creating the vector search index?
Can you please upload these 3 files to the git repo: aerodynamics.txt, chat_conversation.txt, and log_example.txt?
I could not find the same endpoint for the embedding model used in the video for the first project. Could you tell me where to get it for this specific model?
Is there a way to use any model other than OpenAI for doing these operations? Something like open-source models?
Is there some kind of a limit on how much data I can provide? If I have documents with 1,000,000 words in total, will the RAG be able to retrieve the most relevant documents? And if most of the documents are relevant, will the LLM be able to take all of those as an input? Sorry, I just noticed I've asked quite a few questions 😂
@beau - The GitHub repo doesn't match the contents of the video, at least for project two.
Can it be done privately? Can one query local .pdfs? At 30:00, why Euclidean distance? I thought it was the four images vs. the test (cosine similarity).
Yes
May I ask, where did you get the Hugging Face embedding_url?
What are the prerequisites for this tutorial?
In the provided link for the repos on GitHub, project two is missing!
At 22:29, how do you get the index JSON on the right side? Thanks
What is a self-hosted, open-source alternative to MongoDB's cloud?
Self-hosted MongoDB 🙂
Where did you get the HF model's embedding URL from?
Can I put this course on my CV?
Is there any video on this channel for math? For AI you need linear algebra and all that.
We have quite a few math courses. Here is a linear algebra course: kzhead.info/sun/fdKNkZ2Qq6ijmYE/bejne.html
It's a shame the files aren't there for the final two. I followed along with the second one, but the third might be a push. Anyone find the files elsewhere?
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
Is the embedding_url still valid? When I run the code at 15:09, it just returns "None". I tried pasting the url in a browser and it returns a 404.
🎯 Key Takeaways for quick navigation:
00:00 🕵️ Vector search allows searching based on meaning, transforming data into high-dimensional vectors.
01:10 🚀 Vector search enhances large language models, offering knowledge beyond keywords, useful in various contexts like natural language processing and recommendations.
02:03 💡 Benefits of vector search include semantic understanding, scalability for large datasets, and flexibility across different data types.
03:11 🔗 Storing vectors with data in MongoDB simplifies architecture, avoiding data sync issues and ensuring consistency.
04:06 📈 MongoDB Atlas supports vector storage and search, scaling for demanding workloads with efficiency.
05:02 🔄 Setting up a MongoDB Atlas trigger and OpenAI API integration for embedding vectors in documents upon insertion.
06:38 🔑 Safely storing API keys in MongoDB Atlas using secrets for secure integration with external services.
08:56 📄 Functions triggered on document insertion/update generate embeddings using the OpenAI API and update MongoDB documents.
10:33 🧩 Indexing data with vector embeddings in MongoDB Atlas enables efficient querying for similar content.
11:15 📡 Using Node.js to query MongoDB Atlas with vector embeddings, transforming queries into embeddings for similarity search.
Made with HARPA AI
Hi, thanks for the video, very good content. I have a question: how can I specify a "prompt", or how can I specify limits on the answers? For example, I ask: "From your knowledge base, what topics could you answer questions about?" In my database I only have information about my company, but the program adds general topics (movies, books, music, etc.). Is the only way to limit the answers to explicitly specify the topics in the .md files, or should I write the "prompt" in the file? Thanks for your help.
Hello, I am getting the following error; can you please help me by sharing your thoughts?
OperationFailure: Unrecognized pipeline stage name: $vectorSearch, full error: {'ok': 0.0, 'errmsg': 'Unrecognized pipeline stage name: $vectorSearch', 'code': 40324, 'codeName': 'UnrecognizedCommand'}
Thanks in advance!
Can't you just fetch data from the database, stringify it, and pass it to the OpenAI Completions API? And let ChatGPT know about the data, what it is, etc.? You could also use function calls to generate said data as well. Embeddings are something I haven't invested time into yet, since what I have described above is working well for me.
That way you would need to make a request to the Completions API each time you want to query for something, which is more expensive than querying your database with just the user embedding. Also, if your data grows, you will find yourself sending not just more requests but larger ones, which will increase latency and costs again.
😍
I tried your first project, and it throws an error if I pass {"inputs": text}. The docs say we need to pass it like this: {"inputs": {"source_sentence": "", "sentences": ["That is a happy person"]}}, but then I'm only able to generate one-dimensional data, e.g. [0.111111145].
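The mismatch above is likely a pipeline difference on the Hugging Face Inference API side: a sentence-similarity endpoint expects the `source_sentence`/`sentences` shape and returns one similarity score per candidate (hence the single float), while a feature-extraction endpoint accepts `{"inputs": text}` and returns the embedding vector itself. A sketch of the two payload shapes as plain dicts, with illustrative text only:

```python
# Two request-payload shapes for the Hugging Face Inference API, as plain
# dicts. Which one an endpoint accepts depends on the pipeline it runs.

def feature_extraction_payload(text):
    # Feature-extraction pipeline: returns an embedding vector for the input.
    return {"inputs": text}

def sentence_similarity_payload(source, candidates):
    # Sentence-similarity pipeline: returns one score per candidate sentence,
    # NOT an embedding -- this explains the single-float result above.
    return {"inputs": {"source_sentence": source, "sentences": candidates}}

p1 = feature_extraction_payload("MongoDB is a document database")
p2 = sentence_similarity_payload("a happy person", ["That is a happy person"])
```

If you need embeddings, make sure the URL you call runs feature extraction for the model, rather than its default similarity task.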
For search alone, is the embedding method efficient? Can any expert enlighten me?
When I log the vectorSearch API, why does it always return [] even though the data in MongoDB is correct?
Why is it throwing an error in the generate_embedding function?
Can we create a new search index using code instead of the MongoDB UI? Using the UI is not practical when making a real-world project; it's fine for a fun project.
Just self-host your own MongoDB. You would have to change the DB URL in your code to something like "localhost:27017". You would do everything in code then.
How do you get the embedding_url?
to bypass the HuggingFace rate limit, could I just download the model, and do the embedding on my laptop?
Was this a good workaround? I'm facing the same issue, even though I have Pro.
I got it working locally, but the embeddings were slightly different after the sixth level of precision in the floating-point numbers.
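Small floating-point drift like that between a locally run model and the hosted API is normal, and the usual check is cosine similarity rather than exact equality. A pure-Python sketch with made-up example vectors:

```python
import math

# Compare two embeddings by cosine similarity instead of exact float equality.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors: "local" drifts from "hosted" past the 6th decimal place.
hosted = [0.1, 0.2, 0.3]
local = [0.1000001, 0.1999999, 0.3000002]
sim = cosine_similarity(hosted, local)
# sim is extremely close to 1.0, so the embeddings are effectively the same
```

If the similarity stays this close to 1.0, the local setup is a faithful substitute for the hosted endpoint for search purposes.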
The author did not provide a lot of details, e.g. how he got the response structure or the embedding URL.
I could make a function call for whatever: asking what a customer's daily sales are, how many customers they have, anything at all. I really don't know the true value of embeddings, and I hope I'm not being naive.
The github files are completely different from the tutorial, at least for the second project.
You're speed-running through the code, and while your project uses MongoDB Atlas Search as the vector store, you don't even briefly explain how integrations with other vector stores might work. Please explain in more detail next time.
4:36
Why do you have to ask for "imaginary characters" from space? Its a movie search. Aren't most characters in movies "imaginary"? Why couldn't you just ask for "aliens"?
Hi 👋 I'm new here
First