"How to give GPT my business knowledge?" - Knowledge embedding 101
A step-by-step guide on how to create your own knowledge base embedding, from prepping knowledge data to retrieval-augmented generation
🔗 Links
- Follow me on twitter: / jasonzhou1993
- Join my AI email list: www.ai-jason.com/
- My discord: / discord
- Finetune LLM video: • "okay, but I want GPT ...
- No code alternative: relevanceai.com/
- Github repo: github.com/JayZeeDesign/Knowl...
⏱️ Timestamps
0:00 What is Knowledge embedding?
4:21 Core business use cases
5:52 Step 1 Prep knowledge data
6:25 Step 2 Create embedding
8:34 Step 3 Similarity search
9:55 Step 4 Retrieval-augmented generation (RAG)
12:23 Step 5 Deploy
14:49 No code alternatives
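For anyone who wants to map the timestamps above to code: here's a minimal, dependency-free sketch of Step 2 (create embedding) and Step 3 (similarity search). The video itself uses OpenAI embeddings via LangChain; the bag-of-words "embedding" below is just a stand-in so the example runs without an API key, and the sample sentences are made up.

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: a bag-of-words vector (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 2: create an embedding for every knowledge-base entry
knowledge = [
    "interested in learning more about pricing",
    "please stop sending me emails",
    "can we schedule a demo next week",
]
vectors = [embed(k) for k in knowledge]

# Step 3: similarity search - find the entry closest to a new query
query = embed("tell me more about your pricing")
best = max(range(len(knowledge)), key=lambda i: cosine(query, vectors[i]))
print(knowledge[best])  # the closest past entry
```

In a production setup the `embed` function would call an embedding model and the list comparison would be replaced by a vector store, but the retrieval logic is the same idea.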
👋🏻 About Me
My name is Jason Zhou, a product designer who shares interesting AI experiments & products. Email me if you need help building AI apps! ask@ai-jason.com
#gpt #autogpt #ai #artificialintelligence #tutorial #stepbystep #openai #llm #langchain #largelanguagemodels #largelanguagemodel #bestaiagent #chatgpt #embedding #openaiembeddings #wordembeddings
A few people asked "why only vectorise one column instead of the whole csv?" Adding a bit more explanation here: vectorising is mainly for search, so the column you vectorise can be thought of as the "index" or "id" of the dataset, while the data returned is still the full question/answer pair. The reasons I vectorise only one column: 1. It saves cost - we vectorise using an embedding model, which means every token we vectorise generates cost. 2. It increases accuracy - in this case I only want to search past customer emails, not sales responses; searching both columns might return the wrong answer, e.g. searching for "interested in learning more" could return the pair "client: stop sending me emails; sales: understood, let us know if you are interested in learning more in future!" Hope this helps!
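To make the pinned explanation concrete, here's a rough sketch of the one-column idea. A toy bag-of-words embedding stands in for OpenAI's embedding model, and the column names and rows are made up for illustration, not taken from the actual repo:

```python
import csv, io, math
from collections import Counter

# Hypothetical two-column dataset in the spirit of the video's example
CSV_DATA = """customer_email,sales_response
"Stop sending me emails","Understood, let us know if you are interested in learning more in future!"
"I'm interested in learning more about your product","Great! Here is a link to book a demo."
"""

def embed(text):
    # Toy bag-of-words vector; a real setup calls an embedding model here,
    # and every embedded token costs money - one reason to embed only one column.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
# Vectorise ONLY the customer_email column; the full row rides along as metadata.
index = [(embed(r["customer_email"]), r) for r in rows]

def search(query):
    qv = embed(query)
    _, row = max(index, key=lambda pair: cosine(qv, pair[0]))
    return row  # the whole Q&A pair comes back, even though only Q was embedded

match = search("interested in learning more")
print(match["sales_response"])
```

Note that because only the customer_email column was embedded, the query "interested in learning more" matches the genuinely interested customer, not the "stop sending me emails" row whose *sales response* happens to contain those words.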
It seems embedding enriches your search query. How about answers? In your example, do you 'train' the LLM with the Q&A pairs?
@ozfish17 yep, it returns the full Q&A pair!
Jason, brilliant step-by-step guide on knowledge embedding! Your breakdown of the process was super insightful. I'm curious about how AI Agents in Langchain perform, especially in long-running scenarios. Hope you'll consider diving into that topic in the future. Keep up the stellar content!
So if you want the output response email to be generated by the LLM based on a specific tone, why wouldn't the 2nd column be a part of vectorizing the dataset?
Hey Jason! What would be the best way to do this with financial PDFs? I want to ask questions and get accurate insights from the large documents. Would using embeddings be best or the fine tuning from your other video? Thanks! @AIJasonZ
Small channels like this are the ones that hold the most value.
In 2 minutes and 54 seconds you explained what is vectoring better than any other video online. You made it easy. Thank you!
I have the same idea in mind. I have tons of product documents that I wish I could just ask an agent something about it instead of scrolling hundreds of word pages. I really appreciate your video man.
man you have a really rare ability to explain super complicated things in a very simple way and organize the information so it's even more clear. Bravo and thank you
I really love your style, first explaining the theory and then demonstrating it by an example
Thank you very much! Nobody explained Embedding and Vectorization like this! Thank you again!
Thank for sharing your knowledge with us, your channel is literally a gold mine of information. Keep doing what you doing, Jason!
Absolutely great video, I loved that you took the time to explain everything in theory and then went on to give a detailed walkthrough of the code. Please keep posting such videos !
Keep it up man probably one of the only channels with incredible value
Man... you have a serious gift for teaching! This is super helpful. Thanks.
one of the best channels out there, really appreciate your content!
I've watched many videos on this topic and I can say that your simple examples have covered most of what I need to know. Thanks Jason.
You have helped the community so much with this valuable content. Keep it up my friend, i'll be watching!
This is super duper helpful man ! Great work and thanks !
This is pure gold. Thank you so much!
Great job, and I appreciate you sharing your knowledge. Looking forward to open LLM content.
Thanks for your work Jason. You're one of the best, and I follow tons.
yo bro.. i really like when you explain all the step-by-step and all relevant tools out there! thank you!
Bro... you are incredibly smart and are a great teacher. This is going to provide 10x value to my users
the best video about embedding ive seen; thank you!
you're the man Jason, great content!
this is virtual gold, mad props to Jason for clearly describing complex topics and even showing practical application, saved me hours of research lol. It'd be great if you could touch on the various AI services out there that offer embedding, and how they compare in performance, pros/cons etc.
I'm blown away. Thank you!!
have been waiting for this video, Thank you!
Love your content good sir, staying tuned for all the next videos, you are the leader
This was super helpful. Thank you, Jason!
This is GOLD !! Thank You !
this is just awesome, now people who didn't have any idea not only have an idea but also a reference
Really high quality content, thank you Jason!
Outstanding. Your ability to explain complicated topics is incredible. Thank you.
I really like your video. You know how to hold people's attention. Please make more videos like this 😊
Another great video! Thanks Jason, keep up the excellent work
Absolutely outstanding. I liked, subscribed and shared. Best explanation of knowledge embedding I have come across!!!!
The video is very inspiring and straightforward, a valuable lesson
Cannot be more valuable than this. Loved it 🎉
thank you for this. As a dev with no AI experience, you really make it easy to understand
Great video! Very simple to understand.
More excellent content, thanks mate
Great tutorial! You covered a LOT of ground quickly, but thoroughly. Haha. Nice work.
Exactly what I want, thanks Jason.
Anyone looking to make a great startup in AI, you have to jump on this!
Working on it!
Working on it now
Very well done! Straightforward to follow!
Great content and love the intros
Fantastic content, thank you.
Amazing explanations, thank you!
Thank you! This was incredibly helpful
Thank you for sharing!
These are gems
This is hilariously good. Thanks for this wonderful resource!
Great job. You deserve more subscribers.
Thanks Jason this was a great tutorial! :)
Love your content, very easy to digest and understand. The only recommendation I would give is to use other embeddings and LLM models besides OpenAI. Mid/large-sized companies cannot use OpenAI in their environment because of legal issues around OpenAI's data retention policy. A lot of companies want to develop their own implementations, so including other models like Llama 2, Vicuna, etc. would let you reach a bigger audience.
yea great points, thanks for the recommendation! totally get that companies don't want to send any data to OpenAI LOL
+1 for using more open models. I love your content and the approach you take to your videos. But even though I'm not a big company I just value using systems that are open instead of closed.
You are the man Jason!
I am really surprised that these tools can help so many businesses with low-cost, autonomous responses, specifically for customer service! Great video!
Big thank you ❤
Would love to see a similar demo of knowledge search with open-source models instead of OpenAI models
Came here after the fine tune model video - looking for exactly this. Thanks!
Very nice video.
Great explanation
Great video as always, Jason. Thank you for making one of the few channels with genuine AI tools video that actually demonstrate implementation and applications rather than hyping up the content through sweet talk then simply dropping an affiliate link.
This! I feel so grateful that the YouTube algorithm blessed me with Jason's channel. Beautiful explanations and clear steps.
Yeah, he's one of the real ones. I've asked him if he could add a GitHub repo for the code. It's the only thing this channel lacks imo.
@senxo.visuals same feelings here
Great video, subscribed.
Amazing video Jason! Pretty useful information. I would love to see a video about GPT4All as a personal assistance for everyday life.
Awesome explanation, thanks.
So helpful! I started using relevance ai because of your videos & just as a no-code developer been able to build some sick ass LLM chains with Zapier Custom HTTP Requests. I have my development team even using it & it’s definitely speeding up our velocity to iterate🙌🔥
thats great to hear! 🤘
hey, can you share the sales_response.csv file as well? It's not in the git repo
Excellent.
this dude is on FIRE 🔥
this is the best video on your channel.
Amazing!!
Excellent vid thank you !
This was 🔥🔥🔥. If I hadn't already subscribed, I would have. Excellent use case! Looking to implement this using Flowise.
Great video
Thanks a lot for the info!! Greetings from Mexico 🤙
Explained very clearly
This is exactly what I was looking for, I have a question Jason: How can we secure our company personal data?
you make really useful videos man
Jason you are awesome!
Once the intermediate vector-search results came out, I understood the whole flow at once. Really great. So you take the vector search results, feed them to the LLM as the prompt instruction, and then let the LLM give the answer.
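The flow that comment describes (vector search result → prompt instruction → LLM answer) looks roughly like this sketch; the template wording and field names are my own illustration, not the exact prompt from the video:

```python
# The retrieved Q&A pair would come from the similarity search step.
retrieved = {
    "customer_email": "I'm interested in learning more about your product",
    "sales_response": "Great! Here is a link to book a demo.",
}
new_email = "Hi, could you tell me more about what you offer?"

# Assemble the retrieved pair into the prompt sent to the LLM.
prompt = f"""You are a sales rep. Here is a past exchange for reference:
Customer: {retrieved['customer_email']}
Sales: {retrieved['sales_response']}

Write a reply in the same tone to this new email:
{new_email}"""

# response = llm(prompt)  # e.g. a chat-completion call would go here
print(prompt)
```

The LLM never sees the whole knowledge base, only the few retrieved pairs stuffed into the prompt, which is what makes the retrieval-augmented approach cheap compared to fine-tuning.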
great video! is that enough info to go out and start building a customer response ai for other people or businesses?
Thank you for the super video. I'm learning LLMs and am quite confused between knowledge base embedding, which was mentioned here, vs prompt tuning. Could you tell me the difference?
hey, thanks for the very detailed tutorial. Just a question how do you manage to set higher weights for the most recent messages?
thank you so much Jason, is there a way to tell the model not to answer anything that's not from the csv file?
Dude. You. Are. Awesome!
Thanks!
Amazing! Thanks for sharing
Thanks for the no-code alternatives
Amazing video, thanks so much for sharing! I haven't really understood LangChain until now. Now, let's assume we want to update the vector database because we have additional rows in our CSV or data file. How can we do this (or do you have a video explaining this?)?
Bro you are awesome.
I wonder why those AI channels, like yours, are not exploding. This is so important for the future what you all are doing. Only a few people get this!
Do you only need to have one use case for the data or can you just upload a lot of data that could be used in different ways? For example, your use case was for responding to customer emails from what I understand, but what if you wanted to upload all of your organization's data and then ask it various questions or use it in various ways?
It would be amazing if you could make a video creating a knowledge base using long PDFs as a source, and use GPT as well to make an expert assistant on a topic.
Yes, like if the data source is a book and we want to search its contents with relevant queries like "I remember this part of the book saying something like this… where was it?" or "the book had this story… where was it and what were the main ideas?"
top tier content!!!!
Amazing content my guy Amazing
Thanks ! when will this be on Github ?
Is there a way to carry the context of previous messages to the next one, so a follow-up message can be answered?
Question: can this be done with other LLMs, like Falcon for example, instead of using an OpenAI API key? [kinda new to AI development and wanna try things out before paying for stuff]
Great video, thank you
Hey @AIJasonZ, great video! I'm curious, is there a method to retrieve the confidence level from the embeddings? Since it's possible that not all the information will be present in the embeddings, it would be helpful to have a way to handle such scenarios. For instance, if certain information is missing, perhaps the system could respond with "response not found" or trigger another action like calling an API.
Thank you so much for your video. It's very helpful. At the same time, is there a way to run this with Llama 2 or other open-source LLMs? Edit: if security is my main concern, how do I go about embedding?