Learn RAG From Scratch - Python AI Tutorial from a LangChain Engineer
Learn how to implement RAG (Retrieval Augmented Generation) from scratch, straight from a LangChain software engineer. This Python course teaches you how to use RAG to combine your own custom data with the power of Large Language Models (LLMs).
💻 Code: github.com/langchain-ai/rag-f...
If you're completely new to LangChain and want to learn about some fundamentals, check out our guide for beginners: www.freecodecamp.org/news/beg...
✏️ Course created by Lance Martin, PhD.
Lance on X: / rlancemartin
⭐️ Course Contents ⭐️
⌨️ (0:00:00) Overview
⌨️ (0:05:53) Indexing
⌨️ (0:10:40) Retrieval
⌨️ (0:15:52) Generation
⌨️ (0:22:14) Query Translation (Multi-Query)
⌨️ (0:28:20) Query Translation (RAG Fusion)
⌨️ (0:33:57) Query Translation (Decomposition)
⌨️ (0:40:31) Query Translation (Step Back)
⌨️ (0:47:24) Query Translation (HyDE)
⌨️ (0:52:07) Routing
⌨️ (0:59:08) Query Construction
⌨️ (1:05:05) Indexing (Multi Representation)
⌨️ (1:11:39) Indexing (RAPTOR)
⌨️ (1:19:19) Indexing (ColBERT)
⌨️ (1:26:32) CRAG
⌨️ (1:44:09) Adaptive RAG
⌨️ (2:12:02) The future of RAG
🎉 Thanks to our Champion and Sponsor supporters:
👾 davthecoder
👾 jedi-or-sith
👾 南宮千影
👾 Agustín Kussrow
👾 Nattira Maneerat
👾 Heather Wcislo
👾 Serhiy Kalinets
👾 Justin Hual
👾 Otis Morgan
👾 Oscar Rahnama
--
Learn to code for free and get a developer job: www.freecodecamp.org
Read hundreds of articles on programming: freecodecamp.org/news
Lance is the man! Love his content
I rarely say that a tutorial is good - but this is an amazing tutorial, extremely underrated!!!
Please include more LangChain, LLM, and industry-level tutorials
I was waiting for this particular course. Thanks
Peace be upon you (assalamu alaikum), brother
This man is amazing!
This is great! Thank you so much!
Always a fan of a lance video
The complete happenstance of the phrase "do rag" sounding like "durag" coming from this video was awesome. Sorry, totally unrelated...but it made me chuckle.
VERY WELL EXPLAINED. THANK YOU
Lance, thank you for sharing your deep insights on RAG and taking the time to share this with the community. Just a question about Query Construction, at 1:04:00 into the video. For the question "videos that are focused on the topic of chat langchain that are published before 2024", shouldn't the result have been latest_publish_date: 2024-01-01 rather than earliest_publish_date: 2024-01-01? That would be more in line with the question "videos on chat langchain published in 2023", where the results were earliest_publish_date: 2023-01-01 and latest_publish_date: 2024-01-01. Thank you
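For what it's worth, the commenter's reading matches how a date-range filter is normally expressed: "published before 2024" should set only an upper bound on the publish date, while "published in 2023" bounds it on both sides. A hypothetical sketch of the two structured queries (the field names mirror the ones mentioned in the comment; they are illustrative, not the course's exact schema):

```python
# "videos on chat langchain published before 2024" -> upper bound only
before_2024 = {
    "content_search": "chat langchain",
    "latest_publish_date": "2024-01-01",  # strictly before this date
}

# "videos on chat langchain published in 2023" -> bounded on both sides
in_2023 = {
    "content_search": "chat langchain",
    "earliest_publish_date": "2023-01-01",
    "latest_publish_date": "2024-01-01",
}
```

Setting earliest_publish_date for a "before" question would invert the filter and return only newer videos.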
I watched this twice. Very good.
Thank you!!!
Awesome video! It helped a great deal in explaining the concept.
Thanks for the content !
This is great content. Speaking of that 95% of private data, I guess a lot of practitioners are finding it hard to convince business people to share their data with an LLM provider. And of course the concerns are very understandable. I guess people would feel more comfortable if a RAG application could clearly define a partition of data it can work on for the benefit of the tool, and a partition that is either obfuscated or simply never shared, not even by chance.
Maybe the solution would be running the model locally?
NVIDIA ChatRTX might just do the job
Thank you !
Thank you for the talk.
Awesome as always
Please let us know when the blog post on adaptive RAG will be published; Lance mentioned he would be uploading it in a day or so. I also wanted to ask everyone: which is better for building complex flows with LLMs, state machines or guardrails?
Great video! What software is used to create these nice diagrams ?
thank you
Great!!
I have a question. In the RAG-fusion part, in the fusion_rank function: why are you using the index (rank) to update your scores? Wouldn't it be better to use the variable "previous_score"? The rank variable is just an index that describes the order in which the chunks were read. By the way, thanks for the video, you're a lifesaver
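For context on why rank is used: reciprocal rank fusion deliberately scores a document by its position in each result list rather than by the raw retriever score, because scores from different queries or retrievers are not on a comparable scale, while ranks are. A minimal sketch in plain Python (function and variable names here are illustrative, not the course's exact code):

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    Each document earns 1 / (k + rank) from every list it appears in.
    The rank (position, starting at 0) is used instead of the raw
    similarity score because scores from different retrievers or
    reformulated queries are not directly comparable, while ranks are.
    """
    fused = {}
    for docs in result_lists:
        for rank, doc in enumerate(docs):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(fused, key=fused.get, reverse=True)

# "a" and "b" appear near the top of both lists, so they lead the fusion
ranking = reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"]])
```

The constant k (commonly 60) damps the advantage of the very top positions so that consistent mid-list appearances still count.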
Love the teaching style! at 9:00 you mention that you've walked through the code previously. Is there another video to go with this one or did I miss something?
Those were short videos that were combined into one long video. When Lance refers to a previous video, he doesn't mean a separate video.
I think this is the playlist the videos are taken from: kzhead.info/channel/PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x.html
@@shraeychikker694 Nice one - many thanks :)
great content
I recommend this vid to everyone.
Question: Is it possible to do RAG across different vector stores that use different embedding strategies?
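It is, as long as each query is embedded with the same model that built the store it is sent to; the catch is that raw similarity scores from different embedding spaces are not comparable, so results are usually merged by rank. A rough sketch under those assumptions (the toy retrievers stand in for real vector-store clients):

```python
def retrieve_across_stores(question, retrievers, top_k=5):
    """Query several vector stores, each embedding the question with
    its own model internally, then merge their results by rank
    (reciprocal rank fusion) rather than by raw score, since scores
    from different embedding spaces are not comparable."""
    fused = {}
    for retriever in retrievers:
        for rank, doc_id in enumerate(retriever(question)[:top_k]):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (60 + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Toy stand-ins for two stores built with different embedding models
store_a = lambda q: ["doc1", "doc2", "doc3"]
store_b = lambda q: ["doc2", "doc4"]
merged = retrieve_across_stores("what is RAG?", [store_a, store_b])
# "doc2" ranks first: it appears high in both stores' results
```

Each real retriever would wrap its own embedding model and index; only the ranked ID lists cross the boundary between stores.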
Thanks
Amazing.
What is the best way to manage the chunk size?
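There is no single best chunk size; a common starting point is a fixed-size split with overlap, then tuning the size against retrieval quality on your own data. A minimal character-based splitter sketch (the sizes are illustrative, not a recommendation from the course):

```python
def split_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping fixed-size chunks.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 1200 characters with a step of 450 -> chunks starting at 0, 450, 900
chunks = split_text("x" * 1200, chunk_size=500, overlap=50)
```

In practice, splitters that respect sentence or paragraph boundaries usually retrieve better than a raw character cut like this one.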
What's the name of the screen recorder used by Lance?
Is it possible to do RAG and combine my data with Hugging Face models?
GOLD
❤❤❤
How can I add conversational memory to it?
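One common approach is to keep the chat history and fold it into the prompt, or to first rewrite the latest question into a standalone one before retrieval. A minimal history-buffer sketch in plain Python (the class and prompt wording are made up for illustration, not an API from the course):

```python
class ChatMemory:
    """Keep (role, message) turns and render them into a prompt."""

    def __init__(self):
        self.turns = []

    def add(self, role, message):
        self.turns.append((role, message))

    def as_prompt(self, question):
        # Prepend the conversation so the model can resolve references
        # like "it" in the follow-up question.
        history = "\n".join(f"{role}: {msg}" for role, msg in self.turns)
        return (
            "Given the conversation so far:\n"
            f"{history}\n"
            f"Answer the follow-up question: {question}"
        )

memory = ChatMemory()
memory.add("user", "What is RAG?")
memory.add("assistant", "Retrieval augmented generation.")
prompt = memory.as_prompt("How do I index documents for it?")
```

For long conversations, the history buffer is typically summarized or truncated so it does not crowd out the retrieved context.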
Like first and then watch
Thanks for the excellent video! If your goal is to democratize gen AI for as diverse an audience as possible, I suggest you stop using OpenAI in these tutorials. In many parts of the world, having a credit card is not an option, and OpenAI quickly backs you into that corner. Use, promote, and support open-source alternatives instead. Thank you.
Llama 3 was trained on 15T tokens; the chart would look different if you'd released the video 3 days later :)
Udemy created 50 accounts to dislike this video
😂😂😂
I will create 50 accounts to like your comment 😂
LLM Agents plzzz... ❤
Amazing videos but how does this translate into careers or jobs? What positions are employers looking for? Would they even hire anyone without experience? How do you even get started? I'm aware this channel mostly focuses on the coding and hands-on experience but I wish there was an actual channel focused on employment. I'm pretty sure there are channels out there and if anyone has recommendations, I'll be grateful.
look on linkedin jobs title descriptions keywords if any with llm ai
Start working on some AI projects first, on your own, in your spare time. Show some results. Once you have a couple to show, getting a job should be easier. You don't actually need this LangChain thing. As for getting started: if you've already used GPT-4 etc., think about larger workflows that chain inputs and outputs in creative ways to solve problems. Also think about when you need to use embeddings for distance computation. You can use the LLM and embedding APIs directly or via an SDK, optionally with a local vector database. You don't need to get fancy.
@@vcool What results? What have you done?
And there was me thinking "how can it take over 2 hours to talk about applying RAG status to your project plans"
Hello, at 27:13 why is he using itemgetter to pass the question? What's the difference between doing that and setting a RunnablePassthrough() in there?
No difference, just that with RunnablePassthrough() you don't need a dictionary in the invoke
This is the way...
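To illustrate the distinction discussed above in plain Python (operator.itemgetter is from the standard library; the LangChain runnables themselves are not shown here): itemgetter pulls one key out of the input dict, while a passthrough forwards its input unchanged, which is why the passthrough variant can be invoked with the bare question instead of a dict.

```python
from operator import itemgetter

inputs = {"question": "What is RAG?", "context": "…"}

# itemgetter("question") extracts just that field from the input dict,
# so the chain must be invoked with a dict containing that key
get_question = itemgetter("question")
extracted = get_question(inputs)

# a passthrough simply returns whatever it receives, so the chain
# can be invoked with the raw question string directly
passthrough = lambda x: x
forwarded = passthrough("What is RAG?")
```

So the choice mostly affects the shape of the input you hand to invoke, not the result.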
Good job, but ' dict ' never again 😂
Does this video have everyone brainwashed? If you know basic programming, you don't need langchain at all. I don't like unnecessary abstractions.
Yeah but the building blocks are useful. Do you write your own sorting functions?
@vcool would you mind expanding on what the alternative is to langchain? Genuinely curious on learning, not attacking
This seems like it's going to pigeonhole me and tie my hands into a small dogmatic set of patterns, when what I need is broader freedom that I can accomplish without it.