RAG + Langchain Python Project: Easy AI/Chat For Your Docs

May 12, 2024
94,628 views

Learn how to build a "retrieval augmented generation" (RAG) app with Langchain and OpenAI in Python.
You can use this to create chat-bots for your documents, books or files. You can also use it to build rich, interactive AI applications that use your data as a source.
👉 Links
🔗 Code: github.com/pixegami/langchain...
📄 (Sample Data) AWS Docs: github.com/awsdocs/aws-lambda...
📄 (Sample Data) Alice in Wonderland: www.gutenberg.org/ebooks/11
📚 Chapters
00:00 What is RAG?
01:36 Preparing the Data
05:05 Creating Chroma Database
06:36 What are Vector Embeddings?
09:38 Querying for Relevant Data
12:47 Crafting a Great Response
16:18 Wrapping Up
#pixegami #python

Comments
  • Easily one of the best explained walk-throughs of LangChain RAG I’ve watched. Keep up the great content!

    @colegoddin9034 · 4 months ago
    • Thanks! Glad you enjoyed it :)

      @pixegami · 3 months ago
  • I never comment on videos, but this was such an in-depth and easy-to-understand walkthrough! Keep it up!

    @elijahparis3719 · 5 months ago
    • Thank you :) I appreciate you commenting, and I'm glad you enjoyed it. Please go build something cool!

      @pixegami · 4 months ago
  • Thanks so much for this. Your teaching style is incredible and the subject is well explained.

    @wtcbd01 · a month ago
  • Absolutely epic video. I was able to follow along with no problems by watching the video and following the code. Really tremendous job, thank you so much! Definitely subscribing!

    @MattSimmonsSysAdmin · 4 months ago
    • Thank you for your comment! I'm really glad to hear it was easy to follow - well done! Hope you build some cool stuff with it :)

      @pixegami · 4 months ago
  • Thank you, that was a great walkthrough: very easy to understand, with a great pace. Please make a video on LangGraph as well.

    @jim93m · 2 months ago
    • Thank you! Glad you enjoyed it. Thanks for the LangGraph suggestion. I hadn't noticed that feature before; tech seems to move fast in 2024 :)

      @pixegami · 2 months ago
  • Your channel is one of the best of KZhead. Thank you. Now I'll go watch the video.

    @gustavojuantorena · 5 months ago
  • Great video! This was my first exposure to ChromaDB (worked flawlessly on a fairly large corpus of material). Looking forward to experimenting with other language models as well. This is a great stepping stone towards knowledge based expansions for LLMs. Nice work!

    @michaeldimattia9015 · 5 months ago
    • Really glad to hear you got it to work :) Thanks for sharing your experience with it as well - that's the whole reason I make these videos!

      @pixegami · 5 months ago
  • Fantastic, clear, concise and to the point. Thanks so much for your efforts to share your knowledge with others.

    @StringOfMusic · 4 days ago
    • Thank you, I'm glad you enjoyed it!

      @pixegami · a day ago
  • This is what I was looking for! Thanks for the simplest explanation. There are some adjustments to the codebase from recent updates, but it doesn't matter. Keep it up!

    @insan2080 · 4 days ago
    • You're welcome, glad it helped! I try to keep the code accurate, but these libraries update and change really fast. I think I'll need to lock/freeze package versions in future videos so the code doesn't drift.

      @pixegami · a day ago
  • Thank you for this. Looking forward to tutorials on using Assistants API.

    @MrValVet · 5 months ago
    • You're welcome! And great idea for a new video :)

      @pixegami · 5 months ago
  • Clean, structured, easy-to-follow tutorial. Thank you for that!

    @lalalala99661 · 8 days ago
    • Thank you! Glad you enjoyed it!

      @pixegami · a day ago
  • This is the best tutorial I have ever seen on this topic, thank you so much. Keep up the good work. Immediately subscribed.

    @narendaPS · a month ago
    • Glad you enjoyed it. Thanks for subscribing!

      @pixegami · 27 days ago
  • Straight to the point. Awesome!

    @gustavstressemann7817 · 2 months ago
    • Thanks, I appreciate it!

      @pixegami · 2 months ago
  • Perfectly explained👌🏼

    @theneumann7 · a month ago
  • Awesome walkthrough, thanks for making this 🎉

    @kwongster · 2 months ago
    • Thank you! Glad you liked it.

      @pixegami · 2 months ago
  • Great walkthrough; now all that's needed is a revision to cope with the changes to the Langchain namespaces.

    @geoffhirst5338 · a month ago
    • What changes have been made? I can't get this to work :-(

      @niklasvilnersson24 · 23 days ago
  • I too am quite impressed with your videos (this is my 2nd one). I have now subscribed and I bet you'll be growing fast.

    @erikjohnson9112 · 5 months ago
    • Thank you! 🤩

      @pixegami · 5 months ago
  • Epic. Thank you for sharing this.

    @voulieav · 3 months ago
    • Thank you!

      @pixegami · 2 months ago
  • Very helpful video! Keep going, you are the best! Thank you very much. I am looking forward to seeing a video about a virtual assistant performing actions by communicating with other applications using an API.

    @israeabdelbar8994 · 3 months ago
    • Glad you enjoyed it! Thanks for the suggestion :)

      @pixegami · 2 months ago
    • You're welcome @pixegami

      @israeabdelbar8994 · 2 months ago
  • This was excellent: easy to follow, has code, and very useful! Thank you.

    @jasonlucas3772 · a month ago
    • Thank you, I really appreciate it!

      @pixegami · 27 days ago
  • Simply explained, with an engaging tone. I would also look for a use case where the source of vector data is a combination of files (PDF, DOCX, Excel, etc.) along with some database (an RDBMS or a file-based database).

    @rikhavthakkar2015 · 2 months ago
    • Thanks! That's a good idea too. You can probably achieve that by detecting what type of file you are working with, and then using a different parser (document loader) for that type. Langchain should have custom document loaders for all the most common file types.

      @pixegami · a month ago
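The reply above describes dispatching on file type to pick a document loader. A minimal sketch of that dispatch in plain Python (the loader class names are examples of what langchain_community offers; check the Langchain docs for the exact classes and import paths in your version):

```python
from pathlib import Path

# Map each file extension to the name of a Langchain document loader.
# These names are illustrative examples from langchain_community; verify
# them against the documentation for your installed version.
LOADER_BY_EXTENSION = {
    ".md": "UnstructuredMarkdownLoader",
    ".txt": "TextLoader",
    ".pdf": "PyPDFLoader",
    ".docx": "Docx2txtLoader",
    ".csv": "CSVLoader",
}

def pick_loader(path: str) -> str:
    """Return the loader name to use for a file, based on its extension."""
    suffix = Path(path).suffix.lower()
    try:
        return LOADER_BY_EXTENSION[suffix]
    except KeyError:
        raise ValueError(f"No document loader registered for '{suffix}' files")
```

In a real ingest script, the same mapping would hold the loader classes themselves, so you can instantiate whichever one matches the file.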
  • This was so informative and well presented. Exactly what I was looking for. Thank you!

    @mao73a · 10 days ago
    • You're welcome, glad you liked it!

      @pixegami · a day ago
  • Really good. Thanks very much, sir. Articulated perfectly!

    @RZOLTANM · a month ago
    • Thank you! Glad you enjoyed it :)

      @pixegami · 27 days ago
  • Thanks a lot for this tutorial! Very well explained.

    @ahmedamamou7221 · 29 days ago
    • Glad it was helpful!

      @pixegami · 27 days ago
  • Finally, a good Langchain video for understanding this better. Do you have a video in mind using a local LLM with Ollama and local embeddings to port the code?

    @basicvisual7137 · 2 months ago
  • Thank you for sharing, this was the info I was looking for.

    @thatoshebe5505 · 2 months ago
    • Glad it was helpful!

      @pixegami · 2 months ago
  • Very good, and looking forward to the next one 🎉

    @MartinRodriguez-sx2tf · 19 days ago
    • Thank you!

      @pixegami · 18 days ago
  • Great explanation. Perhaps one criticism would be using OpenAI's embedding library: I would rather not be locked into their ecosystem, and I believe that free alternatives exist that are perfectly good! Would have loved a quick overview there.

    @PoGGiE06 · a month ago
    • Thanks for the feedback. I generally use OpenAI because I thought it was the easiest API for people to get started with. But I've received similar feedback where people just want to use open-source (or their own) LLM engines. Feedback received, thank you :) Luckily, with something like Langchain, swapping out the LLM engine (e.g. the embedding functionality) is usually just a few lines of code.

      @pixegami · a month ago
    • @pixegami It's a pleasure :). Yes, everyone seems to be using OpenAI by default, because everyone is using ChatGPT. But there are lots of good reasons why one might not wish to get tied to OpenAI, Anthropic, or any other cloud-based provider besides the mounting costs if one is developing applications using LLMs: e.g. data privacy/integrity, simplicity, reproducibility (ChatGPT is always changing, and that is out of your control), in addition to a general suspicion of non-open-source frameworks whose primary focus is often (usually?) on wealth extraction, not solution provision. There is not enough good material out there on how to create a basic RAG with vector storage using a local LLM, something that is very practical with smaller models, e.g. Mistral, dolphincoder, Mixtral 8x7B, etc., at least for putting together an MVP. Re: avoiding OpenAI: I've managed to use embed_model = OllamaEmbeddings(model="nomic-embed-text"). I still get occasional OpenAI-related errors, but I gather that Ollama has support for mimicking OpenAI now, including a 'fake' OpenAI key, so I am looking into that as a fix: ollama.com/blog/windows-preview I also gather that with llama-cpp, one can specify model temperature and other configuration options, whereas with Ollama, one is stuck with the configuration used in the modelfile when the Ollama-compatible model is made (if that is the correct terminology). So I may have to investigate that. I'm currently using llama-index because I am focused on RAG and don't need the flexibility of Langchain. Good tutorial in the llama-index docs: docs.llamaindex.ai/en/stable/examples/usecases/10k_sub_question/ I'm also a bit sceptical that Langchain isn't another attempt to 'lock you in' to an ecosystem that can then be monetised, e.g. minimaxir.com/2023/07/langchain-problem/. I am still learning, so I don't have a real opinion yet. Very exciting stuff! Kind regards.

      @PoGGiE06 · a month ago
  • Well illustrated! Thanks

    @chrisogonas · 26 days ago
    • Thank you!

      @pixegami · 21 days ago
  • Huge class!!

    @lucasboscatti3584 · 3 months ago
  • Perfect thank you!

    @aiden9990 · 3 months ago
    • Glad it helped!

      @pixegami · 3 months ago
  • VERY well explained. Thank you so much for releasing this level of education on YouTube!!

    @pojomcbooty · a month ago
    • Glad you enjoyed it!

      @pixegami · a month ago
  • Super helpful, thank you!

    @elidumper52 · 2 months ago
    • Glad it was helpful!

      @pixegami · a month ago
  • Thanks so much! This is super helpful for better understanding RAG. The only thing is, I'm still not sure how to run the program I cloned from your GitHub repository via the Windows terminal. I'll try on my own, but if you could provide any guidance or sources (links, anything like that), it would be much appreciated.

    @seankim6080 · a month ago
  • This is an awesome video, thank you!!! I am curious how to leverage these technologies with structured data, like business data that's stored in tables. I'd appreciate any videos about that.

    @mohanraman · a month ago
  • thanks dude!

    @bcippitelli · 5 months ago
  • Very well done! Thanks

    @williammariasoosai1153 · 3 months ago
    • Glad you liked it!

      @pixegami · 2 months ago
  • Useful, Nice, Thank You 🤩🤩🤩

    @shapovalentine · 3 months ago
    • Glad to hear it was useful!

      @pixegami · 2 months ago
  • Hi, by far the best video on Langchain + Chroma! :D Quick question: how would you update the Chroma database when you want to feed it new documents (while avoiding duplication of documents)?

    @quengelbeard · 2 months ago
    • Glad you liked it! Thank you. If you want to add to (or modify) the ChromaDB data, you should be able to do that after you've loaded up the DB: docs.trychroma.com/usage-guide#adding-data-to-a-collection

      @pixegami · 2 months ago
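The idea in the reply above - adding documents without duplicating them - comes down to keying every chunk by a stable ID and only inserting unseen IDs. A toy sketch of that logic, with a plain dict standing in for the collection (Chroma's own add/upsert calls accept explicit `ids` for the same purpose; see its usage guide):

```python
def add_new_chunks(store: dict, chunks: dict) -> list:
    """store: id -> text already in the DB; chunks: id -> text to add.
    Inserts only chunks whose ID is not present yet, and returns the
    list of IDs that were actually inserted."""
    new_ids = [chunk_id for chunk_id in chunks if chunk_id not in store]
    for chunk_id in new_ids:
        store[chunk_id] = chunks[chunk_id]
    return new_ids
```

The key design choice is that the ID must be deterministic (derived from the source document, not random), so re-running the ingest script is a no-op for unchanged content.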
  • This is Amazing 🙌

    @kewalkkarki6284 · 4 months ago
    • Thank you! Glad you liked it :)

      @pixegami · 4 months ago
  • That's the best and most reliable content about LangChain I've ever seen, and it only took 16 minutes.

    @FrancisRodrigues · a month ago
    • Glad you enjoyed it! I try to keep my content short and useful because I know everyone is busy these days :)

      @pixegami · a month ago
    • @pixegami Hey, great work! Can we have an updated version with the Langchain imports? It's throwing all kinds of errors because the imports have changed.

      @Shwapx · a month ago
  • you got a new subscriber. nice work

    @serafeiml1041 · 23 days ago
    • Thank you! Welcome :)

      @pixegami · 21 days ago
  • Hi, thanks for sharing such great knowledge. I have a use case: 1. Can we perform some action (call an API) as a response? 2. How can we use Mistral and open-source embeddings for this purpose?

    @AdandKidda · 2 months ago
  • Helpful if I want to do analysis on properly-organized documents.

    @litttlemooncream5049 · 2 months ago
    • Yup! I think it could be useful for searching through unorganised documents too.

      @pixegami · 2 months ago
  • Amazing video, thank you.

    @chandaman95 · 2 months ago
    • Thank you!

      @pixegami · a month ago
  • Great tutorial, very clear

    @jianganghao1857 · 5 days ago
    • Glad it was helpful!

      @pixegami · a day ago
  • Hello, thank you so much for this video. I have a question about summarization-style queries over LLM documents: for example, if the vector database has thousands of documents with a date property, can I ask the model how many documents I received in the last week?

    @user-fj4ic9sq8e · a month ago
  • Nice, how pretty that is.

    @tinghaowang-ei7kv · 23 days ago
  • This is a very simple and useful video for me 🤟🤟🤟

    @pampaniyavijay007 · 8 days ago
    • Thank you! I'm glad to hear that.

      @pixegami · a day ago
  • Best explanation of RAG.

    @shikharsaxena9989 · 11 days ago
    • Thank you!

      @pixegami · a day ago
  • You gained a new subscriber. Thank you, amazing content! Only one question: what about the cost associated with this software? How much does it consume per request?

    @nachoeigu · 26 days ago
    • Thank you, welcome! Pricing is based on which AI model you use. In this video, we use OpenAI, so check the pricing here: openai.com/pricing 1 token ~= 1 word. So to embed a document with 10,000 words (tokens) with "text-embedding-3-large" ($0.13 per 1M tokens), it's about $0.0013. Then apply the same calculation to the prompt/response for "gpt-4" or whichever model you use for the chat.

      @pixegami · 21 days ago
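The arithmetic in the reply above, as a tiny helper (the $0.13-per-1M-tokens figure is just the example rate quoted in the reply; always check the provider's current pricing page):

```python
def embedding_cost(token_count: int, usd_per_million_tokens: float) -> float:
    """Rough cost estimate: (tokens / 1,000,000) * price per 1M tokens."""
    return token_count / 1_000_000 * usd_per_million_tokens

# 10,000 tokens at $0.13 per 1M tokens:
cost = embedding_cost(10_000, 0.13)  # about $0.0013
```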
  • Amazing video, directly subscribed to your channel ;-) Can you also provide an example using your own LLM instead of OpenAI?

    @sunnysk43 · 5 months ago
    • Yup! Great question. I'll have to work on that, but in the meantime here's a page with all the supported LLM integrations: python.langchain.com/docs/integrations/llms/

      @pixegami · 5 months ago
  • Thanks for the video, this looks great. But when I tried to implement it, it seems the Langchain packages needed are no longer available. Has anyone had any luck getting this to work? Thanks.

    @SantiYounger · 2 months ago
  • First, thank you very much. Could you also show how to apply memory of various kinds?

    @user-iz7wi7rp6l · 5 months ago
    • Thanks! I haven't looked at how to use the Langchain memory feature yet, so I'll have to work on that first :)

      @pixegami · 4 months ago
    • @pixegami OK, I have implemented memory and other features as well, and got it working on Windows after some monster errors. Thanks once again for the clear working code (used in production); hope to see more in the future.

      @user-iz7wi7rp6l · 4 months ago
  • Very clear video and tutorial! Good job! Just one question: is it possible to use an open-source model rather than OpenAI?

    @theobelen-halimi2862 · 3 months ago
    • Yes! Check out this video on how to use different models other than OpenAI: kzhead.info/sun/e9yImMmpmWiHoIk/bejne.html And here is the official documentation on how to use/implement different LLMs (including your own open-source one): python.langchain.com/docs/modules/model_io/llms/

      @pixegami · 2 months ago
  • Great tutorial, thanks! My only feedback is that any LLM already knows everything about Alice in Wonderland.

    @corbin0dallas · 14 days ago
    • You can create custom apps for businesses using their own documents = a huge business opportunity, if it really works.

      @SongforTin · 13 days ago
    • Yeah, that's a really good point. What I really needed was a data source that was easy to understand but would not appear in the base knowledge of any LLM (I've learnt that now for my future videos).

      @pixegami · a day ago
  • This is a really good video, thank you so much! Out of curiosity, why do you use iterm2 as a terminal and how did you set it up to look that cool? 😍

    @frederikklein1806 · 3 months ago
    • I use iTerm2 for videos because it looks and feels familiar for my viewers. When I work on my own, I use Warp (my terminal setup and theme are explained here: kzhead.info/sun/qMuwnayXr6yhdnk/bejne.html). And if you're using Ubuntu, I have a terminal setup video for that too: kzhead.info/sun/iNqSZcV-f4CleK8/bejne.html

      @pixegami · 3 months ago
  • Very great!! Thank you!

    @user-md4pp8nv7u · a month ago
    • Glad you liked it!

      @pixegami · 27 days ago
  • @pixegami What are your suggestions on cleaning company docs before chunking? Some of the challenges are how to handle the index pages in multiple PDFs, as well as the headers and footers. You should definitely make a video on cleaning a PDF before chunking; it's much needed.

    @JJaitley · 2 months ago
    • That's a tactical question that will vary from doc to doc. It's a great question, and a great use case for creative problem solving. Thanks for the suggestion and video idea.

      @pixegami · 2 months ago
  • Thanks for sharing the examples with the OpenAI embedding model. I'm trying to practice using HuggingFaceEmbeddings because it's free, but I wanted to check the evaluation metrics, like the apple-and-orange example you showed. Do you know if that exists, by any chance?

    @cindywu3265 · 2 months ago
    • Yup, you should be able to override the evaluator (or extend your own) to use whichever embedding system you want: python.langchain.com/docs/guides/evaluation/comparison/custom But at the end of the day, if you can already get the embeddings, then evaluation is usually just a cosine-similarity distance between the two, so it's not too complex to calculate it yourself if you need to.

      @pixegami · 2 months ago
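As the reply above notes, once you have two embeddings, comparing them is mostly cosine similarity. A self-contained version with no library dependencies:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors.
    1.0 = same direction, 0.0 = orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

In practice you would pass in the vectors returned by whichever embedding function you chose (OpenAI, HuggingFace, etc.); the comparison itself is model-agnostic.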
  • Question: once a vector store is loaded, how can we output a dataset from the store to be used as a fine-tuning object?

    @xspydazx · 10 days ago
  • Thank you SO MUCH! Exactly what I was looking for. Your presentation was easy to understand and very complete. 5 STARS! Not to be greedy, but I'd love to see this running 100% locally.

    @jimg8296 · a month ago
    • Glad it was helpful! Running local LLM apps is something I get asked about quite a lot, so I do actually plan to make a video about it quite soon.

      @pixegami · a month ago
    • @pixegami Yes please!

      @jessicabull3918 · a month ago
  • Great level of knowledge and detail. Please, where is your OpenAI key stored?

    @Chisanloius · 10 days ago
    • Thank you! I normally just store the OpenAI key in the environment variable `OPENAI_API_KEY`. See here for storage and safety tips: help.openai.com/en/articles/5112595-best-practices-for-api-key-safety

      @pixegami · a day ago
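A small sketch of reading the key from the environment instead of hard-coding it in source (the helper name here is illustrative; the OpenAI client libraries typically pick up `OPENAI_API_KEY` from the environment on their own):

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if it's unset
    so a missing key doesn't surface as a confusing auth error later."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable first")
    return key
```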
  • Would love to hear your thoughts on how to use evaluation to keep LLM output in check. Can we set things up so that we have an evaluation framework?

    @yangsong8812 · 2 months ago
    • There's currently a lot of different research and tooling on how to evaluate the output - I don't think anyone's figured out the standard yet. But stuff like this is what you'd probably want to look at: cloud.google.com/vertex-ai/generative-ai/docs/models/evaluate-models

      @pixegami · 2 months ago
  • Excellent coding! Working wonderfully! Much appreciated. One question, please: what's different if I change from .md to .pdf?

    @user-wi8ne4qb6u · 4 months ago
    • Thanks, glad you enjoyed it. It should still work fine :) You might just need to use a different "Document Loader" from Langchain: python.langchain.com/docs/modules/data_connection/document_loaders/pdf

      @pixegami · 4 months ago
  • Thank you for this very instructive video. I am looking at embedding some research documents from sources such as PubMed or Google scholar. Is there a way for the embedding to use website data instead of locally stored text files?

    @vlad910 · 4 months ago
    • Yes, you can load basically any type of text data if you use the appropriate document loader: python.langchain.com/docs/modules/data_connection/document_loaders/ Text files are an easy example, but there are examples of Wikipedia loaders in there too (python.langchain.com/docs/integrations/document_loaders/). If you don't find what you are looking for, you can implement your own document loader and have it get data from anywhere you want.

      @pixegami · 4 months ago
    • @pixegami Exactly the question and answer I was looking for, thanks

      @jessicabull3918 · a month ago
  • Great content! Thanks for sharing. Can you suggest a chat GUI to connect to it?

    @mlavinb · 3 months ago
    • If you want a simple, Python-based one, try Streamlit (streamlit.io/). I also have a video about it here: kzhead.info/sun/d5R9ZLSZaWSfemg/bejne.html

      @pixegami · 3 months ago
  • Thank you for a great video. What if I have already done the embedding, and in the future I have some updates to the data?

    @hoangngbot · a month ago
    • Thanks! I'm working on a video to explain techniques like that. But in a nutshell, you'll need to attach an ID to each document you add to the DB (derived deterministically from your page metadata) and use that to update entries that change (or get added): docs.trychroma.com/usage-guide#updating-data-in-a-collection

      @pixegami · 27 days ago
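The deterministic-ID idea in the reply above can be sketched like this (which fields to hash is up to you; source path, page, and chunk index are the metadata the reply suggests deriving from):

```python
import hashlib

def chunk_id(source: str, page: int, chunk_index: int) -> str:
    """Derive a stable ID from page metadata, so re-running the ingest
    updates existing entries instead of duplicating them."""
    raw = f"{source}:{page}:{chunk_index}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```

Because the same metadata always hashes to the same ID, an upsert keyed on these IDs overwrites changed chunks and leaves untouched ones alone.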
  • This is great work, thank you. How can I use the result of a SQL query or a dataframe, rather than text files?

    @AjibadeYakub · 5 days ago
    • Yup, looks like there is a Pandas Dataframe Document loader you can use with Langchain: python.langchain.com/v0.1/docs/integrations/document_loaders/pandas_dataframe/

      @pixegami · a day ago
  • How can we make a RAG system that will answer over both structured and unstructured data? For example, a user uploads a CSV and a text file and starts asking questions; the chatbot then has to answer from both. (Structured data should be stored in a different database and passed to a tool for processing; unstructured data should be stored in the vector database.) How can we do this effectively?

    @user-wm2pb3hi7p · a month ago
  • Do these tools work with code too? For example, when you have a big codebase, querying it to ask how XYZ is implemented would be really useful. Or generating docs, etc.

    @lukashk.1770 · 21 days ago
    • I think the idea of a RAG app should definitely work with code. But you'll probably need an intermediate step to translate that code into something close to what you'd want to query for first (e.g. translate a function into a text description). Embed the descriptive element, but have it refer to the original code snippet. It sounds like a really interesting idea to explore for sure!

      @pixegami · 18 days ago
  • I was just thinking about this; great work. Hypothetically, what if your data sucks? What models can I use to create the documentation? (lol)

    @RobbyRobinson1 · 5 months ago
    • Haha, that's a topic for another video. But yeah, if the data is not good, then I think that should be your first focus. This RAG technique builds on the assumption that your data is good; it just adds value on top of that.

      @pixegami · 5 months ago
  • Great instructions! I read through all the comments. How do you get paid? I value the work of others and want to explore an affiliate model that I tested a year ago. What is a good way to connect with you and explore possibilities of mutual interest?

    @henrygagejr.-founderbuildg9199 · 2 months ago
    • Thanks for your kind words, but I'm actually doing these videos as a hobby, and I already have a full-time job, so I'm not interested in exploring monetisation options right now.

      @pixegami · 2 months ago
  • Hi, your video is so good. I just want to know: if I want to automatically change my document in the production environment, keep the query service running, and always use the latest document as the source, how can I do this by changing the code? ❤

    @fengshi9462 · 4 months ago
    • Ah, if you change the source document, you actually have to generate a new embedding and add it to the RAG database (the Chroma DB here). So you would have to figure out which piece of the document changed, then create a new entry for it in the database. I don't have a code example right now, but it's definitely possible.

      @pixegami · 4 months ago
  • Hey, nice video. I was just wondering: what's the difference between doing it like this and using chains? I noticed you didn't use any chain and directly used predict with the prompt 🤔

    @annialevko5771 · 3 months ago
    • With chains, I think you have a little bit more control (especially if you want to do things in a sequence). But since that wasn't the focus of this video, I just did it using `predict()`.

      @pixegami · 2 months ago
  • How did you rip the AWS documentation?

    @naveeng2003 · 3 months ago
  • As others have asked: "Could you show how to do it with an open source LLM?" Also, instead of Markdown (.md) can you show how to use PDFs ? Thanks.

    @slipthetrap · 4 months ago
    • Thanks :) It seems to be a popular topic, so I've added it to my list of upcoming content.

      @pixegami · 4 months ago
    • If you've made a video on the above request, kindly put the link in the description; it would be good for all users.

      @danishammar.official · 2 months ago
    • Instead of the .md extension you can simply use the .txt or .pdf extension; just replace the file extension.

      @raheesahmed56 · 2 months ago
    • Yes, please share how to work with PDFs directly instead of .md files. Thanks!

      @yl8908 · 27 days ago
  • Hi dear friend, thank you for your efforts. How can I apply this tutorial to PDFs in other languages (for example, Persian)? I have made many attempts and tested different models, but the results when asking questions about the PDFs are not good or accurate! Thank you for the explanation.

    @mohsenghafari7652 · 20 days ago
    • Thank you for your comment. For good performance in other languages, you'll probably need to find an LLM model that is optimized for that language. For Persian, I see this result: huggingface.co/MaralGPT/Maral-7B-alpha-1

      @pixegami · 18 days ago
  • Nicely explained but I had to go through a ton of documentation for using this project with AzureOpenAI instead of OpenAI.

    @uchiha_mishal · 12 hours ago
    • Thanks! I took a look at the Azure OpenAI documentation on Langchain and you're right - it doesn't exactly look straightforward: python.langchain.com/v0.1/docs/integrations/llms/azure_openai/

      @pixegami · 11 hours ago
  • Thank you very much for the video. It seems that adding the chunks to the Chroma database takes a really long time. If I just save the embeddings to a JSON file it takes a few seconds, but with Chroma it takes like 20 minutes... Is there something I am missing? I am doing this on a document only about one page long.

    @moriztrautmann8231 · 17 days ago
    • Hey, thanks for commenting! It does seem to me something is wrong - generating the embeddings as JSON and via Chroma should normally take the same amount of time for the same amount of text, so they seem to be doing different things. Have you tried using different embedding functions? Or is your ChromaDB saved to a slower disk drive?

      @pixegami · a day ago
  • Thank you for this video, is NLTK something required to do this?

    @ailenrgrimaldi6050 · a month ago
    • The NLTK library? I don't think I had to use it in this project; a lot of the other libraries might already give you all the functionality at a higher level of abstraction.

      @pixegami · a month ago
  • Very nice video, what kind of theme do you use to make the vscode look like this? Thanks.

    @NahuelD101 · 5 months ago
    • I use Monokai Pro :)

      @pixegami · 5 months ago
    • The VSCode theme is called Monokai Pro :)

      @pixegami · 5 months ago
  • Nailed it, and easily understandable. Can I make this into a chatbot? Anyone, please share your thoughts.

    @RajAIversion · 2 months ago
  • Please, I'd like to see a recommendation model (products, images, etc.) based on our different sources; it could be scraping from webpages. Something to use in e-commerce.

    @FrancisRodrigues · a month ago
    • Product recommendations are a good idea :) Thanks for the suggestion, I'll add it to my list.

      @pixegami · a month ago
  • Would this work for a general question such as this: please summarize the book in 5 sentences?

    @spicytuna08 · a month ago
  • I want to hear your thoughts on which approach is likely the better one: 1. Chop the document into multiple chunks and convert the chunks to vectors. 2. Convert the whole document to a vector. Thank you.

    @hoangngbot · 20 days ago
    • I think it really depends on your use case and the content. The best way to know is to have a way to evaluate (test) the results/quality. In my own use cases, I find that a chunk length of around 3000 characters works quite well (you need enough context for the content to make sense). I also like to concatenate some context info into the chunk (like "this is page 5 about XYZ, part of ABC"). But I haven't done enough research into this to really give a qualified answer. Good luck!

      @pixegami · 18 days ago
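A rough sketch of the chunking approach described above - fixed-size chunks with some overlap, each prefixed with a short context line (the sizes and the prefix format are just illustrative values based on the reply):

```python
def chunk_with_context(text: str, source: str,
                       chunk_size: int = 3000, overlap: int = 300) -> list:
    """Split text into overlapping chunks, prefixing each with a short
    context line so a chunk still makes sense when retrieved on its own."""
    chunks = []
    step = chunk_size - overlap
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        body = text[start:start + chunk_size]
        if not body:
            break
        chunks.append(f"[{source}, part {i + 1}]\n{body}")
    return chunks
```

Real splitters (e.g. Langchain's recursive character splitter) also try to break on sentence and paragraph boundaries rather than at a fixed character count.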
  • Has anyone faced a Tesseract error on Windows? It works well on Linux.

    @user-iz7wi7rp6l · 4 months ago
  • Great video, but where have you added the OpenAI API keys?

    @gvtanuja4874 · 4 months ago
    • You can add them to your environment variables :) I add mine to my .bashrc file.

      @pixegami · 3 months ago
  • Amazing video! How would I make this into a webapp? Like a support chat bot?

    @marselse · 11 days ago
    • Thank you! To make it into a web app, you'd first need to turn this into an API and deploy it somewhere (my next video tutorial will cover that, so stay tuned!). Then you'll need to connect the API to a front-end (webpage or app) for your users. There are a couple of low-code/no-code tools to help you achieve this, but I haven't looked into them in detail yet. Or you can code it all up yourself if you are technical.

      @pixegami · a day ago
  • There is something that I did not understand: in the video, you mentioned that the closer the score is to zero, the more accurate the result will be. But in your GitHub repo, you have this code: if len(results) == 0 or results[0][1] < 0.7: print(f"Unable to find matching results.") So it seems that this code does the opposite: if the result is close to zero, then it's wrong. Perhaps I understood incorrectly. Could you please help me out?

    @julianm3706 · 20 days ago
    • I see, that's a good catch! I should have been more clear about that. Those are actually two different "scores" being used. 1) Distance: the first one I talked about is the "distance" score. It's either a Euclidean distance or a cosine similarity. For "distance" scores, the closer to 0, the closer the match. 2) Relevancy: the second score is a "relevancy" score. I don't know how it's calculated, because it's wrapped inside a Langchain/ChromaDB helper function. But for that score, the higher the better - they do something to invert it. The scale also varies depending on which embedding is used, I think, so it could range from 0 to 1, or even be in ranges like 0 to 10k. They are both "scores" but measure different things, and we used different helper functions to calculate them. Hope that clarifies it!

      @pixegami · 18 days ago
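That 0.7 cutoff can be illustrated with a small stand-alone helper (the `(document, score)` tuple shape mirrors what `similarity_search_with_relevance_scores` returns; the sample results below are made up):

```python
def passes_threshold(results, min_relevance: float = 0.7) -> bool:
    """Return True if the best match is relevant enough to answer from.

    `results` is a best-first list of (document, score) tuples using
    *relevance* scores, where higher is better -- unlike raw distance
    scores, where values closer to 0 mean a closer match.
    """
    return len(results) > 0 and results[0][1] >= min_relevance

# Hypothetical retrieval results: (chunk_text, relevance_score)
good = [("Alice followed the White Rabbit...", 0.82), ("Down the hole...", 0.74)]
weak = [("An unrelated paragraph.", 0.31)]
```

So the repo's check and the "closer to zero" remark in the video are both right, just about different score types.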
  • What is more advisable if I work with PDF documents: transforming them into text using a library like PyPDFLoader, or transforming them into another format that is easier to read?

    @canasdruid · 12 days ago
    • I haven't done a deep dive on the optimal way to use PDF data yet. I think it really depends on the data in the PDF, and what the chunk outputs look like. You probably need to do a bit of experimentation. If you have specific patterns in your PDFs (like lots of tables or columns), I'd probably try to pre-process them somehow first before feeding them into the document loader.

      @pixegami · a day ago
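One cheap pre-processing pass, sketched in plain Python, is to normalize the extracted text before chunking and embedding (the cleanup rules here are only examples; messy PDFs with tables or multi-column layouts usually need more than this):

```python
import re

def clean_pdf_text(raw: str) -> str:
    """Normalize text extracted from a PDF before chunking/embedding."""
    text = raw.replace("\r\n", "\n")
    # Re-join words hyphenated across line breaks: "genera-\ntion" -> "generation"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Collapse single line breaks inside paragraphs, but keep blank lines
    # (paragraph boundaries) intact.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    # Squeeze repeated spaces and tabs.
    text = re.sub(r"[ \t]+", " ", text)
    return text.strip()

raw = "Retrieval augmented genera-\ntion (RAG) uses your\ndocuments.\n\nNew paragraph."
cleaned = clean_pdf_text(raw)
```

Cleaner input text generally means the character-based splitter produces chunks that read as coherent prose, which helps retrieval quality.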
  • Does the data have to be in .md format? Also, how do you prep the data beforehand?

    @MichaelChenAdventures · 2 months ago
    • The data can be anything you want. Here's a list of all the Document loaders supported in Langchain (or you can even write your own): python.langchain.com/docs/modules/data_connection/document_loaders/ The level of preparation is up to you, and it depends on your use case. For example, if you want to split your embeddings by chapters or headers (rather than some length of text), your data format will need a way to surface that.

      @pixegami · 2 months ago
  • This is great, but can we use our own custom LLM as the server? I don't want to use OpenAI. Is there any way?

    @noubgaemer1044 · 4 months ago
    • Yes, Langchain is LLM-agnostic. So whilst OpenAI is just the easiest one to demo with (and also the default option), you should be able to swap it out for any other LLM by changing the LLM implementation. See python.langchain.com/docs/modules/model_io/llms/

      @pixegami · 4 months ago
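The idea can be sketched without Langchain at all: if the RAG pipeline only depends on "a callable that turns a prompt into text", swapping providers means swapping one object. Both model functions below are stand-ins, not real provider APIs:

```python
from typing import Callable

PROMPT_TEMPLATE = "Answer using only this context:\n{context}\n\nQuestion: {question}"

def rag_answer(question: str, context: str, llm: Callable[[str], str]) -> str:
    """The pipeline doesn't care which provider `llm` wraps."""
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return llm(prompt)

# Stand-in "models" -- in real code these would wrap OpenAI, Llama 2, etc.
def fake_openai(prompt: str) -> str:
    return "openai:" + prompt[:20]

def fake_llama(prompt: str) -> str:
    return "llama:" + prompt[:20]

answer = rag_answer("Who is Alice?", "Alice is a girl.", fake_openai)
```

Langchain's LLM classes play exactly this interchangeable role, which is why switching providers usually only touches the one line that constructs the model.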
  • Hi bro, I am creating a chatbot which takes data from a third-party API, which means there is not much data, but it's dynamic data on every call. Should I use the RAG approach? If not, please suggest a better approach.

    @kashishvarshney2225 · 4 months ago
    • Hmm, I think RAG probably isn't the right approach. But it depends... For example, if you get 3000+ characters of dynamic data each call, then it might still be helpful to generate a vector DB on the spot so you can use RAG to narrow down the answer. But it's going to make each call a lot slower. But if you have way less data than that (say

      @pixegami · 4 months ago
  • What would be the steps to deploy this on the cloud, ready for an internal team to use?

    @NinVibe · a month ago
    • Good question. I think I want to cover this in one of my new videos this month. I can make a tutorial on how to deploy this and turn it into a production-grade API.

      @pixegami · 27 days ago
    • @@pixegami Great, waiting for it!

      @NinoRisteski · 27 days ago
  • Which model are you using in this?

    @user-cc3ev7de9v · 2 months ago
  • What is the compiler you are using? I am using Jupyter Notebook, but yours looks better.

    @johnfakes1298 · 3 months ago
    • If you mean the development environment (editor or IDE), then I'm using VSCode with the Monokai Pro theme.

      @pixegami · 2 months ago
    • @@pixegami thank you

      @johnfakes1298 · 2 months ago
  • I've been trying to use Llama 2 instead of ChatOpenAI, but I couldn't. Could you please provide some insights that could help with that?

    @khalilchi5726 · 3 months ago
    • This was quite a commonly requested thing, so I made another tutorial about how to switch to different LLMs here: kzhead.info/sun/e9yImMmpmWiHoIk/bejne.html I think you just have to switch the LLM implementation that you use (but the prompt structure might be different too). In that video, you'll see code for how to use Llama2 via AWS Bedrock.

      @pixegami · 3 months ago
    • @@pixegami Thanks a lot! I switched to Gemini Pro and it worked!

      @khalilchi5726 · 3 months ago
  • Hello man, first of all, great video! The best I've seen so far! I've looked at everything there is on this topic on the web, haha. Already subbed and liked the vid! When I tried to replicate it, the database part went well, but query_data had some errors. Can you please help this poor noob =) The errors will be in a comment.

    @gabrielmartinphilot980 · 2 months ago
    • python query_data.py "Me diga as 5 etapas da Metodologia Crisp"

      /home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain/embeddings/__init__.py:29: LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead: `from langchain_community.embeddings import OpenAIEmbeddings`. To install langchain-community run `pip install -U langchain-community`.
        warnings.warn(
      /home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class `langchain_community.embeddings.openai.OpenAIEmbeddings` was deprecated in langchain-community 0.0.9 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import OpenAIEmbeddings`.
        warn_deprecated(
      Traceback (most recent call last):
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/src/dev/query_data.py", line 61, in <module>
          main()
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/src/dev/query_data.py", line 41, in main
          results = db.similarity_search_with_relevance_scores(query_text, k=3)
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 324, in similarity_search_with_relevance_scores
          docs_and_similarities = self._similarity_search_with_relevance_scores(
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 272, in _similarity_search_with_relevance_scores
          docs_and_scores = self.similarity_search_with_score(query, k, **kwargs)
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 437, in similarity_search_with_score
          query_embedding = self._embedding_function.embed_query(query)
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 697, in embed_query
          return self.embed_documents([text])[0]
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 668, in embed_documents
          return self._get_len_safe_embeddings(texts, engine=engine)
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 494, in _get_len_safe_embeddings
          response = embed_with_retry(
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 116, in embed_with_retry
          return embeddings.client.create(**kwargs)
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/openai/resources/embeddings.py", line 113, in create
          return self._post(
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1200, in post
          return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/openai/_base_client.py", line 889, in request
          return self._request(
        File "/home/gabriel/linux_folder/studies/model_own_data_streamlit/venv/lib/python3.10/site-packages/openai/_base_client.py", line 980, in _request
          raise self._make_status_error_from_response(err.response) from None
      openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sua_chave. You can find your API key at platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

      Where did you use your OpenAI API key? Because I use:

      from dotenv import load_dotenv
      load_dotenv()

      and it's working.

      @gabrielmartinphilot980 · 2 months ago
  • Can I upload JSON files, updated through a cron job?

    @Nouman-es7on · 2 months ago