I didn't know RAG could be this easy

Apr 11, 2024
3,104 views

Gradient AI: tinyurl.com/gradient-ai
Get the code: github.com/gkamradt/RAGWithGr...
Get updates from me: mail.gregkamradt.com/
Greg’s Info:
- Twitter: / gregkamradt
- Newsletter: mail.gregkamradt.com/
- Website: gregkamradt.com/
- LinkedIn: / gregkamradt
- Work with me: tiny.one/TEi2HhN
- Contact Me: Twitter DM, LinkedIn Message, or contact@dataindependent.com

Comments
  • 🔥 as usual Greg! Thanks!

    @AP-hv5dh · 1 month ago
    • Love it thanks

      @DataIndependent · 1 month ago
  • Any guidance for RAG with complex documents like engineering documents that have a lot of figures (think of an IKEA manual with instructions) and then have instructions further along that refer to those figures? For example: if the compressor is being extra noisy, check the air filter by taking part A in figure 3 and unscrewing the 4 bolts (C 8’ figure), etc.

    @Lampshadx · 1 month ago
    • Yeah, I would check out level 3 of kzhead.info/sun/a7ODc5Zpi2SJf2w/bejne.html, which talks about different representations of raw data. You'll need to do some serious post-processing on your chunks to make sure the data can be referenced correctly (see the sketch after this thread).

      @DataIndependent · 1 month ago
    • I think something like LlamaParse can deal with many figures and formulae.

      @ColinNardo-le3bl · 12 days ago
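
A minimal sketch of the chunk post-processing suggested above, in Python. It assumes figure captions have already been extracted upstream (for example by a layout-aware parser such as LlamaParse) and keyed by figure number; the function name, regex, and sample manual text are illustrative, not from the video:

import re

# Matches references like "figure 3" or "Figure 12" inside a chunk of text.
FIGURE_REF = re.compile(r"figure\s+(\d+)", re.IGNORECASE)

def attach_figure_metadata(chunks, figure_captions):
    """Record, for each text chunk, which figures it mentions so their
    captions (or image paths) can be retrieved alongside the chunk."""
    enriched = []
    for chunk in chunks:
        figure_ids = sorted(set(FIGURE_REF.findall(chunk)))
        enriched.append({
            "text": chunk,
            "figures": {fid: figure_captions.get(fid, "") for fid in figure_ids},
        })
    return enriched

# Hypothetical example: captions keyed by figure number, extracted upstream.
captions = {"3": "Figure 3: compressor housing, part A and mounting bolts"}
chunks = [
    "If the compressor is extra noisy, check the air filter by taking "
    "part A in figure 3 and unscrewing the 4 bolts."
]
print(attach_figure_metadata(chunks, captions))

At query time a retrieved chunk then carries the IDs of the figures it mentions, so the matching captions or images can be placed in the prompt next to the text.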
  • Simple and useful

    @davinci137 · 1 month ago
    • Super simple

      @DataIndependent · 1 month ago
  • Love this! Easy RAG! The best! So much easier.

    @andrewlaery · 1 month ago
  • As prompt context sizes continue to increase, will we get to a point where RAG won't be needed? Eventually, will we be able to fit everything in the prompt?

    @ekkamailax · 1 month ago
    • Eh, you won't be able to fit all of Wikipedia into a prompt, so we'll need to select somehow. My take is that data selection (retrieval) will be a thing for a while (see the sketch after this thread).

      @DataIndependent · 1 month ago
    • I just saw a study that showed Claude and ChatGPT are very inaccurate when it comes to using long content added to the context window.

      @HashimWarren · 1 month ago
    • @@DataIndependent thank you for your feedback

      @ekkamailax · 1 month ago
    • @@HashimWarren interesting, would be a good experiment to compare both approaches

      @ekkamailax · 1 month ago
    • @@HashimWarren Yeah, Greg himself did a "needle in the haystack" experiment. LLMs don't yet reliably retrieve facts and reason over them in a large context window. Also, you pay by the token, so retrieval is still very much relevant and will probably continue to be.

      @CasparSornberger · 27 days ago
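
To make the selection point concrete, here is a minimal retrieval sketch. The bag-of-words scoring is a toy stand-in for a real embedding model, and the corpus and query are hypothetical; the idea is simply that only the top-k scored chunks go into the prompt instead of everything:

import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, chunks, k=2):
    """Keep only the k chunks most similar to the query, so the prompt
    carries a handful of relevant passages instead of the whole corpus."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical corpus standing in for "all of Wikipedia".
corpus = [
    "The compressor uses a 240V motor rated at 5 amps.",
    "Retrieval augmented generation selects relevant passages for the prompt.",
    "Replace the air filter every six months to keep the unit quiet.",
]
print(retrieve("why is the compressor noisy and what should I check", corpus))

Comparing this top-k approach against dumping the full corpus into a long context window is exactly the experiment suggested above, and per-token pricing keeps the retrieval approach attractive either way.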
  • nice

    @ganghuang8892 · 5 days ago
  • Bye!

    @naromsky · 1 month ago