Semantic Chunking for RAG

May 18, 2024
12,362 views

Semantic chunking for RAG allows us to build more concise chunks for our RAG pipelines, chatbots, and AI agents. We can pair this with various LLMs and embedding models from OpenAI, Cohere, Anthropic, etc., and with libraries like LangChain or CrewAI to build potentially improved Retrieval Augmented Generation (RAG) pipelines.
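
For readers who want the gist before watching: below is a minimal sketch of similarity-based semantic chunking, not the exact code from the notebook linked below. The `embed` helper is a placeholder for whichever embedding model you use, and the 0.75 threshold is an illustrative assumption.

```python
# Minimal sketch of similarity-based semantic chunking (illustrative only).
# `embed` is a placeholder for your embedding model; 0.75 is an arbitrary threshold.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Call your embedding model here (OpenAI, Cohere, etc.) and return a 2D array
    # with one row per input text.
    raise NotImplementedError

def semantic_chunk(sentences: list[str], threshold: float = 0.75) -> list[str]:
    if not sentences:
        return []
    vectors = embed(sentences)
    # normalize so that a dot product equals cosine similarity
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(vectors[i - 1] @ vectors[i])
        if similarity >= threshold:
            current.append(sentences[i])      # same topic: keep growing the chunk
        else:
            chunks.append(" ".join(current))  # topic shift: close the chunk
            current = [sentences[i]]
    chunks.append(" ".join(current))
    return chunks
```
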
📌 Code:
github.com/pinecone-io/exampl...
🚩 Intro to Semantic Chunking:
www.aurelio.ai/learn/semantic...
🌲 Subscribe for Latest Articles and Videos:
www.pinecone.io/newsletter-si...
👋🏼 AI Consulting:
aurelio.ai
👾 Discord:
/ discord
Twitter: / jamescalam
LinkedIn: / jamescalam
00:00 Semantic Chunking for RAG
00:45 What is Semantic Chunking
03:31 Semantic Chunking in Python
12:17 Adding Context to Chunks
13:41 Providing LLMs with More Context
18:11 Indexing our Chunks
20:27 Creating Chunks for the LLM
27:18 Querying for Chunks
#artificialintelligence #ai #nlp #chatbot #openai

Comments
  • At this point, you are practically Captain Chunk.

    @aaronsmyth7943 • 5 days ago
  • Thank you! I’ve been doing this for a while, but did not have a good name for it.

    @AaronJOlson • 9 days ago
  • Love all your content sir!

    @rodgerb2645 • 3 days ago
  • Amazing video, thank you so much!!

    @MrMoonsilver • 14 days ago
  • Best I have seen so far for understanding the core concept of chunking, thanks

    @lalamax3d • 14 days ago
    • glad it was helpful :)

      @jamesbriggs • 13 days ago
  • Thank you so much for this. Will test it out on the RAG flow in the company.

    @shameekm2146 • 14 days ago
    • welcome, would love to hear how it goes

      @jamesbriggs • 14 days ago
  • Another killer video. Great work!

    @gullyburns1280 • 12 days ago
  • King of Chunk

    @naromsky • 14 days ago
    • a title I have always wanted

      @jamesbriggs • 14 days ago
  • Great material. 🙏

    @trn450 • 14 days ago
  • Just what I was trying to learn... awesome mate, thanks

    @klik24 • 14 days ago
    • Nice np

      @jamesbriggs • 13 days ago
  • Excellent content and explanation, especially the core chunking concepts and challenges. Keep up your work, it's so precious to learn from 👍

    @AdrienSales • 14 days ago
    • Glad to hear it helps

      @jamesbriggs • 13 days ago
  • Need a video on cross-chunk attention. Wasn't attention all about query, key, and value anyway?

    @xuantungnguyen9719 • 12 days ago
  • Great video, thanks for posting. I have been thinking of document chunking but using the LLM itself via prompting + k-shot. The approach you show will be cheaper, of course, but I'm curious to see how these two approaches compare in terms of any relevant non-cost metrics.

    @baskarjayaraman5821 • 5 days ago
  • I see the #abstract is also grouped with the #title; ideally both should be in different chunks so that the LLM can understand the semantics better.

    @amantandon-ln9xx • 4 days ago
  • "Grab complete thoughts" is an obvious good and expensive thing. Except for tables, for instance.

    @scottmiller2591@scottmiller259113 күн бұрын
    • yeah, tables need to be handled differently - doable if you are identifying text vs. table elements in your processing pipeline

      @jamesbriggs • 12 days ago
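
A rough sketch of the element-routing idea from the reply above; the `elements` structure is hypothetical parser output, not any specific library's format.

```python
# Route parsed document elements by type: keep tables whole, semantically chunk text.
# The element dicts here are hypothetical parser output, not a specific library's format.
def build_chunks(elements: list[dict], chunker) -> list[str]:
    chunks = []
    for element in elements:
        if element["type"] == "table":
            chunks.append(element["text"])           # keep tables intact as single chunks
        else:
            chunks.extend(chunker(element["text"]))  # free text goes through the semantic chunker
    return chunks
```
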
  • Dude, I already embedded whole documents of text into PC haha, this would've helped a month ago. But awesome, thanks for this! 🤘🏾

    @talesfromthetrailz • 14 days ago
    • Maybe for the next project 😅

      @jamesbriggs • 14 days ago
    • @jamesbriggs quick question, man. Is the objective of semantic chunking to achieve broader search results or to decrease query times? I'm thinking of it in terms of medium-sized text docs, for example movie summaries and such. Thanks!

      @talesfromthetrailz • 14 days ago
  • Hi James, would you please tell me how you would tackle this one: how would you design a real-time updating RAG system? For example, let's say our clients updated some details in a watched doc; I want the old chunks to be removed and the doc re-chunked automatically. Have you seen such a pipeline already? No one seems to cover this, and I think it is what sets apart fun projects from actual production systems. Thanks and all the best! Love your channel ❤

    @AGI-Bingo • 14 days ago
    • I have achieved this for one of the sources in my RAG bot. It has an API provided to access the data, so I run the embedding script on the delta changes.

      @shameekm2146 • 13 days ago
    • @shameekm2146 amazing, would you please open-source it so we can all improve the pipeline as a community? 🌈

      @AGI-Bingo • 13 days ago
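
A hypothetical sketch of the delta-update approach discussed in this thread: store chunks under deterministic IDs per document, delete the old IDs, then upsert fresh ones. `index` stands in for any vector store client exposing delete/upsert; the ID scheme and bookkeeping dict are assumptions, not from the video.

```python
# Hypothetical re-chunk-on-update flow; `index`, `chunker`, and `embed` are
# stand-ins for your vector store client, chunking function, and embedding model.
previous_ids: dict[str, list[str]] = {}  # doc_id -> chunk IDs currently stored

def refresh_document(index, doc_id: str, new_text: str, chunker, embed):
    # remove every chunk previously stored for this document
    if previous_ids.get(doc_id):
        index.delete(ids=previous_ids[doc_id])
    # re-chunk, re-embed, and upsert under deterministic IDs
    chunks = chunker(new_text)
    vectors = embed(chunks)
    ids = [f"{doc_id}#{i}" for i in range(len(chunks))]
    index.upsert(vectors=[
        (chunk_id, vector.tolist(), {"doc_id": doc_id, "text": chunk})
        for chunk_id, chunk, vector in zip(ids, chunks, vectors)
    ])
    previous_ids[doc_id] = ids
```
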
  • We use a simple combination of Microsoft's Document Intelligence with markdown output and a simple markdown splitter. The improvement is noticeable although the Document Intelligence models do come at an additional cost.

    @GeertBaeke • 14 days ago
    • yeah, it depends on what you need of course. I'm mostly interested in further abstraction and more analytical methods for chunking - not for where it is now, but for where this type of experimentation might lead in the future. I could see a few more iterations and improvements to more intelligent doc parsing and chunking becoming increasingly performant - but we'll see

      @jamesbriggs • 14 days ago
    • Do you have a link for this markdown processing? :) We are using Document Intelligence as well, but not for layout analysis, yet.

      @alivecoding4995 • 14 days ago
    • @alivecoding4995 you can also use LayoutPDFReader from llmsherpa

      @user-os6uo8xq9g • 12 days ago
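
One possible version of the "simple markdown splitter" mentioned in this thread, assuming the layout model's output has already been saved as markdown; it uses LangChain's MarkdownHeaderTextSplitter, which is not the approach shown in the video, and the header levels/names are assumptions.

```python
# Split layout-model markdown output on headers; header levels/names are assumptions.
from langchain_text_splitters import MarkdownHeaderTextSplitter

markdown_text = "# Title\n\nIntro...\n\n## Section 1\n\nBody..."  # e.g. Document Intelligence output

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "title"), ("##", "section"), ("###", "subsection")]
)
docs = splitter.split_text(markdown_text)  # Documents with header metadata attached

for doc in docs:
    print(doc.metadata, doc.page_content[:80])
```
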
  • Can this be used to create chunks for a training dataset as well? It would be great to chunk a document into 'statements' and use those statements for a dataset - in essence, have an LLM create questions for each of those statements and use those pairs for training. Could you make a video to show how that works?

    @MrMoonsilver • 14 days ago
  • How does Parent Document RAG fit in with your new techniques?

    @luciolrv • 13 days ago
  • People since GPT-2: simply ask an LLM recursively to please insert "{split}" wherever a topic change etc. happens, according to a summary of the prior text. Get embeddings. Use them to separate and group. 2024: we would like to introduce a novel concept called Semantic Chunking with a sliding context... Beginners must be truly lost 😮‍💨

    @dinoscheidt • 14 days ago
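
A sketch of the prompt-based splitting the comment above describes: ask an LLM to insert a marker wherever the topic changes, then split on that marker. The model name and prompt wording are illustrative assumptions.

```python
# Ask an LLM to mark topic boundaries, then split on the marker (illustrative sketch).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def llm_split(text: str, marker: str = "{split}") -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever you prefer
        messages=[
            {"role": "system", "content": (
                "Return the user's text verbatim, but insert the token "
                f"{marker} wherever the topic changes. Add nothing else."
            )},
            {"role": "user", "content": text},
        ],
    )
    marked = response.choices[0].message.content
    return [chunk.strip() for chunk in marked.split(marker) if chunk.strip()]
```
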
  • Hi James. Excuse me, maybe I missed it, but how do you handle the situation where, when we use semantic chunking, we lose page numbers for chunks? Is it possible to retrieve them using this package?

    @MrDespik • 9 days ago
  • And how can I use big models from Hugging Face? I can't load them into memory because many of them are bigger than 15 GB, some of them 130 GB+. Any thoughts?

    @botondvasvari5758 • 2 days ago
  • At ~4:40, you mention that you should use the same encoder for the chunking and the encoding. Why? A chunk captures a "single meaning", so why would it matter that the same encoder is used? If you look at the chunking as a clustering algorithm that creates meaningful chunks, then what does it matter that the encoders match? What am I missing?

    @FatherNovelty • 14 days ago
    • good point - yes, they are capturing the "single meaning", and that single meaning will (hopefully) overlap a lot, but embedding models are not perfect, so they will not align perfectly between themselves. It's similar to if someone asked you and me to chunk an article: we'd likely overlap for the majority of the article, but I'm sure there would be differences

      @jamesbriggs • 14 days ago
  • Why not chunk based on paragraphs, lists, and tables?

    @mrchongnoi • 9 days ago
  • My son just asked if you were the Rock

    @jimmc448 • 11 days ago
    • I hope you said yes

      @jamesbriggs • 11 days ago
  • "What is the title of the document?" -> 99% of RAG pipelines fail, because there is not answer in the document as it is embedded,

    @saqqara6361@saqqara636114 күн бұрын
    • in that case we can try including the title in our chunk, and possibly consider different routing logic for this type of query - for example, when a user asks for metadata about a retrieved document, we trigger a function that identifies the document ID in previously retrieved contexts and uses that to pull in the document metadata for the LLM to generate the answer from

      @jamesbriggs • 14 days ago
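
A minimal sketch of the "include the title in our chunk" suggestion above; the routing logic for metadata queries would live elsewhere in the pipeline, and the prefix wording here is an assumption.

```python
# Prepend the document title to every chunk before embedding, so questions about
# the document itself can still be answered from retrieved chunks.
def contextualize(chunks: list[str], title: str) -> list[str]:
    return [f"Document title: {title}\n\n{chunk}" for chunk in chunks]
```
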
  • I am facing this problem in my Jupyter notebook, please help: 2024-05-10 10:59:50 WARNING semantic_router.utils.logger Retrying in 2 seconds...

    @itzuditsharma • 9 days ago