37% Better Output with 15 Lines of Code - Llama 3 8B (Ollama) & 70B (Groq)

May 16, 2024
13,567 views

To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/AllAboutAI . You’ll also get 20% off an annual premium subscription.
GitHub Project:
github.com/AllAboutAI-YT/easy...
👊 Become a member and get access to GitHub and Code:
/ allaboutai
🤖 Great AI Engineer Course:
scrimba.com/learn/aiengineer?...
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
In this video I try to fix a known problem when using RAG with a local model like Llama 3 8B on Ollama. This local RAG system was improved by adding just around 15 lines of code. Feel free to share and rate on GitHub :)
00:00 Llama 3 Improved RAG Intro
02:01 Problem / Solution
03:05 Brilliant.org
04:26 How this works
12:05 Llama 3 70B Groq
15:12 Conclusion
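The improvement the video describes is a query-rewriting step before retrieval. A minimal sketch of that idea, assuming any prompt-to-completion callable as `llm` (e.g. Llama 3 via Ollama or Groq); the function name and prompt wording here are illustrative, not taken from the repo:

```python
def rewrite_query(history, user_query, llm):
    """Rewrite a follow-up question into a standalone query before RAG retrieval.

    `llm` is any prompt -> completion callable (e.g. Llama 3 via Ollama or Groq).
    """
    context = "\n".join(f"{role}: {text}" for role, text in history)
    prompt = (
        "Given the conversation below, rewrite the final question so it is "
        "fully self-contained for a document search. Return only the "
        "rewritten question.\n\n"
        f"Conversation:\n{context}\n\nQuestion: {user_query}"
    )
    return llm(prompt).strip()

# usage with an echoing stub in place of a real model, to show what the
# model actually sees: both the conversation history and the new question
echo_llm = lambda prompt: prompt
rewritten = rewrite_query(
    [("user", "Who wrote Dune?"), ("assistant", "Frank Herbert.")],
    "When was it published?",
    echo_llm,
)
print(rewritten)
```

With a real model, the rewritten question (e.g. "When was Dune by Frank Herbert published?") is what gets embedded and searched, instead of the ambiguous follow-up.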

Comments
  • Brilliant: To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/AllAboutAI . You’ll also get 20% off an annual premium subscription.

    @AllAboutAI, 23 days ago
  • @AllAboutAI the issue is that it assumes the question is related to the content passed in, which is not always the case in a conversation. If you suddenly talk about something else, say "How are you", it will be rewritten to align with the preceding context, which is not what you want.. then you need to implement some extra mechanism, or tweak your prompt to only rephrase when the question seems linked to the past. Many discussions about this..

    @pec8377, 20 days ago
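One way to implement the gating suggested above is to ask the model first whether the question even depends on prior context. A hedged sketch, with made-up function names and prompts:

```python
def maybe_rewrite(history, query, llm):
    """Only rewrite the query when it actually depends on the conversation.

    `llm` is any prompt -> completion callable; the prompts are illustrative.
    """
    context = "\n".join(history)
    verdict = llm(
        "Does the final question depend on the earlier conversation to be "
        "understood? Answer yes or no.\n\n"
        f"Conversation:\n{context}\n\nQuestion: {query}"
    )
    if not verdict.strip().lower().startswith("yes"):
        # standalone questions like "How are you?" pass through untouched
        return query
    return llm(
        "Rewrite the final question so it is self-contained, using the "
        f"conversation for context.\n\nConversation:\n{context}\n\nQuestion: {query}"
    ).strip()

# stub model: answers "no" to the relatedness check, "REWRITTEN" otherwise
stub = lambda prompt: "no" if "yes or no" in prompt else "REWRITTEN"
print(maybe_rewrite(["user: tell me about Llama 3"], "How are you?", stub))
```

The extra classification call costs one more round trip per turn, which is the usual trade-off in the discussions the comment mentions.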
  • Another approach is to just ask a simple LLM to hallucinate an answer to the current chat. That answer will not be correct, but it will probably have the phrases needed for the RAG system to find the needed excerpts. There's a technical term for this idea which I can't remember, but I came across it on the TwoSetAI channel, which has a lot of similar tricks

    @MattJonesYT, 22 days ago
    • HyDE, Hypothetical Document Embeddings. Works very well and is easy to implement. Similarity search on a vector database using a hallucinated answer to the question, instead of the question itself, usually gives better similarity

      @robboerman9378, 22 days ago
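The HyDE idea described above can be sketched end to end. The bag-of-words embedding below is only a stand-in for a real sentence-embedding model, to keep the example self-contained; the function names are my own:

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words vector; a real system would use a sentence-embedding model
    return Counter(text.lower().replace(",", "").replace(".", "").split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(question, llm, docs, top_k=1):
    """HyDE: embed a hallucinated answer instead of the raw question,
    then rank documents by similarity to that hypothetical answer."""
    hypothetical = llm(f"Write a short passage answering: {question}")
    q_vec = embed(hypothetical)
    return sorted(docs, key=lambda d: cosine(q_vec, embed(d)), reverse=True)[:top_k]

# stub model standing in for e.g. Llama 3; the "answer" need not be correct,
# it just has to share vocabulary with the right document
stub_llm = lambda p: "The capital of France is Paris, a large European city."
docs = [
    "Paris is the capital and most populous city of France.",
    "Bananas are rich in potassium.",
]
print(hyde_retrieve("What is the capital of France?", stub_llm, docs))
```

Because the hypothetical answer is phrased like a document rather than a question, its embedding tends to land closer to the relevant chunks, which is the effect the comment describes.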
    • yes this is nice, thnx :)

      @AllAboutAI, 22 days ago
    • RAG is a bit too much of an exact match because it is based on concepts and similar concepts. Therefore: no match, no return. HyDE makes the search a bit more fuzzy by expanding the query and introducing more concepts. It would be good to have an evaluator to check the faithfulness of the retrieval and the relevance of the outputs to the original query.

      @kenhtinhthuc, 22 days ago
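The evaluator suggested above is often built as an LLM-as-judge pass over the retrieval results. A hedged sketch, with hypothetical prompts and a 1-5 scale:

```python
def judge_rag(question, chunks, answer, llm):
    """Score faithfulness (answer vs. retrieved context) and relevance
    (context vs. original question), each 1-5, via LLM-as-judge calls.

    `llm` is any prompt -> completion callable; the prompts are illustrative.
    """
    context = "\n".join(chunks)
    faithfulness = llm(
        "On a scale of 1-5, how faithful is the answer to the context? "
        f"Reply with a single digit.\n\nContext:\n{context}\n\nAnswer: {answer}"
    )
    relevance = llm(
        "On a scale of 1-5, how relevant is the context to the question? "
        f"Reply with a single digit.\n\nQuestion: {question}\n\nContext:\n{context}"
    )
    return int(faithfulness.strip()), int(relevance.strip())

# stub judge that always answers "4", just to show the call shape
scores = judge_rag(
    "What is HyDE?",
    ["HyDE embeds a hypothetical answer instead of the question."],
    "HyDE embeds a hallucinated answer.",
    lambda p: "4",
)
print(scores)  # (4, 4)
```

Low faithfulness flags hallucinated answers; low relevance flags the "fuzzier" retrieval pulling in off-topic chunks.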
  • Dolphin-llama3 & Groq-llama3 are awesome! Well done!

    @ASchnacky, 22 days ago
    • how are they different?

      @ByZaMo64, 17 days ago
  • Direct, didactic, almost verbatim-from-my-book explanation. Excellent

    @Edoras5916, 22 days ago
  • 👍👍👍Thanks! Useful information.

    @nic-ori, 23 days ago
  • dolphin-llama3:8b-v2.9-fp16 is so good as an assistant!

    @MarcShade, 22 days ago
    • Dolphin-llama3 & Groq-llama3

      @ASchnacky, 22 days ago
  • Bruuuuuuh, just found this channel, you sure you're human?!?! Wish i had 5% of your brain.... thank you so much for your work! Im learning so much!!

    @akimezra7178, 13 days ago
  • best AI python coding channel hands down

    @futureworldhealing, 22 days ago
    • thnx a lot :D

      @AllAboutAI, 22 days ago
  • Based on your experience, why is Ollama better than LM Studio?

    @realorfake4765, 18 days ago
  • Great job

    @technolus5742, 22 days ago
    • thnx :)

      @AllAboutAI, 22 days ago
  • 💎💎🌟💎💎💎💎

    @elsondasilva8636, 23 days ago
  • first

    @iamisobe, 23 days ago
  • What about doing the same for the output? One pass is the internal voice; compare it to the prompt to see if it matches up, then a second pass for any corrections. Like giving LLMs an inner voice like we have.

    @monstercameron, 23 days ago
    • interesting

      @AllAboutAI, 22 days ago
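The two-pass "inner voice" idea from this thread could look like the sketch below; the function name and prompts are hypothetical:

```python
def answer_with_inner_voice(prompt, llm):
    """Two-pass generation: draft an answer, then have the same model
    check the draft against the original prompt and correct it.

    `llm` is any prompt -> completion callable.
    """
    draft = llm(f"Answer the following:\n{prompt}")
    return llm(
        "You are the model's inner voice. Check whether this draft actually "
        "answers the prompt, fix any errors, and return the corrected answer "
        f"only.\n\nPrompt: {prompt}\n\nDraft: {draft}"
    )

# stub model that tags each pass so we can see both passes ran
def stub(p):
    return "CHECKED(" + p.split("Draft: ")[-1] + ")" if "inner voice" in p else "DRAFT"

print(answer_with_inner_voice("What is 2+2?", stub))  # CHECKED(DRAFT)
```

The cost is one extra generation per answer, so with a local 8B model this roughly doubles latency in exchange for a self-correction pass.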
  • The problem, and the solution, is that your setup is stateless

    @buttpub, 22 days ago
    • interesting, will look into

      @AllAboutAI, 22 days ago
    • @AllAboutAI LLMs such as those built on transformer architectures are fundamentally stateless, meaning they do not inherently maintain information about previous inputs across separate input sequences, unlike recurrent neural networks. However, they can emulate state-like behavior through positional and specialized embeddings that incorporate contextual information within a given sequence. While each step processes data in a stateless manner, the autoregressive nature of many LLMs lets them generate text by sequentially predicting the next token based on the accumulated outputs, mimicking a form of statefulness and allowing them to handle extensive and complex sequences effectively, though each processing step inherently lacks a continuous internal state beyond its immediate inputs.

      @buttpub, 22 days ago