FAST Local Live LLM Preview Window - Phi-2 / Mistral 7B Uncensored

May 20, 2024
8,626 views

👊 Become a member and get access to GitHub:
/ allaboutai
🤖 AI Engineer Course:
scrimba.com/?ref=allabtai
Get a FREE 45+ ChatGPT Prompts PDF here:
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
In this video I build a project that gives a real-time live preview of the LLM output, running on local models like uncensored Phi-2 and Mistral 7B. A very fun and simple project (a rough sketch of the core loop follows the chapter list below).
00:00 Local Live LLM Preview Window Intro
00:20 Flowchart
01:32 Python Code
04:01 Live LLM Preview Test 1
04:57 Live LLM Preview Test 2
05:22 Live LLM Preview Test 3
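
The core of the project is two threads: one captures keyboard input while the other streams tokens from the local model and redraws the preview. Below is a minimal sketch of that loop, assuming LM Studio's OpenAI-compatible server is running on its default port 1234; the function and variable names are illustrative, not the ones from the members-only script.

import json
import threading
import time
import requests

URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default server

latest = {"text": "", "dirty": False}   # shared state between the two threads
lock = threading.Lock()

def capture_input():
    """Thread 1: keep reading lines; every Enter updates the shared prompt."""
    while True:
        line = input()
        with lock:
            latest["text"] = line
            latest["dirty"] = True

def stream_completion(prompt):
    """Stream one completion from the local server, printing tokens live."""
    payload = {
        "model": "local-model",  # placeholder; LM Studio serves whatever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    with requests.post(URL, json=payload, stream=True) as r:
        for raw in r.iter_lines():
            if not raw or not raw.startswith(b"data: "):
                continue
            data = raw[len(b"data: "):]
            if data == b"[DONE]":
                break
            delta = json.loads(data)["choices"][0].get("delta", {})
            print(delta.get("content", ""), end="", flush=True)
    print("\n---")

def update_preview():
    """Thread 2 (main): regenerate the preview whenever the prompt changes."""
    while True:
        with lock:
            dirty, prompt = latest["dirty"], latest["text"]
            latest["dirty"] = False
        if dirty and prompt:
            stream_completion(prompt)
        time.sleep(0.2)

threading.Thread(target=capture_input, daemon=True).start()
update_preview()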

Comments
  • I think that should officially replace "Hello World" from now on. 😆👍

    @mikew2883 • 3 months ago
  • 😂 the first email was hilarious!!

    @PriceActionTradesbyJosh • 3 months ago
  • Beautiful

    @milkywaydev593 • 3 months ago
  • Great - thank you :)

    @micbab-vg2mu • 3 months ago
  • 🎯 Key Takeaways for quick navigation:
    00:27 🐍 Developed a local real-time LLM preview using Python and threading.
    01:23 🔄 A parallel function allows capturing and processing keyboard inputs without interruption.
    01:49 📝 The Python code includes an M7B function for local LM Studio, an update-preview function, and a capture-input function.
    03:27 🗣️ System prompts for the Mistral 7B model include examples for a 4chan/Reddit style and a more explicit chatbot style.
    04:09 🖥️ Demonstrates the local real-time preview in action with user input and model responses.
    05:32 🔄 Switches the model to OpenHermes Mistral 7B, adjusts settings, and changes the system prompt for a different tone.
    06:28 📧 Tests the modified model by writing a short, humorous, and explicit email using keyboard input.
    08:31 💻 Encourages support through channel membership for access to scripts and the community GitHub and Discord.

    @BoldStatement • 3 months ago
  • I've had great results with the beagle models! Even the 3 bit version does better than some of the big models for things like evaluating RAG results.

    @MattJonesYT • 3 months ago
  • How does it feel to be a demigod? Thank you for sharing your spells!

    @MetaphoricMinds • 3 months ago
  • hilarious concept

    @thetagang6854 • 1 month ago
  • Great content. I'm LMFAO.

    @Tripp111 • 3 months ago
  • Can we do a real-time LLM in the terminal that works with STT, then Ollama, then TTS? Ollama is the most optimized LLM solution.

    @Edward_ZS • 3 months ago
    • check out twinny

      @goodchoice4410 • 3 months ago
    • Just use the Ollama endpoint. All done.

      @Canna_Science_and_Technology • 3 months ago
    • @Canna_Science_and_Technology With Ollama, don't you need additional headers for the model and pre-prompt?

      @Edward_ZS • 3 months ago
    • @Canna_Science_and_Technology
      curl localhost:11434/api/generate -d '{
        "model": "tinydolphin",
        "prompt": "As a friendly and informative assistant, provide detailed explanations. This is a test. Output:",
        "options": { "stop": ["Instruct:", "Output:"] },
        "raw": true,
        "stream": true
      }'

      @Edward_ZS • 3 months ago
    • @Edward_ZS yes, host.docker.internal:11434/api

      @Canna_Science_and_Technology • 3 months ago
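
For reference, the same call as the curl above, written as a minimal Python sketch: requests is the only dependency, and no extra headers are needed beyond the JSON body, which carries the model name and prompt.

import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # default Ollama endpoint
    json={
        "model": "tinydolphin",
        "prompt": "As a friendly and informative assistant, provide detailed explanations. This is a test. Output:",
        "options": {"stop": ["Instruct:", "Output:"]},
        "raw": True,
        "stream": True,
    },
    stream=True,
)
# Ollama streams newline-delimited JSON; each line carries a "response" chunk.
for line in resp.iter_lines():
    if line:
        print(json.loads(line).get("response", ""), end="", flush=True)
print()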
  • Where do I find Dolphin Phi-2 Q5_K_M?

    @nvleo4what • 2 months ago
  • really love the web troll lol

    @futurizerush • 3 months ago
  • you obviously need a debounce on that input

    @avi7278 • 3 months ago
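
A debounce here would mean waiting until the user pauses typing before firing a new request, so each keystroke doesn't trigger a fresh completion. A minimal sketch of the pattern (the names are illustrative, not taken from the video's script):

import threading
import time

class Debouncer:
    """Run fn only after `delay` seconds have passed with no new calls."""
    def __init__(self, fn, delay=0.5):
        self.fn, self.delay = fn, delay
        self._timer = None

    def __call__(self, *args):
        if self._timer is not None:
            self._timer.cancel()          # a newer keystroke resets the clock
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

if __name__ == "__main__":
    send = Debouncer(lambda text: print("query model with:", text))
    for partial in ("h", "he", "hel", "hello"):
        send(partial)                     # simulate rapid keystrokes
        time.sleep(0.1)
    time.sleep(1.0)                       # only the final call ("hello") fires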
  • I feel like this would have been much better in Streamlit.

    @emmanuelgoldstein3682 • 3 months ago