100% Local Tiny AI Vision Language Model (1.6B) - Very Impressive!!

May 20, 2024
62,573 views

To try everything Brilliant has to offer, free, for a full 30 days, visit: brilliant.org/AllAboutAI.
The first 200 of you will get 20% off Brilliant’s annual premium subscription.
👊 Become a member and get access to GitHub:
/ allaboutai
🤖 AI Engineer Course:
scrimba.com/?ref=allabtai
Get a FREE 45+ ChatGPT Prompts PDF here:
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
Moondream GH:
github.com/vikhyat/moondream
This video was sponsored by Brilliant
00:00 Local Vision Model Intro
00:28 Flowchart
01:27 Brilliant.org
02:36 Text Tests
05:05 Video Tests
10:15 Speech to Speech Tests

Comments
  • To try everything Brilliant has to offer, free, for a full 30 days, visit: brilliant.org/AllAboutAI. The first 200 of you will get 20% off Brilliant’s annual premium subscription.

    @AllAboutAI · 3 months ago
    • is there no delay? or did you get the video?

      @peter486 · 3 months ago
    • The energy in the Brilliant ad lmao! I loved it. The change of tone really makes a distinction between the actual AllAboutAI content and the paid promo. Well done.

      @technolus5742 · 3 months ago
    • Please tell me how you connected the PNG viewer to the AI program?

      @bbrother92 · 1 month ago
  • @All About AI: Great video. Is there a description of the hardware requirements for running the model locally somewhere? Thanks!

    @ItCanAlwaysGetWorse · 3 months ago
  • Really good vid. Thank you!

    @marcinkrupinski · 3 months ago
  • lol you just made a Jarvis. idk how much you grinded but it's absolutely worth it. Subbed to you for more such content

    @3xOGsavage · 3 months ago
  • Very cool, similar level to LLaVA, can't wait to see this in Ollama. Any recommendations for CUDA? I tried with 2GB VRAM/8GB RAM on Linux and it was a no-go; it seemed to segfault as soon as it hit 7.7GB. Clock speed, CPU, and a minimum of 16GB fast RAM would be the considerations for consumer usage, but if something can run on an 8GB laptop that's gonna bring in a lot more users.

    @duracell80 · 3 months ago
  • Lovely video, mate!

    @MacProUser99876 · 3 months ago
  • I'm a bit confused: the video has the names in the frame. Is it these names that get picked up, or would it work without the text tags? I wish you had done a video without name labels.

    @actorjohanmatsfredkarlsson2293 · 3 months ago
  • Excellent use cases, sir. Can you please teach us the hardware requirements and do a tutorial for setting it up?

    @test12382 · 3 months ago
  • Loving all the speech integration, but I was struggling to get it set up from the initial video. Would be great to see a more detailed video on getting it set up since it's a dependency of the following few videos.

    @NightSpyderTech · 3 months ago
    • did you use ElevenLabs for the response voice or was it a local clone?

      @ClipsofCoolStuff · 3 months ago
    • it's the local OpenVoice one in this vid

      @AllAboutAI · 3 months ago
    • noted! thnx for tuning in :)

      @AllAboutAI · 3 months ago
    • @AllAboutAI Thank you. I would also love to see a fully integrated video for the whole workflow: this is so useful.

      @ccuny1 · 3 months ago
    • For Python I highly recommend virtual environments; maybe you could do something with WSL on Windows. It's very easy on Linux to script services to run in a venv, watch folders for changes, and add files to a queue. I suspect that's where Microsoft is going for file metadata and enhancing search with this.

      @duracell80 · 3 months ago
  • hi. I understand that you have created a repo for this video. Can you share the link to the repo to follow along with the code that you show in the video?

    @mehmetbakideniz · 18 days ago
  • Damn, this gotta be the best Brilliant ad I have ever seen in my life

    @pablochacon7641 · 3 months ago
    • thnx mate:p

      @AllAboutAI · 3 months ago
  • How can I test this locally?

    @pierruno · 3 months ago
  • I love the ad's theme music. Killer

    @befikerbiresaw9788 · 25 days ago
  • Do you offer the scripts that you created to run this? Are they behind a payment? How much is it? I have visited your blog, but the Discord is not accessible and the web page has only general info, with nowhere to "support" you and get access to the scripts. EDIT: Just support him by becoming a member on YouTube. The rest will be there.

    @hikaroto2791 · 3 months ago
  • concrete use case example... thanks

    @mickelodiansurname9578 · 3 months ago
  • Hi man, I've been following you since 2023 and hope you're doing well. I've got a question. Could you please create a guide on how to make an AI with memory using Node.js (NestJS), PostgreSQL (Prisma), Pinecone (vector base), and the OpenAI API? I literally bought a course in which someone explains it; the course stated "this is so simple to do that you don't need to know programming", but I ended up with a course where everything was shown without any explanation of how to configure and install Node.js (NestJS), PostgreSQL (Prisma), and Pinecone. Your guides are so simple that a non-native speaker like me can understand everything. I ask for a lot, I know! Anyway, kind regards and thank you for everything.

    @kamilwysocki8850 · 3 months ago
  • This is really awesome! In your code prompt, you spell "description" wrong - with a B instead of P.

    @johnflux1 · 3 months ago
  • can you drop a colab in the description - also, great example, love to see this go viral

    @thewatersavior · 3 months ago
  • Excellent

    @theoriginalrecycler · 3 months ago
  • Very cool!

    @gabscar1 · 3 months ago
    • thnx :)

      @AllAboutAI · 3 months ago
  • Are you able to release this on your GitHub?

    @scottt1234 · 2 months ago
  • Please, can you tell me if there is AI computer vision/recognition software that can search through my images folder and find images? Example: search for cat images - 56 images containing a cat. Search dog: 47 images containing a dog. Like Google Image Search for my local folder? I CANNOT FIND WORKING SOFTWARE. Everything is "train your model".

    @id104335409 · 2 months ago
  • when u say local, with 1.6B parameters, what would be the size that you need on your local laptop, along with the memory/GPU etc?

    @nhtna4706 · 3 months ago
    • For Mistral 7B it is 7*4=28 GB at 32-bit and 14 GB at half precision. For Moondream it is 1.6*4=6.4 GB, and 3.2 GB at half precision. Add these together and you have the memory requirements. You could also split it up, let's say run Moondream on your GPU and Mistral on your CPU. Or you could shrink them down to 4-bit or even lower, but the models will perform worse the lower you go.

      @wurstelei1356 · 3 months ago
    • @@wurstelei1356 cool, I am assuming you are talking about the memory, correct? Is there any sizing doc link that talks about the CPU, GPU power, processor speed, etc., along with the size of SSD etc., for these models to run locally for pre-training purposes?

      @nhtna4706 · 3 months ago
    • thnx :) @wurstelei1356

      @AllAboutAI · 3 months ago
    • the majority of models that I see nowadays are 16-bit float, so they are around 2.5GB to 3GB with 1.3 to 1.5 billion parameters, so it would not be very different from them. Also, you can try looking it up (if open-sourced) on Hugging Face.

      @jawadmansoor6064 · 3 months ago
    • @nhtna4706, you seem a bit old-school. I would suggest asking these questions to ChatGPT. It can even give you step-by-step instructions, and if you have the Plus version, as you should, it'll research the internet to get the most recent information and walk you through the entire process step by step.

      @brianlink391 · 3 months ago
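The sizing arithmetic discussed in this thread (parameter count × bytes per parameter) can be sketched as a tiny Python helper. It reproduces the figures quoted above; note it counts weights only and ignores activation/KV-cache overhead, so treat the numbers as lower bounds:

```python
def model_memory_gb(params_billions: float, bits: int = 16) -> float:
    """Rough weight-memory estimate: parameters x bytes per parameter.

    Billions of parameters times bytes per parameter gives gigabytes.
    Ignores activations and KV cache, so this is a lower bound.
    """
    bytes_per_param = bits / 8
    return params_billions * bytes_per_param

# Figures quoted in the thread above:
print(model_memory_gb(7, bits=32))    # Mistral 7B, fp32 -> 28.0
print(model_memory_gb(7, bits=16))    # Mistral 7B, fp16 -> 14.0
print(model_memory_gb(1.6, bits=32))  # Moondream, fp32  -> 6.4
print(model_memory_gb(1.6, bits=16))  # Moondream, fp16  -> 3.2
print(model_memory_gb(1.6, bits=4))   # Moondream, 4-bit -> 0.8
```

As the reply notes, quantizing to 4-bit shrinks the footprint by 4× versus fp16 at some quality cost.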
  • I wanna use the description thingy and labeler to help me learn Chinese as a language immersion tool, along with DeepL to search for videos/websites in Chinese while typing in English, transparent windows with WINDOWS TOP + Anki SRS, etc...

    @aoeu256 · 2 months ago
  • Can anyone DM me the requirements for a laptop to run Mixtral 8x7B locally?

    @ajayjasperj · 3 months ago
  • One of the major untapped uses of GPT-4 Vision is using it for OCR. It does far better than Tesseract, which always outputs very dirty results that have to be cleaned up. You can say "Write all of the text in this image. Perfectly preserve all of the formatting including bolding, italics and lists" and GPT-4 Vision does as good a job as a human. This is very useful when dealing with books that have strange layouts of the text; GPT-4 Vision can figure out how to correctly convert strange text layouts which Tesseract always fails on. I would really like to see how these new vision models can be used for OCR.

    @MattJonesYT · 3 months ago
    • Yeah, but the MAIN use case for OCR would be scanning handwritten journals, and nobody wants to send their intimate thoughts to OpenAI... I'm so eagerly waiting for a GPT-4-level open source LLM that we can run locally that can finally read my shitty handwriting...

      @mbrochh82 · 3 months ago
    • It's not; several lawyers and journalists have already ruined their careers by blindly relying on LLM text output. I think you haven't seen the "AI Explained" tests; even GPT-4 Vision's hallucinations and errors are so high that it just refuses to view one text number in a stat table. There's even a research paper showing that using LLMs in business is not practical: you must hire a human editor to check everything (references), so basically you are spending more time than just writing the text yourself from the start.

      @fontende · 3 months ago
    • And that's not counting the "trickery" topic in models, which was inserted there by corporations so that no other New York Times will find out by directly asking the model about its dataset sources (if you instruct an AI to tell half-truths to protect yourself in court, it will do that for all results). There are no legally clean dataset models anywhere, and there have been no audits anywhere (of the kind people did with open encryption tools, for example).

      @fontende · 3 months ago
    • @@fontende Humans have an error rate too. If you have humans transcribe the text they will make mistakes. With AI it's very easy to have it do several attempts and iterations and see where it converges and that result will be much better than the first attempt you get from a human which will be thousands of times more expensive and take much longer.

      @MattJonesYT · 3 months ago
    • @@MattJonesYT of course, that's how editing was created as a profession; for centuries all writers have given the first manuscript to an editor before printing. Fact-checking is a separate profession, very important for newspapers. In the case of the ruined American lawyers' careers, with useless legal work made by ChatGPT, two additional human staff were needed: an editor checking text structure and typos (important for official court documents) and a fact checker (ChatGPT made up a dozen nonsense court cases with references to them; you must manually check each one, and even if it existed, read and process it - a lot of work). I don't see a practical use of chatbots in any business that isn't selling chatbots; they're incredible leakers of data. Robots are a different story.

      @fontende · 3 months ago
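The OCR prompt quoted at the top of this thread can be wired up as a short sketch, assuming OpenAI's chat-completions vision message schema (a `content` list mixing `text` and `image_url` parts with a base64 data URL). The helper name is illustrative and the network call itself is left out:

```python
import base64

# The exact prompt quoted in the comment above.
OCR_PROMPT = (
    "Write all of the text in this image. Perfectly preserve all of the "
    "formatting including bolding, italics and lists."
)

def build_ocr_messages(image_path: str) -> list:
    """Build a chat-completions 'messages' payload pairing the OCR prompt
    with the image encoded as a base64 data URL."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": OCR_PROMPT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }]

# The actual call (requires an API key and the openai package) would be roughly:
# client.chat.completions.create(model="gpt-4-vision-preview",
#                                messages=build_ocr_messages("page.png"))
```

A local model like Moondream could be swapped in behind the same prompt, which is what the commenters are hoping for.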
  • I've tried to replicate your system and am having problems. Could you maybe make a video on starting from scratch? I'm on Linux. Also look at Ollama as a better open-source way to run LLMs.

    @chrisBruner · 3 months ago
    • hey! yeah I do have some vids on my member section on this, might do a main channel vid someday too

      @AllAboutAI · 3 months ago
  • Is it open to the public or is it private now?

    @PrinceAnkrah · 3 months ago
  • scaling down deep learning is the way

    @Graverman · 3 months ago
  • Can you make a comparison video between this model and LLaVA?

    @borisrusev9474 · 3 months ago
    • this is much better than LLaVA

      @DevPythonUnity · 3 months ago
    • I'd switch to this in Ollama when it becomes available; much more seamless dev-wise to call a one-liner. I would hope this inference is faster on CPU compared to LLaVA?

      @duracell80 · 3 months ago
  • Wow!!

    @altered.thought · 3 months ago
  • Can it answer questions in the image??

    @vaibhavmishra1100 · 3 months ago
  • loved your work! can you share the source code?

    @khaledalshammari857 · 3 months ago
  • hmm.. will it run on an ESP32-CAM?

    @thewatersavior · 3 months ago
  • Do you have a GitHub where we can look at the code more closely, please?

    @carldraper616 · 3 months ago
    • I see the requirement for membership now, I'll sign up :)

      @carldraper616 · 3 months ago
    • I just signed up. Where is the github repository?

      @scottt1234 · 2 months ago
  • I really want to give this video a second thumbs-up XD

    @wurstelei1356 · 3 months ago
  • Brilliant

    @lancemarchetti8673 · 3 months ago
    • thnx for tuning in :)

      @AllAboutAI · 3 months ago
  • Sir, can we stream our webcam to it and ask what's in my hand?

    @lokeshart3340 · 2 months ago
  • The most important question: How did you get the Matrix code running on the TV?

    @sh00ting5tar · 3 months ago
  • Woooowww 🤯👏👏👏😁

    @JOHN.Z999 · 3 months ago
    • thnx for tuning in :)

      @AllAboutAI · 3 months ago
  • 8:20 Can't you debug the AI to find out if Bradley Cooper really was identified as "Casanova", and why? Rhetorical question, just to state the obvious.

    @cutterboard4144 · 3 months ago
  • Are you Norwegian?

    @picklenickil · 3 months ago
    • I was just about to ask the same thing :p

      @Zymosisproductions · 3 months ago
  • Wow. Does this also work on Mac silicon?

    @maxziebell4013 · 3 months ago
    • New Macs are known to be pretty good at AI, though I don't have one; a GPU is still better, but more expensive. Matt Berman has some nice videos about his Mac and local AI.

      @wurstelei1356 · 3 months ago
    • Macs might run bigger models. These smaller ones bring in people with realistic lowest-common-denominator hardware. For example, I have LLaVA running on a mini PC from 2018; inference is terribly slow, but there are non-interactive use cases. On Macs you're going to be able to do much more than these small models.

      @duracell80 · 3 months ago
  • 👍

    @frankdearr2772 · 3 months ago
  • "yeah" 31 times, so yeah, haha. My son says yeah a lot as well, so yeah!

    @FloodGold · 3 months ago
  • Soon, Captcha will have to start asking questions that only humans would get WRONG...

    @ChrisM-tn3hx · 3 months ago
  • It would be nice to know how to fine-tune the model for other languages

    @Piotr_Sikora · 3 months ago
  • As you're testing these models and creating these models, little do we know that we are models ourselves being tested and created.

    @brianlink391 · 3 months ago
  • You are Ironman

    @hgeldenhuys · 3 months ago
  • Cats are actually prey to dogs; there's not much fighting happening here.

    @user-ik3jh7kr5n · 3 months ago
  • the vision model is quite good, but it has problems with describing porn pictures.

    @DevPythonUnity · 3 months ago
  • Excuse me sir, you have a well-maintained channel, so why did you steal my channel logo?? It's a shame...

    @aiglobalX · 3 months ago
    • Your logo looks nothing like this channel logo. Are you feeling OK?

      @gaijinshacho · 3 months ago
    • They're literally identical? I think you need to see a doctor 😂 @@gaijinshacho

      @elijahpavich1095 · 3 months ago
    • what haha

      @AllAboutAI · 3 months ago
  • another non-open-source model 😢

    @sherpya · 3 months ago
    • Is it possible they forgot to add a license? I couldn't find any mention in the README or other files.

      @matten_zero · 3 months ago
    • @@matten_zero it's based on Microsoft's Phi, which has a restrictive license; the problem is you can't use it commercially even if you're willing to pay

      @sherpya · 3 months ago
    • I guess it is open source, but not for commercial use

      @Greenthum6 · 3 months ago
    • @@Greenthum6 so it's not open source, it's just source available

      @sherpya · 3 months ago
    • @@Greenthum6 the problem is these local models are not intended for end users, but for developers who create chatbots or apps for end users. Since inference is costly, a developer cannot make chatbots or apps for free, or in any case without commercial usage like running ads. So unfortunately these models are only for research, learning, or educational videos. Nothing wrong with this, and this one is interesting and unique, but still, talking about the base Phi model from Microsoft: do we really need a bunch of non-open-source, non-commercially-available models? I also often see them presented as if they were open source (not obviously referring to this video).

      @sherpya · 3 months ago
  • so you're running it on a "server"... why don't you start by telling us the required hardware minimum? Not watching any further, tbh.

    @microfx · 3 months ago
    • I think it's impossible to run such a setup locally for now; speech recognition alone will add a serious delay. And I'm in the market for a new GPU; the max you can get is a 4080 with 16GB memory for $1k, and it's not enough for serious LLMs.

      @fontende · 3 months ago
    • yeah, whatever. I expect a video to have this information right at the beginning, or in the description. @@fontende

      @microfx · 3 months ago
    • @@fontende ?? The 4090 and 3090 both have 24 gigs. Lol

      @NimVim · 3 months ago
    • @@NimVim and what? Be glad if you can even buy one; the 3090 is 3 years old and toasted from the mining boom - good luck with that. LaL 🤏

      @fontende · 3 months ago
    • 🧠 Think 👨🏼‍💻 Shadow PC ⚡ in 🤩 VR 😉 ❇️

      @LeftBoot · 3 months ago
  • Why did you skip Ayo Edebiri… because she is black?

    @olagunjujeremiah4003 · 3 months ago
  • I'd love to see this working with a camera and recognizing people (MemGPT?)

    @EpicFlow · 3 months ago
  • wtf, it's pretty quick. amazing

    @geomorillo · 3 months ago