Today we learn how we can run our own ChatGPT-like web interface using Ollama WebUI.
Ollama: github.com/ollama/ollama
Ollama WebUI: github.com/ollama-webui/ollam...
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
📚 Programming Books & Merch 📚
🐍 The Python Bible Book: www.neuralnine.com/books/
💻 The Algorithm Bible Book: www.neuralnine.com/books/
👕 Programming Merch: www.neuralnine.com/shop
💼 Services 💼
💻 Freelancing & Tutoring: www.neuralnine.com/services
🌐 Social Media & Contact 🌐
📱 Website: www.neuralnine.com/
📷 Instagram: / neuralnine
🐦 Twitter: / neuralnine
🤵 LinkedIn: / neuralnine
📁 GitHub: github.com/NeuralNine
🎙 Discord: / discord
Thank you for this, it's fantastic. Would you be able to demonstrate installing a local LLM to query your own documents? I have come across a number of tutorials for this but had no success running them.
Hi, can you share which video card you are using for this demo ?
Hey, so I'm trying to write an Alexa task that will provide a conversational UI with an offline LLM (my use case is crisis relief workers in areas with limited or downed connectivity). Would the VoiceGPT extension work with Ollama WebUI? Also, is there a risk rating for wrong results from the lighter models of 2B parameters or less?
Awesome content as always, Mr. Maximilian
Very useful. I want to set up an LLM like this on my own HP G8 server and use it in another Python project of mine to generate descriptions automatically. Is there any way to connect Ollama to my Python project (I need to use the Ollama API)?
Hi, can we deploy this model with the UI on any platform, like GitHub or something else?
Love it! Do you mind sharing the hardware list of your desktop/laptop running llama2 ? The speed looks great in your demo. Thanks!
32 GB of RAM, AMD Ryzen 7 5800, Nvidia GeForce RTX 3060 Ti, SSD storage
@@NeuralNine we've got similar specs, just that yours is a desktop and mine is a laptop 😆😅😫☠
@@NeuralNine I have a laptop, and the specs are: 8 GB of RAM, Intel i5-10210, no GPU (🙂), 256 GB SSD
I am new to Docker, containers, and Linux commands. Please help me switch Ollama from using the CPU (I had an AMD GPU) to my Nvidia GPU, which I switched to after installing Ollama and the models... at the moment it still uses the CPU only. Any ideas?
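If Ollama is running in Docker, the container has to be created with GPU access; a container started before the Nvidia card was set up will keep using the CPU. A rough sketch, assuming the Ollama Docker image and the NVIDIA Container Toolkit installed on the host (container and volume names follow the Ollama README, but check your own setup):

```shell
# After installing the NVIDIA Container Toolkit (see NVIDIA's docs),
# restart Docker so it picks up the new runtime
sudo systemctl restart docker

# Remove the old CPU-only container; pulled models live in the
# "ollama" volume, not in the container, so they are kept
docker rm -f ollama

# Re-create the container with GPU access enabled via --gpus=all
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Check the startup logs for a line mentioning a detected CUDA/GPU device
docker logs ollama
```

If Ollama is installed natively instead of in Docker, reinstalling it after the Nvidia drivers and CUDA are in place is usually enough for it to pick up the GPU.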
Hey there, I'm asking if I can remove the register button, because mine is a private AI and I don't want other people using my PC. Can you help me?
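The WebUI (the project has since been renamed Open WebUI) supports an `ENABLE_SIGNUP` environment variable for exactly this. A sketch assuming the Docker setup from the video (port mapping, volume, and container name are just examples):

```shell
# Start the WebUI with new registrations disabled
docker run -d -p 3000:8080 \
  -e ENABLE_SIGNUP=false \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The first account ever created becomes the admin, so register yourself once before turning signup off (or re-create the container with the flag afterwards; accounts persist in the volume).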
Thanks for sharing 👍 ...are the installation steps the same for setting up on cloud instances? 👨🏽💻
Does this work with the Azure OpenAI API?
Please make a video on how we can fine-tune an open-source model.
Cool, thx :)
Hello, can I make this control my network and ask it questions about it to get the information locally?
Good Thinking
Awesome. Thank You
Mine looks different - it installed, but the icon is OI, not the Llama one, and I can't load Llama LLMs. Hmm...
Where is the link to the docker website?! How am I supposed to do anything if the link isn't even there?!
How do I train this local AI on my own dataset???
Thanks!
Is Ollama better than Jan? It seems like no one is talking about it
Can you do a video on how to install it on one computer and access it via Wi-Fi?
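If the WebUI runs in Docker on one machine, other devices on the same Wi-Fi network can usually reach it through that machine's LAN IP, as long as the port is published and the firewall allows it. A sketch (the IP address, port mapping, and container name are examples; the image name follows the renamed Open WebUI project):

```shell
# On the machine running the WebUI: find its LAN address
ip addr show | grep "inet "
# e.g. 192.168.1.42 (example address - yours will differ)

# Publish the WebUI port; Docker binds it on all interfaces by default
docker run -d -p 3000:8080 -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main

# From any other device on the same network, open in a browser:
#   http://192.168.1.42:3000
```

If it isn't reachable, the host firewall (e.g. Windows Defender Firewall or `ufw`) blocking the port is the usual culprit.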
Maximillian, cool name.
Just not mine :D
@@NeuralNine - That’s hallucination for you
@@NeuralNine classic maximilian!
If I run it on WSL, can I access it on Windows?
yes
Yes, it works on WSL, but it's better to run it via Docker on Windows - better performance.
How good is Ollama compared to GPT-4?
Ollama itself is just the app / platform. It depends on the model you use. Check out the LMSYS leaderboard for a comparison
Thank you@@NeuralNine
Windows version is out now.
good
How do I use it with Python?
What do you mean? Do you want to call Ollama models from Python?
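If calling Ollama models from Python is the goal: Ollama serves a REST API on port 11434 by default, so the standard library is enough. A minimal sketch against the documented `/api/generate` endpoint (the model name and prompt are just examples - use whatever model you have pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="llama2"):
    # stream=False asks the server for one complete JSON response
    # instead of a stream of partial tokens
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama2"):
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # the generated text is in the "response" field of the reply
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with llama2 pulled):
# print(ask_ollama("Write a one-line product description for a coffee mug."))
```

There is also an official `ollama` pip package that wraps the same API, if you'd rather not build the requests by hand.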
It doesn't work! When I install Ollama and then use Docker with Open WebUI, everything seems fine: Ollama in my terminal works perfectly, and the Docker container with Open WebUI runs. But now tell me what we have to do to use llama2:latest - it isn't usable in Open WebUI! I can't see llama2 or any other model! You don't cover this step in the video, and that sucks!
Terrible tutorial. I'm not interested in your life or in you showing off WHAT YOU ALREADY DID. SHOW HOW TO INSTALL IT from scratch, or don't even bother recording your screen and camera.
Guys, why do you show us such things? What's the point of using this software locally on a PC when there are professional services on the market such as GPT or Gemini? Who in their right mind would install this on their computer for such purposes? Show us something that MAKES SENSE. For example, how to build a knowledge base using this model, how to search a local database, how to create a search engine for content in documents, and so on. I would have to lose my mind to replace GPT with Ollama just to use it as a chatbot.
Because organizations that have confidential documents can't simply plug them into ChatGPT or Gemini. This is a solution if you want an internal LLM that's private and can interact with your own private data.
Are you 12? There are tons of reasons for running locally. Start with security.
I am struggling to create a knowledge base using these models. Any good guides?
Are u dumb? 😂😂😂
@@graphguy they're probably trolling...