AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"

May 7, 2024
353,941 views

Andrew Ng, founder of Google Brain and Coursera, discusses the power of agents and how to use them.
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
HuggingGPT - • NEW HuggingGPT 🤗 - One...
ChatDev - • How To Install ChatDev...
Andrew Ng's Talk - • What's next for AI age...
Chapters:
0:00 - Andrew Ng Intro
1:09 - Sequoia
1:59 - Agents Talk
Disclosure:
I'm an investor in CrewAI

Comments
  • I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.

    @e-vd · 1 month ago
  • Exponentially self-improving agents. Love how incremental improvements over a period of years are so over.

    @Chuck_Hooks · 1 month ago
    • I'm expecting DeepMind to just pop off at any point with an AI that plays the game of making an AI.

      @andrewferguson6901 · 1 month ago
    • When did the information age end and the AI age begin, haha. I still think we need to figure out how to make self-replicating robots (that replicate themselves at half size each generation) by building them out of Lego-like blocks, and then have the blocks be cast from a mold that the robot itself makes. Once hardware (robots) improves, the capabilities of software can improve.

      @aoeu256 · 1 month ago
    • @@aoeu256 Oh come on now, you know how that'll end. Admit it, you've watched Futurama :D

      @wrOngplan3t · 1 month ago
    • Not sure if. I love that

      @jonyfrany1319 · 1 month ago
    • It may refine the quality of results, but it won't teach itself anything new or have any "ah hah!" moments like a human thinker. There will be an upper limit to any exponential growth due to eventual lack of entropy (there's a limit to how many ways a set of information can be organized). Spam in a can is a homogeneous mixture of meat scraps left over from slaughtering pigs. It's the ground-up form of the parts that humans don't want to see in a butcher's meat display. LLMs produce the spam from the pork chops of human creativity. These agents will produce a better-looking can with better marketing speak on the label. It might have a nicer color and smell to it. But it's still spam that will never be displayed next to real cuts of meat, despite how much the marketers want you to think it's as good as or superior to the real thing.

      @paulsaulpaul · 1 month ago
  • LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" meet. A user asks them to solve a problem, and the "Agents" all start collaborating with one another to generate a solution. So awesome!

    @stray2748 · 1 month ago
    • Is self-dialogue the same as Q*?

      @ihbrzmkqushzavojtr72mw5pqf6 · 1 month ago
    • @ihbrzmkqushzavojtr72mw5pqf6 I think it's the linchpin they discovered to be a catalyst for AGI, albeit with self-dialogue + multimodality being trained from the ground up in Q* (something ChatGPT did not have in its training). Transformers were built on mimicking the human neuron (Rosenblatt's perceptron); okay, now following human nature, let's train it from the ground up with multimodal data and self-dialogue (like humans possess).

      @stray2748 · 1 month ago
    • @ihbrzmkqushzavojtr72mw5pqf6 Not exactly; Q* is pre-thought, before inference is complete. The difference is that with planning, if someone asks you a question like "how many words are in your response?", you can think about it and come to a conclusion, like saying "One". But if you don't have pre-thought, you're doing simple word prediction every time, and the only way to get that outcome is if something akin to key/value pairs passed into the LLM at some point gives it the idea to try that in one shot. Even if it has a chance to iterate, it'll probably never reach that response without forethought.

      @Korodarn · 1 month ago
    • Give it a couple more AI models, like world simulators, and a little bit of time... and then something similar to what we refer to as consciousness may emerge from all those interactions.

      @enriquea.fonolla4495 · 27 days ago
    • They’re coming for you Neo.

      @defaultHandle1110 · 21 days ago
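The "LLM + self-dialogue via reflection = Agent" loop described in this thread can be sketched in a few lines of Python. This is a hypothetical illustration: `worker` and `critic` stand in for real LLM calls (e.g. a chat-completion request to an API or a local model) and are stubbed here so the control flow runs on its own.

```python
# Sketch of "LLM + self-dialogue via reflection = Agent".
# `worker` and `critic` are placeholders for real LLM calls;
# they are stubbed so the loop itself is runnable.

def worker(task, feedback=None):
    # A real worker would send `task` (plus any critic feedback)
    # to a chat model and return its completion.
    if feedback is None:
        return f"{task}: first draft"
    return f"{task}: revised draft"

def critic(draft):
    # A real critic would be a second prompt (or a second model)
    # asked to review the draft.
    return "APPROVED" if "revised" in draft else "Needs another pass."

def reflect(task, max_rounds=3):
    # Draft, critique, revise -- until approved or out of rounds.
    draft = worker(task)
    for _ in range(max_rounds):
        verdict = critic(draft)
        if verdict == "APPROVED":
            break
        draft = worker(task, feedback=verdict)
    return draft
```

Multiple such agents collaborating is just this loop with the critic role handed to a different model or persona.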
  • Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.

    @8691669 · 1 month ago
  • The old saying comes to mind: think twice, say once. Perfectly applicable to AI, where the LLM checks its own answer before outputting it. Another excellent video.

    @janchiskitchen2720 · 1 month ago
  • Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!

    @richardgordon · 1 month ago
  • This is one of the best vids you've made. Good commentary along with the presentation!

    @carlkim2577 · 1 month ago
  • Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.

    @notclagnew · 15 days ago
  • I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.

    @BTFranklin · 1 month ago
    • Matthew is really good, isn't he? I want to know how he's able to keep up with all the news while also producing videos so regularly.

      @samhiatt · 1 month ago
  • Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍

    @JohnSmith762A11B · 1 month ago
  • Very good Matthew! Thanks for sharing. I built my simple agent and I see it improving a lot after a few interactions.

    @jonatasdp · 1 month ago
  • You upload at the least expected random times of the day, and I'm all for it.

    @AINEET · 1 month ago
    • LOL. Keeping you on your toes!

      @matthew_berman · 1 month ago
    • Haha 😂

      @holdthetruthhostage · 1 month ago
  • Great point about combining Groq's inference speed with agents!

    @youri655 · 1 month ago
  • Thank you so much for all your videos. You are gold. Please never stop!

    @user-en6ot9ju7f · 1 month ago
  • All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂

    @ronald2327 · 1 month ago
  • I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models if for no other reason than the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.

    @virtualalias · 1 month ago
    • Ahem, Skynet. :D But I agree.

      @kliersheed · 1 month ago
  • I appreciate the way you explained every step, very informative. Great video.

    @narindermahil6670 · 1 month ago
  • Love your breakdowns! Adding context and background info into the mix. Very useful.

    @animalrave7167 · 14 days ago
  • Wow, I’ve been a big believer in agentic workflows since I saw your first video on chatdev and later on autogen. It’s really validating to hear someone of this stature thinking along the same lines

    @timh8490 · 1 month ago
  • Matthew, everything that Prof. Ng referenced, you have already covered and analyzed. Much credit to you.

    @saadatkhan9583 · 1 month ago
  • You are probably my number one source for bleeding-edge info and explanations on AI and AI agents. Keep it up, and great job, Matthew! You were one of the key influences in my learning AI, and basically learning Python for that matter. Now that anyone can use AI as a personal tutor for free, anyone can learn anything, way better than being in a classroom, because having an AI tutor is way better than a human one.

    @NateMina · 1 month ago
  • I had been thinking about agents for months without knowing what I was thinking of, until I found videos like the CrewAI and swarm-agent ones, and my mind is blown. I am all in for this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.

    @JacquesvanWyk · 18 days ago
  • Matthew, your videos are really informative. Many thanks for sharing such knowledge and updates. This latest one was exceptionally good.

    @NasrinHashemian · 21 days ago
  • Nice share with valuable commentary throughout, you've got yourself a new subscriber!

    @weishenmejames · 14 days ago
  • Super informative, thank you so much! ❤

    @mayagayam · 1 month ago
  • Great video and valuable clarifications of AN's insights. It would also be great if you could make a video that captures all these concepts and notions using CrewAI and/or Autogen. Thank you, Matt!

    @AC-go1tp · 1 month ago
  • Thank you. Great analysis

    @d.d.z. · 1 month ago
  • I'm glad we all seem to be on the same page, but I think it would help to use a different word when thinking about the implementation of "Agents". What was a breakthrough for me was replacing the word "Agent" with "frame of mind", or something along those lines, when prompting an "Agent" for a task in a series of steps where the "frame of mind" changes at each step until the task is complete. I'm not trying to say anything different from what has been said thus far, but only to help us humans see that this is how we think about a task. As humans we change our "frame of mind" so fast we often don't realize we are doing it when working on a task. For an LLM, your "frame of mind" is a new prompt on the same or a different LLM. Thanks Matthew Berman, you get all the credit for getting me into this LLM rabbit hole. I'm also working on an LLM project I hope to share soon. 😎🤯😅

    @DragonWolf2099 · 1 month ago
    • Agens = actor = a compartmented entity doing something. I think the word fits perfectly: it's like transistors are simulating our neurons, and the agent is simulating the individual compartments in our brain. A "frame of mind" would be a fitting expression for the supervising AI keeping the agents in check and organizing them to solve the perceived problem; it's like the "me", as in consciousness, ruling the processes in the brain. A frame always has to contain something, and IMO it's hard to say what an agent contains, as it's already really specialized and works WITHIN a frame (rather than being one). Even if you speak of frames as in relation systems, the agent is WITHIN one, not one itself. Just my thoughts on the terms ^^

      @kliersheed · 1 month ago
  • Thank you, just finished. It's great that you explained things for those who may not be as techie as Ng expected.

    @danshd.9316 · 24 days ago
  • The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero method, it's clear that LLMs so far only do the equivalent of AlphaZero's evaluation function. They don't do the equivalent of the Monte Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the NN being used to guide that exploration.

    @luciengrondin5802 · 1 month ago
    • What gets interesting about agentic is: what if certain agents have access to different 'experiences', meaning their context window starts with 'hidden' priorities, objectives, and examples of what the final state should look like? Since context windows are limited right now, this is an exciting area. The other part of agentic vs. iterative is that, since a model isn't really 'thinking', it needs some form of stimulus to disrupt the previous answer, so you either have to use self-reflection or an external critic. If the external critic uses a different model (fine-tune or LoRA) and is given a different objective, you should be able to 'stimulate' the model into giving radically different end products.

      @joelashworth7463 · 1 month ago
  • Excellent video, my friend. You are my favorite channel; keep up the good work! ❤

    @pengouin · 1 month ago
  • Your input is quite valuable. Thanks!

    @rafaelvesga860 · 25 days ago
  • OK, I can see it's right, having done a lot of "by hand" iterations. I mean, I am not using agents yet, but think about working with GPT: you ask something, you test, you adjust, you give it back, and the result is better. And in this process, if you ask questions on the same topic but from a different aspect, it becomes better. So an agent is basically doing this by itself! Great video! Thank you :D

    @federico-bi2w · 1 month ago
  • I saw the original video, but the commentary adds a lot. thx

    @lLvupKitchen · 1 month ago
  • I totally agree. I was trying to make this case for years, but I guess technology has now evolved to the point where we can see this as a reality.

    @existentialquest1509 · 29 days ago
  • Thank you Matt. I appreciate your explanations, insights and exploration. This is a journey.

    @CM-zl2jw · 1 month ago
  • I loved this video. Your selection was great and your comments were right to the point and very useful. I like that you test things yourself and provide links to the topics that are discussed previously.

    @TestMyHomeChannel · 1 month ago
  • Awesome video. Yeah, this is why I voted for speed in the poll you did; this is what I was talking about.

    @jakeparker918 · 1 month ago
  • Great video! Thank you for the insights 🔥

    @michaelmcwhirter · 1 month ago
  • Awesome thanks for the great perspectives!

    @seanhynes9516 · 8 days ago
  • Really cutting edge! Thanks.

    @ManolisPolychronides · 28 days ago
  • Good stuff, really like the commentary side by side.

    @DougFinke · 1 month ago
  • Imagine an extensive neural network, except instead of weights/biases in the nodes, each node is an agent.

    @jets115 · 1 month ago
    • Like the internet, where every computer is a node agent/expert?

      @Tayo39 · 1 month ago
    • @NewAccount_WhoDis Don't think of it as a literal NN... more like expanding the original prompt. If you can ask one researcher, imagine asking 100, with small variations in the prompts to each! :)

      @jets115 · 1 month ago
  • I convinced my chat AI that our new, mutually conceived idea of "think before you speak" is extremely helpful for both of us.

    @marshallodom1388 · 1 month ago
  • Very interesting. Regarding coding and workflow: having worked with coders with Asperger's, in order to communicate we moved to a very simple process of explaining each task to the coder through subject-verb, subject-verb, and so on. It smoothly flattened communication, and thus tasks, into coding workflows.

    @davedave2941 · 1 month ago
  • I did in fact like Andrew's talk, but I liked it even more with your moderation, which was extremely helpful and made a big difference in my understanding of the talk. Just subbed, thank you very much! Off to take a look at your HuggingGPT video 🏃‍♂

    @icns01 · 28 days ago
  • Decentralized, highly specialized agents running on lower-parameter-count models (7B-70B) working together to accomplish tasks is where I think the opportunity lies. I was mining ETH back when it was PoW with my gaming rig to earn some money on the side. I did the calculations once, and the entire ETH compute available was a couple hundred exaflops. With more and more devices being manufactured for AI computation (phones, GPUs, etc.), the available compute will only increase.

    @zaurenstoates7306 · 1 month ago
  • Me realizing I figured this out over a month ago, and was thinking of creating a virtual environment where multiple agents, each fine-tuned for a specific use case, work together. So my brain is as intelligent as this person's.

    @dhruvbaliyan6470 · 1 month ago
  • As long as we're relying on backpropagation to fit a network to pre-designated inputs/outputs, we're not going to have the sort of AI that will change the world overnight. The future of machine intelligence is definitely agentic, but we're not going to have robotic agents cleaning our house, cooking our food, fixing our house, constructing buildings, etc... unless we have an online learning algorithm that can run on portable hardware. Backpropagation, gradient descent, automatic differentiation, and the like, isn't how we're going to get there. We need a more brain-like algorithm. Throwing gobs and gobs of compute at backprop training progressively larger networks isn't how we're going to get where we're going. It's like everyone saw that backprop can do some cool stuff and then totally forgot about brains being the only example of what we're actually trying to achieve. They're totally ignoring that brains abstract and learn without any backpropagation. Backprop is the expensive brute force way to make a computer "learn". I feel like we're living in a Wright Brothers age right now where everyone believes that the internal combustion powered vehicle is the only way humans will ever move around the earth, except it's backpropagation that everyone has resigned to being the only way we'll ever make computers learn, when there's no living sentient creatures that even rely on backpropagation to exhibit vastly more complex behaviors than what we can manage with it. A honeybee only has one million neurons, and in spite of ChatGPT being, ostensibly, one trillion parameters, all it can do is generate text. We don't even know how to make a trillion parameter network that can behave with the complexity of an insect. 
That should be a huge big fat hint to anyone actually paying attention that backprop is going to end up looking very stupid by comparison to whatever does actually end up being used to control thinking machines - and the people who are fully invested in (and defending) backprop are most certainly going to be the last ones who figure out the last piece of the puzzle. When you have people like Yann LeCun pursuing things like I-JEPA, and Geoffrey Hinton putting out white papers for algorithms like Forward-Forward, and Carmack saying things like "I wouldn't bother with an algorithm that can't do online learning at ~30Hz", that should be a clue to everyone dreaming that backprop will get us where we're going that they're on the wrong track.

    @CharlesVanNoland · 1 month ago
    • Maybe. Though it's fun to hear what people said when the Wright brothers and others tried to crack flying: "this is not how birds fly", "this is inefficient", etc. We "brute forced" flying by just blasting a shit ton of energy into the problem. Maybe we can do the same with intelligence.

      @sup3a · 1 month ago
    • Learning within a single individual brain may happen without any backpropagation. But couldn't the whole evolutionary process, running through billions of brains and arriving at a setup with different brain regions, be seen as some sort of backpropagation?

      @bilderzucht · 1 month ago
    • I think the idea is to get it to an advanced enough stage that it is competent and reliable, so much so that it expedites the research into something that looks more like the human brain's process as a replacement. We might even get it to a point where it self-improves; there is no reason to think it won't find a different approach that doesn't involve backpropagation. Either way, we can't deny it has great potential and applications to make AI advancement significantly faster.

      @vicipi4907 · 29 days ago
    • Progress is rarely linear, and innovation follows the line of opportune use, not the end game. That's why we had the 'stupid' internal combustion engine for over 100 years melting our planet 😢

      @colmxbyrne · 25 days ago
    • This assumes the goal of AI is to mimic a brain. It probably isn’t, mostly because it (probably) can’t, at least using existing compute approaches and current physics. If consciousness involves quantum effects as Penrose puts forward, current physics isn’t there yet. Or maybe it’s neither quantum nor algorithmic but involves interactions we can’t properly categorise today, which may or may not be deterministic. All of which is to say that I basically agree with you that all of the current approaches are building fantastic tools, but certainly nothing approaching sentience.

      @Mattje8 · 23 days ago
  • This is absolutely fascinating.

    @bobharris5093 · 23 days ago
  • Coming from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs, such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of intellect when they work in concert and improve dynamically.

    @SuperMemoVideo · 17 days ago
  • Your explanation after each pause was useful.

    @fernandodiaz8231 · 24 days ago
  • Andrew Ng is actually one of the more conservative of the AI folks, so when he's enthusiastic about something, he has a pretty good basis for it. He's very practical. As for this video: good point on Groq; we need a revolution in inference hardware. Another point to consider is the criterion for specifying when something is "good" or "bad" when doing iterative refinement. I suspect the quality of agentic workflows will also depend on the quality of this specification, as with all optimization algorithms.

    @mintakan003 · 1 month ago
  • 20:00 such a good point!

    @d_b_ · 1 month ago
  • Nice review of the field of agents you have built in your videos over the past few months. Next build a team of agents to build an AI to build, refine, optimize, and validate agents and agent teams for various tasks. Now repeat the process.

    @darwinboor1300 · 1 month ago
  • awesome analysis

    @elon-69-musk · 1 month ago
  • Your input is quite valuable

    @TrasThienTien · 23 days ago
  • amazingly inspiring and informative video

    @ondrazposukie · 23 days ago
  • It makes sense for these to be agents. They are parallelized and can be specifically trained where needed.

    @evanoslick4228 · 1 month ago
  • I think the real breakthrough will come when we have user-friendly UI and agents based on computer vision, allowing them to be trained on existing software from the user's perspective. For example, I could train an AI agent on how to edit pictures or videos, or how to use a management application, etc. One approach could be to develop a dedicated OS for AI agents, but that would require all the apps to be rewritten to work with the AI agent as a priority. However, I'm not sure if that's feasible, as people may not adopt such a system rapidly. The fastest way forward might be to let the AI agent perform the exact task workflows that I would perform from the UI. This approach would enable the AI to work with existing software without requiring significant changes to the applications themselves.

    @RaitisPetrovs-nb9kz · 1 month ago
  • Fantastic breakdown

    @samfurlong4050 · 1 month ago
  • Amazing. Multi agents debating... Exciting

    @agilejro · 25 days ago
  • @MatthewBerman What do you recommend I pick to have the most synergistic value as I prepare for the near future? I'm already using ChatGPT Plus and Perplexity Pro, but because of this video I might need to drop one so I can add AgentGPT. So what do you recommend: Perplexity Pro + AgentGPT, or ChatGPT Plus + AgentGPT? Your advice would truly be appreciated.

    @rupertllavore1731 · 1 month ago
  • What gets really interesting is that you could hook agentic workflows into an iterative distillation pipeline. 1) Create a bunch of tasks to accomplish 2) Use an agentic workflow to accomplish the tasks at a competence level way above what your model can normally do with one-shot inference 3) Feed that as training data to either fine tune a model, or if you have the compute, train a model from scratch 4) Repeat at step 2 with the new model. In theory you could build a training workflow that endlessly improves itself.

    @johnh3ss · 1 month ago
    • Let's also remember this is what open source tools were already doing over a year ago, but often these got stuck in loops. I'm really interested in revisiting them.

      @autohmae · 1 month ago
    • Or don't start the pipeline with a bunch of tasks, but rather let it be triggered from the outside when a task appears, e.g. in the form of a customer support ticket.

      @gotoHuman · 28 days ago
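The four-step distillation loop described in this thread can be sketched as follows. `run_agentic_workflow` and `fine_tune` are hypothetical stand-ins: a real version would wrap a multi-agent framework and a training job respectively; here they are stubbed so the loop itself is runnable.

```python
# Sketch of the iterative distillation pipeline: solve tasks with an
# agentic workflow, train on the resulting traces, repeat with the
# new model. Both helpers are hypothetical stubs.

def run_agentic_workflow(model, task):
    # Step 2: a multi-step agent loop, normally well above the
    # model's one-shot quality; stubbed as a formatted string.
    return f"[{model}] solution to {task}"

def fine_tune(model, examples):
    # Step 3: a real version would launch a training run and return
    # the new checkpoint; here we just tag the model name.
    return f"{model}+gen"

def distillation_loop(model, tasks, generations=2):
    for _ in range(generations):
        # Collect (task, high-quality solution) training pairs.
        examples = [(t, run_agentic_workflow(model, t)) for t in tasks]
        model = fine_tune(model, examples)  # Step 4: repeat with new model
    return model
```

In theory each generation's training data is better than the last, which is the "endlessly improves itself" idea; in practice, as the reply above notes, such loops can get stuck repeating themselves.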
  • I'm considering delving into this space and am curious, @Matthew Berman, what your preference is between Autogen, CrewAI, and whatever else is most comparable in the current market. What are your current rankings of them, and what are the optimal current use cases? Might make for a good upcoming video?

    @greatworksalliance6042 · 1 month ago
  • This makes total sense

    @stevencord292 · 1 month ago
  • Great videos, thank you! I have a question about this agentic framework that perhaps you can answer: it seems like the iteration process inherent in the likes of Autogen and CrewAI will be built into the next LLMs (ChatGPT 5, Claude 4, etc.). Does that make Autogen redundant at that point? Or am I missing something? Thanks.

    @user-qn7iw4ih3d · 1 month ago
  • Great breakdown as always. I'm a bit scared to play with agents until I can do so on a local LLM; I'm afraid the costs will run away with themselves if I do an ambitious project.

    @christiandarkin · 1 month ago
  • I've long suspected that iteration is the key to spectacular results; it's like an ODE solver iterating on a differential equation until it stumbles into a basin of attraction. You could probably do "agents" with just one GPT and loop through different roles. Then again, maybe multiple agents are a crutch for small context windows, lol. However, keep in mind that GPT-4 already gives you an iterative solution by running the model as many times as there are tokens.

    @u2b83 · 11 days ago
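The "one GPT looping through different roles" idea above can be sketched like this. `complete` is a placeholder for any real chat-completion call, with the role supplied as the system prompt; the role names are illustrative.

```python
# One model, many hats: re-prompt a single LLM with a different
# system role at each step instead of running separate agents.
# `complete` stands in for a real chat-completion call.

ROLES = ["planner", "coder", "reviewer"]

def complete(role, context):
    # A real call would pass `role` as the system prompt and
    # `context` as the user message; stubbed for illustration.
    return f"{context} -> {role} pass"

def role_loop(task):
    context = task
    for role in ROLES:
        # Each pass narrows the model to one "frame of mind",
        # feeding the previous role's output forward.
        context = complete(role, context)
    return context
```

This is the same pipeline a multi-agent framework runs, minus the separate processes: each "agent" is just a different system prompt over the same model.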
  • Thanks!

    @EliyahuGreitzer · 27 days ago
  • I've been thinking that this is the future for a while now, partially from my own experience and experiments with what you can get a model to do by prompting it with its own output, as well as having it reflect on it, ever since I got access to GPT-3, and partially thanks to everything I've learned about agents from you, Matthew. (I have spent an embarrassing amount of money fiddling with AI and figuring out its limits, considering that I'm just an interested layman.)

    @DefaultFlame · 1 month ago
  • Maybe this is what Grok 1.5 is doing behind the scenes to get a better score than GPT-4.

    @StuartJ · 1 month ago
  • Whether we know it or not, that is how most of us work: we evaluate the prompt, then we do a first pass, then we reevaluate, then we edit, then we do more and reevaluate, check against the prompt, edit, do more work, etc.

    @baumulrich · 1 month ago
  • I just started learning how to set up AI last month, but this is what I thought multi-agents or a Crew were.

    @YorkyPoo_UAV · 1 month ago
  • If you spend some exchanges on brainstorming first with GPT4 a few different approaches and only then give it a task, it is superb. I can see a pair of agents brainstorming in the future instead.

    @konstantinlozev2272 · 1 month ago
  • Did you know that this is actually how our mind/brain works as well? We have different parts (physical and psychological) that fulfill different roles. That is why we can experience inner conflict. One part of us wants this. Another part wants this. IFS teaches about this.

    @Lukas-ye4wz · 1 month ago
  • GPT 3.5 cognitive performance going from 48% to 95%+ by just changing how we interact with the same exact model is WILD! Are we learning that "team work makes the dream work" is true even for AI? I wonder what other common human sayings will cause the next architectural breakthrough in the field🤔 Thank you Matthew for this walkthrough, first time I learn about agentic workflow, Andrew Ng is amazing but you made it even more accessible 🙏

    @mykdoingthings · 11 days ago
  • The future will be agentic. Yes the future will be bananas. Well said!

    @peterpetrov6522 · 1 month ago
  • It's a very good strategy instead of training single long duration models. I do wonder about security, but the technology is very fascinating.

    @paulblart5358 · 1 month ago
  • With a limited context window, this hits an asymptotic wall very quickly. Keep in mind, I'm not saying the approach is not a big improvement; it is. However, my extensive experience is that it cannot go nearly far enough. LLMs are still not fully capable of high-performing work; they can still only do the basics (or high-level information recall). Perhaps with a large context window, this would actually be useful.

    @NOYFB982 · 1 month ago
  • Is it possible for, say, Gemini to iterate on itself if you prompt it correctly in your first prompt? Or do you need to build an application to do that? Can you use the web interface for it?

    @GregoryBohus · 1 month ago
  • Something you guys never talk about: the INSANE cost of building and running these agents. It limits developers just as much as compute limits AI companies. The reason agentic systems work is that they remove the context problem. LLMs get off track and confused easily. But if you open multiple tabs and keep each copy of the LLM "focused", it gets better results. So when you do the same with agents, each agent outperforms a single agent that has to juggle all the context. We get better results with GPT-3.5 using this method than you would get in a browser with GPT-4. Basically, you are "narrowing" the expertise of the model. And you can select multiple models and have them responsible for different things. Think Mixtral, but instead of a gating model, the agent code handles the gating.

    @agenticmark@agenticmarkАй бұрын
    • I’m really intrigued by your multi-tab workflow; it sounds super powerful, but I’m not sure how it works in practice. Do you have the different tabs working on different sub-tasks or performing different roles (kind of a manual agentic workflow, but with human oversight of each of the zero-shot workers), or are they working in parallel on the same task, or…? IANAP, but I need to have ChatGPT (my current platform; it could be Claude or whatever) do some fairly complex tasks, like parsing web pages and PDFs to navigate a very large dataset, using reasoning to identify significantly relevant data, and downloading and assembling it into a knowledge database that I’ll then use as test input for another AI system. Ideally I’d use one of the no-code/low-code agent dev tools to automate the whole thing, but as I said, IANAP, and just multi-tabbing could get me a long way there. It sounds like whatever you’re doing is exactly what I need to do, and likely what a boatload of others need as well; I do wish someone would do a video on it. Meanwhile, would you be willing to share a brief description of an example use case and what you’d have the various tabs doing for it? (I hope @matthew_berman sees this and makes a vid on the topic: your comment is possibly the most important I’ve ever encountered on YT, at least in terms of what it could do for my work and personal life.) Thanks for the note!

      @DaveEtchells@DaveEtchellsАй бұрын
    • You don't always need state-of-the-art models like GPT, Gemini, Claude, etc.; many open-source 7B models work just as well for most companies.

      @japneetsingh5015@japneetsingh5015Ай бұрын
    • @@japneetsingh5015 Yeah, llama, mistral, mixtral, the list goes on. If you want something even more lightweight than 7B, stablelm-zephyr is a 3B that is surprisingly capable. Orca-mini is good too and comes in 3B, 7B, 13B, and 70B versions so you can pick whichever you want based on your hardware.

      @DefaultFlame@DefaultFlameАй бұрын
    • What you're saying is: attention is all you need 😁 I do agree that mixing goals will confuse models, just as it would people. People, however, have already learned processes to compartmentalise tasks. We might have to teach agents to do that, apart from constructing them to minimize this confusion.

      @user-bd8jb7ln5g@user-bd8jb7ln5gАй бұрын
    • @@user-bd8jb7ln5g The whole point of multiple agents with different "jobs," personalities, or even different models powering them, is that we can cheat. The point of multiple agents is that we don't **need** to teach a single agent or model those learned processes, we can just connect several that each do each part, each agent taking on the role of different parts of a single functional brain.

      @DefaultFlame@DefaultFlameАй бұрын
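The "agent code handles the gating" idea from this thread can be sketched in a few lines: route each task to a narrowly-scoped worker with its own focused system prompt, instead of one model juggling all the context. Everything here is a hypothetical stand-in — the keyword router replaces a real gating model, and `call_llm` stands in for a real model call (e.g. a GPT-3.5 or local 7B request).

```python
# Sketch of Mixtral-style gating done in agent code (hypothetical, not a
# real framework). Each worker keeps a narrow system prompt so its context
# stays "focused", mirroring the multi-tab workflow described above.

WORKERS = {
    "code": "You are a programmer. Only write and fix code.",
    "review": "You are a reviewer. Only critique code for bugs and style.",
    "docs": "You are a technical writer. Only produce documentation.",
}

def route(task: str) -> str:
    """Toy gating function: pick a worker by keyword instead of a gating model."""
    t = task.lower()
    if "review" in t or "critique" in t:
        return "review"
    if "document" in t or "explain" in t:
        return "docs"
    return "code"

def call_llm(system_prompt: str, task: str) -> str:
    """Placeholder for a real model call (e.g. GPT-3.5 or a local 7B)."""
    return f"[{system_prompt.split('.')[0]}] handling: {task}"

def run(task: str) -> str:
    worker = route(task)
    return call_llm(WORKERS[worker], task)

print(run("Please review this function for bugs"))
```

Because each worker's prompt is narrow, you can also point different entries of `WORKERS` at different (cheaper) models, which is the cost argument the thread makes.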
  • Agents will be part of the future of LLMs. Just imagine different expert agents working on different parts of the app, with one agent as the program manager. We'll be able to create apps in weeks instead of months.

    @santiagoc93@santiagoc93Ай бұрын
  • This is really good.

    @mrpro7737@mrpro7737Ай бұрын
  • 10 years ago, I created the data architecture for AI Agent and AI Congress networks.

    @michaelcharlesthearchangel@michaelcharlesthearchangelАй бұрын
  • I think there should be more emphasis on the insane efficiency gains achievable when agents are enabled to take actions in connected apps and systems

    @gotoHuman@gotoHuman28 күн бұрын
  • You can actually tell a GPT to break itself into multiple separate personalities. Give them each a goal: one can write code, then the next reviews it, and the one chatbot works it all out without resorting to a convoluted separate-agents system. Tell them to talk to each other to get a task done. Name them, e.g. Bob and Joe, and tell it to preface each remark with the speaker's name as they talk. I tried it and the results were very promising.

    @gregkendall3559@gregkendall355923 күн бұрын
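The single-chat, multi-persona trick described above boils down to one carefully-built prompt. This sketch only constructs that prompt; the persona names and role descriptions are made-up examples, and sending the prompt to an actual model is left out.

```python
# Hypothetical sketch of the one-chatbot, many-personalities prompt above:
# a single prompt asks the model to role-play named agents that take turns.

personas = {
    "Bob": "writes the code",
    "Joe": "reviews Bob's code and points out problems",
    "Ann": "decides when the task is done",
}

def build_prompt(task: str) -> str:
    """Assemble the instruction that splits one chat into named personalities."""
    roles = "\n".join(f"- {name}: {job}" for name, job in personas.items())
    return (
        "Split yourself into the following personalities:\n"
        f"{roles}\n"
        "Prefix every message with the speaker's name, e.g. 'Bob:'.\n"
        "Have them talk to each other until the task is complete.\n"
        f"Task: {task}"
    )

print(build_prompt("Write a function that reverses a string"))
```

Pasting the printed prompt into any chat interface reproduces the commenter's setup without any separate agent framework.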
  • In a multi-agent, multi-LLM scenario, it is key to understand which LLM to assign to each agent. I found that there is not enough information about each LLM to make that decision. Maybe the answer is to train each LLM on a specific topic.

    @biskero@biskeroАй бұрын
    • It's possible without training: just a simple group of base GPT-3.5 agents already outperforms a single GPT-4. It's more about orchestration.

      @stepanfilonov@stepanfilonovАй бұрын
    • @@stepanfilonov Interesting, so it's a matter of the agents supporting it? Still, LLMs should come with more information about their specific training.

      @biskero@biskeroАй бұрын
  • With the way things are going and the power of agentic AI, I'd suggest that Deep Thought would arrive at the number 42 at least three minutes quicker, or within three minutes. There's no way to tell but I reckon Douglas would love this.

    @chrispteemagician@chrispteemagicianАй бұрын
  • I am looking for something like Autogen/GPT Pilot 2, but designed for iOS programming, such as Swift/Xcode. Is there something along those lines?

    @bobnothing4921@bobnothing4921Ай бұрын
  • This is definitely the future. It’s the same as using ChatGPT to brainstorm, Claude to write the first draft, and Gemini to critique it; that’s my current workflow. Once it becomes repetitive, the workflow is easy to turn into an algorithm.

    @MrJawnawthin@MrJawnawthinАй бұрын
  • Hey Matthew, Andrew's slide mentioned Gorilla LLM; that might be an interesting idea for a future video.

    @SumedhKadoo@SumedhKadooАй бұрын
  • Super cool stuff! I’m still curious when a “normal guy” like me is going to be able to use this stuff to do something “helpful”. So far, it all still feels like it’s for the world of coders and developers.

    @dallasnorman7648@dallasnorman7648Ай бұрын
  • As is often the case, education in England is years behind private-sector organisations. I’d love to understand how AI can be utilised to help run a school, and how a senior leader like myself could learn to introduce this infrastructure into the functions of a school at large. I’m sure it can be utilised to help with lesson planning, but I’m thinking more in terms of organisation-scale processes.

    @matt6288joyce@matt6288joyce18 күн бұрын
  • Next video on how to run LDB+Reflection pleeease

    @UnchartedDiscoveries@UnchartedDiscoveriesАй бұрын
  • Yeah, Sequoia Capital also misled everyone by not doing actual due diligence on FTX. When everyone heard that they had invested, no one else did due diligence because they assumed Sequoia had. And they never went to court or faced any punishment.

    @TheStandard_io@TheStandard_ioАй бұрын
  • Matthew is back from the SHOCK. Glad to have you back again.

    @rayhere7925@rayhere7925Ай бұрын
  • Damn, when I was learning English, the expression "He's incredibly bullish" would totally have made me scratch my head. I don't know why people like it so much, as it's very investment-specific. Otherwise, great video; if it weren't for you, I'd have missed this talk by Andrew. And I agree, having agents running in the background on our tasks would speed things up. In the end, the only limits would be our own bandwidth and the rate at which we can come up with new tasks and ideas. I don't know about you, but mine is definitely not infinite.

    @denijane89@denijane89Ай бұрын
  • You mention tool use and computer vision; I'm sure you've already seen Intel's Model Zoo and similar repositories for tools? With a coding LLM and tool libraries, you can essentially turn out new tool AIs and validate them quickly.

    @nukadog1969@nukadog1969Ай бұрын
  • This is the way it should work, but the cost could be very high if agents start to iterate a lot, sending a lot of tokens.

    @snuwan@snuwanАй бұрын