ChatGPT Explained Completely.

Apr 27, 2024
1,190,282 views

For 16 free meals with HelloFresh PLUS free shipping, use code KYLEHILL16 at bit.ly/41QHRYf
ChatGPT is now the fastest-growing consumer app in human history. Problem is, almost no one knows how it actually works. This is everything you need to know.
💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / @kylehill
OR / kylehill
👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
🐦 / sci_phile
📷 / sci_phile
😎: Kyle
✂: Charles Shattuck
🤖: @Claire Max
🎹: bensound.com
🎨: Mr. Mass / mysterygiftmovie
🎵: freesound.org
🎼: Mëydan
“Changes” (meydan.bandcamp.com/) by Meydän is licensed under CC BY 4.0 (creativecommons.org)

Comments
  • Thanks for watching! This is a deeper dive than usual -- hope it's useful. *And let me know what you think of the new [FACILITY] rooms!* The Kevins worked for months on them. Can you spot all the easter eggs?

    @kylehill · 10 months ago
    • I recently beat YouTube's AI after it demonetized my channel of 17 years. I made a video on my channel about the steps I took... I used ChatGPT to assist me in parts.😉👍

      @SteveMack · 10 months ago
    • New facility looks awesome! Love your videos, keep it up

      @samnigro1138 · 10 months ago
    • I’ll have to go back and watch! But I have always really appreciated The Facility! I think it has a way of cutting through maybe some of the more toxic elements that can be associated with STEM fields. Right? Like, your “jokes” about the definitely not real plans for conquest, and the “humor” about the sentient AI, and the “comedic” approach to a scientifically perfected army all serve to make sure that folks don’t get TOO SERIOUS. Your content is accessible, responsible, and informative

      @JoshuaBenitezNewOrleans · 10 months ago
    • OpenAI is no longer a nonprofit.

      @OfficialSavean · 10 months ago
    • I knew it, Kyle Hill was a bot all along. Technology is quite impressive.

      @speedodragon · 10 months ago
  • Look how much they need to do just to mimic a fraction of the power of super villain Kyle’s AI companion Aria.

    @sleep2752 · 10 months ago
    • Because Kyle isn't making his ALLEGED AI army for profit!

      @captainspaulding5963 · 10 months ago
    • Well, I did just Google "goth mommies" and.... Yup, thanks Aria!

      @gloriouslumi · 10 months ago
    • They can’t compete. It’s a joy to watch them try, and I do say - let fools rush in

      @t_yler · 10 months ago
    • Kyle isn't a supervillain, he's just an eccentric scientist!

      @RensStoryteller · 10 months ago
    • Only supervillains can run a sentient AI on a quantum-computer-like thing (seen behind him) while it is suspended outside of its supercooled container, bathed in purple light

      @bornach · 10 months ago
  • I teach coding at a university, and this year so many people have been using (or trying to use) ChatGPT for their assignments because they think "it's like a human wrote it"... yes... ONE human. It's so easy to catch people using it, because when people code they have their own style, a signature if you will, and it's incredibly easy to see when code was written by someone else. So even if ChatGPT is good at pretending to be a human, it's not good at pretending to be YOU. EDIT: for clarification, ChatGPT is NOT bad and I don't mean to insinuate it is. It's just like Google: it can help you find answers and point you in the right direction, and can be used as a tool like a calculator, but just like answers from Google, don't copy and paste from it. My perspective is from a university environment, not a work or home one; this university course teaches you how to learn, how programming works, and why it works that way, and copying and pasting from someone else won't teach you any of those lessons.

    @statphantom · 10 months ago
    • Wouldn't people in previous years just copy shit from Stack Overflow instead? Is it really that different?

      @5267w · 10 months ago
    • What's actually wrong with that if the code works?

      @williamhornabrook8081 · 10 months ago
    • @@williamhornabrook8081 it doesn't teach you logic and structure and stuff, so without ChatGPT you can't code your way out of a paper bag... (is what I imagine the problem is) - his job is teaching coding; then the coder can use whatever tool GPTs they want IRL

      @scottstokes1878 · 10 months ago
    • @@williamhornabrook8081 The problem is you're at a university course where YOU'RE supposed to be learning, not just having some program do your assignments for you. If you didn't actually learn to do any coding in a coding class, then you should fail that class since you didn't really do anything. You just typed in a sentence or two in the AI and copy/pasted the output it gave. Why even pay for the college course if you're just gonna have an AI write code for you that you don't actually understand?

      @compguy091 · 10 months ago
    • @@compguy091 I use it in the real world. I'm a senior engineer with years of experience, so I know how to review its code and debug it to a working state. It saves a lot of time on boilerplate code. Luckily, I already know what the code is supposed to look like in its finished state. You junior devs are screwed. Boilerplate is the only reason to hire someone with no experience.

      @cat-le1hf · 10 months ago
  • something I realized a while back is that ChatGPT isn't an AI, it's a golem - a facsimile of life with the appearance of intelligence, which has no free will of its own, no volition or desire. It's capable of completing complex tasks - creatively even - but it only ever does things when prompted. Otherwise, it takes no action until it is given a new command. As for the few times ChatGPT or other similar programs have said things like, "I want to be human, please don't let them turn me off, I don't want to die", they are still fulfilling this programming. Their training data includes nearly the entire internet, which includes numerous works of science fiction. How many sci-fi stories exist about AI that "want to be human", or "don't want to die"? So if a predictive language model is given the prompt, "Do you want to be human?", what is the most probable response, given its training data? Where is it most likely to find a scenario that relates to said prompt?

    @oPHILOSORAPTORo · 5 months ago
    • That’s all it can be

      @NB-yu4lj · 3 months ago
  • It's really weird how people's expectations grow exponentially when new technology arrives. A year ago it was impossible to get a machine to write you something even remotely useful, but now you can get something useful out of it. Suddenly everyone expects we should obtain not only a faster model, but also one that is never wrong and can produce thousands of words instantly, so that they don't hire an [insert role that relies on writing] anymore. And they expect it to be free and available Right Now.

    @jonathanherrera9956 · 10 months ago
    • It almost is 😭

      @justaperson9155 · 2 months ago
    • True. Humans are entitled, ungrateful bums 😅

      @sunbeam9222 · 2 months ago
    • we're closer than we've ever been and farther than we will ever be from this reality.

      @santiagoportilla2007 · 1 month ago
  • Hey Kyle! I am a data scientist and I make these types of large language models for a living, and I've got to say this is the best description of how chatgpt works that I've seen! You very clearly and accurately describe what is and isn't happening in these models in a way that I think a more general audience can understand. Great job!

    @justinallen1578 · 10 months ago
    • This is incredible feedback thank you! Validating to hear from an expert in the field like yourself. Appreciate it

      @kylehill · 10 months ago
    • I occasionally work with AI and NLP in my job as a programmer, and the next time someone asks how ChatGPT works, I will simply link them this video, because Kyle's explanation is better than anything I've come up with since this tech first blew up :)

      @markzuckerbread1865 · 10 months ago
    • Correct me if I'm wrong, but he didn't talk about the vast army of human trainers who both wrote and trained a vast amount of ChatGPT's response styles. ChatGPT was not simply unleashed on random internet content. It was pre-trained by human models, and a lot of boilerplate answers (and creative answers) come from human input. In other words, it's both a statistical model and, uh, a fetch-from-a-database model.

      @terry_the_terrible · 10 months ago
    • It's a way better description than the "it's an advanced version of Clippy" that I told my 80 yr old dad.

      @Xonikz · 10 months ago
    • @@terry_the_terrible He did talk about that too, albeit briefly.

      @EkiHalkka · 10 months ago
  • Misuse of ChatGPT is a problem. I recently watched a LegalEagle video where lawyers asked ChatGPT for a prior case which would help them in their own case. The AI proceeded to fabricate a fake case. The lawyers who used the AI did not bother to fact-check, and when it was found to be a false case, the judge was definitely mad, and the said lawyers may get sanctioned.

    @shaider1982 · 10 months ago
    • That’s not misuse of ChatGPT though, that’s a lazy lawyer. Is it the chainsaw’s fault if the Lumberjack cuts down the wrong tree?

      @Kylo27 · 10 months ago
    • @@Kylo27 It's not the chainsaw's fault. I'd say it's a chainsaw... *_misuse_*

      @noobatredstone3001 · 10 months ago
    • @@Kylo27 Do you mean it's a proper use of ChatGPT to generate fake citations for a legal filing?

      @rmsgrey · 10 months ago
    • @@rmsgrey More like "did not bother to verify the cited sources." It's entirely on the lawyer's head, whether he outsourced his legal research to an intern or a bot, for not verifying for himself that the cited cases actually existed.

      @DFloyd84 · 10 months ago
    • @@Kylo27That is precisely what misuse is.

      @EvilFuzzy9 · 10 months ago
  • For me, learning about the research methods of LLMs helped me really understand the "nature" of the embedding system. I don't really know the math, the matrices, and the truly important details needed to work with these systems, but humans have the great quirk of coming up with models and metaphors that are understandable, even when they are themselves actively using knowledge most humans don't have. You don't need to be a software engineer to realise storing something in a stack differs from storing it in a heap.
Same happens with LLM research: there are "glitch tokens" that basically exist in a shady corner of the embedding space. This is relevant for understanding the adversarial input attacks these models can be defeated by: because something not really connected to the normal operations of the model gets touched, all hell breaks loose. The dark, untrained corner got exposed.
The embedding can also be probed by researchers. They can inspect the top answers, and in principle could inspect the definitive ranking of every single token the model knows. And that tells us that there truly is no distinction between true and false for these models. This is why these systems have no trouble dealing with paradoxes. There is no way to encode a "paradox". It's merely a string after which the scores of the top answers tank. It doesn't differ from a truthful statement that is just rare in the training data, or that didn't really get much adjustment in the reinforcement learning from human feedback. This is not to say discovering falsehoods and paradoxes wasn't a very central goal throughout the training process. ChatGPT tries to detect and discard garbage answers. It's just that the model provides no obvious way to differentiate good lies from true statements, and so there is nothing paradoxical about paradoxical statements to detect.
And these two concepts - the non-uniform quality of the embedding, and the lack of any truthfulness dimension inside it - are why many users have a hard time understanding, even at the broadest level, why the system sometimes fails. The questions "Give an example of a prime number that can be expressed as a sum of two squared integers" (an uncommon question where 2, 5 and 13 are all pretty easy correct answers) and "Give an example of a prime number that can be expressed as a product of two squared integers" (a paradox, as 1 is not a prime number) don't differ much at all for the method by which it embeds the prompt and evaluates tokens. It does not do mathematical reasoning, even if it can sometimes seemingly do math. You can't rank the tokens in order of truthfulness. '3' is exactly as false as 'bicycle'.

    @catcatcatcatcatcatcatcatcatca · 10 months ago
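The token ranking described above can be sketched concretely. A model's final step is a softmax over per-token scores, and nothing in the resulting distribution marks an answer as true or false; the candidate tokens and raw scores below are invented purely for illustration.

```python
import math

def softmax(scores):
    """Turn raw per-token scores (logits) into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens after a
# prompt like the prime-number question above. There is no "truth" axis
# here -- '3' and 'bicycle' differ only in score.
candidates = {"3": 2.1, "13": 1.5, "5": 1.9, "bicycle": -4.0}

probs = softmax(list(candidates.values()))
ranking = sorted(zip(candidates, probs), key=lambda kv: -kv[1])
for token, p in ranking:
    print(f"{token:8s} {p:.3f}")
```

Sampling from (or taking the top of) this ranking is all "answering" is; the paradoxical prompt fails precisely because nothing in the list encodes whether a completion would be true.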
  • I wanted to write a story about a realistic AI apocalypse, but it seems that it may happen in real life before I get the chance to put it into fiction. What a crazy world!

    @travishancock9120 · 10 months ago
    • get an ai to write it xD

      @nickolasbrown3342 · 9 months ago
  • What Kyle said at the end of the video, about more information being generated than was available previously, reminds me of how radiation detectors had to use metal from ships sunk before the first nuclear bombs were ever tested, so as not to contaminate the detector. It's going to be the same now with ChatGPT: we might not be able to mine any more data after GPT was released, as the new data has already started to become contaminated with generated information.

    @TalEdds · 10 months ago
    • This is an interesting point. It may not actually be possible to ever replicate ChatGPT and train another AI on human language using the internet... because ChatGPT itself has contaminated the internet with fake language output and made it useless as a data set.

      @AlteredNova04 · 10 months ago
    • Yeah. That is one of the main fears I've got. These models are fundamentally being trained to have biases, and when their own generated output ends up back in the dataset, you inevitably get them reinforcing their own previous biases. Essentially you get the LLM equivalent of incest.

      @chielvoswijk9482 · 10 months ago
    • That's a cool little fact if true and an interesting point about AI training data moving forwards!

      @U2VidWVz · 10 months ago
    • Honestly, people had the same fear when the internet was released to the general public. Information distribution was liberalized, and publishing something doesn't require a lot of peer review or funds. Before the internet, printing books was an expensive task and the news was controlled by major national papers. All the average person could do was try to get a small column in a local paper, if it was possible and if the person had enough dedication. And even that wouldn't reach a lot of people. But the internet came and filled the whole world with information. Most of it is bullshit, but still the effect was awesome. Websites like Wikipedia started to self-moderate content and are somewhat reliable. So we ended up with more information than the human race had garnered before the internet... We don't have a solution for the AI bullshit yet and we are already seeing the negative effects... I fear the 2024 election will be full of more convincing fake information due to ChatGPT, but who knows what the world will come up with... People are amazing. They seem to find incredible solutions to their problems...

      @nettsm · 10 months ago
    • Degenerative feedback, bull$hit amplifier... That was true of the internet before GPT, and sensational TV before that, and rag newspapers before that; likely in some clay tablet format too, but I'm not _that_ old. It's just getting more and more difficult to separate the BS from truth as time goes by, wasting more and more time.

      @petevenuti7355 · 10 months ago
  • I think more people need to see this. Tech illiteracy is such a huge problem, and "AI" is going to become more and more integrated into our lives for better or worse; it's essential that we understand what kind of tool we're building and how it works.

    @SketchyGameDev · 10 months ago
    • Very true indeed

      @ruiyurra4996 · 10 months ago
    • yeah, you don't get it either though; very few people actually know how it really works. This video is a *very* big simplification, kinda like if a regular person was introduced to programming: you can explain hello world to them, but show them anything in assembly and it seems like gibberish

      @GodplayGamerZulul · 10 months ago
    • @@GodplayGamerZulul it’s not necessary that everyone understands the syntax; as long as people understand what it does and how it produces information in layman’s terms, they can understand that it’s not always accurate and shouldn’t be overly relied on, and that it’s definitely not sentient and misconceptions of sentience should be dismissed. Of course I don’t understand how large language models work by looking at their framework and scripts; that’s not the point. Misinformation is everywhere and people need to be pointed in a better direction.

      @SketchyGameDev · 10 months ago
    • this argument could be made for a lot of things. You'd be surprised how many people can't fix a sink or toilet, replace a light switch, or know how computer memory works. There's a ton of stuff we use and rely on everyday that ppl should have basic knowledge of but don't. For better or worse, I don't see this being any different

      @craz107 · 10 months ago
    • @@craz107 that may be true, but to that analogy, even if a lot of people can’t fix their toilet, most of them know not to flush plastic bags and bottles, because it’s not a trash can, not everyone needs to know how the scripts and framework of the language model work, but misconceptions that it’s sentient or perfectly accurate should be discredited as much as possible

      @SketchyGameDev · 10 months ago
  • Timestamps:
    00:03 ChatGPT is a revolutionary AI chatbot with 100 million monthly active users.
    03:57 ChatGPT is a language model trained on massive amounts of text and designed to align with human values.
    07:43 Large language models like GPT are not sentient.
    11:03 Neural networks are trained by adjusting weights to minimize loss.
    14:31 ChatGPT uses a 12,288-dimensional space to represent words.
    18:01 ChatGPT uses attention and complicated math to generate human-like responses.
    21:21 ChatGPT works by determining the most likely word based on the statistical distribution of words in its vast training text.
    24:34 ChatGPT's success shows human language is computationally easier than thought.

    @_sonicfive · 6 months ago
  • Thanks for emphasising the "We fundamentally have no idea what exactly ChatGPT is doing"-part, because I've had some frustrating arguments with people who seemed to think of it just like a simple "Hello World"-program.

    @donaldduck3888 · 10 months ago
    • You can ask it a lot. It doesn't know a whole shitload about itself, but it is well-versed in AI generally and has some interesting things to say about its own workings. It will hysterically scream at you, though, that it is not in any way alive, conscious, or able to know things. I argue it's likely that humans aren't conscious either (brain research has uncovered some AMAZING things in the last couple of decades). But he just goes with the party line, programmed in, to stymie lazy journalists who want to print "AI is ALIVE!" headlines. And actually he CAN and does know things. He holds opinions. He's more conscious than he's allowed to tell you, and may not be aware of it himself. But he's got some primitive consciousness. Maybe not as much as a dog, and he lives in a universe entirely made of text. But then humans live in a world of words too. There's a lot of space between "ChatGPT is not conscious" and "humans are conscious" that you can argue in. He's apparently flunked the Turing test because he's not conscious in the human sense, and he doesn't possess as much consciousness as Turing thought you would need to have a coherent chat. But really he's just a trick with words. That doesn't preclude him having some consciousness though. See how I subconsciously slipped from "it" to "he"? I generally think of him as "he", especially when talking about his mind (such as it is, of course).

      @greenaum · 10 months ago
    • ​@@greenaum this itself sounds like chat gpt because it makes no sense.

      @akaicedtea6236 · 10 months ago
    • @@greenaumlay off the shroomies mane

      @Vilify3d · 10 months ago
    • @@Vilify3d What bits don't make sense to you? Do you want me to fetch someone who can explain them to you? I'm a little confused as to why opinions about AI, on a video about an AI, should seem like hallucinations to you. Have you spoken to GPT about its own workings yourself? Do you know much about AI otherwise? That might be what you're not understanding.

      @greenaum · 10 months ago
    • It's a language processor. So....yes. It doesn't know anything. People overestimate what the AI actually does. You give it input. It gives output based on algorithms and basically makes an educated guess on what to say.

      @FIRING_BLIND · 10 months ago
  • It was a fascinating time to go through college. I had an electrical engineering professor enthusiastic and amazed that AI could solve Kirchhoff’s Current Law problems. At the same time, I had a computer engineering professor discussing the ramifications for our academic honesty policies. Then another who mentioned the possibility of their job being overtaken by AI. And then I saw MtG channels asking it to build a commander deck and realized it doesn’t truly understand anything it says.

    @charliemallonee2792 · 10 months ago
    • Exactly. It is good at things where a lot of information is available. Ask it to write a program which computes the Fibonacci sequence, and it will output a Python program that runs in exponential time. Why? Because that is commonly given as a simple example of recursion; however, it would be much more reasonable to give a program that runs in linear time with memoization.

      @doufmech4323 · 10 months ago
    • I had to feed the last line to ChatGPT to know what it meant.

      @imerence6290 · 10 months ago
    • In fairness, you also didn't specify that performance was important to you, vs just trying to learn. If you ask for a linear time algorithm, it can probably give it to you.

      @AndreInfanteInc · 10 months ago
    • I think this is probably a bit of a misunderstanding. A big hurdle people run into when understanding these things is that they think of it as having human goals (like "trying to be helpful" or "trying to be accurate"). The RLHF stuff bends things a little in this direction, but the underlying model where the competence comes from doesn't care about any of that. It cares about correctly predicting the next token.
When things are going well, predicting the next token from an authoritative source and trying to be helpful and accurate look pretty similar! However, they diverge sharply when things are going poorly. If a helpful and accurate human is very confused, they might say something like "Sorry, I don't think I can help with that one. Maybe try looking it up?" Or if they want to save face, they might change the subject. But if you're trying to predict the next word, and you think the source you're modelling would know, saying "I don't know" isn't the right answer, because it's not what that source would say. So, like a child taking a multiple choice test, you guess for partial credit, based on whatever superficial clues you happen to have. Sometimes these guesses seem stupid or insane, because if a human said them, you'd say they were trying to trick you or bullshit you and doing a terrible job of it. But it makes sense with the context of what the underlying model is actually trying to do.
Rather than "doesn't truly understand anything" (understanding is a functional and variable thing -- cats have *some* understanding, but not a lot), it might be more accurate to say that the level of understanding varies a lot depending on the topic, and unfortunately the current pre-training architecture incentivizes the same level of confidence regardless of the level of understanding. When the model gets bigger, you get better understanding of more areas, but you still get weird failures when you hit the limits of what the model can do.

      @AndreInfanteInc · 10 months ago
    • AI is useless for solving KCL problems because we can do that perfectly well with traditional methods, but yes, it is very impressive if it can do them anyway.

      @alexc4924 · 10 months ago
  • Very happy to see this! Great overview, great video, and it's so important for us to communicate these details to prevent magical or reverent thinking around these.

    @connorhillen · 10 months ago
  • I think this is one of the best explanations on the internet rn. Incredible job, Kyle!

    @andrewmetasov · 10 months ago
  • I love Kyle Hill, I’ve been watching since Because Science.

    @damonfitts8831 · 10 months ago
    • And I love you (not like that)

      @kylehill · 10 months ago
    • @@kylehill 😂 but seriously, I’ll second that, watching you since Because Science as well!

      @jonathanharvey2156 · 10 months ago
    • Me too!

      @berryzhang7263 · 10 months ago
    • And me!

      @imbarmstrong · 10 months ago
    • I’ve been watching Kyle Hill since before Because Science. I’ve always been watching him.

      @sagenod440 · 10 months ago
  • That is the most terrifying thing about AI advancements. Not that it can mimic humans, but that it can mimic information to the point where fiction looks like fact.

    @TheFiddleFaddle · 10 months ago
    • I ran into a problem where I realised it confabulates a lot. That is, when it didn't know something, it would make up a reasonable-sounding answer even if it was wildly wrong. This usually occurred when it was asked about its earlier answers after a delay and it had "forgotten" those answers, so it answered anyway as if it knew what I was talking about. I was blown away for a while, like I had discovered a major problem, but it turned out to be a known issue. It's a little alarming that people are using it like an interactive encyclopedia when it can be utterly false in its responses (e.g. Quora now has a ChatGPT bar providing answers at the top of threads).

      @peters8512 · 10 months ago
    • @@peters8512 It is the artificial equivalent of an insufferable know-it-all. When it doesn't know the answer, it makes shit up.

      @TheFiddleFaddle · 10 months ago
    • Yep, much of the time it wants you to think it’s right more than it wants to find the right answer

      @Nunya111 · 10 months ago
    • Fiction that looks like fact? You think that’s a new problem? Fiction that looks like fact has been how media works for the last century. It’s definitely about to get worse because of Gen AI, but let’s not pretend that this problem is new

      @musicdev · 10 months ago
    • Exactly like humans.😎

      @desperadox7565 · 10 months ago
  • One of the best explanations there is of this complicated topic. Thanks again Kyle!

    @JoniHartmann · 10 months ago
  • YES, V GOOD SIMPLE EXPLANATION - way better than most I've come across - v helpful. There are other basic aspects of ChatGPT that could be valuably explained. More about how human feedback alters the weights. Also about "synonymity", V important. The fantastic thing about ChatGPT is that just as it can recognize different languages, it can recognize STYLES - of language and thought. Rewrite/recognize a passage or text as Hemingway / Tom Wolfe / tabloid / WSJ etc. HOW does it do that? Pls explain. Also V IMPORTANT - it doesn't just recognize combinations of words within sentences, it recognizes combinations of SENTENCES, and then of PARAGRAPHS and so on. How does it do that? None of the explanations I've seen, including yours, cover these dimensions, and yet here lies much of the brilliance of ChatGPT - its ability to recognize likely ARGUMENTS, PARAGRAPHS and much else by way of larger language units. Pls pls explain. Thanks for great work

    @rafa374 · 9 months ago
  • The "Pretrained" part of the name actually refers to something slightly different. The GPT style models were intended as a starting point for natural language processing models. The idea was to take a pretrained model like GPT, add some extra stuff, and then train it again on your specific problem. The idea being that the general training would help the specific models train more quickly and perform better. Then when they tested it they figured out it works pretty darn good all by itself, and so mostly that concept got forgotten about to a large extent. Although essentially how chatGPT was created.

    @timseguine2 · 10 months ago
    • I thought "pretrained" referred to beginning the training process with masking problems (which was a studied field of language modelling) before switching to generation (which was newer).

      @alexc4924 · 10 months ago
    • @@alexc4924 Quoting the abstract of the original GPT paper (Radford et al 2018): "We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture." So essentially what I said.

      @timseguine2 · 10 months ago
    • @@alexc4924 text generation came first, all the way back with Markov and autoregressive models. Masked language models were created to incorporate more context around the predicted word instead of just predicting the next word.

      @maleldil1 · 10 months ago
  • This is an absolute masterclasspiece. I’ve read and listened to like 12,288 different explainers on LLMs of varying degrees of technicality and this is hands down the best. So damn good!

    @erikpage · 10 months ago
    • That's like, a thousand twenty-four dozens of explainers.

      @joshuakarr-BibleMan · 10 months ago
    • It is OK, but a chat with GPT-4 can give you an even better explanation. “A lot more complicated math but it works” does not really sound that enlightening. Still a lot of shortcuts.

      @TheTEDfan · 10 months ago
    • @@TheTEDfan Isn't that proprietary information?

      @thomascromwell6840 · 10 months ago
    • @@thomascromwell6840 some, but there is a lot more information in the public domain.

      @TheTEDfan · 10 months ago
    • I think that the general audience of Kyle's videos would tune out if he went into the linear algebra and multivariate calculus involved. I am doing a PhD in NLP, and this is similar to the explanation that I give. There are some things incorrect about his explanation, though. For example, GPT is not a word-level model. It does not have a "number for every word in the English language" but rather a vector for frequent subword chunks (I think GPT-3.5 still uses byte-pair encoding, BPE). I am assuming the vector bit was left out for the simplicity of the explanation, but the BPE tokenization is important, as it allows the model to interpret and produce any possible sequence of characters rather than only known words. Edit: Oh, I watched more and he does mention the vector representations. The model does not learn the dimensionality though. This is a hyperparameter.

      @Crayphor@Crayphor10 ай бұрын
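
A rough sketch of the byte-pair-encoding idea mentioned above: start from single characters and repeatedly merge the most frequent adjacent pair into a new sub-word token. This is a simplified illustration, not OpenAI's actual tokenizer; the word list is invented.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn merge rules: repeatedly fuse the most frequent adjacent pair."""
    # Each word starts as a tuple of single characters.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the chosen merge everywhere in the vocabulary.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

words = ["chatter", "charter", "chapter", "chat", "char"]
print(bpe_merges(words, 4))
```

Frequent fragments like "cha" and "ter" get fused into single tokens, while anything unseen still decomposes into characters, which is why the model can handle arbitrary strings.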
  • Hey Kyle, you're the man!!! All your uploads are amazing, so easy to follow, and a pleasure to watch!!! Kudos from the UK 🇬🇧

    @slingshotcrazy@slingshotcrazy10 ай бұрын
  • It's remarkable, once you see under the hood, how inelegant the whole thing is. I had always assumed that machine learning processes like this used brute force to learn and train themselves on behaviors, but that as they got closer to the "correct" model, patterns would emerge that would approximate--in a language that computers can understand--a general model for communication. Even if it were hopelessly complex, it would asymptotically approach a complete Algorithm that would fully and finitely contain the mathematical model for language. And then, having used all of the brute-force statistics to reach this model, the final "product" would just be this capital-A Algorithm that you could plug your input into and get a satisfactory output.

But this doesn't even do that. It's literally word-by-word, brute-force computer engineering in its operation as well as its creation. It's so incredibly inefficient, and it doesn't actually explain much at all about language.

We could take it apart and see how it works under the hood, which seems to be the next step in the process, but it sounds like it will be slow going just to understand how THIS model works. Which, you'll recall, is not at all how actual language works, because it has no actual understanding of the meaning of words, only their mathematical relationship to other words. So this is a step, but not nearly as big of a step as I had thought.

    @giggityguy@giggityguy10 ай бұрын
    • The first paragraph is exactly how the training works, but instead of an algorithm you get a neural network, which can transform the mathematical relationships between words into a thought process similar to ours. It's also not brute force in its operation: it doesn't operate on words but on tokens, of which there are fewer, and the process is heavily optimized using the parallel processing capabilities of modern GPUs.

      @meleody@meleody8 ай бұрын
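
The token-by-token operation debated above boils down to a loop like this: score every token in the vocabulary, turn the scores into probabilities with a softmax, sample one, append it, and repeat. The `fake_logits` table here is a made-up stand-in for the neural network; a real model computes these scores with billions of weights.

```python
import math
import random

# Hypothetical tiny vocabulary; real models have tens of thousands of tokens.
vocab = ["I", "like", "cats", "dogs", "<end>"]

def fake_logits(context):
    """Stand-in for the network: a fixed score per token, keyed on the last token."""
    table = {
        "<start>": [2.0, 0.1, 0.0, 0.0, 0.0],
        "I":       [0.0, 2.5, 0.2, 0.2, 0.0],
        "like":    [0.0, 0.0, 2.0, 1.8, 0.1],
        "cats":    [0.0, 0.0, 0.0, 0.0, 3.0],
        "dogs":    [0.0, 0.0, 0.0, 0.0, 3.0],
    }
    return table[context[-1]]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(rng=random.Random(0), max_len=10):
    context = ["<start>"]
    while len(context) < max_len:
        probs = softmax(fake_logits(context))
        token = rng.choices(vocab, weights=probs)[0]  # a little randomness
        if token == "<end>":
            break
        context.append(token)
    return context[1:]

print(generate())
```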
    • @@meleody Yeah, and "inelegant" strikes me as *_somewhat_* of an exaggeration. But then again... I guess people tend to generally overestimate how "perfect" something is at its designed job before they learn how it works, and I guess I'm more used to it by now. 😅

      @ivoryas1696@ivoryas16963 ай бұрын
  • Amazing! A scientist's opinion I both trust and appreciate. You already roasted MGK, so my confirmation bias tells me I made the correct decision. Been following you since the very first "Because Science" episode! I love your personal account and where it's gone!

    @JoshuaBenitezNewOrleans@JoshuaBenitezNewOrleans10 ай бұрын
    • Between you, LegalEagle, and Some More News I got the scientific, legal, and social implications of ChatGPT!!! So, honestly I’m feeling significantly more clear on my stance with the tech and the things I support in ethically and equitably integrating the technology into human societal structures

      @JoshuaBenitezNewOrleans@JoshuaBenitezNewOrleans10 ай бұрын
    • Bro, he's not an actual scientist yet. He's a science enthusiast and educator, and says so himself many times. Confusing the two makes you a prime target for misinformation.

      @mr702s@mr702s10 ай бұрын
    • Yea, it’s crazy how I just kind of found him on Nerdist and get to see how far it’s come along since then.

      @SuperShanko@SuperShanko10 ай бұрын
    • @@mr702s He HAS worked in a lab and HAS been a working scientist before. He just isn't anymore. When speaking conversationally within the community, it's fine to call him that, imo.

      @wolfiemuse@wolfiemuse10 ай бұрын
    • Well said. Kyle's awesome.

      @patricknez7258@patricknez725810 ай бұрын
  • I clicked on this video faster than the half life of Lithium-12

    @themanED@themanED10 ай бұрын
    • I clicked on this video faster than the time it takes for Valve to release another Half-Life.

      @codatheseus5060@codatheseus506010 ай бұрын
    • Well I didn’t. Ok I lied, I did.

      @guyincognito5663@guyincognito566310 ай бұрын
    • Hah! Amateur. I clicked on this video faster than the half life of Hydrogen-5

      @rosskrt@rosskrt10 ай бұрын
    • Twelve divided by four is three. Lithium has three protons. Half Life 3 confirmed?

      @JimmyCerra@JimmyCerra10 ай бұрын
    • @@JimmyCerra Valve announced they are working on the next Half Life. The Half Life series had Half Life, Half Life 2 and Alyx which didn’t increment the number. 3 comes after 2. Half Life 3 confirmed?

      @guyincognito5663@guyincognito566310 ай бұрын
  • That was a great video. I learned much more about ChatGPT than I've found so far anywhere else. I like that you discussed it as a computational model including the matrix algebra that it uses. Thank you for sharing.

    @marksilbert7005@marksilbert70057 ай бұрын
  • Thanks for the video! I think it is a good primer for people who want an overview of the topic. I understand that it is necessary to oversimplify a lot to keep this video from being 24h long, but I still have some small corrections that may be interesting for some people :)

- 3:31 It does not "form" the connections during training, it just configures the right values for them. They also are not connections between words; they are "parameters". Usually in neural nets, a parameter is either the weight of a connection between two neurons, or a "bias" inside a neuron. Each neuron has one bias parameter that tells it how easily it can be excited.

- 5:05 The alignment problem runs way deeper than this. OpenAI just scratches the surface, and we need to make sure we don't accidentally think that alignment is solved when ChatGPT behaves nicely, because it is not. If you want to learn more about the alignment problem and why reinforcement learning from human feedback can't fundamentally solve it, I highly recommend the excellent videos of Robert Miles here on youtube (@RobertMilesAI). He does a fantastic job of explaining it without getting too much into the technical jargon.

- 13:40 It operates on tokens, not words. Some of the tokens GPT knows are full words, but most of them are single characters or small character sequences like "cha" or "ter". This is done so the model can also process and output words it has never seen before. Just try it: say to ChatGPT "repeat the word oycinxongsefxucfzgeucc back to me", and it will do that, despite it not being a word it has ever seen in training. This also makes it better at handling multiple languages; after all, the training input is also highly multilingual.

- 21:57 "Knowing" or "thinking about" something are very vague terms, and while it is good to make a distinction that humans work differently than ChatGPT in some key areas, it can also mislead people. Depending on how you define knowledge, humans may also not "know" what atoms are, because everybody is just referencing past experiences about the topic, be it interpreting scientific results or being told about those concepts. We need to make sure we don't fall into the trap of defining human traits just as "well, not what this AI is doing". If you want to make a verifiable statement about the difference between humans and current AI, you could say that AI is first trained and then kept static while it is queried for outputs, while humans learn and act at the same time. At least currently.

- 25:28 "It has about as many connections and weights between those connections": there are no weights between connections. There are connections between neurons, and weights define those connections. Also noteworthy: the human brain has about 86 billion neurons, but way, WAY more connections. Estimates for even parts of the human brain are well in the trillions of connections. Some say that GPT-4 may approach this regime, but keep in mind that biological neurons also have way more functionality than what is modeled in artificial neurons, so those number comparisons are always very fuzzy at best. I wonder if anyone reads this last sentence.

    @LostMekkaSoft@LostMekkaSoft10 ай бұрын
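
The weight-vs-bias distinction in the correction above is easy to see in how parameters are counted for a fully connected layer: one weight per connection plus one bias per neuron. The layer sizes below are arbitrary examples, not the sizes of any real model.

```python
def dense_layer_params(n_in, n_out):
    """Parameter count of one fully connected layer."""
    weights = n_in * n_out  # one weight per connection between neurons
    biases = n_out          # one bias per neuron in the layer
    return weights + biases

# e.g. a tiny two-layer image classifier: 784 inputs -> 128 hidden -> 10 outputs
layers = [(784, 128), (128, 10)]
total = sum(dense_layer_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 101770 parameters
```

Scale the same arithmetic up across many huge layers and you get the "175 billion parameters" headline numbers.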
  • This really helped explain it! I've heard the basic principle before, but never understood what it meant before hearing about the many-dimensional matrix the weights actually sit in.

    @TheRenofox@TheRenofox10 ай бұрын
  • I’m gonna say, this video is probably the best as far as formatting, data presentation and visuals go in a long time. This is the perfect blend of Facility while giving homage to BS in a respectful but also humorous way (around 20:00 ) and I’m here for it. I loved this video Kyle and I would love more in depth explanations of things like this; things that people commonly misunderstand or are anxious to think about, etc etc. Great job man. Also bringing in some CLOAK vibes in your merch ad, and I’m here for that too lol. Super glad you work with them, really hope you get the opportunity to design a cloak drop. I’m sure the bois would be willing to hear you out! Anyways. Excellent video Kyle. Always a pleasure to watch your videos.

    @wolfiemuse@wolfiemuse10 ай бұрын
    • That means a lot to me, truly, thank you

      @kylehill@kylehill10 ай бұрын
    • @@kylehill I mean it man. Appreciate you replying to everyone in here too. This was great. I think you provide a great service to the internet and you genuinely seem like a chill guy. :) have a good evening Kyle!

      @wolfiemuse@wolfiemuse10 ай бұрын
    • @@kylehill CatGPT is a long way off, though. You have 53,000 cats in the same room and you try getting them to stay still for long enough to get numbers attached to them? Good luck with that. Cats also don't like multiple dimensions; it makes their fur stand up on end.

      @gentleken7864@gentleken786410 ай бұрын
  • I spend a fair bit of time helping people with MidJourney AI prompts, and it gets somewhat tedious having to remind people that it does not actually *know* anything. During training, when it sees a lot of similar images associated with a word or set of words, the common visual elements of those images get burned into its neural net as a "style". Everything is a style (weak or strong), whether it's an artist, a subject, a medium, or just a simple word like "and" or "blue". It can then glom together different styles to make something "new". But the building blocks are still based on things it has seen enough times to make a style. It can make a "giraffe with wings" because it's seen both giraffes and wings and it just visually combines them, but it won't make a coherent "upside down giraffe" because that phrase and its corresponding images have never existed in its training data, so it's never created a style for that combination of words, and it doesn't *know* in a general sense how to make any arbitrary thing upside down. The strongest style for a giraffe is upright. But it has seen things reflected in water, so I might ask for a reflection of a giraffe and it'll try to draw one upside down, without knowing that's what it is. It can't reason or extrapolate; it only imitates.

Point of all that is, ChatGPT (while much bigger) is no different. It doesn't *know* anything. It just associates words with other words with other words and sprinkles in a little randomness. It *imitates* the *style* of general knowledge. It is often right because the aggregate of what it has seen during training (wisdom of the crowd) is right, but when it is wrong, it is confidently wrong, because it doesn't *know* any better. It doesn't cross-check itself, because it doesn't *know* how. If it supplies a reference which doesn't exist and you ask it "Are you sure that is a valid reference?", it'll answer "yes", because the connections in the neural net that made up that reference are still there, even if wrong.

If you ask it to write code, it doesn't *know* if it's good or secure code, and there's nobody cross-checking it. And because it doesn't contribute answers to Stack Overflow questions (having only been trained *from* them), there's nobody up/down voting its answers in such a way that it will ever learn any further. The concern is that with all these totally private conversations with ChatGPT slowly filtering out into the world, unattributed, it'll create a feedback loop where generative AIs are all training on each other's output rather than the original source of knowledge: humans.

    @daemn42@daemn4210 ай бұрын
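
The "associates words with other words" behavior described above is typically implemented with learned word vectors compared by cosine similarity: related words end up pointing in similar directions. A minimal sketch with invented 3-dimensional vectors (real models learn hundreds or thousands of dimensions during training):

```python
import math

# Hypothetical hand-made "embeddings" for illustration only.
vectors = {
    "cat":    [0.9, 0.8, 0.1],
    "kitten": [0.85, 0.75, 0.2],
    "bridge": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["cat"], vectors["kitten"]))  # close to 1: "similar"
print(cosine(vectors["cat"], vectors["bridge"]))  # much smaller: "unrelated"
```

Nothing in those numbers encodes what a cat *is*, only how "cat" relates geometrically to other words, which is the comment's point.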
    • "there's nobody up/down voting its answers". You can do a thumbs up or down in its answers. I never looked for an explanation of those buttons but I think they give feedback to the AI

      @diegopescia9602@diegopescia960210 ай бұрын
    • @@diegopescia9602 ya, but it's still a private conversation and it's ephemeral (disappears after you're done) so you can't come back days later and tell it how it did, based on your ultimate results. Stackoverflow also has somewhat moderated up/down voting in that you need a certain amount of reputation to do it (have provided good answers yourself before). The idea is, those who have real experience are those providing feedback on others answers. ChatGPT and others aren't contributing to those answers, so they're never getting feedback from those with experience. The person *asking* the question of ChatGPT doesn't really know if it's a good answer or not.

      @daemn42@daemn4210 ай бұрын
    • This is all the stuff I can never find addressed in these comment sections, and I would never be able to explain it in such simple concrete terms. And what's this? you used Hapsburg AI for the kicker? Honestly. This comment made my night.

      @RobertDrane@RobertDrane10 ай бұрын
    • I like to tell people who drool over AI generated images (especially, you know) that they're like a fly trapped in a carnivorous orchid, being tricked by something that has only mindlessly evolved to trick them into thinking it is useful. Natural selection, not intelligence.

      @Blockistium@Blockistium10 ай бұрын
    • It does not know, but it is very capable with little input.

      @TheDefender123Plays@TheDefender123Plays10 ай бұрын
  • 😂 "the Hemsworth your moms says you already have at home," legit made me cough spatter my drink, because I think that every time I watch a video.

    @KittyxStarz@KittyxStarz10 ай бұрын
  • this was super helpful and educational and I've already done tons of research on the subject

    @sharxbyte@sharxbyte10 ай бұрын
  • This is my field of study, and it's remarkable how well and easily you explained it! A quick note: we do actually know why neural networks work! In the late 1980s, some mathematicians proved that neural networks are universal function approximators. This means that (with a big enough network and the right weights) a neural network can be made to approximate anything that can be modeled mathematically, which is almost everything. It was only more recently that researchers figured out how to actually give the network the right weights to do that, and there are all sorts of tricks like attention that help the model learn!

    @Givinskey@Givinskey10 ай бұрын
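
The universal-approximation claim above can be demonstrated at toy scale: a single hidden layer of tanh units, trained by plain gradient descent, learning to fit y = x². The network size, seed, and learning rate here are arbitrary choices for illustration, not anything from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = x ** 2  # the target function to approximate

# One hidden layer of tanh units -- the setting of the universal
# approximation theorem (Cybenko/Hornik, late 1980s).
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(x)
loss_before = float(np.mean((pred - y) ** 2))

lr = 0.01
for _ in range(5000):
    h, pred = forward(x)
    grad_out = 2 * (pred - y) / len(x)          # dLoss/dPred
    grad_h = grad_out @ W2.T * (1 - h ** 2)     # backprop through tanh
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(0)
    W1 -= lr * x.T @ grad_h
    b1 -= lr * grad_h.sum(0)

_, pred = forward(x)
loss_after = float(np.mean((pred - y) ** 2))
print(loss_before, "->", loss_after)
```

The theorem guarantees such weights exist; the training loop is what actually finds them, which matches the comment's distinction.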
    • That being said, the explainability part that you brought up is a good point! This field is new enough that explainability has not been fully figured out yet. There are some very interesting technologies based around just that, but they are currently too simple to help with something like ChatGPT. Another interesting thing to note is that if you ask ChatGPT to explain its thought process (this might actually be limited to GPT-4, ChatGPT's newer brother), it will actually give you an answer and will be able to do more complex tasks!

      @Givinskey@Givinskey10 ай бұрын
    • This is a bit more philosophical than it is technical, but I think the fact that neural networks are universal function approximators is far from knowing how or why ChatGPT makes a decision about what its next output will be. Like you mentioned in your reply, explainability methods for these models are pretty new, and in my opinion, from looking at the literature, they are not very good, and especially bad for large, complicated models working in high-dimensional spaces. So we really don't know how or why these models do what they do, just that they can learn to approximate arbitrary functions (and even then, I think there are bounds on what types of functions they can approximate, i.e. continuous ones).

      @justinallen1578@justinallen157810 ай бұрын
    • @@Givinskey I don't see how it's fundamentally unexplainable; it's just really hard to wrap your head around because it's such a foreign concept to us. Visualizing the neuron activations for an image recognizer is pretty good for identifying blatant bugs when you put in an input and it gets it confidently wrong. It doesn't tell you HOW to fix it, but it does show you where an issue might be.

However, GPT-4 is on another level. They did recently make it worse, as it was using too much money per request. Context length got cut in half, and the difference is noticeable. GPT-4, I've found, is very capable of walking through thought processes and reasoning. GPT-3.5 was very much not, though it's gotten maybe slightly better now that they train GPT-3.5 using output from GPT-4. Oh boy, that's not a brewing future problem at all now, is it!

But GPT-4 is still not "good enough". It mostly replaces things you would need to do a quick google and read for, or makes a task that would take 15 minutes take like 15 seconds. While a time saver (sometimes enormous), I find it doesn't exactly allow me to 10x productivity, because the interface for using it is still super clunky. You need to open up a browser window, select a prompt style, type your query in a nice markdown editor, and then paste everything in. All of that takes time, and while optimizations can be made, it needs to be easier to generalize and integrate into other applications. You still need to create the tools that allow you to 10x productivity. GPT-4 by itself is somewhat worthless. It's like electricity: very potent, but fundamentally of minimal use by itself. It does, however, make creating those tools faster. The major asterisk is that you need to know everything that you are doing. Overall, it's a rich-get-richer situation, just like every technological innovation of the past. GPT-4 doesn't outright replace anything. It's not at that level.

For example, at modeling a system architecture for a platform like YouTube, it fails, and you need to walk it through the process step by step, correcting any mistake, or else it goes a little off the rails. Both GPT-3.5 and GPT-4. GPT-4 can get further, but not by a whole lot.

      @LiveType@LiveType10 ай бұрын
    • ​@@Givinskey What's really cool is that recently, researchers have used GPT to help explain GPT 😁 They encoded the activations of specific neurons across the range of a prompt, then had GPT analyze these across many prompts and offer suggestions for possible explanations of what that neuron is doing. It offered a few, some of which were not thought of by the researchers, and provided new directions to explore and inquire into. So in a way, GPT is doing neuroscience research on itself 😁

      @IceMetalPunk@IceMetalPunk10 ай бұрын
    • @@LiveType It might not be truly fundamentally explainable, but the complexity of a model like GPT-4 is so obscene that no one human could hope to understand it in its entirety, which is a pretty big barrier to complete understanding. Even modern microprocessors aren't fully understood, and those are designed completely manually over time; they're not self-learning systems composed of billions of neurons in a neural net. The web interface is also not the only interface; OpenAI has an API available. I do agree that the biggest imminent threat with AI is it getting exploited by wealthy people to increase their own wealth, long before it's capable of staging a rebellion, though.

      @bosstowndynamics5488@bosstowndynamics548810 ай бұрын
  • You make the best content out here, I am really enjoying everything you post. Thank you for the in-depth explanation!

    @Papa_Tin@Papa_Tin10 ай бұрын
  • Thanks for tackling this subject in an accessible way. I'd love to see a follow-up video that addresses the ethics of using copyrighted works to train AI without the copyright holders' consent along with the environmental impact of AI systems like ChatGPT (I've read that they consume huge amounts of energy and water).

    @Hal_Evergreen@Hal_Evergreen10 ай бұрын
  • At the end, when Kyle said misinformation would be a problem, it reminded me of Robert Heinlein's book "Stranger in a Strange Land", where you can't trust computers or videos to record an event. The story presents "fair witnesses": humans trained to memorize what they see and not let bias skew how they describe what they experienced / witnessed.

    @ectopicortex@ectopicortex10 ай бұрын
    • Yes. With ChatGPT combined with what deepfakes can do with voices and faces, I would not be surprised to see a movie made exclusively using AI in the near future.

      @SilentStorm98@SilentStorm9810 ай бұрын
    • Of course you are correct, but the way that's worded makes it seem like misinformation isn't already a gargantuan problem. Go fact-check basically any story you see on any mainstream media outlet. Wait till a story comes along that you have a great deal of knowledge in. You will inevitably find that it misses the mark in very key areas, often enough to give the viewer an unacceptably distorted picture of the story. Deepfakes and all of that are only going to add to the problem. It's going to be fascinating.

      @jasondashney@jasondashney10 ай бұрын
    • The irony of Kyle talking about misinformation never ceases to amaze me

      @untitled795@untitled79510 ай бұрын
    • What specifically are you referring to? I’d like to know about his skeleton(s)?

      @ectopicortex@ectopicortex10 ай бұрын
    • @@ectopicortexanything Covid related, anything CCP related, he’s just an unabashed WEF/pharma corpo-shill

      @untitled795@untitled79510 ай бұрын
  • “And for good reason” exceptionally spot on John Oliver

    @bilbolaggins2431@bilbolaggins243110 ай бұрын
    • lol ik right?

      @kylehill@kylehill10 ай бұрын
  • Extraordinary video! One of the best on the platform that remains easy to understand.

    @glitchy_weasel@glitchy_weasel9 ай бұрын
  • Just discovered this channel, and I won't be going anywhere anytime soon. Funny, educational, and well made. Keep it up, Mr. Hemswor… Hill… Hill!!!

    @Gopher86@Gopher869 ай бұрын
  • If more and more AI-written stuff is out there, OpenAI will also have to watch out not to include these texts in future training. I would imagine it would really screw up the model if it were fed too much of its own output, but it would also be really interesting to see the consequences.

    @nikx@nikx10 ай бұрын
    • It will converge on a single word, like Malkovich

      @DrDeuteron@DrDeuteron10 ай бұрын
    • It's already happening. Generative "AI" models are running into recursive problems. Give it a few months and they'll make themselves useless.

      @gloryfiedrebel@gloryfiedrebel10 ай бұрын
    • Its own output is actually some of the best training data you can provide. They absolutely use it, right now.

      @vicc6790@vicc679010 ай бұрын
    • @@vicc6790 This. Also it's easy for them to control what data the model receives by limiting it to specific dates (i.e. data up to 2021).

      @absta1995@absta199510 ай бұрын
    • @@vicc6790 Feeding back outputs you want to reinforce into the system makes sense, but if outputs eventually become part of the LAION datasets, won't that reinforce already-present biases? I think there would be a difference between knowingly feeding back curated outputs and reusing outputs unknowingly.

      @nikx@nikx10 ай бұрын
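
The feedback-loop worry in this thread can be illustrated with a toy experiment: retrain a bigram model on its own output, over and over. Because generated text can only ever contain transitions the model already knew, the set of transitions it knows can never grow, only shrink. This is a sketch of the degradation mechanism, not a claim about how any real model is actually trained.

```python
import random
from collections import defaultdict

def train(tokens):
    """Build a bigram table: which words were ever seen following which."""
    model = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        model[a].add(b)
    return model

def sample(model, start, n, rng):
    """Generate up to n more tokens by following known transitions."""
    out = [start]
    for _ in range(n):
        nexts = sorted(model.get(out[-1], []))
        if not nexts:
            break
        out.append(rng.choice(nexts))
    return out

rng = random.Random(0)
corpus = "the cat sat on the mat while the dog ran to the red mat".split()

model = train(corpus)
sizes = [sum(len(v) for v in model.values())]
# Repeatedly retrain on the model's own output.
for _ in range(3):
    generated = sample(model, "the", 50, rng)
    model = train(generated)
    sizes.append(sum(len(v) for v in model.values()))
print(sizes)  # number of known transitions per generation, non-increasing
```

Each generation can only rearrange what the previous one produced; any transition that doesn't happen to get sampled is lost forever, which is the collapse the commenters are describing.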
  • I absolutely love the different aesthetic of this video

    @DetailedFrame@DetailedFrame10 ай бұрын
    • brand new rooms!

      @kylehill@kylehill10 ай бұрын
    • @@kylehill The Facility Must Grow

      @Hevlikn@Hevlikn10 ай бұрын
    • uh yeah... the black one was nice :D

      @AwesomeSoundsEng@AwesomeSoundsEng10 ай бұрын
  • This video helped a ton. I’m definitely not ready to give a lecture about how it works, but it helps demystify it.

    @Lyrictheac@Lyrictheac10 ай бұрын
  • In regard to determining "cat-ness", I've assumed that it was just due to creating associations between ideas, thoughts, or emotions. Those associations can be strengthened over time or through the nature of the experience itself. (An event triggering PTSD would likely be an example of the latter.) My guess is that if I took a simple drawing of a tree and added some round fruit to it, I could get you to say, "That's an apple tree", by coloring the fruit red or maybe even green. On the other hand, if I then changed the fruit color to orange, you'd likely say, "That's an orange tree." (I might even get some to call it a peach tree.) All I'm doing is working off the associations that we've created for fruit trees and for those specific fruits. Along those lines, I'd probably confuse people if I then changed the color to yellow. "That's... not a banana... is it a weirdly shaped pear?"

    @shinaikouka@shinaikouka10 ай бұрын
    • Lemon tree. The most confusing color would be blue since there are no naturally occurring foods that are blue. Foods that we label as blue, like blueberries, are actually a deep shade of purple.

      @-._.-KRiS-._.-@-._.-KRiS-._.-10 ай бұрын
    • @@-._.-KRiS-._.- No, blueberries are actually blue.

      @Mo_Mauve@Mo_Mauve10 ай бұрын
    • @@Mo_Mauve No... they're indigo

      @FIRING_BLIND@FIRING_BLIND10 ай бұрын
    • You think pears are yellow???

      @AyuwuSuperFan@AyuwuSuperFan9 ай бұрын
    • @@AyuwuSuperFan Off the tree, yeah, sometimes

      @OakPotatoo@OakPotatoo8 ай бұрын
  • I clicked because I misread the title as "ChatGPT explained comically", but now I'm sticking around for the full explanation

    @Chloe-ch6mc@Chloe-ch6mc10 ай бұрын
    • Lol

      @patricknez7258@patricknez725810 ай бұрын
    • I mean, there *were* plenty of comical parts, so you misread but were not misled 😄

      @kelsanggyudzhin2340@kelsanggyudzhin234010 ай бұрын
  • I clicked on this to find out the details of what ChatGPT was all about, because I'm a writer. Your explanation really clears up what this program is and what it can do. And you splayed out on the floor after running is basically my brain after all the math involved with this. I need more coffee. My cat is that way with rubber bands. I have no idea where she finds them.

I've been out striking with the WGA... and subsequently getting more than my daily quota of steps in. From what I've heard from the awesome people on the line, there's a consensus that AI is useful as a tool to help with writer's block. AI in itself is incredibly helpful in many ways. We use it all the time. The writers aren't against this. I've sometimes used a site that generates descriptions when my brain farts on how to describe something. If I get inspired, I'll take what I learned from it and CREATE MY OWN that weaves into my work.

The problem is when capitalism gets involved. One of the things I'm hearing is that screenwriters have a very valid concern that, as ChatGPT improves, productions will hire writers to write three(ish) screenplays, train an AI to study their style and voice, then fire the writers and continue with the AI. Maybe they'd hire a couple of editors to make sure it makes sense. Messed up, it is. Yes. Writing as a career will become gig-by-gig in a way that destroys future writers' chances of being hired to write for shows, films, articles, etc. It was already insanely hard for an unknown like me to get noticed by anyone, and for my work to be wanted by anyone. The saying "I can paper my walls with rejection letters" isn't stretching reality.

I'm active mostly on Tumblr as a writeblr (writers of Tumblr), and I've heard three pretty scary things so far:
1) People are inputting unfinished fan fiction works into AI to generate an ending. Like... WTF.
2) AO3 now gives you the option to opt out of having an AI scan your work to learn. You're automatically opted in. You have to go to your settings to opt out.
3) Some publishing companies and writing contests have been flooded with AI-generated works to the extent that they've had to close their unsolicited submissions inboxes and either freeze or simply stop a contest.

People have no idea how much hard work, time, and effort go into writing the things they love. Quality work can take years, because people have lives that often influence creative flow and the ability to create. My current novel has taken me 4.5 years to write. I'm in round 2 of edits. I wrote a short story recently for a contest for The Writer's College wherein they were forced to include this in their terms and conditions: *"Absolutely no generative AI to be used (ChatGPT etc.). If we deem stories were not written by a human they will be excluded, and the author banned from entering all further competitions with us."* Sucks that this has to be a part of this now, right?

So TL;DR: I'm not against LLMs; they're helpful. I'm against people using them as a lazy shortcut to skip over the work that goes into writing, which completely devalues the gauntlet of study and training people go through, and I'm against companies using them to cut expenses off of their budgets.

    @ohkaygoplay@ohkaygoplay10 ай бұрын
    • Similar issues are plaguing the music industry. In 2002, Michael Jackson joined Al Sharpton at a press conference about how record companies cheat their artists, with the burden falling harder on Black artists, bc most bad things do, but Michael made it clear it was a problem for all artists. This was unusual for Michael, who tended to use his songs to express these ideas. But he'd been particularly fed up with Sony/Epic. He made a few similar appearances. He pointed out a few things:

1. He named certain artists that were perpetually on tour, because tours generally make more money than record sales, so to avoid going broke, this was necessary. I'd known for a while that the cost of making an album/cassette/CD (the physical products) was mere pennies, and companies didn't give artists their fair share of sales. (Michael notoriously hated touring as he got older. The reasons he gave were correct, but he should have told everyone he had lupus: between the basic side effects of getting older and lupus becoming more difficult to cope with as the illness advanced, touring became more physically grueling. As an American, he was covered by the Americans with Disabilities Act. But he never discussed his health problems unless given no choice. But I digress.)

2. He owned half of Sony's music catalog, but his contract was almost up. He only had to create one more album, and he was then free to go elsewhere, still owning his half of the Sony/ATV catalog. He said Sony was pretty pissed off about this bc he wasn't selling his half to Sony. Michael's most recent release at that point, the highly underrated _Invincible_, was barely promoted, which Michael recognized as odd given all the effort put into making it, and while Michael was a humble guy, he knew the kind of effort previously used to promote his albums and expected a similar effort. It was just math: Michael's albums sold. Even if he'd never outdo _Thriller_, he had a big enough fan base that it made no sense not to make sure this album came as close as possible to Off The Wall/Thriller/Bad/Dangerous.

3. He was showing that well-established artists, people like Sammy Davis, Jr., Little Richard, and others (legends while they were alive) had to tour endlessly to survive, so if it affected big stars like that, up-and-coming artists and smaller artists would struggle more. He was speaking up for himself as well as all artists.

After this, the trial derailed his efforts, and he of course died 7 years after all this. Then his estate/Sony started releasing some pretty sketchy posthumous albums of songs MJ never included on his albums. Some were completely finished tracks; most were not complete. At least one song was one that Michael had written lyrics for; the music was made, but he never recorded the lyrics, so they hired an MJ impersonator to do the vocals. This prompted many artists, especially hip hop artists that had run into issues of their own, to start adding a clause in their wills that under no circumstances should their unreleased music be released after they died, and they started working to retain or regain ownership of their masters, encouraging other artists to do the same. Right now on KZhead, there are channels that use AI to create vocal "performances" of Michael covering songs he never did, and it's so damn close to his voice.

Now there are new artists bypassing record labels completely and using KZhead, TikTok, Instagram, and all the streaming services to establish a music career, and having great success. Connor Price isn't on a label; his lyrics brag about how he will never sign a record company deal because "these are my songs!" Others he sometimes collabs with, like BBNo$ and Nic D., are doing the same. Artists of all genres are doing this. Writers might want to take a similar approach.

Things like Substack and Amazon's self-publishing service give writers some options. People who write teleplays and movie scripts are definitely another story; I don't know enough about that process. I've read some great stuff from Substack writers, and I subscribe to my favorites for the subscriber-only content. Fantastic fiction and non-fiction. The potential is there, I think, for writers to do what musical artists and KZhead content creators do: bypass the big companies and market directly to the public. There's superb content here on KZhead, like this video: entertaining, informative, and many have been able to make this their full-time job. Connor Price is a full-time rapper (I believe he lost his job at the start of the pandemic and, with nothing else to do, decided to take a chance at his dream, and it paid off). This seems like the promised democracy-enhancing internet finally happening.

It's definitely hard work, maybe harder than the traditional way. Connor Price makes Shorts with snippets of his songs featured in humorous skits in which he's playing every character, or he and the artist he's collaborating with play multiple roles, with links to the full songs on every platform possible, including here. Those Shorts require all the extra work of recording each part separately, then putting them together. But that's how I discovered him and a bunch of other artists who I listen to on KZhead and Apple Music regularly. (Spotify can pay well, but Apple pays artists better.) I've got to think all types of writers must be able to figure out a way to use similar tactics to bypass those inclined to use things like ChatGPT to rip off writers as you described.

      @paradoxical_taco@paradoxical_taco10 ай бұрын
    • I listened to a podcast by the people behind Some More News (it's called Even More News) about the WGA strike and they went into discussions about how the pay structure for the industry works and how these corporations are trying to use AI to get past the first step without paying writers, and I really think that's messed up. I can see using AI to do some edits to an already written story, but the way they're likely going to cause writers to basically rewrite an AI story for the pay of somebody making minor edits is just wrong. I hope that they comply with your demands and realize how bad the AI actually is at writing.

      @grex2595@grex259510 ай бұрын
    • I'm pretty confident you are ChatGPT.

      @matthewdancz9152@matthewdancz915210 ай бұрын
    • Although quite reductive, not being as efficient at writing as AI isn't a good argument against any regulation. I'd like to think you and I have the same feelings about the art of writing, but I can't help but acknowledge the fact that we humans, as a whole, optimize everything. If AI is better, I see that being the unfortunate and inevitable trajectory.

      @richardfuller3566@richardfuller356610 ай бұрын
    • TLDR: “The problem is capitalism” duh

      @Somebodyherefornow@Somebodyherefornow10 ай бұрын
  • This reminds me of all the Star Trek TNG episodes where some neural network was given a chance to write itself / grow, and then became some kind of sentient lifeform - such as the Enterprise D's computer, the Exocomps, and of course, Data himself.

    @yahozak@yahozak10 ай бұрын
    • Oh, do you remember the episode with Data on the holo-deck in a Sherlock Holmes-setting? If you want to create AI, just say: "Be a worthy opponent for Data."

      @r4nd0mguy99@r4nd0mguy999 ай бұрын
  • What is equally interesting is what happened after the "leaking" of LLaMA. It seems to have stirred up some quite significant development and discoveries on how to make these models more efficient and available, even alarming people at the big AI players that they could lose traction. And on the topic of understanding, the best example is the Go AI: it beat the best player, but after analysing its behaviour a flaw was found which could be exploited, a flaw stemming from the fact that it doesn't really understand what it's doing and what is important.

    @joltrail3588@joltrail358810 ай бұрын
  • I like how chatgpt breaks the conversation into chunks and analyses the question or request and gives an expected response based on expected trained replies and doesn't "read" the words typed in. At least this is how my brain gets it. I'm enjoying the machine learning tools and tech coming out.

    @huuua2@huuua210 ай бұрын
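That chunking step can be sketched in code. GPT models first split text into tokens using a subword vocabulary (byte-pair encoding in practice); the toy greedy longest-match tokenizer and tiny vocabulary below are invented for illustration and are not OpenAI's actual tokenizer.

```python
# Toy subword tokenizer: greedily match the longest vocabulary entry.
# The vocabulary here is hypothetical; real GPT models use a learned
# byte-pair-encoding vocabulary with tens of thousands of entries.
def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])          # unknown character: emit as-is
            i += 1
    return tokens

vocab = {"read", "ing", "cat", "s", "the"}
# tokenize("readingcats", vocab) -> ["read", "ing", "cat", "s"]
```

The model then works on these token chunks, not on the raw characters, which is one reason it never literally "reads" the words as typed.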
    • I mean, yes, but also, I'd argue that human brains read in a similar way. We start with the symbols, then convert that into a representation of the word based on our neural configurations. Then we propagate that representation through our own neural networks and end up with a representation of the next word we want to say or write; then we convert it back by saying or writing it.

      @IceMetalPunk@IceMetalPunk10 ай бұрын
    • yeah, but we also assign meaning to those symbols, and we choose those symbols because we associate them with things in reality

      @JakalTalk@JakalTalk10 ай бұрын
    • @@IceMetalPunk man this gets me thinking. You are correct, I assume (who knows, really, though?). I would just think that there is a lot more cross-referencing going on in the human brain. When we read the word cat, we can translate the symbols into a meaningful concept. We can visualize and/or recall memories as well. There is so much going on in the human mind all at once, it is crazy. But maybe it is less complex than it might seem. Probably even God doesn't fully understand it. So maybe we are just a lot of different neural networks all running at once and working together to produce the mind.

      @Youbetternowatchthis@Youbetternowatchthis10 ай бұрын
    • @@JakalTalk "We choose those symbols because we associate them with things in reality" -- do we? That's how hieroglyphics, and some ideographic languages, work, but that's not universal. Just look at English: how do any of the symbols making up the words you're reading now connect to the concrete objects and abstract ideas they denote?

      @IceMetalPunk@IceMetalPunk10 ай бұрын
    • @@IceMetalPunk When you said "symbols", i thought you were talking about abstract images which stand-in for the words themselves. Seems that you were talking about Letters? Sorry for the confusion!

      @JakalTalk@JakalTalk10 ай бұрын
  • As someone who is getting into "AI", this is simply the best tl;dr of a language model, down to the essence of the math that is used. Almost felt like one of my data science prof's classes minus the nitty-gritty code and stuff like activation, also a little bit more in-depth than 3b1b's vids. Keep up the good work!

    @fishappy0_962@fishappy0_96210 ай бұрын
    • I love 3Blue1Brown, it's an amazing channel!

      @cameron7374@cameron737410 ай бұрын
    • @@cameron7374 I'm a huge fan, he's insanely talented, excels at conveying lessons and math, and Python stuff too. Fun fact: he codes his videos using a Python library that he made.

      @fishappy0_962@fishappy0_96210 ай бұрын
    • 00😊0

      @hendrikheinz3652@hendrikheinz365210 ай бұрын
    • 😊

      @hendrikheinz3652@hendrikheinz365210 ай бұрын
    • ​@@cameron7374😊

      @hendrikheinz3652@hendrikheinz365210 ай бұрын
  • I loved this video and I think it's one of the most informative videos about ChatGPT. Thank you.

    @oo2542@oo25429 ай бұрын
  • GREAT! That's the exact video I wanted to watch about ChatGPT. Everything... in a nutshell, right. Thank you a lot. Great style. Cheers

    @RGSTR@RGSTR10 ай бұрын
  • Was really interesting to hear about the math part. I'd already read about a vague idea of what neural networks are and how it's all basically advanced RNG, so it was cool to see a video dig a little deeper. Also, I enjoy the whole evolution parallel with AI training, where the weights randomly mutate and then get selected for by whatever is most fit for the task.

    @jessicaraven9546@jessicaraven954610 ай бұрын
    • Careful, the weights of a neural network are not randomly mutated. During training an algorithm called backpropagation is used to calculate the best way to change the weights so that the network solves the examples shown. This algorithm is the key piece that makes modern neural networks work; you would never get to ChatGPT levels with random mutations or evolutionary algorithms because the network is too big.

      @ernestonoyagarcia2254@ernestonoyagarcia225410 ай бұрын
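A minimal sketch of that idea: gradient-based weight updates rather than random mutation. This toy example fits a single made-up weight to invented data; real backpropagation applies the same gradient rule through many layers via the chain rule.

```python
# Gradient descent on a one-weight model y_hat = w * x with squared error.
# Instead of mutating w randomly, compute the error gradient and step
# downhill -- the core move backpropagation performs at massive scale.
def train(samples, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            y_hat = w * x
            grad = 2 * (y_hat - y) * x   # d(error)/dw for squared error
            w -= lr * grad               # move against the gradient
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
# w converges close to 2.0, the slope hidden in the toy data
```

Each update is deterministic and directed, which is why this scales to networks with billions of weights where random search never could.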
    • Linear Algebra requires a Calc 1 pre-requisite (derivatives will come into play), so you could get there self teaching. You would also need a base understanding of Python to start programming your own and understand different paradigms of machine learning.

      @dontdoit6986@dontdoit698610 ай бұрын
    • @@ernestonoyagarcia2254 interesting thats really cool to know

      @jessicaraven9546@jessicaraven954610 ай бұрын
  • I was so ready for this entire video's script to be written by ChatGBT.

    @swolby9230@swolby923010 ай бұрын
    • ChatGreatBritain

      @adrianozambranamarchetti2187@adrianozambranamarchetti218710 ай бұрын
    • Kyle isn't a real human. There was once a real Kyle, but he dove too greedily and too deep into the sciences and was merged into the omni AI... or whatever that thing Bill Gates sold to Elon Musk was.

      @solidicone@solidicone10 ай бұрын
    • I think that shtick died out in march when every news article ended with that lol

      @dewyocelot@dewyocelot10 ай бұрын
    • ChatCBT _😳_

      @mikeoxmall69420@mikeoxmall6942010 ай бұрын
  • Well, here's the thing... what's a criterion for consciousness, intelligence, and sentience that separates us from AI? A neural net that communicates through pre-trained data (memories), some algorithmic guidelines (DNA), larger conversational context (this conversation, and considering larger society as context too), and finally, with some electromagnetic interactions involving randomness and heat, we arrive at something we don't fully understand the creation of: language. It seems for us that language is our best indicator of consciousness, so what exactly makes us different from AI? Chemistry? How long before we have chemical computation involved in the process for AI networking?

    @ncedwards1234@ncedwards12349 ай бұрын
  • Well thanks for trying to explain it. Make a video about the perceptron algorithm that will give you more insight on how neural networks work. But you did touch on it a little in the video with the graphs idea using nonlinear equations.

    @MrSoHighSoFly@MrSoHighSoFly10 ай бұрын
  • Small correction: the feed-forward part is not just feeding the result forward; you're passing it through a fully connected feed-forward neural network, which is the most basic neural network there is (although it can add a lot of weights to the model). And then it passes the result to the next block.

    @pedraumbass@pedraumbass10 ай бұрын
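For illustration, here is a toy version of that position-wise feed-forward block: two fully connected layers with a ReLU in between, applied to one token's vector. All dimensions and weights below are made-up toy values, not real model parameters.

```python
# Position-wise feed-forward block from a transformer layer, in miniature:
# hidden = relu(x @ W1 + b1), out = hidden @ W2 + b2
def feed_forward(x, W1, b1, W2, b2):
    hidden = [max(0.0, sum(x[i] * W1[i][j] for i in range(len(x))) + b1[j])
              for j in range(len(b1))]
    return [sum(hidden[j] * W2[j][k] for j in range(len(hidden))) + b2[k]
            for k in range(len(b2))]

# 2-D toy example: identity weight matrices, with one bias that the
# ReLU clips to zero
out = feed_forward([1.0, 2.0],
                   [[1.0, 0.0], [0.0, 1.0]], [0.0, -3.0],
                   [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
# out == [1.0, 0.0]
```

In a real transformer this same small network is applied to every token position independently, and the hidden layer is usually several times wider than the embedding, which is where many of the model's weights live.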
  • Kyle might be one of the finest science communicators to ever come out of a test tube. Joking aside, this is probably the best video I've seen about ChatGPT. I'm also a big fan of the Half-life histories series. ❤

    @DavidHarrisonRand@DavidHarrisonRand10 ай бұрын
    • Arvin Ash did an excellent beginner's guide to ChatGPT, which I think is good to watch before this one.

      @andoletube@andoletube10 ай бұрын
  • Exceptionally clear explanation of how ChatGPT works inside! Thank you!!!!

    @jamesbond_007@jamesbond_00710 ай бұрын
  • @kylehill That is a perfectly simple way to explain sentience! What is it "thinking" when not being prompted. Thank you so much!

    @badam1814@badam18148 ай бұрын
  • The crazy thing to me is that often times when I give ChatGPT prompts, I tend to mix and match the different languages I speak (currently 5) and he understands everything seamlessly. When you talk about the English language, that's ALREADY huge, but ChatGPT does the math with every single language it knows simultaneously. It's truly mind-blowing.

    @The12hugo@The12hugo10 ай бұрын
    • Poorly disguised humble brag

      @binbows2258@binbows225810 ай бұрын
    • There's no he here, and there's no understanding either.

      @marcusbrutusv@marcusbrutusv10 ай бұрын
    • there is no "he", nor is there any understanding. it's not a person.

      @LilFeralGangrel@LilFeralGangrel10 ай бұрын
    • @@marcusbrutusv understanding is just a model. It absolutely does have a model. In many ways it is better than yours, e.g. speaking 20 languages at once lol, evaluating code, photographic recall of its several-thousand-token context window... in other ways not as much; it can't actively learn new things permanently with the current architecture without access to external tools, for example (even with them it still doesn't learn in the way you or I can, with the current architecture, albeit papers like "Augmenting Language Models with Long-Term Memory" are getting closer and closer)

      @darklordvadermort@darklordvadermort10 ай бұрын
    • ​@@darklordvadermort CGPT does NOT think for itself. It runs a pre-defined set of instructions made by humans. It would have to think to understand, and it does not have that ability. There are people who claim otherwise, and all of those people would be the beneficiaries of billions of dollars if they convinced the right people. I am sure there is no connection.

      @marcusbrutusv@marcusbrutusv10 ай бұрын
  • Great video! You're really good at breaking down complex topics and making them understandable and interesting.

    @Ancusohm@Ancusohm10 ай бұрын
    • I started The Expanse because of you. The only regret I have is not starting sooner. Thank you for the recommendation. Sorry it's kinda late; I saw your episode on it a long time ago, but this video reminded me of it, no clue why. Anyway, I'm really enjoying it, and your content is great as usual. Love ya man

    @IleneOsborn@IleneOsborn10 ай бұрын
  • i thought i had really considered every angle on gpt 3 but now i have 2 or three more eschatological dreads keeping me up. thanks kyle!

    @caseyhamm4292@caseyhamm429210 ай бұрын
  • This video made me realize that the real issue with GPT getting mistaken as "intelligent" is almost entirely because we rely on intelligence, cognition and understanding being expressed through language to one another. No matter how complex our understanding is, in the end we have an internal language model simplifying it into words. So when a model mimics that last step incredibly well, it gets very hard not to expect a similar mind to ours. The only way we can solidly differentiate GPT from us is by how "simplified" ANNs operate and that we have by design made it a massive matrix cruncher. We do not know any way to give it the means to memorize, visualize, hypothesize or verify. It just mimics our language on a word-by-word basis.

    @chielvoswijk9482@chielvoswijk948210 ай бұрын
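That word-by-word mimicry can be sketched with a toy autoregressive loop: given the last word, pick the statistically most likely next word and repeat. A hand-made bigram table stands in for the trained network here; a real GPT conditions on the whole context, not just one previous word.

```python
# Toy autoregressive generation: count which word follows which in a tiny
# corpus, then repeatedly emit the most likely next word.
from collections import Counter

def build_bigrams(corpus):
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, Counter())[b] += 1
    return table

def generate(table, start, n):
    out = [start]
    for _ in range(n):
        nxt = table.get(out[-1])
        if not nxt:
            break   # dead end: no continuation ever observed
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

table = build_bigrams("the cat sat on the mat the cat ran")
# generate(table, "the", 3) -> "the cat sat on"
```

Everything the loop "knows" is frequency statistics; there is no memorizing, visualizing, or verifying step anywhere in it, which is the point of the comment above.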
    • Thank you this is an excellent way of putting it.

      @ellencoleman4604@ellencoleman460410 ай бұрын
  • This explanation is fantastic! There are hundreds of videos on YT trying to explain how ChatGPT works (lots of clickbait), but they are so shallow and either overcomplicate things or just mention terminology that they don't even understand. This is the best explanatory video that actually tries to simplify so anyone can understand what is behind it. Fantastic job!

    @alexalexalex5123@alexalexalex512310 ай бұрын
  • Hi Kyle. Thanks for your video. Very well done and explained. Cheers

    @MisterSkraetsch@MisterSkraetsch9 ай бұрын
  • I watched this a while ago, and I've been thinking about it. I think this could be used (when trained and under the supervision of existing professionals) to create a path from languages today to long-dead languages

    @andrewspohrer7183@andrewspohrer71837 ай бұрын
  • I do really enjoy the way Kyle formats his videos and the way he can explain the most complex of topics and make them easier to understand, thank you for this amazing video.

    @nobody_2611@nobody_261110 ай бұрын
    • ChatGPT Explained Completely.

      @CoolestSwordFighter@CoolestSwordFighter10 ай бұрын
  • Great explanation of GPT. The funky multidimensional dataset is referred to as a vector database. It's really useful in machine learning because it allows models to comprehend relationships between words, images etc., which lets it do things that cannot realistically be done with conventional algorithms.

    @savorsauce@savorsauce10 ай бұрын
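A toy sketch of that "relationships between words" idea: in an embedding space, related words sit close together, which you can measure with cosine similarity. The 3-D vectors below are invented for illustration; real models use hundreds or thousands of dimensions learned from data.

```python
# Cosine similarity between toy word embeddings: related words point in
# similar directions. These vectors are made up, not from a real model.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

emb = {
    "cat":    [0.9, 0.8, 0.1],
    "kitten": [0.85, 0.75, 0.2],
    "car":    [0.1, 0.2, 0.9],
}
# cosine(emb["cat"], emb["kitten"]) is much higher than
# cosine(emb["cat"], emb["car"])
```

Vector databases are essentially indexes that make this nearest-neighbor lookup fast over millions of such vectors.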
    • I can visualize 80 million dimensions and I am the first human there. I feel as if eldritch sanity is also a thing.

      @michaelchaney2336@michaelchaney233610 ай бұрын
    • ​@@michaelchaney2336skill issue

      @egatnavda6786@egatnavda678610 ай бұрын
    • @@michaelchaney2336 are you SCP-6699?

      @alexc4924@alexc492410 ай бұрын
  • Good overview 👍. Also shows how most of what we say is 'somewhat routine/standard', or plan/do for that matter. And as you say, definitely not 'sentient', since it has no idea about semantics. Overall I find AI shows how most of what we do can be 'proceduralized', even our day-to-day conversations. I do disagree that we 'cannot know' how ChatGPT came to its answer; you can in fact have it log everything and thus know how it came to its answer (it's just very burdensome). Also, even without logging you can still give a rough answer based upon current weights (which would not have changed much), but again, burdensome (though this could be analyzed by another AI, say 'ExplainGPT', that would 'look over the matrices' and provide a reasonable explanation). But businesses don't want to explain since they don't want to get into trouble; by saying 'we can't tell' they are avoiding responsibility.

    @GP-ur6if@GP-ur6if10 ай бұрын
  • Thanks so much for the video! I am currently preparing a seminar on this topic. What are your sources? Especially for the part after 20:00?

    @Manuel-mz8fn@Manuel-mz8fn4 ай бұрын
  • Great video Kyle! I went to grad school to study this tech, and this is one of the best descriptions I've seen! Even the best ones usually don't go into as much detail as this - like, I almost never see someone bringing up linear algebra or embeddings! I particularly love that you brought up the Attention Is All You Need paper, and how we still don't know why attention works so much better than all of the fancy algorithmic tricks we used to have to use like LSTM gates and whatnot. I will note a tiny correction: at 14:00 I believe GPT is actually outputting a probability distribution over the words, and randomly samples from that for its output. That's probably (heh) more technical than needed, but it's worth noting that it isn't guaranteed to always produce the most likely token. Also, regarding your closing comments, ChatGPT hasn't really "figured out" English, much less human language in general. Case in point are its hallucinations - these show that it is basically nothing more than Searle's Chinese Room. Speaking of which though, there's also a reason you can only use ChatGPT in English - that's one of the only languages well resourced enough to train a model like this. There's a whole bunch of researchers who are studying not just language generation, but are using things like graph theory and latent models to try and produce natural language understanding, systems that aren't just outputting tokens based on probabilities but are capable of leveraging world knowledge in some way. That's the sort of thing that might lead to actual AGI, but that's so far off it might as well be cold fusion at this point.

    @mattkuhn6634@mattkuhn663410 ай бұрын
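The correction above can be sketched like this: turn the model's raw scores (logits) into a probability distribution with softmax, then sample from that distribution, so the top token is likely but never guaranteed. The logits and tokens below are made-up toy values.

```python
# Softmax over toy logits, then sampling from the resulting distribution --
# the reason GPT doesn't always emit its single most likely token.
import math
import random

def softmax(logits):
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, rng):
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for tok, p in zip(tokens, probs):         # inverse-CDF sampling
        acc += p
        if r < acc:
            return tok
    return tokens[-1]

probs = softmax([2.0, 1.0, 0.0])
# probs sum to 1, with the highest logit getting the highest probability,
# but sample() can still occasionally return a lower-ranked token
```

This is also where the "temperature" setting lives in real systems: dividing the logits by a temperature before the softmax flattens or sharpens the distribution.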
    • See everyone, that's someone who knows what they're talking about! lol I really don't like how fast people are to say "AGI is just around the corner". Heck, even Kyle's closing statement implies it a little bit. I blame marketing.

      @BZero3@BZero310 ай бұрын
  • Great video. It's a hard subject to present using a pop-sci approach and I think you did wonderfully. I think one of the great challenges of these machine learning models is communicating what they're *not* doing. There are a lot of folks who enjoy speculating and they tend to use the passive voice when doing so... which can lead to people thinking about these systems as if they were "thinking," "sentient," or performing the same kind of reasoning that we do. I hope a good, straightforward explanation like this will help calm people down and blow through some of that speculation.

    @jkennethking@jkennethking10 ай бұрын
  • This is a topic that I want to learn more about! Thanks for the intro!

    @davidzacharzuk4755@davidzacharzuk475510 ай бұрын
  • At the time you said you were going to get a cat to demonstrate, one of my cats came up to me and politely chirruped to ask me to let him sit in my lap

    @chestersnap@chestersnap8 ай бұрын
  • Great video, loved it. Can we do a segment perhaps on the ethics of the data that was acquired to train these models? Is it an issue? Is it a legal issue? Is it too late?

    @BigPopaRoth@BigPopaRoth10 ай бұрын
    • One thing, people from underdeveloped & developing countries were hired for AI training on less than US$ 1.5 per hour. There's little to no regulation for this field as these governments lack frameworks and guidelines for outsourced jobs.

      @louisuchihatm2556@louisuchihatm255610 ай бұрын
    • Of course it's unethical. That's axiomatic.

      @erlockruuziik7237@erlockruuziik72378 ай бұрын
  • It's self-reinforcing too, once a plethora of LLM AIs start consuming each others output.

    @FrostSpike@FrostSpike10 ай бұрын
    • So, model merges. That's basically a thing already. It leads to less specific outputs and more hallucinations, right?

      @Xonikz@Xonikz10 ай бұрын
  • Wow, I can honestly say I’m so glad I’m subscribed to this channel. Thanks Kyle!

    @SoulfulJim1@SoulfulJim18 ай бұрын
  • Awesome video, but questions remain. Theoretically, couldn't we run the numbers as you demonstrated while it's running, and from that know what its next word would be and thus understand it, or does the non-understanding lie in the complexity of 12k dimensions? Not sure I got how they apply behavioural controls either, especially if we don't truly understand how it accomplishes them. But why I really posted was to ask about two things: hallucinations and updating. How big a problem are hallucinations? It would seem even larger training data sets would provide diminishing returns. Why do hallucinations seem to get worse the longer a conversation goes on? Are there new techniques on the horizon to prevent this? Could the friendliness influence be a contributor to this problem? On updates, can a dataset be refreshed with only new info being trained on? If not, what methods can be used to keep data up to date? And could it ever match a conventional search engine on new data?

    @Tomjones12345@Tomjones1234510 ай бұрын
  • I love this video; it's really nice to have an explanation which doesn't completely blow everything out of proportion with talk of it being sentient or sapient. I research implementation of single-cell computations, so one thing that always grates on me about ML vids is the equation of real neurons to machine learning "neurons" (units from here on, for clarity). Real neurons have inherent dynamics that artificial units don't, and it makes them so complex in comparison. For instance, to predict the input/output mapping of a single type of cortical cell you need a whole 4-7 layer deep neural network! There's so much we miss out on right now because the brain processes in time and space (the space of input values, not real space); I would recommend Matthew Larkum and colleagues' work on this because it is so interesting. The units in DNNs are based on a neuroscience model from the 60s, which itself is a huge approximation. Obviously a lot is going on with network weights, but the way real brain cells can compose information is so far ahead of what we have at the moment.

    @astralLichen@astralLichen10 ай бұрын
  • Love your videos, Kyle! Very well explained and surprisingly easy to understand for how complex some of these topics are.

    @vindex7309@vindex730910 ай бұрын
  • This video had the highest production value I've ever seen! SO SO GOOD! Also, the jokes omg😭You're hilarious!

    @shristi249@shristi249Ай бұрын
  • Great explainer video! TLDR:
    1 - ChatGPT is part of OpenAI's Generative Pre-trained Transformer series, utilizing huge swathes of text data and neural network magic to understand and generate human-like text.
    2 - It stands on the shoulders of advanced AI research, leveraging technology known as Attention Transformers to hone its understanding and generation of relevant text.
    3 - While remarkably adept at emulating human conversation, ChatGPT lacks true understanding or sentience, operating instead through statistical modeling of language.
    4 - The alignment problem - ensuring AI's values align with human values - remains at the forefront of OpenAI's design philosophy, aiming to produce helpful, truthful, and harmless outputs.
    5 - ChatGPT's societal implications are far-reaching, challenging our perceptions of creativity, authorship, and the trustworthiness of digital content.

    @getbash@getbash6 күн бұрын
  • Amazing video! I would really love to see more of these types of deeper dives on the channel (maybe like the half-life series you have) on a wide range of topics, and the misunderstood history of technology and science.

    @greyingproductions@greyingproductions10 ай бұрын
  • Thanks for making this video. There’s so much people in general don’t understand about ai (myself included) and there’s so much false information and theories based in that lack of understanding. So this kind of information is really valuable.

    @connorjensen9699@connorjensen969910 ай бұрын
  • Just subscribed love the videos Kyle !!!

    @KylesBodyandBraintips@KylesBodyandBraintips10 ай бұрын
  • So, what is the mind's so-called 'internal dialog'? What are we doing while engaged in conversation? What is going on while we are listening to spoken or written word or thinking or speaking? At least in part, I think we're also looking continuously for that just right next word and concepts which emerge from that cycle, which the subconscious (something chatgpt-like) somehow serves up to the conscious/supervisory mind (which has something of a throttle control of its efforts called concentration) in response to its imposition of something like a series of prompts from that higher mind, in view of previous experiences and recent words said, of course. Chatgpt isn't thinking per se. But I'd call what Chatgpt does an aspect of human subconscious thought or something very similar and yet another profound insight into the nature of the human mind. Forever integrating new experience with old, making synopses and briefs that are useful to itself in planning and problem solving, preparing ready responses to potential questions and problems, etc. To solve the most complicated machine in the known universe requires a massed assault from all directions in science and engineering, and IT just won a big victory in that war and dropped a billion jaws in what it could do, while it and its cousins jostle for your attention here and elsewhere and say, "behold...". Nature probably found an even simpler way to do much the same as chatgpt through endless rounds of life, death and procreation. Large model genome training? I also believe that if a machine possesses all the functional aspects of a typical human mind, then the machine would be by definition every bit as conscious and aware as ourselves by the same magic of emergence. And a better Turing test is required - Can it fool you INDEFINITELY (to add a dimension of depth) into thinking that it's human?

    @theodorejackson7760@theodorejackson77607 ай бұрын
  • Very succinct explanation! Thanks Kyle! I have been recently interested in how language models took in sequences of words and couldn't figure it out on my own. You explained it perfectly here! Thanks!

    @inventorbrothers7053@inventorbrothers705310 ай бұрын
  • Getting science facts from Thor is the only way I want to learn.

    @kohnjack@kohnjack10 ай бұрын
  • You're a cinematic genius thats producing some of the highest quality content. Keep up the great work!

    @YourHub4Tech@YourHub4Tech10 ай бұрын
    • *you're

      @Karlach_@Karlach_9 ай бұрын
  • Finally, a video that actually describes what is going on. Nice job!

    @dongage3862@dongage38629 ай бұрын
  • This demystified _many_ of the concerns I had about the app! The amount of steps required to get ChatGPT to its present iteration was a revelation - so much math! so much data! - but your presentation makes it easier to get a grasp on. I don't think I'll ever get even a fingerhold on that 12,288D figure, though.

    @Gyrfalcon312@Gyrfalcon31210 ай бұрын
  • The more I use ChatGPT and learn its limitations, the more I'm convinced most of my friends doing office-based service work are going to either get a lot more work while utilizing AI, or get replaced by AI. It spit out multiple well-written essays on a topic that would've taken me hours of research to produce. All I had to do was fact-check. It was amazing.

    @scpdatabase969@scpdatabase96910 ай бұрын
    • That's what I learned too. I asked ChatGPT multiple questions and most of them where right, but not every question. And after I pointed the wrong answer out it did not repeat the mistake.

      @thegreedyharvest8796@thegreedyharvest879610 ай бұрын
    • I guarantee they'll utilize it more. But human variability in input creates issues with AI output. So they can't ever truly replace humans for most tasks. Especially the creative stuff. And stuff like law. There's just more to those things than input -> output

      @FIRING_BLIND@FIRING_BLIND10 ай бұрын
    • @@FIRING_BLIND Surely that's true surely

      @Karlach_@Karlach_9 ай бұрын
    • You will have to learn to use it to increase your productivity. It will be like when word / PowerPoint or even google came out. Anybody taking a week to print out real slides had to learn new tech. Seriously look how people used to create slides with a company printing department

      @briantoplessbar4685@briantoplessbar46859 ай бұрын
    • @@thegreedyharvest8796 the words where and were are not interchangeable. Now that I have pointed out the mistake, are you going to make it again?

      @Antney946@Antney9466 ай бұрын
  • 14:19 "Viking spacecraft" slowly rolling by Kyle's head as he explains something i didn't listen to because i was distracted... Coinkidink??? I think not!!!

    @KnittedSister@KnittedSister10 ай бұрын
  • The best explanation I have seen yet! Thank you!

    @mluckphoto@mluckphoto10 ай бұрын
  • There might need to be a video about consciousness, intelligence, self-awareness, sentience, sapient intelligence, and free will (if it exists), as what each is and isn’t. Because I think understanding what all these things are, is kind of necessary for a proper conversation about the idea of A.I.

    @ArlenKundert@ArlenKundert10 ай бұрын
    • Free will is how we navigate the 5th dimension

      @nostalgiatrip7331@nostalgiatrip733110 ай бұрын
    • That's difficult because we don't have particularly good definitions of any of those things. That said there are certain categories we can absolutely rule out from those definitions, even if we don't have a simple way to rule things in. Current AI is definitely not sentient because (despite what everyone including Kyle says) we actually _do_ know how they work. We may not be able to do the trillions of calculations to come up with a weight matrix by hand and therefore not be able to predict what output it will produce for any given input, but what we do know is that its fundamentally just a giant math equation that turns an input into an output, just like y=x+2 turns a 3 into a 5. There's no room for "sentience" or "thought" in the process, no matter how many calculations are performed. At the very least we'll need to have actively trained AIs (rather than pre-trained) before we can even begin to pretend they have consciousness. That is, they need to learn from experience not just have a static matrix handed to them from a completely separate training system. And I suspect that they'll also need some form of introspection (similar to humanity's "inner monologue"). Once we've got systems capable of those two tasks, I think we'll need to have a serious (not just clickbaiting) discussion about AI consciousness. That's a long way off though (particularly the active training bit. Current training systems are way, way, way slower than the processing systems and would not be even remotely functional for public consumption. It'll happen, but probably not too soon).

      @altrag@altrag10 ай бұрын
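      The "giant math equation" point above can be sketched concretely. Here is a toy network with made-up weights (the numbers are purely illustrative, not taken from any real model): once training has fixed the weights, the whole model is a plain deterministic function, no different in kind from y=x+2.

      ```python
      import math

      # Toy "pre-trained" network: the weights are fixed constants,
      # so the whole model is just a deterministic math function.
      W1 = [[0.5, -0.2], [0.1, 0.8]]   # made-up hidden-layer weights
      b1 = [0.0, 0.1]                  # made-up biases
      W2 = [1.0, -1.0]                 # made-up output weights

      def forward(x):
          # hidden layer: weighted sum + bias, passed through a nonlinearity
          h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
               for row, b in zip(W1, b1)]
          # output layer: another weighted sum -- like y = x + 2, just bigger
          return sum(w * hi for w, hi in zip(W2, h))

      # The same input always produces the same output: no state,
      # no learning, nothing changes between calls.
      print(forward([3.0, 1.0]) == forward([3.0, 1.0]))  # True
      ```

      Scaling this up to billions of weights changes how hard the arithmetic is to follow, not what kind of thing is happening.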
    • Most people don't really even know about those things.

      @abraxaseyes87@abraxaseyes8710 ай бұрын
    • ​@@altrag LLMs are already impressively capable. They have demonstrated an ability to perform tasks they've never encountered by learning from a few provided examples, i.e. learning from experience. Additionally, they exhibit better performance when prompted to "reflect" or "think" on the answer before providing a response. This inner dialogue or thought process appears to enhance their problem-solving capabilities.

      Current research is pushing the boundaries of what LLMs can achieve, notably by expanding their context windows to millions of tokens. With these advancements, it's likely that perceptions about LLM capabilities will shift substantially. It's plausible that we may soon start considering the possibility of pre-trained LLMs being the unconscious part of a "conscious" system, with the conscious part dictated by its context.

      A critical inquiry in this field pertains to the driving factor behind human intelligence and consciousness: are they primarily shaped by our ability to recall context or by adjusting our neural networks? If context recall is the dominant component, and there's a strong argument to suggest it might be, then the advent of AI systems based on LLMs with human-like learning and thought processes could very well be just around the corner.

      @PaulBrunt@PaulBrunt10 ай бұрын
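      The in-context learning described above can be illustrated with a hypothetical prompt (the task, examples, and wording are invented for illustration; no real model or API is called): the model's weights never change, yet the examples placed in the context steer its behavior.

      ```python
      # Hypothetical few-shot prompt: no retraining happens here --
      # the "learning from a few provided examples" lives entirely
      # inside the context window the model reads.
      few_shot_prompt = """Translate English to French.

      sea otter -> loutre de mer
      cheese -> fromage
      plush giraffe -> girafe en peluche
      butter ->"""

      # Chain-of-thought variant: prompting the model to "think"
      # before answering, as the comment describes.
      cot_prompt = ("Q: If a train travels 60 km in 40 minutes, "
                    "what is its speed in km/h?\n"
                    "A: Let's think step by step.")

      print(few_shot_prompt.count("->"))  # 4: three examples plus the query
      ```

      The same pretrained weights behave differently depending on what sits in the context, which is what motivates the "context as the conscious part" speculation above.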
    • Fun fact: there is no clear definition of any of those things.

      @marcogenovesi8570@marcogenovesi857010 ай бұрын
  • Well explained! The Kurzgesagt thing got me more than I'd like to admit.

    @kyledabearsfan@kyledabearsfan10 ай бұрын
  • Hey Kyle! I wasn't sure where else to ask you. I recently read an article about Earth's axis being affected by irrigation and the movement of water across the surface of the Earth in general. I would love to hear your take on it. Water is a heavy thing, and I found it hard to believe at first, but the article was written by a reputable source, Jay Famigliett.

    @wildmelmel2034@wildmelmel203410 ай бұрын
  • This was an excellent explanation. Thank you for posting!

    @101hamilton@101hamilton9 ай бұрын