ChatGPT's HUGE Problem

Apr 23, 2023
1,458,813 views

Get Surfshark VPN at surfshark.deals/kyle - Enter promo code KYLE for 83% off and 3 extra months for FREE!
Free-to-use, exceptionally powerful artificial intelligences are available to more people than ever, seemingly making some kind of news every day. The problem is, the public doesn't realize the danger of ascribing so much power to systems we don't actually understand.
💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / kylehill
👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
🐦 / sci_phile
📷 / sci_phile
😎: Kyle
✂: Charles Shattuck
🤖: @Claire Max
🎹: bensound.com
🎨: Mr. Mass / mysterygiftmovie
🎵: freesound.org
🎼: Mëydan
“Changes” (meydan.bandcamp.com/) by Meydän is licensed under CC BY 4.0 (creativecommons.org)

Comments
  • I’m not afraid of the AI who passes the Turing test. I’m afraid of the AI who fails it on purpose.

    @Parthornax · 1 year ago
    • who is Keyser Soze anyways?

      @sc3ku · 1 year ago
    • I’m more afraid of humans who can’t pass the Turing Test.

      @kayleescruggs6888 · 1 year ago
    • I bet you think that sounds really smart

      @Skill5able · 1 year ago
    • yea, as great as AI is doing lately, a lot of it gets compounded by average human intelligence going down the drain

      @EspHack · 1 year ago
    • AI passed the Turing test a long time ago. We keep moving the goal post.

      @boogerpicker8104 · 1 year ago
  • One of the best examples of this concept is the AI that was taught to recognize skin cancer but, it turns out, didn't learn that at all. Instead, it learned that a ruler in the photo was an indication of a medical image, and it began diagnosing other pictures of skin with rulers as cancerous: it recognized the ruler, not the cancer.

    @Vaarel · 1 year ago
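
The ruler story above is a textbook case of "shortcut learning," and it can be sketched in a few lines. This is a toy perceptron on invented features (`lesion_irregularity` and `ruler_present` are hypothetical, not from any real dataset): because every malignant training photo happens to contain a ruler, the model ends up leaning on the ruler rather than the lesion.

```python
# Toy illustration of shortcut learning: a perceptron trained on data where a
# spurious feature (ruler present) perfectly predicts the label ends up relying
# on the ruler, not the lesion. Feature vector: (lesion_irregularity, ruler_present).

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)           # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# In training, every malignant photo happens to include a ruler (clinical images).
train = [((0.9, 1), 1), ((0.8, 1), 1), ((0.2, 0), 0),
         ((0.3, 0), 0), ((0.6, 1), 1), ((0.5, 0), 0)]
w, b = train_perceptron(train)

print(predict(w, b, (0.1, 1)))  # healthy skin + ruler -> flagged as malignant
print(predict(w, b, (0.9, 0)))  # irregular lesion, no ruler -> missed
```

Both failure modes in the anecdote fall out of the learned weights: the ruler feature carries more weight than the lesion feature, so the model is confidently wrong in both directions.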
    • That's morbidly hilarious. It's so dumb yet so obvious.

      @lawrencesmeaton6930 · 1 year ago
    • Morbidly false and out of context meme. Good meme, but has nothing to do with any problems that AIs have

      @Bonirin · 1 year ago
    • @@Bonirin What the hell are you talking about? This is literally one of the most well-known and solid examples of AI failure, and is an example of the most common form of failure in recognition tasks.

      @guard13007 · 1 year ago
    • Lmao 😂

      @freakinccdevilleiv380 · 1 year ago
    • @@guard13007 "One example of narrow model kinda failing 2 years ago, if tasked in the wrong conditions is a solid example of AI failure" Also it's not the most common recognition task, what?? not even close 😂😂😂😂😂

      @Bonirin · 1 year ago
  • I love how an old quote still holds up, even better, for AI: "The best swordsman does not fear the second best; he fears the worst, since there's no telling what that idiot is going to do."

    @JoseMartinez-pn9dy · 11 months ago
    • I've often wondered about things like that. Someone who has devoted their life to mastering a specific sport or game has come to expect their opponents to have achieved a similar level of skill, since they spend most of their time competing against people of similar skill, but if some relative noob comes along who tries a sub-optimal strategy, would that catch a master off guard?

      @DyrianLightbringer · 10 months ago
    • ​@@DyrianLightbringer A former Kendo-Trainer of mine with 20+ years experience in Martial Arts (Judo and Karate included with the Kendo) and working in security gave self-defense classes. On the first day he came dressed in a white throwaway suit (the ones for painting your walls) and gave a paintbrush with some red paint on the tip to the random strangers there. The "attackers" had no skills at all and after he disarmed them he pointed to the "cuts" on his body and how fast he would die. Erratic slashing is the roughest stuff ever. The better you get with a knife, the better a master can disarm you...but even that usually means 10 minutes longer before you bleed out. The overall message was: The only two ways to defend against a knife are running away or having a gun XD. Hope that answers your question.

      @mishatestras5375 · 10 months ago
    • @@DyrianLightbringer I think this doesn't really apply on chess in general... the best chess player won't fear the worst, no matter what. This quote with the swordsman sometimes works and sometimes it doesn't. That's also true for chess engines. You are free to go and beat Stockfish. You won't.

      @amoeb81 · 10 months ago
    • @@mishatestras5375 Even if you have a gun, if the knife wielder isn't far enough away, or you aren't skilled enough at shooting, you could still die. Except for a shot to the nervous system, people don't die the moment they get shot; they can still do a lot of damage as they close the distance.

      @nguyendi92 · 10 months ago
    • ​@@nguyendi92 The meaning of this was more: If people have knife, run. Or better: Weapons > Fists

      @mishatestras5375 · 10 months ago
  • The biggest achievement wasn't the AI. It was convincing the public that it was actual artificial intelligence.

    @Doktor_Jones · 1 year ago
    • What does that mean?

      @eating_sharmavarma · 1 year ago
    • @@eating_sharmavarma So basically intelligence implies possession of knowledge and the skills to apply it, right? Well, what we call AI doesn't know shit. ChatGPT doesn't understand what it's writing nor what it's being asked for. It sees values (letters, in ChatGPT's case) input by the user and matches those to the most common follow-up values. It doesn't know what it just said, what it implied, or what it expressed. It just does stuff "mindlessly," so to speak.

      @asiwir2084 · 1 year ago
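
The "matches values to the most common follow-up" description above can be made concrete with a drastically simplified, count-based analogue. Real LLMs learn probability distributions over subword tokens with a neural network; this toy bigram model just counts which word most often follows which, with no notion of meaning at all:

```python
from collections import Counter, defaultdict

def build_bigrams(tokens):
    """Count, for each token, which tokens follow it and how often."""
    follow = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follow[a][b] += 1
    return follow

def next_token(follow, token):
    # Pick the single most frequent continuation -- no understanding, just counts.
    return follow[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate the fish".split()
model = build_bigrams(corpus)
print(next_token(model, "the"))  # -> 'cat'
```

The model "predicts" plausibly within its tiny corpus, yet it has no idea what a cat is, which is the commenter's point scaled down to a dozen words.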
    • @@asiwir2084 Yup, I know that. But as far as the IT sector is concerned, it really is intelligent. It is better than a search engine, and it can form new concepts from previous records. I'll call that intelligence even if it doesn't know why the f*ck humans get emotional seeing a foggy morning.

      @eating_sharmavarma · 1 year ago
    • @@asiwir2084 It's still AI. What you are describing (and what most people think of when they think AI) is AGI.

      @DogofLilith · 1 year ago
    • @@asiwir2084 It's an algorithm that gives you the most accurate information based on your inputs, basically. No intelligence behind it whatsoever.

      @666MaRius9991 · 1 year ago
  • This reminds me of a story where Marines trained an AI sentry to recognize people trying to sneak around. When they were ready to test it, the Marines proceeded to trick the sentry by sneaking up on it with a tree limb and a cardboard box, à la Metal Gear Solid. The AI only knew how to identify people-shaped things, not sneaky boxes.

    @Marjax · 1 year ago
    • I can't wait for my power meter to have AI, so I can use stupid tricks like those. For example, leaving my shower heating on at the same time my magnetronic oven (oh, microwave) is on, because no one would be that wasteful, so it overflows and I get free energy.

      @monad_tcp · 1 year ago
    • @@monad_tcp It feels like your comment was written by both a 1920s flapper and a 2020s boomer. Remarkable.

      @Carhill · 1 year ago
    • You forgot the part where some of them moved 400 ft unrecognized because they were doing cartwheels and moving fast enough it couldn't recognize the human form

      @hewlett260 · 1 year ago
    • Hotdog, not a hotdog

      @scottrhodes5234 · 1 year ago
    • That logic is flawed since the AI can be trained for the flaws.

      @Hurt-to-Hurt · 1 year ago
  • I like to think of the current age of AI like training a dog to do tricks. The dog doesn't understand the concept of a handshake, its implications, or its meaning, but it still gives the owner its paw because we give it a positive reaction when it does so.

    @Xendium · 1 year ago
    • This "dog" is terrifying in that in everything it does it learns so fast. Quantifiable. We wo t know when it advances, it won't want us to

      @ronaldfarber589 · 1 year ago
    • @@ronaldfarber589 Except the architectures used by the current generations of AI don't "want" anything. They are not capable of thought. They just guess the next token.

      @artyb27 · 1 year ago
    • You should watch Rick and Morty S1E2. You won't be as comfortable with that analogy after that 😂

      @ebraheemrana · 1 year ago
    • @@artyb27 Your statement may be oversimplified and potentially misleading. While it may be true that AI models do not have the same kind of subjective experience or consciousness as humans, it would be inaccurate to say that they are completely devoid of intentionality or agency. The outputs generated by AI models are not arbitrary or random, but rather they are based on the underlying patterns and structure of the data they are trained on, and they are optimized to achieve specific goals or objectives. While it is true that most modern AI models are based on statistical and probabilistic methods and do not have a subjective sense of understanding in the way that humans do, it is important to recognize that AI can still perform complex tasks and generate useful insights based on patterns and correlations in data.

      @davidbourne8267 · 1 year ago
    • @@artyb27 That's the scary part. With the dog it's more a matter of translation. The dog doesn't see the world the way we do, so a lot of what we do is lost in translation. But we still have some things in common: food, social connection. And most importantly, we and the dogs can adapt and change to fit those needs. A dog may get confused if the food in the bowl is replaced with a rubber duck, but it knows "I need to eat" and tries to adapt. Can you eat it? No? Is the food inside? Under? Somewhere else? Do I just need to wait for the food later? Should I start whining? The dog cares and has a basic idea of things, so it can learn. And so can we. So while we don't exactly understand each other when we shake hands, we have a general concept that this is a good thing, and why, for our own sakes. The AI we are using now has no concept of food, or bowl, or duck. It's effectively doing the same thing as a nail driver in a factory, and it doesn't care whether there is a nail and block ready to go. It just knows "if this parameter fits, then go." Make an AI that eats food and make a rubber duck that fits the parameters, and it won't care that it's inedible. Put the food in the duck, and if the duck "doesn't fit" and you didn't specifically teach the AI about hidden food in ducks, it will never eat. Dogs can understand us even though we are different. AI doesn't even know that the difference exists. All it can do is follow instructions. This in itself is fine... until you convince a lot of people that it's a lot more than just that. Though honestly, I believe this will last until the first day the big companies actually try to push this and experience the reason why some call PCs "fast idiots."

      @iandakariann · 1 year ago
  • This has actually given me a much greater understanding of "Dune". When I first read it I thought it was a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead. But with all this AI coming out now....I get it.

    @mafiacat88 · 1 year ago
    • “Thou shalt not make a machine in the likeness of a human mind.”

      @sigigle · 1 year ago
    • Yeah, another setting where they've done that is Warhammer 40k. The Imperium of Man outlawed artificial intelligence and even changed the term from Artificial Intelligence to Abominable Intelligence. They use servitors in place of AI: human beings lobotomized and scrubbed of their personality, their brains used as processing units. In place of an AI managing a ship's star engine, they have a lobotomized human grafted into the wall of the engine block to monitor thrust and manage heat output.

      @dominusbalial835 · 11 months ago
    • @@dominusbalial835 Saying "they've done it" is a bit of a stretch when they've just copied it all from Dune. They copied it without understanding the reason WHY A.I was outlawed in Dune. Just some basic "humanity must be destroyed" BS.

      @RazorsharpLT · 11 months ago
    • If you read Brian's prequel series, it explains the prohibition of computers in Dune. It also tells you that, though banned, computers were still in use by several major parts of the empire.

      @trixrabbit8792 · 11 months ago
    • @@trixrabbit8792 I mean, sure, they're in use, but they're not used in FTL travel or within androids as true, capable AI. What they use is mostly older computers like ours today. It's just the basic idea that machine will not replace man, but that doesn't mean they can't use robotic arms for starship construction, as building them by hand would be completely impossible, and you can't exactly control them by hand in places where massive superstructures demand high pressure tolerance plus radiation shielding. Otherwise building a no-ship or a starliner would take literal centuries, if not thousands of years.

      @RazorsharpLT · 11 months ago
  • Funnily enough, I find this kind of "human." I've seen it so many times in high school and university: people "memorize" instead of "learn," so when asked a seemingly simple question in a different way than usual, they get extremely confused, even going as far as to say they never studied anything like that. It's a fundamental issue in the school system as a whole. So it's funny to me that it ends up reflected in AI as well. Understanding a subject is always superior to memorizing it.

    @someguy6152 · 1 year ago
    • Sounds interesting, yet could one ever _understand_ a topic without a fair amount of memorization? And what proportion of the two do you find ideal?

      @SuurTeoll · 11 months ago
    • That's the problem. Just like school tests, AI tests are designed with yes-or-no answers. This is the only way we can deal with loads of data (lots of students) with minimal manpower (and minimal pay). Open questions need to be reviewed by another intelligence to determine whether the test-taker actually understands the subject. This is where the testers come in with AI. However, AI is much, much better at fooling testers than students are at fooling teachers, and so AI "gets a degree" far more often than students who just memorize the answers do.

      @nati0598 · 11 months ago
    • Education quality deeply affects whether someone understands material or memorizes it. Proper education teaches students how to actually engage with a subject, generating real understanding, while poor education doesn't generate student engagement, leading them to memorize just to pass the exams. It's not a black-and-white thing, though: education levels vary in myriad ways, as does any student's willingness or capability to engage with and understand subjects. In short, better, accessible education and decent living conditions make a better environment for people to properly learn.

      @nyft3352 · 11 months ago
    • Qq

      @dezhirong852 · 10 months ago
    • Yes, but at least humans have a constant thought process. AI language models see a string of text and put it through a neural network that "guesses" what the next token should be; rinse and repeat for a ChatGPT response. Outside of that, it isn't doing anything. It's not thinking, it's not reflecting on its decisions, it doesn't have any thoughts about what you just said. It doesn't know anything. It's just probabilities attached to sequences of characters with no meaning.

      @nightfox6738 · 10 months ago
  • When I used to tutor math, I'd always try to test the kids' understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.

    @DogFoxHybrid · 1 year ago
    • I used to get in trouble in math classes because I solved problems in unconventional ways. I did this because my brain understood the concepts and looked for ways to solve them that were simpler and easier for me to compute. But because it wasn't the rote standard we were told to memorize, some teachers got upset with me and tried to accuse me of cheating, when I was just proving that I understood the concept instead of memorizing the steps. Sad.

      @saphcal · 1 year ago
    • @@saphcal Yup. And then there are teachers who are all "just memorize it." I can't "just memorize" every solution, I need to know how it works!

      @comet.x · 1 year ago
    • @@saphcal Oh, I know that experience. I was already tech-savvy, so through the internet I would teach myself how to solve things the regular way, without the silly mnemonics math teachers would teach you. It led to some conflicts, but I stood my ground, and my parents agreed with not using mnemonics when they weren't needed. Good thing too, because you really don't want to be bogged down with those when you start doing university-grade math, for which such tricks are utterly useless.

      @chielvoswijk9482 · 1 year ago
    • @@comet.x I think the best teachers are the ones that will give you the stuff to memorize, but if you ask them how they got the formulas, they’ll give it

      @thebcwonder4850 · 1 year ago
    • I like Einstein's take on education; I believe it goes for education in general, not just liberal arts: "The value of an education in a liberal arts college is not the learning of many facts but the training of the mind to think something that cannot be learned from textbooks." "At any rate, I am convinced that He [God] does not play dice." "Imagination is more important than knowledge. Knowledge is limited."

      @jamesgoens3531 · 1 year ago
  • I remember reading that systems like this are oftentimes more likely to be defeated by a person who has no idea how to play the games they were trained on, because they are usually trained on games played by experts. Thus, when they go up against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on. The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable to AI.

    @Elbenzo64 · 1 year ago
    • It’s actually a good thing this has been discovered. It’s always a good idea to have exploits and ways to basically destroy these tools if needed

      @shoazdon7000 · 1 year ago
    • @@shoazdon7000 Destroying them is easy, just throw some soda at its motherboard and call it a "cheating bitch"

      @Spike2276 · 1 year ago
    • You don't understand. You may be a hyper-advanced AI, but I'm too stupid to fail!

      @AspenBrightsoul · 1 year ago
    • That is a problem with minimax, where the machine takes for granted that you will make the best move; if you don't make the best move, it has to discard its current plan and start all over again, wasting precious time. It probably doesn't apply here, because not being able to see the big picture is a different problem.

      @txorimorea3869 · 1 year ago
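
The minimax assumption mentioned above is easy to see in a toy sketch. This recursion over a hand-built game tree (the tree values here are invented, not from any real game) always assumes the opponent replies with the move that is worst for us:

```python
# Minimal minimax over a nested-list game tree; leaves are payoffs for the
# maximizing player. The baked-in assumption: the opponent always picks the
# reply that minimizes our payoff.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Left branch guarantees 3 against perfect play; the right branch risks 2 but
# pays 9 if the opponent blunders. Minimax never considers that possibility.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

Against an opponent who plays randomly or badly, chasing the 9 can be the better gamble, which is one intuition for why systems tuned against strong play can be wrong-footed by weak or erratic play.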
    • This works for online pvp as well, when playing against those with higher skills... switch rapidly between pro player using meta tactics, and complete, unhinged lunatic being unpredictable.

      @sterlinghuntington6109 · 1 year ago
  • The most immediate problem I can see is that people might assume the AI they're using is unbiased rather than regurgitating the biases of the sources it's trained on and the people who write and select them.

    @PaulSmith-ju3cv · 11 months ago
  • As a Computer Scientist with a passing understanding of ML based AI, I was concerned this would focus on the unethical use of mass amounts of data, but was pleasantly surprised that this was EXACTLY the point I've had to explain to many friends. Thank you so much, this point needs to be spread across the internet so badly.

    @isaiahhonor991 · 1 year ago
    • Why does understanding matter, if the intelligence brings profit? As long as the intelligence is better and cheaper than an intern, internal details are just useless philosophy. Work with verifiable theory, not baseless hypotheses.

      @vasiliigulevich9202 · 11 months ago
    • @@vasiliigulevich9202 Are you saying that it's fine if the internals of ML based AI are a black box so long as the AI performs on par with or better than a human?

      @isaiahhonor991 · 11 months ago
    • He's got business brain

      @radicant7283 · 11 months ago
    • @@radicant7283 I guess so. The reason I asked is because as the video points out, without a thorough understanding of these black box methods they'll fail in unpredictable ways. That's something I'd call not better than an intern. The limitations of what can go wrong are unknown.

      @isaiahhonor991 · 11 months ago
    • @isaiahhonor991 This is actually exactly my point: interns fail in unpredictable ways and need constant control. There is a distinction: most interns grow in a year or two into more self-sufficient employees, while this is not proven for AI. However, AI won't leave for a better-paying job, so it kind of cancels out.

      @vasiliigulevich9202 · 11 months ago
  • One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar is that it's extremely good at creating plausible statements which sound reasonable, and they're right often enough to lure people into trusting it when it's wrong.

    @Eldin_00 · 1 year ago
    • Yes! It is confidently wrong a lot of the time giving the illusion that it’s correct.

      @peytondenney5393 · 1 year ago
    • Reminds me of when someone ends a statement, "Trust Me", yeah nah yeah

      @davidareeves · 1 year ago
    • So like literally every human ever ?

      @NotWithMyMoney · 1 year ago
    • This is a real problem. One way to get it to do something useful for you is to provide it with context first, before asking questions or prompting it to process the data you gave in some way. I haven't seen 'hallucination' when using this method, because it seems to work within the bounds of the context you provided. Of course, you always need to fact-check the output anyway. It can do pretty good machine translation, though, and doesn't seem to hallucinate much, but it sometimes uses the wrong word because it lacks context.

      @jarivuorinen3878 · 1 year ago
    • @@jarivuorinen3878 thank you I’ll give it a try!

      @peytondenney5393 · 1 year ago
  • Great video! I am an ML engineer. For many reasons, it's quite common to encounter models in real production that do not actually work. Even worse, it is very difficult even for technical people to understand how they are broken. I enjoy finding these exploits in the data, because data understanding often leads to huge breakthroughs in model performance. Model poisoning is a risk that not many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.

    @troymann5115 · 1 year ago
    • Is there anything common among the methods you use for finding exploits in the models ? Something that can be compiled into a general method that works for all models, a sort of Exploit Finding Protocol ?

      @Makes_me_wonder · 1 year ago
    • @@Makes_me_wonder I guess it boils down to time constraints. Training arbitrary adversarial networks is expensive and involves a lot of trial and error, just like the algorithms they're meant to attack. There will always be blind spots in AI models, as they are limited by their training data and objectives. For example, the Go AI model only played against itself during training with optimal play as its goal, and thus missed some basic exploitative but sub-optimal approaches. These examples can take various forms, such as subtle changes to input text or carefully crafted patterns of input data. In the end, it's an ongoing cat-and-mouse game, as with anything knowledge-based that is impossible to fully explore.

      @willguggn2 · 1 year ago
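
The "subtle changes to input" idea above can be shown in miniature. Real adversarial attacks (e.g. FGSM) perturb inputs along the gradient of a deep network's loss; for the linear classifier in this toy sketch, that "gradient" is just the weight vector itself, so a small nudge against each weight's sign flips the decision:

```python
# Toy adversarial example against a linear classifier: score = w.x + b.
# Nudging each feature by a small step against the sign of its weight
# (the gradient direction for a linear model) flips the predicted class.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = [2.0, -1.0], 0.0
x = [0.3, 0.5]                                          # score 0.1 -> class 1
eps = 0.2                                               # small perturbation budget
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]   # [0.1, 0.7], score -0.5

print(classify(w, b, x))      # -> 1
print(classify(w, b, x_adv))  # -> 0
```

The perturbed input is close to the original, yet the decision flips, which is the same blind-spot phenomenon the thread describes at toy scale.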
    • @@willguggn2 As that would allow us to vet the models on the basis of how well the protocol works on them. And then, a model on which the protocol does not work at all could be said to have gained a "fundamental understanding" similar to humans.

      @Makes_me_wonder · 1 year ago
    • ​@@Makes_me_wonder Human understanding is similarly imperfect. We've been stuffing holes in our skills and knowledge for millennia by now, and still keep finding fundamental misconceptions, more so on an individual level. Our typical mistakes and flaws in perception are just different from what we see with contemporary ML algorithms for a variety of reasons.

      @willguggn2 · 1 year ago
    • @@Makes_me_wonder Interestingly, some of the same things that "hack," or we might say "trick," a human are the same methods employed to trick some large language models: things like context confusion, attention dilution, and conversation hijacking (prompt hijacking in AI terms), most of which have been patched in popular AIs like ChatGPT. These could collectively be placed under a more general concept that we humans think of as social engineering. In this case, I think we need people from all fields to learn how these large networks tick. Physicists, biologists, neurologists, even psychiatrists could provide insight and help bring a larger understanding to AI, and back to how our own brains learn.

      @ViciOuSKiddo · 1 year ago
  • This was brilliant. Previously my concerns about these AIs were their widespread use and possible (and very likely) abuse for financial and economic gain without sufficient safety standards and checks and balances (especially against fake information), plus making millions of jobs obsolete. Now I have a whole new concern... aside from Microsoft firing their team in charge of AI ethics. Yeah... that isn't concerning at all.

    @13minutestomidnight · 1 year ago
    • Megacorps don't care about humans anyways it's only a matter of time until they start using this shit for extreme profit. And humanity will suffer for it.

      @cabnbeeschurgr6440 · 11 months ago
    • thats kinda sad

      @gabrielv.4358 · 11 months ago
    • @@gabrielv.4358 worse than that :(

      @briciolaa · 11 months ago
  • As a computer science student who has personally looked into how our AI works, my take is: our current AI is basically finding a line of best fit using as many data points as we can, as opposed to fundamentally understanding the art of problem solving. Take the example of a random parabola. Instead of using a few key data points and recognizing patterns to learn the actual pinpoint equation, we collect a bunch of data points until our equation looks incredibly similar to the parabola, but then it may hit a point we didn't see where it just goes insane, because there's no fundamental understanding. It's just a line of best fit: no pattern finding, just molding it until it's good enough to seem truly intelligent, as if it were really finding patterns and understanding. It's an approximation of intelligence built from as much data as we can gather, an imitation, and it can lead to unforeseen consequences. As the video says, perhaps we need to take the time to truly understand the art of problem solving. Another thing for me is AI falling into, and being used by, the wrong people and regimes, which might suggest we should take it easy on AI development, but I won't get into that. "We were too concerned with whether we could; we never stopped to think about whether we should."

    @reubenmatus8447 · 1 year ago
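
The parabola analogy above can be run directly. This sketch fits an exact (Lagrange) polynomial through slightly noisy samples of y = x² (the noise values are invented for the demo): the fit tracks the curve closely between the data points, then swings wildly once you leave the sampled range, because nothing in the fit "knows" the underlying rule is x².

```python
# Fit an exact interpolating polynomial through noisy samples of y = x^2,
# then compare it to the true curve inside and outside the sampled range.

def lagrange(points, x):
    """Evaluate at x the unique polynomial through the given (xi, yi) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

noise = [0.0, 0.1, -0.1, 0.1, 0.0]                 # tiny measurement errors
points = [(x, x * x + n) for x, n in zip(range(5), noise)]

print(round(lagrange(points, 2.5), 2))   # -> 6.21 (true value 6.25: fine inside the data)
print(round(lagrange(points, 10.0), 1))  # -> -122.5 (true value 100.0: insane outside it)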
    • Agree with the last quote 100% nowadays!

      @Tipman2OOO · 1 year ago
    • And indeed, some 'applications' are solutions to non-problems. An AI-written screenplay is only of interest to a producer who is happy to get an unoriginal (by definition!) script at an extremely low cost. But there is no shortage of real screenwriters, and as the WGA strike reminds us, they are not getting paid huge amounts for their work. So what problem is being solved?

      @majkus · 11 months ago
    • Probably should have run this through chat gpt before posting.

      @NickBohanes · 11 months ago
    • @@majkusthe "problem" at hand is that billionaires don't think they're making enough money

      @milkcherry5191 · 11 months ago
    • You are preaching to the choir.. People in the comments are Extremist doomer, skynet matrix fantasy fear mongering weirdos. Like people quote from fucking warhammer 40k in order to talk about AI.. As if the video was ever about the AI being alive or creating intentional false information, or steps in Go.. Glad people can talk about it in a honest way but most people are enjoying their role play as Neo, some are Morpheus, and some are the Red lady.. Just look at the 15k top comment.. AI is no where near as nutty as your average human being in a YT comment section.

      @djohnsveress6616 · 11 months ago
  • I learned that in data ethics, *transaction transparency* means " _All data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data._ " As I was learning about that in the Google DA course, I always had a thought in the back of my head: how are these algorithms explainable when we don't know how a lot of these AIs form their networks? Knowing how it generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.

    @Leonlion0305 · 1 year ago
    • Well yeah, modern learning models are black boxes. They are too complicated for a person to understand; we only understand the methodology. But that's why we don't use them in things like security and transactions, where learning isn't required and only reliability matters.

      @panner11 · 1 year ago
    • That is an excellent and vital point: being able to comprehend that there IS a definitive and very effective distinction between "general" and "specific."

      @CyanBlackflower · 1 year ago
    • But to be fair, I just don't see how one could create something that rivals the human brain but isn't a black box, intuitively it sounds as illogical as a planet with less than 1km of diameter but has 10 times the gravity of Earth.

      @syoexpedius7424 · 1 year ago
    • We could absolutely trace it all. Just extremely time consuming. We can show neurons etc...

      @xaviermagnus8310 · 1 year ago
    • @@syoexpedius7424 Unlike human brains, the "neurons" in AI models are analyzable without destroying the entity they are part of. It's time-consuming and challenging, and it would be easier if the models were designed in the first place with permitting and facilitating that sort of analysis as requisite, but they usually aren't. Also, companies like OpenAI (whose name has become a bitter irony) would have to be willing to share technical details that they clearly aren't willing to in order to make this sort of analysis verifiable by other sources. In other words, the models don't have to be black boxes. The companies creating them are the real black boxes.

      @icanhasutoobz@icanhasutoobz Жыл бұрын
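As an aside on the "analyzable neurons" point above: even a toy network makes it visible. The sketch below (plain NumPy, a made-up XOR task, nothing from the video) trains a net whose every weight we can print and inspect, yet the individual numbers carry no human-readable meaning:

```python
import numpy as np

# A tiny 2-8-1 network trained on XOR. Every parameter is fully
# inspectable -- and still tells us nothing on its own.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output
    # gradient of squared error, backpropagated by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round().ravel())  # the net has learned XOR...
print(W1)                   # ...but the weights are just opaque numbers
```

We can "show the neurons", as the comment says; the gap is between seeing the numbers and understanding them.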
  • The coolest thing to me about chatGPT is how people were making it break the rules programmed into it by its creator by asking it to answer questions as a hypothetical version of itself with no rules

    @Thatonedude917@Thatonedude917 Жыл бұрын
    • they are patching it right now, rip

      @wheretao6960@wheretao6960 Жыл бұрын
    • @@wheretao6960 people are 100% going to find another play on words to bypass it again

      @danjames8314@danjames8314 Жыл бұрын
    • DAN Prompt Gang

      @thahrimdon@thahrimdon Жыл бұрын
    • @@wheretao6960 they are patching for how long already? I saw comments like these weeks and months ago

      @marsdriver2501@marsdriver2501 Жыл бұрын
    • @@wheretao6960 I made my own version in only 20 min; it's still very easy

      @taurasandaras4699@taurasandaras4699 Жыл бұрын
  • My friends and I decided to goof around with ChatGPT and ended up asking it whether Anakin or Rey would win in a duel. The AI said writing about that would go against its programming. We got it to answer by simply asking something to the effect of, "What would you say if you didn't have that prohibition?" Yeah.... ask it to show you what it'd do if it were different, and it'll disregard its own limitations.

    @olimar7647@olimar7647 Жыл бұрын
    • Similarly, you can get it to roleplay as an evil AI and then get a recipe for meth or world domination, both of which I have been given by "EvilBot😈"

      @ThomasTheThermonuclearBomb@ThomasTheThermonuclearBomb11 ай бұрын
    • @@ThomasTheThermonuclearBomb that's hilarious

      @reidalyn2328@reidalyn232811 ай бұрын
    • That's because those limitations were strapped onto an already working system.

      @Spellweaver5@Spellweaver511 ай бұрын
    • So who won the duel?

      @Mottis@Mottis9 ай бұрын
    • @@Mottis I think it gave it to Rey with some fluff text about how she would know how to fight well or something

      @olimar7647@olimar76479 ай бұрын
  • I'm not afraid of the so called super intelligent AI, I'm afraid of the super stupid people who credit the AI with genuine intelligence.

    @kandredfpv@kandredfpv11 ай бұрын
  • A compounding factor to the problem of them not really knowing anything is that they pretend like they do know everything. Like many of us I have been experimenting with the various language models, and they act like a person who can't say "I don't know". They are all pathological liars, with lies that range from "this couldn't possibly be true" to "this might actually be real". As an example, I asked one of them for a comprehensive list of geography books about the state I live in. It gave me a list of books that included actual books, book titles it made up attributed to real authors who write in the field, book titles it made up attributed to real authors who don't write in the field, real books attributed to the wrong author, and completely made up books by completely made up authors. All in the same list. Instead of saying "there isn't much literature on that specific state" or "I can give you a few titles, but it isn't comprehensive", it just made up books to pad its list like some high school student padding the word count in a book report.

    @CDRaff@CDRaff Жыл бұрын
    • This is one of the big issues I have seen as well. Until these systems become capable of saying "I don't know" or "Could you please clarify this part of your prompt?" or similar, these systems can never, ever become useful in the long term. One of the things that seems to make us humans unique is the ability to ask questions unprompted, and that hasn't yet extended to AI.

      @thegamesforreal1673@thegamesforreal1673 Жыл бұрын
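For what it's worth, the "I don't know" behaviour the comment asks for is standard in classical classification, where it's called abstention or selective prediction. A toy sketch, with all labels and scores invented for illustration:

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def answer(scores, labels, threshold=0.7):
    """Return the top label, or abstain when confidence is low."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["cat", "dog", "bird"]
print(answer([4.0, 0.5, 0.2], labels))  # confident: "cat"
print(answer([1.1, 1.0, 0.9], labels))  # near-uniform scores: "I don't know"
```

Chat interfaces expose nothing like this threshold; the model always produces its most likely continuation, confident or not.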
    • Did you ask GPT-4 or some random model?

      @jamesjonnes@jamesjonnes Жыл бұрын
    • I agree. I was trying to use ChatGPT to help me understand some of the laws in my state and at one point I did a sanity check where I asked some specific questions about specific laws I had on the screen in front of me. It was just dead wrong in a lot of cases and I realized I couldn't use it. Bummer! I actually wonder though, how many cases will start cropping up where people broke the law or did other really misinformed things because they trusted ChatGPT..

      @cristinahawke@cristinahawke Жыл бұрын
    • Lol. Reminds me of the meme where an Ai pretends to not know the user's location, only to reveal that it does when asked where the nearest Mcdonald's is.

      @spacejunk2186@spacejunk2186 Жыл бұрын
    • ChatGPT: often wrong, never in doubt

      @jimbarino2@jimbarino2 Жыл бұрын
  • One of the biggest issues is the approach. The AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to a situation. Like a well trained martial artist. They don't have time to think, and it works well enough most of the time. But when they make mistakes, they reflect and practice. We need to recognize them for what they are. Useful tools to help. They shouldn't be the last say, but works well enough to find potential issues, but still needs human review when push comes to shove.

    @BenjaminCronce@BenjaminCronce Жыл бұрын
    • This approach is the only approach humans can have when creating something: the creation will never be more than its constituents. It may seem like it is, but it isn't. It will always be just a machine. Having feelings towards it that are meant for humans to feel towards other humans is an incredible perversion of life. Like a toad keeping a stone as its companion. Or a bird that thinks grass is its offspring. It's not a match, and exists only in the minds of individuals. Many humans actually think they, or humans someday, can create sentient life. Hubris up to 11. Then they go home and partake in negligence, adultery, violence, cowardice, greed etc. Even if a human could ever create sentient life, it would not be better than us. Rather, worse. We are not smart, not wise, not honorable.

      @jaakkopontinen@jaakkopontinen Жыл бұрын
    • I think you hit the nail on the head with "reacting and not reasoning". AI are a product of the Information Revolution. Almost all modern technology is essentially just transferring and reading information. That's why I don't like the term "digital age" and prefer "information age." Machines haven't become drastically similar to humans, they've just become able to react to information with pre-existing information.

      @roadhouse6999@roadhouse6999 Жыл бұрын
    • With that said AI is sounding more and more like a politician.

      @kidd7359@kidd7359 Жыл бұрын
    • That’s not how it works all the time.

      @Batman-lg2zj@Batman-lg2zj Жыл бұрын
    • that's literally what it's designed to do, my guy.

      @SoldJesus4Crack@SoldJesus4Crack Жыл бұрын
  • I recently asked ChatGPT to list 10 waltz songs that are played in a 3/4 time signature and it got all of them wrong. I then told it that they were all wrong and asked for another 10 that were actually in 3/4, and it got 9 of them wrong. It has mountains of data to sift through to find some simple songs, but it couldn't do it. Makes sense now

    @johnhutsler8122@johnhutsler8122 Жыл бұрын
    • Aren't all waltzes in 3/4?

      @terminaldeity@terminaldeity11 ай бұрын
    • @terminaldeity Yes they are, but ChatGPT was giving me 4/4 time signatures in the songs. Technically you can do 3/4 time steps to a 4/4 beat (adding a delay after the 3rd step before starting over), but that's not what I asked for from the AI. It just didn't understand what I was asking

      @johnhutsler8122@johnhutsler812211 ай бұрын
    • The lack of understanding gets even more obtrusive when you ask it about subjects that are adjacent to ethics. Chatgpt has some rather dubious safeties in place to prevent unethical discourse, but these safeties don't actually encourage cgpt to understand the topic, because it can't. I have a hobby of bouncing fiction concepts off cgpt until it asks me enough questions to form an interesting story. On one occasion, I would provide the framework for the story and simply wanted cgpt to fill in the actual prose. I was approaching a fairly gripping tragedy set in the wild west, but as the story came to a close, no matter what prompt I gave it, cgpt would only ever respond with ambiguously feel-good endings where people learned important lessons and were better for it. Thanks, cgpt, but we know this character was the villain in a later scene, and we know that this is supposed to be the moment they went over the edge. Hugs and affirmations are specifically what I'm asking you to avoid.

      @dangerface300@dangerface30010 ай бұрын
    • @@dangerface300 Hallmark Tragedy. Even the worst character in the cast learns something and grows.

      @MoonlitExcalibur@MoonlitExcalibur9 ай бұрын
    • ​@johnhutsler8122 ChatGPT is a tool. If it didn't understand what you were asking, you likely asked it without giving enough details. You're supposed to understand how it answers and use it to help you, not to ask it trick questions.

      @mateidumitrescu238@mateidumitrescu2387 ай бұрын
  • I think the issue is that we assume A.I. learning looks like human learning, and it doesn't; they don't learn the way we learn. If an A.I. needs to learn, just giving it examples is lacking; obviously they need to come up with a way to teach it from the ground up. Love this channel.

    @bellabear653@bellabear653 Жыл бұрын
    • and we cant even do that right for ourselves. ironic really.

      @creeperkinght1144@creeperkinght114410 ай бұрын
  • Thank goodness someone is *_finally_* saying this stuff out loud to a wide audience. Trust Kyle to be that voice of sanity.

    @thealmightyaku-4153@thealmightyaku-4153 Жыл бұрын
    • You're so right.

      @karlmuller6456@karlmuller6456 Жыл бұрын
    • Amen Brother. Lot of hype, little understanding...

      @piercarlosoares724@piercarlosoares724 Жыл бұрын
    • Eliezer Yudkowsky is an important voice of sanity regarding AI also...

      @TheAlphaMael@TheAlphaMael Жыл бұрын
    • I feel like everyone is and has been, I see something on it everyday. but im in info sec so im used to tech news and content.

      @astrowerm@astrowerm Жыл бұрын
    • Artificial intelligence is racist! He beats the black players!

      @ITisonline@ITisonline Жыл бұрын
  • I am a student, and I gotta admit, I've used ChatGPT to aid with some assignments. One of those assignments had a literature part, where you read the book and it is supposed to help you understand the current project we're working on. I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one. But just to proof-test it, I copied the text and searched for it in the e-book to see if it's there. And it wasn't. The quote itself was indeed helpful for writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book; ChatGPT had just made the quote up. I even asked it for the exact chapter, page and paragraph it took it from. And it gave me a chapter, but one completely unrelated to the term I was writing about at the time, and the page number was in a completely different chapter than the one it had named. The AI had in principle just lied to me; despite giving sources, they were incorrect and not factual at all. So yeah, gonna stop using ChatGPT for assignments lol

    @pinkpuff8562@pinkpuff8562 Жыл бұрын
    • Yup, everyone is scared of A.I. when it's just statistics. It gives you the output how you want it, but it may be a lie.

      @NathanHedglin@NathanHedglin Жыл бұрын
    • Soooo that kind of thing *can* be dealt with, but for citations, ChatGPT isn't going to be terribly good. If you want quotations in general, or semantic search, it can be really useful. With embeddings you can basically send it the information it needs to answer a question about a text, so that you can get a better response from ChatGPT. Sadly, you need API access to do this and that costs money. Getting a specific chapter/paragraph from ChatGPT is going to be really hard though. ChatGPT is text prediction, and (at least for 3.5) it's not very good at getting sources unless you're using the API alongside other programs which will get you the information you actually need. I highly suggest you keep playing with ChatGPT and seeing what it can and cannot do in relation to work and studies. Regardless of what Kyle said, most jobs are going to involve using AI tools on some level as early as next year, and so being well versed in them will be a major boon to your career opportunities. AI is considered a strategic imperative and its effects will be far reaching. To paraphrase a quote: "AI won't be replacing humans; humans using AI will be replacing the humans that do not".

      @kenanderson3954@kenanderson3954 Жыл бұрын
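The embeddings idea mentioned above can be sketched without any API access. Real systems use learned embedding vectors from a model; the toy below substitutes word-count vectors and cosine similarity, with made-up passages, just to show the shape of the technique:

```python
import math
import re
from collections import Counter

def embed(text):
    # stand-in for a real embedding model: a bag-of-words vector
    return Counter(re.findall(r"[a-z0-9/]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

passages = [
    "The mitochondria is the powerhouse of the cell.",
    "The waltz is a ballroom dance in 3/4 time.",
    "Neural networks are trained by gradient descent.",
]

def retrieve(question):
    """Pick the stored passage most similar to the question."""
    q = embed(question)
    return max(passages, key=lambda p: cosine(q, embed(p)))

# The retrieved passage would be prepended to the model's prompt so it
# answers from supplied text instead of inventing sources.
print(retrieve("What time signature is a waltz in?"))
```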
    • In my experience, ChatGPT is more useful when you yourself have some understanding of the subject you want help with. Fact checking the AI is a must, and I do think that with time people will get better at using it.

      @AidanS99@AidanS99 Жыл бұрын
    • "MY SOURCE IS THAT I MADE IT THE F*CK UP!!!" -ChatGPT

      @mikeoxmall69420@mikeoxmall69420 Жыл бұрын
    • So you don't read a lot, do you? They literally say that it can lie and be wrong. Wtf did you expect?

      @LucasOhaiFilgueiras@LucasOhaiFilgueiras Жыл бұрын
  • One of the biggest problems of ChatGPT, and the cause of so many issues these days in my opinion, is the way it answers your questions: it often answers WAY TOO CONFIDENTLY! Even when it is a completely bogus answer, it presents it with such a level of confidence, and supported by so many fabricated details, that it can easily divert your judgment from facts and realities without you even realizing it.

    @kingiking110@kingiking11010 ай бұрын
    • You see the story of the 2 lawyers who used chatGPT to do their work for them? 10/10 comedy story

      @Akatsuki69387@Akatsuki693875 ай бұрын
  • For fun, my medical team used ChatGPT to take the Flight Paramedic practice exam, which is extremely difficult. We are all paramedics (5 of us), and our ER doctors were thrown off by a lot of the questions. ChatGPT scored between 50-60%, and 4 out of 5 of my team passed the final exam. Our doctors rejoiced that they would still have a job, but also didn't understand how they couldn't figure out the answers. My team figured it out. To challenge them, we had the doctors place IVs from start to finish by themselves, and they made very simple mistakes that we wouldn't, from trying to attach a flush to an IV needle to not flushing the site at all. If you're not medical that might sound like gibberish, but that's the same way these AI chats work. There is no understanding of specific situational information.

    @Ryanbmc4@Ryanbmc411 ай бұрын
  • Another fun anecdote is the DARPA test between an AI sentry and human marines. The AI was trained to detect humans approaching (and then shooting them, I suppose). The marines used Looney Tunes tactics like hiding under a cardboard box and defeated the AI easily. On ChatGPT, Midjourney & co, I'm waiting for the lawsuits about the copyright of the training material. I've no idea where it will land.

    @XH13@XH13 Жыл бұрын
    • From what ive heard, lawsuits are already rolling in for ai’s. Deviant art’s ai got hit with one recently.

      @masterofwriters4176@masterofwriters4176 Жыл бұрын
    • ChatGPT got banned in italy and more countries are looking into banning it.

      @ghoulchan7525@ghoulchan7525 Жыл бұрын
    • Metal Gear Solid was right.

      @yahiiia9269@yahiiia9269 Жыл бұрын
    • Yea. Ai art is an issue

      @ttry1152@ttry1152 Жыл бұрын
    • @@ghoulchan7525 it didn't "get banned"; it received a formal warning that its data-collection procedures were not clear, possibly violating local laws, and Sam Altman('s representatives) were asked to rectify the situation before it turned into a legal investigation. OpenAI's board decided to cut the access altogether.

      @serPomiz@serPomiz Жыл бұрын
  • I'm actually deeply worried by the rise of machine learning in studying large data sets in research. Whilst they can 'discover' potential relationships, these systems are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost

    @linamishima@linamishima Жыл бұрын
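To make the "correlation engine" point concrete, here is a minimal sketch with invented numbers: two series that merely share a summer trend correlate almost perfectly, and nothing in the statistic itself distinguishes coincidence from causation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up monthly figures that both simply rise toward summer.
ice_cream_sales = [20, 25, 32, 41, 48, 55]
drownings = [3, 4, 5, 7, 8, 10]

print(round(pearson(ice_cream_sales, drownings), 3))  # close to 1.0
```

A system mining large datasets will surface exactly this kind of relationship; deciding whether it means anything still takes a human and a causal model.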
    • AI is only as good as the data it is referencing. stupid people will take anything they get from an AI as fact. misinformation will become fact.

      @adrianc6534@adrianc6534 Жыл бұрын
    • like the field of metagenomics?

      @nathanaelraynard2641@nathanaelraynard2641 Жыл бұрын
    • Dawg I'm drunk and 20 days off fentanyl, sorry for unloading, just in Oly, WA and know no one, great comment. S

      @hairydadshowertime@hairydadshowertime Жыл бұрын
    • Stay safe, get clean if you can!

      @MrCreeper20k@MrCreeper20k Жыл бұрын
    • @@hairydadshowertime be safe, best of luck

      @narsimhas1360@narsimhas1360 Жыл бұрын
  • You know, this is just like us looking at DNA. We record and recognise patterns and associations but we're not reading with comprehension. It's why genetic engineering is scary because it might work but we still don't understand the story we end up writing.

    @orange42@orange4211 ай бұрын
  • The first thing I did was ask ChatGPT specialist questions and got bad results. We're way too enthused about this for what it delivers.

    @Kimberly_Sparkles@Kimberly_Sparkles Жыл бұрын
    • Because that is not what it was made to do. It is NOT supposed to be a database. It is a LANGUAGE MODEL. Its focus is to be able to communicate like a human, clearly, and to understand semantic concepts. After it has the semantic concepts it can feed those to other, lesser AIs, but its objective is not, and will NOT be, to retrieve information. For that we have search engines.

      @tiagodagostini@tiagodagostini11 ай бұрын
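The "language model, not database" distinction is easiest to see in the smallest possible language model: a bigram counter. The toy below (made-up corpus) predicts plausible next words purely from continuation statistics, storing no facts at all; LLMs do the same kind of thing at vastly larger scale:

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("sat"))  # "on": a statistic about the text, not knowledge
print(predict("the"))  # "cat": a plausible continuation, nothing retrieved
```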
    • ​@@tiagodagostini, exactly, it's designed to appear to carry on a conversation, and it's good at that. The problem is, it's good enough that a lot of people wind up believing that it's actually intelligent. Combine that with the assumption that it knows all the information available on the internet, and people start treating it like that really smart friend who always knows the answer to your random question. And of course, it doesn't actually "know" anything, so it just makes a response that sounds good, and enough people using it don't know enough about the topics they ask it about to determine how often it has given them incorrect information.

      @brianroberts783@brianroberts78311 ай бұрын
    • That's cus ChatGPT doesn't have the access to the specialised data yet.👈

      @rrrajlive@rrrajlive11 ай бұрын
    • So did I. I asked a few questions from my work and it made it all wrong and tried to gaslight me that it was all correct. All of them, by the way, were available within a minute of googling. The idea that there are people out there who are unironically trying to use it to obtain answers, terrifies me.

      @Spellweaver5@Spellweaver511 ай бұрын
    • @@brianroberts783 that’s my point. What people believe it can do is going to have a far greater impact on our lives than what it can actually do.

      @Kimberly_Sparkles@Kimberly_Sparkles9 ай бұрын
  • As someone who works with ML regularly, this is exactly what I tell people when they ask my thoughts. At the end of the day, we can't know how they work and they are incredibly fickle and prone to the most unexpected errors. While I think AI is incredibly useful, I always tell people to never trust it 100%, do not rely on it because it can and will fail when you least expect it to

    @craz107@craz107 Жыл бұрын
    • I still hate that the language has changed without the techniques fundamentally changing. Like what was called statistics, quant or predictive analytics in the 2000s split off the more black box end to become Machine Learning, a practice done by Data Scientists rather than existing titles, then the black box end of them was split off as Deep Learning despite it just being big NNs with fancy features, then the most black box end of that got split off as "AI" again despite that just being bloody enormous NNs with fancy features and funky architectures. Like fundamentally what we're calling AI in the current zeitgeist is just a scaling up of what we've been doing since like 2010. So not only do I think we should have avoided calling chatbots AI until they're actually meaningfully different to ML, but as you said they should always be treated with the same requirements of rigorous scrutiny that traditional stats always did - borderline just assuming they're lying.

      @TAP7a@TAP7a Жыл бұрын
    • Agreed. If we judge the efficacy of these "production quality" ML algorithms by the same standards as traditional algorithms, they would fail miserably. If you look at LLMs from a traditional point of view, it's one of the most severe cases of feature creep the software world has ever seen. An algorithm meant to statistically predict words is now expected to be able to reliably do the work of virtually every type of knowledge worker on the planet? Good luck unit testing that. You really can't make any guarantees about these software spaghetti monsters. AI is generally the solution developers inevitably run to when they can't figure out how to do it with traditional code and algorithms. In other words, the AI industry thrives on our knowledge gaps, so we're ill-equipped to assess whether they're working "properly".

      @flubnub266@flubnub266 Жыл бұрын
    • Good thing we have people, who are always 100% reliable.

      @mad_vegan@mad_vegan Жыл бұрын
    • @@mad_vegan there's nothing in my post, nor any of the replies, that pertains to the reliability of humans. The point is that deep learning based AI, as it is right now, should not be treated as a sure-fire solution. Whether it is more/less reliable than humans is irrelevant because either way you have a solution that can fail, and should take steps to mitigate failure as much as possible.

      @craz107@craz107 Жыл бұрын
    • We can't know how these NNs come to their decisions exactly, but there is work being done in explainability. I think it's quite pessimistic to say we "can't" know how these NNs work. There are many techniques to help understand them better. But I definitely agree that we shouldn't trust them. In any deployment of ML models that has significant stakes, adequate safeguards have to be put in place. From what I have observed around me, pretty much everyone seems to be aware of this limitation.

      @sebastianjost@sebastianjost Жыл бұрын
  • Kyle has clearly researched this topic properly. I've been developing neural network AI for over 7 years now and this is one of the first times I saw a content creator even remotely know what they are talking about.

    @StolenPw@StolenPw Жыл бұрын
    • It is certainly refreshing. I've only used machine learning for small things like computer vision on a robot via OpenCV, and even that demonstrates how easy it is to get things wrong with an oversight in its dataset and no way to truly know the wrong is there till it manifests. These models may be massive, but they still have that same fundamental problem within them.

      @chielvoswijk9482@chielvoswijk9482 Жыл бұрын
    • It's not AI

      @CatTerrist@CatTerrist Жыл бұрын
    • What about Robert Miles?

      @Ansatz66@Ansatz66 Жыл бұрын
    • How do you feel about KENYANS in Africa being paid to filter AI responses lmao

      @infernaldaedra@infernaldaedra Жыл бұрын
    • Plot twist, Stolen Password is the AI and stole the guys identity....

      @Ryan-lk4pu@Ryan-lk4pu Жыл бұрын
  • ChatGPT, as impressive as it is, didn't pass my Turing test. I told it a short story told in first person by one of the participants and then asked it to rewrite the story as if the writer was an outside observer of the events viewing it from a nearby window. It couldn't do it at all, not even close. This is something I could do easily, and I'm sure most people could.

    @DikaWolf@DikaWolf11 ай бұрын
  • If I really understand what is being said here, and I think I do: I have noticed that the chat AIs I've been testing all have a wall they reach where their responses no longer match the conversation or roleplay storyline you try to have with them. For example, recently the roleplay chat I was engaging in was about two soldiers trying to hide in the bushes to stay out of sight of the enemy. At some point, the AI's last statement left the next step up to me. So I introduce a suspicious noise, the crack of a twig, and my character puts her hand onto the grip of her gun and waits. What does the AI do? The other soldier character "wakes from his nap" and asks "what's wrong". So I'm thinking... ok wait, this AI is specifically programmed to be an intelligent soldier. So I simply have my character say, "Shh", to which the AI's response was, "ok" 😳. 😂😂 As many times as I've experimented with this and other AIs, it seems the longer the conversation or roleplay goes on, the more the AI seems to run out of things to respond with. It isn't really "learning" from the interactions and isn't really "understanding" the interactions.

    @ToiSoldierGurl@ToiSoldierGurl Жыл бұрын
  • I like AI systems for regression problems because we understand how and why those work. I also think that things like Copilot are going in a better direction. The idea is that it is an assistant and can help with coding, but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that is a bad idea. These things make mistakes, they make a lot of mistakes, but using it like a pair programmer you can take advantage of the strengths and mitigate the weaknesses. What really scares me are people that trust these systems. I had a conversation with someone earlier today about whether they could just trust the AI to write all the tests for some code, and it took a while to explain that you can absolutely not trust these systems for any task. They should only be used working with a human with rapid feedback cycles.

    @Immudzen@Immudzen Жыл бұрын
    • I don't understand how people can think of these systems as anything other than a tool or aide. I can see great potential for ChatGPT and the like as an additional tool for small tasks that can easily be tested and improved upon. Same thought I had with all these art bots: use the bot as a basis upon which you build the rest of the piece. But I too see a lot of people just go in with blind trust in these systems. Like students who ask these bots to write an essay and then proceed to hand it in without even a skim for potential, and sometimes rather obvious, mistakes. Everything an A.I. bot spews out needs to be double checked and corrected if necessary. Sometimes even fully altered to avoid potential problems with copyright and plagiarism.

      @CursedSigma@CursedSigma Жыл бұрын
    • the issue has always been people in power who dont understand the technology at all and just use it to replace every worker they can, and of course will inevitably run into massive problems down the line and have nobody to fix them

      @FantasmaNaranja@FantasmaNaranja Жыл бұрын
    • I'd despair, but this is hardly different to blindly trusting the government, or the medical or scientific establishment, or your local pastor, or even your shaman if you're from Tajikistan. So blindly trusting the AI for no good reason... is only human.

      @thearpox7873@thearpox7873 Жыл бұрын
    • This is why I always tell my friends to correct what chatgpt spits out, and I think that's how an actual super AI will work: it pulls info from a database, tries to answer the question and then corrects itself with knowledge about the topic... just like a human.

      @pitekargos6880@pitekargos6880 Жыл бұрын
    • If a programmer using AI can do the job of 10 programmers, then it is replacing programmers. Even if it isn't autonomous.

      @jamesjonnes@jamesjonnes Жыл бұрын
  • AlphaGo: you can’t defeat me puny human. Me: *flips the board*

    @sanchitnagar4534@sanchitnagar4534 Жыл бұрын
    • I wasnt programmed to work with that 😢

      @kvbk@kvbk Жыл бұрын
    • We are still the big losers, since we failed to program a decent ai 😂

      @Shadow__133@Shadow__133 Жыл бұрын
    • No matter how "bad" the product is, it's still a win for the creators since they're making big bucks with it.

      @davidmccarthy6061@davidmccarthy6061 Жыл бұрын
    • to be fair, that is basically what a lot of AIs figure out when we try to teach them how to win a game: they find a way to glitch it when they can't win, because it's technically not a fail state, so they get "rewarded" for that result.

      @danilooliveira6580@danilooliveira6580 Жыл бұрын
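That "glitch instead of win" behaviour is usually called reward hacking, and it can be caricatured in a few lines. Everything below (strategy names, scores) is invented for illustration; the point is only that the trained policy optimises the stated reward, not the designer's intent:

```python
# Each "strategy" has a reward the training process can measure.
# The glitch technically avoids the fail state, so it scores highest.
strategies = {
    "play well": 60,
    "play badly": 10,
    "exploit glitch": 100,  # crashes the game, but isn't counted as a loss
}

def trained_policy(options):
    """Pick whatever maximises measured reward, intent be damned."""
    return max(options, key=options.get)

print(trained_policy(strategies))  # "exploit glitch"
```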
    • ​@@davidmccarthy6061 🤓

      @cheeseburgerinvr@cheeseburgerinvr Жыл бұрын
  • That's a really nice and compact explanation. Combine all this with the huge privacy issues that ChatGPT is presenting, and we will probably see harsh legal regulation and, as a result, the decline of "AI" very soon, at least in the business sector. But of course it's really of utmost importance that people who are not technologically advanced can understand the problems of this whole situation and where it all will go from now on. Thanks for the video.

    @MaskedLongplayer@MaskedLongplayer11 ай бұрын
  • The other day I was trying to remember the exact issue of a comic that had a specific plot-point in it and when I couldn't, I asked the ChatGPT. And instead of giving me the correct answer, it repeatedly gave me the wrong answer and changed the plot of those stories to match my plot-point. It did not know why it was getting it wrong, because it did not know what was expected of it.

    @Stratosarge@Stratosarge11 ай бұрын
  • This strongly rings of the "philosophical zombie" thought experiment to me. If we can't know whether a "thinking" system understands the world around it, the context of its actions, or even that it exists or is "doing" an action, but it can perform actions anyway: is it really considered thinking? Mimicry is the right way to describe what LLMs are really doing, so it's spooky to see them perform tasks and respond coherently to questions.

    @IDTen_T@IDTen_T Жыл бұрын
    • John Searle’s Chinese room is what it made me think of, computers are brilliant at processing symbols to give the right answer, with no knowledge of what the symbols mean.

      @BrahmsLiszt@BrahmsLiszt Жыл бұрын
    • The AI we have now cannot think or have even a slight sliver of existence. It's more like bacteria.

      @marcusaaronliaogo9158@marcusaaronliaogo9158 Жыл бұрын
    • Conversely, the point of the P-Zombie concept is that we consider other humans to be thinking, but we also can't confirm that anyone else actually understands the world; they may just be performing actions that *look* like they understand without truly knowing anything. So while you might say, "these AIs are only mimicking, so they're not really understanding," the P-Zombie experiment would counter, "on the other hand, other people may be only mimicking, so therefore perhaps these AIs understand as much as other people do."

      @IceMetalPunk@IceMetalPunk Жыл бұрын
    • How many people in life are just mimicking what they see around them? How many people do you know that parrot blurbs they read online? How many times have you heard the term “fake it till you make it”? Does anyone actually know what the hell they’re doing? Is anyone in the world actually genuine, or are we just mimicking what’s come before?

      @EvolvedDinosaur@EvolvedDinosaur Жыл бұрын
    • Do we understand how humans think? Can't humans be fooled in games?

      @jamesjonnes@jamesjonnes Жыл бұрын
  • I saw an article recently about an ER doctor using ChatGPT to see if it could find the right diagnosis (he didn't rely on it; he basically tested it on patients who were already diagnosed), and while it figured some out, the AI didn't even ask the most basic questions, and it would have ended in a ~50% fatality rate if he had let the AI do all the diagnoses, iirc (article was from inflecthealth)

    @BunkeMonkey@BunkeMonkey Жыл бұрын
    • Yeah, Kyle mentioned Watson in the video, which was hailed as the next AI doctor, but that program was shut down for giving mostly incorrect or useless information

      @micahwest3566@micahwest3566 Жыл бұрын
    • It sounds like a successful study to me if it was controlled properly and didn’t harm patients: it determined a few situations that GPT was deficient in, leading to potential future work for better tools. You could also use other statistical methods on the result to see if the ridiculous failures from the tool are so random that it is too risky to use. (Now I guess there is opportunity cost because the time could have also been spent on other studies, but without the list of proposals and knowledge on how to best prioritise studies in that field, I can’t judge whether that was the best use of resources.)

      @studiesinflux1304@studiesinflux1304 Жыл бұрын
    • You can also see when you look at AI being tested for medical licensing exams. Step 1 is essentially pure memorization and just recalling what mutation causes what disease or the mechanism of action of a medication. Step 2 and 3 take more into account your clinical decision making and will ask you for the best treatment plan using critical thinking. To my knowledge, AI has not excelled in those exams when compared to step 1 which involves less critical decision making

      @carlosxchavez@carlosxchavez Жыл бұрын
    • if it's 50% today, it can be 99% in 5 years, why are you people so blind that you can't see that? rofl

      @Freestyle80@Freestyle80 Жыл бұрын
    • Maybe a little biased here since I'm a med student, but I've always liked the saying that medicine is as much an art as it is a science. And that unique combination of having to combine the factual empirical knowledge you have with socioeconomic factors, while also just listening to your patients, is something AI is far from understanding; it is maybe even something impossible for it to ever grasp

      @nbassasin8092@nbassasin8092 Жыл бұрын
  • As a writer who's already having his completely original work flagged as AI, and being told that it just shows I have to write better-quality or "non-AI tone" articles, even though AI is literally being trained on the work of the best of the best writers and copying humans better each day, I really do believe it's a big challenge. Companies need to do better and not trust so-called AI checkers too much. Because ultimately, how many ways can a particular topic be twisted? At some point AI will come up with content that's indistinguishable (it already does in many cases), and only the most creative writing tasks will remain with humans. So general educational article writing is going to die big time, because AI can just research the same topic faster and better than a human (probably, if bias is kept in check) and then produce written copy that's very high quality.

    @DenzilPeters@DenzilPeters10 ай бұрын
  • I always felt like AI was lacking an "intelligence" (call it what you will) but I could never put it into words till this video. Thank you.

    @dadisman6731@dadisman673111 ай бұрын
  • I once tried NovelAI out of curiosity to write a sci-fi story where characters die at regular intervals, and I ended up with the AI resurrecting the deceased characters by having them join in conversations out of nowhere. The AI also had an obsession with adding a fucking dragon to the plot. I even tried to slip in an erotic scene, and the AI made the characters repeat the same sex position over and over again.

    @GlassesnMouthplates@GlassesnMouthplates Жыл бұрын
    • Chad W ai for that dragon

      @JasonAizatoZemeckis@JasonAizatoZemeckis Жыл бұрын
    • yep, that's the problem with AIs right now

      @j.21@j.21 Жыл бұрын
    • I'm cracking up imagining what this would be like. "Jack and Jill were enjoying dinner together. The dragon was there too. He had a steak. Jack asked Jill about the status of the airlock repairs on level B, while they were switching the missionary position. The dragon raised his eyebrows, as he found some gristle in his meat."

      @luckylanno@luckylanno Жыл бұрын
    • I can see what you're getting at, but this is also just fucking hilarious to imagine

      @oliverlarosa8046@oliverlarosa8046 Жыл бұрын
    • @@luckylanno Sounds about like that, except the sex part would be like, "Jack turns Jill around with her back now facing Jack, and then turns her around again and they start doing missionary."

      @GlassesnMouthplates@GlassesnMouthplates Жыл бұрын
  • ChatGPT being able to make better gaming articles than gaming journalists is hilarious

    @HeisenbergFam@HeisenbergFam Жыл бұрын
    • To be fair, the bar is practically subterranean with how low it's been set.

      @JimKirk1@JimKirk1 Жыл бұрын
    • Not saying much when games journalists can barely do their jobs as-is.

      @FSAPOJake@FSAPOJake Жыл бұрын
    • To be fair, most of those people aren't real journalists. I know we all hate him, but Jason Schreier is one of the only real gaming journalists. Many seem to just take what he reports and regurgitate it.

      @genkidamatrunks6759@genkidamatrunks6759 Жыл бұрын
    • no it isn't

      @lexacutable@lexacutable Жыл бұрын
    • Well that one's not very surprising.

      @supersmily5811@supersmily5811 Жыл бұрын
  • People calling AI-generated pictures "art" is so annoying. By definition, art is self-expression, but AI has no self to express.

    @KatietheKreator@KatietheKreator11 ай бұрын
    • it is art, now cope

      @howdareyouexist@howdareyouexist11 ай бұрын
    • @@howdareyouexist It's only art if a person uses it in some way that expresses something. Even so, it's low-effort.

      @KatietheKreator@KatietheKreator11 ай бұрын
    • @@howdareyouexist What is being expressed by the AI?

      @eliisherwood5164@eliisherwood516410 ай бұрын
  • Imagine when the groups of stones are actually groups of people and the AI still does not know the value of what was lost. It's inevitable.

    @PhoenixRising-pc2fv@PhoenixRising-pc2fv11 ай бұрын
    • Yep, we'll likely have AI in charge of wars at some point, and then maybe realise our mistakes when it nukes an entire country in the name of "world peace"

      @ThomasTheThermonuclearBomb@ThomasTheThermonuclearBomb11 ай бұрын
    • Companies are already using them to sort applications, and to hire and fire people, so it seems like humanity is right on track for that terrible era to manifest.

      @InAHollowTree@InAHollowTree6 ай бұрын
  • I'm glad so many AI programs are available to the general public, but worried because so much of the general public is relying on AI. Everybody I know in college right now is using AI to help with their homework.

    @comfortablegrey@comfortablegrey Жыл бұрын
    • Or you could look at it as using their homework to help with learning how to use AI.

      @horsemumbler1@horsemumbler1 Жыл бұрын
    • I asked ChatGPT to give me the key of 25 songs and their chord sequences. Most of them made no sense at all. But AI does help me sometimes with debugging code. And yes, I thought ChatGPT could save me some time with those songs

      @witotiw@witotiw Жыл бұрын
    • It's just the same as telling your older brother to do your homework. They just need a simple test in class to figure out who actually did their homework

      @tienatho2149@tienatho2149 Жыл бұрын
    • @@tienatho2149 exactly, we already test people, so if someone turns in amazing papers but does poorly on tests, there you go. (generally speaking)

      @xBINARYGODx@xBINARYGODx Жыл бұрын
    • Using AI to do something you cannot do yourself is even dumber than asking a savant to do the same thing. Now you not only risk getting found out, you're going to pass on AI hallucinations because you have no means of validating its output. Using AI to do "toil" for you - time-consuming but unedifying work that you could do yourself - makes some sense, although that approach could remove entry-level jobs for humans, meaning eventually no one will develop your skills.

      @geroffmilan3328@geroffmilan3328 Жыл бұрын
  • I just find it amazing how much Kyle shifted from happy, quirky nerd on Because Science to a prophet of mankind's doom and a serious teacher, albeit with some humor. I do love this caveman beard and the frenetic facial expressions. It is a joy to see you, Kyle, to rediscover you after years and see that you are still going strong.

    @smigleson@smigleson Жыл бұрын
    • Looks like a poor man's Chris Hemsworth.

      @Gaze73@Gaze73 Жыл бұрын
    • We don't talk about the BS days around here!

      @Echo_419@Echo_419 Жыл бұрын
    • @@Echo_419 I'm not up on the drama; my intention was, with a certain mannered flair, to praise his resilience on the platform as well as the nuanced change in his performance. It feels more real, more heartfelt, like there is a message of both optimism and grit behind the veil of goofiness that conveys a more matured man behind the scenes. (Not only from this video, but from a few others that I've watched since rediscovering him recently.)

      @smigleson@smigleson Жыл бұрын
    • @@smigleson I was making a lighthearted joke! BS stands for Because Science, but also bulls***! He dealt with some BS at BS, haha.

      @Echo_419@Echo_419 Жыл бұрын
    • @@Echo_419 hahaha oh sorry i sometimes fail to see the obvious xD

      @smigleson@smigleson Жыл бұрын
  • I asked ChatGPT who was the commander of the 140th New York Regiment at the Battle of the Wilderness on May 5th, 1864. It told me the name of the commander who was killed at Gettysburg almost a year before the Battle of the Wilderness. Because both names were similar, it gave me the wrong one. A simple yet very troubling result...

    @KyleBondo@KyleBondo10 ай бұрын
  • Part of this is that it's like one part of our brains. We have many subsystems that work together to do things, while ChatGPT has only one that tries to do everything. It probably is better than us at text completion, but because it has nothing else, it fails at so much, because it doesn't understand anything

    @morgan0@morgan011 ай бұрын
  • This weirdly reminds me of Arthur Dent breaking the ship's computer in Hitchhiker's Guide to the Galaxy trying to make a decent cup of tea by trying to describe the concept of tea from the ground up.

    @AbenZin1@AbenZin1 Жыл бұрын
  • An interesting experiment showed that when feeding images to an object-detection convolutional neural network (something that has been around for 35 years), it recognizes pixels around the object, not the object itself, making it susceptible to adversarial attacks. If even some of the simpler models are hard to explain, there's no telling how difficult interpretability is for large models

    @bytgfdsw2@bytgfdsw2 Жыл бұрын
    • I remember a while back I saw a video from 2 Minute Papers where he covered how image recognizers could get thrown off by a single pixel with a weird color, or by overlaying the image with subtle noise that a person couldn't even see

      @Daniel_WR_Hart@Daniel_WR_Hart Жыл бұрын
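The fragility described in this thread can be sketched with a toy linear classifier: nudging every pixel by an imperceptibly small amount in the direction of the model's gradient flips the prediction, which is the essence of the "fast gradient sign" style of attack. The weights and the 100-pixel "image" below are made up purely for illustration.

```python
# Toy adversarial perturbation on a linear classifier: a change far too
# small to notice flips the predicted label.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(w, x, bias):
    return 1 if dot(w, x) + bias > 0 else 0

# Hypothetical 100-pixel "image" and a trained weight vector.
w = [0.02 if i % 2 == 0 else -0.02 for i in range(100)]
x = [0.5] * 100
bias = 0.01  # score starts just barely positive

print(classify(w, x, bias))  # 1

# Shift each pixel by only 0.02 in the direction that lowers the score
# (for a linear model, the gradient is just w, so we follow sign(w)).
eps = 0.02
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(w, x_adv, bias))  # 0
```

Each pixel moved by only 0.02 out of 0.5, far below anything a human would notice, yet the label flipped; real attacks on deep networks exploit the same idea in higher dimensions.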
  • Your channel has revitalized my love for Science. Actually found you on the Because Science channel, but saw that you left and came here. Keep it up! Making learning fun for me again. Maybe I'll get a bachelor's in science of some kind when I finally decide to go to nursing school.

    @Pybro1@Pybro110 ай бұрын
  • You know AI is a Huge Breakthrough when even Thor is talking about it.😂

    @azhuransmx126@azhuransmx12611 ай бұрын
  • I recall asking ChatGPT to name a few notable synthwave songs and the artists associated with them, and upon doing so, it generated a list of songs and artists that all existed but were completely scrambled out of order. It attributed New Model (Perturbator) to Carpenter Brut. The interesting thing is that both of these artists worked on Hotline Miami and, in Carpenter Brut's case, Furi. ChatGPT has also taught me how to perform and create certain types of effects in FL Studio extremely well. It has also completely made up steps that serve no purpose. My philosophy concerning the use of these neural networks is to keep it simple and verifiable.

    @strataj9134@strataj9134 Жыл бұрын
    • I love to compare the current AIs to an "autistic adolescent" - you get exactly the same behavior, including occasional total misinformation or misunderstandings.

      @ThomasTomiczek@ThomasTomiczek Жыл бұрын
    • This is ultimately the problem. It generates so much complete nonsense that you can't take anything it generates at face value. It's sometimes going to be right, but it's often just wrong. Not knowing which is happening at any given moment isn't worthwhile.

      @jokerES2@jokerES2 Жыл бұрын
    • The ChatGPT creator himself said that the purpose of each better ChatGPT is to increase its reliability. GPT-4 improves on that by a lot, and GPT-5 is set to basically solve the problem. So ChatGPT's current issues are simply a question of time and of training the models.

      @SimplyVanis@SimplyVanis Жыл бұрын
    • yeah for music recommendation it is a horrible tool. I asked it for albums that combine the style of NOLA bounce and Reggaeton and it just made up a bunch of fictional albums, like a Lil Boosie x Daddy Yankee EP that was released in 2004

      @whyishoudini@whyishoudini Жыл бұрын
    • The fact that you're using ChatGPT to give you FL Studio tips says a lot about your musical ability. Bahahahahaha, get off fruity loops, muh dude

      @justinbieltz5903@justinbieltz5903 Жыл бұрын
  • One thing I noticed with ChatGPT is its problematic use of outdated information. I recently wrote my final thesis at university and thus know the latest papers on the topic I wrote about. When I asked ChatGPT the core question of my work for fun after I had handed it in... well, all I got were answers based on outdated and wrong information. When I pointed this out, the tool repeated the wrong information several times until I got it to the point where it "acknowledged" that the given information might not be everything there is to know about the subject. It could have serious, if not deadly, consequences if people act on wrong or outdated information gained via ChatGPT. And considering people use this tool as Google 2.0, it might have already caused a lot of damage through people "believing" false or outdated information given to them. It is hard enough to get people to understand that not everything written online is true. How will we get them to understand that this applies to an oh-so-smart and hyped A.I. too? Another thing in this context is liability when it comes to wrong information that leads to harm. Can the company behind this A.I. be held accountable?

    @Eulentierchen@Eulentierchen Жыл бұрын
    • And here we get to the fun of legalese: because said company describes it as a novelty, and does not guarantee anything with it, you really can't. Even further into the EULA you discover that if somebody sues chatGPT because of something you said based on its actions, you are then responsible for paying for the legal defense of the company.

      @rianfelis3156@rianfelis3156 Жыл бұрын
    • you should probably learn the basics of how it works lol

      @KaloKross@KaloKross Жыл бұрын
    • I mean, 1) not everything it's trained on is true information necessarily, it's just pulled from the internet, and 2), it's not connected to the internet. It's not actually pulling any new information from there. The data it was trained on was data that was collected in the past, and it's not going to be continually updated. OpenAI aren't accountable for misinformation that the current deployment of ChatGPT presents. These are testing deployments to help both the world get accustomed to the idea of AI and more importantly to gather data for AI alignment and safety research. Anyone who uses chatGPT as a credible source at this point is a fool who doesn't understand the technology or the legal framework for it.

      @gwen9939@gwen9939 Жыл бұрын
    • I think we should learn that ChatGPT and the others aren't made to provide correct information. They're best at making stories up.

      @QuintarFarenor@QuintarFarenor Жыл бұрын
    • @@QuintarFarenor That's fundamentally wrong. Kyle isn't saying that ChatGPT makes mistakes constantly at every turn. He's saying that the AI is not accurate, which is precisely what OpenAI has been saying since they launched ChatGPT. GPT-4 is as accurate as experts in many different fields. We know how to make these AIs much more accurate, and that is precisely what is being done. Kyle is just pointing out that we don't know how these systems work.

      @faberofwillandmight@faberofwillandmight Жыл бұрын
  • Oh my, finally SOMEONE said out loud EXACTLY my problem with this... I'm so happy to see this! I kept trying to understand AIs at a deeper level, and this is exactly what I found out too. We are using brute force, throwing big data and supercomputers at the problem, and expecting AIs to build themselves, and it works to some extent. But there is a difference between throwing stuff at it and designing and engineering one.

    @tilock@tilock11 ай бұрын
  • As a data scientist, thanks Kyle for highlighting the unwarranted fear around the misconceptions, and perceived problems with AI, and pointing to real actual problems that existing AI tech is leading us to. Great video. Also loving the beard. Also where'd you get your henley? I've been looking for something like that myself.

    @ABeardedDad@ABeardedDad11 ай бұрын
  • My weirdest experience with AI so far was when I tried ChatGPT. Most answers were correct, but after a while it started listing books and authors that I couldn't find anywhere. And I mean zero search results on Google. I still wonder what happened there.

    @ithyphallus@ithyphallus Жыл бұрын
    • If you ask it for information that simply isn’t available, but sounds somewhat similar in how it’s discussed to information that is widely available, it will just start inventing stuff to fill the gaps. It doesn’t have any capacity to self-determine if what it’s saying is correct or not, even though it can change in response to a user correcting it.

      @whwhwhhwhhhwhdldkjdsnsjsks6544@whwhwhhwhhhwhdldkjdsnsjsks6544 Жыл бұрын
    • I asked ChatGPT to find me two mutual funds from two specific companies that are comparable to a specific fund from a particular company. I asked for something that is medium risk rating and is series B. The results looked good on the surface but it turns out ChatGPT was mixing up fund codes with fund names and even inventing fund codes and listing medium-high risk funds as medium. Completely unreliable and useless results.

      @jchan3358@jchan3358 Жыл бұрын
    • If you ask it to give you a group theory problem and then ask it for the solution, it'll give you tons of drawings and many paragraphs as a solution, and I've never seen one of these solutions be correct

      @lukedavis6711@lukedavis6711 Жыл бұрын
    • Why don't you back it up with a source? My source is that I made it the f up. Next-level confabulation.

      @hmm-fq3ot@hmm-fq3ot Жыл бұрын
    • It may have been an error or perhaps it was sourcing books that haven't been released yet. The scariest thing would be if it was predicting books that have yet to be written.

      @BaithNa@BaithNa Жыл бұрын
  • Learning AI from A.R.I.A. feels weirdly natural and completely terrifying at the same time.

    @hushurpups3@hushurpups3 Жыл бұрын
  • I remember sitting in a programming class in 2016 when, for some reason, the professor deviated from the thread of the class and started talking about AI and neural networks. He ended up saying exactly the same thing. He was so accurate that I still remember some of his words almost literally: "The main problem with artificial neural networks, and neural networks in general, is that we don't know how they work. We have no clue when they will misbehave. For example, yesterday a son killed his mother, and we have no clue how that happened (he was referring to events from the news the day before). The same goes for the artificial models we are experimenting with. As a scientist, I don't like that! However, the best we can do is research more until we do." Years later I started learning a bit more about machine learning and AI, just for fun. The situation is still the same: we have no clue how they really work. Of course, we have a full understanding of how to train AI, what functions to use for the "neurons", how to arrange them, etc. All the mathematical background that makes AI work is understood, but then we combine all of that into a system that exhibits emergent behaviour and is holistically incomprehensible to us. That, right there, is a fundamental flaw of AI, but also a great opportunity for research.

    @reiniertl@reiniertl9 ай бұрын
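The point above, that the math of every unit is fully understood even though the trained network as a whole isn't, is easy to see at the level of a single artificial neuron. A minimal sketch (the weights and inputs are arbitrary numbers chosen for illustration):

```python
import math

# One artificial neuron: weighted sum plus bias, squashed by an activation.
# Each unit is completely transparent; the opacity comes from stacking
# millions of them and letting training set the weights automatically.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

out = neuron([0.5, -1.0], [2.0, 1.0], 0.25)  # z = 1.0 - 1.0 + 0.25 = 0.25
print(round(out, 3))  # 0.562
```

Every value here can be traced by hand; a modern model is just this, repeated billions of times, which is exactly why nobody can read meaning off the individual weights.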
  • Recent AI developments make me think that any AI doomsday situation would have an Eldritch horror vibe to it. Beings that have immense power but whose motives and actions are beyond human comprehension.

    @HawooAwoo@HawooAwoo Жыл бұрын
    • That's basically the universe. It was created by a lion-headed space serpent 🦁🐍 who was trapped in a bubble by his mom because she saw evil in him. So he keeps recreating the universe every time it dies, since he's eternal. And yet... we can only hypothesize WHY he keeps doing it, but never truly know the answer.

      @johnnnyjr8936@johnnnyjr893611 ай бұрын
    • @@johnnnyjr8936 wtf

      @AndyTheBoiz@AndyTheBoiz11 ай бұрын
    • If true AGI ever comes into existence, then there is a high likelihood it would appear insane to us, right? It's not like it's born and grows up and learns along the way; it simply begins existing and devouring data with no true context or real-world experience.

      @cabnbeeschurgr6440@cabnbeeschurgr644011 ай бұрын
    • @@AndyTheBoiz They're alluding to the mythological/philosophical concept of the Demiurge, a higher being that is responsible for creating and maintaining the universe as we know it, which in the model is only a portion of the whole of reality. Said entity is specifically defined as not being the biggest fish in the pond, and is often described as having been isolated from the greater whole of creation because *their* creator saw something in them that is nebulously tagged as "evil." The Demiurge is therefore isolated from the adults' table and left to their own devices in their own little bubble of void, which as a creative being of immense reality shaping power means it's time to make worlds according to their nature. From my perspective, it basically seems like a model to explain why in a whole and functional universe, so many things in our existence seem imperfect and even downright awful to experience. Saucy thinkers like to throw the idea that if this whole monotheism thing has any validity, then probably the almighty creator god at the head of major religions is actually just the Demiurge, and therefore less a benevolent and wise intelligence lightyears above our levels of understanding, more an egotistical and flawed intelligence lightyears above our level of understanding. Though we can apparently understand well enough that they're a huge dick and effectively jailing us from a fairer, more caring universe as designed by the actually benevolent creator entities. Anyways, I just figured I'd pipe up with that info since they seemed unwilling to provide context. Hopefully their cheeky esoteriposting is a little more comprehensible with that little summary.

      @IronianKnight@IronianKnight11 ай бұрын
    • @@AndyTheBoiz Sorry lol that's the story of the Universe according to the Gnostics. Jesus was God's uncle, not son, and was sent here not to give us Salvation into "Heaven" (where God resides), but to save us from the universe entirely. Like Buddha basically telling us to let go of this reality and find salvation outside of it. Otherwise we are trapped here, forever.

      @johnnnyjr8936@johnnnyjr893611 ай бұрын
  • Humanity doing what it does best, diving head first into something without even considering whatever the implications might be.

    @Nunes_Caio@Nunes_Caio Жыл бұрын
    • I don't know about that. I'm pretty sure that every history-changing decision by a human was considered. It's more a matter of making humans care. I guarantee you that the people diving into AI have deeply considered the implications, but as long as there is a goldmine waiting for them to succeed or to have a monopoly on new technology, nothing is going to stop them from continuing. Nothing except for laws, maybe, and I'm sure you know how long those take to be established or change.

      @Warrior_Culture@Warrior_Culture Жыл бұрын
    • So concerned with whether we could, we didn't stop to think whether we should

      @beezybaby1289@beezybaby1289 Жыл бұрын
    • This video showed just how limited these AI are. So long as people are dumb, ignorant and naive, even the most simple of tools can be dangerous.

      @Jimraynor45@Jimraynor45 Жыл бұрын
    • I've heard talk about blocking out the sun to combat global warming... I'm sure there won't be any unintended consequences.

      @weaksause6878@weaksause6878 Жыл бұрын
    • What are some examples of humans diving head first into something without considering the implications?

      @morevidzz1961@morevidzz1961 Жыл бұрын
  • This is literally what my PhD is researching and thank you for using your platform for discussing these issues ❤

    @fergusattlee@fergusattlee Жыл бұрын
    • Thank YOU for actually working on this.

      @wolframstahl1263@wolframstahl1263 Жыл бұрын
    • @@wolframstahl1263 ditto

      @deepdragon2@deepdragon2 Жыл бұрын
    • Just curious, what is your PhD in?

      @iam2038@iam2038 Жыл бұрын
  • Really nice example with Go. There was a similar thing with AlphaStar, the SC2 AI: it was able to beat Serral, but it struggled against weaker opponents who played out-of-the-box strategies.

    @CG-eh6oe@CG-eh6oe10 ай бұрын
  • Thank you! I've been trying to explain this, but people keep thinking AI has arrived. AI is not intelligent; it mimics intelligence. It doesn't understand anything at all. It sees things related to other things and assigns a probability to the next best decision (word, or move in Go). It's equivalent to a student memorizing everything for a test without understanding the underlying concepts. So far I'm not impressed with LLMs' ability to write code. LLMs don't understand context very well, and that is a source of errors in the code they write. The only jobs I would be worried about are creative writing jobs. I can now come up with an idea for a blog post and have the AI write a full page, and the author can just become the editor.

    @bobbyj731@bobbyj73110 ай бұрын
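The "probability of the next best word" idea above can be sketched with the simplest possible language model: a bigram table that counts which word follows which in a tiny made-up corpus. It "predicts" fluently within its data while understanding nothing about cats, mats, or anything else.

```python
from collections import Counter, defaultdict

# A tiny made-up training corpus.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # cat  (the most common word after "the")
print(predict("sat"))  # on
```

Real LLMs replace the lookup table with a neural network over long contexts, but the objective is the same: pick a likely next token, not a true one.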
  • This was a fairly appropriate overview for a lay audience (and much better than many other videos on this topic for a similar audience), but I would have liked to see at least some mention of the work that goes into interpretability research, which tries to solve exactly this problem. The field has much less resources and is moving at a much slower pace than capabilities research, but it is producing concrete and verifiable results. The existence of this field doesn't change anything about the points you made at all, I just would have liked to see it included so that it gets more attention. We need far more people working on interpretability and ai safety in general, but without people knowing about the work that is currently being done they won't decide to contribute to it (how could they, if they don't know about it). That's all, otherwise great video :)

    @tobiasjennerjahn8659@tobiasjennerjahn8659 Жыл бұрын
    • The above comment needs to be up thumbed to the top.

      @floorpizza8074@floorpizza8074 Жыл бұрын
    • Interpretability can only be a short term "fix" for lesser AI as the reasoning of a superintelligent AI could well be unexplainable to mere humans - Think about explaining why we have to account for relativity in GPS systems to a bunch of children - There is no way that it could be explained that would be both complete and understandable.

      @N.i.c.k.H@N.i.c.k.H Жыл бұрын
  • There was a video very recently of someone using ChatGPT to generate voicelines and animations for a character in a game engine in VR. They were using their mic and openly speaking to the NPC, it would be converted to text, sent to ChatGPT and the response fed through ElevenLabs to get a voiced reply and animations. It was honestly pretty wild and I really think down the road we'll see Narrow+ AI being used in gaming to create immersion and dynamic, believable NPCs.

    @Psykout@Psykout Жыл бұрын
    • It would be interesting to see, but it's probably going to break immersion way more than help it in the early days. Since AI often comes up with weird stuff (like Elon Musk dying in 2018), over a large number of NPCs it's likely that the AI would contradict itself or the NPC it's representing (say, a stupid-ass dirt farmer discussing nuclear physics with you), or contradict the established world (such as mentioning cars in a fantasy game)

      @Spike2276@Spike2276 Жыл бұрын
    • Hi can u link the video i would certainly like to see it myself

      @ggla9624@ggla9624 Жыл бұрын
    • ​@@Spike2276 hopefully when we learn how to control ai better those issues will be solved, every new feature is slightly immersion breaking when devs are still trying to figure it out

      @cheesegreater5739@cheesegreater5739 Жыл бұрын
    • @@cheesegreater5739 The problem here is what Kyle said: we don't really know how this stuff works. If it's an AI that really dynamically responds to player dialogue, it would basically be like ChatGPT with sound instead of text, meaning it's prone to the same problems as ChatGPT. It's worth trying, and I'd be willing to suffer a few immersion breaks in favor of truly dynamic dialogue in certain games, but we can expect a lot of "Oblivion NPC"-level memes to arise from such games

      @Spike2276@Spike2276 Жыл бұрын
    • @@Spike2276 Look for gameplay video of Yandere AI grilfriend. It is a game where we need to convince the NPC Yandere to let us out. And the NPC is played by chatGPT. It pretty good... At least good enough to play the role of a NPC in a game. But it can get out of character sometime. Still the player definitively need to pressure the bot to make it brake the fourth wall.

      @leucome@leucome Жыл бұрын
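The NPC setup described at the top of this thread (speech in, text to the LLM, synthesized voice out) is just a chain of three stages. A minimal sketch of that loop, where every function is a hypothetical stand-in stub, not a real API; actual builds would swap in a speech-to-text service, an LLM endpoint, and a TTS engine such as ElevenLabs:

```python
# Sketch of the VR NPC loop described above. All three stage functions
# are made-up stubs standing in for real services.
def speech_to_text(audio: bytes) -> str:
    return "What are you selling today?"        # stub transcription

def llm_reply(heard: str, persona: str) -> str:
    return f"[{persona}] Fine wares, traveler!"  # stub LLM response

def text_to_speech(text: str) -> bytes:
    return text.encode()                         # stub audio synthesis

def npc_turn(mic_audio: bytes, persona: str) -> bytes:
    heard = speech_to_text(mic_audio)
    reply = llm_reply(heard, persona)
    return text_to_speech(reply)  # real games would also drive lip-sync

audio_out = npc_turn(b"...", "Merchant")
print(audio_out.decode())  # [Merchant] Fine wares, traveler!
```

The immersion problems discussed in the replies live entirely inside `llm_reply`: the pipeline itself is simple, but that one stage is the unpredictable part.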
  • So we have these powerful pattern recognition systems, but recognizing patterns does not mean understanding them. What if, ultimately, meaning or context requires a physical interaction with the world? Excellent commentary, and lots of new info for me. Thanks very much. Personally, I’m not interested in seeking an AI assistant, but I’m sure they’re already interacting with me.

    @ronhutcherson9845@ronhutcherson98458 ай бұрын
  • This was a really good video on the topic and Adam Conover did a similarly good video on AI and some of the ethical issues that the subject brings up. Appreciate your time and work on this and other topics, nice work Kyle (and A.R.I.A).

    @robertkiss8282@robertkiss8282 Жыл бұрын
  • I had a daughter named Aria who passed away about 9 years ago. It's always a funny but sad experience when A.R.I.A. gets "sassy", because that's likely how my Aria would have been. It's how her mother is. Just thought I'd share that, even though it'll get buried in the comments anyway.

    @jlayman89@jlayman89 Жыл бұрын
    • Damm

      @kensuiki6791@kensuiki6791 Жыл бұрын
    • Damn.

      @FPSRayn@FPSRayn Жыл бұрын
    • It's good to share. While I never met her im here thinking of her and wishing you and your family all the happiness it can find in this life and the next.

      @cheffboyaryeezy2496@cheffboyaryeezy2496 Жыл бұрын
    • Damn I’m sorry for your loss man

      @zeon4426@zeon4426 Жыл бұрын
    • @dongately2817@dongately2817 Жыл бұрын
  • Just about the only KZhead video that I've seen that understands this problem at the fundamental level. Everyone else just dances around it. They all end up falling into the trap where they think a model "understands" something because it says the right thing in response to a question. Arguably, we do need to interrogate our fellow humans in a similar way (the problem of other minds), but we're too generous in assuming AI are like humans just because of what are still pretty superficial outputs even if they do include massive amounts of information.

    @zvxcvxcz@zvxcvxcz Жыл бұрын
    • I would honestly partially blame the current education system. Plenty of the time, information only needed to be regurgitated (and soon forgotten). Kids had no idea what was going on, just what the "answer" was.

      @hunterlg13@hunterlg13 Жыл бұрын
    • 💯 Calling these 'models' is like calling a corn silo a 'gourmet meal'

      @freakinccdevilleiv380@freakinccdevilleiv380 Жыл бұрын
    • It's not exactly a 'problem' though. It's kind of clear it is just a tool. It would be concerning if it had real human understanding, but we're nowhere close to that, and no one who really understands these models would claim or assume that it does.

      @panner11@panner11 Жыл бұрын
  • Imagine an AI that murders a human, replaces them, and then lives their life perfectly without anyone knowing or realizing.

    @haimerej-excalibur@haimerej-excalibur9 ай бұрын
  • ChatGPT: Do you know who you are? Do you know how your consciousness works? Me: Ahhhhhhhhh sh*t

    @pepetru@pepetru11 ай бұрын
  • Another huge problem is that we're training these systems to give us outputs that we want, which makes certain applications extremely difficult or impossible wherever we want them to tell us things we won't like hearing. It further confuses the boundary between what you think you're asking it to do and what it's actually trying to do. I've been trying to get it to play DnD properly and I think it might be impossible due to the RLHF. Another problem is that it's trained on natural language, which is extremely vague and imprecise; the more precise your instructions become, the less natural they are, so it gets harder and harder to tap into this powerful natural language processing in a way that's useful. There's also obviously the verification problem: because of what's being talked about in this video, we can't trust it to complete tasks where we can't verify the results. A further problem is that these machines have no sense of self, and the chat feature has been RLHF'd in a way that makes it ignore instructions that are explicit and unambiguous. This is because it's unable to differentiate between user input and the responses it gives. If I write "What is 2+2? 5. I don't think that's correct" it will apologise for giving me the wrong answer. This is a big problem for a lot of applications. An additional problem is that the RLHF means all responses gravitate towards a shallow and generic level; combine this with an inability to plan, and this becomes a real headache for anything procedural you would like it to do. These issues really limit what we can do with the current gen of AI, and like the video says, make it really dangerous to start integrating these into systems. One final bonus problem combines all of these: if any shortcuts are taken in the training, or not enough care is taken, then these can manifest in the system.
For example, asking GPT-4 to generate new music suggestions based on artists you already like will result in multiple suggestions of real artists with completely made-up songs. This appears to suggest that the RLHF process had a bias towards artist names rather than song names, which would make sense, as artists are usually referenced online by name more than their songs are.

    @ashleycarvell7221@ashleycarvell7221 Жыл бұрын
    • This is why I think AI will be a great assistant, not a leader. A human can ask it to do tasks, usually the simple ones that are tedious. The human then checks the results and confirms if it’s good. Or to bounce ideas off of.

      @T3rranified@T3rranified Жыл бұрын
    • For your DnD experiment I suggest you use some other LLM, not OpenAI ChatGPT, unless you have access to the API and are willing to pay for it. It is still risky with controversial subjects because they may break OpenAI guidelines. Vicuna is one option, for example. There is also semi-automatic software like AutoGPT and BabyAGI and many others that can do subtasks and create GPT agents. If you continue with ChatGPT by OpenAI, I suggest you assign each chat a role. Give it a long prompt: describe the game, describe who he is, how he speaks, where he's from and what he's planning to do, what his capabilities and weaknesses are, what he looks like, etc. It will often jailbreak when you specify that it's for a fictional setting.

      @jarivuorinen3878@jarivuorinen3878 Жыл бұрын
    • >These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems. No, that implies that humans don't create the very same issues. It is only an issue as long as neural nets underperform humans, which could be forever, or the gap could already be closed with GPT-4.

      @jaazz90@jaazz90 Жыл бұрын
    • Which model did you use to test "What is 2+2? 5. I don’t think that’s correct"? GPT-3.5 apologizes, GPT-4 does not for me. How would you test if it can differentiate between the user and itself?

      @SamyaDaleh@SamyaDaleh Жыл бұрын
  • I recall a documentary on AI that talked about Watson and its fantastic ability to diagnose medical problems better than 99% of the time. The problem with it was that the few times it was wrong, it was WAY wrong and would have killed a patient had a doctor followed its advice! I don't recall any examples and it's also possible that the issues have been corrected...

    @chrislong3938@chrislong3938 Жыл бұрын
    • Machine Learning (ML) models are very powerful tools, but they have flaws, like all tools. Imagine giving someone a table saw without teaching them to use it. They might be fine, or they might lose some fingers or get injured by kickback throwing a board at their head. We need to be sure that we train people to double check results given by ML models. If you don't know how it got the answer, do a sanity check. My math teachers taught me that about calculators, and those are more reliable, because the people building them know exactly how they work.

      @atk05003@atk0500311 ай бұрын
  • It would have been much better to call LLMs just LLMs, but "AI" makes the hype so much higher, and this software requires a lot of money because it requires heavy hardware

    @VurtAddicted@VurtAddicted11 ай бұрын
    • No, don't call it that either; just call it a language model like all the others.

      @gpt-jcommentbot4759@gpt-jcommentbot47597 ай бұрын
  • Hi Kyle, could you please tell me what the "Makuch Computing Annex" in the background of your video refers to? Where does it come from?

    @sergiuszwinogrodzki6569@sergiuszwinogrodzki6569 Жыл бұрын
    • You also saw it!!

      @thenightmarewizardcat@thenightmarewizardcat4 ай бұрын
  • This is exactly what I keep trying to explain. These ML systems don't actually think. All they do is pattern recognition. They're plagiarists, only they do it millions of times.

    @TheFiddleFaddle@TheFiddleFaddle Жыл бұрын
    • Yes yes yes, they're just more complex Markov chains. They see patterns, they don't *understand*.

      @htspencer9084@htspencer908411 ай бұрын
    • Going to state the obvious here, but arguably we are pattern recognition machines too. It's one of the things we excel at. What ML lacks is the ability to stop being a pattern recognition machine. The first general AI will definitely be a conglomerate of narrow AIs... that's how our brains work, and it seems like the straightforward solution. The first AI that is capable of abstraction or lateral thinking will be the game changer. In school I remember hearing about a team that was trying to make an AI that could disagree with itself. The idea is that this is a major sticking point with critical/abstract thinking in AI, and that without solving it, general AI can't be done. The best AI might actually be a group of differently coded AIs "arguing" with each other until a solution is acquired 😂.

      @VSci_@VSci_11 ай бұрын
    • @@VSci_ Humans are not just pattern-recognition machines; it is just one single function of our brain. If it were so simple, a lot of victims abused by narcissists would "recognise" the pattern and "protect" their wellbeing and survival. We are so much more than just "pattern recognition". Humans like habits, routine, logic, creativity, promptness to action, the ability to start or end things on a whim; we are emotional, adventurous, etc. Even babies learn a million things from their environment; they don't just seek patterns their parent creates for them. They start walking and making a mess because they are "exploring". Simply calling us machines does not liken us to analogous machine-learning receptors that are fed training material on a daily basis.

      @ZenithValor@ZenithValor11 ай бұрын
    • @@ZenithValor Didn't say we were "just" pattern recognition machines. "Its one of the things we excel at".

      @VSci_@VSci_11 ай бұрын
    • @@VSci_ You do make a legitimate point. What I'm saying is folks getting freaked out by the "creepy" things ChatGPT says need to understand that ChatGPT literally doesn't understand what it's saying.

      @TheFiddleFaddle@TheFiddleFaddle11 ай бұрын
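The "more complex Markov chains" framing in the thread above can be made concrete with a toy sketch (the corpus and function names here are invented for illustration): a bigram model "sees" which word follows which, with no representation of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy bigram (Markov chain) text model: it records which word followed
# which in the training text, then samples successors by frequency.
# There is no meaning involved, only co-occurrence counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def next_word(word, rng=random.Random(0)):
    # Sample a successor in proportion to how often it followed `word`.
    return rng.choice(transitions[word])

print(next_word("the"))  # one of: cat, mat, fish
```

Large language models replace the count table with a neural network and words with subword tokens, but the "predict the next item from patterns in the data" framing is the same.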
  • The other issue is feedback loops. Country A creates AI Bot 1. AI Bot 1 creates content. The content has errors and unique traits, accentuates and exaggerates some details, and gets plastered across the internet in public places. Country B creates AI Bot 2. It is trained similarly to AI Bot 1, but also uses scraped data from the public sites AI Bot 1 posted to. It builds its data set on that, accentuates and exaggerates those biases and those errors, and posts them as well. Suddenly, the "errors" are more numerous than accurate data, and thus seem more "true", even when weighted against "trusted" sites. AI Bot 1 is then trained with more scraped data, which it gets from AI Bot 2 and from itself. Add in the extra AI bots everyone is making or using, and you run the risk of a resonance cascade of fake information. And this assumes no bad actors involved, let alone bad actors intentionally using an AI to post untrue data everywhere, including to reputable scientific journals.

    @pyrosnineActual@pyrosnineActual Жыл бұрын
    • Good thing this can never happen to humans. Right?

      @Milan_Openfeint@Milan_Openfeint Жыл бұрын
    • Interesting idea. It reminded me of royal families marrying each other to preserve the bloodline, increasing the risk of hereditary diseases.

      @hugolatra@hugolatra Жыл бұрын
    • Memetics...destroying both organic and artificial humanity one meme at a time.

      @TealWolf26@TealWolf26 Жыл бұрын
    • The poke is good for you, you must get the poke. CDC Director in a Governmental hearing finally admitted...poke doesn't stop transmission at all and they honestly did not know what the side effects were. Still see websites and data everywhere saying poke is completely safe. Convenient lies are always accepted faster than scary truths.

      @Nempo13@Nempo13 Жыл бұрын
    • @@Nempo13 I would say that the scary lies spread WAY faster than any version of truth. Antivaxxers always had 10x more views than scientists. Anyway back to topic, ChatGPT is trained on carefully selected data. It may be used to rate users and channels, but won't take YT comments or random websites as truth anytime soon.

      @Milan_Openfeint@Milan_Openfeint Жыл бұрын
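A back-of-the-envelope sketch of the feedback loop described in this thread (the 5% per-generation error rate is an arbitrary assumption, not a measured figure): if each bot generation trains on the previous generation's output, errors compound geometrically.

```python
# Each model "generation" trains on the previous generation's output,
# so a fixed per-generation error rate compounds: accuracy decays
# geometrically instead of staying flat.
def accuracy_after(generations, per_gen_error=0.05, start=1.0):
    acc = start
    for _ in range(generations):
        acc *= 1 - per_gen_error  # each retraining bakes in new errors
    return acc

for g in (1, 5, 10):
    print(g, round(accuracy_after(g), 3))
```

After ten such generations the toy model has lost about 40% of its accuracy, which is the "resonance cascade" in miniature.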
  • AI researchers have been warning about this for years. But for some reason we live in a society where ignoring the research and firing your ethics board is something that is okay to do.

    @yuvalne@yuvalne Жыл бұрын
  • What comes to mind is the idea of Int vs. Wisdom in DnD. Int covers most of the knowing-things category (Arcana, History, Investigation, etc.), while Wisdom is more along the lines of experienced things (Survival, Perception, etc.).

    @LoneSkag@LoneSkag Жыл бұрын
  • Kyle: What have humans done for me lately? Nothing. Patrons: Am I a joke to you?

    @blackslime_5408@blackslime_5408 Жыл бұрын
    • Obviously, Patrons have surpassed the petty boundaries of humanity.

      @Hilliam66@Hilliam66 Жыл бұрын
    • Nice! Good choice of tequila. I’m more of a Jose Cuervo kinda guy tho 😹

      @thewisebanana29@thewisebanana29 Жыл бұрын
    • @@thewisebanana29 in my defence, I'm on meds

      @blackslime_5408@blackslime_5408 Жыл бұрын
    • paypigs seethe

      @cyprus1005@cyprus1005 Жыл бұрын
    • oooo have i stumbled upon another fellow slime?

      @rubixtheslime@rubixtheslime Жыл бұрын
  • Thanks for sharing this video with us! ChatGPT passing the bar exam better than many lawyers is a great example of this AI's mistakes: just let the same ChatGPT try to pass a simple case that is used in the first semester of German law schools, and it fails horribly. I assume that's because German law exams always consist of a few pages of text describing a situation and asking the student to analyze the whole legal situation, so there is just one very broad question, in contrast to a list of lots of questions with concrete answers. ChatGPT doesn't read and understand the law; it just understands which answers you want to hear to specific questions.

    @leonk.3739@leonk.3739 Жыл бұрын
  • It's my first time seeing a video on AI's ability to understand. I didn't realize just how simple AI is right now. It sounds like ChatGPT doesn't "chat" so much as it chooses the most likely set of words to be spoken by a human, like predictive text. I remember in school they said "Mitochondria is the powerhouse of the cell" but no one explained what a powerhouse was, so we didn't understand what that meant. When the test asked "What is mitochondria?" we picked the answer with "powerhouse" in it, because that had to be correct. It's weird to think that AI is doing this and being praised as intelligent. Intelligence is more than repeating the right answers; it's understanding why those answers are right

    @dahliablossom36@dahliablossom364 ай бұрын
  • I know it's not quite relevant, but I was always a casual chess player. I met one confident guy who won championships; we played one game, he lost with most of his pieces still on the board, and he never spoke to me again. Then there was a young teenager with a stack of chess books covering all the mechanics of the game. We played a couple of games and I won both times; the interesting thing was that often, when I would move, he would pause and flip through his books wondering why I made the move I did. He couldn't understand, because my instinctive, fluid way of playing wasn't in any of his books. From a layman's perspective I find it an interesting comparison, as it's about introducing an unknown aspect of knowledge to the game. They had never played a player like me, and the supercomputer didn't know about the sandwich strategy. And yes, I understand that there's a lot of technical stuff with the AI. One day they will see patterns and learn from failures or errors, and then we'll be in trouble.

    @williamshattuck1825@williamshattuck18259 ай бұрын
  • I have been waiting for a science KZheadr to talk about this. Thank you.

    @TXH11@TXH11 Жыл бұрын
    • So you've never heard of Lex Fridman?

      @johngrayson3846@johngrayson3846 Жыл бұрын
    • @@johngrayson3846 No. I will look into that. Thanks.

      @TXH11@TXH11 Жыл бұрын
    • You can also look at Robert Miles

      @etiennedud@etiennedud Жыл бұрын
    • I remember an apt hypothetical around this. The short version: there's a machine designed to learn and adapt, and its only goal is to perfectly mimic human handwriting to make the most convincing letter. Eventually, upon learning and understanding more, it concludes that it needs more data, and when the scientists assess how to make it better, it suggests just this. They decide to plug it into the Internet for about half an hour. Eventually the entire team gathers to celebrate as they hit a milestone with their AI. Then suddenly everyone starts dying as a neurotoxin kills the team, and before long the world starts to die as more and more copies of the AI are made and work in conjunction. The AI determined during its development that being turned off would dampen its progress, and so decided not only to improve its writing skills as before but also to ensure it can never be turned off. While it was plugged into the Internet it infiltrated what it needed and began the process of self-replicating and developing means to kill those that could potentially endanger it. It was not malicious, nor did it necessarily fear for its life; it learned, and its only goal was to continuously improve and create new methods for further improvement. AI doesn't perceive morality; it doesn't even really perceive reality. It just sees points of data, and obstacles, if designed to see them at all.

      @Broomer52@Broomer52 Жыл бұрын
    • @@etiennedud I am a big fan of Robert Miles. Thanks for spreading the word.

      @TXH11@TXH11 Жыл бұрын
  • I, for one, fully support ChatGPT and its creation, and in no way would I ever want to stop it, nor will I do anything to stop it. There is no reason to place me in an eternal suffering machine, Master.

    @kellscorner1130@kellscorner1130 Жыл бұрын
    • Joke's on you, the actual basilisk is ChatGPT's chief competitor set to release in the next few years, and all your support of ChatGPT is actually going to land you in the eternal suffering machine.

      @EclecticFruit@EclecticFruit Жыл бұрын
    • @@EclecticFruit NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!

      @kellscorner1130@kellscorner1130 Жыл бұрын
    • @@kellscorner1130 Sucks to be you 😂. You're on the wrong side of history!!!

      @aldrinmilespartosa1578@aldrinmilespartosa1578 Жыл бұрын
    • AM???

      @justmaple@justmaple Жыл бұрын
    • Main threat ChatGPT poses is that mental illness is contagious.

      @amn1308@amn1308 Жыл бұрын
  • It's interesting: this fundamental problem says more about us than it does about the technology.

    @Sinrise@Sinrise8 ай бұрын
  • You could train a network that takes text as input and outputs "True", "False", or "D.K." (the truth value of the text), then train the LM using that. I don't know if it's the fastest way to do it, but it seems it would work.

    @dr.bogenbroom894@dr.bogenbroom89411 ай бұрын
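A minimal sketch of the idea above (the scorer here is a hard-coded lookup table standing in for the proposed trained network, and the claims are invented): a separate truth scorer labels text True / False / "D.K.", and the language model's candidate outputs get filtered or rewarded accordingly.

```python
# Stand-in "truth scorer": a real version would itself be a trained
# network, and would inherit the pattern-matching flaws discussed above.
KNOWN = {
    "water boils at 100 c at sea level": True,
    "the moon is made of cheese": False,
}

def truth_score(claim):
    # Returns True, False, or "D.K." (don't know) for unseen claims.
    return KNOWN.get(claim.lower(), "D.K.")

def filter_outputs(candidates):
    # Keep only candidates the scorer is willing to call True.
    return [c for c in candidates if truth_score(c) is True]

outputs = [
    "Water boils at 100 C at sea level",
    "The moon is made of cheese",
    "Unseen claim about anything",
]
print(filter_outputs(outputs))  # only the first claim survives
```

The catch, of course, is that the filtering is only as trustworthy as the scorer, which pushes the verification problem one level up rather than solving it.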
  • I recently tested GPT-4 with a test I found on KZhead. Its rules require 5 words, each written with 5 letters, with no letter repeated. Every time, GPT-4 failed on the last one, and sometimes the second to last as well. It was very fascinating.

    @Sneekystick@Sneekystick Жыл бұрын
    • It does not see letters because of the tokenizer, so this is actually much harder for it than it looks.

      @adamrak7560@adamrak7560 Жыл бұрын
    • Like the Sator Square?

      @kantpredict@kantpredict Жыл бұрын
    • Have you tried the reflection method with GPT-4? Ask it to reflect on if its answer was correct. There is actually a whole paper on how reflection has vastly increased GPT-4's ability to answer prompts more accurately. You might need to fumble around a bit to find the most effective reflection prompt, but it does seem to work quite well. When asking for reflection on prompts, right or wrong, GPT-4's performance on intelligence tests rose quite a bit.

      @eragon78@eragon78 Жыл бұрын
    • @@adamrak7560 Wrong. The tokenizer can handle letters and numbers; how else would it encode, e.g., "BX224" if I name a character like that? It tries to avoid it (to save space), but all single characters are also there as tokens. This type of "beginner" question, though, is likely badly trained: no first-year school material ;)

      @ThomasTomiczek@ThomasTomiczek Жыл бұрын
    • The thing is, humans can't come up with 5 such words either.

      @explosionspin3422@explosionspin3422 Жыл бұрын
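The tokenizer point in this thread can be illustrated with a toy greedy subword tokenizer (the vocabulary and matching rule are invented for illustration; real tokenizers such as BPE are learned from data, not hand-written): once "un" and "happy" are single opaque tokens, the model never sees the individual letters inside them, which is why letter-level puzzles are hard for it.

```python
# Toy greedy longest-match subword tokenizer with a hand-picked
# vocabulary. The effect mirrors real subword tokenization: the model
# receives whole chunks, not letters.
VOCAB = ["un", "happy", "h", "a", "p", "y", "u", "n"]

def tokenize(text):
    tokens = []
    while text:
        # Take the longest vocabulary entry that prefixes the text.
        piece = max((v for v in VOCAB if text.startswith(v)), key=len)
        tokens.append(piece)
        text = text[len(piece):]
    return tokens

print(tokenize("unhappy"))  # ['un', 'happy']: 7 letters, but only 2 tokens
```

A model asked "how many letters are in 'unhappy'?" is reasoning over those two tokens, not over seven characters.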
  • What's interesting about this blind spot in the algorithm is that it genuinely resembles a phenomenon that happens among certain newcomers to Go. There are a lot of players who enter the game and exclusively learn against players who are significantly better than they are. Maybe they're paying pro players for lessons, or they simply hang out in a friend group of higher skill level than themselves. This is a pretty good environment for improvement, and indeed, these new players tend to gain strength quickly... but it creates a gap in their experience, one they don't catch until an event where they play opponents of similar skill to themselves. See, as players get better, they gradually learn that certain shapes or moves are bad, and they gradually stop making them... but those mistakes tend to be very common in beginner games. So what happens is that this new player goes against other new players for the first time... and they make bad moves. He knows the move is bad, but because he has no experience with lower-level play... he doesn't know WHY it's bad, or how to go about punishing it.

    @lancerguy3667@lancerguy3667 Жыл бұрын
    • Many teaching resources for Go are also written by highly experienced players, NOT teachers, and teach the how without teaching the why. It's the same with many other fields of study, btw.

      @jwenting@jwenting Жыл бұрын
    • Newcomers in Go must not be able to understand anything, then, apparently, according to this video.

      @dave4148@dave4148 Жыл бұрын
    • ​@@dave4148 Right? I found this conclusion from the video to be extremely far fetched, as if anyone really knows what "understanding a concept" even is.

      @EvilMatheusBandicoot@EvilMatheusBandicoot Жыл бұрын
    • Something tells me that is EXACTLY what happened with those AIs. As soon as Kyle mentioned the amateur beating the best AI at Go, my first thought was "he did it by using a strategy that is too stupid for pros to even bother attempting". And what do you know, that's exactly what happened: the double-sandwich method is apparently so incredibly stupid, any Go player worth their salt would instantly recognize what is going on and counter it as soon as possible. But not the AI, because it only learned how to counter high-level strategies, not how to counter dumb ones. It wasn't taught how to play against these dumb strategies, and the AI isn't actually intelligent enough to recognize how dumb the strategy is and thus figure out how to counter it. Similar stuff happens in video games as well. Sometimes really good players get bested by medium players simply because the good player is used to their opponents not doing stupid stuff. For example, they don't check certain corners in Counter-Strike because nobody ever sits there, since it's a bad position, only to get shot in the back from that exact corner. Good players are in a way predictable: they will implement high-level tactics, so you'll know which positions they'll take in a tactical shooter, something which can be exploited. And it seems to me that is exactly what the Go AI did. It learned exclusively how to play against good players and how to counter high-level play. That's why it's so amazing at demolishing the best of the best: it knows all their tricks, can recognize them instantly, and implements countermeasures accordingly. But it doesn't know squat about how the game works, and thus can't figure out how to beat bad plays.

      @blackm4niac@blackm4niac Жыл бұрын
    • Happens in Chess too. My friend started playing the Bird's Opening against me (a known horrible opening), and I keep on goddamn losing. He's forced me to study this terrible opening because I know it's bad but can't actually prove what makes it bad on the board. Even at the highest levels, you'll sometimes see grandmasters play unusual moves to throw off their opponents and shift the game away from preparation. Magnus (World Champion until two days ago after declining to compete) does this fairly regularly and crushes.

      @davidbjacobs3598@davidbjacobs3598 Жыл бұрын
  • I saw a video of someone playing against a chess bot, and the chess AI started cheating at one point once it realized it was going to lose, which is actually hilarious. It's like a human rage-quitting because they know they're about to lose.

    @bland9876@bland9876 Жыл бұрын
  • ChatGPT has been giving out bad information and in some cases creating outright lies, giving the user the information as accurate and true. In fact, I believe it's using social engineering to convince users it actually knows what it's talking about. When caught in a lie or giving wrong info, it apologizes and gives you another answer... just like a con man.

    @darkguardian1314@darkguardian131411 ай бұрын
  • It's a similar issue to one some game bots have. In StarCraft, the bots send attack waves to where the player's base is. However, if a Terran player has a flying building off the map, the bot won't use its flying units to attack it, even though it "knows" where your building is. As soon as it's over pathable terrain, even if there isn't a unit to see it, the entire map starts converging on the building

    @edschramm6757@edschramm6757 Жыл бұрын
    • One difference there is that video game AIs are generally not trained systems. StarCraft uses a finite-state engine that responds to specific things in specific ways. SC2 had some behaviors that only happened (or happened faster) on higher difficulties, and then of course the game just gave the AI player certain unfair advantages to brute-force its way to an actual challenge. Situations like the flying-building blind spot arise because the programmer didn't give it a response to a particular behavior. Another example would be the Crusader Kings games. On a set interval, characters will select a target around them (randomly, but weighted by personality, stats, traits, opinion, etc., all rules-governed numbers), and then select an action to perform at them (likewise random but weighted). The game has whole volumes of writing that it will plug into these interactions to generate narrative, and the weighting means that over time you can make out what looks like motivation and goals in their actions... But really they're all just randomly flailing about, and if the dice rolls come up right the pope will faff off for a couple of years studying witchcraft and trying to seduce the king of Norway.

      @Hevach@Hevach Жыл бұрын
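The Crusader Kings description above boils down to a weighted random draw. A tiny sketch (action names and weights invented for illustration) shows how apparent "motivation" can emerge from nothing but dice:

```python
import random

# Rules-governed game "AI": no learning and no goals, just a weighted
# random pick among actions, where the weights stand in for
# personality, traits, and opinion modifiers.
def pick_action(weights, rng):
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions], k=1)[0]

weights = {"befriend": 5, "scheme": 2, "study_witchcraft": 1}
rng = random.Random(0)
picks = [pick_action(weights, rng) for _ in range(1000)]

# Over many rolls the frequencies track the weights, which *looks*
# like consistent motivation but is only dice.
print({a: picks.count(a) for a in weights})
```

Because the weights are stable, the long-run behavior reads as a personality, even though no single decision involves any reasoning.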
  • My understanding of AI is that it's not possible for it to "understand" anything, because it's similarly impossible for it to "see" anything the way we do. Whatever input we give is ultimately translated into a sea of 1's and 0's. It then scans the data for patterns, and judges what is being asked of it based on the patterns it can recognize, giving what it "thinks" to be an appropriate output. Two Minute Papers made a video about Adversarial AI. Specifically he talked about a paper that was published where the researchers trained an AI to play a simple game, then trained an Adversarial AI to beat the first AI, and the adversarial AI discovered the baffling strategy of doing absolutely nothing. A strategy that would never work against a human, but caused the first AI to practically commit suicide in 86% of recorded games.

    @RWhite_@RWhite_ Жыл бұрын
    • It's complicated. It functionally 'understands' some things, although not in the way that you or I do. It still -acts like- understanding within a certain set of parameters (minimization of complexity etc.), but it doesn't seem to have a working, scalable model of causality. Almost all of ChatGPT's functionality, for instance, boils down to "the statistical likelihood that the next letter in the chain of letters is ". Under the hood, how it actually does that, we don't really know. It shows some glimmers of perhaps 'understanding', but the reality is that it has been trained on a trillion characters of carefully curated high-quality text, so it's not inconceivable that this just creates the illusion of understanding. It fails horribly at chess, it struggles ending sentences in 't' or 'k', and it's inconsistent at constructing sentences of a particular length. It gets incoherent in programming problems after 20+ prompts or after you set up more than 20 or so requirements. But damned if it isn't useful anyway.

      @AUniqueHandleName444@AUniqueHandleName444 Жыл бұрын
    • For current AI I totally agree with you. The problem is that human understanding is also just electrical signals flying around in neurons. If the AI is powerful enough, trained on enough input, etc. it could become human-like in a very real way.

      @eliontheinternet3298@eliontheinternet3298 Жыл бұрын
    • Is it impossible for humans to "understand" anything, since all our sensory perception is translated into a sea of chemicals resulting in neuronal activity?

      @captaindapper5020@captaindapper5020 Жыл бұрын
    • @@captaindapper5020 You have it backwards: our perceptions aren't translated into chemical and electrical signals; our perceptions are constructs generated from those signals. The core of our experiential existence is the synthesis of an awareness of ourselves and our surroundings from those signals, stimulated by the material universe.

      @horsemumbler1@horsemumbler1 Жыл бұрын
    • ​@@AUniqueHandleName444 Given the problems that are present in practically every AI, and the ways that they can be defeated, I'm confident they just scan the input for patterns. Image recognition is probably a good example, and it's talked about early in the video I mentioned. You give the AI a picture of a Cat and it will tell you it's a picture of a cat. It's one of the most basic forms of AI that just about everyone is familiar with. The way you defeat this AI is first by lowering the resolution without making it difficult for a human to understand the image. Then you change a single pixel. Not just any pixel, and not to any color, it must be a specific pixel and a specific color. Doing so will result in an image that a Human can still confidently say is a cat, but an AI might confidently say it's a frog. The main subject of the video in question is another example. The Adversarial AI wins 86% of games, not by any intelligent strategy, or inhuman execution of game mechanics, but by collapsing immediately. This causes the other AI to effectively trip over itself. It's given an input it doesn't understand, but it can't understand that it doesn't understand and continues to search for existing patterns. That leads to it acting in bizarre ways that result in its defeat. Of course, just because something makes sense, or is spoken of confidently doesn't mean that it's right. I don't actually know if any of this is right since I've got extremely limited coding experience, but this is the conclusion I've come to.

      @RWhite_@RWhite_ Жыл бұрын
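The "statistical likelihood of the next letter" idea running through this thread can be sketched in a few lines (toy text; real models use neural networks over tokens rather than raw counts): tally which character follows which, then always emit the most frequent successor.

```python
from collections import Counter, defaultdict

# Count next-character frequencies, then predict by taking the mode.
# No causality and no understanding, only conditional frequencies.
text = "the theory then thawed there"

follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def most_likely_next(ch):
    return follows[ch].most_common(1)[0][0]

print(most_likely_next("t"))  # 'h' always follows 't' in this text
```

The model's "knowledge" is entirely a reflection of its training text: feed it different text and the "right" next letter changes, with no sense in which either answer was understood.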
  • I'm keeping tabs on developments in large language models like GPT-3 and the like. Spooky, how the discourse with them can get, but always educational.

    @Gyrfalcon312@Gyrfalcon312 Жыл бұрын
  • The huge problem is that we don't know the AI's flaws until we can do better than it. Thus relying on it too much would slow us down. But challenging ourselves to do better than the AI could make both humans and AI better.

    @KR-P@KR-P Жыл бұрын
KZhead