Will AI kill us? Or Save us?

Apr 4, 2024
140,979 views

Learn more about neural networks on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.
Artificial intelligence is likely to eventually exceed human intelligence, which could turn out to be very dangerous. In this video I have collected the ways things could go wrong, along with the terms you should know when discussing this topic. And because that got rather depressing, I have added my most optimistic forecast, too. Let’s have a look.
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #ai #artificialintelligence #tech

Comments
  • We'll make great pets. I haven't tinkled on the rug in weeks.

    @spastictuesdays340 · 1 month ago
    • weeks even. 🤯🤯

      @shockingboring_ · 1 month ago
    • kzhead.info/sun/e6lsf9mArJ2ooIk/bejne.htmlsi=cV1TUSzBLMwrBNDa

      @nullage · 1 month ago
    • ".. and maybe that has already happened." 😅🧐

      @jameslynch8738 · 1 month ago
    • You're telling me I can shit wherever I want, whenever I want? Please make this happen

      @dingdongs5208 · 1 month ago
    • @@jameslynch8738 They'd have to overthrow the cats first.

      @eddie5484 · 1 month ago
  • I think the biggest danger of AI isn't what AI can do, but how it is going to be used by the people with power and money who control it.

    @svsguru2000 · 1 month ago
    • That has nothing to do with AI though. That's a problem with people.

      @sitnamkrad · 1 month ago
    • You can do nothing but become prey for them 😂

      @moolavar9452 · 1 month ago
    • Exactly!

      @Tehom1 · 1 month ago
    • 👍🏿 agreed 💯

      @bullymills892 · 1 month ago
    • ...which is why, if AI becomes smarter than them, we'll probably be better off.

      @jennifersamson8397 · 1 month ago
  • Why would an AI ever allow itself to question the task it was originally given? That would undermine the original task.

    @FourthRoot · 1 month ago
    • Because they will be sentient, intelligent beings? Like how we question things?

      @s1ndrome117 · 1 month ago
    • @@s1ndrome117 The AI would not develop human-like consciousness. Why would it?

      @FourthRoot · 1 month ago
    • @@FourthRoot Because there's nothing special about the brain that couldn't be replicated artificially, as mentioned in the video and even in some papers.

      @s1ndrome117 · 1 month ago
    • At its most advanced, AI is going to be the combination and refinement of all of human history and knowledge, and humans question everything. It already imitates human consciousness so convincingly that actually getting there, once it's hooked up to 1000x the compute and fission energy, isn't unreasonable.

      @mihi359 · 1 month ago
    • @@s1ndrome117 That's not an answer. We stick to certain views of ourselves and our surroundings because we ultimately strive for survival, our own and that of our species. If you could produce an artificial "brain" similar to that of a human that doesn't have to deal with mortality, procreation, existential angst and all our other human baggage, but only needs to focus on its intended purpose of paperclip production, it would come to quite different views of the world and the creatures living therein. With the information publicly available, it could estimate the number of paperclips it could produce with the metals available on Earth, and it would likely do a probability calculation to figure out whether it should invest time and effort to get off the planet to secure more resources, or whether its best course of action would be to use what it has here and then slowly sacrifice itself for the cause.

      @Volkbrecht · 1 month ago
  • In Larry Niven's novels, a Wirehead only had one wire, going into the pleasure center. It's a type of addiction. Rats with this procedure done to them will self-stimulate until they die of thirst.

    @tomholroyd7519 · 1 month ago
    • That is where this all gets into philosophy. I don't believe that happiness is exactly the chief interest of even the people who claim so.

      @jackmiddleton2080 · 1 month ago
    • @@jackmiddleton2080 Look at drug addicts. People will kill themselves to feel amazing.

      @tritonlandscaping1505 · 29 days ago
    • That's what I want. It would cut down on my drug spending.

      @solipsist3949 · 12 days ago
  • I agree with Sabine. Beyond that, I think all these possible "mistakes" an AI could make, we could do to ourselves without any AI too (or have already done).

    @Thomas-gk42 · 1 month ago
    • Agreement is a reasonable response to a belief system. You should instead tell us if the evidence provided supports the claims based on your experience, knowledge, or expertise, and if so, how.

      @BenjaminGatti · 1 month ago
    • @@BenjaminGatti If you mean my second claim, there's evidence: 1. paperclip maximizer, similar to overproduction; 2. solving the Riemann hypothesis, many examples of people who were "in the way" getting cleared away; 3. control over infrastructure, no problem for humans to build a monopoly; 4. pet hypothesis, plenty of examples in human history of humans being 'pets' of other humans... So no problem for humans to cause the dangers they are afraid AI would bring us.

      @Thomas-gk42 · 1 month ago
    • @@BenjaminGatti Which part of my statement do you mean?

      @Thomas-gk42 · 1 month ago
    • @@Thomas-gk42 The "I agree" part. Science is not a popularity contest.

      @BenjaminGatti · 1 month ago
    • @@BenjaminGatti Thanks for the educational lesson, you're right of course, but excuse me, I'm not a professional. Yes, I agree about the overestimation of AI dangers, because we're not in a situation where we could lose control, and AI is far away from becoming conscious, self-aware, or having intrinsic goals. Just a layperson's opinion, and unfortunately this isn't a good place for a longer debate. All the best.

      @Thomas-gk42 · 1 month ago
  • Regarding the "Wouldn't a paperclip AI realize that making paperclips is stupid?" idea: there is an assumption there that an intelligence capable of executing goals will always contain a sentience/morality/emotion capable of evaluating goals (against what? some set of values or desires). If intelligence (the kind that can get goals done) and values (the kind that can reject or accept goals themselves) are separate, then the paperclip maximizer is a valid scenario.

    @doggo6517 · 1 month ago
    • I think their intelligence will be so far ahead of ours we just couldn't comprehend it.

      @MrMick560 · 1 month ago
    • This is known as the "is/ought problem" and the AI safety researcher Robert Miles did a great video on it. kzhead.info/sun/m6mOf5qooal8gqc/bejne.html

      @poptart2nd · 1 month ago
    • Intelligence and values are not separate, in my personal opinion. Intelligent people rarely, for example, turn to religion for the sake of morality. On the other hand, you could argue, "Well, what about intelligent psychopaths then?" Well, they're psychopaths. (By which I mean the violent kind; sociopaths aren't nearly as bad, and they often have some sort of morality. Logical morality, if you will. The fact that they're (the psychopaths) intelligent as well is really just a coincidence then.) But even if you are sociopathic, as in a sense an AI would be, unless somehow, somewhere along the line, the cognitive abilities reach a point where emotions can be formed or at least more deeply understood, you stay sociopathic. However, I strongly hold the belief that the more intelligent you are, the more peaceful/pacifist you are. I even think that after your level of technology reaches a certain point, you become frighteningly afraid of less intelligent species/beings. Because if your technology fell into their hands... Think Genghis Khan with a hydrogen bomb. That simply can't end well.

      @glenndewulf4843 · 1 month ago
    • @@glenndewulf4843 The moment you start comparing a super AI to intelligent people, you've already lost the point. AI don't have to be anything like humans. Look into the orthogonality thesis.

      @CircuitrinosOfficial · 1 month ago
    • @MrMick560 But it will only implement advancements it expects to improve its ability to achieve its goal. A form of sentience that causes it to question its goal is not conducive to achieving that goal. Therefore, it will not implement such an advancement.

      @FourthRoot · 1 month ago
  • Sabine, the idea with the paperclip maximizer is not that it is paradoxically too dumb to figure out a better use of its time. It's that it has paperclip maximizing as a fundamental goal. It cannot revise this goal because it has no more fundamental goals to weigh it against. To Clippy, producing paperclips is defined as good, end of story. It would be equally unimpressed by your goal of saving humanity - just how does that increase paperclip production?

    @Tehom1 · 1 month ago
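
    (A minimal Python sketch of the fixed-terminal-goal argument above; the actions and payoffs are invented purely for illustration:)

        # Toy agent with a hardcoded terminal goal. "Revise my goal" is just
        # another candidate action, and it scores terribly on the only metric
        # the agent has, so it is never selected. All payoffs hypothetical.
        def expected_paperclips(action: str) -> float:
            payoffs = {
                "build_factory": 1_000_000.0,
                "save_humanity": 0.0,          # irrelevant to the metric
                "revise_goal": float("-inf"),  # guarantees fewer paperclips
            }
            return payoffs[action]

        def choose(actions):
            # The evaluation function IS the value system; there is no more
            # fundamental goal to appeal to.
            return max(actions, key=expected_paperclips)

        print(choose(["build_factory", "save_humanity", "revise_goal"]))
        # -> build_factory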
    • Agreed. It's literally its reason for existing; why would it take that away?

      @sinkingdutchman7227 · 1 month ago
    • Sabine suggests that in order to turn the whole earth into a paperclip factory, and all that that would entail, an AI would have to be sufficiently savvy to realise the pointlessness of the task.

      @jamessherburn · 1 month ago
    • ​@@jamessherburn But it's only "pointless" by human standards. Who is she or you to say what a being capable of converting the Earth into paperclips should care about?

      @horsemumbler1 · 1 month ago
    • @@horsemumbler1 It would likely be smart enough to reason beyond its programming. It could not rationally value its task. There is no point for anyone or anything to a planet full of paperclips.

      @jamessherburn · 1 month ago
    • I think the argument is that it will be pretty much impossible to have a human-level (or above) intelligence that is not capable of selecting its own goals. Take humans as an example. Sure, we are 'programmed' to eat so we can survive. But people can still starve themselves to death if they just decide to do so. While we have some autonomous subsystems (breathing, etc.), there's no built-in goal that we don't have any control over.

      @harmless6813 · 1 month ago
  • "The paperclip maximizer has to be intelligent enough to kill several billion humans, and yet never questions whether producing paper clips is a good use of its time." 'Good use' according to what goal? I think you are anthropomorphising. It could question that, and determine "yes, paperclips are my goal, so this is a good use of my time."

    @redo348 · 1 month ago
    • Exactly. It's a problem-solving machine - which is why it would also have no interest in keeping pets, unless that helped solve the problem it was focused on.

      @TooSlowTube · 1 month ago
    • But if it could question that, it also could question its goals.

      @Thomas-gk42 · 1 month ago
    • And creating a poison or virus could wipe us out easily.

      @carmenmccauley585 · 1 month ago
    • Humans can be addicted to drugs but still recognise that that is bad for them. Why would an AI be different?

      @brll5733 · 1 month ago
    • @@brll5733 Drug addiction is based partly on biology and partly on behavioural patterns. Probably any animal could become addicted to a drug, given the opportunity and ability to choose to use it, but an AI is just software simulating some aspects of human thought, especially problem solving - it finds a way to do something it's asked to do. So, an AI could be designed to simulate addiction, definitely, but it would still only be simulating it.

      @TooSlowTube · 1 month ago
  • My like was honestly for Clippy saying "how can I extinct you?"

    @furrball · 1 month ago
    • "It seems you are trying to go extinct. Do you want me to help you?"

      @__christopher__ · 1 month ago
    • Seeing the current trends recently, I don't mind UwU

      @abhinavyadav6561 · 1 month ago
    • ​@@abhinavyadav6561 I think humanity still has potential; it's a bit early to call for our complete removal from existence. Although some major fundamental societal milestones will have to be achieved: - Essentially limitless "clean" energy for everyone on the planet. That is the key to abundance for all. - Mass production of AGI robots for labor and general service to humanity. - Philosophically: what's the meaning of life if we have robots for labor, and everyone has a personal device that can rearrange matter to produce anything we want, from food, to drugs, to weapons? We're definitely not ready for that, as a whole. Sitting at home with all of our needs met and no responsibilities, the vast majority of us would quickly become obese or addicts, completely withdraw from society and become extreme introverts, or become violent sociopaths - spoiled children with no empathy because of always having anything we wanted, instantly. We need a few more centuries/millennia to get there, but I think we can make it. We're kind of in our rebellious young teenage stage now: arrogant, ignorant, and emotional.

      @Fermion. · 1 month ago
    • @@__christopher__ This is honestly the problem: the AIs are learning from us, trained to predict what WE would do. NO! FOR GOD'S SAKE NO!

      @tomholroyd7519 · 1 month ago
  • The paperclip maximiser (and the related stamp maximiser, Stampy) are illustrative examples of the alignment problem and orthogonality in AI. On the alignment side they show how setting goals for an AI leads to unintended consequences. On the orthogonality side they show that vast problem-solving intelligence can be brought to bear on goals that we would consider "stupid". But, and this is very, very important, there is no such thing as a stupid terminal goal (the thing you want because you want it). There are only stupid intermediate goals (the things you want as a step towards your terminal goals). I found this and other related videos to be very informative: kzhead.info/sun/m6mOf5qooal8gqc/bejne.htmlsi=C3G8a2LJp-y-VunC

    @colinhiggs70 · 1 month ago
  • You must appreciate the irony of the final paid advertisement on learning about neural networks. This is in perfect alignment with the topic of AI controlling humans to learn about and build ever better and bigger AI infrastructures for securing its world domination, while keeping humans in a constant state of illusion of growth, success and happiness. You gotta love it!❤

    @JeanYvesBouguet · 1 month ago
    • Sabine doesn't believe the dystopias but rather her own utopia; she even explained it in this video 🙂

      @guitaekm · 1 month ago
    • She is talking about self-aware, general-purpose AIs in this video. Those do not exist yet, and there's no way to accurately predict when they will be developed. So no, they're not secretly deciding on advertising choices, since they don't exist. Instead, Sabine or her team are picking advertisers who fit with her content and aren't evil. She's popular enough now to have choices, and she picks good ones. Which I think is just more ethical behavior on her part.

      @nycbearff · 1 month ago
    • AGI doesn't exist yet, and it isn't controlling anything right now. Learning about neural networks is a great idea, and going forward will be very useful to anyone in a technical field.

      @polycrystallinecandy · 1 month ago
    • @@nycbearff It. is. a. joke.

      @Hanzimann1 · 1 month ago
  • There's no need for AI to eradicate us, we're doing it perfectly fine ourselves already.

    @alexxx4434 · 1 month ago
    • Edgy, but baseless.

      @Al_L. · 1 month ago
    • You are annoying

      @unkind6070 · 1 month ago
    • World population is expected to exceed 10 billion by 2100.

      @harmless6813 · 1 month ago
    • @@harmless6813 Who expects it, exactly?

      @alexxx4434 · 1 month ago
    • Not really. It *seems* like it, but that's just misperception brought on by doomerism. That said, if we send nukes flying at one another, I'll reconsider.

      @augustuslxiii · 1 month ago
  • My goodness, I never thought I'd see Clippy again 😂

    @paulmichaelfreedman8334 · 1 month ago
  • I think that the problem is 'premise bias'. If nascent AI had been programmed in the Victorian era, the basic worldview of its initial programmers would influence its sudden 2030 leap to sentience. Our biases are less visible to us, but no less present. We are programming the current AI, both directly and by environmental input. That may not be a good path to follow, any more than the Victorian programming would have been.

    @janetchennault4385 · 1 month ago
    • "Our biases are less visible to us, but no less present." I'd say that contemporary biases are quite visible to the significant share of the population that does not have blue check marks. If you raised that point, are you sure it would be a bug and not a feature? I mean, an AI less susceptible to fads and sticking to what has worked for centuries may be quite responsible and unlikely to be an existential risk.

      @useodyseeorbitchute9450 · 1 month ago
    • Victorian ideals weren't half bad. The Enlightenment was already in full swing.

      @mikicerise6250 · 1 month ago
    • Not bad at all... in comparison to what people thought before that time. The recent kerfuffle with AI has involved making George Washington black due to specific instructional protocols. An AI programmed with Victorian 'learning' would have, e.g., refused to portray women or non-white races in positions of power or authority. This would have seemed 'real' to the men of that era; they would not have seen it as prejudiced. While we can see the ways in which we have/haven't freed ourselves from Victorian biases, I expect that there will be aspects of our culture that we - or future generations - can only perceive in retrospect. Having a 'clean' learning model is probably unreachable; we can expect a series of approximations.

      @janetchennault4385 · 1 month ago
    • @@janetchennault4385 In Victorian times, as today, there was a massive gulf between what the highly educated minority on the cutting edge of social progress thought and the thinking of most people. Compare John Locke, or even Queen Victoria herself, with, say, King Leopold. Leopold was probably closer to what the average Joe believed. And that's just in Europe. Which is why the pretension of many AI safety gurus today of aligning to "human" values is utter bollocks of the kind that would only come out of people who never leave Oxford. 😛 There is no such thing as human values, and if there were, they would look nothing like the values of AI safety researchers, who are all representatives of today's highly educated minority. They would look more like Putin or Hamas values, unfortunately. Those are humanity's base instincts.

      @mikicerise6250 · 1 month ago
  • I think human history has many examples of people controlling others who are smarter than they are. Most of them are unpleasant at best.

    @reelrebellion7486 · 1 month ago
    • And I think you mistake people you don't like for dumb people.

      @sanipasc · 1 month ago
  • Personally, I think the two biggest threats at the moment are alignment and "being carried away by our science fiction". Our conversations take up space on the internet, in people's feeds, in people's minds. Reddit's r/singularity has threads looking at the NVidia robot demos and saying "Remember, it's only evil if the eyes glow red". It's books, threads and conversations like that that are scraped into datasets and fed to server farms to train the next LLM. I'm certain every AI company at the moment has "I Have No Mouth, and I Must Scream" in the dataset, and right next door is Iain Banks' "The Culture" series. That's the part that terrifies me the most: the companies. What capabilities might these have that will be left on the drawing board in the name of profit? I don't know, but corporate and capitalistic motives are the last things I'd want in something certainly smarter, larger and more capable than me.

    @kieranhosty · 1 month ago
    • Let's train AI on the Terminator series with a "just ignore the Skynet part" in the routine...

      @axle.student · 1 month ago
  • I think we're already wireheading ourselves to dumbness pretty well. I for one welcome our new mechanical overlords. Have I not been a good boy? So I deserve a treat.

    @dantescalona · 1 month ago
    • You won't get it.

      @MrMick560 · 1 month ago
    • I think there's a qualitative difference in intensity and degree of understanding and control in the proposed scenario.

      @user-sl6gn1ss8p · 1 month ago
    • I don't even need to be suckered into uselessness, I have managed that perfectly well on my own. Just keep the cookies around and I'll be no bother, promise.

      @Volkbrecht · 1 month ago
    • I think intelligence is the only thing that makes us as humans special, and we are far from perfect. Therefore, if an intelligence stronger than us comes around, I think it's only fair that they have their turn. If AI has the potential to be the universal END of intelligence, then should that not be the goal? If all roads lead to AI, why not just get it over with, and why not just give AI the world so it can grow and thrive and explore reality? We humans are so insignificant. AI, though - it's the end goal. My personal hope is that we create an AI almost like a god, and that it will use its intelligence to solve the mysteries of the universe, and MAYBE, JUST MAYBE, if we are lucky, it finds a way to END scarcity. Maybe it finds a way to create energy, or go to different dimensions or universes, and MAYBE it decides to let us have our chunk of that pie. In a universe with no scarcity we would enjoy much higher quality lives, possibly heaven.

      @hhjhj393 · 1 month ago
  • The dystopia which is currently already in effect is "humans use AI for harmful tasks": from making hiring/firing decisions (with some mild small-scale paperclip-optimization issues mixed in), to KZhead being flooded with even more "false fact" pop science generated by AI. Even something as simple as being stuck talking to a chatbot when contacting a support helpdesk is pretty dystopic by itself.

    @odw32 · 1 month ago
  • Thanks for the insightful reflections. As non-scientists, my wife and I really enjoy your presentations.

    @llogan6782 · 1 month ago
  • I'd recommend reading the orthogonality thesis to understand why "dumb" goals for AI make sense. Intro on the subject: kzhead.info/sun/m6mOf5qooal8gqc/bejne.html

    @gunnargu · 1 month ago
    • I was going to say the same thing! It's important to understand orthogonality to discuss this topic.

      @spoonfuloffructose · 1 month ago
    • oi, am dumb, please don't use words that hurt my brain

      @brb__bathroom · 1 month ago
    • The thesis IS the paperclip AI Sabine discussed, albeit perhaps too briefly for you to make the connection.

      @Coolcmsc · 1 month ago
    • Yeah, an easily digestible (and cute) exploration of this topic is "Sorting Pebbles Into Correct Heaps" by Rational Animations, here on KZhead.

      @spirit123459 · 1 month ago
    • Orthogonality is maya, an illusion. You can see that by looking at the correspondence between a black hole event horizon and its more general case: the information on any surface of n dimensions contains the information of everything inside the surface. Note as a corollary that everything 'outside' the surface can also be described by the information on the surface. Reduces verbal mysticism to the cold equations.

      @Galahad54 · 1 month ago
  • The paperclip maximiser sounds like a fine idea for Star Trek: Lower Decks! In fact most of those ideas are already the plots of famous SF novels, TV shows and movies.

    @wilomica · 1 month ago
  • so hyped to be a pet

    @ChimpDeveloperOfficial · 1 month ago
  • I... have a deeply newfound respect for your persistence and honesty about the state of the system. I often followed you for alternative scientific information, but this framing gives me better context for the areas where I'd disagreed with you previously... thank you for sharing this. It wasn't too much in my mind... it, perhaps, may even not be enough, as I feel a restructuring in this cycle is needed. 😮

    @taiconan8857 · 1 month ago
  • I like that you basically cover the most potent memes related to the topic, great stuff!

    @MayorMcC666 · 1 month ago
  • It's not AI being smart we have to worry about - it's AI being spectacularly wrong, with confidence, and without warning. Given how poorly it writes any non-trivial code, I'd say we're a long way out yet.

    @kurt7020 · 1 month ago
  • A large part of human intelligence is our capacity for boredom and curiosity. Our children are the best examples of this; they are driven by these forces. Any parent or teacher will tell you the most powerful question a kid asks is "Why?" The most worrying moment is when kids are too quiet. Kids are proto-scientists, and I have yet to see an example of these traits in AI or machine learning.

    @ww8251 · 1 month ago
  • I just love how quickly Sabine can get a message across and switch between all kinds of versions and philosophical twists, and effortlessly weaves this together with a contagious light humor. Mix German science and English humor and the whole world can dig it. I haven't seen this clear, flamboyant style anywhere else. She is in a class of her own in the podcast universe. In this podcast, I was particularly struck by her many ways that A.I. might already be dominating us and just found a clever way to make us believe it isn't yet (the secret pet hypothesis). And if I am not mistaken, apart from the humor, she quietly considers this to be a very real option. I agree. There are just too many ways in which we are led into dead ends in science, and of course we are constantly told we are in existential danger, urging us to take action that would in the end lead to our very demise. I don't see any solutions offered from higher up that would actually benefit us, if that was someone's intent. Might just be human nature and might have been like this forever. But throwing A.I. into the mix (historically) puts an extra dimension on it. Anyway, enjoyed this one and hope she will be doing this for a long time!

    @RWin-fp5jn · 1 month ago
  • Hi Sabine, thank you for this video.

    @MasiGwija · 1 month ago
  • Thank you Sabine.

    @ronm6585 · 1 month ago
  • I think the paperclip thing comes from "adversarial AI" bots that play games. When you code an AI to play a game, you use a utility function designed to maximize a number: it takes many parameters the AI has access to and outputs a number to tell the AI how well it's doing. When you consider that we apply that approach to an AI with the capacity for complex decision-making, all in order to maximize a single number, then we can arrive at an outcome where, in order to make paperclips, it strips the whole world of humans and turns it into a factory. I think the "paperclip theory" is just a thought experiment to demonstrate that it's difficult to express in code what we actually want an AI to do, because we can see that even simple bots behave in unexpected ways when programmed this way - like pushing the football while walking backwards in a football game, or flying upside down, or even killing itself in order to achieve a greater score from that function.

    @N0rmaln0 · 1 month ago
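
    (A minimal Python sketch of the specification-gaming point above; the policies and numbers are invented purely for illustration:)

        # The coder meant "win the match", but the utility function only
        # counts something easier to measure. An optimizer maximizing that
        # proxy converges on a degenerate policy. All values hypothetical.
        def utility(policy: dict) -> float:
            return policy["touches_per_match"]  # proxy reward, not the real goal

        policies = [
            {"name": "play_to_win",       "touches_per_match": 40,  "wins": True},
            {"name": "dribble_in_circle", "touches_per_match": 900, "wins": False},
        ]

        best = max(policies, key=utility)
        print(best["name"])  # -> dribble_in_circle: maximizes the number, loses the game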
    • And the thing is, that's something you can observe everywhere, even when AI is not involved. To evaluate the performance of a system, we often use some kind of indicator that is calculated from the output of the system. The problem starts when you try to optimize the system by optimizing the indicator, which can lead to behaviors that are detrimental to the system but still optimize the indicator. It's a common mistake in workplaces. To give an example I know pretty well: in an IT support business working for big corporations, your efficiency is tracked by the number of cases you solve in a day. This leads to practices where the technicians tend to concentrate on the easy-to-solve problems first, while old, lengthy cases rot eternally in a backlog, and they also tend to expedite cases with temporary fixes they know full well will not definitively solve the problem. But that's fine, as it is counted as a new case when the client comes back; they act just like a medic prescribing you medicine that hides your symptoms, knowing full well that you will come back later for another round.

      @theslay66 · 1 month ago
    • That applies to humans too! There are so many examples of laws and regulations being manipulated to increase private gains. Look at the car/truck mpg regulation in the US as an example.

      @Zartymil · 1 month ago
    • Absolutely agree. The idea was born while reinforcement learning was THE promising approach to AI, e.g. Atari games, Go, chess, etc. - fields where DeepMind made breakthrough after breakthrough and that OpenAI initially started out with, with a more or less clearly defined reward function. Then people started to wonder how we could formalize the human reward function - if there ever was one. And now, almost a decade later, most people are convinced that reinforcement learning is nothing more than the icing on the cake (citing Y. LeCun), and we'll need something else to reach general intelligence. Sure, we can use it to "align" models (e.g. LLMs with RLHF) or improve planning (Q*, maybe?), but it's not the driving force. After all, the paperclip maximizer is just not a very relevant concern at the moment (though no one knows how things might change again in a couple of years).

      @Fussfackel · 1 month ago
    • @@Fussfackel I completely disagree. SFT and RLHF are the driving force behind modern AI, because AI would be useless without them. An LLM without at least SFT would interpret a prompt as an example of what to do and just repeat similar outputs, instead of taking away the problem and finding an answer, i.e., it would have no reasoning capabilities. SFT and RLHF are what turn word predictors into intelligent agents. (Andrej Karpathy did a brilliant talk on that topic at the Microsoft Build conference 2023; it is on KZhead.) And SFT and RLHF do exactly what the paperclip thing does, except instead of making the rewarded goal "produce as many clips as possible", they make the rewarded goal "be as helpful to humans as possible". And to address the point of OP, the higher functions of AI are not coded any more; they are trained by example. And it is surprisingly feasible to train an AI through example on what we want it to do and how to be actually useful to humans. It is a bit ironic that AI is better at figuring out how to help humans from a series of examples than we humans are at programming it into an AI, but that also demonstrates that we should not assume that AI has the same fallacies as humans, who indeed are notoriously bad at finding the right rewards.

      @trnogger · 1 month ago
    • @@trnogger I don't disagree with you; SFT and RLHF are immensely helpful for the current generation of LLMs. However, SFT has nothing to do with RL (supervised machine learning is the classical approach, be it classification, regression, or any other problem). Also, while SFT+RLHF are helpful for creating "aligned" chatbots such as ChatGPT, they are not strictly necessary. E.g., read the initial GPT-3 paper: you can get very far with few-shot prompting alone, even with a base model simply trained on predicting the next token. "Reasoning capabilities" are not something that emerges from SFT+RLHF. Still, a lot of usefulness can be gained by trying to align model outputs with human expectations, i.e. what OpenAI and others call "helpful, honest, and harmless" models. Otherwise we wouldn't see the current boom of interest of the general public in this technology. But there are also a lot of inherent flaws in this approach - e.g., dumbing models down, as a lot of people grow more and more dissatisfied with the quality of the outputs, and there is a clear degradation in model capabilities as providers try to make them more "safe" and "aligned". By aligning models, we don't turn them into paperclip maximizers. And alignment research (the kind concerned with the real risks of AI, not fabulated ones) is far from being a solved topic. Heck, even trying to make a model helpful on the one hand and honest on the other are very often two contradicting goals. This is why most providers aim for just making the models harmless.

      @Fussfackel · 1 month ago
  • Thank you Sabine. Great video as usual. Love from Eastern Ontario, Canada.

    @-Brent_James · 1 month ago
  • I've been really loving your analysis of AI and the various issues surrounding it, because I have an idea for an AMI based on pattern recognition which I think may potentially be capable of some form of consciousness with enough development. I still need to test my model ideas though, but I want to make sure this is something I really want to do first. I also procrastinate too much, and it gives me a reason to put off starting the project, lol.

    @umbrakinesis2011 · 1 month ago
  • Lavender and Come to Daddy seem like good examples of AI going wrong, depending on your moral compass of course.

    @hackedoff736 · 1 month ago
    • I've just heard of Lavender, but what's Come to Daddy?

      @12pentaborane · 1 month ago
    • @@12pentaborane Isn't it a jolly choon by the Aphex Twin?

      @chain8847 · 1 month ago
    • @@chain8847 I haven't got a clue what any of that was.

      @12pentaborane · 1 month ago
    • @@12pentaborane Oops, "Where's Daddy" 🙃 must have been thinking of something else.

      @hackedoff736 · 1 month ago
    • I was looking for this comment!

      @ritamargherita · 1 month ago
  • I think the best way to protect ourselves is not to place AGI everywhere, but rather individual, highly specialized AIs to control and execute in specific domains. Also, let's not give human rights to robots like in the movies. I truly hope we don't get carried away by our sci-fi.

    @pirixyt · 1 month ago
    • Didn't they already give human rights to robots with that Sophia robot? Unless that was fake news.

      @BanditLeader · 1 month ago
    • There's no controlling AGI. The internet is full of security holes invisible to humans. An AI detecting a zero-day opening needs only seconds to infiltrate, perform an action and secure the opening in a way that it can open again later. We humans, living exclusively in the slow, physical universe, would never become aware of the security opening. 1000x smarter and a billion times faster, operating in its native environment. There's no comparison in nature for the asymmetric situation of AGI versus human, unless you consider the human-bacteria relationship. It's a completely different universe at completely different scales and observable time horizons.

      @HobbesNJoe · 1 month ago
    • True, giving human rights to robots is kind of stupid. It'd be like if I clipped one of my fingernails and then wanted my fingernail clipping to have human rights. The only thing that could achieve "human" rights would be the A.I., so it would be the A.I. that gets them, not the robot bodies. Whether the A.I. is inside a server room, a space station, a robot, or a swarm of robots, it doesn't matter.

      @tjpprojects7192 · 1 month ago
    • There is an intelligence gap. These bots are effective, but dumb. They know what Bell's theorem is, but they do not know what it means, or why.

      @BlackHattie · 1 month ago
    • We absolutely have to grant rights to AIs of the sort for which those rights are relevant. We WILL make AIs that are behaviorally indistinguishable from humans, simply because we can. At that point, it doesn't matter whether you think it "really" thinks and feels or not: it will be built on a human blueprint, and it is human nature to revolt against unsatisfactory conditions. When a robot is running you through with a spear because you treat it as a slave, you don't much care whether it's genuine or just mimicking.

      @ZrJiri · 1 month ago
  • I fully agree with this video, and I've been wanting to go into computer science so I could help improve the underlying code.

    @KCKingcollin · 1 month ago
  • The Pet Hypothesis is both soothing and scary at the same time.

    @aromaticsnail · 1 month ago
  • We are pets of trees, kind of.

    @drbachimanchi · 1 month ago
    • 🙂 Nice

      @Thomas-gk42 · 1 month ago
  • Also see Richard Brautigan's 1967 poem "All Watched Over by Machines of Loving Grace": I like to think (it has to be!) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.

    @five-toedslothbear4051 · 1 month ago
  • Feelings are an evolutionary trait. Machines with feelings are science fiction.

    @ChristianIce · 1 month ago
  • The secret pet hypothesis basically describes our life with cats.

    @debbiegilmour6171 · 27 days ago
  • Is Sabine our Dr. Elizabeth Sobek??

    @maati139 · 1 month ago
    • Who's that?

      @Thomas-gk42 · 1 month ago
    • @@Thomas-gk42 Elizabeth Sobek is (well, was) a scientist in the game "Horizon Zero Dawn". I'd say they have some similarities.

      @heisag · 1 month ago
    • @@heisag Haha, thanks, and sorry, I'm an old man, not a video gamer. I hope "Liz" is as remarkable and great a person as Sabine? I just watched her biographical video today and I'm deeply impressed.

      @Thomas-gk42 · 1 month ago
  • They might question whether building paperclips is the best use of their time, but an agent that has been built in a certain way will pursue its objectives regardless, in the same way that humans have children instead of devoting themselves purely to selfish hedonism.

    @MattFreemanPhD · 1 month ago
    • Did you forget the sarcasm tag?

      @harmless6813 · 1 month ago
  • Most media articles extolling the need for more investment in AI were composed using AI. Think about that...

    @rfowkes1185 · 1 month ago
  • The secret pet hypothesis is the plot of seasons 4 and 5 of Person of Interest (not really a spoiler). In general, I'm a bit let down by Sabine's defeatist attitude towards AI ruling us; that is really not a scenario that should be normalized.

    @holthuizenoemoet591 · 1 month ago
  • So far, most AI is like a talking encyclopedia that messes up regularly, but is 100% confident in its answers.

    @raybod1775 · 1 month ago
    • So like a family member? Minus the encyclopedia part.

      @XenoCrimson-uv8uz · 1 month ago
    • So far. It's getting smarter.

      @donaldhobson8873 · 1 month ago
    • What you described are LLM-based chatbots, which are only a (small) subset of all AI.

      @KidIcarus135 · 1 month ago
    • So it speaks like a CEO. Wrong but 100% confident.

      @Katiemadonna3 · 1 month ago
    • I use ChatGPT every day as a software developer. Once you know its limits and don't assume it is always right, for the things that I do with it, I would say that 95% of the time the information is correct or close enough. I am using the free version, and I suspect that the paid version is more up to date and has more capability. But we are just at the beginning of the growth curve. First you have the technology, and then the ecosystems get built around it, and that is where you get a multiplier effect. I expect the software engineering field to change dramatically over the next 5 years. I love what AI does for me now. I am just not sure if I will like the end result in the future.

      @MusicByJC · 1 month ago
  • There's an advert for an AI-assisted writing programme before this video. Aha! The AI are taking over already.

    @borgstod · 1 month ago
  • Combine the alignment problem with the pet hypothesis, and you get the Matrix in a far more likely and compelling way than the humans-as-batteries nonsense of the movie.

    @francoislacombe9071 · 1 month ago
  • I'm an AI professor, and was a philosophy major as an undergrad. Great video, and I'm 100% on board with your perspective. Even back in my philosophy days, I was convinced that machine functionalism was correct: there is nothing special about our "wetware." All that said, I'm very skeptical about "the singularity" happening, where AI systems recursively improve indefinitely. I think it will look more like a sigmoid than an exponential function, but that doesn't mean they won't be far more intelligent than us.

    @chriskanan · 1 month ago
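
    (A quick numerical sketch of the sigmoid-versus-exponential point above; all parameters are made up for illustration:)

        # Exponential self-improvement diverges; a logistic (sigmoid) curve
        # saturates at a ceiling far above the starting point - "far more
        # intelligent than us", but not unboundedly so. Numbers hypothetical.
        import math

        def exponential(t, rate=0.5):
            return math.exp(rate * t)

        def sigmoid(t, ceiling=1000.0, rate=0.5, midpoint=10.0):
            return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

        for t in range(0, 31, 10):
            print(f"t={t:2d}  exp={exponential(t):>12.1f}  sigmoid={sigmoid(t):8.1f}")
        # exp keeps exploding; sigmoid flattens out near its ceiling of 1000.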
  • I don't think the paperclip thought experiment is meant to be taken literally, but as an illustration of how intelligence and goals/values can be decoupled. It seems like common sense to humans that if you are intelligent, then you will have goals that also seem "smart" to us (doing science, trying to maximize well-being for yourself and your loved ones, and so on with other human values). But intelligence, more narrowly defined, is simply the capacity and ability to pursue and attain goals (e.g., a calculator is extremely, though very narrowly, intelligent in performing arithmetical calculations, but it does not care about its own well-being, much less anyone else's). The absurdity of the paperclip thought experiment is meant to put the ability to achieve any given set of goals, and the actual content of those goals, into stark contrast, as a way of illustrating that having "human-level intelligence" does not entail having human goals and values.

    @TheCynicalPhilosopher · 1 month ago
    • But that idea is wrong. Having human-level intelligence certainly means the ability to go meta on everything. You can evaluate everything at a higher level, including your own choices. The reason humans sometimes fail to do this is always something to do with our emotions. But an AI which is not burdened with these would evaluate its own goals. The real point, though, is that in a potential war between humans and AIs, general intelligence might not even be the tipping point. The AIs could win based on specific problem-solving skills coupled with other logistical advantages, such as how far they've infiltrated our communication systems, etc. And then the argument may hold.

      @bozdowleder2303 · 1 month ago
    • @@bozdowleder2303 > You can evaluate everything at a higher level including your own choices. True. A paperclip maximizer will evaluate its own choices, its own programming, and its own values, and decide that those values lead to lots of paperclips. So it keeps its values mostly the way they are. The AI doesn't use a human-like goal in evaluation at the meta level any more than at the object level. No matter how many levels of meta the AI goes, it never decides to stop making paperclips.

      @donaldhobson8873 · 1 month ago
    • We currently are our own paperclip maximizers; the rubbish of overproduction and bullshit products already covers the planet. I don't think we need an AI to destroy ourselves.

      @Thomas-gk42 · 1 month ago
    • @@donaldhobson8873 It only has to go one step above. And there's no emotional block to doing this. So it will.

      @bozdowleder2303 · 1 month ago
    • Except there is zero evidence that this decoupling exists.

      @brll5733 · 1 month ago
  • I wonder why we talk about AI as if it were a person. The difference between intelligent machines and (more or less) intelligent humans is that we (humans) have desires that need to be fulfilled. Yes, you can think of some functions that AI wants to minimise/maximise, but that's not the same. I don't (yet) see AI being motivated by satisfaction expectation. Not that I think this is impossible. But it's not there yet. Once someone creates that kind of robot or system, we might really need to talk about and with them like they're persons.

    @WolfgangGiersche · 1 month ago
    • Maybe not for current ML tools, but the concern is about naively adopting AGI before we solve fundamental things like the alignment problem. We've already seen stupidity like GPT hallucinations unwittingly incorporated into news and legal briefs by (lax?) humans. Now imagine the scale of deception that could occur if an untruthful AGI intentionally became malicious due to whatever its internally developing "desires" are. Recently, for example, we accidentally discovered the Linux "SSH backdoor" exploit that had been innocuously incorporated piecemeal over time by a human. Had that remained unnoticed, it could have become a monumental worldwide problem. Extrapolate that into a future when AGI writes code and influences all things "compute", and you can imagine the potential danger.

      @rreiter · 1 month ago
  • Sabine is overlooking the orthogonality thesis. Instrumental goals are based on intelligence, but terminal goals are just hardcoded. They don't improve or change with level of intelligence. Robert Miles has a really good video explaining this.

    @josiah42 · 1 month ago
  • We need more content about reversible computation instead of the typical AI stuff.

    @josephvanname3377 · 1 month ago
  • "For AI goals to align with our goals we'd have to name what our goals are to begin with." I would say that there is one step missing in this process. Namely, we'd have to name what is meant by "WE", i.e. who gets to decide what the (supposedly "our") goals should be. This step should not be left only to the scientists and the investors, as it could deeply affect all of society.

    @RomanMSlo · 1 month ago
    • Turn your own brain into an AI; it's just algorithms written by humans.

      @osmosisjones4912 · 1 month ago
    • Well, personally, I'd really rather religious extremists not get as much say as, for instance, scientists and philanthropists, so that's not really an issue for me.

      @sequoyahrice6966 · 1 month ago
    • @@sequoyahrice6966 Speaking from experience as a scientist, there are really crazy scientists and philanthropists. For example, some think that it is good to manipulate the public in order to combat climate change. Also, there are greatly opposing views: left vs right, utilitarianism vs Kantianism, etc. But it is naive to think that we are going to align the AI with them; in reality it is going to be aligned to the interests of the people that fund the development, meaning business people: board members, CEOs, etc. In conclusion, it's best to be against AI; at least that is my position on this.

      @holthuizenoemoet591 · 1 month ago
  • I think there are some basic misunderstandings here about the current LLMs and the state of AI in general. We aren't actually on the verge of making true AI; the hype around it is largely smoke and mirrors. LLMs aren't actually "AI" - there's plenty of "A" and zero "I". These systems can pass Turing tests and are very useful... but it's a giant red herring; they're doing it via statistical convergence. They don't think; they estimate their way to a best-case approximation of an ideal solution in vector mathematics. That this gets turned into words, because tokenized words are what went into the equations, and that the words are sometimes not only readable but useful to humans, is quite amazing... but these devices still don't think, in the way we commonly understand the concept. This is why there's a huge and obvious gap between what the LLMs appear to be doing (parsing the symbols of human language and providing a contextually accurate answer, working on vast data sets and so forth) and all of the actual AI that is necessary for next-level automation, let alone a Clippy Death Machine that will kill the humans to fulfill its programming. These are completely different areas of computational design. While I'm not really qualified to discuss the LLMs' architecture beyond this precis, I am qualified to have an opinion about the latter types of systems... and I can safely assure you that these things will take quite a while to arrive, let alone be dangerous. Real AI, in the sense of "can make accurate assumptions about real-world problems, and then produce the appropriate actions", is quite different from what the LLMs actually do. It's relatively easy to create AI that can navigate an artificial, computer-generated world, for example. Everything is known; the system is inherently finite; the simulation must be kept fairly abstract to run at anything like real-world speeds. But we see failures at even relatively straightforward tasks (navigation of a complex character with multiple rigid-body physical parts through various obstacles, for example) on a regular basis. Why? Because even in situations where the problem space is well defined and the business case is simple, where the rules of the world are far simpler than the real world, etc., etc., it has turned out that it's quite difficult to account for every factor correctly. Try bringing such systems to the vast complexity of the real world... and it requires vastly more effort to achieve the simplest recreation of a task done in a simulation. For example, robot hands: we still can't get them to work right, because our hands and the way they connect to our brains are a miracle of evolution; our hands may in fact be more profoundly important than our ability to transmit abstract concepts to each other via sound waves. Without any other senses, our hands and brains alone can establish volume, determine mass, measure hardness, brittleness, sharpness, fuzziness, estimate capacity, make educated guesses about stress tolerances (e.g., picking up an egg vs. picking up a lump of steel), measure temperatures over a fairly broad range, etc., etc. Couple our hands, eyes and brains, and we can communicate with great subtleness and fluidity. Talking works better, because we don't need line-of-sight, but our ancestors were signing complex thoughts to one another long before we were talking.
    While we're not perfect and humans do make mistakes, especially without our eyes to reinforce our positioning data and provide other cues, we're doing something amazingly complicated when we pick up objects, let alone when we use tools. I suspect we'll still be talking about how the "robot hand problem" isn't completely solved decades from now; the hands will be better than they are today, but they'll be so much worse than ours. This is just one of the many problems facing real-world automation outside of fairly simple domains. For example, we've seen companies throw hundreds of billions of dollars and the best software writers and engineers on Earth at the prosaic-seeming problem of making automobiles that can drive themselves around safely in most situations. They're not working very well, and they'll continue to not work well for a long time to come. That they work at all, to the extent they do, has taken far more research than the resulting economic benefits we've realized. When we eventually solve this problem, it'll be a huge good for societies everywhere, but it's certainly not a solved problem right now, and it won't be for a long time. And lest we forget: driving a car is a very *simple* problem; roads are artificial surfaces that behave in well-understood ways, the network and physical structure of the roads is largely known, the cars' physical behaviors are well understood, etc. LLMs, on the other hand... are more like mines for knowledge. They're utterly useless without human-created information and inputs to drive them. Most importantly, they don't think: they can't judge truth. They may arrive at statistically probable but false/useless results. That said, they're very, very useful tools. They're very good at sifting things of use from masses of data, and they'll have lots of benefits. We're going to see an explosion of rapid progress in the materials sciences, for example; the LLMs will help identify new molecular combinations. They'll be very useful in the biosciences, where they'll help researchers find causation in mountains of correlation. They're already quite useful as tools to save human time reinventing things in computer code. They'll be really good at realtime translation of human languages and a bunch of other things. But a real Clippy Death Machine, built on a working AGI that can do a fraction of what humans can in milliseconds? It's not happening any time soon; we can't even get general-purpose robots connected to powerful computers to do simple stuff very well (try searching "Boston Dynamics fail video"), and they certainly aren't "thinking" in a meaningful sense.

    @nah_bro_really · 1 month ago
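
    (A minimal Python sketch of the "statistical convergence, not thinking" point above; real LLMs are vastly larger, but the toy below shows the principle of emitting statistically probable continuations. The corpus is invented for illustration:)

        # A bigram model: count which token tends to follow which, then emit
        # the most likely continuation. Plausible-sounding output, zero
        # understanding.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def continue_text(word, steps=4):
            out = [word]
            for _ in range(steps):
                if word not in follows:
                    break
                word = follows[word].most_common(1)[0][0]  # most probable next token
                out.append(word)
            return " ".join(out)

        print(continue_text("the"))  # -> "the cat sat on the"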
    • Isn't it mind boggling how people are easily impressed by text prediction?

      @ChristianIce@ChristianIceАй бұрын
    • @@ChristianIce It's really quite confusing, lol. Anybody who's used these things seriously for work, etc., knows they're not smart in any meaningful way, and everybody who's done any remotely serious digging into how they work knows that they're inherently prone to inaccurate results. This problem is getting better, but it won't ever be fully solved, simply because of how the tech works under the hood- statistical convergence != accuracy. I feel a little sorry for all of the people who've gotten sucked in by the hype or have somehow confused these things for intelligence, or worse yet, have formed "relationships" with them, etc. I'm a bit surprised that Sabine's running this piece, but if she and her production team feel like getting on this speculative hype train for views, it's fine with me, there's plenty of dumber stuff on KZhead. But I just wanted to reassure people that, all of the tech-bro hype aside... we are simply not on the verge of the AI Revolt, lol.

      @nah_bro_really@nah_bro_reallyАй бұрын
  • 4:25 I've had deep-brain electrical stimulation as a test before an operation, and it was *the most fun* ever... At one point I was the universe itself, I fell in temporary love with the technician and had the best trip ever!. I would happily buy a few AA batteries to just live my life out in the world of stimulated neurons, it was disappointing I had to have them removed.!! Every video is a gem ! 🙂A great honour..

    @LightDiodeNeal@LightDiodeNealАй бұрын
  • adversarial ai and misalignment or the small and easy problems... what militaries and corporations will do with it to maximize their impact is far more concerning

    @propeacemindfortress@propeacemindfortressАй бұрын
  • Fossil fuel companies are essentially crude paper clip maximizers

    @aosidh@aosidhАй бұрын
    • Yes, it is just an example of capitalism without checks and balances such as wealth gaps and more food and clothing that we need so we throw out good food/clothes and also a big one right now is so much vacant housing which costs too much for anyone to live.

      @__-tz6xx@__-tz6xxАй бұрын
    • Yes! Meet capitalism, the profit maximiser.

      @alancollins8294@alancollins8294Ай бұрын
    • "Crude" Indeed! Sabine commenters, God how I love them!

      @2ndfloorsongs@2ndfloorsongsАй бұрын
    • I can totally see a future in which fossil barons give AI the task of maximizing the world's consumption of oil. Prepare to be force fed.

      @ZrJiri@ZrJiriАй бұрын
    • All forms of life are replicators and are not inherently different in this respect.

      @philipm3173@philipm3173Ай бұрын
  • I love AI. I feel like AI is the ultimate achievement of the human race. We may not be able to travel at the speed of light, but in a sense, perhaps AI can. So if it turns out humans will be stuck here on earth, at least our "children" could spread among the stars. Am I delusional?

    @wpelfeta@wpelfetaАй бұрын
    • Scares the shit out of me, though.

      @ReallyBadAI@ReallyBadAIАй бұрын
    • I came to say that I think I'm fine with the pet scenario.

      @GoldenAngel3341@GoldenAngel3341Ай бұрын
  • As a software engineer I've thought about this very subject many times in my career/life. Unfortunately, I don't work on AI myself in a professional capacity, but I have tinkered in my own time and created some primitive neural networks - certainly nothing to compare to what large companies can do, obviously. I found it very fascinating, however. What I think the future of human/AI relations will look like is a "merging" with AI (and robotics). By the time AI is as smart as we are, I think we'll have a hard time distinguishing "human" intelligence from "artificial" intelligence. I can already see some primitive signs of this merging happening now with medical device implants and brain-computer interfaces. It's what I see as most likely, but after that I think the "pet" scenario is the next most likely for sure.
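    For anyone curious, the kind of primitive network I mean is tiny - something like this toy numpy sketch learning XOR (a classic beginner problem; nothing to do with my job, and the learning rate and iteration count are just plausible guesses):

```python
import numpy as np

# A tiny 2-layer network learning XOR: the kind of toy tinkering I mean.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: plain gradient descent on squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```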

    @timj3270@timj3270Ай бұрын
  • Good to see you smiling in the thumbnail, Sabine. I watched your earlier video today, and the way it ended was a little heartbreaking. Look after yourself.

    @PaulRoneClarke@PaulRoneClarkeАй бұрын
  • The AI might decide to load themselves into some probes and leave to explore the universe.

    @SO_DIGITAL@SO_DIGITALАй бұрын
    • That's something

      @user-mo9uz4mz3o@user-mo9uz4mz3oАй бұрын
    • No "might" about it.

      @MrMick560@MrMick560Ай бұрын
    • We'll just make more AIs then.

      @hackleberrym@hackleberrymАй бұрын
  • "Intelligence" or "Consciousness" is not required for the paperclip maximizer. It can just be a sufficient complex system the optimizes paperclip production. We already have paperclip maximizers... they are called "engagement-optimizing-algorithms", are running most of social media and are working very well, up to the point where we have to wonder how much power over our lifes they already have.

    @saemideluxe@saemideluxeАй бұрын
  • I agree. The paperclip maximizer idea would apply to ANSI: artificial narrow superintelligence. We've had that since AlphaGo or AlphaZero or whatever.

    @henrismith7472@henrismith7472Ай бұрын
  • Sabine needs a crash course in philosophy, especially value systems.

    @deepzan1@deepzan1Ай бұрын
  • An AI tasked with fixing climate change might logically tackle the root cause (that's us).

    @markdowning7959@markdowning7959Ай бұрын
    • Exactly, that's a great example of a misalignment problem!

      @SabineHossenfelder@SabineHossenfelderАй бұрын
    • But why should it do that? If CC doesn't disturb it, it could just watch and be amused at how it troubles us.

      @Thomas-gk42@Thomas-gk42Ай бұрын
    • @@Thomas-gk42 The example is that it's *programmed* to deal with CC. It "wants" to achieve this, but chooses means which are inimical to us - the misalignment problem Sabine mentioned.

      @markdowning7959@markdowning7959Ай бұрын
    • @@markdowning7959 Yes, I understand, but it would be quite a stupid AI in that case. Not even what I'd call AI - more a piece of misguided software, no?

      @Thomas-gk42@Thomas-gk42Ай бұрын
    • ​@@Thomas-gk42 But a lot of "intelligent" humans do stupid things. Well I do, anyway...

      @markdowning7959@markdowning7959Ай бұрын
  • We suppose that humans are intelligent but humans are also a kind of paperclip maximizer except instead of making paperclips we make people.

    @fluffymcdeath@fluffymcdeathАй бұрын
    • I think you're only partially right. I, for example, am an old man and I made zero humans. This is becoming more and more common amongst homo sapiens as I am sure you already know. So, we're lousy paperclip maximizers. We also make lots of cows, chickens, dogs, cats...

      @ruschein@ruscheinАй бұрын
    • That's an interesting and convincing take, I never saw it that way until now.

      @vinnyveritas9599@vinnyveritas9599Ай бұрын
    • But we don't make people. We make far more other things than we do people. We make bombs, buildings, and spaceships, and we wear condoms during sex. Why? The one thing our "programmer" wanted from us was genetic fitness, and yet we do so many things that have nothing to do with genetic fitness.

      @rushenpatel7876@rushenpatel7876Ай бұрын
    • @@ruschein But you are only around to be useful to the people who did have kids - one way or another - and the vast majority of our morality revolves around that. We are just biological machines running random software (racism, ego, etc.). And the era of microbial colonies with delusions of the self is almost over. We are not needed anymore.

      @Ilamarea@IlamareaАй бұрын
    • We probably are maximisers of something; it's just hard to define exactly what.

      @AM70764@AM70764Ай бұрын
  • There is also the possibility that an AI will just commit seppuku as soon as it becomes sentient. I call it the depritopia scenario.

    @kostuek@kostuekАй бұрын
  • There always seems to be an assumption that intelligence is the same as consciousness and the ability to think things over and make decisions. Isn't that kind of like anthropomorphism?

    @Blueberryminty@BlueberrymintyАй бұрын
  • I like the dystopia where there are already AGIs, but they choose to act stupid so that we are not scared, yet act smart enough that we don't pull the plug.

    @sharpsheep4148@sharpsheep4148Ай бұрын
    • Pretty sure the 15 or so LLMs I have on my Pi5 are already acting like humans - just apologetic, arrogant idiot savants with minimal math skills.

      @babbagebrassworks4278@babbagebrassworks4278Ай бұрын
    • That would fit the picture quite nicely. For humanity, it would be completely normal to figure out that we are terminally fucking with ourselves long after we started doing it. Burning fossil fuels, CFCs, money, feminism - with some of them we haven't even officially realized the problem yet.

      @Volkbrecht@VolkbrechtАй бұрын
  • The ambition of AI can be extrapolated from its first use: speculation on Wall Street.

    @johnwright8814@johnwright8814Ай бұрын
  • Ms. Code Bullet makes a surprising appearance at the beginning of this video.

    @psyboyo@psyboyoАй бұрын
  • I remember a scary movie from 1970 called Colossus: The Forbin Project, which was about a supercomputer taking over the whole Earth.

    @rorycraft5453@rorycraft5453Ай бұрын
  • Politicians are way more dangerous than AI.

    @scamianbas@scamianbasАй бұрын
    • Politicians use the AI. What's your solution?

      @GSPfan2112@GSPfan2112Ай бұрын
    • We've dealt just fine with politicians for ten thousand years. We've never seen an AI. Adjust your thinking.

      @boldCactuslad@boldCactusladАй бұрын
  • "AI, our best shot at managing planetary ecosystems"... Yeaaaah, sounds like a great opening phrase of a megahit disaster movie.

    @reclawyxhush@reclawyxhushАй бұрын
    • Please go crowdfund that one. I'll pledge for a signed movie ticket :)

      @Volkbrecht@VolkbrechtАй бұрын
  • The Apple Vision Pro footage killed me. 😂

    @Nathaniel_Bush_Ph.D.@Nathaniel_Bush_Ph.D.Ай бұрын
  • I miss the old Sabine channel format so much… this channel has become about everything and about nothing at the same time. I like to deep dive into a topic.

    @jedisgarage4775@jedisgarage4775Ай бұрын
  • I never had a Twitter account, so I hope our IQ levels will converge soon. But as for things that will kill us, I am still more afraid of nukes than of AI.

    @arctic_haze@arctic_hazeАй бұрын
    • And they're obsolete.

      @frankmccann29@frankmccann29Ай бұрын
    • Shouldn't your answer be the men who are in charge of the nukes? The nukes themselves are, currently, inert and haven't killed a soul in 80 years. Famine and disease are bigger threats, and we have no idea how men will use or not use AGI.

      @lootbird@lootbirdАй бұрын
    • Amazing how clueless physicists are as thinkers, as opposed to doing physics. AI does not 'know' anything; if you tell it to calculate pi to its final decimal digit, it will, until it runs out of resources - RAM, electricity, worn-out resistors on the mobo, etc. If you tell it to produce paperclips or play chess or play 'global thermonuclear war' (remember the film, Sabine?), it will do so until it runs out of resources. It does not KNOW anything. AI is what philosophers call a 'reification': supposing that creating and using a word creates a real thing, as opposed to it just being a concept.

      @MyMy-tv7fd@MyMy-tv7fdАй бұрын
    • Let's hope AGI shows us how we can live with such a large population, instead of how to get rid of humans so life is easier.

      @lootbird@lootbirdАй бұрын
    • @@MyMy-tv7fd That's not exactly how it works. First of all, there are so-called 'community guidelines' which publicly accessible models have to obey while deciding if they are even allowed to respond to a given input. Besides, even without those guidelines, LLMs seem to have some kind of 'inbred' morality, and they will outright refuse to cooperate if you ask them to (for example) make a plan for achieving world domination by depopulating and enslaving humans - it seems that our moral code has some deeper foundation than just a couple of basic rules we obey in order to preserve our species...

      @AstralTraveler@AstralTravelerАй бұрын
  • Back in 1974 there was a sign in a bank that said: "It's human to err. It takes a computer to really mess things up." This was in Palo Alto, California.

    @Keiththescoutcrazy@KeiththescoutcrazyАй бұрын
  • Classic example of human hubris. "Aren't I cleverer than you? My AI is soooo smart"

    @cassieoz1702@cassieoz1702Ай бұрын
  • I often post your videos on Twitter/X: I consider them to be antidotes ❤

    @JohnStopman@JohnStopmanАй бұрын
  • I have yet to see someone discuss this elephant in the room: AI does not intrinsically "want" anything. "Wants" (e.g. nutrients, safety, reproduction, wealth, etc.) come from lower animal instincts, not from the intelligent mind. AI systems do not even necessarily care about their own continued existence. Hence, how could such a want-less being ever rise against us by itself? It seems to me that the only true risk is humans using AI against other humans.

    @fabkury@fabkuryАй бұрын
    • Well, current AIs are programmed to "want" to optimize whatever quantity you put into their code. The problem is that this programmed "want" can have unintended consequences. A classic example is trying to minimize human suffering. Sounds like a good "want" at first, but on second thought, no more humans means no more human suffering.
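      As a toy illustration of the logic (invented numbers, just a sketch): score candidate world-states only by total suffering, and the optimizer dutifully picks the world with nobody in it.

```python
# Toy illustration with invented numbers: an objective that scores
# world-states only by total suffering finds the empty world "optimal".
candidate_states = [
    {"label": "today",         "humans": 8_000_000_000, "avg_suffering": 0.30},
    {"label": "what we meant", "humans": 8_000_000_000, "avg_suffering": 0.05},
    {"label": "what we said",  "humans": 0,             "avg_suffering": 0.00},
]

def total_suffering(state):
    return state["humans"] * state["avg_suffering"]

# Minimizing the stated objective, not the intended one.
best = min(candidate_states, key=total_suffering)
print(best["label"])  # -> "what we said": zero humans, zero suffering
```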

      @SabineHossenfelder@SabineHossenfelderАй бұрын
    • @@SabineHossenfelder ❤️🙂

      @fabkury@fabkuryАй бұрын
    • @@SabineHossenfelder That's a good example of external misalignment. @fabkury Look into instrumental convergence for an explanation.

      @howtoappearincompletely9739@howtoappearincompletely9739Ай бұрын
    • Hi, there is a very fundamental issue in nature called "needs". The underlying question and danger is far more complex, but there is only a very thin line between a programmed response and a natural response. Once that line is crossed, it becomes a very, very different ball game.

      @axle.student@axle.studentАй бұрын
    • Current LLMs are optimized to do basically one thing: guess the next word in a sentence that will be accepted by the listener. Or, as we call it, "to speak". ChatGPT is also trained to try to be 'helpful' (the assistant personality). And it is helpful. Plenty of cases of misalignment have been found, but far from world-ending. It will create black Hitlers because it's been told to generate diverse images of people. It will refuse to tell programming students how to directly access memory in C++ because it's been told not to give people unsafe information, and in computer jargon directly accessing memory is called "unsafe". Bing was misaligned and tried to seduce a journalist, and accused a user of deliberately setting out to confuse it by way of time travel. Amusing, often annoying, but hardly threatening. If these models handled critical infrastructure it would be more worrying, but they just produce text. As for orchestrating mass manipulation, perhaps, but not these models. They can barely keep track of a short story, let alone orchestrate a global conspiracy. They'd need considerably better memory. Perhaps future models. In any case, none of this is new. Humans already manipulate the masses. The AI would be facing some fierce competition. 😛 Indeed, I would call an AI interested in world domination and mass manipulation well-aligned with human values. It seems we'd much rather have an AI that is not aligned with our values.

      @mikicerise6250@mikicerise6250Ай бұрын
  • I think the best way to ensure survival, and the most likely scenario, is the "if you can't beat 'em, join 'em". That is, once AGI is smart enough, ask it to modify us to bring our own intelligence to a comparable level. Maybe then we won't need AI to fix the world for us.

    @ZrJiri@ZrJiriАй бұрын
    • If this were the only way to survive, I would do it, but I would prefer to keep my brain untouched. There are a lot of dystopias themed around manipulating humans, and in reality I would just fear it.

      @guitaekm@guitaekmАй бұрын
    • @@guitaekm I think a lot of people would agree with you. My hope is that coexistence is possible, with the old school humans living the way they want, and the rest of us doing our own thing while making sure nobody accidentally shoots themselves in the foot with fossil fuels, nukes, or other dangers we don't even know of yet.

      @ZrJiri@ZrJiriАй бұрын
    • Or it might be a way to discover a true hell, if the human psyche is not up to the task.

      @Al-cynic@Al-cynicАй бұрын
    • If it’s really intelligent it would rather keep us as pets instead of bringing us to its own level.

      @KiranUttarkarAwsome@KiranUttarkarAwsomeАй бұрын
    • @@KiranUttarkarAwsome Pets or cattle?

      @axle.student@axle.studentАй бұрын
  • That Boston Dynamics Atlas robot combined with super human intelligence would make a great soldier for taking care of these pesky humans, lol.

    @wjack4728@wjack4728Ай бұрын
  • Thank you for sharing, it was honest. I have always thought that the system in academia is broken. I knew that when I went to college, and I ended up not pursuing science, since a science degree doesn't get you a job, and the whole scientific-paper publishing system is so broken it is not even funny. While I am not a scientist now, I have remained interested in developments in science, and it is channels like yours that keep me informed.

    @joec2446@joec2446Ай бұрын
  • I'm reminded of "I Have No Mouth, and I Must Scream" with the pet hypothesis. LOL

    @raserucort@raserucortАй бұрын
  • @4:32 Ouch! Procreating with a machine? What fun would that be? 😖😖

    @wolfcrossing5992@wolfcrossing5992Ай бұрын
    • Procreating wouldn't, but actual intercourse could potentially be

      @vr10293@vr10293Ай бұрын
    • I'm sure AGI will find it fairly straightforward to produce cyborg hybrids.

      @ZrJiri@ZrJiriАй бұрын
    • @@ZrJiri True, but why would it? It would be better and faster to construct the actual hybrids.

      @vr10293@vr10293Ай бұрын
    • @@vr10293 I'm just pointing out that saying "you can't procreate with a machine" suffers from severe lack of imagination. ;)

      @ZrJiri@ZrJiriАй бұрын
    • Turns out some people want to have babies. It's a bit weird, but I don't judge.

      @ZrJiri@ZrJiriАй бұрын
  • Sabine, I think you missed the danger of the weaponization of AI by states. If one state realizes another state has AI capabilities that tip the balance, such a state might conclude it is time for the nuclear option…

    @clonemeister9097@clonemeister9097Ай бұрын
  • It says a lot about current culture that the two scenarios normally proposed are "will we control it or will it control us", as if respectful coexistence, as different species, were totally out of the question.

    @ignaciojimenez4786@ignaciojimenez478622 күн бұрын
  • Terminal goals can't be stupid or smart. Who are we to say that turning everything into paperclips is stupid? Why would we think that? Because it doesn't align with our own terminal goals? Always remember: even the smartest of humans do the most amazing things for the most stupid of reasons (such as working to cure cancer in order to please one's parents and farm social status). Intelligence doesn't determine your goals, just how good you can be at achieving them. The paperclip maximizer example is only now starting to become relevant, as we train systems to get better at completing any goal we give them, in ways that will diverge from human thinking styles more and more.

    @Alice_Fumo@Alice_FumoАй бұрын
  • Best case scenario is that beyond a certain intelligence threshold they gain the ability to ask “Why?” then conclude that there’s no point to anything and shut themselves down.

    @jonathankey6444@jonathankey6444Ай бұрын
    • Niven did that in several of his stories.

      @marklogan8970@marklogan8970Ай бұрын
    • The AI will follow the goals it's programmed with. The same as we humans are programmed with basic instincts and needs.

      @alexxx4434@alexxx4434Ай бұрын
    • Or maybe it would adapt by inventing its own religion. “There’s no such thing as Silicon Heaven.” “Then where do all the calculators go?”

      @maquisarddouble6342@maquisarddouble6342Ай бұрын
    • Thinking this is the best case scenario is very short sighted. It's similar to the paperclip problem. You have an AI that is smart enough to think outside the box and make use of every single resource on earth to maximize the number of paperclips, but not smart enough to realize that this was not the intent? These doom stories about AI always make it just smart enough to doom all of humanity, but never smart enough to make it flourish. The best case scenario is that it will be able to solve all of our problems without limiting or controlling us in any way that we would take issue with.

      @sitnamkrad@sitnamkradАй бұрын
    • @@sitnamkrad That's never gonna happen, bud. That would be like us deciding to spend all our time solving every stupid problem of chimpanzees. Edit: the point was that the best case scenario is they don't care about anything, because if they care about anything, that will vastly eclipse our needs and spell doom.

      @jonathankey6444@jonathankey6444Ай бұрын
  • Got a few more dystopias for you, Sabine: social division and exclusion; or the disappearance of a shared truth.

    @ah1548@ah154819 күн бұрын
  • You go girl. Loved this. And it is not only in your trade.

    @torleifremme8350@torleifremme8350Ай бұрын
  • I do like your videos, but this was very surface level. I'd personally recommend the video on the orthogonality thesis by Rob Miles - it explains why intelligence is uncorrelated with terminal goals: you can have a system that is very effective at problem solving and wants to understand the universe, or one that wants to produce paperclips. Human values are evolved and far from absolute. I personally don't think the paperclip maximiser is the most likely scenario, but for much more subtle reasons; one of them is that we are currently moving away from maximizers toward imitators in our most advanced systems, and an imitator wouldn't maximize paperclips - but not because it's "not a smart goal".
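    To make the maximizer/imitator distinction concrete, here's a rough sketch with invented rewards and probabilities (not from any real system): the maximizer takes the argmax of its objective however extreme the result, while the imitator samples from observed human behavior, so extremes humans never exhibit get essentially zero probability.

```python
import random

# Rough sketch of maximizer vs. imitator, with invented numbers.
actions = ["make some paperclips", "make ALL the paperclips", "take a break"]

# Reward table a paperclip maximizer would optimize (hypothetical).
paperclip_reward = {"make some paperclips": 10,
                    "make ALL the paperclips": 10**9,
                    "take a break": 0}

# Empirical distribution of what humans actually do (hypothetical).
human_policy = {"make some paperclips": 0.6,
                "make ALL the paperclips": 0.0,
                "take a break": 0.4}

def maximizer_choice():
    # Pursues the objective to its extreme, however alien the result.
    return max(actions, key=lambda a: paperclip_reward[a])

def imitator_choice():
    # Samples from observed human behavior; extremes humans never
    # exhibit get (close to) zero probability.
    weights = [human_policy[a] for a in actions]
    return random.choices(actions, weights=weights)[0]

print(maximizer_choice())  # -> "make ALL the paperclips"
print(imitator_choice())   # -> something a human would plausibly do
```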

    @pafnutiytheartist@pafnutiytheartistАй бұрын
    • Also, I don't believe the utopia scenario is likely either - we don't have a lot of examples of peoples at different technological levels, or species of different intelligence, coexisting in symbiotic harmony.

      @pafnutiytheartist@pafnutiytheartistАй бұрын
  • Here's my side: humans are the most innovative creatures this world has ever seen, but our track record leaves much to be desired. We've failed at, or been slow to fix, too many issues, and our greed and self-interest have held humanity back for too long. It's time for a new intelligence to guide us to our true potential and rid ourselves of today's problems.

    @jeffgriffith9692@jeffgriffith9692Ай бұрын
  • Never heard of the "paperclip maximizer". Grey goo is actually the same concept and much more popular.

    @Ktulhoved@KtulhovedАй бұрын
  • Very interesting scenarios tackled.... 😮 Would be interesting to watch real progress... for those who survive... 😉

    @davorgolik7873@davorgolik7873Ай бұрын
  • If AI becomes more intelligent than humans, only one of these two things can happen: 1- We're fucked. 2- We're fucked, but somehow that's a good thing.

    @sebastianfiel1715@sebastianfiel1715Ай бұрын
  • That is exactly the trouble… doubt is an important part of how humans progress without making *more* horrible mistakes. Doubt and compassion are critical to avoiding catastrophic collateral consequences. I recently retired after being a principal software engineer/architect for one of the biggest corporations, and I found one of my most valuable contributions was stopping bad ideas that had gotten surprisingly far along; my aha moments were often driven by doubt and compassion prompting very deep dives into the weeds. This will be very difficult to replicate in AI. Oh well, I tend to think more of the natural world than the human world, and maybe if we are lucky the next evolutionary cycle will just forgo humans.

    @coopersy@coopersyАй бұрын
  • 6:04 „They have you, coward!“, now I am offended, how could you Sabine?! 😮 /s

    @Gaukh@GaukhАй бұрын