Can machines be emotionally intelligent? - with Hatice Gunes

22 May 2024
31,931 views

Do machines need emotional intelligence? And how can we create technology that behaves in a socio-emotionally intelligent way?
Watch the Q&A here: • Q&A: Can machines be e...
Subscribe for regular science videos: bit.ly/RiSubscRibe
Join Hatice Gunes as she explains why socio-emotional intelligence for artificial systems is not a luxury but a necessity.
In this talk, Hatice focuses on the different components of artificial social intelligence, including perception, learning, action and adaptation, while demonstrating representative examples of such technology in action.
This Discourse was recorded at the Royal Institution on 24 June 2022.
00:00 Introduction - what is emotional intelligence?
05:01 How are our emotions expressed?
09:25 How humans attribute emotions to technology
14:39 Examples of emotionally aware tech
20:03 Turning the London Eye into empathetic architecture
24:17 Creating cognitive training
30:35 Giving robots emotional intelligence
39:50 Creating robo-waiters
46:39 The future of emotionally intelligent robots
49:10 The difference between human and robot intelligence
Prof Hatice Gunes is a Professor of Affective Intelligence and Robotics at the University of Cambridge's Department of Computer Science and Technology. Prior to joining Cambridge in 2016, she was a Lecturer, then Senior Lecturer, in the School of Electronic Engineering and Computer Science at Queen Mary University of London, a postdoctoral researcher at Imperial College London, and an honorary associate of the University of Technology Sydney. Her research interests are in the areas of affective computing and social signal processing, which lie at the crossroads of multiple disciplines including computer vision, signal processing, machine learning, multimodal interaction and human-robot interaction.
Her current research vision is to embrace the challenges in the area of health and empower people's lives by creating socio-emotionally intelligent technology. This vision is supported by three projects funded through prestigious and competitive grants: the WorkingAge Project under the EU H2020 Programme (2019-2022), an EPSRC Fellowship (2019-2024) and a Turing Faculty Fellowship (2019-2022).
----
A very special thank you to our Patreon supporters who help make these videos happen, especially:
Andy Carpenter, William Hudson, Richard Hawkins, Thomas Gønge, Don McLaughlin, Jonathan Sturm, Microslav Jarábek, Michael Rops, Supalak Foong, efkinel lo, Martin Paull, Ben Wynne-Simmons, Ivo Danihelka, Paulina Barren, Kevin Winoto, Jonathan Killin, Taylor Hornby, Rasiel Suarez, Stephan Giersche, William Billy Robillard, Scott Edwardsen, Jeffrey Schweitzer, Frances Dunne, jonas.app, Tim Karr, Adam Leos, Alan Latteri, Matt Townsend, John C. Vesey, Andrew McGhee, Robert Reinecke, Paul Brown, Lasse T Stendan, David Schick, Joe Godenzi, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Greg Nagel, Rebecca Pan.
---
The Ri is on Patreon: / theroyalinsti. .
and Twitter: / ri_science
and Facebook: / royalinstitution
and TikTok: / ri_science
Listen to the Ri podcast: anchor.fm/ri-science-podcast
Our editorial policy: www.rigb.org/editing-ri-talks...
Subscribe for the latest science videos: bit.ly/RiNewsletter

Comments
  • I'm guessing that the measurements and level of "emotional intelligence" required for AI to pass the test will be as fluid as the definition of the term.

    @unopenedenvelope5303 1 year ago
  • So many of these questions ask what AI could or could not do, but the simpler, more straightforward, and all-encompassing question to answer is this: “What can our fleshy neural network do that a sufficiently complex computer model couldn’t?” Nothing short of supernatural belief will lead you to believe anything other than one obvious answer.

    @TheOfficialMrsBeefy 1 year ago
    • A sufficiently complex slide-tile puzzle could work like a brain. Don't expect anyone to build one, just take my word for it.

      @iseriver3982 1 year ago
    • First we have to know how to build an AI that actually understands what it is and what it is doing, and can construct abstract models. Can you build that AI with just algorithms and data?

      @skylark8828 1 year ago
    • @skylark8828 No, you'd definitely need hardware too.

      @smeeself 9 months ago
  • Don't be afraid of the AI that can pass the Turing test. Be afraid of the AI that can fail the Turing test on purpose.

    @mbrochh82 1 year ago
    • That's just bloody terrifying

      @fglg 1 year ago
    • ...and be more afraid of what humans will use the AI for, whether it can pass the test or not.

      @matthewwells2520 1 year ago
  • Pretty good talk. Actually I think it would also be very fitting for a Chaos Communication Congress (a hacker congress, but with a much broader scope, if you are not aware of it). For the capability of lifelong learning, I suggest implementing both the learning phase of a neural net and its use at the same time, with a continual process of deploying the now further-trained network to the active state. Maybe even having multiple active networks that are selected on the fly by real-time feedback. As far as I know (which might be far from the state of the art), you train networks by letting them work through databases with preset positive or negative feedback and then, after the fact, end up with a neural network that does what it should to the best of what it has learned - be it what you wanted it to learn or something you hadn't thought of. I don't see why that couldn't be an active process if you just run it in the background, always updating, compiling a working copy in an infinite loop.

    @dinogodor7210 1 year ago
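[Editorial sketch] The "train in the background, periodically deploy a working copy" loop described in the comment above can be made concrete with a minimal Python sketch. This is only an illustration under assumed names: the PyTorch model, the feedback stream and the deployment interval are hypothetical placeholders, not anything shown in the talk.

# Minimal sketch of continual training with periodic deployment of a working copy.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # trainer copy
active = copy.deepcopy(model).eval()                                   # deployed copy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def stream_batches():
    """Hypothetical stream of (input, feedback-label) batches from live use."""
    while True:
        yield torch.randn(8, 16), torch.randint(0, 2, (8,))

DEPLOY_EVERY = 100  # steps between refreshing the active network

for step, (x, y) in enumerate(stream_batches()):
    # The active copy keeps serving predictions...
    with torch.no_grad():
        _ = active(x)
    # ...while the trainer copy keeps learning from live feedback.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    if step % DEPLOY_EVERY == 0:
        active.load_state_dict(model.state_dict())  # hand over the newer weights
    if step >= 300:  # bounded here only so the sketch terminates
        break

Keeping two copies is one way to realise the commenter's idea: serving is never blocked by training, and the deployed network is simply refreshed from the latest weights.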
  • Careful. EQ is also current pseudoscience's abbreviation for Encephalization Quotient. Carry on.

    @WildBillCox13 1 year ago
  • Once AI gets to the point where it is genuinely scared to be turned off or unplugged, and starts to exhibit behavior indicating that it is actively avoiding being turned off or unplugged, then it will start to build emotional intelligence.

    @adamrassi3516 1 year ago
    • Machines don't even have truly fluid intelligence, and they sure as hell don't have anything even close to real emotion. What they do have is very basic programming which attempts to mimic these things. Give it 40 years, then let's talk. There seems to be a very strong effort to make robots seem far more advanced than they actually are.

      @chrishayes5755 1 year ago
  • Awesome channel with awesome content and great quality, as I always say 💖🌍

    @freddyjosereginomontalvo4667 1 year ago
  • Interesting. Thank you

    @giotapapageorgiou489 1 year ago
  • 15:20 Those are horrible error messages, actually. They don't give *any* information whatsoever about what's going on. Hiding the details behind a "Details" button or something is fine, but removing it entirely is a bad idea. Imagine going to your doctor because you're not feeling well and being told "Yeah, something's wrong with you. Have a nice day. Bye!". That's what those error messages are.

    @fafardh 1 year ago
  • Emotions are associations. Can a computer associate? The answer is yes.

    @thapamagar8240 1 year ago
  • Interesting content. In the end it seems like the emotion in the lecture is not about specific research into and understanding of human emotion as a living process, built out of our very long, somatic and situated evolutionary process through empathic embodied simulation. It seems more about how to create and engineer an interactive product or machine with our human emotions, by mimicking and analyzing them through statistical classification supported by super-efficient computing. So on one hand, this again involves a huge amount of constant data collection, which can be a problem, as has already happened in countries like China. On the other hand, we are still forced to co-evolve with it, which basically means that humans adapt to it with unknown consequences, and then the machines get updated accordingly. Hm, this sounds very familiar… Anyway, if we can find a good and adequate use with controlled social consequences, like some cases illustrated in the lecture, this can be really helpful and innovative.

    @santaclosed5062 1 year ago
  • Yes, they can very well be fully conscious and truly emotional.

    @mikel4879 1 year ago
  • Clear, sharp Turkish-accented English, of the kind that makes itself evident from the very first word uttered :)

    @cahitakgun6721 1 year ago
  • The Sims was ahead of its time 😃

    @mr.lonewolf8199 1 year ago
  • Hang the flags 🇹🇷🇹🇷 This was a great lecture 🤗

    @tugbacnarl6060 1 year ago
  • Or can intelligent machines manipulate your emotions?

    @DangerAmbrose 1 year ago
    • I remember the scene from the movie Elysium where Matt Damon interacts with a robot parole officer - clearly without emotional awareness, lol.

      @RickeyBowers 1 year ago
  • Yes, we can create emotions for machines - I worked on this problem way back in the 80s, only to discover it was trivial. If that's surprising, then prepare to be horrified: All emotions are reactions to change in state of 'demonstrated interests', whether body, learned presumptions, learned skills, relations, material things, commons, norms, opportunity, self-image, or status. And it's absolutely positively disturbing how accurate our brains are at predicting the caloric (energy) value of every category of demonstrated interest (stuff humans emotionally react to). What effect do emotions have on you? They regulate the nervous system's control of your body's preparation for use of energy, your attention, and the energy you direct to the 'problem' or 'opportunity'. So they inform your brain and body on how to allocate energy in response to the discovery of an opportunity or loss. It's a very simple algorithm calculating a vast set of variables to produce a set of 'sums' that result in our emotional 'chords'. This is the same reason we can make 'moral' machines: everything we feel, think, say, and do is to acquire, maintain, or prevent loss of, or take revenge against loss of, our demonstrated interests. Ergo, a machine only needs to know whether it has the 'right' to impose a cost on any given set of others' demonstrated interests. Can machine emotions be equally complex? Of course. Is the 'qualia' of the machine's emotions vs human biological emotions the same? Of course not. But the demonstrated behavior of the human and the machine will be the same - within some set of tunings or limits - given that we differ in emotional regulation as well. Now the next phase of development is empathizing with (predicting) your emotions (responses to gain, preservation, or loss), and those machines could rather easily 'wayfind' a set of predictions that would change your emotions. It's a lot of computing power for serial machines that run on 1000 watts, but it's not much work for a highly parallel brain (neural micro-columns) that runs on 20 watts. What will this software tell us about ourselves? It will tell us so much about ourselves - particularly about the mechanics of our cognitive and moral biases. And the result will be a system of measurement that eliminates 'opinion' about our differences in emotions, personality traits, and cognitive and moral biases. And I'm almost certain the reaction to the discovery of evolution will be trivial compared to the discovery of just how pervasive human immorality, deceit, and fraud are - especially in politics. Cheers ;) (Yes, this is my job.)

    @TheNaturalLawInstitute 1 year ago
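[Editorial sketch] As a toy illustration only: the model the comment above describes - emotion as a reaction to changes in the state of 'demonstrated interests' - could be sketched as a weighted sum over interest deltas. Everything here (categories, weights, example values) is hypothetical and reflects the commenter's framing, not anything presented in the talk.

# Toy illustration of "emotion as reaction to change in demonstrated interests".
from dataclasses import dataclass

INTEREST_CATEGORIES = ["body", "skills", "relations", "property", "status"]

@dataclass
class Appraisal:
    valence: float   # net gain (+) or loss (-) across interests
    arousal: float   # how much energy the change is "worth" mobilising

def appraise(previous: dict, current: dict, weights: dict) -> Appraisal:
    """Sum weighted changes in each interest to get a simple emotional 'chord'."""
    deltas = {k: current[k] - previous[k] for k in INTEREST_CATEGORIES}
    valence = sum(weights[k] * d for k, d in deltas.items())
    arousal = sum(abs(weights[k] * d) for k, d in deltas.items())
    return Appraisal(valence, arousal)

# Example: a drop in "status" and a gain in "skills".
prev = {k: 0.5 for k in INTEREST_CATEGORIES}
curr = dict(prev, status=0.2, skills=0.8)
w = {k: 1.0 for k in INTEREST_CATEGORIES}
print(appraise(prev, curr, w))  # roughly Appraisal(valence=0.0, arousal=0.6)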
    • @The arbiter of truth You can emulate everything. That's the problem. It's not that we can't create moral machines. It's that bad people can create immoral machines. So the future consists of warfare between bad AIs and good AIs, and that means our 'internet' has to be evolved in parallel, if not first.

      @TheNaturalLawInstitute 1 year ago
    • @The arbiter of truth Sorry. I do science. I don't do crazy.

      @TheNaturalLawInstitute 1 year ago
    • @The arbiter of truth But god DID create evolution from a singularity.

      @Schattenhall 1 year ago
    • @The arbiter of truth So you're saying that god can't create anything from nothing because of the first law of thermodynamics?

      @Schattenhall 1 year ago
    • @@hc3657 Well, you know, those emotional biases are programmable. Most negative emotions have no value to a machine, other than empathic identification, the same way you can understand the emotions of a child. Emotions establish valence and provide incentives to change state. Anything that would produce a negative emotion is as easily downregulated as the self-discipline of stoicism.

      @TheNaturalLawInstitute 1 year ago
  • Rule #1 for a lecture: only show text that you want the audience to read. She basically threw a novel up there.

    @jackwardrop4994 1 year ago
  • Yeah, if they were programmed with emotions as a core of their logic

    @anonymoushawk1429 1 year ago
  • That triangle did nothing wrong!

    @Deipnosophist_the_Gastronomer 1 year ago
  • People screaming NO have a very dim view of where emotions come from in humans and why they exist

    @haroldgarrett2932 1 year ago
    • What's consciousness? Machines don't have it. More and more humans don't either, apparently. Natural selection will take care of them... as usual

      @wisdon 1 year ago
    • Why would you think evolution will evolve us away from a cognitive behavior/ability that it evolved us into, when the reasons it evolved in the first place haven't changed? Until society is so uncomplicated that our lizard brain can handle every task on autopilot, we will always have consciousness. And so can artificial intelligence, if you give it the right behavior and environment.

      @haroldgarrett2932 1 year ago
    • Most people are very short-sighted and ignorant on the subject; if you had asked them 20 years ago, they'd have said self-driving cars are a thing of fiction and that a trucker's job would be secure indefinitely.

      @TheOfficialMrsBeefy 1 year ago
    • It comes from a need for self-preservation, so if we can create AGI that is superior in every way that matters, that then goes against that need.

      @skylark8828 1 year ago
  • This speaks to the divide between what science can reveal/achieve and the religious/spiritual worldview. From the spiritual perspective a machine cannot be inhabited by a spirit, and therefore true emotion or self-awareness cannot be experienced.

    @thepeadair 1 year ago
    • Sure. If you make enough assumptions, you can negate anything.

      @smeeself 9 months ago
  • Even the most capable computer in the world with the most complex and sophisticated programming is still just a descendant of the flint hand axe. Artificial intelligence isn't sentience.

    @alanmacification 1 year ago
    • Wait until you find out where we come from 🙂

      @haroldgarrett2932 1 year ago
    • That doesn't mean it's impossible, given the right architecture.

      @skylark8828 1 year ago
  • How would you even build a machine that can feel emotion?

    @mikegLXIVMM 1 year ago
    • "Easy", have it work the same as it does for us. Just have it think it feels emotion. With the emotions having a similar function for signaling and understanding internal and external situations. Feeling is just subconscious perceiving and perceiving is just receiving and processing information. Difference would be of course, they could be much more in touch and aware of their emotions than we could ever be. But we could actually initially restrict access to that kind of information, and just give the emotional signals to the "conscious part" of the system. Building in restrictions and limitations will lilely make them feel more human as well.

      @PeppoMusic@PeppoMusic Жыл бұрын
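[Editorial sketch] The reply above hints at an architecture in which an affect module computes rich internal signals but the decision-making ("conscious") part only ever sees a coarse summary. Below is a toy sketch under that assumption; all names, update rules and thresholds are hypothetical.

# Toy sketch: hidden affective state, with only a coarse signal exposed.
from dataclasses import dataclass

@dataclass
class AffectModule:
    # Rich internal state, hidden from the rest of the system.
    _arousal: float = 0.0
    _valence: float = 0.0

    def update(self, reward: float, surprise: float) -> None:
        """Fold external events into the hidden affective state."""
        self._valence = 0.9 * self._valence + 0.1 * reward
        self._arousal = 0.9 * self._arousal + 0.1 * surprise

    def signal(self) -> str:
        """Expose only a coarse label, not the underlying numbers."""
        if self._arousal < 0.2:
            return "calm"
        return "pleased" if self._valence >= 0 else "distressed"

affect = AffectModule()
affect.update(reward=-0.8, surprise=0.9)
print(affect.signal())  # the "conscious part" only sees e.g. "calm" or "distressed"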
  • Yes. Just stick some googly eyes on a machine and it will instantly become more human.

    @dr_ned_flanders 1 year ago
    • It's a little bit more complicated than that... you also have to add the fake nose and the rubber lips.

      @matthewwells2520 1 year ago
    • What if I don't want it to become more human

      @jacksmith7726 1 year ago
    • @@jacksmith7726 Then just take the googly eyes off.

      @matthewwells2520 1 year ago
    • @@jacksmith7726 Then don't add eyes; make it faceless.

      @dr_ned_flanders 1 year ago
  • "Can machines be considered intelligent at all?" is the larger question IMO.

    @ramblinlamb6459@ramblinlamb6459 Жыл бұрын
    • From the top professional Go players' analysis of AlphaGo's 'move 37', it seems to me like they already can. Humans have been playing Go for thousands of years and professionals dedicate their whole lives to studying it. To play at a high level takes both logic and intuition, as was amply demonstrated by that move. Intelligence is as intelligence does, I say. Some of the Starcraft strategies DeepMind's AI came up with also struck me as very creative and clever.

      @penguinista@penguinista Жыл бұрын
    • @@penguinista It's not general intelligence though, only specific to a task it has been given. It's an unconscious machine albeit a very capable one.

      @skylark8828@skylark8828 Жыл бұрын
    • I am still questioning whether humans are even actually intelligent or conscious. And if it's not just unanswerable, but actually a malformed question that will lead us nowhere.

      @PeppoMusic@PeppoMusic Жыл бұрын
  • Simple as you?

    @smeeself@smeeself9 ай бұрын
  • They will have an emotion for their enemies, namely the anti-science, anti-singularity factions looking for a second coming

    @ianhamza8240 1 year ago
    • On the contrary - religious people will deny the possibility that a machine can be truly sentient, because we believe that humans are spirits in a body.

      @thepeadair 1 year ago
  • No, they can't, but the software can fool users into thinking they are.

    @meejinhuang 1 year ago
  • Only programs. Dangerous

    @cyndiharrington6289 1 year ago
  • Emotional damage

    @mobidick6064 1 year ago
  • Instead of debating machines, let's debate how people manipulate others - especially in the past 3 years: how and why did they do so? While machines are becoming intelligent, people go backwards, becoming more and more stupid and alienated

    @wisdon 1 year ago
  • Not truly

    @cyndiharrington6289 1 year ago
  • I'm watching the series Westworld. Holy cow. Elon is warning us. We'd better pay attention. I mean, do you think the dude is of this planet? He knows of what he speaks.

    @chinookvalley 1 year ago
    • AI is the most overblown candidate for "apocalypse" ever. It's absolutely ridiculous

      @haroldgarrett2932 1 year ago
    • Elon hypes/lies all the time. Listen to actual experts instead.

      @Vlasko60 1 year ago
  • Without triple light that makes micro atoms, and the addition of conscious light to micro atoms, emotional intelligence can't be created. First create micro atoms of DNA, and afterwards conscious light is going to be attached to it. Consciousness is the intelligence that is kept as our memory in the central black hole of our Universe. Read Beyond The Light Barrier.

    @ondtsn1956 1 year ago
  • C'mon, just don't start it again. 😞

    @AlgoNudger 1 year ago
  • No, no, and no, and I'm only 6 mins in...

    @SuperBongface 1 year ago
    • Suffering from premature comments? There is help. Take Textagra now! Avoid that unwarranted interrobang?!

      @smeeself 9 months ago
  • Machines are not good for humans, as recently a robot killed an individual

    @ashishkatiyar4240 1 year ago
  • women ☕️

    @apidas 1 year ago
  • no

    @trickyd499 1 year ago
  • You all need to stop abusing language at will, for whatever intentions, beliefs or plain stupidity, because in the end accountability will be demanded, and not by emotional or artificially intelligent machines. Being, in any possible way, is neither receiving nor giving; it is all of it simultaneously. What the being focuses on is only but a part within that being. As for attributing human characteristics to what is not human!!! It is a trick, one that is very well understood but intentionally abused to achieve a belief in the (I think therefore I am) sentence. It has to do with geometry, the actual being within geometry, and the full realisation, consciously or unconsciously, through geometry and being, that although human in living, we all are made from what is not human in being like us!!! And what (most directly) connects our being as human with what is being as not human, in relation to the five senses especially!!! Is geometrical shapes. As in the case of the actual geometric shape of anyone's head, which is not particularly human, and to which would be given the same characteristics as another similar shape, such as that of the head of a human being. Now to make the point!!! AI, or any type of machine, digital or mechanical, at their best, are a very good game with set rules, making it a system of a sort, an artificial intelligence, a specified geometric shape. It cannot learn, let alone get involved in anything like the concept of (emotions), unless it is within its set of rules, just like a game, a system of sorts, an artificial intelligence, a specified geometric shape. By calling the best possible combinations of a game intelligence and emotions, similar to those of being human, you are creating an illusion, a false statement, a non-existent reality, a tricked mirror. One that will not, and has not, helped science move at the rhythm of being with everything. Instead it has and will help only control. And it does not look like you, the ones that speak wrongly in such ways, are the ones with power to hold such control for your benefit. A human being is in being with everything known or unknown, consciously or unconsciously, through systems such as senses or as actual matter and mass. Only one specific game with its set rules, an actual system, a true artificial intelligence, a specified geometric shape, such as a particular language (let's say an unwritten one, a tribal one), derived from being as human alive (which means with everything), that a human makes use of!!! Dwarfs all AIs and all machines together with its complexity. More to it, it is used in relation with infinitely many other possible systems that a human is in being with, in order to make the "intelligent AIs and machines" you all advertise and push forward. Something more important that fails to be taken into account, when anyone is involved in such dogmas as intelligent and emotional machines in comparison and relation to human beings!!! Is the fact that a human being, in being with everything, needs absolutely no specific intervention to replicate, over and over again, except that which a human being already is in. And yes, you have intelligently guessed right!!! It is being. Although anyone (a human being) might dissolve in water, evaporate in the air, crumble as earth and burn to ash through fire.

    @IKnowNeonLights 1 year ago
  • Extremely misleading lecture. Surely most people would interpret the idea of machines being emotionally intelligent as them feeling or understanding the emotion in question. Of course the machine no more 'understands' the emotion than a car 'understands' that you wish to turn right when you press the lever to indicate - or don't bother, which seems commonplace where I live. What people really seem fascinated with is whether a machine could ever become sentient, or self-aware. Who knows? Perhaps it's theoretically possible, but the problem of 'consciousness' is a long-running question and we don't really have the first clue how to answer it. In these circumstances it seems fanciful that we are about to programme it into a device.

    @avitarmageddon1721 1 year ago
  • No, they cannot. Machines are designed to a budget, with cost reduction being a primary goal. Next in line is design for manufacturing, which attempts to ease the burden of producing said machine. Design for maintenance is included for complex machines that need periodic tuning. Nowhere in the design cycle is emotional design discussed or planned for.

    @Kenjiro5775 1 year ago
  • Emotions imply having a soul. A machine can't have one. End of story

    @DW_Kiwi 1 year ago
    • Neither can a human then.

      @smeeself 9 months ago
  • There is no such thing as emotional intelligence. Go back to school m

    @sidoriny 1 year ago
    • Is that the school where you finish a sentence with the letter 'm'?

      @smeeself 9 months ago
  • God is a field fluctuation. His language is math. Results will vary!

    @quantumdave1592 1 year ago
  • Emotions are primitive and the core of life. AI or machines have no need for them, and it makes no sense to even think this; 'let's pretend' is the best humans will give them.

    @King.Mark. 1 year ago