MIT Introduction to Deep Learning 6.S191: Lecture 5
Deep Reinforcement Learning
Lecturer: Alexander Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline:
0:00 - Introduction
3:49 - Classes of learning problems
6:48 - Definitions
12:24 - The Q function
17:06 - Deeper into the Q function
21:32 - Deep Q Networks
29:15 - Atari results and limitations
32:42 - Policy learning algorithms
36:42 - Discrete vs continuous actions
39:48 - Training policy gradients
47:17 - RL in real life
49:55 - VISTA simulator
52:04 - AlphaGo and AlphaZero and MuZero
56:34 - Summary
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Excellent slides and explanations!
Very good work. Seen many lectures on the topic but this is by far the best one and very intuitive. Thank you for sharing.
Haha, at 19:50 William Lin, the CP legend, is answering the question :D It's so strange: I'm not even from the US and I don't study there, but I recognized an MIT student by his voice in an MIT online lecture :D
Great video! 🙏
Great as always, thanks for being consistent
Thank you very much. 😊
Wow, thank you very much! 🥰🥰😊
Amazing lecture delivery. No words to thank you for sharing this wonderful resource for free. Thanks, MIT as well.
Thank you so much
Great video!
Thanks!
great thanks for the course!❤
Thank you so much! I loved the lecture, and I'm learning so much! I'm only 16 now, but I hope I can one day get into MIT or another great university that teaches this well!
Great!
It is so clear. Thank you very much!
Thanks for explaining complex deep learning and reinforcement learning principles in such a simple manner 🙌👍
Great lecture. To be precise, at 24:37 you propose the 'target' as a function of the best action a' in some state s', but you don't explicitly define where this s' comes from. I may be mistaken, but I believe this s' is simply the state at the next step (t+1), as demonstrated in kzhead.info/sun/qqiPpMmZsImNqY0/bejne.html (at 14:45). I hope this is useful to someone.
Thanks for sharing!
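The target discussed above can be sketched numerically. This is a minimal tabular example (the Q-table and numbers are made up for illustration, not from the lecture): after taking an action in state s we observe reward r and the next state s' (the state at t+1), and the target is r plus the discounted best Q-value at s'.

```python
import numpy as np

# Tiny made-up Q-table: Q[s, a] is the current estimate for 2 states, 2 actions.
Q = np.array([[0.0, 1.0],
              [2.0, 0.5]])
gamma = 0.9  # discount factor

def q_target(r, s_prime):
    # target = r + gamma * max over a' of Q(s', a'),
    # where s_prime is the state observed at the next step (t+1)
    return r + gamma * Q[s_prime].max()

print(q_target(1.0, 1))  # 1.0 + 0.9 * max(2.0, 0.5) = 2.8
```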
Thank you so much :)
Wow, my favorite area of AI =] Can't wait to finish the lecture!
Great lecture from a great instructor.
Dude, this guy did such a good job!!!!
Great lecture
This is so great! But unfortunately, due to my limited English, I didn't understand some parts. Hopefully there will be subtitles in Indonesian or other languages in the future. Thank you very much!
you can use subtitles if you want
Thank you, Ostad Amini. But where can I find code examples for policy learning algorithms like PPO?
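The lecture doesn't cover PPO's internals, but as a starting point here is a hedged sketch of PPO's clipped surrogate objective, the core of the algorithm (the ratios and advantages below are made-up numbers, and real implementations compute them from rollouts):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s) for each sampled action;
    # advantage estimates are assumed to be computed elsewhere (e.g. GAE).
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    # PPO maximizes the element-wise minimum of the two terms;
    # returned negated so it can be minimized as a loss.
    return -np.minimum(unclipped, clipped).mean()

ratios = np.array([0.9, 1.5, 1.0])   # illustrative probability ratios
advs = np.array([1.0, 1.0, -1.0])    # illustrative advantage estimates
print(ppo_clip_loss(ratios, advs))   # ≈ -0.3667
```

The clipping caps how far a single update can push the new policy away from the old one, which is what makes PPO more stable than vanilla policy gradients.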
@ 50:00 Very impressive work, VISTA!
RL is so good for optimizing trading strategies
I recommend Sutton & Barto, "Reinforcement Learning: An Introduction". In my opinion the 1st edition is way, way better than the newer 2nd edition.
Glad to see ML can figure out what I did as an 8 year old with a stack of quarters :D
Once again, a great lecture. I have a challenge, and I wonder if you can help me. I'm currently implementing a neural network to determine customer satisfaction from a set of inputs that capture behavioural patterns (think number of complaints to our customer service, rate of usage of our services, etc.), and I'd like to know how much each input contributes to the overall satisfaction score. I imagine this would involve taking the gradient of the output node (a single one in this case) with respect to each input. Is there any lecture where you go into the details of this, both the math and the TensorFlow code? Thanks in advance!
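The gradient-of-output-with-respect-to-inputs idea in the question above can be sketched with a tiny network. This is an illustrative toy (a single sigmoid unit with made-up weights, not the commenter's model): the gradient of the score with respect to each input measures how sensitive the satisfaction score is to that input.

```python
import numpy as np

w = np.array([0.8, -0.5, 0.3])   # one weight per behavioural input (made up)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(x):
    # single output node: satisfaction score in (0, 1)
    return sigmoid(w @ x + b)

def input_gradients(x):
    s = score(x)
    # chain rule: d sigmoid(z)/dz = s * (1 - s), and dz/dx_i = w_i,
    # so the gradient of the score w.r.t. each input is s * (1 - s) * w
    return s * (1 - s) * w

x = np.array([2.0, 1.0, 3.0])    # e.g. complaints, usage rate, tenure
print(input_gradients(x))        # one sensitivity value per input
```

In a deep network the per-input gradient isn't a closed-form product like this, but autodiff (e.g. `tf.GradientTape` in TensorFlow) computes the same quantity; this family of techniques is usually called gradient-based saliency or attribution.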
Can you teach AI to play City Skylines.
54:38
Thanks for the thorough vid! I'm a bit lost at 39:31 on where the "-0.8" velocity comes from. The closest interpretation I have is: given mean = -1 and var = 0.5, the probability of the normal distribution at the mean would be about 0.8, and since you're going in the negative direction for action a, it becomes -0.8?? But that interpretation seems wrong, since the mean should indicate the direction and velocity of action a, while the probability is for computing the loss. So... what am I missing here? Thanks!
When you say "the prob of the normal distribution at the mean would be around 0.8", where did you get 0.8 from? (The maximum value of this distribution is about 0.564, at the mean.) And secondly, I think he is using -0.8 m/s just as an example (a random value you might get after mapping it back to a speed variable in your game).
@@gnikhil335 Good call! I mistook the variance for the std, my mistake. And I really should've said likelihood there. But yeah, I was just trying to figure out why he said the mean is centered at -0.8 while also showing a mean of -1 for the predicted parameters of the pdf. Are they just separate random examples, or are we using a pdf with mean = -1 and var = 0.5 to evaluate the probability when the speed is -0.8? That doesn't seem likely either, since I thought we would use the velocity with the maximum likelihood (i.e. the mean).
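The two numbers in this thread are consistent if -0.8 is read as a *sampled* action evaluated under the predicted distribution: the network outputs the parameters (mean = -1, var = 0.5, the values from the slide), an action is sampled from that Gaussian, and the policy-gradient loss uses the log-likelihood of that sampled action. A quick numeric check, assuming those slide values:

```python
import math

mu, var = -1.0, 0.5   # predicted Gaussian parameters from the slide
a = -0.8              # a sampled action (velocity in m/s), not the mean

# density of N(mu, var) evaluated at the sampled action a
pdf = math.exp(-(a - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
print(round(pdf, 4))  # ≈ 0.5421, close to the 0.564 peak since a is near mu

# training maximizes log pdf weighted by the return (REINFORCE-style)
log_likelihood = math.log(pdf)
```

So the mean does indicate the most likely action, but during training the agent samples around it (that's the exploration), and the density of the sampled action is what enters the loss.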
Your videos on this channel are much appreciated; they help me a lot, since I'm teaching everything to myself. I'm currently learning DQN and I'm stuck on one question. The only thing I don't understand is how the Q-network avoids converging to an incorrect target, given that the target network is updated much more slowly. I understand the target network is updated slowly to be more stable, but wouldn't the Q-network just converge to an incorrect target, because the network providing the estimate of future values (the target network) is updated more slowly and will therefore be inaccurate for longer? Your help would be much appreciated! Thanks again for the awesome videos!
For a longer series of lectures, you could try David Silver's. That one has more information about policy/value iteration, if that's what you're talking about. (In basic Q-learning there is only one network; I'm not sure what you mean by the target network here.)
By choosing non-optimal actions with nonzero probability, to explore strategies that might not be optimal yet but haven't been explored, like a new variation in a chess game. This is called the exploration-exploitation dilemma (or trade-off). As far as I know, the target network is otherwise not updated (apart from the discount term, which contains a Q-value).
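On the target-network question in this thread: the target network does get updated, just slowly, so it tracks the online Q-network with a lag rather than staying wrong forever. A minimal sketch of one common scheme, a soft (Polyak) update with rate tau (the weight vectors and numbers here are illustrative, not a real network):

```python
import numpy as np

online_w = np.array([1.0, 2.0])   # weights of the online Q-network (made up)
target_w = np.array([0.0, 0.0])   # target network starts out stale
tau = 0.1                          # small tau -> target moves slowly

for _ in range(50):
    # soft update: target drifts toward the online weights each step
    target_w = tau * online_w + (1 - tau) * target_w

print(target_w)  # close to online_w after enough updates
```

Because the target lags but keeps following the online network, the bootstrapped targets are briefly stale rather than permanently wrong, and that staleness is what breaks the feedback loop of chasing a target that moves with every gradient step. (The other common scheme is a hard copy of the weights every N steps.)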
👏👏
7:00
14:25
Hi Alex, could you please suggest a good online coding platform that works properly for reinforcement learning? On our local systems we're getting errors (system dependencies), and even Google Colab shows errors when using the gym library. Thanks! Your KZhead follower
Have you tried to solve those errors by installing the correct versions of the packages?
An apple with a byte❤ ✒️ fellow August 13th🤳🏿
Oh my God, he is so handsome. And your speech, lecture delivery, and fluency in RL are as awesome as your looks... 🤩 I'm focusing on the speaker more than the slides. May Allah Almighty bless you, man.
You say state-action-pear but show an apple, I AM CONFUSION! AMERICA EXPRAIN! :) Loved the lecture, really well done.
Lecture 7 ?
Lecture 7 is having some technical difficulties so it will be published tomorrow same time (10am ET) -- sorry for the delay!
@@AAmini I'm very happy to get a reply within a few minutes. Today I feel the power of MIT.
Thank you for your understanding :)
Now he knows that Q-values can be converted into probabilities?
Once again, the audio is super quiet. I had to turn the volume up to 100. Fire the audio guy lol
You all need to incorporate hard-coded trajectories, like political views, in deep learning. The system dynamics change based on political modalities.
33:13
16:03