Reinforcement Learning: Machine Learning Meets Control Theory
Reinforcement learning is a powerful technique at the intersection of machine learning and control theory, inspired by how biological systems learn to interact with their environment. In this video, we provide a high-level overview of reinforcement learning, along with leading algorithms and impressive applications.
Citable link for this video: doi.org/10.52843/cassyni.x2t0sp
@eigensteve on Twitter
eigensteve.com
databookuw.com
This video was produced at the University of Washington
%%% CHAPTERS %%%
0:00 Introduction
3:34 Reinforcement Learning Overview
7:30 Mathematics of Reinforcement Learning
12:32 Markov Decision Process
13:33 Credit Assignment Problem
15:38 Optimization Techniques for RL
18:54 Examples of Reinforcement Learning
21:50 Q-Learning
23:53 Hindsight Replay
Steve is a phenomenal lecturer, isn't he?
never seen a better one
very much so
He is!
Yessss
no, he is the most phenomenal one!! Respect
*"WELCOME BACK"*
Viewing reinforcement learning as time delayed supervised learning is a really good way of looking at it.
Indeed!
I wish I had known about this channel at the start of quarantine
I found out about the channel just as quarantine had started. It was quite the treat.
Just wanted to comment about how much I love these videos. Last year while applying for PhDs I was searching for passions. In a discussion with my friend (a computer scientist), I accidentally outlined genetic programming without knowing it. My friend told me so and I went researching. Found these videos and became enthralled. Now I have a PhD studentship in soft robotics and plan to use SINDy to help with modelling and control and honestly think that giving machines brains may be my future work too. Thanks Brunton, my passion was helped by your own.
That is amazing to hear! Helping people develop their passions is exactly why I do this!
This is THE BEST explanation on reinforcement learning over all the articles, books, or youtube videos, that I've seen so far. Period.
I still have no idea as to who could possibly dislike these videos
u
@phaZZi6461 I wanted to add a comment but 69 looks so good
Would love a full series on how we can use RL to control real-world dynamical systems!
I just cannot express how grateful I am to prof Steve Brunton for posting these videos. Waking up at 6am to watch him explain is the most satisfying thing ever. Thank you! We all are grateful.
Is there something you don't know, dude? You seem to be an expert on everything. You are such an inspiration.
Great channel! Please record more videos on the edge of reinforcement learning and control theory. Congrats on your work.
Wow! I would love to see Prof take on RL topics!
This sweet spot between control theory and machine learning definitely interests me, especially applied to astrodynamical systems. Please, continue making these videos, Professor Brunton!
The lecture was very well constructed. Well done! As an electrical engineering student trying to specialize in ML, I find that you really hit the mark when it comes to putting these tough and convoluted topics together with examples.
I have been binge-watching this channel for the past 3 hours
Every time he said "good", I felt appreciated for not giving up on a lecture whose subject is far, far away from mine, while I'm pushing myself to try and learn the concept. Thank you, Steve... much love!
Never clicked a video that fast 😆. Great content prof as always love it!
I love how you emphasize the intersection between machine learning and control (theory). That's exactly what sparks my interest in reinforcement learning!
Glad you like it! I always found this connection fascinating and a very natural way to merge the two fields.
Looks like I'm not the only one working on a video early in the morning! Really cool stuff, love the doggie!!
I am doing research on the Model-based RL for safety-critical systems; I really enjoy doing it. These are so cool. Thanks for making videos on this topic!
Mr. Brunton saves me from my final review. His lectures made those seemingly unfathomable terms crystal clear. I've just watched his videos for days and I already like him!
Those bipedals are too cute; they deserve another comment.
That's an awesome video indeed. A great introduction to RL!
Simply great subject and excellent presentation thank you prof for all your efforts
Dear Steve, I'm very, very grateful that I get to watch such extraordinarily instructive videos for free!!! Thinking that elsewhere in the world people are killing others at this moment (as in Kabul), it gives me a lot of hope seeing how people like you just make the world a little better, and it almost brings tears to my eyes. You have such great talent in teaching, thank you!
I just found out your channel and the contents you cover is a treasure to me!
I really love your content, please keep spoiling us! These were the fastest 26 minutes! I learnt a lot and I'm looking forward to the python lab implementations of these concepts! Thank you very much for your work.
Hi Professor Steve, Lovely presentation.
Theeeere we go Steve! Waited for this :)
Hi, Steve. I've been working in Fluid Mechanics for 25 years or so, always using experimental and some analytical tools to approach the subject. I had a lot of colleagues migrating to CFD back in the 2000s because those methods seemed to find valid results with "little" effort in comparison to expensive, frustrating, and time-consuming experiments. So I always disregarded CFD as a nice tool that could predict a lot of stuff that you will never know is correct or not. However, I have to say that for some time now, reinforced (see what I did there?) by new material I am studying and your papers on ML for Fluid Mechanics, I am looking at the subject with new eyes. Thank you very much for your material and the dedication you put into every video.
Professor, you're awesome. My thesis topic is deep reinforcement learning-based robotic arm torque control. I love control theory and machine learning. Thanks for your support.
Top quality. This is what they said about education on the internet, that "the best teacher can teach everyone". This is that video for this topic.
It's really interesting to watch this video. Although I have studied and read about this a few times before, how boring it was is hard to describe. Thank you, teacher.
Great video! If everyone on YouTube was as great at delivery as you, we would have a lot more passion in the area. Keep up the good work, train on!
Awesome lecture! Thanks Steve. I really enjoyed watching this!
Thank you so much for this lecture. I really enjoy your videos; this is helpful as a PhD student. I also bought your book "Data-Driven Science and Engineering", which has nice explanations of the tools I use. Keep up this awesome work! Greetings from France!
I've been waiting for this topic for a long time; your lectures are so clear. Thanks a lot.
As a CS grad student who took RL in the last semester ... this is truly the best refresher I have seen until now. Thanks a lot for uploading.
Great to hear!
You have created such high-quality content that I just really enjoy watching it instead of playing games :)))
Explained in an understandable way and RL nicely connected to control theory!
Very well illustrated! Thanks
Your series are excellent. They have a good pace and use powerful graphics to explain difficult concepts. I've watched many of your videos on my TV, which doesn't allow me to give a thumbs up. So here it is. I am not a Python programmer, but I am sure that those watching who DO use Python must have itchy fingers.
I've been following your content for at least 4 years now! It's the reason I am a robotics control engineer now; you pulled me through 4th-year control systems with your conveniently-timed boot camp. Please keep up the great content! PS: are you accepting PhD students?
That's so interesting and well explained. Thank you !
Glad you liked it!
Thank you, professor Steve Brunton. I am pleased to inform you that I am considering doing, after my master's degree in computer engineering, a PhD related to data-driven control theory, and the merit is in part also yours.
What I like is that I don't pay for this knowledge. I was planning to take a data science certificate, but you know what, let me spend 6 months learning by myself. I have spent a solid month on your videos alone, starting from the SVD, and it has been amazing. I love when a small thing builds up into a bigger thing. Soon I will make a sample project based on what I have learned from your videos.
I was waiting for this!!!
Keep up the good work, love your videos
How very mean, I was looking forward to seeing trial 7 right away. Great explanation. Thanks
phenomenal video, thank you
After watching these videos I have actually understood the concept of reinforcement learning. I might be wrong, but to me it seems it generalizes feedback loops into the more abstract concepts of agent, action, policy, environment, etc. In a feedback loop we have a control policy, which is a PID controller that controls the behaviour of the plant it is attached to. The model of the plant is the environment here, and the action is the output of the PID controller. The reward in a feedback loop is to converge to the desired output value at steady state while ignoring the transient values, so in a sense it is semi-supervised learning. The states in the feedback loop are the derivative components of the system. In noisy systems it is sometimes crucial to remove the derivative component to avoid impulsive behaviour, which corresponds to the state feedback from environment to agent in RL. Thinking like this, RL is more meaningful to me as an engineer: RL is a generalized feedback system where we try to get a desired output given some input to the system. Thank you for these video series!!
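The analogy in this comment can be sketched in a few lines of code: the standard RL agent-environment loop, with a PID controller playing the role of the policy and a tracking-error penalty playing the role of the reward. This is only an illustrative sketch; the toy first-order plant and all gain values are invented, not from the video.

```python
# A minimal sketch of the RL agent-environment loop with a PID controller
# standing in for the policy. The toy first-order plant and all gains are
# made up purely for illustration.

def make_pid_policy(kp, ki, kd, setpoint):
    """Return a 'policy' that maps a state (the plant output) to an action."""
    integral = 0.0
    prev_error = setpoint  # plant assumed to start at 0, so initial error = setpoint

    def policy(state):
        nonlocal integral, prev_error
        error = setpoint - state
        integral += error
        derivative = error - prev_error
        prev_error = error
        return kp * error + ki * integral + kd * derivative

    return policy

def plant_step(state, action, a=0.9, b=0.1):
    """Toy first-order plant x' = a*x + b*u: the 'environment'."""
    return a * state + b * action

def reward(state, setpoint):
    """Reward = negative tracking error, the RL analogue of the control objective."""
    return -abs(setpoint - state)

# Agent-environment loop: state -> policy -> action -> environment -> new state, reward
setpoint, state = 1.0, 0.0
policy = make_pid_policy(kp=2.0, ki=0.1, kd=0.5, setpoint=setpoint)
for t in range(200):
    action = policy(state)          # agent acts
    state = plant_step(state, action)  # environment responds
    r = reward(state, setpoint)        # scalar reward signal
print(f"final state ≈ {state:.3f} (setpoint {setpoint})")
```

The structural point is that nothing in the loop cares that the policy is a PID law; swapping in a learned policy that maximizes the cumulative reward turns the same loop into reinforcement learning.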
Steven lectures are great help to the society ❤
You have a nice way of explaining the topics.
Thanks!
it's brilliant ! . Keep working with this topic please
Steve is one of those gifted teachers. I wish you could guide postgraduates to make good publications in control and learning by highlighting hot topics and promising research directions.
Thanks so much!
The most fantastic lecture I've ever seen...
Perfect video, I will watch all the others in one sitting 😍
RL can be interpreted from this perspective, amazing
I've been seriously considering starting a degree in A.I./Machine learning but with videos of this quality available for free, it is hard to justify the cost. Subscribed and liked!
Just in case you read this and have time to reply... Do you have any suggestions for an education path to your level of understanding? There are degrees for data science, computer science, artificial intelligence, software engineering, etc. They all seem so inter-related. I want to know them all, but I'm struggling to pick a starting point. My current level of related education is high school level advanced maths and a year of teaching myself MQL4/5 and R code, mostly from free resources online. Just so you know my starting point (or state, haha).
Great lesson.. Thank you
Dude, you are the best lecturer. DONE
This is really really great teaching.
Amazing Clarity
Yay! Hero has decided to teach Reinforcement Learning
Excellent video ❤️
Great video, thanks.
love your lectures
A very well done lecture. Bravo! I'd like to make a suggestion, if I may, to write the policy function as pi(a|s) = Pr(A = a | S = s); A is the placeholder for an action and a is the action taken, while S is the placeholder for the state and s is the given state.
All of your lecture series are very good and very helpful. A series on convex optimization problems would be good. Any thoughts about it?
Kudos on the awesome lecture
I really like your videos. Keep up the good work! :)
Hi Steve. I am an amateur mathematician (hoping to go pro) who is really into category theory. Have you or your team ever looked at this? Usually, when you see two subjects talking about the same thing, it's a good bet that category theory is working in the background. And I just looked at category theorist Tai-Danae Bradley and her explanation of SVD in terms of category theory. Thanks! AWESOME CHANNEL!
I thought I was witnessing a breakthrough concept trying to link deterministic control theory with machine learning. But, when you mentioned the words probability and policy, I was disappointed. Looking forward to more conceptual lectures. Could also highlight real world applications. Thanks.
nice lecture sir! thanks a lot!
I really like your explanation
Thank you, professor!
thank you very much for your lesson, it is really useful to me!
Amazing feeling to watch a video after completing a project on the same topic
We also look forward to your explanation of GANs in the future
I love u, Steve! I am currently working on Machine Teaching and Project Bonsai. I really needed to know this.
Amazing work as usual!! Could you please consider doing a lecture about whole-body control for robotics?
the best one i have ever seen
Glad this content is on YouTube -- the past year kind of derailed me from going to grad school. Question: could the reward structure of a chess game be broken into incremental steps? As in, the main reward is to win, but couldn't a game be discretized into incremental rewards defined by the value of a target and the probability that a sequence of moves would capture a high-value target? Or is that just Q-learning in different words?
I think it's also important to mention the distinction between discrete and continuous action spaces.
I have been trying to teach my guys that machine learning and control theory (fuzzy autotuning) follow the same principle. This video will be used!
hey Steve, love your videos! Wondering if the videos in this playlist are in the correct order?
I like this video, but there is already a very common way to deal with the reward-density problem. It's called actor-critic, where you basically get a second network to learn what reward it is expecting (Q-learning), while the actor is a policy-gradient network. It works fine so far, and I know it's not enough to get away from the "semi-supervised" label, but let's be honest, the "semi" is what really defines the technique, because the agent needs to learn by itself what "could" be good in the future, without a supervisor. That's how animals and humans learn too, so a fully supervised agent wouldn't be exploring the world on its own anymore. Greetings, Firestorm (ETH Zurich student)
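For anyone meeting the actor-critic idea from this comment for the first time, here is a minimal sketch on a toy two-armed bandit. In real actor-critic methods both the actor and the critic are neural networks; here, for illustration only, the actor is a softmax over two preference values and the critic is a single scalar reward estimate, and all numbers are invented.

```python
import math
import random

random.seed(0)

# Toy two-armed bandit: arm 1 pays more on average than arm 0.
def pull(arm):
    return random.gauss(1.0 if arm == 1 else 0.2, 0.1)

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

h = [0.0, 0.0]   # actor: preference per arm, turned into a policy via softmax
v = 0.0          # critic: learned estimate of the expected reward (the baseline)
alpha_actor, alpha_critic = 0.1, 0.1

for step in range(2000):
    probs = softmax(h)
    arm = 0 if random.random() < probs[0] else 1
    r = pull(arm)
    # Critic evaluates: error between observed reward and its current estimate.
    delta = r - v
    v += alpha_critic * delta
    # Actor improves: policy-gradient update, scaled by the critic's error signal.
    for a in range(2):
        grad = (1.0 if a == arm else 0.0) - probs[a]
        h[a] += alpha_actor * delta * grad

probs = softmax(h)
print(f"P(arm 1) ≈ {probs[1]:.2f}")  # the policy concentrates on the better arm
```

The division of labor is the point: the critic turns a raw, noisy reward into a relative error signal ("better or worse than expected?"), and the actor only ever sees that signal, which is what lets policy learning proceed without a supervisor labeling correct actions.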
Thanks Steve.
You are very welcome!
I wish my teachers had seen your videos before trying to teach us these subjects :)
Just a question: I am viewing the Control Bootcamp playlist, and it goes from controlling dynamic systems with non-minimum phase, to control theory and COVID-19, to reinforcement learning. Is this the correct order for viewing the videos? I feel there was more to talk about on the previous topics, but maybe I am wrong. Thanks to you and your team for all your amazing videos. I finished my controls classes 3 years ago, when I found your videos, and I have been going through all your playlists and am loving it! Is there a specific job that requires/teaches these skills? The closest job to this I found was graduate automation engineer.
Prof Brunton: You are one bad-ass teacher!!!🤓
Prof. Steve. Please make videos on Immersed Boundary Methods and how PINNs can help to solve complex problems in this area.
Legendary!!!!
Love the videos
Thank you!
Thank you ❤
Hey Steve! Loved your lecture! Could you tell me what your setup is? I love your production, setup, and content of course! Some questions: 1. Do you have a screen/script in front of you and a green screen behind? 2. Which cam and mic do you use? Is it only a lav mic? I assume it's not shotgun since you're far away from any particular point of the frame. 3. How much time does it take to create a video like this one? 4. How many dry runs do you usually do? Or for this video in particular? You're setting a new standard for production (and beyond haha), keep up the good work! I'd really appreciate your answers, thank you in advance!
Thanks, glad you like it! No script, but I have a screen so I can see where I am relative to the presentation. I use a lav mic and a Canon 4K camera. I usually do everything in one run; sometimes I redo the intro a couple of times until I'm happy with it.
@@Eigensteve thanks Steve!
Wow! Thank you so much. Maybe the next lecture can be about UMAP please :D?
Amazing 🤩
It would be an honor to be supervised for a PhD by him.