In this episode, we dive into Variational Autoencoders, a class of neural networks that can learn to compress data in a completely unsupervised way!
VAEs are a very hot topic right now in unsupervised modelling of latent variables and provide a unique angle on the curse of dimensionality.
This video starts with a quick intro to normal autoencoders and then goes into VAEs and disentangled beta-VAEs.
I also touch upon related topics like learning causal latent representations, image segmentation and the reparameterization trick!
Get ready for a pretty technical episode!
Paper references:
- Disentangled VAEs (DeepMind 2016): arxiv.org/abs/1606.05579
- Applying disentangled VAEs to RL: DARLA (DeepMind 2017): arxiv.org/abs/1707.08475
- Original VAE paper (2013): arxiv.org/abs/1312.6114
If you want to support this channel, here is my patreon link:
/ arxivinsights --- You are amazing!! ;)
If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbr...
The Variational Autoencoders part starts at 5:40
You just saved five minutes of my life!
@@pouyan74 No, the first part was necessary...
@@moazalomary8123 Do you think someone would click on a video about Variational Autoencoders if they didn't know what autoencoders are?
@@selmanemohamed5146 yeah, I did... 😂😂😂 and I was lucky he explained both 😎🙌😅 plus the difference between them, and that's the important part
@Otis Rohan Interested
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
Except for "Gaussian", which is weirdly pronounced the Russian way, "khaussian" — what?
This guy does a real job of explaining things rather than hyping up things like "some other people".
are you referring to Siraj Raval? lol
@@malharjajoo7393 lol
Your way of simplifying things is truly amazing! We really need more people like you!
A really great talk! I have been reading about VAE a lot and this video helps me to understand it even better. Thanks!
The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I'm glad you mentioned 'causal', because that's probably how our brain deals with high-dimensional data. When resources are limited (corresponding to using a large beta), the best representation turns out to be a causal model. Fascinating! Thanks
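The trade-off this comment describes is just a weight (beta) on the KL term of the VAE objective. Here is a minimal NumPy sketch of that loss; the function name and the squared-error reconstruction term are my own illustrative choices, not code from the video or the paper:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction term: how well the decoder reproduces the input.
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL divergence between the encoder's N(mu, sigma^2)
    # and the unit Gaussian prior N(0, I).
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    # beta > 1 puts extra pressure on the KL term, which is what
    # encourages the sparse, disentangled codes described above.
    return recon + beta * kl
```

With beta = 1 this reduces to the ordinary VAE objective; cranking beta up trades reconstruction quality for a latent code that stays closer to the prior.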
I love your channel. A perfect amount of technicality so as to not scare off beginners, and also keep the intermediates/ experts around. Brilliant.
"You cannot push gradients through a sampling node" TensorFlow: *HOLD MY BEER!*
Great! No BS, straight and plain English! That's what I want!! :) Congratulations!
This guy was a VAE to the VAE explanation. Really need more of such explanations with the growing literature! Thanks!
I was very interested in this topic, read the paper, watched some videos, read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)
Just found your channel and I realize how with some passion and effort you explain things better than some of my professors. Of course, you don't go into too much detail but putting together the big picture comprehensively is valuable and not everyone can do it.
I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work
Your explanations are quite insightful and flawless. You are are a gifted explainer! Thanks for sharing them. Please keep sharing more.
Really liked it. Firstly giving an intuition of the concept, its application and then to the objective function while explaining its individual terms, in a way everyone can understand, it was simply professional and elegant. Nice work and thanks!
I would like to see more videos from you. Clear explanation of concept and gentle presentation of math. Great job!
Great video ! Very clear and understandable explanaitions of hard to understand topics.
Bloody nicely explained, better than the Stanford people. Subscribed to the channel. I remember watching your first video on Alpha, but didn't subscribe then; I hope there will be more content on the channel at the same level of quality, otherwise it's hard for people to stick around when the reward is sparse.
Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.
Thank you very much for supporting me man! A new video is in the making; I hope to upload it sometime next week :)
Bro this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!
Great! Crisply clear explanations in such a short time.
wait how did I not know of this channel. Beautiful explanation, perfectly clear. Thanks for the awesome work!
Best explanation found on the internet so far. Congratulations!
This is a LIT channel for watching alongside papers. Thanks
hands down this was the best autoencoder and variational autoencoder tutorial I found on Web.
Excellent video!! Probably the best VAE video I saw. Thanks a lot :)
This was very lucid. You are gifted at explaining things!
Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!
Your explanation is crisp and to the point. Thanks.
Great videos man, keep them going, you're gonna find an audience!
This was an amazing video! Thanks man. Will stay tuned for more!
Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.
Dude, what a next-level genius you are! You made these topics so easy to understand, and just look at the quality of the content. Damn bro!🎀
I had such a hard time understanding the reparameterization trick; now I finally got it. Thanks for the great explanation. Would love to see more videos from you.
Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work
Great channel! Keep up with this awesome project. Already subscribed and going to share this channel with my colleagues.
First video I see from this channel. Immediately subscribed!
Your videos are absolutely cracking for a quick revision before an interview!
Very helpful, Arxiv! Keep the good quality videos coming
Really appreciate your effort to simplify research papers for viewers. Keep it up. I want more such videos
Just found this channel ... today... one word Brilliant...!!!
I love you. I spent so long on this and couldn't understand the intuition behind it, with this video I understood immediately. Thanks
Thank you very much, this is the first time I understand the benefit of reparameterization trick.
This is really good. I like the way you explain things. Thank you for sharing!
always the best place to have a good overview before diving deeper
Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.
Great video, better than many tutoring lessons at university; the animations and the simple words really simplified things
This is sooooooo useful at 2am when you're getting dragged down by all the math in the actual paper. Thanks man for the clear explanation!
Great thanks for the video and the paper explanation! Really, really helpful, keep that paper explanation content!
Great content ! The format and delivery is perfect, hope to see more of these videos :) . Are you planning on doing a video on Capsule Networks in the future ?
More videos are definitely coming, the next one will be on novel state-of-the-art methods in Reinforcement Learning! I don't plan on making a video on Capsule Nets since there is an amazingly good video by Aurélien Géron on that topic and there's no way I can explain it any better than he did, no need to reinvent the wheel :p Here is his video: kzhead.info/sun/o7SHaMhofGVvY2g/bejne.html
Great explanation of why we actually need the reparameterization trick. Everyone just skims over it and explains the part that mu + sigma*N(0,1) = N(mu, sigma^2), but ignores why you need it. Good job!
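For anyone wondering what that identity buys you: after reparameterization the sample is a deterministic, differentiable function of the encoder outputs, with all randomness isolated in a parameter-free noise variable eps. A minimal NumPy sketch (the function name is my own; this illustrates the trick in general, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reparameterized(mu, log_var, eps):
    # z = mu + sigma * eps is deterministic in mu and log_var;
    # only eps is random, and eps carries no trainable parameters,
    # so gradients can flow back through this step to the encoder.
    sigma = np.exp(0.5 * log_var)
    return mu + sigma * eps

# With eps held fixed, dz/dmu = 1 and dz/dsigma = eps -- unlike a raw
# sampling node, which has no defined gradient at all.
z = sample_reparameterized(2.0, np.log(0.25), rng.standard_normal())
```

Averaged over many eps draws, z is distributed exactly as N(mu, sigma^2), which is the identity the comment quotes.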
Really really awesome channel!!! Look forward to watching more of your videos!
Wow! Great video. Very concise and easy to understand something quite complex.
This is great! Keep going, we need you! Don't stop making amazing videos like this
This video suddenly popped up this morning on my home page. Now I know my Sunday will be great. :D
Hi, I am a Graduate Student at UMass Amherst. I really liked your video, it gave me a lot of ideas. Watching this before reading the paper would really help. Please keep it coming I'll be waiting for more.
Simply Amazing! Thanks for sharing this. Absolutely loved it. Hope to see more videos from you :)
This is great! Keep going, we need you!
Amazing explanation to a complicated topic! Thank you so much!!!!
Shared your work with my followers. Keep making amazing content
Subscribed. Very useful -- i'm an applied ML researcher (applying these techniques to real-world problems) so I need a way to quickly "scan" methods and determine what may be useful before diving in-depth. These styles of videos are exactly what I need.
So many ideas come to mind after watching this video. Well done!
Thank you! This was comprehensive and comprehensible.
Absolutely great stuff Arxiv Insights! Subscribed to your videos for life :)
Very good explanation of Variational Autoencoders! Kudos!
Your explanation is so clear.
Don't stop making amazing videos like this
Very useful. Great content. Continue what you're doing. Great job.
Finally I understood the intuition of sampling from mu and sigma and reparameterization trick. Thanks!
You help so much with my exams, thanks man, subscribed for more high quality stuff!
Wow, love your videos. I have not worked with reinforcement learning, but I’d love to hear your analysis of other generative models.
Awesome explainations and interesting subjects, keep it up!
Very clearly explained. Good job.
Amazing description... Need more videos on different things
Don't you ever stop explaining papers like this. Better than Siraj's video. Just explain the code part a bit longer. And your channel is set.
exactly. show some more code please.
Yea we can't really do much until we code and see results ourselves.
Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)
@@shrangisoni8758 He's explained the fundamental concepts, you can take those concepts and translate them to code. He shouldn't have to do that for you.
@@pixel7038 Please stop spreading his name. He has faked his way more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/ And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy has shown while plagiarizing: changing "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".
Epic video Xander! I learned a lot from your explanation. Now to try and implement some code!
That was a great explanation! Thank you so much!
Thanks, this video clarified many things from the original paper.
You're explaining this very well! Finally an explanation on an AI technique that's easy to follow and understand. Thank you.
We needed a serious and technical channel about latest findings in DL. That siraj crap is useless. Keep going! Awesome
Immediate subscribe :) Thanks for this in-depth video. Please keep a format like this in the future (relatively in-depth explanation, to build a real intuition about these techniques).
I like the subtle distinction you made between the disentangled variational autoencoder and the normal variational autoencoder: changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged, but changing the first dimension in the normal version not only rotates the image, it changes other features as well. Thank you. Me gleaning that distinction from the Higgins et al. beta-VAE DeepMind paper would be unlikely...
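The traversal this comment describes (sweep one latent dimension, hold the rest fixed, decode each code) is easy to sketch. A toy version below; the function name and the identity stand-in for the trained decoder are illustrative assumptions, not code from the paper:

```python
import numpy as np

def traverse_latent(decode, z, dim, values):
    # Decode copies of z that differ only in one latent dimension.
    # With a disentangled representation, only a single generative
    # factor (e.g. face rotation) should vary across the outputs.
    outputs = []
    for v in values:
        z_mod = z.copy()
        z_mod[dim] = v
        outputs.append(decode(z_mod))
    return outputs

# A trained decoder network would go here; an identity map keeps the
# sketch self-contained.
frames = traverse_latent(lambda z: z.copy(), np.zeros(3), dim=0,
                         values=np.linspace(-3.0, 3.0, 5))
```

In a real beta-VAE you would pass the trained decoder instead of the identity map and render each output as an image; the disentangled model shows one factor changing per dimension, while the vanilla VAE mixes several.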
Really love this video! Good job!
I just found this channel and subscribed. Great video, I enjoyed the pacing and technical components. Can you make videos like this for popular ML topics like backpropagation or the EM algorithm?
Big Fan. Would love to see videos where you breakdown some of the applications using deep learning tools.
One minute watching this video is enough to become a new subscriber! Awesome
your videos are awesome, don't lose track because of subscriber counts.
Very clear explanation. Thank you!
This is just good content. Such in depth explanations are what we need in AI community. Great work.
Holy crap, another Xander interested in machine learning :D
Finally, someone who cares their viewers actually get to understand VAEs.
So clear, an amazing 15 minutes. Thank you!
Thank you very much, I was trying to understand it, but it's much easier when I found this video!
Very good explanation. Subscribed to the channel. Looking for more thoughtful videos on cutting edge ML stuff.
Excellent material!!
I'm always intimidated when he says it's going to be technical, but then he explains it so concisely.
That was a great explanation, thank you!
Very good explanation! Thank you man!
what a gem of a channel I have found here...
Thanks for this video. It gave a nice overall idea about variational auto-encoders.