Visualizing the Latent Space: This video will change how you imagine neural nets!
Latent space is how neural networks store information. In this video, we discuss Autoencoders and Variational Autoencoders, and how we can explore, interpret, and manipulate images by looking at their latent space representations. The CelebA dataset and the DFC-VAE model (arxiv.org/abs/1610.00291) show some pretty interesting results that I found super enlightening about this mysterious topic of Machine Learning.
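The encode-to-a-small-latent-vector, decode-back-to-pixels idea from the video can be sketched with a linear autoencoder, which is equivalent to PCA. This is a toy stand-in (random arrays in place of CelebA faces, sklearn's PCA in place of the DFC-VAE), not the model used in the video:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy "images": 200 samples of 8x8 = 64 pixels (random data standing in
# for the CelebA faces used in the video).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

# A linear autoencoder is equivalent to PCA: "encoding" projects each
# image into a small latent space, "decoding" reconstructs the pixels.
pca = PCA(n_components=8)
latents = pca.fit_transform(X)          # encoder: 64 pixels -> 8 latent dims
recon = pca.inverse_transform(latents)  # decoder: 8 latent dims -> 64 pixels

print(latents.shape, recon.shape)
```

A real autoencoder replaces both projections with nonlinear neural networks, but the compress-then-reconstruct structure is the same.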
To support the channel and access the Word documents/slides used in this video, consider JOINING the channel on KZhead or Patreon. Members get access to scripts, slides, animations, and illustrations for most of the videos on my channel!
Join and support the channel - www.youtube.com/@avb_fj/join
Patreon - / neuralbreakdownwithavb
Follow on Twitter: @neural_avb
Timestamps-
0:00: Intro
0:48: Intuition
2:11: Autoencoders
2:50: Nearest Neighbor Search
4:30: VAE and Generative AI
5:41: Latent Space Arithmetic
8:05: Finding patterns and trends
This is the most concise, simplest, and best explanation of latent space that I've found so far!
Thanks a lot! Glad you enjoyed it! 🙏🏽
Excellent clarification of multiple concepts in one pass. Helped me relate the encoders to latent space through a much more accessible metaphor. Thank you.
Great content!
Excellent video
I hope you can fix the quiet audio!
Unfortunately YT doesn't allow increasing the volume after a video is posted. I could re-upload it, but I'm not sure it would really be worth it. I'll just take this as a lesson for future videos. Thanks for the comment!
This was an insanely good video and explanation, ty!
Thanks! Awesome to hear that! 😊
@avb_fj Thank you! I'd like to see one where you code the encoder and decoder. I'm coding an autoencoder, and it's a little tough trying to find a good balance between reducing dimensionality and keeping the important details.
This video is clear and concise, amazing work!
Truly Amazing
Amazing Explanation Thanks a lot!
10 minutes felt like 30; I rewound so many times during the vid. The video is so full of info, thanks a lot!
This video is so fascinating. Amazing work.
Thanks! Glad you enjoyed it!
Thanks for great video! Very well explained!
Great explanations! Getting a better understanding of how some parts of Stable Diffusion work without any effort. :)
This video is super cool. It's good to see those concepts visualized.
Thanks!😊
Great video, thanks
A great description of interpreting deep learning models. Well done!
This is a fantastic breakdown. Great pacing, wonderful examples with easy-to-follow metaphors. Fix your audio and keep 'em coming!
Awesome content!
🙏🏽🙌🏼
I really appreciate your lucid explanation. Superb. I'd like to request that you enhance the sound quality a bit. Good wishes, and thanks for such videos!
Thanks! I’ll keep that in mind going forward…
Excellent video! Thanks for your work! QQ: Is there a repo for the real-time image manipulation software you used as your demo?
Thank you, sir! You cleared up the concept of latent space for me! And I can't wait to click on your multimodal video on this channel.
Just subscribed after your NeRF video, and this one is awesome too! You, Yannic, and Two Minute Papers are great at making AI content relatable and interesting and freaking cool :) What a time to be alive! lol
Wow that’s high praise! Those two are definitely an inspiration, so I’m kinda feeling surreal reading this! Thank you so much!! 🙌🏼🙌🏼
Which tools did you use to be able to change each principal component and see its effect on the output image?
If I remember correctly, I just used ipywidgets inside a Jupyter notebook for the UI and display. I also wrote the logic for the PCA (sklearn), the encoding/decoding, and the interpolation of the latent vectors.
can we do same with the pixels to enhance the image?
Can you clarify what you meant by “doing the same with pixels”?
Oh man. I was a bit lost when you were saying encoder this, decoder that, but the smile example at 6:50 hit the nail on the head. It's indeed mind-blowing. I'd love to know more about AI for outsiders, so I subscribed. PS: A concept I picked up from Ezra is that AI turns semantics into geometry, so you can do king - man + woman and get queen! (paraphrasing). If you could expand on this and give more examples in different modalities, that'd be awesome.
Nice! Glad you enjoyed it and stuck around for the whole thing. The semantic example is pretty awesome, yeah. I've brought it up on the channel in my History of NLP video, but more examples across different modalities seems like a nice idea for a video!
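The king - man + woman arithmetic from that exchange can be sketched with toy vectors. These 4-dimensional "embeddings" are hand-crafted for illustration; real word vectors (e.g. word2vec) are learned and live in hundreds of dimensions, but the nearest-neighbor lookup works the same way:

```python
import numpy as np

# Hand-crafted toy embeddings: dims roughly encode (royalty, male,
# female, status). Real embeddings would be learned, not written out.
vecs = {
    "king":  np.array([1.0, 1.0, 0.0, 0.9]),
    "queen": np.array([1.0, 0.0, 1.0, 0.9]),
    "man":   np.array([0.0, 1.0, 0.0, 0.1]),
    "woman": np.array([0.0, 0.0, 1.0, 0.1]),
}

def nearest(v, exclude=()):
    """Return the vocabulary word whose vector has the highest cosine
    similarity to v, skipping any words in `exclude`."""
    best, best_sim = None, -2.0
    for word, w in vecs.items():
        if word in exclude:
            continue
        sim = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

result = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```

Excluding the query words is standard practice, since the raw result often lands closest to one of the inputs.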
Great content! Just an FYI: you might want to turn up the mic volume. From the consumer's POV, it's easier to lower the volume than to turn it up.
Thanks for the feedback! Will keep it in mind for the next one…
Great video, but the audio level is way too low. Also, the video and audio are not in sync.
Great content!