Variational Auto Encoder (VAE) - Theory

Jan 25, 2023
15,946 views

VAEs are a mix of Variational Inference (VI) and autoencoder neural networks. They are used mainly for generating new data. In this video we outline the theory behind the original paper, including a look at regular autoencoders, variational inference, and how the two combine to create the VAE.
Original Paper (Kingma & Welling 2014): arxiv.org/pdf/1312.6114
The first and only Variational Inference (VI) course online!
Become a member and get full access to this online course:
meerkatstatistics.com/courses...
** 🎉 Special KZhead 60% Discount on Yearly Plan - valid for the 1st 100 subscribers; Voucher code: First100 🎉 **
“VI in R” Course Outline:
Administration
* Administration
Intro
* Intuition - what is VI?
* Notebook - Intuition
* Origin, Outline, Context
KL Divergence
* KL Introduction
* KL - Extra Intuition
* Notebook - KL - Exercises
* Notebook - KL - Additional Topics
* KL vs. Other Metrics
VI vs. ML
* VI (using KL) vs. Maximum Likelihood
ELBO & “Mean Field”
* ELBO
* “Mean Field” Approximation
Coordinate Ascent VI (CAVI)
* Coordinate Ascent VI (CAVI)
* Functional Derivative & Euler-Lagrange Equation
* CAVI - Toy Example
* CAVI - Bayesian GMM Example
* Notebook - Normal-Gamma Conjugate Prior
* Notebook - Bayesian GMM - Unknown Precision
* Notebook - Image Denoising (Ising Model)
Exponential Family
* CAVI for the Exponential Family
* Conjugacy in the Exponential Family
* Notebook - Latent Dirichlet Allocation Example
VI vs. EM
* VI vs. EM
Stochastic VI / Advanced VI
* SVI - Review
* SVI for Exponential Family
* Automatic Differentiation VI (ADVI)
* Notebook - ADVI Example (using STAN)
* Black Box VI (BBVI)
* Notebook - BBVI Example
Expectation Propagation
* Forward vs. Reverse KL
* Expectation Propagation
Variational Auto Encoder
Why become a member?
* All video content
* Extra material (notebooks)
* Access to code and notes
* Community Discussion
* No Ads
* Support the Creator ❤️
VI (restricted) playlist: bit.ly/389QSm1
If you’re looking for statistical consultation, someone to work on interesting projects, or give training workshops, visit my website meerkatstatistics.com/ or contact me directly at david@meerkatstatistics.com
~~~~~ SUPPORT ~~~~~
Paypal me: paypal.me/MeerkatStatistics
~~~~~~~~~~~~~~~~~
Intro/Outro Music: Dreamer - by Johny Grimes
• Johny Grimes - Dreamer

Comments
  • @6:10 VI starts. The rundown was awesome; puts everything into perspective

    @paedrufernando2351 · 4 months ago
  • Fantastic video! This effectively resolved my queries.

    @user-qt5xd9lk5j · 4 months ago
  • Great explanation! The two ways of looking at the loss function are insightful

    @evgeniyazarov4230 · 5 months ago
  • Very clear and useful information!

    @123sendodo4 · 1 year ago
  • awesome video.

    @shounakdesai4283 · 3 months ago
  • Hey, great video, thank you for your efforts. Is it possible to get your slides?

    @evaggelosantypas5139 · 1 year ago
    • Thanks. The slides are offered on my website meerkatstatistics.com/courses/variational-inference-in-r/lessons/variational-auto-encoder-theory/ for members. Please consider subscribing to also support this channel.

      @MeerkatStatistics · 1 year ago
    • @@MeerkatStatistics ok, thanks

      @evaggelosantypas5139 · 1 year ago
  • Thank you for the awesome video! I have a question @11:35. I don't clearly understand why g_\theta takes x. Am I correct that it does not take x if g_\theta is a Gaussian transform, since it will just be g_\theta(\epsilon) = \sigma \epsilon + \mu (where \sigma and \mu come from \theta)? Again, I appreciate your video a lot!

    @minuklee6735 · 1 month ago
    • Although not explicitly denoted, q(z) is also dependent on the data. This is why g_\theta will usually also depend on x. I didn't want to write q(z|x) as in the paper, because it is not a posterior, but rather a distribution whose parameters you tweak until it reaches the true posterior p(z|x). I have a simple example (for the CAVI algorithm) on my website (for members) meerkatstatistics.com/courses/variational-inference-in-r/lessons/cavi-toy-example/ and also a bit more elaborate example free on KZhead kzhead.info/sun/a6izebSSpYxnZaM/bejne.htmlsi=8Un505QqOEtij9XV - in both cases you'll see a q(z) that is a Gaussian, but whose parameters depend on the data x.

      @MeerkatStatistics · 1 month ago
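As a small illustration of the reparameterization point discussed above (not taken from the video): for a Gaussian q, a sample z is written as a deterministic transform of standard normal noise, with \mu and \sigma supplied by the encoder for a given data point x. The function name and the numeric values below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_theta(epsilon, mu, sigma):
    # Reparameterization: deterministic map from noise epsilon ~ N(0, 1)
    # to a sample from N(mu, sigma^2). In a VAE, mu and sigma are outputs
    # of the encoder network applied to x, which is how z depends on the data.
    return mu + sigma * epsilon

# Hypothetical encoder outputs for one data point x
mu, sigma = 1.5, 0.5
epsilon = rng.standard_normal(10_000)
z = g_theta(epsilon, mu, sigma)
print(z.mean(), z.std())  # close to mu = 1.5 and sigma = 0.5
```

Because the randomness is isolated in \epsilon, gradients with respect to \mu and \sigma (and hence the encoder weights) can flow through the sample, which is what makes this trick useful for training.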
  • Could you please tell me if there is a mistake in the notation @8:26: z_{i} = z_{l}?

    @stazizov · 1 month ago
    • Hey, yes of course. Sorry for the typo.

      @MeerkatStatistics · 1 month ago
    • ​@@MeerkatStatistics Thank you so much) Great video!!! 🔥

      @stazizov · 1 month ago