Intuitively Understanding the KL Divergence
Apr 12, 2021
75,149 views
This video discusses the Kullback-Leibler (KL) divergence and explains how it arises as a natural measure of how one distribution differs from another. The video works through a simple proof which shows how, with some basic maths, we can look under the hood of the KL divergence and understand intuitively what it's about.
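For a concrete feel of the quantity the video builds up to, here is a minimal Python sketch (the coin probabilities are made-up example values, not taken from the video) that evaluates D_KL(P||Q) directly from the formula:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Illustrative values: a fair "true" coin P and a biased "model" coin Q.
p = [0.5, 0.5]   # P(heads), P(tails) for the true coin
q = [0.7, 0.3]   # P(heads), P(tails) for the candidate coin
print(kl_divergence(p, q))  # ~0.0871 nats
```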
I just want to say: this is by far the best explanation of KL divergence I've found on the internet. Thanks so much!
This was actually one of the most helpful videos. Thank you
You are unbelievably good at teaching, man. You explained it better than they did in my course.
Holy smoke, you are legit the GOAT. Such a concise yet clear and intuitive explanation.
Great video! Loved the intuition behind the KL divergence. For those thinking about applications, this is used in the loss function of variational autoencoders (VAEs), a class of deep networks, where it helps the encoder find low-dimensional features of high-dimensional input data (e.g. deconstructing images into "features").
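As a sketch of that application (assuming the standard VAE setup, which is not shown in the video): the KL term between a diagonal Gaussian encoder distribution N(mu, sigma^2) and a standard normal prior has a well-known closed form, and that closed form is what typically appears in a VAE loss:

```python
import numpy as np

def vae_kl_term(mu, log_var):
    """Closed-form D_KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims.

    Uses the standard identity -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2).
    """
    mu, log_var = np.asarray(mu, float), np.asarray(log_var, float)
    return float(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var)))

# Illustrative values for a 3-dimensional latent code.
print(vae_kl_term(mu=[0.0, 0.5, -1.0], log_var=[0.0, -0.2, 0.1]))
```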
Thanks for the brilliant, intuitive and crystal-clear explanation!
Thanks, that made the idea make a lot more sense to me. Showing how it arises so nicely from a large sample size, made it feel much more natural.
Thanks for the simple, yet helpful, explanation!
I didn't expect such a good explanation from a randomly suggested YouTube video.
KL divergence confused me for so long, and I understood it just by watching your video once, thank you very much!
This type of explanation is perfect! First boiling the problem down to the most intuitive understanding and then deducing the general formula from there. Thanks so much!
I'm just rewatching this video to freshen up my deep learning fundamentals. Super clear video, thank you so much!
One of the most useful explanations ever. Thanks!!
Great explanation! One technical remark I have is that (from my understanding) KL divergence is not technically a measure of distance, since it's not symmetric ( D_KL(P||Q) != D_KL(Q||P) ).
Yes, that's why it's called divergence instead of distance.
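A quick worked check of that asymmetry, using made-up coin probabilities P = (0.9, 0.1) and Q = (0.5, 0.5), with natural logs so the values are in nats:

\[
D_{\mathrm{KL}}(P\,\|\,Q) = 0.9\log\frac{0.9}{0.5} + 0.1\log\frac{0.1}{0.5} \approx 0.368,
\qquad
D_{\mathrm{KL}}(Q\,\|\,P) = 0.5\log\frac{0.5}{0.9} + 0.5\log\frac{0.5}{0.1} \approx 0.511.
\]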
Thanks Adian! The connection back to cross entropy loss is cool. Slowly coming together for me.
Thank you so much for this content. By far the best explanation of KL divergence I've seen so far.
Great video. Thanks for sharing. Really intuitive.
This is awesome, thanks for breaking it down Adian
Thank you so much for this video and clear explanation!
Thanks so much for this, needed to understand what KL Divergence is for a paper I'm reading and you just saved me so much time!
Great explanation. Thank you so much!
Best explanation on the interwebs!
Very well-explained. Thank you!
Very well explained! Thank you!
Amazing explanation, thanks!
Perfectly explained in 5 minutes. Wow.
Concise and clear, thank you!
Thanks for sharing, very helpful and intuitive.
Bro, this intuition was not normal, you're just a genius!!
Beautiful explanation!
Keep the vids coming, this is so, so useful.
Very great explanation!
Great content! Thank you.
A question here: why would the number of heads and the number of tails be the same for both distributions at 3:04? If the probabilities for the two coins are different, then the numbers of occurrences of heads and tails could also be different.
Very good video. Thanks so much!
this was great and super useful in my internship (which really just started), Thanks! :)
This was awesome. Thank you.
excellent explanation
thank you so much, very nicely explained
This is so intuitive!!!!!!!!!❤
This was very good, have liked and subscribed.
Great video . Thanks.
Wow this is an amazing explanation. So is KL divergence equivalent to Bayes factor with equal priors?
Thanks for the explanation!! One thing: the formulas were a bit confusing in how you denoted q1 & q2 as the probabilities for coin 2, instead of something like p2 & q2 = 1 - p2.
Thanks a lot! 5 minutes for explaining what I couldn't understand in hours.
super helpful! Thank you
Thanks for the explanation. With the RLHF stuff happening in ChatGPT, does anyone know why they choose to use KL divergence instead of Cross-entropy loss when calculating the RL policy penalty?
This video is amazing
Finally a simple explanation
Excellent video. Can someone help me understand why it's called a divergence in the first place? Also, why are we raising to the power 1/N to normalise over the sample size? I didn't understand the logic behind this.
Nice video! Can you say something about alternatives? E.g. why wouldn't mean squared error (of two probability distributions) work as well?
Just perfect!
Excellent!!!
This is gold!
@3:26 I don't understand how we are normalizing by raising to the power of 1/N. Could you please explain that?
Same question here. This is a fantastic explanation, but it defeats me when you mention "we normalize by raising to the power of 1/N". Why do we do this? What does that do or mean for the data? Thanks for making this video! Awesome!
I think the 1/N gives us the 'average' probability of a single toss; e.g. if we had a fair coin and 3 tosses, the probability of our sequence would be 1/2 * 1/2 * 1/2 = 1/8. If we had ten tosses, the probability of the sequence would be 1/(2^10). These numbers are currently incomparable. If we now look at the probability of the sequence to the power of 1/N, where N is the number of tosses, then suddenly they are the same, which is what we would want. It basically normalizes the probability of the sequence! (A worked version of this normalization is written out after this thread.)
@@vyasraina3930 Thanks for the explanation! In general, why is raising to the power 1/N more appropriate than, say, multiplying by 1/N?
@@vyasraina3930 so basically the 1/N gets rid of the number of tosses/sample size and in your case of a fair coin makes it so the probability would be 1/2 regardless of N by getting rid of the N (exponent in your probability sequence)
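One more worked version of the point in this thread (example numbers only): raising the sequence probability to the power 1/N takes its geometric mean over flips, i.e. a per-flip probability, which removes the dependence on how many flips happened to be made:

\[
\left[\left(\tfrac{1}{2}\right)^{3}\right]^{1/3} = \tfrac{1}{2},
\qquad
\left[\left(\tfrac{1}{2}\right)^{10}\right]^{1/10} = \tfrac{1}{2},
\qquad
\text{and in general}\quad
\left(p_1^{\,n_H}\,p_2^{\,n_T}\right)^{1/N} = p_1^{\,n_H/N}\,p_2^{\,n_T/N}.
\]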
Thank you!
Thanks very much.
Thank you a lot
Great video! Can you make a video about soft actor critic?
Thank you so much
beautiful!
...tremendous!
Thank you! May I ask how you made the video? I want the numbers to move like they do in your show. It looks great and maintains comprehensibility by bringing it to life! We have to make a video about AIC for our neuroinformatics class, so your video would be a nice introduction to the topic anyway... You do it a little better than our prof^^
This might break the magic a bit, but I just use plain old-fashioned Microsoft PowerPoint! To move the equations I use the built-in animations functionality, though it can get a bit tedious to make everything move exactly how you'd like. But best of luck on making your video.
@@adianliusie590 Thanks for your answer! Good to know. It doesn't break the magic. I just use another program, and I'm a noob at some points.
Nice video :)
Greaaaat job
Awesome
Awesome video, but at 3:27, on what basis did we take the log?
That's a good question which I'm not sure I can answer too well. One could claim that the log function makes numbers more readable, and often when we deal with very large or very small numbers we log the expression first, since the log operation is reversible and squeezes the range into a smaller one (e.g. e^10, about 22000, becomes 10), as is done with log probabilities. It could also just be mathematical convenience to drop the powers so that the overall expression looks much simpler. However, I think you'd find a more satisfying answer by looking in the direction of entropy, since entropy is defined as the expected log probability of a distribution. Because the KL is tightly interlinked with entropy, something may drop out there which shows that logging the ratio makes the expression more natural and intuitive. I'd have to think about it more, and maybe I'll make a video on entropy in the near future; if I figure anything out I'll get back to you then. (A compressed sketch of how the log leads to the KL expression is written out after this thread.)
@@adianliusie590 wow that’s such a great answer. I truly appreciate that! And yeah, what you said makes sense, and with regards to entropy you’re very right; since entropy is the expected/avg information of a distribution of random events and KL div measures the *relative* difference in expected information between two distributions.
Hi Darkev and Adian, there is another video on YouTube (Study Squad Academy) which explains the KL divergence from the perspective of Jensen's inequality. The main argument for taking the log is that it is a concave function, which somewhat touches on Adian's comment.
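For anyone following this thread, here is a compressed sketch of the derivation the video walks through (using its p1, p2 / q1, q2 and nH, nT notation); once the likelihood ratio is normalized by 1/N and logged, the KL expression drops out:

\[
\log\!\left[\frac{P(\text{obs}\mid\text{coin 1})}{P(\text{obs}\mid\text{coin 2})}\right]^{1/N}
= \frac{1}{N}\log\frac{p_1^{\,n_H}\,p_2^{\,n_T}}{q_1^{\,n_H}\,q_2^{\,n_T}}
= \frac{n_H}{N}\log\frac{p_1}{q_1} + \frac{n_T}{N}\log\frac{p_2}{q_2}
\;\longrightarrow\;
p_1\log\frac{p_1}{q_1} + p_2\log\frac{p_2}{q_2}
= D_{\mathrm{KL}}(P\,\|\,Q),
\]

since nH/N → p1 and nT/N → p2 for large N. Without the log, the counts would remain as exponents; the log turns the product of powers into a weighted sum, which converges to a clean expectation.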
So fire
So a KL divergence of zero means identical distributions? What do the || lines mean?
Dude just plops in some God-tier eye openers in the credits and leaves. Never realized this relationship between KL and cross-entropy loss.
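For reference, the identity behind that credits slide (standard definitions, not a quote from the video), with P the label distribution and Q the model's prediction:

\[
D_{\mathrm{KL}}(P\,\|\,Q) = \sum_i p_i \log\frac{p_i}{q_i}
= \underbrace{-\sum_i p_i \log q_i}_{\text{cross-entropy } H(P,\,Q)}
\;-\;
\underbrace{\left(-\sum_i p_i \log p_i\right)}_{\text{entropy } H(P)}.
\]

Since H(P) does not depend on Q, minimizing cross-entropy in Q is equivalent to minimizing D_KL(P||Q), even though the two losses are not numerically identical.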
Only it is not a distance (because it is not symmetric), but a pseudo-distance. Great video!
The video is good, but what confuses me is the correctness of the ratio. Sometimes we have different probabilities (like nH = nT = 1, and p1 = q2, p2 = q1), but the ratio comes out to 1, which would mean the distributions are similar or the same. That seems wrong. So maybe this explanation is just a coincidence, or I have made some mistake. Hopefully you can help me. (If my poor English makes it confusing, I am sorry for that.)
Hi, I don't get why you assume that the nH and nT for coin 2 would be the same as for coin 1?
Yeah, I don't get it either, any explanations anyone?
@@Marcus-ok2jy nH and nT are just the numbers of heads and tails generated in the sequence by the 'true coin', not by coin 2. I.e., if I have a true coin and I flip it a few times I may get H,H,T,H (nH=3, nT=1), and you will notice that nH/N = 0.75 and nT/N = 0.25, which is not equal to p1 and p2 respectively. However, if we were to flip the coin many more times, infinitely more times, we would notice the number of heads becomes the same as the number of tails. Thus, he is saying that in the limit of a sufficient number of coin flips, we will see nH/N = 0.5 and nT/N = 0.5.
@@Drewbie_T Hi Andrew, but at 3:21 the formula P(observations|coin 2) looks at the nH and nT of coin 2, does it not? This is so that the KL divergence can take into account the disparity in probability distribution between the 2 coins.
@@Marcus-ok2jy No it does not, it is only looking at nH and nT of the true coin. Coin 2 is not being flipped at all. The only part where coin 2 comes in is after flipping the true coin (which has probability p1 heads and p2 tails), we obtain some chain of outcomes (i.e., H,H,T,H,T,T). Now that we have flipped the true coin and obtained an outcome, we look at the coin 2 probabilities and say, how likely is it that this sequence (H,H,T,H,T,T) could have come from coin 2? If coin 2 has .95 probability of landing on heads every time, it is unlikely that we would see an equal number of heads and tails in the distribution.
It's because we first flip a coin N times and record the number of heads (nH) and the number of tails (nT). It is assumed here that the coin used represents the real coin (which has probability p1 for heads and p2 for tails). We are now interested in finding how closely coin 2 can mimic the real coin's flips. And since the real coin produced nH heads and nT tails during our experiment, we use those same values. Hope this helped. (A small simulation illustrating this is sketched after this thread.)
😁😁😁gotcha. super ez explanation
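To tie this thread together, a small simulation sketch (my own illustrative code and probabilities, not from the video): only the true coin is flipped, the observed sequence is scored under both coins, and the per-flip log ratio approaches the KL divergence as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

p1, p2 = 0.5, 0.5      # true coin (coin 1)
q1, q2 = 0.7, 0.3      # candidate coin (coin 2) -- illustrative values

def per_flip_log_ratio(n_flips):
    # Flip ONLY the true coin; coin 2 is never flipped, just used to score.
    flips = rng.random(n_flips) < p1          # True = heads
    nH = flips.sum()
    nT = n_flips - nH
    log_ratio = nH * np.log(p1 / q1) + nT * np.log(p2 / q2)
    return log_ratio / n_flips                # normalize by N

kl_exact = p1 * np.log(p1 / q1) + p2 * np.log(p2 / q2)
for n in (10, 1_000, 100_000):
    print(n, per_flip_log_ratio(n))           # approaches the exact value below
print("exact KL:", kl_exact)                  # ~0.0871 nats
```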
nice
Why raise to the 1/N power, why use the log? Why don't we just use sum(P/Q)?
I love you Biradr
KL loss is not exactly equivalent to cross-entropy loss, right?
I still don't know why the log appears there.
It has my initials
wowowowo
So, why is KL divergence not symmetric?
Great explanation, would be perfect if you spoke more slowly.
When you say "likelihood of the observation for each coin", you really mean "probability" instead of "likelihood", right?
It’s technically not “distance”
It is not a measure of distance between distributions!
Thanks so much