Word Embedding and Word2Vec, Clearly Explained!!!

May 19, 2024
246,422 views

Words are great, but if we want to use them as input to a neural network, we have to convert them to numbers. One of the most popular methods for assigning numbers to words is to use a Neural Network to create Word Embeddings. In this StatQuest, we go through the steps required to create Word Embeddings, and show how we can visualize and validate them. We then talk about one of the most popular Word Embedding tools, word2vec. BAM!!!
Note, this StatQuest assumes that you are already familiar with...
The Basics of how Neural Networks Work: • The Essential Main Ide...
The Basics of how Backpropagation Works: • Neural Networks Pt. 2:...
How the Softmax function works: • Neural Networks Part 5...
How Cross Entropy works: • Neural Networks Part 6...
If you'd like to support StatQuest, please consider...
Patreon: / statquest
...or...
KZhead Membership: / @statquest
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
4:25 Building a Neural Network to do Word Embedding
8:18 Visualizing and Validating the Word Embedding
10:42 Summary of Main Ideas
11:44 word2vec
13:36 Speeding up training with Negative Sampling
#StatQuest #word2vec
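For readers who want to poke at these ideas outside the video, here is a minimal sketch (not code from the video) that trains word embeddings with gensim's word2vec implementation and then checks that similar words end up with similar numbers. The toy corpus and parameter values are made up, and the parameter names assume gensim 4.x:

```python
# Minimal word2vec sketch (assumptions: gensim 4.x, toy corpus, made-up parameter values)
from gensim.models import Word2Vec

corpus = [
    ["troll2", "is", "great"],
    ["gymkata", "is", "great"],
]

# vector_size = how many embedding values each word gets (the video uses 2 so they can be plotted);
# sg=1 selects skip-gram and negative=2 turns on negative sampling (the two word2vec speed-ups).
model = Word2Vec(corpus, vector_size=2, window=2, min_count=1, sg=1, negative=2, epochs=50)

print(model.wv["troll2"])               # the learned embedding (the weights) for "troll2"
print(model.wv.most_similar("troll2"))  # validate: words used in similar contexts should score high
```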

Comments
  • To learn more about Lightning: lightning.ai/ Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

    @statquest · 1 year ago
  • In simple words, word embeddings are the by-product of training a neural network to predict the next word. By focusing on that single objective, the weights themselves (the embeddings) can be used to understand the relationships between the words. This is actually quite fantastic! As always, great video @statquest!

    @karanacharya18 · 9 days ago
    • bam! :)

      @statquest · 9 days ago
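To make the "the weights are the embeddings" idea above concrete, here is a rough PyTorch sketch; the vocabulary, training pairs, and sizes are made up for illustration, not taken from the video:

```python
# Train a tiny next-word predictor, then read the embeddings off the learned input weights.
import torch
import torch.nn as nn

vocab = ["troll2", "is", "great", "gymkata"]
word_to_id = {w: i for i, w in enumerate(vocab)}
pairs = [("troll2", "is"), ("is", "great"), ("gymkata", "is")]  # (word, next word)

embed = nn.Embedding(len(vocab), 2)   # 2 weights (embedding values) per word
out = nn.Linear(2, len(vocab))        # predicts which word comes next
opt = torch.optim.SGD(list(embed.parameters()) + list(out.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()       # cross entropy, as in the video

x = torch.tensor([word_to_id[w] for w, _ in pairs])
y = torch.tensor([word_to_id[n] for _, n in pairs])
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(out(embed(x)), y)  # how badly we predicted the next word
    loss.backward()                   # backpropagation adjusts the weights...
    opt.step()

print(embed.weight.data)              # ...and those learned weights ARE the word embeddings
```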
  • Probably the most important concept in NLP. Thank you for explaining it so simply and rigorously. Your videos are a thing of beauty!

    @rishavkumar8341 · 1 year ago
    • Wow, thank you!

      @statquest · 1 year ago
  • Josh, this is the clearest and most concise explanation of embeddings on KZhead!

    @exxzxxe · 2 months ago
    • Thank you very much!

      @statquest · 2 months ago
    • totally agree

      @davins90 · 1 month ago
  • When I watched this, I had only one question: why did all the others fail to explain this, if they fully understood the concept?

    @ah89971 · 8 months ago
    • bam!

      @statquest · 8 months ago
    • @@statquest Double Bam!

      @rudrOwO · 5 months ago
    • Bam the bam!

      @meow-mi333 · 4 months ago
  • Can't believe this is free to watch; your quality content really helps people develop a good intuition about how things work!

    @HarpitaPandian · 5 months ago
    • Thanks!

      @statquest · 5 months ago
  • This channel is the best resource for ML on the entire internet.

    @pichazai · 7 days ago
    • Thank you!

      @statquest · 6 days ago
  • This channel is literally the best thing that has happened to me on YouTube! Way too excited for your upcoming video on transformers, attention and LLMs. You're the best Josh ❤

    @rachit7185 · 1 year ago
    • Wow, thanks!

      @statquest · 1 year ago
    • Yes, please do a video on transformers. Great channel.

      @MiloLabradoodle · 1 year ago
    • @@MiloLabradoodle I'm working on the transformers video right now.

      @statquest · 1 year ago
    • @@statquest Can't wait to see it!

      @liuzeyu3125 · 11 months ago
  • StatQuest is by far the best machine learning channel on KZhead to learn the basic concepts. Nice job

    @SergioPolimante · 3 months ago
    • Thank you!

      @statquest · 3 months ago
  • So good!!! This is literally the best deep learning tutorial series I've found… after a very long search on the web!

    @yuxiangzhang2343 · 8 months ago
    • Thank you! :)

      @statquest · 8 months ago
  • Thank you Josh, this is something I've been meaning to wrap my head around for a while and you explained it so clearly!

    @dreamdrifter · 11 months ago
    • Glad it was helpful!

      @statquest · 11 months ago
  • One of the best videos I've seen so far on embeddings.

    @mannemsaisivadurgaprasad8987 · 6 months ago
    • Thank you!

      @statquest · 6 months ago
  • I was struggling to understand NLP and DL concepts, thinking of dropping my classes, and BAM!!! I found you, and now I'm writing a paper on neural program repair using DL techniques.

    @harichandananeralla8099 · 8 months ago
    • BAM! :)

      @statquest · 8 months ago
  • That was the first time I actually understood embeddings - thanks!

    @TropicalCoder · 8 months ago
    • bam! :)

      @statquest · 8 months ago
  • Damn, when I first learned about this 4 years ago, it took me two days to wrap my head around these weights and embeddings well enough to implement them in code. Just now, I needed to refresh myself on the concepts since I have not worked with them in a while, and your video illustrated what I learned back then (a whole 2 days' worth) in just 16 minutes!! I wish this video had existed earlier!!

    @tanbui7569 · 8 months ago
    • Thanks!

      @statquest · 8 months ago
  • This is the best explanation of word embedding I have come across.

    @user-wr4yl7tx3w · 1 year ago
    • Thank you very much! :)

      @statquest · 1 year ago
  • Thank you so much for this playlist! Got to learn a lot of things in a very clear manner. TRIPLE BAM!!!

    @muthuaiswaryaaswaminathan4079 · 6 months ago
    • Thank you! :)

      @statquest · 6 months ago
  • Keep up the amazing work (especially the songs), Josh, you're making life easy for thousands of people!

    @chad5615 · 11 months ago
    • Wow! Thank you so much for supporting StatQuest! TRIPLE BAM!!!! :)

      @statquest · 11 months ago
  • The phrase "similar words will have similar numbers" in the song will stick with me for a long time, thank you!

    @haj5776 · 1 year ago
    • bam!

      @statquest · 1 year ago
  • StatQuest is great! I learn a lot from your channel. Thank you very much!

    @ananpinya835 · 1 year ago
    • Glad you enjoy it!

      @statquest · 1 year ago
  • Absolutely the best explanation that I've found so far! Thanks!

    @FullStackAmigo · 1 year ago
    • Thank you! :)

      @statquest · 1 year ago
  • Hey Josh. A great new series that I, and many others, would be excited to see is bayesian statistics. Would love to watch you explain the intricacies of that branch of stats. Thanks as always for the great content and keep up with the neural-network related videos. They are especially helpful.

    @mamdouhdabjan9292 · 11 months ago
    • That's definitely on the to-do list.

      @statquest · 11 months ago
    • @@statquest looking forward to it.

      @mamdouhdabjan9292 · 11 months ago
  • BAM!! StatQuest never lies, it is indeed super clear!

    @mycotina6438 · 1 year ago
    • Thank you! :)

      @statquest · 1 year ago
  • Haha, I love your opening and your teaching style! When we think something is extremely difficult to learn, everything should begin with singing a song; that makes the day more beautiful to begin with (heheh, actually I am not just teasing lol, I really like that). Thanks for sharing your thoughts with us!

    @wizenith · 1 year ago
    • Thanks!

      @statquest · 1 year ago
  • It's so nice to google something and realize that there is a StatQuest about your question, when you were certain there hadn't been one some time before.

    @channel_SV · 1 year ago
    • BAM! :)

      @statquest · 1 year ago
  • Wow!! This is the best definition of word embedding I have ever heard or seen, right at 09:35. Thanks for the clear and awesome video. You guys rock!!

    @awaredz007 · 14 days ago
    • Thanks! :)

      @statquest · 14 days ago
  • I love all of your songs. You should record a CD!!! 🤣 Thank you very much again and again for the elucidating videos.

    @lfalfa8460 · 5 months ago
    • Thanks!

      @statquest · 5 months ago
  • This is one of the best sources of information.... I always find videos a great source of visual stimulation... thank you.... infinite baaaam

    @rathinarajajeyaraj1502 · 1 year ago
    • BAM! :)

      @statquest · 1 year ago
  • Thank you sir. Your explanation is great and your work is much appreciated.

    @mahdi132 · 9 months ago
    • Thanks!

      @statquest · 9 months ago
  • Hopefully everyone following this channel has Josh's book. It is quite excellent!

    @exxzxxe · 1 month ago
    • Thanks for that!

      @statquest · 1 month ago
  • This video explains the source of the multiple dimensions in a word embedding, in the most simple way. Awesome. :)

    @flow-saf · 5 months ago
    • Thanks!

      @statquest · 5 months ago
  • Highly valuable video and book tutorial; thanks for putting this kind of special tutorial out here.

    @vpnserver407 · 11 months ago
    • Glad you liked it!

      @statquest · 11 months ago
  • Thank you statquest!!! Finally I started to understand LSTM

    @michaelcheung6290 · 1 year ago
    • Hooray! BAM!

      @statquest · 1 year ago
  • Your work is extremely amazing and so helpful for new learners who want to go into the details of how Deep Learning models work, instead of just knowing what they do!! Keep it up!

    @acandmishra · 1 month ago
    • Thanks!

      @statquest · 1 month ago
  • The best video I've seen about this topic so far. Great content! Congrats!!

    @gustavow5746 · 6 months ago
    • Wow, thanks!

      @statquest · 6 months ago
  • Great presentation. You saved my day after I had watched several other videos. Thank you!

    @user-eq9cf4mt2s · 19 days ago
    • Glad it helped!

      @statquest · 19 days ago
  • Hey Josh! Loved seeing your talk at BU! Appreciate your videos :)

    @eamonnik · 1 year ago
    • Thanks so much! :)

      @statquest · 1 year ago
  • Thanks for enlightening us Master.

    @danish5326 · 7 months ago
    • Any time!

      @statquest · 7 months ago
  • I admire your work a lot. Salute from Brazil.

    @RaynerGS · 6 months ago
    • Thank you very much! :)

      @statquest · 6 months ago
  • Great video! One suggestion is that you could expand on the Negative Sampling discussion by explaining how it chooses purposely unrelated (non-context) words to increase the model's accuracy in predicting related (context) words of the target word.

    @p-niddy · 11 months ago
    • It actually doesn't purposely select unrelated words. It just selects random words and hopes that the vocabulary is large enough that the probability that the words are unrelated will be relatively high.

      @statquest · 11 months ago
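A toy sketch of that reply (the vocabulary and counts here are made up): negative sampling just draws a handful of random words from the vocabulary and treats them as words we do not want to predict. Real implementations typically draw more frequent words more often, but the "random and probably unrelated" intuition is the same:

```python
import random

vocabulary = ["a", "abandon", "aardvark", "great", "is", "gymkata"]  # imagine ~3,000,000 entries
positive_word = "is"   # the word we DO want to predict in this training step
num_negative = 2       # word2vec typically uses somewhere between 2 and 20

negatives = random.sample([w for w in vocabulary if w != positive_word], num_negative)
print(negatives)       # with a huge vocabulary, these are almost certainly unrelated to "is"
```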
  • Thank you Statquest!!!!

    @saisrisai9649 · 4 months ago
    • Any time!

      @statquest · 4 months ago
  • Thank you so much Mr. Josh Starmer, you are the only one that makes ML concepts easy to understand. Can you please explain GloVe?

    @ramzirebai3661 · 1 year ago
    • I'll keep that in mind.

      @statquest · 1 year ago
  • Way better than my University slides. Thanks

    @wellwell8025 · 1 year ago
    • Thanks!

      @statquest · 1 year ago
  • Hey Josh, I'm a Brazilian student and I love watching your videos; they're such good and fun-to-watch explanations of every one of the concepts. I just wanted to say thank you, because in the last few months you've made me smile beautifully in the middle of studying. So, thank you!!! (sorry for the bad English hahaha)

    @user-qc5uk6ei2m · 7 months ago
    • Thank you very much!!! :)

      @statquest · 7 months ago
  • BAM! Thanks for your video, I finally realized what negative sampling means ~

    @bancolin1005 · 1 year ago
    • Happy to help!

      @statquest · 1 year ago
  • I love the way you teach!

    @m3ow21 · 11 months ago
    • Thanks!

      @statquest · 11 months ago
  • You are a beautiful human! Thank you so much for this video! I was finally able to understand this concept! Thanks so much again!!!!!!!!!!!!! :)

    @alfredoderodt6519 · 8 months ago
    • Glad it was helpful!

      @statquest · 8 months ago
  • Great video for explaining word2vec!

    @wenqiangli7544 · 1 year ago
    • Thanks!

      @statquest · 1 year ago
  • Bro, I have my master's degree in ML, but trust me, you explain it better than my teachers ❤❤❤ Big thanks

    @fouadboutaleb4157 · 8 months ago
    • Thank you very much! :)

      @statquest · 8 months ago
  • Wow, Awesome. Thank you so much!

    @pakaponwiwat2405 · 8 months ago
    • You're very welcome!

      @statquest · 8 months ago
  • Awesome as always. Thank you!!

    @yasminemohamed5157 · 1 year ago
    • Thank you! :)

      @statquest · 1 year ago
  • This is by far the best video on embeddings. A whole university course is broken down into 15 minutes.

    @manuelamankwatia6556 · 28 days ago
    • Thanks!

      @statquest · 28 days ago
  • the best channel ever.

    @AliShafiei-ui8tn · 9 months ago
    • Double bam! :)

      @statquest · 9 months ago
  • This is an amazing video. Thank you!

    @pedropaixaob · 4 months ago
    • Thanks!

      @statquest · 3 months ago
  • Great explanation. Please make a video on how we connect the output of an Embedding Layer to an LSTM/GRU for doing classification, say for Sentiment Analysis.

    @tupaiadhikari · 9 months ago
    • I show how to connect it to an LSTM for language translation here: kzhead.info/sun/f5yBe9udkXuFoJ8/bejne.html

      @statquest · 9 months ago
    • @@statquest Thank You Professor Josh !

      @tupaiadhikari · 9 months ago
  • Thank you so much for these videos. It really helps with the visuals because I am dyslexic… Quadruple BAM!!!! lol 😊

    @ColinTimmins · 7 months ago
    • Happy to help!

      @statquest · 7 months ago
  • Hi, I love your videos! They're really well explained. Could you please make a video on partial least squares (PLS)?

    @phobiatheory3791 · 1 year ago
    • I'll keep that in mind.

      @statquest · 1 year ago
  • It would also be nice to have a video about the difference between LM (linear regression models) and GLM (Generalized Linear Models). I know they're different but don't quite understand that when interpreting them or programming them in R. THAAANKS!

    @mariafernandaruizmorales2322 · 1 year ago
    • Linear models are just models based on linear regression and I describe them here in this playlist: kzhead.info/channel/PLblh5JKOoLUIzaEkCLIUxQFjPIlapw8nU.html Generalized Linear Models is more "generalized" and includes Logistic Regression kzhead.info/channel/PLblh5JKOoLUKxzEP5HA2d-Li7IJkHfXSe.html and a few other methods that I don't talk about like Poisson Regression.

      @statquest · 1 year ago
    • @@statquest Thanks Josh!! I'll watch them all 🤗

      @mariafernandaruizmorales2322 · 1 year ago
  • Absolutely mind-blowing and amazing presentation! For Word2Vec's strategy for increasing context, does it employ the 2 strategies in "addition" to the 1-Output-For-1-Input basic method we talked about in the whole video, or are they replacements? Basically, are we still training the model on predicting "is" for "Gymkata" in the same neural network along with predicting "is" for a combination of "Gymkata" and "great"?

    @LakshyaGupta-ge3wj · 5 months ago
    • Word2Vec uses one of the two strategies presented at the end of the video.

      @statquest · 5 months ago
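For what it's worth, here is a toy illustration of how the two strategies turn one sentence into training data (the sentence and window size are made up); each strategy is used on its own, not stacked on top of the simple 1-input, 1-output setup:

```python
# One center word, "is", with a window of 1 word on each side.
sentence = ["gymkata", "is", "great"]
center, context = "is", ["gymkata", "great"]

# "Continuous bag-of-words": the surrounding words together predict the center word.
cbow_pair = (context, center)                      # ["gymkata", "great"] -> "is"

# "Skip-gram": the center word predicts each surrounding word.
skip_gram_pairs = [(center, w) for w in context]   # "is" -> "gymkata", "is" -> "great"

print(cbow_pair, skip_gram_pairs)
```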
  • For those of you who find it hard to understand this video, my recommendation is to watch it at a slower pace and make notes of the same. It will really make things much more clear.

    @avishkaravishkar1451 · 5 months ago
    • 0.5 speed bam!!! :)

      @statquest · 5 months ago
  • Great video as always!

    @study-tp4ts · 1 year ago
    • Thanks again!

      @statquest · 1 year ago
  • Extremely didactic! Congratulations.

    @denismarcio · 1 month ago
    • Thank you very much! :)

      @statquest · 1 month ago
  • thanks for your tutorial!!!

    @minhmark.01 · 1 month ago
    • You're welcome!

      @statquest · 1 month ago
  • Love this channel.

    @auslei · 11 months ago
    • Glad to hear it!

      @statquest · 11 months ago
  • Keep going statquest!!

    @MaskedEngineerYH · 1 year ago
    • That's the plan!

      @statquest · 1 year ago
  • Hi Josh, thank you for your excellent work! Just discovered your videos and consuming like a pack of crisps. I was wondering about the desired output when using the skip-gram model. When we have a word as input, the desired output is to have all the words found within the window size on any sentence of the corpus activate to 1 at the same time on the output layer, right? It is not said explicitly but I guess it is the only way it can be.

    @guillaumebarreau · 7 months ago
    • The outputs from a softmax function are all between 0 and 1 and add up to 1. In other words, the softmax function does not allow more than one output to have a value of 1. See 12:16 for an example of outputs for the skip-gram method.

      @statquest · 7 months ago
    • @@statquest, thanks for your prompt reply! You are right, I didn't look carefully enough. I guess I got confused because after watching the video, I read other sources which seem to consider every skip-gram pair as a separate training example, which confused me.

      @guillaumebarreau · 7 months ago
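A quick numeric check of that reply (toy numbers): softmax outputs are each between 0 and 1 and they add up to 1, so no two outputs can both be exactly 1 at the same time:

```python
import math

def softmax(values):
    exps = [math.exp(v) for v in values]
    return [e / sum(exps) for e in exps]

outputs = softmax([2.0, 1.0, 0.1])
print([round(o, 2) for o in outputs], round(sum(outputs), 2))  # [0.66, 0.24, 0.1] 1.0
```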
  • Great vid. So you're going to do a vid on transformer architectures? That would be incredible if so. Btw, bought your book. Finished it in like 2 weeks. Great work on it!

    @NewMateo · 1 year ago
    • Thank you! My video on Encoder-Decoders will come out soon, then Attention, then Transformers.

      @statquest · 1 year ago
    • @@statquest When the universe needs you most, you provide

      @thomasstern6814 · 1 year ago
  • Thank you so much for the video! I have one question: at 15:09, why do we only need to optimize 300 weights? For one word with 100 * 2 weights? Not sure how to understand the '2' either.

    @neemo8089 · 8 months ago
    • At 15:09 there are 100 weights going from the word "aardvark" to the 100 activation functions in the hidden layer. There are then 100 weights going from the activation functions to the sum for the word "A" and 100 weights going from the activation functions to the sum for the word "abandon". Thus, 100 + 100 + 100 = 300.

      @statquest · 8 months ago
    • Thank you! @@statquest

      @neemo8089 · 8 months ago
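Spelling out that arithmetic (same toy setup as the reply above, with "A" and "abandon" as the two selected output words):

```python
weights_into_hidden = 100        # "aardvark" -> the 100 activation functions
weights_to_output_a = 100        # the 100 activation functions -> the output for "A"
weights_to_output_abandon = 100  # the 100 activation functions -> the output for "abandon"
print(weights_into_hidden + weights_to_output_a + weights_to_output_abandon)  # 300
```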
  • you da bessssst, saved me alota time and confusion :..)

    @ishaqpaktinyar7766 · 3 months ago
    • Thanks!

      @statquest · 3 months ago
  • Awesome explanation..

    @janapalaswathi4262 · 3 months ago
    • Thanks!

      @statquest · 3 months ago
  • Just incredible!

    @aniketsakpal4969 · 11 months ago
    • Thank you!

      @statquest · 11 months ago
  • I struggled with this video series, and it's only been with 3Blue1Brown's incredibly comprehensive and clear videos on deep learning that I've been able to understand gradient descent, backpropagation and basic feed-forward networks. Just different learning and training styles, I guess.

    @benhargreaves5556 · 10 months ago
    • That makes sense to me. I made these videos because 3blue1brown's videos didn't help me understand any of these topics. So if 3blue1brown's works for you, bam!

      @statquest · 10 months ago
  • Please make a video about the metrics for prediction performance: RMSE, MAE and R-SQUARED. 🙏🏼🙏🏼🙏🏼 YOU'RE THE BEST!

    @mariafernandaruizmorales2322 · 1 year ago
    • The first video I ever made is on R-squared: kzhead.info/sun/ZaWKe9GvaGaje4U/bejne.html NOTE: Back then I didn't know about machine learning, so I only talk about R-squared in the context of fitting a straight line to data. In that context, R-squared can't be negative. However, with other machine learning algorithms, it is possible.

      @statquest · 1 year ago
  • That was quite informative

    @pushkar260 · 1 year ago
    • BAM! Thank you so much for supporting StatQuest!!! :)

      @statquest · 1 year ago
  • Machine learning explained like Sesame Street is exactly what I need right now.

    @lexxynubbers · 11 months ago
    • bam!

      @statquest · 11 months ago
  • Thank you Josh for this great video. I have a quick question about the Negative Sampling: If we only want to predict A, why do we need to keep the weights for "abandon" instead of just ignoring all the weights except for "A"?

    @user-bd2fm9lk5b · 5 months ago
    • If we only focused on the weights for "A" and nothing else, then training would cause all of the weights to make every output = 1. In contrast, by adding some outputs that we want to be 0, training is forced to make sure that not every single output gets a 1.

      @statquest · 5 months ago
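A rough sketch of that idea with made-up numbers: in a negative-sampling step, the loss pushes the output for the word we want ("A") toward 1 and the outputs for the randomly sampled words ("abandon" here) toward 0, so simply driving every output to 1 no longer minimizes the loss:

```python
import math

def log_loss(target, predicted):
    # cross entropy for a single 0/1 target
    return -(target * math.log(predicted) + (1 - target) * math.log(1 - predicted))

predicted = {"A": 0.9, "abandon": 0.2}  # made-up network outputs
targets   = {"A": 1.0, "abandon": 0.0}  # 1 for the word we want, 0 for the sampled word

print(sum(log_loss(targets[w], predicted[w]) for w in predicted))  # shrinks as "A"->1, "abandon"->0
```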
  • great stuff as usual ..BAM * 600 million

    @c.nbhaskar4718 · 1 year ago
    • Thank you so much! :)

      @statquest · 1 year ago
  • Can you do GloVe? I really enjoyed Word2Vec; it would be great to see how GloVe works... how the factorization-based method works. Thank you for this amazing content!

    @user-ck3qk5ce9k · 3 months ago
    • I'll keep that in mind.

      @statquest · 3 months ago
  • Watched this video multiple times but I'm unable to understand a thing. I'm sure I am dumb and Josh is great!

    @MrAhsan99 · 5 months ago
    • Maybe you should start with the basics for neural networks: kzhead.info/sun/dtWIls1saH6cd68/bejne.html

      @statquest · 5 months ago
  • I appreciate the knowledge you've just shared. It explains many things to me about neural networks. I have a question though: if you are randomly assigning a value to a word, why not try something easier? For example, in Hebrew, each of the letters of the Alef-Bet is assigned a value. These values are added together to form the sum of a word. It is the context of the word in a sentence that forms the block. You know? Take a look at Gematria; Hebrew has been doing this for thousands of years. Just a thought.

    @kimsobota1324 · 5 months ago
    • Would that method result in words used in similar contexts to have similar numbers? Does it apply to other languages? Other symbols? And can we end up with multiple numbers per symbol to reflect how it can be used or modified in different contexts?

      @statquest · 5 months ago
    • I wish I could answer that question better than to tell you context is EVERYTHING in Hebrew, a language that has but doesn't use vowels, since all who use the language understand the consonant-based word structures. Not only that, but in the late 1890s Rabbis from Ukraine and Azerbaijan developed a mathematical code that was used to predict word structures from the Torah that were accurate to a value of 0.001%. Others have tried to apply it to other books like Alice in Wonderland and could not duplicate the result. You can find more information on the subject through a book called The Bible Code, which gives much more information as well as the formulae the Jewish mathematicians created. While it is a poor citation, I have included this Wikipedia link: en.wikipedia.org/wiki/Bible_code#:~:text=The%20Bible%20code%20(Hebrew%3A%20%D7%94%D7%A6%D7%95%D7%A4%D7%9F,has%20predicted%20significant%20historical%20events. The book is available on Amazon if you find it piques your interest. Please let me know if this helps. @@statquest

      @kimsobota1324 · 5 months ago
    • @statquest, I hadn't heard back from you about the Wiki?

      @kimsobota1324 · 5 months ago
  • You ARE the Batman and Superman of machine learning!

    @exxzxxe · 2 months ago
    • :)

      @statquest · 2 months ago
  • I have a question: is the number of outputs the softmax generates at the end of word2vec varying between 2 and 20? Is that why the number of params is calculated as 3M × 100 × 2? If it were to predict probs for all 3M words, would it have been 3M × 100 × 3M?

    @user-qo1qe9wq4g · 8 months ago
    • We get 3M * 100 * 2 because 1) there are 3 million inputs 2) 100 activation functions 3) and 3 million outputs = 3M * 100 * 2

      @statquest · 8 months ago
    • @@statquest Thank you very much, Josh. My bad, instead of adding the number of parameters on either side of the hidden layer, I was taking their product for some reason. Thank you for the awesome videos.

      @user-qo1qe9wq4g · 8 months ago
  • My favourite topic, it's magic. Bam!!

    @meguellatiyounes8659 · 1 year ago
    • :)

      @statquest · 1 year ago
  • Thanks!

    @aoliveiragomes · 9 months ago
    • BAM!!! Thank you so much for supporting StatQuest!!! :)

      @statquest · 9 months ago
  • Thank you very much for your excellent tutorials, Josh! Here I have a question: at around 13:30 of this video tutorial, you mentioned multiplying by 2. I am not sure why 2. I mean, if there are more than 2 outputs, will we multiply by the number of output nodes instead of 2? Thank you for your clarification in advance.

    @familywu3869 · 11 months ago
    • If we have 3,000,000 words and phrases as inputs, and each input is connected to 100 activation functions, then we have 300,000,000 weights going from the inputs to the activation function. Then from those 100 activation function, we have 3,000,000 outputs (one per word or phrase), each with a weight. So we have 300,000,000 weights on the input side, and 300,000,000 weights on the output side, or a total of 600,000,000 weights. However, since we always have the same number of weights on the input and output sides, we only need to calculate the number of weights on one side and then just multiply that number by 2.

      @statquest · 11 months ago
    • @@statquest Thanks for explaining! I also had the same question.

      @surojit9625 · 9 months ago
    • Ohhhhhhhhh! I missed that the first time around! BTW: (Stat)Squatch and Norm are right: StatQuest is awesome!!

      @jwilliams8210 · 5 months ago
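The same count as a few lines of arithmetic (the numbers are the ones quoted in the reply above):

```python
vocab_size = 3_000_000                    # words and phrases in the vocabulary
hidden_units = 100                        # activation functions
input_side = vocab_size * hidden_units    # weights from the inputs to the activation functions
output_side = hidden_units * vocab_size   # weights from the activation functions to the outputs
print(input_side + output_side)           # 600,000,000 = vocab_size * 100 * 2
```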
  • Thank you so much for this video. Could you do something like this for audio embeddings as well? Or how could we merge (do fusion of) audio and text embeddings? I really appreciate it.

    @S.A_1992 · 3 months ago
    • Unfortunately, I'm not familiar with audio embedding.

      @statquest · 3 months ago
  • How are models that map text (i.e. not just a word, but say up to 256 words) to vector trained? (such as the popular `sentence-transformers/all-MiniLM-L6-v2` model at HuggingFace)... Are similar or different principles used?

    @meirgoldenberg5638 · 9 months ago
    • "sentence-transformers" is a type of transformer. Word embedding is just small part of how transformers work. To learn more about basic transformers, see: kzhead.info/sun/rdyKqbiDb6OrrJE/bejne.html

      @statquest · 9 months ago
    • @@statquest What other videos should I watch before I can fully grasp the video on transformers?

      @meirgoldenberg5638 · 9 months ago
    • @@meirgoldenberg5638 I would recommend just watching the Transformer video first. If you are confused, there are a lot of recommended videos in the video's description. Alternatively, you can just watch the whole neural network playlist: kzhead.info/sun/dtWIls1saH6cd68/bejne.html

      @statquest · 9 months ago
  • Awesome video! This time, I feel I missed one step, though. Namely, how do you train this network? I mean, I get that we want the network to be such that similar words have similar embeddings. But what is the 'Actual' we use in our loss function to measure the difference from and use backpropagation with?

    @BalintHorvath-mz7rr · 2 months ago
    • Yes

      @statquest · 2 months ago
    • @@statquest haha I feel like I didn't ask the question well :D How would the network know, without human input, that Troll 2 and Gymkata are very similar and so it should optimize itself so that ultimately they have similar embeddings? (What "Actual" value do we use in the loss function to calculate the residual?)

      @balintnk · 2 months ago
    • @@balintnk We just use the context that the words are used in. Normal backpropagation plus the cross entropy loss function where we use neighboring words to predict "troll 2" and "gymkata" is all you need to use to get similar embedding values for those. That's what I used to create this video.

      @statquest · 2 months ago
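A toy sketch of that reply (the sentences are made up): the "actual" value in the loss is just the neighboring word observed in the training text, and cross entropy scores how well the network predicted it; because "troll2" and "gymkata" show up in the same contexts, backpropagation nudges their embeddings toward similar values:

```python
sentences = [["troll2", "is", "great"], ["gymkata", "is", "great"]]

training_pairs = []
for s in sentences:
    for i, word in enumerate(s[:-1]):
        training_pairs.append((word, s[i + 1]))  # (input word, ACTUAL next word from the text)

print(training_pairs)
# [('troll2', 'is'), ('is', 'great'), ('gymkata', 'is'), ('is', 'great')]
```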
  • Hello. Thank you very much, great, great video. I have a question: in the negative sampling procedure, we never use A = 1 as input at any step in the training process. I am wondering about when the embeddings for A get trained. I can see how the weights for A to the right of the activation functions are trained, but not the weights on the left. I can see that, because we use a lot of training steps, at some moment A will be a word we don't want to predict at the input; therefore the embeddings for A will change, however, the prediction won't be A for those steps.

    @smooth7041 · 1 year ago
    • Why would we never use "A = 1" in training?

      @statquest · 1 year ago
  • So goooood! Thanks a lot!

    @CaHeoMapMap · 1 year ago
    • Glad you like it!

      @statquest · 1 year ago
  • Hi Josh, firstly thank you for all your videos. I had one doubt: in skip-gram, what is the loss function on which the network is optimized? In CBOW I can see that cross entropy is enough.

    @jayachandrarameshkalakutag7329 · 5 months ago
    • I believe it's cross entropy in both.

      @statquest · 5 months ago
  • Amazing lecture, congrats. The audio was also made with NLP (Natural Language Processing), right?

    @gabrielrochasantana · 1 month ago
    • The translated overdubs were.

      @statquest · 1 month ago
  • Fantastic video! How do you apply the "pencil-written" box style in PowerPoint?

    @robott12 · 1 year ago
    • I use Keynote, and it's one of the default line types.

      @statquest · 1 year ago
    • @@statquest Thanks!

      @robott12 · 1 year ago
  • Thanks ❤

    @hasansoufan · 11 months ago
    • :)

      @statquest · 10 months ago
  • Finally, someone explains to me how the conversion itself is done. Everyone tells me "use a network that does it for you automatically," but I want to know what that network is doing internally... finally!

    @anonymushadow282 · 9 months ago
    • Thank you very much!!! BAM!

      @statquest · 9 months ago
  • Great video! I was just wondering why the outputs of the softmax activation at 10:10 are just 1s and 0s. Wouldn't that only be the case if we applied ArgMax here, not SoftMax?

    @MadeyeMoody492 · 1 year ago
    • In this example the data set is very small and, for example, the word "is" is always followed by "great", every single time. In contrast, if we had a much larger dataset, then the word "is" would be followed by a bunch of words (like "great", or "awesome" or "horrible", etc.) and not followed by a bunch of other words (like "ate", or "stand", etc.). In that case, the softmax would tell us which words had the highest probability of following "is", and we wouldn't just get 1.0 for a single word that could follow the word "is".

      @statquest · 1 year ago
    • @@statquest Ohh ok, that clears it up. Thanks!!

      @MadeyeMoody492 · 1 year ago
  • Hello Josh, thanks for your video. May I know if we could use a 3-neuron network to predict the next words?

    @lancezhang892 · 6 months ago
    • Sure

      @statquest · 5 months ago
  • funny and very nicely explained.

    @user-pd1gy8xh4y · 8 months ago
    • Thanks! 😃

      @statquest · 8 months ago
  • Does this mean the neural net we use to get the embeddings can only have a single layer? I mean:
    1. Say there are 100 words total in the corpus.
    2. The first hidden layer (where, say, I set the embedding size to 256).
    3. Then another layer to predict the next word, which will be 100 words again.
    Here, to plot the graph, or to use cosine similarity to see how close two words are, I will simply have to use the 256 weights of both words from the first hidden layer, right? So does that mean we can only have a single layer to optimise? Can't we add 2, 3, 50 layers? And if we can, then the weights of which layer should we take as the embeddings to compare the words? Will you please guide me? Thanks! You are a gem as always 🙌

    @enchanted_swiftie · 8 months ago
    • There are no rules in neural networks, just guidelines. Most of the advancements in the field have come from people doing things differently and new. So feel free to try "multilayer word embedding" if you would like. See what happens! You might invent the next transformer.

      @statquest · 8 months ago
    • @@statquest Haha, yes, but... then the weights of which layer should be used? 🤔😅 Yeah, I can use any since there are no strict rules, maybe take the mean or something... but if there are any embedding models... may I know what the standard is? Thanks 🙏👍

      @enchanted_swiftie · 8 months ago
    • @@enchanted_swiftie The standard is to use a single set of weights that go to activation functions.

      @statquest · 8 months ago
    • @@statquest Oops, okay... 😅

      @enchanted_swiftie · 8 months ago
  • Hi Josh, great video. I have one question: how are the 2-20 words selected for being dropped while doing negative sampling?

    @Rex389 · 1 year ago
    • This is answered at 13:44. We can pick a random set because the assumption is that when the vocabulary is large, the chances of selecting a similar word are small. And I believe you select a different subset each iteration, so even if you do pick a similar word, the long term effects will not be huge.

      @statquest · 1 year ago
    • @@statquest Got it. Thanks

      @Rex389 · 1 year ago