Autoencoder Explained
How does an autoencoder work? Autoencoders are a type of neural network that reconstructs the input data it's given. But we don't care about the output; we care about the hidden representation it has learned. That representation is a lower-dimensional compression of the input that preserves its features. We can use this learned representation for tasks like image colorization, dialogue generation, and anomaly detection.
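The description above can be sketched as a toy autoencoder in plain NumPy (an illustrative example, not the code from the video): encode the input down to a lower-dimensional bottleneck, decode it back, and train both halves to minimize reconstruction error. The bottleneck activations are the learned representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional input with redundant structure
# (the inputs really live on a 2-D subspace, so 2 bottleneck units suffice).
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 8))

# Encoder/decoder weights: 8 -> 2 (bottleneck) -> 8, small random init.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

def forward(X):
    code = X @ W_enc       # hidden representation (the part we care about)
    recon = code @ W_dec   # reconstruction of the input
    return code, recon

lr = 0.01
for step in range(500):
    code, recon = forward(X)
    err = recon - X                          # reconstruction error
    # Gradients of the mean squared reconstruction loss.
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

code, recon = forward(X)
print("final reconstruction MSE:", np.mean((recon - X) ** 2))
```

The reconstruction error drops close to zero because the bottleneck width matches the data's true dimensionality; shrink it below 2 and the error floor rises, which is exactly the compression trade-off the video describes.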
Code for this video (with Coding Challenge):
github.com/llSourcell/autoenc...
Please Subscribe! And like. And comment. That's what keeps me going.
Want more education? Connect with me here:
Twitter: / sirajraval
Facebook: / sirajology
instagram: / sirajraval
More learning resources:
ufldl.stanford.edu/tutorial/un...
ai.stanford.edu/~quocle/tutori...
lazyprogrammer.me/a-tutorial-...
blog.keras.io/building-autoen...
jaan.io/what-is-variational-a...
hackernoon.com/autoencoders-d...
Join us in the Wizards Slack channel:
wizards.herokuapp.com/
And please support me on Patreon:
www.patreon.com/user?u=3191693
Signup for my newsletter for exciting updates in the field of AI:
goo.gl/FZzJ5w
Hit the Join button above to sign up to become a member of my channel for access to exclusive content!
Join my AI community: chatgptschool.io/
Sign up for my AI sports betting bot, WagerGPT! (500 spots available):
www.wagergpt.co
A wonderful high-level explanation of AEs. I'm recently starting research in this field and this has been really helpful. Thanks!
6:43 had to jerk the earbuds out from that sound. Thanks for this content. It was a good introduction, I'll do more research on this
I tried to understand autoencoders, so I googled and started reading various articles. After an hour I just gave up and checked if you have a video on the topic; another 5 minutes later I finally got it. Keep up the great work!
Siraj has learnt a lot about teaching and is now capable of expressing complex ideas in a more didactic way. Kudos to him.
Finally I understand autoencoders, thank you!! So many explanations fail to mention that the hidden layer in the middle represents compression! They talk about "learning the identity function" which only confused me further. But your explanation was clear and on point. I'm happy to be your patron.
Wow, this is incredible stuff! I’ve been experimenting with using neural networks for data compression recently, but had no idea it was the same thing as an autoencoder. I also didn’t know that there were any reliable generative models other than GANS, but variational autoencoders seem to be a lot simpler and easier to implement than GANS, and really useful. Great video!
I've been watching your videos for around a year now, and NOW I can proudly say I understand 99.99% of the things you talk about.
so glad to hear
Not sure how you managed that when Siraj himself barely understands 30% of what he talks about.
Hi Siraj, thanks a lot for that anomaly detection case - I'll soon be implementing such a model, never thought about that possibility :)
This covers way more than AEs. This is a very brilliant high level generalized view of what Neural Networks do and what they actually represent,
Wow, this video was very informative. I got so much information and so many ideas for future projects.
Visually appealing and easy to follow! Thanks Siraj! :)
Great video Siraj, great pace and I like the content to meme ratio here. In these shorter videos you don't stutter or repeat yourself at all, I wanted to know how you do that? Do you type a script and then practice it a few times?
@siraj Thank you sooo much for this video i have this seminar coming up where i have to talk about autoencoders so this was very helpful
Your videos are amazing. Thank you bro!
Love your videos. I'm just getting into machine learning and the math behind it all is really daunting. Do you have any tips for learning? Should I just keep practicing making my own models from scratch? I don't feel like I learn anything using pre-built libraries. Keep up the great content!
I am happy every time I get alerted that you posted a video, because I know I will learn something I thought was so hard to learn!
love your ability to explain things without jargon
Thanks for the hard work Siraj.
This was an awesome video Siraj!!!!!!!
Hi Siraj, I wonder how the "stylize" functionality in google photos work. It randomly chooses images from library and produces more saturated images. Does it also use autoencoders to recolor the images?
Great video. Appreciated the 3 takeaway points at the end. Question: How could I use autoencoders to perform dimensionality reduction on categorical data?
Great video as always, Siraj!
Cool , seems to have a lot of potential. Thanks for the video , great !
Great job Siraj, this helped a lot in my understanding! One small critique: the floating text around 0:38 made me feel a little nauseous! Please consider changing in the future. Many thanks!
I just wanted to map the idea in the NN space, I got it thanks, great video!
Great video siraj, I don't understand what you say, one day i will, but i just love it. God bless.
Thanks, clear enough to understand Autoencoders
Hi, thanks for the video. How can I see the kernels used for the convolution operations in Keras? I use Keras, and I have no idea what kind of kernels are automatically used during the training of CNN models. Thanks!
Denoising Autoencoder is really useful!
What kind of network do I need to input an image and train it with a desired image as output, so that if I later input a similar image it gives me a similar output?
Thanks, my brain hurts, but I have a tenuous grasp of the autoencoder now. Ironic that this is a dense presentation discussing dense representations.
Nice vid. But how to validate the Autoencoder latent space is a good representation?
7:47 do you have any refs for chatbots using auto encoders ?
Much appreciated Siraj, Can we do image clustering using AutoEncoder if yes then what type of Autoencoder would do that?
Thanks Siraj, as usual, excellence delivered in a simple and intuitive manner on a highly technical subject! You should consider a career in teaching!
Excellent great task greetings
Could you make a video about superintelligence? As always keep up the good work!
thank you for autoencoder video
What's your video set-up pls (camera & software) ?
Must go deeper
Please make a video and explaining faster rcnn and especially focus on how the multiple region of interest are handled.
Hello Siraj, are vanilla autoencoders the same as simple autoencoders? I don't see many research papers on this. Can these autoencoders be implemented using spectral data, since most of the examples are on the MNIST dataset? I need to visualize PCA vs t-SNE vs autoencoders.
Have you heard of the new autoencoder that learned to represent transformations ala capsule networks , but much simpler?
Great video!
Hello~ Great video! Could you please make a video on activation functions- general idea, basic math, visual example? Thanks :)
kzhead.info/sun/YJusk7WogYZtqKc/bejne.html are you talking about this? haha
Whoops thanks :)
Awesome! Thanks Siraj!
Really good video!
You named the output "Learned representation", but isn't it actually just the approximated reproduction of the image? The actual learned representation would be in the bottleneck layer, right before the decoder layers start reproducing from it. I know the video is old, but I hope someone can correct me if I misunderstood something here.
Hello sir, please make a video on machine learning in clinical/biomedical applications, with practical examples!!! I hope you will make it.
nice video editing
Can confirm about the GM thing. Although, I'd add a qualifier: any car made by them after 1970,
I feel like your videos have got less hand on ,I want my old Sirajology's scarybugsmac :'(
I'd like to request a video on Dynamic Time Warping algo :) merc
Isn't autoencoder a fancy name for the encoder-decoder style architecture (used popularly for image segmentation)? Because in both the cases you don't care about the input and the output but the latent space representation after the encoding. Is there something that I am missing in the autoencoders that makes them different from naive encoder-decoder networks?
I wish someone, like you, would continue the Hacker's guide to NN….
You hit a bit more than AEs and that may confuse some folks, but after a while they will realize that it was for the better as GANs and such follow these techniques.
Very well explained, and overall a high-tier video. Cleared up some of my confusions, at least.
I'm guessing it still loses some data in the process; it uses the autoencoder to reconstruct itself in a condensed form, rinse and repeat.
I feel like I'm missing the abilities to follow you and try those challenges. Besides knowing how to code, what other tools/knowledge do I need?
I would recommend starting with Keras, it is pretty simple to use and you will be able to quickly build neural networks in a few lines
three points to remember in your biological neural networks :)
I was like "Daaaamn that's a nice house!". And then I realized haha 1:30
Can you explain DeepFakes?
Y2Kvids yes contact Lucas arts , lol all this is sooo old
@@TheTravisweb DeepFake uses Generative Adversarial Models, not AE I think.
How can an ANN with activation functions like Sigmoid, ReLU, and tanh approximate functions with high local variations in value? Take the function f(x) = exp(x) and an ANN with any of the mentioned activation functions. How could it work?
Keep adding functions, with bias.
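To make the reply above concrete: a ReLU network computes a piecewise-linear function, so with enough hidden units (kinks) and the right biases it can track even a fast-growing curve like exp(x) on a bounded interval. Here is a NumPy sketch (an illustrative construction, not a claim about any particular trained model) that builds such a one-hidden-layer approximation explicitly instead of training it:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Approximate f(x) = exp(x) on [-2, 2] with a single hidden ReLU layer.
# Each hidden unit is relu(x - knot); the output layer mixes them so the
# network is piecewise linear with a kink at every knot.
f = np.exp
knots = np.linspace(-2.0, 2.0, 41)           # 40 linear segments
slopes = np.diff(f(knots)) / np.diff(knots)  # slope of each segment
weights = np.diff(slopes, prepend=0.0)       # slope *changes* -> ReLU output weights
bias = f(knots[0])                           # value at the left endpoint

def net(x):
    # One hidden unit per segment start; output is their weighted sum.
    hidden = relu(x[:, None] - knots[:-1][None, :])
    return bias + hidden @ weights

x = np.linspace(-2.0, 2.0, 1001)
max_err = np.max(np.abs(net(x) - f(x)))
print("max |error| on [-2, 2]:", max_err)
```

The network interpolates exp exactly at every knot, and the error between knots shrinks quadratically as you add units. A trained network finds similar kinks by gradient descent; the catch is that this only works on a bounded interval, since any piecewise-linear function is eventually outrun by exp(x).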
Excellent!! This video explains the essence of AI from the deepest depth people usually don't know, with just a few words in such a short time. So it's very insightful !!
Why SoftMax is better than svm with autoencoder if you have paper explain that
Siraj tell me why when you said "Ok Google do you love me" it activated Google on my phone 🤣🤣🤣🤣🤣
hey Siraj! You got any ideas to do a thesis on? Any subjects or problems I could research? Broad question I know but anything could be valuable to me, thanks!
Bryan Cardenas machine translation using autoencoders
With low resources
variational autoencoders for drug discovery
Hello sir, can I contact you?
You're the best! Thank you very very much
that moment when ok google from your phone triggers google in your phone
Explain the weights of the decoder.
There is a compilation error with the code attached. Can someone help me with this?
Using TensorFlow backend.
Traceback (most recent call last):
  File "variational_autoencoder.py", line 65
    vae.compile(optimizer='rmsprop')
TypeError: compile() takes at least 3 arguments (2 given)
Hi guys, can somebody explain to me the meme about using logistic regression in production at 2:58 ? thanks
6th :)) I am curious about micro architecture + machine learning would produce lol
For the coding challenge: github.com/ParmuSingh/autoencoder-mnist. I'm sorry it's TensorFlow and not Keras :)
2:46 ... Output where to send a car, like _the dumpster_, if it was made by GM. *LMAO*
I was gonna comment that. haha
That just earned him a stiff downvote
lol, not just gm every american car is company is trash lmao. ford, gm, tesla all shit lol.
This guy is fraud www.reddit.com/r/MachineLearning/comments/d7ad2y/d_siraj_raval_potentially_exploiting_students/
cool
Hey +UCWN3xxRkmTPmbKwht9FuE5A/Siraj, I have a use case where I would like to use an autoencoder to detect anomalies. Now, as per my understanding, to make sure the AE detects anomalies, we must feed it non-anomalous data during training. That part sounds circular: "To do anomaly detection, you first have to know which data is anomalous." :) Any helpful comment is appreciated. Big fan!
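For what it's worth, the reading in the question above is essentially right and not circular: you train the autoencoder on data you *assume* is mostly normal, then flag inputs it reconstructs poorly; no labeled anomalies are needed. A hypothetical NumPy sketch, using a linear autoencoder computed in closed form via SVD as a stand-in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" training data lives near a 2-D subspace of 10-D space.
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# A linear autoencoder's optimum spans the top principal components,
# so we can get it in closed form with an SVD (here bottleneck k = 2).
_, _, Vt = np.linalg.svd(train - train.mean(axis=0), full_matrices=False)
V = Vt[:2].T                      # 10 -> 2 encoder; V.T decodes back

def recon_error(X):
    centered = X - train.mean(axis=0)
    recon = centered @ V @ V.T    # encode, then decode
    return np.sum((recon - centered) ** 2, axis=1)

# Threshold chosen from the training (normal) errors only.
errs = recon_error(train)
threshold = errs.mean() + 3 * errs.std()

normal_test = rng.normal(size=(5, 2)) @ basis + 0.05 * rng.normal(size=(5, 10))
anomalies = rng.normal(size=(5, 10)) * 3.0   # points far off the subspace

print("normal flagged: ", recon_error(normal_test) > threshold)
print("anomaly flagged:", recon_error(anomalies) > threshold)
```

Points that resemble the training distribution reconstruct well and stay under the threshold; off-subspace points can't be squeezed through the bottleneck and get flagged. In practice a deep nonlinear autoencoder replaces the SVD, but the threshold-on-reconstruction-error logic is the same.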
Hi. Why is the audio so unclear? There's some disturbing background noise while listening to this good tutorial.
I made Image autoencoder a week ago that can create new ones at decent quality with small numbers of examples. Working on GAN with same technique right now. github.com/Mylittlerapture/Continuous-Image-Autoencoder
I feel like im watching this on a boat on the high seas
At 5:13, Me: Wo wo wo slow down! :)))))))))))
everyone got some **** for Machine learning today
3:30 you forgot to add bias!!
4:34 *higher
chicken yeah, he wanted to say lower dimension
Emotional analysis of textile images paper hahahahhaha
where to learn python
thanks
Go to Udacity, Sentdex on KZhead or even My Channel for being an expert in Python
Send GM cars to the dumpster? Do you have precognition?
Hey Siraj, I loved when the videos were fast paced! :)
vikram iyer aren't all?
T H A N K S !
Simple autoencoder for colored images : github.com/karanrn/Deep-learning/tree/master/Autoencoders
great work karan!
nice hair style
atleast they underlined the problem ! LOLL
Good videos, bad memes
That einstein colored pic scared me
lol nice meems .
a simple beta to mix and generate music by working on the latent space generated by the encoder: github.com/damgambit/seq2seq4music_generation
Hey Siraj, I want to build an application to lock Pendrive. How to build one please could you give me some direction that will be very helpful. Thanks.
sachan ankit dude, just download one
Bro i need to make some changes according to my needs. That is why i want to know how to build one.
5th
For Coding Challenge : github.com/neerajkrbansal1996/Denoising-autoencoder
great work neeraj!