Building a neural network FROM SCRATCH (no Tensorflow/Pytorch, just numpy & math)

Nov 23, 2020
1,854,758 views

Kaggle notebook with all the code: www.kaggle.com/wwsalmon/simpl...
Blog article with more/clearer math explanation: www.samsonzhang.com/2020/11/2...

Comments
  • Making a neural network from scratch is easy, what I really want to see is how to make a neural network ON scratch.

    @victorafonso4534 1 year ago
    • Make the scratch cat sentient challenge (gone wrong) (humanity destroyed)

      @d3vitron779 1 year ago
    • Just create a python interpreter in Scratch, easy

      @theRPGmaster 1 year ago
    • Ok

      @Despatra 1 year ago
    • Lmao, underestimated comment, but a perfect one

      @v037_ 1 year ago
    • lol

      @BurNJoE 1 year ago
  • i like how numpy has become so ingrained in python that it's basically considered vanilla python at this point

    @you_just 1 year ago
    • interestingly much of that functionality is built into other languages used by the ml community such as R, matlab and julia.

      @nathanwycoff4627 1 year ago
    • @@nathanwycoff4627 matrices and linear algebra are really useful for math and engineering, less so for general programming. Different languages focusing on different usability concerns is quite interesting.

      @mattrochford6783 1 year ago
    • @@mattrochford6783 stop coping julia is just a better language

      @machineman8920 1 year ago
    • @@machineman8920 ???

      @HilbertXVI 1 year ago
    • I don't like it. I wish people stopped being overly-lazy with Numpy and just wrote their own libraries so they'd understand what they are actually doing. No, scratch that, if they can't accomplish the same thing using only Assembly, they're a total noob, should put down their keyboard, and get an MBA instead...

      @thebluriam 1 year ago
  • If you make more deep learning videos with numpy and math (without any framework), just like this video, it would be great for beginners to learn the basics!!! Do you plan to continue??

    @alperengul8654 3 years ago
    • Hello, Eren!

      @cemsalta 3 years ago
    • upp!

      @kanui3618 3 years ago
    • Hey guys, a reply would be highly appreciated. I want to plot the cost vs the number of iterations but I am not able to figure out which parameter to plot. I am a beginner and I would really appreciate the help. Thank you

      @anishojha1020 3 years ago
    • Here's a course you'll need. Face Mask Detection Using Deep Learning and Neural Networks. It's paid but it's worth it. khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv

      @KHM95 2 years ago
    • @@anishojha1020 you're probably not a beginner anymore, so I hope you found your answer! Unfortunately, the YouTube comment section isn't a forum and a lot of people disable notifications (including me), so an actual forum, although people are sometimes really rude and condescending, is your best bet for future questions.

      @whannabi 2 years ago
  • I watched this video when I was studying in grade 11. I had no clue what he was talking about, but I tried to understand as much as possible. Now that I watch it again as a university student, it is so satisfying to understand everything.

    @khoa4k266 5 months ago
    • Hope that will happen to me too

      @viCuber 3 months ago
    • @@viCuber same LOL

      @CR33D404 3 months ago
    • @@CR33D404 lmao

      @viCuber 3 months ago
    • It happens to me several times. Sometimes you just stumble on some knowledge and can't understand a single thing about it, then suddenly 1 or 2 years later you completely understand it without even trying.

      @codevacaphe3763 2 months ago
    • same

      @nachoyawn 2 months ago
  • Took a Machine Learning course in university and this is what we did the whole semester in Matlab. Tensorflow was introduced right at the end for the final project.

    @jumpierwolf 1 year ago
    • sounds amazing

      @gasun1274 1 year ago
    • oh hell yea matlab

      @marshmellominiapple 1 year ago
    • ​@@marshmellominiapple oh he'll yeah methlab

      @ElectrostatiCrow 1 year ago
    • @@ElectrostatiCrow LET HIM COOK

      @dumbfate 9 months ago
    • @@dumbfate no you let him cook

      @PluetoeInc. 26 days ago
  • 00:51 Problem statement 01:18 Math explanation 11:18 Coding it up 27:43 Results

    @TimeRoot 1 year ago
    • Thank you

      @omgcyanide4642 1 year ago
    • Thank you

      @Zetzumarshen 1 year ago
    • Thank you

      @Dejwv_ 1 year ago
    • Thank you

      @Salien1999 1 year ago
    • Thank you

      @stringstudios2262 1 year ago
  • This video is one of the best descriptions of neural networks written in only Numpy and Python I've ever seen. Thanks

    @tecknowledger 3 years ago
    • Hey guys, a reply would be highly appreciated. I want to plot the cost vs the number of iterations but I am not able to figure out which parameter to plot. I am a beginner and I would really appreciate the help. Thank you

      @anishojha1020 3 years ago
    • @@anishojha1020 Hi, try posting comment again in regular comments part, so more people see it. This is only a sub-comment.

      @tecknowledger 3 years ago
    • @@KHM95 Hi, are you a bot?

      @waterspray5743 2 years ago
    • @@waterspray5743 No man, I am not.

      @KHM95 2 years ago
    • I advise looking at sentdex's 'Neural Networks from Scratch' series

      @ME0WMERE 2 years ago
  • My man really explained how a back propagated neural network works from scratch in 10 minutes

    @MegaJesusini 1 year ago
  • Just your intro alone in your motivations was so capturing. You laid out everything so clearly, including creating those row and column matrices in the early steps. Thank you.

    @LydellAaron 1 year ago
  • This was interesting, it certainly made neural networks far more approachable to me as someone who's never needed to/been inclined to try making one, but encounters them frequently by being involved in STEM. Your explanations coupled with my familiarity with numpy as opposed to dedicated libraries for neural networks really helped - thanks!

    @Hex... 1 year ago
  • Excellent tutorial and example. Reveals the magic that most don't know about NNs and I love how you go about it.

    @randyscorner9434 1 month ago
  • I've never heard any of this explained before. After watching this once, I understand the mathematics behind neural networks and why the functions are used. Great job with the explanation here. Many thanks.

    @traviss7740 1 year ago
  • Just discovered this channel. Very cool stuff. Much respect for doing something challenging like this.

    @Mutual_Information 1 year ago
  • I'm so glad you actually went in depth with the math explanation. So often people will just explain the surface level and then "alright, let's jump into the code".

    @hcmcnae 11 months ago
  • This was a really good video. I've never built a neural network but it was interesting seeing how the fundamentals add up to build something a little more complex.

    @darrellrayford3817 1 year ago
  • It feels like it took me months to understand programming feedforward neural networks but I finally understand it. Thanks for the video.

    @joschkazimdars 2 years ago
  • I need to come back to this after learning some more preliminaries but you are a very natural teacher and good at presenting. Keep it up 👍

    @Bobbleheads56 1 year ago
  • This was really neat. The math explanation was frustrating the first time around but really made sense after working through the code. Thanks for sharing.

    @lbb2rfarangkiinok 1 year ago
  • You sir, are my hero. You are the first person to actually explain this properly to me. Thank you so much for that.

    @mauricioledon4498 1 year ago
  • Just learned the basics of neural networks and saw this video. So satisfying to see all the math formulas laid out clearly in numpy, with real-world coding and training of a neural network with backpropagation. It really helps beginners like me. Thank you so much!

    @momol.9892 8 days ago
  • Super cool! Would also recommend the series from The Coding Train about creating a neural network from scratch, going a little more into the details of the math, what a perceptron is, and so on.

    @jeandy4495 1 year ago
  • Another thing that would be helpful for those of us that want to copy what you did and experiment with it is to have all the code together instead of separated as it is using Kaggle - this way you can put in some comments with the code explaining the different features. Again, very good video.

    @robertknopf6207 3 years ago
  • Amazing. Needed to see the low end and finally found it. Thank you for the amazing video!

    @minjunkevink 2 years ago
  • Most of the videos are titled “how to create a blabla” when they’re actually teaching how to use… so I really appreciate your video! This really contributes to knowledge 🥰

    @albertolemosduran5685 1 year ago
  • You should continue making videos similar to this, maybe a training course for machine learning and reinforcement learning AI. You have a real talent for explaining it in the best way possible, better than most videos I've watched. 👍

    @KSATica 1 year ago
  • This is pure gold, MSc in Data Science and Artificial Intelligence, no professor ever gave me the answer to "what is the code inside the libraries we use", until I found you. Thank you

    @luisbq8045 9 months ago
    • thats sad

      @rushisy 9 months ago
    • I don't want to sound too picky and annoying, but the NNs in TensorFlow and PyTorch are not actually implemented like this. They don't store hand-written functions to compute gradients for every single operation; rather, they use autograd, which does all the backpropagation work. I would highly recommend watching Andrej Karpathy's tutorial on micrograd (a mini autograd engine which you will implement).

      @stanislavlia 7 months ago
    • I got a master's in physics and statistics, but I do know how to code a lot of "machine learning" techniques from scratch. Yet human resources look at my degree and think I am incapable, so they'd rather hire a master's in AI. I can also code CFD, SPH and FEA from scratch, but HR say I am dumber than an engineer who just uses third-party software (Ansys).

      @michaelpieters1844 2 months ago
    • @@michaelpieters1844 welcome to recruitment in 2024... you need to feed the recruiters what they want to hear, so that you can then get to the guy who you actually want to talk to about your stuff.

      @suscactus420 1 month ago
  • After Andrew Ng's course, this is the first time I'm watching math functions, thanks buddy, it was a nice refresher for me.

    @anandptyagi5275 1 year ago
  • I know the maths and programming behind it, and listening to this guy do all that on his own earns pure respect from my side.

    @letticonionepic 9 months ago
  • Thank you so much Mr. Samson!! This was so informative and enlightening

    @faris.abuali 1 year ago
  • What an awesome video! Thank you for sharing this insightful walkthrough, it was really helpful in getting a better understanding of how neural nets works. Thank you!

    @RonClabo 3 years ago
    • Here's a course you'll need. Face Mask Detection Using Deep Learning and Neural Networks. It's paid but it's worth it. khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv

      @KHM95 2 years ago
  • This is great. Built a backprop in C thirty years ago to solve the same problem. Just for a goof. It worked well before I finished debugging. These things are awesome and now I want to take another look. Thanks for posting this.

    @peterweicker77 5 months ago
  • Brilliant. Kind of the Hello World of neural nets. It shed a lot of light for me on how back propagation works.

    @aureliencobb199 1 year ago
  • Samson, Keep doing this kind of videos please!! Very intelligent and understandable video

    @juliocardoza6066 1 year ago
  • this type of learning is honestly the best, i implemented k means clustering by myself in c (pretty easy stuff but still) , and i can never forget it now, makes me happy that i can do stuff too

    @omlachake2551 2 years ago
    • When I was in high-school algebra I programmed an algebra calculator to do my homework for me, and for some reason I never actually needed it. Programming something really is a great way of learning it, even if it does take significantly longer than just some p-sets or flashcards.

      @Emily-fm7pt 1 year ago
    • @@Emily-fm7pt dude are you serious ??? SAME SAME lmao

      @OT-tn7ci 1 year ago
    • I remember when I tried to implement a decision tree on paper !! With a very small data dimensions (maybe 5x6 dim? Can't remember). I spent all the night doing the math but after 5-6 hours I realized I made a mistake in an iteration 😂😂 that's when I realized that we're lucky to have computers to help do it because a human mind can't build completely without doing mistakes in the process (can't stay focus for long time)... I also remember when I implemented a PCA from scratch on excel ( still have the Excel 😂)...😮

      @auronusben4567 11 months ago
  • I loved this video! Cool stuff. I implemented a tfidf clustering algorithm myself, very satisfying to see it all working

    @deananderson8186 1 year ago
  • Most tutorials I watch online about ML, you can just tell that the instructor doesn't know what's happening. They've just memorized libraries and tensorflow syntax, and I don't want that to be me! This is exactly what I've been looking for! THANK YOU!!!

    @llewsub 8 months ago
  • Better lecture and example for understanding and building NN than any in my math and stats MSc

    @GrahamEckel 1 year ago
  • What an impressive speed run! Just nitpicking: 15:45 `rand` is for a uniform dist U(0,1) and `randn` is for the standard normal distribution N(0,1), therefore unbounded, not U(-0.5, 0.5)

    @solleo9 1 year ago
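
For reference, a minimal numpy sketch of the distinction in that comment (the shapes here are just illustrative):

    import numpy as np

    # np.random.rand draws from the uniform distribution U(0, 1),
    # so subtracting 0.5 gives values in (-0.5, 0.5):
    W_uniform = np.random.rand(10, 784) - 0.5

    # np.random.randn draws from the standard normal N(0, 1),
    # so its values are unbounded (most land within about +/-3):
    W_normal = np.random.randn(10, 784)

    print(W_uniform.min(), W_uniform.max())  # always inside (-0.5, 0.5)
    print(W_normal.min(), W_normal.max())    # can fall well outside that range
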
  • Awesome fundamental class on neural networks equations. Bravo!

    @ricardogomes9528 1 year ago
  • Really excellent breakdown of a Neural Network, especially the math explanation in the beginning. I also want to say how much I appreciate you leaving in your first attempt at coding it and the mistakes you made. Coding is hard, and spending an hour debugging your code just because of one little number is so real. Great video

    @ItsNaberius 26 days ago
  • Thank you for your time and effort, Samson, this tutorial is a treasure.

    @chessprogramming591 1 year ago
  • In case any beginners to ML came here wondering why they are really confused, this video isn't really for beginners and he doesn't really explain that. Its "from scratch" in the sense of not using any prebuilt models in the code. Its a good explanation for people who are already familiar with neural networks, prebuilt layers, loss functions, etc. not for people starting their understanding "from scratch."

    @jasonavina8135 1 year ago
    • actually im new to ML, (2-3 months in) and this helped me understand a lot, i am implementing it on my own now, without even using numpy so i can code out stuff like transpose on my own and learn more. Random is tricky tho lol

      @OT-tn7ci 1 year ago
  • Great video! It's really solid in its foundations! I will definitely recommend this to those who just like to use frameworks and libraries without understanding them

    @jasonkim1642 1 year ago
  • A great introduction to neural networks is Parallel Distributed Processing by Rumelhart and McClelland from about 1986. They do something similar and give a lot of additional background.

    @nextcomputerparts 10 months ago
  • It's worth noting that softmax IS actually very similar to sigmoid. But it essentially does a sigmoid over multiple classes.

    @Crayphor 1 year ago
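
One way to see that similarity concretely: for two classes, softmax over the logits (z, 0) gives exactly sigmoid(z). A small sketch (function names here are just illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):
        e = np.exp(z - np.max(z))  # shift by the max for numerical stability
        return e / e.sum()

    z = 1.7
    two_class = softmax(np.array([z, 0.0]))
    print(two_class[0], sigmoid(z))  # both print the same probability, ~0.8455
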
  • Man this video is a masterpiece. I learned a lot and I love your thorough, calm style. Please keep doing similar content!! Best wishes

    @FreakyStyleytobby 1 year ago
  • Amazing video for beginners to gain an insight in how neural networks work. You just have to have programmed a simple neural net from scratch once to have a good basic understanding.

    @Felix14325 1 year ago
  • Thanks for lovely video Samson. I'm a prof and love seeing this kind of content. I'll definitely share with students

    @drgatsis 1 year ago
  • I’m always too intimidated to try some of these things. But seeing your process makes it really seem feasible. Need to brush up on my linear algebra again tho 😆

    @SnackPack913 1 year ago
  • Samson, this was such a great walk through. Just wanted to say that if you ever made other videos recreating machine learning models from scratch, I'd 100% watch them. In any case, hope all is good and thanks for this great content :)

    @AAAJ27 1 year ago
  • Samson, we need more videos like this from you. Great content, more visuals would be nice, too 🙂

    @straightup7up 5 months ago
  • Love your sense of humor! Brought the video to life, thanks! You are appreciated!

    @waynesletcher7470 4 months ago
  • Great video! I did the same thing in python about a year ago, but I didn’t like relying on numpy so much. Your video gave me the motivation to write both a matrix manipulator and neural network from scratch in Java

    @FireNLightnin 1 year ago
    • I did it in assembly, easy

      @TheJackTheLion 9 months ago
  • I actually did this exact same thing for my German a level project. Same database. :D good times

    @work9466 1 year ago
  • This is the first ASMR NN video that I have ever seen. Well done.

    @gnorts_mr_alien 1 year ago
  • The YT algorithm only recommends me this now, 1 year after I encountered a similar discontent with neural network tutorials. Still very interesting to see how someone else does it. I did give myself a bit of help by using a library called Eigen for the matrix calculations. Very well done, nice video.

    @sanglar3623 1 year ago
  • Bro, that is exactly how I study! I found your channel and I am so glad I did. Instantly subscribed! I see you have learnt from Andrew Ng

    @sharmakartikeya 2 years ago
    • yeah the notations reminded me of Andrew Ng

      @rishikeshkanabar4650 2 years ago
    • @@rishikeshkanabar4650 usage of the word called "intuition" reminds me of him saying ..."to get a better intuition" in his lectures

      @kumaranp8764 2 years ago
  • Hi! I did a recreation of your code with more hidden layers and noticed what I think is a bug in the db calculation. Changing it to db = 1 / m * np.sum(dZ, axis=1).reshape(-1, 1) was able to get me better results. I think the old db = 1 / m * np.sum(dZ) sums the entire dZ to one float. Very good video though!

    @David-ip2sd 1 year ago
    • noticed the same thing. The way it was implemented here returns db to a float and thus b will always be "similar" to the random initialization, only shifted up/down by a constant.

      @Hyngvi 1 year ago
    • Hey, I know you posted this a while ago, but I noticed the same thing and saw your comment. I am still not sure how to solve this: dZ is still a 1D array (1 by 10), so in your solution, what does axis=1 do? Won't .sum() just turn the 1D array into a scalar regardless, and then you are back with the same problem of updating all your biases the same way?

      @mattlange00 6 months ago
    • Actually, nevermind, dZ is 10 by m so this does make sense

      @mattlange00 6 months ago
    • Numpy requires some strange things when you have only 1 dimension. Verified that without this change the final bias weights aren't being updated; with it, training works better. Didn't verify the details of David's solution, just that it was needed and that it seemed to work.

      def backward_prop(Z1, A1, Z2, A2, W1, W2, X, Y):
          one_hot_Y = one_hot(Y)
          dZ2 = A2 - one_hot_Y
          dW2 = 1 / m * dZ2.dot(A1.T)
          db2 = 1 / m * np.sum(dZ2, axis=1).reshape(-1, 1)
          dZ1 = W2.T.dot(dZ2) * ReLU_deriv(Z1)
          dW1 = 1 / m * dZ1.dot(X.T)
          db1 = 1 / m * np.sum(dZ1, axis=1).reshape(-1, 1)
          return dW1, db1, dW2, db2

      @gpeschke 3 months ago
    • I see the same. Also, either this is old enough that something has changed in Python or numpy, or he hasn’t included other things as well. Using his code line for line and the same data set, I get a divide by zero error on the softmax function.

      @danielmyers76 5 days ago
  • Samson Zhang is the BEST Cinematographer, editor, musician& tech geek in the WORLD

    @edisonbekaj863 1 year ago
  • I learned this back in 1988, when visiting lectures by Prof. Rechenberg at the Technical University. So inspiring. Today I am trying this with my new M1 Max and its neural network 😝😝

    @LeicaM11 1 year ago
  • Hi Samson! I'm a developer and trying to learn the basics of ML. Much of the beginner stuff I see uses pre-trained models and frameworks, which might be convenient to get things going. However, for me this is something completely new and I really want to understand what happens behind the scenes. Thank you for posting this! /Kevin from Sweden

    @DiAMONDBACK85 8 months ago
    • Take the statistics course then. Don't learn from programmers. They do not know either.

      @jmw1500 6 months ago
    • ⁠Exactly!

      @paultvshow 3 months ago
    • Try Jeremy Howard's part 2 of the 2022 course

      @carnap355 2 months ago
  • Haven’t finished video yet, but this looks like the missing piece of my experience learning about neural networks at a high level…I probably lacked the linear algebra skills I have now though. Whoa! This could be incredibly exciting! I can’t wait!

    @f.osborn1579 1 year ago
    • Nobody cares what you have to say

      @mrgenetics4063 1 year ago
  • Thank you. I'm doing this in class right now and your explanations were super helpful!

    @kurtameyer 3 months ago
  • Musician, filmmaker, data scientist, and etc. bro maxed out on skill trees. 😂

    @notyou4122 1 year ago
  • Could you please do more tutorials? This is such a great video

    @kiaruna 1 year ago
  • This solved a lot of doubts and brought my confidence up to dive deep into AI/ML. Thanks for the explanation.

    @arksodyssey 2 months ago
  • I am going to do the same over the next two weeks; at the end I'm coming back to see any differences between our code. Thanks for sharing :)

    @carloscortes2391 4 months ago
  • This is a great way to teach ANNs - congrats. However, I would suggest not worrying too much about the time it takes to finish the implementation. Double-checking all the steps will avoid coding errors.

    @ricardo5875 2 years ago
  • An excellent video with abundant mathematical insight. It may be worth noting that instead of partial derivatives one can work with derivatives as the linear transformations they really are, and also look at the networks in a more structured manner, thus making clear how the basic ideas of BPP apply to much more general cases. Several steps are involved.
    1.- More general processing units. Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong, beyond Euclidean spaces, to any Hilbert space. Derivatives are linear transformations and the derivative of a neural processing unit is the direct sum of its partial derivatives with respect to the inputs and with respect to the weights; this is a linear transformation expressed as the sum of its restrictions to a pair of complementary subspaces.
    2.- More general layers (any number of units). Single unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a unique layer is equivalent to taking their product (as functions, in the sense of set theory). The layers are functions of the inputs and of the weights of the totality of the units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.
    3.- Networks with any number of layers. A network is the composition (as functions, and in the set-theoretical sense) of its layers. By the chain rule the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.
    4.- Quadratic error of a function. ... --- Since this comment is becoming too long I will stop here. The point is that a very general viewpoint clarifies many aspects of BPP. If you are interested in the full story and have some familiarity with Hilbert spaces please google for papers dealing with backpropagation in Hilbert spaces. A related article with matrix formulas for backpropagation on semilinear networks is also available. For a glimpse into a completely new deep learning algorithm which is orders of magnitude more efficient, controllable and faster than BPP, search on this platform for a video about deep learning without backpropagation; in its description there are links to a demo software. The new algorithm is based on the following very general and powerful result (google it): polyhedrons and perceptrons are functionally equivalent. For the elementary conceptual basis of NNs see the article Neural Network Formalism. Daniel Crespin

    @dcrespin 1 year ago
  • I have no idea what you were really saying, but at the same time I do, because you explained how the math is used and implemented in the code. Thank you!

    @jamescardenas837 1 month ago
  • The fact that he actually shows the overconfident "I'll just code it from memory" programmer stage is actually so real.

    @iwasdeleted708 9 months ago
  • It's a shame it isn't taught this way in courses. Excellent video!

    @tommyhuffman7499 1 year ago
  • Just 1 minute in the video and I can easily tell that you're gonna own a multi-billion company within a few years. You've got the IQ, the voice, the clarity, the confidence, and the right personality. Best of luck Mr. Zhang

    @123arskas 1 year ago
  • That is very neat and captures the fundamental ideas of neural nets! great job

    @williamzhang6955 1 year ago
  • Perhaps I overcomplicated matters compared to your approach when I did this a couple of years ago, but like you, I wanted to program it "from scratch". My language of choice: java. I actually simulated "neurons" which were a class that stored its activation data value, and its connections to the next layer, so that it "looked" like a K_m,n graph, and the connection was an array which stored the biases along each "synapse" so to speak. Then when the hidden layers activated, I had each neuron simply sum the outputs from each synapse connecting to it from the previous layer, which was just the product of its activation value and its bias, then sigmoided this to get its own activation value. Note that while each neuron's activation was only in (-1,1), I let the biases be free parameters. When I programmed the backprop algo, I did the gradient descent the same as you, but effectively set that alpha parameter to one. It didn't occur to me to mess with that. Starting the network out with random parameters, then training it on randomly chosen sets of 10,000 images five or six times seemed to work pretty well. I saw 93% accuracy on the test data. And just for fun, I put the network on a discord bot so my friends could feed it images of the same size and see its guess. Two interesting results came out. The network fails on inverted colors: i.e., drawing white on black using MS paint or something wouldn't get reliable predictions. Secondly, using MS paint to give it new data did work, but at a much lower rate. Our best guess for why this happened was due to the sharpness of the lines between the number and backgrounds.

    @cbeezy4733 1 year ago
  • Helpful, thanks. Made my own from scratch in bare C++. From image to 32 to 16 to 10 outputs, using leaky ReLU. 96% accuracy on the test set. 🥳

    @Kaetemi 7 months ago
  • Keep doing it man, I am from Perú and the information you are giving is the most important I have heard about.

    @luizarnoldchavezburgos3638 1 year ago
  • This is exactly what I've been looking for! Thank you.

    @cgswjs 1 year ago
  • Timestamps if you forgot: 0:51 Problem Statement 1:18 Math Explanation 11:18 Coding It Up 27:43 Results

    @isreallealbertsanchez1156 2 years ago
    • Here's a course you'll need. Face Mask Detection Using Deep Learning and Neural Networks. It's paid but it's worth it. khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv

      @KHM95 2 years ago
    • @18:07 is the timestamp where the other error was made: a2 = softmax(a1), which should be a2 = softmax(z2)

      @Achrononmaster 1 year ago
    • @23:30 you also see two errors, there is no axis argument for the np.sum(), the lines should be db2 = 1 / m * np.sum(dZ2) ... and ... db1 = 1 / m * np.sum(dZ1)

      @Achrononmaster 1 year ago
    • And @23:00 ReLU_deriv(z) should really be return np.array(zn > 0, dtype=float) if you are aiming for good typing practice.

      @Achrononmaster 1 year ago
    • I don't understand anything but wow

      @elivegba8186 1 year ago
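
Putting the corrections from this thread together, a sketch of what the corrected forward pass and ReLU derivative could look like (a guess at the notebook's names and shapes; softmax and numpy as np are assumed to be defined as in the video):

    def ReLU(Z):
        return np.maximum(Z, 0)

    def ReLU_deriv(Z):
        # return 0.0/1.0 floats rather than booleans
        return np.array(Z > 0, dtype=float)

    def forward_prop(W1, b1, W2, b2, X):
        Z1 = W1.dot(X) + b1
        A1 = ReLU(Z1)
        Z2 = W2.dot(A1) + b2
        A2 = softmax(Z2)  # softmax of Z2, not of A1
        return Z1, A1, Z2, A2
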
  • Hi, I found this video very helpful for beginners. Could you please tell how you came up with the equations for dz, dw and db? That would be really helpful as well

    @themoonlight1922 2 years ago
    • watch andrew ng he copied every single equation from his course

      @aryamankukal1056 1 year ago
    • @@aryamankukal1056 I wouldn’t say he copied every equation. These equations are taught in all ML/AI courses and it is just mathematics

      @Nanakwaku309 1 year ago
    • @@Nanakwaku309 Andrew's notation is very specific and if you watch carefully he uses all of the same conventions

      @aryamankukal1056 1 year ago
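
For anyone with the same question as the start of this thread, a condensed sketch of where those expressions come from, assuming a cross-entropy loss L over the softmax output and the two-layer setup used in the video (m examples, ReLU hidden layer with derivative g'):

    \frac{\partial L}{\partial Z^{[2]}} = A^{[2]} - Y
        \quad\text{(softmax combined with cross-entropy collapses to this)}

    \frac{\partial L}{\partial W^{[2]}} = \frac{1}{m}\,\frac{\partial L}{\partial Z^{[2]}}\, A^{[1]\top},
    \qquad
    \frac{\partial L}{\partial b^{[2]}} = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial L}{\partial Z^{[2]}_{:,i}}

    \frac{\partial L}{\partial Z^{[1]}} = W^{[2]\top}\,\frac{\partial L}{\partial Z^{[2]}} \odot g'(Z^{[1]})
        \quad\text{(error pushed back through } W^{[2]}\text{, gated by the ReLU derivative)}

    \frac{\partial L}{\partial W^{[1]}} = \frac{1}{m}\,\frac{\partial L}{\partial Z^{[1]}}\, X^{\top},
    \qquad
    \frac{\partial L}{\partial b^{[1]}} = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial L}{\partial Z^{[1]}_{:,i}}

Each line is one application of the chain rule: a layer's error is the next layer's error pushed back through its weights, and dW, db are that error averaged over the m examples against the layer's inputs.
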
  • Really cool video Samson! Great stuff!

    @bf300 1 year ago
  • Very impressive! Great commentary/explanation as well

    @youri655 1 year ago
  • Very good video and explanation! Thanks 😊. I just would have liked it if you had explained the backprop a little more in depth, like how the derivatives are calculated on each layer (chain rule etc.). But other than that, one of the best NN videos

    @xuxusito 3 years ago
    • Here's a course you'll need. Face Mask Detection Using Deep Learning and Neural Networks. It's paid but it's worth it. khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv

      @KHM95 2 years ago
  • You can actually use momentum for gradient descent. The result is slightly better (I tried it on your NN and it gets 91% accuracy) // I'm a beginner at ML so your video taught me a lot. Keep up the great work you're doing, man. It's really cool.

    @quanduong8917 3 years ago
    • can you please send link of your code

      @akainu3668 3 years ago
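
A minimal sketch of what a momentum update could look like on top of the video's plain gradient-descent step. The velocity dict v and the hyperparameter beta are additions, not part of the original notebook; beta around 0.9 is a common choice:

    # v = {k: np.zeros_like(p) for k, p in
    #      zip(["dW1", "db1", "dW2", "db2"], [W1, b1, W2, b2])}
    def update_params_momentum(W1, b1, W2, b2, dW1, db1, dW2, db2,
                               v, alpha, beta=0.9):
        # keep an exponential moving average of the gradients and step along it
        v["dW1"] = beta * v["dW1"] + (1 - beta) * dW1
        v["db1"] = beta * v["db1"] + (1 - beta) * db1
        v["dW2"] = beta * v["dW2"] + (1 - beta) * dW2
        v["db2"] = beta * v["db2"] + (1 - beta) * db2
        W1 = W1 - alpha * v["dW1"]
        b1 = b1 - alpha * v["db1"]
        W2 = W2 - alpha * v["dW2"]
        b2 = b2 - alpha * v["db2"]
        return W1, b1, W2, b2
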
  • This was really useful to me, and incredibly well explained. Thank you.

    @iggzistentialism8458 1 year ago
  • Great Video! Inspired me to build up my basics first and start from a low level perspective.

    @a-balah 1 year ago
  • There is one thing I do not understand. Because of the derivation and chain rule stuff, shouldn't the derivative of the softmax activation function also be included somewhere?

    @mercedeszkistoth5367 1 year ago
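
A short sketch of the usual answer, assuming the loss is cross-entropy: the softmax Jacobian is included, it just cancels against the cross-entropy derivative, which is why the backward pass can start directly from A2 - Y:

    L = -\sum_k Y_k \log A^{[2]}_k,\qquad A^{[2]} = \operatorname{softmax}(Z^{[2]})
    \;\Longrightarrow\;
    \frac{\partial L}{\partial Z^{[2]}}
      = \frac{\partial L}{\partial A^{[2]}}\,\frac{\partial A^{[2]}}{\partial Z^{[2]}}
      = A^{[2]} - Y
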
  • It's an MLP, so you easily computed the backpropagation step in closed form, but I wonder how those famous frameworks can compute any network's partial-derivative tensors automatically

    @GiacomoMiola 1 year ago
    • usually the partial derivatives in backpropagation are of functions specifically chosen to be convex and have nothing to do with the problem you are working on, but are just ones that work nicely for ML algos

      @elliott614 1 year ago
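
On the "how do frameworks do it automatically" question: the usual answer is reverse-mode automatic differentiation. Each operation records its inputs and a local-gradient rule, and backward() replays the recorded graph in reverse, applying the chain rule. A toy scalar sketch in the spirit of Karpathy's micrograd, not how TensorFlow or PyTorch are actually implemented internally:

    class Value:
        """A scalar that remembers how it was computed, so gradients can flow back."""
        def __init__(self, data, parents=()):
            self.data = data
            self.grad = 0.0
            self._parents = parents
            self._backward = lambda: None

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def rule():
                self.grad += out.grad       # d(a+b)/da = 1
                other.grad += out.grad      # d(a+b)/db = 1
            out._backward = rule
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def rule():
                self.grad += other.data * out.grad   # d(a*b)/da = b
                other.grad += self.data * out.grad   # d(a*b)/db = a
            out._backward = rule
            return out

        def relu(self):
            out = Value(max(0.0, self.data), (self,))
            def rule():
                self.grad += (out.data > 0) * out.grad
            out._backward = rule
            return out

        def backward(self):
            # topologically order the graph, then apply each node's rule in reverse
            order, seen = [], set()
            def build(v):
                if v not in seen:
                    seen.add(v)
                    for p in v._parents:
                        build(p)
                    order.append(v)
            build(self)
            self.grad = 1.0
            for node in reversed(order):
                node._backward()

    # dy/dw, dy/dx, dy/db for y = relu(w*x + b) fall out of one backward() call
    w, x, b = Value(2.0), Value(3.0), Value(-1.0)
    y = (w * x + b).relu()
    y.backward()
    print(y.data, w.grad, x.grad, b.grad)  # 5.0 3.0 2.0 1.0
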
  • This is a project I have to put in my portfolio, thank you.

    @malice112 1 year ago
  • Nicely done, Samson, thanks!

    @dbbyres 4 months ago
  • I agree with you. I also did this from scratch. It was a lot of fun! What's the point of a master's math degree if I am not going to use it lol. Nice work!

    @harisjaved1379 1 year ago
    • Bro can you help, I also wanna learn. Can you tell us the resources which you used to learn this neural network?

      @Pk-tw6li 1 year ago
    • @@Pk-tw6li study some basic linear algebra, just with that you'll understand at least 85% of what's going on with the algorithm

      @juliopaniagua8723 1 year ago
  • Hello, it's such a great tutorial, thank you very much. I think people who are overexcited because of this AI hype should learn these basics, and see whether they really fit into this field 🤣🤣

    @danielniels22 3 years ago
  • thank you for the knowledge Mr. Samsung

    @hayashii5837 1 month ago
  • Awesome! Great video and explanation of ML roots. Take care!

    @petrdvorsky6615 1 year ago
  • Everyone praises this video for being so helpful and I'm just sitting here understanding NOTHING. :D I feel so dumb! Maybe I should've started with something even more basic, having learned in a nutshell only print("hello world") so far. I will definitely go back and watch it all again in the future after I learn more. Thank you for the video, Samson. Cheers!

    @Milorae 1 year ago
    • definitely pick up a book on algorithms and data structures first!

      @xianzai_ad1928 1 year ago
  • I've found that educators EVERYWHERE make things more complicated than they really are. The attitude goes something like this: if I explained it so a toddler could understand it (which they're capable of doing), I would be out of a job or people would think less of me. True of people talking about music, true of programming, and of portrait painting as well. None of this is complicated, just explained poorly or not at all. Number one rule of an educator: don't assume the student knows what you're talking about.

    @DanIel-fl1vc 1 year ago
    • I see people in YouTube videos just type for 5 minutes without saying a thing, or suddenly start to rewrite things.

      @doords 1 year ago
    • Spot on

      @abashimov 1 year ago
    • @@doords What, I've been through the grind I know what I'm talking about. Music theory in particular is more than any other discipline guilty of this, programming to a lesser extent. There's a community of underachievers that need to boost their ego by making it seem harder than it is. Usually by using made up lingo instead of two words everyone are familiar with, or made up quirky symbols instead of something that visually makes sense to everyone. What could be explained in 20 minutes takes 3 years to "teach". It does annoy me, fragile egos that need to prove themselves and hammer down at anyone that doesn't suck up to their sense of...moral superiority or superiority in any way. If you want to teach something, stop messing about damn soy boy.

      @DanIel-fl1vc 1 year ago
    • It is more complicated if it's supposed to work with arbitrary amounts of layers. But yeah I think coding up something like this can be very helpful, because it shows you what information is actually available and typically not used by practitioners. You don't just have the gradients of batch-averages of losses - you have much more.

      @MrCmon113 1 year ago
    • Skill issue

      @user-jg7lv5og6t 18 days ago
  • Really appreciate the example & tutorial, thank you. Can you make another video? Perhaps on adding more hidden layers?

    @Theeoldmann 1 year ago
  • TY so much for the great explanation of neural networks and the example!!

    @OsmanZekiYilmaz 2 years ago
  • Now build one IN Scratch

    @ReBufff 1 year ago
    • been done actually

      @be7256 1 year ago
  • Amazing stuff! Just wondering, what value does the coding timer add to the video? I mean, instead of correcting your mistakes with overlapping text you could have taken a little bit of time to review your code instead of rushing through it. But again, amazing content!

    @Ari-Matti.Rintala 1 year ago