Neural Network Architectures & Deep Learning

May 11, 2024
765,068 views

This video describes the variety of neural network architectures available to solve various problems in science and engineering. Examples include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and autoencoders.
Book website: databookuw.com/
Steve Brunton's website: eigensteve.com
Follow updates on Twitter @eigensteve
This video is part of a playlist "Intro to Data Science":
• Intro to Data Science
This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company

Comments
  • Does anyone else feel weird when he says Thank You at the end? He just gave me a free, high-quality, understandable lecture on neural networks. Man, thank *you*!

    @mickmickymick6927 · 3 years ago
    • :) People watching and enjoying these videos makes it so much more fun to make them. So indeed, thanks for watching!

      @Eigensteve · 3 years ago
    • @@Eigensteve ...being happy to see other people making progress. Man, you have a great heart!

      @antoniofirenze · 2 years ago
    • Steve, we should be thanking "you"

      @carol-lo · 2 years ago
    • Presenter with true class 👏

      @oncedidactic · 2 years ago
    • 😁😍

      @Learner.. · 2 years ago
  • KZhead's recommendation algorithm is becoming self-aware...

    @teslamotorsx · 4 years ago
    • It was KZhead's turn in the introduction round

      @florisr9 · 4 years ago
    • I hope just ReLU and sigmoid

      @GowthamRaghavanR · 4 years ago
    • @@GowthamRaghavanR those are the safe ones

      @Xaminn · 4 years ago
    • Imagine for a second also what the algorithm never recommended to you, because it already knew you were aware.

      @resinsmp · 4 years ago
    • @@resinsmp Now that's an interesting thought haha. "Since user searched this type of topic, it must already be aware of some other certain type of topics." Simply marvelous!

      @Xaminn · 4 years ago
  • I don't know why youtube decided I needed that little course, but I'm glad that it did now.

    @farabor7382 · 4 years ago
    • This video has common variables with other videos you watch!

      @brockborrmann2931 · 4 years ago
    • Sounds like you’ve been autoencoded

      @TonyGiannetti · 4 years ago
    • That's what the CF algorithm did

      @fitokay · 4 years ago
    • same thing

      @Kucherenko90 · 4 years ago
    • KZhead also uses neural networks

      @user-yp6ze3dh5j · 4 years ago
  • You really simplify the stuff in a way that has me feel enthusiastic to learn it. Thank you.

    @theunityofthejust-justifyi7951 · 4 years ago
  • I have been addicted to your series of lectures for the last three months. Your "welcome back" intro sounds like a chorus to me. Thank you!

    @Savedbygrace952 · 10 months ago
  • Thank you, I've always seen the term neural networks generalized and always thought of it as probably a bunch of matrix operations. But now I know that there are diverse variations and use cases for them

    @brian_c_park · 4 years ago
  • This is the best short intro to this topic I've seen. Thanks!

    @elverman · 4 years ago
  • Hey, I just wanted to say thank you for making this video. I found it really helpful! I particularly enjoyed your presentation format and the digestible length. About to watch a whole bunch more of your videos! :)

    @Jorpl_ · 4 years ago
  • Steve, you are the first person I have ever seen describe an overview of neural networks without paralyzing the consciousness of the average person. I look forward to more of your lectures, focused in depth on particular aspects of deep learning. It is not hard to get an AI toolkit for experimentation. It is hard to get a toolkit and know what to do with it. My personal interest is in NLR (natural language recognition) and NLP (natural language programming) as applied to formal language sources such as dictionaries and encyclopedias. I look forward to lectures covering extant NLP AI toolkits. Sincerely, John

    @johnwilson4909 · 4 years ago
    • John, I recommend Stanford's course on recurrent neural networks. Free on KZhead. It's a playlist with over 20 lectures

      @pb25193 · 4 years ago
    • kzhead.info/channel/PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z.html

      @pb25193 · 4 years ago
  • Thank you for your video! Seeing your example for principal values decomposition made neural networks much clearer to me than anything else I had seen till now. It allowed me to connect this to SVD-based linear modeling I used almost 10 years ago to create simplified models of visual features seen in fluid dynamics. I did not expect how much easier this suddenly seemed when it connected to what I already knew.

    @ArneBab · 4 years ago
  • This was massively helpful as an intro! When my question is just "yes, but how does this ACTUALLY work?", you either get pointlessly high-level metaphors about it being like your brain, or a jump straight into gradient descent and all the math behind training. A+ video, thanks.

    @dantescanline · 4 years ago
  • Very nice. I like the autoencoders. That is basically just understanding. Intelligence is basically just a compression algorithm. The more you understand, the less data you have to save, and you can extract information from your understanding. That's basically what the autoencoder is about. For instance, if you want to save an image of a circle, you can store all the pixels in the image, or store its radius, position, and color. Which one takes up more space? Storing the pixels. We can use our understanding that the image contains a circle in order to compress it. Our understanding IS the compression, and the compression IS the understanding. It's the same. (A minimal code sketch of this idea follows after this thread.)

    @MikaelMurstam · 4 years ago
    • shut up

      @TheMagicmagic290 · 4 years ago
    • profound observation

      @dizzydtv · 4 years ago
    • Thank you for your comment, excellent observation!

      @bdi_vd3677 · 4 years ago
    • I dig that perspective. I do think that compression can have some downsides. I feel like my emotional reactions to things are a sort of "compression". I can't keep track of everything I've read about a potentially political topic, but I can remember how it made me feel.

      @SirTravelMuffin · 4 years ago
    • I like to think of autoencoder as an architect outputting a blueprint, then a construction company building that building

      @PerfectlyNormalBeast · 4 years ago
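
The autoencoder-as-compression idea in the thread above maps directly onto code: squeeze the input through a narrow bottleneck and train the network to reproduce its own input, so whatever survives the bottleneck is the compressed representation. A minimal sketch, assuming PyTorch is available; the layer sizes, class name, and dummy data are illustrative, not taken from the video.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Compress a 784-pixel image to a 32-number code, then reconstruct it."""
    def __init__(self, n_in=784, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        z = self.encoder(x)      # the compact code: "the understanding"
        return self.decoder(z)   # reconstruction from that code

model = TinyAutoencoder()
x = torch.rand(16, 784)                     # dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)  # train to reproduce the input
loss.backward()
```

Training only ever asks the network to reproduce its input, so the 32-number code has to capture whatever regularities (the "circle-ness") make that reconstruction possible.
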
  • Steve Brunton, idk who u were before watching this, but this presentation style of a glass whiteboard w/ images superimposed is the best way I've ever seen someone teach, tbh. Thank u at least for that. But more importantly, this actually helped me understand the beast of neural nets a little more, and hopefully I'll be more prepared when our new AI overlords enslave us; at least we will know how they think.

    @PhoebeJCPSkunccMDsImagitorium · 4 years ago
  • These were the most productive 9 minutes. Great explanation of the architectures.

    @KeenyNewton · 4 years ago
  • Amazing program... I love the thing he's drawing on that projects his diagrams.

    @josephyoung6749 · 4 years ago
  • Amazing video and explanation; focusing on key points is very valuable for such sciences. Thank you a lot, and keep doing that!

    @lucasb.2410 · 4 years ago
  • Important note about the function operating on a node: if the functions of two adjacent layers are linear, then they can be equivalently represented as a single layer (a composition of linear transforms is itself a linear transformation and thus could just be its own layer). So nonlinear transformations are necessary for deep networks (not just neural networks) to gain anything from depth. That isn't to say you can't compose linear transformations into an overall linear transformation, if there are nonlinear constraints on each operator.

    @culperat · 4 years ago
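
The collapse of stacked linear layers described above is easy to verify numerically. A small sketch, assuming NumPy; the layer sizes and random weights are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 10)), rng.standard_normal(5)   # "layer 1" weights, bias
W2, b2 = rng.standard_normal((3, 5)),  rng.standard_normal(3)   # "layer 2" weights, bias
x = rng.standard_normal(10)

# Two purely linear layers applied in sequence...
two_layers = W2 @ (W1 @ x + b1) + b2

# ...collapse into one linear layer with merged weights and bias.
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layers, one_layer))   # True: the extra layer added no expressive power
```

Insert a nonlinearity such as ReLU between the two layers and the equivalence breaks, which is exactly why activation functions matter.
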
  • This is a perfectly compressed overview of neural networks. What autoencoder did you use to write this?

    @PiercingSight · 4 years ago
    • Human brain

      @bunderbah · 4 years ago
    • @@bunderbah Bruman hain

      @MilaPronto · 4 years ago
    • @@MilaPronto Humain bran

      @3snoW_ · 4 years ago
    • one hot encoder. lols

      @mbonuchinedu2420 · 4 years ago
    • @@mbonuchinedu2420 That's like a robot trying to be funny

      @mjafar · 4 years ago
  • Sir your deep learning videos are the only ones on KZhead I take seriously.

    @XecutionStyle · 3 years ago
  • Gosh, I needed this intro at the start of my seminar paper...

    @Illu07 · 4 years ago
  • A simple, perfect, enjoyable explanation of DNNs. Thanks for sharing!

    @easylearn9350 · 4 years ago
  • Awesome concise high level explanation! Thank you

    @husane2161 · 4 years ago
  • I just found your channel as a suggestion from a 3Blue1Brown video. I subscribed instantly, easily explained, thanks.

    @RolandoLopezNieto · 15 days ago
    • So cool! Which video?

      @Eigensteve · 15 days ago
  • Great content for existing developers. Wow. Incredible. To say the least I am speechless. You didn’t waste my time and I appreciate that!!

    @tottiegod8021 · 2 years ago
  • I really appreciate this talk, thank you.

    @FlowerPowered420 · 9 months ago
  • Excellent overview on neural network architecture. Very interesting and worthwhile video.

    @robertschlesinger1342 · 4 years ago
  • I have been looking for this content a really long time. Thanks so much.

    @parvezshahamed370 · 4 years ago
  • Amazing time spent understanding the networks a little more.

    @lightspeedlion · 2 months ago
  • Forget neural networks, this guy figured out that it's better if you stand behind what you're presenting instead of in front of it. Mind blown.

    @chris_jorge · 4 years ago
  • Best. I love your lecture. It explains the problem in a simple way. Thank you so much.

    @nghetruyenradio · 4 years ago
  • Thank you so much for the video! The way you teach makes learning so much fun :) If you had been born in ancient times, you alone would have pushed the literacy rate up by over 20%.

    @YASHSHARMA-bf2mm · 1 year ago
  • KZhead is trying to teach us about itself.

    @-SUM1- · 4 years ago
    • Hahaha. Good.

      @FriendlyPerson-zb4gv · 4 years ago
    • It's becoming sentient! Even worse, it's a teenager who just wants to be understood. XD

      @ImaginaryMdA · 4 years ago
  • I started to learn NNs in the good old early 2000s. No internet, no colleagues, not even friends to share my excitement about NNs with. But even then it was obvious that the future lies with them, though I had to concentrate on more essential skills for my living. And only now, after so many years have passed, I tend to come back to NNs, because I'm still very excited about them and it is much, much easier now at least to play with them (much more powerful computers, extensive online knowledge base, community, whatever), not to speak of career opportunities. I'm glad YT somehow guessed I'm interested in NNs, though I haven't yet searched for it AFAIR. It gives me another impetus to start learning them again. Thanks for the video! Liked and sub-ed.

    @amegatron07 · 4 years ago
  • A really, really great video pointing out the essentials of neural network architecture; thanks for that video.

    @VikiGradwohl · 4 years ago
  • Love your videos and your book! Can't wait to start working through it actually!

    @goodlack9093 · 1 year ago
  • Clear, simple, effective. Thank you!

    @mrknarf4438 · 4 years ago
    • Also loved the graphic style. Were the images projected on a screen in front of you? Great result, I wish more people showed info this way.

      @mrknarf4438 · 4 years ago
  • Thank you for a good explanation. This is the quality of content we want to see! 10 folds better than Siraj Raval's channel, in my opinion.

    @SaidakbarP · 4 years ago
    • Well, that makes sense given he's a renowned professor =)

      @fzigunov · 4 years ago
  • Clear and concise. Thanks for posting.

    @carnivalwrestler · 4 years ago
  • Thank you. I somehow get inspiration from videos like these.

    @satoshinakamoto171 · 4 years ago
  • Such a great explanation, thank you

    @bambam10years · 4 years ago
  • This was most helpful, very clear, thank you

    @jonacacarr3839 · 4 years ago
  • One of the best introductions to AI I have seen.

    @kevintacheny1211 · 4 years ago
    • YES. ☝️this

      @bensmith9253 · 4 years ago
  • Hey Steve, thank you a lot for all your brilliant videos! One request on the topic: could you please cover how all this works with shift/rotation/scale of the image? Nobody on youtube covers this tricky part of the neural networks used for image recognition. I keep my fingers crossed that you're the one who can clarify this.

    @doctorshadow2482 · 1 year ago
  • Great explanation. Thank you!

    @myway2mars · 4 years ago
  • Thank you so much! I needed this.

    @beepboopgpt1439 · 4 years ago
  • Great explanation. Thank you.

    @solargoldfish · 4 years ago
  • I need to watch all the videos of this channel.

    @yourikhan4425 · 1 year ago
  • Great explanation. Thank you, Sir.

    @izainonline · 8 months ago
  • Adore this free online schooling, thanks so much Steve!!

    @SimulationSeries · 4 years ago
    • Glad you enjoy it! Thanks!

      @Eigensteve · 3 years ago
  • simply great, thanks for this intro video

    @userou-ig1ze · 3 years ago
  • Could you please do a follow up on this? I basically came here for the "many many more" you mentioned towards the end. LSTMs and other architectures that are useful for time series processing. It would be nice if you could do an overview video about that class of networks.

    @JohannesSchmitz · 4 years ago
  • Strangely enough. I needed this vid. Thank you YT ALGO

    @tw0ey3dm4n · 4 years ago
  • Thank you YT, your latest state of recommendations is all joy.

    @alalalal5952 · 4 years ago
  • Thanks for sharing Steve

    @raoofnaushad4318 · 4 years ago
  • Thank you very much for this extraordinary way of teaching.

    @karemabuowda2695 · 2 years ago
  • Thank you for this beautiful explanation. I really enjoy it.

    @aminnima6145 · 2 years ago
  • Great work on this video!

    @IamWillMatos · 4 years ago
  • a fantastic overview thanks!!♥

    @neiltucker1355 · 10 months ago
  • One of the most effective and useful introductory lectures on neural networks you can attend. It provides basic terminology and builds a good foundation for other lectures. HIGHLY RECOMMENDED. It would be helpful, Mr. Brunton, to say a little bit more about neurons. Is a neuron strictly a LOGICAL function point in a process (my simple Excel cell doing a logical function qualifies as a neuron with your definition), is it a PHYSICAL function point like a server, or is it both? Was there a reason you did not mention restricted Boltzmann machines? Thank you again, Sir, for the quality of this lecture.

    @kennjank9335 · 5 months ago
    • A neuron is pure software, a computational unit that mimics the basic functions of a biological neuron. While software relies on specific hardware for execution, a neuron is not a simple server. Unlike an Excel cell, which takes a single input and produces a straightforward output, a neuron receives multiple inputs from other neurons, processes them, and generates an output based on the combined information. Each input to a neuron is multiplied by a weight, a numerical value that represents the strength of the connection between the neurons. These weighted inputs are then summed together, and a bias value, representing an inherent offset, is added to the result. The resulting value is then passed through an activation function, which introduces non-linearity into the network's decision-making process. Activation functions, such as sigmoid and ReLU, transform the weighted input into the neuron's output, allowing the network to capture complex patterns and relationships in the data. ReLU is often used as an activation function because it requires less computational power compared to other activation functions, such as the sigmoid function. Through a process called learning, artificial neurons adjust their weights over time, enabling the network to improve its performance on a given task. Algorithms like back propagation guide this learning process, allowing the network to minimize errors and optimize its decision-making capabilities. Hope this helps.

      @JorgeMartinez-xb2ks · 5 months ago
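
The reply above describes exactly the arithmetic a single artificial neuron performs: a weighted sum of its inputs, plus a bias, passed through a nonlinear activation. A minimal sketch of that computation, assuming NumPy; the weights, bias, and inputs are made-up numbers for illustration.

```python
import numpy as np

def relu(z):
    """ReLU activation: cheap to compute, passes positive values, zeroes out negatives."""
    return np.maximum(0.0, z)

def neuron(inputs, weights, bias, activation=relu):
    """One artificial neuron: weighted sum of inputs, plus bias, through an activation."""
    z = np.dot(weights, inputs) + bias   # combine the incoming signals
    return activation(z)                 # nonlinear output passed to the next layer

# Illustrative values only: three inputs feeding one neuron.
x = np.array([0.5, -1.0, 2.0])   # signals from upstream neurons
w = np.array([0.8, 0.2, -0.5])   # connection strengths (learned during training)
b = 0.1                          # bias offset
print(neuron(x, w, b))           # a single scalar output
```

Learning then amounts to nudging w and b (via backpropagation) so that outputs like this one reduce the network's overall error.
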
  • So YouTube knows that I am currently learning neural networks and this video appears in my recommendations. Great!

    @AllTypeGaming6596 · 4 years ago
  • Amazing video, thanks for the information

    @toonheylen4707 · 4 years ago
  • Really clear. Thanks for the video!

    @sitrakaforler8696 · 1 year ago
  • Very well explained. Thank you

    @flaviudsi · 1 year ago
  • Thanks for this explanation

    @abhaythakur8572 · 4 years ago
  • this is 9 minutes of pure quality education

    @insomnia20422 · 4 years ago
  • Liked that the approach was direct and simple; and of course you can write your code in this manner too, so that you're not overwhelmed. Say four or five layers being coded, then you have outboard functions that handle the input and output arrays. This last part might take up most of the landscape of a program. Isn't this fellow clever? Dang. He's gotta be a professor somewhere. Many thanks. The computer training that I had gotten was very rudimentary, first in the 60s and then another drop in the mid 90s. Luckily there's YT where you can catch up. And after a while the 'training' starts to remind you of subliminal sorts of stuff. Maybe?

    @jimparsons6803 · 11 months ago
  • Good overall neural net explanation!

    @jaredbeckwith · 4 years ago
  • Once you get hold of backpropagation and how to do the chain rule derivatives, you understand that was not the goal! You merely opened the door, and this video is the way to your goal!

    @saysoy1 · 1 year ago
  • Steve: nice talk... many questions come up, I'll ask a few. 1) Do you distinguish planar vs non-planar networks? 2) Do RNNs become unstable? They look like time-dependent control system processes. 3) Has anyone applied Monte Carlo methods to the selection of a NN's topology, or to activation function selection? Fascinating area to study.

    @mr1enrollment · 4 years ago
  • Glad I found this channel! Loved everything about this video.

    @GarimaaThakur · 4 years ago
    • Glad you enjoy it!

      @Eigensteve · 4 years ago
  • Ok, gotta bring my notebook, thank you for the content btw

    @latestcoder · 3 years ago
  • I like the way of explaining by projecting on a glass board... very, very nice.

    @youcanlearnallthethingstec1176 · 3 years ago
  • Thanks a lot to Steve and KZhead for recommending this great video

    @dejavukun · 4 years ago
  • Awesome 😎... well ☺️ i didn’t understand much but i think I could use as inspiration to Spinal Cord my Dark Matter.

    @lucyoriginales · 4 years ago
  • Very good explanation. 🎉

    @ts.nathan7786 · 4 months ago
  • "...a smiley face, I took this from Wikipedia." You know he's an academic when he cites EVERYTHING. He cites a smiley face image.

    @reallynotadatascientist · 1 year ago
  • Thanks for your explanation in the video; I have learned a lot. I am doing research in speech emotion recognition. Can you please tell me the best deep learning algorithms that will work?

    @radhikasece2374 · 10 months ago
  • I love this man. You are my role model.

    @namhyeongtaek4653 · 3 years ago
    • Thanks so much!

      @Eigensteve · 3 years ago
    • @@Eigensteve OMG, it's my honor 😯. I didn't expect you would read my comment lol. I hope I can get into UW this fall so that I can be in your class in person.

      @namhyeongtaek4653 · 3 years ago
  • Thanks, this was awesome.

    @Selbstzensur · 1 year ago
  • Amazingly good explanation, and simple words for a non-native English speaker like me.

    @randythamrin5976 · 4 years ago
  • Finally a good presentation

    @DanWilan · 3 years ago
    • Thanks!

      @Eigensteve · 3 years ago
  • That was beautiful.

    @mikegunner5539 · 4 years ago
  • I really, really, really like the way you present. Could you help me understand your setup? There's a see-through glass that you draw on, and there's a projector (I think) that lets you see which part of the presentation you're in. Plus the dark shirt enables me to just focus on your face and your hands. It's a very intuitive interface for learning. Your hand gestures easily capture my eyes' attention. Do please elaborate. Thanks!

    @lonelym13 · 4 years ago
  • Oh wow, I've been educated by your channel for a while now but did not realise you had published a textbook until your remark. Only A$80 here in Aus. Done! Purchased.

    @BenHutchison · 2 years ago
  • KZhead recommended it. But I love it.

    @its_me_kirankumar · 4 years ago
  • Thank you is all I can say but it doesn't feel like enough for this

    @nex4618 · 2 years ago
  • KZhead read my mind; this was exactly what I was curious about.

    @vinster9165 · 3 years ago
  • Very nice explanation

    @vijaykumar.jayaraj · 4 years ago
  • Amazing. Thank you :)

    @jeewonkyrapark9153 · 3 years ago
  • 7:45 Can it be combined with a Decision Tree? I think it would be a good idea, and I have found some research that has a similar idea

    @tsylpyf6od404 · 9 months ago
  • I guess neurones can be thought of as functions that call other functions if a certain variable has a sufficient value. And the main difference between an ANN and our biological neural network is that an ANN has a fixed set of functions with fixed connections, only changing the conditions triggering the next callback, whereas brains can grow new neurones and even disconnect and rewire connections. The question then becomes: can we write a function that writes a new function? Or a function that modifies the content of an existing function so as to change its callback to call a different function? If this holds true, we could get even closer to natural neural networks. I'm also debating myself on when to use "artificial" vs "synthetic". I guess an [A]NN can't rewire/reprogram itself, whereas a real one can? In which case, if we produce a neural network that indeed can change its own inner structure, we could promote it from "artificial" to "synthetic"? Great video. Definitely earned yourself a subscriber. :) (A small sketch of a function that builds another function follows after this thread.)

    @mathiasfantoni2458 · 2 years ago
    • I was actually actively looking for a video like this - it wasn’t just the Algorithm™️ 😂

      @mathiasfantoni2458 · 2 years ago
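
On the question above of whether a function can write or rewire another function: in most languages used for neural networks this is routine, since functions are values that can be created and returned at runtime. A tiny illustrative sketch in Python; the names are hypothetical and not tied to any library.

```python
def make_neuron(weights, bias):
    """Return a brand-new function, 'wired' with the given weights and bias."""
    def neuron(inputs):
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return max(0.0, z)   # ReLU-style firing
    return neuron

# "Rewiring": build new neuron functions with different connection strengths.
n1 = make_neuron([0.8, -0.5], bias=0.1)
n2 = make_neuron([0.1, 0.9], bias=-0.2)
print(n1([1.0, 2.0]), n2([1.0, 2.0]))
```

In practice, though, typical ANNs keep the wiring fixed and only adjust the numeric weights during training; growing or pruning connections is a separate line of work (e.g. neuro-evolution and network pruning).
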
  • Loved neural nets since 1998 when I read a book which showed how 3 layer nets can solve difficult problems. In the 21st century the neural nets are magnificent and a credit to the brains of the human race. I am using a 21st century neural net myself and it's great. Hahahaha. Great video

    @arnolddalby5552 · 4 years ago
  • beautiful! thanks.

    @Didanihaaaa · 3 years ago
  • So youtube decided to make this 5-month-old video famous? :D All the comments are at most 2 hours old...

    @smilefaxxe2557 · 4 years ago
    • 2 days later and I'm here haha

      @jvsonyt · 4 years ago
    • Could easily be that some person with a lot of followers shared the video. Then it has more views, which makes it a more recommended video.

      @cyberneticbutterfly8506 · 4 years ago
    • @@cyberneticbutterfly8506 so the WHOLE system is self aware?

      @jvsonyt · 4 years ago
    • @@jvsonyt Hardly. It's just a trigger. Person A with a high number of followers shares a video -> They then go watch the video -> The video view number increases -> IF video has increase in X views THEN bump video ranking in reccomendations by Y amount -> You now get it in your reccomendations.

      @cyberneticbutterfly8506@cyberneticbutterfly85064 жыл бұрын
    • @@cyberneticbutterfly8506 aliens

      @jvsonyt · 4 years ago
  • Thanks, Sir !

    @MrFischvogel · 2 years ago
  • I don't like the traditional sigmoid function as much. I did a performance test on speed, loss, and accuracy for image classification, and tanh performed better on all three metrics compared to sigmoid (and ReLU ended up being better than tanh).

    @GogiRegion · 4 years ago
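
For reference, the three activations compared above differ only in how they squash a neuron's weighted sum. A quick sketch of their definitions, assuming NumPy; illustrative only, not the commenter's benchmark code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # output in (0, 1); saturates for large |z|

def tanh(z):
    return np.tanh(z)                 # output in (-1, 1); zero-centered, which often trains faster

def relu(z):
    return np.maximum(0.0, z)         # keeps positives, zeroes negatives; cheap, no saturation for z > 0

z = np.linspace(-3.0, 3.0, 7)
for f in (sigmoid, tanh, relu):
    print(f.__name__, np.round(f(z), 3))
```
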
  • Ok, thank you.

    @KelvinWKiger · 4 years ago
  • Dear Sir, would you mind advising which book talks specifically about each of the architectures illustrated in the neural network zoo? Thanks.

    @hahe3598 · 1 year ago
  • Damn good video never knew I needed it but damn. Thanks

    @6lack5ushi · 4 years ago
    • Thanks!

      @Eigensteve · 4 years ago
  • Thank you... 💋

    @lucyoriginales · 4 years ago