Data-driven model discovery: Targeted use of deep neural networks for physics and engineering

May 14, 2024
25,950 views

website: faculty.washington.edu/kutz
This video highlights physics-informed machine learning architectures that allow for the simultaneous discovery of physics models and their associated coordinate systems from data alone. The targeted use of neural networks and the enforcement of parsimonious models allow for a robust architecture that can be broadly applied in the sciences.
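
The "parsimonious models" mentioned above are enforced through sparse regression, as in the speaker's SINDy framework (sparse identification of nonlinear dynamics; Brunton, Proctor & Kutz, PNAS 2016). Below is a minimal sketch of the core idea, sequentially thresholded least squares; the toy system, library, and threshold values are illustrative, not the speaker's code:

```python
# Minimal sketch of SINDy-style sparse regression (sequentially
# thresholded least squares); illustrative only, not the speaker's code.
import numpy as np

def stlsq(Theta, dXdt, threshold=0.05, n_iters=10):
    """Solve Theta @ Xi ~= dXdt, repeatedly zeroing small coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold          # coefficients to prune
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):          # refit the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from lightly noisy samples.
t = np.linspace(0, 5, 500)
x = np.exp(-2 * t)[:, None]
dxdt = -2 * x + 0.001 * np.random.randn(*x.shape)
Theta = np.column_stack([np.ones_like(t), x[:, 0], x[:, 0] ** 2])  # library [1, x, x^2]
print(stlsq(Theta, dxdt))  # ~ [[0], [-2], [0]]: only the true term survives
```

Thresholding and refitting trades a little fit quality for a model with only a few active terms, which is what makes the recovered equations interpretable.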

Comments
  • So much information packed into a 45-minute presentation. The insights are amazing! Would you mind adding links to the work you cite in the description as well? Thank you again for posting these!

    @alizaidi1700 • 3 years ago
  • Excellent content again. Thank you Prof. Kutz!

    @dakota8450 • 3 years ago
  • Thanks for putting in the time and energy to publish these. They are quite unique, even though the topics are covered in so many other places.

    @kashmohammadi9785 • 3 years ago
  • Seems like this technique has enormous potential. Great talk!

    @mattkafker8400 • 3 years ago
  • This is simply amazing Dr. Kutz.

    @siriuslot4708 • 3 years ago
  • Brilliant talk from the brilliant Prof. Kutz!!

    @HassanKhan-cs8ho • 3 years ago
  • Fantastic, fantastic, fantastic! The history-of-science intro especially puts everything into perspective.

    @konstantinosvasios3852 • 2 years ago
  • I kept printing the presented papers one by one, and each time I said to myself: "Here is what you may use for your work" :)) A wonderful presentation, so well ordered and logical! One of the best I have ever seen from you on YouTube. Thanks a lot

    @emadarasteh270 • a year ago
  • Hi Nathan, this topic is absolutely mind-blowing. Congrats to the whole team for this work!!

    @ginebro1930 • 2 years ago
  • Heavily loaded presentation. Thank you so much

    @simeona.8058 • 3 years ago
  • Wonderful talks, many thanks; they are very inspiring.

    @fei9799 • 3 years ago
  • Great work. Thank you for sharing.

    @AAAA-mw2td • 3 years ago
  • Awesome lecture!

    @tinkeringengr • a year ago
  • Great material

    @jjk8417 • 2 years ago
  • Excellent!

    @enotdetcelfer • 3 years ago
  • Honestly, the content is amazing but unorganized. Please make some playlists for related content! Thank you!!!

    @Starcfd • 5 months ago
  • Fantastic! Your content is awesome. But you need to get people to click on your videos so they actually see them. Unfortunately, you must use an eye-catching thumbnail and a caption that is both exciting and accurate. I want your channel to take off. You deserve the fame.

    @durjoychanda4611 • a year ago
  • Very interesting

    @simeona.8058 • 3 years ago
  • Thank you for the video, Prof. Nathan. I have a question though: are SINDy libraries baked into the model as a layer, or are they simply used in the loss function? Thank you.

    @adokoka • 2 months ago
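
On the question above: in the SINDy-autoencoder formulation from the speaker's group (Champion, Lusch, Brunton & Kutz, PNAS 2019), the library is not a trainable layer. It is evaluated on the latent variables inside the loss, which combines reconstruction error, agreement between the latent time derivative and the library model Θ(z)Ξ, and an L1 penalty on the coefficient matrix Ξ. A hedged PyTorch sketch follows; the library, loss weights, and names are illustrative simplifications (the paper also includes a consistency term in the original ẋ coordinates):

```python
# Sketch (not the authors' code) of a SINDy library entering an autoencoder
# loss, in the spirit of Champion et al. (PNAS 2019).
import torch

def theta(z):
    """Candidate library Θ(z): constant, linear, and quadratic terms."""
    return torch.cat([torch.ones_like(z[:, :1]), z, z ** 2], dim=1)

def sindy_autoencoder_loss(x, x_dot, encoder, decoder, Xi,
                           lam_dyn=1e-4, lam_sparse=1e-5):
    """x: states, x_dot: their time derivatives, Xi: trainable coefficients."""
    z = encoder(x)
    x_hat = decoder(z)
    # Latent time derivative via the chain rule: z_dot = (dphi/dx) @ x_dot.
    _, z_dot = torch.autograd.functional.jvp(encoder, x, x_dot,
                                             create_graph=True)
    recon = ((x - x_hat) ** 2).mean()            # reconstruct the data
    dyn = ((z_dot - theta(z) @ Xi) ** 2).mean()  # enforce z_dot ~ Θ(z)Ξ
    sparse = Xi.abs().mean()                     # L1 keeps the model parsimonious
    return recon + lam_dyn * dyn + lam_sparse * sparse
```
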
  • Thanks for the great work! Is the "finding coordinates" phase just "feature engineering" in machine learning terminology, or is there a difference?

    @actuaryquant5480 • 3 years ago
    • I feel it's more similar to looking at the neuron activations and trying to come up with the coordinates.

      @divyamgoel8902 • 2 years ago
    • Yes, they are precisely the same thing.

      @kvazau8444 • a year ago
  • Koopman theory reminds me of the Hilbert-Huang Transform.

    @BruinChang • 2 years ago
  • How would one make collective excitations out of walking droplets?

    @frun • a year ago
  • lol, during the previous video I was like, "oh, this notation for the model is exactly like what I saw in that paper; I guess that's pretty much standard then." Then I noticed the paper was by this guy...

    @maya-amf3325 • a year ago
  • Fast Transform fixed-filter-bank neural networks are a thing.

    @hoaxuan7074 • 3 years ago
  • I'm starting to think that nonlinearity is actually a misrepresentation. Maybe everything is linear? Koopman theory sort of says this, but what if nonlinearity is a product of how we represent systems? We try to force things into our mathematical frameworks, and within those frameworks a nonlinear representation is required. In some sense this is also obvious, but then the natural thing would be to understand precisely how to represent things in their canonical coordinates. That may not be possible; maybe the devil really is binding us to what we think is linear.

    @kodfkdleepd2876 • a year ago
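
The "maybe everything is linear" intuition above is essentially the premise of Koopman theory: in the right (possibly infinite-dimensional) coordinates, nonlinear dynamics evolve linearly. A standard finite example (a Koopman-invariant subspace, as in work by Brunton, Kutz and collaborators): ẋ₁ = μx₁, ẋ₂ = λ(x₂ − x₁²) becomes exactly linear in the lifted state y = (x₁, x₂, x₁²). A quick numerical check; μ, λ, and the initial condition are illustrative:

```python
# Check that x1' = mu*x1, x2' = lam*(x2 - x1^2) is exactly linear in the
# lifted coordinates y = (x1, x2, x1^2):
#   y1' = mu*y1,  y2' = lam*y2 - lam*y3,  y3' = 2*mu*y3.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

mu, lam = -0.1, -1.0                   # illustrative parameter values

def f(t, x):
    """Original nonlinear dynamics."""
    return [mu * x[0], lam * (x[1] - x[0] ** 2)]

A = np.array([[mu,  0.0,  0.0],        # exact linear dynamics on y
              [0.0, lam, -lam],
              [0.0, 0.0, 2 * mu]])

x0 = [1.0, 0.5]
t_end = 5.0
x_nl = solve_ivp(f, (0.0, t_end), x0, rtol=1e-9, atol=1e-9).y[:, -1]
y_lin = expm(A * t_end) @ np.array([x0[0], x0[1], x0[0] ** 2])
print(x_nl)       # nonlinear solution at t_end
print(y_lin[:2])  # same values recovered from the purely linear system
```
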
  • Old wine in a new bottle!! What is the use of neural networks in this framework? I mean, if you define the polynomials in advance, why do we need NNs? Why can't we find the parameters using standard regression techniques, as in established methods? I guess that is what you are doing anyway. How is this different from standard, well-established system-identification methods?

    @kesav1985 • 11 months ago
  • It's ludicrous. The only neural network you need is in your head. There will be no progress in physics until first principles are revisited; you are not going to be able to black-box this one.

    @StephenCrowley-dx1ej • 5 months ago
  • Not that it's super important, but Newton wasn't made more famous because he "invented" a heuristic; both did that in different ways. What made Newton more famous than Kepler is more likely the wide pop-cultural applicability of his heuristic.

    @insightfool • 3 years ago