Max Tegmark, MIT
Abstract: After briefly reviewing how machine learning is becoming ever-more widely used in physics, I explore how ideas and methods from physics can help improve machine learning, focusing on automated discovery of mathematical formulas from data. I present a method for unsupervised learning of equations of motion for objects in raw and optionally distorted unlabeled video. I also describe progress on symbolic regression, i.e., finding a symbolic expression that matches data from an unknown function. Although this problem is likely to be NP-hard in general, functions of practical interest often exhibit symmetries, separability, compositionality and other simplifying properties. In this spirit, we have developed a recursive multidimensional symbolic regression algorithm that combines neural network fitting with a suite of physics-inspired techniques that discover and exploit these simplifying properties, enabling significant improvement of state-of-the-art performance.
Related papers:
* AI Feynman: a Physics-Inspired Method for Symbolic Regression - arxiv.org/abs/1905.11481
* AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity - arxiv.org/abs/2006.10782
* Symbolic Pregression: Discovering Physical Laws from Raw Distorted Video - arxiv.org/abs/2005.11212
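The separability the abstract mentions can be illustrated with a simple numerical check. This is a minimal sketch of the idea only, not the AI Feynman implementation: in the papers, a neural network fit to the data stands in for the exact function `f`, and several such simplifying properties (symmetry, separability, compositionality) are tested.

```python
import random

def is_additively_separable(f, n_trials=200, tol=1e-8):
    """Numerically test whether f(x, y) ~ g(x) + h(y).

    For an additively separable function,
        f(x1, y1) + f(x2, y2) == f(x1, y2) + f(x2, y1)
    holds for all points, so we check this identity at random samples.
    """
    rng = random.Random(0)
    for _ in range(n_trials):
        x1, x2, y1, y2 = (rng.uniform(-2, 2) for _ in range(4))
        lhs = f(x1, y1) + f(x2, y2)
        rhs = f(x1, y2) + f(x2, y1)
        if abs(lhs - rhs) > tol:
            return False
    return True
```

If the test passes, the regression problem splits into two lower-dimensional subproblems for `g` and `h`, which is what makes the recursive strategy effective: for example, `lambda x, y: x**2 + 3*y` passes, while `lambda x, y: x*y` fails.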
Just binge watch. Public. No one shall kick us out of the Tegmark paradise 😊 I will watch in VR. 🎉
Few people can lecture this well. Max is one.
Wonderful and so simple, tysm.
8:50 - Understanding in depth helps
10:20 - Example of the above
11:31 - A myth
Have you tested its stability against systematic and random noise? Real data are always noisy. We could also look at this as a kind of signal (e.g., in Fourier space) and try to denoise it as one application.
arxiv.org/pdf/1905.11481.pdf - they tested with noise, as you can see on pages 8-10.
can we get the github link?
Brilliant .....
Elicit did not pick up my paper; Google Scholar did. You know what Max Tegmark is proposing for AI safety: extract a program from the AI model, formally prove that the program will not do certain things, and only then release the AI. I did a Grad-CAM project on explainable AI.
I oppose most advances in AI