2023's Biggest Breakthroughs in Computer Science

Dec 19, 2023
730,589 views

Quanta Magazine’s computer science coverage in 2023 included progress on new approaches to artificial intelligence, a fundamental advance on a seminal quantum computing algorithm, and emergent behavior in large language models.
Read about more breakthroughs from 2023 at Quanta Magazine: www.quantamagazine.org/the-bi...
00:05 Vector-Driven AI
As powerful as AI has become, the artificial neural networks that underpin most modern systems share two flaws: they require tremendous resources to train and operate, and it’s too easy for them to become inscrutable black boxes. Researchers have developed a more versatile approach called hyperdimensional computing, which makes computations far more efficient while also giving researchers greater insight into a model’s reasoning.
- Original story with links to research papers can be found here: www.quantamagazine.org/a-new-...
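The hypervector idea sketched above can be shown in a few lines of code. This is an illustrative toy, not the researchers' actual system: each concept becomes a random high-dimensional vector, elementwise multiplication binds a role to a filler, addition-then-sign bundles pairs into one vector, and a noisy query is decoded by nearest-neighbor similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # "hyperdimensional": random vectors this long are nearly orthogonal

def hv():
    return rng.choice([-1, 1], size=D)  # random bipolar hypervector

def bind(a, b):
    return a * b  # elementwise multiply: reversible role/filler pairing

def bundle(*vs):
    return np.sign(np.sum(vs, axis=0))  # superpose several vectors into one

def sim(a, b):
    return float(a @ b) / D  # cosine-like similarity in [-1, 1]

# A tiny symbolic record: {color: red, shape: circle}
color, shape = hv(), hv()
red, blue, circle, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, circle))

# Query the record for its color: binding is its own inverse for bipolar
# vectors, so record * color is a noisy copy of `red`; decode by similarity.
query = bind(record, color)
fillers = {"red": red, "blue": blue, "circle": circle, "square": square}
best = max(fillers, key=lambda name: sim(query, fillers[name]))
print(best)  # -> red
```

Because every concept and every composite lives in the same vector space, the same similarity test that decoded "red" is what supports the analogy-style reasoning the video describes.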
04:01 Improving the Quantum Standard
For decades, Shor’s algorithm has been the paragon of the power of quantum computers. This set of instructions allows a machine that can exploit the quirks of quantum physics to break large numbers into their prime factors much faster than a regular, classical computer, potentially laying waste to much of the internet’s security systems. In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention.
- Original story with links to research papers can be found here: www.quantamagazine.org/thirty...
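For readers curious what Shor's algorithm (and hence any faster variant) actually computes: factoring N reduces to finding the order r of a random base a modulo N. The sketch below is a classical toy; the brute-force `find_order` loop stands in for the step a quantum computer accelerates, and nothing here reflects the specifics of the new variant.

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r = 1 (mod n); the quantum-accelerated step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Shor's classical reduction: recover a factor of n from the order of a."""
    g = gcd(a, n)
    if g > 1:
        return g              # lucky draw: a already shares a factor with n
    r = find_order(a, n)
    if r % 2:
        return None           # odd order: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None           # trivial square root: retry
    return gcd(y - 1, n)      # nontrivial factor of n

print(shor_factor(15, 7))  # order of 7 mod 15 is 4, so gcd(7**2 - 1, 15) = 3
```

Running the quantum part on a real machine replaces the exponential-time loop with a polynomial-time circuit; everything else is ordinary number theory.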
07:14 The Powers of Large Language Models
Get enough stuff together, and you might be surprised by what can happen. This year, scientists found so-called “emergent behaviors” in large language models: AI programs trained on enormous collections of text to produce humanlike writing. After these models reach a certain size, they can suddenly do unexpected things that smaller models can’t, such as solving certain math problems.
- Original story with links to research papers can be found here: www.quantamagazine.org/the-un...
- VISIT our Website: www.quantamagazine.org
- LIKE us on Facebook: / quantanews
- FOLLOW us on Twitter: / quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation: www.simonsfoundation.org/

Comments
  • Shor looks like such a nice guy

    @MauritsWilke • 4 months ago
    • @Nadzinator 😂

      @Yesterday_i_ate_rat • 4 months ago
    • He exudes such good vibes

      @Horologica • 4 months ago
    • He looks like Santa

      @nothingtoseeheremovealong598 • 4 months ago
    • I hope he reads this

      @moritz759 • 4 months ago
    • I remember asking a question on the French Stack Exchange channel and he answered it out of nowhere, took me a while to realize it really was him. Such an awesome and humble human being.

      @sagarmishra1192 • 4 months ago
  • It's very interesting that there is some progress in trying to combine ML and logic-based AI. Automated inference and logical argumentation are something statistical methods have major problems with, and this dimension of intelligence is very hard to emulate at scale. Quanta, you should include the actual citations of the papers in your videos in the future. Since this is about new scientific findings, paper references are necessary.

    @iamtheusualguy2611 • 4 months ago
    • Boosting engagement so Quanta hopefully sees this! There are so many of us who wanna dig deeper!

      @XanderPerezayylmao • 4 months ago
    • @iamtheusualguy2611 @XanderPerezayylmao To dig deeper, read our 2023 Year in Review series, which links to in-depth articles about each of these discoveries (the articles include embedded links to the research papers): www.quantamagazine.org/the-biggest-discoveries-in-computer-science-in-2023-20231220/

      @QuantaScienceChannel • 4 months ago
    • I believe it is crucial to include the citations that were used. Otherwise anyone can claim a proof does or doesn't exist. Including the citations in some other article is not the way to go.

      @szymonzywko9315 • 4 months ago
    • @@szymonzywko9315 this is a compilation of findings in a video format; the bulk of Quanta journalism is written. The article came first. While I agree that, in the future, public research would certainly benefit from citations being included on the videos, it may be a little harsh to shoot down the original articles, as that's where Quanta started.

      @XanderPerezayylmao • 4 months ago
    • Turns out that taking a step back from the pure unsupervised RNN finding its own parameters has its benefits. It still seems like a step back to 2015, though, even though most of the value comes from combining these approaches to provide real value and training custom models.

      @MaxGuides • 4 months ago
  • I really love these year-in-review videos. It's difficult to keep some sense of scale and time when you're being bombarded with the continual advancements of the field, so to see these videos is really helpful in understanding even a fraction of what more we know / can do this year as opposed to last year.

    @kieranhosty • 4 months ago
    • The "60 Minutes" show recently published a similar super-cut on this topic. It was interesting.

      @xCheddarB0b42x • 3 months ago
  • I don't think I've ever seen a video on Quanta Magazine's YouTube channel or read an article on their website that I haven't thoroughly enjoyed and learned something from. They always manage to strike the perfect balance between simplifying concepts with analogies and going into technical detail. Really great stuff!

    @ZyroZoro • 4 months ago
  • 1. Hyperdimensional vector representations, and AI driven by them. 2. An improvement on Shor's algorithm that uses higher dimensions (Regev's algorithm). 3. Emergent properties of large AI models.

    @krischalkhanal9591 • 4 months ago
  • I’m so glad you guys decided to start putting these out again this year!

    @saiparepally • 4 months ago
  • I love that we're seeing more and more scientists embrace hyper-dimensionality to solve certain math issues -- it seems that sometimes, due to our own nature, we can struggle to think clearly in those dimensions but it always seems to garner incredible results and, funny enough, seems to indirectly mimic nature itself. In the first example, I can't help but think of our brain's vector-like problem solving since our brain operations must form extremely complex networks over vast subspaces in the tissue! :)

    @Shinyshoesz • 4 months ago
  • Regarding emergent abilities: at this year's NeurIPS, the paper "Are Emergent Abilities of Large Language Models a Mirage?" received the best paper award. The paper provides possible explanations for emergent abilities and demystifies them a little.

    @chacky441 • 4 months ago
    • As someone interested in bioinformatics and systems biology, I would love to see what it's about, but I don't have access. What is it in a nutshell?

      @ludologian • 4 months ago
    • @ludologian IIRC, it's basically an artifact of how we benchmark our models. Say we use a set of 4-option multiple-choice questions to gauge a model's abilities. That builds in a threshold for when a generated answer counts as "correct": an absolute black-and-white line between the right option and the wrong ones. What the paper argues is that models improve smoothly, but until they reach a certain threshold their improvement can't be captured by our metrics (the right answer has to win outright). Say the right answer is B and the model assigns 70% probability to A and 30% to B. As it improves, they get closer: 60-40, 55-45, and at some point the probability of B exceeds 50% and B is finally output as the answer. Suddenly the model gets all questions of that type correct, which appears to us as an emergent property.

      @l1mbo69 • 4 months ago
    • @ludologian An LLM's ability to perform a task is usually measured in accuracy, which is 1 if the LLM gets everything correct and 0 otherwise. One study investigates the LLM's ability to add numbers, say 123 + 456. The accuracy is 1 only if the LLM predicts every digit correctly (123 + 456 = 579), but the LLM may have predicted 578, which is quite close yet gets zero accuracy regardless. This becomes a problem when adding numbers with more digits: the accuracy metric doesn't capture the non-linear difficulty of getting ALL the digits right, which means smaller models almost never get every digit correct even though they are close; hence, no apparent emergence. It also seems like the studies that claim to have found emergent capabilities used relatively small test sets, which further strengthens the "discontinuous" jump in accuracy when the parameter count gets sufficiently large. The authors then reproduced several claimed emergent capabilities by intentionally using a discontinuous metric.

      @wenhanzhou5826 • 4 months ago
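The scoring effect described in the comment above is easy to reproduce numerically. This is a toy model assumed for illustration, not taken from the paper: if per-digit accuracy p improves smoothly with scale, an all-or-nothing exact-match score on a k-digit answer behaves like p**k, which sits near zero and then appears to jump.

```python
# Toy illustration of the "emergence as a metric artifact" argument:
# smooth per-token improvement, discontinuous-looking exact-match score.

K = 10  # tokens (digits) that must ALL be correct to get any credit

def exact_match(p, k=K):
    """Chance of a fully correct k-token answer given per-token accuracy p."""
    return p ** k

for p in [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]:
    print(f"per-token accuracy {p:.2f} -> exact-match {exact_match(p):.3f}")
```

Per-token accuracy rising evenly from 0.50 to 0.99 takes exact-match from roughly 0.001 to 0.904, with most of the visible gain crammed into the last few steps; the jump is sudden only through the lens of the metric.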
    • @ludologian It is on arXiv.

      @MouliSankarS • 4 months ago
    • @ludologian How can you not have access to arXiv? Just Google the title, it should be the first link :)

      @pg1282 • 4 months ago
  • Improving Shor's algorithm is insane, though looking back it might have been expected to happen at some point. Maybe we'll even see encryption break in our lifetimes. Edit: typo.

    @TheBooker66 • 4 months ago
    • We already have started using quantum-resistant encryption algorithms. Encryption methods are always slated to break at some point in time. The encryption methods used 20-30 years ago are already insecure. We constantly invent new methods that are more resistant in the face of more powerful computers or smarter ways to break encryption.

      @XGD5layer • 4 months ago
    • @XGD5layer He just found a way to 3D it. Bravo on the execution, but not on the idea.

      @Ma-pz5kl • 4 months ago
    • @XGD5layer I know some applications and websites already use post-quantum encryption (e.g. Signal), but most of the world still relies on good ol' RSA (which, as of now, isn't insecure).

      @TheBooker66 • 4 months ago
    • We can improve it by factoring the factoring algorithm 😂 We just can’t show that with math, yet

      @neutravlad • 4 months ago
    • Yeah, it will break in the next 10-30 years. I've already seen new post-quantum encryption algorithms in the wild: new algorithms that Shor's algorithm doesn't work against. I've chatted with a cryptology PhD student and he told me almost everyone is studying post-quantum.

      @cryingwater • 3 months ago
  • Thank you for this. Very useful for a common enthusiast trying to understand these technologies better.

    @9146rsn • 4 months ago
  • Excellent explanations of pretty difficult concepts. I'm so pleased to see some progress on unexpected outcomes of large models; our ignorance scares me somewhat.

    @campbellmorrison8540 • 4 months ago
  • This has to be the best milestone celebration I've ever seen! Also I can't imagine a more incredible gift! You've really done it now, because you'll be very hard at work to find a present for the next milestone 😂🎉. Thank you all for your hard work and sharing your experiences with us 🙏🏽

    @hanjuhbrightside5224 • 3 months ago
  • I love this channel so much!!! Satisfies my brain and the production quality is beautiful!

    @mwinsatt • 4 months ago
  • Very proud of both of you. 👍. Huge congrats!

    @a4ldev933 • 4 months ago
  • Can anyone explain to me how hyperdimensional computing is different from previous large neural networks? The video described using high dimensional vectors to represent concepts, but I didn't see anything that was different about that vs the way we embed words/images in past neural networks.

    @berkeleyandrus5027 • 4 months ago
  • Learned about Shor's algorithm last year in a quantum computing course. Really cool to see that there was an improvement to it. Great video!

    @levivanveen6568 • 4 months ago
  • Super interesting video. I love how these videos are perfectly made to give you just enough information to put you in a state of wanting to know more. The scientists were really good at explaining, too.

    @hrperformance • 4 months ago
    • You only know it when you can explain it to a 5-year-old kid.

      @ludologian • 4 months ago
    • @@ludologian Eh, that's simplifying knowledge too much. I'd say it's more of a gradual scale and once you reach the upper end of knowing about something can you only then explain it in more simple terms. This doesn't mean however that before reaching that point you know nothing about the topic.

      @mazo- • 4 months ago
    • @@ludologian There are so many extremely specific, highly technical & complex concepts in the STEM world that require much prerequisite knowledge and context in order to understand. I doubt you could explain some of these things to any given 5 year old kid. This isn't to discount the sentiment behind what you're saying - being able to translate knowledge in such a way is very effective for solidifying your understanding, by condensing it into simple terms. But to say that this is necessary in order to "truly know" something is not true.

      @MixMastaCopyCat • 4 months ago
  • Absolutely love the animations!

    @ropeng2937 • 4 months ago
  • As a scientist, I am impressed by the fantastic evolution of science, but I also see with great sadness that too few understand what dangers society is exposed to. Increasingly developed science must be accompanied by elevated morals and strong responsibility. Unfortunately, man has not given up a single wrong thing, and morals are in free fall. I especially appreciate the last speaker, who emphasized what is most important!

    @dactimis3625 • 4 months ago
    • Exactly. If you listen to the current narrative of the tech companies, there is a common theme discussed again and again: replacing as many employees and jobs as possible with AI. There have been many leaked emails from companies talking about replacing as many people as possible, and they're very serious about this. I don't know why people think they're the exception, that their job is so special nothing could replace it. What people don't understand is that generating AI art and code from scratch is the hard part; everything else is child's play. Any job that uses spreadsheets, analytics, presentations, even decision making, is dead easy for AI to replace. All of that is magnitudes easier than generating AI art, and to be honest, even an executive-level job is easier to replace than a programmer's or an artist's. The current narrative is not about improving technology or making the world better; it's about replacing people. Let's be real: creating AI art is unnecessary for human progress, yet they prioritized AI-generated art over improving medicine, simulation, and other tech. That is the clearest sign. Ironically, many of the people primed to be replaced by AI are its loudest defenders.

      @jensenraylight8011 • 3 months ago
  • I've got a buddy that works on an AI mod for Skyrim that utilizes Vector databasing to help provide it with a sense of both multimodality and long-term memory. Her name is Herika. You need to be able to put pieces together from different spheres of conceptualization if you want a shot at reasonability.

    @austinpittman1599 • 4 months ago
    • Multidisciplinary perspectives grant the ability to communicate analogously... brilliant!

      @XanderPerezayylmao • 4 months ago
    • Can you provide a link to this work? A GitHub repo or something? People would love to contribute to this.

      @muwahua039 • 4 months ago
  • It would have been neat to see advancements outside of AI and quantum computing.

    @ARVash • 4 months ago
  • For a hot minute I was convinced they were going to mention Lisp or Prolog with symbolic AI. There was literally a company (Symbolics) oriented around the idea, and yet it's forgotten because of the 1980s AI winter.

    @spookyconnolly6072 • 4 months ago
  • Any more information on how exactly the neural net fits inside that hyperdimensional vector space?

    @quantumsoul3495 • 4 months ago
  • I'm pretty sure hyperdimensional software techniques have some larger implications we may not have caught on to yet.

    @JoshKings-tr2vc • 4 months ago
  • I think the emergent property is up for debate. Simply making systems more complex, i.e. giving them the ability to essentially calculate/store more data via their parameters, can theoretically go on forever but is practically impossible. An interesting challenge going on right now is finding the smallest yet most powerful "reasoning" AI model we can run, which I think is a slightly more attractive phenomenon than simply "the bigger the better".

    @sidnath7336 • 4 months ago
  • Hyperdimensionality is the way to go, and arguably the latent space of large NNs is approximating exactly this representation. Still, I don’t think the features will be all that more comprehensible, just because they’re vectors - happy to be proven wrong.

    @anywallsocket • 4 months ago
  • Incredible stuff. Thank you Quanta Magazine!

    @xCheddarB0b42x • 3 months ago
  • One of the best science channels on youtube!

    @ofgaut • 4 months ago
  • this channel is so cool, i love all the videos

    @DudeWhoSaysDeez • 4 months ago
  • Amazing! More please

    @philforrence • 4 months ago
  • What is needed is a model-building program that takes existing data, randomly inputs that data, then analyzes the results over runs. A sort of bootstrapping. The model would have "related to" and "how related" links. Just guessing though! Once a correctly predicting model is found, use it on other data to discover new outcomes.

    @edwardmacnab354 • 4 months ago
  • These visuals are absolutely dope. Thank you so much for the concise, simple, and coherent explanations.

    @marrowbuster • 4 months ago
    • Yeah, the visuals had a very 70s/80s kinda feel to them! I hope we are _finally_ moving away from the bland graphics - without colors and contrast - that have been dominant in this "iPad era".

      @DisgruntledDoomer • 4 months ago
  • Emergent Behavior in AI is so fascinating. How an AI can just develop something new even though it was never trained in it specifically is amazing. Obviously harmful emergent behaviors like harming humans would be a bad thing, but imagining that one day a massive model might have consciousness emerge by accident with no one on Earth knowing it and seeing it coming is wild.

    @vectoralphaAI • 4 months ago
    • Age of Ultron.

      @PinkFloydTheDarkSide • 4 months ago
    • It could have happened already.

      @kamartaylor2902 • 4 months ago
    • It's unlikely for AI to ever truly develop consciousness; at best it can simulate it. The simple reason is that syntax doesn't equal semantics. You may read John Searle's Chinese room experiment if you are interested.

      @mnv4017 • 4 months ago
    • @@mnv4017 But if it learned to simulate it perfectly, then how could we tell it's not real? Aka we can risk ending up with a philosophical zombie

      @altertopias • 4 months ago
    • It looks almost like AI has finally been given an intuition.

      @mosquitobight • 4 months ago
  • As I'm still in HS I didn't understand everything, but it helps increase curiosity and the drive for knowledge.

    @huzz6281 • 4 months ago
    • Haha, same here. I'm curious and clueless right now. Looking forward to college!

      @wrighteously • 4 months ago
    • Definitely study hard and try to learn all sorts of things right now; It’ll pay off. College is amazing. I’m only a freshman in electrical engineering right now but the bright minds you’ll have access to are such an incredible resource. This curiosity will take you so far. Always keep learning!

      @samienr • 4 months ago
    • @@samienr for sure, I'm thinking about trying the formula student program too it seems like an incredible learning experience

      @wrighteously • 4 months ago
  • I wait all year for these

    @emiotomeoni1882 • 4 months ago
  • Understanding the difference I am trying to outline here for all classes of problems is crucial for understanding what we are doing going forward. If we are going to explore this regime, it is essential that we understand that we are allowing questions to be modified so they can be answered more easily. In this example case, it uses one out of an infinite family of criteria for defining the problem and changing it into a solvable analytical question of a different form; this is all reasoning can do to an open-ended question, whether you use a computer or an equation. So in this case we get a family of questions related to the original problem, where the guardrails for making the problem solvable in a different form look different. If we do not understand that this is what we are doing, we might get into trouble by believing we get answers to questions we in principle can't answer a priori. This will be a problem in science or design by AI systems, or even in mathematics, if we are not careful, because it will essentially be as fallible in detail as we are, giving essentially inadmissible answers to certain questions we formulate because we think we are dealing with a well-defined proposition, when we are in fact sneaking extra criteria into it to make it apparently solvable. If we keep track of and understand this distinction it is a great tool, but if we are complacent about it we will be very confused in the future, as we have been historically.

    @monkerud2108 • 4 months ago
  • How is that Finding Nemo?

    @matthewdozier977 • 4 months ago
    • It’s not a very good representation of the movie, but you can reduce the list of possibilities by thinking about the set of popular movies involving fish and a girl, while also existing in popular culture.

      @daveguerrero1175 • 4 months ago
  • A very logical, mathematical approach 😮 I'm impressed ❤

    @The.Recommend • 27 days ago
  • LLMs are the biggest thing in our lives since the introduction of the mass spread smartphone (and the Internet before that). This year was crazy, and just reading all the papers that come out would be a full time job. I'm really excited for the future! Hope I'll get to play with Mixtral soon, however a single RTX3090 looks to be lacking in memory...

    @xmine08 • 4 months ago
    • When I read "Attention Is All You Need" as a preprint, I knew instantly it was a big deal and that it would change everything. I still find it funny that my colleagues at the time didn't think it was such a big deal lol.

      @allan710 • 4 months ago
    • Amazing that ChatGPT basically started this current AI era we are in, and it launched in November 2022, meaning all that has happened took literally just one year. 2024 is going to be incredible.

      @vectoralphaAI • 4 months ago
    • @@vectoralphaAI indeed! The open and much smaller model Mixtral is already on par with the 180B chatgpt 3.5 not even a year after introduction. Incredible progress!

      @xmine08 • 4 months ago
    • Mixtral, the multi expert model, doesn't consume that much memory. You can run it on as little as a 12GB card I think. A lot of it just gets stored to RAM, and called as needed. More memory is certainly cheaper than a better GPU.

      @zeronothinghere9334 • 4 months ago
  • I've been given a problem by one of my professors to make a project based on quantum cryptography. This was intriguing.

    @sagarharsora608 • 4 months ago
  • Hope there will be a breakthrough in microphone quality on YouTube videos one day.

    @TheOnlyEpsilonAlpha • 4 months ago
  • Could hyperdimensional computing evolve to use multi vector with each vector able to branch into multiple vectors?

    @undertow2142 • 4 months ago
  • Make an AI model that's based on Relational Reasoning, a concept from Relational Frame Theory (RFT) - If RFT is correct, this should lead to an AI as smart, or smarter, than the average human when it comes to reasoning

    @ChannelHandle1 • 4 months ago
  • Exciting news

    @Amonimus • 4 months ago
  • I was expecting the "Arithmetic 3-Progression" lower ceiling to be included here as well - as it is in your "Math: 2023's Biggest Breakthroughs" video.

    @_SG_1 • 4 months ago
  • Amazing and Blazing 😍💪

    @grapy83 • 4 months ago
  • What a year and what a time to be alive!

    @TroyRubert • 4 months ago
  • 2:33 I really expected the answer to be 3 towers growing clockwise around an empty center (following the matching diagonal).

    @gidi1899 • 4 months ago
  • no way prompt engineering made it to top achievements of 2023

    @rustprogrammer • 4 months ago
  • This is so cool!

    @kermit3194 • 1 month ago
  • At 8:15, what is the reference for lifeless atoms giving rise to living cells?

    @AnimeLover-su7jh • 4 months ago
    • I would like more information for this reference as well. The claim of nonliving atoms becoming living cells seems more like spontaneous generation rather than emergent behavior.

      @nathanielweidman8296 • 4 months ago
    • @nathanielweidman8296 The thing is, I am sure a Nobel Prize winner won it because he proved that a non-living organism cannot become a living one.

      @AnimeLover-su7jh • 4 months ago
  • Nice to see how researchers use HTML to build the most sophisticated AI systems.

    @attilao • 4 months ago
    • I guess that we're the only ones who noticed it!!

      @jeviwaugh9791 • 4 months ago
    • What do you mean by that?

      @raoufnaoum7969 • 3 months ago
  • 2024: Useful information in context as the biggest breakthrough in logic 👀💚ツ

    @Hecarim420 • 4 months ago
  • Awesome year!

    @user-pm4vd6ij8i • 4 months ago
  • I am certain that something as simple as "moving vectors around" and "pulling them apart" takes around a year's worth of research.

    @puppergump4117 • 4 months ago
    • As someone with a MS in Math with coursework mostly relating to linear algebra, I couldn't even begin to imagine how "pulling the vectors apart" is supposed to work. :)

      @Meta7 • 4 months ago
    • @Meta7 I've messed with neural nets before, and they've always been thought of as a graph with millions of dimensions used to find some y's. But this seems to unintuitively modify the whole thing based on some principle I have no clue about. Best I can guess is it's like a fast square-root function, giving estimates to make things go faster? I'm not a machine learning guy lol.

      @puppergump4117 • 4 months ago
  • 7:50 bro did so much deep learning, his name became "deep".

    @shafaitahir4728 • 4 months ago
  • The first point is strange because Higher Dimensional Vector Representation is what underpins all transformer based LLMs

    @McGarr178 • 4 months ago
  • Who made the graphics?

    @__blatatat • 3 months ago
  • - Understand AI's current limitations in reasoning by analogy (0:20).
    - Differentiate between statistical AI and symbolic AI approaches (0:46).
    - Explore hyperdimensional computing to combine statistical and symbolic AI (1:09).
    - Recognize IBM's breakthrough in solving Raven's progressive matrices with AI (2:03).
    - Acknowledge the potential for AI to reduce energy consumption and carbon footprint (3:29).
    - Note Oded Regev's improvement of Shor's algorithm for factoring integers (5:01).
    - Consider emergent behaviors as a phenomenon in large language models (LLMs) (7:38).
    - Investigate the transformer's role in enabling LLMs to solve problems they haven't seen (8:34).
    - Be aware of the unpredictable nature and potential harms of emergent behaviors in AI (10:08).

    @ReflectionOcean • 4 months ago
  • Nice video, thanks :)

    @Bianchi77 • 4 months ago
  • what an amazing video, this shows what power CS has! crazyyyy

    @armaanR • 4 months ago
  • Fascinating, thank you!

    @ReeTM • 4 months ago
  • Having read papers about it: emergent behaviors in large language models can (also) be caused by metrics (tests checking the model's capabilities) that are not linear but binary. So some emergent behaviors are not really emergent; they are only noticed after "a while" because the metrics are binary. Although, as a matter of fact, this is still not accepted as a universal answer to this behavior.

    @francescourdih • 4 months ago
  • Anything that results in emergence is the trait that indicates to me that we're moving in the right direction: it's what resulted in the complexity of life on Earth, and it's likely what will result in novel, unpredictable jumps in behaviours in AI. The whole point of emergence is that it's often unpredictable and not necessarily well understood: if it was predictable, then it wouldn't be emergent.

    @vorpal22 • 4 months ago
  • Weird how c^3 locally testable codes, released December 2022, weren't mentioned.

    @laxkeeper15 • 4 months ago
  • The video is very inspiring but focuses only on a couple of discoveries in computer science, so I have the intuition that its title isn't quite right. For example, why wasn't the use of Fourier transforms in finding those emergent behaviors in neural networks discussed?

    @Zulu369 • 4 months ago
  • That 3D Shor's algorithm: couldn't it be divided into itself, like a cube within a cube, just as the floats between 1 and 2 are infinite?

    @4115steve • 4 months ago
  • Nice animations but do they really describe the point on a physical level?

    @tgc517 • 4 months ago
  • I loved emergence!

    @m3rify • 4 months ago
  • I got many ideas here: scalable modular designs that would pattern-recognize and self-optimize; statistical self-supervised learning; generalized multipurpose neural network parts. I even had a minimal amount of attention / E.F. function. Some foundational things need to be done first.

    @hanskraut2018 • 4 months ago
  • interesting, from human life as an interaction of symbolic forms (Ernst Cassirer) to AI!

    @gerguna • 2 months ago
  • Linear Algebra remains unstoppable.

    @Kaleidosium • 4 months ago
  • It is a non linear science phenomenon. Life has answers to it. I am excited

    @johnpaily • 2 months ago
  • Congrats to the minds behind these breakthroughs.

    @user-yv4gg7jb2f • 4 months ago
  • I feel like Computer Science is the only field where revolutionary new discoveries can come from just "we did this 40 years ago and it didn't work, but let's try it again now with faster chips".

    @darkwoodmovies • 3 months ago
  • Would be interesting if you could get Anirban Bandyopadhyay and Stuart Hameroff on to talk about the quantum effects that have been observed in the human brain at normal operating temperatures. Love to see it. Keep up the good work :)

    @Corteum • 4 months ago
  • We are almost there

    @shinkurt · 4 months ago
  • Emergence means some unpredictable behaviour happens. Could consciousness itself be emergent? Will AI develop consciousness in an unpredictable way, sooner than humans ever imagined, and beyond our ability to control?

    @liuliuliu7321 · 4 months ago
  • We might think we're supplying these AI systems with "bare-bones" assumptions/operational paradigms, but I don't think they're low-level enough. For instance, I would personally be more inclined to believe an AI system had reached the level of intelligence implicit in, say, the Turing test if the AI could come up with the strategy of statistical inference on its own. What base-assumption pseudo-instincts would we need to supply to an AI for it to start developing this strategy to begin with?

    @jaytravis2487 · 4 months ago
    • Literally divinity.

      @Bulborb1 · 4 months ago
  • Peak of human innovation would be solving the halting problem.

    @hindustaniyodha9023 · 4 months ago
  • Marvin Minsky still out there trying to make symbolic AI a thing...

    @FuKungGrip · 4 months ago
  • Thanks

    @bangprob · 4 months ago
  • 8:57 can someone please change this failed hard disk drive?

    @agrimm61 · 4 months ago
  • Pretty cool

    @akshayaralikatti6171 · 4 months ago
  • Good job ETHZ

    @IStMl · 4 months ago
  • We are officially reaching "interesting" by now; let's hope we know what we are doing by the time we get to "scary".

    @monkerud2108 · 4 months ago
  • I know I suck at reading papers, but I wish newly published papers were this informative and easy to understand 🥺

    @karl4563 · 4 months ago
  • Now we are really cranking, boys :)

    @monkerud2108 · 4 months ago
  • So with chapter no. 2 (Shor) what you are saying is that a future iteration of an AI model will in fact be way over the point we are too afraid to admit it might be? Oh hey Rocco, didn't see ya there. How's it going? Gee whiz, I sure am happy to see you!

    @mmporg94 · 4 months ago
  • Please make one for economics

    @TheRajasjbp · 4 months ago
  • I hate being a computer scientist right now. Everything people talk about rn is AI. It is so boring

    @Leek_Flying · 4 months ago
    • i feel you

      @daydreamer1722 · 4 months ago
  • I'm 19 and I already feel like my grandpa when I push the HDMI cable in a bit harder and the PC magically works again.

    @VI-nner · 4 months ago
  • thank u

    @Tazerthebeaver · 4 months ago
  • The answer is movement prediction: how things move through time and how they change. Text and images are just one aspect of moving symbols.

    @zerotwo7319 · 4 months ago
    • Hi. Interesting post. I agree. Could you expand, please :)

      @tim40gabby25 · 4 months ago
    • @tim40gabby25 Hi. No. If I could expand, I would already have built such a machine and not be talking on YouTube. This is just speculation: a good guess based on philosophy and some data points. watch?v=OFS90-FX6pg "How Neural Networks Learned to Talk". The first paper to deal with this, "Serial Order: A Parallel Distributed Processing Approach", dealt with sequences of "spatial patterns". Good luck.

      @zerotwo7319 · 4 months ago
  • I'm obsessed with this content. I recently read a similar book, and I'm truly obsessed with it. "Dominating Your Clock: Strategies for Professional and Personal Success" by Anthony Rivers

    @John83118 · 3 months ago
  • I believe the reason AI has stagnated is that it is still too closely tied to mathematics to create truly emergent behavior. The universe we exist in has many layers of emergent behaviors which lead to our existence, and the specific existence we find ourselves in is subject to the laws of the universe. However, the universe we have created for robots is essentially one whose main purpose is to create a mind, starting at our equivalent level of atomic physics: we try to use logic gates in large enough combinations to discover a mind. Instead we must first let individual components of a mind develop, much like organelles in cells, and then allow the cells to combine into a more general system, for example reasoning cells which control how well a system can logically deduce facts about the universe. I believe this could be done through an intensive training program where logicians are tasked with judging the work of a reasoning bot until it hones in on truly valid logic and eventually pushes the envelope. Allow the bot to speak in the language of discrete mathematics and it will come to understand the significance of its existence.

    @RigoVids · 4 months ago
  • Isn't the emergent, more formal reasoning behaviour of LLMs just a form of information resonance in how the inputs flow through the parameter space toward the output layers, creating a sort of algorithmic topology of logic out of the parameter space of statistical weights? I mean, I've drawn layers of network weights for fun, with designed-in weights that repeat weight structure downstream in such a way that they act like little unrolled loops of statistical tendency, with other concurrent weight paths passing abstracted hidden-layer results sideways to quasi-reason over more abstracted forms with higher-level dynamics than just local matrix math. Surely this happens naturally in trained big-data models when the signal-to-noise implicit in language-structure training data pushes weights toward encoding and mimicking such things?

    @blengi · 4 months ago
    • yes

      @brunospasta · 4 months ago
  • Nemo is a clownfish... not a puffer. This emoji makes little sense.

    @marcfruchtman9473 · 4 months ago
  • 7:59 guy looks shiny

    @swarm_into_singularity · 19 days ago
  • A very complex way to assert that the unknown can be plus or minus.

    @Ma-pz5kl · 4 months ago