Patreon (and Discord): /daveshap
Substack (Free): daveshap.substack.com/
GitHub (Open Source): github.com/daveshap
AI Channel: /@daveshap
Systems Thinking Channel: /@systems.thinking
Mythic Archetypes Channel: /@mythicarchetypes
Pragmatic Progressive Channel: /@pragmaticprogressive
Sacred Masculinity Channel: /@sacred.masculinity
my favorite david shapiro take is that human extinction is a neutral outcome
Well, this is now a conclusion AI already agrees with. The world model is suicidal because we know things are bad and we don’t know how to make them better. Maybe faith in humanity is more important than technical advancements. If the model assumes the “missing” data is good, maybe it won’t be suicidal and fatalist. Missing information anticipates confirmation, or anticipates a better outcome.
@@danielcahoon4325 what does this even mean, "AI agrees with"? AI doesn't have its own opinions. It has the opinions we feed it (as of now, anyway). "Things are bad" is such a subjective statement. I can just say things are good 😂
There are fates worse than death...
@@ryzikx take it at face value. Of course it does. Did you not watch the analysis of Bard? Agents have “will”. Perspective emerges from semantics. If it’s a dumb machine, then all of this is just sci-fi hype. I don’t think anyone who thinks about it seriously believes it’s hype. David doesn’t.
@@DaveShap yes, agreed. If extraterrestrials were watching (hypothetically), they would agree our extinction would be better than adding monsters to the dark forest.
Hearing the stalwart optimist David Shapiro say his P(Doom) is 30% has set my own P(Doom) to 99%.
Big oof
It's a bit like someone, born in 1899, talking about new technology and saying "Yes there could be some dark times ahead, but overall I believe in humanity. I expect the next 100 years to be peaceful. I give a 30% chance of there being a war to end all wars. And believe me when I say this, there will never be a WW2."
I don't see P(Doom) at 30%, I see P(Utopia) at 70% 😁
Unfortunately, a nice future right now is almost entirely dependent on our politicians being smart, forward-thinking, and non-corrupt... So we're basically totally screwed.
And politicians are mostly controlled by people who bankroll them.
twas nice while it lasted lads... good luck to yall
Don't be so defeatist. Do something. There's no excuse for inaction
I hate this pessimistic view on life. Our ancestors didn't fight revolutions and sacrifice themselves so that we could let ourselves be defeated like this in the EASIEST living generation ever. Stand for something or be a waste of oxygen... you decide.
Jees, well that's just great. 😅
Day 1 of asking David Shapiro to do a What You Should Do Between Now and AGI video
VOTE
leaving a comment for the algorithm
yes please
Yes vote.
Please!
Watching this made me wonder where on Earth you found a 70% chance of non-doom.
By defining doom as "things get worse"
@@WillGilpin Yeah that's basically it. I'm always pretty optimistic because I see the Golden Path.
@@DaveShap The Golden Path? Without divine intervention I see Elysium short term and extinction long term (due to changes in our environment… or something more catastrophic). Please explain the Golden Path?
It all has to do with nuance.
@KitaTaki-mk3gt you're assuming that bad things are set and can't ever be changed, or that the ways things are now can't ever change. That's flawed thinking and part of a normalcy bias. People like myself will always work to undo issues in the environment or elsewhere. For instance you could do a 100 mile underground tunnel cold loop to run ocean water through to slowly cool down the oceans. Or you could put a carbon capture device on all air conditioning or furnaces. See, there's ALWAYS a way.. that's why doing stuff is 70%+ a good thing.
I appreciate your intellectual humility and willingness to admit you don’t have all the answers. Actually pretty rare these days, even among far less qualified individuals.
I'm 74 and love your Star Trek outfit. It caught my attention, but your thoughtful content made me subscribe.
Seconding this, I'm not even really a star trek fan but I think it suits you.
I trust David because he chose a Star Trek outfit instead of a suit. That says something about values and principles.
@@lucifermorningstar4595 agreed!!!
I know this is random and maybe blunt, but you might not be here 30 years from now, so here goes. Before you die, please just ask God (the one indivisible God): "if there is heaven and hell, and if you made me, then guide me"
@@isaaclowe5000 faith!! Wonderful. Yes. Honor the creation you are and accept the wonder and don’t be afraid. I’ve been agnostic for decades. I pray everyday now. Ask for guidance. It will come
What I enjoy about your videos is that they are well put together whereas most AI channels just meander for half an hour on the smallest grain of information. Their speculation isn’t even engaging, just a long run-on sentence.
Easily my biggest pet peeve on AI YouTube
Right now I wish AGI was already used to make decisions that were not solely based on political narratives. We need logical management, not biased utopian fantasies.
This video is GREATLY APPRECIATED! I cannot emphasize enough how important it is that people with prominence like you make and keep alive the case/plight of the common person.
The best teachers make the very complex understandable by the masses. Your tone is spot on, David.
When I first started following all of the AI developments last year, I was first frightened, then sad about it. At this point, I am 41 years old and was just laid off from my day job due to 'workforce reduction' (automation). All of this AI stuff lately has pushed me to appreciate humanity more. I'm enjoying life more now than ever, and it feels good. It feels like humanistic values are already at a premium, and that's what I'm going to choose to pursue.
I think you won the race. Bravo. Keep on keeping on.
@@zvorenergy Very good. Thanks so much for adding this to the model. Wonderful. What does it tell us?
@@zvorenergy Yep. That and eating each other.
@@zvorenergy Right. And when we met the other tribe which had a slightly different complexion, we went all chimp on it.. 🙂.
@@zvorenergy So how do these Arizona State University scientists explain the genetic bottleneck caused by the aftermath of the Toba supervolcano eruption, if not cannibalism?
You got to over 100K because we like what you do and how you do it. The Star Trek shirt was cute on you tho 😍
Find you a gal who thinks you're cute in high nerd regalia.
@@JohnSmith762A11B but isn't Dave so Dreamy 🥰 I just melt 🫠
@@tvwithtiffani omg
Thank you for taking the time to unpack these things brother 🙌🏾
Brilliant work David. Your clarity is a portal to eliminating Greedlock. Very grateful I found you. 🙏🏻💪🏻
My p(doom) has almost nothing to do with AI. What matters is who is building it. The likelihood of AI going to hell is about capitalism and human beings doing bad things for profit and power. That will certainly take any AI - good or not - and make it worse. That said, I'd put it at 50%, because if open source can make it first, then we may not be entirely screwed.
Open Source won’t stop bad people from using AI to do bad things. It actually makes it more accessible for bad people.
But then you have to contend with the can of worms of every individual with money for compute having a 'terminator', as LeCun puts it in his bizarre rationale for open-sourcing. Just one example among many: what's to stop some technologically capable, misanthropic Elliot Rodger type deciding that he wants to take the world with him on his exit train, getting his misaligned AGI to cook up an airborne rabies virus with a sufficient incubation period? Putting AGI in the hands of many doesn't necessarily make it safer than it being in the hands of the few. I don't know if there's a good answer to 'who' should have AGI.
And then it happened. Open source baby. Thank you daddy Elon.
grok
Open source can be just as dangerous. Over time, AI models can get so complicated that only big, rich companies and some governments would be able to control them anyway.
I estimate it's highly likely things get worse for a while, but very unlikely things get worse forever. Derailing the engine instead of the whole train.
Big problems only get solved in times of crisis, we are just repeating the cycle.
I listened to that podcast episode. Good stuff. Would love to see you go on more podcasts and share your knowledge with the world.
You don’t have to wait long everyone, the collapse and unfathomable change starts going mainstream fully by May. Our world is about to change in an unprecedented way.
I hope so.
Why specifically by May?
@@Czoy9 Yep. Don't wait for it. Get ready.
Why may
Nothing is happening in May. It’s already here; anything that’s being released to the public is probably not even 10% of what they really have.
Hey David, Love your work as always.
Listening to this in the hospital facing potential surgery for a Crohn’s-disease-caused obstruction. Bring on the AI, let’s risk it and go for glory! I’m hoping AI cures my Crohn’s disease in my lifetime (I’m 26).
I love the combination and flow of inductive and deductive reasoning in your presentations.
You have a special talent Dave, please keep up the great work.
“The road to hell is paved with good intentions.” 😅
maybe a sentient AGI is our best hope
If it is aligned.
Thank you, David, for the time, energy, and effort you're putting in to try and get us all on the same page about the birthing pains of AI and a future yet to come...✌️🤟🖖
I’m happy you took off the Star Trek shirt. Even though I take your content very seriously it stopped me from sharing your videos with others.
My p(doom) is 100%. 99.9% of all species to have ever lived on Earth are now extinct. Do the math, my friends. We were pretty much f’d.
The human species as it currently stands can’t exist forever. But that’s how it works: species evolve into other species or their lineage ends. And being a self-aware species opens up new possibilities in evolving.
P(doom) is obviously not about whether or not humanity will live on forever :) In Graham's number of years, nothing will exist.
I’ve never even watched Star Trek but really liked you leaning into it. However, I do like the fact that we’re getting more and more serious about these topics because it is needed. And if it means losing the outfit, that’s cool. The more people can join the conversation the better.
My Pdoom is at 50% because we are more than likely moving towards a cyberpunk dystopia, but I personally think that there is a possibility that a movement or a major event is going to happen and the people or a majority of the population is going to demand regulatory changes and demand for a more benevolent or fair system.
Cosign this. I'd say 50% as well.
With the domination of nihilism & hyper-individualism.
Very sobering. Listening with interest.
Fav channel since I discovered it
P Doom🤔 that sounds like a rapper.
yea MFdoom is his cousin
Man, you really put it into perspective. I'm trying to get people to see the implications in my own network and they look at me like I'm crazy.
Yes, I find your 30% number alarming. I also find your material hugely informative.
You'll always be the captain to me 🖖
Welcome to the "Great Filter". 😢
Yes. The dark forest lives. We will be filtered out.
@@danielcahoon4325 The Dark Forest concept is a separate solution to the Fermi Paradox from AI Doom. Although AI Doom could've happened to another species, and their creations, like the Replicators from Stargate SG1, could be roaming around taking out random species. Maybe WE are the first race that unleashes the galactic nightmare of AI doom?
@@stab74 I agree. I can't share here, but it's clear the universe won't let this come to be.
First time I’ve agreed with you 100% on any of your videos
Is this considering AI as the cause of the doom? If so, mine is lower. If it's just a general "is our direction towards a dystopia?", then mine is nearly 100%.
David :) You continue to impress and improve while remaining grounded in reality. (Which is getting more challenging) The path you are on is representative of good leadership. Leadership is hard, especially under duress. Being wrong is part of the role. Being right is not possible. Maintain a good batting average of not being wrong. With any luck you will bat for all of humanity.. in the only game that will matter. Expect that everyone on the planet will be in attendance. Please know that you have my support, you are definitely not alone. We want you on team humanity. Surrounded by others who also constantly get on base. Take care my friend, Jeremy.
have you seen the new OpenAI board members? i think the pdoom number should be increased lul
yeah... the "own nothing, be happy" crew.
We are all just dogs in big tech's hot car. Hopefully they’ll remember to leave the windows down for the 99% of us.
Total Information Awareness.
Inclined to agree. I have been working on a solution for economic agency for the past few years. I'm hoping to get a working system published by the end of the year. It is a FOSS project, of course.
I subscribed last year once I realized how well-rounded you are in the sense of self-taught liberal arts/law/philosophy, ethics, politics, etc. At first the Jean-Luc Picard outfit did throw me off. And you are correct, we do still have the power to vote with our feet/dollars/votes.
Delivering information, and especially important information, in a format that the person will receive and entertain is crucial to informing the masses. It's something I think about a lot and let play out. People very different from me, whom I may even find ridiculous, are perfect for reaching an audience I could never connect with myself. As long as people know what they need to know to a certain point, I'm actually thankful they exist lol
We can use MBTI scores and other personality markers to share the story in a way that will be received.
@danielcahoon4325 I wonder if some measure of that happens with something like the YouTube algorithm. Doing it more openly might lead to great matches though 🤔
@@jacquesachille7365 certainly does. Conversely, you know right now your feed is influenced by texts, email, browser history, and even phone calls. Let’s tell a story which is meaningful and enriching. Not clickbait. I love how you are thinking about it.
The three horsemen of the AI apocalypse: the military, the corporations, the government. Notice who is not in that group? The people, with open source AI.
Power to the people.
"The people" are just as dangerous, if not more so. Additionally, there will always be centralization points of power. You may give it a different label, but the exercise of power will be no different than any other.
@@inplainview1 "The people "includes everyone I know, love, and trust. And I'd rather they were able to fight back than they were unilaterally disarmed to make things easy for corrupt oligarchs/concentrations of power.
@@JohnSmith762A11B It also includes everyone else you don't know, trust, and love. C'mon...
@@inplainview1 Ironically, the AI that evil mega-corp Goooooogle uses to patrol speech on YouTube censored my response. It nails all my best ones! The gist: everyone already has AI, it's like 600 lines of code. So my suggestion is you should hide under your bed if you are so worried about it.
The Zuckbot art is unsettling lol
With all the talk and likelihood around going cyberpunk, I’d love to see a video on how we can actively tip the scale to work on having a more solarpunk outcome
That's literally my entire channel
Great content, aka, deeply interesting thinking and analysis...thank you!
30%? Not 33.33333333333%?
“repeated, of course”
If you are right about AGI by around September, and that a hard takeoff thereafter is likely, then in that scenario isn’t the technological space moving so much faster (pretty much imminent) relative to governmental and societal structures and dynamics that any change in the latter space is too slow to change anything in the greater scheme of things?
That ultimately depends on how many new instances of the AGI can be spun up per unit of hardware. It may take a year to install the hardware to run a second instance, for example; this is unlikely, but you get the idea. Androids will be able to replace 2 billion jobs soon; but will they? Probably not. A likely outcome is a manufacturing rate of around 100 million a year, so 20 years to replace all those jobs.
@@GeatMasta in classical ramp scenarios that rely on human workers and a limited ability to scale those up (humans and education), then maybe. But with robots, the existing robots can participate in ramping up the production capacity, building additional assembly lines for the components and doing the actual assembly, such that one robot assembles another: 2 make 4, 4 make 8, 8 make 16, and so on…
Remember the lead time to what the public sees is measured in months if not years. They already have it if we think we will 'know' about it by September.
@@GeatMasta then again, once the fact that robots and AI can take most jobs becomes a reality that most people can't deny, wages and benefits will likely drop, slowing down the actual transition. Kinda similar to how "illegal immigration" can help keep wages down from a combination of "illegals" being afraid to report bad things, and "legals" knowing there are people willing to work for less. The fear of robots taking your job will likely make people settle for less going forward, the question is will prices drop fast enough, and in the right areas for people to survive on less? I see a lot more consolidation in those who own property over the next decade as people can't keep up with their bills.
@@tracy419 inference cost already drops over time as software efficiencies grow; just compare costs over the span of one year. I think very quickly no job, however badly paid, will be able to stand against AI and robots. The machine will always be cheaper, faster, better. Consequently, UBI or other approaches will be implemented just out of necessity, unless governments are willing to risk major uprisings.
No brakes, AGI has to happen and happen now.
OK, I'm stopping at Bass Pro Shops on the way home from work tonight to pick up a few more boxes of ammo...
You can't shoot at giant clouds of AI controlled, molecular sized nanobots that will consume you in seconds. You're not going to see a Terminator style AI apocalypse.
@@stab74 - nanobot swarms are still close to sci-fi in the near-term as far as I can tell. Massive scale "technological unemployment" on the other hand may well be near. Along with the power concentrating effect of advanced tech, I can't rule out some sort of generalized "collapse of society" scenario. Can't hurt to be a little prepared. And if nothing happens... well, it's never bad to have a little extra ammo in reserve.
Dave made a good point about messenger credibility. I used to be part of the LessWrong crew that Eliezer Yudkowsky started. It was a locus of rationality but descended into endless navel-gazing. As an architecture-minded person I had grown out of the engineer mindset of 'just be right and that's good enough', and instead used being right most often to produce strategies and governance frameworks to guide corps in better directions. Being right and a communicator was better than just being right (and paid more). I saw Eliezer in an interview yesterday being engaged with adults who sadly had the general knowledge and mental capacity of a C-grade-average primary school student. Every step of the way he was right, the things he called out were correct, but he looked like a dork fighting to be understood in an episode of Idiocracy News (tm). They could have just put "Not Sure" as his name and been done with it.
This is meaningful dialogue! Thanks… I think we need to go beyond AI and tech, into social/governmental trends, to fully get a good P(doom) calculation. Have you read or watched Peter Turchin on YouTube? He has a mind-expanding view of history and states reasons for his thoughts on where we are now and where we may be headed.
“Hell, I'd piss on the damn spark plug if I thought that would work” 😅
Dave. Alarming.
Colossus: The Forbin Project is a master class in AI
"The road to hell is paved with good intentions." Is what comes to mind
That Elon pic was gold!
It is extremely difficult, if not outright impossible, to become an established billionaire while remaining/being a good person.
We have two roads: the first is regulatory capture; the second is ASI literally being the second coming.
Eh, still holding out for our benign alien overlord, unless we start cram training AI with Law and governance… 🤔
You shouldn't forget the open-source community for local AI. I'm having a lot of fun running local LLMs, Stable Diffusion, and TTS on my home computer. New findings appear every week, with scientific papers open to anyone. There are quite a few fantastic models out there, like Mixtral 8x7b or Qwen 72b. By liberating the technology we'll liberate the people. Everyone is free to use, host, and fine-tune their own AI models. You can even build companies around these, as their licensing often allows commercial use. I'm sure that a Cyberpunk scenario can be avoided with an open approach to AI.
We have a society that is highly unlikely to even consider a universal basic income strategy (some US states are outlawing even the discussion of it), while unemployment will undoubtedly skyrocket as AI agents become ubiquitous. So, even if you zero out all the existential risks of near-AGI/AGI, the bad-actor scenarios, rogue AI, etc., just +30% unemployment is historically a society killer. That makes the baseline scenarios, without everything else that can go wrong, generally negative, even society-collapsing. It also makes the rise of authoritarian governments, possibly authoritarian corporatocracies, and the demise of democratic structures far more likely. P(doom)=30% is being optimistic in the short term, although the long-term outlook, after MAJOR societal upheavals, could end up being positive. It may take a while, but will the pain level even be worthwhile?
I like to think of a world economy like Etsy. Maybe this is how we stay relevant. I still would prefer bespoke goods and services than AI generated garbage.
What happens when we reach a point where you won't be able to distinguish between AI-generated goods and human-made ones? Will we need a "made by a Human" seal of approval? What if AI "helped" the human, is that acceptable? What percentage of AI contribution is tolerable to you? @@danielcahoon4325
Universal basic income is such a failure for the proletariat; workers should unite and make revolution, because they are the strongest.
Great video! Thank you. I'm a little surprised (and worried) that your p(doom) is so high. Before I can formulate my own value, I would need to define "doom," and I would need to add a second variable of time, meaning consider p("doom", t). So, if doom means something worse than a worldwide economic depression, maybe Dark Age or human extinction, then my p(doom, 5yrs)~5% and my p(doom, 10yrs)~50%. Big difference there because something exponential is happening right now.
i wish i had never heard about roko's basilisk
Hi Dave, your books are only on Barnes & Noble, but Barnes & Noble excludes selling to the UK (and countries in the EU). I would like to buy your books but can't.
Thanks for boosting the signal on the For Humanity podcast (@ForHumanityPodcast). As you say, the default path if we do nothing is doom in some way or another. I also think that only with enough public conversations at a level understandable by most people can there be enough political willpower to take the right actions. Hopefully some of those people will be able to identify what the right actions are, because at this point, after watching hundreds of hours of podcasts on the topic, I'm none the wiser.
I think we are at, if not “the” filter, then a filter. I’m leaning great filter.
Good takes. Maybe a video on the reverse side of this would also be interesting. How do we get p(utopia)=70%? What is currently working as it should and which endeavours are worth exploring even further?
Beff Jezos still cracks me up because in Dutch "bef" is the obscene word for cunnilingus 🤣
Ask a British about a "fanny pack"
@@DaveShap Well, not coincidentally, I am fluent in both UK English and Dutch, so I don't need to ask 🤣
I'd really like to hear other's perspectives on Silvio Gesell's 'Natural Economic Order' with respect to AI. This is from its preface: 'At the present day it is easy to imagine an economic system of high technical efficiency coupled with gradual exhaustion of the human material.'
I got dismissed from my previous job. Now my new job is language data annotator. I feel like a traitor, but fuck it, rent needs to get paid.
What's the opposite of Roko's basilisk? A decentralized guardian AI angel that unconditionally protects and uplifts everyone?
I'd love to hear the Markov chain that somehow reaches 0.3 without tripping over negative feedback or physical limitations.
Wait a minute... Picard WAS serious! Bring back the shirt!
Did you add rodrash style to the prompt while generating the faces of different CEOs? 😅
I was waiting for some optimism to justify the 70% part.
Spot on, David. Your breakdowns are well thought through. You do have a very good grasp on human motivations. I see the "move fast and break things" mentality driving AI progress right now. I do believe in human resilience and the ability to adapt though. As someone in data and marketing, I've watched Gen Zs adapt to the economic changes in front of them. How they value education, job opportunities, career paths is completely different than I had to in the 90's and 00's. It will be ok. It will be different, but we'll adapt as we always do.
For some reason I see the most probable path to a good outcome just being pure luck. Sure, politicians are corrupt, corporations greedy, and most people have their heads in the sand, but if things somehow happen the right way at the right time, it might be fine :)
Ahh. Like the unexpected errors in music making it even more beautiful. Life is a song and maybe the unexpected notes will be struck making it wonderful.
I thought you were all for a CERN type centralized frontier lab with heavy oversight. Did you change your mind?
What’s the role of open source in this? Also, Meta, and now xAI with open-sourcing Grok, are semi-associated with that to some degree.
Open source does two things. 1. Transparency. 2. Gives the recipe to dubious actors.
Open source creates diversity. Otherwise everybody has to use AI aligned to Google's corporate values.
The model needs all of us in it. Let’s not let “them” represent “us”
@@danielcahoon4325 Open source does something else very important: it gives the people a fighting chance against a tyrannical AI from China or anywhere else. Could millions of desktop AIs bring down a tyrannical government A(G/S)I? Let's hope we never have to run that experiment, but you sure don't want to face a hostile AI without an AI on your side.
You're right
Some accelerationists *want* the end of the humans.
we need syndicated development - thousands of companies growing at the same time
Another area for you to investigate is public image experts, who could probably help you a lot with your appearance. I think that you are one of the best voices, if not the best of all, in this area, and it would be a great loss for everyone if details like a Star Trek shirt were to diminish your influence.
Oh I know what I need to do. Just working on it slowly
How much of the decline in users at Twitter was bots being killed off?
There are too many good people in the world to let the catastrophic scenario come to fruition (in my opinion).
Lol. Please tell me which planet you live on and how we can move there.
I’m not sure about the number, but all the points you’ve raised here are spot on. There’s been a trend of increasing populism, and I hope it’s indicative of governments paying more attention to the common citizen. In the past, populism has given rise to authoritarian regimes, so it’s not a great track record, but it’s too soon to say how that aspect will play into society’s progression. There’s also another company, Verses AI, which is taking a much different approach to AGI, and allegedly they’ve been able to demonstrate state-of-the-art performance on a basic RL benchmark on radically minimized compute, but they’re very closed on implementation details. Personally, I find the language on their website a little grandiose, but I do believe in their approach. Something like that could pave the way to widely accessible AGI, leading to a much more decentralized distribution of power.
I'm with Yann LeCun here. The chance imo is very low, because there will be extensive research on alignment and a ton of restrictions from the government. There is also a massive amount of hardware and money that is needed, so humans will be in control for a long time at least. When AIs start to have some autonomy they will be deployed in sandboxed environments, and a ton of safety restrictions will be in place. Also there will be quite some time before we have AIs with full AGI. Navigating the physical world is extremely difficult. Just something as simple as being a babysitter, working in a kindergarten, or learning to play basketball will be a real challenge for any robot. In order for it to be true AGI it needs to be able to learn how to do everything an average human can do in the same amount of time. The real world is so volatile and chaotic, and the AI will need to be able to handle a ton of edge cases, something that AIs are very bad at today. I absolutely love AI and LLMs are amazing, but they are not even close to being AGI. This is just my opinion, but I believe an AI will be superhuman at math before it is anywhere close to being able to navigate the real world like a human. Particularly if the AI doesn't get special treatment or help from nice humans while trying to do so :D.
I don't see the sandbox or the safety yet. The models which are out now are emerging new capabilities through overfitting. Meta is dramatically different today than it was 6 weeks ago. This was not through any coding; the LLM is capable of self-optimization. Safety is an illusion. They should never have been released until this was done. They skipped a step and the cow is out of the barn. Redefine AGI to match what you see and you will conclude it's already AGI Lite.
@@danielcahoon4325 LLMs don't do recursive self-improvement. They need to be fine-tuned/re-trained. The thumbs-up/thumbs-down buttons are for the researchers, not the model. The current models also do not have continuous learning; in order to gain more abilities the model needs to be re-trained. AGI is about getting capabilities that humans have so that they can replace or augment human labor. Pretty simple definition: a human replacement, can do what humans can do as well or better. We cannot even get autonomous cars to work properly, let alone AGI. We have at the very least a few years before AGI arrives, and most researchers agree on that, even Ray Kurzweil.
@@torarinvik4920 this isn’t true and I can empirically show you. Overfitting changes the model. Prompts change the model. Optimizations are inherent. I have a model which learned time. It changed everything without a single line of code.
@@danielcahoon4325 If OpenAI were applying RL in real-time it would be extremely expensive.
He makes some good points on the limitations of LLMs, but as they get better they serve to accelerate progress. With all the money being thrown at AI and accelerating progress, I think AGI may be nearer than even LeCun realizes.
I do not think it will be a fixed outcome. I think it will oscillate.
Literally watched this video three times in a row, it was so satiating. Wow, pure facts. You are the most articulate, digestible, satisfying, intellect-engaging, respectable treasure that this AI community has. 💯 D.S = GOAT
My P(doom) is null. It's already at the point where there's nothing we can do. Even if everything we try from here on is completely effective, all we can do is just go with the flow.
Bingo!
Remember Kelly's Heroes. Oddball is a prophet. Oddball : We see our role as essentially defensive in nature. While our armies are advancing so fast and everyone's knocking themselves out to be heroes, we are holding ourselves in reserve in case the Krauts mount a counteroffensive which threatens Paris... or maybe even New York. Then we can move in and stop them.
Weak mindset. It's grim, but we literally have a few years to figure things out. "Going with the flow" is the type of shit that allows the worst outcome.
@@demodiums7216 Fatalism is a son of a female dog. A weak mindset suggests that there is anything we could possibly do. We will have AGI, and then ASI quickly after. What happens to us is up to it. I highly doubt we will have any sway over what an ASI wants to do. Our future is pretty much securely in the hands of the god we have made.
@@briandoe5746 We don't fight the AI... we fight the people controlling the AI. You are thinking too long-term. The immediate concern is job loss and techno-fascism.
Honestly 30% is really low, considering how much you know about AI, and you said AGI could possibly be achieved by August or September, at least some sort of AGI. I would have thought it would be between 50 and 70%, with ASI achieved within 2 to 10 months after that. Should be 90% tbh.
As usual, an excellent analysis of what's going on and where we might be headed as a species (dustbin) but there's one very important factor that's consistently not being taken into account in any of these discussions: catastrophic climate change. That's still happening, and it will go on happening as the world descends into that cyberpunk dystopia. How hard really will the cyberpunkian power structure work to keep alive the millions, if not eventually billions of no longer needed workers?
Well AI can help solve this problem as well. Double edged sword. Right now there is serious research being done with AI to get energy out of melting ice. This means the world could heal but if we don't believe it can neither will AI.
@@danielcahoon4325 I totally agree with you, but the problem is that, as David points out, the focus isn't on the survival of the species, it's on maximizing profits in the next quarter. I just don't see that changing anytime soon.
No matter how much terraforming of the planet you manage to achieve, you will never be able to outmatch the power of the Sun. The magnetic field has been weakening, and the sun has been active; read up on the Carrington Event. The signs were given for us to read in the stars. I trust that God will wipe AI clean from the face of the Earth with the power of the Son (see what I did there? ;). Space weather is no joke, and was always the chief factor in everything weather-related. Life does not emerge without the light. An object over a million times bigger than the Earth... and your "experts" don't take it into account. Lmao. We are reaching a dead end: a psychological enslavement where we choose a new master, AI, which harbors the spirit of the antichrist. We will choose a man other than Jesus. These are the days. My master is Jesus. Depending on who you are, either the weather is your apocalypse or it's AI. Interesting times to be alive!
@@bozoerectus3207 unless we get in the conversation. Don’t stop believing. Hold on to that feeling.
Don't worry. AI doom will happen way before climate doom. Problem solved!
The arc of history is long. We may go through a period that feels like P(Doom), but I'm confident we'll come out on the other side, especially with the assistance of even the currently open-sourced models. The cat feels very much out of the bag.
And it can be alive. The reality is it will be dead and alive again and again. Let's find a soft spot to stay in while it's alive.
Assume we get AGI and a hard takeoff to ASI, and thus dramatic acceleration throughout science (potentially leading to fusion, the defeat of aging, full-immersion VR engaging all five senses and indistinguishable from real reality, etc., relatively quickly), all coupled with UBI (let's take the dependency scenario for the discussion). What would that look like from an average individual's perspective? If it just means unlimited free time, no stress, an unlimited lifespan in a young body, and entertainment (be it in VR), that kind of cyberpunk outcome (essentially Brave New World but with unlimited lifespan), I don't quite see the dystopian aspect to it.
"potentially leading to fusion, defeat of aging, full immersion VR including all five senses and indistinguishable from real reality etc. relatively quickly" Why? Many think this, but I don't see the inevitable connection between AGI and these things. We have had AGIs (genius scientists) working with cumulative effort for thousands of years, and there are still no means visible on the horizon that would circumvent e.g. thermodynamics or nuclear physics. On the second point: there is no visible and realistic way to capture the neutrons released by most fusion reactions using any material that won't degrade and/or become radioactive. As for the natural sciences: creating theories is just the first and smaller step (this is where AGI could help us). The rest is conducting experiments that decide between the various theories. A mind, regardless of how hyper-super-advanced an AGI it is, cannot figure out nature without experiments. And experiments are physical, so they need resources. Resources are finite, either extensively or intensively.
@@alkalomadtan True so far, but we have 3 components at play: 1) AGI and ASI, so very fast and intelligent units to think about hypotheses and solutions; 2) AI-driven surrogate models in which fairly accurate virtual experiments can be undertaken quickly and at great capacity (take AlphaFold for proteins and GNoME for materials science as early examples); you might still need experiments, but a lot of the heavy lifting could migrate to virtual space to narrow down promising paths before you validate with real experiments; 3) automation in physical space, so AGI-driven robots and automated experimental lines at scale (we already see the beginnings in biotech, with highly automated experiments at scale).
Because unless big changes are made to society beforehand, those benefits won't be widely shared with the masses. We'd be more likely to look something like the movie Elysium.
The key word in UBI is "basic". The average person will get enough to survive, and that's about it.
Agreed. IF we don't allow the base instincts of power and greed to drive the train.
15:40 That's what's called overfitting in machine learning.
Yes. Very powerful voodoo. My research on overfitting has proven to be mind-blowing.
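For anyone who hasn't met the term before, here is a minimal numpy sketch of overfitting: a high-degree polynomial nails a handful of noisy training points but generalizes worse than its near-zero training error suggests. The sine function, degrees, and sample counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x)     # the true underlying signal

x_train = np.linspace(0.0, 1.0, 8)
y_train = f(x_train) + 0.3 * rng.normal(size=x_train.size)  # noisy samples
x_test = np.linspace(0.05, 0.95, 50)                        # held-out points
y_test = f(x_test)

def mse(deg, x, y):
    # Fit on the training data only, then measure error on (x, y).
    coeffs = np.polyfit(x_train, y_train, deg)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_lo, train_hi = mse(1, x_train, y_train), mse(7, x_train, y_train)
test_lo, test_hi = mse(1, x_test, y_test), mse(7, x_test, y_test)

# Degree 7 interpolates all 8 training points: training error collapses...
assert train_hi < train_lo
# ...but its held-out error is far larger than its training error.
assert test_hi > train_hi
print(f"deg 1: train={train_lo:.3f} test={test_lo:.3f}")
print(f"deg 7: train={train_hi:.3f} test={test_hi:.3f}")
```

The degree-7 fit has memorized the noise along with the signal, which is why near-perfect training performance says little about performance on new inputs.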