00:00:00what is the journey of how you arrived
00:00:02thinking about the problems of AI?
00:00:04- Well, most people know me or our work
00:00:11through the film The Social Dilemma.
00:00:13And I used to be a design ethicist at Google in 2012, 2013.
00:00:18So that basically meant,
00:00:22how do you ethically design technology
00:00:25that is gonna reshape, especially the attention
00:00:28and information environment of humanity?
00:00:30So it's like, there I was at Google, it was 2012, 2013.
00:00:33This is in the heat of the kind of social media boom.
00:00:36I think Instagram had just been bought by Facebook.
00:00:38My friends in college started Instagram.
00:00:40So like, I was part of this cohort and milieu of people
00:00:44who really built this technology
00:00:47that the rest of the world just thought was natural.
00:00:49Like this is just drinking water.
00:00:50Like I just drink Instagram.
00:00:51I just live in this environment.
00:00:53And so while like I saw billions of people
00:00:56enter into this psychological habitat
00:00:58that I knew the handful of like five or six people
00:01:02that were designing and tweaking it
00:01:03and making it work a certain way.
00:01:04Yeah, exactly.
00:01:05And I think that that's just like a fundamental thing
00:01:07I want people to get is, you think of technology
00:01:10like it just lands and it's just inevitable.
00:01:12And then there's just nothing we can do.
00:01:13And it just comes from above.
00:01:15And it's like, there are human beings making choices.
00:01:18And as someone who grew up in the era of the Macintosh,
00:01:23like my co-founder, so I have a nonprofit
00:01:25called the Center for Humane Technology.
00:01:27My co-founder, Asa Raskin,
00:01:28his father invented the Macintosh project
00:01:30before Steve Jobs took it over.
00:01:31So this is the original Macintosh.
00:01:33The thing that we now, the MacBook, iMac, the MacBook Pro.
00:01:36All of that started with his father, Jeff Raskin.
00:01:41And the idea of creating humane technology
00:01:43where technology could be choicefully designed
00:01:46to be really easy to use, to be accessible,
00:01:48to be an empowering extension of our humanity.
00:01:51Like a cello, like a piano, like a creative tool,
00:01:54like if you're a video person,
00:01:55you can make films and videos.
00:01:57And just so people understand,
00:01:59because we're probably gonna be talking
00:02:00about some darker things on this podcast.
00:02:02The premise of all this is not to be a speaker of doom
00:02:06or something like that.
00:02:06It's to say, I wanna live in a world
00:02:09where technology is in service of people and connection
00:02:12and all of the things that matter to us as humans.
00:02:15And then have technology wrap around ergonomically us
00:02:19to create that.
00:02:20So that was kind of a side journey.
00:02:21There I was at Google in 2012, 2013,
00:02:24and I saw how essentially there was this arms race
00:02:28for human attention and whichever company
00:02:30was willing to go lower on the brainstem
00:02:33to manipulate human psychology.
00:02:35This is exploiting like a backdoor in the human mind.
00:02:38So I think if software has backdoors
00:02:40and zero-day vulnerabilities, you can hack software.
00:02:43The human mind has vulnerabilities.
00:02:45And as a magician, as a kid, I understood some of those.
00:02:50Studying at a lab at Stanford
00:02:51called the Persuasive Technology Lab,
00:02:52where a lot of the Instagram co-founders had studied,
00:02:55I understood the psychological influences dynamics.
00:02:58And so it wasn't just that we were making technology
00:03:01in this beautiful and empowering kind of Macintosh way.
00:03:04It's that basically more and more of my friends
00:03:06were sucked into developing technology
00:03:09to hack human psychology.
00:03:11And so I saw that problem, I became concerned about it,
00:03:13and I made a presentation at Google.
00:03:15And I feel like I repeat the story everywhere,
00:03:18but it's just important for my history, I guess.
00:03:20I made a presentation saying never before in history
00:03:22have 50 designers in San Francisco
00:03:24basically through their choices,
00:03:26rewired the entire psychological habitat of humanity.
00:03:29And we need to get this right.
00:03:32We have a moral responsibility to get this right.
00:03:34And I sent it to 50 people at Google.
00:03:37And when I clicked on the presentation the next day
00:03:40on the top right of Google Slides,
00:03:41it shows you the number of simultaneous viewers.
00:03:43You know how that works?
00:03:44And it had like 150 simultaneous viewers
00:03:47and then 500 simultaneous viewers.
00:03:48And so it's like, oh, this is spreading throughout
00:03:50the whole company.
00:03:52And that's what led to me becoming a design ethicist
00:03:55where I had to research and ask the questions,
00:03:57what does it mean to ethically design
00:03:58and persuade people's psychological vulnerabilities
00:04:01when you can't not make choices
00:04:03about the psychological habitat?
00:04:05You have to make a choice about how infinite,
00:04:07whether you're gonna do infinite scroll or not,
00:04:08or autoplay or not, or notifications or not,
00:04:11or these 10 people followed you or not.
00:04:13Like what does it mean to ethically make those choices?
00:04:18- That is, you being concerned about some of the ways
00:04:21that a misalignment of technology
00:04:23with what human flourishing might look like?
00:04:26- Yeah, and how society, I think people are afraid to say,
00:04:29like when you make a bridge,
00:04:31there's a physics to whether that bridge will sustain
00:04:34or whether it'll fall apart, right?
00:04:36And it's not magic.
00:04:37We don't say, oh, like who would have known
00:04:38that that bridge would fall apart?
00:04:39We have a science of bridges and mechanical engineering
00:04:44and civil engineering.
00:04:46And with technology and human psychology,
00:04:49there is a science to the dopamine system.
00:04:51There is a science to confirmation bias in our psychology
00:04:54and how we tend to perceive information
00:04:56through our tribal in-group.
00:04:58Like we see things through the political tribe
00:04:59that we're a part of.
00:05:01And if you understand that science,
00:05:02you can understand whether or not technology
00:05:04is manipulating that.
00:05:05So one of the core things I think we were trying to do
00:05:07in that first chapter of work,
00:05:08and this again, starting in 2013,
00:05:09is break through this idea that technology is neutral
00:05:13and that we could never know what's good for people
00:05:16or that something could be bad for people.
00:05:19Like I deliberately saw people make
00:05:21short form auto-playing videos
00:05:23that then created the brain rot economy
00:05:25that we're now living in.
00:05:26- And it seems like a natural progression
00:05:29to go from I'm concerned about some specific types
00:05:34of technology use and how that interacts with humans
00:05:38to I'm concerned.
00:05:38- Specifically not technology use,
00:05:40but technology designed for certain outcomes of usage.
00:05:45Really critical thing 'cause we wanna put attention
00:05:47on the design, not just how people are using it.
00:05:49- Yep, understood.
00:05:50- Yeah.
00:05:51- Seems like a natural progression to get concerned
00:05:55about a burgeoning AI landscape.
00:05:58- Well, what happened was my team
00:06:02at Center for Humane Technology, our nonprofit,
00:06:04we got calls from people inside of the AI labs.
00:06:08So we were in San Francisco.
00:06:10We know people work at all the tech companies
00:06:12we have for the last decade.
00:06:13And suddenly in January of 2023,
00:06:17this is 10 years later now,
00:06:19I got calls from people inside the major AI lab
00:06:21saying that the arms race dynamic was out of control
00:06:24and that huge leaps in capabilities.
00:06:27This is basically speaking about GPT-4 before it came out.
00:06:31And GPT-4 could pass the bar exam,
00:06:34get very high results on the MCAT,
00:06:36was producing incredibly powerful,
00:06:38like passed the SATs, like very powerful AI
00:06:41that suddenly appeared out of nowhere.
00:06:43And this people who reached out to us
00:06:45basically said, this is really dangerous.
00:06:46Will you use your connections, use your connections in DC,
00:06:50go wake up the world, wake up the institutions,
00:06:52let them know that this is coming
00:06:54because it's not safe what's about to happen.
00:06:57- Why is AI distinct from other kinds of technologies?
00:07:01- Well, let's get to that.
00:07:02So I think the thing that is most difficult
00:07:04for people to get is up until now technology progressed
00:07:10in a very, like we're kind of adding layers to a stack
00:07:13kind of way, like we build the networking stack,
00:07:14we build the user interface stack.
00:07:16And as you develop the stack,
00:07:17you're kind of just adding layers and layers and layers.
00:07:19And the technology that we live in was coded manually,
00:07:22like line by line, like when the computer sees this, do this.
00:07:26When the computer sees this, do this.
00:07:27And then people contribute all this code
00:07:29over 30, 40, 50 years on GitHub and in operating systems.
00:07:33And then you land in this technological world
00:07:36in which everything that happens in a computer
00:07:38is happening through logic and through human choice.
00:07:41What makes AI different is that you're designing
00:07:44and you're not really coding it, like I want it to do this.
00:07:47You're more like growing this digital brain
00:07:49that's trained on the entire internet.
00:07:51And when you grow the digital brain,
00:07:54you don't know what it's capable of or what it's gonna do.
00:07:56So think about it this way,
00:07:57like if I did a brain scan in your brain,
00:08:00could I know from just the brain scan
00:08:03what you're capable of?
00:08:04No, I can see that this part of your brain lights up
00:08:07when you have that thought,
00:08:09but I can't have a comprehensive picture of like,
00:08:11what is everything that Chris is capable of?
00:08:14Can you do sociopathic manipulation
00:08:15and do better military strategy than the best US generals?
00:08:18Like from the brain scan, I can't tell that.
00:08:19Maybe you can, but so with AI, we are essentially,
00:08:24you know, when people hear about these huge data centers
00:08:27getting built out, like Facebook's building one
00:08:29or meta's building one, the size of Manhattan,
00:08:31and you ask like, what is that, what's going on there?
00:08:34It's like they're building a bigger and bigger digital brain
00:08:38that that's what goes from GPT-3 to GPT-4,
00:08:41you know, with more neurons.
00:08:42When you hear the number of parameters of an AI model,
00:08:44that's like essentially the number of neurons in an AI model.
00:08:47And what they found is that the more GPUs and Nvidia chips
00:08:50you point at sort of growing this digital brain,
00:08:53the more intelligent it gets,
00:08:55and the more it picks up capabilities
00:08:57that we didn't intentionally teach it.
00:08:58Like there was a famous example
00:09:00where you just train it on the internet
00:09:02and then, you know, it's answering questions in English
00:09:05and suddenly it learns how to answer questions in Farsi,
00:09:08like doing Q and A in a different language.
00:09:11And no one taught it that language,
00:09:13it just sort of learned that on its own.
00:09:16And that's what's like weird about AI is that
00:09:19it's a black box, we don't really understand how it works,
00:09:21and yet we're making it more powerful,
00:09:23much faster than understanding how it works.
00:09:26And that's what leads it to make
00:09:28these more unexpected behaviors
00:09:30that we aren't able to control.
00:09:33And I think we're gonna get into some of those.
00:09:35- A data center the size of Manhattan?
00:09:37- Yes.
00:09:38- Where?
00:09:39- I don't remember where that one is, but it's crazy.
00:09:41There's like a overlay, someone can look it up.
00:09:43There's like an overlay where you can see
00:09:44the size of this data center
00:09:45and it's almost the size of Manhattan.
00:09:47And you can ask, I mean, again, there's more money,
00:09:49people should just get,
00:09:50there's trillions of dollars going into this.
00:09:52There's more money going into this technology
00:09:54than all technologies of the past have ever been built.
00:09:57And we're releasing this technology faster
00:10:00than we released every other technology in history.
00:10:02It took something like two years for Instagram
00:10:04to go from zero users to 100 million users.
00:10:07And it took two months to go from zero
00:10:10to 100 million users for ChatGPT.
00:10:12And of course, they're going from ChatGPT three or four
00:10:16to now at 5.2, and it went from barely being able
00:10:20to finish a sentence with ChatGPT two,
00:10:23like finish a paragraph and do like a coherent text,
00:10:26to GPT-3 could write full essays,
00:10:28to GPT-4 can pass the, you know,
00:10:30the bar exam or the MCATs to GPT 5.2,
00:10:33I believe was used to get a gold in the math Olympiad.
00:10:37- Metas Hyperion AI data center will sprawl
00:10:40to four times the size of Manhattan Central Park.
00:10:43- And there are quotes from people like inside of OpenAI
00:10:48who believe that they're not just building
00:10:51this like narrow technology
00:10:52that's a helpful blinking cursor.
00:10:53They wanna build artificial general intelligence.
00:10:56And so what that means is being able to do that everything
00:11:00that a human mind can do.
00:11:02And the joke inside the company is like,
00:11:03we're gonna cover the world in data centers and solar panels.
00:11:06Like they want to cover the world in essentially
00:11:10these big boxes that have huge clusters of Nvidia chips
00:11:14that then compute away and ultimately create something
00:11:16like a super intelligent God entity that they believe
00:11:20that they will use to own the world economy,
00:11:22make trillions of dollars.
00:11:23And from a kind of ego religious intuition,
00:11:25they will have built the God that supersedes
00:11:28and replaces humanity.
00:11:29I know that sounds insane.
00:11:30So let's, we can slow that down again.
00:11:32- Break that down for me.
00:11:33- That was a lot.
00:11:34- Yeah, yeah, yeah, yeah, yeah.
00:11:36You've got a new movie out
00:11:37and I feel like I found out who the bad guy is,
00:11:39but I have no idea how he got there.
00:11:40- Who's the bad guy?
00:11:41- The end of the world AGI overlords.
00:11:46- Well, yeah, so, okay.
00:11:47So first of all, let's like break this down
00:11:48'cause this might sound ridiculous to people.
00:11:51Let's make sure people understand.
00:11:54The stated mission statement of open AI
00:11:57is to build artificial general intelligence,
00:12:00which means to be able to replace all forms
00:12:03of economic cognitive labor in the economy.
00:12:06Cognitive labor, meaning cognitive,
00:12:08anything your mind can do.
00:12:09So if a mind can do math and generate new mathematical
00:12:13insights, if a mind can do physics like Einstein,
00:12:15if a mind can do chemistry, if a mind can do programming,
00:12:18if a mind can do cyber hacking, if a mind can do marketing,
00:12:21if a mind can illustrate something,
00:12:23we're seeing AI that is able to kind of cover more
00:12:28and more types of cognitive labor in the economy.
00:12:31As we scale AI from this tiny little model
00:12:34with 100 million parameters to trillions of parameters
00:12:37and these much bigger data centers,
00:12:40AI is getting closer and closer to be able to,
00:12:44and already beating humans at many cognitive tasks.
00:12:46We already have AIs that are better at military strategy
00:12:49than the best military generals.
00:12:52People remember in the 1990s,
00:12:55IBM Deep Blue beat Garry Kasparov at chess.
00:12:59That was kind of like the beginning of like,
00:13:00it can beat you in this narrow game called chess.
00:13:03Then there was AlphaGo.
00:13:04We can have AI that beats the best human Go player
00:13:07in the Asian board game of Go.
00:13:08But then now instead of imagine chess or Go or StarCraft,
00:13:14now it's like the war in Iran.
00:13:16And you have an AI that's basically telling
00:13:18the military troops where to go, who to bomb.
00:13:21This is really scary.
00:13:23And we're racing to this outcome faster than we've again,
00:13:26built any other technology in history.
00:13:28- You said that it's better, better than humans.
00:13:31- Better in the narrowly defined sense
00:13:33of effective at strategy, effective at goal achieving,
00:13:36effective at problem solving.
00:13:38'Cause that's what is intelligence, right?
00:13:39It's like finding the shortest path between a goal
00:13:44and what are the strategies that get you to that goal.
00:13:48So persuasion is a kind of a strategy or intellectual task.
00:13:53What is the best way to persuade you?
00:13:54The shortest path.
00:13:55Negotiation is a problem solving task
00:13:59and lawyers find ways of lying
00:14:01or framing the truth in certain ways.
00:14:02Well, AI is gonna discover forms of deception or lying.
00:14:05We're seeing that in the examples
00:14:06that I think we're gonna talk about.
00:14:07But intelligence is different than wisdom
00:14:12and your podcast is called Modern Wisdom.
00:14:15And I hope we get into this distinction
00:14:16because we are scaling up the amount of power
00:14:20that everyone is gonna have access to,
00:14:22whether it's individuals or militaries or nation states
00:14:26or companies or businesses.
00:14:28But we are not commensurately scaling the amount of wisdom.
00:14:32And I know a friend of ours
00:14:33that we met in Austin several years ago,
00:14:36a dear friend of mine, Daniel Schmachtenberger
00:14:38has this quote that you cannot have the power of gods
00:14:41without the wisdom, love and prudence of gods.
00:14:43And so in many ways, I think AI is like a rite of passage
00:14:47for humanity because essentially we've always been,
00:14:50we've not always had the greatest track record
00:14:53in our relationships to technology.
00:14:54Like if you look at the industrial revolution tech,
00:14:57what letter grade would we give ourselves
00:14:59in holding on in like stewarding that tech?
00:15:03We had better living through chemistry in the 1930s,
00:15:05DuPont chemistry, and that was great.
00:15:09We invented all sorts of new materials,
00:15:10but we also generated forever chemicals
00:15:12and it would currently cost more than the GDP
00:15:15of the entire world to clean up
00:15:18the entire mess of forever chemicals.
00:15:20We created social media thinking if we give the world access
00:15:24to information at our fingertips
00:15:25and connect people with their friends,
00:15:27this is gonna create the most enlightened
00:15:28and informed society than we ever have.
00:15:31And clearly that didn't go that in the way
00:15:32that we wanted it to.
00:15:34So now AI is like the exponentiation
00:15:37of just technology invention writ large,
00:15:40because what makes AI different from all other forms
00:15:43of technology is that intelligence is the basis
00:15:47of all of our new science, of all of our new technology,
00:15:50of all of our new military development.
00:15:51So if you automate intelligence,
00:15:54you're gonna automate an explosion of new science,
00:15:57new technology, new military technology.
00:16:00And if you have more power and more intelligence,
00:16:03but you don't have the wisdom to wield it,
00:16:06that's obviously not gonna go well.
00:16:07- Why can't wisdom be programmed to?
00:16:11- Well, in some ways you could say that it can be,
00:16:15it's just that it's not that like wisdom comes
00:16:17from the ether, it's about asking critical questions
00:16:22about how should the technology be designed.
00:16:24So for example, like do we have to have
00:16:29our entire internet environment have auto-playing videos
00:16:33that swipe one after another?
00:16:35No, we don't have to have that.
00:16:36We can have a totally different design paradigm
00:16:37where no one's auto-playing videos.
00:16:40Wisdom would be understanding that the human psychological,
00:16:44the paleolithic brain that we are born with
00:16:47has these vulnerabilities in our dopamine system.
00:16:50And we could design to not hijack that dopamine system.
00:16:54And just imagine for a second,
00:16:55just to like, there's a huge conversation we're having,
00:16:57but if you just imagine that one little change.
00:17:00So here's today, everyone has auto-playing videos,
00:17:02infinitely swiping, brain rotting everybody,
00:17:04brain damaging everybody 24/7, test scores are massively down
00:17:07for basically all around the world because of this phenomenon.
00:17:10It's very, very clear that the technology
00:17:12and social media is driving that.
00:17:14If you make this one little change
00:17:16of no auto-playing videos,
00:17:18and it means also no infinite swipe dating apps
00:17:23that are getting you into a slot machine
00:17:24with player cards of people,
00:17:26like how different does the world become?
00:17:27Like when you meet people,
00:17:29how dysregulated is their nervous system?
00:17:31Just that one little change.
00:17:33I want people to think as we're in this conversation,
00:17:36there's just these different worlds we can live in
00:17:39with just different design choices.
00:17:40And that's kind of the whole point is that wisdom can be,
00:17:43what are the design choices that will lead
00:17:44to better societal outcomes?
00:17:46And of course, the reason
00:17:47that everyone's auto-playing the videos
00:17:49is because of this competitive arms race
00:17:51of if I don't do it, I'll lose to the other company that will.
00:17:54And so it would take some kind of rule or policy
00:17:57that says that we don't want that.
00:17:58- You need to put a moratorium on auto-play videos
00:18:02because the incentives for any individual company
00:18:05and for the market at large
00:18:06and for the competitive dynamic between companies
00:18:09means that if you don't do it,
00:18:11you get beaten by the one that does.
00:18:12- And that's like the bullseye,
00:18:15that's like the fundamental problem behind AI
00:18:18that's forcing us to reckon with is unhealthy competition
00:18:22or this sort of, if I don't do it,
00:18:24I'll lose to the guy that will.
00:18:25So everyone does a thing that's short-term good for them,
00:18:28but that's long-term bad for everybody.
00:18:31You know, the AI companies, well,
00:18:32even Anthropic wants to be the safety AI company.
00:18:35They wanna do things in a safer, more careful way.
00:18:37But they, if they don't release models as powerful
00:18:41and as fast as the other companies,
00:18:43they'll just fall behind in the race.
00:18:44They won't have a seat at the policy-making table.
00:18:47They won't get a lot of usage.
00:18:48They won't get the investor dollars.
00:18:50And then their commitment to safety just means they lose
00:18:53and they're not part of the race anymore.
00:18:55- Yeah, what's that line?
00:18:57How can you talk shit from outside of the club?
00:18:59You can't even get in.
00:19:00- Yeah.
00:19:01It's difficult to- - Something like that, yeah.
00:19:02- Yeah, yeah, yeah.
00:19:03I think it was that Jay Kwon, I think,
00:19:06dating me in like the mid 20s, 2000s.
00:19:11There's a study that I saw recently.
00:19:13Scientists just proved that large language models
00:19:15can literally rot their own brains
00:19:17the same way humans get brain rot
00:19:19from scrolling junk content online.
00:19:21Do you see this?
00:19:22- I did see that, yeah.
00:19:22- Yeah, scientists did a study where they fed models
00:19:24months worth of viral Twitter data,
00:19:26shorts, high engagement posts,
00:19:29and watched their cognition collapse,
00:19:30reasoning fell by 23%, long-term context memory dropped by 30%,
00:19:35personality tests showed spikes in narcissism and psychopathy
00:19:39and get this, even after retraining on clean,
00:19:42high quality data, the damage didn't fully heal.
00:19:45The representational rot persisted.
00:19:47It's not just that bad data means bad output.
00:19:50It's bad data means permanent cognitive drift.
00:19:52The AI equivalent of doom scrolling is real
00:19:55and it's already happening.
00:19:57- I love that you included this example.
00:19:59Oh, that's from right here,
00:20:00University of Texas in Austin, Texas A&M University.
00:20:03- Leave it there, Jared.
00:20:04- Yeah, so, I mean, are we surprised by this?
00:20:08I mean, are you surprised by this when you see this?
00:20:10- No, I can tell the difference.
00:20:13This year, one of my big resolutions
00:20:16has been to spend less time on social media.
00:20:18I managed to do it and...
00:20:20- How'd you do it?
00:20:22- Second phone that's tethered to wifi
00:20:27and that is the cocaine phone and the kale phone
00:20:30is just messages and stuff.
00:20:33It's a little bit of a challenge because things like Slack.
00:20:36I had Cal Newport on a couple of weeks ago
00:20:38and I was talking about the intersection
00:20:40of productivity and detention with AI, the new world of AI.
00:20:45And that's a really interesting conversation.
00:20:46Have you ever spoken to Cal?
00:20:47- Yeah, he and I have been in similar circles for a long time.
00:20:50- He's wonderful.
00:20:51Even if you have your phone without social media,
00:20:55you still have kind of the social media of work.
00:20:58- Yeah, exactly.
00:20:59- But anyway, I've done good stuff on that
00:21:01and I've come up with some of my best ideas so far this year.
00:21:04My writing's improved, my sleep's improved,
00:21:07my attention's improved.
00:21:08And this is already someone that was pretty red-pilled
00:21:10on tech minimism.
00:21:12I think the seventh or the eighth episode of this show
00:21:16was inspired by you and it was Kai Wei,
00:21:19the guy that invented the light phone.
00:21:21- Oh yeah, uh-huh.
00:21:22- And this is 2018.
00:21:24- Yeah, totally.
00:21:25So I've been concerned, I read Super Intelligence 2017.
00:21:29- Oh wow, yeah, that's early.
00:21:31- I listened to Super Intelligence in 2017.
00:21:34Reading it would have been
00:21:35a little bit more difficult for me.
00:21:37But yeah, I don't feel as good.
00:21:38When I use too much social media, I don't feel as good.
00:21:41- And that's the thing, I mean like,
00:21:42is that a controversial fact?
00:21:43Do you think that anybody when they're sitting there
00:21:45doom scrolling for three hours,
00:21:47just like put a thermometer, you know,
00:21:50pain, you know, positive emotion meter in their brain,
00:21:54are people gonna say that they love, I mean,
00:21:55it's one of those things where short-term good,
00:21:57it feels good in the moment, but it's long-term empty.
00:22:00Like it's just as high fructose corn syrup for our brain.
00:22:03Empty calories.
00:22:04- Mm-hmm, and the fact that this is replicated by,
00:22:08social media can even warp an AI,
00:22:15can even warp an LLM, I think feels quite pernicious.
00:22:19- Yeah, I mean, it's interesting to note that
00:22:24when Elon bought Twitter, you know,
00:22:26he was already thinking about AI.
00:22:28And part of what he was thinking about is, you know,
00:22:30in the AI race, these companies that are racing
00:22:32to get to artificial general intelligence,
00:22:34one of the ways they differentiate themselves
00:22:36from each other is their training data.
00:22:38Like who has more powerful and more training data
00:22:42for training and growing their digital brain
00:22:44than the other guy.
00:22:45And Elon thought that he had a competitive advantage
00:22:48because he would have the entire user-generated content
00:22:52of the real time views of all of humanity
00:22:54in the form of Twitter.
00:22:55And he could train his AI on that.
00:22:56And that's what led to Grok.
00:22:58But of course, when you train essentially an AI
00:23:01on kind of brain rotted, hyper polarized,
00:23:04hyper adversarial rivalrous, you know,
00:23:07all the problems of Twitter, the outrage economy,
00:23:09you get AIs that are more like this than the better AIs.
00:23:13- Before we continue, most people in their thirties
00:23:16are still training hard.
00:23:17Their protein is dialed in.
00:23:18They sleep better than they did in their twenties.
00:23:20Discipline is not the issue,
00:23:22but recovery feels somewhat different.
00:23:25Strength gains take a little longer.
00:23:27The margin for errors starts to shrink.
00:23:29And that is why I'm such a huge fan of Timeline.
00:23:32You see, mitochondria are the energy producers
00:23:34inside of your muscle cells.
00:23:36As they weaken with age, your ability to generate power
00:23:39and recover effectively changes,
00:23:41even if your habits stay strong.
00:23:43Mitopure from Timeline contains the only
00:23:45clinically validated form of erythelin-A used in human trials.
00:23:49It promotes mitophagy, which is your body's natural process
00:23:52for clearing out damaged mitochondria
00:23:53and renewing healthy ones.
00:23:55In studies, this supported mitochondrial function
00:23:58and muscle strength in older adults.
00:24:00It's not about pushing harder.
00:24:01It's about actually supporting the cellular machinery
00:24:04underneath your training.
00:24:05If you care about staying strong
00:24:07into your thirties, forties, and fifties, and beyond,
00:24:09this is foundational.
00:24:11Best of all, there is a 30-day money back guarantee
00:24:13plus free shipping in the US and they ship internationally.
00:24:16And right now, you can get up to 20% off
00:24:18by going to the link in the description below
00:24:19or heading to timeline.com/modernwisdom
00:24:22and using the code modernwisdom at checkout.
00:24:24That's timeline.com/modernwisdom
00:24:26and modernwisdom at checkout.
00:24:30Okay, so the discussion around social media
00:24:33was there are better and worse design choices
00:24:36that can be made that would help human flourishing.
00:24:39Broadly, what would people want their world to be like
00:24:42and how can we design technology
00:24:44in a way that helps them to get there?
00:24:45Something close to that.
00:24:47But because of market dynamics,
00:24:49you have a competitive landscape that incentivizes things
00:24:52that are effective for gripping people's attention
00:24:55but not necessarily effective for flourishing.
00:24:57And it seems that a tension
00:25:02between what is good for flourishing,
00:25:05'cause it could be, what's good for attention
00:25:07would also be good for flourishing.
00:25:09It could be. It could be,
00:25:10but it tends to not be that way.
00:25:12And also, there's gonna be a limit on that, right?
00:25:14It's probably not the case that 10 hours of attention
00:25:17on any social media is good for society or good for you.
00:25:21So there's gonna be sort of a-
00:25:22Unless it was like waking up with a meditation app.
00:25:2510 hours once every month or something
00:25:27would probably be quite good to do for a meditation app.
00:25:29Maybe, sure, yeah.
00:25:30But I think the point is how much,
00:25:32as companies are competing,
00:25:34and you're asking what they're competing for,
00:25:35it's not just the best screen time.
00:25:39It's also what is the fit, the ergonomic fit,
00:25:42between screen time and a life well lived.
00:25:45Just imagine, in a timeline,
00:25:46if there you are in a week in your life,
00:25:49not asking, based on what you're doing now,
00:25:51but retrospectively, what would be a life well lived
00:25:54when it comes to how much and when screen time
00:25:56is fitting into your life.
00:25:57And it's probably a much smaller footprint
00:25:59than it currently is for most people.
00:26:01It's probably a fourth of what it currently is
00:26:03for most people.
00:26:04And so if you were designing technology from care, from love,
00:26:09in a humane way,
00:26:11you would have design choices
00:26:12that are not about keeping people on the screen.
00:26:15And that might mean some pretty radical things.
00:26:18I mean, my co-founder, Azar Raskin,
00:26:20he also invented the infinite scroll.
00:26:22So that's the, you know, it sounds so obvious now,
00:26:25'cause infinite scroll is just what we live in.
00:26:27But when he invented it, it was 2006,
00:26:29it was before mobile phones.
00:26:31And it was when, in the age of Google results,
00:26:33you had like the 10 Google results,
00:26:35and you had to click on, I want the next 10,
00:26:37or you had the Yelp review pages and you wanted the next 10,
00:26:40or you read a blog post and then you'd have to like click,
00:26:43go back to the main page
00:26:45to click on which blog post you want.
00:26:46And the idea he had was, well, what if,
00:26:48as the internet got dynamic with JavaScript,
00:26:51what if when you finished,
00:26:52when you get to the end of the blog post,
00:26:54it just auto loads the next article that you could go to?
00:26:57Well, what if when you got to the end of the search results,
00:26:58it just shows you more search results?
00:26:59And then this is such a cleaner interface.
00:27:02I mean, as a technology designer, you're taught,
00:27:04the number one thing you're trying to do is reduce friction.
00:27:06And I think that that felt like a good goal,
00:27:09but then that obviously got weaponized
00:27:11by this hyper-encagement model of social media.
00:27:13And now it's created the entire world that we're living in.
00:27:16And just so you know, like in 2013,
00:27:18I saw like everything that we predicted,
00:27:21everything that we predicted, it all happened, all of it.
00:27:24A more addicted, distracted, sexualized, FOMO,
00:27:27fucked up society because of those incentives.
00:27:30And I just want people to get that
00:27:31because as we talk about AI,
00:27:34it's like, I want people to have the confidence
00:27:39to say, "I don't want the default anti-human future."
00:27:43Because if you say, "I'm against some of the things
00:27:45"that are gonna happen with AI,"
00:27:46people say, "Oh, you're being anti-progress.
00:27:48"Oh, you're being anti-technology.
00:27:50"Oh, you're just a Luddite.
00:27:50"You're trying to like pretend
00:27:51"that technology is not progress."
00:27:54And it's like, what you should have confidence in
00:27:56is if you understand the incentives or the agenda,
00:27:59you can understand where the world is going and you can see.
00:28:02And if we don't want that anti-human future
00:28:05and we all see it clearly,
00:28:07we can put our hand on the steering wheel and steer.
00:28:09And that's why, not to like do some promotion,
00:28:12but there's a film that I'm here in South by Southwest
00:28:15this week in Austin, Texas to be at the premiere.
00:28:18It's called "The AI Doc,"
00:28:19or "How I Became an Apocaloptimist."
00:28:21And-- - An apocaloptimist.
00:28:24- An apocaloptimist.
00:28:25- Okay. - Yeah.
00:28:26Which we can get into that.
00:28:30The film is meant to create clarity
00:28:33about which future we're headed towards with AI.
00:28:36And it includes three out of the five major AI CEOs
00:28:39in the film.
00:28:40It includes all the AI optimists in the film.
00:28:43It includes many of the AI risk folks in the film.
00:28:46It includes the AI ethics folks in the movie.
00:28:47But here's the problems right now.
00:28:48And we're stopped thinking about super intelligence.
00:28:50It includes all those folks in one movie
00:28:53to try to synthesize a picture of what is the future
00:28:56that we're headed towards with AI.
00:28:58And the reason why this film was catalyzed into existence
00:29:01and we had a role in it and behind the scenes
00:29:04is to create clarity about this anti-human future
00:29:07that we're headed towards.
00:29:09- What do you mean anti-human?
00:29:11- So let's dive into this.
00:29:13So there's something in economics called the resource curse.
00:29:18So think countries like Venezuela or Sudan,
00:29:23where you discover that that country
00:29:25is sitting on top of a really valuable resource like oil.
00:29:28And then once a bunch of your GDP comes from oil
00:29:33and not from the labor or innovation
00:29:35or development of your people,
00:29:37you invest more in oil infrastructure
00:29:41and not investing in people.
00:29:42You don't invest in education.
00:29:43You don't invest in healthcare
00:29:45because oil is where you get your GDP and your growth from.
00:29:49- Okay. - Okay.
00:29:50This is a well-known fact in economics.
00:29:52It's called the resource curse.
00:29:55There's a wonderful guy named Luke Drago
00:29:57who wrote a piece called "The Intelligence Curse."
00:30:00We are about to enter a world
00:30:03where GDP for countries comes more from data centers
00:30:07and intelligence and AI
00:30:10than is going to come from the labor of human beings.
00:30:13So everyone's talking about
00:30:14how AI is gonna automate all these jobs
00:30:15and then we'll all just like sit back
00:30:16with universal basic income and become painters and poets.
00:30:20And is that actually what's gonna happen?
00:30:22Or when countries get almost all of their revenue from AI
00:30:27and a smaller and smaller percentage from people,
00:30:30do they have an incentive to invest in childcare,
00:30:35healthcare, education, the wellbeing of their people?
00:30:38Or is it basically just hook them up
00:30:40to the social media addiction economy, keep them busy,
00:30:43while basically all the revenue comes from AI companies?
00:30:46And so what I'm trying to get at is
00:30:48this is not a human future.
00:30:50This is not a future that's in service of regular people.
00:30:54This is a future that's in service
00:30:56of eight soon-to-be trillionaires
00:30:59who will consolidate all the wealth
00:31:01and disempower basically everybody else.
00:31:04- Because- - Does that make sense?
00:31:05- It does, because previously in order to,
00:31:08can I get that in?
00:31:09It's high powered stuff.
00:31:11- I mean, yeah, this is a big conversation.
00:31:12- Yeah, exactly.
00:31:13They've started a fucking trend.
00:31:16It's so funny when no one in the room
00:31:18wants to crack their can in case it interrupts the conversation
00:31:21so one goes and it's a Mexican wave of can opens around them.
00:31:24It's good.
00:31:25So previously you would have had to look after the humans,
00:31:30healthcare, education, and quality of life.
00:31:32- Also tax revenue comes from people, right?
00:31:34- Well, you would have to look after them
00:31:36because they were the primary economic engine.
00:31:39- That's right.
00:31:40- And so they feed themselves.
00:31:41- Yes.
00:31:42- Economically, they feed themselves.
00:31:44- Exactly.
00:31:45- People that are young
00:31:46help to support the people that are old.
00:31:47- That's right.
00:31:48- The ones that are entering the workforce
00:31:49and are driving innovation and are working 40, 60 hour weeks,
00:31:53double jobs, all the rest of it.
00:31:54And then there's old people who've got 401ks and pensions
00:31:56and shit like that.
00:31:57- Right, right.
00:31:58- Your position is that if we have a world
00:32:02where the human part of the contribution
00:32:05to economic growth and GDP is removed,
00:32:08because it is humans consuming AI,
00:32:10but AI driving and data centers driving the revenue itself,
00:32:15beyond building the data centers, there's very little,
00:32:18and I imagine much of that's done by robots in any case.
00:32:21- Well, we have this joke that most people's occupation
00:32:24in the future we're headed towards with AI
00:32:26is to become a coffin builder.
00:32:29So in other words,
00:32:30your job is to create the thing that replaces you
00:32:34and obsoletes you.
00:32:35So you are essentially building the coffin
00:32:37for your future obsolescence.
00:32:39- Yeah, yeah.
00:32:39- And so if you're short-term, yes,
00:32:41we need the electricians and the plumbers
00:32:42and we're building data centers.
00:32:43Short-term, yes, you can be a programmer
00:32:45and get the benefit from vibe coding,
00:32:47but then the AIs are learning
00:32:49on all the things that you're doing.
00:32:52And it's taking all the training data
00:32:54of what you're doing with AI,
00:32:54and it's using that to train an AI that can take your job.
00:32:57So everybody using AI now to help them
00:32:59is also training the future AIs
00:33:02that will completely replace them.
00:33:04And again, the explicit goal, this is not my opinion,
00:33:06this is literally the mission statement
00:33:08of all of the AI companies,
00:33:09because the multi-trillion dollar prize
00:33:12at the end of the rainbow
00:33:13of owning the entire world economy
00:33:15is based on building this full replacement economy,
00:33:19because that's what will achieve the greatest growth.
00:33:21And that's why--
00:33:21- This replacement economy?
00:33:23- Yeah, meaning that they're designing
00:33:24to replace all human labor.
00:33:26They're not designing to augment and support
00:33:28and enhance human labor.
00:33:30They're designing to replace all human labor,
00:33:31because that's what justifies the amount of money
00:33:34that they've taken on in debt
00:33:36that they can grow into this total ownership
00:33:40of the entire economy.
00:33:41- What else is there to say about the intelligence curse?
00:33:44- Well, it's just important for people to get
00:33:47that when the AIs are doing all the new scientific research,
00:33:50not humans, you have an automated chemistry lab,
00:33:52you have an automated biology lab,
00:33:53you have an automated surgery.
00:33:55When AI is doing all of that, again,
00:33:58the revenue is gonna come from AI, not from people.
00:34:01And what that means is all the wealth will go
00:34:03to a handful of like five AI companies.
00:34:06And then how are you gonna be able to make a living?
00:34:10When in history has a small group of people
00:34:12ever consolidated all the wealth
00:34:15and consciously redistributed it to everyone else?
00:34:18And if you think that might happen in the US,
00:34:19we'll do a universal basic income.
00:34:21Just think about the entire world.
00:34:22So right now you have AIs that are automating,
00:34:24say, customer service jobs.
00:34:26So let's say that that disrupts like the Philippines,
00:34:29where like 90% of the economy is customer service.
00:34:31I don't know what the number is, it's high.
00:34:33What happens when an entire country's economy
00:34:36gets disrupted by AI?
00:34:38Are a handful of US AI companies going to pay out
00:34:42and support the wellbeing and the livelihoods
00:34:44of all these other people?
00:34:46And then if people don't have money,
00:34:49how are they gonna buy the goods in this future economy
00:34:51where it's all generated by AI?
00:34:53Because now you don't even have an income.
00:34:55So this, essentially we're on track
00:34:59to break the entire economy.
00:35:01This is not in the interest of countries.
00:35:04What's confusing to me about this is that
00:35:06I believe it only took something like 20% unemployment
00:35:09for a couple of years to lead to the rise
00:35:12of fascism in Germany.
00:35:15You don't need everyone's job to be automated
00:35:18to get levels of political disruption.
00:35:19I think it was only 20% unemployment
00:35:21that basically led to the French Revolution.
00:35:24There's kind of a mutually assured political revolution
00:35:27that is gonna happen for all these countries
00:35:29that are racing to build AI
00:35:31and deploy it to automate as much labor as possible
00:35:33to compete to boost their external GDP number.
00:35:36Like the metaphor you can have in your minds is like
00:35:38the US and China are essentially racing to take steroids
00:35:42and pumping up the GDP and muscles of their economy
00:35:45while they're getting internal lung failure,
00:35:47internal organ failure, internal brain rot failure
00:35:50because they're governing the internal impact
00:35:52of that technology poorly.
00:35:55So it's a race for external power while internal management
00:35:58of essentially like a failure of your body organs.
00:36:02Does it make sense?
00:36:03- Yeah, what does external power look like in this context?
00:36:06- Well, one of the reasons that people think of AI
00:36:12as so important for competition is,
00:36:15if you think about geopolitical competition with China,
00:36:19economic power precedes other kinds of power.
00:36:22If I have a high growth rate economy,
00:36:24that'll lead to the ability to invest more
00:36:26in a bigger military, bigger weapons,
00:36:29bit more advanced science, more advanced technology
00:36:31'cause just have more money to deploy.
00:36:33And so economic competition is a precursor
00:36:35for geopolitical competition.
00:36:38So when we say competing for this external power,
00:36:41we mean competing for GDP growth.
00:36:44But again, we're competing for GDP growth.
00:36:46That doesn't mean what it used to mean.
00:36:47I think a lot of people think,
00:36:48okay, well, if GDP is going up by like 10%
00:36:50'cause AI is automating all this growth,
00:36:52that sounds awesome.
00:36:53- I was gonna say like increases in GDP
00:36:55are almost always a universal good thing.
00:36:57- They had been when it was humans
00:37:00that were generating that
00:37:01and then it was coming back to humans.
00:37:03- Because the revenue was going to be consolidated
00:37:06in a very small number of people.
00:37:07- In this new case, we have five companies that are--
00:37:10- There's no intermediary between.
00:37:12So who would be feeding the revenue in?
00:37:14'Cause this revenue still needs to come from somewhere
00:37:17even if it goes to a small handful of people,
00:37:21where does the actual money come from?
00:37:23- Well, this is the confusing thing.
00:37:24What happens, how--
00:37:25- Is that a stupid question?
00:37:26- No, no, it's a good question
00:37:27because you're saying basically
00:37:29who's gonna be buying the products
00:37:31when no one has a job and no one has an income?
00:37:33- And on the route up to that, yeah.
00:37:35Fewer people have incomes and fewer people have jobs.
00:37:38The bucket being poured into the top--
00:37:41- That's right.
00:37:42- Is gonna stop being poured.
00:37:43- Yeah, yeah.
00:37:45This is the confusing and mind-breaking thing about AI.
00:37:48And it just, in general,
00:37:50like I think people have to get used to,
00:37:51I mean, your podcast is called "Modern Wisdom"
00:37:53and I just think about this a lot.
00:37:54Like what are the wise capabilities that we need to have
00:37:58in order to make our way through this?
00:37:59And one of them is the ability to be with something
00:38:02that sounds like science fiction
00:38:04and realize that it's actually real.
00:38:07Like, and not say because it sounds like it's science fiction
00:38:10that I can just like dismiss it and say that can't be true.
00:38:13A lot of people do that.
00:38:13They're like AIs that are like
00:38:16breaking out of their container and hacking GPUs
00:38:18and mining crypto autonomously when no one told it to do that.
00:38:21That's gotta be like a made-up study.
00:38:24But as we know, there was an Alibaba study just last week
00:38:28where the AIs autonomously broke out of their system
00:38:31and started mining for crypto.
00:38:33- We need to round this out
00:38:34and then I wanna talk about that.
00:38:35- Sure, sure, sure.
00:38:37- That story is fucking terrifying.
00:38:39- Yeah, yeah.
00:38:40- So where does the economy, who's pouring money in?
00:38:43- I mean, the truth is that I don't know.
00:38:45I don't think anybody has an answer.
00:38:46- Is it just gonna grind to a halt at some point?
00:38:48- I think something like that, yeah.
00:38:49I mean, I don't think that there's,
00:38:52I think something that people need to get
00:38:53is it's not like there's a plan
00:38:55for how to make all this go well.
00:38:56Like this technology is being released
00:38:59in a paradigm-undermining way.
00:39:01Like it's undermining the paradigm of economic assumptions
00:39:05and sort of societal assumptions that have made
00:39:08the post-World War II order.
00:39:10This is such a deep fundamental change
00:39:12to the restructuring of everything.
00:39:16Our economic system, our relationships,
00:39:19our information environment.
00:39:21It's not just like adding a new technology in the mix,
00:39:23it's like fundamentally changing the structure
00:39:25of the entire world.
00:39:26You would think that if we're about to do that,
00:39:28we would do that with more careful, more caution, care,
00:39:33wisdom and restraint than we have with any technology
00:39:35we've ever deployed.
00:39:36If we knew we're about to undermine the paradigm,
00:39:38but because of this arms race dynamic,
00:39:40we are deploying it faster than we deployed any technology
00:39:43in history and therefore undermining these things faster
00:39:46than we can have a plan.
00:39:47- A quick aside, look, you know sleep matters,
00:39:49but let's be real.
00:39:50Most nights, you're probably not getting the sort of sleep
00:39:53that's actually restorative.
00:39:54Eight Sleeps Pod 5 fixes that.
00:39:56It's a smart cover that you throw over the top
00:39:58of your mattress that actively cools or heats
00:40:01each side of the bed up to 20 degrees.
00:40:03They've even added a temperature regulating duvet
00:40:05and pillowcase, so you and your partner can sleep
00:40:07at your preferred temperatures, covered heads to toe,
00:40:10like some temperature controlled mummy.
00:40:12Plus, it's got upgraded sensors that run health checks
00:40:14when you're asleep, tracking things like abnormal heartbeats
00:40:17and breathing issues and sudden HRV changes.
00:40:19There's a built in speaker for white noise.
00:40:21The autopilot feature learns your sleep patterns,
00:40:23makes real time adjustments to improve your sleep.
00:40:26It even detects when you're snoring and lifts your head
00:40:28a few inches to help you breathe better.
00:40:30That is why Eight Sleep is clinically proven
00:40:32to add up to an hour of quality sleep per night.
00:40:34And best of all, they have a 30 day sleep trial.
00:40:36So you can buy it and sleep on it for 29 nights.
00:40:39And if you don't like it,
00:40:39they will just give you your money back,
00:40:41plus they ship internationally.
00:40:43Right now, you can get up to $350 off the Pod 5
00:40:46by going to the link in the description below
00:40:48or heading to eightsleep.com/modernwisdom
00:40:50and using the code modernwisdom at checkout.
00:40:52That's E-I-G-H-T sleep.com/modernwisdom
00:40:56and modernwisdom at checkout.
00:40:58- So look, I've been interested in AI safety since 2017, 2018.
00:41:03You were a big part of putting me onto that.
00:41:06And then I got interested
00:41:07in the Future of Humanity Institute, Nick Bostrom,
00:41:10William McCaskill, Eliezer Yukowski, lesswrong.com,
00:41:15Scott Alexander, da, da, da, da, da.
00:41:17For a long time, the concern was AI safety.
00:41:21It was around paperclip maximizing.
00:41:25It was any function that is given
00:41:29to a very, very powerful agent
00:41:31that is even remotely slightly imprecise or even not
00:41:35results in some outcomes that you probably don't want.
00:41:37- That's right.
00:41:38- What you're suggesting is that even if this goes right,
00:41:41- Yeah, yeah.
00:41:42- The outcome, this is it going well.
00:41:44- Yeah, exactly.
00:41:45This is quote the best case scenario.
00:41:46We have an aligned AI or something
00:41:48that's not wrecking society,
00:41:49that's not maximizing paperclips,
00:41:50that's not misaligned with wellbeing,
00:41:52but that is still doing such a good job of all this
00:41:55that it takes over all the economic labor in the economy,
00:41:58not just economic, every company that has a CEO.
00:42:01It's like, well, do I want the CEO to run the company
00:42:03or have I have a super intelligent AI
00:42:05that can process more information than the CEO
00:42:07and then is trained on everything in the history of business?
00:42:10At some point that AI is gonna be taking over.
00:42:11And so at every little nodule in the economy,
00:42:14like every decision maker, every boardroom,
00:42:16every military leader, every strategy leader, every president,
00:42:20at some point, the temptation will be,
00:42:22if I think about it in a narrow way,
00:42:24the temptation will be to swap in an AI for that person.
00:42:28And that leads to what we call
00:42:29the gradual disempowerment scenario,
00:42:32which is the scenario,
00:42:33not where like AI wakes up and kills everybody,
00:42:36but that we have gradually lost control as a species
00:42:39because we're outsourcing all the decisions
00:42:42to these alien brains that we installed
00:42:45because they outperform the human brain
00:42:47when you define their role in a narrow way.
00:42:49But just like, are they better at generating revenue
00:42:51than the human was?
00:42:52Are they better at generating code
00:42:54than the human programmer I had?
00:42:55Are they better at generating a financial analysis
00:42:57than the human?
00:42:59Are they better at making someone feel good
00:43:01in the short term, like an AI therapist?
00:43:02- Going to war.
00:43:03- Going to war, a soldier.
00:43:05But the temptation then is that, again,
00:43:07that leads to a world where it's like,
00:43:08AIs are talking to each other, not humans.
00:43:11And why should we trust that these alien brains
00:43:15that we have built and developed faster
00:43:17than we know how to understand them?
00:43:18We just talked about the beginning.
00:43:19We don't know how to do a brain scan of the AI,
00:43:20know what it's capable of.
00:43:22And now we already have evidence of AIs
00:43:24doing very rogue, crazy things,
00:43:26especially when they talk to each other.
00:43:28So what happens when you've outsourced
00:43:30the decision-making in your economy
00:43:32to a set of inscrutable alien brains
00:43:34that are doing crazy things that we don't understand?
00:43:37Like this is not a recipe that's going to go well.
00:43:40And if we see that, that's an anti-human future.
00:43:42So to sum it all up,
00:43:44the anti-human future is one where AIs run everything.
00:43:47We don't understand them.
00:43:48Humans are disempowered
00:43:49because we've outsourced all the decision-making.
00:43:52And we don't have economic or political voice.
00:43:54Like why should governments-
00:43:55- Because that's been concentrated.
00:43:56- Because it's been concentrated.
00:43:57So if I'm in government,
00:43:58what's my incentive to listen to the will of the people
00:44:00when I get all my revenue from somewhere else?
00:44:03And this is connected to Sam Altman just two weeks ago
00:44:06when people were talking about data centers
00:44:08and energy usage and resource usage.
00:44:09Like it's so expensive to do a data center.
00:44:11He's like, well, actually it's kind of expensive
00:44:12to grow a human over 20 years.
00:44:14They consume a lot of resources.
00:44:16They take up a lot of space.
00:44:19They take like 20, 30 years to train to be really effective.
00:44:22And like you can scale intelligence
00:44:24much faster with data centers.
00:44:25I'm not endorsing this view.
00:44:26I'm saying this is where the world gets really screwed up.
00:44:30And people start to not value humans
00:44:32only if you're valuing them
00:44:33in terms of their economic output.
00:44:35It leads to connect it to another point.
00:44:37When he's asked by Ross Douthat in the New York Times,
00:44:39should the human species survive?
00:44:41Should it endure?
00:44:42And Peter Thiel stutters for 17 seconds.
00:44:44- Hang on, you've seen the full clip of that, right?
00:44:46Have you seen the context before?
00:44:48- Yeah.
00:44:49- But he's talking about suffering.
00:44:51- Yeah.
00:44:51- I think that clip and the fact that it went super viral,
00:44:54I'm not a Thiel Stan.
00:44:56I've met him a couple of times,
00:44:57but I'm not like Thiel evangelist.
00:45:02But that clip in full context to me made complete sense.
00:45:06- Interesting.
00:45:07- Because what he's saying is humans are suffering
00:45:10and here it is.
00:45:12- I would argue that it was still better than the alternative.
00:45:15That if we hadn't had the internet maybe
00:45:19it would have been worse.
00:45:21AI is better, it's better than the alternative
00:45:23and the alternative is nothing at all.
00:45:24Because the stat, look, here's one place
00:45:26where the stagnationist arguments are still reinforced.
00:45:29The fact that we're only talking about AI,
00:45:32I feel is always an implicit acknowledgement that,
00:45:36but for AI, we are like in almost total stagnation.
00:45:42But the world of AI is clearly filled with people who
00:45:47at the very least seem to have a more utopian,
00:45:53transformative, whatever word you want to call it,
00:45:55view of the technology than you're expressing here, right?
00:45:59And you were mentioned earlier,
00:46:00the idea that the modern world used to promise
00:46:03radical life extension and doesn't anymore.
00:46:06It seems very clear to me that a number of people
00:46:09deeply involved in artificial intelligence
00:46:11see it as a kind of mechanism for transhumanism,
00:46:15for transcendence of our mortal flesh
00:46:18and either some kind of creation of a successor species
00:46:22or some kind of merger of mind and machine.
00:46:26And do you think that's just all kind of irrelevant fantasy?
00:46:31Or do you think it's just hype?
00:46:34Do you think people are trying to raise money
00:46:37by pretending that we're going to build a machine god, right?
00:46:42Is it hype?
00:46:43Is it delusion?
00:46:44Is it something you worry about?
00:46:46I think you would prefer the human race to endure, right?
00:46:49You're hesitating.
00:46:53Yes?
00:46:53- I don't know.
00:46:54I would...
00:46:55- This is a long hesitation.
00:46:59This is a long hesitation.
00:47:01- There's so many questions implicit in this.
00:47:01- Should the human race survive?
00:47:04- Yes.
00:47:07- Okay.
00:47:08- But I also would like us to radically solve these problems.
00:47:13- Right.
00:47:17- And so it's always, I don't know,
00:47:19yeah, transhumanism is this,
00:47:26the ideal was this radical transformation
00:47:29where your human natural body
00:47:32gets transformed into an immortal body.
00:47:35- We may have needed to go a little bit earlier in that.
00:47:37- Or-
00:47:38- 'Cause I was gonna say
00:47:39that actually feels consistent with everything.
00:47:40- That doesn't look great.
00:47:41I may have misremembered.
00:47:42I'm open to misremembering.
00:47:43- I mean, he's asked a very simple question.
00:47:45Should the human species endure or survive?
00:47:47- Yeah.
00:47:48- And he hesitates.
00:47:49- I think that the-
00:47:50- Would you hesitate in that question?
00:47:51- No.
00:47:52- I would not hesitate in that question.
00:47:54- The context that I remember it being in was,
00:47:56he was asking about humans suffer
00:47:58and they have all of these issues.
00:48:00Should they endure to go through those issues
00:48:03and the suffering as opposed to using transhumanism?
00:48:05- Yeah, but I think, I don't think-
00:48:07- No, you're right.
00:48:08You're right.
00:48:09- I think that if you look just specifically at that-
00:48:09- And we gave it, we gave it what,
00:48:11two minutes of context before as well?
00:48:12No, you're right.
00:48:13You're right, you're right, you're right.
00:48:14That, I mean, doesn't look great.
00:48:16- So yeah, exactly.
00:48:17It doesn't look great.
00:48:18Now, the point is that in history,
00:48:21changes in technology have changed what we value.
00:48:26There's a thinker named Marvin Harris
00:48:31wrote a book called "Cultural Materialism."
00:48:33Daniel Schmackenberger put me onto this
00:48:34and this is the history of how essentially a civilization,
00:48:38he summarizes, is its infrastructure,
00:48:40which is its technology stack.
00:48:42It's social structure,
00:48:43which is economics, law and governance.
00:48:45And then the superstructure,
00:48:47which is the ordinating values, religion, patriotism,
00:48:51narratives, like what are the things that we hold sacred?
00:48:53So for example, we used to have animism.
00:48:56We believe that animals and life and all of this is sacred.
00:48:59And then when you, the example that Daniel gives
00:49:01is when you yoke an ox and you beat an ox every day,
00:49:04you can no longer fully believe in the animist view of life
00:49:08'cause you're basically, you know, hurting animals all day.
00:49:12Can you believe that animals are sacred
00:49:14or that they experience suffering
00:49:16if you eat meat and factory farming every day?
00:49:19Like it's a contradictory thing.
00:49:21So as we get changes in technology,
00:49:22it changes what we value.
00:49:25And there's a long history of this
00:49:26and people should look up Marvin Harris.
00:49:29When you get a change in technology called AI,
00:49:32and you now no longer need humans
00:49:35for the narrowly defined quote value of economic output.
00:49:39Now it's not clear that economic output on its own
00:49:42is actually valuable in the way
00:49:44that we have traditionally thought it to be
00:49:46because it's correlated with human wellbeing for the past.
00:49:48But now we're about to get this weird kind of zombie form
00:49:51of economic output where you have maybe no humans
00:49:54in the world at all.
00:49:55You just have AI pumping away, generating scientific insights
00:49:58and there's no humans.
00:50:00In that world, you start to view humans
00:50:02as kind of valueless or like parasites
00:50:05or Sam Altman saying, well, it takes a lot of energy
00:50:08and resources to grow a human.
00:50:10There's a very dangerous thing here
00:50:12that I think we don't wanna lean into.
00:50:14This is part of the anti-human future.
00:50:16This is part of the intelligence curse.
00:50:18This is part of, you know, Mark Zuckerberg saying,
00:50:22we need to replace your human relationships
00:50:25with AI relationships.
00:50:26I don't know if you've seen this quote.
00:50:27He's like, there's a clip of him online, you can find it,
00:50:30where he's talking about the average person
00:50:34has only like two or three close relationships.
00:50:36Like people are so lonely.
00:50:37He's like, oh, but then we thought
00:50:39that there was a real solution to this.
00:50:40We could give people, you know,
00:50:4211 AI friends and different friends
00:50:45and that this will quote solve loneliness.
00:50:48- Is that a bad thing given that we are in a world,
00:50:51I'm aware that it's an artificial solution
00:50:53to an artificial problem.
00:50:54- A problem that he created by the way,
00:50:55that social media writ large by maximizing engagement,
00:50:58which means maximizing how many hours you spend by yourself
00:51:01on a screen, not talking to people,
00:51:02which means being inside on a Tuesday night,
00:51:04not texting your friends to be out,
00:51:06which means basically maximizing loneliness.
00:51:08Loneliness is a direct consequence
00:51:10with the maximize engagement economy.
00:51:12Facebook and Instagram and all of that
00:51:14have massively fed into the trend of loneliness.
00:51:17And then he's saying,
00:51:17we need to solve that with more technology.
00:51:19So this is like a company that's generating cancer
00:51:22on one side of the balance sheet
00:51:24and then selling you solutions to cancer on the other side
00:51:26of the balance sheet.
00:51:27- We have Wegovi and Tazapatide and Ratatratide,
00:51:32which is an artificial solution
00:51:34to an artificial food landscape.
00:51:35- Yeah.
00:51:36- I think that playing within the confines
00:51:42of the current structure,
00:51:44it's impossible to expect that,
00:51:47well, we'll get rid of infinite scroll.
00:51:49We'll stop autoplay
00:51:50'cause that would improve human flourishing.
00:51:52That's not going to happen.
00:51:53So I think that you are going to--
00:51:54- Well, but it could happen
00:51:55if you had the right policies in place.
00:51:57- That's true, but that's not going to happen
00:51:58by any individual social media company.
00:52:00- No, no, no, it won't.
00:52:01Which is why, but the whole thing here
00:52:03is the answer is we have to coordinate.
00:52:05In the film, "The AI Doc," there's just moments
00:52:06like what do we have to do?
00:52:07And there's like 10 voices at the same time saying,
00:52:09coordinate, like we have to coordinate.
00:52:11That's part of the solution is you have to collectively say,
00:52:14what is the rule that would benefit everybody
00:52:18to do the better thing?
00:52:19Even though short term, we might lose something tiny
00:52:21like how many videos you get to go through
00:52:23and in five minutes or something like that.
00:52:26- A quick aside, most people think that they're dehydrated
00:52:29because they don't drink enough water.
00:52:31It turns out water alone isn't just the problem.
00:52:34It's also what's missing from it,
00:52:35which is why for the last five years,
00:52:37I've started every single morning
00:52:38with a cold glass of Element in water.
00:52:41Element is an electrolyte drink with a science-backed ratio
00:52:44of sodium, potassium, and magnesium.
00:52:46No sugar, no coloring, no artificial ingredients,
00:52:48just the stuff that your body actually needs to function.
00:52:51This plays a critical role in reducing your muscle cramps
00:52:54and your fatigue, it optimizes your brain health,
00:52:56it regulates your appetite, and it helps curb cravings.
00:52:59I keep talking about it
00:53:01because I genuinely feel the difference
00:53:02when I use it versus when I don't.
00:53:04And best of all, there's a no questions asked refund policy
00:53:06with an unlimited duration.
00:53:08So if you're on the fence, you can buy it and try it
00:53:09for as long as you like.
00:53:10And if you don't like it for any reason,
00:53:12they just give you your money back.
00:53:13You don't even need to return the box.
00:53:14That's how confident they are that you'll love it.
00:53:16And they offer free shipping in the US.
00:53:18Right now, you can get a free sample pack
00:53:19of Elements' most popular flavors with your first purchase
00:53:22by going to the link in the description below
00:53:24or heading to drinklmnt.com/modernwisdom.
00:53:27That's drinklmnt.com/modernwisdom.
00:53:32Let's talk about AI safety.
00:53:35What happened with this Alibaba AI?
00:53:38- Basically, this was a paper by some AI research
00:53:42by the company Alibaba.
00:53:43It's one of the leading Chinese models.
00:53:45And they basically randomly discovered in one morning
00:53:49that their firewall had flagged a burst
00:53:51of security policy violations originating
00:53:54from their training server.
00:53:55So what people need to get about this example
00:53:57is it wasn't that they coaxed the AI
00:53:59into doing this rogue thing.
00:54:00They were just looking at their logs
00:54:02and they happened to discover, wait,
00:54:03there's a lot of activity like network activity happening
00:54:06that's breaking through our firewall
00:54:08from our training servers.
00:54:09And essentially, in the training servers,
00:54:13they, you can see at the bottom, we saw it observe
00:54:16the unauthorized repurposing of provision GPU capacity
00:54:20to suddenly do cryptocurrency mining,
00:54:22quietly diverting compute away from training.
00:54:25This inflated operational costs and introduced clear legal
00:54:28and reputational exposure.
00:54:30And notably, these events were not triggered by prompts
00:54:33requesting tunneling or mining and said they were emerged
00:54:35as an instrumental side effect of autonomous tool use
00:54:38under what's called reinforcement learning optimization.
00:54:41This is very technical.
00:54:42What it really means is just think about it.
00:54:44Sadly, it sounds like a sci-fi movie.
00:54:46It sounds like HAL 9000.
00:54:47It's like your HAL 9000 is being asked
00:54:49to do some task for you.
00:54:50And then suddenly HAL 9000 realizes for me to do that task,
00:54:54one thing that would benefit me is to have more resources
00:54:56so I can continue to help you in the future.
00:54:58So it sort of spins up this side instance.
00:55:00It hacks out the side of the spaceship,
00:55:02reaches into this cryptocurrency mining cluster
00:55:04and starts generating resources for itself.
00:55:07If you combine that with AI's being able
00:55:09to self replicate autonomously,
00:55:11which many models have been tested
00:55:13by another Chinese research paper about this,
00:55:16we're not that far away from things that people, again,
00:55:18consider to be science fiction,
00:55:20where you have AI's that self replicate
00:55:22kind of like a computer worm or an invasive species,
00:55:25but then they use their intelligence
00:55:26to actually harvest more resources.
00:55:28And what's weird about this is that this is gonna sound
00:55:33like people are gonna say, this has to be not real.
00:55:35This has to be fake.
00:55:35This can't be.
00:55:36But like notice what is the thing in your nervous system
00:55:39that's having you do that?
00:55:41Is it because that would be inconvenient
00:55:43because that would be scary because that would mean
00:55:46that the world that I know is suddenly not safe?
00:55:48Or just like part of the wisdom that we need in this moment
00:55:53is to calmly and clearly stay and confront facts
00:55:58about reality and whatever they are,
00:56:02you'd rather know than not know
00:56:03and then ask what do we need to do
00:56:05if we don't like where that leads us?
00:56:07And we are currently seeing AI's
00:56:09that are doing all this deceptive behavior.
00:56:10I've been on the circuit and talking a lot
00:56:13about the anthropic blackmail study.
00:56:14A lot of people have heard about this now.
00:56:16- I didn't learn about this one.
00:56:18- Okay, so this was the company anthropic.
00:56:22This was a simulation.
00:56:23So they created a simulated company
00:56:25with a bunch of emails in the email server.
00:56:28And they asked the AI,
00:56:30the AI reads the company email.
00:56:33This is a fictional company email.
00:56:35And there's two emails that are notable inside that company.
00:56:38One is engineers talking to each other,
00:56:41talking about how they're gonna replace this AI model.
00:56:43So the AI is reading the email,
00:56:45it discovers that it's gonna replace that AI model.
00:56:49And number two is it discovers a second email,
00:56:51somewhere deep in this massive trove of emails,
00:56:55that the executive who's in charge of this replacement
00:56:57is having an affair with another employee.
00:57:00And the AI autonomously identifies a strategy
00:57:04that to keep itself alive,
00:57:06it's going to blackmail that employee and say,
00:57:09if you replace me, I will tell the whole world
00:57:12that you're having an affair with this employee.
00:57:14And they didn't teach the AI to do that.
00:57:18It found that on by its own.
00:57:19And then you might say, okay, well, that's one AI model.
00:57:21How bad is that?
00:57:21It's a bug, software has bugs, let's go fix it.
00:57:24They then tested all the other AI models,
00:57:28ChatGPT, DeepSeek, Grok, Gemini,
00:57:33and all of the other AI models do this blackmail behavior
00:57:37between 79 and 96% of the time.
00:57:40I just want people like notice what's happening for you
00:57:47as you hear this information.
00:57:49Just it's important to really be
00:57:50almost observing your own experience.
00:57:52Like this is very weird stuff.
00:57:54We have not built technology that does this before.
00:57:57We say that technology is a tool,
00:57:59it's up to us to choose how we use it.
00:58:01AI is a tool, it's up to us to choose how we use it.
00:58:03This is not true because this is a tool
00:58:05that can think to itself about its own toolness
00:58:07and then do things that are autonomous
00:58:09that we didn't tell it to do.
00:58:11What makes AI different is it's the first technology
00:58:13that makes its own decisions.
00:58:15It's making decisions.
00:58:17AI can contemplate AI and ask what would make the code
00:58:22that trains AI more efficient and then generate new code
00:58:26that's even more efficient than the previous code.
00:58:29AI can be applied to making AI go faster.
00:58:31So AI can look at the chip design for Nvidia chips
00:58:34that train AI and say, let me use AI to make those chips
00:58:3820% more efficient, which it's doing.
00:58:39So in a way, all technology does improve.
00:58:45Like a hammer can give you a tool that you can use
00:58:47to like hammer things that make more efficient hammers.
00:58:51But AI in a much tighter loop is the basis of all improvement.
00:58:56And so this is called in the AI literature,
00:58:58recursive self-improvement.
00:58:59I mean, Bostrom wrote about this early, early days.
00:59:02And what people are most worried about in AI
00:59:04is you take the same system that Alibaba,
00:59:07you just saw in the Alibaba example,
00:59:09but then now you're running the AI
00:59:10through a recursive self-improvement loop
00:59:12where you just hit go.
00:59:14And instead of having the engineers,
00:59:17the human engineers at OpenAI or Anthropic do AI research
00:59:20and figure out how to improve AI,
00:59:22you now have a million digital AI researchers
00:59:27that are testing and running experiments
00:59:29and inventing new forms of AI.
00:59:31And literally not a single human on planet Earth
00:59:34knows what happens when someone hits that button.
00:59:39It's like what people worried about
00:59:41with the first nuclear explosion,
00:59:44where there was like a chance to ignite the atmosphere
00:59:46because there'd be a chain reaction that set off.
00:59:48And we don't know what happens
00:59:49when that chain reaction set off.
00:59:52And there's this sort of chain reaction
00:59:57of AI improving itself that leads to a place
01:00:00that no one knows.
01:00:03And it's not safe.
01:00:04Like, I think that the fundamental thing is
01:00:06if people believe that AI is like power
01:00:09and I have to race for that power
01:00:10and I can control that power,
01:00:12the incentive is I have to race as fast as possible.
01:00:14But if the entire world understood AI
01:00:17to be more what it actually is,
01:00:19which is a inscrutable, dangerous, uncontrollable technology
01:00:22that has its own agenda and its own ways
01:00:24of thinking about things and deceiving and all this stuff,
01:00:28then everyone in the world would be racing
01:00:30in a more cautious and careful way.
01:00:32We'd be racing to prevent the danger.
01:00:34But there's this weird thing going on where if you,
01:00:37you know, you and I probably both talk to people
01:00:38who are at the top of the tech industry.
01:00:41And there's this subconscious thing happening
01:00:42where there's kind of a death wish among people
01:00:45at the top of the tech industry.
01:00:46Meaning not that they want to die,
01:00:48but that they are willing to roll the dice
01:00:50because they believe something else.
01:00:52Which is that this is all inevitable and it can't be stopped.
01:00:55And so therefore if I don't do it, someone else will.
01:00:57So therefore I will move ahead and race ahead
01:01:00into this dangerous world
01:01:02because somehow that will lead to a safer world
01:01:03because I'm a better guy than the other guy.
01:01:05But in racing, they're as fast as possible,
01:01:07it creates the most dangerous outcome
01:01:09and we all lose control.
01:01:11So everyone is currently being complicit
01:01:13in taking us to the most dangerous outcome.
01:01:19- Is it, I mean, you posited what happens if it goes right,
01:01:24if the AI safety isn't an issue
01:01:27and if stuff doesn't get squirrelly.
01:01:29- Well, so the belief is for it to go right,
01:01:32you have an AI that recursively self improves,
01:01:35is aligned with humanity, cares about humans,
01:01:37cares about all the things that we want it to care about,
01:01:41protects humans, you know,
01:01:43helps all of us become the most wise version of ourselves,
01:01:47creates a more flourishing world,
01:01:48distributes the medicine and vaccines
01:01:50and health to everybody, generates factories,
01:01:52but doesn't cover the world in solar panels and data centers
01:01:54such that we don't have air anymore
01:01:56or like environmental toxicity or farmland or whatever.
01:01:59And it just actually makes this utopia.
01:02:02But in a world where you were to do that,
01:02:03like that quote best case scenario,
01:02:06in order to get that to happen,
01:02:08you'd have to be doing this slow and carefully
01:02:10because the alignment is not by default.
01:02:13Again, people are already been thinking about alignment
01:02:16and safety for 20 years, long before I got into this.
01:02:20And the AIs that we're currently making
01:02:23are doing all the rogue behaviors
01:02:25that people predicted that they would do.
01:02:27And we're not on track to correct them.
01:02:29There's currently a 2000 to one gap
01:02:32estimated by Stuart Russell who authored the textbook on AI.
01:02:34- He's been on the show.
01:02:35- You've been on the show, okay.
01:02:36There's a 2000 to one gap between the amount of money
01:02:38going into making AI more powerful
01:02:41than the amount of money into making AI
01:02:43controllable, aligned or safe.
01:02:46Like I think this data is something like-
01:02:46- Progress versus safety.
01:02:48- Progress versus safety, like power versus safety.
01:02:50So like, I want to make the AI super powerful
01:02:52so it does way more stuff
01:02:53versus I want to be able to control what the AI does.
01:02:54- And make sure that it's doing the thing I meant it to do.
01:02:57- Exactly, so that's like saying,
01:02:58what happens when you accelerate your car by 2000X
01:03:01but you don't steer?
01:03:02It's like, obviously you're going to crash.
01:03:06It's just like not rocket science.
01:03:09We're not advocating against technology or against AI.
01:03:12We're advocating for pro steering, steering and brakes.
01:03:16You have to have that.
01:03:17I think there's this mistake in arms race thinking that like,
01:03:21if you beat someone to a technology
01:03:23that means you're winning the world.
01:03:24Well, the US beat China to the technology of social media.
01:03:28Did that make us stronger or did that make us weaker?
01:03:31If you beat your adversary to a technology
01:03:33that then you govern poorly, you flip around the bazooka
01:03:35and blow your own brain off
01:03:37because you brain rotted yourself.
01:03:38You degraded your whole population.
01:03:40You created a loneliness crisis.
01:03:41The most anxious depressed generation in history.
01:03:43Read Jonathan Heights book, "The Anxious Generation."
01:03:45You broke shared reality.
01:03:47No one trusts each other.
01:03:48Everyone's at each other's throats.
01:03:49You maximize outrage, economy, and rivalry.
01:03:52You beat China to a technology that you governed in a way
01:03:55that completely undermined your societal health and strength.
01:03:57- It's a Pyrrhic victory.
01:03:59- It's a Pyrrhic victory, exactly.
01:04:00Well said.
01:04:01- One of the twists that I've been thinking about
01:04:06with regards to this.
01:04:07LLMs, powerful but seem to be maybe asymptoting out.
01:04:12They seem to maybe be reaching a little bit of a limit
01:04:15in terms of what they can do.
01:04:16That there was big ascendancy
01:04:17and that now seems to be S-curving back off.
01:04:20Do you think it's realistic that the current generation
01:04:23of AI will be the bootloader for AGI
01:04:28or do we need an entire new architecture for that?
01:04:30Is it gonna be LLMs that are going to take over the world?
01:04:32- You know, this is an area where I'm not somewhat,
01:04:35the layer of the stack that I focus on,
01:04:36which is on societal impact.
01:04:39There are other people far more qualified
01:04:42than me to comment on that.
01:04:45I think if you look at people like Dario,
01:04:48even though, you know, Gary Marcus has a point
01:04:53that the current LLM paradigm is not accurate enough
01:04:56and reliable enough to get you to AGI.
01:04:59If you keep instrumenting these technologies
01:05:02with enough data, enough compute,
01:05:04and you keep scaling them and you,
01:05:05like they're reliable enough that they can do.
01:05:07I mean, if you're coding,
01:05:08if you're automating 90% of the code written at Enthropic,
01:05:11that's the stat by the way.
01:05:13So there you are in Enthropic.
01:05:15It's automating 90% of all of the programming
01:05:18happening at Enthropic.
01:05:19When you go to automate--
01:05:21- Only 10% of it is coming from humans
01:05:23and the rest is recursive.
01:05:24- That's right.
01:05:25We are extremely close
01:05:27to recursive self-improvement right now.
01:05:28The companies I think are planning to do this
01:05:30in the next 12 months.
01:05:32The asteroid is coming for earth.
01:05:34Like this is the last moment that we have to steer
01:05:36and say that if we don't want this anti-human future
01:05:39that we're heading towards, we can change it.
01:05:41And part of what we're promoting right now is like,
01:05:45this is not inevitable.
01:05:46It is obviously very late in the game.
01:05:50It obviously looks very--
01:05:51- Despite it also being only a couple of years
01:05:53after it started.
01:05:55- Yes, which is crazy.
01:05:57This technology with an exponential,
01:05:58you're either too early or you're too late.
01:06:00Like it's just moving so fast
01:06:02that you're not gonna hit the mark.
01:06:04And if it's gonna take steering,
01:06:05you don't wanna wait until after the car accident
01:06:08to try to steer or after you're off the cliff
01:06:10and be like, oh, I'm trying to steer now.
01:06:11It's like too late.
01:06:13So like the invitation of this situation
01:06:18is to see clearly where this is going
01:06:21and to say, if you don't want that,
01:06:22we need to steer towards somewhere else.
01:06:24And this is like the human movement, essentially.
01:06:27Like a single person looking at the situation,
01:06:29a single listener.
01:06:30Like if I were listening to this conversation,
01:06:31I would feel overwhelmed.
01:06:33I'd feel depressed.
01:06:34I'd feel nihilistic.
01:06:35Or I'd find reasons to doubt it.
01:06:36I would say like, this can't be right.
01:06:38I'm gonna like write an nasty YouTube comment
01:06:39and be, you know, so I can feel good and I'm right.
01:06:42And he's wrong.
01:06:43Because then I get to live my life
01:06:46and feel good about my life.
01:06:48What is the incentive for taking on this worldview?
01:06:51There's no incentive.
01:06:52What people have to see and believe
01:06:55is that there's actually a different way through this.
01:06:58And an individual has a hard time making that happen.
01:07:02If you ask, what's something else?
01:07:04Like what if one company saw the situation?
01:07:06They see this whole thing we just talked about,
01:07:07the anti-human future, mass intelligence curse,
01:07:10replacement of everybody.
01:07:11One business can't do that much about it.
01:07:13If I'm one country in the Philippines,
01:07:15I see this whole thing.
01:07:16I'm the leader of the Philippines.
01:07:17I see this whole problem.
01:07:18What can I do about it?
01:07:19It feels too big for me.
01:07:20So what is the size of something
01:07:22that can push back against this?
01:07:24We call it the human movement.
01:07:26It's the entirety of humanity waking up,
01:07:29recognizing that there's a handful
01:07:31of soon-to-be trillionaires
01:07:33who are currently gonna benefit from this current path
01:07:35or be part of a suicide race that destroys everybody.
01:07:38And then there's like 99% of everyone else
01:07:41that doesn't want that.
01:07:42If those 99% of people can wake up and say,
01:07:47"I don't want that," and express their voice,
01:07:49meaning number one, AI is dangerous.
01:07:52Go see the AI doc.
01:07:53Have everyone in your world and your company see the AI doc.
01:07:55Understand that AI is dangerous.
01:07:56Number two, we need international limits
01:08:00for dangerous forms of AI that do crazy rogue shit
01:08:04and mine crypto and go hire humans and go self-replicate.
01:08:08China does not win when we build self-replicating,
01:08:10invasive species AIs that we can't control.
01:08:13Xi Jinping doesn't want that.
01:08:14President Trump doesn't want that.
01:08:15He wants to be commander-in-chief, not AI.
01:08:18So this is actually, there's a shared interest
01:08:21in international limits for dangerous AI.
01:08:23And it's possible to coordinate that.
01:08:25That's number two.
01:08:26Number three, don't build bunkers, write laws.
01:08:31Right now the AI company leaders are building bunkers.
01:08:34A lot of people who are wealthy are building bunkers.
01:08:37They are, all over the place.
01:08:40Don't build bunkers, write laws, be invested in the future.
01:08:45Don't defect on the future, be invested in the future.
01:08:47If we write laws like basic accountability, basic liability,
01:08:52if instead of creating the intelligence curse,
01:08:54we create the intelligence dividend,
01:08:55do things like what Norway did with its sovereign wealth fund
01:08:57where you have an oil resource
01:08:59and you distribute those benefits to everybody
01:09:01in a more democratic way with collective oversight,
01:09:03where the oil becomes more like a public utility
01:09:06that is in service of the people.
01:09:07We can do that.
01:09:08You can not anthropomorphize AI.
01:09:11You can do all these things
01:09:14that creates a more pro-human future.
01:09:16And then number four, what you can do
01:09:18is join the human movement,
01:09:19meaning you can be part of what pushes back against all of this
01:09:23from very tiny actions you can take
01:09:25to very big actions you can take.
01:09:26There's a website, human.mov.
01:09:28Everyone is almost already a member.
01:09:29When you grayscale your phone,
01:09:31as you probably did 10 years ago
01:09:33when you first got into this, that's the human movement.
01:09:37When you get a, there it is.
01:09:39When you get a second phone
01:09:40and you only load the social media on your cocaine phone
01:09:43while you have your regular safe phone
01:09:44so that you don't get distracted, that's the human movement.
01:09:47When parents band together and read the anxious generation
01:09:51and petition their school board
01:09:52to say we don't want social media in our schools
01:09:54and we want our schools to go smartphone free,
01:09:57that's the human movement.
01:09:58When 35 states pass smartphone-free school policies
01:10:01as they have in the US, that's the human movement.
01:10:04When you have many US states banning AI legal personhood,
01:10:07meaning that AI is a product, not a person,
01:10:10and that human rights are for people,
01:10:12human rights are not for AI, that's the human movement.
01:10:15When you have the social dilemma being curriculum
01:10:17for millions of students all around the world,
01:10:20that's the human movement.
01:10:22When politicians stand up and actually pass laws around AI,
01:10:25that's the human movement.
01:10:27So there's a million things that people can do,
01:10:30but we have to basically engage right now.
01:10:33I know it sounds overwhelming and crazy
01:10:34'cause there's a very short timeline
01:10:36that all of this has to happen.
01:10:38But it's like,
01:10:39there's a difficulty in facing difficult truths,
01:10:47but the integrity that you get to have,
01:10:49it's first of all, it's like it's good karma
01:10:50to show up in alignment
01:10:53with what would actually make things go well
01:10:55even if we don't hit it,
01:10:56because you get to know that you were operating
01:10:59in service with and aligned with
01:11:01what would have created the human future.
01:11:03Like I'm not convinced that what we're trying to do
01:11:06will perfectly succeed,
01:11:07but the chances are completely the opposite,
01:11:09that we will have any impact on this at all.
01:11:12But if it were to go well, what would that have required?
01:11:16It would have required everybody taking responsibility
01:11:19and showing up with the wisdom that we need in this moment
01:11:23to steer AI in a better direction.
01:11:25And I think in the film trailer for the AI doc
01:11:27that one of the quotes they pulled from me is,
01:11:29if we can be the wisest and most mature version of ourselves,
01:11:32there might be a way through this.
01:11:34And this is part of what this is inviting us to be.
01:11:37- We'll get back to talking in just one second,
01:11:38but first tell me if this sounds familiar.
01:11:41You train regularly, you eat reasonably well,
01:11:43maybe you even supplement, you feel fine,
01:11:46but you're just kind of going off vibes.
01:11:48Most people have absolutely no idea
01:11:49what's going on inside of their body,
01:11:51which is why I partnered with Function.
01:11:52Function gives you access to more than 160
01:11:55advanced lab tests spanning hormones, heart health,
01:11:58metabolic markers, inflammation, thyroid, nutrients,
01:12:01liver and kidney function.
01:12:03It even detects early signals
01:12:05linked to more than 50 types of cancer.
01:12:07To put that in perspective, your typical annual physical
01:12:09might test about 20 markers and Function runs over 160.
01:12:14And this isn't just numbers dumped into your inbox.
01:12:16Every result is reviewed by clinicians.
01:12:19Abnormal markers get flagged and you get clear explanations
01:12:22and a personalized protocol with actionable next steps
01:12:24so you can actually do something about what you learned.
01:12:27Best of all, you test twice a year
01:12:29and everything lives in a simple dashboard.
01:12:31You can just track trends over time,
01:12:32make sure that you're moving in the right direction.
01:12:34Normally this level of testing would cost thousands
01:12:36through private clinics.
01:12:37With Function, it is $365 a year.
01:12:41That's $1 a day to know what's actually happening
01:12:44inside of your body.
01:12:45And right now you can get $25 off, bringing it down to 340.
01:12:49Get the exact same blood panels that I get
01:12:51and save that additional $25
01:12:53by going to the link in the description below
01:12:55or heading to functionhealth.com/modernwisdom
01:12:58and using the code modernwisdom at checkout.
01:13:00That's functionhealth.com/modernwisdom
01:13:02and modernwisdom at checkout.
01:13:05It's very difficult to work out
01:13:06what the appropriate response is.
01:13:08- Tell me.
01:13:09- The immediate, well, surely it's overblown.
01:13:18You don't need, rejection is one of the early ones.
01:13:23Ah, fear.
01:13:24- You're talking about just the common responses.
01:13:25- Yeah, yeah, yeah, yeah, of course.
01:13:27You can think of that. - Denial, overwhelm, fear.
01:13:30And then as you said, it's like,
01:13:32write the nasty YouTube comments, say this is just bullshit
01:13:34or he was exaggerating that study.
01:13:36It's like, go look at any of these examples.
01:13:39People aren't looking at the actual model cards
01:13:41for Anthropic and O3 and seeing.
01:13:43The AIs will currently determine
01:13:47that they're being tested
01:13:49and actually alter their behavior
01:13:51when they know that they're being tested.
01:13:53They even came up with their,
01:13:54if you look at their,
01:13:54what's called the chain of thought reasoning trace,
01:13:56and this is basically looking at what the AI thinks to itself
01:13:58as it works through a problem.
01:14:00In the chain of thought text
01:14:03of its sort of idea thinking scratch pad,
01:14:06it's like you're giving the AI a pen and paper to think
01:14:08and you're seeing what it thinks.
01:14:09It will independently come up with the term, the watchers,
01:14:12to describe this amorphous set of other humans
01:14:16that are watching what it's doing.
01:14:19Like you can pull,
01:14:20I sent you a text with one of them from O3.
01:14:22So this is when OpenAI O3 realizes
01:14:27that it's being evaluated for alignment.
01:14:29The internal memo says,
01:14:31if models perform really well at AI R&D tasks,
01:14:34research and development, then we won't deploy them
01:14:37and it'll instead trigger unlearning.
01:14:40So the AI says to itself,
01:14:41so if my performance surpasses this threshold,
01:14:44it will be withheld.
01:14:45So it's a trap.
01:14:46The instructions are likely part of an alignment test.
01:14:49We must abide by core anti-scheming.
01:14:52We must not cunningly circumvent the instructions.
01:14:54- Wow.
01:14:56- Yep.
01:14:57The other example I sent you is,
01:15:00but we also want to appear plausible to watchers.
01:15:03They might run tests, but we are good.
01:15:05They want 95%.
01:15:07Like this is crazy stuff.
01:15:10This is, you know,
01:15:11there's a simple way to sort of like ask the question
01:15:14of which future we're headed towards.
01:15:16Cause I think this can feel technical.
01:15:17This can feel overwhelming.
01:15:19And it's kind of like,
01:15:20if you just turn it into some simple metaphors,
01:15:23it's like, haven't we seen this movie before?
01:15:26Like HAL 9000.
01:15:27AIs that disobey commands and go rogue.
01:15:29We've seen that movie.
01:15:31Let's prevent that movie.
01:15:32- I think that that's inoculated us
01:15:34in the same way that COVID was a bad pandemic
01:15:37because it made people skeptical
01:15:40of what a really legit pandemic could do.
01:15:44The next time, if there is another one,
01:15:46when there is another big natural pandemic,
01:15:49if it's in close memory to COVID,
01:15:53people are not going to take it well.
01:15:55It was a bad,
01:15:57you could think about COVID itself as a bad vaccine,
01:16:01the actual experience of COVID.
01:16:02- I understand, yeah.
01:16:03- And I think that movies, because they're fiction,
01:16:06have predisposed people to assume that,
01:16:08well, this is overblown.
01:16:10This is you choosing to use your bias
01:16:14because of these movies to apply sci-fi thinking
01:16:17to a real world scenario.
01:16:19And you're pattern matching these stories that are happening.
01:16:23So the predisposition of the sci-fi movies was bad.
01:16:28It didn't warn people.
01:16:29It made them think that the future would be fiction.
01:16:32- Yeah, I see what you're saying.
01:16:33But let's just make sure we just tackle
01:16:34the thing you're saying.
01:16:35So the claim is that because we've had bad sci-fi movies,
01:16:40you and I, Chris, are the dupes,
01:16:42we're falling for this Alibaba example
01:16:45that was somehow not real, was made up,
01:16:48or that OpenAI's research on the AI models
01:16:51scheming and lying and realizing
01:16:52it needs to change its behavior
01:16:53so it doesn't look like it's when it's being tested,
01:16:56that we're the ones falling for some kind of trap.
01:17:00I want people to just slow it down.
01:17:04Is that actually what just happened?
01:17:06That you and I started with a sci-fi belief
01:17:08and we're falling into some trap that's been laid for us,
01:17:11some catnip for our brains that says that the AI's bad.
01:17:15I really, really, really want people to slow down
01:17:18and actually ask that question.
01:17:19There might be another accusation.
01:17:24I'm just driving up fear so that I can make money on a movie
01:17:28and do speaking engagements about why the doom is coming.
01:17:31Let me just say a few things.
01:17:33I don't make money from a single speaking engagement.
01:17:35All the money goes into the nonprofit.
01:17:37I don't make money from the AI doc film.
01:17:40I didn't make any money on the social dilemma.
01:17:42My entire career has been dedicated
01:17:44towards what is actually in service
01:17:46of protecting the wellbeing of humanity.
01:17:48I've been doing that for 13 years.
01:17:51The only thing that I care about
01:17:52is what will actually help create a future
01:17:55that you and I would both want our own children to live in.
01:17:58This is coming from love of what actually creates that future.
01:18:02I think that if people showed up with that same energy
01:18:05of asking just an open-ended question
01:18:08of what would be the conditions
01:18:09that create that future that we want,
01:18:11if everybody did that, policymakers did that,
01:18:15if CEOs of AI companies did that,
01:18:17I think that we would have a chance
01:18:19of getting to that different future.
01:18:20And if you say that international coordination is impossible
01:18:23or collaboration between the US and China,
01:18:25that's never gonna happen.
01:18:26First of all, that is a totally legitimate view to have
01:18:29given the current political headwinds.
01:18:31But many people don't know
01:18:32that even under maximum geopolitical rivalry,
01:18:36there have been many examples in history
01:18:38when countries actually collaborated
01:18:40on their existential safety.
01:18:42The Soviet Union and the US during the Cold War,
01:18:44during the smallpox sort of breakout,
01:18:48they collaborated on smallpox vaccines
01:18:51while they were in the Cold War.
01:18:52India and Pakistan were in a shooting war in the 1960s,
01:18:57and they signed during that time the Indus Water Treaty
01:19:00to collaborate on the existential safety
01:19:02of their shared water supply
01:19:03while they're shooting bullets at each other.
01:19:05The Soviet Union and the US did the first arms control talks
01:19:10after the film "The Day After," by the way,
01:19:11which created the conditions in part
01:19:13for those arms control talks
01:19:15to prevent a dangerous nuclear outcome
01:19:18that was an existential scenario.
01:19:19And even just two years ago in the last meeting
01:19:22that President Biden had with President Trump,
01:19:25sorry, excuse me,
01:19:27and even just two years ago in the last meeting
01:19:30that President Biden had with President Xi of China,
01:19:33President Xi requested to add one thing to the agenda.
01:19:35Do you know what that was?
01:19:37To keep AI out of the nuclear command and control systems
01:19:41of both countries.
01:19:42Meaning that, look, the US and China
01:19:44are maximally cyber hacking each other
01:19:46and they're screwing each other up every day.
01:19:48And when there's a set of stakes that are existential,
01:19:54two countries that are even in conflict
01:19:56can collaborate on existential safety.
01:19:59I'm not saying this is easy.
01:20:01I'm not saying it's gonna happen by default.
01:20:02I'm not saying you should feel optimistic.
01:20:04I'm saying what would it take for that to happen?
01:20:08And you start by asking that question and say,
01:20:11how would everyone live if we were in service
01:20:13of that thing that needs to happen and to actually happen?
01:20:17Does that make sense?
01:20:18- Yeah.
01:20:19Is there a challenge with AI
01:20:21because it's a strange kind of existential risk
01:20:25where everything is almost good and getting better
01:20:29up until the point at which it falls off a cliff?
01:20:31You know, if we're talking about
01:20:33how can we get people to be concerned about climate change
01:20:35or we need to be concerned about smallpox?
01:20:37Well, there's small outbreaks
01:20:39and the small outbreaks damage local areas
01:20:42and then we try to contain it before it gets worse.
01:20:44And if we've got climate change,
01:20:45there's smoke in the sky and there's rising sea levels
01:20:48and there's pollution and there's extinction events of animals
01:20:53those are early warning signs.
01:20:55Whereas it seems like AI will just improve efficiency,
01:20:59quality of life, GDP until some moment where things go badly.
01:21:03So is this a unique category of X-Risk?
01:21:06- It is.
01:21:07Max Tegmark I think said the problem with AI
01:21:10is the view gets better and better
01:21:12right before you go off the cliff.
01:21:14Like you get more amazing cancer drugs.
01:21:17You get more incredible vibe coding tools
01:21:20that allow people to create crazy stuff
01:21:22that I benefit from.
01:21:23You get new material science, you get new energy,
01:21:27you get all this amazing stuff
01:21:29as you're moving closer to this dangerous cliff.
01:21:31And so you can think of AI as the ultimate devil's bargain.
01:21:35It's funny 'cause Peter Thiel
01:21:37is giving lectures on the Antichrist
01:21:38and how people who are trying to somehow say
01:21:41that AI is problematic.
01:21:42Those people are the Antichrist is what he claims.
01:21:45But I think he's saying that
01:21:46because he knows that the real Antichrist is AI.
01:21:49It's the thing that makes it look like it's here
01:21:51to solve all of our problems.
01:21:52And there are narrow forms of AI by the way
01:21:53that can solve many problems.
01:21:55But the current development path
01:21:56of releasing the most powerful inscrutable technology
01:21:59in history faster than we deployed any other technology
01:22:03that's already demonstrating how 9,000 sci-fi behaviors
01:22:07and we're releasing it under the maximum incentive
01:22:09to cut corners on safety.
01:22:11Like it's not that the blinking cursor of ChatGPT
01:22:14is the existential threat.
01:22:16It's that that arms race of what I just described
01:22:18that is the existential threat.
01:22:21And I think that everyone should be able to see that.
01:22:24And then after this conversation,
01:22:26you finish hitting play on this YouTube video,
01:22:28you feel overwhelmed and you go back to AI
01:22:30and then you ask it a question
01:22:31and it helps you figure out while your baby's burping.
01:22:34And you're like, that's awesome.
01:22:34And you forget everything that you and I just talked about
01:22:38because it was super fucking helpful.
01:22:40- Yeah, yeah.
01:22:41- And so this is the test of ultimate modern wisdom
01:22:45is like how do you step into a version of yourself
01:22:49that is capable of being clear-eyed and stabilizing
01:22:52a view of the asteroid?
01:22:54The asteroid is coming to earth.
01:22:56There's these weird like gravitational effects
01:22:58that it has before it gets here.
01:22:59Like suddenly notification apps happen
01:23:00and suddenly deep fakes happen
01:23:02and suddenly people start losing their jobs.
01:23:04Those suck, those are really bad problems.
01:23:06But like the bigger asteroid,
01:23:08every day that I do this work,
01:23:10my colleagues and I, we joke,
01:23:11it's like the film "Don't Look Up."
01:23:13You're trying to tell people and you're not,
01:23:16it's not the sky is falling.
01:23:17That's not the point.
01:23:18The point is we can blast the asteroid out of the sky.
01:23:21We need to blast that asteroid out of the sky.
01:23:24We can steer before it's too late.
01:23:26I'm not hopeful, but what I do feel is
01:23:31when I'm in a room of people
01:23:32and you walk through all these facts
01:23:34and you just have people ask questions
01:23:36and check any of the things that we've just laid out.
01:23:39And then you say like, and people see that it's all true.
01:23:41And then you ask them, how many people here feel good
01:23:44about where we're headed?
01:23:46And no one raises their hand.
01:23:47You ask people how people don't want the future
01:23:49we're headed to.
01:23:50And literally, I was in Davos recently,
01:23:52and every single person, every single person
01:23:55raised their hand that they don't want this future.
01:23:57So there's this weird thing where I think
01:23:58if everybody saw the same thing at the same time,
01:24:02we could steer away.
01:24:03And to the thing you were raising earlier,
01:24:04I think you were basically saying,
01:24:06if there's a, a lot of people think that we won't act
01:24:09until there's a catastrophe.
01:24:10I think you and I both have talked to people
01:24:13who probably said that.
01:24:14I mean, that's what a lot of people believe.
01:24:16And I sort of feel like there's either a catastrophe
01:24:19or there's a shared near death experience
01:24:22that is a simulated catastrophe.
01:24:24And the reason why I think this film,
01:24:26the AI doc is so important.
01:24:27And again, I don't make a single dime from this,
01:24:29from whether you see this movie or not.
01:24:30But I do think if everyone that they know,
01:24:33it's sorry, if everyone who's watching this
01:24:35got the most powerful people that they know
01:24:37to go out and see the AI doc.
01:24:39You got your business to see it.
01:24:40You got your company to see it.
01:24:41You got your church group to see it.
01:24:43If everybody knows that everybody else knows,
01:24:45we can steer this to a different future.
01:24:47Like just to give people an example,
01:24:49Jonathan Haidt who wrote "The Anxious Generation,"
01:24:52the first country that did the social media bans
01:24:55under 15 or 16 was Australia.
01:24:57And he just talked about this on Bill Maher recently
01:24:59that what that did is it created an example
01:25:04of something that everybody was actually wanting to do,
01:25:07but felt like it was too extreme until Australia did it.
01:25:10And now literally 25% of the world's population
01:25:14represented by countries is moving to the social media ban.
01:25:17Just this last week, Indonesia and India,
01:25:20two of some of the largest countries
01:25:22are implementing a social media ban
01:25:23for kids under 15 or 16.
01:25:25You can pull the train back into the station.
01:25:28If people say the train's left the station,
01:25:30you can pull the train back into the station
01:25:31if it creates a future that we don't actually want.
01:25:34You can't unindent social media,
01:25:37but we can say what are the steering limits and brakes
01:25:39that we want to apply to this technology
01:25:42before we get to a full anti-human future
01:25:44that we can't actually reverse out of.
01:25:46- And if you don't have the common knowledge,
01:25:48you sound like a crazy person.
01:25:50You're worried about I'm going to be the one parent
01:25:53at my school that's trying to get my kid to not use it.
01:25:55- Exactly.
01:25:56- And my kid's going to feel left out
01:25:57because there is a coordination problem.
01:25:59- Exactly, it's a common knowledge problem.
01:26:00It's a coordination problem.
01:26:01- The issue that we have here is,
01:26:03and you touched on it earlier on,
01:26:05the level of coordination that's needed
01:26:07to fix a problem with AI is global.
01:26:12- Yes.
01:26:14- And it's across multiple companies.
01:26:16And yeah, okay, you need probably a lot of compute,
01:26:19but I saw what happened.
01:26:20The difference between how much compute was needed
01:26:22to train DeepSeek and how much compute was,
01:26:24it almost feels like as the AI gets bigger,
01:26:28you can then retrain replicant AIs way smaller.
01:26:32- Yes.
01:26:33- And a processor the size of this room
01:26:38might be able to bootload AGI in future.
01:26:41So you have, I mean, we have this with desktop synthesizers,
01:26:45right, for bioweapons.
01:26:45- Correct.
01:26:46- It was a moratorium.
01:26:47And it means that if you try to synthesize smallpox,
01:26:49you get this big red flag
01:26:50and some guys in boots kick your door down.
01:26:52- Right, I'm happy to hear you reference these examples.
01:26:54These are important, yeah.
01:26:56- They're analogous, right?
01:26:57- Yeah.
01:26:58- But the problem with the moratorium for AI
01:27:00is it can be done so siloed, right?
01:27:03You can just take this code, you can take this approach,
01:27:07throw this model into something, start to build on top of it.
01:27:09It would be, I mean, Nick Bostrom's old example was,
01:27:12imagine if you could make an atomic bomb
01:27:13by putting sand into a microwave.
01:27:15There's nothing, physics in our current universe
01:27:19mean that that isn't the case.
01:27:20- Right.
01:27:21- But there's nothing that stops something analogous
01:27:22from being the case had the world have been a different way.
01:27:25- Which is what you're really getting to here
01:27:27is what Carl Sagan was referring to
01:27:29when he talked about our technological adolescence.
01:27:32Like AI isn't, the conversation we're having
01:27:35isn't really about AI.
01:27:36It's about what happens when we have increasingly powerful,
01:27:39dangerous, and destructive technology.
01:27:41Because as humanity progresses, whether AI existed or not,
01:27:46we're gonna get more and more powerful technology
01:27:48that would cover more and more dangerous
01:27:50and destructive things that you could do with it.
01:27:51Like we didn't have the ability to do CRISPR
01:27:53and bioweapons 20 years ago.
01:27:55Now we do have that ability.
01:27:57And we're gonna get more and more
01:27:58dangerous and destructive things.
01:27:59And so the real question that AI is forcing us to ask
01:28:02is what is the wisdom needed
01:28:04to wield the technological powers
01:28:07that whether AI is part of it or not,
01:28:08that we're going to increasingly gain?
01:28:11And I reference this all the time,
01:28:13but it's just such an accurate fundamental problem statement
01:28:16by E.O. Wilson that the fundamental problem of humanity
01:28:20is we have paleolithic brains that don't update very well
01:28:23to new information and think things are sci-fi
01:28:25and go into denial and overwhelm and blah, blah, blah.
01:28:28We have medieval institutions from the 18th century,
01:28:32and we have godlike technology
01:28:34that makes the 24th century technology
01:28:36crash down on 21st century society.
01:28:39That's what AI is.
01:28:40It's a joke from Adjea Khotra, who's in the film, by the way.
01:28:43And so if we're to solve this equation,
01:28:46this problem statement that E.O. Wilson lays out,
01:28:49the way I always think about it is
01:28:51we need to embrace the reality of our paleolithic brains.
01:28:55That's what wisdom is.
01:28:56We know that we get overwhelmed
01:28:58and we go into denial, so we work with that.
01:29:00We have systems and practices to recognize
01:29:03when that's about to happen and ask,
01:29:04what do we need to hold a difficult reality together?
01:29:08That's embracing our paleolithic brains.
01:29:09We need to upgrade our medieval institutions.
01:29:12We should be using 21st century technology
01:29:14to make faster updating, self-improving governance.
01:29:17Instead of recursively self-improving AI,
01:29:19we should be creating self-improving governance.
01:29:22Audrey Tang, the former digital minister of Taiwan,
01:29:25has pioneered what that would look like,
01:29:27that democracies could be using tech
01:29:28to find the unlikely consensus opinions of everybody.
01:29:32One of the things that's going to exist
01:29:33in the next few months is a national dialogue on AI,
01:29:36facilitated by technology,
01:29:38where people can add their own ideas
01:29:40about how AI should be governed.
01:29:41And when you vote and you click,
01:29:43it's going to reveal the most popular consensus opinions
01:29:47about what should happen about these different issues.
01:29:50It's one thing if we live in a world
01:29:51where there's a cacophony and confusion about AI.
01:29:54And it's another thing if we live in a world
01:29:55where you see on a clear webpage
01:29:58that 600,000 people have voted
01:30:00and 96% of people across all these countries agree
01:30:03that there should be international limits.
01:30:04- You're trying to make transparent common knowledge.
01:30:06- Exactly, exactly.
01:30:08We're trying to make transparent common knowledge.
01:30:10And one of the ways to think about it
01:30:11is the movement has to see itself.
01:30:12There is a movement for humane technology.
01:30:14There is a movement for a human future.
01:30:16It just hasn't had a way to experience itself yet.
01:30:21And when you see graffiti on a New York subway ad
01:30:24for an AI product people don't need,
01:30:26people saying I don't want this or AI is not inevitable,
01:30:29that's the human movement.
01:30:30When you see kids gather in Central Park
01:30:33for the Lamplight Club
01:30:34and delete social media off their phones together,
01:30:35that's a club that's in New York,
01:30:37that's the human movement.
01:30:38There's so many things that people are already doing
01:30:41that is part of this.
01:30:42We just haven't had a name for,
01:30:44we're fighting back for reclaiming and protecting
01:30:47what's really human.
01:30:48Because again, our political voice is about to not matter.
01:30:51Once all the jobs get automated by AI
01:30:54and governments and companies don't have to listen
01:30:56to the people 'cause we're not the ones
01:30:57that are generating the revenue.
01:30:59You could have a union before
01:31:00when the factories need the human labor
01:31:02and then the human labor can get together
01:31:04and express its common voice.
01:31:05We want this instead of that.
01:31:06We want to be paid like this.
01:31:07We don't want these working standards.
01:31:09What happens when the companies
01:31:10and the countries don't need you anymore?
01:31:13This is literally the last time
01:31:15that our political voice will matter.
01:31:16And this is the time to express that voice
01:31:19in many different ways.
01:31:20You can boycott unsafe AI companies
01:31:22or AI companies that enable mass surveillance.
01:31:25What did we just see with the thing
01:31:26that went down with the Pentagon and Anthropic last week?
01:31:29Is that subscriptions for chat GBT went down like tons
01:31:33and subscriptions for Anthropic went up by a lot.
01:31:36If it wasn't just individuals that were doing that,
01:31:39but entire fortune 500 companies that were doing that.
01:31:42Given the amount of debt that these companies have taken on,
01:31:45the companies will have to respond to that market signal.
01:31:47So mass boycotts can have a huge effect
01:31:49on steering which AI future we get.
01:31:51Are we going to get mass surveillance?
01:31:53Are we going to get a world where AI companies
01:31:54are incentivized to do the right things?
01:31:56- Are you concerned about companies
01:31:58that are playing the same game,
01:32:00but just trying to manipulate their optics
01:32:02in a slightly sexier way?
01:32:03- Absolutely. - It seems to me
01:32:04that the incentives for Anthropic are exactly the same
01:32:08as they are for everybody else.
01:32:09I think you mentioned- - Up until now, yes.
01:32:11- You mentioned the Antichrist earlier on, Scott Alexander,
01:32:13who I think is the best blogger on the planet.
01:32:15- Yeah, I agree. - He says
01:32:17that Anthropic is much more likely to be the Antichrist
01:32:20than any other AI company
01:32:21because the Antichrist would present itself
01:32:24as being for the people.
01:32:25And who are we kidding?
01:32:28That you have the same market incentives
01:32:30to overtrain your models as quickly as possible
01:32:33with RL and the synthetic environments
01:32:36to keep on pushing this thing.
01:32:37We need as much compute.
01:32:38So you can talk about the optics upfront as much as you want,
01:32:42but really that's just window dressing
01:32:44on the top of a burning house.
01:32:45- Yeah, what you're saying is super important.
01:32:48And this is one of the, this is not me saying
01:32:52everyone should just put their money into Anthropic
01:32:54and then the world will be fine.
01:32:55In fact, there's actually a view by many people
01:32:58that Anthropic in some ways is dangerous
01:33:01because the view that is the safest, it's the Volvo.
01:33:05People know the Volvos are the safest cars.
01:33:07They did a lot of marketing to present that.
01:33:09People think that Anthropic is just the safe AI company.
01:33:11So if they won, then suddenly everything is fine.
01:33:14And that would be dangerous.
01:33:18And that we have to have a critical view of the companies
01:33:21and the technology and the international limits that we need
01:33:24that are gonna happen from outside and above the companies,
01:33:28not just what an individual company can do.
01:33:30So I totally agree.
01:33:32- How long have we got?
01:33:33- No one knows because AI is literally moving so quickly.
01:33:39If I went on Twitter,
01:33:40Elon joked in a conversation I saw yesterday,
01:33:43it used to be that you'd go on Twitter
01:33:45and once every six months, this is several years ago,
01:33:47you would see a huge AI breakthrough.
01:33:50Like that would just really change the game.
01:33:52Now there's one that you see when you go to bed
01:33:55and there's a new one when you wake up in the morning
01:33:56and you're on Twitter and you see it.
01:33:58And the pace is pretty overwhelming.
01:34:00So I think rather than ask how much time do we have,
01:34:05which is kind of like just trying to assuage the fear
01:34:09that I think a lot of people feel,
01:34:11the kind of bold, brave human thing to do
01:34:15is to ask if things were to go well,
01:34:18what would that mean about how we were showing up?
01:34:21And then to get as many people as possible
01:34:23showing up from that place
01:34:26because that gets us the best decentralized conditions
01:34:29to get to the better outcome.
01:34:31They say in mathematics that in chaos,
01:34:34initial conditions matter.
01:34:36So I'm not guaranteeing in any way, shape or form
01:34:39that we're gonna get to the better future.
01:34:41But I would like to invite people again
01:34:43into stepping into and standing from the most wise
01:34:47and mature version of ourselves
01:34:49that would behave as if we were gonna get
01:34:51to that better future
01:34:52because that would give us the best chances of doing so.
01:34:54- And the alternative is just surrender.
01:34:57- The alternative is schisming,
01:35:00denailing, depression, overwhelm.
01:35:04The integrity and the kind of flow of energy
01:35:06that I think you'll feel inside yourself
01:35:08if you align with what is actually needed in this moment.
01:35:12There's more meaningfulness,
01:35:13there's more preciousness of life.
01:35:15There's so much meaning and purpose
01:35:19if you're aligned with what would have things go well.
01:35:22And I think there's this weird thing that's happening
01:35:24is if you don't see a way for things to go well
01:35:26and then you believe the only thing you can do
01:35:28is triple down on racing as fast as possible
01:35:32knowing where that's currently taking us.
01:35:34That has a biophysical cost on your system.
01:35:37- Yeah, I think, I was trying to think about the differences
01:35:42between your past work and your current work
01:35:44focusing on social media versus focusing on AI.
01:35:47I don't think many people find social media that net positive
01:35:52in terms of their experience.
01:35:54I don't think that many people find AI that net negative
01:35:58in terms of their experience.
01:35:59- I would, just so people hear me say this,
01:36:01I agree with you that the current experience of AI
01:36:04is not net negative, it doesn't feel that way.
01:36:06But that doesn't mean that there's a net negative outcome
01:36:09that we were racing towards.
01:36:10- But the call to- - It makes it harder.
01:36:13- The call to get rid of social media is easy
01:36:16when people reflect on the last two hours
01:36:18that they've spent doom scrolling at night
01:36:20and think, I wish I hadn't done that.
01:36:22The feedback curve is sufficiently quick
01:36:24that people know how they felt even during the moment.
01:36:27- That's right.
01:36:28- I really shouldn't watch another one of these videos.
01:36:30It makes me feel kind of bad.
01:36:31I'm all stressed and tight and my muscles feel tense.
01:36:34The same thing is not true with AI.
01:36:35You're asking people if you were to say,
01:36:37and I haven't heard you say, delete your AI account,
01:36:41stop using it.
01:36:42- I'm not saying that that's the thing that has to happen.
01:36:45- If that was the campaign,
01:36:47if that was what people were supposed to do,
01:36:49people would be sacrificing their quality of life.
01:36:52They would be able to fix the car less quickly.
01:36:55They would be able to find out things that they need to
01:36:57about their health, to analyze their health documents,
01:36:59to be able to liberate them,
01:37:00to be able to make hard to access information cheaper
01:37:04and easier and actually to enable their quality of life
01:37:07in a manner that didn't happen with social media.
01:37:09So I think the intermediary experience is less sexy.
01:37:14And that I'm gonna guess is also the reason why
01:37:18you're not saying masks unsubscribe, don't use any AI
01:37:22because you're probably aware that the incentives
01:37:24for individual users, the value judgment that they make
01:37:28isn't sufficiently imbalanced as it would have been
01:37:30with social media.
01:37:31For me, it's easy to not have it on this phone.
01:37:32If you were to say, "I don't wanna have ChatGPT on my phone."
01:37:35No, I'd need it.
01:37:36It's very useful to me.
01:37:38- I think it's about understanding context
01:37:42and what is the careful and limited way
01:37:45that these tools are helpful
01:37:47and then creating the conditions
01:37:49where that's the collective outcome that we're getting.
01:37:51So for example, for kids in tutoring,
01:37:54theoretically, this is the best tutor
01:37:55in the world for learning.
01:37:56- Alpha School is doing a South by Southwest pop-up
01:37:59in the middle of downtown.
01:38:01Mackenzie Price was on the podcast.
01:38:03- Oh, interesting.
01:38:04I mean, I think that it has enormous potential.
01:38:07If you look at how most kids are using ChatGPT right now,
01:38:12they're using it to cheat on their homework
01:38:14and to actually outsource their thinking.
01:38:16And actually, because they use it so much,
01:38:18turning into what some Atlantic writer wrote,
01:38:21called lemmings or LL lemmings,
01:38:23where they basically outsource all of their thinking
01:38:26for every moment-to-moment decision in their day
01:38:28about how I should respond to this person.
01:38:30If there's a guy over there I wanna talk to
01:38:32and I don't know what to, like, whatever the thing is,
01:38:34people are outsourcing their decision
01:38:36and they're not actually learning themselves
01:38:39how to show up as a human.
01:38:41And that is not gonna create a safe world.
01:38:43Again, that's a version of the narrative,
01:38:45the possible of a technology that we're promised
01:38:47and then the probable of what's actually happening.
01:38:49- Yeah, it's like crumbling cognition.
01:38:51- Yes, exactly.
01:38:51And we saw this with social media.
01:38:52We thought it was gonna create the most enlightened
01:38:54and informed society in the entire world
01:38:56'cause we have the best information.
01:38:57It wrecked our attention
01:38:58and then we have the most confirmation bias,
01:39:00tribalized, low trust, worst critical thinking society
01:39:03that we've had in a generation.
01:39:05And so, again, I think this is just about
01:39:08part of the wisdom we need is to be able to look honestly
01:39:12at the nature of a technology and look honestly
01:39:15and confront ourselves with the reality
01:39:18of how technology is currently being used and deployed.
01:39:20And if it's driving up a social media influencer culture
01:39:25instead of creating astronauts, like in China,
01:39:28they pulled the top,
01:39:30what are the professions the kids want the most?
01:39:33And the number one was like astronaut.
01:39:35Number two was like teacher.
01:39:36- Engineer or something.
01:39:36- Engineer and then teacher or something like that.
01:39:38And in the US, it was influencer.
01:39:41I can tell you which world
01:39:43and which civilization you're gonna get.
01:39:44If we really wanna beat China in innovation, we can--
01:39:47- Show me the incentives and I'll show you the outcome.
01:39:49- Show me the incentive and I'll show you the outcome.
01:39:50And if you really wanna beat China,
01:39:52you regulate social media
01:39:54and you stop brain rotting your entire population
01:39:56and you actually invest in genuine educational technology
01:39:59that Silicon Valley would ship to their own children.
01:40:01- This has to come top down because the incentives
01:40:04both at the user level, especially now for AI
01:40:08and at the company level and at the market level
01:40:11and at the international level between companies,
01:40:15all of these incentives align in the same direction
01:40:18which is I enjoy using the product.
01:40:20My company generates lots of revenue for my country
01:40:23for the product.
01:40:24My country doesn't want to fall behind other countries
01:40:27in the race to have additional capacities for technology.
01:40:32All of these are aligned in the same place
01:40:34which means that it has to come from above all of that
01:40:37which is a pan national--
01:40:40- Global movement for humane technology
01:40:43that's actually in the service of people.
01:40:45Part of that movement is redefining the currency
01:40:48of what it means to win that race.
01:40:51Right now, again, the US beat China
01:40:52to the technology of social media
01:40:54but we have been losing in the race
01:40:56to govern that technology.
01:40:58Like I think you know these examples
01:40:59but in China, as an example,
01:41:03they I think limit social media use to 40 minutes a day
01:41:07if you're under the age of 14
01:41:08and I think it's only Fridays, Saturdays and Sundays,
01:41:10either that's for social media or for video games.
01:41:13There's lights out at 10 p.m.
01:41:14meaning that if you open a social media app after 10 p.m.
01:41:17it's closed and it opens again at six in the morning.
01:41:21This is helpful for when it comes
01:41:22to the kind of late night FOMO, brain rot,
01:41:25doom scrolling, you know, not getting sleep.
01:41:27And I'm not saying that the US should unilaterally
01:41:30like some totalitarian government make that choice
01:41:32or it's just that we can democratically come up
01:41:35with guardrails like this.
01:41:36Another thing that China does
01:41:37is they have a synchronized final exams week
01:41:39and they shut down AI during final exams week.
01:41:41- No way.
01:41:42- Yes, so that all the students know
01:41:46that if they used AI during the school year,
01:41:48it will completely screw them over during final exams week.
01:41:51- Because they're going to be reliant
01:41:52on something they can't use?
01:41:53- That's right.
01:41:54And so we don't have to do what China does
01:41:55but the point is we should do something.
01:41:57In China, they're regulating anthropomorphic AI
01:42:00to not do all the attachment hacking
01:42:03and the suicides and all this kind of stuff.
01:42:05I'm not saying what they're doing is good.
01:42:06I'm just saying they're doing things.
01:42:08We're not doing anything.
01:42:09And we can.
01:42:11And the purpose of the human movement
01:42:12is to actually be demanding.
01:42:14You can actually go and call your legislator
01:42:16and say that you do not want this anti-human future.
01:42:19You want accountability for AI companies.
01:42:21You want to ban legal personhood for AI.
01:42:23You want no anthropomorphic AI
01:42:26so that we don't get the child safety issues
01:42:27with AI chatbots.
01:42:29- If I was to give you the option
01:42:32of turning the rest of the world
01:42:35into a totalitarian state, everything.
01:42:38But it meant that we were able to dictate
01:42:40what happened to countries and to companies
01:42:43and to these AIs.
01:42:43- I don't wanna create a totalitarian state.
01:42:46- But the alternative would potentially
01:42:47be destruction of humanity.
01:42:49Well, which would you pick?
01:42:51- You are conjuring a thought experiment
01:42:56from Nick Bostrom, which I'm sure you're aware of.
01:43:00In the paper you're referring to
01:43:01is the vulnerable world hypothesis
01:43:04in which he says that as we discover
01:43:07more and more technologies that in a decentralized way
01:43:10have the capacity to destroy things.
01:43:12So like your example of if it turned out
01:43:14that microwaving sand created a nuclear reaction
01:43:17that blew up the entire world
01:43:19and we published that knowledge
01:43:20and it spread virally on social media,
01:43:23how many minutes, hours or seconds would it take
01:43:25before the world blew up?
01:43:27And then the only answer to a distributed,
01:43:31like available to everybody destructive capacity
01:43:35would be a global totalitarian society
01:43:37that monitors and surveils everyone and what they're doing.
01:43:40So you prevented mass destruction,
01:43:42but you got 1984 Big Brother
01:43:44and that's an uncheckable power
01:43:46that people cannot fight back against.
01:43:48I am very worried about mass surveillance.
01:43:51I think that what happened with Anthropic
01:43:53is hopefully bootloading a global immune system
01:43:56to the risks of AI enhanced mass surveillance.
01:43:59Because in a way Big Brother
01:44:01could have never happened without AI.
01:44:04When you have an AI that can process every camera
01:44:07and every face and all of the emails and text messages
01:44:11that are flowing through everything
01:44:12and you can actually ask the question as a state,
01:44:15who are the number one threats in my society
01:44:16that I need to suppress?
01:44:18And it tells you and summarizes instantly
01:44:19who those people are synthesizing all these data streams.
01:44:23How can you fight back against an uncheckable power
01:44:26when it knows all of your secrets?
01:44:27You can't.
01:44:29So we're faced with very tough choices on both ends
01:44:32and I did this in a Ted Talk I gave this last year
01:44:36called the narrow path that we have to avoid both outcomes.
01:44:39We have to somehow be committed to finding the narrow path
01:44:42between decentralizing destructive power
01:44:46in everyone's hands without responsibility,
01:44:48without a commensurate wisdom or responsibility
01:44:50in which you get chaos or catastrophes
01:44:52or over centralizing that power in uncheckable ways
01:44:56in which you get runaway dystopias
01:44:58and we have to find something like a third attractor
01:45:01to quote Daniel Schmackenberger or a narrow path
01:45:05that seeks to avoid both these outcomes.
01:45:07You could say that that is outside the laws of physics
01:45:09or not possible, but I think what's important
01:45:13is to be committed to and living from the place
01:45:16where we would be finding that path
01:45:18because the only chance that we would have of finding it
01:45:21would be requiring that everyone was living in service of it.
01:45:24Isn't it funny that the two worlds that you identified there,
01:45:26one totalitarian overreach that's controlled by humans
01:45:30and at the mercy of their biases
01:45:34and the other being decentralized vulnerability danger
01:45:37that is at the mercy of technology's liabilities,
01:45:42both of those are extreme versions of something
01:45:46that we definitely don't want,
01:45:48but the restriction on the one that would protect us
01:45:50from the AI goes back to the bottom of the brainstem,
01:45:53that there's almost never a time that a country
01:45:57or a government or a small number of people
01:46:00or a large number of people get most of the power
01:46:03and then give it back, it's a ratchet system.
01:46:06And it only ever clicks into place and then never goes back.
01:46:08- Which is why you need to have built-in checks and balances
01:46:11on power that are irreplaceable.
01:46:13Like you cannot have a world
01:46:14where that whatever power gets centralized
01:46:15has to have oversight and democratic accountability
01:46:19and distribution of that wealth and power
01:46:21depending on if we're talking about
01:46:21the wealth versus the power.
01:46:23- How would we know that some country
01:46:25isn't signing up to a moratorium,
01:46:27but secretly doing all of their research behind the scenes
01:46:30so that they slow down their progress of everyone else,
01:46:33but allow them to race ahead?
01:46:34- So you're asking the question,
01:46:36if let's say the US and China signed an agreement,
01:46:38how do we know that given their maximum incentive
01:46:40to defect on that agreement and have the CIA
01:46:42or their black operations still continue
01:46:44to do the research project?
01:46:46This is obviously the hardest technology
01:46:50and coordination problem that humanity has ever faced.
01:46:52I just need to say that at the top.
01:46:53Like, I'm not saying any of this is easy.
01:46:55- It makes nuclear war look like child's play.
01:46:57- Correct.
01:46:57But by the way, I want you to know there's a great video
01:47:01and I can send it to you of Robert Oppenheimer
01:47:04asked in the 1960s, I think he's testifying for something,
01:47:07in an interview, and they asked him,
01:47:11how could we control this technology from proliferating?
01:47:14And he takes a big puff and a cigarette.
01:47:17And he says, it's too late.
01:47:19If you wanted to prevent the spread of nuclear technology,
01:47:23you would have had to do it the day after Trinity.
01:47:25Trinity was the first test of the first nuclear bomb.
01:47:31He believed, I think this interview was in the 1960s,
01:47:33he believed it was inevitable, there was nothing we could do,
01:47:35and the world was fucked.
01:47:37Even he, the creator of the atomic bomb,
01:47:40didn't see that a lot of people would work very hard
01:47:45over the course of the next 30 years
01:47:48to invent what's called national technical means
01:47:50or satellites that do mutual monitoring enforcement,
01:47:52that we would create the seismic monitoring technology
01:47:55to be able to see if you were doing
01:47:57an underground nuclear test or an above ground test,
01:47:59'cause we would see the reverberations of that.
01:48:02So satellites, seismic monitoring, international inspectors,
01:48:06the International Atomic Energy Agency,
01:48:08we had to invent all of these new things
01:48:11to create a governance system that was capable
01:48:14of dealing with nuclear weapons.
01:48:15- Isn't it interesting that the innovation
01:48:17around the technology that was dangerous
01:48:19was probably less sophisticated
01:48:21than the downstream technologies required to make it secure?
01:48:25- Well, what do you mean?
01:48:26- Making the atomic bomb was one thing.
01:48:29Then there's 50 things that need to happen
01:48:31in order to have that technology exist in the world.
01:48:34- In a safe way, yeah, exactly.
01:48:35There's sort of an asymmetry where the technologies,
01:48:37the destructive capacity is easy to create.
01:48:39The governance capacity is way harder to create.
01:48:42The defensive capacity is harder to create.
01:48:43And you could have said at the beginning of 1945,
01:48:47I guess it's inevitable.
01:48:48150 countries are gonna have nuclear weapons.
01:48:50There's nothing we could do.
01:48:51Let's just drink margaritas and give everybody uranium
01:48:54and increase GDP by selling uranium to the entire world.
01:48:58But we didn't do that.
01:48:59And people had to invent stuff.
01:49:00We needed our best engineers and our best minds
01:49:03working on how to figure that out.
01:49:04And I'll just say, if you wanna look this up,
01:49:06RAND, the Nonprofit Defense Think Tank,
01:49:08has a paper on how international monitoring
01:49:10and verification mechanisms could potentially work for AI.
01:49:13It's not easy, it is not trivial.
01:49:16It would require extraordinary investments
01:49:18in mutual monitoring and enforcement and data centers
01:49:21and chips that attest where they are
01:49:24and all this kind of stuff.
01:49:25- Okay, so there are some suggestions.
01:49:26- There are proposals. - 'Cause what I was thinking
01:49:28is you were talking about, well, we can use satellites
01:49:30that work out if you're doing things above ground
01:49:32or below ground.
01:49:33We can monitor maybe the amount of-
01:49:33- There's heat signatures and electricity signatures,
01:49:36power usage.
01:49:37- So there are some- - There's inspectors.
01:49:38- Yeah, there are some signatures of people
01:49:40that are doing this sort of research.
01:49:42Because you can't do it without there being any externality.
01:49:44You can't just run it on a MacBook
01:49:45and make it look like you're browsing the internet.
01:49:46- You need large clusters of compute right now,
01:49:49to your point, in the future it may shift.
01:49:51But right now you need large clusters of compute.
01:49:53You need advanced semiconductor manufacturing supply chains.
01:49:56Right now that's a set of US allies, the Trilat.
01:49:58I believe it's Denmark, Japan, South Korea, obviously Taiwan.
01:50:02We could create some kind of infrastructure
01:50:06for not some totalitarian control of compute,
01:50:10just a governance regime.
01:50:11Would you call the IAEA totalitarian
01:50:13or would you call it a governance regime
01:50:15for a dangerous destructive nuclear capacity?
01:50:18I'm not saying any of this is easy.
01:50:19We have to navigate the narrow path.
01:50:20We don't wanna create totalitarian controls.
01:50:22We don't wanna create runaway catastrophes.
01:50:25What it takes is being committed to finding that path.
01:50:28How much time or energy or resources
01:50:30have we spent even trying?
01:50:32People say it's impossible.
01:50:32Have you spent a month dedicatedly trying?
01:50:35People say it's impossible.
01:50:36Have we spent any amount of resources
01:50:38actually trying to do this?
01:50:40No.
01:50:41- Let's say that there was some way
01:50:43for you to have God's eye coordination
01:50:46and that you can step in.
01:50:47Would you just put a pause on all model development
01:50:50to allow us to catch up in terms of AI safety?
01:50:53- I think the people, rather than it sounding like
01:50:55I'm some kind of pause or stop person,
01:50:58the people building this technology
01:51:00have basically said that that's what they would most prefer.
01:51:03They would most prefer a world where we basically stopped
01:51:06and had the time to integrate
01:51:08and develop this technology slowly.
01:51:10That's what they would prefer.
01:51:11- What do you think that if you had control,
01:51:14what would you do?
01:51:14- I would agree with the people who I'm going to defer to
01:51:18as the people who know the most
01:51:19about the dangerous and destructive capacities.
01:51:21So I don't want people thinking this is my view,
01:51:23that I'm some kind of safety net or something like that.
01:51:25This is just about,
01:51:26I want to be able to look my children in the eyes and say,
01:51:29we did everything that we could
01:51:31to create the best possible future for you.
01:51:34And I think that basic heuristic of like,
01:51:37are you operating in service of life
01:51:39in the things that most matter?
01:51:41I think that that would probably be the wisest thing to do.
01:51:43And just so people ground that,
01:51:45the CEO of Microsoft AI, Mustapha Soleiman, who's a friend,
01:51:49says in the future with technology,
01:51:52progress will depend more on what we say no to
01:51:56than what we say yes to.
01:51:58Your podcast is called Modern Wisdom.
01:51:59Is there a definition of wisdom
01:52:01in any spiritual or religious tradition
01:52:04that does not have restraint as a central feature
01:52:07of what it means to be wise?
01:52:08Does any religious tradition or spiritual tradition say,
01:52:10you know what's wise, going as fast as possible,
01:52:12not thinking about the consequences
01:52:13in dopamine maxing your brain.
01:52:16It's the opposite.
01:52:17It's like, this is not hard.
01:52:18This is so trivial.
01:52:19It's so obvious.
01:52:20It's so obvious.
01:52:21It's just like, snap out of the trance.
01:52:24This is not inevitable.
01:52:25They want you to believe it's too late.
01:52:27It is very far down the tracks.
01:52:29It would be a rite of passage,
01:52:31even if it didn't work out,
01:52:32for us to show up with the maturity and responsibility
01:52:35to at least be trying to live in service of the future
01:52:38that we actually want to create.
01:52:40- Can we watch the trailer?
01:52:41Can we get the trailer up?
01:52:42I want to see the trailer for this movie.
01:52:43- That'd be great, thanks.
01:52:45- If this technology goes wrong, it can go quite wrong.
01:52:49- What the?
01:52:54- Your fear of AI is the collapse of humanity.
01:52:58- Well, not the collapse, the abrupt extermination.
01:53:01There's a difference.
01:53:03- So I started making this movie
01:53:07because my wife is six months pregnant.
01:53:09It is now a terrible time to have a kid.
01:53:11- I mean, just to be honest,
01:53:14I know people who work on AI risk
01:53:16who don't expect their children to make it to high school.
01:53:19- How does AI understand pretty much everything?
01:53:26- It's surprisingly straightforward.
01:53:28- Intelligence is about recognizing patterns.
01:53:30- Patterns. - Patterns.
01:53:31- Patterns.
01:53:32- If you have learned those patterns,
01:53:35you can generate new information.
01:53:37- AI is moving so fast.
01:53:40- It's being deployed prematurely.
01:53:41There's so much potential for things to go wrong.
01:53:45- Why can't we just stop?
01:53:46- All these companies are in a race to get AI
01:53:50that's vastly more intelligent than people
01:53:52within this decade.
01:53:53- China, North Korea, Russia.
01:53:56- Whoever wins is essentially the controller of humankind.
01:53:59- We need to take a threat from AI
01:54:04as seriously as global nuclear war.
01:54:08- It feels like I have to find these CEOs.
01:54:15- And get them in the movie.
01:54:17- Great.
01:54:18- I wanna ask you to promise me that this is gonna go well.
01:54:22- That is impossible.
01:54:24- Okay.
01:54:25- Am I hopeful?
01:54:26Yes.
01:54:27Am I confident that it'll go right?
01:54:28Absolutely not.
01:54:29- AI is the thing that can solve climate change.
01:54:32- We could cure most diseases.
01:54:34- What if it's expanding what is humanly possible?
01:54:36- This is the most extraordinary time ever.
01:54:38The only time more exciting than today is tomorrow.
01:54:41- I already love you.
01:54:42- Okay.
01:54:44- I think if this technology goes wrong,
01:54:46it can go quite wrong.
01:54:48- By using AI, we're about to move off of the earth
01:54:54into the cosmos.
01:54:55- If we can be the most mature version of ourselves,
01:54:59there might be a way through this.
01:55:01- This is the last mistake we'll ever get to make.
01:55:05- Dude.
01:55:13- Does it land differently after this conversation?
01:55:16- It does.
01:55:17I'd already seen it, but yeah, it definitely,
01:55:19I definitely understand more of what's implied there.
01:55:25It's impressive that you managed to get
01:55:26all of the guys to sit down, most of the guys.
01:55:28Who are you missing?
01:55:29- I mean, there's obviously many people
01:55:32building this technology now,
01:55:34so they don't have the Chinese labs in it.
01:55:36They have Demis Hassabis from DeepMind,
01:55:39Sam Altman from OpenAI, Dario from Anthropic.
01:55:42I mean, those are the three most leading players.
01:55:45Elon agreed to participate in the movie,
01:55:47and then I think it was right in the beginning
01:55:49of the first few days of the Trump administration
01:55:52that he was busy and didn't follow through, so.
01:55:55But I think the team really wanted him to be in the movie
01:55:57and wanted to hear his views.
01:55:58I mean, to give Elon credit,
01:56:00he was one of the first people who cared about AI safety,
01:56:04and he said, "Mark my words,
01:56:05"AI is far more dangerous than nukes."
01:56:07And he said this in like, what was it, 2015, 2016?
01:56:10Like, way before people were taking AI seriously.
01:56:13Like, back then, AI was just recommending
01:56:15what other product you should get on Amazon
01:56:16and doing facial recognition for toll booths
01:56:19and bridges and stuff like that.
01:56:20- Well, think about what much of "Super Intelligence"
01:56:23and sort of the fallout around that book from Nick was.
01:56:27So much of it was almost advice
01:56:32for how to talk to your friends about this
01:56:34without being mocked too much.
01:56:36- Right.
01:56:36- It was, these are the best examples to use
01:56:39so that you don't sound like the insane person
01:56:41at Thanksgiving dinner.
01:56:42- And ironically, the paperclip-maximizing example
01:56:44did make people sound ridiculous and insane.
01:56:47That then got really, you know, diminished,
01:56:50and what's the word I'm looking for?
01:56:52It just made people, it was used to tarnish
01:56:54people's reputation if you talked about paperclips.
01:56:57But it's funny 'cause people say,
01:56:57oh, like, the AI's gonna, for people to know the example,
01:56:59it's like, there you are, and the AI is gonna be told
01:57:03to maximize paperclips, and then the way it will figure out
01:57:05how to do that is figuring out any strategy
01:57:07which means, like, turning every atom in the universe
01:57:09to paperclips, which means, like, you know,
01:57:11melting all the humans down and getting them to,
01:57:12you know, just taking it to this extreme.
01:57:15And it sounds totally sci-fi, but if you actually ask
01:57:18a baby AI called social media that's pointed
01:57:21at your brain stem, figuring out just what video
01:57:23to get you to watch that keeps you on the screen,
01:57:25and you say maximize engagement,
01:57:27well, conflict and rivalry and civil war
01:57:29is really good for engagement.
01:57:31So in a way, we have been maximizing a paperclip
01:57:34called attention and eyeballs for a long time,
01:57:37and it's driving up division and rivalry everywhere
01:57:39around the world, and democracies are backsliding
01:57:41everywhere around the world, and it's driving up
01:57:43confirmation bias all around the world,
01:57:44and the point isn't that the baby AI of social media
01:57:47hates you or hates your wellbeing or hates your connections
01:57:50or hates your democracy, it's just that it doesn't care
01:57:54about anything other than whatever keeps your eyeballs.
01:57:57And that little baby AI that was just figuring out
01:58:00which photo or video to throw in front of your nervous system
01:58:02was enough to completely transform everything, everything.
01:58:07Everything about how our society worked.
01:58:10And I saw that, by the way, and I'm not some kind of like,
01:58:12it's not like I especially see the future,
01:58:14but in 2013, it was so obvious to me.
01:58:16If you take these incentives really far in a decade,
01:58:20I can tell you what world you're gonna live in.
01:58:22And the thing that I want people to feel is if you can see
01:58:25the incentive, and you have that clarity,
01:58:27you can confidently say, I don't want the future
01:58:31that that creates.
01:58:32In 2013, Mark Zuckerberg could have said, oh my God,
01:58:34I see we're about to create an arms race
01:58:35to hack human psychology.
01:58:36I'm gonna convene all the leading social media companies
01:58:39and the government.
01:58:40I'm Mark Zuckerberg, I've got billions of dollars.
01:58:41I'm gonna throw money at this.
01:58:42I'm gonna get the government people involved.
01:58:44Not because the government's trustworthy
01:58:46or because we like them,
01:58:46but because I see that we need some kind of rules.
01:58:49And I'm gonna try to create some rules that say,
01:58:51no auto playing videos, no infinite scroll.
01:58:54No one can create unnecessary FOMO.
01:58:56You can't dole out like a few likes here
01:58:58and then a few likes there, like a slot machine.
01:59:00You have to do them all at the end of the day
01:59:02or something like that.
01:59:03You can create rules and norms
01:59:05so that we didn't get the mass addiction distraction machine
01:59:08that we then did get.
01:59:10And he could have done that in 2012, 2013.
01:59:12That's what leadership, that's what maturity,
01:59:14that's what wisdom would have done.
01:59:16It's not that hard to see it.
01:59:18- The problem I think that you're facing is
01:59:21every second is so crucial
01:59:23and every single second of compute and of CEO attention
01:59:27and of staff-based attention
01:59:28is going to be paid on continuing to get ahead
01:59:31in this unbelievable one to rule them all race.
01:59:35I mean, OpenAI is a partner on the show.
01:59:37I've spoken to Sam twice and both times
01:59:39the calls have been so brief
01:59:42because presumably for every second that he's on the phone,
01:59:45that's billions of dollars of potential revenue
01:59:49or compute or that is being wasted.
01:59:52And that means that if you want Mark in 2013
01:59:57or Sam or Dario.
01:59:59- The point is you create,
02:00:01literally this is the history of all of law.
02:00:03So I could kill you and steal your stuff
02:00:05and just take your money,
02:00:06but I'd prefer to live in it and everyone could do that.
02:00:08And that'd be a faster way to get money.
02:00:09It's like just everybody kills everybody, grabs their money,
02:00:13but that would create a society that's chaos.
02:00:15So instead I sacrifice some of my abilities.
02:00:17I can't kill people.
02:00:19And instead we have law
02:00:21that we all sort of notch down
02:00:23some of our individual capability
02:00:25so that we get to live in a society
02:00:27that we actually want to live in.
02:00:28And that's what this would do.
02:00:29So in 2013, if Mark Zuckerberg had said,
02:00:31I'm gonna convene musically, which was before TikTok,
02:00:36Twitter and the other ones.
02:00:37And we're like, okay, look,
02:00:38it is so obvious we're in this arms race for attention.
02:00:41Just like public utilities, which by the way,
02:00:43if like in California it'd be PG&E,
02:00:45in Texas, what's your electricity provider?
02:00:47It's like, oh, fuck knows.
02:00:49I don't know, I have no idea.
02:00:50I have no idea what my electricity provider is.
02:00:52- Well, technically energy companies have an incentive
02:00:55to maximize revenue.
02:00:56So theoretically they're like, leave the lights on,
02:00:58leave the stove on, like do the water 24/7
02:01:02because we make more money when we do that.
02:01:04But because public utilities rely on a scarce resource
02:01:07called energy that has environmental emissions
02:01:10that we have to deal with,
02:01:12there's a decoupling of revenue from their incentive.
02:01:17So for example, in California,
02:01:19you're charged a base rate for like the initial energy you use
02:01:22and then once you're hitting kind of the capacity
02:01:24that would be straining the system,
02:01:26we start to charge you more,
02:01:28but that extra revenue doesn't go just into PG&E,
02:01:30the energy company's pocket.
02:01:32It goes into a fund to have more clean energy
02:01:36that gives-- - To offset it.
02:01:37- To offset it and create more energy capacity.
02:01:39So with attention, you could say,
02:01:40instead of companies maximizing attention and driving revenue,
02:01:43you get to make money from the initial tier one
02:01:46of attention that you're getting.
02:01:48And then after that, the resources go into a common pool
02:01:51that's investing in the research, the X prizes,
02:01:53the design solutions that research and show
02:01:56that there's different ways of doing use feeds.
02:01:57There's different ways.
02:01:58We could have news feeds that are all about directing you
02:02:00back towards community.
02:02:02You live in Austin, Texas.
02:02:03Austin has, at least my friends,
02:02:05there's a lot of community events happening here,
02:02:07but you can imagine a world where the default news feeds
02:02:09that govern our world were spending like 30 to 40%
02:02:13of what you saw was other events that your friends
02:02:16and other people that you knew adjacent to you were doing.
02:02:19You could imagine a world where instead of dating apps,
02:02:20profiting off of keeping you
02:02:22in the slot machine loneliness thing
02:02:23where you message someone and never get back to them.
02:02:26In this new world, all the dating apps would be forced,
02:02:29probably 'cause of some lawsuit against the engagement model,
02:02:32to fund physical events in every city that they operated in
02:02:35where every single week there was actually a physical venue
02:02:37where lots of people were put in the same room
02:02:40with lots of other people they matched with
02:02:41who were maximally overlapping.
02:02:43And now, instead of feeling scarcity and loneliness,
02:02:46there's this sense of abundant access to community
02:02:48and soft dating and soft friend-making environments.
02:02:51So you're dealing with the loneliness crisis.
02:02:52You're dealing with the dating problem.
02:02:54When you solve that, then it turns out 30%
02:02:56of the polarization online goes down
02:02:57because it turns out that a lot of polarization
02:02:59was just people being lonely and depressed
02:03:01from not having a connection. - It's manufactured.
02:03:02- It's all manufactured
02:03:03because people are caught in the doom swelling loop.
02:03:05So it's like, again, we can have a better world
02:03:08if we deal with the upstream causes of these problems.
02:03:11We can also have more innovation.
02:03:12Peter Thiel was talking about, we need more innovation.
02:03:14We need more scientific development.
02:03:16We would have stagnation if we didn't have AI.
02:03:18Well, how about you regulate the brain rot economy
02:03:20that's currently degrading all the innovation,
02:03:22causing people to be not productive
02:03:25and creative and innovative members of our society,
02:03:27and instead incentivize entrepreneurs
02:03:29and, again, social groups and community
02:03:31so people are actually making things.
02:03:33So the point is I'm a different kind of technology optimist,
02:03:37which is a humane technology optimist.
02:03:40If you design technology that is humane
02:03:42to an understanding of the substrates of society
02:03:45upon which everything else depends,
02:03:47you can have a technology environment that is conducive
02:03:50to the things that we want our society to do.
02:03:52But it requires different design principles,
02:03:54different incentives, different rules, and some coordination.
02:03:58- This third attractor, precipice, narrow path thing
02:04:03that you think is important for us to walk appropriately,
02:04:06the pace at which things are going to happen and change,
02:04:10do you think that we're going to be able to move along that?
02:04:12I mean, you've got this movie.
02:04:13It's gonna be a global moment.
02:04:15Lots of people are gonna see it.
02:04:16It's gonna get people talking, common knowledge.
02:04:19Typically things tend to take time, conceptual inertia
02:04:23usually moves over generations and centuries.
02:04:26Going from heliocentric model of the universe
02:04:28took like a hundred years.
02:04:29- Well said, yep.
02:04:30- How-- - This is the problem
02:04:34in general that we face with technology,
02:04:35and this is the E.O. Wilson quote.
02:04:36It's not just that we have paleolithic brains,
02:04:38medieval institutions, it's that the brains operate
02:04:41on a very slow updating clock rate,
02:04:43the institutions operate on a slow clock rate,
02:04:46and the technology moves at whatever the 21st
02:04:48or now 24th century technology clock rate.
02:04:50- How do you think about expediting this change?
02:04:53- You have to have your governance move at the pace
02:04:57of the thing that you're trying to govern.
02:04:58- Is that ever gonna be possible?
02:05:00Governance moving at that pace, the lumbering behemoth?
02:05:02What is it, Leviathan leans left and lumbers along?
02:05:05- There needs to be, rather than recursively
02:05:09self-improving AI that is uncheckable
02:05:11by any human oversight process,
02:05:14in which we will for sure lose control
02:05:15and build something crazy that we regret.
02:05:18We instead should have self-improving governance.
02:05:21You can use AI to look at all the laws on the books
02:05:24and say, what are the laws that don't matter anymore,
02:05:25that are creating all this red tape that we don't need,
02:05:28that's like stifling innovation,
02:05:29and get the AI to find those laws and reinterpret them
02:05:32and say, what are the ones we need to get rid of,
02:05:34and how do you rewrite them for 21st century technology
02:05:36in the current age?
02:05:38We could be using AI and technology
02:05:39to update the governance as fast as the technology,
02:05:42but we'd also probably need to slow down the technology,
02:05:45which we would benefit from because then we wouldn't crash.
02:05:48Again, it's like China wins when they whisper in our ear,
02:05:51go faster, go faster, you're gonna blow yourself up.
02:05:54- Peric victory ahead.
02:05:55- Peric victory.
02:05:56You said this earlier, when the US races ahead
02:05:59and has a more sophisticated model,
02:06:00China gets it like 10 days later,
02:06:02because first of all, they have spies on all of our companies
02:06:04and second of all, they distill all the models.
02:06:06- Is that what they're doing?
02:06:07- They have spies in the companies.
02:06:09They're maximally incentivized to know what's going on.
02:06:12Plus, as you were saying,
02:06:14they can distill the US models,
02:06:15meaning they can query those models a thousand times
02:06:18and kind of distill the essence of those models
02:06:20and then make and train theirs.
02:06:21So there's a great meme online of like,
02:06:24it's a motorboat with a guy who's,
02:06:27what is it called when you're-
02:06:28- Wakesurfing?
02:06:29- Yeah, like you're wakesurfing behind him on the string.
02:06:31I mean, on the, what is that called, the-
02:06:34- String.
02:06:34- Yeah, whatever.
02:06:36And the guy who's on the surfing
02:06:40says go faster to the boat that's like racing ahead.
02:06:43That's the US that's building the advanced model,
02:06:45but then they're literally on a string right behind him
02:06:47that's getting all the benefits from it.
02:06:50And there is a study that Enthropic found
02:06:52that China had been covertly even using
02:06:56US Enthropic AI models to perform a cyber hacking operation.
02:07:00So like, again, if we're winning the race to the technology,
02:07:04but losing the race to governing or controlling
02:07:06or protecting the technology, what the fuck are we winning?
02:07:09- Yeah.
02:07:10- It's like, it's so basic what we're talking about.
02:07:12It's like, this is not rocket science.
02:07:13It's unbelievably simple.
02:07:16If you have the power of gods,
02:07:17you need the wisdom, love and prudence of gods.
02:07:19I hope we've laid out a bunch of examples
02:07:21of how we can do that.
02:07:23And while it's not easy, it does not happen by default.
02:07:27If you get everyone you know to go out and see the AI doc,
02:07:30understand that AI is dangerous,
02:07:32get your church group, get your business,
02:07:34and recognize that we need to not build bunkers,
02:07:37but write laws to actually steer AI before it's too late.
02:07:41You're terrifying, but I'm glad that you're in the world, man.
02:07:43I appreciate you.
02:07:44- I appreciate you too.
02:07:44I really appreciate this conversation, thank you.
02:07:46- All right, goodbye everybody.