00:00:00- Dude, you must be feeling like Cassandra at the moment.
00:00:04So prescient, the distraction, the necessity of deep work,
00:00:09the inherent bombardment of our attention.
00:00:11Do you feel like you saw the future earlier
00:00:14than what even at the time maybe felt late with deep work
00:00:17and focusing on quality over quantity and stuff?
00:00:20- I mean, I think part of what I noticed
00:00:21was the present was crazy to me
00:00:24and no one else recognized it.
00:00:25So it was less even predicting the future.
00:00:27I feel like there was a time, God, it's like 10 years ago now
00:00:31where I was looking around and yeah, saying two things.
00:00:34One, the social media doesn't make sense.
00:00:36Why are we all pretending like this is at the center
00:00:39of democracy and civic life and all business?
00:00:41We all have to be on here all the time.
00:00:43And two, email doesn't make sense.
00:00:45Not predicting what was gonna happen in the future.
00:00:47I'm just looking at the way we're working today
00:00:50with email, and Slack and Teams were coming.
00:00:51Like this completely does not make sense.
00:00:53You're switching your context once every two or three minutes.
00:00:55This is a terrible way to actually use your brain.
00:00:58So I never thought of myself as predicting the future
00:01:00as much as just telling people what was going on then
00:01:01didn't make sense.
00:01:03And everyone thought I was crazy.
00:01:04And 10 years later, it just kind of jumped from
00:01:06I was crazy to it's common sense.
00:01:08So it's not even that interesting that I'm saying it anymore.
00:01:11So I kind of skipped the part where it sounded prescient.
00:01:15- Do you feel vindicated?
00:01:16- I think certainly on a couple issues.
00:01:20The social media issue was a big one
00:01:22because I used to get a lot of flack for that.
00:01:25For going out and saying it. And I wasn't even saying
00:01:27that social media was bad
00:01:29or that no one should use it.
00:01:31Really what I was pushing back on
00:01:33was just the idea of ubiquity.
00:01:35The idea that everyone had to use it.
00:01:37I said, this doesn't make sense.
00:01:37I get there's some people this makes sense for.
00:01:39There's a lot of technologies that have markets
00:01:41that make sense for it.
00:01:42But why is there this pressure
00:01:43for everyone to be on these services?
00:01:45This is not going to a good place.
00:01:47They're spending a lot of money to mine attention
00:01:50and they're gonna get better at it, right?
00:01:52And at the time this was considered crazy.
00:01:55What do you mean like you wouldn't use social media?
00:01:58I wrote a New York Times op-ed back then.
00:01:59I looked this up the other day.
00:02:01It was 2016.
00:02:03And it argued maybe social media is not the biggest thing
00:02:07for a young person to focus on
00:02:09if they're thinking about their career.
00:02:10That's what it was.
00:02:11It was like focus on your career instead of social media.
00:02:13Actually doing things well is what really matters.
00:02:15And you would think that I had just come on
00:02:19and said like America as an idea is done
00:02:21and grandmothers should be kicked.
00:02:22Like people were upset about this.
00:02:24The New York Times commissioned a response op-ed
00:02:26two weeks later that went through mine
00:02:30and said this is what is wrong
00:02:32about Cal Newport's op-ed or whatever.
00:02:34'Cause it caused such a furor to suggest it.
00:02:36And today it's boring to suggest like,
00:02:38ah, you know, social media has problems
00:02:39and most people probably shouldn't use it.
00:02:41People agree with that.
00:02:42The one that upsets me though,
00:02:44that one I feel like people have come along to
00:02:47and more and more people are being much more selective
00:02:49and minimalist about their social media.
00:02:51The distraction, email, Slack,
00:02:53constantly jumping back and forth between different things.
00:02:56That just got worse.
00:02:57I mean, I think people recognize it now.
00:02:59This is probably not a good way to work.
00:03:01But I thought because there was dollars and cents here,
00:03:04this is less productive
00:03:05from an economic productivity standpoint.
00:03:07To have all of your workers
00:03:09changing their attention all the time.
00:03:11You're just getting a really low return
00:03:12on all the money you're investing in these human brains.
00:03:14So I thought, oh, this is dollars and cents.
00:03:16This is the one that's gonna change.
00:03:18Social media is fun.
00:03:19Like that's gonna be hard to change people's behavior.
00:03:21But certainly this hyper distraction thing
00:03:23and knowledge work, that'll change
00:03:24because we're leaving money on the table.
00:03:26It hasn't changed at all.
00:03:27It's gotten worse.
00:03:27It's worse than it was.
00:03:29I'm at the 10 year anniversary now of the book, "Deep Work."
00:03:33So this month is the 10 year anniversary.
00:03:35- Congratulations, dude.
00:03:36That's fucking seminal.
00:03:37Like that has become a part of the lexicon.
00:03:39That's really, really cool.
00:03:41- Yeah, but it's got me a little bit depressed
00:03:43because I've been doing this 10 year reflection.
00:03:46Like, okay, it's been 10 years and the book was a hit
00:03:48and it's millions of copies, et cetera.
00:03:50And that is the issues I talked about are worse.
00:03:52They're like really worse than they were 10 years ago.
00:03:55So people know the problem, nothing has changed.
00:03:57- What does the data suggest around the worstness of it now?
00:04:02- The one I've been following,
00:04:06the study that I think is useful as a trend line
00:04:08is Microsoft actually does this annual report
00:04:11where they gather data from Microsoft 365.
00:04:14So it's like Office and Word and PowerPoint and Excel.
00:04:17Nowadays you use sort of the web-based versions of these.
00:04:19It's very common.
00:04:20So they can gather data from just tens of thousands
00:04:23of knowledge workers actually using
00:04:25all these different tools.
00:04:27And the latest report they put out in 2025
00:04:29now has the interruptions on average once every two minutes.
00:04:32So it's just gotten out of control.
00:04:36So switching to a communication tool once every two minutes.
00:04:38They also found the latest report
00:04:40and this is depressing to me as well.
00:04:42There's one time in the week where they see a notable rise
00:04:47in the use of the non-communication tools.
00:04:49So actually using the core productivity tools like Word
00:04:52or PowerPoint and it's Saturday and Sunday morning.
00:04:55So we've just put the work off until the weekend
00:04:58when there's no expectations of responses
00:05:00and spend the actual weekdays talking about work.
00:05:04Which I just don't get.
00:05:06Like that is not economically productive.
00:05:08Like companies are leaving money on the table
00:05:10but it's just where we are.
00:05:11We really can't quit this behavior.
00:05:13- Isn't it interesting that you had to try and appeal
00:05:15to a very utilitarian approach for this?
00:05:17But you didn't say this is probably making staff miserable.
00:05:22It's not a good use of time.
00:05:24We've got some really strong evidence that suggests
00:05:26that doing one thing and getting better at it
00:05:29over a protracted period of time
00:05:31actually makes you feel more satisfied.
00:05:32You get into a flow state, et cetera, et cetera.
00:05:35You look back on your day
00:05:36and you can look at the things that you did.
00:05:39None of that, which is the much more immediate experiential
00:05:44way that people interface with distraction.
00:05:47You tried to appeal to the bottom line,
00:05:49which you thought, incentives, incentives,
00:05:51align the fucking incentives.
00:05:54And that didn't work, which obviously means also
00:05:57that people's level of administrative burden misery
00:06:02is also coming along for the ride at the same time.
00:06:04Yeah, it's a fucking mess, dude.
00:06:06And I think even with what I do, it's not a very big team,
00:06:09but Slack is like, it's so useful and invites so much chaos
00:06:17at the same time.
00:06:18It is, and was Slack, Slack wouldn't have been that big
00:06:21during Deep Work, I'm gonna guess.
00:06:23- It wasn't big, it wasn't out yet.
00:06:25I talk in Deep Work about these very early
00:06:27instant messenger tools that no longer exist,
00:06:30like HipChat, it was just emerging among the programmer class.
00:06:33I was basically saying there be dragons,
00:06:35like let's be careful about that.
00:06:37But I wrote an article about Slack years later
00:06:41when Slack was bought.
00:06:42So I think Salesforce bought Slack.
00:06:44I wrote an article about it for The New Yorker.
00:06:46And I think the title of that article gets to the core
00:06:48of the issue you're talking about.
00:06:50The title was Slack is the Right Tool
00:06:53for the Wrong Way to Work.
00:06:54And I think what happened, here's my whole theory on Slack,
00:06:58is that when email arrived, it moved us to this new style
00:07:01of collaboration that I call the hyperactive hive mind,
00:07:04where we'll just figure things out on the go
00:07:06with ad hoc back and forth, unscheduled messaging,
00:07:08just sort of like shooting messages back and forth.
00:07:10We'll figure things out,
00:07:11like we're all just kind of connected all the time.
00:07:14That's a terrible way to work for all the reasons
00:07:16I talked about.
00:07:16It's distracting, it's context switching,
00:07:18you can't do anything deep, it's hard to produce value.
00:07:20But if that's the way you're gonna work,
00:07:22email clients are not a very good tool for that.
00:07:25You have threads and it's clunky
00:07:27and it's hard to search through your email
00:07:29and find what you did before.
00:07:30So Slack came along and said,
00:07:31look, if this is the way you're gonna work,
00:07:33hyperactive hive mind, constant back and forth,
00:07:35ad hoc coordination, we'll build you a better tool for that.
00:07:38So that's why people both love and hate Slack.
00:07:41It's a really good tool for that style of collaboration.
00:07:44It works really well.
00:07:45But that style of collaboration makes us miserable.
00:07:48So it's this weird love-hate relationship we have,
00:07:50like this works great.
00:07:51I hate the thing that is making it easier.
00:07:53- Why does it make us miserable, that style of collaboration?
00:07:57- Because our brain isn't meant to switch
00:07:59our target of attention that quickly.
00:08:00It just takes us a long time
00:08:02if we're talking about targets
00:08:04that are abstract and symbolic.
00:08:06It takes us a long time to switch from one to another.
00:08:09Physical world targets, we can switch quickly, right?
00:08:11We're wired for that.
00:08:12If there's a tiger's roar, I can boom.
00:08:15A hundred percent attention, what's going on over there.
00:08:17But when we're thinking about abstract things,
00:08:20information, ideas, things that are symbolic
00:08:22and in our head, that's new.
00:08:24We're basically reappropriating our brain hardware
00:08:27to do something we're not evolved to do.
00:08:29It takes a lot of effort to do symbolic thinking,
00:08:31to think about abstract concepts.
00:08:34And we know it takes 10 to 20 minutes
00:08:36to fully change our attention context
00:08:39from one abstract target to another.
00:08:40It takes a long time.
00:08:41That's why if you sit down to write something,
00:08:43everyone has this experience.
00:08:45The first five or 10 minutes, you're like,
00:08:46"Man, this is terrible."
00:08:47Like, "I'm making no progress," or whatever.
00:08:49And then after a while, you're like,
00:08:50"Oh, this is starting to flow."
00:08:52Like, "It's going better."
00:08:53That's because it took that much time
00:08:55for your brain to load up all of the relevant information
00:08:59and to inhibit all the unrelated circuits
00:09:01and get your brain really ready to do that activity.
00:09:03So if you now interrupt that brain once every two minutes,
00:09:06it never can lock in on anything.
00:09:09And what you feel then
00:09:09is this sort of diffuse cognitive friction
00:09:12that we begin to experience as fatigue, cognitive fatigue.
00:09:15And it's a really frustrating experience.
00:09:17It's why if you go to an email inbox,
00:09:20you're like, "I have time.
00:09:21"I'm gonna empty this inbox.
00:09:22"I'm gonna go message by message.
00:09:23"Here's the best way to do it, right? On paper.
00:09:25"I'm gonna go message by message,
00:09:27"and I'm gonna answer these messages."
00:09:28Why does that get so hard?
00:09:30Why do you find yourself like jumping around
00:09:32and looking for easier messages?
00:09:33Because each message is a different context than the other,
00:09:35and that's torture for the brain.
00:09:37It's really, really hard to go from, all right,
00:09:40this is a complicated question
00:09:41one of my employees is asking me,
00:09:43and now this is a completely different issue,
00:09:45completely unrelated to that,
00:09:47where I have to think up a good title for something,
00:09:48and now here's a completely different issue,
00:09:50and you're trying to switch one after another.
00:09:52Our brains aren't wired for that.
00:09:54It really makes us unhappy.
00:09:55- What would you say to someone
00:09:58who wants to try and retrain that attention?
00:10:00Maybe they're gonna try and make some sort of a stand
00:10:07inside of Slack and say,
00:10:09"I will only be available at certain times of the day,"
00:10:11but regardless of the inbound,
00:10:14let's say that they fixed the inbound,
00:10:15'cause that's a totally separate problem.
00:10:16That's much more sort of structural,
00:10:18unless you've got any advice for that as well.
00:10:21But how does someone go about re-appraising,
00:10:26retraining their mind away from that?
00:10:29Because we do become, we get like Stockholm syndrome,
00:10:34the Slack Stockholm syndrome, where our captor tormentor
00:10:38becomes the way that we operate.
00:10:40We've got our favorite little ways of working,
00:10:42and it feels like we've done something,
00:10:43but then at the end of the day,
00:10:44we look back and have this sort of odd malaise thing
00:10:48about, well, what did I actually do today?
00:10:53What got done?
00:10:54Well, not much, not much got done.
00:10:57- Yeah, well, it's hard unilaterally.
00:11:00If you've changed nothing else about your workload
00:11:02or your communication protocols,
00:11:03if you just say, I'm not gonna be on Slack
00:11:07from this hour to this hour,
00:11:08I only check my email twice a day
00:11:09or whatever that standard advice was from 15 years ago,
00:11:12it doesn't work well.
00:11:13Because if you're involved in a large number of projects
00:11:16that are timely, and the way progress is gonna be made
00:11:19is with ad hoc back and forth messaging,
00:11:21you have to be in there checking.
00:11:23That's the brutal part of the hyperactive hive mind,
00:11:26is that it has defenses to its elimination
00:11:28built into its very nature.
00:11:30Because if this is how we're gonna figure this out,
00:11:32like we have to have five or six back and forth messages
00:11:35to figure out what we're gonna do
00:11:36about this client coming tomorrow,
00:11:37we have to get this done today.
00:11:39That means you have to see my next message right away
00:11:42so that we have time for me to answer you
00:11:43and you to answer me and for that ping pong match to happen.
00:11:46That means you have to be checking your inbox
00:11:48or Slack constantly.
00:11:49Otherwise you're not gonna see my next message in time
00:11:51for this whole game to unfold.
00:11:52So the very nature of that style of collaboration
00:11:55demands constant inbox checking,
00:11:57which is what I think people often get wrong about this.
00:11:59When people think about things like Slack or email,
00:12:01they think too often about either information,
00:12:04like, oh, I've got so many messages in my inbox
00:12:07that I don't need.
00:12:08I have all these newsletters and spam.
00:12:10That's not a problem.
00:12:11That's a minor problem.
00:12:13That's an easily solvable problem.
00:12:15It's like clutter.
00:12:16That's not a big problem.
00:12:18The issue is actually my collaboration style
00:12:22requires me to be in there
00:12:24because if I miss messages in a timely fashion,
00:12:26everything falls apart.
00:12:27And so the issue is not, how do I interact with my inbox?
00:12:31It really has to be,
00:12:33how do I change the way the inbox is being used?
00:12:37I mean, so I ended up,
00:12:38I feel like I had three big ideas on this
00:12:40that span three different books, right?
00:12:41So in "Deep Work," like one of the big ideas was,
00:12:45you can train your personal ability to focus.
00:12:48Focusing is really important.
00:12:50Putting aside for now,
00:12:51all the things trying to prevent you from focusing,
00:12:53you have to practice it.
00:12:54And if you practice it, you'll get better at it.
00:12:56And if you get better at it,
00:12:57you'll be a superstar because like,
00:12:58that's what matters in the knowledge economy.
00:13:00Everything good comes out of focus.
00:13:02Then I wrote a book after that called "A World Without Email."
00:13:06And in that book, I was arguing the way we,
00:13:10the thing I was telling you about, hyperactive hive mind,
00:13:11communication is a problem.
00:13:12This is a real problem.
00:13:13The fact that we are using this method for coordination
00:13:16is causing all this trouble, is really causing problems.
00:13:20And I went through all the data and all the research
00:13:22and made the case, this is super non-productive.
00:13:24I went back to the archives of the New York Times
00:13:27business section in the 80s and 90s to exactly document
00:13:30the rise of email and how people were talking about email
00:13:32when it first came onto the business scene.
00:13:34And I made the case, the way we work is arbitrary.
00:13:38This hyperactive hive mind was not a plan.
00:13:40It wasn't seen to be more productive.
00:13:41We stumbled into it, so we really should change it.
00:13:43So that was that book.
00:13:45And then the most recent book,
00:13:46"Slow Productivity" from a couple of years ago.
00:13:48In that book, I argued, oh, wait a second,
00:13:51workload matters too.
00:13:52The other issue with this problem is we don't put any limits
00:13:55or transparency on how many things we're working on.
00:13:58And if you pile too many things on your plate,
00:14:01too much communication interruption becomes unavoidable
00:14:03because they each have little issues
00:14:05they need you to deal with.
00:14:05So I've now, over this 10 year period,
00:14:08kind of broken down this problem.
00:14:10There was like training yourself to focus,
00:14:12fixing your communication protocols.
00:14:15Like how do I communicate in a professional context?
00:14:17How do we collaborate?
00:14:19And then managing workload to be more reasonable.
00:14:21All three of these, and this might be why
00:14:23this problem's not solved.
00:14:24There's no one thing to fix, right?
00:14:25So all three of these things go into the issue
00:14:28and they're each complicated.
00:14:30- What, across those three books, all of which are great,
00:14:33and everyone needs to go and check out.
00:14:34I think we've done episodes about each of them
00:14:36so they can just go and listen to those.
00:14:37- Yeah, I think, yeah.
00:14:38- And then buy the books.
00:14:39Looking back across this portfolio of productivity advice,
00:14:45what have you heard from readers
00:14:50or what have been the stickiest strategies for you?
00:14:55You look back and you go, okay, that's the 80/20
00:14:58of what I've published over the last three books.
00:15:01- To me, I think the big two
00:15:04that give you the biggest results,
00:15:06and I'll tell you the one that's the hardest,
00:15:07and this is why this book probably sold the least.
00:15:10The big two that give you the biggest results:
00:15:12the first is taking focus seriously, like a skill.
00:15:14That really does make a difference.
00:15:15Practicing focus, you get better at it.
00:15:18And it has a demonstrable difference.
00:15:22You sit down to work and you're just producing better stuff,
00:15:26or you're trying to pick up some complicated new thing.
00:15:28Like, oh God, I can learn this faster.
00:15:30That makes a huge difference.
00:15:31And then the second one, which was more recent in my life,
00:15:34was, oh, you really gotta control the workload.
00:15:36So much is downstream from how many things
00:15:39you've agreed to work on.
00:15:40You have to leave the mindset of everything I say yes to
00:15:43brings with it value.
00:15:45So saying yes to more things,
00:15:46it's just gonna aggregate more value.
00:15:47That's not the right mindset.
00:15:49That's not the way, it's a nonlinear reward function there.
00:15:54There's a certain point as you add more things
00:15:56that not only does value stop growing,
00:15:59it begins to go down on the other side.
00:16:00And there's a real sense in which saying no to many more things
00:16:03is actually a way to optimize reward and output,
00:16:08which is not natural.
00:16:10It doesn't make sense at first.
00:16:11It doesn't feel like common sense.
00:16:12So workload and focus training,
00:16:15you can control those more than you think,
00:16:17and you're gonna have huge results from those.
00:16:19- A quick aside.
00:16:20Do you remember learning about the mighty mitochondria
00:16:22back in grade school?
00:16:23Here's a quick refresher.
00:16:24It's the tiny engine inside of your cells
00:16:26that powers everything you do.
00:16:28But here's what they didn't teach you.
00:16:30As you age, your mitochondria break down.
00:16:32That's what can cause you to feel tired more often,
00:16:35take longer to recover,
00:16:36and wake up feeling like you're never fully recharged,
00:16:40no matter how long you sleep.
00:16:41I started taking Timeline nearly two years ago
00:16:44because it is the best product on the market
00:16:47for mitochondrial health,
00:16:48and that is why I partnered with them.
00:16:49Timeline is the number one
00:16:51doctor-recommended urolithin A supplement
00:16:54with a compound called Mitopure.
00:16:55Basically, it helps your body clear out damaged mitochondria
00:16:59and replace them with new ones.
00:17:01Mitopure is backed by over 15 years of research,
00:17:04over 50 patents, and nearly a dozen human clinical trials.
00:17:07It was recommended to me by my doctor,
00:17:08and that is why I've used it for so long,
00:17:10since way before I knew who even made the product.
00:17:13And best of all, there's a 30-day money-back guarantee,
00:17:16plus free shipping in the US,
00:17:17and they ship internationally.
00:17:18So right now, you can get a free sample,
00:17:21or get up to 20% off by going to the link
00:17:23in the description below,
00:17:24or heading to timeline.com/modernwisdom.
00:17:28That's timeline.com/modernwisdom.
00:17:32The learning to say no thing is interesting,
00:17:35especially as people progress inside of their career,
00:17:40and they get better at what they're doing.
00:17:43They have to learn to be able to say no
00:17:45to opportunities that they would've only begged
00:17:47to have had the opportunity to be in the room
00:17:49to have maybe said yes to only half a decade ago.
00:17:53In that time, you've had to go from needing that opportunity
00:17:58to actively being able to say no to something
00:18:00that's probably better than it.
00:18:01Alex, my friend, taught me about,
00:18:04you remember in "The Matrix," the woman in the red dress,
00:18:07and Neo turns around, and Morpheus says,
00:18:09"Were you listening to me,
00:18:10"or were you looking at the woman in the red dress?
00:18:11"Look again," and it's an agent with a gun in his face.
00:18:14And the analogy that Alex used was,
00:18:17now imagine that she's not a 10 out of 10,
00:18:20but imagine 1,000 hypothetical 1,000s out of 10,
00:18:25and you need to be able to say no to them,
00:18:27which previously you didn't even know existed.
00:18:30So this, I think, the kind of,
00:18:32it's almost like reverse entropy or habituation.
00:18:38You know, your opportunities get better,
00:18:40which means that your capacity to say no
00:18:42needs to get better more quickly than that.
00:18:45You can't be chasing your tail trying to learn
00:18:48to be able to say no less quickly
00:18:49than the opportunities get more seductive.
00:18:52- Yeah, it's almost perverse, the way that works.
00:18:55It's like when you have all the time in the world,
00:18:58all you want is opportunities.
00:19:00And then when you have opportunities,
00:19:01all you want is all the time free in the world.
00:19:05I had to change, I don't know what you do,
00:19:06but I had to change my rule at some point.
00:19:08This was hard for me, to the default no.
00:19:11Like that's just how I have to operate now.
00:19:13'Cause as soon as you try to have a triage rule,
00:19:17well look, I'm not gonna do this opportunity unless,
00:19:20I only do speaking gigs that have this much money or this,
00:19:24or I'm only gonna go meet with someone
00:19:26if they're like this interesting or this or that.
00:19:28Eventually the number of things that satisfy
00:19:30that criteria overwhelms you just as well.
00:19:32So I've just had to fall back on the default no.
00:19:36- You're talking to somebody who came back
00:19:39from a two day trip to Qatar at the start of this week.
00:19:43So I spent as much time traveling
00:19:46as I did in the country to give a talk.
00:19:49And as I looked around, there was this first,
00:19:51the first night dinner, there was maybe 300 people there.
00:19:54And I'm talking to Logan Paul
00:19:56and Steven Bartlett over his shoulder.
00:19:58And the CEO of Qatar Airways is here
00:20:00and the Middle East director for Meta is over there.
00:20:02And I was looking around thinking,
00:20:04everybody here wants to be here, it's very exciting.
00:20:07Everyone's really lovely, but also everyone here can't say no.
00:20:10Everybody in this room is chronically incapable of saying no.
00:20:15- I've said no to this one several times by the way.
00:20:18The amount of invites to Qatar and the UAE and other places,
00:20:23I have said no to many of those.
00:20:25- All right, well, consider me a fucking,
00:20:26consider me a slut compared to you, Cal, whatever it is.
00:20:30I must be easy, an easy booty call.
00:20:32They tried to get Cal Newport.
00:20:33We couldn't get Cal, so we'll ring Chris instead.
00:20:35- The default no, oh man, yeah.
00:20:37It's crazy the things you end up saying no to after a while,
00:20:40but I mean, there's a currency shift.
00:20:41For me, time to think is such a valuable currency,
00:20:44a more valuable currency than money, right?
00:20:46You get to a point where you're like, oh, I'm doing fine.
00:20:49But if I don't have time to think, what's the point?
00:20:51And then that becomes this like really rare currency
00:20:53that's much harder to get a hold of.
00:20:57And that's the only way I can protect it now
00:20:59is anything that requires me to like go somewhere,
00:21:01it's a default no.
00:21:01And then I can talk myself out of it later, right?
00:21:04I'm like, you know what?
00:21:05I could bring my family with me.
00:21:06We could have a trip, right?
00:21:07So actually, you know what?
00:21:08I will do this.
00:21:10Or I just did a masterclass course released this week.
00:21:15I spent a year and a half saying no to that.
00:21:20And then like eventually I sort of talked myself,
00:21:22I talked to some people.
00:21:25They're like, we'll come to DC to do it.
00:21:26I talked to James Clear, who'd just done one.
00:21:28And I had a good talk with him about it.
00:21:30And I was like, you know what?
00:21:31This will be interesting.
00:21:32And it took me a year and a half,
00:21:34'cause I finally talked myself into it.
00:21:35So I will say yes, but it just,
00:21:37the default no means that you don't have to-
00:21:39- To high standards.
00:21:40- Yeah, you don't have to run it through the wringer.
00:21:41And then you're like, okay, if it really sticks with me,
00:21:43then maybe I'll be like, all right, all right, I'll do it.
00:21:46- How much should people actually be working?
00:21:48- Well, it depends what you mean by work
00:21:50and what they're doing, right?
00:21:52Because think about it, let's say you're an athlete.
00:21:55It's super well-defined.
00:21:56Like here's optimal training, here's optimal rest.
00:21:59And like, that's what you should be doing.
00:22:00Like that's really clear.
00:22:03We don't have those limits as clear in the culture
00:22:05for other types of jobs that we probably should.
00:22:07If you're at a high wage hourly build job,
00:22:11like a law partner at a big law firm,
00:22:14there the economic model is the more you work,
00:22:18the more profitable it is.
00:22:19And we'll pay you big money to do this,
00:22:21but like you should basically work as much as you can
00:22:23that your body will take it.
00:22:24That's the economic engine.
00:22:25That's why I think those jobs are scary.
00:22:28If you're a novelist that writes literary fiction,
00:22:31so you're like, I really need to be award nominated
00:22:35for each book, or I'm gonna fall out of this slipstream,
00:22:38because no one's gonna read these books
00:22:39unless they're some of the best books.
00:22:41Then you should be doing like four hours in the morning
00:22:43and then just disappear, right?
00:22:46Like you should be doing very little more work than that.
00:22:49'Cause almost anything else will get in the way
00:22:51of you like sticking in that position.
00:22:53And so it all just depends on what you do.
00:22:55- Didn't you look at some experiments on shorter work weeks?
00:23:01- Yeah.
00:23:02- What did you learn from that?
00:23:03- There's a lot of these right around the pandemic,
00:23:06right before and then right after in Europe and Iceland.
00:23:11So some European studies, I think Germany did one,
00:23:13Iceland did one, UK did one.
00:23:15And they were looking at four day work weeks.
00:23:17So what would happen if we take away one day?
00:23:20The interesting thing about those experiments
00:23:22is what they found is whatever measures of productivity
00:23:25they came up with, they didn't get worse,
00:23:28which I thought was very interesting.
00:23:30They took a day away and yet the perceived productivity
00:23:34or the measured productivity didn't go down.
00:23:36And there's two ways to look at it.
00:23:37The one way to look at it is to say,
00:23:38oh, this means that like we should have a four day work week
00:23:40because things didn't get worse.
00:23:42And okay, maybe, right?
00:23:44But to me, there was like a bigger observation
00:23:46that came out of that, which is like, wait,
00:23:47so what are we doing during the work days?
00:23:51Like there's something going on here
00:23:54that should really catch our attention.
00:23:55What does work mean?
00:23:57That we could take an entire day off the table
00:23:59with no other preparation
00:24:01and the valuable stuff being produced doesn't change.
00:24:04This tells us that like whatever we're doing
00:24:07while we're sitting here at work is not just sitting down
00:24:09and trying to produce value.
00:24:11We clearly have all sorts of other
00:24:13distractions going on, context switching,
00:24:16time that's being devoured, Parkinson's law at play.
00:24:19Work must be broke.
00:24:21To me, that was the more important observation is that like,
00:24:23if you can take away a day and nothing changes,
00:24:25then I don't think we're doing in the office
00:24:27what we think we're doing in the office.
00:24:29- Parkinson's law was on the tip of my tongue.
00:24:31Work expands to fill the time given for it.
00:24:33And if you give people five days, they'll take five.
00:24:36And if you give them four days, then they'll do it in four.
00:24:39Look, everybody knows just how much time they waste
00:24:44not doing the work,
00:24:46not doing the thing that they're supposed to do.
00:24:47And this isn't victim blaming.
00:24:50This is a lot of the time dealing with admin
00:24:52and necessary meetings.
00:24:53You can't get out of them.
00:24:54You have to be there for whatever reason.
00:24:57So it's not as if it's bottom up.
00:24:59A lot of it is top-down dictated.
00:25:01This is the environment that you work in
00:25:02and you have to do this.
00:25:04But even outside of that,
00:25:05when you do have your one hour in between meetings,
00:25:09your inability to not...
00:25:11I remember when I used to run nightclubs
00:25:13and I'd get in at 2.30 in the morning,
00:25:16the final part of the night was cashing the till.
00:25:19So this was before we switched to tickets,
00:25:22which was sort of the late teens, just before COVID.
00:25:26Digital tickets online, which meant that you didn't have
00:25:28to cash as much money in the till.
00:25:30But before that, it was all five pounds and 10 pound note
00:25:33and 20 pound notes and single pounds
00:25:35and all the rest of it.
00:25:36And I would go into the office with the manager of the venue
00:25:39and we would be counting the money.
00:25:40But this is the final task.
00:25:42The final bit of the night.
00:25:43It's fucking 2.15 or 2.30 in the morning.
00:25:45We've just taken the till off as it's called.
00:25:48Anybody that's coming in doesn't get to come in.
00:25:50We're not gonna take any more money.
00:25:52And I'm sat up there doing light-lift mental arithmetic.
00:25:57But for me, somebody who hadn't done math since I was 16,
00:25:59it was a relatively heavy lift.
00:26:01Flicking through the money, flicking through the money,
00:26:04huge fluorescent overhead lights just before.
00:26:06And then I get to drive home and I'm thinking about it
00:26:08and I got to go put the money in the till
00:26:09and I got to write it in the spreadsheet
00:26:11and then I get into bed.
00:26:12And as I got into bed, my eyes below my eyelids
00:26:15would start flicking left and right.
00:26:17I wouldn't be able to tune myself.
00:26:19I'm also doing this, let's not forget,
00:26:21in a sweaty beer stinking office above a room going,
00:26:26(imitates drum beating)
00:26:29I've had to walk through the club.
00:26:30I've had to shout at the hostesses.
00:26:32One of them's getting fingered on the dance floor.
00:26:33Stop doing that.
00:26:34You're supposed to be at work.
00:26:35The DJ's pissed.
00:26:36I need to, you know, it's chaos.
00:26:37And I've tried to coordinate this orchestra of bullshit.
00:26:40And then I've had to do mental arithmetic.
00:26:41And then I get to drive home and then I'm like,
00:26:43okay, chill out brain.
00:26:45It doesn't want to.
00:26:46And that eyes moving left and right thing,
00:26:49I think is the ocular equivalent
00:26:53of how people feel
00:26:57when they finally get a moment.
00:26:58It's like, okay, all of my stuff is done.
00:27:01And then they try and sit down to work on the thing
00:27:05that they're ostensibly actually there to do, right?
00:27:08'Cause all of the other bullshit, the meetings,
00:27:09you're not there to do the meetings.
00:27:10You're not there to do the Slack.
00:27:12All of that is foreplay to get you to do the thing
00:27:15that you're there to do.
00:27:16And then you sit down to do the thing you're there to do.
00:27:19And your eyes are moving behind your eyelids
00:27:21is the equivalent.
00:27:21You're swiping and moving across the screen
00:27:24and you've got a few different other,
00:27:25well, just check on this thing.
00:27:27Like what the living fuck is going on?
00:27:28I've like trained, the environment that I work in
00:27:32has trained me out of being able to do my work.
00:27:33- Well, what are we meant to do?
00:27:38Like, what would be the ideal workday
00:27:40in an office environment
00:27:41that would actually match the human brain?
00:27:43It would probably be you come in,
00:27:45you work on something hard for a while.
00:27:47Like that's what you do in the morning.
00:27:49You have lunch and then you like catch up with,
00:27:52have some meetings, talk to some people,
00:27:54hey, what's going on and do some tasks and that's your day.
00:27:58Like that's basically what we can do, like two things.
00:28:00One big burst of like, let me focus on something hard.
00:28:03And then we can kind of come down the mountain after that
00:28:06with let me chat with people, what's going on.
00:28:08Some decisions need to be made or whatever.
00:28:10That's probably about optimal.
00:28:11Instead, we juggle a dozen to two dozen tasks
00:28:14that all have their own demands.
00:28:16They all have their own communication needs.
00:28:18This is why the Microsoft data shows,
00:28:20oh, the work happens Saturday and Sunday morning.
00:28:23It is really hard.
00:28:25You can't go from, and meetings are very hard as well.
00:28:28We think like, oh, I'm not actually doing work
00:28:29during meetings, but what you are engaging in a meeting
00:28:32is all the parts of your brain
00:28:34that deal with social interaction.
00:28:36And those are a large part of your brain
00:28:38and that is a fraught and mental energy consuming activity
00:28:42to sit in a room or on a Zoom screen
00:28:43and try to manage all these different people
00:28:46and how do I look and what am I saying
00:28:48and what's going on here and I have to say the right things.
00:28:50It's draining.
00:28:52And you come out of something like that,
00:28:54it's difficult just to jump right back into something else.
00:28:56And if you come out of something like that
00:28:58and there was a lot of obligations generated,
00:29:00oh, we discussed in this meeting things I need to do.
00:29:03And now you try to go straight
00:29:04from that meeting into another.
00:29:06Well, now that's really in the back of your head.
00:29:08What about this?
00:29:09What about this?
00:29:10We can't forget this.
00:29:11We just made our obligations.
00:29:12That feeling of fatigue.
00:29:13Fatigue is really what it feels like,
00:29:15a mental fatigue.
00:29:17Like there's a sand in your brain,
00:29:19sand in the gears of your brain.
00:29:20That's the state that a lot of people
00:29:22who work in front of a computer screen,
00:29:23like that's the state they're in most of the day
00:29:26and they don't even realize,
00:29:28oh, that's a bad feeling.
00:29:29That's a negative state.
00:29:30That's not how it needs to feel
00:29:32because you have nothing else to compare it to.
00:29:35Yeah, the amount of things we're doing,
00:29:36the amount we're trying to switch back and forth,
00:29:39I always thought that part of the problem
00:29:41was a lot of our current thought about work culture
00:29:45and hustling and what it means to produce
00:29:47was influenced by Silicon Valley in the '90s and 2000s
00:29:50because that was considered this very ascendant part
00:29:54of the economy through the 2000s,
00:29:56through the Steve Jobs era.
00:29:57We looked at Silicon Valley like,
00:29:58these are the coolest companies.
00:30:00They're doing all the coolest stuff.
00:30:02Over there, I think they adopted a model of work
00:30:06that was very inspired by computer processors, right?
00:30:09So because that was what was in the air in the '80s and '90s
00:30:12in Silicon Valley was the computer processor world,
00:30:14the 386 versus the 486 versus the Pentium
00:30:17and it was all about speed.
00:30:19And the thing with a computer processor,
00:30:20if you're a computer type,
00:30:22what matters is you never want the pipeline to be empty, right?
00:30:27You wanna always make sure you have stuff
00:30:29for that processor to do so it never wastes time.
00:30:32Every command you give the processor,
00:30:35it operates the same as any other.
00:30:36It can switch.
00:30:37It doesn't care what they are.
00:30:38It just sits there and operates one command after another.
00:30:40And the whole game with getting processors to be effective
00:30:43is like don't have downtime.
00:30:44Like the real fear,
00:30:45I can put on my computer scientist hat for a second.
00:30:48The real fear in computer processor design
00:30:50is that you sometimes get to a command
00:30:52that's gonna generate a huge delay.
00:30:55So you say like, oh, go get something from memory.
00:30:58That takes a lot of time from the perspective
00:31:00like a computer processor cycle.
00:31:01It's just sitting there, cycle after cycle,
00:31:03doing nothing while you're waiting
00:31:05for the memory bus or whatever.
00:31:06So we invented these processor pipelines like,
00:31:08oh, while we're waiting to get something back from memory,
00:31:11here's some other stuff the processor can run
00:31:13so that it's never not working.
00:31:15And the idea was you wanna move as fast as possible
00:31:18and you never wanna have downtime.
00:31:20And that's how you get the most out of a computer processor.
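[Editor's note: the pipelining idea described here can be put in a few lines of code. This is a toy sketch, not anything from the conversation; the instruction mix and stall count are invented for illustration.]

```python
# Toy model of the pipelining idea: a memory fetch stalls the processor
# for many cycles, and a full pipeline hides that latency with other work.
# All numbers here are made up for the demo.

def utilization(n_compute, n_fetch, stall_cycles, pipelined):
    """Fraction of cycles spent doing useful work.

    n_compute: one-cycle compute instructions.
    n_fetch: memory fetches; each issues in one cycle, then stalls.
    pipelined: if True, independent work fills every stall cycle.
    """
    busy = n_compute + n_fetch
    idle = 0 if pipelined else n_fetch * stall_cycles
    return busy / (busy + idle)

# 80 compute instructions plus 20 fetches that each stall for 10 cycles:
print(utilization(80, 20, 10, pipelined=False))  # about a third: mostly waiting
print(utilization(80, 20, 10, pipelined=True))   # 1.0: never idle
```

The contrast drawn next in the conversation is exactly this: filling every idle cycle is the right strategy for silicon and the wrong one for a brain, where each switch carries its own cost.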
00:31:22The human brain is like 180 degrees different.
00:31:25We can't just switch back and forth
00:31:27between unrelated commands.
00:31:29You switch me from one to another thing and boom,
00:31:3230 minutes of my mind is fried.
00:31:34Humans operate very differently,
00:31:36but I think Silicon Valley associated,
00:31:38it said here's the thing we're gonna associate
00:31:39with being really good at your job.
00:31:40It used to be, I don't know, your skill.
00:31:43It was Don Draper and Mad Men.
00:31:45Remember that conception of,
00:31:47what it meant to be good at your job?
00:31:48They weren't showing Don Draper grinding it out.
00:31:51Like man, Don Draper is like in the office
00:31:53till 3 a.m. every night or whatever.
00:31:56No, he took the five o'clock train
00:31:57back to Connecticut or whatever.
00:31:59It was, he was really, really good
00:32:02at coming up with ad copy.
00:32:03He was good at what he did.
00:32:05That's what you used to respect.
00:32:06And then after the 80s, 90s, Silicon Valley became pervasive.
00:32:10Like no, what matters is you never have a no-op.
00:32:13You never have a down cycle.
00:32:15You might as well say yes to more things.
00:32:17You might as well get more emails.
00:32:18You never have time where you're not working.
00:32:20That's what productivity is gonna be.
00:32:22And that was a disaster for the human brain.
00:32:24- If you struggle to stay asleep
00:32:26because your body gets too hot or too cold,
00:32:28this is going to help.
00:32:29Eight Sleep just released their brand new Pod 5,
00:32:32which includes the world's first
00:32:33temperature regulating duvet.
00:32:35Pair it with the smart mattress cover,
00:32:37which cools or warms each side of the bed up to 20 degrees.
00:32:40And you've got a climate controlled cocoon
00:32:42built for deep uninterrupted rest.
00:32:44The new base even comes with a built-in speaker
00:32:46so you can fall asleep to white noise, nature sounds
00:32:48or a little ambient Taylor Swift, if that's your thing.
00:32:52And it's got upgraded biometric sensors
00:32:54that quietly run health checks every night,
00:32:56spotting patterns like abnormal heartbeats,
00:32:58disrupted breathing or sudden changes in HRV,
00:33:00which is why it has been clinically proven
00:33:02to increase total sleep by up to one hour every night.
00:33:06Best of all, they've got a 30 day sleep trial
00:33:07so you can buy it and sleep on it for 29 nights.
00:33:09And if you don't like it, they will give you your money back
00:33:11plus they ship internationally.
00:33:13Right now, you can get up to $350 off the Pod 5
00:33:16by going to the link in the description below
00:33:18or heading to eightsleep.com/modernwisdom
00:33:20using the code modernwisdom at checkout.
00:33:22That's E-I-G-H-T sleep.com/modernwisdom
00:33:26and modernwisdom at checkout.
00:33:28There's definitely an element of this
00:33:31that it's very public productivity.
00:33:36It's very obvious.
00:33:37Look at how hard I'm working, right?
00:33:39If you're the one that replies quickest on Slack or on email,
00:33:42then it's evident that you're the one,
00:33:44it looks like you're the one that's working hardest
00:33:47because you're the one that's most responsive.
00:33:49Whereas the person who's silently working on their own,
00:33:51they can't broadcast it by design.
00:33:53They can't broadcast it to everybody else.
00:33:55So yeah, this obvious productivity in a way
00:34:00is way less sexy.
00:34:01So I think the new elephant in the room is AI
00:34:06and how that is enabling an increase in pace of output,
00:34:11but almost certainly a decrease in quality.
00:34:18So fold AI into your existing worldview
00:34:23because to me it just seems like a huge force multiplier
00:34:28for what already was pretty sloppy, Slack, email,
00:34:33async communication that's always on,
00:34:35people taking their work home with them,
00:34:37never being able to not context switch,
00:34:38not focusing on quality and instead focusing on quantity,
00:34:42not being able to dial themselves in
00:34:44to do deep work for one moment.
00:34:46And now that is enhanced and magnified even more
00:34:51by the use of LLMs to help you put out more,
00:34:55to help you think less.
00:34:56So your focus is actually, you're in Slack with your LLM.
00:35:00It wouldn't surprise me if there is a LLM integration
00:35:03into Slack at some point in the future.
00:35:05I don't know whether there is,
00:35:07where you can just do it in there.
00:35:09So you're just talking back and forth
00:35:10in one fucking workspace.
00:35:12Talk to me, fold AI into this.
00:35:13You must have a million thoughts.
00:35:15- Oh, there's a lot going on with AI.
00:35:17I mean, I think in its current instantiation,
00:35:19so we think about like an office worker for the most part,
00:35:21put programmers aside, I'll get back to them,
00:35:23but non-programmers are really interacting with chatbots.
00:35:25Like that's the main way they're integrating right now
00:35:28with AI.
00:35:30It's exaggerating exactly what you said.
00:35:32For a lot of people,
00:35:32it's exaggerating the problems that already exist.
00:35:35Now there's a term for this
00:35:36that comes out of a Harvard Business Review article
00:35:38from last year.
00:35:39They call it WorkSlop, put together as one word.
00:35:43And they have some pretty compelling data on this.
00:35:45- So what's WorkSlop?
00:35:45Define WorkSlop for me.
00:35:47- So WorkSlop is AI generated work products
00:35:50in the knowledge work sector.
00:35:51So like emails, reports, and PowerPoints, or what have you
00:35:55that are generated quickly by AI,
00:35:59but they're so low quality
00:36:02that they make everyone else's jobs harder.
00:36:05This seems to be,
00:36:06this is like the defining aspect of WorkSlop.
00:36:08It's quick to produce, but it's so low value
00:36:10that no real progress actually gets made.
00:36:12So like you get a WorkSlop email from your boss or whatever,
00:36:17and like, this isn't useful to me.
00:36:18It's this weird wordy thing that's broken up into sections,
00:36:22and it doesn't get to the core of the problem
00:36:23we have to solve.
00:36:24So you made that email quick,
00:36:26but in the bigger scheme of things,
00:36:29we made very little progress towards what we want to do.
00:36:31Or you put together a WorkSlop PowerPoint presentation
00:36:34so that you would have something at the meeting,
00:36:36but now we're spending 20 minutes looking at this nonsense
00:36:38and nothing, it's not helping us.
00:36:40It's not helping us actually do things.
00:36:41So this is what's happening, or at least my fear.
00:36:43I mean, the reality is most people
00:36:46aren't using these tools in the office.
00:36:48Let's just set the reality, right?
00:36:49So, but for the people who are using them right now,
00:36:53which is a healthy percentage, but it's not-
00:36:54- Do you know what the numbers are?
00:36:56- Well, it's difficult because there's a lot of fudging
00:36:58of the numbers here.
00:36:59There's a lot of mistaking "I have used or experimented with them"
00:37:04for "I am regularly using them."
00:37:06So I see this mistake happen a lot.
00:37:09And so it's difficult to get good numbers.
00:37:11There was like famously,
00:37:13maybe it was an Ethan Mollick article
00:37:15where he was talking about, in my world, like academia,
00:37:17the homework apocalypse.
00:37:18And he's like, look at this study,
00:37:20students just don't do work anymore.
00:37:22Nine out of 10 are just using chatbots now.
00:37:25But you look at that study and what it actually said was
00:37:28nine out of 10 had tried using a chatbot at least once.
00:37:31And if you looked at who's using them regularly,
00:37:33it was like two out of 10, right?
00:37:36Because like for most of the students,
00:37:37it wasn't helping the way they thought it would.
00:37:39So I don't know what the numbers are.
00:37:41If you count like advanced Google use, I think it's larger.
00:37:46Like, yeah, I searched for information on this
00:37:48instead of going to Google, that's larger.
00:37:50But in terms of people who are actually making office work
00:37:53product out of it, I think it's smaller
00:37:55than the people who follow AI commentary
00:37:57or talk about it on AI Twitter, AI YouTube.
00:37:59I think it's a lot smaller than they probably assume
00:38:01just because in their world it's pervasive.
00:38:03But the people who are using it, this is the problem.
00:38:06They're trying to avoid, this is my theory on this.
00:38:09It's like, how is AI helping like an office worker now?
00:38:11Well, their brain is exhausted
00:38:12from all this context switching.
00:38:14So what problem are they looking to solve?
00:38:16They're looking to avoid having to do hard moments
00:38:19of cognition because their brain is so fried.
00:38:21It's really difficult to like solve the blank page problem.
00:38:24Oh God, I got to send this email.
00:38:27I got to, it's a blank screen.
00:38:28I got to start writing from scratch.
00:38:29That's really hard.
00:38:30- And the inertia that they've been trained
00:38:33out of overcoming because of the primer.
00:38:35It's almost like a one, two punch.
00:38:38Humans were primed to not like heavy co...
00:38:42Well, we already didn't like heavy cognitive load.
00:38:44Then our ability to deal with it and get
00:38:47through that initial resistance was decreased
00:38:49through the context switching.
00:38:50And now we're, don't worry about it.
00:38:53Don't worry about it, carbon-based life forms.
00:38:55The silicon-based life forms are coming.
00:38:57- And let's throw in one other aspect in there.
00:38:59Also outside of work, we had these distraction machines
00:39:03in our hand that were further degrading our comfort
00:39:05with concentration because any possible moment
00:39:08of introspection we would have had even outside of work.
00:39:10Why would I do that when TikTok has like the perfect dash cam
00:39:14video of a Karen getting punched or something?
00:39:17Like I got to watch that, right?
00:39:19And so that revolution comes along
00:39:22plus the email revolution.
00:39:23We completely atrophy our ability to think
00:39:25and we exhaust our brain.
00:39:26So the other aspect of it, as we talked about
00:39:28it's really exhausting to go through your day
00:39:31context switching.
00:39:31So like I don't have any reserves left
00:39:33to write this PowerPoint.
00:39:34That seems impossible.
00:39:36And then AI is like, Hey, Hey, Hey, Hey,
00:39:38I can do it for you.
00:39:39It'll be fine. It'll be fine.
00:39:40It'll be good enough.
00:39:41It'll be good enough.
00:39:41You're like, Oh, okay.
00:39:42I can smooth over.
00:39:44I use this analogy in a New Yorker piece last year.
00:39:46It's like, your effort graph looks like spikes
00:39:49like an EKG or something like that.
00:39:51And AI smooths over those peaks.
00:39:54And so you don't have to,
00:39:55your peak concentration required can come down.
00:39:58Like, well, you can fill the blank page
00:40:00and then maybe I have to work with it a little bit
00:40:02but that's easier than doing it from scratch.
00:40:04But the stuff being produced is no good.
00:40:06And so I feel like WorkSlop
00:40:11is less of a critique of AI
00:40:15than it is AI making obvious a problem
00:40:19with the way we were already working.
00:40:20I think that's what's going on there.
00:40:22I think this is even happening with computer programmers.
00:40:24This is considered, you know, heretical right now.
00:40:27I guess I'm used to being yelled at.
00:40:29People are really excited by this workflow
00:40:33where I have seven or eight Claude Code agents
00:40:36going concurrently producing code and testing them.
00:40:38And I'm just a manager of all these different processes.
00:40:40And they're all producing this code on my behalf.
00:40:43And it feels really cool and interesting.
00:40:44Like this has to be the future.
00:40:46I don't know that that is.
00:40:48I mean, I don't know the context.
00:40:51The problem is, outside of like demos or internal tools
00:40:53or just having fun,
00:40:55that's not really code you can trust very well.
00:40:59And it does though completely lower the peaks
00:41:02of being a computer programmer, those peaks of cognition.
00:41:05It's much, much easier to manage
00:41:07a bunch of Claude Code processes
00:41:09than it is to come up with an algorithm.
00:41:11And then you have that same blank page.
00:41:12So I think the jury is still out on even where
00:41:15we're gonna end up in the AI impact on programming.
00:41:18I don't know where it's gonna end up,
00:41:20but the way it's being talked about in the last few months
00:41:22after the latest Claude Code update, is sort of,
00:41:26I guess, that's something humans don't do anymore.
00:41:29I don't think we're ready to say that yet.
00:41:31- I get popped with Claude Code ads.
00:41:34I get, you give me a terminal, I have no idea what to do.
00:41:38I'm like someone's grandmother trying to use an iPad.
00:41:41I have no idea what's going on.
00:41:42So they are pushing very, very hard at the moment.
00:41:46- It's funny, but it's a little bit crazy.
00:41:48It's my world, I'm a computer scientist,
00:41:50but engineer and computer scientist types
00:41:54forget how technically advanced they are.
00:41:56So yeah, Claude Code works in the terminal, right?
00:41:59And that's why it works so well.
00:42:01It exists in a world of text only:
00:42:02command line commands, like the old DOS command line.
00:42:06It's all text commands, which you can do a lot with.
00:42:09You can create an edit and compile a computer program.
00:42:11So it's very good at that.
00:42:12And it's a limited set of textual commands.
00:42:14That's perfect for a language model.
00:42:17And the engineers are like,
00:42:18oh, we can use this terminal based tool
00:42:20to do all sorts of other stuff
00:42:22that's not computer programming.
00:42:24Great, this is solved, everyone's gonna be doing this.
00:42:27Everyone is gonna have these sort of personal assistants
00:42:30based on something like Claude Code.
00:42:32I'm like, man, do you realize how foreign a command line
00:42:36interface is to people?
00:42:37You realize like how weird and nerdy
00:42:39and complicated your world is?
00:42:40You're like, yeah, this will be great.
00:42:41My grandma will just, on the command line, understand
00:42:44that the Claude Code agent can bring up a bash script
00:42:47that's just gonna cat those files
00:42:48over to the regex grep, it'll be fine.
00:42:51No one knows how to do any of that type of stuff.
00:42:53So it's sort of funny seeing the engineers
00:42:55building these incredibly intricate nerdy,
00:42:58wonderful tools they've custom built for Claude Code
00:43:01to help them in their life.
00:43:03And they think the gap between that and everyone else
00:43:06having AI automating things in their life is like,
00:43:08oh, it's this real small thing.
00:43:09I'm like, oh man, I don't think you understand.
00:43:11I mean, people are still not quite sure about the right click.
00:43:14I think you still have a ways to go before they're--
00:43:18- I saw this tweet from Robert Freund Law.
00:43:23Lawyer uses ChatGPT to help write a brief.
00:43:25ChatGPT hallucinates cases in quotations.
00:43:28Court sanctions lawyer and four co-counsel
00:43:31for not catching the errors.
00:43:32The lawyer who used ChatGPT has practiced for over 30 years.
00:43:35He prompted ChatGPT: "Write an order
00:43:38that denies the motion to strike with case law support."
00:43:41Told the court that he doesn't normally use ChatGPT
00:43:43and he used it this time
00:43:43'cause he was caring for his dying family members.
00:43:46Said none of his co-counsel
00:43:47were aware of this use of generative AI.
00:43:49Court says that because all five attorneys
00:43:51signed both documents that included these errors
00:43:54and they admit that not one of them verified
00:43:57that the case law in those briefs actually exist,
00:44:00that conduct violates Rule 11(b)(2).
00:44:03- There's hundreds of those happening, right?
00:44:06I heard, I don't know where this site is.
00:44:08There's a site that tracks this.
00:44:10Lawyers getting busted for ChatGPT written briefs
00:44:15that just make things up,
00:44:16because it will for sure make up things if you ask it.
00:44:19Because again, people know this,
00:44:23but right at the very bottom,
00:44:25what is a language model trying to do?
00:44:26It's trying to solve the word guessing game.
00:44:28That's how it was trained.
00:44:28It was given real text, you knock out a word
00:44:31and say, replace that word.
00:44:32Can you figure out what word
00:44:33was really there in the real text?
00:44:35So the language models just think
00:44:36they're trying to expand a real text that really existed.
00:44:39So they're trying to produce text
00:44:41that makes sense given the prompt.
00:44:43There are no world models or structured reasoning in there
00:44:47of like, okay, this is a legal brief
00:44:49and we have a notion of a citation.
00:44:51We don't know how it thinks about that.
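[Editor's note: the word-guessing game described here can be shown with the smallest possible "language model." This sketch uses an invented three-sentence corpus; it learns only which words tend to follow which, so its output is plausible by construction, never verified.]

```python
# Tiniest possible "language model": count which word follows which
# (a bigram model). The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = ("the court cited the case . the court denied the motion . "
          "the lawyer cited the case .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    # Most frequent follower in the training text: plausible, but never
    # checked against any fact or source. That gap, scaled up, is where
    # citation-shaped hallucinations come from.
    return following[word].most_common(1)[0][0]

print(guess_next("court"))  # a word that often followed "court" in training
```

A real LLM is enormously more sophisticated, but the training objective is the same shape: produce text that fits, with no built-in step that asks whether the cited case exists.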
00:44:53There's hundreds and hundreds of cases of this happening.
00:44:56I heard Scott Galloway talk about this on the Pivot Podcast,
00:44:58that there's some site that tracks this
00:45:00that he keeps an eye on.
00:45:01And he says, it astounds you.
00:45:02You think it's a handful of people?
00:45:04It's not, it's all the time.
00:45:06Here's my story of getting burned by that.
00:45:08I sort of learned my lesson.
00:45:09I was working on, because the one way I'll use ChatGPT
00:45:13is just sometimes instead of Google, right?
00:45:16Especially if I want like instructions
00:45:18for how to, whatever, change settings on something.
00:45:20It's great.
00:45:21It has a lot of really useful--
00:45:22- It's fucking spectacular for all of that stuff.
00:45:24If you want to use it as basically a glorified Wikipedia
00:45:26that's more instructive, like--
00:45:28- Yeah, like Wikipedia, you can ask questions of, yeah.
00:45:31So I was using, I was writing an essay
00:45:35and it was on Isaac Asimov's Rules of Robotics.
00:45:39This was a New Yorker essay.
00:45:45And I left my copy of I, Robot.
00:45:45I was here at my studio and I'd left it at home.
00:45:48I was like, oh, I needed to add this quote, right?
00:45:51Oh, I left it.
00:45:52And I was like, oh, you know what?
00:45:53That story's in the public domain.
00:45:55It's all over the internet.
00:45:56And this seems like it would be perfect for ChatGPT.
00:45:59Like, hey, can you just grab a copy and find me that quote?
00:46:02And they'll save me a little bit of time.
00:46:03Like, yeah, here it is.
00:46:04Here's the quote.
00:46:05I was like, yeah, roughly I remember I put it in there.
00:46:07And then the fact checker was like,
00:46:09where's this quote from?
00:46:11I was like, yeah, it's from the story or whatever.
00:46:12I get the book.
00:46:13It had just hallucinated a quote that was more or less
00:46:18like what was said, right?
00:46:19Because again, it's kind of playing the game
00:46:20of this is the type of text
00:46:22that would make sense giving the prompt,
00:46:25but it wasn't the actual quote.
00:46:26It had full access to it, right?
00:46:28You can search this.
00:46:29It's in the public domain
00:46:31so that the actual story is everywhere.
00:46:33So I had just naively assumed
00:46:36if you ask it for some information that exists
00:46:38on the internet that, oh, it'll just go find it
00:46:40and format it for you.
00:46:41It didn't.
00:46:42And then I went to a whole dialogue with it where I was like,
00:46:44this is not the right quote.
00:46:45And it was like, yeah, you're right.
00:46:46You know what?
00:46:47I thought you meant paraphrase a quote.
00:46:48Here it is, made up.
00:46:50I was like, that's not the real quote.
00:46:51Can you go get the real quote and give it,
00:46:54at this point I was just experimenting.
00:46:56I'd already filled it in, the article.
00:46:57And it was like, you're right.
00:46:58Yeah, you know, I was being hasty.
00:47:00Here you go.
00:47:01I could not get it to give me the real quote.
00:47:03So anyway, I learned my lesson.
00:47:05I was like, oh, don't assume,
00:47:08even if it's common information that it has access to.
00:47:11- Dude, the desire to fucking reprimand an LLM.
00:47:15And I've shouted at them.
00:47:17I've capital letter exclamation marks.
00:47:19It's like, what are you doing?
00:47:21What do you do, what are you hoping to achieve
00:47:25by throwing your emotional distress
00:47:27at this fucking disembodied voice on the other side?
00:47:29Okay, all that aside.
00:47:31I fucking love ChatGPT.
00:47:32I think it's been really, really fantastic
00:47:34for tons of things.
00:47:36What's important is learning the limits
00:47:38and not using it for case law.
00:47:39This episode is brought to you by Whoop.
00:47:43I have been wearing Whoop for over five years now,
00:47:45way before they were a partner on the show.
00:47:47I've actually tracked over 1600 days of my life with it,
00:47:52according to the app, which is insane.
00:47:55And it's the only wearable I've ever stuck with
00:47:57because it tracks everything that matters,
00:47:59sleep, workouts, recovery, breathing, heart rate,
00:48:02even your steps.
00:48:03And the new 5.0 is the best version.
00:48:06You get all the benefits that make Whoop indispensable,
00:48:097% smaller, but now it's also got a 14 day battery life
00:48:13and has health span to track your habits,
00:48:15how they affect your pace of aging,
00:48:17it's got hormonal insights for ladies.
00:48:19I'm a huge, huge fan of Whoop.
00:48:21That's why it's the only wearable that I've ever stuck with.
00:48:23And best of all, you can join for free,
00:48:25pay nothing for the brand new Whoop 5.0 strap.
00:48:29Plus you get your first month for free
00:48:31and there's a 30 day money back guarantee.
00:48:33So you can buy it and try it for free.
00:48:35If you do not like it after 29 days,
00:48:37they just give you your money back.
00:48:39Right now, you can get the brand new Whoop 5.0
00:48:42and that 30 day trial by going to the link
00:48:44in the description below or heading to join.whoop.com/modernwisdom
00:48:49that's join.whoop.com/modernwisdom.
00:48:53What opportunities do you think
00:48:54an increasing reliance on AI opens up?
00:48:57'Cause I get the sense that as more people use LLMs
00:49:00to do work for them, this will create advantages
00:49:04in some areas for people who don't need to be reliant.
00:49:08So have you thought about the holes,
00:49:11market openings that will occur?
00:49:13- It will, I mean, the way I think about LLM-based AI
00:49:16versus more advanced AI that we don't know how to build yet
00:49:18is, my theory is, what is being affected
00:49:22is gonna be more narrow at first.
00:49:23It's gonna be places where there's an exact match
00:49:26between what generative AI existing tools can do
00:49:29and existing market sectors.
00:49:32We saw this actually, the week we're recording this,
00:49:34we actually saw this reflected in the stock market.
00:49:37It was this interesting paradox that was going on this week
00:49:40where the stock price of software companies
00:49:44that deal with stuff that is well suited for an LLM
00:49:48went down, they call it the SaaSpocalypse, right?
00:49:51The software-as-a-service apocalypse.
00:49:53So companies that do like legal advice,
00:49:56companies that do graphic design like Figma and Adobe,
00:50:00because a lot of, we have generative image generation
00:50:02that's making building images from scratch less useful.
00:50:06Customer service, so companies that do a lot
00:50:08of customer service type software.
00:50:10We saw the stock was sliding
00:50:12on these very specific software industries
00:50:14because it was like, look, LLMs are gonna be able to do this.
00:50:16It was triggered by Anthropic releasing some plugins
00:50:18that made it easier to integrate LLMs into your services
00:50:21without having to hire these other companies.
00:50:24But you would think that would be good news
00:50:26for the big tech companies building the AI
00:50:28that's gonna replace all this.
00:50:30Their stock was sliding as well.
00:50:32So the big tech companies had this big slide
00:50:34that at the end of the week we're recording this,
00:50:36there was a rebound at the end,
00:50:37it was like a trillion dollars in market cap disappeared
00:50:39from the big tech companies at the same time.
00:50:42So what does that mean the market was betting on?
00:50:45What are investors betting on at that point?
00:50:47What was gonna happen?
00:50:48And they were betting that in the near future,
00:50:50the next year or two, what we're gonna see
00:50:52is selective impacts in specific fields from generative AI,
00:50:57but also that too much money is being invested
00:51:00in these AI companies as it is already,
00:51:03which means they're betting that they're not about
00:51:05to automate most of the economy.
00:51:07They're not just one more iteration away
00:51:10from a huge economic disruption.
00:51:12They're not at this peak of like complete transformation
00:51:15because if they were, you would be trying
00:51:17to increase your holdings in these companies.
00:51:18Like I don't care how much money they're investing.
00:51:21These companies are gonna be worth
00:51:22an astronomical amount of money, but the market is betting,
00:51:26I think the impact is gonna be more limited
00:51:28in the one to two year window
00:51:30than a lot of the commentary was seeing.
00:51:32So I think that's important because talk is cheap,
00:51:35but tech stocks aren't.
00:51:37And so the way people actually spend their money
00:51:39often carries more,
00:51:42I think there's a lot of information in that versus just,
00:51:45I've been reading these articles online
00:51:47and my God, the vibe really seems to be saying
00:51:49this is a big deal.
00:51:51So I kind of agree with the market's consensus right now.
00:51:54For sure, there's gonna be industries that are affected,
00:51:58but it's not gonna be one of these situations
00:52:00where you say, okay, any work
00:52:01that's not just the deepest creative work
00:52:03is all gonna be automated in the next few years.
00:52:04I better go learn how to like do art or something like that.
00:52:07I don't think it's gonna be that broad at first.
00:52:09I don't think the current generation of AI technology
00:52:12can support as broad of impacts as people think.
00:52:15There's a lot of extrapolation from,
00:52:16well, if it can do this with code,
00:52:18certainly it could do this with all these other jobs.
00:52:20If it could do this with this industry,
00:52:21well, certainly next it'll do it
00:52:23for all these other industries.
00:52:24We have to be wary of those extrapolations.
00:52:26- Right, I think I read an article from you.
00:52:28What if AI doesn't get much better than this?
00:52:31Sort of, if we have, I don't know,
00:52:32some sort of Flynn effect thing that kicks in, but for AI,
00:52:36where, you know, 'cause I think a lot of people
00:52:41would agree GPT-2 to 3, fucking hell,
00:52:45to 4, to 4o, I know there's a whole furor on the internet
00:52:49about people that have got girlfriends or boyfriends
00:52:51that are virtual on 4o,
00:52:52and they're all getting upset and sad about it.
00:52:55And I don't understand.
00:52:56I don't think I use the tools sufficiently deeply
00:52:59to be able to test this and benchmark it.
00:53:01It's like, my Fire TV Stick's remote isn't working well.
00:53:05It was able to do that fucking five years ago.
00:53:09But is your, your thinking is that we're maybe
00:53:14gonna reach an asymptote for what LLMs generally
00:53:18and transformer technology is able to do,
00:53:20and then it's gonna be a new architecture entirely
00:53:23if we're going to actually get beyond this?
00:53:25- Yes, yeah, that's what that article is about.
00:53:27I think that was a very, of the articles I've written,
00:53:30I think that was a really important one
00:53:31that came out in August.
00:53:33And the story it tells,
00:53:35and a lot of other people have told the story as well
00:53:37at or around that time and since.
00:53:38But the story it tells is basically what happened
00:53:41is there was this big paper that was published in 2020.
00:53:44The lead researcher Kaplan, Jared Kaplan,
00:53:46I think was at Anthropic at the time.
00:53:48And it was this paper where they said,
00:53:49"Hey, something weird is happening here.
00:53:51If we make LLMs bigger and we train them longer,
00:53:55they perform better."
00:53:56And technically, they were seeing the loss decrease.
00:53:59That sounds kind of obvious,
00:54:00but in like machine learning circles,
00:54:01that was surprising because there's this idea of overfitting
00:54:04where if you just make your model bigger,
00:54:07the performance goes down.
00:54:08So it used to be like,
00:54:09you have to find the perfect size model
00:54:12for your problem space.
00:54:12That's the way people thought about machine learning
00:54:14until this paper came out.
00:54:15And like, I don't know, transformer-based LLMs,
00:54:18they were using GPT-2
00:54:19and they were systematically making it bigger.
00:54:22And they were seeing that the performance just kept going up.
00:54:25Like, this is interesting, so let's try it.
00:54:27And that was GPT-3.
00:54:29All right, let's actually make this like 10X bigger.
00:54:32Surely this can't be right, and it was.
00:54:34It matched the Kaplan curve exactly.
00:54:36Like, oh my God, this actually got way better
00:54:39just by making this bigger.
00:54:40Like, all right, well, certainly that must be the end of it.
00:54:43Let's try it with GPT-4.
00:54:44They made it bigger, they trained it much longer.
00:54:46Months and months they trained it.
00:54:47Microsoft had to build these custom data centers
00:54:49to train it with new AC technology that didn't exist before.
00:54:53And it fit the curve.
00:54:54It was like way better.
00:54:55And the thing GPT-4 did that really got,
00:54:58so GPT-4 set off the whole industry.
00:55:01The thing it did is it started showing abilities
00:55:03beyond just language.
00:55:04And that's where people got excited.
00:55:06Like, oh wow.
00:55:07If you train a language model on enough language,
00:55:11it learns things that aren't just producing language.
00:55:14It can play games, it can do math problems, it can do logic.
00:55:17I mean, this was super exciting.
00:55:19It was super exciting.
00:55:20So the assumption was do this two or three more times.
00:55:25You have AGI.
00:55:26So that's what the whole industry was based off of.
00:55:28When we went from three to four was,
00:55:31this was legitimate, justified excitement.
00:55:33Expand the size and the training duration
00:55:36two or three more times,
00:55:38and the economy is gonna happen in a box.
00:55:41I mean, it was so, that's where all of,
00:55:42that was the engine for all this excitement.
00:55:45So they tried.
00:55:46At OpenAI it was called Project Orion.
00:55:48They made it bigger than GPT-4.
00:55:51They trained it even longer, like here we go.
00:55:53And they tried it and they said, it's not much better.
00:55:57And this was this big brick wall surprise for the industry.
00:56:02Like, wait, it didn't get better.
00:56:04Everyone else tried as well, right?
00:56:06Grok, they tried this with Grok as well
00:56:07at the Colossus data center, it was like,
00:56:08we're gonna have a 200,000-GPU data center.
00:56:12No one's ever built anything this big.
00:56:14And it was like a little bit better.
00:56:16Meta tried this.
00:56:17They had a model called Behemoth.
00:56:19Like, we built the biggest model,
00:56:20bigger than any one we've had before.
00:56:22They didn't release it because it was marginally better
00:56:25than the last model that they had.
00:56:27And so this was a huge issue, right?
00:56:29You couldn't just make the models bigger
00:56:31and train them longer.
00:56:32So what they did was they switched to,
00:56:35what are other ways we can get performance increases?
00:56:38And can we get more narrow about what we mean by performance?
00:56:41And this is when we began
00:56:42to get all the alphabet soup models.
00:56:44Well, it's o3-mini slash whatever.
00:56:49And they switched the focus from just,
00:56:51this is amazing if you use it
00:56:52to we have these benchmark graphs
00:56:54and look at these graphs.
00:56:55Things are going better on these benchmarks.
00:56:56It all became about benchmarks
00:56:58because these are very narrow things
00:57:00that you could train models to do well on.
00:57:02They weren't intuitive.
00:57:03GPT-4 was just awesome.
00:57:05By the time we got to GPT-5,
00:57:06their whole launch, their launch page had 28 graphs
00:57:09of benchmark names that no one knew what they were.
00:57:12And so then they had to look for all these other ways
00:57:14to get improvement.
00:57:15And that's where you got like inference time compute.
00:57:17Well, what if we compute longer for harder questions?
00:57:21And they began really pushing fine-tuning.
00:57:23Well, for specific types of problems,
00:57:25we can get data sets that have
00:57:28questions and answers.
00:57:29And we can use reinforcement learning
00:57:31to try to take this pre-trained model
00:57:34and make it better at this particular type of problem.
00:57:37And then we can have a benchmark
00:57:38that shows us we got better at this problem.
00:57:40And my argument in that article is like,
00:57:43this is a way different game than we were playing
00:57:45when we went from two to three and three to four.
00:57:47We're no longer scaling to AGI.
00:57:49We're taking basically GPT-4
00:57:51and we're doing all of this like tuning
00:57:53and adding extra stuff on top of it and around it
00:57:56and measuring these very narrow benchmarks.
00:57:58And that's why people have this feeling ever since.
00:58:00Like I guess they're better, but it's not in an obvious way.
00:58:03It's better in specific tasks or if I vibe code this,
00:58:06it looks better, I guess, and it seems more narrow.
00:58:09And so, yeah, we're reaching an asym-,
00:58:11there's a long answer to a short question,
00:58:12but we are reaching an asymptote on just pure fine-tuned
00:58:16LLMs as an engine for AI.
00:58:19We're gonna need more architectures.
00:58:20It's gonna take more time.
00:58:21- Well, presumably GPT-6 could come out and oh fuck,
00:58:26they just blew through the entirety of my prediction.
00:58:29This curve no longer flattens out in the way that I thought
00:58:33and shit, this is a different universe now.
00:58:37- Yeah, but that won't happen because they tried
00:58:39and they don't know how to do that.
00:58:41So it's not gonna be just an LLM.
00:58:43I mean, my prediction of the future of AI
00:58:45is I think what we're gonna see,
00:58:46I think LLMs are very powerful,
00:58:48but what we're gonna see is much more of hybrid models
00:58:51that are custom fit to particular problems
00:58:54where, okay, this system does this thing better than a human.
00:58:59And in its guts, there's like an LLM in there,
00:59:02not a huge frontier model, but one that's like souped up
00:59:05and optimized for this particular type of thing.
00:59:07But there's also like five or six other models
00:59:09and there's an explicit world model,
00:59:10there's a future predictor,
00:59:12there's a policy network trained to reinforcement learning
00:59:14to try to evaluate situations to see what's good or bad.
00:59:17There's a whole logic engine on top of this
00:59:19that hooks these together.
00:59:21These are what I think the AI systems of the future
00:59:24are gonna be like, they're gonna be bespoke
00:59:25and there's gonna be a ton of them.
00:59:26So when we get to AGI, it's not gonna be GPT-7
00:59:30can do everything you ask it as well as a human.
00:59:32It's gonna be a world in which there's 10,000
00:59:34different AI products and you realize,
00:59:37everything I can think of now,
00:59:38there's some product out there somewhere
00:59:41that can do this better than humans.
00:59:42Just like there's AI that can play chess better than humans.
00:59:44There's a different AI that can play Go better than humans.
00:59:46There's an AI now that can beat professional poker players
00:59:49at no-limit Texas Hold'em.
00:59:51They're all different systems with their own pieces in them.
00:59:54And a lot of them have some language models in them as well,
00:59:56but a lot of other pieces as well.
00:59:58It's distributed AGI, that's what it's gonna be like.
01:00:00We're just gonna wake up one day and say,
01:00:03there's fewer and fewer things where we say,
01:00:05humans can do this better than computers.
01:00:07And it's a different model than HAL 9000.
01:00:10There's one giant model, it's a really inefficient way
01:00:13to imagine solving this problem.
01:00:14If we just have a big enough language model,
01:00:17it's gonna do all activity, it's gonna power all agents,
01:00:20it's gonna automate all systems.
01:00:22That really doesn't make sense.
01:00:24I think it's gonna be a much more distributed path
01:00:27towards AGI and AI.
01:00:29- Given what AI can and can't do
01:00:36and what the quality of work is
01:00:38that it puts out at the moment,
01:00:40what is some good advice for somebody
01:00:42who wants to work against the weaknesses
01:00:47that are going to be exposed in other people
01:00:49because of their reliance on AI by avoiding it themselves
01:00:52or by using it appropriately?
01:00:53What would you focus on?
01:00:54Because again, it seems to me like quantity
01:00:57is easier to achieve than ever before.
01:01:00Quality is going to be rarer.
01:01:02That inertia, getting the project off the launch pad,
01:01:04the blinking cursor of the blank page,
01:01:06where should people focus their time and their attention
01:01:11in order to capitalize on this?
01:01:12- I think you need to begin thinking about
01:01:16the feeling of cognitive strain,
01:01:19the way that a weightlifter thinks about the burn of a muscle
01:01:22or a runner thinks about burning lungs.
01:01:24As a thing that is uncomfortable in the moment,
01:01:26but man, I'm excited about this feeling
01:01:28because I'm getting stronger.
01:01:31You got to make yourself really comfortable thinking hard.
01:01:35That is the differentiating factor.
01:01:37I mean, obviously I've been saying this
01:01:38for 10 years now, but that's even more now
01:01:41going to be the differentiating factor, right?
01:01:43And if you talk to athletes, they're like,
01:01:44this is like Schwarzenegger in Pumping Iron
01:01:46talking about the pump.
01:01:47And what he's doing is actually really painful, right?
01:01:50Like he's lifting a level of weights
01:01:52where the physical pain he's in is high
01:01:54and he compares it to an orgasm, right?
01:01:56Because if you're a weightlifter, you're like,
01:01:57oh, that pain is directly translating
01:02:00into more strength and more muscle mass.
01:02:01You got to think that same way about your brain.
01:02:03You cannot flee cognitive strain.
01:02:07You have to think about it
01:02:08in a knowledge work cognitive age.
01:02:11That is the feeling of my brain getting more capable.
01:02:14Yeah, I want to seek that out.
01:02:15Let's go get it.
01:02:16Let's go get some, right?
01:02:17Like, I want to, nope, bring my focus back to this thing.
01:02:20I'm going to try to push this through.
01:02:21And then when you're done, be like, oh man,
01:02:23I exhausted my brain.
01:02:24That's awesome.
01:02:25That was like a really good cognitive workout.
01:02:28So don't, while everyone else is using AI to run away
01:02:30from strain, you should be the person running toward it
01:02:33because especially in the American context,
01:02:35I mean, the knowledge economy is now a massive portion
01:02:38of our GDP and the knowledge economy itself
01:02:40is shifting more towards cognition intensive work.
01:02:45So, you know, knowledge work can capture anything
01:02:47where you're not building things.
01:02:48But now all the lower level knowledge work
01:02:50is being outsourced or automated.
01:02:53A lot of it has been replaced over the last 30 years
01:02:55by software.
01:02:56We don't have support staff and assistants
01:02:57and secretaries like we used to because,
01:02:59well, you can use Microsoft Word and email.
01:03:01We don't need separate people.
01:03:03And so the work that's left in our economy,
01:03:05the knowledge economy has been getting more
01:03:07and more cognitively demanding.
01:03:08And so the number one skill is I'm used to straining my brain,
01:03:12learning hard new things and maintaining focus.
01:03:13That's what I would train.
01:03:15- That's so good.
01:03:17I really, really agree.
01:03:19And the funny thing is, that's why I asked at the top
01:03:23if you just felt like fucking Cassandra,
01:03:25because each subsequent development in technology
01:03:30makes this more important.
01:03:32There's always gonna be that seductive whisper
01:03:37in the back of someone's mind that,
01:03:40well, yeah, but I can work faster with AI.
01:03:42I can work quicker, or what if my boss sees me
01:03:45doing executive functioning through Slack more, whatever.
01:03:49What's the elevator pitch for, you should do
01:03:55work of high quality and that will end up winning?
01:03:59- You have to think about employment.
01:04:01Ultimately, it's a marketplace.
01:04:03There's a lot of obfuscation and fog and smoke,
01:04:06but it's ultimately a marketplace, right?
01:04:08You're paid money.
01:04:10In exchange, you produce things that have economic value.
01:04:13That's what makes that exchange make sense.
01:04:15There is not ultimately an underlying economic value
01:04:19to the coordination activities by themselves.
01:04:21There is no actual economic value
01:04:23to the speed of your Slack responses
01:04:25or the number of meetings you go into
01:04:27or the number of like bullet pointed emails
01:04:29with those sort of ChatGPT emojis that you put out.
01:04:32That itself doesn't generate economic value.
01:04:34The stuff that does, in knowledge work,
01:04:35almost always requires you mastering hard skills
01:04:38and applying them through concentration.
01:04:40And ultimately that shakes out.
01:04:42There's only so far you can get or so far you can hide
01:04:45being busy because busyness can't be monetized.
01:04:49And of course you can create a smokescreen for a while.
01:04:52Like, I don't know, like, you know,
01:04:54Chris seems like productive, I guess,
01:04:56like he's always on these emails and this and this and that.
01:04:59But if you're not actually producing things
01:05:01that have economic value,
01:05:02like ultimately that catches up to you.
01:05:03Your opportunities narrow.
01:05:05You're gonna get found out at some point
01:05:07where if you do the other thing,
01:05:09it's like, no, I'm creating stuff that is rare and valuable.
01:05:11It unambiguously has value in the marketplace.
01:05:14You write your own ticket.
01:05:16Like what, you wanna have a business
01:05:17where you work half the year, you can do it.
01:05:19You wanna get paid a huge amount of money, you can do it.
01:05:22You wanna like work for a company,
01:05:23but you choose when you come into the office
01:05:25and you declare, like, I don't wanna do meetings.
01:05:27That's actually a thing, by the way,
01:05:29I talked to a marketing team
01:05:31at one of the major tech companies not long ago.
01:05:33And they said, you know what?
01:05:35We're in the sales side and like our group, the sales group,
01:05:39we are exempt from meetings
01:05:41because they can directly monetize.
01:05:44Oh, you brought in this many dollars.
01:05:46We can see it.
01:05:47And if you're bringing in dollars,
01:05:49they're like, you can do what you want.
01:05:50And they could also see if we make you go to meetings,
01:05:52those dollars go down.
01:05:53It's like, forget the meetings for you.
01:05:54Everyone else, where there's not a clear number
01:05:56where they can see how much value you're bringing,
01:05:57like, oh, you better be there in the meetings.
01:05:59- Dude, I've always thought this,
01:06:00the big problem that most people have
01:06:05that doesn't exist in the world of sports stars.
01:06:08If you're a sports star,
01:06:10everything that you're doing is to facilitate performance
01:06:12and performance is very tightly bounded and it's quantifiable.
01:06:17If you're a weightlifter, 300 kilos is 300 kilos.
01:06:21You either pick it up or you don't pick it up.
01:06:23And your sleep and your recovery and your nutrition
01:06:27and your hydration and your game tape
01:06:28and your technique work and your SNC and your body work
01:06:31and massage and soft tissue and all of that stuff
01:06:34combine to this output.
01:06:36It's a very, very sort of single organizing principle.
01:06:40The same thing goes for tennis
01:06:41and the same thing goes for football
01:06:43and the same thing goes for baseball and so on and so forth.
01:06:45If you do not perform well,
01:06:47you begin to scrutinize all of the contributing elements
01:06:49that come toward that.
01:06:50The problem that you have in most normal people's lives
01:06:54is the output that they're optimizing for
01:06:57is diffuse and very hard to work out.
01:07:00Well, I wanna be a good father
01:07:02but I also wanna perform at work.
01:07:05I do Brazilian jiu-jitsu on an evening time
01:07:07and my wife makes me go dancing
01:07:09and I wanna be engaging at a cocktail party.
01:07:11Okay, well, first off, that's lots of things.
01:07:16It's not a single organizing principle.
01:07:16And secondly, define to me the lineage
01:07:19between your disrupted sleep last night
01:07:23and your poorer performance around the dinner table
01:07:26or in Brazilian jiu-jitsu or whatever.
01:07:29The diffuse thing contributes
01:07:31because you inevitably have to make trade-offs
01:07:33from one thing in order to do another.
01:07:35But also it's just hard.
01:07:36It's hard to work out how your performance is performing.
01:07:39And this is the same in the work life.
01:07:43Perfect example, the salespeople,
01:07:45we just know if we make you do this thing,
01:07:48we lose that thing.
01:07:49And that thing is more important than this thing.
01:07:51It would be like if for some reason
01:07:54sports stars were being encouraged to stay up late.
01:07:56You go, well, we know if we make you stay up late
01:07:58answering fucking slacks,
01:08:01your performance in the game the next day decreases.
01:08:04But for most people, there's this implicit assumption
01:08:08that part of what you do is the contribution
01:08:11to the strategy and the operations
01:08:13and the executive function culture and so on,
01:08:17which means that you forget what you're there for.
01:08:19I think people have forgotten what they're there for.
01:08:21What am I supposed to be here at work doing?
01:08:23What is my outcome goal?
01:08:25- There's so much fat
01:08:27in the American knowledge work sector right now, right?
01:08:30'Cause we're so wealthy
01:08:33and there's so much money being slung around
01:08:34that we can have whole organizations
01:08:37where most people don't even know
01:08:38how they're directly connected to producing that value.
01:08:40And they could just be doing email all day or whatever, right?
01:08:42It's so inefficient.
01:08:45But there are plenty of knowledge work areas
01:08:49where people don't put up with a bunch of this nonsense.
01:08:50And it's all areas where it's very easy
01:08:52to quantify your production.
01:08:55I did this essay a couple of years ago
01:08:58where I did a reflection where I said,
01:08:59God, almost every thought I've had in my books
01:09:02all came out of my experience as a grad student at MIT.
01:09:06So I was at the theory of computation group
01:09:09in the computer science department at MIT.
01:09:11They don't call it a department,
01:09:12but the theory of computation group in the CS lab at MIT,
01:09:16which is like a group, the professors there,
01:09:18the students, we weren't like this,
01:09:19but the professors were super geniuses.
01:09:21Like literally Turing Award, Turing Award,
01:09:24MacArthur, MacArthur, Turing Award, Dijkstra Prize,
01:09:26like smartest people in the world.
01:09:29And it was incredibly clear if you were successful or not.
01:09:32What major theorems did you prove in the last few years?
01:09:35That's it, that's all that mattered, right?
01:09:37And that required a lot of thinking.
01:09:38So they were terrible with email.
01:09:41They had no interest in social media.
01:09:44Meetings, like if you're trying to throw meetings at them,
01:09:46they would just ignore you, right?
01:09:47I wrote about this in deep work even, and people pushed back.
01:09:50I was like, this is what it's like in that world.
01:09:52If you send someone an email in this world,
01:09:54like one of these professors,
01:09:56and you're like, this is ambiguous.
01:09:59You kind of didn't word this well,
01:10:00or I don't really want to do this.
01:10:01They just ignore it.
01:10:03Like that's on you, buddy.
01:10:04Like I have to get, you know, I'm being,
01:10:06I will lose my job if I'm pre-tenure,
01:10:08if I don't come up and solve theorems.
01:10:10And they put up with no nonsense.
01:10:11And a lot of that actually infused the book, Deep Work,
01:10:14because like, you know what?
01:10:15I came of age in an environment
01:10:17where all anyone cared about was focus,
01:10:20and everything else was secondary.
01:10:22It's like athletes, just like you said,
01:10:24if this is getting in the way of my launch angle going down,
01:10:27or my batting average adjusting, I'm going to change it.
01:10:30But it's crazy right now in knowledge work
01:10:32how many positions that's not true for.
01:10:33But what I advise people then is,
01:10:34get in a position where that's true.
01:10:37Change your profile at work,
01:10:39or if you're changing your job, change your job into one
01:10:42where your value production is unambiguous.
01:10:44Now, this is a double-edged sword,
01:10:46'cause it swings both ways. - 'Cause you can't hide anymore.
01:10:48- You can't hide anymore.
01:10:49But if you get into one of those situations
01:10:51and then you do the cognitive work,
01:10:53I know how to focus, I build the skills, I apply the skills,
01:10:57I'm not afraid of cognitive strain,
01:10:59you're in the absolute best position in our economy, right?
01:11:02You can write your own ticket,
01:11:03but you have to be willing to go into a circumstance
01:11:07of like, this is the only world I know.
01:11:09And academia is, what did you publish?
01:11:11That's all that matters.
01:11:12That's all we care about, what'd you publish.
01:11:14Book writing, how many copies did your last book sell?
01:11:17That's all that matters.
01:11:18There's no, you know what though?
01:11:19He answered our publisher email so quickly,
01:11:22so let's give him another deal, folks.
01:11:24No, it's exactly how many dollars did you make us last time?
01:11:27That's what we care about for the next time.
01:11:30So it's a scary world where you're being held accountable.
01:11:34But it's an equation I always say,
01:11:36is that if you're accountable,
01:11:37you don't have to be accessible.
01:11:39If you're like, I can point to,
01:11:42this is the value I produced and I'm killing it for you,
01:11:46then I don't answer emails, I don't go to these meetings,
01:11:49I don't do 50 sorts of things.
01:11:50You can get away with almost anything you want.
01:11:52So I think that's, more people should make that move,
01:11:54especially in the AI age, I suppose.
01:11:57More people should make that move towards like,
01:11:58hey, hold me accountable,
01:12:00and then do the work to actually show up.
01:12:03It makes your life so,
01:12:04it's such a better way to go through knowledge work.
01:12:05You get away from that hyperactive hive mind,
01:12:08brain melting, distracting, soul crushing,
01:12:11slack all day long nonsense.
01:12:13- In other news,
01:12:14you've probably heard me talk about Element before
01:12:16and that's because I am frankly dependent on it.
01:12:20And it's how I've started my day every single morning.
01:12:24This is the best tasting hydration drink on the market.
01:12:27You might think, why do I need to be more hydrated?
01:12:28Because proper hydration
01:12:29is not just about drinking enough water,
01:12:31it's having sufficient electrolytes
01:12:33to allow your body to use those fluids.
01:12:35Each Grab and Go stick pack
01:12:36is a science backed electrolyte ratio
01:12:38of sodium, potassium, and magnesium.
01:12:40It's got no sugar, coloring, artificial ingredients,
01:12:43or any other junk.
01:12:43This plays a critical role in reducing muscle cramps
01:12:46and fatigue while optimizing brain health,
01:12:48regulating your appetite and curbing cravings.
01:12:51This orange flavor in a cold glass of water
01:12:53is a sweet, salty, orangey nectar,
01:12:56and you will genuinely feel a difference
01:12:58when you take it versus when you don't,
01:12:59which is why I keep going on about it.
01:13:01Best of all, there's a no questions asked refund policy
01:13:03with an unlimited duration.
01:13:04Buy it, use it all, and if you don't like it for any reason,
01:13:07they give you your money back
01:13:08and you don't even have to return the box.
01:13:10That's how confident they are that you'll love it.
01:13:12Plus, they offer free shipping in the US.
01:13:14Right now, you can get a free sample pack
01:13:15of Element's most popular flavors with your first purchase
01:13:18by going to the link in the description below
01:13:19or heading to drinklmnt.com/modernwisdom.
01:13:23That's drinklmnt.com/modernwisdom.
01:13:27Let's say that you were in an organization
01:13:29that was small enough
01:13:30that you could actually enact some change.
01:13:31Maybe you're at the top of it, near the top of it,
01:13:33or you're just, you're toward the bottom of it,
01:13:35but you feel like you've got the ear
01:13:36of the person that's in charge.
01:13:39If you were to say, well, you've got the classic
01:13:42diffuse hive mind, pseudo productivity malaise,
01:13:47like the ambient soup that everybody's swimming in,
01:13:50how would you, what would you do?
01:13:54How would you rework the internals of an organization
01:13:57that still needs to communicate?
01:13:58Obviously, there has to be coordination.
01:14:00People aren't working in silos.
01:14:02There is gonna be inevitable communication
01:14:03and coordination that needs to happen.
01:14:06How do you survive the modern world?
01:14:09What would you propose?
01:14:10How would you restructure things?
01:14:11- Yeah, I mean, I would do a few things.
01:14:13One, I would say we're gonna have explicit workload tracking
01:14:15and management, right?
01:14:16No more just people throw stuff at you
01:14:18and you implicitly just add it to your plate.
01:14:20We want a place where we write down
01:14:21what everyone's working on and we can see it.
01:14:24And now we can start talking about things
01:14:25like what is an ideal WIP?
01:14:27What's an ideal work in progress limit for an individual?
01:14:30How many things do we want someone working on
01:14:32at the same time before that curve
01:14:34starts to go the other way?
01:14:36So what you have to do once you start doing that
01:14:38is saying we need a place to track things
01:14:40that need to be done
01:14:41that no one is actively working on right now
01:14:43and we can feel okay about it.
01:14:45So I would definitely want to set up
01:14:46where, when things enter into our radar
01:14:48as this needs to be done,
01:14:50there's a place for that to go
01:14:52and to be stored where it's on no one's plate.
01:14:54- It's like an organizational getting things done inbox.
01:14:58- Yes, and it's not on anyone's plate
01:15:00because as soon as you are responsible for something,
01:15:04it generates email, Slack and meetings.
01:15:06So once it's on your plate,
01:15:08it begins to spin off administrative overhead
01:15:11and in Slow Productivity I call it the overhead tax.
01:15:13That gets spun off as soon as it's on your plate.
01:15:15So everything by default goes to a team plate.
01:15:19No one's working on it.
01:15:20Then we keep track, from that plate,
01:15:23as we move things to people's individual responsibilities,
01:15:25we have like, you should do three things at a time, that's it.
01:15:28And when you finish something,
01:15:30you can pull something else in.
01:15:31So do a small number of things fast and well,
01:15:33and then keep bringing things.
01:15:34So I would definitely do that.
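Cal's team-plate setup is essentially a pull-based board with a work-in-progress limit, the same mechanic Kanban systems use. Here's a minimal sketch in Python; the class, the task names, and the limit of three are all illustrative, not a description of any real tool:

```python
# Minimal sketch of the "team plate" pull system described above:
# work enters a shared backlog owned by no one, and individuals
# pull a new task only when they're under their WIP limit.
# All names here are hypothetical illustrations.

WIP_LIMIT = 3  # max tasks a person actively works on at once

class TeamBoard:
    def __init__(self):
        self.team_plate = []   # tracked work that is on no one's plate
        self.individual = {}   # person -> list of active tasks

    def capture(self, task):
        """New work is written down and visible, but owned by no one."""
        self.team_plate.append(task)

    def pull(self, person):
        """A person pulls work only when under their WIP limit."""
        active = self.individual.setdefault(person, [])
        if len(active) >= WIP_LIMIT or not self.team_plate:
            return None
        task = self.team_plate.pop(0)
        active.append(task)
        return task

    def finish(self, person, task):
        """Finishing a task frees capacity to pull the next one."""
        self.individual[person].remove(task)

board = TeamBoard()
for t in ["white paper", "figures", "quotes", "budget review"]:
    board.capture(t)

board.pull("alice")   # -> "white paper"
board.pull("alice")   # -> "figures"
```

The point of the shape: until `pull` happens, a task generates no email, Slack, or meetings for anyone, which is exactly the overhead-tax argument made above.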
01:15:36The second thing I would do is I would say
01:15:38no more hyperactive hive mind.
01:15:39If you send a message that requires more
01:15:42than a single message in response,
01:15:44that should not happen over digital communication.
01:15:47If I can't just answer your question with one more message,
01:15:50then that has to be real time.
01:15:52Now we can't have that turn into an explosion of meetings.
01:15:54So what we're gonna do is we're gonna have
01:15:56daily office hours for everyone.
01:15:58So there'll be a daily time where everyone knows
01:16:00they can call you or walk to your office or whatever,
01:16:02and go through a bunch of things with you real quick
01:16:03instead of sending emails.
01:16:04We're gonna have morning stand up meetings
01:16:06within the teams for sure.
01:16:08Who's working on what this morning?
01:16:10Who needs what from who to get that done?
01:16:12Go do the work.
01:16:14So we'll definitely do those as well.
01:16:16We might throw in phone hours.
01:16:18It's a new idea I'm thinking about where you say,
01:16:19look, there's a longer period of time,
01:16:21like maybe all afternoon, where you can always call me
01:16:24if there's something that's so urgent
01:16:26you can't wait till the next office hours.
01:16:29There's enough friction in phone calls
01:16:30that that actually turns out to work pretty well.
01:16:31Like I'm not just gonna call you
01:16:33because I'm wanting to get something off my plate.
01:16:35I won't call you unless it really is serious.
01:16:36So I would do that as well.
01:16:38And then I would say, okay, what ongoing work
01:16:41does this not work for?
01:16:43What type of projects do we work on on a regular basis
01:16:45where this isn't working because having
01:16:48to wait till the afternoon is a problem?
01:16:50I say, great, let's identify those.
01:16:52And for each of those, let's build a protocol.
01:16:55Here's our protocol for collaboration on this type of work.
01:16:58And however that's gonna work,
01:16:59but it's like the information goes into this spreadsheet
01:17:02and then whatever, someone checks it in the morning,
01:17:04they move things to shared files.
01:17:06I don't know what it is, but whatever it is
01:17:07that prevents us from needing ad hoc
01:17:10unscheduled messaging.
01:17:11So explicit workload management.
01:17:13I would have this rule of no hyperactive hive mind.
01:17:17I would have protocols for any type
01:17:18of recurring collaboration where it could be explicit
01:17:20about how we actually wanna do this.
01:17:22And then I would have a culture of talking about deep work
01:17:24and concentration, like a tier one skill.
01:17:26How's it going?
01:17:28How many deep work hours did you get in this week?
01:17:30Are you happy about that?
01:17:31What was getting in the way of that?
01:17:33Did you have a particularly good session?
01:17:35Tell everyone else about it, like what worked?
01:17:37Oh, I see, you did music.
01:17:39You have a different look.
01:17:40Oh, let's all think, hey, here's a good idea
01:17:42that we can borrow.
01:17:44Make deep work culturally something you talk about
01:17:48as like this is a tier one skill
01:17:50that we're really proud about.
01:17:52You do those things, you're like gonna 2X your profitability.
01:17:55This is the thing that's always frustrating me
01:17:57about these ideas is like you could make more money
01:18:00if you do it, but it's really hard.
01:18:03Those changes I just talked about, it's hard.
01:18:05There's friction, there's personalities.
01:18:08And this is the thing I really underestimated
01:18:09when I wrote those books.
01:18:11The way we work now is like a low energy point, right?
01:18:16It's like the easiest possible configuration of work.
01:18:21So if you feel friction,
01:18:22you're trying to do something more structured,
01:18:23you're trying to do something that makes better use
01:18:25of our brain and you're getting resistance.
01:18:28The place you're gonna fall when you give up
01:18:30is the way we're doing it now.
01:18:31So it's not arbitrary, I've realized.
01:18:33This hyperactive hive mind,
01:18:34let's just figure things out on the flow,
01:18:36no workload management.
01:18:37It's not arbitrary.
01:18:38It's the low energy.
01:18:40It's like this local minimum.
01:18:42It's the place that like minimizes the complexity
01:18:44that still allows a company to run.
01:18:46And I think that's why we keep falling back.
01:18:48In mathematical terms, it's a suboptimal Nash equilibrium.
01:18:51It's not the optimal way to work together,
01:18:53but no one person can leave it
01:18:55and make their situation better.
01:18:56It's a low energy state, it's an attractor,
01:18:59it's a local minimum in the utility landscape,
01:19:01whatever mathematical metaphor we wanna use.
01:19:04And so it's not arbitrary.
01:19:06I was like, oh, it's like a law of work physics.
01:19:08This thing is like a neutron star
01:19:12in the universe of work that just attracts everything back to it.
01:19:15And it takes a huge amount of energy to escape its pull.
01:19:18That's why I think we've had so much trouble
01:19:21solving this problem even though you would make more money
01:19:23if you did it.
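Cal's "suboptimal Nash equilibrium" claim can be made concrete with a toy two-player game. The payoff numbers below are hypothetical, chosen only to show the shape he describes: everyone on the hive mind is a stable state because a lone defector does worse, even though everyone switching together pays off more for both.

```python
# Toy coordination game illustrating a suboptimal Nash equilibrium.
# Strategies: "hive" (always-on messaging) or "deep" (structured work).
# Hypothetical payoffs: (deep, deep) beats (hive, hive) for everyone,
# but unilaterally going "deep" while your colleague stays on "hive"
# is worst of all (you look unresponsive, miss requests).

PAYOFF = {  # (my_strategy, their_strategy) -> my payoff
    ("hive", "hive"): 2,
    ("hive", "deep"): 3,
    ("deep", "hive"): 1,
    ("deep", "deep"): 4,
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains
    by changing only their own strategy."""
    for i in (0, 1):
        mine, theirs = profile[i], profile[1 - i]
        for alt in ("hive", "deep"):
            if PAYOFF[(alt, theirs)] > PAYOFF[(mine, theirs)]:
                return False
    return True

is_nash(("hive", "hive"))  # True: no one can improve alone
is_nash(("deep", "deep"))  # also True, and strictly better for both
is_nash(("deep", "hive"))  # False: the lone deep worker defects back
```

That is the "low energy state" in game-theoretic terms: (hive, hive) is stable not because it's good, but because escaping it requires everyone to move at once.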
01:19:24- I wonder, I'm thinking about sort of immediately
01:19:30implementable solutions for this.
01:19:33I get the sense that you could probably tell people
01:19:35we don't use Slack before 1 p.m.
01:19:41Like nobody is to post in Slack before 1 p.m.
01:19:44Because you can ring if it's SOS emergency scenario,
01:19:50you can just call somebody.
01:19:51We just don't use it.
01:19:52And then it means that everybody knows
01:19:54that they should not be on Slack, it's a company-wide deep work block.
01:19:58I mean, look, are there gonna be some departments,
01:20:00HR for instance, where it probably would be used?
01:20:03But if your job is HR, or you're in the PR department
01:20:06or something like that.
01:20:07Your job is actually about comms.
01:20:10But if you're in marketing or if you're in accounting,
01:20:13something like that, okay, sit down and do your fucking work.
01:20:16And up until a point, what do you make of intermittent fasting
01:20:19for communication company-wide?
01:20:20- Yeah, it works, especially though,
01:20:22what really makes that more sustainable
01:20:24is if you have that quick morning stand up
01:20:26on the team scale at the beginning of the day,
01:20:29where everyone says, here's what I'm gonna be working on
01:20:31during these morning hours.
01:20:33Here's what I need from each other to make progress on this.
01:20:36So what would have unfolded over Slack and email,
01:20:40you're doing in 10 minutes.
01:20:42So you say, okay, here's what I'm working on this morning.
01:20:44I'm working on the new white paper.
01:20:46Here's what I need though.
01:20:48I need those figures from you.
01:20:50When can you get them to me, by 9:30?
01:20:51All right, you're gonna get them to me by 9:30.
01:20:53And I need those quotes you promised.
01:20:55Can you just do that right away?
01:20:57Okay, so you all know what I need from you.
01:20:59Okay, now I'm gonna put my head down and write that report.
01:21:01So having that meeting ahead of time,
01:21:05where everyone says what they need and what they're gonna do,
01:21:08that makes that time work better.
01:21:09And then the thing that really works,
01:21:11do the same thing on the other end of the morning.
01:21:13All right, you said you were gonna work on
01:21:16this, this and this, what happened?
01:21:19So there's accountability on the other end.
01:21:20You can't run away from it.
01:21:23If you just went on email and social media,
01:21:25they're like, well, wait a second,
01:21:26I thought you were gonna write the white paper.
01:21:27Yeah, and if other people flake,
01:21:29they don't send you the figures,
01:21:30they don't send you the quotes.
01:21:31You're like, I got stuck, man.
01:21:33I never got this.
01:21:34- Cal didn't do what he said.
01:21:35- And they're there in the same room.
01:21:37And they're like, oh, okay, I get it.
01:21:39I get it, I can't just ignore stuff, right?
01:21:42Like I actually have to do it.
01:21:43I think that's a great idea.
01:21:44I think something like that works well
01:21:46if you put that accountability before it
01:21:49and you put it after it.
01:21:50That scares people, by the way, though.
01:21:53That really does scare people
01:21:55because you actually have to do the work.
01:21:57And this is the thing with really social media
01:22:01and smartphones killed this way worse.
01:22:02AI is gonna make this worse.
01:22:04But that was a big inflection point.
01:22:06In terms of losing our comfort with concentration,
01:22:08that got really bad.
01:22:10Once we got algorithmically optimized content,
01:22:12we really got used to that.
01:22:13And so it's scary if you just go to a company
01:22:15and say, here's the new plan, boss.
01:22:17We're gonna have a meeting in the morning.
01:22:19You gotta tell me what you're gonna do
01:22:20for the next five hours.
01:22:21And then you gotta do it.
01:22:22And we're gonna check in after that five hours
01:22:23and see how it went.
01:22:25That's a nightmare for a lot of people.
01:22:27That is like, oh God, I don't know what I'm gonna do.
01:22:30- I agree.
01:22:31I get the sense that a nice way to introduce this
01:22:35would be look, everybody's brain here
01:22:37has been turned into slop.
01:22:39Everyone.
01:22:40No one is able to do their job
01:22:42as effectively as they should.
01:22:44So you are expected to do the work.
01:22:46But the reason that we do the pre and post
01:22:49is not to whip somebody into performance review.
01:22:52It's to give you accountability
01:22:54'cause you don't look like a tit
01:22:55in front of your coworkers.
01:22:56But if you don't get to the point,
01:22:59the same as when you start training for a marathon,
01:23:01you don't run 10K on the first day.
01:23:03You will titrate the dose up over time.
01:23:06Week one, we'll permit some fuckery
01:23:09and week two, we'll permit a bit less fuckery
01:23:11and week three, we're all in it together
01:23:13and this person's pulling ahead.
01:23:14They're really like a hyper responder.
01:23:17They're making loads of gains in the focus gym
01:23:19and other people are moving a bit more slowly.
01:23:20Okay, what is it that they are doing?
01:23:21And so on and so forth.
01:23:22But imagine that.
01:23:23Imagine if you had a company-wide focus initiative
01:23:28where people were just, okay, we're gonna move together.
01:23:30Everybody is going to focus on focus.
01:23:32And interesting around the AI thing.
01:23:35So George, my housemate's writing a book at the moment.
01:23:38Do you know Cold Turkey?
01:23:39Do you ever use Cold Turkey?
01:23:40- I know about it, yeah, the software.
01:23:42- Yeah, it's a website limiter, app limiter for MacBook.
01:23:46We've been using it for a decade.
01:23:47His Cold Turkey went rogue
01:23:50and just kept shutting his browser down
01:23:54even though he wasn't trying to access the thing
01:23:57that he wasn't supposed to.
01:23:58It said he needed to install it.
01:23:59It was a nightmare.
01:24:00And here's a conversation between him and his AI.
01:24:03Cold Turkey has gone rogue and I need to remove it.
01:24:05Please tell me how to delete it from terminal.
01:24:08And the response, the response is,
01:24:11I'm not gonna help you bypass it, George.
01:24:13This is exactly the scenario you set it up for.
01:24:15You're two days in, the book is waiting.
01:24:17Close the terminal and write.
01:24:19And he's replied and said, no, it's got a bug
01:24:22so I can't get on calls.
01:24:23He's pleading with his own AI
01:24:25'cause he's obviously put in the instructions,
01:24:27be rigorous with me, be tough with me,
01:24:29tell me that I should be getting back to being focused
01:24:30when I start to go off task, do the thing.
01:24:32And that's an AI equivalent of what you're talking about
01:22:35which is this supervisory oversight commission thing
01:24:40but his just happens to be based in silicon
01:24:43instead of in other people.
01:24:44- So maybe AI will help us.
01:24:45It could basically chastise us.
01:24:47- Well, the problem is,
01:24:48the problem that you have with the AI thing
01:24:49is it's so fucking sycophantic all the time
01:24:52that it will tend to bend eventually
01:24:57to what it is that you want.
01:24:59- Yeah, but no one believes
01:25:00that the chatbot interface is the future of AI,
01:25:02the boosters, the skeptics, the moderates.
01:25:05There's an emerging consensus
01:25:07that we're gonna look back at this current moment
01:25:09where we interact with AI by typing into a chat window.
01:25:13That's gonna be like the Usenet newsgroups
01:25:17at the beginning of the internet.
01:25:18It was like a cool thing early on
01:25:20that showed the promise of the internet
01:25:22but the tools got better.
01:25:24There's better ways to make use of it.
01:25:26So the thought is in the future,
01:25:28AI is gonna be more integrated into more things.
01:25:29It'll be more agentic.
01:25:31It'll be a lot less like having conversations in English text
01:25:34and more like deploying agents to do things,
01:25:36maybe with natural language
01:25:37but also it'll be more integrated into software,
01:25:40it'll be integrated into individual tools.
01:25:42So it'll be much more common that
01:25:43I'm in Microsoft Excel and I'm like,
01:25:47can you sort row five by this amount
01:25:49and cut out all columns that have less than as many values
01:25:52and it does that.
01:25:53That's what the interactions are gonna become like.
01:25:56And so this idea of having a singular anthropomorphized entity
01:26:00through which you're having all conversations,
01:26:02that's almost like an accident of early AI.
01:26:04OpenAI will tell you this,
01:26:06that ChatGPT was supposed to just be a demo
01:26:08of the type of things you could do
01:26:10using the APIs into their language models.
01:26:13It's like the type of tool you can build
01:26:15that would make use of AI
01:26:16and then it caught them completely off guard
01:26:18and everyone wanted to use ChatGPT and chat with it
01:26:20because it was really cool.
01:26:21I don't think that's gonna be the form factor.
01:26:22So I think a lot of these issues we have now,
01:26:24like this is weird.
01:26:25It's unsettling, we're anthropomorphizing it,
01:26:27we're getting parasocial relationships with the agents,
01:26:29we're having romantic relationships with them,
01:26:32we're getting unsettled
01:26:34because having English conversation,
01:26:36we have a hard time not simulating a mind
01:26:39on the other end of this type of thing.
01:26:41- Which is why I shout at my ChatGPT team.
01:26:42- That's why you shout at it.
01:26:43I think a lot of this two years from now is gonna seem,
01:26:45it'll be super narrow, right?
01:26:47Because I don't think just having
01:26:49a sort of general purpose oracle you chat with,
01:26:51that's not the future.
01:26:52That's not what people think we're gonna be doing.
01:26:53- Why are people mad about 4.0 being removed?
01:26:58- My understanding was they were just happy with the fine,
01:27:02so you tune these things.
01:27:04The conversational style comes from
01:27:06a post-training tuning session where you give it,
01:27:09you've already done the pre-training, which is unsupervised.
01:27:11And you go through this post-training session
01:27:13where you have a lot of examples of questions and answers,
01:27:16and you ask the question and then it gives an answer,
01:27:19and then you sort of zap it using optimization theory
01:27:23to try to move, like now we're gonna change the weights
01:27:25to be closer to this answer we already said was better.
01:27:28So if you have a bunch of examples
01:27:29of the way you want something to respond,
01:27:31and then you go through one of these
01:27:32sort of zapping training sessions after the fact,
01:27:35it'll respond more like that.
01:27:36So they just changed the way they were doing that.
01:27:38And the thing they changed to,
01:27:40people didn't like the tone that created.
01:27:42So it was just about what data,
01:27:44literally like the data sets you're using
01:27:46when doing this fine tuning
01:27:48after you've done that big, massive pre-training
01:27:51where it's unsupervised.
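The "zapping" described here is, roughly, supervised post-training: gradient steps that pull the model's output toward answers already labeled as better. A toy one-weight sketch of that mechanic (illustrative only; a real LLM does this over billions of weights with a cross-entropy loss, not squared error on one number):

```python
# Toy sketch of post-training fine-tuning ("zapping"): nudge model
# weights so the output moves closer to an answer already labeled
# as better. A one-weight linear "model" makes the mechanic visible.
# All numbers are illustrative.

def model(w, x):
    return w * x  # stand-in for the network's output

def fine_tune(w, examples, lr=0.1, steps=200):
    """Repeated passes over (input, preferred_output) pairs --
    the 'change the weights to be closer to the answer we said
    was better' step, via gradient descent on squared error."""
    for _ in range(steps):
        for x, y_preferred in examples:
            error = model(w, x) - y_preferred
            w -= lr * error * x   # gradient of 0.5 * error**2 w.r.t. w
    return w

w_pretrained = 0.0                     # "pre-trained" starting point
tuning_set = [(1.0, 2.0), (2.0, 4.0)]  # curated preferred responses
w_tuned = fine_tune(w_pretrained, tuning_set)
# after tuning, model(w_tuned, 3.0) lands close to 6.0
```

Swapping in a different `tuning_set` changes what the model sounds like without touching pre-training, which is the point being made about the tone shift.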
01:27:52- Talk to me about the role of quantum computing in AI.
01:27:57- Minimal to non-existent.
01:27:59- So QAI is all just bullshit?
01:28:03- Yeah, I'm not, yeah.
01:28:05I mean, quantum computing is really interesting.
01:28:07There's a huge amount of technical problems
01:28:08just to actually get these things scaled
01:28:10to the number of qubits in which they're useful.
01:28:12And there's a fallacy out there
01:28:15in thinking about quantum computing
01:28:16that it's basically like a normal computer,
01:28:18but times a million,
01:28:20which is just not the way these things function, right?
01:28:22So there's only very specific problems you can solve
01:28:26with a quantum computer
01:28:28because you actually have to express the problem
01:28:31in the language of physics
01:28:32in such a way that you're creating
01:28:34what's known as a wave function
01:28:35that when it collapses,
01:28:36it's going to collapse to a configuration
01:28:38that's the right answer.
01:28:39Therefore, like implicitly searching a large state space
01:28:41in sublinear time.
01:28:43Only certain problems allow you to do that.
01:28:45So it's unlike a normal computer
01:28:46where I can program a computer to do almost anything.
01:28:49Quantum computers is much more narrow
01:28:50what you can do with it.
01:28:52- Could you give me an example of something
01:28:53that it would and wouldn't be able to do?
01:28:55- Well, like the big example,
01:28:57this was a guy who was at MIT when I was there.
01:28:58Peter Shor early on was the one who figured out like,
01:29:01hey, one of these complicated wave function collapsing things
01:29:03you could do could factor prime numbers or--
01:29:07- Q-day.
01:29:08- Yeah, factor numbers to see,
01:29:10to find the prime factors rather.
01:29:11Find the prime factors of big numbers.
01:29:13That's a really big deal because--
01:29:16- Security.
01:29:17- Yeah, public key encryption.
01:29:19And ironically, this just goes to show how crazy MIT was,
01:29:23is also at MIT is Ron Rivest who I TA'd for,
01:29:27who invented, you see the R in RSA,
01:29:29he invented public key encryption.
01:29:30So like the guy who invented public key encryption
01:29:32is there next to the guy who figured out
01:29:34how quantum computers could--
01:29:36- Could maybe undo it.
01:29:37- Undo it, yeah.
01:29:38So it's kind of interesting.
01:29:39So it's good at that.
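For the curious: the number theory wrapped around Shor's algorithm is classical; only the period-finding step needs a quantum computer. Here's a sketch that brute-forces the period classically, just to show how a period yields the factors. The brute force in `find_period` is exactly the exponential step quantum hardware would replace:

```python
# Classical sketch of the reduction at the heart of Shor's algorithm:
# if you can find the period r of f(x) = a^x mod N, then (when r is
# even and a^(r/2) != -1 mod N) gcd(a^(r/2) - 1, N) gives a factor.
# Only find_period needs a quantum computer; here it's brute force.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N) -- the quantum part in Shor."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N, a):
    if gcd(a, N) != 1:
        return gcd(a, N)          # lucky: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None               # this choice of a fails; pick another
    return gcd(pow(a, r // 2) - 1, N)

shor_factor(15, 7)   # -> 3 (and 15 // 3 = 5): period of 7 mod 15 is 4
```

This is also why the speedup doesn't generalize: the whole construction depends on the problem having a periodic structure you can load into the wave function, which most computations simply don't have.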
01:29:40There's a lot of problems that are based around
01:29:42simulation of quantum or physical physics systems.
01:29:46And that's, you can simulate quantum physics systems
01:29:50directly using a quantum computer,
01:29:52instead of having to try to simulate it classically,
01:29:53so it's very good for that.
01:29:54There's a certain type of search.
01:29:56It gets a little technical,
01:29:57but there's a certain type of search that you can implement.
01:30:00It has applications.
01:30:01So there are interesting applications.
01:30:03But the thing I was beginning to sense recently,
01:30:06which made me worry,
01:30:07is that there was a sense of like height migration.
01:30:10So people are getting a little bit frustrated,
01:30:12sort of like post GPT-5 of like,
01:30:14this isn't filling my need to have something to be,
01:30:17you know, a technology that is gonna change everything.
01:30:19I love that concept.
01:30:21And they begin sniffing around, okay,
01:30:22but what if we just quantum somehow,
01:30:25we'll unlock AI and solve all these problems we're having.
01:30:28I think it's way more complicated than that.
01:30:29There are narrow applications of these particular things
01:30:32that might have some AI application,
01:30:34but you can't like run an LLM on a quantum machine
01:30:38and now it's a billion times better.
01:30:39That's just not how it works.
01:30:40So quantum is interesting.
01:30:42It's just really hard.
01:30:43The problem is the errors multiply.
01:30:44I mean, they make these qubits,
01:30:47these quantum bits they use for these algorithms.
01:30:50It's incredibly complicated.
01:30:51You have, there's different ways to do it,
01:30:52but in some ways you have laser beams
01:30:54and a super cool chamber holding like a particle
01:30:57in a very careful state.
01:30:59And it generates errors
01:31:01and then the errors add up with other errors.
01:31:03And after you make enough of these things,
01:31:05then the errors, they spiral out of control.
01:31:08It's a really, you know.
01:31:09- Okay, so you're telling me that the fucking M6 chip
01:31:12in the MacBook Pro is not gonna be a quantum one.
01:31:15It's not gonna be the Q6 chip.
01:31:17- It's not.
01:31:17I was, now I wanna know what QAI is.
01:31:20What is QAI?
01:31:21You mentioned QAI.
01:31:22- QAI, quantum AI.
01:31:24- Yeah, but I mean, is there a particular product
01:31:26or just people talking about quantum's
01:31:28gonna just make AI better?
01:31:29- Yes, yeah, there is.
01:31:31I have a friend who I train with.
01:31:34This is like, you know what I love?
01:31:35Some of the people that I love the most
01:31:38are the ones who you wouldn't predict
01:31:40have the life that they do.
01:31:42And there's a girl who trains at Lyft ATX on a Saturday.
01:31:46Lovely girl, I've trained with her a bunch of times.
01:31:48Real cool, boyfriend's cool.
01:31:50Like, does fitness modeling, super hot,
01:31:52the long hair lift, the big, you know,
01:31:55but super strong, all the rest of the stuff.
01:31:56Like, feminine as well.
01:31:58Quantum computing degree.
01:32:01Like, works in quantum computing.
01:32:03And she was telling me about quantum AI.
01:32:04And she was telling me about QAI as it's referred to.
01:32:07And it's a burgeoning field, supposedly.
01:32:09Unless she's lied to me.
01:32:11Unless she's totally fucking lied to me.
01:32:12- Yeah, I'm curious what they're working on.
01:32:14UT Austin has good quantum theorists.
01:32:18Look, I'm searching for it.
01:32:20A guy I knew from MIT, they hired him away there.
01:32:22Let's see, quantum, quantum AI.
01:32:26Merges quantum computing with machine learning to process
01:32:28high-dimensional data faster than classical systems.
01:32:31Now they're working on it, but I don't know,
01:32:34I don't know how that's gonna work, basically.
01:32:38So I don't know what they're working on,
01:32:39but it's not something that you hear a lot
01:32:41in computer science circles yet.
01:32:42So maybe they'll have some breakthroughs.
01:32:43It's worth looking at,
01:32:44but I don't know how that's gonna work.
01:32:46- Okay, one of the other elements, I guess,
01:32:50that people struggle with when it comes to deep anything
01:32:55is learning, the process of learning.
01:32:58Talk to me about the mechanics
01:32:59of keeping a deep reading habit alive.
01:33:04- Well, I mean, I think reading pages
01:33:07is probably the cognitive equivalent of steps, right?
01:33:10So if you're a 10,000 steps a day person,
01:33:12as like, this is just like a baseline to make sure that like,
01:33:15at least my physical systems are being used,
01:33:17you should have a page count.
01:33:1925 pages a day, 20 pages a day of reading a book
01:33:23just as like getting those cognitive steps in.
01:33:26Because I think we recognize more and more,
01:33:29reading, I would say it's the cheat code,
01:33:31but it's better to think about it as like,
01:33:33reading is the thing that formed the modern brain.
01:33:36And I'm like, I'm more and more convinced about this.
01:33:39I have a book idea I'm working on now,
01:33:40where I'm sort of exploring this idea.
01:33:43The brain before we had the Neolithic revolution,
01:33:47it was the same neurons, right,
01:33:5015,000 years ago, that we have right now,
01:33:52but if we go pre-reading,
01:33:54those neurons were doing the things they were evolved to do,
01:33:56which is very much about like the visual system
01:33:58and the audio system,
01:33:59and we could communicate through spoken language,
01:34:01and that's fine.
01:34:02And then we invent reading.
01:34:04This is not something that our brain has evolved for.
01:34:06So in order to read, we have to go through this,
01:34:08this sort of excruciating process of learning to read,
01:34:11in which what you're doing is actually rewiring sections
01:34:14of your brain to connect in ways
01:34:15that they weren't originally meant to connect to.
01:34:17So we're reforming our brain when we learn how to read.
01:34:20And we develop what Maryanne Wolf
01:34:21calls deep reading processes,
01:34:23where you've now yoked together different parts of your brain
01:34:27that don't normally work together,
01:34:28that can now have to work together
01:34:30in order to understand written text.
01:34:33Once your brain is wired to do that,
01:34:36then, if you reverse this and write,
01:34:38you can generate much, much more sophisticated thoughts
01:34:41than you can if you haven't done this wiring,
01:34:43and your understanding of things,
01:34:45the complexity of what you can understand
01:34:48when you have this new rewired brain,
01:34:49that also really goes up.
01:34:51So reading is like, it's not just,
01:34:53oh, I get stronger in my brain.
01:34:55It reconfigures your brain into like the modern,
01:34:58you know, post cognitive revolution brain.
01:35:01- Okay, why is it important to read physical books
01:35:05then? What is lost if I read Substack?
01:35:08I know that you're a fan of Substack.
01:35:10I love Substack, I think it's fantastic.
01:35:12What's the difference between reading it on a laptop
01:35:16versus a phone versus a Kindle
01:35:18versus a physical piece of paper?
01:35:20- Well, there's two different things going on here.
01:35:21There's medium and content type.
01:35:24Like, so if you're reading a book in a physical book,
01:35:27or you're reading in a Kindle, doesn't matter, right?
01:35:30I mean, they're both an actual physical medium.
01:35:32Like the Kindle is actually a physical experience,
01:35:36it's actual little discs that are, you know,
01:35:39dark on one side and light on the other.
01:35:41And they make a page, they have little electrical impulses
01:35:44and you shock the disc you want to turn
01:35:46and you don't shock the ones you don't want to turn.
01:35:48And so you've literally created an actual black and white
01:35:51physical version of the page on the Kindle.
01:35:53It's not like a computer screen or a TV
01:35:55where it's light being emitted.
01:35:57There's no light being emitted.
01:35:58It's physically that's the page.
01:36:00It just created a new physical page that has text on it.
01:36:02That's why you have to actually have a light
01:36:04on a Kindle to read it.
01:36:05So it's just a page that reconfigures itself
01:36:08into a new page.
01:36:09I love e-ink technology.
01:36:10I think it's really cool.
01:36:11Content type, the issue is, I mean,
01:36:13there's a lot of this research we've known since the 90s.
01:36:14A lot of this is captured in the best book on this
01:36:18would be The Shallows, Nick Carr's book, The Shallows.
01:36:22When we're reading something like a webpage or substack,
01:36:25for whatever reason, we skim much more aggressively.
01:36:28That's the main issue.
01:36:28We jump around much more aggressively,
01:36:31just trying to pull out the key points.
01:36:33I think that's all just acculturated, right?
01:36:36Like you could sit and read,
01:36:38like if you print out a substack article
01:36:40and sit in the library and you read it carefully,
01:36:42it's the exact same thing as reading a book.
01:36:44It's the exact same thing in sense of the experience.
01:36:47On screens, we tend to skim around more.
01:36:49The other advantage of like a book
01:36:51that was actually published versus like a post you see online,
01:36:55it's just better thought through, right?
01:36:57So when you write a book, you spend a couple of years on it.
01:36:59Like you spend a couple of years crafting the book
01:37:02and you might've been based on a lifetime
01:37:05of thinking about this topic.
01:37:07And so you take your time when writing a book
01:37:10and it gets edited and re-edited and you go back.
01:37:12Like I'm writing a book now.
01:37:14I've been working on it off and on for like three or four years.
01:37:16I've rewritten this book like three times.
01:37:18It's like, this isn't right, this isn't clear enough.
01:37:21And so when you go through text
01:37:24that has been that carefully thought through and structured,
01:37:26that's also, you just get a different experience
01:37:28'cause the pieces click together at different scales
01:37:31and it just uses, you build in your brain
01:37:34these intricate interlocking pieces
01:37:37that all hook together and is beautiful
01:37:39and you get that aha moment feeling.
01:37:40There's an actual physical endorphin rush
01:37:42you get in your brain.
01:37:44So I think reading smart books written by smart people
01:37:47that took a long time to write,
01:37:50that's your calisthenics for your brain.
01:37:52It literally changes.
01:37:53You're a smarter person if you do that versus if you don't.
01:37:56- So good. I have to say, reading full-length books,
01:38:02the volume at which I do that has decreased
01:38:07over the last few years, largely because of Substack.
01:38:10So there's an extension for Google Chrome
01:38:13called Push to Kindle.
01:38:14And if I press it, the article appears on my Kindle
01:38:18because I don't like reading on my phone
01:38:20and I don't like reading on my laptop,
01:38:21probably for the reason that you said.
01:38:24But when I think about it, it very much is running downhill
01:38:29because what's the longest Substack that you're gonna read?
01:38:3220 minutes, maybe?
01:38:3425, 25 minutes, a fucking long article.
01:38:37And maybe part of that, maybe part of my penchant for it
01:38:42is that I do get the outcome, right?
01:38:44What is it that I'm looking to learn?
01:38:46Oh, I wanna find out from Steve Stewart-Williams
01:38:50about sex differences in desire for sexual novelty,
01:38:55something like that.
01:38:57Okay, well, I will learn the outcome in the same way
01:39:01as I could feed myself food
01:39:03that was just a cube of calories
01:39:05and that would sort of give me the caloric intake
01:39:08that I needed.
01:39:10But what you're presumably reading for,
01:39:13apart from just the enjoyment of reading it,
01:39:15is to be able to recall it and for it to be woven
01:39:17into the broader mental landscape that you've got,
01:39:21which actually probably means you need to spend time
01:39:23and attention with it.
01:39:24And some of the leanness and brevity
01:39:27that comes with an article actually might work against you.
01:39:31Maybe you need it to be said to you in five different ways.
01:39:34Maybe you need the author to meander off onto a story
01:39:36that takes three pages to explain about this guy
01:39:40who owned a Ferrari and parked it outside of a hotel
01:39:43so that you can then come back in.
01:39:45And each one of these is a little Velcro latch hook
01:39:49that you can hook yourself into.
01:39:51And yeah, I wonder whether the reading
01:39:55or discriminating toward reading stuff
01:40:00that is exclusively shorter form results in the sense
01:40:05that I am learning lots,
01:40:07but if you are to actually do some sort of scrutiny
01:40:09around that, well, okay, how much of it can you remember?
01:40:13How long did you spend with this idea?
01:39:14Did you spend long enough for it to become
01:39:16a part of your mental models
01:40:20and frameworks? How much can you recall?
01:40:22That would be an interesting challenge.
01:40:25- And the frameworks of understanding are shallower
01:40:26just because it's less time to establish them.
01:40:29So like in a Substack, it's not a bad thing,
01:40:32but what can you do?
01:40:33You typically have like one idea
01:40:35and like here's something that supports that idea
01:40:38and here's maybe like a different idea
01:40:40and here's why that doesn't work.
01:40:42And if that's all you're consuming,
01:40:44that becomes your mental model for how knowledge is gained.
01:40:47And I think we see a lot of this,
01:40:49I mean, think about internet culture now
01:40:51is much more conspiratorial.
01:40:54And I don't just mean in the grand
01:40:56conspiracy theory sense, though there's that too,
01:40:59but in the confidence.
01:41:02There's this quick jump to confidence where you're like,
01:41:05that's wrong because of this and boom.
01:41:09And you think that like, this is like the slam dunk case
01:41:11or something like that.
01:41:12That's a result of not reading a lot of books.
01:41:14You read a lot of books, you're like,
01:41:16okay, this is way more complicated.
01:41:19- Everything is way more complicated
01:41:20than you thought it was.
01:41:21- And there's probably a clear truth here,
01:41:23but clear truths are more complex.
01:41:24Like even the notion of what a clear truth feels like
01:41:28comes out of reading books, right?
01:41:30Like you understand, oh, ultimately like this person
01:41:33was right, but it's complicated.
01:41:35And like, yeah, this was not so clear cut
01:41:38and this is like a compromise and this was really important.
01:41:41And these factors were here, but honestly,
01:41:42those factors aren't as big as you think.
01:41:44And this factor really was more important.
01:41:46And so like, this really was the right thing to do.
01:41:48So even like your notion of what's true or what's not true
01:41:52or what it means for something to be clear is like different
01:41:56than if you're just looking at boom, slam dunk.
01:42:00I think it's a big problem online,
01:42:02both sides of the political spectrum do this.
01:42:03Like you want everything just to be,
01:42:06this person is just garbage and completely wrong.
01:42:09And there's like this one simple thing I know
01:42:12that means you're completely wrong and I'm completely right.
01:42:14And you're wrong in like the worst possible sort of way.
01:42:16And that is like such a sopholific,
01:42:19I'm saying the word wrong.
01:41:20- Sophistic.
01:42:22- Yeah, exactly.
01:42:22You said it, right?
01:42:23I have to read more, but it's sophistry for sure, right?
01:42:27This idea of this is how truth and argument unfolds
01:42:31is like there's an obvious flaw that's easy for me to grok,
01:42:34which I guess now could actually be a verb
01:42:36as opposed to just meaning to understand.
01:42:38Also, I could literally grok it, I guess.
01:42:40And now it's clear that you're wrong and I feel righteous.
01:42:43And then we go seeking that.
01:42:45And then we want to simplify everything in the world to,
01:42:47you're just terrible and this person is perfect.
01:42:50And this idea makes the most sense.
01:42:52And if you disagree with this idea,
01:42:54it's because like you want to eat children.
01:42:56And it just becomes a different understanding.
01:42:59This is what I think we get wrong.
01:43:01It's not just like we don't have the right information.
01:43:05We've changed what our notion of truth is
01:43:07because we're not exposed to the complexity of truths
01:43:09when you read not only a scholar, like a smart case for it,
01:43:13but then you read the arguments that they confronted.
01:43:14And then you read someone else
01:43:16that's arguing against their point.
01:43:18And you're like, oh, okay, I've seen the clash of minds.
01:43:21And now in that clash, I kind of see what's going on here.
01:43:26Like, yeah, the truth really leans this way.
01:43:29And I feel really real conviction in that
01:43:31because I've seen the best minds come at this
01:43:33from either side and I really understand.
01:43:35It's not cut and dry, but ultimately,
01:43:37like this is the right thing to do.
01:43:39That was like a very familiar thing
01:43:41to people and leaders like in times past
01:43:43and you lose it if you're exposed
01:43:45to these low resolution copies,
01:43:50these low resolution simulacrums,
01:43:52these easy to digest pre-chewed versions
01:43:55of argumentation and understanding.
01:43:56It just changes the way your brain thinks
01:43:58about what true even means.
01:43:59- Yeah, there's an arc to sense making
01:44:02that you kind of need to track.
01:44:04And if you don't track it,
01:44:05you just assume that answers appear.
01:44:07It's like, no, no, they don't.
01:44:08Cal, you fucking rule, let's bring this one home.
01:44:11Why should people go to keep up to date
01:44:12with everything you do?
01:44:13- Oh God, calnewport.com, I guess.
01:44:16My books are on Amazon.
01:44:17My podcast, Deep Questions on YouTube
01:44:21or wherever you get podcasts.
01:44:22Newsletter at calnewport.com.
01:44:24Deep Work, too many things going on now, Chris.
01:44:26Deep Work is 10 year anniversary.
01:44:28I'm excited about it.
01:44:29All new. I replaced all the blurbs on the back.
01:44:32Most of them are now organic,
01:44:34from people who said things about it
01:44:36without me asking them to say it, so that's fun.
01:44:39And I have a masterclass out on this stuff, too.
01:44:42So I don't know, it's everywhere.
01:44:44Too many places, I feel too busy.
01:44:46- For a person who's a digital recluse, you are everywhere.
01:44:49But that's a function of focusing on quality, not quantity.
01:44:52I can't wait to speak again, man.
01:44:53This has been so much fun.
01:44:54I appreciate the help.
01:44:55- Always a pleasure, Chris.
01:44:56Always a pleasure to talk with you.
01:44:58- Congratulations, you made it to the end of an episode.
01:45:01Your brain has not been completely destroyed
01:45:03by the internet just yet.
01:45:05Here's another one that you should watch.
01:45:07Go on.