Stanford Graduate School of Business · 1.9K views · 0 likes
Transcript
[music] Thank you, everyone, for coming. Derek Thompson is a writer, podcaster, and author. He co-authored Abundance with Ezra Klein, which is a number one New York Times bestseller. My family loves The Atlantic. I followed Derek from The Atlantic to his Substack, where he continues reporting on issues that we all care a lot about at the GSB: the future of work, AI, culture, how technology is shaping society. I also listen to his podcast, Plain English, which does exactly what it promises. It takes really complex ideas and distills them down in plain English. So we're very lucky to have him here tonight. He flew out here to be with us. So, thank you so much for coming. >> It's great to be here. Thank you. [applause] >> All right, let's start big picture. >> Okay. >> The stock market is booming, but it feels like the economy is getting weaker. >> Prices are high. Young people are having trouble getting jobs. They're having trouble buying homes, and it feels like we're living in what you call two economies: the AI economy and the everything-else economy. So this room is full of people who are going to go into the job market. What is the future state of play? >> Very hard to predict the future unless you're going to extrapolate from the edge of the present. And the present looks like two different economies. If you look at equity values, I think between 70 and 80% of equity growth in the last three years has come from stocks that are related to AI. So from an equity standpoint, we're living in an AI economy. And it also seems like, from a GDP standpoint, we're living in an economy that is unusually inflected by one industry, in this case artificial intelligence. The amount of money that is going into AI is completely unprecedented in terms of any private sector project in American history. To give you a sense of its scale: the Apollo program took 10 years and $300 billion, in inflation-adjusted dollars, to put a man on the moon.
This year, the hyperscalers are projected to spend $700 billion in capital. So you're talking about one Apollo program every five months. There's nothing like that in American history, really, unless you go back to the middle to the end of the 19th century, which was the railroad buildout. The railroad buildout contributed an enormous share of GDP for the US economy. And on the one hand you could say, well, if AI is the railroad: railroads changed the world. They changed the economy. They changed our perception of time. They created time zones. They created the very concept of time travel. James Gleick, in his book Time Travel, says that there's really no record in human history of anyone thinking about moving backward or forward in time in literature until the invention of the railroad, and The Time Machine by H.G. Wells, published in the latter half of the 19th century, was maybe the first mention of any forward movement through time in the history of literature. So you're talking about a technology that completely changed our sense of space and time. Maybe AI will be just like that. And that makes it seem like this is not going to be a bubble, if you just stop the sentence there. However, if you go back to late 19th-century history, the railroads were not a bubble. The railroads were three or four separate bubbles. There was a panic in the 1850s. There were panics in the 1870s. There was a panic in the 1890s. There's the panic of 1907. These panics weren't just thought of as recessions. They were called railroad depressions, because railroads were such a huge part of not only the economy but also the lending picture, so that when you had a bust in one railroad, you had a bank crisis that often took down the economy. I'm not predicting anything specific here. I'm just saying we're looking at a single sector driving both equity growth and overall economic growth in a way that might be unlike anything we've seen in the last 150 to 170 years.
And rather than give us a sense of, oh, once we have the historical analogy we know exactly how this is going to go, I'd prefer to say this should give us an enormous amount of humility. If something's happening that only happens once every century and a half, there is definitionally no roadmap. So it's an exciting time for someone like me who writes about it, but it's also a nerve-wracking time, because the bets that are being placed in this one industry are absolutely astronomical. >> You front-ran me a little bit on the bubble. >> Sorry about the front-run. >> And we're going to get there, because that's what everyone wants to know about. But you often talk about the scariest chart in the world, which is: since ChatGPT shipped, job postings have fallen by a third while the S&P rose by 75%. Which is historically quite unusual. >> Yeah. We've never seen that kind of decoupling. >> Yeah. >> And so if you just look at this graph, which makes it look like ChatGPT comes out and the stock market does this and the labor market does this, it's like, oh my god, this is the moment that Marx warned us about. This is capital taking over for labor. The chart looks scary. I don't think it is as scary as it looks. And the reason is that one of the biggest problems with really understanding what AI, or ChatGPT, has done to the US economy is that the release of ChatGPT is not the only significant macroeconomic event that happened in the final quarter of 2022. The other thing that happened in the months just before that is that the Federal Reserve started raising interest rates, and the Federal Reserve jacked up interest rates faster than it's ever raised interest rates, I believe, in the history of the Fed, which itself dates back, I think, to about 1913.
>> So it's very hard to say: are we looking at a phenomenon that is overdetermined by the fact that the AI hyperscalers are seeing enormous returns to their legacy businesses at the same time that the Federal Reserve purposely cooled off the economy? I think that's very likely what's happening. Or are we seeing the beginning of this decoupling of labor and capital? It's possible that we're seeing the latter. It's possible that we're seeing this, you know, this Marxist moment. I think what's more likely happening is that what looks like Sam Altman destroying the labor market and sending stocks to the moon is really stocks going to the moon because a handful of hyperscalers have incredibly powerful underlying legacy businesses that are allowing them to spend $700 billion on AI, at the same time that the Federal Reserve did exactly what it intended to do: jack up interest rates, raise the price of money, cool off the economy, slow down the pace of hiring. That's exactly what's happened. >> Okay. And your opinion has waffled a little bit over the past year. A year ago, you wrote about weak hiring of young grads and how that could indicate AI disruption. And so young workers 22 to 25 in highly AI-exposed jobs experienced a 13% decline in employment since ChatGPT launched >> according to >> according to Stanford >> a team of Stanford economists. Yes. >> Yes. >> So I'm going to >> which isn't to say they're wrong. [laughter] Erik Brynjolfsson, who's one of the lead economists, is I think one of the more trusted economic voices on AI and on many leading technologies. Sorry, I interrupted your question. I'm going to let you finish your question and not front-run you again. >> You know where I'm going, [laughter] but your thoughts on this have ranged from possibly this is AI disruption, to definitely, to almost certainly no, to plausibly yes. So where are you right now? >> I'm actually back to: I have no freaking idea.
The last Substack piece that I published is called Nobody Knows Anything. And "nobody knows anything" is partly me giving up on the prognostication game, but literally those three words, "nobody knows anything," are a famous quote by the screenwriter and author William Goldman, who wrote All the President's Men and Butch Cassidy and the Sundance Kid and a personal favorite, The Princess Bride. He wrote both the book and the screenplay for the film. And William Goldman had an autobiography about his time in Hollywood, and the first three words of that autobiography are "nobody knows anything." That was him describing the degree to which Hollywood prides itself on being able to predict the future of hits. But nobody knows anything. No one can predict the future of hits. Just because you happened to greenlight some movie that was successful five years ago is not remotely predictive of your ability to do that five years from now. And "nobody knows anything," in the context of my essay, was about macroeconomic predictions related to artificial intelligence. Nobody knows what this technology is going to do to the economy. And even if you ask the simplest possible question, which is, where can we see artificial intelligence in the labor market today? [clears throat] Nobody knows anything. >> You could on the one hand say unemployment is still 4.7%: where is AI? Unemployment has declined, or job hiring rates have declined, linearly since the Federal Reserve jacked up interest rates: where's the presence of AI? A lot of different think tanks and a lot of different professors have looked into this question of, okay, if we zoom in, and this is what the Stanford professors did, if we zoom in at the most AI-exposed, and I wonder if one of the co-authors is here in the room. Sorry, if you are, or know one of the co-authors, feel free to correct me in the Q&A.
If we zoom in, using private-sector ADP data, on 22-to-25-year-olds in the most AI-exposed occupations, like software developer, can we detect a distinction between the hiring rate post-ChatGPT and pre-ChatGPT? And they said the distinction was 13, 14% >> 14%. Yeah. >> But then a lot of other economists looked at that paper and poked a ton of holes in it and said, "Nope, you still haven't ruled out all these other possible confounds. It's possible that nothing is happening here." So I don't know. I don't know the answer to one of the most basic questions. If AI were ravaging through the economy, it would likely be doing something to the labor force. What effect is it having? The smartest economists in the world: you get them together in a room, you say, "We're closing the door until you reach a consensus." They can't reach a consensus. So "nobody knows anything" is my way not of throwing up my hands and saying, "I refuse to make any prediction." It's my way of saying, you know, every few days it seems there's some viral post on Twitter that moves markets by a trillion dollars. But these posts, like the Catrini post that you all are certainly aware of, or at least most of this room is certainly aware of, that's a piece of science fiction. It's literally a story about the future. It's a piece of science fiction. And I compared it to 1938, when Orson Welles performed H.G. Wells's The War of the Worlds on the radio and a bunch of people called in to CBS radio and thought that Martians had literally landed in New Jersey and were liquidating people, because they thought it was so real. Not since then has a science fiction story spooked human beings like this. The fact that the marketplace of analysis for AI is a marketplace of science fiction is a very, very good sign that the marketplace of non-fiction is not working out for us. Right?
We don't have a very clear indication from economists of what AI is doing to the economy. And I am as frustrated by this as the folks in this room looking for a job. >> And so the company Block laid off 50% of their workforce this week. Do you think that CEOs are dressing up layoffs as AI? >> Of course. >> Or do you think it is AI? >> I've gone long on a lot of my answers, so I'll go short on this question. >> Okay. >> Is it possible that Jack has figured out how to triple the productivity of a decimated labor force? >> Yes. Is it possible that Square, now technically Block, having lost 85% of its market cap three years ago and having seen a stagnating stock price for the last three years, might think that what its investors want is some show of force, decimating the labor cost of the company to theoretically raise the profit? Also an incredibly plausible interpretation. My bet is toward the latter. >> Seems like an easy thing to hide behind for CEOs, >> especially if you have a company whose stock price is down 80% and hasn't moved in three years. >> I would be much more impressed, so to speak, or scared, if a company whose stock had tripled was reducing its labor force by 50%. >> Right? >> That seems to me like it would be an indicator. But a company that struggled to regain the profit momentum that it had during the end of the pandemic, >> that's a very different story. >> So, something that I think about a lot: if junior work, which we all know as grunt work, which we all did a lot of, is what builds judgment, and that work is the most easily automatable work, how do we build durable companies for the future without training the next generation of talent?
So we're stepping into a science fiction scenario that might be plausible, right? In a world where other companies did to [clears throat] their workforces what Jack did to Block, what Elon did at Twitter, >> and said, we're just going to mush, mush, mush all of our employees like they're Iditarod dogs, and we're going to use Claude Code to essentially attempt to triple the productivity of every software developer. In a world like that, where entry-level employment was decimated, as Dario Amodei has predicted it could be decimated, I have enormous fears for the economy at the level of, you know, human decency. I don't want people to lose jobs. And, as you said, at the level of workforce development. I'm lucky to be able to write about a lot of different things. I write about AI. I write about the economy. I write about media. But I got my training as an economics reporter. I got my job at The Atlantic in 2009, when the most important story in the world was the Great Recession. And I didn't take any economics classes except for one in college. So I woke up every morning not really understanding the world. And what I would do is, I would have an idea in my head, like, oh, I want to talk about how to create jobs when the unemployment rate is 9%, and I would call economists and I would write down what they said, and at 3:00 I would start to write my article. But I would remember my questions about the subject from 9 a.m., when I didn't know anything, before I'd had these conversations with the economists. And I sort of conceptualized my job at The Atlantic as writing, at 4 p.m., an article for my 9 a.m. self, right? And that's where my identity as a writer came from. That's what Plain English is all about. How do we explain complicated ideas to people who are smart, but are 9 a.m. smart, not 4 p.m. smart?
>> So I got amazing training in my career from doing something that was entry-level. And I'm incredibly concerned about any economy where the corporate ladder is sawed off at the bottom >> and you have this crisis of young people not being able to develop the skills that are needed to be middle managers and leaders of companies. I think that would be an enormous tragedy at the human level and the corporate level. >> Yeah, I see a lot of private equity friends in the room who are unfortunately going to be checking the assumptions in those models for a long time, because they're not going to have analysts to do it for them. >> So, >> well, they'll have Claude. >> That is true, but they have to check Claude. So, given the potential for this job displacement, the million-dollar question is: what is the role of regulation in job protection? And do you think that Washington is paying attention? >> Washington's not paying attention, but Washington is pretty good, not great, but pretty good, at quickly responding to absolutely obvious crises. Washington is terrible at solving boiling-frog problems. So a problem like the deficit or a problem like climate change is much harder to solve than a problem like a pandemic. Pandemic preparedness: very hard. But once you have a pandemic, once it's March 2020, the speed with which we set up PPP, the Paycheck Protection Program for businesses, the speed with which we started sending out checks >> to Americans, was government [clears throat] at hyperspeed. I mean, Operation Warp Speed, which was spun up in just a few weeks, might be the most successful pound-for-pound government policy in modern history. Millions of lives were saved by an expenditure of like $30 billion. I'm not the biggest Trump fan in the world, that's safe to say, but Operation Warp Speed was an amazing accomplishment. It was accomplished in a matter of days. Mhm.
>> So my fear is that Washington isn't going to pay attention until a catastrophe is obvious. >> What is a catastrophe in this scenario? >> I mean, the Catrini scenario is a catastrophe, right? Unemployment rising quickly month after month, something like the Great Recession. Right now, unemployment is 4.7%. If unemployment went up over 6%, I think you would have a freakout start. And I think that what the government would probably do is pad out the tools they already have, like unemployment insurance. I know people in the Bay Area and people in California love to talk about universal basic income. >> Um >> Not at the business school. >> Maybe not at the business school, right? Exactly. Maybe that's a couple doors down. [laughter] Okay, enough said. That's not going to happen. You're not going to have universal basic income. What you're going to have is something much more like a policy that already exists, like the earned income tax credit being buffed up, or unemployment insurance being buffed up, or topped off, essentially. You have an unemployment insurance formula that essentially says, we're going to replace this percent of your income. But then, if it looks like unemployment is going to be structural because of AI displacement, they'll say, no, we'll replace not 40% of your income but 80 or 90, indefinitely, and we'll extend unemployment insurance so it's not just 30 weeks but 60 weeks, 90 weeks. That's much more likely. I don't want to go too deep into policy details unless people are really interested. But in a Catrini-style catastrophe, you're much more likely to see a program like unemployment insurance expanded than a program like universal basic income stood up out of nothing. >> Yeah, I'm concerned about them not paying attention. >> And I'm concerned for reasons that have nothing to do with the economy.
I mean, if you have a superintelligence that's good enough to displace 20 million workers, that superintelligence is probably very good at doing a lot of other stuff that we can't even imagine. The Leopold Aschenbrenner memo from several years ago, Situational Awareness, which was so prescient, made this incredibly scary point: once you have a superintelligence that can essentially act as a kind of digital nuclear bomb, that can take down the grids of foreign governments, that can essentially, you know, spin up bioweapons, now you're talking about something that's basically a nuclear weapon. How did we build the atomic bomb? We built it in a very tightly controlled, top-secret, feds-only program. We didn't let the private sector just go willy-nilly on the atomic bomb. We didn't say, "Hey, Ford, GM, in addition to making all the tanks, if you could try to build an atomic bomb in Detroit, just give us a call once you think you have the uranium." That didn't happen at all. And so the idea that we're allowing the private sector to build something that could turn out to be a national security weapon could theoretically lead to stuff that's just crazy, like the immediate nationalization of the frontier labs. That's not out of the question if what we're fundamentally dealing with is a modern Manhattan Project. So I think unemployment insurance is important, employment is important, the economy is important. But if this technology passed a threshold where it was doing the Dario Amodei prediction of displacing 20 million workers, it's probably doing a lot of other [ __ ] that's so scary that we can't even imagine the level of creativity of policy response that is necessary to counteract it. >> Yeah. But Anthropic published a constitution and their values are wonderful. So I'm sure they're not doing anything dangerous. >> You know what's really crazy?
Amanda Askell, who was the author of the constitution, was my roommate briefly in New York, because she used to be married to a man named William MacAskill, who's a founder of the effective altruism movement. I was living in a loft in New York City with six roommates, and when my friend left to start a company in Africa, he said, "I'm going to ask one of my nerdy EA friends to come live in my room," and that was Will and Amanda. So I've spent a weirdly large amount of time talking to the author of Claude's constitution about general philosophy, and all I can tell you is that she is very, very nice, [laughter] >> as are Claude's values. >> I see one of our deans in the audience, and so I'm going to throw you a bone. You've written about the end of thinking, and the school is obviously very concerned about cognitive decline in students and people as we're learning how to use AI. So: are we eroding what makes humans economically valuable by becoming AI-literate? >> [clears throat] Your last question asked me to step into a science fiction scenario, where I was imagining implications of something that isn't already happening. I want to be clear that this question is asking me to talk about what is absolutely already happening. >> Yeah. >> Let's start at the lower level, educationally speaking, and move up. The level of cheating at middle schools and high schools right now >> is off the charts [snorts] and everyone knows it and no one knows what to do. >> The level of cheating at undergraduate levels is off the charts, and everybody knows it and nobody knows what to do. And the professors, who are generally left-leaning, have decided that because Silicon Valley is bad, period, it can't make anything that's actually useful. >> And therefore, we're not even going to attempt to pull artificial intelligence into the way that we teach this class.
We're just going to pretend it doesn't exist. >> Yeah. >> That's an absolute recipe for disaster. And I am constantly, in my own working life, trying to be mindful about not outsourcing to artificial intelligence the part of work that is thinking, the part of writing that is thinking. Because once I let all of the hard thinking go to the machine >> it's just like going to the gym. If you don't do the reps, how do you expect to build a muscle? And there was a lovely distinction that I saw made recently, which I included in one of my recent pieces: you can think of different tasks in life as being akin to either a gym or a job. At a job, the point is to get the work done. >> At a gym, the point is to do the work. It doesn't make any sense to say, I went to the gym and I asked Joe to lift all the dumbbells. Even if he lifted more than you've ever lifted, there's no sense of accomplishment that's like, "Wow, the 50-pounders, we really got it today, didn't we, Joe?" You didn't get it at all. Joe got it. And so I think it's really important for people to internalize that principle for their own life, at the level of parenting. I'm the father of a two-year-old and a two-month-old, and I've definitely asked ChatGPT some questions about that. At the level of parenting, at the level of relationships, because, I mean, these tools are absolutely being used as ersatz therapists. My wife's a clinical psychologist, and I know how she feels about this. At the level of work, I think we really need to be thoughtful about where is the gym and where is the job. What is the work that, when I outsource it to ChatGPT, merely accelerates that which has to be done? And what is the work that, when I outsource it to ChatGPT and Claude and Gemini, deprives me of the reps that I need to get stronger?
And I might not feel that proverbial muscle atrophy today or next week or next month. But the same way that if you just let Joe do your entire workout regimen for a year, you will be weaker a year from now, if we as a society let these tools do our deep thinking, we might not see those effects in a month, in a quarter. We will absolutely see them down the line. And the last thing to say about this is that, to a certain extent, this is already happening. If you look at American literacy and numeracy test scores for fourth and eighth and 12th graders, they're all going down. I blame smartphones. Other people blame other things. It doesn't even matter what you blame. The results are the results. But the fact is that we are collectively getting dumber at the same time that we invent a superintelligence. And so we talked about that decoupling in the graph, of equities going to the moon and hiring going down. That might be an illusion. What's not an illusion is that AI is getting smarter and a lot of us are getting dumber. And I see no way that that ends well. If we're essentially offloading our cognitive capacity to these machines, I do think we lose something, both at the level of intelligence and at the level of humanity, because our ability to engage in deep thought, to engage in thought that feels hard and brings about genuine insight, that's a huge part of being a thinking person. If it all goes to the machines, that sucks. >> So maybe we win the AI war, but we become functionally illiterate. >> Yeah. And I thought about this a lot. I wrote a cover story for The Atlantic called A World Without Work nine years ago. I hope it holds up, but it basically was taking seriously predictions like Erik Brynjolfsson's that eventually technology is going to be good enough that it's going to throw a lot of us out of the labor force, and we're going to have to figure out what to do with ourselves.
And there's this great book that I found by a German philosopher named Josef Pieper, and it's called Leisure: The Basis of Culture. And Pieper believed that leisure was, it's right there in the title, the basis of culture. But, he said, the West doesn't understand what leisure is. We think that leisure is turning off our brains. We think leisure is television. We think leisure is leaning back in our chairs and falling asleep. He said, "Leisure is an active, playful engagement in an activity that is not economically necessary. >> That's leisure." And he has this other lovely idea, that the word school comes from the Greek word for leisure and play. That we used to teach people to play, and now we can only teach them to work. >> [clears throat] >> Now, I went to journalism school at Northwestern to be a journalist. So I am as pro-pre-professional-education as anybody in this room at business school. But I think there's something really powerful about this idea that in a world where we find that we're no longer the smartest entities on the planet, and the machines are smarter than we are, where do we get our humanity? Where do we get our purpose? I'm interested in reviving this idea that Pieper was talking about, that there might be something important about rediscovering leisure as a human value in a world where there's a little bit less work. >> Well, if you want to discover leisure, you could just hang out at the GSB for a little bit. [laughter] Let's talk about the bubble. Everyone wants to know about it. I don't expect you to have all the answers, but help us think through it so we can be informed. So there's a major gap between what companies are spending on infrastructure and what they're bringing in in revenue. Are there historical examples that can help us understand how this plays out? >> Literally every general-purpose technology has passed through a phase of being a bubble.
>> And there's a famous theory for this. The idea is basically that if you are in the process of summoning a general-purpose technology, like the railroads or the internet, so many different actors are going to be excited about that infrastructural buildout that you get a level of excitement spread between actors who can't coordinate on the amount of revenue that will arrive in time to pay off the capital expenditure. So there's just a general theory that general-purpose technologies create bubbles. The best reason to think that we're in the middle of a bubble is that everything that is this big is a bubble; that we are still participants in, still residents of, history. As long as history continues, that's the historical reason to think this is a bubble. The more quantitative reason to think it's a bubble, and you did a great job of outlining the general principle, is that capital expenditures are $400 billion, rising to $500 billion, rising to $700 billion a year, while external revenues from generative artificial intelligence, not just one company paying another company within the ecosystem, are rising from maybe just $10 to $20 billion. Well, the gap between 700 billion and 20 billion is, you know, a lot of billions, about 680 if my math is right. So that's a huge gap. >> Yeah. >> The capital expenditure is just way too big for the revenue to catch up. The most technical reason to think it's a bubble came from my conversation with Paul Kedrosky, who's an investor and writer in the space. >> Great podcast. You should listen to that episode. >> Yeah. I think it was my most popular podcast of last year.
He said, "Look, the reason I think we're in a bubble is that you look at what these companies are spending all the money on. They're spending the money on GPUs. They're spending the money on chips, and they are paying off these chips in their earnings statements with a depreciation schedule of about six years." So you buy, whatever, $100 of chips, and you can say, "We're spending that $100, and we're allocating it over the next six years." If the chips run out of utility in the next three years, because you have to reinvest in a new generation of Nvidia chips, then you have to re-up that spending twice as fast as your depreciation schedule, which means that six years from now you're going to have to report on your earnings statement the fact that capital expenditures are absolutely devouring quarterly profit. At which point the market's going to say, "Okay, OpenAI, Anthropic, Microsoft, Meta, whatever: your quarterly profits declined by 70%, let's say, in the last few years. There's no way you can justify this current valuation." And then what you get is a selloff. And then what you get is a policy change, where the rate of chip buying slows down, because companies realize they can't afford this investment. And if the rate of chip buying slows down, then Nvidia goes down. And [snorts] if Nvidia goes down, you basically have the entire stock market correcting by 30%. And that's the beginning of this [clears throat] financial and even industrial bust, because you would get even data centers going dark. So that was the original argument for why this is a bubble. And the reason that I've gone from maybe 65% this-is-a-bubble to like 40% this-is-a-bubble is that after Claude Code came out, and I realized what these agents were capable of and just how many tokens were going to be flooded into the system of knowledge work, I thought, you know what?
I think the joke that I made on Twitter is that agents are going to be deployed on the battlefield of knowledge work at, like, Soviet levels. The Russians have historically had one military policy: their strategy is [ __ ] terrible, and they just throw people into the maw of whatever the enemy is and hope they overwhelm it. And we're going to have an enormous, Soviet-level deployment of these agents, which means token use is going to go crazy, which means inference is going to go crazy, which means revenue is going to grow much faster than I had previously anticipated. You're already seeing this with Anthropic. >> I'm changing my mind because I think the situation on the ground is changing, and Anthropic is seeing an inflection curve in terms of its revenue growth, and other companies will as well as they grow their own agent businesses. But the future's uncertain, and the distance between 20 and 700 is a lot of billions. >> And you were talking to me earlier about how the chips aren't actually being retired. >> Right. Right. Yeah. Exactly. There's some indication now that, despite Paul's prediction that the older generation of chips would lose utility and be swapped out, we're in fact seeing four-year-old chips still at 100% usage. And so if chips that were bought four years ago are still used for another two years from now, then the utility of the chip is actually even longer than the depreciation schedule, which means you don't have this problem where these companies are essentially, not lying, but pretending to their investors that they have an investment that's going to pay off over a long period of time when actually their investment is in, like, a banana, and they have to just buy new chips over and over and over again.
That situation seems less plausible if they're still using the chips. >> And they're still using them because they're not using them for retraining, just for inference. >> Using them for inference. Exactly. Really key. They don't need them for pre-training anymore, so they're shifting the usage over to inference. >> Okay, great. So, a quote of yours that I love, I took this from your podcast, is: economic activity is bubblicious if it is dressed up in financial opacity. So, what financial mechanisms are technology companies using today to mask their investments and take them off their balance sheets that are concerning to you? >> Right. One of them I just described, which is the depreciation schedule, which maybe is a pretense and maybe is reality, and I'm in the process right now of reallocating my own prediction between it's-a-pretense and no-it's-actually-reality. The other thing that a lot of companies are doing, especially Meta, is creating special purpose vehicles. Essentially, they create a box, the special purpose vehicle. They put money into the box; private equity, Apollo, whoever, puts money into that box too; and that box builds a data center. And so the construction of that data center doesn't appear on Meta's balance sheet. It appears on the balance sheet of the special purpose vehicle. And my sense is that it's not just Meta; other companies are doing this as well. But, you know, Meta is a public company, and a lot of the other frontier companies, Anthropic and OpenAI, for example, are private, and so they might be doing aspects of this that haven't been publicized. >> And that can end up catastrophic. >> Maybe catastrophic, maybe not, but in the case of Meta, it certainly suggests that there's spending they don't want investors to scrutinize.
And the quote was about how, if you're a company that's trying to find some way to make your business model opaque, the motivation to make it opaque might be a sign that there's a little bit of foolishness happening. That's basically what I'm trying to say. I'm certainly not accusing Meta of fraud, but I am suggesting that these kinds of deals are what you would tend to see if a company were trying to disguise its level of spending. >> Yep. And you talk a lot about the infinite money glitch, what we call circular financing, where companies are simultaneously customers, competitors, and investors. Talk to us about that trend and what you think. >> Yeah, right, you've got these companies essentially saying, I'll buy 10% of your company if you give me $300 billion. It's one of those things where, if it all works out, it all works out, and if it doesn't work out, hoo boy. Nvidia in particular is so enmeshed in this ecosystem and so dependent on this network of buyers that of course they are willing to create deals where they are giving their buyers something in return for the revenue that they expect to come back to them. I think it's possible that this is exactly the sort of ecosystem in which everybody is co-owning everybody else, where if Nvidia catches a cold, everyone else is going to catch a pandemic. >> But again, >> if you have a scenario where old chips are still seeing enormous use because the demand for inference, the demand for agents, the demand for tokens is going astronomical, as it currently is this week, this month, then it's not a bubble. It's just really clever accounting. It's just really smart business to say, let's bring everyone on board, let's everyone hold hands, because we're all building something unprecedented together. $700 billion. Like I said, a new Apollo program every five months.
Like, you need something a little bit creative in order to keep that flywheel going. >> Yeah. So, I'm going to ask one more question and then we want to open it up to everyone to ask all of your questions. I know you have tons. The inscription in your book Abundance is: to have the future we want, we need to build and invent more of what we need. So you're in a room full of students, mostly. Give us the upshot here. What are you optimistic about? What are the opportunities that lie ahead? >> Well, look, again, I said the best reason to think we're in a bubble is that we live in history, >> and every time a general purpose technology has come around, there's been an enormous incentive to overbuild it because of the potential returns. >> The other corollary of the principle that we live in history is that we're all still here. The unemployment rate is under 5%. There is no technology, despite all the predictions from the Luddites up to Player Piano, Kurt Vonnegut predicting that the computer was going to take all of our jobs in the 1950s, that has actually done that. We're still here. We still have more jobs than ever. And frankly, the jobs are just better than they've ever been. No one wants to go back to coding in the 1950s. No one wants to go back to hunting whales in the 1870s, when you'd smash open their heads and crawl inside to get the gunk to light your lamps. Everything about the labor force is more comfortable and well-lit and air-conditioned than it used to be. Human progress is a fact, and it comes with pain, and it often comes with local dislocation as change happens. But progress is a fact. We live longer, I think often happier, and certainly healthier lives than we used to. So if we live in history, AI is a normal technology. It's a technology that's going to make our lives easier, because the grunt work of coding is going to be shunted off to Claude and Codex and all those other tools. And we can focus on the more interesting questions, like: what do we build?
Or we can focus on the more human questions, like what is technically known as business development and what is commonly known as a dinner. That is more fun than being at your desk looking at a computer and being on Excel all day. If Claude can do Excel and we can go out and have martinis and try to get people to invest in our company, that's a more fun life. Especially for folks like me who like martinis. So, yeah, if history is still history, and if AI is a normal technology, then there are going to be bumps along the road, but what we're going to be left with in the 2040s and 2050s is a world with many more jobs, more interesting jobs, jobs that elicit our human faculties of interpersonal humanness, and not just: oh, it's another day at the office, time to have another eight-hour relationship with Excel. >> Yeah. Well, we are very good at soft skills here, with Interpersonal Dynamics being our core class. So hopefully we will be leaders in that. Let's open it up to some questions. I knew Dora's hand was going to go up first. Dora, go ahead. >> Thank you for being here. Really interesting. I disagree with your last point about optimism with AI, but I'm getting to another point. >> I'd like to end on optimism, so I might even agree with your disagreement. >> I like that. So my question is about more of a political view on AI, and whether there's political utility for Democrats in establishing themselves as a countervailing force to this techno-optimism, given the affordability crisis, given that a lot of people are going to be losing jobs and there's going to be a lot of resentment. Is there a way to capitalize on that? Is that a smart move? >> Yeah, politically, of course, data centers are not popular. They're especially unpopular when, as the Wall Street Journal recently reported, the companies building the data centers are buying the land off of residential developers.
So, let's say you're a political consultant and you've got a candidate running in a state where the faceless Goliaths of Silicon Valley are buying housing land, taking homes from your community and giving that space to an AI whose builders promise it will take their jobs. The political ad does not particularly need that many Claude prompts to write itself. It's pretty straightforward. Data centers are hugely unpopular right now. AI is unpopular among certain groups. The politics of AI itself are actually a little bit scrambled: I think white-collar workers are the most afraid, and other workers maybe aren't paying quite as much attention. But look, I think it's so strange and interesting to have the architects of a technology promise the world their technology is going to destroy work. I was thinking about this the other day at home: what's a historical analogy of something as bizarre as that? It's like, what if Henry Ford in 1910 said, I'm working on this thing, this assembly line, the Model T, and you know, if this takes off, if automobile penetration really takes off, I think one cool thing that could happen is that more Americans could die every decade than died in World War I. How interesting is that? Who the [ __ ] advertises their product this way? In fact, cars have done that: 35,000 people die from car accidents every year; multiply by 10 and 350,000 per decade is more than the number of Americans that died in World War I. Ford could have said this about his product, but no one says this about their product. And so the AI people, people like Dario, who I think are moral, I mean, look at him standing up against the Department of War today, and who are just trying to tell the truth, don't understand the political consequences of the very specific Silicon Valley theory of just-tell-the-truth-and-talk-about-probabilities.
That's not how politics works. I could answer this question for an hour. The Silicon Valley mindset and the DC mindset are like completely different Myers-Briggs personalities. They are so fundamentally opposed. The language that is used in Silicon Valley is so alien to the language that is used in Washington, DC. It's basically the difference between folks who talk in probability theory and folks who talk in what would make the best bumper sticker. Those are different languages. So, to answer your question directly, because I went on a bit of a rant: yeah, Democrats are going to get on the anti-data-center train, for better and for worse. And there is going to be a for-worse as well. Populism can go too far, and it almost always does. >> Thank you so much for sharing, and I think you've given me more faith in the future. >> Good. >> And I have a follow-up question. So I'm one of two now in terms of injecting optimism. >> Introduce yourself. >> I'm Joyce. I'm an MBA1 student. So I'm curious: I understand the theory that we have experienced waves of technology innovation but are still living better lives. But do you think the AI revolution is more similar to, or different from, the previous technology evolutions? Because I think human capability has different components: we have labor, we have intelligence, we also have creativity. Machines replacing labor, maybe, okay. But actually I think AI is more intelligent and more creative than us. So my question is really: what do we have left? [laughter] >> Yeah. Well, right. What do we still have? [laughter] >> Do you have the answer to that? >> Yeah. Well, every time I think of an answer, I think of an exception. I was going to say, >> yeah, what do we still have?
[laughter] I'll say what we still have, for me. I was going to say, you know, we have faces and hands and feet and bodies; we are embodied in a way that AI is not. The robotics recursive-self-improvement age is not upon us yet. And human needs and wants have moved a lot in the last 150 years, creating the possibility for new jobs to answer those human needs and wants. It's not difficult for me to imagine a world in which the jobs of the future are seen as more human, precisely because we will be spending our income, and gross domestic income equals gross domestic product, so we'll be spending our economy, on the things that are truly for humans only. We've always been in an overlap with machines when it came to, say, mathematical abilities, right? It's always been the case, since the invention of the calculator, that a machine was better at some kinds of math than the human. Statistical intelligence, which is a huge part of what so many companies do, has always been a skill that humans often employed but machines did better; Excel was better at a certain kind of compiling of data than a human would be without it. So there's a way in which we've been participants in a certain definition of intelligence that machines have always been a competitor in, and we're about to enter a generation where they're going to leapfrog us on the things that are especially sensitive to mathematics and statistical analysis. But when you think, are machines better at business development? Are machines really better at marketing? Are machines really better at entertainment? If Sora had taken over the world, I'd be worried about the ability of artificial intelligence to, almost on its own, create a source of entertainment that was rivalrous to what humans could do. But it hasn't.
Hollywood uses AI, but they use it to tell human stories. So I still think there are a lot of ways for us to move up the value chain of our own humanness. We used to all work on farms; then we realized we could do something with our hands that wasn't just growing food, and a lot of us moved to factories. Then we realized we could do something with our minds that wasn't just putting in screws and bolts, and we moved into white-collar offices. Sometimes those jobs were profoundly human, and sometimes they were so deadening that satire was made of them: Severance, Office Space. It's conceivable that we're moving up a value chain that would allow us to do jobs that are just more fun. It's not a prediction, but it's absolutely a possibility. >> Unfortunately, we're really good at the jobs that are not fun. >> Hi, my name is Sai. I'm an MBA1 student. I met with Andrew Ng yesterday, and we were discussing how smaller teams, especially in low-resource settings with constraints, are now able to leverage intelligence and create better products. And you, on the other hand, have also said that AI is making us dumber. >> I said it [laughter] has the capacity to make us dumber. >> Okay. Right. Yeah. I wonder if we're moving into a world where individually we're getting dumber but collectively more powerful. What do you think about that? >> We're certainly collectively more powerful. Almost axiomatically, technology increases collective power, and I'm a fan of the idea, I think it was Tim Urban who said it, that the more technologically advanced a society gets, the greater its capacity to do both good and evil. Right?
You can take this all the way back to Fritz Haber and the development of the first chemical weapons: the man who invented ammonia synthesis, and whose nitrogen-fixing technology feeds two billion people in the world, also developed chemical weapons that the Germans used in World Wars I and II. Our intelligence expands the bounds of our ability to do good and evil. True of chemistry, true of nuclear weapons; it'll certainly be true of artificial intelligence. So that's an answer from the collective standpoint. I really think it's important to emphasize the individual standpoint, the role of agency here: two different people with the exact same IQ can use the same technology, one to feed their curiosity and the other to replace it. The difference is not intelligence. The difference is choices, on a moment-by-moment basis. So I think, optimistically, that AI can be a steroid for curiosity. If you're the sort of person who wakes up in the morning thinking, God, I want to know about this moment in history, I want to understand this industry, I read a headline about CAR-T cell therapy and immunotherapy for cancer, how does that work? If you're that kind of person who's motivated by this need to know more, more, more, what could be more beautiful than a bespoke Wikipedia machine in your pocket? That's a beautiful thing. But it's also obviously the case that some people, maybe with the exact same IQ as the person I just made up two seconds ago, are going to use this machine to write all of their essays in high school so that they can just play video games and watch TikToks. It's not a matter of intelligence. It's a matter of choices, and it goes to the level of almost, like, personality. It goes to the level of curiosity.
How much do you actually want to know, versus how much do you want to just get the job of getting an A done with, so you can move on to the thing you actually want to do with your day, which is look at your phone? >> People are going to make different choices with this technology, and it's those choices that are going to determine whether we become smarter or dumber, not the underlying IQ of the person using it. >> My name is Max. I'm a first-year MBA student here. You alluded to it a minute ago, but I'd love to get your perspective on the Anthropic-Department of War scenario and how that evolves, both with Anthropic and some of the other labs. >> Yeah, it's a great question. In the interest of being both honest and moving through questions as quickly as possible: I was traveling today, so I saw the tweet and a couple of quote tweets, and I could spin up an answer that's just me riffing on some quote tweets I saw, but that's literally what you would get. I think it's pretty cool for a CEO to stand up for his or her company's morals, and I do think that Dario did that. I also wonder whether what the Department of War was asking him to do was, like, axiomatically unconstitutional, in which case I would hope that even a less moral CEO would also say, I'm not particularly interested in lending my technology to unconstitutional purposes. Amodei has said his piece. I think it'd be great if Sam and Sundar did the same and left Hegseth out to dry, but, you know, I don't know if that's a pipe dream. >> In the back. >> Oh, hi. My name is Tedu. I'm an MBA1. I think your spectrum from leisure all the way to UBI is possibly true for the US. I'm curious about your thoughts for outside the US.
In developing economies where you have high population growth and a lot of people depending on BPO-style jobs, what do you think the opportunities are, if there's an optimistic case? >> What does BPO stand for? >> Business process outsourcing. So, like, digital remote jobs. >> Yeah. Right. I do much less thinking about the developing world. But when I do think about the implications of AI in the developing world, I think it's pretty nerve-wracking. You've got a lot of countries... my best friend has a business that's based in Western Africa, so I know a little bit about those countries. I think there are a lot of countries, outside of China in particular, that are trying to make the leap from lower income to middle income without building up a manufacturing base, and that's historically been very, very difficult. Almost every country that's gone from lower to middle income has passed through a manufacturing stage. Countries that have... let me take that sentence back. The growth of China and its extractive export model has eaten manufacturing that might have theoretically gone to other developing countries. As I understand it, a lot of those countries have instead tried to move up the value chain by turning to services. But because those services are almost inherently lower skilled and lower paid, I'm absolutely worried about the effect that artificial intelligence could have on them. In the spirit of nobody-knows-anything: I want to read the NBER paper, the economics paper, that says we've seen in the last six months that the rollout of ChatGPT has had this effect on the BPO sector in Burkina Faso or in Nigeria. I want to read those papers before I make a really strong prediction here. But is it a space that has, like, a surface area of fear for me? Absolutely. It's hard to imagine that transition not going a little bit funkily.
>> Last question. >> Hi, I'm Adrian, MBA1. I'm a big fan of your work. I think you mentioned you have young children, so I'm wondering how you're thinking about raising them in the era of AI and making sure they have the best opportunities moving forward. >> Yeah, great last question. Always nice to have a touchy-feely question to end things. My children are two years and two months old. So there's a piece of me that's afraid for them. But, for folks in here who either have children or know people who have children that age, you are in trench warfare with a very small terrorist in your house. [laughter] You are so plunged into the present of how-can-I-get-you-to-stop-screaming-for-a-snack-that-mom-said-you-couldn't-have that it's actually quite difficult in those moments to think: and also, what's the impact of AI in 16 years? [laughter] I'm not mocking the question. Sometimes, in my rare moments of calm, like the Uber from Berkeley to Stanford today, I think: oh man, AI is coming, and eventually Isa's going to need a job. I have these thoughts, but they're unclear.
What's clear to me is this: I think it was Yuval Harari who was asked this question at a conference, and he gave an answer that stuck with me. He said the work identities of the past could be built like homes, with foundations that reach deep into the earth, while the work identities of the near future, when we're in a period of enormous swarming change, will have to be less like a house and more like a tent. We'll have to be prepared to pick up and move, to say: initially I thought I was going to do this one job, but a technology has started to ebb away at that job, and so now I'm interested in this other career. And that requires a certain flexibility, a certain emotional intelligence, a disposition toward the future that, the simplistic way to put it, is comfortable with discomfort. Maybe the more complicated way to put it is: someone who looks forward to challenges. I remember when I was first reporting on VCs, I was talking to a venture capitalist in Washington, DC, and I asked about one of the businesses he was invested in, and he said, yeah, that's a problem, and that's what's so exciting, that it's a problem. And I could see how thrilled he was by the idea that this company was struggling, and they had invested in it, and they needed a solution. That instinct to find challenges thrilling is one that has to be cultivated. It's not born into us; I think it's learned. And so maybe we have to learn a new skill, which is to get thrilled by these kinds of challenges because of how they'll make us think deeper, how they'll change our models, how we might be excited to change our own minds. I think dispositions like that might be just as important as hard skills in a future where certain industries look like they are flush with employment and then six months later look like they're not.
>> Easy to say, hard to do, hard to cultivate. But I think that's the skill I would love my children to have. >> I think that's a good note to close on. One thing I love about your work is that it's very clear you're open to changing your mind all the time, that new information changes how you see things, and you write about that. I think that's a skill we all should maintain. So thank you so much for being here. It was a pleasure having you. >> Thank you. [music]
Video description
In this AI@GSB episode, Jenni Steiger, MBA '26, sits down with Derek Thompson to unpack what’s actually happening as AI reshapes technology, the economy, and society. A longtime writer at The Atlantic and now author of the Substack Plain English, Derek brings a sharp, grounded perspective on some of the biggest questions of the moment. The conversation explores whether AI is a bubble, how to think about investment versus real economic value, and what adoption actually means for jobs, skills, and the labor market.