bouncer

Podcast Jensen Huang LIVE: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

All-In with Chamath, Jason, Sacks & Friedberg · 1:06:41 · 26d ago

45% · Low · Human

"Be aware of how the hosts' personal financial interests in the AI sector (like the acquisition of Groq) are blended into the technical interview, which may bias their questioning toward validating NVIDIA's market strategy."

Mild · Moderate · Severe

Transparency

Transparent

Primary Technique

Parasocial leveraging

Leveraging the one-sided emotional bond you form with creators you watch regularly. Because you feel like you "know" them, their opinions carry the weight of a friend's advice rather than a stranger's. Creators can monetize this by blurring genuine sharing with paid promotion.

Horton & Wohl's parasocial interaction theory (1956); Reinikainen et al. (2020)

Jensen Huang discusses NVIDIA's transition from a GPU manufacturer to an 'AI factory' company, emphasizing the necessity of high-end hardware for the 'agentic' future. Beneath the technical discussion, the podcast uses parasocial trust and conversational consensus to frame NVIDIA's premium pricing as a logical necessity rather than a market choice.


Provenance Signals

The content is a live podcast interview featuring high-profile public figures with distinct, recognizable voices and spontaneous conversational dynamics. The natural interruptions, specific industry context, and interpersonal humor strongly indicate it is human-created content.

Natural Speech Patterns The transcript contains natural interruptions, self-correction ('I had an inkling that... We're his friends'), and conversational banter between the hosts and Jensen Huang.
Contextual Specificity The dialogue references specific, real-time events like the acquisition of Groq, the GTC conference, and internal jokes about the hosts' personalities (Chamath being 'insufferable').
Filler and Reactive Language Use of phrases like 'I know it', 'Yeah, thank you', and 'I'll let you take this one' indicates real-time human interaction rather than a pre-scripted AI generation.
Episode Description
(0:00) Jensen Huang joins the show!
(1:00) Acquiring Groq and the inference explosion
(9:27) Decision making at the world's most valuable company
(11:22) Physical AI's $50T market, OpenClaw's future, the new operating system for modern AI computing
(17:12) AI's PR crisis, refuting doomer narratives, Anthropic's comms mistakes
(21:22) Revenue capacity, token allocation for employees, Karpathy's autoresearch, agentic future
(31:24) Open source, global diffusion, Iran/Taiwan supply chain impact
(40:19) Self-driving platform, facing competition from active customers, responding to growth slowdown predictions
(48:06) Datacenters in space, AI healthcare, Robotics
(56:44) OpenAI/Anthropic revenue potential, how to build an AI moat
(59:38) Advice to young people on excelling in the AI era

Thanks to Airwallex for making this happen: Airwallex is a leading global payments and financial platform for modern businesses, offering trusted solutions to manage everything from business accounts, payments, treasury, and spend management to embedded finance. https://airwallex.com/allin

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Worth Noting

This episode provides a rare, high-level strategic look at how NVIDIA views the transition from simple LLMs to complex 'agentic' and 'physical' AI systems.

Be Aware

The 'Besties' hosts have significant personal and professional investments in the companies discussed, creating a conversational consensus that rarely challenges the guest's market assumptions.

Influence Dimensions

The discussion of 'token cost' vs 'factory cost' (44:10) → excludes the risk of hardware lock-in and the environmental/energy costs of such massive scale → benefits NVIDIA's long-term contract stability.

Single-cause framing

Attributing a complex outcome to a single cause, ignoring the web of contributing factors. A clean explanation is more satisfying and easier to act on than a complicated one. Especially effective when the proposed cause is something you already dislike.

Fallacy of the single cause; Kahneman's WYSIATI principle

The assumption that 'physical AI' is a $50T market (11:22) → treated as a mathematical certainty rather than a speculative projection to justify current valuations.

Anchoring

Presenting an extreme number or claim first so everything after seems reasonable by comparison. The first piece of information becomes your reference point — even when it's arbitrary or deliberately inflated. Works even when you know the anchor is irrelevant.

Tversky & Kahneman's anchoring heuristic (1974)

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: 26d ago
Transcript

Special episode this week. We preempted the weekly show, and there's only three people we preempt the show for. President Trump, Jesus, and Jensen. And I'll let you pick which order we do that. But what an amazing run you've had and a great event. Every industry is here. Every tech company is here. Every AI company is here. Incredible. If you were building a global financial system from first principles today, you wouldn't build it on 50-year-old legacy rails. You'd build Airwallex, one AI-native platform for global accounts, cards, and payments. It's designed to make the entire world feel like a local market. Others are bolting AI onto broken infrastructure. But Airwallex was built for the intelligent era from day one. Stop paying the legacy tax and start building the future at airwallex.com slash all in. Airwallex, build the future. And one of the great announcements of the past year has been Groq. When you made the purchase of Groq, did you realize how insufferable Chamath would become? I had an inkling that... We're his friends. We have to deal with him every week. I know it. You had to deal with him for the six-week close. I know it. It's like two weeks. Two weeks. It's all coming back to me now. It's making me rather uncomfortable. The thing is, many of our strategies are presented in broad daylight at GTC years in advance of when we do it. Two and a half years ago, I introduced the operating system of the AI factory, and it's called Dynamo. So Dynamo, as you know, is a piece of instrument, a machine that was created by Siemens to turn essentially water into electricity. And Dynamo powered the factory of the last industrial revolution. So I thought it was the perfect name for the operating system of the next industrial revolution, the factory of that. And so inside Dynamo, the fundamental technology is disaggregated inference. Jason, I know you're super technical. Absolutely. I know it. I'll let you take this one. Go ahead and define it. 
I don't want to step on you. Yeah, thank you. I knew you wanted to jump in there for a second. But it's disaggregated inference, which means the pipeline, the processing pipeline of inference is extremely complicated. In fact, it is the most complicated computing problem today. Incredible scale, lots of mathematics of different shapes and sizes. And we came up with the idea that you would change, you would disaggregate parts of the processing such that some of it can run on some GPUs, rest of it can run on different GPUs. And that led to us realizing that maybe even disaggregated computing could make sense, that we could have different heterogeneous nature of computing. That same sensibility led us to melanize. You know, today, NVIDIA's computing is spread across GPUs, CPUs, switches, scale-up switches, scale-out switches, networking processors. And now we're going to add Groq to that, and we're going to put the right workload on the right chips. You know, we just really evolved from a GPU company to an AI factory company. I mean, I think that was probably the biggest takeaway that I had. You're seeing this fundamental disaggregation where we've gone from a GPU and now you have this complexion of all these different options that will eventually exist. The thing that you guys said on stage or you said on stage was I would like the high value inference people to take a listen to this. And 25 percent of your data center space, you said, should be allocated to this Groq, LPU, GPU combo. We should add Groq to about 25 percent of the Vera Rubins in the data center. So can you tell us about how the industry looks at this idea of now basically creating this next generation form of disaggregated, pre-fill, decode, disag, and how people do you think will react to it? Yeah, and take a step back. At the time that we added this, we went from large language model processing to agentic processing. 
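The disaggregated-inference idea described here can be sketched in a few lines: split the pipeline so the prefill and decode phases are routed to different pools of hardware suited to each. This is an illustrative toy only; the pool names and the `Request` type are hypothetical, not NVIDIA's Dynamo API, and the GPU/LPU assignment reflects the conversation's framing, not a benchmark.

```python
# Toy sketch of disaggregated inference routing. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int     # size of the input prompt
    max_new_tokens: int    # how many tokens to generate

def route(request: Request) -> dict:
    """Assign each phase of the inference pipeline to a hardware pool."""
    return {
        # Prefill is compute-bound: large matrix multiplies over the prompt.
        "prefill": "gpu_pool",
        # Decode is memory-bandwidth-bound: one token at a time, so it can
        # run on a different pool (e.g. LPU-style chips, per the discussion).
        "decode": "lpu_pool" if request.max_new_tokens > 0 else None,
    }

plan = route(Request(prompt_tokens=4096, max_new_tokens=512))
```

The point of the split is that each phase stops competing for the same chips, so "the right workload runs on the right chips," as the conversation puts it.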
Now, when you're running an agent, you're accessing working memory, you're accessing long-term memory, you're using tools, you're really beating up on storage really hard. You have agents working with other agents. Some of the agents are very large models. Some of them are smaller models. Some of them are diffusion models. Some of them are autoregressive models. And so there are all kinds of different types of models inside this data center. And we created Vera Rubin to be able to run this extraordinarily diverse workload. My sense is, and so we added, we used to be a one rack company. We now added four more racks. So NVIDIA's TAM, if you will, increased from whatever it was to probably something, call it, you know, 33%, 50% higher. Now, part of that 33% or 50%, a lot of it's going to be storage processors. It's called Bluefield. Some of it will be, a lot of it I'm hoping, will be Groq processors. And some of it will be CPUs. And a lot of it is going to be networking processors. And so all of this is going to be running basically the computer of the AI revolution called Agents. The operating system of modern industry. What about embedded applications? So my daughter's teddy bear at home wants to talk to her. What goes in there? Is it a custom ASIC, or does there end up becoming much more kind of a broader set of TAM with developing tools that are maybe different for different use cases of the edge and an embedded application? We think that there's three computers in the problem at the largest scale when you take a step back. There's one computer that's really about training the AI model, developing, creating the AI. Another computer for evaluating it, depending on the type of problem you're having. Like, for example, you look around, there's all kinds of robots and cars and things like that. You have to evaluate these robots inside a virtual gym that represents the physical world. So it has to be software that obeys the laws of physics. 
And that's a second computer. We call that Omniverse. The third computer is the computer at the edge, the robotics computer. That robotics computer, one of them could be a self-driving car. Another one's a robot. Another one could be a teddy bear, little tiny one for a teddy bear. One of the most important ones is one that we're working on that basically turns the telecommunications base stations into part of the AI infrastructure. So now all of the two trillion dollar industry. All of that in time will be transformed into an extension of the AI infrastructure. And so radios will become edge devices, factories, warehouses, you name it. And so there are these three basic computers. All of them, you know, aren't going to be necessary. Jensen, last year, I think you were ahead of the rest of the world in saying inference isn't going to 1,000x. Just last year. Yes. Brad, you're hurting my feelings. Is it going to 1,000,000x? Is it going to 1,000,000x? Yeah. Right? And I think people at the time thought it was pretty hyperbolic because the world was still focused on pre-scaling, on training. Here we are. Now inference has exploded. We're inference constrained. You announced an inference factory that I think is leading edge, that's going to be 10x better in terms of throughput to the next factory. But yet, if I listen to what the chatter is out there, it's that your inference factory is going to cost $40 or $50 billion. And the alternatives, the custom ASICs, AMD, others are going to cost $25 to $30 billion, and you're going to lose share. So why don't you talk to us? What are you seeing? How do you think about share? And does it make sense for all these folks to pay something that's a 2x premium to what others are marketing? The big takeaway, the big idea is that you should not equate the price of the factory and the price of the tokens, the cost of the tokens. 
It is very likely that the $50 billion factory, and in fact, I can prove it, that the $50 billion factory will generate for you the lowest cost tokens. And the reason for that is because we produce these tokens at extraordinary efficiency. Ten times, you know, the difference between $50 billion. Now, it turns out $20 billion is just land, power, and shell, right? Right. And then on top of that, you have storage anyways, networking anyways, you've got CPUs anyways, you've got servers anyways, you've got cooling anyways. The difference between that GPU being 1x price or half x price is not between $50 billion and $30 billion. Pick your favorite number, but let's say between $50 billion and $40 billion. That is not a large percentage when the $50 billion data center is actually 10 times the throughput. Right. That's the reason why I said that even for most chips, if you can't keep up with the state of the technology and the pace that we're running, even when the chips are free, it's not cheap enough. Can I just ask a general strategy question? Yeah. I mean, you're running the most valuable company in the world. This thing is going to do $350 plus billion of revenue next year, $200 billion of free cash flow. It's compounding at these crazy rates. How do you decide what to do? Like how do you actually get the information? I mean, it's famous now, these sort of emails that people are meant to send you. But how do you really decide to get an intuition of how to shape the market, where to really double down, where to maybe pull back, where to actually go into a greenfield? How does that information get to you? How do you decide these things? In a final analysis, that's the job of the CEO. And our job is to define the strategy, define the vision, define the strategy. We're informed, of course, by amazing computer scientists, amazing technologists, great people all over the company. but we have to shape that future. 
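The factory-cost-versus-token-cost argument made in this passage reduces to simple arithmetic, sketched below with the round numbers quoted in the conversation ($50B vs. a hypothetical $40B alternative, 10x throughput). The 10x throughput figure is Huang's claim, taken at face value here, not independently verified.

```python
# Back-of-envelope version of the "price of the factory != cost of the
# tokens" argument. All dollar figures are the episode's round numbers.

def cost_per_throughput_unit(factory_cost_usd: float, relative_throughput: float) -> float:
    """Effective cost per unit of token throughput (arbitrary units)."""
    return factory_cost_usd / relative_throughput

premium = cost_per_throughput_unit(50e9, 10.0)  # $50B factory, claimed 10x throughput
cheaper = cost_per_throughput_unit(40e9, 1.0)   # $40B alternative, baseline throughput

# The premium factory costs 25% more up front...
upfront_premium = 50e9 / 40e9 - 1               # 0.25
# ...but under the quoted assumptions produces tokens at 1/8 the cost.
token_cost_ratio = premium / cheaper            # 0.125
```

Note what the sketch also makes visible: the conclusion is entirely driven by the assumed 10x throughput multiplier, which is exactly the anchor the surrounding analysis flags.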
Well, part of it has to do with, is this something that's insanely hard to do? If it's not hard to do, we should back away from it. And the reason for that is if it's easy to do, obviously, lots of competitors, a lot of competitors. Yeah. Is this something that has never been done before? That's insanely hard to do. And that somehow taps into the special superpowers of our company. And so I have to find this confluence of things to that meets the standard. And in the end, we also know that a lot of pain and suffering is going to go into it. There are no great things that are invented because it was just easy to do and just like first try, here we are. And so if it's super hard to do, nobody's ever done it before, it's very likely that you're going to have a lot of pain and suffering. And so you better enjoy it. So can you just look at maybe three or four of the more long-tail things you announced and just talk about the long-term viability of whether it's the data centers in space or whether it's what you're trying to do with ADAS and autos or what you're trying to do on the biology side. Just give us a sense of how you see some of these curves inflecting upwards in some of these longer-tailed businesses. Excellent. Physical AI, large category. We believe, and I just mentioned, we have three computing systems, all the software platforms on top of it. Physical AI as a large category, it's technology industry's first opportunity to address a 50 trillion dollar industry that has largely been you know void of technology until now and so we need to invent all of the technology necessary to do that i felt that that was a 10-year journey we started 10 years ago we're seeing inflecting now it is a multi-billion dollar business for us it's close to 10 billion dollars a year now. And so it's a big business and it's growing exponentially. And so that's number one. I think in the case of digital biology, I think we are literally near the chat GPT moment of digital biology. 
We're about to understand how to represent genes, proteins, cells. We all know how to understand chemicals. And so the ability for us to represent and understand the dynamics of the building blocks of biology, that's a couple of two, three, five years from now. In five years time, I completely believe that the healthcare industry or digital biology is going to inflect. And so these are a couple of the really great ones. And you could see they're all around us. Agriculture. Agriculture. Inflecting now. No question. Yeah. Jensen, I want to take you from the data center to the desktop. The company was built in large part on hobbyists, video gamers, and all those graphic cards in the beginning. And you mentioned in front of, I think, 10,000 people here, just Claude, OpenClaw, Claude Code, and what a revolution agents have become. And specifically, the hobbyists who are really where a lot of energy, we see a lot of the innovation breaks, want desktops. You announced one here. I believe it's the Dell 6800. This is a very powerful workstation to run local models, 750 gigs of RAM. Obviously, the Mac Studio sold out everywhere. In my company, we're moving to OpenClaw everything. Freeberg just got claw-pilled. You got claw-pilled, I understand, and you're obsessed with these. What is this from the streets movement of creating open source agents and using open source on the desktop mean to you? So great. Where is that going? Yeah, so great. First of all, let's take a step back. In the last two years, we saw basically three inflection points. The first one was generative. ChatGPT brought AI to the common everybody, to our awareness. But the fact of the matter is the technology sat in plain sight months before GPT. It wasn't until ChatGPT put a user interface around it, made it easy for us to use, that generative AI took off. Now, generative AI, as you know, generates tokens for internal consumption as well as external consumption. 
Internal consumption is thinking, which led to reasoning. o1 and o3 continued that wave of ChatGPT, grounded information, made AI not only answer questions, but answer questions in a more grounded way useful. We started seeing the revenues and the economic model of OpenAI start to inflect. Then the third one was only inside the industry that we saw, Claude Code. The first agentic system that was very useful. Really revolutionary stuff. But Claude Code was only available for enterprises. Most people outside never saw anything about Claude Code until OpenClaw. OpenClaw basically put into the popular consciousness what an A.I. agent can do. That's the reason why OpenClaw is so important from a cultural perspective. Now, the second reason why it's so important is that OpenClaw is open, but it formulates, it structures a type of computing model that is basically reinventing computing altogether. It has a memory system. It's a short-term memory file system. Skills. It has scales. Did you say skills or scales? Skills. Oh, skills. You have scales, theoretically, yeah. So the first thing, it has resources. It manages resources. It does scheduling. Right? And it cron jobs. It could spawn off agents. It could decompose a task and solve problems. It does scheduling. It has I.O. subsystems. It could input. It has output. It can connect to WhatsApp. And also, it has an API that allows it to run multiple types of applications called skills. Yeah. These four elements fundamentally define a computer. Yeah. And therefore, what do we have? We have a personal artificial intelligence computer for the very first time. Open source. It's open source. It runs literally everywhere. And so this is basically the blueprint, the operating system of modern computing. Yeah. 
And it going to run literally everywhere Now of course one of the things that we have to help it do is whenever you have agentic software you have to make sure that an agentic software has access to sensitive information that can execute code it can communicate externally We have to make sure that all of it has to be governed, all of it has to be secure, and that we have policies that gives these agents two of the three things, but not all three things at the same time. And so the governance part of it, we contributed to Peter. Peter Steinberger was here. And so we've got a mountain of great engineers working with him to help secure and keep that thing so that it could protect our privacy, protect our security. Jensen, that paradigm shift makes some of the AI legislation that has passed around the country to regulate AI and a lot of the proposed legislation effectively moot, doesn't it? Can you just comment for a second on how quickly the paradigm shift kind of obviates a lot of the models for regulatory oversight of AI, which is becoming a very hot topic in politics right now? Well, this is the part that we just – with policymakers, we need to always get in front of them. And Brad, you do a great job doing this. We have to get in front of them and inform them about the state of the technology, what it is, what it is not. It is not a biological being. it is not alien it is not conscious um it is computer software yeah and and it is not something that um we say things like we don't understand it at all it is not true we don't understand it all we understand a lot of things about this technology and and so so i think one we have to make sure that we continue to inform the policymakers and not affect, not allow doomerism and extremism to affect how policymakers think and understand about this technology. However, we still have to recognize that technology is moving really fast and don't get policy ahead of the technology too quickly. 
And the risk that we run as a nation, our greatest source of national security concern with respect to AI is that other countries adopt this technology while we are so angry at it or afraid of it or somehow paranoid of it that our industries, our society don't take advantage of AI. And so I'm just mostly worried about the diffusion of AI here in the United States. Can you just double click if you were in the seat in the boardroom of Anthropic over that whole scuttlebutt with the Department of War, it sort of builds on this idea of people didn't know what to think, it's sort of added to this layer of either resentment or fear or just general mistrust that people have sometimes at the software levels of AI. What do you think you would have told Dario and that team to do maybe differently to try to change some of this outcome and some of this perception? The first thing that I would say about Anthropic is, first of all, the technology is incredible. We are a large consumer of Anthropic technology. Really admire their focus on security, really admires their focus on safety. The culture by which they went about it, the technology excellence by which they went about it, really fantastic. I would say that the desire to warn people about the capability, the technology is also really terrific. We just have to make sure that we understand that the world has a spectrum and that warning is good, scaring is less good. because this technology is too important to us. I think that it is fine to predict the future, but we need to be a little bit more circumspect. We need to have a little bit more humility that, in fact, we can't completely predict the future. And to say things that are quite extreme, quite catastrophic, that there's no evidence of it happening, could be more damaging than people think. And of course, we are technology leaders. There was a time when nobody listened to us. 
But now, because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter. And I think we have to be much more circumspect. We have to be more moderate. We have to be more balanced. We have to be more thoughtful. Well, I would nominate you. I think the industry's got to get together. 17% popularity of AI in the United States. I mean, we see what happened to nuclear, right? We basically shut down the entire nuclear industry, and now we have 100 fission reactors being built in China and zero in the United States. We hear about moratoriums on data centers, so I think we have to be a lot more proactive about that. But I want to go back to this agentic explosion that you're seeing inside your company, the efficiencies, the productivity gains inside your company. There's a lot of debate whether we're seeing ROI, right? And you and I entering into this year, the big question was, are the revenues going to show up? Are the revenues going to scale like intelligence? And then we had this kind of Oppenheimer moment, a $5, $6 billion month by Anthropic in February. Do you think as you look ahead, you announced a trillion dollar, you know, visibility into a trillion dollars of just Blackwell and Vera Rubin over the course of the next couple of years. When you see this happening at Anthropic and OpenAI, do you think we're on that curve now where we're going to see revenues scale in the way that intelligence is scaling? I'll answer this a couple of different ways. When you look around this audience, you will see that Anthropic and OpenAI is represented here. But in fact, 99% of everything that is here is all AI and it's not Anthropic and OpenAI. And the reason for that is because AI is very diverse. I would say that the second most popular model as a category is open models. Open source. Open source. Open weights, open source. OpenAI is number one. Open source is number two. 
Very distant third is anthropic. And that tells you something about the scale of all of the AI companies that are here. And so it's important to recognize that. Let me come back and say a couple things. One, when we went from generative to reasoning, the amount of computation we needed was about 100 times. When we went from reasoning to agentic, the computation is probably another 100 times. Now we're looking at in just two years, computation went up by a fact 10,000x. Meanwhile, people pay for information, but people mostly pay for work. Yes. Talking to a chat bot and getting an answer is super great. Right. Helping me do some research. Unbelievable. But getting work done, I'll pay for. And so that's where we are. Agentic systems get work done. They're helping our software engineers get work done. And so then you take that. You got 10,000 X more compute. You get probably at this point, 100 X more consumption now. Yeah. And we haven't even started scaling yet. We are absolutely at a million X. Which is, I think, a great place to talk about the number of engineers you have, 20,000, 30,000 at the company. We have 43,000 employees. I would say 38,000 are engineers. The conversation we've had on the pod a number of times is, oh, my God, look at the token usage in our companies. It is growing massively. And some people are asking, hey, when I join a company, how many tokens do I get? because I want to be an effective employee. And you postulated, I believe, during your two-and-a-half-hour keynote, pretty long keynote, well done, that you were spending – If it was well done, it would be shorter. Yeah. You didn't have time to do – You didn't have time to write it for an hour and 45. So you guys know there is no practice, and so it's a gripping and ripping. Gripping and ripping. Yeah, yeah. Love it. So I just want to let you know I was writing the speech while I was giving the speech. You never know. 
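The compounding multipliers quoted in this exchange work out as below. All of the 100x figures are Jensen's round numbers, not measurements, and "the million X" is simply the product of his compute and consumption estimates.

```python
# Arithmetic behind the quoted scaling claims (round numbers from the episode).
gen_to_reasoning = 100       # generative -> reasoning compute multiplier
reasoning_to_agentic = 100   # reasoning -> agentic compute multiplier

compute_growth = gen_to_reasoning * reasoning_to_agentic  # "10,000x in two years"

consumption_growth = 100     # his estimate of consumption growth so far
combined = compute_growth * consumption_growth            # the claimed "million X"
```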
But does that mean if we do back-of-the-envelope math, $75,000 in tokens for each engineer or something like that? So are you spending in NVIDIA $1 billion, $2 billion on tokens for your engineering team right now? We're trying to. Let me give you the thought experiment. Let's say you have a software engineer or AI researcher and you pay them $500,000 a year. We do that all the time. This is happening all of the time. That $500,000 engineer at the end of the year, I'm going to ask him how much did you spend in tokens? And that person said $5,000. I will go ape something else. Yes. Right. If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. Okay. And this is no different than one of our chip designers who says, guess what? I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools. This is a real paradigm shift to start thinking about these all-star employees. It almost reminds me of what we learned in the NBA when LeBron James started spending a million dollars a year just on his health of his body and maintaining it. That's right. Here he is at age 41 still playing. It really is, hey, if these are incredible knowledge workers, why wouldn't we give them superhuman abilities? That's exactly right. Where does that go? If we extrapolate out two or three years from now, what is the efficiency of that all-star at NVIDIA and what they're able to accomplish? What do they look like? Well, first of all, things that, wow, this is too hard. That thought is gone. This is going to take a long time. That thought is gone. We're going to need a lot of people. That thought is gone. This is no different than in the last industrial revolution. Somebody goes, boy, that building really looks heavy. Nobody says that. But, wow, that mountain looks too big. Nobody says that. Everything that's too big, too heavy, takes too long, those ideas are all gone. You're reduced to creativity. That's right. 
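The token-budget numbers tossed around earlier in this exchange can be sanity-checked with a quick calculation. The 38,000-engineer headcount is from the episode; applying the per-engineer spends uniformly to all of them is a hypothetical simplification (Jensen's $250,000 floor was stated for a $500,000-a-year engineer, not every employee).

```python
# Back-of-envelope check of the per-engineer token-spend figures discussed.
engineers = 38_000  # Jensen's stated engineering headcount

# Jason's $75k/engineer guess vs. Jensen's $250k floor, applied uniformly
# (a simplification -- the real distribution would vary by role).
jason_estimate = engineers * 75_000    # ~$2.85B/year
jensen_floor = engineers * 250_000     # ~$9.5B/year
```

The spread between the two totals shows why Jason's "$1 billion, $2 billion" guess and Jensen's "we're trying to" are not actually in conflict: the answer swings by billions depending on the assumed per-engineer spend.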
What can you come up with? Exactly. Now, the question is, how do you work with these agents? Well, it's just a new way of doing computer programming. In the past, we code. In the future, we're going to write ideas, architectures, specifications. We're going to organize teams. We're going to help them define how to evaluate the definition of good versus bad. What does it look like when something is a great outcome? How to iterate with you, how to brainstorm? That's really what you're looking for. And I think that every engineer is going to have 100, 100 agents. Back to the PR problem the industry has right now. You have executives like David Freeberg with Ohalo, who's looking at literally taking through the use of technology, your technology and AI, the number of calories produced and making high quality calories. What is the factor you think you can bring the cost down, Freiberg? And what impact does this vision have for what you're doing? Zero shot genomic modeling. And it works. Yeah. And you have that moment and you're like, holy shit. Honestly, like, and that's after people are replacing entire enterprise software stacks in a night. I did something in 90 minutes I was telling the guys about. Replaced a whole software stack and, like, a whole bunch of workload. 90 minutes on Claude ran this agentic system. Built the whole thing. Deployed it. And we got it. On a Sunday night. On a Sunday night. 10 p.m. I was done at 1130. I went to bed. As the CEO, you replaced software. Yeah. And everyone on my management team had to do a similar exercise over the weekend. And what we saw on Monday, I was like, it's over. But the technical stuff, the science stuff, we did something in 30 minutes using auto research. And I'd love your view on auto research and what that tells us about how far we still have to go in terms of efficiency. But using auto research and a chunk of data, something was published internally that we said, oh, my God. And that would normally be a Ph.D. 
thesis that would take seven years. It would be one of the most celebrated Ph.D. pieces we've ever seen in this field. And it would be in the journal Science. And it was done in 30 minutes on a desktop computer running auto research, with all the data we just ingested. We got it on Friday and we're like, hey, let's try it. Try it, boot it up, go into GitHub, download auto research, and ran it. And you see everyone's face just go like... And then the potential of what this is unlocking for us is the kind of thing that would take seven years. And it happened in 30 minutes. And we're experiencing it in genomics. And we're like, this is unbelievable. So I think the acceleration is widening the aperture for everyone in a way that you didn't imagine a few years ago. But just going back to the auto research point, can you comment on what you think about the fact that this thing got published with 600 lines of code in a weekend, and the capacity that it has to run locally and achieve what it can achieve with all of these diverse data sets, and what that tells us about the early stage we are in, in terms of optimization on algorithms and hardware?

The fundamental reason why OpenClaw is so incredible, number one, is its confluence, its timing, with the breakthroughs in large language models. Its timing was perfect. It was impeccable. Now, in a lot of ways, Peter probably wouldn't have come up with it if not for the fact that Claude and GPT and ChatGPT have reached a level that is really very good. It is also a new capability that allows these models to use tools. The tools that we've created over time, web browsers and Excel spreadsheets and, you know, in the case of chip design, Synopsys and Cadence and Omniverse and Blender and Autodesk, all of these tools are going to continue to be used. Some people say that the enterprise IT software industry is going to get destroyed. Let me give you the alternative view.
The enterprise software industry is limited by butts in seats. It's about to get a hundred times more agents banging on those tools. There are going to be agents banging on SQL, agents banging on vector databases, agents banging on Blender, agents banging on Photoshop. And the reason for that is because those tools, first of all, do a very good job. Second, those tools are the conduit between us. In the final analysis, when the work is done, it has to be represented back to me in a way that I can control. And I know how to control those tools. And so I need everything to be put back into Synopsys. I want everything to be put back into Cadence, because that's how I control it. That's how I ground-truth it.

Let me ask you a question about open source. So we have these closed-source models. They're excellent. We have these open-weight models. Many of the Chinese models are incredible. Absolutely incredible. Two days ago, you may not have seen this because you were busy on stage, but there was a training run that happened in this crypto project called BitTensor. Subnet 3, they managed to train a 4-billion-parameter Llama model, totally distributed, with a bunch of people contributing excess compute. But they were able to do it statefully and manage a training run, which I thought was a pretty crazy technical accomplishment. Yeah. Because it's, like, random people, and each person gets a little share. Our modern version of Folding@home. Exactly. So what do you think about the end state of open source? Do you see this decentralization of architecture as well, and decentralization of compute, to support open weights and a totally open-source approach to making sure AI is broadly available to everybody?

I believe we fundamentally need models as a first-class product, a proprietary product, as well as models as open source. These two things are not A or B. It's A and B. There's no question about it.
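The "stateful, distributed" training run described above rests on a simple core idea: many contributors compute gradients on their own local data, and a single shared model state is updated from their average. A toy sketch of that idea follows; it is an illustrative simplification, not BitTensor's actual protocol, and the linear model, contributor count, and learning rate are all made-up parameters:

```python
# Toy sketch of stateful distributed training via gradient averaging.
# Illustrative only: real decentralized runs add incentive mechanisms,
# fault tolerance, and compression that this sketch omits.
import numpy as np

rng = np.random.default_rng(0)

# Shared model state: weights of a linear model y = X @ w.
w = np.zeros(3)
true_w = np.array([1.0, -2.0, 0.5])

def local_gradient(w, n=64):
    """One contributor computes a gradient on its own private batch."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w
    err = X @ w - y
    return X.T @ err / n  # gradient of (1/2) * mean squared error

lr = 0.1
for step in range(200):
    # Each of 8 contributors computes a gradient on private data...
    grads = [local_gradient(w) for _ in range(8)]
    # ...and the coordinator averages them into one shared update,
    # keeping a single consistent ("stateful") set of weights.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 2))  # converges toward true_w
```

The "each person gets a little share" part of the conversation refers to the reward layer on top; the gradient-averaging loop is the technical accomplishment being described.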
And the reason for that is because a model is a technology, not a product. A model is a technology, not a service. For the vast majority of consumers, the horizontal layer, the general intelligence, I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love to use Claude. I love to use Gemini. I love to use X. And they all have their own personalities, as you know, which just kind of depends on my mood and depends on what problem I'm trying to solve. I might do it on X or I might do it on ChatGPT. And so that segment of the industry is thriving. It's going to be great. However, all these industries, their domain expertise, their specialization, has to be channeled, has to be captured, in a way that they can control. And that can only come from open models. The open model industry, we're contributing tremendously to. It is near the frontier. And quite frankly, even if it reaches the frontier, I think that world-class models as a product, as a service, are going to continue to thrive.

Every startup we're investing in now is open source first and then going to the proprietary model. Yeah, and the beautiful thing is, because you have a great router you connect it to, on day one, every single day, you're going to have access to the world's best model. And then it gives you time to cost-reduce and fine-tune and specialize. And so you're going to have world-class capabilities out of the chute every single time.

Can I ask a question? Nobody wants the U.S. to win the global AI race more than you. Right. But a year ago, the Biden-era diffusion rule really worked against the American diffusion of AI around the world. So here we are, a year into the new administration. Give us a grade. Where are we in terms of global diffusion and the rate at which we're spreading U.S. AI technology around the world? Are we an A? Are we a B? Are we a C? What's working? What's not working?
Well, first of all, President Trump wants American industry to lead. He wants the American technology industry to lead. He wants the American technology industry to win. He wants us to spread American technology around the world. He wants the United States to be the wealthiest country in the world. He wants all of that. At the current moment, as we speak, NVIDIA gave up a 95% market share in the second-largest market in the world, and we're at zero percent. President Trump, that's right, President Trump wants us to get back in there. And the first thing is to get licensed for the companies that we're going to be able to sell to. We've got many companies who have requested licenses, we've applied for licenses for them, and we've got approved licenses from Secretary Lutnick. Now we've informed the Chinese companies, and many of them have given us purchase orders. And so we're in the process of cranking up our supply chain again to go ship.

I think at the highest level, Brad, one of the things that we should acknowledge is this. Our national security is diminished when we don't have access to miniature motors, rare-earth minerals. It's diminished when we don't control our telecommunications networks. It's diminished when we can't provide sustainable energy for our country. It is fundamentally diminished. Every single one of these industries is an example of what I don't want the AI industry to be. When we look forward in time and we say, what do we want? What does it look like when the American technology industry, the American AI industry, leads the world?
We can all acknowledge that there is no way that one AI model wins universally. We can all acknowledge that that is an outcome that makes no sense. However, we can all imagine that the American tech stack, from chips to computing systems to the platforms, is used broadly by the world, where they build their own AI, they use public AI, they use private AI, whatever, and they can build their applications in their society. I would love for the American tech stack to be 90% of the world. I would love that. The alternative, if it looks like solar, rare earths, magnets, motors, telecommunications, I consider that a very bad outcome for national security. Agreed. Yeah.

How much are you monitoring the situation with the conflicts around the world right now? And how much does it worry you, Jensen? So China and Taiwan, and then helium availability coming out of the Middle East, I understand, can be a supply-chain risk to semiconductor manufacturing. How much do these situations worry you? How much time are you spending on them?

Well, first of all, I think in the Middle East, we have 6,000 families there. We have a lot of Iranians at NVIDIA, and their families are still in Iran. And so we have a lot of families there. The first thing is, they're quite anxious. They're quite concerned, quite scared. We're thinking about them all the time. We're monitoring and keeping an eye on them all the time. They have 100% of our support. I've been asked several times, are we still considering being in Israel? We are 100% in Israel.
We are 100% behind the families there. We are 100% in the Middle East. I was also asked, given what's happening in the Middle East, is that an area where we believe we can expand artificial intelligence to? I believe that there's a reason we went to war, and I believe at the end of the war the Middle East will be more stable than before. And so if we were considering it before, we should absolutely be considering it after. And so I'm 100% in on that.

With respect to Taiwan, we have to do three things. One, we have to make sure that we re-industrialize the United States as fast as we can, whether it's the chip manufacturing plants, the computer manufacturing plants, or the AI factories. How are we doing on that? We're doing excellent. By gaining the strategic support, by gaining the friendship of the supply chain of Taiwan, by gaining their friendship, by gaining their support, we were able to build Arizona and Texas, California, at incredible rates. They are genuinely a strategic partner. They deserve our support. They deserve our friendship. They deserve our generosity, and they're doing everything they can to accelerate the manufacturing process for us. And so I think that's number one. Number two, we ought to diversify the manufacturing supply chain, whether it's South Korea, whether it's Japan, whether it's Europe. We ought to diversify the supply chain, make it more resilient. And number three, let's demonstrate restraint. While we're increasing our diversity and resilience, let's not press, let's not push, unnecessarily. You need to be patient. Be thoughtful.

Is helium a problem? A lot of reports about helium. I think helium could be a problem, but it's also the case that the supply chain probably has a lot of buffer in it. These kinds of things tend to have a lot of buffer.
You know, you've made massive progress in self-driving. You made a big announcement. You've added many more partners, including BYD. There was just a video of you driving around in a Mercedes, and a huge announcement with Uber that you're going to have a number of cars on the road from many different manufacturers. Your bet, I believe, is that there's going to be an Android-type open platform that you're going to play a major part in, with dozens of car providers. And then maybe on the other side, there could be an iOS with Tesla or Waymo. What's your strategic thinking there, and how does that chessboard emerge? Because it feels like you have a pretty deep stack, and in some ways you're competing and in other places you're collaborative.

Yeah. Taking a step back: we believe that everything that moves will be autonomous, completely or partly, someday. Number one. Number two, we don't want to build self-driving cars, but we want to enable every car company in the world to build self-driving cars. And so we built all three computers: the training computer, the simulation and evaluation computer, as well as the car computer. We developed the world's safest driving operating system. We also created the world's first reasoning autonomous vehicle, so that it could decompose complicated scenarios into simpler scenarios that it knows how to navigate through, just like us, reasoning systems. And so that reasoning system, called Alpamayo, has enabled us to achieve incredible results. We're open. We vertically optimize, we horizontally innovate, and we let everybody decide: do you want to buy one computer from us? In the case of Elon and Tesla, they buy our training computers. Do they want to buy our training computer and our simulation computers? Or do you want to work with us to do all three and even put the car computer in your car? Our attitude is we want to solve the problem. We're not the solution provider.
And we're delighted however you work with us.

Let me build on this question, because I think it's so fascinating. You actually do create this platform. A thousand flowers are blooming. But it's also true that some of those flowers want to now go back down in the stack and try to compete with you a little bit. Google has TPU. Amazon has Inferentia and Trainium. You know, everybody's sort of spinning up their own version of, I think I can out-NVIDIA NVIDIA, even though they also tend to be huge customers. How do you navigate that? And what do you think happens over time? And where do those things play in the complexion of this, NVIDIA?

Yeah, really great. You know, first of all, we're the only AI company. We're an AI company. We build foundation models. We're at the frontier in many different domains. We build every single layer, every single part of the stack. We're the only AI company in the world that works with every AI company in the world. They never show me what they're building, and I always show them exactly what I'm building. Right. Yeah. And so the confidence comes from this. One, we are delighted to compete on what is the best technology, and to the extent that we can continue to run fast, I believe that buying from NVIDIA is still one of the most economic things they could do. And there's just incredible confidence there. Number one. Number two, we're the only architecture that could be in every cloud. And that gives us some fundamental advantages. We're the only architecture you could take from a cloud and put into on-prem, in the car, in any region. In space. That's right, in space. And so there's a whole part of our market, about 40% of our business. Most people don't realize this. For 40% of our business, unless you have the CUDA stack, unless you can build an entire AI factory, the customers don't know what to do with you. They're not trying to build chips. They're not trying to buy chips. They're trying to build AI infrastructure.
And so they want you to come in with the full stack, and we've got the whole stack. And so, surprisingly, NVIDIA is gaining market share. If you look at where we are today, we're gaining share. Do you think what happens is these guys try and they realize, oh my God, it's too much, and then they come back? Is that why the share grows? Well, we're gaining share for several reasons. One, our velocity has gone up. And we help people realize it's not about building the chip. It's about building the system. And that system is really hard to build. And so their business with us is increasing. In the case of AWS, I think they just announced, I think it was yesterday, that they're going to buy a million chips in the next couple of years. I mean, that's a lot of chips from AWS, and that's on top of all the chips they've already bought. And so we're delighted to do that. But number one, we're gaining share this last couple of years because we now have Anthropic coming to NVIDIA. Mistral is coming to NVIDIA. And the growth of open models is incredible. And that's all on NVIDIA. And so we're growing in share because of the number of models. We're also growing in share because all of these companies are outside the cloud, and they're growing regionally, in enterprise and industries, at the edge. And that entire segment of growth is really hard to address if all you're building is an ASIC.

Brad, related to that, and not to get in the weeds on the numbers, but analysts don't seem to believe, right? So if you look at the consensus forecast, you said compute could 1-million-X, right? And yet they have you growing next year at 30%, the year after that at 20%, and in 2029, which is supposed to be a monster year, at 7%. So if you take your TAM and you apply their growth numbers, it suggests that your share will plummet. Do you see anything in your future order book that would make that correct? First of all, they just don't understand the scale and the breadth of AI. Yeah. I think that's true.
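Brad's share-will-plummet arithmetic can be sketched directly. The 30%, 20%, and 7% endpoints are from the conversation; the starting 40% share, the fill-in rates for the middle years, and the assumed 35%-per-year market growth are illustrative assumptions, not figures anyone stated:

```python
# Sketch of the consensus-forecast arithmetic: apply analyst growth
# rates to NVIDIA's revenue while the overall AI-infrastructure market
# grows faster, and the implied market share falls every year.
# Starting share, mid-year rates, and market growth are assumptions.

nvda = 100.0    # index NVIDIA revenue to 100 in year 0
market = 250.0  # assumed starting market, i.e. a 40% share

consensus_growth = [0.30, 0.20, 0.15, 0.10, 0.07]  # ends at 7% in 2029
market_growth = 0.35                               # assumed TAM growth

shares = [nvda / market]
for g in consensus_growth:
    nvda *= 1 + g
    market *= 1 + market_growth
    shares.append(nvda / market)

print([round(s, 3) for s in shares])  # share declines every year
```

The point of the sketch is structural: whenever the forecast growth rate is below the market's growth rate, share must fall, so holding the consensus numbers while believing in a rapidly expanding TAM forces the "share will plummet" conclusion Brad describes.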
Most people think that AI is in the top five hyperscalers. Right. That's right. There's also an orthodoxy around the law of large numbers, where they have to go back to their investment-banking risk committee and show some model. They're not going to believe, in their minds, that $5 trillion goes to $15 trillion. It can go to $7 trillion, or they just have a $10 trillion company. It's all just CYA stuff, I think, ultimately. It's never happened before, so you can't say it will. And because you have to redefine what it is that you do. There was somebody who made an observation recently: NVIDIA, Jensen, how can you be larger than Intel in servers? And the reason for that is because the CPU market of the entire data center was about $25 billion a year. We do $25 billion, as you guys know, in the time that we've been sitting here. And so obviously, obviously, that was a joke. No, but it's all in the podcast. Don't worry, everything on this show is roughly true. Don't worry about it. It's all in. Wow. That was not guidance. But anyhow, the point is, how big you can be depends on what it is that you make. NVIDIA is not making chips, number one. Number two, making chips does not help you solve the AI infrastructure problem anymore. It's too complicated. Number three, most people think that AI is narrowly in the things that they talk about and hear and see. AI is much bigger. OpenAI is incredible. They're going to be enormous. Anthropic is incredible. They're going to be enormous. But AI is going to be much, much bigger than that, and we address that segment.

Tell us about data centers in space for a second. Yeah. We're already in space. How should the layman think about what that business is, versus when you hear about these big data-center build-outs happening on the ground? Well, we should definitely work on the ground first, because we're already here. That's number one. Number two, we should prepare to be out in space. And obviously, there's a lot of energy in space.
The challenge, of course, is cooling: you can't take advantage of conduction and convection, and so you can only use radiation. And radiation requires very large surfaces. So that's not an impossible thing to solve. And there's a lot of space in space. But nonetheless, the expense is still there. We're going to go explore it. We're already there. We're already radiation-hardened. We have CUDA in satellites around the world. They're doing imaging, image processing, AI imaging, and that kind of stuff ought to be done in space instead of sending all the data back here and doing the imaging down here. We ought to just do imaging out in space. And so there are a lot of things that we ought to do in space. And in the meantime, we're going to explore what the architecture of data centers looks like in space. And it'll take years. It's okay. I've got plenty of time.

I wanted to double-click on healthcare. I know you've got a big effort there. We're all of a certain age where we're thinking about lifespan, healthspan. I mean, we all look great, I think. Some better than others. I think some better than others. I don't know what your secret is, Jensen. I mean, what are you taking? What's off the menu? You've got to talk to me when we're backstage. I want to know in the green room what you've got going on. Squats and push-ups and sit-ups. Perfect. Okay. But from what you know in terms of the build-out in healthcare, where is that going, and what kind of progress are we making? I was just using Claude to do some analysis, saying, where are all these billing codes? We spend twice as much money in the U.S. We seem to get half as much. It seemed like 15% to 25% of the dollars spent were on these first GP visits. And I think we all know ChatGPT and a large language model does a better job, more consistently, today at a first visit. So what has to happen there to break through all that regulation and have AI have a true impact on the health care system?
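Going back to the space-cooling point for a moment: the "very large surfaces" claim follows from the Stefan-Boltzmann law, since radiation is the only heat path in vacuum. A rough sizing sketch, where the emissivity, radiator temperature, and 1 MW load are assumed numbers for illustration, not figures from the conversation:

```python
# Rough Stefan-Boltzmann sizing of a space radiator. In vacuum there
# is no convection or conduction to an ambient medium, so every watt
# of compute heat must leave as thermal radiation: P = e * sigma * A * T^4.
# Emissivity, temperature, and load below are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject power_w at temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A 1 MW compute cluster with radiators running at 300 K:
area = radiator_area(1e6, 300.0)
print(round(area))  # roughly 2,400 m^2, hence "very large surfaces"
```

Running the radiators hotter shrinks the area steeply (the T^4 term), which is one reason the architecture question Jensen mentions is genuinely open rather than a simple scaling of ground designs.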
There are several areas where we're involved in health care. One is AI physics, or AI biology: using AI to understand, represent, and predict biological behavior. And so that's one that's very important in drug discovery. The second is AI agents, and that's where the assistants are, helping with diagnosis and things like that. OpenEvidence is a really good example. Hippocratic is a really good example. Love working with those companies. I really think that this is an area where agentic technology is going to revolutionize how we interact with doctors and how we interact with health care. The third part that we're involved in is physical AI. The first one is AI physics, using AI to predict physics. The second one is physical AI, AI that understands the properties of the laws of physics, and that's used for robotic surgery. Huge amounts of activity there. Every single instrument, whether it's ultrasound or CT or whatever instrument we interact with in a hospital, in the future will be agentic. OpenClaw, in a safe version, will be inside every single instrument. And so in a lot of ways, that instrument is going to be interacting with patients and nurses and doctors in a very unique way.

Yeah, I mean, we're seeing so much investment in AI weapons. It would be wonderful to see some investment in AI EMTs and paramedics, and saving lives, not just taking them. Which I think is a great segue into robotics. You've got dozens of partners. We have this very weird, I don't know what I want to call it, lost decade or 20 years of Boston Dynamics. Google bought a bunch of companies. They then wound up selling them and spinning them out, where people just thought robotics is just not ready for prime time. And now here we have the world's greatest entrepreneur at this time, tied with you, Elon Musk, doing, that was a good save, I hope, Optimus, pretty impressive. And then other companies in China.
How close is that to actually being in our lives, where we might see a robotic chef, a robotic nurse, a robotic housekeeper, this humanoid form factor actually working in the real world, knowing what you know with those partners and the fidelity, especially in China, where they seem to be doing as good a job as we're doing here, or maybe better?

We invented the industry, largely. America invented it. You could argue we got into it too soon. Yeah. And we got exhausted. We got tired about five years before the enabling technology appeared. Yes, the brain. Yeah, yeah. And we just got tired of it a little too soon. Okay, that's number one. But it's here now. Now, the question is, how much longer? From the point of a high-functioning existence proof to reasonable products, technology never takes more than a couple, two, three cycles. And a couple, two, three cycles would basically be somewhere around three to five years. That's it. Three to five years, we're going to have robots all over the place. I think China is formidable. And the reason for that is because their microelectronics, their motors, their rare earths, their magnets, which are foundational to robotics, they are the world's best. And so in a lot of ways, our robotics industry relies deeply on their ecosystem and their supply chain. And they're obviously moving very quickly. Our robotics industry will have to rely a lot on it. The world's robotics industry will have to rely a lot on it. And so I think you're going to see some fast movements here.

Ultimately, one for one? Elon seems to think we're going to have one robot for every human. Seven billion for seven billion, eight billion for eight billion. Well, I'm hoping more. Yeah, I'm hoping more. Well, first of all, there's a whole bunch of robots that are going to be in factories working around the clock. There's going to be a whole bunch of robots that don't move. They move a little bit.
Almost everything will be robotic. What does the world look like? I think robotics, for me, is one of the pieces that unlocks economic mobility opportunities for every individual. When everyone got a car, they could go and do a lot of different jobs. When everyone gets a robot, their robot can do a lot of work for them. They can stand up an Etsy store or a Shopify store. They can create anything they want with their robot. They can do things that they independently cannot do. I think the robot is going to end up being the greatest unlock for prosperity, for more people on Earth, than we've ever seen with any technology before. Yeah, no doubt. I mean, just the simple math at the moment is we're millions of people short on labor today. Right. Yeah. Right. We're actually really desperately in need of robotics. And so all of these companies could grow more if they had more labor. That's number one.

Some of the things that you mentioned are super fun. I mean, because of robots, we'll have virtual presence. I'll be able to go into the robot in my house and virtually operate it while I'm on a business trip. Right. Walk around the house. Walk the dog. Yeah, walk the dog. Rake the leaves. Yeah, exactly. Take out the dog. Maybe not quite that, but just wander around and see what's going on in the house. Chat with the dogs. Chat with the kids. Time travel, also: we're going to be able to travel at the speed of light. And so clearly, we're going to send our robots ahead of us. I'm not going to send myself. I'm going to send a robot. Check it out. Yeah, yeah. And then I'm going to upload my AI. Well, it's inevitable. It unlocks the moon, and it unlocks Mars, as targets for colonization, which gives us infinite resources. Getting material back from the moon is effectively zero energy cost, because you can use solar and accelerate.
So you could have factories on the moon that make everything the world needs, and the robots are going to be the unlock for enabling that. That's right. But distance no longer matters. Distance doesn't matter.

The more revenue we get out of models and agents, the more we can invest in building the infrastructure, which then unlocks more capabilities in models and agents. Dario, on Dwarkesh's podcast recently, said by '27, '28 we'll have hundreds of billions of dollars of revenue out of the model companies and the agent companies. And he forecasts a trillion dollars by 2030. This is non-infrastructure AI revenue. I think he's being very conservative. I believe Dario and Anthropic are going to do way better than that. Way better than that. So from $30 billion to $1 trillion. And the reason for that is the one part that he hasn't considered: I believe every single enterprise software company will also be a reseller, a value-added reseller, of Anthropic's tokens. A value-added reseller of OpenAI's. That's right. And that part of their... You get this logarithmic expansion. Yes. Their go-to-market is going to expand tremendously this year.

What do you think, in that world, is the moat? What's left over? I mean, you have some moats that are, frankly, I think, as this scales, almost insurmountable. The best one that nobody talks about is probably CUDA, which is just an incredible strategic advantage. But in the future, if a model can be used to create something incredible, then the next spin of a model can be used to maybe disrupt it. So in your mind, what do you think, for these companies that are building at that application layer, what's their moat? How do they differentiate themselves? Deep specialization. Deep specialization. I believe that they're going to have general models that are connected into the software company's agentic system.
Many of those models are cloud models and proprietary models, but many of those models are specialized sub-agents that they've trained on their own. Right. So the call to arms for entrepreneurs is: look, know your vertical. That's right. Know it deeper and better than everybody else. That's right. And then wait for these tools, because they're catching up to you, and now you can imbue them with your knowledge. That's right. And the sooner you connect your agent with customers, that flywheel is going to cause your agent to get hyper-specialized. It very much is an inversion of what we do today, because today we build a piece of software and we say, what generalizes? And then let's try to sell it as broadly as possible, and then sell the customization around it. And in fact, exactly right, we create a horizontal. But notice there are all these GSIs and all of these consultants, who are specialists, who then take your horizontal platform and specialize it. Exactly. And the customization is arguably a five or six times bigger industry. It is. Absolutely. Yeah. That's right. Very much is. That's right. So I think that these platform companies have an opportunity to become that specialist, to become that vertical domain expert.

You know, I just want to give you your flowers. I think it was three years ago you said, you're not going to lose your job to AI. You're going to lose your job to somebody using AI. And here we are. The entire conversation has revolved around this concept of agents making people superhuman, and the business opportunity expanding, and entrepreneurship expanding. You actually saw it pretty clearly. Have you changed your view? Well, I know this is doomer. I'm not a doomer. I do have doomer... You can hold space for, I think, two ideas. One is, there are going to be a large... That's doomer J-Cal. But that's just because he doesn't hang out with me enough. We talk a little bit. Be careful. We don't talk about it. He will show up at your breakfast table.
He'll follow you around. I'm not asking for it. You can come with me and Tucker. We ski in Japan every January. I love it. Tucker and I go on road trips.

There is going to be job displacement. And then the question becomes, do those people have the fortitude, the resolve, to then go embrace these technologies? We're going to see 100% of driving by humans go away. That's a beautiful thing in the lives saved, but we have to recognize that's 10 to 15 million people in the United States who are employed in that way. And so that is going to happen, yes? I think that jobs will change. For example, there are many chauffeurs today who drive cars. I believe that many of those chauffeurs will actually be in the car, sitting behind the steering wheel, while the car is driving by itself. And the reason for that is, remember what a chauffeur does. In the end, these chauffeurs, they're helping you. They're your assistants. They're helping you with your luggage. They're helping you with a lot of things. And so I wouldn't be surprised, actually, if the chauffeurs of the future become your mobility assistants, and they are helping you do a whole bunch of other stuff. And checking into the hotels. And the car is driving by itself. The autopilot in planes created a lot more pilots and didn't take any of the pilots out of the cockpit, even though the autopilot is flying the plane 90% of the time. And by the way, while that car is driving itself, that chauffeur is going to be doing a bunch of other work on his phone. And he's going to be making money doing other things, for example, coordinating a bunch of things for you. The pie just grows, in a way. Yes, every job will be transformed. Some jobs will be eliminated. However, we also know that many, many jobs will be created.
The one thing that I will say to young people who are coming out of school, who are anxious about AI: be the expert at using AI. Yes. Look, we all want our employees to be expert at using AI. And it's not trivial, not trivial. And so knowing how to specify, not to overprescribe, leaving enough room for the AI to innovate and create while we guide it to the outcome we want, all of that requires artistry. You had this great advice when you were at Stanford, I think it was, which is, I wish you pain and suffering. Do you remember that? Yeah. Fantastic. What's your advice to young people around what they should be studying? So if they're sort of about to leave high school, because those are the kids that are really AI-native, they haven't made a decision about college, what to study, or whether to go to college at all. How do you guide those kids? What would you tell them? I still believe in deep science, deep math, language skills. As you know, language is the programming language of AI. The ultimate programming language. And so, as it turns out, it could be that the English major could be the most successful. And so I think I would just advise, whatever education you get, just make sure that you're deeply, deeply expert in using AIs. One of the things that I wanted to say with respect to jobs, and I want everybody to hear it, is that, in fact, at the beginning of the deep learning revolution, one of the finest computer scientists in the world, whom I deeply respect, predicted that computer vision would completely eliminate radiologists, and that the one field he advised everybody to not go into was radiology. Ten years later, his prediction was 100% right. Computer vision has been integrated into all of the radiology technologies and radiology platforms in the world, 100%. The surprising outcome is that the number of radiologists actually went up, and the demand for radiologists has skyrocketed. The reason for that is because everybody's job has a purpose and its tasks.
The task that you do is studying the scans, but your purpose is to help the doctors help the patient diagnose disease. And so what's surprising is, because the scans are now being done so quickly, they could do more scans, improving health care. Yes. But doing more scans more quickly allows patients to be onboarded a lot more quickly. Treated a lot more quickly. And as it turns out, because hospitals enjoy making money too, they're doing more scans. They're treating more customers and more patients. The revenues go up. And guess what? A country that grows faster, whose productivity increases, a wealthier country, can put more teachers in the classroom, not fewer teachers in the classroom. That's right. You just give every one of those teachers a personalized curriculum for every student in the room. It makes them all bionic and leads to a lot more. Every single student will be assisted by AI, but every single student will need great teachers. Amazing. Jensen, congratulations on all your success. And really, this is an incredibly positive, uplifting discussion. We really appreciate you taking the time for us. He is the steward we need. You are. I think you need to be more vocal. I'm being very, very honest. Be more vocal about the positive side of it. I think there's so much doomerism. But I also think it takes humility, to have this level of success and be humble about it: we're making software, guys. Yeah, and I think that that's actually really healthy for people to hear. We have done this before. We have invented categories and industries before. Yes. We don't need to go to this scare-mongering place. It does nothing. And we get to choose, right? We have autonomy and agency. We get to pick how to... We sure do. Okay, everybody, we'll see you next time on the All-In interview. Thank you. Okay, well done, brother. Thanks, man. Good job. Thank you, sir. That was awesome. Good. Appreciate you. You guys are awesome. Look at this. Look at this big crowd behind you guys.
Man, I think they're here for you.
