Anthropic's $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence
All-In with Chamath, Jason, Sacks & Friedberg · 1:29:18 · 3d ago
"Be aware that guest host Brad Gerstner's investment in Anthropic may color his enthusiastic endorsement of their decisions, though it's openly stated."
Transparency
Mostly Transparent
Primary Technique
The podcast features VC hosts bantering and debating AI developments like Anthropic's Mythos model security concerns, OpenAI competition, Anthropic's revenue ramp, and geopolitical topics like Iran ceasefire and Israel's influence. Beneath the surface, Brad Gerstner's status as an Anthropic investor lends authority to his praise without overt pressure to invest, but the show's transparent opinion format makes biases obvious. No significant covert mechanisms; it's healthy rhetoric from a self-selected audience.
Worth Noting
Provides granular VC perspectives on Anthropic's revenue ramp, Mythos vulnerabilities, and AI market dynamics from investors like Brad Gerstner.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
How many PRs do you think are going to get pushed to the core structural internet in 100 days? What's the over-under number? Because I'll give you a number. You're going to say zero. My answer to that is... I'll say like 10,000, but it's going to be a meaningless thing. But if it prevents your browser history from being released to everybody in the world, Chamath, that may be something that you're willing to let 100 days pass on. I think you got Chamath's attention when you said browser history. What about the dick pics? Chamath, he's going to release them himself. ...to let your winners ride. Rain Man gave it to the fans and they've just gone crazy with it. Love you, besties. All right, everybody. Welcome back to the number one podcast in the world. David Freeberg is out this week, but in his place, the one, the only, our fifth bestie, Brad Gerstner. I mean, why don't you ever give me... puts a little namaste in your payday anymore? You used to be, you know... you used to be the greatest moderator, but now it's just, you know what, these guys beat me up. They beat me up and they just beat the joy out of me doing this program. It's because you're a Ro Khanna apologist. No, no. We'll get into it. Okay, save it for the... F**k. A Ro Khanna apologist just because I said, like, hey, they've stopped retard maxing and they've started doing some logical things. It's great to be here. Good to have you. Good to have you here. Of course, we have David Sacks is back. Everybody wants to hear from David Sacks. We missed you last week, Bestie. We didn't beat the joy out of you. We just tried to beat some of the hot air.
Any fluff that you can put on the show that just involves you talking and saying nothing is... we got... yeah, okay, yeah, cut it right out. And we'll cut it out and we'll just put a promo in for thesyndicate.com. Thank you. Also with us... how's your retard maxing going since last week? Did you have a retard maxing full weekend? Did you have a good full weekend of just smoking cigars on the back deck and not ruminating about all the chaos you've caused in the last 20 years? I think I've done generally more good than not. Oh, you have, but there's been some chaotic moments. Don't think about it, Chamath. You can't, bro, you can't have ups without downs, man. It's like, what are you there to do? Just, like, placate everybody and be a loser? Or are you there to be a winner? Yes, you're in the arena, but have you stopped going there after realizing you're ruminating? What's up with this sudden interest in retard maxing? Are you like that clavicular for retard maxing? No, the world finally caught up with me. That's it. I mean, I've been retard maxing this whole time. They just didn't have a name for it, guys. Eli's videos are really good. I watched two more this week. Take us through what's so appealing about not ruminating, smoking a cigar, and just living your life. Because what he says actually works at every level of society and every sort of thing that you may want to achieve.
Even if you're trying to, like, climb the rungs, you very quickly learn that the more you want something, the less you're going to get it. And I think that's, like, his real message: let go, live life, and just try stuff, or don't try stuff. And I think that that detachment is really healthy for people. I like it. I like it a lot. Who's the guy who says this? I actually didn't know. Elisha Long, but Eli, I think, is how he goes by. He's fantastic. He... he's got a YouTube channel, and he's like... this guy is the new guy, modern-day philosopher. He gives you a roadmap for how to live your life right. New-age sage. What's the name of the guy, the character's name from Dune? I was into girls. I didn't read these books. I was dating girls. He's the Lisan al-Gaib of the modern internet. This is why we need Freeberg here, is to explain these deep holes. All right, listen, we got a lot to get to. The basic point is build something and don't ruminate, okay? Ruminating is just not worth it. Just everybody go forward. Just do stuff. Stop blathering in your own head. Just do stuff. Absolutely. All right, listen, speaking of doing stuff: Anthropic is withholding its newest model, Mythos (I'm using the Greek pronunciation), its newest model, Mythos, saying it is far too dangerous for any of us to have access to it. According to the company, the model autonomously found thousands of vulnerabilities, including bugs in every major operating system and web browser. This little study they did included 20-year-old exploits that had been missed by security audits for decades. Some examples: they found a 27-year-old vulnerability in OpenBSD, used in firewalls and critical infrastructure. They found a 16-year-old bug in FFmpeg that was missed by automated tools after 5 million scans. The Linux kernel, all kinds of bugs they found. They released a hype video hyping up why they were not going to share this model. Here's Dario. Come on the program anytime, brother.
But as a side effect of being good at code, it's also good at cyber. The model that we're experimenting with is by and large as good as a professional human at identifying bugs. It's good for us because we can find more vulnerabilities sooner and we can fix them. It has the ability to chain together vulnerabilities. So what this means is you find two vulnerabilities, either of which doesn't really get you very much independently, but this model is able to create exploits out of three, four, sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. All right, Brad. By the way, that set they're using there, that's the same room those guys play Dungeons & Dragons in every Sunday. Brad, you're an investor in this company. Is this virtue signaling or is it reality? Is this a good move by them, to not release this model and be thoughtful, give it to a handful of people, and just find all the bugs it can before releasing it to the public? And we've got a lot more issues to discuss. I actually think they deserve a ton of credit here. And let me walk you through why. Right, the company could have just released Mythos, broken a lot of core things on the internet. Oftentimes in Silicon Valley, we say move fast and break things. In this case, it means just releasing the model to move further ahead of your competition. But here the company realized it would wreak havoc. They ran their own vulnerability testing. They saw that it would allow offensive hacking and people to expose browsers and browser history, expose credit cards, you know, on the internet. So, you know, what I like about this is they didn't need government to hold their hand on this. We have plenty of government regulations. They know it's in the best long-term interest of the company and the industry. You know, so they set up Project Glasswing. It's an AI-driven, you know, kind of cyber coalition. Apple, Microsoft, Google, Amazon, JPMorgan.
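The "chaining" idea described above, where each bug alone grants only a small foothold but several composed in sequence yield a full compromise, can be modeled as a path search over capability states. Here is a minimal, purely illustrative Python sketch; every vulnerability name and capability state below is invented for the example, not taken from the episode or any real system.

```python
from collections import deque

# Hypothetical vulnerability catalog: each entry is
# (name, capability the attacker must already have, capability it grants).
# Individually, none of these reaches "root"; only a chain does.
VULNS = [
    ("info-leak",       "unauthenticated", "knows-memory-layout"),
    ("auth-bypass",     "unauthenticated", "user-session"),
    ("path-traversal",  "user-session",    "reads-config"),
    ("cred-in-config",  "reads-config",    "admin-session"),
    ("priv-escalation", "admin-session",   "root"),
]

def find_chain(start, goal):
    """Breadth-first search for the shortest sequence of vulnerabilities
    upgrading the attacker's capability from `start` to `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for name, needs, gains in VULNS:
            if needs == state and gains not in seen:
                seen.add(gains)
                queue.append((gains, chain + [name]))
    return None  # no chain exists

chain = find_chain("unauthenticated", "root")
print(chain)  # → ['auth-bypass', 'path-traversal', 'cred-in-config', 'priv-escalation']
```

Defenders can run the same search: any state with a path to a high-value capability marks the bugs worth patching first, since removing one edge breaks the whole chain.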
40 of the most important companies, and their goal is very simple. Let's spend 100 days, use advanced AI to find and to fix and to harden these software vulnerabilities before hackers exploit them. Now, what I think this represents, Jason, is a threshold that we're crossing. Mythos, and Spud, which is going to be out from OpenAI any day now, which is the first Blackwell-trained model at OpenAI. They represent the beginning of what I would call AGI models. These are models with massive step-function improvements in intelligence. And they're just too smart to be released immediately. You know, and by the way, there was nothing that said that every time you finish a model, you've got to immediately release it GA. So they set up this idea of sandboxing, building defensive alliances, you know, in order to move away from that regime. I think it shows, and Sacks and I have talked about this a lot, so I'm interested to hear what he thinks. It shows you can trust the industry and market forces in coordination with the government. They were talking to the government about this, but they're not relying on some top-down regulation in order to do this. They laid out a blueprint that seems to me very pragmatic: now that we're at this threshold, we're going to sandbox these things. I think that OpenAI will end up doing the same thing. I think Google will end up doing the same thing. It's an aggressive way to keep the pressure on and win the race at AI while making the tradeoffs to protect safety. So, you know, I think you're always going to have to make these tradeoffs. I think in this case, it was a great move by Dario and team, and I think they deserve a lot of credit. Sacks, when you look at this, we had Emil Michael on the program a couple of weeks ago. It might have been four or five weeks ago.
And we had a very thoughtful discussion about, hey, if the government is going to have these tools, you know, and Anthropic wants to withhold them, you know, what is the proper relationship there? You have to think that the government, and I know you don't speak for all parts of the government, if you were just going to run through the game theory, they must have gone to the government and said, listen, this thing is so powerful, it can put together two or three hacks, create a novel attack vector, and this is incredibly dangerous. What if China has it? And if this thing is as powerful as Dario says it is, then this is an offensive weapon as well for us to take out, let's just pick, you know, a prescient issue, North Korea's ballistic missile program. This is equivalent, the way it's being described, to the Manhattan Project, perhaps. So what are the chances, two-part question for you, Sacks, that China already has this and is using it? And do you think Dario is doing the right thing by regulating themselves? I think Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people. And we've seen a pattern in their previous releases of, at the same time they roll out a new model or new model card, something like that, they also roll out some study showing really the worst possible implication of where the technology could lead. We saw this last year, about a year ago. They rolled out this blackmail study where supposedly the new model could blackmail users. There's been a whole bunch of these things. Actually, I went back to Grok and I just asked, hey, give me examples where Anthropic has basically used scare tactics. And it's a pattern, okay? It's a pattern. Okay. These guys, I'm not saying it's not sincere, but they have a proven pattern of using fear as a way to market their new products.
And if you think back to, again, my favorite example is this blackmail study where they prompted the model over 200 times to get the result they wanted. And that result was clearly reverse-engineered and it got them the headlines they wanted. And I would say the proof that it's reverse-engineered is we're now a year later, there's a bunch of open source models out there that have the same level of capability that that Anthropic model had, and have you seen any examples of blackmail in the wild? I don't think so. So in other words, if that study were true in the sense of being a likely outcome of that model, I think you would see examples in the wild of that behavior. We haven't seen any of that in the past year. Now, let's talk about this specific example with cyber hacking. I actually think that this one is more on the legitimate side. I mean, look, the reason why I bring this up is anytime Anthropic is scaring people, you have to ask, is this a tactic? Is this part of their Chicken Little routine? Or is it real? You know, are they crying wolf or not? I actually would give them credit in this case and say this is more on the real side. It just makes sense, right? So that as the coding models become more and more capable, they're more capable of finding bugs. That means they're more capable of finding vulnerabilities. And like one of their engineers said, that means they're more capable of stringing together multiple vulnerabilities and creating an exploit. And so I do think that over, say, the next six months, we're going to have this, call it, one-time period of catching up, where AI-driven cyber is going to be able to detect a whole range of bugs that maybe have been dormant over the past 20 years, across a wide range of systems. And so I do think that there is real risk here.
And I do think, therefore, that having this pre-release period makes a lot of sense, where they're giving the capability to all these software companies that have existing code bases to use the tool to detect the vulnerabilities themselves so they can patch them before these capabilities are widely available. And by the way, it won't just be Anthropic that makes these capabilities available. But we know that, like, let's say the Chinese open source models like Kimi K2, it's about six months behind. So we have a window here of maybe six months where we're still in this pre-release period where I think companies that have large code bases can get advanced access to this model. And I guess OpenAI is going to release a similar thing in the next few weeks. I do think that every company or IT department or CISO that is managing code bases should take this seriously and use the next few months to detect any, again, like, dormant bugs or vulnerabilities and roll out patches. If everybody does their job and reacts the right way, then I do not think it will be the doomsday scenario that Anthropic is sort of portraying. But it's one of these things where the fear might end up being a good thing in order to drive the correct behavior. So I ultimately think this is going to work out fine, but you do need everyone to kind of pay attention, use the capabilities, fix the bugs. Then we're going to get into a big arms race between AI being used for cyber offense and AI being used for cyber defense. But it'll be a more normal sort of period. Chamath, we have Dario and a number of the participants here taking this super seriously. They're making a big statement. Sacks, a very nuanced take there, I think. What's your take on how do these companies have it both ways? Hey, this shouldn't be regulated. This should be regulated. If this is, in fact, a cataclysmic, oh my God, they're going to hack everything, what if the Chinese have this right now?
That would speak to more government coordination, regulation, or some kind of relationship between the CIA, the FBI for domestic stuff, and these companies, because it is a non-zero chance that the Chinese have an equal capability here. We're assuming they're behind, but who knows what they're doing behind closed doors. So what's your take on this? Is it the boy who cried wolf, or is this the real deal now? I think it's mostly theater. Okay. In February of 2019, when Dario was still at OpenAI, they did the same thing with GPT-2. That was a 1.5-billion-parameter model, which sounds like a total fart in the wind in 2026, but at that time this 1.5-billion-parameter model was supposed to be the end of days, and it was supposed to unleash this torrent of spam and misinformation, and that was the big bugaboo at the time. And so what happened? They went through this methodical rollout over six or nine months. They started releasing the smaller parameter models and then they scaled up to the big 1.5-billion-parameter model, and at the end of it, it was a huge nothingburger. If you actually think that Mythos is capable of doing what it says it can do, two things are true. One is a very sophisticated hacker can probably do those things right now with Opus. And two, if these exploits are this easy to find, whether you use Opus or whether you use Mythos, the reality is you'd have to shut down the internet for about five years to patch them all. So when you see, like, a large multi-trillion-dollar G-SIB bank... it's a bit of theater. Why? What do you think they can actually accomplish in two months? Do you actually think that if there's these vulnerabilities, it's all going to get fixed? Let's give them six months. Let's give them nine months. But the reality is that capitalism moves forward, the funding needs move forward, and the need for these guys to build adoption moves forward. And that's going to supersede what this is. So I do think that Sacks is right.
That they have figured out a very clever go-to-market muscle here, and a go-to-market motion that activates hyper-attention and hyper-usage. And so I give them tremendous credit. And I'll maintain what I've maintained before: Anthropic is shooting the lights out right now. This is like Steph Curry going bananas from everywhere on the court. These guys are hitting threes, Klay Thompson, it's all net, okay? So huge kudos to Anthropic. But we've seen it before. We saw it when these folks were the principal architects at OpenAI, and we're now seeing the same playbook here. I think we'll look back, and I think what we'll say are these two things. One is, if we're really going to patch all these security holes, we need to shut down the internet for some number of years, honestly, literally years. And the second is, an advanced hacker can probably do this today with Opus if they really wanted to. Okay. Hey, Brad, I'll get you in here for the last word. I'm going to go with, yeah, maybe they did cry wolf before, but based on what I see with these models advancing and using them, and I'm using a lot of the open source ones right now from China, I think that this is, like, a code red kind of moment. This is DEFCON. We should be taking this deadly seriously. And I think these companies have got to coordinate with the CIA. And this is equally a defensive as an offensive opportunity. Do you think this is a nationalization of AI? No, actually, I don't think it should be nationalized, although I did see people sort of insinuating that. I think these companies need to build a group, Brad, that works and coordinates with the CIA. I assume that they're already doing this. I'm assuming, you know, Emil Michael and, you know, Trump and everybody have these people in a room, and that they've given the DEFCON and said, hey, how can our government use this to stop bad actors? And this is already being coordinated with the CIA and the FBI.
I am 100 percent certain of that, that Dario went to them and said, look what we found. This is the real deal. I'll give you the last word on this, Brad, since you're an investor in both companies; you know them quite well. The Frontier Model Forum, which was put together in '23, is cooperating on anti-distillation and adversarial-distillation stuff as we speak, right? They don't want to make it easy on, you know... so Google and OpenAI and Anthropic, they're coordinating on this stuff. You know, there are times where I pushed back on Anthropic because I thought it was, you know, perhaps regulatory capture or something else. This is very different in my mind, right? Dario could have easily come out and said, oh my God, we passed a threshold, we need to have a government moratorium. Remember, even our friend Elon called for a six-month moratorium in 2023 because of civilization risk. This guy didn't do that. Instead, he said, okay, what should we do? I'm going to get 40 of the leading companies together. We're going to spend 100 days sandboxing, hardening the systems, and then we're going to keep pushing forward. What do you honestly think is going to get accomplished in 100 days? How many PRs do you think are going to get pushed to the core structural internet in 100 days? What's the over-under number? Because I'll give you a number. You're going to say zero. My answer to that is... I'll say like 10,000, but it's going to be a meaningless thing. But if it prevents your browser history from being released to everybody in the world, Chamath, that may be something that you're willing to let 100 days pass on. I think you got Chamath's attention when you said browser history. What about the dick pics? Chamath is going to release them himself. Right now, Chamath's like, hey, Chinese hackers, here are my dick pics. Please put them out. Oh, my God.
We have to be out there complimenting when they're doing the right things, or relying on the market, rather than running to the nanny state and saying, do more of this. So this to me was just an example of a good balance. I'm sure we're going to have plenty of debates about this in the future. But, you know, this is one I would like to see more of. This is why, to use your word, Jake, I tried to have a more nuanced take: because we have no choice but to take this seriously. Whether it's total theater, whether it's fear-mongering, and they do have a pattern around this, we can't take the risk, right? And it does logically make sense that as these models become more and more capable at coding, they're going to get better at cyber. And there's going to be that one-time period where you're moving from pre-AI to post-AI and you need a patch for that. So my guess is we're going to see a lot of patches over the next few months. I think that that will resolve the problem. I think this is a case where I'm going to give them the benefit of the doubt. I think that, you know, I've criticized them in the past. I think that blackmail study was embarrassing to the level of being a hoax. But I think in this case, I'm going to give them credit and say that I think that it's legit. So it's not the Anthropic hoax. This could be legit. We have no choice but to treat it that way. Of course. Yeah. I mean, even if two things could be true at the same time, Sacks, they could have used this tactic before. It could be performative, like the video with the dramatic music in the background. It does have a little bit of drama to it, and the way they presented it is very dramatic. But it does make logical sense that the one company that made the bet on code bigger than anybody else would be the one who would discover this quickest. And in 100 days, that's a pretty big advantage versus the hackers. But let me take one more point there, Chamath.
The most important thing that people haven't talked about here is the amount of code being pushed right now because of these tools is 10x, 100x in most organizations. So we need to have this type of security embedded in these new coding tools, to do it in real time. That's the opportunity. There should be real-time correcting of this. If this is real, they picked the wrong companies. Meaning there are energy companies, folks that control nuclear reactors. There are airplane companies that are flying hundreds of thousands of people in essentially manufactured missiles of, like, streaming gas going at 500 miles an hour. None of those companies were the ones that were included in this. And so I think if you really thought that this was end of days, at a minimum, we can agree, maybe we should have expanded the circle a touch. Well, maybe those are customers of the ones they're including here. Anyway, this is a really important story. We'll obviously track it in the coming weeks to see what turns out to be reality. And Dario, do come on the program at some point. Hey, Brad, will you get Dario to come on the program? I've invited him, like, three times. I got his phone number. He's ghosted me. I don't know why. He's ignored you? I literally got an introduction from, like, one of the number one venture capitalists in the world. He's on the cap table very early. He just won't respond. I don't know why. I would tell you, Dario's podcast with Dwarkesh, who I think is an excellent podcaster, I've listened to that three or four times, taken notes every time. It is a really exceptional piece of work by them. All right, let's keep moving. We've got a lot on the docket today. J-Cal, you may once again be tarred with your affiliation with us. Poor you. I mean, I don't care. Literally, I've got friends on both sides of the aisle. I have friends. Of course you do. Even J-Cal. Even J-Cal has friends everywhere.
Let me ask Brad a question here, just while we're on the topic of Anthropic. There was a really interesting story, or tweet, I guess you could say, by the founder of OpenClaw. Peter. Peter, yeah. What's his name? Peter Steinberger. Steinberger. Steinberger, yeah. Renowned coder who created OpenClaw, which is kind of the thing that launched this whole agent era now, I guess you could say. In any event, he said that Anthropic was cutting off his access. Okay, what's the call? Is that the next topic? This is on the docket. It's a little bit nuanced. Everybody using OpenClaw would take their $200-a-month subscription to Anthropic, which was essentially, like... people were using more tokens than average. The people from OpenClaw, it is very verbose, and those people are 100x the usage of the average subscriber. So he said, you can't use your $200, you have to use the API. You move from the $200 plan to the API, add a zero to your token cost, or more. And so they essentially ankled OpenClaw. And then 10 days later or less, they released, or announced, their new agent technology, which is, according to them, a safer, better version of OpenClaw. So hey, all's fair in love and war, and they have basically shot a huge cannon across the bow of OpenClaw, right? Can you just explain that? Exactly. So I think you're right that they systematically copied, feature by feature, OpenClaw, incorporated that into Claude, and then the coup de grâce was basically cutting off OpenClaw's oxygen. Can you just explain exactly what they did? Okay, very simply: when you buy a subscription to these services, they have blended your usage across many users. So there's, you know, nine out of 10 users use less than the tokens they're paying for, and the top 10% use much more. When OpenClaw became a phenomenon, the number one open source project in history on GitHub, with all of this usage, people went crazy. And you heard me talking about how crazy I went for it.
Those people with the $200 subscriptions were using $2,000, $20,000 worth of tokens. So they said you can no longer use your subscription, you know, either your professional or enterprise subscription at $200, and plug that into your OpenClaw. You now have to go to the API and pay per usage. So no more, like, unlimited. If you use Anthropic's own agent harness, are you part of the bundled flat rate? You can assume that that's what they'll do, which, if you were thinking on an antitrust level, might be token dumping or price dumping. I'm not saying, like, I'm ratting them. Bundling. No, it's like bundling, isn't it? Well, price dumping or bundling. When you price something under the market price, in antitrust that would be price dumping, right? And if you were to bundle, it would be like the bundling issue. Critically important: you can use OpenClaw via the Claude API. And every company has a right to set the price for its products. It's just saying that, under their current regime, they were selling dollars for 10 cents via OpenClaw, because these were such power users. And now they're just saying, we have to price this rationally, but we're happy to have you guys use the API. Okay, okay. But, Brad, when you use the OpenClaw competitor that Anthropic now offers, are they subsidizing that? Are you paying? We don't know yet, because it's in closed beta. So, in other words, what I'm saying is, if they charge for API usage for their own first-party agent harness or system, then that would be apples to apples. But if they end up charging the bundled flat rate, let's say, for their stuff, but then charge the metered rate for third-party stuff, you could make a bundling argument. Sure, sure. And you could say it's anti-competitive, assuming that Anthropic has dominant market share in coding, which I think most people would say they do at this point. And assuming that it's the same product.
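The blended-pricing point above boils down to simple arithmetic: a flat subscription is priced against average usage, so a power user consuming 100x the average is massively subsidized, while metered API pricing scales with actual tokens. A minimal sketch, with all dollar figures and token volumes invented for illustration (they are not Anthropic's actual rates):

```python
# Hypothetical rates for illustration only.
API_PRICE_PER_MTOK = 15.00   # assumed blended $ per million tokens on a metered API
FLAT_PLAN_PRICE    = 200.00  # assumed $ per month for a flat subscription

def monthly_cost(tokens_millions, plan):
    """Cost to the user for a month of usage under each pricing model."""
    if plan == "flat":
        return FLAT_PLAN_PRICE              # unlimited: price is fixed
    return tokens_millions * API_PRICE_PER_MTOK  # metered: price tracks usage

avg_user_mtok   = 10    # a typical subscriber, ~10M tokens/month (assumed)
power_user_mtok = 1000  # a 100x agent-driven power user (assumed)

for mtok in (avg_user_mtok, power_user_mtok):
    flat = monthly_cost(mtok, "flat")
    api = monthly_cost(mtok, "api")
    print(f"{mtok:>5}M tokens: flat=${flat:,.0f}  metered=${api:,.0f}")
```

Under these made-up numbers, the average user's flat plan roughly covers their metered value, while the power user consumes $15,000 of metered value for $200, which is the "selling dollars for 10 cents" dynamic described above and why heavy OpenClaw users were pushed onto the API.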
I mean, the reason most enterprises will probably use the Anthropic version of this agentic product is because it meets all of your security parameters, right? So Altimeter runs, you know, a lot of stuff on Anthropic. They're already integrated within our data warehouse, our data lake, things of that nature. So just letting OpenClaw loose on the Altimeter, you know, data set would not be wise. And so it's a different fundamental product. No, I get that. And I think that Anthropic has a huge advantage, let's say, cloning OpenClaw and just building it into Claude. I'm not denying that. To me, that would be the reason why they don't need to do price discrimination: there's already a very good reason to use the, let's call it, bundled offering on a feature basis. But the question I'm specifically asking is whether they're giving themselves a price advantage. Because I think Brad is giving the most generous interpretation, and you're taking a more cynical one. I'm with you, Sacks. I'm 100% on the cynical side. OpenClaw is so powerful, it's got so much momentum, that not only is Anthropic trying to ankle it, I believe when Sam Altman bought it, and he didn't buy OpenClaw itself, he acqui-hired Peter, I believe it was to subvert the open source project and to get Peter's next set of genius ideas inside of OpenAI as opposed to letting them go there. People are going to say I'm a conspiracy theorist. But this is the number one focus. And let me just give you a list of who is trying to kill OpenClaw, slash, compete with them. Obviously, you have Anthropic, but also Perplexity Computer launched. It's awesome. I've been using it. Anthropic has this Claude managed agents. They dropped that on Wednesday, April 8th, yesterday. Today's Thursday, when we tape; you guys listen on Fridays.
And then you have Hermes agent, that was released on February 25th. That's also open source and very good, so that's in the open source camp. Alibaba is coming out with one that's going to be out, based on their Qwen model. Then you have Elon, who said he's got something called Grok Computer coming out of Macrohard, which is a play on words for Microsoft. In addition to that, Amazon and Apple are preparing new releases of their retard maxing assistants, Alexa and Siri, that will be less retarded in this new version. And then nothing out of Satya and Microsoft yet. So the number one goal, I believe, in the large language model, frontier model space is to kill this open source product. No, I mean, come on. Like, why? They're building multi-functioning agents that can move from answering questions to actually doing something for you. Like, you've got to do that, because that's what consumers and enterprises want. It doesn't mean that it's about killing OpenClaw. It's just, this is an obvious thing. They have the right to do it. But this is a giant movement to stop it, because this is the equivalent of having an open source, Android-like player in the market, and that could be incredibly disruptive. I believe open source is going to win the day on the large language models and take 90% of the token usage, and I think the entire frontier model space could be undercut by open source. And I think they realize that SLMs, the smaller language models that are verticalized now, that will run on desktops and laptops, and are even starting to run on the top ones, that is their biggest competitive threat. And I hope it happens. All due respect to your investments, Brad. I think this technology and the interface is, you know... he plays bets... but I think it's imperative that the agent level, which is essentially your entire life, you don't give that to Anthropic. You don't give that to OpenAI. That's your entire business, your entire life.
It is foolish for you, Brad, to give your entire business and all the knowledge you have to Anthropic through that unless you're just doing it to boost your investment in those companies. But I would be very concerned if I was you with putting all of your knowledge that you've earned over a lifetime into any of these large language models. All right, Jake, let me ask you guys a question. Thank you for that impassioned monologue. Thanks for coming to my TED Talk. Yes, thank you for that TED Talk. I have a yes-no question for each of you. Do you believe that Anthropic has dominant market share in coding right now? Yes or no? No. In coding? Yes. They had the lead but not dominant. I think it's a trillion-dollar market, and these guys have less than 10% of it today, so it's hard to make a case. What percent of coding tokens do you think that Anthropic is providing the market right now? Greater than 50%. Yeah, that's true. Okay, that's called dominant market share. I don't know about that. More than 50% on the market. You've got to look at what the TAM is. You've got to look at what the TAM is, David. There are a lot of people who provide, you know, that are in the business of helping people write software. You want to be the tiebreaker before we move on to the next slide? I'm not saying it's a permanent condition. But if you're telling me that today Anthropic is delivering over half of the coding tokens, that's clearly a dominant position in the market for coding. It's an early market. 
It could change. But if I were representing them, David, I would say, nine months ago, everybody, you know, counted us out of the game. We were being destroyed by OpenAI. In three months, now people are saying we have dominant market position. This is the fastest changing, most competitive market in the world. I think you'd be very hard pressed to walk into, you know, some district court and make the case that these guys have somehow already formed a monopoly against Amazon, Google, Microsoft, OpenAI, etc. Well, I'm not saying it's already a permanent monopoly, but I am just asking about market share. And I do think you guys all agree. Let's get Chamath. Chamath, go ahead. They probably have 50% to 60% market share, because I think Codex is actually quite broadly used as well. But that belies the more important point, which is AI-enabled coding, I think, is still 5% of the broad market. So it's kind of a nothing burger. Yes, they're leading, but they're leading in something that isn't that big yet. Now, you would say, how could it not be big? And what I would say is because most of the stuff that's being written is still white-sheet, de novo code. And I think the ugly truth is, I don't care what model you have, the long-horizon ability for any of these models to actually build enterprise-grade software is still shit. S-H-I-T, shit. And that's the actual lived experience. Not for me, but when I call on our customers, half-a-trillion-dollar banks, hundred-billion-dollar insurance companies, none of these guys are like, wow, it just works out of the box. It doesn't work. So most of it is still hand-tuned. So until I can honestly tell you that we can point a model at this with the right guardrails, which I can't today, what I would say is it's a small market that will become large as these models become better. But we are in a world where we have 50 years of accumulated tech debt as a world.
And I suspect when you enumerate the number of lines that that represents, it's hundreds of trillions of lines of just pretty marginal, mediocre code to bad code. On top of that, we have all these legacy languages. I'll tell you, one of our customers, they have to go and get 60-year-old pensioners to come into the office to interpret it. No, I'm not joking. This is a hundred-billion-dollar-a-year revenue company. And that's how they solve these problems. It's not that Opus just solves it. So I would just keep in mind that most of the tech debt in the world that exists, 99% of it, is still poorly addressed by these models. We are untying this Gordian knot. It's going to take decades to do it right. So all the breathlessness about all this other stuff, I really think it's not where the money is. It's not the big-time stuff. And you can tell me, oh, yeah, it's going to be the future. And I would say, tell this business that has $100 billion a year of revenue and 50 million billing relationships that all of a sudden you're going to OpenClaw your way to a solution. It's bullshit. Not to say that you can't have a great chief of staff, and not to say you can't do some useful stuff and trickery and have a good knowledge base. I'd like that too. But the core things that your lived experience sits on today is a mess of tech debt that will get very slowly replaced, and that's just the reality of life. And there are competitors that are extremely disruptive. I'll tell you about one. We talked about Bittensor, TAO, on this program a couple weeks ago when we had the Jensen interview. You brought it up, actually, Chamath. There's a project, it's subnet 62, it's called Ridges AI. And what they're doing is a competitor that is not only open source, but anybody can contribute to it. They spent about a million dollars in TAO, like rewards, and in 45 days, they hit 80% of what Claude 4 is. And they did that in under 45 days.
The way that works is they give rewards for people who, and they can do this anonymously, make that coding product, which is like Codex or Claude Code, better. That flywheel is racing right now with participation in the same way Bitcoin's is. So you're going to see a lot of open source and these crypto open source combinations. And anybody who's not investigated this, I highly recommend you investigate this. I do think you're right about one specific thing. I would put zero, literally a probability of zero, on any important company worth anything more than a dollar outsourcing their production code to an open source project. That'll never happen. However, what will happen, though, is when you look at the cost of training this 10-trillion-parameter model on Blackwell, and when you look in the future, let's just say in six or nine months, that a 15- or 20-trillion-parameter model is going to get trained on Vera Rubin, I think, Jason, where you are right, I have zero, and just to be clear, I have no investments in this at all. I do, to be super clear. I'm just observing, because another project other than Bittensor that someone brought up to me is Venice. The concept of open source training and orchestration is a hugely disruptive idea, which is the complete orthogonal attack vector to this idea that you have to raise tens and tens of billions of dollars to train your models. Because if the capital markets run out of $10 and $20 billion checks to give people, the only solution is to be totally distributed. So I tend to agree with you, Jason, that there is going to be, at some point, a very successful open source project for pre-training. Absolutely will there never, ever be an open source way where a real company that has any skin in the game says, here, guys, re-engineer my code base as an open source project. Never going to happen. Yeah, I think the coding tools will.
And if you look at the history of open source, Brad, you actually, I think, had a lot of bets in this space. Linux, Kubernetes, Apache, Postgres, like Terraform. Like these open source projects are deep inside of enterprises, deep. And we were sitting here 15, 20 years ago. The same argument was made. Nobody will ever adopt these inside the enterprise. You got to go with Oracle, whatever. And fair enough. Many people do. But I think this $29 Ridges subscription to do this versus $200, it's starting to take hold inside of startups. And that's where I always look at the tip of the spear. Startups love to use open source products. I think this could be the next big thing. But listen, I invest in things that have a 90% chance of going to zero. So do your own research. No crying in the casino. Can I just make a final few points? So just quickly. So number one is with respect to this market for code or code tokens, whatever you want to call it, it might be 5% today, meaning 5% of the codes AI generated versus human generated. I think it's going to 95%. I mean, I bet any amount of money on that. The only question is when, probably over the next few years. So that's point number one. Point number two is it's possible that if you're the early leader in coding as an AI model company, let's say you have 50% to 60% market share, you have the most developers using it. Therefore, you have the most access to code bases. You might get the most training tokens. There is a potential flywheel there where you can see the early market leader consolidating its lead because it's generating the most code tokens and it's getting access to the most existing code. Now, I'm not saying for sure that's going to happen. It's possible that the other guys catch up. But I think there is a possibility of a flywheel there and strong, I guess you'd call it data scale effects, things like that. So I do believe that the market for coding tokens could be monopolized. 
Third, Anthropic's revenue run rate, based on what I can tell and what's been publicly released, is the fastest growing revenue run rate at scale that I think we've ever seen. Perfect segue. It's the next story. Okay, maybe pull up the tweets. But this thing is ramping at a rate we've never seen before. We can get into that in a second. But just one last final point is I think it's pretty clear that where we go from here is agents. And coding gives you a huge step up on agents, because one of the main things that agents need to do is write code to be able to enable them to complete tasks. Correct. And so if it is the case that coding is this huge market that's going to be dominated by one or two companies, then that leads to another huge market, which is agents. My point is just, I think all these companies need to behave in a very clean way and not engage in tactics that later the government might say, you know what, that was anti-competitive. Everyone should just, I think, play fair. Do not engage in discrimination against other people's products. Engage in fair pricing. I'm not accusing anyone of breaking any of the rules. But what I'm saying is that eventually the government's going to look at this market with the benefit of 20/20 hindsight, and I think everyone should just basically, you know, keep your nose clean. Keep it tight. Keep it tight. Tight is right. I think it's an excellent point. Let's talk about the revenue ramp of Anthropic. This is just unprecedented. Anthropic's revenue run rate has topped $30 billion, with a B. Early 2023, they turned on revenue. They started charging for API access. End of 2024, they're at a billion-dollar run rate. February 2025, they launched Claude Code. That was the starter's pistol. Mid-2025, $4 billion run rate. End of 2025, $9 billion run rate. Just a couple of months later, in April, $30 billion run rate. Yes, that's right, triple. And the way they did this is enterprise customers are a major part of the spend.
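As a sanity check on those milestones, here's a quick back-of-the-envelope sketch of what the quoted run-rate jumps imply in compounded monthly growth. The dollar figures are the ones cited in the episode; the month gaps between milestones are my rough assumptions, not numbers from the show.

```python
# Run-rate milestones as cited in the episode (annualized, in $B).
milestones = [
    ("end of 2024", 1.0),
    ("mid-2025",    4.0),
    ("end of 2025", 9.0),
    ("April",      30.0),
]
# Approximate months between consecutive milestones (my assumption).
months_between = [6, 6, 4]

for (start, a), (end, b), m in zip(milestones, milestones[1:], months_between):
    # Implied compounded month-over-month growth rate for this interval.
    monthly = (b / a) ** (1 / m) - 1
    print(f"{start} -> {end}: {b/a:.2f}x in ~{m} months "
          f"(~{monthly:.0%}/month compounded)")
```

Even on these rough gaps, the last leg ($9B to $30B in about four months) implies something on the order of 35% compounded growth per month, which is the "rate we've never seen before" claim in concrete terms.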
Dario announced a couple of months ago that there's over 1,000 enterprises paying over $1 million annually. This is truly mind-boggling when you think about it, because those are the most coveted customers in the world. These are the big fish. When people are running enterprise software companies, they dream of these. Slack dreamed of getting these million-dollar customers. Salesforce dreams of getting these million-dollar customers. Brad, you're an investor. I guess Sam famously on BG2 asked you to sell your OpenAI stock back to him. You didn't. You demurred. But you're an investor in both. How shocking is it to you to place both of those bets and then see one of them come from so far behind? You know, ChatGPT has 900 million users. I don't think they've passed a billion officially yet. But they are the verb, right? They're the Uber, they're the Xerox, they're the Polaroid of AI. But they didn't go after the enterprise. Dario made that bet, and Dario worked there. He was a co-founder of OpenAI. He left, and according to the New Yorker story that came out from Ronan Farrow this week, he basically left because of his disgust in working with Sam Altman. Your thoughts? Well, you know, before we go down the OpenAI rabbit hole, let's just really contextualize what's going on here. You know, I have this additional chart. You showed one. They added $4 billion of annualized run rate in January, $7 billion in February, and $10 or $11 billion in March. Just to put it in perspective, that's Databricks plus Palantir combined, and they added it in a single month, right? So we started with everybody at the start of the year wringing their hands, including Gurley and others, saying we're in a big bubble, asking whether the AI revenues would show up to justify all of this investment. And bam, you have the largest revenue explosion in the history of technology. So the company's plans were to end the year at about a $30 billion exit run rate. They got there by the end of March, right?
And I suspect that it's continuing in April. So you have to ask, what's going on and what's the big so-what? The first thing for me is that model and product capability just hit this threshold we talked about earlier, near-AGI, whatever the hell you want to call it. And everybody, like Altimeter, said, damn, this is so good, I have to have it. This is no longer about my IT budget. This is about labor augmentation and labor replacement. And by the way, Co-work is growing even faster than Claude Code at the same stage of development. So what it showed is we have a near-infinite TAM. It turns out that the TAM for intelligence is radically different than anything that we've seen before. And I think the best example of this, right, this is millions of self-interested parties, consumers, enterprises, a thousand now over a million dollars, right? It's not that there was some great go-to-market in Anthropic, that all of a sudden, you know, they snuck up and blew everybody away. No, it was companies demanding the product. They're getting throttled on the product. Why? Because it's so good. It makes them better at their business. We are all self-interested actors, and when millions of those people are all making the same decision, there's a huge tell. And the tell here is that the TAM is as big as Dario and Sam and others have been saying. We knew intelligence was going to scale on the exponential. The question was whether revenue would scale on the exponential. And that's what we're seeing. And remember, they're doing this with only one and a half to two gigawatts of compute, right? These guys are massively compute constrained. They're each going to be adding three gigawatts of compute this year. And so that will unlock. They would be growing even faster but for that. And then, Jason, to your point about the open source models that we all want to be a part of the solution: I've talked to a lot of big companies, and 65% to 70% of their token consumption is open source models, right?
Are these cheap Chinese and other tokens? So these revenue ramps are happening while the world is already using open source. This is not frontier only. This is frontier plus open source. We're going to see massive token optimization over the course of the year. But what happens with this Jevons paradox is that the unit cost of intelligence is plummeting. Not the cost of tokens. The unit cost of intelligence is plummeting, because the capabilities of these models are so much better. I look at what it does for Altimeter day in and day out. I talked to a major company yesterday. They're on a run rate to do $100 million of token consumption this year on about $5 billion in OPEX. They think that we're now nearing peak employment in their company, but that their intelligence consumption, let's not call it token consumption, right, because tokens may go up a lot, but their intelligence consumption is going to go up a lot. So I would leave you with this: we're early. To Chamath's point, we have low penetration of the global 2000. We have low penetration of the use cases. We have low penetration within the use cases that they're already using. And the models are only getting better. So when you look out toward the end of the year, I would not be shocked if you see Anthropic exiting this year at $80 to $100 billion in revenue. And by the way, doing it at the same time as OpenAI, who is also on the wave. They'll be releasing an incredible model imminently. They're going to be on that wave, and you're going to see an inflection in their revenues as well. Okay, Chamath, question one has been answered. The question of, hey, does this stuff actually have utility? That went from a question mark to an exclamation point. Of course it's got utility. People are getting value from it. And it might be variable. Some people get more value than others. Number two, the revenue ramp was a big question. Now that's turned into an exclamation point.
The final piece of the puzzle that you've brought up many times is, can this be profitable? And these companies are burning through a large amount of cash. So what is your take on when these companies can get out of the J-curve? We talked about this, I think, three episodes ago. I estimated we're going to be looking at $400 or $500 billion in investment into these data centers at a minimum. And then they have to climb out of that to get to profitability. So what are your thoughts on these becoming profitable companies? Do you remember that investor that published this list, Jason, where he put all of the terms you talk about when the one term you can't talk about is profit? It's a list where it's like: if you can't talk about free cash flow, you talk about EBITDA; when you can't talk about EBITDA, you talk about margin; when you can't talk about that, you talk about revenue; and when you can't talk about revenue, you talk about gross revenue bookings. So you can kind of figure out, I think, where we are in any part of any cycle by just indexing into what everybody talks about. I think where we are is between gross revenue and net revenue. That's where the discussion is. Okay, there was another article, I think today, and I think maybe it was The Information, that tried to categorize and distinguish that Anthropic presents gross, OpenAI presents net. They're different. We don't know what the various take rates are. So they're saying that there's a difference. If it's not true, there's been no clarity provided by these companies. So at a minimum, you have this confusion where there's the breathless talk. Then there's people that don't even know the difference between actual recognized revenue and run-rate revenue. So we're definitely there. We can quibble about the details, but we are not at the place where people are like, oh, here's your steady-state free cash flow margin, and here's what your EBITDA is. We're years from that.
They're going to have token-maxing EBITDA, like a community-adjusted EBITDA at WeWork. The thing that we need to understand is how gross-margin negative this revenue growth is. We don't know that. At least we don't as outsiders. Brad might know. Brad may know. I will tell you, think about this. What are their big cost inputs? The number one cost input is the cost of compute. Cost of compute, right? I just told you they only have a gigawatt and a half of compute. And they had that gigawatt and a half of compute whether they have a billion in revenue or whether they have 80 billion in revenue. So you might actually expect to see these companies' gross margins exploding higher. Like the fastest increase in gross margins I've probably seen out of any technology company. So this is not gross-margin negative, you're saying? No, definitely not gross-margin negative. And what I would tell you is... So then they must be hugely profitable then. Well, you may see accidental, what I call accidental profitability. They may not be able to spend this revenue fast enough, Chamath, on compute. And remember, it's only 2,500 people. Google crossed this revenue threshold when they had 120,000 people. These guys have 2,500 people. So the only thing you can really spend money on, right, is compute, and they can't stand up the compute fast enough.
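The headcount comparison above can be made concrete with a small sketch. The revenue and headcount figures are the ones cited in the conversation, not independently verified numbers:

```python
# Revenue per employee, using the figures quoted in the episode.
anthropic_run_rate = 30e9      # $30B annualized run rate (as cited)
anthropic_headcount = 2_500    # employees (as cited)
google_revenue = 30e9          # Google at a comparable revenue level
google_headcount = 120_000     # headcount cited in the episode

anthropic_rpe = anthropic_run_rate / anthropic_headcount
google_rpe = google_revenue / google_headcount

print(f"Anthropic: ${anthropic_rpe/1e6:.0f}M per employee")       # $12M
print(f"Google (as cited): ${google_rpe/1e3:.0f}K per employee")  # $250K
```

On these cited numbers, that's roughly a 48x gap in revenue per head, which is the underlying arithmetic behind the "they can't spend the revenue fast enough" claim.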
But none of this adds up to me then, to be honest. Because if you were on a threshold of 90%-plus gross margin... I'm not saying it's there. I'm not saying it's 90-plus. I'm just saying it's gone from meaningfully negative 18 months ago to, you know, very, very positive, from what I've seen rumored out there. So the trend is going in that direction, is what you're saying? Right, the trend is there. Let me just say this. I think if you're an incumbent, you want the cost of compute to go down. I think if you're not an incumbent, so specifically, who do I mean? Meta, Google, and SpaceX. I think those three, well, sorry, Meta and Google have a fortress balance sheet, and I think by the end of June, SpaceX will also have a fortress balance sheet. What they will want to do is make this a compute problem, because they will control the conditions on the field. You already see this today. Yeah. Meta's models today, what people's general reviews are, it's okay, but the one thing that people say is it's incredibly performant. The model quality is okay, but the performance is great, which speaks to Meta's huge advantage. They have a massive compute infrastructure. If you're not OpenAI and Anthropic, you'll want to make this a capital problem, because then you can win it. If you're Anthropic and OpenAI, you want this thing to be as efficient as possible. I think where we are is very much in the early innings, when we're bumbling around talking about gross margins and, you know, revenues. We are not at profitability. And what is true for Facebook and what was true for Google was, irrespective of when they got to a billion, who cares, they were profitable by year three and they never looked back. I was there. I remember. It was glorious. The cost of building, you know, AI, I totally stipulate, is radically higher than the cost of building retrieval at Google. Right. Like, it's just a fundamentally more expensive problem.
But I will tell you that there's a lot of talk out there about negative gross margins. I mean, Jason, you started this segment by saying they're burning through large amounts of cash. I think people are going to be shocked at the burn, how low the burn levels are at these companies. Yes. And I would say at OpenAI as well. Like, if they do $50 billion this year, again, just look at the number of people they have. And the inference cost is plummeting. Inference cost is down by 90% year over year. And so just finally, I want to respond to this point about gross versus net, this tweet that Chamath was referencing. Okay, so there's a certain percentage, a smallish percentage, of Anthropic's revenue, right, that they distribute through the hyperscalers. And like a lot of arrangements, whether it's Snowflake or Databricks or others, you pay a commission on that. I will just tell you that you're talking a single-digit percentage of total revenue at these companies. So the gross-versus-net thing isn't what's being reported. The apples-to-apples is pretty easy, and if you want to be conservative on it, take down Anthropic's revenue by 5% to 10%, though, again, I think it's better to gross up OpenAI's revenue. But any way you do it, I just think it's a distraction from what's really going on here. Happy to. Sacks, do you have any thoughts on this massive revenue ramp? Yeah. I mean, I want to go back to a point that Brad made, because I think it was just really important, and I want to underline it. Consider where we were at the beginning of the year, when what everybody was saying is that AI was a big bubble.
And the evidence they would point to was the fact that hundreds of billions of dollars was going into capex that needed to be spent on these data centers, and there was no evidence of significant revenue to justify that spend. Where was the ROI? By the way, as an aside, the same doomers who were saying that AI was in a bubble were also the ones who were saying that AI was so powerful it's going to put us all out of work and it's going to, you know, take over from humanity. I mean, in other words, they couldn't decide if AI was too powerful or not powerful enough. But putting aside that contradiction, they clearly were making this case that AI was this big bubble and that there'd be no payoff or justification for this massive capex being spent. And I think we're starting to see here, there is justification for it. We're seeing it in just this one vertical of AI, which is coding. We're, again, seeing the fastest revenue growth in history. It's utterly unprecedented. And this is just one category or vertical of AI. We know that agents are coming next, and the enterprise adoption of that is going to be absolutely massive. So I guess what I'm saying is that this is early proof for, I think, the thing that makes Silicon Valley special, which is we're willing to basically bet on things that, just intuitively, on a gut level, we know are the next big thing. We're not that spreadsheet-driven, actually. Silicon Valley believes that if you build it, they will come, and is willing to finance that build-out. And that's basically what's been happening. Again, just the top four hyperscalers: $350 billion of expected capex this year. On its way, I think Jensen said, to $1 trillion by 2030. So Silicon Valley, whether it's big companies, whether it's founders, is always willing to bet on this next big thing. They're not like Wall Street. They don't need, you know, a spreadsheet to tell them where to go. They know where the technology is going, and they make their bets based on that.
And I think that there is going to be a big payoff for this. And I think the thing that's going to make our economy, and the United States in general, remain extremely dynamic and in the lead is that we are willing to make those kinds of bets. And I think it's going to pay off big time. Yeah, clearly. Hey, Brad, you didn't answer my question about the vibes over at OpenAI versus Claude. OpenAI is, I wouldn't say reeling, but there's a lot of hand-wringing going on, a lot of employees leaving, a lot of people who are wondering, like, is our strategy the winning strategy of consumer-first? They shut down Sora, you know, unwinding the Disney deal and really trying to get the company focused. And it's kind of like, I mean, listen, the New Yorker story was a bit of a rehash. I don't think we have to go into the blow-by-blow, because we covered it here three years ago. But the truth is, a lot of the great founders, co-founders of OpenAI, and a lot of the great contributors are now at Anthropic and other large language model companies. And in the secondary market, OpenAI is trading lower than the last valuation, and Anthropic is trading significantly above the $380 billion.
So maybe talk a little bit about this competition, this Microsoft versus Apple, this Google versus Facebook. Well, let's start with immense credit where credit is due. Anthropic was literally counted out of the game last year. Yep. Right? And here they come over the last 12 months, and they've kicked OpenAI's ass over the last 90 days, right? And what did Anthropic do? Anthropic made choices. No multimodal, no video, no hardware, no chips, no building data centers. They said, we're just going to focus on coding and Co-work. We think that is the path to AGI and ASI. They executed their butts off. They took the lead. 2,500 people, tight, pulling on the oar in the same direction. But I think you would be seriously foolish to count out OpenAI, right? And I think we're at peak OpenAI FUD. And I'll tell you, it starts with great researchers and great models. And I think when you see the Spud model they're about ready to release, I think it's going to be an excellent model that shows they're firmly on the wave. If you look at what's going on with Codex, incredible ramp on Codex. Fastest-ramping model with 5.4, and I think 5.5, or Spud, whatever we're going to call it, is going to be an even faster ramp. Have you seen Spud? Have you used it? Have you gotten a preview? People are using Spud, right? So it is being previewed. And so you're talking to people who've used it, and what are they telling you? They're telling us that it's an incredible model, on par with Mythos, right? And that it's a very usable model in terms of how it's packaged. I will say that, back to David's point, and this is the most important point I think anybody can take away here: this is not zero-sum. The TAM of intelligence is dramatically larger than any TAM we've ever seen in our investing careers over the last two decades, right? And if you're on the wave, which OpenAI is, you are going to be selling into the world's biggest TAM. They are going to build a very big company.
I'm a buyer of the shares today, notwithstanding all of the vibes that you describe. I think these companies are firmly on the wave. They are jarred. They are sitting there saying, what did we do wrong, and how do we get our mojo back? They want to compete. It is embarrassing to people on the research team and the product team over there. So I'm not saying there's not a real awakening occurring there, but I think that's what the case is. And by the way, to Chamath's point, do not count out Meta, right? I think Meta is absolutely in this game. Google is absolutely in this game. Elon is absolutely in this game, and he's got some stuff dropping shortly that's going to be very impressive. If you're on Team America, the fact is we have five frontier models competing against each other, and David made sure they weren't throttled by excessive government regulation. We had Mythos come out. It's a self-imposed safe harbor, you know, to harden our systems. It wasn't a call for moratoriums or getting the government involved. We have the type of competition that's causing us to accelerate our lead against the rest of the world. We can't take our eye off the prize. We've got to stop adversarial distillation. And we need to make sure that we're distributing our products around the world. I view this as really good for Team America. Well said. And here is your Polymarket: IPOs before 2027. Obviously, SpaceX at 95%, Cerebras at 94%. And hey, number five on this list: 51% chance that Anthropic goes out before the end of the year, 44% chance that OpenAI comes out before then. All right, here is the closing market cap for Anthropic on Polymarket. Only $158,000 in volume. So Chamath, when you put in 400K, you're going to really tilt this market. 78% chance that it's above $600 billion, 19% chance that it doesn't go out. So it's looking like this will be a decent investment for you.
Brad, what valuation did you get into Anthropic at? We first invested, and I believe it was the $130 or $150 billion round. So this will be a 7x, 5x for Altimeter LPs. Congratulations. I mean, listen, again, there are lots of people who were there before us, and who are on the board, and who are going to do better than that. What did you put in, 50? What'd you put in? We've got billions in both companies. Billions in both companies. Oh my lord. I think there's this existential thing going on in venture today, and David could talk about it as well. I mean, people are extraordinarily nervous. You look at the IGV software index, down 30% year to date, down 5% today. All software stocks plummeting, right? Venture capitalists are terrified to invest money in anything other than these frontier models and things like SpaceX or military modernization. Finding something that's out of harm's way of AI, right, where you can count on the terminal value, to Chamath's insights over the last few weeks, is very difficult to do. That's why you see this crowding. So we've taken a barbell approach, right? We've got a lot in what we think are the most important companies that are on the frontier. And then we're betting on really small teams that we think have very defensible businesses in a world of, you know, AGI. But it's tricky. What happens to all these enterprise software companies? Do they become PE takeouts? Do they get consolidated? Or do they just have to adopt these AI technologies and solve this problem of, hey, the frontier model is just going to solve for whatever these niche software companies do? I think the market is probably being a little too pessimistic with respect to at least some of these software companies.
And so, look, software is going to be a lot cheaper and easier to generate, but I'm not sure that was the competitive advantage of a lot of these companies. So there's probably a little bit of the baby being thrown out with the bathwater right now, and there probably are some value buys in enterprise software. I think the interesting question here, and we've been talking about this for a couple of years in the pod, is just where you see the AI value capture being in terms of layer of the stack. Remember where we started? It was really just the chip layer of the stack was where all the value capture was. It was basically NVIDIA was the first company to be worth multiple trillions of dollars because of AI. And for a while, it looked like that's where all the value capture was going to be because OpenAI, for example, was losing so much money and Anthropic wasn't on the radar as much. Now we're seeing, wait a second, you know, it's not just the chip companies. It's also the hyperscalers are now benefiting. and now we're seeing at the model layer, it looks like Anthropic and OpenAI, they're all going to be huge beneficiaries. I think the next question is at the application layer of the stack. Okay, well, now, does all that value capture just get eaten by the model companies or are there applications that get turbocharged? I guess you could say that Palantir is already one of them, right? It's an application company that's been turbocharged by these model capabilities. Who else will be a big beneficiary? Again, is it all going to be at the model layer or will you see an explosion of value at the application layer? I'm hoping, obviously, that it'll be at all layers of the stack you see beneficiaries. But to me, that's a really interesting question right now. Yeah, what happens to Salesforce, HubSpot, Oracle, right down the line? David, Chamath, your thoughts here on the layers here and where the value is captured? It's too early to tell. Too early to tell, right. 
And energy, you kind of put into sort of data center as well, but that's obviously been a clear winner. A little housekeeping here. Liquidity, put a little Tiffany in here, producer Nick, is sold out. There's a wait list of hundreds of people, but it is what it is, folks. If you snooze, you lose. And top-tier speakers are coming. It's going to be great. We'll get an update from Chamath, but I think, Brad, you're going to be joining us again? Yes, for Liquidity. I have an update. I'm probably not your headliner, though. You're probably not our headliner, no, but you always score so high. Every event you've spoken at, you've been either number one or two. I don't think you've ever dropped to three. Go ahead, Chamath, make your announcement here. Nat sent me an article from Wikipedia about penis length when you guys were talking about dick pics. Breaking news. Showing me that I'm in the large category. Top 5%. She highlighted it. Top 5%. Is that with Nano Banana or without? She just texted, dummy, it's Claude. My apologies, Claude. This is why Chamath isn't afraid of the cyber. Nothing is going to come out that's more embarrassing than what he says himself on the pod. It's like Bezos. He's like, guys, I got hacked.
So I saw the agenda for this thing. It's incredible. Congrats to you guys. I mean, like, just the fun of being in Napa, all the poker, all the dining experiences. This is five-star all the way. It looks really cool. It's Aman level, because Chamath was, dare I say, belligerent in his demands. He said, this has to be six-star or I will not show up. J-Cal, I said, okay, boss, get to work. And, Chamath, what do you got? And no mids. This is all elite. And for the hundreds of people who are on the wait list, I am sorry, but we have a capacity issue. We'll try to get you in for next year. But Chamath, give us some updates here. Do you have any updates that you want to share? Because you are running programming for Liquidity 2026 up in Napa. Look, it's going really well. Really excited to hear all of these great folks speak. I think the next two we'll release today: Brad Gerstner and Thomas Laffont of Coatue. Coatue, that's great. Yeah. We also have, I think, three people confirmed for their best-ideas pitch. Really interesting folks. They each run between one and six or seven billion. Awesome. Superstar compounders. This is the new zone. Career-wise, this is the new zone. It's great. So right now we have Bill Ackman. We have Andrej Karpathy. We have Dan Loeb. We have Thomas Laffont. We have Brad Gerstner. We have Sarah Friar. And more to come. We will announce more. There might be one or two surprises. J-Cal and a couple of surprises. Yeah, we don't announce all the speakers. J-Cal's got a couple of surprises coming. And if you didn't get into Liquidity, apologies, you're on the wait list. We are going to be hosting the fifth annual All-In Summit in Los Angeles, September 13th to the 15th. Sacks, are you going to come to that? Allin.com/sessions. Sacks, you should come to that. I've been advised that I can attend. I can be in the state for business reasons. Okay. There you go. Then we'll see you at Liquidity and the Summit. Correct. That's great. That's big news.
Now we've just got a bunch of Sacks stans who are racing. Now we're going to get Sacks. This is what happens every year behind the scenes. Sacks at the last minute says, oh, I have four speakers and I have 72 people who need tickets. And then the whole team has to do a fire drill 48 hours before the event. Okay, here we go, guys. We're going to go to the third rail here. We've got to catch up on the Iran war. Here's the latest. A two-week ceasefire started just two days ago as of the taping of this. VP JD Vance, friend of the pod, and some special consultants, Witkoff and friend of the pod Jared Kushner, are headed to Islamabad, the capital of Pakistan, for talks this very weekend. So while you're listening to this, they are going to be working on the peace deal. Easter Sunday, Trump posted a Truth stating, open the Strait, you crazy bastards, or you're going to be living in hell, just watch, praise be to Allah. On Tuesday morning, Trump posted another threat on social media: a whole civilization will die tonight, never to be brought back again. I don't want that to happen, but it probably will. The tweets were obviously discussed a lot over the last week. He gave him an 8 p.m. deadline. At 6:30 p.m., POTUS announced on Truth Social that he, President Trump, had agreed to a two-week ceasefire if Iran opens the Strait. He also said, hey, listen, if we get the Strait, maybe there'll be a toll booth, but we'll take the majority of the toll and we'll split it with Iran. Here's the quote: we received a 10-point proposal from Iran and we believe it is a workable basis on which to negotiate. And apparently Netanyahu took the ceasefire to mean level Lebanon, dropping 160 bombs in 10 minutes yesterday. Sacks, you were out last week. Everybody wants to know your position on the war. I'll hand it off to you. What are your thoughts on the two-week ceasefire and everything that's occurred up until this point? Well, look, I have to preface what I'm about to say, which is
I'm not part of the foreign policy team at the White House, and the last time I commented on the war on this show, it somehow made international headlines that a Trump advisor says X, Y, Z. And I'm not a Trump advisor on this issue. I think that'd be a fair headline to write if it was a technology issue, but this is not. So whatever I say is just my personal opinion, but then the media is going to somehow portray it or attribute it to the White House or try and create an issue out of it. So I feel like I'm limited in what I can say, except to say that I think it's terrific that we have the ceasefire. I think it's great that there's going to be this meeting in Islamabad to hammer it out. And I think what the president's accomplished so far with the ceasefire is a great thing, because what happens with these wars is they take on a life of their own, meaning they tend to go up the escalation ladder, right? A lot of podcasts are discussing the so-called escalation trap, and supposedly there are stages of this based on historical patterns. And so I think it's actually very hard to pull out of these things. And I give the president tremendous credit for negotiating the ceasefire that we've achieved so far and then sending the team to hopefully work this out. Brad, actually, my first trip to the Middle East was when you and I went, maybe four years ago. Thank you for taking me. What is your take on where we're at here? I think we just wrapped up week six of this and we're going into week seven. First, on March 4th, I tweeted the Trump doctrine on Iran: massively destroy their military capabilities, kill the people building lethal weapons to use against us, and get out. Reserve the right to do it again if needed. Zero efforts to build Madisonian democracy. Iran's going to have to build what comes next.
And I think what the market has said, right, if you look back at last year on tariffs, Jason, the top-to-bottom drawdown was about 15 percent; on the Nasdaq, intraday, it was down 22 percent. Okay, the drawdown in this period over Iran was only about five to seven percent on the S&P and Nasdaq, right? So the market has said, listen, trust Trump at his word. He said he's not going to get into an entangled war here. I think he terrifies the hell out of people with his tweets about, you know, destroying civilization and all this other stuff. But I think people, even though they don't like to hear it, have resolved for themselves that when he says he's going to get out, he will, in fact, get out. Of course, there was a lot of hand-wringing. But if you look at the markets today, we've basically bounced all the way back to where we were pre-Iran on both the S&P and the Nasdaq. If, in fact, we land the plane, if JD lands the plane, and by the way, on Lebanon, yes, they were bombing yesterday, but Netanyahu has now said that you're going to have direct government talks between Israel and Lebanon. So if we land the plane on these two things, I think it's off to the races in the market. And by the way, while everybody's focused on Iran, stay tuned. I think we're getting close to a deal on Ukraine and Russia, right? Venezuela is, you know, kind of going seemingly very well. I think there's also going to be news on Cuba. You could envision a world... There's risk to the downside, certainly, I will stipulate. But you also have to pay attention to the risk to the upside. If you land the plane on those things, heading into America 250 on July 4th, the market could really take off. All right. Well, let's maybe up-level this a little bit and talk about why we're in this war to begin with. And that's the big discussion amongst both sides of the aisle. On Tuesday, the New York Times dropped an inside-the-room piece on how President Trump made the decision.
According to this report, if it's true, and I know some people don't subscribe to the New York Times anymore or think it's fake news, Trump decided to basically follow Netanyahu into this war. On February 11th, Netanyahu met with Trump at the White House, where he gave him a four-part pitch on attacking Iran. JD Vance, according to the story, if it's true, disclaimer, disclaimer, warned Trump that the war could cause regional chaos and break apart Trump's MAGA 2.0, the Trump 2.0 coalition we've talked about here, the big tent. And that's turned out actually to be true. There's been a bunch of hand-wringing from Megyn Kelly, Tucker Carlson, right on down the line. Rubio was anti-regime change, but he was largely ambivalent, according to this story, about the bombing campaign. Susie Wiles, chief of staff, said she had concerns about gas prices before the midterms. Pretty good advice there. And General Dan Caine, chairman of the Joint Chiefs of Staff, said this of Netanyahu's pitch, quote: sir, this is, in my experience, standard operating procedure for the Israelis. They oversell, and their plans are not always well developed. They know they need us, and that's why they're hard-selling. If you put this together with Rubio's walked-back comments at the start of the war, here's the quote from Rubio: we knew there was going to be an Israeli action. We knew that would precipitate an attack against American forces, and that's why we did it. I had Josh Shapiro on the All-In Interview show, and he talked a lot about this. There is a big underpinning here, Chamath, that United States foreign policy is being driven by Netanyahu, who every Jewish American I've talked to feels is not doing Jewish Americans and the Jewish diaspora any favors with his approach to these wars. What are your thoughts on why we got into this and how we get out of it? I mean, the person who decides is the president of the United States. So a foreign leader isn't getting to call the shots in the United States.
I think, very practically speaking, the markets are effectively pricing in that this was a small blip, whatever people think. That's just what the best prediction market that we have is telling us. I think that's important to acknowledge: we're probably in the endgame here. And the second thing to acknowledge is, if I were Israel, I would really be concerned that unless I help find an off-ramp quickly, the risk is that Israel loses America as a predictably steadfast ally down the road. And I think that that's problematic for Israel, far more than it is problematic for the United States. So all of that kind of tells me that we will find an off-ramp. A, because I think economically it makes sense, and B, geopolitically, I think Israel will want to make sure that this doesn't burn a longstanding relationship. Yeah, that seems to me to be the major issue here: Americans basically do not want to be in this war. Americans do not want our foreign policy being influenced to the extent they believe it is. I'm not putting my belief in here; Americans believe we are being dragged into this by Israel, and that Israel, or Netanyahu specifically, has far too much influence. And then, on the anti-Semitism that's occurring here, Josh Shapiro gave me a lot of pushback on this, but all the Jewish Americans I've talked to say Netanyahu, with his actions in Gaza, Lebanon, and Iran, has gone too far, and it's causing the anti-Semitism we're experiencing today. So you can make your own decisions about that. Any final thoughts here, Brad, on American foreign policy being influenced too much by Israel? No, I mean, listen, kind of like Sacks said earlier, I think that we will ultimately be judged by the outcomes, right? And everybody is an armchair pundit today on, you know, the approach that we're taking in these two different places. I think we could be on the verge of a massive transformation of the Gulf states. You went there with me, Jason.
Saudis, Qataris, Kuwaitis, Emiratis, I've talked to a lot of them this week. I think they're very hopeful and optimistic. I think you could bring Iran into the fold. But listen, I'm an optimist on all of this stuff. I just want to remind people, doing nothing in Iran had tremendous risks. Doing nothing in Venezuela had tremendous risks. So it's not as though this was, you know, something that I think wasn't well calculated. But I think we have to let the cards be played and then let history be the judge. There's risk in both directions, but I'm going to remain optimistic. All right, Sacks, you said in the Gaza situation we should have a wide berth for criticism of Israel and Netanyahu. What are your thoughts on this belief here in the United States, now in this discussion, that Israel is having far too much influence over United States foreign policy? Well, I noticed in my feed today that Naftali Bennett, who is a major Israeli politician and a former prime minister, tweeted polling that showed that Israel was becoming very unpopular in the U.S., and he was expressing concern about that and expressing the need to basically address that or fix that. So I think you're starting to see Israeli politicians raising that as an issue, and I think that's probably a good thing. Yeah, there it is. And it's really cool, actually, how X now just automatically translates things from foreign languages, in this case Hebrew, and puts them in your feed. So yeah, here's Naftali Bennett, the former prime minister, saying this is a very bad situation, there's a lot of work ahead of us to fix everything. Now, obviously, this is not Netanyahu. This is one of his political opponents. But yeah, I mean, this is something for Israel to consider and think about. And I think that they would improve their popularity if they got behind the ceasefire. And I have no indication that they won't. But that would certainly be a good place to start.
I have to say, just as an aside, this auto-translate feature has done more for understanding across borders than anything I've ever seen, and it is the most impressive tech feature I've seen released in years, putting AI and large language models aside. For people who don't know what's happening: because Grok is really good at auto-translate, they've taken the pockets of the best of what's happening in Japan, what's happening in Israel, what's happening in France, and they're surfacing it auto-translated. Then, when you reply as an American to somebody in Japan, they see it auto-translated as well, which has led to people who don't speak the same language engaging on X in a very nuanced, fun, interesting way. And that, as a truth mechanism, is just absolutely extraordinary. I think this is going to have such a profound effect. Maybe Elon and the X team should get a Nobel Peace Prize for this. I think it's going to change... I mean, I hate to be hyperbolic, but have you been using this feature, Chamath? Has it been coming up in your feed? And which language is up in your feed right now? English. Okay, so you're not part of the translation thing. Brad, has this hit your feed yet, and from which regions? I definitely see it on the Middle East stuff. And, you know, I've seen it on Chinese. I've seen it on the Russian stuff. Super helpful. Let me tell you, based Japanese is a whole other level of based. Whoa, man. Based Japanese makes Fuentes and Alex Jones seem tame. They're like, look at this group of people, insert whatever group of immigrants you like, and they're like, this is unacceptable behavior. This is not Japanese culture. These people need to get the hell out of Japan. It is wild, folks. And if you don't have an X account, you are missing out. Go to X.com and sign up.
For this reason alone. Because think about the velocity: journalists are not even taking the time to translate and cover what's going on in those areas, and this is happening automatically, in real time. So you start thinking about what happened in Ukraine. If you had people in Russia and Ukraine doing this, having conversations with each other, it would be wild. You're like such a good hype man. The problem is you hype buttered bread the same way you hype a nuclear reactor, and so it's hard to really tell what you're really hyping, because your level of excitement, the intonation, is exactly the same. Yo, man, there's nothing better than a slice of great toast. I mean, in a way it is like sliced bread: it's very simple, but it is so powerful in the experience. Well, it is true, X is better today than it's ever been. And remember, they have 70 percent fewer employees than they had the day Elon walked into the building. And so if there were ever a debate about this, and I remember everybody saying, oh, it's going to tip over, oh, it's going to be a crappy experience, it's going to go down. The fact of the matter is, here we are a few years later, 70 percent fewer employees. And every other company in Silicon Valley is looking at that. I think for a lot of these tech companies we've hit peak employment. We're going to create a tremendous number of new jobs, but for the existing jobs, these companies are all realizing they can do more with less. Nikita Bier just tweeted that they're about to go ham on these bot accounts that auto-reply. Yes. Those literally ruin my feed. That's why I went to subscriber mode in my replies, and it's worked out great. Yeah, no. Shout out to him and to Chris Sacca, who was in tears at what happened to Twitter. You know, it's going to be okay, Chris. Sorry, you know, no more tears. You only let subscribers respond to your tweets? I do 50-50. Sometimes I'll just let it rip and get chaos. And then other times... I have 2,000 paid subscribers.
I give all the money to charity, like 30 grand a year. And it's just wonderful to get to know the same 2,000 people out of my million followers. It's kind of like having this little subset. So sometimes I'm like, I don't have time to deal with 100 or 200 or 300 replies. You have a million followers? That's incredible. I mean, you have 2 million. I think Sacks must have a million, right? You have a million, right, Sacks? Brad, how many do you have now? You're getting popular. You've built a brand. I've got a couple of hundred. I've got a couple of hundred. What's your... oh, your alt cap, A-L-T-C-A-P? I'm at 1.4 million. What do you got, Jacob? Have I surpassed you? I think you have. I'm like 1.1 maybe. How much would it cost me to get my real name, Jason? I know a guy. I couldn't find out. You're 1.1? Yeah, I made it to 1.4. I don't know how that happened exactly. I'm just having the number one podcast in the world. Another amazing episode of the number one podcast. And Chamath has 2 million, but that's only because he has just incredible moments of engaging with his haters. Oh my god, the replies that Chamath sometimes drops are so great. I love it. Chamath goes, I light them up, I light them up. He lights them up. And then you had somebody who was like, oh my god, I was in the casino and you told me to bet black, so I bet black and I lost my money, and so you're responsible. And then you paid for the kids' college. He has two young girls, and so I funded their college accounts. I thought that was hilarious. Obviously, I'm very happy for him and his two daughters. I'm even more happy about how much it'll anger all these other goofball dorks living in their mom's basement. Yes. Who literally take no responsibility for their lives. And they should enjoy those Hot Pockets. By the way, for those folks in their mom's basement, the Hot Pockets and the fish sticks are ready. And you get one more hour of Xbox from mom. All right.
Listen, we missed you, Freeberg, but this is the best episode in two years. We'll revert at the end of the show, and we will see you all at the Liquidity Summit, except for the 400 people on the wait list who aren't going to get in. We got an email from the guys at Athena, because we were just... oh my god, they're going to hire like 500 new Athena assistants. Yes, they had a thousand people after last week, when we mentioned how much we love Athena. Go to Athena.com. But that's amazing. Those are like 500 hardworking men and women working in the Philippines who now have great jobs. Sacks, I'm going to get you a couple of Athena assistants as a birthday present. That's what I'm going to get. You're going to love this, Sacks. Athena assistants are the best. Congratulations to my friends over there. All right, everybody. We'll see you next time. Love you, boys. On your favorite podcast. See you. Take care. Love you. Bye-bye. Bye-bye. We'll let your winners ride. Rain Man, David Sacks. And instead, we open-sourced it to the fans, and they've just gone crazy with it. Love you, besties. The Queen of Quinoa. I'm going home, baby. Let your winners ride. Let your winners ride. Besties are gone. Go, 13. That's my dog taking a notice in your driveway, Sacks. Oh, man. Oh, man. My avatar will meet me at... what? We should all just get a room and just have one big, huge orgy, because they're all just useless. It's like this sexual tension that we just need to release somehow. Wet your beak. Wet your beak. Wet your beak. We need to get merch. I'm doing all in. I'm doing all in.