bouncer

Jeremy Howard · 18.7K views · 641 likes

Analysis Summary

30% Low Influence
Scale: mild · moderate · severe

“Be aware that the critique of AI-generated code serves as a strategic setup to frame the speakers' own technical projects as the necessary foundation for 'software that lasts.'”

Transparency: Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
100%

Signals

The video is a long-form, unscripted interview between two well-known figures in the tech community, characterized by natural conversational flow, spontaneous reactions, and deep personal expertise. There are no signs of synthetic narration or AI-automated editing patterns.

  • Natural Speech Patterns — Transcript contains filler words ('uh', 'um'), self-corrections, snorts, and laughter that align with real-time human interaction.
  • Personal Anecdotes and Rapport — The speakers reference specific shared history (2017 TensorFlow Dev Summit) and mutual opinions that reflect a long-term professional relationship.
  • Contextual Depth — The conversation involves nuanced technical history regarding LLVM, Swift, and Mojo that is delivered with personal perspective rather than generic facts.

Worth Noting

Positive elements

  • This video provides a rare, high-level technical perspective on the architectural debt created by LLM-generated code from the creator of LLVM.

Be Aware

Cautionary elements

  • The use of 'insider' status and historical anecdotes to frame their specific technical philosophy as an objective moral necessity for the industry.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-08a · App Version: 0.1.0
Transcript

Hi, Jeremy Howard here uh with Chris Lattner. Uh a man who probably needs no introduction, but I'll give you a bit of an intro anyway. Uh good day, Chris. Uh welcome. Thank you.
>> Hey, Jeremy. It's great to be able to spend time with you and hang out and talk on this nice Sunday.
>> Go on. Um Chris, how long have we known each other? I think it was the very first TensorFlow Dev Summit that we met.
>> Yeah, probably 2017. So, it's been uh not long enough, but a good a good amount of time. We've been through a number of different projects together. And so we uh developed a mutual distaste for TensorFlow pretty early on. I was trying to [laughter] trying to fix it at the time.
>> Yes.
>> Yeah. And so I was making it better, or trying to um
>> But the experience of those early days of that first that first TensorFlow Dev Summit, you were fresh, new to to AI. A large percentage of the people there today are [snorts] running big labs or, you know, well-known folks, and you brought in a very different background to anybody else that was there. Your your PhD project turned into LLVM, which today is at the heart of most of the world's most successful programming languages. You built a C++ compiler on top of that which is very widely used, including at Google. You built an Objective-C compiler, and then you thought, what the hell, it's all too easy. So you built a new programming language called Swift, which is running all the stuff I'm using right now to talk to you. Um, and what I found interesting is when you then came in to AI, you definitely didn't say, "Oh, cool. Everybody uses TensorFlow. Cool. I'll tweak TensorFlow." You basically did the same thing. So like, what is this mission that you've been on for the last, you're a young man, what is that, two or three years? Um, that kind of has you starting at the bottom, and and what did that look like when you hit the AI world?
>> Well, so so Jeremy, you and I have some things in common. I I think you know this, but other people may not. We like to build things, and we like to build things from the fundamentals. We like to understand them. We like to ask questions, right? And so for me, my journey has always been this uh like trying to understand the fundamentals of what makes something work. And then when you do that, you start to realize that a lot of the existing systems are actually not that great, right? And so for me at least, you then feel compelled to say, well, how about we make it better, at least for the ones that matter. And so AI systems, programming languages, compilers and systems and tools, there's a lot of things out there, and none of them were really great. And so I have thrown myself into trying to make this better. And through all of that, I've been doing a lot of coding. I've been a developer my whole career.
>> You say that your underlying mission here, or your kind of quest, is to create higher level ways of talking to machines. Like, why are you trying to do that, and how on earth does creating LLVM, the lowest possible layer of talking to machines, help in that mission?
>> Well, so I I think that computing is a big part of our lives, right? And we see it through the products we use. We see it even if you're not a programmer, in the widgets and the devices and the connected components that are all just taking over everything. AI is the most recent manifestation of this, of course, right? But but that doesn't happen unless programmers, developers can actually build these experiences. And so I'm curious about your background too, Jeremy. Like, say, you've built a few things. You built Fastmail and many other things. So what what threw you into this, and and what makes you passionate about building things?
>> I mean, I'm I'm not sure my quest is that different to yours. Um, you're interested in creating higher level ways of talking to machines.
I think I'm pretty interested in the whole conversation of, like, also higher level ways of machines talking to us. Um, unlike you, I don't have a fancy PhD, and so I never had the tools in my toolkit to go and start at the bottom and build it from from there. So I think I've kind of come in the opposite direction of starting at the top and gradually going down. And interestingly, we've met in the middle. Um, which is, so we worked together on a project called Swift for Swift for TensorFlow, which had nothing to do with TensorFlow but was actually one part of you saying, okay, we have to throw away everything and restart. So you built MLIR, which is kind of like LLVM but for the modern world, and then you built Swift for TensorFlow on that, but you also built out stuff for TPUs, and you also built out the TensorFlow runtime, and you you did the whole thing again. You built all these layers of abstraction, and and I had kind of come from the opposite direction, and we were both there saying, like, "Okay, we need a better way."
>> Because you actually know AI, right? So you you know the the use case, you understand the researcher pain points, you understand what was beautiful about PyTorch.
I think you were one of the first people, and Andrej turned me onto PyTorch
>> back in like 0.3 kind of time frames, and back before PyTorch was inevitable and obvious. And so um and so you brought a lot of the domain expertise. And so as a tool builder myself, right, the thing that you need is, you need, who's who's the person that you're solving the problem for, right? And so finding experts like yourself that actually understand um not just what people are doing today but also the frontier.
>> Sure. And it was the lack of that integrated end to end system that was killing me. Cuz I remember when I first met you, I said, like, oh Chris, you're coming to work on TensorFlow? I need to warn you, it's kind of... and I spent half an hour telling you all the ways it was terrible. And then you didn't yell at me. Instead you said, that that sounds very interesting, let's fix it together. Um, and because what I wanted for my students, for myself, for my research was to never get stuck, where I'd be like, how does that work, how does that work, how does that work, until I get to the... It's like, okay, that's the thing I need to fix. And with like PyTorch or TensorFlow, you don't get very far at all before it's like, oh, this is inside, you know, the Flash Attention 3 kernel that was actually in Triton, but actually then being converted to this other language, and then it was compiled, and then
>> or some binary CUDA blob.
>> Yeah, I don't know what just happened. Um,
>> and that just felt absolutely unacceptable to me. And even Swift for TensorFlow, I remember complaining to you about that, because I was like, "What about autodiff?
>> What's that written in?" You're like, "C++." I'm like, "What am I to do about that, Chris?"
>> Yep. Yep.
>> So,
>> well, I think I I I think it's fair to say that I learned a lot from that project. [laughter]
>> Um, but you know, interestingly, just like LLVM... how long ago was that, Chris, that you wrote it? It was actually more than two or three years ago.
>> 25 years ago.
I started it over Christmas break in 2000.
>> 25 years. So there's a project that the world has been using, you know, and still uses everywhere. And interestingly, when you kind of reinvented that with MLIR, you know, which is like intermediate representation for massive multi-core heterogeneous compute, blah blah blah, processes and matrix manipulations and all that. It's it's the same thing's happened, you know. Everything today is now getting built on on MLIR, you know. Um
>> So, well, so so the question of the day is how do you build a system that actually can last more than 6 months, right? Because the reason that those systems were fundamental, and that they became scalable and were able to be successful and then grow, but then didn't crumble under their own weight, is because the architecture of those systems actually worked well, right? They were well-designed. They were scalable. Uh, the people that worked on them had an engineering culture that they came together and they rallied behind, because they want to make them technically excellent, right? And so I think that that's something that enables systems to actually be more than just, you know, a solution to a today problem. But in the case of LLVM, for example, it was never designed to support the Rust programming language or Julia or something like this, or even Swift, right? But because it was designed and architected for that, you could build programming languages. Snowflake could go build a database optimizer, which was really cool, and a whole bunch of other applications of the technology came out of architecture, right? And I think that's something that today, you know, architecture and um some of the craftsmanship that goes into building things is at at risk. Like, it's under threat today, really.
>> I was I was hearing... I was thinking the same thing as you said it. I was thinking, like, okay, the things you're describing feel very different to the culture that I feel is being pushed very hard by the world around me.
Like, I'm feeling this pressure to say screw craftsmanship, screw caring. You know, we hear VCs say, "Oh, my founders are telling me they're getting out 10,000 lines of code today." Are we crazy, Chris? Are we are we are we old men yelling at the clouds, being like, "Back in my day, we cared about craftsmanship," you know? Or
>> No. No. I don't think so. I mean, so to me, I think that there there's a thing going on, and it's kind of driven by sometimes VCs, but but moreover by the world that's eager for progress. Let me tell you a story. In in 2017, I was at Tesla working on self-driving cars, and I was leading the autopilot software.
>> So, you were
>> and uh I was convinced that in 2020 cars would be everywhere and it would be solved, and it was this desperate race to go solve autonomy. Um, in fact, my job, which I failed at, by the way, was to get a Tesla to drive coast to coast from LA to New York. Of course, here we are, I don't know, eight years later. Nobody else has solved that problem either. But,
>> and you had like six months to get that working.
>> Yeah. A little bit less than that.
>> Okay.
>> But, uh, at the time, nobody even knew how hard that was. But what was going around, what was in the air, is trillions of dollars are at stake. Job replacement, farming, transportation. Look at the TAM of all of the uh human capital that is being put into transportation, and with one cute trick that'll be replaced. And I think that today exactly the same thing is happening. Today it's not about self-driving, although yes, it's making progress. I think a little bit less gloriously and immediately than people thought, but it is making progress. But now it's about programming, right? And so I think that a lot of people are saying, "Oh my gosh, tomorrow all programmers are going to be replaced by AGI, and therefore we might as well give up, go home. Why are we doing any of this anymore?"
Uh, code is uh, like, if you're learning how to code, or if you're uh um, you know, taking pride in what you're building, then you're not doing it right. And I think that this is something I'm pretty concerned about.
>> Yeah. And and it's it's almost worse than that, right? And the analogy there is actually really striking, because I remember when you were at Tesla, it wasn't just like, we're on the cusp. It was very specifically, all Tesla needs is a bit more data, because data is the oil, and you just grab the data and instantly you have yourself a self-driving car. And that's now what we're hearing, you know, the bitter lesson, which Rich Sutton is now saying is being totally misunderstood. Um, it's like, oh, you just need more data, and and AGI is inevitable and it's around the corner. And and if that was true, there's a whole question there about how would one live one's life, but
>> it hasn't been true in the past [laughter] and
>> yeah,
>> it it doesn't it doesn't feel... I don't know. I don't know. What do you think, Chris? Yeah.
>> Well, well, so so Jeremy, I'm I'm not the AI researcher, so I'm going to spin this one around and ask you this, right? But to me, I believe that progress looks like S-curves, right? And pre-training was a big deal. It seemed exponential, but actually it S-curved out and got flat as things went on. I think that we have a number of piled up S-curves that are all driving forward. Amazing progress, but I at least have not seen that spark. And so, I guess a a quick question for you, an easy easy slow ball for you, is, like, what is AGI made out of? Is it just more layers in your transformer? Is it just more parameters, more experts in your, like... or is it something fundamentally different? And and another way to spin it is, like, is it actually next year or 5 years off, or is it actually maybe something we'll not actually get to, at least economically?
So, I mean, something I haven't talked about much, but maybe I should here, is, you know, as you know, I I created the first LLM and
>> ULMFiT.
>> Uh-huh. Yeah. And I can tell you why I created it, and I can tell you what it's looked like since then. I didn't create it to create AGI, and everything that's happened since then is exactly what I hoped and expected would happen um with LLMs. Which is to say, I was telling everybody in 2017, you know, and 2018, this approach, and in fact I was talking in academic conferences before that even, this is an approach which, by learning the next word of a sentence predictor, it must build internal abstractions that would allow it to solve a wide variety of problems quite concisely and crisply with a smaller amount of data. And so fine-tuning is the key thing. Um, and so nowadays we use exactly the same three-step approach. We train a big language model. We fine-tune it, on instruction tuning nowadays. And then you fine-tune it on a classifier, which is RLHF nowadays. Yeah. None of that process was ever designed to create AGI. There's no reason to believe it would. It's um it it's a very big uh pattern predictor machine. And it's funny, for years I was the one telling people, you're deeply underappreciating how important a big pattern recognizing machine in natural language is. It can actually solve all kinds of problems, and you can actually talk to us, and it will actually... like, the amount of things which are just
>> interpolation
>> sorry
>> everything's a remix, right? And so
>> yeah, exactly
>> to a certain extent, that's actually quite valuable for a lot of different reasons.
>> If you can interpolate between everything on the internet, that's a lot. But, like, you and I both know that when you're literally on the cutting edge of anything... and some of those things I find you hit very quickly, because they're really small things... suddenly GPT-5 or Sonnet 4.5, whatever, become incredibly stupid.
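The "big pattern predictor" idea Jeremy describes — a model trained only to guess the next word — can be illustrated at toy scale. This is a hedged sketch, not ULMFiT or a transformer: a pure-Python bigram model that predicts the most frequent next word seen in its training text.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    """Return the most frequent continuation seen in training, if any."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "sat"))  # "on"
```

Real LLMs replace these counts with a transformer over tokens, and the second and third stages Jeremy names (instruction tuning, then RLHF) fine-tune that same next-token predictor rather than adding a fundamentally different mechanism.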
Like, they just start saying things that you're like, "Ah, you actually had no idea what you were talking about the whole time. It was all a big pretense." Well, the big pretense was very helpful, actually, when I, yeah,
>> need help. But
>> well, so so as as not the AI researcher, I'll say this, right? So I have I've been surprised multiple times with AI. I am a maximalist. I want AI in all of all of our lives, all the products. I am continuously delighted by what we can do. However, the thing I don't like is, I don't like the people that are making decisions as though AGI or ASI were here tomorrow. Right? Because if you're going to live your life in fear... and if it isn't actually coming, or maybe it's 20 years from now, or it's 30 years from now, or something, who knows? And I I'm not saying that. It's not my place to speculate. But but being paranoid, being anxious, being afraid of living your life and of building a better world seems like a very silly and not a very pragmatic thing to do. And so again, I can't tell you that tomorrow...
>> Let me come back. I don't see it as a practical thing.
>> A really good point. So I mean, first I'll say, people often ask me, you know, where where is AGI, and I've been doing AI for a bit over 30 years, and I tell people I I can't tell you with any more certainty than I could 30 years ago or 20 years ago or 10 years ago. There's been fantastic interest in it, a lot of investment, a lot of improvements, but there's nothing particularly about that that makes me think, oh, AGI is closer, you know, is 5 years away now, whereas 30 years ago I thought it wasn't. Like, it's it's a different path.
>> It could be like fusion. Fusion's always 5 years or 50 years away.
>> It's a very useful path, but there's nothing particularly about it that says, like, oh, this is an AGI path, you know? So, if we had AGI in 5 years, I wouldn't be shocked. 30 years ago, if we had AGI in five years, I wouldn't have been that shocked then either.
Like it, you know, but I'm not going to live my life differently, because it doesn't feel any different. But it it it in my heart it kind of does, because being able to use a user interface with a computer which is actually the language I speak does feel really different
>> and it feels even more different to people that aren't AI researchers. And so everybody else around the world is doing this like, AGI's around the corner, if you're not doing everything with AI, you're an idiot. And honestly, Chris, it does get to me. Like I, like I question myself, and I've seen that at my company. We we had a few months there where we, I think, I'd have to say, we we struggled to continue to believe in our vision, but everybody was like agents, agents, agents. And so a lot of us just did end up being like, we can't not do this. Everybody else is doing this. They're writing tens of thousands of lines of code a day. It's ridiculous that we're not. We're still...
>> Are we doing something? Are we doing something wrong, because we're not getting the 10x wins that everybody else is claiming they're getting?
>> So, we really invested in it, you know, and all of my team is amongst the best in the world at practical usage of AI, and the results were just terrible. We we our productivity fell off a cliff. Our morale fell off a cliff. I was unhappy.
>> How do you measure productivity? Is it number of lines of code written, or is it product progress, or is it something else?
>> It's getting stuff out the door that people use.
>> Yeah. You know,
>> so that's how I measure productivity too, right?
>> Have you seen this, Chris? Like, I mean, you've got lots of people working for you, super smart, know a lot about AI. Did Did they find the trick to writing these tens of thousands of lines of code a day which transformed your company, and you're now going twice as fast or a thousand times as fast, or...
>> Well, so so no. No. I mean, I I use AI coding. I think it's a very important tool.
Um, I feel like I get a 10 to 20% improvement. Some really fancy code completion and autocomplete, and it's amazing for learning uh a code base you're not familiar with. And so it's great for discovery.
>> I do a lot of production code work, but it's... it probably is five or 10x productivity for prototypes.
>> I think that's that's a place where it can be a huge deal, if you're just saying, crank out five different theories and you can get to a working mockup, or air quote working. But there I can see major productivity wins. But for the kinds of work that we do, um, it's not that great. And the reason is is that I don't measure progress based on number of lines of code written. In fact, I see code, or verbose, redundant, not well-factored code, as a huge liability. I mean, take take one example of this, which is unit tests. So AI is really great at writing unit tests. This is one of the things that... nobody likes to write tests. It feels super productive to say, just crank out a whole bunch of tests, and look, I've I've got all this code. Amazing.
>> But there's a problem, right? [clears throat] Because unit tests are their own potential tech debt.
>> Absolutely.
>> Right. Because the test may not be testing the right thing. Um, if you're using
>> they might be testing a detail of the thing rather than the real idea of the thing.
>> And if you're using mocking, now you get all this, like, super tightly bound uh implementation detail in your tests, which makes it very difficult to change the architecture of your product
>> as things evolve, right? And so tests are just like the uh code in your main application, where you should think about them. Also, lots of tests take a long time to run, and so they they impact your future development velocity. There's, like, all these things. And so, um, so to me, the the question is not how do you get to the most code. Like, I'm not a CEO bragging about the number of lines of code written by AI. That's, I think, a completely useless metric.
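Chris's point about mocking binding tests to implementation details can be shown in miniature. A hedged sketch (the `Pricing`, `_tax_rate`, and `total` names are hypothetical): the second test below pins an internal helper by name, so a pure refactor that renames or inlines the helper breaks it even though observable behavior never changes.

```python
import unittest
from unittest import mock

class Pricing:
    def _tax_rate(self, region: str) -> float:
        # Internal helper -- an implementation detail a refactor might
        # rename or inline without changing observable behavior.
        return 0.2 if region == "EU" else 0.1

    def total(self, price: float, region: str) -> float:
        return round(price * (1 + self._tax_rate(region)), 2)

class BehavioralTest(unittest.TestCase):
    # Tests the public contract only: survives internal refactors.
    def test_total_eu(self):
        self.assertEqual(Pricing().total(100.0, "EU"), 120.0)

class MockCoupledTest(unittest.TestCase):
    # Pins the internal helper by name: renaming or inlining _tax_rate
    # breaks this test even though total() still returns correct values.
    def test_total_calls_helper(self):
        p = Pricing()
        with mock.patch.object(p, "_tax_rate", return_value=0.5):
            self.assertEqual(p.total(100.0, "EU"), 150.0)
```

The behavioral test is an asset; the mock-coupled one is exactly the kind of tech debt described here, because it converts today's architecture into an assertion that future refactors must fight.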
The question is, like, how productive are people at getting stuff done, at making the product better? This is what I care about.
>> And what have you seen? Like, have you have you seen people... I mean, there must be people using these kind of, trying out these highly agentic workflows, vibe coding style things, in your company. Like, what... how has that gone?
>> So I'll give you some negative examples first, right? So I've seen somebody, a senior engineer, who, you know, a bug gets reported, and it's like, let the agentic loop rip, go spend some tokens, and maybe it'll come up with a bug fix and create a PR. And you go get this PR, and it's completely wrong. It made it made the symptom go away. So it air quote fixed the bug. But it just was so wrong that if it had been merged... and it didn't, but if it had been merged, then it would have just made the product way worse, because now suddenly you replace one one bug with a whole bunch of other bugs that are harder to understand. A ton of code that's just in the wrong place doing the wrong thing. And so that is deeply concerning. And to me, the actual concern is not this engineer, because fortunately they're a senior engineer. They're smart enough not to just say, like, okay, it passes tests, merge. Uh, we also do code review, which is also a very important thing, by the way. Um, but the concern I have is that it's this culture of, okay, well, I'm not even going to try to understand what's going on. I'm just going to, like, spend some tokens, and maybe it'll be great, and now I can not have to think about it. This is a huge concern, because a lot of evolving a product is not just about getting the results. It's about the team understanding the architecture of the code.
>> Yeah.
>> Right. And so if you're delegating knowledge to, hopefully, an AI, uh, but you're now just, you know, reviewing the code, but you're not thinking about what you want to get... I think that that's very very very concerning.
>> Yeah.
And it's it's missing the whole point of the the the craft of software engineering, isn't it? Because software engineering has always been about trying to get a product that gets better and better, and your ability to work on that product gets better and better, and things get easier and easier, and things get faster and faster. And and that's about you building better and better abstractions and better and better understandings in your head.
>> And I mean, fundamentally, what you're trying to do... again, there's lots of different kinds of software projects, uh, software generally lives for more than 6 months or a year, right? And so the kinds of things I work on, and I think the kinds of systems that you like to build also, are things you continue to evolve, right? You look at the Linux kernel, right? The Linux kernel has existed for decades now, with tons of different people working on it, and that is made possible by an architect, Linus, who is driving consistency, driving uh abstractions, and driving improvement in lots of different directions. And so that longevity is made possible by that architecture question
>> and requiring a high level of craft from everybody, and sometimes being an ass about it. But
>> I don't think that's so much, you know...
>> Yeah, I don't think that's required. But the uh, I mean, different different...
>> I don't think it's required, but it's...
>> different paths are fine. But but the uh but the thing that's required is for people to actually give a damn.
>> Like, for people to care about what they're doing, people to be proud of their work. And as you say, the craftsmanship, software craftsmanship, I think, is the thing that AI code threatens. Not because it's impossible to use properly. Again, I use it. I feel like I'm, you know, doing it well, and I care a lot about the quality of the code. But because it encourages folks to not take the craftsmanship and the design and the architecture seriously.
Instead, just evolve to get my bug queue to be shallower, and just make the symptoms go away. And I think that's the thing I find concerning.
>> Yeah. Yeah. Our focus at at Answer AI and fast.ai is all about, you know... despite being a very long-running AI research lab, the focus has always been the humans. You know, fast.ai's original first goal was, can we make it so that people without a PhD can use AI? And that was absolutely unheard of, absolutely insane, you know.
>> So Jeremy, like, you and I have a lot in common, but let me tell you one big difference between the two of us, is that I always am obsessed with building things and chasing my interest and understanding the next the next thing and and digging into things. You actually take time to teach people things. This is one of the reasons you've had so much impact on the world, particularly with fast.ai, but with many of your other projects, is that you don't just do a thing and fill your head with knowledge and build systems. You then actually take it back and do the hard work of trying to get other people to understand it, which is a completely next level next level problem, by the way, because humans are complicated. And so I think that that that approach is something that you've brought uh to Answer AI, but also it's what enables the tools, the technologies, the ideas to actually get out. And I wish I had spent more time doing that.
>> I disagree with the implication you haven't, Chris. I mean, with stuff like the LLVM Foundation and the way you've built these open source communities... you've create... you've... I I actually think you're much more patient with humans than I am.
But yeah, I mean, I I do think that that, you know, that that idea of teaching is important. And and at at Answer AI now, you know, I I always tell my staff, I'm much more interested in if you spent the day learning a lot more about a thing, or getting much better at a thing, than producing one extra bug fix or one extra feature, because then tomorrow you'll be twice as good at that thing. And those things uh just, like, gain this momentum, right? And you get better and better at doing stuff, and you can do more and more as a human. And the feeling is magnificent, of being good at something, like being really good at something, and and being able to branch out and using that to get good at other things. And I think that was part of... I mentioned, like, we had these kind of months where we went off and doing stuff with the agentic loops and whatever, and I was miserable. When you're improving, you're vi... vibe coding things, and suddenly you're...
>> What what I've seen, another thing I've seen, is the people that say, like, okay, well, maybe it'll work. And it's almost like a test, and you go off and you say, maybe the agentic thing will go crank out some code, and spend all this time
>> waiting, like a gambling machine, right?
>> coaching it, and then and then it's like, oh, it didn't. [laughter] It's like, again, try again, just try again.
>> Exactly, exactly. And and again, I'm not saying the tools are useless or bad or something like this, right? But when you take a step back and you look, where is it adding value, and how, right? I mean, I think that there's a little bit too much enthusiasm, and they're like, okay, well, when when AGI happens, it's going to solve the problem. Um, just, I'm waiting and seeing. Um, well, so here's here's another aspect of it, right? So, the anxiety piece. So, I see a lot of a lot of junior engineers that are uh, for example, coming out of school, and they're very worried about, will I be able to get a job?
And I think a lot of things are changing, and I don't really know what's going to happen. Um, but uh, to your point earlier, a lot of them say, "Okay, well, I'm just going to vibe code everything," right? Because this is productivity. This is air quotes expected. But I think that's also a significant problem.
>> Seems like a career killer to me.
>> Absolutely. And so if I were coming out of school, my advice is, don't pursue that path. Particularly, if everybody is zigging, it's time to zag. Um, what you want to get to, particularly as your career evolves, is you want to get to mastery, so you can be the senior engineer, and so that you can actually understand things to a depth that other people don't. And that's how you kind of escape out of that, um, the things the AI can do, to get more differentiation.
>> Which might require finding the right company, you know, because some CEOs are doing this like, we're going to judge people based on how much AI they use, you know, and how many lines of code they have AI create. And, you know, I guess there are going to be some places to work that maybe you just shouldn't. Um, but
>> there's, outside of AI, there's a lot of reasons why certain companies are career dead ends. And so I don't think that's actually particularly new. It's just that now some of the shinier tech companies are trending towards some of the other things that people learned not to do in the previous generation.
>> What do you mean, Chris?
>> Oh, also, I mean, the uh, I'm not going to name names, but right, there it used to be that the uh Google or the Microsoft or the other, Meta, the big companies, were the shining stars in uh career progression. And then there's other other, like, less Silicon Valley companies that were in the Midwest or something. They're not as shiny and awesome and were not seen as as prestigious. Um, and so, you know, you could theoretically have a faster career growth if you went to Silicon Valley or something like that.
But I think the question is really about progress. It's about you as an engineer. What are you learning? How are you getting better? How much mastery do you develop? Why is it that you're able to solve problems that other people can't solve? And if everybody's using the same tools, you need to find a way to differentiate yourself, and figure out either domain expertise, or a portfolio of what you've done, or ideally the ability to build things that actually are sustainable and scalable and actually work well. And so if everybody's, you know, just using the same AI coding tool, you need to figure out how to break past that a little bit, instead of just doing that rote, you know, unfulfilling work. >> I'd love to tell you a bit, I mean, I know you know a bit about it, but I'd love to tell you a bit about the AI coding tool that we're building, because you've been part of that journey for many years and have been an inspiration for some of it. I remember when we were sitting next to each other working together, that we both had something in common, which is we both had a very tight iteration loop, and we had different ones. My one >> very important to me. Yeah. >> Yeah. I remember the first thing I said to you and your team is like, "Okay, guys, if we're going to work together on this Swift thing, I'm going to need a notebook," you know, and you immediately were like, "Of course you do." And like a week later, you're like, "Here's a Swift kernel. All the AI stuff works," you know. And I was straight into it then, because I could type, see the result; type, see the result.
I'm manipulating the data through a process, step by step, and watching it, you know, very much the kind of Bret Victor style, his "Inventing on Principle", of like, you want to be close, you want to be crafting the thing through the steps and watching it. And you had a very different set of tools, but, I mean, you say, how do you create this tight iteration loop? Because in statically compiled things and low-level things it's probably a bit different. I'm not an expert on it. >> Yeah, so I work on systems, and systems are slightly different, but what I care about is that loop of: edit the code, compile, run it, get a test that fails, and then debug it, and iterate on that loop. And so there's a number of things that go into that. One is just the build system: make sure you can incrementally build just the target that you care about. Just the executable, just seconds. >> Yes, exactly. >> The next is running tests. Running tests should take less than a minute. Ideally, less than 30 seconds. And so, you're not going to run all of the tests in the Modular monorepo in 30 seconds, let me tell you that, right? But when you're working on a component, you're not, you know, running all of the tests. You're working on one piece. Now, what that means is that you actually have to design your test suite. You have to design the architecture. You have to design and build testing-specific tools. This is something that I think LLVM has been quite good at, and it's one of the things that was very unusual. It's very different than what GCC, for example, did before, where LLVM has a number of tools that are not the C compiler, but they're just the optimizer, or they're just the parser, or they're a small piece of a larger system.
So you can write unit tests. And actually investing in the test harness, so you can write large-scale, efficient-to-run, reusable tests instead of just doing, like, GTest for everything, is a big piece of that. And so again this comes back to the software craftsmanship, like thinking about the big picture. As things evolve, code bases naturally get bigger. It's very rare that a production codebase ever gets smaller, right? Usually they just grow and grow and grow until they start crumbling under their own weight. This is quite important, and so I care about >> you know, we've both >> we've both really deeply invested in our tools, you know. Like, so now with Mojo, I know that's something else you've been working on, I was quite surprised at how quickly you had a full VS Code development environment with all the niceties, but then I kind of thought, well, of course Chris would focus on having his team do that first, because without tools that let you create quick iterations >> Yeah. >> all of your work is going to be slower and more annoying and more wrong. >> Yeah. Well, so in the case of Mojo, I think there are two things going on. One is I'm fortunate to have an amazing team. [laughter] So I'm not the only developer on Mojo. Trust me, I am mostly an intern on the team that's helping out with obscure things. But the second piece is the experience of having done it before. Right? And so building Swift, I learned a tremendous number of lessons about what to do that worked out well, but also what not to do. And so the tooling piece of this, the using-your-own-product piece of this, is actually really important. One of the big things that caused, for example, the IDE features and many other things to be a problem with Swift is that we didn't really have a user.
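Chris's point about component-scoped testing, keeping the loop in seconds by running only the suite for the piece you're touching, can be sketched in Python with the standard `unittest` machinery. The toy `parse`/`optimize` functions and test classes below are invented for illustration; they are not from LLVM or Modular's actual test suites:

```python
import unittest

# Two toy "components"; in a real repo these would be separate modules,
# each with its own focused test file.
def parse(src: str) -> list[str]:
    """Toy parser: split source text into tokens."""
    return src.split()

def optimize(tokens: list[str]) -> list[str]:
    """Toy optimizer: drop no-op tokens."""
    return [t for t in tokens if t != "nop"]

class ParserTests(unittest.TestCase):
    def test_splits_on_whitespace(self):
        self.assertEqual(parse("add r1 r2"), ["add", "r1", "r2"])

class OptimizerTests(unittest.TestCase):
    def test_drops_nops(self):
        self.assertEqual(optimize(["add", "nop", "ret"]), ["add", "ret"])

def run_component(test_case: type) -> unittest.TestResult:
    """Run just one component's tests: the 'seconds, not minutes' loop."""
    suite = unittest.TestLoader().loadTestsFromTestCase(test_case)
    return unittest.TextTestRunner(verbosity=0).run(suite)

if __name__ == "__main__":
    # While hacking on the parser, only the parser suite runs.
    result = run_component(ParserTests)
    assert result.wasSuccessful()
```

The same selection is what `pytest -k` or `python -m unittest module.ParserTests` gives you from the command line; the design point is that the suite is partitioned so such a selection exists at all.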
We were building it, but before we launched we had one test app that was kind of, air quotes, dogfooded, but not really. And so we weren't actually using it in production at all, and by the time it launched, you could tell: the tools didn't work, it was slow to compile, crashed all the time, lots of missing features. And so with Mojo, you know, we consider ourselves to be the first customer. We have hundreds of thousands of lines of Mojo code, and it's all open source >> it's amazing >> and so it's a complete >> it's like over half a million, isn't it? >> Yeah, yes >> amazing >> it's quite large. We're going to be open sourcing a lot more soon too >> but that approach is very different. And so this is a product of experience, but it's also a product of, like, we're building Mojo to solve problems, and so we're learning from the past. We're taking best principles in. If we're going to make a mistake, let's make a new mistake. And so this is a big piece of it. >> Yeah, it's interesting. So, you know, my background is so different. But wanting to achieve the same tight iteration loop is my number one goal as well. And it's interesting, you know, when you look at the history of the tools that I admire, it's Smalltalk, it's APL, it's Lisp, it's Mathematica. They're all environments where you have a workspace, you know, or >> you're manipulating the code. >> You have a symbol table. Yeah. And every line of code you run is manipulating objects inside that workspace. So like with APL, you don't ship somebody the compiled piece of code. You ship somebody the state of the workspace when you're done. You know, you're typing at the REPL, you're solving things, and then you're like, "Yeah, this is working nicely," and you just >> That's what Emacs was. You know, Emacs originally was a piece of >> It's kind of like you're pickling it and you're sending the pickle, almost.
Yeah, you know, when you start it >> it was basically just pulling out, you know, what Richard Stallman's local RAM looked like at the point when he was like, "Okay, I'm happy with these sets of manipulations." And people don't realize Python is that, you know. It's not that far from being a Lisp. People have done everything they can in recent years to make it look like Java, you know, but it's not; that's a fake. And actually, if you lean into these kind of Smalltalk and Lisp roots, there's an extraordinary thing there. So nowadays, for example, we have a really cool AI bot, rather unimaginatively called Discord Buddy. It sits in Discord and it has access to all of our GitHub repos and indexed versions of all our Discord messages, and our lawyers use it, our accountants use it. We have a whole professional services subsidiary, actually, called Virtual; they all use it, and we just ask it questions, like, hey, you know, how does this fit into our strategy around that, or can you tell me about this particular client's most recent share thing, or whatever. That is a piece of code we never deployed. It's actually a live symbol table living in this piece of software we wrote called Solveit.
Where Eric, who wrote Discord Buddy, just typed commands one at a time to edit that symbol table, you know, to say, like, okay, let's create a dictionary, let's put this in here, let's do this, and, like, oh, that was a good set of things, let's wrap that up into a function. And you can just read it top to bottom, and when he wants to fix a bug or add a feature, he doesn't pull it out and go here and go here and go here. He's just like, okay, I'm going to manipulate that route, or that dictionary, and he does it, and it's just there, you know. And this way of working, like, you were really the first notable, yeah, let's go, I'll talk about nbdev, I just want to say, you were like the first notable supporter of nbdev, like you >> really understood this idea >> let me tell you something that I see in your work, okay? Because you and I do have many commonalities, right? And so I'm basically reinventing stuff I built 15 years ago, but for the modern AI and GPU era. What I see is, you're taking lessons learned from Jupyter notebooks, which then inspired nbdev. nbdev is a really cool way of saying, let's build a Jupyter-first development environment. Let's actually make it so you could do production development in notebooks and take advantage of all the real time >> in steps. >> Yeah. Exactly. And so all the things you're talking about there. But now what you're doing is you're bringing all that knowledge and all that experience forward, where you're saying, let's actually solve a couple of new problems. One is take advantage of that category of technology. Let's bring in LLMs and AI coding. >> Exactly. >> And let's make it so people actually can build products while retaining mastery of their code >> and learning more and more. >> Exactly. And so the work product there is not just the outcome, but it's also something you can work with, you can manipulate, you can maintain, you can evolve.
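The "ship the workspace, not the source" model Jeremy describes can be toy-sketched in Python with the standard `pickle` module: state accumulates one interactive step at a time, helpers get wrapped up after the manual steps looked right, and the deliverable is the pickled live state. The `workspace` dict and `add_note` helper are made up for this sketch; they are not how Solveit actually works:

```python
import pickle

# Build up state interactively, one step at a time, as if at a REPL.
workspace = {"clients": {}}
workspace["clients"]["acme"] = {"notes": []}
workspace["clients"]["acme"]["notes"].append("renewal due in Q3")

# Once the manual steps look right, wrap them into a reusable function.
def add_note(ws, client, note):
    ws["clients"].setdefault(client, {"notes": []})["notes"].append(note)

# "Ship the pickle": the deliverable is the live state, not a source tree.
blob = pickle.dumps(workspace)

# Later, or elsewhere: restore the workspace and keep manipulating it.
restored = pickle.loads(blob)
add_note(restored, "acme", "sent proposal")
print(restored["clients"]["acme"]["notes"])
# → ['renewal due in Q3', 'sent proposal']
```

The Smalltalk image and the APL workspace are the fully developed versions of this idea: the session's objects, not a compiled artifact, are what you hand over.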
I think this is what makes it, to me, so exciting and so fundamentally new. Because again, you go back >> to the top of the discussion, there are so many entrenched interests that want to convince everybody in the world that coding is dumb, we shouldn't think about this anymore, like if you're not just vibe coding, then you're doing it wrong. But in reality, okay, you're making a lot of disposable code. You don't know how anything works. If you're not developing and growing as an engineer, you're not creating something with durable value. And I think to create something with durable value, you need to both get the benefit of AI coding, it's amazing for discovering and finding new ideas, it's amazing for teaching you new things, like, what is that new API? What is that data structure? What are four different ways of approaching this problem, right? But then make sure that you own the result and actually can evolve it and work with it and be proud of it. >> Yeah. And we discovered this amazing thing. So previously I always felt like I had a dialogue with a computer, where I'm in, you know, a supercharged REPL, typing in things and seeing results. Now I've got a third person in this dialogue, which is the AI. And so we've developed this key rule behind everything we do with AI, which is: the AI should be able to see exactly what the human sees, and the human should be able to see exactly what the AI sees, at all times. And so Eric Ries has been writing his new book in Solveit, and the AI can see exactly, you know, he writes every line of the book, but he's always asking the AI, like, hey, does this paragraph align with this mission that we did in the last paragraph, or hey, have we discussed this case study before, or hey, can you go into my editor's notes and check whether we have any comments
on this thing. And the AI doesn't need any CLAUDE.md file or whatever. It's there with you, you know; it's in the trenches, doing the work and watching the progress. So we've got this thing called Shell Sage, built by one of our great team members, Nate, who used to co-run the LLM stuff at Stability AI. And he was like, okay, I'm going to take this idea and I'm going to put it into bash, into the shell, and it was like 100 lines of code. He was like, "Wait, tmux already shows everything that's happened." So if I just add a command, which you can type while you're using tmux, that talks to an LLM, the LLM can see all of my previous commands, all of my previous questions, all of my output. It's this amazing thing. It's like 100 lines of code, and by the next day, all of us were using it all the time. So we kind of found this key theme behind how to work with AI, that just so happens to perfectly fit with the nbdev kind of background, where it's like, yeah, bringing in a co-worker who is always pairing with us, because they can always see everything we're doing and why we're doing it. We also end up with better artifacts, because >> so it sounds like what you're doing is, instead of bringing in a junior engineer that can just crank out code, you're bringing in a senior expert, a senior engineer, an adviser, somebody that can actually help you make better code and teach you things. >> Yeah, exactly. We still have some agentic pieces, you know, for the stuff that it's good at. So sometimes, you know, the other day we're like, okay, this is clearly a bug in Jupyter here, it hasn't sent this thing across. And you just say, like, hey, use your tools to go in and have a look to see what printed out this warning, what called that, why it called that, how, you know, and it came back and it said, like, okay, [snorts] here it all is. I could have done it myself. It would have taken 15 minutes. It would have been boring and annoying.
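A minimal sketch of the Shell Sage idea as described: capture the tmux scrollback the human is already looking at, and hand it to a model along with the question. This assumes `tmux capture-pane -p` is available (it prints the current pane to stdout), and a stub lambda stands in for the real LLM client; the actual Shell Sage code is not shown here:

```python
import subprocess

def capture_pane() -> str:
    """Grab the current tmux pane contents: everything the human can see."""
    return subprocess.run(
        ["tmux", "capture-pane", "-p"],
        capture_output=True, text=True, check=True,
    ).stdout

def build_prompt(history: str, question: str) -> str:
    """Combine the shared terminal history with the user's question, so the
    model sees exactly what the human sees."""
    return (
        "You are a shell assistant. Recent terminal history:\n"
        f"{history}\n"
        f"Question: {question}"
    )

def ask(question: str, llm, capture=capture_pane) -> str:
    """Send history plus question to `llm`, a stand-in for a real LLM call."""
    return llm(build_prompt(capture(), question))

# Usage with stubs, so it runs outside tmux:
history = "$ pytest\nE   ImportError: No module named 'foo'"
reply = ask("why did pytest fail?",
            llm=lambda prompt: "stub-reply",
            capture=lambda: history)
```

The design point is that no context file needs maintaining: the terminal itself is the shared context, already visible to both parties.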
So that was nice, you know, but >> yeah. No, the automation features of AI are super important, right? I mean, I think this is something where getting us out of writing boilerplate, getting us out of memorizing APIs, getting us out of, you know, looking up that thing from Stack Overflow, I think is >> really quite profound. I think this is a good use. >> Yeah. >> The thing that I get concerned about is if you go so far as to not care about what you're looking up on Stack Overflow, and why it works that way, and things like this, and not learn from it. That's the concern. It feels like we're going to have a bifurcation of skills, because the people who use AI the wrong way are going to fall further and further behind, and the people who do use it to learn more and learn faster are going to outpace the speed of growth of AI capabilities, because they're a human with the benefit of that. >> It's going to be interesting. I feel like there's going to be this group of people that have got a learned helplessness, and there's maybe a smaller group of people where everybody's like, "How does this person know everything? How are they so good?" You know? So, to me, just as I reflect on my personality and how I work: I don't watch the evening news. I do doomscroll Twitter a little bit, but I don't really sink myself into social media and things like this. And so there's so much of the noise of the world that's going on that I'm completely unaware of. And guess what? That makes me happier. But what it does is it frees up brain cells and brain capacity for other things. I don't care who Elon's fighting with. Trust me, that's not something that I really want to care about. But I think that AI coding can be the same thing, right?
If you get sucked into, okay, well, I need to figure out how to make this thing make me a 10x programmer, it may be a path that doesn't bring you to developing at all, right? It may actually mean that you're throwing away your own time, because we only have so much time to live on this earth, right? And so what that can do is it can end up retarding your development, and preventing you from growing and actually getting stuff done. And so I think, to your point earlier about the team and losing productivity, what would be great would be to actually have some kind of objective metric of productivity, like, can we actually look at, okay, well, where is this 10x benefit theoretically coming from, and who does it accrue to? And I think that maybe what we'll find out is that there's a very bimodal distribution. Like, for the people building prototypes, it is actually transformative. I actually totally believe that. Or the people who don't know code being able to get something done that they otherwise couldn't do. That is actually transformative. >> Completely convinced. >> But for other kinds of programming, maybe it's not actually the thing that you should be aspiring to. >> And if we can actually define these things as two different terms, define these worlds, get some of the BS and the overhype out of the equation, and get people to be more rational about this, maybe we can find a better way to use the tools for what they're good at. And I think that could lead to a more productive and happier world for people. >> And we've done that before, right? Like, I feel like we've had this same conversation about giant Excel spreadsheets and Microsoft Access databases and VB scripts, and, you know, folks in business groups quite correctly saying, like, this has genuinely transformed our department, and people in IT genuinely saying, like, that's great, but we can't run our whole company on this. >> Well, okay.
So technology waves cause massive hype cycles and overdrama and oversell, whether it be object-oriented programming in the 80s, everything's an object, versus the internet wave in the 2000s, and now everything has to be online, otherwise, you know, you can't buy a shirt or something, right, or dog food, right? I mean, there's truth to the technology, right? But what ends up happening is things settle out, and it's less dramatic than was initially promised. And so the question is, when things settle out, you as a programmer, where do you stand? Have you lost time? Have you lost years of your own development because you've been spending it the wrong way, and now suddenly everybody else is much further ahead of you in terms of being able to create productive value for the world? Right? I think that's the question. And so I do expect some amount of correction. I also expect the tools to get way better, and I don't think that we're done yet, for sure. Right? But the real question I think we all weigh is, how do we actually add value to the world? How do we actually do something meaningful? How do we move the world forward? And for me personally, it's about how am I proud and happy and doing something that I feel is contributing. >> Yeah. And in the end, like, you and I enjoy the process of software craftsmanship, and so we enjoy putting the time in, and we enjoy getting better at it, I guess. I feel like a lot of folks could enjoy that if they put in the time to get good at it, but not everybody has to, for sure. I think >> yeah, I respect people that are not into that. I mean, it's totally reasonable to think about programming as just a means to an end. And that's fine. And to that I would say: if your end is building something that lives more than six months, then you do care about the maintainability and the evolution of the software architecture. So there's an air-quotes business reason to do that.
But I also admit that I don't program because I have to. It's definitely not my day job. You've recently pointed out my GitHub history. It's like, this is how you spell nights and weekends, because all the green dots are on the weekends, right? But, you know, there are lots of reasons that different people can weigh these factors in different ways and decide what's right for them. >> Yeah, for sure. And I think your comment about the news and Twitter and stuff is pretty relevant here, which is just, you know, by choosing not to put yourself in an environment where you've got all this sloshing over you all the time, you don't have to fight it, you know, because it's not being thrown at you. >> That's right. I'm very much focused on taking the next hill. So that's what I'm always looking at, and that's why I climb hills. >> Yeah. Well, you've done an amazing job of it, Chris. And I think it's actually quite clear now Mojo is going to be hugely successful. You just raised how much money? I think other people are starting to understand this too. >> Yeah, we raised $250 million from some folks that believe in us very strongly. >> Yeah. >> So obviously I've believed in you since before you did. And it's great to see, you know, that other people are now seeing how successful this is going to be. >> Well, so I mean, I would say that a couple of different ways. I mean, this is all to plan, of course. And so raising money is an artifact of being able to build things and being able to be successful. And so it's a great validation, and I'm very excited to work with these new folks. But honestly, it's about building the right thing. And to me, what I struggle with, and again I come back to software craftsmanship, is that when you build a new programming language, and I agree with you now, Mojo is inevitable. Yes, it will be open source. Yes, we're planning all this stuff, and it will do amazing things, for sure.
But because of that, I have a huge burden of responsibility. Like, it will happen, and so the question is, is it actually good or not? And so this is where design matters, architecture matters. This is where tools matter. This is where things fitting together, and being able to evolve quickly, and being able to make mistakes and fix them, matters. And so this is why I care about building these things: because I do believe that the things we build can have an impact, but they can only have an impact if they're built well. And they don't have to be perfect. Of course, I'm sure Mojo will have its own mistakes and its own bugs and problems, and that's guaranteed. But if we care, then we can get closer to the mark. And if we don't care, then you're probably never going to get there. >> Look, it's built around a strategy and a vision, and that's being consistent. And that's, you know, I think that's the key thing. You've got an amazing team, but it's channeling the direction that you've figured out, you know, that you've spent years figuring out. And so >> and that's actually something I should put in a pitch for >> Eric Ries's upcoming book. Like, this is kind of the key thesis of his: people who lead projects need to be believed in, you know, and need to be given the room to >> keep at it. And he's developed ways to make sure that happens. >> And >> I'm excited to read his book. >> Yeah. Well, I mean, you've got it sorted out already. You know, you've been able to stick with your vision, and it's clearly going to happen. So I'm really excited. >> Well, so I mean, I think that again goes both ways, right? So on the one hand, this is my life's work, right? I mean, this is what I was put here to do, and Mojo and MAX and solving AI compute and making it so people can program all the chips and people have choice.
Like, this is what I'm about, and I haven't been too shy about that. But on the other hand, that doesn't mean everybody believes, and that's okay with me. Like, a lot of people haven't believed, for various reasons, whether it be the angry person on Hacker News shaking their fist at whatever, why didn't I just use Julia >> or whether it be an employee that's like, yeah, actually, you know, I don't believe in X and Y and Z and whatever. And it's like, okay, cool, this is the wrong place. There are other people that would love to join, and so I'd love to work with them, and if it's not the right fit, then please leave, because it's better for both of us, you know. It's about making sure that you're working with the people that can get to success, and making sure they're rewarded, and people can be part of the success if they believe in it, but not trying to make everybody happy, because trying to make everybody happy is how you get watered-down committee things, and you can't really make, you know, a big bold bet if you do that. And so you have to actually, as you say, have a hypothesis, have a core of belief, and it may be directional, it may not have all the digits of precision, but you have to stick to it, otherwise you won't get to it. >> Yeah, for sure. Well, thank you, Chris. So before we wrap up, I just want to ask people watching: if you're interested in seeing the vision that I'm trying to create, please come along to solve.it.com and have a look. It's kind of crazy. It's very different, but it's really working. We've got so many people now that have literally said their life has changed dramatically for the better thanks to it. And definitely don't sleep on Mojo, because it's what we're all going to be using pretty soon.
So, >> well, Jeremy, one of the things that I'm fond of you for, for many reasons, but your generosity with your time. Not many people, particularly with your background and your incredible talent and capabilities, would spend their time building tools to teach people and allow them to grow. You've taught me so much. I feel very thankful to you for that. But a lot of other people wouldn't do that. And so I think that even trying to move the world forward in this space, it can be very isolating sometimes. And there are all these people that say, you know, your ideas are terrible and blah blah blah, the world doesn't want X, Y, or Z. >> But that always happens. Like, you should definitely not listen to those people. I mean, thank God, those are the people that aren't building anything themselves, right? >> No. Exactly. >> And thank God now we have our community. You know, we've got, what, four million people who have learned AI through fast.ai. They're still with us. A lot of them are running labs and stuff nowadays, and they're in our Discord, and, you know, we support each other, and we have this really warm, encouraging, deeply skillful community. And I think everybody needs people around them that >> yeah >> are supporting what they're doing. >> Well, and what I've seen across my career, I've seen people that, you know, might have been an intern for me 20 years ago; now somehow this happens. But what I see is the story arc of different people's careers, their passions, where they drive. And the people that I see doing really well in their careers and their lives and their development are the people that are pushing. They're not complacent. They're not just doing what everybody tells them to do. They're actually asking hard questions, and they want to get better.
And so investing in yourself, investing in your tools and techniques, and really pushing hard so you can understand things at a deeper level, I think, is really what enables people to grow and achieve things that they maybe didn't think were possible a few years before. >> I couldn't agree more. Thank you, Chris. That's so inspiring, and I hope everybody takes it to heart. Thanks for your time today. >> Yeah, it's great to see you. >> All right. >> Cool.

Video description

Chris Lattner (creator of LLVM, Swift, and Mojo) and I (Jeremy Howard) discuss why rushing to AI-generated code might be destroying the craftsmanship needed to build software that lasts. When CEOs brag about 10,000 lines of AI code per day, are we racing toward a future where no one understands how anything works? In this conversation, Chris shares hard-won insights from 25 years of building foundational systems that still power most of today's programming languages. We explore the difference between "vibe-coding" (hoping AI will solve your problems) and using AI as a tool to enhance your mastery. Chris reveals specific examples of AI coding gone wrong at his company, why unit tests generated by AI can become technical debt, and how he actually uses AI to get productivity gains without sacrificing understanding. We also discuss Chris's journey from LLVM to Mojo, the parallels between today's AGI hype and the 2017 self-driving car predictions, and how developers can build skills that differentiate them in an AI-saturated market. Whether you're a junior engineer worried about your career or a senior developer trying to navigate the AI tooling landscape, this conversation offers a pragmatic path forward: how to embrace AI while still building software, and a career, that lasts.
