bouncer

Jon Gjengset · 20.2K views · 595 likes

Analysis Summary

10% — Minimal Influence (scale: mild / moderate / severe)

“The video is highly transparent; be aware that the career advice is based on the host's specific high-end engineering experience which may not apply to all market segments.”

Transparency: Transparent
Human Detected: 100%

Signals

The video is a nearly three-hour live stream featuring highly natural, unscripted speech with significant personal anecdotes and real-time interaction. The presence of filler words, spontaneous laughter/noises, and complex personal context confirms it is human-created.

Natural Speech Patterns: Frequent use of filler words ('uh', 'um'), self-correction ('I guess New Year's Day'), and natural pauses/stutters ('I I tend to').
Personal Anecdotes and Context: Detailed personal life updates, including a proposal in France, buying an apartment, and calculating his age (36) from his birth year (1989).
Live Interaction and Physical Cues: The transcript includes a physical sound [snorts] and direct responses to live chat questions that weren't in the pre-prepared list.
Cognitive Load Indicators: The speaker describes the mental process of 'recomputing' his age, a very human psychological trait rather than a scripted or synthetic output.

Worth Noting

Positive elements

  • This video provides deep, nuanced insights into senior-level software engineering philosophy and the specific technical evolution of the Rust ecosystem.

Be Aware

Cautionary elements

  • The host's personal preferences (like terminal-only workflows) are presented with such authority that viewers might mistake personal taste for industry requirements.


About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-08a · App Version: 0.1.0
Transcript

Hi everyone. Happy New Year. Uh, welcome to a new New Year's, I guess New Year's Day, uh, Q&A stream. Um, we've been doing these on January 1st for a couple of years now, and it just feels like the right time to answer a bunch of questions, because it can all be about the year that just was, and besides, what else are people doing on January 1st, right? Like, a bunch of you are probably hung over, tired, or both. Um, and this feels like a good time to just sit down and have casual chats about random things we were wondering. Um, there's been a website, uh, wewerewondering.com, that I built a while back, that we've been using for people to ask questions over the past, like, week and a half. Um, and you can vote on questions, you can ask questions. I I tend to put these out ahead of time so that even people who can't necessarily watch the live stream still have a chance to ask questions. Um, the people who are watching live right now are probably going through and voting on questions as we speak. Um, but in general, the sorting algorithm for this website is going to bias towards the things that have steadily gotten votes over the past, um, week and a half. And then once we get through those, we'll start to surface the ones that are more recent and have more recent votes. [snorts] Um, I'll try to get through as many questions as I can. I'll take some follow-up questions from chat as well. Um, but I will not get through all of the questions. I hope you forgive me, but there's like 300 questions or something. So there's just no way I get through all of them. Um, but if you have a burning question that is not answered over the course of this video, you'll see I'll put chapter marks down at the bottom once it's uploaded after the fact, where you can scroll to the questions you care about.
If the question is not answered, then just leave a comment on the video and hopefully I'll manage to get back and give you a textual answer of some kind at least. Cool. Um, I think that means we're going to start right ahead with the very first question. The very first question this year: do you and your girlfriend plan to get married and/or have kids? Um, also, I don't think I've mentioned this news already. So, I got engaged this past year, which is very exciting. Um, I proposed to my girlfriend in France. Um, and so we are now engaged. So, to the first part of the question: yes, the plan is to get married. Um, we haven't set wedding plans or anything like that, because we also have bought an apartment. Um, and so organizing all of the stuff with the apartment is sort of step one, and then after that we'll get to, um, planning how that works out. Um, as for kids, I very much want kids. Um, it's still, you know, it's a big decision in life. So it's something we're still trying to figure out the logistics of and the timing of, but the answer is sort of a yes, I want to have kids. Um, more in-depth than that I don't think I can meaningfully answer right now, but that is the plan. Uh, there's a follow-up question in chat, which is how old am I? Um, so I was born in 1989, uh, in December. Uh, so I turned 36 like about a month ago. Um, and so I am 36 years old, even though I have to go through the math of the year-ish timing, because at this point I don't really remember anymore. Like, my age is just not a thing that's readily accessible in my brain. If you asked me how old I think I am, I would probably say like 32. Um, but then I have to, like, recompute, and I go, "Oh, more years have passed, so I guess I'm older." Um, let's see. Next question. Have you tried the Helix editor? I have not. I've heard very good things, and I've sort of heard sustained good things over time.
I feel like there are a lot of editors that people got excited about and then they sort of died off after a little while. Um, Helix seems to sort of be sticking around so far. I've also heard a lot of people like the swapping of the subject-verb order in Helix compared to Vim. Um, and also the sort of relatively minimal configuration setup, where you just get all of the bits you would have configured anyway and they just kind of work. At least that's the intent. Um, but I have not tried it myself, and, I mean, I've been through questions like this in the past, and the reason why is because I don't really have a need. Like, I don't feel like my editor is a thing that's currently holding me back. Um, and so therefore spending a bunch of time learning a new one doesn't feel like it's all that valuable to me right now. I think if you're starting out your programming journey and you haven't built up a lot of muscle memory and a lot of configuration for your editor, then maybe it makes more sense to start with Helix than with something like Vim today. Um, but for me, who's, like, I've used Vim now for, I don't know, 20 years, maybe a little less, maybe 15. Um, and, you know, it's sort of built into my brain and my fingers how that editor works. I've built up a config that I'm very happy with. I have the plugins that I'm happy with. Um, and so that environment, I feel, works very well. And so there's not a huge amount of incentive for me to try to switch. Uh, but I'm not opposed to it. It looks like a good editor. That's probably the one I would try if I weren't using the one that I'm happy with, if that makes sense. Um, no JetBrains? No, I don't. Um, I don't really like GUI editors. Um, so I sort of live my life in the terminal. Uh, there's like browser and terminal, and those are really the only two things I basically ever have open. Um, and so I want an editor that is a TUI, not a GUI.
Uh, I want to be able to just pop in and out of my editor directly from my terminal, rather than embedding a terminal in the GUI window. Um, like, that's not how I want to interact with the shell, where I spend like so much of my time. Um, and so editors like the JetBrains IDEs or, uh, VS Code, it's just not my cup of tea. Um, cool. Next question. Uh, what is your usual set of questions when you are interviewing Rust developers? Um, so I don't know that I have a usual set of questions. Um, it also depends on what I'm interviewing them for. So it's not really the case that there's a standard set of questions that are the right way to evaluate a software engineer. Um, in fact, I think quite the opposite: what you want to do is use separate interviews to try to measure different things. So some of the angles here, right, are, um, you have the sort of test-if-they-can-code kind of interview, which, in general, I have a problem with interviews like that. I tend to find that they don't evaluate the right thing. They let you weed out people who just cannot code. Like, there are a bunch of people who claim that they can program and just cannot program. But I'm not talking at the level of, like, they don't know details of the Rust type system. I'm talking, like, they couldn't do FizzBuzz in Rust. Um, and at that point, like, that is clearly a problem, something you need to screen for. But things like asking people to implement data structures and algorithms, um, or even just solving, how to describe it, general-purpose or computer science problems, like the kind of LeetCode problems you often see — those things, I don't really think you get a lot of data points out of them about how good that engineer is as an engineer. I don't even think you learn that much about how good they are as a developer. Um, like, even how good they are at Rust.
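For reference, the baseline screen being described here — FizzBuzz — is only a few lines of Rust. A minimal sketch (one of many reasonable ways to write it):

```rust
// FizzBuzz: for each number, print "Fizz" if divisible by 3,
// "Buzz" if divisible by 5, "FizzBuzz" if both, else the number.
fn fizzbuzz(n: u32) -> String {
    match (n % 3, n % 5) {
        (0, 0) => "FizzBuzz".to_string(),
        (0, _) => "Fizz".to_string(),
        (_, 0) => "Buzz".to_string(),
        _ => n.to_string(),
    }
}

fn main() {
    for n in 1..=15 {
        println!("{}", fizzbuzz(n));
    }
}
```

The point of the screen is exactly that this requires no Rust-specific depth — just basic control flow — which is why it filters out only people who cannot program at all.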
So, if you have to measure, like, how technically competent someone is, the direction I tend to lean is more one of: give them a very open-ended problem, um, and then just have them work through how to solve it. Um, and that includes asking you questions about the requirements, about the context of the solution, um, about things that might have been, uh, tried before, systems to integrate with. I want to hear how they think, how they reason their way through a, uh, programming problem, rather than write a bunch of code that you've either memorized or have to bring back from, like, computer science studies or reading LeetCode one day. That doesn't seem that useful to me. Um, so that's one avenue: like, how do you program? I think another one is: how do you think about engineering more broadly? And the way I tend to ask this question is also very open-ended. It tends to be something like, um, one prompt that I like a lot is, um, how do you convince yourself that you've built the right thing, or the correct thing in the correct way, correctly? So there's three corrects here. It's like, how do you know you've built the right thing? Um, how do you know you've built it in the right way? And how do you know that it's actually correct in the end? Um, and this is a question that people could spend, you know, hours answering, but the reality is that if you have, you know, a 50-minute slot to interview someone, hearing how they think through that question, hearing how they reason and prioritize which aspects to talk about, whether they talk about, um, other people, or continuous integration, or testing strategies, or, you know, uh, linters — like, there are a lot of different avenues they could take here. And hearing about that tells me a lot about how they do engineering. Um, so that tends to be a sort of second avenue.
Uh, and I think maybe the third I like to prompt for is more, you know, tell me about something you built. Tell me about something you're excited about, something you spent a bunch of time on, you put a lot of thought and effort and care into, and just let me, um, you know, vibe with you in the old-school sense of the word. Um, right? Like, I want to feel my way through that project with you. I want to understand what was hard. I want to understand why you made the trade-offs you made. Um, and it's easier to do that on something they've actually built themselves and care about, rather than some artificial thing that I gave them that they have to analyze, like a system design question, where, um, you know, we've built it over the past 30 minutes and now we need to talk about it at depth. It tends to be more useful to talk about something they've built. Um, and I think those are the three things I want to evaluate when evaluating someone. Um, so it's not really a usual set of questions as much as it is three different perspectives I want on someone that I would consider hiring. Um, let's see if there are any follow-ups. Uh, but for what it's worth, um, this is my opinion for how people should hire. It is not how many companies hire, right? So if you followed my advice and sort of prepared for interviews like that, you might be sorely disappointed by many of the companies that are going to interview you, because many of them don't do interviewing this way. And I think that's a mistake. But it is, you know, it can be time-consuming both for you and for the candidate. Um, and it requires people who can ask those questions, who know how to evaluate the answers. It's not quite as easy to create a, you know, a grading rubric for these kinds of questions. How do you compare one candidate to another in a fair and unbiased way when the questions are inherently very vague and open-ended, right?
It's sort of the antithesis, almost, of what you want interview questions to be, because you want them to be measurable and comparable and quantitative so that you eliminate bias from the process. So, like, there are good reasons, too, why companies don't always use these kinds of interview techniques. Um, so even though this is how I would like to evaluate someone else, it's not necessarily what you will meet in the real world. Um, but, you know, there's a question here of how much grinding should you do in LeetCode, and, you know, the answer for me is none, you know, but there are a lot of companies that take those kinds of programming puzzles, um, and place a lot of value on them in the interview chain, or, more like, it's almost seen as a screen or a filter where you need to be able to do those, otherwise they will not be asking about anything else. And because that's the case, if you're applying to companies where you know that that might be something they look for, then yes, you might have to practice those things. Um, but I don't think it's inherently useful; I think it's sometimes, uh, forced upon you. Um, okay. Next question. Uh, how did you and your girlfriend meet? So, uh, we met on an online dating app. Uh, we met on Hinge, um, back when I was in Boston studying. Um, she was in Boston working, and, uh, yeah, we just got along well. We actually first matched on, um, Halloween. And I don't really care for sort of special events very much. I don't really celebrate, like, birthdays. I don't make a big deal out of New Year's Eve. Uh, I don't make a big deal out of Halloween. So for me, that was just kind of any other day. Um, and I actually asked her out on a date on Halloween. Um, not thinking anything of it. And she thought that was kind of scary, like, meeting a guy the same day as you started talking to them — and on Halloween. There was, like, something scary about it to her.
So, um, she sort of delayed, and we met, I think, a week later. Um, and, you know, we've been together basically every day since. So, you know, it worked out very well after we finally got to meet. Um, but that's how we met. Our first, uh, date was at a tea store in Boston. It was very cute. Uh, if you don't care about New Year's Eve, why are you streaming now? Um, you know, it's more that, like, I wouldn't really go out to have a big celebration for New Year's Eve, but I also recognize, like, you've got to tell the passage of time somehow. And, like, years ticking over seems like a reasonable way. You start from January 1st again. Um, and so if you ever need to batch time into larger units that you can answer questions about and summarize, years seem like a useful, you know, human concept for that. And also, like, I'm exaggerating a little bit, right? It's not like I don't care about years or don't care about any of these special events. It's more that celebrating them is not particularly important to me. Um, okay. Next question. How much is Claude Code or similar used at Helsing? Is it a security concern? Um, so this is a really interesting question, and something I hadn't really thought about before I started working there. So Helsing, you know, being a defense company, obviously we work with both code and data that is sensitive in various ways, right? So that can be, um, sensitive because it's, you know, classified in some way — so in the very, uh, sensitive category — or it can just be sensitive in the sense of, like, this is stuff that you don't necessarily want adversarial nations to have access to, or adversarial companies, or it's just company-internal. It can be all of the above. Um, and sometimes it's more about, you know, national sovereignty. Like, it's not necessarily about, um, you know, something being officially classified, but just, you know, it should stay within this country, for example.
Um, and using hosted models like Anthropic's Claude or, you know, OpenAI's models, or Gemini — like, using any of these that are hosted remotely — um, you can do for things that are not any more sensitive than what other companies have, but you cannot do it for things that are more sensitive. Obviously classified being the extreme version, but even just, like, sensitive internal data, like things that you really don't want to leak in any way. For those things, um, you can't really use those hosted models. You need to run that data and that code against models and tools where you know that they don't escape anywhere. Um, and so we have a sort of split system internally, where every codebase, every piece of data, is sort of allocated to one of several bins of sort of, uh, you know, classification — but not in the secret sense, just into categories. Um, so we have different categories internally, and for some categories it's fine to use, um, you know, hosted tools like Gemini. Um, and for other things it's not okay, and you must use certain tools that are, you know, open source and configured in a certain way, and, um, you know, sandboxed in a certain way, and that only interact with models that we host on our own infrastructure, so we control the content of them. We don't use local LLMs all that much. Um, well, local in the sense that they're, like, on my laptop — um, that's less common, but we do have models we self-host internally at the company in our own infrastructure. Um, and so that is stuff where, um, you know, we've set up environments where it is okay to send sort of medium-sensitivity data. Uh, for the really classified stuff, you often can't even use, um, systems like that. You basically are required to keep things on the device where that data or code is supposed to be located and nowhere else. And so then even self-hosted solutions aren't an option. So it's really a wide scale.
Um, I would say, though, that, you know, in terms of the first part of the question — like how much these tools are used at Helsing — I'd say quite a lot. Um, you know, there are complications here around, uh, licensing and, uh, terms of use and such as well. So Claude, for example, has a bunch of restrictions in their terms of use for what you are and are not allowed to use the Claude models for. Um, and so obviously we need to abide by those. Um, but in general, I'd say we use these tools a lot. They tend to be, um, very useful for things like prototyping, bootstrapping, uh, some amount of, like, extracting stuff from large bodies of PDF documents, which, for legacy reasons, there are a lot of in the military industry. Um, we also can use them for things like starting documents of the same kind. Like, we have a format we need to match, and we have, you know, all the code, we have all the sort of internal protocols and everything, and we need to turn that into something that can be submitted as written PDFs. That's often a good way to get that process bootstrapped. Um, they tend to be really good for, um, developing additional test cases. Um, they can be useful for, um, code review, to an extent. Um, I have used them a lot, actually, for debugging third-party code. Like, if I find a bug in my software that I realize is caused by some third-party software we depend on, um, the LLMs are often very good at discovering the sort of transitive dependency chain and figuring out the root cause of that bug. Um, so I would say, yes, it's quite a lot — and, in fact, outside of the engineering part of the company as well.
So we're increasingly seeing adoption among, um, you know, the finance side of the company. Not for, like, keeping track of balance sheets, right, but there's so much other work that they end up doing, where maybe they want to, you know, just, um, build a quick website that gives them a dashboard over, you know, some expense stats or something, right? Those things, previously, a software engineer would need to come in and build, and now they can increasingly just whip something up quickly themselves, and then we can review that the numbers are all right, but it lets them iterate on their tools without sticking engineers in the middle. Um, and we're finding that that is a pretty good boost to productivity as well for those teams. Um, how can you prove the information isn't being leaked to the model creator, given the black-box nature of LLMs? Ah, so this is why we self-host many of these models. So we can put them on hosts where we know that those hosts have no connectivity to the rest of the internet. So even if the model was malicious somehow, it's not clear that it can do anything with that data, because it is entirely sandboxed. Um, uh, how do you split these on your computers, the sandboxing? Um, so we have a bunch of tooling internally to make sure, like, this repository is tagged as that, so therefore you need to use these tools, this sandboxing, this model, and so on. Um, cool. Okay, next question. Oh, are you also using Linux at work? Um, so, yes and no. My main device is a MacBook at the moment. Um, but, you know, I basically run Nix home-manager on it, and so I have basically all the Linux tooling. Um, and then I also use a, um, uh, like a cloud-hosted virtual machine to do a lot of my work, which is obviously Linux-based, because it gives me more cores and everything. Um, okay. Next question. Hey, have you heard of Mojo? What do you think of it? I have heard of Mojo.
So, Mojo is, like, um, it's a GPU programming language, basically, right? Let me, um, actually, I can do a Mojo, uh, screen. Yeah. So it's this thing, um, where the idea is that you can write, you know, Python, or something that kind of looks like Python, and it ends up being able to run on CPUs and GPUs, and sort of gets compiled into, essentially, GPU kernels. Um, I haven't used it myself, but I have heard very good things about it. Um, but I have not used it myself to the extent that I can, like, comment on how well it works. I will say, though, that I think it's targeting something that feels like it should be possible to do better, right? Which is: currently you have a bunch of Python code that sort of sits around the GPU code, which comes in the form of tensors or whatever else, and the stuff that runs on the GPU is heavily accelerated, because you need it to be, but the Python bits on either side aren't really — but they could kind of be, right? Like, there's enough, um, structure to the compute that happens before and after that, if you're able to turn that into stuff that could run on the GPU, or at least more efficiently on the CPU, you could actually pretty significantly reduce the overhead, the bottlenecks, on either side of the actual GPU programming. Uh, and I think Mojo is basically an attempt to try to get at that. Um, I don't actually think this is quite a "just use Rust instead" situation. Like, I actually think there's a value-add beyond that here. Um, which is, at least my understanding is, Mojo's aiming for, I think, two things that are separate from this. One of them is, obviously, there's a lot of expertise in Python in these communities. So much of the research, so much of the prototyping, so much of the work has happened in Python, and saying, well, throw that all away, and everyone's going to go write Rust now, or go write, you know, C++ instead now,
um, isn't really feasible, and it's also not clear to me that it's the right path. Like, I don't think you want all the people doing this research to spend their brain cycles on, like, the borrow checker. That's not the right focus for the intellectual work that they need to be doing. And so I actually think giving them something that's, uh, easier to just write and experiment with and play with is probably the right angle. Um, the second one is that, at least my understanding is, Mojo is aiming for: you write your code — you write sort of Pythonic code, or, you know, the normal code — um, and all of it becomes code that can run on the GPU and on the CPU, and, sort of, not quite in a transparent way, but it sort of compiles into that. Um, if you were to write Rust code, you're not really writing the same kind of Rust for what would go on the GPU as for what would not. In fact, even just writing Rust code to go on the GPU is itself kind of tricky, although there are libraries sort of being developed for that now. I think leaning into "this stuff is often in Python anyway" isn't necessarily a bad idea. But I am really interested to see how this progresses over time, whether Mojo becomes a sort of industry standard of a kind, or whether it's a fad that ends up dying away. I genuinely don't know. Um, Mojo does have some additional stuff beyond just Python. I don't know if it actually has borrowing, but it has a bunch of things that try to make it safer and more correct, um, in a way that Python does not. Partially because it needs to, I think, in order to be able to manage the memory channel between CPU and GPU and stuff as well. You need a little bit better tracking here than, um, what pure Python syntax would give you. But, again, I have not used it enough to really say. [snorts] Okay, next question. Um, [snorts] how can I learn Rust and be good at it so I can get a 100K job as a Rust dev?
Um, you know, I don't think I have any special insights for how to learn Rust. I think the way to learn Rust is much the same as the way you would learn any other programming language, in my mind at least, which is: you find something that you want to build, that ideally is a decently good fit for writing in Rust, and then you build it. I think it's very hard to learn languages in isolation, like, without a goal, without a purpose, without a target. It's so much easier if you say, this is the thing I want to build, let me go figure out how to build that, because you have a driver. You have something that pulls you in a particular direction. There are so many parts of a language that you don't need to learn from the beginning. And if you just sit down and say, I'm going to learn the language, you don't know which things you don't need to learn yet, right? And so I think I would just say: instead of trying to learn Rust, instead just say, I'm going to build X — where X is something you care about — in Rust. That can also be, you know, there's a bunch of software already installed on your computer that you're probably using a lot. Pick one of those that happens to be written in Rust and see if you can contribute to it in some way, like fix a bug you found or add a feature that you need. That can be another way. But it really feels the best to just build something yourself. Like, for example, many, many years ago, I wanted to learn Haskell. Um, and, you know, there are a bunch of good books for learning Haskell — like, Learn You a Haskell is a common one. What I instead did — and, I mean, in hindsight I can tell you this was not the right project to try to build in Haskell — but I wanted to build a, uh, UI for visualizing, um, the posters for a bunch of movies. So, I had, like, a movie collection of a bunch of movie files, and then I had little, um, like, cover art for all of them.
And I wanted, like, a swipeable, like, 3D interface that let you scroll through all the covers and pick a movie. And then when you picked one, I wanted it to, like, launch a program that plays the file. Uh, this is, like, not really Haskell's wheelhouse. Uh, but it forced me to learn a bunch of Haskell, right? And a bunch of weird parts of Haskell that I otherwise wouldn't have thought to learn about, because that's not what you're supposed to use it for. Um, but it was a really fun experience. It meant that I wrote a bunch of Haskell, some of it very complicated, some of it, um, where I really had to, like, grind my head to understand, but that was also part of the fun. That's why I stuck with it: because I got to build something that I wanted to have. Uh, and I think with Rust it's the same. If you want to learn Rust, pick something to build with it, and then the learning will sort of come. If you're starting truly from scratch — so you've done, you know, very little programming, you just want to know, like, what is even the basic syntax — um, I really like, um, the, uh, Rustlings exercises. I think the website is just, like, rustlings.cool, um, which I find to be a very nice, um, progressive set of exercises that also roughly matches the book, that walks you through changing code to fit the lessons of Rust. Uh, and it comes with, like, editor integration for highlighting the tests that you pass as you go and everything, and pretty good instructions for what you need to modify. So, I would say that's a really good place to start, but it won't really take you to, like, you know, "I know Rust super well now." It will just take you over the initial hurdle of getting to: okay, I have a rough working familiarity with the language now. In order to take the next step, you really need to build something. And there are tools for this too, right?
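To give a flavor of the exercise format being described, a Rustlings-style exercise is typically a tiny file with a deliberate error the learner must fix. This is a hypothetical example in that spirit (not from Rustlings itself), shown in its fixed form:

```rust
// Exercise-style sketch: the learner would be given this file with
// `let count = 0;` (no `mut`), which fails to compile, and would
// have to add `mut` to make the test pass.
fn total_len(words: &[&str]) -> usize {
    let mut count = 0; // the fix: `count` must be mutable to accumulate
    for w in words {
        count += w.len();
    }
    count
}

fn main() {
    // The embedded check that tells the learner they are done.
    assert_eq!(total_len(&["fix", "me"]), 5);
    println!("exercise passes");
}
```

Each exercise in the real series works this way: small, self-contained, one lesson at a time, with a test that goes green once the fix is in.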
So there's, um, stuff like CodeCrafters — and I have a video on CodeCrafters, or a couple, actually; you can go look at those. Might be an interesting thing to, like, try to build it yourself, watch my video alongside it, and sort of compare notes as you go. Um, another would be using something like Exercism, where other people can also basically review your code for you on some standard set of exercises, but ultimately there's no replacement for just going and building something. Um, [snorts] let's see. Uh, does it work in Emacs or Zed? I think so. So, I think Rustlings basically works with LSPs. So, um, as long as you have, like, an LSP running, which most modern editors do, I think Rustlings just works nowadays. I just replace all my programs on Linux with Rust alternatives and fix bugs and add features and push them upstream. That is one way to do it, right? Like, even if you don't currently have a lot of programs written in Rust that you use in your daily work, maybe adopt some that are, and then they'll be a little janky to begin with, maybe, because they're newer. Um, but that means you have more of an opportunity to contribute to them and learn Rust in the process. Should one learn functional programming in general? Um, I think it's a useful tool in your tool belt, right? Like, there is some code that just reads better, or is expressed better, when written in a functional style. And so it's useful to both be able to read and write that style. Um, I think sort of the pure "everything must be functional" is not something I subscribe to. Um, but I do think that having a, um, sort of general understanding of the topic is useful. Okay. Uh, oh. Can you recommend a distributed systems project for doing this? Um, yeah, so I think the, um, fly.io challenges are interesting. I have a video on these too. Uh, I forget what the link is. It's, like, fly.io/dist-sys or something. If you just search "fly distributed systems", you'll find it.
I have a video too where I started building out some of the early challenges there in Rust. Those are distributed systems challenges that aren't really related to Rust; you can write the solution in whatever language you want. The other thing would be to look at, let me find this, MIT's class 6.5840 on distributed systems. I helped TA this class for a couple of years as well. It's a really, really good class for learning distributed systems. The whole schedule is online, including all of the lecture notes and all of the related reading, plus all of the labs, and the labs are implementing distributed systems challenges. I think the first lab is implementing MapReduce, but the later labs are basically implementing Raft, implementing a distributed key-value store. The labs are written in Go, so that's not Rust, but at the same time, if you just want to learn distributed systems concepts, you could easily do this in Go instead, or you could try to write the same implementation using Rust instead of Go. It does mean you won't have the automated testing that comes with the labs, but they're a really good guide to "here's something to build" and what to measure for. If you just search for MIT 6.5840, you'll find it. All right. Are you considering writing a new version of your book? If so, what topics would you update or add? I am considering writing another version. Not just considering: I'm in the middle of writing a new version of Rust for Rustaceans. I'd say the main changes are, you could almost take the changelog from Rust over the past five-ish years. And I don't mean just the blog posts, but also the detailed release notes: the things in Cargo, the things in Clippy, the edition changes.
All of that stuff I want to have reflected in the book. Obviously, some of that is just tweaking things as new concepts have been added. Some of it is not just new functionality in the language but also new theoretical developments in the language, things like strict provenance and the fact that Rust has provenance at all. So some of those things are going to end up going in there as well. I'm not expecting there to be a fully new chapter, in the sense that I'm not planning to add a full WebAssembly chapter, and I'm not planning to add a full embedded programming chapter. The main reason for this is not that I don't think it would be useful, but rather that I haven't worked enough with those topics to feel like I could write an authoritative source on them. The other reason is that I think those environments are still sufficiently in flux that trying to write them into a book now would not necessarily stand the test of time. One of my goals with Rust for Rustaceans was that you could buy the book and it would be useful to you 10 years later, even though the language has evolved, and I think that has remained true so far. If you pick up Rust for Rustaceans today, all of it, I think, is still useful, and this is partially because the book does not go very much into the ecosystems outside Rust itself. I want to continue that trend with the newer version: I want to adopt the newer parts of the language, but I don't necessarily want to explore ecosystems that are themselves still developing. And then, let's see, what other bits am I putting in there? There's just a lot of things that I've learned in the past five years, like patterns that I think work really well, and some amount of things where I think the book underserved a topic, where there's more depth and more nuance to be explained.
And so there's a lot of just reading through everything and figuring out where I should flesh things out, or in some cases where too much was dedicated to something that turned out not to be that important. But on balance, I would expect the new version of Rust for Rustaceans to be a minor, not a major, upgrade in the semver sense of the word. So I wouldn't expect it to have complete rewrites. It's not "everything in the old book is useless now," but rather "everything gets slightly better." In terms of timeline, I don't know exactly when it'll be. I [snorts] think it'll either be the end of this year or the start of next. That's my current estimate, but it sort of depends on a couple of other things I have going in the background as well. Should we not be ordering the old version? No. Again, I think the old version is still fully useful. Also, do you really want to wait a year until you get the book? That's up to you, but I think the old book remains useful and remains a good reference point. The new version is not a "you really should wait for it because the old one is outdated." I don't think that's true. Do you also rewrite and edit previous chapters, or only add new stuff? No, I do a lot of editing as well. I have a long list of notes of things that need to be modified in the old text, either because they were poorly phrased or because some code was slightly wrong. Some of it is stupid: some of it is just curly brackets that are missing; some of it is that I used a variable name that didn't match the one in the argument list. Nothing that's outrageously wrong.
But there are also things where I've come up with better ways to explain certain concepts since I first wrote about them, and so those are other places where old text will be updated. Will the new version be a free or cheaper digital upgrade for current owners? I don't know. This is a conversation I need to have with the publisher, with No Starch Press. I don't know if they generally do digital upgrades. If they do, then yes; if they don't, then it might be harder. I genuinely don't know. Why didn't you try to compile the book's code ahead of time? So, the code in the book was compiled. This is one of the reasons why I had a technical editor, who basically took all the code and checked that it compiled. Some of these errors are more about the iteration process of the book. The technical editor does a review, including compiling all the sample code and everything, but after that we continue to do iterations on the book. These are usually smaller things: there's a grammar checker that goes over the book, there's the layout designer that goes over the book, and sometimes things have to change there. So sometimes I might change the name of a variable so that it fits better in a code block, for example, or because there was a typo in the variable name, and then I miss one of its other occurrences. Or we move some code around to new lines and then one of the curly brackets ends up wrong. There was one where the indentation was the only thing that was wrong. So they're not really deep errors in the code; those would have been caught earlier. They're usually more visual problems. [snorts] How do you typeset your book? The first Rust for Rustaceans was actually written in Word, because that's what No Starch Press used at the time.
They've since moved to LaTeX, so the whole book is now written in LaTeX, which makes me much happier than trying to write it in Word, which was a bit of a pain. I'm hoping that eventually they might move to Typst, but for now it's LaTeX, and I'm happy with that. Cool. Next question. How do Rust developers get girls? Honestly, the same way anyone else does. I don't think there's a special trick for Rust developers, and I don't think there's a special trick for girls. This is just: be humans, be people, talk to other people, be nice to them, get to know them. That's the answer to all interpersonal questions, I think, really. If I were to take the question slightly more seriously, I'm going to misread the question, right? A misread of the question here is: I am someone who is deeply focused on programming. I'm a techie nerd, and I care a lot about Rust and programming. How do I talk to, or attract, someone I would want to date and eventually have as my partner in life? Approached from that angle, the way I would recommend anyone approach this is: you have things that you are curious about in your life. There's a reason why you like programming. There are other people who also like programming, probably for the same reason, but they also like other things. It doesn't have to be programming, but most of us are driven by some sense of curiosity about something. Figure out what other people's curiosities are, and then align your curiosity with theirs. If it doesn't align, then the two of you probably are not going to get along well. But if you find ways to have your curiosities align, you'll have things to talk about and things to bond over, and you'll also just find that you're interested in the other person, and they are likely to be interested in you.
But that's part one: how to have interesting conversations with other people. Part two is: be nice. There are so many people out there who I think are too focused on themselves, their own needs, their own ambitions. They're very inward-looking; their own insecurities and anxieties are the main focus of their life. And I think it's really important to fight that instinct. The way that we get along as people in a society is that we think of other people, not above ourselves, but sort of in parallel, right? This is the "put on your own mask before helping others" idea: you need to take care of yourself as well, but if you just focus on yourself, you're not going to be able to appeal to anyone else either. We need to care about other people, and that's important. Yeah, "don't be a weirdo" is a good one. Although, at the same time, I'm a weirdo, and I think me being a weirdo is one of the things my partner likes about me. I genuinely think you should not need to suppress who you are to get along with your partner; if you do, you have a bigger problem. That's why I say try to look for people where you can align your curiosities, align your weirdnesses. That's how you're going to be set for the long haul. Okay. This is a question where someone's cheated, so I'm going to get rid of it. That's also a question where someone cheated. This website is not built to be secure. I did not build it to make sure there are no ways to double-vote or anything like that. There are two reasons for this. One is that I assume most of my audience are not like that, and they don't abuse the systems that I give them to ask questions. But the second is that I am a human.
I get to look at this list and not answer the top question if I see that it's gotten 200 votes in five minutes. I will just get rid of the question and not answer it. So please don't do that. Great. Next question. What application-wide error design patterns do you like the most, and why? Do you use a single error type across the application or library? Do you type-erase errors, etc.? You know, I don't think I have one favorite error pattern, because I think it really depends on the application. The most common three that I use are, first, enumerated errors. These are the ones where you have an error enum, and the variants are the different kinds of things that can go wrong. Second, type-erased errors, or opaque errors, where there's just an error type. It's a struct. It doesn't have any public fields. Maybe it has some methods on it so you can introspect its state, but all it really tells you is that something went wrong. Usually, if you print the error, it'll give you some more details, but you can't programmatically inspect the error. And the third is a hybrid approach that you often see in things like, I remember, the earlier versions of the AWS SDK. The standard library kind of has something like this with std::io::Error. You have a wrapping error type, which is a struct, and then you have an inner error type, exposed either as a field on the struct or via a method on the struct, that gives you back something that is an enum (or it can be a generic parameter on the struct as well). That enum lists the variants that are specific to a particular instance of the error. So imagine you have an API with functions foo and bar: foo returns a Result<_, Error<FooErrorKind>>, and bar returns a Result<_, Error<BarErrorKind>>.
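To make the first two patterns concrete, here is a minimal sketch in Rust. All the type names (ParseError, DecodeError) and variants are hypothetical, invented for illustration; they are not from any particular library.

```rust
use std::fmt;

// Pattern 1: an enumerated error -- the variants are the different
// kinds of things that can go wrong, and callers can match on them.
#[derive(Debug)]
enum ParseError {
    UnexpectedEof,
    InvalidByte(u8),
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ParseError::UnexpectedEof => write!(f, "unexpected end of input"),
            ParseError::InvalidByte(b) => write!(f, "invalid byte: {b:#x}"),
        }
    }
}

impl std::error::Error for ParseError {}

// Pattern 2: an opaque (type-erased) error -- no public fields, so all
// the caller can really learn programmatically is "something went wrong";
// printing it still gives a human-readable explanation.
#[derive(Debug)]
struct DecodeError {
    detail: String, // private: not part of the API contract
}

impl fmt::Display for DecodeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "decode failed: {}", self.detail)
    }
}

impl std::error::Error for DecodeError {}

fn main() {
    // Enumerated: the caller can branch on the variant.
    let e = ParseError::InvalidByte(0x34);
    if let ParseError::InvalidByte(b) = e {
        println!("got invalid byte {b:#x}");
    }

    // Opaque: the caller can only display it (or propagate it up).
    let o = DecodeError {
        detail: "image dimensions don't match data length".into(),
    };
    println!("{o}");
}
```

The opaque struct can later grow fields or inspection methods without a breaking change, which is exactly the extensibility property discussed below.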
The intent being that they both have the same outer error type, the one that your library exposes, but they have enum variants for things that can specifically go wrong when you call this function and not others. And the deciding question between these, for me, is: what is the caller likely to do with this error? For a lot of errors, and this is not really just about applications versus libraries, the caller will simply propagate your error up, and it will propagate all the way up to main, where it will eventually crash the program and print what the error was. That is by far the most common way to interact with errors. If you expect that that is how someone is going to call your library code, then there's no reason for you to give them error variants, because they will not match on them anyway, right? It doesn't matter to the calling application whether the operation failed because of a permission error, or because the network cable was disconnected, or because the remote server was overloaded; the application will not change its behavior as a result. And so you don't need to expose that information to the program. You still expose it to the user, because that program is almost certainly going to print the error in the end, but it means you can get away with just having an opaque error that gets propagated up. Opaque errors also have the nice property that they're very easy to extend over time: you don't have to make a breaking change just because you add something to your error, or even restructure it internally. And you can still expose useful information about the error.
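The hybrid foo/bar shape described above can be sketched like this. This is a minimal illustration under assumed names: the outer Error type, the per-method kind enums, and the retriable flag are all hypothetical, not any real SDK's API.

```rust
use std::fmt;

// Hypothetical per-method kinds: the ways foo can fail differ
// from the ways bar can fail.
#[derive(Debug)]
enum FooErrorKind {
    PermissionDenied,
    Timeout,
}

#[derive(Debug)]
enum BarErrorKind {
    NotFound,
    Conflict,
}

// One outer error type for the whole library, generic over the kind.
#[derive(Debug)]
struct Error<K> {
    kind: K,
    retriable: bool,
}

impl<K> Error<K> {
    // Convenience methods live on the outer type, shared by all kinds.
    fn is_retriable(&self) -> bool {
        self.retriable
    }
    // Callers that care can still reach the method-specific variants.
    fn kind(&self) -> &K {
        &self.kind
    }
}

impl<K: fmt::Debug> fmt::Display for Error<K> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "operation failed: {:?}", self.kind)
    }
}

// Each method returns the same outer type, specialized to its own kind.
fn foo() -> Result<(), Error<FooErrorKind>> {
    Err(Error { kind: FooErrorKind::Timeout, retriable: true })
}

fn bar() -> Result<(), Error<BarErrorKind>> {
    Err(Error { kind: BarErrorKind::NotFound, retriable: false })
}

fn main() {
    if let Err(e) = foo() {
        println!("foo failed: {e} (retriable: {})", e.is_retriable());
    }
    if let Err(e) = bar() {
        if let BarErrorKind::NotFound = e.kind() {
            println!("bar: not found");
        }
    }
}
```

The design point is that shared behavior (Display, is_retriable, `?` propagation) sits on the outer struct once, while the generic parameter keeps each function's failure set precise.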
If you look at the hyper Error type, for example, it works kind of like this: you can look up the status code for a hyper error, and it will give you back an Option<StatusCode>, where it's None if the error did not come from the remote end with a status code. This tends to work really well if you can get away with it. If you know that callers will care which specific error happened, then you need to use one of the other two options. The straight error enum I tend to prefer if there are not very many variants and the variants don't vary a lot across your library surface. So if you don't have very many methods, or most of the methods fail in the same set of ways, and you think the caller cares which way they failed, an error enum is totally fine. If, on the other hand, you're writing something like the AWS SDK for DynamoDB, then the ways in which a put can fail are very different from the ways a get can fail, which are very different from the ways a change-permission call can fail, and the caller probably cares about which way it failed specifically. Then you probably want the hybrid approach, where you have an outer type that is generic over some variant type specific to each method. That structure tends to work really well in that kind of environment. On the outer error type you implement Debug printing, and you have some convenience methods, like "is it retriable," so those are available directly on the struct type, and the question mark operator lets you propagate it up. But if callers want, they have access to the method-specific information directly there. std::io::Error is kind of like the latter, in the sense that you have io::Error and a method on it called kind that returns an enum over some of the known IO error kinds.
But again, it's not all of them, right? There's one called Other. It's also a non-exhaustive enum, so over time they can add more, and they have in the past. So it is a little bit of that hybrid kind, but it's not generic, and it just has an enum of non-data-holding variants that don't depend on which method you called. The downside is that when you call a given IO method from the standard library, you don't know which of the error kinds that function can specifically return. You have to handle all of them, or only a specific subset, without knowing which it could return; you would have to go look at the man pages, for example. [snorts] Wouldn't most programs want to differentiate between server overload versus the network being completely out? Maybe. That was maybe a poor example. Server overload might be a retriable error; you might want to try again later. But even then, I think most applications won't care about that; they won't have a retry loop. It really depends on the kind of library you're building. If you're AWS and you're building an SDK that you expect everyone to be using for all kinds of applications, then yes, you need to surface whether an error is retriable. But if, for example, you're building [snorts] an image decoding library, then whether decoding failed because you expected byte 48 and got byte 52, or because you got an early end-of-file, or because the dimensions of the image didn't match the size of the data, [snorts] there's no distinction between those to the caller, and so there's no real point giving variants for them. What do you think about the Go solution for error handling? Errors as values seems really good. I'm going to claim that Go does not have a solution for error handling. It just has a convention for error handling, and I think it's actually pretty bad.
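The std::io::Error shape, including the non-exhaustive kind enum, looks like this in practice. This is a small self-contained sketch; the path is deliberately fake so that open fails.

```rust
use std::fs::File;
use std::io::ErrorKind;

fn main() {
    // std::io::Error is an opaque struct with a kind() method returning
    // a #[non_exhaustive] enum of known error kinds.
    match File::open("/definitely/not/a/real/path") {
        Ok(_) => println!("opened"),
        Err(e) => match e.kind() {
            ErrorKind::NotFound => println!("not found"),
            ErrorKind::PermissionDenied => println!("permission denied"),
            // Because ErrorKind is non-exhaustive, a catch-all arm is
            // required: new kinds can be (and have been) added in later
            // standard-library releases without a breaking change.
            other => println!("other error: {other:?}"),
        },
    }
}
```

Note the downside described above: nothing in the signature of File::open tells you which of these kinds it can actually produce, so the match is necessarily defensive.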
At least, I remember in my Go days, which is now quite a long time ago, so maybe it's gotten a lot better since; I haven't really kept up. But I used to hate Go's errors so much, because it was so easy to fail to handle them correctly. [snorts] Okay. Please make a dedicated video on getting a job in the Rust market as a graduate. I don't think I have any special tricks here. In fact, I think the tricks for getting a job in the Rust market as a graduate are the same as for getting a job as a graduate in programming in the first place. I don't think there's something special about Rust here, and I certainly don't think I have any special answers that will give you a particular leg up, sadly. Which other Rust streamers do you recommend watching? You know, I don't know. [snorts] One of the ways this channel got started was that back in 2018 there was a Rust survey where the thing most people requested from the Rust community was access to more intermediate resources. They said there were a lot of beginner resources, but not a lot beyond that. I sort of responded to that, thinking: sure, I could do some intermediate-level Rust. I've done Rust for a few years now, why not? But my first thought was that I should do some blog posts or something. That felt like the kind of thing people were after, because that's what I consume myself. But then I did a poll, way back, on Twitter, and it got like 20 responses or something, so it wasn't particularly representative, but by far the most votes came for live streaming: people wanting to see someone use the language for real, in a real setting, in video form. And I responded with: well, but that's silly. Why would anyone watch other people programming? That's weird. And I think I reacted that way because that was my reaction.
I would not want to watch other people just talking about programming, or doing programming in this live sense. But clearly a lot of other people wanted to. So I started the channel, and lo and behold, a lot of people seem to learn really well from it. I think I understand some of the reasons why, but streams still don't really appeal to me. If I sit down and want to watch a video about programming, I wouldn't really watch streamers; I would watch a talk given by someone who's taken a bunch of time to prepare. But that might also be because of where I am in my career, right? [snorts] The things I learn from right now are not someone building something, because I do that myself a lot, or someone just free-talking. The closest would be watching interviews, but usually interviews are not streamed; they're just uploaded after the fact. If I had to force myself to watch streams, I think the kinds I would watch would be where someone is either doing an interview, or actively programming something where I think the thing they're building is interesting. Anything else, anything that's social or technical commentary, or just "let me try something for 20 minutes," does not interest me that much. So that might give you some guidelines, but I don't think I could list a set of streamers, because again, I don't really watch them. [snorts] Any advice for someone early in their career, both practical (finding interesting jobs) and technical (what skills are most fundamental and appreciated)? Thank you for all the content. I'll combine this with the next question, which is: which companies would you consider interesting to work at currently?
You know, I think the advice I would give to people, and we've talked about this earlier in the stream too, is: work on things that interest you, because it makes such a big difference. Working a programming job where you don't really care about the thing you're building is not only demotivating, it also tends to mean that you're not really learning, because you don't have that eagerness to dig deeper, to learn more, and to push yourself. And it's tricky advice, because on the other hand, you don't often have the luxury of being able to freely choose your job. But what I would recommend is: try to find jobs, but also side projects, where you have a real interest, and also where you feel like you have to stretch a little. If you are always picking a job where you think, "I can obviously do this job, this is no problem, this is a walk in the park, this is just at my bar," then I think you've picked the wrong job. And it depends on what your priorities are in life, but if your goal specifically is to become better at software engineering at scale, to really hone your technical skill, if you want to grow your ability to tackle more complex problems, more complex teams, bigger problems, bigger impact, to get jobs at bigger companies where you have higher leverage, if that is your goal, I think you need to challenge yourself a little bit with each position you take. Not something that's wildly out of your league; there's a balancing act here. But I do think you need to stand on your toes a little bit and lean forward, lean into something where you feel like, "this is maybe beyond me, but I don't think it is. I think it's doable, and I think it's interesting." If you have that combination,
I think that's how you grow, and grow pretty quickly. It will sometimes come at a cost. It might mean working at a place where you end up working a lot, maybe just out of interest, not necessarily because you're forced to. It might mean taking a job that pays slightly less because it is more interesting. It might mean relocation: sometimes the more interesting things aren't where you are, or the company isn't willing to hire remotely, and you might have to lean into it a little bit. This is ultimately a trade-off of where you want to spend your time and effort, and how much you're willing to pay for it. But I think that's how you really develop those skills in a semi-accelerated way. I don't think there's a set of skills that are the most important ones, a "if you learn these four things, you're set," because it really depends on what you want to work on. If you want to work on sending rockets to space, that's very different from wanting to build the next cloud infrastructure for LLM training. Those are just wildly different skill sets, and they're also different from wanting to build a bipedal robot, or a real-time translation service for videos, or the next version of Akamai, the backbone of the internet. Those are all wildly different sets of skills, and I don't think there's one rule you can go by. I would say that if I had to pick out three things that I think are useful almost universally, the first is debugging skill. The more time you spend debugging things, ideally things at multiple levels of the stack, and the more you're willing to keep going deeper into the stack to really find the root of the problem, the more that skill serves you everywhere. The second is soft skills.
You will cap out on your individual intellectual skill as an individual contributor. How much can you do to move the needle for the company, for all your users, for the world? Realistically, you will cap out on that. The way to build your impact, and to build things that matter more to more people, is by working through others. That does not mean making other people do the work for you, and it doesn't just mean delegating. My mentality here is very much to lead from the front: you should be the person that builds many of the hard things, but you build them with a team. You build the team, and you bring the team with you. But that requires you to develop these soft skills: how to understand whether everyone else feels fulfilled by the work, whether people are overstretched, whether you can take on more work, but also things like: is the team able to achieve this thing? Who needs to grow which skills? How do you handle conflicts in the team? How do you make sure that people are incentivized in the right way within the team? How do you let people go if they're not meeting the bar for the stuff you're working on? All of those things start to become more of your responsibility. What do you do when you fall behind? What do you do if not everyone agrees on what you're supposed to build? How do you figure out what to build in the first place? Those soft skills, I think, are extremely valuable, and undervalued by a lot of individual contributors, in part because they're really hard to develop. They're really, really hard to build, and they take time, but it's so worth investing in getting better on those fronts. And then I think the third is actually teaching. No matter what field you're in, teaching serves multiple purposes. One of them is that it makes you understand the content better.
If you're able to explain something, that means you've reorganized your brain sufficiently that it's structured enough in your head to get it into someone else's. It also forces you to do deeper research: you need to understand the whys to be able to answer the questions that the people you're teaching will ask. Teaching is really good practice for social skills, of course, but it's also a path to mutual respect. If you're able to teach other people, they are likely to teach you in return, which accelerates your growth, but it also develops a bond between you: now the two of you are working together in a closer way, because you helped them develop. And that, I think, is likely to give you a leg up, the "do unto others as you wish others would do unto you" kind of principle. I think those are three skills that I really want people to work more on, because it's so rewarding for the entire ecosystem. "Getting a job you're interested in makes interviewing and letter-writing way easier as well." It's true. If you genuinely care about the job you're interviewing for, that will come across in your interviews, too. It's not necessarily that you'll be less nervous, but if you're excited about what they're doing and you can express that excitement, that tends to be an advantage in your interviewing process. [snorts] The other part of this question was: which companies would you consider interesting to work at currently? I think it's a really good one. And actually, this is a good place to inject: this video is actually sponsored. This is only the second time this channel has been sponsored; I don't do a lot of these. But it's sponsored by the Let's Get Rusty jobs board, which fits very well into this. I mean, I haven't used this myself.
I currently am in a steady job, but this is probably where I would start looking for what jobs are available. And I think it fits this question really well, because what we can do here is actually look through what Rust jobs are available, and I'll note which ones I would actually consider interesting to work at. In my case, for industry: probably backend, maybe embedded, maybe finance, probably systems programming. Let's look for senior roles and see what they have. So, there's a bunch of companies that I don't really care about, and many that I don't know. But EA: working in games, I think, would probably be really fun. EA specifically, I don't think I would want to work at; I would probably want to work at a smaller game development company in the first place. Proton could maybe be interesting. This is the Proton private email stuff, although that feels like more front-end, traditional cloud-hosting work, whereas I tend to like things that have a deeper algorithmic component to them, which is probably less the case for something like Proton. Embedded comms could maybe be interesting. Cloudflare I think would be interesting. I really like infrastructure: not building the UIs and tooling for infrastructure, but building the infrastructure itself. How do you build the network, the distributed systems that run the internet? Those things I find really fascinating, because the scale is just so cool; the scale is an interesting problem in and of itself. So Cloudflare I could maybe work at; I think that could be interesting. Quantum Systems is also interesting. Would I work on quantum computing? Maybe. I feel like it doesn't really match my expertise, but it does pique my interest. What else do we have here?
There are a lot of companies here I haven't even heard of. Which raises a related question: would I work for a smaller company, basically a startup? It's not out of the question, but I like slightly more mature companies. Not in the sense that they need thousands of employees, but that early phase where everyone does everything, and not all of it is programming, isn't really right for me anymore. One reason is that there are so many things I want to do outside of my work. These streams are a good example, or writing the next edition of Rust for Rustaceans, or the lecture series I'm giving at MIT in a couple of weeks, a new iteration of the Missing Semester computer science class. I really want to spend time on those things, and working at a startup tends to consume all of your time and energy. So I would totally work for a company like Helsing, where I currently work, which arguably may be a startup but doesn't really feel like one anymore, or anything older or bigger than that. But the two-, three-, four-, five-person, very scrappy team is not in the cards for me, at least at the moment. If I were looking for a job, which, just to be clear, I'm not. What else do we have? Oh, Airbus Helicopters. This is completely anecdotal, but helicopters are fascinating flying machines. If you went to someone today and asked them to design a helicopter, I think everyone would say you're crazy if they hadn't already been invented. Even just the tensile strength required of the rotor blades is wild, because the outer part of the rotor moves faster than the inner part, and centrifugal force is pulling on the whole thing.
So there's force pulling outward on the rotors, and the outer parts are spinning faster, so there's extreme tension the blades need to withstand. They're just crazy machines. Unrelated to software engineering; it's just interesting. [snorts] There's a lot of space stuff. I'd work on space; space sounds like fun. IBM, lead Rust developer. What is IBM doing with Rust? That's interesting. I don't think I would want to work for IBM. I would maybe work for Microsoft. If you looked at Microsoft ten years ago, it's a place I would never work; now, I've heard very good things, both from people at Microsoft and from just seeing a lot of the things Microsoft has done. I still have a nagging feeling in the back of my head of: is it too good to be true? Okay, they bought GitHub; is GitHub going to get better or worse? We're a few years into it now, and I think the jury is still out on that one. Supply chain security, oh, maybe because of Red Hat; I guess IBM owns Red Hat. Black Duck, that's supply chain security stuff, which is pretty interesting. What else do we have? VPNs, maybe. I'm pretty picky about VPNs. I would only work for a VPN company that I really felt was solid, both in terms of technology and in terms of ethics and morals. I actually quite like Mullvad here, and I know a lot of their stuff is written in Rust, which is exciting. This is not an endorsement of Mullvad, more of a "that's the kind of place that might be interesting to me." Canonical is pretty cool. I wonder what they do in Rust. I guess they adopted sudo-rs and the Rust coreutils, so a lot of the core software there is now Rust; that might be the kind of thing they're looking to expand.
Oxide Computer is pretty cool. Heineken, that's weird; there's a lot of weird, interesting stuff on here. Dragons. Oh, I guess Mullvad is hiring. That's funny. All right, I think we've probably spent enough time on this now, but, oh, Discord were also very early adopters of Rust, and I'm sure they have some pretty interesting distributed systems problems. L3. I wonder if Akamai is hiring; there's probably F5 in here somewhere, too. Disney. Looking at this, there are a lot of interesting senior Rust positions where I go, yeah, these places look like they could be interesting. But the original question was which companies I would consider interesting to work at currently, and I don't have a list off the top of my head. It's more that I can look at a company and go "yes, that's interesting" or "no, it's not," rather than having a preset list. It really depends on whether they're tackling problems that are interesting to me, in Rust, and that they're not a tiny startup; that's probably not what I'm looking at at the moment. So yeah, this stream is sponsored by the Let's Get Rusty job board; I'll put the link in the video description down below as well for those of you watching after the fact. Again, I'm not currently looking for a job, so I haven't been using it actively myself, but having used it now for a solid couple of minutes, it seems like a good place to be looking. There are certainly a lot of interesting jobs on here. All right, let's see. In the long run, do you think AI tools will widen the gap between experts and novices by magnifying the impact of existing understanding and skill, or will they narrow it by acting as a skill equalizer, where you get similar productivity and output quality regardless of skill?
Considering uncertain future improvements in models, workflows, and UI/UX, which trajectory seems more likely? Okay. To simplify, the question is basically: will AI tools make expertise useless because everyone gets expertise from the LLM, or will they widen the gap because understanding matters way more and the LLM works better when you have that understanding? I think this is a really interesting question, and one where it's not obvious to me that I have an answer; it's still a wait-and-see situation. In a way, the answer is a little bit of both. What LLMs are great at is giving junior people access to, I don't quite want to say experts, but a slightly dumb or naive expert tutor. It doesn't make them experts, though. And it can be dangerous in the sense that, as a junior, you might just take the expert's word as truth. This applies even with a human expert: if you talk to someone much more senior, much more of an expert than you are, and you just, we have this expression in Norwegian of taking something "as a good fish," meaning you accept it as truth, as gospel, without doing your due diligence, you will get burned by that. Experts can also be wrong.
And certainly people who think they're experts but aren't, or who just haven't spent the time to think it through, or who don't understand the problem well enough. I think that's often what ends up happening with LLMs: they will give you an answer, but if you haven't given them enough context, they might give you the wrong answer; if they don't know the problem space well enough, they might give you the wrong answer. But it will be stated in an expert voice. It will look like expert code, even if it isn't. That's the trap, but it's also an opportunity. It's a way to be, not quite a skill equalizer, but an opportunity equalizer: it gives you access to expert-level material, but it doesn't make you an expert and doesn't replace the expertise. You still have to work to learn from that expert, question it, grow on your own, and do your own research. That's where I see the most risk in the current trajectory. I actually think it can be a skill equalizer; it can be a good opportunity for people to raise themselves up, what's the expression, pull yourself up by your bootstraps, which is a weird English expression. There's an opportunity here to really give people a leg up toward becoming experts themselves, but it requires a lot of due diligence on the part of the people learning. And the same applies to experts, actually: experts are usually experts in some field, not experts in everything, while the LLM presents as an expert in all fields.
That means that if an expert in one field uses the LLM for something they're less familiar with, they might be inclined to take its output as a good fish too: the parts they can judge from their expert position seem plausible, but they don't realize that the parts outside their experience can still be really bad, because, for example, they haven't given enough context. So the trap exists there too. I think the most critical lesson we can learn here is reminding ourselves that the stuff that comes out of the LLM is a tool, but not an infallible one. All the models say this; if you go to Claude or ChatGPT, it always says the model can make mistakes. But it's something deeper than that. It's not just that it can make mistakes; it's that the onus is still on us to understand and to grow and develop, because how will you know whether you gave the model enough context to produce the correct answer? It has no way to really gather that context on its own. We see this time and time again, where the stuff the LLM produces is right, but in the wrong way: it's the right solution to the wrong question. Or sometimes it stitches together multiple things that are independently correct but together incorrect, and expertise is needed to tease those apart. Sometimes you can use the LLM to learn what those shortcomings are, pair LLMs against each other, and so on. So I do think it's a really good learning tool, but it has to still be considered a learning tool. The expression I've used in the past is that it's like the invention of, what's it called in English, the table saw, the thing carpenters use to cut wood.
It is a power tool. It allows you to work faster in some cases, but you shouldn't use it for everything; not everything needs a table saw. Similarly, not everything should be done with an LLM. And if you just cut wildly with it, it will happily cut, but they might not be the cuts you meant to make. It's a measure-twice, cut-once kind of ordeal. I think we need to think of LLMs increasingly as power tools that have to be wielded responsibly, and we need to understand the task we want to achieve; otherwise it's just a blunt instrument. Let's see. Oh, we have a bunch of follow-up questions here. Does the LLM not being able to actually understand like humans even matter, or does the accuracy of the output make actual understanding irrelevant? I think it does matter that LLMs are not actually able to understand, because that's in a way the underlying reason why we need to treat them as power tools. That's why we can't just take the output at face value: we don't know that they've been given everything that's needed; we don't know that the patterns they're replicating are representative of the correct solution; we don't even know if they've been trained on the patterns that are actually needed for this output. Imagine you're trying to build something completely novel, like an algorithm that never existed before. I don't think the LLM would be able to give you the correct algorithm, because it has not seen the thing, and producing it requires creativity. This almost becomes a philosophical argument about what creativity is. Isn't it just replicating subsets of patterns until you get a new pattern? Maybe it is, but my sense is that there's not enough [snorts] understanding in the LLM to know whether it is solving the right problem in the right way.
Think back to the interview question we discussed a little while ago: I like to ask how you convince yourself that you built the right thing, in the right way, correctly. I think the LLM can't answer that question. "But this would imply that LLMs can never attain AGI, because an AGI should be able to complete tasks at least as well as human experts." Oh yeah, I don't think LLMs are on the path to AGI, though that's maybe a contentious point from my end. Whatever AGI looks like, I don't think LLMs are a stepping stone on a direct path there. The way we get AGI is something else, or maybe something in addition, but it's not just better LLMs and then we get there. At least not to me. [snorts] "I have a lot of skepticism about LLMs. I honestly hate all the push from companies to use them. Sometimes I spend more time writing prose than I would just writing the code; the damn thing takes five minutes after reading the docs." Yeah. I think this is the other aspect of LLMs as power tools: you need to know when they're suitable and when they're not the right tool for the job. If you took a table saw and used it for, I don't know, felling a tree, that's not what a table saw is built for. It would be terrible at the job, it would take a long time, and you're probably going to injure yourself in the process. You also wouldn't use it to hammer a nail.
The analogy is a little stretched, maybe, but as you discover where the LLM gives you leverage, where it speeds you up, you also discover the places where it does not. That's one of the reasons I spent a week or two forcing myself to use LLMs a lot for that period of time: I felt like I didn't yet have the compass for where they waste my time and where they save it. I'm not saying a week or two is sufficient to develop that, but you need to start somewhere, and you need to recognize that "should I use an LLM for this?" is even a question worth asking. There are a bunch of things they're just not well suited for, and a bunch of things where they truly save me a lot of time. [snorts] Okay, next question. The One Billion Row Challenge: is Java really the winner here? This is a reference to the last video we did, where we implemented what's known as the One Billion Row Challenge: you take a giant CSV file with a billion rows, and your goal is to produce some metrics over the data in that file as quickly as possible. This was originally a challenge in the Java ecosystem, and increasingly people have started trying to solve it in other languages. The net result of that video was that we produced a Rust solution that was quite fast, but the Java solution was faster. I don't think this means Java is really the winner. It's more that, at this level of optimization, it almost doesn't matter what language you're using, because you're not really using the language anymore. Many of the solutions, like the really, really fast C++ implementation, are arguably not C++ at all; there's a lot of assembly in there.
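For readers who haven't seen the challenge, the core task can be sketched in a few lines. This is a hedged, naive single-threaded baseline of my own (not the stream's solution), assuming the challenge's `station;temperature` line format; the competitive entries replace essentially all of this with memory-mapped I/O, hand-rolled parsing, and SIMD:

```rust
use std::collections::HashMap;

// Naive 1BRC-style aggregation: per-station (min, mean, max).
// Accumulates (min, sum, max, count), then finalizes the mean.
fn aggregate(input: &str) -> HashMap<&str, (f64, f64, f64)> {
    let mut stats: HashMap<&str, (f64, f64, f64, u64)> = HashMap::new();
    for line in input.lines() {
        let (station, temp) = line.split_once(';').expect("station;temp");
        let t: f64 = temp.parse().expect("numeric temperature");
        let e = stats.entry(station).or_insert((f64::MAX, 0.0, f64::MIN, 0));
        e.0 = e.0.min(t); // running min
        e.1 += t;         // running sum (for the mean)
        e.2 = e.2.max(t); // running max
        e.3 += 1;         // row count
    }
    stats
        .into_iter()
        .map(|(k, (min, sum, max, n))| (k, (min, sum / n as f64, max)))
        .collect()
}

fn main() {
    let out = aggregate("Oslo;3.0\nOslo;5.0\nParis;10.0");
    println!("{:?}", out.get("Oslo"));
}
```

The interesting engineering in the challenge is everything this sketch ignores: avoiding float parsing, avoiding hashing the station name byte-by-byte, and splitting the file across cores.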
You're just trying to squeeze every ounce of performance out of the system, pulling every trick the programming language allows. And the winners in the Java competition spent way more time than we did. Ours was a 12-hour stream, which feels like a lot of time, but it was one person starting from scratch, refusing to use a bunch of third-party libraries, whereas the top solutions in the Java space were teams of multiple people working for a long period to really extract that performance. This was me sitting down for one session to write a program. So the fact that I didn't get mine faster than the Java version, I don't think we should read too much into. I think the fastest one overall is a C++ solution; I forget by how much it beats the Java one, but it's a substantial margin. I don't think the language is the right measuring stick here, and I don't think we should use this challenge to determine which programming language is faster. We can use it for things like how much overhead a language brings, or how easy it is to get a relatively performant solution quickly using the standard tools and ecosystem: in a way, how ergonomic it is to get pretty good performance. But for the objectively fastest time, you're not really comparing the programming languages anymore. [snorts] Are you Wasm yet? I am not Wasm yet. I've played around a little with Wasm, but it's mostly toy things at the edges; I would not say I'm Wasm enough yet to say yes to that question. Can you do a Crust of Rust about the problems of self-borrowing and the solutions used in popular crates like ouroboros, and their tradeoffs?
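For readers unfamiliar with the term: self-borrowing is a struct holding a reference into its own data, which the borrow checker rejects because the field's lifetime has no name. A minimal sketch of the general access-time workaround, in plain Rust (ouroboros's actual macro API is different; this is only illustrative):

```rust
// A field like `word: &'??? str` borrowing from `data` has no
// expressible lifetime, so it can't be stored directly. Workaround:
// store something owned (here, a byte range) and only materialize
// the borrow through a getter, where `&self` supplies the lifetime.
struct Parsed {
    data: String,
    // stand-in for "a reference into data"
    word: std::ops::Range<usize>,
}

impl Parsed {
    fn new(data: String) -> Self {
        // pretend-parse: remember where the first word lives
        let end = data.find(' ').unwrap_or(data.len());
        Parsed { data, word: 0..end }
    }

    // The returned &str's lifetime is tied to &self, which only
    // exists at the point of access.
    fn word(&self) -> &str {
        &self.data[self.word.clone()]
    }
}

fn main() {
    let p = Parsed::new("hello world".to_string());
    println!("{}", p.word());
}
```

When the borrowed thing can't be reduced to an index or range (say, a parser type that holds `&str` internally), this trick stops working, and that's where pinning or crates like ouroboros come in.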
Maybe; that doesn't sound like a bad idea for a Crust of Rust. Self-borrowing, though, is one of those things that isn't all that interesting on its own. It's interesting to try to develop a framework within the language for expressing self-borrowing; that's how we ended up with pinning, and that's why we have crates like ouroboros. But self-borrowing itself is very simple: it's just a struct that has a pointer into itself. That's all self-borrowing is. And the solution used in ouroboros, for example, is in a way not really a solution at all. It just says you produce an owned object where the only way to get at the reference is through a reference to self, and then you tie the lifetimes together at the point of access. It's really a workaround for the fact that we don't have a "self" lifetime on structs: you can't say that a given field's lifetime is the lifetime of self, because there is no self at that point in time. But once you have getter methods, you do have a lifetime for self, because it's an argument to the function. That, plus a closure for the constructor, is really what makes this possible to express. So this could be interesting to talk about, but realistically it's a fairly short topic. Maybe for once we'll have a short Crust of Rust; it's not a bad idea. I'll put the link to the Q&A page in the chat again for people to look at. [snorts] [clears throat] How has NixOS been, and how has it affected your workflow? Ah, I think someone has been spying on my dotfiles. My dotfiles are in a repo on GitHub, and recently I wiped my laptop and installed NixOS on it. I checked in those config files because obviously I don't want to lose them, and I guess that's how someone knew I'd adopted it.
I've enjoyed it so far. I've been using Nix at work a bunch, so I figured, why not try to adopt it as a complete OS install on my laptop? A couple of thoughts on NixOS. One: the process of getting it set up was fairly straightforward, but the process of making a standard setup where you also have Home Manager, and figuring out where each package should be installed, was not well documented anywhere. Arguably I should have documented it myself, but it feels like there should be a more straightforward "install the entire base system and lay out my config files" path; that kind of installer would be useful. Two: figuring out which packages are even available in Nix can be a bit of a pain. At this point I've landed on the one to use, which is searchix.ovh; I'll put it in chat. That seems to search all the different sources: Home Manager, NixOS, nixpkgs, and maybe something else. But I still feel like there are meta-packages that aren't listed. If I search for, say, LLVM, it lists a bunch of specific versions, but I can also install just the unversioned package, and for some reason that works, though I don't know why, and I can't search for those. So I'm still figuring out the ecosystem of where things live in the Nix world. I've found all the source files for the Nix packages; I just haven't found how to find them systematically yet. And three: it can be pretty annoying to figure out why something goes wrong, if something goes wrong. I had an error in my config where, what did I do?
I tried to set some configuration field that had been deprecated in an old version of Home Manager or the package, and so had been removed in the newest version, and the error I got was about a hundred lines long, with a huge backtrace that wasn't really that helpful. I really had to dig into the source Nix files for the packages to figure out what changed and look at the git blame for it, and then it turned out to be in a dependent package, so I needed to install nix-tree. The process for debugging Nix things feels very painful to me. But that could also be because it's not in my fingers enough yet, so ask me again in a year and we'll see. I think it's true, and someone pointed this out in chat too, that where NixOS really shines is servers, a little less so desktop computers. But I'll say I found the experience pretty nice for my laptop, especially because I can now wipe it and set it up again pretty easily. Not trivially, but pretty easily. I have a setup where things that are specific per host live in a separate file, and that looks like it could be pretty nice. But the jury is still out on whether I prefer it to Arch; my desktop still runs Arch, and it's only my laptop that's NixOS for now. What is your approach for assessing candidates in software engineering interviews? This one we've sort of answered already, but the follow-up question is: has your approach changed given the rise of LLMs, especially when the interview is in a remote setting? I think there's a future where interviews become the equivalent of a university open-book exam, where the intent is: use whatever tools you want, I don't care, but the interview questions are designed such that the open book just makes the setting realistic rather than too easy.
What's an example of this? Imagine you showed up to an interview and I said: you can use whatever tools you want; I just want to see your screen. Not because anything is disallowed, but because, for example, the way you prompt your LLM is part of what I want to evaluate, so I just want to see everything. And the setup for the interview, let's say it's a 50-minute interview, is: write me a program that lists files. Full stop. That's the entire interview prompt. The interesting things I would get out of that interview are not how you produce the code. If you use an LLM to generate part of that code, that's fine by me, as long as you then show me how you vet that the code is correct. In fact, there's no downside to choosing to do that; it doesn't mean you're a bad programmer, and it doesn't mean I wouldn't hire you. The process you go through to get that code, to vet it afterwards, to test it: that matters. But perhaps more importantly, how you explore the problem space is what I want to learn from. When I ask you to build me a thing that lists files, the questions you ask in response are the data; that's what I want to hear. What follow-up questions do you ask? And then we'll take the interview in whatever direction you end up going. Presumably you'll ask things like: What operating system am I running on? What information do you want listed about the files? Should it be recursive? How should the output be printed? Should it be visual, or text? If it's text, just ASCII, or is Unicode fine? What text encoding do the file names have? Am I allowed to use libraries, or do I need to use syscalls directly?
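With a few of those questions answered one way, a minimal sketch of the program itself might look like this (my own illustration, not an answer key; the point of the exercise is the questions, not this code):

```rust
use std::fs;
use std::io;
use std::path::Path;

// One possible set of answers, baked in as decisions:
// non-recursive, regular files only, names only, sorted text output,
// lossy Unicode for non-UTF-8 names, std::fs rather than raw syscalls.
fn list_files(dir: &Path) -> io::Result<Vec<String>> {
    let mut names: Vec<String> = fs::read_dir(dir)?
        .filter_map(|entry| entry.ok())
        .filter(|entry| entry.path().is_file()) // skip subdirectories etc.
        .map(|entry| entry.file_name().to_string_lossy().into_owned())
        .collect();
    names.sort(); // stable, predictable ordering
    Ok(names)
}

fn main() -> io::Result<()> {
    for name in list_files(Path::new("."))? {
        println!("{name}");
    }
    Ok(())
}
```

Every decision in the comments corresponds to one of the clarifying questions above; a different set of answers (recursive, metadata, machine-readable output) would produce a very different program.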
There's a bunch of these questions that really get into how deeply you're able to think through the levels of the problem. Some of them might be: who is this for? Where and how is it going to be installed? And that becomes the interview. The producing of the code is a small part of it, and aptly so, because in your real software engineering job, writing the code is only a small part of the job. That's why I'd be totally fine with this being fully open, LLMs and all; you'll probably do similar things in your real job, and I just want to see how you use those tools in practice. Okay, let's pause for a second, because now we're down into some of the questions that were asked more recently. So, everyone, go to the question page again, take some time, pause the updates (top right), go through the list, and vote for the questions you really want to see answered. Then we'll come back here in about two minutes and resume. I'd suggest not asking new questions right now, because they will probably not make it into the top list; instead, vote for the ones that are already there. There should probably be one that matches your interests. I'll see you in about two minutes after you've done a bunch of voting. And we're back. Let's see. The next top question is: are there any changes in Rust you think would be worth a breaking change? I think I get this question every year, and the answer is sort of the same every year, too: there is no single change that I think would be worth a breaking change in Rust.
We have some good contenders, like Pin and Drop, or async cancellation; there are a couple of things that are substantial enough to warrant it, but fixing them would almost certainly be a breaking change. The reality, though, is that a breaking change to a language is hugely disruptive; Python 2 to 3 is the classic example. I would be very surprised if there was ever a Rust 2.0. Instead, I think what we're likely to see is an expansion of what editions are able to do, and that might not be a policy expansion; it might be a technical expansion, where editions become able to do more things in a way that preserves backwards compatibility. I don't know. But I don't think there's any single change that would lead the Rust teams to decide to do a breaking change. Maybe over time there are enough important changes that together make a breaking change compelling, but even that is a pretty tall order. And it's not sufficient for those changes to be worth it in isolation, or for the problem to be identified as important enough; there also needs to be enough confidence that the replacement is the right solution, so you don't end up with a breaking change that doesn't actually fix the problem. So I think there's a vanishingly small likelihood here. "I think async is a big thing that stops me from recommending Rust to my team." I don't think async is that bad; I simply don't. I think there are two things that are problematic with async Rust today.
Maybe three, but probably two. The first is cancellation. Cancellation is not a problem in and of itself, but it's enough of a footgun that people keep being bitten by it, and we need a solution. It's not clear to me that the solution requires a breaking change. It's also not clear to me that cancellation means you can't use the language, that it just doesn't work; I don't think that's true. It does require some diligence, and it can be hard to get right, but it doesn't make the whole async story useless. The second is async drop, which is closely related to pinning, which is closely related to move semantics. This is a little annoying, but we have workarounds: instead of relying on Drop, you have an asynchronous function called close (not cancel) that gets to do the async work, and in Drop you try to get a handle to the runtime and call that asynchronous function; if that fails, you either ignore the error or panic. Would it be better if Drop could be asynchronous and pinned? Absolutely. It would also make Pin itself easier. If we had move semantics, maybe that would make things easier too. But again, it's very rare that I actually have to think about this; it's not the majority of asynchronous code, quite the contrary. It comes up quite rarely, unlike cancellation, which comes up much more. Async drop, and even writing manual pinning code, is much less common. It's similar to writing unsafe code: yes, it's painful when you do it, but you only have to do it a few times, and then the rest of your code doesn't need to care.
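The close-plus-Drop-fallback workaround can be sketched like this. This is a hedged illustration with stand-in names; a real implementation would attempt to grab an actual runtime handle (e.g. from tokio) in the Drop fallback, which is elided here so the sketch runs with the standard library alone:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct Conn {
    closed: bool,
}

impl Conn {
    // Explicit async teardown: callers do `conn.close().await`
    // instead of relying on Drop, since Rust has no async drop.
    async fn close(mut self) {
        // ... await flushes, goodbye frames, etc. (elided)
        self.closed = true;
    }
}

impl Drop for Conn {
    fn drop(&mut self) {
        if !self.closed {
            // Best-effort fallback: a real implementation might try
            // to get a runtime handle and spawn the async cleanup,
            // ignore the failure, or panic in debug builds.
            eprintln!("warning: Conn dropped without close()");
        }
    }
}

// Hand-rolled no-op waker so the sketch can poll a future without
// pulling in an async runtime.
fn noop_clone(_: *const ()) -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}
fn noop_op(_: *const ()) {}
static NOOP_VTABLE: RawWakerVTable =
    RawWakerVTable::new(noop_clone, noop_op, noop_op, noop_op);
fn noop_waker() -> Waker {
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &NOOP_VTABLE)) }
}

fn main() {
    // close() takes self by value, so Drop later sees closed == true
    // and the warning path never fires.
    let mut fut = Box::pin(Conn { closed: false }.close());
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Ready(())));
}
```

Note that `close` consuming `self` is what makes the pattern work: the happy path marks the value closed before Drop ever runs, and Drop only fires its fallback when someone forgot to call it.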
The reason I said maybe a third is blocking the asynchronous executor, but we are increasingly getting better tools for this; tokio-console is a start. We also now have a lint for types that are not good to hold across an await point, like if you try to hold a std Mutex guard over an await. That will help make this problem a little better. But it's also not a dealbreaker, not a reason you cannot do asynchronous programming in Rust. And maybe there's a fourth, which is that the debugging tools aren't as good as they could be; I totally agree with that. But again, that doesn't mean, to me at least, that I just can't write async code. Not at all. "What about the split between sync and async code? For example, if anything inside a function becomes async, everything in that function's call chain must also become async." Yes, but I think that's a fundamental thing in programming; this is the "what color is your function" problem. Yes, you need to think about that, but if you look at something like Go, the only reason Go gets away with not dealing with that problem is that everything is asynchronous and managed by a runtime. I don't think we want to get into that world, because it means the runtime always has to be there, and one of the values of Rust is that the runtime doesn't have to be there. That's why you can use Rust for things like embedded programming. So that's somewhere we've said we're willing to give up a little: you get to pick one or the other. You either pick a runtime or you deal with colored functions, and I would much rather deal with colored functions. "For a team of Java or Kotlin devs, the complexity of async Rust is too hard to explain,"
especially for a migration to Rust. Yeah, pitching it to a team can be harder; that is certainly a barrier. But I'm also skeptical of this whole "rewrite it in Rust" movement, and I talked about this in the JetBrains interview that I did. I think there are often very good reasons why people don't want to rewrite things in Rust, and I don't think we should force them. If you have a giant team of Kotlin developers, it's not the right path, most of the time, to say you all have to learn Rust now and we have to rewrite the whole thing. That seems like insanity, right? Instead, you find the incremental adoption paths, and you find the arguments for why Rust makes sense in those specific paths. And if you can't find them, then maybe there aren't any for that company. And that's okay; we shouldn't require that Rust goes everywhere. Okay. How do I improve at Rust as an intermediate Rust developer? I'm at a level where I can do almost anything I want in Rust, but any time I watch you or any other advanced Rust devs do the same thing, my solution always feels inferior. How do I get to that next level? Can you recommend any classes or courses specifically aimed at intermediate to advanced developers looking to level up? You know, I think you've hit on a question that every developer asks themselves on the path from junior to senior. And I don't think there's a shortcut here. I don't think there's a course you can take that just teaches you how to smell the right solution or smell the wrong solution. I think it comes with experience, and it makes me sad to say that. I wish there was an easy answer where I could say, "Go take this course and then read this book, and now you'll just make better decisions as a programmer." I mean, there's some of that, right? Like, you can read books like, um, what's it called...
The Pragmatic Programmer, that's what it's called. It's a book written by experienced programmers who talk about the essence of software development, independent of languages. It's not about frameworks and not about methodology; it's about how to engineer. I thought it was a really good book. I read it many, many years ago, and I think there are a lot of good lessons in there, but I don't think it solves your problem. I don't think it immediately means that you're now senior, that you now have all that expertise, that you now know all of the ways in which you can potentially go wrong. So, in a way, you could think of reading that book as one way to gain some experience. The other is just to build systems, and some of the time that means building systems that you're a little bit uncomfortable with, things where you need to stretch, like we talked about in an earlier question. If you're never really challenged, if you're building the same system over and over again (let's say you're a website developer and you feel like you're really just building the same template over and over), then you're not learning very much from each iteration, because there's not that much new, and so there's nothing for you to learn from. By leaning forward a little bit into things that are a little uncomfortable, where you really need to learn as you go, I think that is how you gain that expertise. But then it also just takes time. Even if you are constantly taking projects that force you to grow and evolve and learn, it still will not happen overnight. It will take years. But that's okay.
I think the other part of this is that you can learn from other experienced programmers. You can watch advanced developers and learn from them, like what you're already doing: comparing your solution to someone else's is a great way to learn. You should not use that as evidence that you're inferior; you should use it as data that helps you grow. The delta between, say, my solution and your solution: inspecting that delta is an opportunity to grow. It's something to learn from. Same thing with watching talks from experienced developers about the kinds of pitfalls they've run into. Learn from the failure cases, not just from the success stories. In fact, if anything, the success stories tend to paper over all of the problems. The talks I really like to listen to are the ones about the things that went wrong, the things we had to learn from. At Amazon, for example, they have these things called COEs, Corrections of Errors. Any time a big problem happens, like a big error occurs, there's afterwards a write-up of what went wrong and why. I found those fascinating to read, because I get to learn from the mistake so that I hopefully don't repeat it. It's accumulating all of these over time, plus doing your own development and learning your own lessons; that's how you grow. There will not be a course you take, an accreditation you get, and now you're good. Even as a senior engineer, do you still find yourself thinking you made the wrong decision? For example, do you have any regrets about sguaba or other crates of yours? Oh, I certainly still make incorrect decisions. It's more that I think these days, some of the mistakes are calculated risks. Not all; I'm not going to claim I can see the future, right?
But it's increasingly the case that I can spot when I'm making a decision that I know might be the wrong one, that might come back to bite us, but where a decision has to be made. Let's say I have two options, or two options are the only ones I can think of right now, and I don't like either of them, and I go, "I think that one," and it might be wrong, but we need to pick something. It's more those kinds of decisions. I do also make other kinds of mistakes, like in the API for a crate or something, but those are easier to correct, right? At Amazon, there's this notion of one-way and two-way door decisions. Two-way door decisions are ones that you can relatively easily undo; think of it as a two-way door. If you walk through the door, you can easily go back, or at least it's not that costly. You can change the API of a crate. It might be a breaking change, but hey, you can do it. You can, I don't know, do a software update for a thing that's a toy, where it doesn't matter. You're building a prototype, you built it slightly wrong, you have to spend some time to build it again or build it differently; that's all fine. Then you have one-way door decisions, ones that are very costly to undo. Things like, I don't know, what's a good example of this, choosing to purchase another company, or Amazon announcing that they're expanding to a new region, or Helsing announcing some new product they're developing. Things where, the moment you make the decision, going back is either impossible or extremely costly in terms of money, time, and potentially reputation. It can also be things like, let's say you're working on medical systems: there will be some implementation decisions where making the wrong call means that people can die. And if that's the case, you should be really sure you're making the right decision.
And so that would be an example of a one-way door decision. The mental exercise here is that you should run through two-way doors. You should run through two-way doors because they're so easy to undo. With one-way doors, you should take a long time before deciding; you should make sure you have all of the data. With two-way doors, if you don't have all the data, that's fine; you can undo it later. And that mental model, to bring it back to your question, is one I use a lot when trying to make decisions. I try to identify: is this a one-way door or a two-way door decision? If it's a two-way door decision, like the API of sguaba, then okay, I gather 80% of the data, I go "this seems right for now," and I can change it later. But if something is a one-way door decision, I really sit down and do my due diligence. I talk to other people. I work through the possible future use cases to a much deeper degree than I would with a two-way door decision. And if you manage to correctly identify one-way door and two-way door decisions, the chance that you'll make really bad decisions, decisions that are incorrect and where the outcome is really bad, is lowered. It doesn't go away, but that's one of the ways to mitigate the chance of making a catastrophic mistake. How do you gather quality learning materials when learning hard or complex subjects, for example, writing a database or an OS? There's no right answer to this, really. I think learning material is one of those things where you can usually tell pretty quickly whether it's good. Whether it's a book or an article or a lecture, five minutes in you'll know whether it's worth spending more time on. And sometimes there aren't really any good resources, and you kind of just need to go do it yourself.
Sometimes the strategy that works is to find someone else who's done the same thing, or something similar, and talk to them. They might not have produced a book, they might not be streaming, they might not be giving lectures, but send them an email and say, "Hey, I'm actually building the exact same thing, or something very similar, and I just want to hear about your experience." They might not reply. People are busy, they don't want to talk to strangers, their time is worth a lot to them; there are all sorts of reasons why they might not. But they might also be willing to talk to you. And I think that is one of the best learning resources: firsthand accounts from someone else who did something similar. Does this door analogy work for medium-impact work? Can a single software developer make a catastrophic decision when you're just a gear in a big system? Ah, so yes and no. A single software developer can make a catastrophic decision, but if that catastrophic decision has a catastrophic impact, it's not that developer's fault. There's a process called the "five whys." I don't know where it originated; it was used at Amazon, and it's used at Helsing. The idea is that if something goes wrong, you ask the question: why did it go wrong? And the first-order answer is usually something like "this person clicked this button at this time" or "this person wrote this code and it was wrong." But that's not the real why. So you ask why again: why could that person make that mistake? The answer might be, well, they didn't run the tests, or the tests didn't catch this problem. Let's take one of those. Then you ask another why: why were the tests not run? The answer would be, well, because we don't have automated tests that are enforced in CI. Well, why is that? We don't have that because... and so on. And you go at least five levels deep.
The sort of mantra for the five whys is that everyone always did the best they could with the information they had available at the time. It is never a single individual's fault. It is always the fault of the systems around them that failed to contain the problem. The goal of a five-whys process is to identify the root causes and to fix, or put in place defenses against, those root causes, so that that class of problem cannot occur going forward. So even if a single developer can make a catastrophic error, the question then becomes: why were they put in a position where they could, and where it could have a catastrophic impact? How to tie this to the door analogy? I think the same applies to individual developers. As you make decisions, you shouldn't necessarily think of one-way door decisions as catastrophic things. It doesn't have to be life or death, or Amazon choosing to make a decision that costs them billions of dollars. It can also be at the level of, what's a good example of this, [sighs] let's say someone proposes a new system architecture for something, maybe a slightly junior person, and you think they've done a really bad job. You think it's a really bad idea. And you're about to send a scathing email to this person and their adviser or supervisor, whatever it might be, explaining all the ways in which they're wrong. This could be a one-way door, because the net result could be that this junior person goes, "My input isn't wanted. I'm going to leave this company." And maybe they're actually really talented and this just wasn't their best work. Or maybe they were right and you were wrong. The impact of you sending that email is not something you can take back, at least not easily.
Pushing to production is another example; editing code live in production, an even worse one. These are things that have immediate impacts that are costly to reverse, and they are one-way door decisions at the level of an individual. It is true, though, that if you're a cog-in-the-machine type of employee, more of your decisions should be two-way door decisions than one-way door decisions. And that's kind of on purpose, right? If all of your decisions are one-way door decisions, are you sure you want to be in that position? That brings risk and reward. The more senior you are at a company, it might seem like the only thing that changes is that you get paid more, but the reality is also that more things fall on you. You make more one-way door decisions, which means there are more critical decision points in your path. And that also means you can make bigger mistakes, because you're trusted to evaluate those decisions better and balance the risks. So you're signing up to make more one-way door decisions and live with the consequences. Okay. How do you test or benchmark your critical software, like flight systems that communicate over networks? Do you have some devices it runs on? I don't think this question is really about flight systems. I think it's more about how you convince yourself that critical software is actually correct, regardless of what that critical software does. It is true that critical software that runs across devices has even more of a chance of being wrong than something that runs contained in one device. The reality here is that the way you test is a layered approach. You start with unit testing: testing components in isolation, where everything around them, all their connections to other components, are mocked.
The next step is integration testing, where those systems get to talk to each other, but their perception of the world is mocked, usually with fixed inputs and fixed outputs. You check that they end up achieving the result that you want. The next level above that is simulation. You run your software stack against a simulation of the real world. Ideally it's a closed-loop simulation, where the outputs of the system affect the inputs of the system. So if your system decides to, I don't know, turn the right flap of the wing, then the next time it reads its own position, the position had better indicate that it's made a turn, because the physics part of the simulation feeds back into the inputs that the system sees. This is how you test that your systems not only work correctly together, but also that they correctly interpret and affect the real world, or the simulated real world. The next step up is that you run on hardware, but with simulated inputs. You take all of the hardware you can get your hands on, not necessarily the full airplane, but the computers, the wires, maybe some of the sensors, and you hook those hardware devices up to your simulation engine. There are still simulated inputs, but you now also get to check: does it work correctly on this hardware? Is the hardware fast enough to run this in real time? Does latency start building up so that you're no longer keeping up with the true flight trajectory, for instance? The next step above that is that you build a physical rig; in the case of a flight system, a physical rig of an aircraft, and you hook that up to physical simulators. So you're no longer in the software-only realm.
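The closed-loop idea described here can be sketched as a toy example: a made-up "autopilot" tries to hold a target altitude, and the physics step feeds the controller's output back into the next sensor reading. All names, gains, and time constants are invented for illustration; a real simulation would model far more state.

```rust
// Simulated world state (the "plant" in control-theory terms).
struct Plant {
    altitude: f64,
    climb_rate: f64,
}

// The system under test: proportional control on altitude error.
// The gain 0.5 is a hypothetical tuning value.
fn controller(measured_altitude: f64, target: f64) -> f64 {
    0.5 * (target - measured_altitude) // commanded climb rate
}

// Physics step: the commanded climb rate moves the plant, which the
// controller will observe on the next tick. That feedback from outputs
// to inputs is what makes the loop closed.
fn physics_step(plant: &mut Plant, commanded_climb: f64, dt: f64) {
    plant.climb_rate += (commanded_climb - plant.climb_rate) * 0.2; // actuator lag
    plant.altitude += plant.climb_rate * dt;
}

fn simulate(target: f64, steps: usize, dt: f64) -> f64 {
    let mut plant = Plant { altitude: 0.0, climb_rate: 0.0 };
    for _ in 0..steps {
        let sensed = plant.altitude; // simulated sensor input
        let cmd = controller(sensed, target); // system under test reacts
        physics_step(&mut plant, cmd, dt); // world reacts to the output
    }
    plant.altitude
}

fn main() {
    // 2000 steps of 50 ms each: 100 simulated seconds.
    let final_alt = simulate(100.0, 2000, 0.05);
    assert!((final_alt - 100.0).abs() < 1.0, "controller should converge");
    println!("final altitude: {final_alt:.2}");
}
```

Because it's all software, thousands of variations of a loop like this (different targets, disturbances, sensor noise) can run in parallel, which is exactly the iteration advantage of the simulation layer described below.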
You generate things like GPS signals, for example: instead of having an antenna, you have a cable to a box that generates a GPS signal, so that the GPS receiver thinks it's somewhere. At this point you're testing the hardware sensors as well, in addition to the software that's part of that loop. And then the last step is test flights. For a flight system, you would fly the actual craft under controlled circumstances: you would have a test range, you would only allow it to fly in certain areas, you'd have fail-safes, remote piloting, whatever it might be, everything such that even if something goes wrong, it can only go wrong in very controlled ways. The tricky thing here is that the closer you get to test flight, the more confidence you have that your system is correct, because you're doing more realistic testing. The cost, though, is both the literal monetary cost (it's costlier to run those) and the cost in iterations: you can run fewer of them, because they need an actual craft, airtime, fuel, pilots. Whereas at the simulation layer, you can run way, way more simulations, because it's all just software. You can run it across a whole cloud of machines, all running simulations day and night under all sorts of different scenarios, but you can't quite check the real system. That doesn't mean it's useless; it means you're testing the system under more possible configurations, even if you're only finding a subset of possible errors. The reality is that this layering is what gives you the confidence that your critical systems are correct. You couldn't do only one of them.
You do all of them, and each of them gives you a different kind of confidence. When you have that tower of confidence, that's what ultimately gives you the evidence to say: we now think, with a large degree of confidence, that the system is fit for the real world. So, to the basic part of the question, do you have some device that it runs on? The answer is kind of yes: it runs on lots of potential devices, both simulated ones, unit tests where there is no device, hardware ones, then hardware rigs, and then actual test flights. All of the above. [snorts] Depending on the use case, there might also be systems for building systems that are relatively simple but where failure is not an option, like the code for railway switches. This is also true. For some critical systems, you want to subdivide and isolate. You say: this part of the system is so critical to the safety of the whole that we're going to isolate it from the rest, and we're going to build it under much more rigorous standards than everything else. This one we're going to write in, you can imagine, a language with formal verification, maybe not necessarily Coq, but you can even imagine going all the way there: something where the program cannot be very complex, but we can be very, very sure that it's correct. As long as this component is correct, even if all the other systems do wild stuff, we know the set of failure cases is contained. That's an example of one of the ways you can build a system so that your confidence doesn't hinge on the entire system being fully correct, but on one part being fully correct and the other parts being mostly correct, and then you're narrowing down what the possible failure cases are.
Do you consider yourself a generalist or a specialist? Do you think the job market is unusually harsh towards one or the other? I've thought about this question in the past, and I've never really reached a satisfactory answer. I think I've always been very curious about a wide variety of things, which tends to drive me broad, but I also, maybe I have some amount of OCD or something, really want to get to the bottom of things. I hate having something be unanswered, and so I tend to keep asking why until I get a satisfactory answer. The curiosity drives me broad, horizontally, and the whys, which I guess are a different kind of curiosity, drive me deep, vertically. So I think what I've ended up with is sort of an M shape, where I have a couple of very deep depths and then broadness across them. I don't think I would say I'm a generalist in the sense that I know equally much about a hundred different things. I would say I've specialized in some topics: concurrency, distributed systems, data structures and algorithms, that kind of space. Operating systems maybe to some extent, but not to the same extent as the others. Rust is probably another one that would be considered a specialty. So those are the pillars of my specialty, but I've built broader understanding across them. I would say I have generally good working knowledge of things like web development (I did web development for many years), some amount of embedded development, operating systems as I mentioned, networking, network protocols and protocol design, system architecture. Those are things where I wouldn't say I have deep depths, but I have enough substance that I can carry my own.
And so that's what makes me say it's sort of an M shape: a couple of pillars with a broad base that mostly covers the span between them, but I wouldn't say I'm a super broad generalist. That said, I think my curiosity means that I can usually pick up new things as needed pretty quickly; not super deep necessarily, but I've gotten good at learning, in a way. And I actually think this is one of the big outcomes of the PhD that I did: getting good at learning. I think if there's one thing a PhD teaches you, it's how to learn, and how to learn quickly and deeply. It really drilled me in learning, because you have to learn so many things in order to produce a good thesis at the end. I think that's served me well, and it means I can emulate being a generalist, because I can usually pick up a topic to a credible degree relatively quickly. [snorts] Why do you care about having a high salary when living in Norway, and what are you spending it on? So, Norway is a society that tends to carry people through. It's not impossible to struggle in Norway too, but it has a fairly good welfare system, and most people live a fairly good life. So in that sense, it's not like you have to have a high salary in Norway. At the same time, just like in any country on earth, a good salary gives me affordances that I otherwise wouldn't have. In a way, the salary buys me choice, not just in the sense of what products I buy, but also the flexibility to do things like these streams, to spend time on the things that I care about. The money buys me that freedom, or the ability to buy an apartment, like I now have. Those things would be less possible if my salary was lower.
It gives me more freedom. So I think the reason I care is the same reason people would care anywhere else. That said, I don't have a deep need to be paid enormous salaries. That's never been my goal; I'm not particularly materialistic. I like to be able to not worry about money, that does matter to me, but being rich is not a goal that I have, and it's also not something that's currently true. I get a decent number of comments like, "Oh, the reason you work at Helsing, or the reason you work in the defense sector, is because it pays so well." And the answer is that it doesn't, really. I was paid more at Amazon than I am now at Helsing, depending a little bit on how you count equity and such. It's not the money that's driven me to the jobs I've taken; quite the contrary, I think if money drove me, I would have taken very different jobs. So I'm driven by money to the extent that I want to be paid a fair wage, but not to the point where I'm seeking the highest pay at all times. In terms of what I'm spending it on: well, about half of it goes to taxes, so that's a bunch of money that goes out. And my partner is currently working to get into the voice acting industry, which means she doesn't really have a stable income at the moment, so my salary sort of pays for both of us. A bunch of money goes there. And then it's a combination of normal things: we bought an apartment, so now I need to pay a mortgage, so money goes there. I don't think I have giant expenditures, really, or a huge category of spending that other people do not. What is your prep like for the impl Rust videos?
As a longtime viewer and longtime programmer, the speed at which you digest protocols (the BitTorrent implementation comes to mind) and implement them feels supernatural. I think I've answered this in a couple of the Q&As, and maybe even in some of the videos, but I genuinely don't prep very much for the Rust implementation videos. If I have an idea for a video, I will usually do an initial scan of whether it's even feasible to do a video about it. For example, one of the videos I've thought about doing is one where we implement a brown noise generator: a Rust program that generates a WAV file containing brown noise, just because it seems interesting to generate some audio. It doesn't seem super complicated, but I also want to know more about audio formats, and I'd need to learn a tiny bit about codecs. So I've looked up the Wikipedia article on brown noise and the Wikipedia article on the WAV file format, just to see that there aren't huge dragons there, where this would actually turn into a "20 ten-hour streams" kind of ordeal. It doesn't seem to be, but that's about the extent to which I prepare. Same with the BitTorrent one: I hadn't implemented BitTorrent before. I'd read about the protocol back when I took the distributed systems class at MIT, so I knew the very basics of it, but I'd never implemented it before. And I think that's part of the point of the videos: to show you how I ingest the topic as we go. What do I go read? How do I read it? Do I read the whole thing start to finish before I start programming, or not? The prep is in the video. That is by design, and there's relatively little I do ahead of time. Do you think software engineers should be worried about job security with the rise of AI? Well, we touched on this earlier in the question about LLMs.
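The brown-noise-to-WAV idea mentioned here is small enough to sketch. This is my own rough sketch, not anything from the streams: brown noise is integrated white noise (each sample takes a small random step from the last), a tiny linear congruential generator stands in for a proper RNG crate, and the 44-byte 16-bit mono PCM WAV header is written by hand. The step size (0.02) and leak factor (0.998) are arbitrary tuning values.

```rust
use std::fs;

// Tiny linear congruential generator so the sketch needs no RNG crate.
struct Lcg(u64);

impl Lcg {
    // Returns a pseudo-random value in [-1.0, 1.0).
    fn next_f32(&mut self) -> f32 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 40) as f32 / 8_388_608.0) - 1.0
    }
}

// Brown (Brownian) noise: a random walk. The leak factor keeps the walk
// from drifting off to the clamp rails over time.
fn brown_noise(n: usize, seed: u64) -> Vec<i16> {
    let mut rng = Lcg(seed);
    let mut level = 0.0f32;
    (0..n)
        .map(|_| {
            level = (level + 0.02 * rng.next_f32()) * 0.998;
            (level.clamp(-1.0, 1.0) * i16::MAX as f32) as i16
        })
        .collect()
}

// A minimal 16-bit mono PCM WAV file: 44-byte header followed by samples.
fn wav_bytes(samples: &[i16], rate: u32) -> Vec<u8> {
    let data_len = (samples.len() * 2) as u32;
    let mut out = Vec::with_capacity(44 + data_len as usize);
    out.extend_from_slice(b"RIFF");
    out.extend_from_slice(&(36 + data_len).to_le_bytes());
    out.extend_from_slice(b"WAVEfmt ");
    out.extend_from_slice(&16u32.to_le_bytes()); // fmt chunk size
    out.extend_from_slice(&1u16.to_le_bytes()); // PCM
    out.extend_from_slice(&1u16.to_le_bytes()); // one channel
    out.extend_from_slice(&rate.to_le_bytes()); // sample rate
    out.extend_from_slice(&(rate * 2).to_le_bytes()); // byte rate
    out.extend_from_slice(&2u16.to_le_bytes()); // block align
    out.extend_from_slice(&16u16.to_le_bytes()); // bits per sample
    out.extend_from_slice(b"data");
    out.extend_from_slice(&data_len.to_le_bytes());
    for s in samples {
        out.extend_from_slice(&s.to_le_bytes());
    }
    out
}

fn main() -> std::io::Result<()> {
    let rate = 44_100;
    let samples = brown_noise(rate as usize * 2, 42); // two seconds of audio
    fs::write("brown.wav", wav_bytes(&samples, rate))
}
```

This matches the "no huge dragons" assessment: the uncompressed PCM case of the WAV format really is just a fixed header plus raw little-endian samples.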
And I think I answered this in the JetBrains interview as well. To me, AI and LLMs and agentic coding are [snorts] really a power tool. It is like the table saw I discussed earlier. I think it has the ability to greatly accelerate those who know how to use it well, in certain tasks. I don't think it replaces programmers. I do think that if you don't learn these tools, then you are at a disadvantage for some tasks. And I do also think that when programmers become more efficient, fewer of them will be necessary, right? But that's different from saying that job security goes away. It just means that the bar ends up a little higher, because efficiency has gone up. Is that a bad thing? Maybe. But was the invention of the table saw a bad thing? Maybe. It's not, to me, really a question of job security. It's not that the agentic AI is coming to take all our jobs. It's that there are some jobs that are particularly mechanical or automatable, that align well with the tasks LLMs are really good at, and for those the efficiency gain is very large. You can now have one engineer do the job of ten engineers for that particular kind of task. And if your job is just that task, then yes, it has a big impact on you. But across the general industry of software engineering, I don't think this is an "everyone is about to lose their job" kind of thing. I do think, though, that there's a perception problem here. Even if what I just said is true, if companies perceive that AI could take software engineering jobs, then companies will probably make the mistake of firing a bunch of people, because they're like, "Efficiency will go up. We've seen efficiency go up over here, so surely it will go up everywhere.
So we're just going to let a bunch of people go, right? Because AI is replacing all of them, and if it hasn't replaced them yet, it will soon." I think that is a huge mistake on the part of companies, but it is something we're starting to see. And that is the way in which I'm worried about job security. It's not really about the AI coming to replace us; it's about the perception that AI might replace us, and companies acting preemptively on that conclusion. That worries me more. But I do think this is an educational problem as well: over time it will become increasingly apparent that it was a mistake, and then those companies will regret it and possibly fail, because they made a critical error. It also means there's an opportunity for other companies that don't make the same mistake to step into that same spot. It's not clear to me that the net number of positions goes down. It could be that even if one company eliminates a bunch of those positions, those people will just get hired elsewhere, by companies that haven't made the same miscalculation. But it does introduce volatility into the system, which is unfortunate. It sucks when people get fired; that volatility impacts real people's lives. But I do think it's a sort of misattribution error. The problem is that people inherently know not to stick their hand into a table saw, but often don't have the same intuition for LLMs. I think that's totally right. [snorts] Do you think it's currently quantifiable how many fewer jobs will be needed? 10%? 20%? It really depends on the industry. It depends on the type of task, too.
There are some programming tasks where LLMs have saved me hours of debugging, or hours of typing boilerplate to iterate on something that didn't really matter, or of looking up documentation for something I didn't really care about. As an example, I wanted to redo some of the styling on my website recently, and I wanted to adopt the Tufte CSS style, because I think it looks nice for side notes and such. I haven't done CSS in quite some time, though I did a lot of it back in the day, and CSS has evolved in a bunch of ways. One of the things I wanted was to change the styling, which is currently all handwritten, to be split into files, imported using Sass, and then minified. I could just tell the LLM to do it, and it just did it. It saved me so much annoyance that would otherwise have taken ages: looking everything up, figuring out how to configure the tools, all of that. That time I just got back. There are also tasks where it's not clear the LLM is faster than me, but I can have the LLM do the work while I do something else. It gives me a sort of I/O parallelism, if you will: I can start something and go do the dishes while it finishes. It does the job slower than I would, but more work still got done overall. So the efficiency gain is hard to quantify, because it's so task dependent. For some tasks it's 90%, and for some it's zero. So no, I think it's too hard to say.

What do you think when you see engineers at places like Anthropic say that software engineering will be dead within months? I think they're just wrong. I think that is simply incorrect. I think the job will change, right?
As I said, there are tasks now where I'll have an LLM do the work, because I know it's well suited to what LLMs are good at. If I didn't know that, I'd be a much less efficient engineer. So I need to learn those tools and make use of them, and the job changes in that more of it is now prompting LLMs. Not all of it, not even the majority, but more of it. But the claim that we don't need software engineers at all, I think, is just wrong.

If I'm a professional software engineer and AI can now make me 10x better, does that mean it's easy for me to start my own company or build a product? I don't think so, because again, it's not a 10x across the board; that's the problem. It's a 10x for certain tasks, and for those it really is a huge timesaver, but there are other tasks where it's completely useless. Maybe your skills align so well with the LLM's gaps that you can now be a smaller team that builds something from scratch, and it goes really well. And we do see some of that: people able to build their idea quickly because agentic AI lets them. But I don't think everything is now 10x easier.

Okay. Are you following and supporting Rust for Linux? Yeah, though "following" is maybe too strong a word; I read the occasional news article. I think it's really cool that it's in the mainline kernel. I think it's cool that it now seems to be here to stay, and that Linus seems to agree it's here to stay. I've seen some of the drama as well, and I'm not surprised at all that there's drama there. And I think it's really unfortunate that we've lost some really good Rust people on the Linux side because they basically got tired of the drama.
That, I think, is really unfortunate, but overall I think we're on a good trend here. In a way, the adoption has almost happened faster than I expected; I thought there would be even more barriers to entry. And the adoption is slow, right? But things like AMD deciding to write their new Linux graphics driver in Rust from scratch are really cool. This is the kind of thing that was the early promise of Rust, and we're seeing it come back time and time again. So yes, I'm very excited for Rust for Linux. I'm not involved in any way, but I'm very much cheering the people on.

Okay. We're now down to questions with fewer than 50 votes, which means it's time for a quick-fire round here at the tail end. The way this works, as in all our Q&As: I'll start a roughly one-minute timer. During that time, go through and vote; there's no point asking new questions now, only vote for the questions that are already there. Vote for the questions you most want me to answer, and then I'll go through the list and answer each one in 30 seconds to a minute. They'll be very fast answers; I'll go as deep as I can in about 30 seconds. Timer starts now. ... All right, we're at time. Quick-fire round starts now.

Hey Jon, any tips for measuring my growth as an engineer in the early-to-mid stage of my career? I'd recommend keeping a brag document. Julia Evans has a really good blog post on keeping this kind of document. It's a way to remind yourself of the big things you did.
And when you read back over the brag document, you should pretty easily be able to spot your own career progression just from the size, complexity, and impact of the things you did.

Next: what are your thoughts on Zig? Have you tried it? No, I have not, so I can't really speak to it much. Where does it score better than Rust, where does it lag behind, and where do you see both fitting in the near future? In general, my impression is that Zig works really well if most of your program is unsafe code; then Zig is better. But in most programs, most of the code does not need to be unsafe, and then it's better to have a single language cover both the safe and the unsafe parts. So the places where Zig is the best fit are a minority of use cases.

What is the current state of the Rust job market for juniors and interns? I'm 19, have been working hard on Rust, and have written implementations of hazard pointers and epoch-based memory reclamation, with a lock-free stack and queue on top. I've also been studying async. I don't know whether my interests align with what the industry requires, and I don't want to be a developer whose job is to stitch libraries together and write simple CRUD APIs. Can you guide me in the right direction? So, the Rust job market for juniors and interns is pretty bad right now, it's true. I think part of the reason is that there's so much excitement for the language that there's a big supply of juniors, but what most companies need is seniors who can help build teams that those juniors can then join. What this suggests is that the industry has many new Rust teams, so over time we should start to see more junior postings, but they won't appear until there are senior people to bootstrap and mentor those teams. In terms of advice for you, I don't have great advice here.
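As an aside, the kind of lock-free stack the question mentions can be sketched with std atomics alone. This minimal Treiber stack (all names are illustrative, not from the stream) deliberately leaks popped nodes, which is exactly the gap that hazard pointers and epoch-based reclamation exist to close:

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct Node<T> {
    value: T,
    next: *mut Node<T>,
}

// Minimal Treiber stack: push/pop via compare-and-swap on the head pointer.
pub struct Stack<T> {
    head: AtomicPtr<Node<T>>,
}

impl<T> Stack<T> {
    pub fn new() -> Self {
        Stack { head: AtomicPtr::new(ptr::null_mut()) }
    }

    pub fn push(&self, value: T) {
        let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
        loop {
            let head = self.head.load(Ordering::Relaxed);
            unsafe { (*node).next = head };
            if self
                .head
                .compare_exchange(head, node, Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
        }
    }

    pub fn pop(&self) -> Option<T> {
        loop {
            let head = self.head.load(Ordering::Acquire);
            if head.is_null() {
                return None;
            }
            let next = unsafe { (*head).next };
            if self
                .head
                .compare_exchange(head, next, Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                // It would be unsound to free `head` here: another thread may
                // still be reading it. Real implementations defer the free with
                // hazard pointers or epochs (e.g. crossbeam-epoch); this sketch
                // simply leaks the node.
                return Some(unsafe { ptr::read(&(*head).value) });
            }
        }
    }
}

fn main() {
    let s = Stack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
    assert_eq!(s.pop(), Some(1));
    assert_eq!(s.pop(), None);
}
```

The leak in `pop` is the whole story: safe reclamation requires knowing when no other thread can still hold the raw pointer, which is what the epoch and hazard-pointer schemes track.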
The best I can think of is to work on things you find interesting; just continue doing what you're doing. You might have to take other jobs in the meantime, maybe not in Rust, that let you learn things that will eventually be relevant in Rust as well. Right now your goal shouldn't necessarily be to work in the language you want on things that are interesting. Focus instead on working on things that are interesting, ignore the language for now, and later on you can search for something that's both interesting and Rust; it doesn't have to be both to begin with.

Next: how do you think the Rust community affects its adoption and development? It feels like there's a prevalent stigma of the Rust community being loud and annoying about rewriting things in Rust, but in my opinion it's also one of the most accepting and welcoming programming language communities. I personally wouldn't have the community any other way, but I'm curious what someone more well-versed in the language thinks. So, I think the Rust community has gotten a really bad reputation from people who are not in the community. There's some internal community drama; there always will be. But once you get a large number of people interested in a language from outside the community, they have different opinions about the community than the community has about itself. And there are some people in the Rust community who aren't great at subtlety, or rather, maybe that's the wrong phrasing: maybe it's more that they assume other people are in on the joke, but as more people adopt the language, increasingly those people aren't in on the joke. The community used to have a relatively slow, steady trickle of newcomers who came in through a well-known pipeline where they got to know the community. Now that's no longer the case.
People now meet the language from nothing, and I think the Rust community has a sense of humor that doesn't carry across to everyone adopting the language. They take the humor, things like "rewrite it in Rust," seriously, and as a result they just think the community is annoying and oversimplifies things. I also think people are very excited about the language, and that excitement means they miss some of the nuance when explaining it to others, which tends to rub people with a lot of experience the wrong way, because they know the nuance matters so much. But I don't think the community is broken. I think this is almost a PR problem for the community more than an actual community problem.

Thoughts on Zig? We already did that one.

How was 2025, from both a personal and a work perspective? Let's see. I got engaged, and I bought an apartment, so those are both exciting. Work-wise, I've worked on some really interesting things: radar systems, self-landing planes, distributed network protocols like CRDTs from scratch, distributed databases over unreliable networks. I feel like I'm still in a position that challenges me and makes me grow in a lot of interesting ways, both in soft skills and in technical skills. On a personal level, I've also gotten to play a lot more games this year than previously. One of my goals for 2025 was to spend more time on things that are fun, because I needed some fun in my life: things that weren't necessarily useful, but that let me chill out. I've managed to play both more board games and more video games in the past year, and I've been very happy with that. So overall, I'd say 2025 was a pretty good year.
Have you tried Rust's next-gen trait solver, and do you have any thoughts on whether it's okay to break older versions of crates that used the types incorrectly, mainly Bevy and minijinja, to improve the trait solver for everyone else? I haven't tried the next-gen trait solver; I've read a bunch about it, but not tried it myself. In terms of whether it's okay to break older versions, I actually think this is something where the Rust community has a pretty sensible policy: not all breaking changes are major changes, which means you are allowed to make some breaking changes within the Rust 1.x series. The nuances of when that's okay are subtle, and they involve talking to the community and to the affected parties, but we have made breaking changes in the past where there was known breakage and the breakage was deemed acceptable. This might be another case of the same, given how much the next-gen trait solver is going to buy us. But we should also challenge ourselves to make it not be breaking, and we should talk to the owners of those crates and figure out whether we as a community value the benefits over the cost.

Is there any technology you want to learn in 2026? Ooh, not off the top of my head. Nix would be the closest, but I feel like I learned a bunch of Nix in 2025. Maybe I want a better mental model of Nix in 2026. In a way, it would be nice if I could teach Nix by the end of 2026; currently I'm not in a position to teach it. So maybe that's my answer.

You've mentioned having to learn some machine learning as part of your role. What were some topics you ran into, and what is your opinion of the field? What are some areas in machine learning where you think Rust might have a role?
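The "not all breaking changes are major changes" policy mentioned above has an everyday counterpart in `#[non_exhaustive]`: a library can add variants to such an enum in a minor release, because downstream matches are forced to carry a wildcard arm. A small sketch, with invented names:

```rust
// In a library crate: adding a variant to this enum later is a *minor*
// change, because #[non_exhaustive] forces downstream crates to keep a
// `_` arm in their matches.
#[non_exhaustive]
#[derive(Debug)]
pub enum ParseError {
    Empty,
    TooLong,
}

// Within the defining crate the `_` arm is merely unreachable (a warning),
// but downstream crates are *required* to write it, which is what keeps
// their builds green when the library grows a new variant.
#[allow(unreachable_patterns)]
pub fn describe(e: &ParseError) -> &'static str {
    match e {
        ParseError::Empty => "input was empty",
        ParseError::TooLong => "input was too long",
        _ => "unknown parse error",
    }
}

fn main() {
    assert_eq!(describe(&ParseError::Empty), "input was empty");
}
```

The trait-solver question is about the harder end of the same spectrum: breakage that isn't opted into by an attribute, which is why it needs the case-by-case conversation with affected crate owners.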
So, it used to be that my opinion of the machine learning field was that there's a whole lot of throwing things at the wall and seeing what sticks, and a whole lot of tweaking numbers until you get better output numbers. That position has tempered a little; I think there's a bit more actual science in ML research these days. But only a bit. There's still a lot of "we're just trying stuff and seeing what works," and that's okay; that's not unlike science in general. But I do think there's a hyperfocus on incremental improvements, and we need to get out of that bubble a little; we need things that are not just incremental improvements. In terms of where Rust might have a role, I think Rust has a decent amount of potential on the training-at-scale side of things. Currently a lot of that is Python, and the more we can inject Rust here, not necessarily by rewriting all the Python in Rust, but by finding ways to bridge Rust into that space so that the Python code is not on the critical path, the more interesting it gets. I think there are real opportunities here for Rust, because it could genuinely speed up training quite a bit. I don't mean the critical part of the training like the backprop, because a lot of that doesn't go through Python anyway, but things like data transformation code, for example; I could totally imagine Rust fitting in better there.

How do you deal with big monorepos in Rust with regard to CI duration and general compilation time? I mean, the answer is: don't have big monorepos. I think monorepos are a mistake. Come after me, big co. But no, I actually think that in general monorepos are not really the way to go here.
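To make the "Python off the critical path" idea above concrete, here is a std-only sketch of the kind of data transformation step that might move from Python into Rust in a training pipeline. The function and record shapes are invented for illustration; real pipelines would use something like rayon or a Python binding layer:

```rust
use std::thread;

// Hypothetical preprocessing step for a training pipeline: trim and
// lowercase a batch of text records, split across worker threads.
fn preprocess(records: &mut [String], workers: usize) {
    let chunk = records.len().div_ceil(workers.max(1)).max(1);
    // Scoped threads may borrow from the enclosing stack frame, so no
    // Arc/Mutex is needed to hand out disjoint &mut chunks.
    thread::scope(|s| {
        for part in records.chunks_mut(chunk) {
            s.spawn(move || {
                for r in part.iter_mut() {
                    *r = r.trim().to_lowercase();
                }
            });
        }
    });
}

fn main() {
    let mut batch = vec!["  Hello ".to_string(), "WORLD".to_string()];
    preprocess(&mut batch, 2);
    assert_eq!(batch, vec!["hello", "world"]);
}
```

The point is not this particular transformation but the shape of it: embarrassingly parallel, allocation-heavy work that is slow in interpreted code and trivially fast and safe in Rust.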
You really want to incentivize people to build smaller reusable components. That's not always feasible, but I think that's the path. I don't think we're going to get order-of-magnitude improvements in compilation time; that seems unlikely to me. For CI, you can often modularize your pipeline and find ways to cache dependencies. Cargo has in-progress work on building only dependencies, and on feature unification; some of that will help, but realistically some of this also just requires breaking up your repos.

What missing features would you want to see in Rust? There are a couple of Rust RFCs I think are interesting; let me go through and see which ones I have here. Cargo script I'm really excited for: a way to write single .rs files that you can run the same way you run bash scripts. I think that's really cool. Postfix macros could be really neat; they're a way to reduce some of the syntax pain we sometimes have in Rust today in a fairly elegant way. What else? Unsafe fields could be really nice for unsafe programming; fields are one of the places where unsafe is a little annoying right now. Oh, and, what's it called, crABI: the sort of Rust-like ABI where it's not quite the Rust ABI, but the intent is an ABI that allows FFI calls using a slightly more expressive type system than just the C ABI. The intent being something that allows high-efficiency FFI between, for example, Go, Rust, and Python without having to drop to C. I think that would be really neat. Those are the ones that immediately come to mind on a quick-fire timeline.

Have you looked at Dioxus? What are your thoughts? No.
I've thought about trying to build a GUI for something, but I haven't looked into Dioxus enough to have any meaningful thoughts here.

Reflection is a big topic in C++ right now. Is Rust lacking in that department, and if so, what improvements should be made? I think Rust is lacking in reflection. Compile-time reflection is something I was really excited about; we started to see an effort on it in Rust, and then the whole drama happened, which makes me really sad. That that individual also decided to stop working on it was, I think, a huge loss to the community and the language. I'm hoping it eventually gets picked up again; compile-time reflection would do Rust a lot of good. As for which specific improvements should be made, that's too hard for me to say.

Okay, let's do three more. Zig moving away from GitHub. You know, I've increasingly seen repositories move away from GitHub, whether to Codeberg or GitLab. I just don't like any of the alternatives either. I think GitHub is the least bad, but I've also seen very little in terms of meaningful improvements to my workflow from GitHub. So I'm sad that GitHub hasn't gotten better, but I also find the alternatives really bad whenever I try to use them. So, moving from GitHub: good for them, but I genuinely don't know where I would move. There's also the network effect, though the network effect is weaker on GitHub than on other services. I'm really hoping someone builds a really good alternative that I could just move everything to. I tried SourceHut for a little while, and SourceHut is okay, but I think its move to email-based workflows is probably a mistake; you're not going to get most people to switch, even though I see the attraction.
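As an aside, the cargo script feature mentioned a couple of answers ago makes a single .rs file runnable like a shell script. The feature is still unstable as of this writing, so the exact frontmatter syntax may change; roughly, it looks like this (the `rand` dependency is just an example):

```rust
#!/usr/bin/env cargo

---
[dependencies]
rand = "0.8"
---

// With the unstable feature (`cargo +nightly -Zscript file.rs`), cargo
// reads the manifest from the frontmatter above, then builds and runs
// the file in one step.
fn main() {
    let n: u8 = rand::random();
    println!("rolled {n}");
}
```

The appeal is that one-off tools no longer need a whole Cargo project directory: the manifest travels inside the file.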
What are your favorite computer science books? Ooh. There's a book called The Pragmatic Programmer that I like a lot. The Art of Unix Programming is pretty good. There's one called Seven Languages in Seven Weeks that I thought was a fun exploration of how different languages can be. You know, there's this book called Rust for Rustaceans that I've heard is really good. Oh, and there's The Design and Implementation of the FreeBSD Operating System; it's really a manual for FreeBSD, but it's a really good book that goes through a lot of the details of a Unix-like operating system at a very low level. I'd recommend looking at that; I like it a lot. Yeah, those are the ones that come to mind.

And the last question of the day. I'm not going to take the top one, because it's not really a question, so I'll go with: can you recommend sources or books to learn more practical theory about type systems? Most books are either very specific or very abstract CS theory; I'm looking for some middle ground. Hmm, good sources for that specifically: I don't know that I have a great one. I actually think the place to start would be Learn You a Haskell for Great Good. Even though it's teaching you Haskell rather than Rust, one of the big selling points of Rust is its rich type system, and learning Haskell is in large part learning type systems. You're learning them in Haskell syntax, but the things you're learning are type theory. Because you're learning in the context of a language, it ends up being a slightly more pragmatic approach than very abstract type theory, but it's also not a "for this particular programming problem, use this particular type construction" cookbook.
I would not recommend trying to read research papers on this topic; they tend to be very hard to follow, sadly. I'm trying to think whether there are other good type-system resources. No, I think that's where I'd start: Learn You a Haskell for Great Good.

All right, we're just about at the three-hour mark, so I think it's time. Thank you all for staying with me. For those watching after the fact, I hope you found this interesting. The video will go up as a recording, which, if you're watching the recording, you already know; if you're watching live, you can go back and watch the questions later on. Thank you for all the questions. This was fun. We'll do it again next year, I guess in 2027, though we might do something before then, who knows. Otherwise, I'll see you at the next stream. Thank you all. Bye.

Video description

As has become tradition, it's time for another new year's Q&A! In the span of three hours, we got through 45 questions covering everything from job hunting for juniors to testing critical software. The questions were asked both ahead of time and live (using https://wewerewondering.com/ ), and I've timecoded them all as chapters on this video for easier discovery. This video was sponsored by the Let's Get Rusty job board: https://jobs.letsgetrusty.com/

0:00:00 Introduction
0:01:44 Marriage and kids plans
0:03:26 Have you tried Helix editor?
0:06:10 Interview questions for Rust developers
0:13:19 How did you and your girlfriend meet?
0:15:31 Claude Code usage at Helsing
0:22:29 Thoughts on Mojo
0:26:47 Learning Rust to get a 100k job
0:34:40 Writing a new version of your book
0:41:47 How do Rust developers get girls?
0:45:50 Application-wide error handling patterns
0:54:03 Getting a Rust job as a graduate
0:54:38 Recommended Rust streamers
0:57:42 Advice for early career
0:57:55 Interesting companies to work at
1:14:11 Will AI widen or narrow the expert-novice gap?
1:25:27 One billion row challenge: is Java really the winner?
1:28:16 Are you WASM yet?
1:28:36 Crust of Rust on self-borrowing and Ouroboros
1:30:41 NixOS and its effect on your workflow
1:35:02 Interview approach for assessing candidates with LLMs
1:39:49 Breaking changes worth making in Rust
1:47:47 Improving as an intermediate Rust developer
2:02:54 Testing critical software (flight systems)
2:09:33 Generalist vs specialist
2:13:05 Why care about high salary in Norway?
2:16:49 Prep for Impl Rust videos
2:18:58 Job security with the rise of AI
2:27:20 Rust 4 Linux
2:29:45 Quick-fire answers

In the quick-fire round, we covered Rust for Linux, measuring growth as an engineer, thoughts on Zig, the Rust job market for juniors, the Rust community, how 2025 was, the next-gen trait solver, tech to learn in 2026, ML and Rust, monorepos and CI, missing Rust features, Dioxus, reflection in Rust, Zig leaving GitHub, and favorite CS books.

Live version with chat: https://youtube.com/live/g1ZgInFTfEo
