Analysis Summary
Performed Authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- This video provides a sophisticated look at how behavioral data (version control history) can be combined with static analysis to prioritize refactoring efforts.
Be Aware
Cautionary elements
- The guest uses 'revelation framing' to suggest that standard industry metrics are useless, creating a vacuum that his proprietary product is conveniently designed to fill.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
[music] So, welcome back to another episode of the Fine Pro podcast with me, Christopher. And today I have with me Adam Tornhill. Welcome to the podcast, Adam. >> Thanks a lot, Christopher. Really good to be here. >> Yes, I'm really happy to have you. It feels like this show has a bit of a static typing bias, but today we're going to talk a lot about Clojure, so hopefully we can make it a bit more, you know, fair and balanced. So it's not only statically typed functional programming. But first, can you tell the audience a little bit about who you are, your background, and what you're doing now? >> Yeah, sure. I'd be happy to. So, I'm Adam. I'm based in Sweden. I'm a programmer. I've been a software developer for many, many years, working for almost three decades in the industry. I'm an electrical engineer by training, but I have my second degree in psychology, and that combination has influenced a lot of the work I've been doing over the past decade. >> Oh, I didn't know that. The psychology thing, that was news to me. >> Yeah, that's where all the strange code stuff came from. [laughter] >> Yeah, we'll have to talk about that later. But okay, so you were trained as an electrical engineer, but at some point you switched into computer science? >> Yeah, I was always more interested in... you know, I grew up with the Commodore 64 in the 1980s. So that's where I learned to program. I was always super interested in programming, but somehow never understood as a kid that I could actually get a job in that discipline. So I studied electrical engineering, but I've never worked as an electrical engineer. I went directly to software and never looked back. >> Okay. And now, I guess, most of your time is taken up by your company, right? >> Yes. >> CodeScene. >> CodeScene. Yes. >> Yeah.
Can you tell me a bit more about, okay, what is CodeScene? >> So, CodeScene is a software analysis tool, and it's a very different tool compared to the regular linters and static analysis tools that you have, because what CodeScene does is completely automated. It generates a complete visualization of your whole codebase and then points out where the good parts are and where the bad parts are, and we also prioritize based on impact. So we're not just looking at the source code. We're also looking at how you and your organization interact with the code. Where is the development work actually happening? And that means we can prioritize the technical debt that impacts you the most, which is always complicated code that you also have to work with often. So it's really the intersection of code and people that's CodeScene. >> So, and what is technical debt? >> Yeah, that should be a simple question, but it's not really, because technical debt in the original sense... I mean, we have drifted far, far from that original definition. So the definition I tend to use is that technical debt is code that's more complicated than it should be. >> And is that what CodeScene detects as well? How does CodeScene reason about technical debt? >> Yeah, so CodeScene reasons about technical debt on two different dimensions. The first is the quality aspect. We need to be able to tell if a piece of code is objectively good or objectively bad, right? And everything in between, of course. For that we use a metric that we call code health. It's a CodeScene innovation that I've been working on for the past eight years or something like that. It analyzes the code and looks for certain patterns that make the code hard to understand for a human. It could be, for example, low cohesion: you have stuffed too many responsibilities into a module, right?
So now it's hard to reason about, because there are so many business rules in there. It could also be patterns in how you implement the code, for example deeply nested logic and all that nasty stuff, copy-pasted code. There are 25 factors like that. We weight them together, and then we can categorize a piece of code as being in either the healthy space or the unhealthy space. But the second thing we do, the second dimension, is relevance, because you can have bad code that's been stable in production for five years that you never need to touch. It just works. So it's good to know that it's there, right? That there is a potential problem, but it's probably not going to impact you right now; it's not holding you back. >> So that's why we need this second dimension that we call hotspots, developer hotspots. What we do here is look at version control data and see where the development work is actually happening and how it's shifting over time. And when you see an intersection of a highly complicated piece of code that's being worked on frequently, then we know, based on our research, that this is very likely to be super wasteful technical debt, and we can flag it as a prioritized improvement. >> Okay. So you can see that this code is both problematic and people are actually working with it continuously. So that makes it the hotspot. And the code health thing, does it create a call graph of how the different modules interact, or how does the static analysis part work? >> Well, let me share a little bit about the background. So originally, when I founded CodeScene, I was mostly interested in the behavioral aspects of software development. Like, how do people collaborate? Where are we working over time?
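The hotspot idea described here, the intersection of complexity and change frequency, can be sketched in a few lines. This is a minimal illustration, not CodeScene's actual algorithm: the function names, thresholds, and the use of a plain change count as "churn" are all assumptions. In practice the commit file lists would come from something like `git log --name-only`.

```python
# Toy hotspot detection: flag files that are both complex AND changed often.
from collections import Counter

def change_frequencies(commit_file_lists):
    """Count how often each file appears across commits (a simple churn proxy)."""
    counts = Counter()
    for files in commit_file_lists:
        counts.update(files)
    return counts

def hotspots(churn, complexity, churn_threshold, complexity_threshold):
    """Return files exceeding both thresholds: worked on frequently and complex."""
    return sorted(
        f for f in churn
        if churn[f] >= churn_threshold
        and complexity.get(f, 0) >= complexity_threshold
    )

# Hypothetical data: three commits touch core/engine.py; complexity is
# approximated here by lines of code.
commits = [
    ["core/engine.py", "util/log.py"],
    ["core/engine.py"],
    ["core/engine.py", "docs/readme.md"],
    ["util/log.py"],
]
complexity = {"core/engine.py": 1200, "util/log.py": 80, "docs/readme.md": 10}

churn = change_frequencies(commits)
print(hotspots(churn, complexity, churn_threshold=3, complexity_threshold=500))
# → ['core/engine.py']
```

Note that `util/log.py` is changed twice but is simple, and `docs/readme.md` is rarely touched, so neither is flagged: only the complex, frequently modified file surfaces as a hotspot.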
But I quickly figured out that I actually needed a quality dimension too, because you can do a lot of work in parts of the code that are healthy, that are good, and when that happens you're in a very good place, right? That's where you want to be. So I was looking to use existing metrics, because there are a lot of code complexity metrics in the industry, but when I started to look at the research behind them, I found that none of them really works, in the sense that [laughter] they don't have much predictive value, right? >> Okay. Okay. >> So if you look at something like the most popular metric, cyclomatic complexity: it's been around since the 1970s, and if you look at the research on it, you will see that the moment you start to control for the number of lines of code, this more elaborate metric of cyclomatic complexity doesn't add any further predictive value. So I thought there must be a way to go beyond lines of code and provide something better and more insightful that's actually actionable. >> What I thought was that it's really, really hard to agree on what good code is. I mean, if you ask 10 people, you get at least 11 responses. >> Yeah. Yeah. >> But it's pretty simple to agree on what bad code is. We can probably agree on a number of things that make code bad. So that's what we do when we calculate code health. We pick up a piece of code. It works at the file level, so we look at individual classes and modules. Then we stick these 25 probes that we have into each piece of code, pull them out, and see what they detected, and how frequent and how deep the problems go. Then we have algorithms that weight the findings and classify the code as good, bad, or in between, in essence.
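The probes-and-weights scheme described above can be illustrated with a toy scorer. To be clear, this is a made-up sketch: the real code health metric uses around 25 factors, and these three probes, their weights, and the classification thresholds are all invented for illustration.

```python
# Toy "code health": run a few probes over a source string, weight the
# findings, and classify the file as healthy, in between, or unhealthy.

def max_nesting(lines, indent=4):
    """Deepest indentation level, as a crude proxy for nested logic."""
    return max((len(l) - len(l.lstrip())) // indent for l in lines if l.strip())

def probe_findings(source):
    lines = source.splitlines()
    findings = {}                     # probe name -> illustrative weight
    if max_nesting(lines) >= 4:
        findings["deep_nesting"] = 2.0
    if len(lines) > 500:
        findings["large_module"] = 1.5
    if any(len(l) > 120 for l in lines):
        findings["long_lines"] = 0.5
    return findings

def code_health(source):
    penalty = sum(probe_findings(source).values())
    if penalty == 0:
        return "healthy"
    return "unhealthy" if penalty >= 2.0 else "in between"

healthy_src = "def add(a, b):\n    return a + b\n"
nested_src = "\n".join(
    "    " * depth + ("if flag:" if depth < 5 else "do_work()")
    for depth in range(6)
)
print(code_health(healthy_src))  # → healthy
print(code_health(nested_src))   # → unhealthy
```

The point about cyclomatic complexity carries over here: a useful probe has to add signal beyond raw size, which is why the probes target patterns (nesting, cohesion, duplication) rather than just counting lines.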
Yeah, that's interesting. And this comes back as a kind of report that you can visualize as well, right? >> Yeah, so CodeScene gives you a dashboard where you can see all your findings, you can see aggregated KPIs, you can get reports. But CodeScene is also quite heavy on visualizations, because if we can visualize things, the human brain is really, really good at detecting patterns in visualizations. So we include visualizations of the complete codebase. Think about it as a map over your code. We see every single piece of code, and it's indicated via colors, like red, orange, green, how healthy it is. And then we do the same thing with the hotspots. So you can also see: okay, yeah, I have some unhealthy code, but is this code actually impacting us? Are we working there? So the visualizations, I would say, are a big part of CodeScene. >> Yeah. Because I remember you had a presentation, and you have written a book as well with the same title, Your Code as a Crime Scene. >> Yeah. >> Can you tell me a bit more about the book? >> Yeah, sure. So the book actually predates CodeScene. The book grew out of my experience. Before CodeScene and before the book, I used to be a software consultant. I worked for lots of different companies, large product companies, and very often I took on some kind of technical leadership role, like a software architect or tech lead, something like that. And in that role, what I found was that I very often ended up in the crossfire between the engineering department and the management, right? Management pushing for more features faster, >> and the engineers trying to push back and say, "Hey, we cannot do this. This code is a mess."
And I found it really, really hard to be that communication bridge, because, I mean, let's be honest about this: source code is so complicated that even we as developers struggle with understanding it, right? >> Mhm. >> And how can we expect someone who's maybe non-technical, who has never written code in their life, to understand what we as developers mean when we say that code is a mess, or it's high on cyclomatic complexity, or whatever? So we needed some kind of shared understanding. That's where I started to look at ways to visualize code and ways to measure and classify code. So I wrote a bunch of scripts and I had my own tooling, and I only intended them to be used for my own consulting. I used them for the companies I worked for, and I quickly found that, using a hotspot visualization, I could have conversations with managers and explain why we needed to take a step back and refactor some code before we could push ahead with this feature in that area. But I also quickly found that it helped me a lot with the engineering team, because now suddenly we as engineers could agree on the priorities: that maybe the thing we thought was urgent isn't really urgent, maybe our hotspots are elsewhere. So we could get this [clears throat] objective, shared situational awareness, and that, I think, is useful. So after using that for a couple of years, I wanted to share my findings with the community. That's when I started to write Your Code as a Crime Scene, which came out in 2014, if I remember correctly. >> Oh, that's more than 10 years ago. >> Time flies. >> But I think you touched upon something very interesting there. Like you said, you found yourself as a kind of emissary between the developers and management. I mean, in my experience, a lot of the issues with technical debt...
...are not really about the debt itself, or the actual technical issues. There are usually a lot of soft issues involved as well, like issues of prioritization, for example, management wanting to push out new features. Maybe they're not seeing the issues in the codebase that are slowing down the velocity. And I guess on the development side there's the opposite problem. I mean, you and I, we're both developers, we know that we can get a bit myopic sometimes. If you see a piece of code, you go: ah, this is really ugly, I really want to go in and fix that. So maybe management is not sensitive enough to technical debt, and maybe developers are too sensitive. What's your take on the psychology, those kinds of softer issues around technical debt? >> Technical debt is very much, I would say, a people problem, and you mentioned it: a lot of it has to do with priorities. The reason it's so hard to sell technical debt remediation is that it's really, really hard for me as a developer to come to a manager, to my product people, and say: hey, you know all these features that you promised our super important customers? You have to put them on ice while we spend three months rewriting some code, and you won't see any observable difference compared to what the product looked like before. It's a really hard sell, and that's because a new feature has an immediate value. For example, a new feature could open a new market; it could land you a new customer. Whereas technical debt remediation lacks an immediate and obvious value.
So that's something we have been working on a lot: connecting technical debt to a business impact. Only by doing that do I think we can close this communication gap between engineering and management and product. >> Hm. Because, I mean, this is a bit of a hobby horse of mine. I read a lot about lean manufacturing a few years ago, back when Lean Startup was kind of a hot thing. I went back and read a lot of those original Toyota Production System books, and something I found very interesting is that it feels like the manufacturing industry has come around to this idea that, you know, slow is smooth and smooth is fast; in order to go fast, maybe you have to slow down sometimes. It feels like we haven't really adopted that mindset in software for some reason. Do you have any idea why that is? >> Yeah, I do. I do. I think there's a big difference, and the big difference is that if you look at lean manufacturing, which pretty much grew out of the Toyota Production System, right: if you produce a car and the car leaves the production line with only three tires, it's obvious to everyone that it's flawed, that it has a defect. >> Software lacks that visibility. It's ridiculously hard for us as developers to understand large and complicated systems. Modern systems are millions of lines of code (shorter if you do Clojure, of course, but still, there is a lot). And to non-technical people, it's really just a black box. So that's our mission at CodeScene: to bring this visibility to code, to make these flaws as obvious as in the physical world. >> Yeah.
Because that was going to be my next question. Let's say I put myself in the shoes of a developer, and I want to sell my manager on the idea of either CodeScene or taking technical debt seriously. What should I do? >> So, what I would recommend, and I'm obviously a bit biased here in this conversation, is that you check out our white paper called Code Red: The Business Impact of Code Quality. What we do there is summarize the research we have been doing, and what we found is that if you look at the code health scale, you can basically (I'm oversimplifying a bit) split code into healthy and unhealthy, and what we have done is establish a link to outcomes. So we know that if you have unhealthy code, then you will be 10x slower with any feature changes, any new implementation, compared to a potential competitor with healthy code. We have also established the link to correctness, to production defects, and have shown that if you have unhealthy code, you have 15 times as many defects. >> So what's so interesting with this is that it allows you to make the business case for refactoring. So if you find a hotspot, so you know that it's relevant, then you can actually bring up this code health and hotspot visualization map, show that to a manager, and tell them: you see this piece of code? We have been doing 50 commits over the past year; we have changed that code 50 times. Each time we do that, we're 10 times as slow as we could be, and we have increased the defect risk massively. If you give us two weeks, we're going to fix this, make this code healthy, which means that all these five features that you have lined up for that area of the code will be implemented 10 times as fast as you're used to. >> And that, I think, is quite powerful.
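The business case above is ultimately a back-of-the-envelope calculation, and it can help to make it explicit. The 10x slowdown factor is the figure quoted from the Code Red white paper; the commit count, baseline effort per change, and refactoring cost below are hypothetical inputs chosen to match the example in the conversation.

```python
# Back-of-the-envelope payback for refactoring a hotspot.
# Assumptions (hypothetical): 50 commits/year to the hotspot, a healthy-code
# baseline of half a day per change, a 10x slowdown in unhealthy code (the
# white paper's figure), and a two-week (10 working day) refactoring cost.

def refactoring_payback(commits_per_year, baseline_days_per_change,
                        slowdown_factor=10, refactor_cost_days=10):
    unhealthy_cost = commits_per_year * baseline_days_per_change * slowdown_factor
    healthy_cost = commits_per_year * baseline_days_per_change
    yearly_saving = unhealthy_cost - healthy_cost
    saving_per_change = yearly_saving / commits_per_year
    changes_to_break_even = refactor_cost_days / saving_per_change
    return yearly_saving, changes_to_break_even

saving, break_even = refactoring_payback(50, 0.5)
print(saving)      # → 225.0 developer-days saved per year
print(break_even)  # ≈ 2.2 changes until the refactoring has paid for itself
```

Under these assumptions the two-week investment pays for itself after roughly the third change to that file, which is the kind of framing that makes the conversation with a manager concrete.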
>> And I guess even if, for the manager, the code is a black box, like you said, if the developers say, okay, we can reduce the technical debt, then at least the manager can see in CodeScene that something did happen after two weeks. It wasn't just nothing happening that whole time. >> No, that's correct. And transparency is really, really important here, because, I mean, as a startup founder I know how important speed to market is, right? I know how important it can be to have the right feature set at the right time. So what we want to do in CodeScene is not only show the code side, the leading indicators of performance; we also want to show the outcomes, >> the lagging metrics. So in CodeScene you have a second module that you can connect which shows you the actual delivery outcomes. There you can see trends in how long it takes you to implement a typical task, how many defects you have, what your lead times are. And that means that if you act upon this data, a manager can actually log in and see that yes, indeed, we became faster, or we reduced the number of defects that we ship to production, which means happier customers. So it's possible to have these conversations now. >> Yeah, that's really cool, because I think that's one of the things that are lacking. Unfortunately, I think the state of the art is that a lot of organizations don't even know what kind of outgoing flow of features they have. So even if it improved, they wouldn't necessarily notice, other than maybe a general feeling of increased well-being, less stress, or something like that. But yeah, that sounds really cool. And speaking of papers, before we recorded, you mentioned that you had just put out a new paper related to AI. >> Yeah, that's correct.
It's a brand new paper. It's a research paper, but we have written it up as a white paper too that's a bit more accessible. What we wanted to look at is what we call AI-friendly code. The thing with AI is that, well, we all know it's severely overhyped, and that means there's a lot of pressure on organizations to implement and adopt AI coding. I keep hearing from enterprises that fail brutally with it, but I also see enterprises that succeed and indeed increase their productivity with AI. And I always thought that technical debt management and code quality maturity were going to be part of that equation. So what we did in this paper was look at the break rate, that is, the risk that AI introduces a problem, depending on code health. And what we found is that, I mean, AI always hallucinates, we know that, right? But they do a pretty good job in healthy code. >> The moment you leave the healthy space, AI error rates increase by 30%. >> Mhm. >> So that to me is really, really interesting, because it means we have this metric, we have this visualization, that can help you adopt AI agents safely. So if you pick up a code health visualization, it's no longer just about waste. It's actually an important part of AI readiness. If you look at that map and you see that, okay, here's a green part of the codebase, that means that's a part where you can safely adopt AI agents and benefit from the speed with quality. And then you might have other parts of the system that are unhealthy, and that's a part of the code that an AI agent will fail on, and it will fail brutally. It would just be wasteful to attempt to use AI there. Rather, you have to remediate the technical debt first before you can adopt AI. >> Did I manage to explain that? >> Yeah.
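The "AI readiness" idea sketched above, using a code health rating to decide where AI agents can safely operate, amounts to a simple gating policy. This is a hypothetical sketch of such a policy, not a CodeScene feature or API; the rating labels and function are invented for illustration.

```python
# Hypothetical gate: allow AI-assisted edits only in code rated healthy;
# route everything else to human-led remediation first.

def ai_edit_policy(health_by_file):
    """Partition files into those safe for AI-assisted edits and those
    needing technical-debt remediation before an agent touches them."""
    allow, remediate_first = [], []
    for path, rating in sorted(health_by_file.items()):
        (allow if rating == "healthy" else remediate_first).append(path)
    return allow, remediate_first

# Illustrative ratings, as might come from a health analysis.
ratings = {"billing.py": "unhealthy", "api.py": "healthy", "report.py": "in between"}
allow, hold = ai_edit_policy(ratings)
print(allow)  # → ['api.py']
print(hold)   # → ['billing.py', 'report.py']
```

The design choice mirrors the argument in the conversation: rather than banning or blanket-adopting AI coding, the health signal decides per area of the codebase where the agent's error rate is acceptable.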
[laughter] No, I think this is really interesting, especially the fact that this feels like another area where the LLMs are very similar to humans, in that they do their best work when the preconditions are the same as for humans doing good work: when the specs are clear, when there are tests that can guide the implementation, etc. But now it sounds like they also thrive with healthy code. >> Yeah, it's very interesting, because code health, as I mentioned earlier, is very much designed to identify code that's hard to understand for a human, but it turns out that code is hard to understand for an LLM as well. So machines do get confused by the same patterns that confuse humans, which is interesting. Perhaps not that unexpected, but now we have the data to prove it. >> Mhm. That's very cool. I'll make sure to link to the paper in the show notes. It makes me think of, I believe it was an Isaac Asimov novel or something, where they don't have computer programmers; they have robot psychologists that [laughter] do psychoanalysis on the machines, because they're too complex to program with explicit instructions. So instead you have a sort of machine therapy. And, yeah, it does feel sometimes like people are acting as a psychologist, or a sort of success coach, to their Claude Code: they go, you are an expert developer, you make no mistakes. >> Yeah. Yeah.
That's partially wishful thinking, but there's also a lot of important truth to it, because I think the key to succeeding with AI-assisted coding is to double down on all the good engineering principles that we already know about. With AI, everything I've seen, all the research I've seen, says that we need to do more of everything rather than less of it, right? >> So, yeah. >> Yeah, test-driven development is absolutely essential with AI, I would say. We need to turn the engineering dials to 11 >> to use it safely and effectively. >> So you're thinking almost that instead of getting more Wild West, it's going to be more and more like an operating theater, where you have these checklists and a very strong engineering process, maybe. >> Yeah, I mean, it's a bit of a sidetrack, and no one really knows where we will end up. But if I can speculate a bit, I think the future of the developer role will be more like a team lead, more like a tech lead, right? >> Where the AI and maybe a few other humans make up our team, right? Our collaborators. To me, working with AI agents is very close to a tech lead role, >> right? Where I need to, you know, give very precise instructions in some cases; in other cases it can be more relaxed, because I know that my team can pull it off. And I also need to be responsible for the outcome, right? And maybe I spend more time reviewing stuff than writing things in the first place. >> Mhm. >> It's going to be really, really challenging, but I do think we have the components we need to pull it off. But we need to take things like technical debt and strong software engineering principles really, really seriously.
I think that's going to be the difference, because I actually think that AI will create a divide there, where some companies will truly thrive and others will fail brutally, because AI will be an amplifier, amplifying both the good and the bad. >> So you're thinking of it sort of like, I don't know, maybe like the transition to using a lot of open source technologies or something like that? >> Yeah, maybe. I mean, open source is interesting. I always used to think that that was the biggest productivity gain ever, right? Because you look at the typical enterprise codebase today and 80% of it is based on open source, right? >> So that's code that we no longer have to write. And I think that, yes, maybe the role of a software developer could be close to the maintainer on an open source project. >> Yeah. Yeah. I mean, it definitely has that same feel as this thing where, all of a sudden, people started pulling down dependencies instead of writing code by hand. I guess the AI workflow is quite similar to that in a sense. >> Yeah. I think the key difference is that with open source, we tend to use it a lot for the infrastructure and the foundation, the frameworks for the application, but the core business logic is something that we always used to write ourselves. Now, with AI, even that is challenged, because the AI will be writing our business logic whether we like it or not. That's the way it seems to go. >> And speaking of being a therapist for LLMs: did your psychology degree help you become a better software engineer, or help in the development of CodeScene? >> Yeah, I definitely think it did. I mean, the reason I went into psychology was because I was interested in figuring out why it's so hard to write good code. >> Oh, really? Was that the original motivation?
>> Yeah, it was many years ago. You know, I worked for some large product companies early on in my career, and I tended to see one software disaster after another. I was young, I was naive, and I was a bit surprised, because there were so many highly educated and skilled people, and yet software seemed to be too hard for us. So that's why I decided to pick up psychology. Originally I just wanted to do an introductory semester, just to get an understanding of it, but, you know, psychology is so fun, so I ended up spending six years on it. >> Oh, really? Wow. >> Most of them more or less by accident. >> So you really went deep into the jungle. >> Yeah, I have that tendency. Yes. >> But okay. If we go into a bit more detail: what were some of the questions that you went in hoping to answer? Did you have any specific ones? >> So I think it falls into two parts. I mean, psychology is a pretty broad discipline, right? But I ended up being mostly interested in two parts. Cognitive psychology, which is a lot about how we think, reason, and solve problems, >> and that proved super relevant to a software developer. >> The things I learned there are pretty much the knowledge I brought into the code health metric. The code health metric was very much designed with the limitations of the human brain in mind. >> The other part that I became interested in was the social psychology stuff: how we collaborate and how we work together, which is something that I think is also partially reflected in the work I've been doing. So if you pick up Your Code as a Crime Scene, you will see that there are a lot of analyses that aren't about the code as such, but are more about the people and the organization, like, do you have things like knowledge islands: large parts of the application, absolutely critical, that only one developer knows about?
>> Mhm. >> So organizational risks like that are something that I have also been interested in. >> And did the psychology degree help you answer all of those questions, or are there still ones that are unresolved? >> Yeah, I think I came far, but there are still things I'd like to understand. And the thing that fascinated me back then (it's a long time ago now), when I studied psychology, we also had a course where we learned about human consciousness and conscious thought, and this very subjective feeling of "me". Where does it come from? And it turns out (I'm going to summarize a lot of reading material for you now) that no one really knows what consciousness is. No one has any idea whatsoever how it works. And that fascinates me, because I keep hearing from Silicon Valley about how close AGI is, right? >> Oh yeah. >> We have no idea how consciousness happens, and the chance that we can build it in silicon... I think it's not really going to happen in our lifetime. But that's my guess. >> Yeah, I think everything about this is really fascinating, because it feels like there are so many different fields intersecting. Like psychology, and I always figured programming is also kind of like practical philosophy, in a way, in that it lets you express completely unambiguous statements about something. And mathematics too, I don't know. And, like you said, different aspects of psychology as well: cognitive psychology, how we interact as a group, etc. Lots of interesting stuff there, I think. >> Yeah. And I think it's also... I mean, this is a podcast about functional programming too, right?
So I think my psychology background also explains my love for functional programming, because one of the key ideas behind human problem solving is that we start with an imperfect understanding of the problem. Then we try to express it as a solution, which in our case means code, and then we observe outcomes and learn from that, and then we refine our understanding of the problem and reflect that in the next iteration of the solution. So human problem solving is inherently iterative, and that's very much what attracted me to FP: that with the REPL I get exactly this workflow, right, where I can iterate, refine, learn, experiment. So I think FP is really one of the keys to getting the most out of our, let's be fair, limited brain power. >> Okay. So you're saying that with the REPL you can get this workflow where you have a hypothesis first, and then you do an experiment, and then you can either reject or validate the hypothesis. >> That's exactly it. And before I even started with FP languages, I was quite an early adopter of test-driven development. I started doing that almost 25 years ago. I stumbled upon it in a newsgroup and got fascinated, and what I liked about it, even though I couldn't really articulate it back then, was very much this interactivity that you can partially mimic with test-driven development, right, even in, say, a compiled object-oriented language. >> But the feedback loops are longer, and you cannot just go in and start to run random code, right? You have to do a lot of work to get that interactivity. >> But I do think the feeling I get with the REPL is like test-driven development on steroids. >> So yeah, you can close the loop very, very quickly. >> Yeah. >> But speaking of which, how did you get into Lisp? >> Oh, yeah, that was also many, many years ago.
>> I read an essay by Paul Graham called Beating the Averages. >> Oh, that's a classic. I think that was the one that got me too. [laughter] >> Yeah, I think it's responsible for a lot of Lisp programmers. Back then I found it fascinating, but I also kind of doubted it, because what Graham basically says is that using Lisp allowed them to out-code all their competitors, right? >> So I thought this sounds too good to be true, but I got interested enough to start learning Common Lisp. That was, I think, a little bit more than 20 years ago, and it took me a while, because at that time I already considered myself a fairly experienced programmer, but I found that Lisp is so different that I had to kind of rethink everything. So it was very challenging, but once I got it, I never looked back. I really, really liked it. And when Clojure came out a couple of years later, I thought this is a perfect fit, because Clojure being on the JVM solved a lot of the problems I had with Common Lisp. Common Lisp was the language I really liked, but the community is so small that you often find yourself struggling to find the right libraries or whatever it is you need, and Clojure, being on the JVM, suddenly had access to the vast amount of Java software. >> So to me, Clojure was really a godsend. >> So what languages had you tried before? What were you working in before you found Common Lisp? >> I started out doing a lot of assembly code, C code, low-level stuff, and did a lot of C++.
C++ is probably the language I've used the most after Clojure, but I also did a lot of consulting gigs using Java and .NET and some Python. And when it comes to FP, I learned Lisp first, then Clojure, and then I was fortunate enough to do a project in Erlang, which I really love too. >> So that's my background. >> Yeah, so what was the biggest difference in your mind when you came from C++ to Common Lisp? Do you remember what some of the most important differences were? >> I mean, there were a couple of habits that I quickly had to unlearn that really held me back early on. The first one was the whole concept of immutability. I really, really struggled with that, because if you come from a background in C and C++, you're used to mutating stuff. That's how you make changes; that's the whole essence of object orientation as well, right? >> That was the first challenge. The other one was a bit simpler for me, but I still struggled a bit with higher-order functions, because, and I think it's unfortunate, it's a learned difference that functions are somehow different from other types of data, right? So those are a couple of habits that I had to unlearn.
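The immutability habit described above shows up in the very first lines of Clojure you write. A minimal illustrative sketch, with made-up data:

```clojure
;; "Changing" a Clojure map returns a new map; the original is untouched.
;; This is the core habit shift coming from C/C++-style mutation.
(def config {:language "Clojure" :jvm? true})
(def updated (assoc config :version "1.12"))

config   ;; => {:language "Clojure", :jvm? true}
updated  ;; => {:language "Clojure", :jvm? true, :version "1.12"}
```

Because `config` can never change underneath you, sharing it between threads or functions requires no defensive copying.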
Yeah, there are some subtleties with them as well, like how closures work and how they capture the environment, and in Common Lisp there are different namespaces for functions and values. Or did I get that wrong? There's a difference between Lisp-1 and Lisp-2 lisps, right? >> Yeah, there are differences. If you look at Scheme and Common Lisp, they behave differently with respect to closures. But I think higher-order functions were a bit easier, because I could use a mental model from object orientation, right? It's a little bit like packaging an object that has some data and just a single function, so it's very close to some design patterns I was familiar with, like, say, the command pattern. >> But still, learning to truly design with functions took a long time, and 20 years later I'm still not sure that I fully get it. But it's better. >> Yeah. I mean, it's very powerful. There's this classic, I don't know if you've seen it, I think it was Peter Norvig who had a bunch of slides where he takes pretty much all of the Gang of Four patterns and says, oh, this is just higher-order functions, and this one is just higher-order functions too. >> Yeah, that's a beautiful presentation. I really love it. I remember in my C++ days I spent a lot of time trying to understand the visitor pattern, where you can dispatch on multiple dynamic types. >> And it's extremely painful to implement in an object-oriented language, and it causes a very fragile design, hard to change, hard to extend. And then you learn about multimethods in Common Lisp or Clojure, and you realize that yeah, this is the way to do it. This is how we model that specific problem.
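As a rough sketch of the point about multimethods: they dispatch on an arbitrary function of all the arguments, which gives you the double dispatch that the visitor pattern simulates in OO languages. The shapes and keywords here are invented for illustration:

```clojure
;; Dispatch on the "types" of *both* arguments at once -- the double
;; dispatch that the visitor pattern painfully emulates in OO code.
(defmulti intersect (fn [a b] [(:shape a) (:shape b)]))

(defmethod intersect [:circle :rectangle] [a b] :circle-meets-rectangle)
(defmethod intersect [:rectangle :circle] [a b] :rectangle-meets-circle)
(defmethod intersect :default             [a b] :not-supported)

(intersect {:shape :circle} {:shape :rectangle})
;; => :circle-meets-rectangle
```

Adding a new shape combination is one `defmethod`, with no changes to existing classes or visitor interfaces.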
Yeah, [laughter] it's much, much simpler. You don't have to contort yourself too much to solve that. But when you started building CodeScene, was it obvious that you were going to build it in Clojure, or was there still some hesitation, some other option? Tell me more about that. >> Yeah, so Clojure to me was a given. I knew that I wanted to do Clojure, and I wanted to do it for, I think, three different reasons. The first is that when I started coding it, I was the only programmer, right? So I knew that I had to really, really optimize for productivity, and I knew from my hobby projects that I can be much more productive in Clojure. So that was number one. >> The second reason was that Clojure is really fun, and I think that fun is a much underestimated driver in software. Fun guarantees that things get done, and I knew that doing a startup would be like 70-80 hours a week. >> And it's been that way for many years. So I wanted to make sure that those hours really count, that I do something I truly enjoy. And the third one was that I had this hunch back then, which I have since confirmed is correct, that Clojure developers tend to be very, very skilled engineers. Quite often they're a self-selected bunch. It's people that usually started out doing something else and began to explore Clojure in their spare time. >> Which to me is a very good sign that someone cares deeply about what they do and cares deeply about programming. So I think the Clojure community is also excellent. Those are the three main pillars. >> That's very interesting. Yeah.
Oh sorry, you were saying? >> Yeah, now that said, I also had some initial thoughts around, you know, I'd been doing quite a lot of Erlang in the years leading up to CodeScene, and I always thought that Clojure is great at a lot of things, but maybe I could use Erlang as the platform, right, and then spin up Clojure processes. >> I even have a prototype for it, but I quickly realized that it might be hard enough to use one niche technology, right? >> Yeah. >> The intersection of people who know both Clojure and Erlang is not that big. >> No, no. But have you seen Robert Virding's Lisp Flavored Erlang? >> Yeah, yeah. Many years ago. Yes. >> Maybe that could have been an option. But I see where you're coming from. It's a niche language already; no need to double down. But did you ever have any doubts about whether this was going to work? Were there any drawbacks with Clojure that worried you initially? >> Yeah, what I was worried about initially was performance. I wasn't sure, because CodeScene is processing a lot of data. Think about a large enterprise codebase, maybe 20, 30, 40 million lines of code. You have to scan all that data, and you have to scan all the Git history, and you have to scan all their Jira tickets or whatnot, right? It's a lot of data processing. >> And I was worried about performance. I thought that Clojure, being dynamically typed and all of that, was not going to be fast enough. >> But I kind of always thought that I could use Java as the assembly language of the JVM if needed. >> Because you can fall back to that. >> Yeah, I could, and I could even write parts of it in, I don't know, C++ or whatever. But it turns out that that didn't really happen.
It has never been a problem, because what I think Clojure does so well is that it allows you to truly think about the data, and it allows you to truly think about the algorithm, and the most impactful optimizations are always at the algorithmic level. >> Mhm. >> So 10 years later, performance never really became an issue. I'm really, really happy that I chose Clojure. >> That's interesting. I also find it interesting that you mentioned the hiring pool as a plus, a positive of picking Clojure, because a lot of the time, when it comes to niche technologies, people are hesitant: maybe I won't be able to find developers, maybe I can't get enough help, maybe the community is too small, etc. But that doesn't seem like it was your experience. >> No, not at all. I never had problems finding really skilled people who want to work with Clojure. But I heard similar concerns. Very early on, when the company was just starting to land the first customers, I spoke to a couple of investors, and the ones that asked me what languages and technologies we'd used had, of course, never heard of Clojure before. So I tried to motivate it with the productivity benefits and whatnot, and they said, hey, you should rewrite everything in Java, because if you do, I can hire you 10 developers tomorrow. And I always thought, I don't want 10 developers. I want maybe two or three who know what they're doing, right? >> Because there's so much coordination overhead with a larger team, and that, I think, is the big advantage of these languages: they allow you to get more done with a smaller group of people, which also limits the organizational overhead, and that I think is valuable. >> Yeah, I mean, that's a really good point.
I mean, there's the classic example of WhatsApp, and I think it's a billion-dollar company, and there were still only something like 20 or 30 engineers building the product in Erlang. So it does allow you to stay productive, I guess. And you mentioned Erlang as well, but let's say you couldn't use Clojure for some reason. Rich Hickey comes out and says, you know, Clojure is closed source now, it's a commercial product, you have to pay a license or something. What other language would you pick to write CodeScene in? >> Oh, then I would most likely have gone with, and this might be a surprise, but I would most likely have picked Python. >> Oh, okay. And why Python? >> I have this sweet spot for it. It's always been like my second favorite language, and I still tend to do a lot of stuff in Python. And the reason is, first of all, the language itself. I mean, it has grown more complicated in recent years, but it's still, I think, quite brain-friendly. >> It's fairly easy to read Python code, and I like the feeling of writing it. And there's also an extremely strong ecosystem for data analysis in Python. So I think that could have been a good fit as well. >> Yeah. What was the word you used, brain-friendly? >> Yeah. >> I think that's a neat way to put it. It is kind of Lispish, I guess, in some ways. At least the Python from way back, because like you said, now it feels like it's getting more and more complicated every year, but back in the day at least it felt like it had that idea that it was supposed to be very simple. And I don't know if you've used IPython; I would say it's pretty close to having a REPL experience, and you have the notebooks and everything.
>> Yeah, it definitely is, and a super strong ecosystem. >> So I really like that. And of course there are a couple of things I try to ignore when doing Python, like the whole set of object-oriented additions and that stuff. >> Yeah, I mean, you can just use functional programming and use dictionaries, etc. But speaking of programming languages, what makes Clojure stick out among the other programming languages available today, do you think? >> So I think there are so many things that Clojure kind of pioneered in the, you know, maybe not mainstream, but close to the mainstream of programming, like the whole idea of immutable data structures that are still efficient. And what I like about Clojure is that it really, really forces me to embrace immutability, and that, I think, is the biggest win. What I also like is that Clojure kind of encourages this, what do you call it, railway style of coding, right, with the threading macros. You simply chain together whatever functions you like, and you can compose those into higher-level abstractions. >> So I think... >> You push the data forward all the time. >> Yeah, yeah. A lot of it is about getting your data structures right, and then the algorithms are dead simple, right? It's a bunch of map, filter, and reduce, and you're done, and then you put a good name on it. So what I really like is that I have to write so little code to test something out, and that, I think, is one of the big benefits, with 10 years of insight from using this in production. Another benefit is that the language has remained remarkably stable. There have been virtually no changes to the core language in 10 years, right? No new syntax.
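The "railway style" with threading macros plus map/filter/reduce might look like this. The file data, the churn threshold, and the function name are invented for the example:

```clojure
;; Chain transformations with the ->> threading macro: each step's
;; output flows into the next, then the whole pipeline gets one name.
(def files [{:name "a.clj" :loc 120 :changes 30}
            {:name "b.clj" :loc 40  :changes 2}
            {:name "c.clj" :loc 300 :changes 55}])

(defn frequently-changed-files [files]
  (->> files
       (filter #(> (:changes %) 10))        ; keep the churn-heavy files
       (map #(select-keys % [:name :loc]))  ; drop what we don't need
       (sort-by :loc >)))                   ; biggest first

(frequently-changed-files files)
;; => ({:name "c.clj", :loc 300} {:name "a.clj", :loc 120})
```

With the data structure settled up front, the "algorithm" really is just a filter, a map, and a sort composed under a descriptive name.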
So it's like the opposite of where Python went, right? Clojure was very, very carefully designed, and I haven't felt anything lacking, and should I feel the need for any extensions, I can implement them myself thanks to the macro system of Lisp, right? >> So those two together, I think, are invaluable. >> Yeah. And speaking of using Clojure in production, are there any other benefits, do you think, in a production setting? >> I think Clojure excels on the development side. In production, we come to one of the drawbacks of Clojure, and that is the whole error handling with the stack traces. >> Mhm. >> Clojure is extremely high-level as a language, but once something goes wrong, that whole abstraction just breaks down, and you're exposed to all these layers of implementation in between. Of course, it's a learnable skill, right? After a couple of months with Clojure you become quite good at reading these stack traces, but it's not super friendly, and it's definitely [clears throat] a leaky abstraction, I would say. >> Yeah, it's been a while since I did Clojure, but I remember those. It goes all the way down to the Java class layers, right? >> And a lot of generated code in between and whatnot. It's really, really hard to parse at first. >> Yeah. But at least it usually points to the right place in the file if you read somewhere in the middle, right? Or is that also hard, finding the actual line in the file? >> No, no, it's pretty straightforward, and you become used to it. It becomes almost an unconscious skill, right? You do more or less a binary search of your stack trace: was it here? No, it's there. >> Yeah, yeah. I mean, I remember when I started using Erlang, they didn't even have stack traces. So [laughter] that's a definite improvement, I would say.
>> I mean, the whole error handling in Erlang is a marvel of engineering. It's one of the most beautiful things we have in software, I think. >> Yeah, Erlang is really cool for resilience and things like that. But Clojure is pretty strong with async as well, I mean, parallel programming with the core.async library, etc. >> Yeah, it definitely is, and those are really, really good. That relates to one of the things I talked about earlier, my concern about performance, right? But Clojure, >> being immutable at its core, makes it so easy to parallelize stuff. >> So for example, in the first implementation I did of the code health calculation, I simply scanned through every single file sequentially, and that took forever, right? >> Yeah. >> And then with Clojure, it was like a five-minute fix, or maybe even less, maybe just a minute, right, to make all of those calculations run in parallel and simply fan out on whatever CPU cores I had available. And I couldn't even imagine doing that in, say, C++ or any other language built on mutation, right? >> Yeah. [laughter] It would be a nightmare. >> It's like a two-week rewrite instead of a two-minute fix. >> Yeah. And two months of debugging. >> Yeah. But are there any other things that you don't like about Clojure, either from an operational perspective or just personally? >> I think it's a combination of operational and personal. What I would really, really like to have is a good optional static type system, because I have found that Clojure being a dynamically typed language is very powerful, because it lets you explore and evaluate ideas very quickly without bothering about types. >> But then the thing is that after a while your solution starts to stabilize, and it would be really nice if I could go in and kind of sprinkle types on top of it, right? It would help a lot with understanding the code.
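The sequential-to-parallel fix Adam describes is, in the simplest case, swapping `map` for `pmap` over a pure function. This is a hypothetical sketch; `analyze-file` is a stand-in, not CodeScene's actual code:

```clojure
;; pmap applies a pure function across a collection in parallel,
;; fanning out over available cores. With immutable inputs there is
;; no shared state to lock, so the change is safe.
(defn analyze-file [path]
  ;; stand-in for real, CPU-bound analysis work
  {:file path :length (count path)})

;; Sequential version: (map analyze-file paths)
;; Parallel version -- a near one-word change:
(defn analyze-all [paths]
  (doall (pmap analyze-file paths)))  ; doall forces pmap's semi-lazy seq

(analyze-all ["core.clj" "util.clj"])
;; => ({:file "core.clj", :length 8} {:file "util.clj", :length 8})
```

`pmap` is coarse-grained (one future per element, bounded parallelism), so it fits exactly this shape of work: an expensive, independent computation per file.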
So that's what I think is one big benefit of a static type system: the types communicate a lot about your data. >> And in Clojure this can be much, much harder, because you have a map, you know, a dictionary, and it could basically be anything in that bag. So I would like to be able to have that combination, and I think it was eight or nine years ago that there was a pretty big drive behind clojure.spec, if you remember that. >> Yeah. >> And initially that was a little bit the promise: now I can get the best of both worlds, right? I can do dynamically typed programming, and then I can add the types as documentation and have them checked at runtime and whatnot. >> But spec, I think, never really lived up to its promise. We're using it a bit, but not as much as I would have hoped. >> Okay. So what were the issues with it? I mean, the obvious one is that specs are only checked at runtime, not at compile time. I guess that's one thing, but were there other issues with spec? >> No, I think that's definitely the biggest drawback: it's very late detection if you violate your own specs. But I also found that the format of the specs themselves made them quite hard to use as documentation. >> They're macros, right? >> Yeah. It quickly becomes too complicated, right? When you sit there with a function, even if it has a spec on it, it's quite hard to decipher what the data actually is. >> So I think a regular static type system would have gone a long way.
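For readers who haven't seen clojure.spec, here is a minimal sketch of the runtime-checked documentation being discussed; the `::user` shape is invented for illustration:

```clojure
(require '[clojure.spec.alpha :as s])

;; Specs document the shape of data, but are only checked at runtime --
;; the "late detection" drawback mentioned in the conversation.
(s/def ::email string?)
(s/def ::age   (s/and int? #(>= % 0)))
(s/def ::user  (s/keys :req-un [::email ::age]))

(s/valid? ::user {:email "ada@example.com" :age 36})  ;; => true
(s/valid? ::user {:email "ada@example.com"})          ;; => false
;; (s/explain ::user bad-value) prints *why* a value fails.
```

Nothing stops you from calling a function with data that violates its spec; the violation only surfaces when (and if) something validates it.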
Mhm. >> Yeah, that's one of the things I really like about static typing as well: usually, once you're in the right function, you don't need to look anywhere else in the code, because you know, okay, I know what's coming in, I know what I'm supposed to produce, so I can just stay within this couple of lines. But I've had some really bad experiences working in large dynamically typed codebases where maybe people weren't as disciplined about what data goes in and what comes out. So you have this user object coming in, and you need to access, say, an email property, and you wonder: will this always be available? Maybe, maybe not. Is it a string? Is it a map where the email is taken apart into domain and username, etc.? That whole thing becomes much more difficult, I think. >> I agree, and that's to me the biggest advantage of static typing: in addition to the performance boost and all of that, you have the full context for every single function, and that's super valuable. It's a big part of the documentation, I would say, the type system. >> On the other hand, most of the codebases I look at in an enterprise setting are statically typed, but, you know, 80% of all the arguments are strings. So I'm not sure how much you gain in practice. >> Yeah. But I'm spoiled, because I spend most of my time nowadays coding in F#, and there it's very, very easy to create these very specific types. But I agree, sometimes when you have a method and all of the arguments are string, string, string, it's like, okay, what was the point of this again? >> But have you tried any of the alternatives to spec?
I mean, I know there's Malli, for example, and there was this other thing, which I don't remember the name of, but it was an effort to put a proper static type system in place that could analyze things statically, but I'm blanking on the name, unfortunately. >> No, I haven't. My colleagues have been playing around with that stuff, but eventually what we ended up with was that [clears throat] we had to more or less double down on discipline, right? You can compensate for the lack of static typing by being extra diligent with your unit tests. They become really, really important as part of the documentation, right? And then also encapsulating stuff and keeping our modules, our namespaces, as small as possible. That also helps a lot. >> So I think today I'm less worried about that than I used to be when I started out with Clojure. >> Okay. So it's a drawback, but you can kind of work around it. >> Yeah, I would say so. Definitely. >> It hasn't been holding us back, but it would definitely have been nice to have maybe a little bit more as the codebase grows. >> I mean, for a small codebase it doesn't really matter, but CodeScene has been around for 10 years now. So it's a relatively large codebase, maybe the largest Clojure codebase in existence. I don't know. >> Okay. Sorry, how many lines of code do you think it is, roughly? >> I don't know. In Clojure, I would guess it's definitely a couple of hundred thousand lines of Clojure code. >> Yeah, that's pretty big. >> Yeah, it's like 20 million lines of Java, probably. [laughter] >> But do you have any other tips for managing a big dynamically typed codebase? Test coverage was one thing, modularizing another, but do you have any more? >> Yeah, I think we need to use the tools that we have available, right?
So I think aiming for not only strong code coverage but also meaningful code coverage means that you need to spend a lot of time reviewing your test cases, making sure they are maintainable and understandable too. >> But we're also using a lot of linting tools and stuff like that. But I do think that tests are the single most important property for us. They're really the enabler that allows us to move fast, and we try to use tests at at least three different levels. A lot of the stuff we do is test-driven. >> Mhm. >> So we have a pretty strong unit test suite, and then we always do some integration tests, and finally we have some end-to-end tests, and some of these end-to-end tests are even executed in production. >> Oh, okay. Yeah. So you get really good, maybe triple coverage in some cases. >> Yeah. And I think that's absolutely necessary, because a modern software product is so complicated. There are so many layers. And the reason we do a lot of testing in production too is because we have all these integrations with, say, the version control providers like GitHub and GitLab and whatnot. And every now and then there's an issue with their API, right? And that's something that you cannot really test. I mean, we have integration tests, but they are based on an expected contract, and that might change, or there might be a bug in the external API, and that we can only catch in production. >> Yeah. And even then, a type system won't save you if someone violates the contract. >> No, it won't. >> Since you have such a big Clojure codebase, do you have any other advice or things you've learned managing it, any lessons learned along the way? >> Yeah, I think it's two things.
First of all, good naming is so important, and maybe even more so in a dynamically typed language. I really want to be able to pick up that codebase and immediately identify what I need to change if I'm looking to modify something. So naming at the subsystem level, the namespace level, the file level, the function level, that is really, really important, and I do spend a lot of time thinking about that stuff. The second thing that I think is really a success factor is the skill of the team. With a really strong team, I think you can succeed no matter what programming language you choose. >> Mhm. Yeah, I guess that's a good takeaway: in the end, it's not the tool, it's the person using the tool, right? >> It is, and the tool can hold you back, or it can become an amplifier. Clojure is an amplifier, and it's a sharp tool, right? >> In the hands of skilled people, you can make wonders with it. >> And when it comes to naming, do you use any kind of domain-driven design techniques to organize the codebase and name things? How do you think about naming and organizing? >> Yeah, I'm quite a big fan of domain-driven design. So I always like to see domain concepts, and if I look at the file names, I want to see domain concepts reflected, not necessarily solution terminology. And a lot of the way I think about naming comes from a really good book, Kent Beck's first book. I think few people have read it today, but it's called Smalltalk Best Practice Patterns. >> Ah, is that the one with the flies on the cover? Or maybe that's the first edition? >> Yeah, it might be. It's an old book. I think it's from the mid-1990s, before XP and all that stuff. >> Yeah. Yeah.
>> I've heard about it, and as far as I remember it has a really weird-looking cover as well. But I've heard lots of good things about it. >> Okay. So that book taught you about naming? >> That book... I think that's the hallmark of a good book: it changes the way you actually program. And back when I read that book, I was doing consulting as a C# developer. >> Oh, okay. >> And I found that after reading it, I kind of changed how I name functions, right? I started to think about them in a different way. I started to name them not based on what the function does in itself, but rather on what it looks like in the calling context. >> So I aim to get my code lined up as descriptive sentences that can guide you as a developer. >> And that's pretty much the pattern we've been following at CodeScene too. >> Yeah. So it should be optimized for, as you say, what the function is supposed to do in its context, rather than what's inside it? >> Yeah, something like that, right? When I name a function, I name it from how I'm using it. >> Mhm. So did you ever get to program any Smalltalk? >> Not in a professional setting, but I did learn Smalltalk. I read a couple of books about it and did some projects in it just for my own learning. >> And it's a pretty cool way to think about programming. >> Yeah. I mean, Smalltalk is very impressive. There are a lot of touch points with Erlang as well. I've never used it myself, actually, but it's a very cool concept. But I like that idea that you can learn so much not related to Smalltalk from a book on Smalltalk. I remember one of the books that had a huge influence on me earlier in my programming career was a Perl book by Damian Conway.
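A small sketch of what naming from the calling context can look like in Clojure; every function and the commit data here are hypothetical, invented to show the style rather than taken from CodeScene:

```clojure
;; Each function is named for how it reads at the call site, so the
;; pipeline lines up as a descriptive sentence:
;; "from the commits, exclude merge commits, group by author, rank by churn."
(defn exclude-merge-commits [commits] (remove :merge? commits))
(defn group-by-author       [commits] (group-by :author commits))
(defn rank-by-churn         [grouped]
  (sort-by (fn [[_ cs]] (reduce + (map :churn cs))) > grouped))

(->> [{:author "eva" :churn 40}
      {:author "bo"  :churn 5 :merge? true}]
     exclude-merge-commits
     group-by-author
     rank-by-churn)
;; => (["eva" [{:author "eva", :churn 40}]])
```

Named for what they do internally, the same functions might be `remove-flagged`, `index-maps`, `sort-entries`, and the call site would tell you nothing about the domain.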
I think it was called Perl Best Practices, [snorts] even though I'm not writing any Perl right now, and I've only written maybe a couple of hundred lines of Perl in my entire career. But maybe there's a pattern here: if you have a book of best practices, a lot of those practices transcend the language the book is written for. >> Yeah, I think so. I think there are some universal truths about what makes good code that apply to almost all languages. >> Yeah, I guess that also loops back to the whole cognitive psychology thing. But speaking of best practices, do you organize the code in any specific way? I know there's the Polylith framework, for example, for Clojure. Do you use that or something similar, or how is the code organized? >> No, we never used any frameworks. I don't know, I have this almost allergic reaction to frameworks. >> So I always try to avoid them, and we try to keep things very, very simple. A lot of it is vanilla Clojure. Of course, we use frameworks in small parts of the codebase; for, I don't know, implementing HTTP requests, it's pointless to do that on our own, right? >> So let's use what's already there. >> But otherwise we try to organize it according to domain and responsibilities. So if you pick up the folder structure of, say, CodeScene's core analysis library, the names of the folders that you see will reflect things that you also see in the UI. >> Mhm, okay. So it sort of mirrors the way the product works? >> Yeah, not fully, but to some degree. The high-level folder names are all domain concepts. >> Huh, I see.
But are there any other important libraries that you use in the CodeScene codebase? Because you can do so much with Clojure, right? You can add a kind of Prolog-like language if you use core.logic, and there are a lot of really cool libraries for working with data, for example. Are there any libraries or packages that you leverage heavily in CodeScene? >> Yeah, there are tons of them, and if you asked my colleagues, you'd probably get different answers. I'm a bit biased toward the parts I tend to maintain, but one core library to me, it's a simple library, but I really love it, is semantic-csv, which makes it super simple to work with CSV files. >> Okay. >> So I really, really like that library. It saves me so much pain. And then I'm also doing a lot of work with parsing source code. So that means I'm relying heavily on tools like ANTLR and Tree-sitter, those types of tools. >> And it's such a productivity boost using these tools and libraries compared to, say, 30 years ago, when I wrote my first source-code parser in Lex and Yacc. >> Yeah. Can you tell me a bit more about the semantic-csv thing? >> Yeah, that's the nice thing: there's not so much to tell about it, and that's what I really like. [laughter] A lot of our data sets are organized as CSV files. >> And with semantic-csv you simply point it at a CSV and you get back Clojure maps. It's very declarative. You can basically tell it that this particular column in your CSV is a double with this or that precision, and this one is an integer, and whatnot, and you get Clojure maps back. A map in Clojure [clears throat] is like a dictionary, and everything is already parsed and resolved for you. >> Oh, that's very nice.
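Based on semantic-csv's documented API, usage looks roughly like the following; the file name and its columns are invented, and the exact casting helpers are taken from the library's docs rather than from CodeScene's code:

```clojure
(require '[clojure.java.io :as io]
         '[clojure.data.csv :as csv]
         '[semantic-csv.core :as sc])

;; Point it at a CSV, get back a seq of Clojure maps with typed values.
;; "hotspots.csv" and its column names are hypothetical.
(with-open [reader (io/reader "hotspots.csv")]
  (->> (csv/read-csv reader)
       sc/mappify                         ; header row becomes map keys
       (sc/cast-with {:revisions sc/->int ; declare per-column types
                      :score     sc/->double})
       doall))                            ; realize before the file closes
```

The declarative part is the `cast-with` map: you state what each column is, and every row comes back as a plain Clojure map with the values already parsed.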
Yeah, because parsing is surprisingly complex and >> it is >> error-prone. >> If you just think, oh, what's the big deal, can't I just split on commas? Then you're in for a world of pain. >> Yeah. >> But I guess that's sort of the essence of Clojure, right? It's not trying to do anything fancy. Things should just work and let you be productive. >> Yeah, and that's what I like so much about it. The language gets out of my way and I can just focus on solving the actual business problem >> rather than [clears throat] doing all this meta work around it, right? >> Yeah. If someone were to get into Clojure in the year 2026, what would be a good place to start? Any YouTube videos, or how should you go about it, do you think? >> Oh, that's a fantastic question that I don't think I'm capable of answering. Ten years ago, I would still have known what a good introductory resource could be. Now, I don't know. It depends a bit on learning style, too. I like to learn from books. >> So I tend to buy the Pragmatic Programmers' books. I read a bunch of the books they released on Clojure and learned a lot from them. But I'm not really up to date with 2026. >> Yeah, but I guess it hasn't changed that much. What's your recommendation for a good Clojure book, for maybe both beginners and advanced readers? >> Yeah, I'm trying to remember what books I read back in the day. It's been a while since I read a Clojure book, but I think Programming Clojure from the Pragmatic Programmers was a really good read. And there's also a book that came out, I think fairly recently, called Clojure Brain Teasers. >> Oh, okay. >> By Alex Miller.
That one, I would say, is for advanced Clojure programmers, because even after doing Clojure for more than a decade, there were embarrassingly many of those brain teasers that I couldn't figure out without actually trying them out. I learned a lot from that book, so I think it's a good next step. >> Okay. So if you're past the beginner stage and want to elevate your skills a bit, that could be a good book. >> Yeah, I think it's really good at pointing out how complicated things can actually be, and a lot of it is the intersection with the JVM. >> I've heard a lot of good things about Eric Normand's book, Grokking Simplicity, even though I think that book is actually in JavaScript. I think it could help you get into the right mindset for being a productive Clojure programmer, especially if you're more used to, I don't know, C++ or the classic procedural, imperative programming style. But have you read that? >> No, I haven't read it. It's been on my reading list for quite some time, so thanks for the reminder. I will most likely pick it up now. [laughter] >> Yeah, >> I think it's useful, because like we talked about earlier, the syntax in Clojure is that simple. You learn it in a day, right? But the hard thing is the mental shift, going from imperative programming to functional. So anything that helps with that is useful. >> Yeah.
There's not much syntax to learn in Clojure. You learn a handful of parentheses and brackets and you're pretty much done, right? >> Yeah, the curly and the square brackets and you're done. That's it. >> Yeah. And of course, everyone should watch the Rich Hickey talk, Simple Made Easy. >> Yeah, >> probably the best one of all time. >> And I'll also mention that early on, I found myself browsing the API documentation of the core libraries as well, because there are surprisingly many helpful functions already implemented for you. >> I remember, early on in particular, I often found that I had been implementing something I could have replaced with a library function. After a while you learn that. The standard library is surprisingly complete and large. >> Mm, yeah. I remember the documentation is really well written, for the core Clojure language and the standard library. It has a bunch of examples and so on, and like you said, it's quite nice to just sit down and read it from top to bottom. >> Yeah, you get a good impression of what's in the library. >> Yeah, I think we're nearing the end of the podcast. Do you have anything else you'd like to add about Clojure or CodeScene? >> Yeah, well, one thing I can add is that everything we talked about before, about AI-friendly code and all of that, we want to help you with. So one thing I've been working on a lot over the past few months, together with my team, is building an MCP server that understands code quality and code health. If you're interested, that might be something to check out: the CodeScene MCP server.
It's really, really cool. It can help you safeguard AI and can also elevate AI performance to an acceptable level, where the AI can start to self-correct instead of just adding to the technical debt. >> Yeah. I was going to ask if you've experimented a bit with that, because a lot of people are saying that AI performs best when you put it in a feedback loop, so it can correct its own mistakes. Does it work with the MCP server? >> Yeah, it works really, really well. I have a couple of YouTube videos demonstrating it, and I also have a blog post, called inner developer loop, that shows and explains it. >> Okay, I'll make sure to add those to the show notes. >> Yeah, it feels really magical, right? You see this AI agent go off, and then it kicks off the MCP server, gets some feedback, self-corrects, and improves. It works surprisingly well. It feels almost magical. >> Yeah. That sounds really cool, because it addresses one of the most annoying things, I think, about coding with LLMs. It's one thing to coach the LLM; sometimes you realize you gave it ambiguous instructions, or you see something it wrote and you don't really like the way it looks, and so on. I don't mind being the human in the loop in those situations. But what really annoys me is when it makes obvious mistakes, like when it copies and pastes code, almost the exact same thing, just tweaked a little bit, or other obvious things like that. So maybe the CodeScene MCP thing can help with that.
Yeah, it seems really promising, because it's all based on the code health metric, of course, which is validated against business impact, and we inject that into the AI. What happens is that, first of all, the AI gets a goal: it knows what good looks like. >> And when it fails to meet that goal, it also gets detailed feedback on what it did wrong. That sets the scene for this self-correction loop. >> Hm, are you going to integrate more AI tools into CodeScene itself? I'm thinking more on the analysis side. >> The analysis, most likely not, because I think it's really important that the analysis is deterministic, >> that we have a ground truth. >> But what we find is that we do a lot of AI work when it comes to remediation. >> Detecting technical debt and prioritizing it is really just the first step; then you need to act on it as an organization, right? And that's where it usually gets blocked, because people might lack the skills for how to write the code in a better way or how to refactor it, or the priorities and time might simply not be there. That's where I think AI can help a lot. As an industry, we seem obsessed with writing more code faster, and I simply think that's the wrong problem to solve. I think it's much more important to take the code we already have and uplift it, so that not only you as humans but also your AI agents can safely iterate on it. >> Mm. >> So we're finding that we use a lot of AI to uplift and remediate technical debt. >> Yeah, because you can have CodeScene identify the technical debt, then Claude can go and fix it, and then Claude can use CodeScene to fix its own mistakes, and you're golden. >> Yeah, something like that. >> Yeah. Unless you have anything more to add, I think I'll say thank you, Adam, so much for coming on the podcast. And where can people find you?
Are you on any of the social media platforms? >> Yeah, I tend to be fairly active on LinkedIn, so you'll find me as Adam Tornhill on LinkedIn, and I'll be happy to connect with anyone interested in code quality, technical debt, or functional programming. I love that stuff. That's where you find me. >> Yeah. And CodeScene you can find at codescene.com, I guess. >> Yes. >> And every time I talk to someone who's really into Clojure, I want to get back into Lisp again. It is a lot of fun, and Clojure especially, because it's so pragmatic as well. But we'll see; hopefully we can get more Lisp people on the podcast too. [music] But yes, thanks a lot, Adam, for coming on the podcast, and we'll talk to you later. >> Yeah, thanks for having me. Take care. [music]
Video description
In this episode I sit down with Adam Tornhill, founder of CodeScene, to talk about technical debt, Clojure, and why it's so hard to write good software.

=== Topics covered ===
- From electrical engineering to software psychology
- Why writing good code is so hard
- The origin story of CodeScene
- What technical debt really is, and why traditional metrics like cyclomatic complexity fall short
- Code health: measuring what makes code hard to understand
- Visualizing code to align engineering and management
- The story behind Your Code as a Crime Scene
- Making the business case for refactoring
- Lean manufacturing vs. software: the visibility problem
- Code quality and business impact (10× slower, 15× more defects)
- AI-friendly code: when LLMs break (and why)
- How technical debt amplifies AI failure rates
- AI as an engineering force multiplier (or multiplier of chaos)
- The future developer: AI team lead?
- Why Adam chose Clojure for CodeScene
- Immutability, REPLs, and iterative problem solving
- Test-driven development as cognitive support
- Performance myths in dynamic languages
- Parallelism made simple with immutability
- The real drawbacks of Clojure
- Static vs dynamic typing in large codebases
- Hiring in niche languages: small pool, strong engineers
- Naming, domain modeling, and long-term code health

=== Links ===
https://codescene.com/hubfs/whitepapers/AI-Ready-Code-How-Code-Health-Determines-AI-Performance.pdf
https://arxiv.org/abs/2203.04374
https://pragprog.com/titles/atcrime/your-code-as-a-crime-scene/
https://www.youtube.com/watch?v=7FApEq8wum4
https://www.paulgraham.com/avg.html
http://codescene.com/
http://adamtornhill.com/

=== Func Prog Conf ===
https://funcprogconf.com/

=== Episode in Spotify ===
https://open.spotify.com/episode/7l3TDCeUxSnUoturNxG666

=== Episode in Apple Podcast ===
https://podcasts.apple.com/se/podcast/16-adam-tornhill/id1808829721?i=1000752399791

#funcprogsweden #funcprogpodcast