SmartLogic · 84 views · 4 likes
Analysis Summary
Worth noting (positive)
- This video provides practical, real-world insight into how a professional consultancy manages dependency drift and project scaffolding using Nix and Igniter.
Be aware (cautionary)
- The content is highly transparent; the only minor influence is the "revelation framing" of internal tools as industry-leading standards to attract potential clients.
Transcript
Hey everyone, I'm Sundi Vance, software engineering manager at Cars Commerce, and I'm your host for today's episode. For season 14, episode 12, I'm interviewing fellow Elixir wizards Dan Ivovich and Charles Suggs to learn all about the cool stuff that they're working on with Elixir and around Elixir, surrounding Elixir, all the Elixir things at SmartLogic. Hey guys, how you doing? >> Hey. All right, other side of the table. >> Other side of the table. >> I'm so nervous. >> The mythical table here. Yeah, we've been talking about a lot of cool different technologies this season with all the different guests, and we thought, you know, SmartLogic is doing cool stuff too, working with a lot of different technologies. You all are in the specific, maybe unique, position of getting to work on different projects. So that means you have the opportunity to work with different technologies, whereas some people in product land might have to just stick to their one tech stack, and maybe they get that one chance at trying that new thing. But you've got the best of both worlds here, or the best of eight, ten, twelve technology worlds here. So let's get into it. Where would we like to start? There are so many different things. Okay, how about: what are we most excited about recently? Dan, let's start with you. >> Most excited about recently in the Elixir verse for us? I think it's trying to optimize some getting-started stuff, which actually Charles and I have been working on kind of hand in hand, although mostly Charles. So I think that's probably what's most exciting relevant to this season, in terms of the conversation we had around Nix, and just having Elixir installed at the version you want, ready to go, compiled and on your path correctly, without having to build a bunch of things from source and getting yourself into a dependency knot.
So I think Nix, and then just, you know, how do we get our standard tool set, our little mini Elixir verse, the SmartLogic version of an Elixir application, how do we get that going as quickly as possible? I think that's where a lot of our energy begins. >> Okay, I had like five follow-up questions, but I'll come back to them. Charles, what is your favorite thing or most-excited-about thing that you've worked on recently? >> Part of what Dan was referencing would be working with Igniter and learning to use Igniter to build out a bunch of code generators for faster project startup. >> You just answered one of my follow-up questions. >> Yeah. And also, not as recently, some work we did with Explorer that I referenced in the Explorer episode recently. >> Cool. All right. So, one of my basic follow-up questions is actually something I technically already know, but for our audience who maybe isn't as familiar with what SmartLogic is doing: why would you be looking into startup tools when you're building out new projects? What does SmartLogic tend to do when you all go through the process of making a new product or a new project? What's the starting point normally? What are some of those pain points with starting? Can you speak to some of those things? >> Well, you know, being a consultancy, we often have the opportunity to work on greenfield projects, which is nice, and diving into existing code has its own fun to it. But when we're building a new Elixir app or starting fresh, and when that's something that we might do multiple times a year, a lot of that is kind of the same stuff that you're doing. A lot of projects have a lot in common. And so being able to automate as much of that as we can really is a benefit. Almost every app needs authentication for users. Almost every app, of course, we want tests. Almost every app, there's things that we tend to do for Phoenix components.
And so this way we can just start with a good, consistent base where developers don't have to spend time thinking about it or double-checking the docs. They can just run a few commands and then get to work. >> Cool. So Dan, you started this off by saying you were adjusting some of the things that you all are doing with Nix. Anything new that you're adjusting based on something we might have talked about this season? >> Um, yeah, I think in the Nix episode it came up a little bit, but Charles has been pushing the use of flakes on our side, and there's definitely been some advantages. I don't know how perfectly I can articulate them, other than it seems to play a little nicer with the way we do things. I find the Nix language for scripting up either the shell definition or the flake maybe not the most obvious thing to declare, even as it is a declarative thing, but I find flakes easier to edit and maintain and copy and kind of move between versions. I think that's just been a benefit of embracing that approach, you know, even as it's still an experimental feature in Nix, I believe, that it gives us a configuration that just seems to work a little bit better for our use case. It comes with a lock file, even. >> Yes. Well, I don't know. On the internet this week, I was seeing a lot of lock file hate, like, you shouldn't need a lock file. And it's like, well, maybe, but I think the reality is we do. So let's have one rather than not have one. To lock file or not to lock file. >> Yeah. >> Okay. Fine.
>> I think for us in particular, being a consultancy that works on lots of different applications, and maybe this is true for a product company with a lot of microservices, but even in that environment I would expect that their microservices are all running relatively the same version and general software stack, and are just independent applications that communicate a different way. You know, we have things that are on Elixir 1.18 or 1.17 or 1.16. And certainly our work in Elixir is easier in that regard than some of the things we used to do in Ruby, or are still doing in Ruby, where those version upgrades tend to be a little bit more challenging. The extremely good backwards compatibility of Elixir, and its really easy upgrade paths through minor versions, with no major version change even on the horizon, certainly has made that upgrade path easier. But I think our approach to solving this problem is definitely rooted in our history of, well, we have this stuff that's still on Ruby 1.8, and this stuff that's now on 2, and this stuff that's on 1.9, and we're trying to get all this stuff upgraded by the time security patches end, and we're switching between branches and therefore switching between versions, and how do we manage all that? It used to be, you know, RVM and then rbenv and then asdf, and hopefully every version you need compiles on everybody's laptop, and that was almost never the case. And Nix has mostly solved that problem for us, and did so at a time when we were starting to roll out the Apple silicon laptops across the team. So it was even more complicated, a new architecture on top of everything else that we were trying to do there. And so the timing there, years ago, was really to our advantage.
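The per-project setup Dan describes, one pinned toolchain per repository, is typically expressed as a `flake.nix` defining a dev shell. A minimal sketch along those lines (the nixpkgs attribute names, such as `elixir_1_18`, vary by nixpkgs revision, so treat these as assumptions):

```nix
{
  description = "Per-project dev shell pinning Elixir (sketch)";

  # `nix flake lock` records the exact nixpkgs revision in flake.lock,
  # which is the "lock file controlled continuity" discussed above.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      mkShellFor = system:
        let pkgs = nixpkgs.legacyPackages.${system};
        in pkgs.mkShell {
          # Everyone who runs `nix develop` gets these, on Intel or Apple silicon.
          packages = [ pkgs.elixir_1_18 ];
        };
    in {
      devShells.x86_64-linux.default = mkShellFor "x86_64-linux";
      devShells.aarch64-darwin.default = mkShellFor "aarch64-darwin";
    };
}
```

Entering the environment is then `nix develop` in the project root, regardless of host OS or architecture.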
And then I think as we've continued, now if I want to add to our Elixir verse of a project jq or ripgrep or a command line tool for GitHub release management, we can add those to the Nix environment, to the flake, and then I know every developer, when they pull it down, will have it. I don't have to say, okay, when you pull this branch, make sure you brew install this other thing. It's all just there at a version I know works, regardless of operating system version or whatever. >> Yeah. Consistent. We all like continuity in our universes, right? Like, we're big on canonical continuity. >> Yeah. Well, Nix gives us version-controlled, and with flakes, lock-file-controlled continuity. >> And I want to add to that too, that we were at one point having a little bit of trouble where someone's Elixir language server would be compiled against a different Elixir version than what the project was running, especially if that had been installed at the system level and we're having a per-shell, per-environment configuration for our project. So baking that into the flake configuration, with some slight adjustment to make sure that our editors also connect to that version of the language server, was also a really big benefit. >> Yeah, there was definitely a time I remember where people were on the fence about tools that generated things. So like the default Phoenix generators definitely got some hate for some time, in and out of SmartLogic. You said you're using Igniter now for some things. What's the concept behind that? Why are you interested in that? Why does that work better than other things? And what are some alternatives you might have gone for otherwise? >> Well, we had already integrated Igniter into kind of an internal tool that we use for some reproducible stuff on projects. And so I used it because it was already there, but also it was something I wanted to learn and get familiar with.
And the why is, it still comes back to consistency, but also not reinventing the wheel on every project, when we have developers that, maybe we might be on one project for a while, but you're going to switch around. You might need to go do maintenance on a project, and so when you're switching between multiple projects, it really cuts down on the time and cognitive load if you don't have to figure out how things are in this project and what's different. And so as long as we can have consistency in the way that we do certain tasks that are pretty consistent across projects, then it saves us that time. So Igniter was a way to kind of standardize on that, so that developers didn't have to go check, oh, how are we doing it now on this project? Let me go find the most recent project and apply that to how I do it here. We can just generate it. And when we improve something, we learn something, we can contribute it back into those generators so that the next projects pick up on that improvement. >> Yeah, I think the distinction is off-the-shelf generators versus generators we're writing. And certainly I don't want to generate code that we're going to mostly throw away or have to edit extensively. But if we've already set them up the way we want them, then yeah, let's give ourselves that kind of starting advantage, especially for the things that we see often. And for us, that's everything from, like Charles mentioned, you know, almost everything we build is behind some sort of user login, because we do just a lot of business-to-business type software, to the rest of the tooling that we reach for. So, you know, how we're going to do CI/CD, how we're going to track errors with Sentry, how we're going to monitor with Prometheus. And I think a handful of these I'd like to touch on as we keep talking about what our Elixir universe looks like.
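An in-house setup task of the kind described here might, with Igniter, be sketched roughly as follows. The module name, dependency choices, and generator arguments are hypothetical, and the callback shape reflects recent Igniter releases, so check the Igniter docs against the version in use:

```elixir
# Hypothetical sketch of a consultancy-wide project setup task built on Igniter.
defmodule Mix.Tasks.Smartlogic.Setup do
  use Igniter.Mix.Task

  @impl Igniter.Mix.Task
  def igniter(igniter) do
    igniter
    # Add the error-tracking dependency every project gets (version is illustrative).
    |> Igniter.Project.Deps.add_dep({:sentry, "~> 10.0"})
    # Compose an existing generator for the auth that almost every app needs.
    |> Igniter.compose_task("phx.gen.auth", ["Accounts", "User", "users"])
  end
end
```

Run as `mix smartlogic.setup`; because Igniter tasks build up a patch set and show a diff before writing, improvements contributed back to the task flow into the next project automatically.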
But it starts with the stuff we pull into every project to create that kind of consistency and robust platform that we're going to build this custom application for our clients on top of. >> Okay, that makes sense. The thing I was thinking about was also, I kind of even remember when Igniter came up on the podcast. I think we had Zach Daniel on to talk about that. And I think that was with Owen, and Owen, I feel like, said, "Yeah, I want to get off of this call. I want to go play with something." So, I guess the thing got built and you guys use it. So, there's that update for our audience. Owen followed through and then, you know, has left, but he got it rolling and Charles picked up that mantle. >> Yep. Thanks, Owen. >> Shout out to Owen. You mentioned CI/CD. How are you managing deployments and making that repeatable and more manageable over time? >> We have our kind of Elixir release approach that we have now. And what we've started doing is having our continuous integration server, when a release is either merged to main or tagged for production, use that to trigger the build of an Erlang release, you know, using mix release, and tar that whole thing up, and then we kind of squirrel it away so it's ready to deploy. And then our deploy process is really just about putting that binary in place.
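The "putting that binary in place" step can be sketched in plain shell as a timestamped release directory plus a symlink swap. The paths and app name here are hypothetical stand-ins, and the real version would unpack the CI-built tarball and restart a service:

```shell
#!/bin/sh
# Sketch of a timestamped-release deploy step (paths and names hypothetical).
set -eu

APP_ROOT=/tmp/demo_app
TS=$(date +%Y%m%d%H%M%S)
RELEASE_DIR="$APP_ROOT/releases/$TS"

mkdir -p "$RELEASE_DIR"

# A real deploy would unpack the CI-built release tarball here:
#   tar -xzf my_app-"$TS".tar.gz -C "$RELEASE_DIR"
touch "$RELEASE_DIR/my_app"   # stand-in for the release binary

# Repoint the `current` symlink at the new release, then restart the service.
ln -sfn "$RELEASE_DIR" "$APP_ROOT/current"
# systemctl restart my_app    # restart step elided in this sketch
```

Keeping the previous `releases/<timestamp>` directories around makes rollback a matter of repointing the symlink.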
And so we use Ansible, kind of in a very old-Rails Capistrano style of, you know, a timestamped release folder, and then put everything in place, update some symlinks, restart some processes. And depending on the complexity of the deployment, the size of the deployment, that can be rolling restarts across a fleet of servers, putting things in and out of the load balancer. We get all of that automation at our fingertips with Ansible, which is our kind of go-to for server configuration. But from a release standpoint, you know, the Erlang releases, the runtime.exs stuff, kind of everything that's happened there in the last decade or so that we've been doing this, has really gotten to a point where I think it's pretty nice to work with. And, you know, Charles and I were just talking the other day about configuration management and kind of the right way to do some of that. We experimented with Vapor for a while, but I think we're really settling in on probably a lot of environment variables that are managed and read by the runtime config file, and then using that to load up your standard Erlang-style application module type config, and then letting all the config within the app follow those standard Elixir and Erlang patterns. >> Okay, so let me throw a scenario at you then. You've got a mythical new engineer starting next week. They tell you, "Hey, I've got some time now. I don't know a ton about deployments or DevOps, and I'd like to study up a little bit on how you all are doing deployments over at SmartLogic. What should I read? What should I look at?" Dreaded question, maybe. I don't know. What do you tell them?
Uh, well, my answer until you finished the question was going to be: don't worry about it, we made it really easy, the README will tell you how to deploy. But then you said they want to get involved and they're interested in it. So now it's like, okay, they want to peek behind the curtain, what do I tell them to look at? We have some Ansible resources that I tend to point developers to, to just understand the basic building blocks that form the base of what we're doing. I think if a developer didn't know how Erlang releases work, I would point them at the Elixir release documentation, the Phoenix release documentation there. And then honestly, that's probably the extent of what you would need to know. Anything we're doing beyond that would be something I wouldn't expect somebody who's in their first few weeks of the project to need to have under their belt. >> Yeah, that's fair. The likelihood of that question coming up the week before somebody starts is like nothing. But it's always a fun thought process. >> Yeah. Well, I mean, I think part of standardization for us is certainly people move between projects because we're working on many projects at a time, but it's also, like anybody else, when somebody joins the team, what does that look like, right? And, you know, standardization helps in both cases. >> Okay, so asset pipelines, we have a note here. For our audience: what are they? How do you work them into your day-to-day? What was awful about them maybe three years ago, and what is better about them now with the way that you're doing them? >> Yeah, so, you know, I think for us, we certainly started with Brunch, right? A lot of Elixir apps back in the day were using Brunch.
We started pushing things onto Webpack before, I think, it was the default asset pipeline for a few releases, but now we're pretty fully in on esbuild and Tailwind, and when we don't need Tailwind, using Dart Sass to handle that side of the house. I think what Elixir and Phoenix have always done really well from the get-go is that the asset pipeline is really isolated. It is just its own thing: you tell your Phoenix app how to make assets exist or how to watch for asset changes, and the external process, whatever it is, can be whatever you want. And it has made moving through the versions on older things more of a JavaScript challenge than an Elixir or Phoenix challenge for us. From our Elixir verse, as also a Rails shop, the way that Rails has now made similar moves to JS bundling and CSS bundling, delegating this work to external tools and moving away from the Sprockets-style approach, that parallel is really nice for us, because now our updated Rails apps, or our brand new Rails apps, and our Phoenix apps follow a similar pattern: ask tools that make assets to hash them so that they cache-bust nicely, and put them in a folder where you can serve them up either yourself, via CDN, or with some sort of proxy. That pattern, our apps look the same, right? If you squint and don't pay attention to Ruby versus Elixir, they look the same. >> Okay, cool. And then the things that are nicer about this than three years ago. I think you said that a little bit, but what was really painful before? >> Well, I mean, I was never like, "Yay, Brunch" or "yay, Webpack." I mean, I'm sure there are people... >> Webpack in particular. >> Webpack in particular, not very yay Webpack. esbuild is awesome. I love esbuild. Tailwind's great for what it does.
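The "tell your Phoenix app how to watch for asset changes" hand-off lives in the endpoint's `watchers` config. A sketch of what that looks like with the `esbuild` and `tailwind` Hex wrappers (app and profile names are hypothetical; the flags mirror what recent Phoenix generators emit):

```elixir
# config/dev.exs (sketch): Phoenix just spawns and supervises external
# watcher processes; the bundlers themselves are outside the Elixir app.
import Config

config :my_app, MyAppWeb.Endpoint,
  watchers: [
    esbuild: {Esbuild, :install_and_run, [:default, ~w(--sourcemap=inline --watch)]},
    tailwind: {Tailwind, :install_and_run, [:default, ~w(--watch)]}
  ]
```

Because the bundler is just a supervised external command, swapping Brunch for Webpack for esbuild over the years changes this config block, not the application code.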
And if you like that utility-class approach... For the things we have that don't use Tailwind, Dart Sass is a fast binary that turns slightly-nicer-to-write garbage into browser-ready garbage. >> And that's the tagline for the entire episode. No, you can't. >> We're just shoveling garbage from one side to the other and trying to make it a little nicer. >> Charles, anything else to add to our trash pile here? >> I'm just so glad that Webpack is not something that I have to deal with anywhere close to a regular basis anymore. >> Yeah. Very fair. I had an engineer who voluntarily updated our Webpack build, and I was like, "Do you have a fever? You okay? Happy?" >> Yeah. I mean, I like a challenge. I've done a few of those, you know, but a few was fine. That was more than enough. >> Yeah. Exactly. Cool. What about observability? What's new and interesting in the world of SmartLogic for observability? >> Nothing really new. We went pretty hard into Prometheus, which, with everything happening with telemetry inside Phoenix and Elixir and just that whole space in general, has been a really good move for us. It is trivial to add Prometheus metrics to things and add a Prometheus exporter, and then get anything we stand up being scraped and monitored by our Prometheus instance, so that we can know how it's performing from the second it's deployed. And I consider Prometheus pretty critical to how we operate in that sense, because we do support a lot of our clients' running systems. We don't just build and hand off. We're long-term partners, and part of that long-term partnership is knowing that it's running the way it's supposed to, knowing when there's a problem, and then mitigating those problems if they recur. And I think observability is key to that kind of long-term relationship. And I've been very happy with Prometheus as a technology choice to accomplish that need.
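Wiring Elixir telemetry into a Prometheus exporter, as described above, can be sketched like this. This assumes the `telemetry_metrics_prometheus` Hex package (PromEx is a common alternative); the metric selection is illustrative, not SmartLogic's actual list:

```elixir
# Sketch: declare Telemetry.Metrics and serve them for Prometheus to scrape.
defmodule MyApp.Metrics do
  import Telemetry.Metrics

  # Added to the application's supervision tree, e.g.:
  #   {TelemetryMetricsPrometheus, metrics: MyApp.Metrics.metrics()}
  # which (by default) exposes GET /metrics on port 9568.
  def metrics do
    [
      # Request count and latency histogram from Phoenix's built-in events.
      counter("phoenix.endpoint.stop.duration"),
      distribution("phoenix.endpoint.stop.duration",
        unit: {:native, :millisecond},
        reporter_options: [buckets: [50, 100, 250, 500, 1_000]]
      ),
      # BEAM memory, emitted by the telemetry_poller that Phoenix sets up.
      last_value("vm.memory.total", unit: :byte)
    ]
  end
end
```

Pointing the shared Prometheus instance at each deployed app's `/metrics` endpoint is then all it takes for a new service to be monitored from the moment it ships.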
>> Charles, do you have anything to add to that one? I made Charles do a project where Prometheus and Grafana were core to the product. So he maybe feels a little less "yay" about it than I do. >> Yeah. I was thinking about how to maybe slot that in, because that wasn't really about observability, so to speak. At least not in the way that we're talking about it here, >> right? >> This was more about, how do we aggregate data for a project that is constantly collecting data over sensors. So not telemetry for servers or applications, but real-world data that's coming in. >> Real. Yeah. >> Like hardware data. >> Like environmental data, like temperature. >> Okay. >> Humidity, things of that sort. >> And so then to connect together Prometheus with Grafana and an Elixir application, to be able to facilitate that data coming in, being transformed and sent off to Prometheus, but then also establishing a lot of other metadata around what's being collected. Where is this sensor? What else is going on with this particular sensor? And that way the client would be able to make use of that data. >> Keeping it vague for... >> There is a piece of me here that is just like, you are real-life, come-to-life book characters, because Build a Weather Station with Elixir and Nerves is a book title that we have out there in our ecosystem. >> It's true. >> Shout out, Frank. >> Yeah, we just didn't do any Nerves on this. In this case it was commercial off-the-shelf hardware, but we knew we had time series data and we have a lot of Prometheus experience. So the question was, can we leverage Prometheus to be our ingestion point for this time series data? And it was, to the integration partner that we had for this hardware: can you get into this format, or a format that we can massage into something Prometheus can scrape?
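For reference, "a format Prometheus can scrape" means its plain-text exposition format: one line per sample, with labels and a current value, served over HTTP. Sensor readings re-exposed that way might look like this (metric and label names are hypothetical):

```
# HELP sensor_temperature_celsius Latest temperature reading reported by a sensor
# TYPE sensor_temperature_celsius gauge
sensor_temperature_celsius{sensor_id="site-12"} 21.4
# HELP sensor_humidity_percent Latest relative humidity reading
# TYPE sensor_humidity_percent gauge
sensor_humidity_percent{sensor_id="site-12"} 48.0
```

Anything that can serve text like this, including an Elixir app transforming vendor data, becomes a scrape target, with Prometheus handling the time series storage.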
And I think what ended up being cool about that project too was the integration kind of went both ways, where, like, Prometheus, you ultimately have to tell it what to scrape, and we have a bunch of configuration we maintain to tell our Prometheus what to scrape in terms of all the applications we monitor. But in this case it was like, well, what to scrape depends on what's deployed, and you want to be able to edit what's deployed through a web interface. So we actually had Prometheus getting its configuration from Elixir, to then turn around and scrape stuff Elixir was processing, and then we had to do some reformatting stuff. They are pretty tightly woven together in a way, but also we're leveraging each piece for what it's really good at. And I think that's, >> you know, for the season, right? Like the Elixir verse. >> Elixir is great for a lot of things, but so is a lot of other stuff. And, you know, how well can you make them play nice? And as somebody who wasn't the one doing the work, Charles may not agree, but I think overall they play really nice together. >> It served the purpose, I think. >> Yeah. I mean, it accomplished the goal, and it's running and it's doing what it needs to do, and, like anything else, there are weird edge cases and scaling challenges that are never where you think they're going to be. But overall it's been a good use of the technology, applied for a custom application with this open source core, in a way. >> Yeah, it's interesting. I feel like I didn't work a ton with Prometheus at SmartLogic, and I'm also not in the observability world a ton, but I'm in Datadog every day. Got dashboards I'm always looking at. There are certain metrics I'm always concerned with. Core Web Vitals, SEO tracking, alerts, any kind of errors.
At this point, I don't even know where errors originate from, because they just kind of load into Slack and I check them. I check the runbook, see how they're doing. How's Prometheus for that? Just from a general usability standpoint, can anybody grab a look and make sure that things are operating as expected? >> Yeah, I mean, I think we use Prometheus in two very particular ways. Like, sure, we have it monitoring response times, query times, things like that. That's generally not where we get a whole lot of value out of it. We get, you know, just generally making sure servers are running, CPU loads are where they're supposed to be, RAM loads are where they're supposed to be, that there's plenty of free space, that processes are running the way they're supposed to. That is a piece of Prometheus where, you know, you could pay New Relic or Datadog or somebody to monitor infrastructure and you would get all of that. We rolled our own with Prometheus. And then the other side of it is specific application telemetry: this process ran, it took this amount of time, it ran at the cadence it was supposed to, its end result was this amount of data added or removed or cleaned or marshaled or notified, whatever it is that's core to the business logic happening the way it's supposed to. We have a lot of that in place, and then the monitoring is: are things occurring when they're supposed to? And if the process breaks down and data stops flowing through your system the way you expect it to, do you know? Right? And for us, that's often in places where it is not an end-user web metric that you would see, like, oh, checkouts or searches are down. This is like, no, data is not loading, so the website is out of date, >> and you can't tell that >> necessarily from the web-scraping side of observability, but you can certainly tell from a, you know, hey, a process has not checked in.
It's gone rogue slash dead, go resurrect it. And that's been really critical for us, for a number of our clients, to make sure that the business processes are flowing the way they're supposed to. >> Got it. Is there anything tool-wise that is new to your ecosystem, maybe in the last year or within the last season, that you've tried, or that you've tried and didn't like? I mean, that's a fair topic too. >> I think, if we stretch the season a little bit further than actually the season, I know Charles is big on Explorer, which we talked about earlier this season. Charles, you want to talk more about our usage of Explorer? >> Sure. Yeah, that was a fun project. So there's no user authentication as part of this application. It's all kind of just front-facing data: users can interact with the data, can explore the data, filter, sort, in kind of a tabular display, but also can enter some of their own data and get some calculated values back, based on what they input and the other existing data in the system, which, as I mentioned, they can filter and sort through. The client already had data in spreadsheets; it was how they were working with this data. And to avoid having a database, and to also enable quickly doing the sorting and filtering and other calculations across rows of data, Explorer seemed like a really good fit, because it brings the concept of data frames into Elixir, and the ability to do operations on a tabular representation of data. You can say: add a new column, and that column should be the product of these two other columns divided by this number. And you can do that with one or a few lines of Elixir code. Add that to your data and build like that. So with this project, we just load the data when the application starts, via a GenServer or an Agent, and keep it in state that way, and then users can interact with it.
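The data-frame operations Charles describes, including the "product of two columns divided by a number" example, might be sketched with Explorer like this (column names and values are hypothetical):

```elixir
# Sketch of Explorer usage: spreadsheet-style data, no database.
# DataFrame.mutate/2 is a macro, so `require` it first.
require Explorer.DataFrame, as: DF

df = DF.new(price: [10.0, 20.0, 30.0], qty: [3, 4, 5])

# "Add a new column that should be the product of these two other
# columns divided by this number," in one line:
df = DF.mutate(df, total: price * qty / 2.0)

# Keeping the frame in process state at startup, as described, could be
# as simple as starting {Agent, fn -> df end} under the application.
```

Filtering and sorting for the tabular UI follow the same pattern, e.g. `DF.filter/2` and `DF.sort_by/2` over the in-memory frame.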
And it was really handy, and kind of a fun challenge to also think about. I don't remember the specifics now, but there were a couple of times where we had to change a little bit about how we might do things to keep this as an essentially database-free application. >> Yeah, I think there's two projects over the last however many years, it's probably been three-ish years since the other one. Anyway, you know, you do web software for as long as we've been doing it, and it's browser, server, database; browser, server, database, over and over and over again. And then we do something like that app, where it's browser, data in memory. Huzzah, no database. And then we also had a project a few years back that was a mobile app, no server, just a database on the phone locally, don't talk to the internet. It was like, this is cool, we can use SQLite and just store some data, and, oh, now we've got to do database migrations at startup of a mobile application, you know. And it's just the same stuff but different, >> which is always a funny or interesting way to challenge your brain. It's like a brain teaser on how to think about a different setup. I think I actually worked on that one you're talking about, Dan. But then, even now, working on mobile app stuff and thinking about having web things match mobile things as they get deployed out, and just figuring out how those different >> interactions result in the same thing in the database or the same thing in metrics. Oh my god, analytics metrics is a beast. Just trying to have that the same across all three clients, with the same click events and whatnot, is just a whole... I could probably talk for hours about that one. But it changed the way my brain thought about these kinds of problems, which is fun. We need challenges like that to keep us on our toes, I think. >> Mhm. >> Cool.
So there's always the fun section of every episode where we dig in a little into AI tools. How is SmartLogic using AI tools, either in the workflow of how you actually do the work, versus, are you working on any AI products? What's new in the world of AI with SmartLogic? >> So, I'll start with: we do work for hire. We don't own the intellectual property we create. So we're being a little cautious and working with our clients to make sure that we have all the right agreements, so that if the work we do contains code that may or may not be copyrightable under current United States copyright law, everybody's cool with that. So we've been a little cautious in that regard, but we have a lot of ongoing exploratory work, you know, for our own benefit, for our clients' benefit, for where is this going to fit in and slot in nicely. We've been using GitHub Copilot. I've personally been doing some work with Codex and Claude Code, and kind of the agentic, you know, agent-pair-programmer approach, and then also some LLM integration in terms of: in our clients' products, what can we summarize, anticipate, or generate on behalf of a user that maybe is relevant, to help them catch up on what they've been missing, looking for the right places to do that. And I think the other side of what we do, being business-to-business or business-to-business-to-consumer, and generally very mission-driven products, is to make sure that we are not distorting the purpose of the software just to have an AI feature built in there, right? And so, yes, we could make writing this thing or doing this process a lot easier on the end user by AI-assisting it, but that's not the mission and goal of the product.
And so if those are in conflict, then we try to not head down that road, and look for the right value-add that doesn't detract from the purpose of the product itself. >> Cool. Yeah. I know the goal is always, how do you make yourself more efficient and faster when it comes to your own personal tool set, and figuring out what helps you out. I've noticed that I'll try something with an AI tool that'll help me move faster, and it seems faster at first. I guess it depends on what it is. I think maybe I've talked about this before, but I had more than 10 direct reports at one point, and ChatGPT was really helpful for me during review season, just to organize my thoughts and get anonymized data together. But this time around, I was just like, yeah, I can write it faster than it would take me to organize my thoughts. I'll just jam it into a keyboard and see how that goes. I haven't done anything vibe-coding-wise, but that's just kind of what it reminds me of, tool-wise. Speaking of vibe coding, do you want to do a shameless plug? You all did a fun vibe coding video a few, I want to say, days ago? >> Yeah, it's been more than days, >> maybe longer. I think for us, especially being product focused, the ability to get to something workable in a prototype just by describing what you're looking for is cool, right? I have a blog post coming out about some of this stuff, too. That gets you something, you know, and the first version is often your most expensive version. So if you can drive that cost down, that's great, because feedback off of something real trumps anything you could get off of a wireframe or a color mockup or even a clickable prototype.
>> "Your first version is your most expensive version" is a phrasing that I have never thought about, and it's breaking my brain, because that is so correct. I mean, the common phraseology you'll hear too is, you know, the first 80% is easy and then the last 20% costs 80% of the effort, right? It's that question of where can you shorten, and like you said, what people are trying to figure out with these tools is: where is it actually more efficient, and where does it just feel more efficient? So I think in areas where we can help clients prototype out an idea or prove out an idea before investing a ton of money in the technically sound solution, there's an advantage there. And we're trying to figure out, given those early prototypes: what's the burden to take them on and try to maintain them? How much can they be maintained outside of the original tools that they were built with? How much can those tools actually work to maintain them? And I think we see that once the complexity of an application gets to a certain point with the current technology, the LLM kind of starts to chase its tail a little bit, either making the same mistakes or oscillating back and forth. I think anyone who's worked with these tools has seen these things happen, and the technology is improving. How we communicate with these tools is improving. How we set up their guardrails is improving, and that may change the math for us at some point. But I think proof of concept, proof of idea, is critical, and then from there it's like, okay, well, how do we get it to market? How do we architect it correctly? And I think that's where our experience as an agency that's built a lot of first versions can be really helpful. As an agency that maintains a lot of what we've built, that can be really helpful.
You know, I did some vibe coding, an attempt to just get Codex to build an Elixir or a Phoenix application, and it felt tedious and took a lot longer to make it do things that I know how to do quickly. But then I also set it off to try to find a bug where I was pretty sure I knew where it was, and I was curious if it could find it. And it found the exact line: it was one function call that should have been a different function call, the only one in the application that was incorrect. That was what I suspected the problem was, and the agent could find it and suggest a diff. And it's like, okay, well, if that had been a more complicated problem that I didn't already know the answer to, and it could still find it, that's a big win, because it can do that while I do something else. >> Yeah, that's a good point. I don't think anyone's brought up the concept of bug finding with AI tools yet, so that's interesting. I think I've seen it in some of the communities I participate in, you know, this question of what you can task it with, right? And in this case, I described it as: this input from this editor looks right in all these places but wrong on this screen, what's up? You give it enough words so that, if you've ever watched Codex or Claude Code work, you see it searching your codebase, putting together a bunch of greps to find the right files, to give it the context it needs. As long as you give it the words that will help it find the files the problem exists in, it might be able to find it. Especially when the error is one-of-these-things-is-not-like-the-other, that's something computers are good at detecting. And then I think the other side of that is, the thing I hear the most from peers is: I make it write my tests, because no one likes writing tests. If it gets us good test coverage faster, awesome.
Good test coverage is also hard. So, >> you know, we'll see. And there's a difference between good test coverage and full test coverage, or broad test coverage, because you can cover all the functions, but the tests may not really be of high quality. Sometimes these tools seem like they're going to be really efficient, and they do have their places where they increase efficiency, but sometimes it just kind of drops off a cliff. It can be deceptive at first that this is being efficient, until you realize, oh, >> there's three ways to solve this problem, >> and it's chosen two of them and half implemented both, as opposed to implementing one completely and having it work. >> Yeah, I think that's a good point, right? Because you think about developers who are new to the space generally, right? Knowing when you're heading down a bad path is an experience skill, right? Because you can really convince yourself, you can gaslight yourself into: this is going to work out, I'm going to get this to work. And sometimes you just have to stop, throw it away, and start over. And I think that with the AI stuff, there's something here, and this is not a thought that was fully formed before the last 45 minutes, but the heuristic for how I, as an experienced developer, know that I'm heading down a bad path is now different, because you're interacting with something completely different from your own self, or a peer, or pair programming, or feedback from just your test suite. And so I do think there is a risk of, oh, I'll just keep explaining it and it'll figure it all out. And it's like, well, maybe not. You might just end up going around in a circle. Maybe it is time to just start over, so that both you and the LLM have different input to try with. >> Yeah.
At some point I'll try something a few times, and it'll give me the wrong answer enough times that I'm like, I know what the right answer is now; I've seen the wrong answer four times, so now I know the right answer. >> So that's got its uses, too. Yeah, it's rubber duck debugging, except now the rubber duck talks back through a giant statistical model, >> right? This is like the world's nerdiest deep cut. I don't know if any of you or our listeners are fantasy readers, but it reminds me of Eragon, which I read a long time ago. The magic system in that world was: you could do things with magic, but it was dangerous to do things with magic that you didn't already have the ability to do yourself. So if you tried to pick up a rock with magic that was bigger than what your physical body could lift, that would hurt you, that would burn you out. And then at some point in the book they make lace, something they can do easily but that takes a lot of time, and they did it faster because they just put their energy toward making the lace super quickly. So whenever people talk about using AI to speed up a process that they already know how to do, that is already within their ability, but they're just doing it faster now, I'm like, that is the correct way to use it. That's at least my opinion. And it makes me want to go reread Eragon every time. >> Yeah. I mean, I think there's that, and there's the brainstorming side of it too, of: I just don't know, give me a place to start, right? Empty buffer syndrome, right? I don't know what to type. I think there are advantages there. It's similar to the advantages we see with pair programming, or just having a chance to talk about it. It is not early days anymore, but it still feels very early days, right?
And part of that is just how fast it's moving. Part of that is, you know, we've just got to see how much of this sticks, and for what good. >> Very fair. Charles, are you going to play with Tidewave when that is out of beta? >> Of course. And I feel like I saw some options for kind of playing with it now. I just haven't made the time to do so. But yes, of course. >> Cool. Mhm. >> I think that's one thing that comes up whenever we talk about Elixir systems and AI: oh, it did a great job in Ruby, or it did a good job in JavaScript, but there's just not enough data to build off examples for doing something in Elixir. So it'll be interesting to see what else comes out over time. >> Mhm. Yeah. I think people maybe are underestimating how much work is going into the stuff that sits on top of the model that we then interact with. So if that's optimized a certain way, or if it's trained on certain things, what's the search space of Elixir code these things are trained on versus JavaScript, right? Anything probably pales in comparison to how much JavaScript this stuff has been trained on. And so I think, what's the right way for us to structure and express things so that these tools give us an advantage, as opposed to setting us back because they lead us astray? I think that's an open question for sure, because we've definitely seen an LLM generate Elixir code that no one should ever write, that there's no reason to write, right? It's overly defensive: like checking to make sure modules exist inside a function, or importing the module inside a try/rescue, right? And there are languages where that pattern makes sense, but Elixir is not one of them. If you're doing extreme dependency injection on various things, then sure, you want to be able to look and see, based on how I'm installed, what do I have available.
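[Editor's note: a hedged illustration of the overly defensive pattern described here. This example is invented for this writeup, not code from the episode; it contrasts a runtime-probing style with the idiomatic direct call.]

```elixir
# Overly defensive style sometimes produced by LLMs: probing for a
# module at runtime and wrapping the call in try/rescue, a habit
# carried over from languages with optional runtime dependencies.
defmodule Defensive do
  def checksum(data) do
    if Code.ensure_loaded?(:erlang) do
      try do
        :erlang.crc32(data)
      rescue
        _ -> nil
      end
    else
      nil
    end
  end
end

# Idiomatic Elixir: the dependency is known at compile time, so just
# call it, and let it crash if something is genuinely wrong.
defmodule Idiomatic do
  def checksum(data), do: :erlang.crc32(data)
end
```

Both functions return the same value on valid input; the difference is that the defensive version hides real failures behind a `nil` instead of letting the supervisor see them.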
Use the best thing available. But that's generally not how we, at least, write our Elixir applications. >> Yeah. >> Sometimes working with these tools reminds me of the days of dial-up, when I might queue up a few tabs, and while I'm waiting on those tabs to load in the browser, I can go work on something else, and then come back to them when they're done, or when I finish this other task. And so now it's a little bit of: ask the LLM to work on something for me, especially if it's something larger, go work on something else, then come back to it when it's done. >> Yeah. >> Cool. I think this conversation and this season have been a really good opportunity to reflect on the tools that we have in the world and can use to help us with our Elixir applications. Like, the Elixir-verse is not just Elixir, but all of the things that help us write Elixir. For example, I think I was in the vicinity when Zach Daniel was working on Igniter, and I was like, what are you doing over there in the lobby at ElixirConf? And he was like, I had this idea, you know, and he's just kind of going. That's just sort of how Zach is, right? He's like, ah, I've got an idea, I've got to get it out. So it's always fun to see, a few years later, where certain things are and how we're using them every day, or in our day-to-day. So this conversation's been really fun. I do want to plug that, for one of our final episodes of this season, we actually are curious what you all thought about the different conversations this season, the different topics, maybe the different opinions that people have come on to talk about. So we do have a listener survey in the notes; wherever you're listening, there should be a show notes section where you can go grab this link. We would love to hear from you, to hear your thoughts on the different episodes of the season.
And then we're actually going to go over that and do some episode recaps toward the end of the season. So please, if you're listening, or you're watching on YouTube, please click the link. We would definitely love to hear from everybody, hear your feedback, and talk about it. So with that major plug and ask for the audience out of the way: Charles, Dan, do you have anything else for the group? >> No, I mean, definitely fill out the survey. We really want to hear from you and talk through your thoughts on this season. This has been a... well, I don't want to get ahead of things, we're going to do a recap episode, but the world of Elixir, and all the things that we can use alongside it, and how easy that can be kind of across the board, has been great. So we're certainly interested in people's feedback on what types of integrations and what parts of the Elixir-verse speak to you, where you're finding good value, maybe pieces of the universe that we have yet to discover. Insert some sort of Star Trek meme here, I guess. So, you know, we're definitely interested to hear what people think about the Elixir-verse and how we can continue the conversation through the end of the season and into further seasons, around what's interesting to people. We're trying to not necessarily be a news show, but to have interesting conversations that are relevant to engineers working in this space. And I've certainly enjoyed the conversations this season. >> Dan said it pretty well. Please fill out the survey, help us keep this interesting for everybody. And thanks to those who are doing the work out there to enable working with these tools in Emacs, instead of having to leave the editor that I love. >> Ah yes, the ultimate plug. Please don't make my fingers learn how to type in something different. And I'm sorry, Vim mode for VS Code is not enough like Vim for me to feel really good. >> Yep.
And then for the last ultimate plug: if you're listening to this episode and you thought, "Wow, those are some smart people doing cool stuff over there. Maybe they can build me an app," you can always reach out at smartlogic.io. As you can hear, there are always innovations happening over here that make things faster, more efficient, and just all around a good time for everybody involved. So, do that. >> Thanks, Sundi. >> Yeah, no problem. Got you. >> All right. Well, this was a fun time, and we'll see you next week, everybody.
Video description
Elixir Programming Language | DevOps | Software Development

In this episode of Elixir Wizards, host Sundi Myint chats with SmartLogic engineers and fellow Wizards Dan Ivovich and Charles Suggs about the practical tooling that surrounds Elixir in a consultancy setting. We dig into how standardized dev environments, sensible scaffolding, and clear observability help teams ship quickly across many client projects without turning every app into a snowflake. Join us for a grounded tour of what’s working for us today (and what we’ve retired), plus how we evaluate new tech (including AI) through a pragmatic, Elixir-first lens.

01:30 Optimizing Elixir Setup with Nix
02:26 Igniter and Project Startup Tools
11:49 Standardizing Deployments with Ansible
15:26 Asset Pipelines Evolution
18:39 Observability with Prometheus
20:00 Real-World Data Integration
22:46 Scaling Challenges and Technology Applications
23:34 Prometheus Usability and Monitoring
25:32 New Tools and Explorer Project
27:47 Database-Free Applications
29:40 AI Tools in SmartLogic's Workflow
31:37 Efficiency and AI-Assisted Development
33:35 Challenges with AI in Development
39:58 Reflections on AI and Elixir
43:08 Listener Survey and Final Thoughts

Key topics discussed in this episode:

* Standardizing across projects: why consistent environments matter in consultancy work
* Nix (and flakes) for reproducible dev setups and faster onboarding
* Igniter to scaffold common patterns (auth, config, workflows) without boilerplate drift
* Deployment approaches: OTP releases, runtime config, and Ansible playbooks
* Frontend pipeline evolution: from Brunch/Webpack to esbuild + Tailwind
* Observability in practice: Prometheus metrics and Grafana dashboards
* Handling time-series and sensor data
* When Explorer can be the database
* Picking the right tool: Elixir where it shines, integrations where it counts
* Using AI with intention: code exploration, prototypes, and guardrails for IP/security
* Keeping quality high across multiple codebases: tests, telemetry, and sensible conventions
* Reducing context-switching costs with shared patterns and playbooks

Links mentioned:
smartlogic.io
https://nix.dev/
https://github.com/ash-project/igniter
Elixir Wizards S13E01 Igniter with Zach Daniel https://youtu.be/WM9iQlQSF_g
https://github.com/elixir-explorer/explorer
Elixir Wizards S14E09 Explorer with Chris Grainger https://youtu.be/OqJDsCF0El0
Elixir Wizards S14E08 Nix with Norbert (Nobbz) Melzer https://youtu.be/yymUcgy4OAk
https://jqlang.org/
https://github.com/BurntSushi/ripgrep
https://github.com/resources/articles/devops/ci-cd
https://prometheus.io/
https://capistranorb.com/
ansible.com/
https://hexdocs.pm/phoenix/releases.html
https://brunch.io/
https://webpack.js.org/loaders/css-loader/
https://tailwindcss.com/
https://sass-lang.com/dart-sass/
https://grafana.com/
https://pragprog.com/titles/passweather/build-a-weather-station-with-elixir-and-nerves/
https://www.datadoghq.com/
https://sqlite.org/
Elixir Wizards S14E06 SDUI at Cars.com with Zack Kayser https://youtu.be/nloRcgngT_k
https://github.com/features/copilot
https://openai.com/codex/
https://www.anthropic.com/claude-code
YouTube Video: Vibe Coding TEDCO's RFP https://youtu.be/i1ncgXZJHZs
Blog: https://smartlogic.io/blog/how-i-used-ai-to-vibe-code-a-website-called-for-in-tedco-rfp/
Blog: https://smartlogic.io/blog/from-vibe-to-viable-turning-ai-built-prototypes-into-market-ready-mvps/
https://www.thriftbooks.com/w/eragon-by-christopher-paolini/246801
tidewave.ai

We Want to Hear Your Thoughts! Have questions, comments, or topics you'd like us to discuss in our season recap episode? Share your thoughts with us here: https://forms.gle/Vm7mcYRFDgsqqpDC9