bouncer

Code Sync · 572 views · 21 likes

Analysis Summary

20% Minimal Influence
mild · moderate · severe

“Be aware that the presenter's preference for gen_statem is framed as a 'level up' for your career, which may make standard GenServer implementations feel unnecessarily obsolete.”

Transparency: Transparent
Human Detected: 100%

Signals

The video is a recording of a live technical conference presentation featuring natural human speech, personal humor, and real-time audience interaction. There are no signs of synthetic narration or AI-generated scripting.

Natural Speech Patterns: Transcript includes filler words ('um', 'uh'), self-correction, and conversational asides about personal anecdotes (GitHub handle, co-workers, dog).
Contextual Awareness: The speaker references specific events from the previous day's keynote by Josh Price and interacts with the live audience via show of hands.
Live Event Metadata: The video is a recording of a professional conference (Code BEAM Europe) featuring a specific named speaker (Coby Benveniste) with a 38-minute duration.

Worth Noting

Positive elements

  • This video provides a rare, deep-dive comparison between GenServer and gen_statem specifically for modern AI workflows in Elixir.

Be Aware

Cautionary elements

  • The presentation uses 'revelation framing' by calling gen_statem a 'hidden gem,' which can make the viewer feel they've discovered a secret advantage, potentially leading to over-engineering simple processes.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 20:40 UTC
Model: google/gemini-3-flash-preview-20251217
Prompt Pack: bouncer_influence_analyzer 2026-03-11a
App Version: 0.1.0
Transcript

[music] [applause] Gonna put my water down and get started. Um, hello. I'm Coby. Very honored to be here. It's my first Code BEAM conference, so I figured I would start off strong and just completely challenge what Josh Price said in the keynote yesterday. Figured that would be a good idea. Um, he had a slide that said that GenServer is the perfect agentic abstraction. Um, but as you can see from my title, I'm going to make you guys ask the question: is it really, though? Um, in the interest of time, I'm not going to spend too much on the first couple slides. They're just going to give you a quick introduction of who I am. I'm Coby. My handle on GitHub is probably not. Yes, it really is. Uh, it's mostly to annoy my co-workers who are in the front row. Um, they get little notifications saying that I probably not approved their pull request. Gets them a lot. Uh, I've been programming in Elixir since 2016. I've done a lot of other things, too. Uh, some things you might know me from in the community. Uh, I built Artifix, which is a template repo for creating a private Hex repo on S3 and CloudFront. It's fully functional. All you've got to do is run a Terraform script and that's it. Um, I also built FLAME EC2, which is an adapter for the FLAME library to work with EC2 machines. Um, very bare metal, but also serverless, I guess. Um, and then LiveFlip, which is a fun little animations library that enables FLIP (first, last, invert, play) animations on LiveView. Um, I can click into a demo, but I feel like that would be a lot for some people. There's a lot of moving shapes. Um, I'm also pretty active on the Elixir Discord, so you might know me from there. Um, and the thing I'm most proud of: I have contributed to the Elixir core language, which is really awesome. That's the pull request. I fixed a typo, so I'm pretty amazing. I got the hard combo and everything. It was a lot of fun. Um, dogs get likes. So, that's Luna. I'm her co-parent.
She's not really mine, but I'm going to profit off the cuteness. Um, there she is when I told her that I got the Code BEAM conference talk. And there she is when I left. Um, I work at Market Team AI. I'm the VP of engineering there. We build AI agents for marketing. This stuff's sort of boring. I want to talk about the Elixir stuff, but for anyone who's interested, um, everything's in Elixir. We use LiveView. We use Phoenix. Uh, we even integrated with Lit for custom components. So it's a lot of fun stuff. Um, and everything that I mentioned on the earlier slide we use in production. So feel free to use it. And also, if you find issues, tell me. Um, quick preview before we really dive into things. Um, I don't really have a lot of time. 40 minutes feels very short, even though it might feel longer than the 20-minute presentations. Um, so my goal here is to expose you to a lot, not everything, but a lot of information about how AI agents work. Um, how gen state machine works, uh, and how they go together. Um, we'll try to do live demos. I really hope they work. Um, and as long as nothing crashes, then, you know, we'll be able to see some cool stuff on screen. Um, I also want to say upfront, nothing in this talk is going to be like a "this is the right way to do things." I think that gen state machine is very powerful. It's a very unique behavior, and at Market Team we've really seen how it meshes perfectly with how we build our agents and our engines. Um, but that doesn't mean that it's going to work for every use case, and you're not doing anything wrong by using GenServers instead. Um, also, I know this is a Code BEAM conference. I'm an Elixir alchemist, so you might occasionally hear me refer to Erlang's quirks. I hope that doesn't offend anybody on the Erlang side of things. Everyone ready? Cool. Let's get started. Um, we're gonna start off with a show of hands. Who here has heard of Gen State Machine? Everyone's hands should be up. My talk has it in the title.
Um, keep your hands up if you've used Gen State Machine somewhere in your apps. You haven't. You don't need to have coded it, but you can have used it. Um, anyone who lowered their hands, by the way, um, you probably have actually used it. DBConnection uses it, from Ecto. Redix uses it. Finch uses it. It's very commonly used in networking libraries. Um, especially things that have specific protocols that have different states: state machines. Um, keep your hands up if you've implemented gen state machine. Okay, we're starting to lower the amount of hands. That's good. That way I can introduce you guys. Um, last but not least, have you ever encouraged your colleagues to actually use Gen State Machine for their use case? One, two, two hands in the entire audience. Well, my co-workers should have raised their hands, but that's a whole other thing. Um, so the fact that most of you haven't really kept your hands up the whole time, it's totally okay. This talk is a beginner talk. We're going to talk about the what and the why around Gen State Machine. Let's start with the what. Gen state machine is an OTP behavior for generic event-driven state machines. That is a quote right out of the Erlang docs. Um, it's not very widely used, but uh, I think the last time I checked on GitHub, it's got like 1/100th the usage that GenServer has. Um, why is it not widely used? Well, despite it being a very, very powerful and a very flexible behavior, there's a very big learning curve to it. The docs are super, super comprehensive, but they're also complicated. Um, and there's a lot of options for all sorts of actions that can trigger state transitions and trigger events. And more often than not, people just say "just use a GenServer." So, what's it good for? Why use a gen state machine instead of a GenServer? Uh, classic reason: when your behavior is dependent on your state. Um, let's take a classic example of a state machine. It's a click pen. You click it once, it opens.
You click it twice, it closes. Um, it's got only one API available. You push the end, it opens, closes, all that. Um, the behavior itself is dependent on what the state is. And of course, this is a super, super small example, but I'm sure everyone here has written a GenServer that's more than 15 lines and dealt with a massive amount of state in their data. And those are the two keywords: state and data. Uh, GenServer mixes the two. Your state and your data are a single struct that gets passed around the GenServer process loop. Another reason to use gen state machine is when you want to be declarative about your states. So right now our API only exposes a click function. What if we had a refill function to let users refill their pens? For younger people in the audience, that might sound weird. We did used to refill pens back in the day. So, how do we write our handle_call function? Do we group it by the state? Do we group it by the event? Is there any way we can enforce something so that the GenServer doesn't become like a bunch of spaghetti where you don't know where to go and where you're jumping to? What if it was just super easy and it looked like this? It's clean. Your state is separated from your data. Grouping and style are all enforced by the compiler, which is fantastic. So these are very classical examples of state machines, but I don't really like that they don't give a good explanation of how powerful the behavior is. So we're going to take a look at a slightly different, more unorthodox example. Uh, Andrea, who presented the update from the core team, um, he gave a talk a couple years ago at Code BEAM uh where he used the analogy that BEAM processes are people. So, we're going to take a look at what my morning routine looks like as a state machine. Now, we're going to walk through this. There's a lot that you're going to see here. First, we don't have a use GenServer.
We have to manually define our behavior module. And we're also manually defining our child_spec. That way, we can add it into a supervision tree. Uh, we have our start_link function so that we can add it to the supervision tree itself. Um, and you'll notice that we also set the name to {:local, module}. That is one of the Erlang quirks. Um, it's the same exact thing as setting the GenServer start_link name parameter. Uh, you can also set it to a via module, a registry, use global, anything for a name. Um, there's our public API. For right now, it's just simple things that we can trigger state changes with. Um, it's exposing a lot of weird things. Don't worry, we're going to get to refactoring. Um, a little bit lower we've got our callback functions for the actual gen state machine behavior. Um, init: very classical, everyone should know that one. We all have to initialize our processes as we start up. Uh, the callback mode is something unique to gen state machine. It's how we define the actual behavior. When your state is simple and it can be represented by a single atom, then you can use the state functions mode, meaning that functions are named after the state itself. Callback mode also has a handle event functions mode. It's more commonly used if you have a very complex state, more than a single atom. Um, we're not going to get into that today. I personally think that handle event functions are a much more complicated way to reason about state, and very often you can really distill your state into a single atom. Um, but you know what? Everyone has opinions. Come chat with me. I'll be here all week. Uh, finally we've got our state functions. Let's scroll through this a little bit. Um, we have uh my morning routine. I get in, I unpack my bag, I get coffee, I drink my coffee. Finally, I'm ready to work. Pretty basic. Um, you'll notice that there's no handle_call or handle_cast.
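The structure described above (manual behaviour declaration and child_spec, the {:local, name} quirk, and one function per state) might look roughly like this. This is a hedged sketch, not the talk's slide code: the module name, states, and replies are all invented for illustration.

```elixir
defmodule MorningRoutineV1 do
  # Sketch of the first morning-routine machine: no `use` sugar exists for
  # :gen_statem in Elixir, so we declare the behaviour and child_spec by hand.
  @behaviour :gen_statem

  def child_spec(opts) do
    %{id: __MODULE__, start: {__MODULE__, :start_link, [opts]}}
  end

  def start_link(_opts) do
    # {:local, __MODULE__} is the Erlang-flavored equivalent of
    # GenServer's `name: __MODULE__` option.
    :gen_statem.start_link({:local, __MODULE__}, __MODULE__, :ok, [])
  end

  # Public API that triggers state changes (to be refactored away later).
  def unpack_bag, do: :gen_statem.call(__MODULE__, :unpack_bag)
  def get_coffee, do: :gen_statem.call(__MODULE__, :get_coffee)
  def talk(msg), do: :gen_statem.call(__MODULE__, {:talk, msg})

  @impl :gen_statem
  def init(:ok), do: {:ok, :arriving, %{}}

  @impl :gen_statem
  def callback_mode, do: :state_functions

  # State functions: the function name *is* the state, so grouping by state
  # is enforced by the compiler.
  def arriving({:call, from}, :unpack_bag, data),
    do: {:next_state, :unpacked, data, [{:reply, from, :ok}]}

  def arriving({:call, from}, {:talk, _msg}, data),
    do: {:keep_state, data, [{:reply, from, :yelled_at}]}

  def unpacked({:call, from}, :get_coffee, data),
    do: {:next_state, :ready_to_work, data, [{:reply, from, :ok}]}

  def unpacked({:call, from}, {:talk, _msg}, data),
    do: {:keep_state, data, [{:reply, from, :yelled_at}]}

  def ready_to_work({:call, from}, {:talk, msg}, data),
    do: {:keep_state, data, [{:reply, from, {:on_it, msg}}]}
end
```

Each state function receives the event type, the event itself, and the server data, as the talk describes next.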
Like we said, we're using state functions. So, each function is going to receive in its first parameter the actual event type, in the second parameter the event, and in the final parameter the server data. We also don't have reply or no reply. You're going to notice that the return values of each function are uh different actions that the state machine behavior can take. You can keep state, you can transition to a next state, and we're about to learn that you can do a lot more. Okay, let's see if I can potentially run the code wherever my mouse is. There we go. So, we're in a Code BEAM demo. We have our dev at work. We're going to demo a basic state machine. So, I get into work. I unpack my bag. I get started with my day. Um, people talk to me and I yell at them, because that's who I am in the morning. Um, it just keeps going. They keep bothering me. But eventually, eventually I start taking care of the actual tasks. Um, it's a very basic example. Let's get to the refactoring part. So in reality, I don't have an API for someone that tells me that my bag is unpacked. How do we model the state machine doing things on its own? The answer is internal events. Uh, internal events let a state machine define an event that will be triggered immediately. It's very similar to the continue directive, if you've used those before. Um, and it lets us trigger an action from within our state machine's internals. So let's do a bit of refactoring in our code. We're going to add an action directive uh at the very end of our init callback that just tells us that we're sending an internal event. Uh, it's just going to inject a little event as we move on to the state. Now, we're going to also adjust our state callbacks. You'll notice the new internal atom. Uh, that's the new event type for any internal event that comes in. Uh, we've also removed the old uh clauses. We don't have a lot of clauses.
Um, after initializing, we can only unpack our bag, and as we're moving directly through the state machine, there's no other events that can come in to the mailbox. We can also reduce our public API. Like I said, we are refactoring here. We are lowering the amount of footprint that we need for our server. Let's take a look at how it runs. Again, live demos are terrifying. And... internal events. All right. So, you can see instead of getting yelled at almost immediately, I started unpacking my bag. I go and grab my coffee and then people start talking to me again. I know you've all felt this pain. You don't want anyone to bother you beforehand. Um, so how can we start kind of refactoring a little bit more to make me nicer in the mornings? Um, well, when I make coffee, I don't actually make the coffee. I'm way too lazy for that. I have a Jura. I click a couple of buttons. It makes everything for me. And then I sit around and kind of wait, and I don't yell at people. I just kind of hold my hand up and stop them from coming at me. So, Gen State Machine lets us postpone events. We only handle them when we actually want to handle them. The state machine engine is automatically requeueing our events and we don't have to think twice. So, we just adjusted our unpacking bag state to handle the internal event. Let's make another adjustment. We're going to add an async task that creates our coffee. Um, we're going to also rename our state from getting coffee to waiting for coffee. Um, the async task is going to occur in a separate process. Um, and because we're using Task.async inside an OTP behavior, we're going to just receive the task's response message back in our own mailbox. So, let's change our state function a little bit. We renamed it into waiting for coffee. Now instead of yelling, I am postponing the events. I hold up my hand and wait for it. Uh, when I do get the info callback from my Task.async, uh, I have my freshly brewed coffee.
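The internal-event and postpone refactor described above might look like the following sketch. It is an illustration under stated assumptions, not the talk's actual code: the state names, the trivial Task.async standing in for the coffee machine, and the message shapes are all invented.

```elixir
defmodule MorningRoutineV2 do
  # Refactored morning routine: an internal event fired from init/1 drives the
  # machine forward (no external "bag unpacked" API needed), and calls that
  # arrive while coffee is brewing are postponed instead of handled.
  @behaviour :gen_statem

  def start_link(_opts),
    do: :gen_statem.start_link({:local, __MODULE__}, __MODULE__, :ok, [])

  # Public API: colleagues talk to me; the reply arrives when I'm ready.
  def talk(msg), do: :gen_statem.call(__MODULE__, {:talk, msg})

  @impl :gen_statem
  def callback_mode, do: :state_functions

  @impl :gen_statem
  def init(:ok) do
    # The internal event is injected immediately, ahead of anything in the
    # mailbox, as the machine enters its first state.
    {:ok, :unpacking_bag, %{}, [{:next_event, :internal, :bag_unpacked}]}
  end

  def unpacking_bag(:internal, :bag_unpacked, data) do
    # Kick off the "coffee machine" in a separate process; its result will
    # arrive in our mailbox as an :info event.
    task = Task.async(fn -> :freshly_brewed_coffee end)
    {:next_state, :waiting_for_coffee, Map.put(data, :task, task)}
  end

  def waiting_for_coffee({:call, _from}, {:talk, _msg}, _data) do
    # Hand goes up: the engine requeues the event until the state changes.
    {:keep_state_and_data, [:postpone]}
  end

  def waiting_for_coffee(:info, {ref, :freshly_brewed_coffee}, %{task: %Task{ref: ref}} = data) do
    # Clean up the task monitor, flushing its :DOWN message.
    Process.demonitor(ref, [:flush])
    {:next_state, :ready_to_work, data}
  end

  def ready_to_work({:call, from}, {:talk, msg}, data) do
    # Postponed calls are replayed here automatically after the transition.
    {:keep_state, data, [{:reply, from, {:heard_you, msg}}]}
  end
end
```

From the client's perspective `talk/1` is an ordinary synchronous call; the postponing is invisible.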
And of course, whenever my coffee machine breaks down, I crash. Um, you'll also notice that I'm not sending a reply anymore in our handle_call. Um, from the client's perspective, this is all synchronous, but from my perspective, I'm just postponing the events. So now we can reduce our public API even more. The task is running inside. We don't need that brewing coffee function. Let's run some code demos. All right. So we can see it's a little bit different. Um, I come in, I grab my coffee, people start talking to me, but I make them wait. A little bit less yelling, but then when I get my coffee, they still talk to me because they won't let me finish doing what I want to do. Um, we still have something left in our public API, and it's that finishing the coffee function. Um, coffee, I don't get told that I'm done with it. Um, it just takes me some time to drink. So let's take a look at our final feature of gen state machine that we're going to see today: arbitrary timeouts. Um, raise your hands if you've used Process.send_after in a GenServer. Yeah, everyone. Classic. Um, gen state machine lets us define timeouts as actions. So they run automatically for us, and they're run by the engine itself. So for drinking coffee, we're going to add a timeout action. After a certain amount of time, we have a finished coffee timeout. It's going to take me 5 seconds. I chug my coffee. I burned my tongue, but it's okay. We're going to handle our timeout properly by just adjusting the event. And we're going to postpone everyone who is coming to me while I actually drink my coffee. And finally, our public API is down to a normal public API. All you need is just a request response. So altogether our final code looks a little bit like this, if I can get my mouse over to the other screen to scroll through. It's a lot cleaner. Our public API is very small. The state machine engine handles pretty much everything for us. I like it.
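The timeouts-as-actions idea above can be sketched with a named ("generic") timeout in gen_statem terms. This is a hedged illustration, not the speaker's code: names are invented, and the 5-second chug is shortened to 50 ms so the example runs quickly.

```elixir
defmodule CoffeeTimer do
  # Entering :drinking_coffee arms a named timeout as an action; the engine
  # delivers it back to us as an event. No Process.send_after/3 calls and no
  # timer references to track.
  @behaviour :gen_statem

  def start_link(_opts),
    do: :gen_statem.start_link({:local, __MODULE__}, __MODULE__, :ok, [])

  def status, do: :gen_statem.call(__MODULE__, :status)

  @impl :gen_statem
  def callback_mode, do: :state_functions

  @impl :gen_statem
  def init(:ok) do
    # Arm a generic (named) timeout on the way into the first state:
    # {{:timeout, name}, time, event_content}.
    {:ok, :drinking_coffee, %{}, [{{:timeout, :finished_coffee}, 50, :chugged}]}
  end

  def drinking_coffee({:timeout, :finished_coffee}, :chugged, data) do
    # The engine fired the timeout for us; move on with the day.
    {:next_state, :ready_to_work, data}
  end

  def drinking_coffee({:call, _from}, :status, _data) do
    # Anyone asking while I drink waits until the timeout fires.
    {:keep_state_and_data, [:postpone]}
  end

  def ready_to_work({:call, from}, :status, data) do
    {:keep_state, data, [{:reply, from, :ready}]}
  end
end
```
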
Let's run the uh the last code example of today. All right, I come in, get my coffee, my hand goes up, people talk to me, I don't yell at them, I keep not yelling at them, and eventually I start taking care of tasks. It's not bad. Who likes this? I do. Okay, so that was gen state machine. The talk is about AI. So, how does all of this remotely relate to AI agents? First off, we're going to define what an agent is. There's a thousand different definitions out there. A lot of companies say they're doing agents. I'm sure you've heard of tons of them. I personally like Simon Willison's definition, which is that an LLM agent runs tools in a loop to achieve a goal. Although, um, me and my co-worker recently came up with a tiny little adjustment to it, which is that an LLM agent runs tools in a loop to achieve a goal with a contextual stop condition. Meaning the agent not only runs the loop, but it decides when it's done. Let's take a quick hypothetical example of an agent. I want to ask my agent to find me something to do after the conference. I haven't been to Berlin in like 5 years. Uh, it's my first conference. I want to celebrate. I want to go wild. So given a few tools and the task, the agent is going to maybe think a little bit, do some searches, do something. Eventually, it's going to arrive at suggestions. I'm sure we've all seen this happen with ChatGPT, Gemini, or Claude. But how does an agent actually work? What's it doing behind the scenes? ReAct: reason, act, observe. ReAct is the sort of design pattern that kicked off the whole agent thing, before that's even what they were called. It was conceptualized by Google in 2022 for optimizing tool calls. Um, all of the agent patterns nowadays are very, very based on this, with slight variations. At its core, it's pretty simple. Given a task, you prompt it for reasoning. You then prompt it for which tools you should use. You generate the tool calls, of course. You run those calls.
You do some observations and you send them back to the LLM. Sometimes you repeat. Eventually the LLM is going to say it found the answer, completed the task, here's a result. Looks a little bit like this. Who thinks that seems familiar? Okay. So, let's make some code. Let's make it as a gen server first. Kind of ease you into things. We have our classics. We use our GenServer. We're going to add a Logger because that's a good thing to do. Um, we define a struct for our state just to kind of start using types. Um, you can already see that there's a lot of data here. We have our task reference, an ID, some sort of ref that we can use. We have the initial message. We have our list of messages, because we are, after all, working in a chat-based world. We have a set of references for tool calls. Um, we're going to use our latest and greatest LLMs. So, we want to parallelize as much as possible. We're going to receive a lot of tool calls. We're going to run them all at once. We want to run fast. We have our final result. Um, we have a bunch of subscribers. So, whenever we create a task, we want to let users know when the task actually completes. Um, very classic in a LiveView to be able to subscribe to something happening. Uh, and then we have a reference for a timeout and a timer reference. Uh, why both? Well, cancelling a timer with Process.cancel_timer does not guarantee that the message is necessarily canceled. It's a fallacy that I have fallen into myself many times. Um, so because of this non-guarantee, we have to make sure that our timeout message is always the latest one that we want to get. We have our public API. It's pretty simple. Start link, we all know that. We have an API that lets us be notified. We're using cast. Like I said, we want to notify asynchronously so that we don't block LiveViews or clients or things like that. Um, we're going to cast the PID as our subscriber, and then our GenServer is going to notify us later on.
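The timer-reference pattern mentioned above deserves a concrete sketch: Process.cancel_timer/1 can return after the timeout message has already been delivered to the mailbox, so each armed timeout carries a unique ref and stale refs are ignored. This is an illustrative reconstruction, not the talk's code; the module and message shapes are assumptions.

```elixir
defmodule TimeoutGuard do
  # Each call to arm/2 cancels the previous timer, but the old timeout
  # message may still have slipped into the mailbox. Tagging every timeout
  # with a fresh ref lets us drop the stale ones.
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)
  def arm(pid, ms), do: GenServer.call(pid, {:arm, ms})

  @impl GenServer
  def init(:ok), do: {:ok, %{timeout_ref: nil, timer_ref: nil, fired: 0}}

  @impl GenServer
  def handle_call({:arm, ms}, _from, state) do
    # Best-effort cancel; the old message may already be delivered anyway.
    if state.timer_ref, do: Process.cancel_timer(state.timer_ref)
    ref = make_ref()
    timer_ref = Process.send_after(self(), {:timeout, ref}, ms)
    {:reply, :ok, %{state | timeout_ref: ref, timer_ref: timer_ref}}
  end

  @impl GenServer
  def handle_info({:timeout, ref}, %{timeout_ref: ref} = state) do
    # Matches the latest ref: this is the timeout we actually care about.
    {:noreply, %{state | fired: state.fired + 1}}
  end

  def handle_info({:timeout, _stale_ref}, state) do
    # Stale timeout from a cancelled timer that reached the mailbox anyway.
    {:noreply, state}
  end
end
```
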
We also have a little convenience function just to do an await little receive block. I like adding those. Uh, now we're going to get into the meat of the GenServer. So, in our init, we create our state. We trigger the continue because we want to do one thing at a time. Wait a second. That's not a handle_continue, though. We have a little bit of spaghetti already. We don't really know what's going on next. We're going to have to jump. This is the handle_cast to receive a new subscriber. This isn't so bad, though. We're going to keep going. Uh, we have our reasoning step. Just a simple handle_continue. We're going to call our LLM. Um, and then we go into tool calling based on the output. Our tool calling step takes our latest reasoning, extracts tools from it. Um, like I said, we're running things in parallel, so we fire off async tasks and then a timer, so that if we take too long, we simply time it out. We update our state and then, well, where do we go next? Once again, we're kind of jumping around. We're hitting some spaghetti. See, the handling code is all going to be in handle_info. So we jump down to that and we're receiving our messages back for any results. Now here it's basic handling. We receive the result. We add a message. We keep waiting. If we're done, we move into observations. We're also handling the down message just in case the task crashes, so we can make sure that we just say we failed. Um, of course, Task.async is linked to the caller, so this is not a production way of doing things. You should use a task supervisor. Um, but for now, we're going to keep it simple. We're going to kind of fake it till we make it. Um, since we're down here already, I'm going to also show you the timeout. Um, we time out. Pretty basic. We're just going to drop all the data. We're going to add a message. We have our clause for ignoring old references, just like I mentioned earlier. Okay. Wait, wait, wait. We're going back up.
Handle_continue. We have our observations. That's another state. Uh, and then if we hit our final answer, we're going to move into notifying. If we don't, we're looping. We're just going to go back to the reasoning. So, overall implementation: 182 lines of code. I'm going to take a pause here. How are we feeling? Maybe breathe a second. Um, I don't really like this. It's a lot of jumping around. We're having to buffer our subscribers. We're having to handle old timeouts manually. There's a lot of things that I wouldn't want in my code. So, let's uh let's turn it into a state machine. So, we're going to start off with what we did before. We have our behavior. We require the Logger. We've got to have that, just in case. We're going to create our struct. Notice it's a little bit smaller. Um, and we called it data instead. Why? We're being declarative. This is not our state. It's our server's data. We also have no subscribers and no timer references, because the state machine is going to take care of all of this for us. We have our public API. Start link just delegates to gen state machine instead of the GenServer. Um, you'll see that little quirk that I mentioned about the local naming. We have our child_spec also manually added. Um, and then we have our other two public API functions. Pretty identical so far. Not too many changes. We get into our callbacks. We're going to set our callback mode with the state functions. And uh you'll see that in our init we get the first message and we move into our reasoning step with an internal event. Our reasoning state function, it's just like our unpacking bag function. We can't arrive at it without any other events, so we only have one clause. Run the reasoning. Now inside the function, again, most of this is actually pretty similar. Once it completes, we're moving into our calling tools state. Calling tools: also pretty similar. We extract the calls. We loop over them and create tasks. Fire them off.
Store our references. Something's missing. We're not doing Process.send_after anymore. Instead, we have a new thing in our return value. This is a state timeout. Now, we talked about arbitrary timeouts, but gen state machine has a lot of different timeouts. Um, arbitrary timeouts, they just fire whenever. They're not connected to anything specific. It's very, very similar to Process.send_after. State timeouts are a little bit different, and I think much cooler. Instead of being agnostic to states, they're fully tied to them. There's only one state timeout at a time. When you switch the state, it automatically cancels. It's discarded. You no longer manage any references to make sure that you're receiving the latest timeout. It's amazing. We just removed an entire class of complexity from our code. All right, let's go into state functions for async task handling. So, we're waiting for tool calls to arrive. We're just going to wait for them. Again, more of the same. It's the only state where we might actually receive other unknown events, because we're waiting for messages. So, that's why we have our catch-all clause over here. Any event that we receive while we're waiting for tool calls to complete, we're postponing. We don't want to handle them. They'll be handled by someone else. After that, we move into observing. Very similar to what we already did. Um, if we're done, we move to completion. If we're not done, we go back to reasoning. Uh, you're going to notice here another fun little quirk. Who sees it? Let me highlight it for you. What's that zero timeout that's added as an action at the very end? This is where we get a little bit into the weeds. Uh, gen state machine is a little bit different than a normal process with a mailbox. It actually has two queues. One is the mailbox, one is the scheduled queue. Whenever we change states, the scheduled queue runs first, immediately. That's part of the magic of internal events.
Internal events are injected into the head of the scheduled queue. Postponed events are placed in the scheduled queue to be run whenever the state changes. Now, why do we add a timeout for 0 milliseconds with a little flag for stop? See, the engine sees this 0 milliseconds and it's smart enough to say, that's not a timer. So, we're not going to start a timer. We're just going to add an event to our mailbox. It's pretty nice. So when the observing state decides that we've finished the final answer and we're done, we move into our done state. First thing that happens is that all of our postponed events get run from our scheduled queue. And that's anyone who requested a notification for subscriptions. That's all of it, managed by the gen state machine. Once all those events are run, our normal mailbox starts running. The normal mailbox has our timeout event, and that lets us know that we can stop the process and complete the task. It's a little bit funky. So, all in all, 162 lines. Not that much smaller, but we've removed a bunch of different things that we no longer need to manage. We don't have the timer references. We don't need to hang on to our subscribers and buffer things manually. Our code is fairly focused and readable. The only stuff that it's actually doing is what's necessary for our server to run properly in its behavior. And our behavior is, well, I hesitate to use the word verified with these guys in the audience, but I would say it's verified to be exactly what we want to be doing at any point in time. How are we feeling about this? No shows of hands at all. Wow. I like it. It reduces a lot of things that I have to manually take care of. A lot of it may uh be a little easier to understand and track, and I know what each state does. Of course, it's not a full-fledged example. Some people might notice that there's things missing. I don't handle empty tool calls. We're not doing any retries. We're not limiting the number of loops.
We don't have the task supervisor. But I think it provides a good starting point. It showcases how the AI agent pattern can be combined with a gen behavior to create really, really powerful process loops. Let's go a bit crazy. We've covered gen state machine. We've covered the basics of AI agents. We can see how simple the patterns become and how powerful the combination of the two is. But let's kick it up a notch. Remember when I said at the beginning how gen state machine is used for networking? It's really protocols, things like that. Well, networks are not the only things that have protocols. I am also a protocol. See, on weekdays I go to work, I drink coffee, I do tasks. On weekends, I'm mostly sleeping. Uh, but I am one process. So, what do I do? I wake up, I classify what day it is, and then I move on to deciding what states I actually have. Of course, at the end of the day, I always yield back. I go to sleep. My internal classifier wakes me up. Well, it's not the internal classifier, it's the alarm clock, but same thing. Uh, and all of my internal processing gets shifted depending on what day of the week it is. But I don't change. Can we do this in gen state machine? Of course we can, guys. I wouldn't have brought it up otherwise. This is all part of the presentation. You see, gen state machine allows us to actually decide to swap the callback module. So if, for example, based on a specific classifier, I need to use a completely different state machine, then I can tell the gen state machine engine to use that state machine for the time being. Let's dive in. So you'll notice that our state machine engine is a lot shorter now. Okay, not as much. Our init function, instead of uh going into unpacking bags, starts off by waking up and triggers an event to classify the day. Our classify function is the fun part. We look at the day and we check to see if it's a weekday or a weekend.
And then we push a new callback module as an action. This tells the state machine engine to use that new module from now on, at least until we pop it back. So when it's a weekend, we go into my weekend routine. We start the day. When it's a weekday, we go to the weekday routine. And again, we start the day. You're already familiar with that one. So we're going to take a peek at the weekend routine. It's another gen state machine module. Uh, I transition from waking up to starting the day. Uh, and then of course I immediately transition into sleeping. I set a timeout, because I'm not going to do anything else on a weekend. And that alarm triggers a pop of the callback module. So we go right back to our original state machine and shift back into classification. Everything else gets postponed. I turn off my Slack or my Discord or whatever messages you want. I'm not doing anything on a weekend. Conversations are not static. They're really dynamic. Anyone building AI agents needs to make sure that their conversational engines are also dynamic based on the given context. See, the basic ReAct pattern, it's flexible, but it doesn't do that well in all chat scenarios. For example, if I just want to say hello, do you really need to reason and take actions just to figure out how to say hi back? Can we apply the same logic, thinking about how AI agent flows are protocols, to what we've done so far? Once again, we most certainly can. Again, this is the point of this talk. So, we're going to gloss over the beginning. This is a lot of the same. We've already covered the data, the initial parts of the code. Let's take a look at the adjustments that we've made. Similar to my morning routine, we start by classifying. Classification is going to be anything. It's, you know, an LLM, a pattern match on the word hello, the length of the message, a local model running Nx.Serving.
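The callback-module swap described above can be sketched with the {:push_callback_module, module} action (its counterpart, :pop_callback_module, hands control back). This is a hedged illustration, not the talk's slides: module names, states, and the classifier are invented, and the pop path is only noted in comments to keep the example small.

```elixir
defmodule Routine do
  # Classifier machine: depending on the day, push a different callback
  # module and let it run the same process with the same data.
  @behaviour :gen_statem

  def start_link(day),
    do: :gen_statem.start_link({:local, __MODULE__}, __MODULE__, day, [])

  def current_mode, do: :gen_statem.call(__MODULE__, :current_mode)

  @impl :gen_statem
  def callback_mode, do: :state_functions

  @impl :gen_statem
  def init(day),
    do: {:ok, :waking_up, %{day: day}, [{:next_event, :internal, :classify}]}

  def waking_up(:internal, :classify, %{day: day} = data) do
    if day in [:saturday, :sunday] do
      # Hand control to a whole different state machine module; subsequent
      # events (including the :begin internal event) go to Routine.Weekend.
      {:next_state, :start_of_day, data,
       [{:push_callback_module, Routine.Weekend}, {:next_event, :internal, :begin}]}
    else
      {:keep_state, data, [{:next_event, :internal, :begin}]}
    end
  end

  def waking_up(:internal, :begin, data), do: {:next_state, :working, data}

  def working({:call, from}, :current_mode, data),
    do: {:keep_state, data, [{:reply, from, :weekday_work}]}
end

defmodule Routine.Weekend do
  @behaviour :gen_statem

  @impl :gen_statem
  def callback_mode, do: :state_functions

  # init/1 is never invoked for a pushed module; the engine keeps the
  # current state and data and just starts dispatching to this module.
  @impl :gen_statem
  def init(_), do: {:stop, :not_started_directly}

  def start_of_day(:internal, :begin, data),
    do: {:next_state, :sleeping, data}

  def sleeping({:call, from}, :current_mode, data) do
    # An alarm-clock timeout here could emit :pop_callback_module to return
    # control to Routine; omitted for brevity.
    {:keep_state, data, [{:reply, from, :weekend_sleep}]}
  end
end
```
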
I really wish I'd had time to set up a nice little demo with Nx running, but I didn't. And based on this classification, we decide what agentic pattern to use. Maybe it's just a simple answer. Maybe we want a full ReAct loop. This pattern lets us add much more functionality, allowing us to add things like guardrails, reflection, and self-correction just by popping and pushing the callback modules in control.

All right, we covered a lot. Again, take a breath, let's recap. gen_statem is an OTP behaviour for event-driven state machines. Now, that tiny little sentence hides a ton of features and potential behind it, ranging from automatic buffering of events, automatic timeout handling with cancellation, and declarative states and transitions, to way more than I'm ever going to be able to cover in such a short talk. BEAM processes are people: my morning routine and much of my life can be modeled as a state machine, and since I'm a person, and therefore a process, much of my life can be modeled as a gen_statem process. AI agents are mostly variations of a basic state machine: you receive a task, you reason about how to solve the task, you choose tools, you run, you observe, you loop, you go around and around, and then you get an answer based on the information you've observed. And AI agents, just like people, are processes, and as state machines themselves, they lend themselves perfectly to gen_statem's powerful feature set.

Guys, if you want to learn more about anything, feel free to reach out. I'm here roaming; ask questions. I'm happy to chat about anything. [applause]

>> Excellent talk. Do we have any questions in the audience? Go ahead.
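The message classifier mentioned above can be as simple as a pattern match. Here is a toy stand-in for the classifier (an LLM, a heuristic, or an Nx.Serving model in a real system); the module name and thresholds are hypothetical, not from the talk.

```elixir
defmodule AgentRouter do
  @greetings ["hi", "hello", "hey"]

  # Decide which agentic pattern (i.e. which callback module to push)
  # should handle an incoming message.
  def classify(message) do
    trimmed = message |> String.trim() |> String.downcase()

    cond do
      trimmed in @greetings -> :simple_reply   # no need to reason to say hi
      String.length(trimmed) < 20 -> :simple_reply
      true -> :react_loop                      # full reason/act/observe loop
    end
  end
end

AgentRouter.classify("hello")
# => :simple_reply

AgentRouter.classify("Summarize the last ten build failures")
# => :react_loop
```

The result maps naturally onto a `{:push_callback_module, …}` action: a greeting gets a cheap reply module, while a substantive request gets the full ReAct state machine.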
>> With the special quirk, the zero timeout you mentioned, is that something you'd always add to a gen_statem, or is it only if you're using internal events or something?

>> It is very dependent on what you're actually doing. That little quirk is just a way to say: put a message in my mailbox right now, don't trigger anything else. It's not triggering a timer, it's just adding it to the end of the mailbox. In our example we buffered all of our subscribers, so I need to know when I'm actually done with what's been buffered. Adding that little zero-timeout quirk puts something into my mailbox at the end and lets me know when I've finished running everything. You don't always need to do that. For example, in our ReAct agent loop, we're just looping until we finish by some sort of classifier: our LLM says this is the final answer. But again, it's all based on use case. More questions?

>> Wow. >> I've got one nobody wants. >> Yeah, I know.

>> We had to deal with the boilerplate in Elixir, because it doesn't give us a little bit of sugar like use GenServer. Do you think that was a mistake? Do you think we should convince the language maintainers to support a use :gen_statem or something?

>> So, I do think there is a library called gen_state_machine on Hex which can give you that boilerplate, but it's a fairly direct wrapper around the Erlang module, and Elixir's own core docs discourage doing that kind of thing. And that boilerplate's not that big: you just have your behaviour declaration and you have to add a child spec. Everything else is pretty much the same stuff you'd add anyway.

>> Thank you, Kobe, for the talk. What about when the gen_statem state machine crashes? How do you save your state, or do you save your state?

>> In the same way as when a GenServer crashes, you want to be able to figure out recovery.
So when a GenServer crashes, I typically would store my state in some sort of database; I use a process flag to trap my exits, things like that. gen_statem is an OTP process, so everything is the same in terms of behavior. You have a terminate callback that can run, and you can continuously write to whatever logs you want in order to maintain your state.

>> Anyone else? All right. Thank you, Kobe.

>> Yeah. [applause] [music]
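The two answers above (the zero-timeout quirk and crash recovery) can be combined into one hedged sketch. Everything here is an assumption for illustration: BufferedAgent is a hypothetical module, and MyStore.persist/1 stands in for whatever database the real system would use.

```elixir
defmodule BufferedAgent do
  @behaviour :gen_statem

  def start_link(opts \\ []), do: :gen_statem.start_link(__MODULE__, opts, [])

  @impl :gen_statem
  def callback_mode, do: :handle_event_function

  @impl :gen_statem
  def init(_opts) do
    # Trap exits so terminate/3 gets a chance to run on shutdown.
    Process.flag(:trap_exit, true)
    {:ok, :buffering, []}
  end

  @impl :gen_statem
  def handle_event(:cast, {:item, item}, :buffering, items) do
    # The zero-timeout quirk: a timeout of 0 starts no timer. The :flush
    # event is simply enqueued behind whatever is already in the queue,
    # so it arrives once everything buffered ahead of it is processed.
    {:keep_state, [item | items], [{:timeout, 0, :flush}]}
  end

  def handle_event(:timeout, :flush, :buffering, items) do
    IO.inspect(Enum.reverse(items), label: "drained")
    {:keep_state, []}
  end

  @impl :gen_statem
  def terminate(_reason, _state, items) do
    # Last-chance checkpoint. As the answer notes, a real system would also
    # write continuously, since terminate/3 never runs on a brutal kill.
    MyStore.persist(items)
    :ok
  end
end
```

The zero timeout is thus a "mailbox barrier" rather than a timer, and terminate/3 plus trapped exits is the standard OTP hook for best-effort persistence on the way down.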

Video description

✨ This talk was recorded at Code BEAM Europe in November 2025. If you're curious about our upcoming event, check https://codebeameurope.com ✨

gen_statem is a rarely talked about and rarely used OTP behaviour. Most BEAM developers reach for GenServer when building processes, but in the world of AI agents, gen_statem is the hidden gem that transforms complex agent loops into declarative state machines. Learn why gen_statem's behaviour aligns with AI agent patterns, enabling developers to write cleaner code describing how their agents behave.

You'll learn how to build gen_statem processes, using features such as state functions, internal events, and postponing to build ReAct (Reasoning/Actions) AI agent loops. This talk is perfect for anyone building AI agents with Elixir who wants to level up their implementation with OTP behaviours.

Let's keep in touch! Follow us on:
💥 Bluesky: codebeam.bsky.social
💥 Twitter: codebeamio
💥 LinkedIn: code-sync
