Ruby on Rails · 1.8K views · 34 likes
Transcript
Welcome to On Rails, the podcast where we dig into the technical decisions behind building and maintaining production Ruby on Rails apps. I'm your host, Robbie Russell. In this episode, I'm joined by Alexander Status, a principal software engineer at AngelList. Alex's team has helped evolve a complex Rails monolith over the last several years. From deep survey integrations to merging large apps into a modular engine structure, AngelList's backend powers some of the most intricate business logic in investing, accounting, and banking. And they've done it all while staying committed to Ruby on Rails. Alex joins us from Chattanooga, Tennessee. All right, check for your belongings. All aboard. Alexander Status, welcome to On Rails. >> Hey Robbie, happy to be here. >> So I want to ask you a quick question: what keeps you on Rails? >> Yeah, it's a good question. I had another engineer, we were working on a new product, message me this weekend. He said something along the lines of, "Oh, there's nothing quite like rails new." I think Rails is just really high quality. A lot of the gems have been around for a really long time and are battle-tested. Everything kind of just works. And you can't beat Active Record when it comes to interacting with the database. So that would be why I'm on Rails. >> Active Record is very much a special thing. Admittedly, it's one of my favorite things about Ruby on Rails as well, and something I wish I'd had prior to my experience with Rails. What other frameworks and stacks did you have experience with prior to Rails? Presumably you worked with other programming languages. >> Yeah, in the distant past, maybe ten years ago, I'd used Entity Framework with .NET Core and C#. Honestly, I kind of liked it. LINQ has a cool, expressive way of building queries. More recently, we've used Prisma quite a bit; we have some Node backend. I don't like Prisma as much.
Try not to say anything bad on the internet, right? But working in Active Record has an ergonomics to it that's pretty hard to beat, for sure. >> Definitely. So, one of the reasons I wanted to have you on the podcast was to talk about how AngelList is approaching using Rails. I believe you have maybe one or two Rails monoliths there. For our listeners, could you paint a picture of the architecture you're working with today at AngelList, and approximately how large is the engineering team at this point in time? >> The engineering team is maybe 40-ish engineers (don't quote me on that, give or take 10), but not super large. Our core product is entirely Rails-backed. More recently, we acquired a company that had a product built in Node, so we've got some code there as well, but the core product is all Rails. We have a Rails monolith with a few Rails microservices floating around. We power all of our authentication that way. We use Devise, we use GraphQL-Ruby, and then we've got some Next.js frontends that interact with that. >> Are the Next.js apps separate repositories, on a basic level, or are those all kind of... >> Same repo, obviously different applications, right? The Next.js apps sit in a /frontend directory and the Rails app is in /backend, and we use a couple of different Rails engines that we serve up through a single application. >> Interesting. Was that all intentional from the beginning? Is that kind of a monorepo, or are those separate deployment processes? >> Yeah, definitely not. We had one main monolith and then we had some microservices. In the last year or two, we actually took one of those microservices, refactored it into a Rails engine, and moved it into the monolith application. I would say the reasoning for that was less technical.
It was because over time the two had become co-mingled in a way that we wanted to undo, and there was a lot of complexity due to networking, transient failures due to network stuff, and it was just easier to be able to call the Ruby code directly. We still have them as separate modules. We use Packwerk, which is a static analysis tool that I think Shopify built, to manage the dependencies between the various modules inside of our Rails monolith. We also have a few other small products that we've built and shipped that are also built as separate Rails engines inside of the same monolith. >> You mentioned that you acquired a Node app. What was that process like? Did the team come along with that, to some extent? >> Yeah, a few, but most of the people working on the Node app now we've hired more recently. We have some engineers that were pre-existing at AngelList working on it as well. When you acquire a company, obviously the two things are totally distinct, and part of what we're working on now is killing the duplicative parts. So we're moving some things one way and some things the other way, trying to really clearly define the boundaries between the products, whereas before there was some overlap, which is what made it such a nice fit for us. >> Were you around when the decision to start using Packwerk came about? >> Yeah, I drove a lot of that. The reason it was a microservice to begin with was primarily because it really was supposed to be a separate set of concerns. I'm trying not to get overly specific about the details or put anybody to sleep, but the separate microservice was really intended to house a lot of our legal and economic concerns. AngelList is a platform that helps founders and general partners raise and deploy venture capital, and so we track a lot of legal and economic structures.
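As a rough illustration of the Packwerk setup described here (the package names are hypothetical, not AngelList's actual modules), each module gets a package.yml that declares which other packages it is allowed to depend on, and CI fails when code reaches across an undeclared boundary:

```yaml
# packs/funds/package.yml (hypothetical package)
enforce_dependencies: true
dependencies:
  - packs/accounts  # funds code may call into accounts
  - packs/legal     # and into legal, but nothing else
```

Running `bin/packwerk check` in CI then reports any constant reference that violates the declared dependency graph.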
Our ERD is very complex, and our separate microservice was largely intended to house a lot of that. That's distinctly different from, say, our marketing site, or a lot of the stuff that happens before we formally define a fund or collect capital into it. We were very intentional about separating those two things, and we're still very intentional about it. But over time, as you build your startup, you ship stuff, right? >> And that stuff doesn't necessarily always conform to the very precise preconceived notions of what should live where. You make concessions to meet timelines. You do stuff that's kind of hacky. You make it work. So part of why we consolidated the two things was not because we wanted to merge them into a bigger ball of mud; it was because they had already, over time, turned into a ball of mud. And it's much easier to unwind that when you're working in the same repository, versus managing it across an API when it's become deeply integrated already. That was a large motivating factor: reducing complexity, increasing simplicity. Right. Back to the original question: we kept Packwerk, and we implemented Packwerk specifically to hold those conceptual boundaries very neatly. >> When you're thinking about setting those boundaries, how much of that is strictly the architecture or the data related to those different domains, versus how your team might separate boundaries between teams themselves, like who's working on what? >> Yeah, really good question. There's that law, I can't remember what it's called, that you ship your org structure, or whatever it is. I would say we really don't have a culture at AngelList of "I'm a this engineer." I mean, it certainly happens, and you can't have totally fungible engineers all the time.
It's really expensive to do that, especially when you're working across many stacks. So I wouldn't say the point of it was separating out the teams, or being able to have one team operate in a silo from another. It really was mostly designed around separating the concepts and the business domain and keeping things organized that way. This is probably a naive statement, but we work in a very complex domain. I'm sure others out there are like, "Ha, mine's more complex," but we deal with a lot of legal requirements, a lot of economic requirements; the relationships are very complex and our data graphs are highly interconnected. I'm a big believer in getting the ERD right. If you get the ERD right, then the code kind of falls out of it, but if you get it wrong, then you have to fight really hard to make the logic work. I think the major driver here is really around establishing and maintaining the conceptual, business-domain boundaries and holding firm on that. >> For the few listeners who might be wondering what ERD means? >> Oh yeah, entity relationship diagram. Just the graph of our models. In Rails speak, you have your models, your ActiveRecord::Base subclasses, and then you have your associations. The associations are the edges and the models would be the nodes. >> Thanks for helping clarify that. When it comes to using things like Packwerk, I've noticed a trend, since I'm speaking with a lot of people, and this is something I want to clarify for the audience: I'm talking with a lot of companies that are maybe a little larger in scale. So I want to make sure everybody's thinking about when there's a benefit to using Packwerk, versus whether everybody should be using it, because everybody that's been on On Rails so far has mentioned Packwerk, but that might not apply to everyone.
Do you think there's a point where an organization or an application gets to a certain scale or a certain level of complexity where something like that can be really helpful? Do you feel like it would have hindered the project earlier on? Is it something you should tack on later, if and when you need it? Where's that distinction in your mind? >> Yeah, it's a good question. I wouldn't be too prescriptive here. My general philosophy is: if you can, you should enforce a rule with a static analysis tool, something you can run in CI. This is why we use formatters. This is why we use linters. This is why at AngelList we use Sorbet. It allows us to not have to be as disciplined when we approach these types of problems. I think Packwerk fits into that category very nicely. If you have different product verticals, or different domains like we do, it's really helpful to have something that consistently checks and enforces those boundaries. Packwerk has kind of gone through its own evolution; they tried to do this privacy thing and then pulled it out. At the end of the day, what it does is define a dependency graph between modules in a Ruby application. For us, we use that to make sure we aren't calling across; we maintain almost a linear dependency tree, and that's primarily what we use it for, and it helps. It catches things. It's kind of like type checking, also: most people are pretty good about not shipping bad stuff, but every once in a while you make a mistake, and it's nice to have that running in CI to keep you honest. It's easier to catch it with a tool than it is in review, let's put it that way. >> And you just mentioned Sorbet. How does using something like Sorbet influence your Ruby style as an organization? >> Yeah, good question.
I think it was Jake Zimmerman, one of the core Sorbet guys, who said something along the lines of (and I'm totally paraphrasing): the point of Sorbet is not so that you can keep writing Ruby the way you want to write it. Sorbet, the type system, is supposed to influence the way you write the code. And I find that to be very true. For instance, something I've had great difficulty trying to add typing to is ActiveSupport::Concern. The included hook, anything that uses class_eval or whatever, is not easy for Sorbet to understand, at least statically. Metaprogramming is such a big part of the Ruby culture, and that type of stuff can be so prominent in Ruby code, but it just doesn't fly in Sorbet. Or it can, but you have to get pretty mean with it. So yeah, we tend to write very different Ruby code than your typical Ruby. We don't use a lot of multiple inheritance. We don't even really use a lot of instances. Of course our Active Record models are instances of classes, but we use a lot of class methods on modules to define service logic, to try to make it more functional. For one, it works better with the typing system, certainly, and it also makes the code a little more functional, which is a little easier to understand. >> We had a previous call, for our listeners, to talk through some of the conversation topics for this interview, and one of the things you mentioned is that your team intentionally avoids a lot of, I'm air-quoting, "Rails magic." What does that look like in practice? >> Yeah, in practice, there's a pretty funny video of some engineer from five years ago or so describing his experience debugging a series of Active Record callbacks, where you make some change somewhere and then it's like, "How did this occur?", and you're jumping through many chains of callbacks.
So yeah, we don't really use callbacks. We're not ultra-dogmatic about it; there are times when callbacks are appropriate. But we really try to avoid anything that isn't explicit and obvious. Callbacks, I would argue, are more implicit behavior: things happen and then you have these side effects. It's not like you have a clearly defined method. To understand callbacks really well, you have to understand how all the hooks work and what order they fire in: after_commit, after_save, after_validation, right? Another component of this: a lot of our engineers coming in don't know Rails or Ruby very well. So I think we've partly developed a style that caters to a more imperative or procedural sort of coding. We try to avoid metaprogramming. Monkey patching is amazing but also a little bit evil; it's definitely a double-edged sword. We avoid callbacks if possible. We don't use a bunch of multiple inheritance. When you look at Rails itself, you see these deeply nested includes, and you have to figure out where a method is coming from. Chasing references around like that is a little easier with Sorbet now, but at the time we adopted the practice, it was pretty intense to figure out what was going on unless you already knew how the internals worked. And I think that's what people really struggle with with magic: you have to have this deep knowledge of how the internals work, and you can't see it, and that's why it feels magical. We write in a way that tries to avoid that.
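As a minimal sketch of the "explicit and obvious" style described here (the names are hypothetical, plain Ruby rather than AngelList's actual code), the side effect that might otherwise hide in an after_save callback becomes an explicit, sequential call that a reader can see at the call site:

```ruby
# Hypothetical sketch: side effects as explicit calls, not AR callbacks.
module Widgets
  module CreateService
    # Class methods on a module, "no magic": nothing fires implicitly.
    def self.create!(name:, store:, notifier:)
      widget = { name: name }   # stands in for an ActiveRecord model
      store << widget           # the "save", called explicitly
      notifier.call(widget)     # the side effect, also explicit
      widget
    end
  end
end
```

Compare with an after_save callback defined on the model, where the notifier would fire invisibly on every save, in whatever order the hook chain dictates.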
So what does that look like on, say, a typical CRUD process? If you're saving some data, whether that's coming from one of your Next.js apps or from some view in your Rails application, and it needs to trigger additional things, whether that's syncing to another system or notifying people, how does your team think about that process if you're not going to lean on callbacks? Are you doing that primarily in the controllers? Do you use something like a service-object-type approach? When you say procedural, talk us through that. >> Yeah, good question. We use GraphQL-Ruby, so everything comes in as a query or a mutation. Then we use thin models (you might even call these anemic models, if you were Martin Fowler, I'm not sure), pretty thin presentation logic, and we try to keep our business logic very clearly in the business service layer. The majority of our business logic lives inside of services, and we have a lot of modules with class methods defined on them that implement the various business logic methods. For example, it's a very common pattern to have a create service, so you might call something like WidgetCreateService.create! and pass in the things that you need, and that would create the model. I wouldn't say we're dogmatic, but we're pretty consistent about using bang methods, rather than instantiate-and-save; I mean, we do that a little bit. And then anything that needed to happen after that, we would structure as sequential calls inside of that logic. In our most complex cases, we might have something like an after-commit helper method, where it's sort of like a callback, but it's just an actual method in the service.
Maybe it's private in the class or module with the create method, and we would call it after the transaction wrapping the things we're creating. We would generally structure this as sequential calls, and then if we need to make something asynchronous, or we want something to fire and forget (a callback would typically be something we'd fire and forget), we shoot off a Sidekiq job or a GoodJob job or whatever and let it run, and not have it impact the end user experience at all. We'll recover it separately if it fails; it's not mission-critical. So that's about how we do that. >> So in, say, a create action in your controller, you're not directly interfacing with the model, you're interfacing with that create service, like a little layer in between? >> Yeah, exactly. We don't end up with a lot of thin wrappers, though, because like I said, our business domain is very complex and the graphs are highly interconnected, so typically it's hard to create something in a vacuum. You're creating some object, and it usually has follow-on objects you need to create with it for that state to be consistent. Our models we try to map very closely to the business domain. That just makes it a lot easier; it closely ties the business concepts to the code, which is a feature for us, because the business concepts are very complicated and hard to understand, so being able to read them out of the code really helps engineers figure it out. The point is that we do have a lot of logic in those create services; it's not just thin wrappers that call Active Record create and return. It's usually: create a couple of things, maybe wrap them in a transaction, maybe kick off a sync job, update an index, whatever it is.
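The create-service shape described above might look roughly like this. This is a hypothetical sketch: FakeDB and the queue array are stand-ins for ActiveRecord and Sidekiq/GoodJob, and every name is made up for illustration:

```ruby
# Hypothetical sketch: sequential calls inside a transaction, then a
# fire-and-forget enqueue after the "commit". FakeDB stands in for
# ActiveRecord; a real implementation would commit or roll back.
class FakeDB
  def transaction
    yield
  end
end

module Funds
  module CreateService
    def self.create!(name:, db:, queue:)
      fund = nil
      db.transaction do
        fund = { name: name, accounts: [] }     # primary record
        fund[:accounts] << { kind: "capital" }  # follow-on record, same txn
      end
      after_commit(fund, queue) # the "callback" is just a private method
      fund
    end

    def self.after_commit(fund, queue)
      queue << [:sync_fund, fund[:name]] # fire-and-forget job enqueue
    end
    private_class_method :after_commit
  end
end
```

The enqueue happens outside the transaction block, mirroring the after-commit helper described: the async work only fires once the records it depends on are durably created.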
When those things fail, when things don't save properly, how do you handle rolling things back? Does that come back to the controller level? Are you rescuing or raising exceptions, or can you rely a little on Active Record in those situations? >> Generally, when our create fails, it's usually due to some sort of validation, and we'd surface that back up to the user, either in an explicit error message coming from the validation or through some sort of translated domain error. It depends on how friendly the error message is; some error messages are not super friendly and you don't really want to show them to your end user, you just want to show them the 500, the red screen of death kind of thing. But we want our errors to be relatively loud, because for us it's actually pretty bad when we get these inconsistent states. Our data graph is so complex, and there are so many objects that all have to be kept together. When you get into a bad state, it's actually more expensive to recover than to just roll back and say, "Hey, that didn't work." I'd rather debug why it didn't work, make it idempotent and right, and just redo it, than try to recover midway, which can be very expensive when you have a lot of complex business logic that needs to run in the right order. There are a lot of assumptions throughout the system that these things exist, or that they exist in a certain way. When you allow them to get created halfway and then either swallow that error or don't make it atomic, other parts of your system start behaving badly as well. And that's definitely not good.
I'd rather one user see an error than take down the admin view or whatever. >> Sure. So you're typically wrapping those in a transaction, and that way, if something triggers a failure, it'll just roll everything back, and you figure out the problem, solve it, and go through the process again. >> Yeah, definitely. When we have these states that need to stay consistent, if we're creating multiple models that need to be created or updated all at once, we definitely wrap those in transactions. We actually have multiple different databases, so we run into some issues where those need to be atomic across databases, and that gets very complex, because you start having to do some gnarly things like stacking transactions: you might start a transaction on one database and then immediately open another transaction on the other database and nest them that way. You also get a lot of complexity from nested transactions, which are automatically flattened. If you open a transaction and then call some other service method that opens another transaction inside it, those get flattened into a single database transaction, so if it fails deep inside, it unrolls the whole thing, and you can get into weird, goofy states that way. So I personally try to use very narrow transactions, ones that are tightly wrapped around just the actions we're trying to perform and keep in sync. That, of course, is a totally conceptual thing that does not always work in practice; not everyone is so disciplined about how transactions are used, and we're trying to ship stuff. We're trying to deliver value; we're not trying to be ultra-nitpicky about our use of transactions in the application.
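The flattening behavior described above is ActiveRecord's default for nested `transaction` blocks (a real savepoint requires `requires_new: true`). As a toy simulation of the failure mode, not ActiveRecord itself:

```ruby
# Toy model of flattened nested transactions: a nested call just
# re-enters the outer transaction, so a failure deep in a nested
# service call unrolls everything the outer block did too.
class FlattenedTxnDB
  attr_reader :rows

  def initialize
    @rows = []
    @depth = 0
  end

  def transaction
    @depth += 1
    snapshot = @rows.dup
    begin
      yield
    rescue
      @rows.replace(snapshot) if @depth == 1 # only the outermost rolls back
      raise
    ensure
      @depth -= 1
    end
  end

  def insert(row)
    @rows << row
  end
end

db = FlattenedTxnDB.new
begin
  db.transaction do
    db.insert(:fund)          # outer work
    db.transaction do         # "another transaction" in a deeper service
      db.insert(:account)
      raise "deep failure"    # fails deep inside...
    end
  end
rescue RuntimeError
  # ...and the whole flattened transaction unrolled, :fund included
end
```

After the rescue, `db.rows` is empty: the outer insert is gone too, which is exactly the "fails deep, unrolls the whole thing" surprise described.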
So yeah, we have a lot of atomic states that we keep glued together with transactions, and when those fail, we roll them back. We also have a lot of logic that runs sequentially where it's okay for it to fail in the middle: it's recoverable, it's just idempotent by nature. We don't need pessimistic locking or whatever to keep that in line. >> Out of curiosity, what sort of scenarios led to having multiple databases? Is it because there's sensitive data in some area that needs to be more locked down? Some companies I talk to say, "We have HIPAA-compliance-related data isolated in a different database that we still interact with, but we keep the private information there." What does that look like in your world? >> So yeah, we have a few different services, and like I said before, we consolidated one of them into the same repo, and not just the same repo but the same application, actually, as an engine. Those, as separate services, naturally had different databases. We also have a bit of tech debt where a lot of our older databases are MySQL. I'm not personally a huge fan of MySQL; I've had plenty of fun times debugging MySQL 5.7 query plans and trying to deal with that. So we also have a Postgres database floating around, and we're trying to do this long, staged migration, so we have some models living in one and some in the other. One of our services has multiple databases because, well, really, we adopted it because we wanted to use GoodJob, although in hindsight, if I had known about Solid Queue, I would have used Solid Queue instead. And I apologize, I forget her name, but she said in episode one, or an earlier episode, that I think they use separate databases for it, although I can't remember any of the specifics.
I'm sorry, I'm totally flubbing this, but... >> That was Rosa, Rosa Gutierrez. >> Yeah, Rosa. So, Rosa, author of Solid Queue. I think she said that they use separate databases for it, although I forget; it sounded like they had gone back and forth a little bit. We originally adopted Postgres in one of our services so that we could use GoodJob, and it was a means to an end. Sidekiq was not doing what we wanted it to do. We wanted some observability: why are these things failing? Where are they going? We wanted to be able to query and have a little more interactivity with the queue itself. And then we also had these separate services going on that had separate databases. So now we have a few databases floating around, and we have to try to keep all of those in sync. Honestly, it's a real pain; I kind of hate it. But I think I would still keep separate databases for the separate services, kind of like what you mentioned: there's a conceptual domain thing there, and you almost want a separate split. If I could, I would consolidate to a single instance and just use different databases in the same instance, rather than actual physically separate databases in AWS or whatever. But we have a lot of other problems to solve before we unwind that one. >> I know you mentioned using Sidekiq and GoodJob, and I think in our previous conversation you also mentioned Temporal. Is that right? So it sounds like you've experimented with multiple background job tools. What have you learned along the way? >> I've learned that there is no one-size-fits-all solution for this type of work. Sidekiq has a lot of packages and gems that add on to it and provide different functionality, and you can kind of install them at will.
We've had some trouble with sidekiq-unique-jobs, for instance. Not to pick on anybody, but we've had trouble with the digest getting stuck, and then you have to go and manually clear the digest, and you're like, "Why didn't this thing update?" And it's like, oh, because the job didn't run. It's hard to recover from that type of stuff because it's hard to even know that it's happening, and we also had observability questions: why is this even happening? I think adopting GoodJob was primarily because there wasn't a lot of transparency. You know, I always say it wrong and everybody gives me a hard time: Redis. Incredible database, incredible tool, but... >> Right. >> ...you can't look at the jobs after. They're just gone; the queue is gone once it's processed. Having the ability to introspect on what happened before is a really useful tool when you're debugging jobs that have more complex stuff going on. You want to know when they failed, why they're failing, all of these reasons, and it was just hard for us to do that in Sidekiq. It was a lot easier with an actual database table backing it, so we pulled GoodJob in. The problem with GoodJob is that it's not super performant when you're interacting with a bunch of really small jobs, fire-and-forget type stuff. There's an active issue with the GoodJob dashboard. You know how you install the GoodJob gem and you can add this dashboard to your routes.rb, like an admin panel? It's incredibly slow for us. It never loads; you have to load it a bunch of times and warm the DB cache up so that it can actually serve the query. It's just kind of a pain.
And then we run into issues where it will acquire a lock on the table, and suddenly the whole table is locked up, and now we've got to figure out why the lock is still held and what's running. I would say where we're leaning now, and this is totally speculative because we haven't actually done any of it yet, and, you know, the best-laid plans: I would like to see us try to move any of the more complex work that we're doing asynchronously in a worker or a job into Temporal. Temporal, which I think came out of Coinbase, is just a really powerful workflow orchestration tool. We use Temporal's managed solution; I think it's temporal.io. The downside is it's kind of expensive, and the ergonomics are not great. It's not Active Job. I mean, this is the thing, right? Why am I on Rails? Active Job is awesome. So I think it'd be cool to build an Active Job connector for Temporal. I'm going a little tangential here, but I think it'd be really cool to see an Active Job adapter come out that would work with Temporal. It's something I've noodled on for a while but just haven't sat down to actually write. But yeah, I'd like to see us move a lot of our more complex workflows into Temporal: things where we need guaranteed delivery, things where we need observability, stuff we're not comfortable just firing and forgetting, or that needs to run a little longer, because with Sidekiq you don't want your jobs to run forever. I think we have a couple of cases where we might either keep GoodJob or adopt something else. I think we also have Delayed Job in some other services, although I've been less involved in that.
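The Active Job connector idea mentioned here could be sketched very roughly like this. This is entirely hypothetical (no such gem is implied to exist): a Rails queue adapter only has to implement enqueue and enqueue_at, and everything Temporal-side below (the client and its start_workflow call) is an assumed stand-in, not the real SDK API:

```ruby
# Hypothetical sketch of an Active Job queue adapter backed by Temporal.
class TemporalQueueAdapter
  def initialize(client)
    @client = client # assumed Temporal client connection (stand-in)
  end

  # Active Job calls this for perform_later.
  def enqueue(job)
    @client.start_workflow("ActiveJobWorkflow", job.serialize)
  end

  # Active Job calls this for delayed jobs (wait:/wait_until:).
  def enqueue_at(job, timestamp)
    @client.start_workflow("ActiveJobWorkflow", job.serialize,
                           start_at: timestamp)
  end
end

# Minimal stand-ins so the sketch runs without Rails or Temporal.
class RecordingClient
  attr_reader :calls

  def initialize
    @calls = []
  end

  def start_workflow(*args, **opts)
    @calls << [args, opts]
  end
end

FakeJob = Struct.new(:payload) do
  def serialize
    payload
  end
end
```

In a real Rails app you would register such an adapter via `config.active_job.queue_adapter`, and the workflow on the Temporal side would deserialize and perform the job; both halves are left out of this sketch.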
We've really gone through them all, and I think it would be cool to consolidate those with Solid Queue, maybe when Rails 8 comes out, and use that for our quick fire-and-forget stuff; use something like Solid Queue or GoodJob or even Sidekiq for the stuff that's going to be quick, like "queue up this email" or whatever. But coming back to the original question: as much as it offends me conceptually, there's no perfect asynchronous solution, at least that I've found. Maybe the listeners will write in and complain that some other thing is perfect, but for us, I think we're going to end up with multiple solutions. I've come to terms with that more recently, and I think it's okay. There are probably some ergonomics we need to figure out, and you have to define for people when to use what, but ultimately I think it'll be the better solution. >> Yeah, that was going to be my next question: how does your team decide what to use where? I hadn't heard of Temporal before our recent conversation. That's a platform, and I think there are open-source versions you can use as well, and they've got a bunch of examples; you can use it with lots of different programming languages. So are you actually triggering anything from your Next.js apps, or does almost all of your background processing and async work get triggered by something within your Ruby or Rails processing? >> Yeah, this is an active conversation in our Node apps as well: what do we do there? We do have a Temporal setup, but again, we're running into the same issue where you have this more complex processing job versus a short fire-and-forget type job. So, Temporal works by defining workflows and activities.
Workflows are basically comprised of some logic, although it needs to be totally idempotent, and then activities that they can call to execute actual work. So I think it's possible, and I don't know, I haven't done this, this is totally conjectural, to set up a workflow in one service that cobbles together activities from other services. I think, but I'm not sure. I think that's true. We're not doing that. We're explicit, we're straightforward, no magic right now. We're just using workflows and activities defined locally in the same service and running those. But we do have a lot of complex workflows that we need to run asynchronously in these jobs, processing jobs basically, and it's important that they don't fail in the middle, or that we know when they do so we can recover, or whatever it is. And Temporal is really, really useful for that. >> Was the Node app your organization acquired already using that? Is that how it got introduced, or is this something you implemented after that process? >> I don't think they were using it prior. No. Yeah, we implemented it after. I think the idea was: we need an async solution, a processing worker solution, and we're already using Temporal, so let's use it. There have been some issues with trying to adopt it. We're getting ahead of my expertise here, but we have some patches to make it work with the npm packages, because, you know, this is normal stuff. You have to do this kind of stuff to make things work. >> Sure, that is always the case. There are always a few interesting edge cases or things you have to figure out how to make work between whatever you're doing in your apps and some third-party service.
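The "totally idempotent" requirement on workflows can be made concrete with a toy. Temporal's real SDKs achieve this by replaying deterministic workflow code against a durable event history; the plain-Ruby sketch below is not the SDK API, just an illustration of the property: each activity's completion is recorded, so re-running the workflow from the top after a crash skips work that already happened.

```ruby
# Illustration only (not the Temporal SDK): a workflow that tolerates being
# re-executed from the beginning, because activity results are memoized in a
# durable log. Re-running it produces no duplicate side effects.
class FileImportWorkflow
  def initialize(activity_log)
    @log = activity_log # stands in for durable, service-side history
  end

  def run(file_id)
    contents = activity(:fetch_file, file_id) { "contents-of-#{file_id}" }
    activity(:load_records, file_id) { "loaded:#{contents}" }
  end

  private

  # Run the block once per (name, arg); on replay, return the recorded result.
  def activity(name, arg)
    key = [name, arg]
    return @log[key] if @log.key?(key)
    @log[key] = yield
  end
end

log = {}
workflow = FileImportWorkflow.new(log)
workflow.run("bank-file-1")
workflow.run("bank-file-1") # simulate a crash-and-retry: nothing reruns
```

This is why workflow bodies can't do their own I/O or randomness directly: all real work has to go through activities so the replay stays deterministic.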
Does using, and I'm air-quoting here, "outsourcing" that work to something like Temporal, how does that then translate back? If it runs some processing, is it then orchestrating and triggering some API endpoints in your system, like, okay, once this is finished, tell us that this is done? What does that end up looking like? And I haven't used it or looked too much into the thing, but if you trigger a thing in Temporal, are you then just checking the status to see whether it's finished and then continuing, or does it send some sort of callback to let you know that it's finished doing whatever it's trying to do, or am I totally misunderstanding how it works? >> It's just like Sidekiq or GoodJob or probably Solid Queue, although I haven't used Solid Queue yet: you still have to run worker instances, right? So these are deployed on our own in AWS, right? They're pods in our Kubernetes cluster >> okay >> and these actually take the workflows, take the activities, and process them. This is actually our code running. >> Okay. >> The database that actually stores the queue is managed through temporal.io. You can self-host, you can do all of that; it's just a lot of complexity from a DevOps perspective, I guess, to do that. And so for us, it was easier to just have a managed solution: you just connect to it and push your jobs there. >> Okay. >> They have a really nice UI. I think it's super cool. There's a burndown or time chart, and it shows you where each activity is running, what stage it's in, why it failed, and this and that. I think you can also query on it. You could certainly have it do callbacks. We don't do a ton of that. We just need to make sure that the jobs actually execute, and that they execute in the right order, or whatever it is.
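The division of labor described here, where the managed service holds the queue but your own processes pull tasks and run them, can be sketched with an in-memory toy. A real Temporal worker long-polls the service over gRPC; this stand-in just pops from a local `Queue` and reports each outcome, which is the essential loop.

```ruby
# Toy version of the worker model: the queue is "the managed service",
# the worker is "our code running in our Kubernetes pods". It pulls tasks,
# executes them, and records completed/failed status for each one.
class ToyWorker
  def initialize(queue, results)
    @queue = queue
    @results = results
  end

  def run
    while (task = @queue.pop) != :shutdown
      begin
        value = task[:work].call
        @results << { task: task[:name], status: :completed, value: value }
      rescue => e
        @results << { task: task[:name], status: :failed, error: e.message }
      end
    end
  end
end

queue = Queue.new          # Ruby's thread-safe queue from the core library
results = []
queue << { name: "resize_image", work: -> { "ok" } }
queue << { name: "flaky_import", work: -> { raise "boom" } }
queue << :shutdown
ToyWorker.new(queue, results).run
```

The status records are the part the managed UI surfaces: which activity ran, what stage it's in, and why it failed.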
So >> do you think that Rails needs a stronger opinion around asynchronous processing? >> I don't think so, right? I mean, this is one of the beauties of Active Job: the thing just plugs into whatever, and I think it is an interesting part of the ecosystem. In npm land you get a thousand packages that do the same thing, and they're all tiny forks of each other. You've got Joe's whatever and Frank's whatever, and they do the same thing slightly differently. In Ruby land, I think that's not quite so true. You have these longstanding packages, these gems that have been around forever, and they just work, they do the thing they're supposed to do, and the Ruby community is very good at consolidating on these efforts, dumping a lot of energy into them, and making them very good. And if I can talk in a totally unfounded way: I think the reason there are so many different solutions here is partly because Active Job is pretty flexible, but it's partly what I said before, which is that depending on what you're doing, asynchronous work comes in a lot of forms. It's all asynchronous work, so it's similar in that way. But if you're doing long-running processing jobs, you might want a different solution than if you're doing quick little fire-and-forgets that you don't care about, right? You know, if you're sending a two-factor code email, you kind of want that to land. You don't want that one to go missing. If it's something else, maybe you don't care, right? If you're just updating an Elasticsearch index that you know the cron will run and fix later today, maybe you don't care as much, versus pulling a file off of an SFTP server from a bank to process, right? Because this is still how a lot of the financial industry works today.
And then you're going to do some heavy compute on that and pull a bunch of records into the database, and you need to know if it fails, you need to know where and why, and you need to be able to recover, and all of this. Then you want a different solution than for sending a transactional email. If I were to guess at why this is the one area in the Ruby ecosystem where you see a lot of diversity, I think it's because asynchronous jobs probably have less in common than they have in common, right? They tend to be more different than they are the same. It's just that they're all asynchronous work; they all, in some sense, kick off a job and run it later. And so I think opinionatedness here would probably hinder people and would probably not be so useful. I think the work defines the tech and the tooling, right? And that's a pretty basic engineering principle: you want to build the right things with the right tools. I think that's how async workers work, and that's what we've definitely found at AngelList. We keep adopting new ones, hoping each will solve all the problems, but at the end of the day, we're just going to need two, or three, or whatever it is, different solutions for different problems. >> I think that's a good point. Most of the teams and developers I talk to are always asking, which one do you recommend, so I can just start using that one? And Rails is now obviously shipping with Solid Queue and things like that as a maybe good, reliable default for this type of work, but with Active Job you can then use a lot of different platforms to accomplish that work. So I'm always a little hesitant to say, "This is the one you should be using.
This is the best one," or which one's the most popular. We run a survey every couple of years, so we can see how the landscape changes there. But hearing someone actually say, "No, we probably need to run a couple of different ones for very different reasons," is something I haven't heard much from the people I've talked to, because they're always trying to find that one solution. Is that something you've just come to understand and concede, or is it more, no, this makes the most sense for us? And do you think other companies should be thinking about embracing multiple different paths, rather than, we need to migrate everything over to X because we're done with Y? >> Yeah, I mean, we've certainly tried to just migrate everything to X because we're done with Y. The thing about migrations is you end up migrating forever. So doing that is not super successful. I mean, it can be, but it takes a lot of concerted effort, right? Migrations are hard, everybody knows this. I think what we found is that we have really diverse asynchronous work, kind of back to the point here. We have a lot of different stuff going on in workers, and a lot of it tends to be expensive compute stuff that's not super cheap, and so it's not well suited to something like Sidekiq, which is backed by Redis, however you say it. So yeah, as someone who really values clean, organized, really precise stuff, it's a little offensive to me that I need multiple of these things, right? And I think that's probably what you're detecting also. I think engineers are very similar in that way. We all want the one solution that works. We don't want to have to cobble together stuff.
But I think for us, we just have different work, and it requires different tools. Asynchronous processing, asynchronous workers, asynchronous jobs, whatever, is a tool, but it comes in a lot of different shapes, right? You wouldn't use a sledgehammer the same way you'd use a framing hammer, and I'm sure if you knew a lot more about hammers than I do, there'd be different framing hammers you might use. I think using Sidekiq to run jobs that may run for hours is like using a framing hammer for the thing you need a sledgehammer for. Whereas Temporal, great solution, can literally do it all, but it's a little bit abstract. The ergonomics are not super great, and for us, with a managed solution, cost is a factor. You pay per workflow run or whatever it is, so you've got to think about that as well. It's maybe not the best solution for little fire-and-forget update-this-cache jobs that you're not actually worried about landing. We just have very distinct, different needs in that realm. And we've tried the "let's adopt this one, let's adopt that one, let's push all the new jobs there, let's migrate, let's do it." And then we keep running into the limits of whatever system we're migrating into. And it's really tough to do a migration like that and then think, "Oh man, did we mess up? Should we have left this work in Sidekiq, because it's just better suited for it and would make more sense?" That's a painful thing. That's a pretty awful thing to feel. But yeah, I'm kind of coming around to the fact that we just need multiple solutions, multiple different types of asynchronous work.
And then you asked earlier how we decide. I don't know. >> How do you onboard people into this world where you've got three different things? And you even mentioned Delayed Job there, which has been around a long time and is maybe not so in vogue anymore. I'm curious: have any of those decisions come down to, is anything hindering you from keeping things relatively up to date with, say, Ruby and Rails versions themselves? >> Yeah. No, not at all. >> That's good. >> I think we're on Ruby 3.4. We're on Rails 7.2, I think 7.2.2 or something. So we're pretty bleeding edge, I would say. In a related realm, we've also adopted Falcon in one of our microservices. We're pretty good about really grinding this kind of stuff out. Honestly, this is fun; I like this kind of stuff personally. I did a lot of it as side projects in parallel with product work, where I'd just say, okay, I'm going to upgrade this thing from Rails 6 to Rails 7, and it's a little sketchy, it's a little painful, stuff breaks on you that you don't anticipate, but it's fun work and I like it anyway. And it gets you really deep into Rails itself, or Ruby itself, because you're forced to contend with these idiosyncratic things that you never knew about, and I like that. So, no, it hasn't really held us back. We're pretty cutting edge with all of that kind of stuff. >> This episode of On Rails is brought to you by ApplicationController, where all roads lead and no logic escapes. Do you need to share a method across all controllers? Add it here. Want to run a callback before every action? You know where to put it. Do you need a place for auth logic, flash helpers, and a little panic code? It's waiting for you. ApplicationController: the one file that's holding your app together >> and also maybe holding it a little hostage.
>> Side effects may include confusion, long scroll sessions, and blaming something in here for half of your bugs. >> ApplicationController: because if you don't know where it goes, it probably goes here. >> I'm curious, if you're willing to answer this question: do other developers and engineers on your team have much experience participating in those upgrades as well, or is it primarily you doing a lot of the heavy lifting there? >> We have a couple of people, me and another person, who I would say drive the majority of it, or have driven the majority of it, because I like it, it's easy for me to just do it, and it's not something you have to do that often. Rails isn't releasing new versions every day. I mean, honestly, Ruby ships more versions than I realized. If you'd asked me five years ago, before I started working in Ruby, how often a new Ruby version comes out, I probably would have told you every 10 years or something, right? But it's actually every few months or so. It's not that hard to upgrade with patch versions, and with minors usually; you usually don't run into that many issues. So really it's just the majors that are tricky. Rails is a little different; Rails will ship stuff in minors that requires some effort. But typically I would say the hardest part about upgrading has two really key facets. One is fixing all the tech debt and monkey patches into the internals of whatever gems, Rails or otherwise, that you're relying on. And the other is figuring out how to get the dependencies to come along with it, because usually when you bump a Rails version, you also have to bump a bunch of other packages, and that can come with other stuff, for sure.
>> Do you have any team guidelines, or at least scenarios where you evaluate when it makes sense to bring in an external, say, Ruby gem dependency into your applications versus doing something yourself, so you're not so dependent on that gem being a potential bottleneck for keeping things updated, or giving you too much, or not enough? >> Yeah. No, I don't think so. Historically, AngelList has been a very high-trust engineering culture. So if you have some problem and a gem solves it, let's pull that in. It's nice in Ruby, like I said, because the ecosystem is pretty high quality. In npm, you get these packages that are like eight lines of code, and it's like, oh, I'm pulling in one method through a package, one function through a package. In Ruby, it's less like that. I mean, we certainly have adopted our fair share of technical debt from some packages. Like MimeMagic: we have our own fork of MimeMagic. We have forks of a few of these core dependencies that are not updated very often, and you end up with that. I'm a pretty big proponent of upstreaming stuff when you can. It's hard to do, right? And then the gems themselves also have to be maintained, which is another complexity, but I always try. I think we should always try. I would rather upstream something than fork. But usually what we do is fork, slot our own thing in, and then begin the slow grind of trying to upstream it so we can get unreliant on the fork. But yeah, generally I'm pretty open. I'll approve any PR that has reasonable justification and isn't sketchy. I don't know, maybe that sounds too willy-nilly. Maybe I should back off of that one. >> I don't know the answer to that either, like whether there's one true way to do it. I think, obviously, as a developer, it depends.
But how does your team actually keep track? Obviously you can look at your Gemfile and see which ones you've forked, because they're probably pointing to a different repository. Do you have a regular process, or is it just top of mind for everyone, like, oh right, we have that thing, let's see if it has already been addressed upstream, because maybe we're just waiting on that gem to get updated so we can run the latest version of Rails or Active Record or whatever is potentially preventing you from just bumping the gem itself? How does that process happen? Does it just materialize when you or someone comes across it, like, "Oh, right, we have this thing, I wonder if we can go back," or is there a task to remind yourselves every couple of months, like, "Let's go back and review these things"? >> Yeah, it's a good question. I wouldn't say we have a formal process for it. I generally try to be pretty precise and excise any technical debt I can find when I'm touching it, but that's me, right? That's certainly not true universally. Some people are just trying to ship the thing they're trying to ship, and that's totally okay; that's a reasonable tack to take. When I monkey patch, for instance, I try to leave comments with the context of the monkey patch in there. So like, hey, we're doing this because of such-and-such, and once this PR ships or this issue is resolved or whatever it is, we can remove this monkey patch and upgrade. It's not perfect. You end up with this stuff; it just becomes dead. I wouldn't say we have the velocity of something like Shopify, and I've never seen their repos, but I imagine they have thousands of engineers shipping stuff into these things.
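The commenting convention described, where the patch carries its reason and its removal condition, might look like the sketch below. The gem, class, and issue URL are all made up for illustration; the technique (a `Module#prepend` patch with an explicit "remove when" comment) is the real part.

```ruby
# Pretend third-party gem code (stand-in for a real dependency).
module SomeGem
  class Parser
    def parse(input)
      input.strip
    end
  end
end

# MONKEY PATCH: SomeGem::Parser#parse raises NoMethodError on nil input.
# Context: https://example.com/somegem/issues/123 (hypothetical issue link)
# Remove once that issue is resolved upstream and we're on the fixed release.
module NilSafeParse
  def parse(input)
    super(input.to_s) # coerce nil to "" before delegating to the gem
  end
end

SomeGem::Parser.prepend(NilSafeParse)
```

Using `prepend` rather than reopening the class keeps the patch visible in `SomeGem::Parser.ancestors`, which makes it easier to find, audit, and delete during the next upgrade.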
And so they probably have to be a little more careful about this kind of stuff. I wouldn't say we have enough people shipping enough code that we're growing exponentially in our number of monkey patches or forked gems or whatever. We have a bunch of internal gems that we've written, but those are mostly internal APIs and clients and things like that; our number of forks is relatively low. It's probably fewer than 10, maybe even fewer than five. So it's not incredibly hard to maintain. It's usually packages that aren't maintained, or things that are very old. I think we forked Money at some point, which is a gem for manipulating money. MimeMagic comes to mind. We have a few different ones that we've acquiesced on over the years, for sure. >> Could you tell us a little bit about the story behind creating the Boba gem and how that relates to Sorbet? >> Yeah, totally. Sorbet is a gradual type system for Ruby, in case you're unfamiliar. What that means is that you can enable it on a file-by-file or even a method-by-method basis. It's kind of like TypeScript, if you're familiar with TypeScript. I would say it's not as mature as TypeScript, and it has some expressiveness issues, because Ruby is such a dynamic language that it's really hard to... there's a great post by Jake Zimmerman on this; definitely go read that if you're interested. But yeah, so Sorbet is a gradual type system. We use it very heavily at AngelList. It's awesome for preventing stupid mistakes. One of the core pieces of it is the static analysis tool, which is one very important half of it, and it relies on type files for any gems or any DSLs, domain-specific languages, like Rails, for instance, that provide code that's only available at runtime.
So for instance, associations: by default, the static checker doesn't know about these associations, because they're generated by a method that runs on the class when it's loaded. So you can either hand-code these type files, RBI or RBS, or you can generate them. And Shopify built a really great tool called Tapioca, which is playing on the Sorbet theme. What it does is allow you to define compilers, and it ships with a preset collection of compilers that generate these RBI files for you. It runs a copy of your Rails app, reflects on the classes, and uses that to generate the RBI files. So you make an update to your model, you add an association, whatever; we use Makefiles, so we have a make command like make rbi, and it just reruns it and shoots out the new RBI files. So the beauty of Tapioca is that it's extensible. If you have a domain-specific language that you need to generate an RBI file for, you can define your own compiler, and you can ship a compiler with it. It also ships with default compilers for Active Record, for Rails, and, they actually just revised their stance here, for various other gems that Shopify uses. I'm trying to pick my language carefully; I'm not trying to offend anybody at Shopify. One of the things that's very annoying for us is that the default Active Record compiler is 100% type safe. And anybody listening is probably thinking, that's a weird thing to find annoying, right? But in Rails, at least in our Rails app, it's very typical to assume that your objects, your Active Record models, are actually persisted. They're coming from the database. And because they're coming from the database, that means they're subject to validations, they're subject to unique indexes or non-null constraints on your database. You can assume certain things exist. Also, Sorbet is not that sophisticated.
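The reason the static checker needs generated RBI files can be shown in a few lines: a method like `belongs_to` defines other methods while the class body runs, so nothing in the source text declares them. The `MiniAssociations` module below is a deliberately simplified stand-in for what Active Record's macro does, and the RBI snippet in the trailing comment is only a rough sketch of the kind of declaration Tapioca emits, not its literal output.

```ruby
# Stand-in for an Active Record-style macro: calling belongs_to at class
# load time defines a reader method that no static parse of the file can see.
module MiniAssociations
  def belongs_to(name)
    define_method(name) { "the #{name} record" }
  end
end

class Invoice
  extend MiniAssociations
  belongs_to :customer # Invoice#customer now exists, but only at runtime
end

# A generated RBI file would declare it statically, roughly along these lines:
#
#   # invoice.rbi
#   class Invoice
#     sig { returns(Customer) }
#     def customer; end
#   end
```

Tapioca automates exactly this bridge: boot the app, reflect on what the DSL actually defined, and write it down where the static checker can read it.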
It's not mature enough to generate type files that know whether something is coming from the database and should be beholden to those constraints. For instance, if it has a presence validator, you might know that that value exists. It's not sophisticated enough to split the types, to know that the thing exists, unless you tell it. So the default compiler generates everything as nilable, because you can always call new, and even on an Active Record model an ID could be nil, right? And so that's fine. It's 100% type safe; it's totally reasonable. But it's very annoying, because you have to assert everywhere, right? And the way to do this in Ruby is not like TypeScript, where you can just add a bang; a bang is already valid Ruby code, so you can't just add a bang. You have to do T.must, or you have to explicitly check it, return if nil, and it's just super unergonomic. Really unergonomic. I don't think I'm alone; I don't think we at AngelList are alone in this pain, because I'd seen a number of issues and attempts over the years. The backstory is that we used a gem called sorbet-rails to generate our RBIs; that's now deprecated, replaced by Tapioca. And when we were trying to convert, the option was either to go through and fix all 20,000 violations, put T.must in 20,000 places, or figure out how to write a Tapioca compiler. So I wrote up what I thought were pretty nice changes and attempted to get them upstreamed into Tapioca, and there was some back and forth on the threads, and ultimately, I think it was the right call for them to make, but they decided not to accept those changes. That was a very long-winded way of saying I created my own gem, you know. So Boba is another play on tapioca, because boba pearls, you know, are tapioca or whatever.
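The ergonomic gap being described is easy to see side by side. `T.must` is provided by the sorbet-runtime gem; so that this sketch stands alone, it uses a tiny hand-rolled `t_must` helper with the same observable behavior (raise on nil, otherwise pass the value through), and a hash stands in for a persisted model.

```ruby
# Stand-in for sorbet-runtime's T.must: unwrap a nilable value or raise.
def t_must(value)
  raise TypeError, "unexpected nil" if value.nil?
  value
end

user = { id: 7, email: "a@example.com" } # stand-in for a persisted record

# TypeScript lets you write user.id! to assert non-nil. In Ruby, a trailing
# bang is just part of a method name (valid code already), so Sorbet has to
# use an explicit call site instead, at every nilable read:
id = t_must(user[:id])
```

Multiply that wrapper across every `record.id`, `record.created_at`, and association read in a large codebase and the "20,000 violations" figure stops sounding surprising.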
Well, one, it has the custom compilers that we've built at AngelList, kind of sharing them with the community, and it's intended to be a repo where people can contribute their compilers for other gems that Tapioca is not willing to accept into the main repo. It felt like there was a lot of demand for these compilers we had built, and people were circling around the issue a bunch. So I created Boba and put them up there. >> Nice. I'll share a link in the show notes for our listeners. I was just looking at the list of compilers available in Boba so far, and aside from the Active Record things you mentioned, I see there's Kaminari, Money, Paperclip even >> yep >> so you're accounting for these different scenarios there. And is there maybe state machine related stuff in there as well? >> Yeah, there's state_machines, which we use, a gem that extends your Active Model objects to have these state methods on them. A good example here is Money. Shopify actually has their own internal money gem that they wrote. >> It works a little differently than the standard Ruby Money gem that we're using, so they're probably not interested in accepting that compiler. And actually, I think I'm aligned with them long term on the end state here, which is that ideally all of these gems would ship with a compiler. >> So it would be part of the gem, right? Money would ship with the Money compiler, and then Tapioca knows how to just load it up and run it, and if you're using Money, it just works. >> Interesting. >> I think that's a pretty beautiful end vision, but we're not there yet. Sorbet, I don't think, quite has the adoption or the buy-in from the general Ruby community yet. There are still some open questions to solve there around adopting it, like RBS and these other competing approaches.
And then again, some of these gems are just not well maintained. So >> why do you think the community is still a little cautious about static typing? >> Matz, you know, Matz's opinion, right? I watched a RubyWorld conference talk or whatever it was a little while back, and he was like, you know, I don't need it, I don't really want it, but I see why it could be useful, and I'm okay with it in certain cases, right? And I think that's about as good as we're going to get. So, like I said, it changes the way you write Ruby. Fundamentally, you have to alter the way you write Ruby. And I think a lot of Rubyists use Ruby because they love Ruby. They love the stuff that lives outside of that Venn diagram overlap of Sorbet and Ruby, outside the Sorbet circle. The other part is, yeah, I saw a thing, I think it might have been DHH, talking about not using TypeScript, because he was like, I don't need it, it just slows me down. And so I think people who like working in Ruby tend to like the fact that it's not so opinionated about how you do this type of stuff. It's like pointer arithmetic in C, right? You know it's bad, but everybody does it because it's good, right? So I think it's kind of an uphill battle, but there's evidence of success there; we've used it to great effect. I mean, for our new engineers coming in who don't know Ruby, especially coming from languages like C#, or even TypeScript now, where you're expecting things to have types and you're expecting the compiler or the runtime or whatever to protect you a little bit from yourself, it's really valuable in that regard. I can't speak for why anybody would or wouldn't adopt it in their specific projects, but I think culturally it just changes the way you write Ruby. And generally, Rubyists like Ruby the way it is. They don't like that.
>> Has using Sorbet changed how you approach testing and refactoring as a team? >> It has definitely made refactoring easier for us, for sure. That's one of the big reasons. I would say we don't use instances super heavily, like instance service logic, basically; we wouldn't create a service class, instantiate it, and then call methods on that thing with internal state. I went through an exercise recently where I was looking for `source`, the method source, s-o-u-r-c-e, right? And you'd be amazed how many sources there are in a codebase, at least in ours. When you're doing a refactor where you have to fundamentally alter source, and it exists across all these models and it's used all over the place, and you don't have any type safety, having a static analysis tool like Sorbet is super powerful. It gives you a lot more confidence. It makes broad refactors way more chill. You can literally comment out the method and see what errors, and suddenly you have a pretty good idea of what you need to change, what you need to update. In terms of testing, not really; it hasn't altered that a ton. We actually don't have Sorbet enabled in tests, because we use FactoryBot a lot, which is a gem that helps mock out Active Record data, and it actually creates the records in the database. If you're not familiar with FactoryBot or haven't used it, it's a pretty popular gem for this type of thing, but it's really hard for Sorbet to handle, because you call create as a method and pass in a symbol, which is the name of the factory, and it pops out an Active Record object. So we could use it; it would still give us some type safety, but even RSpec is not super well supported. Like I said, you have to generate type files for all of these DSLs, and so we just have it disabled in our test files.
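The typing problem with `create(:user)` comes down to the fact that the return type depends on the runtime value of the symbol. This miniature factory registry is not FactoryBot's real implementation, just a plain-Ruby reduction of the same shape that makes the difficulty concrete.

```ruby
# Stand-in model classes.
User    = Struct.new(:name)
Invoice = Struct.new(:total)

# A toy factory registry: symbol in, some object out.
FACTORIES = {
  user:    -> { User.new("default name") },
  invoice: -> { Invoice.new(0) },
}.freeze

# One method, many possible return classes. Without a generated per-factory
# signature, a static checker can only give this T.untyped (or a wide union),
# because nothing in the source ties :user to User.
def create(factory_name)
  FACTORIES.fetch(factory_name).call
end
```

This is exactly the kind of DSL a custom Tapioca-style compiler could cover, by emitting one typed `create` overload per known factory; short of that, disabling Sorbet in test files is a pragmatic trade.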
But I think your question was more about whether we write fewer tests or more tests. And I would say no, we write the same tests. We like to have unit tests or smoke tests, though I would say we don't have a ton of smoke testing. What we have more of is regression prevention. We have very complex business logic, we were talking about this earlier, and we want to make sure that the nuances of that logic are captured in tests so that other engineers can have confidence when making changes, because ultimately this allows us to move faster as an engineering organization. So yeah, they work hand in hand. I think they work together, right? I don't think you need to change your testing. >> When your team is thinking about testing and preventing regressions, how often, when a weird edge case pops up, do you then make sure you include it in your tests from there on out? >> We use RSpec, we use FactoryBot to mock stuff. We run our full suite of tests on every branch in CI, and it all has to pass before it merges. I don't know if this is standard practice or not, but it's what we do. So if you add a test, it now gets run forever. That's good if it works; it's bad if it's a flake. But I would say that if I hit an edge case in very tricky logic, and it's not clear at the outset why it is the way it is, I will 100% add a test to try to enforce it. When you have stuff like, hey, calculate the amount of money this person is going to get when they, you know, whatever, and the state is super complex and you're modeling an incredibly difficult allocation waterfall, maybe that means something to you, maybe it doesn't, but you're doing this very complex calculation and a number pops out, and the inputs are this huge data graph.
You tend to want to capture the edges of those in a way that the next person who has to touch this thing is not going to shoot themselves in the foot by being like, "Why is this weird thing here? Let me just remove that. That's unnecessary." You know, and then that creates a regression, some number is wrong, and you've got to figure out how to solve that in an incident or whatever. >> Out of curiosity, you've touched on this a number of times, how complicated your domain modeling is, and your type of business there, dealing with finances, money, all that, making sure people get the right amount of money or things get calculated in an appropriate way. I guess what I'm trying to get at is: do you have a lot of hard-coded logic for very weird, specific things in there, or are you able to abstract that in a pretty healthy way, do you feel like? >> Yeah, it's a good question. I'm sure at this point I sound kind of like a jerk. Like, everybody's like, "Oh, yeah, your business is hard." You know? I don't know. It's hard for me, right? That's where I'm coming from. It's tough for me. It's pretty thoughtful stuff. I have to spend a lot of time thinking about it to get it right. But maybe smarter people than I am wouldn't find it difficult. We do have some hard coding, but it's usually because our product is built in a way to assume a lot. How do I say this? So in the legal world, right, at the end of the day, what we're trying to do is implement the terms of a legal and economic contract. That legal contract is pretty precise about a lot of stuff, but it can be pretty imprecise about other stuff. And it can really come down to, what was the economic intent that everyone agreed on in some email, or what was the preconception that someone had coming into the agreement? So our system works, you know, in the volume case, right, in the general case.
So it works the way that we expect it to work, but every once in a while you have somebody come in and they're like, actually, I thought this, you know, my lawyer interprets this clause slightly differently. And really, if it can be written in a legal contract and we agree that we're willing to take on the fund or whatever it is, we have to support it. In the business that we're in, you don't get the option of being like, well, your lawyers are wrong. >> Right. >> Like, you can, but that's lawsuit territory, right? And that's not where you want to be. So, we've had to bend over backwards a little bit in some ways to try to support things. And we do have that to some extent. I think we'd like to build in a way that's a little more configurable rather than rigid, but it doesn't always work out that way. So, you do sometimes have to have hard codes, but I would say typically hard codes happen for us not in our deep business logic.
It ends up in the presentation layer, like, hey, our customer would really like it if this was displayed a little differently, or they want to use this custom whatever, can you make that work? And it's like, yeah, sure, we'll just ship it. >> I was kind of thinking, without having seen anything about your existing codebase, I work in the consulting space, so I get invited into such situations. You mentioned you organize everything into, say, services. Are there files named after specific, I'm air-quoting, customers or clients, >> with their names in the thing, and you've got a collection of 50 different things for each, >> there's a file that's almost copy-pasted and then you're modifying it for that particular business entity? >> You know, no, of course not, Robbie. We're a premier shop, all of our engineers are perfect. >> Is that even a bad thing, though? It's like, that's the business logic for that particular client. >> Totally, yeah, we have some of that. It's not written because we're like, oh, we have this one whale, for lack of a better term, client, and we're going to just build stuff to make it work for them. In the past, we've generally had this attitude of, if we're going to build it, we want it to work for anyone, because it becomes a value prop for all of our future customers as well. That said, we do have a lot of code where we just shipped it to make it work, because we were trying to make the sale. You know, we're a startup; you live or die on the next sale sometimes. So over the years, yeah, we've definitely shipped a lot of code, and we've done some things, and we've gotten ourselves into some situations where there's definitely a customer service here or there. We have an edge-cases service where we try to pull in any super gnarly logic that needs to live somewhere.
It's kind of a common place, almost like how we also collect our monkey patches, in a similar vein, right? If we're going to monkey patch something, we want to know where it is; we want to know where to look when we're thinking about why something is behaving oddly or why this thing is different. We're not perfect about it, but if a customer wants something and we're agreeing to build it, we generally try to build it so that we can give value to our other customers as well. And then if we're not, and we're breaking the rules and just making it work, getting somebody over the line or whatever it is, then we try to keep that contained, keep the footprint narrow. >> That makes sense. Alex, another thing I wanted to go back to: you've mentioned a number of times how you're using GraphQL. Do you also have any plain RESTful API endpoints, or are you primarily doing everything through GraphQL? And is that primarily to support the Next.js apps? I'm trying to understand how that came into play, but also, do you think that's making things easier for you or harder as a team? >> We do have some older controllers, certainly a lot of internal APIs. Those are RESTful; we're not using GraphQL there. Well, they're more RPC, really. We have a bunch of internal controllers that we're using to communicate between internal services. GraphQL comes with a little bit of performance overhead, you have to parse the query and the JSON and whatever, that you can avoid with internal stuff, and you don't need the whole powerful query language on top of that either. So it's mostly for our Next.js apps, mostly for our consumer-facing products. I would say the big thing that GraphQL really provides us is that it allows the client to define the data that it needs.
This is one of the core features that's really useful for us, because we have really complex data that can be computationally heavy, if we can be really precise about which fields we want or not, and really flexible, in this view or that view, this component or that component, about querying this or that field. When you're working with a RESTful or an RPC-style API, you either have to cut a separate endpoint for each of those different use cases, or you have to modulate the response, which is a little awkward in a RESTful situation. And so GraphQL gives us the ability to be really precise about the data that we want from the back end, and also to move very quickly. I can spin up a new view and all the data is already there, the types are already defined, I just need to write the query for it. The trade-off is that you don't know what the client wants beforehand. If the query goes very deep, it's tricky, right? You don't know what the client's requesting, and so you can't just naively preload it. If you have a bunch of deeply nested Active Record objects, you'd typically use an Active Record preloader and preload the objects beforehand to prevent N+1s. We use a GraphQL batch loader gem. It was written by an ex-AngelList engineer, Yevgeny. It's used by some other people; there are a bunch of solutions out there now for this. There's also one called, like, batch GraphQL loader, or some other incantation of those words. GraphQL-Ruby now has its own thing called Dataloader. We tried to switch to that; it didn't work super well for us, we ran into some other issues. But what it does is it essentially allows you to wrap a field in your GraphQL schema and batch those together, and so they're lazily evaluated.
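The lazy-batching idea can be sketched in a few lines of plain Ruby (a toy, not the actual API of graphql-batch or its relatives): each field queues a key and gets back a thunk, and forcing any thunk flushes every queued key in one bulk fetch.

```ruby
# Minimal lazy batch loader. load(key) queues the key and returns a thunk;
# the first thunk that is forced resolves all queued keys with ONE bulk
# fetch, which is how these gems avoid the classic N+1.
class TinyBatchLoader
  def initialize(&bulk_fetch)   # bulk_fetch: Array of keys -> {key => value}
    @bulk_fetch = bulk_fetch
    @queued, @results = [], {}
  end

  def load(key)
    @queued << key
    -> { flush; @results[key] }  # lazy: nothing runs until the thunk is called
  end

  private

  def flush
    return if @queued.empty?
    @results.merge!(@bulk_fetch.call(@queued.uniq))
    @queued.clear
  end
end

fetches = 0
loader = TinyBatchLoader.new do |ids|
  fetches += 1                             # stands in for one SQL query
  ids.to_h { |id| [id, "doodad-#{id}"] }
end

thunks  = [1, 2, 3].map { |id| loader.load(id) }  # nothing fetched yet
results = thunks.map(&:call)                      # one bulk fetch for all three
```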
So instead of querying on widgets, where widgets have an association to doodads, and loading the doodads for each widget one after another, right, the classic N+1, you can wrap the doodads method, the field on the widget type, in a batch loader, and it will lazily evaluate them all together and load them all up at once. And so we can do that to prevent those N+1s. We have a little bit of a custom field that we've written to make it easier to add these types of preloads to each of the fields. That code I'm pretty sure I found on a Stack Overflow thread somewhere, although it's since gotten boosted. One of our DevOps wizard guys, Corbin, who I love, he's incredible, looked at how Rails handles has-many-through associations, how it handles preloading in that case and how it predictably reflects to determine what to preload, and added all of that to it, too. So now you don't even always need to specify exactly what the preloads are. Sometimes it can just get it right, which is pretty slick. >> You mentioned earlier that a lot of the developers coming in might not already be Ruby on Rails developers, and we could maybe get into the whys or why-nots of that, but if they're coming from other tech stacks and backgrounds, what does the onboarding experience look like for getting people ramped up with Rails? And do you have any onboarding tools that you use to help developers spin up their local development environments? If you have all these different services, and you've got Delayed Job and you've got GoodJob, what does that environment look like at the moment? >> Yeah, totally. It's a great question. We have a Notion doc. It has the steps to reproduce, right? The 100% deterministic, definitely-no-issues-there steps to reproduce a dev environment. It's for Apple silicon; we're all on MacBooks.
We use Docker. >> Install Homebrew. >> Yeah, exactly. Brew install, and then brew install, here's a list of things. And then we use asdf, which helps us manage versions; it's a tool version manager. You can define on a folder-by-folder basis which versions of which tools, and it uses plugins. So we maintain our pnpm versions and Ruby versions through that, and you just go to the directory, run asdf install, and it installs all the things you need. So we've got it wired up a little bit, and then we use Docker to containerize some of our dependent services, like our DBs, our Redis (again, I don't know how to say it), stuff like that. We use a company called Tonic, and I'm not sure what people's experience is with them or other solutions, but what it does is it takes our production database, lets us define which columns we want to scramble, and then produces a subsetted version of our production database with scrambled data, right? So no PII. And then we can load that locally. We have some Makefile scripts that we've written that make that pretty slick, some Argo jobs, stuff like that. The cool thing about it is that it preserves primary keys. So if you have multiple services running, you can install consistent Tonic images for each database, >> and any cross-references work, assuming that the subsets overlap, right? You know, there's still a little bit of work to do here. It's not magic. But you can get relatively good production-like environments. And like I said, for us it's really important that we do that, because our data graph is quite complex, there are a lot of relationships in there, and it kind of has to all be there or all not. You can't just pull in one table, and our flows are pretty complicated. So trying to create new data from scratch is tough; we had, for a while, some docs that were like, here's how you spin up a new thing like this or do that.
But being able to just load up the app and have it work is a pretty sweet dev experience. It still requires some maintenance. Migrations still affect this thing; we have to regenerate it pretty regularly. I find myself in there every once in a while adding a relation. You still have to maintain it; the data is still organic, it's still growing, it's still changing shape, right? New things are getting added, things are getting removed, and so you still have to maintain that, you can't get away from it, I don't think. But it automates a lot of the ETL step of taking this thing and turning it into something you can use locally. >> Does Tonic then literally bring that data snapshot that's been scrubbed of user data, so there's no private information about any of the real users, and load it into your local Docker containers, if you've got Postgres or MySQL and things like that? Or are you making a network connection to an endpoint that Tonic is providing your developers? >> We use a self-hosted, self-managed solution. So the data doesn't leave us, right? It's not going to Tonic. That's important. That's really, really important. Our data is not leaving. What it does is it loads up off of our production DB. The way we have it set up, and I'm sure you can do this different ways, it loads it up, transforms it, and then pushes it down to another database. And then we have a rake job, or, you know, an Argo rake job. Argo is our CI/CD and cron tool, and it runs a rake job which actually makes a physical copy of that database.
So it copies the database files into a tar, which goes to S3, and then you can download that tar and use it to just replace everything: you stop your database container, you replace all the file contents, you bring your database container up, and there the data is. >> Okay. So, it's a little bit of a pipeline, but it works relatively well. >> That's good. Yeah. I'm always curious about how teams are thinking about seeds, or keeping things efficient locally for developers. Like, hey, we need to spin up a couple more of this type of user. How does that actually end up working if you're rolling out a new set of functionality and it's not already in your existing data? What does that process look like? Are people using seeds in that scenario, or some rake tasks to spin up some data locally? >> Yeah, the way it's set up is that it's a live copy of your schema locally. So you can run migrations on it, and if someone's adding a feature on a branch or ships something, that's not going to impact me, right? I can just run the migration. Again, we have Makefile scripts, so make migrate or whatever, and it runs and applies the migrations to this database, and it works fine. Or if I want to pull in some data from Tonic, I can kick off a generation job in Tonic, some time goes by, boop, and then I can run make setup again or whatever, and it pulls the data down and installs the new data. So all that's really required there, at least with Tonic, is you've got to go into the UI and set up the relations so it knows what to pull into the subset or leave out of the subset. You configure it a little bit, but then we've got the pipeline set up so that it's relatively straightforward.
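The tar-and-swap step of that pipeline can be sketched locally (paths and names here are illustrative, and plain directories stand in for the Postgres data volume and the S3 hop):

```shell
# Local stand-in for the snapshot swap: tar up the "database files" from the
# scrubbed source, then wipe the local "volume" and unpack the snapshot.
set -e
mkdir -p source_db/data restored_db
echo "scrubbed rows" > source_db/data/table1

# What the Argo rake job does, roughly: a physical copy of the DB files to a
# tar (in production this tar would then be pushed to S3 for developers).
tar -czf snapshot.tar.gz -C source_db .

# Developer side: stop the DB container, replace the file contents, start it.
rm -rf restored_db/* && tar -xzf snapshot.tar.gz -C restored_db
cat restored_db/data/table1   # the scrubbed data is back
```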
You can kick it off and then pull it down when it's done. It's not a fast process, and it certainly has its gaps. You know, it's a solution. I think this is one of those things where, if somebody has a great solution, definitely let me know. But I don't think there's a great answer here at all. I don't think you can just say, oh, here's the end-all solution that everyone should use and this is what works. I think this is just a tough problem. Your data is organic. You have to come up with states that are consistent. It's sort of expensive to maintain, because the code is a living, breathing organism, changing over time, and it's hard to keep seeds, or whatever solution you have, in sync with that. The other way I would put it is that people still complain, but it works mostly, and it's relatively maintainable, and that's what you really want, right? Something you feel like you can keep on top of, because in the past, with hardcoded seeds or whatever it is, it gets really hard to keep on top of. It's just a real pain. >> I'm curious, in those scenarios, do you typically have enough data to address a performance issue that only seems to happen when there's a high volume of data? Or is there so much data that you're not going to be able to run it locally, and maybe you're not cloning it to some staging-type environment? >> Yeah, sometimes. Production issues come in a lot of different flavors. Latency is a particularly gnarly one to try to unwind. You know, where is it coming from? What's slow? Sometimes the data is the issue, and sometimes you can nail it, you have enough of it or whatever. Other times you just have to get into Datadog and start looking at the flame graphs.
One of the nice things about our GraphQL setup is we have it instrumented so that each field has its own span in Datadog. And because Rails is largely not asynchronous or parallel, >> you get these really pretty flame graphs of the fields resolving themselves, and you can easily identify the big ones, right? It's as simple as, look at the thing, which one's longer? >> Right. >> You know, it sometimes works locally, but a lot of the time I think you have to try to repro it in production, or you have to look at Datadog. >> That makes a lot of sense. I'm thinking about how different teams are navigating this, and there are always a lot of contextual things. I'm trying to ask questions around this with a lot of the different guests I'm bringing on the show, partly to show there's a lot of variety here, but also to help share it. I think this is maybe the less exciting part of what we're solving sometimes: how do we make the local developer environment a little bit simpler to get up and running? Because rails new is amazing and you can start really quickly, but when you're coming into an existing job, there's existing code and existing data, and it's a lot to wrap your head around as a developer. So, how do I get something somewhat realistic that I can start working with to understand everything? And there's not a strong opinion in the Rails ecosystem, necessarily, because we're relying on a lot of third-party tools or approaches, or people have literally just been taking database dumps and scrubbing the data, and there are gems that do some of that as well, but it's not glamorous, I think, in a lot of ways. >> Yeah, I wouldn't describe what we're doing as glamorous. It's definitely not. It still requires elbow grease. Again, I don't know that there's a great solution to this.
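The per-field spans described a moment ago can be mimicked in plain Ruby (a toy stand-in; a real setup would hand these spans to the Datadog tracer rather than an array): wrap each resolver in a timer, and the slowest field falls straight out, the way it would on a flame graph.

```ruby
# Record a (name, duration) pair for each resolved field, the way a tracer
# span would, then identify the slowest field, like eyeballing a flame graph.
SPANS = []

def with_span(name)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  SPANS << [name, Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0]
  result
end

# Hypothetical field resolvers; the sleep stands in for a slow computation.
resolvers = {
  "widget.name"  => -> { "w1" },
  "widget.price" => -> { sleep 0.05; 42 },
}

resolved = resolvers.map { |name, r| [name, with_span(name) { r.call }] }.to_h
slowest  = SPANS.max_by { |_, duration| duration }.first
```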
I view it kind of like writing a Notion doc, writing a document. You can write a document which explains how code works. That's good for usually a few days, maybe a week, and then that thing is no longer relevant. I mean, I hope your company is shipping enough that that's true. Not to be too opinionated, but yeah. It's kind of like a document. When you write a document that explains a technical thing, an area, a calculation, whatever it is you're trying to capture, it requires constant tending. You have to be willing to maintain it. And I just don't know that there's a way around that with code that's changing. You're adding models, you're removing models, your business logic is changing, the assumptions you have around the relationships in the models are changing. I don't know how you allow the code to evolve and organically grow, and become more, or in some cases less (because less can be kind of bad in seeds too, right, you end up with things that don't work anymore), without a gardener, without someone to constantly tend the garden. With Tonic, it automates a little bit of it, right? You just pull the new table in, and you can say, get rid of these columns, and that works pretty well, because I don't have to go and write out each column or generate data or whatever; it's just kind of there. But it also requires its tending. You have to take care of it, and you also have to make sure you're scrubbing the data that you need to be scrubbing, that you're doing justice to the data itself. >> I think that makes a lot of sense. Were there other tools like Tonic that you also evaluated, that you recall? >> No, actually, I wasn't part of the decision to adopt it. I'm not sure where Tonic came from.
>> Okay, interesting. Well, I'll definitely include links to Tonic in the show notes, and this is, I think, an ongoing conversation I want to have with a lot of people. So maybe I'll compile a list of a bunch of these different approaches at some point, because there's no one right way to do it. >> Yeah, it'd be cool to hear if other people have alternatives that work for them, similar or different or whatever. This is just what we've made work. >> I recently talked to a couple of developers that work at Doximity, and they were mentioning that they have some tooling kind of like Tonic as well, but they also have a bunch of rake tasks behind a web UI. They're like, okay, I'm working on this part of the application, I need five of these types of users, three of these types of things, and it'll go generate the data so that developer can work on that area of the codebase. But I think they've also got, you know, 40 or 50 services, so it's a much bigger environment, at least in number of engineers and number of services. I thought it was an interesting approach, but it's still not standardized; everybody's having to make up their own solution for this. And I think that's just part of the job of having an engineering team: you're going to have to figure this stuff out. >> Yeah. And it's one of those jobs that's not really owned by anybody. What I mean by that is, if I go add a feature and it requires, let's say, a new table, I mean, if I'm good, right, if I put my halo on and I'm being the proper engineer that I should be, then I'm going to go grab seeds.rake or whatever it is, update it, add the table, and do the whole thing. But in the real world, I don't know, this is the constant gardening, right?
If you're a relatively new engineer at a company, maybe you're not familiar with Rails, or maybe you're not familiar with that setup. You don't know that seeds.rake is a thing. So you add your new table or whatever, you don't do that, and then the next new engineer that comes along is the first person to hit it, right? Because everybody else has already got their setup, and they run into it. And so my take here would be: maybe it doesn't matter what your solution for this is, but you need a gardener. You need someone who's willing to take that role on and be like, yes, I'm going to keep this thing healthy. And if you have that, then it'll work, whatever it is, right? Someone new will hit it, they hop on a Huddle or a Zoom chat or whatever, debug it, update the seeds so the next person doesn't hit the same issue. But it requires that sort of proactive gardening, I think, and that's worked really well for us at AngelList. I do a lot of that kind of gardening; I enjoy that type of work. There are a few other people that pitch in and help out with it as well. >> Do you keep a separate backlog of gardening-type tasks? >> We have a platform team. Yeah, we do, a little bit. You know, for a long time I just had, well, you can chat to yourself in Slack, and I would keep a list of things that I wanted to do in Slack. We're a little more sophisticated now; we use Linear to track work. So nowadays what we would do is create a Linear issue and push it to the correct team. We have a platform team, and that name's a little overloaded. In your typical environment it would mean your DevOps, your deployments, CI/CD. Our team is a little different: they own our core services like authentication, and they own our design system, right? We have our own internal design system.
They'll own some of the CI/CD stuff, and some of these in-between projects, like upgrading packages, things like that. So that's probably where something like that would land; dev experience falls squarely in their domain. >> So at AngelList, you're using these different tech stacks now, but primarily Rails and Next.js, and you have Node, and are you using TypeScript as well, if I recall? >> Yep, totally, TypeScript. >> How do you find Rails to still be part of AngelList's, say, secret sauce? Or is that an assumption on my part? >> Yeah, look, I think it comes down to this: Rails is actually pretty easy to learn. Like I said, the ecosystem is super high quality. It's funny, because when I started at AngelList, I didn't know Rails, I didn't know Ruby, and at my previous job I had written a lot of SQL. Every engineer goes through that SQL phase. And I came in, and I remember this thread where somebody was like, "Hey, how could I get this?" And I was like, "Oh, here's how you'd write the SQL, but I don't know, you know, whatever." And I just remember being so baffled by Active Record and the Rails console and all of this. And now I'm like, I don't even want to write SQL anymore. I just want to write Active Record. And so I think one of the beauties of Rails is that the whole thing kind of works like that.
You start to get it and you start to work with it, and at least for me, the more I use it, the more I fall in love with just how easy it is. Like, rails new is magic, you know? You do rails new and the thing just works, right? And so I think we can bring engineers in who, because of some trend shifts in the industry, don't have any Rails experience, and in three months, or less, like in a month, they can be writing proficiently in our codebase and shipping code. It's not that hard to figure out how to add a GraphQL type in GraphQL-Ruby or whatever it is. And so I would say our ability to continue to ship product, deliver value, and build cool stuff is in large part due to Rails, right? You know, you could do that in Node; Express is not so complicated, right? But I think Rails has really organically evolved into this thing that's just really nice to use, and it just works, and you can bring almost anybody in and they figure it out pretty quickly and ship stuff. >> No, I like that. And it's really refreshing to hear, given that you came from a different tech stack. You found your way into Rails. You think it's cool. You can build cool stuff with it. You're able to ship things for AngelList, and you're able to onboard people in a month or two that didn't have Rails experience. That's quite a testament not just to Rails, but also to how AngelList has been able to benefit from it, and to your team culture there, I would imagine.
And so I do talk to some people who, when they're thinking about trying to hire, are like, well, it can sometimes be hard to find Ruby on Rails engineers, depending on the job market at that point in time. But AngelList has been able to make that not necessarily the requirement. Was that a conscious thing, that you recall? Why aren't you just looking for people that already have Ruby on Rails experience? Or are you just hiring for aptitude, knowing that people will figure this stuff out? >> Yeah, this is going to sound really opinionated, but I think any engineer worth hiring should be able to learn any language. I don't care if you write, I don't know, Scala, right, and you're using some gnarly functional Scala stuff. It's not easy for me to write Rails code and then immediately switch to Scala and write functional Scala. But if you gave me a month, or a couple of months, I feel like I could figure it out. And I feel like any engineer who's worth hiring should be able to make that switch. Especially because, dude, we do not write crazy Ruby code here. We're not doing a lot of metaprogramming, stuff like that. So if we hire somebody in, I kind of expect them to be able to read the Ruby code. It's not super different from reading any other sort of code. You know, I'm not super closely involved in hiring, but we do have some trouble finding Ruby people; Rails just isn't in vogue the way it was maybe 10 or 20 years ago. This is what I'm told, right? And so it's hard to find, yeah, wizard Ruby people, like Ruby poets, right? Or Rails poets. But like I said, it hasn't really held us back, because at the end of the day, what we're trying to do is make cool things, deliver value to customers, innovate in the industry, and help startups start up and change the world.
And that's what AngelList is all about. And I don't really care if you know Ruby or not to do that, right? The tech stack is something separate. So it does make hiring hard, this is what I've heard, I can only relate it secondhand. It's really hard to find Ruby on Rails engineers in this day and age, but I also just think it's not that hard to learn or pick up, and I think any engineer worth hiring should be able to do that. And so it hasn't really hindered us. It hasn't seemed like a problem. >> That's great to hear. And it's interesting, because, as I mentioned earlier, given that I work on the consulting side of things, prospective clients come to me and my team at Planet Argon expecting us to have people that are well-versed in Ruby. So I don't really have the luxury of hiring someone and waiting a few months to see if they figure out Rails, so they can show up as, I'm air-quoting, an expert in a consulting engagement. We have to specifically find those types of people with that skill set. But we'll get called into situations like, "Hey, can you help our team ramp up on some of these Rails things? We've got some inconsistencies, or some of the developers that were around early on, when we started building this application, are no longer here. We have a bunch of people on our team now that have inherited this tech stack. They understand a lot of it, but they don't really understand some of the Rails ethos, or they may or may not even know who DHH is." You know, that actually happens, and that is a reality for a lot of teams and developers: oh, I'm just working in this thing called Ruby on Rails that this team picked 15 years ago, I got a job, and that's kind of how I got ramped up.
So all that to say, there are these different spectrums, and there are a lot of really talented Rails developers out there. I'm trying to help highlight those people, so we're sharing these stories, and maybe this will help AngelList find more of those Ruby on Rails poets in the future. >> Yeah, totally. We're hiring and we'd love to have you. You know, I would say most of our engineers are not like that. The poet analogy I really like, because you can know a little bit of a language and that can get you by in a country, or you can be conversational, or you can be well read in the canon, or you can be a poet, right? However you want to gradate it. And for any company to be successful in their tech stack, they kind of have to have at least one — probably there's some perfect mix — you have to have one person who can write poetry, or whatever. Most of our engineers are not that way. They're not digging into the internals of Rails or doing any of this really complex stuff. But that's not necessary, you know, at least for what we're doing, it's totally not necessary. So when I say people come in and they learn Rails or Ruby in a month or a few months, these aren't people who are wizards, you know? They're just... >> They're not monkey patching yet. >> Yeah, yeah. But I think they come in and they can work in it. And at the end of the day, when you're a software company, what's your goal, right? You're trying to deliver value to your customers. You're not trying to write Rails poetry. You need that sometimes — it's just part of solving the problems — but most people can get conversational pretty quickly, and that works pretty well for us.
>> All right, I've kept you long enough, Alex. I have a couple of quick questions for you. Is there a programming book that you find yourself recommending to peers? >> Yeah, so I don't really read too many programming books. I did read, semi-recently, Staff Engineer by Will Larson, which is not a programming book, but it's maybe relevant to some people listening. It helped me conceptualize some of the things that I was doing as a senior engineer, and some of the things that I wanted to be doing, or didn't want to be doing. So I thought that was pretty useful. >> I'll definitely include a link to that book by Will Larson in the show notes for everybody. >> He was until recently the CTO of Carta, which is one of our competitors, although he's since moved on to something else. >> Interesting. And where can folks follow your work or learn more about what you're building over at AngelList? >> Yeah, you know, I unfortunately don't do a great job of having a public programming persona. You can check out my GitHub, I guess, but it's mostly in our private repos. I do occasionally do a little bit of side work for my own personal pleasure, but yeah, not a ton of public programming stuff. >> Again, that's one of the things I'm really trying to accomplish with this podcast: having conversations with people who are working in the weeds and have a lot to share with the rest of the community, but who aren't necessarily hanging out on Twitter or Bluesky all day, or broadcasting on an engineering blog. So thank you so much for stopping by to talk shop with us, Alex. I really appreciate it. >> Yeah, thanks. It's been great. I've enjoyed it a lot. >> That's it for this episode of On Rails. This podcast is produced by the Rails Foundation with support from its core and contributing members.
If you enjoyed the ride, leave a quick review on Apple Podcasts, Spotify, or YouTube. It helps more folks find the show. Again, I'm Robby Russell. Thanks for riding along. See you next time.
Video description
In this episode of On Rails, Robby is joined by Alexander Stathis, a Principal Software Engineer at AngelList @angellist , where Rails powers complex investment, accounting, and banking business logic across a modular monolith. They explore how AngelList maintains conceptual boundaries, uses gradual typing to influence their Ruby style, and why they’ve adopted multiple async job solutions for different types of work rather than seeking a one-size-fits-all approach. Alex shares insights on consolidating microservices back into their monolith, creating the Boba gem to extend type generation capabilities, using production data subsetting tools for local development, and successfully onboarding engineers without Rails experience in under a month, all while staying current on Ruby 3.4 and Rails 7.2.

*[00:00:00]* – Intro and welcome to Alexander from AngelList
*[00:00:53]* – What keeps Alex “On Rails” and Active Record’s appeal
*[00:02:28]* – AngelList’s architecture: monolith, engines, and microservices
*[00:05:03]* – Consolidating a microservice back into the monolith
*[00:09:15]* – Using Packwerk to maintain conceptual boundaries
*[00:12:10]* – Avoiding Rails “magic” and Callback Hell
*[00:14:30]* – Service layer approach and thin controllers
*[00:23:22]* – Why AngelList needs multiple async job solutions
*[00:35:58]* – How Sorbet influences Ruby coding style
*[00:45:28]* – Creating the Boba gem for better type generation
*[00:53:46]* – GraphQL vs REST and preventing N+1 queries
*[01:06:14]* – Setting up local dev with production data subsetting
*[01:23:26]* – Hiring engineers without Rails experience
*[01:27:03]* – Book recommendations and where to find Alex online

Socials:
LinkedIn: https://www.linkedin.com/in/alexstathis/
GitHub: https://github.com/stathis-alexander
Company homepage: https://www.angellist.com

Tools & Libraries Mentioned:
Active Job – Framework-agnostic job API built into Rails.
(https://guides.rubyonrails.org/active_job_basics.html)
Active Record – Rails’ ORM for modeling and persisting data. (https://guides.rubyonrails.org/active_record_basics.html)
asdf – Tool version manager for Ruby, Node, PNPM, and more. (https://asdf-vm.com/)
Boba – AngelList’s open-source gem extending Tapioca compilers for Sorbet. (https://github.com/angellist/boba)
Delayed Job – Database-backed background job processor. (https://github.com/collectiveidea/delayed_job)
Docker – Containerization platform for local and production environments. (https://www.docker.com/)
FactoryBot – Test data builder for RSpec and Rails. (https://github.com/thoughtbot/factory_bot)
GoodJob – Postgres-backed background job processor for Active Job. (https://github.com/bensheldon/good_job)
GraphQL Batch Loader – Utility for batching GraphQL queries to avoid N+1 problems. (https://github.com/exAspArk/batch-loader)
GraphQL Ruby – Ruby implementation of the GraphQL specification. (https://github.com/rmosolgo/graphql-ruby)
Linear – Issue tracking and project management tool. (https://linear.app/)
Money – Library for handling and formatting currency values. (https://github.com/RubyMoney/money)
Packwerk – Static analysis tool from Shopify for enforcing modular boundaries in Rails monoliths. (https://github.com/Shopify/packwerk)
Paperclip – Legacy file attachment gem for Rails (now deprecated). (https://github.com/thoughtbot/paperclip)
RSpec – Testing framework for Ruby. (https://rspec.info/)
Sidekiq – Background job framework using Redis. (https://sidekiq.org/)
Solid Queue – Built-in Active Job adapter introduced in Rails 8. (https://github.com/rails/solid_queue)
Sorbet – Gradual static type checker for Ruby. (https://sorbet.org/)
State Machines – Adds finite state machine support to Ruby and Rails models. (https://github.com/state-machines/state_machines)
Tapioca – Tool for generating RBI files for Sorbet from gem dependencies.
(https://github.com/Shopify/tapioca)
Temporal – Workflow orchestration system for long-running jobs. (https://temporal.io/)
Tonic – Platform for generating realistic, de-identified datasets for development and testing. (https://www.tonic.ai/)
Staff Engineer by Will Larson – Book exploring technical leadership and staff-level impact in engineering organizations. (https://staffeng.com/book)

#rails #rubyonrails #tech #angellist

On Rails is a podcast focused on real-world technical decision-making, exploring how teams are scaling, architecting, and solving complex challenges with Rails. On Rails is brought to you by The Rails Foundation and hosted by Robby Russell of Planet Argon, a consultancy that helps teams improve and modernize their existing Ruby on Rails apps.