bouncer

Dreams of Code · 48.6K views · 1.8K likes

Analysis Summary

30% Low Influence

“Be aware that the 'problem' of standup meetings is framed as a personal memory failure to make the automated solution feel like a necessary productivity tool rather than a fun engineering experiment.”

Ask yourself: “Did I notice what this video wanted from me, and did I decide freely to say yes?”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
98%

Signals

The video features a human creator sharing a personal project with natural, conversational narration that includes filler words and specific life experiences. While the project itself uses AI (LLMs) as a tool within the software, the presentation and content creation are clearly human-driven.

Natural Speech Patterns The transcript includes natural self-correction ('direct message uh but also as an email') and personal anecdotes about Netflix and PagerDuty.
Personal Branding and Gear The description lists specific physical hardware (ZSA Voyager keyboard, specific camera/mic) and links to a personal GitHub repository with custom code.
Subject Matter Expertise The content describes a highly specific, idiosyncratic workflow involving n8n, Docker, and LLM integration for a personal use case.

Worth Noting

Positive elements

  • This video provides a practical, step-by-step guide on self-hosting n8n and connecting it to GitHub and LLM APIs, which is highly educational for DevOps-curious developers.

Be Aware

Cautionary elements

  • The integration of the sponsor (Hostinger) as the 'recommended' way to follow the tutorial can make the viewer feel that other hosting options are more difficult or less compatible.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

It's no secret that I love developing software. However, when it comes to being employed as a software developer, I tend to have more of a love-hate relationship. There's a few reasons as to why this is, but one of the more major ones is due to the amount of bureaucracy that tends to come with the job, especially when it comes to ceremonies. One such ceremony is everyone's favorite 15-minute meeting, standup, which I find can be rather hit or miss. On the one hand, standup itself can sometimes be rather valuable, especially when you have a really good team. Unfortunately, however, I think that that's the exception rather than the rule. And even if you do happen to have a great team, I often feel like standup runs into the same repeating issues. For one thing, whenever I start giving my daily update, I personally can never remember what it is that I did the day before, especially if it happens to be a Monday, as I've usually spent the entire weekend doing everything that I can to forget what I worked on the week before. Other times, standup can be way too early in the morning, especially if PagerDuty went off at 2:00 a.m., or more likely, I stayed up till 2:00 a.m. binge watching a new series on Netflix. Perhaps the worst part of standup, however, are the many derailments, typically always followed with a 'can we take this offline?' Personally, I find that all of these issues take away from the original purpose of standup, which is a bit of a shame because I think the underlying idea behind it is a good one. In any case, whether or not standup is a good thing is kind of a moot point, as it's not going anywhere anytime soon. And so, as developers, we're left with two options: either accept it as is, or try to make it the best we can. For myself, I decided to put my own spin on making standup the best I could. 
However, rather than trying to do this the responsible way of both improving the communication and underlying process, I instead decided to just break it in the only way that I know how: by building an overengineered solution that only I myself would use to automate the process of standup for me. In order to build a solution to automate standup for myself, I began as if it was any normal project, defining both the goals and requirements in order to scope it for success. However, in order to do so effectively, I first needed to understand the core of the problem I was trying to solve. In this case, standup. When it comes to standup, this typically involves communicating three main data points: what one has done, what one is doing, and any current blockers. For me, all three of these can be a little hazy when it comes to an early morning meeting. And so, I wanted a way to automate the collection of these three data points. However, rather than trying to attempt all three of these at once, I decided to take a little bit more of an iterative approach and chose just one data point to focus on at the beginning before adding in the other two. Therefore, for my initial implementation, I decided to build a system to automatically remind me of what it was that I achieved the day before, as this was not only going to be the easiest to implement, at least in my mind, but it would also solve one of my biggest personal pains when it comes to standup, making it a great MVP. As for the actual implementation itself, well, I decided to achieve this by setting up a simple automation, which would collect any information about my previous day's work activities from a number of different sources, such as GitHub, which I use for committing code, and Linear, which is what I use for issue tracking. 
This data would then be sent down a pipeline to an LLM in order to summarize it into some key standup talking points before then sending it to myself both as a Slack direct message uh but also as an email. Pretty simple. With that, I had my high-level design in place. Now it was time to begin implementation. Normally when it comes to building projects like this, I would go about implementing it by hand using a language such as Go. However, recently I've been trying to broaden my horizons. And so for this project, I decided to build it using a piece of technology that's been hyped quite a lot online. One that I originally dismissed: n8n, which if you're unaware is an automation tool allowing you to connect multiple services together, similar to something like Zapier. However, unlike Zapier, n8n is both source-available and self-hostable, which is something that I really appreciate. In addition to this, however, it also provides a huge number of integrations out of the box and provides the ability to easily add your own. Because of this, it means that n8n is extremely popular, with just over 156,000 stars on GitHub at the time of recording. Because of this, and because I like to learn new things, I decided that this project was going to be a good excuse to use n8n. So, I began researching how to deploy a self-hosted instance of it for me to use. As it turns out, the n8n documentation provides a guide on how you can deploy it using Docker Compose, which will also install Traefik as a reverse proxy providing HTTPS. Therefore, I decided to make use of the new Docker Manager feature available on my VPS provider and the sponsor of today's video, Hostinger. This feature allows you to easily deploy a Docker Compose YAML straight to a VPS instance from the Hostinger dashboard, meaning you can do so without needing to SSH in, which makes deploying a self-hosted application incredibly fast. 
Therefore, in order to install n8n using Docker Manager, I first needed to obtain a new VPS instance from my VPS provider. Typically, when it comes to Hostinger, I usually run all of my production services on the KVM 2, which comes with two vCPUs and 8 gigs of RAM. However, because the n8n documentation recommends an instance size between 320 megabytes and 2 GB, I decided to save myself a little bit of money and instead went for the KVM 1, which provides one vCPU and 4 gigs of RAM, all for just $4.99 a month when you purchase a 24-month term, thanks to Hostinger's current Black Friday sale. Of course, you can get this VPS instance even lower if you use my coupon code dreamsofcode when you check out, which will save you an additional 10% off. So, if you want to use the new Docker Manager feature to deploy your own instance of n8n, or to deploy pretty much any services that you want, then you can visit hostinger.com/dreamsofcode or use the link in the description down below. And also make sure to use my coupon code dreamsofcode when checking out to get that additional 10% off. A big thank you to Hostinger for sponsoring this video. Okay, so once I had my VPS instance in hand, it was then time to set it up for both Docker Manager and n8n. Whilst Hostinger does provide an n8n ISO image that you can use, in my case, I kind of wanted to follow the documentation, which gives you that Docker Compose file which also will install Traefik and provides you with HTTPS. So I decided to select the Docker app ISO image instead, which installs both Docker and Docker Compose, allowing you to use the Docker Manager feature. With the image selected, I then went through the rest of the VPS setup steps. And once it was complete, I headed back on over to the n8n documentation to begin installing it on my new machine. Because I had both Docker and Docker Compose already installed, I could then skip on to step number three, which was to set up DNS records for my new VPS instance. 
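For reference, the kind of Docker Compose file the n8n documentation describes, pairing n8n with Traefik for HTTPS, looks roughly like the sketch below. This is illustrative rather than the official file: the service definitions, Traefik labels, and variable names (`DOMAIN_NAME`, `SSL_EMAIL`, `GENERIC_TIMEZONE`) are assumptions based on the steps described here.

```yaml
# Illustrative sketch of an n8n + Traefik compose stack, not the official file.
services:
  traefik:
    image: traefik
    command:
      - "--providers.docker=true"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=${SSL_EMAIL}"
    ports:
      - "443:443"
    volumes:
      # Lets Traefik discover containers to route to (read-only).
      - /var/run/docker.sock:/var/run/docker.sock:ro

  n8n:
    image: docker.n8n.io/n8nio/n8n
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.n8n.rule=Host(`${DOMAIN_NAME}`)"
      - "traefik.http.routers.n8n.tls.certresolver=letsencrypt"
    environment:
      - N8N_HOST=${DOMAIN_NAME}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    volumes:
      # Persist workflows and credentials across container restarts.
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```

The variables would live in the accompanying environment file, which maps onto the second text entry in Hostinger's Docker Manager YAML editor.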
To do this, I added in a new record to my Cloudflare dashboard for n.zvps.xyz, pointing it at the IP of my new VPS instance. After this, I then moved on to step number four, which was to go ahead and create a .env file. Fortunately, when using Docker Manager, this is incredibly simple. To do so, I headed on over to the Docker Manager page found in the Hostinger dashboard and opened up the YAML editor view. Here you can see I have two different text entries, one for the docker-compose.yaml and the other for any environment variables. To add these in, I went and copied the example file from the n8n documentation before making a couple of changes to suit my own environment, including the domain name, the time zone, and email address for the TLS certificate. After that was done, I went back to the documentation and actually skipped step five and went straight to step six, which was to copy in the Docker Compose file and paste it into the following field inside of the Docker Manager YAML editor. With that, all that remained was to name the project and it was ready to deploy. After a couple of minutes, the Docker Compose stack was up and running, which I verified by heading on over to my configured DNS record for it. With n8n successfully deployed, the next thing to do was to set up a user account and I was ready to begin implementing my standup automation. First things first, I selected to start a new workflow from scratch before then deciding to give it a name, one that I felt would be rather accurate. With that, I had my first workflow ready. Uh, but what exactly is a workflow? Well, n8n defines a workflow as a collection of nodes which act as each of the individual steps to define your automation. The first of these nodes is the trigger, which is the condition or event that will kick off your workflow's execution. n8n provides a number of different trigger types, such as an inbound webhook, a form submission, and even based off the result of another workflow. 
For this automation, I wanted to use the schedule trigger, which would run the workflow at the same time each day. As for the time it would run, I decided to set this to 8:00 a.m., which would be early enough to ensure that my trigger would complete before standup started. One nice thing about triggers when it comes to n8n is you can execute these at any point during development, which means you're not having to wait around for a trigger to execute in order to test your flows. With the trigger defined, the next thing to do was to obtain my first data source, which in this case was going to be the git commits from my previous day. To do so, I would need to go ahead and add in a new node. Nodes in n8n are the key building blocks of a workflow, allowing you to perform a range of different actions such as fetching data, writing data, performing data transformations, or even just performing some control flow such as conditional expressions and loops. If we take a look at all the different node types that are available in n8n, you can see it provides a huge amount of integrations for various different services out of the box. This is one of the key features of n8n, as you can take a pretty much no-code approach to being able to interact with many different APIs and services. In my case, I wanted this first node to collect data from GitHub, specifically about the previous day's commits. So, I went ahead and searched for the GitHub node and selected one with the action of get repo. In order for this node to work, I needed to set up some authentication credentials, uh, which in my case, I did by creating an access token inside of my GitHub account. When it comes to n8n, the majority of the nodes that you'll use require credentials in order to integrate with their associated service. Fortunately, n8n provides some really comprehensive documentation that shows you how to obtain these credentials for whichever node you're configuring. 
In any case, with my GitHub credentials added, I was now able to configure the rest of the node's properties, beginning by selecting which repository I wanted to pull the commit data from. Once configured, I then tested the node by clicking execute step in order to view the returned data. Upon doing so, I realized that the results from the get repo action didn't provide any commit data inside. Instead, it was only returning data about the actual repository itself. Therefore, in order to obtain the actual commit data, I needed to use a different operation. Unfortunately, when it came to the GitHub node, there wasn't one that could do this. At this point, I assumed I was cooked. However, before declaring that breaking standup couldn't be achieved, I stumbled across the custom API call operation, which was actually a bit of a rug pull as it didn't actually provide a custom operation, but instead directed me to make use of an HTTP request node, although it did mention it would take care of authentication for me. With the path forward now clear, I went ahead and followed it, replacing the GitHub node with an HTTP request node instead. With this new node added, I then configured it by first setting the URL to the GitHub API endpoint for pulling down the commits of a repo, which in my case I set to be the Next.js SaaS starter repo of my Zenstart kits organization, which is a new product I'm looking to ship soon. With the API URL defined, the next thing to do was to set up authentication with GitHub, which was as simple as selecting a predefined credential type of GitHub API, followed by selecting my configured GitHub account. With the HTTP node now configured, I was ready to test it out. And upon executing it, I was now retrieving a list of my recent commits on the repo. Unfortunately, however, that wasn't all it was retrieving, as it was also pulling down commits made by other authors. 
Therefore, because I'm not a fan of plagiarism, I needed to constrain this to only returning a list of commits that I myself had authored. Fortunately, I was able to achieve this using the following query parameter of author, which allows you to constrain the commits to only those made by the username whose value you pass in, which I set to be myself. Now, when I went and executed this, I was only receiving commits that were authored by myself. With the commit author now constrained, I began to notice another issue in the returned data, where it was now including commits that were made beyond the previous working day. Therefore, in order to resolve this, I needed to make use of yet another query parameter, this time called since, which will only return the commits with a timestamp that's greater than the value you pass in. Unlike the author query parameter, however, the value for this needed to be dynamically generated, uh basically being set to 24 hours in the past. Fortunately, n8n allows you to set dynamic values using an expression, which allows you to use JavaScript within it. To define a block of JavaScript, you use the following syntax, placing the code you want to execute inside. In my case, I added in the following expression, which took the current datetime, subtracted 24 hours from it, and formatted it as an ISO string. With that, I had my node now correctly configured, returning a list of the commits that I myself had authored the previous day. Unfortunately, I had yet another bug, one that only appeared when I ran this code on a Monday, which produced an empty list of commits. This was happening because I hadn't produced any commits the day before, which happened to be a Sunday. Therefore, I instead needed to modify my expression to return a list of the commits that were made during the previous working day, which in a typical job was going to be Friday. 
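An expression with that "previous working day" behavior can be sketched in plain JavaScript. This is an illustrative reconstruction, not the actual expression from the video, and real n8n expressions are written inside `{{ }}` (typically using the Luxon `$now` helper) rather than as a standalone function:

```javascript
// Compute an ISO timestamp for the start of the previous working day,
// so a Monday run reaches back to Friday instead of an empty Sunday.
function previousWorkingDayISO(now = new Date()) {
  const d = new Date(now);
  d.setDate(d.getDate() - 1); // start with "yesterday"
  while (d.getDay() === 0 || d.getDay() === 6) {
    d.setDate(d.getDate() - 1); // skip Sunday (0) and Saturday (6)
  }
  d.setHours(0, 0, 0, 0); // start of that day, local time
  return d.toISOString();
}
```

The result would then be passed as the `since` query parameter on the commits request.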
Fortunately, thanks to my friend Claude, this wasn't too difficult to whip up an expression for. And so, I went ahead and pasted it in, confirming that it was now producing the correct results. With my new expression defined, I was now able to pull out my recent git commits from the GitHub repo. However, these were currently only the git commit hashes and not the actual commit data itself. So whilst this was a good start, I needed to add in yet another step in order to pull out the actual commit information. To achieve this, I needed to be able to loop over each of the commit hashes that was returned in the original result and then perform an HTTP request in order to pull down the information for that specific commit. In order to achieve this, I began by first looking at the loop over items node. However, upon reading the documentation, it turns out that you don't actually need to use this node in the majority of cases, as n8n often handles the looping of input data for you. Therefore, all I needed to do was use another HTTP request node, this time using the same commits path as before, but also adding in the commit hash as a path parameter using the following expression. As for this node's authentication, here I just reused the existing GitHub API credentials I had set up before. Now, when I went ahead and executed this step, n8n was now looping through each of the individual commit hashes and pulling down the commit information for each one, including most importantly the patch, which communicated the actual code changes that each commit made. This meant I was ready to move on to the next step, which was to pass this information into an LLM in order for it to be summarized. To do so, I went ahead and created another node, this time an OpenAI node called message a model, which again does exactly what it says on the tin. Uh, this is something I actually really like about n8n, in that they're very clear with what each node actually does. 
For this node to work, I again needed to set up some credentials, uh, which meant creating a new API key inside of the OpenAI platform. With my OpenAI credentials defined, I could now configure the rest of the node. The first step was to choose the model I wanted to use, uh, which in my case, I decided to go with GPT-5. With this model selected, the next step was to define a message to send to it. However, rather than just sending a user message to the LLM, I instead defined a system message, which is used to configure the behavior of an LLM and how it should respond to user messages. In this case, I asked the model to summarize the commits that I would be sending in the next message as a simple stand-up update that I could share with colleagues as to what I achieved in my previous day. With the system prompt defined, the next thing to do was to define the actual user message, which in this case I just sent across the entire input data as stringified JSON. With that, I now had my node configured. However, I unfortunately ran into yet another issue, where the node itself was being invoked multiple times, once for each individual commit. This meant that the generated AI message didn't have the full context of all of the changes taken together and was subsequently producing multiple outputs. This issue was occurring due to the default behavior of a node in n8n, which is where it will execute once for each input item in an array. This was the same reason as to why I didn't need to use a loop over items node earlier when it came to pulling out each individual commit from the list I was retrieving in that previous node. Therefore, I needed to find a way to turn the multiple outputs from the previous node into a single input for the next. Fortunately, n8n provides a node to achieve this called the aggregate node, which is used to turn multiple items into a single one. 
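Conceptually, what an aggregate step does can be modeled in a few lines of JavaScript. This is a sketch of the behavior, not n8n's implementation, and the field names `sha` and `message` are illustrative stand-ins for whichever two fields were actually kept:

```javascript
// Model of an n8n aggregate step: collapse many input items into a
// single item whose `data` field holds only the chosen fields of each.
function aggregate(items, fields) {
  return [{
    json: {
      data: items.map((item) =>
        Object.fromEntries(fields.map((f) => [f, item.json[f]]))
      ),
    },
  }];
}

// 14 commit items in, one combined item out — a single LLM call's worth
// of context, with unneeded fields dropped to save tokens.
const items = Array.from({ length: 14 }, (_, i) => ({
  json: { sha: `hash${i}`, message: `commit ${i}`, extra: "dropped" },
}));
const out = aggregate(items, ["sha", "message"]);
```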
The aggregate node is one of a few different data transformation nodes that n8n provides, with others including the filter node, merge node, and even deduplication. Therefore, to resolve this, I went ahead and added in a new aggregate node in between both the HTTP and OpenAI nodes. As for configuring this aggregate node, this was incredibly simple. All I had to do was configure it to aggregate all of the item data into a single list and set the output to be a field called data. Additionally, I also had the option to specify which fields I wanted to include. And so to save a bit of money when it came to AI tokens, I decided to only include the two fields that I needed. Now, when I went ahead and executed the aggregate node, it was turning my 14 input items into a single output item, which meant the LLM would now have the entire context in a single message. Upon now executing the LLM node, I received a summary of all of my commits, showing me that it was working, sort of. Unfortunately, the output felt kind of wooden, and instead I wanted it to be a little bit more natural sounding. So, I went about refining the system prompt. As always, when it comes to prompt engineering, it often takes a few iterations to get it right. Although, for me, it always feels a little bit arcane. However, in addition to modifying the system prompt, I also found a big improvement from changing the actual model from GPT-5 to GPT-4.1 mini, which not only saved costs, but I felt gave a more natural result, which ended up giving me an output I was pretty happy with. With that, I now had summarization configured, and the standup report for the work I had completed the previous day was starting to take shape. Now, all I needed to do was send it to myself over a couple of communication channels: Slack and email. Out of the two of these, the easiest to configure in my mind was going to be Slack. So that's the one I decided to implement first. 
To do so was as simple as selecting the Slack node with the send a message action, which required setting up more authentication credentials. In this case, I configured the authentication to use OAuth 2 so that the messages would be sent from my own user account. With the credentials added, the next thing to do was to configure where the message should be sent, either to a channel or to a user. Long-term, I wanted this to be sent to the engineering channel in order to automate my standup updates. However, for the MVP, I instead decided to just send it to myself. With my user selected, it was then time to configure the message, which I set to the simple text message type. And then for the message text, I just dragged in the output from the LLM. Once configured, I executed the node and received a Slack DM from myself with my summarized git commits. Very cool. With Slack confirmed as working, the next thing I wanted to do was to also send this to myself as an email, on the off chance that I didn't have access to Slack that day, or if I one day suddenly lost my mind and decided to migrate to Microsoft Teams. To set this up, I went ahead and added in an email node, which allows you to send an email over SMTP. As for the SMTP service, in my case, I configured this to work with Resend, which is pretty much what I'm using for all of my email sending these days, even though it is kind of expensive. In any case, with the email credentials configured, I was then able to finish setting up the rest of the node, adding in the from email, the to email, the subject, which I used an expression with in order to generate the current date, and the email's content, which I defined as plain text and then just dragged in the output from the LLM summarization. With my email configured, I was now able to test this and received a nice plain text email with the summarization of my previous day's commits. With that, I had my initial implementation completed. 
Now, all that remained was to activate the workflow and then wait for it to execute the following day. The next morning, I woke up just after 6:00 a.m. and went about my normal morning routine before heading on over to my desk and waiting with nervous apprehension to see if my creation was going to work. As 8 a.m. rolled around, I soon received both an email and Slack notification containing my standup update for the work I completed the day before. With that, I had successfully managed to complete my MVP. So, it was now time to focus on making the rest of the standup process obsolete. To do so, I first needed to obtain the other two data points, which were what I was currently working on and any current blockers. This ended up being rather simple to achieve by adding in an integration to Linear, which is what I use for issue tracking. By doing so, I could pull down any issues that were assigned to myself and were currently marked as in progress, which would serve as the data for communicating what I was currently working on. To do so required the use of a filter node, which I used to remove any items that didn't match these constraints. In addition to this, I also attempted to use Linear in order to pull down any tasks that were closed in my previous working day. Unfortunately, however, this wasn't possible, as the Linear node didn't return the timestamps for when a task status changed, which meant I had to nip the idea in the bud. Maybe it would have been better to use Jira instead. As for obtaining the data to communicate any blockers, here I made use of another GitHub node in order to fetch data on any open PRs that were created by myself. 
With both of these new data sources added, the next thing to do was to link them to their own aggregate node, each with their own unique field name, which meant I also needed to modify the output field for the existing commits aggregate node, before then using a merge node to turn each of the three inputs into a single object that could then be passed to the LLM. Once that was complete, I then modified my system prompt to reference each of the input data fields for their respective standup communication topic. With that, I was now generating a stand-up update that I would consider to be somewhat complete and could be used to refresh my memory whenever the hacky sack of doom landed in my hands. Of course, I could have just stopped there. However, given how far I'd come, I wanted to see how much of standup I could end up automating. Therefore, I decided to first tackle the task of asynchronous standup, which is where you typically publish your standup update inside of a Slack channel. Uh, in my case, this would be engineering. Whilst sending a message to a channel would be simple enough, the real challenge was to modify my workflow so that it would only publish on days I didn't have a stand-up meeting scheduled. Fortunately, this was possible using a couple of nodes provided by n8n. The first of these is the Google Calendar node, which can be used to pull down any events from one's Google calendar. In my case, I configured this node to pull down events from the scheduled day that matched the query of standup. This meant it would only produce a result if I had a meeting that day. However, in order for my workflow to succeed, I configured this node to always produce output data, which meant there would always be an output even if I didn't have a calendar event for that day. With the calendar node now added, the next thing to do was to define a different behavior depending on whether or not standup took place. 
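That calendar-driven branching can be modeled as a small conditional. A sketch only: the function name and channel strings are made up for illustration, and in the real workflow this decision lives in an n8n node rather than standalone code:

```javascript
// If the calendar query for "standup" returned no events, treat the day
// as async: publish to the team channel instead of DMing myself.
function standupTarget(calendarEvents) {
  const isAsyncDay = calendarEvents.length === 0;
  return isAsyncDay ? "#engineering" : "@me";
}
```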
To achieve this, I used an if node, which allowed for branching based on a conditional expression, which in this case was to check if the calendar input was empty, which if it was, I would consider that day to be async. Lastly, all that remained was to modify the existing Slack node to explicitly reference the LLM input, followed by then duplicating the node, linking this duplicate up to the asynchronous branch, and then modifying it to write to the engineering Slack channel instead of sending the message to me. Now, when I went and tested this on a day I didn't have standup scheduled, I was now receiving a message from myself direct to the engineering Slack channel. With that, I had managed to automate my async standup meetings. Uh, but what about the synchronous ones? Well, this I'm still figuring out. However, my current idea is to make use of something such as ElevenLabs in order to generate an audio clip that makes use of my voice. Here's a quick sample. Let me know how you think it sounds. 'Yesterday, I worked on the starter template, updating the dashboard UX with a personalized welcome message, added a clearer upgrade CTA, and some reusable UI elements. Today I'm working on getting integrations set up within the Zenstart dashboard, starting with one to create a new database instance when a new repo is generated from a starter kit. As for blockers, currently I have none.' In addition to generating this clip, I'm also going to have to figure out how to automatically play it whenever it's my time to give an update. However, that's going to be a problem for my future self. And given the team that I currently have, I'm pretty sure that standups are going to be asynchronous for the foreseeable future. I want to give a big thank you to Hostinger for sponsoring this video. 
If you're interested in deploying Docker Compose applications using the new Docker Manager feature, then make sure to use my link in the description down below and use my coupon code dreamsofcode in order to get an additional 10% off when checking out. Otherwise, I want to give a big thank you to you for watching, and I'll see you on the next one.

Video description

No scrum masters (nor bananas) were harmed during the filming of this video. 🍌

To get your own VPS instance this Black Friday to try out Docker Manager with, visit https://hostinger.com/dreamsofcode and make sure to use my coupon code DREAMSOFCODE to get an additional 10% off.

Whilst I believe standup is a good idea on paper, in practice I find it often runs into some of the same repeating problems, with perhaps the most pervasive of all being my lack of ability to remember. Therefore, rather than succumbing to this, I decided to see if I could use my skills to resolve it, and ended up doing so, using a tool I've overlooked.

This video was sponsored by Hostinger

Links:
- n8n: https://n8n.io/
- n8n Docker Compose docs: https://docs.n8n.io/hosting/installation/server-setups/docker-compose/
- My "Breaking Standup" workflow JSON: https://github.com/dreamsofcode-io/automating-standup

Watch my course on building CLI applications in Go: https://dreamsofcode.io/courses/cli-apps-go/learn 👈

My Gear:
- Camera: https://amzn.to/3E3ORuX
- Microphone: https://amzn.to/40wHBPP
- Audio Interface: https://amzn.to/4jwbd8o
- Headphones: https://amzn.to/4gasmla
- Keyboard: ZSA Voyager

Join this channel to get access to perks: https://www.youtube.com/channel/UCWQaM7SpSECp9FELz-cHzuQ/join
Join Discord: https://discord.com/invite/eMjRTvscyt
Join Twitter: https://twitter.com/dreamsofcode_io

00:00:00 Intro
00:02:01 The Game Plan
00:03:48 n8n (hot stuff)
00:04:58 Hostinger
00:06:13 Installing docker & docker manager
00:06:49 Setting up n8n
00:07:59 First n8n workflow
00:09:02 Obtaining my commits
00:13:57 Obtaining patches
00:15:11 Summarizing my work
00:18:42 Sending myself the report
00:20:40 Testing my implementation
00:21:11 The other data points
00:22:47 Automating Standup (async)
00:24:37 What about synchronous?

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC