bouncer

octetz · 2.9K views · 114 likes

Analysis Summary

10% Minimal Influence

“This video is highly transparent; be aware that the code examples prioritize simplicity for educational purposes over production-ready error handling.”

Transparency: Transparent
Human Detected: 98%

Signals

The content exhibits clear signs of human narration, including spontaneous linguistic fillers, real-time cognitive processing during technical explanations, and a non-formulaic script structure. The depth of technical nuance and the specific references to kernel source code suggest a subject matter expert recording live rather than a synthetic voice reading an AI-generated script.

Natural Speech Patterns The transcript contains natural filler phrases ('ah you know', 'I'm sure you've likely done it before'), self-corrections ('physical or I guess not physical but a technical file'), and conversational transitions ('let's go ahead and dig into that').
Domain Expertise and Context The narrator references specific kernel source file paths ('FS inside of here') and relates technical concepts to personal implementation experiences ('if you've ever implemented like a ring buffer').
Personal Branding and Continuity The video links to a personal blog (joshrosso.com) and the channel 'octetz' has a consistent, niche technical focus with a distinct personal voice.

Worth Noting

Positive elements

  • This video provides a clear, hands-on explanation of the 'everything is a file' philosophy in Unix by showing how standard input can be treated as a file descriptor in code.

Influence Dimensions


Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-10a · App Version: 0.1.0
Transcript

Pipes: easily one of the most ubiquitous things we use when we're bringing multiple commands together on a Unix-style system like FreeBSD, macOS, or Linux. We've all done this: we run some command like curl, we get output from it, and then we realize, ah, you know, I want to manipulate that output to some degree. So we introduce the pipe character and send it to an application like jq, which, while it can access individual fields, we often just use to pretty-print the output and give us an easier way to look through things. jq, in turn, writes its output to standard out, so we can pipe that into a tool like less and use less to actually scroll through the output we get. This is a pretty common practice, and I'm sure you've likely done it before. But what I want to explore today is a little bit of what's behind the scenes of these pipe characters: specifically what they're doing, and also, what if we wanted to persist this idea? What if we didn't want to pipe everything through just one chain of commands, but instead make almost a queue that we could listen on, providing some form of inter-process communication on a given system? So let's go ahead and dig into that today.

Starting off, let's look at what a world might look like without this pipe construct. We can conceive of a similar effect if we were to just output to some JSON file and then run the same commands against that file: in the case of jq, we give it the file as an argument, we overwrite the file's contents, and then we call less against that same file. Once all is done, we do some amount of cleanup of that file as well. Obviously, pipes give us a huge convenience factor so we don't have to deal with this every single time. And if we take a quick look at what a pipe is from a Unixy, Linux standpoint, well, these pipe
characters are effectively buffers: the standard output coming out of each command ends up buffered by the Linux kernel. In fact, there is a syscall called pipe, and you'll see the file descriptors it returns and how the kernel pulls some amount of buffered data in and out, which can then act as standard in for another application that we pipe to. It effectively negates the need for us to create all of those intermediate files each time we move between applications, which is a huge convenience.

With a quick look inside the Linux kernel we can actually infer some details about pipes. If we search for the pipe.c file, which lives under fs/, we can get some good details about how pipes are buffers. We can also learn some of the things we can do, such as what the max size of a pipe buffer is, along with where we'd go to actually change the size of a pipe buffer; a very uncommon thing to do, but a cool, in-the-weeds detail we can discern from here. There's also a note up here talking about heads and tails: if you've ever implemented a ring buffer, or a circular array that can overwrite itself as it fills, that's effectively what's going on under the hood. And if we look at the syscall definition, we can see it hands file descriptors back to the caller. So there are these cool mechanisms presenting what looks like a file descriptor while the data is actually buffered under the hood, so we don't have to worry about the overhead of a physical, or I guess not physical but an actual on-disk, file; instead it's all buffered by the kernel.

Now we have to ask: how do we ensure our applications end up interoperable with the larger Unix landscape of applications, like we saw with curl, jq, and less? And as we
saw, a really good way to make them interoperable is to ensure they can accept input on standard in from a pipe. So what we'll do now is take a look at building a small application, in this case in Go (though the principles apply similarly elsewhere), called jsoncheck. First I'll show you what the end state of jsoncheck looks like; I have it fully compiled here, and we'll go into the code shortly. jsoncheck should be able to accept a JSON file. In this case I have products.json, and I happen to know that products.json is in fact valid JSON. It will also print out a timestamp, and we'll see why a bit later. Now, if we give jsoncheck something through a pipe, I expect it to work just fine. So let's go back to our curl command, pipe it into jsoncheck, and validate that it's good JSON; because the tool is interoperable with a variety of tools, this of course should work. And if we were to do something like echo, so if we intentionally echoed some improper JSON, "this is not right", we'd expect the same flow, except this time it reports invalid JSON rather than valid JSON. So this little tool is interoperable; let's look at the source code to see how we can accept these piped inputs.

Now we're inside the source code, and we'll start by looking at the constants at the top. The constants are just the messages we'll return to the client, and our focus is going to be contained entirely inside of main; this is a really simple tool, and it all operates inside the main function. Inside main, we start off by declaring a byte array that will hold the JSON data once we capture it. Out of the gate, we're not just going to read from the file, because the order of operations we want is to first see if there is input from a
pipe that we'd like to use; if so, use it, and otherwise read from a file named in an argument. And I should warn you: the source code omits error handling and does other things you wouldn't normally do, but for brevity and keeping it simple I'm going to push some of that aside. The first thing we do is stat the file, which gives us some information about it, and what we're statting, as you'll notice, is effectively the descriptor for standard in. It might feel a little weird treating standard in as a file, but that's effectively what it is: there's a descriptor we can grab and information about it we can get, and that information lets us understand whether it's a pipe. To understand whether it's a pipe, we can look at the comments here, but also at this if conditional. What we're checking is the file mode. When standard in is attached to your terminal, it shows up as a character device; when it's fed by a pipe, the character-device bit is not set. So seeing that the character-device bit is zero tells us, for all intents and purposes here, that our input is coming from a pipe. Now that we know this is a pipe, we just read standard input; in our case, using ReadAll, which grabs everything and returns it as a byte array, is a really easy way to store it in our JSON data byte array. In the else condition, which is when there was not a pipe, we still handle the file use case and, eventually, when we get to it, the named pipe. In the file use case we can
simply use os.Open. We're assuming there's an argument passed in, and this will index out of bounds if not, because again we're keeping it simple. We defer the close of the file, and we set up a buffered I/O reader against that file descriptor. What we do here, to handle both files and, potentially, a named pipe, is read every single line; for every line we read, we get back a byte array representing that line, and since our JSON data is itself a byte array, we can just append each line to it, continuing to build it up. Finally, if there's an error (we should be smarter about this), what we look for is an end-of-file error, io.EOF; when we hit it, we just break out of the loop, because we've reached the end of the content. So you can see we've basically got two cases that work: the file case, which loads the JSON data byte array, and the piped case, which also loads that same byte array.

Coming down to the bottom, we finish out our logic by doing the validation. Lucky for us, the json package has a json.Valid call we can make: give it our byte array and it returns true or false for whether the JSON is valid. Based on that, we print either the valid or invalid message and return a status code as well, which is another best practice: returning 0 is considered a successful exit, and non-zero is considered an error exit. These particular numbers are a bit arbitrary, and we'll talk about that a different day, but suffice to say we're following best practices with this model. So now we've got our main set up and our JSON data ready to go. We can easily build this with go build, specifying the output flag, and we'll just call it
jsoncheck so that it's a little cleaner for us to call. Once jsoncheck has been built, we should be able to send our curl command into it, so let's try exactly that: the curl command goes into jsoncheck, and we've got our input, no problem at all. Just like you saw before, we can try the different variants: we can pass the file name, or we can echo a command with bad JSON inside of it, and it will all be parsed and give us the appropriate response.

Now we're going to wrap up with the concept of named pipes. Named pipes are quite similar to what we were talking about before: they're often referred to as FIFO queues, and they facilitate inter-process communication. The big difference is that we're going to be using a command, mkfifo, and on most Unix-based distributions, if not all, you should be able to find mkfifo: you'll find it on an Apple computer or Mac, on FreeBSD, on Linux, and so on. mkfifo creates a first-in, first-out queue, essentially, and this will basically be like a pipe that's referenced as a file. So let's go ahead and call mkfifo now. I'm just going to put it in the temp directory (it doesn't have to be there, it's just easier to clean up after the fact), and we'll call it q in this case. Now if I go to /tmp and check out q, there it is. We've got this queue, and we can see it's a little bit of an interesting file: note the p at the beginning of the mode bits, so this is almost like a named, or persistent, pipe for us. What we'll do now is actually utilize the same jsoncheck application we built to loop and keep watching this queue as different applications send JSON data in. We'll have jsoncheck print out both the timestamp and whether the JSON was valid, showing off how this named pipe can be really cool for longer-running process communication. Let's try
it. I'll copy and paste in this loop: it's basically a bash while loop, and in it we're running jsoncheck against our queue name, which was just q. It will sit here and wait, because it's watching that file: it's got the descriptor open and it's reading until it gets to end of file. While we've got this running, we'll open another tmux pane, and then we can send stuff in and see the output from jsoncheck up top. Let's start with something bad: we'll echo "not right", some random characters that clearly are not JSON, and then, just like we would with a file, redirect it into /tmp/q. This sends it in, puts it in the queue, echo pops back, and (I'm just going to minimize a little here to fit it on one line) we can see this is considered invalid JSON. We just continue this pattern: if we go back to our curl request and put the curl request into the queue as well, jsoncheck responds, and it will just keep going and going as long as jsoncheck is watching this queue. If it's not, we'll see things hang. So, you know, we'll kill the loop at the top so jsoncheck is no longer listening, then at the bottom we'll run the request again, and now it's just blocked: it's sitting there putting data on the queue, but nothing is popping back off the queue to release it, more or less. But once again, if we run jsoncheck (in fact, we don't even need the loop in this case, we can just run jsoncheck once), make sure jsoncheck is given the file /tmp/q, and hit enter, jsoncheck
receives it off the queue, which releases curl at the bottom here. And since a named pipe is effectively a file, the way we've set up the read means we could just read in a regular file as well, so jsoncheck could also grab this products.json file and do effectively the same thing.

So that is it for pipes. You've now seen mkfifo for named pipes, you've seen normal pipes for just moving information to and from, and you've gotten an idea of some of the internals, which I find cool. If you leave this talk with anything, what I'd hope is that maybe you've got a little more respect and appreciation for what's going on with pipes. They're really cool; I know I take them for granted all the time. But also, if you're writing a tool or a script in the future, you may want to consider accepting standard input through pipes, because then your tool will largely become interoperable with the larger landscape of Unix-based tools.

Video description

Blog post: https://joshrosso.com/c/pipes

Pipes are cool. We all use them, but have you ever considered what’s happening behind the scenes? Additionally, did you know there’s a way to persist them to act as simple queues, facilitating interprocess communication? I’ll be delving into pipes today. Let’s go!

A Unix pipe is a form of redirection that allows data to flow from one command to another, connecting the output of one command to the input of another command without using an intermediate file. Pipes are a powerful feature of Unix-like operating systems and can be used to create complex command pipelines for achieving higher-level tasks.

Overview: 00:00:00
Internals: 00:01:59
Building Pipe Compatible Tools: 00:03:57
Named Pipes: 00:10:45
Wrapping Up: 00:14:45

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC