bouncer

octetz · 2.8K views · 111 likes

Analysis Summary

10% Minimal Influence

“This video is a transparent technical tutorial; be aware that the creator expresses a personal preference for 'crane' over 'skopeo' based on user experience, which is explicitly stated.”

Transparency: Transparent
Human Detected: 98%

Signals

The content exhibits high-level technical expertise with a natural, conversational delivery that includes personal opinions and specific workflow preferences. The presence of filler words and informal sentence structures strongly indicates a human creator rather than a synthetic script or voice.

Natural Speech Patterns: The transcript contains natural conversational fillers and phrasing such as 'chances are good', 'now typically you'll just hear people say', and 'honestly the reason i primarily don't use'.
Personal Anecdotes and Context: The speaker references specific personal preferences ('the one that i use day-to-day', 'i really love crane') and historical context ('back when that was a company that existed').
Cross-Platform Identity: The video links to a personal website (joshrosso.com) and a Twitter handle that matches the niche technical expertise shown in the video.

Worth Noting

Positive elements

  • This video provides a clear, low-level explanation of how OCI manifests and multi-architecture images work, which is highly valuable for DevOps engineers.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-10a · App Version: 0.1.0
Transcript

Have you ever used a container? If so, chances are good that you have: asked for some image to run, the container runtime has gone out to some repository, pulled that image down for your architecture, unpacked its file system, and then used some lower-level primitives in Linux, like cgroups and slices, to actually set up and run that container on the system. Now, there are a multitude of different container runtimes that we can be using. An example, and the most prevalent, is Docker. Docker really brought containers to the forefront and to the hearts and minds of developers. Along with Docker there are many other runtimes, such as containerd, which Docker is using under the hood, there's Podman, CRI-O, and many more. As the proliferation of different runtimes came about, it became especially important that we had a standard image format for these runtimes. In fact, CoreOS, back when that was a company that existed, had a container runtime called rkt, and a lot of their work went into having the standard be developed and having it be something that is vendor neutral. So this is partially why we live in a world today where there is a format that can be used across many different runtimes. The specification we go by today is under OCI, or the Open Container Initiative. Now, typically you'll just hear people say that something is an OCI image, and this is just shorthand for saying that it is a container image that can run on a bunch of different container runtimes because it is following the OCI image specification.

As containers have become more popular, so has the need to put different configuration and ancillary assets alongside the containers on a given host. An example being, in Kubernetes there's a very common policy setup that uses Open Policy Agent. Open Policy Agent allows you to do a bunch of authorization-type policy for things that are trying to access resources through the API server, and these policies can be bundled up in an OCI format. The same goes for things like Helm charts, which we use to then deploy and configure different packages in Kubernetes, and the same with the Carvel package suite as well. Now, these different tools and these different assets don't fit the standard file-system model that we're used to in a container image, and this is where the OCI artifact specification comes into play. It allows us to define a different type of OCI asset that we can pull down with similar client tooling. This way our container images and these ancillary assets can sit side by side in the same registry. All of this is just to say that the number of OCI assets is growing quickly. Whether it be container images or artifacts, we as developers are going to be using this format in clusters and hosts all over the place. Now, as developers, it'd be really nice if we had some tooling available to us that wasn't tied to a container runtime that would enable us to discover and introspect these images and artifacts, and that's exactly what we're going to be talking about today.

Before I go any further, I should mention that this video has an accompanying post on my website. I'll put the link in the description if you want to check it out and follow along; there'll be some diagrams and snippets that you can go by if you want to try some of this out for yourself.

Now, for tools, the one that I use day to day is called crane. Crane is a CLI tool that is built on top of the libraries in Google's go-containerregistry. These also happen to be the Go libraries I use when I'm working with OCI artifacts or images inside of code. So all in all, I really love crane. It's what I'll be showing off today, but there are a couple of others worth mentioning as well. Probably the next closest tool I'm aware of to crane is Skopeo, and Skopeo has very similar functionality. Under the hood, Skopeo is using libraries that exist inside of Podman, so if you happen to be in that ecosystem, this might be a good option for you. Honestly, the reason I primarily don't use Skopeo is I just find crane to be a little bit better from a UX perspective and a little bit easier to work with when piping through different Unix tools. A very small example, although silly: when I run skopeo with no arguments, instead of it giving me the available commands, which I usually expect from a lot of my tools, it gives me a fatal error. If I use crane, which is probably built on something like Cobra, it actually prints out the commands and gives me some better help information about what I might have done wrong. And I just love some of the command names, like ls to list the different tags, which we'll be looking at today. So, really love crane. Another one worth mentioning that I do use a lot is imgpkg. imgpkg is something I use for creating bundles of assets and arbitrary configuration; at work I do a lot with Carvel packaging, bundling those up and pushing them to different container registries. imgpkg is also really capable of doing some different things, like recursively bringing over a bunch of configuration, which might reference images, which then might even reference images again, and moving all of those assets to a repository. So if you have, say, a local repository running, you can bring a whole slew of images over. imgpkg: really cool for bundling, really cool for packaging. It's not what I use for querying and introspection, but it is something worth checking out if you're working in this space.

Now we'll start actually using crane. If you want to follow along, you can get the download and install instructions from the crane GitHub; depending on your package manager, you might have it available there too. In my case, with Arch, I will mention that in the Arch User Repository there are two cranes: crane-bin, which is not what you want, you can see it's coming from a different repository, and one called go-crane-bin, and this is the one from go-containerregistry. So in my case, and in your case if you're Googling around, just make sure you're getting the right crane installed on your system.

Let's start off with a simple use case where we want to figure out what tags, or versions, are available of the Kubernetes API server container image. Using crane, we can run the ls, or list, command against the repository location, which happens to be in GCR, and we can do kube-apiserver here, and I'll also limit the results so we only get things that are 1.24, and hit enter. Now, these are all the different versions available to us, and we can see that one of the newer versions that has been released is 1.24.2. So what can we then do from here? Well, a common thing we might want to know is: what's the actual digest value, or sha, of the 1.24.2 image? We can find this out by running a similar command as before; this time we'll add the tag first, 1.24.2, and then instead of doing an ls I'll go ahead and run the digest command, and this will pop back and give us the actual checksum value for this image.

Now, getting the digest is cool, a really good way to verify things, but at the same time it doesn't tell the whole story. What I mean by that is that this image may actually be backed by multiple architectures. What I mean by different platforms and architectures is that in a lot of modern environments you're going to have different hosts with different archs. An example of this would be a Linux host running amd64 and then a Linux host running arm64, be it a Snapdragon CPU, or perhaps one of those fancy new M1/M2 Macs where Docker is running a VM of Linux behind the scenes to run container images. Now, what happens with these different architectures is that we want to be able to point to a canonical tag, v1.24.2 in this case. So what happens is that this tag can then point to, or reference, different architectures for kube-apiserver, and what this means is that when each of these hosts' runtimes, which are pointed at the exact same tag, resolves the tag, it'll be able to say: oh, I'm on an amd64 host, I'm going to pull down the sha e31b9d..., which is the amd64 one, and the same goes for arm64 with a65.... So this gives us the flexibility to run different architectures. Not all images have this; some of them will just be a single arch, and if you were to inspect their manifest it'd just be links to the different layers under the hood. But this is becoming more and more common with different architectures.

So, going back to our crane example: how would we actually inspect and understand what is available? Well, one way we can do this is to run crane manifest against the same URL that we were using before. I'll just copy that URL in, and we'll go ahead and let it run with some pretty printing through jq. With this coming back, we can scroll through, and you'll actually see, although a little hard to read, the platform area of the JSON output, which says what the operating system is and then what the architecture is. I'll even simplify this one step further with a jq selector to make it a bit cleaner: I'm going to be selecting on the platform, amd64 and arm64, or sorry, architectures, and here we can see the results. So I've got two different entries here: I've got the sha for the amd64 kube-apiserver and the sha for the arm64 kube-apiserver. Now, you can continue to go down this rabbit hole to see how this manifest points to other manifests. An example being: let's assume we want the amd64 one, since that's often the most common. We'll just copy that for a moment, and with that copied we can run the exact same command. So this is crane manifest, and this time, instead of using the colon, we're going to use the at sign, so this will be at the digest, which is how we represent digest values, and then I'll grab the prefix one more time, so it'll be kube-apiserver, and we run manifest. Here you'll notice it looks a bit different: it no longer has that section for platforms; instead it has a section for layers, which is again more digest values, which point to the actual container image layers. So you can see how this introspection is going deeper and deeper and deeper, and really revealing what all the pieces are that make up our end-state container image.

Now that we've done some discovery, let's look at introspecting the contents of the image or artifact. I'll start off by making a quick temporary directory that I'll cd into so that we have a clean scratch pad. Inside this clean scratch pad we're going to run crane export, and I'm going to add the -v flag so I can show you some of the different requests that happen at the HTTP level. I'll also go ahead and grab the URL again that we've been working with for kube-apiserver. Normally export will bring down the actual tarball, but I'm going to pipe this out to unpack it so we can look right into the contents of that tarball. So we'll go ahead and let this run. Now that it's run, I'm at the very top of the output so I can show you some of the things that occurred. Initially there's some redirect magic, and you can see this almost like a curl request that you've looked at, and if we scroll down a little bit we'll eventually see a 200 OK requesting the manifest, and the first manifest that comes back is the manifest with all of the different architectures. Now, this is a Linux machine running amd64, so I'd actually expect e31... to be the one that crane resolves and unpacks, and if we scroll down we can see that eventually the Docker content digest that's resolved is e31.... If we continue scrolling, we can see it eventually pulls back all the layers, and if you continue to go through the requests you'll see each one of these layer digests getting pulled down as well. Back in the scratch directory, if we do a quick ls, we'll see that there's a Linux file system here, which makes sense; that is largely what the container image would have inside of it. In fact, we can even go in and look in usr/local/bin, and we'll see the kube-apiserver binary. This is running, again, on a Linux host, so in theory I could even run that binary: if I go into usr/local/bin and I type in kube-apiserver and --help, I will get the help flags for the binary. I could technically run, I don't know why I would, but run this binary that I extracted out of the container image itself. So again, we have some pretty deep introspection here about what the contents of the image are, and we can poke around.

Now you might wonder, well, what might you actually look at? And there's a whole set of things; I've used this as a way to really dig into really obscure behavior I was seeing in a container. But here's a quick, random example. Let's say that this binary got built and I'm kind of curious how it got built, because there's something weird going on with it. Well, using the Go tooling, I can actually run go version, and of course this would only work with a Go binary, but I can do go version and then usr/local/bin/kube-apiserver, and this will give me details about how this binary was built. I can see the Go version it was built under; I can see some details about linker flags, GC flags, and so on. So yeah, pretty cool: I got this deep way to look into a binary that is normally fully abstracted from me because it's just running in a container image. Some deep troubleshooting capability there, should I want it.

Now, in my preamble I mentioned the idea of OCI artifacts, which aren't just typical container images, so we should spend a little bit of time at least looking at one of those. The example I'll give you is for kpack. kpack is a tool that you can deploy into a Kubernetes cluster, and it does automated builds; if you've ever used Cloud Native Buildpacks, it introspects what you're trying to build and then produces a container image for you without you having to produce a standard Dockerfile. All that put aside, this is a Carvel package that can be deployed, and it's stored in an OCI format. If you're not familiar with Carvel, think of this sort of like a Helm chart, just a different set of tools that does something similar. So what I've done is I've gone ahead and created a new directory, and we'll run crane export again. I've grabbed the location of kpack, since I already knew it, and the tag, 0.5.2, and we'll pipe this one through again and also unpack it. If I hit enter here, this is an example of it pulling down a non-standard container, I almost said container image, but OCI artifact, where it has a bunch of configuration files. If we look inside one of those config files, like the kpack config yaml, you'll notice that this is all valid Kubernetes YAML that could be deployed inside of a cluster. So this is really helpful, again, not just for looking at container image file systems, but actual configuration that might be bundled in an OCI asset.

In one final example, I'll show you how you can use crane to copy images between repositories. Crane features a cp, or copy, command where you specify what the source image is and then some form of destination image. I'm actually going to point to my account in Docker Hub; you'll see that it's index.docker.io. If you haven't seen that before, you might be used to using Docker, where there's no prefix, it gets assumed, but if you're using something that isn't Docker, you need to put in index.docker.io. I think crane might actually assume it too, I can't remember. And then my username and kube-apiserver; I'll do something like custom-kube-api, just for fun. Now this is all set up and good to go to do the copy, so if I go ahead and hit enter, it is going to grab that source and start copying it over. What's cool about this are a couple of things. One is that it's capturing all of the target architectures, so I'm getting a full copy of the amd64, the arm64, and so on, images in that target repository. Another thing that's cool, and you'd probably assume this is true, but it's really important, is that digest values are going to remain exactly the same. So once this is done copying, I'll actually be able to show you how we can do a quick diff and see that the digest has not changed at all. In fact, if we go, while this is loading, to the crane GitHub page, you'll see that there's a little link for useful things you can do, and if you scroll down, there's a couple of really sweet ideas here, but we're going to do the diff between manifests to verify that the original manifest and the new manifest are exactly the same. So we'll go back to the terminal; it looks like it's still pushing it up... and now it has finalized the push and put the digest up here for custom-kube-api. So I'll tell you what, we'll go in here, we'll just copy their diff command from GitHub and paste it in. They're using busybox as an example, but of course we're going to use the gcr.io location, so I'll paste that in, that's the original, and then for the manifest here I will do my target location. So this was here, and I think I did custom-kube, what was that, custom-kube-api, one word, and I will hit enter. When diff returns nothing, that means the manifests are exactly the same. If I came back here and did 1.24.1, diff would come back with a bunch of differences, because that's a completely different image. So effectively I've gone ahead and copied this over into my own registry. Now, putting it in Docker Hub is a little silly, but one of the common use cases of this, let me grab the tag out of here so I can show you, so my tag is in here, is bringing images over into private repositories: oftentimes in air-gapped scenarios, or maybe not fully air-gapped, but just a private repository that you're running locally, perhaps co-located with your cluster. This is a way for you to move that over; you can point to these and have assurance that you're getting the exact same image as originated from that upstream, in this case GCR, repository.

So I hope you found this video interesting. Largely this is just a shout-out to the awesome tool crane, which I use in my development environment all the time. It is part of the go-containerregistry repository, and shout-out to everybody who maintains this thing and everybody who contributes, from code to docs. It's a really exceptional tool, I love using it, and yeah, just wanted to say thanks so much for all the work on this thing. After you've seen this video, hopefully you'll go check it out for yourself.
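The multi-architecture tag resolution the video walks through can be sketched in code. The snippet below is a minimal illustration, not crane's or any runtime's actual implementation: it parses an image index shaped like the first manifest `crane manifest` returns for a multi-arch image and picks the per-platform digest the way a host's runtime would when resolving a canonical tag. The index and its truncated digests are hypothetical stand-ins, not real kube-apiserver values.

```python
import json

# A trimmed image index (manifest list), shaped like the first manifest
# `crane manifest <repo>:<tag>` returns for a multi-arch image.
# The digests here are hypothetical placeholders.
IMAGE_INDEX = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:e31b9d0000000000", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:a650000000000000", "platform": {"os": "linux", "architecture": "arm64"}}
  ]
}
""")

def resolve_digest(index: dict, os_name: str, arch: str) -> str:
    """Pick the per-architecture manifest digest, the way a runtime does
    when it resolves a canonical tag (e.g. v1.24.2) on a given host."""
    for entry in index["manifests"]:
        platform = entry.get("platform", {})
        if platform.get("os") == os_name and platform.get("architecture") == arch:
            return entry["digest"]
    raise LookupError(f"no manifest for {os_name}/{arch}")

# An amd64 Linux host and an arm64 Linux host resolve the SAME tag to
# different digests, each pointing at its own layer manifest.
print(resolve_digest(IMAGE_INDEX, "linux", "amd64"))
print(resolve_digest(IMAGE_INDEX, "linux", "arm64"))
```

Fetching the digest returned here with the at sign (rather than the colon used for tags) yields the second-level manifest, the one that lists layers instead of platforms, which is the same drill-down shown with `crane manifest` in the video.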

Video description

OCI has long been the standard format for container images. Over time this standard has grown to support additional artifacts. As the variety of OCI-compliant images and artifacts has grown, it has become important to have tooling that enables discovery and introspection. This video covers the command line tool crane and how it can be used for discovery, introspection, and copying of OCI assets. Website post: https://joshrosso.com/c/navigating-oci Twitter: https://twitter.com/joshrosso

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC