bouncer

DevOps Toolbox · 49.4K views · 1.6K likes

Analysis Summary

30% Low Influence

“Be aware that the '87% off' lifetime deal is a high-pressure sales tactic designed to bypass your evaluation of long-term service viability by making the one-time cost seem negligible.”

Ask yourself: “Did I notice what this video wanted from me, and did I decide freely to say yes?”

Transparency: Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
95%

Signals

The content features a distinct personal voice with subjective industry critiques, specific troubleshooting experiences on a Mac, and conversational humor that is highly characteristic of a human creator rather than a synthetic script.

  • Personal anecdotes and humor: The narrator uses self-deprecating humor ('Yes, I'm a genius that uses the same key for everything. Sue me.') and personal context about his hardware ('Darwin option for my Mac').
  • Natural speech disfluencies: The transcript contains natural phrasing like 'Well, didn't really provide it yet' and 'I thought I'd be able to... but not only I'm not sure', which reflects real-time problem solving.
  • Specific technical opinion: The narrator expresses specific, subjective opinions on industry shifts (Minio's licensing change) and personal preferences ('my beloved SQLite').
  • Hardware/physical context: The description lists specific physical hardware used (Dygma Defy keyboard, 3DKeyCaps), which aligns with the 'battle station' link.

Worth Noting

Positive elements

  • This video provides a practical, hands-on demonstration of configuring Garage and Caddy, which is highly useful for developers looking to understand S3-compatible self-hosting.

Be Aware

Cautionary elements

  • The use of 'lifetime' pricing as a primary selling point for cloud storage, which often masks long-term sustainability risks for the user's data.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

Back in 2006, AWS didn't really exist. Amazon just needed a way to keep their bookstore from melting under its own weight. So, they built a storage service. They called it S3, sold it as a service to anyone else, and the rest is history. A year in, it had 10 billion items. Today, we're looking at over 500 trillion. That is half a quadrillion objects. It's a number so big it feels fake. And look, S3 is a masterpiece. The APIs, the features, the way it just works. But for most of us, S3 has three massive catches. First, the cost. It's not the storage that gets you. It's the egress trap. You pay a fee for fetching your stored objects, often more than for just storing them. Second, the complexity. To actually save money, you basically need a PhD in AWS pricing to navigate Glacier, intelligent tiering, and API request fees. And lastly, control. For a long time, the answer was Minio. It was the gold standard for self-hosting and Kubernetes object storage. But lately, things have changed. Between the licensing shift, the gutting of community features, and the sudden change to maintenance mode, Minio has started to feel less like a community tool and more like a giant that's getting old and corporate. Sound familiar, anyone? So, what if we could have the full S3 compatibility, the CLI, the APIs, the workflow, but in a package that's actually open source, like for real? Something that doesn't need a cluster of enterprise servers, that can run on a couple of old laptops or a Raspberry Pi you have lying around. That's where Garage comes in. It's not just an alternative. It's a different way of thinking about object storage. Let's get into it. Garage is open source, written in Rust, and, as befits a self-hosting platform, it's served on Forgejo, a self-hostable Git alternative. There's also a GitHub mirror for exposure, but beyond this, it has a fantastic documentation website with easy-to-follow tutorials and reference manuals.
For context, the goal of Garage is a lightweight object store for self-hosting. It's meant to be resilient, simple, and handle extreme load. One of my favorite lines here is the feature extensiveness, which aims to cover the majority of S3. This is important for two main reasons. One, well, you can enjoy your own S3. That's obvious. But the second: if you're running S3 in production, it makes sense to use another, local S3 for staging or dev. Maybe you're just starting out and have servers lying around, and you can start using the same APIs and enjoy the same features with your local physical rack. What are these amazing features I'm so excited about? Well, the S3 API to start with, but also allowing you to distribute different cluster nodes in different geo locations, just like S3. This already suggests a huge feature, which is clustering of nodes forming a scalable object storage like S3. It can serve static websites, and we'll see that in action, and even an integration with orchestrators like Kubernetes and Nomad. Now, I thought I'd be able to brew-install it, but not only am I not sure what the Homebrew formula is for (it's not officially documented on their website), the release builds don't offer a Darwin option for my Mac. So I went with the containerized option. We can run it, but it'll immediately fail asking for a configuration file. So, let's create our first garage.toml, starting with a few paths for data and metadata; the engine, which is my beloved SQLite; a replication factor, which we're not really going to use; and a bunch of other options for the API, web hosting, etc. Do note we've provided a few secrets here for communication encryption. Well, we didn't really provide them yet. So, let's create the key first and add it to the RPC secret, metrics token, and the admin token. Yes, I'm a genius that uses the same key for everything. Sue me. Even with a robust local setup, sometimes you need a secure place for those final backups.
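The config step above can be sketched as a minimal single-node garage.toml. The paths, ports, and root domains follow Garage's quick-start defaults rather than the exact file shown in the video, and the single shared secret mirrors the narrator's (admittedly lazy) shortcut:

```shell
# One hex secret reused for all three tokens, as in the video.
# Don't do this in production -- generate three separate ones.
SECRET=$(openssl rand -hex 32)

cat > garage.toml <<EOF
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"

replication_factor = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "$SECRET"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"
index = "index.html"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "$SECRET"
metrics_token = "$SECRET"
EOF
```

Garage refuses to start without this file, which is the failure the narrator hits on the first run.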
Ideally, without adding yet another recurring monthly SaaS bill to your overhead. This is where Internxt, the sponsor of today's video, comes in. Internxt is a privacy-first, open-source cloud storage platform that's essentially built for the way we work. Unlike the mainstream providers, Internxt is zero-knowledge and uses end-to-end encryption. That means you own the keys. Not even Internxt can see your database dumps or config files. You can pipe your Garage backups or Proxmox snapshots directly into Internxt, or use Docker to bake it into your NAS workflow. It acts as a private, encrypted infrastructure layer that just stays out of the way. The best part, for those of us trying to minimize recurring costs: they offer a lifetime ultimate plan with 5 TB of storage. It's a one-time payment, no subscriptions, no creeping monthly cost. It also includes their full suite of security tools, like a VPN and an antivirus, making this a sweet deal. If you want to lock in your off-site storage for good, they're offering DevOps Toolbox viewers an exclusive 87% discount, which is actually higher than what you'll see on their website. Go to internxt.com/devopstoolbox and use the code DEVOPSTOOLBOX at checkout. You'll see that 87% discount apply immediately. It's a massive win for securing your long-term data ownership without the SaaS tax. Now, let's get back to Garage. A quick docker run and we're up. One thing that's missing, which sharp viewers have probably already noticed, is the multiple ports serving the K2V API, S3 API, admin API, and the web server, none of which we actually wired through Docker. So, if you're running a containerized version of Garage, map these out, and we're good to go. Browsing the first port sends back a familiar access-denied XML, which, for anyone who has ever had the pleasure of working with S3, is not the first time seeing one of these. There's a 404 Not Found from our web server port.
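The "map these out" remark can be sketched as a docker run that publishes all four ports from the config: S3 API (3900), RPC (3901), web server (3902), and admin/metrics (3903). The image tag, container name, and host paths here are assumptions, not taken from the video:

```shell
# Run Garage with the garage.toml from the previous step,
# publishing the four ports the narrator points out.
docker run -d --name garage \
  -p 3900:3900 -p 3901:3901 -p 3902:3902 -p 3903:3903 \
  -v "$PWD/garage.toml:/etc/garage.toml" \
  -v garage-meta:/var/lib/garage/meta \
  -v garage-data:/var/lib/garage/data \
  dxflrs/garage:v1.0.0
```

With the ports mapped, `curl http://localhost:3900` should return the access-denied XML the video shows, and port 3902 the 404 from the web server.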
Now, everything we're going to run is done via the Garage CLI. Since we're working through a container, we'll exec into the container, followed by the garage CLI, a command, and then its attributes. Like the help menu, to start with, offering control of workers, status views, cluster controls, all the way to keys, buckets, etc. For example, with the bucket subcommand, we can obviously create, delete, allow permissions, and more. Let's create a bucket, shall we? The first issue is the lack of quorum, which is funny for a single-node object storage cluster. So, under status we have healthy nodes and their IDs. The first characters are enough for registration. We run garage layout assign; dc1 is the default zone for the first and only node, we'll assign 1 GB of capacity, and provide the ID of our node. All that's left is to apply the layout. And there we have it: the new cluster layout has been applied. This is your place to add nodes and configure replication and sharding, but we're keeping it simple today. Now, time for a new bucket with an original name. List it. New bucket is there. We can read its info, like size and objects, web access enablement, and aliases. And this is where the fun starts, at least for me. Also, sorry to realize halfway into the video that I didn't alias the long exec command I need every time I use the CLI. So gar, from now on, is the shortened form. The next bit is a key. You know, AWS keys: a key ID and a secret. The ones that AI autocompletes if you just ask, the ones that GitHub is full of leaks of. These keys, like on AWS, are generated once. So, keep the secret and make note of it. Don't be me. With the key in hand, we can run bucket allow with the permissions we want, a bucket, and a key to pair it with. You'd want to have your AWS CLI configuration ready, because we're about to run aws s3 as if we're working with the real thing. So, the AWS default region parameter is going to be just garage.
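The layout, bucket, and key steps above can be sketched as follows. The bucket and key names are made up, `<node_id_prefix>` stays a placeholder you fill from `status` output, and the exact key subcommand varies by Garage version (older releases use `key new --name`):

```shell
# Shorthand for the in-container CLI, as the narrator aliases it.
alias gar='docker exec -ti garage /garage'

gar status                                   # list nodes and their IDs
gar layout assign -z dc1 -c 1G <node_id_prefix>  # zone dc1, 1 GB capacity
gar layout apply --version 1                 # activate the new layout

gar bucket create new-bucket
gar bucket list
gar bucket info new-bucket                   # size, objects, web access, aliases

gar key create my-key                        # prints key ID and secret -- once!
gar bucket allow --read --write new-bucket --key my-key
```

Until the layout is applied, bucket operations fail with the quorum error the video runs into.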
The endpoint URL, based on where Garage is running, will take my localhost 3900. But when we try, well, of course we need the keys. So, AWS access key ID and AWS secret access key. Hopefully no leaks here beyond these temporary ones. Now, aws s3 ls and boom, new bucket in the house. Let's create a non-parsable JSON file and, with the AWS CLI, copy it to our bucket using the bucket's S3 path. Then ls, and there we have it. Remember S3 presigned URLs, the ones letting you create single-use or expiration-based special links? Check this out. Presign, with a number of seconds translating to 7 days, of the JSON file path. There's our link. Let's see if it checks out. Yay. Here's a malformed JSON. I literally managed to fail a six-character JSON object. Now, what about websites? S3 is a fantastic way to serve static websites. It's always been as easy as creating a bucket named after your domain and just switching on the web serving feature. Garage does exactly the same for you. Take a look at this: garage bucket website allow with the bucket's name, and we're done. Let's see what Codeium comes up with if I just chuck in an HTML tag. Huh, boring. Okay, let's throw it in there. index.html is the default setup for S3 buckets that are serving websites, so nothing to configure there. Info says website access is enabled, so we should be good to go. Now, this is a tricky one with S3. Generally, when you serve a website from a bucket, you point your domain record, like mine, for example, dotb.sh, to the bucket. The S3 API receives the request with the Host header set to that domain, and this is how it understands what it's supposed to serve. Since we're running locally, we need to trick the system. We'll start by creating a bucket with the name of the domain I'd like to serve. Let's give it permissions with the key from earlier. Now, copy our index into the bucket's path using our AWS CLI for S3. garage website allow, and we should be done.
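The AWS CLI steps above can be sketched like this. The key values are placeholders for whatever `garage key create` printed, the bucket and file names are made up, and the region matches the `s3_region = "garage"` from the config:

```shell
# Credentials from `garage key create` -- placeholders here, not real keys.
export AWS_ACCESS_KEY_ID='GK...'
export AWS_SECRET_ACCESS_KEY='...'
export AWS_DEFAULT_REGION='garage'

ENDPOINT='http://localhost:3900'

aws --endpoint-url "$ENDPOINT" s3 ls                # list buckets
echo '{"a":' > demo.json                            # deliberately malformed JSON
aws --endpoint-url "$ENDPOINT" s3 cp demo.json s3://new-bucket/demo.json
aws --endpoint-url "$ENDPOINT" s3 ls s3://new-bucket

# Presigned URL valid for 7 days (604800 seconds), as in the video.
aws --endpoint-url "$ENDPOINT" s3 presign s3://new-bucket/demo.json \
  --expires-in 604800
```

The stock AWS CLI works unmodified because Garage speaks the S3 API; only the endpoint URL changes.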
curl, also a fantastic tool you should learn more about in a video I made recently, adding the domain as a proper header, and voila: garage html. But we want to see that in a browser, don't we? There are two main options for that: we can add dotb.sh to our hosts file and map it to the localhost IP, or create a reverse proxy using Caddy, which is probably my new favorite server (more about that in the video I also uploaded last week), forming a proxy to the web serving port, which we can run. And look at Caddy obtaining TLS certificates without anyone asking. This thing is a real gem. Anyway, Safari comes to assist here to make sure there are no cached browser records. There's the creative garage HTML, a sight for sore eyes. And while we're talking about reverse proxies, Garage suggests a bunch of them, mainly serving as an SSL termination option, which again, if that's your goal, honestly, go with Caddy. Nothing else will even come close to making your life as easy. There's Nginx, Traefik, Caddy, and others. I should say that, as mentioned earlier, all three of these have been covered in a dedicated video on the channel. I highly recommend watching every one if you're having trouble choosing. Lastly, Garage comes with an administration API, which we configured in our config file at the beginning and even briefly tried accessing on its port, with a configured token or ones that can be generated with a narrower permission scope. With the CLI, you can access additional endpoints like cluster health monitoring and other administration options. Another thing I loved is the fact that they have existing SDKs for multiple languages. With Go, for example, you can pull the garage SDK as a library and then use it to maintain the cluster: get, add, or change nodes, layout management, and anything else that comes with maintenance.
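The website-serving and reverse-proxy steps above can be sketched as follows. The domain site.example.com is a stand-in for the narrator's own, and `my-key` is assumed to be the access key created earlier:

```shell
alias gar='docker exec -ti garage /garage'
ENDPOINT='http://localhost:3900'

# Bucket named after the domain, with website serving switched on.
gar bucket create site.example.com
gar bucket allow --read --write site.example.com --key my-key
aws --endpoint-url "$ENDPOINT" s3 cp index.html s3://site.example.com/
gar bucket website --allow site.example.com

# Locally, trick Garage's web port with the Host header, as in the video:
curl -H 'Host: site.example.com' http://localhost:3902/

# Or front it with Caddy for automatic TLS. A minimal Caddyfile:
cat > Caddyfile <<'EOF'
site.example.com {
    reverse_proxy localhost:3902
}
EOF
caddy run --config Caddyfile
```

Caddy forwards the original Host header by default, which is exactly what Garage's web server uses to pick the bucket to serve.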
It's important to stress that when it comes to objects, you always have the AWS S3 SDK, which is not only fantastic and well-documented, it's probably one of the most battle-tested SDKs in existence, and I can attest to that from years of experience. This covers everything you need to run your object store like a pro, minus one important point, which we touched on earlier: SSL termination, or a reverse proxy that does it for you. And while I have a bunch of them on the channel, I recommend checking this one and this one only. The GOAT of SSL certificates, Caddy.

Video description

Private, encrypted cloud storage for DevOps & remote machines 👉 https://internxt.com/devopstoolbox **87% off** already applied: code DEVOPSTOOLBOX

Garage is an open-source distributed object storage service tailored for self-hosting. It's lightweight, easy to scale, and it's FREE. - https://garagehq.deuxfleurs.fr/

✅ Build a Second Brain With Neovim in Under 90 Minutes: https://learn.dotb.sh/courses/second-brain-neovim
✅ Zero To KNOWING Kubernetes in Under 90 Minutes: https://learn.dotb.sh/courses/k8s-from-scratch
❗Use `devopstoolbox20` at checkout for 20% off!
⌨️ The keyboard on this video is the Dygma Defy: http://dygma.com/DEVOPSTOOLBOX
🎹 Keycaps on my Defy are made by 3DKeyCaps: https://3dkeycap.com/?ref=vxdqqmmo
⚡ Tech I use: https://kit.co/omerxx/my-battle-station

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC