bouncer

TechHut · 19.9K views · 608 likes

Analysis Summary

20% Minimal Influence

“Be aware that the recommendation for the sponsored learning platform is framed by criticizing traditional competitors as 'boring,' a common marketing tactic to make the sponsored alternative feel more effective.”

Transparency: Transparent
Human Detected: 98%

Signals

The content features a distinct personal voice with natural linguistic imperfections, specific personal anecdotes about learning to code, and direct references to physical hardware being handled. The script lacks the formulaic, overly-polished structure typical of AI-generated tech tutorials.

  • Natural Speech Patterns: Transcript includes natural filler words ('uh'), self-correction ('Not necessarily because I want to use Podman... but because I need to learn it'), and informal phrasing ('LLMs and crap like that').
  • Personal Anecdotes: The creator references specific personal history, such as their 'old faithful gaming desktop' and their personal struggle to learn Python through multiple platforms.
  • Physical Hardware Interaction: The narrator references physical objects in their immediate environment ('this machine', 'this guy', 'little mini PC here') and mentions upcoming videos on specific hardware they possess.
  • Brand Consistency: TechHut is a long-standing personality-driven channel with a consistent host; the voice and presentation style match historical human-led content.

Worth Noting

Positive elements

  • This video provides a high-quality, step-by-step technical guide for Linux users to move beyond basic installations into advanced storage (ZFS) and containerization (Podman).

Be Aware

Cautionary elements

  • The only minor influence to note is the 'revelation' style framing of the sponsor as a superior alternative to established educational institutions.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217


Transcript

Hello everybody. We got a big one for you today. What I am going to be doing is turning my old faithful gaming desktop into a full-fledged Linux server. And no, I'm not just going to be installing Fedora on it. We are going to do a bunch of things, including some post-installation steps, messing with the DNF config. We're going to be using ZFS to create a pool. We're going to do some things with permissions. This thing has an Nvidia GPU in it, so we will be setting up the Nvidia GPU drivers and installing Podman for the very first time. Not necessarily because I want to use Podman over Docker, but because I need to learn it, which shouldn't be too hard because it's pretty one-to-one when it comes to the commands and stuff compared to Docker. We're going to be diving into Cockpit. We're going to be doing a lot of things, at least to build the server out as a good baseline, so then you could really do whatever you want with it. And this is technically part one in a series, so subscribe and ring that bell so you do not miss the future additions to this Fedora server. You can install Fedora Server on just about anything. This machine has an AMD, uh, Ryzen 7 3700X, a good amount of RAM, 64 gigs of DDR4, and a 512 gig NVMe SSD. Highly recommend you install this on an SSD. But in addition, we have two 4 TB drives that, again, like I said, we're going to be putting into a ZFS pool. Got a 3090 in it for some local AI stuff, but you don't need something as powerful as this. You could do this on a little mini PC here. You could install it on what I would argue to be the coolest single board computer in the world. Video coming on this guy very soon. Or you could go on Facebook Marketplace, Craigslist, whatever, find an old ThinkCentre desktop computer for 100, 200 bucks, and have your very own home lab. Most services that you're going to want to run on a home lab aren't very resource intensive. Granted, if you want to run LLMs and crap like that, uh, an Nvidia GPU is nice.
If you are brand new to Linux, this is a great video for you, as I will be running through everything step by step, and we have a full written guide with additional explainers, all the commands we use, and everything like that. And in addition to this video, one of the very best ways to learn Linux is with our sponsor, boot.dev. I've personally tried other things like Coursera and LinkedIn Learning, and they're all so boring. They did not vibe at all with how I actually like to learn things, and boot.dev takes more of a gamified, RPG approach to teaching you how to code, learn Linux, whatever it may be. The Python course that I have been doing so far has been awesome. It has a point system, streak rewards, and things that actually make it so you really don't want to screw up when you're about to submit your completed work, in which it has an interactive terminal where you can actually type things out, run the code, and see if it works. It's incredibly engaging, and I might actually finally learn Python with it. This is like the fifth thing I've done to try to teach myself Python. If you're somebody who's just getting into Linux, I skimmed through the Linux curriculum and there's a bunch of good stuff in there. You've got file systems, permissions, programs, and if you need extra help in addition to the content that they offer, they have Boots, a bear wizard, which is just a little AI tutor. You can ask questions; it's been trained on all of the course material, everything like that. Cool thing is, all the content is actually completely free to read and watch. You do have to have an active subscription to actually interact with the lessons, all the features, and get hands-on with the coding. And if you're interested in learning more, do make sure you scan the QR code on screen now or go to the link to check it out, which will give you an entire 25% off your annual plan. Overall, it's an awesome platform and you should definitely go check it out.
Now, for my future server here, we are going to need to grab the Fedora ISO. And for that, all you need to do is go to the Fedora website, go to Get Fedora Server, hit Download Now, and then, depending on your hardware, most likely you are going to be selecting this one, the DVD ISO for x86_64 systems. Download that and then flash the ISO with whatever tool you prefer. If you're already on Linux, GNOME Disks is really good. If you're on macOS, balenaEtcher. If you're on Windows, go ahead and use Rufus. After it is flashed, plug it into your computer and then go to your BIOS. Depending on your motherboard, it may be a variety of keys. It's probably one of the function keys, F2 or F11. Could be Delete, could be Escape. Who knows? You'll figure it out. So, we're here in my BIOS and there's going to be a couple of settings you want to change. First is to enable virtualization for your CPU. This is going to be called something different if you have Intel, and maybe in a different location depending on your motherboard. For me, it's going to be under Advanced CPU Configuration, and right here: SVM. So, CPU virtualization, I'd make sure that's enabled. Again, if you're on Intel, it's probably going to be called something else. Additionally, you are going to want to go to Security and Secure Boot and make sure that is disabled. Granted, there are some distributions (I think Fedora is one of them, Ubuntu too) where you can have Secure Boot enabled, but I highly recommend you disable it, especially when we add Nvidia drivers and actually modify the kernel a bit and change kernel modules; that could lead to you having a bad time. So from there, we're going to go over to Boot, and for our boot order, we are going to set our USB as the first drive. You can see Fedora on there. I went through and tested everything that we're going to be doing today. So from there, save changes and exit, and it should boot us right into Fedora. So here we go.
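One step worth adding at the download stage, for the record: verifying the ISO against the CHECKSUM file Fedora publishes next to it, before you flash anything. A minimal sketch of the idea, using stand-in files so it runs anywhere (the real filenames depend on the release you download):

```shell
#!/bin/sh
# Sketch: verify a downloaded image against its published checksum before
# flashing it. The filenames below are stand-ins for the real Fedora ISO and
# the CHECKSUM file published alongside it on the download page.
set -e
cd "$(mktemp -d)"
printf 'fake iso contents' > image.iso      # stand-in for the downloaded ISO
sha256sum image.iso > image-CHECKSUM        # stand-in for Fedora's CHECKSUM file
sha256sum -c image-CHECKSUM                 # prints "image.iso: OK" on a match
```

With the real files you would run `sha256sum -c` against Fedora's CHECKSUM file in the same directory as the ISO; a mismatch means a corrupted or tampered download.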
We can test and install, but I'm just going to go straight to the installation. Now, as you will see, there is a difference between installing something like Ubuntu Server and Fedora Server. We've got Anaconda here, a nice little graphical installer. So, we're going to select our language stuff and go Continue. And now here we have all of our various installation options. Localization, we're not going to need to change any of that. Under Software, we have the installation source, but we're going to want to go down here to Software Selection. And for this server, I want it to install Cockpit and a couple of additional tools to make our server a little bit easier to manage. So, I'm going to select the option right there that says Headless Management, and then go ahead and go back. From there, we are going to go to the installation destination, the actual drive that we want to install this on. Highly recommend you install this on an SSD, NVMe SSD, whatever it may be, just not on a spinning disk drive. It will be incredibly slow. This right here is the NVMe that I'm going to install it on, and these are the two 4 TB hard drives that I mentioned in the intro of this video. For this, we're just going to go with all the automatic stuff, which in Fedora installs it with LVM and XFS as the actual file system. There are some pros to using LVM: with the default configuration, partitions are really easy to resize on the fly, it's much easier to do snapshots, and if needed, you could span volumes across multiple disks. There are pros, there are cons. It adds another layer of complexity to your system, and for our single-SSD setup, it is kind of overkill. It is the default, though, so we will stick with it. Just note we are going to have to resize a partition later. And when I click Done, we're going to need to reclaim some space. So, go ahead, click Reclaim Space.
Obviously, this is going to completely wipe the drive. So, if you have anything important on it, you will lose it. I'm just going to go ahead and select Delete All and then Reclaim, and we're good to go in that regard. We have our networking setup here. We don't really need to change much; we're going to go into the commands on how to set static IP addresses later. But one thing we will change real quick right here is the hostname. This is really the only Fedora machine I'm going to have in my stack, so I'll just call the machine "fedora" for the hostname, and then from there, we will click Done. And we're going to need to set up our user account. So, let's go ahead and select this and fill it out. Do make sure you make this an admin account so you can use sudo, and put in your password, all that. And I am going to go to Done. My screen is so small. I'm sorry I keep having to lean forward a bit. I'm going to hit Done again 'cause I got a weak password. I don't intend on using the root account for anything, so we're just going to keep that disabled there, and I will begin the installation. And just like that, we are rocking and rolling. Good to go. It's going to go through the whole installation process. Once it's done, unplug your USB, restart your system, and it's going to boot you in. If you need to figure out your IP address, let it boot up all the way, sign in, and then run the command ip a, and you'll be able to see your IP address. And then from there, we'll go over to the desktop and log into Cockpit just to check it out before we dive into the terminal. All right. So, I went ahead and moved my computer, or my new server, to my server cabinet, because running Ethernet through standard power outlets isn't really the best experience at all, which is what I have to do in this office. But regardless, here we are in our web browser.
I traveled to the IP address at port 9090 for Cockpit just so we can see everything spun up and ready to go. If you get a little warning like this, it just means we don't have an SSL certificate. If you're interested in getting all that set up, do check out my video on my whole proxy setup. For this, I'm going to accept the risk and continue, and then you'll get something that looks a little like this. Let's go ahead and full-screen it so we've got a better visual of what's happening here, and sign in with that user we created during the installation process. So, hit Log In. And look at that, we're in. Not everything in Cockpit is just going to work out of the gate. For example, if I go to usage metrics and history, we can see we have some missing stuff. We'll go ahead and fix all that up later. But Cockpit's really nice because we can actually monitor all of these things happening on our system. We can view the system logs and see some information about our storage, networking, and accounts. So, you can see... whoa, whoa, whoa. You can see my user account there. Whoopsies. To my credit, I could barely see the screen that I was working on. We have all of our services here, everything enabled and running, plus additional applications for Cockpit, which I will dive into. You can update directly through here. And of course, what's really nice is you can just access the terminal real quick in a nice web interface if you don't want to deal with signing in with SSH, which is what we are going to talk about now. So let's go ahead and get rid of this for the time being and bring in a terminal. We can make this a smidgen bigger so we can see what's going on here, and let's clear it out, because what we're going to do, for the first time on this channel, is sign into SSH properly using an SSH key. Really easy to get going.
This system that I'm on is going to be the primary system that I use to log into this thing. So, first we're going to run ssh-keygen. If you don't have this installed (on a Mac, you could use Brew), it's in all the package managers. But I'm going to type in my email address here. We have the encryption type as well as a comment of that email address. So, if I go ahead and hit Enter, it's going to ask us where we want to save this file. The default location for me is going to be fine. We can enter a passphrase for this SSH key, but I'm not going to do that for now. We have a little random art. And then what we're going to want to do is cat the public key; the cat command just displays the contents of a file. So, this is my public key. Don't get any funny ideas, 'cause I am going to switch this out. So, I'm going to go ahead and give that a copy. And then what we're going to do is actually SSH into our server. To do that, you just type ssh, your username, at the IP address. Mine's 173.1080. For you, it's probably going to be different. So, let's hit Enter. Yes, this is my server, I am familiar with it. And then type in that password. From there, what we're going to want to do is make a .ssh directory where we can put our authorized keys. So if I ll to see everything in there, we have nothing in our home directory. So we're going to make a directory called .ssh, and then let's cd into it. Now within it, we are simply going to nano a new file called authorized_keys. Hit Enter. Oh dear. Oh dear. We don't have nano. Nano is, dare I say, the, uh, best text editor. We'll dive more into DNF in a minute, but we're going to install nano: sudo dnf install nano, of course. So, hit Enter and type in our password. From there, the default is currently no, so we're going to say Y for yes. Hit Enter. And now, we can theoretically nano the authorized_keys file.
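Condensed, the client-side half of this looks something like the sketch below, staged into a throwaway directory so it can run anywhere; in real use the key pair lives in ~/.ssh on your workstation and the authorized_keys file lives in ~/.ssh on the server. The email comment is a placeholder.

```shell
#!/bin/sh
# Sketch of the key setup: generate an ed25519 pair and stage the public half
# the way the server's authorized_keys file expects it. The demo directory and
# email comment are placeholders.
set -e
demo="$(mktemp -d)"
ssh-keygen -t ed25519 -C "you@example.com" -N "" -f "$demo/id_ed25519" -q
mkdir -p "$demo/dot-ssh"                               # stands in for ~/.ssh on the server
cat "$demo/id_ed25519.pub" >> "$demo/dot-ssh/authorized_keys"
chmod 700 "$demo/dot-ssh"
chmod 600 "$demo/dot-ssh/authorized_keys"
```

Where it's available, `ssh-copy-id user@server` automates the copy-and-append step that's done by hand in the video.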
And then from here, go ahead and drop in your public key and hit Ctrl+O to write it out, then Ctrl+X. Now we're going to set the proper permissions so only my individual user can read and write it. So let's do chmod 600, which just means read/write privileges for the owner, no execute, and zero privileges for the group and any other users. I'm going to point this at authorized_keys, and there we go. So now if I exit and go sign in again, it should not ask me for my password. Just drops me right in. If you want some additional security, you can disable password authentication with SSH. To do that, I'm quickly going to list the files in this directory here just to see what the names are. Okay, we'll make a new policy file and we will call it disable-password-authentication. So, let's hit Enter. There's nothing here, but all we need to do is drop in the PasswordAuthentication variable, set to no. So, Ctrl+O to write that out, Ctrl+X, and then we're going to restart the SSH service. From there, you can test this out. If I exit out again, I can use a command to force it to try to use a password, and it will look a little something like this: set the preferred authentication to password, disable the public key, and try to log in with those options. As you can see, permission is denied because I tried to use the password. I'll go ahead and clear that out, and if I try to log in normally, it's going to let us in. We're good to go. So now, during the installation process, I mentioned that we may need to expand a drive. If I run the command df -h, for our root filesystem you can see I only have 13 gigs available, or 15 technically. I'm going to run out of storage real quick with only 15 gigs. So what I'm going to do is run an lvextend command. You can see I'm extending to 100% of the free space for my root logical volume, which is fedora/root. Yours might be named slightly differently, so just note that. Let's hit Enter and type in our password.
It was successfully resized. Then we're going to want to run xfs_growfs on that same root partition, and you can see the data blocks have now changed. So now, if I run the df -h command, you can see I'm only using 3% of the 453 gigs available to me. So, now that we have some room to do things, we're going to talk about updating and DNF. To actually upgrade or update your system, all you need to do is run a sudo dnf upgrade command. This is different from Ubuntu or Debian-based systems, where first you have to apt update the repositories and then apt upgrade the packages; this kind of does it all in one go. So if I run that, it's going to go over everything it's going to do, which in this case is quite a bit; it's my first update. You can see the default currently is no, and we will change that so it matches how Ubuntu works. So, I'm going to say yes and allow that to do everything it needs to do. And there we go, it is complete. So, now one thing we will touch on is messing with the DNF configuration. Our configuration file is going to be right over here, and you can see we have no custom configuration. If you're interested in seeing the configuration options, you can run the man command (manual) for the dnf.conf file. Hit Enter and you can see just about everything we have. These are all different variables: cachedir, cacheonly, defaultyes, which we will add, so I might as well give that a copy. This one is true or false; you can see whether each one is a boolean, a string, whatever it may be. So if I hit Q, we'll get out of there. Now, instead of using cat to see the configuration, we will sudo nano to jump on into it, type in our password, and then here, under [main], we can just paste in defaultyes=True. So now, instead of no being the default, Y or yes is the default.
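For reference, that change amounts to a one-line addition to /etc/dnf/dnf.conf:

```ini
[main]
defaultyes=True
```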
Examples of some additional configuration options here: we have max_parallel_downloads, which is pretty self-explanatory; I'll allow 10 of them. We will use fastestmirror, so it will check that and automatically set our mirror. Now, keepcache I honestly don't need, so I will do Ctrl+K to get rid of that line, Ctrl+O to write that out, and get on out of there. So we'll grab some common packages. I'm pretty sure some of these are already installed on the system, including nano, which I've already added, but we're getting curl, wget, git, htop, just some tools that I use quite a bit. And there it goes, pretty lightweight stuff that went by rather quick. So now we're going to talk about the network configuration, since we didn't really do anything with it during the installation process. The default is DHCP, meaning our router, modem, whatever it may be, is controlling the actual IP address; it is in charge of managing it. And that is generally what I recommend here. Right now, I'm using UniFi, and this will probably be in a different location depending on what you're running. For me, I would just go to Client Devices, find the device, which is this AMD desktop right here, go to Settings, and set a fixed IP address there. Depending on your stuff, it's probably either under client devices, or you can go to the actual DHCP settings. Now, if you want to set an actual static IP through the terminal, I will show you how to do that and then revert back if you need to. First, we ran a pretty big update, so let's go ahead and restart NetworkManager right out of the gate just so there are no conflicts with versioning and whatnot. If I run this command, nmcli connection show, we can see our connection, including the interface ID right there, which is going to be important.
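Since the commands shown on screen don't survive into a transcript, here is roughly what the static-IP change looks like with nmcli; the connection name, interface, and addresses below are illustrative, not the ones from the video.

```shell
# Switch a NetworkManager connection to a static IPv4 address.
# "enp6s0" and the 192.168.1.x addresses are illustrative; substitute the
# connection name shown by `nmcli connection show` and your own subnet.
sudo nmcli connection modify "enp6s0" \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
sudo nmcli connection up "enp6s0"      # re-apply the connection

# And to revert to DHCP: set the method back to auto and blank the statics.
sudo nmcli connection modify "enp6s0" \
    ipv4.method auto ipv4.addresses "" ipv4.gateway "" ipv4.dns ""
sudo nmcli connection up "enp6s0"
```

These need root and a live NetworkManager connection, so treat them as a template rather than something to paste blindly.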
And if you want to change to static, the command is going to look a little something like this. We have connection modify with the interface ID, which is right here; the method is going to be manual, with the IP address that I want to set, and the gateway and DNS pointing to the proper locations. In most cases, unless you have something like Pi-hole running, that's just going to be the address ending in .1 on your subnet. So, if I hit Enter, that is now set. If I run this command, which is ip address show for that specific interface, you can see your IP address configured there. And to apply this, we just do connection up with that specific device ID. Now, to set it back to DHCP, I am just going to drop in this command here; it just changes the method to auto and blanks out some of those variables. So let's go ahead and hit Enter, and bring the connection up again. Now my configuration is good to go; we're back on DHCP. So now I do want to talk about the firewall, because if you're going to spin up services, the one thing that may get in your way is the firewall. If I do sudo firewall-cmd --state, we can see that it's running. If you're familiar with Ubuntu, it's usually UFW there. Instead of --state, if I do something like --list-all, we can see all the current rules and everything. So, I have the services of SSH, DHCPv6 client, and Cockpit already allowed on our firewall, which is why we were able to access Cockpit right out of the gate. If I wanted to add additional services, say we threw a reverse proxy manager on this system, we could run firewall-cmd --permanent --add-service for http and then https. If I hit Enter, those have both been successfully added. And a service that we're going to test a little bit later uses port 30001 on TCP.
So instead of --add-service, you just say --add-port with the port that you want, hit Enter, and we have a success there. And to enable all that, we just run a firewall-cmd --reload command. From there, if we --list-all, we can see we now have port 30001 open plus the additional services of HTTP and HTTPS, which are 80 and 443. Now, what I'm going to talk about before we dive into the ZFS stuff is automatic updates, so you don't have to run the update command every time. We're going to install a package called dnf-automatic. Hit Enter. Now that that is added, we're going to mess with its configuration, which is under /etc/dnf/automatic.conf. Hit Enter, and we can add some variables for this. I'm going to set the upgrade type to security updates only, so not everything, just the things deemed security-important, and set apply_updates to yes. So let's Ctrl+O to write that out and get out of there. Then we will systemctl enable the dnf-automatic.timer. Hit Enter, and now it's created, so it should automatically run those security updates for us. So now what we're going to do is set up and talk about some of the cool features of ZFS. I have those two 4 TB hard drives, which we're going to use as a mirrored array to store bulk things such as my Nextcloud stuff, images, media, and so forth. ZFS isn't in the Fedora repository (it's not there for software licensing reasons), but we can just go ahead and add the repository from the OpenZFS project. I have an older release here, so we are going to use this command, with sudo of course, and now it's added. Beautiful. So now we will install the proper kernel headers; sudo, drop that on in. And once that's done, we'll be able to install ZFS just like this. And there we go, it's complete.
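Condensed, that ZFS setup looks roughly like the commands below. The release-RPM filename in the first line is a placeholder (it changes with each ZFS release and Fedora version), so check the OpenZFS documentation for the current one before copying it.

```shell
# Sketch of the OpenZFS install path on Fedora; the zfs-release RPM filename
# here is a placeholder for whatever the current release package is.
sudo dnf install https://zfsonlinux.org/fedora/zfs-release.noarch.rpm  # placeholder filename
sudo dnf install kernel-devel     # headers matching the running kernel
sudo dnf install zfs

sudo modprobe zfs                                  # load the module now
echo zfs | sudo tee /etc/modules-load.d/zfs.conf   # and load it on every boot
zfs version                                        # sanity check
```

All of this requires root on the target machine, so it's a template for the steps in the video rather than something runnable as-is.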
Then we will load the kernel module to make sure everything's good to go: sudo modprobe zfs. There we go. And then we will make sure it loads on boot. It looks like my notes are actually right in this regard; they need to update their wiki. There we go, using sudo and the tee command. And now if I do something like zfs version and hit Enter, we get the version. We're good to go. So now let's get rid of these docs and my notes. I'll update those, and there will be a link down below for all the commands and everything that I am running. So now let's create a little mirror pool. If you have multiple drives, you can set up something similar to a RAID 5 configuration, or whatever you prefer. If you have four disks, for example, you could use the storage on three of them, keeping one for redundancy. But since I only have two here, I'm going to set this up as a mirror. Now, these two disks, sda and sdb, are what I'm going to put in my mirror, and I do need to wipe them because I tested this; they currently have ZFS configurations on them. So I am just going to do a full wipe: sudo wipefs -a, which will just destroy all the data on them, pointed at /dev/sdb as well as /dev/sda. I switched those in the notes just in case you're using sda as your root drive; you don't want to make that mistake. Hit Enter, and it is going to wipe those drives. I'm pretty sure that was sufficient, but another command, if they were in a previous array, is to wipe the labels. "Failed to clear," meaning that first command was probably sufficient. So to create the pool, what we're going to do is this command right here. I'm going to switch these to my proper device names, and we are going to run zpool create with the mount point set to /data. So, I'm mounting this into my actual root directory, just in /data. You could change that if you'd like to.
I am naming this pool "data," and I'm setting it up as a mirror, so the two drives are actually mirroring each other. If I hit Enter from there, it's going to go ahead and create that for us. And now, if I ls my root directory here, you're going to see data. From this point, you could technically just manually make folders in there and set it up however you want. But what I like to do is use datasets, which are basically folders, but they give you additional configuration options and snapshots per individual folder, which is really nice. For example, I have three datasets that I want to create. You can see I am going to use sudo zfs create for data/media, and I'm also going to do backups and documents, just for now. Just like that, it is going to create those three folders. So now if I ls that data directory, you can see those folders are created. Now, an example of something you can do with this (and there's a lot you can do) is set the compression type per individual dataset. If I pass an incorrect value, you can see the options we have: on, off, gzip, lz4. lz4 is generally the best recommended, most used one; pretty decent compression without much speed loss or anything like that. But let's say, in theory, I wanted this documents directory to have absolutely no compression whatsoever. All I would do is change this to off and point it at documents. So, documents, Enter, and now that compression type is set. And if I run a zfs get command for compression and compressratio, I can see all those values from here, in which you can see for the documents dataset the compression is off. Additionally, if I wanted to show all the variables for a specific dataset, I could do get all, hit Enter, and you can see all the different variables and things that you can change: quotas, mounted devices, IDs, a whole lot more.
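As a recap, the pool and dataset commands from this section come out to something like this; the device names (sda/sdb), pool name, and mount point are specific to this machine, so adjust them for yours.

```shell
# Create a two-disk mirror mounted at /data, then carve it into datasets.
# Device names and the layout match the video's setup; yours will differ.
sudo zpool create -m /data data mirror /dev/sdb /dev/sda

sudo zfs create data/media
sudo zfs create data/backups
sudo zfs create data/documents

# Per-dataset compression: off for documents, as demonstrated.
sudo zfs set compression=off data/documents
zfs get compression,compressratio data/documents
zfs get all data/documents        # every tunable for one dataset
```

These require root and real disks, so they're a template for the on-screen commands rather than a copy-paste script.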
I'm probably going to dive into this a lot more in a future video, but the last thing I want to show you with ZFS is snapshots, which are super, super cool. Real quick, if we do sudo zpool status, we can see the status; everything looks good, everything's online. If I do a zfs list command, if I can type here, we can see our actual datasets and the available space, the mount points, all that. So, what I'm going to do is go into documents real quick. Let's cd over into documents, do it correctly, and I'm going to make a file here. I have to use sudo because I haven't changed the permissions yet. But if I make something like test.txt, type "before update," and write that out, and then cat that file, you can see the contents of the file. And then from there, we create a snapshot. This is on my documents dataset, and we'll add a tag and call it, uh, "before update," adding sudo of course. Hit Enter, and now that snapshot has been created for that specific folder, that specific dataset. The beautiful thing about snapshots in this regard is that if it's a much larger directory, say you have 10 gigabytes or 10 terabytes, something dramatic, a large dataset, and you create a snapshot, it's not taking up any more space. The situation in which space will be taken up is if you have that 10 terabyte dataset, you create a snapshot, and then you delete a terabyte of data; that data will still exist in the previous snapshot, so 10 terabytes will still be used. But it makes it really easy to restore that data if something bad happens. An example of this working: if I nano back into that text file, add "after update," write that out, and cat the file again, you can see we have "before update, after update." And then if I go back to my snapshot command, instead of saying snapshot, I want to rollback. So, rollback, hit Enter. And now if we cat that file, you can see "before update"; it got rid of that change for us.
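The snapshot round trip just demonstrated reduces to a few commands; the snapshot tag name here is illustrative (snapshot names can't contain spaces, hence the hyphen).

```shell
# Snapshot one dataset, then roll it back to that point in time.
# "before-update" is an illustrative tag; pick anything meaningful.
sudo zfs snapshot data/documents@before-update
zfs list -t snapshot                   # confirm the snapshot exists
# ...change or delete files under /data/documents...
sudo zfs rollback data/documents@before-update
```

Note that a plain `zfs rollback` only goes back to the most recent snapshot; rolling back further destroys the newer snapshots and needs the -r flag, so use it deliberately.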
Do subscribe and ring that bell, because in a future video I'm going to go over way more when it comes to ZFS, including some of those other variables you saw, setting up automated snapshots with schedules, replacing faulty drives, and a lot more, including actually getting notified if you have an unhealthy pool. All right, editor Brandon here. One thing I forgot to do is actually enable this to automatically mount the specific datasets when it reboots. I'll show you how to do that. But if you ever run into a situation in which they're unmounted for some reason, all you need to do is run a zpool import data command, which will import the pool, and then to mount all the datasets, you do a sudo zfs mount -a. And we can see it all right here. If it doesn't show up, you might need to change directories, go back in, and it should all show up. And to enable auto-import, we're going to enable the zfs-import-cache service, the zfs-import-scan service, and the zfs-mount service, and then we are going to enable the zfs.target. Now, for actually messing with data here: if I touch it, let's say I wanted to touch something in documents, you can see permission denied. All we really need to do is run a chown command. First, what's your user ID? Just type id. Mine is 1000. From there, if I run a sudo chown, I want this recursive, so in all the directories, files, folders, everything, it's going to change the ownership. We're going to set this to my user and my group, and I'll just point it directly at /data. If I hit Enter, it's going to change that ownership. And now if I run ll, you can see I am the owner, so I can manipulate data in there. If I run this touch command again, you can see it worked completely fine. If I ls into documents, we have the file we touched. So from there, I did mention I have an Nvidia GPU in this machine. If you don't have an Nvidia GPU, you can skip this step.
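The editor's-note fixes above amount to a handful of commands; as before, these need root on the server, so read them as a template for the steps described.

```shell
# If the pool didn't come back after a reboot: import it and mount everything.
sudo zpool import data
sudo zfs mount -a

# Make import + mount happen automatically at boot.
sudo systemctl enable zfs-import-cache zfs-import-scan zfs-mount zfs.target

# Hand ownership of the pool's mount point to your login user so you can
# write to it without sudo ($(id -u)/$(id -g) resolve to your own IDs).
sudo chown -R "$(id -u):$(id -g)" /data
```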
If you're running Intel, Intel Quick Sync is great for media servers, but I'm going to be using this machine for things like local LLM models, so I have a 3090 in it. We need some drivers. The one thing that kind of sucks about Fedora compared to, say, Ubuntu LTS is that it's much more bleeding edge, so we can't really use the official NVIDIA script to install these drivers, and Fedora is way more picky about software licensing in its repositories. So we're going to use RPM Fusion to get the drivers we need. RPM Fusion is just another Fedora repository that carries a lot of the nonfree drivers, as you can see there. So let's go ahead and add those repositories. Last time I tested this, I had an issue with systemd, and I figured out these commands are the best way to fix it: refresh the metadata and then run a distro-sync. So we'll run that and go ahead and say yes. This step might not be necessary for you; you can try without it, but I'm just trying to save myself some hassle here. Now that that's done, let's grab some kernel modules and headers. Really, it only needed this one package here. Then we grab the akmod version of the NVIDIA drivers, which builds them specifically against the kernel you have running. The official NVIDIA script targets the wrong kernel, since it hasn't been updated yet, so we need to use this. It's going to install a whole bunch of things. Then we build those kernel modules, which we can do with sudo akmods --force. Hit that. Do note it's going to look like it stalled out. It's doing what it needs to do; leave it alone. If you screw with it, you could cause a little bit of damage. I learned that the hard way. Hey, it looks like we're done. Now, before we reboot our system so everything's loaded properly, we're going to install some CUDA support. Some good old proprietary CUDA. Drop that on in there.
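The RPM Fusion and driver steps above look roughly like this. The repository URLs follow RPM Fusion's documented pattern and the package names are the standard Fedora ones, but double-check them against the written guide:

```shell
# Add the RPM Fusion free and nonfree repositories for your Fedora release
sudo dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Kernel headers, then the akmod-packaged NVIDIA driver
sudo dnf install kernel-devel kernel-headers
sudo dnf install akmod-nvidia

# Build the modules for the running kernel now (it will look stalled; let it finish)
sudo akmods --force
```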
That's the xorg-x11-drv-nvidia-cuda package. So hit enter, and you can see it's grabbing everything from RPM Fusion nonfree. Ran through it all; we're good to go. Now what I'm going to do is blacklist the open-source nouveau driver, so we're using the proprietary one, with a simple echo into a modprobe config. Hit enter, there we go. Next we need to rebuild the initramfs so everything loads properly on reboot. Run this, and again, it's going to look like it stalled. It didn't; let it finish. And now, for the first time in this video, we're going to reboot our system. Whenever you make any big changes (honestly, even after that first update we probably should have rebooted), now is a good time to do so. I'm going to need to run that with sudo again. I'm very familiar with Ubuntu Server, so there are some differences here; I'm out here just trying to learn. So there we go. After a little bit of time, we're rebooted, SSHed back in, and we can run the nvidia-smi tool to see if everything's working. We have our driver, we have our GPU, we are good to go. Now what we're going to do is set up Podman. I am a Docker fanboy. I absolutely love Docker, and if I wasn't trying to force myself to get familiar with Podman, I would just use Docker. If you want to install and learn Docker, I have a fantastic article on my website covering how to install it; it works cross-platform, you just use their little installation script, add your user to the docker group, and you're good to go. Podman is daemonless, which means each container runs as its own process: there's really no single point of failure, and there's better security, especially when you're running it rootless. To get it, we're just going to run sudo dnf install podman podman-compose cockpit-podman, so we can also see our containers, our stacks, whatever it may be, through Cockpit and manage them there if we'd like.
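Sketching those CUDA, blacklist, and initramfs steps as commands. The modprobe file name is an assumption; any file under /etc/modprobe.d/ works:

```shell
# CUDA support from RPM Fusion nonfree
sudo dnf install xorg-x11-drv-nvidia-cuda

# Blacklist the open-source nouveau driver so the proprietary module loads
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf

# Rebuild the initramfs, reboot, then verify the driver can see the GPU
sudo dracut --force
sudo reboot
nvidia-smi   # run after reconnecting over SSH
```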
So let's install that. Podman might have been pre-installed (it looked like Podman itself was already there), but this grabbed everything else for us. It's done, good to go. So now we can use Podman to spin up some services, and you can verify it with the version and info commands: Podman is up and running, we are good to go. Earlier I mentioned we can run Podman rootless if we'd like, which just means it's not running as root. It's much more secure to do it this way, especially for anything that doesn't need root; basically the container just runs under your user account without touching the system or needing sudo. The only limitation is binding ports below 1024, such as HTTP and HTTPS on 80 and 443. It won't let you do that unless you add an exemption allowing unprivileged users to bind lower ports. To do that, we can run this command here if I want to be able to bind from port 80 and up. Hit enter and add that to the configuration. Now I should be able to do that, if I want to, without running as root. Now, since we added those NVIDIA drivers, I do want to be able to use my NVIDIA GPU with Podman, as the demo service I'm going to spin up is Ollama and Open WebUI. So the first thing is adding the NVIDIA Container Toolkit repo with this command here. Once that's added, all we need to do is install the nvidia-container-toolkit package, which you can see completed. Next, we want to generate a specification for the Container Device Interface, which we can do with the nvidia-ctk command right here; you can see it generated. To verify this is working, we run the same command's list subcommand and see our GPU information right there. And to really test that it's working, we can spin up a quick Podman container using NVIDIA's CUDA image. Run the command, and if we get output, we know it's working. Watch.
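The Podman install, rootless port exemption, and NVIDIA container steps, roughly as commands. The sysctl file name is an assumption; nvidia-ctk cdi generate is NVIDIA's documented way to produce the CDI spec:

```shell
# Podman, compose support, and the Cockpit plugin
sudo dnf install podman podman-compose cockpit-podman
podman version && podman info

# Allow unprivileged (rootless) users to bind ports from 80 upward
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-rootless-ports.conf
sudo sysctl --system

# Generate the Container Device Interface spec for the GPU and verify it
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
```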
This one might need sudo. And there we go. Absolutely beautiful. So now I'm going to spin up a quick stack with Podman for the very first time (technically the second time; I did test before filming). Generally, and I say the same thing with Docker, it's usually not the best idea to put things like container configurations or MySQL databases on a pool of spinning disk drives; you'll get much better application performance with that kind of data on an SSD. So what I like to do, and you can do this however you want, is run this in my home directory. You can see /home/brandon. I usually make a docker folder, but in this case I'm going to make a podman folder, so mkdir podman, and then I'll cd into that directory. And what should I call this one? Let's make an ai folder. Sorry if you're not a fan of AI; at least I'm running it locally. OpenAI ain't getting my data today. So now I'll cd into this new ai folder and we're going to make a compose file. The cool thing about Podman is that there shouldn't be too much of a learning curve, because it's very compatible with most docker and docker compose commands, almost one to one. There are some little nuances I'll have to learn over time, such as exposing the docker.sock (I think it's a podman.sock here); just some stuff I'll figure out as I go, and when I do, guess what, I'll make a video very similar to the getting started with Docker video. So we'll nano compose.yaml, we have a new file here, and I'm just going to drop in this stack, including Ollama and Open WebUI. We could keep basically everything, but one thing I do want to change: instead of using Docker volumes, I want this data in the same directory as the compose file.
So to do that, instead of named volumes I'll say: in this current directory, store Ollama's data in a folder called ollama. Same thing with Open WebUI: a folder called open-webui in this current directory where the data will be stored. For this one, I enabled the device nvidia.com/gpu=all so it can access the GPU, and disabled the SELinux security labeling on it so it can reach the GPU without too many problems. And you can see it's pulling the images directly from docker.io; they're standard containers, so it's going to work fine. So I'll do Ctrl+O to write that out, Ctrl+X to exit. I do need to make some directories or it'll throw an error, so I'm going to make a directory for ollama as well as a directory for open-webui. Do note this is just a demo of these services. I am going to keep them on this machine because it has the NVIDIA GPU in it, and again, I'm going to be using this machine in a lot more future videos. I am dyslexic. Let's see: open-webui, just like that. Now, we've already run this command earlier, but do open the firewall for whatever services you run. This one's on 30001, or at least that's the one I'm going to access remotely, at least within my local network or through something like Netbird, so do add that. It's going to tell me it's already there, so we won't really need to worry about it. At this point, we can simply run, not docker, but podman-compose up -d for detached. Hit enter and it should spin up without any issues. And there we go. It's pulling all the Open WebUI stuff, and it's done. So it should theoretically work. If I run podman ps, just like Docker, we can see it's up and the status is good. With Open WebUI you can pull models directly through the interface, and I'll show you that, but just so I can show you podman exec with an interactive shell into the Ollama container, we're going to run an ollama pull command to grab a model real quick. So there we go. I picked a smaller one just for the sake of time.
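The directory setup and stack commands, sketched out. The container name ollama and the folder layout are assumptions from the video, and the model tag is just an example:

```shell
# Host folders for the container data, kept beside the compose file
mkdir -p ~/podman/ai/ollama ~/podman/ai/open-webui
cd ~/podman/ai

# Bring the stack up detached, check it, then pull a model inside the Ollama container
podman-compose up -d
podman ps
podman exec -it ollama ollama pull llama3.2:1b   # example model tag
```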
And success. So, we're good to go. Now, one thing I'd like to show off here: if I fullscreen this, refresh, and log back in, we can see our Podman containers here. So you can manage them directly with Cockpit, which is sweet. We see the containers running; if I drop one down, we can see the logs (you can see the download it just did), and we could jump into the console if we'd like to see the details. And it's cool because Cockpit keeps these together: you can see the "ai" pod as the stack directly in here, so it's not showing up as just a bunch of individual containers. It actually recognizes that this is a podman-compose stack. So from here, boop, I could stop the entire thing, which stops all of the containers. And just to show it working, if I switch over to port 30001 (this won't work over HTTPS, so I'll get rid of that): will you look at that. Let's go ahead and get started, fill out your information, create your admin account, and we're in. So now I can communicate with it. This isn't an Open WebUI or Ollama video, just a demo: "Write me a single-page website for a window cleaning company with filler data using Node.js." Maybe a little bit more complicated than it needs to be. So there we go. And if we go over here and run this command, we can see it's starting to cook. We're using some wattage. And there it goes, it's making it. My golly. Let me make this a bit bigger. I won't say it necessarily did well, but it did it. I'm not going to get too far into managing these containers, because it's mostly the same commands as Docker, but at this point we basically have a really strong foundation to start playing around with our Linux server. Spin up some services, have fun. I will be making follow-up videos to this, so do subscribe. This is technically part one of a multi-part series, and I hope you guys enjoy it.
Real quick, if I go into Usage here, it's really easy to fix some of this stuff. If I install PCP support, we can hit install and it will go ahead and handle that for us, which gives us much better metrics history. You can see this service is not running, and we can actually manage that right in here. If I just grab this service name, head on over to Services, and search for it, we can see what's going on and why it's not running. It just hasn't started yet, so we can just start the service. There we go, now it's running. So if I go back over here to the metrics, you can see it's now working; it hasn't run long enough to show anything, but it will. Additionally, the very last thing I want to touch on is applications for Cockpit. There are a hell of a lot of things you can do to expand its capabilities; you can see the pieces we already have, including storage, Podman, and networking. If I want to add something such as a file browser, let's install Files and let it do what it needs to do. We now have that tool, so if I go over to the file browser, we can see our home directory: we have brandon, podman, ai, and those files here, and I can actually manage files, upload, and do some stuff directly through here if I'd like. There are also some third-party tools that can help you make things like SMB shares, but I'm going to save that for the next part. Do subscribe, because we're going to make this server even better. And with all that, like I mentioned, everything I did, including every single command I ran, will be linked down below, so you can follow this video with the written guide to set up your very own Fedora server with Podman, ZFS, and a whole bunch of other fun stuff. With all that, do check out boot.dev. That's another great platform to go ahead and learn Linux, especially if you're just getting started. Go through their Linux course.
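The Cockpit add-ons mentioned here can also be installed from the shell. cockpit-pcp and cockpit-files are the usual Fedora package names, and pmlogger is the PCP service that records metrics history; treat these names as assumptions and check your repos:

```shell
# Metrics history (PCP) and the file browser for Cockpit
sudo dnf install cockpit-pcp cockpit-files
sudo systemctl enable --now pmlogger
```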
You'll get familiar with permissions, all that stuff. It's awesome. But yeah, with all that, have a beautiful day and goodbye.

Video description

Click this link https://boot.dev/?promo=TECHHUT and use my code TECHHUT to get 25% off your first payment for boot.dev.

In this guide, we're going to build a fully functional home server from scratch using Fedora Server and Podman. No hypervisors, no pre-built NAS solutions, just a Linux box running containers. By the end, you'll have a solid foundation with proper storage, essential services, and monitoring.

Written Guide: https://techhut.tv/fedora-server-guide-cockpit-zfs-podman/

🏆 FOLLOW TECHHUT
X (Twitter): https://bit.ly/twitter-techhut
MASTODON: https://bit.ly/mastodon-techhut
BlueSky: https://bsky.app/profile/techhut.bsky.social
INSTAGRAM: https://bit.ly/personal-insta

👏 SUPPORT TECHHUT (all links below this line will earn us commission)
BUY A COFFEE: https://buymeacoffee.com/techhut
YOUTUBE MEMBER: https://bit.ly/members-techhut

—PAID/AFFILIATE LINKS BELOW—

🛎 RECOMMENDED SERVICES
VPN I USE: https://airvpn.org/?referred_by=673908

📷 MY GEAR
HARD DRIVES: https://serverpartdeals.com/techhut
MinisForum Tablet: https://amzn.to/3SeMmds
Beelink N200: https://amzn.to/3xZjeQs
Raspberry Pi 5: https://amzn.to/4f3yUCN
Q1 HE QMK Custom Keyboard: https://www.keychron.com/products/keychron-q1-he-qmk-wireless-custom-keyboar?ref=techhut
ASUS ProArt Display: https://amzn.to/4i4cAKz

00:00 - Intro
00:56 - Hardware
02:02 - Learn Linux!
(Sponsor)
03:35 - Get Fedora
04:18 - BIOS
05:26 - Install Fedora Server
08:16 - First Login (cockpit)
10:02 - SSH Keys
12:45 - Disable Password SSH
13:39 - Expand LVM
14:31 - Upgrade and DNF Config
15:08 - DNF Config
16:24 - Common Packages
16:42 - Network Configuration
18:31 - Set as DHCP
18:49 - Firewall
20:02 - Automatic Updates
20:48 - ZFS Storage
22:16 - Wipe Disks
23:21 - Create ZFS Pool
23:55 - ZFS Datasets
25:49 - ZFS Status and List
26:06 - ZFS Snapshot and Rollback
28:04 - ZFS Auto Mounting
28:47 - Pool File Permissions
29:37 - NVIDIA Drivers
32:31 - Podman Setup
33:37 - Podman rootless
34:21 - NVIDIA Container Support
35:16 - Podman Compose
38:42 - podman exec
39:01 - podman-cockpit
39:41 - OpenWebUI
40:25 - Cockpit to Manage
41:26 - Cockpit Apps
42:05 - Part 2 Soon
