bouncer

Learn Linux TV · 8.3K views · 567 likes

Analysis Summary

10% Minimal Influence
mild · moderate · severe

“This video is highly transparent; be aware that the 'hands-on lab' framing is a structured way to encourage engagement with the creator's external blog and sample files.”

Transparency: Transparent
Human Detected: 98%

Signals

The content exhibits high-quality, long-form educational instruction with a distinct personal voice, natural conversational fillers, and specific references to a long-standing community and custom-made learning materials. The presence of phonetic transcription errors further confirms a human speaker being recorded rather than a synthetic voice reading a script.

Natural Speech Patterns The transcript contains natural filler phrases ('well', 'the thing is', 'it's going to be a ton of fun') and conversational transitions that feel authentic to a live educator.
Personal Branding and Community The creator (Jay) references channel-specific resources like 'Learn Linux TV Merch', 'official blog posts', and custom sample zip files created specifically for the lesson.
Technical Demonstration Context The narration describes real-time actions ('On my end, if I list the storage...') which align with a human performing a live screen-recorded demonstration.
Phonetic Transcription Errors The transcript shows phonetic misinterpretations of technical terms (e.g., 'rync', 'arsync', 'sudoapp') which typically occur when automated captioning listens to a human voice, rather than a clean AI text-to-speech output.

Worth Noting

Positive elements

  • This video provides a high-quality, practical breakdown of rsync flags and systemd timer configuration that is immediately applicable for system administration.

Influence Dimensions


Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-08a · App Version: 0.1.0
Transcript

Hello again everyone and welcome back to Learn Linux TV. One of the best ways to learn Linux is to get some hands-on experience. So, I've decided to prepare some hands-on labs every now and then to help you test your skills. But it's about more than testing your skills. We're actually going to be building a project that's genuinely useful. Today, we're going to focus on learning rsync. But not just rsync. We'll be building a backup script, and this backup script will feature incremental updates as well. Now, one of the coolest things about labs like these is that you'll get a chance to see how Linux-related technologies are actually used in the real world. And the script that we'll be creating today will be genuinely useful. By the end of this video, you'll have something that you could potentially put into production to help back up your systems. And that's really cool. So in this video, we'll work through the process of creating a bash script that will enable us to back up our important files while also retaining previous files in case you need to restore something later. In addition to that, later in the video, we'll also create a systemd service and timer that'll enable us to run backups automatically. But the thing is, even if we do set our backups to run automatically, it is possible that they might silently fail. So, as part of this tutorial, I'm going to show you a couple of ways that you can help prevent that. It's going to be a ton of fun. Also, in the description down below, you'll find a link to the official blog post for this video. That blog post will include all of the commands that we're going to use today, and it's also going to include the bash script itself. On top of that, I'm going to include a download in that blog post that'll give you a zip file full of sample files that you could use to test out your script.
That way you don't have to worry about testing it against important files. You could use the sample files as you test it, and once you're sure that it works, you can then implement it in production. So if you need some files to back up, well, go to that blog post and download the zip file, and you can use that for your tests. And with all that said, let's get started. I'll be guiding you through the process every step of the way. It's going to be a ton of fun. So let's dive in right now. >> [music] >> First, let's talk about what we'll need for our project. And you don't really need all that much. You only need a Linux installation with rsync installed. You could use a physical server, a virtual machine, or even a cloud instance. You just need a distribution of Linux installed somewhere, and the specifics won't matter. Most distributions offer the rsync package, so it's usually just a matter of getting it installed with apt or dnf or whatever package manager your distribution uses. For example, on Debian I can run sudo apt install and then rsync, just like that. Again, just replace apt with whatever your package manager is, and all you should have to do is press enter, and now it's installed. It was literally that easy. And as you can see, the rsync command is recognized by the system. In addition, you'll also need a source and destination for your backups. The source directory is the one that contains the data that you actually want to back up, and the destination directory is where you want your backups to be sent. On my end, if I list the storage of my current working directory, you'll see that I have a directory called my files. That directory contains a number of sample files, as you can see right here. And if I go inside that directory and list the storage of each of the directories inside it, you'll see that I have a number of files here.
And these files look like real files. We see a database dump, a Python file, a YAML file, a spreadsheet, and also meeting notes. Now, each of these files just contains random data. They're meant to look real, but they're not actually important. On your end, though, just think about a directory that contains files that are important to you, and we'll back up that directory with the script that we'll be creating in this video. As far as the destination directory, you can send the backups wherever you want. In my case, what I'm going to do is create a directory to store the backups, and I created /mnt/backup for that purpose. Now, to follow along with this tutorial, you will not need a network share. It's great if you have one available, but since this is a hands-on lab that's meant to facilitate learning, you don't really need to use actual files or an actual destination directory. But if you use this script in production, you'll want to make sure that the backups are being sent somewhere safe. Now, what I'm going to do to make this more fun is mount an NFS share to /mnt/backup. And this share points to my TrueNAS server. So what I plan on doing is sending backups over to that server. To mount an NFS share, I'll show you what that looks like. I'll run sudo and then mount. Then I'll type the IP address or the fully qualified domain name for the target. In my case, I'll type that right now, and then a colon, and then the path to the file share. Now, I'm going to be saving this in my public share, which is probably not the best place to store a backup, but again, this is just a demo and these files aren't actually important. I'm just showing you what it looks like to mount an NFS share, and I happen to have this one available. Next, we choose where we want to mount it: the directory that we just created. And then I'll press enter.
Now, if I run the df -h command, you'll see at the bottom that I have the NFS share mounted. But again, you don't actually need an NFS share to follow along with this tutorial. If you don't have one, that's perfectly fine. Just keep in mind that it's a best practice in the future if you decide to use this script in production. But before we mount an NFS share, we have to make sure that our distribution supports NFS shares. And by default, most don't. But it's usually just a matter of installing NFS support, which is pretty easy. So if you're using Debian or Ubuntu, it might look something like this. As you can see, I'm using apt to install the nfs-common package. Now, you only need this package if you actually want to use an NFS share. If you're just using local directories as a demo, then this doesn't really apply to you. But if you do want to install NFS support, well, that's how you do it. I'll also leave some commands on the screen right now that you could use on other distributions to install NFS support. Anyway, let's start writing our backup script. I'm going to go back to my home directory and create it right here. Now, of course, you don't really want to save an important script, like a backup script, in your home directory. That's not really the best place for it. But it's perfectly acceptable to run a script from your home directory while you're in the process of creating it. So, what I'll do is use a text editor. I'm going to use Nano because it's easier to explain in this tutorial, but it really doesn't matter what your text editor of choice happens to be. And I'll create a file called backup.sh. And let's start building it. The first thing I'm going to do is declare that this is a bash script. To do that, I'll type pound, then exclamation mark, then /bin/bash.
Now, if you want more information on exactly what's going on right here, I do have a full course available here on YouTube for free that'll teach you everything you need to know about bash scripting. So I'm not going to go into too much detail about this particular line, since I covered it in that series. What I'll do right now is walk you through the process of creating the rest of the script. The next thing we're going to do is create a number of variables. The first of those is going to be the backup source, then an equals sign, and inside the double quotes we'll include the path that contains the files we want to back up. In my case, the files that I'm going to be backing up are stored under /home/j/my files. As an aside, if you visit the blog post for this video, you'll be able to download the same sample files that I'm using. So if you don't have files to back up, you can download that zip file, extract it, and, well, you'll have files to back up. You'll find a link to that blog post in the description down below. Next, we'll clarify where we want our backups to be saved, and in my case, I'm going to use /mnt/backup. We'll also create a variable named current_date. And for this, we're actually going to use a subshell. A subshell is basically a command that runs inside another command, with its output captured. So whenever the script runs, it's going to capture the date at that moment and store it in this variable. That helps us understand when the script was run, which will come in handy later. Now, if you want to see what this is going to do, what I can do to demonstrate is copy this part of the command right here. I'll save the file; we'll come right back to it. And I'll paste in the command that I copied, which is this one right here. If I press enter, you'll see what it does immediately: it's giving me the date, with the year first.
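The subshell idea above can be sketched in a couple of lines. The exact format string Jay uses isn't shown in the transcript; %Y-%m-%d ("year first") is an assumption that matches the narration, and the variable name is a guess at what appears on screen.

```shell
# Capture the date at run time via a subshell; the format string is an
# assumption ("year first" per the narration).
CURRENT_DATE="$(date +%Y-%m-%d)"
echo "$CURRENT_DATE"
```

Because the subshell runs when the script runs, each execution records its own timestamp, which is what makes the dated previous-files directories possible later.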
So that's what this command does. And I also have a dedicated video that covers the date command, if you want to check that out. But anyway, let's return to our script. Next, we'll create the log path. So, log path: we're going to set that equal to /mnt/backup/logs. Next, we'll create a new variable called previous files, and I'm going to set that equal to /mnt/backup/files-previous/ followed by the current date variable right here. When it comes to previous files, we'll be exploring that later in the video, but the basic idea is that we'll be able to retain files that are replaced. So, for example, if you have a new version of a file, the old one will be retained and the new one will be copied over. That way, if you want to return to a previous version of a file, you'll be able to do that. If that's still not completely clear, don't worry about it, because you'll be seeing this in action shortly. Continuing, what we'll do is create a comment. If someone else looks at this script, that just helps them understand what's going on here. Anything after the pound symbol will be ignored. What we'll do right here is create some directories, the directories that we've identified earlier. So, for example, I'll type mkdir -p. I want the parent directories to be created as well. We want to create the log path and also the previous files directory. Continuing, we'll create another comment, and this is where we're actually going to run rsync. So what we'll do is type rsync. And for options, we'll use a for archive mode and also v for verbose. If you're curious what these particular options do, we can check the man page. So in another terminal window, I'll type man and then rsync, just like that. What I'll do next is look for one of the options that we added to our script so far. The first one was -a, and we can see archive mode right here.
And that option actually includes a number of other options. We can see that archive mode is equivalent to -rlptgoD. Basically, what archive mode does is make sure that file metadata is retained when you back up your files and folders. That means you can retain user ownership, permissions, and so on. It tries to make the copies as identical to the originals as possible, and that includes the metadata. So that's why it's a good option to include. The next option was -v, and that increases verbosity, which means we'll see more output. The reason we're including this option is that we want to see everything that's being backed up. We don't want to run it in silent mode or anything like that; we want to see what's going on. So we're going to include the -v option as well. Continuing, what we'll do is type --delete. What that'll do is cause any files that don't exist in the source to be deleted from the destination. Essentially, that synchronizes files. So, for example, if you have a report file or something like that in the files that you want to back up, and then later on you delete it, this option will make sure the file is also deleted at the target. We also want to include the --backup option, and that'll help us make sure that we retain files that are being replaced. In addition to that, I'll add the dry run option, and that's --dry-run. The dry run option is an option of rsync that you absolutely want to remember. This particular option will result in no backup at all. It basically gives us the ability to test the script before we actually run it. It'll show us all the files that it would have backed up if we were running it for real. So you can think of dry run as demo mode. And it's always a good idea to start off with dry run enabled, because that way you won't risk losing anything important.
In addition to that, we'll include another option: --backup-dir, for backup directory. We're going to set that equal to the previous files variable, with a dollar sign in front, which basically means that any file that gets replaced will have its original moved to the previous files directory. That way, if we replace something by mistake, or if we want to go back and restore an earlier version of a file, we can do that from the previous files directory. Next, what we'll do is use the backup source variable. Again, that contains the files that we want to back up. And then we'll type the destination, which we also captured in a variable. To make this a little bit cleaner, I'll add a backslash right here, and I'll press enter. This isn't required, but a backslash lets you take a long command that would normally wrap on its own and split it across multiple lines yourself. So I include the backslash, as you can see right here, and that enables me to start typing on a new line. We'll include the backup destination variable as well. Then a space and another backslash so we can go to the next line. That's just for readability; again, it's not required. We're going to include a redirect symbol, because we want to capture the output. And then we're going to add another line right here: 2>, which redirects standard error. We want to capture errors as well, and we'll store the errors in their own log file. And basically, this is our completed script so far. We will be adding more to it, but it's enough to get us started. So, what I'll do is save the script and exit the editor. As you can see, we have the script file right there. Now, if I do a long listing, we can see that there's no x bit set for backup.sh, the executable bit. So we'll need to add that in order to be able to run the script.
So what I'll do is run chmod and then plus x, and I'll run that against the backup script. And now, as you can see, our backup script is executable. Next, we're going to run the script. To do that, we can type dot slash and then the name of our script. Now, it looks like nothing really happened, right? I don't really see any output. So what's going on here? Well, let's take a look at the backup directory, our destination directory, and see if anything's changed. Yes, we used dry run mode, but we also set the script up to log information, and the logs aren't covered by dry run mode, so those should exist. So, we'll go into the backup directory and list the storage. We have two folders right here: files previous and logs. And if I list the storage inside that particular directory, we have two log files: the normal log file and the error log. So let's take a look at the backup log. As you can see right here, it has a list of files that are being backed up. But we also see at the bottom that it says dry run, which is our indication that dry run mode is enabled. So these files won't actually exist at the target. What we want to do is look at the output and make sure that everything looks good, that any files or folders we want to back up are listed right here. If everything looks good to us, we can remove --dry-run, and that'll give us an actual backup. Now, what I'll do right here is add the word current to the destination. So I added slash and then current, because I don't want the backed-up files to be listed alongside the other directories here; I want them in their own directory. The current directory is going to include the most current version of every file. Again, this is going to make sense very shortly. So, what I'll do is go down here, remove the --dry-run option, and save the script. And then let's run it and see what happens.
Now, we don't see any output, and we don't expect to, because we're sending all output to log files: standard output to one log file and standard error to another. So when we go to the backup directory this time, which is this one right here, you'll see we have a directory named current. Now, I didn't create that directory. The reason it exists, even though I didn't create it, is that rsync will create directories up to a certain point. It's not going to create deeply nested directories, but it will create one level of directories if it needs to, which is why I didn't set up the script to create the current directory. Anyway, if I go in there and list the storage, you'll see that I have my files, and then I'll list the storage again, and here we have all the sample files that I wanted to back up. Continuing, if I go back a directory, and then back one more, we're going to go into files previous. If I list the storage, we actually have a dated folder right there. So I'll go inside that folder, and we have nothing inside it. The reason there's nothing inside that folder is that we just ran the backup for the very first time, so nothing was replaced at the target, and there's nothing to keep a copy of. That's perfectly fine. But watch what happens if I go back to my home directory and then the source folder, which we have right here. What I'm going to do is just make some random changes to these files. So, inside the code directory, we have a few files right here. I'll open up the requirements file, and I'm going to make a very simple change to it. We don't really have to do anything extravagant. So I'll save the file and exit out, and now we've made a change to that file. So what I'm going to do is go back to my home directory and run the script again. Now, keep in mind, we just made changes to a file.
And what's going to happen is that the new file is going to be copied to the destination, and the file that was there before we overwrote it is going to be copied to our previous files directory. So if I go into that directory (and again, we're going to have a folder for each and every date), I'll list the storage. We have my files, we have that code directory, and we have the original file. So the basic idea here is that with this script, the current directory will always contain the most current version of every file that you're backing up. Meanwhile, the directory that we established as the previous files directory will include everything that would have been overwritten. So again, if we want to restore a previous version of a file, we can grab it from that directory. We'll get back to the video in just a moment, but I wanted to mention that we Linux users, well, we overengineer just about everything. I mean, we'll spend 20 minutes scripting something that a checkbox could have fixed, automate a task that we end up only doing once, or set up multifactor authentication on a home lab app that literally no one else can access. And that's just scratching the surface. But imagine if we spent as much time on our appearance as we do overcomplicating things. If we did, we'd look amazing. But instead, we're so busy tuning config files that most of our wardrobe was probably purchased well before the pandemic. And if you want to keep obsessing over terminals, desktop environments, distros, and config files, and still look good while doing it, then you might want to check out the Learn Linux TV merch shop. It's a store for people who care about Linux, customization, and coffee, probably in that order. You'll find shirts, hats, buttons, mouse pads, bags, and more. Whether you want to show off your favorite distro, have a little fun, or even warn people about your obsession, there's probably something in the shop just for you.
And it's not just about revealing to the world that you run Arch. Although, yeah, that is important. There are practical items, too, such as the tmux cheat sheet mouse pad, reference sheets, and other genuinely useful items. Every purchase helps support the channel, so grab yourself something great. You can check out the shop at merch.learnlinux.tv. And as always, I really appreciate you guys. Thank you so much for checking out the shop and supporting the channel. And now, let's get back to the video. At this point, there are a few things we should do to make this even better. The first is really easy. Currently, we have the backup script in my home directory, but that's not really a good place to store it. So where should we store it? Well, what I'll do is move it to the appropriate place. I'll use sudo mv, for move. I want to move the backup script, and I'll move it under /usr/local/bin. (usr is abbreviated.) And now, if I list the storage of that directory, you'll see the backup script is located right there. The next potential problem is that it's owned by my user account, and I want to make sure that it's not easy to modify that script. So what I'll do is run sudo, and then I want to change the ownership. I want to make sure that root owns that particular script, and the root group as well, and I'll run that against the script file. As you can see, it's now owned by root, and that's more secure. We definitely want to make sure that it's not easy to modify something as important as a backup script. However, the next problem is a bit more serious. The thing is, we're really busy, and we might not remember to run this script. It would be a shame if we forgot to run it and then something happened and we lost files. It would be a lot better if we could set this up to run automatically, and in fact, that's what we're going to do right now. We're first going to create a systemd service.
And the systemd unit will run our backup script for us. To set that up, we'll type sudo, we're going to use a text editor, and we'll edit /etc/systemd/system and call the file backup.service. So we'll open up the file, which is going to be empty because we have yet to create it. But it's actually fairly simple. Again, what we're doing is creating a systemd unit. Now, I do have a video on the channel that covers systemd in more detail. In fact, I have several videos that cover it. But the basic idea is that we're creating a systemd unit, in this case a service, that's going to run the script for us automatically. If you've never created a systemd unit before, there are going to be several sections. The first one is going to be Unit, notice the capital U, and we'll type a description. The description will say run a daily backup. We'll type After, then network-online.target, and then Wants, with an equals sign, and again we'll set that to network-online.target, as you see right here. That basically makes sure that this particular unit will not run until after the network is online. That way, we don't have a backup script that tries to run while the network is down, which would mean it wouldn't actually be able to copy anything to a network drive. We want to make sure the network is online first. Next, we'll build the Service section. We'll need to add a type, and the type we'll add is oneshot. Basically, what this means is that we're setting up something that's going to run and then exit. It's not going to stay running in the background; it only runs as long as it needs to. That's what oneshot means. Then we have ExecStart, and we're going to set that equal to the full path to our script. In our case, that's /usr/local/bin, and we called the script backup.sh. Just like that. Finally, we'll build the Install section. And we're going to set up a systemd timer; that's why we're adding this right here.
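Assembled from the narration, the service unit would look roughly like this. The transcript doesn't show what actually goes in the [Install] section; WantedBy=multi-user.target is a common choice assumed here (a service started only by a timer can also omit [Install] entirely).

```ini
# /etc/systemd/system/backup.service (sketch from the narration)
[Unit]
Description=Run a daily backup
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

[Install]
WantedBy=multi-user.target
```

Type=oneshot is what lets the unit run to completion and exit rather than staying resident, which fits a backup job driven by a timer.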
We'll save this particular file. Next, we'll create a timer as well. If you've never heard of a systemd timer before, it's essentially a cron job with additional features. It gives us the ability to run something at a particular time. Specifically, it gives us the ability to run a systemd service at a particular time. That means we won't actually be calling the backup service directly; we'll be using the timer to do that for us. To set that up, we'll again build a Unit section, then add a description, and I've done that. We'll create the Timer section. We'll use OnCalendar, and we're going to set that to run at midnight. And of course, you can adjust the time. If you want to change it from midnight to something else, you can absolutely do that, but I'm just going to keep it simple and set it to midnight. Next, we have the Persistent option. This is really important, because persistence makes sure that a job runs even if the machine was down when it was supposed to run. So, for example, say you're going to run this at midnight, but you take the server down at 11:30 for maintenance and you're not done until 12:30. In that case, it's too late, right? Because midnight has passed. But with the Persistent option set, systemd will run a missed job if it encounters one. So if you start the server after midnight, systemd will notice that the job didn't run and then go ahead and run it. We definitely want to set this to true. Finally, we'll set up the Install section, and again, WantedBy, and then timers.target. And that's our completed timer. I'll save it and exit the editor. As you can see here, we created two things: a service and a timer. The service gives us the ability to run something, and the timer will help us run it automatically.
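The timer unit described above would look roughly like this. The narration only says "midnight", so the exact OnCalendar value is an assumption; OnCalendar=daily fires at 00:00, and an explicit form like "*-*-* 00:00:00" is equivalent.

```ini
# /etc/systemd/system/backup.timer (sketch from the narration)
[Unit]
Description=Run the backup service daily

[Timer]
OnCalendar=daily
# Run a missed job at next boot if the machine was down at trigger time
Persistent=true

[Install]
WantedBy=timers.target
```

The filename must be backup.timer so it pairs with backup.service; after creating both files, sudo systemctl daemon-reload followed by sudo systemctl enable --now backup.timer activates it, as covered next.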
Now, the thing is, the name of the service and the timer have to match. The file extension is different, but the name needs to be the same. So keep that in mind. systemd is smart enough to know that when it runs a timer, it's supposed to find a service with the same name. If it doesn't find one, well, it's not going to run. What we actually need to do right now is tell systemd that we created these files. We need to refresh it, basically. To do that, we can run sudo, then systemctl, and then daemon-reload, just like this. All that does is tell systemd to rescan its files and see if there's anything new. So if it didn't already notice the service and timer that we created, it's going to notice them now. Similarly, we can check the status of the timer as well. In this case, it's disabled, just like the service was, and it's also not running. But unlike the service, the timer does need to be enabled, and it also needs to be running. To set that up, I'll type sudo, then systemctl, then enable, then the option --now, and I want to enable the backup timer. Now, if I check the status again, we can see that it's active and waiting, and it's also enabled. What's also interesting is that it tells us when it's going to run next. In this case, we see that this backup script is going to run in 6 hours. Now, at this point, let's give it a quick test. Underneath my files, I'll list the storage, and what I'll do is just create a test file. I want to make sure that this gets copied over. And now we have the test file. What I'm going to do is run the backup script and see if it copies that file over. However, I'm going to do it a little bit differently this time. I'll start the backup script through the systemd service instead of running it manually, and this will simulate what the timer is going to do the next time it triggers.
So what I'll do is type sudo systemctl start, and I want to start the backup service. Now, earlier I mentioned that you're not generally going to run the service file directly, because the timer is going to do that for you. But there's nothing wrong with testing it out, and that's exactly what we're doing right now. So I'll press enter, and it looks like it worked. We'll check the status of the backup service, and here we can see that it ran; it was given the code success. So far, so good. And if I look at the contents of that directory inside the backup directory, we'll see the test file is right there. So it worked through the systemd service, which is pretty cool. However, there actually is a very important bug that we need to fix when it comes to our backup script. Now, it's not a bug in the sense that the script won't work until we fix it. We saw that it worked, so it looks like everything is perfectly fine. But there is a situation that you might run into that we want to prevent. So, what I'll do is edit the script. Since that script is now saved under /usr/local/bin, which is a protected directory, I'll need sudo in order to edit anything under it. And there's the script. Now, when I told you earlier that there's a potential bug, what was I talking about? Well, here's the thing. If you're mounting an NFS share or external directory like I am, you could run into a situation where the backup script runs even though the target mount point isn't mounted. That'll cause a false positive when it comes to success: you'll see success messages, but the backups aren't actually going to be sent anywhere safe. So, what we'll do is implement a check that makes sure the backup directory is mounted before the backup can actually run. Keep in mind, if you're not using an NFS share like I am, then you don't need to implement this. If it's just a demo, I guess it really doesn't matter.
But if you are going to be implementing this in production, it is a good idea to add the block of code that I'm going to give you right now. So, in my case, what I'll do is add it right here. I'll give it a comment of safety check, and I'm going to write an if statement right here. So, I'm setting up the if statement. Now, basically what's going on here is we're using the mountpoint command to make sure that our backup directory is mounted. And also, there's an exclamation mark right here, and that's basically going to invert the check. So it's checking if it's not mounted. That's what the mountpoint command is doing. The -q option is just making sure that we get quiet output, or essentially no output. We want to make sure that it's mounted, but we don't really care about seeing text, because this is going to run automatically when the timer runs the script, so we might not be present to see what the output actually is. But we do want to check that this is mounted, because here's what could happen: normally, you mount something under /mnt/backup, so when you save files there, they're going to be saved at the destination, you know, whatever's mounted to /mnt/backup. But if that mount ever drops, for example an NFS server goes down or you disconnect an external hard drive, the script is still going to be able to copy data into that directory; it's just that at that point, the data lands on the local file system instead of the mounted storage. So in this case, we want to make sure that the mount point is present before we run this script, so that we don't start writing backups to our local file system. But since we have the script open, let's just take a quick run through it and make sure that we fully understand it. There's the backup source directory, and we're going to set that equal to the path that contains the files that we want to back up. So that's pretty straightforward. Backup dest is going to be the destination.
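The safety check described above might be sketched like this; the function name and error message are illustrative, since the video adds it as a plain if-block at the top of the script:

```shell
#!/bin/bash
# Sketch of the mount safety check. Returns 0 if the directory is a real
# mount point, 1 otherwise.
check_backup_mount() {
    local target="$1"
    # mountpoint -q exits 0 if the directory is a mount point, non-zero
    # otherwise; -q suppresses output, since the timer runs this unattended
    # and nobody is around to read it
    if ! mountpoint -q "$target"; then
        echo "ERROR: $target is not mounted; refusing to run backup" >&2
        return 1
    fi
}

# In the real script this would be a plain if-block that runs "exit 1"
# when /mnt/backup is not mounted.
```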
We're going to be saving it under /mnt/backup and then the current directory. We also have the current date variable, and that's going to be set equal to whatever the date command gives us when it runs, which is going to be the current date. We're also setting the log path; that's going to be /mnt/backup/logs in this case. We have the previous files directory, which is where the replaced files are going to end up. And then we have the safety check that we just added; I just explained that. And then down here, we're going to be creating some directories. That'll just make sure that the log path exists and the previous files directory exists as well. And then the final block of code is actually running our rsync. We're using the -a option, which is archive mode. That's going to try to retain metadata, which is pretty important if we can. The -v option is going to enable verbose output; that's why we're able to capture the output and save it into log files. But we also have the --backup-dir option, and that particular option enables us to set which directory our previous files are going to be saved in. Then after that, we're giving it backup source and backup dest. We're going to redirect standard output to the log file; that'll allow us to keep a log of what's actually being backed up. And then the 2> redirection references errors, specifically saving errors in the error log. Now, there is a bit of redundancy in this file, and it's always a good idea to check for this. I left it in the file on purpose to show you something that's pretty important. You'll notice that we have a variable that gives us the destination for the backups; it's going to be under /mnt/backup. So that's in a variable, which is actually appropriate. But then later, we're hard-coding /mnt/backup again. So for example, right here we have /mnt/backup a second time, which is kind of redundant, because we included that in the variable. So let's fix that.
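Pulling the pieces of the walkthrough together, the script might look something like this condensed sketch. The variable names follow the narration, but the source path, date format, and "current" subdirectory are assumptions, not the video's exact values:

```shell
#!/bin/bash
# Condensed sketch of the backup script described in the walkthrough,
# wrapped in a function so the paths can be passed in.
run_backup() {
    local backup_source="$1"   # directory we want to back up
    local backup_mount="$2"    # e.g. /mnt/backup in the video
    local current_date
    current_date=$(date +%Y-%m-%d)
    local backup_dest="$backup_mount/current"
    local log_path="$backup_mount/logs"
    local previous_files="$backup_mount/previous/$current_date"

    # Make sure the log and previous-files directories exist
    mkdir -p "$log_path" "$previous_files"

    # -a: archive mode, preserves permissions/timestamps where possible
    # -v: verbose, so the captured log actually lists what was copied
    # --backup/--backup-dir: replaced files are moved aside, not lost
    # >: stdout goes to the log file; 2>: stderr goes to the error log
    rsync -av --backup --backup-dir="$previous_files" \
        "$backup_source/" "$backup_dest/" \
        >  "$log_path/backup-$current_date.log" \
        2> "$log_path/backup-error-$current_date.log"
}
```

The real script in the video uses top-level variables rather than a function; the function form here just makes the sketch easy to test against temporary directories.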
So right here, what I'll do is type backup_mount, and we'll set that equal to /mnt/backup. And then here, what we're going to do is keep the current directory, and we're going to reuse it right here. We basically don't want to type anything twice unless we absolutely have to. And then right here, we can avoid setting the path manually by including the variable that we just created. We also have a bit of redundancy right here, so let's clean this up a bit. And the same thing here. And now I think the script looks a lot better. Again, we definitely want to avoid redundancy whenever we can help it. Now, if you're curious about the exit statement that's right here, that allows us to control the exit code. An exit code of zero means success, but here we don't want to exit with success; we want to specifically fail out, and that's what exit 1 gives us the ability to do. So, if you were to check this script with another tool and it has an exit code of 1, that's going to constitute a failure. And if it's an exit code of zero, then it's a success. Now, if the mount point does exist, it's going to skip this entire block right here, exit 1 will never run, and the script will continue. It'll create the directories, and then it gets down here to the rsync command. And as long as that runs OK, then we'll still have an exit code of zero. But when we do run the script, we want to make sure that we don't have an exit code of zero when it fails. So, we're basically trying to catch the failure if the if statement determines that the mount point isn't present. Anyway, let's save the script and exit out. And to be on the safe side, what I'll do is run the script to ensure that it's working. Let's check the status. And as you can see, the script was able to run successfully. Now, before I close this video, I want to point your attention to something: it's possible that your script might silently fail.
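The de-duplication described above can be sketched like this, with assumed names: define the mount point once, then derive every other path from that single variable.

```shell
#!/bin/bash
# Define the mount point once, then build every other path from it, so
# changing the destination later means editing exactly one line.
BACKUP_MOUNT="/mnt/backup"
BACKUP_DEST="$BACKUP_MOUNT/current"     # hypothetical subdirectory name
LOG_PATH="$BACKUP_MOUNT/logs"
PREVIOUS_FILES="$BACKUP_MOUNT/previous"

# Exit-code convention: 0 means success, anything else means failure.
# That's why the safety check uses "exit 1": a monitoring tool then sees
# an unmounted target as a failed run instead of a silent false positive.
```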
For example, if you don't remember to go back and check it, you'll never know whether or not it actually ran. And it's a little tedious to check this every single day. So, it would be nice if there was a way to know whether or not it ran, and have something alert us if it didn't. And healthchecks.io is one way that you could set that up. Now, just to be clear, this service is not a sponsor of the channel; I just use it personally to make sure things run. It's a really good service, and I'm sure there are others just like it. But this will give us an example of checking to make sure that something has run, and if it hasn't, getting an alert about it. So if you want to use healthchecks.io, you'll need to create an account there. And then once you do, you can create a project. So what I'll do is create a new project; I'm going to name mine rsync backup. So let's create it. And now we have the project. So, what I'm going to do is add a new check. You could basically create a new check per server. I only have one server, so I'm just going to be creating one check right here. But we can leave these two fields as they are. We'll leave it on simple mode. And then we decide how many days we want to wait until we get an alert. Now, since this script is supposed to run every day, I want to get an alert if it hasn't run in a day. We'll give it a grace period of 1 hour, basically giving it a chance to finish running. And what we'll do is save it. Now, right here we have our health check URL. So, what I'm going to do is copy it, just copy this right here. And then what we'll do is return to our script. So, I'll bring it back up in an editor, and what we'll do is add our health check to the script. What I'll do is just create it by itself; I want it to stand out. We'll create a new variable; we're going to call it health check URL. We'll paste in the URL that we copied earlier, and then we'll close it. And now what we'll do is go all the way to the bottom.
We're going to add another block of code right here. We'll create an if statement with dollar sign question mark, and what that allows us to do is check the exit code. Remember what I told you earlier: if a command is successful, the exit code will be zero, and if it's not successful, it'll be anything other than zero. Earlier, we added a block of code that checks the mount point, and we're exiting with exit code 1 there; again, we want to specifically fail if the mount point isn't present. But when it comes to the rsync command itself, if it was successful, we want to capture that. So, what we'll do is use dollar sign question mark, as you see right here. We're going to check if it equals zero. We'll close the bracket, add a semicolon, type then, and use curl. We'll give it a few options, we'll use the health check URL variable that we created earlier, and we'll send the output to /dev/null. And now we'll close the if statement. We're sending the output to /dev/null because we just don't really need to see it; we need this to run, but we don't need any output. So this if statement right here is going to check the exit code. When you run a check like this, it looks at the exit code of the most recent thing that ran, and in this case, rsync will be the most recent thing. So it's going to check if the exit code was zero, which means success. And if that's the case, it's going to ping the health check. So what I'll do is save the file and exit out. We'll go back to the browser, and you'll notice right here it shows that this has never been run; last ping is set to never. So what I'll do is run the script right now. I'll switch back to the terminal, and we'll run sudo systemctl start, and we'll start the backup service. I'll press enter. And now that it's run, let's go back to the browser. And as you can see right here, the last ping was 11 seconds ago.
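The ping block described above might look something like this sketch. The URL is a placeholder (each healthchecks.io check gets its own unique ping URL), and the exact curl options are an assumption based on the narration:

```shell
#!/bin/bash
# Sketch of the healthchecks.io ping appended to the end of the script,
# wrapped in a function so the exit code can be passed in explicitly.
report_success() {
    local exit_code="$1"   # pass in $? from the rsync command
    local url="$2"
    # Only ping when the previous command succeeded (exit code 0). Output
    # goes to /dev/null because nobody is watching when the timer runs.
    if [ "$exit_code" -eq 0 ]; then
        # -fsS: fail on HTTP errors but stay quiet otherwise;
        # -m 10: 10-second timeout so the script can't hang on the ping
        curl -fsS -m 10 "$url" > /dev/null
    fi
}

# In the real script, placed right after the rsync command:
#   report_success $? "$HEALTHCHECK_URL"
```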
So, it definitely ran. Now, if it ever goes one day without running, then this check is going to flag and send us an email. That way, if our backup script doesn't actually run, or it's not successful, we'll get a message and we'll be able to take action. And that's important; we definitely don't want anything to silently fail. Anyway, I hope you enjoyed this script and I hope it proves useful in your Linux journey. And there's our video. In today's video, we learned more about rsync and we also combined it with other technologies. We set up a systemd service and a timer, and we also checked out healthchecks.io, which can help prevent us from experiencing a silent failure. I really hope this bash script is helping you out and that you learned a lot. And if you did, then be sure to click that like button to let YouTube know. In the meantime, thank you so much for watching this video, and I'll see you in the next one.

Video description

In this step-by-step tutorial, we build an automated Linux backup using rsync and a systemd timer. We'll create a reliable rsync backup script, test it safely using dry-run mode, and automate it using a systemd service and timer so your backups run automatically. You'll also learn how to prevent silent backup failures by adding a mount check, and how to integrate healthchecks.io so you get alerts if your backup script fails. This tutorial is perfect for Linux users, sysadmins, and homelab enthusiasts who want a simple, reliable, and transparent backup solution without relying on heavy backup software. *❤️ Consider becoming a Channel Member* Support Linux Learning and gain access to exclusive perks, such as ad-free content and early access to select videos. Your support really helps!!! Join here ➜ https://learnlinux.link/member *🛍️ Support The Channel and Get Awesome Linux Swag!* Head on over to the Learn Linux TV Merch Shop and check out some great Linux-themed gear, including (but not limited to) T-shirts, drinkware, buttons, stickers and more!
• "apt install coffee" T-Shirt ➜ https://learnlinux.link/apt-install-coffee • "sudo" T-Shirt ➜ https://learnlinux.link/sudo-shirt • Linux Commands Cheat Sheet ➜ https://learnlinux.link/linux-commands • "May Spontaneously Talk About Linux" T-Shirt ➜ https://learnlinux.link/talk-about-linux-shirt • "Dark Side of the Terminal" T-Shirt ➜ https://learnlinux.link/dark-side-shirt • Lots more ➜ https://merch.learnlinux.tv _Use coupon code "LINUXFAN" to get 10% off your entire order ➜ https://merch.learnlinux.tv_ *🐧 Other Ways to Support Learn Linux TV* • Channel Membership ➜ https://learnlinux.link/member • Patreon ➜ https://learnlinux.link/patron • Spin up your very own Linux server ➜ https://learnlinux.link/digitalocean • Linux swag ➜ https://merch.learnlinux.tv • Check out Netdata ➜ https://learnlinux.link/netdata • Jay's Gear ➜ https://learnlinux.link/amazon _Note: Royalties and/or commission is earned from each of the above links_ *🕐 Time Codes* 00:00 - Intro: Automating Linux Backups with rsync and systemd 02:21 - Project Overview: What You Need for an rsync Backup Script 06:41 - Writing a Linux Backup Script with rsync 16:16 - Testing the rsync Backup Script (Dry Run Mode) 21:29 - Support the Channel + Linux Swag 23:03 - Finalizing the rsync Backup Script 24:24 - Automating Backups with systemd Service and Timer 31:06 - Testing the systemd Timer and Backup Script 32:30 - Adding a Mount Check to Prevent Backup Failures 35:27 - Full rsync Backup Script Walkthrough 40:16 - Monitoring Backup Failures with healthchecks.io *🔗 Relevant Links* • Official Blog Post ➜ https://learnlinux.link/rsync-systemd • Health Checks ➜ https://healthchecks.io *🎓 Full Linux Courses* • Linux Crash Course ➜ https://linux.video/cc • tmux ➜ https://linux.video/tmux • vim ➜ https://linux.video/vim • Bash Scripting ➜ https://linux.video/bash • Proxmox VE ➜ https://linux.video/pve • Ansible (Udemy) ➜ https://learnlinux.link/ansible • Linux Essentials (Udemy) ➜ 
https://learnlinux.link/linux-essentials *🎓 More About Learn Linux TV* • Main site ➜ https://www.learnlinux.tv • Community Forums ➜ https://community.learnlinux.tv • Github Account ➜ https://github.com/LearnLinuxTV • Content Ethics ➜ https://www.learnlinux.tv/content-ethics • Request Paid Assistance ➜ https://www.learnlinux.tv/request-assistance ⚠️ Use Content Responsibly Learn Linux TV shares technical content intended to teach and help you, but it comes with no warranty. The channel is not liable for any damages from its use. Always ensure you have proper permissions, follow company policies, and comply with all applicable laws while working with infrastructure. #rsync #linuxbackup #linuxcommands #rsynctutorial #linuxadmin #sysadmin #linuxserver #databackup #filesync #linuxforbeginners #commandline #devops #linuxhowto #opensource
