Transcript
What's going on, everyone. In this video we're going to be talking about shaping Linux traffic with tc, or traffic control. tc is a really awesome tool. I use it here and there to do interesting simulations of packet loss, of bandwidth control, all kinds of cool stuff, and it's generally a great tool for doing neat things at the level of Linux traffic flow, where you can do everything from implementing quality of service in a router-like fashion to more specific use cases like simulating certain scenarios on nodes. We're going to cover a bunch today. First we'll talk about a use case around quality of service; then we'll get into traffic control, the tool itself, how it works, and some of its key concepts; and finally we'll go through a shaping exercise. If you're not familiar with what I mean by shaping, don't worry, I'll be talking about that, but we're going to end with a concrete example of the cool stuff we can do with tc.

Starting off, let's add some context around this notion of quality of service, just to set the stage. tc isn't only for implementing things like quality of service, but I think QoS is a really good example of something you can set up with it. And just to level set what quality of service means: you don't have to be a Linux guru to understand it. Many of us are actually pretty familiar with the idea, because many of us use a router that supports some amount of QoS setup inside our home or work networks. There are a bunch of different ways to do quality of service. One way is from a client perspective. Say there's laptop 1, and then there are folks on the guest network, say guest 2. In a very simple model, there are ways for us to look at the devices themselves connecting to the router and put in specific rules about the degree to which their traffic should be limited. Maybe devices considered guests, or MAC addresses that aren't on a known list, get a cumulative limit of 10 megabits per second inside the network, while laptops, or groups of laptops, get a larger pool of 100 megabits per second, because they're trusted, known devices that should have priority to the bandwidth.

Then there are ways to do quality of service based not just on the source but also on the destination. Think of it like this: when the router sends out to, say, some internal network that's super important to us, some theoretical CIDR range, we want to be able to prioritize that traffic, so maybe we allow an upload speed there of 1000 megabits per second. Then, in the fall-through case of every other IP address, we want to ensure that public-facing traffic, which is less critical, gets a guaranteed limit of 100 megabits per second. And things can get even fancier from here. Maybe we only want to give that 100 megabits per second when there's unused bandwidth. At a larger scale, say the total pool available is 1500 megabits per second. The idea is that if the important network isn't saturating its throughput, we let the less critical path burst all the way up to the full network capacity of 1500 megabits per second; but the second the important path starts needing to take in traffic and do stuff, it should largely get priority, in which case all of our traffic to this bottom path would then
be limited to 100 megabits per second. You can actually implement scenarios like these with tc. And, like I said, to take a more specific example of how I use it: I'm not implementing routers or quality of service or anything like that, but in my world I work with a lot of containerized workloads. In those environments you've got one machine with perhaps 10, 20, sometimes hundreds of containers on it, and I can simulate cool things. I could say, all right, all of a sudden this machine can only transmit 10 megabits per second; how does that impact all the workloads running on it? Or, all right, all of a sudden this machine is dropping every 700th packet; what impact does that have on my complex system? So it's just a super powerful tool for everything from quality-of-service-style setups to the more simulation-type work I often use tc for.

With that being said, we should probably talk a little about what tc is. If you have a Linux host lying around, it's quite likely you can just type in "man tc" to get the man pages, because tc is probably already on your system and available. As the name implies, tc is traffic control: it lets us manipulate the traffic control settings used in the Linux networking stack, specifically in the kernel. There are a couple of different things in the man pages that get called out. First are the ways you can control traffic. Our focus is going to be around this idea of shaping traffic, which is concerned with impacting the rate of transmission. Now, oftentimes when you hear "rate of transmission" you just think, okay, that means I'll be able to lower the amount of bandwidth available, and that's a totally valid use case, but there are tons of more advanced things with priorities and so on that you can do when you're shaping traffic. Outside of shaping there are a bunch of other options you could check out, things like scheduling, policing, and dropping traffic, all of which can be used for advanced traffic control, so they might be worth reading about, but our focus today is shaping.

Now let's talk a bit about the components that make up tc; this will help frame some of the initial bits, and we'll diagram them as we talk about them. The first is the qdisc, which stands for queueing discipline. This is the thing the kernel will enqueue packets to and eventually pull packets off of, so think of it kind of like a queue: the kernel can put packets into it, and afterwards the kernel will try to get as many packets as possible back out of that qdisc, eventually giving them to the network driver. Here's how this looks from a conceptual standpoint on, say, a Linux host. We'll take a box and call it our Linux host. On a Linux host we've eventually got the network adapter, which I'm going to represent in software by calling out the network interface; let's say the interface is eth0. Now, on a given Linux host, be it containers, be it processes, whatever it might be (and containers really are processes, so let's just call this process 0), some things happen in the Linux networking stack where the process eventually gets packets out to the interface, which then sends those packets out to some destination, likely outside of the host.

So this diagram is definitely not talking about a couple of very key components; there's obviously a lot happening in the Linux networking stack to facilitate this, but I think it gets the point across at a conceptual level. To further that somewhat naive explanation, which again just aids the conceptual idea here, let's talk about the actual qdisc itself. What we're adding here is effectively something we can think of like a queue: instead of going right to eth0, we'll be able to filter these packets, more or less, and make sure they make their way into this queue, which the kernel will eventually pull directly off of and send to the network interface. Now, the introduction of this queue, while it does add an intermediary step, adds a lot of functionality, because we will eventually learn how to add these things called classes inside of it. Classes basically mean that instead of there just being a first-in-first-out model, where queued packets get pulled out by the kernel as quickly as possible, we can do fancier stuff: we can, in effect, hide things from the kernel behind the scenes and only let it get, say, 10 megabits per second out of the queue. So the quality-of-service controls and all that good stuff, we're actually going to be able to do that inside of these qdiscs. Which brings up our other big piece: the classes themselves. qdiscs can contain classes, which can then contain further qdiscs. That might seem a little confusing, but there's basically a parent-child relationship you can build. I won't talk much more about that right now, until we get into the example, because that will bring some of it to light, but as I mentioned, classes are going to let us do some of the more complex things with the qdiscs.

In the man pages you'll find two categories. There's the idea of classless qdiscs, which are qdiscs without a class associated with them. An example of that, like I mentioned, is first-in-first-out, which is the default. The FIFO model is almost a no-impact type of operation: things get queued up, and they get pulled out as quickly as they can, in first-in-first-out order. And then there are the classful qdiscs, which are the ones we're going to focus on. There's a bunch of them you can read through; the one we'll focus on is called the hierarchy token bucket, or as I'll be calling it throughout, HTB. HTB is going to allow us to have guaranteed bandwidth for classes, which is exactly what we want: we want the ability to say "you can send this much traffic at this rate, based on the IP address you're sending to", some scenario like that. And I should mention, in case you want to dig a little deeper into HTB, the documentation there is also great: you can just do "man tc-htb" and it will bring up the documentation for this specific qdisc, so I'd definitely recommend checking it out and reading through it. Obviously I'm not going to get into all of it, but we will look at some of the key parameters, like ceilings and rates and priority and all that good stuff. For now, let's go back to our main page.

So we've talked a bit about qdiscs, we've talked a bit about classes, and I even alluded a little to filters. Filters are going to give us the ability to say, as I diagrammed here, that based on some property of the packet, looking at it from, say, a source/destination type of setup, it should be going into a specific qdisc, eventually getting it into the qdisc that's appropriate
for it, with, ideally, the class that's appropriate for it as well. So we've got qdiscs, we've got classes, we've got filters; those are some of the main constructs inside of tc. Before we get too deep into implementing, let's lay out what the tree more or less looks like for the tc setup we're going to try to implement. If we think about this kind of like a tree, we know there's going to be a root qdisc. My understanding is there's always a root qdisc; I'm pretty sure that's a fair statement, unless there are some edge cases I'm not aware of. And the root qdisc, as we mentioned, is the main component the kernel is actually going to talk to, so let's put "kernel" up here so we don't forget. One of the key reasons I think it's worth calling this out is that we're going to formulate a tree down here with a bunch of different nodes, and a lot of times we conceptually think of trees top-down, where the root would eventually go to some child and then maybe send the packet out. But it's actually quite the reverse: the kernel interacts with the root both to enqueue packets and also to dequeue packets to be sent out, so everything kind of bubbles back up through the kernel, if that makes sense. Maybe when we get into the example it will help a little bit.

What we want to implement off of the root is what I'm going to call qdisc 1; I suppose we'll assume that root is itself a kind of qdisc. The idea is that qdisc 1 is something we're going to implement a class inside of, and the class is basically going to limit us to 10 megabits per second via the hierarchy token bucket, or HTB. Then what we'll effectively want to do is set up a filter. I'm going to use some pseudocode here; again, this isn't a perfectly technically accurate diagram, it's just to get the concepts across: if destination equals u1 or u2 (I'll talk about what these are; they're hostnames), then go to qdisc 1. This pseudocode is effectively what we're calling a filter, and the filter is basically here to say: as the packets come in, let's actually look at them and figure out what their destination is, and based on that, send them to this qdisc. So in theory, if you're sending to u1 or u2, traffic would go to this qdisc, and the end result is that we'd end up with traffic limited to about 10 megabits per second. That's roughly what we're going to set up as an initial take, and then we'll expand on it. And if it feels a little academic and conceptual at this point, that's okay; let's now transition from traffic control concepts into actually implementing the shaping itself, and we'll talk about this more concretely.

To demonstrate this, I've got three servers set up. I've got u0 at the very top; this is going to be our client. u1 is server one, u2 is server two. Now, in order to show this traffic behavior consistently, I'm going to be using a tool called iperf, which you can download with your package manager. Basically, iperf is going to let us send traffic through to gauge what the bandwidth is. On each of these servers I'm going to run iperf in server mode, with -s, and I think we're also going to put in a port here, 8080, and that should be about it. Great, so iperf is listening on port 8080. Let's do the same thing on the other server: iperf in server mode on port 8080, listening on 8080.
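The iperf setup just described looks roughly like this; the port and the -t duration are the ones used in the walkthrough, and the server address is a placeholder you'd replace with your own server's IP:

```sh
# On u1 and u2 (the servers): listen on port 8080
iperf3 -s -p 8080

# On u0 (the client): run a 30-second bandwidth test against a server
# <server-ip> is a placeholder for the server's actual address
iperf3 -c <server-ip> -p 8080 -t 30
```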
You know what, I think there might just be two versions of iperf on these servers, just to make it extra confusing, so let's make sure we're using the same version on both: iperf3. Now, on the client, I can connect to these iperf servers. Effectively, I should be able to say iperf3, say that this is a client, put in the IP address of u1, which I know is 192.168.5, with port 8080, and I'm also going to time this test for 30 seconds. So we go ahead and run it, we can see it's connected, and zooming into server one, it starts giving me output of what my bandwidth is. This should be really fast bandwidth, because these are virtual machines on the same hypervisor, and I can see I'm getting roughly, let's say, 19 gigabits per second in this particular test from the client side. I'll zoom out; the client at the top, u0, is giving me similar results (again, this is just the client perspective), and after 30 seconds it should finish. Then, for a sanity check, I'll do the same thing against u2. The IP address for u2 ends in 5.1, so we go ahead and hit that and zoom in, and we get almost the same results. Obviously it fluctuates a little, but I'd say we're somewhere in the wheelhouse of 19 gigabits per second communicating with u2. So, very fast, and when we enforce our throttling it's probably going to be very apparent, because we're going to drop down quite significantly. Actually, let's even bump our limit up to 100 megabits per second, just to make it a little more reasonable. So how do we implement this at the tc level? Well, we basically want to inject this on the client, because we're throttling bandwidth from the client to specific destinations. What I'm going to do here with y'all is write a little script, and like most of my videos, there's a blog
post accompanying this, so if you want the actual script I write here, check out the blog post; it'll be in the description of this video, and you're welcome to test it out for yourself. All right, here's what we're going to do with this script. We're first going to set up a couple of key variables. Let's declare that this is a bash script; it would probably be fine with /bin/sh, but let's just do /bin/bash. Then we set up the variables. The first is the tc binary. On most systems the tc binary is going to exist inside /sbin as tc, so I'm going to store that here so I can reference it throughout. We're also going to call out the network interface, because it's important to ensure we're attaching to the right one. Going back to our server real quick and running an "ip a" command, our interface is ens160, so ens160 is what we'll store inside this variable. And the last thing I want to set up, at least for right now, is our upper limit. As we talked about on our diagram, we want 100 megabits. How did I know to write "mbit" here? If you go back into the man pages for tc, there's a section that lists out all the bandwidth parameters. For rates there's a whole list: you can do bits versus bytes, and then you've got kilo, mega, giga, tera, and so on. So we're going to limit to 100 megabits per second. The last thing I'll call out is the destination CIDR. This is, again, the location we want to look for when we filter. The destination CIDR I'm starting off with is going to be 192.168., and I'll do /32 to make it just this one IP. The reason I'm not doing both of my IPs here is that I actually want to be able to show you all that we're throttling to just one destination: ideally, I should see the throttling happening when going to u1, but I should not see it when I go to u2, if all works well.

All right. We won't actually use these functions in a fancy way, but I'm just going to make functions to keep it somewhat clean. The first function is create, and the first thing we'll do in it is echo something like "shaping init", just so we can identify our log messages; this is how I know shaping is starting. Then the first real step is to set up that root qdisc, because as we talked about, I need the root before I can eventually attach the qdisc with the class in place. So let's set up the root qdisc. We run the tc command, we say "qdisc add", and point at the device, which is our interface, so I'll use the IF variable since we've got that stored. We'll call this the root, and we'll say "handle 1:0"; this is effectively the identifier. I think the zero is implied with tc; I can't really recall how that works, actually, so if someone knows, leave it in the comments, but I'm going to say 1:0 here. Think of the 1 like a class ID, I guess, and the 0 like a sub-ID; I'm probably using the wrong verbiage, and the relationship will become more apparent soon, but 1:0 is going to be our identifier. We also call out that it's going to use htb. And there's this other thing you can add here, which I like a lot: a default. The default is the idea that, in the case we don't have a class that qualifies a packet, which class should things default to? Perhaps, in a quality-of-service
here is keep it really simple and just literally call clean and create, and then we'll come back in and build on these as we get a little fancier. Hopefully that gives you an idea; this should, if all goes well, implement basically what we were talking about. So let's go ahead and try out this script. I'll go back to my bash window and scp up the throttle.sh file that I've set up; again, this is going to u0, the client server. So we go back to the client server, clear it out, and we'll see if we have typos and things we need to fix up. Server one is still listening, server two is still listening, and we'll just double-check for sanity's sake that throttle is here. It is, that all looks pretty familiar, and we've got the right IP address. Okay, so let's try to run throttle and see if it goes well. We run throttle, and we need to be root, which definitely makes sense; you certainly should be a root user if you're doing this kind of stuff. So let's run throttle with sudo. And, interesting: "No such file or directory", which is a little suspect. Oh, that might be from the cleanup happening; I'm thinking that's the cleanup trying to delete a root qdisc that doesn't exist yet. I should have put echo commands in there, otherwise I wouldn't have known where it came from. But we see "shaping init" and "shaping done", so in theory our qdisc is set up. We could even use tc to list what the configuration currently is, but let's just go ahead and assume it worked, and see if we can hit our servers now. In theory, I should still be able to call server 2 and not be throttled, fingers crossed. So this is a command for server 2.
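To recap, the script assembled over this section looks roughly like the following. Treat it as a sketch: the interface name (ens160), the handle and class IDs, and the variable names are taken from the walkthrough, the /32 destination is a placeholder, and the commands all require root:

```sh
#!/bin/bash
TC=/sbin/tc
IF=ens160                # interface from "ip a"; yours will differ
LIMIT=100mbit            # shaping limit
DST_CIDR=<u1-ip>/32      # placeholder: the one destination IP to throttle

create() {
  echo "shaping init"
  # Root qdisc: HTB with handle 1:0; unclassified traffic falls through to class :30
  $TC qdisc add dev $IF root handle 1:0 htb default 30
  # Class 1:1 under the root, rate-limited to $LIMIT
  $TC class add dev $IF parent 1:0 classid 1:1 htb rate $LIMIT
  # u32 filter: packets destined for $DST_CIDR flow into class 1:1
  $TC filter add dev $IF protocol ip parent 1:0 prio 1 u32 \
    match ip dst $DST_CIDR flowid 1:1
  echo "shaping done"
}

clean() {
  # Remove the root qdisc (and everything under it); errors if none exists yet
  $TC qdisc del dev $IF root
}

clean
create
```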
i expect no changes let's just do it for 10 seconds so i'm calling server 2 and it's still pretty darn fast it's certainly faster than 100 megabits per second right so we're calling server 2 at 20 or 19 gigabits per second now if our filtering went right any process any application any command that calls u1 which again is on this ip address should now be throttled at least that is the hope so let's try it out we'll do that we'll zoom in check it out we have limited bandwidth so we know that u1 is totally capable of a hundred megabits per uh sorry 19 gigabits per second but now we're seeing it get throttled and you can tell it's not getting exactly what the full megabits per second are it could be because there's other processes that are talking to the server right that's using up a little bit of bandwidth it could also just be the calculation you know isn't perfect in regards to like how much it's sending and how much it's receiving and all that good stuff but effectively we are being limited we are seeing ourselves get right around 100 megabits per second when talking to that server so that is pretty darn cool it is actually set up and good to go now since that worked so well i say in this video let's try to make it a little bit more complicated just to give you kind of an idea of how advanced this stuff can get so conceptually check this out we know we've got qdisk 1 at 100 megabits per second now let's say that in theory we want to introduce some children off of this or some leafs off of this let's let's bring this up a little bit and let's say that we want this to be a little bit more complex where and i actually don't even know i think technically it's okay to call this a q disk right i correct me in the comments if i'm wrong here but this is our class i'm just going to call this uh well let's call by the ids that'll make it a little bit simpler this is the class id which is 1-1 maybe maybe i shouldn't be calling that a qdisk but i think technically under the 
hood it might it technically is anywho um so that's this is one one we know that root is one zero okay so that that works pretty well now what if we want to make this a little bit fancier and we want to introduce basically like um 110 and we want to do 110 and since i did default 30 i probably should do 130. so these are arbitrary id numbers i don't have a third server so i'm not going to do 120 but 110 and 130. and basically the idea that i have here is inside of this class i want to go ahead and limit 110 to have upwards of let's say 75 megabits per second okay and then i want 130 to have the ability to have um let's say 50 megabits per second okay so the thought process here is that not only do i want to be able to limit specifically to 75 here and 50 here but i also want to have a relationship with the parent here where when it's unused i can borrow unused bandwidth from the parent so in theory 50 megabits per second is kind of like my um my baseline right that i i will get up to or i guess kind of be guaranteed in a way um guaranteed may be too strong but we'll be given in a way but when not used i want to go ahead and just consume um you know upwards of 100 megabits per second or maybe maybe what could make this use case uh you know a little bit more interesting is let's say let's let's let's actually take a slightly different spin on it let's say that this can have 80 megabits per second and yeah let's keep it simple so this can have 80 megabits per second so not so much um how do i put this like in in a way here that like i want to be able to use the parent as like a a global limiter i guess you could say something where bandwidth is being borrowed from the parent so here's here's let me just concretely put it since i'm not using my words right this can have 80 megabits per second going to a host this can have 80 megabits per second going to a host but the parent offers us up 100 megabits per second in total so in a theoretical world where just 110 is 
firing off, that is, packets are being dequeued to send to some host, it will get 80 megabits per second. But in that same world, if 1:30 becomes active as well, then both would actually bog down to around 50 megabits per second each, because that's what they'd each be drawing from the parent. So you can see how we're getting advanced and building in these hierarchical relationships, but the key thing is that everything still gets pulled off at the root level by the kernel. Effectively, as traffic flows through, we have this limit of 100 megabits per second: even though the children are each capable of 80, should the parent be saturated, both will be impacted by the fact that it only offers 100 in total. So let's try this out. We're obviously going to need some fancier filters, something like "if u1, go to 1:10" and "if u2, go to 1:30". So we'll be changing up the filters, and we'll basically be adding these child classes in. Let's see if we can set this up without getting too confusing. We've got the parent set up, and we know the parent limit is 100 megabits per second, so that's good. Let's add in a child limit variable, which we'll set to 80 megabits per second; that'll be an important thing to reference a bit later. So we've got the qdisc; my verbiage and naming might get me in a bit of trouble here, but just to conceptualize things: we've got root, we've got the parent, and then I'm going to call these children, or leaves if you will. Now let's set up some of these lower-level ones. The first thing we're going to do is basically just make another class.
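Before running any commands, here's the shape being described, using the IDs from the video (the 80mbit ceilings and the 100mbit parent rate are the values chosen above):

```text
root qdisc 1:        (htb, default 30)
└── class 1:1        rate 100mbit   <- parent / global limiter
    ├── class 1:10   ceil 80mbit    <- traffic destined for u1
    └── class 1:30   ceil 80mbit    <- traffic destined for u2
```

Each leaf can burst to 80mbit by borrowing from 1:1, but the two together can never exceed the parent's 100mbit.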
So we do a tc class add and specify the device, the interface, just like we did before. We're now going to reference who the parent is, but this time the parent is not going to be root; it's going to be this class, 1:1. So parent, in this case, is in fact 1:1. Then we put in a class ID, which for this one is arbitrarily 1:10; the 1, then the 10, which I believe is called the minor ID or something; nonetheless, this is my identifier, 1:10. This is going to be htb, and I'm going to put in a rate. The rate here is going to be, I guess you could say, a starting rate that we kick off with. So let's make a variable, call it start rate, and say we want to start ourselves out at 5 megabits per second. This might seem a little weird at first, but bear with me; hopefully I'll be able to bring it to light. So we've got the rate, and we're setting it to the start rate: 5 megabits per second. Now, what we're also able to do here is set up what the ceiling should look like. The ceiling is the theoretical upper bound we could hit, so when we put in the ceil option, the child limit effectively becomes the ceiling. There's a delta between start and ceiling, and the ceiling can go all the way up to that 80 megabits per second. So we've got the ceiling in place, and all looks well: it borrows from the parent, so as it moves past the start rate and consumes up toward the ceiling, it's reliant on what it can borrow from the parent; and if the parent's exhausted, because 1:30, the new class we're about to create, is being used, there will be some impact on how the traffic works.
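Put together, the command for this first child class plausibly looks like the following sketch. The variable names mirror the ones used in the video, and the interface name is an assumption; running it for real requires root and the parent class from earlier:

```shell
IFACE="eth0"            # assumed interface name
START_RATE="5mbit"      # the guaranteed baseline ("start rate")
CHILD_LIMIT="80mbit"    # the ceiling a child may borrow up to

add_child_class() {
  # Class 1:10 hangs off parent 1:1; it is guaranteed START_RATE and
  # may borrow from the parent up to CHILD_LIMIT (the ceil).
  tc class add dev "$IFACE" parent 1:1 classid 1:10 htb \
    rate "$START_RATE" ceil "$CHILD_LIMIT"
}
```

The `rate`/`ceil` split is what makes the borrowing work: `rate` is what the class is always entitled to, `ceil` is how far it may grow when the parent has spare bandwidth.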
So let's basically copy this exact thing for 1:30. We start off with the same rate and the same child limit; really, the only difference is the unique identifier, which we'll set the filters up to use. Now, we've got one filter here that sends things to 1:1, the parent, but we actually want to send things into the leaves; we want to filter so that traffic goes into those buckets, if you will, rather than into the parent itself. So, inside the flow ID, I'm simply going to change this to 1:10, and now we're mapping packets with this destination CIDR into 1:10. Then (this is where my variable names fail me a bit) we'll map destination CIDR 2 into 1:30. Again, this is a specific IP address; I could be using bigger CIDRs here, but conceptually bear with me; destination CIDR 2 now goes in here. So we've got these mapped, theoretically flowing through traffic control into the right bucket, and, in short, that should pretty much be it: we should now have this hierarchical setup, barring any typos or mistakes. Let's see if this works. We'll make a quick enhancement first, because I know this threw me off before: a "clean init" log line, and a "clean done" line as well, just to make sure I've got some logging info. That should be about it; let's see if it works. We'll go back to our terminal, I'll scp up the throttle script one more time, and do a quick sanity check with less to see that throttle is good. Yep, I can see our new clean command, so that all looks well. Let's run throttle on the client and, as usual, I have forgotten to run
it under sudo, so let's do that: sudo throttle. Okay, so at least as far as tc is concerned (I can't yet say for sure that I mapped everything correctly), we have initiated the different pieces, and it was syntactically okay as far as the kernel is concerned. Now we'll go back and look at our servers: u1 is running, u2 is running, so it's just a matter of running iperf. I think the first appropriate test is to focus on dequeuing off just one of these classes; in other words, let's only send packets to u1 and make sure it reaches around 80 megabits per second. We'll run iperf, already set up for u1, hit that, look in, and awesome: as you can see, we're approaching pretty close to 80 megabits per second. That validates that this is working, but it doesn't necessarily validate the parent-borrowing behavior I was talking about. 1:30 can also fire off at 80 megabits per second on its own, which we can certainly show without a problem; let's do that real quick. Now we have this going at 80 megabits per second too, but as we discussed, we don't really want our server exhausting a cumulative 160; the children are borrowing from the parent and shouldn't use more than its 100 in total. So let's set ourselves up with a little micro-script. We'll call it iperf.sh, and I'll make it executable right here before I forget. If I go back and grab the client commands we ran before, to keep it simple, we can just put those in. I won't even put the bin bash line at the top; this will be quite simple. Let's make the runs longer so we don't miss anything. The idea is that I'm going to use an ampersand after each command so we don't attach to the process; we'll actually run both, and we'll just look
from the server side at what the bandwidth looks like when we run these two effectively in parallel; I'm launching two separate processes running the iperf command. All right, we'll zoom in once we get started, but let's run this thing: ./iperf.sh, boom, and, awesome, check it out. If we look at our two servers down here, you can see they're now each only getting roughly 50 megabits per second because, again, they're pulling, or borrowing, from that parent. Effectively, their 80 is never reached, because together they're consuming the parent's capacity of 100. Clearly it's weighting things evenly in this case; I'm not sure exactly how the weighting mechanism works, whether it would always be even or whether some kind of priority can be set up, but it's working exactly as we'd expect. In fact, we can demonstrate this even better. Let's look inside the iperf script real quick and run the first one for 10 seconds and server 2 for 30. What I expect to happen is that we start off at 50/50 (actually, let me make sure I'm not mixing these up; I'll put this one at 10), so u1 should stop receiving traffic after about 10 seconds, and then I think we'll see u2 ramp its bandwidth up, because it now has more it can pull from the parent. Let's validate that theory and run iperf again. Here we go: we can see we're at about 50/50.
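The two versions of the micro-script described here might look roughly like this sketch; the server addresses are assumed placeholders, `-t` sets the run duration in seconds, and `&` backgrounds each client so they compete in parallel:

```shell
run_parallel() {
  # Equal 30-second runs: both classes stay active the whole time, so
  # each settles at roughly half the parent's 100mbit.
  iperf -c 10.0.0.11 -t 30 &   # toward u1, class 1:10
  iperf -c 10.0.0.12 -t 30 &   # toward u2, class 1:30
  wait                          # block until both clients finish
}

run_staggered() {
  # u1's run ends after 10 seconds; u2 should then ramp from ~50mbit
  # back toward its 80mbit ceiling as parent bandwidth frees up.
  iperf -c 10.0.0.11 -t 10 &
  iperf -c 10.0.0.12 -t 30 &
  wait
}
```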
I'm expecting this middle pane to quit in 10 seconds, and, there: we can actually see a little bit of a drop (I'm not sure exactly what caused that), but now that the first run got killed, we're back up to 80 megabits per second on u2, because it's still receiving traffic. So we have effectively implemented a pretty cool quality-of-service system here that we could do all kinds of interesting stuff with. It's not just about building a router; it's about simulating weird things in your environment, or handling certain traffic concerns at the level of a single Linux host. There's just a lot of cool stuff you can do here. Overall, that worked pretty well, and, like I mentioned, if you want to take a look at the script I wrote, I'll be sure to post it on my website, linked in the description. All in all, that's tc. I hope you found this video interesting; this is a tool I love talking about, and maybe it gives you some ideas and areas to play around with. If you liked this content, be sure to give it a like, and throw a comment in if you have feedback or ideas; I'd love to hear them. Until next time, I'll talk to you later. See you then!
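For reference, the hierarchy built over the course of this walkthrough comes down to a handful of tc commands. This is only a sketch of what the final script plausibly contains, not the actual script linked in the description: the interface name and the two destination addresses are placeholders, and everything here needs root.

```shell
# Hypothetical consolidation of the shaping setup from the video.
# eth0, 10.0.0.11 (u1), and 10.0.0.12 (u2) are assumed placeholders.
setup_shaping() {
  local iface="eth0"
  # Root HTB qdisc; unmatched traffic falls through to class 1:30.
  tc qdisc add dev "$iface" root handle 1: htb default 30
  # Parent class: the global 100mbit limiter everything borrows from.
  tc class add dev "$iface" parent 1: classid 1:1 htb rate 100mbit
  # Children: guaranteed 5mbit each, able to borrow up to 80mbit.
  tc class add dev "$iface" parent 1:1 classid 1:10 htb rate 5mbit ceil 80mbit
  tc class add dev "$iface" parent 1:1 classid 1:30 htb rate 5mbit ceil 80mbit
  # u32 filters steering each destination into its leaf class.
  tc filter add dev "$iface" parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.11/32 flowid 1:10
  tc filter add dev "$iface" parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.12/32 flowid 1:30
}

teardown_shaping() {
  # The "clean" path: deleting the root qdisc removes the whole tree.
  tc qdisc del dev eth0 root
}
```

Inspecting the result with `tc -s class show dev eth0` shows per-class byte and packet counters, which is a handy way to confirm the filters are steering traffic where you expect.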
Video description
In this video we'll explore using tc (traffic control) to shape traffic on a Linux host. This can enable you to do everything from simulating limited bandwidth to implementing robust quality of service solutions.
Blog post: https://octetz.com/docs/2020/2020-09-16-tc
00:00:00 - Quality of Service
00:05:07 - Traffic Control
00:15:17 - Implementing Shaping