
2.4.- TBF queuing discipline

There is an old story in my country that goes more or less like this: there was once a peasant who bred mules to sell. One day a farmer who wanted to buy some mules for his farm asked the peasant: "How many mules do you have?" "I don't know," the breeder answered. The farmer then asked: "What's the price of each one?" "Just one buck apiece," the peasant answered. "Okay," the farmer said, "here you have 100 bucks; now let me take 100 of them." "No, no," the peasant answered, very worried, "I don't like doing business that way. Just put one buck in my hat, and for each buck you put in it, I will take a mule out of the corral for you. It's very simple: buck that drops, mule that crosses."
The TBF queuing discipline works very much like this funny story. You are interested in controlling the maximum rate at which packets are dispatched from a queue; so you create a buffer (bucket) that is constantly filled with some virtual pieces of information called tokens, at a specific rate (the token rate).
 
As soon as there are tokens in the bucket and packets in the queue, you pick up a token and let a packet go. To map the analogy: the bucket is the peasant's hat, the tokens are the bucks, the queue is the corral and the packets are the peasant's mules. Packets are dequeued only as fast as the token bucket is refilled, and the constant rate at which you fill the bucket with tokens is therefore the maximum rate at which you want packets to leave the queue.
 
Of course, the relationship cannot be one-to-one, that is, one token per packet, because packets are of different sizes most of the time. But you can create a one-token, one-byte relationship. Let's say, for example, that you want a maximum rate of, say, 500 kbps. To sustain this you have to dispatch 62.5 KB/s, that is, 64000 bytes per second. Okay, so we are going to fill our bucket at a rate of 64000 tokens per second. Let's suppose now that we check the head of the packet queue and find a 670-byte packet (the length is in the packet header). Then we take 670 tokens from the bucket and let the packet go. We will try to keep our packet queue empty: as soon as we have a packet and enough tokens to let it go, we will do so.
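
As a preview of what this looks like with tc (the parameters are explained later in this section), a minimal sketch that caps an interface at about 500 kbit/s could be the following; the interface name eth0 and the burst and latency values are just assumptions for illustration:

  # shape egress traffic of eth0 to roughly 500 kbit/s (roughly 64000 bytes worth of tokens per second);
  # the burst and latency values here are arbitrary, chosen only for this example
  tc qdisc add dev eth0 root tbf rate 500kbit burst 10kb latency 50ms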
 
Let's now see a brief summary of how the people from Lartc [7] explain this in a more technical way:
 
The Token Bucket Filter (TBF) is a simple queue that only passes packets arriving at a rate which does not exceed some administratively set rate, with the possibility to allow short bursts in excess of this rate.
 
"TBF is very precise, network- and processor friendly. It should be your first choice if you simple want to slow an interface down!"
 
The TBF implementation consists of a buffer (bucket), constantly filled by some virtual pieces of information called tokens, at a specific rate (token rate). The most important parameter of the bucket is its size, that is the number of tokens it can store.
 
Each arriving token collects one incoming data packet from the data queue and is then deleted from the bucket. Associating this algorithm with the two flows -- token and data, gives us three possible scenarios:
 
  • The data arrives in TBF at a rate that is equal to the rate of incoming tokens. In this case each incoming packet has its matching token and passes the queue without delay.
  • The data arrives in TBF at a rate that is smaller than the token rate. Only a part of the tokens are deleted at output of each data packet that is sent out the queue, so the tokens accumulate, up to the bucket size. The unused tokens can then be used to send data at a speed that is exceeding the standard token rate, in case short data bursts occur.
  • The data arrives in TBF at a rate bigger than the token rate. This means that the bucket will soon be devoid of tokens, which causes the TBF to throttle itself for a while. This is called an "overlimit situation". If packets keep coming in, packets will start to get dropped.
The last scenario is very important, because it allows the administrator to shape the bandwidth available to data that is passing the filter.
The accumulation of tokens allows a short burst of overlimit data to be still passed without loss, but any lasting overload will cause packets to be constantly delayed, and then dropped. Please note that in the actual implementation, tokens correspond to bytes, not packets.
Reading the above, observe that the second scenario (when some tokens are saved) is the one that permits short data bursts, until the accumulated tokens are totally consumed again.
Next, a description of the parameters is needed before looking at how to use tc to configure this queue; reading from Lartc [7] and the TBF man page we have:
Even though you will probably not need to change them, tbf has some knobs available. First the parameters that are always available:
limit or latency (size, in bytes, of the packet queue)
  Limit is the number of bytes that can be queued waiting for tokens to become available. You can also specify this the other way around by setting the latency parameter, which specifies the maximum amount of time a packet can sit in the TBF. The latter calculation takes into account the size of the bucket, the rate and possibly the peakrate (if set).
burst/buffer/maxburst (size, in bytes, of the token bucket)
  Size of the bucket, in bytes. This is the maximum amount of bytes that tokens can be available for instantaneously. In general, larger shaping rates require a larger buffer. For 10mbit/s on Intel, you need at least 10kbyte buffer if you want to reach your configured rate! If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket.
mpu
  A zero-sized packet does not use zero bandwidth. For ethernet, no packet uses less than 64 bytes. The Minimum Packet Unit determines the minimal token usage for a packet.
rate
  The speedknob. See remarks above about limits!
If the bucket contains tokens and is allowed to empty, by default it does so at infinite speed. If this is unacceptable, use the following parameters:
peakrate
  If tokens are available, and packets arrive, they are sent out immediately by default, at "lightspeed" so to speak. That may not be what you want, especially if you have a large bucket. The peakrate can be used to specify how quickly the bucket is allowed to be depleted. If doing everything by the book, this is achieved by releasing a packet, then waiting just long enough, and releasing the next. We calculate our waits so we send just at peakrate. However, due to the default 10ms timer resolution of Unix, with 10,000-bit average packets, we are limited to 1mbit/s of peakrate!
mtu/minburst
  The 1mbit/s peakrate is not very useful if your regular rate is more than that. A higher peakrate is possible by sending out more packets per timertick, which effectively means that we create a second bucket! This second bucket defaults to a single packet, which is not a bucket at all. To calculate the maximum possible peakrate, multiply the configured mtu by 100 (or more correctly, HZ, which is 100 on intel, 1024 on Alpha).
Let's look now at a simple configuration taken from the TBF man page:
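
A minimal sketch of that configuration, assuming an eth0 interface and a minburst of 1540 bytes (roughly an Ethernet MTU; the exact value depends on your interface), would be:

  # sustained rate 0.5mbit/s, 5kb bucket, at most 70ms of queuing latency,
  # peakrate 1mbit/s enforced with a minburst of one MTU-sized packet
  tc qdisc add dev eth0 root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540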

A little bit more complicated than configuring a FIFO or PRIO qdisc, isn't it? But not too much. The first part of the command (up to and including 'tbf') is already well known. The maximum sustained rate is set to 0.5 mbps, with a 5 kilobyte buffer and a peak rate of 1.0 mbps for short bursts of packets. The packet queue size is calculated so that a packet will suffer at most 70 ms of latency in the queue. The minburst parameter is set to the MTU (maximum transmission unit) of the interface.
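
If you want to verify that the queue behaves as expected, the kernel statistics can be inspected at any time (again assuming eth0):

  # show the attached qdisc with its byte/packet counters, drops and overlimits
  tc -s qdisc show dev eth0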
 
Finally, to close this section: do you remember the PRIO qdisc configuration we did somewhere above? Let's replace the class 1:1 FIFO qdisc with a TBF qdisc to protect the class 1:2 and class 1:3 flows from being starved by class 1:1 flows. The new diagram will be:
 

 
And the tc commands to build this scheme will be:
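
A sketch of those commands, assuming the same eth0 interface, a PRIO qdisc numbered 1: as in the earlier example, and the TBF parameters taken from the man page configuration above, could be:

  # root PRIO qdisc with handle 1:; it creates the classes 1:1, 1:2 and 1:3
  tc qdisc add dev eth0 root handle 1: prio
  # replace the default FIFO of class 1:1 with a TBF qdisc numbered 10:
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540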
 

 
Observe that the last command is very similar to the TBF configuration example, but this time the word 'root' is omitted and replaced by the words 'parent 1:1 handle 10:'. Now our TBF qdisc is not a root qdisc; instead, it is a child of class 1:1 (class 1:1's own queue) and we number it 10:0. This is a simple example of how, using tc, we assign a qdisc to a class to create a hierarchy.
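
To see the resulting hierarchy from the shell, both the qdiscs and the classes can be listed (eth0 is again an assumed interface name):

  # list qdiscs and classes attached to eth0; the TBF should appear under class 1:1
  tc qdisc show dev eth0
  tc class show dev eth0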
Well, we are done with the TBF queuing discipline. To continue, we will now study the SFQ queuing discipline.