Occasionally the full capacity of a network is not made available to end users. Rate limiting is usually implemented to protect networks or servers, but often it has the opposite effect: rate-limiting algorithms can actually break an otherwise operational network.
Before looking at rate-limiting technology, let’s review how traffic gets onto a network. Some may say that traffic is generated by computers. But, even more basically, traffic is generated by people. Individuals request information, and that information is passed over the network. The amount of traffic generated by an individual depends on what that individual is using the network for.
For example, a user browsing the internet will usually generate more traffic on a fast link than on a slow one. However, an individual who uses a network for business purposes has a predefined amount of traffic that must be passed over the network. A hospital worker must process a fixed number of patients. The number of transactions a bank must perform is based on the number of customers entering its branches. If network traffic is predefined, implementing rate limiting will not reduce the amount of traffic that passes over the network. In fact, rate limiting could actually increase the total amount of traffic on a network.
Let’s explore how we would design a large internal network for an investment company. There are 400 servers and 10,000 workstations, connected through an array of switches and routers.
All servers and workstations have 10/100Mbps NICs. A decision was made to tune the speed of the network cards to limit the load on the network: all workstations were to be set to 10Mbps, and all servers to 100Mbps.
The problem with this approach is that the network would have to work harder with the workstations set to 10Mbps. Understand that most traffic will be moving from the servers to the client workstations, and the daily amount of traffic is fixed. The server sends data to the routers and switches at 100Mbps. The switches and routers then must queue the data while it is being fed to the client at 10Mbps. In this environment the network will be better protected if the workstations are set to 100Mbps rather than 10Mbps.
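The arithmetic above can be sketched in a few lines of Python (the function name is invented for illustration): during a burst, the switch or router queue grows at the difference between the server's send rate and the client's drain rate.

```python
# Sketch: how fast a switch queue grows while a server bursts
# into a slower client link. Illustrative helper, not a real tool.
def queue_growth_rate(server_mbps, client_mbps):
    """Rate at which queued data accumulates, in Mbps (0 if the drain keeps up)."""
    return max(0.0, float(server_mbps) - client_mbps)

# Workstations capped at 10Mbps: the queue grows at 90Mbps during a burst.
print(queue_growth_rate(100, 10))   # 90.0
# Workstations at 100Mbps: the queue never builds up.
print(queue_growth_rate(100, 100))  # 0.0
```

With the clients at full speed, the switches and routers never have to buffer the fixed daily load at all.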
In the previous example, we saw that we could limit the overall speed of a network path simply by setting the NICs to 10Mbps. In some instances, you may want to limit traffic even more. For example, we often find that university dormitories are limited to approximately 200Kbps to the internet, while the remainder of the campus runs faster. Flexible rate limiting of this kind requires algorithms implemented in network hardware. If you limit a university’s dormitories to 200Kbps, you probably do not want to install 200Kbps WAN links everywhere. Instead, you may run 10Mbps LAN connections into routers or other devices and implement an appropriate method of rate limiting there.
However, some algorithms are better than others. Let’s examine the following diagram.
Logically, most network devices can be represented as containing both a queue and a server. In this case, a server is not a server in the usual sense. Consider the server to be similar to a sausage slicer, slicing packets (sausages) one bit (slice) at a time onto the downstream network. The queue is a buffer that holds packets while they wait in line to be serviced. A queue buffer will never fill if there is only one input and the server is as fast as or faster than the incoming data stream. A modem is an example of a device that does not require a queue, because its input and output speeds are identical.
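Under the sausage-slicer model, the time the server spends on one packet is simply its size divided by the link speed. A minimal sketch (the helper name is invented):

```python
# Sketch of the queue-and-server abstraction: the "server" clocks one
# packet at a time onto the downstream link, taking size*8/rate seconds.
def service_time(packet_bytes, link_bps):
    """Seconds the server needs to serialize one packet onto the wire."""
    return packet_bytes * 8 / link_bps

# A 1500-byte packet takes 1.2 ms to serialize onto a 10Mbps link...
print(service_time(1500, 10_000_000))   # 0.0012
# ...but only 0.12 ms on a 100Mbps link.
print(service_time(1500, 100_000_000))  # 0.00012
```

Whenever packets arrive faster than this service time allows them to leave, the difference accumulates in the queue buffer.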
A clean link without media errors will only drop packets at points in the network where queues are full. Queue buffer sizes vary; however, 64Kbytes is a common size. Much larger, and a network may experience queuing delays long enough to cause TCP to retransmit data unnecessarily. Much smaller, and the resulting packet drops can cripple TCP efficiency.
Rate-limiting queues are a method of limiting traffic by reducing the size of the queue buffer. We have measured queue buffers as small as 4Kbytes. As long as combined upstream data rates never exceed the capacity of the server, a rate-limiting queue has no effect. But usually this is not the case. As soon as 4 or 5 packets are sent in unison, the queue is filled and packets are dropped.
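A drop-tail rate-limiting queue of this kind can be sketched as follows, assuming 1500-byte packets and the 4Kbyte buffer mentioned above (the class and attribute names are invented for illustration):

```python
# Sketch of a rate-limiting (drop-tail) queue with a deliberately
# small buffer. Illustrative model, not any vendor's implementation.
class DropTailQueue:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.queued = 0      # bytes currently waiting for the server
        self.dropped = 0     # packets discarded because the buffer was full

    def enqueue(self, packet_bytes):
        if self.queued + packet_bytes > self.capacity:
            self.dropped += 1          # buffer full: packet is discarded
        else:
            self.queued += packet_bytes

q = DropTailQueue(capacity_bytes=4096)
for _ in range(5):                     # a burst of 5 packets arrives at once
    q.enqueue(1500)
print(q.dropped)  # 3 -- only two 1500-byte packets fit in a 4Kbyte buffer
```

Two packets (3,000 bytes) fit; the third would push the buffer past 4,096 bytes, so it and everything behind it in the burst is dropped.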
Dropping packets to limit data rates is a very bad idea. Remember that users create traffic. Simply dropping packets does not make the data go away; eventually it is retransmitted. Rate-limiting queues can therefore actually increase overall traffic, and may drive larger networks to the point of saturation. You may even be able to adjust upstream flows to bypass the effect of rate-limiting queues: by pacing packets, the downstream link can be saturated, and sending many simultaneous streams of data can sometimes fill the downstream network.
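To see why dropping packets can increase total traffic, consider the expected number of transmissions when every dropped packet is eventually retried. This is a simplified model with an assumed drop fraction, and the function name is invented:

```python
# Sketch: dropped packets do not make data go away -- they add retransmissions.
def total_transmissions(packets, drop_fraction):
    """Expected packets actually placed on the network when each attempt
    is dropped with probability drop_fraction and retried until delivered."""
    return packets / (1 - drop_fraction)

# Delivering 1000 packets through a queue that drops 20% of attempts
# puts 1250 packets on the upstream network instead of 1000.
print(total_transmissions(1000, 0.2))  # 1250.0
```

The rate-limiting queue protected its own downstream link, but at the cost of 25% more traffic everywhere upstream of it.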
Rate-limiting queues are found in routers that are implementing Cisco’s CAR feature. We have also detected rate-limiting queues in African and Eastern-European Frame Relay Networks.
It is much better to implement a rate-limiting server, also called a packet shaper. This method relies on spacing packets on the downstream link. Packets are not dropped unnecessarily, and the rate limit cannot be bypassed with upstream traffic shaping. Packet shaping is similar to on-ramp traffic lights that pace cars onto a highway: the cars are still allowed to queue up on the ramp, and the highway is protected from flooding. A rate-limiting queue, by contrast, is like a tank that blows up cars when too many are lined up on the ramp.
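A rate-limiting server can be sketched as a pacer that computes a release time for each packet instead of dropping it. This is a simplified model, not any vendor's implementation, and the helper name is invented:

```python
# Sketch of a rate-limiting server (packet shaper): packets are held
# in the queue and released so the downstream rate is never exceeded.
def pace(packet_sizes_bytes, rate_bps, start=0.0):
    """Return release times that space packets to at most rate_bps."""
    times, t = [], start
    for size in packet_sizes_bytes:
        times.append(t)
        t += size * 8 / rate_bps    # transmission time of this packet
    return times

# Five 1500-byte packets shaped to 200Kbps: each packet is released
# 60 ms after the previous one, and none are dropped.
print(pace([1500] * 5, 200_000))
```

Nothing is discarded; the burst is simply stretched out in time, exactly like cars metered onto the highway one at a time.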