Congestion
- When the number of packets traveling through a network approaches or exceeds its handling capacity, congestion occurs.
- The study of congestion control is based on queuing theory.
- Each node on the network contains a queue.
- Arriving packets are stored in an input buffer.
- Eventually the processor examines these packets.
- To determine the destination, it parses the packet header information.
- It then places the packet in the proper output queue.
- Packets are then transmitted from the output queue.
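The per-node path above (input buffer, header parse, output queue) can be sketched in a few lines. This is only an illustrative model; `Packet`, `route_table`, and the interface names are invented for the example, not part of any real router API.

```python
import collections

# A "packet" is just a destination plus payload in this sketch.
Packet = collections.namedtuple("Packet", ["dest", "payload"])

# Hypothetical forwarding table: destination prefix -> output interface.
route_table = {"10.0.0.0/8": "if0", "192.168.0.0/16": "if1"}

def forward(input_buffer, output_queues):
    """Move each arriving packet to the output queue for its destination."""
    while input_buffer:
        pkt = input_buffer.popleft()          # processor examines the packet
        iface = route_table.get(pkt.dest)     # parse destination from the header
        if iface is not None:
            output_queues[iface].append(pkt)  # place in the proper output queue

# Two packets arrive and are sorted onto their output queues.
input_buffer = collections.deque(
    [Packet("10.0.0.0/8", b"a"), Packet("192.168.0.0/16", b"b")]
)
output_queues = {"if0": collections.deque(), "if1": collections.deque()}
forward(input_buffer, output_queues)
```

Transmission from each output queue would then drain these deques at the link rate.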
- Queues are finite.
- If the incoming rate exceeds the outgoing rate, the queue grows without bound and a problem is inevitable.
- As the arrival rate approaches the transmission rate, problems begin to develop.
- We are concerned with queuing delay and throughput.
- When the queues become full there are two choices:
- Drop excess packets
- Tell the source to slow down.
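The first choice, dropping excess packets, is often called tail drop. A minimal sketch, with an assumed queue limit and illustrative names:

```python
from collections import deque

QUEUE_LIMIT = 3  # assumed finite buffer size for the example

def enqueue(queue, packet, limit=QUEUE_LIMIT):
    """Return True if the packet was queued, False if it was dropped."""
    if len(queue) >= limit:
        return False          # queue full: drop the excess packet (tail drop)
    queue.append(packet)
    return True

q = deque()
results = [enqueue(q, n) for n in range(5)]
# With a limit of 3, the first three packets are accepted and the rest dropped.
```

The second choice, telling the source to slow down, requires explicit feedback such as the ICMP Source Quench messages mentioned below.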
- Figure 13.3 assumes infinite buffers.
- Note that as we approach 100% load, the delay grows without bound.
- This is because packets must wait longer and longer in the output queue.
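Queuing theory makes this concrete. For a single M/M/1 queue (an assumption, one standard model, not a claim about the figure's exact curve), mean time in system is T = 1/(mu - lam), which blows up as utilization rho = lam/mu approaches 1:

```python
def mean_delay(lam, mu):
    """Mean time in an M/M/1 system: T = 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 100.0  # assumed service rate, packets/sec
for rho in (0.5, 0.9, 0.99):
    # Delay climbs steeply as load nears 100%.
    print(f"rho={rho}: delay={mean_delay(rho * mu, mu) * 1000:.1f} ms")
```

At 50% load the delay here is 20 ms; at 99% load it is a full second, an order-of-magnitude jump from a small increase in load.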
- But we don't have infinite buffers.
- In an actual network other factors come into play
- As delay increases, retransmission becomes a factor as packets and ACKs are lost.
- Messages about delays begin to traverse the network, and route costs change.
- Attempts to route around congested nodes increase the time spent in routing algorithms.
- ICMP Source Quench messages (among others) begin flowing, adding to the traffic.
- Other nodes begin experiencing queues filling as well.
- Figure 13.4 has a real-world look at congestion.
- Note that after point B in this figure, even delivered packets are assumed to be lost, since the application has most likely timed out.