Network congestion avoidance

Network congestion avoidance is a process used in networks to avoid network congestion. The fundamental problem is that all network resources are limited, including router processing time and link bandwidth. Unless there is some compensating process, users could easily increase their demands to the point where all network resources are consumed, making the network unusable (humorously called a "notwork"). Implementations of connection-oriented flows, such as the widely used TCP protocol, generally measure packet errors, losses, or delays, and adjust the transmit rate accordingly. There are many different network congestion avoidance processes, since a number of different trade-offs are available.
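The measure-and-adjust behaviour described above can be sketched as additive-increase/multiplicative-decrease (AIMD), the scheme TCP's congestion avoidance is based on. The function name and parameter values below are illustrative, not a real TCP implementation:

```python
# Hypothetical sketch of AIMD: the sender's congestion window (cwnd)
# grows slowly while transmission succeeds and is cut sharply on loss.

def aimd_step(cwnd, loss_detected, increase=1.0, decrease_factor=0.5, min_cwnd=1.0):
    """Return the next congestion window after one round trip."""
    if loss_detected:
        # Multiplicative decrease: back off sharply when congestion is signalled.
        return max(min_cwnd, cwnd * decrease_factor)
    # Additive increase: probe gently for spare bandwidth.
    return cwnd + increase

cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
print(cwnd)  # 10 -> 11 -> 12 -> 6 -> 7.0
```

The sharp cut on loss is what makes many synchronized flows back off together, which is the global-synchronization problem discussed next.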


TCP/IP congestion avoidance

Problems occur when many concurrent TCP flows experience tail drops in a port's queue buffer. Then TCP's automatic congestion avoidance is not enough. All flows that experience the tail drop back off and restart at the same moment - this is called TCP global synchronization.

RED

One solution is to use RED (Random Early Detection) on a network equipment's port queue buffer. On ports with more than one queue buffer, WRED (Weighted Random Early Detection) can be used if available. RED signals congestion to sender and receiver indirectly, by dropping some packets, e.g. when the average queue length exceeds 50% of the buffer, and dropping progressively more packets as the average queue length approaches 100%. The average queue length is computed over intervals of about one second.
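The RED drop decision can be sketched as follows. This uses the classic linear probability ramp between two thresholds (the thresholds, `max_p`, and the function name are illustrative; the ramp shape is a tunable detail of the algorithm):

```python
import random

def red_drop(avg_queue_fill, min_th=0.5, max_th=1.0, max_p=0.1):
    """Decide whether to drop a packet, given the average queue fill (0.0-1.0).

    Below min_th: never drop.  Between min_th and max_th the drop
    probability rises linearly toward max_p.  At or above max_th: always drop.
    Illustrative thresholds; real deployments tune these per queue.
    """
    if avg_queue_fill < min_th:
        return False
    if avg_queue_fill >= max_th:
        return True
    p = max_p * (avg_queue_fill - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops are spread randomly over time rather than hitting every flow at once in a tail drop, individual flows back off at different moments and global synchronization is avoided.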

IP ECN

Another approach is to use IP ECN (Explicit Congestion Notification). ECN is used only when both hosts signal that they support it. With this method, an ECN bit is set in selected packet headers to signal congestion explicitly. This is better in some respects than the indirect congestion notification that the RED/WRED algorithm performs by dropping packets, but note that it requires explicit support by both hosts to be effective.
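A toy sketch of the mark-instead-of-drop idea. The two-bit ECN field codepoints follow RFC 3168 (Not-ECT 00, ECT(1) 01, ECT(0) 10, CE 11); the packet representation and function name are invented for illustration:

```python
# Hypothetical router-side sketch: when congestion is detected, mark
# ECN-capable packets with CE ("Congestion Experienced") instead of
# dropping them; non-ECN traffic still has to be dropped.

ECT0 = 0b10  # ECN-Capable Transport (0)
ECT1 = 0b01  # ECN-Capable Transport (1)
CE   = 0b11  # Congestion Experienced

def handle_congestion(packet):
    """Mark ECN-capable packets; drop (return None for) the rest."""
    if packet.get("ecn") in (ECT0, ECT1):  # endpoints negotiated ECN
        packet["ecn"] = CE                 # explicit congestion signal
        return packet                      # packet is forwarded, not lost
    return None                            # non-ECN packet is dropped

print(handle_congestion({"ecn": 0b10}))  # {'ecn': 3}
print(handle_congestion({"ecn": 0b00}))  # None
```

The receiver echoes the CE mark back to the sender, which then reduces its rate just as it would after a loss, but without the retransmission cost.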

Flow-based RED/WRED

Some network equipment has ports that can track and measure each flow (flow-based RED/WRED) and is thereby able to signal flows that take too much bandwidth, according to some QoS policy. Such a policy could divide the bandwidth among all flows by some criterion.
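One common criterion for dividing link bandwidth among flows is max-min fairness: flows demanding less than an equal share keep their demand, and the leftover is re-split among the rest. This helper is a hypothetical sketch, not vendor code:

```python
def max_min_fair_share(link_capacity, demands):
    """Divide link_capacity among flows by max-min fairness.

    demands maps flow name -> requested bandwidth.  Flows below their
    fair share are fully satisfied; remaining capacity is re-split
    equally among the still-unsatisfied flows.
    """
    remaining = link_capacity
    alloc = {}
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        satisfied = {f: d for f, d in pending.items() if d <= share}
        if not satisfied:
            # Everyone left wants more than the fair share: split equally.
            for f in pending:
                alloc[f] = share
            break
        for f, d in satisfied.items():
            alloc[f] = d
            remaining -= d
            del pending[f]
    return alloc

print(max_min_fair_share(10, {"a": 2, "b": 4, "c": 10}))
# {'a': 2, 'b': 4, 'c': 4.0}
```

Flow-based RED/WRED can then drop or mark preferentially in flows exceeding their computed share, instead of penalizing all flows alike.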

Benefits of Active Queue Management (RED, WRED, ECN)

The IETF's RFC 2309, Recommendations on Queue Management and Congestion Avoidance in the Internet (April 1998) (http://rfc.sunsite.dk/rfc/rfc2309), states that:

  • Fewer packets will be dropped with Active Queue Management (AQM).
  • Link utilization will increase, because fewer TCP global synchronization events will occur.
  • By keeping the average queue size small, queue management reduces the delay and jitter seen by flows.
  • Connection bandwidth will be shared more equally among connection-oriented flows, even without flow-based RED/WRED.
