
Network protocol design principles

This entry describes the design principles applied when specifying network protocols.


Protocol Layering

Protocols are usually layered. For example, one layer might describe how to encode text (with ASCII, say), while another describes how to transfer mail messages (with the Internet's Simple Mail Transfer Protocol, for example), another corrects errors (with the Internet's Transmission Control Protocol), another handles addressing (say, with IP, the Internet Protocol), another handles error detection on each link (with the Internet's Point-to-Point Protocol), and another handles the physical form of the bits (with a V.42 modem, for example).

Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. Layering also permits familiar protocols to be adapted to unusual circumstances. For example, the mail protocol above can be adapted to send messages to aircraft simply by replacing the V.42 modem protocol with the Inmarsat LAPD-based data protocol used by the international maritime satellites.
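To make the idea concrete, here is a minimal sketch of layering in Python. The class names and the two-byte length header are invented for illustration; the point is only that the bottom link layer can be swapped (modem for satellite) without touching the layers above it.

```python
# A minimal sketch of protocol layering (illustrative only, not a real stack).
# Each layer wraps the data of the layer above it on the way down and unwraps
# it on the way up; swapping the bottom link layer leaves the others untouched.

class TextLayer:
    def send(self, text):                 # top layer: encode text as bytes
        return text.encode("ascii")
    def receive(self, data):
        return data.decode("ascii")

class TransportLayer:
    def send(self, payload):              # add a simple two-byte length header
        return len(payload).to_bytes(2, "big") + payload
    def receive(self, frame):
        length = int.from_bytes(frame[:2], "big")
        return frame[2:2 + length]

class ModemLink:
    def send(self, frame):                # stand-in for a V.42-style modem link
        return frame                      # (a real link would modulate the bits)
    def receive(self, bits):
        return bits

class SatelliteLink(ModemLink):
    pass                                  # drop-in replacement for the modem link

def transmit(message, link):
    layers = [TextLayer(), TransportLayer(), link]
    data = message
    for layer in layers:                  # wrap downward through the stack
        data = layer.send(data)
    for layer in reversed(layers):        # unwrap upward on the receiving side
        data = layer.receive(data)
    return data

print(transmit("hello", ModemLink()))     # hello
print(transmit("hello", SatelliteLink())) # same result over a different link
```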

Error Detection and Correction

It is a truism that communication media are always faulty. The conventional measure of quality is the bit error rate: the number of failed bits per bit transmitted. This has the useful property of being a dimensionless figure of merit that can be compared across any speed or type of communication media.

In telephony, error rates of 1×10^-4 bits per bit render a line faulty (such errors interfere with telephone conversations), while rates of 1×10^-5 bits per bit or worse are audible and should be dealt with by routine maintenance.
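As a rough worked example of what these rates mean in practice (the 64 kbit/s line rate below is chosen only because it is a single digital telephony channel):

```python
# Rough illustration of what a given bit error rate means in practice.
# The 64 kbit/s line rate is only an example (one digital telephony channel).

line_rate = 64_000                        # bits per second
for ber in (1e-4, 1e-5, 1e-9):
    errors_per_second = ber * line_rate
    print(f"BER {ber:.0e}: about {errors_per_second:g} errored bits per second")

# BER 1e-04: about 6.4 errored bits per second
# BER 1e-05: about 0.64 errored bits per second
# BER 1e-09: about 6.4e-05 errored bits per second
```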

Communication systems correct errors by selectively resending bad parts of a message. For example, in TCP (the Internet's Transmission Control Protocol), messages are divided into packets, each of which carries a checksum. When a checksum is bad, the packet is discarded. When a packet is lost, the receiver acknowledges all of the packets up to, but not including, the lost packet. Eventually the sender sees that too much time has elapsed without an acknowledgement, so it resends all of the packets that have not been acknowledged. At the same time, the sender backs off its sending rate, in case the packet loss was caused by saturation of the path between sender and receiver. (Note that this is an oversimplification; see TCP for more detail.)
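The sketch below illustrates these mechanisms (checksums, cumulative acknowledgements, retransmission when no progress is made, and rate backoff) in heavily simplified form. It is not real TCP: the "network" is an in-memory list, and the chunk size and backoff rule are invented for illustration.

```python
import zlib

# A heavily simplified sketch of checksums, cumulative acknowledgements,
# retransmission, and rate backoff. This is not real TCP: the "network"
# is an in-memory list and the backoff rule is invented for illustration.

def make_packet(seq, payload):
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def receive(packets, expected):
    """Acknowledge packets cumulatively: stop at the first bad or missing one."""
    for pkt in sorted(packets, key=lambda p: p["seq"]):
        if pkt["seq"] != expected or zlib.crc32(pkt["payload"]) != pkt["checksum"]:
            break
        expected += 1
    return expected                       # everything before this is acknowledged

def send(message, chunk_size=4, lose=lambda seq, attempt: False):
    chunks = [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]
    acked, rate, attempt = 0, 8.0, 0
    while acked < len(chunks):
        attempt += 1
        # Resend every unacknowledged chunk; the lossy network may drop some.
        delivered = [make_packet(seq, chunks[seq])
                     for seq in range(acked, len(chunks))
                     if not lose(seq, attempt)]
        new_acked = receive(delivered, acked)
        if new_acked == acked:
            rate /= 2                     # no progress: assume congestion, back off
        acked = new_acked
    return attempt, rate

# Drop the first packet on the first attempt only; the sender retransmits it.
attempts, final_rate = send(b"protocol design principles",
                            lose=lambda seq, attempt: seq == 0 and attempt == 1)
print(attempts, final_rate)               # 2 attempts, rate backed off to 4.0
```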

In general, the performance of TCP is severely degraded in conditions of high packet loss (more than 0.1%), due to the need to resend packets repeatedly. For this reason, TCP/IP connections are typically either run over highly reliable fiber networks, or over a lower-level protocol with added error-detection and correction features (such as modem links with ARQ). These connections typically have uncorrected bit error rates of 1×10^-9 to 1×10^-12, ensuring high TCP/IP performance.
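The effect of the residual bit error rate can be estimated directly: if bit errors are independent, a packet of N bits arrives uncorrupted with probability (1 - p)^N, where p is the bit error rate. A short calculation, with a 1,500-byte packet size chosen only for illustration:

```python
# Probability that a 1,500-byte (12,000-bit) packet arrives with no bit errors,
# assuming independent bit errors. The packet size is only illustrative.

packet_bits = 12_000
for ber in (1e-4, 1e-6, 1e-9, 1e-12):
    p_intact = (1 - ber) ** packet_bits
    print(f"BER {ber:.0e}: {p_intact:.6f} of packets arrive intact")

# BER 1e-04: about 0.30  (most packets need resending)
# BER 1e-06: about 0.988
# BER 1e-09: about 0.999988
# BER 1e-12: essentially 1.0
```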

Resiliency

Another form of network failure is topological failure, in which a communications link is cut. Most modern communication protocols periodically send messages to test the link. On T1 telephone lines, for example, a framing bit is sent with every 193-bit frame, and when framing synchronization ("sync") is lost, fail-safe mechanisms reroute the signals around the failing equipment.

In packet-switched networks, the equivalent function is performed using router update messages, which detect loss of connectivity so that traffic can be routed around the failure.
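A minimal sketch of this keepalive idea follows; the probe interval and the miss threshold are invented for illustration, standing in for T1 framing checks or routing-protocol update timers.

```python
import time

# A minimal sketch of link monitoring by periodic test messages ("keepalives").
# The interval and failure threshold below are invented for illustration.

KEEPALIVE_INTERVAL = 1.0     # seconds between test messages
DEAD_AFTER = 3               # declare the link down after this many missed replies

def monitor_link(send_probe, reroute):
    misses = 0
    while True:
        if send_probe():                  # returns True if the far end answered
            misses = 0
        else:
            misses += 1
            if misses >= DEAD_AFTER:
                reroute()                 # fail over to a backup path
                return
        time.sleep(KEEPALIVE_INTERVAL)

# Simulated link that stops answering after the third probe.
answers = iter([True, True, True, False, False, False])
monitor_link(send_probe=lambda: next(answers, False),
             reroute=lambda: print("link declared down; rerouting traffic"))
```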
