Throughput

Throughput, in the theory of constraints, is the rate at which a system produces its outputs.

Throughput in IT is the rate at which a computer or network moves data from end to end. It is therefore a good measure of absolute performance, which is why internet connections are commonly rated by how many bits per second they can carry.
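To make the unit concrete, here is a minimal sketch (in Python, used purely for illustration) that converts a transfer into bits per second; the 500 MB and 40-second figures are made-up example values, not measurements of any real connection.

    def throughput_bits_per_second(bytes_transferred: int, seconds: float) -> float:
        """Throughput is simply the amount of data moved divided by the time it took."""
        return bytes_transferred * 8 / seconds

    # Example: 500 MB downloaded in 40 seconds works out to 100 Mbit/s.
    print(f"{throughput_bits_per_second(500_000_000, 40.0) / 1e6:.0f} Mbit/s")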

However, throughput is a poor measure of perceived performance, which is based mostly on how quickly the system responds to you. Responsiveness has far less to do with throughput than with latency. As the classic example goes, a station wagon full of magnetic tape has excellent throughput and terrible latency: it may take a week to deliver data from California to New York, but it carries so much that its throughput beats broadband. Yet a user who has to wait a week to see a web page will insist that their low-throughput dial-up connection felt much faster!
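The arithmetic behind the example is easy to check. The tape count, tape capacity, and broadband speed below are illustrative assumptions rather than figures from the article, but any plausible numbers give the same shape of result.

    WEEK_SECONDS = 7 * 24 * 3600

    tapes = 1_000                    # tapes in the wagon (assumed)
    tape_capacity_bytes = 200e9      # 200 GB per tape (assumed)
    broadband_bps = 100e6            # 100 Mbit/s broadband link (assumed)

    # Data delivered over one week, expressed as bits per second.
    wagon_throughput_bps = tapes * tape_capacity_bytes * 8 / WEEK_SECONDS

    print(f"Wagon throughput:     {wagon_throughput_bps / 1e9:.1f} Gbit/s")  # ~2.6 Gbit/s
    print(f"Broadband throughput: {broadband_bps / 1e6:.0f} Mbit/s")         # 100 Mbit/s
    # The wagon wins on throughput by a wide margin, yet its "latency" is a week.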

Normally throughput and latency are opposing goals. To improve latency, the computer typically has to check more often whether you are trying to interact with it, and that checking overhead slows bulk work down. There is, however, one very common exception to this rule: network protocols and programs tend to synchronize both ends regularly, and if those synchronizations are slow, throughput can suffer horribly.
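One way to see the exception is to assume (as a hypothetical, not something the article specifies) a stop-and-wait style exchange in which the sender transmits a block and then waits a full round trip for an acknowledgement before sending the next. Throughput is then capped at roughly the block size divided by the round-trip time, no matter how fast the link itself is.

    def stop_and_wait_throughput_bps(block_bytes: int, rtt_seconds: float) -> float:
        """Upper bound on throughput when every block waits for an acknowledgement."""
        return block_bytes * 8 / rtt_seconds

    # 64 KB blocks acknowledged over a 100 ms round trip cap the transfer
    # at roughly 5.2 Mbit/s, even on a multi-gigabit link.
    print(f"{stop_and_wait_throughput_bps(65_536, 0.100) / 1e6:.1f} Mbit/s")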

