Amdahl's law

Amdahl's law, named after computer architect Gene Amdahl, states that if F is the fraction of a calculation that is sequential (that is, cannot be parallelised), and (1 - F) is the fraction that can be parallelised, then the maximum speedup achievable by using P processors is 1 / (F + (1 - F)/P). In the limit, as P tends to infinity, the maximum speedup tends to 1/F. In practice, the price/performance ratio falls rapidly as P is increased once (1 - F)/P becomes small compared with F.
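The formula is simple to evaluate directly. The following is a minimal Python sketch, assuming an illustrative function name (amdahl_speedup) that is not taken from the source or any library:

  def amdahl_speedup(f, p):
      # Maximum speedup with p processors when fraction f of the work is sequential.
      return 1.0 / (f + (1.0 - f) / p)

  # Example: F = 10% sequential, as in the example below.
  for p in (1, 2, 4, 16, 256, 65536):
      print(p, round(amdahl_speedup(0.10, p), 2))
  print("Limit as P tends to infinity:", 1.0 / 0.10)

Running the loop shows the speedup climbing towards, but never reaching, the 1/F limit of 10 as P grows.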

As an example, if F is only 10%, the problem can be sped up by a factor of at most 10, no matter how large a value of P is used. For this reason, parallel processing is only useful either for small numbers of processors or for problems with very low values of F: so-called embarrassingly parallel problems.

A great part of the craft of parallel programming consists of attempting to reduce F to the smallest possible value.

References:

  • Gene Amdahl, "Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities", AFIPS Conference Proceedings, vol. 30, pp. 483-485, 1967.
  • Reevaluating Amdahl's Law (http://www.scl.ameslab.gov/Publications/AmdahlsLaw/Amdahls)


