Analysis of algorithms

To analyze an algorithm is to determine the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length, so the efficiency of an algorithm is usually stated as a function relating the length of the input to the number of steps (time) or storage locations (space) required.
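
As a small illustration (written in Python; the function name and test data are hypothetical, chosen only for this example), the comparisons performed by a linear search can be counted explicitly. In the worst case, when the target is absent, the count equals the input length n, so the cost function is roughly T(n) = n comparisons.

    def linear_search_steps(items, target):
        """Return (index, steps), where steps counts the comparisons made."""
        steps = 0
        for i, value in enumerate(items):
            steps += 1
            if value == target:
                return i, steps
        return -1, steps

    # Worst case: the target is absent and every element is compared once.
    print(linear_search_steps([3, 1, 4, 1, 5, 9], 7))   # prints (-1, 6)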

Big O notation, omega notation and theta notation are used to state such bounds in an asymptotic sense. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size of its input, or in O(log(n)), colloquially "in logarithmic time". Asymptotic measures are usually stated because different implementations of the same algorithm may differ in efficiency. However, by the principle of invariance, the efficiencies of any two implementations of a given algorithm are related by a constant multiplicative factor, called the hidden constant.
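
For instance, one common way to write binary search is sketched below (in Python, with illustrative names). Each iteration halves the remaining search range, which is why the number of iterations grows only logarithmically with the input length.

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if it is absent."""
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2          # one lookup per iteration
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                low = mid + 1                # discard the lower half
            else:
                high = mid - 1               # discard the upper half
        return -1

    # The search range shrinks by half each iteration, so the number of
    # iterations is O(log n): doubling the input adds roughly one step.
    print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3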

Exact (rather than asymptotic) measures of efficiency can sometimes be computed, but they require additional details about the particular implementation of the algorithm. For example, if the sorted set to which we apply binary search has n elements, and we can guarantee that a single iteration takes constant time, then at most log2(n) + 1 steps (each consisting of a lookup at a specific position in the set) are needed to return an answer.
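
The following sketch (a rough model, using Python's standard math module; the helper name is made up for this example) simulates the repeated halving of the search range and checks that the worst-case number of lookups is floor(log2(n)) + 1, consistent with the bound above.

    import math

    def max_binary_search_lookups(n):
        """Worst-case number of lookups for binary search over n sorted elements."""
        steps = 0
        remaining = n
        while remaining > 0:
            steps += 1               # one lookup...
            remaining //= 2          # ...halves the candidate range
        return steps

    for n in (1, 2, 10, 1000, 10**6):
        assert max_binary_search_lookups(n) == math.floor(math.log2(n)) + 1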

Exact measures of efficiency are useful to the people who actually implement and use algorithms, because they are more precise and thus let them know how much time they can expect execution to take. To some people (e.g. game programmers), the hidden constant can make all the difference between success and failure.

Note that time efficiency depends on what we define to be a step. For the analysis to make sense, the time required to perform a step must be bounded above by a constant. One must be careful here; for instance, some analyses count the addition of two numbers as one step. This assumption may not be warranted in certain contexts: if the numbers involved in a computation may become arbitrarily large, addition can no longer be assumed to require constant time (compare the time needed to add two 2-digit integers with the time needed to add two 1000-digit integers using pen and paper).
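
A rough, machine-dependent way to observe this is to time the addition of small and very large integers; Python's arbitrary-precision integers make the growth visible (the specific sizes and the timeit setup below are illustrative assumptions, and the measured ratio will vary between machines).

    import timeit

    small_a, small_b = 42, 57                 # 2-digit integers
    large_a = 10 ** 100000 + 12345            # integers with about 100,001 digits
    large_b = 10 ** 100000 + 67890

    t_small = timeit.timeit(lambda: small_a + small_b, number=10000)
    t_large = timeit.timeit(lambda: large_a + large_b, number=10000)

    # Adding d-digit integers takes time roughly proportional to d, so
    # t_large is expected to be many times larger than t_small.
    print(t_small, t_large)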

Complexity theory provides lower bounds on the resources needed by any algorithm that solves a given computational problem. Once the complexity of a problem is known, it is hopeless to search for ways to solve the problem more efficiently than this lower bound allows.


