The speed of an algorithm can be measured in various ways. The most common method uses time complexity to determine the Big-O classification of an algorithm. Often, it is possible to make an algorithm faster at the expense of space; this is the case whenever you cache the result of an expensive calculation rather than recalculating it on demand. This technique is so common that languages often add special features to support it, such as C++'s mutable keyword.
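As a minimal sketch of that kind of caching in C++ (the Circle class, its members, and its trivially cheap "expensive" calculation are all hypothetical, chosen only for illustration): the mutable keyword lets the cache fields be written even from a const member function, since the cache is not part of the object's logical state.

    // Hypothetical example: caching the result of an expensive calculation.
    class Circle {
    public:
        explicit Circle(double radius) : radius_(radius) {}

        // A const method that may still update the mutable cache fields.
        double area() const {
            if (!cached_) {
                // Stand-in for genuinely expensive work.
                area_ = 3.141592653589793 * radius_ * radius_;
                cached_ = true;
            }
            return area_;  // later calls reuse the stored result
        }

    private:
        double radius_;
        mutable double area_ = 0.0;   // extra space spent to save time
        mutable bool cached_ = false;
    };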
The space of an algorithm is actually two separate but related measurements. The first is the space taken up on disk (or equivalent, depending on the hardware and language) by the algorithm's compiled executable. This can often be reduced by preferring run-time decision-making mechanisms (such as virtual methods and run-time type information) over certain compile-time decision-making mechanisms (such as macro substitution and templates). This, however, comes at the cost of speed.
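A rough sketch of that trade-off (the shape types and functions here are hypothetical): the template version below is instantiated once per type it is used with, each instantiation adding code to the executable, while the virtual version compiles to a single function body that decides at run time.

    #include <iostream>

    // Compile-time mechanism: one copy of describeT is generated per
    // type it is instantiated with, growing the executable.
    template <typename Shape>
    void describeT(const Shape& s) {
        std::cout << s.name() << '\n';
    }

    // Run-time mechanism: a single describeV serves every subclass
    // through a virtual call, at the cost of an indirect dispatch.
    struct ShapeBase {
        virtual ~ShapeBase() = default;
        virtual const char* name() const = 0;
    };

    void describeV(const ShapeBase& s) {
        std::cout << s.name() << '\n';
    }

    struct Square : ShapeBase {
        const char* name() const override { return "square"; }
    };

    struct Triangle : ShapeBase {
        const char* name() const override { return "triangle"; }
    };

    int main() {
        Square sq;
        Triangle tr;
        describeV(sq);  // one shared function body handles both...
        describeV(tr);
        describeT(sq);  // ...while this line and the next each
        describeT(tr);  // instantiate a separate copy of describeT
    }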
The other part of algorithm space measurement is the amount of temporary (working) memory consumed during processing. For example, pre-caching results, as I mentioned earlier, improves speed at the cost of precisely this kind of space.
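A minimal sketch of that second cost (memoized Fibonacci numbers, an example not from the text above): the cache below ensures each value is computed only once, but its working memory grows with the largest argument seen.

    #include <cstdint>
    #include <unordered_map>

    // Pre-caching trades working memory for speed: the cache holds one
    // entry per distinct argument, so each value is computed only once
    // at the price of memory that grows with the input.
    std::uint64_t fib(unsigned n) {
        static std::unordered_map<unsigned, std::uint64_t> cache;
        if (n < 2) return n;
        auto it = cache.find(n);
        if (it != cache.end()) return it->second;  // cache hit: no recomputation
        std::uint64_t result = fib(n - 1) + fib(n - 2);
        cache.emplace(n, result);
        return result;
    }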
Be careful, in the pursuit of good coding style, not to over-emphasize efficiency. Nearly all of the time, a clean and usable design is much more important than a fast, small one. There are exceptions to this rule (such as embedded systems, where space is tight and processing power minimal), but they are rarer than one might expect.