For many problems, the problem size is taken to be the number of bits required to encode the input. For instance, if the problem is to square a given integer, the input size is typically measured as the logarithm of that integer, since this is how many bits are needed to write it in binary notation. Often, however, the encoding of the input is not canonical: if the problem is one in graph theory, for instance, several different problem sizes can be defined, since a graph can be encoded either as a list of edges or as an adjacency matrix.
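As a concrete illustration of how the choice of encoding changes the measured input size, the following sketch compares the bit counts of the encodings mentioned above (the helper function names are our own, introduced only for this example):

```python
# Illustration (hypothetical helpers): measuring "input size" in bits
# for different encodings of the same objects.

def int_input_size(n: int) -> int:
    """Bits needed to encode a positive integer n in binary."""
    return n.bit_length()  # equals floor(log2(n)) + 1 for n >= 1

def edge_list_size(num_vertices: int, num_edges: int) -> int:
    """Bits for an edge-list encoding: each edge names two vertices,
    and each vertex label needs enough bits to distinguish all vertices."""
    bits_per_vertex = max(1, (num_vertices - 1).bit_length())
    return 2 * bits_per_vertex * num_edges

def adjacency_matrix_size(num_vertices: int) -> int:
    """Bits for an adjacency-matrix encoding: one bit per vertex pair."""
    return num_vertices * num_vertices

# The integer 1,000,000 needs only 20 bits, not a million "units".
print(int_input_size(1_000_000))       # 20

# A sparse graph with 1000 vertices and 1000 edges:
print(edge_list_size(1000, 1000))      # 20000 bits as an edge list
print(adjacency_matrix_size(1000))     # 1000000 bits as a matrix
```

For sparse graphs the two encodings differ by orders of magnitude, which is why a running time stated as a function of "input size" is only meaningful once the encoding is fixed.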