Computability theory is the part of the theory of computation dealing with which problems are solvable by algorithms (equivalently, by Turing machines), with various restrictions and extensions. It addresses four main questions:
- What problems can Turing machines solve?
- What other systems are equivalent to Turing machines?
- What problems require more powerful machines?
- What problems can be solved by less powerful machines?
See the article on theory of computation for a chart showing which classes of problems are subsets of other classes.
Not all problems can be solved. An undecidable problem is one that cannot be solved by any algorithm, even given unbounded time and memory. Many undecidable problems are known. For example, the Entscheidungsproblem (German for "decision problem") is this: given a statement in first-order predicate calculus, decide whether it is universally valid. Church and Turing independently proved this is undecidable. The halting problem is: given a program and inputs for it, decide whether it will run forever or will eventually halt. Turing proved this is also undecidable. He defined a computable number to be a real number for which there is an algorithm that, given n, calculates the nth digit. He proved that almost all real numbers are uncomputable: there are only countably many algorithms, but uncountably many real numbers. Chaitin's constant is an uncomputable number, even though it is well defined.
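The undecidability of the halting problem can be illustrated with a short diagonal argument written as code. The following Python fragment is a sketch, not working software: halts is a hypothetical function assumed, for contradiction, to decide halting (the names halts and paradox are illustrative, and the sketch blurs the distinction between a program and its source text).

```python
# Suppose, for contradiction, that some algorithm could decide halting.
# 'halts' is hypothetical: no such total function can actually exist.
def halts(program, data):
    """Pretend oracle: return True iff program(data) eventually halts."""
    raise NotImplementedError("no such algorithm exists")

def paradox(program):
    # Do the opposite of whatever 'halts' predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:      # loop forever if halting was predicted
            pass
    else:
        return           # halt if looping was predicted

# Feeding 'paradox' to itself yields a contradiction:
# paradox(paradox) halts iff halts(paradox, paradox) is False,
# i.e. iff paradox(paradox) does not halt. Hence 'halts' cannot exist.
```

This is the standard diagonalization argument rendered in code: any claimed halting decider can be turned into a program that defeats it.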
The languages that are accepted by a Turing machine are exactly those that are generated by formal grammars. The lambda calculus is a way of defining functions; the functions that can be computed in the lambda calculus are exactly those that can be computed by a Turing machine. These three formalisms (Turing machines, formal grammars, and the lambda calculus) look very different and were developed by different people, yet they are all equivalent and have the same problem-solving power.
This is generally taken as evidence for the Church-Turing thesis, which is the claim that our intuitive notion of an algorithm or an effective procedure is captured by the mathematical definition of a Turing machine.
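The expressive power of the lambda calculus can be illustrated in any language with first-class functions. The following Python sketch encodes the natural numbers as Church numerals, representing the number n as the function that applies its argument n times; the names (zero, succ, add, mul, to_int) are illustrative, and to_int exists only to inspect the results.

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Convert a Church numeral to a Python int (for inspection only)."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
assert to_int(add(two)(two)) == 4          # 2 + 2 = 4
assert to_int(mul(two)(succ(two))) == 6    # 2 * 3 = 6
```

Arithmetic here is built from nothing but function abstraction and application, which is all the lambda calculus provides.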
Electronic computers, and even quantum computers, are exactly equivalent to Turing machines if they have access to an unbounded supply of memory. As a corollary, all implementable programming languages are at best equivalent in power to a Turing machine (in practice, very few are less powerful). Such languages are said to be Turing-complete.
Systems equivalent to a Turing machine include:
- Turing machine with several tapes
- Turing machine with a 2-dimensional "tape" (an infinite number of linear tapes)
- Turing machine with a limited number of states and tape symbols
- States × symbols can be any of 2×18, 3×10, 4×6, 5×5, 7×4, 10×3, or 22×2
- Finite state machine with 2 stacks (see the sketch following this list)
- Finite state machine with 2 counters
- Formal grammar
- Post system
- Lambda calculus
- Partial recursive functions
- Almost any modern programming language (when given unlimited memory), including:
- A language with 1 instruction, 1 parameter (see OISC and URISC here (http://www.tuxedo.org/~esr/retro/))
- A language with 8 instructions, no parameters (see Brainfuck)
- Wang tiles
- Recurrent neural network (finite-precision inputs/outputs/weights, infinite-precision signals initialized to zero)
- Cellular automaton, including:
- Conway's Game of Life
- Cellular automaton with just 1 dimension, 2 states, 3 cells per neighborhood (e.g. rule 110)
- Non-deterministic Turing machine
- Probabilistic Turing machine
- Quantum computer
The last three examples use a slightly different definition of accepting a language: they are said to accept a string if any computation accepts (for non-deterministic machines) or if a majority of computations accept (for probabilistic and quantum machines). Given these definitions, those machines have the same power as a Turing machine for accepting languages.
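To see why a finite state machine with two stacks is equivalent to a Turing machine (as promised in the list above), split the tape at the head: the cells to the left of the head go on one stack, and the cell under the head together with everything to its right goes on the other. The following Python sketch (the class and method names are illustrative) shows how the tape operations of reading, writing, and moving become stack operations.

```python
class TwoStackTape:
    """Simulate a Turing machine tape with two stacks.

    'left' holds the cells left of the head (top = nearest cell);
    'right' holds the cell under the head and everything to its
    right (top = cell under the head). Cells never written read
    as the blank symbol.
    """
    BLANK = "_"

    def __init__(self, contents=""):
        self.left = []
        self.right = list(reversed(contents))

    def read(self):
        return self.right[-1] if self.right else self.BLANK

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        self.left.append(self.right.pop() if self.right else self.BLANK)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else self.BLANK)

tape = TwoStackTape("abc")
tape.move_right()          # head now over 'b'
tape.write("X")            # tape reads "aXc"
tape.move_left()           # back over 'a'
assert tape.read() == "a"
```

The finite state machine supplies the transition rules; since the two stacks together reproduce an unbounded tape, the combination has the full power of a Turing machine.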
Sometimes machines are considered that have more power than a Turing machine. For example, an oracle machine uses a black box that can compute some particular function that may not be computable by an ordinary Turing machine. The theory of real computation deals with machines using infinite-precision real numbers. Within this theory, it is possible to prove interesting statements such as "the complement of the Mandelbrot set is only partially decidable" (see hypercomputation).
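To make the idea concrete, an oracle machine can be sketched as an ordinary program that receives the black box as a parameter. In the following Python sketch the names are illustrative, and halting_oracle stands for the uncomputable characteristic function of the halting problem; no actual implementation of it is possible, which is precisely the point.

```python
def halts_on_all(program, inputs, halting_oracle):
    """Sketch of an oracle machine checking a finite list of inputs.

    halting_oracle(p, x) is a hypothetical black box assumed to
    answer whether program p halts on input x. That single question
    is undecidable for ordinary Turing machines, so this function
    describes an oracle machine, not an algorithm. Everything
    outside the oracle calls is plain mechanical computation.
    """
    return all(halting_oracle(program, x) for x in inputs)
```

Passing the oracle as a parameter mirrors the formal definition: the machine's own transition rules are ordinary, and only the oracle queries go beyond what is computable.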
The Chomsky hierarchy defines the classes of languages that can be accepted by four classes of algorithms. Each class assumes a machine consisting of a non-deterministic finite state machine combined with some form of memory. If the memory is an infinite tape, the machine has the full power of a Turing machine and can accept exactly those languages generated by unrestricted grammars. If the memory is limited to an amount proportional to the size of the input, the machine can recognize exactly those languages generated by context-sensitive grammars. If the memory is a single stack, it can recognize exactly those languages generated by context-free grammars. If the machine has no additional memory at all, it can accept exactly those languages generated by regular grammars.
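The gap between the bottom two levels of the hierarchy can be made concrete with the classic language { aⁿbⁿ : n ≥ 0 }, which is context-free but not regular: accepting it requires counting an unbounded number of a's, which no finite state machine can do. The following Python sketch recognizes it with a single stack, in the spirit of a deterministic pushdown automaton (the function name is illustrative).

```python
def accepts_anbn(s):
    """Recognize { a^n b^n : n >= 0 } with a single stack.

    A finite state machine alone cannot accept this language, but
    one stack suffices: push for each 'a', pop for each 'b'.
    """
    stack = []
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b': reject
                return False
            stack.append(ch)
        elif ch == "b":
            seen_b = True
            if not stack:       # more b's than a's: reject
                return False
            stack.pop()
        else:
            return False        # symbol outside the alphabet
    return not stack            # accept iff the counts matched

assert accepts_anbn("aaabbb")
assert not accepts_anbn("aabbb")
assert not accepts_anbn("abab")
```

Here the stack plays exactly the role the hierarchy assigns it: it is the minimal extra memory needed to move from regular to context-free languages.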
Other restrictions on memory, time, or other resources have often been considered. The results for those restrictions are usually considered part of complexity theory rather than computability theory.