In computer science and information theory, error correction consists of methods to detect and correct errors introduced during the transmission or storage of data, by the use of some amount of redundant data and (in the case of transmission) the selective retransmission of incorrect segments of that data.
Error-correction methods are chosen to match the error characteristics of the transmission or storage medium, so that errors are almost always detected and corrected with a minimum of redundant data stored or sent.
The most obvious (and highly inefficient) method of error correction is to repeat each unit of data multiple times. Another simple method is to use one bit of each byte of data as a parity bit. Note, however, that a parity bit provides only error detection, not error correction: if an error occurs, we cannot tell which bit is in error.
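The limits of a single parity bit can be seen in a small sketch (the function names here are illustrative, not from any standard library): an even-parity check notices that one bit flipped, but cannot say which one.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """Return True if the parity is consistent: either no error occurred,
    or an even number of bits flipped (which parity cannot see)."""
    return sum(bits) % 2 == 0

word = [1, 0, 1, 1, 0, 1, 0]        # 7 data bits
sent = add_parity(word)             # 8 bits on the wire
assert check_parity(sent)           # error-free transmission passes the check

corrupted = sent[:]
corrupted[3] ^= 1                   # flip one bit in transit
assert not check_parity(corrupted)  # the single-bit error is detected
# ...but nothing tells us WHICH bit flipped, so we cannot correct it
```

Note also that two bit flips cancel out in the parity sum, so this scheme misses any even number of errors.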
Information theory tells us that, whatever the probability of error in transmission or storage, it is possible to construct error-correcting codes for which the likelihood of failure is arbitrarily low. It also gives a bound on the efficiency that such schemes can achieve.
Error correction in practice is complicated by the fact that errors often occur in bursts rather than at random.
Block error-correcting codes, such as Hamming codes and Reed-Solomon codes, transform a chunk of bits into a (longer) chunk of bits in such a way that errors up to some threshold in each block can be detected and corrected.
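As a concrete illustration of a block code, here is a minimal sketch of the classic Hamming(7,4) code, which encodes 4 data bits into 7 bits and can correct any single-bit error (the function names are illustrative):

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword.

    Bit positions are numbered 1..7; positions 1, 2 and 4 hold
    parity bits, and the remaining positions hold the data."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4          # recheck each parity equation
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error; else the bad position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1        # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[5] ^= 1                        # corrupt one bit in the block
assert hamming74_decode(code) == data  # the single error is corrected
```

The three parity equations are arranged so that the pattern of failed checks (the syndrome) reads out the position of the flipped bit directly, which is what turns mere detection into correction.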