The theorem states that when an analog signal is converted to digital form (or a signal is otherwise sampled at discrete intervals), the sampling frequency must be greater than twice the highest frequency present in the input signal for the original to be reconstructed perfectly from the sampled version.
If the sampling frequency is less than this limit, then frequencies in the original signal that are above half the sampling rate will be "aliased" and will appear in the resulting signal as lower frequencies. Therefore, an analog low-pass filter is typically applied before sampling to ensure that no components with frequencies greater than half the sample frequency remain. This is called an "anti-aliasing filter".
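To make the aliasing concrete, here is a minimal sketch in Python (the frequencies are illustrative choices, not taken from the article): a 7 kHz cosine sampled at 10 kHz produces exactly the same samples as a 3 kHz cosine, because 7 kHz lies above the 5 kHz limit of half the sampling frequency.

```python
import numpy as np

fs = 10_000.0                  # sampling frequency, Hz
t = np.arange(50) / fs         # 50 sample instants

# 7 kHz exceeds half the sampling frequency (5 kHz), so it aliases:
# its samples coincide with those of a |10 kHz - 7 kHz| = 3 kHz cosine.
original = np.cos(2 * np.pi * 7_000 * t)
alias = np.cos(2 * np.pi * 3_000 * t)

print(np.allclose(original, alias))   # True: the two are indistinguishable
```

Once the samples are taken, no amount of processing can tell the two signals apart; this is why the anti-aliasing filter must act before sampling.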
The theorem also applies when reducing the sampling frequency of an existing digital signal.
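A minimal sketch of this case in Python, assuming SciPy is available (scipy.signal.decimate applies a low-pass filter before discarding samples; the test signal is an illustrative choice):

```python
import numpy as np
from scipy import signal

fs = 40_000.0                    # original sampling rate, Hz
t = np.arange(4000) / fs
# Two tones: 1 kHz survives the rate reduction, 9 kHz cannot,
# since the new Nyquist limit will be 10 kHz / 2 = 5 kHz.
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 9_000 * t)

naive = x[::4]                 # keep every 4th sample: 9 kHz aliases to 1 kHz
safe = signal.decimate(x, 4)   # low-pass filters first, then downsamples
```

Discarding samples directly is simply a new, lower-rate sampling of the same signal, so the same pre-filtering requirement applies.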
The theorem was first formulated by Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), but was only formally proved by Claude E. Shannon in 1949 ("Communication in the presence of noise"). Mathematically, the theorem is formulated as a statement about the Fourier transform.
If a function s(x) has a Fourier transform F[s(x)] = S(f) that vanishes for |f| > W, then it is completely determined by the values of the function at a series of points spaced 1/(2W) apart. The values s_n = s(n/(2W)) are called the samples of s(x).
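Written out explicitly (assuming the ordinary-frequency convention for the Fourier transform, which the text does not fix), the hypothesis and the samples read:

```latex
S(f) = \int_{-\infty}^{\infty} s(x)\, e^{-2\pi i f x}\, \mathrm{d}x = 0
\quad \text{for } |f| > W,
\qquad
s_n = s\!\left(\frac{n}{2W}\right), \quad n \in \mathbb{Z}.
```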
The minimum sampling frequency that allows reconstruction of the original signal, that is, 2W samples per unit distance, is known as the Nyquist rate. (The related term Nyquist frequency usually denotes W, half of a given sampling rate.) The corresponding spacing between samples, 1/(2W), is called the Nyquist interval.
If S(f) = 0 for |f| > W, then s(x) can be recovered from its samples by the Nyquist-Shannon interpolation formula.
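In the notation above, the formula can be written with the normalized sinc function as:

```latex
s(x) = \sum_{n=-\infty}^{\infty} s_n \, \operatorname{sinc}(2Wx - n),
\qquad
\operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.
```

Each term contributes the value of one sample, and each sinc function vanishes at every other sample point, so the sum interpolates the samples exactly.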
References:
Nyquist, Harry (1928). "Certain Topics in Telegraph Transmission Theory". Transactions of the AIEE.
Shannon, Claude E. (1949). "Communication in the Presence of Noise". Proceedings of the IRE.