In electronics, an analog-to-digital converter (abbreviated ADC, A/D, or A-to-D) is a device that converts continuous signals to discrete digital numbers. Typically, an ADC converts a voltage to a digital number. The digital-to-analog converter (DAC) performs the reverse operation.
The resolution of the converter indicates the number of discrete values it can produce. It is usually expressed in bits. For example, an ADC that encodes an analog input to one of 256 discrete values has a resolution of eight bits, since 2^8 = 256.
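As a concrete illustration, the encoding step of an ideal eight-bit converter can be sketched in a few lines of Python. The 5 V reference and the function name are assumptions for the example, not properties of any particular device:

    def encode(voltage, v_ref=5.0, bits=8):
        """Map a voltage in [0, v_ref) to one of 2**bits discrete codes."""
        levels = 2 ** bits                     # 2**8 = 256 discrete values
        code = int(voltage / v_ref * levels)
        return min(max(code, 0), levels - 1)   # clamp to the valid code range

    print(encode(0.0))     # 0   (bottom of the range)
    print(encode(2.5))     # 128 (mid-scale)
    print(encode(4.99))    # 255 (top code)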
Most ADCs are linear, which means that they are designed to produce an output value that is a linear function of, i.e. proportional to, the input. Another common type is the logarithmic ADC, which is used in telecommunications systems where the amplitude of the input signal varies over a wide range. The logarithmic ADC compresses the input signal into a smaller number of bits than a linear ADC with the same input range and resolution.
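For instance, the mu-law curve used in telephony is one common logarithmic companding scheme. A minimal sketch, where mu = 255 is the standard telephony parameter and the function name is illustrative:

    import math

    def mu_law_compress(x, mu=255):
        """Compress x in [-1, 1] so small amplitudes keep more resolution."""
        return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

    # A signal at 1% of full scale still spans roughly 23% of the output
    # range, so fewer bits cover the same dynamic range.
    print(mu_law_compress(0.01))   # ~0.23
    print(mu_law_compress(1.0))    # 1.0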
Accuracy depends on the error in the conversion. Provided the ADC is functioning properly, this error has two components: quantization error and (assuming the ADC is intended to be linear) non-linearity. These errors are measured in a unit called the LSB, an abbreviation for least significant bit. In the above example of an eight-bit ADC, an error of one LSB is 1/256 of the full signal range, or about 0.4%.
Quantization error is due to the finite resolution of the ADC, and is an unavoidable imperfection in all types of ADC. The magnitude of the quantization error at the sampling instant is between zero and half of one LSB.
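This bound can be checked numerically. A sketch, assuming an ideal eight-bit converter with a 5 V reference that rounds to the nearest level (both assumptions for the example):

    def quantize(voltage, v_ref=5.0, bits=8):
        lsb = v_ref / 2 ** bits            # one LSB is about 19.5 mV here
        return round(voltage / lsb) * lsb  # nearest reconstruction level

    lsb = 5.0 / 256
    worst = max(abs(v - quantize(v))
                for v in (i * 5.0 / 10000 for i in range(10000)))
    print(worst / lsb)   # just under 0.5: never more than half an LSB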
All ADCs suffer from non-linearity errors caused by their physical imperfections, which make their output deviate from a linear function (or from some other intended function, in the case of a deliberately non-linear ADC) of their input. These errors can sometimes be mitigated by calibration, or screened out by testing.
Commonly, the analog signal is continuous in time, and it is necessary to convert it to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. This rate is called the sampling rate of the converter.
All ADCs work by sampling their input at discrete intervals of time. Their output is therefore an incomplete picture of the behaviour of the input. There is no way of knowing, by looking at the output, what the input was doing between one sampling instant and the next. If the input is known to be changing slowly compared to the sampling rate, then it can be assumed that the value of the signal between two sample instants was somewhere between the two sampled values. If, however, the input signal is changing fast compared to the sample rate, then this assumption is not valid.
If the digital values produced by the ADC are, at some later stage in the system, converted back to analog values by a digital to analog converter or DAC, it is desirable that the output of the DAC is a faithful representation of the original signal. If the input signal is changing much faster than the sample rate, then this will not be the case, and spurious signals called aliases will be produced at the output of the DAC. This problem is called aliasing.
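A numerical sketch of the effect, assuming a 10 kHz sample rate and a 7 kHz input (both chosen arbitrarily for illustration): at the sampling instants, the 7 kHz sine is indistinguishable from an inverted 3 kHz sine, its alias.

    import math

    # At each sampling instant t = n/fs,
    #   sin(2*pi*7000*t) == -sin(2*pi*3000*t),
    # so the samples of the 7 kHz tone match those of a 3 kHz alias.
    fs = 10000                          # sample rate in Hz
    for n in range(5):
        t = n / fs                      # n-th sampling instant
        fast = math.sin(2 * math.pi * 7000 * t)
        alias = -math.sin(2 * math.pi * 3000 * t)
        print("%+.6f  %+.6f" % (fast, alias))   # two identical columns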
To avoid aliasing, the input to an ADC is often filtered to prevent it from changing faster than the sample rate. This filter is called an anti-aliasing filter.
There are four common ways of implementing an electronic ADC: the flash (direct conversion) ADC, the successive-approximation ADC, the ramp-compare (integrating) ADC, and the delta-sigma (oversampling) ADC.
Nonelectronic ADCs usually use some scheme similar to one of the above.
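As an illustration of one of the electronic schemes above, a successive-approximation converter can be modeled in software: it settles on the output one bit at a time, from the most significant bit down, by comparing the input against an internal DAC. This is only a behavioural sketch; the function name, eight-bit resolution, and 5 V reference are assumptions.

    def sar_convert(v_in, v_ref=5.0, bits=8):
        code = 0
        for bit in range(bits - 1, -1, -1):       # MSB first
            trial = code | (1 << bit)             # tentatively set this bit
            if trial * v_ref / 2 ** bits <= v_in: # compare with internal DAC
                code = trial                      # input is at least this high: keep it
        return code

    print(sar_convert(2.5))   # 128, after eight compare steps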
Electronic ADCs are usually implemented as integrated circuits.
Most converters sample with 6 to 24 bits of resolution and produce fewer than one megasample per second. Megasample and gigasample converters are also available (as of February 2002); megasample converters are required for digital video editing. Commercial converters usually have ±0.5 to ±1.5 LSB error in their output.
The most expensive part of an integrated circuit is often its pins: more pins make the package larger, and each pin must be connected to the chip's silicon. To save pins, it is common for ADCs to send their data one bit at a time over a serial interface to the computer, with the next bit coming out when a clock signal changes state, say from zero to 5 V. This saves quite a few pins on the ADC package and, in many cases, does not make the overall design any more complex. (A notable exception is in connecting the converters to microprocessors which use memory-mapped IO.)
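In software, reading such a serial ADC amounts to toggling the clock line and shifting in one bit per edge. A sketch, where read_data_pin and set_clock_pin are hypothetical stand-ins for whatever GPIO or bus access a real system provides:

    def read_adc_serial(read_data_pin, set_clock_pin, bits=12):
        value = 0
        for _ in range(bits):
            set_clock_pin(1)                        # rising edge: ADC shifts out a bit
            value = (value << 1) | read_data_pin()  # most significant bit arrives first
            set_clock_pin(0)                        # return the clock low
        return value

    # Demo with a stand-in data line that plays back the bits of 0xabc:
    stream = iter("101010111100")
    print(hex(read_adc_serial(lambda: int(next(stream)), lambda s: None)))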
Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample-and-hold circuits, instrumentation amplifiers, or differential inputs, where the quantity measured is the difference between two voltages.