In contrast to image compression, lossless audio compression algorithms are not nearly as widely used. The primary users of lossless compression are audio engineers and those consumers who disdain the quality loss from lossy compression techniques such as Vorbis and MP3.
There are two reasons for this. First, the vast majority of sound recordings are natural sounds, recorded from the real world, and such data doesn't compress well. In a similar manner, photos compress less efficiently with lossless methods than computer-generated images do. Worse, even computer-generated sounds can contain very complicated waveforms that present a challenge to many compression algorithms. This is due to the nature of audio waveforms, which are generally difficult to simplify without a (necessarily lossy) conversion to frequency information, as performed by the human ear.
The second reason is that the values of audio samples change very quickly, so runs of identical bytes rarely appear and generic data compression algorithms don't work well for audio. However, convolution with the filter [-1 1] (that is, taking the first difference) tends to whiten the spectrum a bit and allows traditional lossless compression to do its job; integration restores the original signal. More advanced codecs such as Shorten (SHN) and FLAC use linear prediction to come up with an optimal whitening filter.
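The first-difference idea above can be sketched in a few lines of Python with NumPy (the real codecs are written in C and follow the difference step with an entropy coder such as Rice coding; the sample values here are made up for illustration):

```python
import numpy as np

# A hypothetical 8-sample excerpt of a smooth waveform.
samples = np.array([100, 102, 105, 107, 108, 107, 104, 100], dtype=np.int32)

# Convolving with [-1 1] is just taking the first difference. After the
# initial sample, the residuals are much smaller than the raw values,
# which is what makes them easy to entropy-code.
residual = np.diff(samples, prepend=0)   # [100, 2, 3, 2, 1, -1, -3, -4]

# Integration (a running sum) undoes the difference losslessly.
restored = np.cumsum(residual)
assert np.array_equal(restored, samples)
```

Linear-predictive codecs generalize this: the fixed predictor "previous sample" is replaced by a weighted combination of several previous samples, chosen per block to minimize the residuals.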
Some examples of popular lossless audio codecs:
Lossless audio codecs have no quality issues, so the usability can be estimated by
Most lossy audio compression algorithms are based on simple transforms such as the modified discrete cosine transform (MDCT), which convert sampled waveforms into their component frequencies. Some modern algorithms use wavelets, but it is still uncertain whether such algorithms will work significantly better than those based on the MDCT, because of the inherent periodicity of audio signals, which wavelets seem not to handle well. Some algorithms try to merge the two approaches.
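A minimal MDCT/IMDCT pair can be written directly from the textbook definition (a Python/NumPy sketch using the O(N²) matrix form; real codecs use FFT-based fast algorithms). The block size, window, and normalization below are illustrative choices, but they demonstrate the transform's key "lapped" property: 50%-overlapping blocks overlap-add back to the original signal exactly.

```python
import numpy as np

def mdct(x):
    """MDCT of one block of 2N samples -> N frequency coefficients."""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ x

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples.
    The 2/N factor is chosen so that windowed overlap-add is exact."""
    N = len(X)
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * (basis @ X)

N = 64
# Sine window: satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1.
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
rng = np.random.default_rng(0)
signal = rng.standard_normal(8 * N)

out = np.zeros_like(signal)
for start in range(0, len(signal) - 2 * N + 1, N):   # hop = N: 50% overlap
    block = signal[start:start + 2 * N]
    coeffs = mdct(w * block)                         # analysis: window + MDCT
    out[start:start + 2 * N] += w * imdct(coeffs)    # synthesis: window + overlap-add

# Interior samples (those covered by two blocks) are reconstructed exactly;
# the time-domain aliasing of each block cancels against its neighbors.
assert np.allclose(out[N:-N], signal[N:-N])
```

A lossy codec would quantize `coeffs` between the analysis and synthesis steps; the transform itself is invertible, and all the loss comes from that quantization.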
Most algorithms don't try to minimize mathematical error, but instead to maximize the subjective human sense of fidelity. Because the human ear cannot analyze all components of an incoming sound, a file can be modified considerably without changing the subjective experience of a listener. For example, a codec can drop some information about very low and very high frequencies, which are almost inaudible to humans. Similarly, frequencies that are "masked" by other frequencies, due to the nature of the human cochlea, are represented with decreased accuracy. Such a model of the human ear is often called a psychoacoustic model, or "psycho-model" for short.
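The effect can be illustrated with a deliberately crude sketch in Python with NumPy. A real psychoacoustic model computes masking thresholds per critical band; here a quiet tone is simply discarded because it falls below a fixed (hypothetical) fraction of the spectral peak, yet the waveform barely changes:

```python
import numpy as np

t = np.arange(1024) / 1024
loud = np.sin(2 * np.pi * 50 * t)            # dominant tone
quiet = 0.001 * np.sin(2 * np.pi * 300 * t)  # tone a real model would call "masked"
x = loud + quiet

spectrum = np.fft.rfft(x)
# Toy threshold: keep only bins above 1% of the peak magnitude.
# (A genuine psycho-model uses frequency-dependent masking curves instead.)
keep = np.abs(spectrum) > 0.01 * np.abs(spectrum).max()
y = np.fft.irfft(spectrum * keep, n=len(x))

# Almost all bins are zeroed, yet the reconstruction error is tiny
# compared with the signal itself.
print(int(keep.sum()), "of", len(spectrum), "bins kept")
```

Storing only the surviving coefficients (and their positions) is where the size reduction comes from; the discarded detail is gone for good, which is what makes the scheme lossy.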
Due to the nature of lossy algorithms, audio quality suffers each time a file is decompressed and recompressed (generation loss). This makes lossily compressed files less than ideal for audio engineering applications, such as sound editing and multitrack recording.
Some examples of popular audio codecs:
Other examples can be found on the codec page.