Lossless data compression

Lossless data compression refers to data compression algorithms which allow the original data to be reconstructed exactly from the compressed data. Contrast with lossy data compression.

Lossless data compression is used in software compression tools such as the highly popular Zip format, used by PKZIP and WinZip, and the Unix programs bzip2, gzip and compress. Lossless compression is used when it is important that the original and the decompressed data be exactly identical, or when no assumption can be made about whether certain deviations are uncritical. Typical examples are executable programs and source code.

Some image file formats, notably PNG, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. GIF uses a technically lossless compression method, but most GIF implementations are incapable of representing full color, so they quantize the image (often with dithering) to 256 or fewer colors before encoding it as GIF. Color quantization is a lossy process, but reconstructing the color image and then re-quantizing it produces no additional loss. (Some rare GIF implementations make multiple passes over an image, adding 255 new colors on each pass.)

Lossless data compression makes some files longer

Lossless data compression algorithms cannot guarantee to compress (that is, make smaller) all input data sets. In other words, for any lossless data compression algorithm there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics, using a counting argument as follows:

  • Assume that each file is represented as a string of bits of some arbitrary length.
  • Suppose that there is a compression algorithm that transforms every file into a distinct shorter file. (If the output files are not all distinct, the compression cannot be reversed without losing some data).
  • Consider the set of all files of length at most N bits. This set has 1 + 2 + 4 + ... + 2^N = 2^(N+1) - 1 members, if we include the zero-length file.
  • Now consider the set of all files of length at most N-1 bits. There are 1 + 2 + 4 + ... + 2^(N-1) = 2^N - 1 such files, if we include the zero-length file.
  • But 2^N - 1 is smaller than 2^(N+1) - 1. We cannot map all the members of the larger set uniquely onto members of the smaller set.
  • This contradiction implies that our original hypothesis (that the compression function makes all files smaller) must be untrue.

Notice that the difference in size is so marked that it makes no difference if we simply consider files of length exactly N as the input set: with 2^N members, it is still larger than the desired output set.

If we make all the files a multiple of 8 bits long (as in standard computer files) there are even fewer files in the smaller set, and this argument still holds.
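
The counting argument can be checked directly for small N. The following Python sketch (N = 4 is an arbitrary choice) enumerates every bit string of length at most N and every bit string of length at most N-1, confirming the two counts used in the proof:

    from itertools import product

    # There are 2^(N+1) - 1 bit strings of length at most N (counting the
    # empty string), but only 2^N - 1 of length at most N - 1, so no
    # reversible "compressor" can map every input to a shorter output.
    N = 4

    def strings_up_to(n):
        return [bits for length in range(n + 1)
                for bits in product("01", repeat=length)]

    inputs = strings_up_to(N)        # all candidate input files
    outputs = strings_up_to(N - 1)   # all strictly shorter output files

    print(len(inputs), "inputs vs", len(outputs), "possible shorter outputs")
    assert len(inputs) == 2 ** (N + 1) - 1    # 31 for N = 4
    assert len(outputs) == 2 ** N - 1         # 15 for N = 4
    assert len(inputs) > len(outputs)         # pigeonhole: some collision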

Thus any lossless compression algorithm that makes some files shorter must necessarily make some files longer. Good compression algorithms are those that achieve shorter output on input distributions that occur in real-world data.


Lossless Compression Techniques

Lossless compression methods may be categorized according to the type of data they are designed to compress. The three main types of target for compression algorithms are text, images, and sound. Whilst, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can handle any binary input) can be used on any type of data, many are unable to achieve significant compression on data that is not of the form they were designed to deal with. Sound data, for instance, cannot be compressed well with conventional text compression algorithms.

Most lossless compression programs use two different kinds of algorithm: one which generates a statistical model for the input data, and another which maps the input data to bit strings using this model, in such a way that "probable" (i.e. frequently encountered) data produces shorter output than "improbable" data. Often only the former algorithm is named, while the second is implied (through common use, standardization, etc.) or left unspecified.

Statistical modelling algorithms for text (or text-like binary data such as executables) include:

  • The Burrows-Wheeler transform (a block-sorting preprocessing step that makes compression more efficient)
  • LZ77 (used by Deflate)
  • LZW
  • PPM (prediction by partial matching)

Encoding algorithms to produce bit sequences from such models include:

  • Huffman coding (also used by Deflate)
  • Arithmetic coding
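
As a minimal sketch of the model/encoder split described above, the following Python example uses plain symbol frequencies as the statistical model and Huffman coding as the encoder, so that frequent symbols receive shorter codes. The input string is an invented example; real tools use far more sophisticated models:

    import heapq
    from collections import Counter

    def huffman_code(frequencies):
        # Each heap entry is (weight, tie_breaker, {symbol: code_so_far});
        # the integer tie_breaker keeps the tuples comparable.
        heap = [(weight, i, {symbol: ""})
                for i, (symbol, weight) in enumerate(frequencies.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)
            w2, _, c2 = heapq.heappop(heap)
            # Merge the two lightest subtrees, prefixing "0" and "1".
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (w1 + w2, tie, merged))
            tie += 1
        return heap[0][2]

    text = "abracadabra"
    model = Counter(text)        # stage 1: the statistical model
    code = huffman_code(model)   # stage 2: bit strings built from the model
    encoded = "".join(code[s] for s in text)
    print(code)
    print(len(text) * 8, "bits raw ->", len(encoded), "bits encoded")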

Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the USA and other countries, and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by the patent holder Unisys that many considered abusive, some open source activists encourage people to avoid using the Graphics Interchange Format (GIF) for compressing image files in favor of JPEG (for true-color images) or Portable Network Graphics (PNG) (for indexed images).

Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text yet are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that colour images are usually dominated by a limited range of the colours representable in the colour space).
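
One simple image-oriented technique of the kind alluded to above is run-length encoding, sketched here in Python for a two-tone bitmap scanline; contiguous runs of identical pixels, common in simple bitmaps, collapse into (value, count) pairs. The scanline is an invented example:

    def rle_encode(row):
        # Collapse each run of identical pixels into a [pixel, count] pair.
        runs = []
        for pixel in row:
            if runs and runs[-1][0] == pixel:
                runs[-1][1] += 1
            else:
                runs.append([pixel, 1])
        return runs

    def rle_decode(runs):
        return [pixel for pixel, count in runs for _ in range(count)]

    row = [0] * 20 + [1] * 5 + [0] * 20   # a 45-pixel scanline
    encoded = rle_encode(row)
    print(encoded)                        # [[0, 20], [1, 5], [0, 20]]
    assert rle_decode(encoded) == row     # lossless round trip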

As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data, essentially using models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the "error") tends to be small, then certain difference values (like 0, +1, and -1 on sample values) become very frequent, which can be exploited by encoding them in few output bits.
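
The following Python sketch illustrates the predict-and-encode idea under a deliberately simple assumption: each sample is predicted to equal the previous one, and only the prediction errors are kept. The sine wave stands in for real audio; real codecs use better predictors and then entropy-code the residuals:

    import math
    from collections import Counter

    # Invented stand-in for audio: a rounded sine wave with amplitude 1000.
    samples = [round(1000 * math.sin(i / 20)) for i in range(200)]

    # Residual = actual sample minus predicted (i.e. previous) sample.
    residuals = [samples[0]] + [samples[i] - samples[i - 1]
                                for i in range(1, len(samples))]

    print("sample range:  ", min(samples), "to", max(samples))
    print("residual range:", min(residuals[1:]), "to", max(residuals[1:]))
    print("most common residuals:", Counter(residuals[1:]).most_common(5))

    # Decoding reverses the prediction, reconstructing the input exactly.
    decoded = [residuals[0]]
    for r in residuals[1:]:
        decoded.append(decoded[-1] + r)
    assert decoded == samples

The residuals span a far smaller range than the samples themselves, which is exactly what lets an entropy coder represent them in few output bits.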

It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, between successive images). This is called delta compression (from the Greek letter delta, which is commonly used in mathematics to denote a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta compression from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
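
As a toy illustration of delta compression, the Python sketch below handles the simplest possible case: two versions of a file with the same number of lines, where the delta stores only (line number, new text) pairs. Both versions are meaningful files on their own, as the paragraph notes; the file contents are invented:

    def make_delta(old, new):
        # Toy delta: assumes in-place edits only (same line count).
        assert len(old) == len(new)
        return [(i, line) for i, (prev, line) in enumerate(zip(old, new))
                if prev != line]

    def apply_delta(old, delta):
        new = list(old)
        for i, line in delta:
            new[i] = line
        return new

    v1 = ["line %d: original text" % i for i in range(1000)]
    v2 = list(v1)
    v2[42] = "line 42: edited text"

    delta = make_delta(v1, v2)
    print("delta:", delta)               # a single (index, text) pair
    assert apply_delta(v1, delta) == v2  # version 2 rebuilt from version 1

Real delta tools (such as diff and patch, or the inter-frame coding in video codecs) handle insertions and deletions as well, but the size advantage is the same: the delta is tiny compared with storing the second version in full.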


See also: Lossy data compression, David A. Huffman.


