JPEG

In computing, JPEG is a commonly used standard method of compressing photographic images. The file format which employs this compression is also commonly called JPEG; platforms with short file extensions may use .JPG or .JPE to identify this format.

The name stands for "Joint Photographic Experts Group". JPEG itself specifies only how an image is transformed into a stream of bytes, not how those bytes are encapsulated in any particular storage medium. A further standard, called JFIF (JPEG File Interchange Format), specifies how to produce a file suitable for computer storage and transmission (such as over the Internet) from a JPEG stream. In common usage, when one speaks of a "JPEG file" one generally means a JFIF file, though there are some software systems that encode JPEG streams differently.

JPEG/JFIF is the most common format used for storing and transmitting photographs on the World Wide Web. It is not as well suited to line drawings and other textual or iconic graphics, because its compression method performs badly on these types of images (the PNG and GIF formats are in common use for that purpose; GIF, having only 8 bits per pixel, is not well suited to colour photographs, but PNG can preserve as much detail as JPEG, or more).

The standard includes many options, but most are little used. Here is a brief description of one of the more common options as applied to an input with 24 bits per pixel (8 each for red, green, and blue). This particular option is a lossy method.

First the image is converted from RGB into a different color space called YUV (in JFIF files this is the closely related YCbCr space). This is similar to the color space used by NTSC and PAL color television transmission. The Y component represents the brightness of a pixel, while the U and V components together represent the hue and saturation. This conversion is useful because the human eye can see more detail in the Y component than in the others. It enables the next step, which is to reduce the U and V components to half size in both the vertical and horizontal directions ("downsampling"), thereby reducing the size in bytes of the image data by a factor of 2. For the rest of the compression process, Y, U and V are processed separately and in a very similar manner.
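
The details vary between implementations, but a minimal Python sketch of this step (assuming a numpy array of 8-bit RGB values and the full-range YCbCr coefficients used by JFIF, with U standing for Cb and V for Cr; the function names are illustrative only) might look like this:

    import numpy as np

    def rgb_to_yuv(rgb):
        """Convert an HxWx3 uint8 RGB image into Y, U, V planes
        (full-range YCbCr as used by JFIF, so U = Cb and V = Cr)."""
        r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
        y =  0.299   * r + 0.587   * g + 0.114   * b
        u = -0.16874 * r - 0.33126 * g + 0.5     * b + 128.0
        v =  0.5     * r - 0.41869 * g - 0.08131 * b + 128.0
        return y, u, v

    def downsample(plane):
        """Halve a plane in both directions by averaging each 2x2 block."""
        h, w = plane.shape
        plane = plane[:h - h % 2, :w - w % 2]   # drop odd rows/columns for simplicity
        return (plane[0::2, 0::2] + plane[0::2, 1::2] +
                plane[1::2, 0::2] + plane[1::2, 1::2]) / 4.0

Only the U and V planes are downsampled; the Y plane keeps its full resolution.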

Next, each component (Y, U, V) of the image is "tiled" into sections of 8 by 8 pixels each; each tile is then converted to frequency space using a two-dimensional discrete cosine transform (DCT).
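
A sketch of the tiling and transform, continuing the same illustrative Python and assuming the sample values are first level-shifted by 128 as the standard specifies for 8-bit data:

    import numpy as np

    def dct_2d(block):
        """Orthonormal type-II 2-D DCT of an 8x8 block."""
        n = 8
        k = np.arange(n)
        # Basis matrix: c[u, x] = alpha(u) * cos((2x + 1) * u * pi / 16)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c @ block @ c.T

    def transform_blocks(plane):
        """Cut a component plane into 8x8 tiles and DCT each one."""
        h, w = plane.shape
        out = np.zeros((h - h % 8, w - w % 8))
        for y0 in range(0, out.shape[0], 8):
            for x0 in range(0, out.shape[1], 8):
                tile = plane[y0:y0 + 8, x0:x0 + 8] - 128.0   # level shift to centre on zero
                out[y0:y0 + 8, x0:x0 + 8] = dct_2d(tile)
        return out

After the transform, the top-left coefficient of each block (the "DC" coefficient) represents the average value of the block, and coefficients further to the right and bottom represent progressively higher horizontal and vertical spatial frequencies.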

Next phase: quantization. The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency brightness variation. This fact allows the amount of information in the high-frequency components to be greatly reduced. This is done by simply dividing each coefficient in the frequency domain by a constant chosen for that coefficient (taken from a quantization table), and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result, it is typically the case that many of the higher-frequency coefficients are rounded to zero, and many of the rest become small positive or negative numbers.
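
A sketch of this step, using the example luminance quantization table from Annex K of the standard (encoders are free to supply their own tables, which are stored in the file so the decoder can reverse the scaling):

    import numpy as np

    # Example luminance quantization table from Annex K of the JPEG standard.
    QUANT_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def quantize(dct_block, table=QUANT_LUMA):
        """Divide each DCT coefficient by its table entry and round -- the lossy step."""
        return np.round(dct_block / table).astype(np.int32)

    def dequantize(q_block, table=QUANT_LUMA):
        """What the decoder does; the rounding error can never be recovered."""
        return q_block * table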

Last phase: entropy coding (a special form of lossless compression). This is essentially a combination of three steps: the coefficients of each block are arranged in a "zigzag" order that groups similar frequencies together, the runs of zeros are run-length coded, and what remains is Huffman coded. The standard also allows, but does not require, the use of arithmetic coding, which is superior to Huffman coding in compression efficiency; this feature is rarely used, however, because (1) it is covered by patents that encumber the development of software, (2) it is much slower to encode and decode than Huffman coding, and (3) it does not provide much benefit (files on the order of 5% smaller).
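
A sketch of the zigzag scan and the zero run-length step (the subsequent Huffman coding is omitted; the function names are illustrative):

    def zigzag_indices(n=8):
        """(row, col) positions of an n x n block in JPEG zigzag order."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def run_length_encode(q_block):
        """Zigzag-scan a quantized block and emit (zero_run, value) pairs for the AC coefficients."""
        coeffs = [q_block[r][c] for r, c in zigzag_indices(len(q_block))]
        dc, ac = coeffs[0], coeffs[1:]
        pairs, run = [], 0
        for v in ac:
            if v == 0:
                run += 1
            else:
                pairs.append((run, v))
                run = 0
        if run:                       # trailing zeros collapse to one end-of-block symbol
            pairs.append("EOB")
        return dc, pairs

In the actual format the DC coefficient of each block is coded as a difference from the DC coefficient of the previous block, and the (run, value) pairs are then Huffman coded.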

Decoding to display the image consists of doing all the above in reverse.

The resulting compression ratio can be varied according to need by being more or less aggressive with the divisors used in the quantization phase. Compression of 10 to 1 usually results in an image that cannot be distinguished by eye from the original. Compression of 100 to 1 is usually possible, but the result will look distinctly "blocky" and "blurry" compared to the original. The appropriate level of compression depends on the use to which the image will be put.
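
Encoders typically expose this trade-off as a single "quality" setting. One widely used convention, sketched here after the Independent JPEG Group's libjpeg, derives the actual divisors by scaling a base quantization table:

    def scaled_quant_table(base_table, quality):
        """Scale a base quantization table for a quality setting in 1..100
        (IJG libjpeg convention: 50 uses the table unchanged, lower values
        give coarser divisors and hence smaller, blockier images)."""
        quality = max(1, min(100, quality))
        scale = 5000 // quality if quality < 50 else 200 - 2 * quality
        return [[min(255, max(1, (q * scale + 50) // 100)) for q in row]
                for row in base_table]

At quality 50 the base table is used unchanged; lower settings multiply every divisor, driving more coefficients to zero and increasing the compression ratio.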

JPEG is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. In such cases it usually performs much better than purely lossless methods while still giving a good-looking image (in fact it will produce a much higher quality image than other common methods such as GIF, which is lossless for drawings and iconic graphics but requires severe quantization for full-color images).

Newer lossy methods, particularly wavelet compression, perform even better in these cases. However, JPEG is a well established standard with plenty of software available, including free software, so it continues to be heavily used as of 2003. Also, many wavelet algorithms are patented, making it difficult or impossible to use them freely in many software projects.

The JPEG committee has now created its own wavelet-based standard, JPEG 2000, which is intended to eventually supersede the original JPEG standard.

In 2002, Forgent Networks asserted that it owns and will enforce patent rights on the JPEG technology. The announcement created a furore reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

The JPEG committee has made the following statement in response:

It has always been a strong goal of the JPEG committee that its standards should be implementable in their baseline form without payment of royalty and license fees, and the committee would like to record their disappointment that some organisations appear to be working in conflict with this goal. Considerable time has been spent in committee in attempting to either arrange licensing on these terms, or in avoiding existing intellectual property, and many hundreds of organisations and academic communities have supported us in our work.

The up and coming JPEG 2000 standard has been prepared along these lines, and agreement reached with over 20 large organisations holding many patents in this area to allow use of their intellectual property in connection with the standard without payment of license fees or royalties.

The MIME media type for JFIF is image/jpeg (defined in RFC 1341).


Those who surf the web may be familiar with the irregularities known as compression artifacts that appear in JPEG digital images. These are due to the quantization step of the JPEG algorithm, and are especially noticeable around eyes in pictures of faces. They can be reduced by choosing a lower level of compression, and may be avoided entirely by saving an image in a lossless file format, though this results in a larger file.


