When an object is rendered by a 3D graphics card, the depth of each generated pixel (its z coordinate) is stored in a buffer (the z-buffer). This buffer is usually arranged as a two-dimensional array (x-y), with one element for each screen pixel. If another object of the scene must be rendered in the same pixel, the graphics card compares the two depths and keeps the one closer to the observer. The chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer allows the graphics card to reproduce the usual depth perception correctly: a close object hides a farther one.
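The per-pixel comparison can be sketched in a few lines of C. The buffer size, the clear value and the smaller-is-closer convention below are illustrative assumptions, not a description of any particular card:

    #include <float.h>

    #define WIDTH  640
    #define HEIGHT 480

    /* One depth value per screen pixel, laid out like the colour buffer. */
    static float depth[WIDTH * HEIGHT];
    static unsigned colour[WIDTH * HEIGHT];

    /* Reset every pixel to "as far away as possible" before drawing a scene. */
    void clear_buffers(void)
    {
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
            depth[i]  = FLT_MAX;
            colour[i] = 0;
        }
    }

    /* Plot a fragment only if it is closer to the observer than whatever
     * is already stored at that pixel (smaller z means closer here). */
    void plot(int x, int y, float z, unsigned rgba)
    {
        int i = y * WIDTH + x;
        if (z < depth[i]) {     /* the depth test */
            depth[i]  = z;      /* keep the new closest depth */
            colour[i] = rgba;   /* and the colour of the winning fragment */
        }
    }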
The granularity of the z-buffer has a great influence on scene quality: an 8-bit z-buffer can produce artifacts ("z-fighting") when two surfaces lie close to each other, because their depths round to the same stored value. A 16- or 32-bit z-buffer behaves much better.
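The effect of the bit width can be estimated with a small program. The figures below assume depth is mapped linearly onto [0, 1], which real hardware generally does not do, so they are only a rough upper bound on precision; two surfaces whose depths differ by less than the step size are stored as the same value and flicker against each other.

    #include <stdio.h>

    int main(void)
    {
        /* Distinct values and smallest resolvable depth difference for an
         * n-bit integer z-buffer, assuming a linear mapping onto [0, 1].
         * Non-linear mappings make precision worse far from the observer. */
        int bits[] = { 8, 16, 24, 32 };
        for (int i = 0; i < 4; i++) {
            unsigned long long levels = 1ull << bits[i];
            printf("%2d-bit z-buffer: %llu levels, smallest step %.3e\n",
                   bits[i], levels, 1.0 / (double)(levels - 1));
        }
        return 0;
    }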
At the start of a new scene, the z-buffer must be cleared to a defined value representing the farthest possible depth (zero or the maximum value, depending on the convention used).
On recent PC graphics cards (1999-2003), z-buffer management uses a significant chunk of the available memory bandwidth. Various methods have been employed to reduce this cost, such as lossless compression (the compute resources spent compressing and decompressing are cheaper than the bandwidth saved) and fast z-clear, which skips the inter-frame clear altogether by storing signed depths: one frame uses positive values, the next negative ones, so a stored value carrying the previous frame's sign can be recognised and ignored.
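The fast-z-clear idea can be sketched as follows. This is only a conceptual illustration, assuming a signed floating-point depth buffer and a smaller-magnitude-is-closer convention; real implementations live in hardware and differ in detail. The sign of a stored value records which frame wrote it, so a value with the wrong sign is simply treated as "infinitely far" instead of being cleared.

    #include <math.h>
    #include <stdbool.h>

    #define WIDTH  640
    #define HEIGHT 480

    /* Signed depth buffer: the sign encodes which frame wrote each value,
     * so the buffer never has to be wiped between frames. */
    static float zbuf[WIDTH * HEIGHT];   /* starts at 0.0 = stale everywhere */

    static float frame_sign = 1.0f;      /* +1 on even frames, -1 on odd ones */

    void begin_frame(void)
    {
        /* Flipping one variable replaces clearing WIDTH * HEIGHT depths. */
        frame_sign = -frame_sign;
    }

    /* Depth test for a fragment with depth z in (0, 1], smaller = closer.
     * Returns true if the fragment is visible and has been recorded. */
    bool depth_test(int x, int y, float z)
    {
        float *stored = &zbuf[y * WIDTH + x];

        /* A value whose sign does not match the current frame was written
         * during the previous frame (or never); treat it as infinitely far. */
        bool stale = (*stored * frame_sign) <= 0.0f;

        if (stale || z < fabsf(*stored)) {
            *stored = frame_sign * z;    /* tag the depth with this frame's sign */
            return true;
        }
        return false;
    }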