The term 3D computer graphics refers in particular to works of graphic art that were created with the aid of digital computers and specialized 3D software. In general, the term may also refer to the process of creating such graphics, or the field of study of 3D computer graphic techniques and its related technology.
3D computer graphics are distinct from 2D computer graphics in that a three-dimensional virtual representation of objects is stored in the computer for the purposes of performing calculations and rendering images. In general, the art of 3D graphics is akin to sculpting or photography, while the art of 2D graphics is analogous to painting. In computer graphics software, this distinction is occasionally blurred; some 2D applications use 3D techniques to achieve certain effects such as lighting, while some primarily 3D applications make use of 2D visual techniques.
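The distinction can be illustrated with a small sketch in Python (the coordinates, focal length and function name below are illustrative assumptions, not any particular package's API): the scene stores full three-dimensional coordinates, and a flat image only arises when those coordinates are projected onto a 2D image plane.

    # A minimal sketch of how a 3D representation differs from a 2D one:
    # the scene stores full x, y, z coordinates, and a 2D image is only
    # produced by projecting them onto an image plane.

    def project_perspective(point3d, focal_length=1.0):
        """Project a 3D point onto a 2D image plane (pinhole camera at the
        origin, looking down the negative z axis; an assumed convention)."""
        x, y, z = point3d
        if z >= 0:
            return None            # points behind the camera are skipped in this sketch
        scale = focal_length / -z
        return (x * scale, y * scale)

    # Three vertices of a triangle stored as true 3D data ...
    triangle_3d = [(-1.0, 0.0, -3.0), (1.0, 0.0, -3.0), (0.0, 1.0, -5.0)]

    # ... and the 2D coordinates an image of that triangle would use.
    triangle_2d = [project_perspective(v) for v in triangle_3d]
    print(triangle_2d)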
OpenGL and Direct3D are two popular APIs for the generation of 3D imagery on the fly. Many modern graphics cards provide some degree of hardware acceleration based on these APIs, frequently enabling the display of complex 3D graphics in real time. However, it is not necessary to employ either of these APIs to create 3D imagery.
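As a rough sketch of what programming against such an API looks like, the fragment below draws a single triangle using legacy fixed-function OpenGL through the PyOpenGL and GLUT bindings. It assumes that PyOpenGL and a GLUT implementation such as freeglut are installed; the window title and vertex coordinates are arbitrary.

    # Sketch of a minimal real-time OpenGL program via PyOpenGL and GLUT.
    # Assumes PyOpenGL and a GLUT implementation (e.g. freeglut) are installed.
    from OpenGL.GL import (glBegin, glEnd, glVertex3f, glClear,
                           GL_COLOR_BUFFER_BIT, GL_TRIANGLES)
    from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                             glutDisplayFunc, glutSwapBuffers, glutMainLoop,
                             GLUT_DOUBLE, GLUT_RGB)

    def display():
        glClear(GL_COLOR_BUFFER_BIT)
        glBegin(GL_TRIANGLES)          # legacy immediate-mode drawing
        glVertex3f(-0.5, -0.5, 0.0)
        glVertex3f( 0.5, -0.5, 0.0)
        glVertex3f( 0.0,  0.5, 0.0)
        glEnd()
        glutSwapBuffers()              # present the finished frame

    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutCreateWindow(b"Triangle")      # window title is arbitrary
    glutDisplayFunc(display)
    glutMainLoop()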
Creation of 3D computer graphics
The process of creating 3D computer graphics can be sequentially divided into three basic phases: modelling, scene setup and rendering.
The modelling stage could be described as shaping the individual objects that are later used in the scene. A number of modelling techniques exist; Constructive Solid Geometry, NURBS modelling and polygonal modelling are good examples. Modelling may also include editing an object's surface or material properties: for example color, luminosity, the diffuse and specular shading components (more commonly called roughness and shininess), reflection characteristics, transparency or opacity, and index of refraction. It may also involve adding textures, bump maps and other features.
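As an illustration of the kind of data polygonal modelling produces, the following sketch describes a cube as vertices, faces and a simple material record; the field names are illustrative rather than any specific package's file format.

    # Sketch of the data a polygonal modeller produces: a list of vertices
    # in 3D space, faces that index into that list, and a simple material
    # record. Field names are illustrative only.
    cube = {
        "vertices": [
            (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom four corners
            (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top four corners
        ],
        # Each face lists vertex indices; quads here, though renderers
        # typically split them into triangles later (see tessellation below).
        "faces": [
            (0, 1, 2, 3), (4, 5, 6, 7),   # bottom, top
            (0, 1, 5, 4), (2, 3, 7, 6),   # front, back
            (1, 2, 6, 5), (3, 0, 4, 7),   # right, left
        ],
        "material": {
            "color": (0.8, 0.2, 0.2),     # RGB diffuse color
            "specular": 0.5,              # "shininess" strength
            "transparency": 0.0,
        },
    }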
Modelling may also include various activities related to preparing a 3D model for animation. Objects may be fitted with a skeleton, a central framework of an object with the capability of affecting the shape or movements of that object. This aids in the process of animation, in that the movement of the skeleton will automatically affect the corresponding portions of the model. See also Forward kinematic animation and Inverse kinematic animation.
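The following sketch shows forward kinematics for a minimal planar "skeleton" of two bones: rotating a joint automatically moves everything attached to it. The bone lengths and angles are illustrative.

    import math

    def forward_kinematics_2d(lengths, angles):
        """Position of each joint of a simple planar skeleton (a chain of
        bones), given bone lengths and joint angles. Illustrative sketch."""
        x, y, total_angle = 0.0, 0.0, 0.0
        positions = [(x, y)]
        for length, angle in zip(lengths, angles):
            total_angle += angle                 # each joint rotates relative to its parent
            x += length * math.cos(total_angle)  # the child bone follows the parent
            y += length * math.sin(total_angle)
            positions.append((x, y))
        return positions

    # Two bones of length 2 and 1; bending the second joint moves everything
    # attached to it, which is exactly what a skeleton buys the animator.
    print(forward_kinematics_2d([2.0, 1.0], [math.radians(30), math.radians(45)]))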
Modelling can be performed by means of a dedicated program (e.g., Lightwave Modeler, Rhinoceros 3D, Moray), an application component (Shaper and Lofter in 3D Studio) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; modelling is then just part of the scene creation process (this is the case, for example, with Caligari trueSpace).
Scene setup involves arranging virtual objects, lights, cameras and other entities in a scene which will later be used to produce a still image or an animation. If used for animation, this phase usually makes use of a technique called keyframing, which facilitates the creation of complicated movement in the scene. With the aid of keyframing, instead of having to fix an object's position, rotation or scaling for each frame of an animation, one need only set up a number of key frames, between which the states in every frame are interpolated.
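A sketch of the idea behind keyframing, with illustrative frame numbers and values: only a few key frames are specified, and the states of all in-between frames are computed automatically (linearly here, although real packages also offer smoother interpolation curves).

    def interpolate_keyframes(keyframes, t):
        """Linearly interpolate a value between keyframes.
        `keyframes` is a list of (time, value) pairs sorted by time.
        A sketch of the basic idea only."""
        if t <= keyframes[0][0]:
            return keyframes[0][1]
        for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                blend = (t - t0) / (t1 - t0)
                return v0 + (v1 - v0) * blend
        return keyframes[-1][1]

    # An object's x position is keyed only at frames 0, 30 and 60;
    # every in-between frame is interpolated automatically.
    position_keys = [(0, 0.0), (30, 5.0), (60, 2.0)]
    print([round(interpolate_keyframes(position_keys, f), 2) for f in range(0, 61, 15)])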
Lighting is an important aspect of scene setup. As is the case in real-world scene arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual quality of the finished work. As such, it can be a difficult art to master. Lighting effects can contribute greatly to the mood and emotional response effected by a scene, a fact which is well-known to photographers and theatrical lighting technicians.
The process of transforming representations of objects, such as the coordinates of a sphere's center point and a point on its circumference, into a polygon representation of that sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres and cones into so-called meshes, which are nets of interconnected triangles.
Meshes of triangles (rather than, for example, quadrilaterals) are popular because they have proven easy to render using scanline rendering.
Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
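The sketch below tessellates a sphere primitive into a triangle mesh by sampling rings of latitude and longitude and connecting neighbouring samples. The resolution parameters are illustrative, and real tessellators differ in detail.

    import math

    def tessellate_sphere(center, radius, stacks=8, slices=16):
        """Approximate a sphere (an abstract primitive) by a mesh of triangles.
        Resolution is controlled by `stacks` and `slices`; a sketch of the idea."""
        cx, cy, cz = center
        # Generate vertices on rings of latitude.
        vertices = []
        for i in range(stacks + 1):
            phi = math.pi * i / stacks                  # 0 (north pole) .. pi (south pole)
            for j in range(slices):
                theta = 2 * math.pi * j / slices
                vertices.append((cx + radius * math.sin(phi) * math.cos(theta),
                                 cy + radius * math.sin(phi) * math.sin(theta),
                                 cz + radius * math.cos(phi)))
        # Connect neighbouring vertices into triangles (two per quad patch).
        triangles = []
        for i in range(stacks):
            for j in range(slices):
                a = i * slices + j
                b = i * slices + (j + 1) % slices
                c = (i + 1) * slices + j
                d = (i + 1) * slices + (j + 1) % slices
                triangles.append((a, b, c))
                triangles.append((b, d, c))
        return vertices, triangles

    verts, tris = tessellate_sphere((0, 0, 0), 1.0)
    print(len(verts), "vertices,", len(tris), "triangles")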
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Photo-realistic image quality is often a desirable outcome, and to this end several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering to more modern techniques such as scanline rendering, raytracing and radiosity.
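At the heart of raytracing, for example, lies a geometric intersection test between a ray and each object in the scene. The sketch below shows such a test for a sphere; the scene values are illustrative, and a full raytracer would add shading, shadows and reflections on top of it.

    import math

    def ray_sphere_intersection(origin, direction, center, radius):
        """Smallest positive distance t along the ray origin + t*direction at
        which it hits the sphere, or None. The direction is assumed normalized."""
        ox, oy, oz = origin; dx, dy, dz = direction; cx, cy, cz = center
        # Vector from the sphere center to the ray origin.
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2 * (dx * lx + dy * ly + dz * lz)
        c = lx * lx + ly * ly + lz * lz - radius * radius
        disc = b * b - 4 * c           # quadratic discriminant (a == 1)
        if disc < 0:
            return None                # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    # A ray fired from the camera straight down -z hits a unit sphere 4 units away.
    print(ray_sphere_intersection((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))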
Rendering software may simulate such cinematographic effects as lens flares, depth of field or motion blur. These artifacts are, in reality, a by-product of the mechanical imperfections of physical photography, but as the human eye is accustomed to their presence, the simulation of such artifacts can lend an element of realism to a scene. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with atmosphere, smoke, or particulate matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), and caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool).
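A particle system, for instance, amounts to little more than a large number of small points that are emitted, moved and eventually discarded each frame. The following sketch, with illustrative parameters, shows such an update loop.

    import random

    def step_particles(particles, dt=0.04, gravity=(0.0, -9.8, 0.0)):
        """Advance a very simple particle system one frame: move each particle
        along its velocity, apply gravity and age it out. A sketch only."""
        alive = []
        for p in particles:
            vx, vy, vz = p["velocity"]
            x, y, z = p["position"]
            p["velocity"] = (vx + gravity[0] * dt, vy + gravity[1] * dt, vz + gravity[2] * dt)
            p["position"] = (x + vx * dt, y + vy * dt, z + vz * dt)
            p["life"] -= dt
            if p["life"] > 0:
                alive.append(p)
        return alive

    # Emit a small burst of "spark" particles with randomized velocities.
    sparks = [{"position": (0.0, 0.0, 0.0),
               "velocity": (random.uniform(-1, 1), random.uniform(2, 4), random.uniform(-1, 1)),
               "life": random.uniform(0.5, 1.5)}
              for _ in range(100)]

    for _ in range(25):                 # simulate one second at 25 frames per second
        sparks = step_particles(sparks)
    print(len(sparks), "particles still alive")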
The rendering process is known to be computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner.
Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model, which should not be confused with Phong shading, a separate technique for interpolating shading across a polygon's surface.
This reflection model, and the shading techniques it gives rise to, apply to polygon-based rendering only; raytracing and radiosity do not use it.
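The Phong reflection model combines three terms: a constant ambient term, a diffuse term that depends on the angle between the surface normal and the light, and a specular highlight. The sketch below evaluates the model for a single light and a single scalar intensity; the coefficients are illustrative, and a full renderer would work per color channel and per light source.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def phong_intensity(normal, to_light, to_viewer,
                        ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
        """Scalar brightness at a surface point under the Phong reflection model:
        ambient + diffuse (Lambertian) + specular highlight. Coefficients are
        illustrative only."""
        n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
        n_dot_l = dot(n, l)
        if n_dot_l <= 0:                      # the light is behind the surface
            return ambient
        # Reflect the light direction about the normal for the specular highlight.
        r = tuple(2 * n_dot_l * nc - lc for nc, lc in zip(n, l))
        r_dot_v = max(dot(r, v), 0.0)
        return ambient + diffuse * n_dot_l + specular * (r_dot_v ** shininess)

    # A surface facing up, lit from above and slightly to the side, seen from above.
    print(round(phong_intensity((0, 1, 0), (0.3, 1, 0), (0, 1, 0)), 3))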
Popular reflection rendering techniques in 3D computer graphics include:
3D graphics have become so popular, particularly in computer games, that specialized APIs (application programming interfaces) have been created to ease the processes involved in all stages of computer graphics generation. These APIs have also proved vital to computer graphics hardware manufacturers, as they provide a way for programmers to access the hardware in an abstract manner, while still taking advantage of the special hardware of any particular graphics card.
These APIs for 3D computer graphics are particularly popular:
While there is plenty of 3D modelling and animation software around, four major packages dominate the field. They are
Besides these major packages, there are others which have not quite gained mass acceptance but are capable tools nonetheless. Among them are
For other software, see also entries on CAD and rendering.