Color constancy works only when the scene being viewed contains several differently colored objects. The color-sensitive cone cells of the eye register the red, green, and blue components of the light reflected by every object in the scene. From this information, the visual system attempts to determine the approximate composition of the illuminating light. This illumination is then discounted in order to obtain each object's "true color" or reflectance: the percentage of red, green, and blue light the object reflects. This reflectance then largely determines the perceived color. The precise algorithm the visual system uses for this process is not known.
The effect was described in 1971 by Edwin Land, who formulated the retinex theory outlined above to explain it. The word "retinex" is formed from "retina" and "cortex", suggesting that both the eye and the brain are involved in the processing.
The effect can be experimentally demonstrated as follows. A painting consisting of numerous colored patches is shown to a person. The red/green/blue components of the illuminating light are adjustable, and the person is asked to adjust them so that a particular patch in the painting appears white. The experimenter then measures the precise red/green/blue components of the light reflected from this white-appearing patch, picks a different painting, and adjusts the illumination so that a certain patch in the new painting will reflect light of the exact same white-appearing composition. The person is then asked about the color of this patch in the new painting. Answers will usually differ from "white" and will vary widely depending on the other colors in the new painting.
Color constancy is a desirable feature of robotic or computer color vision, and several algorithms, known as retinex algorithms, have been developed for this purpose. These algorithms receive as input the red/green/blue values of each pixel of the image and attempt to estimate the reflectance of each point. One such algorithm operates as follows: the maximal red value rmax over all pixels is determined, along with the maximal green value gmax and the maximal blue value bmax. Assuming that the scene contains some object that reflects all red light, another that reflects all green light, and still another that reflects all blue light, one can deduce that the illuminating light source is described by (rmax, gmax, bmax). For each pixel with values (r, g, b), its reflectance is then estimated as (r/rmax, g/gmax, b/bmax), as sketched below.
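The following is a minimal sketch of this max-RGB ("white patch") estimate, assuming the image is an H x W x 3 RGB array; the function name and example data are illustrative, not taken from any particular library.

```python
import numpy as np

def white_patch_reflectance(image: np.ndarray) -> np.ndarray:
    """Estimate per-pixel reflectance from an H x W x 3 RGB image.

    Assumes the brightest value observed in each channel comes from some
    surface that reflects that channel completely, so (rmax, gmax, bmax)
    describes the illuminant.
    """
    image = image.astype(np.float64)
    # Per-channel maxima over all pixels: the estimated illuminant (rmax, gmax, bmax).
    illuminant = image.reshape(-1, 3).max(axis=0)
    # Divide each pixel by the illuminant to obtain (r/rmax, g/gmax, b/bmax).
    return image / illuminant

# Example: a 2x2 image under a reddish light; the estimated reflectances lie in [0, 1].
img = np.array([[[200, 100, 50], [100, 50, 25]],
                [[ 50, 100, 50], [200, 20, 10]]], dtype=np.uint8)
print(white_patch_reflectance(img))
```

Note that the assumption behind the sketch fails for scenes that lack a strong reflector in some channel, which is one reason more elaborate retinex algorithms exist.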