In linear algebra, the Gram-Schmidt process is a method of orthogonalizing a set of vectors in an inner product space, most commonly the Euclidean space R^n. Orthogonalization in this context means the following: we start with linearly independent vectors v1,...,vk and we want to find mutually orthogonal vectors u1,...,uk which generate the same subspace as the vectors v1,...,vk.
We denote the inner (dot) product of two vectors u and v by (u . v). The Gram-Schmidt process works as follows:

u1 = v1
u2 = v2 - ((v2 . u1) / (u1 . u1)) u1
u3 = v3 - ((v3 . u1) / (u1 . u1)) u1 - ((v3 . u2) / (u2 . u2)) u2
...
uk = vk - ((vk . u1) / (u1 . u1)) u1 - ... - ((vk . uk-1) / (uk-1 . uk-1)) uk-1
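A minimal sketch of this process in Python may help clarify the formulas (the names gram_schmidt and dot are illustrative, not part of the article; vectors are represented as plain lists of floats):

    def dot(u, v):
        # Inner (dot) product (u . v) of two vectors given as lists of floats.
        return sum(a * b for a, b in zip(u, v))

    def gram_schmidt(vs):
        # Given linearly independent vectors v1,...,vk, return mutually
        # orthogonal vectors u1,...,uk spanning the same subspace.
        us = []
        for v in vs:
            u = list(v)
            # Subtract from v its projection onto each previously computed u.
            for w in us:
                coeff = dot(v, w) / dot(w, w)
                u = [a - coeff * b for a, b in zip(u, w)]
            us.append(u)
        return us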
To check that these formulas work, first compute (u1 . u2) by substituting the above formula for u2: you will get zero. Then use this to compute (u1 . u3), again by substituting the formula for u3: you will get zero. The general proof proceeds by mathematical induction.
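For example, the first step of this check, written out in the article's notation, is:

(u1 . u2) = (u1 . v2) - ((v2 . u1) / (u1 . u1)) (u1 . u1) = (u1 . v2) - (v2 . u1) = 0.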
Geometrically, this method proceeds as follows: to compute ui, it projects vi orthogonally onto the subspace U generated by u1,...,ui-1, which is the same as the subspace generated by v1,...,vi-1. The vector ui is then defined to be the difference between vi and this projection, and is therefore guaranteed to be orthogonal to all vectors in the subspace U.
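As a concrete illustration in R^2, here is a usage example of the sketch above (the numbers are illustrative):

    vs = [[3.0, 1.0], [2.0, 2.0]]
    us = gram_schmidt(vs)
    # u2 is v2 minus its orthogonal projection onto u1:
    print(us)                  # [[3.0, 1.0], [-0.4, 1.2]] (up to rounding)
    print(dot(us[0], us[1]))   # 0.0 (up to rounding)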
If one is interested in an orthonormal system u1,...,uk (i.e. the vectors are mutually orthogonal and all have norm 1), then one divides each ui by its norm, which is the square root of (ui . ui).
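A sketch of this normalization step, reusing dot from the example above (the name normalize is illustrative):

    import math

    def normalize(us):
        # Divide each orthogonal vector by its norm sqrt((u . u)),
        # yielding an orthonormal system.
        return [[x / math.sqrt(dot(u, u)) for x in u] for u in us]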
When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram-Schmidt process since it is more numerically stable, i.e. rounding errors tend to have less serious effects.
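In practice one therefore typically calls a library routine rather than coding the process by hand; for example, NumPy's numpy.linalg.qr is backed by a Householder-based LAPACK QR factorization (the matrix below is illustrative):

    import numpy as np

    A = np.array([[3.0, 2.0],
                  [1.0, 2.0]])   # columns are v1 and v2
    Q, R = np.linalg.qr(A)       # columns of Q form an orthonormal basis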