In linear algebra, the Gram–Schmidt process is a method of orthogonalizing a set of vectors in an inner product space, most commonly the Euclidean space R^{n}. Orthogonalization in this context means the following: we start with vectors v_{1},...,v_{k} which are linearly independent, and we want to find mutually orthogonal vectors u_{1},...,u_{k} which generate the same subspace as the vectors v_{1},...,v_{k}.
We denote the inner (dot) product of the two vectors u and v by (u . v). The Gram–Schmidt process works as follows:

u_{1} = v_{1}
u_{2} = v_{2} - ((v_{2} . u_{1}) / (u_{1} . u_{1})) u_{1}
u_{3} = v_{3} - ((v_{3} . u_{1}) / (u_{1} . u_{1})) u_{1} - ((v_{3} . u_{2}) / (u_{2} . u_{2})) u_{2}
...
u_{k} = v_{k} - ((v_{k} . u_{1}) / (u_{1} . u_{1})) u_{1} - ... - ((v_{k} . u_{k-1}) / (u_{k-1} . u_{k-1})) u_{k-1}
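The process can be sketched directly in code. The following is a minimal illustration (the function names `dot` and `gram_schmidt` are our own, not part of any standard library), representing vectors as lists of floats:

```python
def dot(u, v):
    """Inner (dot) product (u . v) of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    """Orthogonalize linearly independent vectors v_1, ..., v_k.

    Each u_i is v_i minus its projections onto the previously
    computed u_1, ..., u_{i-1}.
    """
    us = []
    for v in vs:
        w = list(v)
        for u in us:
            # Subtract the projection of v onto u.
            coeff = dot(v, u) / dot(u, u)
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        us.append(w)
    return us

# Example: orthogonalize two vectors in R^2.
u1, u2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(u1, u2)        # [3.0, 1.0] [-0.4, 1.2]
print(dot(u1, u2))   # 0.0 (up to rounding)
```

Note that u_{1} is simply v_{1}, and each later u_{i} is obtained by subtracting one projection per previously computed vector.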
To check that these formulas work, first compute (u_{1} . u_{2}) by substituting the above formula for u_{2}: you will get zero. Then use this to compute (u_{1} . u_{3}) again by substituting the formula for u_{3}: you will get zero. The general proof proceeds by mathematical induction.
Geometrically, this method proceeds as follows: to compute u_{i}, it projects v_{i} orthogonally onto the subspace U generated by u_{1},...,u_{i-1}, which is the same as the subspace generated by v_{1},...,v_{i-1}. The vector u_{i} is then defined to be the difference between v_{i} and this projection, which is guaranteed to be orthogonal to all vectors in the subspace U.
If one is interested in an orthonormal system u_{1},...,u_{k} (i.e. the vectors are mutually orthogonal and all have norm 1), then one can divide each u_{i} by its norm, the square root of (u_{i} . u_{i}).
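The normalization step can be sketched as follows (again with illustrative helper names of our own choosing):

```python
import math

def dot(u, v):
    """Inner (dot) product (u . v) of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def normalize(u):
    """Divide u by its norm, the square root of (u . u)."""
    norm = math.sqrt(dot(u, u))
    return [ui / norm for ui in u]

# Example: [3, 4] has norm 5, so the result is the unit vector [0.6, 0.8].
e1 = normalize([3.0, 4.0])
print(e1)
```

Applying `normalize` to each output of the orthogonalization yields an orthonormal system.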
When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram–Schmidt process since it is more numerically stable, i.e. rounding errors tend to have less serious effects.