Gauss-Jordan elimination

Gauss-Jordan elimination is an algorithm in linear algebra for determining the solutions of a system of linear equations, for determining the rank of a matrix, and for calculating the inverse of an invertible square matrix.

The method is named after the mathematician Carl Friedrich Gauss and the surveyor Wilhelm Jordan, but the method was already described in Liu Hui's commentary, written in 263 A.D., on the Chinese book Jiuzhang suanshu (The Nine Chapters on the Mathematical Art).

The computational complexity of Gauss-Jordan elimination is O(n³): the number of arithmetic operations required is proportional to n³ if the matrix is n-by-n.

Systems of linear equations

Suppose you need to find numbers x, y and z such that the following three equations are all true:

 2x + y -  z =   8
-3x - y + 2z = -11
-2x + y + 2z =  -3

This is called a system of linear equations for the unknowns x, y and z. The goal is to transform this system into an equivalent one so that we can easily read off the solution. The allowed operations for transforming a system of equations into an equivalent one are as follows:
  • multiply/divide an equation by a non-zero number
  • switch two equations
  • add a multiple of one equation to another one
The strategy is as follows: eliminate x from all but the first equation, then eliminate y from all but the second equation, and finally eliminate z from all but the third equation.

In our example, we eliminate x from the second equation by adding 3/2 times the first equation to the second, and then we eliminate x from the third equation by adding the first equation to the third. The result is:

2x +   y -   z = 8
     .5y + .5z = 1
      2y +   z = 5
Now we eliminate y from the first equation by adding -2 times the second equation to the first, and then we eliminate y from the third equation by adding -4 times the second equation to the third:
2x        - 2z = 6
     .5y + .5z = 1
           - z = 1
Finally, we eliminate z from the first equation by adding -2 times the third equation to the first, and then we eliminate z from the second equation by adding .5 times the third equation to the second:
2x             = 4
     .5y       = 1.5
           - z = 1
We can read off the solution: x = 2, y = 3 and z = -1.
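
These row operations are easy to replicate on a computer; the following Python/NumPy sketch (our own illustration, not part of the article) applies exactly the steps above to the numbers of the system and reads off the same solution:

import numpy as np

# The system above, stored as a matrix: one row per equation,
# the last column holding the right-hand sides.
M = np.array([[ 2.0,  1.0, -1.0,   8.0],
              [-3.0, -1.0,  2.0, -11.0],
              [-2.0,  1.0,  2.0,  -3.0]])

M[1] += 1.5 * M[0]   # eliminate x from the second equation
M[2] += 1.0 * M[0]   # eliminate x from the third equation

M[0] += -2.0 * M[1]  # eliminate y from the first equation
M[2] += -4.0 * M[1]  # eliminate y from the third equation

M[0] += -2.0 * M[2]  # eliminate z from the first equation
M[1] +=  0.5 * M[2]  # eliminate z from the second equation

# Divide each equation by the coefficient of its remaining unknown.
print(M[:, 3] / np.array([M[0, 0], M[1, 1], M[2, 2]]))   # [ 2.  3. -1.]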

This algorithm also works for larger systems. Sometimes it is necessary to switch two equations: for instance, if y hadn't occurred in the second equation after our first step above, we would have switched the second and third equations and then eliminated y from the first equation. It is possible for the algorithm to get "stuck": for instance, if y hadn't occurred in either the second or the third equation after our first step above. In that case the system doesn't have a unique solution.

When implemented on a computer, one would typically store the system as its augmented matrix, i.e. the coefficients together with the column of right-hand sides; our original system would then look like

<math> \begin{pmatrix} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{pmatrix} </math>

and in the end we're left with

<math> \begin{pmatrix} 2 & 0 & 0 & 4 \\ 0 & .5 & 0 & 1.5 \\ 0 & 0 & -1 & 1 \end{pmatrix} </math>

or, after dividing the rows by 2, .5 and -1, respectively:

<math> \begin{pmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{pmatrix} </math>

This algorithm can be used on a computer for systems with thousands of equations and unknowns. For even larger systems whose coefficients follow a regular pattern, faster iterative methods have been developed. See system of linear equations.
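
In practice one would usually hand the stored matrix to an existing linear-algebra routine rather than re-implement the elimination; for example (our own sketch, not part of the article), with NumPy:

import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

# numpy.linalg.solve factors A by Gaussian elimination with partial
# pivoting (an LU decomposition) and then solves for the unknowns.
print(np.linalg.solve(A, b))   # [ 2.  3. -1.]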

Finding the inverse of a matrix

Suppose A is a square n-by-n matrix and you need to calculate its inverse. You attach the n-by-n identity matrix to the right of A, which produces an n-by-2n matrix. Then you start the Gauss-Jordan algorithm on that matrix. When the algorithm finishes, the identity matrix will appear on the left; the inverse of A can then be found to the right of the identity matrix.

If the algorithm gets "stuck" as explained above, then A is not invertible.
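
As a concrete sketch of this recipe (ours, not part of the article), SymPy can carry it out with exact arithmetic: attach the identity matrix, reduce, and read off the right half.

from sympy import Matrix, eye

A = Matrix([[ 2,  1, -1],
            [-3, -1,  2],
            [-2,  1,  2]])

augmented = A.row_join(eye(3))      # the n-by-2n matrix [A | I]
reduced, pivots = augmented.rref()  # Gauss-Jordan elimination (reduced row echelon form)

if len(pivots) < A.rows:
    print("A is not invertible")    # the algorithm got "stuck"
else:
    A_inv = reduced[:, A.cols:]     # the right half is the inverse of A
    print(A_inv * A)                # prints the identity matrix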

In practice, inverting a matrix is rarely required. Most of the time, one is really after the solution of a particular system of linear equations.

The general algorithm to compute ranks and bases

The Gauss-Jordan algorithm can be applied to any m-by-n matrix A. If we get "stuck" in a given column, we move on to the next column. In this way, for example, any 6-by-9 matrix can be transformed into a matrix in reduced row echelon form such as

<math> \begin{pmatrix} 1 & * & 0 & 0 & * & * & 0 & * & 0 \\ 0 & 0 & 1 & 0 & * & * & 0 & * & 0 \\ 0 & 0 & 0 & 1 & * & * & 0 & * & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & * & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} </math>

(the *'s stand for arbitrary entries). Note that all entries to the left of, above and below each leading one are zero. This reduced row echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has as basis the first, third, fourth, seventh and ninth columns of A (the columns containing the leading ones in T); and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
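
To experiment with this, one can use SymPy's rref method (our own illustration, not part of the article), which returns the reduced row echelon form together with the indices of the columns containing the leading ones:

from sympy import Matrix

A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

T, pivot_columns = A.rref()
print(T)              # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_columns)  # (0, 2): the first and third columns of A form a basis of its column space
print(A.rank())       # 2, the number of non-zero rows of T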

The Gauss-Jordan elimination can be performed over any field. The three basic operations used in the Gauss-Jordan elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix A with invertible m-by-m matrices from the left. In general, we can say:

To every m-by-n matrix A over the field K there exist an invertible m-by-m matrix S and a uniquely determined reduced row echelon matrix T such that A = ST.

The formal algorithm to compute T from A follows. We write A[i,j] for the entry in row i, column j in matrix A. The transformation is performed "in place", meaning that the original matrix A is lost and successively replaced by T.

i = 1
j = 1
while (i <= m and j <= n) do
  # Find pivot in column j, starting in row i:
  # (the entry of largest absolute value, for numerical stability)
  max_val = abs(A[i,j])
  max_ind = i
  for k = i+1 to m do
    if abs(A[k,j]) > max_val then
      max_val = abs(A[k,j])
      max_ind = k
    end_if
  end_for
  if max_val <> 0 then
    switch rows i and max_ind
    divide row i by A[i,j]            # the pivot entry A[i,j] becomes 1
    for u = 1 to m do
      if u <> i then
        add - A[u,j] * row i to row u # column j becomes zero above and below the pivot
      end_if
    end_for
    i = i + 1
  end_if
  j = j + 1
end_while

This algorithm differs slightly from the one discussed earlier: before eliminating a variable, it first exchanges rows to move the entry with the largest absolute value to the "pivot position". This partial pivoting improves the numerical stability of the algorithm; other pivoting variants are also in use.

Note that if the field is the real or complex numbers and floating point arithmetic is in use, the comparison "max_val <> 0" should be replaced by "max_val > eps" for some small, machine-dependent constant eps, since rounding errors make an exact comparison of floating point numbers with zero unreliable.
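
Putting the pieces together, here is a runnable Python/NumPy version of the pseudocode (our own sketch; the function name gauss_jordan and the default value of eps are our choices, not part of the article):

import numpy as np

def gauss_jordan(A, eps=1e-12):
    # Return the reduced row echelon form of A, working on a floating point copy.
    A = np.array(A, dtype=float)
    m, n = A.shape
    i = 0                                        # current pivot row (0-based indices)
    for j in range(n):                           # walk through the columns
        if i >= m:
            break
        # Partial pivoting: the row with the largest absolute value in column j.
        max_ind = i + np.argmax(np.abs(A[i:, j]))
        if abs(A[max_ind, j]) > eps:
            A[[i, max_ind]] = A[[max_ind, i]]    # switch rows i and max_ind
            A[i] /= A[i, j]                      # the pivot entry becomes 1
            for u in range(m):                   # clear column j above and below the pivot
                if u != i:
                    A[u] -= A[u, j] * A[i]
            i += 1
    return A

# The augmented matrix of the example system; the result has the
# rows [1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 1, -1].
print(gauss_jordan([[ 2,  1, -1,   8],
                    [-3, -1,  2, -11],
                    [-2,  1,  2,  -3]]))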


