Eigenvector

In linear algebra, the eigenvectors (from the German eigen, meaning "own") of a linear operator are those non-zero vectors which, when the operator is applied to them, yield a scalar multiple of themselves. The scalar is then called the eigenvalue associated with the eigenvector. The set of all eigenvalues is called the spectrum of the matrix or operator.

In applied mathematics and physics the eigenvectors of a matrix or a differential operator often have important physical significance. In classical mechanics the eigenvectors of the governing equations typically correspond to natural modes of vibration in a body, and the eigenvalues to their frequencies. In quantum mechanics, operators correspond to observable variables, eigenvectors are also called eigenstates, and the eigenvalues of an operator represent the possible values of the corresponding variable that a measurement can yield.

Examples

Intuitively, for linear transformations of two-dimensional space R2, eigenvectors behave as follows (the reflection case is verified numerically in the sketch after the list):

  • rotation: no real eigenvectors (unless the angle of rotation is a multiple of 180°)
  • reflection: eigenvectors perpendicular and parallel to the line of symmetry, with eigenvalues -1 and 1, respectively
  • scaling: all vectors are eigenvectors, and the eigenvalue is the scale factor
  • projection onto a line: eigenvectors with eigenvalue 1 are parallel to the line, eigenvectors with eigenvalue 0 are parallel to the direction of projection
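
As a quick illustration of the reflection case, here is a minimal numerical sketch; it assumes NumPy, which is a choice made for this example rather than part of the article, and uses reflection across the line y = x in R2:

  import numpy as np

  # Reflection of R2 across the line y = x.
  R = np.array([[0.0, 1.0],
                [1.0, 0.0]])

  parallel      = np.array([1.0,  1.0])   # lies on the line of symmetry
  perpendicular = np.array([1.0, -1.0])   # perpendicular to the line

  print(R @ parallel)        # [1. 1.]    -> eigenvalue  1
  print(R @ perpendicular)   # [-1.  1.]  -> eigenvalue -1 (equals -1 * perpendicular)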

Definition

Formally, we define eigenvectors and eigenvalues as follows: if A : V -> V is a linear operator on some vector space V, v is a non-zero vector in V, and c is a scalar (possibly zero) such that

<math>\mathbf{A} \mathbf{v} = c \mathbf{v},</math>

then we say that v is an eigenvector of the operator A, and its associated eigenvalue is <math>c</math>. Note that if v is an eigenvector with eigenvalue <math>c</math>, then any non-zero multiple of v is also an eigenvector with eigenvalue <math>c</math>. In fact, all the eigenvectors with associated eigenvalue <math>c</math>, together with 0, form a subspace of V, the eigenspace for the eigenvalue <math>c</math>.

For example, consider the matrix

<math>A =
\begin{bmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} </math>

which represents a linear operator R3 -> R3. One can check that

<math>A \begin{bmatrix}1 \\ 1 \\ -1 \end{bmatrix} = 2 \begin{bmatrix}1 \\ 1 \\ -1 \end{bmatrix}</math>

and therefore 2 is an eigenvalue of A, and we have found a corresponding eigenvector.

An important tool for describing eigenvalues of square matrices is the characteristic polynomial: saying that c is an eigenvalue of A is equivalent to stating that the system of linear equations (A - cI) x = 0 (where I is the identity matrix) has a non-zero solution x (namely an eigenvector), and so it is equivalent to the determinant det(A - cI) being zero. The function p(c) = det(A - cI) is a polynomial in c, since determinants are defined as sums of products. This is the characteristic polynomial of A; its zeros are precisely the eigenvalues of A. If A is an n-by-n matrix, then its characteristic polynomial has degree n, and A can therefore have at most n eigenvalues.
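
The computation above, and the determinant criterion for an eigenvalue, can be checked numerically. The following is a minimal sketch assuming NumPy is available (NumPy is not part of the article itself):

  import numpy as np

  A = np.array([[ 0, 1, -1],
                [ 1, 1,  0],
                [-1, 0,  1]], dtype=float)
  v = np.array([1, 1, -1], dtype=float)

  print(A @ v)                              # [ 2.  2. -2.], i.e. 2 * v, so 2 is an eigenvalue
  print(np.linalg.det(A - 2 * np.eye(3)))   # approximately 0: det(A - 2I) vanishes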

Returning to the example above, if we wanted to compute all of A's eigenvalues, we could determine the characteristic polynomial first:

<math>p(x) = \det( A - xI) = \det
 \begin{bmatrix}
-x & 1 & -1 \\ 1 & 1-x & 0 \\ -1 & 0 & 1-x \end{bmatrix} </math>

<math> = -x^3 + 2x^2 + x - 2</math>
and since <math>p(x) = -(x - 2) (x - 1) (x + 1)</math>, we see that the eigenvalues of A are 2, 1 and -1.
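
Equivalently, the eigenvalues are the roots of p. A quick numerical check of this factorization (again a NumPy sketch, with the coefficients of p listed in decreasing order of degree):

  import numpy as np

  # Roots of p(x) = -x^3 + 2x^2 + x - 2.
  print(np.roots([-1, 2, 1, -2]))   # [ 2.  1. -1.], up to ordering and rounding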

(In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Faster and more numerically stable methods are available, for instance the QR algorithm, which is based on the QR decomposition.)
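
For instance, a library routine such as numpy.linalg.eigvals (which delegates to LAPACK) computes the eigenvalues directly from the matrix and never forms the characteristic polynomial explicitly; a minimal sketch:

  import numpy as np

  A = np.array([[ 0, 1, -1],
                [ 1, 1,  0],
                [-1, 0,  1]], dtype=float)
  print(np.linalg.eigvals(A))   # approximately 2, 1 and -1, in some order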

Note that if A is a real matrix, the characteristic polynomial will have real coefficients, but not all of its roots are necessarily real. The non-real eigenvalues occur in complex-conjugate pairs, and each is associated with complex eigenvectors.

In general, if v1, ..., vm are eigenvectors corresponding to pairwise distinct eigenvalues λ1, ..., λm, then the vectors v1, ..., vm are necessarily linearly independent.
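
A sketch of the standard argument for the case m = 2 (not part of the original text): if a1 v1 + a2 v2 = 0, then applying A and subtracting λ2 times the same relation gives

<math>a_1 (\lambda_1 - \lambda_2) \mathbf{v}_1 = \mathbf{0},</math>

so a1 = 0 (because λ1 ≠ λ2 and v1 ≠ 0), and then a2 = 0 as well since v2 ≠ 0; the general case follows by induction on m.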

The spectral theorem for symmetric matrices states that, if A is a real symmetric n-by-n matrix, then all its eigenvalues are real, and there exist n linearly independent eigenvectors for A which all have length 1 and are mutually orthogonal.

Our example matrix from above is symmetric, and three mutually orthogonal eigenvectors of A are

<math>v_1 = \begin{pmatrix} 1\\ 1\\ -1\end{pmatrix}</math>
<math>v_2 = \begin{pmatrix} 0\\ 1\\ 1\end{pmatrix}</math>
<math>v_3 = \begin{pmatrix} 2\\ -1\\ 1\end{pmatrix}</math>
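
These claims are easy to verify numerically; here is a NumPy sketch (not part of the original article) checking the three eigenpairs and their mutual orthogonality. Dividing each vector by its length would give the orthonormal set promised by the spectral theorem.

  import numpy as np

  A  = np.array([[ 0, 1, -1],
                 [ 1, 1,  0],
                 [-1, 0,  1]], dtype=float)
  v1 = np.array([1,  1, -1], dtype=float)
  v2 = np.array([0,  1,  1], dtype=float)
  v3 = np.array([2, -1,  1], dtype=float)

  print(A @ v1, A @ v2, A @ v3)      # 2*v1, 1*v2, -1*v3, so the eigenvalues are 2, 1, -1
  print(v1 @ v2, v1 @ v3, v2 @ v3)   # 0.0 0.0 0.0, i.e. mutually orthogonal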

These three vectors form a basis of R3. With respect to this basis, the linear map represented by A takes a particularly simple form: every vector x in R3 can be written uniquely as

<math>\mathbf{x} = x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2 + x_3 \mathbf{v}_3</math>
and then we have
<math>\mathbf{A x} = 2x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2 - x_3 \mathbf{v}_3.</math>
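
In other words, in the eigenbasis the map simply scales each coordinate by the corresponding eigenvalue. A minimal NumPy sketch of this change of basis (the test vector x below is an arbitrary choice for illustration):

  import numpy as np

  A = np.array([[ 0, 1, -1],
                [ 1, 1,  0],
                [-1, 0,  1]], dtype=float)
  V = np.array([[ 1, 0,  2],
                [ 1, 1, -1],
                [-1, 1,  1]], dtype=float)   # columns are v1, v2, v3

  x = np.array([3.0, -1.0, 4.0])             # an arbitrary test vector
  x1, x2, x3 = np.linalg.solve(V, x)         # coordinates of x in the eigenbasis

  v1, v2, v3 = V.T                           # the eigenvectors themselves
  print(np.allclose(A @ x, 2*x1*v1 + x2*v2 - x3*v3))   # True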


