Covariance matrix

In statistics, the covariance matrix generalizes the concept of variance from one to n dimensions, or in other words from scalar-valued random variables to vector-valued random variables (tuples of scalar random variables). If X is a scalar-valued random variable with expected value μ then its variance is

<math>\sigma^2={\rm var}(X)=E((X-\mu)^2).</math>

If X is an n-by-1 column vector-valued random variable whose expected value is an n-by-1 column vector μ then its variance is the n-by-n nonnegative-definite matrix:

<math>\Sigma={\rm var}(X)=E((X-\mu)(X-\mu)^\top).</math>
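
For illustration, such a covariance matrix can be estimated from sample data. The following sketch assumes NumPy is available; the dimensions and covariance values are arbitrary example choices, not taken from the article.

 import numpy as np
 
 # Draw samples of an example 3-dimensional random vector X and estimate
 # its covariance matrix E[(X - mu)(X - mu)^T] from the data.
 rng = np.random.default_rng(0)
 true_cov = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.3],
                      [0.0, 0.3, 1.5]])
 samples = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=10_000)
 
 mu = samples.mean(axis=0)                                # estimated mean vector
 centered = samples - mu
 sigma_hat = centered.T @ centered / (len(samples) - 1)   # sample covariance matrix
 
 # Agrees with NumPy's built-in estimator.
 print(np.allclose(sigma_hat, np.cov(samples, rowvar=False)))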

The entries in this matrix are the covariances between the n scalar components of X. Since the covariance between a scalar-valued random variable and itself is its variance, the entries on the diagonal of this matrix are the variances of the scalar components of X. This may appear to be a property of the matrix that depends on which coordinate system is chosen for the space in which the random vector X resides. However, it is true generally that if u is any unit vector, then the variance of the projection of X onto u is <math>u^\top\Sigma u</math>. (This is a consequence of an identity that appears below.)
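
A rough numerical check of this property, again assuming NumPy and an arbitrary example covariance matrix:

 import numpy as np
 
 # The variance of the projection of X onto a unit vector u equals u^T Sigma u.
 rng = np.random.default_rng(1)
 sigma = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
 samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=sigma, size=200_000)
 
 u = np.array([1.0, 1.0]) / np.sqrt(2.0)    # a unit vector
 proj = samples @ u                         # projection of each sample onto u
 
 print(proj.var(ddof=1))    # empirical variance of the projection
 print(u @ sigma @ u)       # theoretical value u^T Sigma u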

Nomenclature differs. Some statisticians, following the probabilist William Feller, call this the variance of the random vector X, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector X.

With scalar-valued random variables X we have the identity

<math>{\rm var}(aX)=a^2{\rm var}(X)</math>

if a is constant, i.e., not random. If X is an n-by-1 column vector-valued random variable, and A is an m-by-n constant (i.e., non-random) matrix, then AX is an m-by-1 column vector-valued random variable, whose variance must therefore be an m-by-m matrix. It is

<math>{\rm var}(AX)=A\Sigma A^\top.</math>
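
A minimal numerical sketch of this identity, assuming NumPy and arbitrary example values for A and Σ:

 import numpy as np
 
 # Check that the empirical covariance of AX matches A Sigma A^T.
 rng = np.random.default_rng(2)
 sigma = np.array([[2.0, 0.5, 0.0],
                   [0.5, 1.0, 0.3],
                   [0.0, 0.3, 1.5]])
 A = np.array([[1.0, 2.0,  0.0],
               [0.0, 1.0, -1.0]])           # an arbitrary 2-by-3 constant matrix
 
 samples = rng.multivariate_normal(np.zeros(3), sigma, size=500_000)
 transformed = samples @ A.T                # each row is A x for one sample x
 
 print(np.cov(transformed, rowvar=False))   # empirical m-by-m covariance of AX
 print(A @ sigma @ A.T)                     # theoretical value A Sigma A^T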

The covariance matrix, though simple, is a useful tool in many different areas. From it a transformation matrix can be derived that completely decorrelates the data or, from a different point of view, finds an optimal basis for representing the data in a compact way. This is called PCA (principal components analysis) in statistics and the KL-transform (Karhunen-Loève transform) in image processing.
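
One common way to carry out this decorrelation is to diagonalize the covariance matrix. The sketch below assumes NumPy and arbitrary example data, and uses an eigendecomposition; other conventions (for example, additionally scaling by the eigenvalues) exist.

 import numpy as np
 
 # Decorrelate 2-dimensional data by changing to the eigenbasis of its
 # covariance matrix (the basic step behind PCA / the Karhunen-Loeve transform).
 rng = np.random.default_rng(3)
 sigma = np.array([[3.0, 1.2],
                   [1.2, 1.0]])
 data = rng.multivariate_normal([0.0, 0.0], sigma, size=100_000)
 
 centered = data - data.mean(axis=0)
 cov = np.cov(centered, rowvar=False)
 
 # Eigenvectors of the covariance matrix give an orthogonal change of basis
 # in which the components of the data are (empirically) uncorrelated.
 eigvals, eigvecs = np.linalg.eigh(cov)
 decorrelated = centered @ eigvecs
 
 print(np.cov(decorrelated, rowvar=False))  # approximately diagonal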


