The principal component <math>\mathbf{w}_1</math> of a dataset <math>\mathbf{x}</math> can be defined as
<math>\mathbf{w}_1 = \arg\max_{\Vert \mathbf{w} \Vert = 1} E\left\{ \left( \mathbf{w}^T \mathbf{x} \right)^2 \right\}.</math>
With the first <math>k - 1</math> components, the <math>k</math>th component can be found by subtracting the first <math>k - 1</math> principal components from <math>\mathbf{x}</math>:
<math>\mathbf{\hat{x}}_{k - 1} = \mathbf{x} - \sum_{i = 1}^{k - 1} \mathbf{w}_i \mathbf{w}_i^T \mathbf{x}</math>
and by substituting this as the new dataset in which to find a principal component:
<math>\mathbf{w}_k = \arg\max_{\Vert \mathbf{w} \Vert = 1} E\left\{ \left( \mathbf{w}^T \mathbf{\hat{x}}_{k - 1} \right)^2 \right\}.</math>
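This deflation procedure can be sketched numerically: find the unit vector maximizing the projected variance (the top eigenvector of the sample second-moment matrix), subtract that component from the data, and repeat. The dataset below is a hypothetical toy example, not one from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy zero-mean dataset: 200 samples of a 3-D measurement vector x.
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.2])
X = X - X.mean(axis=0)

def leading_component(X):
    """Unit w maximizing E{(w^T x)^2}: top eigenvector of X^T X."""
    vals, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, -1]              # eigh sorts eigenvalues ascending

components = []
Xk = X.copy()
for k in range(3):
    w = leading_component(Xk)
    components.append(w)
    # Deflation step: x_hat_k = x - w_k w_k^T x
    Xk = Xk - (Xk @ w)[:, None] * w[None, :]

W = np.array(components)            # rows are w_1, w_2, w_3
```

The recovered directions come out mutually orthogonal, as expected for eigenvectors of a symmetric matrix.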
A simpler way to calculate the components <math>\mathbf{w}_i</math> uses the covariance matrix of <math>\mathbf{x}</math>, the measurement vector. By finding the eigenvalues and eigenvectors of the covariance matrix, we find that the eigenvectors with the largest eigenvalues correspond to the directions along which the dataset varies most strongly. The original measurements are finally projected onto this reduced vector space.
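A minimal sketch of the covariance-matrix approach, using a hypothetical correlated 2-D dataset reduced to one dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical correlated 2-D measurements.
x = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
x = x - x.mean(axis=0)                  # center the measurement vectors

C = np.cov(x, rowvar=False)             # covariance matrix of x
eigvals, eigvecs = np.linalg.eigh(C)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the leading eigenvector: 2-D -> 1-D.
projected = x @ eigvecs[:, :1]
print(projected.shape)                  # (500, 1)
```

Keeping only the eigenvectors with the largest eigenvalues preserves most of the dataset's variance while discarding the weakly varying directions.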
Closely related is the method of empirical orthogonal functions (EOF).
Another method of dimensionality reduction is the self-organizing map.