Linear prediction

Linear prediction is a mathematical operation in which future values of a digital signal are estimated as a linear function of previous samples.

In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis (a subfield of mathematics), linear prediction can be viewed as a part of mathematical modelling or optimization.

The prediction model

The most common representation is

<math>x'(n) = \sum_{i=1}^p a_i x(n-i)</math>

where <math>x'(n)</math> is the estimated signal value, <math>x(n-i)</math> the previous observed values, and <math>a_i</math> the predictor coefficients. The error generated by this estimate is

<math>e(n) = x(n) - x'(n)</math>

where <math>x(n)</math> is the true signal value and <math>x'(n)</math> the estimated value.

These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the parameters <math>a_i</math> are chosen.
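As a concrete illustration of the model and its error, here is a minimal sketch assuming NumPy; the example signal and the coefficients <math>a_i</math> are arbitrary, hand-picked values, not coefficients estimated from the data:

 import numpy as np
 # Minimal sketch of the prediction model; the signal and the coefficients
 # a_1..a_p are arbitrary examples, not values optimised for this signal.
 x = np.array([1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 0.8, 1.05])
 a = np.array([0.7, 0.2, 0.05])   # a_1 .. a_p with p = 3 (hand-picked)
 p = len(a)
 # x'(n) = sum_{i=1}^{p} a_i x(n - i), evaluated for n = p .. len(x) - 1
 x_pred = np.array([a @ x[n - p:n][::-1] for n in range(p, len(x))])
 e = x[p:] - x_pred               # e(n) = x(n) - x'(n)
 print("predicted:", x_pred)
 print("error:    ", e)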

For multi-dimensional signals the error is often defined as

<math>e(n) = ||x(n) - x'(n)||</math>

where <math>||.||</math> is a suitably chosen vector norm.
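For instance, taking the Euclidean norm as the chosen <math>||.||</math>, a minimal sketch (assuming NumPy, with placeholder sample vectors) is:

 import numpy as np
 # Sketch of the multi-dimensional error with the Euclidean norm as the
 # chosen ||.||; x_true and x_est are placeholder sample vectors.
 x_true = np.array([1.0, 0.5, -0.2])
 x_est = np.array([0.9, 0.6, -0.1])
 e = np.linalg.norm(x_true - x_est)   # e(n) = ||x(n) - x'(n)||
 print(e)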

Estimating the parameters

The most common choice in optimising the parameters <math>a_i</math> is the root mean square criterion, also called the autocorrelation criterion. In this method we minimise the expected value of the squared error, <math>E\{e^2(n)\}</math>, which yields the equation

<math>\sum_{i=1}^p a_i R(j-i) = R(j)</math>, for <math>1 \leq j \leq p</math>,

where <math>R</math> is the autocorrelation of the signal <math>x(n)</math>, defined as <math>R(i) = E\{x(n)x(n-i)\}</math>. In the multi-dimensional case this corresponds to minimising the <math>L^2</math> norm.

The above equations are called the normal equations or Yule-Walker equations. In matrix form the equations can be equivalently written as

<math>Ra = r</math>,

where the autocorrelation matrix <math>R</math> is a symmetric Toeplitz matrix with elements <math>R_{i,j} = R(i-j)</math>, the vector <math>r</math> is the autocorrelation vector with elements <math>r_j = R(j)</math>, and <math>a</math> is the vector of predictor coefficients <math>a_i</math>.
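In practice the expectation is replaced by an estimate from a finite record of samples. The sketch below assumes NumPy; the helper name autocorrelation_lpc, the biased autocorrelation estimate, and the synthetic example signal are illustrative choices, not part of the method itself. It forms the Toeplitz system and solves <math>Ra = r</math> directly:

 import numpy as np
 def autocorrelation_lpc(x, p):
     """Estimate predictor coefficients a_1..a_p from samples x by the
     autocorrelation method, i.e. by solving R a = r.
     A sketch only: the biased estimate R(i) = (1/N) sum_n x(n) x(n-i)
     stands in for the expectation E{x(n) x(n-i)}.
     """
     N = len(x)
     R = np.array([np.dot(x[:N - i], x[i:]) / N for i in range(p + 1)])
     # Symmetric Toeplitz matrix with elements R(i-j), and vector r_j = R(j).
     R_mat = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
     r_vec = R[1:p + 1]
     return np.linalg.solve(R_mat, r_vec)
 # Example: a short synthetic signal (illustrative only).
 rng = np.random.default_rng(0)
 x = np.convolve(rng.standard_normal(500), [1.0, 0.6, 0.2])
 a = autocorrelation_lpc(x, p=3)
 print("predictor coefficients:", a)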

Another, more general, approach is to minimise the expected value of the squared error <math>E\{e^2(n)\}</math>, where the error is now defined as

<math>e(n) = \sum_{i=0}^p a_i x(n-i)</math>

where we usually constrain the parameters <math>a_i</math> with <math>a_0=1</math> to avoid the trivial solution. This constraint yields the same predictor as above (the coefficients <math>a_1, \dots, a_p</math> merely change sign), but the normal equations are then

<math>Ra = [1, 0, \dots , 0]^T</math>,

where the index <math>i</math> ranges from 0 to <math>p</math> and the size of <math>R</math> is <math>(p+1) \times (p+1)</math>. The solution of this system is proportional to the constrained parameter vector and is rescaled so that <math>a_0 = 1</math>.
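As a quick check of the relationship between the two formulations, the following sketch (assuming NumPy, with an illustrative autocorrelation sequence) solves the <math>(p+1) \times (p+1)</math> system with right-hand side <math>[1, 0, \dots, 0]^T</math>, rescales so that <math>a_0 = 1</math>, and compares the result with the predictor coefficients of the first formulation; they agree up to sign:

 import numpy as np
 # Illustrative autocorrelation sequence R(0)..R(p), not taken from any
 # particular signal.
 R = np.array([1.0, 0.5, 0.2, 0.05])
 p = len(R) - 1
 T = np.array([[R[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])
 # First formulation: solve the p x p system R a = r.
 a_pred = np.linalg.solve(T[1:, 1:], R[1:])
 # Constrained formulation: solve the (p+1) x (p+1) system with right-hand
 # side [1, 0, ..., 0]^T, then rescale so that a_0 = 1.
 e1 = np.zeros(p + 1)
 e1[0] = 1.0
 a_con = np.linalg.solve(T, e1)
 a_con /= a_con[0]
 # The constrained coefficients a_1..a_p are the negatives of the predictor
 # coefficients above, so both describe the same predictor.
 print(a_pred)
 print(-a_con[1:])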

Optimisation of the parameters is a wide topic, and a large number of other approaches have been proposed. Still, the autocorrelation method is the most common, and it is used, for example, for speech coding in the GSM standard.

Solution of the matrix equation <math>Ra = r</math> is computationally a relatively expensive process. The Gauss algorithm for matrix inversion is probably the oldest solution, but this approach does not efficiently use the symmetry of <math>R</math> and <math>r</math>. A faster algorithm is the Levinson recursion, proposed by N. Levinson in 1947, which recursively calculates the solution. Later, Delsarte et al. proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions. It uses a special symmetrical property of parameter vectors on subsequent recursion levels.
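Below is a sketch of the Levinson recursion in its Durbin form, assuming NumPy; the recursion itself is standard, while the function name and the example autocorrelation values are illustrative. It solves the normal equations in <math>O(p^2)</math> operations rather than the <math>O(p^3)</math> of general elimination:

 import numpy as np
 def levinson_durbin(R, p):
     """Solve the normal equations for a_1..a_p from autocorrelations
     R(0)..R(p) by the Levinson(-Durbin) recursion.
     Returns the predictor coefficients (so that x'(n) = sum a_i x(n-i))
     and the final prediction error power. A sketch, not a tuned
     implementation.
     """
     a = np.zeros(p + 1)
     a[0] = 1.0                       # polynomial form with a_0 = 1
     err = R[0]                       # prediction error power
     for m in range(1, p + 1):
         # Reflection coefficient for order m.
         k = -(R[m] + np.dot(a[1:m], R[m - 1:0:-1])) / err
         a[1:m] = a[1:m] + k * a[m - 1:0:-1]
         a[m] = k
         err *= 1.0 - k * k
     return -a[1:], err               # predictor coefficients, error power
 # Example with an illustrative autocorrelation sequence.
 R = np.array([1.0, 0.5, 0.2, 0.05])
 a, err = levinson_durbin(R, p=3)
 print("predictor coefficients:", a)
 print("prediction error power:", err)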


