Polynomial interpolation

Polynomial interpolation is the practice of fitting a polynomial to a given function whose values are known at certain discrete data points. This "function" may in fact be any set of discrete data (such as values obtained by sampling), but it is generally assumed that such data can be described by a function. Polynomial interpolation is an area of inquiry in numerical analysis.

Polynomial interpolation relies on the Weierstrass approximation theorem, which states that for any function <math>f</math> that is continuous on the interval <math>[a,b]</math> there exists a sequence of polynomials <math>p_n</math> such that, if each polynomial is written as

<math>p_n(t) = a_n t^n + a_{n-1} t^{n-1} + \cdots + a_1 t + a_0, \quad a_i \in \mathbf{R},</math>

then

<math>\lim_{n \rightarrow \infty} p_n(t) = f(t)</math>

holds uniformly on <math>[a,b]</math>, where <math>n</math> is the degree of the polynomial. The set <math>P_n</math> of all polynomials of degree at most <math>n</math> forms a linear space of dimension <math>n+1</math>, and the monomials <math>1, t, t^2, t^3, \ldots, t^n</math> form a basis of this space.


Fitting a Polynomial to Given Data Points

We want to determine the coefficients <math>a_0, a_1, a_2, \ldots, a_n</math> so that the resulting polynomial of degree <math>n</math> interpolates a given data set <math>(t_0,y_0), (t_1,y_1), (t_2,y_2), \ldots, (t_j,y_j)</math>. Since the data set supplies only <math>j+1</math> pieces of information, it cannot determine a polynomial of degree greater than <math>j</math> uniquely, so we assume that <math>n = j</math> and require:

<math>p(t_i) = y_i, \forall i \in \left\{ 0, 1, \cdots j\right\}</math>

Writing all these conditions as a single matrix–vector equation, with the coefficients <math>a_0, \ldots, a_n</math> as the unknowns, we obtain the system:

<math>\begin{pmatrix}
t_0^n & t_0^{n-1} & t_0^{n-2} & \ldots & t_0 & 1 \\ t_1^n & t_1^{n-1} & t_1^{n-2} & \ldots & t_1 & 1 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ t_n^n & t_n^{n-1} & t_n^{n-2} & \ldots & t_n & 1 \end{pmatrix} \begin{pmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix} </math>

The leftmost matrix is commonly referred to as a Vandermonde matrix, named after the mathematician Alexandre-Théophile Vandermonde. This system may be solved either by hand or by machine, for example using Gauss–Jordan elimination. It can be proved that, given <math>n+1</math> mutually distinct points <math>t_i</math> (no two the same), there is exactly one polynomial <math>p</math> in <math>P_n</math>, of degree at most <math>n</math>, that solves this interpolation task. This is called the unisolvence theorem, and it can be proved by assuming the opposite and deriving a contradiction.
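For illustration, the following is a minimal sketch (not part of the original article) of building and solving this Vandermonde system with NumPy; the interpolation nodes t and values y are assumed to be given, with all nodes distinct.

import numpy as np

def interpolate_coefficients(t, y):
    """Return the coefficients a_n, ..., a_0 of the interpolating polynomial."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    # Vandermonde matrix with columns t^n, t^(n-1), ..., t, 1,
    # matching the system written above.
    V = np.vander(t, N=len(t))
    # Solve V a = y for the coefficient vector a = (a_n, ..., a_0).
    return np.linalg.solve(V, y)

# Example: the quadratic through (0, 1), (1, 3), (2, 7) is p(t) = t^2 + t + 1.
a = interpolate_coefficients([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])
print(np.polyval(a, 1.5))  # evaluates p(1.5) = 4.75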

Non-Vandermonde Solutions

Solving the Vandermonde system is generally a costly operation (as counted in clock cycles of a computer trying to do the job). Therefore, several other clever ways of constructing the same unique polynomial have been devised, such as the Lagrange and Newton forms of the interpolation polynomial; a sketch of the Lagrange form follows.
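As an illustration (a minimal sketch, not part of the original article), the Lagrange form evaluates the same unique interpolating polynomial directly from the data, without solving a linear system; the node and value lists t and y are assumed inputs.

def lagrange_eval(t, y, x):
    """Evaluate the interpolating polynomial through (t[i], y[i]) at the point x."""
    n = len(t)
    total = 0.0
    for i in range(n):
        # Lagrange basis polynomial L_i(x): equal to 1 at t[i] and 0 at every other node.
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - t[j]) / (t[i] - t[j])
        total += y[i] * L
    return total

# Example: the same quadratic through (0, 1), (1, 3), (2, 7).
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 1.5))  # prints 4.75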

The Error of Polynomial Interpolation

A standard statement of the interpolation error, as commonly given in numerical-analysis texts, is the following: if <math>f</math> is <math>n+1</math> times continuously differentiable on <math>[a,b]</math> and <math>p</math> is the polynomial of degree at most <math>n</math> interpolating <math>f</math> at the nodes <math>t_0, t_1, \ldots, t_n</math>, then for every <math>t</math> in <math>[a,b]</math> there exists a point <math>\xi</math> in <math>(a,b)</math> such that
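<math>f(t) - p(t) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (t - t_i)</math>

The error therefore depends both on the smoothness of <math>f</math> and on where the nodes <math>t_i</math> are placed.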

Disadvantages of Polynomial Interpolation

When the interpolation polynomial reaches a certain degree, it tends to oscillate wildly between the data points, especially near the ends of the interval. This is called Runge's phenomenon. Although the problem can be partially avoided, for example by choosing the interpolation nodes at the roots of Chebyshev polynomials, the solution most often preferred in practice is to use several polynomials of lower degree, connected in chains. These are called splines.
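The effect can be seen numerically; the following minimal sketch (not part of the original article) compares equally spaced nodes with Chebyshev nodes for Runge's function <math>f(t) = 1/(1+25t^2)</math> on <math>[-1,1]</math>.

import numpy as np

def runge(t):
    return 1.0 / (1.0 + 25.0 * t ** 2)

n = 10  # polynomial degree
# Equally spaced nodes on [-1, 1] versus Chebyshev nodes (roots of T_{n+1}).
t_equi = np.linspace(-1.0, 1.0, n + 1)
t_cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))

x = np.linspace(-1.0, 1.0, 1001)
for name, nodes in [("equispaced", t_equi), ("Chebyshev", t_cheb)]:
    # With n+1 nodes and degree n, polyfit returns the interpolating polynomial.
    p = np.polyfit(nodes, runge(nodes), n)
    err = np.max(np.abs(np.polyval(p, x) - runge(x)))  # maximum error on a fine grid
    print(name, err)
# The equispaced error is large near the interval ends (Runge's phenomenon),
# while the Chebyshev-node error is much smaller.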


