
# Newton polynomial

Newton polynomials (named after Isaac Newton) are a form of polynomial used for polynomial interpolation. Rather than solving the large Vandermonde matrix equation that arises in polynomial interpolation by Gauss-Jordan elimination, we notice that we can avoid it by writing the polynomial in a different, more convenient form. Given a data set:

$(t_1,y_1), (t_2,y_2), \ldots, (t_n,y_n)$

where no two $t_i$ are the same, we assume the $y_i$ are values of a function $f$ at the points $t_i$, that is, $y_i = f(t_i)$, which we abbreviate $f_i$. It is a standard result of polynomial interpolation that there exists a unique polynomial of degree at most $n-1$ that passes through all these points, and we write it thusly:

$P(t) = c_0 + c_1(t-t_1) + c_2(t-t_1)(t-t_2) + \ldots + c_{n-1}(t-t_1)(t-t_2)\cdots(t-t_{n-1})$

or, more formally:

$P(t) = \sum_{i=0}^{n-1} c_i \prod_{j=1}^{i} (t-t_j)$

where the empty product for $i=0$ is taken to be $1$.

Requiring that $P$ interpolate the data gives:

$P(t_k) = f(t_k), \quad k = 1, 2, \ldots, n$

and these equations may be written out as:

$c_0 = f_1$
$c_0 + c_1(t_2-t_1) = f_2$
$c_0 + c_1(t_3-t_1) + c_2(t_3-t_1)(t_3-t_2) = f_3$
...

And the solution giving the coefficients $c_i$ is thus:

$c_0 = f_1$
$c_1 = \frac{f_2-f_1}{t_2-t_1}$
$c_2 = \frac{f_3-f_1-\frac{t_3-t_1}{t_2-t_1}(f_2-f_1)}{(t_3-t_1)(t_3-t_2)}$
...

and this equation system quickly grows to unwieldy proportions. Thus we need a better way to compute the coefficients $c_k$. It turns out there exists a recursive formula for doing this efficiently.
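Before turning to that formula, note that the system above is lower triangular, so it can at least be solved directly by forward substitution. A minimal Python sketch (the function name and list representation are illustrative, not from the article):

```python
def newton_coeffs_forward(t, f):
    """Solve the triangular system c_0 = f_1, c_0 + c_1(t_2 - t_1) = f_2, ...
    for the Newton coefficients by forward substitution."""
    c = []
    for k in range(len(t)):
        val = 0.0   # the partial polynomial P_{k-1} evaluated at t_k
        prod = 1.0  # running product (t_k - t_1)(t_k - t_2)...
        for i in range(k):
            val += c[i] * prod
            prod *= t[k] - t[i]
        # here prod = (t_k - t_1)...(t_k - t_{k-1}), the factor multiplying c_k
        c.append((f[k] - val) / prod)
    return c
```

For example, sampling $f(t)=t^2$ at $t = 1, 2, 3$ gives `newton_coeffs_forward([1.0, 2.0, 3.0], [1.0, 4.0, 9.0])` → `[1.0, 3.0, 1.0]`, i.e. $P(t) = 1 + 3(t-1) + (t-1)(t-2) = t^2$.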

We see that the polynomial $P_k(t)$ for a certain $k$ may be defined recursively, thus:

$P_0(t) = c_0$
$P_k(t) = P_{k-1}(t) + c_k(t-t_1)(t-t_2)...(t-t_k)$

so that $P_k$ interpolates the function $f$ at the points $t_1, t_2, \ldots, t_{k+1}$. Each coefficient $c_k$ depends only on the obtained function values of $f$ (our $y_k$:s): $c_0$ depends only on $f_1$, $c_1$ only on $f_1, f_2$, and in general $c_k$ only on $f_1, f_2, \ldots, f_{k+1}$. We therefore define a notation for these quantities, the divided differences:

$c_k = f[t_1, t_2, \ldots, t_{k+1}]$

The divided differences are defined recursively by:

$f[t_i] = f(t_i) = f_i = y_i$
$f[t_1, t_2, \ldots, t_{k+1}] = \frac{f[t_2, t_3, \ldots, t_{k+1}] - f[t_1, t_2, \ldots, t_k]}{t_{k+1} - t_1}$

So that for example:

$f[t_1,t_2] = \frac{f_2-f_1}{t_2-t_1}$
$f[t_1,t_2,t_3] = \frac{f[t_2,t_3] - f[t_1, t_2]}{t_3-t_1}$
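The recursive definition translates directly into code. A Python sketch (names are ours, chosen for illustration), where `t` and `f` are matching lists of nodes and function values:

```python
def divided_difference(t, f):
    """Compute f[t_1, ..., t_k] straight from the recursive definition."""
    if len(t) == 1:
        return f[0]                                # base case: f[t_i] = f_i
    left = divided_difference(t[:-1], f[:-1])      # f[t_1, ..., t_{k-1}]
    right = divided_difference(t[1:], f[1:])       # f[t_2, ..., t_k]
    return (right - left) / (t[-1] - t[0])
```

With $f(t)=t^2$ sampled at $1, 2, 3$: `divided_difference([1.0, 2.0], [1.0, 4.0])` → `3.0` and `divided_difference([1.0, 2.0, 3.0], [1.0, 4.0, 9.0])` → `1.0`. Note that this naive recursion recomputes subproblems and costs exponential time; the tabular arrangement below shares them.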

These are not easily grasped when written like this, but once the values are arranged in tabular form, things look simpler. Here, for example, is the table for a data set $(t_1,y_1), (t_2,y_2), (t_3,y_3), (t_4,y_4), (t_5,y_5)$ (and we know that $f_k = y_k$ for all $k$):

| $t_i$ | $f_i$ | first | second | third | fourth |
|---|---|---|---|---|---|
| $t_1$ | $f_1$ | | | | |
| $t_2$ | $f_2$ | $f[t_1,t_2]$ | | | |
| $t_3$ | $f_3$ | $f[t_2,t_3]$ | $f[t_1,t_2,t_3]$ | | |
| $t_4$ | $f_4$ | $f[t_3,t_4]$ | $f[t_2,t_3,t_4]$ | $f[t_1,t_2,t_3,t_4]$ | |
| $t_5$ | $f_5$ | $f[t_4,t_5]$ | $f[t_3,t_4,t_5]$ | $f[t_2,t_3,t_4,t_5]$ | $f[t_1,t_2,t_3,t_4,t_5]$ |

On the diagonal of this table (the last entry in each row) you will find the coefficients: $c_0=f_1$, $c_1=f[t_1,t_2]$, $c_2=f[t_1,t_2,t_3]$, $c_3=f[t_1,t_2,t_3,t_4]$, $c_4=f[t_1,t_2,t_3,t_4,t_5]$. Insert these into the formula at the top and you have your unique interpolation polynomial:
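Building the table column by column shares all the subcomputations, for a total cost of $O(n^2)$. A Python sketch (the function name is illustrative):

```python
def newton_coefficients(t, f):
    """Build the divided-difference table one column at a time.
    The first entry of column k is the coefficient c_k = f[t_1, ..., t_{k+1}]."""
    col = list(f)          # column 0 holds the function values f_1, ..., f_n
    coeffs = [col[0]]
    for k in range(1, len(t)):
        # entry i of the new column is f[t_{i+1}, ..., t_{i+k+1}]
        col = [(col[i + 1] - col[i]) / (t[i + k] - t[i])
               for i in range(len(col) - 1)]
        coeffs.append(col[0])
    return coeffs
```

Again with $f(t)=t^2$ at $t = 1, 2, 3$: `newton_coefficients([1.0, 2.0, 3.0], [1.0, 4.0, 9.0])` → `[1.0, 3.0, 1.0]`.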

$P(t) = \sum_{i=0}^{n-1} f[t_1, \ldots, t_{i+1}] \prod_{j=1}^{i} (t-t_j)$

expanding into:

$P(t) = f_1 + f[t_1,t_2](t-t_1) + \ldots + f[t_1,\ldots,t_n](t-t_1)(t-t_2)\cdots(t-t_{n-1})$

This is usually called the Newton form of the interpolation polynomial, or Newton's divided-difference interpolation polynomial.
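The Newton form is cheap to evaluate by nested multiplication, in the same spirit as Horner's rule: $P(x) = c_0 + (x-t_1)\bigl(c_1 + (x-t_2)(c_2 + \ldots)\bigr)$. A Python sketch (names are illustrative):

```python
def newton_eval(t, coeffs, x):
    """Evaluate the Newton-form polynomial at x by nested multiplication."""
    result = coeffs[-1]                 # innermost coefficient c_{n-1}
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - t[k]) + coeffs[k]
    return result
```

Continuing the $f(t)=t^2$ example with nodes `[1.0, 2.0, 3.0]` and coefficients `[1.0, 3.0, 1.0]`: `newton_eval([1.0, 2.0, 3.0], [1.0, 3.0, 1.0], 4.0)` → `16.0`, matching $4^2$. Only $n-1$ multiplications are needed, and the last node $t_n$ is never used, since it does not appear in any factor.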


All Wikipedia text is available under the terms of the GNU Free Documentation License
