Polynomial

In algebra, a polynomial function, or polynomial for short, is a function of the form

<math>f(x) = a_n x^n + a_{n - 1} x^{n - 1} + \cdots + a_1 x + a_0</math>

where x is a scalar-valued variable and a_0, ..., a_n are fixed scalars, called the coefficients of the polynomial f. The highest occurring power of x (n, if the coefficient a_n is not zero) is called the degree of f; its coefficient is called the leading coefficient. a_0 is called the constant coefficient of f. Each summand of the form a_k x^k is called a term of the polynomial.

Monomials, binomials and trinomials are special cases of polynomials with one, two and three terms respectively.

The polynomial can be written in sigma notation as:

<math>f(x) = \sum_{r = 0}^{n} a_r x^r</math>

In calculus, the scalars are almost always real or complex numbers.

Polynomials of

  • degree 0 are called constant functions,
  • degree 1 are called linear functions,
  • degree 2 are called quadratic functions,
  • degree 3 are called cubic functions,
  • degree 4 are called quartic functions and
  • degree 5 are called quintic functions.

The function f(x) = -7x^3 + (2/3)x^2 - 5x + 3 is an example of a cubic function with leading coefficient -7 and constant coefficient 3.

Polynomials are important because they are the simplest functions: their definition involves only addition and multiplication (the powers are just shorthands for repeated multiplication). They are also simple in a different sense: the polynomials of degree ≤ n are precisely those functions whose (n+1)st derivative is identically zero. One important aspect of calculus is the analysis of complicated functions by approximating them with polynomials. The culmination of these efforts is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial, and the Weierstrass approximation theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial.
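
For instance, near x = 0 the exponential function is approximated by its Taylor polynomials

<math>e^x \approx 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}</math>

with the error shrinking as the degree n grows.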

Quotients of polynomials are called rational functions. Piecewise rationals are the only functions that can be evaluated directly on a computer, since typically only the operations of addition, multiplication, division and comparison are implemented in hardware. All the other functions that computers need to evaluate, such as trigonometric functions, logarithms and exponential functions, must then be approximated in software by suitable piecewise rational functions.

In order to determine function values of polynomials for given values of the variable x, one does not apply the polynomial as a formula directly, but uses the much more efficient Horner scheme instead. If the evaluation of a polynomial at many equidistant points is required, Newton's difference method reduces the amount of work dramatically. The Difference Engine of Charles Babbage was designed to create large tables of values of logarithms and trigonometric functions automatically by evaluating approximating polynomials at many points using Newton's difference method.
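
As an illustration, here is a minimal sketch of the Horner scheme in Python (the coefficient ordering, leading coefficient first, is a convention chosen only for this sketch):

def horner(coeffs, x):
    """Evaluate a polynomial at x, given its coefficients from the
    leading one down to the constant, e.g. [-7, 2/3, -5, 3] for
    -7x^3 + (2/3)x^2 - 5x + 3."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# horner([-7, 2/3, -5, 3], 2.0) evaluates the cubic above at x = 2 with
# three multiplications instead of the six needed by computing each power.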

A root or zero of the polynomial f(x) is a number r such that f(r) = 0. Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. Some polynomials, such as f(x) = x^2 + 1, do not have any roots among the real numbers. If however the set of allowed candidates is expanded to the complex numbers, every (non-constant) polynomial has a root (see Fundamental Theorem of Algebra).

Approximations for the real roots of a given polynomial can be found using Newton's method, or more efficiently using Laguerre's method, which employs complex arithmetic and can locate all complex roots. These algorithms are studied in numerical analysis.
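
A sketch of Newton's iteration for a real root, again in Python and reusing the horner function above; the starting guess x0 is up to the caller, and convergence is not guaranteed in general:

def polynomial_newton(coeffs, x0, tol=1e-12, max_iter=100):
    # Coefficients of the derivative: d/dx (a_n x^n + ...) = n a_n x^(n-1) + ...
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]
    x = x0
    for _ in range(max_iter):
        fx = horner(coeffs, x)
        dfx = horner(deriv, x)
        if dfx == 0:
            break          # derivative vanished; the Newton step is undefined
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: polynomial_newton([1, 0, -2], 1.0) approximates the positive
# root of x^2 - 2, i.e. the square root of 2.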

There is a difference between approximating roots and finding concrete closed formulas for them. Formulas for the roots of polynomials of degree up to 4 have been known since the sixteenth century (see quadratic formula, Cardano, Tartaglia). But formulas for degree 5 eluded researchers for a long time. In 1824, Abel proved the striking result that there can be no general formula (involving only the arithmetical operations and radicals) for the roots of a polynomial of degree ≥ 5 in terms of its coefficients (see Abel-Ruffini theorem). This result marked the start of Galois theory which engages in a detailed study of relations among roots of polynomials.

In multivariate calculus, polynomials in several variables play an important role. These are the simplest multivariate functions and can be defined using addition and multiplication alone. An example of a polynomial in the variables x, y, and z is

<math>f(x, y, z) = 2 x^2 y z^3 - 3 y^2 + 5 y z - 2</math>

The total degree of such a multivariate polynomial is obtained by adding the exponents of the variables in every term and taking the maximum. The terms of the above polynomial f(x,y,z) have degrees 2+1+3 = 6, 2, 1+1 = 2 and 0, so its total degree is 6.
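
The total degree is straightforward to read off once each term is represented by its tuple of exponents; a small Python sketch (the tuple representation is chosen purely for illustration):

def total_degree(terms):
    # each term is a tuple of exponents, e.g. 2x^2 y z^3 -> (2, 1, 3)
    return max(sum(exponents) for exponents in terms)

# Terms of f(x, y, z) = 2x^2 y z^3 - 3y^2 + 5yz - 2:
print(total_degree([(2, 1, 3), (0, 2, 0), (0, 1, 1), (0, 0, 0)]))  # prints 6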

In computer science, O-notation is used to describe growth rates: a polynomial of degree n grows as O(x^n), so an algorithm whose running time is given by such a polynomial is said to run in O(x^n) time. For example, take the polynomials:

<math>f(x) = 6x^4-2x^3+5</math>
<math>g(x) = x^4</math>

We say that f has order O(x^4): by the definition of O-notation, there is a constant C such that |f(x)| ≤ C |g(x)| for all sufficiently large x (here, for all x ≥ 1).

Proof:

<math>|6x^4-2x^3+5| \le 6x^4 + 2x^3 + 5</math> by the triangle inequality,
<math>6x^4 + 2x^3 + 5 \le 6x^4 + 2x^4 + 5x^4</math> because x^3 ≤ x^4 and 1 ≤ x^4 when x ≥ 1,
<math>6x^4 + 2x^4 + 5x^4 = 13x^4 = 13|x^4|.</math>

From the definition of O-notation above (take C = 13), the polynomial <math>6x^4-2x^3+5</math> is in O(x^4).

Abstract Algebra

In abstract algebra, one has to carefully distinguish between polynomials and polynomial functions. A polynomial f is defined to be a formal expression of the form

<math>f = a_n X^n + a_{n - 1} X^{n - 1} + \cdots + a_1 X + a_0</math>

where the coefficients a_0, ..., a_n are elements of some ring R and X is considered to be a formal symbol. Two polynomials are considered to be equal if and only if the sequences of their coefficients are equal. Polynomials with coefficients in R can be added by simply adding corresponding coefficients and multiplied using the distributive law and the rules

Xa = aX for all elements a of the ring R, and

X^k X^l = X^{k+l} for all natural numbers k and l.

One can then check that the set of all polynomials with coefficients in the ring R forms itself a ring, the ring of polynomials over R, which is denoted by R[X]. If R is commutative, then R[X] is an algebra over R.
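
The addition and multiplication rules above can be made concrete with a small Python sketch, storing a polynomial as the list of its coefficients with the constant term first (a convention chosen only for this sketch); the entries may come from any ring whose elements support + and *:

def poly_add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))          # pad the shorter polynomial with zeros
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] += a * b      # uses X^i * X^j = X^(i+j)
    return result

# (1 + X) * (1 - X) = 1 - X^2:
# poly_mul([1, 1], [1, -1])  ->  [1, 0, -1]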

One can think of the ring R[X] as arising from R by adding one new element X to R and only requiring that X commute with all elements of R. In order for R[X] to form a ring, all sums of powers of X have to be included as well. Forming polynomial rings and forming factor rings by factoring out ideals are important tools for constructing new rings out of known ones. For instance, the clean construction of finite fields involves the use of those operations, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).

To every polynomial f in R[X], one can associate a polynomial function with domain and range equal to R. One obtains the value of this function for a given argument r by everywhere replacing the symbol X in f's expression by r. The reason that algebraists have to distinguish between polynomials and polynomial functions is that over some rings R (for instance over finite fields), two different polynomials may give rise to the same polynomial function. This is not the case over the real or complex numbers and therefore analysts don't separate the two concepts.
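
A standard example: over the field with two elements, the polynomials X^2 + X and 0 are different as polynomials (their coefficient sequences differ), yet they define the same polynomial function, since r^2 + r = 0 both for r = 0 and for r = 1.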

In commutative algebra, one major focus of study is divisibility among polynomials. If R is an integral domain and f and g are polynomials in R[X], we say that f divides g if there exists a polynomial q in R[X] such that f q = g. One can then show that "every zero gives rise to a linear factor", or more formally: if f is a polynomial in R[X] and r is an element of R such that f(r) = 0, then the polynomial (X - r) divides f. The converse is also true. The quotient can be computed using the Horner scheme.

If F is a field and f and g are polynomials in F[X] with g ≠ 0, then there exist polynomials q and r in F[X] with

f = q g + r
and such that r = 0 or the degree of r is smaller than the degree of g. The polynomials q and r are uniquely determined by f and g. This is called "division with remainder" or "long division" and shows that the ring F[X] is a Euclidean domain.
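
A sketch of division with remainder in Python, using the constant-term-first coefficient lists from the earlier sketch and taking F to be the rational numbers via Python's Fraction type; g is assumed to be non-zero:

from fractions import Fraction

def poly_divmod(f, g):
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while g and g[-1] == 0:
        g.pop()                              # normalise g: drop trailing zeros
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        factor = r[-1] / g[-1]               # eliminate the leading term of r
        shift = len(r) - len(g)
        q[shift] = factor
        for i, c in enumerate(g):
            r[i + shift] -= factor * c
        while r and r[-1] == 0:
            r.pop()
    return q, r                              # f = q*g + r, with r shorter than g

# Example: X^2 - 1 divided by X - 1 gives quotient X + 1 and remainder 0:
# poly_divmod([-1, 0, 1], [-1, 1])  ->  ([1, 1], [])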

One also speaks of polynomials in several variables, obtained by taking the ring of polynomials of a ring of polynomials: R[X,Y] = (R[X])[Y] = (R[Y])[X]. These are of fundamental importance in algebraic geometry which studies the simultaneous zero sets of several such multivariate polynomials.

Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element.
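
For example, the characteristic polynomial of the 2 x 2 matrix with rows (a, b) and (c, d) is

<math>\lambda^2 - (a + d)\lambda + (ad - bc)</math>

and its roots are exactly the eigenvalues of the matrix.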

Other related objects studied in abstract algebra are formal power series, which are like polynomials but may have infinitely many terms, and rational functions, which are ratios of polynomials.
