In calculus, an infinitesimal is a number greater than zero yet smaller than any positive real number. If x is an infinitesimal and x > 0, then any finite sum x + ... + x is less than 1, no matter how large the finite number of terms in the sum. Furthermore, 1/x is larger than any positive real number. Of course, there exists no infinitesimal real number.
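One way to state these properties formally is the following, where n stands for an arbitrary positive integer (a sketch of the statement above; the alignment assumes the amsmath package):

```latex
\begin{align*}
  % x is smaller than every positive real, hence smaller than every 1/n
  &0 < x < \tfrac{1}{n}                                          && \text{for every positive integer } n,\\
  % consequently any finite sum of copies of x stays below 1
  &\underbrace{x + x + \cdots + x}_{n\ \text{terms}} = n\,x < 1  && \text{for every positive integer } n,\\
  % and the reciprocal exceeds every positive integer, hence every positive real
  &\tfrac{1}{x} > n                                              && \text{for every positive integer } n.
\end{align*}
```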
When Newton and Leibniz developed the calculus, they made use of infinitesimals. A typical argument might go:
- To find the derivative f′(x) of the function f(x) = x², let dx be an infinitesimal. Then f′(x) = (f(x + dx) − f(x))/dx = (x² + 2x·dx + dx² − x²)/dx = 2x + dx = 2x, since dx is infinitesimally small.
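Written out as a display, the computation in this argument reads (the same steps as above, in LaTeX notation):

```latex
\begin{align*}
  f'(x) &= \frac{f(x + dx) - f(x)}{dx}
         = \frac{(x + dx)^2 - x^2}{dx}\\
        &= \frac{x^2 + 2x\,dx + dx^2 - x^2}{dx}
         = 2x + dx = 2x.
\end{align*}
% The last equality discards dx on the grounds that it is infinitesimally small.
```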
This argument, while intuitively appealing and producing the correct result, is not mathematically rigorous. The use of infinitesimals was attacked as incorrect by George Berkeley in his work The Analyst: or a Discourse Addressed to an Infidel Mathematician. The fundamental problem is that dx is first treated as non-zero (because we divide by it), but is later discarded as if it were zero.
It was not until the second half of the nineteenth century that the calculus was given a formal mathematical foundation by Karl Weierstrass and others using the notion of a limiting process, which obviates the need to use infinitesimals.
Nevertheless, the use of infinitesimals continues to be convenient for simplifying notation and calculation.
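For comparison, the limit-based treatment handles the same example without any infinitesimal quantity; here h is an ordinary nonzero real number (a sketch, assuming amsmath):

```latex
\begin{align*}
  f'(x) &= \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}
         = \lim_{h \to 0} \frac{x^2 + 2xh + h^2 - x^2}{h}\\
        &= \lim_{h \to 0} (2x + h) = 2x.
\end{align*}
% h is nonzero throughout, so the division is unproblematic;
% the limit, not an infinitesimal, does the work of "letting h vanish".
```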
Infinitesimals are legitimate quantities in the non-standard analysis of Abraham Robinson. In this theory, the above computation of the derivative of f(x) = x² can be justified with a minor modification: we have to talk about the standard part of the difference quotient, and the standard part of x + dx is x.
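In symbols, writing st(·) for the standard part of a finite hyperreal number (the unique real number infinitely close to it), the modified computation reads:

```latex
\begin{align*}
  f'(x) &= \operatorname{st}\!\left(\frac{f(x + dx) - f(x)}{dx}\right)
         = \operatorname{st}\!\left(\frac{x^2 + 2x\,dx + dx^2 - x^2}{dx}\right)\\
        &= \operatorname{st}(2x + dx) = 2x.
\end{align*}
% dx is a nonzero infinitesimal hyperreal, so the division is legitimate;
% taking the standard part removes the infinitesimal remainder rigorously.
```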