Uniform convergence

In mathematical analysis, the concept of uniform convergence is used to describe a situation where a sequence of functions (fn) converges to a limiting function f in such a way that the speed of convergence of fn(x) to f(x) does not depend on x. This notion is important because several properties of the functions fn, such as continuity, Riemann integrability and (with additional hypotheses) differentiability, are transferred to the limit f if the convergence is uniform, but not in general if the convergence is only pointwise.

Definition and comparison with pointwise convergence

Suppose S is a set and fn : S -> R are real-valued functions for every natural number n. We say that the sequence (fn) converges uniformly with limit f : S -> R iff

for every ε > 0, there exists a natural number N, such that for all x in S and all n ≥ N: |fn(x) - f(x)| < ε
Compare this to the concept of pointwise convergence: The sequence (fn) converges pointwise with limit f : S -> R iff
for every x in S and every ε > 0, there exists a natural number N, such that for all n ≥ N: |fn(x) - f(x)| < ε
In the case of uniform convergence, N may depend only on ε, while in the case of pointwise convergence N may depend on both ε and x. It is therefore plain that uniform convergence implies pointwise convergence. The converse is not true, as the following example shows: take S to be the unit interval [0,1] and define fn(x) = x^n for every natural number n. Then (fn) converges pointwise to the function f defined by f(x) = 0 if x < 1 and f(1) = 1. This convergence is not uniform: for instance for ε = 1/4 there exists no N as required by the definition, because for every n there are points x < 1 with x^n > 1/4 (any x between (1/4)^(1/n) and 1), so no single N works for all x simultaneously.
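The difference can also be checked numerically. The following is a minimal Python sketch of the example above (the particular sample values are illustrative choices, not part of the definition): at any fixed x < 1 the error x^n tends to 0, but for every n the point x_n = (1/2)^(1/n) lies in [0,1) and has error exactly 1/2, so no N works for all x at once.

    # fn(x) = x**n on [0, 1] converges pointwise to f (with f(x) = 0 for x < 1
    # and f(1) = 1) but not uniformly.  The sample values below are illustrative.

    # Pointwise: at any fixed x < 1 the error |fn(x) - f(x)| = x**n tends to 0.
    x = 0.9
    print([x**n for n in (1, 10, 100, 1000)])

    # Not uniform: for every n the point x_n = 0.5**(1/n) lies in [0, 1) and
    # |fn(x_n) - f(x_n)| = 0.5 > 1/4, so no single N works for all x when ε = 1/4.
    for n in (1, 10, 100, 1000):
        x_n = 0.5 ** (1.0 / n)
        print(n, x_n, x_n ** n)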

Theorems

If S is a real interval (or indeed any topological space), we can talk about the continuity of the functions fn and f. The following is the most important result relating uniform convergence and continuity:

If (fn) is a sequence of continuous functions which converges uniformly towards the function f, then f is continuous as well.
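A sketch of the standard ε/3 argument shows where uniformity is used: given ε > 0 and a point x0, choose N so that |fN(x) - f(x)| < ε/3 for all x in S simultaneously, and then use the continuity of fN at x0 to obtain, for all x sufficiently close to x0,

|f(x) - f(x0)| ≤ |f(x) - fN(x)| + |fN(x) - fN(x0)| + |fN(x0) - f(x0)| < ε/3 + ε/3 + ε/3 = ε.

Pointwise convergence would only control the first term at x0 itself, not at all nearby points x at once.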
If S is an interval and all the functions fn are differentiable and converge to a limit f, it is often desirable to differentiate the limit function f by taking the limit of the derivatives of the fn. This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable, and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance fn(x) = (1/n) sin(nx), which converges uniformly to 0 (since |fn(x)| ≤ 1/n for all x), while the derivatives f'n(x) = cos(nx) do not converge to the derivative of the limit, which is 0; for example f'n(0) = 1 for every n. The precise statement covering this situation is as follows (a short numerical check follows the statement):
If fn converges uniformly to f, and if all the fn are differentiable, and if the derivatives f'n converge uniformly to g, then f is differentiable and its derivative is g.
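A minimal Python check of the example above (the sample values of n are illustrative): sup |fn| = 1/n tends to 0, so the convergence of fn to 0 is uniform, while f'n(0) = cos(0) = 1 for every n and therefore does not tend to the derivative of the limit, which is 0.

    import math

    # fn(x) = sin(n*x)/n converges uniformly to the zero function,
    # since sup over x of |fn(x)| equals 1/n, which tends to 0.
    # The derivatives f'n(x) = cos(n*x) do not converge to 0; at x = 0 they all equal 1.
    for n in (1, 10, 100, 1000):
        sup_fn = 1.0 / n
        deriv_at_zero = math.cos(n * 0.0)
        print(n, sup_fn, deriv_at_zero)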
Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral, one needs to require uniform convergence:
If (fn) is a sequence of Riemann integrable functions defined on a compact interval [a, b] which converges uniformly to f, then f is Riemann integrable and its integral can be computed as the limit of the integrals of the fn.
Much stronger theorems in this respect, which require little more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead.
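That uniform convergence cannot simply be dropped here is illustrated by the following Python sketch (the example functions and the midpoint rule are illustrative choices, not part of the theorem): fn(x) = n^2 x (1 - x)^n converges pointwise to 0 on [0, 1], yet the integrals of the fn tend to 1 rather than to 0.

    # Pointwise convergence alone does not allow exchanging limit and integral.
    # fn(x) = n**2 * x * (1 - x)**n on [0, 1] converges pointwise to 0, but its
    # integral over [0, 1] equals n**2 / ((n + 1)*(n + 2)), which tends to 1.

    def midpoint_rule(g, a, b, steps=20000):
        """Midpoint approximation of the Riemann integral of g over [a, b]."""
        h = (b - a) / steps
        return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

    for n in (1, 5, 20, 50):
        fn = lambda x, n=n: n**2 * x * (1 - x)**n
        print(n, fn(0.5), midpoint_rule(fn, 0.0, 1.0))   # fn(0.5) -> 0, integral -> 1

    # Under uniform convergence on [a, b] this cannot happen, since
    # |integral of fn - integral of f| <= (b - a) * sup |fn - f|, which tends to 0.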

If S is a compact interval (or in general a compact topological space), and (fn) is a monotonically increasing sequence (meaning fn(x) ≤ fn+1(x) for all n and x) of continuous functions with a pointwise limit f which is also continuous, then the convergence is necessarily uniform ("Dini's theorem").

Generalizations

One may straightforwardly extend the concept to functions S -> M, where (M, d) is a metric space, by replacing |fn(x) - f(x)| with d(fn(x), f(x)).
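As a small illustration of this formulation, the following Python sketch (the metric d, the sample points and the example functions are all illustrative choices) estimates sup over x of d(fn(x), f(x)) on a finite sample of points from S; uniform convergence means precisely that this supremum can be made smaller than any given ε for all sufficiently large n.

    import math

    # Illustrative sketch: uniform distance between two functions S -> M, estimated
    # over a finite sample of points from S.  Here M = R^2 with the Euclidean metric.
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def uniform_distance(fn, f, sample):
        """Maximum of d(fn(x), f(x)) over the sample (approximates the sup over S)."""
        return max(d(fn(x), f(x)) for x in sample)

    # Example: fn(x) = (cos(x)/n, sin(x)/n) converges uniformly to the constant (0, 0),
    # since d(fn(x), (0, 0)) = 1/n for every x.
    sample = [k * 0.01 for k in range(628)]
    f = lambda x: (0.0, 0.0)
    for n in (1, 10, 100):
        fn = lambda x, n=n: (math.cos(x) / n, math.sin(x) / n)
        print(n, uniform_distance(fn, f, sample))   # equals 1/n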

The most general setting is the uniform convergence of nets of functions S -> X, where X is a uniform space. We say that the net (fα) converges uniformly with limit f : S -> X iff

for every entourage V of X, there exists an α0, such that for every x in S and every α ≥ α0: (fα(x), f(x)) is in V.
The above-mentioned theorem, stating that the uniform limit of continuous functions is continuous, remains valid in these more general settings.

History

In 1821 Cauchy published a faulty proof of the false statement that the pointwise limit of a sequence of continuous functions is always continuous. Fourier and Abel found counterexamples in the context of Fourier series. Dirichlet then analyzed Cauchy's proof and found the mistake: the notion of pointwise convergence had to be replaced by uniform convergence.


