For example, the condition number associated with the linear equation <math>Ax = b</math> gives a bound on how inaccurate the computed solution <math>x</math> will be after numerical solution.
The condition number also amplifies the error present in <math>b</math>. The extent of this amplification can render a low condition number system (normally a good thing) inaccurate and a high condition number system (normally a bad thing) accurate, depending on how well the data in <math>b</math> are known. For this problem, the condition number is defined by
<math>\Vert A^{-1}\Vert \cdot \Vert A\Vert</math>,
in any consistent norm.
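As an illustrative sketch (using NumPy's `numpy.linalg.cond`, which computes the 2-norm condition number, and a hypothetical near-singular matrix), the following shows how a tiny relative perturbation of <math>b</math> can be amplified in <math>x</math> by up to the condition number:

```python
import numpy as np

# A nearly singular, ill-conditioned matrix: kappa(A) = ||A^{-1}|| * ||A||.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

kappa = np.linalg.cond(A)      # 2-norm condition number, roughly 4e4 here
x = np.linalg.solve(A, b)      # exact solution is [1, 1]

# Perturb b slightly and solve again.
b_pert = b + np.array([0.0, 1e-5])
x_pert = np.linalg.solve(A, b_pert)

rel_err_b = np.linalg.norm(b_pert - b) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

# The amplification factor is large, but bounded above by kappa.
print(kappa)
print(rel_err_x / rel_err_b)
```

Here a relative change in <math>b</math> of order <math>10^{-6}</math> produces a relative change in <math>x</math> of order <math>10^{-1}</math>, consistent with the bound <math>\Vert A^{-1}\Vert \cdot \Vert A\Vert</math>.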
Condition numbers can also be defined for the singular value decomposition, polynomial root finding, eigenvalue problems, and many other problems.
Generally, if a numerical problem is well-posed, it can be expressed as a function <math>f</math> mapping its data, which is an <math>m</math>-tuple of real numbers <math>x</math>, into its solution, an <math>n</math>-tuple of real numbers <math>y</math>.
Its condition number is then defined to be the maximum value of the ratio of the relative error in the solution to the relative error in the data, over the problem domain:
<math>\max \left\{ \left| \frac{f(x) - f(x^*)}{f(x)} \right| \left/ \left| \frac{x - x^*}{x} \right| \right. : \left| x - x^* \right| < \epsilon \right\}</math>
where <math>\epsilon</math> is some reasonably small value in the variation of data for the problem.
If <math>f</math> is also differentiable, this is approximately
<math>\left| \frac{ f'(x)x }{ f(x) } \right|</math>.
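This derivative-based formula can be evaluated directly for scalar functions. A minimal sketch (the function choices below are illustrative assumptions, not from the original text):

```python
import math

def cond(f, fprime, x):
    # Condition number of a differentiable scalar function: |f'(x) x / f(x)|.
    return abs(fprime(x) * x / f(x))

# sqrt is well conditioned everywhere:
# |(1 / (2 sqrt(x))) * x / sqrt(x)| = 1/2, independent of x.
c_sqrt = cond(math.sqrt, lambda x: 0.5 / math.sqrt(x), 100.0)

# Subtracting nearly equal numbers is ill conditioned:
# f(x) = x - 1 near x = 1 has condition number |x / (x - 1)|.
c_sub = cond(lambda x: x - 1.0, lambda x: 1.0, 1.000001)

print(c_sqrt)  # 0.5
print(c_sub)   # about 1e6
```

The second example shows why cancellation is dangerous: the problem itself, not the algorithm, magnifies relative errors by roughly <math>10^6</math> at that input.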