When choosing a method to approximate the solution of a numerical problem, three factors about the method must be considered:
While numerical analysis employs mathematical axioms, theorems and proofs in theory, it may also use empirical[?] results from computation runs to probe new methods and analyze problems. It thus has a unique character compared to other mathematical sciences.
Computers are an essential tool in numerical analysis, but the field predates computers by many centuries; indeed, computers were invented in large part to solve numerical problems, not the other way around. Taylor approximation is a product of the seventeenth and eighteenth centuries that is still very important. The logarithms of the sixteenth century are no longer vital to numerical analysis, but the associated and even prehistoric notion of interpolation continues to solve problems for us.
A well-conditioned mathematical problem[?] is, roughly speaking, one whose solution changes by only a small amount if the problem data are changed by a small amount. The analogous concept for the numerical algorithm for solving the problem is that of numerical stability: an algorithm for solving a well-conditioned problem is numerically stable if the result of the algorithm changes only a small amount if the data change a little.
An algorithm that solves a well-conditioned problem may or may not be numerically stable. An art of numerical analysis is to find a stable algorithm for solving a mathematical problem.
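As a minimal illustration (a classic textbook example, not from the original text; the function names are my own): the integrals I_n = ∫₀¹ xⁿ eˣ⁻¹ dx satisfy the recurrence I_n = 1 − n·I_{n−1}. Running this recurrence forward from I₀ = 1 − 1/e is numerically unstable, because any rounding error in I₀ is multiplied by n at every step; running the same recurrence backward from a crude guess is stable, because errors are divided by n instead.

```python
import math

def forward(n):
    """Forward recurrence I_k = 1 - k * I_{k-1}: the initial rounding
    error in I_0 is amplified by a factor of n! after n steps."""
    i = 1.0 - math.exp(-1.0)   # I_0 = 1 - 1/e
    for k in range(1, n + 1):
        i = 1.0 - k * i
    return i

def backward(n, extra=15):
    """Backward recurrence I_{k-1} = (1 - I_k) / k, started from a crude
    guess at k = n + extra: the error shrinks by a factor of k each step."""
    i = 0.0                    # crude guess for I_{n+extra}
    for k in range(n + extra, n, -1):
        i = (1.0 - i) / k
    return i

# The true I_25 is about 1/27; the forward value is wildly wrong.
print(forward(25), backward(25))
```

The problem itself (evaluating I_25) is well-conditioned; only the forward algorithm is unstable.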
The study of the generation and propagation of round-off errors in the course of a computation is an important part of numerical analysis. Subtraction of two nearly equal numbers is an ill-conditioned operation, producing catastrophic loss of significance.
One fundamental problem is the determination of zeros of a given function. Various algorithms have been developed. If the function is differentiable and the derivative is known, then Newton's method is a popular choice.
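Newton's method can be sketched in a few lines (an illustrative implementation, with my own function names): starting from a guess x₀, repeatedly replace x by x − f(x)/f′(x) until the step is small.

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    """Newton's method: repeatedly replace x by x - f(x)/f'(x)."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Find sqrt(2) as the positive zero of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)
```

Near a simple zero the iteration converges quadratically, roughly doubling the number of correct digits at each step.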
Numerical analysis is also concerned with computing (in an approximate way) the solution of partial differential equations. This is done by first discretizing the equation, bringing it into a finite-dimensional subspace, and then solving the linear system in this finite-dimensional space. The first stage is done by the finite element method, by finite difference methods, or (particularly in engineering) by the method of finite volumes. The theoretical justification of these methods often involves theorems from functional analysis.
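For a concrete (if deliberately simple) sketch of the finite difference approach, consider the one-dimensional model problem −u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0. Central differences turn it into a tridiagonal linear system, which the code below solves by the Thomas algorithm, a tridiagonal form of Gaussian elimination; the names and the test problem are my own.

```python
import math

def solve_poisson_1d(f, n):
    """Discretize -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using
    central differences on n interior points, then solve the resulting
    tridiagonal system by the Thomas algorithm."""
    h = 1.0 / (n + 1)
    # System: (-u[i-1] + 2 u[i] - u[i+1]) / h^2 = f(x_i), x_i = (i+1) h
    sub = [-1.0] * n          # subdiagonal
    diag = [2.0] * n          # main diagonal
    sup = [-1.0] * n          # superdiagonal
    rhs = [h * h * f((i + 1) * h) for i in range(n)]

    # Forward elimination.
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # Back substitution.
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return u

# With f(x) = pi^2 sin(pi x) the exact solution is u(x) = sin(pi x).
n = 99
u = solve_poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), n)
err = max(abs(ui - math.sin(math.pi * (i + 1) / (n + 1)))
          for i, ui in enumerate(u))
print(err)   # second-order scheme: the error shrinks like h^2
```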
The linear systems that come from discretized partial differential equations can then be solved by a variant of Gauss-Jordan elimination, by an iterative method such as the conjugate gradient method, or by a multigrid[?] method.
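A minimal conjugate gradient sketch (my own illustrative code, applied to the 1D finite-difference Laplacian as a test case) shows the main attraction of such iterative methods: they need only matrix-vector products, so the matrix, which for discretized PDEs is large and sparse, never has to be stored explicitly.

```python
import math

def apply_laplacian(v, h):
    """Matrix-vector product with the 1D finite-difference Laplacian
    (zero boundary values); the matrix itself is never stored."""
    n = len(v)
    out = [0.0] * n
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out[i] = (2.0 * v[i] - left - right) / (h * h)
    return out

def conjugate_gradient(matvec, b, tol=1e-10, maxiter=10000):
    """Solve A x = b for symmetric positive definite A, given only the
    map x -> A x."""
    x = [0.0] * len(b)
    r = list(b)                     # residual b - A x (x starts at 0)
    p = list(r)                     # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Solve the discretized -u'' = pi^2 sin(pi x), whose solution is sin(pi x).
n = 99
h = 1.0 / (n + 1)
b = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = conjugate_gradient(lambda v: apply_laplacian(v, h), b)
cg_err = max(abs(ui - math.sin(math.pi * (i + 1) * h)) for i, ui in enumerate(u))
print(cg_err)
```

In exact arithmetic conjugate gradients terminates in at most n iterations; in practice it is stopped far earlier, once the residual is small.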