Finite element method

The finite element method is used for solving partial differential equations (PDEs). Solutions are achieved either by eliminating the differential equation completely (steady state problems), or by rendering the PDE into an approximating system of ordinary differential equations, which is then solved numerically using standard techniques such as finite differences. The use of the finite element method in engineering for the analysis of physical systems is commonly known as finite element analysis.

Finite element methods have also been developed to solve integral equations such as the heat transport equation.

The method was introduced by Richard Courant to solve the torsion problem for a cylinder. Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Rayleigh, Ritz and Galerkin. Development of the method began in earnest in the middle to late 1950s for airframe and structural analysis, and gathered momentum at Berkeley in the 1960s for use in civil engineering. The method was provided with a rigorous mathematical foundation in 1973 with the publication of Strang and Fix's An Analysis of the Finite Element Method.

Finite element methods are used in a wide variety of engineering disciplines. In computer graphics, radiosity algorithms are finite element methods.

In solving partial differential equations, the primary challenge is to create an equation which approximates the equation to be studied, but which is stable, meaning that errors in the calculation do not accumulate and render the resulting output meaningless.

See also: Discrete element method, Spectral method

Technical Overview

For this discussion, we assume a basic understanding of differential calculus in several variables. If f is a function, then the notation f_x will denote the partial derivative of f with respect to x.

The best way to introduce the subject is to give a simple example (a "model problem"). We shall use the Laplace equation on the torus [0,1)×[0,1). First, some notation. Let

T := [0,1)×[0,1) = {(x,y) : 0 ≤ x, y < 1}

Now let g be a function from T to F, the field of scalars (either the real line R or the complex plane C). Our problem is to find a function u from the entire plane R² to F so that

  1. u is twice differentiable over R² in the sense of multivariate calculus.
  2. u_xx + u_yy = g in T.
  3. u(x,y) = u(x+1,y) = u(x,y+1), that is, u is periodic in x and in y, with period 1 in both directions.

This problem can be rephrased in terms of linear maps and vector spaces.

  1. u is in V, the vector space of twice differentiable functions on the torus T.
  2. Lu = g, where g is some given vector in V.

Of course, here L is the differential operator given by:

Lf = f_xx + f_yy

L is known as the Laplace operator. We are now looking for a u in V so that Lu=g.

Weak formulation

Now let φ be any linear functional on V, that is, φ is a function from V to F so that for any t in F and u, v in V:

φ(tu + v) = tφ(u) + φ(v)

We denote by V* the set of all such functionals. Then, certainly, the following statements are equivalent:

Lu=g

and

For all φ in V*, φ(Lu)=φ(g)

The latter statement is said to be the weak form of our problem. One can think of it as emphasizing that the simple equality

φ(Lu)=φ(g)

for a single φ in V* is insufficient to imply Lu=g, and thus it is weaker. The true meaning of the nomenclature is related to the so-called weak topology on Banach spaces, which is induced by V*, but this is beyond the scope of this article.

The bilinear form of L, test functions

The set V* is larger than it absolutely needs to be. It turns out that it suffices to use functionals of the form

φ_v(f) = ∫₀¹∫₀¹ v(x,y) f(x,y) dx dy

That is, we are now replacing the original problem with that of finding u in V so that

φ_v(Lu) = φ_v(g)

for all v in V. From now on, we will write ∫_T for the double integral ∫₀¹∫₀¹. One can see, via integration by parts, that the last equality is equivalent to:

ψ(u,v) := -∫_T (u_x v_x + u_y v_y) = ∫_T g v
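
To see this, integrate by parts in each variable separately. In the x variable, for instance,

∫_T u_xx v = -∫_T u_x v_x

because the boundary term [u_x v] from x=0 to x=1 vanishes: by periodicity, u_x v takes the same values at x=0 and x=1. The term in y is handled in the same way.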

The function ψ of u and v is in fact bilinear, and it is the bilinear form associated with L. The functions v are called test functions.

We note here that ψ uses only first derivatives, and it is therefore possible to discuss solutions of the original problem while assuming only first derivatives. Note also that in most cases the solution u will in fact be infinitely differentiable.

A minimal set of test functions

From functional analysis, we know that we do not in fact need to try every possible test function v in V. In fact, if E = {e_1, e_2, ...} is a subset of V whose linear span is dense in V (in a suitable topology), then we can use those functions alone as test functions. That is, we are now trying to find a solution to the problem

(*) ψ(u, e_j) = ∫_T g e_j for every e_j in E.

First discretization step

In order to turn this process into an algorithm that can be run on actual hardware, one chooses a finite subset of E. Now we have a finite set F = F_n = {e_1, e_2, ..., e_n}, a subset of E, and we wish to solve the problem

(**) ψ(u_n, e_j) = ∫_T g e_j for every e_j in F_n

with the goal that, as n increases to infinity (and F_n increases to E), the solutions u_n should converge to the solution u of (*).

Second discretization step

Let us now look for a solution in the linear span of F. While the actual solution of (*) is almost certainly not in the linear span of F, we are once more hoping for some sort of convergence property. If everything were occurring inside the linear span of F, we would have

u_n = ∑_k a_k e_k
g_n = ∑_k b_k e_k

Substituting into (**) and expanding, we obtain:

(***) ∑_k a_k ψ(e_k, e_j) = ∑_k b_k ∫_T e_k e_j for all j = 1, ..., n.
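
Spelled out: replacing g by its approximation g_n in (**) and using the linearity of ψ in its first argument together with the linearity of the integral,

ψ(u_n, e_j) = ψ(∑_k a_k e_k, e_j) = ∑_k a_k ψ(e_k, e_j)

and

∫_T g_n e_j = ∫_T (∑_k b_k e_k) e_j = ∑_k b_k ∫_T e_k e_j.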

There is now the question of how to choose a suitable g_n, and there are many approaches, depending on how the set E was chosen. If E is chosen to be a Fourier basis, then g_n can be obtained as the projection of g onto the linear span of F, but other approaches are possible.

The desire, of course, is again to make certain that the resulting u_n will converge to a solution of (*).

Matrix version

The astute reader will have noted that the last problem can be formulated as a matrix problem as follows:

Pa=Qb

where a is the column vector (a_1, a_2, ..., a_n) and b is the column vector (b_1, b_2, ..., b_n). The matrices P and Q are given by (***):

P[j,k] = ψ(e_k, e_j)
Q[j,k] = ∫_T e_k e_j

P is called the stiffness matrix and Q is called the mass matrix.

Algorithm

The finite element algorithm is thus as follows:

  1. Compute the stiffness matrix P.
  2. Compute the mass matrix Q.
  3. Approximate g by g_n and compute the coefficient vector b.
  4. Solve the matrix problem Pa = Qb for the unknown vector a.
  5. Possibly convert the vector a back into a function u_n (e.g. for viewing with a graphical computer program).
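
As a concrete illustration, here is a minimal sketch of these five steps for the one-dimensional analogue of the model problem, u_xx = g on the circle [0,1), using piecewise linear "hat" functions as the basis (see the note on the choice of basis below). It assumes NumPy; the grid size, the manufactured g, and all variable names are invented for the example.

```python
import numpy as np

n = 128                    # number of nodes / hat functions on [0,1)
h = 1.0 / n                # element size
x = np.arange(n) * h       # node positions on the periodic grid

# Manufactured problem: u(x) = sin(2*pi*x), so g = u_xx = -(2*pi)**2 * u.
g = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x)

# Steps 1 and 2: stiffness and mass matrices for periodic hat functions.
P = np.zeros((n, n))
Q = np.zeros((n, n))
for j in range(n):
    P[j, j] = -2.0 / h          # psi(e_j, e_j) = -integral of (e_j')**2
    Q[j, j] = 4.0 * h / 6.0     # integral of e_j**2
    for k in ((j - 1) % n, (j + 1) % n):
        P[j, k] = 1.0 / h       # overlap of neighbouring hats' derivatives
        Q[j, k] = h / 6.0       # overlap of neighbouring hats

# Step 3: for hat functions, interpolation gives b_k = g(x_k).
b = g

# Step 4: solve P a = Q b.  On the circle, u is only determined up to a
# constant, so P is singular; the minimum-norm least-squares solution
# picks the representative whose coefficients sum to zero.
a, *_ = np.linalg.lstsq(P, Q @ b, rcond=None)

# Step 5: for hat functions, the a_k are the nodal values of u_n.
print("max nodal error:", np.abs(a - np.sin(2 * np.pi * x)).max())
```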

Note on the choice of basis

If one chooses a Fourier basis for E, then one obtains a so-called spectral method, which from our point of view is closely related to the finite element method. Typically, in the finite element method, the set F is chosen directly, and is not usually construed to be a subset of some larger set E.

Instead, the functions of F can be chosen to be piecewise linear. A tessellation of the domain T is chosen, which decomposes T into triangles (say), and the functions of F are those that are linear on each component of the tessellation. Each triangle is referred to as an "element."

To obtain better algorithms, one can attempt to vary the primitives of the tessellation; it may be more natural to use rectangular elements, and in some cases curvilinear elements are called for. Conversely, once the elements are chosen, one still has a choice of how to define the test functions on each element. Test functions are usually, but not always, chosen to be piecewise polynomial.

If the test functions are chosen in a cunning way, the matrices P and Q will be structured so that the linear problem Pa = Qb can be solved very quickly. To see the issue, note that the calculation of Qb apparently requires O(n²) multiplications and additions. However, if one chooses the Fourier basis, then Q is the identity matrix, so there is nothing to be done to compute Qb. The coefficients of b can be found using the fast Fourier transform of g, which runs in O(n log n) time. The matrix P is diagonal with easily calculated entries, so solving for a is trivial. One would then typically apply an inverse Fourier transform to a, which requires O(n log n) time, to obtain u.
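
A sketch of this Fourier variant for the two-dimensional model problem, again assuming NumPy (the grid size and the right-hand side are invented for the example; for the basis functions exp(2πi(jx+ky)), the diagonal entries of P are -(2π)²(j²+k²)):

```python
import numpy as np

n = 64                                     # grid is n x n, so n**2 unknowns
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")

# A right-hand side with mean zero, as the periodic problem requires.
g = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)

b = np.fft.fft2(g)                         # coefficients of g via the FFT
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
lam = -(2 * np.pi) ** 2 * (k[:, None] ** 2 + k[None, :] ** 2)
lam[0, 0] = 1.0                            # avoid 0/0; constant mode fixed below
a = b / lam                                # P is diagonal, so "solving" is division
a[0, 0] = 0.0                              # the solution is defined up to a constant
u = np.fft.ifft2(a).real                   # inverse transform recovers u

# Check: applying u_xx + u_yy in Fourier space should reproduce g.
lap_u = np.fft.ifft2(lam * np.fft.fft2(u)).real
print("max residual:", np.abs(lap_u - g).max())
```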

If one chooses a finite element basis (using, for instance, the triangular elements discussed above) so that each test function is supported on a small number of elements, then one can see that the matrices P and Q will be sparse -- that is, most of the entries are zero. The calculation of Qb can then be done in O(n) time, and solving for a can be done efficiently using an iterative algorithm (such as the conjugate gradient iteration).
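
The following sketch, assuming SciPy, reuses the one-dimensional hat-function matrices from the earlier example in sparse form: each hat overlaps only its two neighbours, so P and Q are cyclic tridiagonal, with O(n) nonzero entries instead of n².

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 512
h = 1.0 / n
x = np.arange(n) * h
g = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x)   # so that u = sin(2*pi*x)

def cyclic_tridiag(diag, off):
    """Sparse cyclic tridiagonal matrix with constant diagonals."""
    A = sp.diags([np.full(n - 1, off), np.full(n, diag),
                  np.full(n - 1, off)], [-1, 0, 1]).tolil()
    A[0, n - 1] = A[n - 1, 0] = off      # periodic wrap-around entries
    return A.tocsr()

negP = cyclic_tridiag(2.0 / h, -1.0 / h)  # -P: symmetric positive semidefinite
Q = cyclic_tridiag(4.0 * h / 6.0, h / 6.0)
rhs = -(Q @ g)                            # P a = Q b  is  (-P) a = -(Q b)

# On the circle, u is only determined up to a constant, so "ground" node 0
# (set a_0 = 0) and drop its row and column; what remains is strictly
# positive definite, so the conjugate gradient iteration applies.
a_rest, info = cg(negP[1:, 1:], rhs[1:])
a = np.concatenate(([0.0], a_rest))
print("converged:", info == 0,
      " max nodal error:", np.abs(a - np.sin(2 * np.pi * x)).max())
```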

We note that the piecewise linear elements are not even once differentiable everywhere; they are, however, piecewise differentiable. It is interesting to study how such solutions converge to the true solution of the Laplace problem as the number of elements tends to infinity.


