Intermediate treatment of tensors

Note: The following is a component-based treatment of tensors (sometimes called the "classical treatment" of tensors). Read the article tensor for a simple description of tensors, or see the component-free treatment of tensors for a more abstract, modern treatment.

Note that the word "tensor" is often used as a shorthand for "tensor field", a concept which assigns a tensor value to every point of a manifold. To understand tensor fields, you first need to understand tensors.


A tensor is the mathematical idealization of a geometric or physical quantity whose analytic description, relative to a fixed frame of reference, consists of an array of numbers.

This way of viewing tensors, called tensor analysis, was used by Einstein and is generally preferred by physicists. Roughly speaking, it is a generalization of the concepts of vectors and matrices, and it allows equations to be written independently of any given coordinate system.

It should be noted that the array-of-numbers representation of a tensor is not the same thing as the tensor. An image and the object represented by the image are not the same thing. The mass of a stone is not a number. Rather, the mass can be described by a number relative to some specified unit mass. Similarly, a given numerical representation of a tensor only makes sense in a particular coordinate system.

Some well known examples of tensors in geometry are quadratic forms and the curvature tensor. Examples of physical tensors are the energy-momentum tensor and the polarization tensor.

Geometric and physical quantities may be categorized by considering the degrees of freedom inherent in their description. The scalar quantities are those that can be represented by a single number --- speed, mass, temperature, for example. There are also vector-like quantities, such as force, that require a list of numbers for their description. Finally, quantities such as quadratic forms naturally require a multiply indexed array for their representation. These latter quantities can only be conceived of as tensors.

Actually, the tensor notion is quite general, and applies to all of the above examples; scalars and vectors are special kinds of tensors. The feature that distinguishes a scalar from a vector, and distinguishes both of those from a more general tensor quantity is the number of indices in the representing array. This number is called the rank of a tensor. Thus, scalars are rank zero tensors (no indices at all), and vectors are rank one tensors.
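As a concrete illustration of rank counting, the following minimal sketch (in Python with NumPy; the particular numbers are arbitrary example values) represents a scalar, a vector, and a quadratic form as arrays and reads off the number of indices each one requires:

 import numpy as np

 # Rank 0: a scalar needs no indices at all.
 temperature = np.array(21.5)            # shape (), zero indices
 # Rank 1: a vector needs one index.
 force = np.array([1.0, -2.0, 0.5])      # shape (3,), entries force[i]
 # Rank 2: a quadratic form needs two indices.
 quad_form = np.array([[2.0, 1.0],
                       [1.0, 3.0]])      # shape (2, 2), entries quad_form[i][j]

 for name, arr in (("scalar", temperature), ("vector", force), ("form", quad_form)):
     print(name, "rank =", arr.ndim)     # ndim counts the indices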

It is also necessary to distinguish between two types of indices, depending on whether the corresponding numbers transform covariantly or contravariantly relative to a change in the frame of reference. Contravariant indices are written as superscripts, while covariant indices are written as subscripts. The valence of a tensor is the pair <math>(p,q)</math>, where <math>p</math> is the number of contravariant indices and <math>q</math> the number of covariant indices.

It is customary to represent the actual tensor, as a stand-alone entity, by a bold-face symbol such as <math>\mathbf{T}</math>. The corresponding array of numbers for a type <math>(p,q)</math> tensor is denoted by the symbol <math>T^{i_1\ldots i_p}_{j_1\ldots j_q},</math> where the superscripts and subscripts are indices that vary from <math>1</math> to <math>n</math>. This number <math>n</math>, the range of the indices, is called the dimension of the tensor. The total number of degrees of freedom required to specify a particular tensor is <math>n^{p+q}</math>: the dimension raised to the power of the tensor's rank.
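For instance, a tensor of valence <math>(1,2)</math> in dimension <math>n=3</math> has rank <math>3</math> and therefore <math>3^3 = 27</math> degrees of freedom. A minimal sketch of this bookkeeping (Python with NumPy; the valence and dimension are arbitrary example values):

 import numpy as np

 n, p, q = 3, 1, 2                  # dimension n and valence (p, q)
 T = np.zeros((n,) * (p + q))       # one axis of length n per index
 print(T.shape)                     # (3, 3, 3)
 print(T.size == n ** (p + q))      # True: n**rank degrees of freedom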

Again, it must be emphasized that the tensor <math>\mathbf{T}</math> and the representing array <math>T^{i_1\ldots i_p}_{j_1\ldots j_q}</math> are not the same thing. The values of the representing array are given relative to some frame of reference, and undergo a linear transformation when the frame is changed.

Finally, it must be mentioned that most physical and geometric applications are concerned with tensor fields, that is to say, tensor-valued functions, rather than tensors themselves. Some care is required, because it is common to see a tensor field called simply a tensor. There is a difference, however: the entries of a tensor array <math>T^{i_1\ldots i_p}_{j_1\ldots j_q}</math> are numbers, whereas the entries of a tensor field are functions. The present entry treats the purely algebraic aspect of tensors. Tensor field concepts, which typically involve derivatives of some kind, are discussed elsewhere.

Definition

The formal definition of a tensor quantity begins with a finite-dimensional vector space <math>\mathcal{U}</math>, which furnishes the uniform "building blocks" for tensors of all valences. In typical applications, <math>\mathcal{U}</math> is the tangent space at a point of a manifold; the elements of <math>\mathcal{U}</math> typically represent physical quantities such as velocities and forces. The space of <math>(p,q)</math>-valent tensors, denoted here by <math>\mathcal{U}^{p,q}</math>, is obtained by taking the tensor product of <math>p</math> copies of <math>\mathcal{U}</math> and <math>q</math> copies of the dual vector space <math>\mathcal{U}^*</math>. To wit,

<math>\mathcal{U}^{p,q} =
\underbrace{\mathcal{U}\otimes\ldots\otimes\mathcal{U}}_{p\text{ copies}} \otimes \underbrace{\mathcal{U}^*\otimes\ldots\otimes\mathcal{U}^*}_{q\text{ copies}}</math>

In order to represent a tensor by a concrete array of numbers, we require a frame of reference, which is essentially a basis of <math>\mathcal{U}</math>, say <math>\mathbf{e}_1,\ldots,\mathbf{e}_n \in \mathcal{U}.</math> Every vector in <math>\mathcal{U}</math> can be "measured" relative to this basis, meaning that for every <math>\mathbf{v}\in\mathcal{U}</math> there exist unique scalars <math>v^i</math>, such that (note the use of the Einstein summation convention)

<math>\mathbf{v} = v^i\mathbf{e}_i</math>

These scalars are called the components of <math>\mathbf{v}</math> relative to the frame in question.
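A minimal numerical sketch of this "measurement" (Python with NumPy; the frame and the vector are hypothetical example values, not fixed by the discussion above): the components <math>v^i</math> are obtained by solving the linear system whose coefficient columns are the basis vectors.

 import numpy as np

 # Hypothetical frame for R^2: column j of E is the basis vector e_j.
 E = np.array([[1.0, 1.0],
               [0.0, 2.0]])
 v = np.array([3.0, 4.0])

 components = np.linalg.solve(E, v)      # the unique v^i with v = v^i e_i
 print(components)                       # [1. 2.]
 print(np.allclose(E @ components, v))   # True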

Let <math>\varepsilon^1,\ldots,\varepsilon^n\in\mathcal{U}^*</math> be the corresponding dual basis, i.e.

<math>\varepsilon^i(\mathbf{e}_j) = \delta^i_j,</math>
where the latter is the Kronecker delta array. For every covector <math>\mathbf{\alpha}\in\mathcal{U}^*</math> there exists a unique array of components <math>\alpha_i</math> such that
<math>\mathbf{\alpha} = \alpha_i\, \varepsilon^i.</math>
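Continuing the same hypothetical example, the dual basis can be computed as the rows of the inverse of the matrix whose columns are the <math>\mathbf{e}_i</math>, and the covector components are the values <math>\mathbf{\alpha}(\mathbf{e}_i)</math> (a sketch in Python with NumPy; the covector below is another arbitrary example value):

 import numpy as np

 E = np.array([[1.0, 1.0],
               [0.0, 2.0]])               # columns are e_1, e_2
 dual = np.linalg.inv(E)                  # row i is the dual covector epsilon^i
 print(np.allclose(dual @ E, np.eye(2)))  # Kronecker delta: epsilon^i(e_j) = delta^i_j

 alpha = np.array([2.0, -1.0])            # a covector, written in the standard dual basis
 alpha_components = E.T @ alpha           # alpha_i = alpha(e_i)
 print(alpha_components)                  # [2. 0.]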

More generally, every tensor <math>\mathbf{T}\in\mathcal{U}^{p,q}</math> has a unique representation in terms of components. That is to say, there exists a unique array of scalars <math>T^{i_1\ldots i_p}_{j_1\ldots j_q}</math> such that

<math>\mathbf{T} = T^{i_1\ldots i_p}_{j_1\ldots j_q}\, \mathbf{e}_{i_1} \otimes
\ldots\otimes \mathbf{e}_{i_p} \otimes \varepsilon^{j_1}\otimes\ldots\otimes \varepsilon^{j_q}.</math>
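As an illustrative sketch (Python with NumPy; the frame and the component values are hypothetical), a valence <math>(1,1)</math> tensor, which can be viewed as a linear map, can be reassembled from its components exactly as in the formula above:

 import numpy as np

 E = np.array([[1.0, 1.0],
               [0.0, 2.0]])               # columns e_i
 dual = np.linalg.inv(E)                  # rows epsilon^j
 T_comp = np.array([[1.0, 2.0],
                    [0.0, 3.0]])          # components T^i_j relative to this frame

 # Reassemble the (1,1) tensor, viewed as a linear map, as T^i_j e_i (x) epsilon^j.
 T = sum(T_comp[i, j] * np.outer(E[:, i], dual[j, :])
         for i in range(2) for j in range(2))
 print(np.allclose(T, E @ T_comp @ dual))  # the same reassembly, written with matrices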

Transformation rules

Next, suppose that a change is made to a different frame of reference, say <math>\hat{\mathbf{e}}_1,\ldots,\hat{\mathbf{e}}_n\in\mathcal{U}.</math> Any two frames are uniquely related by an invertible transition matrix <math>A^i_j</math>, having the property that for all values of <math>j</math> we have the frame transformation rule

<math>
\hat{\mathbf{e}}_j = A^i_{\,j}\, \mathbf{e}_i. </math>
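A minimal sketch of this frame transformation rule (Python with NumPy; the frame and the transition matrix are arbitrary example choices, reused in the sketches below):

 import numpy as np

 E = np.array([[1.0, 1.0],
               [0.0, 2.0]])      # old frame: column j is e_j
 A = np.array([[2.0, 1.0],
               [1.0, 1.0]])      # transition matrix A^i_j (invertible, det = 1)
 E_hat = E @ A                   # new frame: column j is e_hat_j = A^i_j e_i
 print(E_hat)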

Let <math>\mathbf{v}\in\mathcal{U}</math> be a vector, and let <math>v^i</math> and <math>\hat{v}^i</math> denote the corresponding component arrays relative to the two frames. From

<math>\mathbf{v} = v^i\mathbf{e}_i = \hat{v}^i\hat{\mathbf{e}}_i,</math>
and from the frame transformation rule we infer the vector transformation rule
<math>
\hat{v}^i = B^i_{\,j}\, v^j, </math>

where <math>B^i_{\,j}</math> is the matrix inverse of <math>A^i_{\,j}</math>, i.e.

<math>A^i_{\,k} B^k_{\,j } = \delta^i_{\,j}.</math>
Thus, the transformation rule for a vector's components is contravariant to the transformation rule for the frame of reference. It is for this reason that the superscript indices of a vector are called contravariant.
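The following sketch (Python with NumPy, reusing the hypothetical frame and transition matrix from above) checks that transforming the components with <math>B = A^{-1}</math> indeed describes the same vector relative to the new frame:

 import numpy as np

 E = np.array([[1.0, 1.0],
               [0.0, 2.0]])              # old frame (columns e_i)
 A = np.array([[2.0, 1.0],
               [1.0, 1.0]])              # transition matrix
 E_hat = E @ A                           # new frame
 B = np.linalg.inv(A)                    # B^i_j, the inverse of A^i_j

 v = np.array([3.0, 4.0])
 v_old = np.linalg.solve(E, v)           # components v^i in the old frame
 v_new = B @ v_old                       # contravariant rule: v_hat^i = B^i_j v^j
 print(np.allclose(E_hat @ v_new, v))    # True: same vector, different components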

To establish this vector transformation rule, we note that the transformation rule for the dual basis takes the form

<math>\hat{\varepsilon}^i = B^i_{\,j}\, \varepsilon^j,</math>
and that
<math>v^i = \varepsilon^i(\mathbf{v}),</math>
while
<math>\hat{v}^i = \hat{\varepsilon}^i(\mathbf{v}).</math>
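The dual-basis rule itself can be checked numerically (a sketch with the same hypothetical matrices as above):

 import numpy as np

 E = np.array([[1.0, 1.0], [0.0, 2.0]])
 A = np.array([[2.0, 1.0], [1.0, 1.0]])
 B = np.linalg.inv(A)

 dual = np.linalg.inv(E)                 # rows: old dual basis epsilon^j
 dual_hat = np.linalg.inv(E @ A)         # rows: new dual basis epsilon_hat^i
 print(np.allclose(dual_hat, B @ dual))  # epsilon_hat^i = B^i_j epsilon^j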

The transformation rule for covector components is covariant. Let <math>\mathbf{\alpha}\in \mathcal{U}^*</math> be a given covector, and let <math>\alpha_i</math> and <math>\hat{\alpha}_i</math> be the corresponding component arrays. Then

<math>\hat{\alpha}_j = A^i_{\,j} \alpha_i.</math>
The above relation is easily established. We need only remark that
<math>\alpha_i = \mathbf{\alpha}(\mathbf{e}_i),</math>
and that
<math>\hat{\alpha}_j = \mathbf{\alpha}(\hat{\mathbf{e}}_j),</math>
and then use the transformation rule for the frame of reference.
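The covariant rule can likewise be checked numerically (same hypothetical matrices; the covector is another arbitrary example value):

 import numpy as np

 E = np.array([[1.0, 1.0], [0.0, 2.0]])
 A = np.array([[2.0, 1.0], [1.0, 1.0]])
 E_hat = E @ A

 alpha = np.array([2.0, -1.0])           # a covector in the standard dual basis
 a_old = E.T @ alpha                     # alpha_i     = alpha(e_i)
 a_new = E_hat.T @ alpha                 # alpha_hat_j = alpha(e_hat_j)
 print(np.allclose(a_new, A.T @ a_old))  # covariant rule: alpha_hat_j = A^i_j alpha_i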

In light of the above discussion, we see that the transformation rule for a general type <math>(p,q)</math> tensor takes the form

<math>\hat{T}^{i_1\ldots i_p}_{\,j_1\ldots j_q} =
B^{i_1}_{\,k_1}\cdots B^{i_p}_{\,k_p}\, A^{l_1}_{\,j_1}\cdots A^{l_q}_{\,j_q}\, T^{k_1\ldots k_p}_{\,l_1\ldots l_q}. </math>
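Specialized to a valence <math>(1,1)</math> tensor, the rule reads <math>\hat{T}^i_{\,j} = B^i_{\,k} A^l_{\,j} T^k_{\,l}</math>, an ordinary similarity transformation. A minimal sketch (Python with NumPy; the component values and transition matrix are arbitrary example choices):

 import numpy as np

 A = np.array([[2.0, 1.0], [1.0, 1.0]])  # transition matrix
 B = np.linalg.inv(A)

 T = np.array([[1.0, 2.0],
               [0.0, 3.0]])              # components T^k_l of a (1,1) tensor, old frame

 # The general rule, specialized to valence (1,1): T_hat^i_j = B^i_k A^l_j T^k_l
 T_hat = np.einsum('ik,lj,kl->ij', B, A, T)
 print(np.allclose(T_hat, B @ T @ A))    # equivalently, a similarity transform A^{-1} T A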


Further reading

  • Bernard Schutz, Geometrical methods of mathematical physics, Cambridge University Press, 1980.
  • Schaum's Outline of Tensor Calculus
  • Synge and Schild, Tensor Calculus, Toronto Press: Toronto, 1949


An earlier version of this article was adapted from the GFDL article on tensors at http://planetmath.org/encyclopedia/Tensor from PlanetMath, written by Robert Milson and others


