Component-free treatment of tensors

Note: This is a fairly abstract mathematical approach to tensors. If you are baffled by this article, try reading the main tensor article and the classical treatment first.


The modern component-free approach to the theory of tensors views tensors initially as abstract objects, expressing some definite type of multilinear concept. Their well-known properties can be derived from their definitions, as linear maps or more generally, and the rules for manipulating tensors arise as an extension of linear algebra to multilinear algebra.

This component-free approach is also used in differential geometry, where a physical property is described by a tensor field on a manifold and need not make reference to coordinates at all.

Definition

Formally, a tensor is defined as follows. Given any two real vector spaces V, W, their tensor product is a real vector space

<math>V \otimes W</math>

together with a bilinear map

<math>\otimes: V \times W \rightarrow V \otimes W</math>

The pair (V ⊗ W, ⊗) is characterized up to isomorphism by a universal property: any bilinear map from V × W into a vector space Z factors uniquely through ⊗ as a linear map on V ⊗ W.

If {e_i} and {f_j} are bases for V and W, the set

<math> \{ \mathbf{e}_i \otimes \mathbf{f}_j \} </math>

is a basis for this tensor product, the dimension of which is given by the product of the dimensions of V and W. This tensor product can be generalized to more than just two vector spaces.
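
For example, if V is two-dimensional with basis {e_1, e_2} and W is three-dimensional with basis {f_1, f_2, f_3}, then V ⊗ W is six-dimensional, with basis

<math> \{ \mathbf{e}_1 \otimes \mathbf{f}_1, \mathbf{e}_1 \otimes \mathbf{f}_2, \mathbf{e}_1 \otimes \mathbf{f}_3, \mathbf{e}_2 \otimes \mathbf{f}_1, \mathbf{e}_2 \otimes \mathbf{f}_2, \mathbf{e}_2 \otimes \mathbf{f}_3 \} </math>

and a general element of V ⊗ W is a linear combination of these six products.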

A tensor on the vector space V is then defined to be an element of the tensor product

<math>V \otimes V \otimes \cdots \otimes V \otimes V^* \otimes V^* \otimes \cdots \otimes V^*</math>

where V* is the dual space of V.

If there are m copies of V and n copies of V* in our product, the tensor is said to be of type (m, n), with contravariant rank m and covariant rank n. The tensors of rank zero are just the scalars R, those of contravariant rank 1 are the vectors in V, and those of covariant rank 1 are the one-forms in V* (for this reason, elements of these last two spaces are often called the contravariant and covariant vectors, respectively).

Note that the (1,1) tensors

<math>V \otimes V^*</math>

are isomorphic in a natural way to the space of linear transformations from V to V (represented by matrices once a basis is chosen). An inner product V × V → R corresponds in a natural way to a (0,2) tensor in

<math>V^* \otimes V^*</math>

called the associated metric and usually denoted g.
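
Concretely, the first isomorphism sends a simple tensor v ⊗ φ, with v in V and φ in V*, to the linear map

<math>\mathbf{w} \mapsto \varphi(\mathbf{w})\, \mathbf{v}</math>

and extends to the whole tensor product by linearity; in the same way, V* ⊗ V* is naturally identified with the space of bilinear forms on V, which is how an inner product determines the tensor g.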

In differential geometry, physics and engineering, we usually deal with tensor fields on differentiable manifolds. (The term "tensor" is sometimes used as a shorthand for "tensor field".) For instance, the curvature tensor is discussed in differential geometry and the stress-energy tensor is important in physics and engineering. Both of these are related by Einstein's theory of general relativity. In engineering, the underlying manifold will often be Euclidean 3-space. A tensor field assigns to any given point of the manifold a tensor in the space

<math>V \otimes V \otimes \cdots \otimes V \otimes V^* \otimes V^* \otimes \cdots \otimes V^*</math>

where V is the tangent space at that point and V* is the cotangent space.
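
For example, in Euclidean 3-space with Cartesian coordinates the dot product supplies the same (0,2) metric tensor at every point, while the stress inside a loaded body is described by a rank-2 tensor that genuinely varies from point to point; both are tensor fields in the sense just described.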

For any given coordinate system we have a basis {e_i} for the tangent space V (note that this basis generally varies from point to point), and a corresponding dual basis {e^i} for the cotangent space V* (see dual space). The difference between the raised and lowered indices is there to remind us of the way the components transform.
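
Here the dual basis is defined by the relations

<math>\mathbf{e}^i(\mathbf{e}_j) = \delta^i_j</math>

where δ is the Kronecker delta, equal to 1 when i = j and 0 otherwise.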

As an example, take a tensor A in the space

<math>V \otimes V \otimes V^*</math>

The components relative to our coordinate system can be written

<math>\mathbf{A} = A^{ij}_k (\mathbf{e}_i \otimes \mathbf{e}_j \otimes \mathbf{e}^k)</math>

Here we have used the Einstein notation, a convention useful when dealing with coordinate equations: when an index variable appears twice in the same term, once raised and once lowered, we sum over all of its possible values. In physics we often use the expression

<math>A^{ij}_k</math>

to represent the tensor, just as vectors are usually treated in terms of their components. This can be visualized as an n × n × n array of numbers, where n is the dimension of V. In a different coordinate system, say given to us by a basis {e_{i'}}, the components will be different. If (x^{i'}_i) is our transformation matrix (note that it is not a tensor, since it represents a change of basis rather than a geometrical entity) and (y^i_{i'}) is its inverse, then the components transform according to

<math>A^{i'j'}_{k'} = x^{i'}_i x^{j'}_j y^k_{k'} A^{ij}_k</math>

In older texts this transformation rule often serves as the definition of a tensor. Formally, this means that tensors were introduced as specific representations of the group of all changes of coordinate systems.
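
The transformation rule can also be checked numerically. The following is a minimal sketch in Python with NumPy (the dimension, the random components and the names A, x, y, A_new are chosen purely for illustration): it builds the components of a type (2,1) tensor, a change-of-basis matrix and its inverse, and applies the formula above with numpy.einsum, which implements the Einstein summation convention.

  import numpy as np

  n = 3                                      # dimension of V (illustrative choice)
  A = np.random.rand(n, n, n)                # components A^{ij}_k in the original basis
  x = np.random.rand(n, n) + n * np.eye(n)   # transformation matrix (x^{i'}_i), kept invertible
  y = np.linalg.inv(x)                       # its inverse (y^i_{i'})

  # A^{i'j'}_{k'} = x^{i'}_i x^{j'}_j y^k_{k'} A^{ij}_k, summing over i, j and k
  A_new = np.einsum('Ii,Jj,kK,ijk->IJK', x, x, y, A)

  # Transforming back with the roles of x and y exchanged recovers the original components.
  A_back = np.einsum('Ii,Jj,kK,ijk->IJK', y, y, x, A_new)
  assert np.allclose(A_back, A)

Transforming back and forth in this way is a convenient check that the rule is consistent: applying the formula with x and y exchanged undoes the change of basis.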

