
3.7.11 Advanced Topic: Tensors

Tensors are mathematical objects that give generalizations of vectors and matrices. In Mathematica, a tensor is represented as a set of lists, nested to a certain number of levels. The nesting level is the rank of the tensor.

{a, b, ...}   a vector (rank 1 tensor)
{{a, b, ...}, {ap, bp, ...}, ...}   a matrix (rank 2 tensor)
{{{a, ...}, ...}, ...}   a rank k tensor, with k levels of nesting

Interpretations of nested lists.

A tensor of rank k is essentially a k-dimensional table of values. To be a true rank k tensor, it must be possible to arrange the elements in the table in a k-dimensional cuboidal array. There can be no holes or protrusions in the cuboid.

The indices that specify a particular element in the tensor correspond to the coordinates in the cuboid. The dimensions of the tensor correspond to the side lengths of the cuboid.
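For example, one way to check this requirement is to test whether a nested list forms a full rectangular array; the particular lists below are made up purely for illustration.

ArrayQ[{{1, 2}, {3, 4}}]   (* True: the elements fill a 2×2 cuboid *)
ArrayQ[{{1, 2}, {3}}]      (* False: the second row is shorter, so this is not a tensor *)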

One simple way that a rank k tensor can arise is in giving a table of values for a function of k variables. In physics, the tensors that occur typically have indices which run over the possible directions in space or spacetime. Notice, however, that there is no built-in notion of covariant and contravariant tensor indices in Mathematica: you have to set these up explicitly using metric tensors.
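As a rough sketch of what setting this up explicitly might look like (the metric g and the symbolic components v0, ..., v3 below are purely illustrative, not built-in objects), you can represent a metric as an ordinary matrix and lower an index by contracting with it.

g = DiagonalMatrix[{-1, 1, 1, 1}];   (* an example metric, stored as an ordinary matrix *)
v = {v0, v1, v2, v3};                (* contravariant components v^i *)
g . v                                (* covariant components v_i = g_ij v^j *)
(* gives {-v0, v1, v2, v3} *)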

Table[f, {i1, n1}, {i2, n2}, ...]   build an n1×n2×... tensor whose elements are the values of f at each set of indices
Array[a, {n1, n2, ...}]   build an n1×n2×... tensor with elements a[i1, i2, ...]
Dimensions[t]   give a list of the dimensions of a tensor
ArrayDepth[t]   find the rank of a tensor

Functions for creating and testing the structure of tensors.

Here is a 2×3×2 tensor.

In[1]:= t = Table[i1+i2 i3, {i1, 2}, {i2, 3}, {i3, 2}]

Out[1]= {{{2, 3}, {3, 5}, {4, 7}}, {{3, 4}, {4, 6}, {5, 8}}}

This is another way to produce the same tensor.

In[2]:= Array[(#1 + #2 #3)&, {2, 3, 2}]

Out[2]= {{{2, 3}, {3, 5}, {4, 7}}, {{3, 4}, {4, 6}, {5, 8}}}

MatrixForm displays the elements of the tensor in a two-dimensional array. You can think of the array as being a 2×3 matrix of column vectors.

In[3]:= MatrixForm[ t ]

Out[3]//MatrixForm=

Dimensions gives the dimensions of the tensor.

In[4]:= Dimensions[ t ]

Out[4]= {2, 3, 2}

Here is the 1, 1, 1 element of the tensor.

In[5]:= t[[1, 1, 1]]

Out[5]= 2

ArrayDepth gives the rank of the tensor.

In[6]:= ArrayDepth[ t ]

Out[6]= 3

The rank of a tensor is equal to the number of indices needed to specify each element. You can pick out subtensors by using a smaller number of indices.
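For example, with the tensor t from In[1] above, giving fewer indices picks out successively larger subtensors.

t[[1]]         (* a 3×2 subtensor: {{2, 3}, {3, 5}, {4, 7}} *)
t[[1, 2]]      (* a length 2 vector: {3, 5} *)
t[[1, 2, 1]]   (* a single element: 3 *)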

Transpose[t]   interchange the first two indices of a tensor
Transpose[t, {p1, p2, ...}]   reorder the indices of a tensor according to {p1, p2, ...}
Tr[t, f]   form a generalized trace of the tensor t
Outer[f, t, u]   form the generalized outer product of the tensors t and u with "multiplication operator" f
Inner[f, t, u, g]   form the generalized inner product with "multiplication operator" f and "addition operator" g

Tensor manipulation operations.

You can think of a rank k tensor as having k "slots" into which you insert indices. Applying Transpose is effectively a way of reordering these slots. If you think of the elements of a tensor as forming a k-dimensional cuboid, you can view Transpose as effectively rotating (and possibly reflecting) the cuboid.

In the most general case, Transpose allows you to specify an arbitrary reordering to apply to the indices of a tensor. The function Transpose[T, {p1, p2, ..., pr}] gives you a new tensor T' such that the value of T'[[i_p1, i_p2, ..., i_pr]] is given by T[[i1, i2, ..., ir]].

If you originally had an n_p1×n_p2×...×n_pr tensor, then by applying Transpose, you will get an n1×n2×...×nr tensor.
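As an illustration of this rule (the permutation {3, 1, 2} is chosen arbitrarily), transposing a 2×3×4 array so that its first, second and third levels become levels 3, 1 and 2 of the result gives a 3×4×2 array.

Dimensions[Transpose[Array[a, {2, 3, 4}], {3, 1, 2}]]
(* gives {3, 4, 2} *)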

Here is a matrix that you can also think of as a 2×3 tensor.

In[7]:= m = {{a, b, c}, {ap, bp, cp}}

Out[7]= {{a, b, c}, {ap, bp, cp}}

Applying Transpose gives you a 3×2 tensor. Transpose effectively interchanges the two "slots" for tensor indices.

In[8]:= mt = Transpose[m]

Out[8]= {{a, ap}, {b, bp}, {c, cp}}

The element m[[2, 3]] in the original tensor becomes the element mt[[3, 2]] in the transposed tensor.

In[9]:= { m[[2, 3]], mt[[3, 2]] }

Out[9]= {cp, cp}

This produces a 2×3×1×2 tensor.

In[10]:= t = Array[a, {2, 3, 1, 2}]

Out[10]= {{{{a[1, 1, 1, 1], a[1, 1, 1, 2]}}, {{a[1, 2, 1, 1], a[1, 2, 1, 2]}}, {{a[1, 3, 1, 1], a[1, 3, 1, 2]}}}, {{{a[2, 1, 1, 1], a[2, 1, 1, 2]}}, {{a[2, 2, 1, 1], a[2, 2, 1, 2]}}, {{a[2, 3, 1, 1], a[2, 3, 1, 2]}}}}

This transposes the first two levels of t.

In[11]:= tt1 = Transpose[t]

Out[11]= {{{{a[1, 1, 1, 1], a[1, 1, 1, 2]}}, {{a[2, 1, 1, 1], a[2, 1, 1, 2]}}}, {{{a[1, 2, 1, 1], a[1, 2, 1, 2]}}, {{a[2, 2, 1, 1], a[2, 2, 1, 2]}}}, {{{a[1, 3, 1, 1], a[1, 3, 1, 2]}}, {{a[2, 3, 1, 1], a[2, 3, 1, 2]}}}}

The result is a 3×2×1×2 tensor.

In[12]:= Dimensions[ tt1 ]

Out[12]= {3, 2, 1, 2}

If you have a tensor that contains lists of the same length at different levels, then you can use Transpose to effectively collapse different levels.

This collapses all three levels, giving a list of the elements on the "main diagonal".

In[13]:= Transpose[Array[a, {3, 3, 3}], {1, 1, 1}]

Out[13]= {a[1, 1, 1], a[2, 2, 2], a[3, 3, 3]}

This collapses only the first two levels.

In[14]:= Transpose[Array[a, {2, 2, 2}], {1, 1}]

Out[14]= {{a[1, 1, 1], a[1, 1, 2]}, {a[2, 2, 1], a[2, 2, 2]}}

You can also use Tr to extract diagonal elements of a tensor.

This forms the ordinary trace of a rank 3 tensor.

In[15]:= Tr[Array[a, {3, 3, 3}]]

Out[15]= a[1, 1, 1] + a[2, 2, 2] + a[3, 3, 3]

Here is a generalized trace, with elements combined into a list.

In[16]:= Tr[Array[a, {3, 3, 3}], List]

Out[16]= {a[1, 1, 1], a[2, 2, 2], a[3, 3, 3]}

This combines diagonal elements only down to level 2.

In[17]:= Tr[Array[a, {3, 3, 3}], List, 2]

Out[17]= {{a[1, 1, 1], a[1, 1, 2], a[1, 1, 3]}, {a[2, 2, 1], a[2, 2, 2], a[2, 2, 3]}, {a[3, 3, 1], a[3, 3, 2], a[3, 3, 3]}}

Outer products, and their generalizations, are a way of building higher-rank tensors from lower-rank ones. Outer products are also sometimes known as direct, tensor or Kronecker products.

From a structural point of view, the tensor you get from Outer[f, t, u] has a copy of the structure of u inserted at the "position" of each element in t. The elements in the resulting structure are obtained by combining elements of t and u using the function f.

This gives the "outer f" of two vectors. The result is a matrix.

In[18]:= Outer[ f, {a, b}, {ap, bp} ]

Out[18]= {{f[a, ap], f[a, bp]}, {f[b, ap], f[b, bp]}}

If you take the "outer f" of a length 3 vector with a length 2 vector, you get a 3×2 matrix.

In[19]:= Outer[ f, {a, b, c}, {ap, bp} ]

Out[19]= {{f[a, ap], f[a, bp]}, {f[b, ap], f[b, bp]}, {f[c, ap], f[c, bp]}}

The result of taking the "outer f" of a 2×2 matrix and a length 3 vector is a 2×2×3 tensor.

In[20]:= Outer[ f, {{m11, m12}, {m21, m22}}, {a, b, c} ]

Out[20]= {{{f[m11, a], f[m11, b], f[m11, c]}, {f[m12, a], f[m12, b], f[m12, c]}}, {{f[m21, a], f[m21, b], f[m21, c]}, {f[m22, a], f[m22, b], f[m22, c]}}}

Here are the dimensions of the tensor.

In[21]:= Dimensions[ % ]

Out[21]= {2, 2, 3}

If you take the generalized outer product of an m1×m2×...×mr tensor and an n1×n2×...×ns tensor, you get an m1×m2×...×mr×n1×n2×...×ns tensor. If the original tensors have ranks r and s, your result will be a rank r+s tensor.

In terms of indices, the result of applying Outer to two tensors with elements T[[i1, i2, ..., ir]] and U[[j1, j2, ..., js]] is the tensor with elements f[T[[i1, ..., ir]], U[[j1, ..., js]]].
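As a quick check of this rule (the symbolic arrays below are arbitrary examples), the outer product of a rank 2 array and a rank 3 array has rank 5.

Dimensions[Outer[f, Array[u, {2, 2}], Array[v, {3, 3, 3}]]]
(* gives {2, 2, 3, 3, 3} *)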

In doing standard tensor calculations, the most common function f to use in Outer is Times, corresponding to the standard outer product.

Particularly in doing combinatorial calculations, however, it is often convenient to take f to be List. Using Outer, you can then get combinations of all possible elements in one tensor, with all possible elements in the other.
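For example, with two short vectors made up for illustration:

Outer[Times, {1, 2}, {x, y}]   (* the standard outer product *)
(* gives {{x, y}, {2 x, 2 y}} *)

Outer[List, {1, 2}, {x, y}]    (* all pairings of elements, as lists *)
(* gives {{{1, x}, {1, y}}, {{2, x}, {2, y}}} *)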

In constructing Outer[f, t, u] you effectively insert a copy of u at every point in t. To form Inner[f, t, u], you effectively combine and collapse the last dimension of t and the first dimension of u. The idea is to take an m1×...×mr tensor and an n1×...×ns tensor, with mr equal to n1, and get an m1×...×m(r-1)×n2×...×ns tensor as the result.

The simplest examples are with vectors. If you apply Inner to two vectors of equal length, you get a scalar. Inner[f, v1, v2, g] gives a generalization of the usual scalar product, with f playing the role of multiplication and g playing the role of addition.

This gives a generalization of the standard scalar product of two vectors.

In[22]:= Inner[f, {a, b, c}, {ap, bp, cp}, g]

Out[22]= g[f[a, ap], f[b, bp], f[c, cp]]

This gives a generalization of a matrix product.

In[23]:= Inner[f, {{1, 2}, {3, 4}}, {{a, b}, {c, d}}, g]

Out[23]= {{g[f[1, a], f[2, c]], g[f[1, b], f[2, d]]}, {g[f[3, a], f[4, c]], g[f[3, b], f[4, d]]}}

Here is a 3×2×2 tensor.

In[24]:= a = Array[1&, {3, 2, 2}]

Out[24]= {{{1, 1}, {1, 1}}, {{1, 1}, {1, 1}}, {{1, 1}, {1, 1}}}

Here is a 2×3×1 tensor.

In[25]:= b = Array[2&, {2, 3, 1}]

Out[25]= {{{2}, {2}, {2}}, {{2}, {2}, {2}}}

This gives a 3×2×3×1 tensor.

In[26]:= a . b

Out[26]= {{{{4}, {4}, {4}}, {{4}, {4}, {4}}}, {{{4}, {4}, {4}}, {{4}, {4}, {4}}}, {{{4}, {4}, {4}}, {{4}, {4}, {4}}}}

Here are the dimensions of the result.

In[27]:= Dimensions[ % ]

Out[27]= {3, 2, 3, 1}

You can think of Inner as performing a "contraction" of the last index of one tensor with the first index of another. If you want to perform contractions across other pairs of indices, you can do so by first transposing the appropriate indices into the first or last position, then applying Inner, and then transposing the result back.
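Here is a minimal sketch of that idea (the symbolic arrays p and q are made up for this example): to contract the first index of p with the first index of q, transpose p and then use Dot.

p = Array[u, {2, 3}];
q = Array[v, {2, 4}];
Dimensions[Transpose[p] . q]
(* gives {3, 4}: the two length 2 first indices have been contracted *)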

In many applications of tensors, you need to insert signs to implement antisymmetry. The function Signature[{i1, i2, ...}], which gives the signature of a permutation, is often useful for this purpose.
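For example, Signature gives -1 for an odd permutation and 0 when an index is repeated, so it can be used to build the rank 3 Levi-Civita (epsilon) tensor; the name eps below is just illustrative.

Signature[{1, 3, 2}]
(* gives -1 *)

eps = Array[Signature[{##}] &, {3, 3, 3}];
{eps[[1, 2, 3]], eps[[2, 1, 3]], eps[[1, 1, 2]]}
(* gives {1, -1, 0} *)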

Outer[f, t, u, ..., n]   treat only sublists at level n in t, u, ... as separate elements

Treating only certain sublists in tensors as separate elements.

Here every single symbol is treated as a separate element.

In[28]:= Outer[f, {{i, j}, {k, l}}, {x, y}]

Out[28]= {{{f[i, x], f[i, y]}, {f[j, x], f[j, y]}}, {{f[k, x], f[k, y]}, {f[l, x], f[l, y]}}}

But here only sublists at level 1 are treated as separate elements.

In[29]:= Outer[f, {{i, j}, {k, l}}, {x, y}, 1]

Out[29]= {{f[{i, j}, x], f[{i, j}, y]}, {f[{k, l}, x], f[{k, l}, y]}}