hand side we have
$$
\begin{aligned}
&T_{k_1 \ldots k_r}{}^{l_1 \ldots l_s}\, e^{k_1} \otimes \cdots \otimes e^{k_r} \otimes e_{l_1} \otimes \cdots \otimes e_{l_s}\bigl(e_{i_1}, \ldots, e_{i_r}, e^{j_1}, \ldots, e^{j_s}\bigr) \\
&\quad = T_{k_1 \ldots k_r}{}^{l_1 \ldots l_s}\, e^{k_1}(e_{i_1}) \cdots e^{k_r}(e_{i_r})\, e_{l_1}\bigl(e^{j_1}\bigr) \cdots e_{l_s}\bigl(e^{j_s}\bigr) \\
&\quad = T_{k_1 \ldots k_r}{}^{l_1 \ldots l_s}\, \delta^{k_1}_{i_1} \cdots \delta^{k_r}_{i_r}\, \delta^{j_1}_{l_1} \cdots \delta^{j_s}_{l_s} \\
&\quad = T_{i_1 \ldots i_r}{}^{j_1 \ldots j_s}
\end{aligned}
\tag{3.56}
$$
so our claim is true. Thus, for instance, a (2, 0) tensor like the Minkowski metric can be written as $\eta = \eta_{\mu\nu}\, e^{\mu} \otimes e^{\nu}$. Conversely, a tensor product like $f \otimes g = f_i g_j\, e^i \otimes e^j \in \mathcal{T}^2_0$ thus has components $(f \otimes g)_{ij} = f_i g_j$. Notice that we now have two ways of thinking about components: either as the values of the tensor on sets of basis vectors (as in (3.5)) or as the expansion coefficients in the given basis (as in (3.55)). This duality of perspective was pointed out in the case of vectors just above Exercise 2.10, and it is essential that you be comfortable thinking about components in either way.
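If it helps to see this concretely, here is a minimal NumPy sketch of the component formula $(f \otimes g)_{ij} = f_i g_j$; the component arrays `f` and `g` below are arbitrary and purely illustrative.

```python
import numpy as np

# Illustrative components of two dual vectors f = f_i e^i and g = g_j e^j
# (the particular numbers are arbitrary; only the pattern matters).
f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])

# Components of the (2,0) tensor f (x) g: (f (x) g)_{ij} = f_i g_j,
# which is just the outer product of the component arrays.
T = np.outer(f, g)

# Check against the "value on basis vectors" picture: feeding the basis
# vectors e_i, e_j into f (x) g gives f(e_i) g(e_j) = f_i g_j.
i, j = 0, 2
assert T[i, j] == f[i] * g[j]
print(T)
```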
Exercise 3.13 Compute the dimension of $\mathcal{T}^r_s$.
Exercise 3.14 Let $T_1$ and $T_2$ be tensors of type $(r_1, s_1)$ and $(r_2, s_2)$, respectively, on a vector space $V$. Show that $T_1 \otimes T_2$ can be viewed as an $(r_1 + r_2, s_1 + s_2)$ tensor, so that the tensor product of two tensors is again a tensor, justifying the nomenclature.
One important operation on tensors which we are now in a position to discuss is that of contraction, which is the generalization of the trace functional to tensors of arbitrary rank: Given $T \in \mathcal{T}^r_s(V)$ with expansion
$$
T = T_{i_1 \ldots i_r}{}^{j_1 \ldots j_s}\, e^{i_1} \otimes \cdots \otimes e^{i_r} \otimes e_{j_1} \otimes \cdots \otimes e_{j_s}
\tag{3.57}
$$
we can define a contraction of $T$ to be any $(r-1, s-1)$ tensor resulting from feeding $e_i$ into one of the arguments, $e^i$ into another and then summing over $i$ as implied by the summation convention. For instance, if we feed $e_i$ into the $r$th slot and $e^i$ into the $(r+s)$th slot and sum, we get the $(r-1, s-1)$ tensor $\tilde{T}$ defined as
$$
\tilde{T}\bigl(v_1, \ldots, v_{r-1}, f_1, \ldots, f_{s-1}\bigr) \equiv T\bigl(v_1, \ldots, v_{r-1}, e_i, f_1, \ldots, f_{s-1}, e^i\bigr).
$$
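In the simplest case $r = s = 1$, this contraction is just a number,
$$
\tilde{T} = T\bigl(e_i, e^i\bigr) = T_i{}^i ,
$$
which is precisely the trace of the matrix of components $T_i{}^j$, in keeping with the remark that contraction generalizes the trace.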
You may be suspicious that $\tilde{T}$ depends on our choice of basis, but Exercise 3.15 shows that contraction is in fact well-defined. Notice that the components of $\tilde{T}$ are
$$
\tilde{T}_{i_1 \ldots i_{r-1}}{}^{j_1 \ldots j_{s-1}} = T_{i_1 \ldots i_{r-1}\, l}{}^{j_1 \ldots j_{s-1}\, l}.
$$
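In component form the contraction is just a sum over a paired index, which is easy to verify numerically. Below is a minimal NumPy sketch for the case $(r, s) = (2, 1)$, i.e. components $T_{i_1 i_2}{}^{j_1}$ stored as a rank-3 array; the random array is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Components T_{i1 i2}^{j1} of an illustrative (2,1) tensor, stored with
# axes ordered (i1, i2, j1).
T = rng.normal(size=(n, n, n))

# Contract the last covariant index (i2) with the contravariant index (j1):
# T~_{i1} = T_{i1 l}^{l}  (sum over l), giving a (1,0) tensor.
T_tilde = np.einsum('ill->i', T)

# Same thing written as an explicit sum over the paired index.
check = np.array([sum(T[i, l, l] for l in range(n)) for i in range(n)])
assert np.allclose(T_tilde, check)
print(T_tilde)
```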
Similar contractions can be performed on any two arguments of $T$ provided one argument eats vectors and the other dual vectors. In terms of components, a contraction can be taken with respect to any pair of indices provided that one is covariant and the other contravariant. If we are working on a vector space equipped with a metric $g$, then we can use the metric to raise and lower indices and so can contract on any pair of indices, even if they are both covariant or contravariant. For instance, we can contract a (2, 0) tensor $T$ with components $T_{ij}$ as $\tilde{T} = T^i{}_i = g^{ij} T_{ij}$, which one can interpret as just the trace of the associated linear operator (or (1, 1) tensor).
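As a numerical illustration of this last point, the following sketch checks that $g^{ij} T_{ij}$ equals the ordinary matrix trace of the associated (1, 1) tensor $T^i{}_j = g^{ik} T_{kj}$. The components of $T$ are arbitrary, and the mostly-minus signature used for the metric is just one possible choice, made here only for concreteness.

```python
import numpy as np

# Minkowski metric components g_{mu nu} in the (+,-,-,-) convention
# (the signature choice is only for illustration).
g_lower = np.diag([1.0, -1.0, -1.0, -1.0])
g_upper = np.linalg.inv(g_lower)      # g^{mu nu}

# Illustrative components T_{ij} of a (2,0) tensor.
rng = np.random.default_rng(1)
T_lower = rng.normal(size=(4, 4))

# Metric contraction: T~ = g^{ij} T_{ij}.
T_contracted = np.einsum('ij,ij->', g_upper, T_lower)

# Raise the first index to get the (1,1) tensor T^i_j = g^{ik} T_{kj};
# its ordinary matrix trace gives the same number.
T_mixed = g_upper @ T_lower
assert np.isclose(T_contracted, np.trace(T_mixed))
print(T_contracted)
```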