526 THE LINEAR PRODUCTS AND OPERATIONS CHAPTER 20
The product matrices need to be computed only once, to bootstrap the implementation.
The results can be stored symbolically, leading to matrices such as [[A_G]] in Figure 20.1.
The same algorithm can compute the matrix for any product derived from the geometric
product. Figure 20.1 also shows matrices for the outer product ([[A_O]]) and the
left contraction ([[A_L]]). Note how these matrices are mostly identical to the geometric
product matrix [[A_G]], but with zeros at specific entries. The reason is of course that these
products are merely selections of certain grade parts of the more encompassing geometric
product.
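This grade-based selection is easy to state with bitmap-encoded basis blades. The following sketch (hypothetical class and method names, not code from the book) tests whether a geometric-product entry survives in the outer product or left contraction matrix, assuming the grade of a bitmap-encoded blade is its number of set bits:

```java
// Sketch: which geometric-product entries survive in the derived products.
// Basis blades are encoded as bitmaps; e_i e_j lands on blade i XOR j.
public class GradeSelect {

    // grade of a bitmap-encoded basis blade = number of set bits
    static int grade(int bitmap) {
        return Integer.bitCount(bitmap);
    }

    // true iff e_i e_j contributes to the outer product e_i ^ e_j:
    // the grades must add, i.e., the blades share no common vector factor
    static boolean outerKeeps(int i, int j) {
        return grade(i ^ j) == grade(i) + grade(j); // equivalent to (i & j) == 0
    }

    // true iff e_i e_j contributes to the left contraction:
    // the result grade must be grade(j) - grade(i), i.e., i is contained in j
    static boolean leftContractionKeeps(int i, int j) {
        return grade(i ^ j) == grade(j) - grade(i);
    }

    public static void main(String[] args) {
        System.out.println(outerKeeps(0b01, 0b10)); // e1 ^ e2: prints true
        System.out.println(outerKeeps(0b01, 0b01)); // e1 ^ e1: prints false
        System.out.println(leftContractionKeeps(0b01, 0b11)); // e1 into e12: prints true
    }
}
```

Zeroing the entries that fail such a test turns the symbolic matrix [[A_G]] into [[A_O]] or [[A_L]].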
After the symbolic matrices have been initialized, computing an actual product C = A B is
reduced to creating a real matrix [[A_G]] according to the appropriate symbolic matrix and
the coordinates of [[A]], and computing [[C]] = [[A_G]] [[B]]. So computing the products of
basis blades as in (20.1) is only required during the initialization step, during which the
symbolic matrices are computed. This is fine in a general Clifford algebra, in which A may be
an arbitrary element of many grades. However, in geometric algebra proper, elements tend
to be rather sparse on this basis, typically being restricted to a single grade for objects
(which are represented by blades), or to only odd or even grades for operators (which are
represented by versors). The resulting matrices are then sparse, and some kind of opti-
mization in their computation should prevent the needless and costly evaluation of many
zero results in their multiplication.
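The matrix approach can be sketched in a few lines for a small algebra. The code below (a minimal illustration with hypothetical names, not the book's implementation) builds the real matrix [[A_G]] for the geometric product over the basis {1, e1, e2, e1^e2} of a 2-D Euclidean space, then evaluates C = A B as [[C]] = [[A_G]] [[B]]:

```java
// Sketch: geometric product via a multiplication matrix, for R^2 (Euclidean
// metric). Basis blades are bitmap-encoded: 1 -> 00, e1 -> 01, e2 -> 10,
// e1^e2 -> 11; the product e_i e_j lands on blade i XOR j with a sign.
public class GeomMatrix {
    static final int DIM = 2, SIZE = 1 << DIM; // 2^n basis blades

    // sign of reordering the vector factors of e_i e_j into canonical order
    static int canonicalSign(int a, int b) {
        int sign = 1;
        for (a >>= 1; a != 0; a >>= 1)
            if ((Integer.bitCount(a & b) & 1) != 0) sign = -sign;
        return sign;
    }

    // build [[A_G]]: entry (i XOR j, j) collects sign(i,j) * A_i,
    // so that ([[A_G]] [[B]])_k sums sign(i,j) * A_i * B_j over i XOR j = k
    static double[][] productMatrix(double[] A) {
        double[][] M = new double[SIZE][SIZE];
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++)
                M[i ^ j][j] += canonicalSign(i, j) * A[i];
        return M;
    }

    // ordinary matrix-vector product: [[C]] = [[A_G]] [[B]]
    static double[] multiply(double[][] M, double[] B) {
        double[] C = new double[SIZE];
        for (int k = 0; k < SIZE; k++)
            for (int j = 0; j < SIZE; j++)
                C[k] += M[k][j] * B[j];
        return C;
    }

    public static void main(String[] args) {
        double[] A = {0, 1, 0, 0}; // A = e1
        double[] B = {0, 0, 1, 0}; // B = e2
        double[] C = multiply(productMatrix(A), B);
        System.out.println(C[3]); // e1 e2 = e1^e2, prints 1.0
    }
}
```

In an actual implementation the inner product signs would come from the metric and the matrix would be built once, symbolically; only the substitution of the coordinates of [[A]] happens per product.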
20.2 THE LIST OF BASIS BLADES APPROACH
Instead of representing multivectors as 2^n vectors and using matrices to implement the
products, we can represent a multivector by a list of basis blades. That is how our reference
implementation works. The linear products and operations are distributive over addition;
for example, for the left contraction and the reverse we have
(a_1 + a_2 + ··· + a_n) ⌋ (b_1 + b_2 + ··· + b_m) = ∑_{i=1}^{n} ∑_{j=1}^{m} a_i ⌋ b_j ,

(b_1 + b_2 + ··· + b_n)∼ = ∑_{i=1}^{n} b_i∼ .
Therefore, we can straightforwardly implement any linear product or operation for multi-
vectors using the implementations for basis blades from the previous chapter. As an exam-
ple, Figure 20.2 gives Java code from the reference implementation that computes the
outer product.
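The distributive double loop can be sketched as follows. This is a self-contained illustration with hypothetical names, not the reference implementation of Figure 20.2; it represents a multivector as a list of weighted, bitmap-encoded basis blades and distributes the outer product over both lists:

```java
// Sketch: the list-of-basis-blades representation, with the outer product
// implemented by distributivity: (sum a_i) ^ (sum b_j) = sum_i sum_j a_i ^ b_j.
import java.util.ArrayList;
import java.util.List;

public class ListMultivector {
    static class Blade {
        final int bitmap;     // bitmap-encoded basis blade, e.g. e1 -> 0b01
        final double weight;  // coordinate on that basis blade
        Blade(int bitmap, double weight) { this.bitmap = bitmap; this.weight = weight; }
    }

    // sign of reordering the vector factors into canonical order
    static double sign(int a, int b) {
        int swaps = 0;
        for (a >>= 1; a != 0; a >>= 1) swaps += Integer.bitCount(a & b);
        return (swaps & 1) == 0 ? 1.0 : -1.0;
    }

    // outer product of two basis blades: zero when they share a vector factor
    static Blade op(Blade a, Blade b) {
        if ((a.bitmap & b.bitmap) != 0) return new Blade(0, 0.0);
        return new Blade(a.bitmap ^ b.bitmap, a.weight * b.weight * sign(a.bitmap, b.bitmap));
    }

    // distribute over both argument lists; drop the zero terms
    static List<Blade> outerProduct(List<Blade> A, List<Blade> B) {
        List<Blade> result = new ArrayList<>();
        for (Blade a : A)
            for (Blade b : B) {
                Blade c = op(a, b);
                if (c.weight != 0.0) result.add(c);
            }
        return result; // a real implementation would also merge like blades
    }

    public static void main(String[] args) {
        List<Blade> A = List.of(new Blade(0b01, 2.0));                         // 2 e1
        List<Blade> B = List.of(new Blade(0b10, 3.0), new Blade(0b01, 5.0));   // 3 e2 + 5 e1
        List<Blade> C = outerProduct(A, B);                                    // 6 e1^e2
        System.out.println(C.get(0).bitmap + " " + C.get(0).weight);           // prints 3 6.0
    }
}
```

Any of the other linear products fits the same skeleton; only the per-blade product `op` changes.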
The explicit loops over basis blades and the actual basis blade product evaluations are
quite expensive computationally, and implementations based on this principle are about
one or two orders of magnitude slower than implementations that expand the loops and
optimize them. We will get back to this in Chapter 22.