Efficient Algorithms for Factorization and Join of Blades 475
and project the factors s_i onto either A or B. If those projections are not zero, they are factors of meet(A, B). This method is due to [2].
Using this approach, one may also compute both the join and the meet simultaneously. Since factors of the dual of the join may be obtained as the rejection of the s_i from A or B (i.e., the operation (s_i ∧ A)A^{-1}, see [6]), such an algorithm can compute the join and the meet at the same time and terminate as soon as either is fully known.
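For a blade representing a subspace, the projection (s_i ⌋ A)A^{-1} and the rejection (s_i ∧ A)A^{-1} of a vector reduce to ordinary linear-algebra projection onto, and rejection from, that subspace. A minimal NumPy sketch of this split (the function name and matrix representation of the blade's factors are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def project_reject(s, A):
    """Split vector s into its projection onto and rejection from the
    subspace spanned by the columns of A. For a blade with these
    columns as factors, the projection corresponds to (s . A) A^-1
    and the rejection to (s ^ A) A^-1."""
    Q, _ = np.linalg.qr(A)   # orthonormal basis of the subspace
    proj = Q @ (Q.T @ s)     # component of s inside the subspace
    rej = s - proj           # component orthogonal to it
    return proj, rej

# A factor s_i whose projection onto A is (numerically) nonzero
# would be kept as a factor of meet(A, B).
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # x-y plane in R^3
s = np.array([1.0, 2.0, 3.0])
proj, rej = project_reject(s, A)  # proj = [1, 2, 0], rej = [0, 0, 3]
```

A nonzero rejection, dually, contributes a factor to the join, which is what allows both products to be assembled in a single pass over the s_i.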
We implemented this idea using the same code generation techniques as used for
the FastJoin algorithms, expecting it to be somewhat faster than the StableFastJoin
algorithm. However, it turned out to be slightly slower (5% to 15%) and required
about 1.5 times as much code to be generated.
One of the reasons for it being slower is that a full evaluation of the (dual of the)
delta product is always required. By contrast, the StableFastJoin algorithm merely
computes the grade of the delta product (not its numerical value), and only when
“in doubt.” One of the reasons for the larger code size is that one needs both a
meet-from-join and a join-from-meet function.
Besides being slower, the implementation of the simultaneous meet and join algorithm also takes more effort because the algorithm is more complex. For all these
reasons, we did not include a detailed description of it in this paper.
6 Conclusion
Using our FastFactorization algorithm, the outer factorization of a k-blade in V^n is a computationally trivial operation. It amounts to copying and possibly negating selected coordinates of the input blade into the appropriate elements of the factors. Implemented as such, factorization is only about five times slower than an outer product in the same algebra, at least in the low-dimensional spaces (less than 7-D) for which we benchmarked. The O(n^{-1/2} 2^n) time complexity of the algorithms is determined by the number of coordinates of the input k-blade, which becomes exceedingly large in high-dimensional spaces.
The join and meet of blades are relatively expensive products, due to their nonlinearity. However, when efficiently implemented through our FastJoin algorithm, their cost is only on the order of 10 times that of an outer product in the same algebra, compared to 100 times in previous research. Again, these figures are valid only for low-dimensional spaces. The O(n^{3/2} 2^n) time complexity makes clear that in high-dimensional spaces, one should use a multiplicative representation of blades and use classical linear algebra algorithms like QR (which has O(n^3) time complexity) to implement the join, as in [10]. Our StableFastJoin algorithm, which takes both grade and numerical stability into account, is just 10% slower than the FastJoin algorithm.
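To illustrate the multiplicative route in high dimensions: with a blade stored as a matrix whose columns are its vector factors, the join of two blades is the span of their combined factors, and its grade is the rank of the stacked matrix. The sketch below uses an SVD for the rank-revealing step (NumPy's plain QR does not pivot columns; a column-pivoted QR, as the text suggests, serves equally and is likewise O(n^3)). The function name and tolerance are our own assumptions, not the implementation of [10]:

```python
import numpy as np

def join_factors(A, B, tol=1e-12):
    """Orthonormal vector factors of join(A, B), where each blade is
    stored multiplicatively as a matrix of its vector factors."""
    M = np.hstack([A, B])
    # Left singular vectors span the column space; singular values
    # above the tolerance count the grade of the join.
    U, svals, _ = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(svals > tol * svals[0]))
    return U[:, :r]

# Two planes in R^3 sharing the line of e2: their join is all of R^3.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2}
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # span{e2, e3}
J = join_factors(A, B)
grade_join = J.shape[1]  # 3
```

The grade of the meet then follows for free from the identity grade(meet) = grade(A) + grade(B) − grade(join), here 2 + 2 − 3 = 1, the common line.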
These speeds are obtained at the expense of generating efficient code that spells out the operations for certain combinations of the basis blades and grades in the arguments. While efficient, this is only truly possible for rather low-dimensional spaces, since the amount of code scales as O(n^2 2^{n-1}) for FastFactorization and O(n^2 2^n) for the FastJoin algorithm.