Exterior Algebra 2: the Inner Product
(See this previous post for some of the notations used here.)
(Not intended for any particular audience. Mostly I just wanted to write down these derivations in a presentable way because I haven’t seen them from this direction before.)
(Vector spaces are assumed to be finite-dimensional and over $\mathbb{R}$.)
For reasons which will be explained a few posts in the future, I use the symbol for the exterior product instead of the more common .
Exterior algebra is obviously useful any time you’re anywhere near a cross product or determinant. I want to show how it also comes with an inner product which can make certain formulas in the world of vectors and matrices vastly easier to prove.
1. The Inner Product
Euclidean vectors have an inner product that we use all the time. Multivectors are just vectors. What’s theirs?
However we define the inner product (or ‘dot product’; I tend to use both names) on multivectors, we’re going to want it to act a lot like it does on vectors. Particularly, it seems like

$$(\mathbf{e}_1 \wedge \mathbf{e}_2) \cdot (\mathbf{e}_1 \wedge \mathbf{e}_2) = 1$$

ought to hold. More generally, for two multivectors $\alpha, \beta$ in the same space, we should be able to sum over their components the same way we do for vectors with $\mathbf{u} \cdot \mathbf{v} = \sum_i u_i v_i$:

$$\alpha \cdot \beta = \sum_I \alpha_I \beta_I$$

If it doesn’t look sorta like that, we’re not going to have much intuition for it. Fortunately, this does turn out to be possible, although the usual presentation looks… a little different.
Here’s the standard way to define inner products on the exterior algebra $\Lambda V$, extending the inner product defined on the underlying vector space $V$:

$$(\mathbf{a}_1 \wedge \mathbf{a}_2 \wedge \cdots \wedge \mathbf{a}_k) \cdot (\mathbf{b}_1 \wedge \mathbf{b}_2 \wedge \cdots \wedge \mathbf{b}_k) = \det (\mathbf{a}_i \cdot \mathbf{b}_j) \tag{1}$$

This is then extended linearly if either argument is a sum of multivectors. I find this expression pretty confusing. It turns out to be right, but it takes a while to see why.

The left side of this is the inner product of two $k$-vectors (each is the wedge product of $k$ factors); the right side is the determinant of the $k \times k$ matrix whose $(i,j)$ entry is $\mathbf{a}_i \cdot \mathbf{b}_j$. For instance:

$$(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) = \begin{vmatrix} \mathbf{a} \cdot \mathbf{c} & \mathbf{a} \cdot \mathbf{d} \\ \mathbf{b} \cdot \mathbf{c} & \mathbf{b} \cdot \mathbf{d} \end{vmatrix} = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$
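To convince yourself numerically that the determinant formula really is the ‘sum over components’ inner product, here’s a quick sketch (not from the original derivation; the helper name `bivector_components` and the use of numpy are my own):

```python
# Check: the component-wise dot product of two bivectors equals the
# 2x2 determinant of pairwise vector dot products.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 5))  # four vectors in R^5

def bivector_components(u, v):
    """Components (u ^ v)_{ij} = u_i v_j - u_j v_i for i < j."""
    return np.array([u[i] * v[j] - u[j] * v[i]
                     for i, j in combinations(range(len(u)), 2)])

# Component-wise inner product, exactly like the dot product of two vectors:
lhs = bivector_components(a, b) @ bivector_components(c, d)

# The determinant formula (pairwise dot products of the factors):
rhs = np.linalg.det(np.array([[a @ c, a @ d],
                              [b @ c, b @ d]]))

assert np.isclose(lhs, rhs)
```

The same check works for any grade; only the size of the determinant changes.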
Simple calculations yield:

$$(\mathbf{e}_1 \wedge \mathbf{e}_2) \cdot (\mathbf{e}_1 \wedge \mathbf{e}_2) = (\mathbf{e}_1 \cdot \mathbf{e}_1)(\mathbf{e}_2 \cdot \mathbf{e}_2) - (\mathbf{e}_1 \cdot \mathbf{e}_2)(\mathbf{e}_2 \cdot \mathbf{e}_1) = 1$$

$$(\mathbf{e}_1 \wedge \mathbf{e}_2) \cdot (\mathbf{e}_1 \wedge \mathbf{e}_3) = (\mathbf{e}_1 \cdot \mathbf{e}_1)(\mathbf{e}_2 \cdot \mathbf{e}_3) - (\mathbf{e}_1 \cdot \mathbf{e}_3)(\mathbf{e}_2 \cdot \mathbf{e}_1) = 0$$
If we label the basis multivectors $\mathbf{e}_I$ using multi-indices $I$, where no two contain the same set of elements up to permutation, then this amounts to saying that basis multivectors are orthonormal:^{1}

$$\mathbf{e}_I \cdot \mathbf{e}_J = \delta_{IJ}$$
And then extending this linearly to all elements of $\Lambda V$.^{2} This gives an orthonormal basis on $\Lambda V$, and the first thing we’ll do is define the ‘lengths’ of multivectors, in the same way that we compute the length of a vector via $|\mathbf{v}|^2 = \mathbf{v} \cdot \mathbf{v}$:

$$|\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k|^2 = (\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k) \cdot (\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k) = \det(\mathbf{a}_i \cdot \mathbf{a}_j)$$

This is called the Gram determinant of the matrix whose columns are the $\mathbf{a}_i$. It’s nonzero if and only if the vectors are linearly independent, which clearly corresponds to the wedge product $\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k$ not being $0$ in the first place.
For a bivector $\mathbf{a} \wedge \mathbf{b}$ this gives

$$|\mathbf{a} \wedge \mathbf{b}|^2 = \begin{vmatrix} \mathbf{a} \cdot \mathbf{a} & \mathbf{a} \cdot \mathbf{b} \\ \mathbf{b} \cdot \mathbf{a} & \mathbf{b} \cdot \mathbf{b} \end{vmatrix} = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2$$
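The Gram-determinant story can be spot-checked numerically (a sketch of mine, assuming numpy): $\det(A^T A)$ is the Gram determinant of the columns of $A$, positive when they’re independent and zero when they’re not.

```python
# Gram determinant det(A^T A): nonzero for independent columns,
# zero when the columns (and hence their wedge product) are degenerate.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))      # columns a1, a2, a3 in R^5
gram = A.T @ A                       # matrix of pairwise dot products
assert np.linalg.det(gram) > 0       # independent columns: positive Gram det

B = A.copy()
B[:, 2] = 2 * B[:, 0] - B[:, 1]      # force dependence: a3 = 2 a1 - a2
assert np.isclose(np.linalg.det(B.T @ B), 0)   # wedge product is 0
```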
It turns out that multivector inner products show up in disguise in a bunch of vector identities.
2. Computation of Identities
Let’s get some practice computing with (1).
In these expressions, I’m going to be juggling multiple inner products at once. I’ll denote them with subscripts: $\cdot_V$, $\cdot_\otimes$, $\cdot_\wedge$. (I apologise for the similarity between $\cdot_\otimes$ and $\cdot_\wedge$ – hopefully they’re different enough to distinguish.)

The types are:

- $\cdot_V$: the underlying inner product on $V$, which only acts on vectors: $\mathbf{a} \cdot_V \mathbf{b} = \sum_i a_i b_i$.
- $\cdot_\otimes$: the induced inner product on $V^{\otimes k}$, which acts on tensors of the same grade term-by-term:
$$(\mathbf{a}_1 \otimes \cdots \otimes \mathbf{a}_k) \cdot_\otimes (\mathbf{b}_1 \otimes \cdots \otimes \mathbf{b}_k) = (\mathbf{a}_1 \cdot \mathbf{b}_1)(\mathbf{a}_2 \cdot \mathbf{b}_2) \cdots (\mathbf{a}_k \cdot \mathbf{b}_k)$$
- $\cdot_\wedge$: the induced inner product on $\Lambda^k V$, which we described above: $(\mathbf{a}_1 \wedge \cdots \wedge \mathbf{a}_k) \cdot_\wedge (\mathbf{b}_1 \wedge \cdots \wedge \mathbf{b}_k) = \det(\mathbf{a}_i \cdot \mathbf{b}_j)$.
Let $\operatorname{Alt}$ be the Alternation Operator, which takes a tensor product to its total antisymmetrization, e.g. $\operatorname{Alt}(\mathbf{a} \otimes \mathbf{b}) = \mathbf{a} \otimes \mathbf{b} - \mathbf{b} \otimes \mathbf{a}$. For a tensor with $k$ factors, there are $k!$ components in the result.^{3}
The exterior inner product $\alpha \cdot_\wedge \beta$ can be computed by hand by expanding one side into a tensor product and the other into an antisymmetrized tensor product.
The $\operatorname{Alt}$ operator can be applied on either side; usually I put it on the right. If you put it on both sides, you would need to divide the whole expression by $k!$, which is annoying (but some people do it).
Here’s an example of this on bivectors:

$$(\mathbf{a} \wedge \mathbf{b}) \cdot_\wedge (\mathbf{c} \wedge \mathbf{d}) = (\mathbf{a} \otimes \mathbf{b}) \cdot_\otimes \operatorname{Alt}(\mathbf{c} \otimes \mathbf{d}) = (\mathbf{a} \otimes \mathbf{b}) \cdot_\otimes (\mathbf{c} \otimes \mathbf{d} - \mathbf{d} \otimes \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) \tag{2}$$
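The same bivector computation can be done mechanically with explicit tensors (a numpy sketch of mine, with the antisymmetrization applied on the right as above):

```python
# (a ^ b) paired against Alt(c (x) d) via the term-by-term tensor
# inner product reproduces the 2x2 determinant of dot products.
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.standard_normal((4, 3))

ab = np.einsum('i,j->ij', a, b)                                   # a (x) b
alt_cd = np.einsum('i,j->ij', c, d) - np.einsum('i,j->ij', d, c)  # Alt(c (x) d)

lhs = np.sum(ab * alt_cd)                       # tensor inner product
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)     # the determinant, expanded
assert np.isclose(lhs, rhs)
```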
Now, some formulas which turn out to be the multivector inner product in disguise.
Set $\mathbf{c} = \mathbf{a}$, $\mathbf{d} = \mathbf{b}$ in (2) and relabel to get Lagrange’s Identity:

$$|\mathbf{a} \wedge \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2$$
If you’re working in $\mathbb{R}^3$, use the Hodge Star map (to be discussed in a future post) to turn wedge products into cross products (preserving their magnitudes) to get the Binet-Cauchy identity:

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$
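The $\mathbb{R}^3$ identity is easy to spot-check (a one-off numerical sketch, assuming numpy):

```python
# Binet-Cauchy in R^3: (a x b) . (c x d) = (a.c)(b.d) - (a.d)(b.c)
import numpy as np

rng = np.random.default_rng(3)
a, b, c, d = rng.standard_normal((4, 3))

lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
assert np.isclose(lhs, rhs)
```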
Or, if you have three terms on each side, you can turn $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$ into a scalar triple product:

$$(\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}) \cdot (\mathbf{d} \wedge \mathbf{e} \wedge \mathbf{f}) = \left(\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})\right)\left(\mathbf{d} \cdot (\mathbf{e} \times \mathbf{f})\right)$$
Set $\mathbf{c} = \mathbf{a}$, $\mathbf{d} = \mathbf{b}$ in the two-vector version to get:

$$|\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2$$
Drop the cross product term to get Cauchy-Schwarz:

$$|\mathbf{a}|^2 |\mathbf{b}|^2 \geq (\mathbf{a} \cdot \mathbf{b})^2$$
I thought that was neat. Maybe there are places where CauchySchwarz is used where in fact Lagrange’s identity would be more useful?
On vectors in $\mathbb{R}^2$ (or any dimension), of course, the vector magnitude gives the Pythagorean theorem: if $\mathbf{a} \perp \mathbf{b}$, then

$$|\mathbf{a} + \mathbf{b}|^2 = |\mathbf{a}|^2 + |\mathbf{b}|^2$$
This generalizes to the bivector areas of an orthogonal tetrahedron (or vector surface areas of a simplex in any dimension), which is called De Gua’s Theorem: if $\mathbf{B}_1, \mathbf{B}_2, \mathbf{B}_3$ are the area bivectors of the three mutually orthogonal faces and $\mathbf{B}_4$ is that of the opposing face, then

$$|\mathbf{B}_4|^2 = |\mathbf{B}_1|^2 + |\mathbf{B}_2|^2 + |\mathbf{B}_3|^2$$

This is because the total surface area bivector for a closed figure in $\mathbb{R}^3$ is $0$, so the surface area of the opposing face is exactly $\mathbf{B}_4 = -(\mathbf{B}_1 + \mathbf{B}_2 + \mathbf{B}_3)$; the three orthogonal faces have mutually orthogonal bivectors, so the cross terms in $|\mathbf{B}_4|^2$ vanish.
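De Gua’s theorem can be spot-checked on a concrete right-angle tetrahedron with legs along the axes (a sketch of mine; the vertex coordinates are made up, and face areas are computed with cross products):

```python
# De Gua: squared areas of the three leg faces sum to the squared
# area of the hypotenuse face, for a corner tetrahedron.
import numpy as np

p, q, r = 2.0, 3.0, 5.0   # vertices at p*e1, q*e2, r*e3, and the origin

# The three mutually orthogonal faces are right triangles with legs p,q etc.
legs = (0.5 * p * q) ** 2 + (0.5 * q * r) ** 2 + (0.5 * r * p) ** 2

# Hypotenuse face is spanned by (q*e2 - p*e1) and (r*e3 - p*e1):
u = np.array([-p, q, 0.0])
v = np.array([-p, 0.0, r])
hyp = (0.5 * np.linalg.norm(np.cross(u, v))) ** 2

assert np.isclose(legs, hyp)
```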
There is naturally a version of the law of cosines for any tetrahedron/simplex with non-orthogonal sides as well. If $\mathbf{c} = \mathbf{a} + \mathbf{b}$, then (though it’s often stated with $\mathbf{c} = \mathbf{a} - \mathbf{b}$ instead):

$$|\mathbf{c}|^2 = |\mathbf{a}|^2 + |\mathbf{b}|^2 + 2\,\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|^2 + |\mathbf{b}|^2 + 2|\mathbf{a}||\mathbf{b}|\cos\theta$$
We can easily expand $|\alpha + \beta|^2$ the same way when $\alpha, \beta$ are bivectors or anything else; the angles in the cosines become angles between planes, or something fancier, but the formula is otherwise the same:

$$|\alpha + \beta|^2 = |\alpha|^2 + |\beta|^2 + 2|\alpha||\beta|\cos\theta$$
Which is kinda cool.
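The bivector expansion above follows from bilinearity alone, which a quick numerical sketch makes concrete (the helper `biv` is my own name for extracting bivector components; numpy assumed):

```python
# |alpha + beta|^2 = |alpha|^2 + |beta|^2 + 2 alpha.beta for bivectors,
# computed component-wise exactly as for vectors.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
a, b, c, d = rng.standard_normal((4, 4))

def biv(u, v):
    """Components of u ^ v, ordered by i < j."""
    return np.array([u[i] * v[j] - u[j] * v[i]
                     for i, j in combinations(range(len(u)), 2)])

alpha, beta = biv(a, b), biv(c, d)
s = alpha + beta
assert np.isclose(s @ s, alpha @ alpha + beta @ beta + 2 * (alpha @ beta))
```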
3. Matrix Multiplication
This is one of the more enlightening things I’ve come across using exterior algebra.
Let $A : \mathbb{R}^n \to \mathbb{R}^m$ and $B : \mathbb{R}^m \to \mathbb{R}^l$ be linear transformations. Their composition $BA$ has matrix representation:

$$(BA)_{ij} = \sum_k B_{ik} A_{kj} = \mathbf{b}_i \cdot \mathbf{a}_j$$

The latter form expresses the fact that each matrix entry in $BA$ is an inner product of a column of $A$ (here $\mathbf{a}_j$) with a row of $B$ (here $\mathbf{b}_i$).
Because $\Lambda^k A$ and $\Lambda^k B$ are also linear transformations, their composition $\Lambda^k B \, \Lambda^k A = \Lambda^k (BA)$ also has a matrix representation:

$$(\Lambda^k (BA))_{IJ} = \sum_K (\Lambda^k B)_{IK} (\Lambda^k A)_{KJ}$$

Where $I, J, K$ are multi-indexes over the appropriate spaces.
A column of $\Lambda^k A$ is the wedge product of $k$ columns of $A$, and a row of $\Lambda^k B$ is the wedge product of $k$ rows of $B$, which means each entry of the product is just the inner product we discussed above.
But this is just the determinant of a minor – the one whose rows and columns are selected by the multi-indices. This means that:

$$(\Lambda^k A)_{KJ} = \det A_{K,J}$$

where $A_{K,J}$ is the $k \times k$ minor of $A$ with rows $K$ and columns $J$. And thus:

$$\det (BA)_{I,J} = \sum_K \det B_{I,K} \det A_{K,J}$$

This is called the Generalized Cauchy-Binet formula. Note that it does not require that the matrices be square.
Which is neat. I think this version is way easier to remember or use than the version in Wikipedia, which is expressed in terms of matrix minors and determinants everywhere.
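The generalized formula is easy to verify numerically (a sketch of mine: the particular shapes and index sets are arbitrary choices, and numpy plus `itertools.combinations` do the minor bookkeeping):

```python
# Generalized Cauchy-Binet:
#   det( (BA)[I,J] ) = sum over K of det( B[I,K] ) * det( A[K,J] )
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 4))   # A: R^4 -> R^5
B = rng.standard_normal((3, 5))   # B: R^5 -> R^3
k = 2
I = (0, 2)                        # chosen rows of BA
J = (1, 3)                        # chosen columns of BA

lhs = np.linalg.det((B @ A)[np.ix_(I, J)])
rhs = sum(np.linalg.det(B[np.ix_(I, K)]) * np.linalg.det(A[np.ix_(K, J)])
          for K in combinations(range(5), k))
assert np.isclose(lhs, rhs)
```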
Corollaries:
When $A$ and $B$ are operators on the same space ($n \times n$ matrices) and $k = n$, then all of the wedge powers turn into determinants, giving something familiar:

$$\det(BA) = \det(B)\det(A)$$
When $B = A^T$, it says that the determinant of the square matrix $A^T A$ is the sum of squared determinants of minors of the (not necessarily square) $A$. If $A$ is $n \times k$, this is a sum over all $k \times k$ minors $A_K$ of $A$:

$$\det(A^T A) = \sum_K (\det A_K)^2$$
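This corollary also checks out numerically (again a sketch assuming numpy; `K` ranges over all choices of $k$ rows):

```python
# det(A^T A) equals the sum of squared k x k minors of the n x k matrix A.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, k = 5, 3
A = rng.standard_normal((n, k))

lhs = np.linalg.det(A.T @ A)
rhs = sum(np.linalg.det(A[list(K), :]) ** 2
          for K in combinations(range(n), k))
assert np.isclose(lhs, rhs)
```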
Other articles related to Exterior Algebra:
- Oriented Areas and the Shoelace Formula
- Matrices and determinants
- The inner product
- The Hodge Star
- The Interior Product

I prefer to because, well, it makes perfect sense. ↩

If we don’t specify that all of our multi-indices are unique up to permutation, then we would have to write something like $\mathbf{e}_I \cdot \mathbf{e}_J = \pm \delta_{IJ}$, since for instance $\mathbf{e}_{12} \cdot \mathbf{e}_{21} = -1$. ↩

There are several conventions for defining $\operatorname{Alt}$; often it comes with a factor of $\frac{1}{k!}$. If you wanted it to preserve vector magnitudes, you might have it divide by $\sqrt{k!}$ instead. I don’t like either of those though, and prefer to leave it without factorials, because it makes other definitions much easier. ↩