Exterior Algebra Notes #2: the Inner Product

[October 9, 2018]

(See this previous post for some of the notations used here.)

(Not intended for any particular audience. Mostly I just wanted to write down these derivations in a presentable way because I haven’t seen them from this direction before.)

(Vector spaces are assumed to be finite-dimensional and over $\mathbb{R}$.)

Exterior algebra is obviously useful any time you’re anywhere near a cross product or determinant. I want to show how it also comes with an inner product which can make certain formulas in the world of vectors and matrices vastly easier to prove.


1. The Inner Product

Euclidean vectors have an inner product that we use all the time. Multivectors are just vectors. What’s theirs?

However we define the inner product (or ‘dot product’; I tend to use both names) on multivectors over $\mathbb{R}^n$, we’re going to want it to act a lot like it does on vectors. Particularly, it seems like

$$(e_x \wedge e_y) \cdot (e_x \wedge e_y) = 1$$

ought to hold. More generally, for two multivectors in the same space, we should be able to sum over their components the same way we do for vectors with $a \cdot b = \sum_i a_i b_i$:

$$\alpha \cdot \beta = \sum_I \alpha_I \beta_I$$

This turns out to work, although the usual presentation is pretty confusing. Here’s the standard way to define inner products on the exterior algebra $\Lambda V$, extending the inner product defined on the underlying vector space $V$:

$$\langle \alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_k, \beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_k \rangle = \det (\alpha_i \cdot \beta_j) \tag{1}$$

This is then extended linearly if either argument is a sum of multivectors. The expression looks opaque at first; it turns out to be natural, but it takes a while to see why.

The left side of this is the inner product of two $k$-vectors (each is a wedge product of $k$ factors); the right side is the determinant of the $k \times k$ matrix whose $(i, j)$ entry is $\alpha_i \cdot \beta_j$. For instance:

$$\langle a \wedge b, c \wedge d \rangle = \det \begin{pmatrix} a \cdot c & a \cdot d \\ b \cdot c & b \cdot d \end{pmatrix} = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)$$

Simple examples:

$$\langle e_x \wedge e_y, e_x \wedge e_y \rangle = (e_x \cdot e_x)(e_y \cdot e_y) - (e_x \cdot e_y)^2 = 1$$

$$\langle e_x \wedge e_y, e_x \wedge e_z \rangle = (e_x \cdot e_x)(e_y \cdot e_z) - (e_x \cdot e_z)(e_y \cdot e_x) = 0$$
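To make (1) concrete, here’s a minimal numerical sketch (Python with numpy; `wedge_dot` is just a helper name I made up) that computes the inner product of two simple $k$-vectors as the determinant of their pairwise dot products:

```python
import numpy as np

def wedge_dot(alphas, betas):
    # Inner product of simple k-vectors a1^...^ak and b1^...^bk per (1):
    # the determinant of the k x k matrix of pairwise dot products ai . bj.
    alphas, betas = np.atleast_2d(alphas), np.atleast_2d(betas)
    return np.linalg.det(alphas @ betas.T)

e_x, e_y, e_z = np.eye(3)

print(wedge_dot([e_x, e_y], [e_x, e_y]))  # 1.0
print(wedge_dot([e_x, e_y], [e_x, e_z]))  # 0.0
print(wedge_dot([e_x, e_y], [e_y, e_x]))  # -1.0 (swapping factors flips the sign)
```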

If we label the basis $k$-vectors using multi-indices $e_I = e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k}$, where no two $I$ contain the same set of elements up to permutation, then this amounts to saying that basis multivectors are orthonormal:1

$$\langle e_I, e_J \rangle = \delta_{IJ}$$

And then extending this linearly to all elements of $\Lambda V$.2 This gives an orthonormal basis on $\Lambda V$, and the first thing we’ll do is define the ‘lengths’ of multivectors, in the same way that we compute the length of a vector via $|v| = \sqrt{v \cdot v}$:

$$|\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_k|^2 = \det (\alpha_i \cdot \alpha_j)$$

This is called the Gram determinant of the ‘Gramian’ matrix formed by the vectors of $\alpha$. It’s non-zero if and only if the vectors are linearly independent, which clearly corresponds to the wedge product not being $0$ in the first place.

In $\mathbb{R}^3$ this gives

$$|a \wedge b|^2 = \det \begin{pmatrix} a \cdot a & a \cdot b \\ a \cdot b & b \cdot b \end{pmatrix} = |a|^2 |b|^2 - (a \cdot b)^2 = |a \times b|^2$$
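A quick sanity check of the Gram determinant (a numpy sketch in the same spirit as above; `gram_det` is my own name for it):

```python
import numpy as np

def gram_det(vectors):
    # |v1 ^ ... ^ vk|^2 as the determinant of the Gramian matrix (vi . vj).
    V = np.atleast_2d(vectors)
    return np.linalg.det(V @ V.T)

a, b = np.array([1.0, 2.0, 2.0]), np.array([3.0, 0.0, 4.0])
print(gram_det([a, b]))                        # ~104.0 = |a|^2 |b|^2 - (a.b)^2
print(np.dot(np.cross(a, b), np.cross(a, b)))  # ~104.0, the matching |a x b|^2
print(gram_det([a, 2 * a]))                    # ~0: dependent vectors, a ^ 2a = 0
```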

It turns out that multivector inner products show up in disguise in a bunch of vector identities.


2. Computation of Identities

Let’s get some practice computing with (1).

In these expressions, I’m going to be juggling multiple inner products at once. I’ll denote them with subscripts: $\cdot_V$, $\cdot_\otimes$, $\cdot_\wedge$. (I apologise for the similarity between $\cdot_V$ and $\cdot_\wedge$; hopefully they’re different enough to distinguish.)

The types are:

  • $\cdot_V$, the underlying inner product on $V$, which only acts on vectors: $a \cdot_V b$.
  • $\cdot_\otimes$, the induced inner product on tensor products, which acts on tensors of the same grade term-by-term: $(a \otimes b) \cdot_\otimes (c \otimes d) = (a \cdot_V c)(b \cdot_V d)$.
  • $\cdot_\wedge$, the induced inner product on $\Lambda V$, which we described above: $\alpha \cdot_\wedge \beta = \det (\alpha_i \cdot_V \beta_j)$.

Let $\operatorname{Alt}$ be the Alternation Operator, which takes a tensor product to its total antisymmetrization, e.g. $\operatorname{Alt}(a \otimes b) = a \otimes b - b \otimes a$. For a tensor with $k$ factors, there are $k!$ components in the result.3

$\alpha \cdot_\wedge \beta$ can be computed by hand by expanding one side into a tensor product and the other into an antisymmetrized tensor product:

$$(a_1 \wedge \cdots \wedge a_k) \cdot_\wedge (b_1 \wedge \cdots \wedge b_k) = (a_1 \otimes \cdots \otimes a_k) \cdot_\otimes \operatorname{Alt}(b_1 \otimes \cdots \otimes b_k)$$

The $\operatorname{Alt}$ operator can be applied on either side; usually I put it on the right. If you put it on both sides, you would need to divide the whole expression by $k!$, which is annoying (but some people do it).

Here’s an example of this on bivectors:

$$(a \wedge b) \cdot_\wedge (c \wedge d) = (a \otimes b) \cdot_\otimes (c \otimes d - d \otimes c) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c) \tag{2}$$
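Here’s the same bivector computation done numerically (a sketch; `alt2` and `tensor_dot` are ad hoc helpers that only handle rank-2 tensors):

```python
import numpy as np

def alt2(t):
    # Total antisymmetrization of a rank-2 tensor, with no 1/k! factor:
    # Alt(c (x) d) = c (x) d - d (x) c, i.e. t minus its transpose.
    return t - t.T

def tensor_dot(s, t):
    # The induced inner product on rank-2 tensors: sum of entrywise products.
    return np.sum(s * t)

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

lhs = tensor_dot(np.outer(a, b), alt2(np.outer(c, d)))           # (a^b) . (c^d) via Alt
rhs = np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)  # determinant formula (2)
print(np.isclose(lhs, rhs))  # True
```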


Now, some formulas which turn out to be the multivector inner product in disguise.

Set $c = a$, $d = b$ in (2) and relabel to get Lagrange’s Identity:

$$|a \wedge b|^2 = |a|^2 |b|^2 - (a \cdot b)^2$$

If you’re working in $\mathbb{R}^3$, use the Hodge Star map $\star$ (to be discussed in the next post, but we may as well see this identity now) to turn wedge products into cross products (preserving their magnitudes) to get the Binet-Cauchy identity:

$$(a \times b) \cdot (c \times d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)$$

Or, if you have three terms on each side, you can expand the product of two scalar triple products:

$$[a \cdot (b \times c)][d \cdot (e \times f)] = \det \begin{pmatrix} a \cdot d & a \cdot e & a \cdot f \\ b \cdot d & b \cdot e & b \cdot f \\ c \cdot d & c \cdot e & c \cdot f \end{pmatrix}$$
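Both identities are easy to spot-check with random vectors (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d, e, f = rng.standard_normal((6, 3))

# Binet-Cauchy: (a x b) . (c x d) = (a.c)(b.d) - (a.d)(b.c)
print(np.isclose(np.dot(np.cross(a, b), np.cross(c, d)),
                 np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)))  # True

# Product of two scalar triple products as a 3x3 determinant of dot products
print(np.isclose(np.dot(a, np.cross(b, c)) * np.dot(d, np.cross(e, f)),
                 np.linalg.det(np.array([a, b, c]) @ np.array([d, e, f]).T)))  # True
```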

Set $c = a$, $d = b$ in the two-vector version to get:

$$|a \times b|^2 = |a|^2 |b|^2 - (a \cdot b)^2$$

Drop the cross product term, which is non-negative, to get Cauchy-Schwarz:

$$|a|^2 |b|^2 \geq (a \cdot b)^2$$

I thought that was neat. Maybe there are places where Cauchy-Schwarz is used where in fact Lagrange’s identity would be more useful?

On vectors in $\mathbb{R}^3$ (or any dimension), of course, the vector magnitude gives the Pythagorean theorem:

$$|v|^2 = v_x^2 + v_y^2 + v_z^2$$

This generalizes to the bivector areas of an orthogonal tetrahedron (or the $(n-1)$-vector surface areas of an orthogonal $n$-simplex in any dimension), which is called De Gua’s Theorem:

$$|B_{xyz}|^2 = |B_{xy}|^2 + |B_{yz}|^2 + |B_{zx}|^2$$

This is because the total surface area bivector for a closed figure in $\mathbb{R}^3$ is $0$, so the surface area of the opposing face is exactly $B_{xyz} = -(B_{xy} + B_{yz} + B_{zx})$; and since the three coordinate bivectors are orthogonal, the cross terms in $|B_{xyz}|^2$ vanish.
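Here’s a numerical check of De Gua (a sketch; I’m representing each face’s area bivector by its cross-product dual in $\mathbb{R}^3$, which has the same magnitude, and `tri_area_vec` is my own helper):

```python
import numpy as np

def tri_area_vec(u, v):
    # Area bivector of the triangle with edge vectors u and v, represented
    # in R^3 by the dual vector (1/2) u x v (same magnitude as the bivector).
    return 0.5 * np.cross(u, v)

# A tetrahedron with three mutually orthogonal edges along the axes.
P, Q, R = np.array([2.0, 0, 0]), np.array([0, 3.0, 0]), np.array([0, 0, 5.0])

A_xy = tri_area_vec(P, Q)            # right-angled face in the xy plane
A_yz = tri_area_vec(Q, R)            # ... in the yz plane
A_zx = tri_area_vec(R, P)            # ... in the zx plane
A_opp = tri_area_vec(Q - P, R - P)   # the opposing face PQR

print(np.isclose(np.dot(A_opp, A_opp),
                 sum(np.dot(A, A) for A in (A_xy, A_yz, A_zx))))  # True
```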

There is naturally a version of the law of cosines for any tetrahedron/$n$-simplex with non-orthogonal sides as well. If $c = a + b$ then (though it’s often stated with $c = a - b$ instead):

$$|c|^2 = |a|^2 + |b|^2 + 2 (a \cdot b) = |a|^2 + |b|^2 + 2 |a| |b| \cos \theta_{ab}$$

We can easily expand $|a + b + c + \cdots|^2$ linearly when $a, b, c, \ldots$ are bivectors or anything else; the angles in the cosines become angles between planes, or something fancier, but the formula is otherwise the same:

$$\Big| \sum_i a_i \Big|^2 = \sum_i |a_i|^2 + 2 \sum_{i < j} |a_i| |a_j| \cos \theta_{ij}$$

Which is kinda cool.


3. Matrix Multiplication

This is one of the more enlightening things I’ve come across using $\wedge$.

Let $A : U \to V$ and $B : V \to W$ be linear transformations. Their composition $BA : U \to W$ has matrix representation:

$$(BA)_{ij} = \sum_k B_{ik} A_{kj} = B_{i*} \cdot A_{*j}$$

The latter form expresses the fact that each matrix entry in $BA$ is an inner product of a column of $A$ with a row of $B$.

Because $\Lambda^k A$ and $\Lambda^k B$ are also linear transformations, their composition $(\Lambda^k B)(\Lambda^k A)$ also has a matrix representation:

$$[(\Lambda^k B)(\Lambda^k A)]_{IJ} = \sum_K (\Lambda^k B)_{IK} (\Lambda^k A)_{KJ}$$

Where $I, J, K$ are multi-indexes over the appropriate spaces.

$(\Lambda^k A)_{*J}$ is the wedge product of the columns of $A$ indexed by $J$, and $(\Lambda^k B)_{I*}$ is the wedge product of the rows of $B$ indexed by $I$, which means the sum over $K$ is just the inner product we discussed above.

But $[\Lambda^k (BA)]_{IJ}$ is just the determinant of a minor of $BA$, the one indexed by $(I, J)$. This means that:

$$\det [(BA)_{IJ}] = \sum_K \det (B_{IK}) \det (A_{KJ})$$

And thus:

$$\Lambda^k (BA) = (\Lambda^k B)(\Lambda^k A)$$

This is called the Generalized Cauchy-Binet formula. Note that it does not require that the matrices be square.

Which is neat. I think this version is way easier to remember or use than the version in Wikipedia, which is expressed in terms of matrix minors and determinants everywhere.
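Here’s the formula checked numerically, building $\Lambda^k M$ entry-by-entry out of $k \times k$ minors (a sketch; `wedge_power` is my own helper, with multi-indices ordered as sorted $k$-element subsets):

```python
import numpy as np
from itertools import combinations

def wedge_power(M, k):
    # Matrix of Lambda^k M: entry (I, J) is the k x k minor det(M[I, J]),
    # with multi-indices I, J running over sorted k-element subsets.
    rows = list(combinations(range(M.shape[0]), k))
    cols = list(combinations(range(M.shape[1]), k))
    return np.array([[np.linalg.det(M[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 5))  # A : R^5 -> R^4, not square
B = rng.standard_normal((3, 4))  # B : R^4 -> R^3, not square

for k in (1, 2, 3):
    print(np.allclose(wedge_power(B @ A, k),
                      wedge_power(B, k) @ wedge_power(A, k)))  # True, True, True
```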


Corollaries:

When $A$ and $B$ are in the same space and $k = n$, then all of the wedge powers turn into determinants, giving something familiar:

$$\det (BA) = \det (B) \det (A)$$

When $B = A^T$, it says that the determinant of the square matrix $A^T A$ is the sum of squared determinants of minors of the (not necessarily square) $A$. If $A$ is $n \times k$, this is a sum over all $k \times k$ minors $A_I$ of $A$:

$$\det (A^T A) = \sum_I \det (A_I)^2$$
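And the $B = A^T$ corollary, summing squared maximal minors (same sketch conventions as above):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, k = 5, 3
A = rng.standard_normal((n, k))  # tall n x k matrix

# det(A^T A) equals the sum of the squares of all k x k minors of A.
minors = [np.linalg.det(A[list(I), :]) for I in combinations(range(n), k)]
print(np.isclose(np.linalg.det(A.T @ A), sum(m * m for m in minors)))  # True
```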


Other articles related to Exterior Algebra:

  1. Oriented Areas and the Shoelace Formula
  2. Matrices and Determinants
  3. The Inner Product
  4. The Hodge Star
  5. The Interior Product
  1. I prefer the notation $e_I$ to $e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k}$ because, well, it makes perfect sense. 

  2. If we don’t specify that all of our multi-indices are unique up to permutation, then we would have to write something like $\langle e_I, e_J \rangle = \operatorname{sgn}(I, J) \, \delta_{IJ}$, where $\operatorname{sgn}(I, J)$ is the sign of the permutation that takes $I$ to $J$, since for instance $\langle e_{xy}, e_{yx} \rangle = -1$

  3. There are several conventions for defining $\operatorname{Alt}$; often it comes with a $\frac{1}{k!}$. If you wanted it to preserve vector magnitudes, you might have it divide by $\sqrt{k!}$ instead. I don’t like either of those, though, and prefer to leave it without factorials, because it makes other definitions much easier.