Exterior Algebra Notes #4: The Interior Product

[January 27, 2019]

Vector spaces are assumed to be finite-dimensional and over $\mathbb{R}$. The grade of a multivector $\alpha$ will be written $|\alpha|$, while its magnitude will be written $\Vert \alpha \Vert$. Bold letters like $\mathbf{u}$ will refer to (grade-1) vectors, while Greek letters like $\alpha$ refer to arbitrary multivectors with grade $|\alpha|$.

More notes on exterior algebra. This time, the interior product $\alpha \cdot \beta$, with a lot more concrete intuition than you’ll see anywhere else.

I am not the only person who has had trouble figuring out what the interior product is for. This is what I have so far…


1. The Interior Product

The last main tool of exterior algebra is the interior product, written $\alpha \cdot \beta$ or $\iota_{\alpha} \beta$. It subtracts grades ($|\alpha \cdot \beta| = |\beta| - |\alpha|$) and, conceptually, does something akin to ‘dividing $\alpha$ out of $\beta$’. It’s also called the ‘contraction’ or ‘insertion’ operator. We use the same symbol $\cdot$ as the inner product because we think of it as a generalization of the inner product: when $|\alpha| = |\beta|$, then $\alpha \cdot \beta = \langle \alpha, \beta \rangle$.

Its abstract definition is that it is adjoint to the wedge product with respect to the inner product:

$$\langle \alpha \wedge \beta, \gamma \rangle = \langle \beta, \alpha \cdot \gamma \rangle$$

In practice this means that it sort of ‘undoes’ wedge products, as we will see.
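
For instance, take $\mathbf{x}$ and $\mathbf{y}$ to be orthonormal. Testing $\mathbf{x} \cdot (\mathbf{x} \wedge \mathbf{y})$ against the basis vectors with the adjoint property gives

$$\langle \mathbf{v}, \mathbf{x} \cdot (\mathbf{x} \wedge \mathbf{y}) \rangle = \langle \mathbf{x} \wedge \mathbf{v}, \mathbf{x} \wedge \mathbf{y} \rangle = \begin{cases} 1 & \mathbf{v} = \mathbf{y} \\ 0 & \text{any other basis vector} \end{cases}$$

so $\mathbf{x} \cdot (\mathbf{x} \wedge \mathbf{y}) = \mathbf{y}$: wedging with $\mathbf{x}$ and then contracting with $\mathbf{x}$ got us back where we started.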

When we looked at the inner product we had a procedure for computing $\langle \alpha, \beta \rangle$. We switched from the $\Lambda^k V$ inner product to the $\otimes^k V$ inner product, by writing both sides as tensor products, with the right side antisymmetrized using $\operatorname{Alt}$[1]:

$$\langle \mathbf{a} \wedge \mathbf{b}, \mathbf{c} \wedge \mathbf{d} \rangle = \langle \mathbf{a} \otimes \mathbf{b}, \operatorname{Alt}(\mathbf{c} \otimes \mathbf{d}) \rangle = \langle \mathbf{a} \otimes \mathbf{b}, \mathbf{c} \otimes \mathbf{d} - \mathbf{d} \otimes \mathbf{c} \rangle = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$

Interior products directly generalize inner products to cases where the left side has a lower grade[2] (which is why we use $\cdot$ for both), and can be computed with the exact same procedure:

$$\mathbf{a} \cdot (\mathbf{c} \wedge \mathbf{d}) = \mathbf{a} \cdot (\mathbf{c} \otimes \mathbf{d} - \mathbf{d} \otimes \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\, \mathbf{d} - (\mathbf{a} \cdot \mathbf{d})\, \mathbf{c}$$

A general formula for the interior product of a vector with a multivector, which can be deduced from the above, is

$$\mathbf{u} \cdot (\mathbf{v}_1 \wedge \mathbf{v}_2 \wedge \cdots \wedge \mathbf{v}_k) = \sum_{i=1}^{k} (-1)^{i-1} (\mathbf{u} \cdot \mathbf{v}_i)\; \mathbf{v}_1 \wedge \cdots \wedge \widehat{\mathbf{v}_i} \wedge \cdots \wedge \mathbf{v}_k$$

where $\widehat{\mathbf{v}_i}$ means that that factor is omitted from the wedge product.
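
To see the general formula in action, run a vector through an orthonormal trivector $\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}$:

$$\mathbf{y} \cdot (\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}) = (\mathbf{y} \cdot \mathbf{x})\, \mathbf{y} \wedge \mathbf{z} - (\mathbf{y} \cdot \mathbf{y})\, \mathbf{x} \wedge \mathbf{z} + (\mathbf{y} \cdot \mathbf{z})\, \mathbf{x} \wedge \mathbf{y} = -\,\mathbf{x} \wedge \mathbf{z} = \mathbf{z} \wedge \mathbf{x}$$

Only the term where $\mathbf{y}$ meets itself survives, and the alternating sign is doing the same bookkeeping as the wedge product’s antisymmetry.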


The intuitive meaning of the interior product is related to projection. We can construct the projection and rejection operators of a vector $\mathbf{a}$ onto a multivector $\beta$ with (writing $\hat{\beta} = \beta / \Vert \beta \Vert$):

$$\mathbf{a}_{\perp \beta} = \operatorname{rej}_{\beta}(\mathbf{a}) = \hat{\beta} \cdot (\hat{\beta} \wedge \mathbf{a}) \qquad \mathbf{a}_{\parallel \beta} = \operatorname{proj}_{\beta}(\mathbf{a}) = \mathbf{a} - \mathbf{a}_{\perp \beta}$$

To understand this, recall that the classic formula for projecting $\mathbf{a}$ onto a unit vector $\hat{\mathbf{b}}$ is:

$$\operatorname{proj}_{\hat{\mathbf{b}}}(\mathbf{a}) = (\mathbf{a} \cdot \hat{\mathbf{b}})\, \hat{\mathbf{b}}$$

That is, we find the scalar coordinate along $\hat{\mathbf{b}}$, then multiply by $\hat{\mathbf{b}}$ once again. With multivectors, $\mathbf{a} \cdot \hat{\beta}$ is not a scalar, so we can’t just use scalar multiplication – so it makes some sense that it would be replaced with $\cdot$.[3]

The classic vector rejection formula is

$$\mathbf{a}_{\perp} = \mathbf{a} - (\mathbf{a} \cdot \hat{\mathbf{b}})\, \hat{\mathbf{b}}$$

Using the interior product we can write this as

$$\mathbf{a}_{\perp} = \hat{\mathbf{b}} \cdot (\hat{\mathbf{b}} \wedge \mathbf{a})$$

The multivector version is only non-zero if $\mathbf{a}$ has a component which is not contained in $\beta$ – all of the $\beta$-ness of $\mathbf{a}$ is removed by the wedge product, leaving something like $\hat{\beta} \wedge \mathbf{a}_{\perp}$. Then $\hat{\beta} \cdot (\hat{\beta} \wedge \mathbf{a}_{\perp}) = \mathbf{a}_{\perp}$.
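
A concrete example, with $\mathbf{x}, \mathbf{y}, \mathbf{z}$ orthonormal: take $\beta = \mathbf{x} \wedge \mathbf{y}$ and $\mathbf{a} = 2\mathbf{x} + 3\mathbf{z}$. The wedge product throws away the in-plane part of $\mathbf{a}$, and contracting $\beta$ back out returns whatever was left:

$$\beta \wedge \mathbf{a} = 3\, \mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z} \qquad \beta \cdot (\beta \wedge \mathbf{a}) = 3\, (\mathbf{x} \wedge \mathbf{y}) \cdot (\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}) = 3\,\mathbf{z} = \mathbf{a}_{\perp}$$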

The correct interpretation of $\mathbf{x} \cdot \beta$, then, is a lot like what it means when $\beta$ is a vector: it’s finding the ‘$\mathbf{x}$-component’ of $\beta$. It’s just that, when $\beta$ is a multivector, the ‘$\mathbf{x}$-coordinate’ is no longer a scalar.

For example this is the ‘$\mathbf{x}$’-component of a bivector $\mathbf{b} \wedge \mathbf{c}$:

$$\mathbf{x} \cdot (\mathbf{b} \wedge \mathbf{c}) = (\mathbf{x} \cdot \mathbf{b})\, \mathbf{c} - (\mathbf{x} \cdot \mathbf{c})\, \mathbf{b}$$

Note that the result doesn’t have any $\mathbf{x}$ factors in it.
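
For instance, with $\mathbf{x}, \mathbf{y}, \mathbf{z}$ orthonormal:

$$\mathbf{x} \cdot (\mathbf{x} \wedge \mathbf{y} + \mathbf{y} \wedge \mathbf{z}) = \mathbf{y}$$

The $\mathbf{y} \wedge \mathbf{z}$ term contributes nothing (it has no $\mathbf{x}$ in it), and what comes back is exactly the factor that was wedged with $\mathbf{x}$.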

What about $\alpha \cdot \beta$, where $\alpha$ is a multivector? It’s still true that $\alpha \cdot \beta$ gives the ‘$\alpha$-coordinate’ of $\beta$, if there is one. But the rejection formula doesn’t work – we can only use $\beta_{\perp} = \beta - \beta_{\parallel}$. The problem is that there are cases where both $\alpha \cdot \beta = 0$ and $\alpha \wedge \beta = 0$, such as for $\alpha = \mathbf{x} \wedge \mathbf{y}$ and $\beta = \mathbf{y} \wedge \mathbf{z}$.[4]
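
To spell the problem out with that example ($\mathbf{x}, \mathbf{y}, \mathbf{z}$ orthonormal):

$$(\mathbf{x} \wedge \mathbf{y}) \cdot (\mathbf{y} \wedge \mathbf{z}) = (\mathbf{x} \cdot \mathbf{y})(\mathbf{y} \cdot \mathbf{z}) - (\mathbf{x} \cdot \mathbf{z})(\mathbf{y} \cdot \mathbf{y}) = 0 \qquad (\mathbf{x} \wedge \mathbf{y}) \wedge (\mathbf{y} \wedge \mathbf{z}) = 0$$

$\mathbf{y} \wedge \mathbf{z}$ has no $\mathbf{x} \wedge \mathbf{y}$-component, but the wedge product can’t see the part the two blades do have in common (the shared $\mathbf{y}$), so neither operation captures the ‘rest’ of it.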


If we consider our projection/rejection operations as operators, writing $P_{\mathbf{x}}(\alpha) = \mathbf{x} \wedge (\mathbf{x} \cdot \alpha)$ and $R_{\mathbf{x}}(\alpha) = \mathbf{x} \cdot (\mathbf{x} \wedge \alpha)$ for a unit vector $\mathbf{x}$, then:

$$P_{\mathbf{x}} + R_{\mathbf{x}} = 1$$

Since these operators are built out of $\mathbf{x} \wedge$ and $\mathbf{x} \cdot$, this could also be written as

$$\mathbf{x} \wedge (\mathbf{x} \cdot \alpha) + \mathbf{x} \cdot (\mathbf{x} \wedge \alpha) = \alpha$$

And in fact this works (although the interpretation is trickier) with different vectors for each term:

$$\mathbf{x} \wedge (\mathbf{y} \cdot \alpha) + \mathbf{y} \cdot (\mathbf{x} \wedge \alpha) = (\mathbf{x} \cdot \mathbf{y})\, \alpha$$
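
As a quick check of the two-vector version, take $\mathbf{x}, \mathbf{y}$ orthonormal and $\alpha = \mathbf{y}$:

$$\mathbf{x} \wedge (\mathbf{y} \cdot \mathbf{y}) + \mathbf{y} \cdot (\mathbf{x} \wedge \mathbf{y}) = \mathbf{x} + \big( (\mathbf{y} \cdot \mathbf{x})\, \mathbf{y} - (\mathbf{y} \cdot \mathbf{y})\, \mathbf{x} \big) = \mathbf{x} - \mathbf{x} = 0 = (\mathbf{x} \cdot \mathbf{y})\, \mathbf{y}$$

(Here $\mathbf{y} \cdot \mathbf{y} = 1$ is a scalar, and wedging with a scalar is just scalar multiplication.)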

There is a lot of interesting structure here which is worth diving into in the future. It turns out to be related to a lot of other mathematics. The short version is that $\iota_{\mathbf{x}} = \mathbf{x} \cdot$ is, technically, a “graded derivation” on the exterior algebra, and the property that $\mathbf{x} \cdot (\alpha \wedge \beta) = (\mathbf{x} \cdot \alpha) \wedge \beta + (-1)^{|\alpha|}\, \alpha \wedge (\mathbf{x} \cdot \beta)$ is the exterior-algebra equivalent of the product rule $\partial_x(fg) = (\partial_x f)\, g + f\, (\partial_x g)$ on derivatives (in the sense that $\mathbf{x} \cdot$ acts like a differentiation operator $\partial_{\mathbf{x}}$, with the extra sign keeping track of the grading).


If we are keeping track of vector space duality, the left side of an interior product $\alpha \cdot \beta$ should transform like a dual multivector. (It certainly seems like it should, because the left side of an inner product should.) More on that later.

The discussion about projection above seems to me to strongly suggest that we define $\mathbf{x}\, \cdot$, the interior product with $\mathbf{x}$, as a sort of ‘multiplicative inverse’ of the wedge product $\mathbf{x} \wedge$. It’s not a complete inverse, because $\mathbf{x} \wedge (\mathbf{x} \cdot \alpha) \neq \alpha$ in general. Instead of being invertible, dividing and then multiplying pulls out the projection on $\mathbf{x}$: $\mathbf{x} \wedge (\mathbf{x} \cdot \alpha) = P_{\mathbf{x}}(\alpha)$. There is a certain elegance to it.

In fact there is an argument to be made that interior products, and dual vectors in general, should be considered as negative-grade multivectors, so that the interior product with a vector has grade $-1$. Then we could write $|\alpha \cdot \beta| = |\beta| - |\alpha|$ even if $\alpha$ has the higher grade. This is also compelling because it explains why dual vectors transform according to the inverse of a transformation: if a vector transforms as $\mathbf{x} \mapsto A \mathbf{x}$, then something that divides by $\mathbf{x}$ ought to transform according to $A^{-1}$. Something to think about! I hope to look into it in a later article.


2. More identities

We can use the interior product to prove a few more vector identities.

Here’s the vector triple product:

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \star\big(\mathbf{a} \wedge \star(\mathbf{b} \wedge \mathbf{c})\big) = -\,\mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\, \mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\, \mathbf{c}$$
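
A quick check on orthonormal basis vectors, with $\mathbf{a} = \mathbf{b} = \mathbf{x}$ and $\mathbf{c} = \mathbf{y}$:

$$\mathbf{x} \times (\mathbf{x} \times \mathbf{y}) = \mathbf{x} \times \mathbf{z} = -\mathbf{y} \qquad -\,\mathbf{x} \cdot (\mathbf{x} \wedge \mathbf{y}) = -\mathbf{y} \qquad (\mathbf{x} \cdot \mathbf{y})\, \mathbf{x} - (\mathbf{x} \cdot \mathbf{x})\, \mathbf{y} = -\mathbf{y}$$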

The quadruple product:

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = \langle \star(\mathbf{a} \wedge \mathbf{b}), \star(\mathbf{c} \wedge \mathbf{d}) \rangle = \langle \mathbf{a} \wedge \mathbf{b}, \mathbf{c} \wedge \mathbf{d} \rangle = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$

The Jacobi Identity:

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = -\big[\mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) + \mathbf{b} \cdot (\mathbf{c} \wedge \mathbf{a}) + \mathbf{c} \cdot (\mathbf{a} \wedge \mathbf{b})\big] = 0$$

(expanding each interior product with the formula above makes all six terms cancel in pairs).

The Jacobi Identity can also be rearranged into the following intriguing form, which we will have to figure out someday (it has some relationship to Lie algebras).

$$\begin{aligned} \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) &= (\mathbf{a} \times \mathbf{b}) \times \mathbf{c} + \mathbf{b} \times (\mathbf{a} \times \mathbf{c}) \\ &= (\mathbf{a} \cdot \mathbf{c})\, \mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\, \mathbf{c} \end{aligned}$$

The second line is our previous expansion of $\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$. How could these be equal?

One case where the interior product is already being used in mathematics is when multiplying a vector by an antisymmetric matrix. A bivector $\mathbf{a} \wedge \mathbf{b}$ can be represented as a tensor product $\mathbf{a} \otimes \mathbf{b} - \mathbf{b} \otimes \mathbf{a}$, which can be treated as an antisymmetric matrix. The interior product is then equivalent to matrix multiplication:

$$\mathbf{v} \cdot (\mathbf{a} \wedge \mathbf{b}) = \mathbf{v}^T (\mathbf{a} \otimes \mathbf{b} - \mathbf{b} \otimes \mathbf{a}) = (\mathbf{v} \cdot \mathbf{a})\, \mathbf{b} - (\mathbf{v} \cdot \mathbf{b})\, \mathbf{a}$$
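
As a quick numerical sanity check (just a sketch – the specific vectors here are arbitrary choices of mine), contracting $\mathbf{v}$ into the matrix $\mathbf{a} \otimes \mathbf{b} - \mathbf{b} \otimes \mathbf{a}$ with numpy gives the same answer as the expansion $(\mathbf{v} \cdot \mathbf{a})\, \mathbf{b} - (\mathbf{v} \cdot \mathbf{b})\, \mathbf{a}$:

```python
import numpy as np

# Arbitrary vectors in R^3 (any choice works).
a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
v = np.array([2.0, -1.0, 1.0])

# Interior product v . (a ^ b), via the expansion (v.a) b - (v.b) a.
interior = v.dot(a) * b - v.dot(b) * a

# The bivector a ^ b as an antisymmetric matrix, a (x) b - b (x) a.
B = np.outer(a, b) - np.outer(b, a)

# Contracting v into the first slot is just matrix multiplication.
matrix_version = v @ B

print(np.allclose(interior, matrix_version))  # True
```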

For instance this is one way of writing a rotation operator $R_\theta$ which rotates vectors by $\theta$ in the $\mathbf{x} \wedge \mathbf{y}$ plane (if $\mathbf{x}$ and $\mathbf{y}$ are orthogonal unit vectors):

$$R_\theta(\mathbf{v}) = \mathbf{v} + \sin\theta\; \mathbf{v} \cdot (\mathbf{x} \wedge \mathbf{y}) + (1 - \cos\theta)\; \big(\mathbf{v} \cdot (\mathbf{x} \wedge \mathbf{y})\big) \cdot (\mathbf{x} \wedge \mathbf{y})$$

The Hodge Star can be written as an interior product with the pseudoscalar. In $\mathbb{R}^3$:

$$\star \alpha = \alpha \cdot (\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z})$$
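
Running a couple of grades through this, with $\omega = \mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}$:

$$\star 1 = 1 \cdot \omega = \omega \qquad \star \mathbf{x} = \mathbf{x} \cdot \omega = \mathbf{y} \wedge \mathbf{z} \qquad \star (\mathbf{x} \wedge \mathbf{y}) = (\mathbf{x} \wedge \mathbf{y}) \cdot \omega = \mathbf{z}$$

which match the usual Hodge star on $\mathbb{R}^3$.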

This is probably the better definition. One reason is that it suggests that the pseudoscalar $\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}$ is not so special, and, for instance, we might allow ourselves to take a $\star$ in a subspace. For instance, while working in $\mathbb{R}^3$, here is another way to write the rotation operator $R_\theta$:

$$R_\theta(\mathbf{v}) = \mathbf{v}_{\perp} + \cos\theta\; \mathbf{v}_{\parallel} + \sin\theta\; \mathbf{v} \cdot (\mathbf{x} \wedge \mathbf{y})$$

where $\mathbf{v}_{\parallel}$ and $\mathbf{v}_{\perp}$ are the components of $\mathbf{v}$ in and out of the $\mathbf{x} \wedge \mathbf{y}$ plane, and $\mathbf{v} \cdot (\mathbf{x} \wedge \mathbf{y})$ is exactly a Hodge star taken within that plane.


Other articles related to Exterior Algebra:

  1. Oriented Areas and the Shoelace Formula
  2. Matrices and Determinants
  3. The Inner Product
  4. The Hodge Star
  5. The Interior Product
  1. Recall that we basically elect to antisymmetrize only one side, because if we did both we would need an extra factor of $\frac{1}{k!}$ for the same result. It might be that there are abstractions of this where you do need to do both sides.

  2. It is probably possible to generalize to either side having the lower grade, but it’s not normally done that way. I want to investigate it sometime. 

  3. The other candidate would be $\otimes$, but we’d like the result to also be a multivector, so it makes sense to only consider $\cdot$.

  4. I think there’s a way to make it work. It looks something like: for each basis multivector of lower grade, remove it from both sides, like cancelling the shared $\mathbf{y}$ out of $\mathbf{x} \wedge \mathbf{y}$ and $\mathbf{y} \wedge \mathbf{z}$. But that’s complicated and will have to be saved for the future.