Exterior Algebra Notes #2: the Inner Product

October 9, 2018

(Not intended for any particular audience. Mostly I just wanted to write down these derivations in a presentable way because I haven’t seen them from this direction before.)

(Vector spaces are assumed to be finite-dimensional and over \(\bb{R}\))

Exterior algebra is obviously useful any time you’re anywhere near a cross product or determinant. I want to show how it also comes with an inner product which can make certain formulas in the world of vectors and matrices vastly easier to prove.


1. The Inner Product

Euclidean vectors have an inner product that we use all the time. Multivectors are just vectors too, so what's their inner product? We should be able to sum over the components of two multivectors the same way we do for vectors with \(\< \b{u} , \b{v} \> \equiv \sum_{i \in V} u_i v_i\), like this:

\[\boxed{\< \b{u} , \b{v} \> \equiv \sum_{\^ I \in \^^k V} u_{\^ I} v_{\^ I}} \tag{1}\]

Which lets us compute the magnitudes of areas in the expected way:

\[\| a \b{x \^ y} + b \b{y \^ z} + c \b{z \^ x} \|^2 = a^2 + b^2 + c^2\]
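As a quick sanity check of (1), here's a minimal numpy sketch; the basis ordering \((\b{x \^ y}, \b{y \^ z}, \b{z \^ x})\) and the variable names are my own choices for illustration:

```python
import numpy as np

# Represent a bivector in R^3 by its components in the basis (x^y, y^z, z^x).
u = np.array([1.0, 2.0, 3.0])   # 1 x^y + 2 y^z + 3 z^x
v = np.array([4.0, 5.0, 6.0])   # 4 x^y + 5 y^z + 6 z^x

# Equation (1): the inner product is the component-wise sum u_I v_I.
print(np.dot(u, v))             # 4 + 10 + 18 = 32
print(np.dot(u, u))             # squared magnitude: 1 + 4 + 9 = 14
```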

This definition turns out to work, although the usual presentation is pretty confusing. The standard way to define inner products on the exterior algebra \(\^^k V\), extending the inner product defined on the underlying vector space \(V\), looks like this:

\[\< \bigwedge_{i=1}^k \b{a}_i , \bigwedge_{i=1}^k \b{b}_i \> = \det \< \b{a}_i, \b{b}_j \>\]

This is then extended linearly if either argument is a sum of multivectors. It turns out to be the same as (1), but it takes a while to see why.

The left side of the standard definition is the inner product of two \(k\)-vectors (each is a wedge product of \(k\) vectors); the right side is the determinant of a \(k \times k\) matrix. For instance:

\[\< \b{a}_1 \^ \b{a}_2 , \b{b}_{1} \^ \b{b}_{2} \> = \begin{vmatrix} \< \b{a}_1 , \b{b}_1 \> & \< \b{a}_1 , \b{b}_2 \> \\ \< \b{a}_2 , \b{b}_1 \> & \< \b{a}_2 , \b{b}_2 \> \end{vmatrix}\]

Simple examples:

\[\begin{aligned} \< \b{x\^ y} , \b{x \^ y} \> &= 1 \\ \< \b{x\^ y} , \b{y \^ x} \> &= -1 \\ \< \b{x\^ y} , \b{y \^ z} \> &= 0 \\ \end{aligned}\]
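To see the two definitions agree numerically, here's a small numpy sketch for bivectors in \(\bb{R}^3\); the helper `wedge2` and the basis ordering \((\b{x \^ y}, \b{y \^ z}, \b{z \^ x})\) are my own choices for illustration:

```python
import numpy as np

def wedge2(a, b):
    """Components of a ^ b in R^3, in the basis (x^y, y^z, z^x)."""
    return np.array([a[0]*b[1] - a[1]*b[0],
                     a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2]])

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

# Standard definition: determinant of the matrix of pairwise inner products.
det_form = np.linalg.det(np.array([[a @ c, a @ d],
                                   [b @ c, b @ d]]))

# Definition (1): component-wise sum over the basis bivectors.
componentwise = wedge2(a, b) @ wedge2(c, d)

print(np.isclose(det_form, componentwise))   # True
```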

If we label the basis \(k\)-vectors using multi-indices \(I = (i_1, i_2, \ldots i_k)\), chosen so that no two multi-indices contain the same set of elements up to permutation, then this amounts to saying that the basis multivectors are orthonormal:1

\[\boxed{\< \b{x}_{\^ I} , \b{x}_{\^ J} \> = 1_{IJ}}\]

And then extending this linearly to all elements of \(\^^k V\), 2 which gives (1).

This gives an orthonormal basis on \(\^^k V\), and the first thing we’ll do is define the ‘\(k\)-lengths’ of multivectors, in the same way that we compute the squared length of a vector \(\| \b{v} \|^2 = \< \b{v} , \b{v} \>\):

\[\| \bigwedge_i \b{a}_i \|^2 = \< \bigwedge_i \b{a}_i , \bigwedge_i \b{a}_i \> = \det \< \b{a}_i , \b{a}_j \>\]

This is called the Gram determinant of the ‘Gramian’ matrix formed by the vectors \(\b{a}_i\). It’s non-zero exactly when the vectors are linearly independent, which corresponds to the wedge product \(\bigwedge_i \b{a}_i\) not being \(=0\) in the first place.
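Here's a minimal numpy sketch of that claim; the helper name `gram_det` and the example vectors are my own:

```python
import numpy as np

def gram_det(vectors):
    """det <a_i, a_j>: the Gram determinant of a list of vectors."""
    A = np.array(vectors)          # rows are the vectors a_i
    return np.linalg.det(A @ A.T)

a = np.array([1.0, 2.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 3.0, 1.0])

print(gram_det([a, b]))            # > 0: a and b are linearly independent
print(gram_det([a, 2 * a]))        # ~ 0: the wedge product a ^ (2a) vanishes
```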

It turns out that multivector inner products show up in disguise in a bunch of vector identities.


2. Computation of Identities

Let’s get some practice with (1).

In these expressions, I’m going to be juggling multiple inner products at once. I’ll denote them with subscripts: \(\<,\>_{\^}\), \(\<,\>_{\o}\), \(\<,\>_{V}\).

The types are:

  • the underlying inner product on \(V\), which only acts on vectors: \(\< \b{u}, \b{v} \>_V = \sum_i u_i v_i\).
  • the induced inner product on \(\o V\), which acts on tensors of the same grade term-by-term: \(\< \b{a \o b}, \b{c \o d} \>_{\o} = \< \b{a , c} \>_V \< \b{b, d } \>_V\)
  • the induced inner product on \(\^ V\), which we described above: \(\< \b{a \^ b}, \b{c \^ d} \>_{\^} = \< \b{a , c} \>_V \< \b{b, d } \>_V - \< \b{a , d} \>_V \< \b{b, c } \>_V\).

Let \(\text{Alt}\) be the Alternation Operator, which takes a tensor product to its total antisymmetrization, e.g. \(\text{Alt}(\b{a \o b}) = \b{a \o b - b \o a}\). For a tensor with \(N\) factors, there are \(N!\) components in the result.3

The general procedure for computing \(\< \bigwedge_i \b{a}_i , \bigwedge_j \b{b}_j \>_\^\) by hand is to expand one side into a plain tensor product and the other into an antisymmetrized tensor product. Which side gets antisymmetrized doesn’t matter, so let’s standardize by putting it on the right.4 If you antisymmetrize both sides, you need to multiply the whole expression by \(\frac{1}{N!}\), which is annoying (but some people do it).

\[\begin{aligned} \< \bigwedge_i \b{a}_i , \bigwedge_j \b{b}_j \>_{\^} &= \det \< \b{a}_i , \b{b}_j \>_V \\ &= \sum_{\sigma \in S_k} \sgn(\sigma) \prod_i \< \b{a}_i, \b{b}_{\sigma(i)} \>_V \\ &= \< \bigotimes_i \b{a}_i , \sum_{\sigma \in S_k} \sgn(\sigma) \bigotimes_j \b{b}_{\sigma(j)}\>_{\o} \\ &= \< \bigotimes_i \b{a}_i, \text{Alt}(\bigotimes_j \b{b}_j) \>_{\o} \\ &= \< \text{Alt}(\bigotimes_i \b{a}_i), \bigotimes_j \b{b}_j \>_{\o} \end{aligned}\]
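Here's a minimal numpy sketch of that expansion; `sgn` and `wedge_inner` are hypothetical helper names, and the brute-force sum over permutations is just for illustration:

```python
import numpy as np
from itertools import permutations

def sgn(perm):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge_inner(As, Bs):
    """< a_1 ^ ... ^ a_k , b_1 ^ ... ^ b_k >, computed by pairing the plain
    tensor product of the a's with the antisymmetrization of the b's."""
    k = len(As)
    total = 0.0
    for perm in permutations(range(k)):
        term = sgn(perm)
        for i in range(k):
            term *= As[i] @ Bs[perm[i]]
        total += term
    return total

rng = np.random.default_rng(1)
As = rng.standard_normal((3, 5))   # three vectors in R^5
Bs = rng.standard_normal((3, 5))

# Agrees with det <a_i, b_j>:
print(np.isclose(wedge_inner(As, Bs), np.linalg.det(As @ Bs.T)))   # True
```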

Here’s an example of this procedure on bivectors:

\[\begin{aligned} \< \b{a \^ b }, \b{c \^ d} \>_\^ &= \< \b{a \o b}, \b{c \o d} - \b{d \o c} \>_{\o} \\ &= \< \b{a , c} \>_V \< \b{b, d } \>_V - \< \b{a , d} \>_V \< \b{b, c } \>_V \tag{2} \end{aligned}\]

Now, some formulas which turn out to be the multivector inner product in disguise.

The inner product between two area bivectors, as we have seen, is

\[(\b{a \^ b}) \cdot (\b{c \^ d}) = (\b{a \cdot c}) (\b{b \cdot d}) - (\b{a \cdot d})(\b{b \cdot c})\]

Set \(\b{a = c}\), \(\b{b = d}\) in (2) and relabel to get Lagrange’s Identity:

\[| \b{a} \^ \b{b} |^2 = | \b{a} |^2 | \b{b} |^2 - (\b{a} \cdot \b{b})^2\]
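A quick numerical check of Lagrange's identity, expanding \(\b{a \^ b}\) into its components; numpy and the random test vectors are my own scaffolding:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 4))        # any dimension works; here R^4

# ||a ^ b||^2 as the sum of squared components over the basis 2-vectors
lhs = sum((a[i]*b[j] - a[j]*b[i])**2 for i, j in combinations(range(4), 2))

# Lagrange's identity
rhs = (a @ a) * (b @ b) - (a @ b) ** 2
print(np.isclose(lhs, rhs))               # True
```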

If you’re working in \(\bb{R}^3\), a lot of familiar identities appear after turning \(\^\) into the cross product \(\times\) using the Hodge Star map \(\star\). We haven’t looked at this yet, but in \(\bb{R}^3\) it usually means that formulas that hold for \(\^\) also just hold for \(\star\).

Transforming our bivector inner product, we immediately get what’s apparently called the Binet-Cauchy identity:

\[\begin{aligned} (\b{a \^ b}) \cdot (\b{c \^ d}) &= (\b{a \times b}) \cdot (\b{c \times d})\\ &= \b{(a \cdot c) (b \cdot d) - (a \cdot d) (b \cdot c)} \end{aligned}\]
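A one-line numpy check of the Binet-Cauchy identity; the random vectors and the seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c, d = rng.standard_normal((4, 3))

lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
print(np.isclose(lhs, rhs))   # True
```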

Set \(\b{a=c}\), \(\b{b = d}\) in the cross-product version to get a more familiar form of Lagrange’s identity:

\[| \b{a} \times \b{b} |^2 = | \b{a} |^2 | \b{b} |^2 - (\b{a} \cdot \b{b})^2\]

Drop the cross product term to get Cauchy-Schwarz:

\[(\b{a} \cdot \b{b})^2 \leq | \b{a} |^2 | \b{b} |^2\]

I thought that was neat. Maybe there are places where Cauchy-Schwarz is used where in fact Lagrange’s identity would be more useful?

The trivector version gives the product of two scalar triple products, which is quite a bit harder to see without this framework:

\[\< \b{a \^ b \^ c}, \b{x \^ y \^ z} \> = (\b{a} \cdot (\b{b \times c})) (\b{x} \cdot (\b{y \times z})) \\ = \det \begin{pmatrix} \b{a \cdot x} & \b{a \cdot y} & \b{a \cdot z} \\ \b{b \cdot x} & \b{b \cdot y} & \b{b \cdot z} \\ \b{c \cdot x} & \b{c \cdot y} & \b{c \cdot z} \end{pmatrix}\]
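Here's a numpy check of the triple-product identity; the random vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, c, x, y, z = rng.standard_normal((6, 3))

lhs = (a @ np.cross(b, c)) * (x @ np.cross(y, z))
rhs = np.linalg.det(np.array([[a @ x, a @ y, a @ z],
                              [b @ x, b @ y, b @ z],
                              [c @ x, c @ y, c @ z]]))
print(np.isclose(lhs, rhs))   # True
```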

The inner product of any multivector with itself gives the squared volume of the \(k\)-parallelotope (as they’re called) spanned by the vectors:

\[\| \b{a \^ b \^ c} \|^2 = (\b{a} \^ \b{b} \^ \b{c}) \cdot (\b{a} \^ \b{b} \^ \b{c}) \\ = \text{vol} (\b{a}, \b{b}, \b{c})^2\]

This holds for any multivector, but if \(k < n\) there can be more than one term in the component sum; when \(k = n\) there’s a single term. Either way, the result is the Gram determinant of the vectors, e.g.

\[\| \b{a \^ b \^ c} \|^2 = \det \begin{pmatrix} \b{a \cdot a} & \b{a \cdot b} & \b{a \cdot c} \\ \b{b \cdot a} & \b{b \cdot b} & \b{b \cdot c} \\ \b{c \cdot a} & \b{c \cdot b} & \b{c \cdot c} \end{pmatrix}\]
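When \(k = n\) the Gram determinant is just \(\det(A)^2\) for the matrix \(A\) whose rows are the vectors; a quick numpy check (random vectors, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, c = rng.standard_normal((3, 3))
A = np.array([a, b, c])                # rows are a, b, c

gram = np.linalg.det(A @ A.T)          # ||a ^ b ^ c||^2, the Gram determinant
vol_sq = np.linalg.det(A) ** 2         # squared volume of the parallelepiped
print(np.isclose(gram, vol_sq))        # True
```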

On vectors in \(\bb{R}^2\) (or any dimension), the vector magnitude of course gives the Pythagorean theorem:

\[| a \b{x} + b \b{y} |^2 = a^2 + b^2 = c^2\]

This generalizes to the bivector areas of an orthogonal tetrahedron (or the \((n-1)\)-vector surface areas of an \(n\)-simplex in any dimension), which is called De Gua’s Theorem. For instance in \(\bb{R}^3\), the squared area of the ‘hypotenuse’ face of an orthogonal tetrahedron is the sum of the squared areas of its other faces:

\[\| a \b{x \^ y} + b \b{y \^ z} + c \b{z \^ x} \| ^2 = a^2 + b^2 + c^2 = d^2\]

This is because the total surface area bivector for a closed figure in \(\bb{R}^3\) is \(0\), so the surface area bivector of the opposing face is exactly \(-(a \b{x \^ y} + b \b{y \^ z} + c \b{z \^ x} )\), whose squared magnitude is \(d^2\).
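Here's a minimal numpy check of De Gua's theorem on a concrete right-angled tetrahedron; the leg lengths and variable names are my own:

```python
import numpy as np

# Orthogonal tetrahedron with legs p, q, r along the coordinate axes.
p, q, r = 2.0, 3.0, 5.0
P = p * np.array([1.0, 0.0, 0.0])
Q = q * np.array([0.0, 1.0, 0.0])
R = r * np.array([0.0, 0.0, 1.0])

# Squared areas of the three right-triangle faces that meet at the origin.
leg_areas_sq = (p*q/2)**2 + (q*r/2)**2 + (r*p/2)**2

# Area of the 'hypotenuse' face PQR, via the cross product of two edges.
hyp_area = 0.5 * np.linalg.norm(np.cross(Q - P, R - P))

print(np.isclose(hyp_area**2, leg_areas_sq))   # True: De Gua's theorem
```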

There is naturally a version of the law of cosines for any tetrahedron/ \(n\)-simplex with non-orthogonal sides as well. If \(\vec{c} = \vec{a} + \vec{b}\) then

\[\begin{aligned} \| \vec{c} \|^2 &= \|\vec{a}\|^2 + \|\vec{b}\|^2 + 2 \vec{a} \cdot \vec{b} \\ &=\|\vec{a}\|^2 + \|\vec{b}\|^2 + 2 \| \vec{a} \| \| \vec{b}\| \cos \theta_{ab} \end{aligned}\]

We can easily expand \(d^2 = \| a + b + c \|^2\) linearly when \(a,b,c\) are bivectors or anything else; the angles in the cosines become angles between planes, or something fancier, but the formula is otherwise the same:

\[d^2 = \| a + b + c \|^2 = |a|^2 + |b|^2 + |c|^2 + 2(a \cdot b + b \cdot c + c \cdot a)\]

Which is kinda cool.
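Since the formula is just bilinearity of the inner product, it's easy to confirm on random bivector components; the setup below (bivectors as component triples in \(\^^2 \bb{R}^3\)) is my own example:

```python
import numpy as np

rng = np.random.default_rng(6)
# Three face bivectors of a tetrahedron, as components in (x^y, y^z, z^x).
a, b, c = rng.standard_normal((3, 3))

d_sq = np.linalg.norm(a + b + c) ** 2
expanded = (a @ a + b @ b + c @ c) + 2 * (a @ b + b @ c + c @ a)
print(np.isclose(d_sq, expanded))   # True
```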


3. Matrix Multiplication

Here’s my favorite thing that is easily understood with \(\<,\>_\^\): the generalizations of \(\det(BA) = \det(B) \det(A)\).

Let \(A: U \ra V\) and \(B: V \ra W\) be linear transformations. Their composition \(B\circ A\) has matrix representation:

\[(BA)_i^k = \sum_{j \in V} A_i^j B_j^k = \<A_i, B^k \>\]

The latter form expresses the fact that each matrix entry in \(BA\) is an inner product of a column of \(A\) with a row of \(B\).

Because \(A^{\^ q} : \^^q U \ra \^^q V\) and \(B^{\^ q} : \^^q V \ra \^^q W\) are also linear transformations, their composition \(B^{\^ q} \circ A^{\^ q} : \^^q U \ra \^^q W\) also has a matrix representation:

\[(B^{\^ q} A^{\^ q})_I^K = \sum_{J \in \^^q V} (A^{\^ q})_I^J (B^{\^ q})_J^K = \< A^{\^ q}_I, (B^{\^ q})^K \>\]

Where \(I,J,K\) are multi-indices ranging over the appropriate \(\^^q\) spaces.

\(A^{\^ q}_I\) is the wedge product of the columns of \(A\) indexed by \(I = (i_1, i_2, \ldots i_q)\), and \((B^{\^ q})^K\) is the wedge product of the \(q\) rows of \(B\) indexed by \(K\), which means this is just the inner product we discussed above.

\[(B^{\^ q} A^{\^ q})_I^K = \det_{i \in I, k \in K} \< A_{i}, B^{k} \>\]

But this is just the determinant of a minor of \((BA)_i^k\) – the one indexed by \((I,K)\). This means that:

\[(B^{\^ q} A^{\^ q})_I^K = ((BA)^{\^ q})_I^K\]

And thus:

\[\boxed{B^{\^ q} A^{\^ q} = (BA)^{\^ q}} \tag{3}\]

This is called the Generalized Cauchy-Binet formula. Note that \((3)\) does not require the matrices to be square.

Which is neat. I think this version is much easier to remember and use than the one on Wikipedia, which is expressed in terms of matrix minors and determinants everywhere.
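Here's a minimal numpy sketch of \((3)\), building the wedge powers explicitly as ‘compound matrices’ of minors; the helper name `compound` and the dimensions are arbitrary choices of mine:

```python
import numpy as np
from itertools import combinations

def compound(M, q):
    """The q-th wedge power of M: its entries are the q x q minors of M,
    indexed by multi-indices of q rows and q columns."""
    rows = list(combinations(range(M.shape[0]), q))
    cols = list(combinations(range(M.shape[1]), q))
    return np.array([[np.linalg.det(M[np.ix_(I, J)]) for J in cols]
                     for I in rows])

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 3))   # A : U -> V; neither map is square
B = rng.standard_normal((5, 4))   # B : V -> W
q = 2

# Generalized Cauchy-Binet: (BA)^{^q} = B^{^q} A^{^q}
print(np.allclose(compound(B @ A, q), compound(B, q) @ compound(A, q)))  # True
```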

Some corollaries:

When \(A\) and \(B\) are both maps from \(V\) to itself and \(q = \dim V\), all of the wedge powers turn into determinants, giving

\[\det(BA) = \det(B) \det(A)\]

If \(B = A^T\), then even if \(A\) is not square, the determinant of the square matrix \(A^T A\) is the sum of squared determinants of minors of \(A\). If \(A\) is \(n \times k\), this is a sum over all \(k \times k\) minors of \(A\):

\[\begin{aligned} \det (A^T A) &= \^^k (A^T A) \\ &= (\^^k A^T) (\^^k A) \\ &= \sum_{I \in \^^k U} \sum_{J \in \^^k V} ((A^{\^ k})_I^J)^2 \end{aligned}\]
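And a direct numerical check of that corollary (random \(5 \times 3\) matrix, arbitrary seed):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 3))   # n x k with n = 5, k = 3

# Sum of squared k x k minors of A, one for each choice of k rows.
minors_sq = sum(np.linalg.det(A[list(I), :]) ** 2
                for I in combinations(range(5), 3))

print(np.isclose(np.linalg.det(A.T @ A), minors_sq))   # True
```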

My other articles about Exterior Algebra:

  1. Oriented Areas and the Shoelace Formula
  2. Matrices and Determinants
  3. The Inner product
  4. The Hodge Star
  5. The Interior Product
  6. EA as Linearized Set Theory?
  7. Oriented Projective Geometry
  8. Simplex Volumes and Boundaries
  9. All the Exterior Algebra Operations
  1. \(1_{IJ}\) is a Kronecker delta. I like this notation better than \(\delta_{IJ}\) because the symbol \(\delta\) has too many meanings. 

  2. If we don’t specify that all of our multi-indices are unique up to permutation, then we would have to write something like \(\< \b{x}_{\^ I}, \b{x}_{\^ J} \> = \sgn(I, J)\), where \(\text{sgn}\) is the sign of the permutation that takes \(I\) to \(J\), since for instance \(\< \b{x \^ y} , \b{y \^ x} \> = -1\). 

  3. There are several conventions for defining \(\text{Alt}\). It often comes with a factor of \(\frac{1}{N!}\). If you wanted it to preserve vector magnitudes, you might instead use \(\frac{1}{\sqrt{N!}}\). I prefer to leave it without any factorials, because it makes other definitions much easier. 

  4. This will matter later when we define the interior product the same way, but it’s still a matter of preference.