Fourier Transforms via magic

[November 26, 2019]

A while ago I found a series of papers which do some wild magic with derivative operators:

  1. New Dirac Delta function based methods with applications to perturbative expansions in quantum field theory by Kempf/Jackson/Morales, 2014
  2. How to (Path-) Integrate by Differentiating also by Kempf/Jackson/Morales, 2015
  3. Integration by differentiation: new proofs, methods and examples by Jia/Tang/Kempf, 2016

The general theme is that evaluating functions on derivative operators, like $f(\partial_x)$, and applying this to delta functions, like $f(\partial_\omega)\,\delta(\omega)$, is occasionally useful and gives weird alternate characterizations of the Fourier transform and stuff like that.

The authors are physicists, unsurprisingly, and I’m sure there are a bunch of reasons why these results are either not that surprising or surprising-yet-not-useful, but I found them useful. However, when I revisited them trying to understand the ideas more closely, I found myself kinda overwhelmed and confused. So here’s a… totally different take, rederiving the main result by poking around.

tldr: the Fourier transform of $f(x)$ is $2\pi \, f(i \partial_\omega) \, \delta(\omega)$. Wait, what?


1

Consider the formula

$$e^{a \partial_x} f(x) = f(x + a)$$

which can be understood as the exponential map, but in practice we just figure that the Taylor series matches, $\sum_n \frac{a^n}{n!} f^{(n)}(x) = f(x + a)$, and take that as a proof. This is commonly used in physics, but deserves to be more widely known – I remember it blowing my mind a bit when I first saw it.
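The identity is easy to spot-check numerically. Here is a quick sympy sketch (mine, not from the papers): truncate the exponential's Taylor series, apply it to a sample function, and compare against the shifted function directly.

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.sin(x)  # stand-in for any analytic function

# Truncate e^{a d/dx} = sum_n a^n/n! * d^n/dx^n at N terms and apply it to f
N = 30
shifted = sum(a**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(N))

# Compare against f(x + a) at a sample point
approx = float(shifted.subs({x: 0.3, a: 1.1}))
exact = float(sp.sin(0.3 + 1.1))
assert abs(approx - exact) < 1e-12
```

For an entire function like $\sin$ the truncation error dies off factorially, which is why 30 terms is already far below floating-point noise.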

Now, the Fourier transform of $f(x + a)$ is given by¹

$$\mathcal{F}[f(x+a)](\omega) = e^{i a \omega} \hat{f}(\omega)$$

So it seems like we can write this, by matching up the Taylor series term-by-term:

$$\mathcal{F}[e^{a \partial_x} f(x)](\omega) = e^{a (i \omega)} \hat{f}(\omega)$$

Is that correct? Sure, no problem: if a function is a sum of exponentials $f(x) = \sum_k c_k e^{i b_k x}$, then of course $f(x+a)$ is a sum of exponentials $\sum_k c_k e^{i a b_k} e^{i b_k x}$, and the first factor $e^{i a b_k}$ just passes through the transform untouched because it has no $x$-dependence (and also Wikipedia says so). But it also just seems that we simply swapped $\partial_x$ for $i \omega$. I wonder if we can just insert $\partial_x \mapsto i\omega$ to Fourier-transform any other functions with no work?


2

Let’s try polynomials. We can write an integral as a sort of inverse derivative: let’s say that $\partial_x^{-1} f(x)$ means $\int_0^x f(u) \, du$. Then we can write a polynomial as

$$x^n = n! \, \partial_x^{-n} (1)$$

Mindless substitution (with $\partial_x \mapsto i\omega$, and with $\mathcal{F}[1](\omega) = 2\pi \, \delta(\omega)$ the delta function) gives the Fourier transform as

$$\mathcal{F}[x^n](\omega) = 2\pi \, n! \, (i\omega)^{-n} \, \delta(\omega)$$

Wikipedia says that the transform should actually be $2\pi \, i^n \, \delta^{(n)}(\omega)$. Are those the same? Yeah, turns out they are, using the distributional identity $\omega^n \, \delta^{(n)}(\omega) = (-1)^n \, n! \, \delta(\omega)$:

$$n! \, (i\omega)^{-n} \, \delta(\omega) = n! \, i^{-n} \cdot \frac{(-1)^n}{n!} \, \delta^{(n)}(\omega) = i^n \, \delta^{(n)}(\omega)$$

So we have²

$$\mathcal{F}[x^n](\omega) = 2\pi \, i^n \, \delta^{(n)}(\omega) = 2\pi \, (i \partial_\omega)^n \, \delta(\omega)$$
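There's a way to check this without ever dividing by $\omega$ (my check, in sympy): pair both sides with a Gaussian test function, using $\langle \hat{f}, \varphi \rangle = \langle f, \hat{\varphi} \rangle$ and $\int \delta^{(n)}(\omega) \, \varphi(\omega) \, d\omega = (-1)^n \varphi^{(n)}(0)$.

```python
import sympy as sp

w, x = sp.symbols('omega x', real=True)
phi = sp.exp(-w**2)  # Gaussian test function

# phi_hat under the convention in footnote 1: integral of phi(w) e^{-i w x} dw
phi_hat = sp.integrate(phi * sp.exp(-sp.I * w * x), (w, -sp.oo, sp.oo))

for n in range(4):
    # pairing the claimed transform 2*pi*i^n*delta^(n)(w) with phi:
    lhs = 2 * sp.pi * sp.I**n * (-1)**n * sp.diff(phi, w, n).subs(w, 0)
    # pairing x^n with phi_hat instead:
    rhs = sp.integrate(x**n * phi_hat, (x, -sp.oo, sp.oo))
    assert sp.simplify(lhs - rhs) == 0
```

The odd $n$ cases vanish on both sides; the even ones match as honest numbers, which is about as concrete as a distributional identity gets.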


3

If that’s true, then the Fourier transform of any function that has a Taylor series is going to be something like

$$\mathcal{F}[f](\omega) = 2\pi \sum_n \frac{f^{(n)}(0)}{n!} \, (i \partial_\omega)^n \, \delta(\omega)$$

Since we like to write $f(x) = \sum_n \frac{f^{(n)}(0)}{n!} x^n$, we can write that as:

$$\mathcal{F}[f](\omega) = 2\pi \, f(i \partial_\omega) \, \delta(\omega)$$

This kinda suggests that $\mathcal{F}[f](\omega)$ should just be $2\pi \, f(i\partial_\omega) \, \delta(\omega)$ for any function that has a Taylor series.

The derivative $\partial_\omega$ here acts to the right, on the delta function. If we don’t like leaving our Fourier transforms as derivative operators $f(i\partial_\omega)$, we can sometimes rewrite the derivatives in terms of delta functions: $\partial_\omega^n \, \delta(\omega) = \delta^{(n)}(\omega)$. For instance here is the polynomial formula from before, done another way:

$$\mathcal{F}[x^n](\omega) = 2\pi \, (i\partial_\omega)^n \, \delta(\omega) = 2\pi \, i^n \, \delta^{(n)}(\omega)$$

What can we make of $f(i \partial_\omega)$? Well, if $f$ has a Taylor series and we operate this on an exponential, it turns into a form of substitution: $f(\partial_x) \, e^{bx} = f(b) \, e^{bx}$. So $f(\partial_x)$ is an operator whose eigenvalues, I guess, are $f(b)$, when acting on each exponential $e^{bx}$. That makes a certain amount of sense. It feels a lot like quantum mechanical operators pulling out eigenvalues… but it’s purely mathematical! Weird.
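Here is that substitution property checked for a small polynomial (a sympy sketch of mine; a polynomial keeps everything exact):

```python
import sympy as sp

x, b = sp.symbols('x b')
exp_bx = sp.exp(b * x)

# Apply f(d/dx) to e^{bx}, where f(s) = 3s^2 + 2s + 1
applied = 3 * sp.diff(exp_bx, x, 2) + 2 * sp.diff(exp_bx, x, 1) + exp_bx

# f(d/dx) e^{bx} should equal f(b) e^{bx}: the exponential is an eigenvector
f_of_b = 3 * b**2 + 2 * b + 1
assert sp.simplify(applied - f_of_b * exp_bx) == 0
```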


4

What about $\frac{1}{x}$? Apparently it should be:

$$\mathcal{F}\left[\frac{1}{x}\right](\omega) = 2\pi \, (i \partial_\omega)^{-1} \, \delta(\omega) = \frac{2\pi}{i} \left( \theta(\omega) + C \right)$$

I kept the constant of integration $C$ around because Wikipedia says this should be $-i \pi \, \mathrm{sgn}(\omega)$, which would mean $C = -\frac{1}{2}$. I’m not sure where you get that from, but I think it’s possible the constant is arbitrary. This integral is normally evaluated using the Cauchy Principal Value, which is not very well-behaved anyway, so I dunno³.
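Plugging $C = -\frac{1}{2}$ back in does recover the Wikipedia form, since $\theta(\omega) - \frac{1}{2} = \frac{1}{2}\mathrm{sgn}(\omega)$. A one-line sympy check (mine, not from the papers):

```python
import sympy as sp

w = sp.symbols('omega', real=True)

# (2*pi/i) * (theta(w) + C) with C = -1/2, versus the Wikipedia value
lhs = (2 * sp.pi / sp.I) * (sp.Heaviside(w) - sp.Rational(1, 2))
rhs = -sp.I * sp.pi * sp.sign(w)

# Rewrite Heaviside in terms of sign so sympy can cancel the difference
assert sp.simplify(lhs.rewrite(sp.sign) - rhs) == 0
```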


5

A quick example of how this is useful for evaluating integrals. Using $\int f(x) \, dx = \mathcal{F}[f](0)$ we get this odd formula:

$$\int f(x) \, dx = 2\pi \, f(i \partial_\omega) \, \delta(\omega) \, \Big|_{\omega = 0}$$

Check this out:

$$\begin{aligned}
\int \frac{\sin x}{x} \, dx &= 2\pi \, \frac{\sin(i \partial_\omega)}{i \partial_\omega} \, \delta(\omega) \, \Big|_{\omega = 0} \\
&= 2\pi \, (i \partial_\omega)^{-1} \, \frac{e^{i (i \partial_\omega)} - e^{-i (i \partial_\omega)}}{2i} \, \delta(\omega) \, \Big|_{\omega = 0} \\
&= \frac{\pi}{i} \, (i \partial_\omega)^{-1} \left[ \delta(\omega - 1) - \delta(\omega + 1) \right] \Big|_{\omega = 0} \\
&= \pi \left[ \theta(\omega + 1) - \theta(\omega - 1) \right] \Big|_{\omega = 0} \\
&= \pi
\end{aligned}$$

This is so much easier and more… algebraic than doing a limit and taking the principal value or whatever you normally have to do. As a bonus you get to see the Fourier transform as an intermediate step (it’s $\pi \left[ \theta(\omega + 1) - \theta(\omega - 1) \right]$, a box of height $\pi$ on $[-1, 1]$).
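Both the final value and the box can be spot-checked with sympy (my check; the inside/outside samples use the product-to-sum identity $\sin x \cos(\omega x) = \frac{1}{2}\left[\sin((1{+}\omega)x) + \sin((1{-}\omega)x)\right]$):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# The value at omega = 0 is the integral itself
total = sp.integrate(sp.sin(x) / x, (x, -sp.oo, sp.oo))
assert total == sp.pi

# Sample the box transform: omega = 1/2 (inside) should give pi,
# omega = 2 (outside) should give 0
inside = sp.integrate((sp.sin(sp.Rational(3, 2) * x) + sp.sin(x / 2)) / (2 * x),
                      (x, -sp.oo, sp.oo))
outside = sp.integrate((sp.sin(3 * x) - sp.sin(x)) / (2 * x), (x, -sp.oo, sp.oo))
assert sp.simplify(inside - sp.pi) == 0
assert sp.simplify(outside) == 0
```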

There are also versions for integrating over finite intervals, doing Laplace transforms, and a bunch of other stuff. I’ll probably write more about them later.


6

In summary it appears to be true, in some numerological and intuitive sense, that when a function has a Taylor series its Fourier transform might be given by⁴

$$\mathcal{F}[f(x)](\omega) = 2\pi \, f(i \partial_\omega) \, \delta(\omega)$$

And more generally if we write $f$ as a function of $x$ and $\partial_x$, then:

$$\mathcal{F}[f(x, \partial_x)](\omega) = 2\pi \, f(i \partial_\omega, i \omega) \, \delta(\omega)$$

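One non-polynomial sanity check (my example, not from the papers): take $f(x) = e^{-x^2}$. Then $f(i\partial_\omega) = e^{-(i\partial_\omega)^2} = e^{\partial_\omega^2}$ is the heat semigroup at time $t = 1$, and applying the heat semigroup to $\delta(\omega)$ gives the heat kernel:

$$2\pi \, e^{-(i\partial_\omega)^2} \, \delta(\omega) = 2\pi \, e^{\partial_\omega^2} \, \delta(\omega) = 2\pi \cdot \frac{1}{\sqrt{4\pi}} \, e^{-\omega^2/4} = \sqrt{\pi} \, e^{-\omega^2/4}$$

which matches the standard transform $\mathcal{F}[e^{-x^2}](\omega) = \sqrt{\pi} \, e^{-\omega^2/4}$ under the convention in footnote 1.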
At least since this works for Taylor series it seems like it will work for a nice class of well-behaved functions. From playing with it I’ve decided that it ‘rings true’ to me, because it has so far appeared to be consistent with a bunch of the Fourier transforms I’ve checked, so I suspect that it will be more-or-less consistent with the rest.

Returning to the papers in the introduction, this is equation (6) in the first Kempf/Jackson/Morales paper above, up to differences in notation. If anything they undersell it: they present this mostly as a novel way of defining the delta function and of computing integrals, rather than as an unusual perspective on the Fourier transform itself. Perhaps it is not wise to be philosophical in papers.

So it appears that, if we consider a function as being defined on a two-dimensional space spanned by $(x, \partial_x)$, this amounts to a sort of change-of-basis to $(i \partial_\omega, i \omega)$, perhaps representable as

$$\begin{pmatrix} x \\ \partial_x \end{pmatrix} \mapsto \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \begin{pmatrix} \omega \\ \partial_\omega \end{pmatrix}$$

But unlike a usual change of basis our coordinate bases don’t commute, since $[\partial_x, x] = 1$ (cf the Weyl Algebra), and I have no idea what complexity that leads to. Anyway, that immediately suggests we should try other transformations. There apparently are already generalizations of the Fourier transform which rotate fractionally between position and frequency space, or generalize this to arbitrary $2 \times 2$ matrices. I expect there are versions of all of these using terms like $a x + b \, \partial_x$ as well.

So that’s fascinating. It works too well to not be real. But… evaluating functions at derivative operators is meaningful? Who ordered that?

  1. Let’s agree to use the convention $\hat{f}(\omega) = \int f(x) \, e^{-i \omega x} \, dx$, with the $\frac{1}{2\pi}$ on the inverse transform. 

  2. I have a different sign here than the Kempf/Jackson/Morales paper and I’m not sure why, since we’re using the same Fourier convention. 

  3. The principal value, a technique for integrating functions like $\frac{f(x)}{x}$ through the singularity at $x = 0$, is problematic. It takes the integral by adding $\int_{-\infty}^{-\epsilon}$ and $\int_{\epsilon}^{\infty}$ and letting $\epsilon \to 0$, which seems good at first, but the value ends up depending on how $\epsilon$ approaches 0, which can change depending on your choice of coordinate system. 

  4. Where should the Taylor series be defined / convergent? No idea. But it seems to work algebraically.