# Fourier Transforms via magic

[November 26, 2019]

A while ago I found a series of papers which do some weird stuff with derivative operators:

1. *New Dirac Delta function based methods with applications to perturbative expansions in quantum field theory* by Kempf/Jackson/Morales, 2014
2. *How to (Path-) Integrate by Differentiating*, also by Kempf/Jackson/Morales, 2015
3. *Integration by differentiation: new proofs, methods and examples* by Jia/Tang/Kempf, 2016

The general theme is: evaluating functions on derivative operators $f(\p)$, and applying this to delta functions $f(\p_x) \delta(x)$, is occasionally useful and can give weird alternate characterizations of the Fourier transform and can be used to efficiently solve integrals.

The authors are physicists, unsurprisingly, and I’m sure there are a bunch of reasons why these results are either not that surprising or surprising-yet-not-useful, but I found them remarkable. But the whole thing is confusing and hard to make sense of. Here’s a… totally different take, in which I rederive the main result by poking around.

tldr: the Fourier transform of $f(x)$ is $2 \pi f(i \p_x) \delta(x)$. Er, what?

## 1.

First let’s fix a Fourier transform convention:
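$$\hat{f}(k) = \F{f(x)} = \int_{-\infty}^{\infty} f(x) \, e^{-ikx} \, dx \qquad f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(k) \, e^{ikx} \, dk$$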

I prefer not to use the ones that put $2 \pi$ in the exponent because it’s distracting.

Here are a few common Fourier transform formulas in this convention, for reference:
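$$\F{\delta(x)} = 1 \qquad \F{1} = 2\pi \delta(k) \qquad \F{e^{iax}} = 2\pi \delta(k - a)$$

$$\F{\sgn(x)} = \frac{2}{ik} \qquad \F{\theta(x)} = \pi \delta(k) + \frac{1}{ik}$$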

## 2.

It is common to take Fourier transforms of operators acting on functions, like $\F{\p_x f(x)} = ik \hat{f}(k)$, in order to solve differential equations. This can be computed using integration by parts inside the transform:
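$$\F{\p_x f(x)} = \int (\p_x f) \, e^{-ikx} \, dx = - \int f \, (\p_x e^{-ikx}) \, dx = ik \int f \, e^{-ikx} \, dx = ik \hat{f}(k)$$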

It seems plausible to use the same argument to Fourier-transform a “freestanding” derivative operator, like $\p_x$, which acts to the right onto the kernel:
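$$\F{\p_x} = \int \p_x \, e^{-ikx} \, dx = \int (-ik) \, e^{-ikx} \, dx = -ik \, \F{1} = -ik \cdot 2\pi \delta(k)$$

In operator terms, $\p_x \ra -ik$.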

I find this compelling, because it seems to work. The minus sign comes from integration by parts: we might rewrite $\p_x f(x)$ as an operator $- f(x) \p_x$. These are different in general, but under an integral whose boundary terms vanish they are the same, which is an assumption we'll make liberally.

We can also find $\F{x}$ by rewriting it as a derivative in $k$:
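$$\F{x} = \int x \, e^{-ikx} \, dx = \int i \p_k \, e^{-ikx} \, dx = i \p_k \int e^{-ikx} \, dx = i \p_k \, 2\pi \delta(k)$$

In operator terms, $x \ra i \p_k$.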

Armed with these, we may transform any function in $\p_x$ or $x$ which has a Taylor series:
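$$\F{f(\p_x)} = f(-ik) \, 2\pi \delta(k) \qquad \F{f(x)} = f(i \p_k) \, 2\pi \delta(k)$$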

Or even both at once, as long as we are careful with what all of the operators act on:
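$$\F{f(x, \p_x)} = f(i \p_k, -ik) \, 2\pi \delta(k)$$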

In this expression, the $\p_x$ and $\p_k$s are acting to the right, not on internal members of the expression. If they act on an internal member they pick up a minus sign, like we saw above:
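$$\F{\p_x f(x)} = -(-ik) \, \hat{f}(k) = ik \, \hat{f}(k)$$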

All of this mostly seems to work if we allow negative powers of $x$ and $\p_x$ also, but there is some funny business around integration bounds.
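For instance, $\frac{1}{x} \ra \frac{1}{i \p_k}$, where $\frac{1}{\p_k}$ is an antiderivative and so is only defined up to a constant $c$:

$$\F{\frac{1}{x}} = \frac{1}{i \p_k} \, 2\pi \delta(k) = \frac{2\pi}{i} \left( \theta(k) - c \right)$$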

What value of $c$ should be used? To get the same value as Wikipedia’s table of Fourier transforms, it should be $c = \frac{1}{2}$. This makes $\theta(k) - \frac{1}{2}$ an odd function, which amounts to choosing $\frac{1}{x} \vert_{x=0} = 0$. This seems somewhat arbitrary, and I suspect that one could get away with not choosing at all. If we do use $c = \frac{1}{2}$, we get:
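$$\F{\frac{1}{x}} = \frac{2\pi}{i} \left( \theta(k) - \frac{1}{2} \right) = -i \pi \, \sgn(k)$$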

The other direction is simpler:
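$$\F{\frac{1}{\p_x}} = \int \frac{1}{\p_x} \, e^{-ikx} \, dx = \int \frac{1}{-ik} \, e^{-ikx} \, dx = \frac{2\pi \, \delta(k)}{-ik}$$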

In summary we have a rough hand-waving method – well, maybe not for rigorously deriving Fourier transforms, but at least for guessing at them – which works on anything that can be written as a Laurent series (a Taylor series with negative powers) in $x$ and $\p_x$: just swap $f(x, \p_x) \ra f(i \p_k, - ik)$.

In a sense this is a quarter rotation in the $(x, \p_x)$ plane, followed by multiplying by $i$ and relabeling $x \ra k$. That is:
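$$(x, \p_x) \xrightarrow{\text{rotate}} (\p_x, -x) \xrightarrow{\times \, i} (i \p_x, -ix) \xrightarrow{x \ra k} (i \p_k, -ik)$$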

I don’t know what it means to rotate in the $(x, \p_x)$ plane, but it turns out that you can do other linear transformations in this plane as well – fractional rotations, or arbitrary matrices. Incidentally, the Laplace transform is $(t, \p_t) \ra (-\p_s, -s)$, although the two-sided transform is better behaved than the more common one-sided transform. The one-sided version produces a bunch of boundary terms like $f(0)$ in the process – useful, since it’s used for signals that turn ‘on’ at $t=0$, but not too helpful for understanding the transform as a rotation.

## 3. An integration technique

Integration techniques like the following are the main results of the papers mentioned above, I guess because papers have to justify their existence.

Recall that the integral of a function over the real line is equal to its Fourier transform evaluated at $0$:
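$$\int_{-\infty}^{\infty} g(x) \, dx = \int_{-\infty}^{\infty} g(x) \, e^{-ikx} \, dx \, \Big\vert_{k=0} = \hat{g}(0)$$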

Using our form of $\hat{g}$ this is:
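$$\int_{-\infty}^{\infty} g(x) \, dx = 2\pi \, g(i \p_k) \, \delta(k) \, \Big\vert_{k=0}$$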

This is readily computable:
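For $g(x) = \frac{\sin x}{x}$:

$$\int_{-\infty}^{\infty} \frac{\sin x}{x} \, dx = 2\pi \, \frac{\sin(i \p_k)}{i \p_k} \, \delta(k) \, \Big\vert_{k=0} = 2\pi \, \frac{e^{-\p_k} - e^{\p_k}}{2i} \, \frac{\theta(k)}{i} \, \Big\vert_{k=0} = \pi \left[ \theta(k+1) - \theta(k-1) \right] \Big\vert_{k=0} = \pi$$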

This is so much easier and more algebraic than taking a limit and computing a principal value, or whatever you normally have to do. As a bonus, you get to see the Fourier transform as an intermediate step (it’s $\pi [ \theta(k + 1) - \theta(k-1)]$).

There are also versions for integrating over finite intervals, doing Laplace transforms, and a bunch of other stuff; I’ll probably write more about them later. For instance, solving bounded integrals amounts to evaluating $\int (\theta(x - a) - \theta(x - b)) f(x) \, dx$, using the fact that we know the Fourier transform of $\theta(x)$. It is a little messy: it’s $\sgn(x)$ that has the clean transform, $\F{\sgn(x)} = \frac{2}{ik}$, and then you use $\theta(x) = \frac{1}{2}(\sgn(x) + 1)$.

Anyway, I wanted to write this up so I don’t forget about it or how to understand it. Hope it’s useful to somebody else.