All About Taylor Series

[December 28, 2018]

Here is a survey of my understanding of each of the main types of Taylor series:

  1. single-variable
  2. multivariable -> scalar
  3. vector fields
  4. complex

I thought it would be useful to have everything I know about these written down in one place.

These notes are not pedagogical; they’re for crystallizing everything when you already have a partial understanding of what’s going on. In particular, I don’t want to have to remember the difference between all the different flavors of Taylor series, so I find it helpful to just cast them all into the same form, which is possible because they’re all the same thing (seriously, why aren’t they taught this way?).

In these notes I am going to ignore discussions of convergence so that more ground can be covered. Generally it’s important to address convergence in order to, well, not be wrong. And I’m certain that I’ve made statements which are wrong below. But I am just trying to make sure I understand what happens when everything works, because in my field of interest (physics) it usually does.


1. Single Variable

A Taylor series for a function $f(x)$, expanded around the point $x = a$, looks like this:

$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$$

It’s useful to write this as one big operator acting on $f$:

$$f(x) = \left[\sum_{n=0}^{\infty} \frac{(x-a)^n}{n!} \frac{d^n}{da^n}\right] f(a)$$

Or even as a single exponentiation of the derivative operator, which is commonly done in physics, but you probably shouldn’t think too hard about what it means:

$$f(x) = e^{(x-a)\frac{d}{da}} f(a)$$

I also think it’s useful to interpret the Taylor series equation as resulting from repeated integration:

$$f(x) = f(a) + \int_a^x f'(t_1)\, dt_1 = f(a) + f'(a)(x-a) + \int_a^x \!\! \int_a^{t_1} f''(t_2)\, dt_2\, dt_1 = \cdots$$

This basically makes sense as soon as you understand integration, plus it makes obvious that the series only works when all of the integrals are actually equal to the values of the previous function (so you can’t take a series of a function like $\frac{1}{x}$ on an interval which passes its singularity at $x = 0$, because you can’t exactly integrate past it (though there are tricks)).

… plus it makes sense in pretty much any space you can integrate over.

… plus it makes it obvious how to truncate the series, how to create the remainder term, and it even shows you how you could – if you were so inclined – have each derivative be evaluated at a different point (say, the $n$-th derivative at its own point $a_n$), which I’ve never even seen done before (except for here?), though good luck with figuring out convergence if you do that.
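For instance, truncating after finitely many integrations leaves an exact remainder in integral form (a standard identity, just iterating the fundamental theorem of calculus):

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + \int_a^x \!\! \int_a^{t_1} \!\! \cdots \int_a^{t_n} f^{(n+1)}(t_{n+1})\, dt_{n+1} \cdots dt_1$$

The iterated integral is exactly the error made by the $n$-th order truncation; no infinite sums are required.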


L’Hôpital’s rule for evaluating limits which give indeterminate forms follows naturally if the functions are both expressible as Taylor series. If $f(a) = g(a) = 0$, then:

$$\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(a)(x-a) + \frac{1}{2}f''(a)(x-a)^2 + \cdots}{g'(a)(x-a) + \frac{1}{2}g''(a)(x-a)^2 + \cdots} = \lim_{x \to a} \frac{f'(a) + \frac{1}{2}f''(a)(x-a) + \cdots}{g'(a) + \frac{1}{2}g''(a)(x-a) + \cdots}$$

Which equals $\frac{f'(a)}{g'(a)}$ if the limit exists, and otherwise might be solvable by applying the rule recursively. None of this works, of course, if the limit doesn’t exist. If $f(x), g(x) \to \infty$ instead, evaluate $\lim \frac{1/g(x)}{1/f(x)}$, which is a $\frac{0}{0}$ form. If the indeterminate form is $0 \cdot \infty$, i.e. $\lim f(x)\, g(x)$, evaluate $\lim \frac{f(x)}{1/g(x)}$ instead.
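A quick example of why the series viewpoint is convenient: $\lim_{x \to 0} \frac{1 - \cos x}{x^2}$ takes two rounds of the usual rule, but can be read off immediately from the expansion of the numerator:

$$\lim_{x \to 0} \frac{1 - \cos x}{x^2} = \lim_{x \to 0} \frac{\frac{x^2}{2} - \frac{x^4}{24} + \cdots}{x^2} = \frac{1}{2}$$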


2. Multivariable -> Scalar

The multivariable Taylor series looks messier at first, so let’s start with only two variables, writing $\Delta x = x - a$ and $\Delta y = y - b$ (and using subscripts like $f_x$ for partial derivatives, all evaluated at the expansion point $(a,b)$), and we’ll work it into a more usable form.

$$f(x,y) = f + \big[f_x \Delta x + f_y \Delta y\big] + \frac{1}{2!}\big[f_{xx} \Delta x^2 + 2 f_{xy} \Delta x \Delta y + f_{yy} \Delta y^2\big] + \frac{1}{3!}\big[f_{xxx} \Delta x^3 + 3 f_{xxy} \Delta x^2 \Delta y + 3 f_{xyy} \Delta x \Delta y^2 + f_{yyy} \Delta y^3\big] + \cdots$$

(The strangeness of the terms like $2 f_{xy}$ and $3 f_{xxy}$ is because these are really sums of multiple terms; because of the commutativity of partial derivatives on analytic functions, $f_{xy} = f_{yx}$, we can write $f_{xy} \Delta x \Delta y + f_{yx} \Delta y \Delta x = 2 f_{xy} \Delta x \Delta y$.)

The first few terms are often arranged like this:

$$f(\vec{x}) = f(\vec{a}) + \nabla f(\vec{a}) \cdot (\vec{x} - \vec{a}) + \frac{1}{2} (\vec{x} - \vec{a})^T H(\vec{a}) \, (\vec{x} - \vec{a}) + \cdots$$

$\nabla f$ is the gradient of $f$ (the vector of partial derivatives like $\frac{\partial f}{\partial x}$). The matrix $H$, with entries $H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}$, is the “Hessian matrix” for $f$, and represents its second derivative.

… But we can do better. In fact, every order of derivative of $f$ in the total series has the same form, as powers of $(\vec{x} - \vec{a}) \cdot \nabla$, which I prefer to write as $(\vec{x} - \vec{a}) \cdot \partial$, because $\partial$ represents a ‘vector of partial derivatives’ $(\partial_x, \partial_y)$:

$$f(\vec{x}) = \sum_{n=0}^{\infty} \frac{1}{n!} \big[(\vec{x} - \vec{a}) \cdot \partial\big]^n f(\vec{a})$$

(This can also be written as a sum over every individual term using multi-index notation.)

So that looks pretty good. And it can still be written as $f(\vec{x}) = e^{(\vec{x} - \vec{a}) \cdot \partial} f(\vec{a})$. The same formula – now that we’ve hidden all the actual indexes – happily continues to work in dimension $N$, as well.

… Actually, this is not as surprising a formula as it might look. The multivariate Taylor series of $f(x,y)$ is really just a bunch of single-variable series multiplied together:

$$f(x,y) = e^{(x-a)\partial_a}\, e^{(y-b)\partial_b}\, f(a,b) = \left[\sum_{m=0}^{\infty} \frac{(x-a)^m}{m!} \partial_a^m\right] \left[\sum_{n=0}^{\infty} \frac{(y-b)^n}{n!} \partial_b^n\right] f(a,b)$$
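As a sanity check of that claim, the mixed second-order term comes out right: the product of the two first-order pieces gives

$$\big[(x-a)\partial_a\big]\big[(y-b)\partial_b\big] f(a,b) = f_{xy}\, \Delta x \, \Delta y$$

which matches the $\frac{2}{2!} f_{xy} \Delta x \Delta y$ term from before. And since $\partial_a$ and $\partial_b$ commute, the product of exponentials is the same as the single exponential $e^{(\vec{x} - \vec{a}) \cdot \partial}$.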

I mention all this because it’s useful to have a solid idea of what the Taylor series of a scalar function looks like before we move to vector-valued functions.


L’Hôpital’s rule is more subtle for multivariable functions. In general the limit of a function may be different depending on what direction you approach from, so an expression like $\lim_{(x,y) \to (0,0)} \frac{f(x,y)}{g(x,y)}$ is not necessarily defined, even if both $f$ and $g$ have Taylor expansions. On the other hand, if we choose a path for $(x,y) \to (0,0)$, such as $y = x$, then this just becomes a one-dimensional limit, and the regular rule applies again. So, for instance, while $\lim_{(x,y) \to (0,0)} \frac{f(x,y)}{g(x,y)}$ may not be defined, $\lim_{x \to 0} \frac{f(x,x)}{g(x,x)}$ is.
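To make the direction-dependence concrete, consider the classic ratio $\frac{xy}{x^2 + y^2}$ near the origin:

$$\lim_{x \to 0} \frac{x \cdot 0}{x^2 + 0^2} = 0 \qquad \text{but} \qquad \lim_{x \to 0} \frac{x \cdot x}{x^2 + x^2} = \frac{1}{2}$$

so the unrestricted two-dimensional limit at the origin does not exist, even though each path-restricted limit does.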

And the path we take to approach the origin doesn’t even matter – only the gradients when we’re infinitesimally close to it. For example, suppose $f(0,0) = g(0,0) = 0$ and we’re taking the limit on the path given by $y = x^2$:

$$\lim_{x \to 0} \frac{f(x, x^2)}{g(x, x^2)} = \lim_{x \to 0} \frac{f_x\, x + f_y\, x^2 + \cdots}{g_x\, x + g_y\, x^2 + \cdots}$$

The $f_y\, x^2$ and $g_y\, x^2$ terms are of order $x^2$ and so drop out, leaving a limit taken only on the $x$-axis – corresponding to the fact that the tangent to $y = x^2$ at 0 is the $x$-axis.

In fact, this problem basically exists in 1D also, except that limits can only come from two directions: $x \to a^{-}$ and $x \to a^{+}$, so lots of functions get away without a problem (but you can also abuse this). L’Hôpital’s rule only needs that the functions be expandable as a Taylor series on the side the limit comes from.

I think that the concept of a limit that doesn’t specify a direction of approach is more common than it should be, because it’s really quite problematic in practice. I’m not quite sure I fully understand the complexity of solving it in dimension $N > 1$ – but clearly if you just reduce to a 1-dimensional limit, you sweep the difficulties under the rug anyway. But see, perhaps, this pre-print for a lot more information.


3. Vector Fields

There are several types of vector-valued functions – curves like $\mathbb{R} \to \mathbb{R}^M$, or maps between manifolds like $\mathbb{R}^N \to \mathbb{R}^M$ (including from a space to itself). In each case there is something like a Taylor series that can be defined. It’s not commonly written out, but I think it should be, so let’s try.

Let’s imagine our function $\vec{f}$ maps spaces $\mathbb{R}^N \to \mathbb{R}^M$, where $\mathbb{R}^N$ has $N$ coordinates and $\mathbb{R}^M$ has $M$ coordinates, and $N$ might be 1 in the case of a curve. Then along any particular coordinate in $\mathbb{R}^M$ out of the $M$ – call it $f^j$ – the Taylor series expression from above holds, because $f^j$ is just a scalar function.

But of course this holds in every $f^j$ at once, so it holds for the whole function:

$$\vec{f}(\vec{x}) = \sum_{n=0}^{\infty} \frac{1}{n!} \big[(\vec{x} - \vec{a}) \cdot \partial\big]^n \, \vec{f}(\vec{a})$$

The subtlety here is that the partial derivatives are now being taken termwise – once for each component of $\vec{f}$. For example, consider the first few terms when $\vec{x}$ and $\vec{f}$ are 2D:

$$\begin{pmatrix} f^1(\vec{x}) \\ f^2(\vec{x}) \end{pmatrix} = \begin{pmatrix} f^1(\vec{a}) \\ f^2(\vec{a}) \end{pmatrix} + \begin{pmatrix} \partial_x f^1 & \partial_y f^1 \\ \partial_x f^2 & \partial_y f^2 \end{pmatrix} \begin{pmatrix} x - a \\ y - b \end{pmatrix} + \cdots$$

That matrix term, the first-order term in the series, is the Jacobian Matrix of $\vec{f}$, sometimes written $J_{\vec{f}}$, and is much more succinctly written as $\partial_{\vec{a}} \vec{f}(\vec{a})$, or just $\partial \vec{f}$, or even just $\partial f$.

The Jacobian matrix is the ‘first derivative’ of a vector field, and it includes every term which can possibly matter to compute how the function changes to first-order. In the same way that a single-variable function is locally linear ($f(x) \approx f(a) + f'(a)(x-a)$), a multi-variable function is locally a linear transformation: $\vec{f}(\vec{x}) \approx \vec{f}(\vec{a}) + \partial \vec{f}(\vec{a}) \, (\vec{x} - \vec{a})$.
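For a concrete example of a Jacobian acting as the local linear approximation, take the polar-to-Cartesian map $\vec{f}(r, \theta) = (r\cos\theta,\ r\sin\theta)$:

$$\partial \vec{f} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} \qquad \vec{f}(r + dr,\ \theta + d\theta) \approx \vec{f}(r, \theta) + \partial \vec{f} \begin{pmatrix} dr \\ d\theta \end{pmatrix}$$

Small displacements $(dr, d\theta)$ get mapped to small Cartesian displacements by this matrix.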


Higher-order terms in the vector field Taylor series generalize ‘second’ and ‘third’ derivatives, etc, but they are generally tensors rather than matrices. They look like $\partial_i \partial_j f^k$, $\partial_i \partial_j \partial_k f^l$, or $\partial_{i_1} \cdots \partial_{i_n} f^j$ in general, and they act on $n$ copies of $(\vec{x} - \vec{a})$, ie, $\partial_{i_1} \cdots \partial_{i_n} f^j \, (x-a)^{i_1} \cdots (x-a)^{i_n}$.

The full expansion (for $\vec{x}$ and $\vec{f}$ with any number of coordinates) is written like this:

$$f^j(\vec{x}) = \sum_{n=0}^{\infty} \frac{(x-a)^{i_1} \cdots (x-a)^{i_n} \; \partial_{i_1} \cdots \partial_{i_n} f^j(\vec{a})}{n!}$$

(with sums over the repeated indexes implied).

We write the numerator in the summation as $\big[(\vec{x} - \vec{a}) \cdot \partial\big]^n f^j(\vec{a})$, which expands to $(x-a)^{i_1} \cdots (x-a)^{i_n} \, \partial_{i_1} \cdots \partial_{i_n} f^j(\vec{a})$, and then we can still group things into exponentials, only now we have to understand that all of these terms have derivative operators on them that need to be applied to $\vec{f}(\vec{a})$ to be meaningful:

$$\vec{f}(\vec{x}) = e^{(\vec{x} - \vec{a}) \cdot \partial} \, \vec{f}(\vec{a})$$

We could have included the indexes in the exponential notation also:

$$f^j(\vec{x}) = e^{(x-a)^i \partial_i} \, f^j(\vec{a})$$

It seems evident that this should work for any other sort of differentiable object also. What about matrices?

I don’t want to talk about curl and divergence here, because they bring in a lot more concepts and I don’t know the best way to understand them, but it’s worth noting that both are formed from components of $\partial \vec{f}$ (the Jacobian), appropriately arranged.


4. Complex Analytic

The complex plane is a sort of change-of-basis of $\mathbb{R}^2$, via $z = x + iy$ and $\bar{z} = x - iy$:

$$x = \frac{z + \bar{z}}{2} \qquad\qquad y = \frac{z - \bar{z}}{2i}$$

Therefore we can write a function on the plane as a Taylor series in these two variables:

$$f(z, \bar{z}) = \sum_{m,n=0}^{\infty} \frac{1}{m! \, n!} \, \frac{\partial^{m+n} f(a, \bar{a})}{\partial z^m \, \partial \bar{z}^n} \, (z - a)^m (\bar{z} - \bar{a})^n$$

One subtlety: it should always be true that $\frac{\partial z}{\partial z} = 1$ and $\frac{\partial \bar{z}}{\partial z} = 0$ when changing variables. Because $z$ and $\bar{z}$, when considered as vectors in $\mathbb{R}^2$, are not unit vectors, there is a normalization factor of $\frac{1}{2}$ required on the partial derivatives. Also, for $\frac{\partial}{\partial \bar{z}}$ the factors of $i$ cause the signs to swap:

$$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i \frac{\partial}{\partial y}\right) \qquad\qquad \frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i \frac{\partial}{\partial y}\right)$$
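As a quick check that these definitions behave the way they should:

$$\frac{\partial z}{\partial z} = \frac{1}{2}\big(1 - i \cdot i\big) = 1 \qquad\qquad \frac{\partial \bar{z}}{\partial z} = \frac{1}{2}\big(1 - i \cdot (-i)\big) = 0$$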

In complex analysis, for some reason, $\bar{z}$ is not treated as a true variable, and we only consider a function as ‘complex differentiable’ when it has derivatives with respect to $z$ alone. Notably, we would say that the derivative $\frac{d\bar{z}}{dz}$ does not exist – the value of $\frac{\Delta \bar{z}}{\Delta z}$ is different depending on the path $\Delta z$ takes towards the origin. These statements turn out to be almost equivalent:

  • $f$ is a function of $z$ only in a region
  • $\frac{\partial f}{\partial \bar{z}} = 0$ in a region
  • $f$ is complex-analytic in a region
  • $f$ has a Taylor series as a function of $z$ in a region

So when we discuss Taylor series of functions $\mathbb{C} \to \mathbb{C}$, we usually mean this:

$$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (z - a)^n$$

If we write $f(z) = u(x,y) + i\,v(x,y)$, the requirement that $\frac{\partial f}{\partial \bar{z}} = 0$ becomes the Cauchy-Riemann Equations by matching real and imaginary parts:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \qquad\qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$
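For example, $f(z) = z^2 = (x^2 - y^2) + i\,(2xy)$ satisfies them:

$$\frac{\partial u}{\partial x} = 2x = \frac{\partial v}{\partial y} \qquad\qquad \frac{\partial u}{\partial y} = -2y = -\frac{\partial v}{\partial x}$$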


There is one important case where a function is a function of only $z$, yet it is not analytic and $\frac{\partial f}{\partial \bar{z}} \neq 0$, and it is solely responsible for almost all of the interesting parts of complex analysis. It’s the fact that:

$$\frac{\partial}{\partial \bar{z}} \, \frac{1}{z} = \pi \, \delta^2(z)$$

Where $\delta^2(z)$ is the two-dimensional Dirac Delta function. I find this to be quite surprising. Here’s an aside on why it’s true:
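In brief (this is just the standard distributional argument): $\frac{1}{z}$ is analytic away from the origin, so $\frac{\partial}{\partial \bar{z}} \frac{1}{z} = 0$ there. To see what happens at $z = 0$, integrate over a small disk $D$ around it, using Stokes’ theorem in the form $\iint_D \frac{\partial F}{\partial \bar{z}} \, dx \, dy = \frac{1}{2i} \oint_{\partial D} F \, dz$:

$$\iint_D \frac{\partial}{\partial \bar{z}} \frac{1}{z} \, dx \, dy = \frac{1}{2i} \oint_{\partial D} \frac{dz}{z} = \frac{1}{2i} (2\pi i) = \pi$$

So the ‘derivative’ vanishes everywhere except the origin, yet integrates to $\pi$, which is exactly what $\pi \, \delta^2(z)$ does.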

Importantly, $\frac{\partial}{\partial \bar{z}} \frac{1}{z^n} = \pi\, \delta^2(z)$ is only true for $n = 1$; higher powers of $\frac{1}{z}$ give derivatives of the delta function instead, which integrate to zero. This property gives rise to the entire method of residues, because if $f(z) = \frac{c_{-1}}{z - z_0} + g(z)$, where $g$ has no terms of order $\frac{1}{z - z_0}$, then integrating a contour around a region which contains $z_0$ gives, via Stokes’ theorem:

$$\oint f(z) \, dz = 2i \iint \frac{\partial f}{\partial \bar{z}} \, dx \, dy = 2i \iint \pi \, c_{-1} \, \delta^2(z - z_0) \, dx \, dy = 2\pi i \, c_{-1}$$

(If the derivative $\frac{\partial f}{\partial \bar{z}}$ isn’t $0$, you get the Cauchy-Pompeiu formula for contour integrals immediately.)
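For a concrete instance of the residue machinery, expand the integrand in a Laurent series and keep only the $\frac{1}{z}$ term:

$$\oint_{|z|=1} \frac{e^z}{z} \, dz = \oint_{|z|=1} \left(\frac{1}{z} + 1 + \frac{z}{2} + \cdots\right) dz = 2\pi i$$

since only the $\frac{1}{z}$ term (here with $c_{-1} = 1$) survives the integration.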

By the way: Fourier series are closely related to contour integrals, and thus to complex Taylor series. You can change variables to $z = e^{i\theta}$ to write $\frac{1}{2\pi i} \oint \frac{f(z)}{z^{n+1}} \, dz$ as $\frac{1}{2\pi} \int_0^{2\pi} f(e^{i\theta}) \, e^{-in\theta} \, d\theta$, which is clearly a Fourier transform for suitable $f$.
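Concretely, on the unit circle a complex Taylor (or Laurent) series just becomes a Fourier series:

$$f(z) = \sum_n c_n z^n \quad\longrightarrow\quad f(e^{i\theta}) = \sum_n c_n e^{in\theta}$$

with the contour integral above playing the role of the usual formula for the Fourier coefficients $c_n$.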