Algebraic derivatives

In commutative algebra and algebraic geometry, a common operation is to take derivatives of polynomials. This would seem a fairly straightforward thing to do, but in commutative algebra/geometry, we study polynomials over arbitrary rings and fields, and the good old derivative from standard calculus is quite dependent on the metric structure of \mathbf{R} or \mathbf{C}, since it is a limit as \Delta x goes to 0. Arbitrary rings and fields don’t come with metric structures; there is no way to define what “going to 0” means. Or is there?

The derivative operator on polynomials in A[x] is

    \[\frac{\mathrm{d}}{\mathrm{d}x}\colon rx^n \mapsto nrx^{n-1}\]

on a monomial, and is defined on all of A[x] by linear extension. The definition extends to formal power series (term by term) and to rational functions (via the quotient rule), just as in calculus.
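As a concrete sketch (the coefficient-list representation and function name are mine, not from the post), the formal derivative is easy to compute over any ring whose elements Python can represent:

```python
# A sketch: the formal derivative on A[x], representing a polynomial
# sum a_k x^k as its coefficient list [a_0, a_1, ..., a_n].
# The coefficients can live in any ring with a Python representation,
# e.g. int for Z or Fraction for Q.

def formal_derivative(coeffs):
    """Map [a_0, a_1, ..., a_n] to [1*a_1, 2*a_2, ..., n*a_n]."""
    return [k * a for k, a in enumerate(coeffs)][1:] or [0]

# f(x) = 1 + 3x + 5x^2  ->  f'(x) = 3 + 10x
print(formal_derivative([1, 3, 5]))  # [3, 10]
```

Note that no limit appears anywhere: the rule rx^n \mapsto nrx^{n-1} is applied purely formally.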

Over an arbitrary field, we’re no longer measuring the slope of a curve or an instantaneous rate of change, but this is still a surprisingly useful operation. It can be used to check for repeated roots: f has no repeated roots (in an algebraic closure) precisely when f and f' are coprime, which is also the criterion for f to be separable. The notion extends to the concept of derivations — linear maps that obey the product rule from calculus — and there is a whole field of differential algebra that studies commutative rings equipped with derivations. Moreover, derivations on the local ring at a point can also be used to define tangent spaces in algebraic geometry, as they are in differential geometry.
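To illustrate the repeated-root test (a sketch over \mathbf{Q}; all helper names are mine), one can run the Euclidean algorithm on f and f' and ask whether the gcd is non-constant:

```python
from fractions import Fraction

# Sketch: f has a repeated root iff gcd(f, f') is non-constant.
# Polynomials are coefficient lists, lowest degree first.

def derivative(coeffs):
    return [k * a for k, a in enumerate(coeffs)][1:] or [Fraction(0)]

def poly_mod(a, b):
    """Remainder of a divided by b (b assumed to have nonzero lead)."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        factor = a[-1] / b[-1]          # kill the leading term of a
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= factor * c
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_mod(a, b)
    return a

def has_repeated_root(coeffs):
    g = poly_gcd(coeffs, derivative(coeffs))
    return len(g) > 1                   # non-constant gcd

# x^2 - 2x + 1 = (x - 1)^2 has a repeated root; x^2 - 1 does not.
f = [Fraction(1), Fraction(-2), Fraction(1)]
g = [Fraction(-1), Fraction(0), Fraction(1)]
print(has_repeated_root(f), has_repeated_root(g))  # True False
```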

One of the nice things about algebraic differentiation is that many basic facts can be proved by induction on the degree of the polynomial (a polynomial of degree n+1 is just xf(x) + c where f(x) has degree n), or by a double induction on the degree and the number of variables if you’re working over A[x_1, ..., x_n].
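For instance, the monomial rule already pins down how the derivative interacts with this decomposition: writing f(x) = \sum_k a_k x^k, we have

    \[ \frac{\mathrm{d}}{\mathrm{d}x}\bigl(xf(x) + c\bigr) = \sum_k (k+1)a_k x^k = f(x) + xf'(x), \]

which is exactly the identity that the induction step feeds on.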

For example, consider the ring A[x, h], where h is another indeterminate with no relations. If the factorials appearing below are invertible in A (as they are in characteristic 0), we get an algebraic version of Taylor’s theorem:

    \[ f(x + h) = f(x) + f'(x)h + \frac{f''(x)}{2!}h^2 + \dots + \frac{f^{(n)}(x)}{n!}h^n.\]
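The identity can be machine-checked for a sample f over \mathbf{Q} (a sketch; the helpers are my own): expand f(x + h) with the binomial theorem and compare the coefficient of h^k against f^{(k)}(x)/k!.

```python
from fractions import Fraction
from math import comb, factorial

# Sketch: verify the algebraic Taylor formula for a sample f over Q.
# Polynomials in x are coefficient lists, lowest degree first.

def kth_derivative(coeffs, k):
    for _ in range(k):
        coeffs = [j * a for j, a in enumerate(coeffs)][1:] or [Fraction(0)]
    return coeffs

def h_coefficient(coeffs, k):
    """Coefficient of h^k in f(x + h), as a polynomial in x."""
    # (x + h)^n contributes C(n, k) x^(n-k) h^k
    out = [Fraction(0)] * max(len(coeffs) - k, 1)
    for n, a in enumerate(coeffs):
        if n >= k:
            out[n - k] += a * comb(n, k)
    return out

# f(x) = 1 + 2x + 3x^3
f = [Fraction(c) for c in (1, 2, 0, 3)]
for k in range(4):
    lhs = h_coefficient(f, k)
    rhs = [a / factorial(k) for a in kth_derivative(f, k)]
    size = max(len(lhs), len(rhs))      # pad to equal length
    pad = lambda p: p + [Fraction(0)] * (size - len(p))
    assert pad(lhs) == pad(rhs)
print("Taylor coefficients match for k = 0..3")
```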

This can easily be proved by induction on the degree of f(x). Now we can “let h go to 0”. What is the algebraic interpretation of this? Well, it harks back to the notion of “infinitesimals” from early analysis and from non-standard analysis. We quotient out by h^2, so that in A[x,h]/(h^2) we have that h is now “so small” that its square is 0. We can now define the algebraic derivative f'(x) to be the unique solution to the equation

    \[ f(x + h) - f(x) \equiv f'(x)h \pmod{h^2}.\]
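This quotient is easy to compute in. As a sketch (the class and function names are mine), store a + bh as a pair subject to h^2 = 0; evaluating f at x_0 + h, with x specialized to a sample value for simplicity, puts f'(x_0) in the h-component:

```python
from fractions import Fraction

# Sketch: arithmetic in A[h]/(h^2), storing u = a + b*h as the pair (a, b).
# Evaluating a polynomial at x0 + h then yields f(x0) + f'(x0)*h, so the
# derivative sits in the h-component.

class ModHSquared:
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return ModHSquared(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1 h)(a2 + b2 h) = a1 a2 + (a1 b2 + b1 a2) h, since h^2 = 0
        return ModHSquared(self.a * other.a,
                           self.a * other.b + self.b * other.a)

def evaluate(coeffs, u):
    """Horner evaluation of a coefficient list at u in A[h]/(h^2)."""
    result = ModHSquared(0)
    for c in reversed(coeffs):
        result = result * u + ModHSquared(c)
    return result

# f(x) = 1 + 2x + x^3 at x0 = 3:  f(3) = 34,  f'(3) = 29
v = evaluate([1, 2, 0, 1], ModHSquared(3, 1))  # evaluate at 3 + h
print(v.a, v.b)  # 34 29
```

This is, incidentally, the same idea behind forward-mode automatic differentiation with dual numbers.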

I find this really pretty. And in fact, we can do one better: we can divide this equation through by h (which always yields a polynomial, since h divides both sides), and then let h truly “go to 0” by quotienting again by h. We obtain that

    \[ \frac{f(x + h) - f(x)}{h} \equiv f'(x) \pmod{h}.\]
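The whole construction can also be carried out literally in A[x, h] (a sketch with A = \mathbf{Z}; helper names are mine): expand f(x + h), subtract f(x), divide by h, then pass to the quotient by (h):

```python
from collections import defaultdict
from math import comb

# Sketch of the construction in A[x, h].  Bivariate polynomials are
# dicts mapping (i, j) -> coefficient of x^i h^j.

def shift(coeffs):
    """Expand f(x + h) for f given as a coefficient list in x."""
    out = defaultdict(int)
    for n, a in enumerate(coeffs):
        for k in range(n + 1):          # (x + h)^n via the binomial theorem
            out[(n - k, k)] += a * comb(n, k)
    return out

def difference_quotient(coeffs):
    """(f(x + h) - f(x)) / h; exact, since h divides the numerator."""
    num = shift(coeffs)
    for i, a in enumerate(coeffs):      # subtract f(x)
        num[(i, 0)] -= a
    return {(i, j - 1): c for (i, j), c in num.items() if c != 0}

def set_h_to_zero(poly):
    """Image in A[x, h]/(h): keep the h^0 part, as a list in x."""
    degree = max((i for (i, j) in poly if j == 0), default=0)
    out = [0] * (degree + 1)
    for (i, j), c in poly.items():
        if j == 0:
            out[i] = c
    return out

# f(x) = 1 + 2x + x^3:  f'(x) = 2 + 3x^2
print(set_h_to_zero(difference_quotient([1, 2, 0, 1])))  # [2, 0, 3]
```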

Side note: I’m aware that a very similar post to this appeared on reddit’s /r/math this week. This is coincidental; I actually wrote this post before the Snake Lemma one.