Explicit Sylow theory for $GL_n(\mathbb{F}_p)$
Our starting point is the following.
Baby Lie-Kolchin: Let $G$ be a finite $p$-group acting linearly on a finite-dimensional vector space $V \neq 0$ over $\mathbb{F}_p$. Then $G$ fixes a nonzero vector; equivalently, $V$ has a trivial subrepresentation.
Proof. Since $V \neq 0$ there are $|V| - 1 \equiv -1 \bmod p$ nonzero vectors in $V$, so by the PGFPT $G$ fixes at least one of them (in fact at least $p - 1$, but these are just given by scalar multiplication).
Now we can argue as follows. If $G$ is a finite $p$-group acting on an $n$-dimensional vector space $V$ over $\mathbb{F}_p$ (equivalently, up to isomorphism, a finite $p$-subgroup of $GL_n(\mathbb{F}_p)$), it fixes some nonzero vector $v_1$. Writing $V_1 = \operatorname{span}(v_1)$, we get a quotient representation on $V/V_1$, on which $G$ fixes some nonzero vector, which we lift to a vector $v_2 \in V$, necessarily linearly independent from $v_1$, such that $G$ acts upper triangularly on $V_2 = \operatorname{span}(v_1, v_2)$.
Continuing in this way we get a sequence of linearly independent vectors $v_1, v_2, \dots, v_n$ (hence a basis of $V$) and an increasing sequence $V_1 \subset V_2 \subset \dots \subset V_n = V$ of subspaces of $V$ that $G$ leaves invariant, satisfying the additional condition that $G$ fixes each quotient $V_i/V_{i-1}$. The subspaces $V_i$ form a complete flag in $V$, and writing the elements of $G$ as matrices with respect to the basis $v_1, \dots, v_n$, we see that the conditions that $G$ leaves each $V_i$ invariant and fixes each $V_i/V_{i-1}$ say exactly that $G$ acts by upper triangular matrices with $1$s on the diagonal in this basis.
Conjugating back to the standard basis, we’ve proven:
Proposition: Every $p$-subgroup of $GL_n(\mathbb{F}_p)$ is conjugate to a subgroup of the unipotent subgroup $U$ of upper triangular matrices with $1$s on the diagonal.
This is almost a proof of Sylow I and II for $GL_n(\mathbb{F}_p)$ (albeit at the prime $p$ only), but because we defined Sylow $p$-subgroups to be $p$-subgroups having index coprime to $p$, we’ve only established that $U$ is maximal, not that it’s Sylow.
We can show that it’s Sylow by explicitly dividing its order into the order of $GL_n(\mathbb{F}_p)$, but there’s a more conceptual approach that will teach us more. Previously we proved the normalizer criterion: a $p$-subgroup $P$ of a group $G$ is Sylow iff the quotient $N_G(P)/P$ has no elements of order $p$.
Claim: The normalizer of $U$ is the Borel subgroup $B$ of upper triangular matrices (with no restrictions on the diagonal). The quotient $B/U$ is the torus $T \cong (\mathbb{F}_p^{\times})^n$ of diagonal matrices and in particular has no elements of order $p$.
Corollary (Explicit Sylow I and II for $GL_n(\mathbb{F}_p)$): $U$ is a Sylow $p$-subgroup of $GL_n(\mathbb{F}_p)$.
Proof. The normalizer $N(U)$ is the stabilizer of $U$ under $GL_n(\mathbb{F}_p)$ acting by conjugation on the set of conjugates of $U$. We want to show that $N(U) = B$, which would mean that the action of $GL_n(\mathbb{F}_p)$ on the conjugates of $U$ can be identified with the quotient $GL_n(\mathbb{F}_p)/B$.
This quotient is the complete flag variety: it can be identified with the action of $GL_n(\mathbb{F}_p)$ on the set of complete flags in $\mathbb{F}_p^n$, since the action on flags is transitive and the stabilizer of the standard flag $V_1 \subset V_2 \subset \dots \subset V_n$ is exactly $B$. So it suffices to exhibit a $GL_n(\mathbb{F}_p)$-equivariant bijection between conjugates of $U$ and complete flags which sends $U$ to the standard flag, since then their stabilizers must agree.
But this is clear: given a complete flag $W_1 \subset W_2 \subset \dots \subset W_n$ we can consider the subgroup of $GL_n(\mathbb{F}_p)$ which preserves the flag (so $g W_i = W_i$) and which has the additional property that the induced action on $W_i/W_{i-1}$ is trivial for every $i$. This produces $U$ when applied to the standard flag, so produces conjugates of $U$ when applied to all flags. In the other direction, given a conjugate $gUg^{-1}$ of $U$, it has a unique $1$-dimensional invariant subspace $W_1$ (the fixed subspace of $gUg^{-1}$ acting on $\mathbb{F}_p^n$); quotienting by this subspace produces a unique $1$-dimensional invariant subspace $W_2/W_1$ (the fixed subspace of $gUg^{-1}$ acting on $\mathbb{F}_p^n/W_1$), etc.; this produces the standard flag when applied to $U$, so produces $g$ applied to the standard flag when applied to conjugates $gUg^{-1}$ of $U$. So we get our desired $GL_n(\mathbb{F}_p)$-equivariant bijection between conjugates of $U$ and complete flags, establishing $N(U) = B$ as desired.
(This argument works over any field.)
From here it’s not hard to also prove
Explicit Sylow III for $GL_n(\mathbb{F}_p)$: The number $n_p$ of conjugates of $U$ in $GL_n(\mathbb{F}_p)$ divides the order of $GL_n(\mathbb{F}_p)$ and is congruent to $1 \bmod p$.
Proof. Actually we can compute $n_p$ exactly: we established above that it’s the number of complete flags in $\mathbb{F}_p^n$ (on which $GL_n(\mathbb{F}_p)$ acts transitively, hence the divisibility relation), and a classic counting argument (count the number of possibilities for $V_1$, then for $V_2$, etc.) gives the $p$-factorial
$$[n]_p! = \prod_{k=1}^{n} (1 + p + p^2 + \dots + p^{k-1}),$$
which is clearly congruent to $1 \bmod p$.
We could also have done this by dividing the order of $GL_n(\mathbb{F}_p)$ by the order of the Borel subgroup $B$, but again, doing it this way we learn more, and in fact we get an independent proof of the formula
$$|GL_n(\mathbb{F}_p)| = p^{\binom{n}{2}} \cdot (p - 1)^n \cdot [n]_p!$$
for the order of $GL_n(\mathbb{F}_p)$, where all three factors acquire clear interpretations: the first factor is the order of the unipotent subgroup $U$, the second factor is the order of the torus $T$, and the third factor is the size of the flag variety $GL_n(\mathbb{F}_p)/B$.
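This factorization is easy to check numerically. The following short script is my own addition (it assumes Python 3 with just the standard library, and the function names are mine): it verifies $|GL_n(\mathbb{F}_p)| = |U| \cdot |T| \cdot [n]_p!$ and the congruence $[n]_p! \equiv 1 \bmod p$ for a few small values of $n$ and $p$.

```python
from math import prod, comb

def order_gl(n, p):
    # |GL_n(F_p)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1))
    return prod(p**n - p**k for k in range(n))

def num_flags(n, p):
    # number of complete flags in F_p^n: the p-factorial, a product of (1 + p + ... + p^(k-1))
    return prod(sum(p**i for i in range(k)) for k in range(1, n + 1))

for n, p in [(2, 2), (3, 3), (4, 5)]:
    U = p ** comb(n, 2)        # order of the unipotent subgroup
    T = (p - 1) ** n           # order of the torus
    F = num_flags(n, p)        # size of the flag variety
    assert order_gl(n, p) == U * T * F
    assert F % p == 1          # explicit Sylow III: the number of conjugates of U is 1 mod p
print("order and flag-count identities check out")
```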
What is going on in these proofs?
Let’s take a step back and compare these explicit proofs of the Sylow theorems for $GL_n(\mathbb{F}_p)$ to the general proofs of the Sylow theorems. The first three proofs we gave of Sylow I (reduction to $S_n$, reduction to $GL_n(\mathbb{F}_p)$, action on subsets) all proceed by finding some clever way to get a finite group $G$ to act on a finite set $X$ with the following two properties:
- $p \nmid |X|$. By the PGFPT this means any $p$-subgroups of $G$ act on $X$ with fixed points, so we can look for Sylow $p$-subgroups in the stabilizers $G_x$.
- The stabilizers of the action of $G$ on $X$ are $p$-groups. Combined with the first property, this means that at least one stabilizer must be a Sylow $p$-subgroup, since at least one stabilizer must have index coprime to $p$.
In fact finding a transitive such action is exactly equivalent to finding a Sylow $p$-subgroup $P$, since it must then be isomorphic to the action of $G$ on $G/P$. The nice thing, which we make good use of in the proofs, is that we don’t need to restrict our attention to transitive actions, because the condition that $p \nmid |X|$ has the pleasant property that if it holds for $X$ then it must hold for at least one of the orbits of the action of $G$ on $X$. (This is a kind of “mod $p$ pigeonhole principle.”)
In the explicit proof for $GL_n(\mathbb{F}_p)$ we find $X$ by starting with the action of $GL_n(\mathbb{F}_p)$ on the set of nonzero vectors $v_1 \in \mathbb{F}_p^n$, which satisfies the first condition but not the second, and repeatedly “extending” the action (to pairs of a nonzero vector $v_1$ and a nonzero vector in the quotient $\mathbb{F}_p^n/\operatorname{span}(v_1)$, etc.) until we arrived at an action satisfying both conditions, namely the action of $GL_n(\mathbb{F}_p)$ on the set of tuples of a nonzero vector $v_1 \in \mathbb{F}_p^n$, a nonzero vector $\overline{v}_2 \in \mathbb{F}_p^n/\operatorname{span}(v_1)$, a nonzero vector $\overline{v}_3 \in \mathbb{F}_p^n/\operatorname{span}(v_1, v_2)$, etc. (a slightly decorated version of a complete flag).
The next question we’ll address in Part III is: can we do something similar for ?
The $p$-group fixed point theorem (PGFPT): If $G$ is a finite $p$-group and $X$ is a finite set on which $G$ acts, then the subset $X^G$ of fixed points satisfies
$$|X^G| \equiv |X| \bmod p.$$
In particular, if $|X| \not\equiv 0 \bmod p$ then this action has at least one fixed point.
There will be some occasional historical notes taken from Waterhouse’s The Early Proofs of Sylow’s Theorem.
A quick note on the definition of a Sylow subgroup
A Sylow $p$-subgroup of a finite group $G$ can be equivalently defined as either a $p$-subgroup of index coprime to $p$ (that is, if $|G| = p^k m$ where $p \nmid m$, then a subgroup of order $p^k$) or a $p$-subgroup which is maximal with respect to inclusion. The equivalence of these two definitions requires the Sylow theorems, but we’ll want to discuss such subgroups before proving them, and for the purposes of this post it will be most convenient to adopt the first definition.
So for the rest of this post, a Sylow $p$-subgroup is a $p$-subgroup of index coprime to $p$, and we’ll refer to $p$-subgroups maximal with respect to inclusion as maximal $p$-subgroups before we prove that the two are equivalent.
Warmup: Cauchy’s theorem
We’ll start off by giving some proofs of Cauchy’s theorem. One of the proofs of Sylow I later needs it as a lemma, and some of the proofs we give here will generalize to proofs of Sylow I. First we quote the proof using the PGFPT in full from the above blog post, as Proof 1.
Cauchy’s theorem: Let $G$ be a finite group. Suppose $p$ is a prime dividing $|G|$. Then $G$ has an element of order $p$.
Proof 1 (PGFPT). $\mathbb{Z}/p$ acts by cyclic shifts on the set
$$X = \{ (g_1, g_2, \dots, g_p) \in G^p : g_1 g_2 \cdots g_p = e \} \setminus \{ (e, e, \dots, e) \}$$
since if $g_1 g_2 \cdots g_p = e$ then $g_2 g_3 \cdots g_p g_1 = e$. This set has cardinality $|G|^{p-1} - 1$, which is not divisible by $p$, hence the action has a fixed point, which is precisely a nontrivial element $g$ such that $g^p = e$.
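Here is a brute-force illustration of Proof 1 (my own toy example, not part of the original argument; it assumes Python 3 with just the standard library): take $G = S_3$ and $p = 3$, enumerate the tuples with product the identity, delete the all-identity tuple, and look at the fixed points of the cyclic shift.

```python
from itertools import permutations, product

G = list(permutations(range(3)))                         # S_3 as permutation tuples
e = tuple(range(3))
compose = lambda f, g: tuple(f[g[i]] for i in range(3))  # (f*g)(i) = f(g(i))

p = 3
X = [t for t in product(G, repeat=p)
     if t != (e,) * p and compose(compose(t[0], t[1]), t[2]) == e]
assert len(X) == len(G) ** (p - 1) - 1                   # |G|^(p-1) - 1, not divisible by p
fixed = [t for t in X if t[1:] + t[:1] == t]             # fixed points of the cyclic shift
print(len(X) % p, [t[0] for t in fixed])                 # nonzero residue; the two 3-cycles
```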
This proof is almost disgustingly short. By contrast, according to Waterhouse, Cauchy’s original proof runs nine pages (!) and works as follows.
First Cauchy explicitly constructs Sylow $p$-subgroups of the symmetric group $S_n$. This can be done by recursively partitioning $\{1, 2, \dots, n\}$ into $\lfloor n/p \rfloor$ blocks of size $p$, considering the subgroup generated by $p$-cycles cyclically permuting each block, then partitioning the set of blocks itself into $\lfloor n/p^2 \rfloor$ blocks-of-blocks of size $p$, cyclically permuting these, and so forth. This subgroup has size
$$p^{\lfloor n/p \rfloor + \lfloor n/p^2 \rfloor + \lfloor n/p^3 \rfloor + \cdots}$$
and the exponent is Legendre’s formula for the largest power of $p$ dividing $n!$. It follows that this is a Sylow $p$-subgroup.
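As a quick sanity check (my own, assuming plain Python 3), Legendre’s formula can be compared against the exact power of $p$ dividing $n!$ computed directly:

```python
from math import factorial

def legendre(n, p):
    # sum of floor(n / p^i) over i >= 1
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def nu(m, p):
    # exact exponent of p in the factorization of m
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

for n in range(1, 40):
    for p in (2, 3, 5, 7):
        assert legendre(n, p) == nu(factorial(n), p)
print("Legendre's formula verified for small n")
```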
Second, Cauchy proves a special case of the following lemma, namely the special case that $G$ is a symmetric group. This lemma doesn’t appear to have a standard name, so we’ll name it
Cauchy’s $p$-group lemma: Let $H$ be a subgroup of a finite group $G$ such that $G$ has a Sylow $p$-subgroup $P$. If $p$ divides $|H|$, then $H$ has an element of order $p$.
Proof. We’ll prove the contrapositive: if $H$ doesn’t contain an element of order $p$, then $p$ does not divide $|H|$.
Consider the left action of $H$ on the set of cosets $G/P$. By grouping the cosets into $H$-orbits and using the orbit-stabilizer theorem we have
$$[G : P] = \sum_{HgP} [H : H \cap gPg^{-1}]$$
where the sum runs over the collection $H \backslash G / P$ of double cosets and the stabilizer of a coset $gP$ is $H \cap gPg^{-1}$, and in particular it has order dividing $|P|$, hence a power of $p$.
If $H$ has no element of order $p$, then since any nontrivial subgroup of a conjugate of $P$ contains an element of order $p$, these stabilizers are all trivial, so we get that
$$[G : P] = |H \backslash G / P| \cdot |H|$$
and in particular that $|H|$ divides $[G : P]$. Since $P$ is a Sylow $p$-subgroup, $[G : P]$ isn’t divisible by $p$, and hence neither is $|H|$.
Call a collection of finite groups cofinal if every finite group embeds into at least one of them.
Corollary: To prove Cauchy’s theorem for all finite groups, it suffices to find a cofinal collection of finite groups which have Sylow $p$-subgroups.
We can now complete Cauchy’s proof of Cauchy’s theorem.
Proof 2 (symmetric groups). The symmetric groups $S_n$ are cofinal (this is exactly Cayley’s theorem) and have Sylow $p$-subgroups.
But we don’t have to use the symmetric groups!
Proof 3 (general linear groups). Since the symmetric groups embed into the general linear groups $GL_n(\mathbb{F}_p)$ (via permutation matrices), the latter are also cofinal, and we can also exhibit Sylow $p$-subgroups for them: the unipotent subgroup $U$ of upper triangular matrices with $1$s on the diagonal is Sylow. This follows from the standard calculation of the order
$$|GL_n(\mathbb{F}_p)| = (p^n - 1)(p^n - p) \cdots (p^n - p^{n-1}) = p^{\binom{n}{2}} (p - 1)^n \, [n]_p!$$
where $[n]_p!$ is the $p$-factorial.
Here’s a fourth proof that doesn’t use Cauchy’s lemma and doesn’t require the clever construction of the first proof; Waterhouse says it’s used by Rotman.
Proof 4 (class equation). We induct on the order of $G$. First we observe that Cauchy’s theorem is straightforwardly true for finite abelian groups by using the Chinese remainder theorem to write a finite abelian group as a direct sum of finite abelian $p$-groups. (And then, to spell it out very explicitly, Lagrange’s theorem implies that any nontrivial element of a finite $p$-group has $p$-power order, so some power of that element has order $p$.)
Next, given a finite group $G$ such that $p$ divides $|G|$, we consider the class equation
$$|G| = |Z(G)| + \sum_i [G : Z(g_i)]$$
where the sum runs over non-central conjugacy classes, with representatives $g_i$ and centralizers $Z(g_i)$. If $p$ divides $|Z(G)|$ then we’re done by reduction to the abelian case. Otherwise, $p$ doesn’t divide $|Z(G)|$ but does divide $|G|$, so by taking both sides $\bmod p$ we conclude that there is some $g_i$ such that the centralizer $Z(g_i)$ has index in $G$ coprime to $p$, and hence $p$ divides $|Z(g_i)|$. Since $g_i$ lies in a non-central conjugacy class, $Z(g_i)$ has order strictly less than $|G|$, so has an element of order $p$ by the inductive hypothesis.
Sylow I
With very little additional effort the above proof of Cauchy’s $p$-group lemma actually proves the following stronger result; Waterhouse attributes this argument, again in the special case that $G$ is a symmetric group, to Frobenius. It also doesn’t appear to have a standard name, so we’ll call it
Frobenius’ $p$-group lemma: Let $H$ be a subgroup of a finite group $G$ such that $G$ has a Sylow $p$-subgroup $P$. Then $H$ also has a Sylow $p$-subgroup, given by its intersection with some conjugate of $P$.
Proof. Consider as before the sum
$$[G : P] = \sum_{HgP} [H : H \cap gPg^{-1}]$$
obtained by considering the orbits of the action of $H$ on $G/P$. Since $p$ doesn’t divide $[G : P]$, at least one term on the RHS is also not divisible by $p$, so there exists some $g$ such that $H \cap gPg^{-1}$ has index in $H$ coprime to $p$ and hence is a Sylow $p$-subgroup of $H$; in other words, the intersection of $H$ with the conjugate $gPg^{-1}$ of $P$ is a Sylow $p$-subgroup of $H$ as desired.
Corollary: To prove that Sylow subgroups exist for all finite groups, it suffices to prove that they exist for a cofinal collection of finite groups.
Now we can give two different proofs of Sylow I, as follows.
Sylow I: Every finite group $G$ has a Sylow $p$-subgroup.
(Note that with our definitions if $p \nmid |G|$ then this subgroup is just the trivial group.)
Proof 1 (symmetric groups). It suffices to prove the existence of Sylow subgroups for the symmetric groups $S_n$, which was done above.
Proof 2 (general linear groups). It suffices to prove the existence of Sylow $p$-subgroups for $GL_n(\mathbb{F}_p)$, which was done above. Running the argument gives that every finite $p$-group is a subgroup of the unipotent subgroup $U \subseteq GL_n(\mathbb{F}_p)$ for some $n$, which gives an independent proof that finite $p$-groups are nilpotent.
Proof 2 is the first proof of Sylow I given by Serre in his Finite Groups: An Introduction.
I heard a rumor years ago that these proofs existed but didn’t see them; glad that’s finally settled.
The next proof of Sylow I we’ll present is the one that I saw as an undergraduate, in Artin’s Algebra if memory serves. According to Wikipedia it’s due to Wielandt. Serre attributes it to “Miller-Wielandt” and it’s the second proof he gives.
Proof 3 (action on subsets). Write $|G| = p^k m$ where $p \nmid m$ and consider the action of $G$ by left multiplication on the set $X$ of $p^k$-element subsets of $G$. We’ll show that there exists a stabilizer subgroup of size $p^k$.
(I found this construction very bizarre and unmotivated as an undergraduate. I like it better now, I guess, but I’m still a little mystified.)
Key observation: An element $g \in G$ stabilizes a subset $S \subseteq G$ iff $S$ consists of cosets of the cyclic subgroup $\langle g \rangle$ generated by $g$; in particular a necessary condition is that its order must divide the size of $S$.
This means that the action of $G$ on $X$ has the property that all stabilizers must be $p$-groups (just like the action of $H$ on $G/P$ we used above). As before, decomposing this action into orbits gives
$$|X| = \binom{p^k m}{p^k} = \sum_{S} [G : G_S]$$
where the sum runs over a set of orbit representatives.
Lemma: $\binom{p^k m}{p^k} \equiv m \bmod p$.
Proof. This is a special case of Lucas’ theorem, which we proved using the PGFPT previously.
It follows that $|X|$ is not divisible by $p$, so there exists some subset $S$ such that $[G : G_S]$ is also not divisible by $p$. But since $|G_S|$ is a power of $p$ this is only possible if $|G_S| = p^k$. This stabilizer is a Sylow $p$-subgroup as desired.
This proof is tantalizingly similar to the proof of Frobenius’ lemma above; in both cases the key is to write down an action of a finite group on a set of size relatively prime to $p$ but such that all stabilizers must be $p$-groups. But I’m not yet sure how to relate them.
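The congruence $\binom{p^k m}{p^k} \equiv m \bmod p$ used in Proof 3 is also easy to check numerically; here is a minimal sketch (mine, assuming plain Python 3):

```python
from math import comb

for p in (2, 3, 5):
    for k in range(4):
        for m in range(1, 8):
            if m % p == 0:
                continue  # the proof applies the lemma with m coprime to p
            assert comb(p**k * m, p**k) % p == m % p
print("binomial congruence verified")
```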
We’re not done with proofs of Sylow I! The next proof is due to Frobenius; Waterhouse stresses that its early use of quotients crucially highlighted the importance of working with abstract groups rather than just groups of substitutions as was standard in early group theory. It generalizes the class equation proof of Cauchy’s theorem above.
Proof 4 (class equation). We again induct on the order of $G$ and start by observing that Sylow subgroups clearly exist for finite abelian groups, for example by the structure theorem. We can also use the Chinese remainder theorem.
Now, suppose that $p$ divides $|G|$. If $p$ also divides the order $|Z(G)|$ of the center, then $Z(G)$ contains a (central, abelian, nontrivial) Sylow $p$-subgroup $P_0$ (Sylow in $Z(G)$, that is), which we can quotient by. Then $G/P_0$, by the inductive hypothesis, also contains a Sylow $p$-subgroup, and the preimage of this subgroup is a Sylow $p$-subgroup of $G$.
If $p$ doesn’t divide $|Z(G)|$, then again we inspect the class equation
$$|G| = |Z(G)| + \sum_i [G : Z(g_i)]$$
where again the sum runs over non-central conjugacy classes. Since $|Z(G)| \not\equiv 0 \bmod p$ we conclude that there is some conjugacy class, with representative $g_i$, such that $[G : Z(g_i)] \not\equiv 0 \bmod p$, hence such that $|Z(g_i)|$ and $|G|$ have the same $p$-adic valuation. By the inductive hypothesis, $Z(g_i)$ contains a Sylow $p$-subgroup, which is then (by virtue of having the appropriate order) also a Sylow $p$-subgroup of $G$.
This proof is the first one we’ve seen that provides a halfway reasonable algorithm for finding Sylow subgroups, which is lovely (the first three are more or less just raw existence proofs): pass to the quotient by central $p$-subgroups until you can’t anymore, then search for conjugacy classes whose centralizers have index relatively prime to $p$ and pass to the centralizers until you can’t anymore, then repeat.
We give one final proof. According to Waterhouse, this one is essentially Sylow’s original proof, and it naturally proves an apparently-slightly-stronger result, interpolating between Cauchy’s theorem and Sylow I. It also provides a (different) algorithm for finding Sylow subgroups.
Sylow I+: Every finite group $G$ contains a subgroup of prime power order $p^i$ whenever $p^i$ divides $|G|$. These subgroups can be constructed inductively by starting from an element of order $p$ and then finding an element of $p$-power order normalizing the previous subgroup.
(Note that this result is equivalent to Sylow I as usually stated once we know that finite $p$-groups admit composition series in which each quotient is $\mathbb{Z}/p$, which is not hard to prove by induction once we know that finite $p$-groups have nontrivial center, which in turn we proved previously using the PGFPT. The below proof actually gives an independent proof of the existence of such a composition series, and hence an independent proof that finite $p$-groups are nilpotent.)
Proof 5 (normalizers + PGFPT). We induct on $i$. The base case $i = 1$ is Cauchy’s theorem, which we proved above (four times!).
Now suppose we’ve found a subgroup $P$ of order $p^i$, and we’d like to see if we can find a subgroup of order $p^{i+1}$. Consider the action of $P$ by left multiplication on the cosets $G/P$. By the PGFPT the number of fixed points of this action satisfies
$$|(G/P)^P| \equiv [G : P] \bmod p.$$
On the other hand, the fixed points of this action are precisely the cosets $gP$ such that $PgP = gP$, or equivalently such that $g^{-1}Pg = P$. So they are naturally in bijection with the quotient $N_G(P)/P$ of the normalizer $N_G(P)$ of $P$. So the PGFPT tells us:
Lemma: If $P$ is a $p$-subgroup of a finite group $G$, then
$$[N_G(P) : P] \equiv [G : P] \bmod p.$$
If $p$ divides $[G : P]$, or equivalently if $P$ is not a Sylow $p$-subgroup, then $[N_G(P) : P] \equiv 0 \bmod p$ and hence $p$ divides $|N_G(P)/P|$, so by Cauchy’s theorem it follows that $N_G(P)/P$ has an element $x$ of order $p$. The preimage of the cyclic subgroup $\langle x \rangle$ in $N_G(P)$ is then a $p$-subgroup of order $p^{i+1}$ as desired; more specifically it’s a subgroup $P'$ given by some extension
$$1 \to P \to P' \to \mathbb{Z}/p \to 1$$
where $x$ generates the copy of $\mathbb{Z}/p$. So we’ve found our bigger $p$-subgroup.
This proof gives us an algorithm for finding Sylow subgroups by starting with elements of order $p$ and repeatedly finding elements of order $p$ in the normalizer, and note that it also gives us a way to prove that a $p$-subgroup is Sylow without explicitly checking that its index is coprime to $p$:
Corollary (normalizer criterion): A $p$-subgroup $P$ of a finite group $G$ is a Sylow $p$-subgroup iff the quotient $N_G(P)/P$ contains no elements of order $p$.
It has another corollary we haven’t quite gotten around to proving yet:
Corollary (Sylow = maximal): Sylow $p$-subgroups are precisely the maximal $p$-subgroups.
(Before, it could have been the case that there were $p$-subgroups maximal with respect to inclusion but without index coprime to $p$.)
These are all the proofs of Sylow I that I know or could track down. After having gone through all of them I think I like proof 4 (class equation) and proof 5 (normalizers) the best. Proof 3 (action on subsets) is really too opaque and too specialized; you only learn from it that Sylow subgroups exist and practically nothing else, whereas proof 4 and proof 5 both teach you more about them, including how to find them. I’m sort of amazed it took me over 10 years after I first ran into the Sylow theorems to finally see a version of Sylow’s original proof, and that when I did, I liked it so much better than the first proof I had seen.
Sylow II and III
These will be short.
Sylow II: Let $G$ be a finite group and $P$ be any Sylow $p$-subgroup. Every $p$-subgroup $Q$ of $G$ is contained in a conjugate of $P$. In particular, taking $Q$ to be another Sylow $p$-subgroup, every pair of Sylow $p$-subgroups is conjugate.
Proof 1. Immediate corollary of Frobenius’ lemma.
Proof 2. Consider the action of $Q$ on $G/P$. By hypothesis, $[G : P] \not\equiv 0 \bmod p$, so by the PGFPT, $(G/P)^Q \neq \emptyset$.
A fixed point of the action of $Q$ on $G/P$ is precisely a coset $gP$ such that $QgP = gP$, or equivalently such that $g^{-1}Qg \subseteq P$, so there must be at least one such $g$ conjugating $Q$ into $P$ as desired.
Corollary (Sylow = maximal, again): Sylow $p$-subgroups are precisely the maximal $p$-subgroups.
(We’ve now proved this result in three different ways!)
Sylow III: Let $G$ be a finite group and $P$ be any Sylow $p$-subgroup. The number $n_p$ of Sylow $p$-subgroups of $G$ satisfies $n_p \mid |G|$ and $n_p \equiv 1 \bmod p$; in particular, $n_p$ divides $m$, where $|G| = p^k m$ as before.
Proof. Since the Sylow $p$-subgroups are conjugate, the set of Sylow $p$-subgroups is just the set of conjugates of $P$. $G$ acts transitively on this set by conjugation with stabilizer $N_G(P)$, so by orbit-stabilizer it has size $[G : N_G(P)]$ and in particular has size dividing $|G|$. We now give two proofs of the all-important congruence $n_p \equiv 1 \bmod p$.
Subproof 1 (action of $P$ on $G/P$). In Proof 5 (normalizers + PGFPT) of Sylow I above we showed that
$$[N_G(P) : P] \equiv [G : P] \bmod p$$
and dividing both sides by $[N_G(P) : P]$ (which is coprime to $p$, hence invertible $\bmod p$, since $P$ is Sylow) gives the desired result.
Subproof 2 (action of $P$ on the set of Sylow $p$-subgroups). This proof has the benefit of explaining what this congruence “means.” $[N_G(P) : P]$ can’t be divisible by $p$, so the only elements of $p$-power order that normalize $P$ are the elements of $P$, and the same must be true for any other Sylow. This implies that if we restrict the conjugation action of $G$ on the Sylows to $P$, then $P$ fixes only itself, so by the PGFPT
$$n_p \equiv 1 \bmod p$$
and the conclusion follows.
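To close the loop, here is a brute-force check of Sylow III in the toy case $G = S_4$, $p = 2$ (my own example, assuming Python 3 with just the standard library): generate one Sylow $2$-subgroup, count its conjugates, and confirm the divisibility and the congruence.

```python
from itertools import permutations

G = list(permutations(range(4)))                          # S_4, of order 24
compose = lambda f, g: tuple(f[g[i]] for i in range(4))
inverse = lambda f: tuple(sorted(range(4), key=lambda i: f[i]))

def closure(gens):
    # subgroup generated by gens, by repeatedly multiplying elements together
    H = {tuple(range(4))}
    frontier = set(gens)
    while frontier:
        H |= frontier
        frontier = {compose(a, b) for a in H for b in H} - H
    return H

P = closure([(1, 2, 3, 0), (2, 1, 0, 3)])                 # a dihedral subgroup of order 8
assert len(P) == 8                                        # Sylow, since 24 = 8 * 3
conjugates = {frozenset(compose(compose(g, h), inverse(g)) for h in P) for g in G}
n_2 = len(conjugates)
print(n_2, len(G) % n_2 == 0, n_2 % 2 == 1)               # 3, True, True
```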
Theorem 1: Universal lossless compression is impossible. That is, there is no function which takes as input finite strings (over some fixed alphabet) and always produces as output shorter finite strings (over the same alphabet) in such a way that the latter is recoverable from the former.
Supposedly people sometimes claim to be able to do this. It is as impossible as constructing a perpetual motion machine, if not more so, and for much more easily understandable reasons.
Proof. The problem is simply that there are not enough short strings. To be specific, let’s work with just binary strings. There are $2^n$ strings of length $n$, and $2^n - 1$ strings of length at most $n - 1$, which is smaller. Hence there’s no injective function from the set of binary strings of length $n$ to the set of binary strings of length at most $n - 1$, for any $n$. The problem only gets worse if $2$ is replaced by a larger number, or if we try to compress strings of length at most $n$ instead of strings of length exactly $n$.
Informally, another way to summarize the argument is that it takes $n$ bits to represent an arbitrary binary string of length $n$, and less than $n$ bits to represent an arbitrary binary string of length at most $n - 1$.
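The count behind this is tiny enough to spell out in code (my own illustration, assuming plain Python 3):

```python
# 2^n strings of length n, but only 2^n - 1 strings of length at most n - 1,
# so no injection from the former into the latter can exist.
for n in range(1, 12):
    exact = 2 ** n
    shorter = sum(2 ** k for k in range(n))   # lengths 0, 1, ..., n - 1
    assert shorter == exact - 1 < exact
print("there are always fewer shorter strings")
```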
This argument straightforwardly implies that any lossless compression algorithm which makes at least one of its inputs smaller necessarily makes at least one of its inputs larger, and more broadly draws attention to the set of inputs one might actually want to compress in practice. If you want to compress 1000×1000 images – strings of a million pixels – but you only want to compress, say, photos that humans might actually take, this is potentially much smaller than the set of all 1000×1000 images (for example, because most real images have the property that most of the time, a pixel is close in color to its neighbors), and so the theoretical lower bound on how well such images can be compressed might be much better.
More generally, the situation is improved if you have a probability distribution over inputs you want to compress with low entropy and you only want to compress well on average, which allows you to do something like Huffman coding. (This is a generalization because the uniform distribution over a set of size $2^n$ has entropy $n$.) Probably all probability distributions describing real-life data have this property. And this is before taking into account the possibility of lossy compression.
What I like about Theorem 1 is that it is both incredibly easy to prove and a surprisingly deep and important fact. Here is, for example, another very important result that is arguably just a corollary of Theorem 1.
Consider what is in some sense the “ultimate” lossless compression scheme, namely, given a binary string of length $n$, compress it to the shortest program, in some fixed programming language, which prints that string. The length of this program – let’s also write it as a binary string, using ASCII or equivalent – is called the Kolmogorov complexity of the string. Let’s call this Kolmogorov compression.
The sense in which Kolmogorov compression is the “ultimate” lossless compression scheme is that whatever decompression is, it surely ought to be computable or else we wouldn’t be able to do it (ignoring for the moment that Kolmogorov compression is uncomputable!), and any compression scheme whatsoever (computable or not) which compresses some complicated input string $x$ into a string of length $k$, together with the decompression program, constitutes a program which prints $x$, of size at most $k$ plus the size of the decompression program, which is a constant.
So the Kolmogorov complexity of a string is a lower bound, up to an additive constant, on the length of a string that compresses it, according to any compression scheme with a computable decompression function. (Of course you can come up with a silly compression scheme which “memorizes” any fixed string and compresses it arbitrarily small, but the price you pay is that the Kolmogorov complexity of this string is more or less a lower bound on the size of your decompression program, and also there’s no reason this would help you compress anything you actually care about.)
Now, Theorem 1, applied to Kolmogorov compression, implies the following.
Theorem 2: For every $n$, there exist strings of length $n$ whose Kolmogorov complexity is at least $n$.
In other words, there exist incompressible strings, whose shortest descriptions are just those strings themselves. In fact, most strings are basically incompressible in this sense; a slight generalization of Theorem 1 shows that at most $2^{1-k}$ of all binary strings of length $n$ can be compressed to have length at most $n - k$. For example, taking $k = 8$, less than 1% of strings of length $n$ can have their length reduced by $8$ or more by any fixed compression scheme.
This observation suggests an important connection between incompressibility and randomness. On the one hand, a randomly chosen string is incompressible with high probability. On the other hand, incompressible strings “look random” – they don’t have any patterns in them, because patterns can be described by short programs.
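As an empirical companion to this (not part of the argument above; it assumes Python 3’s standard zlib module), a general-purpose compressor cannot shrink random bytes, while highly patterned input compresses dramatically:

```python
import os
import zlib

random_data = os.urandom(10_000)        # incompressible with overwhelming probability
patterned = b"0123456789" * 1_000       # very regular, hence highly compressible

print(len(zlib.compress(random_data, 9)))  # typically slightly *larger* than 10000
print(len(zlib.compress(patterned, 9)))    # typically a few dozen bytes
```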
Gradient descent, in its simplest form, where you just subtract the gradient of your loss function $f$, is not dimensionally consistent: if the parameters you’re optimizing over have units of length, and the loss function is dimensionless, then the derivatives you’re subtracting have units of inverse length.
This observation can be used to reinvent the learning rate, which, for dimensional consistency, must have units of length squared. It also suggests that the learning rate ought to be set to something like $L^2$ for some kind of characteristic length scale $L$, which loosely speaking is the length at which the curvature of $f$ starts to matter.
It might also make sense to give different parameters different units, which suggests furthermore that one might want a different learning rate for each parameter, or at least that one might want to partition the parameters into different subsets and choose different learning rates for each.
Going much further, from an abstract coordinate-free point of view the extra information you need to compute the gradient of a smooth function is a choice of (pseudo-)Riemannian metric on parameter space, which if you like is a gigantic hyperparameter you can try to optimize. Concretely this amounts to a version of preconditioned gradient descent where you allow yourself to multiply the gradient (in the coordinate-dependent sense) by a symmetric (invertible, ideally positive definite) matrix which is allowed to depend on the parameters. In the first paragraph this matrix was a constant scalar multiple of identity and in the third paragraph this matrix was constant diagonal.
This is an extremely general form of gradient descent, general enough to be equivariant under arbitrary smooth change of coordinates: that is, if you do this form of gradient descent and then apply a diffeomorphism to parameter space, you are still doing this form of gradient descent, with a different metric. For example, if you pick the preconditioning matrix to be the inverse Hessian (in the usual sense, assuming it’s invertible), you recover Newton’s method. This corresponds to choosing the metric at each point to be given by the Hessian (in the usual sense), which is the choice that makes the Hessian (in the coordinate-free sense) equal to the identity. This is a precise version of “the length at which the curvature of $f$ starts to matter” and in principle ameliorates the problem where gradient descent performs poorly in narrow valleys (regions where the Hessian (in the usual sense) is poorly conditioned), at least up to cubic and higher order effects.
In general it’s expensive to compute the inverse Hessian, so a more practical thing to do is to use a matrix which approximates it in some sense. And now we’re well on the way towards quasi-Newton methods.
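A minimal numerical sketch of the preceding point (my own illustration, assuming Python with numpy): on a quadratic with a badly conditioned Hessian, plain gradient descent with a scalar learning rate makes slow progress along the flat direction, while preconditioning the gradient by the inverse Hessian (a Newton step) jumps straight to the minimum.

```python
import numpy as np

H = np.diag([1.0, 100.0])                 # Hessian of f; condition number 100
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

x0 = np.array([1.0, 1.0])

# Plain gradient descent: the learning rate is limited by the largest curvature.
x = x0.copy()
for _ in range(100):
    x = x - 0.01 * grad(x)
print("plain gradient descent:", f(x))    # still well above 0 along the flat direction

# Preconditioned step: multiply the gradient by the inverse Hessian (Newton's method).
P = np.linalg.inv(H)                      # the metric / preconditioner
x = x0 - P @ grad(x0)
print("preconditioned step:   ", f(x))    # exactly 0 for a quadratic
```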
Preliminaries over an arbitrary ring
(All rings and algebras are commutative unless otherwise stated.)
The additive group scheme $\mathbb{G}_a$ over a base ring $k$ has functor of points given by the underlying abelian group functor
$$\mathbb{G}_a(R) = (R, +), \quad R \in k\text{-Alg}.$$
This functor is represented by the free $k$-algebra $k[x]$ together with the Hopf algebra structure given by the comultiplication
$$\Delta(x) = x \otimes 1 + 1 \otimes x$$
and antipode $S(x) = -x$, and so it is an affine group scheme whose underlying scheme is the affine line $\mathbb{A}^1$ over $k$. From a Hopf algebra perspective, this Hopf algebra is special because it is the free Hopf algebra on a primitive element.
A representation of can be described in a few equivalent ways. From the functor of points perspective, we first need to describe the functor of points perspective on a
-module: a
-module
has functor of points
making it an affine group scheme if is finitely generated projective (in which case its ring of functions is the symmetric algebra
on the dual of
) but not in general, for example if
is a field and
is an infinite-dimensional vector space. Note that if
we recover
, and more generally if
we recover
.
A representation of over
is, loosely speaking, a polynomial one-parameter group of automorphisms of a
-module. The simplest nontrivial example is
which defines a nontrivial action of on
.
Formally, we can define an action of on a
-module
as an action of the functor of points of the former on the functor of points of the latter which is linear in the appropriate sense. Explicitly, it is a natural transformation with components
such that each defines an
-linear action of the group
on the
-module
in a way which is natural in
. So we can think of the parameter
in the one-parameter group above as running over all elements of all
-algebras
.
can equivalently, by currying, be thought of as a natural transformation of group-valued functors from
to the functor
even though the latter is generally not representable by a scheme, again unless is finitely generated projective; note that if
then
is an affine group scheme, namely the general linear group
.
The advantage of doing this is that we can appeal to the Yoneda lemma: because is representable, such natural transformations correspond to elements of the group
at which point we’ve finally been freed of the burden of having to consider arbitrary . Writing down the conditions required for a map
to correspond to an element of
giving a homomorphism of group-valued functors
, we are led to the following.
Definition-Theorem: A representation of over
is a comodule over the Hopf algebra of functions
.
This is true more generally for representations of any affine group scheme.
Let’s get somewhat more explicit. A comodule over is in particular an action map
. Isolating the coefficient of
on the RHS, this map breaks up into homogeneous components
which each correspond to an element of , which we’ll call
. The entire action map can therefore be thought of as a power series
and the condition that is part of a comodule structure turns out to be precisely the condition that 1)
is the identity, 2) that we have
as an identity of formal power series, and 3) that for any ,
for all but finitely many
. Which of these is nonzero can be read off from the value of
, which takes the form
.
Equating the coefficients of and of
on both sides of the identity
(the two coefficients are equal on the LHS, hence must be equal on the RHS) gives
from which it follows in particular that the commute. This is necessary and sufficient for us to have
, so we can say the following over an arbitrary
:
Theorem: Over a ring , representations of
can be identified with modules
over the divided power algebra
which are locally finite in the sense that for any we have
for all but finitely many
. Given the action of each
, the corresponding action of
is
.
If is torsion-free as an abelian group, for example if
, then the divided power algebra can be thought of as the subalgebra of
spanned by
, where the isomorphism sends
to
. This is because the key defining relation above can be rewritten
if there is no torsion.
These representations should really be thought of as continuous representations of the profinite Hopf algebra given as the cofiltered limit over the algebras spanned by for each
, with the quotient maps given by setting all
past a certain point to zero. This profinite Hopf algebra is dual to the Hopf algebra
of functions on
, and we can more generally relate comodules over a coalgebra
to modules over a profinite algebra dual to it in this way under suitable hypotheses, for example if
is a field.
In characteristic zero
If is a field of characteristic zero, or more generally a
-algebra, then the divided power algebra simplifies drastically, because we can now divide by all the factorials running around. Induction on the relation
readily gives
and we conclude the following.
Theorem: Over a -algebra
, representations of
can be identified with endomorphisms
of a
-module which are locally nilpotent in the sense that, for any
, we have
for all but finitely many
. Given such an endomorphism, the corresponding action of
is
.
The corresponding profinite Hopf algebra is the formal power series algebra in one variable, with comultiplication
. If
is a field of characteristic zero, this means we can at least classify finite-dimensional representations of
using nilpotent Jordan blocks.
Example. The representation we wrote down earlier corresponds to exponentiating the Jordan block
.
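Here is a small symbolic check of this example (my own, assuming Python with sympy; the matrix $N$ below is a nilpotent Jordan block, my notation): the exponential series of $tN$ terminates, so it is an honest polynomial in $t$, and it satisfies the one-parameter-group identity.

```python
import sympy as sp

n = 4
N = sp.zeros(n, n)
for i in range(n - 1):
    N[i, i + 1] = 1                       # nilpotent Jordan block, N**n == 0

def exp_poly(t):
    # finite sum I + tN + t^2 N^2 / 2! + ... ; it stops because N is nilpotent
    total = sp.eye(n)
    for k in range(1, n):
        total += (t**k / sp.factorial(k)) * N**k
    return total

s, t = sp.symbols("s t")
diff = (exp_poly(s + t) - exp_poly(s) * exp_poly(t)).applyfunc(sp.expand)
assert diff == sp.zeros(n, n)             # exp((s+t)N) = exp(sN) exp(tN)
print(exp_poly(t))                        # entries are polynomials in t, e.g. t**2/2
```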
Example. Any coalgebra has a “regular representation,” namely the comodule given by its own comultiplication. The regular representation of is the translation map
.
and the corresponding locally nilpotent endomorphism is the derivative (hence the use of
above). More generally, locally nilpotent derivations on a
-algebra
correspond to actions of
on the affine
-scheme
.
The regular representation has subrepresentations given by restricting attention to the polynomials of degree at most for each
. These subrepresentations are finite-dimensional, and are classified by the
nilpotent Jordan block, which follows from the fact that they have a one-dimensional invariant subspace given by the constant polynomials.
In positive characteristic
In positive characteristic the binomial coefficient in the relation
is sometimes zero, so things get more complicated. For starters, the same induction as before shows that if
is a field of characteristic
, or more generally an
-algebra, we have
.
However, when we try to analyze , we find that we can put no constraint on it in terms of smaller
but instead that
. In fact we have
for all
, because the identity
gives by induction
(here we use the fact that we know the commute). So things are not determined just by
; we at least also need to know
. Note that
guarantees that the exponential
still makes sense in characteristic
, because it’s no longer necessary to divide by factorials divisible by
.
Continuing from here we find that
using the fact that and
is not divisible by
for
by Kummer’s theorem, as well as Lucas’s theorem which gives more precisely
. This gets us up to
and then when we try to analyze
we find that we cannot relate it to any of the smaller
, because
is always divisible by
(using either Kummer’s or Lucas’s theorems), and that
, which we already knew.
The general situation is as follows. We can write as a product of
precisely when we can find
such that
is nonzero
. Kummer’s theorem implies that this is possible precisely when
is not a power of
, from which it follows that each
is a product of
. We can be more precise as follows. Write
in base
as
Then induction gives
where the LHS is congruent to by a mild generalization of Lucas’s theorem and the RHS can be rewritten as above, giving
.
This is enough for the to satisfy every relation defining the divided power algebra, and so we conclude the following.
Lemma: Over an -algebra
, the divided power algebra can be rewritten
.
Corollary: Over an -algebra
, representations of the additive group scheme
correspond to modules
over the algebra
which are locally finite in the sense that for all
, we have
for all but finitely many
. Given the action of each
, the corresponding action of
is
.
As before, we can equivalently talk about continuous modules over a suitable profinite Hopf algebra.
Example. We can classify all nontrivial -dimensional representations over a field
of characteristic
as follows. There is some smallest
such that the action of
is nonzero. In a suitable basis,
acts by a nilpotent Jordan block
, and all the other
commute with it. A calculation shows that this implies that each
must also have the form
for some scalars
, and hence our representation has action map
.
Contrast to the case of characteristic zero, where there is only one isomorphism class of nontrivial -dimensional representation (given by
).
Example. Consider again the regular representation given by the translation action of on itself:
.
We find as before that is the derivative. Now, in characteristic
the derivative of a polynomial is well-known to have the curious property that
; among other things this means that translation is no longer given by just exponentiating the derivative. However, since over
we have
we have, over and hence over any
, a well-defined differential operator
(for ; otherwise zero) and this differential operator must be
; similarly for higher powers of
we have
(for ; otherwise zero, as above).
Endomorphisms
Actually we should have expected an answer like this all along, or at least we should have known that we would also need to write down representations involving powers of Frobenius in addition to the obvious representations of the form . This is because in characteristic
, the Frobenius map gives a nontrivial endomorphism
, and the pullback of any representation along an endomorphism is another representation. Pulling back along the Frobenius map sends
to
. More generally, any sum of powers of the Frobenius map gives a nontrivial endomorphism of
as well.
The fact that this sort of thing doesn’t happen in characteristic zero means that can’t have any non-obvious endomorphisms like the Frobenius map there. In fact we can say the following.
Proposition: Over a ring $k$, endomorphisms $\mathbb{G}_a \to \mathbb{G}_a$ are in natural bijection with additive polynomials, namely polynomials $f(x) \in k[x]$ such that $f(x + y) = f(x) + f(y)$ and $f(0) = 0$.
Proof. This is mostly a matter of unwinding definitions. By the Yoneda lemma, maps (on underlying affine schemes, maps
) correspond to elements of
, hence to polynomials
, and the condition that such a map corresponds to a group homomorphism is precisely the condition that
and
.
Now let’s try to classify all such polynomials. Writing , equating the coefficient of
on both sides as before gives
If is torsion-free, and in particular if
is a
-algebra, then this implies that
for
(by setting
where
), and since
the only possible nonzero coefficient is
. So
is scalar multiplication by the scalar
.
On the other hand, if is an
-algebra, then it can again happen as above that
doesn’t vanish if
is always divisible by
, which as before happens iff
is a power of
. In this case
is a linear combination
of powers of the Frobenius map, which is clearly an endomorphism. In summary, we conclude the following.
Proposition: If is torsion-free, the endomorphism ring of
is
acting by scalar multiplication. If
is an
-algebra, the endomorphism ring of
is
, with
acting by Frobenius.
The endomorphisms generated by Frobenius can be used to write down examples of affine group schemes which only exist in positive characteristic. For example, the kernel of Frobenius is the affine group scheme $\alpha_p$ with functor of points
$$\alpha_p(R) = \{ r \in R : r^p = 0 \}.$$
It turns out that the equivalence class of is completely determined by its rank
. To prove this we construct some bases by induction. For starters, let
be a vector such that
; this is always possible unless
. Next, let
be a vector such that
is linearly independent of
; this is always possible unless
.
Continuing in this way, we construct vectors such that the vectors
are linearly independent, hence a basis of the column space of
. Next, we complete the
and
to bases of
in whatever manner we like. With respect to these bases,
takes a very simple form: we have
if
and otherwise
. Hence, in these bases,
is a block matrix where the top left block is an
identity matrix and the other blocks are zero.
Explicitly, this means we can write as a product
where has the block form above, the columns of
are the basis of
we found by completing
, and the columns of
are the basis of
we found by completing
. This decomposition can be computed by row and column reduction on
, where the row operations we perform give
and the column operations we perform give
.
Conceptually, the question we’ve asked is: what does a linear transformation between vector spaces “look like,” when we don’t restrict ourselves to picking a particular basis of
or
? The answer, stated in a basis-independent form, is the following. First, we can factor
as a composite
where is the image of
. Next, we can find direct sum decompositions
and
such that
is the projection of
onto its first factor and
is the inclusion of the first factor into
. Hence every linear transformation “looks like” a composite
of a projection onto a direct summand and an inclusion of a direct summand. So the only basis-independent information contained in is the dimension of the image
, or equivalently the rank of
. (It’s worth considering the analogous question for functions between sets, whose answer is a bit more complicated.)
The actual problem this blog post is about is more interesting: it is to classify matrices
up to orthogonal change of basis in both the source and the target. In other words, we now want to understand the equivalence classes of the equivalence relation given by
.
Conceptually, we’re now asking: what does a linear transformation between finite-dimensional Hilbert spaces “look like”?
Inventing singular value decomposition
As before, we’ll answer this question by picking bases with respect to which is as easy to understand as possible, only this time we need to deal with the additional restriction of choosing orthonormal bases. We will follow roughly the same inductive strategy as before. For starters, we would like to pick a unit vector
such that
; this is possible unless
is identically zero, in which case there’s not much to say. Now, there’s no guarantee that
will be a unit vector, but we can always use
as the beginning of an orthonormal basis of . The question remains which of the many possible values of
to use. In the previous argument it didn’t matter because they were all related by change of coordinates, but now it very much does because the length
may differ for different choices of
. A natural choice is to pick
so that
is as large as possible (hence equal to the operator norm
of
); writing
, we then have
.
is called the first singular value of
,
is called its first right singular vector, and
is called its first left singular vector. (The singular vectors aren’t unique in general, but we’ll ignore this for now.) To continue building orthonormal bases we need to find a unit vector
orthogonal to such that
is linearly independent of
; this is possible unless
, in which case we’re already done and
is completely describable as
; equivalently, in this case we have
.
We’ll pick using the same strategy as before: we want the value of
such that
is as large as possible. Note that since
, this is equivalent to finding the value of
such that
is as large as possible. Call this largest possible value
and write
.
At this point we are in trouble unless ; if this weren’t the case then our strategy would fail to actually build an orthonormal basis of
. Very importantly, this turns out to be the case.
Key lemma #1: Suppose $v_1$ is a unit vector maximizing $|Av|$ over all unit vectors $v$. Let $w$ be a unit vector orthogonal to $v_1$. Then $Aw$ is also orthogonal to $Av_1$.
Proof. Consider the function
.
The vectors are all unit vectors since
are orthonormal, so by construction (of
) this function is maximized when
. In particular, its derivative at
is zero. On the other hand, we can expand
out using dot products as
.
Now we can compute the first-order Taylor series expansion of this function around , giving
so setting the first derivative equal to gives
as desired.
This is the technical heart of singular value decomposition, so it’s worth understanding in some detail. Michael Nielsen has a very nice interactive demo / explanation of this. Geometrically, the points trace out an ellipse centered at the origin, and by hypothesis
describes the semimajor axis of the ellipse: the point furthest away from the origin. As we move away from
, to first order we are moving slightly in the direction of
, and so if
were not orthogonal to
it would be possible to move slightly further away from the origin than
by moving either in the positive or negative
direction, depending on whether the angle between
and
is greater than or less than
. The only way to ensure that moving in the direction of
does not, to first order, get us further away from the origin is if
is orthogonal to
.
Note that this gives a proof that the semiminor axis of an ellipse – the point closest to the origin – is always orthogonal to its semimajor axis. We can think of key lemma #1 above as more or less being equivalent to this fact, also known as the principal axis theorem in the plane, and which is also closely related to but slightly weaker than the spectral theorem for real matrices.
Thanks to key lemma #1, we can continue our construction. With as before, we inductively produce orthonormal vectors
such that
is maximized subject to the condition that
for all
, and write
where is the maximum value of
on all vectors
orthogonal to
; note that this implies that
.
The are the singular values of
, the
are its right singular vectors, and the
are its left singular vectors. Repeated application of key lemma #1 shows that the
are an orthonormal basis of the column space of
, so the construction stops here:
is identically zero on the orthogonal complement of
, because if it weren’t then it would take a value orthogonal to
. This means we can write
as a sum
.
This is one version of the singular value decomposition (SVD for short) of , and it has the benefit of being as close to unique as possible. A more familiar version of SVD is obtained by completing the
and
to orthonormal bases of
and
(necessarily highly non-unique in general). With respect to these bases,
takes, similar to the warm-up, a block form where the top left block is the diagonal matrix with entries
and the remaining blocks are zero. Hence we can write
as a product
where has the above block form,
has columns given by
, and
has columns given by
.
So, stepping back a bit: what have we learned about what a linear transformation between Hilbert spaces looks like? Up to orthogonal change of basis, we’ve learned that they all look like “weighted projections”: we are almost projecting onto the image as in the warmup, except with weights given by the singular values
to account for changes in length. The only orthogonal-basis-independent information contained in a linear transformation turns out to be its singular values.
Looking for more analogies between singular value decomposition and the warmup, we might think of the singular values as a quantitative refinement of the rank, since there are of them where
is the rank, and if some of them are small then
is close (in the operator norm) to a linear transformation having lower rank.
Geometrically, one way to describe the answer provided by singular value decomposition to the question “what does a linear transformation look like” is that the key to understanding is to understand what it does to the unit sphere of
. The image of the unit sphere is an
-dimensional ellipsoid, and its principal axes have direction given by the left singular vectors
and lengths given by the singular values
. The right singular vectors
map to these.
Properties
Singular value decomposition has lots of useful properties, some of which we’ll prove here. First, note that taking the transpose of a singular value decomposition gives another singular value decomposition
showing that has the same singular values as
, but with the left and right singular vectors swapped. This can be proven more conceptually as follows.
Key lemma #2: Write . Then for every
, the left and right singular vectors
maximize the value of
subject to the constraint that
for all
, that
is orthogonal to
, and that
is orthogonal to
. This maximum value is
.
Proof. At the maximum value of subject to the constraint that
is orthogonal to
and
is orthogonal to
, it must also be the case that if we fix
then
takes its maximum value at
. But for fixed
,
uniquely takes its maximum value when
is proportional to
(if possible), hence must in fact be equal to
; moreover, this is always possible thanks to key lemma #1. So we are in fact maximizing
subject to the above constraints and we already know the solution is given by .
Left-right symmetry: Let be the singular values, left singular vectors, and right singular vectors of
as above. Then
are the singular values, left singular vectors, and right singular vectors of
. In particular,
.
Proof. Apply key lemma #2 to , and note that
is the same either way, just with the roles of
and
switched.
Singular = eigen:Â The left singular vectors are the eigenvectors of
corresponding to its nonzero eigenvalues, which are
. The right singular vectors
are the eigenvectors of
corresponding to its nonzero eigenvalues, which are also
.
Proof. We now know that and that
, hence
and
.
Hence are orthonormal eigenvectors of
respectively. Moreover, these matrices have rank at most (in fact exactly)
, so this exhausts all eigenvectors corresponding to nonzero eigenvalues.
This gives an alternative route to understanding singular value decomposition which comes from writing as
and then applying the spectral theorem to to diagonalize, but I think it’s worth knowing that there’s a route to singular value decomposition which is independent of the spectral theorem.
In addition to the above algebraic characterization of singular values, the singular values also admit the following variational characterization.
Variational characterizations of singular values (Courant, Fischer): We have
and
.
Proof. For the first characterization, any -dimensional subspace
intersects
nontrivially, hence contains a unit vector of the form
.
We compute that
and hence that
.
We conclude that every contains
such that
, hence
. Equality is obtained when
.
The second characterization is very similar. Any -dimensional subspace
intersects
nontrivially, hence contains a unit vector of the form
.
We compute that
and hence that
.
We conclude that every contains a vector
such that
, hence
. Equality is obtained when
.
The second variational characterization above can be used to prove the following important theorem.
Low rank approximation (Eckart, Young): If is the SVD of
, let
where
has diagonal entries
and all other entries zero. Then
is the closest matrix to
in operator norm with rank at most
; that is,
minimizes
subject to the constraint that
. This minimum value is
.
Proof. Suppose is a matrix of rank at most
. Let
be the nullspace of
, which by hypothesis has dimension at least
. By the second variational characterization above, this means that it contains a vector
such that
, and since
this gives
and hence that . Equality is obtained when
as defined above.
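A quick numerical illustration of the theorem (mine, assuming numpy): truncate the SVD after $k$ terms and check that the operator-norm error is exactly the next singular value, and that a random rank-$k$ matrix does no better.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]              # keep the k largest singular values
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])   # error equals the next singular value

B = rng.standard_normal((6, k)) @ rng.standard_normal((k, 4))   # a random rank-k matrix
assert np.linalg.norm(A - B, 2) >= s[k]               # spot check of optimality, not a proof
print("Eckart-Young check passed")
```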
The variational characterizations can also be used to prove the following inequality relating the singular values of two matrices and of their sum, which can be thought of as a quantitative refinement of the observation that the rank of a sum of two matrices is at most the sum of their ranks.
Additive perturbation (Weyl): Let be
matrices with singular values
. Then
.
Proof. We want to bound in terms of the singular values of
and
. By the second variational characterization, we have
.
To give an upper bound on a minimum value of a function we just need to give an upper bound on some value that it takes. Let and
be the subspaces of
of dimensions
respectively which achieve the minimum values of
and
respectively, and let
be their intersection. This intersection has dimension at least
, and by construction
.
Since has dimension at least
, the above is an upper bound on the value of
for any
-dimensional subspace
, from which the conclusion follows.
The slightly curious off-by-one indexing in the above inequality can be understood as follows: if and
are both very small, this means that
and
are close to matrices of rank at most
and
respectively, and hence
is close to a matrix of rank at most
, hence
also ought to be small.
Setting in the additive perturbation inequality we deduce the following corollary.
Singular values are Lipschitz: The singular values, as functions on matrices, are uniformly Lipschitz with respect to the operator norm with Lipschitz constant : that is,
.
Proof. Apply additive perturbation twice with , first to get
(remembering that is the operator norm), and second to get
(remembering that ).
This is very much not the case with eigenvalues: a small perturbation of a square matrix can have a large effect on its eigenvalues. This is explained e.g. in in this blog post by Terence Tao, and is related to pseudospectra.
Setting , or equivalently
, in the additive perturbation inequality, we deduce the following corollary.
Interlacing:Â Suppose are matrices such that
. Then
.
Proof. Apply additive perturbation twice, first to get
and second to get
.
Interlacing gives us some control over what happens to the singular values under a low-rank perturbation (as opposed to a low-norm perturbation; a low-rank perturbation may have arbitrarily high norm, and vice versa). For example, we learn that if all of the singular values are clumped together, then a rank- perturbation will keep most of the singular values clumped together, except possibly for either the
largest or
smallest singular values. We can’t expect any control over these, since in the worst case a rank-
perturbation can make the
largest singular values arbitrarily large, or make the
smallest singular values arbitrarily small.
A particular special case of a low-rank perturbation is deleting a small number of rows or columns (note that a row or column which is entirely zero does not affect the singular values, so deleting a row or column is equivalent to setting all of its entries to zero), in which case the upper bound above can be tightened.
Cauchy interlacing: Suppose is a matrix and
is obtained from
by deleting at most
rows. Then
.
Proof. The lower bound follows from interlacing. The upper bound follows from the observation that we have for all
, then applying either variational characterization of the singular values.
Cauchy interlacing also applies to deleting columns, or combinations of rows and columns, because the singular values are unchanged by transposition. In particular, we learn that if is obtained from
by deleting either a single row or a single column, then the singular values of
interlace with the singular values of
, hence the name.
In particular, if all of the singular values of are clumped together then so are those of
, with no exceptions. Taking the contrapositive, if the singular values of
are spread out, then the singular values of
must be as well.
Three special cases
Three special cases of the general singular value decomposition are worth pointing out.
First, if has orthogonal columns, or equivalently if
is diagonal, then the singular values
are the lengths of its columns, we can take the right singular vectors to be the standard basis vectors
, and we can take the left singular vectors to be the unit rescalings of its columns. This means that we can take
to be the identity matrix, and in general suggests that
is a measure of the extent to which the columns of
fail to be orthogonal (with the caveat that
is not unique and so in general we would want to look at the
closest to
).
Second, if has orthogonal rows, or equivalently if
is diagonal, then the singular values
are the lengths of its rows, we can take the left singular vectors to be the standard basis vectors
, and we can take the right singular vectors to be the unit rescalings of its rows. This means that we can take
to be the identity matrix, and in general suggests that
is a measure of the extent to which the rows of
fail to be orthogonal (with the same caveat as above).
Finally, if is square and an orthogonal matrix, so that
, then the singular values
are all equal to
, and an arbitrary choice of either the left or the right singular vectors uniquely determines the other. This means that we can take
to be the identity matrix, and in general suggests that
is a measure of the extent to which
fails to be orthogonal. In fact it is possible to show that the closest orthogonal matrix to
is given by
, or in other words by replacing all of the singular values of
with
, so
is precisely the distance from to the nearest orthogonal matrix. This fact can be used to solve the orthogonal Procrustes problem.
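A small numpy sketch: build $Q$ by replacing all singular values with $1$ and compare its distance to $A$ (in operator norm) against a crowd of randomly generated orthogonal matrices. This is an empirical illustration, not a proof that $Q$ is optimal.

```python
import numpy as np

rng = np.random.default_rng(3)

A = rng.normal(size=(4, 4))
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                                    # replace all singular values by 1
dist = np.linalg.norm(A - Q, 2)
assert np.isclose(dist, np.abs(s - 1).max())  # distance is max_i |s_i - 1|

# No randomly generated orthogonal matrix should beat Q.
for _ in range(2000):
    R = np.linalg.qr(rng.normal(size=(4, 4)))[0]
    assert np.linalg.norm(A - R, 2) >= dist - 1e-10
```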
In general, we should expect that the SVD of a matrix is relevant to answering any question about
whose answer is invariant under left and right multiplication by orthogonal matrices. This includes, for example, the question of low-rank approximations to
with respect to operator norm we answered above, since both rank and operator norm are invariant.
The answer is that we can think of the Morita 2-category as a 2-category of module categories over the symmetric monoidal category of
-modules, equipped with the usual tensor product
over
. By the Eilenberg-Watts theorem, the Morita 2-category is equivalently the 2-category whose
- objects are the categories
, where
is a
-algebra,
- morphisms are cocontinuous
-linear functors
, and
- 2-morphisms are natural transformations.
An equivalent way to describe the morphisms is that they are “-linear” in that they respect the natural action of
on
given by
.
This action comes from taking the adjoint of the enrichment of over
, which gives a tensoring of
over
. Since the two are related by an adjunction in this way, a functor respects one iff it respects the other.
So Morita theory can be thought of as a categorified version of module theory, where we study modules over instead of over
. In the simplest cases, we can think of Morita theory as a categorified version of linear algebra, and in this post we’ll flesh out this analogy further.
Technical preliminaries
Let be a symmetric monoidal category (which in this post will be
, for
a commutative ring), and consider categories enriched over
, or
-categories for short. If
and
are two
-categories, their naive tensor product
is the
-category whose objects are pairs
of objects in
and objects in
, and whose homs are given by the tensor products
of the homs in and
, with the obvious composition (which requires the ability to switch tensor factors to define). When
and
have one object and
, this reduces to the usual tensor product of
-algebras.
Thinking of -categories as many-object generalizations of
-algebras (one might call them “
-algebroids,” but I won’t), it’s natural to define notions of modules over them. We’ll say that a left module over
is a
-functor (
-enriched functor)
, while a right module over
is a
-functor
. If
are two
-categories, a
-bimodule is a
-functor
.
All of this terminology has its usual meaning when and
have one object (so correspond to algebras) and
.
The basic analogy
The basic analogy, one piece of structure at a time, goes like this.
- Sets are analogous to categories.
- Abelian groups are analogous to cocomplete categories. (There are several other things we could have said here, but this is the one that’s relevant to thinking about Morita theory. The idea is that taking colimits categorifies addition.)
- Rings are analogous to monoidal cocomplete categories (this includes the condition that the monoidal structure distributes over colimits).
- Commutative rings are analogous to symmetric monoidal cocomplete categories.
- Modules over commutative rings are analogous to cocomplete module categories over symmetric monoidal cocomplete categories.
We won’t get into the generalities of thinking about symmetric monoidal categories or modules over them because, in this post, the only symmetric monoidal categories we care about are those of the form for a commutative ring
, and the only module categories over them we care about are the ones that are “free on a category of generators” in the sense that they are categories of
-linear presheaves / right modules
on essentially small -linear categories
(thinking of taking presheaves as the free cocompletion). By the universal property of the free cocompletion, cocontinuous
-linear functors
correspond to
-linear functors
, or equivalently (by an adjunction) to
-bimodules; this generalizes, and in particular proves, the Eilenberg-Watts theorem. Composition is given by tensor product of bimodules, which is computed using coends.
In the special case that the categories involved have one object, they correspond to
-algebras, and the words “module,” “bimodule,” and “tensor product” all have their usual meaning. More generally, if
has finitely many isomorphism classes of objects, then we can replace it with the endomorphism ring of the direct sum of one object from each isomorphism class (because
is Morita equivalent to the one-object
-linear category with this endomorphism ring), so we get a genuinely bigger Morita 2-category if we allow
to have infinitely many objects.
From now on we’ll work in this bigger Morita 2-category, which is now the 2-category deserving the name . It has
- objects essentially small
-linear categories
,
- morphisms
-bimodules over
, and
- 2-morphisms homomorphisms of bimodules.
Equivalently, it has
- objects the cocomplete
-linear categories
(where
is as above),
- morphisms cocontinuous
-linear functors
, and
- 2-morphisms natural transformations.
From now on, when we say that a -linear category
“is” a
-algebra, possibly with some further properties, what we mean is that it’s equivalent to a category with one object and endomorphism ring a
-algebra, possibly with some further properties.
In what ways do the objects of the Morita 2-category behave like modules?
Proposition: The Morita 2-category has biproducts.
In other words, the product is also the coproduct. This is because on the one hand a functor (cocontinuous and
-linear) into
is precisely a pair of functors into
and
, and on the other hand
If and
are algebras rather than categories it would be tempting to write
rather than
, but in the case of algebras these are Morita equivalent as
-linear categories. Actually, the above argument, with disjoint unions of
-linear categories, applies just as well to infinitely many
-linear categories: this means that the bigger Morita 2-category has infinite biproducts. Note that this has nothing to do with
itself having biproducts: the same is true for the bigger Morita 2-category over
.
Proposition: There is a tensor-hom adjunction
.
Here the internal hom is the category of cocontinuous
-linear functors, which (as we’ve seen above, since it’s a category of bimodules) is itself a cocomplete
-linear category, and the tensor product
is
(this is a computation, not a definition: the definition is a universal property in terms of functors out of which are cocontinuous and
-linear in each variable). Here
is the “naive” tensor product over
.
Both sides of the equivalence above are equivalent to , which might look familiar as a categorification of a corresponding statement about finite-dimensional vector spaces. This reflects the fact that
is always dualizable with respect to the above tensor product, with dual
.
With respect to this tensor product, is the unit object. It should be thought of as the “tensor product over
,” exactly analogous to the tensor product on modules over a commutative ring.
Aside: big categories vs. little categories
The biggest secret about category theory that I don’t think is in common circulation is that there are approximately two kinds of categories, and they should be thought of very differently (because the relevant notion of morphism between them is very different):
- Big categories are “categories of mathematical objects.” Typical examples are categories of modules and sheaves. They tend to be cocomplete, and people like to consider cocontinuous functors (really, left adjoints) between them.
- Little categories are “categories as mathematical objects.” Typical examples include categories with one object, or finitely many objects. They tend to be Cauchy complete at best, and people like to consider arbitrary functors, or more generally bimodules, between them.
The Morita 2-categories we discussed above have both little descriptions and big descriptions (via -linear categories and modules over these respectively), and both are important. We can pass from little to big by taking modules / presheaves, and we can pass from big to little by taking tiny objects. (This can at best recover the Cauchy completion of the original little
-linear category, but this is fine since we’re only hoping to do things up to Morita equivalence anyway.)
I think one thing that confuses people when they first start to learn category theory is that the first examples of categories (e.g. groups, rings, modules) tend to be big, even though little categories figure prominently in the theory (e.g. as shapes for diagrams to take limits or colimits over, and/or as things to take presheaves or sheaves on) and feel very different. It’s little categories that can reasonably be thought of as algebraic objects generalizing more familiar objects like monoids and posets (or, in our enriched setting, -algebras), whereas big categories, I think, genuinely require a new set of intuitions.
On the other hand, the ability to pass between big and little categories is also important. Eilenberg-Watts, as we have seen, gives one version of this: another version is Gabriel-Ulmer duality.
The big vs. little nomenclature suggests, but is not equivalent to, the rigorous distinction between large and small categories, and is related to the distinction between big and little toposes (or, depending on your preferences, between gros and petit topoi) in sheaf theory. The basic point of this distinction is that there are two somewhat different sorts of things people mean by sheaf: on the one hand one might mean a functor on a little category like the category of open subsets of a topological space, and on the other hand one might mean a functor on a big category like the category of commutative rings. It would be nice if people emphasized this distinction more.
Bases, coordinates, and matrices
In terms of higher linear algebra, big categories of modules are “higher vector spaces,” while the little categories that can be used to present the big categories as categories of modules are “bases” for them. As we learned previously, a cocomplete abelian category has a “basis” in this sense iff it has a family of tiny (compact projective) generators, for various notions of generator.
 Given a “basis” for a “higher vector space”
(really, a higher module), any object is described by a module / presheaf
; the components of this presheaf can be thought of as the “coordinates” of
 in the “basis”
. In the same way that a vector is a sum over elements of a basis weighted by its coordinates in that basis, a presheaf is a weighted colimit / coend / functor tensor product
weighted by its “coordinates” of the “basis” of representable presheaves. This is an enriched version of the familiar statement that a presheaf of sets over a category is canonically a colimit of representable presheaves. It is sometimes called the co-Yoneda lemma.
Similarly, the statement that cocontinuous -linear functors
are equivalent to functors
can be interpreted as saying that such functors can be written as “matrices” indexed by
and
. Composition, as well as evaluation, are given by the familiar formulas if we consistently reinterpret the relevant products as tensor products and the relevant sums as coends.
Suppose in particular that . Then we get that endomorphisms of
correspond to
-bimodules, or equivalently to functors
. These are precisely the sorts of things we can take coends of, getting an object
which deserves to be called the trace of the endomorphism . This is a generalization of (zeroth) Hochschild homology with coefficients in a bimodule, which it reduces to when
is an algebra. (There is also a second trace generalizing Hochschild cohomology and computed using ends.)
The identity functor turns out to be represented by , so taking its trace we get the Hochschild homology or trace of
(or, depending on your point of view, of
) itself:
.
More explicitly, this coend is the result of coequalizing the left and right actions of on
(by postcomposition and precomposition respectively), so can be written as
where the two arrows send a pair of morphisms and
to the two composites
and
. In other words,
is the quotient of the direct sum of the endomorphism rings of every object in
by the subspace of “commutators” of the form
, where
need not themselves be endomorphisms.
By construction, this means that is the recipient of a “universal trace” map: any endomorphism
in
has an image
satisfying
for all as above, and
is universal with respect to this property.
Example. Suppose has one object, so corresponds to an algebra
. Then
is just the quotient
of
by the subspace of commutators, hence is the ordinary zeroth Hochschild homology
of
(with coefficients in itself).
The above construction of the trace implies that it is Morita invariant, and is Morita equivalent to the
-linear category of finitely generated projective (right)
-modules (which is its Cauchy completion). It follows that the trace of this category is also
, and this means that for any finitely generated projective
-module
and any endomorphism
there is a trace
.
This trace is called the Hattori-Stallings trace. For free modules it is computed by taking the image of the sum of the diagonal elements of in
. It is a shadow of various more general and more interesting maps from versions of K-theory to versions of Hochschild homology, including a version of the Chern character and the Dennis trace.
In particular, if is commutative, we recover the usual notion of trace of an endomorphism of a finitely generated projective
-module without using a monoidal structure (previously we recovered this notion using dualizability with respect to the usual tensor product of
-modules).
The simplest interesting case of the above discussion occurs when is a field and the algebras we consider are finite direct sums of matrix algebras
; equivalently, the
-module categories we consider are finite direct sums of copies of
. These are known as Kapranov-Voevodsky 2-vector spaces. Morphisms from
to
correspond to
matrices of vector spaces, just as for free modules, and the trace of an endomorphism
is the direct sum of its diagonal vector spaces.
Higher representation theory
One reason you might want a higher version of linear algebra is to study “higher representation theory,” where groups (or higher versions of groups, such as 2-groups) act on higher vector spaces. Previously we saw such actions occur naturally in Galois descent: namely, if is a Galois extension with Galois group
, then
naturally acts on the category
of
-vector spaces, and we used this action to describe Galois descent for vector spaces.
More generally, if is a group acting on a scheme
(or some more or less general object, depending on taste), then
naturally acts on the category
of quasicoherent sheaves on
. If
is an
-scheme for some base
, this is naturally a module category over
, and if
acts on
by
-scheme automorphisms, the induced action on
is
-linear. One reason you might want to understand higher representation theory is to understand these sorts of actions. In particular, a natural question is what the homotopy fixed points of this action are, and the answer is that
;
that is, the homotopy fixed point category (which here is typically called “-equivariant quasicoherent sheaves on
“) is the category of quasicoherent sheaves on the quotient
regarded as a stack. (It certainly does not suffice to take the ordinary quotient of schemes: for example, if
is a point, then
is
, and
is the category of
-linear representations of
.)
If the stacky quotient happens to be an ordinary scheme (so
is a
-torsor in schemes), this is a generalization of Galois descent, to which it reduces in the case when
is a finite extension,
, and
is the Galois group.
Fire
Fire is a sustained chain reaction involving combustion, which is an exothermic reaction in which an oxidant, typically oxygen, oxidizes a fuel, typically a hydrocarbon, to produce products such as carbon dioxide and water, along with heat and light. A typical example is the combustion of methane, which looks like
$\mathrm{CH_4} + 2\,\mathrm{O_2} \to \mathrm{CO_2} + 2\,\mathrm{H_2O}.$
The heat produced by combustion can be used to fuel more combustion, and when that happens enough that no additional energy needs to be added to sustain combustion, you’ve got a fire. To stop a fire, you can remove the fuel (e.g. turning off a gas stove), remove the oxidant (e.g. smothering a fire using a fire blanket), remove the heat (e.g. spraying a fire with water), or interrupt the chain reaction itself (e.g. with halon).
Combustion is in some sense the opposite of photosynthesis, an endothermic reaction which takes in light, water, and carbon dioxide and produces hydrocarbons.
It’s tempting to assume that when burning wood, the hydrocarbons that are being combusted are e.g. the cellulose in the wood. It seems, however, that something more complicated happens. When wood is exposed to heat, it undergoes pyrolysis (which, unlike combustion, doesn’t involve oxygen), which converts it to more flammable compounds, such as various gases, and these are what combust in wood fires.
When a wood fire burns for long enough it will lose its flame but continue to smolder, and in particular the wood will continue to glow. Smoldering involves incomplete combustion, which, unlike complete combustion, produces carbon monoxide.
Flames
Flames are the visible parts of a fire. As fires burn, they produce soot (which can refer to some of the products of incomplete combustion or some of the products of pyrolysis), which heats up, producing thermal radiation. This is one of the mechanisms responsible for giving fire its color. It is also how fires warm up their surroundings.
Thermal radiation is produced by the motion of charged particles: anything at positive temperature consists of charged particles moving around, so emits thermal radiation. A more common but arguably less accurate term is black body radiation; this properly refers to the thermal radiation emitted by an object which absorbs all incoming radiation. It’s common to approximate thermal radiation by black body radiation, or by black body radiation times a constant, because it has the useful property that it depends only on the temperature of the black body. Black body radiation happens at all frequencies, with more radiation at higher frequencies at higher temperatures; in particular, the peak frequency is directly proportional to temperature by Wien’s displacement law.
Everyday objects are constantly producing thermal radiation, but most of it is infrared – its wavelength is longer than that of visible light, and so is invisible without special cameras. Fires are hot enough to produce visible light, although they are still producing a lot of infrared light.
Another mechanism giving fire its color is the emission spectra of whatever’s being burned. Unlike black body radiation, emission spectra occur at discrete frequencies; this is caused by electrons producing photons of a particular frequency after transitioning from a higher-energy state to a lower-energy state. These frequencies can be used to detect elements present in a sample in flame tests, and a similar idea (using absorption spectra) is used to determine the composition of the sun and various stars. Emission spectra are also responsible for the color of fireworks and of colored fire.
The characteristic shape of a flame on Earth depends on gravity. As a fire heats up the surrounding air, natural convection occurs: the hot air (which contains, among other things, hot soot) rises, while cool air (which contains oxygen) falls, sustaining the fire and giving flames their characteristic shape. In low gravity, such as on a space station, this no longer occurs; instead, fires are only fed by the diffusion of oxygen, and so burn more slowly and with a spherical shape (since now combustion is only happening at the interface of the fire with the parts of the air containing oxygen; inside the sphere there is presumably no more oxygen to burn):
[Image: a roughly spherical flame burning in microgravity.]
Black body radiation
Black body radiation is described by Planck’s law, which is fundamentally quantum mechanical in nature, and which was historically one of the first applications of any form of quantum mechanics. It can be deduced from (quantum) statistical mechanics as follows.
What we’ll actually compute is the distribution of frequencies in a (quantum) gas of photons at some temperature ; the claim that this matches the distribution of frequencies of photons emitted by a black body at the same temperature comes from a physical argument related to Kirchhoff’s law of thermal radiation. The idea is that the black body can be put into thermal equilibrium with the gas of photons (since they have the same temperature). The gas of photons is getting absorbed by the black body, which is also emitting photons, so in order for them to stay in equilibrium, it must be the case that at every frequency the black body is emitting radiation at the same rate as it’s absorbing it, which is determined by the distribution of frequencies in the gas. (Or something like that. I Am Not A Physicist, so if your local physicist says different then believe them instead.)
In statistical mechanics, the probability of finding a system in microstate given that it’s in thermal equilibrium at temperature
is proportional to
where is the energy of state
and
is thermodynamic beta (so
is temperature and
is Boltzmann’s constant); this is the Boltzmann distribution. For one possible justification of this, see this blog post by Terence Tao. This means that the probability is
where is the normalizing constant
called the partition function. Note that these probabilities don’t change if is modified by an additive constant (which multiplies the partition function by a constant); only differences in energy between states matter.
It’s a standard observation that the partition function, up to multiplicative scale, contains the same information as the Boltzmann distribution, so anything that can be computed from the Boltzmann distribution can be computed from the partition function. For example, the moments of the energy are given by
and, up to solving the moment problem, this characterizes the Boltzmann distribution. In particular, the average energy is
$\displaystyle \langle E \rangle = - \frac{\partial}{\partial \beta} \log Z.$
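Here is a minimal numerical check of this identity, for a toy system whose energy levels are invented for the example (and with $k_B = 1$):

```python
import numpy as np

# Toy system with made-up energy levels, in units where k_B = 1.
E = np.array([0.0, 1.0, 1.5, 3.0])
beta, h = 0.7, 1e-6

Z = lambda b: np.sum(np.exp(-b * E))
p = np.exp(-beta * E) / Z(beta)              # Boltzmann distribution
avg_direct = np.sum(p * E)
avg_from_Z = -(np.log(Z(beta + h)) - np.log(Z(beta - h))) / (2 * h)
assert np.isclose(avg_direct, avg_from_Z, atol=1e-6)
```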
The Boltzmann distribution can be used as a definition of temperature. It correctly suggests that in some sense is the more fundamental quantity because it might be zero (meaning every microstate is equally likely; this corresponds to “infinite temperature”) or negative (meaning higher-energy microstates are more likely; this corresponds to “negative temperature,” which it is possible to transition to after “infinite temperature,” and which in particular is hotter than every positive temperature).
To describe the state of a gas of photons we’ll need to know something about the quantum behavior of photons. In the standard quantization of the electromagnetic field, the electromagnetic field can be treated as a collection of quantum harmonic oscillators each oscillating at various (angular) frequencies . The energy eigenstates of a quantum harmonic oscillator are labeled by a nonnegative integer
, which can be interpreted as the number of photons of frequency
. The energies of these eigenstates are (up to an additive constant, which doesn’t matter for this calculation and so which we will ignore)
where is the reduced Planck constant. The fact that we only need to keep track of the number of photons rather than distinguishing them reflects the fact that photons are bosons. Accordingly, for fixed
, the partition function is
$\displaystyle Z(\beta) = \sum_{n \ge 0} e^{-n \beta \hbar \omega} = \frac{1}{1 - e^{-\beta \hbar \omega}}.$
Digression: the (wrong) classical answer
The assumption that , or equivalently the energy
, is required to be an integer here is the Planck postulate, and historically it was perhaps the first appearance of a quantization (in the sense of quantum mechanics) in physics. Without this assumption (so using classical harmonic oscillators), the sum above becomes an integral (where
is now proportional to the square of the amplitude), and we get a “classical” partition function
$\displaystyle Z_{\text{cl}}(\beta) = \int_0^{\infty} e^{-\beta E} \, dE = \frac{1}{\beta}.$
(It’s unclear what measure we should be integrating against here, but this calculation appears to reproduce the usual classical answer, so I’ll stick with it.)
These two partition functions give very different predictions, although the quantum one approaches the classical one as $\hbar \to 0$. In particular, the average energy of all photons of frequency $\omega$, computed using the quantum partition function, is
$\displaystyle \langle E \rangle_{\text{quantum}} = \frac{\hbar \omega}{e^{\beta \hbar \omega} - 1}$
whereas the average energy computed using the classical partition function is
$\displaystyle \langle E \rangle_{\text{classical}} = \frac{1}{\beta} = k_B T.$
The quantum answer approaches the classical answer as (so for small frequencies), and the classical answer is consistent with the equipartition theorem in classical statistical mechanics, but it is also grossly inconsistent with experiment and experience. It predicts that the average energy of the radiation emitted by a black body at a frequency
is a constant independent of
, and since radiation can occur at arbitrarily high frequencies, the conclusion is that a black body is emitting an infinite amount of energy, at every possible frequency, which is of course badly wrong. This is (most of) the ultraviolet catastrophe.
The quantum partition function instead predicts that at low frequencies (relative to the temperature) the classical answer is approximately correct, but that at high frequencies the average energy becomes exponentially damped, with more damping at lower temperatures. This is because at high frequencies and low temperatures a quantum harmonic oscillator spends most of its time in its ground state, and cannot easily transition to its next lowest state, which is exponentially less likely. Physicists say that most of this “degree of freedom” (the freedom of an oscillator to oscillate at a particular frequency) gets “frozen out.” The same phenomenon is responsible for classical but incorrect computations of specific heat, e.g. for diatomic gases such as oxygen.
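The freeze-out is easy to see numerically. The following sketch works in units with $\hbar = k_B = 1$, with frequencies chosen arbitrarily:

```python
import numpy as np

# Average energy of a single mode of angular frequency w at inverse temperature
# beta: quantum w/(e^{beta w} - 1) versus classical 1/beta (equipartition).
beta = 1.0
for w in [0.01, 0.1, 1.0, 10.0, 50.0]:
    quantum = w / np.expm1(beta * w)
    print(f"w = {w:6.2f}   quantum = {quantum:.3e}   classical = {1/beta:.3e}")
# Low-frequency modes agree with equipartition; high-frequency modes are
# exponentially suppressed ("frozen out").
```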
The density of states and Planck’s law
Now that we know what’s going on at a fixed frequency , it remains to sum over all possible frequencies. This part of the computation is essentially classical and no quantum corrections to it need to be made.
We’ll make a standard simplifying assumption that our gas of photons is trapped in a box with side length subject to periodic boundary conditions (so really, the flat torus
); the choice of boundary conditions, as well as the shape of the box, will turn out not to matter in the end. Possible frequencies are then classified by standing wave solutions to the electromagnetic wave equation in the box with these boundary conditions, which in turn correspond (up to multiplication by
) to eigenvalues of the Laplacian
. More explicitly, if
, where
is a smooth function
, then the corresponding standing wave solution of the electromagnetic wave equation is
and hence (keeping in mind that is typically negative, so
is typically purely imaginary) the corresponding frequency is
.
This frequency occurs times where
is the
-eigenspace of the Laplacian.
The reason for the simplifying assumptions above are that for a box with periodic boundary conditions (again, mathematically a flat torus) it is very easy to explicitly write down all of the eigenfunctions of the Laplacian: working over the complex numbers for simplicity, they are given by
where is the wave vector. (Somewhat more generally, on the flat torus
where
is a lattice, wave numbers take values in the dual lattice of
, possibly up to scaling by
depending on conventions). The corresponding eigenvalue of the Laplacian is
from which it follows that the multiplicity of a given eigenvalue is the number of ways to write
as a sum of three squares. The corresponding frequency is
and so the corresponding energy (of a single photon with that frequency) is
.
At this point we’ll approximate the probability distribution over possible frequencies , which is strictly speaking discrete, as a continuous probability distribution, and compute the corresponding density of statesÂ
; the idea is that
should correspond to the number of states available with frequencies between
and
. Then we’ll do an integral over the density of states to get the final partition function.
Why is this approximation reasonable (unlike the case of the partition function for a single harmonic oscillator, where it wasn’t)? The full partition function can be described as follows. For each wavenumber , there is an occupancy number
describing the number of photons with that wavenumber; the total number
of photons is finite. Each such photon contributes
to the energy, from which it follows that the partition function factors as a product
over all wave numbers , hence that its logarithm factors as a sum
.
and it is this sum that we want to approximate by an integral. It turns out that for reasonable temperatures and reasonably large boxes, the integrand varies very slowly as varies, so the approximation by an integral is very close. The approximation stops being reasonable only at very low temperatures, where as above quantum harmonic oscillators mostly end up in their ground states and we get Bose-Einstein condensates.
The density of states can be computed as follows. We can think of wave vectors as evenly spaced lattice points living in some “phase space,” from which it follows that the number of wave vectors in some region of phase space is proportional to its volume, at least for regions which are large compared to the lattice spacing . In fact, the number of wave vectors in a region of phase space is exactly
times the volume, where
is the volume of our box / torus.
It remains to compute the volume of the region of phase space given by all wave vectors with frequencies
between
and
. This region is a spherical shell with thickness
and radius
, and hence its volume is
from which we get that the density of states for a single photon is
.
Actually this formula is off by a factor of two: we forgot to take photon polarization into account (equivalently, photon spin), which doubles the number of states with a given wave number, giving the corrected density
.
The fact that the density of states is linear in the volume is not specific to the flat torus; it’s a general feature of eigenvalues of the Laplacian by Weyl’s law. This gives that the logarithm of the partition function is
.
Taking its derivative with respect to gives the average energy of the photon gas as
but for us the significance of this integral lies in its integrand, which gives the “density of energies”
describing how much of the energy of the photon gas comes from photons of frequencies between and
. This, finally, is a form of Planck’s law, although it needs some massaging to become a statement about black bodies as opposed to about gases of photons (we need to divide by
to get the energy density per unit volume, then do some other stuff to get a measure of radiation).
Planck’s law has two noteworthy limits. In the limit as (meaning high temperature relative to frequency), the denominator approaches
, and we get
.
This is a form of the Rayleigh-Jeans law, which is the classical prediction for black body radiation. It’s approximately valid at low frequencies but becomes less and less accurate at higher frequencies.
Second, in the limit as (meaning low temperature relative to frequency), the denominator approaches
, and we get
.
This is a form of the Wien approximation. It’s approximately valid at high frequencies but becomes less and less accurate at low frequencies.
Both of these limits historically preceded Planck’s law itself.
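For a quick numerical comparison of Planck’s law with its two limits, here is a sketch in units with $\hbar = k_B = c = 1$, up to overall constants; the function names are mine:

```python
import numpy as np

# Spectral energy density per unit frequency, up to overall constants, in units
# with hbar = k_B = c = 1.
def planck(w, T):
    return w**3 / np.expm1(w / T)

def rayleigh_jeans(w, T):   # low-frequency limit
    return w**2 * T

def wien(w, T):             # high-frequency limit
    return w**3 * np.exp(-w / T)

T = 1.0
for w in [0.05, 0.5, 5.0, 15.0]:
    print(f"w = {w:5.2f}   Planck = {planck(w, T):.3e}   "
          f"Rayleigh-Jeans = {rayleigh_jeans(w, T):.3e}   Wien = {wien(w, T):.3e}")
```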
Wien’s displacement law
This form of Planck’s law is enough to tell us at what frequency the energy is maximized given the temperature
(and hence roughly what color a black body of temperature
is): we differentiate with respect to
and find that we need to solve
.
or equivalently (taking the logarithmic derivative instead)
.
Let , so that we can rewrite the equation as
or, with some rearrangement,
.
This form of the equation makes it relatively straightforward to show that there is a unique positive solution , and hence that
, giving that the maximizing frequency is
where is the temperature. This is Wien’s displacement law for frequencies. Rewriting in terms of wavelengths
gives
where
(the units here being meter-kelvins). This computation is typically done in a slightly different way, by first re-expressing the density of energies in terms of wavelengths, then taking the maximum of the resulting density. Because
is proportional to
, this has the effect of changing the
from earlier to an
, so replaces
with the unique solution
to
which is about . This gives a maximizing wavelength
where
.
This is Wien’s displacement law for wavelengths. Note that .
A wood fire has a temperature of around (or around
celsius), and substituting this in above produces wavelengths of
and
.
For comparison, the wavelengths of visible light range between about for red light and
for violet light. Both of these computations correctly suggest that most of the radiation from a wood fire is infrared; this is the radiation that’s heating you but not producing visible light.
By contrast, the temperature of the surface of the sun is about , and substituting that in produces wavelengths
and
which correctly suggests that the sun is emitting lots of light all around the visible spectrum (hence appears white). In some sense this argument is backwards: probably the visible spectrum evolved to be what it is because of the wide availability of light in the particular frequencies the sun emits the most.
Finally, a more sobering calculation. Nuclear explosions reach temperatures of around , comparable to the temperature of the interior of the sun. Substituting this in produces wavelengths of
and
.
These are the wavelengths of X-rays. Planck’s law doesn’t just stop at the maximum, so nuclear explosions also produce even shorter wavelength radiation, namely gamma rays. This is solely the radiation a nuclear explosion produces because it is hot, as opposed to the radiation it produces because it is nuclear, such as neutron radiation.
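Here is a short script that recovers the displacement-law constants by solving the transcendental equation $x = a(1 - e^{-x})$ for $a = 3$ and $a = 5$, and then evaluates the peak wavelength at some representative temperatures; the temperatures are my own ballpark figures rather than values quoted from above:

```python
import math

# Solve x = a (1 - e^{-x}) by fixed-point iteration: a = 3 for the frequency
# form of Wien's law, a = 5 for the wavelength form.
def wien_root(a):
    x = a
    for _ in range(200):
        x = a * (1 - math.exp(-x))
    return x

h = 6.62607015e-34     # Planck constant (J s)
k = 1.380649e-23       # Boltzmann constant (J/K)
c = 2.99792458e8       # speed of light (m/s)

x3, x5 = wien_root(3), wien_root(5)   # about 2.821 and 4.965
b = h * c / (x5 * k)                  # peak wavelength = b / T, about 2.9e-3 m K
print(x3, x5, b)

# Ballpark temperatures (my own figures, not taken from the text above).
for name, T in [("wood fire", 1.0e3), ("surface of the sun", 5.8e3), ("nuclear explosion", 1.0e7)]:
    print(f"{name:20s} T = {T:9.0f} K   peak wavelength ~ {b / T:.2e} m")
```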
on the partition function . In this post I’d like to record an elementary argument, making no use of generating functions, giving a lower bound of the form
for some
, which might help explain intuitively why this exponential-of-a-square-root rate of growth makes sense.
The starting point is to think of a partition of as a Young diagram of size
, or equivalently (in French coordinates) as a lattice path from somewhere on the y-axis to somewhere on the x-axis, which only steps down or to the right, such that the area under the path is
. Heuristically, if the path takes a total of
steps then there are about
such paths, and if the area under the path is
then the length of the path should be about
, so this already goes a long way towards explaining the exponential-of-a-square-root behavior.
We can make this argument into a rigorous lower bound as follows. Consider lattice paths beginning at and ending at
where
is a positive integer to be determined later. Suppose that the steps of the lattice paths alternate between paths of the form (down, right, right, down) and (right, down, down, right), which means that
is even. Then the area under the path is exactly the area of the right triangle it approximates, which is
, and the number of such paths is exactly
. This gives
whenever , so we get a lower bound of the form
where
, quite a bit worse than the correct value
. This bound generalizes to all values of
with only a small loss in the exponent because
is nondecreasing (since the lattice path can continue along the line
for awhile at the end before hitting the x-axis).
One reason this construction can’t produce a very good bound is that the partitions we get this way do not resemble the “typical” partition, which (as proven by Vershik and explained by David Speyer here) is a suitably scaled version of the curve
$e^{-\pi x/\sqrt{6}} + e^{-\pi y/\sqrt{6}} = 1,$
whereas our partitions resemble the straight line $x + y = \text{const}$. With a more convex curve we can afford to make the path longer while fixing the area under it.
So let’s remove the restriction that our curve resemble as follows. Rather than count
directly, we will count
, so the number of lattice paths with area at most
. Since
is increasing, it must be at least
times this count. And we have much more freedom to pick a path now that we only need to bound its area rather than find it exactly. We can now take the path to be any Dyck path from
to
, of which there are
where denotes the Catalan numbers and the asymptotic can be derived from Stirling’s approximation. The area under a Dyck path is at most
, which gives the lower bound
and hence, when (so that
),
which (ignoring polynomial factors) is of the form where
, a substantial improvement over the previous bound. Although we are now successfully in a regime where our counts include paths of a typical shape, we’re overestimating the area under them, so the bound is still not as good as it could be.
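For comparison, here is a quick computation of the exact growth rate $\log p(n)/\sqrt{n}$, which creeps slowly toward the true constant $\pi \sqrt{2/3} \approx 2.565$ (plain Python, no generating functions):

```python
import math

# Exact partition numbers via the standard coin-counting dynamic program,
# used to watch log p(n) / sqrt(n) creep toward pi * sqrt(2/3) ~ 2.565.
def partitions(N):
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partitions(2000)
for n in [100, 500, 1000, 2000]:
    print(n, math.log(p[n]) / math.sqrt(n))
```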
Let denote the partition function, which describes the number of ways to write
as a sum of positive integers, ignoring order. In 1918 Hardy and Ramanujan proved that
is given asymptotically by
$\displaystyle p(n) \sim \frac{1}{4n\sqrt{3}} \exp \left( \pi \sqrt{\frac{2n}{3}} \right).$
This is a major plot point in the new Ramanujan movie, where Ramanujan conjectures this result and MacMahon challenges him by agreeing to compute $p(200)$ and comparing it to what this approximation gives.
Verification
MacMahon computed using a recursion for
implied by the pentagonal number theorem, which makes it feasible to compute
by hand. Instead of doing it by hand I asked WolframAlpha, which gives
$p(200) = 3972999029388,$
whereas the asymptotic formula gives approximately
$4.100 \times 10^{12}.$
Ramanujan is shown getting closer than this in the movie, presumably using a more precise asymptotic.
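Here is a sketch of the computation in Python, using the pentagonal number recurrence for the exact value and the asymptotic formula above for the approximation:

```python
import math

# Euler's pentagonal number recurrence
#   p(n) = sum_{k >= 1} (-1)^{k+1} ( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ),
# which is what made computing p(200) by hand feasible.
def partition_numbers(N):
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p200 = partition_numbers(200)[200]
hr = math.exp(math.pi * math.sqrt(2 * 200 / 3)) / (4 * 200 * math.sqrt(3))
print(p200, hr, hr / p200)   # exact value, the asymptotic, and their ratio (a few percent off)
```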
How might we conjecture such a result? In general, a very powerful method for figuring out the growth rate of a sequence is to associate to it the generating function
and relate the behavior of , often for complex values of
, to the behavior of
. The most comprehensive resource I know for descriptions both of how to write down generating functions and to analyze them for asymptotic behavior is Flajolet and Sedgewick’s Analytic Combinatorics, and the rest of this post will follow that book’s lead closely.
The meromorphic case
The generating function strategy is easiest to carry out in the case that is meromorphic (for example, if it’s rational); in that case the asymptotic growth of
is controlled by the behavior of
near the pole (or poles) closest to the origin. For rational
this is particularly simple and just a matter of looking at the partial fraction decomposition.
Example. The generating function for the sequence of partitions into parts of size at most
is the rational function
whose most important pole is at and of order
, and whose other poles are at the
roots of unity, and of order at most
. We can factor
as
which gives, upon substituting , that the partial fraction decomposition begins
and hence, using the fact that
we conclude that is asymptotically
.
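As a quick numerical check of this leading term, with the standard closed form $n^{k-1}/(k!\,(k-1)!)$ written out explicitly:

```python
import math

# Count partitions of n into parts of size at most k and compare with the
# leading term n^(k-1) / (k! (k-1)!) coming from the pole of order k at z = 1.
def partitions_max_part(N, k):
    p = [1] + [0] * N
    for part in range(1, k + 1):
        for n in range(part, N + 1):
            p[n] += p[n - part]
    return p

k, N = 5, 100000
exact = partitions_max_part(N, k)[N]
estimate = N**(k - 1) / (math.factorial(k) * math.factorial(k - 1))
print(exact, estimate, exact / estimate)   # the ratio tends to 1 as N grows
```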
We ignored all of the other terms in the partial fraction decomposition to get this estimate, so there’s no reason to expect it to be particularly accurate unless is fairly large compared to
. Nonetheless, let’s see if we can learn anything from it about how big we should expect the partition function
itself to be. Taking logs and using the rough form
of Stirling’s approximation gives
and substituting for some
gives
.
If then this quantity is negative; at this point we’re clearly not in the regime where this approximation is accurate. If
then the first term dominates and we get something that grows like
. But if
then the first term vanishes and we get
.
This is at least qualitatively the correct behavior for , and the multiplicative constant isn’t so off either: the correct value is
.
Example. A weak order is like a total order, but with ties: for example, it describes a possible outcome of a horse race. Let denote the number of weak orders of
elements (the ordered Bell numbers, or Fubini numbers). General techniques described in Flajolet and Sedgewick can be used to show that
has exponential generating function
(which should be parsed as ; loosely speaking, this corresponds to a description of the combinatorial species of weak orders as the species of “lists of nonempty sets”). This is a meromorphic function with poles at
. The unique pole closest to the origin is at
, and we can compute using l’Hopital’s rule that
and hence the partial fraction decomposition of begins
which gives the asymptotic
Curiously, the error in the above approximation has some funny quasi-periodic behavior corresponding to the arguments of the next most relevant poles, at .
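A short check of this estimate, using the closed form $n!/(2(\log 2)^{n+1})$ for the single-pole approximation:

```python
import math

# Ordered Bell numbers via a(n) = sum_{k=1}^{n} C(n, k) a(n-k), compared with
# the single-pole estimate n! / (2 (log 2)^(n+1)).
a = [1]
for n in range(1, 16):
    a.append(sum(math.comb(n, k) * a[n - k] for k in range(1, n + 1)))

for n in [5, 10, 15]:
    approx = math.factorial(n) / (2 * math.log(2) ** (n + 1))
    print(n, a[n], a[n] / approx)   # the ratio is extremely close to 1
```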
The saddle point bound
However, understanding meromorphic functions is not enough to handle the case of partitions, where the relevant generating function is
$\displaystyle \sum_{n \ge 0} p(n) z^n = \prod_{k \ge 1} \frac{1}{1 - z^k}.$
This function is holomorphic inside the unit disk but has essential singularities at every root of unity, and to handle it we will need a more powerful method known as the saddle point method, which is beautifully explained with pictures both in Flajolet and Sedgewick and in these slides, and concisely explained in these notes by Jacques Verstraete.
The starting point is that we can recover the coefficients of a generating function
using the Cauchy integral formula, which gives
where is a closed contour in
with winding number
around the origin and such that ;
is holomorphic in an open disk containing
. This integral can be straightforwardly bounded by the product of the length of
, denoted
, and the maximum value that
takes on it, which gives
.
From here, the name of the game is to attempt to pick such that this bound is as good as possible. For example, if we pick
to be the circle of radius
, then
. If
has nonnegative coefficients, which will always be the case in combinatorial applications, then
will take its maximum value when
, which gives the saddle point bound
as long as is holomorphic in an open disk of radius greater than
. Strictly speaking, this bound doesn’t require any complex analysis to prove: if
has nonnegative coefficients and
converges then for
we clearly have
.
But later we will actually use some complex analysis to improve the saddle point bound to an estimate.
The saddle point bound gets its name from what happens when we try to optimize this bound as a function of : we’re led to pick a value of
such that
is minimized (subject to the constraint that
is less than the radius of convergence
of
). This value will be a solution to the saddle point equation
and at such points the function , as a real-valued function of
, will have a saddle point. The saddle point equation can be rearranged into the more convenient form
and from here we can attempt to pick (which will generally depend on
) such that this equation is at least approximately satisfied, then see what kind of bound we get from doing so.
We’ll often be able to write for some nice
, in which case the saddle point equation simplifies further to
.
Example. Let $F(z) = e^z$; we’ll use this generating function to get a lower bound on factorials. The saddle point bound gives
for any , since
has infinite radius of convergence. The saddle point equation gives $x = n$, which gives the upper bound
$\displaystyle \frac{1}{n!} \le \frac{e^n}{n^n},$
or equivalently the lower bound
$\displaystyle n! \ge \left( \frac{n}{e} \right)^n.$
We see that the saddle point bound already gets us within a factor of of the true answer.
This example has a probabilistic interpretation: the saddle point bound can be rearranged to read
which says that the probability that a Poisson random variable with rate takes the value
is at most
. When we take
we’re looking at the Poisson random variable with rate
, which for large
is concentrated around its mean
.
Example. Let denote the number of involutions (permutations squaring to the identity) on
elements. These are precisely the permutations consisting of cycles of length
or
, so the exponential formula gives an exponential generating function
.
The saddle point bound gives
for any , since as above
has infinite radius of convergence. The saddle point equation is
with exact solution
and using for simplicity gives
This approximation turns out to also only be off by a factor of from the true answer.
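A sketch comparing this bound with the exact involution counts, computed from the standard recurrence $I(n) = I(n-1) + (n-1)\,I(n-2)$:

```python
import math

# Involutions satisfy I(n) = I(n-1) + (n-1) I(n-2); since their exponential
# generating function is exp(z + z^2/2), the saddle point bound with x = sqrt(n)
# reads I(n) <= n! exp(sqrt(n) + n/2) / n^(n/2).
I = [1, 1]
for n in range(2, 61):
    I.append(I[n - 1] + (n - 1) * I[n - 2])

for n in [10, 30, 60]:
    x = math.sqrt(n)
    bound = math.factorial(n) * math.exp(x + x * x / 2) / x**n
    print(n, bound / I[n])   # the overshoot grows only like a power of n
```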
Example. The Bell numbers count the number of partitions of a set of
elements into disjoint subsets. They have exponential generating function
(which also admits a combinatorial species description: this is the species of “sets of nonempty sets”), which as above has infinite radius of convergence. The saddle point equation is
which is approximately solved when (a bit more precisely, we want something more like
, and it’s possible to keep going from here but let’s not), giving the saddle point bound
.
Edit, 11/10/20: I don’t know how far off this is from the true asymptotics. According to Wikipedia it appears to be off by at least an exponential factor.
Example. Now let’s tackle the example of the partition function . Its generating function
has radius of convergence , unlike the previous examples, so our saddle points
will be confined to the interval
(between the pole of
at
and the essential singularity at
). The saddle point equation involves understanding the logarithmic derivative of
, so let’s try to understand the logarithm. (
previously appeared on this blog here as the generating function of the number of subgroups of
of index
, although that won’t be directly relevant here.) The logarithm is
and it will turn out to be convenient to rearrange this a little: expanding this out gives
and exchanging the order of summation gives
.
The point of writing in this way is to make the behavior as
very clear: we see that as
,
approaches
.
It turns out that as , the saddle point gets closer and closer to the essential singularity at
. Near this singularity we may as well for the sake of convenience replace
with
, which gives the approximate saddle point equation
.
This saddle point equation reinforces the idea that is close to
: the only way to solve it in the interval
is to take something like
, so we can ignore the
factor, which gives the approximate saddle point
.
and saddle point bound
.
From here we need an upper bound on and a lower bound on
to get an upper bound on
. Edit, 12/12/16: As Akshaj explains in the comments, the argument that was previously here regarding the lower bound on
was incorrect. Akshaj’s argument involving the Taylor series of log gives
As for the upper bound on , if
then
for
, hence
from which we conclude that
and hence that
which is off from the true answer.
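As a numerical sanity check, the following sketch evaluates the bound at the approximate saddle point $x = 1 - \pi/\sqrt{6n}$, using the approximation $\log F(x) \approx \pi^2/(6(1-x))$ from above, and compares it with the exact $p(n)$:

```python
import math

def partitions(N):
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partitions(3000)
# Saddle point bound exp(pi^2 / (6(1-x))) / x^n at the approximate saddle point
# x = 1 - pi / sqrt(6n); the overshoot over the true p(n) grows only polynomially.
for n in [100, 1000, 3000]:
    x = 1 - math.pi / math.sqrt(6 * n)
    log_bound = math.pi**2 / (6 * (1 - x)) - n * math.log(x)
    print(n, math.exp(log_bound - math.log(p[n])))
```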
Of course, without knowing a better method that in fact gives the true answer, we have no way of independently verifying that the saddle point bounds are as close as we’ve claimed they are. We need a more powerful idea to turn these bounds into asymptotics and recover our factors of .
Hardy’s approach
In the eighth of Hardy’s twelve lectures on Ramanujan’s work, he describes a more down-to-earth way to guess that
starting from the approximation
.
as . He first observes that
must grow faster than a polynomial but slower than an exponential: if
grew like a polynomial then
would have a pole of finite order at
, whereas if
grew like an exponential then
would have a singularity closer to the origin. Hence, in Hardy’s words, it is “natural to conjecture” that
for some and some
. From here he more or less employs a saddle point bound in reverse, estimating
based on the size of its largest term. It’s convenient to write so that this sum can be rewritten
so that, differentiating in , we see that the maximum occurs when
. We want to turn this into an estimate for
involving only
, so we want to eliminate
and use the approximation
, valid as
. This gives
(keeping in mind that is positive), so that
and
which altogether gives (again, approximating by its largest term)
for . Matching this up with
gives
and
hence .
The saddle point method
The saddle point bound, although surprisingly informative, uses very little of the information provided by the Cauchy integral formula. We ought to be able to do a lot better by picking a contour to integrate over such that we can, by analyzing the contour integral more closely, bound the contour integral
more carefully than just by the “trivial” bound .
From here we’ll still be looking at saddle points, but more carefully, as follows. Ignoring the length factor in the trivial bound, if we try to minimize
we’ll end up at a saddle point
of
(so we get a point very slightly different from above, where we used a saddle point of
). Depending on the geometry of the locations of the zeroes, poles, and saddle points of
, we can hope to choose
to be a contour that passes through this saddle point
in such a way that
is in fact maximized at
. This means that
should enter and exit the “saddle” around the saddle point in the direction of steepest descent from the saddle point.
If we pick carefully, and
has certain nice properties, we can furthermore hope that
- the contour integral over a small arc (in terms of the circle method, the “major arc”) is easy to approximate (usually by a Gaussian integral), and
- the contour integral over everything else (in terms of the circle method, the “minor arc”) is small enough that it’s easy to bound.
The Gaussian integrals that often appear when integrating over the major arc are responsible for the factors of we lost above.
Let’s see how this works in the case of the factorials, where . The function
has a unique saddle point at
, but to simplify the computation we’ll take
as before. We’ll take
to be the circle of radius
, which gives a contour integral which can be rewritten in polar coordinates as
.
This integral can also be thought of as coming from computing the Fourier coefficients of a suitable Fourier series. Write the integrand as , so that
.
only controls the phase of the integrand, and since it doesn’t vary much (it grows like
for small
) we’ll be able to ignore it.
controls the absolute value of the integrand and so is much more important. For small values of
we have
so from here we can try to break up the integral into a “major arc” where for some small (where the meaning of “small” depends on
)
and a “minor arc” consisting of the other values of
, and try to show both the integral over the major arc is well approximated by the Gaussian integral
and that the integral over the minor arc is negligible compared to this. This can be done, and the details are in Flajolet and Sedgewick (who take ); ignoring all the details, the conclusion is that
and hence that
which is exactly Stirling’s approximation. This computation also has a probabilistic interpretation: it says that the probability that a Poisson random variable with rate takes its mean value
is asymptotically
, which can be viewed as a corollary of the central limit theorem, since such a Poisson random variable is a sum of
independent Poisson random variables with rate
.
In general we’ll again find that, under suitable hypotheses, we can approximate the major arc integral by a Gaussian integral (using the same strategy as above) and bound the minor arc integral to show that it’s negligible. This gives the following:
Theorem (saddle point approximation): Under suitable hypotheses, if and
, let
be a saddle point of
, so that
. Then, as
, we have
.
Directly applying this theorem to the partition function is difficult because of the difficulty of bounding what happens on the minor arc. has essential singularities on a dense subset of the unit circle, and delicate analysis has to be done to describe the contributions of these singularities (or more precisely, of saddle points near these singularities); the circle method used by Hardy and Ramanujan to prove the asymptotic formula accomplishes this by choosing the contour
very carefully and then using modularity properties of
(which is closely related to the eta function).
We will completely ignore these difficulties and pretend that only the contribution from the (saddle point near the) essential singularity at matters to the leading term. Even ignoring the minor arc, to make use of the saddle point approximation requires that we know the asymptotics of
as
in more detail than we do right now.
Unfortunately there does not seem to be a really easy way to do this; Hardy’s approach uses the modular properties of the eta function, while Flajolet and Sedgewick use Mellin transforms. So at this point we’ll just quote without proof the asymptotic we need from Flajolet and Sedgewick, up to the accuracy we need, namely
.
Although this changes the location of the saddle point slightly, for ease of computation (and because it will lose us at worst multiplicative factors in the end) we’ll continue to work with the same approximate saddle point
as before. The saddle point approximation differs from the saddle point bound we established earlier in two ways: first, the introduction of the
term contributes a factor of
and second, the introduction of the denominator contributes another factor, which we approximate as follows. We have
and hence
so that
which gives (we actually know the multiplicative constant here, but it doesn’t matter because we already lost multiplicative constants when estimating
) and hence
.
Altogether the saddle point approximation, up to a multiplicative constant, is
$\displaystyle p(n) \approx \frac{C}{n} \, e^{\pi \sqrt{2n/3}}$ for some constant $C$, which matches the Hardy–Ramanujan asymptotic up to the value of the constant.