- $A$ is a subset of $S$;
- $k$ is an element of $S$;
- the cardinality of $A$ is less than or equal to $p$;
- if ${\mathop {\rm Card}}(A)=p$, then $k\in A$.
- $f(A,k) = (A \setminus\{k\}, k)$ if $k\in A$;
- $f(A,k) = (A\cup\{k\}, k)$ if $k\notin A$.
I would like to tell here a story that runs over some 150 years of mathematics, around the following question: given a power series $\sum a_n T^n$ (in one variable), how can you tell it comes from a rational function?
There are two possible motivations for such a question. One comes from complex function theory: you are given an analytic function and you wish to understand its nature — the simplest of them being the rational functions, it is natural to wonder if that happens or not (the next step would be to decide whether that function is algebraic, as in the problem of Hermann Amandus Schwarz (1843–1921)). Another motivation starts from the coefficients $(a_n)$, of which the power series is the generating series; indeed, the generating series is a rational function if and only if the sequence of coefficients satisfies a linear recurrence relation.
At this stage, there are few tools to answer that question, besides a general algebraic criterion which essentially reformulates the property that the $(a_n)$ satisfy a linear recurrence relation. For any integers $m$ and $q$, let $D_m^q$ be the determinant of size $(q+1)$ given by
\[ D_m^q = \begin{vmatrix}
a_m & a_{m+1} & \dots & a_{m+q} \\
a_{m+1} & a_{m+2} & \dots & a_{m+q+1} \\
\vdots & \vdots & & \vdots \\
a_{m+q} & a_{m+q+1} & \dots & a_{m+2q} \end{vmatrix}. \]
These determinants are called the Hankel determinants or (when $m=0$) the Kronecker determinants, after the two 19th century German mathematicians Hermann Hankel (1839–1873) and Leopold von Kronecker (1823–1891). With this notation, the following properties are equivalent:
- the power series $\sum a_n T^n$ is a rational function;
- there exists an integer $q$ such that $D_m^q=0$ for all sufficiently large integers $m$.
(The proof of that classic criterion is not too complicated, but the standard proof is quite smart. In his book Algebraic numbers and Fourier analysis, Raphaël Salem gives a proof which is arguably easier.)
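Before going on, here is a quick numerical illustration of that criterion — a sketch of mine, not part of the original story. The Fibonacci numbers satisfy the linear recurrence $a_{n+2}=a_{n+1}+a_n$, so their Hankel determinants $D_m^q$ should vanish as soon as $q\geq 2$, whereas a sequence satisfying no linear recurrence, such as the factorials, gives nonzero determinants.

```python
# Sketch: compute Hankel determinants D_m^q for two integer sequences.
# Fibonacci satisfies a linear recurrence of order 2, so D_m^2 should all vanish;
# the factorials satisfy no linear recurrence, so their determinants are nonzero.

from itertools import permutations


def det(matrix):
    """Exact determinant of a small integer matrix, via the Leibniz formula."""
    n = len(matrix)
    total = 0
    for perm in permutations(range(n)):
        # the sign of perm is (-1) to the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        product = 1
        for i in range(n):
            product *= matrix[i][perm[i]]
        total += sign * product
    return total


def hankel_det(a, m, q):
    """D_m^q: determinant of the (q+1) x (q+1) Hankel matrix built on a_m, ..., a_{m+2q}."""
    return det([[a[m + i + j] for j in range(q + 1)] for i in range(q + 1)])


fib = [0, 1]
fact = [1]
for n in range(1, 30):
    fib.append(fib[-1] + fib[-2])
    fact.append(fact[-1] * n)

print([hankel_det(fib, m, 2) for m in range(5)])   # expected: all zero
print([hankel_det(fact, m, 2) for m in range(5)])  # expected: nonzero
```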
This algebraic criterion is very general, but it is almost impossible to prove the vanishing of these determinants without further information, and it is at this stage that Émile Borel enters the story. Émile Borel (1871–1956) was not only a very important mathematician of the first half of the 20th century, through his work on analysis and probability theory; he also was a member of parliament, a minister of the Navy, and a member of the Résistance during WW2. He founded the French research institution CNRS and the Institut Henri Poincaré. He was also the first president of the Confédération des travailleurs intellectuels, an intellectual workers' union.
In his 1893 paper « Sur un théorème de M. Hadamard », Borel proves the following theorem:
Theorem. — If the coefficients \(a_n\) are integers and if the power series \(\sum a_n T^n \) “defines” a function (possibly with poles) on a disk centered at the origin and of radius strictly greater than 1, then that power series is a rational function.
Observe how these two hypotheses belong to two quite unrelated worlds: the first one sets the question within number theory, while the second one pertains to complex function theory. It looks almost like magic that these two hypotheses lead to the nice conclusion that the power series is a rational function.
It is also worth remarking that the second hypothesis is really necessary for the conclusion to hold, since rational functions define functions (with poles) on the whole complex plane. The status of the first hypothesis is more mysterious: while it is not necessary, the conclusion may fail without it. For example, the exponential series \(\sum T^n/n!\) does define a function (without poles) on the whole complex plane, but it is not rational (it grows too fast at infinity).
However, the interaction of number theoretical hypotheses with the question of the nature of power series was not totally unexplored at the time of Borel. For example, an 1852 theorem of the German mathematician Gotthold Eisenstein (Über eine allgemeine Eigenschaft der Reihen-Entwicklungen aller algebraischen Functionen) shows that when the coefficients \(a_n\) of the expansion \(\sum a_nT^n\) of an algebraic function are rational numbers, the denominators are not arbitrary: there is an integer \(D\geq 1\) such that for all \(n\), \(a_n D^{n+1}\) is an integer. As a consequence of that theorem of Eisenstein, the exponential series and the logarithmic series cannot be algebraic.

It's always time somewhere on the Internet for a mathematical proof, so I have no excuse for not telling you *how* Émile Borel proved that result. He uses the above algebraic criterion, hence needs to prove that some of the determinants \(D^q_m\) introduced above vanish (for some \(q\) and for all \(m\) large enough). His idea consists in observing that these determinants are integers, so that if you wish to prove that they vanish, it suffices to prove that they are smaller than one!
If non-mathematicians are still reading me, there's no mistake here: the main argument of the proof is the remark that a nonzero integer is at least one. While this may sound like a trivial remark, it is something I like to call the main theorem of number theory, because it lies at the heart of almost all proofs in number theory.
So one has to bound determinants from above, and here Borel invokes the « théorème de M. Hadamard »: a determinant, being the volume of the parallelepiped formed by its rows, is smaller than the product of the norms of these rows, considered as vectors of Euclidean space. In 2-D, the area of a parallelogram is at most the product of the lengths of its edges! (Jacques Hadamard (1865–1963) is known for many extremely important results, notably the first proof of the Prime number theorem. It is funny that this elementary result went into the title of a paper!)
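In concrete terms, Hadamard's inequality is easy to test numerically; here is a throwaway check of mine (not anything from Borel's paper): the absolute value of a determinant never exceeds the product of the Euclidean norms of the rows.

```python
# Sanity check of Hadamard's inequality: |det(M)| <= product of the row norms of M.

import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(4, 4)).astype(float)  # a random small integer matrix

lhs = abs(np.linalg.det(M))
rhs = np.prod(np.linalg.norm(M, axis=1))  # product of the Euclidean norms of the rows
print(lhs <= rhs + 1e-9, lhs, rhs)        # the inequality should always hold
```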
But there's no hope that applying Hadamard's inequality to our initial matrix can be of any help: that matrix has integer coefficients, so that all its rows have norm at least one. So Borel starts making clever row combinations on the Hankel matrices that take into account the poles of the function that the given power series defines.
Basically, if \(f=\sum a_nT^n\), there exists a polynomial \(h=\sum c_mT^m\) such that the power series \(g=fh = \sum b_n T^n\) defines a function without poles on some disk \(D(0,R)\) with \(R>1\). Using complex function theory (Cauchy's inequalities), this implies that the coefficients \(b_n\) converge rapidly to 0, roughly as \(R^{-n}\). For the same reason, the coefficients \(a_n\) cannot grow too fast, at most as \(r^{-n}\) for some \(r>0\). The formula \(g=fh\) shows that the coefficients \(b_n\) are linear combinations of the \(a_n\), so that the determinant \(D_n^q\) is also equal to \[ \begin{vmatrix} a_n & a_{n+1} & \dots & a_{n+q} \\ \vdots & \vdots & & \vdots \\ a_{n+p-1} & a_{n+p} & \dots & a_{n+p+q-1} \\ b_{n+p} & b_{n+p+1} & \dots & b_{n+p+q} \\ \vdots & \vdots & & \vdots \\ b_{n+q} & b_{n+q+1} & \dots & b_{n+2q} \end{vmatrix}.\] Now, Hadamard's inequality implies that the determinant \(D_n^q\) is (roughly) bounded above by \( (r^{-n} )^p (R^{-n}) ^{q+1-p} \): there are \(p\) rows bounded above by some \(r^{-n}\), and the next \(q+1-p\) rows are bounded above by \(R^{-n}\). This expression rewrites as \( 1/(r^pR^{q+1-p})^n\). Since \(R>1\), we may choose \(q\) large enough so that \(r^p R^{q+1-p}>1\), and then, when \(n\) grows to infinity, the determinant becomes smaller than 1. Hence it vanishes!
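To get a feeling for the orders of magnitude, take purely illustrative values (they appear nowhere in Borel's paper), say \(r=1/2\), \(R=2\) and \(p=3\). Then \(r^pR^{q+1-p}=2^{q-5}\), which is \(>1\) as soon as \(q\geq 6\); with such a choice of \(q\), the bound on \(D_n^q\) is roughly \(2^{-(q-5)n}\), which drops below 1 (and in fact tends to 0) as \(n\) grows.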
The next chapter of this story happens in 1928, under the hands of the Hungarian mathematician George Pólya (1887–1985). Pólya had already written several papers exploring the interaction of number theory and complex function theory, one of which will even reappear later in this thread. In his paper “Über gewisse notwendige Determinantenkriterien für die Fortsetzbarkeit einer Potenzreihe”, he studied the analogue of Borel's question when the disk of radius \(R\) is replaced by an arbitrary domain \(U\) of the complex plane containing the origin, proving that if \(U\) is big enough, then the initial power series is a rational function. It is however not so obvious how one should measure the size of \(U\), and it is at this point that electrostatics enters the picture.
In fact, it is convenient to make an inversion: the assumption is that the series \(\sum a_n / T^n\) defines a function (with poles) on the complement of a compact subset \(K\) of the complex plane. Imagine that this compact set is made of metal, put at potential 0, and put a unit electric charge at infinity. According to the 2-D laws of electrostatics, this creates an electric potential \(V_K\) which is identically \(0\) on \(K\) and behaves as \( V_K(z)\approx \log(|z|/C_K)\) at infinity. Here, \(C_K\) is a positive constant which is called the capacity of \(K\).
Theorem (Pólya). — Assume that the \(a_n\) are integers and the series \(\sum a_n/T^n\) defines a function (with poles) on the complement of \(K\). If the capacity of \(K\) is \(\lt1\), then \(\sum a_n T^n\) is rational.
To apply this theorem, it is important to know how to compute capacities. This was a classic theme of complex function theory and numerical analysis some 50 years ago. Indeed, what the electric potential does is solve the Laplace equation \(\Delta V_K=0\) outside of \(K\), with Dirichlet condition on the boundary of \(K\).
In fact, the early days of complex analysis made remarkable use of this idea. For example, it was by solving the Laplace equation that Bernhard Riemann proved the existence of meromorphic functions on “Riemann surfaces”, even though analysis was not sufficiently developed at that time (around 1860) to justify this rigorously. In a stunningly creative move, Riemann imagines that his surface is partly made of metal and partly of insulating material, and he deduces the construction of the desired function from the electric potential.
More recently, complex analysis and potential theory also had applications to fluid dynamics, for example to compute (at least approximately) the flow of air outside of an airplane wing. (I am not a specialist of this, but I'd guess the development of numerical methods that run on modern computers rendered these beautiful methods obsolete.)
The relation between the theorems of Borel and Pólya is that the capacity of a disk is its radius. This can be seen from the fact that \(V(z)=\log(|z|/R)\) solves the Laplace equation outside of the disk of radius \(R\), with Dirichlet condition on its boundary.
A few other capacities have been computed, not too many, in fact, because it appears to be a surprisingly difficult problem. For example, the capacity of an interval is a fourth of its length.

Pólya's proof is similar to Borel's, but considers the Kronecker determinants in place of Hankel's. However, the linear combinations that allow one to show that these determinants are small are not as explicit as in Borel's proof. They follow from another interpretation of the capacity, introduced by the Hungarian–Israeli mathematician Michael Fekete (1886–1957; born in what was then Austria–Hungary and is now Serbia, he emigrated to Palestine in 1928).
You know that the diameter \(d_2(K)\) of \(K\) is the upper bound of all distances \(|x-y|\) where \(x,y\) are arbitrary points of \(K\). Now, for an integer \(n\geq 2\), consider the upper bound \(d_n(K)\) of all products of distances \( \prod_{i\neq j}|x_j-x_i|^{1/n(n-1)}\) where \(x_1,\dots,x_n\) are arbitrary points of \(K\). It is not so hard to prove that the sequence \(d_n(K)\) decreases with \(n\), and the limit \(\delta(K)\) of that sequence is called the transfinite diameter by Fekete.
Proposition. — \( \delta(K)= C_K\).
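As a rough numerical illustration of that proposition (my own sketch, under the assumption that Chebyshev points are near-optimal configurations on an interval), one can compute the quantity \(\prod_{i\neq j}|x_j-x_i|^{1/n(n-1)}\) for \(n\) Chebyshev points of \(K=[-1,1]\) and watch it approach the capacity \(C_K=1/2\):

```python
# Sketch: approximate the transfinite diameter of K = [-1, 1] using
# Chebyshev-Lobatto points as (presumably near-optimal) configurations.
# The values should approach the capacity 1/2 as n grows.

import numpy as np


def geometric_mean_of_distances(points):
    """prod_{i != j} |x_i - x_j| ** (1 / (n (n - 1))), computed via logarithms."""
    n = len(points)
    logs = [np.log(abs(points[i] - points[j]))
            for i in range(n) for j in range(n) if i != j]
    return np.exp(sum(logs) / (n * (n - 1)))


for n in (10, 50, 200):
    chebyshev = np.cos(np.pi * np.arange(n) / (n - 1))  # Chebyshev-Lobatto points of [-1, 1]
    print(n, geometric_mean_of_distances(chebyshev))    # should tend to 0.5
```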
This proposition allows one to make a link between capacity theory and another theme of complex function theory, namely the theory of best approximation, which ends up in Pólya's proof: the adequate linear combination for the \(n\)th row is given by the coefficients of the monic polynomial of degree \(n\) which has the smallest least upper bound on \(K\).
If all this is of some appeal to you, there's a wonderful little book by Thomas Ransford, Potential Theory in the Complex Plane, which I find quite readable (say, from 3rd or 4th year of math studies on).
In the forthcoming episodes, I'll discuss two striking applications of the theorems of Borel and Pólya: the proof by Bernard Dwork (in 1960) of a conjecture of Weil, and a new proof (in 1987) by Jean-Paul Bézivin and Philippe Robba of the transcendence of the numbers \(e\) and \(\pi\), two results initially proven by Charles Hermite and Ferdinand von Lindemann in 1873 and 1882 respectively.
I would like to go back to a quite delicate question of commutative algebra, that of associated prime ideals of modules. In most textbooks (Bourbaki, Matsumura…), this concept is considered for modules over a noetherian ring, while it is also necessary to consider it in a greater generality for some applications in algebraic geometry. For my book, (Mostly) commutative algebra (Springer Nature, 2021), I preferred to introduce the general concept (§6.5), because I observed that the initial proofs are in fact easier. In yesterday's class (Cohomology of coherent sheaves, 2nd year of Master course at Université Paris Cité), some remarks of a student, Elias Caeiro, helped me simplify two steps of the treatment I proposed in my book.
Definition. — Let $A$ be a ring and let $M$ be an $A$-module. Say that a prime ideal $P$ of $A$ is associated to $M$ if there exists an element $m\in M$ such that $P$ is minimal among all prime ideals containing $\ann_A(m)$.
We write $\Ass_A(M)$ (sometimes spelt out as “assassin”) for the set of all associated prime ideals of $M$.
(Here, $\ann_A(m)$ is the annihilator of $m$, the ideal of all $a\in A$ such that $am=0$.)
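To see the definition at work on a small example (of my own choosing), take $A=\mathbf Z$ and $M=\mathbf Z/12\mathbf Z$. One has $\ann_A(\bar 1)=12\mathbf Z$, whose minimal prime over-ideals are $(2)$ and $(3)$, while $\ann_A(\bar 4)=3\mathbf Z$ and $\ann_A(\bar 6)=2\mathbf Z$; every other nonzero element $\bar m$ has annihilator $d\mathbf Z$ for some divisor $d>1$ of $12$, whose minimal primes are again among $(2)$ and $(3)$. Hence $\Ass_A(M)=\{(2),(3)\}$.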
There is a geometric way to interpret this definition: it means that in the spectrum $\Spec(A)$, the irreducible closed set $V(P)$ (of which $P$ is the generic point) is an irreducible component of $V(\ann_A(m))$. Thanks to this remark, associated prime ideals are compatible with localisation: \[ \Ass_{S^{-1}A}(S^{-1}M) = \Ass_A(M) \cap \Spec(S^{-1}A), \] where $\Spec(S^{-1}A)$ is identified with the subset of $\Spec(A)$ consisting of prime ideals $P$ which are disjoint from $S$. In particular, $P$ is associated to $M$ if and only if the maximal ideal $PA_P$ of the local ring $A_P$ is associated to the module $M_P$.
Here is what the associated prime ideals mean, from the point of view of module theory.
Proposition. — Let $a\in A$.
a) The multiplication by $a$ is injective in $M$ if and only if $a$ does not belong to any associated prime ideal of $M$.
b) The localized module $M_a$ is zero if and only if $a$ belongs to all associated prime ideals of $M$.
c) In particular, $M=0$ if and only if $\Ass_A(M)=\emptyset$.
Proof. — a) If $a$ belongs to the associated prime ideal $P$, then $a$ belongs to the associated prime ideal $PA_P$ of $M_P$, which means that there exists $m\in M$ such that $PA_P$ is the only prime ideal containing $\ann_{A_P}(m)$. Consequently, $a$ is nilpotent modulo $\ann_{A_P}(m)$, and there exist an integer $n\geq 0$ and an element $b\in A\setminus P$ such that $a^nb\in\ann_A(m)$. Take a minimal such $n$. Since $b\notin P$, one has $n\geq 1$; then $a^{n-1}b m\neq0$, while $a\cdot a^{n-1}bm=0$, so that the homothety $(a)_M$ is not injective. Conversely, if $(a)_M$ is not injective, take $m\neq0$ in $M$ such that $am=0$; the annihilator $\ann_A(m)$ is not equal to $A$, hence $V(\ann_A(m))\neq \emptyset$; take an irreducible component of this closed subset — equivalently, a minimal prime ideal $P$ among those containing $\ann_A(m)$; one has $a\in \ann_A(m)$, hence $a\in P$.
b) follows from c), applied to the module $M_a$ over the ring $A_a$: by the compatibility of associated prime ideals with localisation, $\Ass_{A_a}(M_a)$ is empty if and only if every associated prime ideal of $M$ contains $a$.
c) The module $M$ is zero if and only if the multiplication by $0$ is injective on $M$. By a), and since $0$ belongs to every prime ideal, this is equivalent to the fact that $\Ass_A(M)$ is empty.
Corollary. — A prime ideal $P$ is in the support of $M$ if and only if it contains some associated prime ideal.

Indeed, the prime ideal $P$ belongs to the support of $M$ if and only if $M_P\neq0$, if and only if $\Ass_{A_P}(M_P)$ is not empty, if and only if there exists an associated prime ideal of $M$ which belongs to $\Spec(A_P)$, that is, is contained in $P$.
For noetherian rings, one has the following characterization of associated prime ideals, which is usually taken as their definition.
Theorem. — Let $A$ be a noetherian ring and $M$ be an $A$-module. A prime ideal $P$ of $A$ is associated to $M$ if and only if there exists $m\in M$ such that $P=\ann_A(m)$.
Proof. — If $P=\ann_A(m)$, then $P$ is associated to $M$. Conversely, let $m\in M$ and let $P$ be a minimal prime ideal of $A$ among those containing $\ann_A(m)$. We first assume that $A$ is local with maximal ideal $P$; then $P$ is the only prime ideal of $A$ that contains $\ann_A(m)$, which implies that any element of $P$ is nilpotent modulo $\ann_A(m)$. Since $P$ is finitely generated (because $A$ is noetherian), there exists an integer $n$ such that $P^n\subseteq \ann_A(m)$. Take a minimal such $n$. Since $\ann_A(m)\subseteq P$, one has $n\geq 1$; then $P^{n-1}\not\subseteq\ann_A(m)$ so that there exists $b\in P^{n-1}$ such that $bm\neq0$. Then $ab\in P^n$ for every $a\in P$, so that $P\subseteq \ann_A(bm)$, and $\ann_A(bm)\subseteq P$ because $bm\neq0$. Consequently, $P=\ann_A(bm)$. In the general case, we use the case of a local ring to obtain $m\in M$ such that $\ann_{A_P}(m/1)=PA_P$. Consequently, $\ann_A(m)\subseteq P$, and for every $a\in P$, there exists $b\notin P$ such that $abm=0$. Using that $P$ is finitely generated, one finds $b\notin P$ such that $abm=0$ for every $a\in P$; then $\ann_A(bm)=P$, as was to be shown.
From that point on, both presentations converge. One deduces from the preceding theorem that if $A$ is noetherian and $M$ is finitely generated, there exists a composition series $0=M_0\subseteq M_1 \subseteq \dots \subseteq M_n=M$, with successive quotients $M_k/M_{k-1}$ of the form $A/P_k$, for some prime ideals $P_k$ of $A$, and then $\Ass_A(M)$ is contained in $\{P_1,\dots,P_n\}$, in view of the following lemma. In particular, $\Ass_A(M)$ is finite.
Lemma. — Let $M$ be an $A$-module and let $N$ be a submodule of $M$; then $ \Ass_A(N)\subseteq \Ass_A(M)\subseteq \Ass_A(N)\cup \Ass_A(M/N)$.
Proof. — The first inclusion $\Ass_A(N)\subseteq \Ass_A(M)$ follows from the definition. Let us prove the second one. Let $P\in\Ass_A(M)$ and let $m\in M$ be such that $P$ is a minimal prime ideal of $A$ among those containing $\ann_A(m)$. Let $m'$ be the image of $m$ in $M/N$. If $P$ contains $\ann_A(m')$, then $P$ is also minimal among such prime ideals, hence $P\in\Ass_A(M/N)$. Otherwise, there exists $b\in \ann_A(m')$ such that $b\notin P$. Let us prove that $P$ is minimal among the prime ideals containing $\ann_A(bm)$. First of all, let $a\in\ann_A(bm)$; then $abm=0$, hence $ab\in P$, hence $a\in P$ since $b\notin P$. Since $\ann_A(m)\subseteq\ann_A(bm)$, it also follows that $P$ is minimal among the prime ideals containing $\ann_A(bm)$. Since $b\in\ann_A(m')$, one has $bm'=0$, hence $bm\in N$ and $P\in\Ass_A(N)$.
The following reflexion came out of my irrepressible need to understand why the 3 double transpositions in $\mathfrak S_4$, together with the identity, formed a group $V$. Of course, one might just say: “they are stable under multiplication, as one sees by computing the 4·3/2 = 6 different products”, but who makes this computation anyway? And since I wanted not only to understand this, but to explain it to Lean, I needed an argument that could actually be done, for real. So here is an argument that requires no computation, besides the one that says that there are 3 double transpositions.
Prop. — The subgroup of $\mathfrak S_4$ generated by the 3 double transpositions is the unique 2-Sylow subgroup of $\mathfrak A_4$. In particular, it has order 4 and consists of these 3 double transpositions and of the identity.
Proof. — Let $V$ be the subset of $\mathfrak S_4$ consisting of these 3 double transpositions and of the identity. Let $S$ be a 2-Sylow subgroup of $\mathfrak A_4$.
We first prove $S \subseteq V$. The subgroup $S$ has order 4, since $\mathfrak A_4$ has order 12. Let $g\in S$. The order of $g$ divides 4, so its cycles have lengths 1, 2 or 4. If there were a cycle of length 4, then $g$ would be that cycle, hence of odd sign, contradicting $g\in\mathfrak A_4$. Consequently, either $g=1$, or $g$ has a cycle of length 2, and then there must be a second one because $g$ is even. This proves that $S\subseteq V$, as claimed.
Since $4=\operatorname{\rm Card}(S)=\operatorname{\rm Card}(V)$, this shows that $S=V$. In particular, $V$ is a group; since every 2-Sylow subgroup of $\mathfrak A_4$ is equal to $V$, it is the unique one, and it coincides with the subgroup $\langle V\rangle$ generated by the 3 double transpositions.
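For what it's worth, here is the brute-force computation anyway, as a quick Python sketch (not the Lean formalization alluded to above): it lists the double transpositions of $\mathfrak S_4$ and checks that, together with the identity, they are stable under multiplication.

```python
# Sketch: enumerate the double transpositions of S_4 and check that, with the
# identity, they form a set closed under composition.

from itertools import permutations


def compose(p, q):
    """Composition of permutations of {0, 1, 2, 3}, given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(4))


def is_double_transposition(p):
    """True when p is a nontrivial involution without fixed points, i.e. a product of two disjoint 2-cycles."""
    identity = tuple(range(4))
    return p != identity and compose(p, p) == identity and all(p[i] != i for i in range(4))


double_transpositions = [p for p in permutations(range(4)) if is_double_transposition(p)]
V = set(double_transpositions) | {tuple(range(4))}

print(len(double_transpositions))                     # 3
print(all(compose(p, q) in V for p in V for q in V))  # True: V is stable under multiplication
```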
At this point, we still need to understand why there are 3 double transpositions. More generally, I wanted to prove a formula for the number of permutations in $\mathfrak S_n$ of a given orbit type. The orbit type of a permutation $g$ is the multiset of strictly positive integers with sum $n$ given by the cardinalities of the orbits of $g$ on $\{1,\dots,n\}$. We write it as $1^{n_1} 2^{n_2}\dots r^{n_r}$, meaning that $g$ has $n_1$ orbits of length $1$ (fixed points), $n_2$ orbits of cardinality $2$, etc., so that $n= \sum n_i i$. Let $\mathscr O_g$ be the set of orbits of $g$. The action of $g$ on a given orbit $c$ coincides with a circular permutation of order the length $\ell(c)$ of this orbit; when it is nontrivial, such a permutation will be called a cycle of $g$. The supports of these cycles are pairwise disjoint, so that these cycles commute, and their product is exactly $g$. In fact, this is the only way of writing $g$ as a product of cycles with pairwise disjoint supports. (By convention, the identity is not a cycle.)
Theorem. — There are exactly \[ N(1^{n_1}\dots r^{n_r}) = \frac{n!}{1^{n_1}\dots r^{n_r} n_1!\dots n_r!} \] permutations with orbit type $1^{n_1} 2^{n_2}\dots r^{n_r}$.
A standard proof of this result goes as follows. Write the decomposition of such a permutation $g$ into cycles with disjoint supports as $g=(\cdot)\dots (\cdot)(\cdot,\cdot)\dots(\cdot,\cdot,\dots)$, leaving blank spaces for the entries of the cycles (and, contradicting our convention, allowing for cycles of length 1…). There are $n!$ ways to fill these spaces with the $n$ distinct integers between $1$ and $n$, but some of them give rise to the same permutation. Indeed, the entries in a cycle of length $s$ only count up to a circular permutation, so that we need to divide the result by $1^{n_1}\dots r^{n_r}$. Moreover, we can switch the order of the cycles of a given length, hence we also need to divide that result by $n_s!$ (the number of ways of permuting the various cycles of length $s$), for each possible length $s$.
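Before refining this argument, here is a brute-force sanity check of the formula (a sketch of mine): enumerate $\mathfrak S_n$ for a small $n$, group the permutations by orbit type, and compare with $n!/(1^{n_1}\dots r^{n_r}\, n_1!\dots n_r!)$.

```python
# Sketch: verify the counting formula for orbit types by enumerating S_n for small n.

from itertools import permutations
from collections import Counter
from math import factorial


def orbit_type(p):
    """Multiset of orbit lengths of a permutation of {0, ..., n-1}, as a sorted tuple."""
    n, seen, lengths = len(p), set(), []
    for start in range(n):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = p[i]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))


def predicted_count(lengths, n):
    """n! / (1^{n_1} ... r^{n_r} n_1! ... n_r!) for the given multiset of orbit lengths."""
    multiplicities = Counter(lengths)  # n_i = number of orbits of length i
    denominator = 1
    for i, n_i in multiplicities.items():
        denominator *= i ** n_i * factorial(n_i)
    return factorial(n) // denominator


n = 5
counts = Counter(orbit_type(p) for p in permutations(range(n)))
print(all(counts[t] == predicted_count(t, n) for t in counts))  # True
```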
The standard proof is reasonably convincing, but one could wish for something more precise, both in the proof and in the statement. In fact, in the preceding formula, the numerator $n!$ is the order of $\mathfrak S_n$. Since all permutations with a given orbit type are conjugate by $\mathfrak S_n$, the left hand side appears as the cardinality of the orbit of a permutation $g$ of that orbit type, so that the denominator has to be equal to the cardinality of the stabilizer of this permutation under the action by conjugation. Therefore, a more precise proof of this formula could run by elucidating the structure of this centralizer. This may also be interesting once one wishes to relativize the result to the alternating group $\mathfrak A_n$ in order to obtain a formula for the cardinality of the various conjugacy classes in $\mathfrak A_n$.
Let us fix a permutation $g\in\mathfrak S_n$ with orbit type $1^{n_1}\dots r^{n_r}$. The stabilizer of $g$ under the action by conjugation is its centralizer $Z_g$, the subgroup of all $k\in\mathfrak S_n$ which commute with $g$.
We first define a morphism of groups \[ \tau \colon Z_g \to \mathfrak S_{n_1}\times \mathfrak S_{n_2}\times\dots\times \mathfrak S_{n_r}. \] Let $\mathscr O_g$ be the set of orbits of $g$; this is a set with cardinality $n_1+n_2+\dots+n_r$. Restricted to one orbit, the action of $g$ coincides with that of a circular permutation of this orbit (which fixes the complementary subset); these circular permutations have disjoint supports, hence they commute pairwise and their product is equal to $g$. For $c\in\mathscr O_g$, we write $\ell(c)$ for its cardinality; this is also the order of the cycle of $g$ defined by this orbit. If $k\in Z_g$, then $kgk^{-1}=g$. Consequently, the action of $k$ permutes the orbits of $g$, respecting their cardinalities. This defines the desired group morphism $\tau$ from $Z_g$ to the product of permutation groups $\mathfrak S_{n_1}\times \dots\times \mathfrak S_{n_r}$.
This morphism $\tau$ is surjective.
Indeed, given permutations $\sigma_1$ of the set of fixed points of $g$, $\sigma_2$ of the set of orbits of length 2, etc., we construct $k_\sigma\in Z_g$ such that $\tau(k_\sigma)=(\sigma_1,\dots,\sigma_r)$. We fix a point $a_c$ in each orbit $c$ and decide that $k_\sigma(a_c)=a_{\sigma_i(c)}$ if $c$ has length $i$. The formula $k_\sigma g=g k_\sigma$ imposes $k_\sigma (g^n a_c)=g^n a_{\sigma_i(c)}$ for all $n\in\mathbf Z$, and it remains to check that this formula gives a well defined element of $Z_g$. In fact, this formula defines a group theoretic section of $\tau$.
What is the kernel of this morphism $\tau$?
If $\tau(k)=1$, then $k$ fixes every orbit $c\in\mathscr O_g$. Since $kg=gk$, we obtain that on each orbit $c$, $k$ coincides with some power of the corresponding cycle, which has order $\ell(c)$. We thus obtain an isomorphism
\[ \ker(\tau) \simeq \prod_{c\in\mathscr O_g} (\mathbf Z/\ell(c)\mathbf Z). \]
To compute the cardinality of $Z_g$, it is now sufficient to compute those of $\operatorname{\rm im}(\tau)$ and $\ker(\tau)$, and this gives the formula \[ \operatorname{\rm Card} (Z_g) = \operatorname{\rm Card} (\ker(\tau)) \operatorname{\rm Card} (\operatorname{\rm im}(\tau)) = 1^{n_1}\dots r^{n_r} n_1! \dots n_r!, \] as was to be shown.
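As a quick cross-check of that computation (again a brute-force sketch of mine), one can compute the order of the centralizer of a small explicit permutation and compare it with the formula; for instance, a permutation of $\{0,\dots,5\}$ with orbit type $1^1\,2^1\,3^1$ should have a centralizer of order $1\cdot 2\cdot 3\cdot 1!\cdot 1!\cdot 1!=6$.

```python
# Sketch: compute the order of the centralizer of a permutation by brute force
# and compare with the formula 1^{n_1} ... r^{n_r} n_1! ... n_r!.

from itertools import permutations


def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))


# g = (0 1)(2 3 4), a permutation of {0, ..., 5} with orbit type 1^1 2^1 3^1
g = (1, 0, 3, 4, 2, 5)

centralizer_order = sum(1 for k in permutations(range(6)) if compose(k, g) == compose(g, k))
print(centralizer_order)  # expected: 1 * 2 * 3 * 1! * 1! * 1! = 6
```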
One of the interests of this argument is that it can be pushed forward to understand the structure of the conjugacy classes in the alternating group $\mathfrak A_n$. The case $n\leq 1$ is uninteresting, hence we assume $n\geq 2$. Then $\mathfrak A_n$ has index 2 in $\mathfrak S_n$, and the formulas \[ \operatorname {\rm Card}((g)_{\mathfrak A_n}) = \frac{{\rm Card}({\mathfrak A_n})}{{\rm Card}(Z_g \cap {\mathfrak A_n})} \quad\text{\rm and}\quad \operatorname {\rm Card}((g)_{\mathfrak S_n}) = \frac{{\rm Card}({\mathfrak S_n})}{{\rm Card}(Z_g)} \] for the cardinalities of the conjugacy classes $(g)_{\mathfrak A_n}$ and $(g)_{\mathfrak S_n}$ imply that both are equal if and only if $Z_g$ is not contained in $\mathfrak A_n$; otherwise, the conjugacy class $(g)_{\mathfrak S_n}$ is the disjoint union of $(g)_{\mathfrak A_n}$ and of a conjugacy class $(g')_{\mathfrak A_n}$ of a permutation $g'$ which is conjugate to $g$ in $\mathfrak S_n$ but not in $\mathfrak A_n$, and both have the same cardinality.
Examples of this phenomenon are classical. For example, the 5-cycles in $\mathfrak S_5$ are conjugate, but they constitute two distinct conjugacy classes under $\mathfrak A_5$. Even more elementary, the 3-cycles $(1\,2\,3)$ and $(1\,3\,2)$ are conjugate in $\mathfrak S_3$, but they are not conjugate in $\mathfrak A_3$ since that group is commutative!
So let us use our description of $Z_g$ to give a full description of this phenomenon.
As a first step, when is $\ker(\tau)$ contained in $\mathfrak A_n$? We have seen that $\ker(\tau)$ is generated by the cycles of $g$. Consequently, $\ker(\tau)$ is contained in $\mathfrak A_n$ if and only if all of them are contained in $\mathfrak A_n$, which means that their lengths are odd.
We assume that this condition holds, so that $\ker(\tau)\subseteq \mathfrak A_n$, and now work on the image of $\tau$. Its surjectivity was proved by the means of an explicit section $\sigma\mapsto k_\sigma$. Given the preceding condition that $\ker(\tau)\subseteq \mathfrak A_n$, a necessary and sufficient condition for the inclusion $Z_g\subseteq \mathfrak A_n$ will be that the image of this section consists of even permutations. This section is a morphism of groups, so it suffices to understand the sign of $k_\sigma$ when $\sigma$ consists of a cycle $(c_1,\dots,c_s)$ in $\mathfrak S_{n_i}$ and is trivial on the other factors. Then $\ell(c_1)=\dots=\ell(c_s)$, by definition of $\sigma$. The formula $k_\sigma(g^n a_c)=g^n a_{\sigma(c)}$ shows that the non trivial cycles of $k_\sigma$ are of the form $(g^n a_{c_1},\dots, g^n a_{c_s})$; they all have the same length, $s$, and there are $\ell(c_1)$ of them. Consequently, the sign of $k_\sigma$ is equal to $(-1)^{(s-1)\ell(c_1)}=(-1)^{s-1}$ since $\ell(c_1)$ is odd. This proves that the sign of $k_\sigma$ is equal to the sign of $\sigma$. In addition to the condition that the orbits of $g$ have odd cardinalities, a necessary and sufficient condition for the image of $\sigma\mapsto k_\sigma$ to be contained in $\mathfrak A_n$ is thus that all symmetric groups $\mathfrak S_{n_i}$ coincide with their alternating groups, that is, $n_i\leq 1$ for all $i$. We can now conclude:
Theorem. — Let $1^{n_1}\dots r^{n_r}$ be a partition of $n$ and let $g$ be an even permutation of $\mathfrak S_n$ with that orbit type. The conjugacy class of $g$ in $\mathfrak S_n$ splits into two conjugacy classes in $\mathfrak A_n$ if and only if all orbits of $g$ have odd cardinality and $n_i\leq 1$ for every $i$; otherwise, it remains a single conjugacy class in $\mathfrak A_n$.
We can check the initial example of two 5-cycles in $\mathfrak S_5$ which are not conjugate in $\mathfrak A_5$. Their orbit type is $5^1$: the only length that appears is 5, hence odd, and it has multiplicity $1$. In fact, this is the only orbit type in $\mathfrak S_5$ where this phenomenon appears!
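And here is a final brute-force check (mine again) of the criterion in $\mathfrak A_5$: for each of the four even orbit types of $\mathfrak S_5$, it tests whether the centralizer of a representative is contained in $\mathfrak A_5$, which by the discussion above is exactly the condition for the $\mathfrak S_5$-class to split into two $\mathfrak A_5$-classes.

```python
# Sketch: among the even orbit types of S_5, only 5^1 should give an S_5-class
# that splits in A_5, i.e. only the 5-cycles have their centralizer contained in A_5.

from itertools import permutations


def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))


def sign(p):
    """Sign of a permutation, computed from its number of inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1


def orbit_type(p):
    seen, lengths = set(), []
    for start in range(len(p)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = p[i]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))


S5 = list(permutations(range(5)))
# representatives of the four even orbit types: 1^5, 1^1 2^2, 1^2 3^1, 5^1
for g in [(0, 1, 2, 3, 4), (1, 0, 3, 2, 4), (1, 2, 0, 3, 4), (1, 2, 3, 4, 0)]:
    centralizer_even = all(sign(k) == 1 for k in S5 if compose(k, g) == compose(g, k))
    print(orbit_type(g), "splits" if centralizer_even else "does not split")
```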