I would like to discuss here, in a particularly informal way, some of my frustrations with homological algebra, in particular with its most recent developments. I am certainly ill-informed on these matters, and one of my goals is to clarify my own ideas, my expectations, my hopes,...
The mere existence of this post is due to the kind invitation of a colleague of the computer science department working in (higher) category theory, namely François Metayer, who was curious about my motivation for wanting to understand this topic.
Let me begin with a brief historical summary of the development of homological algebra, partly borrowed from Charles Weibel's History of homological algebra.
- B. Riemann (1857), E. Betti (1871), H. Poincaré (1895) define homology numbers.
- E. Noether (1925) attaches abelian groups (homology groups) to a space, whose elementary divisors recover the previously defined homology numbers.
- J. Leray (1946) introduces sheaves, their cohomology, the spectral sequence...
- During the years 1940–1955, in the hands of Cartan, Serre, Borel, etc., the theory develops in various directions (cohomology of groups, new spectral sequences, etc.).
- In their foundational book, Homological algebra, H. Cartan and S. Eilenberg (1956) introduce derived functors, projective/injective resolutions,...
- During the 1950s, A. Dold, D. Kan, J. Moore, and D. Puppe introduce simplicial methods; D. Kan introduces adjoint functors (1958).
- A. Grothendieck, in Sur quelques points d'algèbre homologique (1957), introduces general abelian categories, as well as convenient axioms that guarantee the existence of enough injective objects, thus giving birth to a generalized homological algebra.
- P. Gabriel and M. Zisman (1967) develop the abstract calculus of fractions in categories, and prove that the homotopy category of topological spaces is equivalent to that of simplicial sets.
- J.-L. Verdier (1963) defines derived categories. This acknowledges that objects give rise to, say, injective resolutions which are canonical up to homotopy, and that the corresponding complex is an object in its own right, which has to be seen as equivalent to the initial object. The framework is that of triangulated categories. Progressively, derived categories came to play an important rôle in algebraic geometry (Grothendieck duality, Verdier duality, deformation theory, intersection cohomology and perverse sheaves, the Riemann–Hilbert correspondence, mirror symmetry,...) and representation theory.
- D. Quillen (1967) introduces model categories, which allow a parallel treatment of homological algebra in linear contexts (modules, sheaves of modules...) and non-linear ones (algebraic topology)... This is complemented by A. Grothendieck's (1991) notion of derivators.
- At some point, the theory of dg-categories appears, but I can't date it precisely, nor do I quite understand its relation to the other approaches.
- A. Joyal (2002) begins the study of quasi-categories (which were previously defined by J. M. Boardman and R. M. Vogt, 1973). Under the name of $(\infty,1)$-categories or $\infty$-categories, these quasi-categories are used extensively in Lurie's work (his books Higher topos theory, 2006; Higher algebra, 2017; the 10+ papers on derived algebraic geometry,...).
My main object of interest (up to now) is “classical” algebraic geometry, with homological algebra as an important tool via the cohomology of sheaves, and while I have barely used anything more abstract than the cohomology of sheaves (almost never complexes), I do agree that there are three main approaches to homological algebra: derived categories, model categories, and $\infty$-categories.
While I am not absolutely ignorant of the first one (I have even lectured on them), the two other approaches still look esoteric to me and I can't say I master them (yet?). Moreover, their learning curve seems to be quite steep (Lurie's books total more than 2000 pages, plus the innumerable papers on derived algebraic geometry, etc.) and I do not really see how an average geometer should/could embark on this journey.
However, I believe that this is now a necessary journey, and I would like to mention some recent theorems that support this idea.
First of all, and despite its usefulness, the theory of triangulated/derived categories has many defects. Here are some of them:
- There is no (and there cannot be any) functorial construction of a cone (see the diagram after this list);
- When a triangulated category is endowed with a truncation structure (t-structure), there is no natural functor from the derived category of its heart to the initial triangulated category;
- Derived categories are not well suited for non-abelian categories (for example, filtered derived categories seem to require additional, non-trivial work);
- Unbounded derived functors are often hard to define: we now have at our disposal homotopically injective resolutions (Spaltenstein, Serpé, Alonso Tarrío et al.), but unbounded Verdier duality still requires some unnatural hypotheses on the morphism, for example.
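To make the first defect concrete, here is the standard picture, written as a loose sketch. Axiom (TR3) of triangulated categories asserts that the arrow $h$ below exists, but says nothing about its uniqueness, and in general no coherent choice of such arrows can be made:

```latex
% (TR3): given u, u' and the commutative square (f, g), some h exists
% making the whole diagram commute, but h is not unique:
\[
\begin{array}{ccccccc}
X & \xrightarrow{\;u\;} & Y & \longrightarrow & \operatorname{cone}(u)
  & \longrightarrow & X[1] \\
\downarrow{\scriptstyle f} & & \downarrow{\scriptstyle g} & &
  \downarrow{\scriptstyle h} & & \downarrow{\scriptstyle f[1]} \\
X' & \xrightarrow{\;u'\;} & Y' & \longrightarrow & \operatorname{cone}(u')
  & \longrightarrow & X'[1]
\end{array}
\]
% The failure of uniqueness of h is precisely what prevents
% (f, g) -> h from being a functorial construction of the cone.
```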
Three results, now.
The first theorem I want to mention is due to M. Greenberg (1966). Given a scheme $X$ of finite type over a complete discrete valuation ring $R$ with uniformizer $\pi$, there exists an integer $a\geq 1$ such that for any integer $n\geq1$, a point $x\in X(R/\pi^n)$ lifts to $X(R)$ if and only if it lifts to $X(R/\pi^{an})$.
It may be worth stating it in more concrete terms. Two particular cases of such a ring $R$ are the ring $k[[t]]$ of power series over some field $k$, in which case $\pi=t$, and the ring $\mathbf Z_p$ of $p$-adic integers (for some fixed prime number $p$), in which case one has $\pi=p$. To be concrete, consider the case of an affine scheme. Then $X=V(f_1,\dots,f_m)$ is defined by the vanishing of a finite family $f_1,\dots,f_m$ of polynomials in $R[T_1,\dots,T_n]$ in $n$ variables, so that, for any ring $A$, $X(A)$ is the set of solutions in $A^n$ of the system $f_1(T_1,\dots,T_n)=\dots=f_m(T_1,\dots,T_n)=0$. By reduction modulo $\pi^r$, a solution in $R^n$ gives rise to a solution in $(R/\pi^r)^n$, and Greenberg's result is about the converse: given a solution $x$ in $R/\pi^r$, how does one decide whether it is the reduction of a solution in $R$? A necessary condition is that $x$ lifts to a solution in $R/\pi^s$ for every $s\geq r$. Greenberg's theorem asserts that it is sufficient that $x$ lift to a solution in $R/\pi^{ar}$, for some integer $a\geq 1$ which depends on $X$ but neither on $x$ nor on $r$.
The proof of this theorem is non-trivial, but relatively elementary. After some preparation, it boils down to Hensel's lemma or, equivalently, Newton's method for solving equations.
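Since the proof is, at bottom, Newton's method, it can even be demonstrated on a computer. Here is a minimal sketch in Python of the Hensel/Newton lifting in the simplest case (one polynomial, one variable, simple root); the function `hensel_lift` and the particular example are mine, purely for illustration.

```python
# A minimal sketch (Python >= 3.8) of Hensel/Newton lifting: a simple root
# of f modulo p lifts, with quadratic convergence, to a root modulo any p^k.

def hensel_lift(f, df, x, p, k):
    """Lift a root x of f mod p (with df(x) a unit mod p) to a root mod p^k."""
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)          # each Newton step doubles the precision
        mod = p ** prec
        inv = pow(df(x) % mod, -1, mod)  # df(x) is a unit mod p, hence mod p^prec
        x = (x - f(x) * inv) % mod       # Newton step: x <- x - f(x)/f'(x)
    return x

# Example: lift the root 3 of f(T) = T^2 - 2 modulo 7 to a square root of 2
# in Z/7^8, i.e. the first eight "digits" of a 7-adic square root of 2.
f = lambda t: t * t - 2
df = lambda t: 2 * t
x = hensel_lift(f, df, 3, 7, 8)
assert (x * x - 2) % 7 ** 8 == 0
print(x)
```

Greenberg's theorem can be thought of as the scheme-theoretic statement that survives when the simple-root hypothesis fails: lifting is still controlled, but only after the loss of precision measured by the integer $a$.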
However, it seems to me that there should be an extremely conceptual way to prove this theorem, based on general deformation theory such as the one developed by Illusie (1971). Namely, obstructions to lifting $x$ are encoded by various cohomology classes, and knowing that it lifts far enough should suffice to see, on the nose, that these obstructions vanish.
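To fix ideas, here is the shape of the lifting calculus I have in mind, stated as a sketch (the precise hypotheses are more delicate than what I write; I refer to Illusie's Complexe cotangent et déformations for the actual statements):

```latex
% The reduction map R/\pi^{n+1} \to R/\pi^n is a square-zero extension
% with ideal I = \pi^n R / \pi^{n+1} R. A point x \in X(R/\pi^n) then
% carries an obstruction class
\[
  \omega(x) \;\in\; \operatorname{Ext}^1\bigl(\mathrm{L}x^{*}\mathrm{L}_{X/R},\, I\bigr),
\]
% which vanishes if and only if x lifts to X(R/\pi^{n+1}); when it does,
% the set of lifts is a torsor under \operatorname{Hom}(x^{*}\Omega^1_{X/R}, I).
```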
The second one is about the cohomology of Artin stacks. Y. Laszlo and M. Olsson (2006) established the 6-operations package for $\ell$-adic sheaves on Artin stacks, but their statements carry some hypotheses which look a bit unnatural. For example, the base scheme $S$ needs to be such that all schemes of finite type over it have finite $\ell$-cohomological dimension; this forbids $S=\operatorname{Spec}(\mathbf R)$. More recently, Y. Liu and W. Zheng developed a more general theory, apparently devoid of restrictive hypotheses; their work builds on $\infty$-categories, more precisely, on a stable $\infty$-category enhancing the unbounded derived category. On page 7 of their paper, they carefully explain why derived categories are insufficient to take care of the necessary descent data, but I can't say I understand their explanation yet...
The last one is about the general formalism of 6-operations. While it is clear what these 6 operations should reflect (direct and inverse images; proper direct images and extraordinary inverse images; tensor product, internal hom), the list of the properties they should satisfy is not clear at all (to me). In the case of coherent sheaves, there is such a formulaire, written by A. Grothendieck himself on the occasion of a talk in 1983, but it is quite informal, and not at all a general formalism. Recently, F. Hörmann proposed such a formalism (2015–2017), based on Grothendieck's theory of derivators.
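For concreteness, here is a small, certainly incomplete, sample of what such a formulaire should contain; I write it loosely, as a sketch of expected properties rather than a precise axiomatics:

```latex
% The six operations come in three adjoint pairs:
%   f^{*} \dashv f_{*}, \qquad f_{!} \dashv f^{!}, \qquad
%   (-) \otimes A \dashv \mathcal{H}om(A, -),
% subject to (among many other things):
\[
  f_{!}(A \otimes f^{*}B) \simeq (f_{!}A) \otimes B
  \qquad\text{(projection formula),}
\]
\[
  g^{*} f_{!} \simeq f'_{!}\, g'^{*}
  \qquad\text{(base change along a cartesian square),}
\]
% together with a natural transformation f_{!} \to f_{*} which is an
% isomorphism when f is proper, and an identification
% f^{!} \simeq f^{*}(-) \otimes \omega_{f}, with \omega_{f} an invertible
% (shifted) object when f is smooth (Poincaré duality).
```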
Now, how should the average mathematician embark on learning these theories?
Who will write the analogue of Godement's book for the homological algebra of the 21st century? Can we hope that it be shorter than 3000 pages?
I hope to find, some day, some answer to these questions, and that they will allow me to hear with satisfaction the words of Hilbert: Wir müssen wissen, wir werden wissen. (We must know, we shall know.)