Categorical thinking is what we do when we reason using classes: "All men are mortal. Socrates is a man. Therefore Socrates is mortal." "man" and "mortal" are both categorical terms: they name whole classes of things.
It's a powerful tool, and like any powerful tool it can be dangerous when used improperly, which is most of the time. I've come to believe that the primary purpose of abstraction is lying, to ourselves and others, although it also has the indispensable secondary use of knowing.
So while I'm going to be critical of the ways we abuse categorical reasoning here, I don't want to lose sight of its utility and value. Being in favour of work gloves, steel-toed boots, ear protection, and goggles doesn't make me anti-chainsaw: it makes me pro-safety.
The essential error we make in categorical thinking is ontological: we project our concepts onto the world and reason about them as if they had the ontology--the way of being--of individuals. The key feature of the ontology of individuals is that they are crisp, binary, definite. My cat Yogurt was entirely Yogurt: he was himself. This is how individuals are. They may have fuzzy physical boundaries (Yogurt, being a cat, certainly did) but they aren't "themselves to some degree". They are just themselves.
Concepts or categories (I use the terms interchangeably just to drive philosophers nuts) aren't like this. They are a mental trick that allows us to treat an open-ended collection of individuals as if it was, itself, an individual, even though it isn't. Every individual that we subsume into a category is a member of that category "to some degree". We forget this at our peril.
Definitions determine what gets into the group at all, and a definition is something individuals don't have. Yogurt wasn't defined by anything: he just was. He had an identity, not a definition. And unlike identities (putting aside the Ship of Theseus for the moment...) definitions are a pragmatic tool for simplifying our reasoning. Their purpose is to, well, define categories that make the world easy to think about as accurately as necessary under the circumstances, given the fundamental limitations of our merely human brains, which can only attend to maybe half a dozen things at once, on a good day.
If something exceeds the threshold for inclusion into a concept--if its degree of matching the definition is close enough--then we count it as a member. But it's still itself. It isn't just a member of that category. In fact, in and of itself, it isn't even a member of that category, because "categorical membership" isn't a property of things, it's something we do to things. It's an activity, like skating: "being skated upon" is not a property of ice. It's something we, as active beings, can do to ice. Putting things in categories is the same way. It doesn't change the thing, it changes our relationship to the thing.
When we forget this, we end up wanting to treat categories as having perfectly crisp boundaries, like individuals, but in reality the world is a collection of edge cases. This leads us into (at least) two common errors: either contracting or expanding the category to get rid of the edge cases, rather than acknowledging that everything is a member of a category to some degree, which takes work.
On the "contracting" side we may narrow the category down to only the most archetypal example(s) and treat everything we've excluded as if it was some kind of broken, deformed, imperfect approximation to the "true" concept.
In the philosophy of science there is a very large bias toward physics, which is often considered "really scientific" while chemistry, geology, biology, psychology, and so on are considered mere shadows of this Platonic ideal.
On the "expanding" side there is the tendency to treat everything that could just barely be included inside a category on the most generous interpretation as having all the characteristics of the most extreme member of the category.
We see this being done a lot right now with the category "uncertain", as people--some of whom are scientists who really ought to know better--seem to equate "we are uncertain about the severity and infectiousness of omicron" with "we can't say anything at all about the probable outcome of this, the terminal phase of the pandemic", when we most certainly can. For one, we can be reasonably sure it will in fact be the terminal phase of the pandemic.
We can see this as follows:
There is evidence that omicron has an R0 value close to that of measles, which is one of the most transmissible viruses there is. R0 is the number of new infections any given infection will produce in a fully susceptible population that takes no counter-measures.
R0 and doubling time scale inversely with each other, and the doubling time of omicron is half that of delta: 2.3 days vs 4.6 days. Assuming similar generation times, that implies omicron's R0 is twice delta's; delta's R0 is in the 6-8 range, so omicron's is in the range of 12-16.
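Here's that arithmetic as a minimal sketch in Python. The only inputs are the doubling times and the delta R0 range quoted above, and the inverse-scaling step is the same simplifying assumption stated in the text, not a precise epidemiological model:

```python
# Back-of-envelope R0 arithmetic from the doubling times quoted above.
# Assumption (same as the text): with similar generation times, R0
# scales inversely with doubling time.

delta_doubling_days = 4.6     # delta's doubling time (from the post)
omicron_doubling_days = 2.3   # omicron's doubling time (from the post)
delta_r0_range = (6.0, 8.0)   # delta's estimated R0 range (from the post)

scale = delta_doubling_days / omicron_doubling_days   # = 2.0
omicron_r0_range = tuple(r * scale for r in delta_r0_range)

print(omicron_r0_range)       # (12.0, 16.0) -- the 12-16 range above
```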
Measles has an R0 in the range of 12-18, and it is one of the most infectious diseases there is, more infectious than smallpox or influenza.
This tells us that it is very difficult for a virus to become much more infectious than omicron, so it is very unlikely we are going to face another, more infectious, mutated strain, because if other viruses could have become more infectious they would have sometime over the past hundred thousand years. Viruses are very simple packages, and they are not infinitely flexible. Every molecular change is a trade-off for them, and the viable variant with the highest rate of spread is by definition going to win out against all the others. Evolution is literally just simple arithmetic when you get down to it: “variants that have a higher rate of spread will eventually outnumber variants with lower rates of spread” is as close to a tautology as you’re likely to ever see in science.
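To see how close to a tautology that is, here's a toy simulation of two variants spreading independently with different per-generation rates of spread. The numbers are purely illustrative, not estimates for any real variant:

```python
# A toy illustration of the "simple arithmetic" of variant competition:
# whichever variant produces more new infections per case per generation
# eventually dominates, no matter how rare it starts out.

slow_cases, fast_cases = 100.0, 1.0   # the faster variant starts rare
r_slow, r_fast = 2.0, 3.0             # new infections per case per generation

for generation in range(1, 13):
    slow_cases *= r_slow
    fast_cases *= r_fast
    share = fast_cases / (slow_cases + fast_cases)
    print(f"generation {generation:2d}: fast-variant share = {share:5.1%}")

# Despite starting at 1% of the slower variant's level, the faster
# variant is the majority of new cases by generation 12.
```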
But a virus can’t just become arbitrarily more infectious, because it has other constraints to fulfill. To be viable a virus has to be able to evade the immune system and invade the host's cells, and mutations that allow it to spread rapidly--by reproducing more in the upper respiratory tract, for example, or having fewer symptoms early on--will eventually run up against limits on those things. Because symptoms are almost always due to immune response, and invading the host's cells almost always triggers an immune response, a virus can only be so "stealthy" before it stops being viable.
So the fact that R0 values for all viruses max out around 20 means that it is very difficult for any virus to balance the needs of viability against infectiousness beyond that level. There is a kind of “speed of light” for viral propagation around an R0 of 20. Beyond that, population dynamics are more important than genetics, so while there are a few special cases where a virus might exceed 20, those are not because the virus has found some genetic superpower but because it has gotten lucky with a naive population, like syphilis in early modern Europe.
That's not to say covid might not yet mutate--it almost certainly will--but those mutant forms will be up against our non-naive immune systems: vaccinated, boosted, and (probably, unfortunately) trained by omicron infection. And none of those mutant forms are likely to have an R0 much higher than omicron's. Certainly not a factor of two higher, because there simply isn't any case of a virus consistently hitting an R0 of more than 20, and omicron already has an R0 in the 12-16 range.
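That factor-of-two check can be done as pure interval arithmetic, using only the ranges quoted above. The ~20 ceiling is the post's round number, not a precise biological constant:

```python
# The factor-of-two check as interval arithmetic: compare ranges,
# no point estimates required.

omicron_r0 = (12.0, 16.0)   # inferred above from doubling times
r0_ceiling = 20.0           # approximate "speed of light" for R0

hypothetical = (2 * omicron_r0[0], 2 * omicron_r0[1])   # a 2x-omicron variant

# Even the *low* end of the doubled range clears the ceiling, so the
# conclusion holds without knowing omicron's exact R0.
print(hypothetical[0] > r0_ceiling)   # True: 24.0 > 20.0
```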
Ergo: we’re probably in the final stage of the pandemic. Although the process of getting there is going to devastate our health care system.
Now you'll have noticed that almost nothing in the above is a categorical argument, although it uses a lot of categories. It is all quantitative argumentation, and insofar as I could figure out numbers it is a numerical argument. Not all quantitative arguments are numerical: we can reason about "more" and "less" and "bigger" and "smaller" and "likely" and "not so likely" without assigning specific numerical values to things, and often this is enough to get us as far as we need to go.
Nothing I've said above is certain, but all of it is knowledge. Simply because everything is in the category "uncertain" doesn't mean the future is in the category "unknowable", even though "unknowable" is the extremum of "uncertain".
Learning to think quantitatively, and even numerically, is hard, and we're still figuring out the basics of how to do it. But with discipline and practice it's possible to make it routine, and hopefully in some possible future we learn how to make it commonplace. It seems like it would help in navigating our unavoidably interconnected future.
In the meantime, as you’re reading this it’s probably Christmas Eve. I hope you are safe and well, and taking the threat of omicron seriously, and that the government wherever you are is handling it more pro-actively than where I am, based on solid quantitative, and even numerical, reasoning.
Merry Christmas!