As I wrote about a while back, I've created a SEIRS (susceptible/exposed/infectious/recovered/susceptible) model for the pandemic that results in a pretty good fit to Canadian hospitalization data with only a single free parameter of any note: the onset of waning immunity.
After writing it up like that I thought I had a bit better handle on the way the model worked, and started to see what I could infer from it.
I immediately ran into trouble, because what I was getting out didn't look much like what we think is the case: the "basic reproduction number", R0, which is the average number of people an infectious person goes on to infect in a fully susceptible population, was nothing like the value of 3 or 5 or 10 or 15 we've seen talked about. It was more like 1.05, a tiny value. And the number of infections per person over the course of the first year of the model was high: 3 or 4, with very little variance. The model was saying that literally everyone has had omicron in the past year, several times. While reinfections are rising, I don't think that's a realistic belief.
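To make the bookkeeping concrete, here's a minimal SEIRS sketch in Python. The parameter values are illustrative choices of my own, not the ones from the fitted model, and the integration is a crude forward-Euler step rather than anything fancy--the point is just to show where quantities like R0 (beta/gamma in this formulation) and cumulative infections per person come from:

```python
# Minimal SEIRS sketch (hypothetical parameters, NOT the fitted model).
# Compartments: S -> E -> I -> R -> S, with immunity waning at rate omega.

def seirs(beta=0.3, sigma=1/3, gamma=1/5, omega=1/180, days=365, N=1.0):
    """Integrate the SEIRS ODEs with a forward-Euler step.

    beta:  transmission rate (contacts/day * infection probability)
    sigma: 1 / latent period (days)
    gamma: 1 / infectious period (days)
    omega: 1 / mean duration of immunity (days)
    Returns cumulative infections per capita over the run.
    """
    dt = 0.1
    S, E, I, R = N - 1e-4, 0.0, 1e-4, 0.0  # seed a tiny infectious fraction
    cum_infections = 0.0
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N * dt      # S -> E
        new_infectious = sigma * E * dt          # E -> I
        new_recovered = gamma * I * dt           # I -> R
        new_susceptible = omega * R * dt         # R -> S (waning immunity)
        S += new_susceptible - new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered - new_susceptible
        cum_infections += new_infectious         # count each E -> I transition
    return cum_infections

# In this standard formulation R0 = beta / gamma: the expected number of
# secondary infections while infectious, in a fully susceptible population.
```

With waning immunity switched on, the same person can cycle back through S and be infected again, which is how a model can produce several infections per capita per year even with a modest R0--exactly the tension described above.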
I've still not sorted out what's going on, so it seemed like a good opportunity to present a look at the mind of a scientist in its natural habitat: confusion. This is what the average scientist's mind looks like almost all the time. When you're working at the coal face of knowledge, you rarely have any idea what you're doing.
The public image of scientists as knowing what's going on most of the time is wrong: that's engineers.
After all, how could it be otherwise? Scientists spend our time asking hard questions that no one knows the answer to. But that is barely represented in education up to the end of undergrad. We teach "science" by presenting students with finished theories and carefully curated experiments, and grade them on their ability to regurgitate the former and replicate the latter. Which is necessary, especially the theory part: without a solid grounding in the centuries of understanding that have accumulated, no one can do anything interesting in science today.
The whole trick of science is that it is public testing of ideas by systematic observation, controlled experiment, and Bayesian inference, where "public" generally means "published". The Philosophical Transactions of the Royal Society was the start of the scientific revolution, because it created a public forum where ideas could be exposed to the light of day: criticism, rebuttal, and replication.
But while the results of a few thousand people practicing the discipline of science over a couple of centuries are some extremely low-uncertainty understandings of the world, the process that gets us there involves huge waves of uncertainty and confusion that swamp whole fields and drown entire disciplines for decades.
Einstein published a series of deeply confused papers on his way to General Relativity. He joked about it in a letter to Lorentz, how this fellow Einstein couldn't make up his mind and kept contradicting himself.
The confusion can rise and fall over very long times: the status of light as a particle or wave went back and forth over the two centuries between Newton and Einstein. Newton thought it was a particle. Young's interference experiments in the early 1800s--although ridiculed and ignored at first--showed it was a wave. Then in the early days of quantum theory Einstein showed it was a particle--the photon--albeit a particle with wavelike properties.
This makes pure science on leading-edge topics a surprisingly difficult source for policy, because the leading edge is where uncertainty is highest. This has bitten us badly multiple times over the past few years, as the public has asked for a level of certainty that science can't deliver on the leading edge, and far too many scientists have presented tentative or speculative ideas as if they were far more certain than they actually are.
When you get certainty from a scientist on a leading-edge topic, you are probably seeing bias, not knowledge. It can't be otherwise because reducing uncertainty takes time, and lower uncertainty is almost always achieved by passing through periods of much higher uncertainty first.
I recall a paper on an open problem in solid state physics that said in the abstract something along the lines of, "We do not in this paper propose to solve the problem of X, but to make a positive contribution to the tsunami of confusion that often precedes finding a solution." That's honest science.
"We can say X for-certain-sure based on a sample size of fifteen and an analysis that we hacked together in Excel and haven't properly checked" is bias. Any time leading-edge results seem to “just make sense” you should be hyper-suspicious, because knowledge doesn’t usually work that way.
The translation of uncertain science into crisp engineering has been a big part of my professional life, which leaves me with some ideas of how to do it in the least-biased way possible. It's a hard problem.
I recall a developer on my team about 20 years ago coming to me and saying, "I've just spent three days talking to one of the scientists about this problem, and while I now know a lot more about the problem, I still don't know what default value this parameter should have in the software."
I went and talked to the scientist for a bit--mostly about the shapes of various distributions--then went and told the developer, "Three." I could do that in part because I had a better idea of what the algorithm in question was doing, but mostly because I was comfortable with one foot in the world of uncertain science and the other in the world of crisp engineering. I simply ate the arbitrariness: uncertainty went in, a definite value came out, my entropy increased.
The developer, like a good engineer, wanted a definite value and wasn't willing to just make one up, because that's not how engineers do things. We want handbooks and guidelines based on experience and experimentation, but this was a totally new algorithm: there were no such things.
The scientist, like a good scientist, was unwilling to crunch all the complexity of the problem down into a single number.
I knew the specific value didn't matter half so much as the fact that a specific value had to exist, and I chose the one that in my judgement had the lowest risk of resulting in something horrible happening if the user just left it as-was.
Which is an idea that has a name: the precautionary principle.
In general we need principles like this to convert the uncertainty inherent in knowledge--which is always large when a question is new and pressing--into the definite policies required for action. Another important principle is robustness, which I’ll talk about next week, I think.
Unfortunately acting on principle is not something that comes naturally to most people, and the phrase “in my judgement” hides a good deal of psychic wear and tear.
I'm good at it in part because as an autistic person I often have very little except principle to guide my actions. If it doesn't involve boats or machinery or radiation transport physics I probably don't have a very good intuitive feel for it.
But just as our training of scientists--at least up to the end of undergrad and to some degree even through grad school--is low on exposure to uncertainty, our training of policy-makers is low on acting on principle. In reality almost all policy is made on partisan rather than principled grounds: "What will gain my faction the most advantage in the endless, never-satisfied quest for power?" not "What is the right thing to do based on this set of well-known principles for converting necessary uncertainty into definite action?"
Calls for better training for scientists and policy makers are not new, and never work, so I won't make one here. It would be nice if we did, though.
So where does this leave me and my confusing SEIRS model? Down the rabbit hole, is where. Which is fine: there's no way to meet the Mad Hatter otherwise.
This conundrum has in fact led me on a merry chase that's still ongoing, and about which I'm sure I'll have more to say. I'm no longer convinced that the concept of "R0" in epidemiology is useful, and even have a few ideas as to what it might be replaced with. Being an outsider, no one is going to listen to me, mind. But the best science is generally done for the selfish satisfaction of the peculiar, possibly even eccentric, individual who does it.
Whether or not it has any particular value to others is entirely up to others to decide.