Two years into the pandemic there is still a lot of fighting going on about a lot of questions that have reasonably clear answers.
One of them is how the virus is transmitted, which is primarily via aerosols that move and spread very much like cigarette smoke. But that's not a convenient fact for a lot of people, particularly people who early on decided that it was 100% certain for sure that covid was not spread that way.
Recently Nature--still the premier single voice of the global scientific community--published a review of how the World Health Organization (WHO) communicated about how covid spreads, and aired the concerns that a lot of people have regarding how slow WHO was to update guidance as it became increasingly clear that covid was spread by aerosols, which in the common parlance means it is an "airborne virus".
It was only in late 2021 as the omicron wave swept the world that WHO updated the guidance on its website to clarify the importance of aerosol transmission, and:
The seemingly uncontroversial statement marked a clear shift for the Switzerland-based WHO, which had tweeted categorically early in the pandemic, “FACT: #COVID19 is NOT airborne,” casting the negative in capital letters as if to remove any doubt. At that time, the agency maintained that the virus spreads mainly through droplets produced when a person coughs, sneezes or speaks, an assumption based on decades-old infection-control teachings about how respiratory viruses generally pass from one person to another. The guidance recommended distancing of more than one metre — within which these droplets were thought to fall to the ground — along with hand washing and surface disinfection to stop transfer of droplets to the eyes, nose and mouth.
The problem with WHO's original statement is not that it's wrong, it's that it's not even wrong: it's a statement about a universe that doesn't exist, where "certainty" is both possible and desirable. That's not the universe we live in.
Unfortunately, it is the universe our brain believes we live in.
I recently got around to reading Thinking, Fast and Slow and it confirmed a lot of conclusions I've tentatively drawn over the years from watching the differences between how I think and how other people think, and between how I used to think and how I think now, after forty years of thinking about and working with probability.
People are probability-blind. They don't see probability any more than entirely colour-blind people see colour.
This makes reasoning about probability hard, and communication about it even harder.
To get a sense of how hard reasoning about probability is: in my observation it takes at least a decade of constant study and continuous exposure to problems of probability before you begin to really develop any level of fluency in thinking about it. By "problems of probability" I mean problems where the answer is not already known--by anyone--and where the answer becomes known after you've done your best to reason about the problem. Even then, "fluency" is really just an awareness of when you are out of your intuitive depth (almost always), combined with a toolkit that you are highly skilled at using to fill in the blind spots your "fast", intuitive reasoning systems have.
From another perspective: a number of important "paradoxes" in philosophy turn on the inability of philosophers to reason about probabilities, particularly conditional probabilities, where there are two or more existentially disjoint conditions that have to be reasoned about separately.
One example of this from popular culture is the Monty Hall problem, which goes like this: in the game show "Let's Make a Deal" the contestant is allowed to pick one of three closed doors, knowing that behind one door there is a prize and behind the other two there are goats.
Once the contestant has picked a door, there are two possible conditions: the condition or state of the world where the door they picked has the prize behind it, and the condition where the door they picked has a goat.
Assuming they choose randomly, because there are two goats and one prize, the latter condition will be the case 2/3 of the time, and the former condition (the prize) 1/3 of the time.
The "paradox" goes on to say (quite implausibly, which I'll get to in a bit) that Monty--the game show host--now must open one of the two doors the contestant did not pick and show the contestant that it has a goat behind it. This is a key--and otherworldly--aspect of how the paradox is constructed. It assumes that every single time this game is played Monty has to open one of the other two doors and show the contestant it has a goat behind it.
Under that assumption, the question is: Would the contestant be better off switching doors or not?
The answer, I hope, is obvious: because the contestant picked a goat 2/3 of the time, and because Monty has to open a door the contestant did not pick and has to open a door with a goat behind it, then 2/3 of the time there is only one door Monty can open. It is completely determined by the conditions of the scenario. But that means that 2/3 of the time, if the contestant switches their choice to the other unopened door, they will win the prize.
In the other condition, where they picked the prize the first time, switching will lose it for them, but that condition only happens 1/3 of the time.
Simple, yes? Or no?
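If you'd rather not take my word for the arithmetic, the scenario is simple enough to simulate. Here's a minimal sketch (the function name `play` and the trial count are my own choices, not anything canonical) that plays the idealized game--the one where Monty is an automaton who always opens a goat door--many times for both strategies:

```python
import random

def play(switch: bool, doors: int = 3) -> bool:
    """Simulate one round under the puzzle's idealized rules:
    Monty always opens a goat door the contestant didn't pick."""
    prize = random.randrange(doors)
    pick = random.randrange(doors)
    # Monty opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in range(doors) if d not in (pick, prize)])
    if switch:
        # Switch to the one remaining unopened, unpicked door.
        remaining = [d for d in range(doors) if d not in (pick, opened)]
        pick = remaining[0]
    return pick == prize

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay wins ~{stay:.2f}, switch wins ~{swap:.2f}")  # roughly 0.33 vs 0.67
```

Run it and the 1/3 vs 2/3 split falls out, no philosophy required.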
And I've laid out the puzzle in the clearest way I know, which is quite different from the "standard" presentation, which is deliberately designed to be as confusing as possible, because for some reason people who like paradoxes do that. My general advice when encountering a paradox is to restate it in language completely different from the original language, using brutally simple and graphically memorable terminology, including drawing pictures and stuff, so you get the deliberately contrived and confusing language right out of your head.
The Monty Hall problem lays out one of the simplest paradoxes, and it confused a lot of people when it was published in Parade and became famous. I had a decade of reasoning about conditional probabilities in real-world situations (nuclear and particle physics experiments) under my belt, and was able to see through the thickets in a day or so. But to most people the paradox is still a mystery, because reasoning about probabilities is really hard.
It's made even harder by the frankly ridiculous set-up of the problem, in which the host of a game show is required to behave like an automaton, and always do the same thing under the same circumstances. This is not how people--or game shows--actually work, and in reality Monty only opens a door when he feels like it, for whatever reason. It only takes a modest bias in the condition under which the door is opened to completely trash the logic of the puzzle. Most people are far more intuitively aware of this fact about how people behave than the niceties of probabilities.
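You can see how fragile the logic is by simulating a human Monty. In this sketch (the parameter names are mine, and the bias model--Monty opens a door with some probability that depends on whether you're holding the prize--is just one plausible way he might behave), we count only the rounds where Monty actually opens a door, since those are the only rounds where the "should I switch?" question arises:

```python
import random

def switch_win_rate(p_open_if_goat: float, p_open_if_prize: float,
                    trials: int = 200_000) -> float:
    """Win rate for a contestant who always switches, counting only
    rounds where Monty chooses to open a goat door at all."""
    wins = offers = 0
    for _ in range(trials):
        picked_prize = random.random() < 1 / 3
        p_open = p_open_if_prize if picked_prize else p_open_if_goat
        if random.random() < p_open:   # Monty decides to open a goat door
            offers += 1
            wins += not picked_prize   # switching wins iff the pick was a goat
    return wins / offers

print(switch_win_rate(1.0, 1.0))  # automaton Monty: switching wins ~2/3
print(switch_win_rate(0.5, 1.0))  # likelier to open when you hold the prize: ~1/2
print(switch_win_rate(0.0, 1.0))  # only opens when you hold the prize: 0.0
```

A Monty who is merely twice as likely to open a door when you've picked the prize erases the advantage of switching entirely, and a spiteful Monty turns switching into a guaranteed loss. The famous 2/3 answer is a fact about the automaton, not about any real game show.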
This creates enormous difficulties when trying to communicate information about the world, which is always uncertain, because it is knowledge. If it wasn't uncertain it would be faith.
It is possible the scientists who are the source of the information understand probability, but equally likely they don't. It's not hard to find scientists talking about "scientific certainty," which is an oxymoron: science is the discipline (not method) of creating knowledge, and knowledge is not certain. The discipline (not method) of science is aimed at managing and making friends with uncertainty, not eliminating it. Eliminating uncertainty is neither possible nor desirable as a goal for science, any more than the transmutation of base metals into gold is a possible or desirable goal for chemistry.
So that leaves us with the problem of communicating about risks in a time of pandemic.
A big part of the problem is that people want answers to questions we don't--and can't--have answers to. They want to know, "What can I do to stay safe?" when the only question we can answer is, "What is most likely to keep me safer?" The two questions have nothing to do with each other. They are entirely unrelated, to the extent that the first question doesn't even have an answer, beyond, "Nobody knows."
In the Monty Hall problem the question people want to know the answer to is "Assuming I'm in this situation, should I switch?" and the probabilistic discussion says exactly nothing about the answer to that question, because the answer is "Nobody knows" (other than Monty, who isn't telling). Knowing the probabilities in some idealized scenario doesn't tell you what you should actually do in a real-world situation, which is values-laden. How do you value risk over regret? Action over inaction?
The problem is we don't know what condition we're in. With regard to covid, we didn't know early on whether we were in a condition where transmission was significantly (or primarily) airborne or not. The WHO made a statement of faith about which condition we were in, and they were wrong: covid is largely an airborne virus, and WHO said it was not.
Even now there is more to learn on the question of droplet vs aerosol transmission. We know that aerosols are responsible for a great deal of covid transmission, but there is some evidence that under some circumstances droplets may dominate, which means the answer to the question “Droplets or aerosols?” is very likely, "It depends." That is: which condition we're in--droplet dominated, aerosol dominated, or a roughly equal mix--likely depends on multiple, complex, interacting probability distributions.
We aren't good at thinking about those things, and much worse at communicating about them. I'm not, and I'm an expert.
How we communicate this kind of uncertainty—and formulate policies that are both reasonably effective and politically palatable—is a largely unsolved problem, but if we want knowledge to guide public policy, it is a problem that we need to solve.
For the Monty Hall problem in particular, I like to imagine more doors. Imagine 1 car and 99 goats. You choose a door with a 1/100 chance it is the car, and a 99/100 chance the car is behind one of the other 99 doors. Then Monty reveals 98 goats, chucking out all the noise and squashing all that 99/100 goodness down into 1 door. Switch!
I find it helps me. But when I use this model to try to help other people, it doesn't. They still cannot see why it doesn't become 50:50. Barbie was right, math is hard.
The first time I heard this was from a math teacher at the old Malaspina College. He was telling it to prove that Liberal Arts students couldn't understand science and math properly and were easily fooled by real-life situations. He told the instructions incorrectly (everybody does), but I didn't know that until I read the Wiki, which does a great job of explaining the instructions. Nor is it a simple, straightforward problem that everyone will understand if they just have enough math. Really, it's a verbal problem. Almost no one (including me) understands what's going on without a diagram (like the "simple solutions" part of the Wiki). In fact, I'd say that without a diagram like the "simple solutions" diagram, there's no way you could answer this. In that sense, it's like the word problems that my daughter brings home from school. Even in Grade 4, there are questions so complex that I need a diagram to answer. But maybe that's your point.