I thought about writing this as an actual dialogue, a la Galileo, but time is short. Maybe later. This is the iii-th part of an n-part series on robustness.
Galileo’s dialogue opposed the Copernican and Ptolemaic systems of the world: heliocentrism and geocentrism. He made his friend the Pope look foolish in the process, and ended up recanting, although the story always ends with his famous apocryphal remark, “Nevertheless, it still moves.”
The systems I want to talk about are world views, which I’ll call the medical world view of physicians versus the engineering world view of, well, engineers.
To confess my bias: I’m an engineer--I was a licensed professional engineer for 25 years in Ontario and BC--but I’ve worked a lot in health care, including genomics (mostly to do with cancer), radiotherapy physics, medical imaging (everything from portal imaging to ophthalmic ultrasound), and computer-assisted surgery (mostly orthopaedics, but also some cardiac). My last academic appointment, long ago now, was in the Department of Pathology and Molecular Medicine at Queen’s, where I worked with both researchers and clinicians.
So I’ve seen a lot of how physicians think, and thought a lot about how it differs from how engineers think.
If I had to characterize the difference in one sentence, it is this: physicians think about individuals; engineers think about systems.
This is true at all scales. Physicians think about pandemics in terms of individual outbreaks and individual chains of transmission, and they think of patients as individual problems to be solved. Anyone with a chronic, systemic condition will have a story about a physician that goes along the lines of, “They couldn’t put it in a neatly labeled box so they tried to convince me it was psychological.” If it can’t be individualized it must be psychologicalized.
We’re all pretty familiar with this mode of individual-oriented thinking because it is an extremely natural one. The strength of it is that it pays attention to specific circumstances and details. You can’t make a diagnosis without taking a history, which is a gathering of individual details. My father (a physician) once described to me how my mother (also a physician) diagnosed one of the first cases of cat-scratch disease in North America by taking a good history.
Focus on the individual, on the facts in front of us interpreted by the knowledge we carry with us, is a powerful tool. I would not go to an engineer for a broken leg after a fall: they would set about figuring out how to reduce the rate of falls, not set the bone. The physician’s view of the world is extremely valuable when you’ve got an individual problem.
But it’s also a very limited tool, utterly unsuited to a huge range of systemic problems.
Consider clean water.
Back in the 1800s cholera and other water-borne diseases were a huge problem. Despite the evidence for water-borne diseases coming from a physician--Dr John Snow of Broad Street Pump fame--most physicians at the time didn’t actually believe it, and not being trained in the discipline of science they struggled to update their beliefs in the face of new evidence. This problem has only gotten worse in the modern world, as the complexity and rate of change of knowledge have both increased dramatically. Even among physicians who understood what Snow was arguing for, their recommendation was for individuals to boil water.
Engineers, on the other hand, recognized that rather than asking people to boil their own water--an individual-focused solution--there was an opportunity to create a system that delivered clean water to everyone. These systems don’t always work but they are vastly cheaper and more effective than any individually-focused solution.
Today we face the same problem with clean air, which physicians are also not dealing with very well, to the extent of actively denigrating engineering efforts to improve things. There’s nothing the world needs less right now than a physician with zero training in or understanding of fluid mechanics and HVAC offering a negative opinion on the only thing that’s likely to get us out of the ongoing pandemic.
When thinking about disinfo, there are also individual and system-level solutions. Here is a great example of an individual-focused solution, from physicians who want to think about disinfo as a kind of pandemic:
Had infodemic monitoring been in place, it might have prevented a “superspreader” event that began on October 12, 2020, when, in a misreading of a Centers for Disease Control and Prevention (CDC) report, The Federalist, a conservative online magazine that is sometimes cited by right-wing radio and cable hosts, reported that “masks and face coverings are not effective in preventing the spread of Covid-19.” Had the misleading article been caught by a dedicated team that quickly engaged possible readers online, Fox News’s Tucker Carlson might not have told his more than 4 million viewers the next evening that 85% of people who were infected with Covid-19 in July 2020 had been wearing a mask.
This is wrong in detail, which is not entirely surprising given how badly the same kind of active, focused, individual-outbreak response has worked for the ongoing covid pandemic. A response model that doesn't work very well in the context it was invented for is unlikely to do better in an entirely new context.
Furthermore, this focused approach requires picking favourites--censorship, in effect--about which I'll say more next week.
What would a system-level focus on disinfo look like?
Individual-focused solutions are reactive: classic debunking tries to cure the false belief after it has taken root. Even vaccines, the sole tool in the physician’s kit that looks remotely system-like, are often tailored to specific variants. This should be seen as a failure, not a triumph. It is the sign of a solution that is not very good.
Robust engineering solutions at the systems level tend to be passive and indiscriminate.
The Otis elevator is a great example of this. It's passive in the sense that if the elevator can’t operate properly it can’t operate at all, and it's indiscriminate in the sense that the safety brake engages no matter why the elevator's weight is not on the cables. No matter what the cause, the effect is the same: the elevator's weight on the cable is what keeps the brakes unlocked and allows the elevator to move.
In the realm of automotive safety, crumple zones are passive and relatively indiscriminate, seat-belts and airbags less so. Seat-belts are passive once fastened, but require user action to get into that state. Airbags have active control systems and deploy catastrophically, which are not great markers of robustness.
Air filters and N95s are wonderfully indiscriminate. They filter particles, not viruses, which means they work against all viruses, not just particular variants.
On the clean water side of things, chlorine in water kills bacteria, not just E. coli, and settling ponds for sewage treatment take out any number of nasties.
The more specialized or targeted a system is the more likely it is to fail in the face of something unexpected.
The more active intervention a system requires to work, the less likely it is to react in a timely or appropriate way, the harder it is to maintain, the more likely it is to fail.
Fragile solutions operate better than robust ones under ideal conditions. Robust systems continue to operate adequately under non-ideal conditions.
When dealing with disinformation, we're always dealing with non-ideal conditions, because grifters are working as hard as they possibly can to create non-ideal conditions. Their livelihoods depend on it, and their work includes everything from lobbying governments to relax legal protections to finding new ways to market hand-washing in the middle of an airborne pandemic.
So what does an engineering solution to disinfo look like?
I have the barest inkling of an idea.
People with what I've recently learned to call PHLEGM educations (Philosophy, History, Languages, English, Geography, Music) often fail to appreciate the incredible creativity that goes into engineering solutions, and here we are at the sharp end of it: I've identified the desirable features of an engineered response to disinfo, but that tells us nothing about how to build it. The engineering problem is figuring that out. Creating it out of nothing.
There are bound to be a bunch of good ideas from more creative minds than mine, but one thing we know would indiscriminately and passively reduce the propagation of disinfo is slowing things down.
Imagine social media with significant propagation delays.
Maybe six hours. Enough time to cool people down, to push them into another part of the day, and to take some of the emotional feedback out of the process.
This could actually be done at the infrastructure level, even with end-to-end encryption, via legal mandate. It could be done at the corporate level, also by legal mandate. Regulation of businesses in the name of the public good is as old as capitalism, and as regulations go, "Thou shalt not run a social media service that allows user interactions to occur in less than 6 hours" is fairly tame. It's certainly tamer than setting up "rapid response centres" that do nothing but target your political enemies while giving the disinfo from your own side a pass.
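To make the idea concrete, here is a minimal sketch of what a propagation delay might look like at the service level, assuming a hypothetical fan-out queue where every post is held for six hours before anyone else can see it. The class and field names (DelayedFanout, HeldPost, DELAY_SECONDS) are mine, for illustration only, not any real platform's API:

```python
from __future__ import annotations

import heapq
import time
from dataclasses import dataclass, field

DELAY_SECONDS = 6 * 60 * 60  # the hypothetical six-hour propagation delay


@dataclass(order=True)
class HeldPost:
    # Ordering is by release time only, so the heap pops the earliest post first.
    release_at: float
    post_id: str = field(compare=False)
    body: str = field(compare=False)


class DelayedFanout:
    """Holds every post for a fixed interval before it becomes visible to other users."""

    def __init__(self, delay: float = DELAY_SECONDS) -> None:
        self.delay = delay
        self._held: list[HeldPost] = []

    def submit(self, post_id: str, body: str, now: float | None = None) -> None:
        """Accept a post immediately, but schedule its fan-out for later."""
        now = time.time() if now is None else now
        heapq.heappush(self._held, HeldPost(now + self.delay, post_id, body))

    def release_due(self, now: float | None = None) -> list[HeldPost]:
        """Return (and remove) every post whose delay has elapsed; only these are fanned out."""
        now = time.time() if now is None else now
        due: list[HeldPost] = []
        while self._held and self._held[0].release_at <= now:
            due.append(heapq.heappop(self._held))
        return due
```

The point of the sketch is that the mechanism is dumb: it doesn't inspect content, it doesn't decide who is lying, it just slows everything down equally. That's what makes it passive and indiscriminate.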
And it's not like defining a "social media service" is some kind of incredible epistemic problem that no one has ever thought about. After all, we can define cows for the purposes of legislation, and cows are way more complicated than websites. And if you can’t think of any legally problematic corner cases in the definition of “cow” you probably don’t know enough about cows.
So there is nothing inherently impossible about implementing a passive, indiscriminate, systems-level response to disinfo on social media, which is where the primary problem is these days.
Such a system has to be imposed by an outside force, just like the regulations that keep our food safe and our water clean. But since those regulations do by and large keep our food safe and our water clean, it would be fairly stupid for anyone to say, "No government would ever impose regulations on anything!" Governments do this all the time. It's what they exist for.
This is how the last disaster of rapid mass communications was ultimately handled: a mix of legal mandates and social conventions that were worked out in the aftermath of chaos.
I often liken our current era to that between the invention of the printing press and the start of the Reformation, which was 50-odd years. Printing created the possibility of cheaply publishing almost anything, and created demand from newly-educated readers, all outside of the system of monastic scriptoria that had had a monopoly on such activities for a thousand years.
This new ecosystem of ideas allowed all kinds of crazy, obviously wrong, beliefs to get published, like "every person is equal in the eyes of God", and "the Church has been using its position to accumulate money and power for centuries." Crazy, right? They were at the time. But the same time also spawned thoughts like this paragon of good sense:
I have said that, in my opinion, all was chaos, that is, earth, air, water, and fire were mixed together; and out of that bulk a mass formed – just as cheese is made out of milk – and worms appeared in it, and these were the angels. The most holy majesty decreed that these should be God and the angels, and among that number of angels there was also God, he too having been created out of that mass at the same time, and he was named lord with four captains, Lucifer, Michael, Gabriel, and Raphael.
While I'm obviously making fun of what's crazy and what's not, the people at the time who were being newly exposed to these ideas via the printing press had no way of telling, and it took several more-or-less bloody centuries of legal, social, educational, and religious experimentation to reach a point where society was reasonably stable again.
That's the scale we need to be thinking on. What we think of as a tsunami of misinformation is only just starting.
It's unlikely that legal remedies will be rapidly forthcoming, and the first ones tried will be focused on censorship, because no one cares about disinfo, whereas almost everyone wants to stop someone else from speaking their mind. Pretending to care about disinfo is a great excuse for a private political corporation to do what they want to do anyway, which is regulate speech.
In the meantime, another approach we can all take is to refer to what Kipling called "The Gods of the Copybook Headings", which were pithy sayings at the top of each blank page in school-children's workbooks. "A stitch in time saves nine" and stuff like that, much of it direly Christian in the best fashion of a Victorian moralist.
If you have an e-mail .sig file you can easily put in something like: "Covid spreads through the air like smoke", "Covid is the third leading cause of death in Canada in 2023", "N95 respirators worn indoors in public and in crowded outdoor places are effective protection against covid." “In 2023 the covid pandemic is not over.” Simple declarative sentences. Facts.
Passive, and indiscriminate.
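If you want to automate it, here is a small sketch of how you might rotate one of those facts into a Unix-style ~/.signature file, assuming a hypothetical facts.txt with one declarative sentence per line (both file names, and the rotate_signature function, are illustrative assumptions, not any standard tool):

```python
#!/usr/bin/env python3
"""Rotate one plain statement of fact into a ~/.signature file."""

import random
from pathlib import Path

FACTS_FILE = Path.home() / "facts.txt"       # hypothetical: one fact per line
SIGNATURE_FILE = Path.home() / ".signature"  # read by many Unix mail clients


def rotate_signature(name: str = "Your Name") -> None:
    # Pick one non-blank line at random and write it below your name.
    facts = [line.strip() for line in FACTS_FILE.read_text().splitlines() if line.strip()]
    fact = random.choice(facts)
    SIGNATURE_FILE.write_text(f"{name}\n{fact}\n")


if __name__ == "__main__":
    rotate_signature()
```

Run it from cron once a day and every message you send carries one simple, declarative fact, without you having to think about it again.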