This is part ii in a series, which will likely end with part iii next week. We’ll see.
Last week I laid out the "how" and "why" of the problem of misinformation and disinformation (which I'm collectively calling "disinfo") as I understand them:
1) People believe things for feelings, not reasons.
2) People rely primarily on a fast, intuitive system of thought that is incapable of accommodating objective risk measures, and which uses generally poor heuristics instead.
3) Grifters can hack these two features of human cognition to create ideas that people will believe for good feelings. This hacking process is fundamental to our current mess, because the internet has created an ideal vector for carrying out the hack.
4) Once victims have identified with a grift they will tend to engage in defensive cognition when they encounter contradictory facts.
These are all heuristics. Don’t take them too literally. But they may be useful in understanding the world, regardless. It’s not as if anyone is ever using anything but heuristics in most of their interactions with reality.
The goal of these heuristics is to displace others that people often depend on, like “People believe things for reasons”, “People don’t do things that are obviously counter to their stated interests”, “Nice people are basically honest”, and “I am very good at changing my mind when faced with new information.”
As an example of these bad heuristics in action, I’ve seen many people claim that “the economy” is the reason for the premature withdrawal of covid protections in BC and elsewhere. This is obviously stupid: the Canadian economy is taking huge hits right now due to worker shortages that are a direct consequence of our ill-considered “let ‘er rip” covid policies adopted in early 2022. But people believe things for feelings, not reasons, and a lot of people have strong feelings about “capitalism” that make it easy to blame for everything bad.
The idea that “no one does things against their obvious best interest” plays a large role here too. “They don’t understand probability and are therefore doing things that will harm their own interests” is more plausible than “they have a nefarious plan to enslave the working class, so what appears to be against their interests is actually in their favour”.
This is a problem I’ve run into many times as a project manager: scheduling and estimation in software development are incredibly easy, but getting senior management to believe the results of good scheduling and estimation practice is incredibly hard, even though it’s in the economic and professional interests of senior management to do so. Because it’s impossible for managers to see the probability that unknown-but-predictable things will slow the ideal schedule down by a known amount, they refuse to acknowledge the possibility. The heuristic “people are probability blind” is more useful in understanding this than the heuristic “people never knowingly act against their own interests.” The latter leads one down the path of paranoia at best, conspiracy theory at worst. The former suggests avoiding any mention of probability in presentations to senior managers.
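To make that concrete, here’s a minimal sketch, in Python, of the kind of arithmetic I mean. The task durations are invented three-point estimates, and the whole thing is illustrative rather than a recommendation of any particular estimation method:

```python
import random

# Hypothetical project: five tasks, each with an (optimistic, most-likely,
# pessimistic) duration in days -- a standard three-point estimate.
tasks = [(3, 5, 10), (2, 4, 9), (5, 8, 15), (1, 2, 6), (4, 6, 12)]

def simulate_schedule(n_trials=100_000):
    """Monte Carlo: sample each task from a triangular distribution
    and sum them, giving a distribution of total project durations."""
    return sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n_trials)
    )

totals = simulate_schedule()
ideal = sum(mode for _, mode, _ in tasks)   # the "ideal" schedule
p50 = totals[len(totals) // 2]              # median simulated outcome
p90 = totals[int(len(totals) * 0.9)]        # 90th percentile

print(f"Ideal (all most-likely): {ideal} days")
print(f"Median simulated total:  {p50:.1f} days")
print(f"90th percentile:         {p90:.1f} days")
```

The sum of the most-likely durations comes out well below the simulated median, because each task’s distribution has a long right tail. That gap is the known amount by which unknown-but-predictable things slow the ideal schedule down, and it’s invisible to the fast system.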
Given the heuristics I’m suggesting, this week I'm going to talk about what can be done about grifters and disinfo. What we want is a society that is robust against perturbations by grifters who are doing everything they can to hack the heuristics we use to reason intuitively about the world. My argument is that incorporating the heuristics I’ve presented here into our intuitions will help, whereas the commonplaces they run counter to mostly don’t.
For example, I've been peripherally involved in "skeptic" and "debunking" communities off and on for decades, but never quite bought into the idea that engaging in direct opposition to disinformation in the classic way was very useful. So far as I know only one person has ever been convinced by a "debunking" book, and that book was quite different from anything else I've encountered: The Bermuda Triangle Mystery: Solved! didn't talk about how absurd the various accounts of disappearances were. It just went through them, case by case, and referenced actual records--particularly weather records--for the day each event was supposed to have happened.
That convinced me, at the age of 13 or so, that the various mystery-mongers so popular in the ‘70s were running scams to sell books. Given the continued prevalence of such scams, I don’t think the considerable debunking literature--all based on the idea that people believe things for reasons--has been effective.
Science communicators also spend a lot of time trying to educate the public with regard to reality and risks, and it probably hasn’t been a total failure: people of each of the past few generations are likely better at changing their beliefs for reasons than their parents were. But the return on investment has been very poor, because all science gives us is knowledge. Turning that knowledge into feelings is hard. Stories help, and there's even someone trying to sell "story creation for scientists as a service".
But grifters focus a lot on risk: the risk of disease or death or degeneration or other things starting with "d", due to wifi or disco or whatever they think will give people the feelings they need to believe in the grift. Then the grifter will sell them a cure.
Trying to address this kind of hack by teaching probability calculus is unlikely to have a large effect. Probability is something I've dealt with professionally for decades. I'm fairly good at it, but I'm still easily fooled in informal reasoning, because no matter how good my slow brain is at probability, my fast brain still has no clue it exists. It’s like handing a spectrometer to a colour-blind person: sure, they can use it to tell red from green, and that might be useful, but what they see--what they grasp immediately via their senses--is always going to be just shades of grey.
Getting past the probability-blindness of the fast system of intuitive reasoning means teaching people how to rely on the slow, formal, system. That's very difficult. All slow thinking is painful and most people aren't great at it--I'm certainly not--and probability specifically is an incredibly hard subject. We understood the motions of the planets and the rise and fall of the tides for the better part of a century before we really started to make inroads into probability.
So we don't want to be trying to get ordinary people to reason properly about marginal vs conditional vs total probabilities, at least not if we can help it. To go back to the sharks vs cows example: there are a lot more cows than sharks in the world, and a lot more people in routine contact with them. "Per encounter" risk for cows is lower than that for sharks, but by how much? How often do you encounter a shark when swimming vs a cow when out for a hike? Personally I’ve run into sharks more often, but maybe I’m weird. And so on: all these questions require people to do a lot of thinking when they just want to go for a swim or a hike, so we put up shark nets around resorts, which in my view are irrational and unnecessary, but more-or-less benign as these things go.
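If you’re curious what the slow-system version of that looks like, here’s the arithmetic with entirely made-up numbers (the real figures would take actual research, which is exactly the problem):

```python
# Illustrative only: these counts are invented to show the arithmetic,
# not real statistics.
deaths_by_shark = 5            # hypothetical deaths per year
deaths_by_cow = 20             # hypothetical deaths per year
shark_encounters = 1_000_000   # hypothetical human-shark encounters/year
cow_encounters = 500_000_000   # hypothetical human-cow encounters/year

# Marginal (overall) risk: cows kill more people in total.
print(deaths_by_cow > deaths_by_shark)              # True

# Conditional (per-encounter) risk: sharks are worse per meeting.
risk_per_shark = deaths_by_shark / shark_encounters  # 5e-06
risk_per_cow = deaths_by_cow / cow_encounters        # 4e-08
print(risk_per_shark / risk_per_cow)                 # ~125x
```

Four numbers and two divisions: trivial for the slow system, and completely unavailable to the fast one.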
Absent the ability to train people to put aside their intuitions and learn one of the most difficult disciplines of thought there is, we have to figure out how to change their intuitions about cows vs sharks (and the very framing of "cows vs sharks" is one of the sources of our intuitions: in a fight between a cow and a shark, I'm pretty sure the shark would win).
This is a marketing problem, and the challenge is that the grifters we're fighting are already on top of it and they aren't bound by facts. All they have to do is make up some plausible bullshit, whereas we have to work within the confines of knowledge.
This gives grifters an inherent mathematical advantage. They are free of a major constraint; we are not. They can move anywhere on the board; we can only move where the facts might plausibly take us.
This is why decades of earnest attempts to educate people about risks have not made nearly as much headway as one might like: grifters move in on a field, spread a bunch of lies that are carefully crafted to hack our intuitions, create a bunch of fear, sell a bunch of "cures" for whatever they've created a fear of, and then debunkers come in behind them, exposing the lies, challenging identities, and when they eventually start making some headway, the grifters move on to their next thing. Lather, rinse, repeat.
And once people have been fooled they are motivated to stay fooled.
It's a grift that keeps on grivving.
So while we can engage in an endless marketing war with grifters, and it probably does help, I'm unconvinced it's the optimal response. The grifters have an inherent advantage. I like to attack where the enemy is weak, not where they are strong.
We've had some success in making certain kinds of grift illegal or impractical. The existence of food and drug agencies is something of a triumph in this regard, and should be vigorously defended.
Regulating speech, however, always gets turned into a tool of political advantage and power, and never results in the ends its promoters claim to favour. So regulation can help with products, but not ideas.
Public education can improve our grift detection and avoidance skills. This is hard, but we're still very early in our learning about how to educate folks in the brave new world of social media.
Those are some of the things we’ve done in the past that have worked a little bit, often at the price of enormous effort.
My suggestion is that internalizing the heuristics I’ve presented here may help reduce our vulnerability to grifting, and be useful in interacting with people who are its victims.
I know a lot of people who have been frustrated by trying to understand the reasons why people believe crazy, dangerous ideas about nutrition or health or whatever. Using the heuristic that “it’s never reasons, it’s always feelings”, understanding probability-blindness, and looking for how a particular grift uses those basic facts to hack a person’s beliefs in a self-sustaining way may be more helpful than presenting them with fifteen studies that debunk the core claims of the grift.
It may not be, either: remember, unlike pretty much everyone else, I am often wrong about stuff. That’s partly because I’m not uptight about trying different approaches, but I’m also very disciplined about seeing if they work. Sometimes they do. I use a resistance inhaler for hypertension, and the effects have been dramatic compared to ACE inhibitors, which I’ve verified with daily blood pressure measurements that started a few weeks before I began using the device, continued for a month or two after, and that I still take now and then. My BP dropped from mildly elevated into the normal range over the course of that time. Result.
But sometimes the crazy ideas I try don’t work. I find that out by systematic observation and recording, abandon the idea, and move on. Also result. Because all knowledge is good knowledge: knowing something doesn’t work is as interesting--albeit often not as practical--as knowing something does.
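For what it’s worth, “systematic observation and recording” doesn’t need to be anything fancier than this kind of before/after comparison; a sketch, with invented readings standing in for my actual logs:

```python
from statistics import mean, stdev

# Invented daily systolic readings (mmHg): a few weeks before the
# intervention, and a month or two after.
before = [142, 138, 145, 141, 139, 144, 140, 143, 137, 146]
after = [124, 128, 122, 126, 129, 121, 125, 127, 123, 126]

drop = mean(before) - mean(after)
print(f"Before: {mean(before):.1f} ± {stdev(before):.1f} mmHg")
print(f"After:  {mean(after):.1f} ± {stdev(after):.1f} mmHg")
print(f"Mean drop: {drop:.1f} mmHg")
```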
This is a view of knowledge that is deeply counter to that often seen in philosophy, where a single error is considered capable of wiping out an entire edifice of understanding. Bayesian knowing, in contrast, is robust.
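A toy illustration of the difference, with invented numbers: one step of Bayes’ rule dents a credence without demolishing it, whereas the single-error view would send it straight to zero.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Start fairly confident in a hypothesis...
credence = 0.9
# ...then observe something the hypothesis makes unlikely
# (20% likely if true, 80% likely if false -- invented numbers).
credence = bayes_update(credence, 0.2, 0.8)
print(round(credence, 3))  # ~0.692: dented, not demolished
```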
Next week I’ll talk about the contrast between what I think of as the “medical” vs the “engineering” response to disinfo, and close with a meditation on rationalist vs Bayesian views of knowledge.
I agree with this.
I have spoken personally to hundreds of conspiracy theorists, and been involved in technical discussions between such people and technical experts. Like you, I have seen people who claimed they were convinced by technical explanations, but really this almost never happens. In fact, there are many people who come to see that their beliefs are irrational, but almost always this does not happen because the evidence convinced them. In my experience it usually happens because they can't achieve any social goals by making their claims...people start laughing at them and they can't escape it.
The term "feelings" is better thought of as non-rational processes. There are many theories of what someone's best "interests" are. Sociobiology, for example, leads to all kinds of predictions about behavior that wouldn't be the same as those of Price Theory. There are many suggestions for why errors in the calculation of your best interests happen. Entire fields like Behavioral Economics are built on this. The so-called skeptics prefer a model based in errors of social cognition.
But you appear to mean something different than just systematic error. You seem to mean thoughts that cannot be accounted for by any sort of logic. I agree completely that many kinds of behaviors cannot be accounted for by any sort of logic. I made this argument on the James Randi Forum when I said most conspiracy theories are based in something I called "confusion". After spinning myself in circles for a long time, I got the math question wrong. This wasn't because of a systematic processing error. There was a reason for my error, and it was based in my limited cognitive power, but no amount of logical training will make a difference in my ability to get the right answer here. What gets confusing to observers is that there are people who, by virtue of their superior cognitive and logical training, did get the answer correct, but the idea of spending a weekend doing symbolic logic to achieve this is ludicrous.