This is a bit of an excursion from robustness, but it loops back to it next week.
I've been talking to a friend this week about medical and nutritional misinformation and disinformation, and have reached a point where I think I can say something coherent about what I think the problem is and where to look for a solution.
That we have a problem with misinformation (falsehoods) and disinformation (falsehoods designed to interfere with the transmission or acceptance of knowledge) is I think uncontroversial. Because "misinformation and disinformation" is long and awkward, I'm just going to call it "disinfo".
There are three questions I'm interested in:
1) How does disinfo work?
2) Why does disinfo exist?
3) What can be done about disinfo?
Given we know so much today and have so much evidence to back it up, how is it that people persist in ignoring all that evidence and believing things that are counter to it? For example, I know people who still think Canada's New Democratic Party--an amoral gang of power-hungry leftist apparatchiks who will excuse any act of environmental or social destruction so long as it aids their electoral chances--are the Party of the Moral High Ground. How does that work?
My analysis of how disinfo works goes like this:
People believe things for feelings, not reasons. That is: people believe things based on the feelings those beliefs create, not the reasons they are given.
This is an example of what's called "teleological causation", where the end result or purpose ("telos", in Greek) is the cause of the action. Teleological causation is odd because the cause appears to come after the effect: I work for money, but I do the work first and then I get paid.
Teleological causation happens in systems with feedback and memory, so we only see it in biological systems, which are full of feedback loops. From this point of view the brain is a large collection of feedback loops, all looking for fulfillment: getting money at some future date is the "final cause" of my working today, but my desire for money precedes the actual work, so the temporal ordering of causality is ultimately preserved.
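If you like, the whole structure fits in a few lines of code. This is just a sketch of the point, not a model of anything real--all the names are mine--but it shows how a stored goal (the "final cause") exists in memory before any action is taken, so the apparently backwards causation is really an ordinary forward process:

```python
# A minimal sketch of teleological causation as feedback plus memory.
# The goal ("final cause") is stored *before* any action is taken, so
# the apparent backwards causation is ordinary forward causation.

def goal_seeking_loop(current, goal, step=1.0, tolerance=0.5):
    """Repeatedly act to close the gap between the current state and a stored goal."""
    history = []
    while abs(goal - current) > tolerance:
        # Each action is caused by the remembered goal, not by the future state.
        action = step if goal > current else -step
        current += action
        history.append(current)
    return history

# E.g. "working for money": the desire (goal) precedes every unit of work.
print(goal_seeking_loop(current=0.0, goal=5.0))  # [1.0, 2.0, 3.0, 4.0, 5.0]
```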
Now on the one hand, saying that people believe stuff because it makes them feel good seems pretty banal. On the other hand, I've seen any number of quite clever people wringing their hands and asking, "What could possibly be the reason for people to believe <<some incredibly stupid idea, like 'not getting sick weakens your immune system, as if it was a muscle that needed exercise, which we know is a misleading and wrong way of thinking about immunity'>>???"
This is why I put this the way I do: People believe things for feelings, not reasons. The cause of those beliefs is the feelings. That's the whole story. There are no reasons.
The feelings that cause people's beliefs are the motives in motivated reasoning.
The role of reasons in beliefs is to justify believing them, because most of us don't want to look like intellectually incontinent children who insist on believing whatever makes us feel good. And most of us have more-or-less reluctantly conceded that certain critical beliefs are painfully exempt from the blanket pardon we give ourselves for this kind of behaviour, and we actually do hold those particular ideas by virtue of data and reasoning, rather than saying, "It makes me feel good to think this is true, and I can use these hand-waving claims about a few facts to make myself look non-demented to people who know even less about the question than I do."
The idea that respiratory diseases are spread by large, short-range droplets is a good example of a belief held for feelings. It's complete nonsense that didn't withstand even a few months of scrutiny after engineers and scientists started investigating it--and actual experts knew it to be false decades ago--but it was held for a century by "experts" in infectious diseases who had no actual knowledge of or training in fluid mechanics, an incredibly difficult subject that takes years of study to master. They drew false inferences from limited data, never bothered to subject them to further testing, and actively dismissed and diminished contradictory experiments. The justification for holding this idea came down to "close contact leads to a higher rate of infection", which is another way of saying, "I can't even write the diffusion equation down, much less solve it."
It wasn't a question of "five microns vs one hundred microns"; it was a question of the most influential people setting these standards having no understanding whatsoever of the fluid mechanics of aerosol transport.
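For anyone who wants to see what even the crudest fluid mechanics says, here's a back-of-the-envelope sketch using Stokes' law for small spheres settling in still air. The numbers are standard textbook values for water and air, and the calculation deliberately ignores evaporation, turbulence, and everything else that makes the real problem hard:

```python
# Back-of-the-envelope Stokes settling velocities for respiratory particles.
# Stokes' law: v = (2/9) * (rho_p - rho_f) * g * r^2 / mu
# (valid for small spheres at low Reynolds number)

RHO_WATER = 1000.0  # kg/m^3, density of a (mostly water) droplet
RHO_AIR = 1.2       # kg/m^3, density of air
MU_AIR = 1.8e-5     # Pa*s, dynamic viscosity of air
G = 9.81            # m/s^2

def settling_velocity(diameter_m):
    """Terminal settling velocity of a small water droplet in still air."""
    r = diameter_m / 2
    return (2 / 9) * (RHO_WATER - RHO_AIR) * G * r**2 / MU_AIR

for d_um in (100, 5):
    v = settling_velocity(d_um * 1e-6)
    t = 1.5 / v  # seconds to fall 1.5 m in perfectly still air
    print(f"{d_um:>3} um droplet: {v:.1e} m/s, ~{t:.0f} s to fall 1.5 m")

# 100 um: ~0.3 m/s    -> out of the air in about 5 seconds (a "droplet")
#   5 um: ~7.6e-4 m/s -> half an hour or more even in still air; in real
#         rooms, air currents dominate and it drifts like smoke (an "aerosol")
```

And still air is the most droplet-friendly assumption possible: the two sizes live in completely different physical regimes, which is exactly the kind of thing a working knowledge of fluid mechanics makes obvious.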
There were no reasons. There were only feelings. Figuring out which feelings lie behind a particular belief can be difficult, and it often isn't the obvious ones. Peer pressure can play a huge role, and as I'll argue below, one of the biggest feelings behind belief is "belonging".
The second element in how disinfo works is that our feelings do not understand probability even a little bit. We are "probability blind": chance is not part of our emotional sensorium.
Daniel Kahneman goes into this in some detail in "Thinking, Fast and Slow" and there's a tonne of literature on it. The "fast", intuitive process that we use to draw most of our conclusions about the world does not have any sense of objective measures of probability or risk, and so it uses a bunch of heuristics as stand-ins for them. Our intuitive, emotional brain--which does almost all our thinking--doesn't calculate the odds. It does a quick glance around its emotional environment and looks for the stuff that appears the most relevant.
Which is why people are more afraid of sharks than cows even though cows kill ten times as many people every year.
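For a sense of scale, using commonly cited ballpark figures for the United States (rough numbers for illustration, not something I've re-verified):

```python
# Commonly cited ballpark annual death figures for the US, illustration only.
deaths_per_year = {"cows": 20, "sharks": 2}

ratio = deaths_per_year["cows"] / deaths_per_year["sharks"]
print(f"Cows kill roughly {ratio:.0f}x as many people per year as sharks.")

# Yet the *felt* risk runs the other way: sharks are vivid and cinematic,
# cows are familiar and boring, so the availability heuristic inverts the odds.
```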
How disinfo works is that we believe things because of feelings, not reasons, and our feelings are almost certain to get anything involving probability wildly wrong.
To understand why disinfo exists requires two more steps.
The first is: there are grifters. Look around.
That is, there are people who will feed people beliefs that create good feelings in them. Our heuristics for forming beliefs are pretty bad, but that doesn't mean our beliefs have to be terrible. After all, those same heuristics kept our ancestors alive long enough to have and raise kids. They might still go wrong now and then on their own, but just in case they don't, there exists a type of person who will deliberately set out to hack these imperfect mechanisms of belief formation for their own profit and power.
Marketing organizations like political parties spend almost all their time doing this: they want to create an association between positive feelings and their brand so they can go on to do whatever they damned well please while their voters still support them because of the good feelings that result.
If anyone is supporting a political party that makes them feel good, they are almost certainly a victim of this. The same is true of corporate branding: Brand X jeans aren't nearly so superior to Brand Y as to justify the gulf in people's feelings about them. Marketing is almost everything. In politics, it really is everything: create the feelings, the votes will follow, no matter how much old growth forest you cut, no matter how many peaceful indigenous protesters you jail, or how many people are unable to access your viciously-defended "universal" health care system.
Second: people engage in defensive cognition. The most famous historical case of this is documented in Leon Festinger's "When Prophecy Fails". Some nut convinced a few dozen people back in the '50s that the world was going to end on a particular date. It didn't. The prophecy was revised. It still didn't. And so on. Some members of the group got disenchanted and went on with their lives, but the ones who remained became stronger in their belief.
The same phenomenon is observable with QAnon today: Q has been gone from the 'Net for almost two years, but there is a hard core of true believers whose identity is so wrapped up in the conspiracy that the failure of all the predictions hasn't changed their beliefs. It has in fact caused them to build stronger defenses around them. They have become better able to reject contradictory information.
To distill this position on the how and why of "the problem of disinfo" into four points:
1) People believe things for feelings, not reasons.
2) People rely primarily on a fast, intuitive system of thought that is incapable of accommodating objective risk measures, and which uses generally poor heuristics instead.
3) Grifters can hack these two features of human cognition to create ideas that people will believe for good feelings. This hacking process is fundamental to our current mess, because the internet has created an ideal vector for carrying out the hack.
4) Once victims have identified with a grift they will tend to engage in defensive cognition when they encounter contradictory facts.
These four features between them answer the how and the why.
Next week I'll talk about the "What to do?" with a focus on robust, engineering-inspired approaches, rather than fragile physician-inspired ones. Coming up with novel solutions to new problems is something engineers are trained to do and something we practice every day, so when faced with a novel problem it's a very good idea to talk to an engineer at some point. Probably even more than one.
As you know, I have a problem with this idea. I think I now understand this problem. Keep in mind, this is not a statement about its truth value. It may even be a reflection on my education. But here goes...
This is a kind of social theory. It is not a literature review or historiography. It is the kind of thing from which empirical statements are derived. I have two problems with this. The first of these is the easiest to understand. The second problem is quite difficult to explain.
1. This idea is completely ahistorical. It may have a history, but I don't know what it is. It is not derived from existing social theory of any kind. It has no precedent in the thinking of, for example, Marx, Durkheim or Weber. It is, in that sense, just some stuff you made up. This may not be a problem, but it makes it hard for me to follow because you are not addressing any of the central issues of social theory.
2. This problem deals with the huge number of empirical statements you make. It's vast. Almost every sentence is a statement with empirical value. Some of these statements have already been investigated extensively. I am going to focus on one of your central statements:
" People believe things for feelings, not reasons."
But there are many, many, many... others.
I don't know what you mean by "people". Is this everyone? Or just some people? This is NOT a minor point to me. Is this meant to be a statement about the essentialist nature of people, like 'All people have a brain'? Is it a defining statement about what makes us human? And that someone whose beliefs are not caused by their feelings is not entirely a person? You are NOT saying that individual differences in feelings result in individual differences in beliefs? That people with more or less of some 'feeling' have more or less of some 'reason'? Or are you? So what is the nature of this relationship?
There is a whole literature that speculates on the origins of beliefs and the nature of believing. Some of it is philosophical. Some of it relates to machine learning. I don't know much about that work. There is also a vast empirical literature that gets collected together by psychologists. In this work, I think, it would be wrong to say that "feelings cause beliefs".
It is a vast literature. It is worthy of a lifetime of study. But let's look at your use of the concept of cognitive dissonance and see where that takes us. Cognitive dissonance is NOT an affective mechanism. There is reason to believe it is a physical mechanism in your brain and body. It happens all the time. Some educators describe it as a key aspect of learning. Festinger and others focus on the errors that cognitive dissonance produces for the purpose of illustrating its existence, not because it has any logical connection to mistakes. You might be using the term 'feelings' to mean something different than affect, but that would make this a different kind of statement, and one you would have to clarify.
Beliefs are predicted by a variety of factors. If we made an equation that predicted individual differences in beliefs, there would be many different types of variables in there. Some of them would be affective, but others would be individual or social. In fact, I think we all know this. This is what leads me to wonder if you don't mean this in some essentialist sense. This is also a point you would have to clarify.
I guess my problem is, where did this idea come from? It's almost like it's a bunch of stuff that makes sense to you, and you're going to see if you can convince some other people. This is very different from the kind of independently reproducible argument you'd want associated with data, for example with Covid. What I would like to know more about is where this line of thought came from. Is it just a common-sense sort of argument, albeit common sense at a high level?