Philosophy, particularly of the rationalist kind pioneered by Plato and turbo-charged by Descartes, puts a great deal of weight on a kind of universal consistency criterion: every idea is supposed to be fully and precisely consistent with every other idea, with no room for uncertainty or tolerance.
This criterion was a natural consequence of the fragility of the tools philosophers had available, and also of the "mathematics envy" that many philosophers have: they believed that mathematics was a single body of entirely self-consistent thought that followed by deductive logic alone from some basic premises or axioms. Kurt Gödel proved that picture false in the 20th century.
Euclid's geometry, which is the special case of geometric reasoning that holds on flat surfaces, was taken as the exemplar, and philosophers thought they could emulate it in more general contexts.
This system of thought was supposed to be both unitary and certain: that is, once you knew its axioms, it should be possible to infer the whole system via deductive processes alone, and once you had done that you knew everything that can be known. There is only one system, it is fully self-consistent, and it contains everything.
For reasons that would make Bayes spin in his grave, this stuff was called "rational", and its inevitable failure was called "irrational". This is a problem because various kinds of gibberish, from socialist utopias to libertarian nightmares, also claim "rationality" as their highest value, and their advocates will happily kill anyone who disagrees with their jackbooted vision of the future. The failure of the rationalists' insane programme was--via the magic of amphiboly--a boost to a diversity of murderous cretins in the twentieth century, some of whom are still with us in spirit even today.
The failure of the (ir)rationalist programme was inevitable because deductive logic is famously fragile. Everyone from Aristotle to Sir Terry Pratchett has pointed this out.
When Sherlock Holmes says, "Eliminate the impossible, and whatever is left, however improbable, must be the truth", he is speaking nonsense: "impossibility" generally depends on some form of deductive proof, and deductive proofs go wrong in dozens of different ways. The tiniest deviation shatters the whole structure, leaving you with nothing. That is the essence of fragility.
But the world, if it's anything, is robust, isn't it? So a fragile system that falls to bits at the first puff of uncertainty can't possibly have anything to do with the world, can it?
This gulf between the robust nature of reality, which famously is what keeps on existing even when you don't believe in it, and the fragile nature of "truth" as rationalists conceived of it, is kind of a clue that rationalists are not talking about reality: their systems, like those of pure mathematics, may have some interesting internal features, but they have nothing to do with what’s real. They can't: reality is robust. Rationalist systems are fragile. A fragile description of a robust system is broken and wrong. It's not, uh... consistent.
And a kind of consistency is desirable: just not the kind rationalists fantasize about.
The key feature of Bayesian updating is that it is the only way to update our beliefs such that two people presented with the same information but in a different order will reach the same conclusions, at least up to differences in their prior beliefs. This is consistency that's worth pursuing: it would be a bit weird if learning A and then B produced conclusions that were dramatically different from first learning B and then only learning A afterwards.
The key word here is "dramatically", though: slightly different beliefs? Beliefs that are within uncertainty of each other? Sure. Why not?
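Here's a toy sketch of that order-independence in Python--the coin example and all the numbers are mine, not anything canonical--estimating a coin's bias by updating on one flip at a time, in both orders:

```python
import numpy as np

# Toy example: estimating a coin's bias, updating on one flip at a time.
thetas = np.linspace(0.01, 0.99, 99)        # candidate bias values
prior = np.ones_like(thetas) / len(thetas)  # flat prior

def update(belief, heads):
    """One Bayesian update: multiply by the likelihood, renormalize."""
    likelihood = thetas if heads else (1 - thetas)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Observe heads then tails...
p1 = update(update(prior, True), False)
# ...or tails then heads.
p2 = update(update(prior, False), True)

print(np.allclose(p1, p2))  # True: the order of the evidence doesn't matter
```

Multiplication commutes, and Bayesian updating is just multiplication by the likelihood followed by normalization, which is where the order-independence comes from.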
Deductive proof is hard, and its conclusions are almost always dependent on a single thread of reasoning, every step of which must be correct. Worse, any real proof rests on multiple auxiliary hypotheses, any one of which could be wrong, or irrelevant, or otherwise break the whole business and result in labeling a plausible conclusion "impossible".
Improbability, on the other hand, is robust: if something is improbable it means it would require us to be wrong about a whole bunch of diverse and reasonably well-known stuff.
Improbability is here being used as a stand-in for implausibility, and Bayes teaches us that plausibility and implausibility are all that matter. The kind of certainty sought by "rationalist" philosophers does not matter: it is neither interesting nor achievable.
What the rationalists were looking for was intolerant agreement: their deductions had no room for uncertainty, which is vital in more than just the realms of thought.
Consider: a machinist building a device has to create parts that are within tolerance of the desired dimensions, neither too big nor too small in any respect. Realistic tolerances are around one thousandth of an inch, with precision machinists able to get down to a tenth of that.
Parts that are within tolerance fit together well enough to work.
Do they have a "perfect" fit?
What does that even mean?
It can't mean "zero tolerance", because zero is not a tolerance: tolerance is the range a value must fall within if it's to be good enough for going on with.
Zero is not a range, so it's not a tolerance.
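To make "within tolerance" concrete, here's a trivial sketch (the names and numbers are made up, not from any real shop):

```python
def within_tolerance(measured, nominal, tol):
    """A dimension is good enough if it falls within nominal +/- tol."""
    return abs(measured - nominal) <= tol

# A nominally 1.000" pin with a typical machinist's tolerance of 0.001":
print(within_tolerance(1.0004, 1.000, 0.001))  # True: good enough
# "Zero tolerance" degenerates to exact equality, which no measurement
# of a real part will ever satisfy:
print(within_tolerance(1.0000001, 1.000, 0.0))  # False
```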
Tolerance, uncertainty, error... these are all words for much the same thing, and on them our knowledge of reality depends.
This creates room in our descriptions of the world for a certain kind of inconsistency: so long as things are consistent within the required tolerances, our description is useful.
For example, I've been playing with fluid mechanics while working on various upper-room UVC box designs, and was delighted to find a quite lovely instance of this in some code for handling buoyancy forces in a fluids simulation.
Fluids move under the influence of various forces, and one of them is the tendency of warmer fluid to rise when it's surrounded by colder fluid. Motion of this kind is called "convection".
As it turns out, the difference in density that is the cause of this motion--warmer fluids are less dense, which is what causes them to rise in accordance with Archimedes' principle--is in many interesting cases negligible... even when the buoyancy forces are not.
Consider the wood stove that's sitting a couple of meters away from me just now, keeping off the chill of the winter night. The air immediately above the stove is significantly warmer than the rest of the room, but it hasn't doubled in volume or anything. It rises because it's lighter, but that motion is noticeable only because every other source of motion is tiny. The degree of expansion that drives it can comfortably be neglected in a fluid-mechanical model while still achieving more than adequate accuracy for a wide range of practical purposes.
A fully consistent model would have to account for the difference in density between the hotter air and the colder air around it, doing some complex book-keeping to maintain conservation of mass as the less dense air pushes out to displace the air around it, instead of just magically rising without changing its volume.
A working model instead just adds a buoyancy force to the equations--making it a function of temperature--and leaves the gas density unchanged.
After all, what possible reason could there be to add complex density-changing code to the system when the density change is negligible--that is, within tolerance--while the buoyancy forces are not? Why model the cause when we can model the effect with negligible loss of accuracy?
It would be irrational to insist on that kind of consistency, wouldn't it?
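In case it helps, here's a minimal sketch of that shortcut--what fluid dynamicists call the Boussinesq approximation--in Python. Everything here (the grid, the names, the constants) is hypothetical, a toy rather than my actual simulation code:

```python
import numpy as np

GRAVITY = 9.81        # m/s^2
BETA = 1.0 / 293.0    # thermal expansion coefficient of air, ~1/T (1/K)
T_AMBIENT = 293.0     # reference (room) temperature, K

def buoyancy_accel(temperature):
    """Upward acceleration from buoyancy, with density left untouched.

    Instead of recomputing density and doing the mass-conservation
    book-keeping for an expanding parcel, we model the *effect*: warm
    parcels feel an upward force proportional to their temperature excess.
    """
    return GRAVITY * BETA * (temperature - T_AMBIENT)

def step_vertical_velocity(v, temperature, dt):
    """One explicit time step for the vertical velocity field."""
    return v + dt * buoyancy_accel(temperature)

# Air just above a stove, ~80 K warmer than the rest of the room:
temp = np.full((4, 4), T_AMBIENT)
temp[3, 1:3] = T_AMBIENT + 80.0
v = step_vertical_velocity(np.zeros((4, 4)), temp, dt=0.01)
```

The density field never changes; the only trace of thermal expansion left in the model is the buoyancy term itself.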
Insisting on deductive consistency--eliminating the impossible--rather than Bayesian plausibility--taking the implausible much less seriously--prevented philosophers from understanding reality for thousands of years. It's still holding some of them back.
Failure to grasp the idea of tolerance leads rationalist philosophers to believe that their oxymoronic "zero tolerance for error" is required for knowledge, whereas in reality it's only tolerance or uncertainty that allows us to know anything at all.
Things that don't agree with each other within their respective uncertainties are inconsistent.
Things that do agree with each other within their respective uncertainties are consistent. That's all we can achieve in most cases, and that's also all we need in most cases. Rationalists sometimes pretend that it would be somehow catastrophic if we couldn't have deductive certainty about the world. But then they go happily about their day with no such thing, accomplishing their goals, managing to pay their taxes and fail their students and so on, all without a hint of deductive certainty in sight.
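For what it's worth, here's what "agree within their respective uncertainties" often cashes out to in practice--a minimal sketch assuming the conventional quadrature rule and a two-sigma criterion, both of which are my choices rather than anything sacred:

```python
from math import hypot

def consistent(a, ua, b, ub, k=2.0):
    """Do two measurements agree within their combined uncertainty?

    The difference must be within k combined standard uncertainties,
    combining in quadrature; k=2 is a common (roughly 95%) choice.
    """
    return abs(a - b) <= k * hypot(ua, ub)

# Two measurements of g, in m/s^2, with their standard uncertainties:
print(consistent(9.79, 0.03, 9.81, 0.02))    # True: consistent
print(consistent(9.79, 0.001, 9.81, 0.001))  # False: inconsistent
```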
In some areas of science--notably theoretical physics--mathematical deduction is one of our more useful tools, and getting exact agreement between, say, two different ways of deriving the same quantity is really important. But science is more of an art than a science, and a good part of the art is knowing which inconsistencies fall within acceptable tolerances, and which do not.
When faced with any question in the sciences, we often first step back and ask, "What are the acceptable uncertainties here? What are the relevant scales?" Getting a sense of that is vital to understanding what the limits of implausibility are, because those are what constrain our understanding, not what might seem impossible to an intolerant rationalist philosopher.