As a scientist, I value accuracy and precision, generally in that order, although sometimes not.
Precision and accuracy aren’t the same. Precision describes how easy it is to distinguish values that are very close to each other. A measurement with a precision of 0.0001 can tell us the difference between 1.0000 and 1.0001, even if the actual value is 5.
Accuracy describes how different the measurement is from reality.
Precision constrains accuracy but doesn’t determine it. A measurement can’t be more accurate than it is precise, but it can be far less accurate than it is precise.
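The distinction can be made concrete with a toy calculation (the numbers here are invented for illustration): an instrument that resolves 0.0001 but carries a large calibration offset is precise and wildly inaccurate at the same time.

```python
import statistics

# Hypothetical instrument: resolves 0.0001 (high precision) but carries
# a constant calibration offset of -4 (poor accuracy). True value is 5.
true_value = 5.0
readings = [1.0000, 1.0001, 1.0000, 0.9999]  # what the instrument reports

spread = max(readings) - min(readings)               # precision: ~0.0002
error = abs(statistics.mean(readings) - true_value)  # accuracy: ~4.0

print(f"spread = {spread:.4f}, error = {error:.4f}")
```

The spread tells you the instrument can distinguish 1.0000 from 1.0001; the error tells you it is nowhere near the truth.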
An extreme example of this is something called the Mossbauer effect, which was discovered in the late 1950s by Rudolf Mossbauer--a Ph.D. student at the time, who won the Nobel Prize for his work.
In the normal decay of an excited nuclear state via gamma ray emission, the nucleus recoils in the opposite direction of the emitted gamma ray, because gamma rays have momentum and momentum is conserved. The nuclear recoil is the “equal and opposite reaction” of Newton’s third law.
The effect of this recoil is to shift the energy of the emitted gamma ray by a tiny amount relative to the emission energy--milli-electron-volts to electron-volts, where typical emission energies are in the kilo- or mega-electron-volts. But that shift is enough to prevent resonant reabsorption of the gamma ray by other nuclei of the same isotope. Resonant absorption happens when radiation is at precisely the right energy to excite a nuclear or atomic state, and it is typically extremely strong. Without the shift, it would be possible to build gamma ray shielding that was both very thin and practically opaque.
What Mossbauer realized is that under certain circumstances the recoil momentum wasn't absorbed by the individual nucleus undergoing decay, but by the whole crystal of which it was a part. This meant the shift in energy was about twenty-five orders of magnitude smaller than in the normal case, and resonant absorption was possible.
Furthermore, many nuclear energy levels have fine structure. What looks like one level is actually two or more that are separated by tiny amounts of energy. This splitting of levels can be due to everything from the details of nuclear structure to the chemistry of the surrounding atoms, which produce magnetic and electric fields at the position of the decaying nucleus that alter the energy levels.
The classic demonstration uses a radioactive source rich in cobalt-57, which decays by electron capture to an excited state of iron-57. The iron-57 state emits a gamma ray of low enough energy that the recoil momentum is taken up by the crystal. Low energy is important because the effect depends on the quantization of vibrational energy within the solid: so long as the recoil energy is kept below the energy of the lowest quantized vibrational mode of the crystal, the Mossbauer effect can occur.
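A back-of-envelope check, using standard textbook constants (the Debye temperature of iron is an approximate literature value), shows why the 14.4 keV line of iron-57 works: the free-nucleus recoil energy E_R = E_gamma^2 / (2Mc^2) sits well below the characteristic phonon energy of the iron lattice.

```python
# Rough numbers only: recoil energy of a free Fe-57 nucleus emitting
# its 14.4 keV gamma, compared with the phonon energy scale of iron.
E_gamma = 14.4e3            # gamma energy, eV
M_c2 = 56.935 * 931.494e6   # Fe-57 rest energy, eV (mass in u * 931.494 MeV/u)
k_B = 8.617e-5              # Boltzmann constant, eV/K
Theta_D = 470.0             # Debye temperature of iron, K (approximate)

E_recoil = E_gamma**2 / (2 * M_c2)   # free-nucleus recoil energy
E_phonon = k_B * Theta_D             # characteristic phonon energy

print(f"free-nucleus recoil: {E_recoil:.2e} eV")
print(f"phonon energy scale: {E_phonon:.2e} eV")
```

The recoil comes out around 2 milli-electron-volts, a factor of twenty or so below the phonon scale, which is why a substantial fraction of decays can proceed with the crystal as a whole absorbing the momentum.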
This is all very clever, and produced gamma rays with ridiculously well-defined energies that would reveal the fine structure of the nuclear state they were being emitted from. The only problem is that the accuracy of our radiation detectors is in the kilo-electron-volts, and the fine structure is in the micro-electron-volts. That's nine orders of magnitude of difference.
But Mossbauer didn’t need accuracy: he needed precision.
What he did was worthy of the Nobel Prize: he knew the emitted gamma rays would be subject to resonant absorption in iron-57, a common, stable isotope that makes up about 2% of any lump of iron. So Mossbauer set his radioactive source on a plunger that moved back and forth, shifting the energy of the gamma rays via the Doppler effect. The shift was tiny--source velocities are typically measured in mm/second--but the fine structure differences are so small that it was enough to carry the source through multiple resonances. Place a detector behind an iron absorber foil and you'll see deep dips in the count rate at specific velocities, corresponding to particular splittings.
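The Doppler arithmetic is worth sketching (constants approximate): moving a 14.4 keV source at mm/s speeds shifts the gamma energy by delta_E = (v/c) * E_gamma, which lands right in the range needed to scan hyperfine structure.

```python
# How much does mm/s-scale motion shift a 14.4 keV gamma ray?
c = 2.998e8        # speed of light, m/s
E_gamma = 14.4e3   # gamma energy, eV

for v_mm_s in (0.1, 1.0, 10.0):
    delta_E = (v_mm_s * 1e-3 / c) * E_gamma   # first-order Doppler shift, eV
    print(f"v = {v_mm_s:5.1f} mm/s -> shift = {delta_E:.2e} eV")
```

A few mm/s corresponds to shifts around 10^-7 eV: hopeless to measure directly with any detector, but ample to sweep a source through micro-electron-volt-scale resonances.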
We still don't know the energy of this excited state of iron-57 to an accuracy better than a few hundred electron-volts. But we know the differences between its sub-states to a precision of better than a micro-electron-volt.
This is the most extreme difference between precision and accuracy I’m aware of, and the underlying physics is both elegant and arcane. Who could ask for anything more wonderful?
So that’s science.
As an engineer, businessperson, and citizen, I generally care more about robustness than either accuracy or precision. But unlike accuracy and precision, which are introduced in the early part of any intro physics course, robustness isn’t talked about very much.
Robustness is the property of degrading gracefully under less-than-ideal circumstances.
From the point of view of measurement apparatus, a non-robust system works well in a climate-controlled lab in the hands of experts. A robust system continues to work adequately when slung into the back of a truck and taken on the road by a couple of Australians.
Robustness is widely under-valued, in part because robust systems almost always under-perform non-robust ones under ideal conditions: we often trade off precision and/or accuracy to get robustness. This is particularly true in my work as an algorithm designer: robust approaches often result in less accuracy on the training data, but always outperform non-robust approaches in the field.
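A minimal illustration of the trade-off, with made-up readings: the mean is the statistically optimal estimator on clean data, while the median gives up a little efficiency there in exchange for shrugging off a wild outlier.

```python
import statistics

# Estimating a signal level (~10) from sensor readings. One sensor has
# failed and reports garbage. The mean is wrecked by the bad reading;
# the median barely notices it.
clean = [9.8, 10.1, 10.0, 9.9, 10.2]
dirty = clean + [250.0]   # one wild outlier

print(statistics.mean(clean), statistics.median(clean))   # both near 10
print(statistics.mean(dirty), statistics.median(dirty))   # mean blows up
```

Under ideal conditions the mean wins by a hair; under realistic conditions the median is the one you can ship.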
Robust estimation has been a major focus for people working in data analysis, robotics, and other real-world systems for decades, with the goal of making our world less fragile.
The best-known robust system in use today is the internet, which was famously designed to survive a nuclear attack. It shows what you can do if you design with robustness in mind from the very start; trying to bolt it on later is costly and rarely works well.
Unfortunately, in any system of political economy, from the most socialist command economy to the most capitalist dog-eat-dog free-for-all, robustness often takes a back seat to efficiency, because incentives reward efficiency and managers rarely have the creativity to see how to meet efficiency targets without sacrificing robustness.
This is understandable: as I’ve pointed out many times, the human imagination is terrible at predicting the present, much less the future, and robust design often requires that we imagine scenarios where things go wrong. We’re not very good at it. Whereas efficiency we can see in the next quarter’s results, or in the fulfillment of the current Five Year Plan, or in allowing the Party to make claims of great progress in reducing waiting lists for this or that government service before the next election.
Just-in-time manufacturing systems, which are fragile by design, are a great example of this. The fragility was considered a feature in the original implementation at Toyota: it provided immediate feedback when anything went wrong, because the whole manufacturing process came to a halt. As a means of identifying bottlenecks and weak points it was great. But as the practice spread elsewhere, the focus shifted to cost reduction, and just-in-time was adopted as a permanently entrenched system rather than a transient, experimental phase.
Then the pandemic hit.
Its effects are being felt throughout industry, especially in sectors built on just-in-time systems. The tendency of socialist economies--and to a lesser extent capitalist ones--to concentrate manufacturing in a small number of organizations makes things worse, because diversity is one of the ways to build in robustness.
In general, robust systems have at least some of the following features:
1) They use as little information as possible while still achieving the desired result.
2) They use diverse sources of input (this applies to algorithms as much as to inventories).
3) They rely on the smallest number of assumptions possible.
4) They have few external dependencies.
There are probably more, but those will do for a start. Notice that these features are not entirely consistent with one another: eliminating external dependencies is one way to build robustness, but it conflicts with having diverse sources of input, each of which is an external dependency. We can see this tension in climate preparedness, where one tendency favors self-sufficiency and another favors fostering a diverse web of connected communities.
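Feature (2) can be sketched in a few lines; the sensors and values here are entirely hypothetical. With diverse inputs and a median vote, the loss of any single source degrades the answer only slightly instead of destroying it.

```python
def robust_read(sources):
    """Median of whichever sources actually respond (None = failed)."""
    values = sorted(v for v in (s() for s in sources) if v is not None)
    mid = len(values) // 2
    if len(values) % 2:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

sensor_a = lambda: 10.1   # healthy source
sensor_b = lambda: None   # failed source: no reading at all
sensor_c = lambda: 9.9    # healthy source

# One of three inputs is gone, yet the estimate degrades gracefully.
print(robust_read([sensor_a, sensor_b, sensor_c]))
```

Note the tension from the text in miniature: each extra sensor improves fault tolerance, but each is also one more external dependency to procure, power, and maintain.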
It follows from this that achieving robustness is not a simple, formulaic thing. If all goes well I'll have more to say about the various ways it can be done in the next few weeks, although I've been down a rabbit hole with some quantum mechanics code for the past week, so who knows, I might have something to say about that sometime soon as well.
Today is the eve of the Lunar New Year.
This is a pretty intense post. I am not sure I follow all of it. I have no background in nuclear physics that matters here, so it was very hard to understand your example. I think you are talking about how level of measurement connects the concepts of accuracy and robustness. Then you went on to talk about the significance of this for economics and socialism. As much as this is correct, I have trouble with your 4 points of a robust system. In systems composed of human beings, the idea of 'information' is not as clear as it is for machines. There are many kinds of information for humans. Some of these are vague and only understood locally. I frequently talk about the idea of legitimacy. While I agree with you in principle, I am less sure stating principles of robustness in these terms is meaningful for social systems composed of humans. Having said that, I was intrigued by one of your statements about the use of "creativity to see how to meet efficiency targets without sacrificing robustness."
This is one of those posts that I really need to print out and think about every word. Either I forget about doing this, or you post something else before I get around to doing it. Or it might also be one of the posts where I end up thinking you don't realize the brilliance of what you have written - at least from my POV.