If you want to create knowledge, measuring anything just once is a mistake, and measuring anything just one way is also not so great.
A knowledge-claim like "The diversity of life is due to evolution by variation and natural selection" is a claim about the way objective reality is. This is what makes knowledge useful, because what a thing is causes what it does, so the "is-ness" of a thing allows us to draw inferences about the "does-ness".
We go from one aspect of what a thing does to infer what it is, and then knowing what it is we can infer a bunch of other things about what it does. Newton saw that light split into rainbows through a prism, and inferred that white light is a combination of colours, from which he was then able to draw all kinds of other conclusions about what light would do, which among other things allowed him to build better telescopes.
A great deal of science is aimed at finding the ways of describing the is-ness of things that give us the biggest payoff in predicting does-ness. Newton's theory of universal gravitation says that every massive body "is" a source of a force field (gravity) that is attractive and falls off as one upon the radius squared, and this allows us to draw inferences about everything from the paths of the wandering stars to the fall of raindrops to the motion of the tides.
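Newton himself ran a version of this check, the famous "Moon test": if gravity really falls off as one over the radius squared, the Moon's acceleration toward the Earth should equal the acceleration of a falling apple, diminished by the square of the ratio of the Earth's radius to the Moon's orbital distance. Here's a minimal sketch of that arithmetic in Python, using standard textbook values (my numbers for illustration, not anything of Newton's):

```python
import math

# Standard textbook values (assumed for illustration)
g_surface = 9.81           # acceleration of a falling apple, m/s^2
r_earth   = 6.371e6        # radius of the Earth, m
r_moon    = 3.844e8        # radius of the Moon's orbit, m
t_moon    = 27.32 * 86400  # sidereal period of the Moon's orbit, s

# Does-ness: the Moon's actual centripetal acceleration, from its orbit
a_observed = 4 * math.pi**2 * r_moon / t_moon**2

# Is-ness: what inverse-square gravity predicts that acceleration to be
a_predicted = g_surface * (r_earth / r_moon)**2

print(f"observed:  {a_observed:.2e} m/s^2")   # ~2.72e-03
print(f"predicted: {a_predicted:.2e} m/s^2")  # ~2.70e-03
```

The two numbers agree to better than one percent: the inverse-square "is-ness" paying off in predicted "does-ness".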
But this implies that every time we use a claim of is-ness to justify a prediction of does-ness we are in effect testing the is-ness claim when we do the measurements necessary to check the prediction. This is especially true when the precision of the prediction is greater than the precision of the observations that led to the inference of is-ness in the first place.
Newton got to gravity by considering the motion of comets and apples, and his predictions were spectacularly confirmed by the return of Halley's comet in the following century, after Edmond Halley used Newton's theory to predict it. Getting something right over a span of 75 years (the period of Halley's comet) when it was based on only a short period of observation is impressively confirmatory.
In the 19th century Victorian astronomers made very precise measurements of the orbit of Mercury, and found nothing they did could reconcile their observations with Newton's theory of gravity, although they tried pretty hard. Every theory has wiggle room in the form of assumptions you make along the way. In the case of the orbit of Mercury, if the Sun had some weird internal lobes of high-density matter it might account for the observed motions, but further work showed the densities would be implausibly high and had to be distributed just-so for it to work.
"Just-so" is kind of an insult among scientists: it implies a solution requires fine-tuning to the extent that almost any other way forward is more plausible.
As it turned out, Mercury's small deviation from Newtonian orthodoxy was pointing us toward the fact that the universe is not the way Newton believed it to be when gravity gets really strong. This doesn't mean Newton was wrong: in weak gravity the objectively real world is as he described it, and we use predictions of what the universe will do because of that every day. But when gravity is strong, the universe is the way Einstein's theory of general relativity describes it, and that leads us to the Big Bang.
Big Bang cosmology is one of the most successful descriptions of the universe we have. The theory is disarmingly simple: at some point in the past all the energy in the universe was confined to something like a geometric point, and everything that has happened since is the result of that energy expanding, spreading out, turning into particles via m = E/c^2, and eventually becoming stars and planets and us.
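The m = E/c^2 bookkeeping is easy to play with. As a small illustrative sketch (Python, standard physical constants; the example is mine, not part of the standard telling), here is the energy cost of minting a single electron out of radiation, and the rough temperature below which the early universe could no longer afford it:

```python
# Illustrative sketch of m = E/c^2, using standard physical constants
c   = 2.998e8      # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K
m_e = 9.109e-31    # electron mass, kg

# Energy equivalent of one electron's mass: E = m * c^2
E = m_e * c**2
print(f"rest energy of an electron: {E:.2e} J ({E / 1.602e-13:.3f} MeV)")

# Temperature at which a typical thermal photon carries that much energy
T = E / k_B
print(f"corresponding temperature:  {T:.1e} K")  # ~5.9e9 K
```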
The theory predicts the primordial abundances of the light elements--hydrogen, helium, and the two stable isotopes of lithium--most of which were created in the first few minutes of the expansion, and it predicts the Cosmic Microwave Background (CMB) radiation that permeates the whole universe, a "cooled down" left-over from the thermal radiation that illuminated the glowing hot days of the early universe.
The expanding universe implied by the Big Bang model was in fact lurking in Einstein's own equations of General Relativity, which implied that the universe in general does not have a constant size. At the time everyone thought the universe was static, so Einstein added a parameter--a constant of integration that just falls out of the math--and tuned it to the precise value that would hold the universe still: a "just-so" solution, which he later described as the biggest mistake of his career.
About a decade later the American astronomer Edwin Hubble was measuring the spectra of stars in distant galaxies, and noticed something odd. The Doppler effect makes light from objects that are moving away from us shift toward the red end of the spectrum. Hubble noticed that the more distant the galaxy, the bigger the red shift in the spectral lines he was observing. Sodium, for example--the element that makes sodium streetlights glow yellow--has a prominent spectral feature at 589 nm (0.589 microns). But in more and more distant galaxies Hubble saw this feature at longer and longer wavelengths.
This is not a subtle effect: the shift amounts to a few nanometers (nm) for galaxies a hundred million light years away, and tens of nanometers at a billion. The kind of cheap off-the-shelf optical spectrometer I sometimes use in my day job has a resolution of better than 1 nm, so shifts like these were easily detectable even a hundred years ago.
Once we knew the universe was expanding, measuring the precise rate of expansion--known as the Hubble Constant and referred to symbolically as H0--became critically important, because it allows us to draw all kinds of other inferences about what the universe does.
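Before getting to the trouble with that measurement, here's a back-of-the-envelope sketch (in Python) of the red shift arithmetic above. It assumes a round H0 of 70 km/s per mega-parsec and the simple linear relation v = H0 x d, which is a fine approximation at these distances:

```python
# Back-of-the-envelope red shift sketch, assuming H0 ~ 70 km/s/Mpc
H0 = 70.0               # Hubble Constant, km/s per mega-parsec (assumed)
C  = 299792.0           # speed of light, km/s
LY_PER_MPC = 3.26e6     # light years per mega-parsec

def sodium_shift_nm(distance_ly, rest_nm=589.0):
    """Shift of the 589 nm sodium line for a galaxy at a given distance."""
    d_mpc = distance_ly / LY_PER_MPC
    v = H0 * d_mpc      # recession velocity via the Hubble law, km/s
    z = v / C           # low-speed Doppler approximation
    return z * rest_nm

print(f"{sodium_shift_nm(1e8):.1f} nm")   # 100 million ly: ~4 nm
print(f"{sodium_shift_nm(1e9):.1f} nm")   # 1 billion ly:  ~42 nm
```

Either shift comfortably exceeds the roughly 1 nm resolution of a cheap spectrometer.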
The problem is there are two fundamentally different approaches to making this measurement, and they no longer agree with each other within error. As the uncertainties (errors) have gotten smaller, the disagreement has become more acute, to the point where it's now sometimes being referred to as a "crisis" rather than a "tension" or a "problem". It's a bit silly to obsess over terminology in this regard, but whatever we call it, the situation is decidedly unsatisfactory, and recent results from the Gaia space telescope have made it worse.
The two ways of measuring H0 are dramatically different, and I only really understand one of them, but in the best traditions of popular science writing I'm not going to let that stop me from describing them both as best I can.
The one I understand is based on "standard candles" and the "distance ladder". Optical physics has its foundations in the centuries when the world was lit only by fire, and some of that language, like references to "candles", still persists into the modern day.
A standard candle is a star or stellar phenomenon whose absolute brightness we can predict. The canonical standard candle is the class of Cepheid variable stars, which pulsate in brightness over a period of days or weeks. The mechanism of this pulsation is a simple heat engine in the stellar atmosphere, in which a layer of partially ionized helium falls inward and increases in opacity the denser it gets, causing the gas below it to heat up, which increases the pressure, which lifts the helium layer; as the layer expands it thins out, becomes more transparent, and releases the trapped heat, so the pressure drops and the layer falls inward once more, increasing its opacity...
Cepheids are useful as a standard candle because their period of oscillation is simply connected to their absolute luminosity, a fact discovered in the early 20th century by the American astronomer Henrietta Swan Leavitt by studying thousands of Cepheids in the Magellanic Clouds, two "dwarf" galaxies that orbit the Milky Way. Because these galaxies are around 200,000 light years away but only around 10,000 light years across, to first order all the Cepheids in them are at the same distance from us, so variations in apparent magnitude (how bright they look to the eye) are almost entirely due to variations in absolute magnitude (how bright they actually are). A plot of apparent magnitude against the logarithm of the period gives a straight line, and pinning down its calibration--the zero point that converts a measured period into an absolute magnitude--has been the work of a century, with the recent Gaia measurements providing the best value by far.
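In code, the whole Cepheid trick is two short formulas: the period-luminosity line, and the standard distance-modulus relation that converts "how bright it looks" plus "how bright it actually is" into a distance. A minimal sketch in Python--the slope and zero point here are illustrative stand-ins, not the actual Gaia-calibrated values:

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Period-luminosity line: M = a * log10(P) + b.
    The coefficients are stand-ins for illustration only."""
    a, b = -2.4, -1.6
    return a * math.log10(period_days) + b

def distance_parsecs(apparent_mag, absolute_mag):
    """Standard distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical Cepheid: 10-day period, apparent magnitude 22
M = cepheid_absolute_magnitude(10.0)   # -> -4.0
d = distance_parsecs(22.0, M)          # -> ~1.6e6 pc, about 1.6 Mpc
print(f"M = {M:.1f}, distance = {d:.2e} pc")
```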
In very distant galaxies individual Cepheids are not resolvable, but other objects of known brightness are: Type Ia supernovae. These form the next rung on the "distance ladder", and they can be calibrated using galaxies near enough for us to see Cepheids in them. Using a series of steps like this we can measure the distance to increasingly far-off objects, and find the ratio of their velocity of recession to their distance: the Hubble Constant.
That number comes in at about 73 km/s per mega-parsec, where a parsec is the distance at which an object has a parallax (apparent motion) of one arc-second against the fixed (most distant) stars as the Earth moves around the sun. Numerically, it's about 3.26 light years, which is about 3/4 of the way to the nearest star to the Sun.
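The parsec itself is just trigonometry, and it takes two lines (Python, standard values for the astronomical unit and the light year) to check that one arc-second of parallax really does work out to about 3.26 light years:

```python
import math

AU = 1.496e11          # Earth-Sun distance, m (standard value)
LIGHT_YEAR = 9.461e15  # m (standard value)

one_arcsec = math.radians(1 / 3600)   # one arc-second, in radians
parsec = AU / math.tan(one_arcsec)    # distance with 1" of parallax

print(f"1 parsec = {parsec:.3e} m = {parsec / LIGHT_YEAR:.2f} light years")
# -> 3.086e+16 m = 3.26 light years
```

Handily, distance in parsecs is just the reciprocal of parallax in arc-seconds, which is essentially the measurement Gaia makes for nearby Cepheids.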
The second way of measuring H0 uses variations in the cosmic microwave background (CMB). When the universe was about 379,000 years old, it cooled off enough that neutral atoms could form out of the ionized gas of electrons, protons, and alpha particles that filled it up to that point. Ionized gases are conductors, and conductors scatter and absorb light, so when the protons and alpha particles captured electrons to become neutral hydrogen and helium gas, the universe went from opaque to transparent, and there are photons that have been travelling ever since, getting red-shifted by the cosmic expansion as they go.
The temperature of the universe was about 3000 K when this happened--"warm white" in terms of modern colour temperature classifications--but the expansion since has cooled it down to just 2.7 K... cold enough to liquefy helium at one atmosphere of pressure!
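That cooling is the red shift again, wearing thermal clothes: the expansion stretches every wavelength by the same factor (1 + z), and the temperature of the radiation drops by that factor too. A quick sketch (Python; the 2.725 K figure is the standard measured CMB temperature):

```python
# CMB cooling sketch: temperature scales as 1 / (1 + z)
T_then = 3000.0   # temperature at recombination, K (from the text)
T_now  = 2.725    # measured CMB temperature today, K
z = T_then / T_now - 1
print(f"red shift of the CMB: z ~ {z:.0f}")   # ~1100

# Wien's law gives the peak wavelength of thermal radiation: lambda = b / T
b = 2.898e-3  # Wien displacement constant, m*K
print(f"peak wavelength then: {b / T_then * 1e9:.0f} nm (near infrared)")
print(f"peak wavelength now:  {b / T_now * 1e3:.2f} mm (microwave)")
```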
The CMB is almost perfectly isotropic, but at very fine scales--corresponding to about 379,000 light years in the early universe, which show up as 1 degree patches of sky in the universe today--there are small variations.
There's a joke in physics that some arguments require a step that says "and then a miracle occurs", and from my point of view that's what happens next: some kind of analysis of these small variations that I don't really understand allows an estimate of H0 to be extracted from the CMB. This estimate is independent of standard candles and distance ladders, and it comes out at about 68 km/s per mega-parsec, with a relatively tiny uncertainty.
As an aside: the cool thing about these measurements is that the anisotropy of the CMB is literally due to sound waves in the hot, dense medium of the early universe. They're incredibly low frequency and long wavelength, and they moved at more than half the speed of light, but still: the Big Bang actually went bang, albeit at a frequency no one could hear. The child in me finds this delightful.
All of this leaves us in a situation like a person with two watches, one of which reads 10:27 AM, the other 11:21 AM... way outside of error.
There is partisanship on both sides, because everyone likes their own measurements, but partisanship rarely survives the scientific process for more than a decade or so, which is roughly a hundred times shorter than in other areas of life.
There are lots of possible solutions, but they all come down to: we are assuming the universe is some way that it is not.
With our two watches, maybe one of them is slow, or fast. Maybe both of them keep time badly. Maybe we forgot to adjust one of them to "why is it dark at 4 PM?" time. Maybe one is on the wrist of someone who has just flown in from the next time zone over... and so on. Simply knowing there's a difference doesn't tell us why.
With regard to the Hubble constant, my own bias is toward the approach I understand, which can be a problem: understanding it means I've internalized a bunch of assumptions about the way the world objectively is, and it can be difficult to tease out those assumptions and experiment with rejecting them. This is why robust, ideally respectful, dialogue with people on the other side of such disputes is important, and a commitment to the possibility that something I know may be wrong is enormously valuable.
There are at least three ways this disagreement could be resolved. One is that some mundane assumption on one or both sides will turn out to be wrong. This is the most likely outcome. People make mistakes, and mostly those mistakes turn out to be boring.
The second possibility is that there is genuinely new physics lurking on one side or the other (or both) that is messing up their results, but the expanding universe is still basically sound. This is exciting: maybe it'll give us some insight into dark matter, or dark energy, or something similarly opaque.
And the third possibility is that there is genuinely new physics at the cosmological level: not merely something wrong with our measurements of the Hubble constant, but a universe in which the Hubble constant ceases to be meaningful at sufficiently large scales, in the same way Newtonian gravity ceases to be meaningful for sufficiently strong fields.
The is-ness of such a universe is beyond our current imaginings, but by looking closely we may yet discover it.
If you like this kind of thing, please subscribe!
If you think others might like it, please share!