I talked about consciousness and intelligence last summer, but with the release of ChatGPT it seems useful to revisit the topic. What I said at the time was that, lacking consciousness, machines can think "in precisely the same way cars can walk and boats can swim." Which is to say: they can't.
So has ChatGPT changed my mind?
Nope.
ChatGPT exhibits a limited range of intelligent behaviour, but it can't think because it's not conscious, and its successors are more likely than not to hit a wall beyond which they cannot pass. This limit is already evident in the way I tripped it up a couple of weeks ago: lacking conscious understanding, it has no way to distinguish plausible bullshit from a reasonable summary. This doesn't mean it won't be economically useful, because most people can't reliably make that distinction either.
The contention in the AI community for generations, going back at least to Turing's famous paper "Computing Machinery and Intelligence", has been: "If a machine exhibits intelligent behaviour it can think."
This is coupled with the assumption that if a system exhibits some intelligent behaviour, then systems built on the same basic technology are on the road to emulating fully conscious intelligent behaviour. The basic technology du jour is the Large Language Model (LLM), of which ChatGPT is the leading example. Once upon a time the technology in question was the expert system. Then there were "fourth generation AIs", "fuzzy logic", and so on.
None of these technologies was the on-ramp to general intelligence.
ChatGPT probably isn't either.
That is: while LLMs today are making impressive strides in the kind of symbol manipulation they are designed for, it does not follow that they will eventually be able to emulate the symbol-manipulation behaviour of a conscious being possessed of specifically human intelligence.
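To make "the kind of symbol manipulation they are designed for" concrete, here is the core loop of every LLM reduced to a toy. The tiny corpus and bigram lookup table below are my own illustration, not how any real system is built; actual LLMs replace the table with a transformer network holding billions of parameters, but the input-output contract is the same: given the text so far, emit a plausible next token.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict_next(word):
    """Sample a plausible next word from the observed continuations."""
    options = follows.get(word)
    return random.choice(options) if options else None

# Generate text by repeatedly predicting the next token and appending it.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"
```

Note that nothing in the loop checks the output against the world. The model knows only which tokens tend to follow which, which is why, from the inside, plausible bullshit and a reasonable summary are indistinguishable.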
This position is supported by both history and evolutionary theory.
Historically, most technological progress is non-linear. Predicting the future course of a technology's capabilities based on the current state of progress has never been a good bet.
I'll discuss a specific case in a moment, but first I want to warn about survivorship bias in our impressions of linear progress. Most people only see the successful progressions; the unsuccessful ones mostly vanish before they are ever exposed to the light of day. It's been my privilege as a consulting scientist of one kind or another over the past few decades to see a lot of nascent technology trying to make it out of the lab and into the world. Most of the time it does not make the transition, often because there is some "small problem" that turns out to be a fundamental limitation.
In terms of what the public sees, as well as the actual history of AI mentioned above, consider the technology of flight. Balloons did not evolve smoothly into gliders, and glider research in the fifty years before Kitty Hawk was more a collection of dead ends than a linear path toward powered flight. The subsequent history of powered flight illustrates the same pattern.
In the six decades between the Wright brothers' first flight and "one small step...", new commercial aircraft models increased in airspeed by fifty to a hundred knots per decade in a remarkably linear progression with fairly modest variance. Then in the 1970s something odd happened: the line stayed on course, but the variance in airspeeds of new models blew up, and in the decades after that the airspeed of new models flatlined and the variance fell to practically zero.
Commercial aviation had hit the sound barrier: a fundamental limit. The variance in airspeeds of new models was huge in the '70s because the Concorde was an enormous outlier, flying about twice as fast as everything else in the sky. The airspeed of new types of aircraft has been constant at about 75% of the speed of sound ever since. Against many expectations at the time, supersonic flight proved to be ferociously expensive and not particularly worth the price.
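To see how a single outlier produces exactly that signature, here is a small sketch. The speeds are made-up figures of roughly the right magnitude, not measured data; the point is only the statistical shape, where a pack of subsonic models plus one Concorde keeps the trend line plausible while blowing the variance up.

```python
import statistics

# Illustrative cruise speeds (knots) for new commercial models in two eras.
# Rough made-up numbers, chosen only to show the pattern described above.
new_models_1960s = [480, 500, 520, 540]    # the steady linear climb
new_models_1970s = [430, 440, 450, 1150]   # subsonic pack plus the Concorde

for era, speeds in [("1960s", new_models_1960s), ("1970s", new_models_1970s)]:
    print(era, "mean:", round(statistics.mean(speeds)),
          "stdev:", round(statistics.stdev(speeds)))
```

Drop the one supersonic outlier from the 1970s sample and the standard deviation collapses from hundreds of knots to about ten, which is the flatline we have been on ever since.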
Technological progress is almost always non-linear.
Except when it isn't: the end of silicon semiconductor technology has been predicted for decades, and we are still managing to squeeze out a new generation of higher-performance chips every 18 months or so. But that is coming to an end Real Soon Now, and no one knows what we'll do when that happens. We're getting much better at parallelism, and that will be one part of the equation, but simply because we haven't hit a wall yet does not mean there isn't one out there. With silicon technology we know we're running hard up against very fundamental limits: atomic structure, quantum uncertainty, and radiative losses all have roles to play. Which one will put the nail in the coffin of continual growth we don't know. But we do know that the days of continuous progress with silicon are numbered.
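A back-of-the-envelope count shows why. The figures below are round numbers of the right order of magnitude (call it a 5 nm feature size today, with silicon atoms spaced about 0.2 nm apart), not any fab's actual roadmap, but the conclusion is insensitive to the details.

```python
# Back-of-the-envelope only: round numbers, not a process roadmap.
feature_nm = 5.0            # rough current feature size, in nanometres
silicon_atom_nm = 0.2       # approximate spacing of silicon atoms
years_per_generation = 1.5  # one generation every ~18 months

generations = 0
while feature_nm > silicon_atom_nm:
    feature_nm /= 2 ** 0.5  # doubling density shrinks feature length by sqrt(2)
    generations += 1

print(f"~{generations} generations (~{generations * years_per_generation:.0f} "
      f"years) until features reach atomic spacing")
```

Ten-ish generations, give or take, and quantum tunnelling and radiative losses start biting well before features literally reach atomic spacing, so the practical wall is closer than even this crude count suggests.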
So even in a famous case where we have not yet hit the limit, we know the limit is there. Like aircraft, silicon has taken sixty or seventy years to approach it, but that kind of longevity for a single technology is fantastically rare.
We should therefore be cautious about simply declaring that continual improvement is the natural course of things with new technologies. Believe me, it isn't.
In the case of word-predictor programs, we have more than just historically-justified caution to give us pause. We have good theoretical reasons to be worried that the cognitive equivalent of the sound barrier lies ahead for LLMs. Call it the “thought barrier”.
Just as quantum mechanics gives us reason to believe silicon's days are finally numbered, and fluid mechanics predicted the sound barrier (if not the economic infeasibility of exceeding it), evolutionary theory predicts that there is a limit to the capabilities of unconscious intelligence.
It's not often that evolution allows us to make predictions, but it does in this case. Orgel's second rule tells us that evolution is smarter than we are, and when it comes to creating specifically human, tool-building, language-creating, generally representational intelligence, evolution could not do it without consciousness, even though it unquestionably had the basic building blocks available. We know it had the tools because we copied natural neural networks as the basic design of the artificial neural networks that power systems like ChatGPT.
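To be concrete about what copying natural neural networks means: the basic unit of every artificial neural network, including the ones inside ChatGPT, is a weighted sum of inputs pushed through a nonlinearity, loosely modelled on a biological neuron firing once the stimulation arriving at its dendrites crosses a threshold. A minimal sketch, with arbitrary illustrative weights:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum, then a nonlinearity.

    Loosely modelled on a biological neuron, which fires when the summed
    stimulation on its dendrites crosses a threshold.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid "firing rate", 0..1

# Arbitrary illustrative values; real networks learn these by training.
print(neuron(inputs=[0.5, -1.0, 0.25], weights=[0.8, 0.2, -0.5], bias=0.1))
```

Systems like ChatGPT stack billions of these units; each one, individually, is just this.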
In fact, it's likely that many other kinds of intelligence can't be achieved without consciousness either, from apes to ants to octopuses. And don't get me started on cetaceans. I seriously believe we have evidence that Southern Resident orca have religion, even though they aren't tool users and so don't qualify for the specifically human kind of intelligence that I'm talking about.
We ourselves are possessed of a great deal of subconscious intelligence. We do all kinds of things without conscious thought; consciousness only comes into play as the final arbiter of difficult cases, the things the subconscious mind can't handle. And almost all of what the subconscious does is in fact unconscious: we could bring it to mind, but we don't.
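As a loose analogy only (the names and the numeric "difficulty" threshold below are my inventions, not a model of cognition), that division of labour looks like a two-tier dispatcher: a cheap automatic handler takes everything it can, and only the hard residue gets escalated to an expensive deliberative one.

```python
def subconscious(task):
    """Fast and cheap: handles routine cases without bringing them to mind."""
    if task["difficulty"] < 0.8:  # arbitrary illustrative threshold
        return f"handled '{task['name']}' automatically"
    return None  # can't cope: escalate

def conscious(task):
    """Slow and costly: the final arbiter of difficult cases."""
    return f"deliberated over '{task['name']}'"

tasks = [
    {"name": "walking", "difficulty": 0.1},
    {"name": "driving a familiar route", "difficulty": 0.4},
    {"name": "navigating an unexpected detour", "difficulty": 0.9},
]

for task in tasks:
    print(subconscious(task) or conscious(task))
```

The analogy is crude, but it captures the cost structure: the expensive path exists precisely because the cheap one sometimes fails.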
Consciousness is the solution evolution has found to the problem of creating intelligent behaviour, just as legs are the solution evolution has found to terrestrial locomotion, and wings in all of their variety are the solution evolution found to flight.
We can argue about ants, but there is no reasonable question that many other intelligent species are conscious. Consider dogs. They have minds. They have personalities. They are aware of the world around them. The conscious self is not an illusion, no matter how many Buddhist Creationists declare otherwise, but an evolved regulator of behaviour, in humans and other animals.
But consciousness has costs. It goes wrong in all kinds of ways across species. In humans we have a whole catalog of terminology that allows us to discuss how our consciousness is costing us today. Beyond that, consciousness is subject to various biases that significantly distort our response to the world in ways that can get us killed.
So: consciousness is the only way evolution has been able to produce the widest range of intelligent behaviour, and it has significant costs.
An unconscious intelligence wouldn't suffer from anxiety or depression or "The heart-ache and the thousand natural shocks/That flesh is heir to." Unconscious intelligence is "a consummation devoutly to be wished", no? All of this strongly suggests that if the same level of intelligent behaviour could have evolved without consciousness, it would have.
But it didn't.
So on the one hand we know that barriers and limits and discontinuities in the growth curve of technological capabilities are commonplace, and on the other hand we know that nature, which is well-equipped with the building blocks required to evolve non-conscious human-type intelligence over many millions of years, has not done so. It hasn't even managed dog intelligence without consciousness.
This tells us that extrapolating from the current rate of growth in LLM capabilities to the level of intelligence exhibited by humans is almost certainly unjustified.
To get a sense of the force of this argument from evolution, imagine Orville and Wilbur in an alternate world where no animals had wings. Given the evolutionary advantages of flight, the only reason it would not evolve is if the force of gravity and the density of the atmosphere, set against the strength of the chemical bonds holding bone and tissue together, made natural flight impractical. On a world without naturally evolved wings, building aircraft would probably be beyond the reach of a couple of bicycle mechanics.
We live in a world without naturally evolved unconscious high-level intelligence.
Engineers have always been trained to use the natural world as a source of models, and everything from Velcro to backhoe shovels reflects this training and the common practice of engineering design. Given that, when we look to nature for models of unconscious human-level intelligence and find none, it should give us pause. Evolution is smarter than we are.
The outlook for LLMs gets worse when we notice that they also lack something all conscious intelligences have: embodiment.
We think not just with our minds but with our bodies. I do, anyway. Men in general are kind of famous for it. And the kind of thinking we do with our embodied minds involves far more than just symbol manipulation, which is the only thing LLMs are concerned with.
Consciousness is an evolved, embodied capacity that enables and regulates intelligent behaviour in living things. There is no reason to believe that the full range of behaviours enabled by conscious, embodied intelligence, even in the limited domain of symbol manipulation, can be achieved by a large language model or any other non-conscious system. And the lack of naturally evolved non-conscious intelligences is strong evidence for the opposite proposition: that only an embodied consciousness can produce the kind of intelligence we see in dogs, much less human beings.
So... have I proven unconscious human-level intelligence is impossible? Nope. But I don't have to. "You can't prove it's impossible" is not an argument, it's a statement of bias. Anyone who believes it is possible to create a disembodied non-conscious human-level intelligence has to explain why nature hasn't. Simple, yes?
On both historical and evolutionary grounds, it is very likely that LLMs will hit a fundamental limit in their performance, and new approaches to machine intelligence will be required to get beyond it. Those approaches will probably involve embodiment, and--ultimately--consciousness.
Only then will we be able to welcome our new cybernetic overlords.