What makes this software special to me is not that it can recount information like Google search, but that it can act so humanlike, or to be frank, superhuman-like. Let’s look at the situation vis-à-vis errors and the overlap with human capabilities and habits of mind. As you said, the software works increasingly like a human. It gives you a name to call it. Also, when pressured to give information that it does not have, rather than admit it, it attempts to fake it until it can make it. In your case, it even shamelessly launched into a somewhat superficially persuasive word salad. Just like many humans. Or at least most teenagers.
It did something similar with me. I asked what caused the Opium War. It told me that the war was over the opium trade between Britain and China. I informed it that Britain stopped shipping opium to China in the 1790s, fifty years before the Opium War. Its response was to apologize for this mistake, admit that, according to its own research, I was factually correct, and revise its explanation. Its new explanation then included a new error, this time about the East India Company.
Was I disappointed with these errors? Was I put off by this brazen shamelessness? No, I was impressed. These errors of fact will be fixed in the fullness of time. Regarding the first error, it failed to reach deeply enough into its memory. When I prompted it, it reached deeper into its memory and recognized its error. That too is a very human trait. In the full blush of conversation, our memory only runs so deep, because every conversation requires simultaneously running a grab bag of capabilities (reading facial expressions, body language, anticipating counterarguments, jocularity/wit, etc.) within a tight timeframe. ChatGPT has its own constraints and can only perform so well within a given timeframe. Hence its ability to forget, to overlook. Very human, no?
In other words, it shoots from the hip like a human, forgets like a human, reconsiders like a human, reorganizes its thinking like a human, apologizes like a human, and keeps moving ahead like a human. It’s not only going to become increasingly accurate, like Google search, but increasingly human. It takes risks like humans, gropes in the dark like humans, offers hypotheses like humans, recovers from failure like humans, and is increasingly human. Except that it’s on a trajectory to become much, much more. It’s going to become superhuman.
Oh, I forgot. When it needs more information, more context, more parameters, it prompts you to produce them. It even asks questions. What could be more “human” than that? As long as it is powered up, it’s essentially a perpetual-motion machine gathering intelligence, snowballing skills and information and perspectives. It is en route to becoming superhuman. Arguably, in a growing swathe of respects, it already is.
That’s my back-of-the-napkin working hypothesis. I am by no means wedded to it.
By the way, I am the Chinese-to-English translator mentioned by commenter Scott Summers. I am definitely not an academic or specialist… ha… Scott makes some good points. Some quality pushback. Hence the need for this conversation.
Thanks for this. It's fascinating to see how people respond to this technology, particularly in seeing ego and personality within the application. I started out treating it as an individual, but by the time I was done my first interaction I was treating it as a machine. I didn't find it a convincing simulacrum of an intelligent being, but I'm very skeptical in that regard: I don't find most people very convincing either.
My own belief is that this approach to emulating intelligent behaviour has fundamental limitations, which I'll write more about in the New Year. That's not to say this technology won't have a major impact. It will. Huge. But I expect it to fall far short of genuine intelligence, which for the moment I'll characterize operationally: any approach to emulating intelligence based on large language models will always be subject to the kind of failure it demonstrates in my interaction with it here, passing seamlessly from apparent understanding to complete gibberish.
I'm not sure if the following tentative conclusions of mine will be of use to you, but… So why has recognizable AI first appeared in chatbot form? Sorry, I don’t have enough time to explore this in greater detail. Suffice it to say:
1: for intelligence to be recognized, it needs to be intelligible to the audience. As I do not understand machine language, COBOL, Python, or C++, the only way I can recognize AI is for it to appear in human language. Ergo, it was most likely to first appear intelligibly to me via a chatbot.
2: I have a very weak understanding of how the software operates beyond the notion that it calculates probabilities vis-à-vis the most appropriate next word in a given sentence. If the final word in the sentence is “good”, then the thought continues. If the final word is “bad”, then it redirects. If the key word in the sentence is “bright”, “luminous”, or “refulgent”, the sentence goes in slightly different directions, because each of these words has a different set of optimally compatible partner words to choose from. This is my primitive impression of how the software works at its most elemental level.
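To make that “compatible partner words” notion concrete, here is a toy sketch in Python. The word table and its probabilities are invented purely for illustration; a real system like ChatGPT computes comparable distributions with a neural network over subword tokens, not a hand-built lookup.

```python
import random

# Hypothetical follower table: for each word, how probable each next word is.
# The words and numbers here are made up solely to illustrate the idea above.
NEXT_WORD_PROBS = {
    "bright":    {"future": 0.5, "light": 0.3, "idea": 0.2},
    "luminous":  {"glow": 0.6, "light": 0.4},
    "refulgent": {"dawn": 0.7, "splendour": 0.3},
}

def next_word(word: str) -> str:
    """Sample a follower for `word` in proportion to its probability."""
    dist = NEXT_WORD_PROBS.get(word)
    if dist is None:
        return "."  # no known continuation: end the sentence
    followers = list(dist)
    weights = [dist[w] for w in followers]
    return random.choices(followers, weights=weights, k=1)[0]

# "bright", "luminous", and "refulgent" pull the sentence in different
# directions because each has a different set of likely partner words.
for start in ("bright", "luminous", "refulgent"):
    print(start, "->", next_word(start))
```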
When I’m thinking, my thoughts go in a predictable direction. However, when I speak, words pass through a different filter, that of the ear. I’m sure you’ve noticed as well that this produces a very different response, one which is very sensitive to word choices, especially those made in the final sentence one is speaking. Oftentimes, I even struggle to finish a spoken sentence, as looking for the right word requires churning through so many word choices (something which seldom, if ever, happens when thinking wordlessly). And that right word then determines the direction my thought pattern takes next. Not so much when I’m thinking, but definitely when I’m talking. In other words, the same process ChatGPT uses.
I’ve long noticed that I am much smarter in conversation than I am when thinking to myself. Also, it has long been an observation by writers that they cannot figure out what they think until they write about it. Writing is actually a dialogue. You throw down a first draft. You argue with it. You throw down your second draft. In other words, writing is a conversation.
In short, thinking is not thinking. Thinking is talking, dialoguing, conversation. Thinking is simply predicting the optimal word for the spoken/written sentence and then extrapolating to the next related optimal word that keeps the conversation moving. I need to define “moving”, but even more I need to get back to work… ha…
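If it helps, here is an equally toy Python sketch of that “keep the conversation moving” loop: start with a word and keep extrapolating to the most probable next word until no continuation remains. The vocabulary and probabilities are again invented for illustration only.

```python
# Greedy "keep it moving" loop: always take the most probable next word.
# This table is a made-up toy; it only illustrates the chaining idea.
NEXT_WORD_PROBS = {
    "the":  {"idea": 0.6, "light": 0.4},
    "idea": {"was": 0.7, "is": 0.3},
    "was":  {"good": 0.6, "bad": 0.4},
}

def generate(word: str, max_len: int = 10) -> list[str]:
    """Extrapolate word by word until there is no known continuation."""
    sentence = [word]
    for _ in range(max_len):
        dist = NEXT_WORD_PROBS.get(sentence[-1])
        if not dist:
            break
        # "optimal word": the most probable partner of the previous word
        sentence.append(max(dist, key=dist.get))
    return sentence

print(" ".join(generate("the")))  # -> "the idea was good"
```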
Thanks for these thoughts.
I do have a very good understanding of how software operates, and speak most of the languages you mention, and more besides. I'm also a writer and am familiar with the iterative process of verbal thinking. As Winston Churchill reportedly said, "How can I know what I think until I've heard what I say?"
But the process of searching for a word in a conscious mind vs. a large language model is completely different: a conscious mind is choosing words with an aim to express its meaning. This can and does result in statistically unlikely choices. These can be perceived as "surprising" to the listener, and maybe even to the speaker, because the subconscious exists. A large language model like ChatGPT is just computing over probability distributions. Those probability distributions are the output of the reasoning process that humans have undergone, as represented by the texts ChatGPT was trained on. The conscious search for meaning is the input to that process.
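For readers who want that "computing over probability distributions" point in concrete form, here is a minimal Python sketch. The candidate words and their raw scores are invented; a real model produces such scores over tens of thousands of subword tokens.

```python
import math
import random

# Invented raw scores ("logits") for three candidate next words.
LOGITS = {"light": 2.0, "glow": 1.0, "refulgent": -1.0}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(LOGITS)
print(probs)  # "refulgent" is improbable but never impossible...

# ...so a sample can still surprise, yet the choice only reflects the
# statistics of the training text, not a prior search for meaning.
print(random.choices(list(probs), weights=list(probs.values()), k=1)[0])
```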
Thinking is a variety of things. I believe consciousness is inherently embodied, so in my view thinking is walking, swimming, kayaking, canoeing, sailing, standing still, lying down, and yes, also talking. A disembodied word-predictor program is not thinking. It is engaged in statistical calculations in an attempt to emulate the output of thought. There is every reason to believe that such emulation is inherently limited: the claim that a non-conscious system can successfully emulate all the behaviours of a conscious one requires justification, as all the evidence we have points very strongly to the conclusion that it is false. I'll be writing something on this topic in the new year, and appreciate this opportunity to clarify my thoughts. But those thoughts existed before they were spoken.
I have a friend who is a Chinese-to-English translator. He is not an academic area specialist. He posted something about AI taking over human thinking occupations in 5 years. He went on to cite a comment from an undergraduate doing research on a problem that hadn't been examined before, and how ChatGPT was able to guess many of the "points in my research proposal" - although I'm not sure what that means. My friend didn't quite get what was being said here. That AI is able to compute solutions to some kind of research problem just says that a lot of research is pretty mundane - but we all know that. I also question the ability of an undergraduate to judge the research relevance of what the AI was able to do. All of this is analogous to the point about poetry and expert judgement. Computational machines can do amazing things and assist with the performance of mundane tasks that humans used to get credit for doing. It will be really interesting to see what this leads to, but the replacement of human thinking is probably not among the outcomes.
Very interesting to see your questions and ChatGPT’s answers with your commentary, for those of us not clear on the physics. Loved the recording with the regular and drunk versions of ChatGPT when it was getting off track!
Thank you for clarifying the current state of AI. It is going to be very interesting to watch how quickly it evolves.