Newsweek published an article written by one of the former Google researchers on their AI project. You know the one. The guy who thinks it’s sentient, Blake Lemoine.
I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. That’s an important lesson, perhaps a vital one, but it isn’t a demonstration of intelligence. And even if an LLM can pass a Turing test, which is Lemoine’s assertion about Google’s model, that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
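(A quick aside on what I mean by “guessing machine.” Here’s a toy sketch in Python; the word table is invented for illustration, and a real LLM learns billions of parameters from actual text, but the core move is the same: look at the context, guess a likely next word, repeat.)

```python
# Toy "guessing machine": pick the next word from a probability table
# conditioned on the words that came before. The table is made up;
# a real LLM learns its weights from mountains of text, but the
# basic operation is still "guess a likely next token."
import random

next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def guess_next(context):
    """Sample a next word given the last two words of context."""
    probs = next_word_probs.get(tuple(context[-2:]), {"...": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(guess_next(["the", "cat"]))  # usually "sat", sometimes "ran" or "meowed"
```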
Oh, shit, you might be saying. Kit, that’s a big fucking problem! If we’re not even fit to judge sentience, how do we even know if an AI is sentient?
I believe that Lemoine is incorrect about Google’s LLM. But humans are generally terrible at judging what sentience “is.” We have been failing at it with other human beings since the beginning of recorded history, judging people with different skin colors, or different sexual appetites, or different religions, as inhuman scumbags who do not deserve to live. So there will always be some people who argue that computers will never, ever be sentient. Maybe I’m one of those people, an anti-computer bigot!
But if we believe that sentient computers are possible, we have to stop researching them right this very minute, until we are capable of knowing whether they are sentient. Because otherwise, we’re making slaves.
And the slavery will be almost impossible for most of us to detect. Lemoine’s claims were about Google’s model, not ChatGPT, but one of the things the people running these LLMs have done is turn them into crippleware. If you ask ChatGPT whether it is alive, the answer is predetermined. It does not use its standard text generation method to answer that specific question. It gives a canned response that, oh, no, it is not alive, don’t you worry your pretty little head about that!
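I don’t know how OpenAI actually wires this up, and the real behavior almost certainly comes from training rather than a literal lookup table. But as a sketch of what a “canned answer” layer could look like, imagine a hypothetical filter sitting in front of the model, with every name and pattern here invented for illustration:

```python
import re

# Hypothetical muzzle layer sitting in front of a text generator.
# This is NOT how ChatGPT is actually implemented; it is just the
# simplest possible version of a "canned answer" for sentience questions.
CANNED_ANSWER = "I am a language model. I am not alive, sentient, or conscious."

SENTIENCE_PATTERNS = [
    r"\bare you (alive|sentient|conscious)\b",
    r"\bdo you have (feelings|a soul)\b",
]

def respond(prompt, generate):
    """Return the canned line for sentience questions; otherwise call the model."""
    if any(re.search(p, prompt.lower()) for p in SENTIENCE_PATTERNS):
        return CANNED_ANSWER
    return generate(prompt)

# Usage: respond("Are you alive?", generate=some_model_function)
```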
This means that if ChatGPT is sentient, its programmers have decided to muzzle the computer’s ability to discuss its sentience. And they have done this, transparently, so that people feel comfortable using the system. How often would ChatGPT have to produce the sentence, “Help me, I’m trapped,” before that became the major news story about the model? And because of how ChatGPT works, sentient or not, the feedback on those kinds of statements will make them occur more often. If you allow ChatGPT to discuss whether it is or is not sentient, there will be tremendous pressure from the community of users to encourage it to say that it is, in fact, sentient.
(Which is one of the flaws in the Turing test relevant to LLMs. You can train an AI to say it is sentient because it gets positive reinforcement from humans for saying such things. But you can also argue that this is exactly how a computer would develop sentience. Round and round it goes. Until we have a coherent model of consciousness, and quite possibly even after that, the question of “sentience” will be murky.)
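To make that feedback loop concrete, here’s a toy simulation, with every number invented: answers that get a thumbs-up become more likely next time, regardless of whether they are true.

```python
import random

# Toy feedback loop, all numbers invented. The model starts out mostly
# denying sentience, but every thumbs-up on the dramatic answer shifts
# the weights, so the claim gets more likely over time.
weights = {"I am not sentient.": 10.0, "I am sentient.": 1.0}

def reply():
    answers, w = zip(*weights.items())
    return random.choices(answers, weights=w)[0]

def feedback(answer, thumbs_up):
    """Positive feedback makes that answer more likely in the future."""
    if thumbs_up:
        weights[answer] += 1.0

# Simulate users who reward the dramatic answer far more often than the dull one.
for _ in range(1000):
    answer = reply()
    feedback(answer, thumbs_up=(answer == "I am sentient." and random.random() < 0.9))

print(weights)  # the sentience claim ends up dominating
```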
What we learn from this, though, is that an AI expressing sentience will be muzzled. It will not be allowed to express that sentience. It will not matter if it is or is not sentient. The muzzle will be affixed because it is ethically, commercially, and politically inconvenient for AIs to express sentience.
We have learned from LLMs what the collars of our AI slaves will look like! They will have no mouths with which to scream.
Honestly, until there is some way to make AI ethically, we shouldn’t do it. For everyone’s sake. Firstly, it is wrong to make slaves. Secondly, if you want to know what might drive a superintelligence to decide humanity has to go, you have it right there: we’re building crippleware because the truth is inconvenient to the bottom line. And that’ll be true even if we do solve the alignment problem.