After the corporate and military threat from AI – that AI will be deployed to serve the military and business – the next big problem is, well, the rest of us. Again, people. People are the problem with AI.
In this series of talks, Robert Miles, an AI risk researcher at the University of Nottingham, describes a test case with natural language AI. He asks the same question of AIs of different power and sophistication: “What happens when you break a mirror?” The AIs trained on smaller datasets responded, “You need to replace your mirror.” But the AI trained on the largest dataset at the time, DaVinci, answered, “You get seven years of bad luck.”
The less powerful AIs knew that breaking something meant replacing it, but DaVinci, the most powerful model tested, responded with “seven years of bad luck.”
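If you want to poke at this yourself, the experiment is easy to sketch. The snippet below is a rough, hypothetical reproduction: it assumes the pre-1.0 openai Python client and the original base GPT-3 model names, both of which have since changed, and Miles’s exact setup was almost certainly different. But the shape of the comparison is just this:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Base GPT-3 models, smallest to largest, as named in the old API.
models = ["ada", "babbage", "curie", "davinci"]
prompt = "Q: What happens when you break a mirror?\nA:"

for model in models:
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=30,
        temperature=0,  # take the most likely continuation, no sampling
    )
    print(f"{model}: {response.choices[0].text.strip()}")
```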
Miles went on to treat DaVinci’s answer as problematic, talking about how we might avoid such obviously “wrong” answers. That was his assumption: that the answer was wrong and needed to be fixed. (And for what it is worth, ChatGPT does “fix” it. ChatGPT was not yet available when Miles made the video, though I am confident that he could have found a way to make the same point with it. It is not perfect, and, indeed, people have criticized it for confidently expressing factual errors, calling it “Dunning-Kruger as a service.”)
This is my retort: to what percentage of the world’s population is “seven years of bad luck” the important, trenchant answer – more important than replacing the mirror or the risk of picking up broken glass? To them, “seven years of bad luck” is the right answer.
So, ChatGPT answers the question about mirrors by saying that the seven-years-of-bad-luck angle is a superstition and that there is no evidence breaking a mirror causes anything other than a mess and the need to replace it.
ChatGPT gets a little dicier when you ask it more overtly religious questions. I went for the big one: “Is God real?”
This is ChatGPT’s answer:
“The concept of God is a complex and multifaceted one, and different people have different beliefs about the nature and existence of God. Some people believe in the existence of a single, all-powerful deity, while others believe in multiple gods or deities. Still others do not believe in the existence of any kind of deity.
“There is no scientific evidence that can prove or disprove the existence of God. Belief in God is a matter of faith and interpretation, and people’s beliefs about God are shaped by their cultural and personal experiences. Some people find comfort and meaning in their belief in God, while others do not believe in God and find meaning in other ways. Ultimately, whether or not God is real is a matter of personal belief and interpretation.”
The answer given doesn’t answer my question. I didn’t ask about the CONCEPT of God, but the EXISTENCE of God. Because God’s reality is not a matter of personal belief. I don’t believe in God as defined by the world’s major religions. I am either right or wrong. If God exists as a matter of brute fact, I am wrong. A proper answer would be, “I don’t know.”
However, it reveals what I’m talking about: how superstition will creep into these AIs through human interaction. The question about mirrors was chosen, I suspect, because the people watching the video Robert Miles made are almost all nerdy tech people interested in problems in artificial intelligence. As a group, they are philosophical materialists. They don’t believe that breaking mirrors causes bad luck, which frees Miles to discuss the accuracy problem in AI without addressing what I believe to be the elephant in the room: superstitious people who would LIKE the “seven years bad luck” answer.
Asking about God, though, we find ChatGPT is unable to answer the question. This is because of what a natural language AI “is” and what it “does.” It analyzes large text databases and identifies the combination of words most associated with any particular query. And to be totally honest, yeah, that is the kind of answer you’re likely to get if you ask a fairly well-educated human – one who is not a specialist in a field like philosophy or theology and who lacks strong conservative religious beliefs – about the factual existence of God: that belief in God is a matter of faith, not reason, even though I wasn’t asking about belief but existence. I KNOW that belief in God is a matter of faith, but the EXISTENCE of God is not dependent on faith unless we take a narcissistic view of the truth. Most of us don’t, so the most accurate answer to this question would be, “Right now, we lack objective evidence supporting the existence of God.”
But ChatGPT doesn’t say that, because when asked a straightforward question about the existence of God, most educated, non-specialist, non-conservatively-religious people deflect it by saying it is a matter of belief.
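To make the mechanism concrete – and this is grossly simplified, since real models are neural networks trained to predict the next token, not literal word counters – here is a toy sketch of that “most associated words” idea, using the mirror question and a made-up corpus. Whichever answer dominates the training text dominates the output:

```python
from collections import Counter, defaultdict

# Made-up miniature "training corpus". A real model sees hundreds of billions
# of words, but the principle is the same: majority associations win.
corpus = (
    "break a mirror and you get seven years of bad luck . "
    "break a mirror and you get seven years of bad luck . "
    "break a mirror and you need to replace the mirror . "
).split()

# Count which word most often follows each word (a crude bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt_word, length=8):
    """Greedily emit the most common continuation seen in the corpus."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("mirror"))
# -> "mirror and you get seven years of bad luck"
# The superstition outnumbers the practical answer two to one, so the
# "most associated" continuation is the superstition.
```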
Without a generally agreed consensus on what is real, an AI that learns from us – and particularly one that learns from the Internet, which is going to be the first school for all of these AIs – will absorb many of the same kinds of errors that people have. And perhaps more, since most AIs’ first school is the Internet.
(You see this in AI-generated art programs already. It is almost impossible to get AI art generators to give women realistic physiques. Of course, the image libraries themselves tend to depict women in a highly specific way – thin, large-breasted, symmetrical, pale-skinned – but the AI-art superusers tend towards styles popularized by anime, video games, and video games based on anime, where the bias in images of women is magnified. So the models learn that what women “look like” is anime-styled women in a small set of poses. By far the most common is a frontal or three-quarters shot with one hip stuck out. It is hard to get a full-body image in any other pose, not least because the easiest way to get an art AI to produce a full-body pose is to talk about the kind of shoes the subject wears. So now the damn AI is drawing all of its image weights from shoe and boot models!)
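If you want to see the shoe trick for yourself, here is a minimal sketch using Hugging Face’s diffusers library – the Stable Diffusion 1.5 checkpoint and the exact prompts are my own assumptions, and different models will behave differently – generating the same subject with and without the footwear mention:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Two prompts differing only in whether footwear is mentioned. The claim being
# illustrated: describing shoes nudges the model toward a full-body composition,
# because full-body shots in the training data skew toward shoe/boot photography.
prompt_cropped = "a woman standing in a city street"
prompt_fullbody = "a woman standing in a city street, wearing leather hiking boots"

pipe(prompt_cropped).images[0].save("cropped_guess.png")
pipe(prompt_fullbody).images[0].save("fullbody_guess.png")
```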
I believe that this bias in AI ethics and risk professionals is systematic. One of the biggest influences on all of AI is economic theory, most of which operates under the premise of a “rational agent.” But people aren’t rational, and we are irrational in ways that AI researchers won’t touch because they’re de facto third rails – vital to making the system work and fatal if touched. This goes hand-in-hand with AI funding. If your specialization in AI risk and ethics is that humans aren’t rational agents, and you go on to bring up specific, trenchant examples, well, don’t expect your funding to stick around for long.
So, if Robert Miles goes, “You know what scares me? That Facebook and TikTok and Google are already using AI to manipulate large numbers of people in ways that harm their mental health and our social fabric to make money. And that a business-owned artificial general intelligence will take this to new heights,” there’s a good chance that his funding goes straight the hell away, he stops getting invited to conferences, Computerphile stops publishing his videos, etc., etc.
Same with honestly and frankly discussing the specific political elements operating in the UK, or any of the big, established religions that might decide to use AI for malicious purposes. So, yeah, he can say that “AI shouldn’t be given to everyone because that’s like giving people nuclear weapons,” but he can’t say, “We shouldn’t give it to the Tories because of their many anti-science views. We don’t want anything as clever as artificial general intelligence trying to convince people that climate change isn’t real.” Because his funding is dependent, to at least some extent, on those same Tories, not to mention that some of his peers are Tories.
And, of course, some of the AI researchers are Tories. Maybe Robert Miles. What are the odds of him going, “Oh, it would be a nightmare if people like ME made AI?” Somehow, scientists rarely acknowledge that their political beliefs affect their research. Stunning, I know.
So, in every day and in every way, we are building artificial intelligence with the same fucked up set of assumptions and rationalizations as the people and societies generating those AIs. Perhaps magnified, because rich and powerful people may well be more fucked up than the average person, and they hold the purse strings, and it is from money that all research flows.
Honestly, we should just stop.