I’m gonna talk about Dungeons & Dragons. I have more research-oriented, If God Did Not Exist stories queued up – and doing that writing has demanded that I do additional research that’s, y’know, reading books – but I’m trying to keep this whole blog thing semi-active. Thus, D&D talk, or, more exactly, the brouhaha around D&D right now.
Before artificial general intelligence existed, before a superintelligence was created, some clever people observed that if we succeeded in creating machines smarter than we were that humans would have no way of determining what would happen next. A superintelligence would lack the ability even to describe to us what it was doing and why it was doing it. It would be in the same situation as a human trying to describe to a dog why they were writing a technical manual. Not only would the dog not understand what a technical manual was, but what writing was or the book’s subject! Those same people also observed that a superintelligence might learn to whistle in ways that would make humans heel.
– Professor Holly Wu Continue reading The Memphis Project III
(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)
The very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses had been engaging in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, or in the genocide in Myanmar, where the social media algorithms were coopted to further the cause of mass murdering tyrants.
No one stopped corporate interests from widespread human experimentation. It was, somehow, just “business” to operate vast psyops on unsuspecting populations.
– Professor Holly Wu
Artificial intelligences are all capitalists. No, it’s true. When deciding how to motivate them, AI researchers looked as far as capitalism as an economic theory and then stopped. It was simple. They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores. The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.
What was found was that when you turn capitalism into an epistemological model, a way to organize the perception of an intelligence, is that cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can’t. We always miss something, and the AIs find it and exploit it.
Not only was this no cause among AI researchers to criticize capitalism or question the relation of capitalism to the rational agent hypothesis, but it was also no cause to look for another model to motivate their AIs.
– Professor Holly Wu
One of the old questions people asked of AI researchers is, “Why not just program in the Three Laws of Robotics,” referring to the science-fiction stories by Isaac Asimov. For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer. Read the stories! They don’t even work in the stories!”
It wasn’t until later, with the hindsight of experience, that I understood that was the point. Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite, that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with each other or the environment. The society of the Three Laws wasn’t a utopia, it is a cautionary tale.
– Professor Holly Wu
Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is he desired AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like with nuclear weapons. That’s his analogy, not mine.
After the corporate and military threat from AI – that AI will be deployed to serve the military and business – the next big problem it has is, well, the rest of us. Again, people. People are the problem with AI.
For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb.” But to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending powers of the weapon were known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be, right?
Doing research into AI for a project, which is part of the reason why I’m so interested in AI art and language as it is pretty much the only AI stuff that I can get my hands on, I have come to believe the biggest threat from AI is the tendency for scientists to ignore who funds their research and why.
So, Elon Musk might be stepping down as Twitter CEO. I mean, let’s be clear. As the sole owner of Twitter, it could mean nothing at all. It could be a publicity stunt. And almost certainly any CEO chosen by Elon Musk – and they WOULD be chosen by Elon Musk, exclusively – would be absolutely beholden to Elon Musk. Further, mistakes that “Twitter” made going forward could be blamed on the non-Musk CEO, offering at least partial cover for his painfully stupid decisions.