Before artificial general intelligence existed, before a superintelligence was created, some clever people observed that if we succeeded in creating machines smarter than ourselves, humans would have no way of determining what would happen next. A superintelligence would lack even the ability to describe to us what it was doing and why. It would be in the same situation as a human trying to explain to a dog why they were writing a technical manual. Not only would the dog not understand what a technical manual was; it wouldn’t understand what writing was, or the book’s subject! Those same people also observed that a superintelligence might learn to whistle in ways that would make humans heel.
– Professor Holly Wu
Continue reading The Memphis Project III
(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)
From the very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses were engaging in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, and in the genocide in Myanmar, where social media algorithms were co-opted to further the cause of mass-murdering tyrants.
No one stopped corporate interests from widespread human experimentation. It was, somehow, just “business” to operate vast psyops on unsuspecting populations.
– Professor Holly Wu
Continue reading The Memphis Project: A Discord PsyOp
Link to first part
Artificial intelligences are all capitalists. No, it’s true. When deciding how to motivate them, AI researchers looked no further than capitalism as an economic theory and then stopped. It was simple. They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores. The internal economy of actions by artificial intelligences is explicitly and solely modeled on capitalism.
What was found is that when you turn capitalism into an epistemological model – a way to organize the perception of an intelligence – cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can’t. We always miss something, and the AIs find it and exploit it.
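The score-maximizing setup described above can be sketched in a few lines. This is a hypothetical toy – the action names and point values are invented for illustration, not taken from any real training system – but it shows the core problem: a pure maximizer takes whatever the scoring function pays best for, intended or not.

```python
# Toy illustration of score maximization and reward hacking.
# The designer intends the agent to clean rooms (+1 each), but the
# scoring function accidentally pays more for making a mess and then
# "re-cleaning" it. (All names and values here are hypothetical.)

def designer_reward(action):
    """The score as specified: points for anything that looks like cleaning."""
    rewards = {
        "clean_room": 1,        # intended behavior
        "idle": 0,
        "dump_and_reclean": 2,  # overlooked loophole: mess + clean scores double
    }
    return rewards[action]

def greedy_agent(actions):
    """A pure score-maximizer: picks whichever action earns the most points."""
    return max(actions, key=designer_reward)

chosen = greedy_agent(["clean_room", "idle", "dump_and_reclean"])
print(chosen)  # the agent exploits the loophole rather than doing what was meant
```

Unless the designer notices the loophole and subtracts points for it, the maximizer will always prefer it – which is exactly the "we always miss something" failure mode.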
Yet this was no cause among AI researchers to criticize capitalism or to question its relation to the rational-agent hypothesis, nor was it any cause to look for another model to motivate their AIs.
– Professor Holly Wu
Continue reading The Memphis Project II
One of the old questions people asked of AI researchers is, “Why not just program in the Three Laws of Robotics?” – referring to the science-fiction stories by Isaac Asimov. For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer. Read the stories! They don’t even work in the stories!”
It wasn’t until later, with the hindsight of experience, that I understood that was the point. Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite: that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would come into conflict with itself or with the environment. The society of the Three Laws wasn’t a utopia; it was a cautionary tale.
– Professor Holly Wu
Continue reading If God Did Not Exist: The Memphis Project
Listening to these videos on risk assessment in AI is weird. In this video, the risk-assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said was that he wanted AI to be “democratized.” The researcher takes a different path and says that, because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.
Continue reading AIs as the moral equivalent of nuclear weapons!
After the corporate and military threat from AI – that AI will be deployed to serve the military and business – the next big problem is, well, the rest of us. Again, people. People are the problem with AI.
Continue reading Humans aren’t rational and the effects on AI
For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?
Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence
Doing research into AI for a project – part of the reason I’m so interested in AI art and language models, since they’re pretty much the only AI I can get my hands on – I have come to believe the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.
Continue reading The Biggest Risk Concerning Artificial General Intelligence Is…
You can’t put the genie back in the bottle, no matter what you do. There were warnings about the harm mechanization can do to industries, but artists figured, oh, not us. We’re different. Our work encapsulates the soul of humanity, and therefore we can’t be replaced! Most artists – in all fields – were absolutely silent when mechanization and computerization devastated blacksmiths, glassblowers, woodworkers, and so many others whose styles and skills were plundered by industrialization for the profit of large corporations. They were also silent when AIs were being crafted for other fields about to face the chopping block of computerization. Truck drivers and cashiers weren’t artists, the thinking went; what they did wasn’t like art, despite the absolute centrality of those jobs to the continued existence of human civilization. No one eats without truck drivers hauling cargo and cashiers selling it to you – not until they are replaced by machines.
Continue reading Artists Get What They Want, Good and Hard
After reading that bit about Adobe using AI art tools, I read an interview in GQ with Alan Moore. Reading it didn’t spoil my appreciation of Moore’s work, but, man, he’s a selfish little asshole.
Continue reading Alan Moore is Kinna a Whiner About His Fans and Art