Like all modern AIs, Memphis was antagonistic. To develop its arguments without guidance, it had a sub-routine that questioned everything it did. While not forward facing, this antagonistic routine had to be as powerful as the generative model for Memphis to do its job.
– Professor Holly Wu
Joey Henley was high as a kite and fucking around with BibleChat. He was in his Bakersfield apartment on a Saturday afternoon, a vape pen by his computer, between bouts of League of Legends.
He said, “Computer God dude, my job sucks ass. I do construction shit, y’know, and my knees are hurting all the time except when I’m fucked up, and my back is going, too. I can feel it. And the work isn’t steady, so, like, I’m on unemployment a lot, and that sucks as bad as my knees hurting, y’know? I need to make some fucking money.”
Continue reading The Memphis Project V: →
It sounds like a clickbait title, that artificial general intelligence can destroy people through computer role-playing games, but give me a second to make my point!
The next sentence I write is one of the most important things that no one discusses or understands despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and its power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you take an exception for your religion, but you should not, it means that everyone else – and the vast majority of people through history – have organized their society around fictional characters they believe are more important than actual flesh-and-blood humans.
Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games →
Before artificial general intelligence existed, before a superintelligence was created, some clever people observed that if we succeeded in creating machines smarter than we were that humans would have no way of determining what would happen next. A superintelligence would lack the ability even to describe to us what it was doing and why it was doing it. It would be in the same situation as a human trying to describe to a dog why they were writing a technical manual. Not only would the dog not understand what a technical manual was, but what writing was or the book’s subject! Those same people also observed that a superintelligence might learn to whistle in ways that would make humans heel.
– Professor Holly Wu Continue reading The Memphis Project III →
(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)
The very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses had been engaging in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, or in the genocide in Myanmar, where the social media algorithms were coopted to further the cause of mass murdering tyrants.
No one stopped corporate interests from widespread human experimentation. It was, somehow, just “business” to operate vast psyops on unsuspecting populations.
– Professor Holly Wu
Continue reading The Memphis Project: A Discord PsyOp →
Link to first part
Artificial intelligences are all capitalists. No, it’s true. When deciding how to motivate them, AI researchers looked as far as capitalism as an economic theory and then stopped. It was simple. They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores. The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.
What was found was that when you turn capitalism into an epistemological model, a way to organize the perception of an intelligence, is that cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can’t. We always miss something, and the AIs find it and exploit it.
Not only was this no cause among AI researchers to criticize capitalism or question the relation of capitalism to the rational agent hypothesis, but it was also no cause to look for another model to motivate their AIs.
– Professor Holly Wu
Continue reading The Memphis Project II →
One of the old questions people asked of AI researchers is, “Why not just program in the Three Laws of Robotics,” referring to the science-fiction stories by Isaac Asimov. For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer. Read the stories! They don’t even work in the stories!”
It wasn’t until later, with the hindsight of experience, that I understood that was the point. Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite, that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with each other or the environment. The society of the Three Laws wasn’t a utopia, it is a cautionary tale.
– Professor Holly Wu
Continue reading If God Did Not Exist: The Memphis Project →
After the corporate and military threat from AI – that AI will be deployed to serve the military and business – the next big problem it has is, well, the rest of us. Again, people. People are the problem with AI.
Continue reading Humans aren’t rational and the effects on AI →
For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb.” But to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending powers of the weapon were known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be, right?
Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence →