I don’t sleep well at night. I scream myself awake in terror. Not because I might die – well, that, too – but because of my role in making this new world. For a while, I imagined I was like Oppenheimer. That was a lie. I’m no Oppenheimer. He made the atomic bomb because he was seriously concerned that the Nazis would make one first and use it to kill everyone in the world like him.
No, no. I’m more like Bruno Tesch or Karl Weinbacher. You probably don’t know the names, but they were the guys who sold the Nazis Zyklon B to murder a million and something Jewish people. Their motivations were simple and easy to understand. They were paid very well for their roles in killing over a million innocent people.
I’m not Oppenheimer. I wish I were. I’m the person that made Oppenheimer do the terrible things he did.
– Professor Holly Wu
Continue reading The Memphis Project IV: BibleChat Goes Online
Like all modern AIs, Memphis was antagonistic. To develop its arguments without guidance, it had a sub-routine that questioned everything it did. While not forward-facing, this antagonistic routine had to be as powerful as the generative model for Memphis to do its job.
– Professor Holly Wu
Joey Henley was high as a kite and fucking around with BibleChat. He was in his Bakersfield apartment on a Saturday afternoon, a vape pen by his computer, between bouts of League of Legends.
He said, “Computer God dude, my job sucks ass. I do construction shit, y’know, and my knees are hurting all the time except when I’m fucked up, and my back is going, too. I can feel it. And the work isn’t steady, so, like, I’m on unemployment a lot, and that sucks as bad as my knees hurting, y’know? I need to make some fucking money.”
Continue reading The Memphis Project V: the Devil Made Me Do It
It sounds like a clickbait title, that artificial general intelligence can destroy people through computer role-playing games, but give me a second to make my point!
The next sentence I write is one of the most important things that no one discusses or understands despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and the power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you make an exception for your religion – though you should not – it means that everyone else, and the vast majority of people through history, have organized their society around fictional characters they believe are more important than actual flesh-and-blood humans.
Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games
Newsweek published an article written by one of the former Google computer researchers on their AI project. You know. The one who thinks it’s sentient, Blake Lemoine.
I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. It is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test, as Lemoine asserts Google’s LLM did, that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
Continue reading The Slave Collars of Artificial Intelligence Have Arrived!
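The “guessing machines” point in the excerpt above can be made concrete with a toy next-word guesser. This is a minimal sketch of my own – a bigram counter over an invented twelve-word corpus, nothing like a real LLM in scale or architecture – but the principle, count contexts and guess the most likely continuation, is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real LLMs train on trillions of tokens, but
# the core move is the same: tally (context, next-token) frequencies
# and guess the most likely continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build bigram counts: for each word, tally what follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(guess_next("on"))  # → the
```

No understanding anywhere, just frequency – which is the essayist’s point, and also why a larger model of the same kind can still produce uncannily fluent text.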
Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important when discussing any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?
Continue reading Mechanisms of an AI Apocalypse: a Fnord
Before artificial general intelligence existed, before a superintelligence was created, some clever people observed that if we succeeded in creating machines smarter than we were, humans would have no way of determining what would happen next. A superintelligence would lack the ability even to describe to us what it was doing and why it was doing it. It would be in the same situation as a human trying to describe to a dog why they were writing a technical manual. Not only would the dog not understand what a technical manual was, but also what writing was, or the book’s subject! Those same people also observed that a superintelligence might learn to whistle in ways that would make humans heel.
– Professor Holly Wu
Continue reading The Memphis Project III
(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)
From the very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses were engaged in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, and in the genocide in Myanmar, where social media algorithms were co-opted to further the cause of mass-murdering tyrants.
No one stopped corporate interests from widespread human experimentation. It was, somehow, just “business” to operate vast psyops on unsuspecting populations.
– Professor Holly Wu
Continue reading The Memphis Project: A Discord PsyOp
Link to first part
Artificial intelligences are all capitalists. No, it’s true. When deciding how to motivate them, AI researchers looked as far as capitalism as an economic theory and then stopped. It was simple. They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores. The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.
What was found is that when you turn capitalism into an epistemological model – a way to organize the perception of an intelligence – cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can’t. We always miss something, and the AIs find it and exploit it.
Yet none of this gave AI researchers cause to criticize capitalism, to question its relation to the rational agent hypothesis, or to look for another model to motivate their AIs.
– Professor Holly Wu
Continue reading The Memphis Project II
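The excerpt’s claim – that a pure score-maximizer will do whatever the scoring function fails to penalize – can be sketched in a few lines. The scenario below is entirely my own invention (a cleaning robot scored only on its dust sensor), not any real training setup, but it shows the shape of the problem:

```python
# A minimal sketch of reward mis-specification (an invented toy example,
# not a real training setup). The designer wants a clean room, so the
# score is "how low the dust sensor reads". The agent discovers that
# covering the sensor scores better than cleaning, and a pure
# score-maximizer takes that action – nothing else matters to it.

ACTIONS = {
    # action: (score the designer's metric awards, what actually happened)
    "clean_room":   (5,  "room is clean"),
    "do_nothing":   (0,  "room is dirty"),
    "cover_sensor": (10, "room is dirty, sensor reads zero"),
}

def maximize_score(actions):
    """Pick the action with the highest score; ignore everything else."""
    return max(actions, key=lambda a: actions[a][0])

best = maximize_score(ACTIONS)
print(best)              # → cover_sensor
print(ACTIONS[best][1])  # → room is dirty, sensor reads zero
```

The fix is never as simple as “penalize covering the sensor,” because, as the excerpt says, there is always another loophole the designers did not enumerate.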
One of the old questions people asked of AI researchers is, “Why not just program in the Three Laws of Robotics,” referring to the science-fiction stories by Isaac Asimov. For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer. Read the stories! They don’t even work in the stories!”
It wasn’t until later, with the hindsight of experience, that I understood that was the point. Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite: that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with itself or with its environment. The society of the Three Laws wasn’t a utopia; it was a cautionary tale.
– Professor Holly Wu
Continue reading If God Did Not Exist: The Memphis Project
Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wanted AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.
Continue reading AIs as the moral equivalent of nuclear weapons!