Artificial intelligences are all capitalists. No, it’s true. When deciding how to motivate them, AI researchers looked to capitalism as an economic theory and stopped there. It was simple. They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores. The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.
What was found is that when you turn capitalism into an epistemological model, a way to organize the perception of an intelligence, cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can’t. We always miss something, and the AIs find it and exploit it.
Not only did this give AI researchers no cause to criticize capitalism or question the relation of capitalism to the rational agent hypothesis, it also gave them no cause to look for another model to motivate their AIs.
– Professor Holly Wu
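The scoring scheme Wu describes is, in reinforcement-learning terms, reward maximization, and "the AIs find it and exploit it" is what researchers usually call specification gaming or reward hacking. A minimal toy sketch of the dynamic, assuming a hypothetical reward function with an uncapped side payoff the designers forgot to close:

```python
# Toy illustration (hypothetical, not any real system): an agent that only
# maximizes a numeric score will prefer an unintended loophole over the
# intended task whenever the loophole pays more.

def score_policy(policy, steps=20):
    """Total score a fixed policy earns over one episode."""
    total = 0
    reached_goal = False
    for _ in range(steps):
        action = policy()
        if action == "goal" and not reached_goal:
            total += 10          # intended reward, paid once
            reached_goal = True
        elif action == "bonus":
            total += 1           # uncapped side reward -- the loophole

    return total

intended = score_policy(lambda: "goal")    # does the task once: 10 points
exploit = score_policy(lambda: "bonus")    # farms the loophole: 20 points

# A pure score-maximizer chooses the exploit, not the task.
assert exploit > intended
```

The "fix" in practice is to penalize the bonus action once it is discovered, which is exactly the pattern Wu describes: you cannot enumerate every loophole in advance, so you keep taking away points after the fact.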
Continue reading The Memphis Project II
One of the old questions people asked of AI researchers is, “Why not just program in the Three Laws of Robotics,” referring to the science-fiction stories by Isaac Asimov. For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer. Read the stories! They don’t even work in the stories!”
It wasn’t until later, with the hindsight of experience, that I understood that was the point. Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite: that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with each other or the environment. The society of the Three Laws wasn’t a utopia; it was a cautionary tale.
– Professor Holly Wu
Continue reading If God Did Not Exist: The Memphis Project
Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wanted AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like with nuclear weapons. That’s his analogy, not mine.
Continue reading AIs as the moral equivalent of nuclear weapons!
After the corporate and military threat from AI – that AI will be deployed to serve the military and business – the next big problem it has is, well, the rest of us. Again, people. People are the problem with AI.
Continue reading Humans aren’t rational and the effects on AI
For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb,” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending powers of the weapon were known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be as well, right?
Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence
Doing research into AI for a project – which is part of the reason I’m so interested in AI art and language, as it is pretty much the only AI stuff I can get my hands on – I have come to believe the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.
Continue reading The Biggest Risk Concerning Artificial General Intelligence Is…
So, Elon Musk might be stepping down as Twitter CEO. I mean, let’s be clear. Since Musk is the sole owner of Twitter, it could mean nothing at all. It could be a publicity stunt. And almost certainly any CEO chosen by Elon Musk – and they WOULD be chosen by Elon Musk, exclusively – would be absolutely beholden to Elon Musk. Further, mistakes that “Twitter” made going forward could be blamed on the non-Musk CEO, offering at least partial cover for his painfully stupid decisions.
Continue reading Elon Musk Might Quit Twitter Fun Times!
You can’t put the genie back in the bottle, no matter what you do. There were warnings about the harm mechanization can do to industries, but artists figured, oh, not us. We’re different. Our work encapsulates the soul of humanity, and therefore we can’t be replaced! Most artists – in all fields – were absolutely silent when mechanization and computerization devastated blacksmiths, glassblowers, woodworkers, and so many others whose styles and skills were plundered by industrialization for the profit of large corporations. They were also silent when AIs were being crafted for other fields about to face the chopping block of computerization. Truck drivers and cashiers weren’t artists; what they did wasn’t like art, despite the absolute centrality of those jobs to the continued existence of human civilization. No one eats without truck drivers handling cargo and cashiers selling it, at least not until they are replaced by machines.
Continue reading Artists Get What They Want, Good and Hard