Category Archives: If God Did Not Exist

If God Did Not Exist: The Memphis Project

One of the old questions people asked AI researchers was, “Why not just program in the Three Laws of Robotics?” — referring to Isaac Asimov’s science-fiction stories.  For many years, all of us in the field of artificial intelligence said, “Oh, haha, you can’t program that into a computer.  Read the stories!  They don’t even work in the stories!”

It wasn’t until later, with the hindsight of experience, that I understood that was the point.  Asimov wasn’t saying that the Three Laws were a panacea that would control artificial intelligence, but the exact opposite: that AI would be put into situations where any set of rules, no matter how clearly stated or well-intentioned, would conflict with itself or with the environment.  The society of the Three Laws wasn’t a utopia; it was a cautionary tale.

– Professor Holly Wu


AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this one, the risk-assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said was that he wanted AI to be “democratized.” The researcher takes a different path, arguing that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons.  That’s his analogy, not mine.


What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project.  When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?


The Biggest Risk Concerning Artificial General Intelligence Is…

While doing research into AI for a project (part of the reason I’m so interested in AI art and language, since they are pretty much the only AI tools I can get my hands on), I have come to believe that the biggest threat from AI is the tendency of scientists to ignore who funds their research, and why.
