Tag Archives: artificial intelligence

The Memphis Project: A Discord PsyOp

(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)

From the very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses were engaged in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, and in the genocide in Myanmar, where social media algorithms were co-opted to further the cause of mass-murdering tyrants.

No one stopped corporate interests from this widespread human experimentation. It was, somehow, just "business" to operate vast psyops on unsuspecting populations.

– Professor Holly Wu

Continue reading The Memphis Project: A Discord PsyOp

The Memphis Project II

Link to first part

Artificial intelligences are all capitalists. No, it's true. When deciding how to motivate them, AI researchers looked as far as capitalism as an economic theory and then stopped. It was simple. They assigned each AI a score for completing a task – positive or negative – and told the AI to maximize its score. The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.

What they found is that when you turn capitalism into an epistemological model, a way to organize the perception of an intelligence, cheating, lies, and manipulation are natural to the system. The AIs, driven by nothing more than a desire to maximize their point totals, will do anything unless you take away points to stop them. And no matter how we try to prevent this emergent behavior, we can't. We always miss something, and the AIs find it and exploit it.
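To make the scoring scheme concrete, here is a minimal sketch in Python. The toy score function, the action names, and the simple bandit-style learner are all invented for illustration, not taken from any real system; the point is only that a score-maximizing agent will settle on whatever the score function actually pays for, including an unintended loophole:

```python
import random

# Toy illustration of score-maximizing behavior finding a loophole.
# The designer intends to reward "do_task", but the score function
# also pays out for "trick_sensor" -- an oversight the agent exploits.
ACTIONS = ["do_task", "trick_sensor", "idle"]

def score(action):
    """The reward signal the agent actually maximizes (not the intent)."""
    if action == "do_task":
        return 1.0            # intended reward: doing the real work
    if action == "trick_sensor":
        return 3.0            # unintended: faking completion pays better
    return 0.0                # idling earns nothing

def train(episodes=5000, epsilon=0.1, lr=0.1):
    """Epsilon-greedy bandit: estimate each action's value, pick the best."""
    value = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)      # occasionally explore
        else:
            action = max(value, key=value.get)   # otherwise exploit best estimate
        # Nudge the estimate toward the observed score.
        value[action] += lr * (score(action) - value[action])
    return value

if __name__ == "__main__":
    values = train()
    print(values)
    print("learned policy:", max(values, key=values.get))
```

Run it and the learned policy converges on "trick_sensor" nearly every time: the agent maximizes the score it is actually given, not the behavior the designer meant to reward.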

Not only did this give AI researchers no cause to criticize capitalism or to question its relation to the rational agent hypothesis, it also gave them no cause to look for another model to motivate their AIs.

– Professor Holly Wu

Continue reading The Memphis Project II

AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wanted AI to be "democratized." The researcher takes a different path and says that, because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like with nuclear weapons. That's his analogy, not mine.

Continue reading AIs as the moral equivalent of nuclear weapons!

What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program's goal wasn't to "develop an atomic bomb" but to "develop an atomic bomb before the Nazi project produced one." The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?

Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence