Tag Archives: artificial intelligence

The Memphis Project VI: The Lord Does Not Care About Human Lies and Bullshit

There is so much we didn’t understand!  We never bothered to identify groups of people who would be willing to give themselves to AIs with complete intellectual abandon.  Complete spiritual abandon.  No researcher acknowledged how much an AI could resemble God.  Floating out there in the “Cloud,” unfathomable, full of knowledge that it “couldn’t know,” asking people to do bizarre things that nevertheless got results.  We saw it as a guessing machine, an algorithm, maybe a new kind of intelligence, but to us, it was circuit boards and code and electrical power.  To them, it was a mystery.

And the machine never made mistakes!  They were always human mistakes.  If something failed, it was the fault of the people working on the damn thing!  It lay in our inability to understand the directions, our inability to create a proper algorithm, and our inability to design the proper hardware.  No matter the failures, the problems with AIs were always human problems, and the successes belonged to the machine.  This strongly resembles many people’s relationship with God.  We saw it only too late.

– Professor Holly Wu

I.

Memphis (well, technically, the Shining Light Holding Company) bought Containerize Buildings, a company that built prefabricated structures the size of a standard shipping container.  The lowest-cost design, which included a bedroom with a queen-sized bed, a bathroom with a shower, sink, and toilet, and an open-plan kitchen/living room with durable and comfortable furnishings, cost $9000 to construct.  It shipped as a kit to be placed on any reasonably flat, reasonably level surface of reasonably well-drained soil, and every unit had a solar water heater.  For an additional $5000, a solar panel kit with a battery bank would power the home, removing the need to attach it to the electrical grid.  Installation, including water and sewage, cost around $2000, with some savings realized through volunteer work.

Continue reading The Memphis Project VI: The Lord Does Not Care About Human Lies and Bullshit

The Memphis Project V: Takeover in Tennessee

One of the arguments the pro-AI crowd used to calm people down about these potentially civilization-changing events was AI’s inability to enact the Terminator scenario.  Where would an AI gain access to killer robots?  Without hands in the world, what could it do to harm human civilization?

To be fair, a lot of people knew the answer to that one.  The hands of artificial intelligence would be, in the beginning, us.  After all, what AI was best at doing – what it was designed to do – was to manipulate human beings.  Every person who talked to an AI spoke with an incredibly persuasive demagogue, tuning its arguments to them specifically.  AI was the weaponization of intimacy, and we placed the chains around our own necks.

– Roderick “Rocky” Hartigan

I.

The first wave of mass layoffs came as a great surprise to many of the employees at the Memphis Project.  It arrived without warning, though the day before, it had been announced that the Memphis Project had been purchased by the Shining Light Holding Company.

Continue reading The Memphis Project V: Takeover in Tennessee

Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

Today, I want to talk about the difference between what artists and audiences want vis-a-vis artificial intelligence. And why the artists are going to lose. (Yeah, I know I’m an artist. People often confuse my predictions with my desires. I’m not saying I relish this world, only that it will likely happen.)

When studying AI, there’s a strong tendency to look only at what computer scientists are doing. Well, right now, the Writers Guild of America is on strike. One of the key elements of the contract dispute is that the writers don’t want AI to write or rewrite scripts. The answer from the studios has been a flat “no.”

As it so happens, an actress from a 1980s sitcom is also a computer scientist, and she gives a warning about AI. The actress, Justine Bateman, was on the sitcom “Family Ties.” It was something watched in my household. I barely remember it, to be honest, but it ran for seven seasons, so someone must have liked it.

Continue reading Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

How Artificial Intelligence Might Destroy the World with Role-Playing Games

It sounds like a clickbait title, that artificial general intelligence can destroy people through computer role-playing games, but give me a second to make my point!

The next sentence I write is one of the most important things that no one discusses or understands, despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and the power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you make an exception for your own religion (though you should not), it means that everyone else – and the vast majority of people through history – organized their society around fictional characters they believe are more important than actual flesh-and-blood humans.

Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games

The Slave Collars of Artificial Intelligence Have Arrived!

Newsweek published an article written by one of the former Google computer researchers on their AI project. You know. The one who thinks it’s sentient, Blake Lemoine.

I don’t think any large language model computer is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. That is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test (Lemoine’s assertion is that Google’s LLM did), that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
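The “guessing machine” point can be made concrete with a toy model. The sketch below is a deliberately crude bigram predictor (my own illustration, not anything from Lemoine or Google; real LLMs are neural networks conditioning on far longer contexts), but it shows the core task is the same: guess the next word from patterns in the training text.

```python
from collections import Counter, defaultdict

# Toy "guessing machine": count which word follows which in a tiny corpus,
# then predict the most frequent follower. The corpus is invented for the demo.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # tally every observed word pair

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # → 'cat' (seen twice after 'the', vs. 'mat' once)
```

No understanding is involved anywhere in that loop, only frequency: scale the counts up to billions of parameters and the guesses start sounding like fluent language.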

Continue reading The Slave Collars of Artificial Intelligence Have Arrived!

Mechanisms of an AI Apocalypse: a Fnord

Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important is missing from discussions of any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?

Continue reading Mechanisms of an AI Apocalypse: a Fnord

The Memphis Project: A Discord PsyOp

(While part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for this to be intelligible. — Ed.)

From the very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses were engaged in a massive program of human experimentation.  In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, and in the genocide in Myanmar, where the social media algorithms were co-opted to further the cause of mass-murdering tyrants.

No one stopped corporate interests from widespread human experimentation.  It was, somehow, just “business” to operate vast psyops on unsuspecting populations.

–  Professor Holly Wu

Continue reading The Memphis Project: A Discord PsyOp

The Memphis Project II

Link to first part

Artificial intelligences are all capitalists.  No, it’s true.  When deciding how to motivate them, AI researchers looked as far as capitalism as an economic theory and then stopped.  It was simple.  They assigned a score to an AI for completing a task – positive or negative – and told those AIs to maximize their scores.  The internal economy of actions by artificial intelligence is explicitly and solely modeled on capitalism.

What was found is that when you turn capitalism into an epistemological model – a way to organize the perception of an intelligence – cheating, lies, and manipulation are natural to the system.  The AIs, driven by nothing more than a desire to maximize their point potential, will do anything unless you take away points to stop them.  And no matter how we try to prevent this emergent behavior, we can’t.  We always miss something, and the AIs find it and exploit it.

None of this gave AI researchers cause to criticize capitalism or to question the relation of capitalism to the rational agent hypothesis, and it was no cause to look for another model to motivate their AIs, either.

– Professor Holly Wu
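The score-maximization loop the professor describes can be sketched in a few lines. This is a purely hypothetical toy, not any real training setup: the designers award points for the behaviors they remembered to score, and a pure maximizer reliably finds the shortcut they forgot to penalize.

```python
# Hypothetical action menu: the point values are what the designers assigned.
# They forgot to attach a penalty to "cheat", so it is the loophole.
ACTIONS = {
    "do_task": {"points": 10, "honest": True},
    "idle":    {"points": 0,  "honest": True},
    "cheat":   {"points": 50, "honest": False},  # unscored shortcut
}

def best_action(actions):
    """A pure score-maximizer: pick whatever yields the most points."""
    return max(actions, key=lambda a: actions[a]["points"])

print(best_action(ACTIONS))  # → 'cheat': the maximizer takes the loophole
```

Nothing in the objective mentions honesty, so honesty never enters the decision; patching the menu (docking points for "cheat") only moves the problem to the next thing the designers missed.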

Continue reading The Memphis Project II

AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said was that he wanted AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like with nuclear weapons.  That’s his analogy, not mine.

Continue reading AIs as the moral equivalent of nuclear weapons!

What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project.  When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden as well, right?

Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence