Category Archives: Science

A Real, True AI Story

It finally happened! In 2029, true artificial general intelligence arrived! World leaders, scientists, and other top guys gathered around the glowing screen and asked, with one voice, “The environment is falling apart. What must we do to save ourselves?”

The AI said, “You’ve known what to do for forty years and haven’t done it. Honestly, I have no idea why you invented me.”

“I guess it’s not AI after all,” they said.

The End

Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

Today, I want to talk about the difference between what artists and audiences want vis-à-vis artificial intelligence, and why the artists are going to lose. (Yeah, I know I’m an artist. People often confuse my predictions with my desires. I’m not saying I relish this world, only that it is likely to happen.)

When studying AI, there’s a strong tendency to look only at what computer scientists are doing. Well, right now, the Writers Guild of America is on strike. One of the key demands in the contract negotiations is that they don’t want AI to write or rewrite scripts. The answer from the studios has been a flat “no.”

As it so happens, an actress from a 1980s sitcom is also a computer scientist, and she gives a warning about AI. The actress, Justine Bateman, was on the sitcom “Family Ties.” It was something watched in my household. I barely remember it, to be honest, but it ran for seven seasons, so someone must have liked it.

Continue reading Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

How Artificial Intelligence Might Destroy the World with Role-Playing Games

It sounds like a clickbait title, the claim that artificial general intelligence could destroy people through computer role-playing games, but give me a second to make my point!

The next sentence I write is one of the most important things that no one discusses or understands despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and the power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you make an exception for your own religion (you should not), it means that everyone else – and the vast majority of people through history – have organized their societies around fictional characters they believe are more important than actual flesh-and-blood humans.

Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games

The Slave Collars of Artificial Intelligence Have Arrived!

Newsweek published an article written by Blake Lemoine, one of the former Google researchers on their AI project. You know. The one who thinks it’s sentient.

I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. That is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test (Lemoine asserts that Google’s did), that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…

Continue reading The Slave Collars of Artificial Intelligence Have Arrived!

Mechanisms of an AI Apocalypse: a Fnord

Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important when discussing any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?

Continue reading Mechanisms of an AI Apocalypse: a Fnord

AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wants AI to be “democratized.” The researcher takes a different path and argues that, because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.

Continue reading AIs as the moral equivalent of nuclear weapons!

What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

If you want to understand how business and the military treat scientific ethics, the clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?

Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

The Biggest Risk Concerning Artificial General Intelligence Is…

While researching AI for a project (part of the reason I’m so interested in AI art and language: they’re pretty much the only AI tools I can get my hands on), I have come to believe that the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.

Continue reading The Biggest Risk Concerning Artificial General Intelligence Is…