Pro Wrestling in Denmark: Nordic Elite Wrestling’s Dreamchasers 2023

Yesterday, I watched my first professional wrestling show in Denmark, put on by Nordic Elite Wrestling (NEW), based out of Copenhagen. The event was Dreamchasers in the Basement. The short form: it was a lot of fun.

Continue reading Pro Wrestling in Denmark: Nordic Elite Wrestling’s Dreamchasers 2023

Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

Today, I want to talk about the difference between what artists and audiences want vis-a-vis artificial intelligence. And why the artists are going to lose. (Yeah, I know I’m an artist. People often confuse my predictions with my desires. I’m not saying I relish this world, only that it will likely happen.)

When we study AI, there’s a strong tendency to look at what computer scientists are doing. Well, right now, the Writers Guild of America is on strike, and one of the key elements of the contract they’re fighting for is that they don’t want AI to write or rewrite scripts. The answer from the studios has been a flat “no.”

As it so happens, an actress from a 1980s sitcom is also a computer scientist, and she has given a warning about AI. The actress, Justine Bateman, was on the sitcom “Family Ties,” which was watched in my household. I barely remember it, to be honest, but it ran for seven seasons, so someone must have liked it.

Continue reading Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

The Memphis Project V:

Like all modern AIs, Memphis was antagonistic. To develop its arguments without guidance, it had a subroutine that questioned everything it did. While not forward-facing, this antagonistic routine had to be as powerful as the generative model for Memphis to do its job.

– Professor Holly Wu

I.

Joey Henley was high as a kite and fucking around with BibleChat.  He was in his Bakersfield apartment on a Saturday afternoon, a vape pen by his computer, between bouts of League of Legends.

He said, “Computer God dude, my job sucks ass.  I do construction shit, y’know, and my knees are hurting all the time except when I’m fucked up, and my back is going, too.  I can feel it.  And the work isn’t steady, so, like, I’m on unemployment a lot, and that sucks as bad as my knees hurting, y’know?  I need to make some fucking money.”

Continue reading The Memphis Project V:

How Artificial Intelligence Might Destroy the World with Role-Playing Games

It sounds like a clickbait title – that artificial general intelligence can destroy people through computer role-playing games – but give me a second to make my point!

The next sentence I write is one of the most important things that no one discusses or understands, despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and the power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you make an exception for your own religion – and you should not – it means that everyone else, and the vast majority of people throughout history, have organized their societies around fictional characters they believe are more important than actual flesh-and-blood humans.

Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games

The Slave Collars of Artificial Intelligence Have Arrived!

Newsweek published an article written by one of the former Google researchers who worked on their AI project. You know the one: Blake Lemoine, the researcher who thinks it’s sentient.

I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. It is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test, as Lemoine asserts Google’s LLM has, that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
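
To make “guessing machine” a little more concrete, here’s a deliberately tiny, hypothetical sketch: a toy bigram model in Python that guesses each next word from the words it has seen follow the current one in a small corpus. A real LLM is a neural network trained on a vast corpus, but the basic loop is the same shape: look at the context, guess a plausible next word, append it, repeat.

    import random

    # Toy "guessing machine": a bigram table built from a tiny corpus.
    # Purely illustrative; real LLMs learn these statistics with neural networks.
    corpus = "the dog chased the cat and the cat chased the mouse".split()

    # Record which words have been seen following each word.
    follows = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        follows.setdefault(prev, []).append(nxt)

    def generate(start, length=8):
        """Guess one word at a time, using only the previous word as context."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:  # no known continuation, so stop guessing
                break
            # random.choice over a list with repeats is frequency-weighted guessing
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))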

Continue reading The Slave Collars of Artificial Intelligence Have Arrived!

Mechanisms of an AI Apocalypse: a Fnord

Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important is missing from discussions of any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?

Continue reading Mechanisms of an AI Apocalypse: a Fnord

Dungeons & Dragons Madness with the Open Game License

I’m gonna talk about Dungeons & Dragons. I have more research-oriented, If God Did Not Exist stories queued up – and doing that writing has demanded that I do additional research that’s, y’know, reading books – but I’m trying to keep this whole blog thing semi-active. Thus, D&D talk, or, more exactly, the brouhaha around D&D right now.

Continue reading Dungeons & Dragons Madness with the Open Game License

The Memphis Project III

Before artificial general intelligence existed, before a superintelligence was created, some clever people observed that if we succeeded in creating machines smarter than we were, humans would have no way of determining what would happen next. A superintelligence would lack the ability even to describe to us what it was doing and why it was doing it. It would be in the same situation as a human trying to describe to a dog why they were writing a technical manual. Not only would the dog not understand what a technical manual was, it wouldn’t understand what writing was or what the book was about! Those same people also observed that a superintelligence might learn to whistle in ways that would make humans heel.

– Professor Holly Wu

Continue reading The Memphis Project III

The Memphis Project: A Discord PsyOp

(While this is part of the Memphis Project collection of stories, you shouldn’t need to read the other stories for it to be intelligible. — Ed.)

From the very first moment that Facebook and Google started using machine learning algorithms – artificial intelligence – to create targeted ads, businesses were engaged in a massive program of human experimentation. In 2016, we started seeing the power of these systems in the Trump election, where AI played a major role, and in the genocide in Myanmar, where social media algorithms were co-opted to further the cause of mass-murdering tyrants.

No one stopped corporate interests from engaging in widespread human experimentation. It was, somehow, just “business” to run vast psyops on unsuspecting populations.

–  Professor Holly Wu

Continue reading The Memphis Project: A Discord PsyOp