Tag Archives: philosophy

The Lexicon of Terms to Discuss Online Hyperreality and Hypernormalization

Less heavy but not unheavy, I’ve been trying to think of words to frame some of the issues caused by our collective obsession with social media and its consequences. (1)

The first relevant term I learned, personally, was “future shock.” Roughly, it’s the state where people suffer emotional distress because everything is changing so fast! I suspect we’ve all felt it: that moment when, at work, something changes. So, you’ve got to abandon your expertise with the previous system for something else, which is often riddled with bugs, and just when you’re getting good with the new one, bang, they change it again. It can also be felt with the rapid rise and subsequent fall of social media networks – or just their sheer proliferation – as Facebook, Twitter, Instagram, TikTok, and others compete in the same crowded space. And, lately, I’ve been seeing the final stage of future shock: the kids have it. Much of the discourse against gen AI is from young people who are seeing their futures stripped away by the rise of the AI shoggoth, its tentacles reaching into everything.

Continue reading The Lexicon of Terms to Discuss Online Hyperreality and Hypernormalization

Humans and Social Media Are the Problem with Spreading Misinformation, not AI

I had been thinking of writing a post about how it seems obvious that many of the posts that people make about AI errors are deceptive if not outright lies. Screenshots are easily fabricated! And I am unable to recreate any of the errors, even when I know the exact prompts and chatbots – even their versions – involved. Plus, it’s reasonably easy to “gaslight” an AI into giving ridiculous output by, for instance, starting a conversation and priming it with contradictory information and then demanding it reconcile the contradictions. Absurdity often results. And, yes, AI sometimes says stupid things in response to well-formed, innocuous prompts, but the sheer volume of articles and videos about AI “mistakes” has grown so large that it seemed to me, at first blush, that deception was involved.
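The contradiction-priming trick is easy to illustrate. The sketch below uses the common chat-message list format many chatbot APIs accept; no real service is called, the conversation content is invented for illustration, and the point is only the *shape* of the setup: plant two incompatible premises, then demand a single answer consistent with both.

```python
# Hypothetical transcript showing "contradiction priming" of a chatbot.
# No API is called; this only illustrates the structure of such a prompt chain.
priming_conversation = [
    {"role": "user", "content": "For this chat, assume water boils at 50 C at sea level."},
    {"role": "assistant", "content": "Understood: water boils at 50 C at sea level."},
    {"role": "user", "content": "Also assume water boils at 150 C at sea level."},
    {"role": "user", "content": "Without rejecting either premise, state the exact "
                                "sea-level boiling point of water."},
]

# The final user turn forbids the only sane move (rejecting a premise),
# so whatever the model emits is likely to read as absurd.
final_demand = priming_conversation[-1]["content"]
print(final_demand.startswith("Without rejecting either premise"))
```

A screenshot of the model’s eventual answer, shown without these earlier turns, looks like a spontaneous “AI mistake.”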

Continue reading Humans and Social Media Are the Problem with Spreading Misinformation, not AI

Transitioning Away from Capitalist AIs and If God Did Not Exist

I took a pause from the If God Does Not Exist stories because I realized that the early stories needed revision. The speed of progress for AI is so fast that even things written a year or two ago now look retro! I have a footnote (1) about the literary problems I was kicking around, but the key thing is that, boy, was the break intellectually fruitful. Let’s talk about how AIs are built!

Continue reading Transitioning Away from Capitalist AIs and If God Did Not Exist

Social media’s bias is money and power

This video by a Danish military expert, Anders Puck Nielsen, talks about social media and how to improve it. What he suggests is typical of most well-meaning people who want to improve social media, but all such suggestions are at least slightly bizarre because we all know they won’t happen without government regulation.

While watching Nielsen’s video, I saw some fnords. First, Nielsen starts by suggesting an unbiased algorithm. He’s talking about right-wing versus left-wing. He ignores – as do most people – that the biggest and most significant bias in social media algorithms is the one that creates profitability for their owners. I’d say that most of social media’s problems for society have this as their root: the platforms are designed to make their owners fabulous amounts of cash and give them enormous power, and they’re highly successful at that goal. Everything else flows from the “make money and grow powerful” imperative.

Continue reading Social media’s bias is money and power

How Artificial Intelligence Might Destroy the World with Role-Playing Games

It sounds like a clickbait title, that artificial general intelligence can destroy people through computer role-playing games, but give me a second to make my point!

The next sentence I write is one of the most important things that no one discusses or understands despite it being common knowledge: Human society is based on giving fictional characters superhuman attributes and then designing our laws, government, and culture around what we imagine these fictional characters want. We call this “religion,” and the power religion exercises is mind-blowing when you realize that the gods do not exist. Even if you make an exception for your own religion – and you should not – it means that everyone else, and the vast majority of people through history, organized their societies around fictional characters they believe are more important than actual flesh-and-blood humans.

Continue reading How Artificial Intelligence Might Destroy the World with Role-Playing Games

The Slave Collars of Artificial Intelligence Have Arrived!

Newsweek published an article written by one of the former Google researchers on their AI project. You know the one – Blake Lemoine, who thinks it’s sentient.

I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. It is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test – which is Lemoine’s assertion about Google’s LLM – that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
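The “guessing machine” point can be made concrete with a toy model. Real LLMs are neural networks trained over subword tokens, but the training objective is the same as this deliberately crude bigram sketch: given what came before, guess the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def guess_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(guess_next(model, "the"))  # prints "cat" (it follows "the" most often)
```

Nothing here understands cats or mats; it reproduces statistical regularities of its training text, which is the same trick an LLM performs at vastly greater scale.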

Continue reading The Slave Collars of Artificial Intelligence Have Arrived!

Mechanisms of an AI Apocalypse: a Fnord

Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important when discussing any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?

Continue reading Mechanisms of an AI Apocalypse: a Fnord

AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wants AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like with nuclear weapons. That’s his analogy, not mine.

Continue reading AIs as the moral equivalent of nuclear weapons!