Category Archives: Political

Never Admit You’re Wrong: Yoni Appelbaum’s “How Progressives Froze the American Dream”

I.

There’s an article in The Atlantic, “How Progressives Froze the American Dream,” by Yoni Appelbaum. There’s much to criticize about the article on factual grounds (such as the claim that physical mobility is uniquely American – tell that to, say, medieval Mongols or ancient Greeks), but I want to focus on Appelbaum’s critique of progressivism. To Appelbaum, somehow, the problem is that those darn progressives value equality! Not the generations during which fundamentalists have gutted American education, particularly in poor states. Nope. Not THAT. Not the deindustrialization that gutted the middle class across the Rust Belt. Not that, either. Not greedy capitalist land developers or the lack of political will to build affordable housing in urban areas. Nope. Not them.

Continue reading Never Admit You’re Wrong: Yoni Appelbaum’s “How Progressives Froze the American Dream”

Social media’s bias is money and power

This video by a Danish military expert, Anders Puck Nielsen, talks about social media and how to improve it. His suggestions are typical of what well-meaning people propose for improving social media, and all of them feel at least slightly bizarre, because we all know none of it will happen without government regulation.

While watching Nielsen’s video, I saw some fnords. First, Nielsen starts by suggesting an unbiased algorithm, by which he means one that favors neither the right wing nor the left wing. He ignores – as do most people – that the biggest and most significant bias in social media algorithms is the one that creates profitability for their owners. Most of social media’s problems for society have this as their root: the platforms are designed to make their owners fabulous amounts of cash and enormous power, they are highly successful at that goal, and everything else flows from the “make money and grow powerful” imperative.

Continue reading Social media’s bias is money and power

Why Trump’s Tariffs are Stupid: the Function of the Reserve Currency in the US

Trump’s tariffs are a bad idea for the United States, and the reasons are a mix of complex and boring, but I’m gonna try to brighten it up! It also illustrates why the US economy is in big trouble, if not today then tomorrow, because even most people in business don’t understand this crap. They just benefit from it while thinking they’re superheroes, or whatever Elon Musk tells himself while on ketamine.

Continue reading Why Trump’s Tariffs are Stupid: the Function of the Reserve Currency in the US

Calling a Spade, a Spade: Neil Gaiman is a Rapist

One of the reasons George Carlin is, in my estimation, one of the greatest comics of all time is that I keep going back to his work. This time it’s his bit about being suspicious when people keep adding syllables to existing terms to diminish their impact – how “shell shock,” a powerful phrase, eventually became “post-traumatic stress disorder.” “Shell shock” is highly evocative: it’s direct, the alliteration is powerful, and it brings to mind the horrors of war.

Continue reading Calling a Spade, a Spade: Neil Gaiman is a Rapist

Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

Today, I want to talk about the difference between what artists and audiences want vis-a-vis artificial intelligence. And why the artists are going to lose. (Yeah, I know I’m an artist. People often confuse my predictions with my desires. I’m not saying I relish this world, only that it will likely happen.)

When studying AI, there’s a strong tendency to look only at what computer scientists are doing. Well, right now, the Writers Guild of America is on strike, and one of the key demands in its contract negotiations is that AI not be used to write or rewrite scripts. The answer from the studios has been a flat “no.”

As it happens, an actress from a 1980s sitcom is also a computer scientist, and she has given a warning about AI. The actress, Justine Bateman, was on the sitcom “Family Ties,” which was watched in my household. I barely remember it, to be honest, but it ran for seven seasons, so someone must have liked it.

Continue reading Why the WGA is doomed: the difference between how artists see AI art and how most fans will see AI art

The Slave Collars of Artificial Intelligence Have Arrived!

Newsweek published an article written by Blake Lemoine, the former Google researcher on their AI project – you know, the one who thinks it’s sentient.

I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. That is an important lesson, perhaps a vital one, but not a demonstration of intelligence. And even if an LLM can pass a Turing test – which is Lemoine’s assertion about Google’s LLM – that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
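To make the “guessing machine” point concrete, here’s a minimal sketch of what an LLM actually does: given a prompt, it assigns a probability to every possible next token and we pick from its best guesses. (This assumes the Hugging Face transformers library and the small GPT-2 model, neither of which the post names; it’s an illustration, not anyone’s specific system.)

```python
# Minimal sketch of an LLM as a "guessing machine": the model outputs a
# probability for every possible next token after a prompt.
# Assumes the Hugging Face `transformers` library and GPT-2 (my choice, not the post's).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The robot looked at me and said"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)

# Probabilities over the vocabulary for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)       # the model's five best guesses

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

Everything an LLM produces is built by repeating that guess-one-token step, which is why “guessing machine” is a fair description regardless of how fluent the output looks.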

Continue reading The Slave Collars of Artificial Intelligence Have Arrived!

AIs as the moral equivalent of nuclear weapons!

Listening to these videos on risk assessment in AI is weird. In this one, the risk-assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wanted AI to be “democratized.” Miles takes a different path and argues that, because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.

Continue reading AIs as the moral equivalent of nuclear weapons!

What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence

If you want to understand how business and the military treat scientific ethics, the best and clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program’s goal wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?

Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence