I had been thinking of writing a post about how it seems obvious that many of the posts people make about AI errors are deceptive, if not outright lies. Screenshots are easily fabricated! And I am unable to recreate any of the errors, even when I know the exact prompts and chatbots – even their versions – involved. Plus, it’s reasonably easy to “gaslight” an AI into giving ridiculous output by, for instance, starting a conversation, priming it with contradictory information, and then demanding it reconcile the contradictions. Absurdity often results. And, yes, AI sometimes says stupid things in response to well-formed, innocuous prompts, but articles and videos about AI “mistakes” have become so commonplace that it seemed to me, at first blush, that deception was involved.
My Journey Through the Hall of AI on YouTube
Upon reflection, however, the case of AI, at least on YouTube, seems similar to that of gaming videos. When I got sick, I used to watch humorous gaming content – mostly funny videos about whichever garbage-fire triple-A studio had just mucked things up – because short, funny videos about nothing important appeal to me when I’m ill. But for months afterward, my YouTube feed would try to radicalize me. Rather than giving me more funny videos about whatever hot-mess game and out-of-touch PR I had been watching, I started seeing videos telling me that the problem with gaming is women and trans people. I’d have to spend weeks blocking creators with sexist and, more recently, anti-trans videos. Because I had watched some funny videos taking the piss out of big, arrogant game companies’ mistakes, YouTube was sure I was a sexist pigdog!
With AI, something similar happens. There’s a lot of content out there that’s smart, even when it’s alarmist. The possibility of AI ending the world is a fear shared by many of the top people in the field, so regardless of the thoroughness or quality of research in those videos, at least they’re talking about something the field is honestly worried about. There are also numerous legitimate educational videos on AI subjects.
For my If God Did Not Exist stories, I listened to a number of those educational videos to get a conversational grounding in the basics, or as an introduction to complex subjects, so I could make sense of more technical books and papers. Now that the majority of my research is done, what YouTube promotes to me is a lot of content whose upshot is “gen AI sucks.” These aren’t the research-based, if inflammatory, videos about serious issues in AI, nor are they educational videos about generative AI and its lawsuits, ethical issues, or capabilities. They are, in short, hit pieces.
So – and this isn’t to fault creators broadly – a giant number of people I’d been following on YouTube (which is the only social media platform where I’m active at all right now) have done videos on gen AI that are just embarrassing. Ryan George, Mike Burns, Taylor Lorenz, Emma Thorn, Rebecca Watson, and many others are on the AI-hate bandwagon. The videos aren’t measured criticism. As a recent example, I had to bail out of a Mike Burns stream about how elements of fundamentalist Christianity have embraced AI to create religiously themed media, because both Burns and the chat devolved into simply repeating how horrible gen AI is. So, instead of talking about what I found to be a genuinely interesting article – particularly because IGDNE is about the intersection of religion and AI – everyone just piled on AI, calling it slop, saying it should be banned, etc., when the discussion was, theoretically, about how fundie Christian communities are embracing gen AI. And after a Ryan George video about AI “lying,” in which I found him to be, at best, credulous about hallucinations I couldn’t reproduce, it seemed to me that there was intent. That someone out there was consciously fabricating screenshots because they hate AI so much. And perhaps they are. But what was happening instead, I believe, was the sort of toxic algorithmic intensification that I’d suffered with gaming videos.
Revelation: Social Media Amplification and Radicalization is a Far More Serious Problem Than Artificial Intelligence Hallucinations
AIs don’t lie. They make mistakes, and everyone should be careful of, really, all Internet sources, but AIs don’t lie. They can be ordered to lie, but the big commercial labs all train their models to be honest and helpful. When an AI fucks up, it isn’t part of a plan, at least not the plan of the AI company. It’s a mistake, a bias in the training data (which might reflect the values of the model’s creators but never any intent of the AI itself), a contradiction in the conversation or dataset, or any one of several technical issues. They don’t lie, though.
Social media, on the other hand, is a fucking mess. An MIT study found that false news stories are 70% more likely to be retweeted than true stories, and they reach 1,500 people six times faster than the truth. Additionally, false news cascades run ten to twenty times deeper than factual ones. Top false news stories typically reach between a thousand and a hundred thousand people, whereas true news stories rarely break a thousand. That’s from fucking MIT.
And this MIT research is brutal. It just keeps going on and on and on. The study found that humans, not bots, are the primary spreaders of false information. Sorry, it just isn’t a Russian psy-op or an unintended consequence of bots – if the Internet is heading toward death, it’s not the bots doing it; it’s the people. Additionally, according to the same study, social media platforms systematically amplify the more engaging false information over true stories.
This means that the system is designed to promote feel-bad, erroneous content, and that creators are therefore encouraged to game the algos by generating feel-bad, erroneous content, even when it is based on deceptive information. (Corollary: no one is going to repost this essay, because it isn’t rage-bait nonsense.)
Did I mention how it goes on and on and on? It does. Researchers from Princeton did a study, and it’s just as bad as the MIT one. They analyzed over 250,000 decisions made by more than 11,000 participants and found some interesting cognitive patterns. By “interesting,” I mean “terrifying.” They found that ideological congruency means people dramatically prefer sources they believe to be “true,” while “motivated reflection” leads people with greater analytical skills to be more susceptible to misinformation that confirms their biases. And, of course, the familiarity effect makes repeated statements – true or false – seem more credible. As with the MIT study, when the bots were subtracted from the data, nothing changed.
And lastly, to keep talking about science for just a moment longer, the top science journal in the world, Nature, published a study from the University of Groningen which found that “ideologically separated networks” double misinformation exposure. It jumps from 35% to 70%. When you’re in your echo chamber, most of what you see is misinformation. It’s mostly bullshit!
You put this all together, and it explains a lot. If you’re in an ideologically separated network, say, you really hate AI and don’t want to hear anything else, the amount of misinformation you get about it DOUBLES to be the vast majority of what you see on the subject. You’re, overall, more likely to repeat the misinformation, which then reaches between ten and twenty times more people. The part that particularly scours my testes is that people with high analytical skills are more likely to believe misinformation – translation: smart people are smart enough to talk themselves into believing bullshit. Fuck me! So, then, they can cogently reframe the false information, making it seem more palatable and broadening their reach! And before you know it, because algorithmic amplification favors bullshit, my YouTube and news feeds are choked with nonsense stories and videos about AI slop instead of the smart and well-researched media I want.
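If you want to see how those figures stack, here’s a minimal back-of-the-envelope sketch. The 35%/70% exposure rates and the reach numbers are the figures from the studies cited above; the daily feed size and the rest of the arithmetic are purely illustrative assumptions of mine, not anything from those papers.

```python
# Back-of-the-envelope sketch of how the cited figures compound.
# The 0.35 / 0.70 exposure rates and the reach figures come from the studies
# discussed above; FEED_ITEMS_PER_DAY and the rest are made-up illustration.

FEED_ITEMS_PER_DAY = 100  # hypothetical: how many items your feed shows you daily


def misinformation_seen(items_per_day: int, exposure_rate: float) -> float:
    """Items per day that are misinformation at a given exposure rate."""
    return items_per_day * exposure_rate


outside_echo_chamber = misinformation_seen(FEED_ITEMS_PER_DAY, 0.35)  # mixed network
inside_echo_chamber = misinformation_seen(FEED_ITEMS_PER_DAY, 0.70)   # separated network

# MIT figures: a top false story reaches 1,000-100,000 people,
# while a true story rarely breaks 1,000.
TRUE_STORY_REACH = 1_000
FALSE_STORY_REACH_LOW, FALSE_STORY_REACH_HIGH = 1_000, 100_000

print(f"Misinformation seen daily outside an echo chamber: ~{outside_echo_chamber:.0f} items")
print(f"Misinformation seen daily inside an echo chamber:  ~{inside_echo_chamber:.0f} items")
print(
    "Reach multiplier for amplifying a false story instead of a true one: "
    f"{FALSE_STORY_REACH_LOW // TRUE_STORY_REACH}x to "
    f"{FALSE_STORY_REACH_HIGH // TRUE_STORY_REACH}x"
)
```

Swap in whatever feed size you like; the point doesn’t change. The sorting and the amplification do the multiplying, not any individual liar.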
From Artificial Intelligence to Social Media
First, I find it highly ironic that people decrying AI slop acknowledge that they’re part of social media’s algorithmic amplification of false narratives while ignoring the well-documented ways misinformation spreads through ideologically separated spaces. AI slop is the problem, they shout! Meanwhile, they ignore their own role in spreading non-AI slop through the well-established mechanisms of the social media outrage machine of which they are a part.
Second, although I haven’t seen this studied, I find that the anti-gen-AI crowd places extra faith in human cognition. In short, they’re reinforcing the rational-man hypothesis, which scientific research has conclusively demonstrated to be nonsense. And that faith in their own rationality makes them easier to manipulate!
Analysis: while artificial intelligence can make mistakes, it doesn’t intentionally mislead anyone. The mechanisms of social media are vastly different and far, far worse: ideological sorting, humans choosing to amplify stories that conform to their narrowly sorted ideologies regardless of truth, and algorithmic amplification of false news combine into a feedback loop that ends with false news reaching ten to twenty times more people than the less interesting true content. AI companies and researchers are working their asses off to improve the utility and accuracy of AI – because they want to sell it to people and industries that require accuracy, like lawyers, engineers, and medical personnel – while social media companies are working hard to sort people into narrow, isolated networks, leveraging human willingness to amplify false information, and then using that engagement to further amplify false narratives, because that’s what keeps humans glued to their sites.
AI companies are not the problem. Social media companies are the problem. This leads me to conclude: social media delenda est. It is a far, far greater threat to human cognitive integrity, happiness, and truth than “AI slop,” and the obsessive focus on AI slop is itself an example of the harm social media does.