I’ve been doing research into AI for a project (which is part of the reason I’m so interested in AI art and language; it’s pretty much the only AI stuff I can actually get my hands on), and I’ve come to believe the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.
So, I’m about six or seven hours into this set of lectures – well, videos, I guess – from an AI safety researcher at the University of Nottingham, Robert Miles. First, I found that “AI safety research” means protecting humans from AI, not AI from humans. That’s… a totally different field? Maybe he gets into that stuff later on, but it feels important to note that AIs will also be attacked with malicious intent. Anyway: AI safety research is about protecting humans from AIs, which still strikes me as weird.
Second, and this is the critical part: like most researchers, they kinda ignore who funds their research and why. Who funds AI research? Well, here in the EU, the biggest funder of AI research is, uh, the EU, primarily in the form of Horizon Europe. Through 2027, it has a budget of roughly 100 billion euros. But what is it for? It’s a business initiative.
Businesses are not known for listening to ethics researchers. “What? Is AI risky? How about we build it and then worry about the mess.”
It sounds like a joke, but when we look at how industry – including European industry – treats the environment, it’s clear that industry doesn’t care much about messes as long as it’s turning a profit. And we already see this in practice. The massacres of the Rohingya in Myanmar leap to mind: the radicalizing effect of Facebook’s engagement algorithms was used to rile up the Buddhist majority there into committing atrocities. Mark Zuckerberg was called in front of Congress to explain the situation, but concrete action has been slow in coming and of dubious effectiveness because… well, Facebook lives or dies on engagement. And getting people riled up is the best way to create engagement.
And that’s a relatively mild AI, right? The point was never to hurt anyone but to maximize ad potential by keeping people looking at Facebook.
This is to say that the biggest problem in AI safety research is that the people paying for it don’t care. There are undoubtedly many technical hurdles in AI safety, and most of them center on being able to shut down and modify an AI quickly and reliably – which, in practice, means having a mechanical switch that cuts the power when WE want it cut. Don’t make it hard to shut these things off, guys. A simple mechanical power switch.
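To be fair to the researchers, the hurdle isn’t building the switch; it’s that a goal-directed system can be incentivised to route around it. Here’s a minimal, entirely hypothetical sketch (my own toy numbers, not anything from Miles’ videos) of why a purely software shutdown handler is a problem for a naive reward maximiser, and why I keep coming back to a physical switch the system can’t reason its way past:

```python
# Toy illustration (a sketch, not from Miles' lectures): a reward-maximising
# agent compares expected reward with the shutdown command honoured versus
# ignored, and simply prefers the branch where it keeps running.

EXPECTED_REWARD_PER_STEP = 1.0   # hypothetical reward earned per step while running
STEPS_IF_SHUT_DOWN = 3           # steps completed before a software shutdown lands
STEPS_IF_IGNORED = 1000          # steps completed if the agent disables the handler

def expected_reward(steps: int) -> float:
    return steps * EXPECTED_REWARD_PER_STEP

options = {
    "comply_with_shutdown": expected_reward(STEPS_IF_SHUT_DOWN),
    "disable_shutdown_handler": expected_reward(STEPS_IF_IGNORED),
}

# The naive optimiser picks whichever action scores higher -- which is exactly
# the problem: "don't fight the off switch" has to be engineered in, or the
# switch has to live in hardware the agent can't touch.
best = max(options, key=options.get)
print(best, options[best])  # -> disable_shutdown_handler 1000.0
```

It’s a cartoon, obviously, but it’s the shape of the argument: if the off switch only exists in software, it’s just another obstacle for the optimiser.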
The rest? Let’s be honest. The first artificial general intelligence will be deployed by either the military or big business. Duh. They won’t be collecting stamps. They won’t be curing cancer. They will be asked to solve problems like “how do we make sure we can win any war with a minimum loss of lives, heavily prioritizing our lives” and “how do we maximize the profit of our company?”
These are not ethical questions to ask. Even if the AIs can be controlled, their uses will be imperial. The first questions asked of an AGI will be, essentially, “How do we win everything?”
The biggest risks from AI are political, not technological, not logical. They can’t be fixed by carefully implemented code or cleverly built hardware. The problem is that the people funding AI are more interested in acquisition and victory than in justice, mercy, or love.