Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said was that he wanted AI to be “democratized.” The researcher takes a different path and argues that, because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.
You’d think a risk assessor would be better at the job, though, right? Because I don’t think anyone calculating the risks (financial, environmental, economic, whatever) would conclude that nuclear weapons should be in anyone’s hands. Their mere presence dramatically increases the odds of a catastrophic outcome, not to mention all the money and effort wasted on weapons whose sole admitted purpose is deterrence. And, like, everyone who studies deterrence knows it should have started in 1944. If the US had scrapped the Manhattan Project, everyone would have absolutely, positively gotten together to totally outlaw the development and ownership of atomic and nuclear weapons.
The world would be safer.
So, if you’re saying (you’re actually saying) that artificial general intelligence is like nuclear fucking weapons, what you should be saying is, “So, everyone, stop doing this. There are no responsible parties when it comes to AGI. The risk is too high.”
This is what people mean when they talk about the ivory tower. You’ve got this guy talking about the risks of AI and likening them to nuclear weapons in the abstract, as if the current nuclear order were something to write home about as a victory. Even without their use, nuclear weapons place some countries above, well, consequences for their actions. So, right now, the EU and the US will not invade Russia. It. Will. Not. Happen. For that matter, no matter how well the Ukrainians do, they won’t invade, either! It is understood that there are conditions under which Russia will use nuclear weapons, and invasion is one of those scenarios.
Ditto North Korea. The country is now off-limits for direct military action, no matter how justified, because that action might make Seoul and Tokyo vanish. Not to mention the non-zero chance that nuclear weapons would be used against North Korea’s own population if its people decided they wanted new leadership.
And since nuclear weapons are 1950s technology, the odds of them staying only with the countries that currently have them are almost zero. Other places will get them. Iran will probably be next, which will complicate Middle Eastern politics, to say the least. Other countries will inevitably follow suit, because the view is that possessing nuclear weapons makes your country immune to invasion. Really, it just raises the potential costs of invasion beyond the range of the acceptable.
Is… is this the model we want for AGI? Where the full power of AI is owned by a small number of rich nations? Does anyone imagine that AGI will be used by these nations in a fair and equitable manner, rather than to gain an economic and military advantage over their rivals and enemies? Where unstable regimes seek to build secret AIs in underground bunkers? Is that the world being proposed? Because I think it is.
I suspect his retort would be along the lines of, “That’s just an example.” And my retort would be, “What other example do you have of a technology that should NOT be democratized? It’s all going to be stuff we’d be better off if NO ONE had, right? Mostly weapons, but maybe also environment-destroying technologies like coal mining or whatever. So, lay it on me: what does it LOOK LIKE if AI isn’t democratized? Give me an actual, coherent, believable narrative, with examples drawn from history and the news. And what happens if AI is held by a small number of presumably responsible parties?”
I suspect I would never get answers to those questions. Which is, I suppose, part of the reason I write stories. To explore such questions.