Listening to these videos on risk assessment in AI is weird. In this video, the risk-assessment researcher, again Robert Miles, addresses an interview Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said was that he wanted AI to be “democratized.” The researcher takes a different path: because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.
If you want to understand how business and the military treat scientific ethics, the clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program’s goal wasn’t to “develop an atomic bomb” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending power of the weapon was known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?
Doing research into AI for a project (which is part of why I’m so interested in AI art and language tools, since they’re pretty much the only AI I can get my hands on), I have come to believe the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.
I’m watching the first season of The Expanse, and I enjoy it quite a bit – but it has a trope that has arisen in science fiction that I want to talk about: Mars.
The exploration of Mars, in many near-future sci-fi stories, is a metaphor for the United States and the stories Americans tell about it: Mars as a new nation that will free itself from the hidebound traditions of Earth and become a new superpower of culture and technology.
The “Mars as the United States” metaphor is tortured in two key ways. First, the history of the United States is not typical of colonization. Second, the conditions on Mars are not the same as those in the North American English colonies.