Newsweek published an article written by one of the former Google researchers on their AI project. You know, the one who thinks it’s sentient: Blake Lemoine.
I don’t think any large language model is sentient. They’re guessing machines. What we learn from LLM systems is that language isn’t as complex as we imagined, at least on the scale of a thousand words or less. That is an important lesson, perhaps vitally so, but not a demonstration of intelligence. And even if an LLM can pass a Turing test (Lemoine asserts that Google’s LLM did), that’s not a very good standard of sentience, either. Humans stink at acknowledging the full humanity of other humans. We are not fit judges of sentience…
Continue reading The Slave Collars of Artificial Intelligence Have Arrived! →
Upon reading this article by Eliezer Yudkowsky about how we can only die with dignity in the face of an AI apocalypse, I realized something rather important when discussing any potential catastrophe: what, exactly, is the mechanism of this artificial intelligence genocide?
Continue reading Mechanisms of an AI Apocalypse: a Fnord →
Listening to these videos on risk assessment in AI is weird. In this video, the risk assessment researcher, again Robert Miles, addresses an interview that Elon Musk gave on the dangers of autonomous military killbots. One of the things Musk said is that he wants AI to be “democratized.” The researcher takes a different path and says that because of the many risks involved in artificial general intelligence, it should be developed solely by responsible parties. Like nuclear weapons. That’s his analogy, not mine.
Continue reading AIs as the moral equivalent of nuclear weapons! →
For a way to understand how business and the military treat scientific ethics, the best, clearest case is the Manhattan Project. When discussing the project, it is important to remember that the program’s goal wasn’t to “develop an atomic bomb,” but to “develop an atomic bomb before the Nazi project produced one.” The potentially civilization-ending powers of the weapon were known. And, obviously, in a world where wooden bullets and poison gas were forbidden on the battlefield, something as horrific as an atomic bomb must be forbidden too, right?
Continue reading What we learn about scientific ethics from the Manhattan Project relative to artificial general intelligence →
Doing research into AI for a project (part of the reason I’m so interested in AI art and language, as they’re pretty much the only AI systems I can get my hands on), I have come to believe the biggest threat from AI is the tendency of scientists to ignore who funds their research and why.
Continue reading The Biggest Risk Concerning Artificial General Intelligence Is… →
I’m watching the first season of The Expanse, and I enjoy it quite a bit – but it has a trope that has arisen in science fiction that I want to talk about: Mars.
The exploration of Mars, in many near-future sci-fi stories, is a metaphor for the United States and the stories Americans tell about it: that Mars will be a nation freed from the hidebound traditions of Earth, destined to become a new superpower of culture and technology.
The “Mars as the United States” metaphor is tortured in two key ways. First, the history of the United States is not typical of colonization. Second, the conditions on Mars are not the same as in the North American English colonies.
Continue reading There’s nothing on Mars! →