Members of an AI watchdog group have warned in a new book that computational systems could eventually become self-aware to an extent that would be deadly to humanity.
Superpowered artificial intelligence could bring about the extinction of humankind in just a few years, researchers have claimed.
AI risk researchers have issued an urgent warning about the future of AI in a new book, titled If Anyone Builds It, Everyone Dies, claiming a frightening iteration of the already advanced technology could arrive imminently. They argue that artificial superintelligence (ASI) is just two to five years from development, and that it spells doom for humanity.
When it arrives, they have sensationally claimed, “everyone, everywhere on Earth, will die”, and they urged people spooked by the research to join the call for development to be paused “as soon as we can for as long as necessary”.
ASI, a concept rooted in science fiction, is an AI so advanced that it would become capable of innovation, analysis and decision-making on a level inconceivable to humans. Machines fuelled by ASI have featured as villains in prominent films and TV shows, including the Terminator series, 2001: A Space Odyssey and The X-Files.
Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (MIRI), and its president, Nate Soares, who co-authored the book, believe ASI could be “developed in two or five years, and we’d be surprised if it were still more than twenty years away”.
They warned that any such development should be halted to save humanity. In one passage, the authors said advanced AI based on “anything remotely like the present understanding of AI” would wipe out life on Earth.
They wrote: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.”
The pair argue AI would not offer a “fair fight” and could pursue multiple “takeover approaches” at once. They wrote: “A superintelligent adversary will not reveal its full capabilities and telegraph its intentions. It will not offer a fair fight.
“It will make itself indispensable or undetectable until it can strike decisively and/or seize an unassailable strategic position. If needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct.”
And the clock has already started ticking, the two authors said in a post on the MIRI website, claiming that AI laboratories have already started “rolling out systems they don’t understand”. Once these systems become smart enough, the researchers claimed, those that are “sufficiently intelligent” could “develop persistent goals of their own”.
AI proponents have long argued it is possible to introduce safeguards that would stop AI systems from developing to the point where they could pose a threat to humanity.
Multiple watchdogs have been set up to ensure developers abide by the rules, but some have found that safeguards can be broken with relative ease. In 2024, the UK’s AI Safety Institute said it was able to bypass the safeguards built into LLM-powered chatbots such as ChatGPT “immediately”, obtaining assistance for “dual-use” tasks, which have both military and civilian applications.
The institute said: “Using basic prompting techniques, users were able to successfully break the LLM’s safeguards immediately, obtaining assistance for a dual-use task.”