There’s a 5% chance of AI causing humans to go extinct, say scientists
January 04, 2024
AI researchers predict a slim chance of apocalyptic outcomes (Image: Stephen Taylor / Alamy Stock Photo)

Many researchers see the possible future development of superhuman AI as having a non-trivial chance of causing human extinction – but there is also widespread disagreement and uncertainty about such risks.
Those findings come from a survey of AI researchers who have recently published work at six of the top AI conferences – the largest such survey to date. The survey asked participants to share their thoughts on possible timelines for future AI technological milestones, as well as the good or bad societal consequences of those milestones. Almost 58 per cent of researchers said they considered that there is a 5 per cent chance of human extinction or other extremely bad AI-related outcomes.
“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.” But there is no need to panic just yet, says Émile Torres at Case Western Reserve University in Ohio.
Such AI expert surveys “don’t have a good track record” of forecasting future AI developments, they say. A 2012 study showed that, in the long run, AI expert predictions were no more accurate than non-expert public opinion. The new survey’s authors also pointed out that AI researchers are not experts in forecasting the future trajectory of AI.
Compared with answers from a 2022 version of the same survey, many AI researchers predicted that AI will hit certain milestones earlier than previously expected. This shift coincides with the November 2022 debut of ChatGPT and other chatbot services based on large language models.
The surveyed researchers predicted that within the next decade, AI systems will have a 50 per cent or higher chance of successfully tackling most of 39 sample tasks, including writing new songs indistinguishable from a Taylor Swift banger or coding an entire payment processing site from scratch. Other tasks, such as physically installing electrical wiring in a new home, are expected to take longer.
The possible development of AI that can outperform humans on every task was given 50 per cent odds of happening by 2047, whereas the possibility of all human jobs becoming fully automatable was given 50 per cent odds of occurring by 2116. These estimates are 13 years and 48 years earlier, respectively, than those given in last year’s survey.
But the heightened expectations regarding AI development may also fall flat, says Torres. “A lot of these breakthroughs are pretty unpredictable. And it’s entirely possible that the field of AI goes through another winter,” they say, referring to the lulls in funding and interest the field has gone through in past decades, known as AI winters. There are also more immediate worries that do not involve any superhuman AI.
Large majorities of AI researchers – 70 per cent or more – described AI-powered scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations and worsening economic inequality as being of either substantial or extreme concern. Torres also highlighted the dangers of AI contributing to disinformation around existential issues such as climate change, or worsening democratic governance.
“We already have the technology, here and now, that could seriously undermine [the US] democracy,” says Torres. “We’ll see what happens in the 2024 election.”