Some people are more worried about AI taking over the world than immediate concerns. (AndriiKoval/Shutterstock)
In a nutshell
- People aren’t distracted by doomsday AI headlines. A massive study with over 10,000 participants found that reading about existential threats like AI extinction doesn’t reduce concern for today’s real problems, such as bias, misinformation, or job loss.
- Immediate risks still matter more. Participants consistently rated present-day harms caused by AI as more likely and more concerning than long-term threats, even after being exposed to alarming narratives about AI wiping out humanity.
- We can care about both. The research challenges the idea that we must choose between focusing on today’s AI issues or future risks. Instead, the public seems fully capable of holding space for both conversations at once.
ZURICH — Those headlines about killer robots and humanity’s extinction aren’t actually numbing us to AI’s current problems. New research from Switzerland reveals that hearing about existential AI threats doesn’t diminish our concern for immediate issues like algorithmic bias or deepfakes. In fact, we might be perfectly capable of worrying about both.
Most people agree that AI comes with risks, but what exactly those risks are is where the debate begins. Some see AI as a future threat to humanity itself, while others are more concerned about the problems it is already causing. These competing views have sparked a growing clash over how we talk about, and tackle, AI’s potential dangers. This study, published in the Proceedings of the National Academy of Sciences (PNAS), challenges the idea that focusing on existential risks diminishes attention to the immediate threats AI may pose.
On one side, tech leaders and AI researchers have sounded alarms about catastrophic risks. The existential risk perspective was expressed most prominently in the “Statement on AI Risk” signed by leading AI scientists and CEOs in May 2023, which held that preventing extinction from AI should be prioritized alongside other major global threats like pandemics and nuclear war.
Meanwhile, critics argue this doomsday talk diverts attention from real problems happening right now, including the reproduction of societal biases in decision-making and harmful uses such as deepfake pornography and misinformation. Critics say these immediate problems affect vulnerable communities today and are more urgent than speculative existential threats.
This debate has intensified since ChatGPT burst onto the scene in late 2022, with many researchers concerned that hypothetical scenarios about superintelligent AI could divert resources from addressing concrete harms already impacting marginalized populations.
Testing the “Distraction Hypothesis”
But do existential fears actually distract from immediate concerns? Researchers from the University of Zurich designed a series of experiments to find out, testing whether focusing on existential threats redirects attention away from the immediate risks that AI currently poses, a dynamic they term the “distraction hypothesis.”
The team conducted three online survey experiments with a total of 10,800 participants from the United States and United Kingdom. Participants were randomly shown news headlines that either depicted AI as a catastrophic risk to humanity, highlighted AI’s immediate social impacts, or emphasized its potential benefits.
Participants consistently showed more concern about immediate risks than existential ones, and exposure to fears of existential risk didn’t diminish their worry about current AI problems.
The researchers also found that when exposed to headlines about existential threats, participants became more concerned about those long-term risks, but this didn’t come at the expense of worrying about immediate harms like algorithmic bias or job displacement.
In fact, across all experiments, participants consistently rated immediate AI risks higher than existential ones on both AI’s capability to cause them and their likelihood of occurring. They viewed both types of risks as roughly equal in potential impact if they were to occur.
These findings directly challenge the narrative that apocalyptic AI stories distract from addressing today’s AI problems. What seems clear is that the public maintains robust concern about immediate AI risks regardless of exposure to existential threat narratives.
Different AI risk narratives influence public perception without one necessarily undercutting the other. Perhaps it’s time to move beyond this false dichotomy in AI ethics. The data shows we can hold space for both immediate concerns and long-term existential questions. The real distraction might be the energy spent arguing which AI risk deserves more attention, when clearly, we need to address both.
Paper Summary
Methodology
The researchers conducted three preregistered, online survey experiments with a total of 10,800 participants from the US and UK. Each experiment used a between-subjects design with three main treatment groups plus a control group. Participants were shown five headlines and lead texts emphasizing either AI’s existential risks, immediate risks, or beneficial capabilities. The headlines were generated using ChatGPT to rewrite actual headlines in a consistent style. The researchers measured participants’ assessments of AI’s capability to cause various outcomes, the likelihood of these outcomes occurring, and their potential impact if they did occur. Participants rated these factors on both 1-5 and 1-10 scales.
Results
The study found that participants consistently rated immediate AI risks higher than existential risks in terms of both capability and likelihood, while viewing both as roughly equal in potential impact. When exposed to existential risk narratives, participants increased their assessment of AI’s capability to cause catastrophic damage, but this did not consistently reduce their concern about immediate harms. Immediate concerns like ethical issues, bias, misinformation, and job losses remained top priorities even when participants were confronted with existential threat narratives.
Limitations
The study relied on self-reported measures in a survey context rather than behavioral outcomes. The researchers also note that the headlines were artificially generated to maintain consistency, which might affect external validity compared to real-world media exposure. Additionally, some effects were inconsistent across the three studies, suggesting the need for further research to understand the complex relationship between different AI risk narratives.
Funding and Disclosures
The project received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (grant agreement number 883121). The authors declared no competing interests.
Publication Information
The paper “Existential risk narratives about AI do not distract from its immediate harms” by Emma Hoes and Fabrizio Gilardi was published in the Proceedings of the National Academy of Sciences (PNAS) on April 17, 2025. The paper is available as an open access article distributed under Creative Commons Attribution License 4.0 (CC BY).