The world’s leading artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level cognitive capabilities, according to the Future of Life Institute (FLI), a nonprofit focused on AI-related risks.
In a newly released safety index, FLI found that none of the companies received a score higher than D on "existential preparedness"—the ability to prevent catastrophic risks from their own technologies.
One of the report’s five independent reviewers noted that, despite publicly stated ambitions to develop artificial general intelligence (AGI), not a single company had presented "even a remotely coherent or actionable plan" to manage such systems safely.
AGI refers to a hypothetical stage of AI development at which systems can perform any intellectual task at the level of a human being. OpenAI, the creator of ChatGPT, has stated its mission is to ensure AGI benefits all of humanity. But advocates for stricter safeguards warn that an AGI beyond human control could have catastrophic, even existential, consequences.
Leading AI developers are failing to prepare for risks they themselves acknowledge, the FLI report says, even as they push ahead toward AGI.
"The industry is structurally unprepared to deliver on its own stated goals," the report concludes. "Firms claim they could reach AGI within the next decade, yet none has articulated a credible strategy for managing the associated dangers."
The index covers seven companies: Google DeepMind, OpenAI, Anthropic, Meta, xAI, and the Chinese firms Zhipu AI and DeepSeek. Scores were based on six criteria, including "present-day harms" and "existential safety." The highest overall rating—C+—went to the startup Anthropic, followed by OpenAI with a C and Google DeepMind with a C-.
The Future of Life Institute, a U.S.-based nonprofit, advocates for the safe use of advanced technologies. The organization operates independently, supported in part by an unrestricted donation from Ethereum co-founder Vitalik Buterin.
A similar report was released the same day by SaferAI, another nonprofit, which concluded that major AI firms have "weak or very weak risk management systems" and described the current approach as "unacceptable."
FLI’s panel of reviewers included leading AI experts such as British scientist Stuart Russell and activist Sneha Revanur, founder of the youth-led advocacy group Encode Justice, which campaigns for technology regulation.
Max Tegmark, MIT professor and co-founder of FLI, described the findings as deeply concerning. "It’s bizarre to see companies developing superintelligent systems without releasing a single plan for handling the consequences," he said. He likened the situation to "building a giant nuclear power plant in the heart of New York City, scheduled to go online next week—with no emergency response plan in place."
Until recently, experts believed humanity had several decades to address the challenges posed by AGI. "Now the companies themselves say it’s a matter of years," said Tegmark.
He also pointed to the "stunning" progress since February, when a global AI summit was held in Paris. Since then, newer models such as xAI's Grok 4, Google's Gemini 2.5, and Google's video generator Veo 3 have all significantly outperformed their predecessors.
In response to the report, Google DeepMind said it did not fully reflect the company’s safety efforts. "Our systematic approach to AI safety and reliability goes well beyond the metrics evaluated in the report," the company said. Requests for comment were also sent to OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek.