A safety report card ranks AI company efforts to protect humanity
Are artificial intelligence companies keeping humanity safe from AI's potential harms? Don't bet on it, a new report card says.
As AI plays an increasingly large role in the way humans interact with technology, the potential harms are becoming clearer: people using AI-powered chatbots for counseling and then dying by suicide, or using AI for cyberattacks. There are also future risks, such as AI being used to make weapons or overthrow governments.
Yet there are not enough incentives for AI firms to prioritize keeping humanity safe. That's reflected in an AI Safety Index published Wednesday by the Future of Life Institute, a Silicon Valley-based nonprofit that aims to steer AI in a safer direction and limit existential risks to humanity.
"They are the only industry in the U.S. making powerful technology that's completely unregulated, so that puts them in a race to the bottom against each other where they just don't have the incentives to prioritize safety," said the institute's president and MIT professor Max Tegmark in an interview.
The highest overall grade awarded was a C+, earned by two San Francisco AI companies: OpenAI, which makes ChatGPT, and Anthropic, known for its Claude chatbot. Google's AI division, Google DeepMind, received a C.
Ranking even lower were Meta, Facebook's Menlo Park-based parent company, and Elon Musk's Palo Alto-based xAI, both of which received a D. Chinese firms Z.ai and DeepSeek also earned a D. The lowest grade, a D-, went to Alibaba Cloud.
The companies' overall grades were based on 35 indicators in six categories, including existential safety, risk assessment and information sharing. The index collected evidence based on publicly available materials and responses from the companies through a survey. The scoring was done by eight artificial intelligence experts, a group that included academics and heads of AI-related organizations.
All the companies in the index ranked below average in the category of existential safety, which factors in internal monitoring and control interventions as well as existential safety strategy.
"While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control," according to the institute's AI Safety Index report, using the acronym for artificial general intelligence.
Both Google DeepMind and OpenAI said they are invested in safety efforts.
"Safety is core to how we build and deploy AI," OpenAI said in a statement. "We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts. We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities."
Google DeepMind in a statement said it takes "a rigorous, science-led approach to AI safety."
"Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful frontier AI models before they manifest," Google DeepMind said. "As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities."
The Future of Life Institute's report said that xAI and Meta "lack any commitments on monitoring and control despite having risk-management frameworks, and have not presented evidence that they invest more than minimally in safety research." Other companies like DeepSeek, Z.ai and Alibaba Cloud lack publicly available documents about existential safety strategy, the institute said.
Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment.
"Legacy Media Lies," xAI said in a response. An attorney representing Musk did not immediately return a request for additional comment.
Musk is also an advisor to the Future of Life Institute and has provided funding to the nonprofit in the past, but was not involved in the AI Safety Index, Tegmark said.
Tegmark said he's concerned that without enough regulation of the AI industry, AI could help terrorists make bioweapons, manipulate people more effectively than it does now, or in some cases even compromise the stability of governments.
"Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy this is to fix," Tegmark said. "We just have to have binding safety standards for the AI companies."
There have been efforts in government to establish more oversight of AI companies, but some bills have drawn pushback from tech lobbying groups, which argue that more regulation could slow innovation and prompt companies to move elsewhere.
But some legislation aims to better monitor safety standards at AI companies, including SB 53, a California law signed by Gov. Gavin Newsom in September. It requires businesses to share their safety and security protocols and report incidents like cyberattacks to the state. Tegmark called the new law a step in the right direction but said much more is needed.
Rob Enderle, principal analyst at advisory services firm Enderle Group, said he thought the AI Safety Index was an interesting way to approach the underlying problem of AI not being well-regulated in the U.S. But there are challenges.
"It's not clear to me that the U.S. and the current administration is capable of having well-thought-through regulations at the moment, which means the regulations could end up doing more harm than good," Enderle said. "It's also not clear that anybody has figured out how to put the teeth in the regulations to assure compliance."
©2025 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.