AI executives warn that the technology's threat to humanity is comparable to pandemics and nuclear war

The threat the rapidly developing technology poses to humanity is comparable to that posed by nuclear war and pandemics, according to a group of scientists and chief executives from Google DeepMind and OpenAI.

The Center for AI Safety, a non-profit organization based in San Francisco, said that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The warning came in a one-sentence statement signed by more than 350 AI executives and researchers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic.

Geoffrey Hinton and Yoshua Bengio, who have been described as “godfathers of AI” and shared a Turing Award for their work on neural networks, also signed. Hinton quit his job at Google at the beginning of this month so that he could speak openly about the dangers of AI.

The statement comes amid calls for regulation of the industry, after a series of AI launches from Big Tech companies raised awareness of the technology's potential flaws, including the spread of misinformation and the perpetuation of social biases.

EU legislators are pushing forward the bloc's Artificial Intelligence Act, and the US is also exploring regulation.

OpenAI's ChatGPT, which is backed by Microsoft and was launched in November, is widely regarded as having led the wave of artificial intelligence adoption. Altman testified before the US Congress for the first time this month, calling for a licensing regime. In March, Elon Musk and more than 1,000 other researchers and tech executives called for a six-month pause in the development of advanced AI to halt what they described as an “arms race”.

Some researchers whose work was cited in that letter criticized its approach, while others disagreed with its recommendation to pause development of the technology.

The Center for AI Safety told the New York Times that the brevity of its one-sentence statement was intended to avoid such disagreements.

Executive director Dan Hendrycks said: “We did not want to push a large menu of potential interventions. When that happens, the message is diluted.”

Mustafa Suleyman, a co-founder of DeepMind who now runs Inflection AI, and Eric Horvitz, Microsoft's chief scientific officer, also signed the statement.