Artificial intelligence can lead to the extinction of humanity: experts compare the risk of AI with a pandemic or nuclear war

Artificial intelligence could lead to the extinction of humanity, a group of experts, including the heads of OpenAI and Google DeepMind, warned in a statement.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a statement published on the website of the Center for AI Safety.

The statement was supported by Sam Altman, CEO of OpenAI (creator of ChatGPT), and Demis Hassabis, CEO of Google DeepMind.

Other leaders in the development of the new technology also signed, such as Dario Amodei of Anthropic and Dr. Geoffrey Hinton, who had already warned about the risks of a super-intelligent system.

But another group of experts believes that these doomsday warnings are exaggerated.

One of them is Professor Yann LeCun of New York University, who, along with Hinton and Yoshua Bengio, a professor of computer science at the University of Montreal, is considered one of the “godfathers of AI” for his pioneering work in the field.

The three jointly won the 2018 Turing Award, which recognizes outstanding contributions in computer science.

Among those who believe that fears of AI wiping out humanity are unrealistic, and a distraction from already pressing issues such as bias in existing systems, is Arvind Narayanan, a computer scientist at Princeton University.

“Current AI is not capable enough for these risks to materialize,” Narayanan told the BBC last March.

Narayanan added that science-fiction catastrophe scenarios are unrealistic and that the problem is that “attention has been diverted from the short-term harms of AI.”

Pause

Media coverage of the alleged “existential” AI threat has multiplied since March 2023, when an open letter signed by several experts, including Tesla boss Elon Musk, was released urging a halt to the development of the next generation of AI technology.

In that letter, the experts asked whether we should “develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us.”

The new statement from the Center for AI Safety, by contrast, is shorter and aims to “open the debate” by comparing superintelligence with the risk of nuclear war.

This point of view had already been raised in an OpenAI blog post, which suggested that superintelligence could be regulated in a similar way to nuclear energy.

“It is likely that in time we will need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts,” stated the company behind the revolutionary ChatGPT system.

Both Sam Altman and Google CEO Sundar Pichai have recently discussed the need to regulate AI with world leaders.

One of them was the British Prime Minister, Rishi Sunak, who, when asked about the risks of AI, began by highlighting its benefits for the economy and society.

“You have seen that recently it has helped paralyzed people walk and has discovered new antibiotics, but we have to make sure this is done in a way that is safe,” Sunak said.

“That’s why I met last week with the CEOs of the leading AI companies to discuss what limits we need to set, and what kind of regulation needs to be put in place to keep us safe.”

Sunak also said that this issue came up at the G7 summit of major industrialized nations held in Japan in mid-May, where an AI working group was set up.


