Companies Pushing Artificial Intelligence Equate Its “Risk Of Extinction” To Nuclear War

Academics and executives at the large companies leading the deployment of artificial intelligence (AI) have warned of the existential threat that, in their view, it poses to humanity. In a statement of just 22 words, researchers and CEOs argue that controlling this emerging technology should be a priority, equating its importance with that of pandemics or nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the open letter, which echoes the hypothetical and much-criticized idea that, as in ‘Terminator’, the machines become conscious and kill us all. The statement does not mention climate change as a priority, nor does it suggest actions to minimize the presumed threat.

Among the more than 200 personalities who have signed this declaration are business leaders such as Sam Altman, CEO of OpenAI, the creator of ChatGPT, which is partly in the hands of Microsoft, and Demis Hassabis, CEO of Google DeepMind. There are also academics, such as the computer science professors Yoshua Bengio and Geoffrey Hinton, the latter of whom resigned from his engineering vice presidency at Google less than a month ago. Both were recognized with the Turing Award, the Nobel Prize of computer science, for their work on deep learning, the technology behind this wave of AI.

A contentious debate

The letter seeks to “open the discussion” on the “most severe risks” of a technology still under development, a complex debate that has already produced several controversies. Several of the signatories published another letter at the end of March asking to suspend the training of the most advanced AI systems for six months. Critical experts in the field, however, countered that these supposed future risks of AI remain science fiction, and that talking about them serves to obscure the real impact the technology is already having, both on labor and through its potential for disinformation and its consumption of water and electricity.

Other experts are also skeptical of this latest warning. “You have to be very clear that AI does not have an ontological entity capable of ending humanity; it is another matter if someone programs it to do so, but then the problem would be the person, not the technology,” explained Ulises Cortés, scientific coordinator of high-performance AI at the Barcelona Supercomputing Center, to EL PERIÓDICO. “Most of the signatories have gotten rich off of AI tools, selling the hype and taking advantage of other people’s data,” he adds.

For this and other researchers, equating AI with nuclear weapons is misleading. “Pandemics and nuclear war are two dangers grounded in theoretical and empirical evidence, while the risk of humanity’s extinction is blurry, completely uncertain, and based on a hypothetical superintelligence for which there is no evidence,” remarks Ariel Guersenzvaig, an ELISAVA professor and expert in design ethics and technology. “We have not asked to mitigate physics because it allowed the creation of the atomic bomb,” adds Cortés.

A diversion maneuver


Although most of the signatories are American, on the other side of the Atlantic there are also many voices critical of this kind of apocalyptic statement. “When the AI bros yell ‘Look, a monster!’ to distract everyone from their practices (data theft, energy waste, bias amplification, pollution of the information ecosystem), we should do like Scooby-Doo and remove their mask,” tweeted Emily Bender, a professor of computational linguistics at the University of Washington.

Over the last two weeks, Altman has appeared before the United States Senate and toured Europe, asking the authorities he met with, among them the Spanish prime minister Pedro Sánchez and the French president Emmanuel Macron, to create a body to oversee the security of AI projects. Like others before him, Guersenzvaig sees in these requests from the creator of ChatGPT a strategy to influence and soften future regulation of the tools they are deploying. “They are making us talk about the technology and not about who is using it and for what purpose,” he adds. Faced with the hypothetical extinction of the human race, all other dangers seem tiny.
