Elon Musk and hundreds of other entrepreneurs and researchers are calling for a pause in AI development

“Powerful AI systems should only be developed when we are sure that their impact is positive and their risks manageable.” This is what the non-profit organization Future of Life Institute writes in an open letter published on March 22nd, which was signed by more than 26,000 developers, entrepreneurs and researchers just under a month later. The supporters of the demands expressed in the letter include Tesla boss Elon Musk, Turing Award winner Yoshua Bengio and Apple co-founder Steve Wozniak. World-renowned writer and historian Yuval Noah Harari has also signed.

The demand: a six-month pause on training highly capable AIs

In the letter, the experts raise the question of morality and how we want to develop as a society: “Should we develop non-human minds that could one day outnumber us, be smarter than us, make us superfluous and then replace us? Should we risk losing control of our society?”

While their opinions may differ on these questions, all signatories agree on one point: before it comes to that, society should collectively decide on a path and write a set of rules for the development and use of AIs, preferably yesterday. “We call on all AI labs to pause the training of AI systems that are more powerful than GPT-4 for at least six months with immediate effect,” the open letter says. This is not a call for an absolute halt to development, but rather for companies, researchers and developers to step back from the dangerous race currently taking place in the field of artificial intelligence. According to the undersigned, the entire industry should use the six-month break to focus jointly on the safety and reliability of the new technology.

Experts: AIs could manipulate the information channels

The reason for the demands is that the Future of Life Institute and the undersigned currently see “profound risks to society and humanity” in particularly intelligent AIs. It is not unreasonable to expect, for example, that AIs could flood information channels with propaganda and untruths in the near future. Such risks, they argue, must be contained before the training of existing and new powerful AIs continues.

In addition to the joint development of a set of rules for dealing with artificial intelligence, the undersigned demand that all developers make their existing and future work verifiable on public platforms. They also call on governments to step in and impose a moratorium if the pause is not adhered to and the industry does not soon agree on a set of rules that applies to everyone.

Editorial office finanzen.net


