• Former Google CEO warns against premature limits on AI development
• Consequences of mass AI adoption considered unforeseeable by many
• US and EU: Growing calls for early regulation
Former Google CEO Eric Schmidt is now also weighing in on the debate over the regulation of artificial intelligence. Since governments lack expertise in AI, Schmidt said on NBC's "Meet the Press", they should leave questions of adequate regulation of artificial intelligence to the big tech companies. His argument lands in a debate in which calls for regulation and state oversight mechanisms are growing louder, as many consider the risks of a mass spread of AI systems to be unforeseeable.
Demands for regulation
One of the best-known advocates of stricter regulation of AI applications is Tesla boss Elon Musk. Musk, who was initially involved in founding OpenAI but later left the startup, signed an open letter together with Apple co-founder Steve Wozniak back in March calling for a six-month pause in the development of AI systems. Justifying the request, the letter's authors wrote: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
OpenAI boss Sam Altman has also come out in favor of strict regulation of AI-based technology such as his own company's ChatGPT. Speaking before the US Senate, Altman said: "As this technology advances, we understand that people are concerned about how it could change the way we live." He emphasized: "We are too." At the Senate hearing, Altman also called for the creation of an official body – at the global or US level – that would verify compliance with safety standards for powerful AI systems and issue licenses accordingly.
Restriction through premature regulation
Eric Schmidt, on the other hand, argues that premature government regulation could hinder the further development of AI systems. Drawing "reasonable boundaries" is simply not possible at the moment, he said in an interview with NBC, because nobody in the industry knows where they lie. Instead, companies should agree among themselves not to let the development of artificial intelligence degenerate into a "race to the bottom".
"The key question from my point of view is how we put a stop to the worst behaviors and how we reach an international agreement on what those are," Schmidt summarized. In his role as chairman of the National Security Commission on Artificial Intelligence, he had already recommended promoting the development of AI technologies in 2021.
Regulatory approaches in the USA and the EU
At the beginning of May, US President Joe Biden and Vice President Kamala Harris invited the heads of the leading tech companies in the field to the White House. The guest list included Sundar Pichai, head of Google parent company Alphabet, Microsoft CEO Satya Nadella, and OpenAI's Sam Altman – according to media reports, however, Meta boss Mark Zuckerberg was not invited. Data protection was at the center of the discussion.
In the search for early rules on dealing with AI, the US government is planning, as one of its next steps, to have AI companies effectively open their software up to attack at the Defcon hacker conference in August so that it can be subjected to tough testing, as the Süddeutsche Zeitung reports.
The German federal government is also calling for a "clear legal framework" for the use of AI systems, as numerous media outlets report. At the EU level, a draft law to protect fundamental rights with regard to the use of artificial intelligence is currently in the works.
finanzen.net editorial team