US and EU push for code of conduct for AI companies

The United States and the European Union should urge tech companies to follow a code of conduct on artificial intelligence (AI), said European Commissioner Margrethe Vestager.

Although the EU is working on legislation to limit the risks of the breakthrough technology, developments are moving so fast that companies such as OpenAI, Google and Microsoft should already regulate themselves to counter the harmful social consequences of AI.

“At the earliest, it will only take effect in two and a half to three years. That is of course much too late,” Vestager said, referring to the European AI legislation currently being drafted. The Dane, who is in charge of digitization in the EU, believes a draft version of the voluntary code of conduct for AI companies could be ready “within weeks”. The EU and the US are working on that first draft together, Vestager said.

Agreement

The two power blocs held their recurring talks on trade and technology issues in Sweden. Artificial intelligence was high on the agenda. US Secretary of State Antony Blinken said that governments feel a “gigantic urgency” to act on AI.

Recently, European Commissioner for Industry Thierry Breton said that the European Commission, the EU’s executive body, is working on an AI agreement with Alphabet, the parent company of Google. The agreement is intended to prevent the technology from going off the rails while the European rules are still being drafted.

Concerns

In recent months, tech companies have presented increasingly advanced AI platforms, driven by the introduction of the chatbot ChatGPT and OpenAI’s underlying language model GPT. From just a few instructions, the chat program can write complete texts that are barely distinguishable from human work. Microsoft is riding that success with billions in investments in OpenAI, while Google is rolling out counterparts of its own.

AI technology can facilitate and speed up a lot of work, but it also raises serious concerns. Chatbots could, for example, accelerate the spread of disinformation. There are also worries about “intelligent machines” that could think for themselves and eventually supplant humans.

WATCH: Jill Peeters has a weather report written by ChatGPT.
