Google, Meta and Microsoft promise to mark content created with artificial intelligence

The seven big technology companies leading the development of artificial intelligence (AI) promised this Friday to take measures to guarantee its safety.

Convened by the White House, Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI (creator of ChatGPT) have accepted a series of requests from the US government intended to protect users. The pledges include increasing investment in more robust cybersecurity teams, creating a watermark that identifies AI-generated content (whether text, images or video), and promoting research into the biases and discrimination these systems may exhibit.

The seven companies – some of them tech giants, others smaller start-ups – have also committed to sharing data about their systems with government and academia, and to allowing independent experts to vet their products before releasing them to the general public.

The commitment is not binding, so it may end up as a dead letter. The companies have voluntarily signed agreements that could be broken: no obligation to carry out the promises has been established, nor any consequence should they fail to comply. Even so, government sources told The Washington Post that they believe it will serve to "raise the standards of security and trust in AI".

Timid regulation in the US

The administration of President Joe Biden is working on an executive order that would address some of the risks posed by AI, although few details of the text are known. The agreement signed today with the main AI developers thus serves to strengthen their ties with the government. In May, the White House expanded public funding for the promotion of this technology.

Meanwhile, the US Senate is working on a regulatory bill that is still at a very early stage. For now, senators are learning what AI is and what risks it may pose in areas such as national security. The use of so-called generative AI is being limited in other, parallel laws.