On January 10, a change was made in complete silence that the whole world should hear about

ChatGPT’s terms of service used to prohibit its use for military purposes. The mention was quietly removed in early January.

ChatGPT’s terms of service prohibited its use for military purposes – now the mention has been removed. The likely background is the cooperation between OpenAI and the Pentagon. Adobe Stock

In January, OpenAI quietly removed the mention that ChatGPT may not be used for military purposes from its terms of use. The ban appeared in the terms until January 10.

As The Intercept previously reported, the terms directly stated that OpenAI’s products should not be used for “activity that has a high risk of physical harm”, including “weapons development” and “military and warfare”.

Since then, the mentions of military use and warfare have disappeared from the text. However, the updated terms still prohibit using the service “to harm yourself or others” and list “developing or using weapons” as an example.

The news agency Bloomberg now reports that OpenAI is working on software projects for the United States Department of Defense, related among other things to cybersecurity. The company announced this on Tuesday.

At the World Economic Forum, OpenAI also said it is developing tools for the US government that could help reduce suicides among war veterans. Veterans die by suicide significantly more often than the general population due to, among other things, traumatic stress, depression and feelings of guilt.

According to The Intercept, the change in the terms of use has raised concerns among AI safety advocates. The new terms seem to emphasize legal compliance rather than overall safety.

Heidy Khlaaf of the cybersecurity company Trail of Bits has previously co-authored a research article with OpenAI researchers that highlighted the dangers of large language models in military use.

Khlaaf told The Intercept that the implications of OpenAI’s involvement in weapons development and military operations are significant for AI safety.

According to her, the known inaccuracy of language models and their tendency to hallucinate and exhibit bias can only lead to imprecise and biased operations, which increases civilian casualties and damage.
