The New York Times uses crooked methods, says an artificial intelligence company

According to OpenAI, which develops artificial intelligence models, the accusations leveled at it by The New York Times are unfounded. OpenAI says the newspaper used questionable methods to get ChatGPT to behave as claimed.

The New York Times has sued the artificial intelligence company OpenAI. All Over Press

  • OpenAI has responded to The New York Times’ claims of copyright infringement.
  • The company, which develops artificial intelligence models, denies any wrongdoing and says the problem is the result of a bug.
  • OpenAI accuses The New York Times of playing a crooked game.

In December, we reported on the dispute between The New York Times newspaper (NYT) and OpenAI, which develops artificial intelligence models.

The traditional newspaper sued OpenAI, known for its ChatGPT language model, and the company’s largest investor, Microsoft, for copyright infringement.

NYT claims in its lawsuit that OpenAI trained its ChatGPT language model on millions of the newspaper’s articles without permission. ChatGPT has answered some questions by reproducing NYT paywalled articles verbatim.

According to NYT’s lawsuit, Microsoft’s search engine Bing, which uses ChatGPT in some of its features, has done the same.

A Finnish discussion forum gets a mention

OpenAI has published a response to the newspaper’s accusations on its website.

– Although we disagree with the accusations made by The New York Times, we see this as an opportunity to clarify our business, our purpose, and how we build our technology.

OpenAI emphasizes that it supports journalism and cooperates with news organizations.

According to OpenAI, using publicly available content to train artificial intelligence falls within the scope of fair use as defined in US copyright law.

Despite this, OpenAI says it has offered news organizations the opportunity to opt their content out of the material used to train its models. According to OpenAI, NYT exercised this right in the fall of 2023.

According to OpenAI, the texts reproduced word for word by ChatGPT are years old, and the same content has since been quoted elsewhere on the internet.

As one example, it cites the Finnish-language Punk in Finland discussion forum, which shared The New York Times’ paywalled content in 2020.

It’s about a bug

According to OpenAI, ChatGPT’s verbatim use of The New York Times’ content when answering questions stems from a bug in the model’s training process. The problem has been known for some time, but it has not yet been solved.

– The bug usually appears when certain content occurs more than once in the training data, the company says.

Prohibited methods?

According to OpenAI, the company had previously discussed with the newspaper the issue of ChatGPT answering some questions with verbatim excerpts from the newspaper’s copyrighted material.

– They repeatedly refused to give us examples (of the questions they asked ChatGPT), despite our willingness to investigate and fix the issue, says OpenAI.

OpenAI claims in its post that in order to get ChatGPT to reproduce the texts of NYT journalists, the bot had to be prompted with highly leading questions that already included excerpts from NYT articles.

– Even then, ChatGPT does not normally behave in the way the newspaper describes, which suggests that they either instructed ChatGPT to repeat their content word for word or cherry-picked their examples from numerous attempts.

OpenAI concludes its post by stating that The New York Times must have deliberately misused ChatGPT to get it to reproduce the training data verbatim. According to OpenAI, such use is prohibited.

– In our opinion, The New York Times’ lawsuit is without merit. Even so, we hope for a constructive partnership with them and respect their long history.

Sources: The New York Times, OpenAI
