ChatGPT detector catches AI fraud

A new tool, GPTZero, can determine whether a text was produced by artificial intelligence (AI). That makes it possible to catch students who let AI chatbots such as ChatGPT do their homework.

A web tool called GPTZero can accurately determine whether an essay has been created by the artificial intelligence chatbot ChatGPT. This can help expose students who use AI to commit fraud. However, the tool will only work if the company behind the popular chatbot keeps its underlying AI models public.

OpenAI, the company behind ChatGPT, is working on watermarking the texts its AI produces. Until that happens, users are free to put the chatbot to nefarious use. ChatGPT was made publicly available in December 2022 and has since been used by millions of people. There are numerous reports of pupils and students letting the chatbot do their homework.


Computer scientist Edward Tian of Princeton University in the US has therefore developed a tool, GPTZero, which tests whether a piece of text originates from ChatGPT. The tool uses ChatGPT’s architecture, which is public. “Our methods use the AI model itself to assess whether the output is a case of plagiarism,” says Tian.

When someone pastes a text into GPTZero, the tool runs it through an older version of the AI model behind ChatGPT. In this way it determines how likely it is that the text was produced by the AI. GPTZero also looks at how that probability varies across different parts of the text. Human writing alternates between passages that do and do not resemble AI text, whereas in AI-generated texts the probability is more constant, says Tian.
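The sketch below illustrates that general idea in Python. It is not GPTZero’s actual code: it assumes the publicly available GPT-2 model, loaded through the Hugging Face transformers library, as a stand-in for the older model behind ChatGPT, and the perplexity threshold is an illustrative guess rather than a calibrated value.

import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# A public GPT-2 model stands in here for the "older version of the AI model
# behind ChatGPT" described above; GPTZero's real model choice is not public.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the public model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def detect(text: str, ppl_threshold: float = 60.0) -> dict:
    """Score a text: low overall perplexity combined with little
    sentence-to-sentence variation points toward machine-generated text.
    The threshold is an illustrative guess, not a calibrated value."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    per_sentence = [perplexity(s) for s in sentences]
    mean_ppl = sum(per_sentence) / len(per_sentence)
    variance = sum((p - mean_ppl) ** 2 for p in per_sentence) / len(per_sentence)
    return {
        "mean_perplexity": mean_ppl,
        "perplexity_variance": variance,
        "looks_ai_generated": mean_ppl < ppl_threshold,
    }

Calling detect() on a pasted essay returns the average perplexity, its variance across sentences and a rough verdict; a low and flat perplexity profile is the pattern Tian describes for AI-generated text.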

The researchers tested the tool on articles that ChatGPT wrote based on news headlines from BBC News. GPTZero judged the texts correctly in about 98 percent of cases. In less than 1 percent of cases it labeled an article as AI text when it was in fact written by a human author.

Transparency

Detection tools such as GPTZero often only work temporarily, until the AI models learn to get around them. But Tian thinks his system will keep working for a long time, because it draws on the structure of the AI itself. “As long as OpenAI keeps its models public, the detection tool will be one step ahead,” says Tian.

A spokesperson for OpenAI underlines that the company is committed to transparency around the use of AI for writing texts. “We ask users to be upfront with their audience when using our creative tools such as DALL-E and ChatGPT. We don’t want ChatGPT to be used for deceptive purposes in schools or anywhere else. We are therefore developing ways to help everyone identify texts written by that system.”

Race for the truth

Tools such as GPTZero are not new, says computer scientist Yulan He of King’s College London. A similar tool, built by researchers at MIT, IBM and Harvard, was released in 2019. It looks at individual words that give away that a text was generated by artificial intelligence. The success of ChatGPT has made such tools a lot more relevant.
