Artificially intelligent chatbot ChatGPT is tricking you – New Scientist

ChatGPT, an artificial intelligence chatbot from the company OpenAI, has caused quite a stir with its eerily human texts. Can legislation do anything about shady uses of the technology? “This is going to be a game of cat and mouse.”

‘The DNA of the Amsterdam canals is a revelation that will turn the world upside down for scientists! After years of research, scientists have unraveled the complete genome of the Amsterdam canals, and what they have discovered is an absolute sensation.’ Well, that’s what you get when you ask ChatGPT to write a ‘breathless popular science article’ about a typical Parool topic, namely ‘the DNA of the Amsterdam canals’.

The artificially intelligent (AI) text generation model GPT-3 has recently caused quite a stir with its eerily human texts. GPT-3 is a language model based on deep learning, in which billions of pages of text have been boiled down to the settings of a computer system’s artificial brain cells. Now it churns out apparently sensible texts on command, via the chatbot ChatGPT.


On command

You can give ChatGPT instructions, and it keeps track of what the text should be about, what the storyline is and in what style the final product should be written. You can even send the result back for corrections. From film script to avant-garde poem to popular science piece, ChatGPT takes it all in its stride.

However, there is no guarantee whatsoever that the result makes sense, let alone that it is true (canals have no DNA). And that’s a problem. Companies like OpenAI see mouth-watering applications in sales, marketing and software production, or in writing articles, fiction and scripts. But some applications are shadier: plagiarism, scams, theft, propaganda and fake news.

Think not only of schoolchildren and students who outsource their assignments (‘Write a school assignment about Amsterdam canals’), but also of scammers who extract money from vulnerable people by sending alarming payment reminders, now free of language errors and precisely tailored to the victim.

Lying

This ‘automation of persuasion’ closely resembles the work of the chatbot of the American company DoNotPay, for example. It automatically handles matters such as contesting parking fines, but it has also haggled down the cost of an internet connection in a chat (although it had to lie about how bad the connection was). Handy for the second-hand car salesman. But companies can play the same trick: in the future, customer service will brush you off eloquently and fully automatically.

Text generators in particular are a godsend for the producers of propaganda and fake news. Russian troll factories and other fake news producers can lay off staff and flood social media with inflammatory, tendentious and untruthful messages faster and more cheaply.

Watermark

These concerns are not completely new: GPT-3 is a further development of earlier text generators that also caused dismay. Image generators like DALL-E had already caused problems with deepfakes: photorealistic images of events that never happened.

Scott Aaronson, a computer scientist at OpenAI, recognizes the potential problems. “We want it to be much more difficult to pretend that the output of an AI system is that of a human,” he said during a science talk about a “watermark” that should make texts from AI systems such as GPT-3 detectable.

This works as follows: when building its texts, GPT-3 can often choose from several words. By choosing those words according to a certain pattern, the text is given a kind of signal that can be detected with software: a stamp of inauthenticity.
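A rough sketch of the idea, as a toy program rather than OpenAI’s actual scheme: a generator that holds a secret key slightly prefers certain word choices, and a detector with the same key measures whether a text shows that preference more often than chance would allow. The key, the scoring function and the word lists below are all invented for the illustration.

```python
# Toy illustration of a text watermark: not OpenAI's real method.
# Assumptions: the generator can pick among several equally plausible
# next words, and generator and detector share a secret key (invented here).
import hashlib

SECRET_KEY = "not-the-real-key"  # hypothetical shared secret


def keyed_score(previous_word: str, candidate: str) -> float:
    """Pseudorandom score in [0, 1) derived from the key and the local context."""
    data = f"{SECRET_KEY}|{previous_word}|{candidate}".encode()
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def pick_word(previous_word: str, candidates: list[str]) -> str:
    """Among equally plausible candidates, pick the one the key 'prefers'."""
    return max(candidates, key=lambda w: keyed_score(previous_word, w))


def watermark_strength(text: str) -> float:
    """Average keyed score over consecutive word pairs.
    Ordinary text hovers around 0.5; watermarked text scores noticeably higher."""
    words = text.split()
    if len(words) < 2:
        return 0.5
    scores = [keyed_score(a, b) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores)
```

The longer the text, the more word choices the detector can count, which is why such a signal only becomes reliable for longer passages.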

Of course, such a thing only works with longer pieces of text, and it can be circumvented, for example by asking another AI to rephrase the text in different words. In addition, the maker of the software has to cooperate.

Another option is to train an AI to recognize texts written by fellow AIs. Researchers at the University of Washington are working on an AI that can not only write fake news but also detect it. Grover, as the program is called, is particularly good at detecting its own fake news (92 percent).
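At heart this is an ordinary text classifier, trained on examples labelled ‘written by a human’ or ‘written by a machine’. A minimal sketch of that idea, using a simple statistical model from the scikit-learn library and a handful of invented example sentences (Grover itself is a far larger neural network):

```python
# Toy detector: a classifier trained on texts labelled human (0) or AI (1).
# The training sentences are invented; a real detector needs far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The council approved the budget after a long debate.",          # human-written
    "Scientists have unraveled the complete genome of the canals.",  # machine-written
]
labels = [0, 1]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Estimated probability that a new text was machine-written, per the toy model.
print(detector.predict_proba(["An absolute sensation for scientists worldwide."])[0][1])
```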

“We need technical tools, similar to those that already exist for detecting deepfakes and manipulated photos,” said Sandra Wachter, a technology and regulation researcher at the Oxford Internet Institute, in an interview with the US technology magazine Wired.

Legislation

The last resort is legislation. The European Union is preparing a bill, the AI Act, which should set limits on the use of AI systems. For example, a company would always have to make clear that you are not talking to a human being. The US, UK and Canada are also working on AI legislation.

But legislation can be evaded and tends to lag behind technology. ChatGPT is an example of this. Wachter predicts: “This will be a game of cat and mouse.”
