Europol warns of potential for fraud and terrorism through ChatGPT

The EU police authority Europol has examined the popular chatbot ChatGPT for risk scenarios and is now warning of the potential for abuse.

In a series of workshops, Europol’s own Innovation Lab created an initial threat analysis for ChatGPT and has now published its findings. Several dangers are identified.

Europol investigates ChatGPT

A report identifies multiple use cases in which malicious actors can exploit artificial intelligence (AI) for criminal purposes. Specifically, Europol examined ChatGPT, which is based on the “Large Language Model” (LLM) GPT. GPT, developed by OpenAI, is trained on billions of data points to understand and generate natural language. ChatGPT can provide coherent, plausible-sounding answers to almost any question – even if those answers are incorrect or incomplete. With the right prompt, the chatbot can even write code in many common programming languages.

OpenAI is constantly improving its security measures to prevent criminals from abusing these capabilities. However, criminals keep finding new ways to get ChatGPT to produce such results anyway. Europol says: “While the capabilities of LLMs (Large Language Models) like ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals presents a bleak outlook.”

What danger does ChatGPT pose?

In their analysis of ChatGPT, the Europol experts identified three areas in which the potential for abuse of AI is of particular concern: fraud, disinformation, and cybercrime.

  • Fraud: Phishing scams are a common problem that could be made worse by ChatGPT. Although there are exceptions, the majority of phishing attempts using fake e-mails and websites can be identified quickly, especially by their linguistic inconsistencies. According to Europol’s analysis, however, ChatGPT can imitate the language style of specific groups, such as companies, making fake content appear far more authentic.
  • Disinformation: Disinformation covers propaganda and fake news. The rapid production of authentic-looking texts and speeches makes it even easier to spread disinformation on a large scale.
  • Cybercrime: In Europol’s analysis, cybercrime refers explicitly to classic hacker attacks using maliciously crafted code. According to the report, ChatGPT can write simple code that even criminals without prior knowledge can use as a basis for attacks. In the detailed report, Europol also explains how such actors can circumvent ChatGPT’s security measures: the AI can recognize when someone asks it to write malicious code, but only if the request is a single coherent command. If the command is split into individual steps, the safeguards can be circumvented.

The report also addresses another way ChatGPT can be misused for criminal purposes: by making information more accessible, it could “simplify terrorist activities, such as terrorism financing and anonymous file sharing.”
