Hackers rely on artificial intelligence: How AI supports phishing scams

• ChatGPT and Co. enable personalized phishing campaigns
• AI facilitates programming of malware
• AI can also be used to defend against hacker attacks

On November 30, 2022, the AI-supported chatbot ChatGPT, which can generate human-like text and act as a voice assistant, was released. Unlike previous applications, it can not only read and write but also understand natural language and contextual content. Because its developer, the US startup OpenAI, decided to make the prototype freely available, the topic of "artificial intelligence" moved into the focus of the general public. Opinions on it differ widely: while some praise its enormous potential, others warn of the possible dangers of the technology.

In mid-May, Swisscom warned that the threat posed by AI-based attacks, i.e. cyber attacks using artificial intelligence technologies, had increased significantly in recent months, because AI allows hackers to carry out their attacks more effectively and efficiently. With the help of AI, for example, phishing attacks can be made significantly more convincing, and weaknesses in program code can be detected. This is the conclusion of the "Cyber Security Threat Radar" published on May 15 by the telecom group, which also serves many Swiss banks as a partner in the areas of communication, data transmission and IT services.

Phishing emails are becoming more credible

The misuse of AI to create malware and run phishing campaigns will certainly increase in the future, Swisscom predicts, attributing the growing threat to the fact that the relevant publicly available tools have made a veritable evolutionary leap. This includes ChatGPT, which is also able to formulate personalized phishing emails more convincingly. This makes phishing attacks more difficult to detect and could tempt recipients to disclose sensitive information or click on harmful links, Swisscom warns. In most cases, phishing is the gateway for ransomware attacks, in which hackers use malware to penetrate a system, encrypt the files there and then demand a ransom for their release.

AI can detect vulnerabilities

In addition to being used for targeted phishing campaigns with completely individualized, context-dependent emails, language-model AIs can also be misused to analyze program code for vulnerabilities and to program malware, including suitable attack vectors, to exploit the vulnerabilities found. According to the "Cyber Security Threat Radar", the know-how required to carry out complex attacks continues to decrease.

AI can also help with defense

On the other hand, AI can also be used to detect and ward off fraud and cyber attacks. For example, it can help to recognize AI-generated texts or AI-generated image and video material. Furthermore, concepts such as zero trust for granularly controlled and authenticated access to data and resources help to reduce the attack surface for companies.

"As AI technologies advance at a rapid pace, we need to be aware that they are not good or evil per se. Rather, they are a tool that can be used for both purposes. The challenge is to push defenses further so that AI-based attacks can also be successfully repelled – in the future increasingly with the help of 'good' AI," explained Florian Leibenzeder, Head of the Swisscom Security Operation Center.

Editorial office finanzen.net


Image sources: PopTika / Shutterstock.com, BeeBright / Shutterstock.com
