Mikko Hyppönen warns of a danger already threatening us this year

A Finnish expert describes the five uses of artificial intelligence that cause headaches for information security professionals.

In an interview with The Next Web, Mikko Hyppönen listed his top five information security concerns related to artificial intelligence.

According to Hyppönen, who works as a research director at Withsecure, artificial intelligence will change everything: it will bring a bigger upheaval to our lives than the internet did. He believes the change will bring positive effects, but at the same time he is worried about the new cyber threats it enables.

Deep fakes

According to a report by Onfido, which develops identity verification services, fraud attempts using deepfake technologies grew by as much as 3,000 percent in 2023. In October, for example, a fake video went viral on Tiktok in which internet celebrity MrBeast allegedly offered iPhones for a couple of dollars. Crude fake videos of Ukrainian President Volodymyr Zelenskyi were seen at the beginning of the war in Ukraine.

According to Hyppönen, credible-looking deepfake videos have not yet been common in fraud attempts. He says he has seen only three of them so far, but he suspects their number may start to grow rapidly in the near future.

He urges people to prepare now by agreeing on a secret safety password with important contacts. If someone posing as a family member or colleague on a video call seems suspicious, you can ask the other person for the password.


Withsecure’s research director Mikko Hyppönen says that credible deepfake videos have not yet been seen in scams. However, he suspects their number may start to grow. Kimmo Haapala

Deep scams

Through automation, the scale of a fraud can swell to enormous proportions. Here "deep" refers less to manipulated content than to the scope of the scam.

Romance scammers like the Tinder Swindler have managed to con the women they met out of millions. He, however, had to write messages to each victim individually. With the help of large language models, Hyppönen says, such a scammer could collect thousands of victims instead of a few. In addition to writing and translating messages, artificial intelligence applications would also help create images for the scammer's fake persona.

The same tricks would also work for the author of an Airbnb scam, who could create fake rental listings on the service.

Malware written by language models

Services like ChatGPT are already being used to create and rewrite malicious code so that viruses are harder to identify. Hyppönen says his team has already found malware that used a language model to rewrite its code every time it spread.

OpenAI has blocked the generation of malicious code in its service. Despite the company's name, its model is not open source; it is for the company's own use only.

– If you could download the entire large language model, you could run it locally or on your own server. Then others could no longer stop you. This is the advantage of closed-source generative artificial intelligence systems, Hyppönen says.


Hyppönen’s team has already seen how language models like ChatGPT can be used to spread malware. Kimmo Haapala

Searching for zero-day vulnerabilities

– It’s great to be able to use an AI assistant to find zero-day vulnerabilities in your code so you can fix them. And it’s scary when someone else uses artificial intelligence to find zero-day vulnerabilities in order to exploit them, Hyppönen says.

A student has already tested this in a thesis commissioned by Withsecure. He managed to automate the escalation of user privileges in a Windows 11 environment by scanning for and exploiting vulnerabilities. The company withheld the thesis from publication because, according to Hyppönen, publishing the results would not have been responsible.

Automated malware

Hyppönen ranks fully automated malware as the most dangerous information security threat of 2024.

Information security companies have been automating their own operations for years, and criminals may be developing similar systems of their own. Soon, information security may become a game between good and bad artificial intelligence.

General-purpose, broad artificial intelligence is coming

As a bonus, Hyppönen also mentions the development of human-like artificial intelligence. The artificial intelligences in use today are, by definition, narrow: technologies designed for a specific purpose, which are not capable of everything human intelligence can do.

– I believe that we will become the second most intelligent beings on this planet in my lifetime. I don't think it will happen in 2024, but I do think it will happen during my lifetime, Hyppönen says.

With this kind of technology, both the benefits and the risks are enormous. Therefore, according to Hyppönen, it should be developed so that general-purpose artificial intelligence understands humanity and its goals are aligned with humanity's long-term goals.
