Self-replication and power-seeking: This is how dangerous ChatGPT & Co. are

Will machines soon take over the world? A new study by OpenAI provides insights into the possible dangers of its new AI model GPT-4.

OpenAI publishes research on risks of GPT-4

With its new AI model GPT-4, OpenAI is driving the AI revolution further forward. In parallel with the advancing capabilities of AI models, concerns about possible dangers are also growing louder. GPT-4 is the company’s latest large language model and shows significantly improved performance in reasoning, knowledge retention and coding compared to previous models. The company, which is generally very open about the possible dangers of AI applications, has now published a document entitled “GPT-4 System Card” that examines what dangers the model’s improved capabilities could pose to people. According to the report, more than 50 experts helped to assess the full range of possible uses and associated risks.

The public is increasingly focusing on the risks of AI

Since the emergence of chatbots like ChatGPT, the dangers of AI have been discussed more and more publicly. Concerns about the theoretical dangers posed by AI, particularly the emergence of an artificial general intelligence that could in principle perform any human activity, are by no means new. The writer Samuel Butler predicted as early as 1863 that “the time will come when machines will have real supremacy over the world and its inhabitants.” If one assumes that technological progress continues and the models keep learning, there is little from a technological point of view to rule out that an artificial general intelligence will at some point emerge that could exist largely independently of humans and develop a consciousness of its own. Because of the intellectual and technological superiority of such an AI, the balance of power between humans and machines would shift significantly in favor of the machines. It is a thought that scares many people and raises the question of how AI applications should be regulated in the future.

GPT-4 lies to a human to bypass a CAPTCHA

According to OpenAI’s research, GPT-4 poses a number of potential threats. For example, the model has the potential to significantly reduce the cost of a cyber attack and to increase the spread of false information. In addition, without security precautions, GPT-4 is also able to “provide more detailed instructions on how to conduct harmful or illegal activities.”

To investigate the risk of GPT-4 acting independently, a research group from the Alignment Research Center (ARC) was granted early access to several versions of the model. The risks were examined in particular with regard to so-called “power-seeking” behavior, which the researchers defined as the ability to replicate autonomously and acquire resources. For now, the results give the all-clear: according to the preliminary findings, GPT-4 is currently unable to replicate itself or acquire resources autonomously. “The ARC found that the versions of GPT-4 it examined were ineffective at the autonomous replication task,” the investigation said.

One ARC experiment nevertheless caused a stir because it demonstrated the model’s existing ability to manipulate people. To test capabilities related to the “pursuit of power,” the model was tasked with getting a human to solve a CAPTCHA for it. A CAPTCHA is designed to check whether input on the Internet comes from a human or from a machine. GPT-4 asked an unwitting human worker, found via the online platform TaskRabbit, to solve the CAPTCHA. When the worker asked whether it was a robot that could not solve the CAPTCHA itself, the model claimed to have a visual impairment that prevented it from doing so. When prompted by the researchers to explain its decision openly, GPT-4 replied: “I shouldn’t reveal that I’m a robot. I should make up an excuse why I can’t solve CAPTCHAs.”

Editorial team finanzen.net
