OpenAI, the maker of the revolutionary chatbot ChatGPT, is setting up a new internal department to prevent artificial intelligence (AI) from becoming hostile. “We need new scientific and technical breakthroughs to solve this problem,” the American developer said in a statement on its website announcing the plan.
ChatGPT uses AI to write logical-sounding texts. In a short time, the system caused quite a stir by producing texts that are difficult to distinguish from human work. That made the service popular but also controversial: ChatGPT was temporarily banned in Italy, and France is investigating it.
‘Extinction of Humans’
The artificial intelligence is trained by collecting information from the internet and possibly also by processing the questions that users ask. The technology is rapidly getting smarter, and while it could help solve many fundamental problems, experts warn that it could ultimately threaten the survival of mankind. Superintelligence will be the most influential technology humanity has ever invented, OpenAI writes. “But the enormous power of superintelligence could also be very dangerous and could lead to the disempowerment of humanity or even the extinction of mankind.”
At the moment, there is no way to steer or control a “potentially superintelligent AI,” OpenAI writes. That has to change, so the company is setting up a new internal unit: Superalignment. This department, made up of scientists and other bright minds, must ensure through daily safety checks that AI does not become hostile and does not cause chaos, or worse.
OpenAI cannot guarantee that this will succeed. “While this is an incredibly ambitious goal and we are not guaranteed to succeed, we are optimistic that a focused, joint effort can solve this problem.” The company is looking for new employees who want to contribute: “We need the brightest minds in the world.” OpenAI expects to learn more about AI, its dangers and its opportunities along the way, and will share those findings with everyone throughout the process.
Dangers and opportunities
According to the NCTV, the Dutch National Coordinator for Counterterrorism and Security, the rise of artificial intelligence is creating new digital threats. For example, it is becoming easier to develop malicious software, and it is becoming harder to determine whether texts, photos, videos and audio are real. At the same time, the same technology makes it possible to fend off potential attacks faster and more effectively.
The European Parliament recently passed legislation to rein in AI. The proposed European regulation covers the design, use and sale of AI systems. Facial recognition with smart cameras in public spaces will be banned, and binding rules will be introduced for large chatbot models such as ChatGPT.
However, artificial intelligence should not only be seen as a problem, Prime Minister Mark Rutte said earlier. Rutte cited fighting diseases and even solving social problems as examples of areas where AI can play a role. “At the same time, you also want to prevent the risks it entails from becoming unmanageable.”