Otto Barten saw a TED talk in 2014 that greatly influenced his worldview. “It was a lecture by the Swedish futurist Anders Sandberg,” said Barten. “In it, Sandberg talks about the future of humanity. A key concept is ‘existential risk’: the risk that humanity will die out, for example due to artificial intelligence that turns against humans.”
The lecture, Humanity on the Edge of Extinction, was an eye-opener for Barten. “To my surprise, there was no public debate on this subject at all.” That is why in 2021 he set up the Existential Risk Observatory, a Dutch NGO that draws attention to the risk of human extinction due to future AI (artificial intelligence). The Existential Risk Observatory does this through media appearances, opinion pieces and organizing debates.
The Existential Risk Observatory is also heavily involved in the Control AI petition, which was signed this month by a large number of prominent people. Among them are (former) politicians such as Lodewijk Asscher, Boris van der Ham, Klaas Dijkhoff and Gert-Jan Segers, as well as writers and opinion makers such as Maxim Februari, Rutger Bregman, Bas Heijne and Sander Schimmelpenninck.
“The emergence of AI offers opportunities, but also entails enormous risks,” the text of the petition begins. “That is why we are calling on Dutch politicians: take control of the development of AI. Let humans determine what a future with AI looks like. Not the other way around.”
Much-discussed open letter
In the list of ‘Initiators’, Barten’s name appears next to that of Ruben Dieleman, campaign manager of the Existential Risk Observatory. The term ‘existential risk’ appears twice in the petition, and that was the reason Marleen Stikker, tech philosopher and director of Waag Futurelab, declined to sign it.
“I agree with 80 percent of it, but I did not sign because of the term existential risk. That seems like an innocent concept, but if you look closely, you will see that the petition links to a website, pauseai.info. And there you can see how the concept is elaborated. This is about the extinction of humanity by AI in the future. There is no scientific evidence for that. You can’t rule it out, but neither can you rule out the abominable snowman. And it distracts from concrete problems in the present such as disinformation and discrimination by AI. I want to talk about human rights, democracy and self-determination. Not about the extinction of the human species.”
The website pauseai.info was set up by Joep Meindertsma, co-initiator of the Control AI petition. Unlike in the petition, the term ‘extinction’ is explicitly mentioned here. “We are in danger of human extinction,” it says, followed by a call to halt the development of “all systems more powerful than GPT-4 [the latest version of ChatGPT]”. That is exactly what the much-discussed open letter from the Future of Life Institute at the end of March, signed by Elon Musk among others, called for. The call was 23 words: “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
A 22-word statement from the Center for AI Safety followed on May 30: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by many AI scientists, as well as tech industry figures such as CEO Sam Altman of OpenAI, the maker of ChatGPT and image generator DALL-E.
Science fiction script
The Dutch petition is different: it is a call to Dutch politicians to no longer leave control of AI to the tech industry, but to take it into their own hands: “Take a leading role in Europe and implement high-quality regulation through EU legislation of AI applications,” the text reads.
Each ministry should also investigate “in which areas AI will have a fundamental influence” and map out “where additional regulations are needed to steer that influence in the right direction”. The petition also advocates more research into the dangers of AI, through the creation of “300 full-time research positions in the fields of AI Safety, AI and Rule of Law and Existential Risk”.
What does Barten mean by ‘existential risk’? “Existential risk means that the survival of humanity is at stake. It can mean extinction, the collapse of civilization or a permanent dystopia. But the probability of human extinction is much greater than the probability of those other two scenarios. As far as I am concerned, there is little daylight between existential risk and extinction.”
Trouw columnist and computer scientist Ilyaz Nasrullah was approached but refused to sign the petition. “This petition is really divided into two parts: on the one hand the concrete problems of the short term, on the other the existential risks. I too am concerned about the social problems that arise now that the tech industry is throwing unfinished AI onto the market. But there is no technological path to AI that poses an existential risk to humanity. That is a science fiction script.”
The Existential Risk Observatory is affiliated with the effective altruism movement, says Nasrullah. “They ask themselves: how can you do the maximum good? And then they arrive at preventing human extinction, the greatest act of charity imaginable.”
In 2021, Barten presented his NGO on the Effective Altruism website. For the time being, the Existential Risk Observatory was ‘self-funded’, he wrote at the time. “We now have 3 FTE, and I am one of them,” he says now. “So I work full-time for the Existential Risk Observatory. We are partly financed by Maurice Schuurmans, a Dutch tech billionaire.”
Chess game
On the Existential Risk Observatory site, “unaligned AI” is identified as the main risk: AI that does not do what humans want. According to research cited by the Existential Risk Observatory, the risk of extinction in the next 100 years is 10 percent. The reference is to a book by Toby Ord, one of the founders of effective altruism. In 2020 he published The Precipice: Existential Risk and the Future of Humanity; he is affiliated with the Future of Humanity Institute in Oxford, which is funded by Elon Musk, among others.
Barten acknowledges that his NGO is affiliated with the effective altruism movement. “The difference is: they mainly look at the technical solution. How can you ensure that AI continues to do what we as humans want? We, above all, want to stimulate the public debate about existential risk and want independent scientific research into the subject.”
According to Barten, the big question is: what happens if AGI, Artificial General Intelligence, arrives, a form of AI that is smarter than humans? “Not much thought has been given to that. Like organizations such as Waag Futurelab, we believe that current AI should be critically monitored, but we also ask: who will control the future superintelligence?”
We don’t know how powerful the AI of the future will be or whether we will still be able to control it, says Barten. “An AI system like ChatGPT currently stays on the server we put it on. But in the future, AI may gain access to other locations on its own. It could start hacking banks, airplanes or even nuclear weapons. I’m not saying that all AI systems will do that, but even if only a few do, it is dangerous.”
We cannot win a battle against an AI system that is smarter than humans, says Barten. He compares it to a game of chess against a grandmaster like Magnus Carlsen. “I don’t know what he will do, what his next move will be, but I will lose.”
While Stikker and Nasrullah call the extinction scenario unscientific, Barten points to the AI scientists who signed the Center for AI Safety’s extinction statement at the end of May. “262 leading AI scientists apparently think that risk is real. And now UN Secretary-General Guterres, the White House and British Prime Minister Sunak have also recognized the existential risk of AI.”
Nasrullah sees this as a successful tech-industry strategy. “You see a broad refusal by organizations involved in technology and human rights to sign these kinds of petitions. In our world we have known the lobby of the Existential Risk Observatory and the effective altruists for much longer. There is a lobby from the tech industry to avoid talking about the dangers of AI in the present by shifting attention to the risks of a future superintelligence.”
‘Camp battle’
Nasrullah is not the only one who sees it that way. Tech magazine MIT Technology Review devoted an article to it this week: How existential risk became the biggest meme in AI. The gist: tech companies like to shift concerns about AI to the distant future, where their current interests are not at stake. “When we talk about the distant future, when we talk about mythological risks, we reframe the problem as one that exists in a fantasy world, and the solutions can only exist in that fantasy world,” says Meredith Whittaker, founder of research lab AI Now Institute, in the article. In other words: x-risk, as existential risk is known for short, is a lightning rod that Big Tech uses to escape regulation in the present.
Doesn’t Barten think his story distracts from problems with AI in the present? “No, I think there is room to talk about both. Also about current problems such as prejudice, discrimination, impending unemployment, inequality, concentration of power, disinformation. We are concerned about those too. That is why we support the EU AI Act, and that alone shows that we are not an extension of Big Tech, because they think the European rules are too strict. But we also want to generate attention for the AI risks of the future.”
Former Member of Parliament Kees Verhoeven and former VVD campaign strategist Mark Thiessen are also among the initiators of the petition. “We talked about the text,” says Stikker. “They are mainly concerned about the problems in the present, but also wanted to keep the x-risk groups on board.”
As a result, many organizations involved in technology and human rights, such as Bits of Freedom, Amnesty and Waag Futurelab, did not sign the petition. Verhoeven regrets that. “A camp battle has arisen that I do not feel part of,” he says. “Mark Thiessen and I are particularly concerned about current AI issues. But we thought it would be a good idea to name as many different risks as possible. As for the abominable snowman: it doesn’t cause any problems in the present. AI does. So that comparison doesn’t work.”
He is not particularly concerned about ‘existential risk’, says Verhoeven. “But I can’t rule it out. We don’t know how AI will develop, do we? As far as I’m concerned, different concerns can coexist. I thought: everyone can join.”
Essayist Maxim Februari, who writes a lot about AI, decided to sign the petition. “I would prefer to discuss current problems soberly, such as the consequences of AI for the law. I think everyone weighed it differently. Marleen Stikker and I are not far apart. But the question is: how do you get it on the political agenda? Existential risk is not something I am concerned with. And I also see the objections. But apparently this works. The press and politicians have finally been shaken up.”
A version of this article also appeared in the print edition of June 24, 2023.