What’s open about smart chatbot maker OpenAI?

Freely accessible artificial intelligence (AI) where everyone can see under the hood. That was the ideal behind a new outfit founded in San Francisco in 2015: OpenAI, a non-profit AI laboratory.

“Everything the group develops will be available to everyone,” co-founder Sam Altman told journalist Steven Levy of tech magazine Wired at the time. The founders presented the lab as a non-profit with a mission: to combat AI abuse. Altman: “Because we are not a for-profit company like Google, we can focus on something other than enriching our shareholders, which is what we think is best for the future of humanity.”

Seven years later, OpenAI is the most prominent AI lab in the world. “First we got the internet. Then came the iPhone. Now there is ChatGPT,” Alexander Klöpping tweeted at the beginning of December about ChatGPT, the OpenAI chatbot launched at the end of November. And he is not alone in seeing the introduction of this language model as a major event.

The whole world is amazed at the AI system that writes an essay on the Cuban Missile Crisis or an episode of a sitcom with the greatest of ease. It effortlessly answers all kinds of questions, including about itself, in seemingly natural, human language. According to The New York Times, ChatGPT could be the successor to the search engine. In addition to offering suggestions for Christmas gifts, it can “serve information in clear, simple sentences, rather than a list of Internet links, and explain concepts in a way people can easily understand.”


Within a week, ChatGPT already had a million users. And that is the second time in less than a year that an OpenAI application has become a hit. In the summer there was already the hype surrounding the revolutionary image generator DALL-E 2, with which anyone can create instant ‘art’. The system, whose name is a nod to Salvador Dalí, is not fazed by a command like ‘Draw a horse on the moon in the style of Andy Warhol’, but can also generate realistic images that look like photographs.

OpenAI was founded by a group of wealthy Silicon Valley investors, who collectively put more than a billion dollars into the lab. Among them: Elon Musk (Tesla, SpaceX), Sam Altman of Y Combinator (the startup accelerator behind Airbnb and Dropbox, among others) and PayPal founder Peter Thiel.

“I think the best defense against AI misuse is to empower as many people as possible to use AI,” Musk said in the aforementioned Wired article. “If everyone has AI power, there can be no AI superpower in an individual or small group of people.”

According to the OpenAI website, the goal is still “to ensure that Artificial General Intelligence benefits all of humanity.” But insofar as that mission was ever sincere, critics say little of it remains.

Lost ideals

In 2019, the non-profit structure of OpenAI was exchanged for a hybrid model, in which a commercial company would henceforth be managed from within the non-profit organization. That same year, Microsoft invested a billion dollars in OpenAI and obtained an exclusive license to the language model GPT-3, the predecessor of ChatGPT.

“I look at OpenAI with a mix of admiration and concern,” says Jelle Zuidema, language technology researcher (UvA). “It is clear that there are very good engineers working there, and that there is a company culture in which creative new ideas can flourish. But where are the ideals with which they were founded: open technology, a counterbalance to Big Tech?” Zuidema sees little of them. “The technology underlying their products is subject to very limited scrutiny by independent scientists. And GPT-3 is financed and commercially exploited by tech giant Microsoft.”

OpenAI is not what it seems, Karen Hao of MIT Technology Review noted back in 2020. The journalist was allowed to visit the AI lab, located in a historic building in San Francisco that says PIONEER BUILDING in large letters. The Pioneer Truck Factory was once located here; now AI is being pioneered in halls with names such as ‘A Space Odyssey’. OpenAI shares the Pioneer Building and its canteen with another tech lab: Neuralink, Elon Musk’s company that develops brain chips.

“What the company is publicly embracing is inconsistent with how it operates behind closed doors,” Hao wrote. “Over time, it has allowed a development in which its founding ideals of transparency, openness and collaboration have been eroded by fierce competitiveness and mounting pressure to find more and more funding.”


“What is open about OpenAI?” asks tech philosopher Marleen Stikker, director of De Waag, rhetorically. “The name suggests that it is about collective knowledge, but that is not the case. The software and models are not open for research or reuse. Scientists cannot do much with them, because the systems are too large.”

The ideals promoted by the founders appear to be at odds with other ambitions of the AI lab. For example, the development of Large Language Models (LLMs), the giant language models such as ChatGPT that have ‘read’ millions of texts and can perform the most diverse language tasks on the basis of that knowledge, is extremely expensive. A single training run of GPT-3 costs 12 million dollars, a tech entrepreneur calculated based on OpenAI’s own information about the model’s energy consumption. So it takes a lot of money from new investors to stay at the forefront of LLMs, which makes it difficult to remain a non-profit.

Too big to control

Furthermore, the trend toward ever-larger LLMs is difficult to reconcile with transparency. The increase in scale makes the models work better – the more text they read and the more computing power they have, the better they become – but it hinders transparency: a database of millions of texts is too large to check for inaccuracies and biases.

And then there is the ambition of OpenAI to be the first to achieve ‘AGI’: Artificial General Intelligence, or AI that can perform the same intellectual tasks as a human brain. In this respect, the lab competes with other tech companies and, according to critics, this race stands in the way of the safe, responsible development of AI. “If companies know that everyone is racing toward the latest AI at perilous speed,” Miles Brundage, then a PhD student, told Wired in 2015, “then they may be less inclined to put in place safeguards for safe AI.”

Brundage joined OpenAI in 2018, and he is not the only white man working there. A lack of diversity applies to the entire tech industry, but in the case of OpenAI it is extra difficult to reconcile with the claim of wanting to make AI for all of humanity.

“Even if OpenAI had the best of intentions,” says Professor Tamar Sharon, who heads the interdisciplinary research group Digitization and Society at Radboud University, “they still do not represent humanity in any way. OpenAI serves the agenda of their investors: Silicon Valley billionaires pushing technology as the solution to all of humanity’s problems. Their own technology, that is, which they sell to us.”


The ideals of OpenAI are not credible, says Sharon. “OpenAI was founded by a group of billionaires from their ideology of Effective Altruism, EA, which also includes fallen crypto billionaire Sam Bankman-Fried. The EA movement wants to save humanity from an apocalypse in which a machine with human intelligence destroys humanity.”

Sharon is concerned about the influence of these wealthy Effective Altruism supporters on AI. “They talk about ‘beneficial AI’ that can take over human labor in the future, but for now a lot of AI is powered by human labor in low-wage countries: tens of thousands of underpaid workers digging through the datasets. And Large Language Models like ChatGPT have a huge ecological footprint: they guzzle energy. The current trend in AI land is to make these LLMs bigger and bigger, because that makes them perform better. Given what that means for the planet, it is anything but ‘beneficial for humanity’.”

Collective knowledge

The revenue model of OpenAI is now also becoming clear. The world can get to know DALL-E 2 and ChatGPT for free, but after that we have to pay. When you run out of free credits, you have to buy new ones to continue using the application. The AI lab will also earn money from other tech companies that want to build apps with OpenAI technology. Last month, a leaked presentation to investors revealed that OpenAI expects to generate 200 million dollars in revenue in 2023, and a billion in 2024.

According to Marleen Stikker, the ideals of OpenAI have been “a smokescreen” for the commercial interests of the investors from the outset. The tech philosopher would rather not place the fate of humanity in the hands of Silicon Valley billionaires. “Be careful when they talk about ‘humanity’, because they decide for themselves what they mean by that. Usually it does not mean humanity here and now, but that of the future, as they themselves envision it.”
