After the scandal surrounding Sam Altman: Are AI profits at OpenAI compatible with ethical principles?

The leadership chaos at ChatGPT developer OpenAI seems to have subsided for the time being, but many points are still unclear. Above all, the startup must now address the question of whether its ethical mission is really compatible with the company structure.

• OpenAI’s mission focuses on the interests of all humanity
• CEO Sam Altman is probably more profit-oriented
• Corporate structure inevitably leads to tensions

The AI startup OpenAI has been through a turbulent period after CEO and co-founder Sam Altman was first unexpectedly forced out by the board of directors and then brought back a few days later. Instead, most of the board members from that time are no longer with OpenAI. However, it is still not known what exactly prompted the board to fire the OpenAI boss. The startup’s blog post said Altman had “not [been] consistently open in his communication with the board, which impaired its ability to exercise its responsibilities”. As a result, the board lost confidence in Altman’s ability to continue leading OpenAI. It remains unclear where his communication was not open enough. However, according to “The Information”, there is still to be a formal investigation into the events that led to Altman’s ouster.

As Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics, writes in a guest article for Fortune, there were tensions throughout the past year on OpenAI’s board of directors, which included Sam Altman before the leadership chaos. These may have arisen because Sam Altman, as CEO of OpenAI, put the startup’s business interests first, while the board of directors is committed to the company’s ethical mission. This conflict could ultimately have led to the split.

OpenAI’s mission: AI for the good of humanity

OpenAI describes itself on the company website as an “AI research and delivery company” whose “mission” is “to ensure that artificial general intelligence benefits all of humanity.” According to OpenAI, the term artificial general intelligence (AGI) refers to “AI systems that are generally more intelligent than people”. The startup also states that it wants to build “safe and useful AGI”. Together with the stated mission, this suggests that the startup puts ethics above profit. However, the company’s structure is rather complex and not entirely free of profit motives.

According to the company, OpenAI was founded in 2015 as a non-profit organization. In 2019, however, a restructuring followed to ensure that the company could raise the capital it needed to fulfill its mission, while the mission, leadership and oversight of the nonprofit organization were to be maintained. This led to the creation of OpenAI LP, a “mix of a for-profit and a nonprofit company that we call a ‘capped-profit’ company,” according to the startup. The original organization, however, continues to exist as “OpenAI Nonprofit” and remains at the top of the structure. Through this combination of a for-profit and a non-profit company, backers can receive a certain maximum return on their investment in OpenAI LP, but any returns beyond that cap go to the OpenAI nonprofit organization. The capped-profit company is overseen by the board of directors, which is supposed to think and act in a charitable manner and ensure that OpenAI adheres to its mission and upholds its charter. Hence, as the OpenAI website puts it, the “majority of the board [is] independent and the independent directors [are] not involved in OpenAI”.

Contradictions between partial profit-making and non-profit status

According to Ann Skeet’s guest article in Fortune, OpenAI’s board of directors was deliberately given the mandate and the power to take profit motives out of the equation. In addition, according to “The Algorithmic Bridge”, it is also supposed to ensure that AI safety and AI alignment remain the highest priority in the company’s hierarchy of values, and it has even pledged to destroy OpenAI if this becomes necessary for safety reasons. So while OpenAI’s board of directors aims to put humanity over profit, OpenAI is also led by Sam Altman, who, according to Data Ethics, is “extremely profit-oriented.”

It is obvious that both aspects are difficult to reconcile and that tensions are almost inevitable, and this was apparently also the case before the recent scandal. “Sources tell me that the company’s profit focus under Altman and the speed of development, which could be seen as too risky, were at odds with the nonprofit side, which advocated for greater safety and caution,” tech reporter Kara Swisher wrote on the short message service X, formerly Twitter, referring to Altman’s dismissal. However, according to Swisher, opinions differ as to whether Altman’s ouster was a coup or whether it was the right move.

Profit vs. Ethics: Which emerges victorious from chaos?

In general, there will likely continue to be disagreement as to whether profit or ethics has, or should have, higher priority at OpenAI. As a post on “Quora” points out, the points that are supposed to prove the prioritization of ethics consistently draw criticism. According to the “Quora” post, OpenAI has published several guidelines and principles for the ethical development of AI, which address, among other things, safety, fairness and transparency. For critics, however, these are too vague and too open to different interpretations. The company’s safety measures to prevent abuse, bias and discrimination are also often viewed as not strong enough. In addition, the partnerships that OpenAI has established with organizations such as universities and research institutes committed to promoting ethical AI development have also been criticized. According to the Quora post, they are not sufficient to ensure that OpenAI’s technologies are actually used responsibly.

The fact that Altman is now back at the helm of OpenAI while the board of directors is being almost completely rebuilt could indicate that the profit-oriented side of the AI company has indeed gained the upper hand. In fact, the 2019 decision to commercialize its own technology via OpenAI LP could be an indication that the startup is more interested in making money than in ethical AI development. However, AI development obviously also costs money, so much so that the required sums can probably not realistically be raised through donations alone. Whether OpenAI will completely give up its non-profit side under Altman and the new board of directors, or continue trying to strike the difficult balance between profit and ethics, will likely become clear in the coming weeks and months.

Editorial team finanzen.net


