Taking stock of artificial intelligence and machine learning in 2021

In March, Stanford University’s Institute for Human-Centered Artificial Intelligence (Stanford HAI) published the 2022 edition of its AI Index report. For the fifth year, it analyzes how artificial intelligence (AI), and more specifically machine learning models, affect the research and development of companies and organizations, as well as the economy and policy of countries around the world.

Between satisfaction and concern: AI models are powerful, but can be dangerous

The Stanford HAI report is divided into five chapters: research and development, technical performance, ethics of artificial intelligence, economy and education, and AI policy and governance.


Among the trends highlighted by the study, neural networks, the main component of machine learning models, keep growing in size. The report discusses language models, such as OpenAI’s GPT-3, which are trained on tens of terabytes (10¹² bytes) of data and have hundreds of billions of parameters.

Toxicity of AI models

This graph shows how increasing the number of parameters and using ever more data (labeled very low, low, med and high) raises the risk of model toxicity. Screenshot: Rae et al. / 2022 AI Index Report.

“A model composed of 280 billion parameters and developed in 2021 can be up to 28% more dangerous than a 117-million-parameter model such as those available in 2018. This increase in dangerousness is accompanied by a large and significant increase in the capabilities of the model,” says the document. The authors add that it is more important than ever to understand the shortcomings of these systems, as they are increasingly marketed and deployed all over the world.

In parallel, research on the fairness and transparency of AI has exploded since 2014: the number of publications from researchers working on AI ethics has increased fivefold.

The private sector is increasingly interested in artificial intelligence

Training and running machine learning models is becoming more affordable over time. The cost to develop and train a model that classifies images according to certain characteristics has fallen by 63.6%. This price drop has fueled an overall increase in investment, as more and more companies can afford to put money into exploiting AI.

Investments in AI

This graph breaks down investments in AI by the private sector (blue) and the public sector (green). Dark purple marks mergers and acquisitions, and light purple marks minority stakes (such as joint ventures). Screenshot: NetBase Quid / 2022 AI Index Report.

Private companies invested a total of $93.5 billion in AI in 2021, double the 2020 figure. In 2020, only four funding rounds worth $500 million or more were completed, compared with 15 in 2021.

Among them is Waymo, Alphabet’s subsidiary specializing in autonomous vehicles, which closed a $2.5 billion funding round to improve its technology and recruit additional staff. On the other hand, the number of newly funded start-ups specializing in AI continues to decline: from 1,051 in 2019, it fell to 762 in 2020 and to 746 in 2021.

The commercialization of AI models is strongly driven by research: China is the world leader in the number of articles published, ahead of the United States. Despite the geopolitical tensions and rivalry between the two countries, the United States/China partnership remains the most prolific collaboration in terms of scientific publications.
