While part of the world was moping in front of Netflix during the 2020 lockdowns, Facebook's teams set about developing the company's future artificial intelligence supercomputer. The AI Research SuperCluster (RSC) was unveiled on January 24. According to Mark Zuckerberg, "Meta has developed what we believe is the world's fastest supercomputer."
A learning system for future Metaverse AIs
This supercomputer is meant to train large AI models with trillions of parameters. That will make it possible "to work across hundreds of different languages, seamlessly analyze text, images and video, develop new augmented reality tools, and more."
RSC has already begun training large natural language processing and computer vision models. These will be used to support content moderation on Meta's platforms, an extremely sensitive subject for the company at the moment, and to build new services for its future metaverse.
Meta offered a glimpse of the possibilities this opens up: real-time voice translation systems covering ever more languages, dialects and accents; models able to analyze long videos while taking more data into account; or a speech recognition model that works even in complex situations such as a party or a concert.
Jerome Pesenti, vice president of AI at Meta, told the Wall Street Journal that "in the metaverse it's 100% of the time a 3D multi-sensory experience, and you have to create AI agents in that environment that are relevant to you." This means AIs that are more context-aware and more subtle in their analysis of a situation than those that exist today.
The Covid-19 crisis and the semiconductor shortage play spoilsport
To build it, Meta, which already operates a supercomputer designed in 2017, started from scratch in order to take advantage of recent advances in IT infrastructure. The company worked with Penguin Computing, Pure Storage and Nvidia on a development effort launched in a world disrupted by the Covid-19 pandemic.
On top of the constraints of remote work during the first phase of development came the difficulty of obtaining chips and GPUs in the middle of a semiconductor shortage. For its phase 1, RSC comprises 760 Nvidia DGX A100 systems, or 6,080 GPUs in total, which in early tests already lets it run computer vision workloads 20 times faster than its predecessor and large-scale natural language understanding systems three times faster. Training models with tens of billions of parameters now takes three weeks instead of nine.
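As a quick sanity check, those figures can be reproduced with simple arithmetic; the sketch below assumes the standard eight-GPU configuration of an Nvidia DGX A100 node and otherwise uses only the numbers quoted above.

```python
# Back-of-the-envelope check of the phase 1 figures quoted above.
dgx_systems = 760
gpus_per_dgx_a100 = 8                      # a DGX A100 node houses 8 A100 GPUs
total_gpus = dgx_systems * gpus_per_dgx_a100
print(total_gpus)                          # 6080, matching the 6,080 GPUs cited

# Training time for models with tens of billions of parameters
weeks_before, weeks_after = 9, 3
print(f"speed-up: {weeks_before / weeks_after:.0f}x")  # 3x, the NLP figure cited
```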
The most powerful supercomputer in the world… in its domain
RSC is expected to reach 16,000 GPUs by mid-2022, increasing its AI training performance by a factor of 2.5. Nvidia has already confirmed that it will be the largest customer installation of its kind. To handle the data flowing through its supercomputer, Meta is planning an exabyte of storage, the equivalent of 36,000 years of high-quality video.
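The storage comparison can also be checked with a rough calculation; the bitrate below is an assumption (roughly 1080p streaming quality), not a figure published by Meta.

```python
# Rough check of "one exabyte ~ 36,000 years of high-quality video".
EXABYTE_BYTES = 1e18
SECONDS_PER_YEAR = 365.25 * 24 * 3600

video_bitrate_bps = 7e6                    # assumed ~7 Mbit/s "high-quality" stream
bytes_per_second = video_bitrate_bps / 8

years_of_video = EXABYTE_BYTES / bytes_per_second / SECONDS_PER_YEAR
print(f"{years_of_video:,.0f} years")      # ~36,000 years
```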
With this configuration, RSC should indeed become the fastest supercomputer in the world, ahead of Japan's Fugaku, which has held the top spot in the independent top500.org supercomputer ranking since 2020.
That performance needs to be qualified, however. RSC does not operate on quite the same plane as the conventional supercomputers used by governments and universities. Shubho Sengupta, an engineer on the project, told the Wall Street Journal: "Ordinary supercomputers are optimized for high-precision activity, while AI supercomputers operate on much lower levels of precision, gaining speed without affecting end results."
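To make that trade-off concrete, here is a minimal, illustrative sketch (not RSC's actual software stack): the same computation carried out in 64-bit and 16-bit floating point, showing the small error that lower precision introduces in exchange for speed and memory savings.

```python
import numpy as np

# Illustrative only: how much accuracy is lost by doing arithmetic in 16-bit
# floats instead of 64-bit floats, the trade-off described in the quote above.
rng = np.random.default_rng(0)
a = rng.random(10_000)            # float64 vectors
b = rng.random(10_000)

exact = np.dot(a, b)                                   # "high-precision" result

# Same multiplications carried out in float16, then accumulated in float64
# so that only the 16-bit rounding of each product is measured.
approx = (a.astype(np.float16) * b.astype(np.float16)).astype(np.float64).sum()

print(f"float64 result : {exact:.6f}")
print(f"float16 result : {approx:.6f}")
print(f"relative error : {abs(exact - approx) / exact:.6%}")
# Half-precision values take a quarter of the memory of float64 and run much
# faster on modern GPU tensor cores; for neural-network training, an error of
# this size rarely changes the end result.
```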
The performance and promise of RSC are nonetheless impressive. Meta has not disclosed the location of its supercomputer or the investment required to build it. Les Echos notes, however, that the company spent $18 billion on research and development in 2020. At least part of that money went into creating the supercomputer that will train the metaverse's future AIs.