Sat. Nov 26th, 2022

Meta has unveiled the AI Research SuperCluster (RSC), a supercomputer designed to speed up research into artificial intelligence and aid in the creation of the metaverse. Working across hundreds of different languages, RSC will help the company develop new AI models as well as augmented reality tools.

Developing the next generation of advanced artificial intelligence will require powerful new computers capable of quintillions of operations per second. Meta's researchers have already started using RSC to train large models in natural language processing (NLP) and computer vision, with the aim of training models with trillions of parameters across Meta's businesses, from the content-moderation algorithms used to detect hate speech on Facebook and Instagram to augmented reality features that will one day be available in the metaverse. Models trained on RSC can determine whether an action, sound or image is harmful. Meta claims this will help keep users safe not only on its current services, but also in the metaverse.

Meta Unveils AI Supercomputer

While conventional supercomputers are built around large numbers of CPU cores, Meta's machine relies on graphics processing units (GPUs), which are better suited to running the deep-learning algorithms that read images, analyse text and translate between languages.

There are many ways to build an AI supercomputer, but the most common is to combine multiple GPUs into compute nodes and connect them via a high-performance network fabric. RSC currently has 6,080 GPUs spread across 760 NVIDIA DGX A100 compute nodes, connected by an NVIDIA Quantum 1600 Gb/s InfiniBand two-level Clos fabric with no oversubscription. RSC's storage tier is made up of Pure Storage FlashArray, 46 petabytes of cache storage in Penguin Computing Altus systems, and 10 petabytes of Pure Storage FlashBlade.
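As a rough sanity check on these figures, the GPU count follows directly from the node count, assuming the standard configuration of eight A100 GPUs per DGX A100 node (an assumption based on NVIDIA's published DGX A100 specification, not stated in the article):

```python
# Rough sanity check on RSC's published scale figures.
# Assumption: each NVIDIA DGX A100 node carries 8 A100 GPUs
# (the standard configuration for that system).
GPUS_PER_DGX_A100 = 8

nodes = 760
gpus = nodes * GPUS_PER_DGX_A100
print(gpus)  # 6080, matching the 6,080 GPUs reported for RSC

# Crude aggregate-throughput estimate, assuming ~312 teraFLOPS of dense
# FP16/BF16 compute per A100 (NVIDIA's published dense figure).
tflops_per_gpu = 312
total_pflops = gpus * tflops_per_gpu / 1000
print(round(total_pflops))  # ~1897 petaFLOPS of mixed-precision compute
```

This is only a peak-hardware estimate; real training workloads achieve a fraction of it.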

When its build-out is complete later in 2022, RSC will have 16,000 GPUs in total, enabling it to train AI systems with more than a trillion parameters on data sets as large as an exabyte. Raw GPU count alone doesn't determine a system's overall performance, however; for comparison, Microsoft's AI supercomputer, built in collaboration with the OpenAI research lab, contains 10,000 GPUs.
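To put a trillion parameters in perspective, a back-of-the-envelope calculation shows why thousands of GPUs are needed just to hold such a model. The 2-bytes-per-parameter figure assumes 16-bit weights and ignores optimizer state and activations, which multiply the footprint several times over in practice:

```python
# Back-of-the-envelope memory footprint for a trillion-parameter model.
# Assumption: 2 bytes per parameter (16-bit weights), ignoring optimizer
# state and activations, which add several times more memory in practice.
params = 1_000_000_000_000
bytes_per_param = 2
model_bytes = params * bytes_per_param
print(model_bytes / 1e12)  # 2.0 terabytes just for the weights

# An A100 ships with 40 or 80 GB of memory; even with 80 GB cards the
# weights alone must be sharded across at least:
gpu_mem_bytes = 80 * 1e9
print(model_bytes / gpu_mem_bytes)  # 25.0 GPUs, before optimizer state
```

This is why trillion-parameter training depends on model parallelism and fast interconnects rather than on any single accelerator.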

By Adam

