Taipei, Taiwan – Computer hardware and manufacturing firm NVIDIA has announced that it is joining forces with top computer manufacturers to introduce a range of systems powered by the Blackwell architecture.
These systems will feature Grace CPUs, NVIDIA networking, and infrastructure designed to help enterprises build AI factories and data centres to spearhead the next wave of generative AI.
Jensen Huang, founder and chief executive officer of NVIDIA, said the company will work with ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn on the effort.
Together, these companies will provide cloud, on-premises, embedded, and edge AI systems using NVIDIA GPUs and networking.
Speaking about the collaboration, Huang said, “The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar traditional data centres to accelerated computing and build a new type of data centre—AI factories—to produce a new commodity: artificial intelligence.”
“From server, networking, and infrastructure manufacturers to software developers, the whole industry is gearing up for Blackwell to accelerate AI-powered innovation for every field,” he further explained.
To address applications of all types, the offerings will include configurations ranging from single to multiple GPUs, from x86 to Grace-based processors, and from air to liquid cooling.
The NVIDIA MGX modular reference design platform, which now supports NVIDIA Blackwell products, will further speed up the development of systems of different sizes and configurations.
This encompasses the new NVIDIA GB200 NVL2 platform, designed to deliver unparalleled performance for mainstream large language model inference, retrieval-augmented generation, and data processing.
The GB200 NVL2 is ideally suited for emerging market opportunities such as data analytics, a sector where companies spend tens of billions of dollars annually.
By leveraging the high-bandwidth memory performance of NVLink®-C2C interconnects and the dedicated decompression engines in the Blackwell architecture, data processing can be accelerated by up to 18 times, with 8 times better energy efficiency, compared with x86 CPUs.
NVIDIA MGX also provides computer manufacturers with a reference architecture to quickly and cost-effectively build more than 100 system design configurations, allowing them to meet the diverse accelerated computing needs of the world’s data centres.
AMD and Intel are further backing the MGX architecture with plans to introduce their own CPU host processor module designs for the first time.
NVIDIA’s latest GB200 NVL2 is itself built on MGX and Blackwell, with a scale-out, single-node design that allows a variety of system configurations and networking options to seamlessly integrate accelerated computing into existing data centre infrastructure.
Adding to the Blackwell product range, the GB200 NVL2 joins NVIDIA Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips, and the GB200 NVL72.