Nvidia Unveils Groundbreaking DGX SuperPOD With GB200 Grace Blackwell


Tech Desk | 19 Mar 2024, 12:36 PM GMT

  • The DGX SuperPOD features the new Nvidia GB200 Grace Blackwell Superchip.

  • It is designed for processing trillion-parameter models with continuous uptime.

  • The liquid-cooled rack-scale architecture delivers 11.5 exaflops of AI supercomputing.


Nvidia has introduced its latest AI supercomputer, the Nvidia DGX SuperPOD, featuring the new Nvidia GB200 Grace Blackwell Superchip. The system is built to process trillion-parameter models with continuous uptime for superscale generative AI training and inference workloads.

The DGX SuperPOD showcases a liquid-cooled rack-scale architecture, delivering 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory, expandable with additional racks. The Nvidia GB200 Superchip, a new AI accelerator, is designed to meet the rigorous demands of generative AI training and inference workloads involving trillion-parameter models.


Each DGX GB200 system integrates 36 Nvidia Arm-architecture Grace CPUs and 72 Nvidia Blackwell GPUs across 36 GB200 Superchips, with each Superchip pairing one Grace CPU with two Blackwell GPUs. Interconnected via fifth-generation Nvidia NVLink, the Superchips in a DGX GB200 system operate as a single unified supercomputer, enabling high-speed data transfer between CPUs and GPUs.
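The counts above follow from Nvidia's published per-Superchip configuration (one Grace CPU paired with two Blackwell GPUs). A minimal sketch of that arithmetic, using the figures quoted in this article:

```python
# Topology arithmetic for one DGX GB200 system, using the figures
# quoted in the article. The 1-CPU/2-GPU split per Superchip is
# Nvidia's published GB200 configuration.
SUPERCHIPS_PER_SYSTEM = 36
CPUS_PER_SUPERCHIP = 1   # Grace CPU
GPUS_PER_SUPERCHIP = 2   # Blackwell GPUs

cpus = SUPERCHIPS_PER_SYSTEM * CPUS_PER_SUPERCHIP
gpus = SUPERCHIPS_PER_SYSTEM * GPUS_PER_SUPERCHIP

print(f"{cpus} Grace CPUs, {gpus} Blackwell GPUs")  # 36 Grace CPUs, 72 Blackwell GPUs
```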


Notably, the GB200 Superchip delivers up to 30 times the performance of Nvidia's current flagship H100 Tensor Core GPU on large language model inference tasks, marking a significant advance in AI supercomputing.

Each DGX GB200 system comprises 36 Nvidia GB200 Superchips connected via fifth-generation Nvidia NVLink, and the SuperPOD's capacity grows by adding further racks of these systems.

The SuperPOD can scale to tens of thousands of GB200 Superchips connected via Nvidia Quantum InfiniBand, providing a vast shared memory space for next-generation AI models. The architecture includes Nvidia BlueField-3 DPUs and supports Nvidia Quantum-X800 InfiniBand networking, enhancing in-network computing performance.


The DGX SuperPOD features a highly efficient, liquid-cooled architecture that optimizes performance while minimizing thermal constraints, ensuring sustainable and energy-efficient operations even under heavy computational loads.

Expected to be available later this year through Nvidia's global partners, the DGX SuperPOD with DGX GB200 and DGX B200 systems is set to revolutionize AI supercomputing, offering unparalleled computational power and efficiency for handling complex AI workloads.


Collaborations with Oracle, Google, Microsoft, and AWS further extend the reach of the new platform, showcasing its potential to drive AI innovation across industries and solidifying Nvidia's position as a leader in high-performance computing for AI.
