Nvidia Corporation has launched its first Arm® Neoverse™-based discrete data center CPU, built for high-performance computing and AI infrastructure. It delivers the highest performance, and twice the memory bandwidth and energy efficiency, of today's leading server chips.
The Nvidia Grace™ CPU Superchip consists of two CPU chips connected coherently over NVLink®-C2C, a new low-latency, high-speed chip-to-chip interconnect.
The Grace CPU Superchip complements Nvidia's first integrated CPU-GPU module, the Grace Hopper Superchip, announced last year, which is designed to serve giant-scale HPC and AI applications alongside an Nvidia Hopper™ architecture-based GPU. Both superchips share the same underlying CPU architecture and the NVLink-C2C interconnect.
“A new type of data center has emerged: AI factories that process and refine mountains of data to produce intelligence,” said Jensen Huang, founder and CEO of Nvidia. “The Grace CPU Superchip combines the highest performance, memory bandwidth and Nvidia software platforms in one chip, and will shine as the CPU of AI infrastructure.”
Introducing Nvidia’s CPU Platform
Built to deliver the highest performance, the Grace CPU Superchip packs 144 Arm cores in a single socket and posts an estimated score of 740 on the SPECrate®2017_int_base benchmark, the highest in the industry. That is more than 1.5x the score of the dual-CPU configuration shipping with the DGX™ A100 today, as estimated in Nvidia's labs using the same class of compilers.
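As a quick sanity check on the numbers above, the claimed 1.5x advantage implies a baseline score for the dual-CPU DGX A100 configuration. The sketch below derives that implied baseline and a per-core figure from the article's stated values only; the derived numbers are illustrative, not official benchmark results.

```python
# Back-of-the-envelope check of the SPECrate 2017_int_base claim.
# The 740 estimate, the 1.5x ratio and the 144-core count come from the
# article; the dual-CPU DGX A100 baseline is derived, not a published score.
grace_specrate_est = 740      # estimated SPECrate2017_int_base, Grace CPU Superchip
speedup_vs_dgx_a100 = 1.5     # claimed advantage over the dual-CPU DGX A100 config
cores = 144                   # Arm cores in one socket

implied_baseline = grace_specrate_est / speedup_vs_dgx_a100
print(f"Implied dual-CPU baseline: ~{implied_baseline:.0f}")

print(f"Estimated score per core: ~{grace_specrate_est / cores:.1f}")
```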
The Grace CPU Superchip also delivers industry-leading memory bandwidth and energy efficiency. Its innovative memory subsystem uses LPDDR5x memory with error-correcting code (ECC) to strike the best balance of performance and power. It provides 1 terabyte per second of bandwidth, twice that of conventional DDR5 designs, while the entire CPU complex, including memory, consumes only 500 watts.
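The efficiency claim can be made concrete by dividing the stated bandwidth by the stated power envelope. The snippet below uses only the figures given in the text (1 TB/s and 500 W); the resulting ratio is an illustrative derived value, not a measured specification.

```python
# Rough memory-bandwidth-per-watt figure implied by the article:
# 1 TB/s of LPDDR5x bandwidth against a 500 W envelope for the whole
# CPU complex (cores plus memory).
bandwidth_gb_s = 1000   # 1 TB/s expressed in GB/s
power_w = 500           # total superchip power, including memory

print(f"Bandwidth per watt: {bandwidth_gb_s / power_w:.1f} GB/s per W")
```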
The Grace CPU Superchip is built on the latest data center architecture, Armv9. Combining the highest single-threaded core performance with Arm's newest generation of vector extensions, it will bring immediate benefits to many applications.
The Grace CPU Superchip can run all of Nvidia's computing software stacks, including Nvidia HPC, Nvidia AI, Nvidia RTX™ and Omniverse. Paired with Nvidia ConnectX®-7 NICs, it can be flexibly configured into GPU-accelerated servers with one, two, four or eight Hopper-based GPUs, or into standalone CPU-only systems, letting customers optimize performance for their specific workloads while maintaining a single software stack.
Designed for Cloud, AI, HPC, and Hyperscale Applications
With its memory bandwidth, energy efficiency, top-tier performance and configurability, the Grace CPU Superchip excels in the most demanding AI, data analytics, HPC, hyperscale and scientific computing applications.
The Grace CPU Superchip's 144 cores and 1 TB/s of memory bandwidth will deliver unmatched performance for CPU-based high-performance computing applications. HPC applications are compute-intensive, demanding the highest-performing cores, the highest memory bandwidth and the right memory capacity per core to speed outcomes.
Nvidia is working with leading HPC, supercomputing, hyperscale and cloud customers on the Grace CPU Superchip. Both the Grace CPU Superchip and the Grace Hopper Superchip are expected to be available in the first quarter of 2023.