The NVIDIA Grace CPU addresses the computing requirements of advanced applications, including natural language processing, recommender systems and AI supercomputing, that analyse enormous datasets and demand both ultra-fast compute performance and massive memory.
The device combines energy-efficient Arm CPU cores with an innovative low-power memory subsystem to deliver high performance with high energy efficiency.
“Leading-edge AI and data science are pushing today’s computer architecture beyond its limits – processing unthinkable amounts of data,” said Jensen Huang, founder and CEO of NVIDIA. “Using licensed Arm IP, NVIDIA has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing, and the ability to re-architect the data center to advance AI. NVIDIA is now a three-chip company.”
Grace is a highly specialised processor targeting workloads such as training next-generation NLP models with more than 1 trillion parameters. When tightly coupled with NVIDIA GPUs, a Grace-based system is claimed to deliver 10x the performance of today's state-of-the-art NVIDIA DGX systems, which run on x86 CPUs.
While the vast majority of data centres are expected to continue being served by existing CPUs, Grace, named for Grace Hopper, the U.S. computer-programming pioneer, will serve a niche segment of computing.
Initial customers, according to Huang, include the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory, which will buy Grace-based supercomputers built by HPE's Cray group for delivery in 2023.