
A 'singular' vision

Earlier this month, Nvidia’s GPU Technology Conference saw the unveiling of new products and platforms, alongside a raft of customer news.

Nvidia may have come in for extensive and detailed criticism over its proposed acquisition of Arm but, putting that aside, it remains a leading technology company that is investing heavily in CPUs, DPUs and GPUs as it looks to address a range of new technologies, from autonomous machines to virtual worlds and artificial intelligence.

At this year’s GPU Technology Conference its CEO, Jensen Huang, expanded on the company’s vision, describing Nvidia as a “computing platform, helping to advance the work for the Da Vincis of our time – in language understanding, drug discovery, or quantum computing.”

Huang spoke at length about how Nvidia is investing heavily in CPUs, DPUs, and GPUs and is working to weave them into new data centre scale computing solutions for both researchers and enterprises.

Huang used his keynote to announce Nvidia’s first data centre CPU, Grace, named after Grace Hopper, a US Navy rear admiral and computer programming pioneer.

Grace is a highly specialised processor designed to target large, data-intensive HPC and AI applications, such as training next-generation natural-language processing models that have more than one trillion parameters.

When coupled with Nvidia GPUs, a Grace-based system will be capable of delivering 10x faster performance than today’s state-of-the-art Nvidia DGX-based systems, which run on x86 CPUs, according to the company.
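To give a feel for why trillion-parameter models strain today’s architectures, a quick back-of-envelope memory estimate can be sketched; the two-bytes-per-parameter FP16 figure is an illustrative assumption, not an Nvidia number:

```python
# Rough memory footprint of a trillion-parameter model's weights alone,
# assuming FP16 storage (2 bytes per parameter). Optimiser state and
# activations during training would add several times more on top.
def weights_bytes(num_params: int, bytes_per_param: int = 2) -> int:
    return num_params * bytes_per_param

tb = weights_bytes(1_000_000_000_000) / 1e12
print(f"{tb:.1f} TB for the weights alone")  # 2.0 TB
```

Even before activations and optimiser state, the weights alone dwarf the memory of any single GPU, which is why such workloads are spread across CPU-coupled, data-centre-scale systems.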

“Leading-edge AI and data science are pushing today’s computer architecture beyond its limits,” said Huang, “but by using Arm IP, Grace has been designed specifically for giant scale AI and HPC. Nvidia can now be described as a three-chip company.”

While the vast majority of data centres are expected to be served by existing CPUs, Grace is intended to serve a niche segment of computing.

Huang revealed that the Swiss National Supercomputing Center (CSCS) will build a supercomputer, dubbed Alps, that will be powered by Grace and Nvidia’s next-generation GPUs.

Alps will be built by Hewlett Packard Enterprise using the new HPE Cray EX supercomputer product line as well as the Nvidia HGX supercomputing platform and will include Nvidia GPUs as well as the Grace CPU.

CSCS users will be able to use Alps to carry out a wide range of emerging scientific research, from analysing scientific papers to generating new molecules for drug discovery.

The US Department of Energy’s Los Alamos National Laboratory will also bring a Grace-powered supercomputer online in 2023, Nvidia announced.

BlueField-3 DPU

Further accelerating the infrastructure upon which hyperscale data centres, workstations, and supercomputers are built, Huang went on to unveil the new BlueField-3 DPU.

According to Huang, “This DPU will let every enterprise deliver applications at any scale with industry-leading performance and data centre security. It has been optimised for multi-tenant, cloud-native environments, offering software-defined, hardware accelerated networking, storage and management services at a data-centre scale.”

Where BlueField-2 offloaded the equivalent of 30 CPU cores, BlueField-3 provides a 10x leap in performance: it would take 300 CPU cores to secure, offload, and accelerate network traffic at 400 Gbps, Huang explained.
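A simple cycle-budget sketch shows why line-rate processing at 400 Gbps gets pushed onto dedicated hardware; the minimum-size frames and the 3 GHz core here are illustrative assumptions rather than figures from Nvidia:

```python
# Per-packet cycle budget for a single general-purpose core trying to keep
# up with 400 Gbps of minimum-size Ethernet frames.
LINE_RATE_BPS = 400e9
FRAME_BITS = (64 + 20) * 8   # 64-byte frame plus preamble/inter-frame gap
CPU_HZ = 3e9                 # assumed 3 GHz core

packets_per_second = LINE_RATE_BPS / FRAME_BITS
cycles_per_packet = CPU_HZ / packets_per_second
print(f"{packets_per_second/1e6:.0f} Mpps leaves ~{cycles_per_packet:.0f} cycles per packet")
```

At roughly five cycles per packet per core, there is no room for firewalling, encryption or storage processing in software, which is the gap a DPU is built to fill.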

Both Grace and BlueField form essential parts of a data centre roadmap that now consists of CPUs, GPUs, and DPUs, added Huang.

“Each chip architecture has a two-year rhythm with likely a kicker in between. One year will focus on x86 platforms, the next on Arm platforms,” said Huang, who added, “Every year you will be seeing new exciting products from us.”

Expanding Arm into the Cloud

Speaking before the news that the UK government was instructing the UK’s competition authority, the CMA, to investigate the proposed acquisition of Arm, Huang talked at length about Arm, saying it had become the most popular CPU in the world for ‘good reason’: it was not only “super energy-efficient” but its “open licensing model inspired a world of innovators.”

Whether that model stays in place should the acquisition proceed remains a moot point, but Huang used his keynote to announce new Arm partnerships covering Amazon Web Services in cloud computing, Ampere Computing in scientific and cloud computing, Marvell in hyper-converged edge servers, and MediaTek, which will create a Chrome OS and Linux PC SDK and reference system.

“Arm’s global ecosystem of technology companies will take Arm-based products into new markets like cloud, supercomputing, PC and autonomous systems. With the new partnerships announced, we’re taking important steps to expand the Arm ecosystem beyond mobile and embedded,” said Huang.

Nvidia and AWS are working together to deploy GPU-accelerated Arm-based instances in the cloud. The new Amazon EC2 instances will bring together AWS Graviton2 processors and Nvidia GPUs to provide a range of benefits including lower cost, support for richer game-streaming experiences, and greater performance for Arm-based workloads.

The instances will enable game developers to run Android games natively on AWS, accelerate rendering and encoding with Nvidia GPUs, and stream games to mobile devices without the need to run emulation software.

Nvidia’s Arm HPC Developer Kit has been developed to support scientific computing amid the growing need for energy-efficient supercomputers and data centres.

The kit includes an Ampere Altra CPU with 80 Arm Neoverse cores running at up to 3.3GHz; dual Nvidia A100 GPUs, each delivering 312 teraflops of FP16 deep learning performance; and two BlueField-2 DPUs, which are intended to accelerate networking, storage and security.
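For a rough sense of what those throughput figures mean against the trillion-parameter models mentioned earlier, the widely used ~6·N·D approximation for training FLOPs can be applied; the token count and the assumption of perfect utilisation are illustrative, not vendor figures:

```python
# Time to train a trillion-parameter model on the devkit's two A100s,
# using the common ~6 * params * tokens estimate of total training FLOPs.
params = 1e12                    # one trillion parameters
tokens = 3e11                    # assumed 300 billion training tokens
train_flops = 6 * params * tokens

devkit_flops_per_s = 2 * 312e12  # two A100s at 312 TFLOPS FP16 each
seconds = train_flops / devkit_flops_per_s
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years at peak throughput")  # roughly 91 years
```

The point of the devkit is porting and validation rather than full-scale training; numbers like these explain why production training runs on clusters of thousands of GPUs.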

Developers and ISV partners can use the devkit to migrate and validate their software, and conduct performance analysis.

Computing centres deploying it include Oak Ridge National Laboratory, Los Alamos National Laboratory, and Stony Brook University in the U.S.; the National Center for High Performance Computing, in Taiwan; and the Korean Institute of Science and Technology.

Nvidia’s Omniverse

Huang also unveiled the Nvidia Omniverse, a cloud-native, physically accurate platform that scales across multiple GPUs. It is able to recreate virtual 3D worlds that take advantage of RTX real-time path tracing and DLSS, simulate materials with Nvidia MDL, simulate physics with Nvidia PhysX, and fully integrate Nvidia AI.

According to Huang, “Omniverse was made to create shared virtual 3D worlds. Ones not unlike the science fiction metaverse described by Neal Stephenson in his early 1990s novel ‘Snow Crash’.”

Omniverse is set to be available for enterprise licensing and is currently in open beta with partners such as Foster and Partners in architecture, ILM in entertainment, Activision in gaming, and advertising giant WPP.

Demonstrating the possibilities of the Omniverse Huang, along with Milan Nedeljković, a member of the Board of Management of BMW, showed how a photorealistic, real-time digital model — a “digital twin” of one of BMW’s highly-automated factories — could be used to accelerate modern manufacturing.

“These new innovations will reduce the planning times, improve flexibility and precision and produce 30 percent more efficient planning,” Nedeljković said.

Other announcements included the unveiling of Megatron, a framework for training Transformers, the models behind recent breakthroughs in natural-language processing – transformers can generate document summaries, complete phrases in email, grade quizzes, generate live sports commentary, even write code – as well as new models for Clara Discovery, Nvidia’s acceleration libraries for computational drug discovery.

Another announcement concerned Nvidia Morpheus – a data centre security platform for real-time all-packet inspection built on Nvidia AI, BlueField, Net-Q network telemetry software, and EGX.

To accelerate conversational AI, Huang also announced the availability of Nvidia Jarvis – a state-of-the-art deep learning AI for speech recognition, language understanding, translations, and expressive speech.

Today, Nvidia offers GPUs, CPUs, and DPUs, and is continuing to invest heavily in AI and in the company’s Omniverse.

Concluding his keynote, Huang told the online audience that Nvidia was “the instrument for your life’s work”, and in the range and breadth of the announcements made, the scale of the company’s ambition – and its ‘singular’ vision – was clear.

Author
Neil Tyler




This material is protected by MA Business copyright. See Terms and Conditions. One-off usage is permitted but bulk copying is not. For multiple copies contact the sales team.

