TITAN V looks to transform the PC into an AI ‘supercomputer’


NVIDIA has announced the TITAN V, driven by what the company claims is the world’s most advanced GPU architecture: NVIDIA’s Volta.

Unveiled in a presentation by the company’s CEO Jensen Huang at the annual Neural Information Processing Systems (NIPS) conference in Long Beach, California, the TITAN V packs 21.1 billion transistors, enabling it to deliver 110 teraflops of raw horsepower, around 9x that of its predecessor, while remaining extremely energy efficient.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world.”

TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor at the centre of the GPU, doubling the energy efficiency of the previous-generation Pascal design and enabling a dramatic improvement in performance within the same power envelope.

With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads that mix computation with addressing calculations. Its new combined L1 data cache and shared memory unit is intended to significantly improve performance while simplifying programming.
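To make the “mix of computation and addressing calculations” concrete, the illustrative C sketch below (not NVIDIA code) shows the kind of loop such workloads contain: each iteration performs integer arithmetic to compute an address alongside floating-point maths on the fetched value. On an architecture with independent parallel integer and floating-point data paths, as NVIDIA describes Volta, these two streams of instructions can issue concurrently rather than contending for a single pipeline.

```c
#include <stddef.h>

/* Illustrative only: a strided gather-and-scale loop. Each iteration
   mixes integer addressing work (idx = base + i * stride) with
   floating-point work (acc += data[idx] * scale) -- the instruction
   mix that separate integer and FP data paths can overlap. */
float gather_scale(const float *data, size_t base, size_t stride,
                   size_t n, float scale) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++) {
        size_t idx = base + i * stride;  /* integer addressing calculation */
        acc += data[idx] * scale;        /* floating-point computation */
    }
    return acc;
}
```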

Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customised for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilisation.

TITAN V is intended for developers who want to use their PCs for work in AI, deep learning and high performance computing. They can gain immediate access to the latest GPU-optimised AI, deep learning and HPC software by signing up for an NVIDIA GPU Cloud account, which provides access to NVIDIA-optimised deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualisation tools and the TensorRT inferencing optimiser.