Breakthrough deep learning performance on a CPU

2 min read

Deci, a deep learning company, has announced a new set of image classification models, dubbed DeciNets, for Intel Cascade Lake CPUs.

Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated the new image classification models. According to the company, they outperform all published models, delivering more than a 2x improvement in runtime along with improved accuracy compared with the most powerful publicly available models, such as Google’s EfficientNets.

While GPUs have traditionally been used to run convolutional neural networks (CNNs), CPUs are a much cheaper alternative. Although it is possible to run deep learning inference on CPUs, they are significantly less powerful than GPUs; as a result, deep learning models typically run 3-10x slower on a CPU than on a GPU.
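The speed comparisons cited here come from inference benchmarks. As a rough illustration only (not Deci's or MLPerf's methodology), the latency and throughput of any model callable can be measured with a small timing harness like the sketch below; running it against a CPU-backed and a GPU-backed version of the same model gives an estimate of the gap:

```python
import time

def benchmark(model_fn, batch, warmup=3, runs=20):
    """Time a model callable; return (latency in ms/run, throughput in items/s)."""
    # Warm-up runs exclude one-time costs such as caching or lazy initialization.
    for _ in range(warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(batch)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000.0
    throughput = runs * len(batch) / elapsed
    return latency_ms, throughput
```

Here `model_fn` and `batch` are placeholders: `model_fn` stands in for any inference call (e.g. a compiled CNN forward pass) and `batch` for its input, so the same harness can be pointed at either device.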

DeciNets significantly close that performance gap, so tasks that previously could not be carried out on a CPU because they were too resource-intensive are now possible.

Additionally, these tasks will see a marked performance improvement. According to Deci, by leveraging DeciNets, the gap between a model’s inference performance on a GPU versus a CPU is cut in half, without sacrificing the model’s accuracy.

“As deep learning practitioners, our goal is not only to find the most accurate models, but to uncover the most resource-efficient models which work seamlessly in production – this combination of effectiveness and accuracy constitutes the ‘holy grail’ of deep learning,” said Yonatan Geifman, co-founder and CEO of Deci. “AutoNAC creates the best computer vision models to date, and now, the new class of DeciNets can be applied to run AI applications effectively on CPUs.”

“There is a commercial, as well as academic desire, to tackle increasingly difficult AI challenges. The result is a rapid increase in the complexity and size of deep neural models that are capable of handling those challenges,” said Prof. Ran El-Yaniv, co-founder and Chief Scientist of Deci and Professor of Computer Science at the Technion – Israel Institute of Technology.

“The hardware industry is in a race to develop dedicated AI chips that will provide sufficient compute to run such models; however, with model complexity increasing at a staggering pace, we are approaching the limit of what hardware can support using current chip technology. Deci’s AutoNAC creates powerful models automatically, giving users superior accuracy and inference speed even on low-cost devices, including traditional CPUs.”

In March last year, Deci and Intel announced a broad strategic collaboration to optimise deep learning inference on Intel Architecture (IA) CPUs. Prior to this, the two companies collaborated on MLPerf submissions in which Deci’s AutoNAC technology accelerated the inference speed of the well-known ResNet50 neural network on several popular Intel CPUs, cutting the submitted models’ latency by a factor of up to 11.8x and increasing throughput by up to 11x.

Deci’s AutoNAC technology is already serving customers across industries in production environments.