
Deci and Intel look to optimise deep learning inference


The deep learning company Deci has announced a broad strategic business and technology collaboration with Intel to optimise deep learning inference on Intel Architecture (IA) CPUs.

As one of the first companies to participate in the Intel Ignite startup accelerator, Deci will now work with Intel to deploy innovative AI technologies to mutual customers.

The collaboration is intended as a significant step towards enabling deep learning inference at scale on Intel CPUs, reducing cost and latency and opening the door to new inference applications.

New deep learning tasks can be performed in real time on edge devices, and companies running large-scale inference workloads can dramatically cut cloud or datacentre costs simply by switching their inference hardware from GPUs to Intel CPUs.

“By optimising the AI models that run on Intel’s hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs,” said Deci CEO and co-founder Yonatan Geifman. “We are delighted to collaborate with Intel to deliver even greater value to our mutual customers and look forward to a successful partnership.”

The companies first worked together on an MLPerf submission, in which Deci's AutoNAC (Automated Neural Architecture Construction) technology accelerated ResNet-50 inference on several popular Intel CPUs, cutting the submitted models' latency by up to 11.8x and increasing throughput by up to 11x.

Deci's AutoNAC technology uses machine learning to redesign any model and maximise its inference performance on any target hardware, all while preserving the model's accuracy.
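AutoNAC itself is proprietary, but the general idea of hardware-aware optimisation can be illustrated with a short, hypothetical PyTorch sketch: candidate architectures are benchmarked directly on the target CPU, and the measured latency is then traded off against accuracy. The helper and model choices below are illustrative assumptions only and are not Deci's API.

```python
# A minimal, hypothetical sketch (not Deci's code: AutoNAC is proprietary)
# showing the kind of signal a hardware-aware optimiser relies on --
# measured inference latency of candidate models on the target CPU.
import time

import torch
import torchvision.models as models


def cpu_latency_ms(model, input_shape=(1, 3, 224, 224), warmup=5, runs=20):
    """Average single-image CPU inference latency, in milliseconds."""
    model.eval()
    x = torch.randn(input_shape)
    with torch.no_grad():
        for _ in range(warmup):      # warm-up passes to stabilise timings
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / runs


if __name__ == "__main__":
    # Two off-the-shelf backbones of different size; a hardware-aware search
    # would trade measured latency like this against task accuracy.
    for name, ctor in [("resnet50", models.resnet50), ("resnet18", models.resnet18)]:
        print(f"{name}: {cpu_latency_ms(ctor()):.1f} ms per image")
```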

Monica Livingston, AI Solutions and Sales Director at Intel said, “Deci delivers optimised deep learning inference on Intel processors as highlighted in MLPerf. Optimising advanced AI models on general purpose infrastructure based on Intel Xeon Scalable CPUs allows our customers to meet performance SLAs, reduce cost, decrease time to deployment, and gives them the ability to effectively scale.”

The companies are developing pilots of Deci's platform with select customers in the enterprise, cloud, communications, and media segments, helping them scale up and further accelerate their deep learning workloads on Intel CPUs. As results from these engagements are shared, the companies plan to bring the Deci platform to a broader base of customers.