Eta Compute introduces TENSAI Flow


Eta Compute, a company that looks to deliver machine learning to low power IoT and edge devices using its TENSAI Platform, has announced its TENSAI Flow software suite.

The suite complements Eta's existing development resources and enables seamless design from concept to firmware, speeding the creation of machine learning applications in IoT and low power edge devices.

“Neural network and embedded software designers are seeking practical ways to make developing machine learning for edge applications less frustrating and time-consuming,” said Ted Tewksbury, CEO of Eta Compute. “With TENSAI Flow, Eta Compute addresses every aspect of designing and building a machine learning application for IoT and low power edge devices. Now, designers can optimise neural networks by reducing memory size, the number of operations, and power consumption, and embedded software designers can reduce the complexities of adding AI to embedded edge devices, saving months of development time.”

Eta Compute’s TENSAI Flow software has been designed to de-risk development by quickly confirming feasibility and proof of concept. It enables seamless development for machine learning applications in IoT and low power edge devices. It includes a neural network compiler, a neural network zoo, and middleware comprising FreeRTOS, HAL and frameworks for sensors, as well as IoT/cloud enablement.

TENSAI Flow's exclusive neural network compiler optimises neural networks running on Eta Compute's devices. In addition, the middleware makes dual core programming seamless by eliminating the need to write customised code to take full advantage of DSPs. A unique neural network zoo accelerates and simplifies development with ready-to-use networks for the most common use cases, including motion, image and sound classification.

Developers simply train the networks with their own data. And, drawing on TENSAI Flow's real-world applications, developers can see precisely what neural sensor processors can deliver in energy efficiency and performance across a variety of field-tested examples, while preserving total flexibility.
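
TENSAI Flow's own APIs are not shown in the announcement, so the snippet below is only a generic sketch of that "train with your own data" step using standard Keras. The tiny CNN, class count, input shape and the x_train/y_train arrays are hypothetical stand-ins for a zoo network and a developer's own labelled sensor data.

```python
# Generic illustration only: retraining a small ready-made classifier on
# a developer's own data. The compact CNN below is a hypothetical stand-in
# for a zoo network (e.g. a sound or motion classifier); it is NOT
# Eta Compute's actual TENSAI Flow API.
import tensorflow as tf

NUM_CLASSES = 4            # e.g. four sound classes (assumption)
INPUT_SHAPE = (49, 40, 1)  # e.g. an MFCC spectrogram patch (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=INPUT_SHAPE),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train / y_train would be the developer's own labelled sensor data.
# model.fit(x_train, y_train, epochs=20, validation_split=0.2)
```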

Compared with a direct implementation of the same CIFAR10 neural network on a competing device, the TENSAI neural network compiler running on the TENSAI SoC improves energy per inference by a factor of 54. Using the CIFAR10 network from the TENSAI neural network zoo together with the TENSAI neural network compiler improves energy per inference further, to a factor of 200.
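
Taken at face value, those two figures imply that the zoo version of the network contributes roughly a further 3.7x saving on top of the compiler alone. A quick back-of-the-envelope check (the baseline energy value is arbitrary and purely illustrative; only the 54x and 200x ratios come from the announcement):

```python
# Back-of-the-envelope check of the stated improvement factors.
baseline_uj = 1000.0                   # hypothetical energy per inference (µJ)

compiler_only = baseline_uj / 54       # TENSAI compiler on TENSAI SoC
compiler_plus_zoo = baseline_uj / 200  # zoo network + TENSAI compiler

extra_gain_from_zoo = compiler_only / compiler_plus_zoo  # ≈ 3.7x

print(f"compiler only:      {compiler_only:.1f} µJ")
print(f"compiler + zoo net: {compiler_plus_zoo:.1f} µJ")
print(f"additional factor from zoo network: {extra_gain_from_zoo:.1f}x")
```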

Through its interface with Edge Impulse, TENSAI Flow allows developers to securely acquire and store training data, so customers train once and have real-world models for future development. The software automatically optimises TensorFlow Lite AI models for Eta Compute's TENSAI SoC. Using TENSAI Flow, the TENSAI SoC can seamlessly load AI models that include sensor interfaces.
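
The announcement does not detail the optimisation pipeline, but the step being automated is conceptually similar to standard post-training int8 quantisation with the TensorFlow Lite converter. A minimal, generic sketch, assuming a trained Keras `model` and a small calibration set `x_calib` (both hypothetical), not Eta Compute's actual tooling:

```python
# Minimal sketch of post-training int8 quantisation with the standard
# TensorFlow Lite converter; generic TFLite, not Eta's TENSAI Flow tooling.
import tensorflow as tf

def representative_data_gen():
    # Yield a few real input samples so the converter can calibrate
    # quantisation ranges; x_calib is assumed developer-supplied data.
    for sample in x_calib[:100]:
        yield [tf.cast(sample[tf.newaxis, ...], tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```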

TENSAI Flow provides the foundation to automatically provision and connect devices to the cloud, and to upgrade firmware over the air as new models or data become available.