Neurala unveils AI explainability technology in an industry first


Vision AI software company Neurala has announced the launch of its AI explainability technology, purpose-built for industrial applications and manufacturing.

This feature is intended to help manufacturers improve quality inspections by accurately identifying the objects in an image that are causing a particular problem or presenting an anomaly.

“Explainability is widely recognised as a key feature for AI systems, especially when it comes to identifying bias or ethical issues. But this capability has immense potential and value in industrial use cases as well, where manufacturers demand not only accurate AI, but also need to understand why a particular decision was made,” explained Max Versace, CEO and co-founder of Neurala.

“We’re excited to launch this new technology to empower manufacturers to do more with the massive amounts of data collected by IIoT systems, and act with the precision required to meet the demands of the Industry 4.0 era.”

Neurala’s explainability technology was built to address the digitisation challenges associated with Industry 4.0. Industrial IoT systems constantly collect massive amounts of anomaly data that feed the quality inspection process. With the introduction of the explainability feature, manufacturers will be able to derive more actionable insights from these datasets, identifying whether an image truly is anomalous or whether the error is a false positive resulting from other conditions in the environment, such as lighting. This gives manufacturers a more precise understanding of what went wrong, and where in the production process, allowing them to take the proper action – whether that is fixing an issue in the production flow or improving image quality.

Manufacturers can use Neurala’s explainability feature with either Classification or Anomaly Recognition models. Explainability highlights the area of an image that caused the vision AI model to reach a specific decision about a defect: for Classification, the class assigned to an object; for Anomaly Recognition, the judgement of whether an object is normal or anomalous. Armed with this detailed understanding of the AI model and its decision-making, manufacturers will be able to build better-performing models that continuously improve processes and efficiencies.
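
Neurala has not published how its feature is implemented, but heatmaps of this kind are commonly produced with gradient-based saliency techniques such as Grad-CAM. The sketch below shows that general approach, using a stock torchvision classifier as a stand-in for an inspection model; the model, layer choice, and preprocessing are illustrative assumptions, not Neurala’s pipeline.

```python
# Illustrative Grad-CAM sketch (Selvaraju et al., 2017) -- NOT Neurala's
# implementation, whose internals are not public. It produces a heatmap of
# the image regions that most influenced a classifier's decision.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture the last conv block's feature maps and their gradients via hooks.
activations, gradients = {}, {}
model.layer4.register_forward_hook(
    lambda mod, inp, out: activations.update(value=out.detach()))
model.layer4.register_full_backward_hook(
    lambda mod, gin, gout: gradients.update(value=gout[0].detach()))

def grad_cam(image: torch.Tensor) -> torch.Tensor:
    """Return a [0, 1] heatmap, upsampled to the input size, highlighting
    the regions that most influenced the model's top class score."""
    logits = model(image)                    # image: (1, 3, H, W), normalised
    top_class = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, top_class].backward()          # gradient of the winning class

    acts = activations["value"]              # (1, C, h, w) feature maps
    grads = gradients["value"]               # (1, C, h, w) their gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)
```

Overlaying the resulting heatmap on the inspected image makes it possible to check, for example, whether a flagged defect sits on the part itself or on a glare spot caused by lighting – the kind of false-positive triage described above.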

Explainability is now available as part of Neurala’s cloud solution, Brain Builder, and will soon be available with Neurala’s on-premises software, VIA (Vision Inspection Automation). The technology is simple to implement, with no custom code required.