OUTLOOK 2018 – Machine learning and system design enablement: a key to future innovation


In the past decade, advancements in machine learning have given us facial recognition on social media, more reliable and capable self-driving cars, practical speech recognition, effective web searches and a vastly improved understanding of the human genome. It has become so pervasive that, today, you probably use it dozens of times a day without even knowing it.

Machines that learn – as opposed to machines explicitly programmed to handle every possible situation – have the potential to make software applications more accurate than human intelligence at reacting to data and predicting outcomes.

This kind of computer learning often uses convolutional neural networks (CNNs), whose layered structure is loosely inspired by the interconnected neurons that make up the human nervous system and brain. A computer-based CNN receives input data from sensors, just as you receive input from your senses, and then applies complex statistical analysis across many connected layers of processing to predict an output. If the system includes an actuator, it then performs an action based on that output.
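The sensor-to-output pipeline described above can be sketched in a few lines. This is a minimal toy forward pass – a random, untrained network on an invented 6×6 "sensor" patch – meant only to show the shape of the computation (convolution, activation, then a connected layer producing a prediction score):

```python
# A minimal sketch of a CNN-style forward pass. The input and weights
# are invented and random -- this is the structure of the computation,
# not a trained network.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Slide a kernel over the image, summing element-wise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Non-linear activation applied between layers."""
    return np.maximum(x, 0.0)

# "Sensor" input: a 6x6 patch of pixel intensities.
image = rng.random((6, 6))

# One convolutional layer extracts local features ...
features = relu(conv2d(image, rng.standard_normal((3, 3))))

# ... and a fully connected layer turns them into a prediction score.
weights = rng.standard_normal(features.size)
score = float(features.flatten() @ weights)

print(features.shape)   # (4, 4) feature map
```

A real CNN stacks many such layers and learns the kernel and weight values from data rather than drawing them at random.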

“Considering all the data that exists in a single image – and then multiplying that by frames per second – how is it possible that we’re not all carrying data centres around in our cars?”

Craig Cochran

The ever-increasing complexity of CNN systems is a challenge for designers. That’s why we’re working closely with these system developers to understand their requirements and make sure we have the hardware and software tools they need to bring their innovations to market quickly.

Supervised learning
Supervised algorithms require humans to provide both input and desired output, in addition to furnishing feedback about the accuracy of predictions during training. After training is complete, the algorithm applies what was learned to new data. An example could be a security system that uses facial recognition algorithms to identify the employees of an organisation and sound an alarm when a non-employee is detected.

By comparing certain elements of a person’s facial structure, the system can determine ‘friend’ or ‘foe’, ‘yes’ or ‘no’, ‘1’ or ‘0’, an unlocked door or an alarm.

Unsupervised learning
Unsupervised algorithms are not trained with desired outcome data. Instead, they review data iteratively – often using deep learning techniques – and arrive at conclusions on their own. These algorithms are used for more complex processing tasks than supervised learning systems: they can discover hidden patterns in data or find efficiencies the system’s designer never anticipated. An example of unsupervised learning could be a car’s safety system that interprets auditory input. Perhaps it has been ‘told’ what only one kind of police siren sounds like, but by forming its own interpretations of data not specifically given by the system developers, it can also detect other styles of emergency siren from characteristics such as volume, changes in pitch and how quickly the sound is approaching. Using unsupervised learning, it can detect the patterns that ultimately mean ‘emergency vehicle coming, get out of the way’.
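The siren example can be sketched with simple clustering. The audio features below (volume, rate of pitch change) are invented, and k-means stands in for whatever the real system would use; the point is that no labels are provided, yet siren-like sounds end up grouped together:

```python
# A sketch of unsupervised learning: k-means clustering on invented
# audio features (volume, rate of pitch change). No labels are given.
import numpy as np

sounds = np.array([
    [0.9, 0.8], [0.85, 0.9], [0.95, 0.85],   # loud, fast-sweeping sounds
    [0.2, 0.1], [0.1, 0.15], [0.25, 0.05],   # quiet, steady sounds
])

def kmeans(data, k=2, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sound to its nearest cluster centre ...
        dists = np.linalg.norm(data[:, None] - centres[None], axis=2)
        assign = dists.argmin(axis=1)
        # ... then move each centre to the mean of its members.
        centres = np.array([data[assign == c].mean(axis=0) for c in range(k)])
    return assign

groups = kmeans(sounds)
print(groups)  # the three siren-like sounds share one cluster id
```

The algorithm was never told which sounds are sirens; it discovered the grouping from the structure of the data alone, which is the essence of unsupervised learning.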

Challenges facing neural network designers
Understandably, systems that use supervised or unsupervised machine learning require huge amounts of computing power. Pixel by pixel, datum by datum and practically byte by byte, the data gathered by sensors must be extracted, processed, recombined, aggregated, processed again, convolved and classified.

Considering all the data that exists in a single image – and then multiplying that by the number of frames per second (for video, or any sensor streaming data over time) – how is it possible that we’re not all carrying data centres around in our cars?
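Some back-of-envelope arithmetic puts the question above in perspective. Assuming a single 1920×1080 camera producing 8-bit RGB frames at 30 frames per second (representative figures, not any specific product):

```python
# Rough data-rate arithmetic for one automotive camera feed.
# Assumptions: 1920x1080 resolution, 3 bytes per pixel (8-bit RGB), 30 fps.
width, height, channels = 1920, 1080, 3
bytes_per_frame = width * height * channels     # ~6.2 MB per frame
fps = 30
bytes_per_second = bytes_per_frame * fps        # ~187 MB/s, per camera

print(bytes_per_frame)    # 6220800
print(bytes_per_second)   # 186624000
```

Multiply that by the several cameras, radars and lidars on a modern vehicle, and the need for highly efficient, specialised processing becomes obvious.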

When compared with hand-designed feature-extracting systems, neural networks can actually reduce memory requirements and computational complexity – largely through weight sharing – while simultaneously improving performance for applications where the input has local correlation (for example, ADAS in autonomous driving applications or facial recognition in a local security system).
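A rough illustration of why convolutional layers cut memory is weight sharing: one small kernel is reused across the whole image instead of every output being wired to every input. The figures below assume a 224×224 greyscale input mapped to a same-sized output, purely for comparison:

```python
# Parameter-count comparison: fully connected layer vs. one shared
# 3x3 convolution kernel, assuming a 224x224 single-channel input.
pixels = 224 * 224               # 50_176 input values

# Fully connected: every output value connects to every input pixel.
dense_weights = pixels * pixels  # ~2.5 billion weights to store

# Convolutional: one 3x3 kernel slides over the image, reused everywhere.
conv_weights = 3 * 3             # 9 weights, shared across all positions

print(dense_weights)   # 2517630976
print(conv_weights)    # 9
```

Real networks have many kernels and channels, but the same sharing principle keeps their memory footprint far below a naive fully connected design.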

The computational resource requirements for neural networks are met by using graphics processing units (GPUs), digital signal processors (DSPs) or other silicon architectures optimised for high throughput and low energy when executing CNN workloads. Developing software algorithms for specialised DSPs that offer the right mix of computation and memory resources to run CNNs at high efficiency is where many engineers will focus in the coming years.

To meet the computational requirements beyond the DSP, Cadence has overseen a transformation in system design approach – from a bottom-up, chip-centric design style to a top-down, applications-driven, system-centric one – as chips and DSPs become more complex and capable of integrating far more functionality.

To that end, product requirements are defined not so much by the chip on its own as by the application or vertical segment in which the system will run. Applications-based system design opens new opportunities for semiconductor and systems designers and for their supply chain partners. Key to designing with a systems approach is a streamlined and automated co-design and verification flow between IC, package and PCB design.

System design enablement
The complexity inherent in a systems-level approach means redesigning many tools to improve turnaround times and quality of results. Making tremendous progress in this area, Cadence has introduced tools that offer massively parallel capabilities to spread the load among many CPUs and computers in the cloud and take advantage of deep learning techniques to deliver improved results.

The move to a holistic System Design Enablement strategy requires a different approach to the massive compute resources required for systems that use machine learning technology. This new design methodology requires designers to think about the entire design, from the chip design and packaging to the electronic boards that contain the chips, to the subsystem and complete system, however it may be defined.

To develop the new tools, IP, processors and platforms this demands, companies must take a three-pronged, ‘whole body’ approach to developing solutions by:

  • Developing internal expertise on the demanding requirements of the most important end markets
  • Working directly with leading systems companies to develop products to meet their upcoming requirements
  • Partnering with other industry leaders, including software companies, to provide comprehensive integrated solutions

Cadence’s System Design Enablement strategy allows system designers and component engineers to stand out with new, intelligent and elegant solutions to the challenges of creating machine learning/deep learning systems.

Cadence Design Systems enables electronic systems and semiconductor companies to create the innovative end products that are transforming the way people live, work and play. Cadence software, hardware and semiconductor IP are used by customers to deliver products to market faster. The company’s System Design Enablement strategy helps customers develop differentiated products – from chips to boards to systems – in mobile, consumer, cloud data centre, automotive, aerospace, IoT, industrial and other market segments. Cadence is listed as one of FORTUNE Magazine’s 100 Best Companies to Work For.