Outlook 2019 - Machine learning is set to radically change chip design

For the past 20 years or more, companies involved in electronic design automation (EDA) have concentrated on creating innovative new algorithms to keep chip, board, and system designers as productive as possible as systems have become dramatically more complex. These enhancements make it possible to design a 7-billion-gate chip in 7nm technology today in the same time it took to design a 100,000-gate chip in 1990.

But you haven’t seen anything like what you’re going to see in the next five to ten years, as we apply machine learning techniques to get orders of magnitude more productivity for chip designers.

Why now?

Three main forces are converging to enable fundamental changes to the design process.

First, massive cloud computing resources are now available on a scale that in-house company server farms have never been able to provide. Cloud computing not only allows for greater parallelism (more tasks being completed on different parts of the design at the same time) but also more intelligence, allowing the computing resources to apply machine learning algorithms at a much larger scale.

Second, there is now more R&D into unique processor architectures that facilitate machine learning algorithms. Chipmakers are exploring new architectures that significantly increase the sheer volume of data that can be processed, setting up one of the biggest shifts in chip architectures in decades.

Those changes include:

• New processor architectures focusing on ways to process larger blocks of data per cycle, sometimes with less precision or by prioritising specific operations over others, depending upon the application (a sketch of the precision trade-off follows this list)

• More targeted processing elements scattered around a system and placed close to memory; instead of relying on a single main processor, accelerators are chosen to suit the data type and application

• New memory architectures that alter how data is stored, read, written, and accessed

• Fusing different data types into patterns, increasing data density while minimising discrepancies between data types

• Making packaging a core component of architectures, increasing the emphasis on ease of modification
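
To make the first of these concrete, here is a minimal sketch in Python (using NumPy, chosen purely for illustration) of the trade-off behind reduced-precision processing: quantising 32-bit floating-point operands down to 8-bit integers lets four times as many values move through the same memory bandwidth per cycle, at the cost of a small, bounded rounding error.

```python
import numpy as np

np.random.seed(0)  # reproducible example values

# Example operand vectors, as a hardware datapath might see them.
x = np.random.uniform(-1.0, 1.0, size=1024).astype(np.float32)
w = np.random.uniform(-1.0, 1.0, size=1024).astype(np.float32)

def quantise(v):
    """Symmetric linear quantisation of a float vector to int8."""
    scale = np.abs(v).max() / 127.0   # largest magnitude maps to +/-127
    return np.round(v / scale).astype(np.int8), scale

xq, x_scale = quantise(x)
wq, w_scale = quantise(w)

# int8 dot product accumulated in a wider integer (as accelerators do),
# then rescaled back to the floating-point domain.
approx = int(np.dot(xq.astype(np.int32), wq.astype(np.int32))) * x_scale * w_scale
exact = float(np.dot(x, w))

print(f"exact={exact:.4f}  approx={approx:.4f}  "
      f"error={abs(approx - exact) / abs(exact):.2%}")
# int8 operands are a quarter the width of float32, so the same bus
# and the same cycle can carry four times as many of them.
```

This is the same trade-off that dedicated machine learning accelerators exploit in hardware, where int8 multiply-accumulate units are far smaller and cheaper than their float32 equivalents.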

Finally, there has been more R&D into EDA design flows that use machine learning, analytics, and optimisation technologies. This algorithm development is in its infancy, but we are rapidly advancing the technology to automate the routing and tuning of devices to improve reliability, circuit performance, and resilience.
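
As a hedged illustration of what machine learning inside such a flow might look like, the sketch below (in Python; the tuning knobs and the run_flow function are hypothetical stand-ins, not a real EDA tool API) uses a surrogate model to learn which tool settings tend to improve worst-case timing slack, and to propose the next configuration to run:

```python
import random
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tunable place-and-route settings; a real flow exposes
# many more knobs (effort levels, congestion weights, and so on).
def random_settings():
    return [random.uniform(0.5, 2.0),   # placement density target
            random.uniform(0.0, 1.0),   # congestion weight
            random.choice([0, 1])]      # timing-driven routing on/off

# Stand-in for running the actual flow and measuring worst slack.
# In practice this would invoke the P&R tool and parse its report.
def run_flow(settings):
    density, congestion, timing_driven = settings
    return -(density - 1.2) ** 2 - 0.3 * congestion + 0.5 * timing_driven

history_x, history_y = [], []

# Seed the surrogate with a handful of random runs.
for _ in range(8):
    s = random_settings()
    history_x.append(s)
    history_y.append(run_flow(s))

# Iteratively: fit the surrogate, score candidates, run the most promising.
for _ in range(10):
    model = RandomForestRegressor(n_estimators=50).fit(history_x, history_y)
    candidates = [random_settings() for _ in range(200)]
    best = max(candidates, key=lambda c: model.predict([c])[0])
    history_x.append(best)
    history_y.append(run_flow(best))

print("best slack found:", max(history_y))
```

In a production setting the surrogate would be trained on thousands of past runs rather than a handful, but the loop structure (run, learn, propose, repeat) is the same.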

Machine and deep learning algorithms are rapidly evolving and present an opportunity to transform the electronics industry and create a new silicon renaissance with advances in software and IP. They can be applied to internal and external usage models that serve different needs. Internal improvements include increased performance and accuracy for existing tools and technologies. External improvements include modifications to existing design flows and methodologies to improve productivity.

Automation has been a fundamental driver of EDA technology from the start, and the push to improve designer productivity only increases as the complexity of chip designs grows.

To meet the needs of customers and to employ the latest in machine learning techniques, two major areas of R&D are necessary:

• Improving tools and flows to utilise machine learning techniques throughout, making EDA tools easier to use and improving the designer experience while addressing the ever-larger volumes of design and simulation data that challenge productivity (AI internally)

• Using real-world customer tool usage information to predict faster, more effective ways to do design, including learning from past design experience to get maximum productivity increases (AI externally)

At Cadence, we have first-hand expertise in developing new processor architectures that run AI and machine learning algorithms much more efficiently. Since 2013, we have been using machine learning in our products and continue to push the leading edge to improve usability and performance.

Our Tensilica processor cores are widely used in mobile handsets, drones, automotive, surveillance, and virtual reality products. Our challenge is to continue to innovate and develop new architectures and algorithms that our customers can use in their products.

Cadence is active in machine and deep learning research, including research in adjacent technologies such as data analytics, optimisation, and distributed computing architectures.

What does the future hold?

The overall goal of EDA tools of the future is a fully automated, no-human-in-the-loop circuit layout generator that enables users with little or no electronic design expertise to complete the physical design of electronic hardware. This layout platform should support the automated physical layout of multiple types of electronic components, including analog and digital systems on chips (SoCs), systems in packages (SiPs), and printed circuit boards (PCBs).

Achieving this goal requires developing the infrastructure, algorithms, methods, and software to demonstrate no-human-in-the-loop physical layout, transforming a complete design netlist into a manufacturable layout database. It is envisioned that this platform will leverage applied machine learning methodologies to continuously evolve and improve performance as new data sets become available. The customisation offered by training will empower differentiation through the breadth and quality of the training sets available to the end user, giving those with an existing database of designs an asymmetric advantage. Through 100% automation of electronics layout, this platform is expected to usher in a new era of 24-hour design of hardware systems.
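
Very schematically, and with every class and function below a hypothetical placeholder rather than a real product API, the shape of such a platform might look like the following Python sketch, in which each completed design enriches the training set that improves the next one:

```python
import random

# A schematic no-human-in-the-loop layout service. All classes and
# functions are hypothetical stubs; the point is the feedback loop.

class LayoutModel:
    """Stand-in for a learned placement-and-routing model."""
    def __init__(self):
        self.skill = 0.2  # crude proxy for model quality

    def generate(self, netlist):
        # A real model would emit a layout database; here, a dict.
        return {"netlist": netlist,
                "quality": self.skill + random.uniform(0.0, 0.5)}

    def learn(self, examples):
        # Each new training example nudges the model's quality up.
        self.skill = min(1.0, self.skill + 0.05 * len(examples))

def sign_off(layout, target_quality):
    """Stand-in for DRC and performance-target checks."""
    return layout["quality"] >= target_quality

def design(netlist, target_quality, model, training_db):
    layout = model.generate(netlist)
    while not sign_off(layout, target_quality):
        model.learn([layout])              # learn from the failed attempt
        layout = model.generate(netlist)
    training_db.append((netlist, layout))  # successes enrich the data set
    model.learn(training_db)               # and improve the next design
    return layout

model, db = LayoutModel(), []
for chip in ["soc_a", "sip_b", "pcb_c"]:
    result = design(chip, target_quality=0.6, model=model, training_db=db)
    print(chip, "->", round(result["quality"], 2))
```

The stubs are deliberately trivial; what matters is the feedback structure, in which sign-off failures become training signal and every successful design becomes training data.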

A key consideration of this approach is that designers must be able to adopt the tools and methodologies created in this program in order to successfully address the productivity gap. Based on our experience in both EDA and machine learning, this will require a series of staged introductions of the technology, allowing users to gain an understanding of how best to leverage the tools to achieve the desired results, and allowing the system to learn from the users (either explicitly, by users codifying their methodologies, or implicitly, by deriving training data from their actions). We believe that this staged approach, reflected in staged releases of new commercial tools and methodologies, will best meet the desired outcomes of this program.

This process will also enable the creation and modification of performance targets (e.g., bandwidth, frequency response, power consumption) and may draw on past preferences for the given circuit. These performance targets can be reused later in the process for implementation verification and as feedback to the various layout-creation steps.
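
A minimal sketch of how such targets might be captured and replayed at verification time (in Python; the field names and units are illustrative assumptions, not a real tool schema):

```python
from dataclasses import dataclass

# Illustrative performance-target record; the fields and units are
# assumptions for this sketch, not a real tool schema.
@dataclass
class PerformanceTargets:
    bandwidth_gbps: float        # required data bandwidth
    target_frequency_mhz: float  # target clock frequency
    max_power_mw: float          # power-consumption budget

    def check(self, measured_bandwidth, measured_frequency, measured_power):
        """Replayed at verification time to grade a candidate layout."""
        return (measured_bandwidth >= self.bandwidth_gbps
                and measured_frequency >= self.target_frequency_mhz
                and measured_power <= self.max_power_mw)

# Targets set early in the flow...
targets = PerformanceTargets(bandwidth_gbps=25.6,
                             target_frequency_mhz=1200.0,
                             max_power_mw=450.0)

# ...and checked against results reported by a layout-creation step.
print(targets.check(measured_bandwidth=26.1,
                    measured_frequency=1250.0,
                    measured_power=430.0))  # True: all targets met
```

Keeping the targets as a single record means the same specification drives both the layout steps and the verification feedback, rather than being re-entered at each stage.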

Expect to see major enhancements in EDA design flows and processor IP in the next few years, with incremental improvements for many years thereafter. EDA tools will become much more productive, and companies will be able to train their tools with knowledge gained from their own designs. Processors will evolve to run these challenging algorithms more efficiently. There are many opportunities for machine learning to make a huge difference in chip, board, and system design, and this kind of innovation is imperative if design productivity is to keep up with the industry's pace of change.

Author details: Sanjay Lall is Corporate Vice President, Operations EMEA, Cadence