EDA learns from experience

Several years ago at the Design Automation Conference (DAC) the talk was of big data.

Chip designers could find rich seams of information and get each successive project completed faster by mining their own databases. Such projects looked to be prime candidates because of electronic design automation’s (EDA) ability to generate enormous datasets.

Areas such as physical verification did prove able to play EDA’s version of Moneyball. GlobalFoundries mined its database of layouts to find the pathological cases more or less guaranteed to cause yield failures. With millions of transistors per design and a lot of designs passing through, identifying the trouble spots was not easy but it was achievable. But other areas found the idea of data mining to be more promise than reality.

Logic verification is another area that seemed ripe with huge datasets. What could deliver more data than simulations of billions of clock cycles? But work in this area foundered because it turned out that patterns from one design did not carry over reliably to others.

By the time this year’s DAC opened, big data was old news. The emphasis was on the not entirely unrelated field of machine learning.

Although the problems that plagued big data in this field remain, executives are convinced machine learning in EDA is not a fad.

Joe Sawicki, general manager of the IC group at Mentor, a Siemens business, says: “I think in a few years the question of what are you doing with machine learning will be like asking: what are you doing with C++?”

Tom Beckley, general manager of the custom IC group at Cadence Design Systems, says the tools used to design ICs into packages and PCBs are changing: “We are layering in machine learning algorithms through all these platforms so we can be more productive.”

John Aynsley, co-founder and CTO of training firm Doulos, warns of problems similar to those that upset the idea of using big-data analytics in EDA. Speaking during a seminar organised by verification consultancy TV&S in April, he pointed to the deep-learning technology now used for many image- and audio-recognition problems: “To do deep learning you need huge datasets. Very often you simply don’t have a large enough dataset to warrant the deployment of deep learning.”

As in other fields such as medical research, what teams find is that they have a lot of relatively small datasets. Again, physical design is the main beneficiary of techniques like deep learning, Aynsley says, pointing to Solido, acquired by Mentor 18 months ago, as an example. There do also seem to be applications in logic verification, the area that challenged earlier big-data techniques. Rob Aitken, 2019 DAC chair and an Arm fellow, says the company has been able to apply deep learning to analysing the effectiveness of test vectors applied to large designs. In other cases, the problem of many small datasets rears its head and challenges all manner of machine-learning algorithms.

In one experiment, OneSpin used principal component analysis to try to find patterns in how long the different proof engines used for formal verification take to produce a workable answer. Predicting proof times today is extremely difficult. Unlike simulation, where the engineer can at least see how many cycles have run even if they have not produced any worthwhile results, formal-engine performance is far lumpier. “It can be working really hard and then make a breakthrough,” says Cadence marketing director Pete Hardee. And sometimes it just gets stuck.
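
OneSpin has not published the details of its experiment, so the following is only a minimal sketch of the general approach: project a table of per-property features and measured runtimes onto principal components and check whether a few directions explain most of the variation. The feature set and data here are hypothetical, invented purely for illustration.

```python
# Illustrative sketch only: hypothetical feature data, not OneSpin's actual setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Each row is one (design, property) pair; columns stand in for measured
# metrics such as flop count, cone-of-influence depth or assertion complexity.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))
runtimes = np.abs(features @ rng.normal(size=6) + rng.normal(size=200))

# Standardise, then project the feature space onto its principal components
# to see whether a few directions capture most of the variation.
X = StandardScaler().fit_transform(features)
pca = PCA(n_components=3)
components = pca.fit_transform(X)
print("variance explained:", pca.explained_variance_ratio_)

# A simple check of whether the leading component tracks proof runtime; in
# OneSpin's experience such patterns did not carry over to unseen designs.
corr = np.corrcoef(components[:, 0], runtimes)[0, 1]
print("correlation of first component with runtime:", round(corr, 2))
```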

Machine learning is not quite ready to make a breakthrough in prediction. In OneSpin’s experiments the training set did show an ability to predict runtimes, but when deployed on designs from outside the training data the correlation dropped off. Both OneSpin and Cadence have had more success in using machine learning to orchestrate how their tools deploy proof engines on different designs and speed up verification. Dominik Strasser, co-founder and vice president of engineering at OneSpin, says a benefit of this kind of automated orchestration is that users do not have to weigh up different provers: the tool can help make the decision.
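
Neither company has described its orchestration scheme in detail. As a rough illustration of the idea, the sketch below trains a classifier on hypothetical records of which engine solved past properties fastest and then uses it to choose which engines to launch first on a new property. The engine names and features are invented for the example.

```python
# Illustrative sketch only: not OneSpin's or Cadence's actual orchestration logic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ENGINES = ["bdd", "sat_bmc", "induction", "pdr"]

# Hypothetical training data: per-property features (design size, assertion
# depth, memory count, ...) and the engine that solved each property fastest.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 5))
y_train = rng.integers(0, len(ENGINES), size=500)

# Fit a classifier that maps property features to the most promising engine.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def pick_engines(property_features, top_k=2):
    """Return the top_k engines to launch first for a new property."""
    probs = model.predict_proba([property_features])[0]
    ranked = np.argsort(probs)[::-1][:top_k]
    return [ENGINES[i] for i in ranked]

print(pick_engines(rng.normal(size=5)))
```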

Sawicki says that, in many types of EDA tool, machine learning provides a way of reducing the complexity users face when first setting them up. “Often, when it comes to benchmarking different tools, it’s not the tool that’s being benchmarked, it’s the user,” Sawicki says.

Aitken adds: “If you take most EDA tool flows, they have a fair number of tweakable knobs. By applying machine learning, you can have a tool that learns the best settings for each type of design.”
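
As a loose illustration of what learning the best settings for each type of design could look like, the sketch below simply recalls the knob settings that worked on the most similar previously seen design. The design features and setting names are hypothetical, not any vendor’s actual flow.

```python
# Illustrative sketch: recommend knob settings from the most similar past design.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Past designs: feature vectors (gate count, utilisation, clock domains) and
# the settings that gave the best result on each. All values are hypothetical.
past_features = np.array([
    [1.2e6, 0.72, 4],
    [4.0e5, 0.55, 2],
    [2.3e6, 0.81, 8],
    [9.0e5, 0.63, 3],
])
past_settings = [
    {"effort": "high", "congestion_opt": True},
    {"effort": "medium", "congestion_opt": False},
    {"effort": "high", "congestion_opt": True},
    {"effort": "medium", "congestion_opt": True},
]

# Scale features so that gate count does not dominate the distance metric.
scale = past_features.max(axis=0)
nn = NearestNeighbors(n_neighbors=1).fit(past_features / scale)

def recommend(new_design):
    """Suggest settings taken from the most similar previously seen design."""
    _, idx = nn.kneighbors([np.asarray(new_design) / scale])
    return past_settings[idx[0][0]]

print(recommend([1.0e6, 0.70, 4]))
```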

Tool performance itself is another target for machine learning. “[One] model is using machine learning to create fitness functions or additional heuristics to be used in a standard EDA tool, to see whether, for example, we can make a better router,” Aitken says.

ML techniques
The heuristics developed by machine-learning techniques may be able to overcome the problems encountered by conventional EDA algorithms, which focus on numerical optimisation. These often run into trouble when elements need to be organised hierarchically, as in the longstanding problem of how to divide a large design across multiple FPGAs, a common requirement for ASIC prototyping. Frank Schirrmeister, who handles product management and marketing for prototyping systems at Cadence, says machine learning is the group’s likely next step.

Plunify’s InTime tool for FPGA design straddles the divide between tool performance and managing user settings. The company found that there are often counter-intuitive directives and tool settings that, if applied to FPGA placement and routing, result in better timing. For example, congestion may make it difficult to use embedded DSP cores effectively. Limiting use of those cores reduces congestion and improves overall timing, even though it puts more pressure on the programmable logic. The tool learned this by trying many different options in multiple runs on cloud computers and then analysing the results.
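
Plunify has not disclosed InTime’s internals, but the trial-and-analyse loop it describes can be sketched roughly as below: sample random combinations of settings, run place and route (mocked here with a toy timing model), then check which individual settings are associated with better slack. The setting names and the timing model are invented for the example.

```python
# Illustrative sketch of multi-run exploration, not Plunify's actual algorithm.
import random

SETTINGS_SPACE = {
    "dsp_usage": ["full", "limited"],
    "placement_effort": ["standard", "high"],
    "retiming": [True, False],
}

def run_place_and_route(settings):
    """Stand-in for a real P&R run; returns worst slack in ns. The toy model
    mimics the counter-intuitive effect described in the article: limiting
    DSP use eases congestion and improves timing."""
    slack = random.gauss(-0.3, 0.1)
    if settings["dsp_usage"] == "limited":
        slack += 0.25
    if settings["placement_effort"] == "high":
        slack += 0.05
    return slack

# Try many random combinations (in practice, parallel runs on cloud machines).
results = []
for _ in range(50):
    settings = {k: random.choice(v) for k, v in SETTINGS_SPACE.items()}
    results.append((run_place_and_route(settings), settings))

# Analyse the results: which individual settings correlate with better slack?
for knob, options in SETTINGS_SPACE.items():
    for option in options:
        slacks = [s for s, cfg in results if cfg[knob] == option]
        print(f"{knob}={option}: mean slack {sum(slacks) / len(slacks):.2f} ns")

best_slack, best_settings = max(results, key=lambda r: r[0])
print("best run:", round(best_slack, 2), best_settings)
```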

The second way to improve tool performance is to provide an alternative to brute-force analysis of every part of a design to work out whether, for example, circuits will meet timing or avoid being affected by brownouts caused by excessively active logic in neighbouring blocks. This kind of analysis is the target of Ansys’ SeaScape platform, which began life at start-up Gear Design Solutions when the emphasis was on big data.

Vic Kulkarni, vice president and business strategist at Ansys, says the company’s tools are being integrated into SeaScape to provide “prescriptive analytics”. Instead of trying out every possible combination in a much more time-consuming full evaluation, SeaScape uses a learned understanding of circuit topology to home in on likely trouble spots. “SeaScape helps narrow down the search space,” Kulkarni says.
He notes that it is important not to become too reliant on machine learning for this purpose. “Every design is different and you can miss things if you push too hard on machine learning.”
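
Setting Kulkarni’s caveat to one side, the general pattern of prescriptive analytics (using a learned model to decide where to spend expensive, detailed analysis) can be sketched as follows. This is not SeaScape’s actual method; the features, labels and threshold are hypothetical.

```python
# Illustrative sketch of pruning brute-force analysis with a learned filter.
# Not Ansys SeaScape's method; feature and threshold choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical per-instance features: local switching activity, supply-rail
# distance, neighbour density. Label: whether detailed analysis found a
# voltage-drop violation on past designs.
X_train = rng.normal(size=(2000, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2]
           + rng.normal(scale=0.3, size=2000)) > 1.2

clf = LogisticRegression().fit(X_train, y_train)

# On a new design, run the expensive sign-off analysis only where the model
# sees meaningful risk, instead of on every instance in the layout.
X_new = rng.normal(size=(10000, 3))
risk = clf.predict_proba(X_new)[:, 1]
candidates = np.where(risk > 0.2)[0]  # deliberately low threshold to limit misses
print(f"detailed analysis on {len(candidates)} of {len(X_new)} instances")
```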

Aynsley says R&D effort in machine learning is getting more ambitious: “You might try to predict the location of bugs or predict what your final coverage will be in a simulation based on constrained random verification using a particular set of test vectors. There have been papers published in the last few years that are moving towards this but it’s an open question how well it’s really going to work.

“A really ambitious goal would be to have as input the design’s RTL and the output is a test bench. But I really don’t think it’s going to happen in the short term.”