Outlook 2016: The drive to system level design

ESL has come a long way in 20 years, but there is still work to be done.

2016 will mark the 20th anniversary of Gary Smith’s first use of the phrase ‘electronic system level’. Smith, at the time with Gartner, coined the phrase during a presentation to the Design Automation Conference in 1996. On a slide titled ‘The Design Continuum’, he put electronic design automation (EDA) in perspective between mechanical CAD and embedded software.

Since then, ESL, together with the industry’s desire to move to higher levels of abstraction, has been the subject of much debate and many conference panels.

Early trailblazer projects – like Synopsys’ Behavioral Compiler and Cadence’s Virtual Component Co-Design – have come and gone. The advent of IP re-use saved the day by raising productivity enough to keep chip design costs from becoming prohibitive.

High-level verification languages – like e, VERA and SuperLog – paved the way for verification to be automated with SystemVerilog. SystemC was introduced as a design and verification language above signal-level RTL; it feeds high-level synthesis (HLS) and, with the introduction of the TLM-2.0 APIs in 2008, became the backbone for integrating transaction-level models (TLMs) into virtual platforms.
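To make the abstraction concrete, here is a minimal sketch of the TLM-2.0 blocking-transport style on which virtual platforms are built, assuming the Accellera SystemC and TLM-2.0 libraries; the module names and the fixed 10ns latency are illustrative rather than taken from any product. Instead of driving signals cycle by cycle, the initiator hands a generic payload to the target in a single function call:

```cpp
// Minimal TLM-2.0 sketch: an initiator issues a blocking write to a
// memory-like target through simple sockets. All names are illustrative.
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

struct SimpleMemory : sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char mem[256];

    SC_CTOR(SimpleMemory) : socket("socket") {
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    // Blocking transport: a function call instead of signal wiggling;
    // timing is approximated by adding a fixed latency to 'delay'.
    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        if (trans.is_write())
            std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());
        else
            std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
        delay += sc_time(10, SC_NS);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct SimpleInitiator : sc_module {
    tlm_utils::simple_initiator_socket<SimpleInitiator> socket;

    SC_CTOR(SimpleInitiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned char data = 0x42;
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(&data);
        trans.set_data_length(1);
        trans.set_streaming_width(1);
        socket->b_transport(trans, delay);  // whole transaction in one call
    }
};

int sc_main(int, char*[]) {
    SimpleInitiator initiator("initiator");
    SimpleMemory    memory("memory");
    initiator.socket.bind(memory.socket);
    sc_start();
    return 0;
}
```

Because the payload carries address, data and response status in a single object, models like this typically run orders of magnitude faster than RTL simulation, which is what makes early software execution on virtual platforms practical.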

Software as part of chip development changed the equation completely and became the dominant cost factor for complex designs. The industry has been striving towards a so-called ‘shift left’ to allow continuous, agile integration of hardware and software earlier and earlier.

Now two decades into the race to raise levels of abstraction, it is worth reviewing the status before predicting the future.

The design flow has split into two parts: IP creation and re-use, and IP assembly. For IP creation, users can choose from eight approaches to bring new functions into a design, six of which (including simply re-using IP) work at higher levels of abstraction. Designers can re-use a hardware block if it is readily available as hardware IP; there is a healthy IP market that continues to grow. Alternatively, they can implement the function manually in hardware as a new block, starting with RTL, in the ‘good old way’. The same options exist on the software side, where users can implement software manually and run it on a standard processor, or re-use software blocks that are readily available as software IP.

Four other IP creation options exist, again split between hardware and software. Engineers can use HLS to create a new hardware block from a high-level description, such as SystemC; several companies use this as the only way to create new IP, and Cadence is aware of more than 1,000 such tapeouts. On the software side, designers can use automation to create software from a system model, such as UML, SysML or MathWorks Simulink, and run it on a standard processor.
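As an illustration of the kind of input HLS consumes, below is a minimal sketch of an untimed, clocked SystemC thread, assuming a generic SC_CTHREAD coding style rather than any specific tool’s input rules; the moving-sum function and all names are hypothetical. The designer writes the algorithm; the HLS tool derives the registers, pipelining and cycle-by-cycle schedule:

```cpp
// Sketch of an HLS-ready SystemC block: a four-tap moving sum written
// algorithmically, leaving scheduling decisions to the HLS tool.
#include <systemc.h>

SC_MODULE(MovingSum) {
    sc_in<bool>          clk;
    sc_in<bool>          rst;
    sc_in<sc_uint<8> >   din;
    sc_out<sc_uint<12> > dout;

    SC_CTOR(MovingSum) {
        SC_CTHREAD(run, clk.pos());      // single clocked thread
        reset_signal_is(rst, true);      // active-high reset
    }

    void run() {
        sc_uint<8> window[4] = {0, 0, 0, 0};
        dout.write(0);                   // reset behaviour
        wait();
        while (true) {
            // Shift the window and take a new sample each iteration.
            for (int i = 3; i > 0; --i)
                window[i] = window[i - 1];
            window[0] = din.read();

            sc_uint<12> sum = 0;
            for (int i = 0; i < 4; ++i)
                sum += window[i];

            dout.write(sum);
            wait();                      // one sample per clock in this sketch
        }
    }
};
```

The same description can be compiled and simulated natively with a C++ compiler for fast functional checking before synthesis, which is one reason SystemC-based HLS appeals to teams verifying at higher levels of abstraction.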

Hardware/software combinations

The remaining two options effectively create hardware/software combinations to implement new functionality. Designers can use an extensible, configurable processor core to create a hardware/software implementation, with automation driven by a higher-level processor description such as that of Cadence Tensilica IP. Alternatively, they can create an application-specific instruction-set processor, together with its associated software, using automated tools; LISA and NML are common description languages here. Variations on both approaches exist, such as creating co-processors automatically from C code profiled on a standard processor.

Faced with this ‘sea of functions’, designers turn to the other step in the design flow: IP integration. Users have four options for connecting all the blocks together, regardless of whether they were re-used or newly built as described above: they can connect the blocks manually; assemble them using auto-assembly front ends like the ARM Socrates tool, with interconnects auto-generated by tools such as ARM AMBA Designer, Sonics or Arteris; synthesise the interconnect from a higher-level protocol description; or use a fully programmable NoC that determines connections entirely at run time.

Notably, with the exception of fully manual assembly, all of these methods can be considered abstractions above pure RTL coding. While it has taken 20 years, the industry has certainly moved up in abstraction for both IP creation and IP assembly, albeit in a more fragmented way than some may have expected. A universal executable specification from which everything can be derived has simply not emerged.

For IP creation, the balance between manual coding and HLS will tip further towards HLS as it becomes mainstream. Re-use will gradually move up from single IP blocks to sub-systems of processors combined with peripheral IP blocks over interconnects, all ready to go with software executing on them. Good examples are the Tensilica sub-systems for data-plane networking and multimedia. The overall balance of hardware and software is hard to predict in general terms but, judging by development efforts and life cycles, especially for complex chips, the share of software will continue to grow.

We are driving towards a world in which billions of ‘edge nodes’ will be sensing data of every kind, from heartbeats and temperatures to pressures and locations. These devices will be connected to hubs that aggregate data and send it over complex networks into clouds, where equally complex servers will allow the data to be analysed in every way.

As a result, the market will split between very complex designs and smaller ones. There will be hubs, networks and servers, combining growing SoC complexity with the specific need to address functional safety in mission-critical applications for automotive, mil/aero and healthcare. In addition, there is room for smaller designs at the edge that may not require ‘bleeding-edge’ technologies. Consequently, complex flows that require automation for assembly and for building verification environments will sit alongside simpler flows in which edge-node designs can be assembled in a point-and-click style from, say, fewer than 20 IP components and sub-systems.

Horizontal integration

Functional verification has spawned several ways to verify hardware/software systems, combining dynamic engines for RTL simulation, emulation and FPGA-based prototyping with formal verification, which, like HLS, has become mainstream. Horizontal integration of these engines was kicked off by Cadence in 2011 with the introduction of the System Development Suite. The other two big EDA vendors jumped on the bandwagon in 2014 with Mentor’s Enterprise Verification Platform and Synopsys’ Verification Continuum, confirming the need for a ‘continuum of verification engines’ (COVE), as described by Jim Hogan.

Market pressures, exacerbated by the brand exposure that comes with quality and reliability issues, will further increase the need for integration between the engines in 2016. Hybrid combinations of RTL engines with TLM virtual platforms will enable further ‘shifts left’, while integration of debug, verification IP and verification management across all the engines will further strengthen horizontal integration.

In addition, the industry is approaching a pivotal point in making stimulus truly portable across all the engines, with work going on in the Accellera Portable Stimulus Working Group (PSWG) and with implementations like the Cadence Perspec System Verifier extending verification re-use even into the silicon domain.

Despite this drive towards higher levels of abstraction, we will also see further vertical integration of the front-end flows with classic RTL-down implementation flows. We have already witnessed this in combinations like Cadence Palladium emulation with Joules power estimation, as well as performance analysis with the Cadence Interconnect Workbench, and there is a lot more to come in annotating implementation data back into the front-end flows.

Finally, looking forward, we will most certainly miss Gary Smith, who relentlessly and enthusiastically drove us towards ESL and higher levels of abstraction, and kept reminding us of them. He was right in many ways. ESL has come a long way in the last 20 years, but there is plenty of room for further growth in abstraction, and plenty of excitement still to come.

Cadence

Cadence enables global electronic design innovation and plays an essential role in the creation of today's integrated circuits and electronics. Customers use Cadence software, hardware, IP and services to design and verify advanced semiconductors, consumer electronics, networking and telecommunications equipment, and computer systems. The company is headquartered in San Jose, California, with sales offices, design centres and research facilities around the world to serve the global electronics industry.