
Multiple choice - embedded design

Debug has become a battleground for processor makers as they integrate more processors onto one piece of silicon.

Multicore processors and SoCs have become a staple of electronics development in the drive to cut system production costs. But this integration can complicate development and debug because of the way in which different processor cores from various sources are combined.

Steve Barlow, technical director at Argon Design and founder of SoC maker Alphamosaic, explains: "You potentially have a mishmash of lots of different solutions. Maybe you have developed components or a complete chip that, in the next generation, will be integrated with other stuff. If they have different debug solutions, it's hard to get them to work together nicely."

Consistency, even within a product family, is hard to achieve. Trace data, because it demands much higher bandwidth, has often called for a separate parallel bus. Peter Hoogenboom, engineering manager for Europe, the Middle East and Africa at Green Hills Software, says: "With parallel trace, the additional pins were often too expensive to implement. The silicon vendor would say: 'I didn't put them on because they would have cost 15 or 16 pins'. So we often ended up with nothing."

It's not as if there have been no attempts to standardise how programmers access the debug system. The Nexus 5001 Forum, formed in 1998, concentrated initially on automotive and industrial systems. While it developed a series of standards, these are not used across the industry. "It's a real pity that Nexus didn't succeed as a standard," says Barlow. "The problem is that people see it as too close to Freescale."

Pressure from handset makers is helping to establish debug standards for that market. The Mobile Industry Processor Interface (MIPI) Alliance is working on protocols for multicore debug that will work with the existing de facto standard interfaces, such as ARM's CoreSight.

Robert Oshana, director of software research and development for developer technologies at Freescale, says MIPI's work focuses on system level trace – using information from across a SoC to provide greater visibility into how the various cores are running. "It is an attempt to standardise more of the system debug, aspects which will become more important for multicore," he says.

Barlow says the close involvement of Texas Instruments in the MIPI Alliance may prove a roadblock to its widespread adoption, similar to the problems that face Nexus. However, Oshana argues that MIPI's work, so far, has the backing of a number of chipmakers, including Freescale, ST and TI. "These guys were all part of it. I think it will get traction," claims Oshana. "Will there be one standard across industry? Probably not, but we will get to maybe two or three."

A pressure that works against standardisation is competition between chipmakers. "Onchip debug is viewed as a differentiator for multicore processors," Oshana notes.

Where SoC integrators have picked a common interface, they have often alighted on CoreSight when there is at least one ARM core inside the chip; an approach used by TI on a number of SoCs.

"We fit the gap that was not provided by the industry," says Serge Poublan, CoreSight product manager at ARM, adding that the specification makes it possible for IP providers to build their own trace ports and link them to a common CoreSight infrastructure. Poublan points to DSP core designer CEVA as an example.

Barlow cautions: "You can hook third party things to CoreSight, but you are very much on your own when doing that."

The long term trend is to have more IP blocks on an SoC contributing debug information to a single port. This is an area where UK-based start-up UltraSoC aims to make a difference: providing hardware IP for a system level debug infrastructure. "The trend is to get more system visibility," says Hoogenboom.

A lot depends on how the SoC is being used. Hoogenboom points out that, in asymmetric multicore systems, the applications that run on the different processors are often quite independent. "You may have a processor that is running real time applications, another running virtualised Linux. The architects have basically divided at the application layer which processors work with which tasks."

There is no need to have a task on one core trigger a breakpoint that controls the other because the two processors are running independent applications. "When you have real time stuff running across eight identical cores, each working on a single burst of network packets, then it makes sense to start or halt all the cores at once," says Hoogenboom.

This type of closely synchronised control is where system level debug comes into its own, says Oshana. "You need visibility throughout the system. Where you have a multicore device being fed with different TCP/IP packets, you need to be able to trace them as they flow through the processors, the interconnect and out through DDR. To understand where bottlenecks and problems are, you can't trace that by looking at a single core."

To allow data to be traced on its journey through an SoC, more hardware units are getting trace support. With CoreSight, Poublan explains, hardware units can post messages to the debug buffers in response to events, or they can be fitted with more extensive trace units. Caches are picking up trace interfaces to make it possible to see whether conflicts between processors are slowing the system.

"You have things like false sharing," says Oshana, "where two threads access the same level two cache line and you get into a thrashing scenario that degrades performance."

Now favoured for performance tuning, trace is becoming more critical to multicore debug than breakpoint and cross triggering control, says Barlow, because stopping entire cores is not practical. "With a single processor, you can have isolated control. You can stop the processor, look at its state and then let it carry on. In multicore, you can have several processors controlling a physical system that can't be stopped or communicating with a wireless network that can't be stopped either. Debugging while running was originally important in automotive, but is spreading to other areas."

Barlow adds that running an OS makes things easier, because its facilities can be used to watch for context switches and intertask interactions. But Oshana says chipmakers will make additions to hardware to support software mediated debug, pointing to the possibility of extending watchpoint logic to 'provide printf() support in hardware to show a particular event happened', alluding to the statements liberally used by C programmers to show what a program has done.

Virtualisation is another area where hardware support will become more important, says Oshana, to let debuggers work out which peripherals are being used even when a hypervisor is hiding them from the application being analysed.

The drive to differentiate through onchip debug and visibility will slow down cross industry standardisation. But the drive to integrate multiple, different processors and hardware accelerators on one piece of silicon will see features such as trace extend across a much wider range of devices.

Chris Edwards

This material is protected by MA Business copyright. See Terms and Conditions. One-off usage is permitted, but bulk copying is not. For multiple copies, contact the sales team.

