Intelligent debug tools becoming a commercial necessity


When developers start a new microprocessor based project, they are faced with a critical choice – which debug solution should they use? This article seeks to explain why many developers are choosing a new generation of intelligent trace analysis tools and why, in many instances, these tools have become a commercial necessity.

Maybe a good starting point for this discussion is the fact that, in today's internet connected world, bad news travels fast and a single bug can easily crash an established global brand. Furthermore, consumers and business users are now far less inclined to accept glitches and reboots: they simply expect excellence. Beyond these considerations, the cost of a recall due to a software failure can be extremely high, not only in terms of customer loyalty but also in terms of logistics. With the narrow profit margins of a competitive sector, such a recall could easily jeopardise the viability of a company and, in the US, such a failure could also result in a class action for damages. An interesting development in this area is that systems purchasers are increasingly insisting on contracts where the software is proven to meet a defined quality specification.

Not up to the job?

In many ways, traditional debug techniques such as printf and budget tools proved adequate when the amount of code in systems ran only to tens of thousands of lines and when projects were less time critical. However, today's systems – even basic ones – can often need many millions of lines of code. Using the older and simpler methods to examine this code may be inexpensive, but the approach can easily result in project delays, products being shipped and later recalled, and too many bugs passing into production systems. Not only that, traditional approaches were geared more towards 8 and 16bit microprocessors. Today, engineers are working with 32 and 64bit processors, multicore solutions and combined cpu/dsp devices. Furthermore, they may be running an OS or RTOS that demands an additional layer of understanding.

Hardware assisted trace

One of the limitations of the traditional approach is that communication methods such as RS232 and Ethernet carry overheads that limit the speed and nature of the data that can be collected.
In order to provide a faster data stream and more system information, chip developers have been working with debug tool vendors to develop on chip 'trace ports' and hardware assisted 'trace tools'. There are many variants of the trace technology, ranging from multiple serial streams running at 6.25Gbit/s through to techniques that allow data to be streamed to a hard drive, supporting the collection and analysis of data over hours or even days.

Analysing code behaviour

Once this trace data is stored on the host pc, powerful software provided by the tools vendor enables the detailed analysis of code behaviour. It also provides graphical tools to help the developer understand code performance and locate bugs and bottlenecks. Using a technology such as 'long term trace', the developer can collect massive amounts of information about the code's operation and its performance, all captured as it happens on an embedded system running in real time. This record can be inspected by the user to detect and analyse the most unpredictable and transient bugs, as well as details of performance and timing under all conditions.

Demand for long term code analysis is being driven by market sectors such as automotive, medical and aerospace, where microprocessor based systems are becoming increasingly complex and in need of more rigorous testing to comply with safety and performance criteria. Vehicle engine management systems are a good example of where such an approach is beneficial. The level of code coverage provided by long term trace enables engineers to analyse how the software behaves all the way from a cold engine start, up to temperature and through a full emissions test routine.

Another new technology is 'high speed trace'. Because modern processors run too fast for the trace to be collected via a parallel port, silicon designers are now integrating fast serial ports into their products.
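To illustrate the kind of analysis such host-side software performs, the sketch below aggregates per-function timing from a stream of decoded trace records. The record format and function names are purely illustrative assumptions: real trace ports emit compressed packets that the tool vendor's software decodes, not this flat form.

```python
from collections import defaultdict

# Hypothetical decoded trace records: (timestamp_us, event, function).
# This flat list is an illustrative assumption, standing in for the
# decoded output of a hardware trace port.
trace = [
    (0,   "enter", "main"),
    (5,   "enter", "read_sensor"),
    (45,  "exit",  "read_sensor"),
    (50,  "enter", "update_display"),
    (120, "exit",  "update_display"),
    (125, "exit",  "main"),
]

def time_per_function(records):
    """Accumulate inclusive time spent in each function from
    matched enter/exit events, using a call stack."""
    totals = defaultdict(int)
    stack = []  # (function, entry timestamp)
    for ts, event, func in records:
        if event == "enter":
            stack.append((func, ts))
        else:  # exit: pop the matching entry and credit the elapsed time
            name, start = stack.pop()
            totals[name] += ts - start
    return dict(totals)

print(time_per_function(trace))
# → {'read_sensor': 40, 'update_display': 70, 'main': 125}
```

Over hours of 'long term trace' the same accumulation reveals which functions dominate execution time, and any enter without a matching exit points at a hang or unexpected path.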
This also has the additional benefit of reducing the package pin count. In recent times, ARM has implemented this technology with its High Speed Serial Trace Port (HSSTP). This has been quickly followed by AMCC with the Titan, Freescale with the P4040 and P4080 QorIQ processors and Marvell with the SETM3. Many other cores have been designed with this feature, but are not yet on public release.

Code optimisation

The techniques mentioned above are clearly intended for high performance and safety critical systems, but intelligent trace technology can play an important role even with low end, non critical consumer products. Most of the latest families of microcontrollers based on Cortex-M IP include dedicated trace capability, as developers often need the same code analysis capability. As the cores are slower and generate less data, more cost effective methods can be used to collect the information on the code's operation, while still providing all the analysis capability needed to ensure the code meets specifications.

For high volume products such as mobile phones, digital cameras and set top boxes, effective code analysis can help reduce the bill of materials and energy consumption, both of which can have a substantial effect on the commercial success of the product.

Cache memory is on chip memory that can be accessed very quickly by the cpu. Access to cache memory can be in the order of 10 to 100 times faster than access to off chip memory. This fast memory acts as temporary storage for code or data variables that are accessed repeatedly, thereby enabling better system performance. Trace technology enables analysis tools to confirm the effectiveness of cache usage, which can have an important impact on the overall performance of the software.

Power management is another area that is being influenced by the use of trace tools.
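The cache analysis described above can be sketched by replaying a trace of memory addresses through a simple cache model. The model here is a bare direct-mapped cache with made-up parameters, not any particular mcu's cache, but it shows how an address trace exposes good and bad access patterns.

```python
def cache_hit_rate(addresses, num_lines=64, line_size=32):
    """Replay a trace of byte addresses through a modelled
    direct-mapped cache and return the fraction of hits."""
    tags = [None] * num_lines  # tag currently held by each cache line
    hits = 0
    for addr in addresses:
        block = addr // line_size      # which memory block this byte is in
        index = block % num_lines      # which cache line it maps to
        tag = block // num_lines       # identity of the block in that line
        if tags[index] == tag:
            hits += 1
        else:
            tags[index] = tag          # miss: line is filled from memory
    return hits / len(addresses)

# A sequential walk reuses each fetched line for 32 byte accesses...
sequential = list(range(0, 8192))
# ...while a 4KB stride maps every access to a fresh block.
strided = [i * 4096 for i in range(8192)]

print(cache_hit_rate(sequential))  # 0.96875 – 31 of every 32 accesses hit
print(cache_hit_rate(strided))     # 0.0 – every access misses
```

Run over real trace data, this kind of replay is how tools confirm whether hot loops are actually living in cache or repeatedly stalling on off chip memory.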
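One way to attribute consumption to code, along these lines, is to overlap timestamped power samples with the function intervals recovered from the trace. The data layout and numbers below are illustrative assumptions, not a vendor's actual format.

```python
from collections import defaultdict

def energy_per_function(intervals, power_samples, sample_period_us=10):
    """Attribute measured power to functions by matching each power
    sample to the trace-derived interval it falls inside.

    intervals:     list of (function, start_us, end_us) from the trace.
    power_samples: list of (timestamp_us, watts) from a probe.
    Returns microjoules per function (watts x microseconds = uJ).
    """
    energy = defaultdict(float)
    for ts, watts in power_samples:
        for func, start, end in intervals:
            if start <= ts < end:
                energy[func] += watts * sample_period_us
                break
    return dict(energy)

# Illustrative data: which function ran when (from the trace)...
intervals = [("idle", 0, 100), ("transmit", 100, 200)]
# ...and simultaneous power readings every 10 us (from a probe):
# 0.05W while idle, 0.30W while the radio transmits.
samples = [(t, 0.05 if t < 100 else 0.30) for t in range(0, 200, 10)]

print(energy_per_function(intervals, samples))
```

A breakdown like this is what lets a developer see, for example, that a peripheral left enabled in an idle task is quietly dominating the energy budget.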
Using the program flow information provided by the trace records and a simultaneous collection of power consumption data, it is possible to calculate power consumption by code function and task. This can either prove that existing power management schemes are working as designed or suggest areas for improvement. Whilst these techniques were first used in mobile phone design, they are now proving useful in many markets as devices become more compact and portable.

Conclusion

Customers have proven over many hundreds of projects that the use of trace technologies can help cut the software design cycle by around 50%, reducing time to market and saving on engineering costs. Trace technologies can also help reduce the bill of materials, enhance device performance and greatly reduce the risk of failure in the field. In this respect, and with the use of ever more powerful mcus, higher level debug technology is increasingly becoming a commercial necessity for many developers.

Barry Lock is Lauterbach's UK manager.