Long term view of embedded software development will improve quality and reduce development time


Many embedded applications are developed with little regard for long term reusability, yet almost all organisations would benefit from establishing a development framework that provides a high degree of platform independence and facilitates reuse. The benefits are not only future proof embedded software components, but also long term improvements in quality and reduced development time.

While most software organisations apply a coding standard, teams within the same organisation still develop incompatible software – it is simply developed in a uniform way. A coding standard, together with other processes such as a source control system and an issue tracking system, is often the full extent of software management, and these processes by themselves do not ensure useful outcomes such as future reuse and compatibility.

Over time, concepts such as object oriented programming have emerged that enable reusable software independent of external or platform considerations. Windows or Linux code written for one target architecture, for example, generally runs on another, with porting the board support package typically the most significant part of a project. This objective of portable, 'future proof' software has escaped the deeply embedded world for a multitude of reasons – not least the large number of possible combinations of hardware, peripherals and tools, which must be handled by relatively small teams with very specialised applications. Engineering teams work with different architectures, different endianness and different real time operating systems.

The outcome can be code that complies with a common standard yet remains incompatible, and the economic impact is significant: a large investment in a beautiful piece of development is applied to only one project. It would take significant further investment to avoid losing the benefits of code reuse – and of code being strengthened continually through verification in different environments. Nor is it only software that is used inefficiently; the same can be true of engineers across an organisation. If an engineer cannot pick up a module or project and know instantly how it is organised and configured, there are considerable inefficiencies in the use of valuable engineering resources.
High quality code (see note 1) can be expensive to produce, though the cost could be reduced if software modules were reusable. Another healthy side effect of reusability is that engineering groups are more motivated to develop high quality modules. A long term investment in the organisation, rather than 'freestyle' programming for the purposes of a short term project, also avoids well known drawbacks such as high ongoing maintenance costs.

There are alternative strategies to this endless repetition of development, and some companies approach embedded development from a different perspective to improve quality, productivity, reusability and flexibility. Developers who have wanted to move an application to a new processor architecture, but could not face the porting effort, will understand the problem. The same is true of changing to a new RTOS – dealing with the many application and middleware integration issues can be unnecessarily complex. With some investment in a well thought out and structured approach, however, these problems would not be so daunting.

HCC Embedded deals with software across many different embedded configurations, where compatibility issues can arise from endianness, different MCUs, multiple incompatible peripherals, different RTOSes and other technical variables. The middleware which HCC supplies has to run unchanged in a variety of target environments, and some of it is targeted at critical applications that must meet rigorous quality objectives. Like other organisations, HCC has developed a framework that ensures, even at the driver level, that there are no compiler specific defines, no endianness dependencies and no pragmas, and that code compiles cleanly over many toolchains at the highest warning level. Regardless of functionality, all modules share the basic characteristics that define them – an API, a configuration and a version definition.
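As an illustrative sketch of this idea – using hypothetical names and values, not HCC's actual identifiers – a module defined by an API, a configuration and a version definition might look like this in C:

```c
/* Illustrative module skeleton: an API, a configuration and a version
 * definition. All identifiers here are hypothetical examples. */
#include <stdint.h>

/* --- Version definition: dependent modules can verify this -------- */
#define FS_VER_MAJOR  2u
#define FS_VER_MINOR  1u

/* --- Configuration: compile-time options, overridable per project - */
#ifndef FS_MAX_OPEN_FILES
#define FS_MAX_OPEN_FILES 8u
#endif

/* --- API: every function returns an error code -------------------- */
#define FS_SUCCESS        0
#define FS_ERR_NO_HANDLE  (-1)

static uint32_t fs_open_count;

int32_t fs_init(void)
{
    fs_open_count = 0u;
    return FS_SUCCESS;
}

int32_t fs_open(const char *path, uint32_t *handle)
{
    (void)path;  /* path lookup omitted in this sketch */
    if (fs_open_count >= FS_MAX_OPEN_FILES) {
        return FS_ERR_NO_HANDLE;
    }
    *handle = fs_open_count++;
    return FS_SUCCESS;
}
```

Because the version, configuration and API are presented in the same way in every module, an engineer opening an unfamiliar module knows immediately where to look.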
What goes on inside a module is not so important, as long as it provides all of its features in the correctly defined way. If it is necessary to investigate platform sensitive areas inside a module, that module must be understood in depth – an unnecessary and potentially risky activity. If the module has been developed within a well defined framework then, in the same way that object oriented code is reused, the developer need only refer to the module's external interfaces.

There are a number of key steps in establishing a 'future proof' framework:

• Implement a source tree methodology. Make sure there is a common system for placing code within a source tree so that, whatever combination of modules is used, they can be plugged together without creating conflicts or requiring modification.

• Create a module version verification system. This ensures that, when a module is released, it is only used with modules it has been verified with. As versions change, any other module that uses it must detect automatically that something needs to be reviewed.

• Isolate platform dependencies from the main code. This is the key to ensuring portability. Engineers must understand where code is plain 'C', independent of environment, and where it has a platform dependency. Once a platform support package has been built for a target, any module can be dropped into that target, regardless of where it has been verified.

• Define how platform dependent items are handled. This is critical for reusability across platforms.

• Create module design guidelines. This is important for consistency and reusability. Guidelines may vary from project to project, but must ensure all interfaces are defined to a high quality.

Possibly the single most important step in the process is to define high quality interface guidelines.
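Two of these steps – version verification and platform isolation – can be sketched in a few lines of C. The names and version numbers below are hypothetical illustrations, not a real framework's conventions:

```c
/* Illustrative sketches of version verification and platform
 * isolation; all names are hypothetical examples. */
#include <stdint.h>

/* Version verification: a module that depends on another checks at
 * compile time that it is built against a version it was verified
 * with, so an unreviewed combination fails loudly instead of silently. */
#define FS_VER_MAJOR  2u   /* published by the module being used */

#if (FS_VER_MAJOR != 2u)
#error "This module was verified against file-system v2 only - review required"
#endif

/* Platform isolation: the main code never touches endianness
 * directly. It reads big-endian wire data through a small platform
 * support helper that behaves identically on any target. */
uint32_t psp_rd_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

Assembling the value byte by byte, rather than casting the pointer, is what makes the helper independent of the target's endianness and alignment rules.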
While the quality of the modules is important, badly defined interfaces will always restrict the achievable quality of the application. Concrete steps could include:

• All API functions follow rules; for example, they must all return an error code or void.

• Parameter lists use 'stdint.h' types, with strict usage of the 'const' keyword.

• All parameters are declared in a logical order – input parameters, input parameter qualifiers, output parameters, output parameter qualifiers.

When integrating a module developed using the framework into a project, developers need only deal with that module's platform support package requirements; the framework establishes the module's code, API, configuration and versioning consistently. The benefits include greater flexibility across engineering teams, since they can work instantly with any module created anywhere in the organisation. While any framework is designed to meet the requirements of a particular development organisation, the underlying concepts are always the same. Unfortunately, there is currently no standard for this approach to development for deeply embedded systems, but applying some of these principles will result in higher quality applications and a better return on investment.

Note 1. The term 'high quality' is used in this article to mean code developed with a higher level of process; for example, conforming to standards such as IEC 61508, FDA 510(k) and DO-178.

Dave Hughes is CEO of HCC Embedded.
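The interface rules listed earlier might translate into C as follows; the function, error codes and parameter names are hypothetical illustrations, not any organisation's actual API:

```c
/* Illustrative API following the interface guidelines: returns an
 * error code, uses stdint.h types with 'const', and declares
 * parameters in order: inputs, input qualifiers, outputs, output
 * qualifiers. All names are hypothetical examples. */
#include <stdint.h>
#include <string.h>

#define BUF_SUCCESS    0
#define BUF_ERR_SPACE  (-1)

int32_t buf_copy(const uint8_t *src,      /* input */
                 uint32_t       src_len,  /* input qualifier */
                 uint8_t       *dst,      /* output */
                 uint32_t       dst_max,  /* output qualifier */
                 uint32_t      *copied)   /* output */
{
    if (src_len > dst_max) {
        return BUF_ERR_SPACE;
    }
    memcpy(dst, src, src_len);
    *copied = src_len;
    return BUF_SUCCESS;
}
```

Because every API in such a framework follows the same shape, a developer can read any function prototype and know, without consulting the implementation, which parameters it consumes, which it fills in and how failure is reported.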