System architecture has long been something of an instinctive dark art, with the balance between implementing functions in hardware and in software often swayed by the amount of legacy software or, perhaps, by limited hardware resources.
But recent developments in hardware platforms have highlighted the opportunities presented by optimising this partitioning and then adopting efficient design practices thereafter.
The disruptive technology here is the fpga, allowing greater flexibility in setting these partitions. While fpgas are not new, they have become increasingly capable and affordable over recent years. Equally, software – such as the MATLAB suite of products from MathWorks – has developed to better integrate the design activities of system architects, hardware engineers, software engineers and specialists in, for example, analogue or rf.
Graham Reith, MathWorks' industry manager for communications, electronics and semiconductors, explained: "The role of our tools is to help each of those people to do their jobs effectively and to be able to communicate with each other – understand the context in which they are developing their particular part of the system."
Achieving this understanding comes from having command of the design flow; something the MATLAB environment allows. "For electronic design, it is about trying to integrate the different steps together as you go – right from the idea of what the system should do and what algorithms we should develop in order to achieve some need," said Reith. "We then need to go through all the different steps to ensure the algorithm is right, is going to perform well, has the right architecture and right numerical performance – all those different stages through to implementation and then deployment in the product.
"We provide that integrated flow with a common tool set that means there is much easier communication between all the different people at all different stages of that design flow. So there is no miscommunication or need to retranslate from one environment into another because often it is the area of translation where misunderstandings start to creep in."
A more scientific process
Creating the hardware/software partitions has become a more scientific process, due to the visualisation and simulation abilities of the tools to which Reith refers. In the past, the system designer would have needed a good idea of what functions would be assigned to hardware and to software at the point of creating the algorithm at the core of the design. The integrated flow approach allows the system designer to concentrate, initially at least, on what the system needs to achieve, not how it is done.
Simulation of the design can then be used to determine which functions are best suited to hardware and which to software. The consequences, in terms of performance, power, silicon real estate and so on, can be calculated and used in an iterative process to arrive at an optimised system.
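The iterative trade-off described above can be illustrated with a toy cost model. The sketch below is purely hypothetical – the block names, cost figures and the exhaustive search are illustrative assumptions, not part of any MathWorks or Xilinx tool: each functional block is given estimated software latency, hardware latency, fpga area and power figures, and every possible hardware/software assignment is scored to find the fastest partition that fits an area budget.

```c
#include <assert.h>

/* Toy cost model for hardware/software partitioning.
 * Each functional block carries an estimated cost when run in
 * software (CPU latency) and when moved into hardware (fpga area,
 * extra power). All figures are illustrative, not measured data. */
typedef struct {
    const char *name;
    double sw_latency_us;  /* latency if implemented in software */
    double hw_latency_us;  /* latency if implemented in hardware */
    double hw_area;        /* fpga resource cost if in hardware */
    double hw_power_mw;    /* extra power when in hardware */
} Block;

#define NBLOCKS 4

/* Evaluate one partition: bit i of `mask` set means block i goes to
 * hardware. Returns total latency; area and power via out-params. */
static double evaluate(const Block *b, unsigned mask,
                       double *area, double *power)
{
    double latency = 0.0;
    *area = 0.0;
    *power = 0.0;
    for (int i = 0; i < NBLOCKS; i++) {
        if (mask & (1u << i)) {
            latency += b[i].hw_latency_us;
            *area   += b[i].hw_area;
            *power  += b[i].hw_power_mw;
        } else {
            latency += b[i].sw_latency_us;
        }
    }
    return latency;
}

/* Exhaustively search all 2^NBLOCKS partitions for the lowest
 * total latency that still fits within the fpga area budget. */
static unsigned best_partition(const Block *b, double area_budget)
{
    unsigned best = 0;
    double best_latency = -1.0;
    for (unsigned mask = 0; mask < (1u << NBLOCKS); mask++) {
        double area, power;
        double latency = evaluate(b, mask, &area, &power);
        if (area > area_budget)
            continue;
        if (best_latency < 0.0 || latency < best_latency) {
            best_latency = latency;
            best = mask;
        }
    }
    return best;
}
```

In a real flow the per-block numbers would come from simulation and profiling rather than guesses, and the search would be guided rather than exhaustive, but the shape of the iteration – estimate, score, re-partition – is the same.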
There are hardware developments coming on stream that dovetail nicely into this model, as Reith pointed out. "The interesting thing about the hardware/software codesign environment, where you have a lot more collaboration between the groups, is that there are some interesting new technologies coming along, like the system on chip products – the Zynq-7000 All Programmable SoC from Xilinx is one of those. So now, rather than having a particular dsp and a particular fpga that forms part of an overall solution, there is an integrated product that will do that. The trade-off of what goes where, even within an integrated environment, is where MathWorks tools can help to find the best partition between what goes into hardware and what goes into software. These new SoC products open up more flexibility because the interfaces are well defined within the product itself and this offers a bit more flexibility in how people implement their system. That choice of architecture can vary a bit more easily through the design process."
Zynq, the first major release of an fpga-based SoC, was embedded in National Instruments' CompactRIO-9068, a software defined reconfigurable controller introduced in August 2013. Giles Peckham, European marketing director at Xilinx, revealed why Zynq lends itself to optimised partitioning and, hence, why it was an ideal heart for the NI system. "You can reprogram microprocessors to do other functions but, until the fpga came along, you had to add more hardware to a board and then, through a series of programmable switches, you could select certain functions, but it is very limited. The fpga, of course, is a versatile 'sand pit' of functionality; you just need to connect it up in the way you want it. So when the fpga came along, it offered NI the flexible hardware and the flexible software it needed. Zynq takes it a step further; in the past, NI would have used a processor and an fpga as two separate chips."
There are several advantages to this, according to Peckham. Firstly, it reduces the number of components and, secondly, the closer interface improves performance. "There are some 3000 signals going between the processor and the fpga – you can never get a chip with that number of pins on it, but you can get that level of interconnect when you put it on the same device," said Peckham. "There is also a lot of power dissipation if you are taking 3000 signals off one chip through a pcb and onto another chip, toggling the signals up and down quickly. But if you have those two functions on the same bit of silicon, power consumption is very much lower. And you have the 'one design' environment for the processor side and for the programmable logic part."
One limitation can arise where a company has more software programmers than hardware developers, and hence a tendency to do more in software, irrespective of whether that is the most efficient route. Asking a hardware engineer to take a software block and turn it into hardware is not a straightforward task. But Xilinx, through the acquisition of AutoESL, has developed a high level synthesis tool (Vivado HLS) that converts C, C++ or SystemC into register transfer level (RTL) hardware descriptions. Like the MathWorks suite of tools, it is claimed to remove another hurdle on the way to optimal partitioning for efficient systems.
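To give a flavour of the kind of input a high level synthesis tool accepts, here is a plain C FIR filter of the sort commonly used in vendor examples. The function, coefficients and comments are illustrative assumptions, not taken from Xilinx documentation; in Vivado HLS a designer would add directives (such as pipeline or unroll pragmas) to steer how the loops map onto hardware.

```c
#define NTAPS 8

/* A fixed-point FIR filter written in plain C. A high level
 * synthesis tool takes a function like this and emits RTL; the
 * coefficients below are illustrative only. */
int fir(int sample, int shift_reg[NTAPS])
{
    static const int coeff[NTAPS] = {3, -1, 4, 15, 15, 4, -1, 3};
    int acc = 0;

    /* Shift in the new sample; a natural candidate for unrolling
     * into a chain of parallel registers in hardware. */
    for (int i = NTAPS - 1; i > 0; i--)
        shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    /* Multiply-accumulate loop: the part a synthesis tool would
     * typically map onto dedicated dsp resources. */
    for (int i = 0; i < NTAPS; i++)
        acc += coeff[i] * shift_reg[i];

    return acc;
}
```

Feeding an impulse (a single 1 followed by zeros) through the filter returns the coefficients one by one, which is a quick way to check the C model before synthesis.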
Specialist skills remain important
However, Reith stresses that the availability of Xilinx' SoC, and the efficiency and automation afforded by his company's system tools, in no way diminish the importance of specialist skills. Neither should these disciplines be employed only late in the design process, after the algorithm has been developed. Indeed, the production of a virtual design through system modelling allows engineers of all disciplines to work on it at an early stage, reducing overall design time.
Reith concluded: "We think that software and hardware engineers should be engaged early in the process. They have the necessary expertise about how to do the coding and they can help guide how the algorithm can be structured in order to achieve that. We are not trying to replace the knowledge and experience that they have, but rather enable a common environment through which people who have that knowledge and understanding can express that in the design and keep everything together.
"So, when the code is produced, be it C or Verilog, they are happy with it. Algorithm engineers may not know that much about software or hardware and firmware engineers may not have much appreciation of the algorithm but, by providing an environment in which they can collaborate, we provide an effective flow all the way from the algorithm to implementation."