There is a school of thought that believes hardware/software codesign is tomorrow's technology – and always will be. But is that really the case?
The technique has been around since the mid-1990s, when the first attempts were made to enable software design teams to start work before silicon came back from the fab. Two factors resulted in that initial work falling by the wayside: time to market was nowhere near as important as it is today; and the ratio of software to hardware in a project was small.

Today, things have changed significantly. Time to market – or, more accurately, time to volume – is critical and software is the dominant part of an SoC design, with software engineers often representing 70% of the design team. Yet tools to deal with the changing face of design have yet to appear – and some parts of the electronics world are beginning to get impatient. At this year's Design Automation Conference, executives from Freescale and Intel called on the eda world to develop the tools needed to help them address future challenges.

Michal Siwinski, Cadence's senior director of product management for system realisation, said: "Freescale wants a parallel path between hardware and software development, while Intel wants a higher level of intent. We believe in both approaches and are bringing forward some technology that allows for codesign. In general terms, we're seeing a healthy interest in the number of discussions and decisions about technology that allows software development to start earlier. These discussions are not only about tools for new programs, but also for use in existing products, where modifications need to be made and software adjusted."

Jim Kenney, director of marketing for Mentor Graphics' emulation division, has long experience of this area. He noted the launch in 1996 of Seamless, a product intended to allow users to address hardware/software integration issues early in the design cycle. "It allowed people to move things from hardware to software to see the effect. But nobody wanted it; they didn't appear to want tools that helped them to experiment."
Kenney believes this was more a people-related issue than a technology one. "Some things were obviously hardware, some obviously software. An architect would sit down and make a decision about the rest. Often, the decisions had already been made."

Siwinski believes a lot of technologies are often ahead of their time. "Virtual prototyping was a great technology, but the market wasn't ready. Today, software is driving the market, so having devices available is important. Software can represent 75% of a design team, something that wasn't the case 10 years ago. But eda vendors have to decide whether to offer a point tool or something engineers will use as part of their RTL to GDS design flow."

In Kenney's opinion, hardware/software codesign is all about how to take advantage of multiple processor cores. "But tools aren't keeping up with those needs; hardware designers can put more cores on silicon than software developers can take full advantage of."

Lurking behind the call for hardware/software codesign tools is the cost of a respin: not only the physical cost of getting new samples manufactured, but also the lost market opportunities. "When a chip comes back from the fab," Kenney said, "the first thing the customer wants to do is to run software on the hardware. But, just as important, they want a complete working environment. They want to boot the operating system and, in the case of a graphics chip, want to see it draw something.

"If they have been able to run verification and diagnostics against a hardware representation, they will know it's a silicon problem if the chip doesn't work."

Siwinski believes a potential solution may come from providing a higher level of abstraction. "Certain elements need to be at a lower level," he claimed, "but even if you choose the architecture, you can tweak at that lower level.
"If you can shift the balance of the work to a higher level, say with Transaction Level Modelling, there will be a similar level of intent for modelling software and developing hardware. In this way, there will be effective design of new products and parameterisation of derivatives."

In Siwinski's model, design of a chip starts with the creation of a specification. "It tells you what the product should do, the graphics expectations, power consumption and so on. As the design proceeds, more hardware/software partition decisions can be made. Once there is a good idea about what the chip looks like, the first sets of models can be developed and work on software can begin in parallel."

He also believes there needs to be better understanding between the software and hardware teams. "We're seeing some of this happening and it's not because engineering teams have worked out how to do it; it's because management has decided that it can't wait for an engineering-led solution."

Kenney also sees a challenge in emulation. "Many more software engineers want to use emulation. The problem is that emulation runs at a few MHz; software engineers want to run at GHz rates. Our challenge is to get software developers to run on something much slower than they would like; we have to drag them over and show them the benefits.

"Software people want a target to run on and to be able to hit 'run'. We're working to make emulators look more like target hardware."

Concluding, Siwinski said he believes the fact that software is the critical factor will change automation for the better. "Hardware/software codevelopment systems will be the first step, but it won't stop there." And he acknowledges those steps will take eda into new domains. "As we move forward, this new world will be as new to us as it will be to the engineering teams."