Automation the key to hardware security

Everyone says they want more secure systems, but the way teams go about implementing them means the results all too easily fall short.

At a workshop convened to gauge progress at the recent Design Automation Conference (DAC) in San Francisco, Mark Tehranipoor, professor of cybersecurity at the University of Florida, pointed to what he sees as the key issue: “The problem is automation. If you don't get automation, it's not going to happen.”

The reason is not so much that machines will naturally do a better job; the issue is the availability of skilled personnel. As Tehranipoor explains, major manufacturers such as Intel have sizeable teams working on security, but they are the exception. “So many companies only have one hardware-security person: they will need to have access to more pushbutton tools.”

Nicole Fern, security analyst at Riscure, adds, “There are always going to be more chip designers than security experts, but EDA with a light salting of security is not going to be effective.”

The countermeasures need to be extensive and comprehensive, which introduces another problem: who will pay?

A key issue with developing protections against hardware hacks is the sheer variety of attack types. They range from passive espionage, characterised by side-channel analysis, through the exploitation of logic-design mistakes, to active intrusion that may extend to on-chip circuitry being altered directly in the hope of triggering errors that expose encryption keys.

That is at odds with the realities of the EDA market as it stands, experts have found, though as Synopsys scientist Mike Borza points out, “The [US] CHIPS Act has brought the strategic nature of the microelectronics supply chain to the fore.”

Despite the clear issues and costs raised by poor security, that does not necessarily translate into money for a toolchain that can deliver greater resilience, even though the cost of doing nothing is rising.

Some companies have a customer base that demands high security, but that demand is not filtering through into money for the EDA vendors who would be expected to come up with the products.

“There’s not yet a clear business case. One reason is because we haven’t figured out a way to quantify the value of security,” says Adam Kimura, design verification lead at Battelle.

Valuing security

John Goodenough, consultant at Textrium and former architect for secure SoCs at Arm, says some industries have been able to put a value on security, pointing to the smartcard business as a prime example.

“What’s changing is that we are pushing components out into a more open ecosystem,” he notes, which is helping to expose more vulnerabilities in systems. “In an open ecosystem, things are going to be used in ways you didn’t expect. And more and more people are incentivised to go after these SoCs.”

One key issue is the long list of entities involved in a design before it gets to silicon. Many of the design techniques now in widespread use do little for confidentiality and may prove to be a way for adversaries to insert Trojans and other exploits.

Adam Cron, distinguished architect at Synopsys, points to the use of scan-based testing across the industry.

Scan is vital for keeping test time to a minimum, but the access to the design given to the tools that insert the necessary gates at the end of the chip-integration process can expose everything inside the device. Avoiding this may mean separating the test patterns used for secure cores inside the chip from those for the rest of the logic, which can continue to use conventional scan-insertion tools.

“Tools that understand the different domains and which can optimise across them will be important,” Cron says.
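
A minimal sketch of the exposure Cron describes, assuming a toy 8-bit device (nothing here comes from an actual EDA flow): scan insertion stitches every flip-flop into one shift register, so shifting the chain out serialises all internal state, key registers included.

```python
# Toy model of an insecurely inserted scan chain; illustrative only.

def scan_shift_out(chain: list[int]) -> list[int]:
    """Shift the whole chain out one bit at a time, as a tester would."""
    captured = []
    state = chain.copy()
    for _ in range(len(state)):
        captured.append(state.pop())  # bit appears on the scan-out pin
        state.insert(0, 0)            # scan-in pin held low
    return captured

# Hypothetical internal state: ordinary datapath bits plus key bits that a
# security-unaware insertion tool wired into the same chain.
datapath_bits = [1, 0, 1, 1]
key_bits = [0, 1, 1, 0]               # secret material
chain = datapath_bits + key_bits

print(scan_shift_out(chain))          # key bits leak out with everything else
```

The mitigation described above amounts to keeping the secure core's flip-flops on a separate, gated chain that the conventional tools never touch.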

Another issue is the compartmentalisation of security. “Part of the problem is how we do the research: much of what we see is about compromising AES keys, not doing work in the context of the SoC,” Tehranipoor says.

A system-level approach looks to be the key to effective secure design, but this has challenges.

Jason Oberg, co-founder and CTO of Cycuity, says a key issue with design for security is that, though it has clear parallels with conventional functional verification, there is a critical difference: “Security verification is looking at unexpected behaviour.”

It is far harder to rule out all the possible ways in which a chip might succumb to attack, given the sheer variety of methods available. On top of that, Oberg says, “Security is not composable. Often you need to run the full software stacks in order to run an effective verification.”

Goodenough says that is going to be challenging for simulation. “How do I run system simulations that are going to implement the full hardware and software?”

However, users working on digital-twin models for automotive face similar issues in trying to run simulations that are as close to reality as possible. That is providing an impetus for the EDA industry to work out which details, such as clock and signal accuracy, can be left out of models in order to approach real-time speeds.

Technology research

Though there are clear gaps in the anti-hacking armoury, the technology-research side is progressing on ways to spot vulnerabilities without going to full-stack simulation.

There are two key areas where tools are making inroads. One is the extension of formal verification to check for sneak paths through secure logic that may inadvertently expose data during normal operation. For example, work by Cycuity found a potential way for intermediate values used during encryption to leak out of the open-source OpenTitan cryptocontroller. Though Cycuity considered interception of this leak to be low risk, because the values would be unlikely to reach external I/O ports, the OpenTitan project applied a fix soon afterwards.
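
The idea behind this kind of information-flow checking can be illustrated with a short, hypothetical Python sketch (it is not Cycuity's tool): values derived from secret data carry a taint label, and verification flags any tainted value that reaches an observable port.

```python
# Hypothetical sketch of information-flow (taint) checking.

class Tainted:
    """A value labelled with whether it depends on secret data."""
    def __init__(self, value: int, tainted: bool = False):
        self.value, self.tainted = value, tainted

    def __xor__(self, other: "Tainted") -> "Tainted":
        # Any operation with a tainted operand yields a tainted result.
        return Tainted(self.value ^ other.value,
                       self.tainted or other.tainted)

def drive_output_port(signal: Tainted) -> None:
    """Model of an externally observable I/O port."""
    if signal.tainted:
        raise RuntimeError("sneak path: secret-derived value reaches I/O")
    print(f"port <= {signal.value:#x}")

key = Tainted(0x3C, tainted=True)     # secret key material
plaintext = Tainted(0x5A)             # public input
round_value = plaintext ^ key         # intermediate value inherits the taint

drive_output_port(plaintext)          # fine: no secret dependency
try:
    drive_output_port(round_value)    # flagged, as in the OpenTitan example
except RuntimeError as err:
    print("verification flag:", err)
```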

Another strand in which work is progressing quickly is leakage caused by fault injection or by thermal and electromagnetic side channels. For fault injection, tools developed by Optima Design Automation and Riscure apply techniques from fault simulation to determine how logic will react to attacks on the clock or power supply. Ideally, the system should stop working, and even reset, if it detects this kind of intrusion. Tools like these can demonstrate whether that will happen and how much data might leak out.
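
A rough sketch of how such a campaign grades a countermeasure, assuming a toy duplicate-and-compare detector (the real tools model gate-level behaviour and far richer fault types):

```python
# Toy fault-simulation campaign: inject bit-flips, as a clock or voltage
# glitch might, and count how often a duplicate-and-compare detector catches
# the corruption versus letting a faulty result escape. An assumed model,
# not Optima's or Riscure's actual tooling.
import random

def round_step(state: int, key: int) -> int:
    return (state ^ key) & 0xFF       # stand-in for one cipher round

random.seed(1)
detected = escaped = 0
for _ in range(10_000):
    state, key = random.randrange(256), random.randrange(256)
    bit = 1 << random.randrange(8)

    copy_a = round_step(state, key) ^ bit   # glitch corrupts the main copy
    copy_b = round_step(state, key)         # redundant computation
    if random.random() < 0.05:
        copy_b ^= bit                       # common-mode glitch hits both

    if copy_a != copy_b:
        detected += 1                       # device should halt or reset
    else:
        escaped += 1                        # faulty data leaves the chip

print(f"detected {detected}, escaped {escaped} of 10000 injected faults")
```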

For side-channel analysis, the main target lies in differences between paths through a program that can reveal whether a cryptocontroller is dealing with a valid key byte or not. Ideally, these paths would be balanced to remove this differential, but that is not always easy to ensure. Sometimes, as Fern points out, synthesis tools will try to optimise away the protections, leading to highly unbalanced logic trees that an adversary can use to glean information. Riscure’s approach is to use power models to evaluate how a path through the different branches will appear to the outside world.
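
The principle can be shown with a Hamming-weight power model, a standard first-order approximation (this is an illustrative sketch, not Riscure's flow): the two branches of an unbalanced key-byte check do different amounts of work, so their simulated traces differ.

```python
# Hamming-weight power model applied to two versions of a key-byte check.

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def unbalanced_check(guess: int, key_byte: int) -> list[int]:
    # Early-exit style: the match path does extra work, so trace length
    # and content reveal whether the guess was right.
    trace = [hamming_weight(guess ^ key_byte)]
    if guess == key_byte:
        trace.append(hamming_weight(key_byte))
    return trace

def balanced_check(guess: int, key_byte: int) -> list[int]:
    # Both paths perform the same operations, so the match/mismatch
    # decision no longer shows up in the trace's length or timing.
    diff = guess ^ key_byte
    return [hamming_weight(diff), hamming_weight(key_byte)]

KEY = 0x4B
print(unbalanced_check(0x4B, KEY), unbalanced_check(0x00, KEY))  # [0, 4] vs [4]
print(balanced_check(0x4B, KEY), balanced_check(0x00, KEY))      # same length
```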

SLEAK

The research institute MITRE has taken an alternative approach with its SLEAK tool, which uses elements of information theory to determine how likely a circuit is to leak information.

“SLEAK identifies statistical relationships between sensitive data and system state,” says Dan Walters, principal embedded security engineer at MITRE. Ideally, the mutual information between the two is zero: if not, the circuit or algorithm may be exposing more than you would like. Walters says a key advantage of this approach is that it does not rely on power or EMI models, which can be hard to implement and interpret; he claims it also helps pinpoint where problems lie.
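
As a toy illustration of the principle (not MITRE's implementation), the sketch below estimates the mutual information between a 4-bit key and what an attacker observes: the plaintext plus the Hamming weight of the key-mixed state. The unmasked device shows a clearly non-zero figure; adding a random mask drives it to zero.

```python
# Toy model of the mutual-information leakage metric.
import math
from collections import Counter
from itertools import product

def mutual_information(pairs: list[tuple]) -> float:
    """Plug-in estimate of I(X;Y) in bits from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def hw(x: int) -> int:
    return bin(x).count("1")

# Unprotected 4-bit toy device: the attacker sees the plaintext and the
# Hamming weight of the key-mixed state.
unmasked = [(key, (pt, hw(pt ^ key)))
            for key, pt in product(range(16), range(16))]

# Masked device: a fresh random mask hides the key's contribution, so the
# observed weight distribution is identical for every key.
masked = [(key, (pt, hw(pt ^ key ^ m)))
          for key, pt, m in product(range(16), range(16), range(16))]

print(f"unmasked: {mutual_information(unmasked):.3f} bits")  # > 0: leaks
print(f"masked:   {mutual_information(masked):.3f} bits")    # 0.000: safe
```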

Taking a look at the overall situation, Borza concludes, “We seem to agree on the main issues but it’s a diverse solutions space. We will see a lot of new ideas emerging on how to solve these problems and we will see a lot of companies spring up to help solve these problems as we are now starting to see a lot of industry level expression of interest.”