Zones of Influence

Silicon consolidation will reduce the number of chips and cables in cars. But it comes with consequences for the design of the chips themselves. By Chris Edwards

What are the consequences for chip design from silicon consolidation? Credit: adobe.stock.com

Carmakers are tiring of the humble microcontroller. Or at least those that can only do specific jobs, like controlling the brakes or the cabin air conditioning.

It all made sense in the early days of electronic control, not least because of the way manufacturers could ask specialists in each area to develop the hardware. But the net result in today’s vehicles is a proliferation of electronic control units (ECUs) throughout the vehicle, supported by a forest of cabling that links them to each other and the sensors and actuators they manage. And some of those sensors can be at the other end of the vehicle, adding a lot more cable.

The zonal architecture the carmakers want to use flips that around. The approach consolidates as much as possible into powerful multicore processors that communicate over an Ethernet backbone. This, in theory, comes with two benefits. First, an ECU in each zone controls everything close to it, which cuts the amount of sensor cabling. Second, tasks like braking and seat adjustment wind up being split between zones, with the ECUs in each zone coordinating with each other to steer and halt the car.

The payoff is that the new approach should help the designers move towards vehicles that can be defined largely in software. The programmers get to choose how to divide the workload across one or more ECUs. A high-level task may decide which brakes to activate based on what the various models run by the driver-assistance systems tell it. And zonal controllers closer to the actuators act on messages from that task to make it all happen.
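The division of labour described above can be sketched in a few lines of code. This is an illustrative toy, not any carmaker's real software stack: the function and class names are invented, and a production system would run over automotive Ethernet with safety-qualified middleware rather than in-process calls.

```python
# Hypothetical sketch: a high-level task decides which brakes to apply,
# and zonal controllers closer to the actuators act on its messages.

def plan_braking(obstacle_side: str) -> dict:
    """High-level task: map a hazard report to per-zone brake commands.
    The 0.8/0.5 split is an invented policy for illustration only."""
    if obstacle_side == "left":
        return {"front_left_zone": 0.8, "front_right_zone": 0.5}
    return {"front_left_zone": 0.5, "front_right_zone": 0.8}

class ZonalController:
    """Stand-in for a zone ECU that actuates the brakes wired to it."""
    def __init__(self, zone: str):
        self.zone = zone
        self.applied = 0.0

    def on_message(self, commands: dict) -> None:
        # Each zone acts only on the command addressed to it.
        if self.zone in commands:
            self.applied = commands[self.zone]

zones = [ZonalController("front_left_zone"), ZonalController("front_right_zone")]
commands = plan_braking("left")   # central decision...
for z in zones:
    z.on_message(commands)        # ...distributed actuation
print({z.zone: z.applied for z in zones})
# → {'front_left_zone': 0.8, 'front_right_zone': 0.5}
```

The point of the structure is that the braking policy lives in one place and can be updated in software, while the zone controllers stay simple message consumers.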

Honda is one carmaker that aims to combine driver assistance, powertrain control and cabin functions on a single multicore processor. At the start of the year, the company said it had engaged Renesas to provide the hardware and much of the design for its 0 Series of electric vehicles.

It will take time for these and other “fusion” chips to proliferate. Even by the early 2030s, consultancy McKinsey expects less than a third of vehicles shipped to have zonal architectures. Even so, chipmakers see it as an inflexion point in how they deliver the silicon, not least because these devices may well carry higher margins. Market analyst TechInsights projects that half of the dollar spend on automotive processors will go on the highest-performing SoCs by 2031.

How much each of these chips will cost is another potential obstacle, given how much processing power they might need. Without a dramatic improvement in AI efficiency, the systems expected to handle driver assistance will need silicon from bleeding-edge process nodes to deliver the required computing power. That will be coupled with devices containing many more transistors, adding to both production cost and design time. And the cost of designing huge monolithic chips limits how much the automobile companies or their Tier-1 suppliers get to customise.

Embracing the chiplet

One answer is to embrace something else the hyperscalers have seized upon: the chiplet. Speaking at Embedded World, Stewart Bell, European director of marketing and business development at Socionext, explained, “That monolithic SoC can be split into a number of chiplet building blocks, such as an I/O tile and a CPU tile, and then all brought together within a package to realise the whole system at a better price-performance point.”

Lower costs potentially come not just from the yield improvements that come with using smaller die sizes for individual chiplets but from the ability to tune the semiconductor process for each task. AI acceleration and high-end multicore processor complexes will use sub-10nm nodes. Interfaces for memory and I/O controllers can use older processes that are better matched for the job: analogue circuitry does not scale to newer processes as well as logic. That, in turn, provides lower-cost options for expanding space for memory I/O, which lets designers improve bandwidth by using multiple channels. A monolithic SoC often has limited area for I/O in order to keep die costs down. If you have the space of a larger package to run multiple I/O ports to a memory like LPDDR5, you can improve overall data bandwidth.
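The bandwidth argument above is simple arithmetic: aggregate bandwidth scales with the number of channels the package can afford to bring out. A back-of-the-envelope sketch, using illustrative figures (LPDDR5 at 6400 MT/s over a 16-bit channel; real configurations vary):

```python
# Rough model: aggregate memory bandwidth grows linearly with channel count.
# A 16-bit LPDDR5 channel at 6400 MT/s moves 6400e6 transfers/s x 2 bytes.

def aggregate_bandwidth_gbps(channels: int,
                             mt_per_s: float = 6400e6,
                             channel_width_bits: int = 16) -> float:
    bytes_per_transfer = channel_width_bits / 8
    return channels * mt_per_s * bytes_per_transfer / 1e9  # GB/s

print(aggregate_bandwidth_gbps(2))  # a die-area-constrained monolithic SoC
print(aggregate_bandwidth_gbps(8))  # a larger package with more room for I/O
```

Quadrupling the channel count quadruples the headline bandwidth, which is why the extra "beachfront" of a multi-chiplet package matters.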

You could go back to simply dividing the multicore SoCs into a collection of PCB-mounted chips, giving each one its own memory. But that comes at the cost of increasing the delay and energy needed to transfer data between them. If the chiplet sits on an interposer, the links can be a lot faster, though slower than across a monolithic die.

As with components soldered onto the PCB directly, automotive users want the ability to customise what goes into their ECUs. That also opens the door to the new crop of AI vendors looking to supplant Nvidia by offering the chiplet option to automotive customers who want more control over the future supply of silicon through more ASIC-like procurement. “We don’t want the chip to be end-of-life by surprise,” said Tenstorrent CEO Jim Keller at his AI-processor company’s Dev Day in April, pointing to the migration to RISC-V as another example of organisations wanting greater control over the architecture.

“We're talking to a whole bunch of automotive companies who want RISC-V in their platform. They're interested in both the RISC-V and chiplet technology with the CPU, the AI and all their own other hardware for their own applications,” Keller added.

Despite the attractions, automotive chiplet-based systems face some major obstacles, not least cost. One of the biggest lies in test. The yield of individual chiplets may be higher because the dies are smaller, but a single bad chiplet will kill the entire package. Package-on-package (PoP) construction, like that used in smartphones, works around this problem, and may be how DRAM gets combined with automotive AI processors: it is a much cheaper option than the high-bandwidth memory (HBM) stacks that server GPUs use.

Chiplet testing relies heavily on improving the ability to detect outliers while they are still on the wafer, which is more expensive than conventional chip testing and still cannot exercise each device fully. That pushes more test logic into the chiplets themselves. The latest iteration of the UCIe link incorporates self-test routines that let the chiplets on either side of the link agree on which ports they will use, sealing off those that fail or are marginal. One advantage is that the self-testing can continue through the life of the package, so chiplets can adapt to problems caused by ageing. However, this adds die area to the overall design, which negates some of the notional cost saving from disaggregation.
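The lane-repair idea can be reduced to a toy model. This is a sketch in the spirit of UCIe link bring-up, not a faithful implementation of the specification: both sides probe each lane, and traffic is carried only on the lanes both judge healthy.

```python
# Toy model of per-lane self-test and repair between two chiplets.
# Each side reports a pass/fail verdict per lane; the link keeps only
# the lanes that pass on both sides, and can re-run this in service
# to catch ageing-induced failures.

def train_link(lane_ok_a, lane_ok_b):
    """Return the indices of lanes both chiplets judged healthy."""
    return [i for i, (a, b) in enumerate(zip(lane_ok_a, lane_ok_b)) if a and b]

# Lane 2 looks marginal from side B's receiver; lane 5 failed on side A.
side_a = [True, True, True, True, True, False]
side_b = [True, True, False, True, True, True]
active = train_link(side_a, side_b)
print(active)  # → [0, 1, 3, 4]
```

The cost the article mentions is visible even here: the verdicts have to come from built-in test circuitry on every lane, which is die area the monolithic design would not have needed.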

Substrate costs can rise quickly, according to Andy Heinig, head of efficient electronics at the Fraunhofer Institute for Integrated Circuits (IIS). Cars will probably use the cheaper option of organic substrates rather than the silicon interposers favoured by the current crop of server-AI accelerators, but costs still climb when designs need high-density interconnect across a large area. According to figures compiled by Heinig, adding two layers to an organic substrate to relieve routing congestion increases its cost by at least a third, and the price soars as packages grow to accommodate more powerful chiplet complexes.

The substrates are also relatively unexplored compared to PCBs when it comes to reliability in what is a far more hostile environment than an air-conditioned server room. Both vibration and thermal shocks present problems that can lead to cracking and warping on larger substrates.

“Lots of research is needed to find a cost-effective solution,” says Andreas Aal, systems architect at Volkswagen. The carmaker is working with Fraunhofer IIS to perform some of it.

Other research programmes have coalesced around the Belgian research institute Imec, which has recruited a number of chipmakers to its programme, and around the Japanese ASRA consortium. These groups are building testbeds meant to iron out the cost and reliability hurdles. Other similarly cost-sensitive markets may benefit if the car industry finds the chiplet is commercially viable outside the well-funded server room.