Fibreglass fighting back as a PCB substrate

It says something for the status of FR4 as an everyday material that people notice when a printed circuit board (PCB) is not green. Yet both the fibreglass – yet another product of silicon – and the UL94V0-rated flame-retardant resin that binds the fibres together – the source of the FR in FR4 – are colourless.

The familiar colour comes from the solder-mask polymer used to coat the surface of the manufactured PCB and prevent stray solder from forming bridging connections between copper tracks. Exactly why green solder-mask became the standard is no longer clear. The decision goes back at least to the mid-1950s, with the two most plausible reasons being either an accident of chemistry – the source chemicals when combined produced a blue-green hue – or because the colour offered the greatest legibility. One theory is that the US military settled on the shade of green because it offered the greatest contrast to white lettering, easing the job of manual inspection.

After that, very few people bothered to ask for a different colour solder-mask. While some operations chose red to mark out prototypes, they settled on the familiar green for the production boards. A few companies made the use of other colours their trademark, proud of a difference that only a service engineer would see. In the late 1990s, Apple made the colour of the PCB a design feature by encapsulating the board and surrounding components in a translucent case. Suddenly, the PCB had to match the exterior.

There are some subtle differences in the way PCBs of different hues react. This is not so much down to the colour in visible light but how the chemical reacts to the ultraviolet light used to expose the solder-mask. Red and blue – the most commonly used alternatives to green – have fairly similar properties in that respect. Yellow does not usually fare so well, while black is generally not a good idea, although it can be made to work.

The real problems for FR4 lie within the resin-glass matrix beneath the coloured solder-mask. The substrate most people would want for electronic circuits is an ideal insulator – and FR4 is some way from being that, although it is nowhere near as bad as some of the paper-based materials used in very cheap electronics.
In many respects, FR4 is a mechanical carrier that happens to be a good-enough insulator to continue to serve as the substrate for most high-end electronics. The manufacturing process tends to introduce imperfections such as air bubbles and it is hard to maintain a consistent ratio of glass fibre to resin across the entire surface – which leads to changes in dielectric constant that can affect the accuracy of analogue circuits laid on top. The situation is not helped by the fact that FR4 is a lossy dielectric. As a result, the various capacitors formed between copper tracks and the substrate itself will have an effective capacitance that changes with the frequency of the signal. The worse the properties of the dielectric, the more quickly the capacitance falls and the more power is absorbed by the dielectric material.

The dielectric loss would not be so bad if it were not for the skin effect, which forces high-frequency electrical signals to the surface of a wire, effectively constraining them to a much narrower path than the physical width of a PCB trace would suggest. This increases the resistance of the transmission line, with the unfortunate result of degrading the overall signal (see fig 1). On a serial digital interconnect, successive bits smear into each other, making it progressively harder for the receiver to get a clear stream of data.

As clock speeds increased dramatically in the early 1990s, propelled by advances in semiconductor process technology, it looked as though FR4 was about to run out of steam. Even at relatively short distances – a matter of centimetres in many computers and not much more than a metre across a telecom backplane – the data rate per link was limited to a couple of hundred megabits per second. Backplanes were running out of space to accommodate hugely parallel buses. Architectures such as Futurebus+ appeared, but it quickly became apparent that massively parallel backplane buses would become expensive white elephants.
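The skin effect's contribution is easy to estimate. A minimal sketch, using the standard skin-depth formula and assuming copper (resistivity 1.68e-8 ohm-metres); the function name is illustrative, not from any real library:

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth in metres: the depth at which current density
    falls to 1/e of its surface value."""
    mu0 = 4 * math.pi * 1e-7  # permeability of free space, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu0 * mu_r))

# At 1GHz, current in a copper trace crowds into roughly the top
# 2 microns, so a standard 35um (1oz) copper trace uses only a
# sliver of its physical cross-section.
for f in (1e6, 100e6, 1e9, 10e9):
    print(f"{f/1e9:6.3f} GHz  skin depth = {skin_depth(f)*1e6:6.2f} um")
```

Because the useful conductor cross-section shrinks as the square root of frequency, trace resistance – and therefore loss – keeps climbing just as the dielectric loss is also getting worse.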
Connector companies were happy to design densely populated arrays of pins to carry wide buses, but the physics of capacitive interference called out for massive numbers of power and ground pins that could double up as shield lines.

There were two options on the table. One was to move to optical signalling, which would have rendered any discussion about the substrate moot: as long as it was stable enough to carry a waveguide, it would do the job. The other was to replace FR4 with a better dielectric and extend the switching rate into the gigahertz region. But the materials were far more expensive and less well understood by manufacturers. For example, while FR4 is not an easy material to drill, few PCB makers get it wrong. Some of the replacement materials were equally unpredictable, if not more so, under the drill and no-one was keen to experiment.

In the event, the semiconducting side of silicon won a reprieve for FR4. The answer to higher speeds came through modifications to the electrical signal to work around the high-frequency losses (see fig 2). Boosting the energy of the high-frequency portions of the electrical signal made it easier for the receiver to decode a signal accurately. Progressively more complex forms of equalisation made it possible to undo the effects of high-frequency smearing. The result allowed data rates to be pushed beyond 1Gbit/s.

While speeds on communications switch PCBs have now pushed to 6Gbit/s and beyond, the limitations of FR4 are beginning to show again. If the industry sticks to conventional pre-emphasis and equalisation techniques, around 8Gbit/s will be feasible. This is in stark contrast to the 20Gbit/s transceivers that are beginning to appear on high-end FPGAs. The choice between substrate materials, optical interconnects and more exotic signalling schemes has re-emerged. Better materials have been on the market for some years now, thanks to the growth of the RF communications business.
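Transmit pre-emphasis of the sort described above can be sketched as a one-tap FIR filter that subtracts a fraction of the previous symbol, so bits at a transition are sent with more energy than bits in a long run. The tap weight of 0.25 is an illustrative value, not taken from any cited design:

```python
def pre_emphasis(levels, tap=0.25):
    """One-tap FIR pre-emphasis: y[n] = x[n] - tap * x[n-1].

    Symbols that follow a transition come out larger than symbols in
    a long run, pre-compensating for the channel's high-frequency loss.
    """
    prev = levels[0]
    out = []
    for x in levels:
        out.append(x - tap * prev)
        prev = x
    return out

bits = [0, 0, 1, 1, 1, 0, 1]
levels = [1.0 if b else -1.0 for b in bits]  # NRZ: 0 -> -1V, 1 -> +1V
print(pre_emphasis(levels))
```

With these values, transition symbols leave the driver at +/-1.25 while repeated symbols sit at +/-0.75 – exactly the high-frequency boost the receiver needs to see a cleaner eye after the lossy FR4 channel.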
RF designers have been forced to move to these materials by the limitations of FR4; they are unable to use the signal-processing trickery that their digital colleagues can employ and have borne the cost, which is often four or five times higher per square metre. These materials generally offer a more consistent dielectric constant at high frequencies and greatly reduced loss tangents. For example, a good-quality FR4 substrate can have a loss tangent of around 0.015 at 10GHz. Rogers 4350, based on a ceramic compound, slashes that to less than 0.005.

Better materials may yet be beaten off by improvements in signalling. A 2008 paper by Rambus engineers Wendemagegnehu Beyene and Amir Amirkhany, published in IEEE Transactions on Advanced Packaging, concluded that it should be possible to push data rates to 20Gbit/s and beyond with better equalisation and a move to the kind of multilevel signalling used by internet modems. This keeps the symbol rate down, avoiding the problems caused by high-frequency losses, but increases the complexity of decoding the data accurately at the other end. Modems usually employ forward error correction to help the receiver interpret the incoming stream of symbols correctly.

A year later, Dong Kam and colleagues from IBM's TJ Watson laboratory agreed that it should be possible to get to 25Gbit/s, even if that was only realistic for distances of less than 1m. But there was a cost: power. At more than 10Gbit/s, the energy needed to support the signal processing that shapes and decodes data accurately climbs dramatically. Kam and colleagues estimated that it would take 22mJ/Gbit on a 10Gbit/s interface: about 70% more than an equivalent optical link. But at 20Gbit/s, the power would increase to 40mJ/Gbit, more than double its optical equivalent. And the optical link would be far less constrained in terms of cable length. Companies such as Altera and Intel are working on optical interfaces to replace high-speed electrical links.
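The quoted loss tangents translate directly into signal attenuation. A back-of-envelope sketch using the standard dielectric-loss approximation for a filled transmission line; the dielectric constants assumed here (roughly 4.4 for FR4, 3.48 for Rogers 4350) are typical datasheet values, not figures from the text:

```python
import math

C = 3e8  # speed of light in vacuum, m/s

def dielectric_loss_db_per_m(freq_hz, er, tan_d):
    """Dielectric attenuation: alpha = pi * f * sqrt(er) * tan_d / c
    in nepers/m, converted to dB/m (1 Np = 8.686 dB)."""
    alpha_np = math.pi * freq_hz * math.sqrt(er) * tan_d / C
    return 8.686 * alpha_np

f = 10e9
fr4 = dielectric_loss_db_per_m(f, er=4.4, tan_d=0.015)      # ~29 dB/m
rogers = dielectric_loss_db_per_m(f, er=3.48, tan_d=0.005)  # ~8.5 dB/m
print(f"FR4:         {fr4:5.1f} dB/m at 10GHz")
print(f"Rogers 4350: {rogers:5.1f} dB/m at 10GHz")
```

On these assumptions, the dielectric alone costs an FR4 trace roughly three times as much signal per metre as the ceramic laminate at 10GHz – before the conductor's skin-effect losses are even counted.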
However, Apple's launch earlier in 2011 of computers that use Intel's Thunderbolt port demonstrates the continuing tension between electrical and, theoretically better, optical signalling. Intel's work was aimed initially at optical transmission but has been retargeted to a more advanced electrical signalling system.

One problem with using optical interfaces for inter-chip communications on a board is the poor compatibility between silicon and the III-V processes that are suited to transmitting and decoding optical signals. Another is the relatively high cost of implementing optical waveguides on PCBs. These are much harder and more expensive to place than chemically etched copper traces, which tends to narrow the gap between the use of better substrates than FR4 and the embedding of fibre or similar waveguides in an FR4 board.

A possible driver for using optical communications between chips is the focus on power consumption and the amount of current it takes to feed a high-performance processor. As the supply voltage has dropped – a move enforced by shrinking process geometries – the amount of current needed by a high-speed processor has ballooned. Phillip Stanley-Marbell and colleagues from IBM Research in Zurich forecast how much I/O processors would need in 2020 and found that so many power pins would be needed that there would not be enough room to carry all the data signals needed to keep the multiple on-chip cores running at full speed. The researchers saw high-speed optical links as one way to free space for more of the power and ground pins.

One of the trends for the high-speed processors being developed by the likes of IBM and Intel is to make more use of 3D packaging. The memory interface is becoming one of the biggest consumers of energy in the computer. As most of the energy consumption is the result of trying to send a signal more than 1cm, slashing the distance between chips by stacking them looks more and more attractive.
If a chip-to-chip interface operates over a distance of millimetres or less, the energy needed at a data rate of 10Gbit/s falls to around 6pJ/bit. That is roughly half of the energy needed for the electrical-to-optical conversion and signalling that might be used for off-chip communication. Bringing several gigabytes of DRAM as close as possible to the processor – by stacking one on top of the other – should also reduce the number of transactions that need to go to the even larger main-memory array.

Having drilled holes in the ICs and filled them with conductive vias, it is not much of a conceptual leap to add an additional interface chip that carries multiple optical ports and other I/O (see fig 3). So, the move to 3D packaging may provide yet another extension to FR4's lifetime by reducing the need for a low-loss dielectric and by reducing the bandwidth needed for signals that do not leave the processor complex. The optical interfaces might simply be reserved for high-speed interprocessor links operating across a backplane or a set of pluggable modules. As a result, 3D packaging technology may limit the use of advanced substrates to a narrow class of systems that are unable to shift to optical interconnect and which cannot use more advanced signalling, either for power reasons or simply because they need to support large amounts of analogue RF circuitry.

But 3D packaging may stymie another trend that could change the way in which FR4 technology is deployed. One of the concerns in high-volume systems is the sheer volume of passive components that need to be picked and placed onto a board. When it comes to simple passives, such as resistors and capacitors, the insertion cost can be higher than the cost of the devices themselves. The pressure on size in many consumer systems has helped force manufacturers to adopt smaller, far more fiddly packages, such as the almost microscopic 01005, which measures just 0.4mm long by 0.2mm wide.
The answer might be to work those components into the PCB itself, using the buried layers to free space on the top and bottom surfaces for larger active devices. Unfortunately, the result often turned out to be an even larger – and certainly more expensive – PCB. Passives made on dedicated production lines, with the advantage of access to high-temperature ovens for co-firing in the case of ceramic chip capacitors, can be supplied with much tighter tolerances than materials that can be printed onto the surface or laid into trenches in PCB laminates. Companies working with the embeddable materials found that, to get good tolerances, the surface area or volume had to be much larger than with discrete components. This was fine for small-value resistors, but higher-value parts became prohibitive in terms of the area consumed on the PCB.

Placing discrete components between the internal layers of PCBs potentially offers a way forward for shrinking products. But that increases the complexity of manufacturing: it complicates test and requires a lamination process that does not destroy the discrete components. Techniques have been developed that will even place active devices in the internal layers – using better static control to prevent them from suffering electrical damage. However, the emphasis in these processes has shifted away from using them on PCBs towards putting them into multilayer packages that will then be placed conventionally on the larger board. This concentrates the cost in smaller substrates, leaving the larger PCB to be made using well-established and cheap production methods.

Despite FR4's obvious limitations, it may yet prove to be the great survivor of the electronics industry – fending off challenges from other PCB substrate materials thanks to trends in chip and circuit design around it.