Cold correction

Despite the stark warnings about its effect on older cryptographic systems that underpin many of the systems used in finance and computer security, quantum computing is not arriving in a hurry.

And part of the reason for that is the difficulty of controlling the delicate qubits needed to make the technology work. When she took to the stage for her keynote at the International Electron Devices Meeting (IEDM) in San Francisco in December, Maud Vinet, CEA-Leti researcher and CEO of its quantum spinout Siquance, said: “I first heard about quantum computing 28 years ago and then it was just a mathematical concept.”

Even at that early stage, Peter Shor, now professor of applied mathematics at the Massachusetts Institute of Technology (MIT) and creator of the famous algorithm that threatens ciphers based on large prime numbers, realised that quantum computing would need extensive help from the control electronics: the information stored by entangled qubits would most likely be corrupted or lost before it could be used.

In the mid-1990s, Shor developed a code that could detect and correct errors in qubits in a way that does not compromise the entanglement between data qubits, a major part of quantum computing's advantage. The principles behind that method remain the underpinning for most current proposals for workable quantum error correction, though researchers have made many modifications and improvements since. Shor's method uses additional qubits, known as stabilisers, that are analogous to the parity bits used for correcting binary errors. Unlike conventional binary error-detecting codes, the error readout relies on analysing symmetry properties of the entangled qubits rather than their actual contents, which prevents the superpositions from collapsing.
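The classical analogue gives a feel for how this works. The sketch below is a minimal illustration rather than a real quantum code: it uses the three-bit repetition code, whose two parity checks (the classical counterpart of stabiliser measurements) locate a flipped bit without ever reading out the encoded value itself.

```python
# Minimal classical sketch of the idea behind stabiliser readout: a three-bit
# repetition code whose two parity checks locate a single flipped bit without
# revealing the encoded value. (Illustrative only - not Shor's quantum code.)

def encode(bit: int) -> list[int]:
    """Encode one logical bit as three physical copies."""
    return [bit, bit, bit]

def syndrome(code: list[int]) -> tuple[int, int]:
    """The two parity checks: (b0 xor b1, b1 xor b2)."""
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code: list[int]) -> list[int]:
    """Flip whichever bit the syndrome points at, if any."""
    flip_position = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(code)
    fixed = code.copy()
    if s in flip_position:
        fixed[flip_position[s]] ^= 1
    return fixed

# A single error on any bit is located and repaired; the syndrome is the same
# whether the encoded value was 0 or 1, so the data itself is never exposed.
assert correct([0, 1, 0]) == [0, 0, 0]   # middle bit of encode(0) flipped
assert correct([1, 0, 1]) == [1, 1, 1]   # middle bit of encode(1) flipped
```

In the quantum version, those parity checks become measurements on extra ancilla qubits entangled with the data, which is what leaves the stored superposition intact.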

Experiments, which have only become feasible in the last couple of years, have demonstrated that the lattice codes devised for error correction will work. However, there are drawbacks. One is that they do not work for the full set of gates that a machine aiming at quantum supremacy will need, but only for the subset known as the Clifford group. Some researchers have proposed methods that might encompass T-gates and other operations that lie outside the Clifford group, but for the moment the focus is on working around them.
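For reference, these are the standard textbook definitions rather than anything specific to the machines described here: the Hadamard and phase gates (together with CNOT) generate the Clifford group, while the T gate's finer phase rotation falls outside the group, which is why it needs the workaround described next.

```latex
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix},
\qquad
T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}
```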

The typical method for emulating the complex phase shifts that these non-Clifford operations represent is to rework the circuit so that their results are computed by preparing specific quantum states and running them through Clifford gates. This generally involves trial and error to build a state that passes an accuracy test, in a process called magic-state distillation.
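The state at the centre of this process is commonly written as below (a standard definition, not one tied to any particular scheme). Consuming one sufficiently clean copy with Clifford gates and a measurement has the same effect as applying a T gate, and distillation trades many noisy copies for fewer, cleaner ones.

```latex
|A\rangle = \frac{1}{\sqrt{2}}\left( |0\rangle + e^{i\pi/4}|1\rangle \right) = T|+\rangle
```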

Overheads loom

The issue of overhead looms over both these schemes. Whereas you only need a couple of parity bits for binary error correction, protecting a single qubit may need 20 or 30 additional qubits to overcome the errors that are likely to prevail in quantum machines for the next couple of decades. How much overhead is needed, and where it lies, will depend heavily on topology. Distillation imposes major overheads for all but the photonic machines, particularly as systems may need to run multiple magic-state factories in parallel to avoid the risk of losing coherence before they can deliver a usable output.

The experiment that demonstrated error correction ran on the trapped-ion architecture used by a team from the University of Innsbruck in part because the technology allows any qubit in the array to be entangled with any other, which brings down the overhead. Superconducting machines of the kind made by IBM call for a higher ratio of correcting qubits: at least 20:1, compared with the 7:1 of the Innsbruck work. Photonic computers could prove far more efficient at magic-state distillation.
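A back-of-the-envelope sketch of what those ratios imply, reading the quoted figures as physical qubits per logical qubit; the 1,000-logical-qubit target is purely illustrative, not a figure from either project.

```python
# Rough overhead arithmetic using the correction ratios quoted above,
# read as physical qubits per logical qubit. The target is hypothetical.
LOGICAL_QUBITS = 1_000

for label, ratio in [("trapped-ion (Innsbruck)", 7), ("superconducting (IBM)", 20)]:
    physical = LOGICAL_QUBITS * ratio
    print(f"{label}: at least {physical:,} physical qubits at {ratio}:1")
```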

Clearly, production quantum computers will need sophisticated control electronics. They are not easy to implement when you consider that all but the photonic machines – which only need to run their photon detectors at low temperatures – have to maintain their qubits in circuits cooled to below 5K.

Not only does most conventional CMOS circuitry fail below 225K, but even if the elements did work, they would likely deliver too much heat for the cooling systems to cope with. So, today's experimental units put the controls far outside the cryogenic chamber.

Using Google's 50-qubit superconducting implementation as an example, CEA-Leti researcher Mikaël Cassé pointed the problem out in his talk at IEDM. “The striking feature of this is the room-temperature cabling, which takes up 50 per cent of the space. I think this is a good illustration of the progress that's still to be made before reaching the industrialisation of quantum computing.”

Shorter cabling will be needed not just to save space but for responsiveness: each logical operation may demand many rounds of correction, and all of them need to be completed before the entangled qubits decohere.

As it stands, the race is between two technology families. Much of the roadmapping work on close-quarters control of quantum computers has so far focused on what may turn out to be the best long-term bet: superconducting circuits. Scott Holmes, lead technologist at Booz Allen Hamilton, explained at the autumn readout meeting of the International Roadmap for Devices and Systems (IRDS) that many obstacles face superconductor circuitry, not least its structure. The longstanding Josephson junction has two terminals rather than the transistor's three, which makes circuit design trickier. According to Holmes, the densest devices made so far have a million junctions: “It's still small stuff. We need logic families that scale to much larger circuits.”

Holmes points to recent work on gated nanowire devices that may form the basis of a three-terminal switch, using phonons – vibrational quasiparticles – to control the flow of carriers through them. “We need to think about the best ways to gate, generate and control phonons,” Holmes says.

An effective stop-gap

In the meantime, CMOS may prove to be an effective stopgap: simulations and experiments demonstrate that, with some design tweaks, silicon can function even at millikelvin temperatures. In 2021, Innovate UK awarded a grant of £6.5m to a seven-member consortium led by memory specialist sureCore with a remit to jointly develop advanced cryogenic semiconductor IP.

Thanks to its long background of work with STMicroelectronics close to its Grenoble HQ, CEA-Leti favours fully depleted silicon-on-insulator (FD-SOI) technology. Semiwise, a UK-based start-up founded by University of Glasgow researcher Asen Asenov, has also focused on building models for cryogenically cooled FD-SOI.

Work disclosed at the most recent IEDM and other conferences has shown that, in some areas, the performance of CMOS improves as the temperature falls towards absolute zero. On-current, for example, has been shown to be stronger at lower temperatures. Unfortunately, the subthreshold swing that is crucial to fast, efficient switching does not keep pace: rather than improving in line with simple theory, it saturates, and noise increases in ways that have been hard to explain. CEA-Leti proposes using back-biasing, which is easier to apply with FD-SOI than with bulk CMOS, to compensate for the changes in carrier behaviour close to 0K. Careful layout may still be needed, as self-heating becomes significantly more problematic below 100K, with the heat conducted along the axis of the transistor channel.
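As a rough reference for the subthreshold behaviour, the ideal thermal limit on subthreshold swing scales directly with temperature (a textbook relation, not a figure from the work described above); real cryogenic transistors level off at a few millivolts per decade rather than reaching the value the formula promises at liquid-helium temperature.

```latex
SS_{\min} = \ln(10)\,\frac{k_{B}T}{q}
\approx 60\ \mathrm{mV/dec}\ \text{at}\ 300\,\mathrm{K},
\qquad \approx 0.8\ \mathrm{mV/dec}\ \text{at}\ 4.2\,\mathrm{K}
```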

According to Holmes, semiconductors remain the most likely candidates for quantum control in the short to medium term. But at some point, self-heating in the transistors will make it harder to justify the use of even cryogenically cooled CMOS as the number of qubits continues to double.

“At some point, somewhere between 2026 and 2035, is where we're going to hit the limits and we are going to need to switch over to superconductor-based control systems. We need them to be ready to pick up when that happens.”