23 November 2011

The world of serial communications between PCs and peripherals

When you consider how difficult it was to hook up peripherals to a PC in the late 1980s and early 1990s, it is little wonder that the Universal Serial Bus (USB) should have been such a success. Users had to wrestle with arcane interrupt and address selections to attach more than a couple of serial or parallel peripherals to a computer, often with unpredictable results and rarely entirely successfully.

Realising that, with the move to Windows, things wouldn't get better if PCs were stuck with RS232 serial and IEEE1284 parallel ports, seven companies clubbed together to try to improve I/O. When the first USB silicon appeared a year after the group formed, it promised a much easier life for PC users, although it took several more years for the interface to be adopted widely.

One problem was that, although USB was designed to support up to 127 peripherals connected to a single host port through a tree of intermediate hubs, early implementations fell well short of that. It took an update to version 1.1 for the specification to be robust enough to handle hubs reliably. It was only after version 1.1 appeared that USB ports came into widespread use, roughly coinciding with the release of Windows 98.

In USB 1.x, peripherals could support either Low Speed, with a 1.5Mbit/s transfer rate, or a 12Mbit/s rate with the highly confusing name of Full Speed. When USB 2.0 arrived, consumers naturally assumed that 'Full Speed' hubs and devices would support the later revision's higher maximum data rate when, in fact, they would only run at up to 12Mbit/s.

Over time, USB has competed more strongly with the IEEE1394 FireWire interface championed by Apple, which spent close to a decade in development before appearing as standard on Macs. Apple wound up adopting USB – putting ports on its machines before the 1.1 revision was complete – ahead of its favoured I/O standard. At the time, the two interfaces occupied different parts of the market.

FireWire was designed to daisychain a comparatively small number of peripherals, instead of attaching them through hubs. The big difference was the peak datarate. The low-voltage differential signalling (LVDS) interface represented, at the time, a marked change in the way that I/O worked electrically. The reduction in voltage swing, to just 350mV on each wire, reduced electromagnetic interference dramatically and allowed comparatively high datarates. At introduction, FireWire could pass up to 400Mbit/s, allowing it to be used as a replacement for the SCSI bus – Apple's original intended application for the I/O standard.

Another key difference from USB is the amount of host intervention needed – something that helped FireWire maintain a small, but significant, grip on audio and video peripherals even after the introduction of USB 2.0. Partly because it was intended to be used for hard drives, FireWire has a comparatively complex command set and built-in support for direct memory access (DMA) transfers. USB, by contrast, generally used programmed I/O to send and retrieve words from the interface, although the adoption of the Open Host Controller Interface (OHCI) did allow the controller to offload some transfer functions from the host processor.

The second major version of USB appeared in 2000 – offering a top claimed datarate of 480Mbit/s. To distinguish High-Speed peripherals from the existing Low-Speed and slightly ironically named Full-Speed devices, the USB Implementers Forum developed a 'chirping' protocol that the newer devices could recognise but which would not confuse older interface controllers. Once the chirp handshake completed, the host would treat the device as a High-Speed slave until the next reset.

USB 2.0 was made a firm standard in 2001, quickly turning into a mainstream offering on PCs and peripherals. Even Apple, which held out against offering USB 2.0 ports on its machines – largely because it already had high-speed peripheral support through FireWire – started to support it from 2003.

The dramatic success of USB 2.0 is, in some ways, a problem for its successors. There is not such a huge pent-up demand for a new interface as there was for USB 2.0. However, there are applications such as high-end audio and video storage and capture systems that can make use of additional bandwidth, not least because real-world datarates on USB 2.0 tend to top out at around 240Mbit/s.

USB 3.0 has taken a while to get to where it is now. Development work started in the mid-2000s, with an initial specification appearing in 2008. Even now, it is only just beginning to appear on PCs and high-speed peripherals. The interface is not expected to make it into Intel's motherboard chipsets until Panther Point arrives in 2012.

USB 3.0 does make extensive changes to the protocol, but maintains separate signal lines for USB 2.0 and 1.1 compatible devices. It even needs a slightly silly name to distinguish its protocol from the High Speed and Full Speed modes of its predecessors: SuperSpeed.

To improve bandwidth, USB 3.0 adopts much of the physical layer from PCI Express 2.0, such as the 8B/10B encoding system and data scrambling to reduce electromagnetic interference. The scrambling technique prevents repetitive bit patterns, such as 10101010, from generating strong frequency peaks. A further change is that connections are no longer half-duplex: there are separate data lines for transmit and receive, which should make better use of the maximum available data rate of 5Gbit/s than USB 2.0 can make of its nominal 480Mbit/s.
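The scrambler itself is a simple additive (XOR) whitener driven by a linear-feedback shift register. As an illustrative sketch – assuming, for the sake of example, the 16-bit PCI Express-style polynomial x^16 + x^5 + x^4 + x^3 + 1 seeded with all ones, rather than quoting the USB 3.0 specification directly – the principle looks like this:

```python
POLY = 0x0039  # taps for x^5 + x^4 + x^3 + 1; the x^16 term is the feedback bit
SEED = 0xFFFF  # link starts with the register full of ones

def lfsr_byte(state):
    """Clock a Galois-form 16-bit LFSR eight times; return the new state
    and one byte of keystream."""
    key = 0
    for bit in range(8):
        msb = (state >> 15) & 1
        key |= msb << bit
        state = (state << 1) & 0xFFFF
        if msb:
            state ^= POLY
    return state, key

def scramble(data, seed=SEED):
    """XOR each payload byte with the keystream. Because XOR is its own
    inverse, the same function descrambles when run with the same seed."""
    state, out = seed, bytearray()
    for b in data:
        state, key = lfsr_byte(state)
        out.append(b ^ key)
    return bytes(out)

payload = bytes([0xAA] * 16)          # worst-case repetitive pattern
whitened = scramble(payload)
assert whitened != payload            # repetition destroyed
assert scramble(whitened) == payload  # self-inverse, so the receiver can descramble
```

The point of the sketch is the last two lines: a highly repetitive payload leaves the transmitter looking pseudo-random, yet the receiver recovers it exactly by running the identical register from the same starting state, with no extra side channel.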

More advanced forms of LVDS signalling have made it possible to transmit at several gigabits per second over a differential pair through the use of pre-emphasis and active equalisation. But the SuperSpeed protocol is so sensitive to cable length that the distance from the USB controller in a host chipset to the connector at the edge of the PCB is now a major source of performance loss. For this reason, some of the electrical conditioning is moving out of the chipset and closer to the cable. Manufacturers are now selling redrivers that boost the weakened signal just before it crosses from the connector to the cable.

A further change is that packets are no longer broadcast across a tree of devices (see fig 1). Hubs route packets directly to the target device, so other devices in the tree are not forced to wake from sleep to check on a packet every time something appears on the bus, as they are with USB 2.0.

The data-transfer protocol is more streamlined for SuperSpeed, partly thanks to the adoption of the dual-simplex model (see fig 2). In previous forms of USB, data transfer involved a multistage handshake. First, the host would send an IN-token packet to initiate the transfer. Once the slave had responded, the host would send an ACK packet and, in the case of high-speed transfers, would probably immediately follow that with another IN-token. In SuperSpeed, ACK packets have multiple functions. Instead of sending an IN-token, a SuperSpeed host will kick off a transfer using one form of ACK. Once it has received the payload, it will send another ACK that contains a command for the next chunk of data and keep doing so until it sends a final ACK that does not request additional data. There is a similar change for outgoing packets that helps to reduce protocol overhead on the bus.

Higher-speed peripherals can take advantage of data bursting in which the host asks for a number of data packets to be sent in sequence. The host can request as many as 16, sending a subsequent ACK only when all of those packets have been received.
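A back-of-envelope count shows where the saving comes from. The toy model below is illustrative, not the wire format – it ignores framing, flow control and the outgoing-transfer case – and simply tallies packets for the two exchanges described above:

```python
import math

def usb2_in_packets(n):
    """USB 2.0-style IN transfer: each payload packet costs an IN token,
    the DATA packet itself and an ACK from the host."""
    return 3 * n

def superspeed_in_packets(n, burst=16):
    """SuperSpeed-style transfer: one ACK requests a burst of up to 16
    payload packets, and a final ACK that requests no more data closes
    the transfer."""
    bursts = math.ceil(n / burst)
    return n + bursts + 1

# 32 payload packets: 96 packets on the bus in the USB 2.0 model,
# but only 35 in the SuperSpeed bursting model
assert usb2_in_packets(32) == 96
assert superspeed_in_packets(32) == 35
```

Even in this crude accounting, bursting cuts the per-payload overhead from two extra packets to a fraction of one, which matters most for the storage and capture applications that push sustained transfers.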

One of the key changes in USB 3.0 is to reduce the number of times a host polls peripherals to see if they have data to send or are still operating on the bus and have not been disconnected. Because of the master-slave organisation of USB, a peripheral can only send data when polled by the host with an IN-token. As the host has no idea when data might arrive, it simply polls on a regular basis at a frequency set by the operating system. Windows usually defaults to 125Hz for mice.

If the mouse receives an IN-token and has not detected any movement since the last one, it will send a NAK packet, telling the host nothing has happened. The host will simply wait another 8ms before sending another IN-token to see if anything has changed in the meantime. The SuperSpeed mechanism handles the situation somewhat better. For example, if the peripheral is a disk drive responding to a read command from the operating system, it will have to wait for the head to reach the correct track and for data to start streaming back.

If the host sends an ACK request immediately after issuing the read command, the data will not be ready. Under SuperSpeed, the drive will send an NRDY packet saying, in effect, that it has nothing to send yet. However, instead of being forced to wait until the next time the host asks for data, the peripheral can, without prompting, send an ERDY packet. The host will then respond immediately by transmitting a new ACK request and the peripheral will start sending the data. Because it reduces the amount of polling needed, the new mechanism improves link-power management dramatically for peripherals with bursty, intermittent behaviour.
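The difference between blind polling and the NRDY/ERDY exchange can be sketched with a toy timing model. The 8ms figure echoes the 125Hz mouse polling rate mentioned above; everything else here is illustrative rather than taken from the specification:

```python
import math

POLL_MS = 8  # 125Hz polling interval

def usb2_polling(ready_ms):
    """USB 2.0 style: the host polls every POLL_MS and each poll before
    the data is ready draws a NAK. Returns (wasted NAK polls, time in ms
    at which the data actually leaves)."""
    first_hit = math.ceil(ready_ms / POLL_MS) * POLL_MS
    naks = first_hit // POLL_MS - 1
    return naks, first_hit

def superspeed_erdy(ready_ms):
    """SuperSpeed style: the device answers one early poll with NRDY,
    then sends ERDY the moment data is ready; the host replies with an
    ACK straight away, so nothing is wasted and nothing is late."""
    return 0, ready_ms

assert usb2_polling(20) == (2, 24)   # NAKs at 8ms and 16ms, data at 24ms
assert superspeed_erdy(20) == (0, 20)
```

For a drive that becomes ready 20ms after the command, the polled bus wastes two NAK round trips and delivers the data 4ms late; the ERDY path wakes the host exactly once, when there is actually something to say.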

Although USB 3.0 increases the amount of power that a hub can deliver to a peripheral – 900mA instead of 500mA – it also has features to prevent power consumption from spiralling out of control and to deal better with peripherals that need to sleep to save power.

To maintain synchronisation, SuperSpeed devices have to transmit packets constantly. This is because the equalisation system uses electrical training sequences to ensure the receiver can detect bit transitions properly and these are maintained through idle sequences once a link is established. Unfortunately, this translates into a big source of energy consumption. To overcome this, if a peripheral does not need to send data for a while, it can tell the host it is moving into a low-power mode and will retrain the link when it becomes active again.

There are three power-down modes in USB 3.0. U1 is a fast-recovery state, in which the node can reduce the frequency of keep-alive transmissions but return to normal transmission within microseconds. U2 offers a bigger power saving, but a slower recovery – this time measured in milliseconds. U3 is suspend mode, which extends recovery time further, but is still of the order of milliseconds. U3 can only be entered under software control, whereas U1 and U2 can be managed using hardware timers.
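Paraphrased as a lookup table – the labels below summarise the text above and are not field names from the specification – the trade-off looks like this:

```python
# Hypothetical summary of the three USB 3.0 link power-down states:
# deeper states save more power but take longer to recover.
U_STATES = {
    "U1": {"recovery": "microseconds", "entered_by": "hardware timer"},
    "U2": {"recovery": "milliseconds", "entered_by": "hardware timer"},
    "U3": {"recovery": "milliseconds", "entered_by": "software only"},  # suspend
}

# Only suspend (U3) needs the operating system's involvement
software_only = [s for s, v in U_STATES.items() if v["entered_by"] == "software only"]
assert software_only == ["U3"]
```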

The transition to USB 3.0 has provided another opportunity for Apple to go its own way. Earlier this year, the company launched a series of MacBook Pro computers sporting an interface called Thunderbolt. Developed primarily by Intel under the name Light Peak, the original plan for Thunderbolt had been to use optical signalling to increase bandwidth to, ultimately, 100Gbit/s. But the light went out on Light Peak when Intel claimed it could hit the initial target datarate of 10Gbit/s using more advanced signalling over differential copper pairs.

The cabling requirements for Thunderbolt, although not yet publicly disclosed, are even more stringent than for SuperSpeed USB as a consequence of the higher datarate. Signal conditioning now goes inside the cable itself – just behind each connector – so it can be calibrated for the correct electrical performance at the factory. Manufacturers such as Gennum are offering signal conditioners that will fit onto a tiny PCB that can be attached to the connector.

Like FireWire – and in contrast to USB – Thunderbolt is a peer-to-peer technology. There is no master at the top of the tree that is needed to initiate transfers. USB's master-slave design, by contrast, demanded a key change to the protocol to allow peripherals, such as cameras or phones, to sometimes act as masters so that they could transfer pictures to an external storage device without calling for intervention from a host PC. The On-The-Go (OTG) variant of the USB standard allows peripherals to turn into masters when necessary.

Role reversal on USB 2.0 is slightly more complex than it at first seems, because the roles are reversed at the electrical-signalling level and not just through an additional logical protocol.

The peer-to-peer nature of FireWire enables, for example, the target disk mode supported by Macintosh computers. Using FireWire, one computer can access another's disk drives over the bus, effectively taking control of the target machine without direct intervention from its OS. However, the DMA support and peer-to-peer nature of FireWire and Thunderbolt threaten to open a security hole in computers. Using DMA, it is possible for a remote machine to access any physical area of memory in the target machine that is reachable by its FireWire DMA controller. A proof-of-concept attack was demonstrated in 2006.

In principle, a hacker could walk up to an unprotected machine, attach a FireWire peripheral and, minutes later, walk away with sensitive data found on the target computer. The target machine might expect DMA transfers from FireWire to be made to known I/O buffers but, as normally implemented, the DMA controller does not limit access to those areas: the regions reached could include parts of memory dedicated to kernel operations.

In practice, OSs such as OS X lock access to FireWire DMA if a password has been set and protection activated by the screensaver. It is currently unclear whether protective steps have been taken for Thunderbolt or how vulnerable it is to attack.

A bigger concern for Apple and Intel is whether the additional promised bandwidth will convince peripheral makers to support Thunderbolt or stick with USB 3.0 which, despite its slow start, is likely to dominate the PC industry within the next five years.

Chris Edwards

This material is protected by Findlay Media copyright