VXFabric software system for backplane configuration


When work started on developing the VPX backplane standard in 2004, members of the VITA Consortium wanted to create something that represented the next revolution in bus based boards.

The work drew on VME's heritage but, rather than continuing to use parallel bus communications, the aim was to support a range of high speed serial interconnect fabrics, potentially handling data rates of up to 6.25Gbit/s over hybrid backplanes. VPX is now a broadly defined technology, but is focused at the board level; the OpenVPX framework has extended this to take more of a system level approach.

According to embedded computing specialist Kontron, even though hardware guidelines have been set, OEMs and developers still need to find the ideal communication protocol. While standards such as PCI Express, Gigabit Ethernet and Serial RapidIO can be used for intra system communications, it says setting up such systems is not easy. The challenge for OEMs, says Kontron, is to find an easy to use communications protocol that is not only fast, but which also features low latency.

Its solution is VXFabric, an open infrastructure which implements interboard communication at hardware speed using PCI Express. The first implementation of the software system has been made using the VX6060, a 6U VPX computing blade, and a VPX backplane.

Vincent Chuffart, a Kontron product marketing manager, said the main driver, even for VPX systems, is the need to route high speed signals. "If you want to route heavy traffic between boards, there are two general solutions. One requires extra silicon and mezzanine boards; the other is proprietary, through fpgas. PCI Express is available in all chipsets, but it isn't designed to handle board to board communications; rather, it's intended to be at the head of a tree of devices."

With VXFabric, physical interconnect over the backplane can be made in two main ways: either five nodes can be connected over a PCI Express backplane without the need for an additional switch; or, if higher bandwidth is needed, up to 12 nodes can be connected with a PCI Express switch, such as the VX3905. In either case, the VX6060's two processing nodes are connected to each other and to the data plane via PCI Express, which allows communication at hardware speed and the highest bandwidth.

VXFabric has been developed to simplify the task of data flow management. Kontron says it is equivalent to an Ethernet network infrastructure mapped over a switched PCI Express fabric, with the layers implemented in such a way that users can handle the communication through an IP socket interface. This API allows direct access to most protocols, including TCP and UDP; a short example appears below. Because VXFabric requires no modification of existing applications, development effort is reduced.

VXFabric comprises a set of ready to use libraries and kernel modules that expose the socket API, allowing data flow applications to implement efficient interboard communication at hardware speed. By decoupling the application software from low level silicon management, VXFabric simplifies application development. Kontron also notes this approach will help to extend application lifecycles, as migration to future backplane communication standards, such as 10G and 40G Ethernet, is supported.

Chuffart noted: "The idea is to offer a continuous software based approach that can be used in the future. We expect the next step, which will happen in the next few years, will be a move to 10G Ethernet on the backplane." He admitted that VXFabric is not 'best in class' for latency or similar features, but drew a comparison with the early days of video recording. "Everyone knew Betamax was the best system, but VHS won out."
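The point about unmodified applications can be made concrete with a short sketch. The code below is an ordinary TCP sender written against the standard socket API; the peer address 10.0.0.2 and port 5000 are hypothetical, and the sketch assumes the vxeth0 pseudo interface described later has already been configured with an address on the same subnet. Nothing in it is VXFabric specific, which is precisely the point.

```c
/* Minimal TCP sender: standard socket code needs no changes to run over
 * VXFabric. The peer address 10.0.0.2 (a cpu on another blade, reached
 * via the vxeth0 pseudo interface) and port 5000 are hypothetical. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* ordinary TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr);

    /* Whether 10.0.0.2 sits at the end of a cable or across a PCI
     * Express backplane is invisible here; the IP stack routes the
     * traffic to the appropriate interface. */
    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "hello over the backplane";
    write(fd, msg, sizeof msg);
    close(fd);
    return 0;
}
```

Run against a listener on a second blade, the program behaves exactly as it would over any Ethernet link; the difference is simply that VXFabric carries the packets over PCI Express.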
Chuffart added that one of the reasons why Kontron developed VXFabric was a growing lack of expertise amongst its customer base. "Integrators are hiring engineers who often don't know how a bus operates, so one of the targets for VXFabric is those companies who are not familiar with VPX. Those companies expect Kontron to provide a fully functional computing platform on to which they can load their application."

The standard user programming model for VXFabric is based on the IP protocol and implements a socket layer API through an emulation of an Ethernet interface over PCIe, an approach said to be similar to the pseudo Ethernet implementations found in virtualisation environments. As such, Kontron says porting existing applications to VXFabric is a straightforward exercise for software developers.

"VXFabric is implemented as Linux modules loaded at run time," Chuffart explained. "At the bottom end, VXFabric will set the address range, interrupts and so on between the various boards on the backplane. At the top end, it can link to the rest of a standard software stack. From the programming level, you don't know if the link is being made over a wire or a backplane; it's just another IP address."

The simplest way to interconnect up to four computing blades via VXFabric is to use a distributed topology. The VPX processing boards interconnect via four PCIe x1 lanes, with the connection made to the first cpu (CPU A) on the Kontron VX6060. This topology does not require a dedicated VPX PCI Express switch, because all VXFabric traffic flows through the system controller's integrated PCI Express switch in slot 1.

A centralised topology uses a VX3905 PCI Express and Ethernet hybrid switch, with six x4 PCIe lanes interconnecting up to six VX6060 boards. The PCIe switch occupies one VPX slot. Any cpu on a VX6060 can communicate with any cpu on the other boards; the second cpu on the VX6060 (CPU B) is then used as a coprocessor. VXFabric traffic is balanced across the VX3905's PCI Express ports.

One further configuration uses a PCI Express switch to serve a star topology comprising 12 PCI Express x2 lanes connecting six Kontron VX6060s. In this configuration, both processors on the VX6060 (CPU A and CPU B) are connected to VXFabric and can communicate directly with any other cpu. This, says Kontron, is the centralised equivalent of a full mesh topology.

"A lot of knowledge has gone into making this work," Chuffart claimed. Once the first VX6060 in the system has booted Linux, the Ethernet emulation over VXFabric has to be initialised. The first step is to retrieve some low level information about the fabric through the vxfabric command line interface. The vxeth module is then loaded; this creates two pseudo Ethernet interfaces – vxeth0 and vxeth1 – both similar to standard eth# Ethernet interfaces. They map to vxfabric #0, which serves the backplane, and vxfabric #1, which handles local communication between the cpus on the VX6060. Once vxeth has been loaded, users can configure the major network services, including ftp, telnet and NFS, according to their needs; a short check that the interfaces are present is sketched at the end of this article.

"VXFabric is targeted at integrators who aren't familiar with VPX," Chuffart continued, "but integrators at any level can use it." Kontron isn't alone in developing such a solution; according to Chuffart, other companies are looking to do something similar. "To date, no one has such an elegant and straightforward solution. Kontron has deployed a solution while competitors are still thinking about it."
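As a closing illustration of the initialisation sequence described above: once the vxeth module has been loaded, the pseudo Ethernet interfaces become visible to any standard Linux program. The sketch below uses the POSIX getifaddrs() call to confirm that vxeth0 and vxeth1 are present; only the interface names are taken from the article, everything else is ordinary Linux API.

```c
/* A sketch of a post initialisation check, assuming the vxeth module
 * has been loaded as described in the article: list the system's
 * network interfaces and report the vxeth pseudo Ethernet devices. */
#include <ifaddrs.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct ifaddrs *ifa_list, *ifa;

    if (getifaddrs(&ifa_list) < 0) {    /* standard POSIX call */
        perror("getifaddrs");
        return 1;
    }

    /* Note: getifaddrs returns one entry per interface per address
     * family, so an interface may be listed more than once. */
    for (ifa = ifa_list; ifa != NULL; ifa = ifa->ifa_next) {
        /* vxeth0 (backplane fabric) and vxeth1 (local, cpu to cpu)
         * should appear alongside any conventional eth# interfaces. */
        if (strncmp(ifa->ifa_name, "vxeth", 5) == 0)
            printf("found VXFabric interface: %s\n", ifa->ifa_name);
    }

    freeifaddrs(ifa_list);
    return 0;
}
```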