Squeezing the performance out of ATCA through virtualisation


It is more than 10 years since the release of PICMG 3.0, the standard which defined the Advanced Telecom Computing Architecture (ATCA). Deployed in many parts of the telecoms network, ATCA is now beginning to be adopted in markets such as military and aerospace, where highly reliable computing is a hard necessity.

Advances in processor technology have increased the number of 64bit processor cores, and the amount of memory, that can be supported on a single ATCA payload blade. While early systems only had to contend with dual and quad core processors, the current generation of ATCA blades with 40G fabric interfaces, such as Artesyn’s ATCA-7480, can support 24 hyperthreaded cores with up to 256Gbyte of RAM. Future boards will be even more complex.

ATCA is steadily being adopted by the US military because it is a bladed server architecture, similar to those on which the military has relied in the past. The advantage is that ATCA is an open standard.

The US Navy, for example, has adopted ATCA in a major combat system which takes as its input a variety of radar data, tracks incoming objects, identifies whether they are friend or foe and then coordinates the responses.

Many ancillary systems that interface to this combat system are also migrating to ATCA. While the US Navy has only started to investigate virtualising the combat system’s main components, a training system has already moved to virtualisation, bringing application abstraction and hardware independence. Meanwhile, a US Marines sensor analysis system uses VMware on a six slot Artesyn ATCA system. The system’s capability was increased by adding virtual machines (VMs) and application code, rather than by migrating individual application threads to processor boards running dedicated versions of Linux.

As with the US Navy training system, developers of any new capability added to the system could choose the type and version of operating system (OS) based on that application’s requirements, without affecting the existing applications. And by standardising on ATCA, processor boards can be upgraded selectively, increasing the system’s overall processing capability with minimum impact.

A variety of technologies were evaluated alongside ATCA, including VPX, enterprise servers and proprietary bladed architectures. VPX was not chosen because of the cost of delivering a level of performance comparable to an ATCA based system. Although the ruggedisation requirements exceeded those of a typical enterprise class system, they did not demand the level of ruggedisation a VPX based system provides. Meanwhile, support for 40G backplanes is readily available today with ATCA, but not with VPX. ATCA was the obvious choice because it could deliver the required level of performance in an efficient and rugged system envelope that could be a foundation for the future.

Making the most of it

To make efficient use of the resources available from a modern 14 slot ATCA chassis, application developers will require software which can take advantage of all the processor cores, memory and storage that will be available. Virtualising these system components and running applications under a variety of VMs is an efficient way to deliver a modern ATCA system.

Open source and proprietary virtualisation software is available for a variety of processor technologies, with Xen and KVM representing two of the most popular open source options. Xen, originally developed at the University of Cambridge Computer Laboratory, is a hypervisor based on a microkernel design. KVM, meanwhile, is a kernel module added to Linux that turns the kernel into a hypervisor and manages VMs. Both target a variety of processors, but focus on Linux based guest OSs.
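To make the distinction a little more concrete, the short sketch below shows how a KVM host is typically driven from management code. It assumes the libvirt Python bindings and a local KVM/QEMU hypervisor, neither of which the article mandates, and simply connects to the host, reports its topology and lists its guests.

import libvirt

# Connect to the local KVM/QEMU hypervisor (libvirtd must be running).
conn = libvirt.open('qemu:///system')

# Host topology: [arch, memory (MB), CPUs, MHz, NUMA nodes, sockets, cores, threads]
print(conn.getInfo())

# List every defined guest and whether it is currently running.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    print(dom.name(), 'running' if state == libvirt.VIR_DOMAIN_RUNNING else 'not running')

conn.close()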

Red Hat, meanwhile, offers Red Hat Enterprise Virtualization, which can use either a bare metal hypervisor (the RHEV Hypervisor) or KVM on a full Linux host, together with upper level management tools. Though parts of this product are open source, users will need to acquire a licence in order to access some of the enterprise tools and to get support. RHEV 3.3 supports Linux and Windows guest OSs.

Two popular commercial products are VMware and Microsoft Hyper-V. These, along with Red Hat’s RHEV, are found in most data centres and all support bare metal hypervisors. VMware and Microsoft have well defined hardware compatibility lists (HCLs) and validation programs.

Certification of the server or payload board under these programs guarantees support for the tools used to create and maintain VMs. Artesyn’s engineering team works closely with VMware to certify server payload boards so they can be included in the HCL for VMware products.

Which is best? It depends on the requirements. Products like VMware and Hyper-V claim near bare metal performance for applications running under a VM and near 100% CPU utilisation. While these products require licences and support contracts to be purchased, open source products demand extensive in-house knowledge to maintain and don’t provide the upper level tools found in proprietary products.

Many enterprise class data centres, where rack mount or bladed servers handle incoming requests, have already migrated to virtualisation. In these applications, IT managers use virtualisation to eliminate the dependency between the hardware running in the data centre and the applications and OSs it supports. PICMG has been working on the use of ATCA in enterprise applications, and the virtualisation practices of the data centre apply equally to enterprise class ATCA systems.

Updating the OSs on large numbers of machines is problematic, time consuming and painful. Installing a hypervisor, rather than a full OS, on each compute element as it is added to the data centre allows its resources to be added to the existing pool. New VMs with a guest OS can be installed with the application, or existing VMs can be migrated from other servers. The entire virtualised centre can be managed from a few client stations. Resources can be provisioned ahead of actual use, reducing potential downtime due to increased user load. Fully using the compute resources also reduces the number of servers required and, in turn, the amount of power and cooling needed.
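As a rough illustration of that migration step, the sketch below uses the libvirt Python bindings to live-migrate a running guest from one blade to another. The host and guest names are hypothetical, and a production system would normally drive this from its management tooling rather than an ad hoc script.

import libvirt

# Source and destination hypervisors; the blade host names are placeholders.
src = libvirt.open('qemu+ssh://blade-a/system')
dst = libvirt.open('qemu+ssh://blade-b/system')

# Look up a (hypothetical) running guest on the source blade.
dom = src.lookupByName('sensor-analysis-vm')

# Live-migrate the guest to the second blade without shutting it down.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()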

Advances in flow management software, such as Artesyn’s FlowPilot (created for its ATCA-F140 and ATCA-F125 switch/hub blades), bring further optimisation capabilities to an ATCA system. Statistical flow analysis of inbound streams can balance data across the virtual pool of VMs, maximising use of the available computing resources.
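FlowPilot’s internals are proprietary, but the general idea of steering traffic to members of a VM pool can be sketched in a few lines. The example below is purely illustrative, with hypothetical VM names: it hashes a flow’s 5-tuple so that every packet of a given flow lands on the same VM.

import hashlib

# Pool of worker VMs; the names are placeholders, not real addresses.
vm_pool = ['vm-01', 'vm-02', 'vm-03', 'vm-04']

def pick_vm(src_ip, dst_ip, src_port, dst_port, proto):
    # Hash the flow's 5-tuple so all of its packets go to one VM.
    key = f'{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}'.encode()
    digest = hashlib.sha256(key).digest()
    return vm_pool[int.from_bytes(digest[:4], 'big') % len(vm_pool)]

print(pick_vm('10.0.0.5', '10.0.1.9', 49152, 443, 'tcp'))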

Modern ATCA systems, like Artesyn’s Centellis 4440, are very powerful compute resources, capable of handling the most demanding applications. But taking full advantage of the system’s 288 hyperthreaded cores is extremely difficult without virtualisation software. Even assuming a non-virtualised deployment uses only 16 of the 24 cores on each blade under an SMP operating system, with 75% of its processing threads running in parallel, it would require 33% more processing and chassis infrastructure to do the same amount of work as the existing system running virtualisation software.
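One way to read that arithmetic, offered only as an assumption about the figures quoted above, is as a simple utilisation ratio: if the SMP deployment keeps roughly 75% of its processing capacity busy while the virtualised deployment approaches full utilisation, the shortfall works out at about a third.

# Hedged reconstruction of the 33% figure; the 0.75 utilisation is an assumption.
smp_utilisation = 0.75
extra_hardware = 1 / smp_utilisation - 1
print(f'{extra_hardware:.0%} more processing and chassis infrastructure')  # prints 33%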

This, along with the difficulty of maintaining a large number of individual processing elements in a typical application, creates a compelling case for evaluating virtualisation software for your next system design.

Author profile:

Rob Persons is senior field applications engineer with Artesyn Embedded Technologies.