The rise of what is being described as the intelligent connected world has brought with it an explosion in data, the growing adoption of artificial intelligence and a move to more heterogeneous computing.
The electronics industry is seeing exponential change, and with it come challenges; in particular, the speed of innovation is beginning to outpace silicon design cycles. The result is a growing need for acceleration and a move towards programmable logic and FPGAs.
These devices can provide massive computational throughput with very low latency, which means they can process data at wire speed and implement complex digital computations, with power and cooling requirements an order of magnitude less than those of either CPUs or GPUs.
Earlier in 2018, at a developers’ forum in Frankfurt, Xilinx’s senior director for software and IP, Ramine Roane, said design teams were increasingly turning to FPGAs as CPU architectures failed to meet the demands of increasing workloads.
According to Roane: “As CPU architectures fail to meet demand, so there’s growing interest in heterogeneous computing with accelerators. The breadth of apps being developed is also requiring different architectures. Designers are addressing the need for both higher performance and lower latency and, while we saw a move to multicore architectures to address this, we’re now seeing multicore architecture scaling beginning to flatten.”
With the growth in new applications, demand for application-specific accelerators has increased. But, according to Roane: “Whether for video, machine learning or search applications, we have reached a point when specific accelerators can no longer be justified on economic grounds. Why? Because workloads are becoming more diverse and demand is constantly changing.”
Roane suggested there has been a move away from application-specific accelerators towards more reconfigurable ones, a trend that has played to the strengths of FPGAs and SoCs.
“By using FPGAs, it is possible to provide configurable processor sub-systems and hardware that can be reconfigured dynamically. Their key advantages are that design engineers can build their own custom data flow graph, which can be customised to their own application with its own custom memory hierarchy, which is probably the biggest advantage as it lets you keep data internal to your pipeline,” he explained.
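The custom data-flow pipeline with its own memory hierarchy that Roane describes can be sketched in HLS-style C++. This is a minimal, hypothetical example (the kernel name, stage structure and prefix-sum operation are all illustrative, not taken from Xilinx material); the `#pragma HLS` directives guide Xilinx's compiler but are ignored by a standard C++ compiler, so the same source also runs on a CPU:

```cpp
#include <cstdint>

// Hypothetical HLS-style kernel: a three-stage pipeline whose
// intermediate data never leaves the chip -- the "custom memory
// hierarchy" advantage Roane describes. Data is burst-read from
// external DDR into an on-chip buffer, processed, then burst-written
// back, so the compute stage never touches external memory.

static const int N = 8;

static void read_stage(const int32_t *in, int32_t buf[N]) {
    for (int i = 0; i < N; ++i)
        buf[i] = in[i];            // burst-read DDR -> on-chip (BRAM-style) buffer
}

static void compute_stage(const int32_t buf[N], int32_t out_buf[N]) {
#pragma HLS PIPELINE
    int32_t acc = 0;
    for (int i = 0; i < N; ++i) {
        acc += buf[i];             // running sum kept entirely in registers
        out_buf[i] = acc;          // illustrative operation: a prefix sum
    }
}

static void write_stage(const int32_t out_buf[N], int32_t *out) {
    for (int i = 0; i < N; ++i)
        out[i] = out_buf[i];       // burst-write results back to DDR
}

void prefix_sum_kernel(const int32_t *in, int32_t *out) {
#pragma HLS DATAFLOW
    int32_t buf[N];                // local buffers: the custom memory hierarchy
    int32_t out_buf[N];
    read_stage(in, buf);
    compute_stage(buf, out_buf);
    write_stage(out_buf, out);
}
```

With the DATAFLOW directive, the tools can run the three stages concurrently as a hardware pipeline; on a CPU the function simply executes sequentially, which makes it convenient to unit-test before synthesis.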
|"We have reached a point when specific accelerators can no longer be justified on economic grounds."|
Ramine Roane, Xilinx, senior director for software and IP
While FPGAs can offer massive computational advantages, programming them has traditionally been seen as a challenge, despite various application tools being available.
Designers have also been put off using FPGAs as, traditionally, they have needed large investments in dedicated hardware to develop custom designs and hardware prototypes, run large-scale simulations and test and debug hardware-accelerated code.
Roane conceded the cost of FPGA engineering is an important reason why they haven’t become more mainstream and pointed to the complexity of programming them. Analysts have suggested that FPGA technology has, to a degree, been self-limiting because of the perceived complexity.
In an increasingly data driven world, Intel and Xilinx are developing partner ecosystems and are looking to deliver a much richer development stack so hardware, embedded and application software developers will be able to program FPGAs more easily by using higher level programming options, like C, C++ and OpenCL.
“We are now able to deliver a development stack that designers are increasingly familiar with and which is available on the Cloud via secure cloud services platforms,” said Roane.
The growing role of the Cloud
To increase application processing speeds, hardware acceleration is being helped by Cloud platforms such as Amazon Web Services’ (AWS) FPGA-based EC2 F1 instances. This new type of compute instance can be used to create hardware accelerations for specific applications.
F1 comes with the tools needed to develop, simulate, debug and compile hardware acceleration code, including an FPGA Developer AMI and a Hardware Development Kit.
“Pre-built with FPGA development tools and runtime tools to develop and use custom FPGAs for hardware acceleration, our FPGA Developer AMI provides users with access to scripts and tools for simulating FPGA designs and compiling code,” explained Amazon’s senior director of business development and product, Gadi Hutt.
According to Hutt, the AWS Cloud provides greater agility and speed, cost savings and the ability to scale up and down quickly, as needed. “By using the Cloud,” he continued, “we are providing on-demand delivery of compute, storage and networking services.
“Engineers will no longer have to worry about hardware, networking, power and cooling,” Hutt said.
Amazon EC2 F1 instances are offered in two sizes that include up to eight FPGAs per instance.
|"By using the Cloud, we are providing on-demand delivery of compute, storage and networking services."|
Gadi Hutt, Amazon, senior director of business development and product
“F1 instances include 16nm Xilinx UltraScale+ FPGAs, with each FPGA including local 64GB DDR4 ECC-protected memory and a dedicated PCIe x16 connection,” Hutt explained. “The ability to code applications in the C, C++ and OpenCL programming languages is possible through the availability of Xilinx’s SDAccel development environment.”
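As a sketch of what that looks like in practice, the following is a hypothetical vector-add kernel of the kind SDAccel compiles for an FPGA (the function name `vadd` and the pragma bundling are illustrative assumptions, not taken from AWS or Xilinx sample code). The INTERFACE pragmas tell the tools to map the pointer arguments onto the card's DDR memory interfaces; a standard C++ compiler ignores them, so the same source can be tested on a CPU first:

```cpp
#include <cstdint>

// Hypothetical SDAccel-style C++ kernel. The m_axi INTERFACE pragmas
// map the pointer arguments onto the FPGA card's external memory;
// PIPELINE II=1 asks the tools to start one loop iteration per clock.
// Compiled as ordinary C++, the pragmas are ignored and the function
// behaves as a plain vector add.

extern "C" void vadd(const int32_t *a, const int32_t *b,
                     int32_t *c, int n) {
#pragma HLS INTERFACE m_axi port=a bundle=gmem
#pragma HLS INTERFACE m_axi port=b bundle=gmem
#pragma HLS INTERFACE m_axi port=c bundle=gmem
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        c[i] = a[i] + b[i];   // one addition per clock once pipelined
    }
}
```

The `extern "C"` linkage keeps the kernel's symbol name unmangled, which is how host code built with the OpenCL APIs mentioned above locates the kernel by name.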
Each FPGA contains approximately 2.5 million logic elements and 6,800 DSP engines.
According to Hutt: “AWS will allow your company to ‘get out of IT’ and focus on providing specialised services where you can add value. It means you can focus on your core business.”
The benefit of using EC2 F1 instances, Hutt added, is that it’s a quick way to deploy custom hardware solutions. “Literally with just a few clicks in the platform’s management console,” he claimed.
Because F1 instances can have one or more Amazon FPGA Images (AFIs) associated with them, designers will have much greater speed and agility and be able to run multiple accelerations on the same instance. “It’s also very predictable,” said Hutt.
Connected via a dedicated PCI Express fabric, FPGAs can share the same memory space and communicate with each other at speeds of up to 12Gbit/s. The design ensures that only the application’s logic is running on the FPGA while the developers are using it; this is possible because the PCI Express fabric is isolated from other networks and FPGAs are not shared across instances, users or accounts.
With respect to EC2 F1 instances, Hutt made the point that it is possible to deploy hardware acceleration without having to buy FPGAs or specialised hardware to run them. “That reduces the cost of deploying hardware accelerations for an application dramatically.”
“There’s a tremendous opportunity for FPGAs to shine in a number of areas,” Hutt concluded. “It’s about democratising FPGA development and increasing the number of use cases. The platform is continually evolving and I believe users are turning to F1 because it offers access to thousands of servers in a matter of seconds, which means you can roll out applications quicker, complete your work faster and cut costs.”