As deployments of Industrial IoT systems continue to proliferate, the volume of data streamed to the cloud is skyrocketing, drastically increasing the cost of cloud computing.

To address this, many system designers are adopting edge computing, in which data processing is done close to the source (e.g. the sensors) in a bid to reduce data transfer, storage and processing costs, and to address a few other concerns with Cloud Computing, security in particular.

Big Data is a broad label for the growing amount of data generated by IoT devices and smart systems. For instance, some aircraft engines have more than 5,000 elements that are monitored at relatively high sample rates. Most of that data is transferred to a ground station for real-time monitoring of the engine and for future R&D work. This is only part of a growing trend: most ‘smart’ systems produce vast amounts of data that must either be processed immediately or stored for subsequent processing.

To store Big Data, huge datacentres are required. These are costly, need a spacious climate-controlled environment and require regular maintenance. The alternative is Cloud Computing, the on-demand delivery of compute power, applications and other IT resources. Cloud providers, such as Amazon with its web services platform (AWS), provide a simple way to access their servers, databases, processing platforms and storage devices.

The benefits of Cloud Computing are manifold and include cost efficiency (no need to invest in and maintain your own hardware), scalability, resource availability (for all your users irrespective of their geographic locations), lower latency (as you can specify servers closest to the relevant users/customers) and peace of mind in terms of back-ups.

However, the cloud has a few disadvantages too, the biggest of which is that no provider can guarantee 100% availability. Data security and privacy are also causes for concern, both on the cloud and for data in transit. Latency can be an issue for Big Data, and computationally intensive tasks on the cloud increase the cost. The last two concerns, in particular, can be mitigated through edge processing, i.e. performing much of the computationally intensive work near the data source.

Benefits here include real-time or near-real-time data processing and reduced network traffic, as you need only transfer the product of the edge processing, thus resulting in lower Cloud Computing costs. Security and privacy can be improved by keeping the sensitive data (a.k.a. Hot Data) within the edge processing environment and only sending less sensitive (Cold) data to the cloud.

FPGAs have the edge

Several technologies can be used for edge processing applications. These include traditional CPUs (scoring high in terms of flexibility), application-specific processors (e.g. GPUs) and ASICs/SoCs (scoring high on performance). However, it is FPGAs that are slotting into most edge processing applications. Why is this so? Well, let’s consider the requirements.

Edge processing needs to be high-performance and in this respect an FPGA can perform several different tasks in parallel. For example, consider executing many non-dependent computations (such as A=B+C, D=E+F and G=H+I). On a CPU, these would have to be performed sequentially, with each sum requiring a few clock cycles. In an FPGA, an array of adders could do the computations in parallel, possibly requiring only a single clock cycle.

Power efficiency is essential too, as the end product may well be battery-powered. In an FPGA, only the circuitry needed for the function (the design) need be present, whereas the general-purpose architecture of a CPU or GPU may not be fully utilised. An FPGA also brings the benefit of reprogrammability.

Higher security is afforded too, because the edge processing functions are hardwired into the FPGA. It is also possible to encrypt the transaction bus, and even to go as far as designing your own processor.


A prime example of where edge processing is extremely useful, and in which FPGAs can play a significant role, is an embedded vision system in which data derived from images needs to be transferred. For example, in the automotive sector Advanced Driver Assistance Systems (ADAS) are under development to make driving safer, easier and more comfortable, and ADAS is regarded as a significant step towards fully autonomous cars.

The data processed by an ADAS can be used to notify the driver of problems or to automatically trigger responses such as deceleration, braking and/or the execution of a manoeuvre. The data can also be useful outside the vehicle.

Let’s discuss the embedded vision system first though by considering an ADAS demo unit that was built for this year’s Embedded Vision show in Santa Clara, California. The demo comprised a TySOM-2-7Z100 prototyping board (see figure 1) which includes a Xilinx Zynq XC7Z100 device and a TySOM-FMC-ADAS daughter board to interface with four 960 x 540 pixel cameras.

Figure 1: TySOM-2-7Z100 prototyping board. Mixed technology (i.e. CPU and FPGA) boards like this are proving very popular for edge processing applications and for connecting with the cloud.

The processing was shared between a dual-core ARM Cortex-A9 processor and FPGA logic (both of which reside within the Zynq device). It began with frame-grabbing images from the cameras and applying an edge detection algorithm (‘edge’ here in the sense of physical edges, such as objects, lane markings etc.). This is a computationally intensive task because of the pixel-level computations involved (more than 2 million pixels across the four cameras). Performing this task on the ARM CPU alone would have achieved a frame rate of only 3 fps, whereas in the FPGA 27.5 fps was achieved.
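The article does not name the edge detection algorithm used in the demo, so as an assumption here is a minimal C sketch of a common choice, the Sobel operator, to show why the work is so heavy per pixel: every output pixel requires reading a 3x3 neighbourhood and applying two convolution kernels, which an FPGA pipeline can stream through dedicated hardware.

```c
#include <stdlib.h>

/* Per-pixel Sobel gradient magnitude on an 8-bit grayscale image.
   Illustrative only: an FPGA implementation would stream pixels
   through hardware convolution units rather than loop like this. */
void sobel(const unsigned char *in, unsigned char *out, int w, int h)
{
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sx = 0, sy = 0;
            /* Read the 3x3 neighbourhood around (x, y). */
            for (int j = -1; j <= 1; j++) {
                for (int i = -1; i <= 1; i++) {
                    int p = in[(y + j) * w + (x + i)];
                    sx += gx[j + 1][i + 1] * p;
                    sy += gy[j + 1][i + 1] * p;
                }
            }
            /* |Gx| + |Gy| is a cheap approximation of the magnitude. */
            int mag = abs(sx) + abs(sy);
            out[y * w + x] = mag > 255 ? 255 : (unsigned char)mag;
        }
    }
}
```

At four cameras of 960 x 540 pixels this inner work runs over two million times per frame, which is why the CPU-only version managed so few frames per second.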

The ARM CPU was mainly used for superimposing detected edges over the initial camera images, colour-space conversions, the formation of a composite image (see main image) and outputting it to an HD buffer. The FPGA and CPU could also work together to recognise and distinguish between obstacles and pedestrians close to the car and to provide lane departure warnings.
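The superimposing step on the ARM side can be sketched as follows. The pixel format (interleaved RGB), the threshold and the highlight colour are assumptions for illustration, not details of the actual demo:

```c
/* Superimpose a detected-edge map on an RGB image by painting
   edge pixels a highlight colour (green, as an assumption).
   rgb:   w*h interleaved R,G,B bytes (modified in place)
   edges: w*h gradient-magnitude bytes from the edge detector */
void overlay_edges(unsigned char *rgb, const unsigned char *edges,
                   int w, int h, unsigned char threshold)
{
    for (int i = 0; i < w * h; i++) {
        if (edges[i] >= threshold) {
            rgb[3 * i + 0] = 0;    /* R */
            rgb[3 * i + 1] = 255;  /* G */
            rgb[3 * i + 2] = 0;    /* B */
        }
    }
}
```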

What goes up

Sending the processed data to the cloud for further processing and/or storage is then a relatively simple task. Firstly, an AWS account would be created along with an AWS IoT environment. Next, we would configure a Thing (seeing as it is the IoT) and download the public and private keys needed for secure communications with the cloud.

An MQTT-based Software Development Kit (SDK), such as the AWS IoT Device SDK for Embedded C, would be ideal, because MQTT requires minimal bandwidth and connections can be secured with TLS. An application would then be prepared to run on the ARM CPU to publish the data to the cloud.

However, imagine a scenario under which we have data from thousands of vehicles going to the cloud. Analysis of the data could be performed on the cloud and made available for traffic systems or highway maintenance organisations, for example. There may also be instances where data from the cloud feeds into an edge-processing application, in which case applications are also available from AWS.

In summary, there are both advantages and disadvantages associated with cloud computing. However, many of the disadvantages can be overcome through edge processing, an activity for which FPGAs are particularly suited.

Author profile: Farhad Fallah is an Application Engineer with EDA company Aldec