Opinion piece: The future’s bright for GPUs


The application areas for embedded GPUs have expanded from smartphones and tablets, via TVs and vehicle infotainment systems, to sectors such as the IoT, wearables, VR/AR and servers.

For certain emerging markets, the application of GPU IP is inevitable and obvious – VR and AR, for example, depend on very high graphics performance and a display is integral.

What's more interesting, however, is the use of GPUs in markets like IoT and wearables. Some devices in these markets might have a display with a resolution low enough that it could be driven by a CPU, or not have a display at all.

So the use of a GPU in these applications is usually justified either by power efficiency or by the need for higher performance compute than an equivalent CPU could provide – or both.

For something like a smartwatch, a GPU can extend the one thing a wearable most needs: battery life. In products without a display, however, the use of a GPU almost always means there is a class of parallel computing problems to solve, and these problems map well onto GPU architectures.

More designers are exploring how GPUs might be applied in these non-obvious and non-traditional markets, where their abilities make sense, but where pushing pixels isn’t the main use case.

With the increasing popularity of VR/AR, GPUs are being used for more general computing work in addition to graphics processing. In VR and AR, the rendering and visual processing systems have far more to do than in typical rendering applications, and much of that extra work is general purpose computation. It is still computation on something related to graphics, but it is not traditional rasterisation. It is therefore important to have a GPU microarchitecture capable of mixing graphics and compute workloads at different stages within a single frame of rendering.
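As a rough illustration of what that mixing looks like from the software side, the sketch below lays out a single VR frame as an interleaved sequence of graphics and compute passes. It is a conceptual Python model, not Imagination's API; the pass names and the tiny scheduler are hypothetical.

```python
# Conceptual sketch (not Imagination's API): one VR frame that interleaves
# graphics passes with general purpose compute passes within the same frame.
# Pass names and the scheduler are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pass:
    name: str
    kind: str                 # "graphics" or "compute"
    run: Callable[[], None]

def build_vr_frame() -> List[Pass]:
    return [
        Pass("shadow maps",       "graphics", lambda: None),
        Pass("eye buffers",       "graphics", lambda: None),
        Pass("lens distortion",   "compute",  lambda: None),  # GPGPU, not rasterisation
        Pass("chromatic correct", "compute",  lambda: None),
        Pass("composite + UI",    "graphics", lambda: None),
    ]

def submit_frame(passes: List[Pass]) -> None:
    # A microarchitecture that mixes workloads can run these back to back
    # (or overlap them) without draining the GPU between graphics and compute.
    for p in passes:
        print(f"dispatch {p.kind:8s} pass: {p.name}")
        p.run()

if __name__ == "__main__":
    submit_frame(build_vr_frame())
```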

To address widely varying performance, power and area (PPA) targets, and to ensure customers can reuse the same GPU architecture and software investment across their entire product line, spanning both traditional and emerging applications, it is important for Imagination to offer a scalable, modular and configurable GPU.

The key scaling factor in a GPU is always performance: measures such as the number of floating point operations or the number of pixels the GPU can process per second.

Imagination’s PowerVR product portfolio therefore spans from small, feature-rich GPUs such as the PowerVR GX5300 – which occupies less than 0.5mm² in 28nm technology – to high-end, powerful designs.

GPU architectures must consume as little power as possible for a given level of performance, and the PowerVR microarchitecture is designed with this in mind. Its tile-based deferred rendering approach performs as little of the rendering work as possible: only the pixels that contribute to the final image are processed, which brings a host of power savings.

Not running shader programs for hidden pixels saves the energy needed to run the ALUs and access the register banks. Not having to sample textures or write out pixels for intermediate buffers, or for the final one, avoids a significant number of memory accesses, which are expensive not only in terms of bandwidth but also in power.
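The following Python sketch illustrates the principle behind that saving: resolve visibility within a tile first, then shade only the surviving fragments. The fragment format and the shade() stand-in are illustrative assumptions, not a description of the actual PowerVR hardware.

```python
# A minimal sketch of the idea behind tile-based deferred rendering: resolve
# visibility per tile first, then shade only the fragments that survive.
# Fragment format and shade() are placeholders for illustration only.

def deferred_shade_tile(fragments):
    """fragments: list of (x, y, depth, shader_id) covering one tile."""
    nearest = {}  # (x, y) -> (depth, shader_id) of the frontmost fragment
    for x, y, depth, shader_id in fragments:
        key = (x, y)
        if key not in nearest or depth < nearest[key][0]:
            nearest[key] = (depth, shader_id)

    shaded = 0
    for (x, y), (depth, shader_id) in nearest.items():
        shade(x, y, shader_id)   # ALU work and texture fetches happen only here
        shaded += 1
    return shaded, len(fragments)

def shade(x, y, shader_id):
    pass  # stand-in for running the pixel shader

if __name__ == "__main__":
    # Two fragments cover the same pixel; only the visible one is shaded.
    frags = [(0, 0, 0.9, "floor"), (0, 0, 0.2, "character"), (1, 0, 0.5, "floor")]
    shaded, submitted = deferred_shade_tile(frags)
    print(f"shaded {shaded} of {submitted} submitted fragments")
```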

Ray tracing GPUs

Imagination’s ray tracing technology, used in PowerVR Wizard GPUs, can deliver photorealistic and hyper-realistic graphics on a screen within the power budgets set aside for such functions. This technology is likely to be used in AR/VR headsets, gaming consoles, automotive instrument clusters and similar applications.

But it is also a good fit for some non-traditional GPU applications. For example, ray tracing is well suited to taking a pair of rendered images, one for each eye, and warping them into the correct output for a particular visual system, such as the lenses and the receiving eyes in a headset.

In cars, ray tracing can enable simpler ways to alter the images sent to the head-up display (HUD) lenses so they appear on a curved surface, like the windscreen, as they would on a flat screen. This enables cost savings by removing the need for a custom-built lens. Running this warping stage on a hardware ray tracer coupled to the GPU is a clear advantage over running the same workload on a standalone GPU.

The approach could also be used to let a single system deliver sharp human machine interfaces (HMIs) onto any screen: a simple alteration to the pre-warping algorithm allows the same HUD system to be used in cars from different manufacturers.
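To make the pre-warping idea concrete, here is a minimal Python sketch that inverse-maps each output pixel through a distortion function and samples the flat rendered image from there. The radial polynomial model and its coefficients are placeholders; a production system would use the measured optics of the lens and windscreen, and on a ray tracing GPU the lookup could be expressed as a ray query instead.

```python
# Hedged sketch of pre-warping: for each pixel on the HUD, ask where the
# combined optics say that pixel originates in the flat rendered image, and
# sample it from there. The radial distortion model is a placeholder.

def distort(u, v, k1=0.08, k2=0.02):
    """Map normalised HUD coordinates to source-image coordinates."""
    du, dv = u - 0.5, v - 0.5
    r2 = du * du + dv * dv
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return 0.5 + du * scale, 0.5 + dv * scale

def prewarp(src, width, height):
    """src(u, v) -> colour; returns the warped image as a list of rows."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            su, sv = distort((x + 0.5) / width, (y + 0.5) / height)
            inside = 0.0 <= su <= 1.0 and 0.0 <= sv <= 1.0
            row.append(src(su, sv) if inside else 0)
        out.append(row)
    return out

if __name__ == "__main__":
    checker = lambda u, v: (int(u * 8) + int(v * 8)) % 2   # stand-in rendered image
    for row in prewarp(checker, 16, 8):
        print("".join("#" if c else "." for c in row))
```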

Ray tracing can also be used for foveated rendering, where more detail is placed where the eye is looking. In an AR or VR headset, the eyes can be tracked by sensors and most of the rays focused in the central area where the user is looking. This is a more efficient use of the total ray budget and can give a higher level of detail where it is needed.
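A simple way to picture the ray budget argument is the sketch below, which assigns more ray samples to pixels near a tracked gaze point and fewer to the periphery. The radii, sample counts and linear falloff are illustrative assumptions rather than PowerVR specifics.

```python
# Illustrative foveated ray allocation, assuming a tracked gaze point:
# pixels near the gaze get more rays than pixels in the periphery.
# All numbers here are placeholder assumptions.

import math

def rays_for_pixel(x, y, gaze, inner_radius=0.1, outer_radius=0.4,
                   max_rays=16, min_rays=1):
    """x, y and gaze are in normalised [0, 1] screen coordinates."""
    dist = math.hypot(x - gaze[0], y - gaze[1])
    if dist <= inner_radius:
        return max_rays
    if dist >= outer_radius:
        return min_rays
    # Linear falloff between the fovea and the periphery.
    t = (dist - inner_radius) / (outer_radius - inner_radius)
    return round(max_rays + t * (min_rays - max_rays))

if __name__ == "__main__":
    gaze = (0.5, 0.5)
    total = sum(rays_for_pixel((x + 0.5) / 64, (y + 0.5) / 64, gaze)
                for y in range(64) for x in range(64))
    uniform = 64 * 64 * 16
    print(f"foveated budget: {total} rays vs {uniform} at uniform full quality")
```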

A further application of ray tracing is to enhance rasterised graphics through its judicious use in a hybrid (rasterising/ray tracing) solution. Its use can range from adding realistic shadowing to a scene, making dials and needles look much sharper, to producing photorealistic images of a car in the infotainment system.
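The hybrid idea can be sketched in a few lines: rasterisation produces the surface positions and base colours as usual, and a single shadow ray per pixel then answers the visibility question rasterisation handles poorly. The scene representation below, sphere occluders and one point light, is a deliberately simplified assumption, not an actual hybrid pipeline.

```python
# Simplified hybrid shading sketch: rasterised G-buffer in, one shadow ray per
# pixel to decide whether the point can see the light. Scene data is a toy
# placeholder (sphere occluders, a single point light).

def shadow_ray_blocked(point, light, occluders):
    """True if any sphere occluder blocks the segment from point to light."""
    px, py, pz = point
    lx, ly, lz = light
    dx, dy, dz = lx - px, ly - py, lz - pz
    seg_len2 = dx * dx + dy * dy + dz * dz
    for (cx, cy, cz), radius in occluders:
        # Closest point on the point-to-light segment to the occluder centre.
        t = ((cx - px) * dx + (cy - py) * dy + (cz - pz) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
        qx, qy, qz = px + t * dx, py + t * dy, pz + t * dz
        if (qx - cx) ** 2 + (qy - cy) ** 2 + (qz - cz) ** 2 <= radius ** 2:
            return True
    return False

def shade_hybrid(gbuffer, light, occluders):
    """gbuffer: list of (position, base_colour) produced by rasterisation."""
    out = []
    for position, base_colour in gbuffer:
        lit = not shadow_ray_blocked(position, light, occluders)
        out.append(tuple(c * (1.0 if lit else 0.3) for c in base_colour))
    return out

if __name__ == "__main__":
    gbuffer = [((0.0, 0.0, 0.0), (1.0, 0.2, 0.2)),   # directly under the occluder
               ((3.0, 0.0, 0.0), (0.2, 1.0, 0.2))]   # in the open
    shaded = shade_hybrid(gbuffer, light=(0.0, 5.0, 0.0),
                          occluders=[((0.0, 2.0, 0.0), 0.5)])
    print(shaded)
```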

The future is bright

Traditional GPU markets continue to drive towards providing a better visual experience for the user and this means ever higher display resolutions, higher update rates and better quality pixels – all of which require a higher investment in GPUs.

For emerging applications, the GPU is addressing a growing range of tasks and this points to a bright future for embedded GPU architectures designed with scalable PPA features. Meanwhile, approaches such as ray tracing will enable new capabilities and allow designers to create truly differentiated products.

Author profile:
Rys Sommefeldt is senior business development engineering manager with Imagination Technologies