Tomorrow’s in-vehicle infotainment

The in-vehicle infotainment experience is set to change drastically over the next few years.

Infotainment has always been a crucial part of the driving experience. But with the focus on improving vehicle safety, OEMs are having to explore alternative technologies - from voice to gesture recognition - to deliver a better and safer in-car experience.

As smartphone and smart home experiences have improved, expectations of the automotive experience have grown too. Voice recognition systems, such as Google Assistant and Amazon’s Alexa, have become central to a range of applications, from smart speakers to lighting and heating. The convenience of smartphone applications is also helping to improve the user experience, with constant access to social media and email.

Acknowledging the demand for a better in-car experience, Amazon launched the Echo Auto - an aftermarket product designed to bring Alexa into the car.

In a joint venture, IoT company Klika Tech and software company aicas have leveraged this technology to develop “a new standard for connecting drivers through voice recognition and real-time information”.

The technology uses aicas’ JamaicaCAR - a downloadable, connected application framework for car head units and in-vehicle infotainment (I-V-I) systems based on real-time Java.

“Java removes the errors that C and C++ bring,” according to Dr. James Hunt, co-founder and CEO of aicas, “because the user doesn’t need to worry about memory management.”

He added, “Resource management is featured in the system so that each app can be limited to just what it needs. As a result, if there is an error, it doesn’t affect the whole system.”
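As an illustration of that containment idea, here is a minimal Java sketch - not aicas’ actual mechanism, and with purely illustrative names - in which each app runs on its own thread with an uncaught-exception handler, so a failure in one app does not bring down the rest of the head unit:

```java
// Hypothetical sketch: isolating each in-car app on its own thread so an
// uncaught error is contained rather than crashing the whole head unit.
// Class and app names are illustrative, not aicas' actual API.
public class AppSandbox {

    public static Thread launch(String appName, Runnable app) {
        Thread t = new Thread(app, appName);
        t.setUncaughtExceptionHandler((thread, error) -> {
            // Contain the failure: log it and leave the other apps running.
            System.err.println("App '" + thread.getName() + "' failed: " + error);
        });
        t.start();
        return t;
    }

    public static void main(String[] args) {
        launch("navigation", () -> System.out.println("navigation running"));
        launch("weather", () -> { throw new IllegalStateException("sensor offline"); });
        // The weather app's error is caught by its own handler;
        // the navigation app is unaffected.
    }
}
```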

The framework has a growing number of in-car apps and service offerings, with the car maker or Tier-1 supplier able to add new apps without having to discard the legacy platform.

It connects the car to the cloud and can be managed both inside and outside of the vehicle, opening up possibilities such as pre-programmed navigation. The platform supports on- and off-board communication through a set of library interfaces covering GPS, internet access and inter-application messaging.
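The article does not detail JamaicaCAR’s actual interfaces, but a sketch of what such a library surface might look like - with every name here being an illustrative assumption - could be:

```java
// Hypothetical sketch of the kind of on-/off-board library interfaces such a
// framework might expose to apps; all names are illustrative assumptions,
// not JamaicaCAR's actual API.
public interface VehicleServices {

    /** Latest GPS fix as {latitude, longitude} in degrees. */
    double[] currentPosition();

    /** Fetch a resource over the vehicle's internet uplink. */
    String httpGet(String url) throws java.io.IOException;

    /** Inter-application messaging: deliver a payload to a named app. */
    void sendMessage(String targetApp, String payload);
}
```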

“Consumers want the fully-integrated single voice-based service connecting them to the on-demand world and to all services in their cars,” explained Gennadiy M Borisov, Klika Tech’s President and Co-CEO. “At present, they are faced with disparate technologies that prevent intuitive and reliable control of services and information inside and around the vehicle.”

The aicas framework is based on the OSGi standard - a well-known module system for Java - to make it easy for any programmer to utilise. The technology also has the benefit of real-time capability, a crucial quality added by aicas, according to Dr. Hunt. “You need instant feedback in a vehicle, otherwise it’s a distraction for the driver.”
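OSGi itself is a public standard, so the entry point of a module is well defined: each bundle supplies an activator that the framework calls as the module is installed or removed, which is how an app can be added without rebuilding the platform. A minimal example (the activator interface is standard OSGi; the app behaviour here is just a placeholder):

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Minimal OSGi bundle activator: the framework invokes start/stop as the
// bundle's lifecycle changes, letting apps be added or removed at runtime.
public class NavigationAppActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        // Register services, spin up the app's threads, etc.
        System.out.println("Navigation app bundle started");
    }

    @Override
    public void stop(BundleContext context) {
        // Release resources so the rest of the system keeps running.
        System.out.println("Navigation app bundle stopped");
    }
}
```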

To further improve this real-time experience, both companies are working with Amazon to see which parts of the system can be placed locally rather than in the cloud. “We want to do pre-filtering locally,” Dr. Hunt explained. This means that commands to which the driver expects an instant reaction, e.g. ‘find me the nearest petrol station’, are processed locally and answered in real time, whereas less urgent questions, such as ‘is it raining in Kent?’, are dealt with in the cloud.
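A simple way to picture that pre-filtering split is a router that checks an utterance against a small set of latency-sensitive intents and only forwards the rest to the cloud. The sketch below is an assumption about the approach - the keyword set and method names are invented for illustration, not Klika Tech or aicas code:

```java
import java.util.Set;

// Illustrative sketch of the pre-filtering idea: time-critical commands are
// answered from on-board data, everything else is forwarded to the cloud.
public class CommandRouter {

    private static final Set<String> LOCAL_INTENTS =
            Set.of("petrol station", "parking", "charging point");

    public String handle(String utterance) {
        String text = utterance.toLowerCase();
        boolean local = LOCAL_INTENTS.stream().anyMatch(text::contains);
        return local ? answerLocally(text) : forwardToCloud(text);
    }

    private String answerLocally(String text) {
        // Would query on-board map data for an instant response.
        return "nearest match from on-board map data";
    }

    private String forwardToCloud(String text) {
        // Less urgent queries ('is it raining in Kent?') can tolerate latency.
        return "cloud response";
    }
}
```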

However, there is another challenge to be faced before the technology can advance to such a stage.

The car is a noisy environment, and with the voice assistants currently available, systems can have difficulty “hearing” commands. Qualcomm has been working on noise cancellation technology, taking the know-how it has developed for phone handsets and applying it to the automotive sector.

“It will certainly be more complicated,” Thomas Dannemann, director of product marketing in Europe at Qualcomm, admitted, “but the technology can be adapted.” He points to increased processing power and additional DSP performance as the way forward.

Although Dannemann, like Dr. Hunt, sees voice dominating I-V-I, he believes there is potential in gesture control too. Accompanying this, he foresees a trend towards more displays being deployed, at much higher resolutions. “I can imagine a system in which drivers point and say, ‘what building is that?’ and the system recognises where/what they’re pointing to and provides them with the information, e.g. that is the town hall.”

To achieve this, he describes a car with a series of cameras placed inside and out, which detect both the driver’s movements and the vehicle’s surroundings; this information is then combined with map data and location services. “Augmented arrows,” he suggested, “on top of a live streaming of the road could then be used to indicate where, for example, a building entrance is situated.”
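One way to picture the geometry behind ‘what building is that?’: given the vehicle’s position and the pointing bearing recovered from the interior cameras, the system can pick the mapped point of interest closest to the driver’s line of sight. The sketch below uses a simplified flat local frame, and all names are illustrative assumptions rather than any vendor’s implementation:

```java
import java.util.List;

// Illustrative sketch: combine the vehicle's position, the driver's pointing
// bearing (from interior cameras) and map data to pick the point of interest
// whose direction best matches the pointing gesture. Flat-earth geometry and
// all names are simplifying assumptions.
public class PointAndAsk {

    record Poi(String name, double x, double y) {}   // metres, local frame

    static Poi resolve(double carX, double carY, double bearingRad, List<Poi> map) {
        Poi best = null;
        double bestError = Double.MAX_VALUE;
        for (Poi poi : map) {
            double angleTo = Math.atan2(poi.y() - carY, poi.x() - carX);
            // Normalise the angular difference to [-pi, pi] before comparing.
            double error = Math.abs(Math.atan2(Math.sin(angleTo - bearingRad),
                                               Math.cos(angleTo - bearingRad)));
            if (error < bestError) { bestError = error; best = poi; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Poi> map = List.of(new Poi("town hall", 120, 40),
                                new Poi("library", -80, 60));
        System.out.println(resolve(0, 0, Math.atan2(40, 120), map).name()); // town hall
    }
}
```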

To allow automakers to deliver these types of customisable options, Qualcomm has developed fully scalable 3rd Generation Snapdragon Automotive Cockpit Platforms, designed with a modular architecture.

The platform supports the higher levels of computing and intelligence needed for the advanced capabilities featured in next-generation vehicles, including AI experiences for in-car virtual assistance, natural interactions between the vehicle and driver, and contextual safety use cases. It builds on the company’s Snapdragon 820A technology and aims to deliver a concurrent implementation of next-generation, high-resolution digital instrument clusters.

“Right now, vehicles have a mixture of analogue and digital clusters. The next generation will be only digital, with information being shared between the multiple displays,” said Dannemann. “With our platform, we can provide auto manufacturers with the tools to design displays which not only ensure relevant information, such as gear shift, is always on show, but that also allow the style and arrangement of the clusters to be changed dependent on user preference.”
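Conceptually, such a layout engine might pin the safety-relevant widgets and let the rest follow driver preference. The sketch below is a guess at the idea with invented names, not Qualcomm’s tooling:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a configurable digital cluster: safety-relevant
// widgets are pinned and always rendered first, while the remainder follow
// the driver's preferred order. All names are assumptions.
public class ClusterLayout {

    private static final List<String> PINNED = List.of("gear", "speed");

    static List<String> arrange(List<String> userPreference) {
        List<String> layout = new ArrayList<>(PINNED);
        for (String widget : userPreference) {
            if (!layout.contains(widget)) layout.add(widget);  // pinned widgets stay first
        }
        return layout;
    }

    public static void main(String[] args) {
        System.out.println(arrange(List.of("media", "navigation", "speed")));
        // -> [gear, speed, media, navigation]
    }
}
```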

As AI advances within the car environment, Dannemann also envisions a system that learns and starts to understand driver behaviours.

“The displays might even configure themselves based on your habits. Perhaps the user listens to the news every morning on their way to work - the car could start to automate that process.” Taking that a step further, Dannemann said that an intelligent system in which the car notifies the driver that it requires a service, and proposes a date based on a connected personal calendar, is also a possibility for future I-V-I systems.

Looking further into the future, Dr. Hunt and Dannemann agreed that with the introduction of ADAS and, eventually, autonomous vehicles, the possibilities of infotainment will only grow. If the human driver becomes a passenger to an artificial driver, the restrictions they once faced will become irrelevant, said Dannemann. This will open up possibilities such as in-car video conferences, and even allow multiple forms of interaction, such as video and audio streaming, to occur simultaneously.

“Today the I-V-I system has developed to be part of the IoT ecosystem, but now it’s looking like it will become its own entity with an ever-growing role,” Dr. Hunt added. “As such, it will be a vital selling point.”

Dannemann agreed, adding that the future holds much more collaboration between car makers and internet companies such as Google.