First contactless cardiac arrest AI system for smart speakers

Researchers at the University of Washington have developed a new contactless tool to monitor people for cardiac arrest while they're asleep.

A new skill for a smart speaker -- or an app for a smartphone -- lets the device detect the gasping sound of agonal breathing and call for help. The proof-of-concept tool, which was developed using real agonal breathing instances captured from 911 calls, detected agonal breathing events 97% of the time, on average, from up to 20 feet (6 meters) away.

"A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of," said co-corresponding author Shyam Gollakota, an associate professor in the UW's Paul G. Allen School of Computer Science & Engineering. "We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there's no response, the device can automatically call 911."

Agonal breathing is present in about 50% of people who experience cardiac arrest, according to emergency call data, and patients who take agonal breaths often have a better chance of surviving.

"This kind of breathing happens when a patient experiences really low oxygen levels," said co-corresponding author Dr. Jacob Sunshine, an assistant professor of anesthesiology and pain medicine at the UW School of Medicine. "It's sort of a guttural gasping noise, and its uniqueness makes it a good audio biomarker to use to identify if someone is experiencing a cardiac arrest."

The researchers gathered real agonal breathing sounds from 911 calls: from 162 calls made between 2009 and 2017, they extracted 2.5 seconds of audio at the start of each agonal breath, for a total of 236 clips. The team captured the recordings on different smart devices -- an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4 -- and used various machine learning techniques to boost the dataset to 7,316 positive clips.

"We played these examples at different distances to simulate what it would sound like if it the patient was at different places in the bedroom," said Justin Chan, a doctoral student in the Allen School. "We also added different interfering sounds such as sounds of cats and dogs, cars honking, air conditioning, things that you might normally hear in a home."

For the negative dataset, the team used 83 hours of audio data collected during sleep studies, yielding 7,305 sound samples. These clips contained typical sounds that people make in their sleep, such as snoring or obstructive sleep apnea.

From these datasets, the team used machine learning to create a tool that could detect agonal breathing 97% of the time when the smart device was placed up to 6 meters away from a speaker generating the sounds.
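The article doesn't name the model the team trained, so the following is purely illustrative: a minimal Python sketch of a binary audio classifier of this general kind, using scikit-learn's support vector machine over coarse log-spectral features. The random stand-in data, the feature choice and all names here are assumptions, not the paper's method:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    SR = 16000  # 2.5 s at 16 kHz = 40,000 samples per clip

    def spectral_features(clip: np.ndarray, n_bins: int = 64) -> np.ndarray:
        """Reduce a waveform to a coarse log-magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(clip))
        bands = np.array_split(spectrum, n_bins)
        return np.log1p(np.array([band.mean() for band in bands]))

    # Random stand-ins for the 7,316 positive and 7,305 negative clips.
    X_pos = np.random.randn(100, int(2.5 * SR))
    X_neg = np.random.randn(100, int(2.5 * SR))

    X = np.array([spectral_features(c) for c in np.vstack([X_pos, X_neg])])
    y = np.array([1] * len(X_pos) + [0] * len(X_neg))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = SVC(probability=True).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))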

Next, the team tested the algorithm to make sure that it wouldn't accidentally classify a different type of breathing, such as snoring, as agonal breathing.

"We don't want to alert either emergency services or loved ones unnecessarily, so it's important that we reduce our false positive rate," Chan said.

For the sleep lab data, the algorithm incorrectly categorized a breathing sound as agonal breathing 0.14% of the time. The false positive rate was about 0.22% for separate audio clips, in which volunteers had recorded themselves while sleeping in their own homes. But when the team had the tool classify something as agonal breathing only when it detected two distinct events at least 10 seconds apart, the false positive rate fell to 0% for both tests.
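That two-event rule is, in effect, a debouncing filter on the classifier's output. One plausible reading -- hold the first positive detection and alert only when another positive arrives at least 10 seconds later -- can be sketched in a few lines of Python (the class and its names are hypothetical):

    class AgonalAlarm:
        """Alert only after two positive detections at least min_gap_s apart."""

        def __init__(self, min_gap_s: float = 10.0):
            self.min_gap_s = min_gap_s
            self.first_event_t = None  # timestamp of the first positive, in seconds

        def observe(self, t: float, is_agonal: bool) -> bool:
            if not is_agonal:
                return False
            if self.first_event_t is None:
                self.first_event_t = t  # remember the first event, don't alert yet
                return False
            return t - self.first_event_t >= self.min_gap_s

    alarm = AgonalAlarm()
    for t, pred in [(0.0, True), (4.0, True), (12.5, True)]:
        if alarm.observe(t, pred):
            print(f"alert at t={t} s")  # fires at 12.5 s, not at 4.0 s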

The team envisions that this algorithm could function like an app, or an Alexa skill, that runs passively on a smart speaker or smartphone while people sleep.

"This could run locally on the processors contained in the Alexa. It's running in real time, so you don't need to store anything or send anything to the cloud," Gollakota said.

The researchers plan to commercialize this technology through a UW spinout, Sound Life Sciences.