Computational photography could solve a problem that bedevils self-driving cars


MIT has developed a system that is said to be capable of producing images of objects shrouded in fog so dense that human vision can penetrate only 36 centimeters.

The system is also able to gauge an object’s distance at a range of 57 centimeters through that same fog, says MIT, which believes the technology could be a crucial step toward self-driving cars.

“I decided to take on the challenge of developing a system that can see through actual fog,” says Guy Satat, a graduate student in the MIT Media Lab, who led the research. “We’re dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios.”

The system uses a time-of-flight camera, which fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return. On a clear day, the light’s return time faithfully indicates the distances of the objects that reflected it. But fog causes light to scatter, or bounce around in random ways. In foggy weather, most of the light that reaches the camera’s sensor will have been reflected by airborne water droplets, not by the types of objects that autonomous vehicles need to avoid. And even the light that does reflect from potential obstacles will arrive at different times, having been deflected by water droplets on both the way out and the way back.
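As a rough illustration of the time-of-flight principle (not MIT’s implementation), a reflection’s round-trip time converts to a one-way distance via the speed of light, halved because the pulse travels out and back:

```python
# Illustrative sketch of the basic time-of-flight calculation (not MIT's code).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_return_time(return_time_s: float) -> float:
    """Distance to a reflector, given the round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * return_time_s / 2.0

# Example: a reflection arriving after ~3.8 nanoseconds corresponds to ~57 cm.
print(f"{distance_from_return_time(3.8e-9):.3f} m")
```

In clear air that single calculation is enough; the rest of the article explains how the system copes when fog scrambles the return times.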

The patterns produced by fog-reflected light vary according to the fog’s density. On average, light penetrates less deeply into a thick fog than it does into a light fog. However, MIT says it was able to show that, no matter how thick the fog, the arrival times of the reflected light adhere to a statistical pattern known as a gamma distribution.

Gamma distributions are somewhat more complex than Gaussian distributions, the common distributions that yield the familiar bell curve. They can be asymmetrical, and they can take on a wider variety of shapes. But like Gaussian distributions, they’re completely described by two parameters. MIT claims its system estimates the values of those parameters on the fly and uses the resulting distribution to filter fog reflections out of the light signal that reaches the time-of-flight camera’s sensor.
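The article doesn’t spell out the estimator, but as one hypothetical way two gamma parameters could be recovered on the fly from measured photon arrival times, a simple method-of-moments fit looks like this (a sketch under that assumption, not the authors’ code):

```python
import numpy as np

def fit_gamma_moments(arrival_times: np.ndarray) -> tuple[float, float]:
    """Estimate a gamma distribution's shape and scale by the method of moments.

    For a gamma distribution, mean = shape * scale and variance = shape * scale**2,
    so both parameters follow directly from the sample mean and variance.
    """
    mean = arrival_times.mean()
    var = arrival_times.var()
    scale = var / mean
    shape = mean / scale
    return float(shape), float(scale)

# Example: times drawn from a known gamma distribution are recovered approximately.
rng = np.random.default_rng(0)
times = rng.gamma(shape=2.0, scale=1.5e-9, size=10_000)  # hypothetical arrival times
print(fit_gamma_moments(times))  # roughly (2.0, 1.5e-9)
```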

The system then calculates a different gamma distribution for each of the 1,024 pixels in the sensor, says MIT. That’s why it is able to cope with the variations in fog density that foiled earlier systems: it can handle circumstances in which each pixel sees a different type of fog.

The camera is designed to count the number of light particles, or photons, that reach it every 56 picoseconds, or trillionths of a second. MIT explains that its system uses those raw counts to produce a histogram — essentially a bar graph, with the heights of the bars indicating the photon counts for each interval. Then it finds the gamma distribution that best fits the shape of the bar graph and simply subtracts the associated photon counts from the measured totals. What remain are slight spikes at the distances that correlate with physical obstacles.
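Putting those steps together, here is a rough per-pixel sketch of the procedure the article describes. It is not the researchers’ code: it assumes one pixel’s photon arrival times are available as an array, uses SciPy’s generic gamma fit in place of whatever estimator the paper actually uses, and picks an arbitrary number of bins.

```python
import numpy as np
from scipy import stats

BIN_WIDTH_S = 56e-12          # 56-picosecond counting interval (from the article)
SPEED_OF_LIGHT = 299_792_458.0

def defog_pixel(arrival_times_s: np.ndarray, n_bins: int = 200):
    """Sketch of the described procedure: histogram photon arrivals, fit a gamma
    model for the fog backscatter, subtract its expected counts, and read the
    residual spike as an obstacle distance."""
    # 1. Histogram of photon counts per 56-ps interval.
    edges = np.arange(n_bins + 1) * BIN_WIDTH_S
    counts, _ = np.histogram(arrival_times_s, bins=edges)

    # 2. Fit a gamma distribution to the (fog-dominated) arrival times.
    shape, loc, scale = stats.gamma.fit(arrival_times_s, floc=0.0)

    # 3. Expected fog counts in each bin under the fitted model.
    cdf = stats.gamma.cdf(edges, shape, loc=loc, scale=scale)
    expected_fog = len(arrival_times_s) * np.diff(cdf)

    # 4. Subtract the fog model; the residual should spike at the obstacle.
    residual = counts - expected_fog
    peak_bin = int(np.argmax(residual))
    distance_m = (peak_bin + 0.5) * BIN_WIDTH_S * SPEED_OF_LIGHT / 2.0
    return residual, distance_m
```

Run independently for each of the sensor’s 1,024 pixels, a routine along these lines captures the gist of how every pixel can carry its own fog model.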

To assess the system’s performance, the researchers used optical depth, which describes the amount of light that penetrates the fog. Optical depth is independent of distance, so the performance of the system on fog that has a particular optical depth at a range of 1 meter should be a good predictor of its performance on fog that has the same optical depth at a range of 30 meters. In fact, MIT says the system may even fare better at longer distances, as the differences between photons’ arrival times will be greater, which could make for more accurate histograms.
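For background, and as a general optics relation rather than anything specific to the MIT work, optical depth sets the fraction of light that crosses a scattering medium without being deflected, which is why it can be compared across ranges:

```python
import math

def direct_transmission(optical_depth: float) -> float:
    """Fraction of light that traverses a scattering medium unscattered,
    per the Beer-Lambert law: T = exp(-optical_depth)."""
    return math.exp(-optical_depth)

# The same optical depth attenuates the direct signal by the same factor
# whether the fog spans 1 meter or 30 meters; only the density per meter differs.
print(direct_transmission(3.0))  # ~0.05, i.e. about 5% of photons arrive unscattered
```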