Human-like reasoning in driverless car navigation

With the aim of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, a car must first map and analyse all the new roads, which is very time-consuming. These systems also rely on complex maps - usually generated by 3D scans - which are computationally intensive to generate and process on the fly.

MIT researchers describe an autonomous control system that "learns" the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. The trained system can then control a driverless car along a planned route in a new area by imitating the human driver.
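For illustration only - the article doesn't publish the model - an end-to-end imitation-learning setup of this kind could be sketched in Python/PyTorch as below. A camera branch and a coarse-map branch are fused to regress the steering angle the human driver produced at the same moment; every layer size, input resolution, and variable name here is an assumption.

import torch
import torch.nn as nn

class SteeringImitator(nn.Module):
    def __init__(self):
        super().__init__()
        # Camera branch: encodes an RGB frame from the front-facing camera.
        self.camera_enc = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Map branch: encodes a coarse, single-channel patch of the
        # GPS-style map centred on the car's estimated position.
        self.map_enc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head: fuses both embeddings and regresses a steering angle.
        self.head = nn.Sequential(
            nn.Linear(64 + 16, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, camera, coarse_map):
        z = torch.cat([self.camera_enc(camera), self.map_enc(coarse_map)], dim=1)
        return self.head(z)

# One training step: minimise the error against the recorded human steering.
model = SteeringImitator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
camera = torch.randn(8, 3, 96, 192)     # batch of camera frames (dummy data)
coarse_map = torch.randn(8, 1, 64, 64)  # matching coarse map patches
human_steering = torch.randn(8, 1)      # recorded human steering angles
loss = nn.functional.mse_loss(model(camera, coarse_map), human_steering)
opt.zero_grad(); loss.backward(); opt.step()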

Like a human driver, the system also detects any mismatches between its map and the features of the road it actually sees. This helps it determine whether its position, sensors, or map are incorrect, so it can correct the car's course.

To train the system initially, a human operator controlled a driverless Toyota Prius - equipped with several cameras and a basic GPS navigation system - collecting data from local suburban streets with various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different, forested area designated for autonomous vehicle tests.

The system also uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create massive, complex maps; storing just the city of San Francisco takes roughly 4,000 gigabytes of data. For every new destination, the car must create new maps, which means processing enormous amounts of data. The map used by the researchers' system, however, captures the entire world in just 40 gigabytes of data.
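For a sense of scale, the two figures quoted above imply a hundredfold difference. The quick Python check below uses only the article's numbers; the ratio calculation itself is ours.

# Back-of-the-envelope comparison of the two storage figures quoted above.
lidar_map_one_city_gb = 4000  # dense LIDAR map of San Francisco alone
sparse_map_world_gb = 40      # the researchers' coarse map of the whole world

ratio = lidar_map_one_city_gb / sparse_map_world_gb
print(f"One dense city map is {ratio:.0f}x larger than the sparse world map.")
# -> One dense city map is 100x larger than the sparse world map.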

During autonomous driving, the system also continuously matches its visual data against the map data and notes any mismatches, which helps the vehicle better determine where it is located on the road.
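Hypothetically, such a check could be reduced to comparing a compact road descriptor derived from the cameras against the one the map predicts for the car's estimated position. The Python sketch below illustrates the idea; the descriptor fields, weights, threshold, and "re-localise" response are all illustrative assumptions rather than the researchers' method.

from dataclasses import dataclass

@dataclass
class RoadDescriptor:
    lanes: int         # lanes visible (camera) or expected (map) here
    branches: int      # side roads branching off at this point
    curvature: float   # signed curvature of the road ahead

def mismatch(seen: RoadDescriptor, expected: RoadDescriptor) -> float:
    """Weighted disagreement between what the cameras see and what the
    map predicts for the currently estimated position."""
    return (abs(seen.lanes - expected.lanes)
            + abs(seen.branches - expected.branches)
            + 2.0 * abs(seen.curvature - expected.curvature))

def check_pose(seen: RoadDescriptor, expected: RoadDescriptor,
               threshold: float = 1.5) -> str:
    # A large disagreement suggests the pose estimate, a sensor, or the
    # map itself is wrong, so the estimate should be re-anchored.
    if mismatch(seen, expected) > threshold:
        return "re-localise"  # e.g. search nearby map poses for a better fit
    return "pose ok"

# Camera sees an unexpected side road and more curvature than the map expects.
print(check_pose(RoadDescriptor(2, 1, 0.0), RoadDescriptor(2, 0, 0.3)))
# -> re-localise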

Author
Bethan Grylls


What you think about this article:

Can a driverless car tell the difference between a crazy person who is about to jump in front of the car and a regular person who is just watching you and waiting for you to pass by? I guess right now all driverless cars should assume that all people are crazy ;))

Posted by: Dinar Dayanov, 28/05/2019
