Filling in the Blanks


Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do—take a few quick glimpses around and infer its whole environment.

This is a skill needed to develop search-and-rescue robots that could one day make dangerous missions more effective.

Most AI agents, computer systems that could endow robots or other machines with intelligence, are trained for very specific tasks, such as recognising an object or estimating its volume, in an environment they have experienced before, like a factory. But the agent developed by the researchers is general purpose, gathering visual information that can then be used for a wide range of tasks.

The scientists used deep learning to train their agent on thousands of 360-degree images of different environments.

Now, when presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses—like a tourist standing in the middle of a cathedral taking a few snapshots in different directions—that together add up to less than 20% of the full scene.

The system does not simply take pictures in random directions; after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene. It is much like walking into a grocery store you have never visited before: if you saw apples, you would expect to find oranges nearby, but to locate the milk, you might glance the other way.

Based on these glimpses, the agent infers what it would have seen had it looked in all the other directions, reconstructing a full 360-degree image of its surroundings.
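
The loop described above, choose a glimpse, then complete the panorama, can be sketched in a few lines of code. Everything specific here is an illustrative assumption rather than the researchers' implementation: the grid of view directions, the glimpse budget, the mean-fill stand-in for the completion model, and the spread-out scoring rule standing in for the learned policy.

```python
import numpy as np

# Illustrative sketch of the look-around loop (assumed names and shapes, not
# the authors' code): the 360-degree scene is a grid of view directions, the
# agent greedily picks the glimpse it expects to add the most new information,
# then a completion model fills in the views it never looked at.

rng = np.random.default_rng(0)

VIEWS = (4, 8)                  # 4 elevations x 8 azimuths = 32 candidate directions
BUDGET = 6                      # under 20% of the full scene

true_scene = rng.random(VIEWS)            # stand-in for the real panorama
observed = np.zeros(VIEWS, dtype=bool)    # which directions have been glimpsed

def reconstruct(scene, mask):
    """Stand-in for the learned completion model: copy observed views and
    fill unseen views with the mean of what has been seen so far."""
    fill = scene[mask].mean() if mask.any() else 0.5
    estimate = np.full(scene.shape, fill)
    estimate[mask] = scene[mask]
    return estimate

def predicted_gain(mask, idx):
    """Stand-in for the learned policy: score a candidate direction by its
    distance from the nearest already-observed view, so glimpses spread out."""
    if not mask.any():
        return 1.0
    seen = np.argwhere(mask)
    return np.abs(seen - np.array(idx)).sum(axis=1).min()

for _ in range(BUDGET):
    candidates = [tuple(i) for i in np.argwhere(~observed)]
    best = max(candidates, key=lambda idx: predicted_gain(observed, idx))
    observed[best] = True                                  # take the snapshot

estimate = reconstruct(true_scene, observed)
error = np.abs(estimate - true_scene).mean()
print(f"glimpsed {observed.sum()} of {observed.size} views, mean error {error:.3f}")
```

In the real system, both the completion model and the glimpse-scoring policy are deep networks trained on the collection of 360-degree images; the sketch is only meant to show the choose-a-glimpse, then complete-the-panorama structure.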

One of the main challenges the scientists set for themselves was to design an agent that can work under tight time constraints. This would be critical in a search-and-rescue application. For example, in a burning building a robot would be called upon to quickly locate people, flames and hazardous materials and relay that information to firefighters.

For now, the new agent operates like a person standing in one spot, able to point a camera in any direction but unable to move to a new position. Equivalently, the agent could gaze at an object it is holding and decide how to turn it to inspect another side. Next, the researchers are developing the system to work on a fully mobile robot.

Using supercomputers at UT Austin’s Texas Advanced Computing Center and Department of Computer Science, the researchers trained their agent in about a day with an artificial intelligence approach called reinforcement learning.
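
One plausible way to frame that reinforcement learning problem, consistent with the description above but with details assumed for illustration, is to reward each glimpse by how much it reduces reconstruction error. The tiny softmax policy and REINFORCE-style update below are a minimal sketch, not the paper's training setup; `recon_error` plays the role of the completion model from the previous sketch.

```python
import numpy as np

# Minimal sketch of a reinforcement-learning setup for the glimpse policy
# (assumed details, not the authors' training code): the reward for each
# glimpse is the drop in reconstruction error it produces, and a
# REINFORCE-style update pushes the policy toward rewarding directions.

rng = np.random.default_rng(1)
N_VIEWS, BUDGET, EPISODES, LR = 32, 6, 500, 0.1

logits = np.zeros(N_VIEWS)        # policy parameters: one score per direction

def recon_error(scene, mask):
    """Error of the mean-fill stand-in completion model."""
    fill = scene[mask].mean() if mask.any() else 0.5
    return np.abs(np.where(mask, scene, fill) - scene).mean()

for _ in range(EPISODES):
    scene = rng.random(N_VIEWS)               # a fresh random "panorama"
    mask = np.zeros(N_VIEWS, dtype=bool)
    grads, rewards = [], []
    for _ in range(BUDGET):
        probs = np.exp(logits - logits.max())
        probs[mask] = 0.0                     # never re-glimpse a view
        probs /= probs.sum()
        a = rng.choice(N_VIEWS, p=probs)
        before = recon_error(scene, mask)
        mask[a] = True
        rewards.append(before - recon_error(scene, mask))   # error reduction
        onehot = np.zeros(N_VIEWS)
        onehot[a] = 1.0
        grads.append(onehot - probs)          # gradient of the log-probability
    episode_return = sum(rewards)
    for g in grads:
        logits += LR * episode_return * g     # REINFORCE update
```

A real agent would condition its choices on what it has already seen rather than on fixed per-direction scores, but the reward structure, improvement in the reconstructed scene, is the same idea.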

The team developed a method for speeding up the training: building a second agent, called a sidekick, to assist the primary agent.
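
The article does not spell out how the sidekick helps, so the function below is only a hedged sketch of one natural design: a helper that, during training, can see the full scene and ranks the agent's chosen glimpse against every remaining glimpse by how much each would cut reconstruction error, giving the primary agent denser feedback than the final reconstruction alone. The name `sidekick_bonus` and the scoring rule are assumptions for illustration, and `recon_error` is the stand-in model from the sketch above.

```python
import numpy as np

def sidekick_bonus(scene, mask, action, recon_error):
    """Hedged sketch of a training-time sidekick (assumed design, not the
    authors' method). `mask` marks the views seen before this glimpse and
    `action` is the view the agent chose. With access to the full scene,
    rank that choice against every remaining view by how much each would
    reduce reconstruction error; the best possible choice scores 1.0."""
    remaining = np.flatnonzero(~mask)
    gains = {v: recon_error(scene, mask) -
                recon_error(scene, mask | (np.arange(scene.size) == v))
             for v in remaining}
    best = max(gains.values())
    return gains[action] / best if best > 0 else 0.0
```

During training, a bonus like this would be added to the error-reduction reward in the loop above, then dropped at test time, when the full scene is no longer available and the agent must rely on its own glimpses.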