Processor enables optical deep learning


A team of researchers at MIT has developed a new approach to deep learning that uses light instead of electricity in computer systems based on artificial neural networks. The team claims the technique could vastly improve the speed and efficiency of certain deep learning computations.

“This optical chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Professor Marin Soljačić said. “We’ve demonstrated the crucial building blocks but not yet the full system.”

The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The device is called a programmable nanophotonic processor.
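The idea that interfering beams "convey the result" of a matrix multiplication can be sketched numerically. In the sketch below (an illustration, not the team's actual chip model), the light in each waveguide is a complex field amplitude, and an idealised lossless Mach-Zehnder interferometer acts on those amplitudes as a unitary matrix, so the multiplication happens as the beams interfere and no optical power is dissipated:

```python
import numpy as np

def mzi_unitary(theta, phi):
    """2x2 transfer matrix of an idealised Mach-Zehnder interferometer:
    two 50:50 beamsplitters around an internal phase shift theta, plus
    an external phase shift phi. (Hypothetical parameterisation.)"""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])       # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])         # external phase shifter
    return outer @ bs @ inner @ bs

U = mzi_unitary(theta=0.7, phi=1.3)
x = np.array([0.6 + 0.2j, 0.3 - 0.5j])  # input field amplitudes
y = U @ x                               # interference = matrix multiply

# Lossless device: output optical power equals input power (unitarity),
# which is the sense in which the multiply costs "zero energy" in principle.
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```

Detectors at the outputs would then read off the resulting intensities `|y|**2`.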

“The advantage of using light to do matrix multiplication plays a big part in the speed up and power savings, because dense matrix multiplications are the most power hungry and time consuming part in AI algorithms,” Professor Soljačić added.

The programmable nanophotonic processor uses an array of waveguides whose interconnections can be modified as needed, programming the set of beams for a specific computation.
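The "programming" here is tuning the couplings between adjacent waveguides rather than changing the hardware. A toy model of that idea, with each tunable crossing simplified to a lossless 2x2 rotation (hypothetical parameters, not the device's actual control scheme):

```python
import numpy as np

def crossing(theta):
    """Idealised lossless 2x2 coupler between two adjacent waveguides —
    a simplified stand-in for one tunable interferometer in the mesh."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def mesh(settings, n=4):
    """Compose tunable crossings into one n-waveguide transform.
    `settings` is the 'program': (index of waveguide pair, coupling angle)."""
    U = np.eye(n)
    for k, theta in settings:
        block = np.eye(n)
        block[k:k + 2, k:k + 2] = crossing(theta)  # act on waveguides k, k+1
        U = block @ U
    return U

# Same hardware, two different programs -> two different matrix multiplies.
U_a = mesh([(0, 0.3), (1, 1.1), (2, 0.7)])
U_b = mesh([(0, 1.4), (1, 0.2), (2, 2.0)])
assert np.allclose(U_a.T @ U_a, np.eye(4))  # each program is still lossless
assert not np.allclose(U_a, U_b)            # retuning changes the computation
```

With enough crossings, such meshes can in principle realise any lossless transform over the waveguides, which is what makes one fabricated chip reusable for many computations.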

To demonstrate the concept, the team configured the processor to implement a neural network that recognises four basic vowel sounds. It achieved 77% accuracy, compared with 90% for conventional systems, and the researchers believe there are no substantial obstacles to scaling up the system for greater accuracy.
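A neural network's trained weight matrices are generally not lossless, whereas pure interference gives lossless (unitary) transforms. A standard way to bridge the two, sketched here as an assumption about how such a chip could be driven rather than as the team's confirmed method, is the singular value decomposition: any weight matrix factors into two unitary matrices (implementable as interferometer meshes) around a diagonal of singular values (implementable as per-channel attenuation or gain):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # stand-in for one trained layer's weights

# SVD: W = U @ diag(s) @ Vh, with U and Vh unitary.
U, s, Vh = np.linalg.svd(W)

x = rng.normal(size=4)              # input activations
y_direct = W @ x                    # conventional electronic multiply
y_staged = U @ (s * (Vh @ x))       # mesh -> attenuators -> mesh

# The staged (photonic-style) pipeline computes the same product.
assert np.allclose(y_direct, y_staged)
```

Nonlinear activation functions between layers would still need a separate mechanism, which is one reason the quoted researchers describe the demonstration as crucial building blocks rather than a full system.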

According to the researchers, the nanophotonic processor could have other applications as well, including signal processing for data transmission. “This approach could do processing directly in the analogue domain,” said Professor Dirk Englund.

The system could also benefit data centres, security systems, self-driving cars and drones.