Chip Designed for Efficient, Mobile Neural Networks

Neural network illustration (geralt, Pixabay)

4 February 2016. An engineering lab at Massachusetts Institute of Technology designed a new processing chip that could make it possible to run neural networks on mobile devices. A team led by electrical engineering and computer science professor Vivienne Sze described and demonstrated the new chip on 2 February at the International Solid-State Circuits Conference in San Francisco.

Sze and colleagues in MIT’s Energy-Efficient Multimedia Systems Group seek to develop more efficient, but still high-performance, systems for multimedia applications that usually require a great deal of computing resources. In this case, Sze’s team is looking for alternatives to the graphics processing unit, or GPU, chips now used to implement neural networks that simulate human thought, including the ability to recognize objects and people or learn new skills.

While GPUs were initially designed to render graphics on computer screens, they can be adapted to resource-intensive applications such as neural networks. These applications of artificial intelligence are often called deep learning, but even high-powered GPU chips still need to tap into the data and processing power of remote systems in the cloud to perform deep-learning functions. More efficient circuits would make it possible to perform these functions entirely on local devices, even mobile phones.

“Right now, the networks are pretty complex and are mostly run on high-power GPUs,” says Sze in a university statement. “You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons.”

The team, including research scientist Joel Emer of Nvidia, a pioneer in GPU chips, who also serves on the MIT computer science faculty, designed the new circuit, called Eyeriss, with 168 cores, nearly as many as the 200 cores found in a typical mobile GPU chip, but with 10 times the power efficiency. The design separates the training of the neural network, where deep learning is implemented, from the trained network running on the device. The implemented network in Eyeriss is also configured so cores that share data are adjacent in the circuit, removing the need to route that data through main memory.
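
The adjacency idea can be sketched in a few lines of Python. The Core class and the one-dimensional chain below are a hypothetical illustration of local data reuse between neighboring cores, not the actual Eyeriss design.

    # Hypothetical sketch: each "core" keeps one filter weight in its own
    # local storage and hands a running partial sum to its neighbor, so
    # intermediate results never make a round trip through main memory.

    class Core:
        def __init__(self, weight):
            self.weight = weight              # stays in the core's local bank

        def process(self, x, partial_in):
            # Add this core's product to the partial sum received from
            # the adjacent core.
            return partial_in + self.weight * x

    def convolve(cores, inputs):
        """1-D convolution computed by a chain of adjacent cores."""
        n_out = len(inputs) - len(cores) + 1
        outputs = []
        for start in range(n_out):
            partial = 0
            for i, core in enumerate(cores):
                partial = core.process(inputs[start + i], partial)
            outputs.append(partial)           # only final sums leave the chain
        return outputs

    chain = [Core(w) for w in (1, 0, -1)]
    print(convolve(chain, [3, 5, 2, 8, 1]))   # prints [1, -3, 1]

In hardware, each partial sum would travel over a short wire to the neighboring core; the inner loop here merely mimics that handoff.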

In addition, each task allocated to an Eyeriss core carries both the data describing the core’s function and the data the core needs for the task and manipulates. These task allocations can be changed dynamically to maximize the chip’s efficiency. And to minimize exchanges with remote data banks, Eyeriss’s cores store data fetched from remote sources in a compressed format.
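
The article does not specify the compression scheme, but a simple run-length encoding of zero values, a common choice for the sparse data that neural networks produce, gives a feel for how compressed storage cuts data traffic. The functions below are a hypothetical sketch, not the chip’s actual format.

    # Hypothetical illustration of compressed storage: run-length encode
    # zeros as (zero_run_length, nonzero_value) pairs, shrinking sparse
    # data before it is held in a core's local memory.

    def rle_encode(values):
        pairs, zeros = [], 0
        for v in values:
            if v == 0:
                zeros += 1
            else:
                pairs.append((zeros, v))
                zeros = 0
        if zeros:                        # trailing zeros, marked with value 0
            pairs.append((zeros, 0))
        return pairs

    def rle_decode(pairs):
        out = []
        for zeros, v in pairs:
            out.extend([0] * zeros)
            if v != 0:
                out.append(v)
        return out

    data = [0, 0, 7, 0, 0, 0, 4, 0]
    packed = rle_encode(data)            # [(2, 7), (3, 4), (1, 0)]
    assert rle_decode(packed) == data    # round-trips losslessly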

Sze and colleagues demonstrated an image-recognition task with Eyeriss at the conference. The designers believe the chip can be used to run algorithms for Internet of Things applications, where sensors built into vehicles, appliances, and other devices could exchange data with smaller local systems, thus requiring fewer exchanges with remote data banks. Neural networks in embedded systems could also provide more intelligence to autonomous robots.

*     *     *
