Companies hoping to use artificial intelligence should benefit from more efficient chip designs

Will Knight

Intel and Facebook are working together on a chip that should make it cheaper for big companies to use artificial intelligence.

The device promises to run pre-trained machine-learning algorithms more efficiently, meaning less hardware and less energy are required to have AI do useful work.

Intel revealed the new AI chip, as well as the collaboration with Facebook, at the Consumer Electronics Show in Las Vegas today. The announcement shows how intertwined AI software and hardware are becoming as companies look for an edge in the development and deployment of AI.

The new “inference” AI chip could help Facebook and others deploy machine learning more efficiently and cheaply. The social network uses AI to do a wide range of things, including tagging people in images, translating posts from one language to another, and catching prohibited content. These tasks are more costly, in terms of time and energy, if run on more generic hardware.

Intel will make the chip available to other companies later in 2019. The company currently sits far behind the market leader in AI hardware, Nvidia, and faces competition from a host of chip-making upstarts.

Naveen Rao, vice president of the artificial intelligence products group at Intel, said ahead of the announcement that the chip would be faster than anything available from competitors, although he did not provide specific performance numbers.

Facebook confirmed that it has been working with Intel but declined to provide further details of the arrangement, or to outline its role in the partnership. Facebook is also rumored to be exploring its own AI chip designs.

Rao said the chip will be compatible with all major AI software, but Facebook’s involvement shows how important it is for those designing silicon to work with AI software engineers. Facebook’s AI researchers develop a number of widely used AI software packages. The company also has vast amounts of data for training and testing machine learning code.

Intel was left flat-footed a couple of years ago as demand for AI chips exploded with the rise of deep learning, a powerful machine learning technique that involves training computers to do useful tasks by feeding them large amounts of data.

With deep learning, data is fed into a very large neural network, and the network’s parameters are tweaked until it provides the desired output. A trained network can then be used for a task like recognizing people in video footage.
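The training-then-inference split described above can be sketched in a few lines. The example below is a hedged toy illustration, not anything Intel or Facebook has published: it "trains" a single linear layer by repeatedly tweaking its parameters, then runs a frozen forward pass, which is the kind of fixed, repetitive workload an inference chip is built to accelerate. All names here (`infer`, the synthetic data) are invented for illustration.

```python
import numpy as np

# Toy illustration of training vs. inference. A real deep network has many
# layers; a single linear layer keeps the parameter-tweaking loop short.
rng = np.random.default_rng(0)

# Synthetic data: targets follow y = 3x + 1, plus a little noise.
x = rng.uniform(-1, 1, size=(256, 1))
y = 3.0 * x + 1.0 + 0.05 * rng.normal(size=(256, 1))

# Parameters to be "tweaked": a weight and a bias.
w, b = np.zeros((1, 1)), np.zeros(1)

# Training: gradient descent on mean squared error, nudging the
# parameters until the network produces the desired output.
lr = 0.5
for _ in range(200):
    pred = x @ w + b
    err = pred - y
    w -= lr * (x.T @ err) / len(x)   # gradient of MSE w.r.t. w
    b -= lr * err.mean(axis=0)       # gradient of MSE w.r.t. b

# Inference: the parameters are now fixed; only a forward pass runs.
# This fixed forward pass is the workload an "inference" chip targets.
def infer(new_x):
    return new_x @ w + b

print(float(infer(np.array([[0.5]]))[0, 0]))  # close to 3*0.5 + 1 = 2.5
```

Note how the training loop does strictly more work than `infer` (it computes gradients and updates state), which is why hardware specialized for the forward pass alone can be smaller and more power-efficient.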

The computations required for deep learning run relatively inefficiently on general-purpose computer chips. They run far better on chips that split computations into many parallel operations, such as the graphics processors Nvidia has long specialized in. As a result, Nvidia got a jump-start on AI chips and still sells the vast majority of high-end hardware for AI.

Intel kickstarted its AI chip development by acquiring a startup called Nervana Systems in 2016. Intel then announced its first AI chip, the Intel Nervana Neural Network Processor (NNP), a year later.

Intel’s latest chip is optimized for running algorithms that have already been trained, which should make it more efficient. The new chip is called the NNP-I (the “I” is for “inference”).

The past few years have seen a dramatic uptick in the development of new AI hardware. A host of startups are racing to develop chips optimized for AI. This includes Graphcore, a British company that recently raised $200 million in investment, and an array of Chinese companies such as Cambricon, Horizon Robotics, and Bitmain (see “China has never had a real chip industry. Making AI chips could change that”).

Intel also faces competition from the likes of Google and Amazon, both of which are developing chips to power cloud AI services. Google first revealed it was developing a chip for its TensorFlow deep learning software in 2016. Amazon announced last December that it had developed its own AI chips, including one dedicated to inference.

Intel might be late to the game, but the company has unparalleled expertise in the manufacturing of integrated circuits, which remains a key factor driving innovations in design and better performance. “Intel’s expertise is in optimizing silicon,” Rao says. “This is something we do better than anyone.”

Will Knight is MIT Technology Review’s Senior Editor for Artificial Intelligence. He covers the latest advances in AI and related fields, including machine learning, automated driving, and robotics.
