Microsoft’s Project Brainwave offers real-time AI
Expanding its footprint in the artificial intelligence (AI) world, Microsoft has unveiled a new deep learning acceleration platform called Project Brainwave. The project uses field-programmable gate arrays (FPGAs) deployed in the Azure cloud to serve deep learning models in real time.
The system behind Project Brainwave is built on three main layers: a high-performance distributed system architecture, a DNN engine synthesised onto FPGAs, and a compiler and runtime for low-friction deployment of trained models. The Redmond giant’s extensive prior work on FPGAs underpins the platform’s performance, while the distributed architecture delivers low latency and high throughput.
One of the biggest advantages of the new Microsoft project is speed. The FPGAs are attached directly to the network fabric to keep latency as low as possible, and the high-throughput design makes it easier to build deep learning applications that run in real time.
“Our system, designed for real-time AI, can handle complex, memory-intensive models such as LSTMs, without using batching to juice throughput,” Microsoft’s distinguished engineer Doug Burger wrote in a blog post.
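Brainwave’s internals are not detailed in the article, but the point about serving LSTMs without batching can be illustrated in plain NumPy: a single request is pushed through the recurrent cell one timestep at a time, so latency is one pass over the sequence rather than a wait for a full batch to accumulate. All names, weight shapes and sizes below are illustrative, not Brainwave’s actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step for a single (unbatched) input vector.

    x: input (d,); h, c: hidden and cell state (n,);
    W: (4n, d), U: (4n, n), b: (4n,) with gates stacked as [i, f, o, g].
    """
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    o = sigmoid(z[2 * n:3 * n]) # output gate
    g = np.tanh(z[3 * n:])      # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Real-time serving: each request is processed alone (batch size 1),
# so no throughput-boosting batching delays the response.
rng = np.random.default_rng(0)
d, n = 8, 16                             # illustrative input/state sizes
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):                       # a 5-step input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)  # (16,)
```

The memory-intensive part Burger alludes to is the weight matrices `W` and `U`, which must be read in full at every timestep; keeping them pinned in on-chip FPGA memory is what makes batch-1 inference fast.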
It is worth noting that Project Brainwave invites comparison with Google’s Tensor Processing Unit. Unlike Google’s chip, however, Microsoft’s hardware is designed to work with multiple deep learning frameworks, with native support for Microsoft’s Cognitive Toolkit as well as Google’s TensorFlow, allowing Brainwave to accelerate predictions from models trained in either.