Google preparing Android for an AI future

TensorFlow is going on a diet to optimize for smartphones and other lightweight devices. BLAIR HANLEY FRANK reports


The future of Android will be a lot smarter, thanks to new programming tools that Google unveiled recently. The company announced TensorFlow Lite, a version of its machine learning framework that’s designed to run on smartphones and other mobile devices, during the keynote address at its Google I/O developer conference.

“TensorFlow Lite will leverage a new neural network API to tap into silicon-specific accelerators, and over time we expect to see [digital signal processing chips] specifically designed for neural network inference and training,” said Dave Burke, Google’s vice president of engineering for Android. “We think these new capabilities will help power a next generation of on-device speech processing, visual search, augmented reality, and more.”

The Lite framework will be made a part of the open source TensorFlow project soon, and the neural network API will come to the next major release of Android later this year.
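TensorFlow Lite itself had not shipped at the time of the announcement, so as a hedged illustration only, here is a minimal Python sketch of how converting a trained model to the compact Lite format looks with the converter that later landed in the open source TensorFlow project (the model directory and file name are hypothetical):

```python
import tensorflow as tf

# Convert a trained model (in SavedModel format) into the compact
# FlatBuffer format that TensorFlow Lite executes on-device.
converter = tf.lite.TFLiteConverter.from_saved_model("./my_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization to shrink weights
tflite_model = converter.convert()

# Write the converted model to disk so it can be bundled with an app.
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantization step is the “diet” part: it trades a little precision for a model small and cheap enough to run on a phone.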

The framework has serious implications for what Google sees as the future of mobile hardware. AI-focused chips could make it possible for smartphones to handle more advanced machine learning computations without consuming as much power. With more applications using machine learning to provide intelligent experiences, making that sort of work easier to do on the device is key.

Right now, building advanced machine learning into applications, especially when it comes to training models, demands computational power that typically means beefy hardware, a lot of time and a lot of energy. That’s not really practical for consumer smartphone applications, which is why they often offload that processing to massive data centres, sending images, text and other data in need of processing over the internet.
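To make the contrast with cloud offloading concrete, a rough sketch of on-device inference with the TensorFlow Lite interpreter follows. It uses the Python API that later shipped (on a phone the equivalent Java/Kotlin or C++ interpreter would be used), and the model file and dummy input are hypothetical:

```python
import numpy as np
import tensorflow as tf

# Run the model entirely on the device: no image or text
# leaves the phone for a remote data centre.
interpreter = tf.lite.Interpreter(model_path="my_model.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape (e.g. a preprocessed image).
input_data = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```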

Processing that data in the cloud comes with several downsides, according to Patrick Moorhead, principal analyst at Moor Insights and Strategy: users must be willing to transfer their data to a company’s servers, and they have to be in an environment with rich enough connectivity to make sure the operation is low-latency.

There’s already one mobile processor with a machine learning-specific DSP on the market today. The Qualcomm Snapdragon 835 system-on-a-chip sports the Hexagon DSP, which supports TensorFlow. DSPs are also used to provide functionality such as recognizing the “OK, Google” wake phrase for the Google Assistant, according to Moorhead.

Users should expect to see more machine learning acceleration chips in the future, Moorhead said. “Ever since Moore’s Law slowed down, it’s been a heterogeneous computing model,” he said. “We’re using different kinds of processors to do different types of things, whether it’s a DSP, whether it’s a [field-programmable gate array], or whether it’s a CPU. It’s almost like we’re using the right golf club for the right hole.”

Google is already investing in ML-specific hardware with its line of Tensor Processing Unit chips, which are designed to accelerate both the training of new machine learning algorithms and data processing using existing models. The company recently announced the second version of that hardware, which speeds up both machine learning training and inference.

Google is also not the only company with a smartphone-focused machine learning framework. Facebook showed off a mobile-oriented ML framework called Caffe2Go last year, which is used to power applications like the company’s live style transfer feature.
