
Will Apple dive deep into AI?

The real question is: What will be its primary focus?

- Bob O’Donnell, Special for USA TODAY. Bob O’Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm. You can follow him on Twitter @bobodtech.

FOSTER CITY, CALIF. We’re in the heart of the tech conference season, in which giant players including Microsoft, Google, Facebook and, next up, Apple lay out their visions for where their futures, and the tech industry as a whole, are headed.

Looking at what has been discussed to this point (and speculating on what Apple will announce at its Worldwide Developers Conference on Monday), it’s safe to say that all of these organizations are keenly focused on different types of artificial intelligence, or AI. What this means is that each wants to create unique experiences that leverage both new types of computing components and software algorithms to automatically generate useful information about the world around us. In other words, they want to use real-world data in clever ways to enable cool stuff.

You may hear scary-sounding terms like convolutional neural networks, machine learning, analytics and deep learning associated with AI, but fundamentally, the concept behind all of them is to organize large amounts of data into various structures and patterns. From there, the system learns from the combined data and can then take actions of various types, such as better interpreting the importance of new incoming data.
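To make that loop concrete, here is a minimal sketch in Python of the pattern the paragraph describes: organize labeled examples into a structure, "learn" from them, then use the result to interpret new incoming data. The toy data and the nearest-neighbor approach are purely illustrative assumptions, not how any of these companies’ systems actually work.

```python
from collections import Counter
import math

# Toy "sensor readings": (feature vector, label) pairs.
# Entirely made-up data, used only to illustrate the idea.
training_data = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.8), "dog"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, k=3):
    """Interpret new incoming data against learned patterns:
    find the k nearest training examples and take a majority vote."""
    neighbors = sorted(training_data, key=lambda item: distance(sample, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

print(classify((0.85, 0.15)))  # prints "cat"
```

Real deep-learning systems replace the simple distance comparison with layers of learned weights, but the flow is the same: stored patterns in, a judgment about fresh data out.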

While some of these computing principles have been around for a long time, what’s fundamentally new about the modern type of AI being pursued by these companies is its extensive use of real-world data generated by sensors (still and moving images, audio, location, motion and so on) and the speed at which the calculations on that data are occurring.

When done properly, the net result of these computing efforts is a nearly magical experience where we can have a smarter, more informed view of the world around us. At Google’s recent I/O event, for example, the company debuted its new Lens capability for Google Assistant, which can provide information about the objects and places within your view.

In practical terms, Lens allows you to point your smartphone camera at something and have information about the objects in view appear overlaid on the phone screen. Essentially, it’s a form of augmented reality I expect we will soon see other major platform vendors provide (hint: Apple).

Behind the scenes, however, the effort to make something such as Lens work involves an enormous amount of technology: reading the live video input from the camera (a type of sensor, by the way), applying AI-enabled computer vision algorithms to recognize both the objects and their relative locations, combining that with location details from the phone’s GPS and/or Wi-Fi signals, looking up relevant information about the objects, and then compositing all of that onto the phone’s display.
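A rough sketch of how such a pipeline could be wired together appears below. Every function here is a hypothetical stand-in (Google has not published Lens’s internals), and each stub returns a canned value so the end-to-end flow can actually run.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    x: float  # relative position within the frame, 0 to 1
    y: float

# Each step below is a hypothetical stand-in for a real subsystem.

def read_camera_frame():
    """Sensor input: in a real app, a live video frame from the camera."""
    return "raw-frame-bytes"

def detect_objects(frame):
    """AI-enabled computer vision: recognize objects and where they sit."""
    return [DetectedObject("Golden Gate Bridge", 0.5, 0.4)]

def get_location():
    """Location details from the phone's GPS and/or Wi-Fi signals."""
    return (37.8199, -122.4783)

def look_up_info(label, location):
    """Fetch relevant information about a recognized object."""
    return f"{label}: landmark near {location}"

def render_overlay(frame, annotations):
    """Composite the looked-up information onto the phone's display."""
    for obj, info in annotations:
        print(f"overlay at ({obj.x:.2f}, {obj.y:.2f}): {info}")

frame = read_camera_frame()
location = get_location()
annotations = [(obj, look_up_info(obj.label, location)) for obj in detect_objects(frame)]
render_overlay(frame, annotations)
```

The hard engineering lives inside the stubs, of course; the point is that several independent systems (camera, vision model, positioning, lookup, rendering) have to cooperate in real time for the experience to feel seamless.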

Of course, there are thousands of other examples of potential AI-driven experiences.

Ironically, in the midst of all this new technology, one of the other intriguing aspects of AI-driven applications is that they’re pushing our traditional computing devices into the background. Sure, we’re still often using things such as smartphones to enable some of these experiences, but the ultimate goal of these advanced AI computing architectures is to make our technology invisible.

Voice-based computing and digital assistants are a step in this direction, but we’ll eventually see (hopefully!) small, discreet head-mounted displays and other new methods of interacting with a computing-enhanced and more contextually aware view of the real world around us.


A man takes a selfie at Apple’s Worldwide Developers Conference last year in San Francisco. This year’s conference, held in San Jose for the first time since 2002, starts Monday and ends June 9. (Photo: Andrew Burton, Getty Images)
Apple could unveil its plans for artificial intelligence at WWDC17 next week. (Photo: Reviewed.com)
