USA TODAY International Edition
APPLE IS NEXT UP TO STRUT ITS BIG AMBITIONS FOR AI
The real question is: What will be its primary focus?
We’re in the heart of tech conference season, when giant players including Microsoft, Google, Facebook and, next, Apple lay out their visions for where their futures — as well as the tech industry as a whole — are headed.
Looking at what has been discussed to this point (and speculating on what Apple will announce at its Worldwide Developers Conference on Monday), it’s safe to say that all of these companies are keenly focused on different types of artificial intelligence, or AI. Each wants to create unique experiences that leverage both new types of computing components and software algorithms to automatically generate useful information about the world around us. In other words, they want to use real-world data in clever ways to enable cool stuff.
You may hear scary-sounding terms like convolutional neural networks, machine learning, analytics and deep learning associated with AI, but fundamentally, the concept behind all of them is to organize large amounts of data into various structures and patterns. From there, work is done to learn from the combined data, and then actions of various types — such as being able to better interpret the importance of new incoming data — can be applied.
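To make the idea concrete, here is a minimal sketch of that learn-then-interpret pattern: known examples are organized into a structure, and a new incoming reading is interpreted by comparison against them. All of the names, data and labels below are hypothetical illustrations, not any company's actual system.

```python
# A minimal sketch of the core idea: organize known data, then use it
# to interpret new incoming data. Data and labels here are invented.
import math

# Hypothetical training data: (feature vector, label) pairs, e.g. rough
# "brightness" and "sky color" readings from a camera sensor.
training_data = [
    ((0.9, 0.8), "daytime"),
    ((0.8, 0.7), "daytime"),
    ((0.2, 0.1), "nighttime"),
    ((0.1, 0.3), "nighttime"),
]

def classify(point, data):
    """Label a new reading by its nearest known example (1-nearest-neighbor)."""
    nearest = min(data, key=lambda pair: math.dist(point, pair[0]))
    return nearest[1]

print(classify((0.85, 0.75), training_data))  # -> daytime
print(classify((0.15, 0.15), training_data))  # -> nighttime
```

Real systems use far richer models than this nearest-neighbor comparison, but the shape of the task is the same: structure the known data, then place new data against it.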
While some of these computing principles have been around for a long time, what’s fundamentally new about the modern type of AI being pursued by these companies is its extensive use of real-world data generated by sensors — such as still and moving images, audio, location, motion, etc. — and the speed at which the calculations on the data are occurring.
When done properly, the net result of these computing efforts is a nearly magical experience where we can have a smarter, more informed view of the world around us. At Google’s recent I/O event, for example, the company debuted its new Lens capability for Google Assistant, which can provide information about the objects and places within your view.
In practical terms, Lens allows you to point your smartphone camera at something and have information about the objects in view appear overlaid on the phone screen. Essentially, it’s a form of augmented reality I expect we will see other major platform vendors provide soon (hint: Apple).
Behind the scenes, however, the effort to make something such as Lens work involves an enormous amount of technology: reading the live video input from the camera (a type of sensor, by the way), applying AI-enabled computer vision algorithms to recognize the objects and their relative locations, combining that with location details from the phone’s GPS and/or Wi-Fi signals, looking up relevant information on the objects, and then overlaying all of that on the phone’s display.
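The stages described above can be sketched as a simple pipeline. Every function name below is a hypothetical stand-in with placeholder logic, meant only to show how the pieces connect; none of it reflects Google's actual implementation or APIs.

```python
# A hypothetical sketch of a Lens-style pipeline. Each stage is a stub
# standing in for a much larger system.

def read_camera_frame():
    """Stand-in for reading one frame from the camera sensor."""
    return "frame-data"

def recognize_objects(frame):
    """Stand-in for the computer-vision step: find objects and positions."""
    return [{"name": "cafe sign", "position": (120, 80)}]

def get_location():
    """Stand-in for a GPS/Wi-Fi location fix (latitude, longitude)."""
    return (37.33, -122.03)

def look_up_info(obj, location):
    """Stand-in for querying background knowledge about an object."""
    return f"{obj['name']} near {location}: open until 9 pm"

def overlay_on_display(frame, annotations):
    """Stand-in for drawing the results over the live camera view."""
    return {"frame": frame, "annotations": annotations}

# The whole pipeline, end to end:
frame = read_camera_frame()
location = get_location()
annotations = [look_up_info(obj, location) for obj in recognize_objects(frame)]
result = overlay_on_display(frame, annotations)
print(result["annotations"][0])
```

The point is not any single stage but the chain: sensor data in, recognition, context, lookup, and display out, all fast enough to feel instantaneous.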
Of course, there are thousands of other examples of potential AI-driven experiences.
Ironically, in the midst of all this new technology, one of the other intriguing aspects of AI-driven applications is that they’re pushing our traditional computing devices into the background. Sure, we’re still often using things such as smartphones to enable some of these experiences, but the ultimate goal of these advanced AI computing architectures is to make our technology become invisible.
Voice-based computing and digital assistants are a step in this direction, but we’ll eventually see (hopefully!) small, discreet head-mounted displays and other new methods of interacting with a computing-enhanced and more contextually aware view of the real world around us.