Google starts shaping the successor to the smartphone
Sundar Pichai, chief executive of Google, has said for more than a year that artificial intelligence will remake everything the internet company does. Finally, for people gathering at the company’s annual developer conference in Silicon Valley this week, there was a sense of how big that change might be.
In the most visible demonstration of its ambitions to extend the reach of its AI-powered services, Google launched its intelligent assistant — known, simply, as Assistant — as an app on Apple’s iPhone, pitting it directly against Apple’s Siri in a showdown of intelligent agents.
Less noticed but perhaps more important was Google’s announcement of a new computing service for businesses and governments hoping to draw on the same AI that powers the company’s own services. “We realise we’re not going to solve all the world’s machine-learning problems ourselves,” said Jeff Dean, one of the company’s top AI researchers.
Instead, the techniques Google has developed for speech and vision recognition are being made available for companies to apply themselves. They will be able to use the technologies for some of their hardest computing problems, such as detecting fraud and analysing large volumes of patient health data, he said.
The new computing service will also propel Google deeper into the chip business, making it an unlikely competitor for the companies building the computing foundations of the AI era, such as Nvidia and Intel. The internet company said ranks of servers based on chips of its own design, known as Tensor Processing Units, or TPUs, would be opened to customers as a cloud service.
“This could have far-reaching consequences,” said Chirag Dekate, an analyst at Gartner, adding that it would bring a step-change to the kind of power that companies have for analysing their own data. “It will be hard for others to compete with Google’s performance.”
In one demonstration of the potential of its increasingly powerful AI platform, Google disclosed it was working with three US medical institutions to analyse masses of health data in pursuit of new ways to improve patient care.
This was only one aspect of a broad-ranging push into machine learning — the company’s main AI technique — that was shown off this week at its big annual tech showcase event.
“They’ve made a huge amount of progress in a short amount of time,” said Geoff Blaber, an analyst at CCS Insight.
Underpinning this has been the headway the company reported in core technologies such as language understanding and image recognition. The advances have included reducing the error rate of its voice recognition technology from 8.5 per cent to 4.9 per cent since last July, said Mr Pichai.
AI has also helped with the design of some of its products, for example enabling it to refine the software in Home, its “smart” speaker.
Google has also been working on more ways to embed AI into all of its services.
“It’s more integrated into the user experience,” said Carolina Milanesi, an analyst at Creative Strategies. Examples on display this week include applying facial recognition to a user’s photos to recommend other people with whom the pictures should be shared.
At the centre of this push is Assistant. Google demonstrated this week how the technology could be used as an intelligent layer for other services, for instance helping users find out more about real-world objects captured by their smartphone cameras.
Assistant is set to become more knowledgeable — and useful — as it starts to appear on more devices and acts as the foundation for more services, said Mr Dean. “We’re starting to get third-party integration points,” he said.
Launched late last year on Google’s Pixel smartphone and Home, Assistant is still in its early stages. Google this week claimed it was on 100m devices — though that includes all smartphones running the latest version of the Android operating system, even if their users have not tried out Assistant.
And even Google executives say they are still trying to design the most effective ways of interacting with the service to help users make the most of it.
“It doesn’t seem yet that assistants are changing our lives,” Ms Milanesi said.
Google also faces stiff competition from Amazon, which has moved quickly to consolidate the early lead of its own Alexa voice-operated service.
But Google has been putting the pieces in place for a battle likely to take years to unfold. Part of that has involved releasing an Assistant app to run on iPhone. Since it is not “native” to the Apple platform, it will not be able to do some things that Siri can, such as set the phone’s alarm. Google hopes it will more than make up for this by being able to operate Google’s own services, such as Maps, and bringing a deeper understanding of the user gleaned from interactions on other devices.
If people come to rely on such services to bring a common experience to all their devices, it could turn software like Assistant into a new computing platform, and, says Ms Milanesi, pose a big challenge to Apple, which tends to restrict services like Siri to its own devices to support hardware sales. “There’s a whole bunch of stuff Assistant will know that Siri doesn’t,” she said.
The smartphone still dominates computing. But as the new AI-powered platforms take shape, a possible successor to today’s touchscreens and app stores may be starting to come into focus.
Delegates at the Google developer conference in California heard the company outline plans to extend the reach of its machine learning technology