Assistants help change smartphones
For years, mobile phone owners have had access to just one digital assistant – Siri on an iPhone, Google Now/Google Assistant on an Android device, Cortana on a Windows one.
Now that’s changing as multiple assistants proliferate across multiple phones. It sounds like an epidemic of split personalities, but it’s actually a step toward a future in which artificially intelligent entities will take over our gadgets and make them more powerful and easier to use.
Google Assistant, the unimaginatively named software that pops up on Android phones when you say ‘‘OK Google,’’ is now available for iPhones.
You can’t ask it to take a selfie or perform certain other tasks – Apple keeps that functionality for its own Siri – but you are free to enjoy its superior speech-recognition technology.
With some technical skill, you can also run Google Assistant on Windows systems alongside the native digital assistant, Cortana (which, in turn, has been available on Android and iOS for a while). Samsung’s Galaxy S8, for its part, includes both Google Assistant and the company’s own Bixby.
In other words, the digital assistants are becoming untethered from device makers and even operating systems. Soon, they may supersede them: the assistant becomes the primary interface, and the user doesn’t really care what’s under the hood unless he or she is a determined geek.
But that shift is predicated on improvements to voice interaction technology that have eluded developers so far.
So Google is adding keyboard input support to Assistant, and camera support will also come soon, the company announced at its developer conference (now under way). That means we’ll be able to train the phone’s camera on a flower or a building, and it’ll tell us what it is, or translate text from a foreign language without a special app.
The idea is that the digital assistant will do all the work for the user. If you want to post to Facebook, for example, you won’t need to open the app – the assistant will give you a window to do it; if you’re trying to figure out if the restaurant in front of you is worth entering, you won’t need to google it or bring up the Yelp app, just train the camera on the sign and the assistant will give you all the relevant information.
The benefits for the user are obvious. Eventually, it may become unnecessary to install apps on a phone – the assistant will just pull the necessary data from various services in the cloud.
For now, people hardly ever use digital assistants on desktop and laptop computers; they use them often on home speakers and not that frequently on phones.
Talking to a gadget when you’re not driving a car remains a turnoff for many people.
The speakers, however, will be niche products for a long time; not everyone can think of a use for them. Phones, by contrast, are ubiquitous – and suddenly the whole market is up for grabs for the company that develops the perfect digital assistant and makes it the interface of choice. The mission is far from trivial. Artificial intelligence needs training to get better, and that means more interactions with us.
Companies need to entice customers to use highly imperfect products. That takes a lot of hype and some quick improvement; otherwise, disappointed phone users will drift away.
Google and Amazon appear to be the most determined developers at this point, but there’s still time (though not that much) for Apple and Microsoft to catch up.
In the end, getting the digital assistants to take over our gadgets is a collaborative effort between developers, users and machine-learning algorithms.
It’s not quite clear yet if a complete takeover is even possible but, given the enormous resources being invested into this play, it’s likely to bring about change in the way we use our communication devices even if it doesn’t succeed 100 per cent. – Bloomberg