Why US giants of technology love the sound of your voice
AMAZON’S Echo has made tangible the promise of an artificially intelligent personal assistant in every home. Those who own the voice-activated gadget (known colloquially as Alexa, after its female interlocutor) are prone to advertising “her” charms, applauding Alexa’s ability to call an Uber, order pizza or check a pupil’s maths homework. The company says more than 5,000 people a day profess their love for Alexa – not that there is any easy way to check that claim.
Voice recognition has come a long way in the past few years. But it’s still not good enough to popularise the technology for everyday use and usher in a new era of human-machine interaction, allowing us to talk with all our gadgets – cars, washing machines, televisions.
Despite advances in speech recognition, most people continue to swipe, tap and click. And probably will for the foreseeable future. What’s holding back progress? The artificial intelligence that powers the technology has room to improve. There’s also a serious deficit of data – specifically, audio of human voices speaking in multiple languages, accents and dialects, often in the noisy circumstances that can defeat the code.
So Amazon, Apple, Microsoft and China’s Baidu have embarked on a world-wide hunt for terabytes of human speech.
The challenge is finding a way to capture natural, real-world conversations.
Even 95 percent accuracy isn’t enough, says Adam Coates, who runs Baidu’s artificial intelligence lab in Sunnyvale, California.
“Our goal is to push the error rate down to 1 percent,” he says. “That’s where you can really trust the device to understand what you’re saying, and that will be transformative.”
Not so long ago, voice recognition was comically rudimentary. An early version of Microsoft’s technology running in Windows transcribed “mom” as “aunt” during a 2006 demo before an auditorium of analysts and investors.
When Apple launched Siri with much fanfare five years ago, the personal assistant’s gaffes were widely mocked because it, too, routinely spat out incorrect results or misheard the question. When asked if Gillian Anderson is British, Siri provided a list of English restaurants. Now Microsoft says its speech engine makes as few errors as professional transcribers, or fewer; Siri is winning grudging respect; and Alexa has given us a tantalising glimpse of the future.
Much of that progress owes a debt to the magic of neural networks, a form of artificial intelligence based loosely on the architecture of the human brain. — Bloomberg