New Straits Times, Monday, November 11, 2019 • Bots 21 | Cover Story

"We decided to use AI to solve this since everybody has a phone, which can also be used as a hearing aid." - Sagar Savla

people to switch between Malay and English in the same conversation, especially when they know both languages," he says.

In the right corner of the app, a blue circle indicator shows the loudness of the speaker's voice relative to the noise of the environment.

Another aspect is sound events. Live Transcribe can detect sounds like clapping, knocking, dogs barking and much more in the user's surroundings. Savla shares the story of a deaf woman who was alerted by the app to her baby crying. As her baby was not in the room, she followed the crying sound with the help of the blue circle indicator, only to find that her baby had locked herself inside the shoe closet.

Google has also added a water sound detection feature. Apparently, some deaf people are afraid of leaving the tap running when they use the bathroom because they cannot hear the water and often forget to turn it off. "And so instead of becoming paranoid and constantly checking their bathroom tap, they can use the app. This has helped avoid high water bills or flooding in many cases."

Another feature is the ability to save transcriptions, such as one-on-one conversations and meetings among small groups. The user can save a transcript for up to three days. If there is a need to keep transcripts longer than that, the user can copy and paste them to other platforms.

POWER OF AI

Savla says existing professional hearing aids are exorbitant, costing between US$1,000 and US$5,000. "As somebody who grew up in India, I can see how debilitating that can be for somebody's lifestyle. Most people cannot afford that.

"Usually, AI is synonymous with business. Now, I want to show that it's actually much more impactful as it goes into many social good arenas and applications that can directly help people.

"We decided to use AI to solve this since everybody has a phone, which can also be used as a hearing aid."

The app is smart enough to recognise the context behind certain words. During a demonstration, it understood the difference between New Jersey, the place, and new jersey, as in new clothing that you're going to buy. "When you say stuff like 'I would like to have a table for two at 2pm', it understands the difference between the first 'two', which means two people, and the second 'two', which is the time, and transcribes it correctly."

Savla says the app is built on Google's cloud-based speech recognition model. It is the result of a decade of research drawn from other products, such as Google Voice Search and Google Assistant, to help people recognise speech across different languages. The transcribing process, Savla says, happens within 200 milliseconds: the audio travels from the user's phone to Google's cloud servers, and the text comes back to the phone. He says this speed is important to ensure the user gets the caption back in an instant, so they can participate in the conversation instead of becoming passive listeners.

Google also partnered with Gallaudet University in Washington, US, the world's premier university for the deaf and hard of hearing and the birthplace of American Sign Language. The university provided feedback on Live Transcribe so that it meets the needs of these communities.

In terms of education, Savla sees the app as revolutionary. "If you're hard of hearing or deaf, you don't have to go to a special school anymore. You can continue in the same year at school or university, follow along and catch conversations without having to learn sign language or teach it to somebody else."

The Live Transcribe app is available free on the Google Play Store. The company hasn't ruled out an iOS version and is open to working with Apple to bring the app to the iPhone.

MOVING FORWARD

When it comes to background noise, Savla says Google is working to make the app better at understanding the context of noise and more robust in such scenarios. The next step is to improve its speech recognition quality.

"English is what we started with and is one of our best speech recognition systems. We want to continue to improve English as well as all the other languages that we support, including Malay."