Alexa may soon mimic voices
Amazon’s Alexa might soon replicate the voice of family members — even if they’re dead.
The capability, unveiled at Amazon’s re:MARS conference in Las Vegas, is in development and would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of recorded audio.
Rohit Prasad, senior vice president and head scientist for Alexa, said at the event that the goal of the feature was to build greater trust in users’ interactions with Alexa by adding more “human attributes of empathy and affect.”
In a video played by Amazon at the event, a young child asks, “Alexa, can Grandma finish reading me ‘The Wizard of Oz’?” Alexa acknowledges the request, switches to a voice mimicking the child’s grandmother, and continues reading the book in that voice.
Amazon’s push comes as competitor Microsoft earlier this week said it was scaling back its synthetic voice offerings and setting stricter guidelines to “ensure the active participation of the speaker” whose voice is re-created. Microsoft said it is limiting which customers get to use the service — while also continuing to highlight acceptable uses such as an interactive Bugs Bunny character at AT&T stores.
“This technology has exciting potential in education, accessibility and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” said Natasha Crampton, head of Microsoft’s AI ethics division.