Heard your late relative's voice? It could be Alexa's mimic

Published: 24th June 2022 12:52 PM  |  Last Updated: 24th June 2022 12:52 PM

Amazon’s Alexa might soon replicate the voice of family members - even if they’re dead. (Photo | AP)

By Associated Press

Amazon's Alexa might soon replicate the voice of family members - even if they're dead. The capability, unveiled at Amazon's re:MARS conference in Las Vegas, is in development and would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of provided recording.

Rohit Prasad, senior vice president and head scientist for Alexa, said at the event on Wednesday that the aim behind the feature was to build greater trust in users' interactions with Alexa by giving it more "human attributes of empathy and affect."

"These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love," Prasad said. "While AI can't eliminate that pain of loss, it can definitely make their memories last."

In a video played by Amazon at the event, a young child asks, "Alexa, can Grandma finish reading me The Wizard of Oz?" Alexa acknowledges the request and switches to another voice, mimicking the child's grandmother. The voice assistant then continues to read the book in that same voice.

To create the feature, Prasad said the company had to learn how to make a "high-quality voice" with a shorter recording, as opposed to hours of recording in a studio. Amazon did not provide further details about the feature, which is bound to spark more privacy concerns and ethical questions about consent.

Amazon's push comes as competitor Microsoft said earlier this week that it was scaling back its synthetic voice offerings and setting stricter guidelines to "ensure the active participation of the speaker" whose voice is recreated. Microsoft said Tuesday it is limiting which customers get to use the service, while also continuing to highlight acceptable uses such as an interactive Bugs Bunny character at AT&T stores.

"This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners," said a blog post from Natasha Crampton, who heads Microsoft's AI ethics division.
 

