The researchers said they achieved a 100 percent success rate in their tests, demonstrating a dire need for companies to patch this hole in their products’ security. The technique for creating such audio files is too complex for the average person, but well within the means of a determined attacker. An attacker could issue not just wake commands but also commands that make the smart speaker do other things, such as taking control of your smart bulbs or placing orders on Amazon.
Right now, the Google Assistant on our phones will respond only to the voice it has been trained with; however, that is not the case with Alexa, the AI that powers Amazon’s Echo range of speakers. Voice matching could be a better way of securing smart speakers, ensuring they respond only to the voices of select people.