Siri to Recognize User Speech Patterns
Apple's Assistant Will Soon Understand How Users Speak

Siri, the voice assistant built into iPhones, has come a long way through successive updates. It has become an integral part of the experience for users of Apple products. Though there are many competitors in the market, Amazon's Alexa being a major one, Siri has still managed to carve out its own consumer base.

That said, Siri has drawn many complaints in recent years about the diversity of its features compared to its competitors. Addressing these issues, Apple has now announced that it will conduct further research to update Siri to recognize user speech patterns. The update will also enable Siri to understand the nature of a user's speech and respond appropriately to commands given as a shout or a whisper.

With this updated version, Apple aims to make Siri capable of replying to a command by detecting variations in sound. The change was prompted by findings that Siri cannot read the environment of a room and usually responds at a loud volume, which is inconvenient for users. With the new update, Siri will be able to reply more appropriately even in a noisy environment.

Siri's voice is built from recordings of various voice actors, which are then synthesized. With the newer updates, the service will respond with a better understanding of the surrounding environment. For example, when a user whispers a command at night, Siri will be able to answer back in a whisper. Similarly, in a loud environment or outdoors, the user will still be able to communicate with Siri and will receive a louder, clearer, and slower-paced response.

As development continues, the company is also planning to incorporate more advanced features into Siri. These would allow it to locate the device, estimate its distance from the user, and gauge the noise level of the room.

Alongside this, Apple is also planning to modify Siri to adapt to users who speak with a stutter. Currently the assistant is tuned to a specific speech pattern; when users talk at a slower pace than the device is designed for, their command gets cut off while they are still speaking.

Most Apple users who stutter or speak slowly rely on the Hold to Talk feature on their iPhones. This is often inconvenient, especially when a feature could make the entire process easier. The company is now trying to develop a feature that can detect whether a user is stuttering while giving a command to Siri.

Apple has also stated that, while it is currently focused on serving users who stutter in a better way, its future work will address other categories of speech impairment, such as dysarthria.

This is definitely good news for users with speech complications. As Apple tries its best to serve them better, Amazon and Google are also working hard to capture this market through their own research and development.
