Popular AI-powered digital personal assistants such as Alexa and Siri have lately been accused of acting as covert spies for their makers, gathering personal data from the unguarded private conversations of their users and, in doing so, breaching individual privacy. A recently published report rebuts this charge and explains the methods companies like Apple and Amazon actually use to assure quality and improve features, practices that have been mistaken for spying.
Digital personal assistants have received considerable flak in the media lately, making people wary of an otherwise helpful technology. With reports claiming that company employees listen in on our conversations, the new study clears the air about the controversial practice of reviewing recordings.
These companies carry out quality-assurance work on their devices: correcting technical errors, refining features, or simply testing the software as part of routine learning. None of this involves intentional eavesdropping on users. Reviewers analyze only a tiny portion of recordings, selected at random and anonymized, for quality control. Recordings fall into two categories: intentional, when the user deliberately alerts the assistant with a wake phrase, and unintentional, when the assistant mistakes another sound for the wake phrase and activates. In neither case is user identity compromised, and in most cases technical glitches are diagnosed from these short snippets of audio.
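To make the sampling process concrete, here is a minimal illustrative sketch in Python of how a quality-control pipeline might draw a small random, anonymized batch of recordings for review. The function name, field names, and sampling fraction are all hypothetical assumptions for illustration, not the actual implementation used by any company.

```python
import random

def sample_for_review(recordings, fraction=0.01, seed=None):
    """Select a small random fraction of recordings for quality review,
    stripping identifying fields before reviewers ever see them.
    Illustrative sketch only; field names are assumptions."""
    rng = random.Random(seed)
    k = max(1, int(len(recordings) * fraction))
    chosen = rng.sample(recordings, k)
    # Keep only the audio snippet and the trigger type; drop user identity.
    return [{"audio": r["audio"], "trigger": r["trigger"]} for r in chosen]

# Hypothetical dataset: some clips triggered intentionally by a wake
# phrase, others accidentally by a misheard sound.
recordings = [
    {"user_id": i, "audio": f"clip_{i}.wav",
     "trigger": "wake_phrase" if i % 2 == 0 else "false_activation"}
    for i in range(200)
]

batch = sample_for_review(recordings, fraction=0.05, seed=42)
print(len(batch))  # prints 10: a 5% sample, with no user_id attached
```

The key privacy property in this sketch is that anonymization happens before the sample leaves the function: reviewers receive only the audio and whether the activation was intentional, never the identity of the speaker.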
Employees are bound by rules as well. Most people working on Google Assistant and other digital personal assistants are held to confidentiality agreements, and breaching them can carry legal consequences.