Enough with the digital assistant privacy scare stories already

By Mark Sullivan

Over the past few months, three publications have raised the alarm that our digital personal assistants (Alexa, Siri, et al) are spying on us when people at the companies that made them listen in to some of our interactions as part of a quality control process. Much ado about not very much, I say.

Bloomberg ran a story in April with the headline “Amazon Workers Are Listening to What You Tell Alexa.” Then Belgian public broadcaster VRT NWS ran a story titled “Google employees are eavesdropping, even in your living room.”

The latest installment comes from The Guardian’s Alex Hern under the headline “Apple contractors ‘regularly hear confidential details’ on Siri recordings.”

While these articles play on consumers’ current data privacy anxieties, the quality assurance practices they cover are, for the most part, commonplace and meant only to improve products.

It’s true that tech companies like Apple, Google, and Amazon record a tiny sliver of user-assistant interactions to analyze for quality control. There are two types of these recordings–the ones where users intended to talk to the assistant, and the ones where they didn’t. The intentional ones come after a user says a wake phrase such as “Hey Siri,” causing the assistant to start listening. The unintentional ones happen when the assistant mistakenly thinks it’s heard its wake phrase and starts recording speech or sounds the user didn’t intend to communicate. In both cases, the recordings that companies study are not associated with the user’s identity.

‘Less than 1%’

The tech companies use the recordings–the intentional ones–to measure and improve the assistants’ ability to understand individual users’ words, and to make the assistants’ responses more appropriate and helpful. Apple says it uses less than 1% of daily Siri interactions for this purpose, and that most recordings are just a few seconds long.

The Bloomberg, VRT NWS, and Guardian stories don’t question the need for digital assistant quality assurance, but seem surprised that actual human beings are reviewing the recordings.

“. . . what people are certainly not aware of, simply because Google doesn’t mention it in its terms and conditions, is that Google employees can listen to excerpts from those recordings,” worries VRT NWS. “But the company does not explicitly state that that work is undertaken by humans who listen to the pseudonymised recordings,” frets The Guardian.

The idea that a tech company’s employees would review a small, anonymized sample of the content of one of its services shouldn’t be a shocker. It’s just another kind of proprietary company data that’s kept under lock and key, just like the company’s marketing plans or user account data. If employees leaked or misused such data, they could be fired or even prosecuted.

And if not human beings, who would review the QA samples of personal assistant recordings? Monkeys? Bots? Actually, it may be possible to grade Siri’s responses using AI technology, but if Apple already had the intelligence needed to build such technology, why wouldn’t it just build that intelligence into Siri?

Siri, misheard

The Guardian’s “whistleblower” source says it’s in the recordings of unintentional interactions–where the assistant mishears its wake word or phrase–that the most private and sensitive data is captured.

Sometimes, “you can definitely hear a doctor and patient, talking about the medical history of the patient. Or you’d hear someone, maybe with car engine background noise—you can’t say definitely, but it’s a drug deal … you can definitely hear it happening.”

The person said Siri can be triggered by a “zip” (or unzip) sound, which certainly could precede some sensitive dialogue. (I was unable to replicate this false positive with several zippers at several distances from the microphone.)

But the unintentional recordings serve an important purpose too. Apple, for example, uses these recordings to understand when and how Siri mistakes some word or sound as its wake phrase. It uses that information to reduce these false positives.

The Guardian’s (single) source says the Apple contractor the source works for encourages staffers to report unintentional recordings “as a technical problem.” The person doesn’t say exactly how often these false triggers occur, and neither does Apple. Google told Wired that false triggers occur in about 0.02% of its Assistant’s interactions. If they happen more often with Siri, that would seem to underline the need to gather samples of Siri’s wake-phrase hearing errors.

Even though the unintentional recordings are just as anonymized as the intentional ones, the whistleblower still worries that somebody might make the connection between the recordings and actual user accounts.

“Apple is subcontracting out, there’s a high turnover. It’s not like people are being encouraged to have consideration for people’s privacy, or even consider it. If there were someone with nefarious intentions, it wouldn’t be hard to identify [people on the recordings].”

It’s hard to understand why ill-intentioned contractor employees would go to all the trouble of searching out the name of the person on a recording, then risk their job (and more) to do what? Blackmail the person? Anything’s possible, I guess, but it seems a little far-fetched.

The Guardian’s source also seems alarmed that Apple would use contract labor–not Apple employees–for Siri quality control. Welcome to the tech industry. University of California Santa Cruz researchers estimated in 2018 that 39,000 people do contract work for tech companies in San Mateo and Santa Clara counties (where Apple, Facebook, Google, and many other Silicon Valley companies are based). For the first time last year, Google employed more contractors than full-time employees. Contractors do all kinds of work, from product design to sales to content moderation.

Apple says its Siri interactions are analyzed in secure facilities by reviewers who are obligated to adhere to “Apple’s strict confidentiality requirements.”

The nut of The Guardian’s story comes at paragraph 15:

The contractor argued Apple should reveal to users this human oversight exists—and, specifically, stop publishing some of its jokier responses to Siri queries. Ask the personal assistant “are you always listening”, for instance, and it will respond with: “I only listen when you’re talking to me.”

That is patently false, the contractor said. They argued that accidental triggers are too regular for such a lighthearted response.

Maybe Siri’s response should be: “I only listen when you’re talking to me (or when I think you’re talking to me, or when I’m trying to learn and exploit your most personal secrets).”

The Guardian rightly points out that while Alexa and Google Assistant let users opt out of the recordings, the only way Siri users can opt out is to stop using Siri. It’s likely Apple simply did not perceive a user privacy threat in its Siri QA process. Given that users can opt out of innocuous matters such as having their app crashes reported back to Apple, offering a similar choice for Siri quality assurance would be consistent with its other practices.

Context is everything

The digital assistant privacy scare stories show up at a time when Alexa, Siri, and Assistant are under the microscope. They represent a new AI technology that’s not well understood. It’s still a common belief among the public, for example, that the microphones used by such digital assistants are always listening, whether a wake word has been uttered or not. Not true. The audio software behind the assistants uses multiple levels of wakefulness; in their resting state, the assistants listen only for their wake word or phrase.
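To make that resting state concrete, here is a rough sketch, in Python, of what that kind of gating logic looks like. It is purely illustrative: the microphone input and the wake-word model are simulated with stand-in functions, and none of the names or thresholds come from Apple’s, Google’s, or Amazon’s actual software.

import random

WAKE_THRESHOLD = 0.95  # hypothetical confidence cutoff for the wake-word model


def read_audio_frame() -> bytes:
    """Stand-in for a short microphone buffer (roughly 20 ms of audio)."""
    return bytes(320)  # simulated silence


def wake_word_score(frame: bytes) -> float:
    """Stand-in for a small on-device model that rates how much the frame
    sounds like the wake phrase. Returns a confidence between 0 and 1."""
    return random.random()  # simulated; a real model would score the audio


def handle_request() -> None:
    """Stand-in for the awake state: record the query, send it for full
    speech recognition, respond, then fall back asleep."""
    print("Wake phrase detected (or mis-detected): now handling the request.")


def assistant_loop(max_frames: int = 1000) -> None:
    for _ in range(max_frames):
        frame = read_audio_frame()
        if wake_word_score(frame) >= WAKE_THRESHOLD:
            # False positives happen here: a zipper, a TV, or a
            # similar-sounding word can push the score over the threshold.
            handle_request()
        # Otherwise the frame is discarded; nothing is stored or sent.


if __name__ == "__main__":
    assistant_loop()

The point of the sketch is simply the control flow: until the tiny wake-word check fires, audio frames are scored and thrown away, and only a match (real or mistaken) switches the device into its fully awake, recording state.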

And the stories come against a backdrop of general anxiety and distrust of the tech giants. Because of irresponsible behavior by companies like Facebook, we no longer assume the companies will be responsible stewards of our data. Apple has built real privacy features into its hardware and services, but it has also talked loudly on the subject, which seems to have served as an invitation for some to paint the company as a hypocrite on the issue. That’s part of this, too.

A functioning tech media holds big tech companies to account on privacy issues. But it should dig for privacy practices that could lead to demonstrable harm to consumers before sounding the alarm.

 
