Our collective privacy problem is not your fault

By Lindsey Barrett

The tendency to blame individuals for structural privacy failures is everywhere. You see it in criticisms of “sharenting” that chide parents for indulging the natural instinct to brag about their children by sharing pictures of them, rather than blaming the companies that share those photos in ways parents wouldn’t expect, or the lobbyists who pushed (and continue to push) for weaker children’s privacy regulations. You see it in explainer pieces suggesting that any problems created by an opaquely exploitative technology are the user’s fault for using it. And you see it from policymakers who frame privacy violations as the preventable consequence of generational recklessness.

Individuals who aren’t able to bike to work or who use plastic water bottles aren’t the population that’s primarily responsible for global warming—Exxon, Shell, and their peer companies are, along with the lobbyists who press their interests and the politicians who listen to them and take their money. Similarly, blaming individuals for structural privacy failures, even as a part of illustrating how badly reforms are needed, is a far too common posture, and needs to be rooted out of privacy discussions for good.

Recently, the New York Times published an illuminating series of articles examining the surreptitious collection of location-tracking data, based on a file of such data that traced the movements of over 12 million Americans over several months. The file illustrated the patterns of people from all walks of life, and in all parts of their lives, such as a senior Department of Defense official and his wife attending the Women’s March, the comings and goings of kids at the local high school in Pasadena, and another man visiting a hospital regularly with his wife until she died. The series provides an important glimpse into a far-reaching problem that deeply impacts people’s lives, whether they realize it or not: powerful companies are able to cheaply, opaquely, and profitably track you, learn things about you, share that information with other entities, and use this data to make decisions about you, almost always without your informed consent or ability to stop it from happening.

Yet even as it demonstrates location tracking’s invasiveness and describes shady corporate practices in appropriately critical terms, the first piece of the series often frames the problem of pervasive privacy violations as a failure of individual foresight. The authors describe how we “shed” data, instead of explaining that companies furtively take it from us, and end on the elegantly ominous conclusion that “the greatest trick technology companies ever played was persuading society to surveil itself.” The piece correctly criticizes how the system operates, but it also blames every complicit schmuck who’s gullible enough to walk around with a smartphone in their pocket for allowing that system to exist, rather than focusing on the companies that built and maintain it or the policymakers who fail to dismantle it.


The subsequent articles have done a better job of eschewing that framing, and my objective is not to beat up on two thoughtful reporters for a rhetorical slip while they’re devoting much-needed scrutiny to a profoundly important issue. But the series is illustrative of a much larger problem. Privacy rights in the United States suffer not only from inadequate legal protections but also from the fact that the companies profiting from our data have been far too effective at convincing policymakers, journalists, and the rest of us that their violations of our privacy are our fault.

In some ways, the tendency to blame individuals simply reflects the mistakes of our existing privacy laws, which are built on a vision of privacy choices that generally considers the use of technology to be a purely rational decision, unconstrained by practical limitations such as the circumstances of the user or human fallibility. These laws are guided by the idea that providing people with information about data collection practices in a boilerplate policy statement is a sufficient safeguard. If people don’t like the practices described, they don’t have to use the service.

But as researchers have repeatedly demonstrated and anyone using digital services in 2019 can recognize, people are often unable to absorb the information that companies supply or make corresponding changes to their behavior. First of all, the explanations of data collection practices that companies provide generally aren’t all that informative. In an illustration of how ill-matched the idea of privacy policies is with reality, a 2008 study found it would take the average American 40 minutes a day to read every privacy policy they encountered, at a cost of up to $5,038 a year in lost productivity. These policies are also generally written in complex legalese that most people don’t understand (as the Times piece helpfully notes).

To make matters worse, companies also rely on manipulative design tricks to wheedle people into spending more time on a service, spending more money, or providing more data than they intended. When we blame people for violations of their privacy because they didn’t read a privacy policy or because they generally rely on digital services, we are blaming them for failing to make meaningful privacy choices in an ecosystem that is designed to make these choices functionally impossible.


When you combine this backdrop with decades of lobbyists and a certain strain of economists assuring us that most privacy violations are trivial and unworthy of strong legal prohibitions, blaming people for their own exploitation doesn’t always seem obviously wrong. This can even be true for those who might otherwise look skeptically at policy arguments that ignore how coercion works in the real world in favor of focusing on star-spangled, free-market tropes of personal responsibility and freedom to choose. The tendency to blame individuals for using privacy-invasive services rather than the companies doing the invading can be undergirded by a sense that these services are avoidable or superficial, which can lead to the conclusion that any harms people experience as a result of using Facebook or Google are just deserts. If you don’t like it, hey, no one’s making you use it, and the world will continue spinning on its axis if you don’t peruse TikTok.

But many of these services aren’t avoidable in any meaningful sense. Companies such as Facebook track you even if you don’t have an account, and all kinds of ad-tech companies and data brokers make a lot of money by compiling oceans of data about you without you realizing it. Even when people do take the time to read the convoluted privacy policies, they may not have another option if they dislike the practices described. An employee might be required to use an employer-supplied Gmail account or the biometric entry system that guards their workplace. In this highly consolidated ecosystem, a less invasive alternative to a service you object to might be highly impracticable to use, too expensive, or simply nonexistent. Telling people that invasive corporate tracking is their fault simply isn’t a serious position in 2019.

Nor are privacy violations trivial. Location tracking can assist stalkers and create national security threats, as the Times series terrifyingly demonstrates, but seemingly productive uses of the information are often harmful as well. Data collected about us is used to assess us, infer attributes that could be invasive or revealing, and create profiles of us that can affect important life opportunities without our even knowing it or having any ability to challenge it. Exploitative data practices and permissive laws that incentivize them affect us all.


None of these ideas are novel; they’re what privacy scholars and advocates have been screaming at the tops of their lungs for decades. But as Congress, state legislatures, and presidential candidates examine different proposals for how to better protect our privacy, it’s never been more important to be crystal clear about how the system operates, how power skews and fuels it, and where the blame for digital exploitation truly lies.

Blame the tech and telecom companies that have been warned over and over that their collection practices are coercive and socially damaging, whose product decisions have a ripple effect throughout the ecosystem, and whose lobbying has prevented meaningful privacy protections from being enacted at both state and federal levels. Blame the small and medium-sized companies that eagerly follow in those companies’ footsteps, and which violate people’s privacy just like the behemoths do. Blame the policymakers and regulators who have actively sought to create and preserve legal impunity for companies that surveil us. But don’t blame people who, in a big, confusing, panopticon-like world, are simply doing the best they can.


Lindsey Barrett is a staff attorney and teaching fellow at the Institute for Public Representation at Georgetown Law, where she represents clients on consumer protection matters before the FTC and the FCC.
