2018 has been a turning point in the privacy debate. In addition to the passage of extensive consumer privacy protections in Europe, a series of breaches, hacks and revelations of distasteful tech privacy practices has completely changed the scope of the conversation. While the U.S. has yet to pass comprehensive privacy legislation — barring more ambitious attempts at the local and state level — the stage is set for the federal government to at least begin considering this monumental undertaking. In the second post of our Future of Privacy series, we're detailing the dimensions of the privacy policy debate in broad strokes so you can begin to understand the discussion and how it relates to you.

What do we know about federal privacy legislation?

Not much is known about potential legislation, but a series of congressional hearings throughout the year has signaled Congress' interest in the privacy issue. However, there is as yet no set timeline, nor is it clear that Congress has or will have a unified approach to legislation, despite privacy being a bipartisan issue. This ambiguity about the shape legislation will take can be seen within the tech industry itself. Apple CEO Tim Cook spoke in Brussels in October about the need for strong privacy laws in the U.S., similar to the GDPR. Other tech companies, while supportive of privacy legislation, have been more muted about the specific protections they'd like to see.

Still, despite the lack of a unified federal approach, several themes have emerged as politicians, experts and tech companies have discussed what they believe to be the role of privacy legislation. These themes touch upon many aspects of privacy, but three questions are arguably the most important: whether states or the federal government should lead the way in privacy policy; whether privacy policies should merely add safeguards to existing data collection practices or revamp them entirely; and, finally, whether or not privacy should be seen as a fundamental human right. Debate isn't limited to these three considerations, but they provide a solid introduction to the privacy debate and its stakes for consumers, which is why we've chosen to focus on them in this piece.

The core tenets of the emerging privacy debate

Who should set the rules?

September was a busy month for tech companies. Following a Sept. 5 congressional hearing with social media companies, on Sept. 26, Congress met with the leadership from several major tech companies to discuss consumer privacy. Notably, all tech leadership present was in agreement on the need for strong federal legislation while simultaneously disparaging what was generally referred to as a “patchwork” of state legislation. Federal legislation was framed as a win-win for consumers, legislators and companies since Congress has the power to create a single clear standard. Suggestions for policy, however, were very general and amorphous, with the discussion dancing around the GDPR and California’s recently passed Consumer Privacy Act (CCPA). Tech companies have previously expressed distaste for the California law, so critics and consumer advocates saw the testimony as a not-so-subtle attempt to reset the progress that it and other states have made on privacy issues.

The biggest concern advocates have is that a federal law could undo the progress that comes from state-enacted legislation. For example, it's because California was the first state to adopt breach notification standards that companies today notify consumers of hacks and breaches across the board. Other states' initiatives have also pushed for minimum standards that companies themselves never offered until their hands were forced. A federal law, in theory, could jeopardize any privacy gains made through state legislation while eliminating the flexibility to address future privacy concerns arising from unanticipated features of the growing digital economy.

A second hearing held on Oct. 10, which featured policy experts and privacy advocates, discussed these concerns at length. One highlight was that state laws, as well as state attorneys general, are important "laboratories of democracy." Alastair Mactaggart, the wealthy developer who personally pushed for California's privacy law, also testified, explaining how the California law was not in conflict with business interests, as well as the importance of making federal law at least as strong as California's.

Does consumer consent need a 21st century upgrade?

To truly understand some of the proposed changes to state and federal privacy policy, it's important to talk about the logic of today's privacy paradigm. The central underpinning of approaches to privacy policy today is something called "notice and consent" (or notice and choice). It's so named because companies provide "notices" about the types of data collected and how that data is used. Consumers then have the "choice" of consenting to these rules (often in the form of "I agree" buttons); sometimes, notices are written like contracts, with the implicit understanding that use of a service means acceptance of its terms.

It’s increasingly being argued that this system is a mess. Notices hardly contain all the information consumers need. With the full details of a service being omitted from notices, consumers are often required to hunt down several documents whose contents have to be translated from legalese. Even if the notices comprehensively presented all relevant details, it’s worth considering that notices are much like snapshots and only relevant for a moment in time. Companies often imagine new ways to use data, many times without telling consumers. In fact, the emerging digital economy demands this kind of treatment of data. The way companies will get ahead of one another in the age of big data is to think up ways to use and share data that competitors, or even their own founders, haven’t. At the same time, unfortunately, our notions of consent remain static. Consent is something that’s usually taken to be indefinite; essentially, so long as the consumer continues to use the service, consent is assumed to continue — despite any radical transformations that occur to systems controlling a consumer’s data.

Some legal experts have also taken aim at what they see as the relatively low standard for consent, and for a multitude of reasons. The dwindling number of offline companies and services not aggressively engaged in the data economy, along with the fact that many online companies have comparable terms of service, leaves consumers choosing between the lesser of two evils. Additionally, because consumers can't pick and choose which terms they agree to, notices and terms and conditions read less like contracts and more like ultimatums. Given that most people aren't legal scholars and their time is constrained, some of these experts suggest that even though customers are using online services (only sometimes with an understanding of what they're agreeing to), this isn't consent, but mere relenting or acquiescence. None of this analysis even touches upon companies like data brokers and other third parties that simply acquire data without consumers' knowledge. The biggest problem these experts posit, however, is that even if the privacy tradeoffs individuals make when clicking "I agree" are personally worth it, in the long run the permissions granted by such policies harm all consumers as a collective. This invokes a point that some privacy experts have made — privacy is no longer an individual problem, because accessing an individual's data can threaten the privacy of their associates. In aggregate, this leads to a society where data breaches and all other types of invasive behavior become more common and harder to prevent or control.

With some of these insights in mind, newer privacy proposals, like the GDPR, have attempted to modify the notice and consent model by making some aspects of data collection require "opt-in" consent as opposed to "opt-out" consent. This means that by default those aspects of data collection won't occur unless the consumer explicitly opts in. It's worth noting that in America, even the ambitious CCPA passed this year doesn't require opt-in consent — although, in many cases, it makes opting out of data collection easier. Other provisions of the GDPR make notices far more transparent, shoring up the notice and consent philosophy. The California law attempts to build upon notice and consent, albeit in a different manner. Through opt-outs, consumers can choose which firms to share data with, and companies might provide incentives to those willing to share their data.

What kind of right is the right to privacy?

There’s another debate, above the level of policy proposals, that’s taking place as well. This debate is more of a philosophical discussion about the type of legal perspective necessary to codify an individual’s right to privacy. Given that most of this discussion takes place in developed Western economies with robust property protections, most of the debate has framed privacy as a matter of property rights. Our personal information is ours to share as we wish, thus privacy failures are simply intrusions on property rights. Viewed this way, laws improving upon notice and consent should focus on making notices as transparent and thorough as possible while potentially offering incentives for consenting customers.

This perspective exists on a gradient, with some viewing the tech industry's failure to compensate consumers for the gains companies make using their data as a fundamental flaw. Individuals with this particular view hold that data collection is much like mining a natural resource. Through our interaction with technology, we produce value in the form of data used to train artificial intelligence and create new business models. Putting a price on data will, in theory, limit how much companies "consume" and reduce the severity of breaches, while simultaneously limiting the economic inequalities that the tech industry generates. Others who view data as property may focus only on issues like data portability — that is, the ability to dissociate your data from one entity and transfer it to another. Indeed, both the GDPR and the CCPA affirm some notion of data portability.

But some legal scholars and experts are wondering if this property paradigm is the right approach to privacy. Just like notice and consent, data as property can be a very messy concept in practice. For example, a single strand of data is often created and “owned” by more than one party, and as we noted above, data belonging to one individual can have privacy implications for their family and friends. Because of this, it isn’t clear that a straightforward application of property rights will be capable of fully encompassing all the harm that misused data can produce. What’s more, given all the ways our data could be used against us in the future, either to contradict our own economic interests or to reveal our personalities, beliefs and motivations to governments, corporations and individuals alike, some have argued that privacy should instead be codified as a human right.

What does this mean for the average consumer?

It's still too early in the privacy debate to tell which blend of perspectives will ultimately win out, and it likely won't be until the digital economy fully matures that we, as a society, know exactly what we'll need. Hopefully, this post and the Future of Privacy series thus far have given you insight into why our existing notions of privacy are wholly inadequate for addressing the potential downsides of the emerging data economy. With this knowledge, you can make a more informed decision about the elected officials you support and the types of products and services you purchase.

Keep up with the most important tech and privacy stories by following our technology blog.