Some recent articles on privacy and surveillance, available on SSRN, that you may want to add to your to-read list:
Three Paradoxes of Big Data
Neil M. Richards
Washington University in Saint Louis – School of Law
Jonathan H. King
Washington University in Saint Louis
September 3, 2013
66 Stanford Law Review Online 41 (2013)
Big data is all the rage. Its proponents tout the use of sophisticated analytics to mine large data sets for insight as the solution to many of our society’s problems. These big data evangelists insist that data-driven decisionmaking can now give us better predictions in areas ranging from college admissions to dating to hiring to medicine to national security and crime prevention. But much of the rhetoric of big data contains no meaningful analysis of its potential perils, only the promise. We don’t deny that big data holds substantial potential for the future, and that large dataset analysis has important uses today. But we would like to sound a cautionary note and pause to consider big data’s potential more critically. In particular, we want to highlight three paradoxes in the current rhetoric about big data to help move us toward a more complete understanding of the big data picture. First, while big data pervasively collects all manner of private information, the operations of big data itself are almost entirely shrouded in legal and commercial secrecy. We call this the Transparency Paradox. Second, though big data evangelists talk in terms of miraculous outcomes, this rhetoric ignores the fact that big data seeks to identify at the expense of individual and collective identity. We call this the Identity Paradox. And third, the rhetoric of big data is characterized by its power to transform society, but big data has power effects of its own, which privilege large government and corporate entities at the expense of ordinary individuals. We call this the Power Paradox. Recognizing the paradoxes of big data, which show its perils alongside its potential, will help us to better understand this revolution. It may also allow us to craft solutions to produce a revolution that will be as good as its evangelists predict.
NSA Surveillance Since 9/11 and the Human Right to Privacy
G. Alex Sinha
Human Rights Watch; American Civil Liberties Union
August 31, 2013
Loyola Law Review, New Orleans, Forthcoming
Since shortly after 9/11, the National Security Agency (“NSA”) has been collecting massive amounts of data about American citizens and permanent residents, ostensibly with the aim of preempting future terrorist attacks. While the NSA’s program has invited substantial scholarly attention, specifically surrounding its compliance with the United States Constitution and various domestic statutes, the academic debate about its merits entirely omits one crucial fact: the United States is also legally obliged to protect a human right to privacy, as codified in Article 17 of the International Covenant on Civil and Political Rights (“ICCPR”). This paper seeks to eliminate the blind spot caused by that omission, illustrating the relevance of human rights for assessing the legality and propriety of NSA surveillance. It argues that even under conservative assumptions about the scope of the NSA program and the coverage of the ICCPR, there is good reason to think that the program violates the covenant. At the very least, as this detailed case study of the NSA program demonstrates, more clarity from the Human Rights Committee on the right to privacy is essential in a world characterized by increasing government surveillance.
Part 1 of this paper provides a brief history of domestic spying in the United States, leading up to and through the passage of the Foreign Intelligence Surveillance Act of 1978 (“FISA”). FISA constituted the first major legislative effort to regulate the electronic surveillance of American citizens or permanent residents within the United States for foreign intelligence or international counterterrorism purposes, and Part 1 concludes by outlining the key provisions of this landmark statute. Part 2 of the paper traces the chronology of revelations about the NSA program and relevant statutory developments, starting with the original disclosure of the program in December of 2005 and ending with revelations made in August of 2013. Part 3 of the paper unpacks the language of Article 17 of the ICCPR, and discusses the obligations of states parties to uphold the right to privacy enshrined there. This Part then compares what we know about the NSA program with state responsibilities under the ICCPR.
Fool’s Gold: An Illustrated Critique of Differential Privacy
Jane R. Bambauer
University of Arizona – James E. Rogers College of Law
Krishnamurty Muralidhar
Gatton College of Business & Economics, University of Kentucky
Rathindra Sarathy
Oklahoma State University – Stillwater
September 15, 2013
16 Vanderbilt Journal of Entertainment & Technology Law, 2014 (Forthcoming)
Arizona Legal Studies Discussion Paper No. 13-47
Differential privacy has taken the privacy community by storm. Computer scientists developed this technique to allow researchers to submit queries to databases without being able to glean sensitive information about the individuals described in the data. Legal scholars champion differential privacy as a practical solution to the competing interests in research and confidentiality, and policymakers are poised to adopt it as the gold standard for data privacy. It would be a disastrous mistake.
This Article provides an illustrated guide to the virtues and pitfalls of differential privacy. While the technique is suitable for a narrow set of research uses, the great majority of analyses would produce results that are beyond absurd: average income in the negative millions, or correlations well above 1, for example.
The legal community has been misled into thinking that differential privacy can offer the benefits of data research without sacrificing privacy. In fact, differential privacy will usually produce either very wrong research results or very useless privacy protections. Policymakers and data stewards will have to rely on a mix of approaches: perhaps differential privacy where it is well-suited to the task, and other disclosure prevention techniques in the great majority of situations where it isn’t.
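To make the critique concrete, here is a minimal sketch (not from the article) of the Laplace mechanism that underlies most differential privacy proposals: a query answer is released only after adding noise calibrated to the query’s sensitivity divided by the privacy parameter epsilon. The dataset and parameter values below are invented for illustration, but they show how a strong privacy setting on a small dataset can yield exactly the kind of absurd output the article describes, such as an average income in the negative millions.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Clamp the log argument to avoid log(0) in the (vanishingly rare) edge case.
    return -scale * math.copysign(1.0, u) * math.log(max(1.0 - 2.0 * abs(u), 1e-300))

def dp_average(incomes, upper_bound, epsilon):
    """Differentially private mean of incomes clamped to [0, upper_bound].

    Changing one person's income moves the mean by at most upper_bound / n,
    so that is the query's sensitivity; noise scale is sensitivity / epsilon.
    """
    clamped = [min(max(x, 0.0), upper_bound) for x in incomes]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = upper_bound / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

incomes = [40_000, 55_000, 62_000, 48_000, 51_000]  # hypothetical survey of 5 people

# Strong privacy (epsilon = 0.001) on a tiny dataset: the noise scale is
# 500_000 / (5 * 0.001) = 100 million, so a reported "average income" in the
# negative millions is an entirely plausible output.
print(dp_average(incomes, upper_bound=500_000, epsilon=0.001))

# Weak privacy (large epsilon): the answer stays close to the true mean,
# but the privacy guarantee is correspondingly weak.
print(dp_average(incomes, upper_bound=500_000, epsilon=100.0))
```

The tension the article identifies is visible in the arithmetic: the noise scale grows as epsilon shrinks, so the analyst must choose between results that are accurate and a guarantee that is meaningful, and for small datasets there is often no epsilon that delivers both.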
Why Data Privacy Law Is (Mostly) Constitutional
Neil M. Richards
Washington University in Saint Louis – School of Law
October 2, 2013
Neil M. Richards, Intellectual Privacy, Oxford University Press, 2014, Forthcoming
This essay argues that privacy critics who contend that most privacy rules create constitutional problems overstate their case. Since the New Deal, American law has rested on the wise judgment that, by and large, commercial regulation should be made on the basis of economic and social policy rather than blunt constitutional rules. This has become one of the basic principles of American Constitutional law. Although some observers have suggested that the Supreme Court’s recent decision in Sorrell v. IMS Health (2011) changes this state of affairs, such readings are incorrect. Sorrell involved a challenge to a poorly drafted Vermont law that discriminated on the basis of both content and viewpoint. Such a law would have been unconstitutional even if it had regulated unprotected speech. As the Sorrell Court made clear, the real problem with the Vermont law at issue was that it didn’t regulate enough, unlike the “more coherent policy” of the undoubtedly constitutional federal Health Insurance Portability and Accountability Act of 1996.
Data privacy law should thus rarely be thought of as implicating serious constitutional difficulties, which is a good thing. As we move into the digital age, in which more and more of our society is affected or constituted by data flows, we face a similar threat. If “data” were somehow “speech,” virtually every economic law would become clouded by constitutional doubt. Economic or commercial policy affecting data flows (which is to say all economic or social policy) would become almost impossible. This might be a valid policy choice, but it is not one that the First Amendment commands. Any radical suggestions to the contrary are unsupported by our Constitutional law. In a democratic society, the basic contours of information policy must ultimately be up to the people and their policymaking representatives, and not to unelected judges. We should decide policy on that basis, rather than on odd readings of the First Amendment.
Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms
Kate Crawford
Microsoft Research; MIT Center for Civic Media; University of New South Wales (UNSW)
Jason Schultz
New York University School of Law
Boston College Law Review, Vol. 55, No. 1, 2014
NYU School of Law, Public Law Research Paper No. 13-64
NYU Law and Economics Research Paper No. 13-36
The rise of “big data” analytics in the private sector poses new challenges for privacy advocates. Unlike previous computational models that exploit personally identifiable information (PII) directly, such as behavioral targeting, big data has exploded the definition of PII to make many more sources of data personally identifiable. By analyzing primarily metadata, such as a set of predictive or aggregated findings, without displaying or distributing the originating data, big data approaches often operate outside of current privacy protections (Rubinstein 2013; Tene and Polonetsky 2012), effectively marginalizing regulatory schema. Big data presents substantial privacy concerns – risks of bias or discrimination based on the inappropriate generation of personal data – risks we call “predictive privacy harms.” Predictive analysis and categorization can pose a genuine threat to individuals, especially when performed without those individuals’ knowledge or consent. While not necessarily harms that fall within the conventional “invasion of privacy” boundaries, such harms still center on an individual’s relationship with data about her. Big data approaches need not rely on having a person’s PII directly: a combination of social network analysis, interpretation of online behaviors, and predictive modeling can create a detailed, intimate picture with a high degree of accuracy. Furthermore, harms can still result when such techniques are done poorly, rendering an inaccurate picture that is nonetheless used to impact a person’s life and livelihood.
In considering how to respond to evolving big data practices, we began by examining the existing rights that individuals have to see and review records pertaining to them in areas such as health and credit information. But it is clear that these existing systems are inadequate to meet current big data challenges. Fair Information Privacy Practices and other notice-and-choice regimes fail to protect against predictive privacy risks, in part because individuals are rarely aware of how their data is being used to their detriment or what determinations are being made about them, and in part because, at various points in big data processes, the relationship between predictive privacy harms and the originating PII may be complicated by multiple technical processes and the involvement of third parties. Thus, past privacy regulations and rights are ill equipped to face current and future big data challenges.
We propose a new approach to mitigating predictive privacy harms – that of a right to procedural data due process. In the Anglo-American legal tradition, procedural due process prohibits the government from depriving an individual of life, liberty, or property without affording her access to certain basic procedural components of the adjudication process – including the right to review and contest the evidence at issue, the right to appeal any adverse decision, and the right to know the allegations presented and to be heard on the issues they raise. Procedural due process also serves as an enforcer of the separation of powers, prohibiting those who write laws from also adjudicating them.
While some current privacy regimes offer nominal due process-like mechanisms in relation to closely defined types of data, these rarely include all of the components necessary to guarantee fair outcomes, and they arguably do not apply to many kinds of big data systems (Terry 2012). A more rigorous framework is needed, particularly given the inherent analytical assumptions and methodological biases built into many big data systems (boyd and Crawford 2012). Building on previous thinking about due process for public administrative computer systems (Steinbock 2005; Citron 2010), we argue that individuals who are privately and often secretly “judged” by big data should have rights similar to those afforded individuals judged by the courts, with respect to how their personal data has been used in such adjudications. Using procedural due process principles, we outline, by analogy, a system of regulation that would provide such rights against private big data actors.
The Massive Metadata Machine: Liberty, Power, and Secret Mass Surveillance in the U.S. and Europe
Bryce Clayton Newell
University of Washington – The Information School
October 11, 2013
I/S: A Journal of Law and Policy for the Information Society (ISJLP), 10, 2014
This paper explores the relationship between liberty and security implicated by secret government mass surveillance programs. It includes both doctrinal and theoretical analysis. Methodologically, the paper examines judicial reasoning in cases where parties have challenged secret government surveillance programs on constitutional or human rights grounds, both in United States courts and at the European Court of Human Rights (ECtHR). Theoretically, the paper draws on theories in the fields of law, surveillance studies, and political theory to question how greater recognition of citizens’ rights to conduct reciprocal surveillance of government activity (for example, through expanded rights to freedom of information) might properly balance power relations between governments and their people. Specifically, the paper questions how liberal and neo-republican conceptions of liberty, defined, respectively, as the absence of actual interference and the absence of the possibility of arbitrary domination, together with the jurisprudence of the ECtHR, can inform the way we think about the proper relationship between security and liberty in the post-9/11, post-Snowden United States of America.