Di-agnosis
Essay
8,154 words
2019

Most Americans object to the collection of their data, yet use services that (often unbeknownst to them) track their every click, movement, and conversation. The information-industrial complex’s production of knowledge about our bodies and behaviors relies on an attendant production of ignorance. This agnogenesis is twofold: companies obfuscate their data collection practices through various aesthetic-semantic strategies; and users assent to these practices through cognitive limitations that bound their rationality.

Part 1 surveys the contemporary data economy. Part 2 outlines the law governing data collection from the 1970s to the present. Part 3 discusses corporate strategies to hide data collection practices. Part 4 considers the cognitive limitations that shape assent to these practices.

As every man goes through life he fills in a number of forms for the record, each containing a number of questions... There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web, and if they materialized as rubber bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city.1Aleksandr Solzhenitsyn, Cancer Ward (New York: Dial Press, 1968), 192.

When Aleksandr Solzhenitsyn wrote of administrative records in 1968 he could have hardly imagined the tangle of threads a half-century later. Today the average Internet user enters into sixty-five contracts each week—two thousand each year.2As noted in Lorrie Faith Cranor and Aleecia M. McDonald, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society 4 (2008): 560–561. This is a conservative estimate based on Nielsen Online data of American Internet users from March, 2008. Another 2008 study by Harald Weinrich found an average of 78 unique contracts per month. Every app we install and website we browse has its own Terms of Service (ToS), a contract governing the relation between providers and users.3Marco Lippi et al., “CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service,” Artificial Intelligence and Law 27 (2019): 117. At this moment, you are bound by at least as many as there are words in this sentence: the computer or phone on which you are reading, the browser by which you accessed this reading, the services updating weather data and fetching email subtly in the background, and untold others.4Even if you happen to be reading in print, forget not the printer license. My apologies.

Just as the invention of airplanes led to the propertization of airspace—a shift in the perceived value of a resource (in this case, air) often triggers a struggle to create and enforce property rights in that resource—the growth of cyberspace has led to the propertization of its medium: data.5Joshua A.T. Fairfield, Owned: Property, Privacy, and the New Digital Serfdom (Cambridge, UK: Cambridge University Press, 2017); Stuart Banner, American Property: A History of How, Why, and What We Own (Cambridge: Harvard University Press, 2011). When Larry Page and Sergey Brin launched Google in 1998, the search engine’s privacy policy reflected the early digital era—a time when little data was collected and users were always viewed “in aggregate, not as individuals.” “Google,” the first disclosure read, “is sensitive to the privacy concerns of its users. The Internet allows individuals to explore and communicate with unprecedented ease, but it also allows websites to collect and distribute personal information with equal ease. We at Google know that many users are, understandably, concerned about such practices, and we wish to make clear our policy for collecting and using personal information.”

It then listed the four types of information Google would amass: aggregated search activity; personal information you provide; clickthrough information; and cookies. Two decades later, Google’s privacy policy enumerates forty-one collection categories: among them, content you create, upload, or receive from others when using our services; time and date of calls and messages; duration of calls; terms you search for; views and interactions with content and ads; people with whom you communicate or share content; activity on third-party sites and apps that use our services; voice and audio information when you use audio features; location data from GPS, Wi-Fi access points, and cell towers; and device sensor data.6Charlie Warzel and Ash Ngu, “Google’s 4,000-Word Privacy Policy Is a Secret History of the Internet,” New York Times, July 10, 2019. The expansion of Google’s collection practices—the spilling over of its data belly—reflects the growth of the “information-industrial complex.”7Shawn Powers and Michael Jablonski termed the “information-industrial complex” after the military-industrial complex in their 2015 book The Real Cyber War: The Political Economy of Internet Freedom. Every click is captured, chat analyzed, eye-movement tracked, and heartbeat measured—and then monetized. One behavioral targeting firm, Audience Science, “records billions of behavioral events daily” to “create intelligent audience segments to connect people with relevant advertising.”8As quoted in Joseph Turow, Jennifer King, Chris Jay Hoofnagle, Amy Bleakley, and Michael Hennessy, “Americans Reject Tailored Advertising and Three Activities That Enable It,” Annenberg School for Communication, University of Pennsylvania (2009): 6.

This “data capitalism” exists not only online, but IRL.9That is, “in real life,” for those who remember a time when such distinctions were not necessary. In 2007, Anne Wojcicki, then married to Google co-founder Sergey Brin, launched 23andMe. Its $99 “Personal Genome Service” kit offered cut-rate genetic testing “to connect you to the 23 paired volumes of your own genetic blueprint... bringing you personal insight into ancestry, genealogy, and inherited traits.”10Charles Seife, “23andMe Is Terrifying, But Not for the Reasons the FDA Thinks,” Scientific American, November 27, 2013. Spit in a vial, send it in, and the company will analyze thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits. You may learn, for instance, that you have “cilantro taste aversion,” that you have one (or, in my case, 277) of 2,872 Neanderthal variants, or that you are predisposed to late-onset Alzheimer’s. A close analysis of 23andMe’s DNA—its Terms of Service—reveals that the Personal Genome Service isn’t primarily intended to be a medical device. Instead, it is “a front-end for a massive information-gathering operation.”11Ibid. 23andMe reserves the right to use individuated genomic information to inform you about events and to try to sell you products and services—or to sell this information to insurance companies and pharmaceutical firms. “The long game here is not to make money selling kits, although the kits are essential to get the base level data,” Patrick Chung, a 23andMe board member, disclosed. “Once it has the data, [the company] does actually become the Google of personalized health care.”12Elizabeth Murphy, “Inside 23andMe founder Anne Wojcicki’s $99 DNA Revolution,” Fast Company, October 14, 2013. 23andMe may also share information in its database—ten million people have submitted their DNA so far, though by 2021 that total could near one hundred million—with researchers and law enforcement: “Under certain circumstances your Personal Information may be subject to processing pursuant to laws, regulations, judicial or other government subpoenas, warrants, or orders. For example, we may be required to disclose Personal Information in coordination with regulatory authorities in response to lawful requests by public authorities, including to meet national security or law enforcement requirements.”13“Full Privacy Statement,” 23andMe, September 30, 2019; Antonio Regalado, “More than 26 million people have taken an at-home ancestry test,” MIT Technology Review, February 11, 2019. In July 2018, 23andMe shared the genetic data of five million customers in a “collaboration” with GlaxoSmithKline.14Anne Wojcicki, “A Note On 23andMe’s New Collaboration with GSK,” 23andMe, July 25, 2018. Earlier that year, in April, the Sacramento County District Attorney’s Office used data from consumer DNA testing to identify the long-elusive “Golden State Killer.”15Jocelyn Kaiser, “We will find you: DNA search used to nab Golden State Killer can home in on about 60% of white Americans,” Science, October 11, 2018. Consumer genetic information has since been used to find suspects in at least fifty cold cases.16Megan Molteni, “What the Golden State Killer Tells Us About Forensic Genetics,” Wired, April 24, 2019.

“Surveillance capitalism,” as social psychologist Shoshana Zuboff termed it, has propertized once-private information about our behaviors and our bodies—has “claim[ed] human experience as free raw material for translation into behavioral data.”17Shoshana Zuboff, “Big other: surveillance capitalism and the prospects of an information civilization,” Journal of Information Technology 30 (2015): 75–89. It has also re-propertized property. In 2014, John Deere—the world’s largest agricultural machinery maker—told the Copyright Office that farmers do not own their tractors. Tractors, like many other vehicles, household appliances, and common electronic devices, are increasingly no longer just mechanical: many depend on software for their functionality.

Because the purchase of hardware, John Deere argues, does not encompass the networked software systems that are integral to its operation, “the vehicle owner receives an implied license for the life of the vehicle to operate the vehicle.”18Darin Bartholomew, “Long Comment Regarding a Proposed Exemption Under 17 U.S.C. 1201” (2014): 6. This license may be voided if owners modify the vehicle for “diagnosis, repair, aftermarket personalization..., or other improvement.”19Ibid., 1. Companies including General Motors, Nest, and Fitbit offer similar licenses for the physical products they sell—sedans, thermostats, fitness monitors—products that owners tend to believe they straightforwardly own. Enforcing this restricted model of ownership requires pervasive surveillance of customer activity to detect any violations of the software’s copyright or licensing agreement. For manufacturers of these smart products, surveillance is both a business model and a regulatory mechanism—“both the modus operandi and raison d’être.”20Natasha Tusikov, “Precarious Ownership of the Internet of Things in the Age of Data”: 123; Taylor Shelton et al., “The ‘actually existing smart city,’” Cambridge Journal of Regions, Economy and Society 8 (2015): 16. “Smart,” argues technology critic Evgeny Morozov, may be better understood as an acronym for “Surveillance Marketed as Revolutionary Technology.”21Evgeny Morozov, as quoted in Trevor Timm, “The government just admitted it will use smart home devices for spying,” The Guardian, February 9, 2016.

This “death of privacy” is of little concern, so it is alleged, because the value of privacy died before it.22A. Michael Froomkin coined “the death of privacy” at the turn of the century in his article of the same name, published in Stanford Law Review 52 (2000). USA Today reports that “Millennials don’t worry about online privacy... [They] let it all hang out. Online that is.”23Hadley Malcolm, “Millennials don’t worry about online privacy,” USA Today, April 21, 2013. The New York Post agrees: “Whether this Brave New World keeps you up at night could depend on your age. Recent reports have millennials leading the charge to delete Facebook and other social media. Don’t buy it. If they’re deleting it’s because they’re bored, not because they’re repulsed by the Cambridge Analytica affair or suddenly started caring about digital privacy.”24Matthew Hennessey, “Why millennials will learn nothing from Facebook’s privacy crisis,” New York Post, April 7, 2018. Asked about his company’s suspect privacy practices, Disney CEO Robert Iger suggested that such concerns were antique. “Kids don’t care,” he said, adding that when he talked to his adult children about their online privacy concerns “they can’t figure out what I’m talking about.”25Gina Keating, “Disney CEO bullish on direct Web marketing to consumers,” Reuters, July 23, 2009.

In truth, young, old, and in-between care about privacy in equal measure. A 2010 study conducted by researchers at the University of Pennsylvania found no statistically significant differences between generational privacy concerns. For instance, 82% of 18–24 year-olds, 84% of 25–34 year-olds, and 92% of 55–64 year-olds have “refused to give information to a business or a company because [they] thought it was not really necessary or was too personal.” Likewise, 62% of 18–24 year-olds, 68% of 25–34 year-olds, and 64% of 55–64 year-olds insisted that “there should be a law that gives people the right to know everything that a website knows about them.”26Chris Jay Hoofnagle, Jennifer King, Su Li, and Joseph Turow, “How Different are Young Adults From Older Adults When it Comes to Information Privacy Attitudes & Policies?” Annenberg School for Communication, University of Pennsylvania (2010): 10–11. Both millennials (ages 18–34) and people over 35 believe in large numbers (70% and 77%, respectively, with a 3.1% margin of error) that “no one should ever be allowed to have access to my personal data or web behavior.”27Jay Stanley, “Do Young People Care About Privacy?”, American Civil Liberties Union, April 23, 2013. Even when they are told that the act of following them on websites will take place anonymously, the aversion remains: 68% of Americans “definitely” would not allow it, and 19% would “probably not allow it.”28Joseph Turow, Jennifer King, Chris Jay Hoofnagle, Amy Bleakley, Michael Hennessy, “Americans Reject Tailored Advertising and Three Activities That Enable It,” Annenberg School for Communication, University of Pennsylvania (2009): 3. Further, when asked to choose what, if anything, should be a company’s punishment if it “uses a person’s information illegally,” 18% of respondents indicate that the company should “be put out of business” and 35% select that “responsible executives should face jail time.”29Ibid., 4. Chris Jay Hoofnagle, director of the Berkeley Center for Law & Technology’s information privacy programs, concludes that “young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.”30Hoofnagle, King, Li, and Turow, “How Different are Young Adults From Older Adults When it Comes to Information Privacy Attitudes & Policies?”: 20.

Despite fervent, intergenerational demand for data privacy, Internet users are largely unaware of the extent to which their information is harvested. In an empirical investigation of Canadians’ knowledge of corporate data collection and usage practices (2019), Ekaterina Bogdanov and Marshall David Rice found that more than 60% of subjects could not correctly identify how their data were being mined and monetized. A total of 73.8% of respondents did not know that companies “insert inaudible, high frequency sounds into online advertisements that can then be picked up by devices to allow advertisers to track users’ activities.” Similarly, 74% of respondents did not know the correct answer (true) to the statement, “Some companies put cameras into billboards and smartphone/tablet apps that can recognize people’s emotions and use that information to show people advertisements that match their mood.”31Ekaterina Bogdanov and Marshall David Rice, “Privacy in Doubt,” Canadian Journal of Administrative Sciences 35 (2019): 166. Another study, by Murat Kezer (2016), indicates low levels of awareness of companies’ data abilities related to online browsing among US adults. For example, only 15.1% of respondents correctly stated that “when you go to a website it can collect information about you even if you do not register,” and only 5.8% knew that “popular search engine sites, such as Google, track the sites you come from and go to.”32Murat Kezer et al., “Age differences in privacy attitudes, literacy and privacy management on Facebook,” Cyberpsychology: Journal of Psychosocial Research on Cyberspace 10 (2016). Qualitative research by Yong Jin Park (2014) reports that while interviewees knew that they access the Internet on their computers, they were surprised to learn that they also access the Internet when they use mobile applications, and that their activities there are therefore no more private.33Yong Jin Park and S. Mo Jang, “Understanding privacy knowledge and skill in mobile communication,” Computers in Human Behavior 38 (2014): 300.

Perhaps unsurprisingly, levels of privacy awareness—and, in turn, vulnerability—trace sociodemographic disparities. In a follow-up study, Park found that education level and income were predictors of “privacy capital”: awareness of privacy, attitude toward the importance of privacy and data sharing, and confidence in the ability to maintain privacy.34Yong Jin Park and Jae Eun Chung, “Health privacy as sociotechnical capital,” Computers in Human Behavior 76 (2017): 227–236. Likewise, Bogdanov and Rice reveal that as income goes up, so too does knowledge of corporate data practices. 77% of respondents who earn less than $40,000 incorrectly believe that “companies do not monitor fitness-tracking devices and share the health information they collect with insurance companies.” This percentage is significantly lower among individuals who earned $40,000 to $99,999 (69% incorrect) and yet lower for those earning more than $100,000 (61% incorrect).35Bogdanov and Rice, “Privacy in Doubt: An Empirical Investigation of Canadians’ Knowledge of Corporate Data Collection and Usage Practices”: 168. While 51% of respondents with a university degree correctly identified that “retailers analyze women’s purchase histories to determine if they are pregnant in order to send them coupons and ads relevant to pregnancy,” that percentage fell to 29% for those who did not have a degree.36Ibid. Data collection, though, does not discriminate.

The rift between the demand for privacy and participation in services that violate it has been called the “privacy paradox.”37Anja Bechmann, “Non-informed Consent Cultures: Privacy Policies and App Contracts on Facebook,” Journal of Media Business Studies 11 (2014): 22. Why, if we profess to value our privacy as we do and are angered to learn of its abuse, do we use things whose very operation depends on the erasure of that privacy? Why do we power the “electric panopticon” if we don’t want to be its prisoner?38E.J. Smith and N.A. Kollars, “QR panopticism: user behavior triangulation and barcode-scanning applications,” Information Security Journal 24 (2015): 160. The information-industrial complex, and its attendant production of behavioral knowledge, is supported by the production of ignorance: technology companies obfuscate their data collection practices through various aesthetic-semantic strategies while cognitive limitations allow users to ignore the realities of these practices.39With “production of ignorance” I am referring to agnotology. Coined by Robert Proctor—from the Neoclassical Greek agnōsis (“not knowing”)—agnotology is “a neologism signifying the study of the cultural production of ignorance.”

The Informed Minority: A Brief History of Data Law

Since the 1970s, data privacy has been regulated by the Fair Information Practice Principles (FIPPs). FIPPs were initially proposed in 1973 by the Secretary’s Advisory Committee on Automated Personal Data Systems at the U.S. Department of Health, Education, and Welfare in response to the growing use of automated data systems.40Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma,” Harvard Law Review 126 (2013): 1882. The Advisory Committee, chaired by the RAND Corporation’s Willis H. Ware, recommended five principles for legislation and government oversight. First, “There must be no personal data record-keeping systems whose very existence is secret.” Second, “There must be a way for an individual to find out what information about him is in a record and how it is used.” Third, “There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without his consent.” Fourth, “There must be a way for an individual to correct or amend a record of identifiable information about him.” Fifth, “Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.”41Willis H. Ware et al., Records, Computers and the Rights of Citizens: Report of the Secretary’s Advisory Committee on Automated Personal Data Systems (Washington, D.C.: U.S. Department of Health, Education & Welfare, 1973): xx–xxi. These protections—notice/awareness, choice/consent, access/participation, integrity/security, and enforcement/redress—were adopted by the Federal Trade Commission and thereafter codified in the Organization for Economic Cooperation and Development Guidelines of 1980 and the Asia-Pacific Economic Cooperation Privacy Framework of 2004.42Lorrie Faith Cranor and Aleecia M. McDonald, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society 4 (2008): 546; Asia-Pacific Economic Cooperation, “APEC Privacy Framework,” 16th APEC Ministerial Meeting (2004): 1–23.43Bogdanov and Rice, “Privacy in Doubt: An Empirical Investigation of Canadians’ Knowledge of Corporate Data Collection and Usage Practices”: 164.

The Fair Information Practice Principles do not restrict corporate collection and use of consumer data. Under FIPPs, corporations can collect and use consumer data as they choose, so long as the consumer consents to such collection and use. To comply with these disclosure recommendations, corporations developed the privacy notice, inviting users to read about a service’s data-related practices.44Robert Pitofsky, “Privacy Online: Fair Information Practices in the Electronic Marketplace,” Federal Trade Commission (2000): i. When the Federal Trade Commission checked on industry self-regulation in 1998, it found that 92% of U.S. commercial websites collected some type of data, while only 14% provided comprehensive notice of their practices.45Ibid., ii. In the years that followed, the FTC conducted Internet sweeps to urge compliance and, by 2000, that figure had risen to 41%.46Ibid., 2. Multiple studies nonetheless showed that consumers remained reluctant to shop online because of privacy worries. In one survey, 92% of consumers reported “concern” and 67% reported “high concern” about the misuse of their personal information.47“About the FTC,” Federal Trade Commission.

Under the Federal Trade Commission Act of 1914, the FTC’s charter is predominantly economic. The agency’s vision is to maintain a “vibrant economy”; barriers to new markets are its chief concern.48Robert Pitofsky, “Privacy Online: Fair Information Practices in the Electronic Marketplace”: 6. Concluding that legislation would stifle the growth of the Internet economy, the FTC instead championed privacy seals as “an efficient way to implement privacy protection.”49Carlos Jensen and Colin Potts, “Privacy Policies Examined: Fair Warning or Fair Game?,” GVU Technical Report 3-4 (2003): 5. Two seal providers, TRUSTe and the Better Business Bureau, began certifying website privacy policies. TRUSTe requires companies to follow basic privacy standards and document their own practices. TRUSTe also investigates consumer allegations that certified websites are not abiding by their policies. The stringency of these certifications, though, has come into question. In fact, companies with TRUSTe seals typically offer less privacy-protective policies than those without TRUSTe seals.50“Privacy Shield Program Overview,” Privacy Shield Framework. The U.S. Department of Commerce and European Commission have since developed the Privacy Shield Framework, under which organizations “self-certify to the Department and publicly declare their commitment to adhere to the Principles.”51

Earlier this year, the American Law Institute published its Principles of the Law, Data Privacy.52“The ALI is the unofficial College of Cardinals of the U.S. legal profession,” notes Adam Levitin, a Georgetown University law professor and ALI member. “Even though its members are not representatives of the public, once the ALI approves these Restatements, lawyers, arbitrators, judges and justices use them as a handy reference guide to what the law is and should be.” Intended “to provide a framework for regulating data privacy and for duties and responsibilities for entities that process personal data,” the Principles reaffirm the self-regulatory foundation of the Fair Information Practice Principles.53“ALI Approves Principles of the Law, Data Privacy,” American Law Institute, May 22, 2019. The Principles provide that the “form by which consent is obtained must be reasonable under the circumstances, based on the type of personal data involved and the nature of the personal-data activity.”54Daniel J. Solove and Paul M. Schwartz, “ALI Data Privacy: Overview and Black Letter Text,” American Law Institute (2019): 19. Moreover, “the transparency statement shall clearly, conspicuously, and accurately explain the data controller or data processor’s current personal-data activities...”55Ibid., 33.

Underlying the Principles of the Law, Data Privacy, and the privacy seals and Fair Information Practice Principles before them, is the informed minority hypothesis. The informed minority hypothesis holds that “in competitive markets a minority of term-conscious buyers is sufficient to discipline and police sellers from using unfavorable terms.”56Yannis Bakos, Florencia Marotta-Wurgler, and David R. Trossen, “Does Anyone Read the Fine Print? Consumer Attention to Standard-Form Contracts,” Journal of Legal Studies 43 (2014): 1. That is, as long as individuals have notice of terms and the choice to accept those terms, they will regulate the market by selecting the services that are most fair. As privacy’s leading regulator, the Federal Trade Commission has thus stated that its goals are to “make information collection and use practices transparent” and to provide people “with the ability to make decisions about their data at a relevant time and context.”57“Protecting Consumer Privacy in an Era of Rapid Change,” Federal Trade Commission (2012): i. For fifty years, data privacy reforms have sought to provide a better “opportunity-to-read” that would “increase the number of readers of standard forms” and render the notion of assent more “robust.”58“Promoting Reading and the Opportunity to Read Terms,” Principles of the Law, American Law Institute (2010): 101–103; The “duty to read” may be attributed to Samuel Williston’s legal treatise, The Law of Contracts (1920).

In Intervening in Markets on the Basis of Imperfect Information (1979), legal scholars Alan Schwartz and Louis L. Wilde estimate that in order to be effective the informed minority needs to constitute twenty to thirty percent of customers.59Bakos, Marotta-Wurgler, and Trossen, “Does Anyone Read the Fine Print?”: 24. Empirical studies evaluating the legitimacy of the informed minority theory for Internet regulation find that the actual informed minority is two orders of magnitude smaller than required. After tracking 48,154 individuals over one month, Bakos, Marotta-Wurgler, and Trossen (2014) observed that only 0.2% of software shoppers access a product’s End-User Licensing Agreement. The average length of time spent on these licensing agreements was just over a minute.60Ibid., 21. Obar and Oeldorf-Hirsch (2018) reveal that 74% of users—communications students who study privacy, surveillance, and Big Data, no less—opted for the ‘quick-join’ button when joining a fictitious social network, NameDrop. Those who read the privacy policy, which at 250 words per minute should have taken a half-hour, did so for less than one minute.61Jonathan A. Obar and Anne Oeldorf-Hirsch, “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service of Social Networking Sites,” Information, Communication & Society (2018): 1. In consenting to NameDrop’s terms, the participants agreed to share collected information with the National Security Agency and potential employers. They also agreed to “immediately assign their first-born child to NameDrop, Inc. If the user does not yet have children, this agreement will be enforceable until the year 2050.”62Ibid., 7.

Despite evidence that this informed minority is neither a minority nor informed, data privacy law remains committed to individual self-determination. “[Mandated disclosure],” write Omri Ben-Shahar and Carl Schneider, “does not offend and generally appeals to the two fundamental political ideologies, the free-market principle, laissez-faire, and deregulation on one hand and the autonomy principle, consumer protection, and human empowerment on the other.”63Omri Ben-Shahar and Carl Schneider, More Than You Wanted to Know: The Failure of Mandated Disclosure (Princeton: Princeton University Press, 2014): 208. As government refuses to regulate Big Data in the name of preserving the autonomy of its citizens, it gains the unprecedented ability to surveil the lives of those citizens. In regulation’s place we continue the biggest lie on the Internet: “I have read and agree to the terms and conditions.”

Ignorance by Inundation: Corporate Aesthetic-Semantic Strategies

When Barack Obama unveiled his Bill of Rights for Consumer Data Privacy in 2012, the proposal drew unlikely applause. Google, Microsoft, Yahoo, and AOL—then responsible for delivering nearly 90% of targeted advertisements—agreed to abide by it.64“We Can’t Wait: Obama Administration Unveils Blueprint for a “Privacy Bill of Rights” to Protect Consumers Online,” The White House, February 23, 2012. “It’s great to see that companies are stepping up to our challenge to protect privacy,” said FTC Chairman Jon Leibowitz. “Consumers have greater choice and control over how they are tracked online.”65Ibid.

The centerpiece of the Privacy Bill of Rights was a familiar one, transparency: the right of consumers to exercise “appropriate control” over their personal data and to have “clear and simple choices, presented at times and in ways that enable consumers to make meaningful decisions about personal data collection, use, and disclosure.”66“Consumer Data Privacy in a Networked World,” The White House (2012): 11. Five years earlier, in 2007, Leibowitz had told a town hall meeting that “initially, privacy policies seemed like a good idea. But in practice, they often leave a lot to be desired. In many cases, consumers don’t notice, read, or understand the privacy policies.”67Jon Leibowitz, “So Private, So Public: Remarks at the FTC Town Hall Meeting on Behavioral Advertising,” Federal Trade Commission, November 1, 2007.

Realizing, perhaps, that privacy self-determination was better than governmental regulation, Silicon Valley opted in. To convince (or confuse) others to opt in to their terms, they turned to research from their analog days: add noise to hush the signal.

Prolixity: In 2009, the Securities and Exchange Commission sued automotive manufacturer Collins & Aikman for fraudulent rebate transactions. Upon Collins & Aikman’s request to review the evidence, the SEC turned over 1.7 million records—10.6 million pages in all—saying that the defendant could search them for the relevant evidence.68Sharon D. Nelson and John W. Simek, “Data Dumps: The Bane of E-Discovery,” SLAW, November 16, 2010. In Felman Production v. Industrial Risk Insurers (2010), plaintiffs admitted that thirty percent of their produced documents were irrelevant. As the court noted, the material included “car and camera manuals, personal photographs, and other plainly irrelevant documents, including offensive materials.”69Ibid. The document dump, as it is known, creates diversion by deluge.

The average reading speed for individuals with a grade twelve or college education is approximately 250 to 280 words per minute.70Stanford E. Taylor, “Eye Movements in Reading: Facts and Fallacies,” American Educational Research Journal 2 (1965): 193. At this rate, reading five thousand words—the length of this writing—would take at least eighteen minutes. In 2008, researchers at Carnegie Mellon found that the average privacy policy ran 2,514 words. To read every word of the 1,500 privacy policies that the average American agreed to each year would require 201 hours, or 40 minutes per day.71Cranor and McDonald, “The Cost of Reading Privacy Policies”: 563. They concluded that the “national opportunity cost for the time to read privacy policies is on the order of $781 billion annually.”72Ibid., 565. In the decade since, privacy policies have become longer—up 58%, to an average of 3,964 words.73Pierre Nicholas Schwab, “Reading privacy policies of the 20 most-used mobile apps takes 6h40,” Into the Minds, May 28, 2018. The verbosity of privacy policies may be necessary to comply with legal and regulatory requirements, but it also leaves them useless to users trying to make informed privacy decisions.74Ashwini Rao, Florian Schaub, Norman Sadeh, Alessandro Acquisti, and Ruogu Kang, “Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online,” Proceedings of the Twelfth Symposium on Usable Privacy and Security (2016): 77.
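
The arithmetic behind these estimates is simple enough to check. Below is a minimal sketch using the figures cited above; the published study models ranges of policy lengths, site visits, and reading speeds, so its totals differ somewhat from this naive product.

```python
# Back-of-envelope check of the reading-burden arithmetic cited above.
# Inputs are the figures quoted in this essay; the published study models
# ranges of policy lengths, site visits, and reading speeds, so its totals differ.

WORDS_PER_MINUTE = 250       # typical adult reading speed
POLICY_WORDS_2008 = 2514     # average privacy policy length, 2008
POLICY_WORDS_2018 = 3964     # roughly 58% longer a decade later
POLICIES_PER_YEAR = 1500     # policies the average American encounters annually

def annual_burden(words_per_policy):
    """Return (hours per year, minutes per day) to read every policy encountered."""
    minutes_per_year = words_per_policy * POLICIES_PER_YEAR / WORDS_PER_MINUTE
    return minutes_per_year / 60, minutes_per_year / 365

for year, words in (("2008", POLICY_WORDS_2008), ("2018", POLICY_WORDS_2018)):
    hours, per_day = annual_burden(words)
    print(f"{year}: one policy ~{words / WORDS_PER_MINUTE:.0f} min; "
          f"all policies ~{hours:.0f} hours/year (~{per_day:.0f} min/day)")
# 2008: one policy ~10 min; all policies ~251 hours/year (~41 min/day)
# 2018: one policy ~16 min; all policies ~396 hours/year (~65 min/day)
```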

Durability: The time-to-read problem assumes that once a user has read a privacy policy, they need never read it again. Every contract, however, contains a modification clause that entitles the business to modify the terms by posting a new version on its website, by emailing a new agreement, or by asking the user to re-click “I Agree.”75Omri Ben Shahar, “The Myth of the Opportunity to Read in Contract Law,” European Review of Contract Law 1 (2009): 18. Over the last five years, Facebook has updated its privacy policy as many times; Google twice as many.76“Updates: Privacy Policy,” Google.

Complexity: In their Amazon review of Immanuel Kant’s Critique of Pure Reason (1781), “I Rascible” suggests that “it is not a book one is likely to stumble across and think, ‘this looks interesting,’ if for no other reason (pure or otherwise) than that a quick dip into the book shows it to be dense and difficult.” “Emzy”: “This book was the bane of my life during University.” “Me”: “Not a book you casually read on a summer vacation. The way it’s written is very hard to follow.” Indeed, Kant’s tome is famously impenetrable. The Lexile test, which evaluates linguistic complexity based on factors like sentence length and the difficulty of vocabulary, assigns it a score of 1500—a difficulty fit for doctors and lawyers. But the Critique may seem like a casual summer read compared to many privacy policies. Baidu, CNN, Hulu, Airbnb, and Walt Disney register scores near 1600.77Kevin Litman-Navarro, “We Read 150 Privacy Policies. They Were an Incomprehensible Disaster,” New York Times, June 12, 2019. eBay rivals Descartes’ Philosophical Essays—“but neither should we fall into the error of those who occupy their minds only with deep and serious matters, of which, after much effort, they acquire only a confused knowledge, while they hoped for a profound one”—with terms like this: “when you give us content, you grant us a nonexclusive, worldwide, perpetual, irrevocable, royalty-free, sublicensable (through multiple tiers) right to exercise the copyright, publicity, and database rights (but no other rights) you have in the content, in any media known now or in the future.”78René Descartes as quoted in “The Lexile Framework for Reading Map,” MetaMetrics; eBay User Agreement as quoted in Ben Shahar, “The Myth of the Opportunity to Read in Contract Law”: 13. Seizov, Wulf, and Luzak (2019) assert that “the language used in disclosures has crucial effects on their understandability and comprehensiveness. Convoluted sentence structure, extensive use of legal jargon, and a general tendency towards prohibitively high reading levels (advanced vocabulary, grammatical, and syntactical choices) have plagued disclosure texts for decades.”79J. Luzak, O. Seizov, and A.J. Wulf, “The Transparency Trap: A Multidisciplinary Perspective on the Design of Transparent Online Disclosures in the EU,” Journal of Consumer Policy 42 (2019): 161.
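
Lexile’s exact formula is proprietary, but the publicly documented Flesch-Kincaid grade level gives a feel for how such readability scores are computed: 0.39 times the average sentence length, plus 11.8 times the average syllables per word, minus 15.59. Below is a rough sketch applied to the eBay clause quoted above; the syllable counter is a crude vowel-group heuristic, and neither it nor Flesch-Kincaid is the Lexile measure itself.

```python
# Rough readability scoring of the eBay clause quoted above, using the public
# Flesch-Kincaid grade-level formula as a stand-in for the proprietary Lexile test.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels; crude, but adequate for illustration."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

clause = ("when you give us content, you grant us a nonexclusive, worldwide, "
          "perpetual, irrevocable, royalty-free, sublicensable (through multiple "
          "tiers) right to exercise the copyright, publicity, and database rights "
          "(but no other rights) you have in the content, in any media known now "
          "or in the future.")
print(f"Flesch-Kincaid grade: {flesch_kincaid_grade(clause):.1f}")
# A single unbroken sentence of licensing jargon lands far beyond grade 12.
```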

Ambiguity: Alongside the indecipherable, privacy notices contain ambiguous language. Airbnb, for instance, justifies collecting users’ personal information with vague language like “adequate performance” and “legitimate interest.”80As quoted in Litman-Navarro, “We Read 150 Privacy Policies. They Were an Incomprehensible Disaster.” Obscurity allows for a wide range of interpretation that provides flexibility for companies to defend their data practices in a lawsuit.81 It also minimizes consumer doubt. An analysis of privacy notices employed by twenty-eight online retailers and travel agencies by communications researcher Irene Pollach (2005) discerns several obfuscatory tactics: “the way a notice is constructed syntactically and grammatically establishes social relationships of power and agency... Specific grammatical choices such as passive voice, nominalization, and personal reference modify understanding and can impart a false sense of security, detachment, irregularity, responsibility, or powerlessness when it comes to consumers reading (pre-)contractual information.”82J. Luzak, O. Seizov, and A.J. Wulf, “The Transparency Trap: A Multidisciplinary Perspective on the Design of Transparent Online Disclosures in the EU”: 162.

Homo Economicus: Bounded Rationality in Consumer Assent

Just as the notice-and-choice regulatory framework rests on the informed minority hypothesis, the informed minority hypothesis rests on a hypothesis of its own: homo economicus. First proposed by John Stuart Mill in “On the Definition of Political Economy” (1836), this economic man “desires to possess wealth, and is capable of judging the comparative efficacy of means for obtaining that end.”83John Stuart Mill, as quoted in Joseph Persky, “The Ethology of Homo Economicus,” The Journal of Economic Perspectives (1995): 223. “The perfect rationality of homo economicus,” elaborates philosopher Gregory Wheeler, “imagines a hypothetical agent who has complete information about the options available for choice, perfect foresight of the consequences from choosing those options, and the wherewithal to solve an optimization problem (typically of considerable complexity) that identifies an option which maximizes the agent’s personal utility.”84Mill’s definition has since been revised in William Stanley Jevons’ “calculator man” (1871) and Frank Knight’s “slot-machine man” (1921); Gregory Wheeler, “Bounded Rationality,” The Stanford Encyclopedia of Philosophy (2019). Proponents of privacy self-determination imagine that we behave something like homo economicus—we weigh the risks and the benefits, calculate the profits and the losses, evaluate the alternatives.85Susanne Barth and Menno D.T. de Jong, “The privacy paradox: Investigating discrepancies between expressed privacy concerns and actual online behavior,” Telematics and Informatics 34 (2017): 1044–45. We act only after we’ve run the probability distributions for all possible outcomes.

Enter Herbert Simon. Revising rational choice theory, Simon suggested that human rationality is not as exacting as we would like to think. Instead, rationality is “bounded”: “Broadly stated, the task is to replace the global rationality of economic man with the kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist.”86Herbert Simon (1955) as quoted in Wheeler, “Bounded Rationality” (2019). The cognitive limitations proposed by Simon’s “bounded rationality”—and thereafter elucidated by behavioral economists, psychologists, and neuroscientists—explain why we agree to be bound by terms we do not agree with.

Discounting: Every few months, engineers, investors, and thought-leaders make their pilgrimage to the Esalen Institute, a storied hippie hotel on California’s Pacific coast. Founded in 1962 to help bring yoga, organic food, and meditation into the American mainstream, Esalen has become Silicon Valley’s prosthetic soul. Back at headquarters, Google regularly hosts mindfulness classes.87Nellie Bowles, “Where Silicon Valley Is Going to Get in Touch With Its Soul,” New York Times, December 4, 2017.

Given the choice between a small reward now and a bigger reward later, subjects will commonly choose the former.88Jess Benhabib, Alberto Bisin, and Andrew Schotter, “Hyperbolic Discounting: An Experimental Analysis,” New York University (2004): 1–21. This “present bias”—otherwise known as hyperbolic discounting—leads to inconsistent evaluation of close and distant events. A similar effect applies when we click “Agree”: the immediate benefit of accessing the service outweighs potential future harms, which are steeply discounted.89Acquisti and Grossklags, “What Can Behavioral Economics Teach Us About Privacy?”: 6. Determining these future harms is all the more difficult with our data. Acquisti and Grossklags explain: “the complex life-cycle of personal data in modern information societies can result in a multitude of consequences that individuals are hardly able to consider in their entirety.”90Ibid. As seemingly isolated pieces of data accumulate, they are combined and analyzed to reveal patterns. What may seem like an innocuous piece of data at one point in time could in aggregate reveal sensitive details. For instance, Hal R. Varian has shown that individuals have little or no control over the secondary use of their information.91Hal R. Varian, “Economic aspects of personal privacy,” Privacy and Self-Regulation in the Information Age, National Telecommunications and Information Administration (1996).
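
Present bias has a standard formalization: under hyperbolic discounting, a reward A received after a delay D is valued at A / (1 + kD). The sketch below shows the preference reversal this produces; the discount rate, amounts, and delays are illustrative assumptions, not figures from the studies cited here.

```python
# Present bias under hyperbolic discounting: V = A / (1 + k*D).
# The discount rate k, reward amounts, and delays are illustrative assumptions.

def hyperbolic_value(amount, delay_days, k=1.0):
    """Perceived present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

small_soon = (50, 0)    # smaller reward, available immediately
large_late = (100, 5)   # larger reward, available after a short delay

for shift in (0, 30):   # evaluate now, then with both rewards pushed 30 days out
    v_small = hyperbolic_value(small_soon[0], small_soon[1] + shift)
    v_large = hyperbolic_value(large_late[0], large_late[1] + shift)
    choice = "small-soon" if v_small > v_large else "large-late"
    print(f"shift {shift:>2} days: small-soon={v_small:6.2f}, "
          f"large-late={v_large:6.2f} -> choose {choice}")

# With no shift the smaller immediate reward wins; pushed 30 days out, the larger
# later reward wins. Exponential discounting (A * e**(-k*D)) never reverses this
# way, because a common delay rescales both options by the same factor. Clicking
# "Agree" sits at the steep end of the curve: access is immediate, while any data
# harm arrives at an unknown future delay.
```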

Overload: Searching for an explanation for why nine in ten users agreed to provide their first-born child as payment for NameDrop—the aforementioned fictional social network—Jonathan Obar and Anne Oeldorf-Hirsch determined that 98% of users never reached the clause. The condition was buried in Section 2.3.1 of the eight-thousand-word Terms of Service; those who decided to investigate the policy gave up before they got there. A regression analysis identified information overload as a significant negative predictor of reading terms upon sign-up.92Jonathan A. Obar and Anne Oeldorf-Hirsch, “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service of Social Networking Sites,” Information, Communication & Society (2018): 1. As Susana Kim Ripken explains in the context of securities regulation: “When faced with too much data, people have a tendency to become distracted by less relevant information and to ignore information that may turn out to be highly relevant. They can handle moderate amounts of data well, but tend to make inferior decisions when required to process increasingly more information.”93Susana Kim Ripken as quoted in Ryan M. Calo, “Against Notice Skepticism in Privacy (and Elsewhere),” Notre Dame Law Review 87 (2012): 1054.

Pill bottles, on the other hand, effectively reduce overload by prioritizing essential information—say, “Do not take more than 8 caplets every 24 hours.” When the cognitive-analytic cost of the “precautionary step” is too high—as in NameDrop’s terms—critical information is not differentiated from the periphery.94Omri Ben Shahar, “The Myth of the Opportunity to Read in Contract Law”: 7.

Gratification: The American Law Institute’s Principles of the Law of Software Contracts prescribes that software vendors must use clickwraps. Specifically, for a license to be enforceable, buyers must click on an “I agree” icon next to a scroll box containing the text of the license. The drafters note that “[t]his form of clickwrap closely resembles traditional modes of agreeing to paper standard forms.”95Florencia Marotta-Wurgler, “Will Increased Disclosure Help? Evaluating the Recommendations of the ALI’s ‘Principles of the Law of Software Contracts,’” University of Chicago Law Review 78: 172. When signing contracts offline, however, the setting is often formalized, and the tactility of the contract and the requirement to provide a signature reinforce the stakes.96Anja Bechmann, “Non-informed Consent Cultures: Privacy Policies and App Contracts on Facebook”: 21. A pop-up window—that pixelated annoyance between us and what we want—does not compare. Further, the Cues-Filtered-Out Theory holds that individuals disclose more personal data in computer-mediated communication than in face-to-face settings.97Susanne Barth and Menno D.T. de Jong, “The privacy paradox: Investigating discrepancies between expressed privacy concerns and actual online behavior”: 1047.

Optimism: When researchers at Yale surveyed 242 participants about Facebook’s Statement of Rights and Responsibilities, they found that their subjects believed that the terms were more favorable to them than they actually are. Asked, for example, if “liking” the page of a pizza chain would lead to an advertisement for that restaurant with a photo of your face and the text “[Your Name] likes this restaurant,” a statistically significant majority of respondents answered (incorrectly) that “No, your photo and name cannot appear in an advertisement.”98Ian Ayres and Alan Schwartz, “The no-reading problem in consumer contract law,” Stanford Law Review 66 (2014): 598. They deduced that consumers often “hold optimistically mistaken beliefs about important terms.”99Ibid., 606.

“Term optimism” may be explained by the Valence Effect of Prediction. Under the Valence Effect, we tend to overestimate the likelihood of favorable events happening to us relative to others. Likewise, according to Third-Person Effect Theory, we tend to overestimate the effect of media on others while underestimating the influence on ourselves.100Susanne Barth and Menno D.T. de Jong, “The privacy paradox: Investigating discrepancies between expressed privacy concerns and actual online behavior”: 1046. Acquisti and Grossklags (2006) substantiated these cognitive distortions when they found that social network users believe that providing personal information publicly on social networks could cause privacy problems for other users, but were not particularly concerned about their own privacy on those networks.101Acquisti and Grossklags, “What Can Behavioral Economics Teach Us About Privacy?”: 9. In this regard, “the negative effects of information disclosure in social networks are mostly ascribed to others while [we] consider [our]selves the beneficiaries of positive effects only.”102Susanne Barth and Menno D.T. de Jong, “The privacy paradox: Investigating discrepancies between expressed privacy concerns and actual online behavior”: 1046.

The Social Contract

Man is born free, and everywhere he is in chains.103Jean-Jacques Rousseau, The Social Contract, trans. H.J. Tozer (Hertfordshire, UK: Wordsworth Classics of World Literature, 1998), 5. Originally published 1762.

So wrote Jean-Jacques Rousseau in 1762, in echo of Locke and anticipation of Marx. His On the Social Contract sketched a dystopia: a sovereign unchecked, an uncivil state, a people unfree. Little did Rousseau know that after we threw off the chains of monarchy it would be our contracts that chained us: link of corporate obfuscation after link of our cognitive limitations. Indeed, democracy’s laissez-faire regulation has given rise to a netocracy.104Wired magazine coined “netocracy” in the early 1990s. It is a portmanteau of Internet and aristocracy. Over the past decade, Amazon, Apple, Facebook, and Google have accumulated more capital than nearly any other commercial entity in history. Together, they are worth $2.8 trillion (the GDP of France), a 24 percent share of the S&P 500 Top 50, and close to the value of every stock traded on the Nasdaq in 2001.105Scott Galloway, “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine,” Esquire, February 8, 2018. Some in Silicon Valley, such as PayPal and Palantir co-founder Peter Thiel, have espoused Neoreactionary ideas inspired by the pre-revolutionary France of Rousseau’s day.106Klint Finley, “Geeks for Monarchy: The Rise of the Neoreactionary,” TechCrunch, November 22, 2013.

Rousseau offered a way out: the “general will” of the civic body (or, as we might say, the consumtariat).107Alexander Bard termed today’s underclass the consumtariat, a combination of consumer and proletariat, in his 2017 book The Netocrats. In recent years, technologists have turned strategies developed for data collection against that collection. They have trained neural networks to analyze privacy policies for predatory clauses and used deep learning to rate their fairness from A to F.108Marco Lippi et al., “CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service”: 117–139; Hamza Harkous et al., “Polisis: Automated Analysis and Presentation of Privacy Policies Using Deep Learning,” Proceedings of the 27th USENIX Security Symposium (2018): 531–548. Others have read terms line-by-line and crowdsourced their findings.109Terms of Service; Didn’t Read, https://tosdr.org/. But reclaiming our sovereignty—our rights to our behaviors, bodies, even our tractors—starts with something dangerously simple: turning “I Agree” into “I Contend.”