Internet Privacy Doesn’t Mean A Thing … Yet.

By Meg Leta Jones
Published on July 25, 2016

In this age of social media and connectivity, anything that goes up online might not ever come down. And this data from our pasts — no matter how far removed from our present — has proven its potential to ruin futures in the time it takes to click on a link. The consequences of this available data have ranged from embarrassing to devastating, jeopardizing careers, reputations, and relationships, with the worst part being that there is no defense against personal data that’s no longer personal. In Ctrl+Z by Meg Leta Jones (New York University Press, 2016), readers are presented with a long-overdue solution to this haunting problem: a digital right to be forgotten. Proponents argue the benefits of legally requiring Internet entities to delete, hide, or anonymize content at users’ request. Critics say it’s a technologically impossible attack on free speech and open access. But Ctrl+Z offers a simple, concise, and nuanced look at the possibilities an idea like this could bring. Jones encourages readers to broaden their perspectives and examine the myriad choices available when it comes to our right to let bygones be bygones and let our past remain forgotten, even in the tangles of the World Wide Web.

To find more books that pique our interest, visit the Utne Reader Bookshelf.

Introduction

Two cases addressing the complicated concerns of reputation, identity, privacy, and memory in the Digital Age were decided the same day on opposite sides of the Atlantic with different conclusions. The first began in Spain. In 2010, Mario Costeja González requested that a Spanish newspaper remove information about a sale of his property related to insolvency proceedings. When the paper refused, he requested that Google remove the search results that included the information. Google’s refusal led to litigation in the Court of Justice of the European Union. On May 13, 2014, the court ordered Google to edit the results retrieved when González’s name was searched because the information about the property sale was “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue carried out by the operator of the search engine.”

On the same day in the U.S., two American Idol contestants brought every conceivable claim against Viacom, MTV, and a number of other defendants over online content that led to their disqualification from the television show. These two contestants had made it to the “Top 32” round when information about their earlier arrests was published on websites like Smoking Gun. The hopeful singers had not disclosed their arrests to the show’s producers. It was an unexceptional U.S. case: all of their claims were dismissed by the Tennessee district court for two main reasons. First, some of the claims were too old. Even though the Internet allows for continued public accessibility, under Tennessee state law, defamation claims must be filed within one year from the time the content was published. Second, any lawsuit in the U.S. seeking damages for the publication of true information is not going to get far.

Although the facts of the cases differ in ways that matter to the law as well as to public opinion, both involved parties asking the judicial system to limit the damage of digital content that would otherwise remain available for an indefinite period of time. Policymakers around the globe are being pressed to figure out a systematic response to the threat of digital memory — and it is a complex threat involving uncertain technological advancements, disrupted norms, and divergent values.

On October 12, 2012, fifteen-year-old Amanda Todd took her own life after posting a desperate YouTube video explaining the details of how she was bullied. In the video, the vulnerable girl explained that a scandalous image she was convinced to create led to brutal on- and offline torment. She suffered from depression and anxiety as a result; in the video, she holds a card that reads, “I can never get that photo back.” In 2008, Max Mosley, a former head of Formula One racing, was awarded £60,000 in damages by an English High Court in a claim against the British News of the World newspaper for publishing a story detailing his involvement in an allegedly Nazi-themed, sadomasochistic, prostitution-filled orgy — complete with video. In an effort to remove the material related to the event, Mosley brought legal action in twenty-two countries and sought deletion from almost two hundred websites in Germany alone by 2011. The most recent of Mosley’s claims for removal was filed against Google to hide the remaining links to the story. Just months later, terrorists murdered twelve people in Paris as retribution for satirical depictions of Muhammad published by a French newspaper and circulated online, prompting interior ministers from European Union countries to call for Internet service providers to find and take down online content that “aims to incite hatred and terror” as well as to allow governments to monitor activity to prevent future attacks.

To drive home the importance and difficulty of the issue, imagine the worst thing you have ever done, your most shameful secret. Imagine that cringe-inducing incident somehow has made its way online. When future first dates or employers or grandchildren search your name, that incident may be easily discoverable. In a connected world, a life can be ruined in a matter of minutes, and a person, frozen in time. Keeping that embarrassing secret offline is not as easy as it once was. The wrong button can get pressed, networks can be confusing, people can be misidentified, and sometimes foes — or friends — are vindictive. Your secret may never be disclosed, but it may nonetheless be discovered when the bits of data trails are put together — and you may be the last to know. You may not only suffer dramatically from the discovery and use of your personal information; worry or fear may curb your behavior on- and offline to avoid the risks of unwanted attention, misinterpretation, or abuse. Now imagine the biggest jerk you have ever met, someone you do not want to be able to hide records of his inconsiderate, nasty, spiteful, or twisted behavior. Imagine that no matter what rules you come up with, he will try to abuse them.

The global Internet population is around 2.1 billion people, with over 274 million users in North America and 519 million users in Europe. Every minute in 2012, 204,166,667 emails were sent, over 2,000,000 queries were received by Google, 684,478 pieces of content were shared on Facebook, 100,000 tweets were sent, 3,125 new photos were added to Flickr, 2,083 check-ins occurred on Foursquare, 270,000 words were written on Blogger, and 571 new websites were created. The average webpage has fourteen cookies to track users around this massive information network. Who should be able to hide or delete information? How do we determine when and what is appropriate to keep or discard?

The Right(s) to Be Forgotten

Since Stacy Snyder became the cautionary tale of social media in 2006, it is well known that employment prospects can be negatively impacted by information on the Internet. Snyder was denied her teaching degree days before graduation because an administrator at the high school where she was training in Pennsylvania discovered a photo of her wearing a pirate hat and drinking from a plastic cup, with the caption “Drunken Pirate,” on MySpace. Today these stories are a dime a dozen.

Jacqueline Laurent-Auger was disappointed when her contract with the private boys’ school where she taught drama for fifteen years was not renewed. The school was concerned about Laurent-Auger’s judgment, explaining, “The availability on the Internet of erotic films in which she acted created an entirely new context that was not ideal for our students. After discussion and reflection, we concluded that adult films must remain just that, a product for adults. That’s why we decided not to renew Mrs. Laurent-Auger’s contract.” But it would have been hard for her to predict this consequence of her decision, considering color television was only just becoming available when she filmed the scenes in the late 1960s and early 1970s — nearly fifty years before. On this point, the school stated that the Internet had ushered the “erotic portion of [Laurent-Auger’s] career into the present.”

In 2010, Mary Bale was caught on video picking up an alley cat and tossing it into a garbage bin. The cat’s owners posted the video online, and within hours, Bale’s name and address were published online, a “Death to Mary Bale” Facebook page and numerous anti-Mary Bale Twitter accounts were created, and she later received death threats.

When the site Jezebel outed a number of high school students who had posted vulgar, racist tweets after President Obama was reelected, it highlighted many of the controversies and questions at issue with digital memory. The students who were identified deleted the tweets, and some deleted their Twitter accounts, but for a number of them, the Jezebel article was already on the first page or the top result when their names were searched on Google. Of course, Jezebel could have covered the story without identifying the teenagers, but as the site explained, “While the First Amendment protects their freedom of speech, it doesn’t protect them from the consequences that might result from expressing their opinions.”

The web has become a searchable and crunchable database for questions of any kind, a living cultural memory whose implications are complex and wide-reaching. Prospective professional and social contacts rely heavily on the abundance of information online. Around 80 percent of employers, 30 percent of universities, and 40 percent of law schools search applicants online. Before a first date, 48 percent of women and 38 percent of men perform online research on their date. Thanks to the content that users purposefully place online (wisely or not), the growth of surveillance technologies in everyday devices, and free instant sharing platforms, it is increasingly easy to gain and publish harmful information about others online. “Slut-shaming” and “revenge porn” sites dedicated to humiliating women in sexually vulnerable positions are hotly debated and extremely popular. These sites, which encourage users to post sexually explicit photos or videos of a former lover online with accompanying identifying information like name, address, and workplace, receive fifteen to thirty posts and are visited by over 350,000 unique visitors each day. Suicides of cyberbullying victims have been rising and highlight the extreme consequences of online content. With over half of adolescents and teens experiencing some form of online bullying, the ability to remove content is in high demand among those sympathetic to the poor choices that can be made in youth. Sites like MugshotsOnline.com collect, organize, and publish millions of mug shots acquired from government agency data sources, allow users to “vote for the weekly top 10,” and include a disclaimer at the bottom of the homepage explaining that those who are included may or may not have been convicted. Public information that has historically been difficult to access, such as court filings, is now organized and presented digitally in ways that provide a public service as well as risks to the individuals identified in the records. In fact, an industry of online reputation management has emerged to counter negative content and search results.

The Internet law scholar Viktor Mayer-Schönberger has warned that the digitization and disclosure of life’s personal details “will forever tether us to all our past actions, making it impossible, in practice, to escape them.” The tether is actually a detailed structure of discoverability that relies on a number of technical and social occurrences. All Internet communication — downloading webpages, sending email, transferring files, and so on — is achieved by connecting to another computer, splitting the data into packets (the basic unit of information), and sending them on their way to the intended destination using the TCP/IP standard. When a host wants to make a connection, it first looks up the IP address for a given domain name and then sends packets to that IP address. For example, the uniform resource locator (URL) www.twitter.com/megleta contains the domain name “www.twitter.com.” A DNS resolver computer, commonly operated by the Internet service provider, the company providing Internet service, will perform the domain-name-to-IP-address lookup. Content being sent to users making requests within this delivery system is increasingly stored in the cloud, beyond the reach of the creator.
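
To make the lookup step concrete, here is a minimal sketch in Python (not drawn from the book): it asks whatever DNS resolver the machine is configured to use for the addresses currently mapped to the domain name in the example above.

```python
# Minimal sketch of the domain-name-to-IP-address lookup described above.
# The hostname comes from the URL example in the text; the call simply asks
# the local DNS resolver (often operated by the ISP) for its current answer.
import socket

def resolve(hostname):
    """Return the IP addresses the configured DNS resolver reports for a hostname."""
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    # Each result is (family, type, proto, canonname, sockaddr); the address
    # is the first element of sockaddr.
    return sorted({sockaddr[0] for *_, sockaddr in results})

if __name__ == "__main__":
    print(resolve("www.twitter.com"))
```

Once the resolver answers, the packets carrying the request are sent to one of the returned addresses, and everything that follows builds on that simple exchange.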

Search engines continually crawl and index the Internet’s available content. This content is then ranked, with the results most relevant to the user’s search entry appearing first. Search engines have emerged as vital and ubiquitous navigation tools. They compete with one another by refining these ranking systems, but the details of how these systems process and present information are not disclosed. The search giant Google’s goal is to “organize the world’s information and make it universally accessible and useful.” In 2009, the company began offering personalized search results based on the data profiles it had for the individual user. This data collection to serve users’ personalized content includes logs of each search request, results the user actually saw, and results he or she eventually clicked on. In 2006, John Battelle discovered that Google could identify the IP addresses (and/or Google cookie values) of users who have searched for a term as well as a list of terms searched by an IP address (and/or Google cookie value).
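
The crawl, index, and rank pipeline can be illustrated with a toy example. The Python sketch below builds an inverted index over a few hypothetical pages and scores them by simple term frequency; it stands in for the general idea only, since the actual ranking systems, as noted above, are not disclosed.

```python
# Minimal sketch of indexing and relevance ranking over a toy corpus of
# hypothetical pages; real search engines use far more elaborate signals.
from collections import Counter, defaultdict

pages = {
    "example.com/a": "the right to be forgotten in the european union",
    "example.com/b": "search engines index the web and rank results",
    "example.com/c": "a court ordered search results about the sale removed",
}

# Build an inverted index: term -> {url: how often the term appears}.
index = defaultdict(dict)
for url, text in pages.items():
    for term, count in Counter(text.split()).items():
        index[term][url] = count

def search(query):
    """Score each page by the query terms it contains, most relevant first."""
    scores = Counter()
    for term in query.split():
        for url, freq in index.get(term, {}).items():
            scores[url] += freq
    return scores.most_common()

print(search("right to be forgotten"))
```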

Prior to the phenomenon of a “Google account,” the company kept the originating IP address, cookie value ID, time, date, search term, and resulting clicks. The infusion of Web 2.0 into the search experience asks the user to sign in to Google as a way to utilize its many social services including email, chat, telephony, photo collection, maps, and a social networking site, as well as its ever-enhanced search engine. The privacy policy now explains that the company collects information you give it and information from the use of its services, including device information (such as your hardware model, operating system version, unique device identifiers, and mobile-network information including phone number), log information (including search queries; telephony log information; IP address; and device event information such as crashes, system activity, hardware settings, browser type, browser language, the date and time of your request and referral URL, and cookies), location information (including GPS, sensor data, and WiFi access point), local storage, and cookies and anonymous identifiers when interacting with partner services. A number of other companies like Facebook and Twitter are similarly adapting business models based on the opportunity to collect, process, and sell user data by partnering with other sites and services to create a web of trackability.

This brings us to what is known as “big data.” What makes data big is the amount of data created, the number of partnerships that allow the tracking of data across sites and platforms, and growing data markets and collaborations where this information is shared among interested parties. When a user logs onto the Internet and visits a website, hundreds of electronic tracking files may be triggered to capture data from the user’s activity on the site and from other information held in existing stored files, and that data is sent to companies. A study done by the Wall Street Journal found that the nation’s top fifty websites installed an average of sixty-four pieces of tracking technology, and a dozen sites installed over one hundred. These are not the cookies of the 1990s, which recorded a user’s activity to make revisits more convenient. New tools scan in real time and access location information, income, shopping interests, and health concerns. Some files, known as zombie cookies, respawn even after the user actively deletes them. All of this information is then sold on data exchanges. One of the leading data-trading platforms, BlueKai, which launched in 2008, claimed to have access to over one hundred million unique buyers by 2009.
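
As an illustration of the mechanism only (the domain and cookie name below are hypothetical, not any particular company’s), the Python sketch shows how a tracker’s response can set a long-lived identifier that the browser then returns on every later visit to any page embedding that tracker.

```python
# Minimal sketch of a third-party tracking cookie; all names are hypothetical.
from http.cookies import SimpleCookie
import uuid

# First visit: the tracker's HTTP response assigns a long-lived identifier.
cookie = SimpleCookie()
cookie["tracker_id"] = uuid.uuid4().hex
cookie["tracker_id"]["domain"] = ".ads.example"           # sent to every site embedding the tracker
cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365 * 2  # roughly two years
print(cookie.output())  # the Set-Cookie header the browser stores

# Later visits: the browser sends the stored value back, letting the tracker
# link activity across every page that embeds its content.
incoming = SimpleCookie("tracker_id=" + cookie["tracker_id"].value)
print(incoming["tracker_id"].value)
```

Zombie cookies go a step further, rewriting the identifier from secondary storage (such as Flash or HTML5 local storage) after the user deletes it.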

Due to ubiquitous connectivity, discoverability is not necessarily limited to a personal computer. Sensors creating data about owners and others from phones, cars, credit cards, televisions, household appliances, wearable computing necklaces, watches, and eyewear, and the growing list of “Internet of Things” devices, mean that more personal information is being disclosed, processed, and discovered. All of these nodes of discoverability will be added to the structure of discoverability sooner or later, if they have not been already.

But, as Julian Barnes writes in his 1984 novel Flaubert’s Parrot, “You can define a net in one of two ways, depending on your point of view. Normally, you would say that it is a meshed instrument designed to catch fish. But you could, with no great injury to logic, reverse the image and define a net as a jocular lexicographer once did: he called it a collection of holes tied together with string.” This same structure of discoverability, the net that seems all consuming and everlasting, is full of holes. It is a fragile structure. Digital information is not permanent. Any number of reasons, from lack of interest to site upgrades, can lead to the loss of access to content.

Dependence on digital information puts us at the mercy of bit rot, data rot, and link rot. Bit rot refers to the degradation of software over time. Modern computers do not have floppy disk drives; many of them do not even have compact disk drives. The University of Colorado library has an entire basement space with outdated computer equipment kept just to read old digital formats holding content necessary for research. Data rot or data decay refers to the decay of storage media, such as the loss of magnetic orientation of bits on a floppy disk or loss of electrical charges of bits in solid state drives. Systems that provide growing capabilities to store digital data have been shown also to increase the likelihood of uncorrected and undetected data corruption. Link rot occurs when a hyperlink no longer works, when the target it originally referenced no longer exists. The average life of a webpage is about one hundred days. Users may be presented with the disappointing “404 Error Page Not Found” message, or the original content may have been overwritten. As a recent example, in 2014 a team at Harvard Law School found that approximately half of the URLs in U.S. Supreme Court opinions no longer link to the original information. Moving from paper to digital offers the benefits and drawbacks of increased discoverability, and information has a different life cycle when stored in digital formats that require a computer to interpret the content for a human reader.
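
Link rot, at least, is easy to check for oneself. The short Python sketch below (the URLs are placeholders) asks a web server whether a page still exists; a 404 response, or no response at all, marks the link as rotten.

```python
# Minimal sketch of a link-rot check; the URLs below are placeholders.
import urllib.error
import urllib.request

def is_rotten(url, timeout=10.0):
    """Return True if the URL no longer resolves to reachable content."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status >= 400
    except (urllib.error.HTTPError, urllib.error.URLError):
        # 4xx/5xx responses and failed connections both count as rot.
        return True

for url in ["https://example.com/", "https://example.com/no-such-page"]:
    print(url, "rotten" if is_rotten(url) else "alive")
```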

The benefits of increased discoverability are rarely overstated. Promises of improved citizen participation and government accountability, use of resources in every area from power to education, scientific discoveries from reproduction to climatology, understanding of humanities research from history to art, and commercial possibilities justifiably drive us to share. The crisis of decreased discoverability, on the other hand, is rarely acknowledged, but it is vital to meeting the promises of data-based progress, as well as to framing the problem of digital memory. This data-driven march is neither good nor bad, but limitations exist and values must be reassessed and reestablished in order to determine what should be preserved and what may be left ephemeral.


Reprinted with permission from Ctrl+Z: The Right to Be Forgotten by Meg Leta Jones, published by New York University Press, 2016.
