Online Comment: Likers, Haters and Manipulators at the Bottom of the Web
Online comment can be informative or misleading, entertaining or maddening. Some comments are off-topic, or even topic-less. In Reading the Comments (MIT Press, 2015), Joseph Reagle urges us to read the conversations “on the bottom half of the Internet” to learn much about human nature and social behavior.
Comment: The Bottom Half of the Web
There’s a reason that comments are typically put on the bottom half of the Internet. —@AvoidComments (Shane Liesegang), Twitter
“Am I ugly?” This question has been asked on YouTube by dozens of young people, and hundreds of thousands of comments, ranging from supportive to insulting, have been left in response. The Web is perfect for this sort of thing, and it has been almost from the start—even if some think it alarming. Over a decade ago, some of the earliest popular exposure that the Web received was through photo-rating sites like HOTorNOT. “Am I Ugly?” videos continue this phenomenon and remain true to YouTube’s origins. YouTube was conceived, in part, as a video version of HOTorNOT. YouTube cofounder Jawed Karim was impressed with the site “because it was the first time that someone had designed a Website where anyone could upload content that everyone else could view.” But not only could people upload content: others could comment on that sophomoric content. And if the word sophomoric seems haughty, Mark Zuckerberg was a Harvard sophomore when he first launched Facemash, his hot-or-not site that used purloined student photos from dormitory directories, what Harvard calls “facebooks.”
This uploading and evaluating of content by users is now associated with various theories and buzzwords. Social media, like YouTube, are populated by user-generated content. Facebook is an example of Web 2.0 in that it harnesses the power of human networks. The online activity of masses of ordinary people might display the wisdom of the crowd or collective intelligence. Books on these topics claim that this “changes everything” and is transforming the Internet, markets, freedom, and the world. Yet I continue to be intrigued by what is happening in the margins—the seemingly modest comment. But what is comment?
As I use the term, comment is a genre of communication. In 2010, for instance, YouTube lifted its fifteen-minute limit on videos, and since then, there has been a flurry of ten-hour compilations of geeky, catchy, and annoying audio. Under a hypnotic video of Darth Vader breathing, someone commented: “What am I doing with my life?! 10 hours of breathing!” From this we can see that comment is communication, it is social, it is meant to be seen by others, and it is reactive: it follows or is in response to something and appears below a post on a blog, a book description on Amazon, or a video on YouTube. Although comment is reactive, it is not always responsive or substantively engaging. Many comments on social news sites are prefaced with the acronym tl;dr (too long; didn’t read), meaning that the commenter is reacting to a headline or blurb without having read the article. Comment is short—often as simple as the click of a button, sometimes measured in characters, but rarely more than a handful of paragraphs. And it is asynchronous, meaning that it can be made within seconds, hours, or even days of its provocation. Putting aside future transformations, comment is already present: comment has a long history (some of which I discuss briefly), and it is pervasive. Our world is permeated by comment, and we are the source of its judgment and the object of its scrutiny. There is little novelty in the form of comment itself, but its contemporary ubiquity makes it worthy of careful consideration, especially given online comment’s tarnished reputation as something best avoided.
This understanding of comment as communication that is reactive, short, and asynchronous fails to draw a bright line. (I use the term comment to speak of the genre and reserve comments for an actual plurality of messages.) For instance, at what point does a message become too long to be considered a comment? Unlike a tweet, there is no character limit for a comment, but I focus on communication that is relatively short and can live outside the expectations of real-time interaction. And although these are the rough contours of comment, its essence is best expressed by way of—appropriately enough—an online exhortation: “Don’t read the comments.” This popular maxim is captured in the tweets of game designer Shane Liesegang. At his account @AvoidComments, he claimed that “there’s a reason that comments are typically put on the bottom half of the Internet.” There is a lot of dreck down there, but in sifting through the comments, we can learn much about ourselves and the ways that other people seek to exploit the value of our social selves. This book is an exercise in reading (rather than avoiding) comment, and it documents an expedition to the bottom of the Web. I show how comment can inform (via reviews), improve (via feedback), manipulate (via fakes), alienate (via hate), shape (via social comparison), and perplex us. I touch on the historical antecedents of online comment and visit the communities of Amazon reviewers, fan fiction authors, online learners, scammers, free thinkers, and mean kids.
The point of this journey is expressed by an adage often used by media theorist Marshall McLuhan: “We live invested in an electric information environment that is quite as imperceptible to us as water is to fish.” Comment is easily seen but invisible to the extent that we take it for granted. Often when comment does make an impression on us, it is a nuisance to be disabled or an offense to be ignored. And even when we do see and appreciate comment, most people have little idea of the extent to which it is manipulated. For example, people like things on Facebook over four billion times a day, and even these littlest of comments are big business. Scammers are proliferating “like farms.” When purported pages about cute puppies, brave veterans, and young people stricken with cancer have enough likes, their content is decorated with ads and links to malware sites, or they are sold to others who will do the same. It is easy to appreciate why some recommend that we “never read the comments.” Much like California during its gold rush, the bottom half of the Web can be lively and lawless, and it is where many are attempting to make a fortune. Although I do not advocate that everyone read all the comments all the time, I think that it is wise to understand them.
The easiest way to avoid comments is not to have them. Because many sites have disabled their comments, I begin this journey with what gossip teaches about online discussion and why many users and sites are turning away from comment. I argue that disabling comment is a reflection of a platform’s growth as users seek intimate serendipity and flee “filtered sludge.”
The origins of YouTube and Facebook demonstrate that people like to talk about one another: we gossip. Although gossip might seem like a trivial thing, evolutionary psychologist Robin Dunbar argues that it is central to understanding humanity. If you participate in online communities, you might have heard of Dunbar’s eponymous number of 150. People invoke Dunbar’s number when a community (such as an email list whose members used to know nearly everyone else on the list) grows too big. The Web is a big place, and any technology that permits its denizens to communicate with one another has to grapple with the problem of social scale. After the group becomes too large, people complain that the magic has gone. Graffiti and scams proliferate. The known personalities and easy cadence of the group have been replaced by strangers and bickering about unruliness and the need for moderation.
Dunbar did not set out to coin an aphorism about online community. He was seeking to answer the question of why primates, especially humans, are smart—why human brains are about nine times larger, relative to body size, than the brains of other animals. Some have suggested that brain size was related to environment, the use of color vision to find fruit, distances traveled while foraging, or the complex omnivore diet. When Dunbar looked at all of these variables among primates, however, he found no such pattern. But the size of primates’ neocortex did correlate with the size of their groups and the time that they spent grooming one another.
A large group is better protected against predation than a small group is, but it also has internal competition for food and mating. Even monkeys can scheme and are sensitive to threats from their peers. Grooming, therefore, is an activity through which alliances are forged and disputes resolved. Experiments with wild vervet monkeys show that they are more likely to pay attention to the distress calls of individuals with whom they recently groomed. But keeping track of who is scratching whose back can be complicated. In a group of twenty, there are nineteen direct relationships and 171 third-party relationships, so as group size increases, so does the complexity of the network and the time that primates spend grooming one another. According to Dunbar, primates’ large brains are the result of an evolutionary race of alliances through social grooming.
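The relationship counting above — nineteen direct relationships and 171 third-party relationships from one member’s point of view in a group of twenty — is simple combinatorics, and a short Python sketch (an illustration, not from the book) can confirm it:

```python
from math import comb

group_size = 20

# From one individual's perspective:
direct = group_size - 1              # my own relationships with each other member
third_party = comb(group_size - 1, 2)  # pairings among the other nineteen members

print(direct, third_party)  # 19 171
```

As the group grows, the number of third-party relationships grows quadratically, which is the bookkeeping burden Dunbar associates with big primate brains.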
For humans, social grooming includes language. Because larger groups require more efficient means of forging alliances, gossip circulates information about others in the social networks in which they exist. While the practice of talking about others (including rumors and bathroom graffiti, or latrinalia) is more interesting and complex than I can address here, I understand gossip simply as “evaluative social chat.” And the alliances that result from sharing opinions about others can be Machiavellian. On a television reality show, for example, Sandy might realize that John’s seeming betrayal of Alice could itself be a lie. Dunbar argues that gossip requires a sophisticated type of social cognition known as the theory of mind through which we infer the mental states of others. Even four-year-old children demonstrate second-order intentionality: the child has a belief about what someone else wants. Adults can negotiate fourth- or even fifth-order intentionality. This is amusingly demonstrated in a scene in The Princess Bride where two opponents engage in a battle of wits. The “Man in Black” poisons one of two goblets of wine, and Vizzini must choose and drink from the safe one. Vizzini begins his chain of deduction with the assumption that “A clever man would put the poison into his own goblet, because he would know that only a great fool would reach for what he was given. I am not a great fool, so I can clearly not choose the wine in front of you. But you must have known I was not a great fool. You would have counted on it, so I can clearly not choose the wine in front of me.” This type of inference requires a “clever man” with a big brain.
In any case, Dunbar’s number of 150 is, roughly, the cognitive limit of how many relationships humans can maintain given their complexity (such as “the enemy of my enemy is my friend”). However, Dunbar proposed multiple tiers, from the family up to the tribe of a couple thousand people. The rough size of the clan—the individuals that a person keeps in contact with and can track relationships for—is 150 people. This is roughly the size of early farming communities, modern planters in Indonesia and the Philippines, and contemporary Hutterites. It is roughly what the Church of England concluded to be the ideal size for a congregation and the number of soldiers in a military company. Also, using the birth rates observed in hunter-gatherer or peasant societies, this number corresponds to about five generations, which is as far back in time as anyone living can remember: “only within the circle of individuals defined by those relationships can you specify who is whose cousin, and who is merely an acquaintance.”

How is Dunbar’s number related to comment? It provides an unexpected clue to why comment frequently fails on the Web.
In the blogging domain, there is little that Dave Winer has not written code for, started a company around, or opined about. He often is credited with deploying the first blog comments in 1998. (Another contender for this claim is Bruce Ableson at Open Diary.) Despite his quick smile and easy manner, Winer ends up in a lot of online arguments. In 2001, he was described in the New York Times as someone who is “not shy about ruffling the big names in high technology.” Winer is opinionated and passionate. He also is willing to say that something “sucks” or to call someone an idiot. As I will discuss further, drama genres of comment, such as sites where people can ask others questions or make lists of things to avoid, lend themselves to conflict. Certain personalities also seemingly attract conflict. In 2006, when Winer announced that he would stop blogging before the end of the year, a group of antagonists created a countdown clock that they said would continue “until he shuts up.” (He continued blogging.) Critics expressed their antagonism in comments to his blog, and he repeatedly considered disabling the comments altogether. In 2007, he wrote that he did not think comments were an essential part of a blog, especially if they “interfere with the natural expression of the unedited voice of an individual”:
We already had mail lists before we had blogs. The whole notion that blogs should evolve to become mail lists seems to waste the blogs. Comments are very much mail-list-like things. A few voices can drown out all others. The cool thing about blogs is that while they may be quiet, and it may be hard to find what you’re looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed.
In 2010, Winer developed this idea further, arguing that blog comments should be short and about the blog posting (or responsive, using my term). They ought not be digressive or overly long. He proposed a system that allows comments of less than a thousand characters to be invisibly submitted within twenty-four hours: “After the commenting period is over, the comments would become visible, and no further comments would be permitted.” Those who wished to respond later or at greater length could do so on their own blogs. This idea was supported by the trackback feature of many blogs. If I respond to a blog post by Winer with a post on my own blog, for example, then my blogging platform would inform Winer’s blog service of my response. Winer’s blog entry would then include a link to my own. Winer’s blog “tracks back” to those who respond. Trackbacks were seen as a way to complement or replace comments, but they have largely fallen into disuse after their abuse by spammers. For Winer, neither blog comments nor tweets are appropriate for conversation. In 2012, Winer, the person who often is credited with first enabling blog comments, disabled them from his own blog—seemingly forever.
As Mathew Ingram, a technology writer, noted about a 2007 fracas over Winer’s “Why Facebook Sucks” posting, Winer’s approach sometimes “brings the hate.” But Winer’s experience is not unusual. After the halcyon days of blogging, many bloggers abandoned their sites or shuttered their comments. Some popular sites (including Boing Boing in 2003, the Washington Post in 2006, Engadget in 2010, and Popular Science in 2013) have turned off their comments for extended periods. In 2013, Rob Beschizza, managing editor of Boing Boing, tweeted that he may do so again “for good,” perhaps after inappropriate comments were made on a posting about the death of a friend. Boing Boing began as a paper zine in the late 1980s and went online in 1995. In its early days, it was like an informational swap meet among friends, but by the new millennium, it had become too popular to serve as an unfettered venue for sharing and gossiping among friends and has struggled with this fact ever since.
However, there are two other responses to unruly comments beyond disabling or ignoring them. Website managers can attempt to fortify their commenting systems, and Website users can relocate in search of what I call intimate serendipity.
Fortifying Comment Systems
By fortify, I mean to make the system more resistant to abuse. Some sites require users to perform a task (like typing in text that has been distorted) before leaving a comment so as to minimize abuse. However, abusers often match the cleverness of the challenge or farm out the task to low-cost workers on the other side of the world. Many sites permit readers to filter comments based on ratings from other users who act as moderators, such as at the nerdy news site Slashdot. This site also uses metamoderation, whereby others’ moderations can be rated as fair or unfair. Even so, people sometimes complain that a cabal of moderators has taken over and is abusing the system. That is, a group of users collude to promote one another’s postings and standing. One can often find someone complaining in a story’s comments that her submission and summary of the story was better and earlier but was ignored because she was not part of a clique.
Facebook and Google+ have required users to use their real names. While Facebook has been relatively lax in enforcement, Google+ was quite strict at its start but stepped back from the requirement in 2014. (My dog has a Facebook page, but it is under his real name.) Such social networks are then able to leverage their identity policies and reach by providing authentication and commenting services for others. Slate adopted Facebook’s 2011 “Comments Box” service, and Farhad Manjoo, a staff writer at Slate, was pleased that Facebook knew real names and that comments could be seen on Facebook by the commenter’s friends and family: “This introduces to the Web one of the most important offline rules for etiquette: Don’t say anything that you’d be ashamed to say in front of your mom.” MG Siegler, a blogger at TechCrunch, noticed that since they “flipped the switch” and adopted Facebook’s service there had been a large drop in comments, but “this is completely expected and definitely not a bad thing.” Before, a post might get hundreds of comments, half of which were “weak to poor” and half of those “pure trollish nonsense.” With the new system, a similar post might receive about a hundred comments, half of which “are actually coherent thoughts in response to the post itself—you know, what a comment is supposed to be.” Others claimed that real-name policies suppressed anonymous speech and were incompatible with the multiple identities that we maintain in life. Furthermore, using centralized commenting systems cedes ever more autonomy and privacy to the likes of Google, Facebook, and other comment-specific services like Disqus (used by 750,000 sites, including CNN’s Website) and Livefyre (used by the BBC and the New York Times).
Sometimes, relatively simple approaches do the trick. The link-sharing and discussion site MetaFilter requires a one-time $5 membership fee. This fee, its strong community norms, and occasional moderation for flagrant abuse seem to have fostered a robust and civil culture. Newspapers, too, have experimented with asking readers to subscribe or pay a small fee to comment. In keeping with Dunbar’s insight, Clay Shirky, author and NYU professor, is fond of examples of communities whose size is purposefully limited. Ten years ago, he wrote about an email list that removed the oldest subscribed member to make room for the newest one. Another list’s periodic purge was inspired by a supposed neighborhood hot tub that was accessed by a key-coded gate lock: people could give the code to friends, but when the bathers became too rowdy or the tub was overcrowded, the owner simply changed the code and gave the new one to his immediate friends under the same policy.
A decade later, Shirky continued to stress that “intimacy doesn’t scale.” In a 2013 talk entitled “Why Do Comments Suck?,” he put it simply: “Comment systems can be good, big, cheap—pick two.” Many sites with comments seek a large audience. They “want lots of people to forward the article to a million friends, shut up and then read another article.” Sites that treat their users as community members (through smaller size or careful moderation) tend to have better comments. This is a good insight but easier said than done: good communities tend to grow. This is the paradox of their success. People then often relocate to another site without a good understanding of what went wrong (except that it went “downhill”) or what they were looking for in the first place.
Twitter and the Search for Intimate Serendipity
We now have a toolbox of tactics for resisting comment abuse, and they often are good enough—for a time. But some communities struggle as they experiment with finding a system that is appropriate to their changing size and circumstances. Those not satisfied with the changes often leave and relocate to a new media platform in search of what I call intimate serendipity. When I went to blogging get-togethers in 2003, it was with a dozen like-minded enthusiasts: I met interesting people and we had good conversations. Over a decade later, going to a meeting for people who post comments to the Web seems passé. (Today almost any gathering could qualify as such a meeting.) After a network of people (online or otherwise) becomes popular, people want to bring their friends. At first, this is great. The value of a network increases significantly with each new node. A network of five phones permits ten connections; doubling the phones to ten permits forty-five possible connections. As Dunbar notes, however, at some point the scale of networks overwhelms the participants. First, we ask, “Who brought that guy to the party?” Second, the network becomes a target for those who wish to exploit it via spam and manipulation.
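The phone figures above follow from counting distinct pairs: n nodes permit n(n−1)/2 two-way connections. A brief Python sketch (an illustration of the arithmetic, not from the book) shows how quickly this grows:

```python
from math import comb

def pairwise_connections(n):
    """Distinct two-way links among n nodes: n choose 2, i.e., n*(n-1)/2."""
    return comb(n, 2)

print(pairwise_connections(5))    # 10 connections among five phones
print(pairwise_connections(10))   # 45 connections among ten phones
print(pairwise_connections(150))  # 11175 connections at Dunbar's number
```

Doubling the nodes roughly quadruples the connections, which is why a community that merely doubles in size can feel far more than twice as crowded.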
I sometimes ask my students if the parade of Web platforms is over (i.e., Geocities, Myspace, Facebook, Twitter, and Google+). Given the difficulties involved in leaving a service (because content and connections must be abandoned) and the profusion of niche networks, some might think that there is little need for anything new. But people do relocate when an existing platform becomes overly populated by jerks, spammers, and ads or overly constrained by controls and filters. People often want a network where intimacy and serendipity are possible. Although there are sites where being anonymous and a jerk is the norm, many people want to express their authentic selves without fearing attacks, manipulation, or unusual exposure and while remaining open to things that surprise and delight. When they can’t do so, they move, as seen in the migrations of geeks between social news sites like Slashdot, Digg, Reddit, and Hacker News.
By 2007, three years after its founding, Digg was being criticized as a system that was rife with manipulation by supposed “bury brigades” that suppressed others’ stories. At the same time, the company was trying to become financially viable or even profitable. As users began to leave, the site exacerbated discontent with the introduction of unpopular changes, including new comment systems and site designs. By 2010, the site was on its deathbed; while the service limped on, and the name had been affixed to various “reboots,” its founder, staff, and community were gone. Many who left Digg relocated to Reddit. Correspondingly, those who had been at Reddit, especially the technically inclined, lamented that the site had become too big and diluted. When Paul Graham launched Hacker News in 2007, the intention was to replicate the early Reddit days, and some early contributors to Reddit followed.
The irony is that success sometimes, seemingly, brings about a comment system’s demise. In 2009, law professor and civic reformer Lawrence Lessig announced that he was retiring his blog. Lessig’s problem was not with haters but with the love. He is an influential legal scholar, a successful author, and a founder of significant cultural, academic, and civic organizations. He has argued against copyright extensions before the U.S. Supreme Court, is a founder of Creative Commons, and popularized the notion of “free culture” with a successful book. However, a growing family, a shift in research focus, and the “increasingly technical burden to maintaining a blog” were too much. He and his volunteers tried to keep up, but a third of the thirty thousand comments on his blog were likely “fraudsters,” and online casino spam was growing. Google stopped indexing the site at one point. However, he was “still trying to understand twitter.” In fact, lots of people were moving to Twitter. For a time, it gave its users intimate serendipity.
At its beginning in 2006, Twitter felt edgy and intimate. It was not uncommon to find users flush with an encounter with a (minor or major) celebrity. Also, people (especially the famous) were thrilled to be able to authentically express themselves. People like talking about themselves, and Twitter appeared to be a safe space to do so. Research indicates that people spend 30 to 40 percent of their interactions telling others about their subjective experiences, so it is not surprising that researchers found that 41 percent of the tweets in their study were of the “me now” variety. Two Harvard University neuroscientists concluded that “disclosing information about the self is intrinsically rewarding” because it triggers regions of the brain that are associated with the mesolimbic dopamine reward system. In experiments in which subjects could choose to speak about themselves or factual matters, people chose to speak about themselves the majority of the time. When these choices were associated with small payments, people were willing to pay an average 17 percent “tax” to talk about themselves: “Just as monkeys are willing to forgo juice rewards to view dominant groupmates and college students are willing to give up money to view attractive members of the opposite sex, our participants were willing to forgo money to think and talk about themselves.”
Reprinted with permission from Reading the Comments by Joseph M. Reagle, Jr., and published by MIT Press, 2015.