Visceralizing Uncertainty With Data Science


Excerpted from Data Feminism by Catherine D’Ignazio and Lauren F. Klein. Reprinted with Permission from The MIT PRESS. Copyright 2020.

Scientific researchers are now proving by experiment what designers and artists have known through practice: activating emotion, leveraging embodiment, and creating novel presentation forms help people grasp and learn more from data-driven arguments, as well as remember them more fully.

As it turns out, visceralizing data may help designers solve one particularly pernicious problem in the visualization community: how to represent uncertainty in a medium that’s become rhetorically synonymous with the truth. To this end, designers have created a huge array of charts and techniques for quantifying and representing uncertainty. These include box plots, violin plots (figure 3.6), gradient plots, and confidence intervals.
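To make the chart forms mentioned above concrete, here is a minimal sketch of the summary statistics a box plot encodes: the quartiles and the whiskers drawn at 1.5 times the interquartile range. The data are hypothetical; the example only illustrates the quantities these charts visualize.

```python
import numpy as np

# Hypothetical sample: 1,000 draws from a skewed distribution.
rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=10.0, size=1000)

# The five-number summary a box plot encodes.
q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1

# Whiskers: most extreme data points within 1.5 * IQR of the box.
lower_whisker = data[data >= q1 - 1.5 * iqr].min()
upper_whisker = data[data <= q3 + 1.5 * iqr].max()

summary = {"lower": lower_whisker, "q1": q1, "median": median,
           "q3": q3, "upper": upper_whisker}
```

A violin plot replaces the box with an estimate of the full probability density, which is why it conveys more of the distribution's shape than the five numbers above.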

Unfortunately, however, people are terrible at recognizing uncertainty in data visualizations, even when they’re explicitly told that something is uncertain. This remains true even for some researchers who use data themselves!

For example, let’s consider the Total Electoral Votes graphic displayed as part of the New York Times live online coverage of the 2016 presidential election (figure 3.7). The blue and red lines represent the New York Times’s best guess at the outcomes over the course of election night and into the following day.

The gradient areas show the degree of uncertainty that surrounded those guesses, with the darker inner area showing electoral vote outcomes that came up 25 percent to 75 percent of the time, and the lighter outer areas showing outcomes that came up 75 percent to 95 percent and 5 percent to 25 percent of the time, respectively. If you look closely at the far left of the graphic, which represents election night (everything prior to the 12:00 a.m. axis label), the outcome of Trump winning and Clinton losing easily falls within the 5 to 25 percent likelihood range.
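The gradient bands described above can be derived directly from forecast simulations: at each moment, the darker band spans the 25th to 75th percentile of simulated outcomes, and the lighter bands extend out to the 5th and 95th percentiles. The sketch below uses made-up numbers standing in for the Times's simulations; the drifting mean and shrinking spread are assumptions for illustration only.

```python
import numpy as np

# Hypothetical stand-in for election-night simulations: at each of 24
# time steps, 5,000 simulated electoral-vote totals for one candidate.
rng = np.random.default_rng(0)
n_steps, n_sims = 24, 5000
means = np.linspace(300, 280, n_steps)   # forecast drifts over the night
spreads = np.linspace(40, 5, n_steps)    # uncertainty narrows as votes come in
sims = rng.normal(loc=means[:, None], scale=spreads[:, None],
                  size=(n_steps, n_sims))

# Percentile bands at each time step, as in the Times gradient:
# the inner (darker) band is p25–p75, the outer (lighter) bands
# run p5–p25 and p75–p95.
p5, p25, p75, p95 = np.percentile(sims, [5, 25, 75, 95], axis=1)
```

Plotting `p5`/`p95` as a light ribbon and `p25`/`p75` as a darker one over time reproduces the visual grammar of figure 3.7: the band narrows as the forecast grows more certain.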

Although many election postmortems pronounced the 2016 election the Great Failure of Data and Statistics, because most simulations and other statistical models suggested that Clinton would win, most forecasts did include the possibility of a Trump victory. The underlying problem was not the failure of data but the difficulties of depicting uncertainty in visual form.

People are just not sufficiently trained to recognize uncertainty in graphics such as this. Rather than interpreting the gradient bands as probabilities (e.g., Trump had a 20 percent chance of winning at 6 p.m.), people may interpret them as votes (e.g., Trump had 20 percent of the vote at 6 p.m.). Or they may ignore the gradient bands altogether and look only at the lines. Or they see Clinton on top and assume she is winning. This is what the psychology literature calls heuristics—using mental shortcuts to make judgments—and it happens all the time when people are asked to assess probabilities. A large part of the problem is that visualization conventions reinforce these misjudgments. The graphics look so certain, even when they are trying their very hardest to visually illustrate uncertainty!

Figure 3.6: What is the best way to communicate uncertainty in a medium that looks so certain? Designers have created diverse chart forms to try to solve this problem. Depicted here are five violin plots; each shows the distribution of data along with their probability density (the purple part). You could also think of this form as a beautiful purple vagina, as the comic xkcd has observed. Images from the Data Visualisation Catalogue.

Figure 3.7: A 2016 chart from the New York Times that uses opacity—darker and lighter shades of blue and red—to indicate uncertainty. Images by Gregor Aisch, Nate Cohn, Amanda Cox, Josh Katz, Adam Pearce, and Kevin Quealy for the New York Times.

Jessica Hullman, whose work on rhetoric we’ve already mentioned, offers one solution to this problem. Instead of fixed plots like the New York Times example, which represent uncertainty in aggregate or static form, Hullman advocates for rendering experiences of uncertainty. In other words, leverage emotion and affect so that people experience uncertainty perceptually. Or, to invoke a common refrain from rhetorical training and design schools, “show, don’t tell.” Rather than telling people that they are looking at uncertainty while employing a certain-looking graphic style—which creates conditions ripe for those pesky heuristics to intervene—make them feel the uncertainty.

We can see a good example of showing uncertainty in action on the same New York Times live election coverage webpage. At the top of the page was a gauge (figure 3.8) that showed the New York Times’s real-time prediction of who was likely to win the race, with a gradient of categories that ranged from medium blue (“Very Likely” that Clinton would win) to medium red (“Very Likely” that Trump would win).

But the needle did not stay in one place. It jittered between the twenty-fifth and seventy-fifth percentiles, showing the range of outcomes that the New York Times was then predicting, based on simulations using the most recent data. At the beginning of the day, the range of motion was fairly wide but still only showed the needle on the Hillary Clinton side. As the night went on, its range narrowed, and the center moved closer and closer to the red side of the gauge. By 9 p.m., the needle jittered just a little, and on the Trump side only.
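The jitter described above amounts to repeatedly sampling a needle position from the middle 50 percent of simulated outcomes, so each frame of the animation is one plausible result rather than a single fixed estimate. Here is a minimal sketch of that idea; the distribution and the margin numbers are invented for illustration, not the Times's actual model.

```python
import numpy as np

# Hypothetical simulation draws of the forecast margin at one moment
# in the night (positive = one candidate ahead, negative = the other).
rng = np.random.default_rng(7)
draws = rng.normal(loc=-2.0, scale=3.0, size=10_000)

# Restrict the needle's range of motion to the middle 50 percent of
# outcomes, i.e., between the 25th and 75th percentiles.
p25, p75 = np.percentile(draws, [25, 75])
middle = draws[(draws >= p25) & (draws <= p75)]

def needle_position():
    """One frame of the jitter: a random draw from the middle 50%."""
    return rng.choice(middle)

frames = [needle_position() for _ in range(100)]
```

As new results arrive and the simulations narrow, `p25` and `p75` move closer together, which is exactly why the needle's wobble shrank over the course of the night.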

A number of New York Times readers were aggressive in their dislike of the jitter, calling it “irresponsible” and “unethical” and “the most stressful thing I’ve ever looked at online and I’ve seen a lot of stressful shit.” In response, Gregor Aisch, one of the designers of the gauge, defended it, explaining that “we thought (and still think!) this movement actually helped demonstrate the uncertainty around our forecast, conveying the relative precision of our estimates.”  So was this “unethical” design, or the sophisticated communication of uncertainty?

Figure 3.8: The controversial “jittering” election gauge featured in the New York Times coverage of the 2016 presidential election. Images by Gregor Aisch, Nate Cohn, Amanda Cox, Josh Katz, Adam Pearce, and Kevin Quealy for the New York Times.

Building on Hullman’s work, we’d say that the answer is the latter. The jittering election gauge was actually exhibiting current best practices for communicating uncertainty. It gave people the perceptual, intuitive, visceral, and emotional experience of uncertainty to reinforce the quantitative depiction of uncertainty.

The fact that it unsettled so much of the New York Times readership probably had less to do with the ethics of the visualization and more to do with the outcome of the election. So score one for emotion in the task of representing uncertainty.

