In Coming to Our Senses (Oxford University Press, 2014), cognitive scientist Viki McCabe argues that our tendency to privilege cognitive processes over the information we access with our perceptual systems often leads us to make inaccurate decisions based on oversimplified theories. Instead, we should rely on our sense of the structural organization of the world to accurately perceive the complex and interconnected systems that make up our environment. The following excerpt from “The Structure of Reality” offers an example of a situation in which expectancy bias and a failure to perceive structural data led to disaster.
“Only when the human organism fails to achieve an adequate response to its situation is there material for the processing of thought.”—L. L. Whyte
The USS Vincennes Incident
At 9:54 a.m. on July 3, 1988, the U.S. Navy cruiser Vincennes mistakenly shot down Iran Air’s Flight 655, killing all 290 people on board. It was the ninth worst incident in aeronautical history, and to make it even worse, the decision that led to these deaths was based on a theory of the situation rather than on supporting evidence. Here is a very brief recounting of how our minds can reframe reality.
When this incident began, the Vincennes was in Iranian territorial waters in violation of international law and had been mixing it up with several Iranian gunboats. At 9:47 a.m., a distant blip—an airplane lifting off from Bandar Abbas airport—appeared on the Vincennes’ radar, and the crew responded immediately with a standard Identification Friend or Foe (IFF) query. They received a Mode 3 Commair response, which identified the plane as a commercial airliner. But during the gunboat fracas, circumstances on the Vincennes had become chaotic, and in the confusion the crew ended up providing mixed messages—one member speculating that the blip could be an enemy F-14 fighter jet and another insisting the blip was a civilian plane.
“In the cramped and ambiguous combat environment of the Persian Gulf…the captain chose to rely on his own judgment.” He reportedly ran a simulation of the situation in his mind where he tried “to imagine what the pilot was thinking, what the pilot’s intent was.” His belief—that without direct evidence, we can nonetheless deduce what someone whom we do not know and cannot see is planning to do—could qualify as magical thinking. Yet without checking further, the captain developed the theory that the plane was an F-14 fighter and that it was diving directly at the Vincennes.
A simulation is not the situation itself. It is only a theory of the situation. A key point is that no one else actually saw this theorized threat. In fact, a crewmember standing right behind the captain later “testified that he never saw indications that the aircraft was descending.” Further, the commander of a nearby frigate, the USS Sides, reported that his radar showed an ascending, not a descending, plane. That plane was not only much larger than a fighter jet, but it was also flying in Iranian airspace over Iranian territorial waters on its regularly scheduled twice-weekly flight from Tehran, Iran, to Dubai, United Arab Emirates, via Bandar Abbas, Iran. The radar-tracking systems of the Sides and the Vincennes both covered that same airspace. When the record of the Vincennes’ tracking system was later reviewed, the information it showed was found to be identical to that from the USS Sides. How was it that the captains of these two ships reported seeing such different situations?
One clue was that each man approached the situation he was in very differently, which in turn influenced what each saw, or thought he saw. By creating a mental simulation, the captain of the Vincennes approached the situation through the filter of a theory he had constructed in his mind, while the commander of the Sides responded primarily to the information that he perceived on his radar screen. Of interest as well is that the commander of the USS Sides reported that in the week before the incident, “the actions of the Vincennes, which was equipped with the sophisticated and costly Aegis air defense system, were ‘consistently aggressive.’ ” In addition, in his Newsweek article, “Sea of Lies,” John Barry wrote that the Vincennes had deliberately instigated the earlier skirmishes with the Iranian gunboats despite the fact that “this was her first time in combat” and she was breaking “international rules of war.”
Since the Vincennes was acting illegally, it’s hard to imagine what her captain was thinking when, without provocation, he chose to confront Iranian gunboats. But it is even harder to imagine what he was thinking when he mistook a large Airbus passenger plane that was ascending for a much smaller, sleeker F-14 jet fighter that was diving, and shot it down. However, one thing was clear: What he thought he saw did not come from his direct perception of the situation at hand. It had to have come from the theory he had created in his mind. In this sense, the descending jet was simply part of the captain’s theory of the situation, not a component of the situation itself. Since he was instrumental in shooting this purported jet down, it is not difficult to conclude that this theory also implied that the Vincennes was about to be attacked by an enemy plane, and that an immediate defensive response was required. Again, there was no evidence to indicate that these imagined circumstances reflected the reality at hand. Yet due to that theory, the passengers on Iran Air Flight 655 never reached their destination or lived out their lives.
The only way for this situation to have been assessed accurately was to carefully observe the radar screen and to track the location and activity of any planes in the area. At the very least, that information would have shown whether a plane was in military airspace and could be a fighter jet, or whether it was in civilian airspace and was likely to be a harmless passenger or cargo plane. The fatal error was the failure to pay sufficient attention to the information the radar revealed, and instead to reframe reality using an unsubstantiated theory of being under attack.
The Influence of the Expectancy Bias
I apply the term theory broadly. It refers to ideas about the world that originate in someone’s mind, rather than from observable evidence. Also, I make a distinction between the structure of a phenomenon, which reveals who or what it is and what it is doing, and its content, which consists of a narrative description of that phenomenon that we create from a mentally abstracted subset of its parts and assemble into a representation. The perceivable structure of the Vincennes incident and the information that specified it was the tracking configuration on the radar screen that showed where and when the blip appeared. But the captain’s version of this event was a story in which he switched the components of that configuration to match his misconceived theory that the Vincennes was in danger.
University of Michigan psychologist Richard Nisbett testified before Congress that both the Vincennes’ captain and his crew suffered from “expectancy bias.” Expectancy bias occurs when people expecting something to happen allow this to distort their view of what is actually happening to match their expectations. Nisbett proposed that because the Vincennes’ crew believed the blip was a hostile plane, they failed to see the ascending Airbus. Instead they apparently imagined a descending enemy fighter. But expectations, like simulations, are similar to theories. All three are mental versions of situations as opposed to perceptions that reveal the situations themselves. In other words, by pointing the finger at the people involved and their possible propensities to see what they expected to “see” instead of what was actually there, Nisbett overlooked the more basic role that substituting a cognitive for a perceptual process—a theory for actual evidence—played in promoting this event. We often forget that our cognitive processes lack windows on the world. They receive their information about what goes on outside ourselves from our perceptual systems. They then translate that complex intelligence into simpler symbolic forms that are often influenced by our preconceptions, theories, beliefs, and general worldview. Without such a theory to set the stage, the captain’s and the crew’s expectancy bias would have no ground upon which to play out.
The Navy compounded the situation by creating false videos to cover up what actually happened. The Iranians, enraged at such a maneuver, accused the United States of a “barbaric massacre” and “vowed to avenge the blood of their martyrs.” There have been unconfirmed rumors that to retaliate, the Ayatollah Khomeini retained a hit man who, on December 21, 1988, blew up Pan Am Flight 103 over Lockerbie, Scotland. On November 16, 2003, the International Court of Justice concluded that the actions of the Vincennes in the Persian Gulf were unlawful. The most important fact to take away from this dismal tale is that the outcome would have been very different if the captain and crew of the Vincennes had simply put their theories aside and paid more attention to the information on the radar screen. That information revealed the true structure of this complex event, in which the location of the blip, the commercial airspace on the radar, and the ascending Airbus in the sky were linchpin components.
Structural Information Versus Mental Bias
This situation was particularly tragic, but not that surprising. Because we are easily seduced by words and the simpler narrative explanations, simulations, and theories we create in our minds, they often override and suppress the complex systems we perceive in the world. Our tendency to privilege our mental over our perceptual processes often distracts our attention from our direct experience. This mental bias is aided and abetted by our tendency to create and keep our theories at the ready in our conscious awareness—a state of affairs that can easily lead us to jump to conclusions—while the structural information our perceptions access often remains below our own radar.
It is not that we do not continually perceive and use structural information to guide our behavior, it is that we often remain consciously unaware that such information exists. For example, we can easily recognize a friend who is walking toward us from the configurations produced by their movements, but those configurations rarely surface into our conscious awareness. If we were asked how we knew who that person was, we would likely come up with a list of their separate features—blue eyes, red hair, and freckles—but we would be wrong. Even though we are conscious of people’s individual features, research shows that recognition is not based on features. It depends on our perception of facial structure.
Ironically, the feature information we are aware of and consciously abstract from phenomena is often unreliable, leading us to make uninformed, error-filled decisions, while the structural information that we are largely unaware of is far more precise and supports more informed and accurate decisions. Fortunately, this curious situation does not affect most of our behavior, because the roughly 95% of our actions that are guided by such information also occur on autopilot; but serious problems can arise when we make conscious decisions based on cognitively distorted information. For example, using facial features as evidence rather than facial or body structure resulted in a case of mistaken identity that sent an innocent man to prison for life. In such instances, facial features can best be characterized as the misused content of facial structure.
To recap, recognition is a direct perceptual act that relies on structural rather than content information. In contrast, abstracting features from a person’s facial structure is a cognitive act because abstraction requires agency (who chooses?) and choice (which features?). While our senses are reciprocally structured to detect the information that is there, they are not equipped (as our cognitive processes are) to add their two cents to the mix. Further, cognitive processes such as abstraction and theorizing are easily influenced by our resident biases, which then override our more veridical sensory experience. When we do have such biases, it is difficult for us to give them up. Instead we tend to rationalize and defend the theories and decisions they promote as if they were part of us. In fact, in a project at the U.S. Office of Naval Research that analyzed the Vincennes incident for the Navy, the participants produced an elaborate explanation of what happened that in effect rationalized the decision to shoot down the Iranian Airbus. To their credit, however, in a write-up of that project they did note that “many have claimed that there were clear flaws or biases in the decision making of the crew of the Vincennes.” Post hoc explanations such as the one put forth by the Office of Naval Research can be bent in one direction or another depending on which components of a complex event such as the Vincennes incident one includes. But one factor is not at issue: where evidence comes from. It comes from the world, not from mental simulations or theories. In this case, the only evidence that was actually perceived by several different people, including the commander of the Sides, was the fact that the plane in question was ascending in commercial airspace.
What is so peculiar about making up our own version of a situation in our minds is that, in most cases, all the information we need to make an accurate decision is visible from the situation at hand, as it was on the Vincennes’ radar screen. There is a parallel true story with one difference: the radar reader acted on the structural information he saw, not on content information he theorized. As a result, he saved the USS Missouri and her entire crew.
The key point to take away from this analysis is that the structural information the world displays and our content-driven representations of that information are considerably different. While our conscious thought processes rely on the content that we abstract from the world and assemble into representations that can be infiltrated by our preconceptions and biases, the real world was created by the syntax of space and time and the grammar of gravity. Because the world and the complex systems that constitute it are under constant transformation, the information it displays does not come packaged as static assemblies of features and parts from which we construct theories. Rather it reveals itself in dynamic configurations that emerge from the self-organized interactions of each system’s constituent components. In this way, the pattern of information on a ship’s radar screen emerges from the interactions of the planes or missiles flying over the area with the time and location at which they appear on the screen.
Reprinted from Coming to Our Senses: Perceiving Complexity to Avoid Catastrophes by Viki McCabe with permission from Oxford University Press USA. Copyright © Oxford University Press, 2014.