Chapter 37

Beyond Rational Choice

Reason is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

—David Hume, A Treatise of Human Nature, 1740

The presumption of rationality was the most contentious feature of formal theories. Individuals were presumed rational if they behaved in such a way that their goals, which could be obnoxious as well as noble, would be most likely to be achieved. This was the point made by the eighteenth-century philosopher David Hume. He was as convinced of the importance of reason as he was that it could not provide its own motivation. This would come from a great range of possible human desires: “Ambition, avarice, self-love, vanity, friendship, generosity, public spirit,” which would be “mixed in various degrees and distributed through society.”1 As Downs put it, the rational man “moves towards his goals in a way which to the best of his knowledge uses the least possible input of scarce resources per unit of valued output.” This also required focusing on one aspect of an individual and not his “whole personality.” The theory “did not allow for the rich diversity of ends served by each of his acts, the complexity of his motives, the way in which every part of his life is intimately related to his emotional needs.”2 Riker wrote that he was not asserting that all behavior was rational, but only that some behavior was “and that this possibly small amount is crucial for the construction and operation of economic and political institutions.”3 In addition, the settings in which actors were operating—whether a congressional election, legislative committee, or revolutionary council—were also taken as givens, unless the issues being studied concerned establishing new institutions.
The challenge then was to show that collective political outcomes could be explained by individuals ranking “their preferences consistently over a set of possible outcomes, taking risk and uncertainty into consideration and acting to maximize their expected payoffs.” This could easily become tautological because the only way that preferences and priorities could be discerned was by examining the choices made in actual situations.

The main challenge to the presumption that intentional, egotistical choice was the best basis for understanding human behavior was that it was consistently hard to square with reality. To take a rather obvious example, researchers tried to replicate the prisoner’s dilemma in the circumstances in which it was first described.4 Could prosecutors gain leverage in cases involving codefendants by offering the prospect of a reduced sentence in return for information or testimony against other codefendants? The evidence suggested that it made no difference to the rates of pleas, convictions, and incarcerations in robbery cases with or without codefendants. The surmised reason for this was the threat of extralegal sanctions that offenders could impose on each other. The codefendants might be kept separate during the negotiations, but they could still expect to meet again.5 To the proponents of rational choice, such observations were irrelevant. The claim was not that rational choice replicated reality but that as an assumption it was productive for the development of theory.
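The underlying logic of the prisoner's dilemma can be made concrete with a short sketch. The payoff values below are the conventional textbook sentences, not figures from the study cited; the point is only to show why testifying is the individually rational choice even though mutual silence is better for both.

```python
# Sentences in years (lower is better), keyed by (my move, other's move).
# These numbers are standard illustrative values, not from the cited study.
SENTENCES = {
    ("stay_silent", "stay_silent"): 1,
    ("stay_silent", "testify"): 10,
    ("testify", "stay_silent"): 0,
    ("testify", "testify"): 5,
}

def best_response(others_move: str) -> str:
    """Return the move that minimizes my sentence, given the other's move."""
    return min(("stay_silent", "testify"),
               key=lambda my_move: SENTENCES[(my_move, others_move)])

# Testifying is a dominant strategy: best response whatever the other does,
# even though mutual silence (1 year each) beats mutual testimony (5 years).
assert best_response("stay_silent") == "testify"
assert best_response("testify") == "testify"
```

The extralegal sanctions described above effectively change these payoffs, which is why the dilemma failed to replicate in practice.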

By the 1990s, the debate on rationality appeared to have reached a stalemate, with all conceivable arguments exhausted on both sides. It was, however, starting to be reshaped by new research, bringing insights from psychology and neuroscience into economics. The standard critique of rational choice theory was that people were just not rational in the way that the theory assumed. Instead, they were subject to mental quirks, ignorance, insensitivity, internal contradictions, incompetence, errors in judgment, overactive or blinkered imaginations, and so on. One response to this criticism was to say that there was no need for absurdly exacting standards of rationality. The theory worked well enough if it assumed people were generally reasonable and sensible, attentive to information, open-minded, and thoughtful about consequences.6

As a formal theory, however, rationality was assessed in terms of the ideal of defined utilities, ordered preferences, consistency, and a statistical grasp of probabilities when relating specific moves to desired outcomes. This sort of hyper-rationality was required in the world of abstract modeling. The modelers knew that human beings were rarely rational in such an extreme form, but their models required simplifying assumptions. The method was deductive rather than inductive, less concerned with observed patterns of behavior than developing hypotheses which could then be subjected to empirical tests. If what was observed deviated from what was predicted, that set a research task that could lead to either a more sophisticated model or specific explanations about why a surprising result occurred in a particular case. Predicted outcomes might well be counterintuitive but then turn out to be more accurate than those suggested by intuition.

One of the clearest expositions of what a truly rational action required was set out in 1986 by Jon Elster. The action should be optimal, that is, the best way to satisfy desire, given belief. The belief itself would be the best that could be formed, given the evidence, and the amount of evidence collected would be optimal, given the original desire. Next the action should be consistent so that both the belief and the desire were free of internal contradictions. The agent must not act on a desire that, in her own opinion, was less weighty than other desires which might be reasons for not acting. Lastly, there was the test of causality. Not only must the action be rationalized by the desire and the belief, but it must also be caused by them. This must also be true for the relation between belief and evidence.7

Except in the simplest of situations, meeting such demanding criteria for rational action required a grasp of statistical methods and a capacity for interpretation that could only be acquired through specialist study. In practice, faced with complex data sets, most people were apt to make elementary mistakes.8 Even individuals capable of following the logical demands of such an approach were unlikely to be prepared to accept the considerable investment it would involve. Some decisions were simply not worth the time and effort to get them absolutely right. The time might not even be available in some instances. Gathering all the relevant information and evaluating it carefully would use up more resources than the potential gains from getting the correct answer.

If rational choices required individuals to absorb and evaluate all available information and analyze probabilities with mathematical precision, it could never capture actual human behavior. As we have seen, the urge to scientific rigor that animated rational choice theory only really got going once actors sorted out their preferences and core beliefs. The actors came to the point where their calculations might be translated into equations and matrices as formed individuals, with built-in values and beliefs. They were then ready to play out their contrived dramas. The formal theorists remained unimpressed by claims that they should seek out more accurate descriptions of human behavior, for example, by drawing on the rapid advances in understanding the human brain. One economist patiently explained that this had nothing to do with his subject. It was not possible to “refute economic models” by this means because these models make “no assumptions and draw no conclusions about the physiology of the brain.” Rationality was not an assumption but a methodological stance, reflecting a decision to view the individual as the unit of agency.9

If rational choice theory was to be challenged on its own terms, the alternative methodological stance had to demonstrate that it not only approximated better to perceived reality but also that it would produce better theories. The challenge was first set out in the early 1950s by Herbert Simon. He had a background in political science and a grasp of how institutions worked. After entering economics through the Cowles Commission, he became something of an iconoclast at RAND. He developed a fascination with artificial intelligence and how computers might replicate and exceed human capacity. This led him to ponder the nature of human consciousness. He concluded that a reliable behavioral theory must acknowledge elements of irrationality and not just view them as sources of awkward anomalies. While at the Carnegie Graduate School of Industrial Administration, he complained that his economist colleagues “made almost a positive virtue of avoiding direct, systematic observations of individual human beings while valuing the casual empiricism of the economist’s armchair introspections.” At Carnegie he went to war against neoclassical economics and lost. The economists grew in numbers and power in the institution and had no interest in his ideas of “bounded rationality.”10 He gave up on economics and moved into psychology and computer science. The idea of “bounded rationality,” however, came to be recognized as offering a compelling description of how people actually made decisions in the absence of perfect information and computational capacity. It accepted human fallibility without losing the predictability that might still result from a modicum of rationality. Simon showed how people might reasonably accept suboptimal outcomes because of the excessive effort required to get to the optimal.
Rather than perform exhaustive searches to get the best solution, they searched until they found one that was satisfactory, a process he described as “satisficing.”11 Social norms were adopted, even when inconvenient, to avoid unwanted conflicts. When the empirical work demonstrated strong and consistent patterns of behavior this might reflect the rational pursuit of egotistical goals, but alternatively these patterns might reflect the influence of powerful conventions that inclined people to follow the pack.
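The contrast between satisficing and optimizing can be sketched in a few lines. The aspiration level and option set here are invented for illustration; the point is that the satisficer stops as soon as a good-enough option appears, while the optimizer must evaluate everything.

```python
def satisfice(options, utility, aspiration):
    """Simon's idea: search options in order and stop at the first one
    meeting the aspiration level, rather than evaluating them all."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing satisfactory was found

def optimize(options, utility):
    """The exhaustive alternative: evaluate every option and take the best."""
    return max(options, key=utility)

options = list(range(100))
utility = lambda x: x
print(satisfice(options, utility, aspiration=50))  # stops early -> 50
print(optimize(options, utility))                  # scans all 100 -> 99
```

When evaluating an option is costly, the small shortfall in outcome (50 rather than 99) can be more than repaid by the search effort saved.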

Building upon Simon’s work, Amos Tversky and Daniel Kahneman introduced further insights from psychology into economics. To gain credibility, they used sufficient mathematics to demonstrate the seriousness of their methodology and so were able to create a new field of behavioral economics. They demonstrated how individuals used shortcuts to cope with complex situations, relying on processes that were “good enough” and interpreted information superficially using “rules of thumb.” As Kahneman put it, “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.”12 The Economist summed up what behavioral research suggested about actual decision-making:

[People] fear failure and are prone to cognitive dissonance, sticking with a belief plainly at odds with the evidence, usually because the belief has been held and cherished for a long time. People like to anchor their beliefs so they can claim that they have external support, and are more likely to take risks to support the status quo than to get to a better place. Issues are compartmentalized so that decisions are taken on one matter with little thought about the implications for elsewhere. They see patterns in data where none exist, represent events as an example of a familiar type rather than acknowledge distinctive features and zoom in on fresh facts rather than big pictures. Probabilities are routinely miscalculated, so . . . people . . . assume that outcomes which are very probable are less likely than they really are, that outcomes which are quite unlikely are more likely than they are, and that extremely improbable, but still possible, outcomes have no chance at all of happening. They also tend to view decisions in isolation, rather than as part of a bigger picture.13

Of particular importance were “framing effects.” These were mentioned earlier as having been identified by Goffman and used in explanations of how the media helped shape public opinion. Framing helped explain how choices came to be viewed differently by altering the relative salience of certain features. Individuals compared alternative courses of action by focusing on one aspect, often randomly chosen, rather than keeping all key aspects in the frame.14 Another important finding concerned loss aversion. The value of a good to an individual appeared to be higher when viewed as something that could be lost or given up than when evaluated as a potential gain. Richard Thaler, one of the first to incorporate the insights from behavioral economics into mainstream economics, described the “endowment effect,” whereby the selling price for consumption goods was much higher than the buying price.15
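Loss aversion has a standard formalization in Tversky and Kahneman's prospect-theory value function, sketched below. The parameter values are the ones they estimated (alpha = beta = 0.88, lambda = 2.25) and are used here purely for illustration.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: gains are evaluated as x**alpha,
    losses as -lam * (-x)**beta. Because lam > 1, a loss looms larger
    than an equal-sized gain (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Losing $100 hurts roughly twice as much as gaining $100 pleases:
assert abs(value(-100)) > value(100)
```

The endowment effect follows directly: giving up a good is coded as a loss and so weighs more heavily than acquiring the same good, which is coded as a gain.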


Another challenge to the rational choice model came from experiments that tested propositions derived from game theory. These were not the same as experiments in the natural sciences, which should not be context dependent. Claims that some universal truths about human cognition and behavior were being illuminated needed qualification. The results could only really be considered at all valid for Western, educated, industrialized, rich, and democratic (WEIRD) societies in which the bulk of the experiments were conducted. Nonetheless, while WEIRD societies were admittedly an unrepresentative subset of the world’s population, they were also an important subset.16

One of the most famous experiments was the ultimatum game. It was first used in an experimental setting during the early 1960s in order to explore bargaining behavior. From the start, and to the frustration of the experimenters, the games showed individuals making apparently suboptimal choices. A person (the proposer) was given a sum of money and then chose what proportion another (the responder) should get. The responder could accept or refuse the offer. If the offer was refused, both got nothing. A Nash equilibrium based on rational self-interest would suggest that the proposer should make a small offer, which the responder should accept. In practice, notions of fairness intervened. Responders regularly refused to accept anything less than a third, while most proposers were inclined to offer something close to half, anticipating that the other party would expect fairness.17 Faced with this unexpected finding, researchers at first wondered if there was something wrong with the experiments, such as whether there had been insufficient time to think through the options. But giving people more time or raising the stakes to turn the game into something more serious made little difference. In a variation known as the dictator game, the responder was bound to accept whatever the proposer granted. As might be expected, lower offers were made—perhaps about half the average sum offered in the ultimatum game.18 Yet, at about 20 percent of the total, they were not tiny.
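The payoff structure of the ultimatum game is simple enough to state as code. The sums and the responder's fairness threshold below are invented for illustration; the rational-choice prediction corresponds to a threshold near zero, while the experimental finding is a threshold of roughly a third of the pot.

```python
def ultimatum(pot, offer, min_acceptable):
    """One round of the ultimatum game: the proposer offers `offer` out
    of `pot`; the responder accepts only if the offer meets a fairness
    threshold. Rejection leaves both players with nothing."""
    if offer >= min_acceptable:
        return pot - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0

# A purely self-interested responder accepts any positive offer...
print(ultimatum(10, 1, min_acceptable=1))  # -> (9, 1)
# ...but actual responders reject offers below about a third:
print(ultimatum(10, 1, min_acceptable=4))  # -> (0, 0)
print(ultimatum(10, 5, min_acceptable=4))  # -> (5, 5)
```

The dictator game is the degenerate case in which the threshold is forced to zero, so any division the proposer chooses stands.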

It became clear that the key factor was not faulty calculation but the nature of the social interaction. In the ultimatum game, the responders accepted far less if they were told that the amount had been determined by a computer or the spin of a roulette wheel. If the human interaction was less direct, with complete anonymity, then proposers made smaller grants.19 A further finding was that there were variations according to ethnicity. The amounts distributed reflected culturally accepted notions of fairness. In some cultures, the proposers would make a point of offering more than half; in others, the responders were reluctant to accept anything. It also made a difference if the transaction was within a family, especially in the dictator game. Playing these games with children also demonstrated that altruism was something to be learned during childhood.20 As they grew older, most individuals turned away from the self-regarding decisions anticipated by classical economic theory and became more other-regarding. The exceptions were those suffering from neural disorders such as autism. In this way, as Angela Stanton caustically noted, the canonical model of rational decision-making treated the decision-making ability of children and those with emotional disorders as the norm.21

The research confirmed the importance of reputation in social interactions.22 The concern with influencing another’s beliefs about oneself was evident when there was a need for trust, for example, when there were to be regular exchanges. This sense of fairness and concern about reputation, though it appeared instinctive and impulsive, was hardly irrational. It was important for an individual to have a good reputation to consolidate her social networks, while a social norm that sustained group cohesion was worth upholding. There was further experimental evidence suggesting that when a proposer had been insufficiently altruistic, the responders would not accept their reward in order to ensure that the miserly proposer was punished.23

Another experiment involved a group of investors. When any one of them made an investment, everyone else gained, while the investor took a small loss. These losses should not have mattered, for they were covered by the gains resulting from the investments of others. Those motivated by a narrow self-interest would see an incentive to become a free rider. They could avoid losses by making no personal investments while benefiting from the investments of others. They would then gain at the expense of the group. Such behavior would soon lead to a breakdown in cooperation. To prevent this would require the imposition of sanctions by the rest of the group, even though this would cost them as individuals. When given a choice of which group to join, individuals at first often recoiled from joining one with known sanctions against free riders but eventually would migrate to that group, as they appreciated the importance of ensuring cooperation.
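The incentive structure of such public goods experiments can be sketched in a few lines. The group size, endowments, and multiplier here are invented for illustration; what matters is that each unit invested returns less than a unit to the investor but benefits the group as a whole, so the free rider comes out ahead.

```python
def public_goods_round(contributions, multiplier=1.5):
    """One round of a public goods game: contributions are pooled,
    multiplied, and shared equally. Returns each player's net payoff
    (equal share of the pool minus own contribution). The multiplier
    value is illustrative, not taken from the experiments cited."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [share - c for c in contributions]

# Three players invest 10; the fourth free rides and nets the most.
payoffs = public_goods_round([10, 10, 10, 0])
print(payoffs)  # -> [1.25, 1.25, 1.25, 11.25]
```

Each contributor recovers only 1.5/4 of every unit invested (the "small loss"), while the free rider collects an equal share of the pool at no cost, which is why unsanctioned cooperation unravels.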

Free riders, or unfair proposers in the ultimatum game, were also stigmatized. In another experiment, individuals who expected to play by the rules were told in advance of the game the identities of other players who would be free riders. Once these individuals had been described as less trustworthy, they were generally seen as less likable and attractive. When the games were underway, this prior profiling influenced behavior. There was a reluctance to take risks with those designated untrustworthy, even when these individuals were acting no differently from others. Little effort was made to check their reputations against actual behavior during the game. In experiments which showed individuals described as either free riders or cooperators experiencing pain, far less empathy was shown for the free riders than for the cooperators.24

One response from those committed to the rational actor model was that it was interesting but irrelevant. The experiments involved small groups, often graduate students. It was entirely possible that as these types of situations became better understood, behavior would tend to become more rational as understood by the theory. Indeed, there was evidence that when these games were played with subjects who were either professors or students in economics and business, players acted in a far more selfish way, were more likely to free ride, were half as likely to contribute to a public good, kept more resources for themselves in an ultimatum game, and were more likely to defect in a prisoner’s dilemma game. This fit in with studies that showed economists to be more corruptible and less likely to donate to charity.25 One researcher suggested that the “experience of taking a course in microeconomics actually altered students’ conceptions of the appropriateness of acting in a self-interested manner, not merely their definition of self-interest.”26 In studies of traders in financial markets, it transpired that while the inexperienced might be influenced by Thaler’s “endowment effect,” for example, the experienced were not.27 This might not be flattering to economists, but it did show that egotistical behavior could also be quite natural. This argument, however, could be played back to the formal theorists. To be sure, it showed the possibility of self-interested and calculating behavior but it also required a degree of socialization. If it could not be shown to occur naturally and if it had to be learned, then that demonstrated the importance of social networks as a source of guidance on how to behave.

When individuals were acting as consumers in a marketplace or in other circumstances that encouraged them to act as egotistical and self-regarding, their behavior could get close to what might be expected from models that assumed such conduct. The experiments employed to explore the degree of actual rationality reflected the preoccupations with a particular sort of choice, a type “with clearly defined probabilities and outcomes, such as choosing between monetary gambles.”28 It was almost by accident that as researchers sought to prove the rational actor models through experiments they came to appreciate the importance of social pressures and the value attached to cooperation. Within the complex social networks of everyday life, truly egotistical and self-regarding behavior was, in a basic sense, irrational.

Attempts were made to recast formal theories to reflect the insights of behavioral psychology, in the guise of behavioral economics, but they made limited progress. The most important insight from the new research was that rather than studying individuals as more complex and rounded than the old models assumed, it was even more important to study them in their social context.

Only a very particular view of rationality considered cooperation irrational and failed to understand why it made sense to make sacrifices to punish the uncooperative and free riders in order to uphold norms and sustain cooperative relationships. Many social and economic transactions would become impossible if at every stage there was suspicion and reason to doubt another’s motives. The essence of trust was to knowingly and willingly accept a degree of vulnerability, aware that trustees might intend harm but finding it more profitable to assume that they did not. The evidence suggested that by and large people would prefer to trust others than not to trust. There were formidable normative pressures to honor commitments once made, and a reputation for untrustworthiness could prove to be a hindrance. Life became a lot easier if the people with whom one was dealing trusted and could be trusted in turn, saving the bother of complicated contracts and enforcement issues. Trusting another did not necessarily assume good faith. The calculus could be quite balanced. On occasion there might be no choice but to trust someone, even though there were indicators to prompt suspicion, because the alternative of not trusting was even more likely to lead to a bad result. In other circumstances, with little information one way or another, accepting another’s trustworthiness would involve a leap of faith. This was why deception was deplored. It meant taking advantage of another’s trust, hiding malicious intent behind a mask of good faith. Trust involved accepting evidence of another’s intentions; deception involved faking this evidence.29

So important was trust that even when clues were arriving thick and fast that they were being deceived, individuals could stay in denial for a surprising time. A confidence trickster might be vulnerable to intensive probing and so would rely on those who were inclined to accept his story: the woman yearning for love or the greedy looking for a get-rich-quick proposition. Research showed that people were “poor deception-detectors and yet are overconfident of their ability to detect deception.”30 “Cognitive laziness” led to shortcuts that resulted in misapprehending people and situations, failing to explore context, ignoring contradictions, and sticking with an early judgment of another’s trustworthiness.31


The ability to recognize different traits in people, to distinguish them according to their personalities, is essential to all social interaction. It might be difficult to predict how people in general will respond to particular situations, but to the extent that the responses of specific individuals can be anticipated, their behavior might also be manipulated.

The process of developing theories about how other minds work has been described as “mentalization.” Instead of assuming that other minds resembled one’s own, by observing the behavior of others it became evident that others had distinctive mental and emotional states. The quality of empathy, of being able to feel as another feels, was drawn from the German Einfühlung, which was about the process of feeling one’s way into an art object or another person. Empathy might be a precursor to sympathy, but it was not the same. With empathy one could feel another’s pain; with sympathy one would also pity another for his pain. It could be no more than sharing another’s emotional state in a vicarious way, but also something more deliberative and evaluative, a form of role-playing.

Mentalization involved three distinct sets of activity, working in combination. The first set was an individual’s own mental state and those of others represented in terms of perceptions and feelings, rather than the true features of the stimuli that prompted the perceptions and feelings in the first place. They were beliefs about the state of the world rather than the actual state of the world. When simulating the mental states of others, people would be influenced by what was known of their past behavior and also of those aspects of the wider world relevant to the current situation. The second set of activities introduced information about observed behavior. When combined with what could be recalled from the past, this allowed for inferences about mental states and predictions about the next stage in a sequence of behavior. The third set was activated by language and narrative. Frith and Frith concluded that this drew on past experience to generate “a wider semantic and emotional context for the material currently being processed.”32

This wider context could be interpreted using a “script.” The concept comes from Robert Abelson, who developed an interest during the 1950s in the factors shaping attitudes and behavior. His work was stimulated by a 1958 RAND workshop with Herbert Simon on computer simulations of human cognition. Out of this came a distinction between “cold” cognition, where new information was incorporated without trouble into general problem-solving, and “hot” cognition, where it posed a challenge to accepted beliefs. Abelson became perplexed by the challenges posed by cognition for rational thinking and in 1972 wrote of a “theoretical despond,” as he “severely questioned whether information has any effect upon attitudes and whether attitudes have any effects upon behaviour.” It was at this point that he hit upon the idea of scripts. His first thoughts were that they would be comparable to a “role” in psychological theory and a “plan” in computer programming, “except that it would be more occasional, more flexible, and more impulsive in its execution than a role or plan, and more potentially exposed in its formation to affective and ‘ideological’ influences.”33 This led to his work with Roger Schank. Together they developed the idea of a script as a problem in artificial intelligence to refer to frequently recurring social situations involving strongly stereotyped conduct. When such a situation arose, people resorted to the plans which underlay these scripts.34 Thus, a script involved a coherent sequence of events that an individual could reasonably expect in these circumstances, whether as a participant or as an observer.35

Scripts referred to the particular goals and activities taking place in a particular setting at a particular time. A common example was a visit to a restaurant: the script helped anticipate the likely sequence of events, starting with the menu and its perusal, ordering the food, tasting the wine, and so on. In situations where it became necessary to make sense of the behavior of others, the appropriate script created expectations about possible next steps, a framework for interpretation. As few scripts were followed exactly, the other mentalizing processes allowed them to be adapted to the distinctive features of the new situation. We will explore the potential role of scripts in strategy in the next section.
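The restaurant example lends itself to a minimal sketch of a Schank–Abelson-style script: a stereotyped event sequence used to generate expectations about what comes next. The event names below are invented for illustration.

```python
# A script as a stereotyped sequence of events (names are illustrative).
RESTAURANT_SCRIPT = ["enter", "be_seated", "read_menu", "order",
                     "eat", "ask_for_bill", "pay", "leave"]

def expect_next(observed, script=RESTAURANT_SCRIPT):
    """Given the events observed so far, return the next step the script
    leads us to expect, or None if the script has run its course.
    Events not in the script are simply skipped, reflecting the point
    that few scripts are followed exactly."""
    i = 0
    for event in observed:
        if i < len(script) and event == script[i]:
            i += 1
    return script[i] if i < len(script) else None

print(expect_next(["enter", "be_seated", "read_menu"]))  # -> order
```

The script supplies the default expectation; the other mentalizing processes described above handle the deviations a real situation introduces.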

Individuals varied in their ability to mentalize. Those who were more cooperative, had a higher degree of emotional intelligence, and enjoyed larger social networks tended to be better mentalizers. It might be thought that this would also be an attribute of those of a Machiavellian disposition, who were inclined to deceive and manipulate. This might be expected to depend on an ability to understand another’s mind and its vulnerabilities. While such people might lack empathy or hot cognition, the expectation would be of a degree of cold cognition, an insight into what another knows and believes. Yet studies of individuals described as “Machiavellian”—used in psychological studies to refer to somewhat callous and selfish personalities largely influenced by rewards and punishments—suggested that both their hot and cold cognition were limited. This led to the proposition that these individuals’ limited ability to mentalize meant that they found it easier to exploit and manipulate others because there was little to prompt guilt and remorse.36 There could therefore be individuals who were so naturally manipulative that they were apparently incapable of dealing with other people on any other basis.

Such findings arguably provided more support for the view that the rational actor celebrated in economic theory tended to the psychopathic and socially maladroit. As Mirowski notes, in an awkward soliloquy, it was striking how many of the theorists who insisted on an egotistical rationality, who claimed to “theorize the very pith and moment of human rationality”—of which Nash was but one example—were not naturally empathetic and lived very close to the mental edge, at times tipping over into depression and even suicide.37

The issue, however, was relevant for two other reasons. First it highlighted an important distinction between traits such as deception or Machiavellianism as affecting instinctive behavior, and strategies involving deception emerging out of a deliberate process of reasoning. Second, it recalled attitudes toward those who relied on tricks and cunning, which was to deplore this when directed inward into one’s own society while often applauding when applied outward against enemies. This pointed to a different sort of challenge, for mentalization should be relatively straightforward and reasonably reliable with the in-group with whom interaction was regular and a culture and background were shared. With an out-group, about whom less was known and suspicions were harbored, mentalization would be much more difficult. It was hard to empathize with those perceived to be remote, unattractive, and bad. So there could be an easy grasp of the likely thinking of fellow members of the in-group, facilitating cooperation. And where there were difficulties, they could be addressed through direct communication. The minds that were most important to fathom and penetrate, however—especially during a conflict—would be those of the out-group. Not only would it be a challenge to address preconceptions and prejudices in order to produce a rounded picture, but there would also be fewer opportunities to communicate to clarify areas of difference.

Systems 1 and 2

From all of this a complex picture of decision-making emerged. It was at all times influenced by the social dimension and emphasized the importance of familiarity; the effort required to understand the distant and menacing; the inclination to frame issues in terms of past experiences, often quite narrowly and with a short-term perspective; and the use of shortcuts (heuristics) to make sense of what was going on. None of this fit easily with descriptions in terms of the systematic evaluation of all options, a readiness to follow an algorithmic process to the correct answer, employing the best evidence and analysis, keeping long-term goals clearly in mind. Yet at the same time, and despite the regular derision directed at decision-making that relied on hunch and intuition, apparently instinctive decisions were often more than adequate and at times even better than might be managed by intensive deliberation.38 It was even relevant to academics in their choice of theories. As Walt observed, the time spent learning the complex mathematics demanded by some formal theories was time spent not “learning a foreign language, mastering the relevant details of a foreign policy issue, immersing oneself in a new body of theoretical literature, or compiling an accurate body of historical data.”39

As a combination of neuroimaging and experimental games illuminated the areas of the brain activated by different forms of cognition and decision-making, the sources of the tension between the bottom-up, instinctive processes and the top-down, deliberative processes could be detected. The parts of the brain associated with earlier evolutionary stages, the brain stem and the amygdala, were associated with choices defined by feelings and marked by instincts and mental shortcuts. Dopamine neurons automatically detected patterns in the stimuli coming in from the environment and matched them with stored information derived from experience and learning. These were connected by the orbitofrontal cortex (OFC) to conscious thought. It was the expansion of the frontal cortex during evolution that gave humans their comparative advantage in intelligence. Here could be detected the influence of explicit goals (such as holding on to a good reputation or making money). When trying to understand other people and what they might do, the medial prefrontal cortex and anterior paracingulate cortex became activated. These were not activated when playing a computer game because there was no point in trying to assess a computer’s intentions. Yet compared with the notionally more primitive brain, the prefrontal cortex appeared limited in its computational capacity, barely able to handle seven things at once.

Jonah Lehrer summed up the implications of the research:

The conventional wisdom about decision-making has got it exactly backward. It is the easy problems—the mundane math problems of daily life—that are best suited to the conscious brain. These simple decisions won’t overwhelm the prefrontal cortex. In fact they are so simple that they tend to trip up the emotions, which don’t know how to compare prices or compute the odds of a poker hand. (When people rely on their feelings in such situations, they make avoidable mistakes, like those due to loss aversion and arithmetical errors.) Complex problems, on the other hand, require the processing powers of the emotional brain, the supercomputer of the mind. This doesn’t mean you can just blink and know what to do—even the unconscious takes a little time to process information—but it does suggest that there’s a better way to make difficult decisions.40

When the actual processes of decision-making were considered, there was therefore very little relationship to the formal model of decision-making. Emotion could no longer be seen as something separate from reason and apt to lead reason astray, so that only a dispassionate intellectual discipline, the sort displayed by Plato’s philosopher-kings, could ensure rational control. Instead emotion appeared as bound up with all thought processes.41 Neuroimaging confirmed the extraordinary activity involved in evaluating situations and options before the conclusions reached human consciousness. The revelation lay in just how much computation and analysis humans were capable of before they were really aware of any serious thought underway at all. Here in the subconscious could be found the various heuristics and biases explored by the behavioral economists, or the repressed feelings that fascinated Freud and the psychoanalysts. It was here that decisions took form, and where people and propositions acquired positive or negative connotations.

Human beings did what felt right, but that did not mean their behavior was uninformed or irrational. Only when the circumstances were unusual did they have to ponder and wonder what to do next. Then thought processes became more conscious and deliberate. The conclusions might be more rational or they might be more rationalized. If the instinctive feelings were trusted, the natural course was to look for arguments to explain why they were correct rather than subject them to truly critical scrutiny. Two distinct processes were therefore identified, both capable of processing information and formulating decisions. Their combined effect was described as a “dual-process model of reasoning.” Their least loaded labels were System 1 and System 2.42 The distinction between the two may be drawn too sharply, as they clearly feed off each other and interact. The value for our purposes is to allow us to identify two distinctive forms of strategic reasoning which at least have some basis in cognitive psychology.

The intuitive System 1 processes were largely unconscious and implicit. They operated quickly and automatically when needed, managing cognitive tasks of great complexity and evaluating situations and options before they reached consciousness. This referred to not one but a number of processes, perhaps with different evolutionary roots, ranging from simple forms of information retrieval to complex mental representations.43 They all involved the extraordinary computational and storage power of the brain, drawing on past learning and experiences, picking up on and interpreting cues and signals from the environment, suggesting appropriate and effective behavior, and enabling individuals to cope with the circumstances in which they might find themselves without having to deliberate on every move. Here could be found a grasp of how society worked and individuals operated, what had been internalized about societies and a variety of situations, bringing it together in ways faster and more focused than possible by more explicit and deliberate means. The outcomes were feelings—including strong senses of like and dislike, signals and patterns—with scripts for action that might be difficult to articulate but were followed without always understanding where they came from. What emerged out of System 1 did not need to be contrary to reason and could involve calculations and evaluations far exceeding those that could be accomplished with the more cumbersome and limited processes associated with System 2. In some ways, the modeling associated with game theory captured both the potential and limitations of System 2 thinking. If there was no System 1, that was probably how individuals might think, though without the prompts of System 1 they might find it difficult ever actually to reach a conclusion.

The intuitive System 1 thinking would still at times need to be supplemented by System 2 processes. These were conscious, explicit, analytical, deliberative, more intellectual, and inherently sequential—just what was expected of strategic reasoning. Unfortunately, System 2 processes were also slower and struggled with excessive complexity. They were also more demanding, for exerting self-control could be “depleting and unpleasant,” leading to a loss of motivation.44 The features of System 2 involved attributes that were uniquely human. Although the process may have started with chimps, they were assumed to reflect more recent evolutionary development, associated with language and the ability to address hypothetical situations, without immediate context, beyond immediate experience. The move from System 1 did not mean that feelings no longer played a part. For example, when deciding whether to accept or reject an offer in the ultimatum game, players’ positive or negative feelings about the options influenced their decisions. When another player was perceived to have acted unfairly, this could arouse strong feelings affecting the severity of the response.45

Whether the decisions emerging out of System 1 were good would depend on the quality and relevance of internalized information. As in other areas, instincts could often be reliable guides but a desire to believe could sometimes override best interests. Instinctive choices had features that potentially limited their effectiveness. First, shortcuts were used, turning new situations into something familiar in order to draw on apparently relevant experience or knowledge. This was the case even when the stakes were high.46 Second, though more effort might be invested in high-stakes decisions, this could be to find evidence to support choices that seemed intuitively correct from the start.47 Third, thinking was often short term, shaped by immediate challenges. Kahneman observed that “an exclusive concern with the long term may be prescriptively sterile, because the long term is not where life is lived.” During the course of a conflict there would be responses to the “pain of losses and the regret of mistakes.”48 In this respect, the first encounters were bound to be more important, as these tested the accuracy of the initial framing and showed how issues were likely to be framed in the future. The next chapter notes the importance of considering strategy as starting from an existing situation rather than a distant goal.

Learning and training could make a difference, as was evident in those who had to work out what to do during the course of a competitive game, an intense battle, or any stressful situation without time for much deliberation. Intuitive decisions could therefore reflect strong biases, limited prior knowledge, narrow framing, and short time scales. With more deliberation decisions did not necessarily improve, especially if the extra deliberation was devoted to rationalizing intuitive conclusions. But deliberation did allow for correcting biases, more abstract conceptualizations, reconstructing the frame, and pushing out the time horizons. The evidence suggested that the more conscious reasoning kicked in when the circumstances had unique features, the information was poor, inconsistencies and anomalies were evident compared with expectations, or there was an awareness of the danger of bias. Individuals with a lack of empathy (psychopathy) were less inclined to cooperate and more likely to defect in games involving trust. When they were asked to act against type, so that the empathetic defected and the psychopathic cooperated, extra activity was observed in the prefrontal cortex because of the effort needed to exert control.49 Deliberate System 2 thinking interacted with intuitive System 1 thinking, a potential source of control that was not always controlling.

The tension was evident when evidence challenged strongly held beliefs. Experts who had a considerable stake in a particular proposition could put considerable intellectual effort into discrediting the evidence and those who supported alternative propositions. A study of pundits by Philip Tetlock in the 1980s demonstrated that their predictions were no better than might have been achieved through random choice, and that the most famous and most highly regarded were often the worst. Because of their self-image as being uniquely expert, they would convey more certainty than was often justified by the evidence. The best pundits, he noted, were those who were ready to monitor how well their predictions were going and were not too quick to disregard dissonant findings.50

The two processes provided a compelling metaphor for a struggle that was central to the production of strategies. Simply put, strategy as commonly represented was System 2 thinking par excellence, capable of controlling the illogical forms of reasoning—often described as emotional—that emerged out of System 1. The reality turned out to be much more complicated and intriguing, for in many respects System 1 was more powerful and could overwhelm System 2 unless a determined effort was made to counter its impact. A strategy could involve following System 1 as it was posted into consciousness and appeared as the right thing to do, so that conscious effort was directed at finding reasons why it should be done—strategy as rationalization. One way to think of strategy, therefore, was as a System 2 process engaged in a tussle with System 1 thinking, seeking to correct for feelings, prejudices, and stereotypes; recognizing what was unique and unusual about the situation; and seeking to plot a sensible and effective way forward.

A key finding from experiments was that individuals were not naturally strategic. When they understood that they were taking part in a competitive strategic game and were told the rules, the criteria, and the rewards for success, then they acted strategically. They could appreciate, for example, that sticking to an established pattern of behavior just because it worked in the past would probably not work in the future because a clever opponent would know what to expect. They also realized that their opponent’s future actions were likely to vary from those observed in the past. This was the essence of strategic reasoning: making choices on the basis of the likely choices of opponents and, in so doing, recognizing that opponents’ choices would depend in turn on expectations about what they might choose.51

Yet when the need for strategy was left unexplained and implicit, individuals often missed cues and opportunities. Nor were they always enthusiastic and competitive when told they were playing a strategic game. Strategies were often inconsistent, clumsy, and unsophisticated; reflected shifting or uncertain preferences; responded to the wrong stimuli; and focused on the wrong factors, misunderstanding partners as well as antagonists. Players often had to be urged to make the effort to get into the minds of their opponents. This is why the next chapter argues that many everyday and routine encounters should not really be considered “strategic.”

David Sally compared what could be learned from experimental games with what might be predicted by game theory. The “explosion of experimental work in the past 20 years,” he wrote in 2003, revealed that human beings, “despite their advantages in the areas of reasoning, rationality and mentalizing, can be the most befuddling and the least consistent game-players.” At various times they came over as “cooperative, altruistic, competitive, selfish, generous, equitable, spiteful, communicative, distant, similar, mindreading or mindblind as small elements in the game structure or social setting are altered.”52 A lot of responses to events were intuitive, undertaken without much hard thought or analysis of alternatives, and produced judgments that were quick and plausible. Individuals were not natural strategists; being strategic required a conscious effort.
