CHAPTER 9
Visual blight often results from the fact that the ability of a merchant’s sign to attract attention depends on the size and brightness of other merchants’ signs. © Feldman_1/Getty Images
LEARNING OBJECTIVES
After reading this chapter, you should be able to:
1. LO1 Describe the three basic elements of a game, how the possible payoffs are summarized, and the effect of dominant and dominated strategy choices.
2. LO2 Identify and explain the prisoner’s dilemma and how it applies to real-world situations.
3. LO3 Explain games in which the timing of players’ choices matters.
4. LO4 Discuss strategies that enable players to solve commitment problems through material or psychological incentives.
At a Christmas Eve dinner party in 1997, actor Robert De Niro pulled singer Tony Bennett aside for a moment. “Hey, Tony—there’s a film I want you in,” De Niro said. He was referring to the project that became the 1999 Warner Brothers hit comedy Analyze This, in which the troubled head of a crime family, played by De Niro, seeks the counsel of a psychotherapist, played by Billy Crystal. In the script, both the mob boss and his therapist are big fans of Bennett’s music.
Bennett heard nothing further about the project for almost a year. Then his son and financial manager, Danny Bennett, got a phone call from Warner Brothers, in which the studio offered Tony $15,000 to sing “Got the World on a String” in the movie’s final scene. As Danny described the conversation, “ . . . they made a fatal mistake. They told me they had already shot the film. So I’m like: ‘Hey, they shot the whole film around Tony being the end gag and they’re offering me $15,000?’”1
Warner Brothers wound up paying $200,000 for Bennett’s performance.
In business negotiations, as in life, timing can be everything. If executives at Warner Brothers had thought the problem through carefully, they would have negotiated with Bennett before shooting the movie. At that point, Bennett would have realized that the script could be rewritten if he asked too high a fee. By waiting, studio executives left themselves with no attractive option other than to pay Bennett’s price.
The payoff to many actions depends not only on the actions themselves, but also on when they’re taken and how they relate to actions taken by others. In previous chapters, economic decision makers confronted an environment that was essentially fixed. This chapter will focus on cases in which people must consider the effect of their behavior on others. For example, an imperfectly competitive firm will want to weigh the likely responses of rivals when deciding whether to cut prices or to increase its advertising budget. Interdependencies of this sort are the rule rather than the exception in economic and social life. To make sense of the world we live in, then, we must take these interdependencies into account.
Our focus in Chapter 8, Monopoly, Oligopoly, and Monopolistic Competition, was on the pure monopolist. In this chapter, we’ll explore how a few simple principles from the theory of games can help us better understand the behavior of oligopolists and monopolistic competitors—the two types of imperfectly competitive firms for which strategic interdependencies are most important. Along the way, we’ll also see how the same principles enable us to answer a variety of interesting questions drawn from everyday social interaction.
USING GAME THEORY TO ANALYZE STRATEGIC DECISIONS
In chess, tennis, or any other game, the payoff to a given move depends on what your opponent does in response. In choosing your move, therefore, you must anticipate your opponent’s responses, how you might respond, and what further moves your own response might elicit. Economists and other behavioral scientists have devised the theory of games to analyze situations in which the payoffs to different actors depend on the actions their opponents take.
THE THREE ELEMENTS OF A GAME
A game has three basic elements: the players, the list of possible actions (or strategies) available to each player, and the payoffs the players receive for each possible combination of strategies. We’ll use a series of examples to illustrate how these elements combine to form the basis of a theory of behavior.
The first example focuses on an important strategic decision confronting two oligopolists who produce an undifferentiated product and must decide how much to spend on advertising.
EXAMPLE 9.1 The Cost of Advertising
Should United Airlines spend more money on advertising?
Suppose that United Airlines and American Airlines are the only air carriers that serve the Chicago–St. Louis market. Each currently earns an economic profit of $6,000 per flight on this route. If United increases its advertising spending in this market by $1,000 per flight, and American spends no more on advertising than it does now, United’s profit will rise to $8,000 per flight and American’s will fall to $2,000. If both spend $1,000 more on advertising, each will earn an economic profit of $5,500 per flight. These payoffs are symmetric so that if United spends the same amount on advertising while American increases its spending by $1,000, United’s economic profit will fall to $2,000 per flight and American’s will rise to $8,000. The payoff structure is also common knowledge—that is, each company knows what the relevant payoffs will be for both parties under each of the possible combinations of choices. If each must decide independently whether to increase spending on advertising, what should United do?
Think of this situation as a game. What are its three elements? The players are the two airlines. Each airline must choose one of two strategies: to raise ad spending by $1,000 or leave it the same. The payoffs are the economic profits that correspond to the four possible scenarios resulting from their choices. One way to summarize the relevant information about this game is to display the players, strategies, and payoffs in the form of a simple table called a payoff matrix (see Table 9.1).
TABLE 9.1 The Payoff Matrix for an Advertising Game

                                            American
                                Raise ad spending        Leave spending the same
          Raise ad spending     $5,500 for United        $8,000 for United
United                          $5,500 for American      $2,000 for American
          Leave spending        $2,000 for United        $6,000 for United
          the same              $8,000 for American      $6,000 for American
Confronted with the payoff matrix in Table 9.1, what should United Airlines do? The essence of strategic thinking is to begin by looking at the situation from the other party’s point of view. Suppose United assumes that American will raise its spending on advertising (the left column in Table 9.1). In that case, United’s best bet would be to follow suit (the top row in Table 9.1). Why is the top row United’s best response when American chooses the left column? United’s economic profits, given in the upper-left cell of Table 9.1, will be $5,500, compared to only $2,000 if it keeps spending the same (see the lower-left cell).
Alternatively, suppose United assumes that American will keep ad spending the same (that is, that American will choose the right column in Table 9.1). In that case, United would still do better to increase spending because it would earn $8,000 (the upper-right cell), compared to only $6,000 if it keeps spending the same (the lower-right cell). In this particular game, no matter which strategy American chooses, United will earn a higher economic profit by increasing its spending on advertising. And since this game is perfectly symmetric, a similar conclusion holds for American: No matter which strategy United chooses, American will do better by increasing its spending on ads.
When one player has a strategy that yields a higher payoff no matter which choice the other player makes, that player is said to have a dominant strategy. Not all games involve dominant strategies, but both players in this game have one, and that is to increase spending on ads. For both players, to leave ad spending the same is a dominated strategy—one that leads to a lower payoff than an alternative choice, regardless of the other player’s choice.
Notice, however, that when each player chooses the dominant strategy, the resulting payoffs are smaller than if each had left spending unchanged. When United and American increase their spending on ads, each earns only $5,500 in economic profits, compared to the $6,000 each would have earned without the increase.
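The dominant-strategy reasoning above is mechanical enough to automate. Below is a minimal Python sketch (not from the text; the function names are invented for illustration) that encodes the payoff matrix from Example 9.1 and checks each airline for a dominant strategy:

```python
# Payoffs are (United, American) economic profits per flight, indexed by
# (United's choice, American's choice); 0 = raise ad spending by $1,000,
# 1 = leave spending the same, matching Table 9.1.
PAYOFFS = {
    (0, 0): (5500, 5500),  # both raise spending
    (0, 1): (8000, 2000),  # United raises, American does not
    (1, 0): (2000, 8000),  # American raises, United does not
    (1, 1): (6000, 6000),  # neither raises
}

def payoff(player, own, opp):
    # Look up a player's profit given its own choice and the opponent's.
    cell = (own, opp) if player == 0 else (opp, own)
    return PAYOFFS[cell][player]

def dominant_strategy(player):
    """Return a strategy that beats the alternative against every
    opponent choice, or None if the player has no dominant strategy."""
    for s in (0, 1):
        other = 1 - s
        if all(payoff(player, s, opp) > payoff(player, other, opp)
               for opp in (0, 1)):
            return s
    return None

print(dominant_strategy(0))  # 0: raising ad spending dominates for United
print(dominant_strategy(1))  # 0: raising ad spending dominates for American
```

Running the check confirms the text’s conclusion: for each airline, raising ad spending yields a higher profit no matter what the rival does.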
NASH EQUILIBRIUM
A game is said to be in equilibrium if each player’s strategy is the best he or she can choose, given the other players’ choices. This definition of equilibrium is sometimes called a Nash equilibrium, after the mathematician John Nash, who developed the concept in the early 1950s. Nash was awarded the Nobel Prize in Economics in 1994 for his contributions to game theory.2 When a game is in equilibrium, no player has any incentive to deviate from his current strategy.
If each player in a game has a dominant strategy, as in Example 9.1, equilibrium occurs when each player follows that strategy. But even in games in which not every player has a dominant strategy, we can often identify an equilibrium outcome. Consider, for instance, the following variation on the advertising game as illustrated in Example 9.2.
EXAMPLE 9.2 Nash Equilibrium
Should American Airlines spend more money on advertising?
Suppose United Airlines and American Airlines are the only carriers that serve the Chicago–St. Louis market. Their payoff matrix for advertising decisions is shown in Table 9.2. Does United have a dominant strategy? Does American? If each firm does the best it can, given the incentives facing the other, what will be the outcome of this game?

In this game, no matter what United does, American will do better to raise its ad spending, so raising the advertising budget is a dominant strategy for American. United, however, does not have a dominant strategy. If American raises its spending, United will do better to leave its spending unchanged; if American does not raise spending, however, United will do better to spend more. Even though United doesn’t have a dominant strategy, we can employ the Incentive Principle to predict what is likely to happen in this game. United’s managers are assumed to know what the payoff matrix is, so they can predict that American will spend more on ads since that is American’s dominant strategy. Thus the best strategy for United, given the prediction that American will spend more on ads, is to keep its own spending unchanged. If both players do the best they can, taking account of the incentives each faces, this game will end in the lower-left cell of the payoff matrix: American will raise its spending on ads and United will not.
Note that the choices corresponding to the lower-left cell in Table 9.2 satisfy the definition of a Nash equilibrium. If United found itself in that cell, its alternative would be to raise its ad spending, a move that would reduce its payoff from $4,000 to $3,000. So United has no incentive to abandon the lower-left cell. Similarly, if American found itself in the lower-left cell of Table 9.2, its alternative would be to leave ad spending the same, a move that would reduce its payoff from $5,000 to $2,000. So American also has no incentive to abandon the lower-left cell. The lower-left cell of Table 9.2 is a Nash equilibrium—a combination of strategies for which each player’s choice is the best available option, given the choice made by the other player.
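The cell-by-cell equilibrium check described above can also be sketched in code. Because the full matrix of Table 9.2 is not reproduced here, this hypothetical helper uses the payoffs from Example 9.1 instead, whose unique Nash equilibrium is for both airlines to raise ad spending:

```python
# Payoffs are (United, American) profits per flight from Example 9.1;
# strategy 0 = raise ad spending, 1 = leave spending the same.
PAYOFFS = {
    (0, 0): (5500, 5500),
    (0, 1): (8000, 2000),
    (1, 0): (2000, 8000),
    (1, 1): (6000, 6000),
}

def nash_equilibria(payoffs):
    """Return every cell in which neither player can gain by a
    unilateral deviation -- the definition of a Nash equilibrium."""
    equilibria = []
    for (u, a), (pu, pa) in payoffs.items():
        united_deviation = payoffs[(1 - u, a)][0]    # United switches rows
        american_deviation = payoffs[(u, 1 - a)][1]  # American switches columns
        if pu >= united_deviation and pa >= american_deviation:
            equilibria.append((u, a))
    return equilibria

print(nash_equilibria(PAYOFFS))  # [(0, 0)]: both raising is the only equilibrium
```

The same function applied to Table 9.2’s payoffs would return only its lower-left cell, for the reasons given in the text.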
CONCEPT CHECK 9.1
What should United and American do if their payoff matrix is modified as follows?

RECAP
USING GAME THEORY TO ANALYZE STRATEGIC DECISIONS
The three elements of any game are the players, the list of strategies from which they can choose, and the payoffs to each combination of strategies. This information can be summarized in a payoff matrix.
Equilibrium in a game occurs when each player’s strategy choice yields the highest payoff available, given the strategies chosen by other players. Such a combination of strategies is called a Nash equilibrium.
THE PRISONER’S DILEMMA
The first advertising example we discussed above belongs to an important class of games called the prisoner’s dilemma. In the prisoner’s dilemma, when each player chooses his dominant strategy, the result is unattractive to the group of players as a whole.
THE ORIGINAL PRISONER’S DILEMMA
The next example recounts the original scenario from which the prisoner’s dilemma drew its name.
EXAMPLE 9.3 Prisoner’s Dilemma
Should the prisoners confess?
Two prisoners, Horace and Jasper, are being held in separate cells for a serious crime that they did in fact commit. The prosecutor, however, has only enough hard evidence to convict them of a minor offense, for which the penalty is a year in jail. Each prisoner is told that if one confesses while the other remains silent, the confessor will be cleared of the crime, and the other will spend 20 years in prison. If both confess, they will get an intermediate sentence of five years. These payoffs are summarized in Table 9.3. The two prisoners are not allowed to communicate with one another. Do they have a dominant strategy? If so, what is it?
TABLE 9.3 The Payoff Matrix for the Prisoner’s Dilemma

                                     Jasper
                        Confess                  Remain silent
          Confess       5 years for each         0 years for Horace
Horace                                           20 years for Jasper
          Remain        20 years for Horace      1 year for each
          silent        0 years for Jasper
In this game, the dominant strategy for each prisoner is to confess. No matter what Jasper does, Horace will get a lighter sentence by speaking out. If Jasper confesses, Horace will get five years (upper-left cell) instead of 20 (lower-left cell). If Jasper remains silent, Horace will go free (upper-right cell) instead of spending a year in jail (lower-right cell). Because the payoffs are perfectly symmetric, Jasper will also do better to confess, no matter what Horace does. The difficulty is that when each follows his dominant strategy and confesses, both will do worse than if each had shown restraint. When both confess, they each get five years (upper-left cell), instead of the one year they would have gotten by remaining silent (lower-right cell). Hence the name of this game, the prisoner’s dilemma.
CONCEPT CHECK 9.2
GM and Chrysler must both decide whether to invest in a new process. Games 1 and 2 below show how their profits (in millions of dollars) depend on the decisions they might make. Which of these games is a prisoner’s dilemma?

The prisoner’s dilemma is one of the most powerful metaphors in all of human behavioral science. Countless social and economic interactions have payoff structures analogous to the one confronted by the two prisoners. Some of those interactions occur between only two players, as in the examples just discussed; many others involve larger groups. Games of the latter sort are called multiplayer prisoner’s dilemmas. But regardless of the number of players involved, the common thread is one of conflict between the narrow self-interest of individuals and the broader interests of larger communities.
THE ECONOMICS OF CARTELS
A cartel is any coalition of firms that conspires to restrict production for the purpose of earning an economic profit. As we will see in the next example, the problem confronting oligopolists who are trying to form a cartel is a classic illustration of the prisoner’s dilemma.
The Economic Naturalist 9.1
Why are cartel agreements notoriously unstable?
Consider a market for bottled water served by two oligopolists, Aquapure and Mountain Spring. Each firm can draw water free of charge from a mineral spring located on its own land. Customers supply their own bottles. Rather than compete with one another, the two firms decide to join together by selling water at the price a profit-maximizing pure monopolist would charge. Under their agreement (which constitutes a cartel), each firm would produce and sell half the quantity of water demanded by the market at the monopoly price (see Figure 9.1). The agreement isn’t legally enforceable, however, which means that each firm has the option of charging less than the agreed price. If one firm sells water for less than the other firm, it will capture the entire quantity demanded by the market at the lower price.
FIGURE 9.1 The Market Demand for Mineral Water. Faced with the demand curve shown, a monopolist with zero marginal cost would produce 1,000 bottles per day (the quantity at which marginal revenue equals zero) and sell them at a price of $1.00 per bottle.
Why is this agreement likely to collapse?
Since the marginal cost of mineral water is zero, the profit-maximizing quantity for a monopolist with the demand curve shown in Figure 9.1 is 1,000 bottles per day, the quantity for which marginal revenue equals marginal cost. At that quantity, the monopoly price is $1 per bottle. If the firms abide by their agreement, each will sell half the market total, or 500 bottles per day, at a price of $1 per bottle, for an economic profit of $500 per day.
But suppose Aquapure reduced its price to 90 cents per bottle. By underselling Mountain Spring, it would capture the entire quantity demanded by the market, which, as shown in Figure 9.2, is 1,100 bottles per day. Aquapure’s economic profit would rise from $500 per day to ($0.90 per bottle)(1,100 bottles per day) = $990 per day—almost twice as much as before. In the process, Mountain Spring’s economic profit would fall from $500 per day to zero. Rather than see its economic profit disappear, Mountain Spring would match Aquapure’s price cut, recapturing its original 50 percent share of the market. But when each firm charges $0.90 per bottle and sells 550 bottles per day, each earns an economic profit of ($0.90 per bottle)(550 bottles per day) = $495 per day, or $5 less per day than before.
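The arithmetic above can be verified with a short sketch, assuming the linear demand curve implied by Figures 9.1 and 9.2 (price $1.00 at 1,000 bottles per day, $0.90 at 1,100 bottles per day) and zero marginal cost:

```python
def quantity_demanded(price):
    # Linear demand implied by the figures: P = 2 - Q/1000, so Q = (2 - P) * 1000.
    return round((2 - price) * 1000)

# Cooperate: each firm sells half the monopoly quantity at $1.00.
cartel_profit_each = 1.00 * quantity_demanded(1.00) / 2
print(cartel_profit_each)  # 500.0 per day

# Defect: Aquapure cuts to $0.90 and captures the whole market.
defector_profit = 0.90 * quantity_demanded(0.90)
print(defector_profit)  # 990.0 per day

# After Mountain Spring matches the cut: each sells half at $0.90.
matched_profit_each = 0.90 * quantity_demanded(0.90) / 2
print(matched_profit_each)  # 495.0 per day
```

The three figures reproduce the text’s numbers: $500 per day each under the agreement, $990 for a lone defector, and $495 each once the price cut is matched.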
FIGURE 9.2 The Temptation to Violate a Cartel Agreement. By cutting its price from $1 per bottle to 90 cents per bottle, Aquapure can sell the entire market quantity demanded at that price, 1,100 bottles per day, rather than half the monopoly quantity of 1,000 bottles per day.
Suppose we view the cartel agreement as an economic game in which the two available strategies are to sell for $1 per bottle or to sell for $0.90 per bottle. The payoffs are the economic profits that result from these strategies. Table 9.4 shows the payoff matrix for this game. Each firm’s dominant strategy is to sell at the lower price, yet in following that strategy, each earns a lower profit than if each had sold at the higher price.
TABLE 9.4 The Payoff Matrix for a Cartel Agreement

                                      Mountain Spring
                          Charge $1.00                Charge $0.90
          Charge $1.00    $500/day for each           $0 for Aquapure
Aquapure                                              $990/day for Mountain Spring
          Charge $0.90    $990/day for Aquapure       $495/day for each
                          $0 for Mountain Spring
The game does not end with both firms charging $0.90 per bottle. Each firm knows that if it cuts the price a little further, it can recapture the entire market and, in the process, earn a substantially higher economic profit. At every step, the rival firm will match any price cut, until the price falls all the way to the marginal cost—in this example, zero.
Why is it so difficult for companies to enforce agreements against price cutting?
Cartel agreements confront participants with the economic incentives inherent in the prisoner’s dilemma, which explains why such agreements have historically been so unstable. Usually a cartel involves not just two firms but several, an arrangement that can make detecting and retaliating against price cutters extremely difficult. In many cases, discovering which parties have broken the agreement is difficult. For example, the Organization of Petroleum Exporting Countries (OPEC), a cartel of oil producers formed in the 1970s to restrict oil production, has no practical way to prevent member countries from secretly pumping oil offshore in the dead of night.
TIT-FOR-TAT AND THE REPEATED PRISONER’S DILEMMA
When all players cooperate in a prisoner’s dilemma, each gets a higher payoff than when all defect. So people who confront prisoner’s dilemmas will be on the lookout for ways to create incentives for mutual cooperation. What they need is some way to penalize players who defect. When players interact with one another only once, this turns out to be difficult. But when they expect to interact repeatedly, new possibilities emerge.
A repeated prisoner’s dilemma is a standard prisoner’s dilemma that confronts the same players not just once but many times. Experimental research on repeated prisoner’s dilemmas in the 1960s identified a simple strategy that proves remarkably effective at limiting defection. The strategy is called tit-for-tat, and here’s how it works: The first time you interact with someone, you cooperate. In each subsequent interaction, you simply do what that person did in the previous interaction. Thus, if your partner defected on your first interaction, you’d then defect on your next interaction with her. If she then cooperates, your move next time will be to cooperate as well.
On the basis of elaborate computer simulations, University of Michigan political scientist Robert Axelrod showed that tit-for-tat was a remarkably effective strategy, even when pitted against a host of ingenious counterstrategies that had been designed for the explicit purpose of trying to exploit it. The success of tit-for-tat requires a reasonably stable set of players, each of whom can remember what other players have done in previous interactions. It also requires that players have a significant stake in what happens in the future, for it is the fear of retaliation that deters people from defecting.
Because rival firms in the same industry interact with one another repeatedly, it might seem that the tit-for-tat strategy would ensure widespread collusion to raise prices. And yet, as noted earlier, cartel agreements are notoriously unsuccessful. One difficulty is that tit-for-tat’s effectiveness depends on there being only two players in the game. In competitive and monopolistically competitive industries, there are generally many firms, and even in oligopolies there are often several. When there are more than two firms and one defects now, how do the cooperators selectively punish the defector later? By cutting price? That will penalize everyone, not just the defector. Even if there are only two firms in an industry, these firms realize that other firms may enter their industry. So the would-be cartel members have to worry not only about each other, but also about the entire list of firms that might decide to compete with them. Each firm may see this as a hopeless task and decide to defect now, hoping to reap at least some economic profit in the short run. What seems clear, in any event, is that the practical problems involved in implementing tit-for-tat have made it difficult to hold cartel agreements together for long.
The Economic Naturalist 9.2
How did Congress unwittingly solve the television advertising dilemma confronting cigarette producers?
In 1970, Congress enacted a law making cigarette advertising on television illegal after January 1, 1971. As evidenced by the steadily declining proportion of Americans who smoke, this law seems to have achieved its stated purpose of protecting citizens against a proven health hazard. But the law also had an unintended effect, which was to increase the economic profit of cigarette makers, at least in the short run. In the year before the law’s passage, manufacturers spent more than $300 million on advertising—about $60 million more than they spent during the year after the law was enacted. Much of the saving in advertising expenditures in 1971 was reflected in higher cigarette profits at year-end. But if eliminating television advertising made companies more profitable, why didn’t the manufacturers eliminate the ads on their own?
When an imperfectly competitive firm advertises its product, its demand curve shifts rightward, for two reasons. First, people who have never used that type of product learn about it, and some buy it. Second, people who consume a different brand of the product may switch brands. The first effect boosts sales industrywide; the second merely redistributes existing sales among brands.
Although advertising produces both effects in the cigarette industry, its primary effect is brand switching. Thus, the decision of whether to advertise confronts the individual firm with a prisoner’s dilemma. Table 9.5 shows the payoffs facing a pair of cigarette producers trying to decide whether to advertise. If both firms advertise on TV (upper-left cell), each earns a profit of only $10 million per year, compared to a profit of $20 million per year for each if neither advertises (lower-right cell). Clearly, both will benefit if neither advertises.
TABLE 9.5 Cigarette Advertising as a Prisoner’s Dilemma
(profits in millions of dollars per year)

                                  Philip Morris
                        Advertise on TV            Don’t advertise on TV
          Advertise     $10 for each               $35 for RJR
RJR       on TV                                    $5 for Philip Morris
          Don’t         $5 for RJR                 $20 for each
          advertise     $35 for Philip Morris
Why were cigarette manufacturers happy when Congress made it illegal for them to advertise on television?
Yet note the powerful incentive that confronts each firm. RJR sees that if Philip Morris doesn’t advertise, RJR can earn higher profits by advertising ($35 million per year) than by not advertising ($20 million per year). RJR also sees that if Philip Morris does advertise, RJR will again earn more by advertising ($10 million per year) than by not advertising ($5 million per year). Thus, RJR’s dominant strategy is to advertise. And because the payoffs are symmetric, Philip Morris’s dominant strategy is also to advertise. So when each firm behaves rationally from its own point of view, the two together do worse than if they had both shown restraint. The congressional ad ban forced cigarette manufacturers to do what they could not have accomplished on their own.
As the following Economic Naturalist 9.3 example makes clear, understanding the prisoner’s dilemma can help the economic naturalist make sense of human behavior not only in the world of business but in other domains of life as well.
The Economic Naturalist 9.3
Why do people shout at parties?
Whenever large numbers of people gather for conversation in a closed space, the ambient noise level rises sharply. After attending such gatherings, people often complain of sore throats and hoarse voices. If everyone spoke at a normal volume at parties, the overall noise level would be lower, and people would hear just as well. So why do people shout?
The problem involves the difference between individual incentives and group incentives. Suppose everyone starts by speaking at a normal level. But because of the crowded conditions, conversation partners have difficulty hearing one another, even when no one is shouting. The natural solution, from the point of view of the individual, is to simply raise one’s voice a bit. But that is also the natural solution for everyone else. And when everyone speaks more loudly, the ambient noise level rises so that no one hears any better than before.
Why do people often have to shout to be heard at parties?
No matter what others do, the individual will do better by speaking more loudly. Doing so is a dominant strategy for everyone, in fact. Yet when everyone follows the dominant strategy, the result is worse (no one can hear well) than if everyone had continued to speak normally. While shouting is wasteful, individuals acting alone have no better option. If anyone were to speak softly while others shout, that person wouldn’t be heard. No one wants to go home with raw vocal cords, but people apparently prefer that cost to the alternative of not being heard at all.
RECAP
THE PRISONER’S DILEMMA
The prisoner’s dilemma is a game in which each player has a dominant strategy, and in which the payoff to each player when each chooses that strategy is smaller than if each had chosen a dominated strategy. Incentives analogous to those found in the prisoner’s dilemma help to explain a broad range of behavior in business and everyday life—among them excessive spending on advertising and cartel instability. The tit-for-tat strategy can help sustain cooperation in two-player repeated prisoner’s dilemmas but tends to be ineffective in multiplayer repeated prisoner’s dilemmas.
GAMES IN WHICH TIMING MATTERS
In the games discussed so far, players were assumed to choose their strategies simultaneously, and which player moved first didn’t matter. For example, in the prisoner’s dilemma, self-interested players would follow their dominant strategies even if they knew in advance what strategies their opponents had chosen. But in other situations, such as the negotiations between Warner Brothers and Tony Bennett described at the beginning of this chapter, timing is of the essence.
We begin with an example of a game whose outcome cannot be predicted if both players move simultaneously, but whose outcome is clear if one player has the opportunity to move before the other.
EXAMPLE 9.4 The Importance of Timing
Should Dodge build a hybrid Viper?
The Dodge Viper and the Chevrolet Corvette compete for a limited pool of domestic sports car enthusiasts. Each company knows that the other is considering whether to bring out a hybrid version of its car. If both companies bring out hybrids, each will earn $60 million in profit. If neither brings out a hybrid, each company will earn $50 million. If Chevrolet introduces a hybrid and Dodge does not, Chevrolet will earn $80 million and Dodge will earn $70 million. If Dodge brings out a hybrid and Chevrolet does not, Dodge will earn $80 million and Chevrolet will earn $70 million. Does either firm have a dominant strategy in this situation? What will happen in this game if Dodge gets to choose first, with Chevrolet choosing after having seen Dodge’s choice?
When both companies must make their decisions simultaneously, the payoff matrix for the example looks like Table 9.6.
TABLE 9.6 Payoff Matrix for the Hybrid Example
(profits in millions of dollars)

                                 Chevrolet
                        No hybrid               Hybrid
          No hybrid     $50 for each            $70 for Dodge
Dodge                                           $80 for Chevrolet
          Hybrid        $80 for Dodge           $60 for each
                        $70 for Chevrolet
The logic of the profit figures in Table 9.6 is that although consumers generally like the idea of a hybrid sports car (hence the higher profits when both companies bring out hybrids than when neither does), the companies will have to compete more heavily with one another if both offer the same type of car (and hence the lower profits when both offer the same type of car than when each offers a different type).
In the payoff matrix in Table 9.6, neither company has a dominant strategy. The best outcome for Dodge is to offer a hybrid Viper while Chevrolet does not offer a hybrid Corvette (lower-left cell). The best outcome for Chevrolet is to offer a hybrid Corvette while Dodge does not offer a hybrid Viper (upper-right cell). Both the lower-left and upper-right cells are Nash equilibria of this game because if the companies found themselves in either of these cells, neither would unilaterally want to change its position. Thus, in the upper-right cell, Chevrolet wouldn’t want to change (that cell is, after all, the best possible outcome for Chevrolet), and neither would Dodge (since switching to a hybrid would reduce its profit from $70 million to $60 million). But without being told more, we simply cannot predict where the two companies will end up.
If one side can move before the other, however, the incentives for action become instantly clearer. For games in which timing matters, a decision tree, or game tree, is a more useful way of representing the payoffs than a traditional payoff matrix. This type of diagram describes the possible moves in the sequence in which they may occur, and lists the final payoffs for each possible combination of moves.
If Dodge has the first move, the decision tree for the game is shown in Figure 9.3. At A, Dodge begins the game by deciding whether to offer a hybrid. If it chooses to offer one, Chevrolet must then make its own choice at B. If Dodge does not offer a hybrid, Chevrolet will make its choice at C. In either case, once Chevrolet makes its choice, the game is over.
FIGURE 9.3 Decision Tree for Hybrid Example. This decision tree shows the possible moves and payoffs for the game in the hybrid example, in the sequence in which they may occur.
In thinking strategically about this game, the key for Dodge is to put itself in Chevrolet’s shoes and imagine how Chevrolet would react to the various choices it might confront. In general, it will make sense for Dodge to assume that Chevrolet will respond in a self-interested way—that is, by choosing the available option that offers the highest profit for Chevrolet. Dodge knows that if it chooses to offer a hybrid, Chevy’s best option at B will be not to offer a hybrid (since Chevy’s profit is $10 million higher at E than at D). Dodge also knows that if it chooses not to offer a hybrid, Chevy’s best option at C will be to offer one (since Chevy’s profit is $30 million higher at F than at G). Dodge thus knows that if it offers a hybrid, it will end up at E, where it will earn $80 million, whereas if it does not offer a hybrid, it will end up at F, where it will earn only $70 million. So when Dodge has the first move in this game, its best strategy is to offer a hybrid. And Chevrolet then follows by choosing not to offer one.
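Dodge’s reasoning, working backward from Chevrolet’s best replies, can be sketched as follows. The profits (in millions) and node labels come from Example 9.4 and Figure 9.3; the function names are invented for illustration:

```python
# (dodge_profit, chevy_profit) for each pair of choices, in millions.
PAYOFFS = {
    ('hybrid', 'hybrid'): (60, 60),        # node D
    ('hybrid', 'no hybrid'): (80, 70),     # node E
    ('no hybrid', 'hybrid'): (70, 80),     # node F
    ('no hybrid', 'no hybrid'): (50, 50),  # node G
}

def chevy_best_reply(dodge_choice):
    # At its decision node, Chevrolet picks whichever branch pays it more.
    return max(('hybrid', 'no hybrid'),
               key=lambda c: PAYOFFS[(dodge_choice, c)][1])

def dodge_best_first_move():
    # Dodge anticipates Chevrolet's reply to each option, then compares
    # its own payoff at the resulting endpoints (backward induction).
    return max(('hybrid', 'no hybrid'),
               key=lambda d: PAYOFFS[(d, chevy_best_reply(d))][0])

dodge = dodge_best_first_move()
chevy = chevy_best_reply(dodge)
print(dodge, chevy)  # hybrid no hybrid: the game ends at node E, payoffs (80, 70)
```

The simulation confirms the text’s conclusion: with the first move, Dodge offers a hybrid, and Chevrolet’s best reply is to offer a conventional car.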
CREDIBLE THREATS AND PROMISES
Could Chevrolet have deterred Dodge from offering a hybrid by threatening to offer a hybrid of its own, no matter what Dodge did? The problem with this strategy is that such a threat would not have been credible. In the language of game theory, a credible threat is one that will be in the threatener’s interest to carry out when the time comes to act. As the Incentive Principle suggests, people are likely to be skeptical of any threat if they know there will be no incentive to follow through when the time comes. The problem here is that Dodge knows that it would not be in Chevrolet’s interest to carry out its threat in the event that Dodge offered a hybrid. After all, once Dodge has already offered the hybrid, Chevy’s best option is to offer a nonhybrid.
The concept of a credible threat figured prominently in the negotiations between Warner Brothers’ managers and Tony Bennett over the matter of Mr. Bennett’s fee for performing in Analyze This. Once most of the film had been shot, managers knew they couldn’t threaten credibly to refuse Mr. Bennett’s salary demand because at that point adapting the film to another singer would have been extremely costly. In contrast, a similar threat made before production of the movie had begun would have been credible.
Just as in some games credible threats are impossible to make, in others credible promises are impossible. A credible promise is one that is in the interests of the promiser to keep when the time comes to act. In the following example, both players suffer because of the inability to make a credible promise.
EXAMPLE 9.5 A Credible Promise
Should the business owner open a remote office?
The owner of a thriving business wants to start up an office in a distant city. If she hires someone to manage the new office, she can afford to pay a weekly salary of $1,000—a premium of $500 over what the manager would otherwise be able to earn—and still earn a weekly economic profit of $1,000 for herself. The owner’s concern is that she won’t be able to monitor the manager’s behavior. The owner knows that by managing the remote office dishonestly, the manager can boost his take-home pay to $1,500 while causing the owner an economic loss of $500 per week. If the owner believes that all managers are selfish income-maximizers, will she open the new office?
The decision tree for the remote-office game is shown in Figure 9.4. At A, the managerial candidate promises to manage honestly, which brings the owner to B, where she must decide whether to open the new office. If she opens it, they reach C, where the manager must decide whether to manage honestly. If the manager’s only goal is to make as much money as he can, he will manage dishonestly (bottom branch at C) because, that way, he will earn $500 more than by managing honestly (top branch at C).
FIGURE 9.4 Decision Tree for the Remote-Office Game. The best outcome is for the owner to open the office at B and for the manager to manage the office honestly at C. But if the manager is purely self-interested and the owner knows it, this path will not be an equilibrium outcome.
So if the owner opens the new office, she will end up with an economic loss of $500. If she had not opened the office (bottom branch at B), she would have realized an economic profit of zero. Because zero is better than −$500, the owner will choose not to open the remote office. In the end, the opportunity cost of the manager’s inability to make a credible promise is $1,500: the manager’s forgone $500 salary premium and the owner’s forgone $1,000 return.
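The backward-induction logic of Figure 9.4 can be sketched as follows, using the weekly payoffs from the example (the manager's payoffs are expressed as premiums over his outside option):

```python
# Weekly payoffs in the remote-office game:
# (owner_profit, manager_premium_over_outside_option).
OUTCOMES = {
    ("open", "honest"):    (1000, 500),
    ("open", "dishonest"): (-500, 1000),  # $1,500 take-home vs. $500 elsewhere
    ("don't open", None):  (0, 0),
}

def managers_choice():
    # At C the manager picks whichever branch pays him more.
    honest = OUTCOMES[("open", "honest")][1]
    dishonest = OUTCOMES[("open", "dishonest")][1]
    return "honest" if honest >= dishonest else "dishonest"

def owners_choice():
    # At B the owner anticipates the manager's choice at C.
    owner_if_open = OUTCOMES[("open", managers_choice())][0]
    owner_if_closed = OUTCOMES[("don't open", None)][0]
    return "open" if owner_if_open > owner_if_closed else "don't open"

print(owners_choice(), managers_choice())  # the office stays closed
```

Because the purely self-interested manager would choose dishonesty at C, the owner's anticipated payoff from opening is negative, so she stays out, and both forgo the gains from trade.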
CONCEPT CHECK 9.3
Smith and Jones are playing a game in which Smith has the first move at A in the decision tree shown below. Once Smith has chosen either the top or bottom branch at A, Jones, who can see what Smith has chosen, must choose the top or bottom branch at B or C. If the payoffs at the end of each branch are as shown, what is the equilibrium outcome of this game? If before Smith chose, Jones could make a credible commitment to choose either the top or bottom branch when his turn came, what would he do?

MONOPOLISTIC COMPETITION WHEN LOCATION MATTERS
In many sequential games, the player who gets to move first enjoys a strategic advantage. That was the case, for instance, in the decision of whether to produce a hybrid sports car in Example 9.4. In that example, the first mover did better because it was able to exploit the knowledge that both firms do better if each one’s product is different from the other’s rather than similar to it. But that won’t always be true. When the feature that differentiates one seller’s product from another’s is temporal or spatial location, the firm with the last move in a game sometimes enjoys the upper hand, as The Economic Naturalist 9.4 illustrates.
The Economic Naturalist 9.4
Why do we often see convenience stores located on adjacent street corners?
In many cities, it’s common to see convenience stores located in clusters, followed by long stretches with no stores at all. If the stores were more spread out, almost all consumers would enjoy a shorter walk to the nearest convenience store. Why do stores tend to cluster in this fashion?
In Figure 9.5, suppose that when the convenience store located at A first opened, it was the closest store for the 1,200 shoppers who live in identical apartment houses evenly distributed along the road between A and the freeway one mile to the east.3 Those who live to the east of the freeway shop elsewhere because they cannot cross the freeway. Those who live to the west of the store at A shop either at A or at some other store still further to the west, whichever is closer. In this setting, why might a profit-maximizing entrepreneur planning to open a new store between A and the freeway choose to locate at B rather than at some intermediate location such as C?
FIGURE 9.5 The Curious Tendency of Monopolistic Competitors to Cluster. As a group, consumers would enjoy a shorter walk if the store at B were instead located at C, or even at D. But a second store will attract more customers by locating at B.
It turns out that a store located at C would, in fact, minimize the distance that shoppers living between A and the freeway would have to walk to reach the nearest store. If there were a store at C, no shopper on this stretch of road would have to walk more than ⅓ of a mile to reach the nearest store. The 800 people who live between point D (which is halfway between A and C) and the freeway would shop at C, while the 400 who live between D and A would shop at A.
Why do retail merchants tend to locate in clusters?
Despite the fact that C is the most attractive location for a new store from the perspective of consumers, it is not the most advantageous for the store’s owner. The reason is that the owner’s profit depends on how many people choose to shop at his store, not on how far they have to walk to get there. Given that consumers shop at the store closest to where they live, the best option from the entrepreneur’s perspective is to locate his store at B, on the street corner just east of A. That way, his store will be closer to all 1,200 people who live between A and the freeway. It is this logic that often helps explain the clustering of convenience stores, gas stations, and other monopolistically competitive firms whose most important differentiating feature is geographic location.
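The arithmetic behind the entrepreneur's choice can be sketched as follows, placing A at position 0 and the freeway at position 1 (one mile east):

```python
# Shoppers are spread uniformly along the one-mile stretch between the
# incumbent store at A (position 0) and the freeway (position 1), and each
# patronizes the nearer store.
SHOPPERS = 1_200

def customers(x):
    """Customers captured by a new store at position x: everyone living
    east of the midpoint x/2 is closer to the new store than to A."""
    return SHOPPERS * (1 - x / 2)

def max_walk(x):
    """Longest walk to the nearer store: the worse of the midpoint
    shopper's trip (x/2) and the freeway-end shopper's trip (1 - x)."""
    return max(x / 2, 1 - x)

print(customers(2/3), max_walk(2/3))    # location C: 800 customers, 1/3-mile max walk
print(customers(0.01), max_walk(0.01))  # just east of A (near B): almost all 1,200
```

Location C minimizes the longest walk, but a location just east of A captures nearly every shopper on the stretch, which is why the entrepreneur clusters at B.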
The insight that helped answer the question posed in The Economic Naturalist 9.4 comes from economist Harold Hotelling.4 Hotelling employed this insight to explain why two hot dog vendors on a stretch of beach almost invariably locate next to one another midway between the endpoints of the beach.
For many oligopolistic or monopolistically competitive firms, an important dimension of product differentiation is location in time rather than in physical space. The timing of flight departures for different airlines in the New York–Los Angeles market is one example. The timing of film showings by different local movie theaters is another. In these cases, too, we often see product clustering. Thus, in the New York–Los Angeles market, both United and American have flights throughout the afternoon departing exactly on the hour. And in many local movie markets, the first evening showing starts at 7:15 p.m. in dozens of different theaters.
In other examples, the differentiating features that matter most might be said to describe the product’s location in a more abstract “product space.” With soft drinks, for example, we might array different products according to their degrees of sweetness or carbonation. Here, too, it is common to see rival products that lie very close to one another, such as Coca-Cola and Pepsi. Clustering occurs in these cases for reasons analogous to those discussed by Hotelling in his classic paper.
RECAP
GAMES IN WHICH TIMING MATTERS
The outcomes in many games depend on the timing of each player’s move. For such games, the payoffs are best summarized by a decision tree rather than a payoff matrix. Sometimes the second mover does best to offer a product that differs markedly from existing products. Other times, the second mover does best to mimic existing products closely.
COMMITMENT PROBLEMS
Games like the one in Concept Check 9.3, as well as the prisoner’s dilemma, the cartel game, and the remote-office game, confront players with a commitment problem—a situation in which they have difficulty achieving the desired outcome because they cannot make credible threats or promises. If both players in the original prisoner’s dilemma could make a binding promise to remain silent, both would be assured of a shorter sentence, hence the logic of the underworld code of Omerta, under which the family of anyone who provides evidence against a fellow mob member is killed. A similar logic explains the adoption of military-arms-control agreements, in which opponents sign an enforceable pledge to curtail weapons spending.
The commitment problem in the remote-office game could be solved if the managerial candidate could find some way of committing himself to manage honestly if hired. The candidate needs a commitment device—something that provides the candidate with an incentive to keep his promise.
Business owners are well aware of commitment problems in the workplace and have adopted a variety of commitment devices to solve them. Consider, for example, the problem confronting the owner of a restaurant. She wants her table staff to provide good service so that customers will enjoy their meals and come back in the future. Because good service is valuable to her, she would be willing to pay servers extra for it. For their part, servers would be willing to provide good service in return for the extra pay. The problem is that the owner cannot always monitor whether the servers do provide good service. Her concern is that having been paid extra for it, the servers may slack off when she isn’t looking. Unless the owner can find some way to solve this problem, she will not pay extra; the servers will not provide good service; and she, they, and the diners will suffer. A better outcome for all concerned would be for the servers to find some way to commit themselves to good service.
Restaurateurs in many countries have tried to solve this commitment problem by encouraging diners to leave tips at the end of their meals. The attraction of this solution is that the diner is always in a good position to monitor service quality. The diner should be happy to reward good service with a generous tip because doing so will help to ensure good service in the future. And the server has a strong incentive to provide good service because he knows that the size of his tip may depend on it.
The various commitment devices just discussed—the underworld code of Omerta, military-arms-control agreements, the tip for the server—all work because they change the incentives facing the decision makers. But as the next example illustrates, changing incentives in precisely the desired way is not always practical.
EXAMPLE 9.6 Changing Incentives
Will Sylvester leave a tip when dining on the road?
Sylvester has just finished a $100 steak dinner at a restaurant that is 500 miles from where he lives. The server provided good service. If Sylvester cares only about himself, will he leave a tip?
Will leaving a tip at an out-of-town restaurant affect the quality of service you receive?
Once the server has provided good service, there is no way for her to take it back if the diner fails to leave a tip. In restaurants patronized by local diners, failure to tip is not a problem because the server can simply provide poor service the next time a nontipper comes in. But the server lacks that leverage with out-of-town diners. Having already received good service, Sylvester must choose between paying $100 and paying $120 for his meal. If he is an essentially selfish person, the former choice may be a compelling one.
CONCEPT CHECK 9.4
A traveler dines at a restaurant far from home. Both he and the server who serves him are rational and self-interested in the narrow sense. The server must first choose between providing good service and bad service, whereupon the diner must choose whether or not to leave a tip. The payoffs for their interaction are as summarized on the accompanying game tree. What is the most the diner would be willing to pay for the right to make a binding commitment (visible to the server) to leave a tip at the end of the meal in the event of having received good service?

SOLVING COMMITMENT PROBLEMS WITH PSYCHOLOGICAL INCENTIVES
In all the games we have discussed so far, players were assumed to care only about obtaining the best possible outcome for themselves. Thus, each player’s goal was to get the highest monetary payoff, the shortest jail sentence, the best chance to be heard, and so on. The irony, in most of these games, is that players do not attain the best outcomes. Better outcomes can sometimes be achieved by altering the material incentives selfish players face, but not always.
If altering the relevant material incentives is not possible, commitment problems can sometimes be solved by altering people’s psychological incentives. As the next example illustrates, in a society in which people are strongly conditioned to develop moral sentiments—feelings of guilt when they harm others, feelings of sympathy for their trading partners, feelings of outrage when they are treated unjustly—commitment problems arise less often than in more narrowly self-interested societies.
EXAMPLE 9.7 The Impact of Moral Sentiments
In a moral society, will the business owner open a remote office?
Consider again the owner of the thriving business who is trying to decide whether to open an office in a distant city. Suppose the society in which she lives is one in which all citizens have been strongly conditioned to behave honestly. Will she open the remote office?
Suppose, for instance, that the managerial candidate would suffer guilt pangs if he embezzled money from the owner. Most people would be reluctant to assign a monetary value to guilty feelings. But for the sake of discussion, let’s suppose that those feelings are so unpleasant that the manager would be willing to pay at least $10,000 to avoid them. On this assumption, the manager’s payoff if he manages dishonestly will be not $1,500, but $1,500 − $10,000 = −$8,500. The new decision tree is shown in Figure 9.6.
FIGURE 9.6 The Remote-Office Game with an Honest Manager. If the owner can identify a managerial candidate who would choose to manage honestly at C, she will hire that candidate at B and open the remote office.
In this case, the best choice for the owner at B will be to open the remote office because she knows that at C the manager’s best choice will be to manage honestly. The irony, of course, is that the honest manager in this example ends up richer than the selfish manager in the previous example, who earned only a normal salary.
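Re-running the remote-office logic with the assumed $10,000 guilt cost subtracted from the dishonest payoff reverses the outcome:

```python
# The remote-office game of Figure 9.6, with the (assumed) $10,000 guilt
# cost from Example 9.7 folded into the manager's dishonest payoff.
GUILT = 10_000

def manager_payoff(action):
    # Take-home figures from the text: $1,500 if dishonest, $1,000 honest salary.
    return 1_500 - GUILT if action == "dishonest" else 1_000

def owners_choice():
    # The owner anticipates the manager's best action at C, then decides at B.
    best_action = max(("honest", "dishonest"), key=manager_payoff)
    owner_if_open = 1_000 if best_action == "honest" else -500
    return "open" if owner_if_open > 0 else "don't open"

print(owners_choice())  # with guilt in the payoff, the owner opens the office
```

Nothing about the tree has changed except the manager's payoff at the dishonest leaf, yet that single psychological adjustment is enough to make honesty, and therefore the opening of the office, the equilibrium path.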
Are People Fundamentally Selfish?
As Example 9.7 suggests, the assumption that people are self-interested in the narrow sense of the term does not always capture the full range of motives that govern choice in strategic settings. Think, for example, about the last time you had a meal at an out-of-town restaurant. Did you leave a tip? If so, your behavior was quite normal. Researchers have found that tipping rates in restaurants patronized mostly by out-of-town diners are essentially the same as in restaurants patronized mostly by local diners.
Indeed, there are many exceptions to the outcomes predicted on the basis of the assumption that people are self-interested in the most narrow sense of the term. People who have been treated unjustly often seek revenge even at ruinous cost to themselves. Every day, people walk away from profitable transactions whose terms they believe to be “unfair.” In these and countless other ways, people do not seem to be pursuing self-interest narrowly defined. And if motives beyond narrow self-interest are significant, we must take them into account in attempting to predict and explain human behavior.
Preferences as Solutions to Commitment Problems
Economists tend to view preferences as ends in themselves. Taking them as given, they calculate what actions will best serve those preferences. This approach to the study of behavior is widely used by other social scientists, and by game theorists, military strategists, philosophers, and others. In its standard form, it assumes purely self-interested preferences for present and future consumption goods of various sorts, leisure pursuits, and so on. Concerns about fairness, guilt, honor, sympathy, and the like typically play no role.
Yet such concerns clearly affect the choices people make in strategic interactions. Sympathy for one’s trading partner can make a businessperson trustworthy even when material incentives favor cheating. A sense of justice can prompt a person to incur the costs of retaliation, even when incurring those costs will not undo the original injury.
Preferences can clearly shape behavior in these ways; however, this alone does not solve commitment problems. The solution to such problems requires not only that a person have certain preferences, but also that others have some way of discerning them. Unless the business owner can identify the trustworthy employee, that employee cannot land a job whose pay is predicated on trust. And unless the predator can identify a potential victim whose character will motivate retaliation, that person is likely to become a victim.
From among those with whom we might engage in ventures requiring trust, can we identify reliable partners? If people could make perfectly accurate character judgments, they could always steer clear of dishonest persons. That people continue to be victimized at least occasionally by dishonest persons suggests that perfectly reliable character judgments are either impossible to make or prohibitively expensive.
Vigilance in the choice of trading partners is an essential element in solving (or avoiding) commitment problems, for if there is an advantage in being honest and being perceived as such, there is an even greater advantage in only appearing to be honest. After all, a liar who appears trustworthy will have better opportunities than one who glances about furtively, sweats profusely, and has difficulty making eye contact. Indeed, he will have the same opportunities as an honest person but will get higher payoffs because he will exploit them to the fullest.
In the end, the question of whether people can make reasonably accurate character judgments is an empirical one. Experimental studies have shown that even on the basis of brief encounters involving strangers, subjects are adept at predicting who will cooperate and who will defect in prisoner’s dilemma games. For example, in one experiment in which only 26 percent of subjects defected, the accuracy rate of predicted defections was more than 56 percent. One might expect that predictions regarding those we know well would be even more accurate.
Do you know someone who would return an envelope containing $1,000 in cash to you if you lost it at a crowded concert? If so, then you accept the claim that personal character can help people to solve commitment problems. As long as honest individuals can identify at least some others who are honest, and can interact selectively with them, honest individuals can prosper in a competitive environment.
RECAP
COMMITMENT PROBLEMS AND THE EFFECTS OF PSYCHOLOGICAL INCENTIVES
· Commitment problems arise when the inability to make credible threats and promises prevents people from achieving desired outcomes. Such problems can sometimes be solved by employing commitment devices—ways of changing incentives to facilitate making credible threats or promises.
· Most applications of the theory of games assume that players are self-interested in the narrow sense of the term. In practice, however, many choices—such as leaving tips in out-of-town restaurants—appear inconsistent with this assumption.
· The fact that people seem driven by a more complex range of motives makes behavior more difficult to predict, but also creates new ways of solving commitment problems. Psychological incentives often can serve as commitment devices when changing players’ material incentives is impractical. For example, people who are able to identify honest trading partners, and interact selectively with them, are able to solve commitment problems that arise from lack of trust.
SUMMARY
· Economists use the theory of games to analyze situations in which the payoffs of one’s actions depend on the actions taken by others. Games have three basic elements: the players; the list of possible actions, or strategies, from which each player can choose; and the payoffs the players receive for those strategies. The payoff matrix is the most useful way to summarize this information in games in which the timing of the players’ moves is not decisive. In games in which timing matters, a decision tree provides a much more useful summary of the information. (LO1, LO3)
· Equilibrium in a game occurs when each player’s strategy choice yields the highest payoff available, given the strategies chosen by the other. (LO1)
· A dominant strategy is one that yields a higher payoff regardless of the strategy chosen by the other player. In some games such as the prisoner’s dilemma, each player has a dominant strategy. Equilibrium occurs in such games when each player chooses his or her dominant strategy. In other games, not all players have a dominant strategy. (LO1, LO2)
· Equilibrium outcomes are often unattractive from the perspective of players as a group. The prisoner’s dilemma has this feature because it is each prisoner’s dominant strategy to confess, yet each spends more time in jail if both confess than if both remain silent. The incentive structure of this game helps explain such disparate social dilemmas as excessive advertising, military arms races, and failure to reap the potential benefits of interactions requiring trust. (LO2)
· Individuals often can resolve these dilemmas if they can make binding commitments to behave in certain ways. Some commitments—such as those involved in military-arms-control agreements—are achieved by altering the material incentives confronting the players. Other commitments can be achieved by relying on psychological incentives to counteract material payoffs. Moral sentiments such as guilt, sympathy, and a sense of justice often foster better outcomes than can be achieved by narrowly self-interested players. For this type of commitment to work, the relevant moral sentiments must be discernible by one’s potential trading partners. (LO4)
KEY TERMS
basic elements of a game
cartel
commitment device
commitment problem
credible promise
credible threat
decision tree
dominant strategy
dominated strategy
game tree
Nash equilibrium
payoff matrix
prisoner’s dilemma
repeated prisoner’s dilemma
tit-for-tat
REVIEW QUESTIONS
1. Identify the three basic elements of a game. (LO1)
2. How is your incentive to defect in a prisoner’s dilemma altered if you learn that you will play the game not just once but rather indefinitely many times with the same partner? (LO2)
3. Explain why a military arms race is an example of a prisoner’s dilemma. (LO2)
4. Why did Warner Brothers make a mistake by waiting until the filming of Analyze This was almost finished before negotiating with Tony Bennett to perform in the final scene? (LO3)
5. Suppose General Motors is trying to hire a small firm to manufacture the door handles for Buick sedans. The task requires an investment in expensive capital equipment that cannot be used for any other purpose. Why might the president of the small firm refuse to undertake this venture without a long-term contract fixing the price of the door handles? (LO3)
6. Describe the commitment problem that narrowly self-interested diners and waiters would confront at restaurants located on interstate highways. Given that in such restaurants tipping does seem to ensure reasonably good service, do you think people are always selfish in the narrowest sense? (LO4)
PROBLEMS
1. Consider the following game, called matching pennies, which you are playing with a friend. Each of you has a penny hidden in your hand, facing either heads up or tails up (you know which way the one in your hand is facing). On the count of “three,” you simultaneously show your pennies to each other. If the face-up side of your coin matches the face-up side of your friend’s coin, you get to keep the two pennies. If the faces do not match, your friend gets to keep the pennies. (LO1)
a. Who are the players in this game? What are each player’s strategies? Construct a payoff matrix for the game.
b. Does either player have a dominant strategy? If so, what?
c. Is there an equilibrium? If so, what?
2. Consider the following game. Harry has four quarters. He can offer Sally from one to four of them. If she accepts his offer, she keeps the quarters Harry offered her and Harry keeps the others. If Sally declines Harry’s offer, they both get nothing ($0). They play the game only once, and each cares only about the amount of money he or she ends up with. (LO1, LO3)
a. Who are the players? What are each player’s strategies? Construct a decision tree for this game.
b. Given their goal, what is the optimal choice for each player?
3. Blackadder and Baldrick are rational, self-interested criminals imprisoned in separate cells in a dark medieval dungeon. They face the prisoner’s dilemma displayed in the matrix.
Assume that Blackadder is willing to pay $1,000 for each year by which he can reduce his sentence below 20 years. A corrupt jailer tells Blackadder that before he decides whether to confess or deny the crime, she can tell him Baldrick’s decision. How much is this information worth to Blackadder? (LO2)
4. In studying for his economics final, Sam is concerned about only two things: his grade and the amount of time he spends studying. A good grade will give him a benefit of 20; an average grade, a benefit of 5; and a poor grade, a benefit of 0. By studying a lot, Sam will incur a cost of 10; by studying a little, a cost of 6. Moreover, if Sam studies a lot and all other students study a little, he will get a good grade and they will get poor ones. But if they study a lot and he studies a little, they will get good grades and he will get a poor one. Finally, if he and all other students study the same amount of time, everyone will get average grades. Other students share Sam’s preferences regarding grades and study time. (LO2)
a. Model this situation as a two-person prisoner’s dilemma in which the strategies are to study a little and to study a lot, and the players are Sam and all other students. Construct a payoff matrix in which the payoffs account for both the cost and benefit of studying.
b. What is the equilibrium outcome in this game? Which outcome would everyone (both the other students and Sam) prefer?
5. Newfoundland’s fishing industry has recently declined sharply due to overfishing, even though fishing companies were supposedly bound by a quota agreement. If all fishermen had abided by the agreement, yields could have been maintained at high levels. (LO2)
a. Model this situation as a prisoner’s dilemma in which the players are Company A and Company B, and the strategies are to keep the quota and break the quota. Suppose that if both companies keep the quota, then each receives a payoff of $100, and if both break the quota, then each receives a payoff of $0. On the other hand, if one company breaks the quota and the other keeps the quota, then the company that breaks the quota receives a payoff of $150 and the company that keeps the quota receives a payoff of –$50. Construct the corresponding payoff matrix, and explain why overfishing is inevitable in the absence of effective enforcement of the quota agreement.
b. Provide another environmental example of a prisoner’s dilemma.
c. In many potential prisoner’s dilemmas, a way out of the dilemma for a would-be cooperator is to make reliable character judgments about the trustworthiness of potential partners. Explain why this solution is not available in many situations involving degradation of the environment.
6. Two airplane manufacturers are considering the production of a new product, a 150-passenger jet. Both are deciding whether to enter the market and produce the new planes. The payoff matrix is as follows (payoff values are in millions of dollars):

The implication of these payoffs is that the market demand is large enough to support only one manufacturer. If both firms enter, both will sustain a loss. (LO2)
a. Identify two possible equilibrium outcomes in this game.
b. Consider the effect of a subsidy. Suppose the European Union decides to subsidize the European producer, Airbus, with a check for $25 million if it enters the market. Revise the payoff matrix to account for this subsidy. What is the new equilibrium outcome?
c. Compare the two outcomes (pre- and post-subsidy). What qualitative effect does the subsidy have?
7. Jill and Jack both have two pails that can be used to carry water down from a hill. Each makes only one trip down the hill, and each pail of water can be sold for $5. Carrying the pails of water down requires considerable effort. Both Jill and Jack would be willing to pay $2 each to avoid carrying one pail down the hill and an additional $3 to avoid carrying a second pail down the hill. (LO2)
a. Given market prices, how many pails of water will each child fetch from the top of the hill?
b. Jill and Jack’s parents are worried that the two children don’t cooperate enough with one another. Suppose they make Jill and Jack share equally their revenues from selling the water. Given that both are self-interested, construct the payoff matrix for the decisions Jill and Jack face regarding the number of pails of water each should carry. What is the equilibrium outcome?
8. The owner of a thriving business wants to open a new office in a distant city. If he can hire someone who will manage the new office honestly, he can afford to pay that person a weekly salary of $2,000 ($1,000 more than the manager would be able to earn elsewhere) and still earn an economic profit of $800. The owner’s concern is that he will not be able to monitor the manager’s behavior and that the manager would therefore be in a position to embezzle money from the business. The owner knows that if the remote office is managed dishonestly, the manager can earn $3,100, which results in an economic loss of $600 per week. (LO3)
a. If the owner believes that all managers are narrowly self-interested income maximizers, will he open the new office?
b. Suppose the owner knows that a managerial candidate is a devoutly religious person who condemns dishonest behavior and who would be willing to pay up to $15,000 to avoid the guilt she would feel if she were dishonest. Will the owner open the remote office?
9. Consider the following “dating game,” which has two players, A and B, and two strategies, to buy a movie ticket or a baseball ticket. The payoffs, given in points, are as shown in the matrix below. Note that the highest payoffs occur when both A and B attend the same event.

Assume that players A and B buy their tickets separately and simultaneously. Each must decide what to do knowing the available choices and payoffs but not what the other has actually chosen. Each player believes the other to be rational and self-interested. (LO1, LO2, LO3)
a. Does either player have a dominant strategy?
b. How many potential equilibria are there? (Hint: To see whether a given combination of strategies is an equilibrium, ask whether either player could get a higher payoff by changing his or her strategy.)
c. Is this game a prisoner’s dilemma? Explain.
d. Suppose player A gets to buy his or her ticket first. Player B does not observe A’s choice but knows that A chose first. Player A knows that player B knows he or she chose first. What is the equilibrium outcome?
e. Suppose the situation is similar to part d, except that player B chooses first. What is the equilibrium outcome?
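Since the matrix itself is not reproduced in this excerpt, the numbers below are purely hypothetical: a “battle of the sexes”-style table with the stated property that both players do best when they attend the same event. A sketch of the equilibrium check described in part b's hint:

```python
# Hypothetical coordination payoffs; payoffs[(a, b)] = (A's points, B's points).
STRATS = ["movie", "baseball"]

payoffs = {
    ("movie", "movie"):       (2, 3),
    ("movie", "baseball"):    (0, 0),
    ("baseball", "movie"):    (1, 1),
    ("baseball", "baseball"): (3, 2),
}

def pure_nash_equilibria():
    """Cells where neither player gains by changing strategy unilaterally."""
    eq = []
    for a, b in payoffs:
        a_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in STRATS)
        b_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in STRATS)
        if a_ok and b_ok:
            eq.append((a, b))
    return eq

print(pure_nash_equilibria())
```

With any payoffs of this coordination structure, the check finds two pure-strategy equilibria (both at the movie, or both at the baseball game) and neither player has a dominant strategy.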
10. Imagine yourself sitting in your car in a campus parking lot that is currently full, waiting for someone to pull out so that you can park your car. Somebody pulls out, but at the same moment a driver who has just arrived overtakes you in an obvious attempt to park in the vacated spot before you can. Suppose this driver would be willing to pay up to $10 to park in that spot and up to $30 to avoid getting into an argument with you. (That is, the benefit of parking is $10 and the cost of an argument is $30.) At the same time he guesses, accurately, that you too would be willing to pay up to $30 to avoid a confrontation and up to $10 to park in the vacant spot. (LO3, LO4)
a. Model this situation as a two-stage decision tree in which his bid to take the space is the opening move and your strategies are (1) to protest and (2) not to protest. If you protest (initiate an argument), the rules of the game specify that he has to let you take the space. Show the payoffs at the end of each branch of the tree.
b. What is the equilibrium outcome?
c. What would be the advantage of being able to communicate credibly to the other driver that your failure to protest would impose a significant psychological cost on you (for example, a cost of $25)?
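The payoffs in this problem are fully specified ($10 to park, $30 argument cost for each driver), so the logic of parts b and c can be sketched directly; the function names below are mine:

```python
# The parking confrontation as a two-stage game, in dollars.
PARK, ARGUE = 10, 30     # benefit of parking; each driver's cost of an argument

def your_best_response(psych_cost_of_not_protesting=0):
    """Compare your payoff from protesting vs. letting him have the spot."""
    protest = PARK - ARGUE                 # you get the spot, but argue: 10 - 30 = -20
    dont    = -psych_cost_of_not_protesting
    return ("protest", protest) if protest > dont else ("don't protest", dont)

def his_choice(psych_cost=0):
    """He grabs the spot only if he expects not to be challenged."""
    response, _ = your_best_response(psych_cost)
    if response == "protest":
        return "back off"                  # bidding and arguing would net him -30 < 0
    return "take the spot"                 # nets him +10

print(his_choice(0))     # no commitment: he takes the spot
print(his_choice(25))    # a credible $25 cost of backing down changes his move
```

This is why a credible commitment helps: once he believes backing down would cost you $25, protesting (at -$20) becomes your best response, and anticipating that, he leaves the spot alone.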
ANSWERS TO CONCEPT CHECKS
1. 9.1 No matter what American does, United will do better to leave ad spending the same. No matter what United does, American will do better to raise ad spending. So each player will play its dominant strategy: American will raise its ad spending and United will leave its ad spending the same. (LO1)
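The dominance reasoning above can be checked mechanically. The payoff matrix is not reproduced in this excerpt, so the numbers below are entirely hypothetical, chosen only so that raising is dominant for American and leaving spending the same is dominant for United:

```python
# Hypothetical payoffs; payoffs[(american_move, united_move)] = (american, united).
STRATS = ["raise", "leave"]

payoffs = {
    ("raise", "raise"): (3, 4),
    ("raise", "leave"): (8, 5),
    ("leave", "raise"): (2, 6),
    ("leave", "leave"): (5, 7),
}

def dominant_strategy(player):
    """Return a strategy that is a best response to every rival choice, or None.
    player 0 = American (row), player 1 = United (column)."""
    for s in STRATS:
        if all(
            (payoffs[(s, rival)][0] >= payoffs[(other, rival)][0]) if player == 0
            else (payoffs[(rival, s)][1] >= payoffs[(rival, other)][1])
            for rival in STRATS
            for other in STRATS
        ):
            return s
    return None

print(dominant_strategy(0))  # American's dominant strategy
print(dominant_strategy(1))  # United's dominant strategy
```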

2. 9.2 In game 1, no matter what Chrysler does, GM will do better to invest, and no matter what GM does, Chrysler will do better to invest. Each has a dominant strategy, but in following it, each does worse than if neither had invested. So game 1 is a prisoner’s dilemma. In game 2, no matter what Chrysler does, GM again will do better to invest; but no matter what GM does, Chrysler will do better not to invest. Each has a dominant strategy, and in following it, each gets a payoff of 10, which is 5 more than if each had played its dominated strategy. So game 2 is not a prisoner’s dilemma. (LO2)
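The two structural tests in this answer (each player has a dominant strategy, and the dominant-strategy outcome is beaten for both players by some other cell) can be checked in code. The payoff numbers below are hypothetical, chosen only to match the structure described for the two games:

```python
# Hypothetical payoffs; payoffs[(gm_move, chrysler_move)] = (gm, chrysler).
STRATS = ["invest", "dont"]

game1 = {  # PD-shaped: mutual "dont" beats the dominant-strategy outcome
    ("invest", "invest"): (5, 5),
    ("invest", "dont"): (12, 4),
    ("dont", "invest"): (4, 12),
    ("dont", "dont"): (10, 10),
}

game2 = {  # dominant strategies differ; (10, 10) equilibrium, 5 for dominated play
    ("invest", "invest"): (9, 5),
    ("invest", "dont"): (10, 10),
    ("dont", "invest"): (5, 5),
    ("dont", "dont"): (5, 8),
}

def dominant(payoffs, player):
    """Return a (weakly) dominant strategy for the player, or None."""
    for s in STRATS:
        if all(
            (payoffs[(s, r)][0] >= payoffs[(o, r)][0]) if player == 0
            else (payoffs[(r, s)][1] >= payoffs[(r, o)][1])
            for r in STRATS for o in STRATS
        ):
            return s
    return None

def is_prisoners_dilemma(payoffs):
    """Both players have dominant strategies, yet some other cell pays both more."""
    d_gm, d_ch = dominant(payoffs, 0), dominant(payoffs, 1)
    if d_gm is None or d_ch is None:
        return False
    eq = payoffs[(d_gm, d_ch)]
    return any(p[0] > eq[0] and p[1] > eq[1] for p in payoffs.values())

print(is_prisoners_dilemma(game1), is_prisoners_dilemma(game2))  # True False
```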
3. 9.3 Smith assumes that Jones will choose the branch that maximizes his payoff, which is the bottom branch at either B or C. So Jones will choose the bottom branch when his turn comes, no matter what Smith chooses. Since Smith will do better (60) on the bottom branch at B than on the bottom branch at C (50), Smith will choose the top branch at A. So the equilibrium in this game is for Smith to choose the top branch at A and Jones to choose the bottom branch at B. Smith gets 60 and Jones gets 105.

If Jones could make a credible commitment to choose the top branch no matter what, both would do better. Smith would choose the bottom branch at A and Jones would choose the top branch at C, giving Smith 500 and Jones 400. (LO3)
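The backward-induction logic in this answer can be mechanized. Two leaf payoffs below are assumptions, since the tree itself is not reproduced here: Jones's top branch at B, and Jones's payoff on the bottom branch at C (which must exceed 400 for the answer's reasoning to hold). The rest (60/105, 500/400, Smith's 50) come from the text.

```python
# Backward induction on the Smith-Jones tree.
# Each leaf is (smith_payoff, jones_payoff).
tree = {
    "A": {"player": "Smith",
          "top":    {"player": "Jones",          # node B
                     "top":    (100, 100),       # assumed
                     "bottom": (60, 105)},
          "bottom": {"player": "Jones",          # node C
                     "top":    (500, 400),
                     "bottom": (50, 500)}},      # Jones's 500 here is assumed (> 400)
}

def solve(node):
    """Return (payoffs, path) chosen by backward induction."""
    if isinstance(node, tuple):                  # leaf: nothing left to choose
        return node, []
    idx = 0 if node["player"] == "Smith" else 1
    best = None
    for move in ("top", "bottom"):
        payoffs, path = solve(node[move])
        if best is None or payoffs[idx] > best[0][idx]:
            best = (payoffs, [move] + path)
    return best

payoffs, path = solve(tree["A"])
print(path, payoffs)   # ['top', 'bottom'] (60, 105)
```

The sketch reproduces the answer: Jones takes the bottom branch at whichever node he reaches, so Smith's best opening move is the top branch at A.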
4. 9.4 The equilibrium of this game in the absence of a commitment to tip is that the server will give bad service: if she provides good service, she knows that the diner’s best option will be not to tip, which would leave her worse off than if she had provided bad service. Since the diner gets an outcome of 20 if he can commit to leaving a tip (15 more than he would get in the absence of such a commitment), he would be willing to pay up to 15 for the right to commit. (LO4)

1As quoted by Geraldine Fabrikant, “Talking Money with Tony Bennett,” The New York Times, May 2, 1999, Money & Business, p. 1.
2Nash was awarded the Nobel Prize in Economics in 1994 for his contributions to game theory. His life was also the subject of the Academy Award–winning film A Beautiful Mind.
3“Evenly distributed” means that the number of shoppers who live on any segment of the road between A and the freeway is exactly proportional to the length of that segment. For example, the number who live along a segment one-tenth of a mile in length would be 1/10 × 1,200 = 120.
4Harold Hotelling, “Stability and Competition,” Economic Journal 39, no. 1 (1929), pp. 41–57.