CHAPTER 10
LEARNING OBJECTIVES
After reading this chapter, you should be able to:
1. LO1 Describe and illustrate numerous rules of thumb people employ when making decisions using incomplete information and explain how reliance on these rules sometimes leads to poor choices.
2. LO2 Explain how misinterpretation of contextual clues can lead to poor choices.
3. LO3 Describe situations in which impulse-control problems prevent people from executing rationally conceived economic plans.
4. LO4 Describe the phenomenon of loss aversion and explain how it often creates a powerful bias in favor of the status quo.
5. LO5 Explain how relaxing the assumption that people are narrowly self-interested alters the predictions of traditional economic models.
6. LO6 Describe how concerns about relative position often create conflicts between the interests of individuals and those of the broader community.
7. LO7 List examples of laws and regulations that appear motivated by the behavioral tendencies described in this chapter.
Until the 1980s, traditional economic models almost invariably assumed a decision maker who was narrowly self-interested, well-informed, highly disciplined, and possessed of sufficient cognitive capacity to solve even highly complex optimization problems. This mythical individual was often called homo economicus for short.
During the intervening years, theoretical and empirical developments in economics and psychology have challenged each of these core assumptions. The challenges fall into three broad categories: (1) we often make systematic cognitive errors that prevent us from discovering which choices will best promote our interests; (2) even when we can discern which choices would be best, we often have difficulty summoning the willpower to execute them; and (3) we often pursue goals that appear inconsistent with self-interest, narrowly understood. In this chapter, we will consider each of these challenges in turn.
1978 Nobel Laureate Herbert Simon: “Satisficers, not maximizers.” ©Bettmann/Getty Images
The late Nobel laureate Herbert Simon (1984) was the first to impress upon economists that human beings are incapable of behaving like the rational beings portrayed in standard rational choice models. Simon, a pioneer in the field of artificial intelligence, stumbled upon this realization in the process of trying to instruct a computer to “reason” about a problem. He discovered that when we ourselves confront a puzzle, we rarely reach a solution in a neat, linear fashion. Rather, we search in a haphazard way for potentially relevant facts and information and usually quit once our understanding reaches a certain threshold. Our conclusions are often inconsistent, sometimes even flatly incorrect. But much of the time, we come up with serviceable, if imperfect, solutions. In Simon’s terms, we are “satisficers,” not maximizers. We move on once we feel we’ve got a solution that is “good enough.”
Subsequent economists have taken Simon’s lead and developed a very sophisticated literature on decision making under incomplete information. We now realize that when information is costly to gather and cognitive processing ability is limited, it is not even rational to make fully informed choices of the sort portrayed in traditional models. Paradoxically, it is irrational to be completely well-informed! When information is costly, the benefit from gathering more of it—from being able to make a very good decision rather than just a good one—may simply not justify the added cost. Thus, the literature on decision making under incomplete information, far from being a challenge to the traditional model, has actually bolstered our confidence in it. (More on these issues in Chapter 12, The Economics of Information.)
But there is another offshoot of Simon’s work, one that is less friendly to traditional models. This research, which has been strongly influenced by the economist Richard Thaler and the cognitive psychologists Daniel Kahneman and the late Amos Tversky, demonstrates that even with transparently simple problems, people often violate the most fundamental axioms of rational choice. One of the most cherished tenets of the rational choice model, for example, is that wealth is fungible. Fungibility implies, among other things, that our total wealth, not the amount we have in any particular account, determines what we buy. But Tversky and Kahneman provided a vivid experimental demonstration to the contrary.1 They asked one group of people to imagine that, having earlier purchased tickets for $10, they arrive at the theater to discover they have lost them. Members of a second group were asked to picture themselves arriving just before the performance to buy their tickets when they find that they have each lost $10 from their wallets. People in both groups were then asked whether they would continue with their plans to attend the performance. In the rational choice model, the forces governing this decision are exactly the same for both groups. Losing a $10 ticket should have precisely the same effect as losing a $10 bill. And yet, in repeated trials, most people in the lost-ticket group said they would not attend the performance, while an overwhelming majority—88 percent—in the lost-bill group said they would.
Amos Tversky (left) and Daniel Kahneman, early in their careers. Courtesy of Barbara Tversky
Richard Thaler developed a theory of mental accounting to explain this anomaly.2 He began by noting that people apparently organize their spending into separate “mental accounts” for food, housing, entertainment, general expenses, and so on. People who lose their tickets act as if they debit $10 from their mental entertainment accounts, while those who lose a $10 bill debit that amount from their general expense account. For people in the former group, the loss makes the apparent cost of seeing the show rise from $10 to $20, whereas for those in the second it remains $10. The rational choice model makes clear that the second group’s assessment is the correct one. And on reflection, most people do, in fact, agree that losing a ticket is no better reason not to see the performance than losing a $10 bill.

Richard Thaler, who was awarded the Nobel Prize in economics in 2017 for his pioneering work in behavioral economics.
Carsten Rehder/AFP/Getty Images
Working with numerous collaborators, Thaler, Kahneman, and Tversky identified a large catalog of systematic departures from rational choice, many of which stem from the application of judgmental and decision heuristics, or rules of thumb. Next, we describe some of the heuristics they identified.
JUDGMENTAL HEURISTICS OR RULES OF THUMB
AVAILABILITY
We often estimate the frequency of an event, or class of events, by the ease with which we can summon relevant examples from memory.3 The more easily we can recall examples, the more likely we judge an event to be. This rule of thumb is called the availability heuristic. On balance, it is an effective strategy since it is generally easier to recall examples of things that happen frequently. The problem is that frequency is not the only factor that influences ease of recall. If people are asked, for example, whether there are more murders than suicides in New York State each year, most answer confidently in the affirmative, yet there are always more suicides than murders. Murders are easier to recall not because they are more frequent, but because they are more salient.
EXAMPLE 10.1Availability Heuristic
Which category of words is more common in the English language: those that begin with “r” or those that have “r” as their third letter?
Using the availability heuristic, most people react by trying to summon examples in each category. And since most people find it easier to think of examples of words starting with “r,” the availability heuristic leads them to answer that such words occur more frequently. Yet English words with “r” in the third slot are actually far more numerous. The availability heuristic fails here because frequency isn’t the only thing that governs ease of recall. We store words in memory in multiple ways—by their meanings, by the sounds they make, by the images they evoke, by their first letters, and by numerous other features. But virtually no one stores words in memory by the identity of their third letter, which is why words with “r” in that slot are harder to recall.
REPRESENTATIVENESS
Another common rule of thumb is to assume that the likelihood of something belonging to a given category is directly related to the degree to which it shares characteristics thought to be representative of membership in that category.4 For example, because librarians are stereotypically viewed as introverted while salespersons are viewed as gregarious, the representativeness heuristic suggests that a given shy person is more likely to be a librarian than a salesperson. But this reasoning leads us astray when we fail to account for other factors that influence the relevant probabilities. Suppose the popular stereotypes are accurate—that, say, 90 percent of all librarians are shy, as compared with only 20 percent of salespeople. Would it then be safe to conclude that a given shy person is more likely to be a librarian than a salesperson? Not necessarily. Suppose, conservatively, that there are 90 salespeople in the population for every 10 librarians (the true ratio is more than 1,000 to 1). Under the assumed proportions of shy persons in each group, there would then be 9 shy librarians in the population for every 18 shy salespeople. And in that case, any given shy person would actually be twice as likely to be a salesperson as to be a librarian!
EXAMPLE 10.2Representativeness Heuristic
Only two moving companies provide local delivery service in a small western city, United and North American. United operates 80 percent of the vans, North American the remaining 20 percent. On a dark and rainy night, a pedestrian is run over and killed by a moving van. The lone witness to the incident testifies that the van was owned by North American. The law states that a company shall be held liable for damages only if it can be shown that the probability that the company was the guilty party is at least half. An independent laboratory hired by the court finds that under dark and rainy conditions, the witness is able to identify the owner of a moving van with 60 percent accuracy. What is the probability that a North American van ran over the pedestrian?
Out of every hundred moving vans in the area, 80 are United’s and 20 are North American’s. Of the 80 United vans, the witness incorrectly identifies 32 (or 40 percent of them) as belonging to North American. Of the 20 North American trucks, the witness correctly identifies 12 as belonging to North American. Thus the probability that a van identified as North American’s actually is North American’s is 12/(32 + 12) = 3/11. So North American will not be held liable.
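The arithmetic in this example is an application of Bayes’ rule, and it can be checked with a few lines of Python (a sketch; the function and variable names are ours, not the text’s):

```python
# Bayes' rule check for Example 10.2:
# P(van is North American | witness says "North American")
def posterior_north_american(share_na=0.20, share_united=0.80, accuracy=0.60):
    # The witness says "North American" either correctly (a North American
    # van, identified with 60% accuracy) or incorrectly (a United van
    # misidentified 40% of the time).
    says_na_and_is_na = share_na * accuracy                # 0.20 * 0.60 = 0.12
    says_na_but_is_united = share_united * (1 - accuracy)  # 0.80 * 0.40 = 0.32
    return says_na_and_is_na / (says_na_and_is_na + says_na_but_is_united)

p = posterior_north_american()
print(round(p, 3))  # 0.273, i.e., 12/44 = 3/11 -- well below the 0.5 threshold
```

The same function, with the shares and accuracy changed, can be used to work through Concept Check 10.1.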
CONCEPT CHECK 10.1
A witness testifies that the taxicab that struck and injured Smith in a dark alley was green. On investigation, the attorney for Green Taxi Company discovers that the witness identifies the correct color of a taxi in a dark alley 80 percent of the time. There are two taxi companies in town, Green and Blue. Green operates 15 percent of all local taxis. The law says that Green Taxi Company is liable for Smith’s injuries if and only if the probability that it caused them is greater than 0.5. Is Green liable?
REGRESSION TO THE MEAN
Another common judgment error is to ignore a phenomenon known to statisticians as regression to the mean. The idea is that if our first measurement of something is far from its average value, a second measurement will tend to be closer to its average value. Suppose, for example, that you draw a ball from an urn containing balls numbered from 1 to 100 and get a ball numbered 85. Would you be surprised if a second ball drawn from that same urn had a smaller number than 85? Since the urn contains 84 balls with smaller numbers and only 15 with higher ones, it is in fact very likely that your second ball will have a smaller number. Errors often occur when people fail to take this phenomenon into account.
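A quick simulation makes the urn example concrete (a sketch; we assume the first ball is replaced before the second draw, so the second draw is simply a fresh uniform draw from 1 to 100):

```python
import random

random.seed(1)  # fix the seed so the simulation is reproducible

# Given a first draw of 85 from an urn numbered 1-100, a second draw
# (with the first ball replaced) is just a fresh uniform draw, so the
# chance it comes in below 85 is 84/100.
trials = 100_000
smaller = sum(1 for _ in range(trials) if random.randint(1, 100) < 85)
print(smaller / trials)  # roughly 0.84
```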
Suppose, for example, that you are a manager choosing whether to adopt a supportive stance toward your employees—by praising them when they perform well—or a critical stance—by criticizing them when they perform poorly. To help you decide, you do an experiment in which you adopt a critical stance toward employees who have performed unusually poorly and a supportive stance toward those who have performed unusually well. On the basis of this experiment, you decide that the critical stance works better. Why might this inference have been misleading?
When you praise an employee, it will often be after that employee has turned in an unusually good performance. Similarly, when you are critical of an employee, it will often be after that employee has performed unusually poorly. Quite apart from any direct effects of praise or blame, subsequent performances in both cases are likely to be more nearly normal. This could lead you to conclude that praise caused worse performance and that criticism caused better performance. Independent experimental work suggests that this conclusion would be erroneous. Supportive management styles, it appears, are more likely than critical styles to bring out the best in employees.
The Economic Naturalist 10.1
Why did the American Olympic swimmer Shirley Babashoff, who set one world record and six national records at the 1976 Olympics, refuse to appear on the cover of Sports Illustrated?
Invitations to appear on the cover of the magazine typically come, as in Babashoff’s case, in the wake of outstanding athletic performances. Many athletes believed that appearing on the cover of Sports Illustrated made them subject to the Sports Illustrated jinx, which doomed them to subpar performance during the ensuing months. But this perception was a statistical illusion. As the psychologist Tom Gilovich explained, “It does not take much statistical sophistication to see how regression effects may be responsible for the belief in the Sports Illustrated jinx. Athletes’ performances at different times are imperfectly correlated. Thus, due to regression alone, we can expect an extraordinary good performance to be followed, on the average, by a somewhat less extraordinary performance. Athletes appear on the cover of Sports Illustrated when they are newsworthy—i.e., when their performance is extraordinary. Thus, an athlete’s superior performance in the weeks preceding a cover story is very likely to be followed by somewhat poorer performance in the weeks after. Those who believe in the jinx . . . are mistaken, not in what they observe, but in how they interpret what they see. Many athletes do suffer deterioration in their performance after being pictured on the cover of Sports Illustrated, and the mistake lies in citing a jinx, rather than citing regression as the proper interpretation of this phenomenon.”5
CONCEPT CHECK 10.2
Following an unusual increase in burglaries in a New York City neighborhood, the chief of police assigned additional officers to patrol the neighborhood. In the following month, the number of burglaries declined significantly. Does this mean the increased patrols were effective?
ANCHORING AND ADJUSTMENT
Another common judgmental heuristic is called anchoring and adjustment. It holds that when we try to estimate something, we often begin with a tentative initial estimate, called the anchor, which we then adjust in the light of whatever additional information is available. A common bias pattern observed in anchoring and adjustment is that the anchors are sometimes of questionable relevance, and the adjustments people make are often insufficient.
Suppose, for example, that a professor who stands 6’4” asks his students to estimate his height. They may often anchor on the average height for men—about 5’10” in the U.S.—and then adjust that value upward because their visual assessment indicates that their professor is significantly taller than average. Typically, however, their adjustments will fall short. In most cases, the average of their estimates of the professor’s height in this example will be two or three inches shorter than his actual height.
A particularly vivid example of this potential bias comes from an experiment in which American students were asked to estimate the percentage of African countries belonging to the United Nations. Most had no idea what the true percentage was. Before even hearing the question, subjects in two groups were asked to perform the ostensibly unrelated task of spinning a wheel that could stop on any number from 1 to 100. In the first group, the wheel was rigged to stop on 65, while in the second it was rigged to stop on 10. Students in both groups surely knew that the number on which the spinning wheel stopped bore no relation to the true percentage of African nations in the UN. Yet the average estimate of the group whose wheel stopped on 65 was 45 percent, while the average estimate of the group whose wheel stopped on 10 was only 25 percent.6
RECAP
JUDGMENTAL HEURISTICS AND RULES OF THUMB
According to the availability heuristic, we often estimate the frequency of an event by the ease with which we can recall relevant examples. Errors often result because frequency is only imperfectly correlated with ease of recall.
The representativeness heuristic holds that the likelihood of something belonging to a given category is directly related to the degree to which it resembles stereotypical members of that category. Here, too, errors often result because other factors also influence the relevant probabilities.
The tendency to ignore regression to the mean—the idea that extreme events are likely to be followed by more nearly normal ones—is yet another source of judgmental errors.
Anchoring and adjustment holds that when we try to estimate something, we often begin with a tentative anchor, which we then adjust in the light of additional information. The anchors that people employ are often of questionable relevance, and the corresponding adjustments are often insufficient.
MISINTERPRETATION OF CONTEXTUAL CLUES
Another important class of judgment errors stems from misinterpretation of common contextual cues.
THE PSYCHOPHYSICS OF PERCEPTION
As psychologists have long understood, our assessments are almost always heavily shaped by the local environmental context. Consider a couple driving with their 10-year-old daughter to visit her grandparents. They are 10 miles from their destination when she asks, “Are we almost there yet?” If those 10 miles remain on a 120-mile journey, her parents will reassure her that they’re nearly there. But if the same 10 miles remain on a 12-mile journey, they will answer differently: “No, honey, we’ve still got quite a way to go.” Similarly, in 1920, a car that could eventually reach a speed of 60 miles per hour would have seemed blazingly fast. Today, however, many motorists would think a car sluggish unless it could reach 60 mph in under six seconds.
Every assessment of this sort rests on an explicit or implicit frame of reference. According to a relationship known as the Weber-Fechner law of psychophysics, our perception of the change in any stimulus varies according to the size of the change measured as a proportion of the original stimulus. In some contexts, there is nothing problematic about reckoning importance in proportional terms. But in other contexts, it can lead us astray. Would you, for example, drive across town to save $10 on a $20 dorm lamp? Most students confronted with this question say they would, in part because saving 50 percent off the purchase price of an object seems like a significant achievement. But if those same students are asked whether they would drive across town to save $10 on a $4,000 TV, almost all will say no because the savings is such a trivial percentage of the purchase price. Although there is no uniquely correct answer to these questions, a rational person’s answer to both should be the same. In both cases, after all, the benefit of driving across town is exactly $10. So if the drive counts as more than $10 worth of hassle, the answer to both questions should be no. But if the hassle of the drive is less than $10, it makes sense to drive across town in both cases.7
THE DIFFICULTY OF ACTUALLY DECIDING
In the parable of Buridan’s ass, a starving donkey stands equidistant from two identical bales of hay. But because his attraction to each bale is exactly equal, he is unable to decide which bale to approach and therefore dies of hunger. Although it is difficult to imagine such indecision causing a human to die of hunger, the fact remains that many people experience significant anxiety when forced to decide between two options that are roughly equally attractive. If the options were, in fact, equally attractive, it would of course not matter which one was chosen. But the anxiety some people experience in such situations leads them to violate basic predictions of traditional economic models.
Suppose you were faced with a choice between the two apartments labeled A and B in Figure 10.1. The attraction of A is that it’s close to campus, but the downside is that its rent is fairly high. In contrast, B’s rent is much more reasonable, but unfortunately it happens to be much farther from campus. By suitable manipulation of the rent/distance values, it should be possible to describe pairs of apartments like A and B such that a given individual would be essentially indifferent between them. The interesting thing, however, is that this doesn’t mean that she would find it easy to choose. On the contrary, many people experience significant anxiety when confronted with such choices.
FIGURE 10.1 The Tradeoff between Rent and Convenience
Now suppose we gather a large group of subjects and manipulate the distance/rent values of A and B so that when forced to choose, half the subjects choose A and the remaining half choose B. What do you think would have happened if we had asked that same group of subjects to choose among three apartments—the same A and B as before plus a new option at C, as shown in Figure 10.2. According to traditional rational choice theory, the apartment at C is an irrelevant option, since it is worse along both dimensions than B. And, in fact, when C is added to the set of options, no one chooses it.
FIGURE 10.2 An Irrelevant Alternative
Yet the presence of the option at C has a profound effect on the observed pattern of choices between A and B. This time many more subjects choose B, and many fewer choose A.8 The apparent explanation of the shift is that subjects find it easy to choose B over C, and that invests option B with a halo that leads subjects to favor it when confronted with the anxiety-inducing choice between A and B.
Experienced salespersons may exploit this pattern when dealing with people who have difficulty choosing among hard-to-compare alternatives. To close the deal, it may be enough to expose the client to a new option that is worse along every dimension than one of the original options.
As the preceding examples illustrate, cognitive errors often lead to departures from the predictions of standard rational choice models. We call such examples “departures from rational choice with regret,” the idea being that once people are made aware of the cognitive errors they have made, they seem motivated to avoid those errors in the future.
Many proponents of traditional economic models appear reluctant to amend those models to take account of systematic cognitive errors. Some argue, for example, that in competitive environments, the penalties associated with such errors will reduce their frequency, enabling us to safely neglect them. Yet the mere fact that people make systematic cognitive errors does not imply that reliance on heuristics is maladaptive in any global sense. The important question is whether following some alternative strategy would lead to better results on average. Rules like the availability heuristic are extremely easy to apply and work well much of the time. The costs of an occasional wrong decision must be weighed against the obvious advantages of reliance on simple decision and judgment rules.
In any event, there is clear evidence that systematic cognitive errors exist and are widespread. Absent clear evidence that reliance on heuristics is a losing strategy on balance, there is a strong methodological case for taking explicit account of cognitive errors in descriptive models of individual behavior. A plausible case can also be made that by becoming more aware of the circumstances that are likely to elicit cognitive errors, we can learn to commit those errors less frequently.
RECAP
MISINTERPRETATION OF CONTEXTUAL CLUES
Almost every human judgment requires an explicit or implicit frame of reference. The Weber-Fechner law holds that our perception of the change in any stimulus varies according to the size of the change as a proportion of the original stimulus. This reckoning leads many to say, for example, that they would drive across town to save $10 on a $20 radio but not to save $10 on a $1,000 computer. Yet if the benefit of driving across town exceeds the cost of driving across town for the radio, the same must be true for the computer (since the benefit is $10 in both cases and the cost of driving is the same in both cases).
Other choice anomalies arise because people often have difficulty choosing between two alternatives whose characteristics are difficult to compare. When confronted by such choices, the addition of a third alternative that is worse than the first of the original alternatives on every dimension often has the effect of tilting choice heavily in favor of the first alternative. According to traditional models, the addition of such an alternative should have no effect on the choice between the other two.
IMPULSE-CONTROL PROBLEMS
Another troubling feature of traditional economic models is that they appear to rule out the possibility that people might regret having chosen behaviors whose consequences were perfectly predictable at the outset. Yet such expressions of regret are common. Many people wake up wishing they had drunk less the night before, but few wake up wishing they had drunk more.
It is likewise a puzzle within traditional economic models that people often incur great expense and inconvenience to prevent behaviors they would otherwise freely choose. For example, the model has difficulty explaining why some people would pay thousands of dollars to attend weight-loss camps that will feed them only 1,500 calories per day.
Welfare analysis based on the traditional models assigns considerable importance, as it should, to people’s own judgments about what makes them better off. In this framework, it is not clear how one could ever conclude that a risk freely chosen by a well-informed person was a wrong choice. By itself, the fact that the choice led to a bad outcome is clearly not decisive. If someone is unlucky enough to be killed crossing the street, for example, we would not conclude that crossing streets is a generally bad idea.
Traditional models thus suggest that criticizing someone’s decision to have an unprotected sexual encounter that resulted in AIDS would be similarly problematic. If the person knew the risks and yet freely chose to ignore them, this model makes it hard to say more than that he was just unlucky.
Compelling evidence from psychology suggests, however, that expressions of regret may often be genuine, that people might sensibly impose constraints on their own behavior, and that risks freely chosen by well-informed people may not be optimal from their own point of view. Much of this evidence concerns behavior in the realm of intertemporal choice.
The rational choice model says that people will discount future costs and benefits in such a way that the choice between two rewards that are separated by a fixed time interval will be the same no matter when the choice is made.9 Consider, for example, the pair of choices A and B:
A: $100 tomorrow vs. $110 a week from tomorrow.
B: $100 after 52 weeks vs. $110 after 53 weeks.
Both pairs of rewards are separated by exactly one week. If future receipts are discounted as assumed in traditional models, people will always make the same choice under alternative A as they do under alternative B. Since the larger payoff comes a week later in each case, the ordering of the present values of the two alternatives must be the same in A as in B, irrespective of the rate at which people discount. When people confront such choices in practice, however, there is a pronounced tendency to choose differently under the two scenarios: most pick the $100 option in A, whereas most choose the $110 option in B.
Richard Herrnstein, George Ainslie, George Loewenstein, Ted O’Donoghue, and others who have studied intertemporal choices experimentally have amassed substantial evidence that perceived future costs and benefits fall much more sharply with delay than assumed in traditional models.10 One consequence is that preference reversals of the kind just observed occur frequently. The classic reversal involves choosing the larger, later reward when both alternatives occur with substantial delay, then switching to the smaller, earlier reward when its availability becomes imminent. Thus, from the pair of alternatives labeled B above, in which both rewards come only after a relatively long delay, most subjects chose the larger, later reward; whereas from the pair labeled A, most chose the earlier, smaller reward.
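The contrast between the two discounting assumptions can be illustrated with a small calculation (a sketch: the daily exponential discount factor and the hyperbolic parameter k are illustrative values we chose, not estimates from the studies cited):

```python
def exponential_pv(amount, days, delta=0.999):
    """Present value under the constant per-period discounting of traditional models."""
    return amount * delta ** days

def hyperbolic_pv(amount, days, k=0.1):
    """Present value when perceived value falls sharply with delay."""
    return amount / (1 + k * days)

# Choice A: $100 tomorrow (day 1) vs. $110 a week from tomorrow (day 8).
# Choice B: the same pair of rewards, pushed back 52 weeks (days 364 and 371).
for name, pv in [("exponential", exponential_pv), ("hyperbolic", hyperbolic_pv)]:
    pick_a = "$100" if pv(100, 1) > pv(110, 8) else "$110"
    pick_b = "$100" if pv(100, 364) > pv(110, 371) else "$110"
    print(f"{name}: picks {pick_a} in A, {pick_b} in B")
```

Under exponential discounting the ratio of the two present values is the same in A and B, so the choice never reverses. Under the hyperbolic form, the calculation picks the $100 reward in A but the $110 reward in B, reproducing the reversal observed in the experiments.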
So, why would people pay thousands of dollars to attend a weight-loss camp that will feed them only 1,500 calories per day? If they tend to discount future costs and benefits sharply, the answer is clear: they really want to eat less, but they know that if tempting foods are readily available, they will lack the willpower to do so. The weight-loss camp solves this problem by making tempting foods less available.
The tendency to discount future costs and benefits excessively gives rise to a variety of familiar impulse-control problems and, in turn, to a variety of strategies for solving them. Anticipating their temptation to overeat, people often try to limit the quantities of sweets, salted nuts, and other delicacies they keep on hand. Anticipating their temptation to spend cash in their checking accounts, people enroll in payroll-deduction savings plans. Foreseeing the difficulty of putting down a good mystery novel in midstream, many people know better than to start one on the evening before an important meeting. Reformed smokers seek the company of nonsmokers when they first try to kick the habit and are more likely than others to favor laws that limit smoking in public places. The recovering alcoholic avoids cocktail lounges.
Effective as these bootstrap methods of self-control may often be, they are far from perfect. Many people continue to express regret about having overeaten, having drunk and smoked too much, having saved too little, having stayed up too late, having watched too much television, and so on. Traditional economic models urge us to dismiss these expressions as sour grapes. But behavioral evidence suggests that these same expressions are often coherent. In each case, the actor chose an inferior option when a better one was available and later feels genuinely sorry about it. As with behaviors that stem from systematic cognitive errors, those that stem from impulse-control problems may be described as “departures from rational choice with regret.”
In view of the obvious difficulties to which excessive discounting gives rise, it might seem puzzling that this particular feature of human motivation is so widespread. As with cognitive errors that stem from reliance on judgmental heuristics, however, here too it is important to stress that a behavioral tendency leading to bad outcomes some of the time does not imply that it is maladaptive in a global sense. Heavy discounting of future events forcefully directs our attention to immediate costs and benefits. If forced to choose between a motivational structure that gave heavy weight to current payoffs and an alternative that gave relatively greater weight to future payoffs, it is easy to imagine why evolutionary forces might have favored the former. After all, in highly competitive and uncertain environments, immediate threats to survival were numerous, and there must always have been compelling advantages in directing most of our energies toward meeting them.
In any case, there is clear evidence here, too, that excessive discounting exists and is widespread. Whether or not this motivational structure is disadvantageous on balance, there is a persuasive case for taking explicit account of it if our aim is to predict how people will behave under real-world conditions.
Some of the most important developments in economics in recent decades have entailed revisions to traditional assumptions about the nature of people’s preferences. We focus next on three major changes—the introduction of asymmetries in the way people evaluate alternatives involving gains and losses, the relaxation of the assumption that people are self-interested in the narrow sense, and the introduction of concerns about relative position.
RECAP
IMPULSE-CONTROL PROBLEMS
Although traditional economic models rule out the possibility that people might regret having chosen behaviors whose consequences were perfectly predictable, such regrets are common.
Compelling evidence from psychology suggests that expressions of regret are often genuine and that people often have difficulty waiting for a larger, better reward when an inferior reward is available immediately. Traditional models also have difficulty explaining why people might pay large sums to attend a weight-loss camp that will dramatically constrain their daily caloric intake. But if people tend to discount future costs and benefits excessively, they know that if tempting foods are readily available, they may lack the willpower to avoid them. Without taking explicit account of self-control problems, it is difficult to predict how people will behave under real-world conditions.
LOSS AVERSION AND STATUS QUO BIAS
Traditional economic models hold that the extra satisfaction from gaining a dollar should be roughly equal to the reduction in satisfaction from losing a dollar. In practice, however, people’s behavior suggests that losses often weigh much more heavily than gains.
An illustration of this tendency comes in the form of experiments designed to probe the difference in the value people assign to an object they already own versus the value they assign to one they have an opportunity to buy. In one experiment involving 44 subjects, half of the participants were chosen at random to receive a coffee mug that they were free to keep or sell as they chose. The other half of the participants did not receive a mug. Participants were then given an opportunity to examine the mugs carefully—their own if they’d been given one, a neighboring subject’s if they hadn’t.
Subjects were then told to submit bids reflecting how much they’d be willing to pay for a mug if they didn’t already have one or how much they’d be willing to accept for selling the mug they already had. In repeated rounds of this experiment, researchers used these bids to determine the prices at which the number of mugs offered would be equal to the number of mugs demanded. Those equilibrium prices varied between $4.25 and $4.75.
The researchers used the term “mug lovers” to describe the half of all subjects who happened to like the mugs most among all participants and “mug haters” to describe those who liked the mugs least. The fact that the mugs had been distributed at random implied that half of them, on average, would have ended up in the hands of mug lovers and the other half in the hands of mug haters. Thus, the apparent prediction of traditional economic models was that approximately half of the mugs—which is to say, 11 mugs on average—should be exchanged at the market-clearing prices. But the actual number of observed trades was dramatically smaller—only slightly more than two mugs changed hands, on average.
The apparent explanation was that the mere fact of owning a mug caused people to assign much greater value to it. The median price at which mug owners were willing to sell their mugs was $5.25, for example, more than twice the amount the median non-owner was willing to pay.11
This asymmetry in valuations is often called loss aversion. Similar experiments involving the exchange of modestly priced objects suggest that someone who possesses an object values it about twice as highly, on average, as someone who does not possess it.
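The market-clearing logic behind the traditional prediction, and the way loss aversion suppresses trade, can be illustrated with a short simulation. This is a hypothetical sketch, not the original experiment's data: the uniform random valuations and the two-to-one loss-aversion multiplier are assumptions chosen only to show the direction of the effect.

```python
import random

random.seed(1)

def expected_trades(n_subjects=44, n_mugs=22, loss_aversion=1.0, trials=2000):
    """Average number of mugs that change hands at market-clearing prices,
    when each owner's asking price equals her underlying valuation times
    a loss-aversion multiplier (1.0 = no loss aversion)."""
    total = 0
    for _ in range(trials):
        # Hypothetical valuations, uniform on [0, 1); mugs assigned at random.
        values = [random.random() for _ in range(n_subjects)]
        owners = set(random.sample(range(n_subjects), n_mugs))
        asks = sorted(values[i] * loss_aversion for i in owners)           # WTA
        bids = sorted((values[i] for i in range(n_subjects) if i not in owners),
                      reverse=True)                                        # WTP
        # Pair the lowest asks with the highest bids; the number of pairs in
        # which the bid beats the ask is the market-clearing volume of trade.
        total += sum(1 for ask, bid in zip(asks, bids) if bid > ask)
    return total / trials

print(expected_trades())                    # ≈ 11: about half the mugs trade
print(expected_trades(loss_aversion=2.0))   # noticeably fewer trades
```

With symmetric valuations, roughly half the mugs land with below-median valuers and should trade, matching the traditional prediction of about 11 trades; inflating owners' asking prices sharply reduces the simulated volume, in the same direction as the two-trade outcome actually observed.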
But there is evidence that when the things at stake are significantly more important, loss aversion becomes even more pronounced. In another experiment by Richard Thaler, subjects were asked to imagine that there was a 1 in 1,000 chance they had been exposed to an invariably fatal disease.12 How much would they be willing to pay for the only available dose of an antidote that would save their lives? Their median response to this question was $2,000. A second group of subjects was asked how much they would have to be paid to be willing to volunteer to be exposed to that same disease if there was no available antidote. This time the median response was close to $500,000—roughly 250 times as large as the response in the first case.
Note that in both cases, subjects are being asked how much they value a one-in-a-thousand reduction in their probability of dying. If people treated gains and losses symmetrically, their answers to the two questions would be roughly the same. But in the first case, they were being asked to buy an increase in their survival prospects (a gain), and in the second case, they were asked to sell a health benefit they already owned (a loss). People do not treat gains and losses symmetrically, and it appears that loss aversion becomes much more pronounced when the objects in question are more important.
Because every change in policy generates winners and losers, loss aversion has important implications for public policy. When the total benefits from a change in policy exceed the total cost, it will sometimes be practical for the winners to compensate the losers so that everyone comes out ahead. But when large numbers of people are involved, compensation often becomes impractical. And in such cases, resistance from those who stand to lose from the policy change often overwhelms the support of those who stand to gain, even when the total benefits substantially exceed the total costs. Loss aversion is thus an important source of what has been called status quo bias in the domain of public policy. The difficulty of enacting policy changes is summarized in the fabled iron law of politics: the losers always cry louder than the winners sing.
Cost-Benefit
Is status quo bias a bad thing? It might seem reasonable to wonder whether the best posture for cost-benefit analysts might be to simply accept at face value the inflated weights that people assign to losses. In other words, perhaps the widespread resistance to change that we observe in practice should be interpreted to mean that change often does more harm than good.
Weighing against this interpretation, however, is evidence that people’s initial estimates of how painful losses will be often fail to take into account the remarkable human tendency to adapt to altered circumstances. Many people say, for example, that they would prefer to be killed in an accident than to survive as a paraplegic. And most people who have become paralyzed have indeed experienced the transition as devastating. But what psychologists were surprised to discover was that paraplegics adapt more quickly and completely than they ever would have thought possible. According to one study, the post-accident happiness levels of paralyzed accident victims were significantly lower than they had been before their accidents, but not statistically different from those of a control group.13 Parting with a possession, similarly, is likely to be far more aversive at first than it is after a period of adaptation to living without it.
There are thus reasonable grounds for believing that steps to counteract status quo bias will often serve community interests. Behavioral economists have begun to explore such steps. One effective way to mitigate status quo bias is to focus more carefully on how different alternatives are presented to people. Suppose, for example, that policymakers want to encourage citizens to save a larger proportion of their incomes. Researchers have discovered that an effective and relatively painless way to accomplish that goal is to make participation in employer payroll-deduction savings the default option. Historically, most employers required their employees to take an active step in setting up a payroll-deduction savings plan. But researchers persuaded a sample of firms to make enrollment in the plan automatic, allowing those who didn’t want to participate to take the active step of opting out. When participation became the status quo, savings plan enrollments shot up dramatically—in one case, a jump from less than 50 percent participation to 86 percent of all first-year employees.14 Similar experiments have demonstrated the power of default options to alter behavior in other domains, ranging from the kinds of insurance people buy to their willingness to be organ donors.
The Economic Naturalist 10.2
Why was Obamacare difficult to enact and harder still to repeal?
The United States has been much slower than most other industrialized countries to ensure universal access to health care for its citizens. It is difficult to understand this history without the help of both traditional and behavioral economics models.
Nearly every economic analysis of the health care industry begins with the observation that individually purchased private insurance is not a viable business model for providing medical services. Because serious health problems are so costly to treat, insurance is broadly affordable only if most policyholders are healthy most of the time.
As we will discuss at greater length in Chapter 12, The Economics of Information, the fundamental problem is that insurers can often identify specific people—like those with serious preexisting conditions—who are likely to need expensive care. Any company that issued policies to such people at affordable rates would be driven into bankruptcy, with its most profitable customers lured away by competitors offering lower rates made possible by selling only to healthy people. (The difficulty is easy to see if you imagine what would happen if the government required companies to sell affordable fire insurance to people after their homes had already burned down.) Because of this adverse selection problem, unregulated private markets for individual insurance cannot accommodate the least healthy—those who most desperately need health insurance.
Many countries solve this problem by having the government provide health insurance for all. In some, like Britain, the government employs the care providers directly. Others, like France, reimburse private practitioners—as does the Medicare program for older Americans. The United States probably would have adopted one of those models had it not been for historical accidents that led to widespread adoption of employer-provided health plans in the 1940s.
To control costs of World War II mobilization, regulators capped growth of private-sector wages, making it hard for employers to hire desperately needed workers. But because many fringe benefits weren’t capped, employers could exploit a loophole by offering additional benefits, like health insurance. Its cost was deductible as a business expense, and in 1943, the Internal Revenue Service ruled that its value was not taxable as employee income. By 1953, employer health plans covered 63 percent of workers, compared to only 9 percent in 1940.
Eligibility for favorable tax treatment hinged on the plans being available to all employees, a feature that mitigated the adverse-selection problem described earlier. Although hiring workers with preexisting conditions meant paying higher premiums, tight labor markets made many employers willing to bear that cost because insurance was an effective recruiting tool.
Employer plans are thus a significant improvement over individual private insurance, but they are still deeply flawed. If you lose your job, you can lose your coverage. This problem was cast into sharp relief by high unemployment in the wake of the financial crisis of 2008.
The number of workers covered by employer plans has been declining for several decades. According to census data, 65 percent of workers had employer-backed plans in 2000, but only 55 percent were covered by 2010. This decline has been driven in part by rapid increases in health care costs.
Economists agree that no matter how those costs are apportioned on paper, any money spent on employer-sponsored health plans ultimately comes at the expense of wages. Real wages have risen little in recent decades, and the prospect that they will keep stagnating portends further erosions in coverage. So even if we ignore the inherent failings of the employer model, it simply won’t be able to deliver broad health coverage.
Since its passage by Congress in March of 2010, American health care policy has been guided by the provisions of the Affordable Care Act, popularly known as Obamacare. Modeled after proposals advanced by the Heritage Foundation, the American Enterprise Institute, and other conservative research organizations in the 1990s, the main provisions of the ACA were intended to eliminate the most salient problems associated with the prevailing system.
One provision establishes insurance exchanges, where participating companies must offer coverage to all customers, irrespective of preexisting conditions. Another imposes a financial penalty on those who fail to obtain coverage—the individual mandate. And a third prescribes subsidies to make insurance more affordable for low-income families. (The Massachusetts plan engineered by Mitt Romney as governor in 2006 took an almost identical approach.)
If Congress had been designing a health care program from scratch, it probably would have opted for a largely government-financed system like the ones common in other countries. But polls consistently showed that most voters were satisfied with their existing employer-provided health insurance, and because lawmakers had at least an implicit understanding of loss aversion, they understood that any attempt to eliminate existing coverage was certain to generate powerful resistance. And hence their decision to build Obamacare on top of the employer-provided health insurance system.
Without each of its three main provisions, Obamacare couldn’t have worked. The individual mandate was by far the most controversial of the three. Critics denounced it as a violation of our liberty. Paradoxically, the slim majority on the Supreme Court that affirmed the mandate’s constitutionality in 2012 seemed to embrace that view, likening the mandate to requiring citizens to eat broccoli for their own good. The court defended the constitutionality of the mandate by calling it a tax rather than a penalty.
But that interpretation will strike many economists as a misreading of the mandate’s purpose. It isn’t that people should buy health insurance because it would be good for them. Rather, the problem is that failure to do so would cause significant harm to others since insurance can work only if most policyholders are healthy most of the time. Society will always step in to provide care—though often in far more costly, delayed, and ineffective forms—for the uninsured who fall ill. To claim the right not to buy health insurance is thus to assert a right to impose enormous costs on others. Many legal scholars insist that the Constitution guarantees no such right.
In any event, the mandate continued to be unpopular, and congressional critics of Obamacare sought to repeal the Affordable Care Act scores of times during the years following its passage. They came close to doing so in a dramatic series of House and Senate votes in 2017. But in the end, repeal efforts were met with overwhelming public resistance and could not attract even the slim majorities required for passage.
Obamacare had boosted the number of Americans with health insurance by more than 20 million, and estimates by the nonpartisan Congressional Budget Office suggested that repeal would reduce that number by a similar amount.
In short, once the Affordable Care Act was enacted into law, the conditions it brought about became the new status quo. So even as loss aversion helps explain why passage of the Affordable Care Act was so difficult in the first place, it also explains why it was so difficult to repeal once it became law.
Despite that difficulty, slim majorities in both houses of Congress passed an amendment to the 2017 Tax Cuts and Jobs Act that eliminated the Obamacare provision that penalized people who did not enroll in a health insurance plan. As we will discuss in Chapter 12, The Economics of Information, this step threatens the long-term viability of the Affordable Care Act.
RECAP
LOSS AVERSION AND STATUS QUO BIAS
Widespread behavioral evidence suggests that people tend to weigh losses much more heavily than gains. The minimum price for which people would be willing to sell objects they already own is substantially higher than the price they would be willing to pay to acquire those same objects if they do not already own them.
This asymmetry is often called loss aversion. In experiments involving low-priced items, the difference between willingness to accept and willingness to pay is often on the order of two to one. But the ratio can be dramatically higher for high-valued items, such as conditions related to health and safety.
Because every change in policy generates winners and losers, loss aversion implies a large bias in favor of the status quo in public policy decisions. Evidence suggests that this bias often has harmful consequences since people’s initial estimates of how painful losses will be often fail to take into account their ability to adapt to altered circumstances.
The strategic use of default options has proven to be an effective way to mitigate status quo bias. For example, making participation in employer payroll-deduction savings plans the default option has been shown to significantly reduce the tendency to save too little.
BEYOND NARROW SELF-INTEREST
Traditional economic models exclude other-regarding motives. They concede, for example, that homo economicus might volunteer in a United Way campaign in order to reap the benefits of an expanded network of business or social contacts, but insist that he would never make anonymous donations to charity. Yet charitable donations, many of them completely anonymous, totaled almost $400 billion a year in the U.S. in 2017.15 Traditional models acknowledge that homo economicus might tip in a local restaurant to ensure good service on his next visit, but predict that he would not tip when dining alone in a restaurant far from home. Yet tipping rates are essentially the same in these two situations.16 Traditional models also concede that homo economicus might go to the polls in a local ward election, in which a single vote might be decisive, but predict universal abstention in presidential elections. Yet tens of millions of people regularly vote in presidential elections. In short, traditional self-interest models are simple and elegant, but they also predict incorrectly much of the time.
THE PRESENT-AIM STANDARD OF RATIONALITY
The most widely employed alternative to the self-interest model is the present-aim model, which holds that people are efficient in the pursuit of whatever objectives they happen to hold at the moment of action.17 The obvious appeal of the present-aim model is that it enables us to accommodate the plurality of goals that many people actually seem to hold, thus permitting us to account for behaviors that are anomalous within the self-interest framework. Someone donates anonymously to charity not because she hopes to receive indirect material benefits, but because she gets a warm glow from helping others in need. Someone tips in a restaurant away from home not to ensure good service on his next visit, but because he feels sympathy for the waiter’s interests. Someone votes in a presidential election not because his vote might tip the balance, but because he feels it his civic duty to do so. And so on.
But the present-aim model’s flexibility also turns out to be a significant liability. The problem is that virtually any bizarre behavior can be “explained” after the fact simply by assuming a sufficiently strong taste for it. We call this the “crankcase-oil problem”: If someone drinks a gallon of used crankcase oil from his car, then writhes in agony and dies, a present-aim theorist has a ready explanation—the man must have really liked crankcase oil. Proponents of traditional models thus have sound reasons for their methodological skepticism about the present-aim model. As they correctly point out, a model that can explain virtually everything is not really a scientific model at all.
We are confronted, then, with a choice between two flawed alternatives. The self-interest standard of rationality generates testable predictions about behavior, but these predictions often turn out to be flatly incorrect. The present-aim standard of rationality accommodates a much broader range of observed behavior, but remains vulnerable to the charge of excessive flexibility. In place of these two, we consider a third standard, one that is significantly more flexible than the self-interest standard, but also one whose flexibility sidesteps the most serious objections to the present-aim standard.
THE ADAPTIVE RATIONALITY STANDARD18
Like both the self-interest and present-aim standards, the adaptive rationality standard assumes that people choose efficient means to achieve their ends. But unlike the other conceptions, which take goals as given, adaptive rationality regards goals as objects of choice themselves and, as such, subject to a similar efficiency requirement.
By what criterion might we evaluate the efficiency of an individual’s choice of goals? Since the problem that plagues the present-aim standard is excessive flexibility, the efficiency standard we employ for evaluating alternatives to this model should be both objective and strict. On both counts, Charles Darwin’s theory of natural selection, enriched to allow for the influence of cultural and other environmental forces during development, is an attractive candidate. In this framework, the design criterion for a goal or taste is the same as for an arm or a leg or an eye—namely, the extent to which it assists the individual in the struggle to acquire the resources required for survival and reproduction. If it works better than the available alternatives, selection pressure will favor it. Otherwise, selection pressure will work against it. Our proposal, in brief, is that analysts be free to add a preference to the individual’s list of concerns, but only upon showing that an individual motivated by that preference need not be handicapped in the competition to acquire the resources needed for survival and reproduction.
This standard passes the simple test of ruling out a taste for drinking crankcase oil. Indeed, it might seem such a stringent standard as to rule out any conception of rationality other than narrow self-interest. After all, if natural selection favors the traits and behaviors that maximize individual reproductive fitness, and if we define behaviors that enhance personal fitness as selfish, then self-interest becomes the only viable human motive by definition. This tautology was a central message of much of the sociobiological literature of the 1970s and 1980s.
On a closer look, however, the issues are less simple. For there are many situations in which individuals whose only goal is self-interest are likely to fare worse than others who pursue a richer mix of goals. Such is the case, for example, when individuals confront commitment problems, of which the one-shot prisoner’s dilemma, discussed in Chapter 9, Games and Strategic Behavior, provides a clear illustration. If both players in this game cooperate, each does better than if both defect, and yet each gets a higher payoff by defecting no matter which strategy the other player chooses. If both players could commit themselves to a strategy of mutual cooperation, they would have a clear incentive to do so. Yet mere promises issued by narrowly self-interested persons would not seem to suffice, for each person would have no incentive to keep such a promise.
Suppose, however, that some people have a (perhaps context-specific) taste for cooperating in one-shot prisoner’s dilemmas. If two players knew one another to have this taste, they could interact selectively, thereby reaping the gains of mutual cooperation. It is important to stress that merely having the taste is by itself insufficient to solve the problem. One must also be able to communicate its presence credibly to others and be able to identify its presence in them.
Can the presence of a taste for cooperation be reliably discerned by outsiders? Some experiments suggest that subjects are surprisingly accurate at predicting who would cooperate and who would defect in one-shot prisoner’s dilemmas played with near strangers.19 In these experiments, the base rate of cooperation was 73.7 percent and the base rate of defection only 26.3 percent. If subjects randomly guessed that a player would cooperate, they would thus have been accurate 73.7 percent of the time, and a random prediction of defection would have been accurate only 26.3 percent of the time. The actual accuracy rates for these two kinds of prediction were 80.7 percent and 56.8 percent, respectively. The likelihood of such high accuracy rates occurring by chance is less than 1 in 1,000.
Subjects in this experiment were strangers at the outset and were able to interact with one another for only 30 minutes before making their predictions. It is plausible to suppose that predictions would be considerably more accurate for people we have known for a long time.
For example, consider a thought experiment based on the following scenario:
An individual has a gallon jug of unwanted pesticide. To protect the environment, the law requires that unused pesticide be turned in to a government disposal facility located 30 minutes’ drive from her home. She knows, however, that she could simply pour the pesticide down her basement drain with no chance of being caught and punished. She also knows that her one gallon of pesticide, by itself, will cause only negligible harm if disposed of in this fashion.
Can you think of anyone who you feel certain would dispose of the pesticide properly? Most people say they can. Usually they have in mind friends they have known for a long time. If you feel you can identify such a person, then you, too, accept the central premise of the adaptive rationality account—namely, that it is possible to identify unselfish motives in at least some other people.
The presence of such motives, coupled with the ability of outsiders to discern them, makes it possible to solve a broad range of important commitment problems. Knowing this, even a rational, self-interested individual would have every reason to choose preferences that were not narrowly self-interested. Of course, people do not choose their preferences in any literal sense. The point is that if moral sentiments can be reliably discerned by others, then the complex interaction of genes and culture that yields human preferences can sustain preferences that lead people to subordinate narrow self-interest to the pursuit of other goals.
For example, although homo economicus might want to deter a potential aggressor by threatening to retaliate, his threat would not be credible if the cost of retaliation exceeded the value of what he stood to recover. By contrast, someone known to care strongly about honor for its own sake could credibly threaten to retaliate even in such cases. Such a person would thus be much less vulnerable to aggression than someone believed to be narrowly self-interested.
Similarly, although homo economicus might want to deter a one-sided offer from a potential trading partner by threatening to reject it, his threat would not be credible if his incentives clearly favored accepting the offer. By contrast, someone known to care strongly about equity for its own sake could credibly threaten to refuse such offers. Such a person would thus be a more effective bargainer than someone believed to be narrowly self-interested.
A second potential pathway whereby moral sentiments might be adaptive at the individual level is by helping people solve a variety of impulse control problems.20 Consider, for example, the repeated prisoner’s dilemma. As Rapoport, Axelrod, and others have shown, the tit-for-tat strategy fares well against alternative strategies in the repeated prisoner’s dilemma.21 Self-interested persons thus have good reasons to play tit-for-tat, yet to do so they must first solve an impulse-control problem. Playing tit-for-tat means cooperating on the first interaction, which in turn implies a lower payoff on that move than could have been had by defecting. The reward for playing tit-for-tat lies in the prospect of a string of mutually beneficial interactions in the future. The rational choice model holds that if the benefits of future cooperation, suitably discounted, outweigh the current costs, people will cooperate. But because the gain from defecting comes now, whereas its cost comes in the future, the decision maker confronts a classic impulse-control problem. (For more information on game theory, see Chapter 9, Games and Strategic Behavior.)
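The payoff logic just described can be made concrete in a minimal simulation. The payoff numbers below are conventional illustrative values, not figures from the text; they are chosen only to satisfy the prisoner's dilemma structure (defecting pays more in any single round, but mutual cooperation beats mutual defection).

```python
# Hypothetical payoffs (row player, column player):
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; lone cooperator -> 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Play a repeated prisoner's dilemma; return total scores (a, b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))   # (9, 14): first-round loss, then mutual defection
```

The sketch shows the impulse-control problem in miniature: tit-for-tat gives up points on the first move against a defector, but two tit-for-tat players earn far more over ten rounds than two defectors would.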
The moral sentiment of sympathy, which figured so prominently in Adam Smith’s early writings, helps to solve this problem. Someone who feels sympathy for the interests of her trading partner faces an additional cost of defecting, one that occurs at the same time as the gain from defecting. Because of this cost, the person who sympathizes with her trading partners is less likely to defect and is thus more likely to reap the long-run gains of cooperation.
Moral sentiments and other non-self-interested motives figure prominently in economic and social life. Traditional economic models ignore such motives, implicitly viewing the behaviors they provoke as irrational. But it is more descriptive to call them “departures from traditional rational choice models without regret” because, when people are told that they are behaving irrationally under the self-interest standard, they typically do not seem to want to alter their behavior. The attraction of the adaptive rationality standard is that it provides a methodologically rigorous framework within which to expand the narrow range of motives considered in traditional models. In the same fashion, it provides a coherent framework within which to take explicit account of systematic cognitive errors and impulse-control problems.
CONCERNS ABOUT FAIRNESS
Additional evidence for the claim that narrow self-interest is not the only important human motive comes from an elegant experiment known as the ultimatum bargaining game.22 The game is played by two players, “Proposer” and “Responder.” It begins with Proposer being given a sum of money (say, $100) that he must then propose how to divide between himself and Responder. Responder then has two options: (1) he can accept, in which case each party gets the amount proposed, or (2) he can refuse, in which case each party gets zero and the $100 goes back to the experimenter.
If both players cared only about their own absolute incomes, as assumed by economic orthodoxy, Proposer should propose $99 for himself and $1 for Responder (only whole-dollar amounts are allowed). Responder should then accept this one-sided offer, reasoning that getting $1 is better than nothing.
Yet in scores of experiments performed in many different countries, this is almost never what happens. Thus, in one typical study in which the total amount to be divided was $10, the average amount offered by Proposer was $4.71, and in more than 80 percent of all cases Proposer offered exactly $5, a 50-50 split.23 When Responders in this same study were asked to report the minimum amounts they would accept, their average response was $2.59. These experiments suggest clearly that laboratory subjects care not only about how much money they get, but also about how it is apportioned in relative terms.24 Most people seem predisposed to reject offers whose terms they find sufficiently “unfair.”
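The gap between the orthodox prediction and observed behavior can be sketched in a few lines. The rejection threshold below is hypothetical, loosely based on the reported $2.59 average minimum acceptable offer; it is an illustration, not a model fitted to the experimental data.

```python
def ultimatum(total, offer_to_responder, min_acceptable):
    """Return (proposer payoff, responder payoff). Responder rejects any
    offer below her minimum acceptable amount, leaving both with zero."""
    if offer_to_responder >= min_acceptable:
        return total - offer_to_responder, offer_to_responder
    return 0, 0

# Orthodox prediction: any positive offer is accepted.
print(ultimatum(10, 1, min_acceptable=1))      # (9, 1)

# Observed behavior: "unfair" offers are rejected, so a proposer
# facing a $2.59 threshold does better offering $5 than $1.
print(ultimatum(10, 1, min_acceptable=2.59))   # (0, 0)
print(ultimatum(10, 5, min_acceptable=2.59))   # (5, 5)
```

Once responders are willing to punish unfair offers at a cost to themselves, the proposer's self-interested best response shifts toward the roughly even splits observed in the laboratory.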
A large literature has emerged in recent years, much of it based on laboratory experiments, showing that concerns about fairness influence numerous similar economic choices. As in the laboratory, so in life. Believing their current contracts to be unfair, workers are often willing to go on strike, even when they know it may cost them their jobs. Customers often refuse to patronize merchants whose terms they believe to be inequitable, even when it will be more costly or less convenient to buy from alternative suppliers.25
Material incentives clearly matter. Ultimately, however, the law can do only so much to constrain individual behavior. To achieve a well-ordered society, we must rely at least in part on people’s willingness to subordinate personal interests to the common good. Although the adaptive rationality standard embraces the possibility of voluntary self-restraint in such cases, the self-interest standard all but denies that possibility.
It is troubling, therefore, that at least some evidence suggests that exposure to self-interest models tends to inhibit self-restraint. For example, one study found that economics majors were more than twice as likely as non-majors to defect when playing one-shot prisoner’s dilemmas with strangers, a difference that was not merely a reflection of the fact that people who chose to major in economics were more opportunistic to begin with.26 The same study also found that academic economists were more than twice as likely as the members of other disciplines to report giving no money at all to any private charity.27
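The one-shot prisoner’s dilemma used in such studies can be sketched numerically. The payoff values below are illustrative assumptions; only their ordering matters (temptation > mutual cooperation > mutual defection > sucker’s payoff):

```python
# One-shot prisoner's dilemma with illustrative payoffs:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 4.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

def expected_payoff(my_move, partner_moves):
    """Average payoff from playing my_move against a pool of partners."""
    total = sum(PAYOFF[(my_move, p)][0] for p in partner_moves)
    return total / len(partner_moves)

# Under random pairing in a half-cooperative population, defecting pays more:
population = ["C", "D"]
print(expected_payoff("D", population))  # -> 2.5
print(expected_payoff("C", population))  # -> 1.5

# But if trustworthiness can be identified, cooperators can pair selectively
# with one another, and cooperation then beats defection:
print(expected_payoff("C", ["C"]))       # -> 3.0
```

The last line captures the chapter’s point about commitment: being trustworthy pays only if trustworthiness can be credibly communicated and detected, so that cooperators interact selectively.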
These findings suggest that our choice among standards of rationality may be important not just for our attempts to predict and explain behavior, but also because of its potential impact on our willingness to engage in self-restraint. If so, we have even greater reason to reconsider the welfare implications of traditional economic models.
RECAP
BEYOND NARROW SELF-INTEREST
Self-interest is without doubt an important human motive. But predictions based on the assumption that it is the only important motive fall short. Most people vote in presidential elections, despite the negligible possibility that their votes will tip the balance. Most people leave tips even in restaurants they never expect to visit again. And people donate billions of dollars to charity each year, much of it anonymously.
The present-aim model of rational choice asserts that people often behave unselfishly because they get a warm glow from doing so. That enables the present-aim model to accommodate many behaviors that the narrow self-interest model cannot. But that flexibility also turns out to be a significant liability since virtually any bizarre behavior can be “explained” after the fact simply by assuming a sufficiently strong taste for it. For this reason, critics object that the present-aim model doesn’t make any testable predictions.
Like both the self-interest and present-aim standards, the adaptive rationality standard assumes that people choose efficient means to achieve their ends. But unlike the other conceptions, which take goals as given, adaptive rationality regards goals as objects of choice themselves and, as such, subject to a similar efficiency requirement. Analysts are free to add a preference to the individual’s list, but only upon showing that the preference would not compromise efforts to acquire the resources needed for survival in competitive environments. Certain unselfish motives qualify under this standard because they help people solve important commitment problems, such as the one-shot prisoner’s dilemma. If two players had reason to believe one another to be trustworthy, they could interact selectively, thereby reaping the gains of mutual cooperation. But merely being trustworthy is not enough. One must also be able to communicate one’s trustworthiness to others credibly and be able to identify its presence in them. Experimental evidence suggests that these conditions are often met.
Additional evidence for the claim that narrow self-interest is not the only important human motive comes from experiments showing that people reject transactions that would have benefited them if they believe the terms of those transactions to be unfair.
CONCERNS ABOUT RELATIVE POSITION
Traditional economic models assume that well-being depends on current and future levels of absolute consumption. Considerable evidence suggests, however, that well-being depends as much or more on current and future levels of relative consumption.28 The adaptive rationality standard holds that a taste for relative consumption can be introduced into the utility function if being thus motivated need not compromise resource acquisition in competitive environments. If people known to care about relative consumption are indeed better bargainers than those thought to care only about absolute consumption, this test is met.
Adding a taste for relative consumption alters the descriptive and normative messages of traditional models about many issues of interest in the economic policy domain. Workplace safety regulation provides a representative illustration. Traditional models hold that the optimal amount of workplace safety will be provided as long as labor markets are competitive and workers are well-informed about the potential health and safety risks they face. When these conditions are satisfied, workers can survey their options and choose the job with the wage-safety combination that best satisfies their preferences. Traditional models thus predict unequivocally that regulations mandating higher safety levels in competitive labor markets will reduce welfare.
Marxists and other critics of the market system have long countered that because labor markets are not effectively competitive, safety regulation is needed to prevent employers from exploiting workers. The puzzle for both camps is to explain why safety regulations are most binding in those labor markets that, on traditional measures, are the most highly competitive. Once we acknowledge that people care not only about absolute consumption, but about relative consumption as well, an explanation is at hand.
Most parents, for example, want to send their children to the best possible schools. Some workers might thus decide to accept a riskier job at a higher wage because that would enable them to meet the monthly payments on a house in a better school district. But other workers are in the same boat, and school quality is an inherently relative concept. So if other workers also traded safety for higher wages, the ultimate outcome would be merely to bid up the prices of houses in better school districts. Everyone would end up with less safety, yet no one would achieve the goal that made that trade seem attractive in the first place. As in a military arms race, when all parties build more arms, no one is any more secure than before. (See our discussion of positional arms races in Chapter 9, Games and Strategic Behavior.)
Workers confronting these incentives might well prefer an alternative state of the world in which all enjoyed greater safety, even at the expense of all having lower wages. But workers can control only their own job choices, not the choices of others. If any individual worker accepted a safer job while others didn’t, that worker would be forced to send her children to inferior schools. To get the outcome they desire, workers must act in unison. A mere nudge won’t do. Merely knowing that individual actions are self-canceling doesn’t eliminate the incentive to take those actions.
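The workers’ predicament has the structure of a two-player prisoner’s dilemma, which can be sketched as follows. The wage, safety-cost, and school-bonus numbers are illustrative assumptions chosen only to reproduce the ordering described in the text:

```python
# Illustrative two-worker positional game. Each worker chooses a safe job
# (wage 100) or a risky job (wage 110, with a safety cost worth 15 in
# utility terms). The better school slot, worth 20, goes to the higher
# earner; it is split if wages are equal.
SAFE_WAGE, RISKY_WAGE, RISK_COST, SCHOOL_BONUS = 100, 110, 15, 20

def utility(my_choice, other_choice):
    wage = RISKY_WAGE if my_choice == "risky" else SAFE_WAGE
    cost = RISK_COST if my_choice == "risky" else 0
    other_wage = RISKY_WAGE if other_choice == "risky" else SAFE_WAGE
    if wage > other_wage:
        school = SCHOOL_BONUS
    elif wage == other_wage:
        school = SCHOOL_BONUS / 2   # equal wages: equal chance at the slot
    else:
        school = 0
    return wage + school - cost

# Taking the risky job is a dominant strategy...
assert utility("risky", "safe") > utility("safe", "safe")      # 115 > 110
assert utility("risky", "risky") > utility("safe", "risky")    # 105 > 100
# ...yet both choosing risky leaves each worse off than both choosing safe:
assert utility("risky", "risky") < utility("safe", "safe")     # 105 < 110
```

Whatever the other worker does, each individual gains by trading safety for pay; yet when both do so, the school slots are allocated exactly as before and both end up with less safety and no positional gain.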
Analogous reasoning suggests that individual incentives are misleadingly high for virtually everything we might sell for money—our leisure time, future job security, and a host of other environmental amenities. The problem is not that we exchange these items for money, but that our incentives lead us to sell them too cheaply. In the case of workplace safety, the solution is not to ban risk, but to make it less attractive to individuals. As concerns the length of the workweek, the best solution is not strict limits on hours worked, but rather a change in incentives (such as overtime laws) that make long hours less attractive.
The same logic suggests a clear rationale for laws favoring monogamy over polygyny. In polygynous species in the animal kingdom—which is to say most vertebrate species—dominant males enjoy enormous reproductive payoffs. In one colony of seals, for example, 4 percent of the adult males sired almost 90 percent of all offspring.29 Not surprisingly, payoffs of this magnitude stimulate intensive competition among males for access to fertile females. This competition manifests itself variously in the form of physical combat and elaborate displays and courtship rituals. And although this competition does often lead to genetic innovations that serve the interests of the species as a whole, for the most part, it is profoundly wasteful.
Peacocks as a group, for example, would be better off if each individual’s tailfeathers were to shrink by a significant proportion. But these feathers are an important form of courtship display, and the gain to any individual bird from having shorter tailfeathers would be more than offset by the reproductive penalty.
In polygynous human societies, males who rank high in the wealth distribution tend to have disproportionately many wives. Here, too, the struggle for position is often costly. Laws favoring monogamy might thus be viewed as an arms-control agreement of sorts. By preventing the concentration of reproductive success in the hands of a small minority of men, they reduce the incentive to engage in the costliest aspects of the struggle to move forward in the relative wealth distribution.
In traditional economic models, people care about absolute consumption but not about relative consumption. In these models, the individual pursuit of maximum utility leads to an efficient allocation of resources. There is abundant evidence, however, that concerns about relative consumption also loom large for most people. Under this broader and more realistic view of preferences, there is no presumption that individual decisions yield socially optimal allocations. Because one person’s forward movement in any consumption hierarchy entails backward moves for others, income-generating activities are misleadingly attractive to individuals. The logic of the broader model thus suggests the possible attraction of policies that make the pursuit of additional wealth less attractive. Taxes on income, workplace safety regulation, and a variety of restrictions on commercial sexual activity fall into this category.
The Economic Naturalist 10.3
Why might prosperous people often overestimate the pain they will experience from paying higher taxes?
It’s totally natural to think that if you have less money, you’ll find it more difficult to buy the things you want. That’s because most of the events in life that cause us to have less money—think home fires, job losses, business reverses, divorces, serious illnesses, and the like—are ones that reduce our incomes while leaving the incomes of others unaffected. In such cases, we really do find it more difficult to buy what we want. Only half of the houses are in top-half school districts, for example, so we’re less likely to be able to bid successfully for those houses when we have less money relative to others who also want them.
But it’s a completely different story when everyone’s spendable income goes down at once, as when we all pay higher taxes. Because across-the-board declines in disposable income leave relative purchasing power unaffected, they don’t affect the outcomes of the bidding for houses in better school districts.
This is a garden-variety cognitive error. When we try to forecast how an income decline will affect us, we rely on the previous income declines that are most accessible in memory. And since most of the income declines we actually experience leave the incomes of others unchanged, we tend to forecast similar pain when thinking about the possibility of a tax increase. But because across-the-board declines in disposable income don’t affect relative purchasing power at all, prosperous families could actually pay higher taxes without having to make any painful sacrifices.
RECAP
CONCERNS ABOUT RELATIVE POSITION
Evidence suggests that well-being depends not only on absolute levels of current and future consumption, but also on the corresponding levels of relative consumption.
Adding a taste for relative consumption alters many of the predictions and prescriptions of traditional economic models. Traditional models hold, for example, that the optimal amount of workplace safety will be provided in competitive labor markets. Critics respond that because labor markets are not effectively competitive, safety regulation is needed to prevent employers from exploiting workers. The puzzle for both camps is why safety regulations are most binding in those labor markets that, on traditional measures, are the most highly competitive.
Models that incorporate concerns about relative position address this puzzle by noting that by accepting riskier jobs, parents can use their higher pay to bid for housing in better school districts. But because school quality is an inherently relative concept, when other parents follow suit, the collective effect of their efforts is merely to bid up the prices of houses in better school districts. As before, half of all children end up attending bottom-half schools.
Workers confronting these incentives might well prefer an alternative state of the world in which all enjoyed greater safety, even at the expense of all having lower wages. Analogous reasoning suggests that individual incentives are misleadingly high for virtually everything we might sell for money—our leisure time, future job security, and a host of other environmental amenities. The problem is not that we exchange these items for money, but rather that we tend to sell them too cheaply.
SOME POLICY APPLICATIONS
POLICIES THAT ADDRESS IMPULSE-CONTROL PROBLEMS
Here, we consider a variety of policies adopted by societies around the world that can be most easily understood if we view them as attempts to mitigate the consequences of impulse-control problems.
Crimes of Passion, Gambling, and Entrapment
Statutory law itself implicitly recognizes the force of impulse-control problems in some contexts. For instance, the law of homicide is relatively lenient in the case of a husband who has murdered his wife’s lover upon encountering them together in bed. Most countries impose at least some restrictions on gambling. Entrapment law implicitly acknowledges the injustice of tempting a normally law-abiding citizen to break the law, and in many jurisdictions, it is illegal to leave one’s keys in a parked car. Legal sanctions against addictive drugs are also common in countries around the world.
Regulating Marriage and Sexual Behavior
Impulse-control problems arise with particular force—and often with severe consequences—in the domain of sexual behavior, and here, too, an understanding of the relevant motivational forces has important implications for the law. Someone who eats too much will gain a few pounds, but sexual indiscretions can be a life-and-death matter. Most people know by now that “safe sex” is the most sensible option for sexually active persons. Yet many ignore this knowledge in the moment of decision. People who confront life-threatening and relationship-threatening impulse-control problems have an interest in the regulation of sexual behavior—their own as well as others’—that is fundamentally different from the one suggested by the dispassionate focus of the traditional economic models.
Traditional models leave little room even to acknowledge impulse-control problems in mature adults. For example, the single, fleeting reference to impulsivity in Richard Posner’s 442-page Sex and Reason comes in reference to teenagers, and even here he adds a parenthetical remark that downplays its significance:
Teenagers are on average more impulsive, hence on average less responsive to incentives, than adults are (although Chapter 10 presented evidence of rational behavior by teenagers toward abortion).30
Traditional models find it difficult to accommodate the notion that people might improve their lives by deliberately restricting their own options. Yet many of the laws and social norms that define acceptable behavior simply cannot be understood without reference to the fact that people confront powerful temptations to engage in sexual activity that is contrary to their long-run interests.
Consider, for example, the laws and norms regarding adultery. Spouses may strongly wish to remain in their marriages and realize that their prospects for doing so will be higher if they remain sexually faithful. And yet they may also recognize the potential lure of extramarital romance. Anticipating this conflict, most wedding ceremonies attempt to strengthen the partners’ resolve by having them make public vows of fidelity. And in most societies, these vows are backed up by legal and social sanctions against adultery.
Posner appears puzzled by such practices, which he regards as having arisen from arbitrary religious beliefs:
. . . our sexual attitudes and the customs and laws that grow out of them and perhaps reinforce or even alter them by some feedback process, are the product of moral attitudes rooted in religious beliefs rather than in the sort of functional considerations examined in the preceding chapter.31
But the functionalist perspective favored by proponents of traditional models can also be used to examine the content of morality. The adaptive rationality standard draws our attention to potential gains available in many situations from steering clear of options that might seem compellingly attractive. Systems of morality may be viewed simply as yet another means by which we attempt to achieve the desired commitments.
Regulating Savings
Higher savings rates support dramatically higher lifetime consumption. Suppose, for example, that two people start with the same income and that the first saves 20 percent and the second only 5 percent of each year’s earnings, including interest from savings. If the rate of return is 10 percent (less than the actual rate of return in the U.S. stock market during the past century), only 11 years will elapse before the high saver’s consumption overtakes the low saver’s. After 20 years, the high saver’s consumption is 16 percent greater; after 30 years, it is 35 percent greater.
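A short simulation makes this arithmetic concrete. The exact crossover year and percentage gaps depend on timing conventions (when interest is credited and how years are counted); the sketch below uses one natural convention and is broadly consistent with the figures in the text:

```python
def consumption_paths(save_rate, wage=100.0, r=0.10, years=31):
    """Each year the worker earns the wage plus interest on accumulated
    wealth, saves `save_rate` of that total, and consumes the rest."""
    wealth, path = 0.0, []
    for _ in range(years):
        income = wage + r * wealth
        path.append((1 - save_rate) * income)
        wealth += save_rate * income
    return path

high = consumption_paths(0.20)   # the 20 percent saver
low = consumption_paths(0.05)    # the 5 percent saver

# First year in which the high saver's consumption exceeds the low saver's:
crossover = next(t for t in range(len(high)) if high[t] > low[t])
print(crossover)   # -> 12: roughly a dozen years under this timing convention
```

The high saver consumes less at the outset (80 versus 95 per 100 of wage income here) but compounding interest on the larger stock of wealth eventually pushes the high-savings consumption path permanently above the low-savings path.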
The high-savings trajectory might seem the compelling choice, yet even the low-savings trajectory overstates the actual rate of savings in the United States. People with low savings often voice regret about not having saved more. But the traditional rational choice models insist that the difference between these two life histories is merely a reflection of differing preferences and, hence, not a proper subject for welfare comparisons or regulatory intervention. On this view, government programs aimed at stimulating personal savings rates are unambiguously welfare-reducing.
Once we acknowledge a tendency toward excessive discounting, however, we are more inclined to take the low saver’s expressions of regret at face value and to trust the intuition that the thrifty person really has done better. After all, the decision to save confronts the individual with a standard impulse-control problem. A given amount of consumption must be sacrificed today so that a larger amount of consumption may take place in the future. Those who discount excessively may know they should save for the future, yet be sorely tempted by the rewards of immediate consumption.
These observations suggest the utility of viewing thrift as a moral virtue since persons who hold that view will be better able to resist the temptation to consume. Explicit recognition of the impulse-control problem confronting savers also suggests that collective action to promote savings may be welfare-enhancing.
LAWS AND REGULATIONS MOTIVATED BY CONCERNS ABOUT RELATIVE POSITION32
As just noted, one rationale for policy measures to stimulate additional savings is that impulse-control problems often prevent people from adhering to rationally chosen savings plans. Our savings shortfall also stems from a second source, one that is much harder to address by unilateral action. The following thought experiment illustrates the basic problem: If you were society’s median earner, which of these two worlds would you prefer?
A. You save enough to support a comfortable standard of living in retirement, but your children attend a school whose students score in the 20th percentile on standardized tests in reading and math.
B. You save too little to support a comfortable standard of living in retirement, but your children attend a school whose students score in the 50th percentile on those tests.
Again, because the concept of a “good” school is inescapably relative, this thought experiment captures an essential element of the savings decision confronting most middle-income families. If others bid for houses in better school districts, failure to do likewise will often consign one’s children to inferior schools. Yet no matter how much each family spends, half of all children must attend schools in the bottom half. The choice posed by the thought experiment is one that most parents would prefer to avoid. But when forced to choose, most say they would pick the second option.
Context influences our evaluations of not just schools, but virtually every other good or service we consume. To look good in a job interview, for example, means simply to dress better than other candidates. It is the same with gifts: “In a poor country, a man proves to his wife that he loves her by giving her a rose but in a rich country, he must give a dozen roses.”33
The savings decision thus resembles the collective action problem inherent in a military arms race. Each nation knows that it would be better if all spent less on arms. Yet if others keep spending, it is simply too dangerous not to follow suit. Curtailing an arms race thus requires an enforceable agreement. Similarly, unless all families can bind themselves to save more, those who do so unilaterally will pay the price of having to send their children to inferior schools. Or they may be unable to dress advantageously for job interviews. Or they may be unable to buy gifts that meet expectations on important social occasions.
Once people perceive that some of their choices are rooted in impulse-control problems or in the kinds of cognitive errors that behavioral economists have identified, they often feel motivated to behave differently going forward. That’s why we call these deviations “departures from traditional rational choice models with regret.”
In practical legal and regulatory policy terms, however, the greatest shortcomings of traditional models stem from their inability to accommodate what can be described as departures from the predictions of those models that people do not seem to regret. Such departures typically occur because people are pursuing goals that are different from the ones assumed by traditional models.
The most compelling rationale for a law or regulation that constrains behavior is to prevent harm to others. Although cognitive errors are systematic and widespread, they generally harm the person who commits them rather than innocent bystanders. More important, any damage caused by cognitive errors is, at least in principle, subject to remedial action by the very persons who are prone to those errors. So while taking account of systematic cognitive errors is indisputably important for generating more accurate predictions of behavior in many domains, it is not clear that these errors meet the threshold for enacting laws that constrain individual behavior. And indeed, as Richard Thaler and Cass Sunstein argued in their widely cited 2008 book, it is often possible to help overcome cognitive errors by restructuring the environments in which people make important decisions.34 For example, the general tendency to eat a poorly balanced diet, they report, is ameliorated when cafeteria workers display fruits and salads in more prominent locations and relegate less healthful foods to less prominent shelves.
In the case of impulse-control problems, the scale of the associated welfare losses—including those associated with diet-related illnesses, insufficient savings, and drug and alcohol abuse—is considerably larger than in the case of cognitive errors. But here, too, many of the relevant injuries are self-inflicted and, hence, at least in principle, subject to remedial action by affected parties. Taking impulse-control problems into account is important for understanding patterns of human behavior, and in exceptional cases, may even justify the enactment of laws that restrain individual choice.
The welfare losses associated with departures from traditional models without regret are often dramatically larger than those associated with the departures that people do regret. And because the former losses are much less susceptible to individual remedial action, they present a far more compelling case for collective action. Consider, in particular, the welfare losses associated with the contest for relative position in the distributions of income and consumption. These losses run to hundreds of billions, perhaps even several trillion, of dollars each year in the U.S. economy alone.35 Stemming as they do from a multi-person prisoner’s dilemma, these losses cannot be curtailed by unilateral individual action. It is thus in this domain that broadening traditional economic models promises to help identify laws, regulations, and other collective steps with the potential to generate truly significant welfare enhancements.
RECAP
SOME POLICY APPLICATIONS
Societies around the world have adopted a variety of policies that are most easily understood as attempts to mitigate the consequences of impulse-control problems. These include the treatment of so-called crimes of passion; prohibitions against gambling, addictive drugs, and prostitution; entrapment laws; and sanctions against adultery. Programs that aim to stimulate savings can also be viewed as a response to impulse-control problems that make it difficult to execute rational savings plans.
Collective action problems rooted in concerns about relative position are also implicated in the laws, norms, and regulations we observe in multiple societies. Such concerns give rise to incentives that resemble those that generate military arms races. When outcomes depend on relative spending, individual incentives often favor mutually offsetting patterns of spending behavior. A common feature of workplace safety regulation, limitations on work hours, and programs to stimulate savings is that each reduces the incentive to engage in mutually offsetting patterns of spending.
SUMMARY
· Because it is costly to gather and analyze information, it often makes sense to employ rules of thumb when making complex assessments and decisions. But although the rules that most of us use appear to work reasonably well under a broad range of circumstances, they sometimes lead to error. (LO1)
· Behavioral economists have identified numerous systematic errors that are associated with the availability and representativeness heuristics. Anchoring and adjustment can also lead to biased assessments, as can failure to take account of regression to the mean or the tendency for extreme events to be followed by more nearly normal ones. It is also common for people to misinterpret contextual cues in their assessments and choices, as, for example, by reckoning the importance of a price reduction not by its absolute magnitude, but by its fraction of the original price. (LO1, LO2)
· Traditional economic models rule out the possibility that people might regret having chosen behaviors whose consequences were perfectly predictable, yet such expressions of regret appear genuine. Evidence suggests that people tend to discount future costs and benefits excessively, leading them to choose imminent, though inferior, rewards instead of substantially larger rewards that require waiting. (LO3)
· Although traditional models assume that the reduction in happiness from a very small decline in wealth should be approximately the same as the gain in happiness from a similar gain in wealth, evidence suggests that losses weigh much more heavily than gains. Called loss aversion, this asymmetry in valuations is approximately two to one for minor changes, but the ratio can be substantially larger when important changes are at stake. Because every change in policy generates winners and losers, loss aversion implies a large bias in favor of the status quo in public policy decisions. Evidence suggests that this bias often has harmful consequences since people’s initial estimates of how painful losses will be often fail to take into account their ability to adapt to altered circumstances. (LO4)
· Self-interest is not the only important human motive, as evidenced by the fact that most people vote in presidential elections whose outcomes their votes won’t influence. The present-aim model of rational choice broadens the narrow self-interest model by asserting that people often behave unselfishly because they get a warm glow from doing so. Methodologists object that the present-aim approach is not really a testable theory since it can “explain” virtually any bizarre behavior simply by assuming a suitably strong taste for it. (LO5)
· The adaptive rationality approach also permits a broader conception of human motive, but unlike the present-aim approach, permits new motives to be added only if they can plausibly be shown not to handicap people in their quest to survive in competitive environments. Certain unselfish motives qualify under this standard because they help people solve important commitment problems, such as the one-shot prisoner’s dilemma. (LO5)
· Additional evidence for the claim that narrow self-interest is not the only important human motive comes from experiments showing that people reject transactions that would have benefited them if they believe the terms of those transactions are unfair. (LO5)
· Traditional models assume that well-being depends only on absolute levels of current and future consumption. But evidence suggests that it also depends heavily on the corresponding levels of relative consumption. Adding concerns about relative position alters many of the predictions and prescriptions of traditional economic models. When well-being depends only on absolute levels of consumption and certain other conditions are met, traditional models hold that individual incentives produce socially optimal patterns of spending. But when reward depends on relative position, individual incentives often result in mutually offsetting, and hence partially wasteful spending patterns. (LO6)
· The findings of behavioral economists in recent decades have contributed to our understanding of laws and other institutions. Societies around the world, for example, have adopted a variety of policies that are most easily understood as attempts to mitigate the consequences of impulse-control problems. These include special treatment of so-called crimes of passion; prohibitions against gambling, addictive drugs, and prostitution; entrapment laws; sanctions against adultery; and programs to stimulate savings. (LO7)
· Many other widely adopted laws, norms, and regulations are parsimoniously interpreted as attempts to mitigate collective action problems caused by concerns about relative position. These include workplace safety regulation, limitations on work hours, and programs to stimulate savings, all of which reduce individual incentives to engage in mutually offsetting patterns of spending. (LO7)
KEY TERMS
adaptive rationality standard
anchoring and adjustment
availability heuristic
fungibility
homo economicus
judgmental and decision heuristics
loss aversion
present-aim standard of rationality
regression to the mean
representativeness heuristic
satisficing
status quo bias
ultimatum bargaining game
Weber-Fechner law
REVIEW QUESTIONS
1. How does the representativeness heuristic explain why people might think, mistakenly, that a randomly chosen shy person is more likely to be a librarian than a salesperson? (LO1)
2. Explain why thinking of costs or benefits in proportional terms might lead to suboptimal choices. (LO2)
3. Explain why traditional economic models find it difficult to explain why people would pay to attend weight-loss camps that restrict their daily calorie intake. (LO3)
4. How does loss aversion help explain why attempts to repeal the Affordable Care Act met such strong resistance? (LO4)
5. Cite two examples of behavior that appear to contradict the assumption that people are narrowly self-interested. (LO5)
PROBLEMS
1. Only two moving companies provide local delivery service in a small western city, United and North American. United operates 20 percent of the trucks, North American the remaining 80 percent. On a dark and rainy night, a pedestrian is run over and killed by a moving van. The lone witness to the incident testifies that the van was owned by United. An independent laboratory hired by the court finds that under dark and rainy conditions, the witness is able to identify the owner of a moving van with 90 percent accuracy. What is the probability that the van that struck the pedestrian was owned by United? (LO1)
2. Herb, a tennis player, has been struggling to develop a more consistent serve. He made the following remark to his partner during the second set of a recent match: “I feel like I’m making progress. I haven’t double-faulted once today.” He then served two double faults, which caused him to say, “Every time I say I haven’t double-faulted, I immediately start to.” Herb’s perception may have been influenced by (LO1)
a. the sunk cost effect.
b. regression to the mean.
c. the availability heuristic.
d. More than one of the above.
e. None of the above.
3. Studies have shown that in the New York City subways, crime rates fall in the years following increased police patrols. Does this pattern suggest that the increased patrols are the cause of the crime reductions? (LO1)
4. Dalgliesh the detective fancies himself a shrewd judge of human nature. In careful tests, it has been discovered that he is right 80 percent of the time when he says that a suspect is lying. Dalgliesh says that Jones is lying. The polygraph expert, who is right 100 percent of the time, says that 40 percent of the subjects interviewed by Dalgliesh are telling the truth. What is the probability that Jones is lying? (LO1)
5. Claiborne is a gourmet. He makes it a point never to visit a restaurant a second time unless he has been served a superb meal on his first visit. He is puzzled at how seldom the quality of his second meal is as high as the first. Should he be? (LO1)
6. Mary will drive across town to take advantage of a 40 percent off sale on a $40 blouse but will not do so to take advantage of a 10 percent off sale on a $1,000 stereo. Assuming that her alternative is to pay list price for both products at the department store next to her home, is her behavior rational? (LO2)
7. Tom has said he would be willing to drive across town in order to save $10 on the purchase price of a $20 clock radio. If Tom is rational, this implies that he (LO2)
a. believes that the opportunity cost of the trip is not more than $10.
b. should also drive across town to save $10 on a $500 television set.
c. should not drive across town to save $10 on a $500 television set.
d. should drive across town only if the savings on a $500 television set is at least $250.
e. More than one of the above are correct.
8. Hal is having difficulty choosing between two tennis racquets, A and B. As shown in the diagram, B has more power than A, but less control. According to the rational choice model, how will the availability of a third alternative—racquet C—influence Hal’s final decision? If Hal behaves like most ordinary decision makers in this situation, how will the addition of C to his choice set matter? (LO2)
9. In the fall, Crusoe puts 50 coconuts from his harvest into a cave just before a family of bears goes in to hibernate. As a result, he is unable to get the coconuts out before the bears emerge the following spring. Coconuts spoil at the same rate no matter where he stores them, and yet he continues this practice each year. Why might he do this? (LO3, LO5)
10. When students in a large class were surveyed about how much they would be willing to pay for a coffee mug with their university’s logo on it, their median willingness to pay was $5. At random, half of the students in this class were then given such a coffee mug, and each of the remaining students was given $5 in cash. Students who got mugs were then offered an opportunity to sell them to students who had not gotten one. According to standard economic models, how many mugs would be expected to change hands? How, if at all, would a behavioral economist’s prediction differ? (LO4)
11. Describe the advantages and disadvantages of electing a political leader who is known to favor harsh military reprisals against foreign aggression, even when such reprisals are highly injurious to our own national interests. (LO5)
12. Explain why rational voters whose happiness depends on relative consumption might favor laws that require them to save a certain portion of each year’s earnings for retirement. (LO6)
ANSWERS TO CONCEPT CHECKS
10.1 For every 100 taxis in a dark alley, 15 will be green, 85 blue. The witness will identify 0.8(15) = 12 of the green taxis as green, the remaining 3 green taxis as blue; he will identify 0.8(85) = 68 of the blue taxis as blue, the remaining 17 blue taxis as green. The probability that the taxi in question was green, given that the witness said it was, is thus equal to 12/(12 + 17) = 12/29 ≈ 0.414, and as this is less than half, the Green Taxi Company should not be held liable. (LO1)
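The arithmetic above is an application of Bayes’ rule, and it can be checked directly. The sketch below (the function name and structure are ours, not from the text) recomputes the posterior probability from the concept check’s numbers:

```python
# Concept Check 10.1 as a Bayes' rule calculation:
# 15 percent of taxis are green, and the witness is 80 percent accurate.
def posterior_green(prior_green=0.15, accuracy=0.80):
    """P(taxi was green | witness says it was green)."""
    prior_blue = 1 - prior_green
    # The witness says "green" either correctly (a green taxi identified
    # correctly) or mistakenly (a blue taxi misidentified 20 percent of the time).
    hit = accuracy * prior_green               # 0.8 * 0.15 = 0.12
    false_alarm = (1 - accuracy) * prior_blue  # 0.2 * 0.85 = 0.17
    return hit / (hit + false_alarm)

print(round(posterior_green(), 3))  # 12/29, about 0.414: less than one-half
```

Because the posterior stays below one-half, the witness’s testimony alone does not make it more likely than not that the taxi was green.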
10.2 Regression to the mean suggests that a month with an unusually high number of burglaries is likely to be followed by a month in which the number of burglaries is more nearly normal. So the increased patrols were not necessarily the cause of the observed reduction in burglaries. (LO1)
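Regression to the mean can also be seen in a small simulation. The numbers below are purely illustrative (a mean of 100 burglaries per month and a standard deviation of 15 are invented, not from the text): months selected because they were unusually bad tend to be followed by more typical months even though the underlying process never changes.

```python
import random

# Illustrative simulation: monthly burglary counts drawn independently
# around a stable mean (the mean of 100 and s.d. of 15 are invented).
random.seed(1)
months = [random.gauss(100, 15) for _ in range(10_000)]

# Pair each unusually bad month (two s.d. above the mean) with the next month.
pairs = [(months[i], months[i + 1])
         for i in range(len(months) - 1) if months[i] > 130]

avg_bad = sum(m for m, _ in pairs) / len(pairs)
avg_next = sum(n for _, n in pairs) / len(pairs)

# The months after the bad ones revert toward 100 with no intervention at all.
print(round(avg_bad, 1), round(avg_next, 1))
```

If patrols were added precisely after the worst months, the subsequent improvement would look like a policy effect even in this purely random series.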
1Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185 (1974), pp. 1124–1131.
2Richard Thaler, “Mental Accounting and Consumer Choice,” Marketing Science 4, no. 3 (Summer 1985), pp. 199–214.
3Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211 (1981), pp. 453–458.
4Daniel Kahneman and Amos Tversky, “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology 3, no. 3 (1972), pp. 430–454.
5Thomas Gilovich, How We Know What Isn’t So: The Fallibility of Human Reason In Everyday Life (New York: Free Press, 1991).
6Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185 (1974), pp. 1124–1131.
7For more examples along similar lines, see Richard Thaler, “Toward a Positive Theory of Consumer Choice,” Journal of Economic Behavior and Organization 1 (1980), pp. 39–60.
8Amos Tversky and Itamar Simonson, “Context-Dependent Preferences,” Management Science 39, no. 10 (October 1993), pp. 1179–1189.
9This is known as “exponential discounting.”
10According to psychologists, people tend to discount future costs and benefits hyperbolically, not exponentially. Two of the most important earlier papers on this issue are Shin-Ho Chung and Richard Herrnstein, “Choice and Delay of Reinforcement,” Journal of the Experimental Analysis of Behavior 10 (1967), pp. 67–74; and George Ainslie, “Specious Reward: A Behavioral Theory of Impulsiveness and Impulse Control,” Psychological Bulletin 82 (July 1975), pp. 463–496. See also Jon Elster, “Don’t Burn Your Bridge Before You Come to It: Seven Types of Ambiguity in Precommitment,” Texas Law Review Symposium on Precommitment Theory, Bioethics, and Constitutional Law, September 2002; Thomas Schelling, “The Intimate Contest for Self-Command,” The Public Interest, Summer 1980, pp. 94–118; R. Thaler and H. Shefrin, “An Economic Theory of Self-Control,” Journal of Political Economy 89 (1981), pp. 392–405; and Gordon Winston, “Addiction and Backsliding: A Theory of Compulsive Consumption,” Journal of Economic Behavior and Organization 1 (1980), pp. 295–394. For a comprehensive review of work discussing the economic applications of the hyperbolic discounting model, see Shane Frederick, George Loewenstein, and Ted O’Donoghue, “Time Discounting and Time Preference: A Critical Review,” Journal of Economic Literature 40, no. 2 (June 2002), pp. 351–401.
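The contrast drawn in this footnote can be made concrete with the standard functional forms: an exponential discounter values a reward x delayed t periods at x·δ^t, while the simplest hyperbolic form values it at x/(1 + kt). The sketch below uses invented parameter values (δ = 0.9, k = 1, and the dollar amounts are illustrative) to show the signature difference: the hyperbolic chooser’s ranking of a smaller-sooner versus a larger-later reward reverses as the delay shrinks, while the exponential chooser’s ranking never does.

```python
def exp_value(reward, delay, delta=0.9):
    # Exponential discounting: each period shrinks value by the same factor.
    return reward * delta ** delay

def hyp_value(reward, delay, k=1.0):
    # Simple hyperbolic form: value = reward / (1 + k * delay).
    return reward / (1 + k * delay)

# Choice between $100 at delay t and $110 at delay t + 1 (invented amounts).
for t in (0, 10):
    prefers_sooner_exp = exp_value(100, t) > exp_value(110, t + 1)
    prefers_sooner_hyp = hyp_value(100, t) > hyp_value(110, t + 1)
    print(t, prefers_sooner_exp, prefers_sooner_hyp)
# The exponential chooser ranks the pair the same way at every delay;
# the hyperbolic chooser switches to the smaller reward once it is imminent.
```

This reversal is what gives hyperbolic discounting its power to explain impulse-control problems: plans made when both rewards are distant are abandoned when the smaller one draws near.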
11Daniel Kahneman, Jack L. Knetsch, and Richard H. Thaler, “Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias,” The Journal of Economic Perspectives 5, no. 1 (Winter 1991), pp. 193–206.
12Michael Lewis, “The Economist Who Realized How Crazy We Are,” Bloomberg View, May 29, 2015.
13Philip Brinkman, Dan Coates, and Ronnie Janoff-Bulman, “Lottery Winners and Accident Victims: Is Happiness Relative?” Journal of Personality and Social Psychology 36, no. 8 (1978), pp. 917–927.
14Brigitte C. Madrian and Dennis F. Shea, “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior,” Quarterly Journal of Economics 116, no. 4 (2001), pp. 1149–1187.
15Giving USA, “Total Charitable Donations Rise to New High of $390.05 Billion,” https://givingusa.org/giving-usa-2017-total-charitable-donations-rise-to-new-high-of-390-05-billion/.
16O. Bodvarsson and W. Gibson, “Gratuities and Customer Appraisal of Service: Evidence from Minnesota Restaurants,” Journal of Socio-Economics 23, no. 3 (1994), pp. 287–302.
17Derek Parfit, Reasons and Persons (Oxford: The Clarendon Press, 1984).
18The discussion that follows draws heavily on Robert H. Frank, “If Homo Economicus Could Choose His Own Utility Function, Would He Want One with a Conscience?” American Economic Review 77 (September 1987), pp. 593–604; and Robert H. Frank, Passions Within Reason (New York: W. W. Norton, 1988).
19Robert H. Frank, Thomas Gilovich, and Dennis Regan, “The Evolution of One-Shot Cooperation,” Ethology and Sociobiology 14 (July 1993), pp. 247–256.
20Robert H. Frank, Passions Within Reason (New York: W. W. Norton, 1988).
21In tit-for-tat, each player cooperates on the first move, and on each successive move does whatever her partner did on the previous move. See Anatol Rapoport and A. Chammah, Prisoner’s Dilemma (Ann Arbor: University of Michigan Press, 1965); and Robert Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984).
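The tit-for-tat strategy described in this footnote is simple enough to state in a few lines of code. The sketch below is a minimal illustration; the payoff numbers (temptation 5, mutual cooperation 3, mutual defection 1, sucker’s payoff 0) are conventional prisoner’s-dilemma examples, not from the text.

```python
# Conventional prisoner's-dilemma payoffs; "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_moves):
    # Cooperate on the first move; thereafter mirror the partner's last move.
    return "C" if not partner_moves else partner_moves[-1]

def always_defect(partner_moves):
    return "D"

def play(strat_a, strat_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each strategy sees only the partner's history
        b = strat_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited only once: (9, 14)
```

Against itself, tit-for-tat sustains cooperation every round; against a persistent defector, it is exploited only on the first move and then matches defection thereafter.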
22Werner Guth, Rolf Schmittberger, and Bernd Schwarze, “An Experimental Analysis of Ultimatum Bargaining,” Journal of Economic Behavior and Organization 3 (1982), pp. 367–388.
23See Table 1 in Daniel Kahneman, Jack Knetsch, and Richard Thaler, “Fairness as a Constraint on Profit Seeking: Entitlements in the Market,” American Economic Review 76 (September 1986), pp. 728–741.
24For evidence in favor of this interpretation, see especially Lawrence M. Kahn and J. Keith Murnighan, “A General Experiment on Bargaining in Demand Games with Outside Options,” American Economic Review 83 (December 1993), pp. 1260–1280.
25For an engaging survey of relevant evidence, see chapters 14 and 15 of Richard Thaler, Misbehaving (New York: W.W. Norton, 2015).
26Robert H. Frank, Thomas Gilovich, and Dennis Regan, “Does Studying Economics Inhibit Cooperation?” Journal of Economic Perspectives 7 (Spring 1993), pp. 159–171. We found, for example, that the difference in defection rates grew larger the longer a student had studied economics. Questionnaire responses also indicated that freshmen in their first microeconomics course were more likely at the end of the term to expect opportunistic behavior from others than they were at the beginning.
27For more information on the issue of whether exposure to the self-interest model inhibits cooperation, see Gerald Marwell and Ruth Ames, “Economists Free Ride, Does Anyone Else?” Journal of Public Economics 15 (1981), pp. 295–310; John Carter and Michael Irons, “Are Economists Different, and If So, Why?” Journal of Economic Perspectives 5 (Spring 1991); Antony Yezer, Robert Goldfarb, and Paul Poppen, “Does Studying Economics Discourage Cooperation? Watch What We Do, Not What We Say,” Journal of Economic Perspectives, Spring 1996; and Robert H. Frank, Thomas Gilovich, and Dennis Regan, “Do Economists Make Bad Citizens?” Journal of Economic Perspectives, Spring 1996.
28The points in this section are developed in greater detail in Robert H. Frank, “The Demand for Unobservable and Other Nonpositional Goods,” American Economic Review 75 (March 1985), pp. 101–116; and Robert H. Frank, Choosing the Right Pond (New York: Oxford University Press, 1985).
29Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1976).
30Richard A. Posner, Sex and Reason (Cambridge, MA: Harvard University Press, 1992), p. 331.
31Richard A. Posner, Sex and Reason (Cambridge, MA: Harvard University Press, 1992), p. 236.
32The points in this section are developed in greater detail in Robert H. Frank, “The Demand for Unobservable and Other Nonpositional Goods,” American Economic Review 75 (March 1985), pp. 101–116; and Robert H. Frank, Choosing the Right Pond (New York: Oxford University Press, 1985).
33Richard Layard, “Human Satisfactions and Public Policy,” Economic Journal, vol. 90, 1980: 737–750, p. 741.
34Richard Thaler and Cass Sunstein, Nudge (New Haven, CT: Yale University Press, 2008).
35For a discussion of these losses, see Robert H. Frank, Luxury Fever (New York: The Free Press, 1999).