
PART FOUR

PRACTICAL LESSONS

CHAPTER 14

Why Do Some Societies Make Disastrous Decisions?

Road map for success ■ Failure to anticipate ■ Failure to perceive ■ Rational bad behavior ■ Disastrous values ■ Other irrational failures ■ Unsuccessful solutions ■ Signs of hope ■

Education is a process involving two sets of participants who supposedly play different roles: teachers who impart knowledge to students, and students who absorb knowledge from teachers. In fact, as every open-minded teacher discovers, education is also about students imparting knowledge to their teachers, by challenging the teachers’ assumptions and by asking questions that the teachers hadn’t previously thought of. I recently repeated that discovery when I taught a course, on how societies cope with environmental problems, to highly motivated undergraduates at my institution, the University of California at Los Angeles (UCLA). In effect, the course was a trial run-through of this book’s material, at a time when I had drafted some chapters, was planning other chapters, and could still make extensive changes.

My first lecture after the class’s introductory meeting was on the collapse of Easter Island society, the subject of this book’s Chapter 2. In the class discussion after I had finished my presentation, the apparently simple question that most puzzled my students was one whose actual complexity hadn’t sunk into me before: how on earth could a society make such an obviously disastrous decision as to cut down all the trees on which it depended? One of the students asked what I thought the islander who cut down the last palm tree said as he was doing it. For every other society that I treated in subsequent lectures, my students raised essentially the same question. They also asked the related question: how often did people wreak ecological damage intentionally, or at least while aware of the likely consequences? How often did people instead do it without meaning to, or out of ignorance? My students wondered whether—if there are still people left alive a hundred years from now—those people of the next century will be as astonished about our blindness today as we are about the blindness of the Easter Islanders.

This question of why societies end up destroying themselves through disastrous decisions astonishes not only my UCLA undergraduates but also professional historians and archaeologists. For example, perhaps the most cited book on societal collapses is The Collapse of Complex Societies, by the archaeologist Joseph Tainter. In assessing competing explanations for ancient collapses, Tainter remained skeptical of even the possibility that they might have been due to depletion of environmental resources, because that outcome seemed a priori so unlikely to him. Here is his reasoning: “One supposition of this view must be that these societies sit by and watch the encroaching weakness without taking corrective actions. Here is a major difficulty. Complex societies are characterized by centralized decision-making, high information flow, great coordination of parts, formal channels of command, and pooling of resources. Much of this structure seems to have the capability, if not the designed purpose, of countering fluctuations and deficiencies in productivity. With their administrative structure, and capacity to allocate both labor and resources, dealing with adverse environmental conditions may be one of the things that complex societies do best (see, for example, Isbell [1978]). It is curious that they would collapse when faced with precisely those conditions they are equipped to circumvent.... As it becomes apparent to the members or administrators of a complex society that a resource base is deteriorating, it seems most reasonable to assume that some rational steps are taken toward a resolution. The alternative assumption—of idleness in the face of disaster—requires a leap of faith at which we may rightly hesitate.”

That is, Tainter’s reasoning suggested to him that complex societies are not likely to allow themselves to collapse through failure to manage their environmental resources. Yet it is clear from all the cases discussed in this book that precisely such a failure has happened repeatedly. How did so many societies make such bad mistakes?

My UCLA undergraduates, and Joseph Tainter as well, have identified a baffling phenomenon: namely, failures of group decision-making on the part of whole societies or other groups. That problem is of course related to the problem of failures of individual decision-making. Individuals, too, make bad decisions: they enter bad marriages, they make bad investments and career choices, their businesses fail, and so on. But some additional factors enter into failures of group decision-making, such as conflicts of interest among members of the group, and group dynamics. This is obviously a complex subject to which there would not be a single answer fitting all situations.

What I’m going to propose instead is a road map of factors contributing to failures of group decision-making. I’ll divide the factors into a fuzzily delineated sequence of four categories. First of all, a group may fail to anticipate a problem before the problem actually arrives. Second, when the problem does arrive, the group may fail to perceive it. Then, after they perceive it, they may fail even to try to solve it. Finally, they may try to solve it but may not succeed. While all this discussion of reasons for failure and societal collapses may seem depressing, the flip side is a heartening subject: namely, successful decision-making. Perhaps if we understood the reasons why groups often make bad decisions, we could use that knowledge as a checklist to guide groups to make good decisions.

The first stop on my road map is that groups may do disastrous things because they failed to anticipate a problem before it arrived, for any of several reasons. One is that they may have had no prior experience of such problems, and so may not have been sensitized to the possibility.

A prime example is the mess that British colonists created for themselves when they introduced foxes and rabbits from Britain into Australia in the 1800s. Today these rate as two of the most disastrous examples of impacts of alien species on an environment to which they were not native (see Chapter 13 for details). These introductions are all the more tragic because they were carried out intentionally at much effort, rather than resulting inadvertently from tiny seeds overlooked in transported hay, as in so many cases of establishment of noxious weeds. Foxes have proceeded to prey on and exterminate many species of native Australian mammals without evolutionary experience of foxes, while rabbits consume much of the plant fodder intended for sheep and cattle, outcompete native herbivorous mammals, and undermine the ground by their burrows.

With the gift of hindsight, we now view it as incredibly stupid that colonists would intentionally release into Australia two alien mammals that have caused billions of dollars in damages and expenditures to control them. We recognize today, from many other such examples, that introductions often prove disastrous in unexpected ways. That’s why, when you go to Australia or the U.S. as a visitor or returning resident, one of the first questions you are now asked by immigration officers is whether you are carrying any plants, seeds, or animals—to reduce the risk of their escaping and becoming established. From abundant prior experience we have now learned (often but not always) to anticipate at least the potential dangers of introducing species. But it’s still difficult even for professional ecologists to predict which introductions will actually become established, which successfully established introductions will prove disastrous, and why the same species establishes itself at certain sites of introduction and not at others. Hence we really shouldn’t be surprised that 19th century Australians, lacking the 20th century’s experience of disastrous introductions, failed to anticipate the effects of rabbits and foxes.

In this book we have encountered other examples of societies understandably failing to anticipate a problem of which they lacked prior experience. In investing heavily in walrus hunting in order to export walrus ivory to Europe, the Greenland Norse could hardly have anticipated that the Crusades would eliminate the market for walrus ivory by reopening Europe’s access to Asian and African elephant ivory, or that increasing sea ice would impede ship traffic to Europe. Again, not being soil scientists, the Maya at Copán could not foresee that deforestation of the hill slopes would trigger soil erosion from the slopes into the valley bottoms.

Even prior experience is not a guarantee that a society will anticipate a problem, if the experience happened so long ago as to have been forgotten. That’s especially a problem for non-literate societies, which have less capacity than literate societies to preserve detailed memories of events long in the past, because of the limitations of oral transmission of information compared to writing. For instance, we saw in Chapter 4 that Chaco Canyon Anasazi society survived several droughts before succumbing to a big drought in the 12th century A.D. But the earlier droughts had occurred long before the birth of any Anasazi affected by the big drought, which would thus have been unanticipated because the Anasazi lacked writing. Similarly, the Classic Lowland Maya succumbed to a drought in the 9th century, despite their area having been affected by drought in the 3rd century (Chapter 5). In that case, although the Maya did have writing, it recorded kings’ deeds and astronomical events rather than weather reports, so that the drought of the 3rd century did not help the Maya anticipate the drought of the 9th century.

In modern literate societies whose writing does discuss subjects besides kings and planets, that doesn’t necessarily mean that we draw on prior experience committed to writing. We, too, tend to forget things. For a year or two after the gas shortages of the 1973 Gulf oil crisis, we Americans shied away from gas-guzzling cars, but then we forgot that experience and are now embracing SUVs, despite volumes of print spilled over the 1973 events. When the city of Tucson in Arizona went through a severe drought in the 1950s, its alarmed citizens swore that they would manage their water better, but soon returned to their water-guzzling ways of building golf courses and watering their gardens.

Another reason why a society may fail to anticipate a problem involves reasoning by false analogy. When we are in an unfamiliar situation, we fall back on drawing analogies with old familiar situations. That’s a good way to proceed if the old and new situations are truly analogous, but it can be dangerous if they are only superficially similar. For instance, Vikings who immigrated to Iceland beginning around the year A.D. 870 arrived from Norway and Britain, which have heavy clay soils ground up by glaciers. Even if the vegetation covering those soils is cleared, the soils themselves are too heavy to be blown away. When the Viking colonists encountered in Iceland many of the same tree species already familiar to them from Norway and Britain, they were deceived by the apparent similarity of the landscape (Chapter 6). Unfortunately, Iceland’s soils arose not through glacial grinding but through winds carrying light ash blown out in volcanic eruptions. Once the Vikings had cleared Iceland’s forests to create pastures for their livestock, the light soil became exposed for the wind to blow out again, and much of Iceland’s topsoil soon eroded away.

A tragic and famous modern example of reasoning by false analogy involves French military preparations for World War II. After the horrible bloodbath of World War I, France recognized its vital need to protect itself against the possibility of another German invasion. Unfortunately, the French army staff assumed that the next war would be fought similarly to World War I, in which the Western Front between France and Germany had remained locked in static trench warfare for four years. Defensive infantry forces manning elaborate fortified trenches had usually been able to repel infantry attacks, while offensive forces had deployed the newly invented tanks only individually and just in support of attacking infantry. Hence France constructed an even more elaborate and expensive system of fortifications, the Maginot Line, to guard its eastern frontier against Germany. But the German army staff, having been defeated in World War I, recognized the need for a different strategy. It used tanks rather than infantry to spearhead its attacks, massed the tanks into separate armored divisions, bypassed the Maginot Line through forested terrain previously considered unsuitable for tanks, and thereby defeated France within a mere six weeks. In reasoning by false analogy after World War I, French generals made a common mistake: generals often plan for a coming war as if it will be like the previous war, especially if that previous war was one in which their side was victorious.

The second stop on my road map, after a society has or hasn’t anticipated a problem before it arrives, involves its perceiving or failing to perceive a problem that has actually arrived. There are at least three reasons for such failures, all of them common in the business world and in academia.

First, the origins of some problems are literally imperceptible. For example, the nutrients responsible for soil fertility are invisible to the eye, and only in modern times did they become measurable by chemical analysis. In Australia, Mangareva, parts of the U.S. Southwest, and many other locations, most of the nutrients had already been leached out of the soil by rain before human settlement. When people arrived and began growing crops, those crops quickly exhausted the remaining nutrients, with the result that agriculture failed. Yet such nutrient-poor soils often bear lush-appearing vegetation; it’s just that most of the nutrients in the ecosystem are contained in the vegetation rather than in the soil, and are removed if one cuts down the vegetation. There was no way for the first colonists of Australia and Mangareva to perceive that problem of soil nutrient exhaustion—nor for farmers in areas with salt deep in the ground (like eastern Montana and parts of Australia and Mesopotamia) to perceive incipient salinization—nor for miners of sulfide ores to perceive the toxic copper and acid dissolved in mine runoff water.

Another frequent reason for failure to perceive a problem after it has arrived is distant managers, a potential issue in any large society or business. For example, the largest private landowner and timber company in Montana today is based not within that state but 400 miles away in Seattle, Washington. Not being on the scene, company executives may not realize that they have a big weed problem on their forest properties. Well-run companies avoid such surprises by periodically sending managers “into the field” to observe what is actually going on, while a tall friend of mine who was a college president regularly practiced with his school’s undergraduates on their basketball courts in order to keep abreast of student thinking. The opposite of failure due to distant managers is success due to on-the-spot managers. Part of the reason why Tikopians on their tiny island, and New Guinea highlanders in their valleys, have successfully managed their resources for more than a thousand years is that everyone on the island or in the valley is familiar with the entire territory on which their society depends.

Perhaps the commonest circumstance under which societies fail to perceive a problem is when it takes the form of a slow trend concealed by wide up-and-down fluctuations. The prime example in modern times is global warming. We now realize that temperatures around the world have been slowly rising in recent decades, due in large part to atmospheric changes caused by humans. However, it is not the case that the climate each year has been exactly 0.01 degree warmer than in the previous year. Instead, as we all know, climate fluctuates up and down erratically from year to year: three degrees warmer in one summer than in the previous one, then two degrees warmer the next summer, down four degrees the following summer, down another degree the next one, then up five degrees, etc. With such large and unpredictable fluctuations, it has taken a long time to discern the average upwards trend of 0.01 degree per year within that noisy signal. That’s why it was only a few years ago that most professional climatologists previously skeptical of the reality of global warming became convinced. As of the time that I write these lines, President Bush of the U.S. is still not convinced of its reality, and he thinks that we need more research. The medieval Greenlanders had similar difficulties in recognizing that their climate was gradually becoming colder, and the Maya and Anasazi had trouble discerning that theirs was becoming drier.
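
As a purely illustrative aside, not drawn from the events described above, the difficulty can be made concrete with a small simulation. The short Python sketch below uses hypothetical numbers (a 0.01-degree-per-year trend buried in year-to-year swings of about 2.5 degrees) to show that even a record many decades long may leave the direction of the trend uncertain.

import random

# Hypothetical illustration: a slow 0.01-degree-per-year warming trend hidden
# inside erratic year-to-year swings of about 2.5 degrees. How long a record
# is needed before the direction of the trend can even be told from the noise?
random.seed(0)

TREND = 0.01       # assumed underlying warming, degrees per year
NOISE_STD = 2.5    # assumed size of the erratic year-to-year fluctuations

def simulate(years):
    """One simulated temperature record: slow trend plus large random swings."""
    return [TREND * yr + random.gauss(0, NOISE_STD) for yr in range(years)]

def fitted_slope(series):
    """Ordinary least-squares slope of the series against time (degrees/year)."""
    n = len(series)
    xbar, ybar = (n - 1) / 2, sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

for years in (10, 30, 100):
    slopes = [fitted_slope(simulate(years)) for _ in range(2000)]
    share_warming = sum(s > 0 for s in slopes) / len(slopes)
    print(f"{years:3d}-year record: the fitted trend even comes out as warming "
          f"in {share_warming:.0%} of simulations")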

Politicians use the term “creeping normalcy” to refer to such slow trends concealed within noisy fluctuations. If the economy, schools, traffic congestion, or anything else is deteriorating only slowly, it’s difficult to recognize that each successive year is on the average slightly worse than the year before, so one’s baseline standard for what constitutes “normalcy” shifts gradually and imperceptibly. It may take a few decades of a long sequence of such slight year-to-year changes before people realize, with a jolt, that conditions used to be much better several decades ago, and that what is accepted as normalcy has crept downwards.

Another term related to creeping normalcy is “landscape amnesia”: forgetting how different the surrounding landscape looked 50 years ago, because the change from year to year has been so gradual. An example involves the melting of Montana’s glaciers and snowfields caused by global warming (Chapter 1). After spending the summers of 1953 and 1956 in Montana’s Big Hole Basin as a teenager, I did not return until 42 years later, in 1998, when I began visiting every year. Among my vivid teenaged memories of the Big Hole were the snow covering the distant mountaintops even in mid-summer, my resulting sense that a white band low in the sky encircled the basin, and my recollection of a weekend camping trip when two friends and I clambered up to that magical band of snow. Not having lived through the fluctuations and gradual dwindling of summer snow during the intervening 42 years, I was stunned and saddened on my return to the Big Hole in 1998 to find the band almost gone, and in 2001 and 2003 actually all melted off. When I asked my Montana resident friends about the change, they were less aware of it: they unconsciously compared each year’s band (or lack thereof) with the previous few years. Creeping normalcy or landscape amnesia made it harder for them than for me to remember what conditions had been like in the 1950s. Such experiences are a major reason why people may fail to notice a developing problem, until it is too late.

I suspect that landscape amnesia provided part of the answer to my UCLA students’ question, “What did the Easter Islander who cut down the last palm tree say as he was doing it?” We unconsciously imagine a sudden change: one year, the island still covered with a forest of tall palm trees being used to produce wine, fruit, and timber to transport and erect statues; the next year, just a single tree left, which an islander proceeds to fell in an act of incredibly self-damaging stupidity. Much more likely, though, the changes in forest cover from year to year would have been almost undetectable: yes, this year we cut down a few trees over there, but saplings are starting to grow back again here on this abandoned garden site. Only the oldest islanders, thinking back to their childhoods decades earlier, could have recognized a difference. Their children could no more have comprehended their parents’ tales of a tall forest than my 17-year-old sons today can comprehend my wife’s and my tales of what Los Angeles used to be like 40 years ago. Gradually, Easter Island’s trees became fewer, smaller, and less important. At the time that the last fruit-bearing adult palm tree was cut, the species had long ago ceased to be of any economic significance. That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. No one would have noticed the felling of the last little palm sapling. By then, the memory of the valuable palm forest of centuries earlier had succumbed to landscape amnesia. Conversely, the speed with which deforestation spread over early Tokugawa Japan made it easier for its shoguns to recognize the landscape changes and the need for preemptive action.

The third stop on the road map of failure is the most frequent, the most surprising, and requires the longest discussion because it assumes such a wide variety of forms. Contrary to what Joseph Tainter and almost anyone else would have expected, it turns out that societies often fail even to attempt to solve a problem once it has been perceived.

Many of the reasons for such failure fall under the heading of what economists and other social scientists term “rational behavior,” arising from clashes of interest between people. That is, some people may reason correctly that they can advance their own interests by behavior harmful to other people. Scientists term such behavior “rational” precisely because it employs correct reasoning, even though it may be morally reprehensible. The perpetrators know that they will often get away with their bad behavior, especially if there is no law against it or if the law isn’t effectively enforced. They feel safe because the perpetrators are typically concentrated (few in number) and highly motivated by the prospect of reaping big, certain, and immediate profits, while the losses are spread over large numbers of individuals. That gives the losers little motivation to go to the hassle of fighting back, because each loser loses only a little and would receive only small, uncertain, distant profits even from successfully undoing the minority’s grab. Examples include so-called perverse subsidies: the large sums of money that governments pay to support industries that might be uneconomic without the subsidies, such as many fisheries, sugar-growing in the U.S., and cotton-growing in Australia (subsidized indirectly through the government’s bearing the cost of water for irrigation). The relatively few fishermen and growers lobby tenaciously for the subsidies that represent much of their income, while the losers (all the taxpayers) are less vocal because the subsidy is funded by just a small amount of money concealed in each citizen’s tax bill. Measures benefiting a small minority at the expense of a large majority are especially likely to arise in certain types of democracies that bestow “swing power” on some small groups: e.g., senators from small states in the U.S. Senate, or small religious parties often holding the balance of power in Israel to a degree scarcely possible under the Dutch parliamentary system.

A frequent type of rational bad behavior is “good for me, bad for you and for everybody else”—to put it bluntly, “selfish.” As a simple example, most Montana fishermen fish for trout. A few fishermen who prefer to fish for pike, a larger fish-eating fish not native to western Montana, surreptitiously and illegally introduced pike to some western Montana lakes and rivers, where they proceeded to destroy trout fishing by eating out the trout. That was good for the few pike fishermen and bad for the far greater number of trout fishermen.

An example producing more losers and higher dollar losses is that, until 1971, mining companies in Montana on closing down a mine just left it with its copper, arsenic, and acid leaking out into rivers, because the state of Montana had no law requiring companies to clean up after mine closure. In 1971 the state of Montana did pass such a law, but companies discovered that they could extract the valuable ore and then just declare bankruptcy before going to the expense of cleaning up. The result has been about $500,000,000 of cleanup costs to be borne by the citizens of Montana and the U.S. Mining company CEOs had correctly perceived that the law permitted them to save money for their companies, and to advance their own interests through bonuses and high salaries, by making messes and leaving the burden to society. Innumerable other examples of such behavior in the business world could be cited, but it is not as universal as some cynics suspect. In the next chapter we shall examine how that range of outcomes results from the imperative for businesses to make money to the extent that government regulations, laws, and public attitudes permit.

One particular form of clashes of interest has become well known under the name “tragedy of the commons,” in turn closely related to the conflicts termed “the prisoner’s dilemma” and “the logic of collective action.” Consider a situation in which many consumers are harvesting a communally owned resource, such as fishermen catching fish in an area of ocean, or herders grazing their sheep on a communal pasture. If everybody over-harvests the resource, it will become depleted by overfishing or overgrazing and thus decline or even disappear, and all of the consumers will suffer. It would therefore be in the common interests of all consumers to exercise restraint and not overharvest. But as long as there is no effective regulation of how much resource each consumer can harvest, then each consumer would be correct to reason, “If I don’t catch that fish or let my sheep graze that grass, some other fisherman or herder will anyway, so it makes no sense for me to refrain from overfishing or overharvesting.” The correct rational behavior is then to harvest before the next consumer can, even though the eventual result may be the destruction of the commons and thus harm for all consumers.
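
To make that incentive structure concrete, here is a minimal toy calculation, an illustrative sketch with hypothetical numbers rather than part of the original argument. Whatever the other herders choose, an individual herder comes out ahead by grazing more animals; yet when every herder follows that reasoning, each ends up worse off than if all had restrained.

# Toy payoff model of the tragedy of the commons (all numbers are hypothetical).
# A shared pasture yields less grass per animal as the total herd grows.

HERDERS = 10

def value_per_animal(total_animals):
    """Grass available per animal falls as the commons becomes crowded."""
    return max(0.0, 1.0 - total_animals / 110.0)

def payoff(my_animals, others_animals):
    """One herder's return from grazing my_animals alongside everyone else's."""
    return my_animals * value_per_animal(my_animals + others_animals)

restrain, overgraze = 5, 10

# Whatever the other nine herders do, grazing more animals pays better for me...
for others_each in (restrain, overgraze):
    others_total = others_each * (HERDERS - 1)
    print(f"others graze {others_each} each: restrain -> {payoff(restrain, others_total):.2f}, "
          f"overgraze -> {payoff(overgraze, others_total):.2f}")

# ...yet when every herder follows that reasoning, each ends up worse off
# than if all had restrained.
print(f"all restrain:  {payoff(restrain, restrain * (HERDERS - 1)):.2f} per herder")
print(f"all overgraze: {payoff(overgraze, overgraze * (HERDERS - 1)):.2f} per herder")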

In reality, while this logic has led to many commons resources becoming overharvested and destroyed, others have been preserved in the face of harvesting for hundreds or even thousands of years. Unhappy outcomes include the overexploitation and collapse of most major marine fisheries, and the extermination of much of the megafauna (large mammals, birds, and reptiles) on every oceanic island or continent settled by humans for the first time within the last 50,000 years. Happy outcomes include the maintenance of many local fisheries, forests, and water sources, such as the Montana trout fisheries and irrigation systems that I described in Chapter 1. Behind these happy outcomes lie three alternative arrangements that have evolved to preserve a commons resource while still permitting a sustainable harvest.

One obvious solution is for the government or some other outside force to step in, with or without the invitation of the consumers, and to enforce quotas, as the shogun and daimyo in Tokugawa Japan, Inca emperors in the Andes, and princes and wealthy landowners in 16th-century Germany did for logging. However, that is impractical in some situations (e.g., the open ocean) and involves excessive administrative and policing costs in other situations. A second solution is to privatize the resource, i.e., to divide it into individually owned tracts that each owner will be motivated to manage prudently in his/her own interests. That practice was applied to some village-owned forests in Tokugawa Japan. Again, though, some resources (such as migratory animals and fish) are impossible to subdivide, and the individual owners may find it even harder than a government’s coast guard or police to exclude intruders.

The remaining solution to the tragedy of the commons is for the consumers to recognize their common interests and to design, obey, and enforce prudent harvesting quotas themselves. That is likely to happen only if a whole series of conditions is met: the consumers form a homogeneous group; they have learned to trust and communicate with each other; they expect to share a common future and to pass on the resource to their heirs; they are capable of and permitted to organize and police themselves; and the boundaries of the resource and of its pool of consumers are well defined. A good example is the case, discussed in Chapter 1, of Montana water rights for irrigation. While the allocation of those rights has been written into law, nowadays the ranchers mostly obey the water commissioner whom they themselves elect, and they no longer take their disputes to court for resolution. Other such examples of homogeneous groups prudently managing resources that they expect to pass to their children are the Tikopia Islanders, New Guinea highlanders, members of Indian castes, and other groups discussed in Chapter 9. Those small groups, along with the Icelanders (Chapter 6) and the Tokugawa Japanese constituting larger groups, were further motivated to reach agreement by their effective isolation: it was obvious to the whole group that they would have to survive just on their resources for the foreseeable future. Such groups knew that they could not make the frequently heard “ISEP” excuse that is a recipe for mismanagement: “It’s not my problem, it’s someone else’s problem.”

Clashes of interest involving rational behavior are also prone to arise when the principal consumer has no long-term stake in preserving the resource but society as a whole does. For example, much commercial harvesting of tropical rainforests today is carried out by international logging companies, which typically take out short-term leases on land in one country, cut down the rainforest on all their leased land in that country, and then move on to the next country. The loggers have correctly perceived that, once they have paid for their lease, their interests are best served by cutting its forest as quickly as possible, reneging on any agreements to replant, and leaving. In that way, loggers destroyed most of the lowland forests of the Malay Peninsula, then of Borneo, then of the Solomon Islands and Sumatra, now of the Philippines, and coming up soon of New Guinea, the Amazon, and the Congo Basin. What is thus good for the loggers is bad for the local people, who lose their source of forest products and suffer consequences of soil erosion and stream sedimentation. It’s also bad for the host country as a whole, which loses some of its biodiversity and its foundations for sustainable forestry. The outcome of this clash of interests involving short-term leased land contrasts with a frequent outcome when the logging company owns the land, anticipates repeated harvests, and may find a long-term perspective to be in its interests (as well as in the interests of local people and the country). Chinese peasants in the 1920s recognized a similar contrast when they compared the disadvantages of being exploited by two types of warlords. It was hard to be exploited by a “stationary bandit,” i.e. a locally entrenched warlord, who would at least leave peasants with enough resources to generate more plunder for that warlord in future years. Worse was to be exploited by a “roving bandit,” a warlord who like a logging company with short-term leases would leave nothing for a region’s peasants and just move on to plunder another region’s peasants.

A further conflict of interest involving rational behavior arises when the interests of the decision-making elite in power clash with the interests of the rest of society. Especially if the elite can insulate themselves from the consequences of their actions, they are likely to do things that profit themselves, regardless of whether those actions hurt everybody else. Such clashes, flagrantly personified by the dictator Trujillo in the Dominican Republic and the governing elite in Haiti, are becoming increasingly frequent in the modern U.S., where rich people tend to live within their gated compounds (Plate 36) and to drink bottled water. For example, Enron’s executives correctly calculated that they could gain huge sums of money for themselves by looting the company coffers and thereby harming all the stockholders, and that they were likely to get away with their gamble.

Throughout recorded history, actions or inactions by self-absorbed kings, chiefs, and politicians have been a regular cause of societal collapses, including those of the Maya kings, Greenland Norse chiefs, and modern Rwandan politicians discussed in this book. Barbara Tuchman devoted her book The March of Folly to famous historical examples of disastrous decisions, ranging from the Trojans bringing the Trojan horse within their walls, and the Renaissance popes provoking the Protestant secession, to the German decision to adopt unrestricted submarine warfare in World War I (thereby triggering America’s declaration of war), and Japan’s Pearl Harbor attack that similarly triggered America’s declaration of war in 1941. As Tuchman put it succinctly, “Chief among the forces affecting political folly is lust for power, named by Tacitus as ‘the most flagrant of all passions.’ ” As a result of lust for power, Easter Island chiefs and Maya kings acted so as to accelerate deforestation rather than to prevent it: their status depended on their putting up bigger statues and monuments than their rivals. They were trapped in a competitive spiral, such that any chief or king who put up smaller statues or monuments to spare the forests would have been scorned and lost his job. That’s a regular problem with competitions for prestige, which are judged on a short time frame.

Conversely, failures to solve perceived problems because of conflicts of interest between the elite and the masses are much less likely in societies where the elite cannot insulate themselves from the consequences of their actions. We shall see in the final chapter that the high environmental awareness of the Dutch (including their politicians) goes back to the fact that much of the population—both the politicians and the masses—lives on land lying below sea level, where only dikes stand between them and drowning, so that foolish land planning by politicians would be at their own personal peril. Similarly, New Guinea highlands big-men live in the same type of huts as everyone else, scrounge for firewood and timber in the same places as everyone else, and were thereby highly motivated to solve their society’s need for sustainable forestry (Chapter 9).

All of these examples in the preceding several pages illustrate situations in which a society fails to try to solve perceived problems because the maintenance of the problem is good for some people. In contrast to that so-called rational behavior, other failures to attempt to solve perceived problems involve what social scientists consider “irrational behavior”: i.e., behavior that is harmful for everybody. Such irrational behavior often arises when each of us individually is torn by clashes of values: we may ignore a bad status quo because it is favored by some deeply held value to which we cling. “Persistence in error,” “wooden-headedness,” “refusal to draw inference from negative signs,” and “mental standstill or stagnation” are among the phrases that Barbara Tuchman applies to this common human trait. Psychologists use the term “sunk-cost effect” for a related trait: we feel reluctant to abandon a policy (or to sell a stock) in which we have already invested heavily.

Religious values tend to be especially deeply held and hence frequent causes of disastrous behavior. For example, much of the deforestation of Easter Island had a religious motivation: to obtain logs to transport and erect the giant stone statues that were the object of veneration. At the same time, but 9,000 miles away and in the opposite hemisphere, the Greenland Norse were pursuing their own religious values as Christians. Those values, their European identity, their conservative lifestyle in a harsh environment where most innovations would in fact fail, and their tightly communal and mutually supportive society allowed them to survive for centuries. But those admirable (and, for a long time, successful) traits also prevented them from making the drastic lifestyle changes and selective adoptions of Inuit technology that might have helped them survive for longer.

The modern world provides us with abundant secular examples of admirable values to which we cling under conditions where those values no longer make sense. Australians brought from Britain a tradition of raising sheep for wool, high land values, and an identification with Britain, and thereby accomplished the feat of building a First World democracy remote from any other (except New Zealand), but are now beginning to appreciate that those values also have downsides. In modern times a reason why Montanans have been so reluctant to solve their problems caused by mining, logging, and ranching is that those three industries used to be the pillars of the Montana economy, and that they became bound up with Montana’s pioneer spirit and identity. Montanans’ pioneer commitment to individual freedom and self-sufficiency has similarly made them reluctant to accept their new need for government planning and for curbing individual rights. Communist China’s determination not to repeat the errors of capitalism led it to scorn environmental concerns as just one more capitalist error, and thereby to saddle China with enormous environmental problems. Rwandans’ ideal of large families was appropriate in traditional times of high childhood mortality, but has led to a disastrous population explosion today. It appears to me that much of the rigid opposition to environmental concerns in the First World nowadays involves values acquired early in life and never again reexamined: “the maintenance intact by rulers and policy-makers of the ideas they started with,” to quote Barbara Tuchman once again.

It is painfully difficult to decide whether to abandon some of one’s core values when they seem to be becoming incompatible with survival. At what point do we as individuals prefer to die than to compromise and live? Millions of people in modern times have indeed faced the decision whether, to save their own life, they would be willing to betray friends or relatives, acquiesce in a vile dictatorship, live as virtual slaves, or flee their country. Nations and societies sometimes have to make similar decisions collectively.

All such decisions involve gambles, because one often can’t be certain that clinging to core values will be fatal, or (conversely) that abandoning them will ensure survival. In trying to carry on as Christian farmers, the Greenland Norse in effect were deciding that they were prepared to die as Christian farmers rather than live as Inuit; they lost that gamble. Among five small Eastern European countries faced with the overwhelming might of Russian armies, the Estonians and Latvians and Lithuanians surrendered their independence in 1939 without a fight, the Finns fought in 1939-1940 and preserved their independence, and Hungarians fought in 1956 and lost their independence. Who among us is to say which country was wiser, and who could have predicted in advance that only the Finns would win their gamble?

Perhaps a crux of success or failure as a society is to know which core values to hold on to, and which ones to discard and replace with new values, when times change. In the last 60 years the world’s most powerful countries have given up long-held cherished values previously central to their national image, while holding on to other values. Britain and France abandoned their centuries-old role as independently acting world powers; Japan abandoned its military tradition and armed forces; and Russia abandoned its long experiment with communism. The United States has retreated substantially (but hardly completely) from its former values of legalized racial discrimination, legalized homophobia, a subordinate role of women, and sexual repression. Australia is now reevaluating its status as a rural farming society with British identity. Societies and individuals that succeed may be those that have the courage to take those difficult decisions, and that have the luck to win their gambles. The world as a whole today faces similar decisions about its environmental problems that we shall consider in the final chapter.

Those are examples of how irrational behavior associated with clashes of values does or doesn’t prevent a society from trying to solve perceived problems. Common further irrational motives for failure to address problems include that the public may widely dislike those who first perceive and complain about the problem—such as Tasmania’s Green Party that first protested foxes’ introduction into Tasmania. The public may dismiss warnings because of previous warnings that proved to be false alarms, as illustrated by Aesop’s fable about the eventual fate of the shepherd boy who had repeatedly cried “Wolf!” and whose cries for help were then ignored when a wolf did appear. The public may shirk its responsibility by invoking ISEP (p. 430: “It’s someone else’s problem”).

Partly irrational failures to try to solve perceived problems often arise from clashes between short-term and long-term motives of the same individual. Rwandan and Haitian peasants, and billions of other people in the world today, are desperately poor and think only of food for the next day. Poor fishermen in tropical reef areas use dynamite and cyanide to kill coral reef fish (and incidentally to kill the reefs as well) in order to feed their children today, in the full knowledge that they are thereby destroying their future livelihood. Governments, too, regularly operate on a short-term focus: they feel overwhelmed by imminent disasters and pay attention only to problems that are on the verge of explosion. For example, a friend of mine who is closely connected to the current federal administration in Washington, D.C., told me that, when he visited Washington for the first time after the 2000 national elections, he found that our government’s new leaders had what he termed a “90-day focus”: they talked only about those problems with the potential to cause a disaster within the next 90 days. Economists rationally attempt to justify these irrational focuses on short-term profits by “discounting” future profits. That is, they argue that it may be better to harvest a resource today than to leave some of the resource intact for harvesting tomorrow, on the grounds that the profits from today’s harvest could be invested, and that the investment interest thereby accumulated between now and some alternative future harvest time would tend to make today’s harvest more valuable than the future harvest. In that case, the bad consequences are borne by the next generation, but that generation cannot vote or complain today.
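
As a purely illustrative sketch of that discounting argument, with hypothetical figures rather than anything drawn from the text, compare a profit taken today and invested at compound interest with a larger profit available only decades from now.

# Illustrative sketch of the economists' discounting argument (hypothetical numbers).
# Compare harvesting a resource today and investing the profit, versus waiting
# for a larger harvest decades from now.

def future_value(amount_today, interest_rate, years):
    """What today's profit grows to if invested at compound interest."""
    return amount_today * (1 + interest_rate) ** years

def present_value(amount_future, discount_rate, years):
    """What a future profit is worth in today's money at the given discount rate."""
    return amount_future / (1 + discount_rate) ** years

rate, years = 0.05, 30          # assumed 5% annual rate, 30-year horizon
harvest_today = 100.0           # profit from cutting the whole forest now
harvest_later = 300.0           # larger profit from a sustainable harvest in 30 years

print("today's harvest, invested for 30 years:", round(future_value(harvest_today, rate, years), 1))
print("future harvest valued in today's money:", round(present_value(harvest_later, rate, years), 1))
# At 5 percent, 100 invested grows to about 432, while 300 received in 30 years is
# worth only about 69 today, so a purely financial calculation favors harvesting now,
# and the losses fall on a generation that cannot vote or complain today.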

Some other possible reasons for irrational refusal to try to solve a perceived problem are more speculative. One is a well-recognized phenomenon in short-term decision-making termed “crowd psychology.” Individuals who find themselves members of a large coherent group or crowd, especially one that is emotionally excited, may become swept along to support the group’s decision, even though the same individuals might have rejected the decision if allowed to reflect on it alone at leisure. As the German dramatist Schiller wrote, “Anyone taken as an individual is tolerably sensible and reasonable—as a member of a crowd, he at once becomes a blockhead.” Historical examples of crowd psychology in operation include late medieval Europe’s enthusiasm for the Crusades, accelerating overinvestment in fancy tulips in Holland peaking between 1634 and 1636 (“Tulipomania”), periodic outbursts of witch-hunting like the Salem witch trials of 1692, and the crowds whipped up into frenzies by skillful Nazi propagandists in the 1930s.

A calmer small-scale analog of crowd psychology that may emerge in groups of decision-makers has been termed “groupthink” by Irving Janis. Especially when a small cohesive group (such as President Kennedy’s advisors during the Bay of Pigs crisis, or President Johnson’s advisors during the escalation of the Vietnam War) is trying to reach a decision under stressful circumstances, the stress and the need for mutual support and approval may lead to suppression of doubts and critical thinking, sharing of illusions, a premature consensus, and ultimately a disastrous decision. Both crowd psychology and groupthink may operate over periods of not just a few hours but also up to a few years: what remains uncertain is their contribution to disastrous decisions about environmental problems unfolding over the course of decades or centuries.

The final speculative reason that I shall mention for irrational failure to try to solve a perceived problem is psychological denial. This is a technical term with a precisely defined meaning in individual psychology, and it has been taken over into the pop culture. If something that you perceive arouses in you a painful emotion, you may subconsciously suppress or deny your perception in order to avoid the unbearable pain, even though the practical results of ignoring your perception may prove ultimately disastrous. The emotions most often responsible are terror, anxiety, and grief. Typical examples include blocking the memory of a frightening experience, or refusing to think about the likelihood that your husband, wife, child, or best friend is dying because the thought is so painfully sad.

For example, consider a narrow river valley below a high dam, such that if the dam burst, the resulting flood of water would drown people for a considerable distance downstream. When attitude pollsters ask people downstream of the dam how concerned they are about the dam’s bursting, it’s not surprising that fear of a dam burst is lowest far downstream, and increases among residents increasingly close to the dam. Surprisingly, though, after you get to just a few miles below the dam, where fear of the dam’s breaking is found to be highest, the concern then falls off to zero as you approach closer to the dam! That is, the people living immediately under the dam, the ones most certain to be drowned in a dam burst, profess unconcern. That’s because of psychological denial: the only way of preserving one’s sanity while looking up every day at the dam is to deny the possibility that it could burst. Although psychological denial is a phenomenon well established in individual psychology, it seems likely to apply to group psychology as well.

Finally, even after a society has anticipated, perceived, or tried to solve a problem, it may still fail for obvious possible reasons: the problem may be beyond our present capacities to solve, a solution may exist but be prohibitively expensive, or our efforts may be too little and too late. Some attempted solutions backfire and make the problem worse, such as the Cane Toad’s introduction into Australia to control insect pests, or forest fire suppression in the American West. Many past societies (such as medieval Iceland) lacked the detailed ecological knowledge that now permits us to cope better with the problems that they faced. Others of those problems continue to resist solution today.

For instance, please think back to Chapter 8 on the ultimate failure of the Greenland Norse to survive after four centuries. The cruel reality is that, for the last 5,000 years, Greenland’s cold climate and its limited, unpredictably variable resources have posed an insuperably difficult challenge to human efforts to establish a long-lasting sustainable economy. Four successive waves of Native American hunter-gatherers tried and ultimately failed before the Norse failed. The Inuit came closest to success by maintaining a self-sufficient lifestyle in Greenland for 700 years, but it was a hard life with frequent deaths from starvation. Modern Inuit are no longer willing to subsist traditionally with stone tools, dogsleds, and hand-held harpooning of whales from skin boats, without imported technology and food. Modern Greenland’s government has not yet developed a self-supporting economy independent of foreign aid. The government has experimented again with livestock as did the Norse, eventually gave up on cattle, and still subsidizes sheep farmers who cannot make a profit by themselves. All that history makes the ultimate failure of the Greenland Norse unsurprising. Similarly, the Anasazi ultimate “failure” in the U.S. Southwest has to be seen in the perspective of many other ultimately “failed” attempts to establish long-lasting farming societies in that environment so hostile for farming.

Among the most recalcitrant problems today are those posed by introduced pest species, which often prove impossible to eradicate or control once they have become established. For example, the state of Montana continues to spend over a hundred million dollars per year on combatting Leafy Spurge and other introduced weed species. That’s not because Montanans don’t try to eradicate them, but simply because the weeds are impossible to eradicate at present. Leafy Spurge has roots 20 feet deep, too long to pull up by hand, and specific weed-control chemicals cost up to $800 per gallon. Australia has tried fences, foxes, shooting, bulldozers, myxomatosis virus, and calicivirus in its ongoing efforts to control rabbits, which have survived all such efforts so far.

The problem of catastrophic forest fires in dry parts of the U.S. Intermontane West could probably be brought under control by management techniques to reduce the fuel load, such as by mechanically thinning out new growth in the understory and removing fallen dead timber. Unfortunately, carrying out that solution on a large scale is considered prohibitively expensive. The fate of Florida’s Dusky Seaside Sparrow similarly illustrates failure due to expense, as well as due to the usual penalty for procrastination (“too little, too late”). As the sparrow’s habitat dwindled, action was postponed because of arguments over whether its habitat really was becoming critically small. By the time the U.S. Fish and Wildlife Service agreed in the late 1980s to buy its remaining habitat at the high cost of $5,000,000, that habitat had become so degraded that its sparrows died out. An argument then raged over whether to breed the last sparrows in captivity to the closely related Scott’s Seaside Sparrow, and then reestablish purer Dusky Seaside Sparrows by back-crossing the resulting hybrids. By the time that permission was finally granted, those last Dusky captives had become infertile through old age. Both the habitat preservation effort and the captive breeding effort would have been cheaper and more likely to succeed if they had been begun earlier.

Thus, human societies and smaller groups may make disastrous decisions for a whole sequence of reasons: failure to anticipate a problem, failure to perceive it once it has arisen, failure to attempt to solve it after it has been perceived, and failure to succeed in attempts to solve it. This chapter began with my relating the incredulity of my students, and of Joseph Tainter, that societies could allow environmental problems to overwhelm them. Now, at the end of this chapter, we seem to have moved towards the opposite extreme: we have identified an abundance of reasons why societies might fail. For each of those reasons, each of us can draw on our own life experiences to think of groups known to us that failed at some task for that particular reason.

But it’s also obvious that societies don’t regularly fail to solve their problems: if they did, all of us would now be dead or else living again under the Stone Age conditions of 13,000 years ago. Instead, the cases of failure are sufficiently noteworthy to warrant writing this book about them—a book of finite length, about only certain societies, and not an encyclopedia of every society in history. In Chapter 9 we specifically discussed some examples drawn from the majority of societies that succeeded.

Why, then, do some societies succeed and others fail, in the various ways discussed in this chapter? Part of the reason, of course, involves differences among environments rather than among societies: some environments pose much more difficult problems than do others. For instance, cold isolated Greenland was more challenging than was southern Norway, whence many of Greenland’s colonists originated. Similarly, dry, isolated, high-latitude, low-elevation Easter Island was more challenging than was wet, less isolated, equatorial, high Tahiti where ancestors of the Easter Islanders may have lived at one stage. But that’s only half of the story. If I were to claim that such environmental differences were the sole reason behind different societal outcomes of success or failure, it would indeed be fair to charge me with “environmental determinism,” a view unpopular among social scientists. In fact, while environmental conditions certainly make it more difficult to support human societies in some environments than in others, that still leaves much scope for a society to save or doom itself by its own actions.

It’s a large subject why some groups (or individual leaders) followed one of the paths to failure discussed in this chapter, while others didn’t. For instance, why did the Inca Empire succeed in reafforesting its dry cool environment, while the Easter Islanders and Greenland Norse didn’t? The answer partly depends on idiosyncrasies of particular individuals and will defy prediction. But I still hope that better understanding of the potential causes of failure discussed in this chapter may help planners to become aware of those causes, and to avoid them.

A striking example of such understanding being put to good use is provided by the contrast between the deliberations over two consecutive crises involving Cuba and the U.S., by President Kennedy and his advisors. In early 1961 they fell into poor group decision-making practices that led to their disastrous decision to launch the Bay of Pigs invasion, which failed ignominiously, leading to the much more dangerous Cuban Missile Crisis. As Irving Janis pointed out in his book Groupthink, the Bay of Pigs deliberations exhibited numerous characteristics that tend to lead to bad decisions, such as a premature sense of ostensible unanimity, suppression of personal doubts and of expression of contrary views, and the group leader (Kennedy) guiding the discussion in such a way as to minimize disagreement. The subsequent Cuban Missile Crisis deliberations, again involving Kennedy and many of the same advisors, avoided those characteristics and instead proceeded along lines associated with productive decision-making, such as Kennedy ordering participants to think skeptically, allowing discussion to be freewheeling, having subgroups meet separately, and occasionally leaving the room to avoid his overly influencing the discussion himself.

Why did decision-making in these two Cuban crises unfold so differently? Much of the reason is that Kennedy himself thought hard after the 1961 Bay of Pigs fiasco, and he charged his advisors to think hard, about what had gone wrong with their decision-making. Based on that thinking, he purposely changed how he operated the advisory discussions in 1962.

In this book that has dwelt on Easter Island chiefs, Maya kings, modern Rwandan politicians, and other leaders too self-absorbed in their own pursuit of power to attend to their society’s underlying problems, it is worth preserving balance by reminding ourselves of other successful leaders besides Kennedy. To solve an explosive crisis, as Kennedy did so courageously, commands our admiration. Yet it calls for a leader with a different type of courage to anticipate a growing problem or just a potential one, and to take bold steps to solve it before it becomes an explosive crisis. Such leaders expose themselves to criticism or ridicule for acting before it becomes obvious to everyone that some action is necessary. But there have been many such courageous, insightful, strong leaders who deserve our admiration. They include the early Tokugawa shoguns, who curbed deforestation in Japan long before it reached the stage of Easter Island; Joaquín Balaguer, who (for whatever motives) strongly backed environmental safeguards on the eastern Dominican side of Hispaniola while his counterparts on the western Haitian side didn’t; the Tikopian chiefs who presided over the decision to exterminate their island’s destructive pigs, despite the high status of pigs in Melanesia; and China’s leaders who mandated family planning long before overpopulation in China could reach Rwandan levels. Those admirable leaders also include the German chancellor Konrad Adenauer and other Western European leaders, who decided after World War II to sacrifice separate national interests and to launch Europe’s integration in the European Economic Community, with a major motive being to minimize the risk of another such European war. We should admire not only those courageous leaders, but also those courageous peoples—the Finns, Hungarians, British, French, Japanese, Russians, Americans, Australians, and others—who decided which of their core values were worth fighting for, and which no longer made sense.

Those examples of courageous leaders and courageous peoples give me hope. They make me believe that this book on a seemingly pessimistic subject is really an optimistic book. By reflecting deeply on causes of past failures, we too, like President Kennedy in 1961 and 1962, may be able to mend our ways and increase our chances for future success (Plate 32).
