12


… FOR NOW

IN THE GRAVEYARD OF HISTORY

At the end of Chapter 3 we left Ebenezer Scrooge staring in horror at his own untended tombstone. Clutching the hand of the Ghost of Christmas Yet to Come, he cried out: “Are these the shadows of the things that Will be, or are they shadows of the things that May be, only?”

I suggested that we might well ask the same about Figure 12.1, which shows that if Eastern and Western social development keep on rising at the same speed as in the twentieth century, the East will regain the lead in 2103. But since the pace at which social development has been rising has actually been accelerating since the seventeenth century, Figure 12.1 is really a conservative estimate; the graph might be best interpreted as saying that 2103 is probably the latest point at which the Western age will end.
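The extrapolation behind Figure 12.1 is easy to reproduce. The sketch below uses round illustrative scores, which are assumptions of the right order rather than the index's exact values: roughly 170 and 900 points for the West in 1900 and 2000, and roughly 70 and 565 for the East. Holding each region's twentieth-century growth factor constant puts the crossing early in the twenty-second century, at several thousand points.

```python
import math

# Illustrative social development scores (assumed round values of the
# same order as the book's index; not exact figures).
west_1900, west_2000 = 170.0, 900.0
east_1900, east_2000 = 70.0, 565.0

# Each region's twentieth-century growth factor.
r_west = west_2000 / west_1900   # roughly 5.3x per century
r_east = east_2000 / east_1900   # roughly 8.1x per century

def score(base_2000, rate, year):
    """Extrapolate forward at a constant per-century growth factor."""
    return base_2000 * rate ** ((year - 2000) / 100.0)

# Solve west_2000 * r_west^t = east_2000 * r_east^t for t (in centuries).
t_cross = math.log(west_2000 / east_2000) / math.log(r_east / r_west)
year_cross = 2000 + 100 * t_cross
level_cross = score(east_2000, r_east, year_cross)

print(round(year_cross))   # early in the twenty-second century
print(round(level_cross))  # several thousand points
```

With these round inputs the lines cross around the 2110s at well over five thousand points; the book's exact index values give 2103. The point of the sketch is the shape of the argument, not the precise year.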

Eastern cities are already as large as Western, and the gap between the total economic output of China and the United States (perhaps the easiest variable to predict) is narrowing rapidly. The strategists on America’s National Intelligence Council think China’s output will catch up with the United States’ in 2036. The bankers at Goldman Sachs think it will happen in 2027; the accountants at PricewaterhouseCoopers, in 2025; and some economists, such as Angus Maddison of the Organization for Economic Cooperation and Development, and the Nobel Prize winner Robert Fogel, opt for even closer dates (2020 and 2016, respectively). It will take longer for the East’s war-making capacity, information technology, and per capita energy capture to overtake the West’s, but it seems reasonable to suspect that after 2050 Eastern social development will catch up quickly.


Figure 12.1. Written in stone? If Eastern and Western social development scores carry on rising at the same speed as in the twentieth century, Western rule will end in 2103.

Yet nagging doubts do remain. All the expert predictions mentioned above were offered in 2006–2007, on the eve of a financial crisis that these same bankers, accountants, and economists had managed not to foresee; and we should also bear in mind that the whole point of A Christmas Carol is that Scrooge’s fate is not written in stone. “If the courses be departed from,” Scrooge assures the Ghost, “the ends will change,” and, sure enough, Scrooge pops out of bed on Christmas morning a new man. “He became as good a friend, as good a master, and as good a man,” said Dickens, “as the good old City knew, or any other good old city, town, or borough, in the good old world.”

Will the West, Scrooge-like, reinvent itself in the twenty-first century and stay on top? In this final chapter, I want to suggest a rather surprising answer to this question.

I have argued throughout this book that the great weakness of most attempts to explain why the West rules and to predict what will happen next is that the soothsayers generally take such a short perspective, looking back just a few hundred years (if that) before telling us what history means. It is rather as if Scrooge tried to learn his lessons solely by talking to the Ghost of Christmas Present.

We will do better to follow Scrooge’s actual method, hanging on the words of the Ghost of Christmas Past, or to imitate Hari Seldon, who interrogated millennia of history before peering into the Galactic Empire’s future. Like Scrooge and Seldon, we need to identify not only where current trends are taking us but also whether these trends are generating forces that will undermine them. We need to factor in the paradox of development, identify advantages of backwardness, and foresee not only how geography will shape social development but also how social development will change the meanings of geography. And when we do all these things, we will find that the story still has a twist in its tail.

AFTER CHIMERICA

We have been cursed to live in interesting times.

Since about 2000 a very odd relationship has developed between the world’s Western core and its Eastern periphery. Back in the 1840s the Western core went global, projecting its power into every nook and cranny in the world and turning what had formerly been an independent Eastern core into a new periphery to the West. The relationship between core and periphery subsequently unfolded along much the same lines as those between cores and peripheries throughout history (albeit on a larger scale), with Easterners exploiting their cheap labor and natural resources to trade with the richer Western core. As often happens on peripheries, some people found advantages in backwardness, and Japan remade itself. In the 1960s several East Asian countries followed it into the American-dominated global market and prospered, and after 1978, when it finally settled into peace, responsibility, and flexibility, so did China. The East’s vast, poor populations and indigenous intelligentsias that had struck earlier Western observers as forces of backwardness now began to look like huge advantages. The industrial revolution was finally spreading across the East, and Eastern entrepreneurs were building factories and selling low-cost goods to the West (particularly the United States).

Nothing in this script was particularly new, and for a decade or more all went well (except for Westerners who tried to compete with low-cost East Asian goods). By the 1990s, however, manufacturers in China were discovering—as people on so many peripheries had done before them—that not even the richest core could afford to buy everything that a periphery could potentially export.

What has made the East-West relationship so unusual is the solution to this problem that emerged after 2000. Even though the average American was earning nearly ten times as much as the average Chinese worker, China effectively lent Westerners money to keep buying Eastern goods. It did this by investing some of its huge current-account surplus in dollar-denominated securities such as United States Treasury Bonds. Buying up hundreds of billions of dollars also kept China’s currency artificially cheap relative to the United States’, making Chinese goods even less expensive for Westerners.

The relationship, economists realized, was rather like a marriage in which one spouse does the saving and investing, the other does the spending, and neither partner can afford a divorce. If China stopped buying dollars, the American currency might collapse and the 800 billion United States dollars that China already held would lose their value. If, on the other hand, Americans stopped buying Chinese goods, their living standards would slide and their easy credit would dry up. An American boycott might throw China into industrial chaos, but China could retaliate by dumping its dollars and ruining the U.S. economy.

The historian Niall Ferguson and the economist Moritz Schularick christened this odd couple “Chimerica,” a fusion of China and America that delivered spectacular economic growth but was also a chimera—a dream from which the world eventually had to wake up. Americans could not go on borrowing Chinese money to buy Chinese goods forever. Chimerica’s ocean of cheap credit inflated the prices of every kind of asset, from racehorses to real estate, and in 2007 the bubbles started bursting. In 2008 Western economies went into free fall, dragging the rest of the world after them. By 2009, $13 trillion of consumer wealth had evaporated. Chimerica had fallen.

By early 2010, prompt government interventions had headed off a repeat of the 1930s depression, but the consequences of Chimerica’s collapse were nonetheless enormous. In the East unemployment spiked, stock markets tumbled, and China’s economy expanded barely half as fast in 2009 as it had done in 2007. But that said, China’s 7.5 percent growth in 2009 remained well above what economies in the Western core could hope for even in the best years. Beijing had to find $586 billion for a stimulus package, but it at least had the reserves to cover this.

In the West, however, the damage was far worse. The United States piled a $787 billion stimulus on top of its mountain of existing debt and still saw its economy shrink by more than 2 percent in 2009. The International Monetary Fund announced that summer that it expected Chinese economic growth to rebound to 8.5 percent in 2010, while the United States would manage just 0.8 percent. Most alarming of all, the Congressional Budget Office forecast that the United States would not pay off the borrowing for its stimulus package until 2019, by which time the entitlements of its aging population would be dragging its economy down even further.

When the leaders of the world’s twenty biggest economies met in April 2009 to craft their response to the crisis, a new wisecrack went around: “After [Tiananmen Square in] 1989 capitalism saved China. After 2009 China saved capitalism.” There is much truth to this, but an even better analogy for 2009 might be 1918. That was the year when the sucking sound of power and wealth draining across the Atlantic, from the bankrupt old core in Europe to the thriving new one in the United States, became undeniable. Two thousand and nine may prove to have been the year when the sound of the drain across the Pacific, from bankrupt America to thriving China, became equally audible. Chimerica may have been merely a layover on the road to Eastern rule.

Needless to say, not everyone agrees with this prognosis. Some pundits point out that the United States has made itself over just as thoroughly as Scrooge plenty of times already. All too many critics wrote off the United States in the great depression of the 1930s and the stagflation of the 1970s, only to see it bounce back to defeat the Nazis in the 1940s and the Soviets in the 1980s. American entrepreneurs and scientists, the optimists insist, will figure something out, and even if the United States does slide into crisis in the 2010s it will get the better of China in the 2020s.

Others stress that China has problems too. Most obviously, as economic success drives up wages, China is losing some of the advantages of its backwardness. In the 1990s low-end manufacturing jobs started migrating from China’s coasts to its interior, and are now leaving China altogether for even-lower-wage countries such as Vietnam. Most economists see this as the natural course of China’s integration into the global economy, but to a few it is the first sign that China is losing its edge.

Other China bashers see demography as a bigger challenge. Thanks to low birth and immigration rates, the average age is rising faster in China than in America, and by 2040 the entitlements of the elderly may weigh more heavily on China’s economy than on that of the United States. China’s shortage of natural resources may also slow economic growth, and tensions between the booming cities and languishing countryside may get much worse. If any of these things happen, popular unrest (which is already rising) could get out of control. Ethnic revolts and protests against corruption and environmental catastrophes helped bring down plenty of Chinese dynasties in the past; maybe they will do so again in the near future. And if the Communist party does fall, the country might break apart, just as it did at the end of the Han, Tang, Yuan, and Qing dynasties. The best analogy for China in 2020 might not, after all, be the United States in 1920, soaking up the old core’s wealth, but China itself in 1920, sliding into civil war.

Then again, an influential group of Western Panglosses insists, maybe none of these guesses really matters, because all will be for the best regardless. Despite seeing wealth and power drain across the Atlantic in the twentieth century, the typical western European in 2000 was richer than his or her forebear at the height of Europe’s imperial grandeur, because the rising capitalist tide has lifted all the boats. In the twenty-first century the drain across the Pacific may lift everyone’s boats even higher. Angus Maddison, mentioned above for his calculation that China’s gross domestic product will overtake that of the United States in 2020, foresees Chinese incomes tripling (to an average of $18,991 per person) between 2003 and 2030. He expects that American incomes will rise only 50 percent, but because they started from such a high level the typical American in 2030 will earn $58,722, more than three times as much as the typical Chinese. Robert Fogel, who thinks China’s economy will overtake the United States’ in 2016, is even more bullish. By 2040, he says, Chinese incomes will reach an astonishing $85,000—but by that time the average American will be making $107,000.*

Most Panglossian of all is what the journalist James Mann calls the “Soothing Scenario,” a claim that come what may, prosperity will Westernize the East. Asking whether the West still rules will then be a meaningless question, because the whole world will have become Western. “Trade freely with China,” George W. Bush urged in 1999, “and time is on our side.”

The only way to flourish in the modern global economy, this argument runs, is to be liberal and democratic—that is, more like the Western core. Japan, Taiwan, South Korea, and Singapore all moved from one-party toward somewhat democratic rule as they grew rich in the late twentieth century, and if the Chinese Communist Party can embrace capitalism, perhaps it can embrace democracy too. Those regions most involved in global trade may already be doing so. In Guangdong and Fujian provinces, for instance, many local officials are nowadays directly elected. National politics certainly remains authoritarian, but the rulers in Beijing have become markedly more responsive to public concerns over natural disasters, public health crises, and corruption.

Many Westerners who have spent time in the East, though, are less impressed with the idea that the East will become culturally Westernized at the very moment that it achieves the power to dominate the globe. Americans, after all, did not start acting more like Europeans after they displaced Europe as the dominant region in the Western core; rather, Europeans began complaining about the Americanization of their own culture.

China’s urban elites did find plenty to like in Western culture when they entered the American-dominated global economy in the 1980s. They dropped the Mao suit, opened English schools, and even (briefly) sipped lattes at a Starbucks in the Forbidden City. The overpriced bars in Beijing’s Back Lakes district are as full of hyperactive twenty-somethings checking stock quotes on their cell phones as those in New York or London. The question, though, is whether Westernization will continue if power and wealth carry on draining across the Pacific.

The journalist Martin Jacques suggests not. We are already, he argues, seeing the rise of what he calls “contested modernities” as Easterners and South Asians adapt the industrialism, capitalism, and liberalism invented in the nineteenth-century Western core to their own needs. In the first half of the twenty-first century, Jacques speculates, Western rule will give way to a fragmented global order, with multiple currency zones (dollar-, euro-, and renminbi-denominated) and spheres of economic/military influence (an American sphere in Europe, southwest Asia, and perhaps South Asia, and a Chinese sphere in East Asia and Africa), each dominated by its own cultural traditions (Euro-American, Confucian, and so on). But in the second half of the century, he predicts, numbers will tell; China will rule and the world will be Easternized.

Extrapolating from how China has used its power since the 1990s, Jacques argues that the Sinocentric world of the late twenty-first century will be quite different from the Western world of the nineteenth and twentieth centuries. It will be even more hierarchical, with the old Chinese idea that foreigners should approach the Middle Kingdom as tribute-bearing supplicants replacing Western theories about the nominal equality of states and institutions. It will also be illiberal, dropping the West’s rhetoric about universal human values; and statist, brooking no opposition to the powers of political rulers. All over the world, people will forget the glories of the Euro-American past. They will learn Mandarin, not English, celebrate Zheng He, not Columbus, read Confucius instead of Plato, and marvel at Chinese Renaissance men such as Shen Kuo rather than Italians such as Leonardo.

Some strategists think Chinese global rule will follow Confucian traditions of peaceful statecraft and be less militarily aggressive than the West’s; others disagree. Chinese history gives no clear guidance. There have certainly been Chinese leaders who opposed war as a policy tool (particularly among the gentry and bureaucracy), but there have been plenty of others who readily used force, including the first few emperors of virtually every dynasty except the Song. Those international-relations theorists who style themselves “realists” generally argue that China’s caution since the Korean War owes more to weakness than to Confucius. Beijing’s military spending has increased more than 16 percent each year since 2006 and is on target to match America’s in the 2020s. Depending on the decisions future leaders make, the East’s rise to global rule in the twenty-first century may be even bloodier than the West’s in the nineteenth and twentieth.

So there we have it. Maybe great men and women will come to America’s aid, preserving Western rule for a few generations more; maybe bungling idiots will interrupt China’s rise for a while. Maybe the East will be Westernized, or maybe the West will be Easternized. Maybe we will all come together in a global village, or maybe we will dissolve into a clash of civilizations. Maybe everyone will end up richer, or maybe we will incinerate ourselves in a Third World War.

This mess of contradictory prognoses evokes nothing so much as the story I mentioned in Chapter 4 of the blind men and the elephant, each imagining he was touching something entirely different. The only way to explain why the West rules, I suggested at that point in the book, was by using the index of social development to cast a little light on the scene. I now want to suggest that the same approach can help us see what the elephant will look like a hundred years from now.

2103

So let us look again at Figure 12.1, particularly at the point where the Eastern and Western lines meet in 2103. The vertical axis shows that by then social development will stand at more than five thousand points.

This is an astonishing number. In the fourteen thousand years between the end of the Ice Age and 2000 CE, social development rose nine hundred points. In the next hundred years, says Figure 12.1, it will rise four thousand points more. Nine hundred points took us from the cave paintings of Altamira to the atomic age; where will another four thousand take us? That, it seems to me, is the real question. We cannot understand what will come after Chimerica unless we first understand what the world will look like at five thousand points.

In an interview in 2000 the economist Jeremy Rifkin suggested, “Our way of life is likely to be more fundamentally transformed in the next several decades than in the previous thousand years.” That sounds extreme, but if Figure 12.1 really does show the shape of the future, Rifkin’s projection is in fact a serious understatement. Between 2000 and 2050, according to the graph, social development will rise twice as much as in the previous fifteen thousand years; and by 2103 it will have doubled again. What a mockery this makes of history!

This is where all the prognostications that I discussed in the previous section fall down. All extrapolate from the present into the near future, and all—unsurprisingly—conclude that the future will look much like the present, but with a richer China. If we instead bring the whole weight of history to bear on the question—that is, if we talk to the Ghost of Christmas Past—we are forced to recognize just how unprecedented the coming surge in social development is going to be.

The implications of development scores of five thousand points are staggering. If, for the sake of argument, we assume that the four traits of energy capture, urbanization, information technology, and war-making capacity will each account for roughly the same proportions of the total social development score in 2103 as they did in 2000,* then a century from now there will be cities of 140 million people (imagine Tokyo, Mexico City, New York, São Paulo, Mumbai, Delhi, and Shanghai rolled into one) in which the average person consumes 1.3 million kilocalories of energy per day.
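The assumption of constant proportions means each trait simply scales by the same factor as the total score. The sketch below makes that arithmetic explicit, using assumed round baselines for 2000 (a Western score of about 900 points, a largest city of about 27 million people, and energy capture of about 230,000 kilocalories per person per day); these are approximations of the right order, not the index's exact values.

```python
# Proportional scaling behind the "five thousand points" figures.
# Baseline 2000 values below are assumed approximations, not exact.
score_2000 = 900.0        # Western score, 2000 (approximate)
score_2103 = 5000.0       # projected score at the 2103 crossing
scale = score_2103 / score_2000          # roughly a 5.6x rise

city_2000_millions = 27.0                # largest city, ~Tokyo in 2000
energy_2000_kcal = 230_000.0             # kcal per person per day, 2000

# If every trait keeps its 2000 share of the total, each one grows
# by the same overall factor.
city_2103 = city_2000_millions * scale   # ~150 million people
energy_2103 = energy_2000_kcal * scale   # ~1.3 million kcal per day
```

With these rough baselines the projected city lands near 150 million people and energy capture near 1.3 million kilocalories, the same order as the chapter's 140-million-person megacity.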

A fivefold increase in war-making capacity is even harder to visualize. We already have enough weapons to destroy the world several times over, and rather than simply multiplying nuclear warheads, bombs, and guns, the twenty-first century will probably see technologies that make twentieth-century weapons as obsolete as the machine gun made the musket. Something like “Star Wars,” the anti-ballistic-missile shield that American scientists have been working on since the 1980s, will surely become a reality. Robots will do our fighting. Cyberwarfare will become all-important. Nanotechnology will turn everyday materials into impenetrable armor or murderous weapons. And each new form of offense will call forth equally sophisticated defenses.

Most mind-boggling of all, though, are the changes in information technology implied by Figure 12.1. The twentieth century took us from crude radios and telephones to the Internet; it is not so far-fetched to suggest that the twenty-first will give everyone in the developed cores instant access to and total recall of all the information in the world, their brains networked like—or into—a giant computer, with calculating power trillions of times greater than the sum of all brains and machines in our own time.

All these things, of course, sound impossible. Cities of 140 million people surely could not function. There is not enough oil, coal, gas, and uranium in the world to supply billions of people with 1.3 million kilocalories per day. Nano-, cyber-, and robot wars would annihilate us all. And merging our minds with machines—well, we would cease to be human.

And that, I think, is the most important and troubling implication of Figure 12.1.

I have made two general claims in this book. The first was that biology, sociology, and geography jointly explain the history of social development, with biology driving development up, sociology shaping how development rises (or doesn’t), and geography deciding where development rises (or falls) fastest; and the second was that while geography determines where social development rises or falls, social development also determines what geography means. I now want to extend these arguments. In the twenty-first century social development promises—or threatens—to rise so high that it will change what biology and sociology mean too. We are approaching the greatest discontinuity in history.

The inventor and futurist Ray Kurzweil calls this the Singularity—“a future period during which the pace of technological change will be so rapid, its impact so deep … that technology appears to be expanding at infinite speed.” One of the foundations of his argument is Moore’s Law, the famous observation made by the engineer (and future chairman of Intel) Gordon Moore in 1965 that with every passing year the miniaturization of computer chips roughly doubled their speed and halved their cost. Forty years ago gigantic mainframes typically performed a few hundred thousand calculations per second and cost several million dollars, but the little thousand-dollar laptop I am now tapping away on can handle a couple of billion per second—a ten-million-fold improvement in price-performance, or a doubling every eighteen months, much as Moore predicted.
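The paragraph's own numbers can be turned into a doubling time. A ten-million-fold price-performance gain over forty years works out to a doubling roughly every twenty months, close to the eighteen-month rule of thumb the text cites:

```python
import math

# The stated ten-million-fold price-performance improvement over
# forty years, expressed as a doubling time.
improvement = 1e7        # ten-million-fold, as given in the text
years = 40

doublings = math.log2(improvement)            # about 23 doublings
months_per_doubling = years * 12 / doublings
print(round(months_per_doubling, 1))          # prints 20.6
```

Twenty months versus eighteen is well within the looseness of "a few hundred thousand calculations" and "several million dollars," which is all the hedge the original comparison needs.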

If this trend continues, says Kurzweil, by about 2030 computers will be powerful enough to run programs reproducing the 10,000 trillion electrical signals that flash every second among the 22 billion neurons inside a human skull. They will also have the memory to store the 10 trillion recollections that a typical brain houses. By that date scanning technology will be accurate enough to map the human brain neuron by neuron—meaning, say the technology boosters, that we will be able to upload actual human minds onto machines. By about 2045, Kurzweil thinks, computers will be able to host all the minds in the world, effectively merging carbon- and silicon-based intelligence into a single global consciousness. This will be the Singularity. We will transcend biology, evolving into a new, merged being as far ahead of Homo sapiens as a contemporary human is of the individual cells that merge to create his/her body.

Kurzweil’s enthusiastic vision provokes as much mockery as admiration (“the Rapture for Nerds,” some call it), and the odds are that—like all prophets before him—he will be wrong much more often than he is right. But one of the things Kurzweil is surely correct about is that what he calls “criticism from incredulity,” simple disbelief that anything so peculiar could happen, is no counterargument. As the Nobel Prize–winning chemist Richard Smalley liked to say, “When a scientist says something is possible, they’re probably underestimating how long it will take. But if they say it’s impossible, they’re probably wrong.” Humans are already taking baby steps toward some sort of Singularity, and governments and militaries are taking the prospect of a Singularity seriously enough to start planning for it.

We can, perhaps, already see what some of these baby steps have wrought. I pointed out in Chapter 10 that the industrial revolution set off even bigger changes in what it means to be human than the agricultural revolution had done. Across much of the world, better diets now allow humans to live twice as long as and grow six inches taller than their great-great-grandparents. Few women now spend more than a small part of their lives bearing and rearing babies, and compared with any earlier age, few babies now die in infancy. In the richest countries doctors seem able to perform miracles—they can keep us looking young (in 2008, five million Botox procedures were performed in the United States), control our moods (one in ten Americans has used Prozac), and consolidate everything from cartilage to erections (in 2005 American doctors wrote 17 million prescriptions for Viagra, Cialis, and Levitra). The aging emperors of antiquity, I suspect, would have thought these little purple pills quite as wonderful as anything in Kurzweil’s Singularity.

Twenty-first-century genetic research promises to transform humanity even more, correcting copying errors in our cells and growing new organs when the ones we were born with let us down. Some scientists think we are approaching “partial immortalization”: like Abraham Lincoln’s famous ax (which had its handle replaced three times and its blade twice), each part of us might be renewed while we ourselves carry on indefinitely.

And why stop at just fixing what is broken? You may remember the 1970s television series The Six Million Dollar Man, which began with a pilot named Steve Austin (played by Lee Majors) losing an arm, an eye, and both legs in a plane crash. “We can rebuild him—we have the technology,” says the voiceover, and Austin quickly reappears as a bionic man who outruns cars, has a Geiger counter in his arm and a zoom lens in his eye, and eventually a bionic girlfriend (Lindsay Wagner) too.

Thirty years on, athletes have already gone bionic. When the golfer Tiger Woods needed eye surgery in 2005, he upgraded himself to better-than-perfect 20/15 vision, and in 2008 the International Association of Athletics Federations even temporarily banned the sprinter Oscar Pistorius from the Olympics because his artificial legs seemed to give him an edge over runners hobbled by having real legs.*

By the 2020s middle-aged folks in the developed cores might see farther, run faster, and look better than they did as youngsters. But they will still not be as eagle-eyed, swift, and beautiful as the next generation. Genetic testing already gives parents opportunities to abort fetuses predisposed to undesirable shortcomings, and as we get better at switching specific genes on and off, so-called designer babies engineered for traits that parents like may become an option. Why take chances on nature’s genetic lottery, ask some, if a little tinkering can give you the baby you want?

Because, answer others, eugenics—whether driven by racist maniacs like Hitler or by consumer choice—is immoral. It may also be dangerous: biologists like to say that “evolution is smarter than you,” and we may one day pay a price for trying to outwit nature by culling our herd of traits such as stupidity, ugliness, obesity, and laziness. All this talk of transcending biology, critics charge, is merely playing at being God—to which Craig Venter, one of the first scientists to sequence the human genome, reportedly replies: “We’re not playing.”

Controversy continues, but I suspect that our age, like so many before it, will in the end get the thought it needs. Ten thousand years ago some people may have worried that domesticated wheat and sheep were unnatural; two hundred years ago some certainly felt that way about steam engines. Those who mastered their qualms flourished; those who did not, did not. Trying to outlaw therapeutic cloning, beauty for all, and longer life spans does not sound very workable, and banning the military uses of tinkering with nature sounds even less so.

The United States Defense Advanced Research Projects Agency (DARPA) is one of the biggest funders of research into modifying humans. It was DARPA that brought us the Internet (then called the Arpanet) in the 1970s, and its Brain Interface Project is now looking at molecular-scale computers, built from enzymes and DNA molecules rather than silicon, that could be implanted in soldiers’ heads. The first molecular computers were unveiled in 2002, and by 2004 better versions were helping to fight cancer. DARPA, however, hopes that more advanced models will give soldiers some of the advantages of machines by speeding up their synaptic links, adding memory, and even providing wireless Internet access. In a similar vein, DARPA’s Silent Talk project is working on implants that will decode preverbal electrical signals within the brain and send them over the Internet so troops can communicate without radios or e-mail. One National Science Foundation report suggests that such “network-enabled telepathy” will become a reality in the 2020s.

The final component of Kurzweil’s Singularity, computers that can reproduce the workings of biological brains, is moving even faster. In April 2007 IBM researchers turned a Blue Gene/L supercomputer into a massively parallel cortical simulator that could run a program imitating a mouse’s brain functions. The program was only half as complex as a real mouse brain, and ran at only one-tenth of rodent speed, but by November of that year the same lab had already upgraded to mimicking bigger, more complex rat brains.

Half a slowed-down rat is a long way from a whole full-speed human, and the lab team in fact estimated that a human simulation would require a computer four hundred times as powerful, which with 2007 technology would have had unmanageable energy, cooling, and space requirements. Already in 2008, however, the costs were falling sharply, and IBM predicted that the Blue Gene/Q supercomputer, which should be up and running in 2011, would get at least a quarter of the way there. The even more ambitious Project Kittyhawk, linking thousands of Blue Genes, should move closer still in the 2020s.

To insist that this will add up to Kurzweil’s Singularity by 2045 would be rash. It might be rasher still, however, to deny that we are approaching a massive discontinuity. Everywhere we look, scientists are assaulting the boundaries of biology. Craig Venter’s much-publicized ambition to synthesize life had earned him the nickname “Dr. Frankencell,” but in 2010 his team succeeded in manufacturing the genome of a simple bacterium entirely from chemicals and transplanting it into the walls of cells to create JCVI-syn1.0, the earth’s first synthetic self-reproducing organism. Genetics even has its own version of Moore’s Law, Carlson’s Curve:* between 1995 and 2009 the cost of DNA synthesis fell from a dollar per base pair to less than 0.1 cent. By 2020, some geneticists think, building entirely new organisms will be commonplace. Hard as it is to get our minds around the idea, the trends of the last couple of centuries are leading toward a change in what it means to be human, making possible the vast cities, astonishing energy levels, apocalyptic weapons, and science-fiction kinds of information technology implied by social development scores of five thousand points.
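To get a feel for how steep Carlson's Curve actually is, the two cost figures quoted above can be turned into an implied annual rate of decline. This is only a back-of-the-envelope sketch, and it assumes a steady exponential fall between the 1995 and 2009 endpoints, which the real curve need not follow:

```python
# Carlson's Curve, back of the envelope: DNA synthesis cost fell from
# $1.00 per base pair in 1995 to about 0.1 cent ($0.001) in 2009.
# Assuming a steady exponential decline, what yearly drop does that imply?

start_cost, end_cost = 1.00, 0.001  # dollars per base pair
years = 2009 - 1995                 # 14 years between the two data points

annual_factor = (end_cost / start_cost) ** (1 / years)
annual_decline = 1 - annual_factor

print(f"cost multiplies by {annual_factor:.2f} each year, "
      f"a drop of about {annual_decline:.0%} per year")
```

On these two data points alone, the cost falls by roughly two-fifths every year, which is indeed a Moore's Law-like pace.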

This book has been full of upheavals in which social development jumped upward, rendering irrelevant many of the problems that had dominated the lives of earlier generations. The evolution of Homo sapiens swept away all previous ape-men; the invention of agriculture made many of the burning issues of hunter-gatherer life unimportant; and the rise of cities and states did the same to the concerns of prehistoric villagers. The closing of the steppe highway and the opening of the oceans ended realities that had constrained Old World development for two thousand years, and the industrial revolution of course made a mockery of all that had gone before.

These revolutions have been accelerating, building on one another to drive social development up further and faster each time. If development does leap up by four thousand points in the twenty-first century, as Figure 12.1 predicts, this ongoing revolution will be the biggest and fastest of all. Its core, many futurists agree, lies in linked transformations of genetics, robotics, nanotechnology, and computing, and its consequences will overturn much of what we have known.

But while Figure 12.1 clearly shows the Eastern development score gaining on the West’s, you may have noticed that every example I cited in this section—DARPA, IBM, the Six Million Dollar Man—was American. Eastern scientists have made plenty of contributions to the new technologies (robotics, for instance, is as advanced in Japan and South Korea as anywhere), but so far the revolution has been overwhelmingly Western. This might mean that the pundits who point to America’s decline and a coming Chinese age will be proved wrong after all: if the United States dominates the new technologies as thoroughly as Britain dominated industrial ones two centuries ago, the genetic/nanotechnology/robotics revolution might shift wealth and power westward even more dramatically than the industrial revolution did.

On the other hand, the underlying shift of wealth from West to East might mean that the current American dominance is just a lag from the twentieth century, and that by the 2020s the big advances will be happening in Eastern labs. China is already using lavish funding to lure its best scientists back from America; perhaps Lenovo, not IBM, will provide the mainframes that host a global consciousness in the 2040s, and Figure 12.1 will be more or less right after all.

Or then again, perhaps the Singularity will render ten-thousand-year-old categories such as “East” and “West” completely irrelevant. Instead of transforming geography, it might abolish it. The merging of mortals and machines will mean new ways of capturing and using energy, new ways of living together, new ways of fighting, and new ways of communicating. It will mean new ways of working, thinking, loving, and laughing; new ways of being born, growing old, and dying. It may even mean the end of all these things and the creation of a world beyond anything our unimproved, merely biological brains can imagine.

Any or all these things may come to pass.

Unless, of course, something prevents them.

THE WORST-CASE SCENARIO

Late in 2006, my wife and I were invited to a conference at Stanford University called “A World at Risk.” This star-studded event, featuring some of the world’s leading policy makers, took place on a bright winter’s day. The sun shone warmly from a clear blue sky as we made our way to the venue. The stock market, house prices, employment, and consumer confidence were at or near all-time highs. It was morning in America.

Over breakfast we heard from former secretaries of state and defense about the nuclear, biological, and terrorist threats facing us. Before lunch we learned about the shocking scale of environmental degradation and the high risk that international security would collapse, and as we ate we were told that global epidemics were virtually inevitable. And then things went downhill. We reeled from session to session in gathering gloom, overwhelmed by expert report after expert report on the rising tide of catastrophe. The conference had been a tour de force, but by the time the after-dinner speaker announced that we were losing the war on terror, the audience could barely even respond.

This day of despair made me think (to put it mildly). In the first century CE and again a thousand years later, social development ran into a hard ceiling and the forces of disruption that development itself had created set off Old World–wide collapses. Are we now discovering a new hard ceiling, somewhere around one thousand points on the index? Are the hoofbeats of the horsemen of the apocalypse overtaking our baby steps toward the Singularity even as you read these words?

The five familiar figures—climate change, famine, state failure, migration, and disease—all seem to be back. The first of these, global warming, is perhaps the ultimate example of the paradox of development, because the same fossil fuels that drove the leap in social development since 1800 have also filled the air with carbon, trapping heat. Our plastic toys and refrigerators have turned the world into a greenhouse. Temperatures have risen 1°F since 1850, with most of the increase coming in the last thirty years; and the mercury in the thermometer just keeps rising.

In the past, higher temperatures often meant better agricultural yields and rising development (as in the Roman and Medieval Warm Periods), but this time may be different. The United Nations Intergovernmental Panel on Climate Change (IPCC) suggested in 2007 that “Altered frequencies and intensities of extreme weather, together with sea level rise, are expected to have mostly adverse effects on natural and human systems … warming could lead to some impacts that are abrupt or irreversible.” And that may be putting it mildly; the small print in their report is even more alarming.

The air bubbles in the ice caps show that carbon dioxide levels have fluctuated across the last 650,000 years, from just 180 molecules of carbon dioxide per million molecules of air in the ice ages to 290 parts per million (ppm) in warm interglacials. Carbon dioxide never reached 300 ppm—until 1958. By May 2010 it was clocked at 393 ppm, and the IPCC estimates that if present trends continue unchecked, carbon dioxide levels will reach 550 ppm by 2050—higher than they have been for 24 million years—and average temperatures will jump another 5°F. And if energy capture keeps rising as Figure 12.1 implies, the world could get much hotter, much faster.
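As a rough sanity check on these numbers (a hypothetical straight-line sketch, not anything the IPCC itself publishes), the gap between the May 2010 reading and the projected 2050 level implies the following average annual rise:

```python
# Illustrative arithmetic only: comparing the 393 ppm measured in May 2010
# with the 550 ppm the IPCC projects for 2050 if present trends continue.
ppm_2010, ppm_2050 = 393, 550
rise_per_year = (ppm_2050 - ppm_2010) / (2050 - 2010)
print(f"implied average rise: {rise_per_year:.1f} ppm per year")
```

For comparison, the climb from the 300 ppm threshold first crossed in 1958 to 393 ppm in 2010 averaged under 2 ppm per year, so the projected path implies roughly a doubling of the recent pace.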

Even if we stopped pumping out greenhouse gases tomorrow, there is already so much carbon in the air that warming will carry on. We have changed the atmosphere’s chemistry. Whatever we do now, the North Pole will melt. Conservative estimates, such as the IPCC’s, suggest that the ice will be gone by 2100; the most radical think polar summers will be ice-free by 2013. Most scientists come down around 2040.

As the poles melt, the sea level will rise. The waters are already a good five inches higher than they were in 1900, and the IPCC expects them to rise a further two feet by 2100. The direst predictions for the polar meltdown add another fifty feet to the sea level, drowning millions of square miles of the planet’s best farmland and richest cities. The world is shrinking in more ways than we realized.

But despite all the icy meltwater, the seas will keep getting warmer as they absorb heat from the atmosphere, and because the oceans now cool off less in winter than they used to, hurricane and cyclone seasons will get longer and fiercer. Wet places will be wetter, with more violent storms and floods; dry places drier, with more wildfires and dust storms.

Many of us have already had some kind of wake-up call that made global warming personal. Mine came in 2008. Well before California’s fire season normally gets going, the air thickened with ash as the forests burned around our house. The sky turned an unearthly orange and the rotors of firefighting helicopters drowned out our voices. We cleared a broad firebreak around our home against future blazes and in the end had only one really close call before the rains came. Or perhaps I should say before the rains finally came: the active fire season in the western United States is now seventy-eight days longer than it was in the 1970s. The typical fire burns five times as long as it did thirty years ago. And firefighters predict worse to come.

All this comes under the heading of what the journalist Thomas L. Friedman has called “the really scary stuff we already know.” Much worse is what he calls “the even scarier stuff we don’t know.” The problem, Friedman explains, is that what we face is not global warming but “global weirding.” Climate change is nonlinear: everything is connected to everything else, feeding back in ways too bewilderingly complex to model. There will be tipping points when the environment shifts abruptly and irreversibly, but we don’t know where they are or what will happen when we reach them.

The scariest of the stuff we don’t know is how humans will react. Like all the episodes of climate change in the past, this one will not directly cause collapse. In 2006 the Stern Review, a British study, estimated that if we continue business as usual until 2100, climate change will drive global economic output down 20 percent from current levels—a dismal prospect, but not the end of the world as we know it; and even if the direst predictions come true, with temperatures rising 10°F, humanity will muddle through. The real concern is not the weather itself but that long before 2100 people’s reactions to climate change will unleash more horsemen of the apocalypse.

The most obvious is famine. The green revolution was perhaps the twentieth century’s greatest achievement, increasing food production even faster than population could grow. By 2000 it seemed that if we could just contain the viciousness and stupidity of dictators and warlords, starvation might yet be banished. But one decade on, that seems less likely. Once again the paradox of development is at work. As wealth rises, farmers feed more and more cheap grain to animals so we can eat expensive meat, or turn more and more acres over to biofuels so we can drive cars without burning oil. The result: the prices of staple foods doubled or tripled between 2006 and 2008 and hungry crowds rioted around Africa and Asia. The combination of the biggest cereal harvest in history (2.3 billion tons) and the financial crisis pushed prices down in 2009, but with the world’s population set to reach 9 billion by 2050, the United Nations’ Food and Agriculture Organization expects that price volatility and food shortages will only increase.

Geography will continue to be unfair in the twenty-first century. Global warming will raise crop yields in cold, rich countries such as Russia and Canada, but will have the opposite effect in what the U.S. National Intelligence Council calls an “arc of instability” stretching from Africa through Asia (Figure 12.2). Most of the poorest people in the world live in this arc, and declining harvests could unleash the last three horsemen of the apocalypse.

The National Intelligence Council estimates that between 2008 and 2025 the number of people facing food or water shortages will leap from 600 million to 1.4 billion, most of them in the arc; and not to be outdone in apocalyptic predictions, the Stern Review concluded that by 2050 hunger and drought will set 200 million “climate migrants” moving—five times as many as the world’s entire refugee population in 2008.

Plenty of people in the Western core already see migration as a threat, even though since the closing of the steppe highway three centuries ago migration has more often been a motor of development than a danger to it.* In 2006 a Gallup poll reported that Americans thought immigration was the country’s second-worst problem (after the war in Iraq). To many Americans, the danger of Mexicans smuggling drugs and taking jobs seems to outweigh all benefits; to many Europeans, fears of Islamist terrorism loom just as large. In both regions, nativist lobbies argue that the new settlers are uniquely difficult to assimilate.

image

Figure 12.2. The big thirst: the National Intelligence Council’s “arc of instability” (stretching from Africa through Asia), plotted against regions likely to face water shortages by 2025. The darkest-shaded areas will face “physical scarcity,” defined as having more than 75 percent of their water allocated to agriculture, industry, and/or domestic use. Medium-dark areas will be “approaching physical scarcity,” with 60 percent of their water taken up by these purposes, and the lightest areas will face “economic scarcity,” with more than 25 percent of their water committed. Rich countries such as the United States, Australia, and China can pipe water from wet areas to dry; poor ones cannot.

Global warming threatens to make even the most lurid fears of anti-immigrant activists come true by the 2020s. Tens of millions of the world’s hungriest, angriest, and most desperate people may be fleeing the Muslim world for Europe, and Latin America for the United States. The population movements could dwarf anything in history, reviving the kind of problems that the steppe highway used to present.

Disease, the fourth horseman of the apocalypse, may be one of these problems. Migrations across the steppes spread the plagues of the second and fourteenth centuries, and the greatest pandemic of the twentieth century, the H1N1 influenza of 1918, was spread by a flood of young men under arms between America and Europe. H1N1 killed more people in one year—perhaps 50 million—than the Black Death did in a century, and two or three times as many as AIDS has done in the last thirty years.

Air travel has made disease much harder to contain. After incubating in Africa since at least 1959, AIDS exploded across four continents in the 1980s, and Severe Acute Respiratory Syndrome (SARS) leaped to thirty-seven countries in 2003 within weeks of evolving in southern China. Geneticists sequenced the syndrome’s DNA in thirty-one days (as compared to fifteen years for HIV) and aggressive international action nipped it in the bud. By the time epidemiologists identified the so-called swine flu (known as “New H1N1” to distinguish it from the 1918 flu) in 2009, however, it had already spread too widely to be contained.

If swine flu or one of the equally alarming strains of avian flu starts behaving like the H2N2 virus that killed 1–2 million people in 1957, the World Health Organization estimates that it will kill 2–7.4 million people; if it behaves like the 1918 flu, it will kill 200 million. The world is better prepared than it was in 1918, but deaths on even one-tenth of that scale could cause a short-term economic meltdown to make the 2007–2009 financial crisis look trivial. The World Bank guesses that a pandemic would knock 5 percent off global economic output, and some of the “Ten Things You Need to Know About Pandemic Influenza” listed on the World Health Organization’s website are even more alarming:

• The world may be on the brink of another pandemic.

• All countries will be affected.

• Medical supplies will be inadequate.

• Large numbers of deaths will occur.

• Economic and social disruption will be great.

As when the horsemen rode in the past, climate change, famine, migration, and disease will probably feed back on one another, unleashing the fifth horseman, state failure. The arc of instability is home to some of the world’s most rickety regimes, and as pressure mounts several may collapse as completely as Afghanistan or Somalia, increasing suffering and providing more havens for terrorists. And if instability drags in the cores, whose economies are thoroughly entangled with the arc’s resources, we may slide into the mother of all worst-case scenarios.

As early as 1943 an American mission to the Persian Gulf identified the central problem. “The oil in this region,” it reported, “is the greatest single prize in all history.” Rich nations in the Western core soon reoriented their grand strategies around Gulf oil. When western Europe’s power waned in the 1950s, the United States stepped in, covertly or overtly intervening to help friends, harm enemies, and preserve access in the arc. Although less dependent on Gulf oil, the Soviet Union meddled almost as vigorously to deny it to American interests, and when Russia retreated in the 1990s, China’s addiction to oil (which accounts for 40 percent of the rise in global demand since 2000) forced it, too, to join the great game.

China’s hunger for resources (soybeans, iron, copper, cobalt, timber, and natural gas as well as oil) promises constant clashes with Western interests in the arc of instability in the 2010s. Chinese diplomats stress their country’s “peaceful rising” (some tone it down still further to “peaceful development”), but Western anxiety has increased steadily since the 1990s. In 2004, for instance, China’s search for iron set off what newspapers quickly dubbed the “great drain robbery,” with thieves the world over snatching manhole covers and shipping them to the East to be melted down. Chicago alone lost 150 in one month. Where would it end? Westerners asked. Today manhole covers, tomorrow the world. According to one poll in 2005, 54 percent of Americans agreed that China’s rise was “a threat to world peace”; in a 2007 poll Americans called China the second-greatest threat to global stability, trailing only Iran.

China returns the compliment. When NATO planes bombed China’s embassy in Belgrade in 1999, killing three journalists, furious crowds stoned Western embassies in Beijing and firebombed a consulate in Chengdu. “PEOPLE AGONIZED BY CRIMINAL ACT,” the China Daily’s headline raged. In 2004 the Communist party still insisted on the reality of a “strategic conspiracy of hostile forces to Westernize and cause China to disintegrate.”

In 1914, when Europe’s great powers faced off over the ruins of the Ottoman Empire in the Balkans, Serbia’s Black Hand terrorist gang needed only a pistol to set off World War I. In 2008, a United States commission concluded that “it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013.” With the great powers now facing off over the ruins of Europe’s empires in the arc of instability, the havoc al-Qaeda or Hezbollah might wreak with such weapons does not bear thinking about.

The entanglements in the arc are far scarier than those in the Balkans a century ago because they could so easily go nuclear. Israel has built up a large arsenal since about 1970; in 1998 India and Pakistan both tested atomic bombs; and since 2005 the European Union and United States have accused Iran of seeking the same goal. Most observers expect Iran to be nuclear-capable sometime in the 2010s, which may drive up to half a dozen Muslim states* to seek nuclear deterrents. Israel anticipates a nuclear-armed Iran by 2011, but might not wait for matters to reach that point. Israeli warplanes have already destroyed nuclear reactors in Iraq and Syria, and new attacks may follow if Iran’s program proceeds.

No American administration could remain neutral in a nuclear confrontation in the arc of instability between its closest friend and bitterest enemy. Nor, perhaps, could Russia or China. Both have opposed Iranian nuclear ambitions but they did let Iran apply to join their Shanghai Cooperation Organization,* a loose body working largely to counter American interests in central Asia.

An all-out East-West war would, of course, be catastrophic. For China it would be suicidal: the United States outnumbers it twenty to one in nuclear warheads and perhaps a hundred to one in warheads that can be relied on to reach enemy territory. China tested an antimissile missile in January 2010, but lags far behind American capabilities. The United States has eleven aircraft carrier battle groups to China’s zero (although China began building its first carrier in 2009) and an insurmountable lead in military technology. The United States could not, and would not want to, conquer and occupy China, but almost any imaginable war would end with humiliating defeat for China, the fall of the Communist party, and perhaps the country’s breakup.

That said, winning a war might be almost as bad for the United States as losing it would be for China. Even a low-intensity conflict would have horrendous costs. If Chimerica splits abruptly and vindictively it will mean financial disaster for both partners. A nuclear exchange would be worse still, turning the west coast of North America and much of China into radioactive ruins, killing hundreds of millions, and throwing the world economy into a tailspin. Worst of all, a Sino-American war could easily drag in Russia, which still has the world’s biggest nuclear arsenal.

Any way we look at it, all-out war is madness. Fortunately, a huge body of expert literature reassures us that in a globalized world such madness is impossible. “No physical force can set at nought the force of credit,” says one authority. According to another, the “international movement of capital is the biggest guarantor of world peace.” A third adds that fighting “must involve the expenditure of so vast a sum of money and such an interference with trade, that a war would be accompanied or followed by a complete collapse of … credit and industry”; it would mean “total exhaustion and impoverishment, industry and trade would be ruined, and the power of capital destroyed.”

This is comforting—except for the fact that these experts were not talking about the risk of Sino-American conflict in the 2010s. All were writing between 1910 and 1914, insisting the modern world’s complicated web of trade and finance ruled out any chance of a great-power war in Europe. We all know how that turned out.

Perhaps the world’s statesmen will yank us back from precipice after precipice. Maybe we can avoid a nuclear 1914 for another generation; maybe for fifty years. But is it realistic to think we can keep the bomb out of the hands of terrorists and rogue states forever? Or deter every leader, regardless of national interest, from ever deciding that nuclear war is the best option? Even if we limit proliferation to its current rate, by 2060 there will be close to twenty nuclear powers, several of them in the arc of instability.

Every year we avoid Armageddon the threats from the horsemen of the apocalypse keep building. Pressure on resources will mount, new diseases will evolve, nuclear weapons will proliferate, and—most insidious of all—global weirding will shift the calculus in unpredictable ways. It seems crazily optimistic to think we can juggle all these dangers indefinitely.

We appear to be approaching a new hard ceiling. When the Romans ran up against the original hard ceiling in the first century CE, they faced two possible outcomes: they might find a way through, in which case social development would leap upward, or they might not, in which case the horsemen would drag them down. Their failure began a six-century decline, cutting Western social development by more than one-third. In the eleventh century, when Song China reached the same hard ceiling, it, too, failed to break through, and Eastern development fell by almost one-sixth between 1200 and 1400.

As we press against a new hard ceiling in the twenty-first century, we face the same options but in starker forms. When the Romans and Song failed to find solutions, they had the relative luxury of several centuries of slow decline, but we will not be so lucky. There are many possible paths that our future might follow, but however much they wind around, most seem to lead ultimately to the same place: Nightfall.

What a Singularity will mean for Western rule is open for debate, but what Nightfall will mean seems much clearer. Back in 1949, Einstein told a journalist, “I do not know how the Third World War will be fought, but I can tell you what they will use in the Fourth—rocks.” After Nightfall, no one will rule.

THE GREAT RACE

Talking to the Ghost of Christmas Past leads to an alarming conclusion: the twenty-first century is going to be a race. In one lane is some sort of Singularity; in the other, Nightfall. One will win and one will lose. There will be no silver medal. Either we will soon (perhaps before 2050) begin a transformation even more profound than the industrial revolution, which may make most of our current problems irrelevant, or we will stagger into a collapse like no other. It is hard to see how any intermediate outcome—a compromise, say, in which everyone gets a bit richer, China gradually overtakes the West, and things otherwise go on much as before—can work.

This means that the next forty years will be the most important in history.

What the world needs to do to prevent Nightfall is not really a mystery. The top priority is to avoid all-out nuclear war, and the way to do that is for the great powers to reduce their nuclear arsenals. Paradoxically, pursuing total disarmament may be a riskier course, because nuclear weapons cannot be uninvented. Great powers can always build new bombs in a hurry, and the really bad guys—terrorists and rulers of rogue states—will in any case ignore all agreements. Proliferation will increase the risk that wars will go nuclear across the next thirty to forty years, but the stablest situation will be one where the great powers have enough weapons to deter aggression but not enough to kill us all.

The older nuclear powers—the United States, Russia, Britain, France, China—have been moving in this direction since the 1980s. During the Cold War the mathematician, pacifist, and meteorologist (until he abandoned weather research after realizing how much it helped the air force) Lewis Fry Richardson made a widely cited calculation that there was a 15–20 percent likelihood of a nuclear war before 2000. By 2008, however, the energy scientist Vaclav Smil could offer a positively sunny estimate that the chance of even a World War II–scale conflict (killing 50 million people) before 2050 was well under 1 percent, and in January 2010 the Bulletin of the Atomic Scientists moved the minute hand of its celebrated “Doomsday Clock”—indicating how close we stand to Nightfall—back from five minutes to six minutes before midnight.

The second priority is to slow down global weirding. Here things are going less well. In 1997 the world’s great and good gathered at Kyoto to work out a solution, and agreed that by 2012 greenhouse gas emissions must be cut to 5.2 percent below their 1990 levels. The proposed cuts, however, fell mostly on rich Western nations, and the United States—the world’s biggest polluter in the 1990s—refused to ratify the protocol. To many critics this seemed (as an Indian official put it) like “guys with gross obesity telling guys just emerging from emaciation to go on a major diet,” but American policy makers responded that emissions could not be controlled unless India and China (which in 2006 displaced the United States as the world’s biggest polluter) made cuts too.

By 2008 the United States and China were both more interested in change, but the political will needed for comprehensive agreements seems to be lacking. The authors of the Stern Review estimate that the kind of low-carbon technologies, forest preservation, and energy efficiencies that could avert disaster by holding carbon levels to 450 ppm by 2050 will cost about a trillion dollars. Compared to the price of doing nothing, that is trivial, but with their finances in tatters after the 2007–2009 economic crisis, many governments backed away from expensive plans to reduce emissions, and the Copenhagen summit of December 2009 produced no binding agreement.

Despite their obvious differences, nuclear war and global weirding actually both present much the same problem. For five thousand years, states and empires have been the most effective organizations on earth, but as social development has transformed the meaning of geography, these organizations have become less effective. Thomas Friedman has summed it up neatly. “The first era of globalization [roughly 1870–1914] shrank the world from a size ‘large’ to a size ‘medium,’” he observed in 1999, but “this era of globalization [since 1989] is shrinking the world from a size ‘medium’ to a size ‘small.’” Six years later the shrinking had gone so far that Friedman identified a whole new phase, “Globalization 3.0.” This, he suggested, “is shrinking the world from a size small to a size tiny and flattening the playing field at the same time.”

In this tiny, flattened world, there is no place left to hide. Nuclear weapons and climate change (not to mention terrorism, disease, migration, finance, and food and water supply) are global problems that require global solutions. States and empires, which have sovereignty only within their own frontiers, cannot address them effectively.

Einstein pointed out the obvious solution less than a month after atomic bombs destroyed Hiroshima and Nagasaki in 1945. “The only salvation for civilization and the human race,” he told The New York Times, “lies in the creation of world government.” Publicly mocked as a naïve scientist interfering in matters he did not understand, Einstein put his point more bluntly: “If the idea of world government is not realistic, then there is only one realistic view of our future: wholesale destruction of man by man.”

Looking back across the last fifteen thousand years, Einstein does seem to have judged the direction of history correctly. From Stone Age villages through early states such as Uruk and the Shang, early empires such as Assyria and Qin, and oceanic empires such as the British, there has been a clear trend toward bigger and bigger political units. The logical outcome seems to be the rise of an American global empire in the early twenty-first century—or, as the economic balance tilts against the West, a Chinese global empire in the mid or late twenty-first century.

The problem with this logic, though, is that these larger political units have almost always been created through war, precisely the outcome Einstein’s world government is supposed to prevent. If the only way to avoid nuclear war is a world government, and if the only way to create a world government is through a Sino-American nuclear war, the outlook is grim.

In fact, though, neither of these propositions is entirely true. Since 1945, nonstate organizations have taken on more and more functions. These organizations range from charities and private multinational corporations that operate beneath the umbrella of states to federations such as the European Union, United Nations, and World Trade Organization that impinge on state sovereignty. States certainly remain the guarantors of security (the United Nations has done little better than the League of Nations at stopping wars) and finance (in 2008–2009 it took government bailouts to save capitalism), and will not fade away anytime soon; but the most effective way to hold back Nightfall for another forty years may be by enmeshing states more deeply with nonstate organizations, getting governments to surrender some of their sovereignty in return for solutions that they might be unable to reach independently.

That will be a messy business, and as so often in the past, new challenges will call for new thought. But even if we manage in the next half-century to create institutions that can find global solutions for global problems, this will still only be a necessary rather than a sufficient condition for the Singularity to win the race.

We might compare our situation with what happened in the first, eleventh, and seventeenth centuries, when social development pressed against the hard ceiling at forty-three points on the index. I suggested in Chapter 11 that the only way the Romans or the Song could have broken through in the first and eleventh centuries was by doing what Europe and China did in the seventeenth century: that is, by restructuring geography by closing the steppe highway and creating an oceanic highway. Only then would they have bought themselves security from migrations, raised the kinds of questions that called for a scientific revolution, and begun creating the kinds of incentives that would set off an industrial revolution. Neither the Romans nor the Song, of course, were able to do this, and within a few generations migration, disease, famine, and state failure combined with climate change to set off Eurasia-wide collapses.

When Europeans and Chinese did restructure geography in the seventeenth century, they pushed the hard ceiling upward, though as we saw in Chapter 9 they did not shatter it. By 1750 problems were mounting once again, but by that time British entrepreneurs had used the time that geographical restructuring had bought to begin a revolution in energy capture.

In the twenty-first century we need to follow a similar path. First we must restructure political geography to make room for the kinds of global institutions that might slow down war and global weirding; then we must use the time that buys to carry out a new revolution in energy capture, shattering the fossil-fuel ceiling. Carrying on burning oil and coal like we did in the twentieth century will bring on Nightfall even before the hydrocarbons run out.

Some environmentalists recommend a different approach, urging us to return to simpler lifestyles that reduce energy use enough to halt global weirding, but it is hard to see how this will work. World population will probably grow by another 3 billion before it peaks at 9 billion around 2050, and hundreds of millions of these people are likely to rise out of extreme poverty, using more energy as they do so. David Douglas, the chief sustainability officer at Sun Microsystems, points out that if each of these new people owns just one 60-watt incandescent lightbulb, and if each of them uses it just four hours per day, the world will still need to bring another sixty or so 500-megawatt power plants on line. The International Energy Agency expects world oil demand to rise from 86 million barrels per day in 2007 to 116 million in 2030; and even then, they estimate, 1.4 billion people will still be without electricity.

The double whammy of the world’s poor multiplying and getting richer as they do so makes it most unlikely that energy capture will fall over the next fifty years. If we use less energy for fertilizers or for fuel to move food around, hundreds of millions of the poor will starve, which will probably bring on Nightfall faster than anything. But if people do not starve, they will demand more and more energy. In China alone, fourteen thousand new cars hit the roads every day; 400 million people (more than the entire population of the United States) will probably flee low-energy farms for high-energy cities between 2000 and 2030; and the number of travelers vacationing overseas, burning jet fuel and staying in hotels, will probably increase from 34 million in 2006 to 115 million in 2020.

We are not going to reduce energy capture unless catastrophe forces us to—which means that the only way to avoid running out of resources, poisoning the planet, or both, will be by tapping into renewable, clean power.

Atomic energy will probably be a big part of this. Fears about radiation have shackled nuclear programs since the 1970s, but may fall away as the new age gets new thought. Or perhaps solar power will be more important: only one-half of one-billionth of the energy that the sun emits comes to Earth, and roughly one-third of that is reflected back again. Even so, enough solar energy reaches us every hour to power all current human needs for a year—if we could harness it effectively. Alternatively, nanotechnology and genetics may deliver radically new sources of energy. Much of this of course sounds like science fiction, and it will certainly take enormous technological leaps to usher in such an age of abundant clean energy. But if we do not make such leaps—and soon—Nightfall will win the race.

For the Singularity to win, we need to keep the dogs of war on a leash, manage global weirding, and see through a revolution in energy capture. Everything has to go right. For Nightfall to win, only one thing needs to go wrong. The odds look bad.

THE SHAPE OF THINGS TO COME

Some scientists think they already know who will win the race, because the answer is written in the stars. One day around 1950 (no one remembers exactly when) the physicist Enrico Fermi and three of his colleagues met for lunch at the Los Alamos National Laboratory in New Mexico. After laughing about a New Yorker cartoon showing a flying saucer, they moved on to extraterrestrials in general before turning to more conventional scientific topics. Suddenly Fermi burst out: “But where are they?”

It took Fermi’s lunch mates a moment or two to realize that he was still worrying about spacemen. Running a few numbers through his head while eating, it had struck him that even if only a vanishingly small proportion of our galaxy’s 250 billion stars have habitable planets,* outer space should still be teeming with aliens. Earth is relatively young, at less than five billion years, so some of these species should be much older and more advanced than us. Even if their spaceships were as slow as our own, it should have taken them at most 50 million years to explore the whole galaxy. So where were they? Why had they not made contact?

In 1967 the astronomers Iosif Shklovskii and Carl Sagan offered a sobering solution to Fermi’s paradox. If just one star in every quarter of a million is orbited by just one habitable planet, they calculated, there would be a million potential alien civilizations in the Milky Way. The fact that we have not heard from any of them,* Shklovskii and Sagan concluded, must mean that advanced civilizations always destroy themselves. The astronomers even suggested that they must invariably do so within a century of inventing nuclear weapons, since otherwise the aliens would have plenty of time to fill the cosmos with signals that we would pick up. All the evidence (or, strictly speaking, the lack of it), then, points to Nightfall by 2045, the centenary of Hiroshima and Nagasaki. (By a slightly unsettling coincidence, 2045 is also the year Kurzweil nominated for the Singularity.)

It is a clever argument, but as always, there is more than one way to do the numbers. A million civilizations rushing into Nightfall is only a guess, and most solutions of the Drake Equation (dreamed up by the astronomer Frank Drake in 1961 as a rough way to calculate the number of civilizations in the galaxy) in fact generate much lower scores. Drake himself calculated that our galaxy has produced just ten advanced civilizations in its entire history, in which case ET could be out there without us knowing.

In the end Fermi’s paradox is not very helpful, because the answer to how the great race will turn out lies not in the stars but in our own past. Even if history cannot give us the precise tools of prediction that Asimov imagined in Foundation, it does provide some rather solid hints. These, I suspect, are the only real foundation for looking forward.

In the short term, the patterns established in the past suggest that the shift of wealth and power from West to East is inexorable. The transformation of the old Eastern core into a Western periphery in the nineteenth century allowed the East to discover advantages in its backwardness, and the latest of these—the incorporation of China’s vast, poor workforce into the global capitalist economy—is still playing out. Bungling, internal divisions, and external wars may hold China back, as they did so often between the 1840s and 1970s, but sooner or later—probably by 2030, almost certainly by 2040—China’s gross domestic product will overtake that of the United States. At some point in the twenty-first century China will use up the advantages of its backwardness, but when that happens the world’s center of economic gravity will probably still remain in the East, expanding to include South and Southeast Asia. The shift in power and wealth from West to East in the twenty-first century is probably as inevitable as the shift from East to West that happened in the nineteenth century.

The West-to-East shift will surely be faster than any in earlier history, but the old Western core currently has a huge lead in per capita energy capture, technology, and military capacity, and will almost certainly maintain its rule in some form through the first half of this century. So long as the United States is strong enough to act as globocop, major wars should be as rare as they were when Britain was globocop in the nineteenth century. But beginning somewhere between 2025 and 2050, America’s lead over the rest of the world will narrow, as Britain’s did after about 1870, and the risks of a new world war will increase.

The speed of technological change may well add to the instability by making access to high-tech weapons easier. According to Steven Metz, a professor at the United States Army War College, “We will see if not identical technologies, then parallel technologies being developed [outside the United States], particularly because of the off-the-shelf nature of it all. We’ve reached the point where the bad guys don’t need to develop it; instead they can just buy it.” A RAND Corporation report even suggested in 2001 that “the U.S. and its military must include in its planning for possible military conflict the possibility that China may be more advanced technologically and militarily by 2020.”

The United States will probably be the first nation to develop a functional antimissile shield, as well as robots and nanoweapons that render human combatants obsolete, cybertechnology that can neutralize or seize control of enemy computers and robots, and satellites that militarize space. One risk is that if—as seems probable—the United States can deploy some or all of these wonder weapons before 2040, its leaders might be tempted to exploit a temporary but enormous technological edge to reverse their long-term strategic decline. Yet I suspect that is unlikely. Even in the feverish atmosphere of the early 1950s the United States resisted the temptation to strike the Soviet Union before it could build up its nuclear arsenal. The real risk is probably that other nations, fearing American military breakthroughs in the next few decades, might prefer striking first to falling even further behind. That kind of thinking played a big part in taking Germany to war in 1914.

It is going to take great statesmanship to preserve the peace in the bewildering twenty-first century. I have argued throughout this book that great men/women and bungling idiots have never played as big a part in shaping history as they have believed they did. Rather than changing the course of history, I suggested, the most that chaps could do was to speed up or slow down the deeper processes driven by maps. Even the most disastrous decisions, such as the wars that Justinian of Byzantium and Khusrau of Persia launched between 530 and 630 CE, just accelerated a collapse that was already under way. Without Justinian’s and Khusrau’s wars, Western social development might have started recovering sooner, but even with them, development did eventually bounce back.

Since 1945, however, leaders really have had the ability to change history. Khrushchev and Kennedy came close to doing so in 1962. Nuclear weapons leave us no margin of error, no second chance. Mistakes used to cause decline and fall; now they cause Nightfall. For the first time in history, leadership really is decisive. We can only hope that our age, like most before it, gets the thought it needs.

I concluded in Chapter 11 that explanations for why the West rules have to be couched in terms of probabilities, not certainties, and this is even truer of the twenty-first century’s great race. Right now the odds are apparently against us, but it does seem to me that if our age is able to get the thought it needs, the odds will steadily shift in the Singularity’s favor.

If renewable, clean energy sources replace hydrocarbons across the next fifty years, they should reduce (though certainly not eliminate) the risk of great powers coming to blows over resources or being drawn into feuds in the arc of instability. They should also slow the process of global weirding, reducing the pressures within the arc, and may boost food production even more dramatically than the industrial revolution did. If robotics makes the advances many scientists anticipate, intelligent machines may save wealthy Europe and Japan from demographic disaster, providing cheaply the labor and care that their aging populations need. If nanotechnology similarly lives up to the hype, we might even start cleaning up the air and oceans by the 2040s.

In the end, though, there is only one prediction we can rely on: neither Nightfall nor the Singularity will actually win the great race, because the race will have no finishing line. When we reach 2045 (Kurzweil’s estimated time of arrival for the Singularity, and Shklovskii and Sagan’s latest date for Nightfall, a century after Hiroshima and Nagasaki) we will not get to declare the end of history and announce a winner. If, as I suspect will happen, we are still holding Nightfall at bay in the mid twenty-first century and social development is soaring past two thousand points, the emerging Singularity will not so much end the race as transform the race—and above all, transform the human race.

Looked at in a really long perspective, the threats that so scare us today seem to have a lot in common with the kinds of forces that have repeatedly pushed evolution into high gear in the past. Time after time, relatively sudden changes in the environment have created conditions in which mutations flourish, transforming the gene pool. About 1.8 million years ago the drying-out of East Africa’s forests apparently allowed freaks with big brains to fare better than Homo habilis. A brutal phase in the Ice Age about a hundred thousand years ago may have given Homo sapiens an equivalent opportunity to shine. And now, in the twenty-first century, something similar is perhaps happening again.

Mass extinctions are already under way, with one species of plant or land animal disappearing every twenty minutes or so. A 2004 study estimated that the cheeriest possible outcome is that 9 percent of the world’s 10 million species of plants and land animals will face extinction by 2050, and plenty of biologists expect biodiversity to shrink by as much as one-third or one-half. Some even speak of a sixth mass extinction,* with two-thirds of Earth’s species dying out by 2100. Humans may be among them; but rather than simply wiping Homo off the planet, the harsh conditions of the twenty-first century might act like those 1.8 million or a hundred thousand years ago, creating an opportunity for organisms with new kinds of brains—in this case, brains that merge man and machine—to replace older beings. Far from trampling us, the hoofbeats of the horsemen of the apocalypse might serve to turn our baby steps toward a Singularity into a new great leap.

The Singularity, however, might be every bit as scary as Nightfall. In Kurzweil’s vision, the Singularity culminates with the merging of human and machine intelligence in the 2040s, and those of us who live long enough for this might in effect live forever; but some of the humans who have the most experience with this—technologists in the United States Army—doubt that things will stop at that point. The former colonel Thomas Adams, for instance, suspects that war is already moving beyond “human space” as weapons become “too fast, too small, too numerous, and … create an environment too complex for humans to direct.” Technology, he suggests, is “rapidly taking us to a place where we may not want to go, but probably are unable to avoid.” The merging of humans and computers may be just a brief phase before what we condescendingly call “artificial” intelligence replaces Homo sapiens as thoroughly as Homo sapiens replaced all earlier ape-men.

If this is where a Singularity takes us in the later twenty-first century, it will mean the end of biology as we have known it, and with it the end of sloth, fear, and greed as the motors of history. In that case my Morris Theorem—that change is caused by lazy, greedy, frightened people (who rarely know what they’re doing) looking for easier, more profitable, and safer ways to do things—will finally reach its limits.

Sociology as we know it will go the same way, though what kinds of rules will govern a robotic society is anyone’s guess; and the Singularity will surely obliterate the old geography. The ancient distinctions between East and West will be irrelevant to robots.

When historians (if such things still exist) look back from 2103 on the shift from carbon- to silicon-based intelligence, it may strike them as inevitable—as inevitable, in fact, as I have claimed that the earlier shifts from foraging to farming, villages to cities, and agriculture to industry were. It may seem just as obvious that the regional traditions that had grown from the original agricultural cores since the end of the Ice Age were bound to merge into a single posthuman world civilization. The early twenty-first century’s anxiety over why the West ruled and whether it would keep on doing so might look a little ridiculous.

THE TWAIN MEET

There is a certain irony in all this. I began this book with a what-if story about the Chinese Empire taking Prince Albert to Beijing as a hostage in 1848, and then spent eleven chapters explaining why that didn’t happen. The answer to the book’s main question, I concluded, is geography; maps, not chaps, sent the little dog Looty to Balmoral rather than Albert to Beijing.

In this chapter I took the argument further, suggesting that explaining why the West rules also largely answers the question of what will happen next. As surely as geography dictated that the West would rule, it also dictates that the East will catch up, exploiting the advantages of its backwardness until its social development overtakes the West’s. But here we encounter another irony. Rising social development has always changed the meaning of geography, and in the twenty-first century, development will rise so high that geography will cease to mean anything at all. The only thing that will count is the race between a Singularity and Nightfall. To keep Nightfall at bay we will have to globalize more and more of our concerns, and arguments about which part of the world has the highest social development will matter less and less.

Hence the deepest irony: answering the book’s first question (why the West rules) to a great extent also answers the second (what will happen next), but answering the second robs the first of much of its significance. Seeing what is coming next reveals what should, perhaps, have been obvious all along—that the history that really matters is not about the East, the West, or any other subsection of humanity. The important history is global and evolutionary, telling the story of how we got from single-celled organisms to the Singularity.

I have argued throughout the book that neither long-term lock-in nor short-term accident theories explain history very well, but now, once again, I want to go further. In the really long run, on the time scale of evolutionary history, neither long-term lock-in nor short-term accident theories actually matter very much. Fifteen thousand years ago, before the Ice Age ended, East and West meant little. A century from now they will once again mean little. Their importance in the intervening era was just a side effect of geography between the age when the first farmers pushed social development past about six points and that when the first machine-enhanced, postbiological creatures push social development past five thousand points. By the time that happens—somewhere, I suspect, between 2045 and 2103—geography will no longer mean very much at all. East and West will be revealed as merely a phase we went through.

Even if everything in this phase had gone as differently as could be imagined—if, say, Zheng He had really gone to Tenochtitlán, if there had been a new kind of Pacific rather than a new kind of Atlantic economy, if there had been a Chinese rather than a British industrial revolution, and if Albert had gone to Beijing rather than Looty to Balmoral—the deep forces of biology, sociology, and geography would still have pushed history in much the same direction. America (or Zhengland, as we might now call it) would have become part of the Eastern rather than the Western core and the West would now be catching up with the East rather than the other way around, but the world would still have shrunk from size large to size small and would still now be shrinking to size tiny. The early twenty-first century would still have been dominated by Chimerica, and whether it fell or not, the race between Nightfall and the Singularity would still be going on. And East and West would still be losing their significance.

This should not be a shocking conclusion. As long ago as 1889, while the world was still shrinking from size large to size medium, a young poet named Rudyard Kipling could already see part of the same truth. Freshly back in London from the far-flung battle line, Kipling got his big break with a ripping yarn of imperial derring-do called “The Ballad of East and West.”* It tells the story of Kamal, a border raider who steals an English colonel’s mare. The colonel’s son leaps onto his own horse and pursues Kamal through the desert in a chase of epic proportions (“They have ridden the low moon out of the sky, their hoofs drum up the dawn, / The dun he went like a wounded bull, but the mare like a new-roused fawn”). Finally, though, the Englishman is thrown. Kamal charges back at him, rifle raised. But all ends well: the two men “looked each other between the eyes, and there they found no fault, / They have taken the Oath of the Brother-in-Blood on leavened bread and salt.”

Stirring stuff, but it is the poem’s opening line—“Oh, East is East, and West is West, and never the twain shall meet”—that gets all the attention, mostly from people quoting it as an example of the nineteenth-century West’s insufferable self-satisfaction. Yet that was surely not the effect Kipling was hoping for. What he actually wrote was:

Oh, East is East, and West is West, and never the twain shall meet,

Till Earth and Sky stand presently at God’s great Judgment Seat;

But there is neither East nor West, Border, nor Breed, nor Birth,

When two strong men stand face to face,

tho’ they come from the ends of the earth!

As Kipling saw it, people (real men, anyway) are all much the same; it is just geography that obscures the truth, requiring us to take a trip to the ends of the earth to figure things out. But in the twenty-first century, soaring social development and a shrinking world are making such trips unnecessary. There will be neither East nor West, border, nor breed, nor birth when we transcend biology. The twain shall finally meet if we can just put off Nightfall long enough.

Can we do that? I think the answer is yes. The great difference between the challenges we face today and those that defeated Song China when it pressed against the hard ceiling a thousand years ago and the Roman Empire another thousand before that is that we now know so much more about the issues involved. Unlike the Romans and the Song, our age may yet get the thought it needs.

On the last page of his book Collapse, the biologist and geographer Jared Diamond suggested that there are two forces that might save the world from disaster: archaeologists (who uncover the details of earlier societies’ mistakes) and television (which broadcasts their findings). As an archaeologist who watches a lot of television, I certainly agree, but I also want to add a third savior, history. Only historians can draw together the grand narrative of social development; only historians can explain the differences that divide humanity and how we can prevent them from destroying us.

This book, I hope, might help a little in the process.