3
ARCHAEOLOGY EVOLVING
Social evolution was still rather a new idea when cultural anthropologists launched the rebellion against it described at the end of Chapter 2. The word’s modern sense goes back only to 1857, when Herbert Spencer, a homeschooled English polymath, published an essay called “Progress: Its Law and Cause.” Spencer was an odd character, who had already tried his hand at being a railway engineer, a copy editor at the then brand-new magazine The Economist, and a romantic partner of the lady novelist George Eliot (none of which suited him; he never held a steady job or married). This essay, though, was an overnight sensation. In it Spencer explained, “From the remotest past which Science can fathom, up to the novelties of yesterday, that in which progress essentially consists, is the transformation of the homogeneous into the heterogeneous.” Evolution, Spencer insisted, is the process by which things begin simply and get more complex, and it explains everything about everything:
The advance from the simple to the complex, through a process of successive differentiations, is seen alike in the earliest changes of the Universe to which we can reason our way back, and in the earliest changes which we can inductively establish; it is seen in the geologic and climatic evolution of the Earth; it is seen in the unfolding of every single organism on its surface, and in the multiplication of kinds of organisms; it is seen in the evolution of Humanity, whether contemplated in the civilized individual, or in the aggregate of races; it is seen in the evolution of Society in respect alike of its political, its religious, and its economical organization; and it is seen in the evolution of all those endless concrete and abstract products of human activity which constitute the environment of our daily life.
Spencer spent the next forty years bundling geology, biology, psychology, sociology, politics, and ethics into a single evolutionary theory. He succeeded so well that by 1870 he was probably the most influential philosopher writing in English, and when Japanese and Chinese intellectuals decided they needed to understand the West’s achievements, he was the first author they translated. The great minds of the age bowed to his ideas. The first edition of Charles Darwin’s On the Origin of Species, published in 1859, did not contain the word “evolution”; nor did the second or third, nor even the fourth or fifth. But in the sixth edition, in 1872, Darwin felt compelled to borrow the term that Spencer had by now popularized.*
Spencer believed that societies had evolved through four levels of differentiation, from the simple (wandering bands without leaders) through the compound (stable villages with political leaders) and doubly compound (groups with churches, states, complex divisions of labor, and scholarship) to the trebly compound (great civilizations like Rome and Victorian Britain). The scheme caught on, though no two theorists quite agreed on how to label the stages. Some spoke of evolution from savagery through barbarism to civilization; others preferred evolution from magic through religion to science. By 1906 the forest of terminologies was so annoying that Max Weber, the founding father of sociology, complained about “the vanity of contemporary authors who conduct themselves in the face of a terminology used by someone else as if it were his toothbrush.”
Whatever the labels evolutionists used, though, they all faced the same problem. They had a gut feeling that they must be right, but little hard evidence to prove it. The newly forming discipline of anthropology therefore set out to supply data. Some societies, the thinking went, are less evolved than others: the colonized peoples of Africa or the Trobriand Islands, with their stone tools and colorful customs, are like living ancestors, reflecting what civilized people in trebly compound societies must have been like in prehistory. All that the anthropologist had to do (apart from putting up with malaria, internal parasites, and ungrateful natives) was take good notes, and he (not too often she in those days) could come home and fill in the gaps in the evolutionary story.
It was this intellectual program that Malinowski rejected. In a way, though, it is odd that the issue came up at all. If evolutionists wanted to document progress, why not do so directly, using archaeological data, the physical remains left behind by actual prehistoric societies, rather than indirectly, using anthropological observations of contemporary groups and speculating that they were survivals? The answer: archaeologists a century ago just did not know very much. Serious excavation had barely begun, so evolutionists had to combine the skimpy information in archaeological reports with incidental details from ancient literature and random ethnographic accounts—which made it all too easy for Malinowski and like-minded anthropologists to expose evolutionists’ reconstructions as speculative just-so stories.
Archaeology is a young science. As little as three centuries ago, our most ancient evidence about history—China’s Five Classics, the Indian Vedas, the Hebrew Bible, and the Greek poet Homer—barely reached back to 1000 BCE. Before these masterpieces, all was darkness. The simple act of digging things up changed everything, but it took a while. When Napoleon invaded Egypt in 1799 he brought with him a legion of scholars, who copied down or carried off dozens of ancient inscriptions. In the 1820s French linguists unlocked the secrets of these hieroglyphic texts, abruptly adding two thousand years to documented history. Not to be outdone, in the 1840s British explorers tunneled into ruined cities in the lands that are now Iraq or, hanging from ropes, transcribed royal inscriptions in the mountains of Iran; before the decade was over, scholars could read Old Persian, Assyrian, and the wisdom of Babylon.
When Spencer started writing about progress in the 1850s, archaeology was still more adventure than science, bursting with real-life Indiana Joneses. It was only in the 1870s that archaeologists began applying the geological principle of stratigraphy (the commonsense insight that since the uppermost layers of earth on a site must have got there after the lower layers, we can use the sequence of deposits to reconstruct the order of events) to their digs, and stratigraphic analysis became mainstream only in the 1920s. Archaeologists still depended on linking their sites with events mentioned in ancient literature to date what they excavated, and so until the 1940s finds in most parts of the world floated in a haze of conjecture and guesswork. That ended when nuclear physicists discovered radiocarbon dating, using the decay of unstable carbon isotopes in bone, charcoal, and other organic finds to tell how old objects were. Archaeologists began imposing order on prehistory, and by the 1970s a global framework was taking shape.
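The arithmetic behind radiocarbon dating is simple exponential decay. A minimal sketch, assuming the conventional carbon-14 half-life of roughly 5,730 years (the function and values here are illustrative, not any laboratory’s actual calibration):

```python
import math

C14_HALF_LIFE = 5730.0  # years; the conventional modern value

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years elapsed, given the fraction of the original carbon-14
    still present in a bone, charcoal, or other organic sample."""
    return -C14_HALF_LIFE / math.log(2) * math.log(fraction_remaining)

one_half_life = radiocarbon_age(0.5)    # half the isotope gone
two_half_lives = radiocarbon_age(0.25)  # three-quarters gone
```

In practice archaeologists must also calibrate raw radiocarbon years against known fluctuations in atmospheric carbon-14, so real dates are messier than this decay curve suggests.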
When I was a graduate student in the 1980s one or two senior professors still claimed that when they had been students their teachers had advised them that the only essential tools for fieldwork were a tuxedo and a small revolver. I am still not sure whether I should have believed them, but whatever the truth of the matter, the James Bond era was certainly dying by the 1950s. The real breakthroughs increasingly came from the daily grind of an army of professionals, grubbing facts, pushing further into prehistory, and fanning out across the globe.
Museum storerooms were overflowing with artifacts and library shelves groaning under the weight of technical monographs, but some archaeologists worried that the fundamental question—what does it all mean?—was going unanswered. The situation in the 1950s was the mirror image of the 1850s: where once grand theory sought data, now data cried out for theory. Armed with their hard-won results, mid-twentieth-century social scientists, particularly in the United States, felt ready for another crack at theorizing.
Calling themselves neo-evolutionists to show that they were more advanced than fuddy-duddy “classical” evolutionists like Spencer, some social scientists began suggesting that while it was wonderful to have so many facts to work with, the mass of evidence had itself become part of the problem. The important information was buried in messy narrative accounts by anthropologists and archaeologists or in historical documents: in short, it was not scientific enough. To get beyond the forest of nineteenth-century typologies and create a unifying theory of society, the neo-evolutionists felt, they needed to convert these stories into numbers. By measuring differentiation and assigning scores they could rank societies and then search for correlations between the scores and possible explanations. Finally, they could turn to questions that might make all the time and money spent on archaeology worthwhile—whether there is just one way for societies to evolve, or multiple ways; whether societies cluster in discrete evolutionary stages (and if so, how they move from one stage to another); or whether a single trait, such as population or technology (or, for that matter, geography), explains everything.
In 1955 Raoul Naroll, an anthropologist working on a vast multi-university data-gathering project called the Human Relations Area Files, took the first serious stab at what he described as an index of social development. Randomly choosing thirty preindustrial societies from around the world (some contemporary, others historical), he trawled the files to find out how differentiated they were, which, he thought, would be reflected in how big their largest settlements were, how specialized their craftworkers were, and how many subgroups they had. Converting the results to a standard format, Naroll handed out scores. At the bottom were the Yahgan people of Tierra del Fuego, who had impressed Darwin in 1832 as “exist[ing] in a lower state of improvement than [those] in any other part of the world.” They scored just twelve out of a possible sixty-three points. At the top were the pre-Spanish-conquest Aztecs, with fifty-eight points.
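The logic of Naroll’s index can be sketched in a few lines. What follows is a toy illustration, not his actual coding scheme: the trait maxima and the sample societies’ raw numbers are hypothetical, chosen only to show how normalizing three traits onto a common scale and summing them yields a score out of sixty-three.

```python
def trait_score(value: float, best: float) -> int:
    # Map a raw count onto a 0-21 scale, so that three traits
    # together can total at most 63 points.
    return round(21 * min(value, best) / best)

def development_index(settlement, specialists, subgroups, maxima) -> int:
    # Sum the normalized scores for largest-settlement size,
    # craft specialization, and number of subgroups.
    traits = zip((settlement, specialists, subgroups), maxima)
    return sum(trait_score(value, best) for value, best in traits)

# Hypothetical "best observed" values for the three traits.
maxima = (200_000, 60, 40)

top_society = development_index(200_000, 60, 40, maxima)  # maxes the scale
small_band = development_index(80, 2, 3, maxima)          # near the bottom
```

The point of the exercise is not the particular numbers but that, once every society is scored the same way, rankings and correlations become possible.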
Over the next twenty years other anthropologists tried their hands at the game. Despite the fact that each used different categories, data sets, mathematical models, and scoring techniques, they agreed on the results between 87 and 94 percent of the time, which is pretty good for social science. Fifty years after Spencer’s death, a hundred after his essay on progress, neo-evolutionists looked poised to prove the laws of social evolution.
ANTHROPOLOGY DEVOLVING
So what happened? If neo-evolutionists had delivered the goods and explained everything about social evolution, we would all have heard about it. And more to the point right now, they would already have answered the why-the-West-rules question. That question is, after all, about the relative levels of development of Eastern and Western societies: whether, as long-term lock-in theorists claim, the West pulled ahead long ago, or, as short-term accident theorists would have it, the West’s lead is very recent. If neo-evolutionists could measure social development we would not have to mess around with complicated diagrams like Table 2.1. It would just be a matter of calculating Eastern and Western scores at various points since the end of the Ice Age, comparing them, and seeing which theory corresponds better with reality. So why has no one done this?
Largely, I suspect, because neo-evolutionism imploded. Even before Naroll took up his slide rule in the 1950s, the desire to measure societies struck many anthropologists as naïve. The “law-and-order crowd” (as critics called Naroll and his ilk), with their punch cards of coded data, arcane debates about statistics, and warehouse-size computers, seemed strangely divorced from the reality of archaeologists digging trenches or anthropologists interviewing hunter-gatherers; and as the times started a-changing in the 1960s, neo-evolutionism began to look not so much ridiculous as downright sinister. The anthropologist Marshall Sahlins, for example, whose “Original Affluent Society” essay I mentioned in Chapter 2, had begun his career in the 1950s as an evolutionist, but in the 1960s decided that “sympathy and even admiration for the Vietnamese struggle, coupled to moral and political disaffection with the American war, might undermine an anthropology of economic determinism and evolutionary development.”
By 1967, when Sahlins was in Paris arguing that hunter-gatherers were not really poor, a new generation of anthropologists—who had cut their teeth on America’s civil rights, antiwar, and women’s movements, and were often steeped in the counterculture—was staking out much tougher positions. The only thing evolutionists were really doing, they suggested, was ranking non-Western societies by how much they resembled the Westerners doing the measuring, who—amazingly—always gave themselves the highest scores.
“Evolutionary theories,” the archaeologists Michael Shanks and Christopher Tilley wrote in the 1980s, “easily slip into ideologies of self-justification or assert the priorities of the West in relation to other cultures whose primary importance is to act as offsets for our contemporary ‘civilization.’” Nor, many critics felt, was this confidence in numbers merely a harmless game Westerners played to make themselves feel good; it was part and parcel of the hubris that had given us carpet bombing, the Vietnam War, and the military-industrial complex. Hey hey, ho ho, LBJ had got to go; and so, too, the professors of ethnocentrism with their arrogance and their mathematics.
The sit-ins and name-calling turned an academic debate into a Manichean showdown. To some evolutionists, their critics were morally bankrupt relativists; to some critics, evolutionists were stooges of American imperialism. Through the 1980s and ’90s anthropologists fought it out in hiring, tenure, and graduate admissions committees, ruining careers and polarizing scholarship. Anthropology departments on America’s most famous campuses degenerated into something resembling bad marriages, until, broken down by years of mutual recriminations, the couples started leading separate lives. “We no longer [even] call each other names,” one prominent anthropologist lamented in 1984. In the extreme case—at Stanford, my own university—the anthropologists divorced in 1998, formally splitting into the Department of Anthropological Sciences, which liked evolution, and the Department of Cultural and Social Anthropology, which did not. Each did its own hiring and firing and admitted and trained its own students; members of one group had no need to acknowledge members of the other. They even gave rise to a new verb, to “stanfordize” a department.
The woes—or joys, depending on who was talking—of stanfordization kept anthropologists entertained in bars at professional conferences for several years, but stanfordizing is not much of a solution to one of the biggest intellectual problems in the social sciences.* If we are going to explain why the West rules we need to confront the arguments on both sides of this issue.
Social evolution’s critics were surely right that the law-and-order crowd was guilty of hubris. Like Herbert Spencer himself, in trying to explain everything about everything they perhaps ended up explaining rather little about anything. There was a lot of confusion over what neo-evolutionists were actually measuring, and even when they agreed on just what was supposed to be evolving within societies (which mostly happened when they stuck to Spencer’s favorite idea of differentiation) it was not always obvious what ranking the world’s societies in a league table would actually accomplish.
Score sheets, the critics insisted, obscure more than they reveal, masking the peculiarities of individual cultures. I certainly found that to be true when I was studying the origins of democracy in the 1990s. The ancient Greek cities that invented this form of government were really peculiar; many of their residents honestly believed that instead of asking priests what the gods thought, the best way to find the truth was to get all the men together on the side of a hill, argue, and take a vote. Giving ancient Greece a score for differentiation does not explain where democracy came from, and burying the Greeks’ peculiarity somewhere in an index of social development can actually make the task harder by diverting attention from their unique achievements.
Yet that does not mean that an index of social development is a waste of time; just that it was the wrong tool for that specific question. Asking why the West rules is a different kind of question, a grand comparative one that requires us to range across thousands of years of history, look at millions of square miles of territory, and bring together billions of people. For this task an index of social development is exactly the tool we need. The disagreement between long-term lock-in and short-term accident theories is, after all, about the overall shape of social development in East and West across the ten or so millennia that “East” and “West” have been meaningful concepts. Instead of concentrating on this and directly confronting each other’s arguments, long-termers and short-termers tend to look at different parts of the story, use different bodies of evidence, and define their terms in different ways. Following the law-and-order crowd’s lead and reducing the ocean of facts to simple numerical scores has drawbacks but it also has the one great merit of forcing everyone to confront the same evidence—with surprising results.
WHAT TO MEASURE?
The first step is to figure out exactly what we need to measure. We could do worse than listen to Lord Robert Jocelyn, who fought in the Opium War that made Western rule clear to all. On a sweltering Sunday afternoon in July 1840 he watched as British ships approached Tinghai, where a fort blocked their approach to the Yangzi River mouth. “The ships opened their broadsides upon the town,” Jocelyn wrote, “and the crashing of timber, falling houses, and groans of men resounded from the shore. The firing lasted from our side for nine minutes … We landed on a deserted beach, a few dead bodies, bows and arrows, broken spears and guns remaining the sole occupants of the field.”
The immediate cause of Western rule is right here: by 1840 European ships and guns could brush aside anything an Eastern power could field. But there was, of course, more to the rise of Western rule than military power alone. Armine Mountain, another officer with the British fleet in 1840, likened the Chinese force at Tinghai to something out of the pages of medieval chronicles: it looked “as if the subjects of [those] old prints had assumed life and substance and colour,” he mused, “and were moving and acting before me unconscious of the march of the world through centuries, and of all modern usage, invention, or improvement.”
Mountain grasped that blowing up ships and forts was merely the proximate cause of Western dominance, the last link in a long chain of advantages. A deeper cause was that British factories could turn out explosive shells, well-bored cannon, and oceangoing warships, and British governments could raise, fund, and direct expeditions operating halfway round the world; and the ultimate reason that the British swept into Tinghai that afternoon was their success at extracting energy from the natural environment and using it to achieve their goals. It all came down to the fact that Westerners had not only scrambled further up the Great Chain of Energy than anyone else but also scrambled so high that—unlike any earlier societies in history—they could project their power across the entire world.
This process of scrambling up the Great Chain of Energy is the foundation of what, following the tradition of evolutionary anthropologists since Naroll in the 1950s, I will call social development—basically, a group’s ability to master its physical and intellectual environment to get things done.* Putting it more formally, social development is the bundle of technological, subsistence, organizational, and cultural accomplishments through which people feed, clothe, house, and reproduce themselves, explain the world around them, resolve disputes within their communities, extend their power at the expense of other communities, and defend themselves against others’ attempts to extend power. Social development, we might say, measures a community’s ability to get things done, which, in principle, can be compared across time and space.
Before we go any further with this line of argument, there is one point I need to make in the strongest possible terms: measuring and comparing social development is not a method for passing moral judgment on different communities. For example, twenty-first-century Japan is a land of air-conditioning, computerized factories, and bustling cities. It has cars and planes, libraries and museums, high-tech healthcare and a literate population. The contemporary Japanese have mastered their physical and intellectual environment far more thoroughly than their ancestors a thousand years ago, who had none of these things. It therefore makes sense to say that modern Japan is more developed than medieval Japan. Yet this implies nothing about whether the people of modern Japan are smarter, worthier, or luckier (let alone happier) than the Japanese of the Middle Ages. Nor does it imply anything about the moral, environmental, or other costs of social development. Social development is a neutral analytical category. Measuring it is one thing; praising or blaming it is another altogether.
I will argue later in this chapter that measuring social development shows us what we need to explain if we are to answer the why-the-West-rules question; in fact, I will propose that unless we come up with a way to measure social development we will never be able to answer this question. First, though, we need to establish some principles to guide our index-making.
I can think of nowhere better to start than with Albert Einstein, the most respected scientist of modern times. Einstein is supposed to have said that “in science, things should be made as simple as possible, but no simpler”: that is, scientists should boil their ideas down to the core point that can be checked against reality, figure out the simplest possible way to perform the check, then do just that—nothing more, but nothing less either.
Einstein’s own theory of relativity provides a famous example. Relativity implies that gravity bends light, meaning—if the theory is right—that every time the sun passes between Earth and another star, the sun’s gravity will bend the light coming from that star, making the star appear to shift position slightly. That provides an easy test of the theory—except for the fact that the sun is so bright that we cannot see stars near it. But in 1919 the British astronomer Arthur Eddington came up with a clever solution, very much in the spirit of Einstein’s aphorism: by looking at the stars near the sun during a solar eclipse, Eddington realized, he could measure whether they had shifted by the amount Einstein predicted.
Eddington set off to the South Pacific, made his observations, and pronounced Einstein correct. Acrimonious arguments ensued, because the difference between results that supported Einstein and results that disproved him was tiny, and Eddington was pushing the instruments available in 1919 to their very limits; yet despite the theory of relativity’s complexity,* astronomers could agree on what they needed to measure and how to measure it. It was then just a matter of whether Eddington had got the measurements right. Coming down from the sublime movement of the stars to the brutal bombardment of Tinghai, though, we immediately see that things are much messier when we are dealing with human societies. Just what should we be measuring to assign scores to social development?
If Einstein provides our theoretical lead, we might take a practical lead from the United Nations Human Development Index, not least because it has a lot in common with the kind of index that will help answer our question. The UN Development Programme devised the index to measure how well each nation is doing at giving its citizens opportunities to realize their innate potential. The Programme’s economists started by asking themselves what human development really means, and boiled it down to three core traits: average life expectancy, average education (expressed by literacy levels and enrollments in school), and average income. They then devised a complicated weighting system to combine the traits to give each country a score between zero, meaning no human development at all (in which case everyone would be dead) and one—perfection, given the possibilities of the real world in the year the survey was done. (In case you’re wondering, in the most recent index available as I write, that for 2009, Norway came first, scoring .971, and Sierra Leone last, with .340.)
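The mechanics of the UN index are worth pausing over, since our own index will work the same way. The sketch below follows the pre-2010 methodology as I understand it: each raw value is normalized between fixed “goalposts,” education blends literacy and enrollment, income is taken on a logarithmic scale, and the three dimension indices are averaged. The goalpost figures here (25 and 85 years for life expectancy, $100 and $40,000 for income) are illustrative assumptions, not official UN parameters.

```python
import math

def goalpost(value: float, lo: float, hi: float) -> float:
    # Normalize a raw value onto a 0-1 scale between fixed "goalposts".
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, literacy, enrollment, gdp_per_capita) -> float:
    # Each dimension becomes a 0-1 index; the HDI is their simple mean.
    life = goalpost(life_expectancy, 25, 85)
    education = (2 / 3) * literacy + (1 / 3) * enrollment
    income = goalpost(math.log(gdp_per_capita),
                      math.log(100), math.log(40_000))
    return (life + education + income) / 3
```

A country hitting every goalpost would score exactly 1.0; one at the bottom of every scale would score 0. Everything interesting, of course, hides in the choice of goalposts and weights, which is precisely what the index’s critics seize on.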
The index satisfies Einstein’s rule, since three traits is probably as simple as the UN can make things while still capturing what human development means. Economists still find a lot not to like about it, though. Most obviously, life expectancy, education, and income are not the only things we could measure. They have the advantage of being relatively easy to define and document (some potential traits, like happiness, would be much harder), but there are certainly other things we could look at (say employment rates, nutrition, or housing) that might generate different scores. Even economists who agree that the UN’s traits are the best ones sometimes balk at conflating them into a single human development score; they are like apples and oranges, these economists say, and bundling them together is ridiculous. Other economists are comfortable both with the variables chosen and with conflating them, but do not like the way the UN statisticians weight each trait. The scores may look objective, these economists point out, but in reality they are highly subjective. Still other critics reject the very idea of scoring human development. It creates the impression, they say, that Norwegians are 97.1 percent of the way toward ultimate bliss, and 2.9 times as blissful as people in Sierra Leone—both of which seem, well, unlikely.
But despite all the criticisms, the human development index has proved enormously useful. It has helped relief agencies target their funds on the countries where they can do most good, and even the critics tend to agree that the simple fact of having an index moves the debates forward by making everything more explicit. An index of social development across the last fifteen-thousand-plus years faces all the same problems as the UN’s index (and then some), but it also, I think, offers some similar advantages.
Like the UN economists, we should aim to follow Einstein’s rule. The index must measure as few dimensions of society as possible (keep it simple) while still capturing the main features of social development as defined above (don’t make it too simple). Each dimension of society that we measure should satisfy six rather obvious criteria. First, it must be relevant: that is, it must tell us something about social development. Second, it must be culture-independent: we might, for example, think that the quality of literature and art are useful measures of social development, but judgments in these matters are notoriously culture-bound. Third, traits must be independent of one another—if, for instance, we use the number of people in a state and the amount of wealth in that state as traits, we should not use per capita wealth as a third trait, because it is just a product of the first two traits. Fourth, traits must be adequately documented. This is a real problem when we look back thousands of years, because the available evidence varies so much. Especially in the distant past, we simply do not know much about some potentially useful traits. Fifth, traits must be reliable, meaning that experts more or less agree on what the evidence says. Sixth, traits must be convenient. This may be the least important criterion, but the harder it is to get evidence for something or the longer it takes to calculate results, the less useful that trait is.
There is no such thing as a perfect trait. Each trait we might choose inevitably performs better on some of these criteria than on others. But after spending many months now looking into the options, I have settled on four traits that I think do quite well on all six criteria. They do not add up to a comprehensive picture of Eastern and Western society, any more than the UN’s traits of life expectancy, education, and income tell us everything there is to know about Norway or Sierra Leone. But they do give us a pretty good snapshot of social development, showing us the long-term patterns that need to be explained if we are to know why the West rules.
My first trait is energy capture. Without being able to extract energy from plants and animals to feed soldiers and sailors who did little farming themselves, from wind and coal to carry ships to China, and from explosives to hurl shells at the Chinese garrison, the British would never have reached Tinghai in 1840 and blown it to pieces. Energy capture is fundamental to social development—so much so that back in the 1940s the celebrated anthropologist Leslie White proposed reducing all human history to a single equation: E × T → C, he pronounced, where E stands for energy, T for technology, and C for culture.
This is not quite as philistine as it sounds. White was not really suggesting that multiplying energy by technology tells us all we might want to know about Confucius and Plato or artists like the Dutch Old Master Rembrandt and the Chinese landscape painter Fan Kuan. When White spoke of “culture” he in fact meant something rather like what I am calling social development. But even so, his formulation is too simple for our purposes. To explain Tinghai we need to know more.
All the energy capture in the world would not have taken a British squadron to Tinghai if they had not been able to organize it. Queen Victoria’s minions had to be able to raise troops, pay and supply them, get them to follow leaders, and carry out a host of other tricky jobs. We need to measure this organizational capacity. Up to a point organizational capacity overlaps with Spencer’s old idea of differentiation, but neo-evolutionists learned in the 1960s that it is almost impossible to measure differentiation directly, or even to define it in a way that will satisfy critics. We need a proxy, something closely related to organizational capacity but easier to measure.
The one I have chosen is urbanism. Perhaps that will seem odd; after all, the fact that London was a big place does not directly reflect Lord Melbourne’s revenue flows or the Royal Navy’s command structure. On further reflection, though, I hope the choice will seem less odd. It took astonishing organization to support a city of 3 million people. Someone had to get food and water in and waste products out, provide work, maintain law and order, put out fires, and perform all the other tasks that go on, day in, day out, in every great city.
It is certainly true that some of the world’s biggest cities today are dysfunctional nightmares, riddled with crime, squalor, and disease. But that, of course, has been true of most big cities throughout history. Rome had a million residents in the first century BCE; it also had street gangs that sometimes brought government to a halt and death rates so high that more than a thousand country folk had to migrate into Rome every month just to make up the numbers. Yet for all Rome’s foulness (brilliantly evoked in the 2006 HBO television series Rome), the organization needed to keep the city going was vastly beyond anything any earlier society could have managed—just as running Lagos (population 11 million) or Mumbai (population 19 million), let alone Tokyo (population 35 million), would have been far beyond the Roman Empire’s capabilities.
This is why social scientists regularly use urbanism as a rough guide to organizational capacity. It is not a perfect measure, but it is certainly a useful rough guide. In our case, the size of a society’s largest cities has the extra advantage that we can trace it not only in the official statistics produced in the last few hundred years but also in the archaeological record, allowing us to get an approximate sense of levels of organization all the way back to the Ice Age.
As well as generating physical energy and organizing it, the British of course also had to process and communicate prodigious amounts of information. Scientists and industrialists had to transfer knowledge precisely; gunmakers, shipbuilders, soldiers, and sailors increasingly needed to read written instructions, plans, and maps; letters had to move between Asia and Europe. Nineteenth-century British information technology was crude compared to what we now take for granted (private letters needed three months to get from Guangzhou to London; government dispatches, for some reason, needed four), but it had already advanced far beyond eighteenth-century levels, which, in turn, were well ahead of the seventeenth century. Information processing is critical to social development, and I use it as my third trait.
Last but sadly not least is the capacity to make war. However well the British extracted energy, organized it, and communicated, it was their ability to turn these three traits toward destruction that settled matters in 1840. I grumbled in Chapter 1 about Arthur C. Clarke equating evolution with skill at killing in his science-fiction classic 2001: A Space Odyssey, but an index of social development that did not include military power would be no use at all. As Chairman Mao famously put it, “Every Communist must grasp this truth: ‘Political power grows out of the barrel of a gun.’” Before the 1840s, no society could project military power across the whole planet, and to ask who “ruled” was nonsense. After the 1840s, though, this became perhaps the most important question in the world.
Just as with the UN’s human development index, there is no umpire to say that these traits, rather than some other set, are the ultimate way to measure social development, and again like the UN index, any change to the traits will change the scores. The good news, though, is that none of the alternative traits I have looked at over the last few years changed the scores much, and none changed the overall pattern at all.*
If Eddington had been an artist he might have been an Old Master, representing the world at a level of detail painful to behold. But making an index of social development is more like chainsaw art, carving grizzly bears out of tree trunks. This level of roughness and readiness would doubtless have turned Einstein’s hair even whiter, but different problems call for different margins of error. For the chainsaw artist, the only important question is whether the tree trunk looks like a bear; for the comparative historian, it is whether the index shows the overall shape of the history of social development. That, of course, is something historians will have to judge for themselves, comparing the pattern the index reveals with the details of the historical record.
Provoking historians to do this may in fact be the greatest service an index can perform. There is plenty of scope for debate: different traits and different ways of assigning scores might well work better. But putting numbers on the table forces us to focus on where errors might have crept in and how they can be corrected. It may not be astrophysics, but it is a start.
HOW TO MEASURE?
Now it is time to come up with some numbers. It is easy enough to find figures for the state of the world in 2000 CE (since it is such a nice round number, I use this date as the end point for the index). The United Nations’ various programs publish annual statistical digests that tell us, for instance, that the average American consumes 83.2 million kilocalories of energy per year, compared to 38 million for the average person in Japan; that 79.1 percent of Americans live in cities, as against 66 percent of Japanese; that there are 375 Internet hosts per thousand Americans but only 73 per thousand Japanese; and so on. The International Institute for Strategic Studies’ annual Military Balance tells us, so far as it can be known, how many troops and weapons each country has, what their capabilities are, and how much they cost. We are drowning in numbers. They do not add up to an index, though, until we decide how to organize them.
Sticking to the simple-as-possible program, I set 1,000 points as the maximum social development score attainable in the year 2000 and divide these points equally between my four traits. When Raoul Naroll published the first modern index of social development in 1956 he also gave equal points to his three traits, if only, as he put it, “because no obvious reason appeared for giving one any more weight than another.” That sounds like a counsel of despair, but there is actually a good reason for weighting the traits equally: even if I thought up reasons to weight one trait more heavily than another in calculating social development, there would be no grounds to assume that the same weightings have held good across the fifteen-thousand-plus years under review or have applied equally to East and West.
Having set the maximum possible score for each trait in the year 2000 at 250 points, we come to the trickiest part, deciding how to award points to East and West at each stage of their history. I will not go step-by-step through every calculation involved (I summarize the data and some of the main complexities in the appendix at the end of this book, and I have posted a fuller account online),* but it might be useful to take a quick look inside the kitchen, as it were, and explain the procedure a bit more fully. (If you don’t think so, you can of course skip to the next section.)
Urbanism is probably the most straightforward trait, although it certainly has its challenges. The first is definitional: Just what do we mean by urbanism? Some social scientists define urbanism as the proportion of the population living in settlements above a certain size (say, ten thousand people); others, as the distribution of people across several ranks of settlements, from cities down to hamlets; others still, as the average size of community within a country. These are all useful approaches, but are difficult for us to apply across the whole period we are looking at here because the nature of the evidence keeps changing. I decided to go with a simpler measure: the size of the largest known settlement in East and West at each moment in time.
Focusing on largest city size does not do away with definitional problems, since we still have to decide how to define the boundaries of cities and how to combine different categories of evidence for numbers within them. It does, though, reduce the uncertainties to a minimum. When I played around with the numbers I found that combining largest city size with other criteria, such as the best guesses at the distribution of people between cities and villages or the average size of cities, hugely increased the difficulties of the task but hardly changed the overall scores at all; so, since the more complicated ways of measuring produced roughly the same results but with a whole lot more guesswork, I decided to stick to simple city sizes.
In 2000 CE, most geographers classified Tokyo as the world’s biggest city, with about 26.7 million residents.* Tokyo, then, scores the full 250 points allotted to organization/urbanism, meaning that for all other calculations it will take 106,800 people (that is, 26.7 million divided by 250) to score 1 point. The biggest Western city in 2000 CE was New York, with 16.7 million people, scoring 156.37 points. The data from a hundred years ago are not as good, but all historians agree that cities were much smaller. In the West, London had about 6.6 million residents (scoring 61.80 points) in 1900 CE, while in the East Tokyo was still the greatest city, but with just 1.75 million people, earning 16.39 points. By the time we get back to 1800 CE, historians have to combine several different kinds of evidence, including records of food supply and tax payments, the physical area covered by cities, the density of housing within them, and anecdotal accounts, but most conclude that Beijing was the world’s biggest city, with perhaps 1.1 million souls (10.30 points). The biggest Western city was again London, with about 861,000 people (8.06 points).
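The scoring rule just described is simple enough to express in a few lines of code. The sketch below is mine, not anything from the book’s appendix (the helper name `city_points` is invented), but it reproduces the figures quoted in the text:

```python
# A minimal sketch of the urbanism scoring rule: Tokyo in 2000 CE
# (26.7 million people) anchors the 250-point maximum, so each point
# costs 106,800 people. The helper name `city_points` is my own.

PEOPLE_PER_POINT = 26_700_000 / 250  # 106,800 people per point

def city_points(population):
    """Convert a largest-city population into index points."""
    return round(population / PEOPLE_PER_POINT, 2)

scores = {
    "Tokyo 2000":    city_points(26_700_000),  # 250.0 (the maximum)
    "New York 2000": city_points(16_700_000),  # 156.37
    "London 1900":   city_points(6_600_000),   # 61.8
    "Tokyo 1900":    city_points(1_750_000),   # 16.39
    "Beijing 1800":  city_points(1_100_000),   # 10.3
    "London 1800":   city_points(861_000),     # 8.06
}
```

The same rule also yields the 0.01-point floor mentioned later: 1,068 people is the smallest settlement worth entering on the index.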
The further we push back in time, the broader the margins of error, but for the thousand years leading up to 1700 the biggest cities were clearly Chinese (with Japanese ones often close behind). First Chang’an, then Kaifeng, and later Hangzhou came close to or passed a million residents (around 9 points) between 800 and 1200 CE. Western cities, by contrast, were never more than half that size. A few centuries earlier the situation was reversed: in the first century BCE Rome’s million residents undoubtedly made it the world’s metropolis, while Chang’an in China probably had about 500,000 residents.
As we move back into prehistory the evidence of course becomes fuzzier and the numbers become smaller, but the combination of systematic archaeological surveys and detailed excavation of smaller areas still gives us a reasonable sense of city sizes. As I mentioned earlier, this is very much chainsaw art. The most commonly accepted estimates might be as much as 10 percent off but are unlikely to be much wider of the mark than that; and since we are applying the same methods of estimation to Eastern and Western sites, the broad trends should be fairly reliable. To score 1 point on this system requires 106,800 people, so slightly more than one thousand people will score 0.01 points, the smallest number I felt was worth entering on the index. As we saw in Chapter 2, the biggest Western villages reached this level around 7500 BCE and the biggest Eastern ones around 3500 BCE. Before these dates, West and East alike score zero (you can see tables of the scores in the appendix).
It might be worth taking a moment here to talk about energy capture as well, since it poses very different problems. The simplest way to think about energy capture is in terms of consumption per person, measured in kilocalories per day. Following the same procedure as for urbanism, I start in the year 2000 CE, when the average American burned through some 228,000 kilocalories per day. That figure, certainly the highest in history, gets the West the full complement of 250 points (as I said earlier in the chapter, I am not interested in passing judgment on our capacities to capture energy, build cities, communicate information, and wage war; only in measuring them). The highest Eastern consumption per person in 2000 CE was Japan’s 104,000 kilocalories per day, earning 113.89 points.
Official statistics on energy go back only to about 1900 CE in the East and 1800 in the West, but fortunately there are ways to work around that. The human body has some basic physiological needs. It will not work properly unless it gets about 2,000 kilocalories of food per day (rather more if you are tall and/or physically active, rather less if you are not; the current American average of 3,460 kilocalories of food per day is, as supersized waistbands cruelly reveal, well in excess of what we need). If you take in much less than 2,000 kilocalories per day your body will gradually shut down functions—strength, vision, hearing, and so on—until you die. Average food consumption can never have been much below 2,000 kilocalories per person per day for extended periods, making the lowest possible score about 2 points.
In reality, though, the lowest scores have always been above 2 points, because most of the energy humans consume is in nonfood forms. We saw in Chapter 1 that Homo erectus was probably already burning wood for cooking at Zhoukoudian half a million years ago, and Neanderthals were certainly doing so 100,000 years ago, as well as wearing animal skins. Since we know so little about Neanderthal lifestyles our guesses cannot be very precise, but by tapping into nonfood energy sources Neanderthals definitely captured on average another thousand-plus kilocalories per day on top of their food, earning them about 3.25 points altogether. Fully modern humans cooked more than Neanderthals, wore more clothes, and also built houses from wood, leaves, mammoth bones, and skins—all of which, again, were parasitic on the chemical energy that plants had created out of the sun’s electromagnetic energy. Even the technologically simplest twentieth-century-CE hunter-gatherer societies captured at least 3,500 kilocalories per day in food and nonfood sources combined. Given the colder weather, their distant forebears at the end of the Ice Age must have averaged closer to 4,000 kilocalories per day, or at least 4.25 points.
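Energy capture is scored the same way as urbanism, anchored on the Western figure of 228,000 kilocalories per person per day in 2000 CE. The sketch below is mine (the helper name `kcal_points` is invented); note that the point values printed in the text, such as “about 2” for the physiological floor and “at least 4.25” for end-of-Ice-Age hunters, are conservative roundings of quotients like these rather than exact outputs:

```python
# A sketch of the energy-capture scoring rule, parallel to the urbanism
# one: 228,000 kcal/person/day in 2000 CE defines the 250-point maximum,
# so each point costs 912 kcal/day. The helper name is my own.

KCAL_PER_POINT = 228_000 / 250  # 912 kcal per day per point

def kcal_points(kcal_per_day):
    """Convert per-person daily energy capture into index points."""
    return round(kcal_per_day / KCAL_PER_POINT, 2)

kcal_points(228_000)  # 250.0 -- the West in 2000 CE
kcal_points(2_000)    # 2.19  -- the bare food floor ("about 2 points")
kcal_points(4_000)    # 4.39  -- Ice Age hunters ("at least 4.25 points")
```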
I doubt that any archaeologist would quibble much over these estimates, but there is a huge gap between Ice Age hunters’ 4.25 points and the contemporary gasoline-and electricity-guzzling West’s 250. What happened in between? By pooling their knowledge, archaeologists, historians, anthropologists, and ecologists can give us a pretty good idea.
Back in 1971, the editors of the magazine Scientific American invited the geoscientist Earl Cook to contribute an essay that he called “The Flow of Energy in an Industrial Society.” He included in it a diagram, much reprinted since then, showing best guesses at per-person energy consumption among hunter-gatherers, early agriculturalists (by which he meant the farmers of southwest Asia around 5000 BCE whom we met in Chapter 2), advanced agriculturalists (those of northwest Europe around 1400 CE), industrial folk (western Europeans around 1860), and late-twentieth-century “technological” societies. He divided the scores into four categories: food (including the feed that goes into animals whose meat is eaten), home and commerce, industry and agriculture, and transport (Figure 3.1).
Cook’s guesstimates have stood up remarkably well to nearly forty years of comparison with the results gathered by historians, anthropologists, archaeologists, and economists.* They only provide a starting point, of course, but we can use the detailed evidence surviving from each period of Eastern and Western history to tell us how far the actual societies departed from these parameters. Sometimes we can draw on textual evidence, but in most periods up to the last few hundred years archaeological finds—human and animal bones; houses; agricultural tools; traces of terracing and irrigation; the remains of craftsmen’s workshops and traded goods, and the carts, ships, and roads that bore them—are even more important.
Sometimes help comes from surprising directions. The ice cores that featured so prominently in Chapters 1 and 2 also show that airborne pollution increased sevenfold in the last few centuries BCE, mostly because of Roman mining in Spain, and in the last ten years, studies of sediments from peat bogs and lakes have confirmed this picture. Europeans apparently produced nine or ten times as much copper and silver in the first century CE as in the thirteenth century CE, with all the energy demands that implies—people to dig the mines, and animals to cart away the slag; more of both to build roads and ports, to load and unload ships, and carry metals to cities; watermills to crush the ores; and above all wood, as timber to shore up mineshafts and fuel to feed forges. This independent source of evidence also lets us compare levels of industrial activity in different periods. Not until the eleventh century CE—when Chinese documents say that the relentless demands of ironworkers stripped the mountains around Kaifeng so bare of trees that coal, for the first time in history, became an important power source—did pollution in the ice return to Roman-era levels, and only with the belching smokestacks of nineteenth-century Britain did pollution push seriously beyond Roman-era levels.
Figure 3.1. The Great Chain of Energy in numbers: the geoscientist Earl Cook’s estimates of energy capture per person per day, from the time of Homo habilis to 1970s America
Once again, I want to emphasize that we are doing chainsaw art. For instance, I estimate per-person energy capture at the height of the Roman Empire, in the first century CE, at around 31,000 kilocalories per day. That is well above Cook’s estimate of 26,000 kilocalories for advanced agricultural societies, but archaeology makes it very clear that Romans ate more meat, built more cities, used more and bigger trading ships (and so on, and so on) than Europeans would do again until the eighteenth century. That said, Roman energy capture could certainly have been 5 percent higher or lower than my estimate. For reasons I address in the appendix, though, it was probably not more than 10 percent higher or lower, and definitely not 20 percent. Cook’s framework and the detailed evidence constrain guesstimates pretty tightly, and as with the urbanism scores, the fact that the same person is doing the guessing in all cases, applying the same principles, should mean that the errors are at least consistent.
Information technology and war-making raise their own difficulties, discussed briefly in the appendix and more fully on my website, but the same principles apply as with urbanism and energy capture, and probably the same margins of error too. For reasons I discuss in the appendix, the scores would have to be systematically wrong by 15 or even 20 percent to make a real difference to the fundamental pattern of social development, but such big margins of error seem incompatible with the historical evidence. In the end, though, the only way to know for sure is for other historians, perhaps preferring other traits and assigning scores in other ways, to propose their own numbers.
Fifty years ago the philosopher Karl Popper argued that progress in science is a matter of “conjectures and refutations,” following a zigzag course as one researcher throws out an idea and others scramble to disprove it, in the process coming up with better ideas. The same, I think, applies to history. I am confident that any index that stays close to the evidence will produce more or less the same pattern as mine, but if I am wrong, and if others find this scheme wanting, hopefully my failure will encourage them to uncover better answers. To quote Einstein one more time, “There could be no fairer destiny for any theory … than that it should point the way to a more comprehensive theory in which it lives on.”
WHEN AND WHERE TO MEASURE?
Two final technical issues. First, how often should we calculate the scores? If we wanted to, we could trace changes in social development from year to year or even month to month since the 1950s. I doubt that there would be much point, though. After all, we want to see the overall shape of history across very long periods, and for that—as I hope to show in what follows—taking the pulse of social development once every century seems to provide enough detail.
As we move back toward the end of the Ice Age, though, checking social development on a century-by-century basis is neither possible nor particularly desirable. We just can’t tell much difference between what was going on in 14,000 BCE and the situation in 13,900 BCE (or 13,800, for that matter), partly because we don’t have enough good evidence and partly because change just happened very slowly. I therefore use a sliding scale. From 14,000 through 4000 BCE, I measure social development every thousand years. From 4000 through 2500 BCE the quality of evidence improves and change accelerates, so I measure every five hundred years. I reduce this to every 250 years between 2500 BCE and 1500 BCE, and finally measure every century from 1400 BCE through 2000 CE.
This has its risks, most obviously that the further back in time we go, the smoother and more gradual change will look. By calculating scores only every thousand or five hundred years we may well miss something interesting. The hard truth, though, is that only occasionally can we date our information much more precisely than the ranges I suggest. I do not want to dismiss this problem out of hand, and will try in the narrative in Chapters 4 through 10 to fill in as many of the gaps as possible, but the framework I use here does seem to me to offer the best balance between practicality and precision.
The other issue is where to measure. You may have been struck while reading the last section by my coyness about just what part of the world I was talking about when I generated numbers for “West” and “East.” I spoke at some points about the United States and at others about Britain; sometimes of China, sometimes of Japan. Back in Chapter 1 I described the historian Kenneth Pomeranz’s complaints about how comparative historians often skew analysis of why the West rules by sloppily comparing tiny England with enormous China and concluding that the West already led the East by 1750 CE. We must, he insisted, compare like-sized units. I spent Chapters 1 and 2 responding to this by defining West and East explicitly as the societies that have descended from the original Western and Eastern agricultural revolutions in the Hilly Flanks and the Yellow and Yangzi river valleys; now it is time to admit that that resolved only part of Pomeranz’s problem. In Chapter 2, I described the spectacular expansion of the Western and Eastern zones in the five thousand or so years after cultivation began and the differences in social development that often existed between core areas such as the Hilly Flanks or Yangzi Valley and peripheries such as northern Europe or Korea; so which parts of the East and West should we focus on when working out scores for the index of social development?
We could try looking at the whole of the Eastern and Western zones, although that would mean that the score for, say, 1900 CE would bundle together the smoking factories and rattling machine guns of industrialized Britain with Russia’s serfs, Mexico’s peons, Australia’s ranchers, and every other group in every corner of the vast Western zone. We would then have to concoct some sort of average development score for the whole Western region, then do it again for the East, and repeat the process for every earlier point in history. This would get so complicated as to become impractical, and I suspect it would be rather pointless anyway. When it comes to explaining why the West rules, the most important information normally comes from comparing the most highly developed parts of each region, the cores that were tied together by the densest political, economic, social, and cultural interactions. The index of social development needs to measure and compare changes within these cores.
As we will see in Chapters 4–10, though, the core areas have themselves shifted and changed across time. The Western core was actually very stable geographically from 11,000 BCE until about 1400 CE, remaining firmly at the eastern end of the Mediterranean Sea except for the five hundred years between about 250 BCE and 250 CE, when the Roman Empire drew it westward to include Italy. Otherwise, it always lay within a triangle formed by what are now Iraq, Egypt, and Greece. Since 1400 CE it has moved relentlessly north and west, first to northern Italy, then to Spain and France, then broadening to include Britain, Belgium, Holland, and Germany. By 1900 it straddled the Atlantic and by 2000 was firmly planted in North America. In the East the core remained in the original Yellow-Yangzi zone right up until 1800 CE, although its center of gravity shifted northward toward the Yellow River’s central plain after about 4000 BCE, back south to the Yangzi Valley after 500 CE, and gradually north again after 1400. It expanded to include Japan by 1900 and southeast China by 2000 (Figure 3.2). For now I just want to note that all the social development scores reflect the societies in these core areas; why the cores shifted will be one of our major concerns in Chapters 4 through 10.
THE PATTERN OF THE PAST
So much for the rules of the game; now for some results. Figure 3.3 shows the scores across the last sixteen thousand years, since things began warming up at the end of the Ice Age.
Figure 3.2. Shifting centers of power: the sometimes slow, sometimes rapid relocation of the most highly developed core within the Western and Eastern traditions since the end of the Ice Age
Figure 3.3. Keeping score: Eastern and Western social development since 14,000 BCE
After all this buildup, what do we see? Frankly, not much, unless your eyesight is a lot better than mine. The Eastern and Western lines run so close together that it is hard even to distinguish them, and they barely budge off the bottom of the graph until 3000 BCE. Even then, not much seems to happen until just a few centuries ago, when both lines abruptly take an almost ninety-degree turn and shoot straight up.
But this rather disappointing-looking graph in fact tells us two very important things. First, Eastern and Western social development have not differed very much; at the scale we are looking at, it is hard to tell them apart through most of history. Second, something profound happened in the last few centuries, by far the fastest and greatest transformation in history.
To get more information, we need to look at the scores in a different way. The trouble with Figure 3.3 is that the upward swing of the Eastern and Western lines in the twentieth century was so dramatic that to have the scale on the vertical axis go high enough to include the scores in 2000 CE (906.38 for the West and 565.44 for the East) we have to compress the much lower scores in earlier periods to the point that they are barely visible to the naked eye. This problem afflicts all graphs that try to show patterns where growth is accelerating, multiplying what has gone before, rather than simply adding to it. Fortunately there is a convenient way to solve the problem.
Imagine that I want a cup of coffee but have no money. I borrow a dollar from the local version of Tony Soprano (imagine, too, that this story is set back in the days when a dollar still bought a cup of coffee). He is, of course, my friend, so he won’t charge me interest so long as I pay him back within a week. If I miss the deadline, though, my debt will double every seven days. Needless to say, I fail to show up when the payment is due, so now I owe him two dollars. Fiscal prudence not being my strength, I let another week pass, so I owe four dollars; then another week. Now his marker is worth eight dollars. I skip town and conveniently forget our arrangement.
Figure 3.4 shows what happens to my debt. Just like Figure 3.3, for a long time there is nothing much to see. The line charting the interest becomes visible only around week 14—by which time I owe a breathtaking $8,192. On week 16, when my debt has spiraled to $32,768, the line finally pulls free from the bottom of the graph. By week 24, when the mobsters track me down, I owe $8,388,608. That was one expensive cup of coffee.
By this standard, of course, the growth of my debt in the first few weeks—from one, to two, to four, to eight dollars—was indeed trivial. But imagine that I had bumped into one of the loan shark’s foot soldiers a month or so after my fateful coffee, when my debt stood at sixteen dollars. Let us also say that I didn’t have sixteen dollars, but did give him a five. Concerned for my health, I made four more weekly payments of five dollars each, but then dropped off the map again and stopped paying. The black line in Figure 3.5 shows what happened when I paid nothing, while the gray one shows how my debt grew after those five five-dollar payments. My coffee still ends up costing more than $3 million, but that is less than half what I owed without the payments. They were crucially important—yet they are invisible in the graph. There is no way to tell from Figure 3.5 why the gray line ends up so much lower than the black.
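The loan-shark arithmetic can be checked with a few lines of code. This sketch is mine, purely illustrative; the assumptions (a debt of $1 at week 1, doubling every week from week 2 onward, each $5 payment deducted after that week’s doubling) are chosen to reproduce the milestones the figures report:

```python
# Simulating the debt spiral behind Figures 3.4 and 3.5. All names and
# modeling choices here are my own illustrative assumptions.

def debt_by_week(last_week, payments=None):
    """Debt at the end of weeks 1..last_week; `payments` maps week -> amount paid."""
    payments = payments or {}
    series = []
    debt = 1
    for week in range(1, last_week + 1):
        if week > 1:
            debt *= 2                    # the debt doubles every week after the first
        debt -= payments.get(week, 0)    # any payment made that week
        series.append(debt)
    return series

no_payments = debt_by_week(24)
with_payments = debt_by_week(24, {w: 5 for w in range(5, 10)})  # $5 in weeks 5-9

no_payments[13]    # week 14: 8,192
no_payments[15]    # week 16: 32,768
no_payments[23]    # week 24: 8,388,608
with_payments[23]  # week 24: 3,309,568 -- "more than $3 million," less than half
```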
Figure 3.4. The $8 million cup of coffee: compound interest plotted on a conventional graph. Even though the cost of a cup of coffee spirals from $1 to $8,192 across fourteen weeks, the race to financial disaster remains invisible on the graph until week 17.
Figure 3.6 tells the story of my ruin in a different way. Statisticians call Figures 3.4 and 3.5 linear-linear graphs, because the scales on each axis grow by linear increments; that is, each week that passes occupies the same amount of space along the horizontal axis, each dollar of debt the same space on the vertical axis. Figure 3.6, by contrast, is what statisticians call log-linear. Time is still parceled out along the horizontal scale in linear units, but the vertical scale records my debt logarithmically, meaning that the space between the bottom axis of the graph and the first point on the vertical axis covers my debt’s tenfold growth from one to ten dollars; in the space between the first and second points it again expands tenfold, from ten to a hundred dollars; then tenfold more, from a hundred to a thousand; and so on to ten million at the top.
Politicians and advertisers have turned misleading us with statistics into a fine art. Already a century and a half ago the British prime minister Benjamin Disraeli felt moved to remark, “There are three kinds of lies: lies, damned lies, and statistics,” and Figure 3.6 may strike you as proving his point. But all it really does is highlight a different aspect of my debt than Figures 3.4 and 3.5. A linear-linear scale does a good job of showing just how bad my debt is; a log-linear scale does a good job of showing how things got to be so bad. In Figure 3.6 the black line runs smooth and straight, showing that without any payments the size of my debt accelerates steadily, doubling every week. The gray line shows how after four weeks of doubling, my series of five-dollar payments slows down, but does not cancel out, my debt’s rate of growth. When I stop paying, the gray line once again rises parallel to the black one, since my debt is once again doubling every week, but does not end up at quite such a dizzying height.
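The reason steady doubling plots as a straight line on a log-linear graph can be stated in one line: each doubling adds the same constant, log₁₀ 2 ≈ 0.301, to the logarithm of the debt, and equal vertical steps for equal horizontal steps is exactly a straight line. A quick sketch (mine, purely illustrative) confirms it:

```python
# Why the black line in Figure 3.6 is straight: on a log scale, weekly
# doubling raises the plotted value by the same amount every week.
import math

debts = [2 ** week for week in range(1, 25)]       # $2, $4, $8, ... doubling weekly
log_debts = [math.log10(d) for d in debts]         # heights on the log-linear graph
steps = [b - a for a, b in zip(log_debts, log_debts[1:])]

# Every weekly step has the same height, log10(2) ~ 0.301, so the
# plotted points lie on a straight line.
assert all(abs(s - math.log10(2)) < 1e-9 for s in steps)
```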
Figure 3.5. A poor way to represent poor planning: the black line shows the same spiral of debt as Figure 3.4, while the gray line shows what happens after small payments against the debt in weeks 5 through 9. On this conventional (linear-linear) graph, these crucial payments are invisible.
Neither politicians nor statistics always lie; it is just that there is no such thing as a completely neutral way to present either policies or numbers. Every press statement and every graph emphasizes some aspects of reality and downplays others. Thus Figure 3.7, showing social development scores from 14,000 BCE through 2000 CE on a log-linear scale, produces a wildly different impression than the linear-linear version of the same scores in Figure 3.3. There is much more going on here than met the eye in Figure 3.3. The leap in social development in recent centuries is very real and remains clear; no amount of fancy statistical footwork will ever make it go away. But Figure 3.7 shows that it did not drop out of a clear blue sky, the way it seemed to do in Figure 3.3. By the time the lines start shooting upward (around 1700 CE in the West and 1800 in the East) the scores in both regions were already about ten times higher than they were at the left-hand side of the graph—a difference that was barely visible in Figure 3.3.
Figure 3.6. Straight roads to ruin: the spiral of debt on a log-linear scale. The black line shows the steady doubling of the debt if no payments are made, while the gray shows the impact of the small payments in weeks 5 through 9 before it goes back to doubling when the payments stop.
Figure 3.7 shows that explaining why the West rules will mean answering several questions at once. We will need to know why social development leaped so suddenly after 1800 CE to reach a level (somewhere close to 100 points) where states could project their power globally. Before development reached such heights, even the strongest societies on earth could dominate only their own region, but the new technologies and institutions of the nineteenth century allowed them to turn local domination into worldwide rule. We will also, of course, need to figure out why the West was the first part of the world to reach this threshold. But to answer either of these questions we will also have to understand why development had already increased so much over the previous fourteen thousand years.
Figure 3.7. The growth of social development, 14,000 BCE–2000 CE, plotted on a log-linear scale. This may be the most useful way to present the scores, highlighting the relative rates of growth in East and West and the importance of the thousands of years of changes before 1800 CE.
Nor is that the end of what Figure 3.7 reveals. It also shows that the Eastern and Western scores were not in fact indistinguishable until just a few hundred years ago: Western scores have been higher than Eastern scores for more than 90 percent of the time since 14,000 BCE. This seems to be a real problem for short-term accident theories. The West’s lead since 1800 CE is a reversion to the long-term norm, not some weird anomaly.
Figure 3.7 does not necessarily disprove short-term accident theories, but it does mean that a successful short-term theory will need to be more sophisticated, explaining the long-term pattern going back to the end of the Ice Age as well as events since 1700 CE. But the patterns also show that long-term lock-in theorists should not rejoice too soon. Figure 3.7 reveals clearly that Western social development scores have not always been higher than Eastern. After converging through much of the first millennium BCE, the lines cross in 541 CE and the East then remains ahead until 1773. (These implausibly precise dates of course depend on the unlikely assumption that the social development scores I have calculated are absolutely accurate; the most sensible way to put things may be to say that the Eastern score rose above the Western in the mid sixth century CE and the West regained the lead in the late eighteenth.) The facts that Eastern and Western scores converged in ancient times and that the East then led the world in social development for twelve hundred years do not disprove long-term lock-in theories, any more than the fact that the West has led for nearly the whole time since the end of the Ice Age disproves short-term accident theories; but again, they mean that a successful theory will need to be rather more sophisticated and to take account of a wider range of evidence than those offered so far.
Before leaving the graphs, there are a couple more patterns worth pointing out. They are visible in Figure 3.7, but Figure 3.8 makes them clearer. This is a conventional linear-linear graph but covers just the three and a half millennia from 1600 BCE through 1900 CE. Cutting off the enormous scores for 2000 CE lets us stretch the vertical axis enough that we can actually see the scores from earlier periods, while shortening the time span lets us stretch the horizontal axis so the changes through time are clearer too.
Two things particularly strike me about this graph. The first is the peak in Western scores in the first century CE, around forty-three points, followed by a slow decline after 100 CE. If we look a little farther to the right, we see an Eastern peak just over forty-two points in 1100 CE, at the height of the Song dynasty’s power in China, then a similar decline. A little farther still to the right, around 1700 CE, Eastern and Western scores both return to the low forties but this time instead of stalling they accelerate; a hundred years later the Western line goes through the roof as the industrial revolution begins.
Figure 3.8. Lines through time and space: social development across the three and a half millennia between 1600 BCE and 1900 CE, represented on a linear-linear plot. Line A shows a possible threshold around 43 points, which may have blocked the continuing development of the West’s Roman Empire in the first centuries CE and China’s Song dynasty around 1100 CE, before East and West alike broke through it around 1700 CE. Line B shows a possible connection between declining scores in both East and West in the first centuries CE, and line C shows another possible East-West connection starting around 1300 CE.
Was there some kind of “low-forties threshold” that defeated Rome and Song China? I mentioned in the introduction that, in his book The Great Divergence, Kenneth Pomeranz argued that East and West alike ran into an ecological bottleneck in the eighteenth century that should, by rights, have caused their social development to stagnate and decline. Yet they did not, the reason being, Pomeranz suggested, that the British—more through luck than judgment—combined the fruits of plundering the New World with the energy of fossil fuels, blowing away traditional ecological constraints. Could it be that the Romans and Song ran into similar bottlenecks when social development reached the low forties but failed to open them? If so, maybe the dominant pattern in the last two thousand years of history has been one of long-term waves, with great empires clawing their way up toward the low-forties ceiling then falling back, until something special happened in the eighteenth century.
The second thing that strikes me about Figure 3.8 is that we can draw vertical lines on it as well as horizontal ones. The obvious place to put a vertical line is in the first century CE, when Western and Eastern scores both peaked, even though the Eastern score was well below the Western (34.13 versus 43.22 points). Rather than (or as well as) focusing on the West hitting a low-forties ceiling, perhaps we should be looking for some set of events affecting both ends of the Old World, driving down Roman and Han Chinese social development scores regardless of the levels they had reached.
We could put another vertical line around 1300 CE, when Eastern and Western scores again followed similar patterns, although this time it was the Western score that was much lower (30.73 as against 42.66 points). The Eastern score had already been sliding for a hundred years, but the Western score now joined it, only for both lines to pick up after 1400 and accelerate even more sharply around 1700. Again, instead of focusing on the scores hitting a low-forties ceiling in the early eighteenth century, perhaps we should look for some global events that started pushing Eastern and Western development along a shared path in the fourteenth century. Perhaps the industrial revolution came first to the West not because of some extraordinary fluke, as Pomeranz concluded, but because East and West were both on track for such a revolution; and then something about the way the West reacted to the events of the fourteenth century gave it a slight but decisive lead in reaching the takeoff point in the eighteenth.
It seems to me that Figures 3.3, 3.7, and 3.8 illuminate a real weakness in both long-term lock-in and short-term accident theories. A few of the theorists focus on the story’s beginning in the agricultural revolution, while the great majority look only at its very end, in the last five hundred years. Because they largely ignore the thousands of years in between, they rarely even try to account for all the spurts of growth, slowdowns, collapses, convergences, changes in leadership, or horizontal ceilings and vertical links that jump out at us when we can see the whole shape of history. That, putting it bluntly, means that neither approach can tell us why the West rules; and that being the case, neither can hope to answer the question lurking beyond that—what will happen next.
SCROOGE’S QUESTION
At the climax of Charles Dickens’s A Christmas Carol, the Ghost of Christmas Yet to Come brings Ebenezer Scrooge to a weed-choked churchyard. Silently, the Ghost points out an untended tombstone. Scrooge knows his name will be on it; he knows that here, alone, unvisited, he will lie forever. “Are these the shadows of the things that Will be, or are they shadows of the things that May be, only?” he cries out.
We might well ask the same question about Figure 3.9, which takes the rates of increase in Eastern and Western social development in the twentieth century and projects them forward.* The Eastern line crosses the Western in 2103. By 2150 the West’s rule is finished, its pomp at one with Nineveh and Tyre.
The West’s epitaph looks as clear as Scrooge’s:
WESTERN RULE
1773–2103
R.I.P.
Yet are these really the shadows of the things that Will be?
Confronted with his own epitaph, Scrooge fell to his knees. “Good Spirit,” he begged, grabbing the specter’s hand, “assure me that I yet may change these shadows you have shown me, by an altered life!” Christmas Yet to Come said nothing, but Scrooge worked out the answer for himself. He had been forced to spend an uncomfortable evening with the Ghosts of Christmas Past and Christmas Present because he needed to learn from both of them. “I will not shut out the lessons that they teach,” Scrooge promised. “Oh, tell me I may sponge away the writing on this stone!”
Figure 3.9. The shape of things to come? If we project the rates at which Eastern and Western social development grew in the twentieth century forward into the twenty-second, we see the East regain the lead in 2103. (On a log-linear graph, the Eastern and Western lines would both be straight from 1900 onward, reflecting unchanging rates of growth; because this is a linear-linear plot, both curve sharply upward.)
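The mechanics of a projection like Figure 3.9’s can be sketched as follows. Two scores growing at fixed exponential rates must eventually cross if the lower one grows faster; the starting scores and growth rates below are made-up placeholders, not the actual figures behind the graph, so the computed year illustrates the method rather than reproducing the 2103 crossover.

```python
import math

def crossover_year(start_year, west_score, east_score, west_rate, east_rate):
    """Year in which a lower but faster-growing Eastern score passes the West's.

    Solves west_score * (1 + west_rate)**t == east_score * (1 + east_rate)**t
    for t, assuming constant annual growth rates (as Figure 3.9 does).
    """
    t = math.log(west_score / east_score) / math.log((1 + east_rate) / (1 + west_rate))
    return start_year + t

# Hypothetical inputs: West at 900 points growing 1.8%/yr,
# East at 560 points growing 2.3%/yr, projected from 2000 CE.
year = crossover_year(2000, 900.0, 560.0, 0.018, 0.023)
```

On a log-linear graph both projected lines would be straight (constant growth rates), so the crossover is simply where the two straight lines meet; on a linear-linear plot, as the caption notes, both curve sharply upward instead.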
I commented in the introduction that I’m in a minority among those who write on why the West rules, and particularly on what will happen next, in not being an economist, modern historian, or political pundit of some sort. At the risk of overdoing the Scrooge analogy, I would say that the absence of premodern historians from the discussion has led us into the mistake of talking exclusively to the Ghost of Christmas Present. We need to bring the Ghost of Christmas Past back in.
To do this I will spend Part II of this book (Chapters 4–10) being a historian, telling the stories of East and West across the last few thousand years, trying to explain why social development changed as it did, and in Part III (Chapters 11 and 12) I will pull these stories together. This, I believe, will tell us not only why the West rules but also what will happen next.