2


THE WEST TAKES THE LEAD

GLOBAL WARMING

Though the cavemen shivering around their campfires twenty thousand years ago could not know it, their world had begun moving back toward warmth. Over the next ten thousand years the combination of climate change and their own superfast brains would transform geography, generating distinct regional ways of life that have continued to this very day. East and West began to mean something.

The consequences of global warming were mind-boggling. In two or three centuries around 17,000 BCE the sea level rose forty feet as the glaciers that had blanketed northern America, Europe, and Asia melted. The area between Turkey and Crimea, where the waves of the Black Sea now roll (Figure 2.1), had been a low-lying basin during the Ice Age, but glacial runoff now turned it into the world’s biggest freshwater lake. It was a flood worthy of Noah’s ark,* with the waters rising six inches per day at some stages. Every time the sun came up, the lakeshore had advanced another mile. Nothing in modern times begins to compare.

Figure 2.1. The big picture: this chapter’s story seen at the global scale

Earth’s changing orbit set off a wild seesaw of warming and cooling, feast and famine. Figure 2.2 shows how the ratios between two isotopes of oxygen in the Antarctic ice cores mentioned in Chapter 1 zigzagged back and forth as the climate changed. Only after about 14,000 BCE, when melting glaciers stopped dumping icy water into the oceans, did the world clearly start taking two steps toward warmth for every one back toward freezing. Around 12,700 BCE these steps turned into a gallop, and within a single lifespan the globe warmed by about 5°F, bringing it within a degree or two of what we have known in recent times.

Figure 2.2. A story written in ice: the ratio between oxygen isotopes in air bubbles trapped in the Antarctic ice pack, revealing the swings between warm/wet and cold/dry weather across the last twenty thousand years

Medieval Christians liked to think of the universe as a Great Chain of Being, from God down to the humblest earthworm. The rich man in his castle, the poor man at his gate—all had their allotted places in a timeless order. We might do better, though, to imagine an anything-but-timeless Great Chain of Energy. Gravitational energy structures the universe. It turned the primeval cosmic soup into hydrogen and helium and then turned these simple elements into stars. Our sun works as a great nuclear reactor converting gravitational into electromagnetic energy, and plants on Earth photosynthesize a tiny portion of this into chemical energy. Animals then consume plants, metabolizing chemical energy into kinetic energy. The gravitational interplay between the sun and the other planets shapes Earth’s orbit, determining how much electromagnetic energy we get, how much chemical energy plants create, and how much kinetic energy animals make from it; and that determines everything else.

Around 12,700 BCE, Earth leaped up the Great Chain of Energy. More sunlight meant more plants, more animals, and more choices for humans, about how much to eat, how much to work, and how much to reproduce. Every individual and every little band probably combined the options in their own ways, but overall, humans reacted to moving up the Great Chain of Energy in much the same ways as the plants and animals they preyed upon: they reproduced. For every human alive around 18,000 BCE (maybe half a million) there were a dozen people in 10,000 BCE.

Just how people experienced global warming depended on where they lived. In the southern hemisphere the great oceans moderated the impact of climate change, but the north saw dramatic contrasts. For foragers in the pre–Black Sea Basin, warming was a disaster, and things were little better for people living on coastal plains. They had enjoyed some of the Ice Age world’s richest pickings, but a warmer world meant higher sea levels. Every year they retreated as waves drowned a little more of their ancestral hunting grounds, until finally everything was lost.* Yet for most humans in the northern hemisphere, moving up the Great Chain of Energy was an unalloyed good. People could follow plants and other animals north into regions that were previously too cold to support them, and by 13,000 BCE (the exact date is disputed) humans had fanned out across America, where no ape-man had trod before. By 11,500 BCE people reached the continent’s southern tip, scaled its mountains, and pushed into its rain forests. Mankind had inherited the earth.

THE GARDEN OF EDEN

The biggest beneficiaries of global warming lived in a band of “Lucky Latitudes” roughly 20–35 degrees north in the Old World and 15 degrees south to 20 degrees north in the New (see Figure 2.1). Plants and animals that had clustered in this temperate zone during the Ice Age multiplied wildly after 12,700 BCE, particularly, it seems, at each end of Asia, where wild cereals—forerunners of barley, wheat, and rye in southwest Asia and of rice and millet in East Asia—evolved big seeds that foragers could boil into mush or grind up and bake into bread. All they needed to do was wait until the plants ripened, shake them, and collect the seeds. Experiments with modern southwest Asian wild grains suggest that a ton of edible seeds could have been extracted from just two and a half acres of plants; each calorie of energy spent on harvesting earned fifty calories of food. It was the golden age of foraging.

In the Ice Age, hunter-gatherers had roamed the land in tiny bands because food was scarce, but their descendants now began changing their ways. Like the largest-brained species of several kinds of animals (whether we are talking about bees, dolphins, parrots, or our closest relatives, apes), humans seem to clump together instinctively. We are sociable.

Maybe big-brained animals got this way because they were smart enough to see that groups have more eyes and ears than individuals and do better at spotting enemies. Or maybe, some evolutionists suggest, living in groups came before big brains, starting what the brain scientist Steven Pinker calls a “cognitive arms race” in which those animals that figured out what other animals were thinking—keeping track of friends and enemies, of who shared and who didn’t—outbred those whose brains were not up to the task.

Either way, we have evolved to like one another, and our ancestors chose to exploit Earth’s movement up the Great Chain of Energy by forming bigger permanent groups. By 12,500 BCE it was no longer unusual to find forty or fifty people living together within the Lucky Latitudes, and some groups passed the hundred mark.

In the Ice Age, people had tended to set up camp, eat what plants and kill what animals they could find, then move on to another location, then another, and another. We still sing about being a wandering man, rambling on, free as a bird, and so on, but when the Great Chain of Energy made settling down a serious possibility, hearth and home clearly spoke to us more strongly. People in China began making pottery (a bad idea if you plan to move base every few weeks) as early as 16,000 BCE, and in highland Peru hunter-gatherers were building walls and keeping them clean around 11,000 BCE—pointless behavior for highly mobile people, but perfectly sensible for anyone living in one place for months at a stretch.

The clearest evidence for clumping and settling comes from what archaeologists call the Hilly Flanks, an arc of rolling country curving around the Tigris, Euphrates, and Jordan valleys in southwest Asia. I will spend most of this chapter talking about this region, which saw humanity’s first major movement away from hunter-gatherer lifestyles—and with it, the birth of the West.

The site of ‘Ain Mallaha in modern Israel (Figure 2.3; also known as Eynan) provides the best example of what happened. Around 12,500 BCE, now-nameless people built semisubterranean round houses here, sometimes thirty feet across, using stones for the walls and trimming tree trunks into posts to support roofs. Burned food scraps show that they gathered an astonishing variety of nuts and plants that ripened at different times of year, stored them in plaster-lined waterproof pits, and ground them up on stone mortars. They left the bones of deer, foxes, birds, and (above all) gazelle scattered around the village. Archaeologists love gazelles’ teeth, which have the wonderful property of producing different-colored enamel in summer and winter, making it easy to tell what time of year an animal died. ‘Ain Mallaha has teeth of both colors, which probably means that people lived there year-round. We know of no contemporary sites like this anywhere in the world outside the Hilly Flanks.

Settling down in bigger groups must have changed how people related to one another and the world around them. In the past humans had had to follow the food, moving constantly. They doubtless told stories about each place they stopped: this is the cave where my father died, that is where our son burned down the hut, there is the spring where the spirits speak, and so on. But ‘Ain Mallaha was not just one place in a circuit; for the villagers who lived here, it was the place. Here they were born, grew up, and died. Instead of leaving their dead somewhere they might not revisit for years, they now buried them among and even inside their houses, rooting their ancestors in this particular spot. People took care of their houses, rebuilding them over and over again.

Figure 2.3. The beginning of the West: sites in and around the Hilly Flanks discussed in this chapter

They also started worrying about dirt. Ice Age foragers had been messy people, leaving their campsites littered with food scraps. And why not? By the time maggots moved in and scavengers showed up, the band would be long gone, seeking the next source of food. It was a different story at ‘Ain Mallaha, though. These people were not going anywhere, and had to live with their garbage. The excavators found thousands of rat and mouse bones at ‘Ain Mallaha—animals that had not existed in their familiar modern forms during the Ice Age. Earlier scavengers had had to fit human refuse into a broader feeding strategy. It was a nice bonus if humans left bones and nuts all over a cave floor, but any proto-rats who tried to rely on this food source would starve to death long before humans came back to replenish it.

Permanent villages changed the rules for rodents. Fragrant, delicious mounds of garbage became available 24/7, and sneaky little rats and mice that could live right under humans’ noses fared better in this new setting than big, aggressive ones that attracted attention. Within a few dozen generations (a century would be plenty of time; mice, after all, breed like mice) rodents in effect genetically modified themselves to cohabit with humans. Sneaky (domestic) vermin replaced their big (wild) ancestors as completely as Homo sapiens had replaced Neanderthals.

Domestic rodents repaid the gift of endless garbage by voiding their bowels into stored food and water, accelerating the spread of disease. Humans learned to dislike rats for just this reason; some among us even find mice scary. The scariest scavengers of all, though, were wolves, who also find garbage irresistible. Most humans see drawbacks to having terrifying, Call of the Wild–type monsters hanging around, so as with the rodents, it was smaller, less threatening animals that fared best.

Archaeologists long assumed that humans actively domesticated dogs, making the tamer wolf cubs into pets and breeding them to produce tamer-still pups who liked humans almost as much as humans liked themselves, but recent studies suggest that natural selection once again worked without our conscious input. Either way, though, the interaction of wolves, garbage, and humans created the animals we call dogs, which could kill the disease-bearing rodents that competed with them for scraps and even fight with true wolves, earning their place as man’s best friend. Woman’s, too: around 11,000 BCE an elderly woman was buried at ‘Ain Mallaha with one hand resting on a puppy, both of them curled up as if asleep.*

DAILY BREAD

In the introduction to this book I spun out the science-fiction writer Robert Heinlein’s one-liner that “progress is made by lazy men looking for easier ways to do things” into a general sociological theory that history is made by lazy, greedy, frightened people (who rarely know what they’re doing) looking for easier, more profitable, and safer ways to do things. This principle kicked in with a vengeance in the Hilly Flanks at the end of the Ice Age, creating a distinctive Western way of living, with higher social development than in any other part of the world.

We can probably praise (or blame) women for this. In modern hunter-gatherer societies women do most of the plant gathering while men do more hunting. Judging from the tendency for men’s graves to contain more spear- and arrowheads while women’s have more grinding tools, things were similar in prehistory, too, which suggests that the answer to the question that has dominated this book so far—when and where we should start speaking of a Western way of life distinct from other ways—grew out of the ingenuity of women living in the Hilly Flanks nearly fifteen thousand years ago.

Wild cereals are annual plants. That is, they grow, produce seeds, and die in one season, then their seeds grow into new plants the next year. When a plant ripens, the rachis (the little stalk attaching each seed to the plant) weakens and one by one the seeds fall to the ground, where their protective husks shatter and they germinate. For foragers fifteen thousand years ago the simplest way to harvest such seeds was to take a basket and shake the plants so the almost-ripe seeds fell into it. The only problem was that every seed on every wild plant in every stand ripened at a different time. If gatherers got to a stand late in the season, most of the seeds would already have fallen and germinated or been eaten by birds. If they came too early the rachis would still be strong and most seeds would be too firmly attached to shake loose. Either way, they lost most of the crop. They could, of course, visit the stand repeatedly, but then they would have less time to visit other stands.

We don’t know whether sloth (not wanting to walk from stand to stand), greed (just wanting more food), or fear (of hunger or of someone else getting to the plant first) was the inspiration, but someone, very likely a woman, had a bright idea: Why not take some of the best seeds and replant them in a particularly fertile spot? Then, she presumably thought, if we look after them—turning the soil, pulling up weeds, maybe even watering the plants—we can rely on them to be there every year, and even to give us better yields. Life is good.

Once again, the earliest direct evidence comes from the Hilly Flanks, and indirectly we can thank the Ba’ath Party for it. The Ba’athists are best known as Saddam Hussein’s murderous political movement in Iraq, but they first seized power next door in Syria in 1963. After purging their rivals they set about modernizing Syria. Damming the Euphrates to create the fifty-mile-long Lake Assad that now generates most of Syria’s electricity was a big part of this. Foreseeing that the dam would flood the heart of the Hilly Flanks, the Syrian Directorate General of Antiquities launched an international campaign to study the sites that would be destroyed. In 1971 a British team explored the mound of Abu Hureyra. Finds on the surface suggested there had been a village here around 7000 BCE, and the archaeologists documented this in rich detail; but one trench revealed that this village had been built on the ruins of an older settlement, dating back to 12,700 BCE.

This was a huge bonus. The excavators raced against time, as the floodwaters rose, and against war, as the Syrian army drafted their workers to fight Israel in the 1973 Yom Kippur/Ramadan conflict. By the time the site was drowned, the team had excavated a little over five hundred square feet of the earliest village: a tiny area, but one of the most important in archaeology. They found semisubterranean circular huts, grinding stones, hearths, and thousands of carbonized seeds. Most came from wild grasses, but a handful of plump, heavy rye seeds stood out.

These seeds suggest that people at Abu Hureyra were using hoes to till fields. They were planting seeds beneath the surface rather than just dropping them on it, and this favored larger seedlings, which push their way up to the air more easily than smaller ones. If the prehistoric cultivators simply ate everything they grew this would not have mattered, but if they saved some of the seeds to plant again next year, big seeds would be slightly overrepresented. At first the difference would not be enough to notice, but if cultivators repeated this often enough, they would gradually change the meaning of “normal” as the average size of seeds slowly increased. Archaeobotanists (people who study ancient plant remains) call these bigger seeds “cultivated,” to distinguish them from wild grains and from the fully domesticated grains we eat today.

By the time the ‘Ain Mallahans buried the old woman and her little dog around 11,000 BCE, Abu Hureyrans had replanted rye so often that it gave them bigger seeds. This must have seemed a small thing at the time, but it proved (to use one of archaeology’s worst puns) the seed from which the West would grow.

PARADISE LOST

Half a planet away, icily indifferent to puppies and rye, the glaciers kept melting. A hundred thousand years earlier their advance had scoured North America, creating the vast flatness of the Midwest; their retreat now turned these increasingly forested plains into a boggy, mosquito-infested mess. Drunken woodland is what ecologists call it—the ground gets so wet that trees cannot stand up straight. Ridges of boulders and ice that had not melted yet trapped the runoff from glaciers in vast lakes. Geologists have named the biggest of these Lake Agassiz (Figure 2.1) after the Swiss scientist who, back in the 1830s, first realized that there must have been global ice ages. By 10,800 BCE Lake Agassiz covered almost a quarter-million square miles of the western plains, four times the area of modern Lake Superior. Then the inevitable happened: rising temperatures and rising waters undermined the icy spur holding the lake back.

Its collapse was a drawn-out cataclysm, in striking contrast to many modern disaster stories. In the impressively implausible movie The Day After Tomorrow, for instance, Dennis Quaid plays Jack Hall, a scientist (apparently the only one) who has noticed that global warming is going to cause the ice caps to collapse the next day. Summoned to the White House, he tells the president that a superstorm is about to create temperatures of −150°F, switching off the Gulf Stream that bathes northern Europe’s coasts with tropical water and keeps London, England, from having winters like London, Ontario. The superstorm will trigger a new ice age, Hall insists, making most of North America uninhabitable. Not surprisingly, the president is skeptical. Nothing gets done. A few hours later the storm erupts, trapping Hall’s son in New York. Heroics ensue.

I won’t spoil the plot by telling you how the movie turns out, except to say that when Lake Agassiz really turned off the Gulf Stream around 10,800 BCE, things unfolded rather differently. There was no superstorm, but for twelve hundred years, while the lake drained into the Atlantic, the world slid back into ice age conditions. (Geologists call the period 10,800–9600 BCE the Younger Dryas after the waterlogged petals of a little flower called the Arctic Dryas that is common in peat bogs of this date.) The wild cereals that had fed permanent villages in the Hilly Flanks, made garbage heaps possible, and given us mice and dogs now grew less thickly and yielded fewer, smaller seeds.*

Mankind was expelled from the Garden of Eden. Abandoning year-round villages, most people divided into smaller groups and went back to roaming the hillsides in search of their next meal, much like their ancestors at the coldest point of the Ice Age. Animal bones from the Hilly Flanks show that gazelles were getting smaller by 10,500 BCE as people overhunted them, and the enamel on human teeth regularly has telltale ridges indicating chronic childhood malnutrition.

There has never been another catastrophe on quite this scale. To find a parallel, in fact, we have to turn to science fiction. In 1941 Isaac Asimov, then just starting his career, published a story called “Nightfall” in the magazine Astounding Science Fiction. He set it on Lagash, a planet with six suns. Wherever Lagashians go, at least one sun is shining and it is always day—except for once every 2,049 years, when the suns line up just right for a passing moon to create an eclipse. The sky darkens and the stars come out. The terrified populace goes mad. By the time the eclipse ends the Lagashians have destroyed their civilization and plunged themselves into savagery. Over the next 2,049 years they slowly rebuild their culture, only for night to fall again and start the whole process over.

The Younger Dryas sounds like “Nightfall” revisited: the earth’s orbit generates wild swings between freezing and thawing, which every few thousand years produce disasters like the draining of Lake Agassiz, wiping the slate of history clean. Yet while “Nightfall” is a great story (the Science Fiction Writers of America voted it the best science-fiction story of all time, and for what it is worth it has my vote too) it is not such a good model for historical thinking. In the real world not even the Younger Dryas could wipe the slate clean like “Nightfall.” We might do better, in fact, to follow the ancient Greek thinker Heraclitus, who—2,500 years before Asimov sat down to write—observed, “You can’t step into the same river twice.” It is a famous paradox: the second time you put your foot into a stream the waters you originally disturbed have flowed on to the sea and the river is not the same river anymore.

In just the same way, you cannot have the same ice age twice. The societies in the Hilly Flanks when Lake Agassiz collapsed around 10,800 BCE were no longer the same as those that had been there during the previous ice age. Unlike Asimov’s Lagashians, earthlings did not go mad when nature turned their world upside down. Instead they applied a particularly human skill, ingenuity, and built on what they had already done. The Younger Dryas did not turn the clock back. Nothing ever does that.

Some archaeologists suggest that far from being a Nightfall moment, the Younger Dryas actually speeded innovation up. Like all scientific techniques, those used to date the earliest cultivated rye seeds from Abu Hureyra have built-in margins of error. The site’s excavators point out that while the midpoints of the date ranges for the large rye seeds mentioned earlier fall around 11,000 BCE, before the Younger Dryas, they could perfectly well have been harvested five hundred years later, after the Younger Dryas began. Perhaps it was not laziness or greed that prompted the women of Abu Hureyra to tend rye; maybe it was fear. As temperatures fell and wild foods declined, Abu Hureyrans may have experimented, discovering that careful tending produced more and bigger seeds. On the one hand, cold, dry weather made it harder to cultivate cereals; on the other, the harsher weather increased incentives to do so. Some archaeologists imagine Younger Dryas foragers carrying bags of seeds around, scattering them in promising-looking spots as insurance against nature letting them down.

Further digging will show whether this is right, but we already know that not everyone in the Hilly Flanks responded to climatic disaster by returning to moving around in search of food. At Mureybet, just upstream from Abu Hureyra, French excavators found a new village established around 10,000 BCE. They exposed only twenty-five square feet of the earliest levels before Lake Assad swallowed this site too, but it was enough to show that the villagers scraped together sufficient wild plants and gazelles to hang on year-round. And in a house dated 10,000–9500 BCE the archaeologists made an unexpected discovery: embedded in a clay bench were the horns of a wild aurochs, the fierce six-foot-tall predecessor of the modern ox, plus the shoulder blades of two more.

No pre–Younger Dryas site has yielded anything quite this odd, but after 10,000 BCE villages filled up with all kinds of surprising things. Take, for example, Qermez Dere in northern Iraq, exposed by bulldozing in 1986. Only two small trenches could be excavated, one of which hit an area for preparing wild foods, much like those known from ‘Ain Mallaha or Abu Hureyra. The other trench, though, produced no evidence of domestic activities. Instead it contained a sequence of three roundish chambers, each twelve to fifteen feet across and dug five feet beneath the ancient ground level. The first chamber was plastered and a row of four pillars had been set in the floor, so close together that it was hard to move around the room. One of the pillars was found intact: molded in clay and plaster over a stone core, it tapered and had odd bulges near the top, making it look like a stylized human torso with shoulders. The room had been filled (apparently deliberately) with several tons of earth, containing several groups of big animal bones and unusual objects like stone beads. A new room was then dug, just like the first one, on almost exactly the same spot; it, too, was plastered then filled in with tons of earth. Then a third room was dug in the same place, plastered, and filled in. After dumping a few baskets of soil into this final chamber, people placed six human skulls, minus their jawbones, just above the floor. The skulls were in bad shape, suggesting that they had been in circulation for a long time before being buried here.

What on earth were these people doing? It is a standing joke among archaeologists that whenever we cannot figure out what we have dug up, we say it is religious (having just finished excavating a site on Sicily that I think is religious, I should confess to not finding the joke very funny anymore). The problem, of course, is that we cannot dig up past beliefs; yet that does not mean archaeologists are just making things up when they talk about prehistoric religion.

If we take a fairly commonsense definition of religion as belief in powerful, supernatural, normally unseen beings who care about humans and expect humans to care about them (which seems to apply to so many societies that some evolutionary psychologists think religion is hardwired into the human brain), we should be able to recognize, if not necessarily understand, remains of rituals through which people communicated with a divine world.

Rituals are notoriously culture-specific. Depending on when and where you find yourself, it may be that the mighty ones will listen only if you pour the blood of a live white goat on the right side of this particular rock; or only if you take off your shoes, kneel down, and pray facing in that direction; or if you tell your misdeeds to a man in black who doesn’t have sex; and so on. The list is endless. Yet despite their wondrous variety, rituals do have certain things in common. Many require special places (mountaintops, caves, unusual buildings), objects (images, statues, valuable or foreign goods), movements (processions, pilgrimages), and clothes (highly formal, totally disheveled), all heightening the sense of stepping outside the everyday. Feasting, often involving unusual foods, is popular; so too is fasting, which induces altered states of mind. Sleep deprivation, pain, repetitive chanting and dancing, or (the favorite) drugs all do the same job, and may tip truly holy people into trances, fits, and visions.

These sites have it all: strange underground rooms, humanoid pillars, jawless skulls—and while everything in the archaeology of religion is speculative, I find it hard not to see them as religious responses to the Younger Dryas. The world was freezing, plants were dying, and the gazelles were going away; what could be more natural than asking gods, spirits, and ancestors for aid? What could make more sense than identifying special people and creating special places to facilitate communication? The shrine at Qermez Dere looks like an amplifier, turning up the volume on requests for help.

So when the world warmed up at the end of the Younger Dryas, around 9600 BCE, the Hilly Flanks were not the same place they had been when the world had warmed up at the end of the main ice age, three thousand years earlier. Global warming did not step into the same society twice. Sites from the earlier period of warming, such as ‘Ain Mallaha, give the impression that people just happily took advantage of nature’s bounty, but in the villages that popped up around the Hilly Flanks after 9600 BCE people sank serious resources into religion. Many post-9600 sites have evidence for elaborate treatment of human and aurochs skulls and several have big, underground chambers that look like communal shrines. At Jerf al-Ahmar in Syria, now slumbering alongside so many other sites beneath the waters of Lake Assad, French archaeologists found ten multiroomed houses around a large underground chamber. A human skull was sitting on a bench and in the middle of the room was a headless skeleton. It looks disturbingly like a human sacrifice.

Most spectacular of all is Göbekli Tepe, perched on a hilltop with commanding views across southeast Turkey. Since 1995 its German and Turkish excavators have exposed four sunken chambers, up to ten feet deep and thirty feet across, dating to 9000 BCE or even earlier. Like the smaller, earlier chambers at Qermez Dere, each had been deliberately filled in. Each contained T-shaped stone columns, some seven feet tall, decorated with carved animals. Geomagnetic surveys suggest that fifteen more chambers remain unexcavated; in all there may be two hundred stone pillars at the site, many weighing over eight tons. A twenty-foot-long pillar found unfinished in a quarry weighed fifty tons.

People did all this with nothing more sophisticated than flint tools. While we will never know why this particular hilltop was so sacred, it certainly looks like a regional sanctuary, perhaps a place for festivals where hundreds of people congregated for weeks at a time, carving pillars, dragging them to the chambers, and setting them upright. One thing seems certain, though: never before in history had such large groups worked together.

Humans were not passive victims of climate change. They applied ingenuity, working to get the gods and ancestors on board in the struggle against adversity. And while most of us doubt that these gods and ancestors actually existed, the rituals may well have done some good anyway as a kind of social glue. People who sincerely believed that big rituals in lavish shrines would win the gods’ aid were surely more likely to tough it out and stick together no matter how hard times got.

By 10,000 BCE, the Hilly Flanks stood out from the rest of the world. Most people in most places still drifted between caves and campsites, like the one excavated since 2004 at Longwangcan in China, where the only traces of their activity that survive are small circles of baked earth from campfires. A battered piece of shale from this site might be a simple stone spade, perhaps implying that cultivation of crops had begun, but there is nothing like the fat rye seeds of Abu Hureyra, let alone the monuments of Mureybet or Qermez Dere. The most substantial building known from the Americas is a small hut of bent saplings covered with hides, detected by meticulous excavators at Monte Verde in Chile; while in the whole of India archaeologists have not been able to find even that much, and scatters of stone tools are the only traces of human activity.

A distinctive Western world was taking shape.

PARADISE TRANSFORMED

By 9600 BCE Earth was warming up again, and this time around, Hilly Flankers already knew how to get the most from grasses. They quickly (by the standards of earlier times, anyway) resumed cultivation. By 9300 BCE wheat and barley seeds from sites in the Jordan Valley were noticeably bigger than wild versions and people were modifying fig trees to improve their yields. The world’s oldest known granaries, clay storage chambers ten feet wide and ten feet tall, come from the Jordan Valley around 9000 BCE. By then cultivation was under way in at least seven pockets in the Hilly Flanks, from modern Israel to southeast Turkey, and by 8500 BCE big-seeded cereals were normal all across the region.

Changes were very slow indeed by modern standards, but over the next thousand years they made the Hilly Flanks increasingly different from any other part of the world. The people of this area were, unknowingly, genetically modifying plants to create fully domesticated crops that could not reproduce themselves without human aid. Like dogs, these plants needed us as much as we needed them.

Plants, like animals, evolve because random mutations occur when DNA is copied from one generation to the next. Once in a while, a mutation increases a plant’s chance of reproducing. This is particularly common if the environment is changing too, as happened when permanent villages created niches in which small, tame wolves had advantages over big, fierce ones, or when cultivation gave big seedlings advantages over small ones. I already mentioned that wild cereals reproduce by having each seed ripen and fall to the ground at a different time from the others, whereupon the husk shatters, leaving the seed free to grow. But a few plants—just one per one or two million normal plants—have a random mutation on a single gene that strengthens the rachis connecting the seed to the plant and also the husk protecting the seed. When these seeds ripen they do not fall to the ground and the husks cannot shatter. The seeds literally wait for a harvester to come along and get them. Before there were any harvesters the mutant plants died out each year because their seeds could not get into the soil, making this a most disadvantageous mutation. The same thing happened if humans shook the plants and caught the grains that fell; the mutant seeds would not fall, and once again died out.

Archaeobotanists argue passionately over just what happened to change this situation, but most likely good old-fashioned greed got involved. After investing their energy in hoeing, weeding, and watering the best stands of grasses, women (assuming, again, that it was women) may have wanted to squeeze every last bit of food from their plants. That would have meant visiting each stand to shake the bushes several times, and they would surely have noticed that no matter how hard they shook, some stubborn seeds—the mutants with the tough rachis—just would not drop. What could be more natural than to rip the offending stalk out of the ground and take the whole plant home? Wheat and barley stalks do not weigh much, after all, and I’m fairly sure that’s how I would react if confronted by a cereal that would not surrender.

If women then replanted a random selection of their seeds, they would have taken mutant seeds along with normal ones; in fact, the mutants would be slightly overrepresented, because at least some normal seeds would already have fallen and been lost. Each year that they replanted they would slightly increase the proportion of mutants in their cultivated stands. This was clearly an agonizingly slow process, quite invisible to the people involved, but it set off an evolutionary spiral just as dramatic as what happened to mice in garbage dumps. Within a couple of thousand years, instead of one plant that waited for the harvester per field of one or two million, they had only genetically modified domesticated plants. The excavated finds suggest that even around 8500 BCE fully domesticated wheat and barley were still almost unheard of. By 8000, though, about half the seeds we find in the Hilly Flanks have the tough rachis that would wait for the harvester; by 7500, virtually all do.
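For readers who want to see how such a tiny bias compounds, here is a minimal sketch. The starting frequency (one tough-rachis plant per 1.5 million) comes from the figures above; the assumption that about one percent of ordinary, loose-rachis seeds fall and are lost before each harvest, while the mutants all wait to be gathered and replanted, is purely illustrative. Under those assumptions the mutants need roughly 1,400 annual replantings to make up half the crop, and a few centuries more to take it over entirely, much the same leisurely timescale the excavated seeds record.

```python
# Illustrative model only: how a rare tough-rachis mutation spreads when
# people harvest and replant. The 1% pre-harvest loss rate for ordinary
# loose-rachis seeds is an assumed number, not one from the text.

def years_to_reach(target, start_freq=1 / 1_500_000, loss_rate=0.01):
    """Count the annual replanting cycles needed for tough-rachis plants
    to make up the target fraction of the cultivated stand."""
    freq, years = start_freq, 0
    while freq < target:
        mutants = freq                        # tough-rachis seeds all wait for the harvester
        wild = (1 - freq) * (1 - loss_rate)   # some loose seeds have already fallen and are lost
        freq = mutants / (mutants + wild)     # mutant share of the seed corn sown next year
        years += 1
    return years

print(years_to_reach(0.5))    # about 1,400 years for mutants to reach half the crop
print(years_to_reach(0.99))   # a few centuries more to near-total domestication
```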

Laziness, greed, and fear constantly added improvements. People discovered that planting cereals in a garden one year then protein-rich beans the next replenished the soil as well as varying their diet; in the process, they domesticated lentils and chickpeas. Crushing wheat and barley on coarse grindstones filled bread with grit, which wore people’s teeth down to stumps; so they sieved out the impurities. They found new ways to prepare grains, baking clay into waterproof pots for cooking. If we are right to draw analogies with modern agriculturalists, women would have been responsible for most or all of these innovations, as well as for learning to weave linen into clothes. Skins and furs were out.

While women tamed plants, men (probably) took on animals. By 8000 BCE herders in what is now western Iran were managing goats so effectively that bigger, calmer strains evolved. Before 7000 BCE herders turned the wild aurochs into something like the placid cows we know today and tamed wild boars into pigs. Across the next few thousand years they learned not to kill all animals for meat while they were still young but to keep some around for wool and milk, and then—most useful of all—to harness them to wheeled carts.* Previously, moving anything meant picking it up and carrying it, but an ox in harness could deliver three times the draft power of a man. By 4000 BCE the domestication of plants and animals converged in the ox-drawn plow. People carried on tinkering, but nearly six thousand years would pass before humans added significant new energy sources to this package by harnessing the power of coal and steam in the industrial revolution.

The early farmers of the Hilly Flanks transformed the way people lived. Those of us who quake at the prospect of sitting next to a screaming baby on a long plane ride should spare a thought for female foragers, who regularly carry their infants with them as they walk thousands of miles every year gathering plants. Not surprisingly, they do not want too many children; consciously or not, they space their pregnancies by extending breastfeeding into the child’s third or fourth year (producing breast milk prevents ovulation). Ice Age foragers probably followed similar strategies, but the more they settled down, the less they needed to do this. Having more babies in fact became a boon, providing extra labor, and recent skeletal studies suggest that the average woman in an early farming village, staying in one place with stores of food, gave birth to seven or eight babies (of whom maybe four would survive to their first birthday and perhaps three to reproductive age) as compared to the mere five or six live births of her roving ancestresses. The more food people grew, the more babies they could feed; although, of course, the more babies they fed, the more food they had to grow.

Population soared. By 8000 BCE some villages probably had five hundred residents, ten times the size of pre–Younger Dryas hamlets such as ‘Ain Mallaha. By 6500 Çatalhöyük in modern Turkey had perhaps three thousand. These were villages on steroids, and they had all the problems that implies. Microscopic analysis of sediments from Çatalhöyük shows that people simply dumped garbage and night soil in stinking heaps between houses, to be trodden into the dust and mud. The filth would have appalled hunter-gatherers but surely delighted rats, flies, and fleas. We can see from tiny pieces of excrement trodden into the dirt floors that villagers also stabled domestic animals in their homes, and human skeletons from the site of ‘Ain Ghazal in Jordan show that by 7000 BCE tuberculosis had jumped from cattle to people. Settling down and raising more food increased fertility, but also meant more mouths to feed and more germs to share, both of which increased mortality. Each new farming village probably grew rapidly for a few generations until fertility and mortality balanced each other out.

Yet for all the squalor, this was clearly what people wanted. Little hunter-gatherer bands had had broad geographical horizons but narrow social ones: the landscape changed but the faces did not. The early farmer’s world was just the opposite. You might pass your whole life within a day’s walk of the village where you were born, but what a place it was—full of shrines where the gods revealed themselves, festivals and feasts to delight the senses, and gossipy, nosy neighbors in solid houses with plastered floors and waterproof roofs. These buildings would strike most people today as cramped, smoky, smelly hovels, but they were a big step up from sharing damp caves with bears or huddling out of the rain under skins stretched over branches.

Early farmers tamed the landscape, breaking it into concentric circles—at the center was home; then came the neighbors; then the cultivated fields; then the pastures, where shepherds and flocks trekked between summer and winter grazing; and beyond them the wild, an unregulated world of scary animals, savages who hunted, and who knew what monsters. A few excavations have found stone slabs incised with lines that, at least to the eye of the believer, look a bit like maps of fields divided by tiny paths; and around 9000 BCE villagers in Jerf al-Ahmar and some of the neighboring sites now under Lake Assad seem to have been experimenting with a kind of protowriting, scratching images of snakes, birds, farm animals, and abstract signs on little stone tokens.

By imposing such mental structures on their world, Hilly Flankers were, we might say, domesticating themselves. They even remade what love meant. The love between husband and wife or parent and child is natural, bred into us over millions of years, but farming injected new forces into these relationships. Foragers had always shared their knowledge with their young, teaching them to find ripe plants, wild game, and safe caves, but farmers had something more concrete to pass down. To do well, people now needed property—a house, fields, and flocks, not to mention investments like wells, walls, and tools. The first farmers were apparently quite communal, sharing food and perhaps cooking collectively, but by 8000 BCE they were building bigger, more complicated houses, each with its own storerooms and kitchens, and perhaps dividing the land into privately owned fields. Life increasingly focused on small family groups, probably the basic unit for transmitting property between generations. Children needed this material inheritance, because the alternative was poverty. Transmitting property became a matter of life and death.

There are signs of what can only be called an obsession with ancestors. We perhaps see it as early as 10,000 BCE, with the jawless skulls of Qermez Dere, but as farming developed, it escalated. Burying multiple generations of the dead under house floors became common, mingling bodies in ways that seem to express very physically the link between property and descent. Some people went further, disinterring bodies after the flesh decayed, removing the skulls, and reburying the headless corpses. Using plaster, they modeled faces on the skulls, sticking shells in the eye sockets and painting in details like hair.

Dame Kathleen Kenyon, a formidable woman in the man’s world of 1950s archaeology, was the first to document this horror-movie custom in her excavations at the famous site of Jericho on the West Bank, but plastered skulls have now been found in dozens of settlements. What people did with the skulls is less clear, since we only find ones that have been reburied. Most were placed in pits, though at Çatalhöyük one young woman was buried around 7000 BCE hugging to her breast a skull that had been replastered and painted red no fewer than three times.

Such intimacy with corpses makes most of us squeamish but clearly mattered a lot to early farmers in the Hilly Flanks. Most archaeologists think it shows that ancestors were the most important supernatural beings. The ancestors had passed on property, without which the living would starve; in return the living honored them. Possibly ancestral rituals clothed the transmission of property in a holy aura, justifying why some people owned more than others. People may also have used skulls for necromancy, summoning ancestors to ask when to plant, where to hunt, and whether to raid neighbors.

Ancestor cults flourished all over the Hilly Flanks. At Çatalhöyük almost every house had bodies under the floor and ancestral skulls plastered into the surfaces and walls. At ‘Ain Ghazal two pits were found containing life-size statues and busts made from bundles of reeds coated with plaster. Some had twin heads; most were painted with giant, staring eyes. Most striking of all, around 8000 BCE people at Çayönü in southeast Turkey built what its excavators labeled a “House of the Dead,” with sixty-six skulls and more than four hundred skeletons stashed behind an altar. Chemists identified deposits on the altar as hemoglobin crystals from human and animal blood. More human blood was caked on clay bowls, and two other buildings also had bloodstained altars, one with the image of a human head carved on it. The mind fairly boggles. It sounds like a slasher movie—struggling victims tied to altars, priests tearing their jugulars open with razor-sharp flint blades and sawing off their heads for storage, worshippers drinking their blood …

Or maybe not. Nothing archaeologists dig up can prove or disprove such flights of fancy. Still, the statues and the House of the Dead seem to imply the emergence of religious specialists who somehow persuaded everyone that they had privileged access to the supernatural. Perhaps they could fall into trances or fits; perhaps they could just describe their visions better. Whatever the reason, priests may have been the first people to enjoy institutionalized authority. Here, perhaps, we see the beginnings of entrenched hierarchy.

Whether that is true or not, hierarchy developed fastest within households. I have already observed that men and women had had different roles in foraging societies, the former more active in hunting and the latter in gathering, but studies of contemporary groups suggest that domestication sharpens the sexual division of labor, tying women to the home. The high mortality/high fertility regime required most women to spend most of their lives pregnant and/or minding small children, and changes in agriculture—changes that women themselves probably pioneered—reinforced this. Domesticated cereals need more processing than most wild foods, and since threshing, grinding, and baking can be done in the home while supervising infants, these logically became women’s work.

When land is abundant but labor is scarce (as in the earliest days of cultivation), people normally cultivate large areas lightly, with men and women hoeing and weeding together. If population increases but the supply of farmland does not, as happened in the Hilly Flanks after 8000 BCE, it makes sense to work the land more intensively, squeezing more from each acre by manuring, plowing, and even irrigating. All these tasks require upper-body strength. Plenty of women are as strong as men, but men do increasingly dominate outdoor work and women indoor work as agriculture intensifies. Grown men work the fields; boys tend the flocks; and women and girls manage the ever more sharply defined domestic sphere. A study of 162 skeletons dating around 7000 BCE from Abu Hureyra revealed striking gender distinctions. Both men and women had enlarged vertebrae in their upper backs, probably from carrying heavy loads on their heads, but only women had a distinctive arthritic toe condition caused by spending long periods kneeling, using their toes as a base to apply force while grinding grain.

Weeding, clearing stones, manuring, watering, and plowing all increased yields, and inheriting a well-tended field, rather than just any bit of land, made all the difference to a household’s fortunes. The way religion developed after 9600 BCE suggests that people worried about ancestors and inheritance, and we should probably assume that it was at this point that they began reinforcing their rituals with other institutions. With so much at stake, men in modern peasant cultures want to be sure they really are the fathers of the children who will inherit their property. Foragers’ rather casual attitudes about sex yield to obsessive concern with daughters’ premarital virginity and wives’ extramarital activities. Men in traditional agricultural societies typically marry around the age of thirty, after they have come into their inheritance, while women generally marry around fifteen, before they have had much time to stray. While we cannot be sure that these patterns originated at the dawn of farming, it does seem rather likely. By, say, 7500 BCE a girl would typically grow up under the authority of her father, then, as a teenager, exchange it for the authority of a husband old enough to be her father. Marriage would become a source of wealth as those who already had good lands and flocks would marry others in the same happy situation, consolidating holdings. The rich got richer.

Having things worth inheriting meant having things worth stealing, and it is surely no coincidence that evidence for fortifications and organized warfare mushrooms in the Hilly Flanks after 9600 BCE. Modern hunter-gatherer life is famously violent; with no real hierarchy to keep their passions in check, young hunters often treat homicide as a reasonable way to settle disagreements. In many bands, it is the leading cause of death. But to live together in villages, people had to learn to manage interpersonal violence. Those that did so would have flourished—and have been able to harness violence to take things from other communities.

The most remarkable evidence comes from Jericho, famous for the biblical story of the walls that tumbled down when Joshua blew his trumpet. When Kathleen Kenyon dug there fifty years ago, she did find walls—but not Joshua’s. Joshua lived around 1200 BCE, but Kenyon uncovered what looked like fortifications eight thousand years older. She interpreted these as a defensive bastion, twelve feet high and five feet thick, dating to around 9300 BCE. New studies in the 1980s showed that she was probably mistaken, and that her “fortification” actually consisted of several small walls built at different times, perhaps to hold back a stream; but her second great find, a stone tower twenty-five feet tall, probably really was defensive. In a world where the most advanced weapon was a stick with a pointed stone tied to the end, this was a mighty bulwark indeed.

Nowhere outside the Hilly Flanks did people have so much to defend. Even in 7000 BCE, almost everyone outside this region was a forager, shifting seasonally, and even where they had begun to settle down in villages, such as Mehrgarh in modern Pakistan or Shangshan in the Yangzi Delta, these were simple places by the standards of Jericho. If hunter-gatherers from any other place on earth had been airlifted to Çayönü or Çatalhöyük they would not, I suspect, have known what hit them. Gone would be their caves or little clusters of huts, replaced by bustling towns with sturdy houses, great stores of food, powerful art, and religious monuments. They would find themselves working hard, dying young, and hosting an unpleasant array of microbes. They would rub shoulders with rich and poor, and chafe under or rejoice in men’s authority over women and parents’ over children. They might even discover that some people had the right to murder them in rituals. And they might well wonder why people had inflicted all this on themselves.

GOING FORTH AND MULTIPLYING

Fast-forward ten thousand years from the origins of hierarchy and drudgery in the prehistoric Hilly Flanks to Paris in 1967.

To the middle-aged men who administered the University of Paris campus in the dreary suburb of Nanterre—the heirs of traditions of patriarchy stretching back to Çatalhöyük—it seemed obvious that the young ladies in their charge should not be allowed to entertain young gentlemen in their dorm rooms (or vice versa). Such rules have probably never seemed obvious to the young, but for three hundred generations teenagers had had to live with them. But not anymore. As winter closed in, students challenged their elders’ right to dictate their love lives. In January 1968 Daniel Cohn-Bendit, nowadays a respected Green Party member of the European Parliament but then a student activist known as “Danny the Red,” compared the minister for youth’s attitudes to the Hitler Youth’s. In May students took on armed police in running street-fights, paralyzing downtown Paris with barricades and burning cars. President De Gaulle met secretly with his generals to find out whether—if it came to a new Bastille Day—the army would stand by him.

Enter Marshall Sahlins, a youngish anthropology professor from the University of Michigan. Sahlins had made his name with a series of brilliant essays on social evolution and by criticizing the Vietnam War; now he forsook Ann Arbor (“a small university city made up exclusively of side streets,” he unkindly but not unfairly called it) to spend two years at the Collège de France, the Mecca of both anthropological theory and student radicalism. As the crisis deepened, Sahlins sent a manuscript to the journal Les temps modernes, required reading for everyone who was anyone on the French intellectual scene. It was to become one of the most influential anthropological essays ever written.

“Open the gates of nurseries, universities, and other prisons,” student radicals had scrawled on a wall at Nanterre. “Thanks to teachers and exams competitiveness starts at six.” Sahlins’s manuscript offered something to the students: not an answer, which the anarchists probably did not want (“Be a realist, demand the impossible” went one of their slogans), but at least some encouragement. The central issue, Sahlins argued, was that bourgeois society had “erected a shrine to the Unattainable: Infinite Needs.” We submit to capitalist discipline and compete to earn money so we can chase Infinite Needs by buying things we don’t really want. We could learn something, Sahlins suggested, from hunter-gatherers. “The world’s most primitive people,” he explained, “have few possessions but they are not poor.” This only sounded like a paradox: Sahlins argued that foragers typically worked just twenty-one to thirty-five hours per week—less than Paris’s industrial laborers or even, I suspect, its students. Hunter-gatherers did not have cars or TVs, but they did not know they were supposed to want them. Their means were few but their needs were fewer, making them, Sahlins concluded, “the original affluent society.”

Sahlins had a point: Why, he asked, did farming ever replace foraging if the rewards were work, inequality, and war? Yet replace foraging it clearly did. By 7000 BCE farming completely dominated the Hilly Flanks. Already by 8500 BCE cultivated cereals had spread to Cyprus and by 8000 had reached central Turkey. By 7000 fully domesticated plants had reached all these areas and spread eastward to (or, perhaps, developed independently in) Pakistan. They had reached Greece, southern Iraq, and central Asia by 6000, Egypt and central Europe by 5000, and the shores of the Atlantic by 4000 (Figure 2.4).

Archaeologists have argued for decades over why this happened, without much agreement. At the end of a magisterial recent review, for instance, the strongest generalization that Graeme Barker of Cambridge University felt he could make was that farmers replaced foragers “in different ways and at different rates and for different reasons, but in comparable circumstances of challenges to the world they knew.”

Figure 2.4. Going forth and multiplying, version one: the westward spread of domesticated plants from the Hilly Flanks to the Atlantic, 9000–4000 BCE

Yet although the process was messy—going on across millennia at the scale of entire continents, how could it not be?—we can make quite a lot of sense of it if we remember that it was, at the end of the day, all about Earth’s movement up the Great Chain of Energy. Orbital change meant that Earth captured more of the sun’s electromagnetic energy; photosynthesis converted some of that larger share into chemical energy (that is, more plants); metabolism converted some of that larger stock of chemical energy into kinetic energy (that is, more animals); and farming allowed humans to extract vastly more energy from plants and other animals for their own use. Pests, predators, and parasites in turn sucked as much of this newfound energy out of farmers as they could, but there was still plenty left over.

Humans, like plants and other animals, found a major outlet for their extra energy in sexual reproduction. High birthrates meant that new villages could grow rapidly until every square inch of available land was being farmed, whereupon hunger and sickness rose until they canceled out fertility. Energy capture and energy consumption then reached a rough balance. Some villages stabilized like this, always hovering on the edge of misery; in others a few daring souls decided to start over. They might walk an hour to a vacant (perhaps less desirable) spot in the same valley or plain—or trudge hundreds of miles in search of green pastures they had heard about. They might even cross the seas. Many adventurers must have failed, the ragged, starving survivors crawling home with their tails between their legs. Others, though, triumphed. Population boomed until deaths caught up with births again or until colonies spun off colonies of their own.

Most farmers expanding into new territory found foragers already living there. It is tempting to imagine scenes like something out of old Western movies, with cattle raids, scalping, and shoot-outs (with both sides using bows and arrows), but the reality may have been less dramatic. Archaeological surveys suggest that the first farmers in each region tended to settle in different areas from the local foragers, almost certainly because the best farmland and the best foraging grounds rarely overlapped. At least at first, farmers and foragers may have largely ignored each other.

Eventually, of course, foraging did disappear. You will find few hunters or gatherers today prowling the manicured landscapes of Tuscany or Tokyo’s suburbs. Farming populations grew rapidly, needing only a few centuries to fill up the best land, until they had no option but to push into the (in their eyes) marginal territories of the foragers.

There are two main theories about what happened next. The first suggests that farmers basically destroyed the original affluent society. Disease might have played a part; rats, flocks, and permanent villages certainly made farmers less healthy than hunter-gatherers. We should not, though, imagine epidemics like those that carried off Native Americans in their millions after 1492. The farmers’ and foragers’ disease pools had been separated by just a few miles of forest, not uncrossable oceans, so they had not diverged very far.

Yet even without mass kill-offs, weight of numbers was decisive. If foragers decided to fight, as happened on so many colonial frontiers in modern times, they might destroy the odd farming village, but more colonists would just keep coming, swamping resistance. Alternatively, foragers might choose flight, but no matter how far they fell back, new farmers would eventually arrive, chopping down still more trees and breathing germs everywhere, until foragers ended up in the places farmers simply could not use, such as Siberia or the Sahara.

The second theory says none of these things happened, because the first farmers across most of the regions shown in Figure 2.4 were not descendants of immigrants from the Hilly Flanks at all. They were local hunter-gatherers who settled down and became farmers themselves. Sahlins made farming sound deeply unattractive compared to the original affluent society, but in all likelihood foragers rarely faced a simple choice between two lifestyles. A farmer who left his plow and started walking would not cross a sharp line into foragers’ territory. Rather, he would come to villages where people farmed a little less intensively than he did (maybe hoeing their fields instead of plowing and manuring), then people who farmed less intensively still (maybe burning patches of forest, cultivating them until the weeds grew back, then moving on), and eventually people who relied entirely on hunting and gathering. Ideas, people, and microbes drifted back and forth across this broad contact zone.

When people realized that neighbors with more intensive practices were killing the wild plants and chasing off the animals that their own foraging lifestyles depended on, rather than attacking these vandals or running away they also had the option of joining the crowd and intensifying their own cultivation. Instead of picking farming over foraging, people probably only decided to spend a little less time gathering and a little more time gardening. Later they might have to decide whether to start weeding, then plowing, then manuring, but this was—to repeat an image from the previous chapter—a series of baby steps rather than a once-and-for-all great leap from the original affluent society to backbreaking toil and chronic illness. On the whole, across hundreds of years and thousands of miles, those who intensified also multiplied; those who clung to their old ways dwindled. In the process, the agricultural “frontier” crept forward. No one chose hierarchy and working longer hours; women did not embrace arthritic toes; these things crept up on them.

No matter how many stone tools, burned seeds, or house foundations archaeologists dig up, they will never be able to prove either theory, but once again genetics has come (partly) to the rescue. In the 1970s Luca Cavalli-Sforza of Stanford University began a massive survey of European blood groups and nuclear DNA. His team found a consistent gradient of gene frequencies from southeast to northwest (Figure 2.5), which, they pointed out, mapped quite well onto the archaeological evidence for the spread of farming shown in Figure 2.4. Their conclusion: after migrants from western Asia brought farming to Europe, their descendants largely replaced the aboriginal foragers, pushing their remnants into the far north and west.

The archaeologist Colin Renfrew argued that linguistics also supported Cavalli-Sforza’s scenario: the first farmers, he suspected, not only replaced European genes with southwest Asian ones but also replaced Europe’s native languages with Indo-European ones from the Hilly Flanks, leaving just isolated pockets of older tongues such as Basque. The drama of dispossession that ended the original affluent society is inscribed in modern Europeans’ bodies and reenacted every time they open their mouths.

At first the new evidence only increased the scholarly arguments. Linguists immediately challenged Renfrew, arguing that modern European languages would differ much more from one another if they had really begun diverging from an ancestral tongue six or seven millennia ago, and in 1996 an Oxford team led by Bryan Sykes challenged Cavalli-Sforza on the genetics. Sykes looked at mitochondrial DNA rather than the nuclear DNA Cavalli-Sforza had studied, and instead of a southeast–northwest progression like that in Figure 2.5, he identified a pattern too messy to be represented easily on a map, finding six groups of genetic lineages, only one of which could plausibly be linked to agricultural migrants from western Asia. Sykes suggested that the other five groups are much older, going back mostly to the original out-of-Africa peopling of Europe 25,000 to 50,000 years ago; all of which, he concluded, indicates that Europe’s first farmers were mainly aboriginal foragers who decided to settle down, rather than the descendants of immigrants from the Hilly Flanks.

image

Figure 2.5. A story written in blood: Luca Cavalli-Sforza’s interpretation of Europe’s genetic makeup, based on a massive sample of nuclear DNA. He concluded that this map, showing degrees of genetic similarity of modern populations to the hypothesized colonists from the Hilly Flanks, with 8 representing complete similarity and 1 the lowest level of correspondence (measuring the first principal component in his statistical manipulation of the results, accounting for 95 percent of the variation in the sample), showed that colonists descended from the Hilly Flanks spread agriculture across Europe. But many archaeologists and some geneticists disagree.

The Cavalli-Sforza and Sykes teams squared off fiercely in the pages of the American Journal of Human Genetics in 1997, but since then their positions have steadily converged. Cavalli-Sforza now calculates that immigrant farmers from western Asia account for 26–28 percent of European DNA; Sykes puts the figure nearer 20 percent. To say that one of Europe’s first farmers descended from southwest Asian immigrants for every three or four who descended from natives is oversimplifying, but is not far wrong.

PREDESTINATION

Neither Cavalli-Sforza’s and Renfrew’s claims nor Sykes’s alternative—nor even the emerging compromise between them—would have made the students at Nanterre very happy, because all the theories treat the triumph of farming as inevitable. Competition, genetics and archaeology imply, has little to do with exams or teachers, because it has always been with us. Its logic means that things had to turn out more or less as they did.

But is this true? People, after all, have free will. Sloth, greed, and fear may be the motors of history, but each of us gets to choose among them. If three-quarters or more of Europe’s first farmers descended from aboriginal foragers, surely prehistoric Europeans could have stopped farming in its tracks if enough of them had decided against intensifying cultivation. So why did that not happen?

Sometimes it did. After sweeping from what is now Poland to the Paris Basin in a couple of hundred years before 5200 BCE, the wave of agricultural advance ground to a halt (Figure 2.4). For a thousand years hardly any farmers invaded the last fifty or sixty miles separating them from the Baltic Sea and few Baltic foragers took up more intensive cultivation. Here foragers fought for their way of life. Along the farming/foraging fault line we find remarkable numbers of fortified settlements and skeletons of young men killed by blunt-instrument traumas on the front and left sides of their skulls—just what we would expect if they died fighting face-to-face with right-handed opponents using stone axes. Several mass graves may even be grisly relics of massacres.

We will never know what acts of heroism and savagery went on along the edge of the North European Plain seven thousand years ago, but geography and economics probably did as much as culture and violence to fix the farming/foraging frontier. Baltic foragers lived in a chilly Garden of Eden, where rich marine resources supported dense populations in year-round villages. Archaeologists have unearthed great mounds of seashells, leftovers from feasts, which piled up around the hamlets. Nature’s bounty apparently allowed the foragers to have their cake (or shellfish) and eat it: there were enough foragers to stand up to farmers but not so many that they had to shift toward farming to feed themselves. At the same time, farmers found that the plants and animals that had originally been domesticated in the Hilly Flanks fared less well this far north.

We frankly do not know why farming did finally move north after 4200 BCE. Some archaeologists emphasize push factors, proposing that farmers multiplied to the point that they steamrollered all opposition; others stress pull factors, proposing that a crisis within forager society opened the north to invasion. But however it ended, the Baltic exception seems to prove the rule that once farming appeared in the Hilly Flanks the original affluent society could not survive.

In saying this I am not denying the reality of free will. That would be foolish, although plenty of people have succumbed to the temptation. The great Leo Tolstoy, for instance, closed his novel War and Peace with an odd excursus denying free will in history—odd, because the book is studded with agonized decisions (and indecisions), abrupt changes of mind, and not a few foolish blunders, often with momentous consequences. All the same, said Tolstoy, “Free will is for history only an expression connoting what we do not know about the laws of human history.” He continued:

The recognition of man’s free will as something capable of influencing historical events … is the same for history as the recognition of a free force moving the heavenly bodies would be for astronomy … If there is even a single body moving freely, then the laws of Kepler and Newton are negated and no conception of the movement of the heavenly bodies any longer exists. If any single action is due to free will, then not a single historical law can exist, nor any conception of historical events.

This is nonsense. High-level nonsense, to be sure, but nonsense all the same. On any given day any prehistoric forager could have decided not to intensify, and any farmer could have walked away from his fields or her grindstone to gather nuts or hunt deer. Some surely did, with immense consequences for their own lives. But in the long run it did not matter, because the competition for resources meant that people who kept farming, or farmed even harder, captured more energy than those who did not. Farmers kept feeding more children and livestock, clearing more fields, and stacking the odds still further against foragers. In the right circumstances, like those prevailing around the Baltic Sea in 5200 BCE, farming’s expansion slowed to a crawl. But such circumstances could not last forever.

Farming certainly suffered local setbacks (overgrazing, for instance, seems to have turned the Jordan Valley into a desert between 6500 and 6000 BCE), but barring a climatic disaster like a new Younger Dryas, all the free will in the world could not stop agricultural lifestyles from expanding to fill all suitable niches. The combination of brainy Homo sapiens with warm, moist, and stable weather plus plants and animals that could evolve into domesticated forms made this as inevitable as anything can be in this world.

By 7000 BCE the dynamic, expansive agricultural societies at the western end of Eurasia were unlike anything else on earth, and by this point it makes sense to distinguish “the West” from the rest. Yet while the West was different from the rest, the differences were not permanent, and across the next few thousand years people began independently inventing agriculture in perhaps half a dozen places across the Lucky Latitudes (Figure 2.6).

The earliest and clearest case outside the Hilly Flanks is China. Cultivation of rice began in the Yangzi Valley between 8000 and 7500 BCE and of millet in north China by 6500. Millet was fully domesticated around 5500 and rice by 4500, and pigs were domesticated between 6000 and 5500. Recent finds suggest that cultivation began almost as early in the New World too. Cultivated squash was evolving toward domesticated forms by 8200 BCE in northern Peru’s Nanchoc Valley and in Mexico’s Oaxaca Valley by 7500–6000 BCE. Peanuts appear in Nanchoc by 6500, and while archaeological evidence that wild teosinte was evolving into domesticated corn in Oaxaca goes back only to 5300 BCE, geneticists suspect that the process actually began as early as 7000.

The Chinese and New World domestications were definitely independent of events in the Hilly Flanks, but things are less clear in Pakistan’s Indus Valley. Domesticated barley, wheat, sheep, and goats all appear abruptly at Mehrgarh around 7000 BCE—so abruptly that many archaeologists think that migrants from the Hilly Flanks carried them there. The presence of wheat seems particularly telling, because so far no one has identified local wild wheats from which domesticated wheat could have evolved anywhere near Mehrgarh. Botanists have not explored the region very thoroughly (not even the Pakistani army has much stomach for poking around these wild tribal lands), so there may be surprises in store. And while present evidence does suggest that Indus Valley agriculture was an offshoot of the Hilly Flanks, we should note that it rapidly went its own way, with local zebu cattle domesticated by 5500 BCE and a sophisticated, literate urban society emerging by 2500 BCE.

image

Figure 2.6. Promised lands: seven regions around the world where domestication of plants or animals may have begun independently between 11,000 and 5000 BCE

The eastern Sahara Desert was wetter around 7000 BCE than it is now, with strong monsoon rains filling lakes every summer, but it was still a brutal place to live. Adversity was apparently the mother of invention here: cattle and sheep could not survive in the wild, but foragers could eke out a living if they herded animals from lake to lake. Between 7000 and 5000 BCE the foragers turned themselves into pastoralists and their wild cattle and sheep into larger, tamer animals.

By 5000 BCE agriculture was also emerging in two highland zones, one in Peru, where llama or alpaca were being herded and quinoa seeds were mutating to wait for the harvester, and one in New Guinea. The New Guinean evidence has been as controversial as that from the Indus Valley, but it now seems clear that by 5000 BCE highlanders were burning off forests, draining swamps, and domesticating bananas and taro.

These regions have had very different histories, but, like the Hilly Flanks, each was the starting point for a distinctive economic, social, and cultural tradition that has lasted down to our own day. Here we can finally answer the question that has dogged us since Chapter 1, of how to define the West. We saw there the historian Norman Davies’s criticisms of what he called the “elastic geography” of definitions of the West, “designed,” as he put it, “to further the interests of their authors.” Davies threw the baby out with the bathwater, refusing to speak of the West at all. Thanks to the time depth archaeology provides, we can now do better.

The modern world’s great civilizations all go back to these original episodes of domestication at the end of the Ice Age. There is no need to let the intellectual squabbles Davies describes rob us of “the West” as an analytical category: it is simply a geographical term, referring to those societies that descended from the westernmost Eurasian core of domestication, in the Hilly Flanks. It makes no sense to talk about “the West” as a distinctive region before about 11,000 BCE, when cultivation began making the Hilly Flanks unusual; and the concept starts to become an important analytical tool only after 8000 BCE, when other agricultural cores also started appearing. By 4500 BCE the West had expanded to include most of Europe, and in the last five hundred years colonists have taken it to the Americas, the Antipodes, and Siberia. “The East,” naturally enough, simply means those societies that descended from the easternmost core of domestication that began developing in China by 7500 BCE. We can also speak of comparable New World, South Asian, New Guinean, and African traditions. Asking why the West rules really means asking why societies descended from the agricultural core in the Hilly Flanks, rather than those descended from the cores in China, Mexico, the Indus Valley, the eastern Sahara, Peru, or New Guinea, came to dominate the planet.

One long-term lock-in explanation springs to mind immediately: that people in the Hilly Flanks—the first Westerners—developed agriculture thousands of years before anyone else in the world because they were smarter. They passed their smartness on with their genes and languages when they spread across Europe; Europeans took it along when they colonized other parts of the globe after 1500 CE; and that is why the West rules.

Like the racist theories discussed in Chapter 1, this is almost certainly wrong, for reasons the evolutionist and geographer Jared Diamond laid out forcefully in his classic book Guns, Germs, and Steel. Nature, Diamond observed, is just not fair. Agriculture appeared in the Hilly Flanks thousands of years earlier than anywhere else not because the people living here were uniquely smart, but because geography gave them a head start.

There are 200,000 species of plants in the world today, Diamond observed, but only a couple of thousand are edible, and only a couple of hundred have much potential for domestication. In fact, more than half the calories consumed today come from cereals, and above all wheat, corn, rice, barley, and sorghum. The wild grasses these cereals evolved from are not spread evenly over the globe. Of the fifty-six grasses with the biggest, most nutritious seeds, thirty-two grow wild in southwest Asia and the Mediterranean Basin. East Asia has just six wild species; Central America, five; Africa south of the Sahara, four; North America, also four; Australia and South America, two each; and western Europe, one. If people (in large groups) were all much the same and foragers all over the world were roughly equally lazy, greedy, and scared, it was overwhelmingly likely that people in the Hilly Flanks would domesticate plants and animals before anyone else because they had more promising raw materials to work with.

The Hilly Flanks had other advantages too. It took just one genetic mutation to domesticate wild barley and wheat, but turning teosinte into something recognizable as corn called for dozens. The people who entered North America around 14,000 BCE were no lazier or stupider than anyone else, nor did they make a mistake by trying to domesticate teosinte rather than wheat. There was no wild wheat in the New World. Nor could immigrants bring domesticated crops with them from the Old World, because they could enter the Americas only while there was a land bridge from Asia. When they crossed, before the rising oceans drowned the land bridge around 12,000 BCE, there were no domesticated food crops to bring; by the time there were domesticated food crops,* the land bridge was submerged.

Turning from crops to animals, the odds favored the Hilly Flanks almost as strongly. There are 148 species of large (over a hundred pounds) mammals in the world. By 1900 CE just 14 of them had been domesticated. Seven of the 14 were native to southwest Asia; and of the world’s 5 most important domesticates (sheep, goat, cow, pig, and horse), all but the horse had wild ancestors in the Hilly Flanks. East Asia had 5 of the 14 potentially domesticable species and South America just 1. North America, Australia, and Africa south of the Sahara had none at all. Africa, of course, teems with big animals, but there are obvious challenges in domesticating species such as lions, who will eat you, or giraffes, who can outrun even lions.

We should not, then, assume that people in the Hilly Flanks invented agriculture because they were racially or culturally superior. Because they lived among more (and more easily) domesticable plants and animals than anyone else, they mastered them first. Concentrations of wild plants and animals in China were less favorable, but still good; domestication came perhaps two millennia later there. Herders in the Sahara, who had just sheep and cattle to work with, needed another five hundred years, and since the desert could not support crops, they never became farmers. New Guinean highlanders had the opposite problem, with just a narrow range of plants and no domesticable large animals. They needed a further two thousand years and never became herders. The agricultural cores in the Sahara and New Guinea, unlike the Hilly Flanks, China, the Indus Valley, Oaxaca, and Peru, did not develop their own cities and literate civilizations—not because they were inferior, but because they lacked the natural resources.

Native Americans had more to work with than Africans and New Guineans but less than Hilly Flankers and people in China. Oaxacans and Andeans moved quickly, cultivating plants (but not animals) within twenty-five centuries of the end of the Younger Dryas. Turkeys and llamas, their only domesticable animals other than dogs, took centuries more.

Australians had the most limited resources of all. Recent excavations show that they experimented with eel farming and, given another few thousand years, may well have created domesticated lifestyles. Instead, European colonists overwhelmed them in the eighteenth century CE, importing wheat and sheep, descendants of the original agricultural revolution in the Hilly Flanks.

So far as we can tell, people were indeed much the same everywhere. Global warming gave everyone new choices, among working less, working the same amount and eating more, or having more babies, even if that meant working harder. The new climate regime also gave them the option of living in larger groups and moving around less. Everywhere in the world, people who chose to stay put, breed more, and work harder squeezed out those who made different choices. Nature just made the whole process start earlier in the West.

EAST OF EDEN

Maybe so, the advocate of long-term lock-in theories might agree; maybe people really are much the same everywhere, and maybe geography did make Westerners’ jobs easier. Yet there is more to history than weather and the size of seeds. Surely the details of the particular choices people made among working less, eating more, and raising bigger families matter too. The end of a story is often written in its beginning, and perhaps the West rules today because the culture created in the Hilly Flanks more than ten thousand years ago, the parent from which all subsequent Western societies descend, just had more potential than the cultures created in other core regions around the world.

Let us take a look, then, at the best-documented, oldest, and (in our own times) most powerful civilization outside the West, that which began in China. We need to find out how much its earliest farming cultures differed from those in the West and whether these differences set East and West off along different trajectories, explaining why Western societies came to dominate the globe.

Until recently archaeologists knew very little about early agriculture in China. Many scholars even thought that rice, that icon of Chinese cuisine in our own day, began its history in Thailand, not China. The discovery of wild rice growing in the Yangzi Valley in 1984 showed that rice could have been domesticated here after all, but direct archaeological confirmation remained elusive. The problem was that while bakers inevitably burn some of their bread, preserving charred wheat or barley seeds for archaeologists to find, boiling, the sensible way to cook rice, rarely has this result. Consequently it is much harder for archaeologists to recover ancient rice.

A little ingenuity, however, soon got archaeologists around this roadblock. In 1988 excavators at Pengtoushan in the Yangzi Valley (Figure 2.7) noticed that around 7000 BCE potters began mixing rice husks and stalks into their clay to prevent pots cracking in the kiln, and close study revealed surefire signs that these plants were being cultivated.

The real breakthroughs, though, began in 1995, when Yan Wenming of Peking University* teamed up with the American archaeologist Richard MacNeish, as hardcore a fieldworker as any in the world. (MacNeish, who began digging in Mexico in the 1940s, logged an awe-inspiring 5,683 days in the trenches—nearly ten times what I have managed to do; and when he died in 2001, aged eighty-two, it was with his boots on, in an accident while doing fieldwork in Belize. He reportedly talked archaeology with the ambulance driver all the way to the hospital.) MacNeish brought to China not only decades of expertise studying early agriculture but also the archaeobotanist Deborah Pearsall, who in turn brought a new scientific technique. Even though rice rarely survives in archaeological deposits, all plants absorb tiny amounts of silica from groundwater. The silica fills some of the plant’s cells, and when the plant decays it leaves microscopic cell-shaped stones, called phytoliths, in the soil. Careful study of phytoliths can reveal not just whether rice was being eaten but also whether it was domesticated.

image

Figure 2.7. The beginning of the East: sites in what is now China discussed in this chapter

Yan and MacNeish dug a sixteen-foot-deep trench in Diaotonghuan Cave near the Yangzi Valley, and Pearsall was able to show from phytoliths that by 12,000 BCE people were uprooting wild rice and bringing it back to the cave. Rather like the Hilly Flanks, where wild wheat, barley, and rye flourished as the world warmed up, this was a hunter-gatherer golden age. There is no sign in the phytoliths that rice was evolving toward domestic forms the way rye was evolving at Abu Hureyra, but the Younger Dryas was clearly just as devastating in the Yangzi Valley as in the West. Wild rice virtually disappeared from Diaotonghuan by 10,500 BCE, only to return when the weather improved after 9600. Coarse pottery, probably vessels for boiling the grains, became common about that time (2,500 years before the first pottery from the Hilly Flanks). Around 8000 BCE the phytoliths start getting bigger, a sure sign that people were cultivating the wild rice. By 7500 BCE fully wild and cultivated grains were equally common at Diaotonghuan; by 6500, fully wild rice had disappeared.

A cluster of excavations in the Yangzi Delta since 2001 supports this timeline, and by 7000 BCE people in the Yellow River valley had clearly begun cultivating millet. Jiahu, a remarkable site between the Yangzi and Yellow rivers, had cultivated rice and millet and perhaps also domesticated pigs by 7000 BCE, and at Cishan a fire around 6000 BCE scorched and preserved almost a quarter of a million pounds of large millet seeds in eighty storage pits. At the bottom of some pits, under the millet, were complete (presumably sacrificed) dog and pig skeletons, some of the earliest Chinese evidence for domesticated animals.

As in the West, domestication involved countless small changes across many centuries in a range of crops, animals, and techniques. The high water table at Hemudu in the Yangzi Delta has given archaeologists a bonanza, preserving huge amounts of waterlogged rice as well as wood and bamboo tools, all dating from 5000 BCE onward. By 4000, rice was fully domesticated, as dependent on human harvesters as were wheat and barley in the West. Hemudans also had access to domesticated water buffalo and were using buffalo shoulder blades as spades. In northern China’s Wei Valley archaeologists have documented a steady shift from hunting toward full-blown agriculture after 5000 BCE. This was clearest in the tools being used: stone spades and hoes replaced axes as people moved from simply clearing patches in the forest to cultivating permanent fields, and spades got bigger as farmers turned the soil more deeply. In the Yangzi Valley recognizable rice paddies, with raised banks for flooding, may go back as far as 5700 BCE.

Early Chinese villages, like Jiahu around 7000 BCE, looked quite like the first villages in the Hilly Flanks, with small, roughly round semisubterranean huts, grindstones, and burials between the houses. Between fifty and a hundred people lived at Jiahu. One hut was slightly larger than the others but the very consistent distribution of finds suggests that wealth and gender distinctions were still weak and cooking and storage were communal. This was changing by 5000 BCE, when some villages had 150 residents and were protected by ditches. At Jiangzhai, the best-documented site of this date, huts faced an open area containing two large piles of ash, which may be remains of communal rituals.

The Jiangzhai sacrifices—if such they are—look pretty tame compared to the shrines Westerners had already been building for several thousand years, but two remarkable sets of finds in graves at Jiahu suggest that religion and ancestors were every bit as important as in the Hilly Flanks. The first consists of thirty-plus flutes carved from the wing bones of red-crowned cranes, all found in richer-than-average male burials. Five of the flutes can still be played. The oldest, from around 7000 BCE, had five or six holes, and while they were not very subtle instruments, modern Chinese folk songs can be played on them. By 6500 BCE seven holes were normal and the flutemakers had standardized pitch, which probably means that groups of flautists were performing together. One grave of around 6000 BCE held an eight-hole flute, capable of playing any modern melody.

All very interesting; but the flutes’ full significance becomes clear only in the light of twenty-four rich male graves containing turtle shells, fourteen of which had simple signs scratched on them. In one grave, dating around 6250 BCE, the deceased’s head had been removed (shades of Çatalhöyük!) and replaced with sixteen turtle shells, two of them inscribed. Some of these signs—in the eyes of some scholars, at least—look strikingly like pictograms in China’s earliest full-blown writing system, used by the kings of the Shang dynasty five thousand years later.

I will come back to the Shang inscriptions in Chapter 4, but here I just want to observe that while the gap between the Jiahu signs (around 6250 BCE) and China’s first proper writing system (around 1250 BCE) is almost as long as that between the strange symbols from Jerf al-Ahmar in Syria (around 9000 BCE) and the first proper writing in Mesopotamia (around 3300 BCE), China has more evidence for continuity. Dozens of sites have yielded the odd pot with an incised sign, particularly after 5000 BCE. All the same, specialists disagree fiercely over whether the crude Jiahu scratchings are direct ancestors of the five-thousand-plus symbols of the Shang writing system.

Not the least of the arguments in favor of links is the fact that so many Shang texts were also scratched on turtle shells. Shang kings used these shells in rituals to predict the future, and traces of this practice definitely go back to 3500 BCE; could it be, the excavators of Jiahu now ask, that the association of turtle shells, writing, ancestors, divination, and social power began before 6000 BCE? As anyone who has read Confucius knows, music and rites went together in first-millennium-BCE China; could the flutes, turtle shells, and writing in the Jiahu graves be evidence that ritual specialists able to talk to the ancestors emerged more than five thousand years earlier?

That would be a remarkable continuity, but there are parallels. Earlier in the chapter I mentioned the peculiar twin-headed statues with giant staring eyes, dating around 6600 BCE, found at ‘Ain Ghazal in Jordan; Denise Schmandt-Besserat, an art historian, has pointed out that descriptions of the gods written down in Mesopotamia around 2000 BCE are strikingly like these statues. In East and West alike, some elements of the first farmers’ religions may have been extremely long-lived.

Even before the discoveries at Jiahu, Kwang-chih Chang of Harvard University—the godfather of Chinese archaeology in America from the 1960s until his death in 2001—had suggested that the first really powerful people in China had been shamans who persuaded others that they could talk to animals and ancestors, fly between worlds, and monopolize communication with the heavens. When Chang presented this theory, in the 1980s, the evidence available only allowed him to trace such specialists back to 4000 BCE, a time when Chinese societies were changing rapidly and some villages were turning into towns. By 3500 BCE some communities had two or three thousand residents, as many as Çatalhöyük or ‘Ain Ghazal had had three thousand years earlier, and a handful of communities could mobilize thousands of laborers to build fortifications from layer upon layer of pounded earth (good building stone is rare in China). The most impressive wall, at Xishan, was ten to fifteen feet thick and ran for more than a mile. Even today it still stands eight feet high in places. Parts of children’s skeletons in clay jars under the foundations may have been sacrifices, and numerous pits full of ash within the settlement contained adults in poses suggesting struggle, sometimes mixed with animal bones. These may have been ritual murders like those from Çayönü in Turkey, and there is some evidence that such grisly rites go back to 5000 BCE in China.

If Chang was right that shamans were taking on leadership roles by 3500 BCE, they may have lived in the large houses, covering up to four thousand square feet, that now appeared in some towns (archaeologists often call these “palaces,” though that is a bit grandiose). These had plastered floors, big central hearths, and ash pits holding animal bones (from sacrifices?). One contained a white marble object that looks like a scepter. The most interesting “palace,” at Anban, stood on high ground in the middle of the town. It had stone pillar bases and was surrounded by pits full of ash, some holding pigs’ jaws that had been painted red, others pigs’ skulls wrapped in cloth, and others still little clay figurines with big noses, beards, and odd pointed hats (much like Halloween witches).

Two things about these statuettes get archaeologists excited. First, the tradition of making them lasted for thousands of years, and a very similar model found in a palace dating around 1000 BCE had the Chinese character wu painted on its hat. Wu meant “religious mediator,” and some archaeologists conclude that all these figurines, including the ones from Anban, must represent shamans. Second, many of the figurines look distinctly Caucasian, not Chinese. Similar models have been found all the way from Anban to Turkmenistan in central Asia along the path that later became the Silk Road, linking China to Rome. Shamanism remains strong in Siberia even today; for a price, ecstatic visionaries will still summon up spirits and predict the future for adventurous tourists. The Anban figurines might indicate that shamans from the wilds of central Asia were incorporated into Chinese traditions of religious authority around 4000 BCE; they might, some archaeologists think, even mean that the shamans of the Hilly Flanks, going back to 10,000 BCE, had some very distant influence on the East.

Other fragments of evidence suggest this is perfectly possible. The most extraordinary is a set of mummies from the Tarim Basin, almost totally unknown to Westerners until the magazines Discover, National Geographic, Archaeology, and Scientific American gave them a publicity blitz in the mid-1990s. The mummies’ Caucasoid features seem to prove beyond doubt that people did move from central and even western Asia into China’s northwest fringes by 2000 BCE. In a coincidence that seems almost too good to be true, not only did the people buried in the Tarim Basin have beards and big noses like the Anban figurines; they were also partial to pointed hats (one grave contained ten woolen caps).

It is easy to get overexcited about a few unusual finds, but even setting aside the wilder theories, it looks like religious authority was as important in early China as in the early Hilly Flanks. And if any doubts remain, two striking discoveries from the 1980s should dispel them. Archaeologists excavating at Xishuipo were astonished to find a grave of around 3600 BCE containing an adult man flanked by images of a dragon and a tiger laid out in clamshells. More clamshell designs surrounded the grave. One showed a dragon-headed tiger with a deer on its back and a spider on its head; another, a man riding a dragon. Chang suggested that the dead man was a shaman and that the inlays showed animal spirits that helped him to move between heaven and earth.

A discovery in Manchuria, far to the northeast, surprised archaeologists even more. Between 3500 and 3000 BCE a cluster of religious sites covering two square miles developed at Niuheliang. At its heart was what the excavators called the “Goddess Temple,” an odd, sixty-foot-long semisubterranean corridor with chambers containing clay statues of humans, pig-dragon hybrids, and other animals. At least six statues represented naked women, life size or larger, sitting cross-legged; the best preserved had red painted lips and pale blue eyes inset in jade, a rare, hard-to-carve stone that was becoming the luxury good of choice all over China. Blue eyes being unusual in China, it is tempting to link these statues to the Caucasian-looking figurines from Anban and the Tarim Basin mummies.

Despite Niuheliang’s isolation, half a dozen clusters of graves are scattered through the hills around the temple. Mounds a hundred feet across mark some of the tombs, and the grave goods include jade ornaments, one of them carved into another pig-dragon. Archaeologists have argued, with all the ingenuity that lack of evidence brings out in us, over whether the men and women buried here were priests or chiefs. Quite possibly they were both at once. Whoever they were, though, the idea of burying a minority of the dead—usually men—with jade offerings caught on all over China, and by 4000 BCE actual worship of the dead was beginning at some cemeteries. It looks as if people in the Eastern core were just as concerned about ancestors as those in the Hilly Flanks, but expressed their concern in different ways—by removing skulls from the dead and keeping them among the living in the West, and by honoring the dead at cemeteries in the East. But at both ends of Eurasia the greatest investments of energy were in ceremonies related to gods and ancestors, and the first really powerful individuals seem to have been those who communicated with invisible worlds of ancestors and spirits.

By 3500 BCE agricultural lifestyles rather like those created in the West several millennia earlier—involving hard work, food storage, fortifications, ancestral rites, and the subordination of women and the young to men and the old—seem to have been firmly established in the Eastern core and were expanding from there. The Eastern agricultural dispersal also seems to have worked rather like that in the West; or, at least, the arguments among the experts take similar forms in both parts of the world. Some archaeologists think people from the core area between the Yellow and Yangzi rivers migrated across East Asia, carrying agriculture with them; others, that local foraging groups settled down, domesticated plants and animals, traded with one another, and developed increasingly similar cultures over large areas. The linguistic evidence is just as controversial as in Europe, and as yet there are not enough genetic data to settle anything. All we can say with confidence is that Manchurian foragers were living in large villages and growing millet by at least 5000 BCE. Rice was being cultivated far up the Yangzi Valley by 4000, on Taiwan and around Hong Kong by 3000, and in Thailand and Vietnam by 2000. By then it was also spreading down the Malay Peninsula and across the South China Sea to the Philippines and Borneo (Figure 2.8).

Just like the Western agricultural expansion, the Eastern version also hit some bumps. Phytoliths show that rice was known in Korea by 4400 BCE and millet by 3600, the latter reaching Japan by 2600, but prehistoric Koreans and Japanese largely ignored these novelties for the next two thousand years. Like northern Europe, coastal Korea and Japan had rich marine resources that supported large, permanent villages ringed by huge mounds of discarded seashells. These affluent foragers developed sophisticated cultures and apparently felt no urge to take up farming. Again like Baltic hunter-gatherers in the thousand years between 5200 and 4200 BCE, they were numerous (and determined) enough to see off colonists who tried to take their land but not so numerous that hunger forced them to take up farming to feed themselves.

image

Figure 2.8. Going forth and multiplying, version two: the expansion of agriculture from the Yellow-Yangzi valleys, 6000–1500 BCE

In both Korea and Japan the switch to agriculture is associated with the appearance of metal weapons—bronze in Korea around 1500 BCE and iron in Japan around 600 BCE. Like European archaeologists who argue over whether push or pull factors ended the affluent Baltic foraging societies, some Asianists think the weapons belonged to invaders who brought agriculture in their train while others suggest that internal changes so transformed foraging societies that farming and metal weapons suddenly became attractive.

By 500 BCE rice paddies were common on Kyushu, Japan’s southern island, but the expansion of farming hit another bump on the main island of Honshu. It took a further twelve hundred years to get a foothold on Hokkaido in the north, where food-gathering opportunities were particularly rich. But in the end, agriculture displaced foraging as completely in the East as in the West.

BOILING AND BAKING, SKULLS AND GRAVES

How are we to make sense of all this? Certainly East and West were different, from the food people ate to the gods they worshipped. No one would mistake Jiahu for Jericho. But were the cultural contrasts so strong that they explain why the West rules? Or were these cultural traditions just different ways of doing the same things?

Table 2.1 summarizes the evidence. Three points, I think, jump out. First, if the culture created in the Hilly Flanks ten thousand years ago and from which subsequent Western societies descend really did have greater potential for social development than the culture created in the East, we might expect to see some strong differences between the two sides of Table 2.1. But we do not. In fact, roughly the same things happened in both East and West. Both regions saw the domestication of dogs, the cultivation of plants, and domestication of large (by which I mean weighing over a hundred pounds) animals. Both saw the gradual development of “full” farming (by which I mean high-yield, labor-intensive systems with fully domesticated plants and wealth and gender hierarchy), the rise of big villages (by which I mean more than a hundred people), and, after another two to three thousand years, towns (by which I mean more than a thousand people). In both regions people constructed elaborate buildings and fortifications, experimented with protowriting, painted beautiful designs on pots, used lavish tombs, were fascinated with ancestors, sacrificed humans, and gradually expanded agricultural lifestyles (slowly at first, accelerating after about two thousand years, and eventually swamping even the most affluent foragers).

image

Table 2.1. The beginnings of East and West compared

Second, not only did similar things happen in both East and West, but they also happened in more or less the same order. I have illustrated this in Table 2.1 with lines linking the parallel developments in each region. Most of the lines have roughly the same slope, with developments coming first in the West, followed about two thousand years later by the East.* This strongly suggests that developments in the East and West shared a cultural logic; the same causes had the same consequences at both ends of Eurasia. The only real difference is that the process started two thousand years earlier in the West.

Third, though, neither of my first two points is completely true. There are exceptions to the rules. Crude pottery appeared in the East at least seven thousand years earlier than in the West, and lavish tombs one thousand years earlier. Going the other way, Westerners built monumental shrines more than six thousand years before Easterners. Anyone who believes that these differences set East and West off along distinct cultural trajectories that explain why the West rules needs to show why pottery, tombs, and shrines matter so much, while anyone (me, for instance) who believes they did not really matter needs to explain why they diverge from the general pattern.

Archaeologists mostly agree why pottery appeared so early in the East: because the foods available there made boiling so important. Easterners needed containers they could put on a fire and consequently mastered pottery very early. If this is right, rather than focusing on the pottery itself, we should perhaps be asking whether differences in food preparation locked East and West into different trajectories of development. Maybe, for instance, Western cooking provided more nutrients, making for stronger people. That, though, is not very convincing. Skeletal studies give a rather depressing picture of life in both the Eastern and Western agricultural cores: it was, as the seventeenth-century English philosopher Thomas Hobbes more or less put it, poor, nasty, and short (though not necessarily brutish). In East and West alike early farmers were malnourished and stunted, carried heavy parasite loads, had bad teeth, and died young; in both regions, improvements in agriculture gradually improved diet; and in both regions, fancier elite cuisines eventually emerged. The Eastern reliance on boiling was one among many differences in cooking, but overall, the similarities between Eastern and Western nutrition vastly outweigh the differences.

Or maybe different ways of preparing food led to different patterns of eating and different family structures, with long-term consequences. Again, though, it is far from obvious that this actually happened. In both East and West the earliest farmers seem to have stored, prepared, and perhaps eaten food communally, only to shift across the next few millennia toward doing these things at the family level. Once more, East-West similarities outweigh differences. The early Eastern invention of pottery is certainly an interesting difference, but it does not seem very relevant to explaining why the West rules.

What of the early prominence of elaborate tombs in the East and the even earlier prominence of elaborate shrines in the West? These developments, I suspect, were actually mirror images of each other. Both, as we have seen, were intimately linked to an emerging obsession with ancestors at a time when agriculture was making inheritance from the dead the most important fact of economic life. For reasons we will probably never understand, Westerners and Easterners came up with different ways to give thanks to and get in contact with the ancestors. Some Westerners apparently thought that passing their relatives’ skulls around, filling buildings with bulls’ heads and pillars, and sacrificing people in them would do the trick; Easterners generally felt better about burying carved jade animals with their relatives, worshipping their tombs, and eventually beheading other people and throwing them in the grave too. Different strokes for different folks; but similar results.

I think we can draw two conclusions from Table 2.1. First, early developments in the Western and Eastern cores were mostly rather similar. I do not want to gloss over the very real differences in everything from styles of stone tools to the plants and animals people ate, but none of these differences lends much support to the long-term lock-in theory we have been discussing, that something about the way Western culture developed after the Ice Age gave it greater potential than Eastern culture and explains why the West rules. That seems to be untrue.

If any long-term lock-in theory can survive confronting the evidence in Table 2.1, it is the simplest one of all, that thanks to geography the West got a two-thousand-year head start in development, retained that lead long enough to arrive first at industrialization, and therefore dominates the world. To test this theory we need to extend our East-West comparison into more recent periods to see if that is what really happened.

That sounds simple enough, but the second lesson of Table 2.1 is that cross-cultural comparison is tricky. Just listing important developments in two columns was only a start, because making sense of the anomalies in Table 2.1 required us to put boiling and baking, skulls and graves into context, to find out what they meant within prehistoric societies. And that plunges us into one of the central problems of anthropology, the comparative study of societies.

When nineteenth-century European missionaries and administrators started collecting information about the peoples in their colonial empires, their reports of outlandish customs amazed scholars. Anthropologists catalogued these activities, speculating about their diffusion around the globe and what they might tell us about the evolution of more civilized (by which they meant more European-like) behavior. They sent eager graduate students to exotic climes to collect more examples. One of these bright young men was Bronislaw Malinowski, a Pole studying in London who found himself in the Trobriand Islands in 1914 when World War I broke out. Unable to get a boat home, Malinowski did the only reasonable thing; after sulking briefly in his tent, he got himself a girlfriend. Consequently, by 1918 he understood Trobriand culture from the inside out. He grasped what his professors in their book-lined studies had missed: that anthropology was really about explaining how customs fit together. Comparisons must be between complete functioning cultures, not individual practices torn out of context, because the same behavior may have different meanings in different contexts. Tattooing your face, for instance, may make you a rebel in Kansas, but it marks you as a conformist in New Guinea. Equally, the same idea may take different forms in different cultures, like the circulating skulls and buried jades in the prehistoric West and East, both expressing reverence toward ancestors.

Malinowski would have hated Table 2.1. We cannot, he would have insisted, make a grab bag of customs from two functioning cultures and pass judgment on which was doing better. And we certainly cannot write books with chapter titles like “The West Takes the Lead.” What, he would have asked, do we mean by “lead”? How on earth do we justify disentangling specific practices from the seamless web of life and measuring them against each other? And even if we could disentangle reality, how would we know which bits to measure?

All good questions, and we need to answer them if we are to explain why the West rules—even though the search for answers has torn anthropology apart over the last fifty years. With some trepidation, I will now plunge into these troubled waters.
