CHAPTER EIGHT
I
BY THE TIME Mr Marsham came to build his house, it would have been unthinkable for a man of his position not to have a formal dining room in which to entertain, but just how formal and how spacious and whether situated at the front of the house or the back are all matters that would have required some reflection since dining rooms were still novel enough that their dimensions and situation could not be assumed. In the end, as we have seen, Mr Marsham decided to eliminate the proposed servants’ hall and give himself a thirty-foot-long dining room – big enough to accommodate eighteen or twenty guests, a very large number for a country parson. Even if he entertained frequently, as would seem to be indicated, it must have been a lonely room on the nights he dined alone. At least the view across to the churchyard was a pleasant one.
We know almost nothing about how Mr Marsham used this room, not simply because we know so little about Mr Marsham but also because we know surprisingly little about certain aspects of dining rooms themselves. In the middle of the table was likely to have stood an object of costly elegance known as an epergne (pronounced ‘ay-pairn’), consisting of dishes connected by ornamental branches, each dish containing a selection of fruits or nuts. For a century or so no table of discernment was without its epergne, but why it was called an epergne no one remotely knows. The word doesn’t exist in French. It just seems to have popped into being from nowhere.
Around the epergne on Mr Marsham’s table are likely to have been cruet stands – elegant little racks, usually of silver, holding condiments – and these too have a mystery. Traditional cruet stands came with two glass bottles with stoppers, for oil and vinegar, and three matching casters – that is, bottles with perforated tops for sprinkling (or casting) flavourings on to food. Two of the casters contained salt and pepper, but what went into the third caster is unknown. It is generally presumed to have been dried mustard, but that is really because no one can think of anything more likely. ‘No satisfactory alternative has ever been suggested’ is how the food historian Gerard Brett has put it. In fact, there is no evidence to suggest that mustard was ever desired or utilized in such ready fashion by diners at any time in history. Probably for this reason, by Mr Marsham’s day the third caster was rapidly disappearing from tables – as indeed were cruet stands themselves. Condiments now increasingly varied from meal to meal as certain ones became associated with particular foods – mint sauce with lamb, mustard with ham, horseradish with beef and so on. Scores of other flavourings were applied in the kitchen. But just two were considered so indispensable that they never left the table at all. I refer of course to salt and pepper.
Why it is that these two, out of all the hundreds of spices and flavourings available, enjoy such durable veneration is one of the questions with which we began the book. The answer is a complicated one, but dramatic too. I can tell you at once that nothing you touch today will have more bloodshed, suffering and woe attached to it than the innocuous twin pillars of your salt and pepper set.
Start with salt. Salt is a cherished part of our diet for a very fundamental reason. We need it. We would die without it. It is one of about forty tiny specks of incidental matter – odds and ends from the chemical world – that we must get into our bodies to give ourselves the necessary zip and balance to sustain daily life. Collectively they are known as vitamins and minerals, and there is a great deal – a really quite surprising amount – that we don’t know about them, including how many of them we need, what exactly some of them do, and in what amounts they are optimally consumed.
That they were needed at all was a piece of knowledge that was an amazingly long time coming. Until well into the nineteenth century, the notion of a well-balanced diet had occurred to no one. All food was believed to contain a single vague but sustaining substance – ‘the universal aliment’. A pound of beef had the same value for the body as a pound of apples or parsnips or anything else, and all that was required of a human was to make sure that an ample amount was taken in. The idea that embedded within particular foods were vital elements that were central to one’s well-being had not yet been thought of. That’s not altogether surprising because the symptoms of dietary deficiency – lethargy, aching joints, increased susceptibility to infection, blurred vision – seldom suggest dietary imbalance. Even today if your hair started to fall out or your ankles swelled alarmingly, it is unlikely your first thoughts would turn to what you had eaten lately. Still less would you think about what you hadn’t eaten. So it was with bewildered Europeans, who for a very long time died in often staggering numbers without knowing why.
Of scurvy alone it has been suggested that as many as two million sailors died between 1500 and 1850. Typically it killed about half the crew on any long voyage. Various desperate expedients were tried. Vasco da Gama on a cruise to India and back encouraged his men to rinse their mouths with urine, which did nothing for their scurvy and can’t have done much for their spirits either. Sometimes the toll was truly shocking. On a three-year voyage in the 1740s, a British naval expedition under the command of Commodore George Anson lost 1,400 men out of two thousand who sailed. Four were killed by enemy action; virtually all the rest died of scurvy.
Over time people noticed that sailors with scurvy tended to recover when they got to a port and received fresh foods, but nobody could agree what it was about those foods that helped them. Some thought it wasn’t the foods at all, but just a change of air. In any case, it wasn’t possible to keep foods fresh for weeks on long voyages, so simply identifying efficacious vegetables and the like was slightly pointless. What was needed was some kind of distilled essence – an antiscorbutic, as the medical men termed it – that would be effective against scurvy but portable too. In the 1760s, a Scottish doctor named William Stark, evidently encouraged by Benjamin Franklin, conducted a series of patently foolhardy experiments in which he tried to identify the active agent by, somewhat bizarrely, depriving himself of it. For weeks he lived on only the most basic of foods – bread and water chiefly – to see what would happen. What happened was that in just over six months he killed himself, from scurvy, without coming to any helpful conclusions at all. Some years earlier, in 1747, James Lind, a naval surgeon, had conducted a more scientifically rigorous (and personally less risky) experiment by finding twelve sailors who had scurvy already, dividing them into pairs, and giving each pair a different putative elixir – vinegar to one, garlic and mustard to another, oranges and lemons to a third, and so on. Five of the groups showed no improvement, but the pair given oranges and lemons made a swift and total recovery. Amazingly, Lind decided to ignore the significance of the result and doggedly stuck with his personal belief that scurvy was caused by incompletely digested food building up toxins within the body.
It fell to the great Captain James Cook to get matters on to the right course. On his circumnavigation of the globe in 1768–71, Captain Cook packed a range of antiscorbutics to experiment on, including thirty gallons of carrot marmalade and a hundred pounds of sauerkraut for every crew member. Not one person died from scurvy on his voyage – a miracle that made him as much a national hero as his charting of Australia’s east coast or any of his other many achievements on that epic undertaking. The Royal Society, Britain’s premier scientific institution, was so impressed that it awarded him the Copley Medal, its highest distinction. The British navy itself was not so quick, alas. In the face of all the evidence, it prevaricated for another generation before finally providing citrus juice to sailors as a matter of routine.*
The realization that an inadequate diet was the cause not only of scurvy but of a range of common diseases was remarkably slow in coming. Not until 1897 did a Dutch physician named Christiaan Eijkman, working in Java, notice that people who ate wholegrain rice didn’t get beriberi, a debilitating nerve disease, while people who ate polished rice very often did. Clearly some thing or things were present in some foods and missing in others, and served as a determinant of well-being. It was the beginning of an understanding of ‘deficiency disease’, as it was known, and it won him the Nobel Prize in medicine even though he had no idea what these active agents were.
The real breakthrough came in 1912, when Casimir Funk, a Polish biochemist working at the Lister Institute in London, isolated thiamine, or Vitamin B1 as it is now more generally known. Realizing it was part of a family of molecules, he combined the terms ‘vital’ and ‘amines’ to make the new word ‘vitamines’. Although Funk was right about the vital part, it turned out that only some of the vitamines were amines (that is to say, nitrogen-bearing), and so the name was changed to ‘vitamins’ to make it ‘less emphatically inaccurate’, in Anthony Smith’s nice phrase.
Funk also asserted that there was a direct correlation between a deficiency of specific amines and the onset of certain diseases – scurvy, pellagra and rickets in particular. This was a huge insight and had the potential to save millions of shattered lives, but unfortunately it wasn’t heeded. The leading medical textbook of the day continued to insist that scurvy was caused by any number of factors – ‘insanitary surroundings, overwork, mental depression and exposure to cold and damp’ were the principal ones its authors thought worth listing – and only marginally by dietary deficiency. Worse still, in 1917 America’s leading nutritionist, E. V. McCollum of the University of Wisconsin – the man who actually coined the terms Vitamins A and B – declared that scurvy was not in fact a dietary deficiency disease at all, but was caused by constipation.
Finally, in 1939 a Harvard Medical School surgeon named John Crandon decided to settle matters once and for all by the age-old method of withholding Vitamin C from his diet for as long as it took to make himself really ill. It took a surprisingly long time. For the first eighteen weeks, his only symptom was extreme fatigue. (Remarkably, he continued to operate on patients throughout this period.) But in the nineteenth week he took an abrupt turn for the worse – so much so that he would almost certainly have died had he not been under close medical supervision. He was injected with 1,000 milligrams of Vitamin C and was restored to life almost at once. Interestingly, he had never acquired the one set of symptoms that everyone associates with scurvy: the falling out of teeth and bleeding of gums.
Meanwhile, it turned out that Funk’s vitamines were not nearly as coherent a group as originally thought. Vitamin B proved to be not one vitamin but several, which is why we have B1, B2 and so on. To add to the confusion, Vitamin K has nothing to do with an alphabetical sequence. It was called K because its Danish discoverer, Henrik Dam, dubbed it Koagulations vitamin for its role in blood clotting. Later, folic acid was added to the group. Sometimes it is called Vitamin B9, but more often it is just called folic acid. Two other vitamins – pantothenic acid and biotin – don’t have numbers or, come to that, much profile, but that is largely because they never cause us problems. No human has yet been found with insufficient quantities of either.
The vitamins are, in short, a disorderly bunch. It is almost impossible to define them in a way that comfortably embraces them all. A standard textbook definition is that a vitamin is ‘an organic molecule not made in the human body which is required in small amounts to sustain normal metabolism’, but in fact Vitamin K is made in the body, by bacteria in the gut. Vitamin D, one of the most vital substances of all, is actually a hormone, and most of it comes to us not through diet but through the magical action of sunlight on skin.
Vitamins are curious things. It is odd, to begin with, that we cannot produce them ourselves when we are so very dependent on them for our well-being. If a potato can produce Vitamin C, why can’t we? Within the animal kingdom only humans and guinea pigs are unable to synthesize Vitamin C in their own bodies. Why us and guinea pigs? No point asking. Nobody knows. The other remarkable thing about vitamins is the striking disproportion between dosage and effect. Put simply, we need vitamins a lot, but we don’t need a lot of them. Three ounces of Vitamin A, lightly but evenly distributed, will keep you purring for a lifetime. Your B1 requirement is even less – just one ounce spread over seventy or eighty years. But just try doing without those energizing specks and see how long it is before you start to fall to pieces.
Much the same considerations apply to the vitamins’ fellow particles, the minerals. The fundamental difference between vitamins and minerals is that vitamins come from the world of living things – from plants and bacteria and so on – and minerals do not. In a dietary context, ‘minerals’ is simply another name for the chemical elements – calcium, iron, iodine, potassium and the like – that sustain us. Ninety-two elements occur naturally on earth, though some in only very tiny amounts. Francium, for instance, is so rare that it is thought that the whole planet may contain just twenty francium atoms at any given time. Of the rest, most pass through our bodies at some time or other, sometimes quite regularly, but whether or not they are important is often still not known. You have a lot of bromine distributed through your tissues. It behaves as if it is there for a purpose, but nobody yet has worked out what that purpose might be. Remove zinc from your diet and you will get a condition known as hypogeusia in which your taste buds stop working, making food boring or even revolting, but until as recently as 1977 zinc was thought to have no role in diet at all.
Several elements, like mercury, thallium and lead, seem to do nothing good for us and are positively detrimental if consumed excessively.* Others are also unnecessary but far more benign, of which the most notable is gold. That is why gold can be used as a filling for teeth: it doesn’t do you any harm. Of the rest, some twenty-two elements are known or thought to be of central importance to life, according to Essentials of Medical Geology. We are certain about sixteen of them; the other six we merely think are vital. Nutrition is a remarkably inexact science. Consider magnesium, which is necessary for the successful management of proteins within the cells. Magnesium abounds in beans, cereals and leafy vegetables, but modern food processing reduces the magnesium content by up to 90 per cent – effectively annihilates it. So most of us are not taking in anything like the recommended daily amount – not that anyone really knows what that amount should be. Nor can anybody specify the consequences of magnesium deficiency. We could be taking years off our lives, or points off our IQ, or the edge off our memory, or almost any other bad thing you care to suggest. We just don’t know. Arsenic is similarly uncertain. Obviously if you get too much in your system you will very quickly wish you hadn’t. But we all get a little arsenic in our diets, and some authorities are absolutely certain it is vital to our well-being in these tiny amounts. Others are not so sure.
Which brings us back, in a very roundabout way, to salt. Of all the minerals the most vital in dietary terms is sodium, which we mostly consume in the form of sodium chloride – table salt.* Here the problem is not that we are getting too little, but possibly way too much. We don’t need all that much – 200 milligrams a day, about what you would get with six or eight vigorous shakes of a salt cellar – but we take in about sixty times that amount on average. In a normal diet it is almost impossible not to because there is so much salt in the processed foods we eat with such ravenous devotion. Often it is heaped into foods that don’t seem salty at all – breakfast cereals, prepared soups and ice creams, for instance. Who would guess that an ounce of cornflakes contains more salt than an ounce of salted peanuts? Or that the contents of one can of soup – almost any can at all – will considerably exceed the total daily recommended salt allowance for an adult?
Archaeological evidence shows that once people settled down in agricultural communities they began to suffer salt deficiencies – something that they had not experienced before – and so had to make a special effort to find salt and get it into their diet. One of the mysteries of history is how they knew they needed to do so because the absence of salt in the diet awakes no craving. It makes you feel bad and eventually it kills you – without the chloride in salt, cells simply shut down, like an engine without fuel – but at no point would a human being think: ‘Gosh, I could sure do with some salt.’ So how they knew to go searching for it is an interesting question, particularly as in some places getting it required some ingenuity. Ancient Britons, for instance, heated sticks on a beach, then doused them in the sea and scraped the salt off. Aztecs, by contrast, acquired salt by evaporating their own urine. These are not intuitive acts, to put it mildly. Yet getting salt into the diet is one of the most profound urges in nature and it is a universal one. Every society in the world in which salt is freely available consumes, on average, forty times the amount needed to sustain life. We just can’t get enough of the stuff.
Salt is now so ubiquitous and cheap that we forget how intensely desirable it was once, but for much of history it drove men to the edges of the world. Salt was needed to preserve meats and other foods, and so was often required in vast quantities: Henry VIII had 25,000 oxen slaughtered and salted for one military campaign in 1513. So salt was a hugely strategic resource. In the Middle Ages caravans of as many as forty thousand camels – enough to form a column seventy miles long – conveyed salt across the Sahara from Timbuktu to the lively markets of the Mediterranean.
People have fought wars over it and been sold into slavery for it. So salt has caused some suffering in its time. But that is nothing compared with the hardship and bloodshed and murderous avarice associated with a range of tiny foodstuffs that we don’t need at all and could do perfectly well without. I refer to salt’s complements in the condiment world: the spices.* Nobody would die without spices, but plenty have died for them.
A very big part of the history of the modern world is the history of spices, and the story starts with an unprepossessing vine that once grew only on the Malabar coast of south-western India. The vine is called Piper nigrum. If presented with it in its natural state you would almost certainly struggle to guess its importance, but it is the source of all three ‘true’ peppers – black, white and green. The little round hard peppercorns that we pour into our household pepper mills are actually the vine’s tiny fruit, dried to pack a gritty kick. The difference between the varieties is simply a function of when they are picked and how they are processed.
Pepper has been appreciated since time immemorial in its native territory, but it was the Romans who made it an international commodity. Romans loved pepper. They even peppered their desserts. Their attachment to it kept the price high and gave it a lasting value. Spice traders from the distant east couldn’t believe their luck. ‘They arrive with gold and depart with pepper,’ one Tamil trader remarked in wonder. When the Goths threatened to sack Rome in 408, the Romans bought them off with a tribute that included three thousand pounds of pepper. For his wedding meal in 1468, Charles the Bold, Duke of Burgundy, ordered 380 pounds of black pepper – far more than even the largest wedding party could eat – and displayed it conspicuously so that people could see how fabulously wealthy he was.
Incidentally, the long-held idea that spices were used to mask rotting food doesn’t stand up to much scrutiny. The only people who could afford most spices were the ones least likely to have bad meat, and anyway spices were too valuable to be used as a mask. So when people had spices they used them carefully and sparingly, and not as a sort of flavoursome cover-up.
Pepper accounted for some 70 per cent of the spice trade by bulk, but other commodities from further afield – nutmeg and mace, cinnamon, ginger, cloves and turmeric, as well as several largely forgotten exotics such as calamus, asafoetida, ajowan, galangal and zedoary – began to find their way to Europe, and these became even more valuable. For centuries spices were not just the world’s most valued foodstuffs, they were the most treasured commodities of any type. The Spice Islands, hidden away in the Far East, remained so desirable and prestigious and exotic that when James I gained possession of two small islets, it was such a coup that for a time he was pleased to style himself ‘King of England, Scotland, Ireland, France, Puloway and Puloroon’.
Nutmeg and mace were the most valuable because of their extreme rarity.* Both came from a tree, Myristica fragrans, which was found on the lower slopes of just nine small, volcanic islands rising sheer from the Banda Sea, amid a mass of other islands – none with quite the right soils and microclimates to support the nutmeg tree – between Borneo and New Guinea in what is now Indonesia. Cloves, the dried flower buds of a type of myrtle tree, grew on six similarly selective islands some two hundred miles to the north in the same chain, known to geography as the Moluccas but to history as the Spice Islands. Just to put this in perspective, the Indonesian archipelago consists of sixteen thousand islands scattered over 735,000 square miles of sea, so it is little wonder that the locations of fifteen of them remained a mystery to Europeans for so long.
All of these spices reached Europe through a complicated network of traders, each of whom naturally took a cut. By the time they reached European markets, nutmeg and mace fetched as much as sixty thousand times what they sold for in the Far East. It was only a matter of time before those at the European end of the supply chain concluded that it would be a lot more lucrative to cut out the intermediate stages and keep all the profits for themselves.
So began the great age of exploration. Christopher Columbus is the best remembered of the early explorers but not the first. In 1487, five years ahead of him, Fernão Dulmo and João Estreito set off from Portugal into the uncharted Atlantic, vowing to turn back after forty days if they hadn’t found anything by then. That was the last anyone ever saw of them. It turned out that finding the right winds to bring one back to Europe wasn’t at all easy. Columbus’s real achievement was managing to cross the ocean successfully in both directions. Though an accomplished enough mariner, he was not terribly good at a great deal else, especially geography, the skill that would seem most vital in an explorer. It would be hard to name any figure in history who has achieved more lasting fame with less competence. He spent large parts of eight years bouncing around Caribbean islands and coastal South America convinced that he was in the heart of the Orient and that Japan and China were at the edge of every sunset. He never worked out that Cuba is an island and never once set foot on, or even suspected the existence of, the land mass to the north that everyone thinks he discovered: the United States. He filled his holds with valueless iron pyrite thinking it was gold and with what he confidently believed to be cinnamon and pepper. The first was actually a worthless tree bark and the second were not true peppers but chilli peppers – excellent when you have grasped the general idea of them, but a little eye-wateringly astonishing on first hearty chomp.
Everyone but Columbus could see that this was not the solution to the spice problem, and in 1497 Vasco da Gama, sailing for Portugal, decided to go the other way to the Orient, around the bottom of Africa. This was a much trickier proposition than it sounds. Contrary prevailing winds and currents wouldn’t allow a southern-sailing vessel to simply follow the coastline, as logic would indicate. Instead it was necessary for Gama to sail far out into the Atlantic Ocean – almost to Brazil, in fact, though he didn’t know it – to catch westerly breezes that would shoot his fleet around the southern cape. This made it a truly epic voyage. Europeans had never sailed this far before. Gama’s ships were out of sight of land for as much as three months at a time. This was the voyage that effectively discovered scurvy. No earlier sea voyages had been long enough for the symptoms of scurvy to take hold.
It also brought two other unhappy traditions to the maritime world. One was the introduction of syphilis to Asia – just five years after Columbus’s men conveyed it to Europe from the Americas – helping to make it a truly international disease. The other was the casual infliction of extreme violence on innocent people. Vasco da Gama was a breathtakingly vicious man. On one occasion he captured a Muslim ship carrying hundreds of men, women and children, locked the passengers and crew in the hold, carried off everything of value, and then – gratuitously, appallingly – set the ship ablaze. Almost everywhere he went Gama abused or slaughtered the people he encountered, and so set a tone of distrust and brutish violence that would characterize and diminish the whole of the age of discovery.
Gama never got to the Spice Islands. Like most others, he thought the East Indies were just a little east of India – hence their name, of course – but in fact they proved to be way beyond India, so far beyond that Europeans arriving there began to wonder if they had sailed most of the way around the world and were almost back to the Americas. If so, then a trip to the Indies for spices would be more simply effected by sailing west, past the new lands lately discovered by Columbus, rather than going all the way around Africa and across the Indian Ocean.
In 1519, Ferdinand Magellan set off in five leaky ships, in a brave but seriously underfunded operation, to find a western route. What he discovered was that between the Americas and Asia was a greater emptiness than anyone had ever imagined Earth had room for: the Pacific Ocean. No one has ever suffered more in the quest to get rich than Ferdinand Magellan and his crew as they sailed in growing disbelief across the Pacific in 1521. Their provisions all but exhausted, they devised perhaps the least appetizing dish ever served: rat droppings mixed with wood shavings. ‘We ate biscuit which was no longer biscuit but powder of biscuits swarming with worms,’ recorded one crew member. ‘It stank strongly of the urine of rats. We drank yellow water that had been putrid for many days. We also ate some ox hides that covered the top of the mainyard . . . and often we ate sawdust from boards.’ They went three months and twenty days without fresh food or water before finding relief and a shoreline in Guam – and all in a quest to fill the ships’ holds with dried flower buds, bits of tree bark and other aromatic scrapings to sprinkle on food and make into pomanders.
In the end, only eighteen out of two hundred and sixty men survived the voyage. Magellan himself was killed in a skirmish with natives in the Philippines. The surviving eighteen did very well out of the voyage, however. In the Spice Islands they loaded up with 53,000 pounds of cloves, which they sold in Europe for a profit of 2,500 per cent, and almost incidentally in the process became the first human beings to circle the globe. The real significance of Magellan’s voyage was not that it was the first to circumnavigate the planet, but that it was the first to realize just how big that planet was.
Although Columbus had little idea of what he was doing, it was his voyages that ultimately proved the most important, and we can date the moment that that became so with precision. On 5 November 1492, on Cuba, two of his crewmen returned to the ship carrying something no one from their world had ever seen before: ‘a sort of grain [that the natives] call maiz which was well tasted, bak’d, dry’d and made into flour’. In the same week, they saw some Taino Indians sticking cylinders of smouldering weed in their mouths, drawing smoke into their chests and pronouncing the exercise satisfying. Columbus took some of this odd product home with him too.
And so began the process known to anthropologists as the Columbian Exchange – the transfer of foods and other materials from the New World to the Old and vice versa. By the time the first Europeans arrived in the New World, farmers there were harvesting more than a hundred kinds of edible plants – potatoes, tomatoes, sunflowers, marrows, aubergines, avocados, a whole slew of beans and squashes, sweet potatoes, peanuts, cashews, pineapples, papaya, guava, yams, manioc (or cassava), pumpkins, vanilla, four types of chilli pepper and chocolate, among rather a lot else – not a bad haul.
It has been estimated that 60 per cent of all the crops grown in the world today originated in the Americas. These foods weren’t just incorporated into foreign cuisines. They effectively became the foreign cuisines. Imagine Italian food without tomatoes, Greek food without aubergines, Thai and Indonesian foods without peanut sauce, curries without chillies, hamburgers without French fries or ketchup, African food without cassava. There was scarcely a dinner table in the world in any land to east or west that wasn’t drastically improved by the foods of the Americas.
No one foresaw this at the time, however. For the Europeans the irony is that the foods they found they mostly didn’t want, while the ones they wanted they didn’t find. Spices were what they were after and the New World was dismayingly deficient in those, apart from chillies, which were too fiery and startling to be appreciated at first. Many promising New World foods failed to attract any interest at all. The indigenous people of Peru had 150 varieties of potato, and valued them all. An Incan of five hundred years ago would have been able to identify varieties of potato in much the way that a modern wine snob identifies grapes. The Quechuan language of Peru still has a thousand words for different types or conditions of potatoes. Hantha, for instance, describes a potato that is distinctly on the old side but still has edible flesh. The conquistadores, however, brought home only a few varieties, and there are those who say they were by no means the most delicious. Further north, the Aztecs had a great fondness for amaranth, a cereal that produces a nutritious and tasty grain. It was as popular a foodstuff in Mexico as maize, but the Spanish were offended by the way the Aztecs used it, mixed with blood, in rites involving human sacrifice, and refused to touch it.
The Americas, it may be said, gained much from Europe in return. Before the Europeans stormed into their lives, people in Central America had only five domesticated creatures – the turkey, duck, dog, bee and cochineal insect – and no dairy products. Without European meat and cheese, Mexican food as we know it could not exist. Wheat in Kansas, coffee in Brazil, beef in Argentina and a great deal more would not be possible.
Less happily, the Columbian Exchange also involved disease. With no immunity to many European diseases, the natives sickened easily and ‘died in heapes’. One epidemic, probably viral hepatitis, killed an estimated 90 per cent of the natives in coastal Massachusetts. A once-mighty tribal group in the region of modern Texas and Arkansas, the Caddo, saw its population fall from an estimated 200,000 to just 1,400 – a drop of over 99 per cent. An equivalent outbreak in modern New York would reduce the population to 56,000 – ‘not enough to fill Yankee Stadium’ in the chilling phrase of Charles C. Mann. Altogether disease and slaughter reduced the native population of Mesoamerica by an estimated 90 per cent in the first century of European contact. In return they gave Columbus’s men syphilis.*
Over time the Columbian Exchange also of course involved the wholesale movement of peoples, the setting up of colonies, the transfer – sometimes enforced – of language, religion and culture. Almost no single act in history has more profoundly changed the world than Columbus’s blundering search for eastern spices.
There is another irony in all this. By the time the age of exploration was fully under way, the heyday of spices was coming to an end anyway. In 1545, just twenty years or so after Magellan’s epic voyage, an English warship, the Mary Rose, sank in mysterious circumstances off the English coast near Portsmouth. More than four hundred men died. When the ship was recovered in the late twentieth century, marine archaeologists were surprised to find that almost every sailor owned a tiny bag of black pepper, which he kept attached to his waist. It would have been one of his most prized possessions. The fact that even a common sailor of 1545 could now afford a supply of pepper, however modest, meant that pepper’s days of hyper-rarity and extreme desirability were at an end. It was on its way to taking its place alongside salt as a standard and comparatively humble condiment.
People continued to fight over the more exotic spices for another century or so, and sometimes even over the more common ones. In 1599, eighty British merchants, exasperated by the rising cost of pepper, formed the British East India Company with a view to getting a piece of the market for themselves. This was the initiative that brought King James the treasured isles of Puloway and Puloroon, but in fact the British never had much success in the East Indies, and in 1667, in the Treaty of Breda, they ceded all claims to the region to the Dutch in return for a small piece of land of no great significance in North America. The piece of land was called Manhattan.
By now, however, there were new commodities that people wanted even more, and the quest for these was, in the most unexpected ways, about to change the world still further.
II
Two years before his unhappy adventure with ‘many worms creeping’, Samuel Pepys recorded in his diary a rather more prosaic milestone in his life. On 25 September 1660, he tried a new hot beverage for the first time, recording in his diary: ‘And afterwards I did send for a cup of tee (a China drink), of which I never had drank before.’ Whether he liked it or not Pepys didn’t say, which is a shame as it is the first mention we have in English of anyone’s drinking a cup of tea.
A century and a half later, in 1812, a Scottish historian named David Macpherson, in a dry piece of work called History of the European Commerce with India, quoted the tea-drinking passage from Pepys’s diary. That was a very surprising thing to do because in 1812 Pepys’s diaries were supposedly still unknown. Although they resided in the library of Magdalene College, Cambridge, and so were available for inspection, no one had ever looked into them – so it was thought – because they were written in a private code that had yet to be deciphered. How Macpherson managed to find and translate the relevant passage in six volumes of dense and secret scribblings, not to mention what gave him the inspiration to look there in the first place, are mysteries that are some distance beyond being answerable.
By chance, a Cambridge scholar, the Reverend George Neville, Master of Magdalene College, saw Macpherson’s passing reference to Pepys’s diaries and grew intrigued to know what else might be in them. Pepys after all had lived through momentous times – through the restoration of the monarchy, the last great plague epidemic, the Great Fire of London of 1666 – so their content was bound to be of interest. He commissioned a clever but penurious student named John Smith to see if he could crack the code and transcribe the diaries. The work took Smith three years. The result of course was the most celebrated diaries in the English language. Had Pepys not had that cup of tea, Macpherson not mentioned it in a dull history, Neville been less curious and young Smith less intelligent and dogged, the name Samuel Pepys would mean nothing to anyone but naval historians, and a very considerable part of what we know about how people lived in the second half of the seventeenth century would in fact be unknown. So it was a good thing that he had that cup of tea.
Normally, like most other people of his class and period, Pepys drank coffee, though coffee itself was still pretty novel in 1660. Britons had been vaguely familiar with coffee for decades, but principally as a queer, dark beverage encountered abroad. A traveller named George Sandys in 1610 grimly described coffee as being ‘blacke as soot, and tasting not much unlike it’. The word was spelled in any number of imaginative ways – as ‘coava’, ‘cahve’, ‘cauphe’, ‘coffa’ and ‘cafe’, among others – before finally coming ashore as ‘coffee’ in about 1650.
Credit for coffee’s popularity in England belongs to a man named Pasqua Rosee, who was Sicilian by birth and Greek by background, and who worked as a servant for Daniel Edwards, a British trader in Smyrna, now Izmir, in Turkey. Moving to England with Edwards, Rosee served coffee to Edwards’s guests, and this proved so popular that he was emboldened to open a café – the first in London – in a shed in the churchyard of St Michael Cornhill in the City of London in 1652. Rosee promoted coffee for its health benefits, claiming that it cured or prevented headaches, ‘defluxion of rheums’, wind, gout, scurvy, miscarriages, sore eyes and much else.
Rosee did very well out of his business, but his reign as premier coffee-maker didn’t last long. Some time after 1656 he was compelled to leave the country ‘for some misdemeanour’, which the record unfortunately doesn’t specify. All that is known is that he departed suddenly and was heard of no more. Others swiftly moved in to take his place. By the time of the Great Fire, more than eighty coffee houses were in business in London and they had become a central part of the life of the city.
The coffee served in the coffee houses wasn’t necessarily very good coffee. Because of the way coffee was taxed in Britain (by the gallon) the practice was to brew it in large batches, store it cold in barrels and reheat it a little at a time for serving. So coffee’s appeal in Britain was less to do with its being a quality beverage than a social lubricant. People went to coffee houses to meet people of shared interests, to gossip, read the latest journals and newspapers – a brand-new word and concept in the 1660s – and exchange information of value to their lives and business. When people wanted to know what was going on in the world, they went to a coffee house to find out. People took to using coffee houses as their offices – as, most famously, at Lloyd’s Coffee House on Lombard Street, which gradually evolved into Lloyd’s insurance market. William Hogarth’s father hit on the idea of opening a coffee house in which only Latin would be spoken. It failed spectacularly – toto bene, as Mr Hogarth himself might have said – and he spent years in debtors’ prison in unhappy consequence.
Although it was pepper and spices that brought the East India Company into being, its destiny was tea. Between 1699 and 1721 tea imports increased almost a hundredfold, from 13,000 pounds to 1.2 million pounds, then quadrupled again in the thirty years to 1750. When William Pitt the Younger massively cut the tax on tea in 1784, making up the lost revenue by raising the dreaded window tax (on the logical presumption that it was a lot harder to hide windows than to smuggle tea), the effect on consumption was immediate. Tea was slurped by labourers and daintily sipped by ladies. It was taken at breakfast, dinner and supper. It was the first beverage in history to belong to no class, and the first to have its own ritual slot in the day: teatime. It was easier to make at home than coffee, and it also went especially well with another great gustatory treat that was suddenly becoming affordable for the average wage earner: sugar. Britons came to adore sweet, milky tea as no other nation had (or even perhaps could). For something over a century and a half, tea was at the heart of the East India Company, and the East India Company was at the heart of the British Empire.
Not everyone got the hang of tea immediately. The poet Robert Southey related the story of a lady in the country who received a pound of tea as a gift from a city friend when it was still a novelty. Uncertain how to engage with it, she boiled it up in a pot, spread the leaves on toast with butter and salt, and served it to her friends, who nibbled it gamely and declared it interesting but not quite to their taste. Elsewhere, however, it raced ahead, in tandem with sugar.
The British had always loved sugar, so much so that when they first got access to it, about the time of Henry VIII, they put it on almost everything – on eggs and meat, and into their wine. They scooped it on to potatoes, sprinkled it over greens, ate it straight off the spoon if they could afford to. Even though sugar was very expensive, people consumed it till their teeth turned black, and if their teeth didn’t turn black naturally they blackened them artificially to show how wealthy and marvellously self-indulgent they were. But now, thanks to plantations in the West Indies, sugar was becoming increasingly affordable, and people were discovering that it went particularly well with tea.
Sweet tea became a national indulgence. By 1770 consumption of sugar was running at 20 pounds a head and most of that, it seems, was spooned into tea. (That sounds like quite a lot until you realize that Britons today eat 80 pounds of sugar per person annually, while Americans pack away a decidedly robust 126 pounds of sugar per head.) As with coffee, tea was held to confer health benefits; among much else, it was said that it ‘assuageth the pains of the Bowels’. A Dutch doctor, Cornelius Bontekoe, recommended drinking fifty cups of tea a day – and in extreme cases as many as two hundred – in order to keep oneself sufficiently primed.
Sugar also played a big role in a less commendable development: the slave trade. Nearly all the sugar Britons consumed was grown on West Indian estates worked by slaves. We have a narrow tendency to associate slavery exclusively with the plantation economy of the southern US, but in fact plenty of other people got rich from slavery, not least the traders who shipped 3.1 million Africans across the ocean before the trade in humans was abolished in 1807.
Tea was adored and esteemed not just in Great Britain but in her overseas dominions, too. Tea was taxed in America as part of the hated Townshend duties. In 1770 these duties were repealed on everything but tea in what proved to be a fatal misjudgement. They were kept on tea partly to remind colonists of their subjugation to the crown and partly to help the East India Company out of a deep and sudden hole. The company had become hopelessly overextended. It had accumulated 17 million pounds of tea – a huge amount of a perishable product – and, perversely, had tried to create an air of well-being by paying out more in dividends than it could really afford. Bankruptcy loomed unless it could reduce its stockpiles. Hoping to ease it through the crisis, the British government gave the company an effective monopoly on tea sales in America. Every American knows what happened next.
On 16 December 1773, a group of eighty or so colonists dressed as Mohawk Indians boarded British ships in Boston Harbor, broke open 342 tea chests and dumped the contents overboard. That sounds like a fairly moderate act of vandalism. In fact, it was a year’s supply of tea for Boston, with a value of £18,000, so it was a grave and capital offence, and everyone involved knew it. Nobody at the time, incidentally, called it the Boston Tea Party; that name wasn’t first used until 1834. Nor was the behaviour of the crowds a matter of good-natured high spirits, as we Americans rather like to think. The mood was murderously ugly. The unluckiest person in all this was a British customs agent named John Malcolm. Malcolm had recently been hauled from a house in Maine and tarred and feathered, a blisteringly painful punishment since it involved the application of hot tar to bare skin. Usually it was applied with stiff brushes, which were painful enough in themselves, though in at least one instance the victim was simply held by his ankles and dunked head first into a barrel of tar. To the coating of tar were added handfuls of feathers before the victim was paraded through the streets, and often beaten or even hanged. So there was nothing at all jovial about tarring and feathering, and we can only imagine Malcolm’s dismay as he was hauled wriggling from his house a second time and given another ‘Yankee jacket’, as it was also known. Once dried, it took days of delicate picking and scrubbing to get the tar and feathers off. Malcolm sent a square of charred and blackened epidermis back to England with a note asking if he could please come home. His wish was granted. Meanwhile, however, America and Britain were implacably on the road to war. Fifteen months later the first shots were fired. As a versifier of the day noted:
What discontents, what dire events,
From trifling things proceed?
A little Tea, thrown in the Sea,
Has thousands caused to bleed.
At the same time that Britain was losing its American colonies, it was facing serious problems connected to tea from the other direction as well. By 1800 tea was embedded in the British psyche as the national beverage, and imports were running at 23 million pounds a year. Virtually all that tea came from China. This caused a large and chronic trade imbalance. The British resolved this problem in part by selling opium produced in India to the Chinese. Opium was a very considerable business in the nineteenth century, and not just in China. People in Britain and America – women in particular – took a lot of opium, too, mostly in the form of medicinal paregoric and laudanum. Imports of opium to the United States went from 24,000 pounds in 1840 to no less than 400,000 pounds in 1872, and it was women who mostly sucked it down, though quite a lot was given to children, too, as a treatment for croup. Franklin Delano Roosevelt’s grandfather Warren Delano made much of the family’s fortune by trading opium, a fact that the Roosevelt family has never exactly crowed about.
To the unending exasperation of the Chinese authorities, Britain became particularly skilled at persuading Chinese citizens to become opium addicts – university courses in the history of marketing really ought to begin with British opium sales – so much so that by 1838 Britain was selling almost five million pounds of opium to China every year. Unfortunately, this still wasn’t enough to offset the huge costs of importing tea from China. An obvious solution was to grow tea in some warm part of the expanding British empire. The problem was that the Chinese had always been secretive about the complicated processes of turning tea leaves into a refreshing beverage, and no one outside China knew how to get an industry going. Enter a remarkable Scotsman named Robert Fortune.
For three years in the 1840s, Fortune travelled around China, disguised as a native, collecting information on how tea was grown and processed. It was risky work: had he been caught he would certainly have been imprisoned and could well have been executed. Although Fortune spoke none of the languages of China, he got around that problem by pretending always to come from a distant province where another dialect prevailed. In the course of his travels, he not only learned the secrets of tea production, but also introduced to the West many valuable plants, among them the fan palm, the kumquat and several varieties of azaleas and chrysanthemums.
Under his guidance, tea cultivation was introduced to India in that curiously inevitable year 1851 with the planting of some 20,000 seedlings and cuttings. In half a century, from a base of nothing in 1850, tea production in India rose to 140 million pounds a year.
As for the East India Company, however, its period of glory came to an abrupt and unhappy conclusion. The precipitating event, unexpectedly enough, was the introduction of a new kind of rifle, the Enfield P53, at just about the time that tea cultivation was starting. The rifle was loaded in the old-fashioned way, by tipping powder down the barrel. The powder came in grease-coated paper cartridges which had to be bitten open. A rumour spread among the native sepoys, as the soldiers were known, that the grease used was made from the fat of pigs and cows – a matter of profoundest horror for Muslim and Hindu soldiers alike since the consumption of such fats, even unwittingly, would condemn them to eternal damnation. The East India Company’s officers handled the matter with stunning insensitivity. They court-martialled several Indian soldiers who refused to handle the new cartridges, and threatened to punish any others who didn’t fall into line. Many sepoys became convinced that it was all part of a plot to replace their own faiths with Christianity. By unfortunate coincidence, Christian missionaries had recently become active in India, fanning suspicions further. The upshot was the Sepoy Rebellion of 1857, in which the native soldiers turned on their British masters, whom they very much outnumbered, and killed them in large numbers. At Cawnpore, the rebels gathered together two hundred women and children in a hall and hacked them to pieces. Other innocent victims, it was reported, were thrown into wells and left to drown.
When news of these cruelties reached British ears, retribution was swift and unforgiving. Rebellious Indians were tracked down and executed in ways calculated to instil terror and regret. One or two were even fired from cannons, or so it is often recorded. Untold numbers were shot or summarily hanged. The whole episode left Britain profoundly shaken. More than five hundred books appeared on the uprising in its immediate aftermath. India, it was commonly agreed, was too big a country and too big a problem to leave in the hands of a business. Control of India passed to the British crown and the East India Company was wound up.
III
All of these foods, all of these discoveries, all of this endless fighting made their way back to England and ended up on dinner tables, and in a new kind of room: the dining room. The term ‘dining room’ didn’t acquire its modern meaning until the late seventeenth century, and the room itself didn’t become general in houses until even later. In fact, the term only just made it into Samuel Johnson’s dictionary of 1755. When Thomas Jefferson put a dining room in Monticello, it was quite a dashing thing to do. Previously meals had been served at little tables in any convenient room.
What caused dining rooms to come into being wasn’t a sudden universal urge to dine in a space exclusively dedicated to the purpose, but rather, by and large, a simple desire on the part of the mistress of the house to save her lovely new upholstered furniture from greasy desecration. Upholstered furniture, as we have lately seen, was expensive, and the last thing a proud owner wanted was to have anyone wiping fingers on it.
The arrival of the dining room marked a change not only in where the food was served, but how it was eaten and when. For one thing, forks were now suddenly becoming common. Forks had been around for a long time, but took for ever to gain acceptance. ‘Fork’ originally signified an agricultural implement and nothing more; it didn’t take on a food sense until the mid-fifteenth century, and then it described a large implement used to pin down a bird or joint for carving. The person credited with introducing the eating fork to England was Thomas Coryate, an author and traveller from the time of Shakespeare who was famous for walking huge distances – to India and back on one occasion. In 1611 he produced his magnum opus, called Coryate’s Crudities, in which he gave much praise to the dinner fork, which he had first encountered in Italy. The same book was also notable for introducing English readers to the Swiss folk hero William Tell and to a new device called the umbrella.
Eating forks were thought comically dainty and unmanly – and dangerous too, come to that. Since they had only two sharp tines, the scope for spearing one’s lip or tongue was great, particularly if one’s aim was impaired by wine and jollity. Manufacturers experimented with different numbers of tines – sometimes as many as six – before settling, late in the nineteenth century, on four as the number that people seemed most comfortable with. Why four should induce the optimum sense of security isn’t easy to say, but it does seem to be a fundamental fact of flatware psychology.
The nineteenth century also marked a time of change for the way food was served. Before the 1850s, nearly all the dishes of the meal were placed on the table at the outset. Guests would arrive to find the food waiting. They would help themselves to whatever was nearby and ask for other dishes to be passed or call a servant over to fetch one for them. This style of dining was traditionally known as service à la française, but now a new practice came in known as service à la russe in which food was delivered to the table in courses. A lot of people hated the new practice because it meant everyone had to eat everything in the same order and at the same pace. If one person was slow, it held up the next course for everyone else, and meant that food lost heat. Dinners now sometimes dragged on for hours, putting a severe strain on many people’s sobriety and nearly everyone’s bladders.
The nineteenth century also became the age of the overdressed dining table. A diner at a formal gathering might sit down to as many as nine wine glasses just for the main courses – more were brought for dessert – and a blinding array of silverware with which to conduct an assault on the many dishes put before him. The types of specialized eating implements for cutting, serving, probing, winkling and otherwise getting viands from serving dish to plate and from plate to mouth became almost numberless. As well as a generous array of knives, forks and spoons of a more or less conventional nature, the diner needed also to know how to recognize and manipulate specialized cheese scoops, olive spoons, terrapin forks, oyster prongs, chocolate muddlers, gelatin knives, tomato servers, and tongs of every size and degree of springiness. At one point, a single manufacturer offered no fewer than 146 different types of flatware for the table. Curiously, one of the few survivors of this culinary onslaught is also the most difficult to understand: the fish knife. No one has ever identified a single advantage conferred by its odd scalloped shape or worked out the original thinking behind it. There isn’t a single kind of fish that it cuts better or bones more delicately than a conventional knife does.
The overdressed dining table: table glass including decanters, claret jugs and a carafe, from Mrs Beeton’s The Book of Household Management.
Dining was, as one book of the period phrased it, ‘the great trial’, with rules ‘so numerous and so minute in respect of detail that they require the most careful study; and the worst of it is that none of them can be violated without exposing the offender to instant detection’. Protocol ruled every action. If you wished to take a sip of wine, you needed to find someone to drink with you. As one foreign visitor explained it in a letter home: ‘A messenger is often sent from one end of the table to the other to announce to Mr B—— that Mr A—— wishes to take wine with him; whereupon each, sometimes with considerable trouble, catches the other’s eye . . . When you raise your glass, you look fixedly at the one with whom you are drinking, bow your head, and then drink with great gravity.’
Some people needed more help with the rules of table behaviour than others. John Jacob Astor, one of the richest men in America but not evidently the most cultivated, astounded his hosts at one dinner party by leaning over and wiping his hands on the dress of the lady sitting next to him. One popular American guidebook, The Laws of Etiquette; or, Short Rules and Reflections for Conduct in Society, informed readers that they ‘may wipe their lips on the table cloth, but not blow their noses with it’. Another solemnly reminded readers that it was not polite in refined circles to smell a piece of meat while it was on one’s fork. It also explained: ‘The ordinary custom among well-bred persons is as follows: soup is taken with a spoon.’
Mealtimes moved around, too, until there was scarcely an hour of the day that wasn’t an important time to eat for somebody. Dining hours were dictated to some extent by the onerous and often preposterous obligations of making and returning social calls. The convention was to drop in on others between twelve and three each day. If someone called and left a card but you were out, etiquette dictated that you must return the call the next day. Not to do so was the gravest affront. What this meant in practice was that most people spent their afternoons dashing around trying to catch up with people who were dashing around in a similarly unproductive manner trying to catch up with them.
Partly for this reason the dinner hour moved later and later – from midday to mid-afternoon to early evening – though the new conventions were by no means taken up uniformly. One visitor to London in 1773 noted that in a single week he was invited to dinners that started successively at 1 p.m., 5 p.m., 3 p.m. and ‘half after six, dinner on table at seven’. Eighty years later when John Ruskin informed his parents that it had become his habit to dine at six in the evening, they received the news as if it marked the most dissolute recklessness. Eating so late, his mother told him, was dangerously unhealthy.
Another factor that materially influenced dining times was theatre hours. In Shakespeare’s day performances began about two o’clock, which kept them conveniently out of the way of mealtimes, but that was dictated largely by the need for daylight in open-air arenas like the Globe. Once plays moved indoors, starting times tended to get later and later and theatre-goers found it necessary to adjust their dining times accordingly – though this was done with a certain reluctance and even resentment. Eventually, unable or unwilling to modify their personal habits any further, the beau monde stopped trying to get to the theatre for the first act and took to sending a servant to hold their seats for them till they had finished dining. Generally they would show up – noisy, drunk and disinclined to focus – for the later acts. For a generation or so it was usual for a theatrical company to perform the first half of a play to an auditorium full of dozing servants who had no attachment to the proceedings and to perform the second half to a crowd of ill-mannered inebriates who had no idea what was going on.
Dinner finally became an evening meal in the 1850s, influenced by Queen Victoria. As the distance between breakfast and dinner widened, it became necessary to create a smaller meal around the middle of the day, for which the word ‘luncheon’ was appropriated. ‘Luncheon’ originally signified a lump or portion (as in ‘a luncheon of cheese’). In that sense it was first recorded in English in 1580. In 1755 Samuel Johnson was still defining it as a quantity of food – ‘as much food as one’s hand can hold’ – and only slowly over the next century did it come to signify, in refined circles at least, the middle meal of the day.
One consequential change is that where people used to get most of their calories at breakfast time and midday, with only a small evening top-up at suppertime, now those intakes are almost exactly reversed. Most of us consume the bulk – a sadly appropriate word here – of our calories in the evening and take them to bed with us, a practice that doesn’t do us any good at all. The Ruskins, it turns out, were right.
* The Naval Board used lime juice rather than lemon juice because it was cheaper, which is why British sailors became known as limeys. Lime juice wasn’t nearly as effective as lemon juice.
* Mercury especially so. It has been estimated that as little as 1/25 of a teaspoon of mercury could poison a sixty-acre lake. It is fairly amazing that we don’t get poisoned more often. According to one computation, no fewer than 20,000 chemicals in common use are also poisonous to humans if ‘touched, ingested or inhaled’. Most are twentieth-century creations.
* Sodium chloride is strange stuff because it is made up of two extremely aggressive elements: sodium and chlorine. Sodium and chlorine are the Hell’s Angels of the mineral kingdom. Drop a lump of pure sodium into a bucket of water and it will explode with enough force to kill. Chlorine is even more deadly. It was the active ingredient in the poison gases of the First World War and, as swimmers know, even in very dilute form it makes the eyes sting. Yet put these two volatile elements together and what you get is innocuous sodium chloride – common table salt.
* The difference between herbs and spices is that herbs come from the leafy part of plants and spices from the wood, seed, fruit or other non-leafy part.
* Nutmeg is the seed of the tree; mace is part of the flesh that surrounds the seed. Mace was actually the rarer of the two. About a thousand tons of nutmeg were harvested annually, but only about a hundred tons of mace.
* Amerindians got syphilis too, but suffered less from it, in much the way that Europeans suffered less from measles and mumps.