Chapter 3

Death Wisdom: The downside of knowing the future

How strange that this sole thing that is certain and common to all exercises almost no influence on men, and that they are the furthest from regarding themselves as the brotherhood of death!

—Nietzsche1

Tahlequah was twenty years old when she gave birth to her daughter on July 24, 2018. Although the infant was full term, she died shortly after birth. Under normal circumstances, there would be an expert on hand to determine the cause of death. But these were not normal circumstances.

Immediately after the baby died, Tahlequah did something that would soon take the world by storm. She began carrying her dead child with her everywhere she went. She did this for weeks on end in what witnesses called a tour of grief.2 During this period, she rarely ate. When she slept, members of her family would take turns carrying the infant themselves. “We do know her family is sharing the responsibility… that she’s not always the one carrying it, that they seem to take turns,” said Jenny Atkinson, who watched the event unfold.3

International news outlets traveled to Seattle, Washington, to bear witness to Tahlequah’s grief. There was an outpouring of sympathy from all over the world. People wrote poems about her. They posted drawings of her carrying her baby on Twitter. There was an op-ed in the New York Times from the author Susan Casey on how best to process the collective pain the public felt at watching this mother grieve.

On August 12, 2018, after seventeen days, Tahlequah finally let her infant go. The calf’s body sank to the bottom of the Pacific Ocean. A few days later, scientists from the Center for Whale Research at Friday Harbor in Washington confirmed that Tahlequah had moved on, hunting salmon off the coast of the San Juan Islands. She was back to her old self.

If it wasn’t clear by now, Tahlequah is not a human. She is an orca—popularly known as the killer whale, the largest dolphin species. Jenny Atkinson was also not just a witness, but the director of the Whale Museum in Washington, closely monitoring this unprecedented event. There are many examples of this behavior by dolphins in the peer-reviewed scientific literature: mothers carrying the dead bodies of their infants on their rostrums (beaks), constantly pushing them toward the surface. Dolphins care for sick or ailing family members in this way, supporting them near the surface to help them breathe. However, calf carrying typically lasts only a few hours, which is what makes Tahlequah’s seventeen-day vigil so extraordinary. It went on so long that her own health was affected: she was noticeably skinnier after weeks of not eating, focusing instead on pushing her calf through the water. Even scientists trained to dispassionately observe animal behavior were visibly shaken. “I am sobbing,” said Deborah Giles, a research scientist for the University of Washington Center for Conservation Biology. “I can’t believe she is still carrying her calf around.”4

Many newspaper reporters described Tahlequah’s behavior as an example of mourning, as an indisputable example of animal grief. These stories were peppered with words like vigil and funeral, concepts that are bound tightly with an understanding of—and response to—death that we typically think of as the domain of humans, not animals. Some animal behavior experts, however, argued that describing calf carrying as a product of grief is nothing more than anthropomorphizing: unjustifiably attributing humanlike emotions and cognition to animals. “We dilute a real, powerful and observable human emotion by granting other animals the same emotions so freely without any scientific rigour,” argued the zoologist Jules Howard in The Guardian.5

I don’t want to spend this chapter litigating the pitfalls of anthropomorphism, however. Instead, I want to tackle the specific problem of what death means to nonhuman animals. Because death means something to them. It meant something to Tahlequah. But what? This chapter is dedicated to figuring that out. And at the end of this chapter, even if we are sure that humans understand the meaning of death on a deeper level than Tahlequah or other animals—on such a deep level that we should reserve words like grief and mourning for our species alone—we are still left with a bigger question. Are humans better off than other species because of our understanding of death?

Death wisdom

What do animals know about death? Darwin himself wondered about this, asking in The Descent of Man, “Who can say what cows feel, when they surround and stare intently on a dying or dead companion?”6 Almost 150 years later, the anthropologist Barbara J. King published a book—How Animals Grieve—citing countless examples of animals from across the taxonomic spectrum reacting to the death of a social partner or family member in ways similar to Tahlequah. Her examples range from animals we typically associate with intelligence, like dolphins, to animals we don’t. “Chickens, like chimpanzees, elephants, and goats, have a capacity for grief,” writes King.7

The question of what animals know about death (and thus how they grieve) is part of comparative thanatology—a field of scientific inquiry attempting to understand animals’ death-knowledge.8 Comparative thanatologists want to know how an animal determines whether something is alive or dead, and what death means to them. Ants, for example, know something about death because a dead ant releases necromones—chemicals only present once decomposition sets in. When another ant smells necromones on a dead ant, it will carry away the body and dump it out of the nest. But you can trigger this same body-removal response (called necrophoresis) by spraying any living ant with necromones, then watch as other ants carry it kicking and screaming out of the nest. This suggests that ants do not have a particularly sophisticated knowledge of death, just a very limited way of recognizing it.

But other animals react to death in ways instantly recognizable to us. The carrying of dead infants is not limited to dolphins. It is also commonly observed in many primate species. Mothers will carry the body of their infant for days or even weeks at a time. This is often accompanied by behaviors that look, to a human, like grief: social withdrawal, mournful vocalizations, and a “failure to eat or sleep,” as Barbara King describes it.9 But grief, if that is indeed what we are witnessing, is not synonymous with an understanding of death.

Dr. Susana Monsó is a philosopher with the University of Veterinary Medicine Vienna whose research focus is the concept of death in animals. She argues that “grief does not necessarily signal a [concept of death]—what it signals is a strong emotional attachment to the dead individual.”10 This sets up a scenario where there are different levels of sophistication when it comes to an animal’s understanding of death. The most basic is called a minimal concept of death, a kind of death-knowledge that many—if not most—animals have. Monsó argues that for an animal to have a minimal concept of death, it need only be able to recognize two simple attributes: “1) non-functionality (death stops all bodily and mental functions), and 2) irreversibility (death is a permanent state).”11 An animal is not born knowing these things, but learns about death through exposure.

Monsó explained to me that “for an animal to develop a minimal concept of death, she must first have some expectations regarding how beings in her surroundings typically behave.” For example, soon after being born, a young dolphin would quickly learn how living things behave. She would expect other dolphins to move their flukes up and down to swim through the water, chase and eat fish, and make lots of whistling and clicking sounds. But when she first encounters a dead dolphin, she will notice that none of these things are occurring. And if she observes the dead dolphin long enough, she will learn that it’s a permanent state. Her mind will then be able to categorize the world into living and no-longer-living things. Monsó argues that a minimal concept of death is “relatively easy to acquire and fairly widespread in nature.” It does not require particularly complex cognition. Grief, then, can crop up as a rather straightforward emotional response to the permanent nonfunctionality of a social partner or family member.

It’s important to understand, however, that just because a dolphin can recognize death, it does not mean she understands her own mortality. Or that all living things must die. These are two additional levels of understanding that nonhuman animals appear to lack. According to Monsó, “a very sophisticated notion of personal mortality also incorporates the notions of inevitability, unpredictability, and causality. [Animals] might acquire, through an accumulation of experiences with death, a notion that they can die, but probably not that they will die. I think that such a notion is probably restricted to humans.”

There seems to be consensus among scientists and philosophers that there is a fundamental difference between what animals and humans understand about death, especially the awareness of mortality itself. “Among animals,” writes King in How Animals Grieve, “we alone fully anticipate the inevitability of death.” This is called mortality salience: the scientific term for the awareness that you—and everyone else—will one day die. I prefer the more poetic term death wisdom.

When my daughter was eight, we heard her crying in her room not long after we read her a bedtime story and said good night. She was sitting up in bed looking particularly miserable. She explained that she was thinking about death, and that one day she would close her eyes and never open them again. Never see, or think, or feel anything anymore. She was scared, but also described a kind of existential dread that was new to her. I suspect that it’s a feeling you, too, recognize: the crush of sadness that overwhelms the mind when contemplating the reality of one’s own death. It was not something that my daughter had ever spoken about—or experienced—before that moment. And it was heartbreaking to watch.

A question arises: What are the cognitive capacities that we possess—and that nonhuman animals do not—that give rise to our deep understanding of death?

Time and the curse of the awkward fraction

According to Susana Monsó, the minimal concept of death that animals possess “requires neither an explicit concept of time nor much by way of mental time travel or episodic foresight.” These are cognitive ingredients—possibly unique to the human mind—that are required for death wisdom. I will deal with each of these in turn so we can see just what it is that gives our species its deep understanding of death. Let’s start first with the explicit concept of time.

An explicit concept of time is the understanding that there will be a tomorrow, and a day after, and a day after that. This knowledge can extend to just a few hours into the future, but also days, years, or millennia. It’s explicit insofar as this knowledge is something that we can analyze with our conscious minds, and thus understand and think about conceptually. The main benefit of the explicit knowledge that time marches forward is that you can plan for the future.

By contrast, an animal doesn’t need any real understanding of what time or “the future” is to nonetheless eke out a perfectly respectable living. A house cat, for example, could simply eat when it is hungry and sleep when it is tired, without any interest in what tomorrow might bring. Nietzsche believed that this gave animals an edge over humans.

“[T]he animal lives unhistorically: for it is contained in the present, like a number without any awkward fraction left over.”12

Nietzsche’s point was that animals likely suffer less than humans because they are unburdened by knowledge of the past and wholly unaware of what their future holds. He believed that animals, like children, “play in blissful blindness between the hedges of the past and future.”

This idea—that animals live their lives stuck in the present—is widespread, and a subject of long-standing debate for scientists. Except for a handful of cases that we will learn about in this section, it doesn’t appear as if many species have an explicit concept of time by human standards. While animals don’t ruminate on the future, time is still meaningful to them. They might not have an explicit understanding of what time means conceptually, but almost all living things have an implicit concept of time baked into their DNA.

“The physiological, biochemical, and behavioral lives of all animals are organized around the twenty-four-hour day,” says Michael Cardinal-Aucoin, professor of biology at Lakehead University, and a specialist in circadian biology. “Their lives are timed; they anticipate regularly occurring, cyclical events.”

As mammals, we’re deeply affected by one cyclical event in particular: sunrise. As I am writing these words, the predicted length of this day is twenty-three hours, fifty-nine minutes, and 59.9988876 seconds. The moon drifts farther from and closer to the Earth throughout the course of a day. This means that the moon’s gravitational pull on Earth is not constant, which in turn means that the Earth’s rotational speed is always in flux. Because of this, an Earth day is rarely exactly twenty-four hours long. Over the long run, the tidal tug-of-war between the Earth and the moon pushes the moon about an inch and a half farther away each year, and the same interaction slowly brakes the Earth’s spin, which is why the length of an Earth day has been gradually growing over the millennia. Seventy million years ago, there were only twenty-three and a half hours in a day.13
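Just how gradual is this stretching? A quick back-of-envelope check, using only the figures quoted above (a 23.5-hour day seventy million years ago versus a roughly 24-hour day today), gives a sense of the scale. This is just arithmetic, sketched here in Python:

    # Back-of-envelope check of the day-lengthening claim, using the
    # chapter's own figures: 23.5-hour days seventy million years ago
    # versus ~24-hour days today.
    minutes_gained = (24.0 - 23.5) * 60          # 30 minutes gained in total
    centuries = 70_000_000 / 100                 # 700,000 centuries elapsed
    ms_per_century = minutes_gained * 60_000 / centuries
    print(f"a day lengthens by roughly {ms_per_century:.1f} ms per century")
    # -> roughly 2.6 ms per century: imperceptible to any animal,
    #    but enough to add up over geological time.

A change of a couple of milliseconds per century is, for any organism’s purposes, no change at all, which is exactly why the rising and setting of the sun is such a reliable clock to evolve around.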

These fluctuations and changes to the length of a day are—in the grand scheme of things—minimal, which has allowed many species to evolve behavioral patterns that are based on the reliability of the rising and setting of the sun. Humans, for example, use natural light to calibrate our internal clocks. Like many mammal species, we sleep when the sun goes down. As the light fades toward the end of the day, our pineal glands produce the hormone melatonin, which serves as a signal to our brain that it’s time to sleep.14 This coincides with a buildup of a chemical called adenosine, which accumulates slowly in our brains throughout the course of the day and reaches critical levels soon after the sun goes down, generating that feeling of sleepiness that ultimately obliges us to go to bed. Other species, like nocturnal bats, are active at night, and thus have opposite sleep generation systems: They get sleepy when the sun rises. In both cases, the sun serves as a reliable indicator of the passage of time.

There is a more ancient system for keeping track of time in the cells of all living creatures that doesn’t involve light. “There is a molecular mechanism in our cells that marks the passage of time,” Cardinal-Aucoin told me. This internal clock system is regulated by clock genes in our DNA. Once activated, these genes begin producing proteins—called PER proteins—that trickle into the cell during the night. Eventually, enough protein accumulates that a threshold is reached and the clock genes stop making it. The PER proteins then slowly break apart until their numbers are so reduced that the clock genes turn back on and start making proteins again. This process takes almost exactly twenty-four hours—one full rotation of the Earth. This mechanism, called the transcription-translation feedback loop (TTFL), is found in the cells of most living things, from plants to bacteria to humans. It helps explain why all living things on the Earth—including animals that live in dark caves or at the bottom of the ocean where light does not penetrate—are nonetheless sensitive to the twenty-four-hour sun cycle. Jeffrey C. Hall, Michael Rosbash, and Michael W. Young were awarded the Nobel Prize in Physiology or Medicine in 2017 for their discovery of clock genes in the 1980s. Before that discovery, scientists knew that humans (and other animals) had an internal clock that didn’t need the sun to calibrate itself; the TTFL explained how our cells pull this off.
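To get an intuitive feel for how a loop like this can keep time, here is a deliberately simplified sketch, written in Python. To be clear, this is not a model of real clock genes: the production rate, decay rate, and on/off thresholds are invented for illustration, and the genetic switch is reduced to a simple rule with hysteresis. What the sketch preserves is the core logic described above: PER accumulates while the genes are on, the genes shut off above a threshold, the protein decays away, and the genes switch back on, with the rates tuned so that one full cycle takes roughly twenty-four hours.

    # Toy caricature of the transcription-translation feedback loop (TTFL).
    # Illustrative only: rates and thresholds are invented, and the genetic
    # switch is reduced to a simple on/off rule with hysteresis.
    DT = 0.01         # simulation time step, in hours
    K_PROD = 16.08    # PER production rate while the clock genes are "on"
    K_DECAY = 0.134   # first-order PER degradation rate (per hour)
    HIGH = 100.0      # PER level at which the clock genes shut off
    LOW = 20.0        # PER level at which they switch back on

    def simulate(hours=72.0):
        per, producing = LOW, True
        off_times = []                 # moments when the genes shut off
        for step in range(int(hours / DT)):
            # PER accumulates while the genes are on, and always decays.
            per += ((K_PROD if producing else 0.0) - K_DECAY * per) * DT
            if producing and per >= HIGH:       # threshold reached: genes off
                producing = False
                off_times.append(step * DT)
            elif not producing and per <= LOW:  # PER cleared: genes back on
                producing = True
        return off_times

    offs = simulate()
    cycles = [round(b - a, 1) for a, b in zip(offs, offs[1:])]
    print("approximate cycle lengths (hours):", cycles)  # ~[24.0, 24.0]

The point is simply that a loop built from production, a threshold, and decay needs no input from the sun at all to tick along at a fixed period.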

The ancient, cellular response to the passage of time via the TTFL and the external cues from the sun that tell us where we are in the day/night cycle do not necessarily translate into an explicit awareness of time, however. It’s exceedingly unlikely that cats, for example, think about time in the way that humans do. My cat Oscar is, like all domestic cats, crepuscular: most active at dawn and dusk. Like other mammals, his cells use the TTFL to regulate his internal clock, and his brain uses the relative amount of sunlight to induce or suppress his morning/evening activity through the release of hormones. He is sensitive to the passage of time. But this doesn’t translate into Oscar knowing what abstract time concepts like “tomorrow” mean, let alone the concept of “next winter.” This kind of explicit knowledge requires those other cognitive skills that Susana Monsó mentioned when it comes to the human capacity for death wisdom: mental time travel and episodic foresight.

Picture yourself in a boat on a river

Cast your mind back to last night. Do you remember what you ate for supper? Do you remember if you enjoyed the meal? Do you remember where you were sitting while you ate? Chances are you can recall quite a lot. Maybe you have a strong visual memory of what you ate, as if it were a photograph imprinted in your mind. Or maybe the memory is encoded by language: the names of the dishes and ingredients and so forth. Maybe it’s recalled through a sensation, like pleasure or disgust.

Now imagine yourself eating supper tomorrow night. Imagine it’s a plate of spaghetti with Bolognese sauce, and you are seated on your best friend’s living room floor. You don’t have a fork or spoon, so you are eating the spaghetti with your hands. And your friend is singing “My Heart Will Go On,” the theme song to the 1997 film Titanic. It’s an odd scenario, totally unique, and I use it to illustrate just how special our powers of imagination can be. You can envision something that may never happen, but you can envision it all the same.

The ability to both recall the past and think about the future is called mental time travel. It is succinctly defined by the psychologists Thomas Suddendorf and Michael Corballis as “the faculty that allows humans to mentally project themselves backward in time to relive, or forward to prelive, events.”15 It’s intimately tied to another cognitive capacity called episodic foresight: “the ability to mentally project yourself into the future to simulate imagined events and potential outcomes.”16 We have access to an infinite array of imagined scenarios in which we can be placed at the center. You can ask yourself the question “What might happen if I ate spaghetti with my hands?” and imagine the many possible outcomes, including scary ones. For example, in one of those scenarios you might choke to death on undercooked spaghetti.

For an animal to have humanlike death wisdom, they, too, would require a capacity for episodic foresight. But for most species, there is little evidence that they do. Which seems, on the face of it, strange. How do animals plan for the future if they cannot imagine themselves in it?

To help figure this out, let’s consider the legendary future-planning skills of the Clark’s nutcracker. This little bird is a member of the corvid family (like its cousins the crow and the raven) and was named after William Clark (of Lewis and Clark fame) who discovered it during his infamous expedition over the Rocky Mountains back in the early 1800s. While Clark is given credit for the discovery, he, of course, wasn’t the first to see the bird. The Shoshoni, for example, had already been using the name tookottsi for the nutcracker for close to a thousand years before Clark arrived on the scene.17 As such, I’ll use the Shoshoni term over the more common “Clark’s nutcracker.”

The tookottsi’s main food source is the seeds of pine trees, which are plentiful during the fall season but scarce during winter. So, tookottsis have mastered the art of hoarding. In the fall, they will pick the seeds out of pine cones and hide (also known as “cache”) them all over their home territory—as far as twenty miles away—so they can access them throughout the winter months. They bury a dozen or so at a time just a few inches under the ground, making the caches difficult for squirrels or other birds to find. Tookottsis can hide close to 100,000 seeds in as many as ten thousand separate caches18,19 in a given season. And, rather astonishingly, they can remember the location of most of these caches for up to nine months.20

It certainly seems as if the tookottsi is future-planning, using episodic foresight to imagine itself in a wintry landscape where food is scarce, and where storing seeds is the best way to prevent starvation. But this is not the case. A tookottsi born in the spring will go through the seed-caching process even though it has never experienced a seed-scarce winter. It is planning for a future it could not possibly know about or imagine. The mechanism driving food-caching in the tookottsi’s mind is rooted in its evolutionary history, an instinct for caching that does not require the animal to imagine itself in future scenarios. Almost all examples of animals planning for the future—bees collecting nectar and making honey for winter, crows building a nest for their eggs—can be attributed to these instinctual drives and not mental time travel.

The German psychologist Doris Bischof-Köhler once famously proposed that only humans have the ability to mentally time travel in such a way that they can imagine and thus plan for a future motivational state that conflicts with a current motivational state.21 There are, however, a couple of animal species that seem capable of this, and are thus the best examples we have of the capacity for mental time travel in nonhuman animals. As is often the case, the best examples come from our closest relatives: chimpanzees. To properly digest this example, you need to know something important about their behavior. Are you familiar with that movie and television trope about chimpanzees throwing things when they are angry, including their poop? Well, it’s true. Here’s what the Jane Goodall Institute has to say about poop-flinging:

In their natural habitat, when chimpanzees become angry, they often stand up, wave their arms, and throw branches or rocks—anything nearby that they can get their hands on. Captive chimpanzees are deprived of the diverse objects they would find in nature, and the most readily available projectile is feces. Since they also tend to get a pretty strong reaction from people when they do throw it, their behaviour is reinforced and likely to be repeated, which explains the abundance of YouTube videos on this subject.22

Now, allow me to introduce you to Santino, whose object-throwing wrath is world famous. Born in 1978, Santino is a male chimpanzee living at the Furuvik zoo in Sweden. He has long had a reputation for throwing stones at the human tourists gathered in the designated viewing area near his enclosure. In 1997, zookeepers noticed that Santino seemed to be hurling an unusually large number of projectiles (mostly rocks, not feces) over the course of a couple of days. When they went into his enclosure to investigate, they found a stockpile of stones and other objects hidden under vegetation along the shores of the moat near the tourist viewing area. There were even pieces of concrete that he had lugged over from the far side of his enclosure. Researchers later discovered that Santino would spend hours before the zoo opened collecting and stashing his stones in preparation.23,24

Now, as we saw with the tookottsi, stashing things is not evidence of sophisticated future planning that necessarily involves episodic foresight. What makes Santino’s behavior special, however, is that he was preparing his stockpile long before he was overcome with fits of rock-throwing rage. By all accounts, he seemed calm while creating his stockpiles. This suggests that Santino was preparing for a future in which he knew he was going to feel angry (even though he was not feeling angry in that moment). Unlike the tookottsi, Santino seemed to be time traveling in his mind, drawing on past experience to imagine himself in future scenarios. Because Santino appears to have been imagining a future in which he would feel differently from the way he was feeling in the moment, he challenges the Bischof-Köhler hypothesis that this is a human-only trait. Mathias Osvath, the lead researcher studying Santino’s behavior, stated that “the accumulating weight of data throws grave doubt on the notion that the episodic cognitive system is exclusive to humans.”25

Another challenge to Bischof-Köhler’s hypothesis comes from western scrub jays. Scrub jays are corvids, like crows, ravens, and the tookottsi. Like other corvids, scrub jays cache food. In one famous experiment, jays were kept overnight in one of two compartments: one in which they received dog kibble for breakfast, and one in which they received peanuts for breakfast. They never knew which compartment they might end up in for the night, and thus couldn’t be sure what food they’d be having for breakfast. For the experiment, the jays were allowed to eat as much food as they wanted during the day (and were thus no longer hungry) and were then given access to peanuts and kibble that they could stash in either (or both) of the overnight compartments. The birds ended up caching most of the kibble in the compartment where peanuts were the usual breakfast food, and more of the peanuts in the compartment where kibble was the usual breakfast food. In other words, they were planning it so that no matter which compartment they were stuck in for the night, they could wake up to a breakfast consisting of both peanuts and kibble.

The key thing to remember is that the jays were not hungry while they were caching food. Instead, they were imagining a scenario where they would be. “The western scrub-jays demonstrate behaviour that shows they are concerned both about guarding against food shortages and maximising the variety of their diets,” explained Nicola Clayton, one of the authors of the study.26 “Jays can spontaneously plan for tomorrow without reference to their current motivational state, thereby challenging the idea that this is a uniquely human ability.”27

These are the best examples we have of animals possessing and acting on a capacity for episodic foresight. As impressive as they are, there are two important things to note here. First, if animals do have episodic foresight like humans, it doesn’t seem to be particularly widespread. Second, these species do not seem to use their mental time traveling abilities to the same extent as humans. They seem to mostly plan for the (near) future as it relates to the acquisition of food. I don’t mean to downplay these examples because they rather elegantly (in my opinion) demonstrate that episodic foresight does indeed exist in nonhuman minds. But they also demonstrate the limits of animals’ foresight abilities in that, for whatever reason, animals don’t seem able to use this skill for anything other than food acquisition (and assaulting zoo visitors).

So what does this tell us about animals’ capacity for death wisdom?

This is what we know: Many, if not most, animals have a minimal concept of death. They know that death means a previously living thing has entered a state of permanent nonfunctionality. We know that natural selection can give animals an ability to plan through instinctual behavior that does not rely on an explicit concept of time, nor any form of mental time travel or episodic foresight. We know that most animal species, like the tookottsi, can prepare for their future just fine without episodic foresight. And despite evidence for episodic foresight in some species (e.g., chimpanzees, western scrub jays), there is no scientific evidence that nonhuman animals can think about or plan for an unlimited number of future situations, including their own death. This is in stark contrast to humans. Death wisdom appears to be the domain of our species and likely our species alone. The question then becomes: Is this a good or a bad thing? In terms of natural selection (and our own sanity), is death wisdom a boon, or a curse?

The curse of Kassandra

The field of evolutionary thanatology was first introduced in 2018 as a new academic discipline focusing on how animals (including humans) evolved their understanding of—and behavioral responses to—death.28 Modern humans, as you well know, do not treat our dead in the same way as any other animal species. We have elaborate cultural rules and rituals. The ancient Egyptians of the Old Kingdom (2686 to 2125 BCE) famously mummified the bodies of the elite members of society, placed their organs (stomach, intestines, liver, and lungs) in canopic jars, and preserved the body in linen bandages. The heart was left untouched, and the brain was removed and discarded. In modern-day South Korea, some families have the ashes of their cremated loved ones compressed into shiny beads that can be worn as jewelry. At some funeral homes in North America, visitors are given a drive-in option, allowing the bereaved to stay in their cars as they roll by their loved one’s coffin.

Evolutionary thanatology is dedicated to understanding not just how these human funerary practices evolved culturally, but how our psychological understanding and responses to death evolved over time. Since it can be rather difficult to probe the psychology of species that have been dead for millions of years, a much easier place to start is to look at our closest living relative: chimpanzees. In a series of articles unveiling the field of evolutionary thanatology, the psychologist James Anderson considered what we know (and don’t know) about chimpanzees’ understanding of death, writing:

Whether chimpanzees understand that all creatures will die (universality) is less clear, but a reasonable suggestion is that they know that other creatures can die. This knowledge probably includes a notion of their own vulnerability, if not the inevitability of their own death.29

An understanding of the inevitability of one’s death is the key difference between human and animal psychology where death is concerned. Humans know our death is inevitable. Chimpanzees might understand this, but, based on the scientific evidence mentioned, probably don’t. This means that somewhere during the evolution of Homo sapiens from the common ancestor we shared with chimpanzees, we split with our closest ape relatives when it came to our capacity to envision our deaths. Something happened in our ancestors’ brains/minds that turned our minimal concept of death into full-blown death wisdom.

Imagine, then, the very moment that a genetic mutation cropped up in a hominid genome that led to a baby being born with, for the first time, the cognitive capacity to learn that its death is unavoidable. This is not just a hypothetical scenario, but a real event that occurred somewhere in the past seven million years. It’s unlikely that a single mutation would have caused a death wisdom gene to pop into existence out of nothing, of course. It would’ve been a natural selection process that transpired over the course of millennia, building upon a collection of evolving cognitive skills—like those needed for mental time travel and episodic foresight. But there was undeniably a moment in our species’ history when a hominid baby was born with a full capacity for mortality salience to parents who lacked this capacity to the same extent. A moment when death wisdom blossomed in the mind of a child for the first time in the history of life on this planet.

Imagine that poor child, growing up somewhere in Africa. Let’s call her Kassandra. During puberty, and after a lifetime of learning about death by witnessing family members and the animals around her die, Kassandra would feel the first pain of death wisdom grip her mind. Like it did for my daughter around the age of eight. If Kassandra were to try to explain the nature of her anxiety to her parents using whatever language capacity her species had at the time, her parents simply wouldn’t understand. She would be living in a private hell of existential angst with literally nobody on the planet who could understand what she was going through.

How did this newfound knowledge benefit that young girl? There’s every reason to believe that death wisdom bursting into a young mind like that would cause so much trauma that Kassandra would be unable to function normally. At the very least, it’s difficult to see how this knowledge would increase her fitness, evolutionarily speaking. Kassandra’s parents and siblings were surely already struggling to eke out a living, as was the norm for our prehistoric ancestors. They already lived in fear. What possible benefit could there be to knowing she would one day die? This young girl should, by all accounts, have suffered enough psychological trauma to end her genetic line then and there.

But that did not happen. Instead, Kassandra’s genetic line became the dominant one. Her success as an individual within her family and tribe led to the spread of death wisdom to the entire species. And from Kassandra’s genetic stock sprang Homo sapiens, not only the last hominid species standing, but the single most successful mammalian species ever to walk this planet.

How did Kassandra manage this? In the book Denial: Self-Deception, False Beliefs, and the Origins of the Human Mind, the physician Ajit Varki explains how a conversation with the late biologist Danny Brower led to a hypothesis of the origin of the human mind that deals specifically with Kassandra’s problem, writing:

Such an animal would already have built-in reflex mechanisms for fear responses to dangerous or life-threatening situations. But this unconscious fear would now become a conscious one, a constant terror of knowing one is going to die, and that it could happen anytime, anywhere. In this model, selection would only favor the individual who attains full ToM [Theory of Mind] at about the same time as also achieving the ability to deny his or her mortality. This combination would be a very rare event. It is even possible that this was the defining moment for the original speciation of behaviorally modern humans. This is the Rubicon that we humans seem to have crossed over.30

The argument offered in Denial is that if an animal like Kassandra were to be born with that combination of cognitive skills that leads to death wisdom (equivalent to what is referred to as “full ToM” in the quote above), then it would fail to survive because of the “extremely negative immediate consequences.”31 Essentially, it would lose its mind and be unable to produce any offspring at all (let alone survive its childhood). Only by evolving the ability to compartmentalize these thoughts of mortality (what Varki calls the capacity for denial) would an animal like Kassandra be able to remain sane enough to procreate.

What, then, are the evolutionary benefits of death wisdom? If it’s such a potential liability that we can only explain its existence through a capacity for denying it, why was it so darn helpful to Kassandra that she became the dominant genetic line? Here’s the answer: Death wisdom relies on cognitive skills that are hugely beneficial to the human ability to understand how the world works (e.g., mental time travel, episodic foresight, explicit knowledge of time). Our capacity to ask why things happen, and thus to make predictions and plans that can change the course of events, is part of the why specialist aptitude that we learned about in Chapter 1. Episodic foresight is clearly a cognitive capacity that is involved in this process. And since death wisdom is an unavoidable knock-on effect of episodic foresight, we simply cannot unlink death wisdom from our why specialist capacity. Natural selection appears to see the benefit in why specialism insofar as it has helped us proliferate. The same must then be true of episodic foresight and its companion, death wisdom. So a clear benefit of death wisdom is its involvement in—or perhaps emergence from—other cognitive capacities that have allowed our species to outcompete all other hominids and most other mammals for domination of this planet.

It’s also possible that death wisdom might have helped our species achieve success by bolstering our capacity for shared sociality. Far from being a bug in the system, or an unwanted knock-on effect, it might actually be a feature. The cultural anthropologist Ernest Becker won a Pulitzer Prize for his book The Denial of Death, wherein he explains that much of human behavior—and most of our culture—is generated in response to our knowledge of our own deaths, and our subsequent attempts to create something that is immortal, something that will live on after we have died, and thus has meaning and value.32 Humans create systems of belief, and laws, and science so that we can find for ourselves what Becker described as “a feeling of primary value, of cosmic specialness, of ultimate usefulness to creation, of unshakable meaning.” We build temples, skyscrapers, and multigenerational families in the hope that “the things that man creates in society are of lasting worth and meaning, that they outlive or outshine death and decay, that man and his products count.” Becker makes a good case that death wisdom inspires us to create a plethora of immortality projects, some of which might themselves be a boon to our evolutionary fitness as they get transmitted down to future generations via culture. Things like science itself, which is driven as much by individual scientists’ desire for renown as by the pure love of knowledge.

Becker is right. There is no denying that death wisdom generates beautiful things that add value (and meaning) to the human condition. But it is precisely our faith in the importance of our cultural immortality projects and their absolutely central role in our feelings of worth that brings out the worst in human behavior. Holy wars are fought between competing ideologies about the nature of the path to immortality. Genocide—like that masterminded by King Leopold II in Congo in concert with Christian missionaries—is committed in the name of our timeless gods (both theological and economic). Walk around any city on this planet and you will likely encounter statues of historical figures whose names and likenesses we still know precisely because they dedicated their lives to achieving notoriety for all the wrong reasons. You can still find statues honoring Joseph Stalin, Nathan Bedford Forrest, and Cecil Rhodes. Many of these statues celebrate the lives of individuals who achieved fame through war, murder, and the subjugation of their fellow humans. Death wisdom does give us the drive to seek out immortality by generating art and beauty, but also—perhaps ironically—death.

There are other negative consequences to death wisdom from an evolutionary perspective. Aside from the previously mentioned immortality projects that have clearly gone awry (e.g., genocide), there are the everyday negative consequences of death wisdom. Things like depression, anxiety, and suicide. Although mood disorders have complex origins that can involve a huge number of causes (e.g., seasonal affective disorder, which can be triggered by changes in hormone levels due to lack of exposure to sunlight, or postpartum depression, triggered by hormonal changes in a woman’s body after giving birth), there is no doubt that our ability to contemplate our deaths can negatively impact our mood. So much so that feelings of nihilism, hopelessness, and thoughts of death are wrapped up in the diagnosis of depression, and are, in and of themselves, potential contributors to suicide. There are currently 280 million people on this planet with depression. More than 700,000 people will die by suicide this year; it’s the fourth leading cause of death in fifteen- to twenty-nine-year-olds.33 While death wisdom on its own is surely not the reason for these depression and suicide numbers, there is no doubt it is involved. Nietzsche himself is perhaps the classic example, having lived with lifelong depression while simultaneously grappling with the philosophical problem of nihilism. These things are surely inextricably linked.

I know from my life that I do not spend very much time contemplating my death. Occasionally, like my daughter, I have moments late at night, while trying to fall asleep, when the reality of death creeps into my mind and dread takes hold. But these thoughts are fleeting, soon replaced by song lyrics or tomorrow’s to-do list. This is, I suspect, the reality for most humans. Just because we can contemplate our demise does not necessarily mean that we spend much time actually doing it. This, then, is how our capacity for death denial keeps us sane. It allows us to ignore these morbid, intrusive thoughts just long enough so that we can get our laundry done.

Arguably, on balance, the benefits of episodic foresight and why specialism outweigh the negative consequences of death wisdom. The simple fact that there are eight billion of us spread around this globe, each having contemplated our own death at some point, suggests that death wisdom is manageable. As far as evolution is concerned, death wisdom is not problematic enough to have affected our success as a species.

But the day-to-day consequences of death wisdom really do suck. I believe that animals have a better relationship to death than we do. As we have seen in this chapter, many animals do know they can die. They know what death is. They are not ignorant enough to, as Nietzsche put it, “play in blissful blindness between the hedges of the past and future.” But despite this knowledge, they do not suffer as much as we do, for the simple reason that they cannot imagine their deaths. A narwhal will never lament the specter of death like Nietzsche did. Had he been a narwhal, he would’ve been free from nihilistic dread. And had I been a narwhal, I would not have had to sit at my daughter’s bedside watching her eyes fill with tears as she thought about her inevitable death. I would trade any of my beloved immortality projects to wipe the death wisdom curse from my daughter’s mind.
