Chapter 1
Gradually, man has become a fantastic animal that has to fulfill one more condition of existence than any other animal: man has to believe, to know, from time to time, why he exists.
—Nietzsche1
It took Mike McCaskill twenty years to beat the stock market. But when he did, boy, did he succeed.
Mike started small, trading penny stocks as a hobby while working at his parents’ furniture store.2 When the store closed in 2007, he decided to turn to his hobby full-time. He sold his car for $10,000, and then stuck that cash into his trading account. Over the next two years, a volatile market and the subprime mortgage crisis saw the S&P 500 lose half its value, which only served to excite a day trader like Mike. He reveled in the chance to unravel the mystery of where the market was going. He predicted that stocks would spike not long after the election of President Obama, so he took the hundreds of thousands he had made in penny stocks and dumped it into the regular stock market.
But he was wrong.
As Obama was sworn in on January 20, 2009, Mike watched as the Dow Jones continued to plummet, eventually hitting its lowest point during the financial crisis on March 5 at 6,594.44 points. That was a 50 percent drop from the all-time high in October 2007 of 14,164.43, and was just 3 percent shy of the record-breaking crash that sparked the Great Depression in 1929. It looked bad for Mike. His trading account was completely wiped out.
But Mike regrouped, scraping together a few hundred dollars that he put back into his account, though this time he altered his strategy so that it would pay out if the market lost money. In other words, he would short stocks: a highly risky strategy where he’d borrow shares in a stock and then sell them with a promise to buy them back later and return them to the lender. If the stock price dropped, he could buy the shares back for less than he had sold them for and pocket the difference; but if the stock price went up, he’d be forced to buy them back at a loss—potentially a huge one. This is the trick that investors like Michael Burry and Mark Baum in The Big Short used to bet against the housing market in 2007. At the time, the housing market was considered one of the safest bets in American finance, so betting that it would lose value was both risky and seemingly foolish. Of course, as we now know, their prediction turned out to be right, and they made a killing. For Mike, however, his prediction turned out to be wrong. The $700 billion that the US government had pumped into the economy through the Troubled Asset Relief Program started to work. By early April, the market had rebounded. And Mike, who had bet on a market collapse, lost everything. Again.
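For readers who like to see the arithmetic, here is a minimal sketch of how a short position pays off. The share counts and prices are invented for illustration; they are not Mike’s actual trades.

```python
def short_position_pnl(shares_borrowed, sell_price, buyback_price):
    """Profit or loss on a short sale: sell borrowed shares now,
    buy them back later so they can be returned to the lender."""
    proceeds_from_sale = shares_borrowed * sell_price
    cost_to_buy_back = shares_borrowed * buyback_price
    return proceeds_from_sale - cost_to_buy_back

# If the price falls, the short seller pockets the difference...
print(short_position_pnl(100, sell_price=50, buyback_price=30))   # +2000
# ...but if the price rises, the loss has no natural ceiling.
print(short_position_pnl(100, sell_price=50, buyback_price=120))  # -7000
```

That asymmetry is the danger of shorting: a stock can only fall to zero, but there is no limit to how high it can climb before you are forced to buy it back.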
Frustrated, Mike quit trading full-time and spent the next ten years working at King Louie’s Sports Complex in Louisville, Kentucky, eventually becoming the director of volleyball and golf programming. He still dabbled in stocks, betting on long-shot stocks that could potentially make him a millionaire. That’s when he stumbled across GameStop.
It was the summer of 2020, and the company was struggling: a brick-and-mortar video-game seller trying to stay afloat in an increasingly digital retail environment. Hardly anyone goes to the mall anymore to sift through products at stores like GameStop. They just order straight from Amazon, or download games directly to their PlayStations. Michael Pachter, a video game, digital media, and electronics analyst with Wedbush Securities, described GameStop as a melting ice cube. “For sure it’s going to go away eventually,” he told Business Insider in January 2020, estimating that the company would be finished within a decade.3 Andrew Left, a high-profile investor with Citron Research who specialized in short-selling, pinpointed GameStop as “a failing mall-based retailer” that was “drowning,”4 which is why he and many other investors began shorting the stock in huge quantities. Like Mike in 2009 and the small group of people who bet against the housing market in 2007, these professionals decided to cash in on GameStop’s imminent collapse. On paper, at least, this seemed reasonable.
But Mike didn’t think GameStop was destined for bankruptcy. Quite the opposite. He was not only sure that GameStop was a viable company, but that all the short positions held by these hedge fund managers meant that its stock could go through the roof in what’s called a short squeeze. If the stock price started to go up, investors with short positions would be forced to buy back shares quickly to cut their losses. This mass buying would push the stock up even faster, creating a squeeze, and making anyone smart enough to have bought the stock when it was worth next to nothing a crap ton of money.
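The feedback loop at the heart of a squeeze can be sketched in a few lines of code. This is a deliberately crude toy model with invented numbers—not a reconstruction of what actually happened to GameStop—but it shows how forced buying snowballs.

```python
# Toy model of a short squeeze: a burst of outside buying lifts the price,
# every short seller whose pain threshold is crossed must buy shares back
# (cover), and that forced buying lifts the price further, triggering the
# next wave of covering. All numbers are invented for illustration.
price = 4.0
pain_thresholds = [5 + i for i in range(15)]  # one threshold per short seller
impact_per_cover = 1.5                        # price move per forced buyback

price += 2.0  # an initial burst of outside buying
wave = 0
while any(t <= price for t in pain_thresholds):
    wave += 1
    covering = [t for t in pain_thresholds if t <= price]
    pain_thresholds = [t for t in pain_thresholds if t > price]
    price += impact_per_cover * len(covering)
    print(f"Wave {wave}: {len(covering)} shorts forced to cover, price -> ${price:.2f}")
```

Each wave of covering raises the price enough to trigger the next, which is why a heavily shorted stock can rocket upward far faster than any news about the underlying company would justify.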
Mike’s gut told him that a short squeeze was on the horizon. He began buying stock options—contracts that gave him the right to buy the stock at a set price, and that would pay off only if the stock rose past that price before they expired. But the stock didn’t move much at first, his options expired worthless, and Mike continued to zero out his account repeatedly. Then, in late 2020, Mike hit it big on another stock pick—Bionano Genomics—giving him a fresh cash injection, which he dumped into GameStop. Soon after, in January 2021, the squeeze started. A series of improbable and confusing events led to a rapid rise in GameStop’s share price, driven in large part by the millions of people following Reddit’s wallstreetbets subreddit: They had identified the company as having an inordinate number of short positions, which sparked a coordinated effort to buy the stock in droves. As you can imagine, the move screwed over investors like Andrew Left who were, in the eyes of the redditors, cynically betting on the demise of a vulnerable company. It worked. GameStop rather famously increased in value by a ludicrous amount—going from around $4 a share when Mike started buying it to a high of $347.51 by January 27. Mike cashed out.
He made $25 million.
What are we to make of this? The lesson here is not that it takes serious smarts and years of experience studying the stock market to correctly predict why and when stock prices will rise and fall. There was no way that Mike could have known that the market vigilantes from wallstreetbets were either planning on or capable of creating such a historic and artificial short squeeze on GameStop. There was nothing about Mike’s gut that was magically more prescient. In fact, as we’ve seen, he was often more wrong than right when it came to betting on the stock market. With GameStop, he simply got lucky.
Consider a similar story that also involves luck, but with an unexpected protagonist. In 2012, the British Sunday newspaper The Observer held a contest between three teams: a group of schoolchildren, three professional investment managers, and a house cat named Orlando.5 Each team was given £5,000 (about US $7,000) to invest in stocks from the FTSE All-Share index and could switch up their stocks every three months. The team with the most money in their account after a year would win. Orlando “picked” his stocks by dropping a toy mouse onto a grid with numbers corresponding to stocks he could buy. After one year of investing, the kids had lost money, with £4,840 left in their account. The fund managers had £5,176. Orlando beat them all with £5,542.
Unlike the kids or the fund managers, the cat had no way of knowing what was happening. Although some animals can be taught to exchange tokens for rewards and thus attribute arbitrary value to otherwise valueless objects, the abstract concept of “money,” let alone “the stock market,” exists only in the heads of Homo sapiens. Orlando’s stock-picking technique was just the contest organizers’ clever way of generating random stock picks to prove a point. That point being that people investing in the stock market might as well be throwing darts at a board. When it comes to picking winning stocks, it’s all a big crapshoot.
With Orlando in mind, I was curious to know how Mike McCaskill would characterize his stock-picking prowess. So, in March 2021, I called him up to ask. I told him that I was writing a book about human and animal intelligence. I told him the story of Orlando vs. the fund managers and that it appears as if luck—not knowledge—seems to play a huge role when it comes to the stock market. To my astonishment, Mike McCaskill, who had spent twenty years studying the stock market and had just earned $25 million, said: “I agree. It’s a hundred percent all luck.”
Now, it’s true that Mike had researched GameStop and deduced that it was primed for a squeeze. But Andrew Left was equally convinced that a squeeze was impossible. Left was wrong. Back in 2020, Michael Pachter was sure GameStop would be gone by the end of the decade, although as of March 2021 he had changed his tune and was proclaiming that GameStop is “here to stay.”6 One of those predictions is obviously wrong. The wallstreetbets redditors were sure that GameStop was headed for a short squeeze, which was right. But they were also sure that the squeeze would continue past the $347.51 peak on January 27 and encouraged everyone to hold the stock. That was wrong. GameStop crashed back down to under $50 just a few days after Mike dumped his stock and became a millionaire. Mike got lucky there, too. He agreed with the redditors that the stock was going to keep climbing—maybe climb above $1,000 per share. But, on a whim, he decided that his $25 million profit was good enough and dumped his stock at exactly the right moment. Mike’s rags-to-riches story is built on a series of random and fortuitous events.
“Human nature likes order,” wrote the economist Burton Malkiel in his seminal book A Random Walk Down Wall Street. “People find it hard to accept the notion of randomness.” Malkiel popularized the idea that the movement of any individual stock in the market is essentially random—it’s impossible to know why a stock is doing what it’s doing. People who reliably make money from the market are those who own a diverse portfolio of different kinds of investments (e.g., stocks, bonds, annuities), which spreads out the risk and banks on the broader principle that the market, over the long haul, tends to increase in value. Picking individual stocks, or betting on certain trends, is much closer to gambling than science. Which is why we shouldn’t be too surprised that a cat is just as likely to make a killing on Wall Street as a day trader.
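Malkiel’s argument is easy to see in a quick simulation. The sketch below uses made-up parameters—a hundred imaginary stocks, each taking a random walk for a year—rather than real market data, but it illustrates the point: any single pick can land almost anywhere, while an equal-weighted portfolio of all of them sits near the average.

```python
import random

random.seed(1)

def simulate_year(num_stocks=100, trading_days=252, daily_drift=0.0003, daily_vol=0.02):
    """Return each imaginary stock's total return after a year of random daily moves."""
    totals = []
    for _ in range(num_stocks):
        price = 1.0
        for _ in range(trading_days):
            price *= 1 + random.gauss(daily_drift, daily_vol)
        totals.append(price - 1)
    return totals

stock_returns = simulate_year()
single_pick = random.choice(stock_returns)            # throwing one dart at the board
portfolio = sum(stock_returns) / len(stock_returns)   # owning a small slice of everything

print(f"One randomly picked stock: {single_pick:+.1%}")
print(f"Best stock: {max(stock_returns):+.1%}   Worst stock: {min(stock_returns):+.1%}")
print(f"Equal-weighted portfolio:  {portfolio:+.1%}")
```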
Mike McCaskill spent his career asking a simple question: Why do stock prices go up? This need to understand why is what differentiates Mike (and humans in general) from nonhuman animals. And it’s what makes Mike’s story so revealing. As soon as human children learn their first words, the whys start coming. My daughter once asked me: Why can’t the cat talk? A good question. And one I have dedicated my research career to answering. As we grow older, we never stop asking these types of questions. Why haven’t we found signs of alien life? Why do people commit murder? Why do we die? Humans are the why specialist species. It is one of a handful of cognitive traits that separates our thinking style from other animals.
And yet, this burning desire to understand cause and effect does not always give us a leg up. As Mike’s investment story reveals, asking “why” did not give him, the hedge fund managers, or anyone an edge when it came to stock price predictions. Without knowing why stocks move, Orlando the cat’s decision-making system produced similar results. And it’s not limited to stocks. The world is full of animals making effective, beneficial decisions all the time—and hardly any of it involves contemplating why the world is the way it is. Being human and a why specialist has obvious benefits, as we will see in this chapter. But if we look at decision-making across time and species, including our own, I propose we consider a provocative question: Does asking why give us a biological advantage? The answer might seem obvious (yes!), but I don’t think it is. In order to help answer this question, consider this: Even though our species can grasp cause and effect on a deep level, we barely used this ability for roughly the first 200,000 years we walked the Earth. That tells us something rather important, from an evolutionary perspective, about the value of why.
The origins of why
Let’s imagine that we’re in the basket of a hot-air balloon. We’re floating gently over the canopy of a lush green forest that coats a cluster of undulating hills overlooking Lake Baringo in western Kenya. Or at least what will one day be known as Kenya. This is a time-traveling hot-air balloon, and we’ve been transported back to the Middle Pleistocene (now officially renamed the Chibanian Age) exactly 240,000 years ago. It is dusk. The air is heavy and moist, signaling the start of the monsoon season. This area would have been much wetter during the Chibanian, making the area around Lake Baringo one of the lushest and most productive in the region. From our vantage point a few hundred meters above the basin, we can see movement on the ground all around us—two distinct animal groups making their way toward the tree line as the sun sets.
One of the groups is instantly recognizable: chimpanzees. A handful of females with their young in tow, and a group of larger males scouting ahead. With night approaching, they are likely looking to find some trees to build a nest and settle in for the night. The other group is even more familiar. It is a group of modern humans—Homo sapiens—similar in number to the chimpanzee group. In fact, similar in almost every regard. There are females with their young and a group of males scouting their way toward the forest where they will set up camp for the evening. Humans and chimpanzees both descended from the same ape ancestor that roamed Africa seven million years ago: Sahelanthropus tchadensis. To the untrained eye, this ancient ape from central Africa would’ve looked like a chimpanzee. Its descendants would branch off to eventually evolve into modern chimpanzees on one side, and our human relatives on the other, including Australopithecus and Homo erectus. You’ve probably seen this lot in a natural history museum or textbook: that famous lineup of the “origins of man” that has become the basis of countless parodies and memes. After seven million years in Africa, chimpanzees and humans still lived very similar lifestyles in nearly identical conditions to their ancient ape predecessors. We know from the fossil record that humans and chimpanzees lived side by side in this area of the East African Rift 240,000 years ago.7
I’ve guided our time-traveling balloon to this era in this particular location because it’s the first appearance of what scientists now consider to be modern humans.8 They are nearly identical to you or me in every conceivable way—physically and cognitively.9 And yet, nothing about their lifestyle resembles how we have come to live in the twenty-first century. Like their chimpanzee cousins asleep in the trees, these early humans roamed the shores of the lake, searching for berry patches and animal carcasses. They would likely have been naked, free from jewelry, clothing, or any of the artistic or symbolic adornments that we use today. However, their nakedness reveals some significant differences from chimpanzees: far fewer hair follicles and more exposed skin, designed for sweat to evaporate quickly and keep the body cool as they wandered under the blazing sun. Humans also have longer legs with relatively more muscle in their lower limbs, another adaptation to support our ambulatory (walking) lifestyle.10 And then of course there are the heads. The front half of the human and chimpanzee head—the face area—is similar enough, with the obvious exception being the chin. Humans have one, but chimpanzees do not. Strangely, no other hominid species throughout history evolved chins before Homo sapiens came along. Remarkably, scientists still don’t have a clear answer as to why we have chins.11 But it’s the back half of our heads that’s truly astonishing. Human heads are round, looking like an overfilled water balloon. That extra cranial space is stuffed with brain tissue—three times the size of a chimpanzee’s.
There are also some behavioral traits that distinguish the humans. They are holding rudimentary stone tools, which they’ve used to carve meat off a dead elephant. One of the older female humans is helping a child spin a wooden shaft into a notch in an old dry log in order to create a cooking fire, giving her instructions in the unmistakable cadence of human language.12 The chimpanzees, on the other hand, are mostly silent, with only nut-cracking stones (not sharp blades) in their possession, and certainly no chimpanzee-made fires. They simply don’t have the kind of minds that allow them to create these things. To this day, the ability to create both fire and stone blades remains outside of their cognitive capabilities.
Despite some clear differences in cognition that led to breakthroughs like fire and blades, early humans and chimpanzees remained relatively similar for most of the Chibanian. As the period drew to a close some 126,000 years ago, humans began their famous journey out of Africa, using those long muscular legs to carry themselves to Europe, where they would encounter Neanderthals and Denisovans—two hominid species that had evolved in Asia and Europe from a common ancestor that left Africa two million years earlier. Like humans, they had use of fire, spears, and stone tools, and may well have had language abilities to some extent. Humans both mated and competed with these other species until there was nothing left but traces of them in our DNA. Then, around 200,000 years after our initial hot-air balloon trip to Lake Baringo, the first evidence cropped up—in the form of cave paintings—that our human ancestors were asking some of the important why questions that would lead to our eventual domination of this planet.
Roughly 43,900 years ago, a group of humans living in Sulawesi, Indonesia, walked into a cave on the island’s southwest tip and began drawing. Using red pigment, they created a series of hunting scenes—humans chasing wild pigs with ropes and spears. But there was something odd about the humans depicted in the drawings: They had animal heads. These half-human, half-animal figures are called therianthropes (from the Greek thēr/θήρ, meaning beast, and anthrōpos/ἄνθρωπος, meaning man). A few thousand years later, a European ancestor carved the Löwenmensch figurine: a mammoth-ivory therianthrope statue depicting a human with a lion’s head, found in the Hohlenstein-Stadel cave in Baden-Württemberg, Germany.
There is really only one reason that, forty millennia ago, our human ancestors would spend time creating art in the form of therianthropes. It symbolized something. When we see therianthropes represented in art from the past few thousand years, it’s typically associated with religious symbolism: like Horus (the falcon-headed Egyptian god), Lucifer (often depicted as half-human, half-goat in Christian art), or Ganesh (the elephant-headed Hindu god). The Sulawesian therianthropes are “the world’s earliest known evidence for our ability to conceive of the existence of supernatural beings,” Dr. Adam Brumm told the New York Times after he and his research team discovered the Sulawesian therianthropes in 2017.13 What is a supernatural being? It is a creature that has abilities and knowledge beyond what humans have. Some experts suggest that these therianthropes might be spirit guides, creatures giving us aid, answers, or advice.14 This assumes, then, that our ancestors had been asking questions that required supernatural answers. And what could these questions possibly be other than those that underpin all religions: Why does the world exist? Why am I here? And why do I have to die? These ancient therianthropes are the best evidence we have of why specialist questions swimming about in our ancestors’ heads.
Soon after our ancestors created the first therianthropes, evidence of novel technologies began popping up in the archaeological record. Like hats. The first evidence of a human wearing a hat dates from 25,000 years ago in the form of the Venus von Willendorf statue, a limestone carving depicting a female figure wearing a beaded headdress. Although I am sure it’s just dumb luck in terms of which ancient artifacts we’ve dug up, I find it amusing that the evidence of humans conceiving of the supernatural predates evidence that we wore hats. It suggests that our ancestors were more concerned with the problem of why we die than with why our heads get wet when it rains.
After the appearance of therianthropes and hats, the human capacity for creating stuff based on our understanding of cause and effect really took off. There’s evidence from about 23,000 years ago that a small group of humans living in current-day Israel had figured out how to plant and harvest wild barley and oats in little farm plots.15 An understanding of what causes seeds to germinate, and of how they can be cared for over the course of a growing season, was a huge deal. We now had precise control over planning our meals. This was a direct result of applying our grasp of cause and effect to the behavior of plants. A practical grasp of things like gravity and water pressure allowed the ancient Romans to build massive aqueducts, transporting water over huge distances and even—using pressurized siphons—pushing it uphill across valleys. As we stared at a river, we wondered, rather remarkably, why the water was moving, and used the answer to that question to help us build our ancient cities.
These “why” questions underpin our greatest discoveries: Why is that star always in the same place each spring? Astronomy as a field was born. Why do I keep getting diarrhea when I drink milk? This was a question that probably kept Louis Pasteur up at night, leading to the discovery of pasteurization. Why does my hair stand on end when I shuffle barefoot across a rug? We now understand this as a result of a phenomenon known as “electricity.” Why are there so many different plant and animal species? Charles Darwin had a good answer to this one (evolution). Anything we’ve held up as an example of our intellectual exceptionalism—and that differentiates our behavior from that of other species—has the deepest of roots in this one skill. Of all the things that fall under the glittery umbrella of human intelligence, our understanding of cause and effect is the source from which everything else springs.
These are all remarkable feats, and indeed, once this “why specialization” began, our story becomes littered with grand achievements in the sciences, the arts, and everything in between. But then we must ask: Why did it take so long to begin? Why did we spend 200,000 years not doing this?
The answer is quite simple. Despite what our gut tells us, being a why specialist is just not that big of a deal. It may feel important, but that feeling is human bias at work. From the point of view of evolution, it’s simply not that special at all. Indeed, all animals, including our own species for most of its history, got by just fine without any need to ask “why.” It’s time to rethink its relative importance. While it has produced inarguable benefits—like pasteurized milk—it’s also the most likely cause of our impending extinction. But before we go down that dark path, let’s first get to grips with how being a why specialist differs from the way other animals think about the world.
The bear behind the bush
Last fall, I went walking in the woods under a canopy of yellowing maple leaves together with my friend Andrea and her dog Lucy. Suddenly, the silence of the forest was broken by a deep whomp that resonated in the ground beneath our feet. Up ahead on the path, the leaves of an alder bush were rustling. We froze in place, nervous that maybe a bear was lurking near us. I went over to investigate. Instead of a bear, I found a large branch of a long-dead tree, which must have rolled down the hill a few feet before coming to rest against the alder, generating that sound that had startled all three of us.
This scenario is something that animals have been dealing with for millions of years. Natural selection is built on countless iterations of animals hearing a sudden sound, determining what it signifies, and deciding how to react. For apex predators like the Komodo dragon (an enormous Indonesian lizard that has been known to eat people), a random noise in the bush might trigger curiosity as it could be something to eat. For prey species like squirrels, a sudden noise might be the opposite: a potential predator or threat that would send it fleeing in the opposite direction.
There are only two ways an animal can interpret the significance of a sudden noise. The first is to learn through association that a loud noise emanating from behind a bush often precedes the appearance of a living thing. The second is to infer that a noise is caused by a living thing. It sounds subtle, but this difference—between learned associations and causal inference—is where nonhuman animal thinking ends and being a why specialist begins.
Consider the burrowing bettong. This bizarre little marsupial from Western Australia looks like a miniature kangaroo with the face of a mouse, a thick rat tail, and the body of a pudgy squirrel. They were once one of the most populous mammals in Australia, but there are now just 19,000 left.16 The near extinction of the bettong was due to the introduction of non-native wildlife by European settlers, including the infamously murderous house cat and the red fox. Bettongs, you see, do not have much in the way of a natural fear of cats or foxes. Whereas most bite-sized marsupials would flee, bettongs just stand around nonchalantly. Unsurprisingly, this makes them easy prey. In a recent experiment, researchers compared the behavior of bettongs that had been exposed to catlike predators to those that were seeing a catlike predator for the first time.17 As you might expect, bettongs that had experiences with catlike predators fled, whereas bettongs that had never encountered a cat saw no reason to scurry. In other words, bettongs needed to learn that cats and foxes pose a threat. As a result, conservationists in the region have been actively teaching bettongs to fear cats and foxes so that they can be released into the wild again as a way to preserve their species from extinction. But it’s not easy. Without a natural instinctive fear, each bettong will need to experience the threat firsthand to develop the correct learned association. Self-preservation, in other words, must be taught through experience.
Humans, on the other hand, can bypass this process and learn without necessarily needing firsthand experience. Humans’ why specialist thinking offers us two cognitive skills that animals like bettongs lack: imagination and an understanding of causality. Humans are capable of cycling through what primate researchers Elisabetta Visalberghi and Michael Tomasello call an infinite “web of possibilities”18 in our mind’s eye in search of an explanation for what our senses are picking up. The comparative psychologist Thomas Suddendorf describes this imaginative skill as an “open-ended capacity to create nested mental scenarios” in his book The Gap: The Science of What Separates Us from Other Animals, which argues that this particular ability is a fundamental difference between the way humans and animals understand the world.19 In the example I shared earlier, I was capable of imagining any number of animals that I had previously seen while out walking in the woods, like porcupines or skunks, rooting around behind the alders making weird sounds before concluding that it must be a bear based on how loud the sound was. But I can also imagine things that I have never experienced but understand abstractly (for example, if I read about something in a sci-fi novel or fantasy series). In this regard, it can be anything, like the possibility that a meteorite had dropped from the sky and landed behind the bush. This fanciful knowledge is what the philosopher Ruth Garrett Millikan calls dead facts.20 These are facts about the world that an animal would not have any use for in its daily life. Nonhuman animals, according to Millikan, “generally have no interest in facts that don’t pertain directly to practical activity. They do not represent or remember dead facts.” Animals accumulate living facts relevant to their everyday lives: Bees remember the location of a good dandelion field, dogs remember the path through the woods that leads to their favorite pond, and crows remember which human fed them in a park. But humans accumulate a seemingly endless number of useless (i.e., dead) facts: the distance to the moon (384,400 km), the true identity of Luke Skywalker’s father (Darth Vader), or which Paula Abdul video starred Keanu Reeves (“Rush Rush”). Our heads are full of dead facts—both real and imagined. Most of them will never be of any use to us. But they are the lifeblood of our why specialist nature as they help us to imagine an infinite number of solutions to whatever problems we encounter—for good or ill.
The second component of being a why specialist is an understanding of causality. Causality is not just knowing that there is a correlation between two events (e.g., whenever my cat leaves the litter box there is a fresh poop left behind), but an understanding that one event is the reason for the other event (i.e., the cat is making the poop). It allows for a more complete understanding of how things in nature work.
There’s a long-standing debate as to whether any other animal is capable of this kind of causal reasoning. There is a famous experiment meant to ferret out the presence of causal inference called the string-pulling paradigm that has been given to more than 160 animal species.21 This is how the experiment works: A piece of food is suspended on a long string from a branch or platform. In order to bring the food close enough to eat, the animal must haul in the string. You or I would do this by grabbing the string with one hand, pulling it closer, and then grabbing the food when it’s within reach with our other hand. The principle being that you must secure the string first before reaching for the food. When Bernd Heinrich, the biologist most famous for his writing and work on birds, tried this experiment with ravens, they solved it rather quickly. They would pull a section of string toward them and then step on it with one of their feet before reaching down to grab more. They didn’t arrive at this solution through trial and error. They eyed the string thoughtfully for a few seconds and then moved in a deliberate fashion, pulling and stepping until the food arrived. This suggests that they understood the nature of the problem and the causal links involved (i.e., gravity pulls things down, stepping on the string holds it in place). Heinrich concluded that “seeing into the situation before executing the behavior appears to be the most-parsimonious explanation to account for the result.”22 In other words, the ravens first thought about the nature of the problem, and then cycled through a number of solutions in their mind’s eye, and then executed and achieved the goal. Does this prove that ravens are, to a lesser degree, why specialists like us? Many researchers believe so.
However, one research group performed a variation of the string-pulling experiment on New Caledonian crows (who are usually experts at this task) that challenged this conclusion. Researchers hung the string through a small hole in a board, which made it difficult for the crows to see what was happening as they pulled on the string. When the crows encountered this string problem for the first time, they, like Heinrich’s ravens, seemed to understand that they needed to pull on the string to get at the food. But after pulling once on the string and being unable to see the food move closer to them, they stopped pulling. Without the visual feedback of the food moving closer to them, they suddenly seemed unable to understand what was happening. The authors concluded that “our findings here raise the possibility that string pulling is based on operant conditioning mediated by a perceptual-motor feedback cycle rather than on ‘insight’ or causal knowledge of string ‘connectivity.’”23 In other words, the crows had no causal understanding of what was happening—it was all just learned associations (pull the string, the food gets closer), and without being able to see the food move, there was nothing for the crows to learn from. Scientists are still debating the results of these string-pulling experiments, with some sure that the animals understand causality, others sure they do not, and many convinced that these experiments aren’t designed well enough to give us any insight into the question of causal reasoning in animals in the first place.
Most of the time, it doesn’t matter if an animal understands causality; it can still make good (or poor) decisions regardless. If a dog like Lucy hears a sudden sound coming from behind a bush and has learned that random sounds in the woods are often correlated with the presence of predators like bears, she will rightly decide to approach with caution. If I, on the other hand, hear a sound and begin cycling through potential causes (e.g., meteorites, bears, a Komodo dragon that has escaped from the zoo), I will wind up making an identically effective decision (approach with caution). Both Lucy and I can make identical inferences (that is, draw a conclusion about how things are) through completely different cognitive paths: me, through causal inference, and Lucy via good old learned associations.
Here’s an experiment you can do on your own dog to show their capacity for inferential reasoning and how thoroughly useful it is for them without the need for causal understanding. Take a dog treat and stick it in your shoe. Now, shake the shoe for a few seconds before letting your dog stick their nose in and grab the treat. Now, without your dog watching, grab both shoes and stick a treat in just one of them. Have your dog watch as you shake them both, and then hold them each out for your dog to observe. In all likelihood, the dog will find the treat on their first try. Why? Because they heard one shoe make noise (the treat was tossing around inside) and the other didn’t. This is called diagnostic inference.24 It’s an advanced kind of learned association where the dog has figured out that a sound goes hand in hand with a treat. It’s important to understand, however, that the dog does not comprehend that the treat is the thing causing the sound. That’s causal inference. But the dog doesn’t need it. It still found the treat.
Diagnostic inference, as you can imagine, has its limitations. Here’s an example where our ability to make causal inferences outshines that of other animals. Imagine now that I am holding two shoes. One is filled with florps and the other with bloopers. I show you a picture of florps (candies that resemble mini marshmallows) and a picture of bloopers (small metal balls). Even though you’ve never seen florps or bloopers—aside from the photos, you know nothing else about them—the moment I shake the shoes you will know which has the bloopers: it’s the shoe making more noise. This is because you understand the causal properties of objects on a deep level. Soft objects make less noise than hard objects. Dogs would be incapable of this: They would need examples of the different sounds these objects make before they could generate a learned association.
Clearly, diagnostic inference and basic learned associations can only take an animal so far. Without an understanding of—or interest in—underlying causality, an animal will never ask the kind of why questions that led to the accomplishments that Homo sapiens have enjoyed: fire, agriculture, particle accelerators, and so forth. It seems obvious that humans have a serious advantage over other animals when it comes to both basic (e.g., what’s causing a sound) and complex (e.g., knowing that viruses cause disease) survival skills thanks to our minds. We are capable of cycling through an infinite web of possibilities and dead facts that help us in our quest for causal understanding. But this brings us back to the original conundrum: If causal understanding is such an obvious advantage over other ways of thinking, why did it take our species 200,000 years to start using this ability to spread modern civilization? The answer is that sometimes being a why specialist leads us toward conclusions so ludicrous and so bad for our species (evolutionarily speaking) that it makes you wonder if we’d actually be better off relying solely on learned associations.
Chicken butt solutions
Imagine for a moment that we are back in our time-traveling hot-air balloon, this time visiting Lake Baringo 100,000 years ago. We find our group in a slightly more permanent-looking camp on the shores of the lake. From our vantage point, we are witness to an unfortunate albeit common event. A young boy has recently been bitten on the calf by a puff adder, the deadliest snake in Africa. Without treatment there is a high probability that he will die. Luckily, an adult is rushing over with stalks from a large plant with wide palm leaves called ensete, or false banana. When she snaps the stalk in two, a sap emerges, which she quickly wipes onto the bite wound. Though nowhere near as effective as modern antivenom, this plant has analgesic and antiseptic properties (and is still used by locals in modern-day Kenya to treat snakebites).25 How did this prehistoric human know how to do that? Our ancient knowledge of plant medicine was based on a combination of learned associations and causal inference. There was probably a moment where an ancient Baringo relative cut their arm while out hunting in the bush, and randomly grabbed a few leaves from a false banana to stanch the bleeding. A few days later, they may have noticed that their cut healed faster than normal. They might have asked themselves: Why? This would’ve led to the conclusion that there was some property in the leaf that helped healing. This knowledge would’ve been passed on (through language and culture) for thousands of years, leading to a brilliant snakebite cure that saved the little boy’s life.
Clearly, causal inference is a powerful tool in our ancestors’ why specialist arsenal. But that’s not to say that it was wielded correctly all the time. Sometimes our need to look for causal connections creates more problems than it solves. It creates the illusion of causality where there is none.
To see what I mean, let’s take one more trip in the balloon. This time let’s go to medieval Wales, around the year 1000 CE. We’re floating over rolling green hills overlooking the Irish Sea where a group of humans are living in a small village. A century from now, a fortress will be built on this spot by an Anglo-Norman baron, setting off a chain of events that will ultimately lead to the founding of the lovely seaside town of Aberystwyth. But for now, it’s just a tiny village of Welsh-speaking locals who’ve encountered a similar problem to our prehistoric clan above. A young boy—the son of the village’s leader—was playing in the tall grass when he was bitten by a European adder. Although less deadly than a puff adder, the bite could still be fatal for a child, especially if left untreated. Luckily, there is a healer in the town.
The boy’s mother, who has brought him to the healer’s home, cradles his head as the venom causes the wound in his calf to swell. The healer hurries over to the boy, carrying a rooster that he has retrieved from his chicken coop. After plucking a few tail feathers to reveal its skin, he presses the now naked rooster bottom against the boy’s bite wound. After holding this position for more than an hour, he declares the boy healed. The boy is then carried back to his home where he dies a few hours later: The rooster had little effect, and the boy suffered cardiac arrest from the adder venom.
This treatment—the rubbing of a rooster’s butt against a snakebite wound—was one of the accepted medical solutions to treat snakebites throughout Europe at the time. A medical text from Wales written in the late fourteenth century provides clear guidelines: “For a snakebite, if it is a man [who has been bitten], take a live cockerel and put its bottom onto the bite and leave it there, and that is good. If it is a woman, take a live hen in the same way, and that will get rid of the poison.”26
The same medieval Welsh manuscript includes other medical remedies, such as a cure for deafness by shoving a mixture of ram urine, eel bile, and ash tree sap into your ear. To get rid of a cancerous tumor, boil up some wine with goat dung and barley flour and rub the mixture onto the tumor. Also, there is no need to worry about dying of a spider bite; spiders are only dangerous between September and February, and if you get bitten during that period, just crush up some dead flies and wipe the paste onto the bite and you’ll be fine. This might all sound ludicrous to modern readers, but occasionally—either through dumb luck or an application of causal inference that just happened to be correct—medieval medicine worked. Sometimes better than modern medicine. Scientists recently found a potential treatment for the antibiotic-resistant superbug MRSA in Bald’s Leechbook—a medical text from the ninth century—in the form of a salve made from onions, leeks, garlic, and cow bile.27
The history of medicine is causal inference in action: in any given time and place, the expert community has focused on why disease happens and how and why people die from wounds—searching not just for correlation but for causation. This led to the development of an elaborate theoretical paradigm—now in the dustbin of history—called humorism. If you’ve never heard of it, don’t worry. Hardly anyone alive today thinks about it, and with good reason.
Yet, humorism was the dominant medical paradigm in Europe for close to two thousand years. Western civilization is built on the back of this now defunct, discredited medical system. Any famous figure from Western history before the nineteenth century—Julius Caesar, Joan of Arc, Charlemagne, Eleanor of Aquitaine, Napoleon—would have known about and believed in humorism.
It first arose as a concept around 500 BCE in ancient Greece. The word humor is a translation of the Greek word χυμός, which literally means sap. It was the Greek physician Hippocrates (of Hippocratic oath fame) who is most associated with the popularization of the idea, which he described as follows:
“The Human body contains blood, phlegm, yellow bile and black bile. These are the things that make up its constitution and cause its pains and health. Health is primarily that state in which these constituent substances are in the correct proportion to each other, both in strength and quantity, and are well mixed. Pain occurs when one of the substances presents either a deficiency or an excess, or is separated in the body and not mixed with others.”28
The second- and early-third-century Greek physician Galen and the tenth- and eleventh-century Persian physician and polymath Avicenna are credited with expanding upon these ideas to create the form of humorism in vogue at the time we visited Wales in our time-traveling balloon. Imbalances in the humors explained how disease arose. The humors themselves—blood, phlegm, yellow bile, and black bile—were made up of four contraries: hot, cold, wet, and dry. Yellow bile was hot and dry, blood was hot and wet, phlegm was cold and wet, and black bile was cold and dry. These four contraries were responsible for forming everything in the universe, including the four elements: fire, water, air, and earth. Fire, for example, would be hot and dry, whereas water was cold and wet. Knowledge of these opposing forces could be used by a physician to cure any ailment. Someone with a fever would be too hot and too dry, throwing their humors out of whack (i.e., creating an abundance of yellow bile). Treating the fever thus involved exposing the patient to something cold and wet—like lettuce—to restore the balance of the humors.
The explanation for the rooster solution to the snakebite is rooted in humorism, although that Welsh manuscript doesn’t get into details. Nonetheless, the thinking was that applying a rooster’s behind to a snakebite wound would draw the venom out from the person and transfer it to the rooster. This, of course, would occur because of the magical combination of humor imbalance and the contraries.29
Humorism was a beautifully complex medical system built entirely on the back of causal inference. Practitioners were right about the fact that disease and injury involve changes to—and problems with—the many substances in our body that regulate our biology, including blood, bile, etc. They were just wrong about the mechanics of causality. Humorism was eventually replaced by modern medicine in the mid-nineteenth century. Modern medicine is born of the scientific method, which incorporates a technique that is fundamental to sniffing out the difference between correlation and causation: the clinical trial.30 With it, you can take a causal inference (like rooster butts rubbed on a wound causing venom to leave the body) and subject it to verification. You could, for example, give one hundred snakebite patients a rooster butt cure, one hundred patients a placebo (like rubbing the wound with garlic bread), and one hundred patients no treatment. If you look at the results and find that all three groups had the same cure rate, you now know that rooster butts (and garlic bread) don’t really cure snakebite wounds. Working back from there, you can test all the underlying assumptions of humorism until you eventually learn that the inferences concerning how the humors work had been wrong all along.
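Here is that logic in a sketch of code. The recovery rate is invented (most adder bites resolve on their own), and the point is simply that when a treatment does nothing, all three arms of the trial end up with roughly the same outcome.

```python
import random

random.seed(42)

TRUE_RECOVERY_RATE = 0.80   # assumed: most patients recover no matter what is done
GROUP_SIZE = 100

def run_group(n_patients, recovery_rate):
    """Count how many patients in one arm of the trial recover."""
    return sum(random.random() < recovery_rate for _ in range(n_patients))

results = {
    "rooster butt": run_group(GROUP_SIZE, TRUE_RECOVERY_RATE),
    "garlic bread placebo": run_group(GROUP_SIZE, TRUE_RECOVERY_RATE),
    "no treatment": run_group(GROUP_SIZE, TRUE_RECOVERY_RATE),
}

for treatment, cured in results.items():
    print(f"{treatment:>22}: {cured}/{GROUP_SIZE} recovered")
# Three near-identical recovery counts are the trial's way of saying the
# "treatment" adds nothing beyond what the body does on its own.
```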
Of course, the scientific method and clinical trials don’t always produce accurate results. For the longest time, the scientific method led us to believe that the cause of gastric ulcers was stress—until 1984 when Barry J. Marshall and J. Robin Warren showed that the bacterium Helicobacter pylori was the root cause. They figured this out after Marshall siphoned off some bacteria from the stomach of a patient with gastritis, added it to a cup of broth, and drank it. He developed gastritis three days later: evidence that the bacterium was the culprit. Unfortunately, it takes time for the scientific method to ferret out real phenomena, which allows our why specialist yearnings to churn out crappy, humorism-style answers in the meantime. And crappy answers to the bigger why questions are more than just an inconvenience; sometimes they are so bad, it makes you wonder if being a why specialist might be the ultimate downfall of our species.
Are why specialists special?
We’ve had our “why specialist” capabilities from the moment our species popped into existence on the shores of Lake Baringo, but throughout most of prehistory, they didn’t amount to much. Our population numbers were comparable to chimpanzees’ for a hundred thousand years. In terms of hominid evolution, it was not until quite recently (i.e., within the last 40,000 years) that technological advances like farming—a product of our understanding of why plants grow—allowed us to settle down and, generation after generation, grow our population to levels that put us on a path toward global domination. On the one hand, this proves that being a why specialist has helped our species proliferate to an absurd degree compared to our non–why specialist chimpanzee cousins.
But what does this mean in terms of answering the question of whether our human way of thinking—our intelligence built on a bedrock of why specialism—is in fact special, exceptional, or even good? The fact that chimpanzees and humans lived side by side along the shores of Lake Baringo on equal footing, with similar levels of success, for a hundred millennia suggests that being a why specialist was not an evolutionary triumph right out of the gate. In fact, from what we know about the success of nonhuman animal species, it’s clear that animals can make fantastically useful decisions without the need for asking why things happen—and sometimes causal understanding is inferior to less complicated ways of thinking about the world (like associative learning).
In the final pages of their wide-ranging review of causal inference in animals, cognitive ethologists Christian Schloegl and Julia Fischer concluded that “from an evolutionary perspective, it does not really matter whether the animal reasons, associates, or expresses innate behavior, as long as it gets the job done.”31 Amen. By all accounts, nonhuman animals are getting by just fine in this world without a deep understanding of causation.
For example, humans are by no means the only species that has figured out that plants can be used as medicine; other species have arrived at this same conclusion through associative learning. There is a plant in Africa called bitter leaf—Vernonia amygdalina—a member of the daisy family that is used by modern-day humans to relieve symptoms of malaria, as well as upset stomach and intestinal parasites. Chimpanzees have been observed collecting this same plant, stripping it of its leaves and outer bark, and chewing on the bitter pith. It’s not a plant they typically eat and probably tastes as gross to a chimpanzee as it does to a human. Scientists determined that chimpanzees only engage in this behavior when they have high levels of intestinal parasites; it does indeed appear to lower their parasite load after ingestion.32 They had learned to associate the eating of this plant with relief from intestinal cramps. Importantly, these chimpanzees likely do not care about why this works, only that it does. Using only learned association and not causal inference, chimpanzees—and many other species from birds that eat clay for an upset stomach to elephants that eat bark to induce labor—can figure out how to self-medicate.33
Here’s a question for you to illustrate the power of associative learning. If you suspected you had breast cancer, who would you rather have look at your mammogram? A radiologist with thirty years of experience diagnosing cancer or a pigeon? It might surprise you to hear that, if you value your life, my advice is to go with the pigeon. Their capacity for associative learning, coupled with their visual acuity, gives them an edge over radiologists when it comes to spotting cancer. There’s actually a study that tested this, and the results are fascinating.
Using a boring old form of associative learning called operant conditioning—rewarding correct choices with food—researchers trained pigeons to peck at pictures of cancerous breast tissue. After spending a few days learning how to visually differentiate between cancerous and noncancerous tissue, the pigeons were given a set of brand-new images of breast tissue to diagnose. They accurately identified cancerous tissue 85 percent of the time. Their accuracy jumped to 99 percent when the responses of all four birds were pooled. This syndicate of cancer-pecking pigeons was better than the human radiologists who were given the same task.34
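The jump from 85 to 99 percent is mostly the arithmetic of pooling. The study itself averaged the birds’ responses; the back-of-the-envelope calculation below uses a simpler majority vote and assumes each pigeon errs independently, so it doesn’t reproduce the exact figure, but it shows the same effect.

```python
from math import comb

def majority_vote_accuracy(n_birds, individual_accuracy):
    """Probability that more than half of n independent classifiers are correct
    (ties, possible when n is even, are broken by a coin flip)."""
    p = individual_accuracy
    total = 0.0
    for correct in range(n_birds + 1):
        prob = comb(n_birds, correct) * p**correct * (1 - p)**(n_birds - correct)
        if correct * 2 > n_birds:
            total += prob
        elif correct * 2 == n_birds:
            total += 0.5 * prob
    return total

for n in (1, 4, 9):
    print(f"{n} pigeon(s): {majority_vote_accuracy(n, 0.85):.1%} accurate")
```

Independent errors tend to cancel out, so a small flock of mediocre classifiers behaves like a single very good one—no causal understanding required.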
Like humans, pigeons have the visual acuity and perceptual machinery to be able to notice the difference in detail between cancerous vs. benign tissue, and the cognitive ability to place these two types of tissue into separate conceptual categories. In this kind of task, being a why specialist does not give humans an advantage. All you need is a keen visual system and basic associative learning for a pigeon to outclass a radiologist when it comes to spotting cancerous tissue.
But it’s the negative consequences of being a why specialist that really call into question its exceptionalism and general coolness. Consider the potential fallout from the way a human (as opposed to a chimpanzee) would approach the question of causality in the case of using bitter leaf to cure an upset stomach. It’s easy to imagine a scenario where the human capacity for asking why when it comes to the question of “Why do I feel better when I eat bitter leaf?” could lead us down a dark path. A human might conclude that the plant contained supernatural properties bestowed on it by a benevolent god. The plant might then occupy a sacred place in society and be used in rituals to extract its magical properties. Perhaps the plant would be used in a special ceremony, boiled down into a strong broth that was fed to newborn babies to give them supernatural resilience on their journey through life. As a result, a lot of babies would die from this ritual as the concentrated toxins in the plant killed them.
The history of our species is fraught with these kinds of terrible answers to why questions. The question of why humans from different parts of the world look different (e.g., lighter or darker skin, shorter or taller, different nose and eye shapes) was, according to the nineteenth-century American physician Samuel George Morton, due to polygenism. This is the idea that different populations of modern humans either evolved from separate lineages of early hominids or had been created separately by God. Either way, according to Morton, you could see the differences in these populations (which he lumped into five races) by looking at their skulls, with the skulls of white people being the largest and roundest, therefore containing the most brain material and, obviously, being the smartest. Did I mention that Morton was white? In his infamous book Crania Americana he describes the “Caucasian Race” as “distinguished for the facility with which it attains the highest intellectual endowments.”35 We now know that the basic premise of this argument is wrong. There is no relationship between skull size (and thus brain size) and intelligence. There are dozens of examples of people who have had half their brain removed, or hydrocephaly where fluid in their skulls reduces their brain size to a small percentage of a normal human brain, that lead completely normal lives, and even have completely normal IQs. For humans, brain size is completely uncoupled from cognitive capacity. As we’ll see in later chapters, there’s good reason to believe that brain size can’t tell us anything about intelligence in animals, either. It was this kind of racism—scientific racism—that fueled the justification for slavery in the United States and the centuries-long white supremacy that has created untold suffering for millions of people. All of it based on a horrendous (and completely wrong) answer to an otherwise innocent why question.
What’s worse, the very future of our species is threatened by unintentionally horrendous answers to why questions. The internal combustion engine is a marvelous piece of technology that allows us to create little explosions that can turn a shaft, which propels wheels or propellers or whatnot. It was drawn from an answer to the question of why heat and pressure cause objects to move. Unfortunately, the fuels we burn—whether gasoline in those engines or wood and coal elsewhere—release carbon dioxide that rises into the atmosphere, where it absorbs and radiates heat. Because we’ve been running millions of combustion engines for the past century, we’ve generated so much extra carbon dioxide in the atmosphere that the Earth is warming quite quickly, which, as climate scientists have been warning for quite some time, is bad. So bad that it is starting to tear at the very fabric of our societies, and, according to the Global Challenges Foundation, has contributed to the one in ten chance that our species will go extinct within a century.36 So yes, chimpanzees cannot make stone blades or combustion engines because they lack a capacity for asking why questions like humans, but they are also not shooting themselves in the foot, evolutionarily speaking.
Evolution is still deciding what to make of the human capacity for causal reasoning. It remains to be seen how the future of our species will be impacted by our why specialist nature. The solution to the existential threats that we’ve created for ourselves (like climate change) will be rooted in the very same causal inference cognitive system that created them in the first place. It is an open question whether a solution will arrive in time, or if our why specialist nature has doomed us all.
The bottom line is that you simply don’t need to have a why specialist’s understanding of causation to be a successful species (and, indeed, it might even make one less successful). Nor do you need an understanding of causation to become a millionaire day trader. Mike McCaskill spent two decades basing his stock-purchasing decisions on his carefully considered understanding of cause and effect within the stock market. But it was really nothing more than the kind of random gamble that Orlando the cat could make. “My father says that I am just gambling,” Mike told me. “Had I traded normal I would have been wealthy a long time ago.”
You can use your why specialist reasoning to choose the stocks and bonds in your portfolio if you’d like, or you can let your cat pick them for you. The illusion of intellectual superiority over your cat because of your why specialist aptitude is just that: an illusion.