4
“It was the first drug cure in all of psychiatric history.”
—NATHAN KLINE, DIRECTOR OF RESEARCH AT ROCKLAND STATE HOSPITAL IN NEW YORK (1974)1
The “magic bullet” model of medicine that had led to the discovery of the sulfa drugs and antibiotics was very simple in kind. First, identify the cause or nature of the disorder. Second, develop a treatment to counteract it. Antibiotics killed known bacterial invaders. Eli Lilly’s insulin therapy was a variation on the same theme. The company developed this treatment after researchers came to understand that diabetes was due to an insulin deficiency. In each instance, knowledge of the disease came first—that was the magic formula for progress. However, if we look at how the first generation of psychiatric drugs was discovered, and look too at how they came to be called antipsychotics, anti-anxiety agents, and antidepressants—words that indicate they were antidotes to specific disorders—we see a very different process at work. The psychopharmacology revolution was born from one part science and two parts wishful thinking.
Neuroleptics, Minor Tranquilizers, and Psychic Energizers
The story of the discovery of Thorazine, the drug that is remembered today as having kicked off the psychopharmacology “revolution,” begins in the 1940s, when researchers at Rhône-Poulenc, a French pharmaceutical company, tested a class of compounds known as phenothiazines for their magic-bullet properties. Phenothiazines had first been synthesized in 1883 for use as chemical dyes, and Rhône-Poulenc’s scientists were trying to synthesize phenothiazines that were toxic to the microbes that caused malaria, African sleeping sickness, and worm-borne illnesses. Although that research didn’t pan out, they did discover in 1946 that one of their phenothiazines, promethazine, had antihistaminic properties, which suggested it might have use in surgery. The body releases histamine in response to wounds, allergies, and a range of other conditions, and if this histaminic response is too strong, it can lead to a precipitous drop in blood pressure, which at the time occasionally proved fatal to surgical patients. In 1949, a thirty-five-year-old surgeon in the French Navy, Henri Laborit, gave promethazine to several of his patients at the Maritime Hospital at Bizerte in Tunisia, and he discovered that in addition to its antihistaminic properties, it induced a “euphoric quietude…. Patients are calm and somnolent, with a relaxed and detached expression.”2
Promethazine, it seemed, might have use as an anesthetic. At that time, barbiturates and morphine were regularly employed in medicine as general sedatives and painkillers, but those drugs suppressed overall brain function, which made them quite dangerous. But promethazine apparently acted only on select regions of the brain. The drug “made it possible to disconnect certain brain functions,” Laborit explained. “The surgical patient felt no pain, no anxiety, and often did not remember his operation.”3 If the drug was used as part of a surgical cocktail, Laborit reasoned, it would be possible to use much lower doses of the more dangerous anesthetic agents. A cocktail that included promethazine—or an even more potent derivative of it, if such a compound could be synthesized—would make surgery much safer.
Chemists at Rhône-Poulenc immediately went to work. To assess a compound, they would give it to caged rats that had learned, upon hearing the sound of a bell, to climb a rope to a resting platform in order to avoid being shocked (the floor of the cage was electrified). They knew they had found a successor to promethazine when they injected compound 4560 RP into the rats: Not only were the rats physically unable to climb the rope, they weren’t emotionally interested in doing so either. This new drug, chlorpromazine, apparently disconnected brain regions that controlled both motor movement and the mounting of emotional responses, and yet it did so without causing the rats to lose consciousness.
Laborit tested chlorpromazine as part of a drug cocktail in surgical patients in June of 1951. As expected, it put them into a “twilight state.” Other surgeons tested it as well, reporting that it served to “potentiate” the effects of the other anesthetic agents, the cocktail inducing an “artificial hibernation.” In December of that year, Laborit spoke of this new advance in surgery at an anesthesiology conference in Brussels, and there he made an observation that suggested chlorpromazine might also be of use in psychiatry. It “produced a veritable medicinal lobotomy,” he said.4
Although today we think of lobotomy as a mutilating surgery, at that time it was regarded as a useful operation. Only two years earlier, the Nobel Prize in Medicine had been awarded to the Portuguese neurologist Egas Moniz, who had invented it. The press, in its most breathless moments, had even touted lobotomy as an operation that plucked madness neatly from the mind. But what the surgery most reliably did, and this was well understood by those who performed the operation, was change people in a profound way. It made them lethargic, disinterested, and childlike. That was seen by the promoters of lobotomy as an improvement over what the patients had been before—anxious, agitated, and filled with psychotic thoughts—and now, if Laborit was to be believed, a pill had been discovered that could transform patients in a similar way.
In the spring of 1952, two prominent French psychiatrists, Jean Delay and Pierre Deniker, began administering chlorpromazine to psychotic patients at St. Anne’s Hospital in Paris, and soon use of the drug spread to asylums throughout Europe. Everywhere the reports were the same: Hospital wards were quieter, the patients easier to manage. Delay and Deniker, in a series of articles they published in 1952, described the “psychic syndrome” induced by chlorpromazine:
Seated or lying down, the patient is motionless on his bed, often pale and with lowered eyelids. He remains silent most of the time. If questioned, he responds after a delay, slowly, in an indifferent monotone, expressing himself with few words and quickly becoming mute. Without exception, the response is generally valid and pertinent, showing that the subject is capable of attention and of reflection. But he rarely takes the initiative of asking a question; he does not express his preoccupations, desires, or preference. He is usually conscious of the amelioration brought on by the treatment, but he does not express euphoria. The apparent indifference or the delay of the response to external stimuli, the emotional and affective neutrality, the decrease in both initiative and preoccupation without alteration in conscious awareness or in intellectual faculties constitute the psychic syndrome due to the treatment.5
U.S. psychiatrists dubbed chlorpromazine, which was marketed in the United States as Thorazine, a “major tranquilizer.” Back in France, Delay and Deniker coined a more precise scientific term: This new drug was a “neuroleptic,” meaning it took hold of the nervous system. Chlorpromazine, they concluded, induced deficits similar to those seen in patients ill with encephalitis lethargica. “In fact,” Deniker wrote, “it would be possible to cause true encephalitis epidemics with the new drugs. Symptoms progressed from reversible somnolence to all types of dyskinesia and hyperkinesia, and finally to parkinsonism.”6 Physicians in the United States similarly understood that this new drug was not fixing any known pathology. “We have to remember that we are not treating diseases with this drug,” said psychiatrist E. H. Parsons, at a 1955 meeting in Philadelphia on chlorpromazine. “We are using a neuropharmacologic agent to produce a specific effect.”7
At the same time that Rhône-Poulenc was testing phenothiazines for their possible magic-bullet properties against malaria, Frank Berger, a Czech-born chemist, was doing research of a somewhat similar kind in London, and his work led, in 1955, to the introduction of “minor tranquilizers” to the market.
During the war, Berger had been one of the scientists in Britain who had helped develop methods to produce medically useful quantities of penicillin. But penicillin was effective only against gram-positive bacteria (microbes that took up a stain developed by Danish scientist Hans Christian Gram), and after the war ended, Berger sought to find a magic bullet that could kill gram-negative microbes, the ones that caused a host of troubling respiratory, urinary, and gastrointestinal illnesses. At that time, there was a commercial disinfectant sold in Britain, called Phenoxetol, that was advertised as effective against gram-negative bacteria in the environment, and Berger, who worked for British Drug Houses, Ltd., tinkered with the active ingredient in that product, a phenylglycerol ether, in an effort to produce a product with superior antibacterial effects. When a compound called mephenesin proved promising, he gave it to mice to test its toxicity. “The compound, much to my surprise, produced reversible flaccid paralysis of the voluntary skeletal muscles unlike that I had ever seen before,” Berger wrote.8
Berger had stumbled on a potent muscle-relaxing agent. That was curious enough, but what was even more surprising was that the drug-paralyzed mice didn’t show any signs of being stressed by their new predicament. He would put the animals on their backs and they would be unable to right themselves, and yet their “heart beat was regular, and there were no signs suggesting an involvement of the autonomic nervous system.” The mice remained quiet and tranquil, and Berger found that even when he administered low doses of this amazing new compound—doses too small to cause muscle paralysis—the mice displayed the same odd tranquility.
Berger realized that a drug of this sort might have commercial possibilities as an agent that allayed anxiety in people. However, mephenesin was a very short-acting drug, providing only a few minutes of peace. In 1947, Berger moved to the United States and went to work for Wallace Laboratories in New Jersey, where he synthesized a compound, meprobamate, that lasted eight times as long in the body as mephenesin. When Berger gave it to animals, he discovered that it also had powerful “taming” effects. “Monkeys after being given meprobamate lost their viciousness and could be more easily handled,” he wrote.9
Wallace Laboratories brought meprobamate to market in 1955, selling it as Miltown. Other pharmaceutical companies scrambled to develop competitor drugs, and as they did so, they looked for compounds that would make animals less aggressive and numb to pain. At Hoffmann-La Roche, chemist Leo Sternbach identified chlordiazepoxide as having a “powerful and unique” tranquilizing effect after he gave it to mice that ordinarily could be prompted to fight by the application of electric shocks to their feet.10 Even with a low dose of the drug, the mice remained noncombative when shocked. This compound also proved to have potent taming effects in larger animals—it turned tigers and lions into pussycats. The final proof of chlordiazepoxide’s merits involved another electric-shock exam. Hungry rats were trained to press a lever for food, and then they were taught that if they did so while a light in the cage blinked on, they would be shocked. Although the rats quickly learned not to press the lever while the light was on, they nevertheless exhibited signs of extreme stress—defecating, etc.—whenever it lit up their cage. But if they were given a dose of chlordiazepoxide? The light would flash and they wouldn’t be the least bit bothered. Their “anxiety” had vanished, and they would even press the lever to get something to eat, unworried about the shock to come. Hoffmann-La Roche brought chlordiazepoxide to market in 1960, selling it as Librium.
For obvious reasons, the public heard little about the animal tests that had given rise to the minor tranquilizers. However, an article published in the Science News Letter was the exception to the rule, as its reporter put the animal experiments into a human frame of reference. If you took a minor tranquilizer, he explained, “this would mean that you might still feel scared when you see a car speeding toward you, but the fear would not make you run.”11
Psychiatry now had a new drug for quieting hospitalized patients and a second one for easing anxiety, the latter a drug that could be marketed to the general population, and by the spring of 1957, it gained a medicine for depressed patients, iproniazid, which was marketed as Marsilid. This drug, which was dubbed a “psychic energizer,” could trace its roots back to a poetically apt source: rocket fuel.
Toward the end of World War II, when Germany ran low on the liquid oxygen and ethanol it used to propel its V-2 rockets, its scientists developed a novel compound, hydrazine, to serve as a substitute fuel. After the war ended, chemical companies from the Allied countries swooped in to grab samples of it, their pharmaceutical divisions eager to see if its toxic properties could be harnessed for magic-bullet purposes. In 1951, chemists at Hoffmann-La Roche created two hydrazine compounds, isoniazid and iproniazid, that proved effective against the bacillus that caused tuberculosis. The novel medicines were rushed into use in several TB hospitals, and soon there were reports that the drugs seemed to “energize” patients. At Staten Island’s Sea View Hospital, Time magazine reported, “patients who had taken the drugs danced in the wards, to the delight of news photographers.”12
The sight of TB patients doing a jig suggested that these drugs might have a use in psychiatry as a treatment for depression. For various reasons, iproniazid was seen as having the greater potential, but initial tests did not find it to be particularly effective in lifting spirits, and there were reports that it could provoke mania. Tuberculosis patients treated with iproniazid were also developing so many nasty side effects—dizziness, constipation, difficulty urinating, neuritis, perverse skin sensations, confusion, and psychosis—that its use had to be curtailed in sanitariums. However, in the spring of 1957, Nathan Kline, a psychiatrist at Rockland State Hospital in Orangeburg, New York, rescued iproniazid with a report that if depressed patients were kept on the drug long enough, for at least five weeks, it worked. Fourteen of the sixteen patients he’d treated with iproniazid had improved, and some had a “complete remission of all symptoms.”13
On April 7, 1957, the New York Times summed up iproniazid’s strange journey: “A side effect of an anti-tuberculosis drug may have led the way to chemical therapy for the unreachable, severely depressed mental patient. Its developers call it an energizer as opposed to a tranquilizer.”14
Such were the drugs that launched the psychopharmacology revolution. In the short span of three years (1954–1957), psychiatry gained new medicines for quieting agitated and manic patients in asylums, for anxiety, and for depression. But none of these drugs had been developed after scientists had identified a disease process or brain abnormality that might be causing the symptoms. They arrived out of the post–World War II search for magic bullets against infectious diseases, during which researchers stumbled on compounds that affected the central nervous system in novel ways. The animal tests of chlorpromazine, meprobamate, and chlordiazepoxide revealed that these agents sharply curbed normal physical and emotional responses, but did so without causing a loss of consciousness. That was what was so novel about the major and minor tranquilizers: They curbed brain function in a selective manner. It was unclear how iproniazid worked—it seemed to rev up the brain in some way—but, as the New York Times had noted, its mood-lifting properties were properly seen as a “side effect” of an anti-tuberculosis agent.
The drugs were best described as “tonics.” But in the media, a story of a much different sort was being told.
An Unholy Alliance
The storytelling forces in American medicine underwent a profound shift in the 1950s, and to see how that is so, we need to briefly recount the history of the American Medical Association prior to that time. At the turn of the century, the AMA set itself up as the organization that would help the American public distinguish good medicines from bad. At that time, there were fifty thousand or so medicinal products sold in the United States, and they were of two basic types. Thousands of small companies sold syrups, elixirs, and herbal remedies directly to the public (or as packaged goods in stores), with these “patent” medicines typically made from “secret” ingredients. Meanwhile, Merck and other “drug houses” sold their chemical preparations, which were known as “ethical” drugs, to pharmacists, who then acted as the retail vendors of these products. Neither group needed to prove to a government regulatory agency that its products were safe or effective, and the AMA, eager to establish a place for doctors in this freewheeling marketplace, took on the job of making that assessment. It established a “propaganda department” to investigate the patent medicines and thus protect Americans from “quackery,” and it established a Council on Pharmacy and Chemistry to conduct chemical tests of the ethical drugs. The AMA published the results of these tests in its journals and provided the best ethical drugs with its “seal of acceptance.” The AMA also published each year a “useful drugs” book, and its medical journals would not allow advertisements for any drug that had not passed its vetting process.
With this work, the AMA turned itself into a watchdog of the pharmaceutical industry and its products. By doing so, the organization was both providing a valuable service to the public and furthering its members’ financial interests, for its drug evaluations provided patients with a good reason to visit a doctor. A physician, armed with his book of useful drugs, could prescribe an appropriate one. And it was this knowledge, as opposed to any government-authorized prescribing power, that provided physicians with their value in the marketplace (in terms of providing access to medicines).
The selling of drugs in the United States began to change with the passage of the 1938 Food, Drug, and Cosmetic Act. The law required drug firms to prove to the Food and Drug Administration that their products were safe (they still did not have to prove that their drugs were helpful), and in its wake, the FDA began decreeing that certain medicines could be purchased only with a doctor’s prescription.* In 1951, Congress passed the Durham-Humphrey Amendment to the act, which stipulated that most new drugs would be available by prescription only, and that prescriptions would be needed for refills, too.
Physicians now enjoyed a very privileged place in American society. They controlled the public’s access to antibiotics and other new medicines. In essence, they had become the retail vendors of these products, with pharmacists simply fulfilling their orders, and as vendors, they now had financial reason to tout the wonders of their products. The better the new drugs were perceived to be, the more inclined the public would be to come to their offices to obtain a prescription. “It would appear that a physician’s own market position is strongly influenced by his reputation for using the latest drug,” explained Fortune magazine.15
The financial interests of the drug industry and physicians were lined up in a way they never had been before, and the AMA quickly adapted to this new reality. In 1952, it stopped publishing its yearly book on “useful drugs.” Next, it began allowing advertisements in its journals for drugs that had not been approved by its Council on Pharmacy and Chemistry. In 1955, the AMA abandoned its famed “seal of acceptance” program. By 1957, it had cut the budget for its Council on Drugs to a paltry $75,000, which was understandable, given that the AMA was no longer in the business of assessing the merits of these products. Three years later, the AMA even lobbied against a proposal by Tennessee senator Estes Kefauver that drug companies prove to the FDA that their new drugs were effective. The AMA, in its relationship to the pharmaceutical industry, had “become what I would call sissy,” confessed Harvard Medical School professor Maxwell Finland, in testimony to Congress.16
But it wasn’t just that the AMA had given up its watchdog role. The AMA and physicians were also now working with the pharmaceutical industry to promote new drugs. In 1951, the year that the Durham-Humphrey Amendment was passed, Smith Kline and French and the American Medical Association began jointly producing a television program called The March of Medicine, which, among other things, helped introduce Americans to the “wonder” drugs that were coming to market. Newspaper and magazine articles about new medications inevitably included testimonials from doctors touting their benefits, and as Pfizer physician Haskell Weinstein later confessed to a congressional committee, “much of what appears [in the popular press] has in essence been placed by the public relations staffs of the pharmaceutical firms.”17 In 1952, an industry trade publication, FDC Reports, noted that the pharmaceutical industry was enjoying a “sensationally favorable press,” and a few years later, it commented on why this was so. “Virtually all important drugs,” it wrote, receive “lavish praise by the medical profession on introduction.”18
This new marketplace for drugs proved profitable for all involved. Drug industry revenues topped $1 billion in 1957, the pharmaceutical companies enjoying earnings that made them “the darlings of Wall Street,” one writer observed.19 Now that physicians controlled access to antibiotics and all other prescription drugs, their incomes began to climb rapidly, doubling from 1950 to 1970 (after adjusting for inflation). The AMA’s revenues from drug advertisements in its journals rose from $2.5 million in 1950 to $10 million in 1960, and not surprisingly, these advertisements painted a rosy picture. A 1959 review of drugs in six major medical journals found that 89 percent of the ads provided no information about the drugs’ side effects.20
Such was the environment in the 1950s when the first psychiatric drugs were brought to market. The public was eager to hear of wonder drugs, and this was just the story that the pharmaceutical industry and the nation’s physicians were eager to tell.
Miracle Pills
Smith Kline and French, which obtained a license from Rhône-Poulenc to sell chlorpromazine in the United States, secured FDA approval for Thorazine on March 26, 1954. A few days later, the company used its March of Medicine show to launch the product. Although Smith Kline and French had spent only $350,000 developing Thorazine, having administered it to fewer than 150 psychiatric patients prior to submitting its application to the FDA, the company’s president, Francis Boyer, told viewers that this was a product that had gone through the most rigorous testing imaginable. “It was administered to well over five thousand animals and proved active and safe for human administration,” he said. “We then placed the compound in the hands of physicians in our great American medical centers to explore its clinical value and possible limitations. In all, over two thousand doctors in this country and Canada have used it…. The development of a new medicine is difficult and costly, but it is a job our industry is privileged to perform.”21
Boyer’s was a story of rigorous science at work, and less than three months later, Time, in an article titled “Wonder Drug of 1954?,” pronounced Thorazine a “star performer.” After a dose of Thorazine, the magazine explained, patients “sit up and talk sense with [the doctor], perhaps for the first time in months.”22 In a follow-up article, Time reported that patients “willingly took [the] pills” and that once they did, they “fed themselves, ate heartily and slept well.” Thorazine, the magazine concluded, was as important “as the germ-killing sulfas discovered in the 1930s.”23
This was a magic-bullet reference that was impossible to miss, and other newspapers and magazines echoed that theme. Thanks to chlorpromazine, U.S. News and World Report explained, “patients who were formerly untreatable within a matter of weeks or months become sane, rational human beings.”24 The New York Times, in a series of articles in 1954 and 1955, called Thorazine a “miracle” pill that brought psychiatric patients “peace of mind” and “freedom from confusion.” Thorazine, newspapers and magazines agreed, had ushered in a “new era of psychiatry.”25
With such stories being told about Thorazine, it was little wonder that the public went gaga when Miltown, in the spring of 1955, was introduced into the market. This drug, Time reported, was for “walk-in neurotics rather than locked-in psychotics,” and according to what psychiatrists were telling newspaper and magazine reporters, it had amazing properties.26 Anxiety and worries fled so quickly, Changing Times explained, that it could be considered a “happy pill.” Reader’s Digest likened it to a “Turkish bath in a tablet.” The drug, explained Consumer Reports, “does not deaden or dull the senses, and it is not habit forming. It relaxes the muscles, calms the mind, and gives people a renewed ability to enjoy life.”27
The public rush to obtain this new drug was such that Wallace Laboratories and Carter Products, which were jointly selling meprobamate, struggled to keep up with the demand. Drugstores lucky enough to have a supply put out signs that screamed: YES, WE HAVE MILTOWN! The comedian Milton Berle said that he liked the drug so much that he might change his first name to Miltown. Wallace Laboratories hired Salvador Dalí to help stoke Miltown fever, paying the great artist $35,000 to create an exhibit at an AMA convention that was meant to capture the magic of this new drug. Attendees walked into a darkened claustrophobic tunnel that represented the interior of a caterpillar—this was what it was like to be anxious—and then, as they emerged back into the light, they came upon a golden “butterfly of tranquility,” this metamorphosis due to meprobamate. “To Nirvana with Miltown” is how Time described Dalí’s exhibit.28
There was one slightly hesitant note that appeared in newspaper and magazine articles during the introduction of Thorazine and Miltown. In the 1950s, many of the psychiatrists at top American medical schools were Freudians, who believed that mental disorders were caused by psychological conflicts, and their influence led Smith Kline and French, in its initial promotion of Thorazine, to caution reporters that “there is no thought that chlorpromazine is a cure for mental illness, but it can have great value if it relaxes patients and makes them accessible to treatment.”29 Both Thorazine and Miltown, explained the New York Times, should be considered as “adjuncts to psychotherapy, not the cure.”30 Thorazine was called a “major tranquilizer” and Miltown a “minor tranquilizer,” and when Hoffmann-La Roche brought iproniazid to market, it was described as a “psychic energizer.” These drugs, although they may have been remarkable in kind, were not antibiotics for the mind. As Life magazine noted in a 1956 article titled “The Search Has Only Started,” psychiatry was still in the early stages of its revolution, for the “bacteria” of mental disorders had yet to be discovered.31
Yet, in very short order, even this note of caution went by the wayside. In 1957, the New York Times reported that researchers now believed that iproniazid might be a “potent regulator of unbalanced cerebral metabolism.”32 This suggested that the drug, which had been developed to fight tuberculosis, might be fixing something that was wrong in the brains of depressed patients. A second drug for depressed patients, imipramine, arrived on the market during this time, and in 1959 the New York Times called them “antidepressants” for the first time. Both appeared to “reverse psychic states,” the paper said.33 These drugs were gaining a new status, and finally psychiatrist Harold Himwich, in a 1958 article in Science, explained that they “may be compared with the advent of insulin, which counteracts symptoms of diabetes.”34 The antidepressants were fixing something wrong in the brain, and when Hoffmann-La Roche brought Librium to market in 1960, it picked up on this curative message. Its new drug was not just another tranquilizer, but rather “the successor to this entire group…. Librium is the biggest step yet toward ‘pure’ anxiety relief as distinct from central sedation or hypnotic action.”35 Merck did the same, marketing its drug Suavitil as “a mood normalizer…. Suavitil offers a new and specific type of neurochemical treatment for the patient who is disabled by anxiety, tension, depression, or obsessive-compulsive manifestations.”36
The final step in this image makeover of the psychiatric drugs came in 1963. The NIMH had conducted a six-week trial of Thorazine and other neuroleptics, and after these drugs were shown to be more effective than a placebo in knocking down psychotic symptoms, the researchers concluded that the drugs should be regarded “as antischizophrenic in the broad sense. In fact, it is questionable whether the term ‘tranquilizer’ should be retained.”37
With this pronouncement by the NIMH, the transformation of the psychiatric drugs was basically complete. In the beginning, Thorazine and other neuroleptics had been viewed as agents that made patients quieter and emotionally indifferent. Now they were “antipsychotic” medications. Muscle relaxants that had been developed for use in psychiatry because of their “taming” properties were now “mood normalizers.” The psychic energizers were “antidepressants.” All of these drugs were apparently antidotes to specific disorders, and in that sense, they deserved to be compared to antibiotics. They were disease-fighting agents, rather than mere tonics. All that was missing from this story of magic-bullet medicine was an understanding of the biology of mental disorders, but with the drugs reconceived in this way, once researchers came to understand how the drugs affected the brain, they developed two hypotheses that, at least in theory, filled in this gap.
Chemicals in the Brain
At the start of the 1950s, there was an ongoing debate among neurologists about how signals crossed the tiny synapses that separated neurons in the brain. The prevailing view was that the signaling was electrical in kind, but others argued for chemical transmission, a debate that historian Elliot Valenstein, in his book Blaming the Brain, characterized as the “war between the sparks and the soups.” However, by the mid-1950s, researchers had isolated a number of possible chemical messengers in the brains of rats and other mammals, including acetylcholine, serotonin, norepinephrine, and dopamine, and soon the “soup” model had prevailed.
With that understanding in place, an investigator at the NIMH, Bernard Brodie, planted the intellectual seed that grew into the theory that depression was due to a chemical imbalance in the brain. In 1955, in experiments with rabbits, Brodie reported that reserpine, an herbal drug used in India to quiet psychotic patients, lowered brain levels of serotonin. It also made the animals “lethargic” and “apathetic.” Arvid Carlsson, a Swedish pharmacologist who had worked for a time in Brodie’s lab, soon reported that reserpine also reduced brain levels of norepinephrine and dopamine (which jointly are known as catecholamines). Thus, a drug that depleted serotonin, norepinephrine, and dopamine in the brain seemed to make animals “depressed.” However, investigators discovered that if animals were pretreated with iproniazid or imipramine before they were given reserpine, they didn’t become lethargic and apathetic. The two “antidepressants,” in one manner or another, apparently blocked reserpine’s usual depletion of serotonin and the catecholamines.38
During the 1960s, scientists at the NIMH and elsewhere figured out how iproniazid and imipramine worked. The transmission of signals from the “presynaptic” neuron to the “postsynaptic” neuron needs to be lightning fast and sharp, and in order for the signal to be terminated, the chemical messenger must be removed from the synapse. This is done in one of two ways. Either the chemical is metabolized by an enzyme and shuttled off as waste, or else it flows back into the presynaptic neuron. Researchers discovered that iproniazid thwarts the first process. It blocks an enzyme, known as monoamine oxidase, that metabolizes norepinephrine and serotonin. As a result, the two chemical messengers remain in the synapse longer than normal. Imipramine inhibits the second process. It blocks the “reuptake” of norepinephrine and serotonin by the presynaptic neuron, and thus, once again, the two chemicals remain in the synapse longer than normal. Both drugs produce a similar end result, although they do so by different means.
In 1965, the NIMH’s Joseph Schildkraut, in a paper published in the Archives of General Psychiatry, reviewed this body of research and set forth a chemical imbalance theory of affective disorders:
Those drugs [like reserpine] which cause depletion and inactivation of norepinephrine centrally produce sedation or depression, while drugs which increase or potentiate norepinephrine are associated with behavioral stimulation or excitement and generally exert an antidepressant effect in man. From these findings a number of investigators have formulated a hypothesis about the pathophysiology of the affective disorders. This hypothesis, which has been designated the “catecholamine hypothesis of affective disorders,” proposes that some, if not all depressions are associated with an absolute or relative deficiency of catecholamines, particularly norepinephrine.39
Although this hypothesis had its obvious limitations—it was, Schildkraut said, “at best a reductionistic oversimplification of a very complex biological state”—the first pillar in the construction of the doctrine known today as “biological psychiatry” had been erected. Two years later, researchers erected the second pillar: the dopamine hypothesis of schizophrenia.
Evidence for this theory arose from investigations into Parkinson’s disease. In the late 1950s, Sweden’s Arvid Carlsson and others suggested that Parkinson’s might be due to a deficiency in dopamine. To test this possibility, Viennese neuropharmacologist Oleh Hornykiewicz applied iodine to the brain of a man who’d died from the illness, as this chemical turns dopamine pink. The basal ganglia, an area of the brain that controls motor movements, was known to be rich in dopaminergic neurons, and yet in the basal ganglia of the Parkinson’s patient, there was “hardly a tinge of pink discoloration,” Hornykiewicz reported.40
Psychiatric researchers immediately understood the possible relevance of this to schizophrenia. Thorazine and other neuroleptics regularly induced Parkinsonian symptoms—the same tremors, tics, and slowed gait. And if Parkinson’s resulted from the death of dopaminergic neurons in the basal ganglia, then it stood to reason that antipsychotic drugs, in some manner or another, thwarted dopamine transmission in the brain. The death of dopaminergic neurons and the blocking of dopamine transmission would both produce a dopamine malfunction in the basal ganglia. Carlsson soon reported that Thorazine and the other drugs for schizophrenia did just that.
This was a finding, however, that told of drugs that “disconnected” certain brain regions. They weren’t normalizing brain function; they were creating a profound pathology. However, at this same time, researchers reported that amphetamines—drugs known to trigger hallucinations and paranoid delusions—elevated dopamine activity in the brain. Thus, it appeared that psychosis might be caused by too much dopamine activity, which the neuroleptics then curbed (and thus brought back into balance). If so, the drugs could be said to be antipsychotic in kind, and in 1967, Dutch scientist Jacques Van Rossum explicitly set forth the dopamine hypothesis of schizophrenia. “When the hypothesis of dopamine blockade by neuroleptic agents can be further substantiated, it may have far-going consequences for the pathophysiology of schizophrenia. Overstimulation of dopamine receptors could then be part of the aetiology” of the disease.41
Expectations Fulfilled
The revolution in mental health care that Congress had hoped for when it created the NIMH twenty years earlier was now—or so it seemed—complete. Psychiatric drugs had been developed that were antidotes to biological disorders, and researchers believed that the drugs worked by countering chemical imbalances in the brain. The horrible mental hospitals that had so shamed the nation at the end of World War II could now be shuttered, as schizophrenics—thanks to the new drugs—could be treated in the community. Those suffering from a milder disorder, like depression or anxiety, simply needed to reach into their medicine cabinets for relief. In 1967, one in three American adults filled a prescription for a “psychoactive” medication, with total sales of such drugs reaching $692 million.42
This was a narrative of a scientific triumph, and in the late 1960s and early 1970s, the men who had been the pioneers in this new field of “psychopharmacology” looked back with pride at their handiwork. “It was a revolution and not just a transition period,” said Frank Ayd Jr., editor of the International Drug Therapy Newsletter. “There was an actual revolution in the history of psychiatry and one of the most important and dramatic epics in the history of medicine itself.”43 Roland Kuhn, who had “discovered” imipramine, reasoned that the development of antidepressants could properly be seen as “an achievement of the progressively developing human intellect.”44 Anti-anxiety medicines, said Frank Berger, the creator of Miltown, were “adding to happiness, human achievement, and the dignity of man.”45 Such were the sentiments of those who had led this revolution, and finally, at a 1970 symposium on biological psychiatry in Baltimore, Nathan Kline summed up what most of those in attendance understood to be true: They all had earned a place in the pantheon of great medical men.
“Medicine and science will be just that much different because we have lived,” Kline told his colleagues. “Treatment and understanding of [mental] illness will forever be altered … and in our own way we will persist for all time in that small contribution we have made toward the Human Venture.”46
A Scientific Revolution … or a Societal Delusion?
Today, by retracing the discovery of the first generation of psychiatric drugs and following their transformation into magic bullets, we can see that by 1970 two possible histories were unfolding. One possibility is that psychiatry, in a remarkably fortuitous turn of events, had stumbled on several types of drugs that, although they produced abnormal behaviors in animals, nevertheless fixed various abnormalities in the brain chemistry of those who were mentally ill. If so, then a true revolution was indeed under way, and we can expect that when we review the long-term outcomes produced by these drugs, we will find that they help people get well and stay well. The other possibility is that psychiatry, eager to have its own magic pills and eager to take its place in mainstream medicine, turned the drugs into something they were not. These first-generation drugs were simply agents that perturbed normal brain function in some way, which is what the animal research had shown, and if that is so, then it stands to reason that the long-term outcomes produced by the drugs might be problematic in kind.
Two possible histories were under way, and in the 1970s and 1980s, researchers investigated the critical question: Do people diagnosed with depression and schizophrenia suffer from a chemical imbalance that is then corrected by the medication? Were the new drugs truly antidotes to something chemically amiss in the brain?
* In 1914, the Harrison Narcotics Act required a doctor’s prescription for opiates and cocaine. The 1938 Food, Drug, and Cosmetic Act extended that prescription-only requirement to a larger number of drugs.