Chapter 10

The Science of Crime

WHEN MOST PEOPLE THINK OF EVIDENCE, they think of crime and crime detection. Assisted by several generations of television and more than several generations of crime novels, most people believe that evidence is what detectives look for when a crime has been committed, and what prosecutors present to the jury in order to ensure that the perpetrators get their just deserts.

This understanding of evidence is not wrong. Although the premise of this book is that evidence is much more than just the traces left at a crime scene, the kind of evidence that is featured in countless mystery novels and on television shows such as CSI and Law and Order nonetheless represents an important part of the world of evidence, a part that commonly goes by the name “forensics.” Fingerprint evidence has long been the best-known form of forensic evidence, but there are also voiceprints, shoeprints, tire marks, bite marks, tool marks, ballistic examinations, handwriting analysis, hair and fiber matching, and many more, including less well-known ones such as the analysis of blood spatter patterns. And now there is DNA.

Most of these forensic techniques have long enjoyed largely favorable portrayals on television and in popular fiction. In recent years, however, there has been heightened scrutiny of most of the forms of forensic evidence that have for generations been accepted with little or no controversy, not only on television but also in real trials in real courtrooms. In 1993, in Daubert v. Merrell Dow Pharmaceuticals, the US Supreme Court insisted that scientific and expert evidence be of a type that has been established as reliable.1 And among the consequences of this increased attention to scientific reliability has been a spate of exonerations of individuals whose criminal convictions were shown to have been the result of shoddy forensic techniques.2 Indeed, even fingerprint evidence, long considered the gold standard of forensic techniques, has been called into question.3 Perhaps most importantly and most influentially, an extensive 2009 report by the National Research Council challenged almost all of the methods long used by forensics experts in and out of law enforcement, claiming not only that the reliability of those methods was open to question, but also that the methods used to determine that reliability had traditionally failed even the most basic requirements of experimental validity.4

Modern critics of long-accepted forensic techniques have identified multiple problems. One is the tendency of forensic experts, when testifying in court, to exaggerate the conclusiveness of their results, frequently with the use—or overuse—of words like “match” and “certain,”5 and sometimes even with the testimonial assertion that the probability of error was “zero.”6 Such conclusions and characterizations are problematic, in part, because they can misleadingly suggest that it is inconceivable that anyone other than the defendant on trial could be the source of the forensically incriminating evidence. If a ballistics examiner, for example, locates with a microscope a similarity (or multiple similarities) between the marks—the striations, or the “lands” and “grooves”—on a bullet and the imperfections in the interior of a gun barrel that can create such marks (and similarly with the marks on a shell casing and the imperfections on a firing pin), a trained and experienced examiner can estimate with high probability that this bullet was fired from this gun.7 But there is a difference between a high probability and a certainty, and courts and lawyers have been concerned that describing this high probability as “certain” or as a “match” will mislead jurors into thinking that the imperfections on the interior of a barrel are necessarily unique, which is not true, and that the comparison between these imperfections and the marks on the bullet is a straightforward matter involving little or no judgment on the part of the examiner, which is also not true. For these and other reasons, for example, Judge Jed Rakoff of the US District Court for the Southern District of New York limited the expert ballistics witness for the prosecution in a criminal case to saying in his testimony only that some bullet was “more likely than not” to have come from a particular gun, a probabilistic assessment that certainly did not overstate the evidentiary value of the evidence and the expert’s conclusion, even though it may well have understated it.8

Related to the problem of exaggerated reliability are questions about the validity of the testing procedures that lead to conclusions about the reliability of some forensic technique. Traditionally, forensic scientists have argued for the reliability of their methods by reference to the high percentage of convictions that have resulted from their examinations and subsequent testimony. But this approach to validation suffers from multiple and obvious flaws. One is that the testimony may have influenced the jury’s decision to convict, thus eliminating the untainted “ground truth” that is necessary for rigorous validation. If we are interested in the extent to which ballistics examination, for example, can identify a gun possessed by the defendant as the murder weapon, then we need to know whether the examination actually identified that gun as the source of the bullet in the victim’s body. But if we know only that this gun produced this bullet because the jury said so, and if the jury’s saying so was at least in part a product of the examiner’s saying so, then the existence of the jury verdict as evidence of the method’s reliability is worthless. Moreover, juries are not required to explain the grounds for their verdict, and it is impermissible for a judge to require a jury to subdivide its verdict into particular answers to particular questions.9 Consequently, the fact of a conviction is consistent with the jury having disbelieved or otherwise disregarded the ballistics examiner but having nevertheless convicted the defendant because of other evidence. Indeed, much the same can apply to a prosecutor’s initial decision to prosecute. If a prosecutor went forward with the prosecution only if all of the evidence was extremely strong, including the non-forensic evidence, then the fact of prosecution tells us almost nothing about whether the forensic evidence itself was a causal factor in the decision to prosecute.

Although such flaws have plagued almost all of the forms of forensic identification evidence, recent developments have forced some of the various forensic subcommunities to conduct the kind of legitimate testing that would begin to resemble genuinely scientific method and thus provide sound, even if not conclusive, evidence of the reliability of their methods. Were we actually to know—by test firing, for example—which of a large sample of bullets comes from a particular gun and which does not, and if we were then to give the ballistics examiner a sample of bullets and a sample of guns and ask that examiner which bullets came from which guns, we would then be able to determine the error rates: the extent to which an examiner identified a bullet as coming from a gun when in fact it did not (a false positive), and the extent to which an examiner identified a bullet as not coming from a particular gun when in actuality it did (a false negative). Recently at least some of the forensic subcommunities have done all of this in response to the 2009 National Research Council report.10 And when this is done, we then do have genuine scientific support for the value of the method as evidence. In forensics, as elsewhere, the question of the worth of some fact or test or method as evidence is itself something that can be established—or not—by evidence.
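
To make the structure of such a validation study concrete, here is a minimal sketch in Python, using entirely hypothetical data; the trial outcomes and counts are invented for illustration, not drawn from any actual study.

```python
# A minimal sketch of the validation logic described above, with
# entirely hypothetical data. Ground truth is known by test firing;
# the examiner's calls are then scored against it.

# Each trial is (ground_truth, examiner_call);
# True means "this bullet came from this gun."
trials = [
    (True, True), (True, True), (True, False),    # one missed identification
    (False, False), (False, True), (False, False) # one false identification
]

false_positives = sum(1 for truth, call in trials if call and not truth)
false_negatives = sum(1 for truth, call in trials if truth and not call)
ground_truth_negatives = sum(1 for truth, _ in trials if not truth)
ground_truth_positives = sum(1 for truth, _ in trials if truth)

# The two error rates the text describes
print(f"False-positive rate: {false_positives / ground_truth_negatives:.0%}")
print(f"False-negative rate: {false_negatives / ground_truth_positives:.0%}")
```

A real study would of course involve many examiners and many more trials; the point is only that independently known ground truth is what makes the two error rates computable at all.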

I have used ballistics identification as an example, but the same issues persist for almost all forensic identification techniques. Was the hair found at the scene of the crime the hair of the defendant? Did the cotton fiber in the getaway car come from the coat of the person accused of the bank robbery? Was an empty gas can in the vicinity of the fire evidence that the fire was intentionally caused by a human being—arson? Were the defendant’s fingerprints on the gas can evidence that the defendant was the arsonist? Is the similarity between the handwriting on a ransom note and the defendant’s handwriting on a birthday card to his mother evidence that the defendant is the kidnapper? If the marks on the skull of a victim who was beaten to death are similar to the marks on a pipe wrench owned by the defendant, is this evidence against the defendant, and, if so, of what strength? And so on. In all of these examples there are, of course, innocent explanations for the evidence, but the existence of such innocent explanations goes to the weight of the evidence and not to its admissibility.11 As long as this kind of evidence makes it more likely than it would have been without the evidence that the defendant was guilty of the crime charged, the existence of innocent explanations does not preclude treating the identifying forensic evidence as having evidentiary value.

Handwriting examination provides another good example of the use, misuse, and validation of forensic evidence. But in looking at handwriting identification evidence, we need to make it clear initially that we are not talking about the purported ability of some purported experts to assess personality traits allegedly revealed by the subject’s handwriting. There are people, billing themselves as experts, who claim, often profitably, to be able to determine from someone’s handwriting whether that person is shy or extroverted, cautious or daring, empathetic or selfish, frugal or generous, and much more. Those claims of expertise—graphology—might possibly have slightly more validity than phrenology, but not much more, and probably have none at all, as multiple serious studies of graphology have established.12

Putting the pseudoscience of graphology aside, therefore, what is now before us is the handwriting equivalent of ballistics identification—the identification of a person by use of handwriting comparisons. Just as a bullet can show, even if not conclusively, which gun most likely produced it, handwriting similarly can point, again not conclusively, to the identity of the writer.13 And if the guilty person wrote a ransom note or used a forged signature to cash a check made out to someone else, for example, handwriting identification claims to be able to determine whether the defendant was actually the person who wrote or signed the incriminating documents.

Here again the issues parallel those raised by ballistics identification. Professional handwriting identification experts have long touted the accuracy of their methods, but the scarcity of rigorous tests of those methods led many commentators to express a degree of skepticism even greater than that expressed about ballistics.14 After the Supreme Court’s 1993 Daubert decision, and especially after the 2009 National Research Council report, however, the community of professional handwriting examiners, their organizations, and those who would use their testimony added considerable rigor both to the methods and to the testing of those methods. The reliability of handwriting identification still lags considerably behind that of ballistics identification, though, and far behind that of fingerprint identification.15 Yet handwriting identification by professionals well exceeds nonprofessional identification, and most recent studies put the error rate for professional handwriting identification examination at somewhere between 15 and 20 percent, meaning that the accuracy of an identification of a piece of unverified handwriting as coming (or not coming) from a particular individual whose handwriting is known is somewhere in the vicinity of 80 to 85 percent.16 This would hardly be enough, by itself, to prove a defendant guilty beyond a reasonable doubt, but it is plainly enough to satisfy the minimal standards for its admissibility as evidence. Between the minimal standards for the initial admissibility of any evidence and the maximal standards of proof beyond a reasonable doubt, however, we have, after Daubert, the requirement to satisfy the standards for expert or scientific evidence, standards that are higher than for normal admissibility but not as high as the standards necessary for conviction on that evidence alone.

Once again, and as with ballistics, the courts have been divided on whether expert testimony about similarity in handwriting is sufficient to satisfy the heightened standard for expert evidence.17 And we can speculate that this standard is heightened even further, albeit without acknowledgment, when the question is whether the evidence can be used by the prosecution in a criminal case. A useful example of the difference between what is necessary to satisfy the heightened, and perhaps further heightened, standard for expert scientific evidence and what is necessary for that evidence to be relevant in the first place comes from the 1995 federal court decision in United States v. Starzecpyzel.18 The defendants were accused of stealing more than a hundred works of art from the obviously wealthy aunt of one of the defendants, then forging the aunt’s signature on documents consigning the works to Christie’s and Sotheby’s for sale at auction, and thereafter directing the auction houses to remit the proceeds to Swiss bank accounts controlled by the defendants. Needless to say, this scheme constituted multiple crimes under federal law. In order to prove the scheme at trial, and thus to prove that the signatures on the consigning documents were forged, the prosecution sought to use several forensic document examiners to testify that they had examined other documents signed by the aunt as well as the consigning documents and had come to the conclusion that the signatures on the latter were forgeries. When the admissibility of this expert testimony was challenged, Judge Lawrence McKenna of the US District Court for the Southern District of New York concluded that the experts’ methods not only were not science but could even be characterized as “junk science.” The experts and their approach, he said, possessed neither a rigorously delineated methodology nor very much in the way of testing-based assurances of the accuracy of whatever methodology they used. Yet having come to that conclusion, and thus to the conclusion that the ability to identify a signature as a forgery was not scientific, Judge McKenna nevertheless proceeded to allow the document examiners to testify. There are many types of evidence that are not science, and whose validity cannot be established by scientific methods, he acknowledged, but that might still have admissible probative value, not as science but as legitimate and helpful practical testimony by an expert in some practical field.19 The evidence was thus admitted, the consequence being that the defendants were convicted and each sentenced to several years in federal prison.

In allowing the evidence to be used despite its nonscientific character, Judge McKenna drew a useful analogy between this form of expertise and the expertise of a harbor pilot. The expertise of a harbor pilot, the judge reasoned, came from long and successful experience, experience that could constitute expertise even if those possessing it did not have a structured and tested methodology. For Judge McKenna, the basic point of the analogy was to support the conclusion that various forms of experience might still qualify as expertise, even if that experience could not be scientifically tested or even grounded in testable hypotheses. We could not, after all, do a test in which multiple harbor pilots were asked to use multiple methods of piloting the ships under their control in order to see which ships arrived safely at the dock and which ones sank as a result of hitting unseen reefs. We might further analogize Judge McKenna’s harbor pilots to drivers whose insurance companies set their premiums or give them safe driver discounts by virtue of the drivers possessing an accident-free record for some number of years. Most safe drivers cannot explain why they are safe drivers, other than uttering a few platitudes about paying attention and not driving too fast. And safe drivers do not subject their approaches to rigorous testing by changing one aspect of their driving to see if that produces accidents. And, importantly, the insurance company does not care. All the insurance company cares about is that someone’s long accident-free record shows them to be a safe driver. Similarly, and as Judge McKenna concluded, long and successful experience can qualify someone as an expert, thus giving their judgments value and weight as evidence, even if their practices and methodology are far removed from anything resembling science.

The formal legal basis for Judge McKenna’s decision is now obsolete, the Supreme Court having subsequently ruled in a different case that even nonscientific evidence must meet some sort of standard of reliability before it can be admitted as the testimony of an expert.20 But even if the reliability of various experience-based forms of expertise, such as harbor piloting, cannot be established with scientific rigor, there are alternative, even if less rigorous, indicators of reliability that are still permissible, and the kind of nonscientific and individual-judgment-based expertise that Judge McKenna allowed would still be allowed today. More importantly, and of relevance to evidence generally and not merely to forensic evidence in criminal trials, there are many forms of evidence that are not science, and that cannot be scientifically tested. Recall from Chapter 2 the physician whose diagnoses are based on long experience rather than laboratory experiments or clinical trials. The nonscientific nature of such evidence, and the non-rigorous testing of the reliability of such evidence, undoubtedly weakens its evidential value. But weaker evidential value is not the same as no evidential value, and the lessons of nonscientific forensic evidence are important throughout the realm of evidence and inferences based on evidence.

The examples of ballistic and handwriting identification merely scratch the surface of a vast literature and an even vaster history about forensics and criminal investigation. The combination of the Supreme Court’s now-decades-old insistence on indicators of reliability for the admission of all forms of expert evidence, and the roughly contemporaneous rise of concern about the erroneous conviction of innocent defendants, has called into question much of the conventional wisdom about forensic evidence, the forensic science that has long been taken to be valid and reliable by lawyers, judges, the police, the public, novelists, and television writers. But the lessons to be learned are lessons about evidence generally, and not just about the kind of evidence that is left at the “scene of the crime.” And to that we now turn.

Recurring Lessons

The recent history of forensic techniques provides ample grounds for caution about the reliability of those techniques, but that history also, albeit less obviously, provides grounds for being cautious about the caution. Literally by definition, the field of forensics is about crime, and identifying the perpetrators of crimes is typically in aid of using the legal system to punish them, often by imprisonment. It is thus easy to see why the evaluation of forensic evidence exists largely in the context of the potential use of such evidence to put people in prison, sometimes to execute them, and almost always to do at least something unpleasant to those convicted of a criminal offense.

Because the use, misuse, and evaluation of forensic techniques has so overwhelmingly been situated within the criminal justice system and its long-standing and desirable requirements of proof beyond a reasonable doubt, it is tempting to move quickly from the inability of some forensic technique to prove guilt beyond a reasonable doubt to conclusions about the overall weaknesses of that forensic technique. That temptation, however, should be resisted. And it should be resisted for two reasons that have recurred throughout this book. The first is that whether evidence is good or bad, sufficient or insufficient, depends on what we are going to do if the evidence is sufficient. How much evidence we need is a function of the question “For what?” Even in the criminal justice system, forensic techniques that would be insufficient to support a conviction, perhaps even in conjunction with other evidence, might well be sufficient to prompt an investigation and might also be adequate to lead investigators to look in one direction or at one suspect rather than another.

Moreover, as discussed in Chapter 7 in the context of lie detection, evidence that would plainly be insufficient by itself to prove a defendant’s guilt beyond a reasonable doubt, and that might even be inadmissible in conjunction with other evidence in light of the legal system’s concern for the rights of defendants, might nevertheless be more than sufficient to allow a defendant to establish that there was a reasonable doubt as to his guilt. If a defendant were to offer testimony by a ballistics expert that the bullet extracted from a victim “matched” a gun owned by someone other than the defendant and who had a motive for killing the victim, we can wonder whether the judges who balked at terms like “match” would have done so in this context. Consequently, we can also wonder whether some of the work that is being done by the “beyond a reasonable doubt” burden of proof is also casting a shadow on questions about the validity of some forms of evidence more generally, perhaps distorting the evaluation of evidence in the service of ensuring (or double-ensuring) that only the most plainly guilty of defendants are actually convicted.21

The history of ballistics identification again provides a useful and sobering lesson of this last point. Scorn toward such evidence is nowadays more common than it used to be, and it is now and then referred to as “junk science.”22 But the original uses of ballistics evidence were not for the purposes of putting people in prison but instead for keeping them out. Although there had been some earlier and laughably crude attempts at ballistics identification, the first such identification that even approached systematic rigor was the one that saved Charlie Stielow from the electric chair. Stielow, a mentally challenged handyman in upstate New York, had been tried, convicted, and sentenced to death in 1916 for the murder of his employers. But the combination of his lawyers’ efforts and a sympathetic governor secured for Stielow an affidavit and subsequent testimony by a self-trained ballistics expert who established that Stielow’s gun could not have been the murder weapon, the prosecution’s case notwithstanding. Stielow was exonerated and released.23

Stielow’s case is a dramatic example of how evidence that might be properly deemed insufficient to support a criminal conviction might still be sufficient to support an acquittal or exoneration by raising a reasonable doubt about some defendant’s guilt. And we can say the same for other forms of forensic evidence that appear deficient when used by the prosecution in a criminal case. Handwriting identification examiners, for example, whose reliability was established only by their record of convictions, and was thus insufficient to satisfy modern standards, might still have such extensive experience that we would—and should—be reluctant to dismiss their testimony as to the potential deficiencies of a prosecutor’s evidence to the contrary. And in a civil case, where denial of genuine liability is as wrong as mistaken imposition of liability, again what is not good enough for the prosecution in a criminal case might be considered to be good enough for an injured plaintiff in a civil case. Admittedly, the tenor of the previous several sentences is in tension with the law’s traditional and still-persistent view that the admissibility of evidence does not vary with the type of case or the party or side that is offering the evidence. But the case of Charlie Stielow and other factually innocent defendants who have been wrongly convicted may suggest that the law’s traditional indifference to the source and purposes of evidence should perhaps be open for reevaluation.24

Linked to the question “For what?” is the question “Compared to what?” Not only do we need to know the purposes to which a piece of evidence is being put, or the hypotheses it is being used to test, before we know whether the evidence is relevant, whether we should consider it, and how much weight we should give it, but we also need to know what alternative evidence will be used if the evidence under consideration cannot be used. And that inquiry includes the possibility of “none.” If some piece of evidence is not usable, but we are forced to make a decision, perhaps the decision will be made on no evidence at all. This may not be the typical way in which the question “Compared to what?” arises, but it is still essential, for forensic evidence and elsewhere, that we understand the alternatives before we too quickly say that some type of evidence is not good enough.

Consider again the case of handwriting comparison. In American law, lay comparisons of handwriting—sometimes comparisons offered by witnesses and sometimes comparisons performed directly by the jury—are plainly accepted for the preliminary purpose of authentication and have frequently been accepted as relevant substantive evidence.25 If a layperson familiar with someone’s signature testifies that the signature on a document whose authenticity is disputed is the signature of the person whose signature the witness knows, that testimony will commonly be admitted, even recognizing that the weight to be given to that testimony is ultimately for the jury to decide. Even worse (or better, depending on your point of view), the jury itself is sometimes allowed to assess whether one signature or piece of handwriting is similar to another, and is permitted to do so with no expert assistance at all. If in the Starzecpyzel case the expert had not been permitted to testify as to the dissimilarity between the signature of Ms. Goldstone (whose art it was) and the signature on the consignment documents presented to Christie’s and Sotheby’s, that determination would likely have been left for the jury to make. The question then is not whether expert handwriting identification comparison is good, but, again, whether it is good enough. And the question of whether it is good enough is substantially a function of what the jury (or judge) would have done in the absence of that expert testimony.

And, finally, we should not forget the Blackstonian idea of preferring the errors of false acquittal to the errors of false conviction. If we update Blackstone’s insight into the language of decision theory, we can understand him as saying that determining guilt or innocence is an exercise in decision making under uncertainty, and that designing procedures for making that decision involves selecting among necessarily imperfect decision procedures. And although some of the problems with forensic evidence might be corrected without cost, supposing that we can do so for most of those problems seems Pollyannaish. Not only do almost all of the remedies come at some cost in the literal financial sense, but they also often, even if not necessarily, come at the kind of cost that Blackstone first recognized. In most cases, fixing the procedures that have produced many of the recent exonerations in order to guard against false positives will involve increasing the number of false negatives. For example, prohibiting a ballistics expert from describing an identified similarity as a “match” is likely to lead at least some jurors to discount that expert’s conclusions more than the science or the statistics would justify. It seems likely, therefore, that across a large enough array of cases, excluding admittedly imperfect forensic evidence will produce the erroneous acquittal of more guilty defendants than would be produced by the beyond a reasonable doubt standard operating alone. That consequence is real, but whether that consequence is a price worth paying is a determination that the evidence alone cannot make. It is common these days to identify the failures of forensic science and consequent false convictions as something like “shocking.”26 And insofar as those failures can (and should) be corrected at little cost and with no change in the number of false acquittals, then shocking they are. But if some of the forensic failures can be corrected only at some cost, and only by shifting the ratio of false convictions to false acquittals, then the very existence of those failures may be less shocking and more inevitable than may initially appear.
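
For readers who want Blackstone’s insight spelled out in decision-theoretic terms, the following is a minimal sketch under an assumed expected-cost model; the ten-to-one ratio and the resulting threshold are illustrative, not a claim about any actual legal standard.

```python
# A sketch of Blackstone's ratio in decision-theoretic terms, under an
# assumed expected-cost model: convict when the expected cost of
# convicting, (1 - p) * cost_false_conviction, is lower than the
# expected cost of acquitting, p * cost_false_acquittal.

def conviction_threshold(cost_false_conviction: float,
                         cost_false_acquittal: float = 1.0) -> float:
    """Probability of guilt above which convicting minimizes expected cost."""
    r = cost_false_conviction / cost_false_acquittal
    return r / (r + 1)

# Blackstone: one false conviction is as bad as ten false acquittals,
# yielding a threshold in the neighborhood of "beyond a reasonable doubt"
print(f"{conviction_threshold(10):.3f}")  # 0.909

# A symmetric, civil-style standard yields "more likely than not"
print(f"{conviction_threshold(1):.3f}")   # 0.500
```

Raising the threshold to guard against false convictions mechanically increases false acquittals, which is exactly the tradeoff the paragraph above describes.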

A Note on DNA

Although forensic identification by the use of fingerprints, ballistics, handwriting, and much else still dominates the forensic world, all of these techniques are in some way quaint. We are, after all, now in the era of DNA. Because the chances of two people having identical DNA are much smaller than even the chances of two people having identical fingerprints—to say nothing of two people having identical handwriting or two guns producing bullets with identical markings or two people looking the same to an eyewitness—DNA investigation holds out the promise of offering dramatic improvements on all of the other known methods of forensic identification. Indeed, it is curious that DNA identification is so often described as “DNA fingerprinting,” suggesting that fingerprint identification is the ideal to which DNA analysis aspires. But that gets it backward. Fingerprint analysis, for all its reliability, is not as reliable as has long been assumed, and DNA analysis, done properly, is vastly better.27

That DNA analysis is vastly better than the alternatives for many or most forensic purposes does not mean that it is without problems. One of those problems is that what DNA analysis has the potential to do is not always what it achieves in practice. Samples are lost, contaminated, and mislabeled. Or miscollected in the first place. Or even if not lost, contaminated, mislabeled, or miscollected, they may be only partial. Moreover, the chemicals used in the analysis may have expired. Or the analysts may be inexperienced, incompetent, or poorly supervised. And so on. For DNA analysis, as with almost any other human activity, it is a mistake to confuse some technique’s potential if done perfectly with the actual performance of the technique. It may be rare for the examiner to spill a milkshake on the sample, but that extreme and hypothetical image is a reminder that there is a difference between theory and implementation, and a difference between ideal and actual performance. Most legal decisions on the use of DNA as evidence, therefore, are less about the science than they are about how that science is applied by fallible humans and laboratories in particular instances.28 In fact, the inquiry is best understood as involving three parts. First there is the science; second there are the techniques for implementing the science; and third is how those techniques have been carried out on particular occasions.29 And even if the first of the three is now well accepted, the second and third have the potential for making what is extraordinarily reliable in theory often far less so in practice.

The second area of concern emerges from the fact that DNA testing is partial, in the sense that even the best currently available techniques do not and cannot look at all of the alleles in all of the polymorphic sites. In simple language, currently it is only possible to examine some parts of the DNA profile, even with the best of techniques. And although no two people (except for identical twins) have truly the same DNA, two (or more) people might be identical in those parts of the DNA profile that it is now possible to examine. Then the question moves from theoretical certainty to practical probability. But here things get tricky—because we are no longer in the realm of almost immeasurably small possibilities that someone other than the defendant could have produced the DNA found on, say, the murder weapon or the door handle of the stolen car. Rather, we are dealing with probabilities such that it is at least conceivable that the DNA was produced by someone other than the defendant. And so, to use an entirely made-up number, suppose the chances are one in ten million that someone other than the defendant left this particular DNA sample on the murder weapon. That’s a big number, but it also means, in theory, that there are thirty-three people in the United States who are as likely as the defendant to have left their DNA on the murder weapon. And if there are thirty-three such people, then the probability that it was the defendant is not much more than 3 percent, at least if we were to consider this and no other evidence.30
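
The arithmetic behind that made-up example is worth spelling out. Here is a minimal sketch in Python; the one-in-ten-million figure and the US population size are simply the numbers from the example above, not real random-match statistics.

```python
# The arithmetic behind the made-up example above: a one-in-ten-million
# random-match probability applied to the US population.

random_match_probability = 1 / 10_000_000
us_population = 330_000_000

# Expected number of *other* people whose DNA would also match
expected_other_matches = random_match_probability * us_population  # 33.0

# With no other evidence, the defendant is just one of ~34 equally
# plausible sources of the sample
p_defendant_is_source = 1 / (expected_other_matches + 1)

print(f"Expected other matches: {expected_other_matches:.0f}")
print(f"P(defendant is the source): {p_defendant_is_source:.1%}")  # about 2.9%
```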

This is obviously a dramatically oversimplified example. But the basic point of the example takes us back to Reverend Bayes. If the police could have rounded up all thirty-three of these people, and if there were no other evidence at all, then the 3 percent figure would be correct. But the defendant is the defendant because of other evidence, despite the fiction—and it is a fiction—of a genuine presumption of innocence. Given that there is other evidence against the defendant—place of residence, age, motive, opportunity, and so on—the question is what the DNA comparison, or match, does to the probability that this defendant is guilty. If we start with a probability of 1 / 330,000,000—that the defendant was simply a random resident of anywhere in the United States picked up off the street (or selected by a nationwide lottery) to be a defendant—then what the DNA adds is plainly not enough, even though it does increase substantially the probability of the defendant’s guilt. But that, of course, is not how the criminal process works. At a trial, various other pieces of evidence, including the theoretically irrelevant fact that it is this defendant who is on trial, will lower the 330,000,000 pool to a much, much smaller number. And then the existence of a DNA match between the defendant’s DNA and the DNA found at the crime scene, given the extraordinarily small possibility that any other member of that pool could have been a match, is what gives DNA evidence its statistical power and thus its power as evidence. Once we recognize that the presumption of innocence is a fiction, and that the prior probability of the defendant’s guilt is much higher than 1 / 330,000,000 by the time the DNA evidence is introduced, we can see that what the DNA evidence does is raise a non-negligible prior probability to a much higher posterior probability, often enough to convict.31
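
A minimal sketch of that Bayesian updating, with illustrative numbers only: the likelihood ratio for the DNA match is assumed to be the inverse of the one-in-ten-million random-match probability, and the 10 percent prior standing in for "other evidence" is purely hypothetical.

```python
# A sketch of the Bayesian updating described above. The likelihood
# ratio for the DNA match is assumed to be 10,000,000 (the inverse of
# the random-match probability); the priors are illustrative only.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability of guilt by a likelihood ratio, via odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

lr = 10_000_000

# A "random resident of the United States" prior: DNA alone is not enough
print(f"{posterior(1 / 330_000_000, lr):.1%}")  # about 2.9%

# A prior already raised by other evidence (a hypothetical 10 percent):
# the same match now makes guilt a near certainty
print(f"{posterior(0.10, lr):.5%}")             # about 99.99991%
```

The same match that moves a lottery-ticket prior to only 3 percent moves a modest evidence-based prior to near certainty, which is the point the paragraph above makes about the fictional presumption of innocence.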

Or often enough to exonerate. An important feature of the DNA “revolution” is the frequency with which DNA analysis has been used to show that people previously convicted of crimes either could not have committed those crimes or at least might not have, with a probability sufficient to raise a reasonable doubt.32 And so, although there is no room here to explore all of the scientific, statistical, legal, and moral issues that the widespread use of DNA analysis has spawned, in one respect the fact of numerous DNA exonerations reinforces one of the themes that pervades this book—whether evidence is good depends on whether it is good enough, and whether it is good enough depends on what it is to be used for. That is why fMRI lie detection, which may not yet be good enough to put people in prison, might be good enough to keep them out. And that is why it is a mistake to think of DNA solely in terms of apocalyptic 1984 scenarios in which massive DNA databases invade our privacy and make it impossible to escape even our minor transgressions. That would indeed be something to worry about, but the same revolutions in DNA and big data that make it more difficult for the guilty to avoid detection also make it more difficult to convict the innocent. And that is to be celebrated and not lamented.
