Jennifer Mnookin
UCLA

Jennifer Mnookin is dean of the UCLA School of Law. Harry Edwards is a senior judge on the U.S. Court of Appeals for the D.C. Circuit. They serve as co-chairs of the Senior Advisors to the PCAST Working Group. This op-ed appeared in the Washington Post.

On the popular television show “CSI,” forensic evidence was portrayed as glitzy, high-tech — and virtually infallible. Unfortunately, this depiction is often a far cry from reality. This week, a significant report issued by the President’s Council of Advisors on Science and Technology (PCAST) persuasively explains that expert evidence based on a number of forensic methods — such as bite-mark analysis, firearms identification, footwear analysis and microscopic hair comparisons — lacks adequate scientific validation. Quite simply, these techniques have not yet been shown to be reliable forms of legal proof.

The report is a much-needed wake-up call to all who care about the integrity of the criminal-justice system. It builds upon mounting evidence that certain types of “forensic feature-comparison methods” may not be as reliable as they have long appeared. A recent, unprecedented joint study by the Innocence Project and the FBI looked at decades of testimony by hair examiners in criminal cases — and found flaws in the testimony an astonishing 95 percent of the time. In a number of serious felonies, DNA testing has revealed that bite-mark evidence underpinning convictions was simply incorrect. More generally, faulty forensic evidence has been found in roughly half of all cases in which post-conviction DNA testing has led to exoneration.

What is noteworthy about the new report is that it is written solely by eminent scientists who carefully assess forensic methods according to appropriate scientific standards. The report finds that many forensic techniques do not yet pass scientific muster, which strongly implies that they are not yet ready for use in the courtroom either.

Some of our law students have asked, “Why do we still need these other forensic methods? Can’t we just rely on DNA testing instead?” The simplest answer is that DNA is not always available in criminal prosecutions. Our students also ask, “Why don’t courts just decline to admit testimony that rests on forensic methods that have not been validated?” The truth is that we wish they did, although we also understand why that has been institutionally challenging.

Unfortunately, judges frequently rely on the experience of a forensic practitioner, and the long-standing use of a given technique, rather than focusing on the technique’s scientific validity. This is not surprising. The rule of law embraces the quest for constancy and predictability, as well as a determination to treat like cases alike. Therefore, even as many judges have come to recognize the weak scientific underpinnings of some methods, they continue to allow such testimony primarily because nearly all other judges have done so before. In a nod to scientific concerns, some judges have placed modest restrictions on the exact words a forensic expert can use when testifying regarding a possible “match” between a piece of evidence and a particular individual. We doubt that this has much effect on jurors.

In 1993, the Supreme Court detailed federal judges’ gatekeeping obligation, under which they must assess the validity and reliability of purported scientific evidence. However, that opinion, Daubert v. Merrell Dow, expressed confidence in the adversarial system, pointing to the role of “vigorous cross-examination” and “presentation of contrary evidence” as the “traditional and appropriate means of attacking shaky but admissible evidence.” Respectfully, experience has shown that, at least in criminal trials, the suggestion that the “adversarial system” represents an adequate means of demonstrating the unreliability of forensic evidence is mostly fanciful.

Forensic practice has developed through apprenticeships and peer-to-peer training — more by doing than by studying. But doing something — even doing it thousands of times — does not by itself establish how accurate you are or how often you make mistakes, unless you have a structured method to gain feedback about your performance. That requires well-designed studies focused on measuring performance by testing how often examiners are right or wrong based on tasks like those encountered in practice. It cannot happen via ordinary casework, in which a practitioner cannot really know the ground truth. Serious research will help forensic practitioners understand what they do not know about the limits of their discipline, and it will cause them to be more forthright in explaining these limits to judges and jurors.

The PCAST report puts forward a plausible, workable test for validity: Forensic disciplines should pursue empirical studies designed to test error rates and accuracy in conditions akin to those found in the real world. This is a reasonable standard. For example, latent fingerprint evidence would not have met this standard just a few years ago; now, thanks to thoughtful recent research, the report finds that it does. Any forensic technique that is valid and trustworthy ought to be able to pass this test. And the converse is equally true: Any forensic technique that fails to meet this standard should not be used in court.

The integrity of our criminal-justice system deserves no less. Requiring that the forensic methods we use in court have a reasonable modicum of scientific validity is neither pro-defense nor pro-prosecution; it is, rather, both pro-science and pro-justice.