Fingerprint examination, though in use for more than 100 years, has until recently undergone surprisingly little scientific scrutiny. A paper recently published in the scientific journal PLOS ONE provides important new insight into what specific, visual aspects of fingerprint pairs make their analysis more or less difficult.
The paper, authored by UCLA psychology professor Philip Kellman, UCLA law professor Jennifer Mnookin and several additional co-authors, investigates one particularly important question: What makes specific fingerprint comparisons easier or harder than others? While fingerprint analysts may have experience-based intuitions about what makes a comparison difficult, they have lacked validated methods for objectively measuring difficulty or for determining scientifically which visual aspects of the fingerprints themselves contribute to that difficulty.
The model developed in the paper accounted for 64 percent of the variation in print comparison accuracy on a novel set of fingerprint images. Both the presence of specific fingerprint features and image-quality metrics, such as contrast and fingerprint ridge clarity, turned out to be important predictors of comparison accuracy.
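To make the "variance explained" idea concrete, here is a minimal sketch (not the authors' actual model or data) of how a regression can quantify how much image-quality features predict comparison accuracy. The features, coefficients, and simulated accuracy values below are entirely hypothetical illustrations.

```python
# Illustrative sketch only: an ordinary least-squares regression
# predicting per-comparison accuracy from hypothetical image-quality
# features, with R^2 as the fraction of variance explained.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for 200 fingerprint comparisons:
# column 0 = image contrast, column 1 = ridge clarity (both 0..1).
X = rng.uniform(0.0, 1.0, size=(200, 2))

# Simulated accuracy: clearer, higher-contrast prints are assumed
# easier, plus noise the model cannot explain.
accuracy = 0.5 + 0.2 * X[:, 0] + 0.25 * X[:, 1] + rng.normal(0, 0.05, 200)

# Fit OLS with an intercept via least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, accuracy, rcond=None)

# R^2: proportion of accuracy variance the features account for.
pred = A @ coef
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

In this toy setup, R^2 plays the same role as the 64 percent figure reported for the paper's model: it measures how much of the comparison-to-comparison variation in accuracy the measured visual features capture.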
“While some of the predictors we found likely comport with what fingerprint examiners would have intuited, being able to demonstrate their role scientifically, and measuring, rather than just assuming, their importance is a significant step forward,” Mnookin said.
Mnookin and Kellman were joined in the paper by UCLA doctoral candidate in psychology Gennady Erlikhman and recent Ph.D. recipient Everett Mettler, St. Joseph’s University psychology professor Patrick Garrigan, Tandra Ghose of the Max Planck Institute, fingerprint expert David Charlton, and cognitive psychologist Itiel Dror of Cognitive Consultants International, who has conducted extensive research into the psychological aspects of fingerprint evidence.
Finding an objective method to determine fingerprint comparison difficulty has a good deal of practical importance. While fingerprint experts may have high accuracy overall, errors are more common for difficult comparisons than for easier ones. That means that methods for assessing the difficulty of a comparison, in advance of its analysis and based on objective visual characteristics, could help increase the accuracy of fingerprint identification overall. Measuring difficulty could also help crime laboratories assess which print comparisons might require special attention or care.
“These results suggest that it’s not far-fetched to imagine that relatively soon, error-prone comparisons could be flagged by a computer,” Garrigan said.
“If we could easily determine which print comparisons created a heightened risk for error, that would have lots of advantages,” Mnookin said. “Those prints could be more carefully scrutinized by analysts, treated differently in the courtroom, or subjected to additional verification procedures.”
“In addition to contributing to our knowledge of a crucial area of forensic science, this research also helps us better understand experts’ advanced visual processing skills attained from experience and training, an important area in psychological research,” Kellman said. “We are currently analyzing follow-up data, and our model continues to look promising.”
This research was supported by grant funding from the National Institute of Justice.