A research article published June 10 in the Proceedings of the National Academy of Sciences highlights the importance of careful application of high-tech forensic science to avoid wrongful convictions.
In a study with implications for an array of forensic examinations that rely on “vast databases and efficient algorithms,” researchers found the odds of a false match significantly increase when examiners make millions of comparisons in a quest to match wires found at a crime scene with the tools allegedly used to cut them.
The rate of mistaken identifications could be one in 10 or even higher, concluded the researchers, who are affiliated with the Center for Statistics and Applications in Forensic Evidence (CSAFE), based in Ames, Iowa.
“It is somewhat of a counterintuition,” said co-author Susan VanderPlas, an assistant professor of statistics at the University of Nebraska-Lincoln. “You are more likely to find the right match – but you’re also more likely to find the wrong match.”
VanderPlas worked as a research professor at CSAFE before moving to Nebraska in 2020. Co-authors of the study, “Hidden Multiple Comparisons Increase Forensic Error Rates,” were Heike Hofmann and Alicia Carriquiry, both affiliated with CSAFE and Iowa State University’s Department of Statistics.
Wire cuts and tool marks are used frequently as evidence in robberies, bombings, and other crimes. In the case of wire cuts, tiny striations on the cut ends of a wire may be matched to one of many available tools in a toolbox or garage. Comparing the evidence to more tools increases the chances that similar striations may be found on unrelated tools, resulting in a false accusation and conviction.
Wire-cutting evidence has been at issue in at least two cases that garnered national attention, including one where the accused was linked to a bombing based on a small piece of wire, a tiny fraction of an inch in diameter, that was matched to a tool found among the suspect’s belongings.
“Wire-cutting evidence is used in court and, based on our findings, it shouldn’t be – at least not without presenting additional information about how many comparisons were made,” VanderPlas said.
Wire-cutting evidence is evaluated by comparing the striations found on the cut end of a piece of wire against the cutting blades of tools suspected to have been used in the crime. In a manual test, the examiner slides the cut end of the wire along a mark made in another piece of material by the same tool, looking for a place where the patterns of striations match.
An automated process uses a comparison microscope and pattern-matching algorithms to find possible matches pixel by pixel.
This can result in thousands upon thousands of individual comparisons, depending upon the length of the cutting blade, diameter of the wire, and even the number of tools checked.
For example, VanderPlas said she and her husband tallied the various tin snips, wire cutters, pliers and similar tools stored in their garage and came up with a total of 7 meters in blade length.
Examiners may not even be aware of the number of comparisons they are making as they search for a matching pattern, because those comparisons are hidden in the algorithms.
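The compounding effect described above can be sketched with elementary probability. The per-comparison false-match rate below is a hypothetical placeholder, not a figure from the study; the point is only that, even when each individual comparison is very reliable, the chance of at least one false match grows quickly with the number of hidden comparisons:

```python
# Illustrative sketch of the multiple-comparisons effect.
# The per-comparison false-match probability is a hypothetical
# placeholder, not a value reported in the CSAFE study.

def prob_any_false_match(p_single: float, n_comparisons: int) -> float:
    """P(at least one false match) across n independent comparisons."""
    return 1 - (1 - p_single) ** n_comparisons

p = 1e-5  # hypothetical false-match probability for one comparison
for n in (1, 1_000, 10_000, 100_000):
    print(f"{n:>7} comparisons -> P(false match) = "
          f"{prob_any_false_match(p, n):.4f}")
```

Under this toy assumption, roughly 10,000 comparisons (a plausible count given meters of blade length examined at sub-millimeter resolution) already push the chance of at least one false match near 10 percent, in the same range as the one-in-10 figure the researchers warn about.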
“This often-ignored issue increases the false discovery rate, and can contribute to the erosion of public trust in the justice system through conviction of innocent individuals,” the study authors wrote.
Forensic examiners typically testify based upon subjective rules about how much similarity is required to make an identification, the study explained. The researchers could not obtain error rate studies for wire-cut examinations and used published error rates for ballistics examinations to estimate possible false discovery rates for wire-cut examinations.
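The quantity at issue, the false discovery rate, can be sketched in the same spirit. The error rates below are hypothetical stand-ins (the article notes the authors had to borrow published ballistics error rates for their estimates); the sketch assumes at most one of the compared regions is a true match and shows how the share of reported matches that are false grows with the number of comparisons:

```python
# Hypothetical sketch of an examination-wide false discovery rate (FDR).
# Both error rates are illustrative placeholders, not study values.

def false_discovery_rate(fpr: float, sensitivity: float, n: int) -> float:
    """Expected FDR when 1 of n compared regions is a true match.

    Expected false positives: (n - 1) * fpr
    Expected true positives:  sensitivity (chance the real match is found)
    """
    expected_fp = (n - 1) * fpr
    return expected_fp / (expected_fp + sensitivity)

fpr, sens = 1e-4, 0.95  # hypothetical per-comparison error rates
for n in (10, 1_000, 10_000):
    print(f"n = {n:>6}: FDR ~ {false_discovery_rate(fpr, sens, n):.3f}")
```

With these placeholder numbers, a reported match from 10 comparisons is almost certainly genuine, while a match found after 10,000 comparisons is about as likely to be false as true, which is why the recommendations below ask examiners to report how much material was compared.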
Before wire-cut examinations are used as evidence in court, the researchers recommended that:
- Examiners report the overall length or area of materials used in the examination process, including blade length and wire diameter. This would enable examination-wide error rates to be calculated.
- Studies be conducted to assess both false discovery and false elimination error rates when examiners are making difficult comparisons. Studies should link the length and area of comparison to error rates.
- The number of items searched, comparisons made, and results returned be reported whenever a database is used at any stage of the forensic evidence evaluation process.
The VanderPlas article joins other reports calling for improvements in forensic science in America, including the landmark 2009 report “Strengthening Forensic Science in the United States: A Path Forward,” published by the National Academies Press on behalf of the National Academies of Sciences, Engineering, and Medicine.