True-Crime Podcasts Get Fingerprint Help from NIST (The National Institute of Standards and Technology)
Fingerprinting for identification began in the mid-to-late 19th century, with systematic use in law enforcement starting in the 1890s. While ancient civilizations used fingerprints for seals, modern forensic use began in 1858 with Sir William Herschel in India, followed by the first criminal conviction in 1892 in Argentina.
Since the 1911 People v. Jennings case, fingerprint evidence has been legally admitted in U.S. courts, establishing it as a key tool for prosecutors.
Fingerprints are a staple of true crime, often mentioned alongside DNA, with studies showing that 48 percent of true-crime consumers are interested in the forensic science behind the cases. True-crime podcasts frequently invoke fingerprints, as the technique is a foundational element of criminal investigations and comes up in thousands of episodes across the genre. Podcasts such as Wicked and Grim ("First Crime Solved with Fingerprints"), Weird Darkness ("Two Strangers, One Face: The 1903 Case That Made Fingerprints the Gold Standard"), and Criminal ("Photo, Hair, Fingerprint") have dedicated entire episodes to the history and use of fingerprint evidence.
A recently published article by Chad Boutin, science and IT writer at the National Institute of Standards and Technology (NIST), details two new releases from the agency aimed at improving forensic fingerprint examination.
The National Institute of Standards and Technology (NIST) is a non-regulatory U.S. federal agency within the Department of Commerce, founded in 1901. It promotes innovation and industrial competitiveness by advancing measurement science, standards, and technology, thereby enhancing economic security and improving quality of life.
Mr. Boutin’s article revealed that:
- A NIST collection of 10,000 fingerprints has now been fully annotated with details that will help train both human fingerprint examiners and AI tools.
- NIST has also released open-source software to evaluate and sort fingerprints by quality, potentially helping fingerprint examiners work more efficiently.
- The two releases are intended to help improve forensic fingerprint examination, an important aspect of criminal investigations.
Sifting through fingerprints gathered from crime scenes is the job of fingerprint analysts and — increasingly — their computers. Training humans and their machine partners for this meticulous work is no easy task, but, according to Mr. Boutin, help has arrived in the form of a new data and software release from the National Institute of Standards and Technology (NIST).
The data, consisting of thousands of fingerprints along with notes detailing their quality, follows the release of an open-source software package that can rapidly assess print quality. Together, they offer a pair of tools for improving the expertise of forensic scientists.
“These two resources will help improve the science of fingerprint identification,” said NIST computer scientist Greg Fiumara. “The data is the largest and most complete fingerprint dataset now available, and the software is a modified version of a print analysis tool used by U.S. law enforcement that we are making freely available to the world.”
In the article, Mr. Boutin explains that “The fingerprint data, available as part of NIST Technical Note (TN) 2367, augments a previous release, Special Database (SD) 302, that NIST initially made available in 2019. It contains about 10,000 fingerprints collected in a lab setting from 200 volunteers, who consented to the use of their prints for research purposes. All other personal information was scrubbed from the database, including the volunteers’ names and places of residence.”
“The prints are from people we recruited to come in and do things like write a note, pick up a circuit board, handle a dollar bill, that sort of thing,” Fiumara added. “Then we recovered the prints they left behind using different methods that crime scene investigators commonly use.”
Since the data’s initial release, more than 1,000 research organizations from more than 90 countries have downloaded it. But it was not complete. Only about half of its fingerprints contained annotations — specific details about a print that offer a guide to evaluating the print’s quality. It is these annotations that make the database such a valuable teaching tool, because they show new examiners — and increasingly, AI — what to look for and what to avoid when evaluating a print.
Recently, experts went back and created annotations for the remaining prints. As with fingerprints gathered from actual crime scenes, the prints in the dataset vary widely in quality: In some spots, the lines left by a fingertip’s tiny, curving ridges are clear and unbroken, while in others these lines are smudged or incomplete. The annotations, which include color-coded regions indicating different levels of print quality, will help educate humans and AI alike, Fiumara said.
“These images are good for classroom education, to teach examiners how to look for identifying features,” he said. “And they will also help teach AI algorithms where to look and how to weigh a feature’s importance. With this kind of training, a fingerprint evaluation algorithm will get better.”
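To make the idea of quality annotations concrete, the sketch below models color-coded quality regions as a simple data structure. This is a hypothetical illustration only, not NIST's actual TN 2367 annotation format; the region fields and quality labels are assumptions for the example.

```python
# Hypothetical sketch of per-region fingerprint quality annotations.
# This is NOT NIST's actual TN 2367 annotation format -- just an
# illustration of how color-coded quality regions might be modeled.
from dataclasses import dataclass

@dataclass
class QualityRegion:
    x: int          # top-left corner of the region, in pixels
    y: int
    width: int
    height: int
    quality: str    # e.g. "clear", "smudged", "incomplete"

def summarize(regions):
    """Count annotated regions by quality label."""
    counts = {}
    for r in regions:
        counts[r.quality] = counts.get(r.quality, 0) + 1
    return counts

annotations = [
    QualityRegion(0, 0, 64, 64, "clear"),
    QualityRegion(64, 0, 64, 64, "smudged"),
    QualityRegion(0, 64, 64, 64, "clear"),
]
print(summarize(annotations))  # {'clear': 2, 'smudged': 1}
```

A structure like this is what makes the dataset useful for training: a learner, human or machine, can compare its own judgment of each region against the expert label.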
According to NIST science writer Chad Boutin, “For software developers and print examiners alike, the second resource in the release will provide additional value. NIST recently obtained software called LQMetric, designed to assess fingerprint quality, but whose use was limited to U.S. law enforcement. Over the past year, NIST funded the conversion of the software to a version that would run on Mac, Windows, or Linux systems, and then made it open source for anyone worldwide to use. The newly reconfigured software, which NIST is calling OpenLQM, can function as a standalone program or be incorporated into other software like a plug-in.”
“You give OpenLQM a fingerprint, and it returns a number from 0–100 that is an assessment of the print’s quality,” Fiumara said. “It can help print assessors work more quickly, which is important in forensic science when you often have hundreds of prints to review from a crime scene. You want to help them separate out the prints that contain the highest level of detail. That’s where the software comes in.”
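The triage workflow Fiumara describes can be sketched in a few lines. The scoring function below is a placeholder standing in for a quality tool (this is not OpenLQM's actual interface); the point is how a 0–100 quality score lets examiners sort a batch of prints and review the highest-detail ones first.

```python
# Sketch of quality-based triage for a batch of latent prints.
# score_print() is a placeholder standing in for a quality-scoring
# tool that returns a number from 0 to 100; the IDs and scores
# here are made up for illustration.
def score_print(print_id):
    # Placeholder: in practice this would call the scoring tool.
    fake_scores = {"print_a": 82, "print_b": 17, "print_c": 55}
    return fake_scores[print_id]

def triage(print_ids, threshold=50):
    """Sort prints by quality score, keeping those above a threshold."""
    scored = [(pid, score_print(pid)) for pid in print_ids]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [pid for pid, score in scored if score >= threshold]

print(triage(["print_a", "print_b", "print_c"]))
# ['print_a', 'print_c'] -- highest-detail prints come first
```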
Both the dataset and the software have proved valuable to users.
“LQMetric software has been an invaluable asset,” said Anthony Koertner, a certified latent print examiner at the Department of the Army Criminal Investigation Division’s U.S. Army Criminal Investigation Laboratory. “It’s been pivotal in our efforts to achieve greater objectivity and reproducibility in latent print quality assessments. The open-source release, complemented by NIST Special Database 302, represents a significant advancement for the global forensic community. Together, they provide powerful new resources for practitioners and researchers to drive innovation and enhance collaboration in the field.”
Image caption: NIST’s fingerprint dataset SD 302 includes 10,000 fingerprint images, including this one from the sticky side of a postage stamp. The dataset is now fully annotated, with details such as the colorized regions shown on the right. The colors, which represent regions of differing quality, will help train both humans and machine learning algorithms to distinguish identifying features and weigh their importance as evidence. Credit: B. Hayes/NIST
Fingerprint misidentification (false-positive) rates in controlled studies are low, often cited at around 0.1% to 0.2%, but real-world errors do occur, particularly with complex, partial, or poor-quality latent prints. While experts rarely declare a wrong match under ideal test conditions, studies suggest higher error rates (3% to 20% or more in specific tests) when analyzing complex, non-matching prints.
A major 2011 study, archived by the National Institutes of Health (NIH), found a 0.1% false-positive rate. Another study found a 0.2% rate, with a single participant responsible for most of the errors. False negatives are more common, with error rates of roughly 7.5% to 10%.
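To put those percentages in concrete terms, here is a back-of-the-envelope calculation applying the quoted rates (0.1% false positives, 7.5% false negatives) to a hypothetical batch of 10,000 comparisons. The batch size is an assumption for illustration, and the calculation deliberately ignores the split between mated and non-mated pairs.

```python
# Back-of-the-envelope: expected errors in a hypothetical batch of
# 10,000 comparisons, using the false-positive and false-negative
# rates quoted in the studies above. Illustrative only; real error
# rates depend on print quality and the mix of mated/non-mated pairs.
def expected_errors(n_comparisons, fp_rate, fn_rate):
    """Return (expected false positives, expected false negatives)."""
    return n_comparisons * fp_rate, n_comparisons * fn_rate

fp, fn = expected_errors(10_000, 0.001, 0.075)
print(fp, fn)  # 10.0 750.0 -- false negatives dominate
```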
This collection of 10,000 fingerprints, fully annotated with details, will, no doubt, help train both human fingerprint examiners and AI tools.

