The forensic sciences play a critical role in society, helping to determine truth and justice. But they are still subject to human error, and the costs of those errors can be enormous. This has led to the rise of cognitive forensics, which engages with issues relating to human cognition and bias within the field of forensics.
Humans do not perceive the world in a literal and objective manner, but we often act as if we do. Our experience of reality is, in truth, an interpretive process influenced by our expectations, beliefs, attention, and past experiences. The distortions between perception and reality can relate to size, shape, sound, quantity, and color. And even when we are informed of our possible biases, we are unlikely to recognize them accurately in ourselves.
For forensic analysts, who are tasked largely with making perceptual judgments of similarity, these unconscious biases can lead to grave miscarriages of justice.
Cognitive science has already been integrated with several other high-risk fields such as medicine, air traffic control, and nuclear power. But a series of failures within the forensic community—many of which came into view with the advent of DNA profiling—have now demonstrated the need for cognitive research into forensic practices, too. The continued progression and acceptance of cognitive forensics can ultimately achieve broader and more accurate criminal justice outcomes.
The confounding nature of unconscious bias lies in the fact that it goes unnoticed by the person holding it. Forensic bias can manifest itself in many ways: in conflating expertise in one area with expertise in another; in an expert trusting their own “gut feeling” over the rigor of established processes; or in the subtle influence of extraneous information. In each case, general awareness of the existence of biases does not, in and of itself, prevent errors of bias. It can, however, lead to procedural reforms that reduce the frequency of such errors.
Many forensic analysts have a wealth of prior experience to draw upon. In some cases, this is an asset, as forensic analysts may be able to discriminate between, say, two sets of fingerprints with only a cursory glance. But human memory is also inherently flawed, and reliance upon it can lead to forensic errors.
Cognitive psychologists have found that memory errors occur at three different stages: during the recording of a memory (encoding); during the storage of a memory (storage); or during the recall of a memory (retrieval). At each stage, memory differs from the actual event in question. Failures in memory can cause experienced analysts to overlook critical details and overestimate their confidence in making a determination.
Research into cognitive forensics has demonstrated that forensic analysts should thoroughly and contemporaneously document the exact procedures and processes used within their casework, instead of relying on memory and past experience.
Human decisions are often based on a number of different inputs. One’s current mood, previous experience, and other peripheral information all play a role. Sometimes, these factors can improve one’s decision making. But in some cases, the presence of contextual information can lead to confirmation bias, wherein an investigator unconsciously seeks out and interprets information that conforms to pre-existing beliefs or expectations.
A 2006 study tested the potentially damaging nature of contextual information. The organizers of the study presented fingerprint analysts with fingerprints that each analyst had already judged to be positive matches—but the study organizers also included extraneous and contextual information that might have suggested the prints were not in fact matches. Four out of five analysts unwittingly reversed their initial, correct decision as a result.
One reform suggested by cognitive forensics is to withhold information that a forensic analyst should never need. A forensic analyst, for example, shouldn’t need to know the race of the victim or a suspect when analyzing a print from a crime scene; a hair analyst doesn’t need to know whether the suspect confessed or not.
Sometimes, however, an analyst will need to be exposed to potentially biasing information, and in those cases, a technique known as Linear Sequential Unmasking (LSU) should be applied. This means withholding potentially biasing information for as long as possible, allowing for segmented analysis. Such reforms need not be applied as a blanket policy; they can be tailored case by case. A crystal-clear fingerprint doesn’t need multiple independent evaluations, while a smudged print may benefit from added bias protections such as multiple analysts and/or LSU.
Confidence is a tricky thing. Jurors, for example, tend to believe that an expert witness’s level of confidence is indicative of their level of accuracy. Research, however, suggests that the correlation between confidence and accuracy is actually quite weak. Eyewitness testimony is notoriously less accurate than one might suspect; the Innocence Project estimates that faulty eyewitness testimony played a role in over 70 percent of wrongful convictions later overturned by DNA evidence.
Confidence is most indicative of accuracy at the time of the initial assessment. But as time passes, a process called “confidence hardening” begins, in which extraneous circumstances reinforce one’s belief in one’s original statement. Continued reliance on one’s level of confidence long after a judgment has been made is ill-advised.
It’s still early days for cognitive forensics, and some institutional forces have been resistant to the idea of second-guessing forensic processes. But acknowledging the existence of internal and institutional biases is not an admission of weakness; it’s a demonstration of growth and intelligence.
Accurate and systematic feedback is crucial for improving forensic analysis. Feedback is different from reinforcement: the former corrects behavior, while the latter merely solidifies it. Research has shown that experience alone is not enough to improve accuracy in areas like facial recognition. But when analysts are given trial-by-trial feedback, their accuracy does improve, and continues to improve even after the feedback ceases.
Forensic analysts should routinely analyze cases where a “ground truth” is known, so that their decision-making process can be examined after the fact. An intelligent and independent review of both training and experience can boost accuracy among forensic analysts.
Some cognitive forensics experts have advocated for putting the science back in forensic science; when too much focus is put on the investigative nature of the forensic profession, some of the scientific rigor can be lost. But forensic work cannot be isolated the way scientific work can, as it regularly touches on other areas of the investigation, such as trace evidence, case information, and reference materials. Both physical contamination and cognitive contamination are a risk. A commitment to applying sequential unmasking can mitigate, but not cure, these risks.
Meaningful progress in reducing bias within forensic analysis requires sustained engagement with cognitive forensics. Forensics institutions and forensics professionals must maintain a dialogue about how current practices and assumptions can be reformed to enhance the accuracy of investigations. By investigating itself thoroughly, forensics can take the next step toward true justice.
Matt Zbrog is a writer and freelancer who has been living abroad since 2016. His nonfiction has been published by Euromaidan Press, Cirrus Gallery, and Our Thursday. Both his writing and his experience abroad are shaped by seeking out alternative lifestyles and counterculture movements, especially in developing nations. You can follow his travels through Eastern Europe and Central Asia on Instagram at @weirdviewmirror. He’s recently finished his second novel, and is in no hurry to publish it.