
The subconscious mercenaries


In September I covered a paper describing the substantial bias introduced into the legal system in parts of the US where forensic laboratories are paid in return for conclusions that lead to guilty verdicts. Another recent paper, published in Psychological Science, has found that striking levels of bias can occur even when money is not explicitly involved. The study, titled “Are Forensic Experts Biased by the Side That Retained Them?”, led forensic psychologists and psychiatrists to believe they were consulting for either the defence or the prosecution in an ongoing criminal case. The cases were real, but they were historical cases that had already been decided. All 108 psychologists and psychiatrists scored the same four case files for psychopathy, using Hare’s Psychopathy Checklist and the Static-99R, a test used to predict sexual recidivism in sex offenders. The researchers found that the experts displayed allegiance effects, with their conclusions clearly differing depending on whether they believed they were working for the prosecution or the defence.


The result is particularly striking because the participants in the experiment spent only a quarter of an hour with the retaining attorney, whereas in real life experts can expect ongoing contact with retaining attorneys, as well as interactions with the offenders themselves, which might increase the allegiance effect further.

The findings have implications far beyond forensic psychology and psychiatry, as a wide range of forensic sciences have been found to be highly subjective when brought into the courtroom. A 2009 report by the National Academy of Sciences (NAS) questioned the reliability of practices such as matching bite marks, hair fibres, tire treads, blood stain pattern analysis, ballistics, handwriting comparisons… the list goes on. When forensic science comes into the courtroom, bias can have a big impact.

A fascinating new ProPublica/PBS Frontline documentary (which can be watched in full online) examines cases in which forensic evidence has been responsible for wrongful convictions. We hear how forensic examiners in such cases have routinely testified that fingerprints are “infallible”, that they can be matched with 100% certainty, and that there is a “zero percent” chance that they could be wrong.

This can no longer be said after the landmark Brandon Mayfield case, in which fingerprint evidence endorsed by experts at the top of their field was overturned. Fingerprint testing is regarded as one of the most reliable of all the forensic sciences after DNA testing. The mantra has since changed, but there is still no bona fide scientific standard that must be met for fingerprints to be considered a match. Unlike on CSI, machines do not make the final critical match; ultimately, people do, and these people are vulnerable to bias just like anyone else.

In one study (Dror & Charlton, 2006), two thirds of forensic examiners changed their minds when they were presented with confessions alongside fingerprints. Admittedly the study was small, with only six participants, but the evidence is compelling because of the study design: professional, competent, certified, full-time fingerprint examiners were deceived into unwittingly making different judgments about the very same evidence from real criminal cases that they themselves had judged in the past. Remarkably, a couple of inconsistent decisions were even made in the control condition, where there were no confessions to sway their minds.

If you’re anything like me, alarm bells will have rung at the words in the previous paragraph: “presented with confessions”. The literature on false confessions is truly fascinating: 25% of the wrongful convictions recorded by the Innocence Project have been found to involve false confessions. To try to understand this phenomenon, we’ll begin with the 1630 plague of Milan:

“One Mora, who appears to have been half a chemist and half a barber, was accused of being in league with the devil to poison Milan. His house was surrounded, and a number of chemical preparations were found. The poor man asserted, that they were intended as preservatives against infection; but some physicians, to whom they were submitted, declared they were poison. Mora was put to the rack, where he for a long time asserted his innocence. He confessed at last, when his courage was worn down by torture, that he was in league with the devil and foreign powers to poison the whole city; that he had anointed the doors, and infected the fountains of water. He named several persons as his accomplices, who were apprehended and put to a similar torture. They were all found guilty, and executed. Mora’s house was rased to the ground, and a column erected on the spot, with an inscription to commemorate his guilt.”

What happened next in Milan in the 1630s gives us reason to doubt our assumption that only torture could lead someone to confess to something as utterly absurd as conspiring with the devil to poison their city:

“The number of persons who confessed that they were employed by the Devil to distribute poison is almost incredible. An epidemic frenzy was abroad, which seemed to be as contagious as the plague. Imagination was as disordered as the body, and day after day persons came voluntarily forward to accuse themselves. They generally had the marks of disease upon them, and some died in the act of confession”.

In a set of experiments conducted last year (Kassin, 2012), university students were wrongly accused of breaking the university honour code, either by ruining an experiment or by cheating in one. Even though they were innocent, 94% of participants wrote out a confession by hand. In the first condition, the students were accused of ruining the experiment and told that computer software would be able to detect whether they really had. Importantly, the experimenters did not claim to already know the participants were guilty, only that they would be able to check (the implication being that the computer would exonerate the innocent, just as those accused in real life may believe forensic evidence will exonerate them). In the cheating condition, half of the innocent students wrongly confessed after being told they had been monitored by hidden CCTV cameras (none had confessed before this bluff). In interviews after the experiment it became clear that most did not really believe they were guilty; instead, they confessed to escape the interrogation, expecting that the software or CCTV footage would later exonerate them.

These experiments may seem absurdly removed from reality, but in the real world the accused are sometimes put under tremendous pressure to confess and given massive incentives to plead guilty, from reduced sentences to escaping the death penalty. In America at least, the police are allowed to bluff, claiming to have evidence of guilt that is in reality completely fictional. In 1989, seventeen-year-old Marty Tankleff wrongly confessed to the murder of his own parents following a five-hour interrogation, after the police allegedly lied that his father had regained consciousness and accused him. Once again under debate today is Amanda Knox’s signed confession (transcribed by police), made after hours of police interrogation, implicating herself in the murder of Meredith Kercher.

Another study by the same author (Kassin and Wallace, 2011) found that 69% of 132 US judges said they would find a defendant guilty on the basis of a retracted confession extracted through a fifteen-hour interrogation in which the interrogators coerced the defendant by “screaming, threatening the death penalty, and waving a gun, all while refusing to accept his claims of innocence”. This was despite the fact that the only other evidence in the hypothetical case was irrelevant or contradictory: a hair found on the victim’s body, thought to be the defendant’s but found to be inconclusive, and the victim’s stolen items, which police failed to find on the defendant’s property.

Eyewitness testimony has long been known to be extremely open to bias. In another recent psychology experiment, over half of the participants who had witnessed a staged theft changed their lineup selection to pick the confessor when they were informed of a confession (Hasel and Kassin, 2009). The point of all this is not that forensic scientists are for some reason especially fallible; they are just as fallible and open to bias as eyewitnesses, judges, police and any other human being. The fault lies not with them but with the system they work within, which allows them to receive all sorts of biasing information that can cloud the subjective judgments they are entrusted to make.

In an interview earlier this year, Dr. Itiel Dror described the backlash he faced in communicating the dangers of bias to the forensic science community:

“Historically, forensic science has not been investigated in this way, until recently there wasn’t any research on the human element in forensic decision-making. When I started to look at this area ten years ago, the forensic community said ‘What? The human element is not relevant. What are you talking about? We are objective!’ This mindset was very interesting, because in forensic science the human is the instrument of analysis. In most forensic areas there are no objective criteria, it is based on human experts examining different visual patterns of blood splatter, fingerprints, shoe prints, handwriting, and so on, and making subjective judgments. Until recently the forensic community ignored all the human elements. Initially, there was a lot of denial, and even resistance, because I was the first to start asking questions about the role of the human examiner in perceiving and interpreting information that is used to make decisions…

…Ten years ago the forensic community were very naive about all of this, because the courts had accepted their testimony for over 100 years. For example, in fingerprint analysis (the most used forensic domain) examiners would say, “We are totally objective and infallible, we never make mistakes, we have a zero error rate” and the court accepted it, so they accepted it! When I started working in this area ten years ago it was initially very unpleasant, and there were some very angry people who did not like me saying that they were subjective and did not use objective criteria. Actually what I was saying is you are a human being, and human beings make mistakes! Now it has changed quite a lot. So after a decade of climbing up a mountain and swimming against the current, progress has been made. But initially there was a lot of resistance, which at times became quite personal, even from the leaders of the community. For example, when I published one of my papers, the chair of The Fingerprint Society in the UK, wrote a letter to the editor of the journal saying, and I quote “We are totally objective, fingerprint examiners are never affected by context. If the fingerprint examiners are affected by context, if they are subjective, they shouldn’t be fingerprint examiners, they should go and seek employment in Disneyland!”

Defendants often cannot afford to hire forensic experts of their own, and if the latest research is anything to go by, the odds can be unfairly stacked against them. Forensic science is often viewed, both in the courtroom and by the lay public, as carrying more weight than other kinds of evidence, and it is precisely this overwhelming faith in forensic science that can be dangerous.

The recent criticism of forensic evidence concerns not only bias and reliability but also the lack of standards for accreditation. The ProPublica/PBS Frontline documentary slammed the ACFEI, the largest forensic science membership association in the US, after a journalism student researching the documentary managed to obtain certification by taking only an online test, despite having zero experience in the forensic sciences. The ACFEI refused to say what percentage of applicants pass its tests. John Bridges, the former president of the organisation, was prepared to speak, however, estimating the failure rate at less than 1% and stating that he quit because he couldn’t change the way the organisation does business. You’ll have to watch the documentary or read the transcript to find out more; the association is currently engaged in five defamation suits, and I don’t want to become the sixth.

It seems blindingly obvious (excuse the pun) that forensic examinations should be blinded; this, at least, now seems to be the position of the American Bar Association. Unfortunately, it is apparently not yet the practice across the board (Dror & Kukucka, 2013). Progress has been slow, and in the UK the situation may be even more severe. Dr. Dror explains:

“In the UK, forensic examiners are part of, and work for the police. That already creates a certain context! So ideally forensic scientists would be separated from the police. If not, steps need to be taken to give them independence, such as ensuring that police detectives on the case do not have direct contact with the forensic examiners, so they cannot pressure and influence them, intentionally or not. They should not be considered part of the police, they are not there to help the police – they are scientists. Recently in the US, in Washington DC, all the forensic scientists have been taken out of the police force and into an independent body. In the UK, not only this is not happening but also independent forensic services have been closed down for economic reasons – it is going the opposite way.”

Beyond blinding, Dror et al. suggest that forensic evidence should be examined in the form of an “evidence lineup” to protect against the base-rate assumption that an accused individual is likely to be guilty. As long ago as 1987, this method was found to reduce the false-positive rate in human hair identification from 30.8% to 3.8% (Miller, 1987).

In the courtroom, a false positive can translate into an innocent person behind bars, so it seems absurd that measures such as blinding and evidence lineups have not long been universal. The implications are particularly grave because of the phenomenon Dr. Dror has dubbed the “bias snowball effect” (or “corroboration inflation”), in which witnesses have been found to change their stories based on the presence of other forms of evidence, such as forensic evidence. In research settings it has long been expected that experiments will be blind or double-blind. In the courtroom, where so much is at stake, it is astounding that bias continues to be allowed into the building.

References

Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification56(4), 600.

Dror I.E. & Kukucka J. (2013). The forensic confirmation bias: Problems, perspectives, and proposed solutions, Journal of Applied Research in Memory and Cognition, 2 (1) 42-52. DOI: 

Hasel, L. E., & Kassin, S. M. (2009). On the presumption of evidentiary independence: can confessions corrupt eyewitness identifications? Psychological science, 20(1), 122–126. doi:10.1111/j.1467-9280.2008.02262.x (PDF)

Miller, L. S. (1987). Procedural bias in forensic science examinations of human hair. Law and Human Behavior, 11, 157–163.

Murrie, D. C., Boccaccini, M. T., Guarnera, L. A., & Rufino, K. A. (2013). Are forensic experts biased by the side that retained them? Psychological Science, 24(10), 1889–1897.

National Research Council. (2009). Strengthening Forensic Science in the United States: A Path Forward.

Wallace, D. B., & Kassin, S. M. (n.d.). Harmless Error Analysis: How Do Judges Respond to Confession Errors? Law and Human Behavior, 1–9. doi:10.1007/s10979-010-9262-0

Image Credit: Adapted from Flickr/Emmanuel Huybrechts

To keep up to date with this blog you can follow Neurobonkers on Twitter, Facebook, or RSS, or join the mailing list.


