
‘A magnificent tool for understanding the human mind… a terrible tool for the courtroom’


A few months ago I posted a piece on the alarming resurgence in the use of lie detectors in the UK and the US. A new documentary looks at other neuroscience-based methods, involving brain scans, that have been proposed for use in the courtroom.


Following the screenings, a fascinating panel discussion took place at MIT’s McGovern Institute, where a panoply of professors debated the potential and the pitfalls of using brain scans for lie detection. Taking part in the discussion were:

Robert Desimone, Director of the McGovern Institute and the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences

Joshua D. Greene, the John and Ruth Hazel Associate Professor of the Social Sciences in the Department of Psychology at Harvard University

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience in the Department of Brain and Cognitive Sciences and a founding member of the McGovern Institute

Bea Luna, the Staunton Professor of Psychiatry and Pediatrics, Professor of Psychology, and Director of the Laboratory of Neurocognitive Development at the University of Pittsburgh School of Medicine

Stephen J. Morse, the Ferdinand Wakeman Hubbell Professor of Law and Associate Director of the Center for Neuroscience and Society at the University of Pennsylvania

The debate considers the fact that the current literature is based on somewhat synthetic experiments. Some positive findings have emerged, but participants had little at stake, which severely limits what the findings can tell us about real-world deception. Prof. Nancy Kanwisher sums up her opinion:

“I think it doesn’t really work. I think it may be that someday with advances that we can’t imagine now, it is remotely possible that could happen, but right now, nobody has shown anything even remotely close to real lie detection. What they have shown and probably (gestures) what he picked up on, on this scan is that if you ask people to say something that’s not true versus something that’s true, it takes a little bit more mental effort to say something that’s not true. That produces very systematic activations in the brain that can be replicated, there is extra mental activity, but it is not diagnostic mental activity. If he had done this experiment and said ‘we think that you took that ring and we are going to test and find out if you are innocent or not and what we are going to do is we are going to find if you respond differentially, during that time’, in other words if you were suspected and if a lot was at stake, it is quite possible that all of that same brain activity would happen.”

Kanwisher goes on to state how “for this to be of any use in the real world you would need to test under those circumstances that could make all the difference in brain scanning. That is major stakes, not fifty bucks in a psychology experiment but life imprisonment. You need to test it where the person believes that this scan could determine their fate. You would need to have a gold standard to determine later if they were telling the truth. I can’t imagine how you would ever do that experiment”.

The discussion continues along the same fascinating path, exploring ways to experiment with real lies rather than artificial ones – ways to get people to lie of their own accord in an experimental setting, rather than instructing them to lie (in which case they would probably respond very differently), a major flaw in much of the literature. One approach is to give people a way to cheat in a game and then look at those with impossibly high scores. Methods such as this still only create minor lies, however, so they do not answer Kanwisher’s criticism. We hear how prefrontal activity, which has been linked to making a special mental effort, increases when people tell an instructed falsehood. However, just as much activity was found when people admitted cheating as when they lied – a gaping hole for practical use.

Another stumbling block is that the majority of fMRI research on lie detection has been done at the group level rather than at the level of the individual. A comparison that is given is that the average height difference between men and women does little to tell us whether a particular short or tall person is a man or a woman. There is so much variation in height within each sex that, even though there is a significant difference between the averages, you cannot separate the signal from the noise with any certainty.

Thankfully, US judges have rejected fMRI-based lie detection evidence in the handful of cases where it has so far been brought to trial. But the next question is whether brain scans can be used to determine culpability – a person’s state of mind when committing a crime. This is a fascinating discussion, which opens up many more metaphorical cans of worms and unanswered questions that will keep you up at night.

You can watch the full documentary below.

Part 1:

Part 2:

Panel discussion at MIT’s McGovern Institute:

Via @BrainsOnTrial. Image credit: PBS, Dana Busch


To keep up to date with this blog you can follow Neurobonkers on Twitter, Facebook, or RSS, or join the mailing list.

