Being Honest About Deception Detection: Between Popular Ideas and Scientific Evidence

Deception detection is at the heart of aviation security. With increasing pressure on practitioners to use methods that distinguish truth tellers from liars, it is more important than ever not to fall for common misconceptions about lie detection. Popular ideas are often misaligned with scientific evidence. Among these are the ideas that suspicious passengers can be detected through their behaviour, that technology will solve deception detection for good, and that some humans excel at lie detection. Bennett Kleinberg and Bruno Verschuere discuss these ideas and illustrate the difficulty of deception detection in an airport security context.

Few topics touch the core of aviation security and policing as much as deception detection. With many airports serving several hundred thousand passengers a day and an omnipresent threat of terrorist attacks, the need for good deception detection is greater than ever before. If it were known, through deception detection, whether a prospective passenger held malicious intent, many of the airport security processes currently in place would be obsolete. This article discusses common questions around deception detection: Is it possible to spot liars, perhaps through technology or through highly skilled individuals? Can we train people to become better at lie detection? And would high accuracy solve all deception detection problems?

Popular Idea #1:
Behaviour reveals deception

There is a widely held belief that liars exhibit behaviour that differs from that of someone telling the truth. Studies across the world have shown that people – laypersons, security professionals and police officers – reason that liars can be identified by specific behaviours such as bodily movements, fidgeting, sweating, micro-expressions and eye movements, to name but a few. These ideas may be based on the notion that we are less conscious of our non-verbal behaviour and that it is more difficult to control how we behave than what we say.1 The popular idea is that liars ‘leak’ non-verbal cues that give away their lie.2 This is particularly appealing for airport security because it suggests that we could train people to spot individuals with potentially malicious intent through their behaviour. Despite its appeal, there is little scientific evidence that non-verbal cues strongly signal deception. Hundreds of cues have been studied, ranging from eye movements to tremors in the voice, with the conclusion that the vast majority of these cues are not diagnostic.3 And for the cues that do seem to differentiate between liars and truth tellers, the differences are small and hard to replicate.4

“…hundreds of cues have been studied, ranging from eye movements to tremors in the voice with the conclusion that the vast majority of these cues are not diagnostic…”

A possible explanation for the lack of support for non-verbal deception indicators is that they are not exclusively related to deception. Consider the following scenario: a passenger arrives at the security control, nervously avoids eye contact, has sweat on his forehead and shifts impatiently from one foot to the other. While this behaviour could indicate that the person is hiding something, it is equally plausible – and, in fact, this is what research suggests – that the person is displaying this behaviour for reasons unrelated to deception. They might be anxious about missing a flight, might have had a stressful day at work or might simply be nervous about the very process of the security checks.

Perhaps, then, those non-verbal cues should not be used to diagnose deception directly, but merely as a starting point for further examination. This approach is the basis of ‘behaviour detection techniques’, which typically involve officers trained in one of the many behaviour detection frameworks ‘walking the line’ at airports, train stations or other public spaces. The purpose of that activity is to spot individuals who allegedly behave suspiciously, who are then singled out for further investigation (e.g. investigative interviewing). The problem with using behaviour only as an initial selection criterion is that it does not resolve the underlying issue: the behavioural indicators used for the initial selection remain unsupported and invalid from a scientific perspective.

A large-scale experiment pitted behavioural observation techniques against techniques focused on the actual content of what someone said. Participants in that experiment were given a fake identity and a cover story before being instructed to try to cross the border at an airport security checkpoint. Some border control agents used a behaviour detection technique while others engaged in investigative interviewing aimed at eliciting information that could, for example, reveal inconsistencies in the alleged travel plans. Although the stakes in this study were clearly different from those of someone aiming to hijack or bomb a plane, the results indicated that behavioural observation techniques (3% of passengers with a fake identity detected) were not useful for deception detection at airports and were outperformed by investigative interviewing methods (66% of passengers with a fake identity detected).5

Pinocchio’s nose does not exist: behaviours that may accompany lying may also be displayed by truth tellers, and this holds for micro-expressions too.6,7,8 Behaviour detection techniques, in particular, have been heavily debated in the scientific community and are criticised for lacking scientific support.

Popular Idea #2:
Technology will solve deception detection

Recent years have witnessed leaps in technology and computational sophistication. In particular, the area often broadly termed ‘artificial intelligence’ has fundamentally changed the impact of technology on our everyday lives. From text prediction software on smartphones to self-driving cars and facial recognition software, machine judgments are becoming increasingly accurate and will likely continue to do so as more data is collected for self-learning systems.

It is not surprising, then, that we have also seen a surge in attempts to detect deception through technology, often with promises of high detection accuracy. These include interviews with virtual border agents, thermal imaging approaches and voice analyses. While embracing technology to address the challenge of deception detection is a laudable effort, the devil is in the detail.

A common pitfall is the category mistake: it seems intuitive to argue that if we can build self-driving cars and self-learning systems that beat human experts in a range of areas (e.g. complex video games), we should also be able to rely on technology for deception detection. There is merit to this idea, but it is important not to intermix two distinct categories of problems. Self-driving cars essentially rely on mapping a perceived environment (e.g., telling a lamp post from a human) to a number of actions to be taken (e.g., decelerating or changing lanes). Problems around human behaviour – among which deception features prominently – belong to a different category: the potential variables affecting an outcome are practically unlimited, which makes deception detection a complex problem. Consider earthquake prediction as an analogy: even with the most sophisticated forecasting systems and unequalled computational power, earthquake prediction is not much better than it was 2,000 years ago.9 The problem might simply be too complex to be solved. It might be that deception detection is more similar to earthquake prediction than to self-driving cars.

As a consequence, purely technology-driven approaches are unlikely to help with deception detection because technology cannot compensate for flawed theory. To date, most technological applications of lie detection lack scientific corroboration and rigour.

Popular Idea #3:
Some of us are very good at detecting lies

One of the biggest misalignments between popular belief and scientific evidence concerns the ability of humans to detect deception. Whether some people are consistently able to tell lie from truth under some circumstances remains an ongoing area of research, but the current state of knowledge is clear: humans are very poor at telling lie from truth.10

“…purely technology-driven approaches are unlikely to help with deception detection because technology cannot compensate for flawed theory…”

But maybe some do better than others? People with presumed good deception detection skills have sometimes been called ‘lie detection wizards’. However, there is no good evidence that their apparent skill is more than a statistical artefact. When a large group of people is tested on a lie-truth task where 50% represents guessing level, variation around that level is expected simply due to measurement error: some will score exactly 50%, but most will score somewhat lower or higher. So, does a high score indicate measurement error or superior lie detection ability? To find out, high achievers must perform well again in a repeated test of the same skill. Someone highly skilled in deception detection would need to perform well above guessing level repeatedly and consistently. To date, no scientific evidence exists that there are people with that ability.
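The statistical point is easy to demonstrate. The following Python sketch (a toy simulation with made-up parameters, not data from any study) tests 1,000 people who are all purely guessing on a 20-item lie-truth task, picks out the apparent ‘wizards’, and retests them:

```python
import random

random.seed(1)  # reproducible toy example
N_PEOPLE, N_ITEMS = 1000, 20

def guess_score(n_items):
    """Number of correct lie/truth judgments when every judgment is a coin flip."""
    return sum(random.random() < 0.5 for _ in range(n_items))

first_test = [guess_score(N_ITEMS) for _ in range(N_PEOPLE)]
wizards = [i for i, score in enumerate(first_test) if score >= 15]  # 75%+ correct by luck

print(f"Apparent 'wizards' on test 1: {len(wizards)} of {N_PEOPLE}")
retest_scores = [guess_score(N_ITEMS) for _ in wizards]
print(f"Their mean accuracy on retest: {sum(retest_scores) / len(retest_scores) / N_ITEMS:.0%}")
# The high scores do not replicate: on the retest, the 'wizards' fall back to ~50%.
```

Because pure chance produces some high scorers in any large sample, a single impressive score is not evidence of skill; only repeated above-chance performance would be.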

It is also important to differentiate between situations where one has a solid baseline and those that require a one-shot decision. Someone might be able to identify lies in their long-time friend because there is a rich baseline to rely on, and past hunches about lies were confirmed or falsified through a repeated feedback loop. Compare this to a situation where a security professional has a brief interaction with a passenger they have never met. The security situation lacks a baseline of that passenger’s normal behaviour, and even with highly contextualised knowledge (e.g., lies from a long-time friend), the evidence base for better-than-chance deception detection accuracy is scarce.

A popular response to poor human deception detection skills is that appropriate training will solve the problem. Humans can excel at a range of tasks – often as a function of sufficient training – so surely this must also be true for the ability to detect lies? As yet, no studies have shown that practice has a major effect on someone’s ability to identify lies and truths. On the contrary, several meta-analyses have concluded that there is no difference between laypersons and trained experts and that training effects, if any, are typically small, especially for methods that look at non-verbal behaviour.11 Interestingly, experts do differ from novices in one dimension, namely their confidence in using the right deception detection cues: contrary to novices, experts report a firm conviction that they know the right signs to look for to detect liars, albeit with no scientific support for these signs.12

Is deception detection at airports doomed to fail?

The very context of airport security is a statistically unforgiving one. Consider the following thought experiment:

A company has produced a screening method that correctly identifies 99% of terrorists and 99% of ordinary passengers. You use the tool on all passengers going through security in your shift, during which you screen 1,000 passengers. Before your shift starts, an intelligence source confirms that one potential terrorist is coming to your checkpoint. A young man goes through the security check, and the screening alarm goes off. What are the chances that this person is the terrorist?

Many people intuitively respond “99%” or a similarly high number and consider the tool’s decision as likely correct – after all, it only makes a mistake in 1% of cases. However, there is a catch. Because only 1 in 1,000 people is a terrorist in the example, 1% of mistakes is still a large number (here, 1% of 999 non-terrorists = 10 passengers). Thus, the alarm will go off 11 times (for 10 ordinary passengers and 1 terrorist); hence the chances that the young man is the terrorist when the alarm goes off are merely 1 out of 11, or roughly 9%.
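The arithmetic is a direct application of Bayes’ rule, as the following minimal Python sketch shows (the function name posterior_given_alarm is ours, purely for illustration):

```python
def posterior_given_alarm(base_rate, sensitivity, specificity):
    """Probability of being a true positive given that the alarm went off (Bayes' rule)."""
    true_alarms = sensitivity * base_rate                # terrorists who trigger the alarm
    false_alarms = (1 - specificity) * (1 - base_rate)   # ordinary passengers who trigger it
    return true_alarms / (true_alarms + false_alarms)

# The thought experiment: 1 terrorist among 1,000 passengers, 99% accuracy both ways.
p = posterior_given_alarm(base_rate=1 / 1000, sensitivity=0.99, specificity=0.99)
print(f"P(terrorist | alarm) = {p:.0%}")  # ~9%, i.e. about 1 out of 11 alarms
```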

This paradox is known as the ‘base rate fallacy’ and puts the detection of deception in an applied setting in a challenging position: even highly accurate methods cannot succeed without producing vast numbers of false positives (i.e. ordinary passengers for whom the alarm goes off). This is highly undesirable for two reasons. First, the airport falsely suspects passengers who have no malicious intent whatsoever, which adds to passenger dissatisfaction with security processes. Second, security professionals waste time on innocuous people, thereby diverting resources away from security work where they could be used more effectively. Note that an accuracy of 99% is closer to science fiction than to empirical evidence. Current state-of-the-art tools typically produce accuracy rates in the 65-80% range13, implying a high number of both false positives and false negatives (e.g., missed potential terrorists).
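Plugging a value from that more realistic range into the same illustrative function (assuming, for simplicity, 75% sensitivity and 75% specificity) makes the point even more starkly:

```python
p = posterior_given_alarm(base_rate=1 / 1000, sensitivity=0.75, specificity=0.75)
print(f"P(terrorist | alarm) = {p:.2%}")  # ~0.30%: roughly 333 false alarms per true hit
```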

All current approaches lack a meaningful workaround to this problem and can therefore not succeed in an applied, low base rate context. Potentially, stepwise decision-making systems that use valid cues might be the way forward; as yet, whether and how such systems can work is an open research question.
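One way to think about such a stepwise system is as sequential Bayesian updating, where the posterior after one screening stage becomes the prior for the next. The sketch below reuses the illustrative function from the thought experiment above and assumes – unrealistically – two statistically independent stages of 80% accuracy each:

```python
prior = 1 / 1000  # base rate before any screening
for stage, (sensitivity, specificity) in enumerate([(0.80, 0.80), (0.80, 0.80)], start=1):
    prior = posterior_given_alarm(prior, sensitivity, specificity)
    print(f"After stage {stage}: P(threat | flagged) = {prior:.2%}")
# Even two independent 80%-accurate stages leave the posterior below 2%,
# illustrating why it remains an open question whether such systems can work.
```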

“…it is safe to assume that self-proclaimed deception detection wizards are no better than others…”

An Outlook

Decades of attempts to detect who is lying and who is telling the truth – ranging from chewing rice to computerised tasks to brain scans – have shared one conclusion: deception detection is very hard.

To date, the evidence for behavioural approaches to deception detection is weak, and the methods in use are generally unsupported by scientific evidence. Moreover, despite their appeal, technological advancements might not improve deception detection rates because the problem could simply be too complex to solve with very high accuracy. Finally, research suggests that deception detection is a problem where human expertise adds little to accuracy, and it is safe to assume that self-proclaimed deception detection wizards are no better than others.

The above is not to say that no scientifically supported deception detection methods exist. A range of controlled studies have found support for methods that rely on the content of people’s statements to discern liars from truth tellers14, and computerised tasks using reaction times show promise in identifying concealed information15,16. However, all of these methods (i) require time, because they rely either on extracting a detailed statement from passengers or on fine-tuned computer tasks, and (ii) have clear boundary conditions, such as the deception domain (e.g. lies about the past vs lies about the future) or the need for prior knowledge (e.g. detecting whether someone recognises something vs detecting what someone knows).
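To give a flavour of the logic behind the reaction-time-based approach: a concealed information test compares response times to critical ‘probe’ items (e.g., details of the examinee’s claimed identity) with response times to plausible but irrelevant alternatives, because recognition tends to slow probe responses down. Below is a minimal sketch with made-up numbers – an illustration of the underlying logic, not the scoring procedure of any validated test:

```python
from statistics import mean, stdev

# Hypothetical reaction times (milliseconds) from one examinee.
probe_rts = [612, 655, 598, 640, 671, 630]       # responses to critical identity details
irrelevant_rts = [521, 540, 515, 533, 508, 525]  # responses to foil identity details

# A simple standardised difference: how much slower are the probe responses?
slowing = mean(probe_rts) - mean(irrelevant_rts)
pooled_sd = (stdev(probe_rts) + stdev(irrelevant_rts)) / 2
print(f"Probe slowing: {slowing:.0f} ms ({slowing / pooled_sd:.1f} pooled SDs)")
# Markedly slower probe responses suggest the examinee recognises the probe
# information; similar response times across item types suggest they do not.
```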

Often, what seems intuitive or technologically advanced is misaligned with scientific evidence, leaving a void of evidence-based methods for deception detection at airports. Some misconceptions around deception detection persist and risk being implemented in airport security. We encourage more scientific research on the critical topic of deception detection and a focus on evidence-based tools in aviation security.


Bennett Kleinberg

Dr Bennett Kleinberg is Assistant Professor in Data Science at University College London. His research is on emerging crime problems, deception detection and integrated decision-making processes.


Bruno Verschuere

Dr Bruno Verschuere is Associate Professor of Forensic Psychology at the University of Amsterdam. He regularly gives workshops and lectures on lie detection and has published over 50 papers on deception and lie detection.

References
  1. Vrij, A. (2008). Detecting lies and deceit (2nd ed.). Wiley.
  2. Ekman, P. (2009). Telling lies: Clues to deceit in the marketplace, politics, and marriage. W. W. Norton & Company.
  3. DePaulo, B. M., et al. (2003). Cues to deception. Psychological Bulletin, 129(1), 74–118.
  4. Bond, C. F., et al. (2014). New findings in non-verbal lie detection. In Detecting Deception (pp. 37–58). John Wiley & Sons.
  5. Ormerod, T. C., & Dando, C. J. (2015). Finding a needle in a haystack: Toward a psychologically informed method for aviation security screening. Journal of Experimental Psychology: General, 144(1), 76–84.
  6. Porter, S., & ten Brinke, L. (2008). Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions. Psychological Science, 19(5), 508–514. https://doi.org/10.1111/j.1467-9280.2008.02116.x
  7. Hartwig, M. (n.d.). Telling lies: Fact, fiction, and nonsense. Psychology Today. Retrieved April 4, 2019, from http://www.psychologytoday.com/blog/living-single/201411/telling-lies-fact-fiction-and-nonsense-maria-hartwig
  8. Luke, T. J. (2018). Lessons from Pinocchio: Cues to deception may be highly exaggerated. https://doi.org/10.31219/osf.io/xt8fq
  9. Silver, N. (2015). The signal and the noise: Why so many predictions fail – but some don’t. Penguin Books.
  10. Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234.
  11. Hauch, V., et al. (2016). Does training improve the detection of deception? A meta-analysis. Communication Research, 43(3).
  12. Bogaard, G., et al. (2016). Strong, but wrong: Lay people’s and police officers’ beliefs about verbal and nonverbal cues to deception. PLoS ONE, 11(6).
  13. e.g., Hauch, V., Blandón-Gitlin, I., Masip, J., & Sporer, S. L. (2015). Are computers effective lie detectors? A meta-analysis of linguistic cues to deception. Personality and Social Psychology Review, 19(4), 307–342. https://doi.org/10.1177/1088868314556539; Vrij, A., Fisher, R. P., & Blank, H. (2017). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1), 1–21. https://doi.org/10.1111/lcrp.12088
  14. Hauch, V., Sporer, S. L., Masip, J., & Blandón-Gitlin, I. (2017). Can credibility criteria be assessed reliably? A meta-analysis of criteria-based content analysis. Psychological Assessment, 29(6), 819–834. https://doi.org/10.1037/pas0000426
  15. Suchotzki, K., Verschuere, B., Van Bockstaele, B., Ben-Shakhar, G., & Crombez, G. (2017). Lying takes time: A meta-analysis on reaction time measures of deception. Psychological Bulletin, 143(4), 428–453. https://doi.org/10.1037/bul0000087
  16. Verschuere, B., & Kleinberg, B. (2016). ID-Check: Online Concealed Information Test reveals true identity. Journal of Forensic Sciences, 61, S237–S240. https://doi.org/10.1111/1556-4029.12960