Interview with Dr Itiel Dror – Cognitive Bias: What Psychology Can Tell Us About Experts and Forensic Science
Dr Itiel Dror (Centre for the Forensic Sciences, University College London) holds a PhD in psychology from Harvard University. His research interests are wide-ranging, but he has specialised in human expertise and decision-making.
This interest in human experts, specifically in the forensic domain, where he has conducted empirical studies on bias in fingerprinting and other forensic disciplines, has earned him much attention. His work has been covered in Nature (18 March 2010) and The Economist (21 January 2012), and focuses on applying scientific knowledge and theoretical models of the human brain and mind to practical everyday problems.
He has translated this research into developing effective ways to improve human performance and decision-making in a number of domains.
LJ – Leila Jameel
ID – Dr Itiel Dror
LJ: Thank you for agreeing to talk to The Transparent Psychologist! I guess we should start by talking about your background. I understand that you have done a lot of different work in several domains examining expert performance…
ID: On the face of it, it looks as if I have worked in many different domains: with forensic experts, frontline police, in the military and medical sectors, with pilots and aviation personnel, and others. But for me it is all the same, in the sense that I am a cognitive person. What I am interested in is what makes an expert, in terms of how they perceive information and make judgments and decisions. I investigate what happens to the brain, and the way people think, when they become an expert. For instance, the content of the medical domain is very different from that of a pilot, but all of these expert domains are similar; they are human beings surrounded by a huge amount of information, some of it ambiguous, some of it missing or distorted. They all have to process this information and use it to make judgments and decisions. So my work has focused on understanding expert training and performance, which I have then applied to many different domains.
LJ: What started you in this line of work?
ID: That is a historical question! When I was doing my PhD, I had an idea about how pilots may process information differently. My PhD supervisor had contacts with the air force, so he told them about our ideas; they loved them, and invited me to stay on an airbase over the summer to collect data from pilots. That was the first step. And then I began to think: I wonder if this is different for medical doctors, for police officers, etc.? And I found a lot of similarities across these domains.
LJ: What was different about the pilots in terms of their brains, and/or how they processed information and made decisions?
ID: That is a complicated question… First of all, the pilots I was researching believed they were better at everything, but they were not! I found that there were certain areas in which they were indeed better than the average person, and in which they had special abilities. However, this effect was not found across all abilities, but for very specific ones, which I characterized in a number of papers. For example, I found that the pilots were better at certain types of spatial navigation tasks, but not others. The pilots performed better than average on tasks involving metric spatial relationships, but not on tasks involving categorical spatial relationships (Dror, Kosslyn, & Waag, 1993). To understand the nuances, you need to realize that cognitive abilities are not just memory, or problem solving, or decision-making. Each of these cognitive categories is broken down into many, many different kinds of cognitive processes, and a person can be better at one but not others. The question is what abilities are required to be better at a certain profession. So it is limited to say pilots need to be better at, or are better at, spatial navigation, because that encompasses a whole host of cognitive processes!
LJ: Your most recent work, and where you have focused a lot of your attention, is in applying cognitive psychology to the forensic domain. Here you have investigated the impact of ‘cognitive bias’ – what exactly is that?
ID: By ‘cognitive bias’ I do not mean stereotypes or prejudice, such as being racist or sexist. Rather, ‘cognitive bias’ refers to our inability to be entirely objective, which may manifest via several possible routes, such as perceptual distortion, inaccurate judgments, and illogical and/or irrational interpretations.
LJ: In a new paper (Kassin, Dror, & Kukucka, 2013) you outline several different problems with forensic science, suggesting it may not be as scientific or objective as it may appear to the layperson. You have investigated different cognitive factors that might be at play when a forensic scientist conducts their work, leading to cognitive bias. Can you tell me about that?
ID: There are a number of issues. First of all I wouldn’t say that I focus on forensic examiners. Yes, I have done a lot of work in the forensic domain, but I have also done a lot of work in medical and other domains.
The forensic domain is different in a number of aspects. Firstly, in the medical domain or aviation, it has been recognized for decades that the human factor is very important, and there has been a lot of research on medical decision-making, aviation decision-making, team work in aviation, etc. Historically, forensic science has not been investigated in this way; until recently there wasn't any research on the human element in forensic decision-making. When I started to look at this area ten years ago, the forensic community said "What? The human element is not relevant. What are you talking about? We are objective!" This mindset was very interesting, because in forensic science the human is the instrument of analysis. In most forensic areas there are no objective criteria; analysis is based on human experts examining different visual patterns of blood spatter, fingerprints, shoe prints, handwriting, and so on, and making subjective judgments. Until recently the forensic community ignored all the human elements. Initially, there was a lot of denial, and even resistance, because I was the first to start asking questions about the role of the human examiner in perceiving and interpreting information that is used to make decisions.
Secondly, forensic scientists present themselves as being scientists. A pilot or medical doctor testifying would never say, "This is science!" but rather "I cannot be 100% sure, but this is my conclusion, which is based on science". Ten years ago the forensic community were very naive about all of this, because the courts had accepted their testimony for over 100 years. For example, in fingerprint analysis (the most used forensic domain) examiners would say, "We are totally objective and infallible, we never make mistakes, we have a zero error rate", and the court accepted it, so they accepted it! When I started working in this area ten years ago it was initially very unpleasant, and there were some very angry people who did not like me saying that they were subjective and did not use objective criteria. Actually what I was saying is: you are a human being, and human beings make mistakes! Now it has changed quite a lot. So after a decade of climbing up a mountain and swimming against the current, progress has been made. But initially there was a lot of resistance, which at times became quite personal, even from the leaders of the community. For example, when I published one of my papers, the chair of The Fingerprint Society in the UK wrote a letter to the editor of the journal saying, and I quote, "We are totally objective, fingerprint examiners are never affected by context. If the fingerprint examiners are affected by context, if they are subjective, they shouldn't be fingerprint examiners, they should go and seek employment in Disneyland!"
LJ: Hahaha! Unbelievable.
ID: In a way you cannot blame them; they are forensic, not cognitive, scientists, and have been trained to think that they never make mistakes. Now most of the forensic community around the world (not all, there are still a few dinosaurs who don't get it) has accepted this and started to take steps to fix it. The judicial system has also taken it on board, and judges have become more sophisticated from a cognitive perspective in understanding it. A number of enquiries into the reliability and validity of the forensic disciplines have also been conducted.
LJ: In another recent paper (Dror, Kassin, & Kukucka, 2013) you make several recommendations of ways in which you think the field of forensic science can be improved. Firstly, as discussed, you state that the community should acknowledge the limitations and accept that there is an element of subjectivity. You also discuss that forensic examiners work in a way such that they try to build a case against a suspect, and thus do not have a balanced view from the outset. You then go on to consider more specific methods, such as blind testing, buffering examiners from irrelevant information about the case, and so on. There have been some objections to this. I am trying to put it all together to understand exactly what you think could or should be done, and why there is resistance to that.
ID: First of all, we need to bring awareness to the issue. But that alone is not enough; it is necessary but not sufficient. If people don't understand and acknowledge the limitations, they are not going to take steps. So we need to demonstrate it to them and explain it to them.
In a study we conducted several years ago, we gave fingerprint examiners the same pair of fingerprints twice, but put it in very different contexts (Dror & Charlton, 2006). In one context they believed the suspect was very likely to be guilty because he had confessed to the crime. Here the examiners found a match. In the other context, with the same fingerprint, they were led to believe that another suspect had confessed to the crime. Here the examiners did not find a match. We gave them irrelevant information and the same examiners changed their decision for the same fingerprint! Once they see this kind of research they begin to understand the cognitive architecture underpinning how the brain interprets information, and then to understand that we are all influenced by expectations, experience, etc. To the forensic scientist it is totally irrelevant who did or did not confess to the crime. So this kind of information should not be made available to them, thus buffering them from the irrelevant context, which may unintentionally bias their decisions.
So first they need to be on board with understanding the limitations. Once they understand and accept the problems, there are many measures that can be taken to minimize them. Some of them may never happen, for example separating forensic scientists from the police force. Today in the UK, forensic examiners are part of, and work for, the police. That already creates a certain context! So ideally forensic scientists would be separated from the police. If not, steps need to be taken to give them independence, such as ensuring that police detectives on the case do not have direct contact with the forensic examiners, so they cannot pressure and influence them, intentionally or not. They should not be considered part of the police; they are not there to help the police – they are scientists. Recently in the US, in Washington DC, all the forensic scientists have been taken out of the police force and into an independent body. In the UK, not only is this not happening, but independent forensic services have been closed down for economic reasons – it is going the opposite way.
These are a few examples of steps that can be taken, without costing a lot of money, to improve the quality of the service, so that experts are more objective and impartial, providing the courts with better information.
LJ: As well as the problems in conducting the science in the first place (i.e. the potential influence of contextual information and the issue of working with the police), forensic scientists also have to present their evidence to the courts. In your recent work you talk of the tension and the bi-directional relationship between these two components of forensic scientists' work.
ID: Yes, the judicial system is an adversarial system. So when forensic scientists are in court, although they are trying to be impartial, they are either part of the defense or the prosecution, who are trying to prove the suspect innocent or guilty. It is hard to be a scientist in an anti-scientific system. In a recent piece of research (Murrie, Boccaccini, Guarnera, & Rufino, in press), forensic examiners were sent identical files, but half were told they were working for the defense and half for the prosecution. The reports they produced differed depending on whether the examiner thought it was solicited by the defense or the prosecution. I am not accusing anyone of intentionally lying, but this information clearly puts one in a certain frame of mind or disposition, and so the brain sees what it wants, confirming that view. Science is a bit more complicated than one way or the other, innocent or guilty. Yet forensic examiners are pushed to be part of the two-sided judicial system and not to act as scientists. They can forget their role or get sucked into it. It is very hard to be immune to this different culture.
LJ: That must apply in many different areas where individuals act as expert witnesses to the courts?
ID: Yes. There is no question! I am looking at forensic science because forensic experts are very good, so I believe that if cognitive bias is true for them, it is likely to be true for many other domains. If one looked at other expert domains, the problems may be even bigger. I looked at DNA, the gold standard in forensic science; if these experts are affected by context, then you can be sure people who look at all kinds of evidence are too.
LJ: For forensic science it is also more shocking because it is presented as something concrete. Whereas with other things, say medics and clinicians, it is clear they are giving their professional opinion, which is based on science, but that it is still an opinion.
ID: Yes, forensic science has traditionally been misrepresented to the courts. A recent enquiry concluded that fingerprints are not a matter of fact, but of opinion. To make it worse, laypeople's – the jurors' – experience of forensic science is often drawn from CSI! CSI gives forensic science a Hollywood makeover; it does not represent reality! So if a forensic scientist comes and says "I have found a match", jurors do not doubt it – it is consistent with what they know – but what they know is a misrepresentation of what a forensic examiner really can and cannot do.
LJ: It is interesting that there have been objections, not just to acknowledging there is a problem, but also to seemingly simple suggestions, such as forensic scientists only seeing the relevant forensic sample rather than the whole case file. Where do you think this comes from?
ID: The reason people still object, although this is lessening, is because the police, the military, and the forensic sciences all have very strong cultures, and it is hard to change things that have been done the same way for many years. This is human nature! But things are changing, in terms of training, acknowledgement, regulation of standard operating procedures, etc., specifically to deal with cognitive bias.
LJ: You also say that forensic scientists should be wary of relying on technology, which I found surprising because we tend to think of technology as more objective.
ID: The influence of context and environment on human judgment and perception comes in many ways, of which technology is one. For example, in the forensic domain there is what is known as 'base rate bias'. If someone in airport security monitoring x-rays sits there all day and never sees a bomb, then they do not expect there to be a bomb, and so are biased not to see one. So a system has been developed which projects a bomb onto the x-ray screen; the screener must identify these, and it helps to keep them engaged. A similar thing happens in the forensic domain, where the technology used clearly indicates where the examiner should expect to see certain kinds of information. Thus they begin to expect to see this information, such that even if the information is not there they are more likely to see it, and if the information is somewhere else they are less likely to see it. We have run experiments finding that when expectation is high, perception is affected, which again introduces cognitive bias into the work of the forensic examiner (Dror & Mnookin, 2010; Dror, Wertheim, Fraser-Mackenzie, & Walajtys, 2012). You see the same problem in other areas where humans collaborate with technology.
LJ: In general, are there individual differences in susceptibility to cognitive biases?
ID: Yes there are, but this is an area for further research. Some people are more susceptible, but whether this is innate, or to do with personality or training, is not clear. There were some examiners in my research who were not affected by bias no matter what we did! What is it that makes these examiners more resistant? I do not know yet; we need to collect more data. So it is important to do more research in this area, but also to remember that if you do not know the context, you are not affected by it, regardless of your susceptibility!
LJ: Also researchers trying to quantify this are subjective human beings too!
ID: People always ask me at forensic conferences, "Are you biased?" I think they think I am going to say no, but my response is always "Of course I am biased! But I take measures to minimize it." For instance, when new drugs are being trialed, a placebo is used for comparison, in order to isolate the actual effect of the drug from the subjective context of being given a drug to make you feel better. Similarly, when I collect data, I don't analyse it. I ask one of my research assistants, who doesn't know what it is about, to analyse it, without providing them with any irrelevant information about my expectations, etc. This is what we call a blind procedure. We also employ inter-rater reliability checks to see if there is consensus across analysts. These are fundamental principles in scientific research, but forensic scientists have not been exposed to these counterbalancing measures to minimize bias.
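For readers unfamiliar with inter-rater reliability, the idea can be sketched in a few lines of code. Below is a minimal, hypothetical illustration (not from Dr Dror's research) computing Cohen's kappa, a standard statistic for agreement between two raters that corrects for agreement expected by chance; the two analysts' labels are invented for the example.

```python
# Hypothetical illustration of an inter-rater reliability check:
# Cohen's kappa between two analysts who independently code the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented data: two analysts blindly classify ten fingerprint comparisons.
a = ["match", "match", "no match", "match", "no match",
     "match", "no match", "no match", "match", "match"]
b = ["match", "match", "no match", "no match", "no match",
     "match", "no match", "no match", "match", "no match"]
print(round(cohens_kappa(a, b), 2))  # → 0.62
```

A kappa near 1 indicates strong consensus across analysts; values near 0 mean the agreement is no better than chance, which would flag the coding as unreliable.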
LJ: So maybe forensic scientists need some basic understanding of scientific design and psychology in their training?
ID: They need certain scientific and cognitive understanding. The problem is also with judges – how do they evaluate scientific data? The court is not a place to do science!
LJ: Well, I think our time is up! Thank you for your openness and frankness. It has certainly opened my eyes, and I am sure readers will find it fascinating.
Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56, 600–616.
Dror, I. E., Kassin, S. M., & Kukucka, J. (2013). New application of psychology to law: Improving forensic evidence and expert witness contributions. Journal of Applied Research in Memory and Cognition, 2, 78–81.
Dror, I. E., Kosslyn, S. M., & Waag, W. L. (1993). Visual-spatial abilities of pilots. Journal of Applied Psychology, 78(5), 763–773.
Dror, I. E., & Mnookin, J. (2010). The use of technology in human expert domains: Challenges and risks arising from the use of automated fingerprint identification systems in forensics. Law, Probability and Risk, 9(1), 47–67.
Dror, I. E., Wertheim, K., Fraser-Mackenzie, P., & Walajtys, J. (2012). The impact of human–technology cooperation and distributed cognition in forensic science: Biasing effects of AFIS contextual information on human experts. Journal of Forensic Sciences, 57(2), 343–352.
Kassin, S. M., Dror, I. E., & Kukucka, J. (2013). The forensic confirmation bias: Problems, perspectives, and proposed solutions. Journal of Applied Research in Memory and Cognition, 2, 42–52.
Murrie, D. C., Boccaccini, M. T., Guarnera, L. A., & Rufino, K. A. (in press). Are forensic experts biased by the side that retained them? Psychological Science.
Dr Itiel Dror’s Homepage:
For more information on bias in forensic science:
PBS ‘Frontline’ TV (USA): “Can Unconscious Bias Undermine Fingerprint Analysis?”