What do a gambler, a doctor, and a juror all have in common?
Betting. Interpreting the odds to decide whether to make that last bet or count your blessings and call it a night.
Diagnosis. Using genetic testing to inform a patient’s lifetime risk of a disease and thus a suitable course of treatment.
Sentencing. Assessing the evidence to calculate the likelihood that a defendant is guilty.
All of these people use probability to make and inform important decisions.
Predicting the future: Gambling
Tarot readers and psychics aside, most of us accept we cannot predict the future. Nevertheless, we use probability in our daily life to determine how likely it is that something will, or will not, happen. Take the example of flipping a coin. Some prefer to call ‘heads’ and some ‘tails’, but there is a 50% chance of the coin landing on either side. Yet cognitive psychology reveals we are not objective when using probability to make predictions. Now suppose the coin is flipped three times and each time lands on heads – whilst this is unlikely, it is far from impossible. If you had to bet £100 on the next toss, which side would you choose?
The odds remain 50:50. It does not matter which side the coin landed on before. However, a classic study by Tversky and Kahneman (1971) found people believed that tails was more probable after a run of three heads. This was coined (pardon the pun!) the “gambler’s fallacy” and highlights our naive belief that a change of fortune is due after a run of one outcome – which may well be a gambler’s downfall! Daily life is littered with sayings such as: “You wait all day for a bus and then two come at once!”. Whilst improbable things happen more often than we realise, sayings like this betray a hidden belief in karma, which may undermine our ability to be rational decision makers.
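A quick simulation makes the point concrete. The sketch below (a minimal illustration; the number of flips and the run length are arbitrary choices) checks the flip that immediately follows a run of three heads: it still lands heads about half the time, because the coin has no memory.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Simulate a long sequence of fair coin flips.
flips = [random.choice("HT") for _ in range(1_000_000)]

# Collect every flip that immediately follows a run of three heads.
after_hhh = [flips[i + 3] for i in range(len(flips) - 3)
             if flips[i:i + 3] == ["H", "H", "H"]]

# The heads rate after HHH stays close to 50% – no "due" tails.
rate = after_hhh.count("H") / len(after_hhh)
print(f"P(heads after HHH) ≈ {rate:.3f}")  # close to 0.5
```

Note that runs of three heads themselves are common: roughly one in eight windows of three flips produces one, so "unlikely" streaks happen constantly in a long enough sequence.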
Diagnosing the present: Doctors
Imagine you are a doctor screening a woman for breast cancer. You find a lump, and based on your experience estimate the probability of it being cancerous at just 1%. Nevertheless, modern technology can help identify whether or not it is indeed cancer, and inform treatment. So you send her for a test, which correctly classifies roughly 80% of cancerous, and 90% of benign (non-cancerous), tumors. The result is positive: the test classifies the lump as cancerous. What is your estimate of the overall probability of the lump being cancerous, given your initial estimate and the reliability of the test result?
Researchers asked doctors to solve this problem (Eddy, 1982). Ninety-five out of 100 estimated the probability of the lump being cancerous at around 75%. They were way off! The correct answer is drastically lower, at 7-8%. The doctors assumed the chance of the woman having cancer given a positive result was equal to the chance of a positive result if she actually had cancer, and so inflated their estimates. In clinical settings, where life and death decisions must be made, probabilistic errors are no laughing matter.
This mistake is known as “confusion of the inverse”. In order to understand why the correct answer should only be 7-8% we need to do some maths! According to Bayes’ theorem – a subtle rule of probability – the correct way to estimate the odds of cancer given a positive test result is as follows.
Probability of cancer given that the test result is positive:

p(cancer|positive) = p(positive|cancer) p(cancer) / [p(positive|cancer) p(cancer) + p(positive|benign) p(benign)]

where:

p(cancer) = the original estimate of a 1% probability of cancer = .01
p(benign) = the remaining 99% probability of not having cancer = .99
p(positive|cancer) = an 80% chance of a positive test result given cancer = .80
p(positive|benign) = a 10% chance of falsely identifying a benign tumor as malignant = .10

Plugging in the numbers:

(.80)(.01) / [(.80)(.01) + (.10)(.99)] = .008 / .107 ≈ .075, i.e. roughly 7.5%

Adapted from: Plous (1993)
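The arithmetic above can be checked with a few lines of Python, using exactly the numbers from the cancer-screening problem:

```python
# Bayes' theorem applied to the cancer-screening problem:
# a 1% prior, an 80% true-positive rate, a 10% false-positive rate.
p_cancer = 0.01            # prior: p(cancer)
p_benign = 0.99            # p(benign) = 1 - p(cancer)
p_pos_given_cancer = 0.80  # p(positive | cancer)
p_pos_given_benign = 0.10  # p(positive | benign), the false-positive rate

# Posterior = likelihood * prior, normalised over both ways
# a positive result can arise (true positive or false positive).
numerator = p_pos_given_cancer * p_cancer
denominator = numerator + p_pos_given_benign * p_benign
p_cancer_given_pos = numerator / denominator

print(f"p(cancer | positive) = {p_cancer_given_pos:.3f}")  # ≈ 0.075
```

The posterior is low because the 10% false-positive rate applied to the 99% of benign lumps swamps the handful of true positives: most positive results come from benign tumors.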
Bayes’ theorem emphasises the importance of paying attention to the ‘prior probability’ – the best estimate of a probability before a new piece of information is known. In the cancer problem, the initial estimate was very low at only 1%, but the doctors allowed this to become contaminated by the new information provided by the test, forgetting that it was only 80-90% accurate. If something is extremely probable or improbable, it is important not to revise that estimate too heavily on the basis of an unreliable piece of information.
Establishing the past: Jurors
Jurors must weigh different strands of evidence and integrate them into a whole to estimate how likely it is that a suspect “dun it”. Increasingly, cases rely on forensic evidence, including DNA samples, which requires an assessment of probabilities. Several problems with the way people interpret such evidence have been identified. Suppose a DNA sample from the perpetrator matches 1 in 1,000,000 individuals at random. If a match is found for a suspect, what are the odds that the suspect is innocent?
People confuse these two quantities, estimating the likelihood of innocence to also be 1 in 1,000,000. Lawyers call this mistake “the prosecutor’s fallacy”: confusing the odds associated with a piece of evidence with the odds of guilt. Experimental work also suggests people are sensitive to subtle changes in how probabilistic evidence is presented. Dr Itiel Dror, a cognitive neuroscientist from University College London, explains: “I could say you have a 1/100 or .01 chance of dying from some risk factor – people do not weight these as equal, even though they are the same information presented in different ways. Jurors are affected by evidence in this way.”

In 2011, a judge ruled against presenting jurors with statistical evidence that calculates the odds of one event happening given the odds of other related events. For instance, forensic scientists use such theorems to establish how likely it is that a shoe-print left at the crime scene comes from a pair of trainers found at the suspect’s house, given how common that model of shoe is, its size, how worn down the sole is, etc. However, lay people do not understand this logic, easily becoming confused by the complex maths underlying it, enhancing the risk of poorly informed decision making.
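The gap between a match probability and the probability of guilt is easy to see with rough numbers. The sketch below uses the 1-in-1,000,000 figure from the text; the population size is a hypothetical assumption added for illustration, and the calculation deliberately ignores all other evidence.

```python
# Prosecutor's fallacy sketch: a 1-in-1,000,000 match probability is
# NOT a 1-in-1,000,000 chance of innocence.
match_prob = 1 / 1_000_000
population = 10_000_000  # hypothetical pool of potential suspects

# Assume the true perpetrator is in the pool and matches for certain.
# Among the remaining innocent people, we still expect some matches
# purely by chance.
expected_innocent_matches = match_prob * (population - 1)

# Absent any other evidence, a matching suspect is one of roughly
# (1 + expected_innocent_matches) matching people, only one of whom
# is guilty.
p_guilty_given_match = 1 / (1 + expected_innocent_matches)

print(f"expected innocent matches: {expected_innocent_matches:.0f}")  # 10
print(f"p(guilty | match) ≈ {p_guilty_given_match:.2f}")              # 0.09
```

With ten innocent matches expected in a city of ten million, the DNA match alone leaves the suspect with only about a 1-in-11 chance of being the perpetrator – a far cry from 999,999-in-1,000,000 certainty of guilt.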
Use probability with caution
Estimating and using probability presents all kinds of daily dilemmas. What is the solution? Firstly, remember we are prone to using folklore to explain entirely probable occurrences. Secondly, pay attention to prior probability, such as the initial 1% cancer estimate, and do not be too heavily swayed by new evidence, which might lead to inaccurate updating of an estimate. Bayes’ theorem reminds us that when the new evidence is unreliable, the revised probability should not stray far from the prior. Finally, when evaluating probabilistic evidence, remember the devil is in the detail!
Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M.A. and F.R.S. Philosophical Transactions of the Royal Society of London, 53, 370–418.
Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and opportunities. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 249–267). New York: Cambridge University Press.
Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76(2), 105–110.