The Transparent Psychologist

Bringing transparency to neuroscience, including recent advances in clinical psychology, human brain imaging and cognitive science. De-bunking myths and critically evaluating research. Exploring how the public interacts with neuroscience by examining popular coverage in the media.


How Scientists are Studying the Brain: A Multi-Method Review

Neuroscience is expanding rapidly. US President Barack Obama has launched a new initiative, ‘Brain Research through Advancing Innovative Neurotechnologies (BRAIN)’, estimated to receive up to $3 billion in investment; it is clear that President George Bush’s ‘decade of the brain’ has left a lasting imprint. So what exactly are scientists doing to try to understand the most complex organ known to exist? Technological advances arrive practically every week, so that even for someone within the field it is hard to keep track. Here I broadly break down and explain the different approaches to mapping and understanding the brain, linking biological and cognitive approaches.

 N.B. I do not by any means pretend this is an exhaustive list! It simply reflects an account of my own research into this area, which will hopefully give readers a flavour of the different approaches, and scale of development and innovation in this field.

Post-mortem Dissection

Historically, studies of the human body were derived from post-mortem dissections. This approach is still used in most anatomy teaching, including neuroanatomy, and has proved invaluable. My experience of brain dissection is that whilst it was a little ghoulish and odd, seeing and feeling a real 3D brain was really helpful in putting textbook-based information into context. However, it of course only allows us to examine the brain at a level visible to the naked eye, and within a dead organism. These two factors are extremely limiting when studying an organ commonly estimated to contain approximately 100 billion neurons (brain cells), particularly when trying to link structure to function.


A new technique developed in Brazil by Dr Suzana Herculano-Houzel has produced a more accurate estimate of the number of neurons in the human brain. Her research team took the brains of four neurologically healthy men and turned them into what has been described as a ‘brain soup’. The method involves dissolving the cell membranes of neurons and then taking a sample of the soup, allowing one to count the number of neuronal nuclei within the sample. This figure is then scaled up to calculate an estimate of the overall number of neurons in the brain. Whilst to many this may be a deeply disturbing idea, the scientists were able to reach a more precise estimate of 86 billion neurons. Furthermore, the method also dealt with the issue of different brain regions having more or less densely packed neurons: the soup created a homogeneous sample of neurons from a range of brain regions.
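The scaling-up step is simple arithmetic. Here is a minimal sketch of the idea; the sample counts and volumes below are hypothetical numbers I have chosen for illustration, not Herculano-Houzel's actual data:

```python
# Estimate total neurons by counting nuclei in a small, well-mixed sample
# of the dissolved-tissue suspension and scaling up to the full volume.

def estimate_total_neurons(nuclei_in_sample, sample_volume_ml, total_volume_ml):
    """Scale a per-sample nucleus count up to the whole suspension."""
    density = nuclei_in_sample / sample_volume_ml  # nuclei per ml
    return density * total_volume_ml

# Hypothetical figures: 215,000 nuclei counted in a 0.001 ml aliquot
# of a 400 ml suspension.
total = estimate_total_neurons(215_000, 0.001, 400)
print(f"Estimated neurons: {total:.2e}")  # → roughly 8.6e10, i.e. ~86 billion
```

Because the suspension is homogeneous, any aliquot is representative, which is exactly why the ‘soup’ sidesteps the problem of regionally varying packing density.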

So post-mortem brain dissection and research is alive and kicking, but now comes in all kinds of weird and wonderful forms!

Histological Studies

Histology refers to the study of the cells and tissues of plants and animals; this is achieved by sectioning and staining tissue and examining it under a microscope to reveal anatomy at the microscopic level.


Cytoarchitectonics refers specifically to the study of the arrangement of neuronal cell bodies in the brain and spinal cord. The first microscopic study of the human cerebral cortex (a thick layer of neuronal tissue that covers most of the brain and is associated with human evolution) is credited to the Viennese psychiatrist Theodor Meynert (1833-1892). In 1867 he noticed regional variations in the histological structure of different parts of the grey matter in the cerebral cortex.

! For non-neuroscience folks !

The brain is made of both grey and white matter. The cerebral cortex is comprised of grey matter consisting of neurons. The white matter lies in a layer below the grey matter, and consists of the myelinated axons that connect neurons with one another and with other parts of the central nervous system.

Korbinian Brodmann was a German anatomist who studied the cytoarchitectural organisation of neurons in the cerebral cortex. In 1909 Brodmann published maps of different cortical areas in humans. He used the cytoarchitecture of cells to distinguish brain regions from one another, and his map continues to be used widely in psychology and neuroscience when studying the structural localization of cognitive functions. Whilst his work was extremely impressive, technological advances have superseded those available to him at that time; better stains and more powerful microscopes are available, allowing scientists to study the brain in even more detail.


Moreover, Brodmann’s maps were based on visual analysis of the cells using a microscope. Recently scientists have begun to use statistical analysis and have developed quantitative criteria to redefine regional boundaries. These quantitative criteria involve measuring cell density within the grey matter, and how it changes between the surface of the cortex and the white matter layer. This information feeds into a sliding-window procedure, in which boundaries are placed where the cytoarchitectonic structure changes maximally.

See Amunts, Schleicher & Zilles (2007) for a more detailed read!
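The sliding-window idea can be sketched in a few lines. This toy version is my own simplification (not the actual Amunts et al. algorithm): it scans a one-dimensional cell-density profile and places a boundary where the difference between the window before and the window after a position is greatest:

```python
def find_boundary(density_profile, window=3):
    """Return the index where mean cell density changes most between
    the window just before and the window just after each position."""
    best_index, best_change = None, -1.0
    for i in range(window, len(density_profile) - window):
        before = sum(density_profile[i - window:i]) / window
        after = sum(density_profile[i:i + window]) / window
        change = abs(after - before)
        if change > best_change:
            best_index, best_change = i, change
    return best_index

# Synthetic profile: density steps from ~10 up to ~20 at index 6,
# mimicking a transition between two cortical areas.
profile = [10, 11, 10, 10, 11, 10, 20, 21, 20, 20, 21, 20]
print(find_boundary(profile))  # → 6
```

Real analyses work on 2D density profiles with statistical tests at each step, but the principle of boundaries at maximal cytoarchitectonic change is the same.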


Receptor Mapping

This technique examines the density of neurotransmitter receptors within different layers of the cortex, which can be useful for telling us about the structure of the cortex at a molecular level. Changes in receptor density can provide new criteria for a more detailed mapping of the human brain than can be achieved by cytoarchitectonics alone (Zilles et al., 1995). The density of neurotransmitter receptors varies significantly between different locations in the human brain, and this has been linked both to cytoarchitectonic or structurally defined boundaries and to the functional organisation of the cortex (Zilles et al., 2002). Zilles et al. (2002) compared data from various methods, including both cytoarchitectonic and post-mortem studies of the human brain, and found that areas of similar function show similar ‘receptor fingerprints’, and differ from those with other properties.

!Key definition!

Neurotransmitters are chemicals that transmit signals from a neuron to a target cell across a synapse. These chemicals are packaged into synaptic vesicles.

A synapse is the junction between two cells: the pre- and postsynaptic cell. In a chemical synapse, electrical activity in the presynaptic neuron is converted into the release of a chemical (a neurotransmitter) that binds to receptors located on the postsynaptic cell.

The neurotransmitter then initiates either an electrical response or a secondary messenger pathway, which either excites or inhibits activity in the postsynaptic cell.

Patient Studies

Research studying individuals with neurological and neurodevelopmental conditions, or those who suffer brain injury after a stroke or accident, has provided a cornerstone of modern psychology and neuroscience. By investigating the resulting deficits in such individuals, and linking these with the area of damage to the brain, broad localization of different neurological functions has been possible. Before neuroimaging techniques were developed (see section below), work of this nature was particularly important.

One particularly seminal case is that of patient “Tan”. Pierre Paul Broca (1824-1880) was a French physician, surgeon and anatomist. He is best known for his research on Broca’s area, a region of the frontal lobe that was named after him. In 1861, Broca met a patient who had a 21-year history of progressive loss of speech. The patient was able to understand, but not to produce, language, and was otherwise mentally competent. He was nicknamed “Tan” due to his inability to clearly speak any other words (Broca, 1861). When the patient died some days later Broca performed a post-mortem, and found a lesion in the frontal lobe of the left cerebral hemisphere. Broca went on to find post-mortem evidence from 12 more cases in support of the localisation of articulated language (Broca, 1861; Fancher, 1979).

Another classic neuropsychological case is that of Phineas Gage, an American railroad construction foreman – ask any psychology or neuroscience undergraduate, and they will no doubt be sick to death of hearing about this rather unfortunate fellow! On 13th September 1848, Gage became instantly famous for surviving a horrible accident in which a large iron rod was driven completely through his head, destroying much of his brain’s left frontal lobe. To the astonishment of the men Gage was working with, he was able to speak within a few minutes of the incident and to walk without assistance. Whilst he survived the injury, and initially seemed to have got away scot-free, profound changes to his personality and behaviour emerged over the following twelve years.

This often quoted passage from Dr John Martyn Harlow (who attended to his immediate care following the injury and subsequently published research papers about his recovery) highlights the scale of change observed:

“ The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities, seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operations, which are no sooner arranged than they are abandoned in turn for others appearing more feasible. A child in his intellectual capacity and manifestations, he has the animal passions of a strong man. Previous to his injury, although untrained in the schools, he possessed a well-balanced mind, and was looked upon by those who knew him as a shrewd, smart businessman, very energetic and persistent in executing all his plans of operation. In this regard his mind was radically changed, so decidedly that his friends and acquaintances said he was “no longer Gage”.”

This was the first time that changes to personality and behavior had been linked to brain damage.


Neuroimaging

Neuroimaging is a relatively new discipline within medicine, psychology and neuroscience. It includes the use of various techniques to directly or indirectly image the structure, function and connectivity of the brain.

Structural – MRI and CT

Structural imaging of the brain is typically achieved using magnetic resonance imaging (MRI). It allows us to visualise the internal structures of the body in detail – rather as an x-ray allows one to visualise bone. It provides good contrast between soft tissues, so is particularly useful when imaging the brain. Structural imaging is widely used in medicine for the diagnosis of gross intracranial disease, such as tumours, or brain injury.


An MRI scanner is a huge device in which a person lies within a large and very powerful magnet. The magnetic field is used to align the magnetization of nuclei in the body. Radio-frequency magnetic fields are applied to systematically alter this alignment, causing the nuclei to produce a rotating magnetic field that is detectable by the scanner. Magnetic field gradients cause nuclei at different locations to precess at different speeds, which allows spatial information to be recovered. This information is recorded and used to construct an image of the scanned area of the body. By using different combinations of gradients, 2D or 3D volumes can be obtained (Squire & Novelline, 1997).

Click here for information on CT (an alternative form of structural imaging)!

Functional – fMRI and PET

Functional MRI (fMRI) is an MRI technique that measures brain activity by detecting associated changes in blood flow (Huettel, Song & McCarthy, 2009). The assumption underlying this technique is that cerebral blood flow and neuronal activation co-occur: when a region of the brain is in use, blood flow to that area increases. The procedure is similar to MRI but uses the difference in magnetization between oxygen-rich and oxygen-poor blood as its measure. The resulting brain activation is typically presented graphically by colour-coding the strength of activation across the brain or region of interest.

fMRI has become a commonly used technique in brain research because it is safe and easy to use. fMRI scanners allow research participants to be presented with different visual images or sound stimuli, to which they can respond by pressing a button or moving a joystick. Consequently, fMRI can be used to reveal brain structures and processes associated with perception and cognition. It has good spatial resolution – accurate to about 2-3 millimetres at present. However, it is limited by poor temporal resolution, as there is a time-lag between neural activity and the increased blood flow response.

In neuroscience research fMRI has largely replaced positron emission tomography (PET). PET produces a 3D image of functional processes in the body using short-acting radioactive tracers. PET retains the significant advantage of being able to identify specific brain receptors associated with particular neurotransmitters, through its ability to image radiolabelled receptor “ligands” (receptor ligands are any chemicals that stick to receptors). As cerebral blood flow can be disrupted in many different types of brain pathology, fMRI can be difficult to use in a clinical setting. Thus, PET tends to be used more widely in the clinical domain.


Click here for more information on PET!

See my article ‘Neuromarketing: The Future’  for neuroethical concerns about the potential applications of fMRI!

Connectome – DTI

Whilst most neuroscientific research has focused on identifying specific regions for functions, increasing attention is being paid to brain connectivity, and thus the flow of information within and between regions. A new type of MRI called diffusion tensor imaging (DTI) allows the network of fibres, or white matter, to be systematically examined, permitting an analysis of the connections between the different cortical regions composed of grey matter.

DTI uses information generated by the MRI scan to establish the orientation of fibre tracts. These are then displayed in 2D images by assigning one colour to each orthogonal axis, creating a detailed map of the whole brain network. Using mathematical methods it is then possible to establish how strongly connected different regions are to one another.

DTI’s main clinical application has been in the study and treatment of neurological disorders. For instance, Professor Sachdev and colleagues have found that with ageing there is a reduction in the efficiency of these networks. Furthermore, they identified that this is related to cognitive function: reduced network efficiency was correlated with decreased information-processing speed, and with performance on tests of executive functions (such as attention and concentration) and visuospatial skills (navigating in space) (Wen et al., 2011).

Its ability to reveal abnormalities in white matter fibre structure and provide models of brain connectivity is a major breakthrough for neuroscience. Since its invention in 1985, a range of similar techniques have been and are being developed, including diffusion-weighted imaging and diffusion spectrum imaging.



Amunts, K., Schleicher, A., & Zilles, K. (2007). Cytoarchitecture of the cerebral cortex—more than localization. Neuroimage, 37(4), 1061-1065.

Broca, P. (1861). Remarks on the seat of the faculty of articulated language, following an observation of aphemia (loss of speech). Bulletin de la Société Anatomique, 6, 330–357.

Fancher, R. E. (1990). Pioneers of Psychology (2nd ed.). New York: W. W. Norton & Co., pp. 72–93. (Original work published 1979.)

Huettel, S. A., Song, A. W., & McCarthy, G. (2009). Functional Magnetic Resonance Imaging (2nd ed.). Massachusetts: Sinauer.

Squire, L. F., & Novelline, R. A. (1997). Squire’s Fundamentals of Radiology (5th ed.). Harvard University Press, p. 36.

Wen, W., Zhu, W., He, Y., Kochan, N. A., Reppermund, S., Slavin, M. J., … & Sachdev, P. (2011). Discrete neuroanatomical networks are associated with specific cognitive abilities in old age. The Journal of Neuroscience, 31(4), 1204-1212.

Zilles, K., Palomero-Gallagher, N., Grefkes, C., Scheperjans, F., Boy, C., Amunts, K., & Schleicher, A. (2002). Architectonics of the human cerebral cortex and transmitter receptor fingerprints: reconciling functional neuroanatomy and neurochemistry. European neuropsychopharmacology, 12(6), 587-599.

Zilles, K., Schlaug, G., Matelli, M., Luppino, G., Schleicher, A., Qü, M., … & Roland, P. E. (1995). Mapping of human and macaque sensorimotor areas by integrating architectonic, transmitter receptor, MRI and PET data. Journal of anatomy, 187(Pt 3), 515.


Interview with John Williams: Parent of a Child with Autism, Author of the Blog and New Comedy Show ‘My Son’s Not Rainman’

“ ‘My Son’s Not Rainman’ is a new show for 2013 from comedian John Williams. It’s a show about finding the positive in everything, from the joy and wonder of the Special School Disco to the unadulterated thrill of getting the front seat on the Docklands Light Railway. Ultimately, it’s just an uplifting tale about what it really means to be different.”



John is a dad.

He’s also a comedian (Laughing Horse New Act of the Year Finalist, Jokers’ Joker of the Year). He trained at Royal Scottish Academy of Music and Drama until they chucked him out.

This is his first solo show, and the blog is the first thing he has written since leaving school far too many years ago.

This story couldn’t be told without The Boy.

He’s the autistic one.

He’s the most amazing, frustrating, contradictory boy in the world.

He has never drawn a charcoal sketch of the London skyline or memorised a train timetable.


I went to see John’s show earlier this month and gave it a glowing review.

I was impressed by John’s commitment to challenging people’s perception of what is ‘normal’ and also the stereotypical view of autism. He skilfully uses humour to break down awkward barriers that make it hard to talk about disability and being different.

John has kindly agreed to do a follow-on interview for ‘The Transparent Psychologist’. So, here goes!


LJ – Leila Jameel

JW – John Williams

Blogger and Comedian

LJ: John, thanks for agreeing to talk to me! Tell me, why did you decide to start the blog and then the comedy show?

JW: For a long time I wanted to try to find a way to share with people what it’s like bringing up a child who’s different. I wanted people to be more accepting of those with autism, and I became increasingly frustrated that some of the more challenging behaviours that can be so prevalent are never talked about. My hope was that if I could find a way to share our story people might become more accepting in the future. And I wanted to dispel the myth that life with a child with additional needs is always grey. My son brings light into my world. We laugh and joke a great deal. Ours is a happy life, even if a little unconventional at times!

LJ: Do you think that using humour makes it easier to talk about difficult things?

JW: I guess so – it certainly makes stuff more accessible. I’m always aware that the flip-side is there’s a danger that it’s then seen as mocking or treating a subject flippantly – it’s a fine balance as you mentioned in your review of my show, and it’s one that makes me nervous all the time.

LJ: How do you decide which bits of material will or won’t work?

JW: The main test is if it makes me laugh. And after that it’s just trying it out on an audience to see if they agree. The previews for the show have been really useful for that. With the blogs, it’s different. I so often sit down to write a blog about “x” and then suddenly I find I’ve written about something completely different. There’s less pressure to be funny with the blog too!

LJ: Have you written any jokes that you thought maybe push the boundaries too far?

JW: I’d say no. I don’t really write jokes in the conventional sense – everything in the show is based on an event that has happened, and that’s where the humour comes from. In many ways it’s a collection of funny stories rather than a series of jokes. I use my son as the guide. If I thought there was anything in the show that he would one day be uncomfortable with, then I’d stop saying it immediately.

LJ: What is your favourite story from your show?

JW: Without giving too much away, it’s probably the story of the special school disco. People talk about the rollercoaster of emotions from the show – and I think that afternoon at the disco sums it up better than most. It was one of the most joyful things I’ve ever witnessed, only tinged with regret that not everybody in the world gets to share in it. The world would be a much better place in lots of ways if more people experienced the delights of a special school disco!

LJ: So what are your plans for both the blog and the comedy show?

JW: I don’t know really – both have taken off far more than I ever thought they would, and I’m a little blown away by it all. It would be nice to tour the show in some way, but I’m mindful of childcare and lots of other issues. Any plans for the future will always involve my son coming first, but I’d love to continue sharing our story in whatever way works for us.

LJ: You are patron of a charity ‘’ – can you tell us what they do?

JW: I met Annette and Tracey at one of my comedy nights – they are two mums of children with autism based in Kingston-upon-Thames, Surrey. Their aim is to set up a ‘hub’ or cafe for young people on the autistic spectrum and their families. A place to talk, share and laugh, all created with sensory issues in mind. I loved the idea and have done a couple of fundraising shows with some other comedian friends to help out. They asked me to become their patron, and I was delighted – they work very hard and deserve every success. It would be great to imagine their plans for Kingston rolled out across the country.

Parent of The Boy

LJ: What did you think/feel when your son was first diagnosed with autism? How much did you know about it beforehand, and how much support were you offered?

JW: I suppose it was relief in the end. We knew something was wrong, but in many ways my son doesn’t fit the classic diagnosis for autism at all. There are times now when I question his diagnosis; I don’t know if that’s something all parents go through. In terms of support, there was none. You got the label and that was it. However, there was a local National Autistic Society support worker who was really helpful. This was some years ago, and I hope things have changed for the better since.

LJ: How has having a son with autism affected your life?

JW: With or without autism, he has brought joy and light and wonder into my world. I am an infinitely better person for having him in my life. It’s a bit of a cliché, but the day I became a parent is the day I realised what it really means to be alive.

LJ: What advice would you give parents out there whose child has just received a diagnosis?

JW: I don’t know if I can offer any – I’m just a bloke sorting through it like everyone else. Google is your friend, but it’s very much your enemy too. There are some weird and wonderful resources and claims made out there, so choose carefully who or what you listen to! I don’t want to come across as trite by saying things will get better, as for some it does and some it doesn’t. All I can say is for each day try to find the joy. There will be some days where it’s harder to find than others, but it will be there.

LJ: I get the feeling that seeing the funny side has helped you through some difficult times…?

JW: I didn’t always see the positive, let’s put it like that… I think over the years I’ve realised that my being positive has a huge impact on my son. I suppose I’m his role model in this, and I can either choose to be consumed by regret and sadness or we can both decide to live each day with fun and happiness.

LJ: Many parents of children with autism are anxious about their child’s future. What worries you the most?

JW: The horror stories of services for adults on the autistic spectrum. The lack of support available from the age of 18 is a real concern, and although I think there have been some improvements, there is still a huge deficit. Although getting the right education for my son has been a battle at times, adulthood opens up a whole new battleground – employment, housing, social care, the list goes on. I suppose the main worry is what will happen if/when I am no longer around – I don’t think that ever goes away.

LJ: Do you think labels such as ‘autism’ are helpful in explaining someone’s difficulties, or a dangerous reflection of our inability to accept people and behaviours that are different?

JW: I do understand the need for labels, but sometimes we can’t lump everything into a nice, neat group and pigeonhole people. First and foremost my son is a person. A unique person with his own identity, needs, wants and wishes – as is every other person with autism. If we put ‘everybody who has blue eyes’ into a group, we wouldn’t expect them to interact and behave in the same way, as we seem to do with those with autism. Sometimes it feels like doctors and physicians become so wrapped up in labels and expected behaviour traits that they forget to look at the person in front of them as just that. A person. A unique individual. Labels are useful for getting the right support – statementing for schools, disability living allowance and so on. But so many times concerns or worries about my son have been dismissed with the words “it’s because he’s autistic”. He isn’t. It won’t define him. He’s an individual with autism. There’s a huge difference.

LJ: What is the hardest thing about having a son with autism?

JW: Knowing I can never climb inside him and see the world the way he sees it, even for just a moment. And then maybe I’d be better able to support him. But everything is relative. My son has speech, he can read, laugh and joke. There are so many with autism who are non-verbal, or more severe than he is. Many times I’m reminded just how lucky we are.

LJ: Do you think there any pros to having autism?

JW: That question will never be for me to answer, it will always be for my son. There isn’t a defining line where he ends and the autism begins. Does he have a sense of humour because of his autism or in spite of it? I’ll never know which aspects of his personality are ‘him’ and which are ‘autism’. I do really hope that one day he can say, “Yes, there are pros to this.” At the moment, I’m not so sure.

LJ: Thank you for your frank and honest answers John! I wish you every success with the blog and show.

John’s blog –

Follow John on Twitter – @Autistic_Kid

John will be performing at the Edinburgh Fringe Festival, 1st–25th August, and is planning more UK dates!

Why you are PROBABLY wrong

What do a gambler, a doctor, and a juror all have in common?

Betting. Interpreting the odds to decide whether to make that last bet or count your blessings and call it a night.

Diagnosis. Using genetic testing to inform a patient’s life time risk of a disease and thus a suitable course of treatment. 

Sentencing. Assessing the evidence to calculate the likelihood that a defendant is guilty.

All of these people use probability to make and inform important decisions.

Predicting the future: Gambling

Tarot readers and psychics aside, most of us accept we cannot predict the future. Nevertheless, we use probability in our daily life to determine how likely it is that something will, or will not, happen. Take the example of flipping a coin. Some prefer to call ‘heads’ and some ‘tails’, but there is a 50% chance of the coin landing on either side. Yet, cognitive psychology reveals we are not objective when using probability to make predictions. Now suppose the coin is flipped three times and each time lands on heads – whilst this is unlikely, it is not impossible. If you had to bet £100 on the next toss, which side would you choose?

The odds remain 50:50. It does not matter which side the coin landed on before. However, a classic study by Tversky and Kahneman (1971) found people believed that tails was more probable after a run of three heads. This was coined (pardon the pun!) the “gambler’s fallacy”, and highlights our naive belief that a successful outcome is due after a run of bad luck – which may well be a gambler’s downfall! Daily life is littered with sayings such as “You wait all day for a bus and then two come at once!”. Whilst improbable things happen more often than we realise, sayings like this betray a hidden belief in karma, which may undermine our ability to be rational decision makers.
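The independence of successive flips is easy to verify by brute force. This sketch enumerates every equally likely sequence of four fair-coin flips and checks that, among sequences beginning with three heads, the fourth flip is still heads exactly half the time:

```python
from itertools import product

# All 2^4 = 16 equally likely sequences of four fair coin flips.
sequences = list(product("HT", repeat=4))

# Restrict to sequences that start with three heads...
after_three_heads = [s for s in sequences if s[:3] == ("H", "H", "H")]

# ...and ask how often the fourth flip is heads.
p_heads = sum(s[3] == "H" for s in after_three_heads) / len(after_three_heads)
print(p_heads)  # → 0.5: the run of heads tells us nothing about the next flip
```

No matter how long the preceding run, conditioning on it never shifts the next flip away from 50:50.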


Diagnosing the present: Doctors

Imagine you are a doctor screening a woman for breast cancer. You find a lump and, based on your experience, estimate the probability of it being cancerous at just 1%. Nevertheless, modern technology can help identify whether or not it is indeed cancer, and inform treatment. So you send her for a test, which correctly classifies roughly 80% of cancerous and 90% of benign (non-cancerous) tumours. The result comes back positive. What is your estimate of the overall probability of the lump being cancerous, given your initial estimate and the reliability of the test result?

Researchers asked doctors to solve this problem (Eddy, 1982). 95 out of 100 estimated the probability of the lump being cancerous at around 75%. They were way off! The correct answer is drastically lower, at 7-8%. The doctors assumed that the chance of the woman having cancer given a positive result was equal to the chance of a positive result given that she actually had cancer, and so inaccurately inflated their estimates. In clinical settings, where life-and-death decisions must be made, probabilistic errors are no laughing matter.

This mistake is known as “confusion of the inverse”. In order to understand why the correct answer should only be 7-8% we need to do some maths! According to ‘Bayes’ theorem‘ – a subtle rule of probability – the correct way to estimate the odds of cancer given a positive test result is as follows.

Probability of cancer given that the test result is positive:

p(cancer|positive) = p(positive|cancer) p(cancer) / [ p(positive|cancer) p(cancer) + p(positive|benign) p(benign) ]

where:

p(cancer) = the original estimate of a 1% probability of cancer = .01

p(benign) = the remaining 99% probability of not having cancer = .99

p(positive|cancer) = an 80% chance of a positive test result given cancer = .80

p(positive|benign) = a 10% chance of falsely identifying a benign tumour as malignant = .10

Substituting the values:

p(cancer|positive) = (.80)(.01) / [ (.80)(.01) + (.10)(.99) ] = .008 / .107 ≈ .075

That is, roughly a 7.5% chance of cancer given a positive test.
Adapted from: Plous (1993)

Bayes’ theorem emphasises the importance of paying attention to the ‘prior probability‘ – the best estimate of a probability before a new piece of information is known. In the cancer problem the initial estimate was very low, at only 1%, but the doctors allowed this to become contaminated by the new information provided by the test, forgetting that the test was only 80-90% accurate. If something is extremely probable or improbable to begin with, it is important not to over-update that estimate on the basis of an unreliable piece of information.
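The arithmetic in the cancer problem can be checked in a few lines of code; a minimal sketch of the same Bayes' theorem calculation:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: p(cancer | positive test)."""
    p_benign = 1 - prior
    numerator = sensitivity * prior                      # p(+|cancer) p(cancer)
    denominator = numerator + false_positive_rate * p_benign
    return numerator / denominator

# The screening numbers from the text: 1% prior, 80% sensitivity,
# 10% false-positive rate on benign lumps.
p = posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.10)
print(round(p, 3))  # → 0.075, i.e. roughly 7.5%
```

Playing with the prior here makes the lesson vivid: raise it to 10% and the posterior jumps to about 47%, which is why the initial 1% estimate matters so much.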

Establishing the past: Jurors

Jurors must weigh different strands of evidence and integrate them into a whole to estimate how likely it is that a suspect “dun it”. Increasingly, cases rely on forensic evidence, including DNA samples, which requires an assessment of probabilities. Several problems with the way people interpret such evidence have been identified. Suppose a DNA sample from the perpetrator matches 1 in 1,000,000 individuals at random. If a match is found for a suspect, what are the odds that the suspect is innocent?

People confuse these two calculations, estimating the likelihood of innocence to also be 1 in 1,000,000. Lawyers call this mistake “the prosecutor’s fallacy”: confusing the odds associated with a piece of evidence with the odds of guilt. Experimental work also suggests people are sensitive to subtle changes in how probabilistic evidence is presented. Dr Itiel Dror, a cognitive neuroscientist from University College London, explains, “I could say you have a 1/100 or .01 chance of dying from some risk factor – people do not weight these as equal, even though they are the same information presented in different ways. Jurors are affected by evidence in this way.” In 2011, a judge ruled against presenting jurors with statistical evidence that calculates the odds of one event happening given the odds of other related events. For instance, forensic scientists use such theorems to establish how likely it is that a shoe-print left at the crime scene comes from a pair of trainers found at the suspect’s house, given how common that model of shoe is, its size, how worn down the sole is etc. However, lay people do not understand this logic, easily becoming confused by the complex maths underlying it, increasing the risk of poorly informed decision-making.
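Bayes’ theorem makes the fallacy concrete. The following is a minimal sketch, assuming a hypothetical pool of 10 million people who could plausibly have left the sample (the pool size, function name and variables are my illustrative assumptions, not figures from any real case):

```python
def p_guilt_given_match(pool_size, match_rate):
    """Posterior probability the suspect is the source, given a DNA match.
    Assumes each person in the pool is equally likely a priori, the true
    source always matches, and innocent people match at `match_rate`."""
    prior = 1 / pool_size                       # prior probability of guilt
    innocent_match = (1 - prior) * match_rate   # chance an innocent person matches
    return prior / (prior + innocent_match)

posterior = p_guilt_given_match(pool_size=10_000_000, match_rate=1 / 1_000_000)
print(round(posterior, 2))  # 0.09 -- far from certain guilt
```

Even with a 1-in-a-million match, the posterior probability of guilt is only around 9% under this prior: the rarity of the evidence is not the same thing as the improbability of innocence.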


Use probability with caution

Estimating and using probability presents all kinds of daily dilemmas. What is the solution? Firstly, remember we are prone to reaching for folklore to explain perfectly ordinary chance occurrences. Secondly, pay attention to the prior probability, i.e. the initial 1% cancer estimate, and do not be too heavily swayed by new evidence, which might lead to inaccurate updating of an estimate. When the new evidence is unreliable, Bayes’ theorem tells us the revised probability should not stray far from the prior. Finally, when evaluating probabilistic evidence, remember the devil is in the detail!


Bayes, T., & Price, R. (1763). “An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M.A. and F.R.S.”. Philosophical Transactions of the Royal Society of London, 53, 370–418.

Eddy, D. M., (1982). Probabilistic Reasoning in Clinical Medicine: Problems and Opportunities. In Kahneman, Slovic & Tversky (Eds.) Judgment under uncertainty: Heuristics and biases (pp. 249–267). New York: Cambridge University Press.

Plous, S. (1993). The Psychology of Judgment and Decision-Making. New York: McGraw-Hill.

Tversky, A., Kahneman, D. (1971). “Belief in the Law of Small Numbers”. Psychological Bulletin 76 (2): 105–110.

My Son’s Not Rainman: Is Laughing About Being Different a Good Thing?

On Monday night I went to see a show called ‘My Son’s Not Rainman’ by comedian John Williams, and father of a young boy with autism. I wasn’t entirely sure what I was in for, and although I knew I had come to see a show about autism, I didn’t realise it was stand-up comedy until the lights went down…

As it dawned upon me that this was a comedian making ….err… jokes about …umm…. autism, a serious neurodevelopmental disorder that I spend most of my time trying to get to grips with, I felt quite awkward. But, by the time we got to the “Let’s trivialise a major brain disorder in 60 seconds” section of the show, where John proceeded to list a string of ‘interesting facts’ about autism and his son’s specific variant, I was sold! I found myself laughing my head off to John’s genuine and humorous perspective of living with even the most challenging aspects of autism. Somehow, John managed to turn the most non-PC material into hilarious, but sensitively true insights. And, the whole audience, mainly composed of family members of those with autism and people who work with autistic individuals, was laughing with me.

We heard about how his son uses a wheelchair to avoid becoming fatigued from his cerebral palsy, and how this led to an interesting situation at mini-golf. We chuckled along to the tale of his son’s obsession with biting – people, objects, anything! – and how this particular pastime has led to exclusion from a number of establishments, including a special school for children on the autistic spectrum (go figure!). We giggled about how his son’s inability to separate reality from fantasy resulted in a rather disappointing experience with an Omnitrix, but on the upside allowed him and his Dad to enjoy a ‘magical’ ride on the DLR instead of forking out for a ticket to Thorpe Park!


It felt safe to laugh, even when the material put us on very thin ice. Why? Because we weren’t laughing at his son, or even at autism; we were laughing at how ridiculously hard it is to be someone, or live with someone, who doesn’t understand the world around them, and how sometimes society makes it even harder.

There were tender moments too. John displayed an absolute and unconditional love for his son, and it became clear that his blog and comedy help him to keep strong for both their sakes. Making light of a bad situation is a form of coping mechanism. By making something normally dark and difficult to talk about funny, you can also tell the truth, and make people listen and think. Laughing about it somehow makes it okay, breaks down those barriers, and allows for important messages to be sent.

Needless to say, this has been rattling around my thoughts ever since. And, I should point out that in general I don’t think laughing about disability is okay… Indeed, in my very first blog post I discussed the problem of TV shows which feature disability becoming ‘entertainment’, and I questioned whether this is a form of modern freakshow.

I have since racked my brain, trying to work out why it is okay to laugh, given the precarious line John treads. Not only does John have credibility, he does the show his way, portraying autism warts and all. Media depictions of autism tend to focus on the story of someone who struggles with social-communication, but who, like Rainman, can count cards in a casino, or draw St Paul’s Cathedral from memory in minute and accurate detail. Now, whilst there are people on the autistic spectrum with savant skills, they are a minority, and autism is just that – a spectrum, with every variant imaginable, and unimaginable too!

John’s blog

Follow John on Twitter – @Autistic_Kid

John will be performing at the Edinburgh Fringe Festival, 1–25 August, and is planning more UK dates!

BEWARE OF YAWNS! Article Review: Contagious Yawning in Autistic and Typical Development

My research focuses on the autistic spectrum, and I spend an awful lot of my time trawling the web for interesting publications on this widely researched area. This paper is one of my favourite experiments on autism that I have come across, so I thought I would review it for The Transparent Psychologist!

It uses a novel approach to examine a subtle component of everyday social interaction, ‘contagious yawning’, and seeks to understand the emergence of this behaviour in typically developing individuals and its impairment in those with autism.

Whilst the findings should be interpreted with caution, and the nature of the article is speculative, it neatly links research in a range of related areas, covering behavioural evidence, cognitive theories and possible neural substrates for contagious yawning.

Contagious yawn

Helt, M. S., Eigsti, I. M., Snyder, P. J., Fein, D.A. (2010), Contagious Yawning in Autistic and Typical Development, Child Development, 81, 5, 1620-1631.


The psychological concept of “contagion” refers to the tendency of a particular behaviour to spread successively through a group of individuals. Behaviours that most often elicit contagious reactions are thought to be representative of the inner states of others and signify our ability to converge emotionally with those around us. For instance, previous work has identified that babies in hospital begin to cry when they hear other babies crying (Hoffman, 1978; Simner, 1971), and that canned laughter on television programmes prompts viewers to laugh (Bush, Barr, McHugo & Lanzetta, 1989; Provine, 2000).

It has been argued that emulating another’s behaviour may be linked to neurological mechanisms involved in the development of our ability to intuit and feel the emotions of others (Rogers & Williams, 2006). Autistic Spectrum Disorders (ASD) are thought to be characterised by a deficit in cognitive empathy, whereby individuals with an ASD have problems on tasks involving ‘perspective taking’ of another’s internal state. Consistent with this hypothesis, it has been found that children with ASD show deficits in imitating others (McIntosh, Reichmann-Decker, Winkielman & Wilbarger, 2006) and are less susceptible to catching others’ contagious emotional states (Scambler et al., 2006).

Yawning has been found to be contagious: seeing another person yawn, thinking about yawning, or reading or even hearing the word can elicit a yawn in 40-60% of healthy adults (Baenninger & Greco, 1991; Platek, Critton, Myers & Gallup, 2003; Provine, 1989). This is a particularly interesting aspect of contagious imitation as, unlike most other examples, it produces a large and long-lasting behavioural effect – a yawn typically lasts up to 10 seconds, and thus is an example of contagious behaviour that is easy to detect. The authors suggest that contagious yawning may be a trigger response for further behavioural imitation.

Little work has examined the developmental course of contagious yawning, or the relative susceptibility of individuals with ASD. Understanding both the typical development (Study 1) and the impairment (Study 2) of this phenomenon may facilitate further insights into the development of social cognition.

Study 1

In Study 1 the authors examined the developmental trajectory of susceptibility to contagious yawning in typically developing children aged 1–6 years, in order to identify its emergence during development. Experimenters read stories to 120 children individually for a total reading time of 12 minutes. During the first 2 minutes of reading the experimenter did not yawn at all. In the next 10 minutes the experimenter paused to yawn on four separate occasions, and recorded if the child also yawned within the following 90 seconds. The results indicated that the frequency of contagious yawning increased substantially in children aged 4 upwards.

Study 2

In an extension of this work, the authors examined contagious yawning in 30 children with autism spectrum disorders (ASD) aged 6–15. The same method described in Study 1 was used. The children with a diagnosis of ASD showed diminished susceptibility to contagious yawning compared with control participants. Furthermore, children who did not meet full diagnostic criteria for an ASD, but had significant levels of symptomatology, thought to represent a milder variant of autism, were more susceptible to contagious yawning than children who met full diagnostic criteria for an ASD.


These findings are suggested to support the theory that contagious yawning is linked to social development. Furthermore, they are explored with regard to the involuntary ‘matching’ behaviour that we subconsciously engage in, often at a level undetectable to the naked eye or ear, and its relation to emotional contagion. This work is also discussed with reference to the neural basis of contagious yawning, how this might explain the mechanisms underlying the behaviour, and the differences between the control participants and those with an ASD.

Studies investigating forms of empathy and emotional convergence consistently indicate activation in the insula and anterior cingulate cortex. The authors of the present study speculate that these areas are likely candidates for the neural activation involved in contagious yawning. Since these areas are also implicated in cognitive empathy tasks, this may explain the population differences observed in those with an ASD, who have a cognitive deficit in this ability.

Interview with Dr Itiel Dror – Cognitive Bias: What Psychology Can Tell Us About Experts and Forensic Science


Dr Itiel Dror (Centre for the Forensic Sciences, University College London) holds a PhD in psychology from Harvard University. His research interests are wide-ranging, but he has specialised in human expertise and decision-making.


This interest in human experts, specifically in the forensic domain, where he has conducted empirical studies on bias in fingerprinting and other forensic disciplines, has earned him much attention. His work has been covered by Nature (18 March 2010) and The Economist (21 January 2012), and focuses on applying scientific knowledge and theoretical models of the human brain and mind to practical everyday problems.

He has translated this research into developing effective ways to improve human performance and decision-making in a number of domains.


LJ – Leila Jameel

ID – Dr Itiel Dror

LJ: Thank you for agreeing to talk to The Transparent Psychologist! I guess we should start by talking about your background. I understand that you have done a lot of different work in several domains examining expert performance…

ID: On the face of it, it looks as if I have worked in many different domains, with forensic experts, frontline police, in the military and medical sectors, with pilots and aviation personnel, and others. But, for me it is all the same, in the sense that I am a cognitive person. What I am interested in is what makes an expert, in terms of how they perceive information, make judgments and decisions. I investigate what happens to the brain, and the way people think, when they become an expert. For instance, the content of the medical domain is very different from that of a pilot, but all of these expert domains are similar; they are human beings surrounded by a huge amount of information, some of it ambiguous, some of it missing or distorted. They all have to process this information and use it to make judgments and decisions. So my work has focused on understanding expert training and performance, which I have then applied to many different domains.

LJ: What started you in this line of work?

ID: That is a historical question! When I was doing my PhD, I had an idea about how pilots may process information differently. My PhD supervisor had contacts with the air force, so he told them our ideas; they loved it, and invited me to stay on an airbase over the summer to collect data from pilots. That was the first step. And then I began to think, I wonder if this is different for medical doctors, for police officers etc.? And I found a lot of similarities across these domains.

LJ: What was different about the pilots in terms of their brains, and/or how they processed information and made decisions?

ID: That is a complicated question… First of all, the pilots I was researching believed they were better at everything, but they were not! I found that there were certain areas in which they were indeed better than the average person, and in which they had special abilities. However, this effect was not found across all abilities, but for very specific abilities, which I characterized in a number of papers. For example, I found that the pilots were better at certain types of spatial navigation tasks, but not others. The pilots performed better than average on tasks involving metric spatial relationships, but not on tasks involving categorical spatial relationships (Dror, Kosslyn & Waag, 1993). To understand the nuances, you need to realize that cognitive abilities are not just memory, or problem solving, or decision-making. Each of these cognitive categories breaks down into many, many different kinds of cognitive processes, and a person can be better at one but not others. The question is, what abilities are required to be better at a certain profession? So, it is limited to say pilots need to be better at, or are better at, spatial navigation, because that encompasses a whole host of cognitive processes!

LJ: Your most recent work, and where you have focused a lot of your attention, is in applying cognitive psychology to the forensic domain. Here you have investigated the impact of ‘cognitive bias’ – what exactly is that?

ID: By ‘cognitive bias’ I do not mean stereotypes or prejudice such as being racist or sexist. Rather, a ‘cognitive bias’ refers to our inability to be entirely objective, which may manifest via several routes, such as perceptual distortion, inaccurate judgments, and illogical and/or irrational interpretations.

LJ: In a new paper (Kassin, Dror, Kukucka, 2013) you outline several different problems with forensic science, suggesting it may not be as scientific or objective as it may appear to the layperson. You have investigated different cognitive factors that might be at play when a forensic scientist conducts their work, leading to cognitive bias. Can you tell me about that?

ID: There are a number of issues. First of all I wouldn’t say that I focus on forensic examiners. Yes, I have done a lot of work in the forensic domain, but I have also done a lot of work in medical and other domains.

The forensic domain is different in a number of aspects. Firstly, in the medical domain or aviation, it has been recognized for decades that the human factor is very important, and there has been a lot of research on medical decision-making, aviation decision-making, team work in aviation etc. Historically, forensic science has not been investigated in this way; until recently there wasn’t any research on the human element in forensic decision-making. When I started to look at this area ten years ago, the forensic community said “What? The human element is not relevant. What are you talking about? We are objective!” This mindset was very interesting, because in forensic science the human is the instrument of analysis. In most forensic areas there are no objective criteria; it is based on human experts examining different visual patterns of blood splatter, fingerprints, shoe prints, handwriting, and so on, and making subjective judgments. Until recently the forensic community ignored all the human elements. Initially, there was a lot of denial, and even resistance, because I was the first to start asking questions about the role of the human examiner in perceiving and interpreting information that is used to make decisions.

Secondly, forensic scientists present themselves as being scientists. A pilot or medical doctor testifying would never say, “This is science!” but rather “I cannot be 100% sure, but this is my conclusion, which is based on science”. Ten years ago the forensic community were very naive about all of this, because the courts had accepted their testimony for over 100 years. For example, in fingerprint analysis (the most used forensic domain) examiners would say, “We are totally objective and infallible, we never make mistakes, we have a zero error rate”, and the court accepted it, so they accepted it! When I started working in this area ten years ago it was initially very unpleasant, and there were some very angry people who did not like me saying that they were subjective and did not use objective criteria. Actually what I was saying is that you are a human being, and human beings make mistakes! Now it has changed quite a lot. So after a decade of climbing up a mountain and swimming against the current, progress has been made. But initially there was a lot of resistance, which at times became quite personal, even from the leaders of the community. For example, when I published one of my papers, the chair of The Fingerprint Society in the UK wrote a letter to the editor of the journal saying, and I quote: “We are totally objective, fingerprint examiners are never affected by context. If the fingerprint examiners are affected by context, if they are subjective, they shouldn’t be fingerprint examiners, they should go and seek employment in Disneyland!”

LJ: Hahaha! Unbelievable.

ID: In a way you cannot blame them; they are forensic scientists, not cognitive scientists, and have been trained to think that they never make mistakes. Now most of the forensic community around the world (not all, there are still a few dinosaurs who don’t get it) has accepted this and started to take steps to fix it. The judicial system has also taken it on board, and judges have become more sophisticated from a cognitive perspective in understanding it. A number of enquiries into the reliability and validity of the forensic disciplines have also been conducted.

LJ: In another recent paper (Dror, Kassin, Kukucka, 2013) you make several recommendations of ways in which you think the field of forensic science can be improved. Firstly, as discussed, you state that the community should acknowledge the limitations, and accept that there is an element of subjectivity. You also discuss that forensic examiners work in a way such that they try to build a case against a suspect, and thus do not have a balanced view from the outset. You then go on to consider more specific methods such as blind testing, buffering examiners from irrelevant information about the case, and so on. There have been some objections to this. I am trying to put it all together to understand exactly what you think could or should be done, and why there is resistance to that.

ID: First of all, we need to bring awareness to the issue. That is not enough. It is necessary but not sufficient. If people don’t understand and acknowledge the limitations they are not going to take steps. So we need to demonstrate it to them and explain it to them.

In a study we conducted several years ago, we gave fingerprint examiners the same pair of fingerprints twice, but put it in very different contexts (Dror & Charlton, 2006). In one context they believed the suspect was very likely to be guilty where they had confessed to the crime. Here the examiners found a match. In the other context, with the same fingerprint, they were led to believe that another suspect had confessed to the crime. Here the examiners did not find a match. We gave them irrelevant information and the same examiners changed their decision for the same fingerprint! Once they see this kind of research they begin to understand the cognitive architecture underpinning how the brain interprets information, and then to understand that we are all influenced by expectations, experience etc. To the forensic scientist it is totally irrelevant who confessed to the crime or not. So this kind of information should not be made available to them, thus buffering them from the irrelevant context, which may unintentionally bias their decision.

So first they need to be on board with understanding the limitations. Once they understand and accept the problems, there are many measures that can be taken to minimize those problems. Some of them may never happen, for example separating forensic scientists from the police force. Today in the UK, forensic examiners are part of, and work for, the police. That already creates a certain context! So ideally forensic scientists would be separated from the police. If not, steps need to be taken to give them independence, such as ensuring that police detectives on the case do not have direct contact with the forensic examiners, so they cannot pressure and influence them, intentionally or not. They should not be considered part of the police; they are not there to help the police – they are scientists. Recently in the US, in Washington DC, all the forensic scientists have been taken out of the police force and into an independent body. In the UK, not only is this not happening, but independent forensic services have also been closed down for economic reasons – it is going the opposite way.

These are a few examples of steps that can be taken, without costing a lot of money, to improve the quality of the service, so that experts are more objective and impartial, providing the courts with better information.

LJ: As well as the problems in conducting the science in the first place (i.e. the potential influence of contextual information and the issue of working with the police), forensic scientists also have to present their evidence to the courts. In your recent work you talk of the tension and bi-directional relationship between these two components of forensic scientists’ work.

ID: Yes, the judicial system is an adversarial system. So when forensic scientists are in court, although they are trying to be impartial, they are either part of the defense or the prosecution, who are trying to prove the suspect innocent or guilty. It is hard to be a scientist in an anti-scientific system. In a recent piece of research (Murrie, Boccaccini, Guarnera & Rufino, in press), forensic examiners were sent identical files, but half were told they were working for the defense and half for the prosecution. The reports they produced differed, depending on whether the examiner thought it was solicited by the defense or prosecution. I am not accusing anyone of intentionally lying, but this information clearly puts one in a certain frame of mind or disposition, and so the brain sees what it wants, confirming that view. Science is a bit more complicated than one way or the other, innocent or guilty. Yet forensic examiners are pushed to be part of the two-sided judicial system and not to act as scientists. They can forget their role or get sucked into it. It is very hard to be immune to this different culture.

LJ: That must apply in many different areas where individuals act as expert witnesses to the courts?

ID: Yes. There is no question! I am looking at forensic science because forensic experts are very good, so I believe if cognitive bias is true for them, it is likely to be true for many other domains. If one looked at other expert domains the problems may be even bigger. I looked at DNA, the gold standard in forensic science; if these experts are affected by context, then you can be sure people who look at all kinds of evidence are too.

LJ: For forensic science it is also more shocking because it is presented as something concrete. Whereas with other things, say medics and clinicians, it is clear they are giving their professional opinion, which is based on science, but that it is still an opinion.

ID: Yes, forensic science has traditionally been misrepresented to the courts. A recent enquiry concluded that fingerprints are not a matter of fact, but of opinion. To make it worse, laypeople’s – the jurors’ – experience of forensic science is often drawn from CSI! CSI gives forensic science a Hollywood makeover; it does not represent reality! So if a forensic scientist comes and says “I have found a match”, jurors do not doubt it – it is consistent with what they know – but what they know is a misrepresentation of what a forensic examiner really can and cannot do.

LJ: It is interesting there have been objections, not just to acknowledging there is a problem, but also to seemingly simple suggestions, such as forensic scientists only seeing the relevant forensic sample rather than whole case file. Where do you think this comes from?

ID: The reason people still object, although this is lessening, is because the police, the military, the forensic sciences all have very strong cultures, and it is hard to change things which have been the way they have for many years. This is human nature! But things are changing, in terms of training, acknowledgement, regulation of standard operating systems etc. specifically to deal with cognitive bias.

LJ: You also say that forensic scientists should be wary of relying on technology, which I found surprising because we tend to think of technology as more objective.

ID: The influence of context and environment on human judgment and perception comes in many ways, of which technology is one. For example, in the forensic domain there is what is known as ‘base rate bias’. If someone in airport security monitoring x-rays sits there all day never seeing a bomb, then they do not expect there to be a bomb, and so are biased not to see a bomb. So a system has been developed which projects a bomb onto the x-ray screen. The screener must identify these and it helps to keep them engaged. A similar thing happens in the forensic domain, where the technology used clearly indicates where the examiner should expect to see certain kinds of information. Thus they begin to expect to see this information, such that even if the information is not there they are more likely to see it, and if the information is somewhere else they are less likely to see it. We have run experiments finding that when expectation is high, perception is affected, which again introduces cognitive bias into the work of the forensic examiner (Dror & Mnookin, 2010; Dror, Wertheim, Fraser-Mackenzie & Walajtys, 2012). You see the same problem in other areas where humans collaborate with technology.

LJ: In general, are there individual differences in susceptibility to cognitive biases?

ID: Yes there are, but this is an area for further research. Some people are more susceptible, but whether this is innate, or to do with personality or training, is not clear. There were some examiners in my research who were not affected by bias no matter what we did! What is it that makes these examiners more resistant? I do not know yet; we need to collect more data. So it is important to do more research in this area, but also to remember that if you do not know the context, you are not affected by it, regardless of your susceptibility!

LJ: Also researchers trying to quantify this are subjective human beings too!

ID: People always ask me at forensic conferences, “Are you biased?” I think they expect me to say no, but my response is always “Of course I am biased! But I take measures to minimize it.” For instance, when new drugs are being trialed, a placebo is used for comparison, in order to isolate the actual effect of the drug from the subjective context of being given a drug to make you feel better. Similarly, when I collect data, I don’t analyse it. I ask one of my research assistants, who doesn’t know what it is about, to analyse it, without providing them with any irrelevant information about my expectations etc. Then it is what we call a blind procedure. We also employ inter-rater reliability to see if there is a consensus across analysts. These are fundamental principles in scientific research, but forensic scientists have not been exposed to these counterbalancing measures to minimize bias.

LJ: So maybe forensic scientists need some basic understanding of scientific design and psychology in their training?

ID: They need certain scientific and cognitive understanding. The problem is also with judges – how do they evaluate scientific data? The court is not a place to do science!

LJ: Well, I think our time is up! Thank you for your openness and frankness. It has certainly opened my eyes, and I am sure readers will find it fascinating.


Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56, 600–616.

Dror, I. E., Kassin, S. M., & Kukucka, J. (2013). New application of psychology to law: Improving forensic evidence and expert witness contributions. Journal of Applied Research in Memory and Cognition, 2, 78–81.

Dror, I. E., Kosslyn, S. M., & Waag, W. L. (1993). Visual-spatial abilities of pilots. Journal of Applied Psychology, 78(5), 763–773.

Dror, I. E., & Mnookin, J. (2010). The use of technology in human expert domains: Challenges and risks arising from the use of automated fingerprint identification systems in forensics. Law, Probability and Risk, 9(1), 47–67.

Dror, I. E., Wertheim, K., Fraser-Mackenzie, P., & Walajtys, J. (2012). The impact of human-technology cooperation and distributed cognition in forensic science: Biasing effects of AFIS contextual information on human experts. Journal of Forensic Sciences, 57(2), 343–352.

Kassin, S. M., Dror, I. E., & Kukucka, J. (2013). The forensic confirmation bias: Problems, perspectives and proposed solutions. Journal of Applied Research in Memory and Cognition, 2, 42–52.

Murrie, D. C., Boccaccini, M. T., Guarnera, L. A., & Rufino, K. A. (in press). Are forensic experts biased by the side that retained them? Psychological Science.

Further Reading:

Dr Itiel Dror’s Homepage:

For more information on bias in forensic science:

Television Interviews:

BBC’s Newsnight interview with Dr. Dror on cognitive aspects in expert decision-making

PBS ‘Frontline’ TV (USA): “Can Unconscious Bias Undermine Fingerprint Analysis?”

Interview with Dr Jake Fairnie from MiniManuscript

Biography and Introduction

Dr Jake Fairnie was born in Bristol. He now lives in London and completed his BSc in Psychology in 2009 at University College London (UCL) with First Class Honours. He went on to study for a PhD in Cognitive Neuroscience, under the supervision of Professor Nilli Lavie, based in the Attention and Cognitive Control group at the UCL Institute of Cognitive Neuroscience (ICN).

Dr Jake Fairnie

In his spare time he produces short films. His film ‘We Didn’t Start the Scanner’ won both the ‘ICN Brains on Film’ competition and the ‘Guerilla Science Eat My Sci Short Film’ competition in 2012.

He completed his PhD earlier this year, and has since been working on MiniManuscript, an exciting new project that he co-founded and that he has kindly agreed to talk to The Transparent Psychologist about!

Interview

Leila Jameel = LJ

Dr Jake Fairnie = JF

LJ: Congratulations on completing your PhD! So tell me about your research interests…

JF: I’m deeply fascinated by human perception. More specifically the way in which our brains cope with the overwhelming amount of sensory information in today’s hectic world. For instance, we can be distracted by our name in a distant conversation at a party; an attractive individual in the street; or a spider on a dashboard. Yet at the same time, we fail to notice a magician’s sleight of hand, the touch of a pickpocket, or even (as in one experiment) a ‘moonwalking gorilla’ set up to infiltrate a basketball game.

LJ: You are the co-founder of MiniManuscript. Tell me about it; what is it and what does it do?

JF: MiniManuscript is the only free, openly editable online database of article summaries. Users can post, read and discuss condensed versions of scientific publications and related online content such as YouTube videos and podcasts. The site allows students and the academic community to examine a large amount of literature in a short space of time. I guess MiniManuscript is a bit like Wikipedia but for academic literature.

LJ: It sounds great. What prompted you to set up MiniManuscript?

JF: MiniManuscript was born out of something that Dr Anna Remington (a Junior Research Fellow at the University of Oxford and the other co-founder of MiniManuscript) and I found ourselves doing when we began working together in the Attention and Cognitive Control group at UCL. Students and academics spend a huge amount of time reading literature, and as we worked in a similar area of research we quickly realised that we were reading the same publications. So we created a shared online document where we would write summary versions of articles we’d read, allowing us to cover twice the material in the same amount of time. It struck us that this should happen on a global scale: imagine how much time could be saved if the whole academic world worked together in this manner! MiniManuscript was born.

LJ: How is an article summary different from an abstract?

JF: An abstract is the author’s subjective interpretation of the paper (e.g. ‘This is the first paper to report the impact of cellphone use on visual awareness’) and can sometimes be misleading. As with newspaper headlines – it can appear to be one thing, and then you get to the end, and realise it’s another! MiniManuscript text summaries, however, focus on the raw facts, such as what they did and what they found (e.g. ‘30% of individuals failed to notice an unexpected salient object; this increased to 90% for individuals that were engaged in a phone conversation’). One colleague summarised MiniManuscript as an “open platform for re-writing the abstract.” I really like that way of looking at it, even if it doesn’t paint the full picture of what MiniManuscript offers. MiniManuscript saves people time by making it easier to work out whether a research paper is really relevant. Given the premium placed on Twitter-style brevity nowadays and the limited capacity of the brain, it is highly valuable to have a resource that encourages people to focus on the relevant information and not to dwell on the irrelevant details. We think MiniManuscript could not have come at a better time for the academic world!

LJ: So is it just for the academic community, or can it be used by the general public?

JF: MiniManuscript is completely free and open for anyone to view any of the content (note that you’d need to sign up for a free account in order to contribute). We have no restrictions, exclusions or paywalls. Given that the site contains summaries of academic publications, we are currently focussed on spreading the word within the academic community, but we welcome anyone who wants to take a look and contribute! We believe that there can be benefits from having an open dialogue with people from a wide range of disciplines. Our ethos is that scientific knowledge should be shared in the most efficient, open and connected way.

In fact, text summaries are not only beneficial to researchers (in allowing them to assess a large amount of information in a short space of time), but also to journalists, people with dyslexia, and people who do not speak English as their first language.

LJ: Do you think there are any risks or drawbacks to using MiniManuscript?

JF: There are no drawbacks or risks to MiniManuscript. Users can opt to remain anonymous; all we require is that contributors provide their academic status. This protects anyone who might be worried about causing upset with their comments. Our discussion threads are a great platform for debate (which is the basis of good science) and we want to encourage as much participation as possible.

LJ: I think anonymity is great, as it will encourage people to be free and open about their thoughts. But, with anonymity may also come risks… how do you moderate the forums?

JF: We have a couple of ways to moderate the forums on MiniManuscript. Users can flag other users, or their comments (like on Facebook), which notifies the MiniManuscript team with a high priority marker. This provides an initial safety barrier. Within the comment thread there is potential for people to be abusive, but that is not the aim. The aim is to talk about the research openly.

LJ: What are your long-term plans for MiniManuscript? What do you think its impact and role will be?

JF: We believe that MiniManuscript can become an accepted tool in the world of research. The site is particularly helpful in the preliminary stages of scientific projects (e.g. literature reviews, grant proposals, publications) and the discussion threads provide a great platform for post-publication peer review. We are creating a more efficient, open and connected culture within academia.

We want to evolve the site’s functionality around our users. Twitter is a great example of what I mean by this. When Twitter was first launched, the hashtag symbol (#) was just a keyboard character like any other. However, the founders quickly noticed that people were using the symbol to mark keywords or topics in their Tweets. The site soon began hyperlinking the hashtags and added trending topics to the homepage. Similarly, we are paying close attention to MiniManuscript users, and aim to build functionality around them.

LJ: On that note, what are your thoughts about open access?

NB Open access refers to the open and free availability of publicly-funded research outputs. Historically readers or institutions have had to subscribe to specific journals or publishers in order to access research papers. However, the culture is changing and many journals are ceasing to charge readers and are going ‘open access’.

JF: MiniManuscript approaches the open access debate in a radically different way. It’s not only about financial access, it’s also about conceptual access: making sure that the ideas in papers are easily understood. We can all be guilty of using unnecessary jargon, and this is particularly apparent in scientific writing. In a MiniManuscript text summary only the bare essentials of a paper are outlined. This promotes an environment where potentially complex concepts are explained in their simplest terms, making them more accessible.

LJ: How is MiniManuscript funded?

JF: So far we have managed to sustain the site through institutional grants. We won the UCL Bright Ideas Award 2012 and the Shell LiveWIRE Grand Ideas Award 2012, which provided the seed capital to develop, launch and promote the website. While advertising is brilliant and keeps much of the internet free, it is distracting – and distraction is the enemy of efficiency. So we are doing all we can to avoid it. Keeping MiniManuscript free from charges and adverts does mean that we rely on sponsorship and donations to keep going.

LJ: Fantastic!

On a lighter note, tell me an interesting fact about yourself?

JF: U2 once flashed my name up on screen at Glastonbury… it is a long story!


LJ: And finally, what are your plans for the future?

JF: I’m keeping my options open at the moment, so Mystic Meg would have a tricky time with me…!

LJ: Many thanks for talking to me. I hope MiniManuscript is a success!

Neuromarketing: The Future?

These days it seems everyone has jumped on the ‘neuro’ bandwagon. While this huge explosion of neuroscience is very exciting, it is difficult not to be skeptical, and a little concerned, about some neuro-related claims… Here I explore neuromarketing, asking what it is, and what the implications are for science and society. Can we really be sure its claims stand up to the test? And, if so, is it even a good thing?

What is Neuromarketing?

The term ‘neuromarketing’ was coined by Ale Smidts in 2002, and describes a new field of marketing research that aims to study consumers’ cognitive and emotional responses to marketing stimuli. Researchers in this field use neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), to measure changes in brain activity when participants posing as consumers view adverts or make decisions. fMRI measures changes in blood flow, which is thought to provide an indirect measure of brain activity; the underlying assumption is that when an area of the brain is in use, blood flow to that region increases. EEG records electrical activity along the scalp, which is thought to reflect neurons (brain cells) communicating with one another. The aim of utilising techniques such as these is to understand how and why consumers make decisions, and which parts of the brain may underpin this.
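To make the fMRI logic concrete: because the blood-flow response lags neural activity by several seconds, researchers commonly model the expected signal by convolving the stimulus timing with a haemodynamic response function (HRF). The sketch below is purely illustrative; the gamma-shaped HRF and all timings are rough assumptions, not a calibrated model:

```python
import math

def hrf(t):
    """Very rough gamma-shaped haemodynamic response, peaking around 5 s."""
    return t ** 5 * math.exp(-t) / 120.0

# Stimulus: an advert shown during seconds 10-20 of a 60-second run (1 s steps).
stimulus = [1.0 if 10 <= t < 20 else 0.0 for t in range(60)]
kernel = [hrf(t) for t in range(30)]

# Predicted blood-flow signal = stimulus timing convolved with the HRF.
predicted = [
    sum(stimulus[t - k] * kernel[k] for k in range(len(kernel)) if 0 <= t - k)
    for t in range(60)
]

peak_time = max(range(60), key=lambda t: predicted[t])
print(peak_time)
```

Under these assumed numbers the predicted signal peaks roughly ten seconds after the advert first appears, which illustrates why fMRI analyses model a delayed, smeared response rather than reading activity off at the moment of stimulus onset.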


Coca-Cola or Pepsi: Can Cultural Information Bias Our Preferences?

There is some evidence supporting marketing’s ability to alter consumer behaviour. Pepsi and Coca-Cola are very similar in chemical composition, yet consumers maintain a behavioural preference for one over the other. In a 2004 study by McClure and colleagues, participants were given the “Pepsi Challenge”, a blind taste test of Coca-Cola and Pepsi, whilst their brains were scanned using fMRI. In one condition participants did not know which drink was which; here 50% of participants chose Pepsi. In this condition, activity in a region of the brain associated with reward (ventromedial prefrontal cortex) was found to predict participants’ preference for Pepsi or Coca-Cola. In a second condition participants were told which drink was which; now three-quarters said that Coca-Cola tasted better! Interestingly, their brain activity also changed in this condition. Areas of the brain associated with high-level cognition and memory (dorsolateral prefrontal cortex, hippocampus and midbrain) were also recruited when participants sampled Coca-Cola, but not Pepsi. The authors suggested that participants may be relating Coca-Cola to their previous experiences and memories associated with it. The results were argued to indicate that preferences may be biased by cultural information.

What Is The Evidence?

Neuroscientists and cognitive scientists have raised concerns about the credibility of the enterprise. Whilst interesting findings may be established in the lab, what does this mean for the real world? Seeing a region of the brain associated with pleasure light up when people are presented with a picture of a particular product they like, does not necessarily translate into these people going and buying the product, or altering their buying activity on the basis of a clever advert. On the other hand, studies like that of McClure et al. demonstrate the potential of successful brands and marketing campaigns to influence our behaviour, and perhaps even perception. But one of the aims of neuromarketing is to investigate our reactions before a product is even launched, during the development stages. It remains unclear whether neuroimaging provides any better data than other marketing methods for such endeavors (see Ariely & Berns, 2010). There seems to be a misconception that neuroimaging techniques provide a portal into consumers’ minds and behaviour. These techniques are undeniably useful, but the underlying technology is still developing, and so the data generated must be considered carefully.

Neuroethics of Neuromarketing

Consumer advocacy organisations have raised ethical concerns about neuromarketing. Neuroethics is a new strand of neuroscience which, as the frontiers widen further and further, asks exactly these kinds of questions. In a 2008 publication Murphy et al. outlined two possible types of risk associated with neuromarketing: firstly, the protection of people directly involved in the research, and secondly, the protection of consumer autonomy if neuromarketing is found to be effective in predicting consumer behaviour.

The first is fairly straightforward to solve, providing those running the studies do so in a regulated manner in accordance with neuroimaging conducted for scientific or clinical purposes. However, concern has been raised over the lack of current regulation (see Ariely & Berns, 2010). For instance, there remains unease as to whether neuroimaging information will be used to discriminate against individuals or particular groups. Furthermore, scanning a large number of individuals always raises the possibility of detecting clinically abnormal results, which demands skilled interpretation and may require referral.

The latter, protecting consumer autonomy, has been labelled ‘stealth marketing’, referring specifically to the tools of neuromarketing, which may in the future “provide sufficient insight into humans neural function to allow manipulation of the brain such that the consumer cannot detect the subterfuge and that such manipulations result in the desired behaviour in at least some exposed persons” (Murphy et al., 2008). Whilst this is not currently possible with the technology available, if it were developed Murphy et al. (2008) argue it “would represent a major incursion on individual autonomy”. Putting scientific issues to the side for a moment… If companies are dead set on influencing us subliminally by studying our ‘subconscious’ responses to products or advertising campaigns, then are we left vulnerable? Whilst this does not seem likely to happen any time soon, it raises concerns about the widespread use of neuroimaging techniques. And, if these technologies were developed, it raises the question: will we be able to make fully informed decisions about which goods we buy and why?!

Hope or Hype?

It seems that there is a lot of promise and interest in the field of neuromarketing, but also a lot of unknowns and potential risks. The work of those such as McClure et al., raises interesting questions about how cultural information becomes embedded in the brain, and how the marketing of ideas affects decision-making, relevant to both scientific researchers as well as marketers. However, whether neuroimaging provides an efficient tool to answer this question is yet to be shown.

Some have argued that companies’ fixation with neuromarketing is a fad, and that it is too expensive to be rolled out in a big way. However, new enterprises such as NeuroSpire, which aims to offer neuroimaging techniques to companies at a cheaper cost by turning it into a “DIY affair” (Lecher, 2013; see ‘Popsci’ reference for original article), may represent its future. Opposing this burgeoning business either on scientific or ethical grounds may be futile. Thus, it seems that scientists and consumer advocacy organisations should seek to help companies to engage in neuromarketing in a safe and regulated manner.



Ariely, D., & Berns, G. S. (2010). Neuromarketing: the Hope and Hype of Neuroimaging in Business. Nature Reviews Neuroscience, 11(4), 284–292. DOI: 10.1038/nrn2795.

Lecher, C. (2013).

McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural Correlates of Behavioural Preference for Culturally Familiar Drinks. Neuron, 44, 379–387.

Murphy, E. R., Illes, J., & Reiner, P. B. (2008). Neuroethics of Neuromarketing. Journal of Consumer Behaviour, 7, 293–302. DOI: 10.1002/cb.252.

An Anthropologist On Mars: Will We Ever Reach A Consensus On Autism?

Autism has had a troubled history. From its birth in the 1940s to its present form and proposed changes, its conceptualisation and diagnosis remain controversial. With DSM-V set to kick in this year comes a new set of diagnostic guidelines, and rising concerns that mild forms of the disorder may no longer be recognised. Here I give a brief historical overview of autism, and discuss what the proposed changes in DSM-V may mean for people ‘on the spectrum’.

“Life on Mars” and Beyond: What is Autism?

Autism is a neurodevelopmental disorder, characterised by a ‘triad of impairments’ in social interaction, communication, and repetitive or restricted behaviours and interests. Temple Grandin, a noted academic and autism activist, famously described her inability to understand the social world as leaving her feeling “like an anthropologist on Mars”. Impairments in communication have been identified at levels ranging from basic language acquisition up to the sophisticated linguistic level of understanding intention and sarcasm. People with autism may take things literally, sometimes causing huge misunderstanding. For instance, the expressions “You’re pulling my leg”, “Have you changed your mind?” or “It caught my eye” might cause confusion, and even concern, to someone who has difficulty interpreting non-literal language! Restricted behaviours and interests may include repetitive movements (known as stereotypy), compulsive behaviours, a resistance to change, or ritualistic behaviours. Other aspects, such as atypical eating or sensory sensitivity, are also common but are not essential criteria for a diagnosis.

From Greek Beginnings to London: a Brief History of Autism

In 1943 child psychiatrist Leo Kanner was the first to clearly define autism. He recognised 11 children in his clinic in Baltimore who showed virtually no interest in other people or the outside world. He described this profile as ‘autistic aloneness’, borrowing the term ‘autism’ from the Swiss psychiatrist Eugen Bleuler, who had used it to describe the inward, self-absorbed aspects of adults with schizophrenia – the term comes from the Greek word ‘autos’, meaning ‘self’. Kanner did not however consider autism an infantile form of schizophrenia, recognising that the clinical symptoms were distinct and present from birth.

In Chicago in the 1960s Bruno Bettelheim portrayed children with autism as living in a ‘glass bubble’. He controversially viewed autism as a severe reaction to an unaffectionate maternal relationship, leading to a drastic form of treatment called ‘parentectomy’ – the removal of the child from his or her parents. This was done in the hope that the child’s development would improve once removed from the supposedly hostile and uncaring home environment, but no evidence was found to support this, and his theory fell into disrepute. However, sadly this view stuck for some time.

In the period between the 1940s and late 1980s autism was conceptualised as a rare and categorically distinct disorder, setting people with autism apart from the rest of the population. Lorna Wing, a psychiatrist based in London and mother of a child with autism, was a founding member of the National Autistic Society. In the 1980s she suggested that the prevailing view of autism as a rare categorical disorder was out-dated. Her research indicated that autism was a spectrum, and far more common than previous estimates had indicated.

The Current State of Affairs: Are We All On A Spectrum?

In recent years Wing’s view of autism has been widely taken up, and developed further by academics and clinicians such as Uta Frith and Christopher Gillberg. Autism is now considered a spectrum condition, meaning that while all people with autism share certain difficulties, the severity of the condition and how it affects an individual vary widely. This range is enormous, spanning the gulf between non-verbal individuals who may have accompanying learning disabilities and require a lifetime of professional support, and extremely high-functioning individuals who are actively recruited in the technology industry for their specialist skills.

From a diagnostic point of view, clinicians have become more sensitive to the many faces of autism, able to identify core deficits in social and communication skills in the absence of more global disability. And, with this broadening of the autistic spectrum, different sub-types have emerged. At the milder end of the spectrum we have Asperger Syndrome (AS). In 1944 Hans Asperger, an Austrian paediatrician, published a definition of ‘autistic psychopathy’. He was working in isolation from Kanner, and it is unclear if they were even aware of each other’s work. Asperger identified four boys with a pattern of behaviour and abilities that included a lack of empathy, reduced ability to form friendships, one-sided conversations, and intense absorption in a special interest. Asperger described these children as “little professors” because of their ability to talk about their favourite subject at length and in great detail. Asperger’s work was written in German, and he died before his identification of this pattern of behaviour became widely recognised. In the early 1990s AS gained some interest following Wing and Frith’s research on a recent translation, which led to the inclusion of the condition in DSM-IV (the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition – more about this later!) in 1994, exactly half a century after the original research. Despite this resurgence of interest, AS remained a controversial and contentious diagnosis due to its unclear relationship to the autistic spectrum.

The label AS is currently given to people who are of average, or above average, intelligence, who fit other autistic criteria, but do not display difficulties in language. Individuals who fit this brief, but have a concurrent history of language delay and/or impairment, are labelled with high-functioning autism (HFA). A huge amount of research has been conducted to examine the relationship between AS and the autistic spectrum, and the differences between AS and HFA. To summarise this vast literature briefly, both people with HFA and AS are affected by the ‘triad of impairments’ common to all people with autism. Both groups are likely to be of average or above average intelligence. However, there may be features such as age of onset, language impairment and motor skill deficits which differentiate the two conditions. Beyond childhood, however, HFA and AS are thought to be relatively indistinguishable.

The view of an autistic spectrum has been expanded even further in the last decade or so. Simon Baron-Cohen developed a questionnaire (The Autism Quotient) to measure autistic traits for the purposes of screening those with AS or HFA. Research using such measures has recently suggested that the traits may be normally distributed in the population, and that those with an autistic spectrum condition diagnosis sit at the tail end.
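The idea that autistic traits are normally distributed, with diagnosed individuals at the tail, can be illustrated by calculating how much of the population lies beyond a screening cut-off. The mean, standard deviation and cut-off below are illustrative round numbers, broadly in the region of published AQ figures but not exact norms:

```python
import math

def fraction_above(cutoff, mean, sd):
    """Fraction of a normal distribution lying above a cut-off score."""
    z = (cutoff - mean) / sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# Hypothetical AQ-like scores: population mean 17, SD 6, screening cut-off 32.
print(round(fraction_above(32, 17, 6), 3))  # → 0.006
```

Under these assumed numbers, well under one per cent of the general population would score above the cut-off, consistent with the picture of those with a diagnosis sitting at the extreme tail of a continuous trait distribution.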

Re-defining Autism: DSM-V

The debate as to whether we need two diagnostic terms for AS and HFA has been raging. Researchers have been trying to establish whether there is indeed a continuous spectrum of autistic traits, where the clinical cut-off should be, and how we should conceptualise these different labels. DSM-V is due in May of this year and, as the culmination of a 14-year review process, big changes are once again planned for how we view and diagnose autism.

But what is DSM, and what do the changes mean? DSM is a publication of the American Psychiatric Association, which provides a common language and standard criteria for the classification of all mental disorders. The DSM is used in the US, and to varying degrees around the world. It is used by clinicians and researchers, and relied upon by psychiatric drug regulation agencies, health insurance companies, pharmaceutical companies and policy makers. So, it has a huge influence on many aspects of how those with mental disorders are classified and interact with the world.

In DSM-V (the 5th edition of the manual, due out in May of this year) the separate diagnostic labels of autism or classic autism, HFA and AS will be replaced by one umbrella term, “Autism Spectrum Disorder” (ASD). This spectrum will include a variety of clinical identifiers and associated features, which will be used to indicate severity in each domain and describe an individual’s clinical presentation. The DSM-V revision website states that the umbrella term of ASD has been proposed in order to address issues with diagnostic reliability and overlap between the subtypes. It appears that under the current system inter-rater reliability was poor (i.e. different clinicians diagnose the same person with different disorders), and that even intra-rater reliability was a problem (i.e. the same clinician diagnoses the same symptom profile differently across time). Furthermore, there is a lack of evidence supporting differential outcomes or difficulties for those with different sub-types, suggesting that autism is defined by a common set of behaviours and should be characterised by a single name, graded according to severity.

Revisions to the specific criteria required to receive a diagnosis of ASD have also been made. Firstly, the new criteria are more thorough and stricter than before. Secondly, the social interaction and communication domains of impairment will be combined into one, titled “Social/Communication Deficits”. Finally, a delay in language development is no longer required for a diagnosis.

The Implications

The revisions to DSM-V are proposed in the hope of making the diagnosis of ASD more specific, reliable, and valid. Whilst these revisions are based on research, analysis, and expert opinion, various high-profile people have spoken out highlighting potential pitfalls. It is the removal of AS which has sparked the most attention. Individuals who currently hold this diagnosis may need re-evaluation or further treatment and condition management. At this stage, they will probably receive a different diagnosis, which has the potential to be very confusing, especially for individuals who identify strongly with their diagnosis. Furthermore, with the tightening up of specific criteria there are legitimate concerns that people who are currently diagnosed with one of the higher-functioning (HFA/AS) subtypes will no longer meet the stricter diagnostic criteria, and may experience difficulties qualifying for and accessing services. There is a lot of anxiety and uncertainty surrounding how policy makers and insurance companies will interpret these revisions, and how state funding and educational services will adopt the changes.

In recent times autism has come to represent many things, and is even considered a positive asset in some areas of life. With the advent of Baron-Cohen’s Autism Quotient, it has even become very on-trend to remark that ‘we are all a bit autistic’. Returning to the history of autism briefly… back in the 1940s Hans Asperger noticed that many of the children he identified as autistic used their special talents in adulthood and went on to have successful careers. Indeed, one of them became a Professor of Astronomy and another won a Nobel Prize in Literature. A modern equivalent of this is the German company ‘Auticon’, which recognises the specialist talents of many people with AS despite their limitations. They employ people with AS as software testers and offer their services to companies.

What is clear is that these changes will have a big impact on people with ASD and their families. It remains to be seen how clinicians will use the diagnostic criteria, and whether the revisions will lead to improvements in our understanding of and provision for ASD or not. I think there is a conflict between recognising the talents and strengths of those on the spectrum, through programs such as that of Auticon, and not dismissing the specific but pervasive challenges that being a high-functioning person with autistic symptoms (whatever the label!) poses. Re-writing the definition of autism again may, or may not be the answer, only time will tell. But, I don’t think the story ends here… 

The Modern Freak Show? Our Obsession With The Weird and Wonderful.

The recent slew of TV programmes about ‘weird and wonderful’ people raises interesting questions. What is the purpose of these shows: to inform and educate, or to entertain? From “The Town That Caught Tourette’s” to “Obsessive Compulsive Hoarders”, 4oD is awash with documentaries about extraordinary people. But they aren’t just extraordinary: some of the subjects of these shows are very, very ill. Some of them struggle to function in everyday life as a result of their conditions or disabilities.

“The Undateables” is a Channel 4 documentary that has become a hit show, and has just been commissioned for its third series. The premise is that people living with challenging conditions are often considered ‘undateable’. The series meets some of these people and follows them on their quest for love. So what is the point of putting cameras into their lives to document their perspectives and struggles? Is it to give us a right old laugh at their expense, or to sympathise with their needs? And is it all doom and gloom, or can they inspire us by demonstrating a positive outlook in often dire circumstances?

It features a range of people with a range of difficulties. We have met a couple of Tourette’s sufferers, whose attempts to suppress their less-than-appropriate-date-time tics are excruciating to watch. On the other hand, it is also refreshing to see this condition represented in a serious light. The coverage of people with Tourette’s Syndrome tends to focus on the hilarity of involuntarily shouting obscenities. Having worked with people with Tourette’s I know this is far from the truth; it not only affects their love lives but every single aspect of their daily lives.

We have also met a couple of people with Autistic Spectrum Disorders: from Michael, whose conversation relies on catchphrases he has rote-learnt, or prompts on his phone from his mum, to Richard, to whom the prospect of dating someone from outside a 5-mile radius is terrifying. Again, I find a tension between documenting how what is mundane and everyday to you and me is extremely challenging for these individuals, and exploiting their difficulties to pull in viewers like spectators at a circus.

As one would expect, public opinion has been mixed on this… a quick search on Twitter reveals an awful lot of encouraging tweets such as “I really do love the #Undateables …they are all inspiring people…”*. On the other hand, there is also some rather less ‘positive’ exposure, for example, “I got my mum a bunch of flowers. I am sitting at the front of the bus with them looking like a fat virgin on his first date #Undateable.”*

*Tweets have been paraphrased to protect author anonymity!

The advertising watchdog has received complaints that the show is offensive towards disabled people, and encourages stereotyping and bullying. The featured individuals’ ability to give informed consent has also been called into question. While Channel 4 argue that they hope to change perceptions of disability, the show has been attacked in the media for clearly setting up a distinction between disabled people and non-disabled people. Surely the way to address this is to normalise rather than emphasise the differences between us all? Whilst showing disabled people dating is a rather radical concept, the show capitalises on the paramount difficulties this poses the individuals featured. Instead it should focus on the fact that everyone, disabled and non-disabled alike, wants to find “the one”, and that for many people it is a challenging and demoralising experience!