Almost every child goes through a dinosaur phase. In some cases, it’s a frenzied week of roaring and leaving spiky plastic models all over the floor, before a combination of sore feet and a sore throat drives you on to the next stage of development. In my case, it lasted about five years. I owned sacks of dinosaur toys, a library’s worth of dinosaur books, and irritated my friends by criticising the accuracy of their dinosaur games (You can’t play with a dinosaur from the Cretaceous and a dinosaur from the Jurassic at the same time. You just cannot.) Eventually, peer pressure made me decide that dinosaurs were for little kids, and I forgot about them for a decade or so.
But last year, I took a module in Palaeobiology – the study of extinct organisms – as part of my degree. I was back in the realm of dinosaurs: older, wiser, but still embarrassingly excited. Then, as I delved deeper into my external reading, I found some papers that shook my world, shattered my dreams, and generally slapped my childhood in the face. My dinosaur books had been lying to me about my favourite dinosaur of all time: Deinonychus.
Deinonychus (pronounced Die-NON-ik-uss) was a mean guy. Resembling its smaller, superstar cousin the Velociraptor, Deinonychus nonetheless has its own claims to fame.
Before the 1960s, scientists took a pretty dim view of dinosaurs. The consensus was that they were all stupid, sluggish and cold-blooded, and probably died out because they couldn’t cope with the same challenges that we sleek, sexy mammals can. But that view started to fall apart when John Ostrom took a closer look at Deinonychus. He suggested that these animals were speedy, intelligent pack-hunters who worked together to bring down large prey, using the fearsome sickle-shaped claw on each foot to disembowel their victims. Like wolves. Slashy Captain Hook wolves. This image of Deinonychus helped create a revolution in the way that we think about dinosaurs, and it was still championed in all my dinosaur books. As the sort of child who didn’t bat an eyelid at the bloodiest scenes of Watership Down, it inspired me. Over several years, I built up a portfolio of really creepy drawings of dinosaurs killing each other, made with nothing but a pencil and a red felt-tip pen, and ravaging packs of Deinonychus featured heavily in my “art”. On reflection, I feel lucky that my parents didn’t refer me to a child psychologist.
But in 2006, long after I’d abandoned dinosaurs in favour of blushing at teenage boys, some scientists decided to test out the theories about those fearsome feet. Phillip Manning and his team built an accurate hydraulic model of a Deinonychus leg, complete with terror-claw, and made it kick a pig carcass that had kindly volunteered to play the part of an herbivorous dinosaur. Yet far from slicing the carcass into ribbons of sandwich ham, the claws were AWFUL at doing any sort of tearing damage. Instead, they created small shallow puncture wounds that did very little to the surrounding tissue, let alone the internal organs. Not so much a river of blood and gore, then: if Deinonychus behaved like my books said, then the herbivores probably walked away with mildly painful wounds that cleared up in a week. Something else was going on with these bizarre claws. Stumped, Manning suggested that Deinonychus could have used its claws like crampons, allowing it to climb onto the backs of large prey and attack from there. So my vision of dramatic battles between massive herbivores and a fearsome pack of predators wasn’t totally shattered… yet.
It was thanks to a guy called Denver Fowler that my artwork really faded into fantasy. He noticed that modern eagles and hawks (known as raptors) also have one claw bigger than the others on their feet. However, you’ll never see a pack of eagles descending onto a cow in a field and slashing it to death, nor do they need climbing aids. These birds hunt by swooping onto smaller animals, then picking them to bits with their beaks, often while the prey is still alive. A struggling animal could be very dangerous to a bird of prey, potentially breaking its fragile bones, so it’s vital for the raptor to keep it pinned down firmly. This is where that claw comes in. By clamping down with their powerful modified talon, raptors immobilise their prey, allowing them to concentrate on their (very fresh) meal without distraction. Fowler compared the feet of raptors with those of their ancient cousin, Deinonychus, and found many similarities in their anatomy. The flexibility of the toe bearing that large claw may have come in handy not for delivering slashes, but for swivelling down into a death grip on small prey. That’s right: small prey. Those epic clashes I’d envisioned between huge herbivores and fierce little predators seemed less and less feasible.
So how did Deinonychus ACTUALLY live? Fowler envisions a solitary predator that pursued animals smaller than or similar to its own size at high speed. It would then pounce on top of its victim and press it firmly to the ground, channelling its bodyweight through the tips of its powerful sickle-claws to prevent escape. Then it would have leaned forward and proceeded to rip its squirming dinner into bitesize chunks: gory, but not quite the image I’d held. Fowler hadn’t gone as far as to demonstrate that my favourite dinosaur was a peaceful vegetarian, but I have to admit, he’d stolen just a little bit of its badassery. This doesn’t mean Deinonychus stops being cool, though. In fact, it could teach us a lot about the early days of its modern relatives: the birds.
Fowler compared modern raptors with Deinonychus once more, and noticed how, when perching on struggling prey, raptors often beat their wings vigorously. This keeps the bird in a prime position on top of the prey, making sure its victim stays pressed to the ground. We’ve known for a while that many predatory dinosaurs like Deinonychus had feathers on their skin – perhaps the first chink to appear in their armour of terror. But scientists have long argued about how the particular lineage of feathery dinosaurs that evolved into birds first developed the “flight stroke”: the special high-powered downbeat of the wings that creates lift. Looking at Deinonychus inspired Fowler to come up with a new theory. If dinosaurs also stability-flapped their feathered arms when making a kill, then over the generations this could have selected for greater upper-body strength and the ability to beat the arms hard and fast – features that would later come in very useful when their descendants took to the air. Although Deinonychus was not a direct ancestor of birds (it appeared long after the first flying dinosaurs), it was closely related to them, so it’s likely that they shared similar behaviour. So by looking at how Deinonychus might have hunted, we can take steps towards unravelling one of the biggest, most controversial mysteries in all of Palaeobiology.
In future, then, perhaps we’ll look back on Deinonychus as triggering a second revolution in how we see the dinosaurs. If I told that to my 7-year-old self, I hope she’d have been consoled. Deinonychus… you might not be the psycho-killer of my imagination, but you’re still cool to me.
Naked creepy Deinonychus: By Mistvan (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons
Fluffy Deinonychus: By Peng (own work) [GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons
Despite the widespread availability of evidence-based medicine in the western world, ‘alternative medicines’ are still commonly used. Such medicines are usually inspired by pre-scientific medical practices that have been passed down through generations. However, many established medical treatments also arise from traditional medical practices. For example, the use of aspirin as an analgesic (painkiller) has its roots in the use of tree bark for similar purposes throughout history. The difference between established medicines like aspirin, and alternative medicines such as homeopathy, is that the former have been found to be effective when exposed to rigorous scientific trials.
A form of alternative medicine that has recently been subjected to scientific scrutiny is the use of magnetic bracelets as a method of analgesia. If effective, such therapies would provide cheap and easy-to-implement treatments for chronic pain such as that experienced in arthritis. Unfortunately, there is little evidence of such treatments being effective. A meta-analysis of randomised clinical trials looking at the use of magnet therapy to relieve pain found no statistically significant benefit to wearing magnetic bracelets (1). However, it can be argued that existing clinical trials may have been hampered by the difficulty of finding a suitable control condition.
The placebo effect
The ‘placebo effect’ is a broad term used to capture the influence that knowledge concerning an experimental manipulation might have on outcome measures. Consider a situation where you are trying to assess the effectiveness of a drug. To do this, you might give the drug to a group of patients and compare their subsequent symptomatology to that of a control group of patients who do not get the drug. However, even if the drug group shows an improvement in symptoms compared to the control group, you cannot be certain whether this improvement is due to the chemical effects of the drug. This is because the psychological effects of knowing you are receiving a treatment may produce a beneficial effect on reported symptoms which would be absent from the control group. The solution to this problem is to give the control group an intervention that resembles the experimental treatment (e.g. a sugar pill instead of the actual drug). This ensures that both groups are exposed to the same treatment procedure, and therefore should experience the same psychological effects. Indeed, this control treatment is often referred to as a ‘placebo’ because it is designed to control for the placebo effect. The drug must exhibit an effect over and above the placebo treatment in order to be considered beneficial.
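The arithmetic behind this design can be sketched with some made-up numbers (the improvement scores below are purely illustrative, not from any real trial):

```python
# Illustrative (made-up) mean symptom-improvement scores for three groups.
# The naive drug-vs-nothing comparison mixes the chemical effect of the drug
# with the placebo effect; only the drug-vs-placebo comparison isolates it.
no_treatment = 2.0  # mean improvement with no intervention at all
placebo = 5.0       # mean improvement with a sugar pill
drug = 6.0          # mean improvement with the real drug

placebo_effect = placebo - no_treatment   # psychological benefit alone: 3.0
naive_drug_effect = drug - no_treatment   # looks impressive: 4.0
true_drug_effect = drug - placebo         # effect over and above placebo: 1.0

print(placebo_effect, naive_drug_effect, true_drug_effect)  # 3.0 4.0 1.0
```

On these numbers, most of the apparent benefit of the drug is placebo; a trial without a placebo arm would overstate its effect fourfold.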
A requirement for any study wishing to control for the placebo effect is that the participants must be ‘blind’ (i.e. unaware) as to which intervention (treatment or placebo) they are getting. If a participant is aware that they are getting an ineffective placebo treatment, the positive psychological benefits of expecting an improvement in symptoms are likely to disappear, and thus the placebo won’t genuinely control for the psychological effects of receiving an intervention.
A placebo for magnetic bracelets
The obvious placebo for a magnetic bracelet is an otherwise identical non-magnetic bracelet. However, the problem with using non-magnetic bracelets as a control is that it is easy for participants to identify which intervention they are getting, as it is easy to distinguish magnetic from non-magnetic materials. This can be illustrated by considering a clinical trial which appeared to show that magnetic bracelets produce a significant pain-relief effect (2). In this study participants wore either a standard magnetic bracelet, a much weaker magnetic bracelet or a non-magnetic (steel) bracelet. The standard magnetic bracelet was only found to reduce pain when compared to the non-magnetic bracelet. However, the researchers also found evidence that participants wearing the non-magnetic bracelet became aware that it was non-magnetic, and therefore could infer that they were participating in a control condition. This suggests that the difference between conditions might be due to a placebo effect, as the participants weren’t blind to the experimental manipulation.
This failure of blinding was not present for the other control condition (the weak magnetic bracelet), presumably because these bracelets were still somewhat magnetic. As no statistically significant difference was found between the standard and weak magnetic bracelets, it could be concluded that magnetic bracelets have no analgesic effect. However, it could also be argued that if magnetism does reduce pain, the weaker bracelet may have provided a small beneficial effect which might have served to ‘cancel out’ the effect of the standard magnetic bracelet. The study could therefore be considered inconclusive, as neither of the control conditions was capable of isolating the effect of magnetism.
More recent research
Recent clinical trials conducted by researchers at the University of York have tried to solve the issue of finding a suitable control condition for magnetic bracelets. Stewart Richmond and colleagues (3) included a condition where participants wore copper bracelets, in addition to the three conditions used in previous research, while researching the effect of such bracelets on the symptoms of osteoarthritis. As copper is non-magnetic, it can act as a control in testing the hypothesis that magnetic metals relieve pain. However, as copper is also a traditional treatment for pain, it does not have the drawback of the non-magnetic bracelet regarding the expectation of success: participants are likely to have the same expectation of a copper bracelet working as they would for a magnetic bracelet.
The study found no significant difference between any of the bracelets on most of the measures of pain, stiffness and physical function. The standard magnetic bracelet did perform better than the various controls on one sub-scale of one of the three measures of pain taken. However, this isolated positive effect was considered likely to be spurious because of the number of comparisons relating to changes in pain that were performed during the study (see 4). The same group has recently published an almost identical study relating to the pain reported by individuals suffering from rheumatoid arthritis rather than osteoarthritis (5). Using measures of pain, physical function and inflammation, they again found no significant differences in effect between the four different bracelet types.
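The multiple-comparisons point can be made concrete with a little arithmetic. If each comparison uses the conventional 5% significance threshold, the chance of at least one spurious ‘significant’ result grows quickly with the number of comparisons (a simplified sketch, since it treats the tests as independent, which real correlated outcome measures are not):

```python
def familywise_error(n_tests, alpha=0.05):
    """Probability of at least one false positive across n independent
    significance tests, each run at threshold alpha."""
    return 1 - (1 - alpha) ** n_tests

# One test keeps the false-positive risk at 5%, but across many sub-scales
# and measures the risk of at least one spurious 'hit' grows quickly.
for n in (1, 5, 10, 20):
    print(n, round(familywise_error(n), 3))  # 0.05, 0.226, 0.401, 0.642
```

So an isolated positive result on one sub-scale out of many is roughly what chance alone would be expected to produce.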
The existing research literature seems to suggest that magnetic bracelets have no analgesic effect over and above a placebo effect. The use of a copper bracelet overcomes some of the problems of finding a suitable control condition against which to compare magnetic bracelets. One argument against using copper bracelets as a control is that, as they are themselves sometimes considered an ‘alternative’ treatment for pain, they may also have an analgesic effect. Such an effect could potentially cancel out any analgesic effect of the magnetic bracelets when statistical comparisons are performed. However, copper bracelets did not perform any better than the non-magnetic steel bracelets in either study (3, 5), despite the potential additional placebo effect that might apply during the copper bracelet condition. Indeed, on many of the measures of pain the copper bracelet actually performed worse than the non-magnetic bracelet. The copper bracelet can therefore be considered a reasonable placebo to use in research testing the analgesic effect of magnetic bracelets.
Despite the negative results of clinical trials, it may be wise not to entirely rule out a potential analgesic effect of magnetic bracelets. Across all three studies (2, 3, 5) the measures of pain were generally lowest in the standard magnetic bracelet group. Indeed, significant effects were found in two of the studies (2, 3), although these were confounded by the aforementioned problems concerning control conditions and multiple comparisons. Nevertheless, it could be argued that, given the existing data, magnetic bracelets may have a small positive effect, but that this effect is not large or consistent enough to produce a statistically significant difference in clinical trials. This theory could be tested by conducting trials with far more patients (and thus greater statistical power), or by using a number of bracelets of differing magnetic strengths to see if any reported analgesic effect increases with the strength of the magnetic field. Until such research is performed, it is best to assume that magnetic bracelets do not have any clinically relevant analgesic effect.
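The point about statistical power can also be sketched numerically. Using a standard normal approximation for a two-sided two-sample test, the power to detect a small standardised effect (Cohen’s d = 0.2 here, a value chosen purely for illustration) rises with the number of patients per group:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05
    for standardised effect size d with n patients in each group."""
    z_crit = 1.96  # two-sided 5% critical value
    shift = d * math.sqrt(n_per_group / 2)
    return normal_cdf(shift - z_crit)

# A small effect (d = 0.2) is easily missed at typical trial sizes.
for n in (50, 200, 800):
    print(n, round(power_two_sample(0.2, n), 2))  # 0.17, 0.52, 0.98
```

In other words, a trial with 50 patients per arm would detect a genuinely small effect less than one time in five, so null results from modest trials cannot entirely exclude one.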
Image courtesy of FreeDigitalPhotos.net
(1) Pittler MH, Brown EM, Ernst E. (2007) Static magnets for reducing pain: systematic review and meta-analysis of randomized trials. CMAJ 2007;177(7):736—42.
(2) Harlow T, Greaves C, White A, Brown L, Hart A, Ernst E. (2004) Randomised controlled trial of magnetic bracelets for relieving pain in osteoarthritis of the hip and knee. BMJ 329(7480):1450—4.
(3) Richmond SJ, Brown SR, Campion PD, Porter AJL, Klaber Moffett JA, et al. (2009) Therapeutic effects of magnetic and copper bracelets in osteoarthritis: a randomised placebo-controlled crossover trial. Complement Ther Med 17(5–6): 249–56.
(5) Richmond SJ, Gunadasa S, Bland M, MacPherson H (2013) Copper Bracelets and Magnetic Wrist Straps for Rheumatoid Arthritis – Analgesic and Anti-Inflammatory Effects: A Randomised Double-Blind Placebo Controlled Crossover Trial. PLoS ONE 8(9).
Lying, the deliberate attempt to mislead someone, is a process that we all engage in at some time or another. Indeed, research has found that the average person lies at least once a day, suggesting that lying is a standard part of social interaction (1). Despite its common occurrence, lying is not an automatic process. Instead it represents an advanced cognitive function; a skill that requires more basic cognitive abilities to be present before it can emerge. To lie, an individual first needs to be able to appreciate the benefits of lying (e.g. a desire to increase social status) so that they have the motivation to behave deceitfully. Successful lying also requires ‘theory of mind’, or the ability to understand what another person knows. This is necessary so that the would-be liar can spot firstly the opportunity to lie, and secondly what sort of deception might be required to produce a successful lie. Finally, lying also requires the ability to generate a plausible and coherent, but nonetheless fabricated, description of an event. Given these prerequisites, it is unlikely that we are ‘born liars’. Instead, the ability to lie is believed to develop sometime between the ages of 2 and 4 (2). The fact that the ability to lie develops over time suggests that our performance of the ‘skill’ of lying should be sensitive to practice. Do people who lie more often become better at it?
Lying is tiring!
Lying is considered more cognitively demanding than telling the truth due to the extra cognitive functions that need to be utilised to produce a lie. The idea that lying is cognitively demanding is supported both by behavioural data showing that deliberately producing a misleading response takes longer, and is more prone to error, than producing a truthful response (3), and by neurological data showing that lying requires additional activity in the prefrontal areas of the brain when compared to truth telling (4). These observable differences between truth telling and lying allow a measure of ‘lying success’ to be created. For example, a successful, or skilled, liar should be able to perform lies more quickly and accurately than a less successful liar, perhaps to the extent that there is no noticeable difference in performance between truth telling and lying in such individuals. Likewise, if the ability to lie is affected by practice, then practice should make lies appear more like the truth in terms of behavioural performance.
Practice makes perfect (but is this a lie)?
Despite the intuitive appeal of the idea that lying becomes easier with practice, much past research has failed to find an effect of practice on lying, either when measuring behavioural (3) or neuroimaging (5) markers of lying. Such results have led to the conclusion that lying may always be significantly more effortful than truth telling, no matter how practiced an individual is at deception.
A recent study (6) has re-examined this issue. The researchers used a version of the ‘Sheffield Lie Test’, in which participants are presented with a list of questions that require a yes/no response (e.g. ‘Did you buy chocolate today?’). The experiment involved three main phases. In the first, baseline phase, participants were required to respond truthfully to half the statements and to lie in response to the other half. In the middle, training phase, the statements were split into two groups. For a control group of statements, the proportion that required a truthful response remained at 50% for all participants. For an experimental group of statements, the proportion that required a truthful response was varied between participants: participants had to lie in response to 25%, 50% or 75% of these statements, thus giving them differing levels of ‘practice’ at lying. The final, test phase was a repeat of the baseline phase. This design allowed two research questions to be assessed. Firstly, the researchers could identify whether practice at lying reduced the ‘lie effect’ on reaction time and error rate (i.e. the increased reaction time and error rate that occur when a participant is required to lie, compared to when they are required to tell the truth). Secondly, the researchers could identify whether any reduction in the lie effect applied just to the statements on which the groups had experienced differing practice levels, or whether it also generalised to those statements where all groups had the same level of practice.
The results revealed that practice did produce an improvement in the ability to lie during the period when the training was actually taking place, and that this improvement applied to both the control statements and the experimental statements. The participants who had to lie more demonstrated reduced error rates and reaction times compared to those who had to lie less during the training phase. However in the test phase this improvement was only maintained for the set of statements where the frequency of lying had been manipulated. The group who had practiced lying on 75% of the experimental statements were no faster or more accurate at lying on the control statements than the group who had to lie in response to just 25% of the experimental statements. These results suggest that practice can make you better at lying, but this improvement is only sustained over time for the specific lies that you have rehearsed.
Some lies may be better than others!
One important criticism of most studies on the effect of practice on lying is that they tend to use questions or tasks that require binary responses (i.e. yes/no questions). However, in real life lying often involves the concoction of complex false narratives, a form of lying that is likely to be far more cognitively demanding than just saying ‘No’ in response to a question whose answer is ‘Yes’. Likewise, the lies tested in laboratory studies tend to be rehearsed, or at least prepared, lies. In contrast, many real-life lies are concocted at short notice, with the deceptive narrative being constructed in ‘real time’, whilst the person is in the process of lying. It is likely that the effect of training, and how that training generalises to other lies, will be different for these more advanced forms of lying than it is for the simpler types of lies that tend to be tested under laboratory conditions. Given this, if a psychologist tells you that we know for certain how practice impacts on the ability to deceive, you can be sure that they are lying!
(1) DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M. & Epstein, J.A. (1996) Lying in everyday life. Journal of Personality and Social Psychology, 70 (5) 979-995. http://smg.media.mit.edu/library/DePauloEtAl.LyingEverydayLife.pdf
(2) Ahern, E.C., Lyon, T.D. & Quas, J.A. (2011) Young Children’s Emerging Ability to Make False Statements. Developmental Psychology. 47 (1) 61-66. http://www.ncbi.nlm.nih.gov/pubmed/21244149
(3) Vendemia, J.M.C., Buzan,R.F., & Green,E.P. (2005) Practice effects, workload and reaction time in deception. American Journal of Psychology. 5, 413–429. http://www.jstor.org/discover/10.2307/30039073?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21101917386241
(4)Spence, S.A. (2008) Playing Devil’s Advocate: The case against MRI lie detection. Legal and Criminological Psychology 13, 11-25. http://psychsource.bps.org.uk/details/journalArticle/3154771/Playing-Devils-advocate-The-case-against-fMRI-lie-detection.html
(5) Johnson,R., Barnhardt,J., & Zhu, J.(2005) Differential effects of practice on the executive processes used for truthful and deceptive responses: an event-related brain potential study. Brain Research: Cognitive Brain Research 24, 386–404. http://www.ncbi.nlm.nih.gov/pubmed/16099352
(6) Van Bockstaele, B., Verschuere, B., Moens, T., Suchotzki, K., Debey, E. & Spruyt, A. (2012) Learning to lie: effects of practice on the cognitive cost of lying. Frontiers in Psychology, November (3) 1-8. http://www.ncbi.nlm.nih.gov/pubmed/23226137
The age-old ‘nature-nurture’ debate revolves around understanding to what extent various traits within a population are determined by biological or environmental factors. In this context, ‘traits’ can include not only aspects of personality, but also physical differences (e.g. eye colour) and differences in vulnerability to disease. Investigating the nature-nurture question is important because it can help us appreciate the extent to which biological and social interventions can affect things like disease vulnerabilities, and other traits that significantly affect life outcomes (e.g. intelligence). The ‘nurture’ part of this topic can be dealt with to some extent by research in disciplines such as Sociology and Psychology. In contrast, genetic research is crucial to understanding the ‘nature’ part of the equation. Genetics also has relevance for the ‘nurture’ part of the debate, because environmental factors such as stress and nutrition affect how genes perform their function (gene expression). Indeed, genetic and environmental factors can interact in more complex ways; certain genetic traits can alter the probability of an organism experiencing certain environmental factors. For example, a genetic trait towards a ‘sweet tooth’ is likely to increase the chances of the organism experiencing a high-sugar diet!
Given the importance of genetic information to understanding how organisms differ, I would argue that a basic knowledge of Genetics is essential for anyone interested in the ‘life sciences’. This is true whether your interest is largely medical, psychological or social. Unfortunately, if, like me, you skipped A-Level Biology for something more exciting (or for A-Level Physics, in my case!), you might find Genetics a bit of a mystery.
Some basic genetics
Genetic information is encoded in DNA (deoxyribonucleic acid). Sections of DNA that perform specific, separable functions are called genes. Genes are the units of genetic information that can be inherited from generation to generation. Most genes are arranged on long stretches of DNA called chromosomes, although a small proportion of genes are transmitted via the cell’s mitochondria instead. Most organisms inherit two sets of chromosomes, one from each parent. Different genes perform different functions, mostly involving the creation of particular chemicals, often proteins, which influence how the organism develops. All cells in the body contain the DNA for all genes, but only a subset of genes will be ‘expressed’ (i.e. perform their function) in each cell. This variation in gene expression between cells allows the fixed (albeit very large) number of genes to generate a vast number of different chemicals. This in turn allows organisms to vary widely in form while still sharing very similar genetic information (thus explaining how it can be that we share 98% of our DNA with monkeys, and 50% with bananas!).
The complete set of genetic information an individual has is called their ‘genotype’. The genotype varies between all individuals (apart from identical twins) and thus defines the biological differences between us. In contrast, the ‘phenotype’ is the complete set of observable properties that can be assigned to an organism. Genetics tries to understand the relationship between the genotype and a particular individual phenotype (trait). For example, how does the genetic information contained in our DNA (genotype) influence our eye colour (phenotype)? As already mentioned, environmental factors play a significant role in altering the phenotype produced by a particular genotype. Explicitly, the phenotype is the result of the expression of the genotype in a particular environment.
Roughly speaking, heritability is the influence that a person’s genetic inheritance has on their phenotype. More formally, it is the proportion of the total variance in a trait within a population that is attributable to genetic effects. It tells you how much of the variation between individuals can be attributed to genetic differences: a heritability of 60%, for example, means that 60% of the variation in the trait across the population reflects genetic differences. Note that this is not the same as saying that 60% of an individual’s trait is determined by genetic information. In narrow-sense heritability (the most common form used), what counts as ‘genetic effects’ is only that which is directly determined by the genetic information passed on by the parents. This ignores variation caused by the interaction between different genes, and between genes and the environment. This is the most popular usage of heritability in science because it is far more predictive of breeding outcomes, and therefore tells us more about the nature part of the ‘nature-nurture’ question, than the alternative (broad-sense) conceptualisation of heritability.
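The definition can be sketched numerically (the variance components below are made up for illustration): narrow-sense heritability is the ratio of additive genetic variance to total phenotypic variance.

```python
# Made-up variance components for a trait in some population.
V_A = 30.0  # additive genetic variance (passed directly from parents)
V_E = 15.0  # environmental variance
V_I = 5.0   # gene-gene and gene-environment interaction variance

V_P = V_A + V_E + V_I  # total phenotypic variance: 50.0
h2 = V_A / V_P         # narrow-sense heritability
print(h2)  # 0.6 -> 60% of the population's variation, not of any individual

# The same genes in a more variable environment give a lower heritability:
h2_new_env = V_A / (V_A + 45.0 + V_I)
print(round(h2_new_env, 3))  # 0.375
```

The second figure shows why heritability estimates are tied to a particular population in a particular environment: nothing about the genes changed, only the environmental variance.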
Uses and abuses
Genetic research can provide crucial information in the fight against certain diseases. Identifying genes that are predictive of various illnesses allows us to identify individuals who are vulnerable to a disease. This then allows preventive measures to be implemented to counter the possible appearance of the disease. Furthermore, once the genes that contribute to a disease are known, knowledge of how those genes are expressed will help reveal the cellular mechanisms behind the disease. This improves our understanding of how the disease progresses and operates, and therefore helps with identifying treatment opportunities. In reality, of course, Genetics is rarely this simple. Many conditions that have a genetic basis (i.e. that show a significant level of heritability) appear to be influenced by mutations within a large number of different genes. Indeed, in many cases, especially with psychiatric disorders, it may be that conditions we treat as one unitary disorder are in fact a multitude of different genetic disorders that have very similar phenotypes. Nevertheless, despite these problems, genetic research is helping to uncover the biological basis of many illnesses.
One problem with Genetics, and heritability in particular, is that of interpretation. There is often a mistaken belief that a high level of heritability signifies that environmental factors have little or no effect on a trait. This misunderstanding springs from ignorance of the fact that estimates of heritability come from within a particular population, in a particular environment. If you change the environment (or indeed the population) then the heritability level will change. This is because gene expression is affected by environmental factors, and so the influence of genetic information on a trait will always be dependent to some extent on the environment. As an example, a recent study showing that intelligence is highly heritable (1) led some right-wing commentators to use it as ‘proof’ of the intellectual inferiority of certain populations, because of their lower scores on IQ tests. Such an interpretation is then used to argue that policies relating to the equal treatment of people are flawed, because some people are ‘naturally’ better. Apart from the debatable logic of the argument itself, the actual interpretation of the genetic finding is flawed, because a high heritability of IQ does not suggest that environmental differences have no effect on IQ scores. To illustrate this point, consider that the study in question estimated heritability in an exclusively Caucasian sample from countries with universal access to education. If you expanded the sample to include those who did not have access to education, it would most likely reduce the estimate of heritability, as you would have increased the influence of environmental factors within the population being studied! Ironically, therefore, you could argue that only by treating everyone equally would you be able to determine who is truly stronger on a particular trait!
Independent of what your views on equality are, the most important lesson as regards genetics is that you cannot use estimates of heritability, however high, to suggest that differences in the environment have no effect on trait outcomes.
(1) Davies, G. et al (2011) Genome-wide association studies establish that human intelligence is highly heritable and polygenic. Molecular Psychiatry 16, 996-1005. http://www.nature.com/mp/journal/v16/n10/full/mp201185a.html
Although not directly cited, I found the following information useful when creating the post (and when trying to get my head around Genetics!).
Quantitative Genetics: measuring heritability. In Genetics and Human Behaviour: the ethical context. Nuffield Council on Bioethics. 2002. http://www.nuffieldbioethics.org/sites/default/files/files/Genetics%20and%20behaviour%20Chapter%204%20-%20Quantitative%20genetics.pdf
Visscher, P.M., Hill, W.G. & Wray, N.R. (2008) Heritability in the genomics era – concepts and misconceptions. Nature Reviews Genetics, 9 255-266. http://www.ncbi.nlm.nih.gov/pubmed/18319743
Bargmann, C.I. & Gilliam, T.C. (2012) Genes & Behaviour (Kandel, E.R. et al (Eds)). In Principles of Neural Science (Fifth Edition). McGraw-Hill.
Given that it is Halloween, it seems only right to discuss some recent psychology experiments relating to potential paranormal phenomena!
Can ‘psychics’ sense information others can’t?
Today the Merseyside Sceptics Society published the results of a ‘Halloween psychic challenge’. They invited a number of the UK’s top psychics* to attempt to prove their abilities under controlled conditions, although only two psychics accepted the invitation (1, 2). In the test each psychic had to sit in the presence of 5 different female volunteers who were not known to them. These volunteers acted as ‘sitters’ and the psychics had to attempt to perform a ‘reading’ on them, in effect to use their putative psychic powers to obtain information about the sitter’s life and personality. During the reading the psychic was separated from the sitter by a screen such that the psychic could not actually see the sitter. The psychics were also not allowed to talk to the sitters. These conditions ensured that any information the psychics retrieved was not gathered through processes that could be explained using non-psychic means (e.g. cold reading or semantic inference). The psychics recorded their readings by writing them down.
A copy of the 5 readings made by each psychic (one for each sitter) was given to each sitter, and they were asked to rate how well each reading described them, and which reading provided the best description. If the psychic abilities were genuine, then each sitter should rate the reading that was made for them as being most accurate. Of the 10 readings (from the 2 psychics for each of the 5 sitters) only 1 was correctly selected by the sitter as being about them; with each selection being a 1-in-5 guess, pure chance alone would be expected to produce about two correct identifications. Moreover the average ‘accuracy rating’ provided by the sitters (for the readings that were actually about them) was low for both psychics (approximately 3.2 out of 10). What of the one reading that a sitter did identify as an accurate description (see 1 for a full transcript of this reading)? It is noticeable that in this reading the statements (some of which were not accurate) were either very general, or could be inferred from the knowledge that the sitter was a young, adult female (e.g. ‘wants children’). The (correct) statement that most impressed the sitter (‘wants to go to South America’) was also pretty general, and is probably true of a decent proportion of young women. It can be safely concluded, therefore, that even this ‘accurate’ reading happened by chance.
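The chance baseline here is easy to compute. Treating each of the 10 selections as an independent 1-in-5 guess (my own back-of-envelope model, not a calculation from the published report), the expected number of correct identifications, and the probability of doing no better than the observed result, come straight from the binomial distribution:

```python
from math import comb

n, p = 10, 1 / 5  # 10 selections (5 sitters x 2 psychics), 1-in-5 guess each

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

expected_correct = n * p
print(expected_correct)  # 2.0 correct identifications expected by pure guessing

# Probability of one or fewer correct picks under pure guessing -- the
# observed result (1 of 10) is entirely unremarkable.
p_at_most_one = binom_pmf(0, n, p) + binom_pmf(1, n, p)
print(round(p_at_most_one, 2))  # 0.38
```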
In terms of the experimental design it is important to note that both psychics had, prior to the experiment, agreed to the methodology in the belief that they would be able to demonstrate their psychic powers under such conditions. Likewise both psychics expressed high confidence in the readings they gave during the experiment, suggesting that they didn’t think that anything which occurred during the experiment might have upset their psychic powers. The study could be criticised for its small sample size, although this is due to many psychics, including some of the better known ones like Derek Acorah and Sally Morgan, apparently refusing to take part. It could therefore be argued that despite the psychics involved in the study failing the test, other ‘better’ psychics might pass. However such an argument remains merely speculative until such psychics agree to take part in controlled studies.
Although these negative results may not be surprising, I still think it might be of interest to perform the experiment a different way. The problem with relying on the sitters’ ratings is that they may reflect the sitters’ attitudes concerning psychic abilities (although all the sitters were apparently open to the idea of psychic powers being genuine). For example even though the sitters were unaware of which reading was about them, they could theoretically have given a low rating to an accurate reading to ensure that no psychic abilities were demonstrated. A better methodology might be to get each sitter to provide a self-description, and then ask the psychic to choose the description that they think best fits their reading of the person. Such a test would also reduce the problems of interpreting the accuracy of the vague, general statements such as ‘wants children’ that psychics are prone to give. Another interesting idea would be to get psychics, along with non-psychics and self-confessed cold readers, to perform both a blind sitting (e.g. using a method similar to that described above) and a sitting where the reader can see and perhaps talk to the sitter. This could provide evidence to suggest whether claimed psychic abilities are really just a manifestation (even unintentionally) of cold-reading. If this were the case one would expect no difference in performance between the three groups in the blind test, but both the cold-readers and the psychics to perform better in the non-blind test (with no difference between psychics and cold readers in that condition).
Can we see into the future?
The second set of experiments that I wish to discuss are potentially more exciting because there is at least a hint of positive results. Instead of testing the telepathy that psychics claim to possess (i.e. the ability to transfer information without the use of known senses) these studies investigated the phenomenon of ‘retroactive influence’ in a random sample of participants. Retroactive influence is the phenomenon of current performance being influenced by future events. In effect it suggests that people can (at least unconsciously) see into the future!
In a series of 9 well-controlled experiments the psychologist Daryl Bem produced results that appear to show that participants’ responses in a variety of tasks were influenced by events that occurred after those responses had been made (3). What is most impressive about these results is that Bem used a succession of different paradigms to produce the same effect, ensuring that the effect was not just due to an artefact of one particular experimental design. In brief, this is what his results appear to demonstrate:
- Precognitive Detection: Participants had to select one of two positions in which they thought an emotive picture would appear on a computer screen. However, the computer randomly decided where to place the picture after the participant had made their selection. Nevertheless, participants’ performance suggested that they were able to predict the upcoming positions of the pictures at above-chance levels.
- Retroactive Priming: In priming, the appearance of one stimulus (the ‘prime’) just before a second stimulus that the participant has to perform a task on can either improve or worsen reaction time on that task, depending on whether the prime is congruent or incongruent with the second, ‘task’ stimulus. For example the appearance of a positive word prior to a negative image will slow reaction time on a valence classification task for the image (i.e. deciding whether the image is positive or negative), because the valence of the word is incongruent with the valence of the image. Bem’s results suggest that this reaction time effect also occurs when the prime is presented after both the image and the participant’s response to it.
- Retroactive habituation: People tend to habituate to an image, for example an aversive image that has been seen before is rated as less aversive than one that has not been seen before. Bem demonstrated that this habituation can occur even when the repeated presentation occurs after the rating of the image is made (i.e. given the choice between two images, participants will select as less aversive the image that the computer will later present to them several times).
- Retroactive facilitation of recall: When participants had to recall a list of words, they were shown to be better at recalling items that they were later required to perform a separate task on, even though they were unaware of which items on the list they would be re-exposed to.
It is important to note that in all these experiments the selection (by computer) of which items would appear after the initial task was performed independently of the participant’s response, so the results could not be due to the computer somehow using the participant’s responses to define its choice of which stimuli to present.
These findings caused much controversy and discussion within the psychological research community. Recently three independent attempts to replicate the ‘retroactive facilitation of recall’ effect have failed, producing null results despite using almost exactly the same method as Bem’s original study, and identical software (4). These failures of replication have highlighted problems in psychological research around the concepts of replication and the ‘file-drawer problem’ (5). There isn’t space to do justice to these issues here; suffice it to say that the jury is still out on Bem’s findings, at least partly because we can’t be sure whether other failed attempts to produce these effects remain unpublished, thus making Bem’s positive results appear more impressive than they might actually be. Another potential problem that is yet to be fully addressed is the issue of experimenter bias. Again this is a complex issue, and it appears to be a particular problem in research into paranormal phenomena, because positive results consistently tend to come from researchers who believe in said phenomena, while negative results consistently come from sceptical researchers (see 6 for a discussion).
Retroactive facilitation of recall is currently the only one of Bem’s effects that others have attempted to replicate in an open manner (i.e. by registering the attempt with an independent body before data collection, and by publishing the results after). Until more replication is attempted, the question as to whether we can unconsciously see into the future must be considered open to debate. Hopefully these topics will be subject to much research in the future, allowing us to find out whether these effects are real, or just the consequence of some other factor. It is worth mentioning at this point another paradigm that sometimes produces positive results regarding paranormal abilities. In experiments using the Ganzfeld technique (where participants’ auditory and visual systems are flooded with white noise and uniform light respectively) there is some evidence that those experiencing such stimulation are able to ‘receive’ information from someone sitting in a separate room (see 7 for a review). This appears therefore to be a potential demonstration of telepathy, although the effect is open to the same issues of replication and experimenter bias that surround Bem’s findings. Even ignoring these uncertainties, it should be noted that in these Ganzfeld experiments, and in Bem’s study, the size of the effects is very modest. For example in Bem’s precognitive detection paradigm, participants’ overall performance was at 53% as compared to chance level performance of 50%, while in the Ganzfeld experiments performance (choosing which one of four stimuli was being ‘transmitted’) is at around 32% against a chance performance of 25%. While these differences are found to be statistically significant (in some studies) because of the large number of participants or trials used, they don’t exactly represent impressive performance! 
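The gap between statistical and practical significance is easy to demonstrate. For a fixed hit rate of 53% against a 50% chance level, the z-statistic of a simple one-proportion test grows with the square root of the number of trials (the trial counts below are illustrative, not the actual figures from these studies):

```python
from math import sqrt

def z_score(observed_rate, chance_rate, n_trials):
    """z-statistic for an observed hit rate against a chance baseline."""
    standard_error = sqrt(chance_rate * (1 - chance_rate) / n_trials)
    return (observed_rate - chance_rate) / standard_error

# The same modest 3-percentage-point effect, at increasing sample sizes:
for n in (100, 1000, 10000):
    print(n, round(z_score(0.53, 0.50, n), 2))

# With enough trials even a tiny deviation crosses the conventional
# significance threshold (z > 1.96), without the effect getting any bigger.
```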
Therefore even if such paranormal phenomena were to be eventually proven genuine, this wouldn’t mean that the sort of mind-reading abilities claimed by psychics are actually possible!
*note that in this article the term ‘psychics’ is used merely as a label to define people who claim to have psychic powers, its use does not represent acceptance that such powers actually exist.
3) Bem, D. J. (2011). Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. Journal of Personality and Social Psychology, 100(3), 407-425. Link
4) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the Future: Three Unsuccessful Attempts to Replicate Bem’s ‘Retroactive Facilitation of Recall’ Effect. Plos One, 7(3). Link
5) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Replication, replication, replication. Psychologist, 25(5), 346-348. Link
6) Schlitz, M., Wiseman, R., Watt, C., & Radin, D. (2006). Of two minds: Skeptic-proponent collaboration within parapsychology. British Journal of Psychology, 97, 313-322. Link
7) Wackermann, J., Putz, P. & Allefeld, C. (2008) Ganzfeld-induced hallucinatory experience, its phenomenology and cerebral electrophysiology. Cortex 44, 1364-1378 Link
Image from ‘Seance on a wet afternoon’ (1964) Dir: Bryan Forbes, Distribution: Rank Organisation, Studio: Allied Film Makers.
In a recent article published in the Guardian (originally available on his personal website) George Monbiot looked at recent scientific evidence suggesting a link between ‘junk food’ and Alzheimer’s disease (1). This prompted me to think about the wider subject of nutrition and mental health. It’s an uncomfortable subject to consider, especially if, like me, you enjoy a trip to the local takeaway and the ‘occasional’ alcoholic beverage. Nevertheless the availability and popularity of processed foods in modern industrial societies (2, 3) makes the impact of diet on brain function an issue that we all need to consider seriously.
Despite a significant amount of research being undertaken into how diet affects the brain, there appears to be little discussion of the subject in public discourse. This may be due to the scientific uncertainties inherent in the study of diet and mental processes, especially when contrasted with the strong influence that the commercial interests of food manufacturers and retailers hold over government decision making. Here I intend to briefly review the difficulties researchers face in studying this topic, and what we know so far about how diet may alter mental health.
The problems of studying nutrition
A major problem with the study of diet is that it is really particular nutrients within food (e.g. vitamins and minerals) that influence our brains, rather than the foods themselves. As people can only really report their diets in terms of the foodstuffs they consume, and as each foodstuff contains a variety of chemicals in varying levels, each of which may be harmful, beneficial or neutral to our health to differing extents, it is not straightforward to map the relationship between foodstuffs and changes in health.
A second problem is that the impact of individual nutrients is likely to be mediated by other factors, such as the nutrient’s baseline level in the body, or the presence or absence of other nutrients. For example nutrients that are known to be beneficial to human health when consumed in food often fail to produce positive results when consumed in supplementary form (e.g. vitamin pills), an effect that is most likely due to the absence (in supplements) of naturally co-existing chemicals that facilitate the body’s uptake of the nutrient when it is consumed via foods (4). Likewise other factors that are independent of diet, such as age, genetics, and the level of physical activity, are likely to influence the effect of nutrition on health (e.g. 5). It is unethical to systematically control and manipulate a person’s entire diet over the period of time necessary to identify changes in mental processes likely to be triggered by diet. It is also impossible to fully control for the influence of other non-diet factors over a similar time frame. Therefore it is not possible to establish causality between individual foods and health outcomes with any certainty. Of course it is possible to perform such experiments on laboratory animals, but as such animals lack many of the cognitive functions that are disrupted in neurological diseases such as dementia, such studies are of limited use when considering the impact of nutrition on mental health in humans.
In light of these problems, the effect of nutrition on health is often studied via ‘cohort studies’, where large numbers of people are surveyed as to their dietary habits and health over an extended period of time. Such studies are not only expensive and time-consuming to complete, but also rely on potentially unreliable self-report measures (see (6) for a discussion). Alternatively, the influence of individual nutrients is sometimes studied by giving one group of participants supplements containing the nutrient, and others a placebo. This approach lacks the ecological validity of cohort studies, but allows a tighter control over the intake level of the nutrient involved, thus allowing its effects to be isolated. Neither method however overcomes the previously mentioned problems regarding establishing causality.
What do we know?
Given the complex relationship between food and nutrition, and the imprecision of self-report measures, diet is often characterised in cohort studies in broad terms. One relatively common distinction is between the so-called ‘Mediterranean Diet’ and the ‘Western Diet’. The former involves a high intake of fruit, vegetables, fish, cereals and unsaturated fats (e.g. the type of fat that tends to be found in nuts and seeds). In contrast the ‘Western Diet’ involves the frequent consumption of foods with high levels of saturated fats, such as red meats and dairy products, as well as processed foods such as confectionery and ‘convenience’ foods. Studies tend to show that those whose diets more closely resemble the Mediterranean Diet have a lower incidence of both dementia and mild cognitive impairment, even after confounding factors like age, socio-economic status and physical activity are controlled for (7). More specifically, it has been shown that a high intake of fruit and vegetables, as well as of omega-3 fats (dietary rather than through supplements), predicts a reduced likelihood of dementia (8); dementia levels in those with diets high in fruit and vegetables were 2.6%, compared with 5.7% for those with diets poor in fruit, vegetables and omega-3 fats.
The neurological effects of diet are not just restricted to dementia however. There is increasing evidence that diets high in saturated fat and sugars may contribute to behavioural problems in children and adolescents, including ADHD (9, 10). Similarly artificial food additives, such as the colourings and preservatives commonly added to confectionery and soft drinks, appear to increase hyperactivity in children (11). For example in a double-blind placebo trial (12) it was found that children regularly given a drink containing additives became more hyperactive (as measured by parent and teacher ratings, and through performance on a computerised attention task) than those given a placebo drink with the same frequency. This effect was present in both 3-year-old and 8-year-old children, suggesting that the influence of additives is not restricted to one particular stage of development.
Evidence also exists which suggests that deficiencies in a variety of vitamins and minerals within the body may encourage depressive symptoms. For example double-blind placebo trials consistently show that thiamine supplements improve mood (13), while other studies have suggested that low levels of vitamins B6 and E are implicated in depression (14). The effect of diet on mood may be self-reinforcing, as depressed individuals often turn to ‘comfort eating’ (13), which is likely to involve foods that are high in saturated fats, and which in turn may promote obesity, further depressing mood and self-esteem over the long term.
In what way do nutrients affect the brain?
Due to the aforementioned complexities in identifying the contribution of different nutrients, it has proven difficult to identify the exact mechanisms by which the under- or over-abundance of certain nutrients might affect the brain. However two interrelated systems are thought to be most vulnerable to dietary factors: the neuroinflammatory response of brain neurons, and the processes surrounding insulin signalling within the brain (15). Neuroinflammation is the immune response to neuron damage. It acts to preserve the damaged neuron and promote its recovery, but it can also cause damage to surrounding neurons. It is thought that the beneficial effect of diets high in fruit and vegetables may partly be due to the polyphenols present in plant matter working to limit neuroinflammation in the brain (e.g. 16). In terms of the second system, insulin is involved in regulating the uptake of glucose by neurons, as well as maintaining their function and structure (17). Diets that are high in saturated fats appear to promote ‘insulin resistance’, which reduces the body’s ability to utilise insulin (hence the association between obesity and type II diabetes). This in turn negatively impacts on the ability of neurons to function properly and to adapt to changes in the signalling patterns of other connecting neurons. This leads to reduced neural plasticity and an increased likelihood of chronic, maladaptive neuroinflammation, both of which are likely to interfere with normal cognitive functioning. This may be the mechanism by which frequent consumption of junk foods leads to a greater risk of dementia (1).
Should I change what I eat?
While it is never possible to rule out the influence of confounding factors, the basic message one can take from these studies seems pretty intuitive. We are better off eating foods that can be thought of as ‘natural’ for humans to eat. Throughout most of our history the human race has presumably relied mainly on fruits, vegetables, nuts and cereals, supplemented with small amounts of fish and meat. It therefore makes sense that these foods would be conducive to both our physical and mental health, as research seems to suggest. In contrast the convenience and affordability of seemingly unnatural foods such as confectionery, processed meats and ‘ready meals’ belies their damaging impact on our health. We could do our future selves a favour by avoiding the temptation these foods provide, and making the extra effort to eat healthily.
Image courtesy of www.freedigitalphotos.net
- http://www.monbiot.com/2012/09/10/the-mind-thieves/ (retrieved 24/09/2012).
- Popkin, B. M. (2004). The nutrition transition: An overview of world patterns of change. Nutrition Reviews, 62(7), S140-S143. <link>
- Thow, A. M. (2009). Trade liberalisation and the nutrition transition: mapping the pathways for public health nutritionists. Public Health Nutrition, 12(11), 2150-2158. <link>
- Morris, M. C. (2012) Nutritional determinants of cognitive aging and dementia. Proc Nutr Soc, 71(1), 1-13. <link>
- Dauncey, M. J. (2009). New insights into nutrition and cognitive neuroscience. Proceedings of the Nutrition Society, 68(4), 408-415 <link>
- http://www.sciencebrainwaves.com/uncategorized/the-dangers-of-self-report/ (retrieved 24/09/2012)
- Sofi, F., Abbate, R., Gensini, G. F., & Casini, A. (2010). Accruing evidence on benefits of adherence to the Mediterranean diet on health an updated systematic review and meta-analysis. American Journal of Clinical Nutrition, 92(5), 1189-1196. <link>
- Barberger-Gateau, P., Raffaitin, C., Letenneur, L., Berr, C., Tzourio, C., Dartigues, J. F., et al. (2007). Dietary patterns and risk of dementia – The three-city cohort study. Neurology, 69(20), 1921-1930 <link>
- Oddy, W. H., Robinson, M., Ambrosini, G. L., O’Sullivan, T. A., de Klerk, N. H., Beilin, L. J., et al. (2009). The association between dietary patterns and mental health in early adolescence. Preventive Medicine, 49(1), 39-44 <link>
- Howard, A. L., Robinson, M., Smith, G. J., Ambrosini, G. L., Piek, J. P., & Oddy, W. H. (2011). ADHD Is Associated With a “Western” Dietary Pattern in Adolescents. Journal of Attention Disorders, 15(5), 403-411 <link>
- Schab, D.W. & Trinh, N.T. (2004). Do Artificial Food Colors Promote Hyperactivity in Children with Hyperactive Syndromes? A Meta-Analysis of Double-Blind Placebo-Controlled Trials. Developmental and Behavioral Pediatrics, 25(6), 423-434 <link>
- McCann, D., Barrett, A., Cooper, A., Crumpler, D., Dalen, L., Grimshaw, K., . . . Stevenson, J. (2007). Food additives and hyperactive behaviour in 3-year-old and 8/9-year-old children in the community: A randomised, double-blinded, placebo controlled trial. Lancet, 370, 1560-1567. <link>
- Benton, D., & Donohoe, R. T. (1999). The effects of nutrients on mood. Public Health Nutr, 2(3A), 403-409. <link>
- Soh, N. L., Walter, G., Baur, L., & Collins, C. (2009). Nutrition, mood and behaviour: a review. Acta Neuropsychiatrica, 21(5), 214-227 <link>
- Parrott, M. D., & Greenwood, C. E. (2007). Dietary influences on cognitive function with aging: from high-fat diets to healthful eating. Ann N Y Acad Sci, 1114, 389-397. <link>
- Lim, G. P., Chu, T., Yang, F., Beech, W., Frautschy, S. A., & Cole, G. M. (2001). The curry spice curcumin reduces oxidative damage and amyloid pathology in an Alzheimer transgenic mouse. J Neurosci, 21(21), 8370-8377. <link>
- http://www.thealzheimerssolution.com/insulin-brain-function-and-alzheimers-disease-is-insulin-resistance-to-blame-for-alzheimers/ (retrieved 28/09/2012)
It is a common occurrence to come across people who believe things that seem extraordinary, and who maintain that belief even in the face of huge amounts of contradictory evidence. For example, despite vast amounts of evidence suggesting otherwise, there are people who believe that aliens create crop circles, that astrology can predict their future, and that the next Adam Sandler movie will be any good. A delusion can be defined as an extraordinary belief that is strongly held despite the presence of seemingly overwhelming evidence to the contrary. Delusions are of particular interest to psychologists and neuroscientists because they occur in a number of neurological disorders, as well as in seemingly healthy individuals. For example a variety of paranoid or grandiose delusions frequently occur in psychotic disorders such as schizophrenia. Delusions relating to various bizarre forms of misidentification, such as the belief that a loved one is an imposter (the Capgras delusion), can also occur, often in forms of dementia such as Alzheimer’s Disease, and even in elderly populations who do not exhibit any other noticeable cognitive impairment (1). Delusions of various types also occur in Parkinson’s disease, depression and as a result of other brain traumas such as those caused by strokes.
One error or two?
On a theoretical level there has traditionally been a distinction between 1-step and 2-step theories of delusions. 1-step theories (e.g. 2) suggest that a single perceptual deficit causes delusions. The delusion represents the most logical response to the bizarre perceptual information the brain is receiving as a result of the perceptual deficit. For example paranoid delusions may be caused by a perceptual bias towards threat signals which makes the sufferer conclude that some overbearing threat must be present to explain the constant warnings coming from the sensory environment. In contrast 2-step models (e.g. 3) argue that in addition to a perceptual deficit, there must also be a second, cognitive deficit. Such theories are motivated in part by the finding that there are some individuals who exhibit very similar perceptual deficits to those with delusions, but nevertheless do not hold delusional beliefs. For example there are individuals with bilateral damage to specific parts of the frontal lobe who, like patients with the Capgras delusion, experience a lack of familiarity when they come into contact with a particular close relative. However in contrast to the Capgras patients, the frontal lobe patients do not hold the belief that the relative is an imposter (4). Instead they are able to understand that it is their experience that has changed, rather than their relative. While 1-step theories suggest that delusions are caused by a single neuro-perceptual deficit, which varies in its nature depending on the nature of the delusion, 2-step theories require that an additional, separate deficit exists within the neural system involved in the formation and evaluation of beliefs. Variation in this second, cognitive stage explains the likelihood of adopting a delusional belief in the context of disrupted perceptual experiences, and hence the difference between the Capgras and frontal lobe patients.
How are beliefs formed and updated?
If delusions are underpinned by a 2-step deficit, with the second, cognitive step being similar across delusional disorders, then the question arises as to what is the exact nature of this cognitive deficit? Recently an answer to this question has been proposed based on the insight that our ability to navigate the world is achieved through a process of inferential learning (e.g. 5). In short it is proposed that the brain creates representations of how the external world is organised based on the information it receives. These models of the world by their nature encapsulate our belief system, as they contain representations of how different information is related, and what is likely to occur in any given situation. These models also allow the brain to predict both upcoming external stimulation, and internal experience. When actual experience differs from that which is expected, signals communicating this discrepancy (referred to as prediction-error signals) are sent back to the areas that generated the prediction, with the purpose of updating the model from which the original prediction arose. This process, when working optimally, allows us to adapt to new, unexpected information while at the same time enabling the majority of unexceptional information we encounter to be processed quickly and with minimum effort (because it has been predicted in advance).
Within this system the updating of beliefs can be framed using the principles of Bayesian inference, whereby the decision as to whether to adopt one of (say) two explanations to account for an unexpected stimulus is taken by balancing the inherent probability of each explanation (based on the current model of the world that the individual holds) against the likelihood of the unexpected stimulus having occurred if each explanation were true. When in the presence of a surprising or anomalous experience, such as those caused by the perceptual deficits believed to underpin the first step of delusion formation, a change of belief will only occur if the experience is sufficiently more probable under the new belief than under the existing belief to outweigh how much more inherently probable the existing belief is. In order to adopt an atypical or delusional belief, whose inherent probability would usually be very low, new evidence would have to appear that is almost inexplicable within the current belief system, while being fully explainable using the new belief. For example to believe that the moon is made of cheese would probably require you to actually travel to the moon, dig a bit of it up, put it in your mouth and taste cheese. Any lesser form of evidence would be discarded as a coincidence or trick, as the inherent probability of the moon being made of cheese given your existing belief system is (or at least should be) extremely low!
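The moon-cheese example can be put into numbers with Bayes’ rule. The prior and likelihood values below are invented purely for illustration; the point is that a near-impossible prior is only overturned when the evidence is almost inexplicable under the existing belief:

```python
def posterior(prior_new, likelihood_new, likelihood_old):
    """Posterior probability of the new belief after an anomalous experience.

    prior_new:      inherent probability of the new belief
    likelihood_new: probability of the experience if the new belief is true
    likelihood_old: probability of the experience if the old belief is true
    """
    prior_old = 1 - prior_new
    evidence = likelihood_new * prior_new + likelihood_old * prior_old
    return likelihood_new * prior_new / evidence

prior = 1e-9  # "the moon is made of cheese": vanishingly small prior

# Weak anomaly (a photo that looks vaguely cheese-like): still quite likely
# under the old belief, so the posterior barely moves off zero.
print(posterior(prior, likelihood_new=0.9, likelihood_old=0.1))

# Overwhelming anomaly (you stand on the moon and taste cheese): essentially
# inexplicable under the old belief, so the new belief wins despite its prior.
print(posterior(prior, likelihood_new=0.9, likelihood_old=1e-12))
```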
Delusions: A problem with prediction error?
In delusions it is proposed that this process of error-dependent updating of beliefs is disrupted. Most likely this occurs through a process whereby the weight (or importance) given to various prediction error signals is sub-optimal (e.g. 6, 7). If prediction error signals are given undue weight then potentially unimportant variances from expectation will be flagged as highly salient. This in turn means they are given unnecessary influence in updating our belief system. An anomalous experience that would normally not be treated as particularly relevant to understanding how the world works, either because of the unusual context in which it occurred or because of its infrequency, would, if this deficit existed, be treated as important enough to warrant a change in the individual’s belief system. In terms of Bayesian inference, a system which gives undue weight to prediction errors would be one biased towards accepting the influence of new anomalous experiences without fully taking into account the relative inherent probabilities of the competing potential beliefs (which would usually strongly favour the non-delusional belief) (8). A less convincing anomalous experience would therefore be sufficient to successfully challenge an existing non-delusional belief.
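One simple way to picture this deficit (my own toy formalisation, not a model taken from the cited papers) is to give the evidence term an adjustable weight: raising the likelihood ratio to a power w greater than 1 mimics a system that over-weights prediction errors relative to prior beliefs.

```python
# Toy sketch of over-weighted prediction error (an illustrative simplification):
# with w > 1 the evidence term dominates the prior; w = 1 is optimal Bayes.

def biased_posterior_odds(prior_odds, likelihood_ratio, w=1.0):
    """Posterior odds for the new belief, with evidence weighted by exponent w."""
    return prior_odds * likelihood_ratio ** w

prior_odds = 1e-6          # the 'imposter' belief starts out very improbable
likelihood_ratio = 50.0    # absent familiarity only modestly favours the imposter belief

print(biased_posterior_odds(prior_odds, likelihood_ratio, w=1.0) < 1)  # True: belief rejected
print(biased_posterior_odds(prior_odds, likelihood_ratio, w=4.0) > 1)  # True: belief adopted
```

With optimal weighting the improbable imposter belief is rejected despite the anomaly; with the same evidence over-weighted, the identical modest anomaly is enough to flip the belief.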
As an example, reconsider the aforementioned difference between patients with frontal lobe lesions and those with the Capgras delusion. In both types of patient the feeling of familiarity that is expected to arise on the physical recognition of a known person is absent. In the non-deluded individual, while this discrepancy is noted, it is not used to adopt the ‘imposter explanation’: the correct weight is given to the prediction error, so it is not strong enough to overturn an otherwise functioning belief that the person is who they claim to be (a belief supported by several other pieces of information). In contrast, the deluded individual gives far too much weight to the unexpected experience of non-familiarity, and the model is changed to accommodate it through the acquisition of the belief that the person is an imposter. As the prediction error deficit in such cases is restricted to the perceptual system dedicated to familiarity processing, other evidence that contradicts the imposter hypothesis but comes from a different source (e.g. people telling the deluded individual that they are wrong) is not treated with the same weight as the experience of absent familiarity. The delusion is therefore maintained even in light of strong contradictory evidence.
More widespread delusions
Whereas the Capgras delusion tends to be monothematic (i.e. it relates to just one known person having been replaced by an imposter, rather than people in general being imposters), faulty prediction error signalling can also be used to explain more widespread delusional thinking such as paranoia. For example, one potential consequence of the incorrect updating of belief systems is that the model of the world that the individual holds will itself become further divorced from reality, making it less able to accurately predict upcoming stimulation. This in turn will lead to a further increase in the frequency of prediction errors, to the extent that surprising or anomalous information will appear to occur with baffling frequency. If the deficit in prediction error signalling exists across more than one perceptual domain, the inferential response might be to adopt a paranoid outlook to explain this constant uncertainty in the world. For example, a delusion that MI5 are spying on the sufferer might be the best explanation for a world where objects and strangers seem to take on a sinister level of salience, and unexpected events seem to happen with alarming frequency (6).
Is healthy belief formation optimal, or are we all deluded?
The strength of a model of delusions based on deficits in the processes of inferential learning is that it can also be used to explain the characteristics of general belief formation. For example, deficits in prediction-error signalling may explain why some otherwise healthy individuals tend to adopt a wide variety of irrational beliefs. Such people may lack the perceptual deficit that causes the bizarre but specific anomalous experiences suffered by individuals with clinical delusions, but they may share with the clinical group a general deficit in inferential reasoning which results in a tendency to accept unusual beliefs that are poorly supported by available evidence. Along similar lines, variances from optimal processing (in terms of Bayesian inference) may explain more general cognitive biases that seem to be present in most people (including scientists!) and which are therefore presumably hard-wired in the human brain because they have some adaptive evolutionary advantage. For example, most people display a ‘belief bias’: the tendency to evaluate the validity of evidence based on their prior beliefs, rather than on the inherent validity of the evidence as could be assessed through logical reasoning (9). This bias could be said to be the result of our system of inferential learning being sub-optimal (in Bayesian terms) but in the opposite direction to that seen in delusion, such that we have a bias towards evaluating beliefs more in terms of their inherent probability (as we see it) without fully taking into account new evidence.
More generally the processes of inferential learning and belief formation may be able to explain why people who have had relatively similar types of upbringing and experience can often exhibit very different sets of beliefs. These differences are likely to be in part due to differences in the process of belief formation between individuals. It would seem very unlikely that anybody’s brain is able to process information in strict accordance with Bayesian inference, given that neural signals are coded through the transmission of neurotransmitters between groups of neurons, a process that is naturally susceptible to a significant amount of noise. Differences in beliefs between people are presumably therefore inevitable, as is the likelihood that we all, at some time, adopt irrational convictions. Of course these are just things that I believe, and I may be deluded in believing them!
Image courtesy of www.freedigitalphotos.net
(1) Holt, A.E., & Albert, M.L. (2007) Cognitive Neuroscience of delusions in aging. Neuropsychiatric disease and treatment, 2 (2) 181-189. Link
(2) Maher, B.A. (1974) Delusional thinking and perceptual disorder. Journal of Individual Psychology, 30:98-113. Link
(3) Coltheart, M, Langdon, R. & McKay, R. (2011) Delusional Belief. Annual Review of Psychology, 62, 271-298 Link
(4) Tranel, D., Damasio, H. & Damasio, A.R. (1995) Double dissociation between overt and covert face recognition. Journal of Cognitive Neuroscience, 7(4) 425-432. Link
(5) Friston, K. (2003). Learning and inference in the brain. Neural Networks, 16(9), 1325-1352. Link
(6) Fletcher, P. C., & Frith, C. D. (2009). Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience, 10(1), 48-58. Link
(7) Corlett, P. R., Taylor, J. R., Wang, X. J., Fletcher, P. C., & Krystal, J. H. (2010). Toward a neurobiology of delusions. Progress in Neurobiology, 92(3), 345-369. Link
(8) McKay, R. (2012). Delusional Inference. Mind & Language, 27(3), 330-355. Link
(9) Markovits, H. & Nantel, G. (1989). The belief bias effect in the production and evaluation of logical conclusions. Memory & Cognition, 17(1) 11-17. Link
You see, but you do not observe…
A Scandal in Bohemia, The Adventures of Sherlock Holmes: Arthur Conan Doyle
What is the distinction between seeing and observing? The term ‘seeing’ suggests a passive process, whereas observation clearly requires something additional: the attention to a particular detail or details within the visual scene, the extraction of salient information, and perhaps the further evaluation of that information. Neuroscience has made great strides in understanding the functioning of our basic sensory mechanisms, such as those that allow seeing. This work has reached such a level that we are now coming close to being able to create ‘bionic eyes’; mechanical replicas which can mimic the workings of damaged parts of the visual system (1). However it is a much harder task to fully understand the myriad of different ‘higher order’ functions that serve to differentiate observation from mere seeing. These functions are the reason that human experience is much more than the sum of the output from our sensory systems. At the heart of this problem is the need to understand the phenomenon of consciousness. Consciousness can be difficult to define precisely, with different philosophers breaking it down into different sets of features (2), producing concepts that, perhaps inevitably, tend to be somewhat vague and potentially overlapping. However the most fundamental aspect of consciousness would appear to be our ability to experience awareness of (certain) sensory information, and to impose our higher order abilities on that information. In short, given that the majority of sensory processing is performed outside of consciousness, how is it that certain information can be sectioned off and subjected to processes such as attention, evaluation and reflection, and how is it that we are aware of both the selected data and the cognitive processes we perform on it?
Brain waves and synchronisation
The simplest way of addressing the issue of consciousness is to compare the response of the brain under circumstances where the level of conscious awareness differs. It has long been known that states of consciousness (such as wakefulness, sleep and coma) are marked by differences in the pattern of ‘brain waves’: the oscillating electrical signals that are produced by the brain. It would seem sensible therefore to assume that such changes in the pattern of brain waves reflect, at least in part, changes in the functioning of the mechanism that enables consciousness. Similar changes in brain oscillations are also seen in a wide variety of different brain areas during the performance of cognitive tasks, which of course also require the conscious processing of information. In general, cognitive processes appear not only to alter the power of such oscillations, but also to evoke an increase in synchronisation between them (such that the phase difference between the signals generated by the brain areas activated by the task remains constant over time). Such synchronisation is believed to allow communication between disparate brain areas; so-called ‘communication through coherence’ (3). If one takes the simple example of one neuronal population passing a signal to another, then to provide the greatest likelihood of that signal being received, the sending neurons must all fire at the same time (hence the oscillating nature of brain waves), thus maximising the signal sent to the receiving neurons. However the timing of this signal is also important. To maximise the chance of the signal being propagated, the firing of the sending neurons must be timed so that the signal arrives when the receiving neurons are optimally receptive to it (or alternatively, if inhibition of signalling is required, when the receiving neurons are optimally insensitive to it).
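This timing argument can be illustrated with a toy calculation (an illustrative sketch, not a biophysical model; the 40 Hz frequency and 5 ms conduction delay are assumed values chosen for the example):

```python
import math

# Toy sketch of 'communication through coherence': a receiving population's
# excitability oscillates, so a volley of spikes is most effective when it
# arrives at the peak of the receiver's cycle.

def transmission_gain(send_phase, conduction_delay, freq_hz=40.0):
    """Relative efficacy of a spike volley sent at send_phase (radians)."""
    arrival_phase = send_phase + 2 * math.pi * freq_hz * conduction_delay
    # receiver excitability is modelled as peaking at phase 0 of its own cycle
    return (1 + math.cos(arrival_phase)) / 2  # 1 = maximally receptive, 0 = insensitive

delay = 0.005  # assumed 5 ms conduction delay between the two areas
# A volley timed so that it arrives at the receiver's peak gets through...
print(transmission_gain(send_phase=-2 * math.pi * 40 * delay, conduction_delay=delay))
# ...whereas one arriving at the trough is effectively blocked.
print(transmission_gain(send_phase=math.pi - 2 * math.pi * 40 * delay, conduction_delay=delay))
```

The same volley of spikes is either transmitted or blocked depending purely on its phase relative to the receiver's cycle, which is why a constant phase relationship (coherence) between areas matters for communication.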
Therefore, when different brain areas need to communicate in order to facilitate cognitive processing, their patterns of neuronal firing must achieve coherence: they tend to synchronise such that (for unidirectional, excitatory signals at least) the conduction delay between the two areas is matched by the phase difference between the two oscillating signals.
Global Neuronal Workspace
As the cognitive tasks that produce neural synchrony all require conscious processing of some sort, we would expect that the experience of consciousness in general must rely on changes in synchrony between brain areas. Indeed, studies that have directly compared conscious versus non-conscious processing (e.g. comparing instances where the same stimulus is consciously perceived versus instances where it is not) have found an increase in synchronisation between distant cortical sites not directly related to the processing of the relevant sensory information (e.g. 4). Evidence from several MRI studies suggests that the location of these synchronising sites is consistent across different tasks, involving a specific set of areas in the frontal and parietal lobes as well as the thalamo-cortical circuits that control the flow of sensory information to and from the cortex (see 5 for a review). The relevance of this finding to consciousness is supported by evidence that the altered brain response between different states of consciousness appears to be generated by a similar set of areas (6). This has led to the idea that these brain areas represent a ‘global neuronal workspace’ (GNW: 5, 7) that supports consciousness. The GNW system is thought to be able to orchestrate synchronisation between different sensory processing areas in such a way as to allow certain sensory representations to be amplified and maintained, while inhibiting others. As synchronisation facilitates neuronal communication, it may allow the specific information held within different sensory areas to form a single, multi-sensory representation within the workspace, explaining how the conscious experience of perception is of a unified sensation, despite the fact that information from each sense is analysed separately (8 – the ‘perceptual binding’ problem).
In addition, the parietal and frontal areas of the GNW contain a large number of neurons with long axons, which allow these areas to project information to a wide variety of disparate brain areas. This in turn is thought to allow them to make the representation held within the GNW available to the areas of the brain involved in higher processing functions. In effect, the amplified representation that is maintained by the GNW is also broadcast to these other processing sites, thus allowing higher order processing of conscious information. It is this selection and amplification of a specific representation, and its subsequent global availability (to other brain areas), which we experience as consciousness. The concept of synchronous firing and a global neuronal workspace may also help explain other aspects of the conscious experience, such as metacognition (our ability to perform mental processing on the outputs of other mental processing, e.g. to know what we know). Metacognition may simply be the conscious component of a much larger perceptual system that is continuously reflecting on our own activity and its likely consequences (9). The metacognition we experience consciously may therefore simply comprise the instances where this process reaches conscious access via the GNW and is thereby exposed to other higher order processing functions.
The consequences of a neural explanation of consciousness
The study of the neural basis of consciousness is an exciting but complex subject. It also, however, raises significant philosophical questions. The idea that consciousness is merely a manifestation of the firing patterns of neurons and their arrangement vis-a-vis each other is not a particularly controversial conclusion from a neuroscience perspective, as one would expect every aspect of human cognition to manifest via changes in brain physiology. However the topic is controversial in general because it suggests that if something as core to our being, to our experience of being ‘human’, as consciousness is in fact solely reliant on biological mechanisms, then concepts such as the mind, the soul and free will are redundant. If there is no ‘ghost in the machine’ driving our conscious behaviour then are we really nothing more than a collection of tissue; are we really just, in effect, extremely complex machines? This discussion has important implications for philosophy and morality (for an interesting discussion on this topic see 10). More optimistically, however, the ability to understand the biological underpinnings of consciousness can lead to greater understanding of the basis of neurological disorders that cause the loss of conscious abilities, and of psychiatric symptoms that relate to the disruption of consciousness. For example, many people suffering from forms of psychosis can experience what could be termed failures of consciousness, such that patterns of conscious thought become disordered, or they may feel that their thoughts are being read or even controlled by others. An understanding of how the brain generates consciousness is surely an important step in identifying what has gone wrong in these situations, and potentially how it can be remedied.
Image ‘Idea and Creative Concept’ by ‘Mr Lightman’, courtesy of freedigitalphotos.net http://www.freedigitalphotos.net/images/view_photog.php?photogid=3921
1. Mathieson et al (2012). Photovoltaic retinal prosthesis with high pixel density. Nature Photonics, 6, 391-397. http://www.nature.com/nphoton/journal/v6/n6/full/nphoton.2012.104.html
2. Gok, S.E., and Sayan, E. (2012) A philosophical assessment of computational models of consciousness. Cognitive Systems Research 17–18 (2012) 49–62. http://www.sciencedirect.com/science/article/pii/S1389041711000635
3. Fries, P. (2005) A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10) 474-480. http://www.sciencedirect.com/science/article/pii/S1364661305002421
4. Doesburg, S.M., Green, J.J., McDonald, J.J., and Ward, L.M. (2009). Rhythms of consciousness: Binocular rivalry reveals large-scale oscillatory network dynamics mediating visual perception. PLoS ONE 4, e6142. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0006142
5. Dehaene, S. and Changeux, J.P., (2011). Experimental and Theoretical Approaches to Conscious Processing. Neuron 70, 201-227. http://www.cell.com/neuron/abstract/S0896-6273%2811%2900258-3
6. Boly, M et al (2008) Intrinsic brain activity in altered states of consciousness – How conscious is the default mode of brain function? Annals of the New York Academy of Sciences. 1129, 119-129. http://www.ncbi.nlm.nih.gov/pubmed/18591474
7. Dehaene, S. & Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework, Cognition 79 1–37. http://www.jsmf.org/meetings/2003/nov/Dehaene_Cognition_2001.pdf
8. Varela, F., Lachaux, J.P., Rodriguez, E., and Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nat. Rev. Neurosci. 2, 229–239. http://www.nature.com/nrn/journal/v2/n4/abs/nrn0401_229a.html
9. Timmermans, B., Schilbach, L., Pasquali, A., and Cleeremans, A. (2012) Higher order thoughts in action: consciousness as an unconscious re-description process. Phil. Trans. R. Soc. B (2012) 367, 1412–1423. http://rstb.royalsocietypublishing.org/content/367/1594/1412.abstract
If you’ve got the money honey, we’ve got your disease — Guns n’ Roses: Welcome to the Jungle
One of the key challenges of cognitive neuroscience is to gain an understanding of the neural mechanisms behind the various psychiatric disorders that can blight mankind. Knowledge of how various brain mechanisms work in health, and of how and in what way they become defective, is crucial for the development of neurological treatments for such conditions. Such an approach doesn’t imply tacit acceptance of the idea that all behaviour is guided by changes in the brain, or that psychiatric problems are solely of biological origin. Indeed it is well established that social and psychological factors can drive changes in brain function (for example, purely cognitive therapies can alter patterns of neural firing (1)). What understanding the neural basis of disease does allow is the development of better methods of tackling such conditions at a neurological level, which is important because in many patients the social and psychological factors that have triggered their condition may prove either impractical or impossible for clinicians to alter (e.g. changing the structure of society).
Addiction is an extremely prevalent problem in modern society. Alcohol and opiate addictions alone are estimated to affect 15 million Europeans, costing around 65 billion Euros a year in both health and non-health related costs (2). Addiction can be defined as the persistent, compulsive dependence on a behaviour or substance (3), and therefore spans not just drug dependencies but also ‘behavioural addictions’ such as gambling, overeating, sex addiction and compulsive shopping (oniomania). Although the definition of addiction is reasonably straightforward, the process of addiction needs to be broken down into its constituent cognitive parts before it can be fully understood. Addiction, and indeed all psychiatric problems, are not unitary constructs; they reflect abnormalities in several different facets of human cognition. For example, unipolar depression can involve not just low mood, but also failure to respond to pleasurable experiences (anhedonia), low energy, anxiety and loss of appetite. Breaking down such conditions into their component parts is crucial if we are to understand how they develop and how they can be treated. From a clinical perspective, focusing on the array of symptoms rather than the overall condition can help identify sub-types of the condition, which in turn can allow treatments to be modified to address the particular set of symptoms presented by an individual patient.
So which cognitive processes may be at fault when an individual becomes addicted? While opinions vary on this subject, in general it can be said that addiction involves abnormalities in the following interconnected processes:
- Reward processing
- Motivation and learning
- Decision Making
- Cognitive control
By their nature addictive behaviours have, at least initially, a rewarding effect. Moreover these effects are felt both by those who later become addicted and those who do not. Clearly, therefore, something in the processing of rewarding events must either change during addiction, or be naturally defective in the addicted individual. Unfortunately, while there are a number of different theories concerning how reward processing is disrupted in addiction, the exact nature of the deficiency is as yet uncertain. For example, do people become sensitized to a drug and thus gradually require more of it to maintain a balanced physiological state, or are people at risk of addiction more naturally prone to negative emotions and therefore more inclined to seek out rewarding stimuli despite the risk? Despite this uncertainty, neurological research has revealed that the experience of reward (e.g. intoxication) is strongly associated with activity within circuits of the brain that make use of the neurotransmitter dopamine (neurotransmitters are the chemicals that facilitate communication between neurons in the brain). This dopaminergic system encompasses subcortical areas directly related to the processing of motivationally relevant stimuli, such as the striatum and amygdala, as well as cortical areas such as the prefrontal cortex, which are involved in the prediction of future reward, the evaluation of existing rewards and decision making (4). Various addictive drugs appear to alter the balance of dopamine within this system, usually increasing it, presumably creating the ‘high’ associated with drug taking. Over the long term an ‘exhaustion’ effect may occur, whereby the brain is unable to maintain its previous tonic (standard) level of dopamine because of the effect that frequent performance of the addictive behaviour has on dopamine levels.
This may then lead to the withdrawal state, and to a situation where the addicted user becomes trapped in a cycle of repeating the addictive behaviour, not to achieve the high that the behaviour was initially associated with, but merely to maintain an acceptable tonic level of dopamine, thus avoiding the ‘low’ that occurs with withdrawal from the behaviour. Other neurotransmitter systems which innervate similar brain areas, such as the noradrenergic system, also play a part in addiction, although they have in general been less widely studied with regard to their role in reward processing.
Stimuli that are not directly rewarding, but are predictive of or otherwise associated with the positive effects of the addictive behaviour, act to induce cravings for it. The processing of such ‘addictive cues’, in comparison to similar stimuli unassociated with the addiction, tends to provoke greater activity in a wide variety of brain areas, including those involved in the actual processing of reward, alongside frontal-cortical circuits involved in the regulation of thoughts and actions, and areas involved in memory, sensory processing and the engagement of motor actions (5). This suggests that contextual factors that induce cravings can not only evoke brain activity in the reward centres of the brain, but also engage greater perceptual processing and attention, and even trigger motor activity, presumably in preparation for seeking out or performing the addictive behaviour. Dysfunctions within these circuits are likely to have a knock-on effect on processes such as learning and memory. Persistent performance of the addictive behaviour after exposure to addictive cues will strengthen the association between the cue and the behaviour, and between both of these and the subsequent hedonic effects of the reward. The strengthening of such associations can lead to a behaviour that was previously under conscious control becoming habitual. The more habitual or automatic a behaviour becomes, the more effort is required to control it, and ultimately the more likely it is to be performed regardless of its utility in a particular circumstance. In short, it becomes compulsive. Indeed the ease with which a behaviour can become habitual may distinguish addicts from those who remain ‘casual users’.
In addition to the neural circuits involved in reward and learning, the frontal areas of the brain, which are activated both by the addictive behaviour itself and during craving, are crucial in the process of addiction. Such areas are broadly believed to be involved in ‘cognitive control’: they act to regulate activity from the more primal, sub-cortical brain areas involved in motivation, emotion and learning, effectively providing control over thoughts and behaviour. Perhaps unsurprisingly, the (partially separate) systems within the frontal cortex that are involved in decision making and in inhibiting pre-potent (i.e. habitual or natural) responses are both found to be deficient in addicted populations, which explains why addicts make decisions that are counter-productive to their health even when they are fully aware of the likely consequences of their actions (8). Increased sensitivity, or reactivity, of the subcortical reward circuits, coupled with a weakening of the control exerted on them by the frontal control areas, is likely to be behind the habituation of addictive behaviour and the subsequent failure to regulate it. In some senses the addict (or more accurately, the frontal control areas of the addict’s brain) loses control over their instinctive behaviour.
One of the most serious problems with addiction can be a lack of what is termed ‘insight’: the ability to understand that you are ill. Lack of insight is a severe challenge for clinicians as it can be nearly impossible to effectively implement any treatment when the patient is unaware that treatment is needed. Again, frontal areas, most notably the insula and anterior cingulate cortices, appear to be crucially involved in this lack of insight (6). The insula is involved in monitoring internal body states (interoceptive awareness) and producing the ‘subjective experience’ relating to them. It is also involved in deriving salience from sensory information and, along with the anterior cingulate, influencing behaviour accordingly (7), thus providing a crucial system for the expression of the effect of addictive cues on behaviour. Addiction-induced dysfunctions in this system may therefore lead to an inability to properly process and respond to changes in body state caused by the performance of (or withdrawal from) the addictive behaviour, and may stop the individual from fully appreciating that addictive cues are provoking the cravings which are driving the addictive behaviour. Thus insight into the problematic nature of their condition is lost to the individual.
This article represents a very brief overview of the sorts of cognitive processes and neural structures involved in addiction. Unfortunately it isn’t possible to do justice to the full scope of research into addictive behaviour in a short article. What should be clear, however, is that drug abuse can induce changes in a multitude of different interconnected neural circuits, affecting a multitude of different cognitive functions. This effect can differ somewhat depending on the drug of abuse, but nevertheless also applies to a significant extent to non-drug addictions, implying that such neurological changes can occur without the direct influence of external chemical agents. It follows that these changes must be at least partly the consequence of purely internal, cognitive shifts in the workings of the brain, which do, of course, also occur in drug-based addictions, thus exacerbating the direct neurochemical effects of the drug. Despite the complexity of the processes involved, increased understanding of the neurological and cognitive basis of addiction should enable, in time, more advanced and effective treatments to be designed. Future research into addiction will also hopefully enable ‘markers’ for the condition to be identified: biological or cognitive indices that predict those who are at potential risk of addiction. This in turn would improve our ability to take preventative measures to reduce the prevalence of this debilitating problem.
Media reports on recent research have claimed that neuroscientists are now effectively able to perform ‘mind reading’. Such reporting inevitably raises ethical questions about what applications this research might eventually be put to and, judging by some of the comments that the online versions of these articles have provoked, has alarmed some people about the eventual path such research might take. But how accurate is the claim that neuroscientific techniques can read minds?
Early this year an article in the Guardian ( http://www.guardian.co.uk/science/2012/jan/31/mind-reading-program-brain-words ) reported that:
‘Scientists have picked up fragments of people’s thoughts by decoding the brain activity caused by words that they hear.’
Reporting on the same experiment, the Daily Mail ( http://www.dailymail.co.uk/sciencetech/article-2095214/As-scientists-discover-translate-brainwaves-words–Could-machine-read-innermost-thoughts.html ) claimed:
’It’s a staggering development that could have tremendous implications….judges could use mind-reading machines to find out if murder suspects are telling the truth….mind reading devices might be used to eavesdrop covertly on the most private thoughts and dreams.’
The experiment in question, conducted by Dr Brian Pasley and colleagues (1), involved the recruitment of patients who were to undergo brain surgery. The researchers placed electrodes on the auditory areas of the brain while the patients’ skulls were open and their cerebral cortex exposed. They then played the patients a sequence of different words and recorded the electrical activity generated by the auditory cortex in response to this speech. Using complex modeling procedures, they were able to reconstruct the spoken words solely from the neural signals recorded by the electrodes. Furthermore, they were able to successfully apply this model to the electrical responses generated by a separate set of words that had not been used in creating the model (i.e. words that were in effect ‘novel’ to the model), suggesting that the model could in principle be applied to reconstruct any speech heard by the patient.
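For the technically curious, the core logic here (fit a decoder on responses to known stimuli, then apply it to responses the model has never seen) can be sketched in a few lines of Python. To be clear, everything in this toy is my own invention for illustration: the feature and electrode counts, the linear mixing, and the noise level are all made up, and the real modeling in the paper is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each 'word' is a vector of 10 spectral
# features, and 30 electrodes respond as a noisy linear mixture
# of those features (real neural responses are far messier).
n_features, n_electrodes = 10, 30
mixing = rng.normal(size=(n_features, n_electrodes))

def record(stimulus):
    """Simulate electrode recordings evoked by a stimulus."""
    return stimulus @ mixing + 0.1 * rng.normal(size=n_electrodes)

# Training phase: play known words, record the neural responses.
train_stimuli = rng.normal(size=(200, n_features))
train_responses = np.array([record(s) for s in train_stimuli])

# Fit a linear decoder: the least-squares map from recorded
# responses back to the stimulus features that caused them.
decoder, *_ = np.linalg.lstsq(train_responses, train_stimuli, rcond=None)

# Test phase: reconstruct a 'novel' word never used in training.
novel = rng.normal(size=n_features)
reconstruction = record(novel) @ decoder

# How well does the reconstruction match the true stimulus?
r = np.corrcoef(novel, reconstruction)[0, 1]
```

Because the decoder is estimated only from stimulus–response pairs, it generalises to any stimulus drawn from the same family, which is the sense in which the real model could in principle reconstruct arbitrary heard speech.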
While these results are undoubtedly impressive, has the media coverage of them been accurate? The Guardian’s claim that this represents a decoding of ‘fragments of thoughts’ seems to depend on a rather broad definition of the term ‘thoughts’. What the research did was to reconstruct auditory stimuli that the auditory cortex was in the process of analysing. What has been achieved, therefore, is the decoding, at a detailed level, of a perceptual process, NOT the reading of internally generated thoughts. This is a significant step away from ‘decoding thoughts’, as the process being decoded is entirely dependent on the presentation of an external stimulus. It doesn’t represent ‘mind reading’ because the same result could in theory be achieved without reference to the brain, e.g. by taking measurements from the relevant sensory organ or by simply observing the sensory stimulus itself (2). Even if the research did represent mind reading, there seems little justification for the Daily Mail’s claim that it could lead to ‘covert eavesdropping’. The methodology required not only the opening up of the participant’s skull, but also the co-operation of the participant in allowing data to be collected for the construction of the model. Furthermore, what neither article mentions is that the reconstructed words were not actually intelligible to a human listener, but had to be ‘recognised’ via a speech recognition algorithm (an example of the reconstructed speech can be heard here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001251#s5).
Actual Mind Reading?
While the results of Dr Pasley’s study required the participants’ brains to be exposed, other neuroimaging methods are not so intrusive, and could therefore be considered closer to the covert mind reading reported by the Mail. Magnetic Resonance Imaging (MRI) allows brain activity to be measured in a non-invasive way, so that no surgery of any kind is required (although lying down in a scanner which costs millions of pounds and is the size of a small boat is still required, making it far from ‘covert’!). MRI studies have produced results equivalent to those of Pasley’s study, but using visual stimuli, with images (3) and short movies (4) having been reconstructed purely from data obtained from MRI scans. Of course such results don’t represent mind reading any more than Dr Pasley’s study does, since they reflect a reconstruction of external sensory information. However, other MRI studies have produced results that allow scientists to predict processes occurring within a participant’s brain that are not directly tied to the characteristics of external stimuli. A pair of studies by Yukiyasu Kamitani and Frank Tong (5,6) showed that models can be created that allow an observer to identify which stimulus a participant is (covertly) attending to. In effect these studies, and others like them, use the output from the perceptual processing mechanisms of the brain to identify how ‘top-down’ influences (such as expectation and attention) are driving perception. Strictly speaking they represent mind reading as, although the mental processes in question are still involved in analysing external stimuli, it is not necessarily possible to obtain the information provided by the MRI data in any other way (short of asking the person themselves). This is because the ‘top-down’ influences in question arise internally from the brain, rather than being a function of the external stimulus.
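The logic of this kind of pattern classification can again be shown with a deliberately simplified, hypothetical sketch: learn the average multi-voxel pattern for each attentional state from labelled scans, then assign new scans to whichever learned pattern they most resemble. The voxel counts, noise level, and nearest-centroid rule are all illustrative assumptions on my part, not the method of any particular study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy MVPA: 50 'voxels' whose activity pattern differs
# subtly depending on which of two stimuli the participant attends to.
n_voxels = 50
pattern_a = rng.normal(size=n_voxels)  # attend-to-stimulus-A signature
pattern_b = rng.normal(size=n_voxels)  # attend-to-stimulus-B signature

def scan(attending_a):
    """Simulate one noisy scan of the voxel pattern."""
    base = pattern_a if attending_a else pattern_b
    return base + rng.normal(size=n_voxels)

# Training: collect labelled scans, average them per attentional state.
centroid_a = np.mean([scan(True) for _ in range(40)], axis=0)
centroid_b = np.mean([scan(False) for _ in range(40)], axis=0)

def classify(volume):
    """Nearest-centroid decoding of the attended stimulus."""
    dist_a = np.linalg.norm(volume - centroid_a)
    dist_b = np.linalg.norm(volume - centroid_b)
    return dist_a < dist_b  # True => decoded as 'attending to A'

# Test on fresh scans the classifier has never seen.
trials = [(classify(scan(True)), True) for _ in range(50)] + \
         [(classify(scan(False)), False) for _ in range(50)]
accuracy = np.mean([pred == truth for pred, truth in trials])
```

The key point the toy captures is that the decoded label reflects an internal state (what is being attended to), not a property of the stimulus itself, which is why this kind of result edges closer to genuine mind reading.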
Neuroimaging has, however, taken the concept of mind reading further, into the realm of decoding mental events that don’t rely on any external stimulation at all. Recent studies have found that it is possible to decode what broad category of object someone is imagining, in the absence of any coincident external stimulation (7), although the performance of the model is fairly modest (~50%). Similarly, it appears that the results of basic decision-making processes can be identified from brain activity, with decisions about which button to press and when to press it (8), and whether a participant is lying (9), being decipherable using models constructed in a similar way to those already described. Interestingly, the neural information that allows these decisions to be decoded occurs many seconds BEFORE the decision has actually been made, highlighting how conscious actions are likely driven by brain processes that are outside conscious awareness, rather than being the result of conscious ‘free will’. Most recently such work has been extended to more complex scenarios, with MRI data being used to predict what point a child has reached in solving an algebraic problem, and whether they are performing the calculation correctly (10).
The possibility of covert mind reading?
Clearly the aforementioned examples reflect mind reading, but do they represent the top of a ‘slippery slope’ that will lead to technology allowing the sort of covert eavesdropping envisioned by the Daily Mail? The first impediment to such technology is the process of neuroimaging itself. MRI scanners are far from being portable enough to allow forced or covert brain scanning. Furthermore, MRI scanning involves the production of a large magnetic field and the firing of electromagnetic pulses towards the object being imaged, both of which would be totally impractical outside a controlled, isolated environment. Other neuroimaging methods, such as EEG, record the electrical remnants of brain activity from outside the skull, and are therefore cheaper and more portable than MRI. However, they lack the spatial resolution that any sophisticated mind-reading application would require, and in any case they are extremely sensitive to external noise, again making them unsuitable for use outside controlled environments.
Even if we assume that future technological advances would allow systems to be developed that enable the covert collection of brain activity data, would such technology enable your innermost thoughts to be deciphered? There are a number of reasons to doubt it. Current mind-reading models can only distinguish between very broad categories of thoughts, or between very coarse categories of decisions (e.g. lie/truth, attending to one or other stimulus). To read the specific details of an individual’s thoughts you would need models that distinguished between the literally billions of different things someone could be thinking about, and the multitude of different decisions they could make. Even creating such models would involve the co-operation of individuals in a data collection process of incalculable length. Even if such data were collected, and the required level of computation to create accurate models were possible, the ability to generalize such models to the brain activity of other individuals would rely on the assumption that every person’s brain is identical in terms of where different individual thoughts and memories are stored. This seems extremely unlikely, and is in fact counter to what we know about individual differences in brain anatomy and function. Thus, while it is possible to aggregate data across participants to produce mind reading for coarse decisions, it would be impossible to replicate such a method to distinguish between more subtle categories of thought. Even in situations where the co-operation of the participant is obtained, and only a coarse distinction between different psychological states is required, such mind-reading techniques are problematic.
Taking the example of the mooted ‘MRI lie detector’, such a system will always be somewhat unreliable because, just like current physiological lie detectors, it could easily be deceived if the participant trains themselves to act as if the truth is a lie (or vice versa). This is because the brain activity associated with lying most likely relates to the emotional and cognitive processes involved in creating a false story, rather than to lying per se. It follows that simply engaging in these same emotional and cognitive processes while telling the truth should produce neural activity that mimics that produced by a lie. If even the decoding of simple decisions can be subverted so easily, attempts at more subtle discriminations between different thoughts would surely be subject to even greater uncertainty. Finally, it is important to note that all the forms of mind reading reviewed here are the result of probabilistic calculations. The parts of the brain deemed active at a certain point in time are determined by statistical computations as to whether a small signal reflects task-related neural activity or noise. Likewise, the classification of such activity as belonging to one category of thought or decision over another is also based on probabilistic inference. There is no certainty in such a process; in fact it is fraught with uncertainty.
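That last point, that the classification is probabilistic rather than certain, can be made concrete with one more hypothetical toy. Suppose (purely for illustration, with made-up numbers) that ‘lie’ and ‘truth’ each produce a single noisy neural measure drawn from overlapping Gaussian distributions; then even a textbook ‘lie-like’ measurement leaves genuine doubt.

```python
import math

# Illustrative assumption: 'truth' and 'lie' trials yield a neural
# measure centred at 0.0 and 1.0 respectively, with the same spread.
# The overlap of these distributions is what makes decoding uncertain.
mu_truth, mu_lie, sigma = 0.0, 1.0, 1.0

def p_lie(x):
    """Posterior probability of 'lie' given measurement x
    (equal priors, Bayes' rule with Gaussian likelihoods)."""
    like_truth = math.exp(-0.5 * ((x - mu_truth) / sigma) ** 2)
    like_lie = math.exp(-0.5 * ((x - mu_lie) / sigma) ** 2)
    return like_lie / (like_truth + like_lie)

# Even a measurement sitting exactly at the 'lie' mean leaves
# substantial doubt:
print(round(p_lie(mu_lie), 3))   # → 0.622
# and a measurement midway between the two means is a coin flip:
print(round(p_lie(0.5), 3))      # → 0.5
```

A participant who deliberately reproduces ‘lying-like’ processing while telling the truth is, in this picture, simply shifting their measurement towards the ‘lie’ distribution, and the classifier has no way to tell the difference.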
To conclude, it seems very unlikely that neuroimaging methods will ever be able to perform the sort of mind reading predicted by scare stories in the press. In some cases such methods may not even represent a particular improvement on the sorts of mind-reading applications that already exist. What the mind-reading research discussed in this article does allow is a greater understanding of how the brain works, which in turn provides insight into how the brain achieves the myriad feats it performs so frequently with apparent ease. The most fruitful practical application of such knowledge is likely to be in the treatment of patients with brain damage. For example, the limited mind-reading functions possible with existing neuroimaging methods may allow technology to be developed that gives patients whose brain damage leaves them unable to communicate through their peripheral nervous system some primitive form of communication through their brain activity. Your private thoughts and memories, in contrast, are likely to remain safe from the prying eyes of neuroscientists!
Image (top right) courtesy of Idea Go: http://www.freedigitalphotos.net/images/view_photog.php?photogid=809
(1) Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, et al. (2012) Reconstructing Speech from Human Auditory Cortex. PLoS Biol 10(1): e1001251. doi:10.1371/journal.pbio.1001251 http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001251
(2) Tong, F. & Pratte, M.S. (2012) Decoding Patterns of Human Brain Activity. Annual Review of Psychology, 63: 483-509. http://www.ncbi.nlm.nih.gov/pubmed/21943172
(3) Miyawaki, Y., Uchida, H., et al. (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multi-scale Local Image Decoders. Neuron 60, 915–929. http://iopscience.iop.org/1742-6596/197/1/012021
(4) Nishimoto, S., Vu, A.T., et al (2011) Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Current Biology 21, 1641–1646 http://www.sciencedirect.com/science/article/pii/S0960982211009377
(5) Kamitani Y, Tong F. 2005. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8:679–85 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1808230/
(6) Kamitani Y, Tong F. 2006. Decoding seen and attended motion directions from activity in the human visual cortex. Curr. Biol. 16:1096–102 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1635016/
(7) Reddy, L., Tsuchiya, N. & Serre, T. (2010). Reading the mind’s eye: Decoding category information during mental imagery. Neuroimage. 50(2) 818-825 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2823980/
(8) Soon CS, Brass M, Heinze HJ, Haynes JD. 2008. Unconscious determinants of free decisions in the human brain. Nat. Neurosci. 11:543–45 http://www.nature.com/neuro/journal/v11/n5/full/nn.2112.html
(9) Davatzikos C, Ruparel K, Fan Y, Shen DG, Acharyya M, et al. 2005. Classifying spatial patterns of brain activity with machine learning methods: application to lie detection. NeuroImage 28:663–68 http://www.sciencedirect.com/science/article/pii/S1053811905005914
(10) Anderson, J.R. (2012) Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model algorithms. Neuropsychologia, 50(4) 487-498. http://www.sciencedirect.com/science/article/pii/S0028393211003605