The painful truth: Magnetic bracelets, the placebo effect & analgesia

Despite the widespread availability of evidence-based medicine in the western world, ‘alternative medicines’ are still commonly used. Such medicines are usually inspired by pre-scientific medical practices that have been passed down through the generations. However, many established medical treatments also arise from traditional practices: the use of aspirin as an analgesic (pain killer), for example, has its roots in the use of tree bark for similar purposes throughout history. The difference between established medicines like aspirin and alternative medicines such as homeopathy is that the former have been shown to be effective in rigorous scientific trials.

Can magnetic bracelets help relieve joint pain in conditions like Arthritis?

A form of alternative medicine that has recently been subjected to scientific scrutiny is the use of magnetic bracelets as a method of analgesia. If effective, such therapies would provide a cheap and easy-to-implement treatment for chronic pain, such as that experienced in arthritis. Unfortunately, there is little evidence that such treatments are effective. A meta-analysis of randomised clinical trials looking at the use of magnet therapy to relieve pain found no statistically significant benefit to wearing magnetic bracelets (1). However, it can be argued that existing clinical trials have been hampered by the difficulty of finding a suitable control condition.

The placebo effect

The ‘placebo effect’ is a broad term used to capture the influence that knowledge concerning an experimental manipulation might have on outcome measures. Consider a situation where you are trying to assess the effectiveness of a drug. To do this you might give the drug to a group of patients and compare their subsequent symptoms to those of a control group of patients who do not get the drug. However, even if the drug group shows an improvement in symptoms compared to the control group, you cannot be certain that this improvement is due to the chemical effects of the drug. This is because the psychological effect of knowing you are receiving a treatment may produce a benefit in reported symptoms that would be absent from the control group. The solution to this problem is to give the control group an intervention that resembles the experimental treatment (e.g. a sugar pill instead of the actual drug). This ensures that both groups are exposed to the same treatment procedure, and therefore should experience the same psychological effects. Indeed, this control treatment is often referred to as a ‘placebo’ because it is designed to control for the placebo effect. The drug must exhibit an effect over and above the placebo treatment in order to be considered beneficial.
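
To make this concrete, here is a toy simulation (in Python, with entirely invented effect sizes) of why a drug must be compared against a placebo rather than against no treatment at all: the drug-versus-no-treatment comparison bundles the psychological benefit in with the chemical one, while the drug-versus-placebo comparison isolates the chemical effect.

```python
import random
import statistics

random.seed(1)

# Hypothetical pain-reduction scores (arbitrary units). Each participant's
# improvement = any real drug effect + the placebo effect of being treated + noise.
PLACEBO_EFFECT = 2.0   # psychological benefit of receiving *any* intervention
DRUG_EFFECT = 1.5      # additional chemical benefit of the real drug
N = 200

def improvement(gets_drug, gets_any_treatment):
    noise = random.gauss(0, 3)
    return (DRUG_EFFECT if gets_drug else 0) + \
           (PLACEBO_EFFECT if gets_any_treatment else 0) + noise

no_treatment = [improvement(False, False) for _ in range(N)]
placebo      = [improvement(False, True)  for _ in range(N)]
drug         = [improvement(True,  True)  for _ in range(N)]

print("mean improvement, no treatment:", round(statistics.mean(no_treatment), 2))
print("mean improvement, placebo     :", round(statistics.mean(placebo), 2))
print("mean improvement, drug        :", round(statistics.mean(drug), 2))

# Drug vs no treatment overstates the chemical benefit (it bundles in the
# placebo effect); drug vs placebo isolates it.
print("drug - no treatment:", round(statistics.mean(drug) - statistics.mean(no_treatment), 2))
print("drug - placebo     :", round(statistics.mean(drug) - statistics.mean(placebo), 2))
```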

A requirement for any study wishing to control for the placebo effect is that the participants must be ‘blind’ (i.e. unaware) as to which intervention (treatment or placebo) they are receiving. If participants know that they are getting an ineffective placebo treatment, the positive psychological benefits of expecting an improvement in symptoms are likely to disappear, and thus the placebo won’t genuinely control for the psychological effects of receiving an intervention.

A placebo for magnetic bracelets

The obvious placebo for a magnetic bracelet is an otherwise identical non-magnetic bracelet. The problem with this control is that it is easy for participants to work out which intervention they are getting, simply by checking whether their bracelet is magnetic. This can be illustrated by a clinical trial which appeared to show that magnetic bracelets produce significant pain relief (2). In this study participants wore either a standard magnetic bracelet, a much weaker magnetic bracelet or a non-magnetic (steel) bracelet. The standard magnetic bracelet was only found to reduce pain when compared to the non-magnetic bracelet. However, the researchers also found evidence that participants wearing the non-magnetic bracelet became aware that it was non-magnetic, and could therefore infer that they were in a control condition. This suggests that the difference between conditions might be due to a placebo effect, as the participants weren’t blind to the experimental manipulation.

This failure of blinding was not present for the other control condition (the weak magnetic bracelet), presumably because those bracelets were still somewhat magnetic. As no statistically significant difference was found between the standard and weak magnetic bracelets, it could be concluded that magnetic bracelets have no analgesic effect. However, it could also be argued that if magnetism does reduce pain, the weaker bracelet may have provided a small benefit of its own, partially ‘cancelling out’ the effect of the standard magnetic bracelet. The study could therefore be considered inconclusive, as neither control condition was capable of isolating the effect of magnetism.

More recent research

Recent clinical trials conducted by researchers at the University of York have tried to solve the issue of finding a suitable control condition for magnetic bracelets. Stewart Richmond and colleagues (3), investigating the effect of such bracelets on the symptoms of osteoarthritis, included a condition in which participants wore copper bracelets, in addition to the three conditions used in previous research. As copper is non-magnetic, it can act as a control for testing the hypothesis that magnetic metals relieve pain. However, as copper is also a traditional treatment for pain, it does not share the drawback of the non-magnetic steel bracelet regarding the expectation of success: participants are likely to have the same expectation of a copper bracelet working as they would for a magnetic bracelet.

The study found no significant difference between any of the bracelets on most of the measures of pain, stiffness and physical function. The standard magnetic bracelet did perform better than the various controls on one sub-scale of one of the three measures of pain, but this isolated positive effect was considered likely to be spurious because of the number of pain-related comparisons performed during the study (see 4). The same group has since published an almost identical study looking at the pain reported by individuals with rheumatoid arthritis rather than osteoarthritis (5). Using measures of pain, physical function and inflammation, they again found no significant differences between the four bracelet types.
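
The multiple comparisons problem can be illustrated with a little arithmetic. If each comparison is tested at the conventional 5% significance level, and we (simplistically) treat the comparisons as independent, the chance of at least one spurious ‘significant’ result grows rapidly with the number of comparisons made:

```python
# Chance of at least one false positive when running n independent
# significance tests at alpha = 0.05, with no true effect present.
alpha = 0.05
for n in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** n
    print(f"{n:2d} comparisons -> P(>=1 spurious 'significant' result) = {p_at_least_one:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```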

No effect?

The existing research literature therefore suggests that magnetic bracelets have no analgesic effect over and above a placebo effect. The use of a copper bracelet overcomes some of the problems of finding a suitable control condition against which to compare magnetic bracelets. One argument against using copper bracelets as a control is that, as they are themselves sometimes considered an ‘alternative’ treatment for pain, they may also have an analgesic effect. Such an effect could potentially cancel out any analgesic effect of the magnetic bracelets when statistical comparisons are performed. However, copper bracelets did not perform any better than the non-magnetic steel bracelets in either study (3, 5), despite the additional placebo effect that might apply in the copper bracelet condition. Indeed, on many of the measures of pain the copper bracelet actually performed worse than the non-magnetic bracelet. The copper bracelet can therefore be considered a reasonable placebo for research testing the analgesic effect of magnetic bracelets.

Despite the negative results of clinical trials, it may be wise not to rule out a potential analgesic effect of magnetic bracelets entirely. Across all three studies (2, 3, 5) the measures of pain were generally lowest in the standard magnetic bracelet group. Significant effects were found in two of the studies (2, 3), although these were confounded by the aforementioned problems concerning control conditions and multiple comparisons. It could therefore be argued that, given the existing data, magnetic bracelets may have a small positive effect, but that this effect is not large or consistent enough to produce a statistically significant difference in clinical trials. This idea could be tested by conducting trials with far more patients (and thus greater statistical power), or by using bracelets of differing magnetic strengths to see whether any reported analgesic effect increases with the strength of the magnetic field. Until such research is performed, it is best to assume that magnetic bracelets do not have any clinically relevant analgesic effect.

Image courtesy of FreeDigitalPhotos.net

References

(1) Pittler MH, Brown EM, Ernst E. (2007) Static magnets for reducing pain: systematic review and meta-analysis of randomized trials. CMAJ 177(7):736–42.

(2) Harlow T, Greaves C, White A, Brown L, Hart A, Ernst E. (2004) Randomised controlled trial of magnetic bracelets for relieving pain in osteoarthritis of the hip and knee. BMJ 329(7480):1450—4.

(3) Richmond SJ, Brown SR, Campion PD, Porter AJL, Klaber Moffett JA, et al. (2009) Therapeutic effects of magnetic and copper bracelets in osteoarthritis: a randomised placebo-controlled crossover trial. Complement Ther Med 17(5–6): 249–56.

(4) https://en.wikipedia.org/wiki/Problem_of_multiple_comparisons

(5) Richmond SJ, Gunadasa S, Bland M, MacPherson H (2013) Copper Bracelets and Magnetic Wrist Straps for Rheumatoid Arthritis – Analgesic and Anti-Inflammatory Effects: A Randomised Double-Blind Placebo Controlled Crossover Trial. PLoS ONE 8(9):

Biotech for all – taking science back to its roots?

This morning I came across a very interesting TED talk by Ellen Jorgensen entitled “Biohacking — you can do it, too” (http://on.ted.com/gaqM). The basic premise is to make biotech accessible to all by setting up community labs, where anyone can learn to genetically engineer an organism or sequence a genome. This might seem like a very risky venture from an ethical point of view, but she actually makes a good argument for the project being at least as ethically sound as your average lab: the worldwide community of ‘biohackers’ has agreed not only to abide by all local laws and regulations, but has also drawn up its own code of ethics.

So what potential does this movement have as a whole? One thing it’s unlikely to lead to is bioterrorism, an idea that the media like to imply when they report on the project. The biohacker labs don’t have access to pathogens, and it’s very difficult to turn a harmless microbe into a malicious one without access to at least the protein-coding DNA of a pathogen. Unfortunately, the example she gives of what biohacking *has* done is rather frivolous: a story of how a German man identified the dog that had been fouling in his street by DNA testing. However, she does give other examples of how the labs could be used, from discovering your ancestry to creating a yeast biosensor. This is reminiscent of another biotech project called iGEM (igem.org), where teams of undergraduate students work over the summer to create some sort of functional biotech (sensors are a popular option) from a list of ‘biological parts’.

The Cambridge 2010 iGEM team made a range of colours of bioluminescent (glowing!) E. coli as part of their project.

My view is that Jorgensen’s biohacker project might actually have some potential to do great things. Professional scientists today do important work, but are often limited by bureaucracy and funding issues, making it very difficult to do science for the sake of science. Every grant proposal has to promise a clear benefit for humanity, or in the private sector for the company’s wallet, which isn’t really how science works. The scientists of times gone by were often rich and curious people who made discoveries by tinkering and questioning the world around them, and even if they did have a particular aim in mind they weren’t constrained by the agendas of companies and funding bodies. Biohacking seems to bring the best of both worlds: a space with safety regulations and a moral code that allows anyone to do science for whatever off-the-wall or seemingly inconsequential project takes their fancy – taking science back to the age of freedom and curiosity.

Insights into the beginnings of microbiology

Pasteur Institute
Over the holidays I rediscovered a book I picked up in an antique shop a year or so ago called “Milestones in Microbiology”. I had assumed it was going to be a standard history book with lots of dates and names and events, but it turned out to be a collection of groundbreaking microbiology papers from the 16th century to the early 20th century – quite a special find for a microbiology student. Many of the papers included were written by familiar names such as Pasteur, Leeuwenhoek, Lister, Koch, Fleming and more, and the collection was compiled and translated by Thomas Brock (a familiar name to anyone who’s been set Brock’s Biology of Microorganisms as a first year text book!).

I’ve not yet read the whole collection, but having read the first few papers I’m very much sold. The early texts in the field of microbiology are not just intriguing but fairly accessible too. The style of writing is far less technical than today’s academic papers, as well as being in full prose (in those days journals didn’t have strict word limits). My favourite example of this so far is Leeuwenhoek describing one of his test subjects as “a good fellow”, a comment that would be branded unnecessary and completely beside the point in today’s academic world!

It’s not often you get the chance to view groundbreaking scientific advances through the eyes of the scientists you get taught about in textbooks. Reading the paper in which Leeuwenhoek first describes bacteria (or “little animals” as he calls them) feels like something of a privilege, as well as a trip back in time, so it’s definitely worth a read for anyone with an interest in the field. A more up-to-date version of the book seems to be available on Amazon, or for University of Sheffield students there are a few copies in Western Bank Library – enjoy!

On another note, if you’re interested in this sort of thing I’d also definitely recommend a trip to the Pasteur museum in Paris. I visited it a few years ago and, like the papers mentioned above, it’s a fascinating insight into the work of pioneering microbiologists. It’s a fairly understated part of the modern Pasteur Institute, with the museum situated in the building of the original Pasteur Institute. The museum contains plenty of scientific curiosities, such as Pasteur’s original experimental equipment, and documents his work from his early background in chemistry and stereoisomers up to his more famous vaccine and microbiological work. Finally, on a less biological theme, the museum also contains Pasteur’s living quarters and crypt, which were also part of the original institute building!


Want to lie convincingly? Get practicing!

Lying, the deliberate attempt to mislead someone, is a process that we all engage in at some time or another. Indeed, research has found that the average person lies at least once a day, suggesting that lying is a standard part of social interaction (1). Despite its common occurrence, lying is not an automatic process. Instead it represents an advanced cognitive function; a skill that requires more basic cognitive abilities to be present before it can emerge. To lie, an individual first needs to be able to appreciate the benefits of lying (e.g. a desire to increase social status) so that they have the motivation to behave deceitfully. Successful lying also requires ‘theory of mind’, the ability to understand what another person knows. This is necessary so that the would-be liar can spot, firstly, the opportunity to lie and, secondly, what sort of deception might be required to produce a successful lie. Finally, lying requires the ability to generate a plausible and coherent, but nonetheless fabricated, description of an event. Given these prerequisites it is unlikely that we are ‘born liars’. Instead, the ability to lie is believed to develop sometime between the ages of 2 and 4 (2). The fact that the ability to lie develops over time suggests that our performance of the ‘skill’ of lying should be sensitive to practice. Do people who lie more often become better at it?

Lying is tiring!
Lying is considered more cognitively demanding than telling the truth because of the extra cognitive functions that need to be recruited to produce a lie. This idea is supported both by behavioural data showing that deliberately producing a misleading response takes longer, and is more prone to error, than producing a truthful response (3), and by neurological data showing that lying requires additional activity in the prefrontal areas of the brain compared to truth telling (4). These observable differences between truth telling and lying allow a measure of ‘lying success’ to be created. For example, a successful or skilled liar should be able to produce lies more quickly and accurately than a less successful liar, perhaps to the extent that there is no noticeable difference in performance between truth telling and lying in such individuals. Likewise, if the ability to lie is affected by practice, then practice should make lies appear more like the truth in terms of behavioural performance.

Practice makes perfect (but is this a lie)?
Despite the intuitive appeal of the idea that lying becomes easier with practice, much past research has failed to find an effect of practice on lying, either when measuring behavioural (3) or neuroimaging (5) markers of lying. Such results have led to the conclusion that lying may always be significantly more effortful than truth telling, no matter how practiced an individual is at deception.

A recent study (6) has re-examined this issue. The researchers used a version of the ‘Sheffield Lie Test’, in which participants are presented with a list of questions that require a yes/no response (e.g. ‘Did you buy chocolate today?’). The experiment involved three main phases. In the first, baseline, phase participants were required to respond truthfully to half the statements and to lie in response to the other half. In the middle, training, phase the statements were split into two groups. For a control group of statements the proportion requiring a truthful response remained at 50% for all participants. For an experimental group of statements the proportion requiring a truthful response varied between participants: participants had to lie in response to 25%, 50% or 75% of these statements, giving them differing levels of ‘practice’ at lying. The final, test, phase was a repeat of the baseline phase. This design allowed two research questions to be addressed. Firstly, the researchers could identify whether practice at lying reduced the ‘lie effect’ on reaction time and error rate (i.e. the increase in reaction time and error rate that occurs when a participant is required to lie, compared to when they are required to tell the truth). Secondly, they could identify whether any reduction in the lie effect applied just to the statements on which the groups had experienced differing practice levels, or whether it also generalised to the statements where all groups had the same level of practice.
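
As a rough illustration (with invented numbers, not data from the study), the ‘lie effect’ for a participant in such a design is simply the difference between their average performance on lie trials and on truth trials:

```python
import statistics

# Invented reaction times (ms) for one participant in one phase.
# In a Sheffield-lie-test style design, the 'lie effect' is the difference
# in mean RT (and, similarly, error rate) between lie and truth trials.
truth_rts = [642, 610, 655, 631, 598, 620]
lie_rts   = [701, 688, 720, 695, 710, 684]

lie_effect_rt = statistics.mean(lie_rts) - statistics.mean(truth_rts)
print(f"lie effect (RT): {lie_effect_rt:.0f} ms")

# Practice would show up as this difference shrinking from the baseline
# phase to the test phase, computed separately for trained and untrained statements.
```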

The results revealed that practice did produce an improvement in the ability to lie during the period when the training was actually taking place, and that this improvement applied to both the control statements and the experimental statements. The participants who had to lie more demonstrated reduced error rates and reaction times compared to those who had to lie less during the training phase. However, in the test phase this improvement was only maintained for the set of statements where the frequency of lying had been manipulated. The group who had practised lying on 75% of the experimental statements were no faster or more accurate at lying on the control statements than the group who had to lie in response to just 25% of the experimental statements. These results suggest that practice can make you better at lying, but this improvement is only sustained over time for the specific lies that you have rehearsed.

Some lies may be better than others!
One important criticism of most studies on the effect of practice on lying is that they tend to use questions or tasks that require binary responses (i.e. yes/no questions). In real life, however, lying often involves the concoction of complex false narratives, a form of lying that is likely to be far more cognitively demanding than just saying ‘No’ in response to a question whose answer is ‘Yes’. Likewise, the lies tested in laboratory studies tend to be rehearsed, or at least prepared, lies. In contrast, many real-life lies are concocted at short notice, with the deceptive narrative being constructed in ‘real time’, while the person is in the process of lying. It is likely that the effect of training, and how that training generalises to other lies, will be different for these more advanced forms of lying than it is for the simpler lies that tend to be tested under laboratory conditions. Given this, if a psychologist tells you that we know for certain how practice affects the ability to deceive, you can be sure that they are lying!

________________________________________________________________________________________________________

References

(1) DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M. & Epstein, J.A. (1996) Lying in everyday life. Journal of Personality and Social Psychology, 70 (5) 979-995. http://smg.media.mit.edu/library/DePauloEtAl.LyingEverydayLife.pdf
(2) Ahern, E.C., Lyon, T.D. & Quas, J.A. (2011) Young Children’s Emerging Ability to Make False Statements. Developmental Psychology, 47 (1) 61-66. http://www.ncbi.nlm.nih.gov/pubmed/21244149
(3) Vendemia, J.M.C., Buzan, R.F., & Green, E.P. (2005) Practice effects, workload and reaction time in deception. American Journal of Psychology, 5, 413–429. http://www.jstor.org/discover/10.2307/30039073?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21101917386241
(4) Spence, S.A. (2008) Playing Devil’s Advocate: The case against fMRI lie detection. Legal and Criminological Psychology, 13, 11-25. http://psychsource.bps.org.uk/details/journalArticle/3154771/Playing-Devils-advocate-The-case-against-fMRI-lie-detection.html
(5) Johnson, R., Barnhardt, J., & Zhu, J. (2005) Differential effects of practice on the executive processes used for truthful and deceptive responses: an event-related brain potential study. Brain Research: Cognitive Brain Research, 24, 386–404. http://www.ncbi.nlm.nih.gov/pubmed/16099352
(6) Van Bockstaele, B., Verschuere, B., Moens, T., Suchotzki, K., Debey, E. & Spruyt, A. (2012) Learning to lie: effects of practice on the cognitive cost of lying. Frontiers in Psychology, November (3) 1-8. http://www.ncbi.nlm.nih.gov/pubmed/23226137

A matter of inheritance

Image courtesy of ‘Digital Dreams’ / FreeDigitalPhotos.net

The age-old ‘nature-nurture’ debate revolves around understanding to what extent various traits within a population are determined by biological or environmental factors. In this context ‘traits’ can include not only aspects of personality, but also physical differences (e.g. eye colour) and differences in the vulnerability to disease. Investigating the nature-nurture question is important because it can help us appreciate the extent to which biological and social interventions can affect things like disease vulnerabilities, and other traits that significantly affect life outcomes (e.g. intelligence). The ‘nurture’ part of this topic can be dealt with to some extent by research in disciplines such as Sociology and Psychology. In contrast genetic research is crucial to understanding the ‘nature’ part of the equation. Genetics also has relevance for the ‘nurture’ part of the debate because environmental factors such as stress and nutrition affect how genes perform their function (gene expression). Indeed genetic and environmental factors can interact in more complex ways; certain genetic traits can alter the probability of an organism experiencing certain environmental factors. For example a genetic trait towards a ‘sweet tooth’ is likely to increase the chances of the organism experiencing a high-sugar diet!

Given the importance of genetic information to understanding how organisms differ, I would argue that a basic knowledge of genetics is essential for anyone interested in the ‘life sciences’. This is true whether your interest is largely medical, psychological or social. Unfortunately if, like me, you skipped A-Level Biology for something more exciting (A-Level Physics in my case!) you might find genetics a bit of a mystery.

Some basic genetics

Genetic information is encoded in DNA (deoxyribonucleic acid). Sections of DNA that perform specific, separable functions are called genes. Genes are the units of genetic information that can be inherited from generation to generation. Most genes are arranged on long stretches of DNA called chromosomes, although a small proportion of genes are transmitted via cell mitochondria instead. Most organisms inherit two sets of chromosomes, one from each parent. Different genes perform different functions, mostly involving the creation of particular chemicals, often proteins, which influence how the organism develops. All cells in the body contain the DNA for all genes; however, only a subset of genes will be ‘expressed’ (i.e. perform their function) in each cell. This variation in gene expression between cells allows a fixed (albeit very large) number of genes to generate a vast number of different chemicals. This in turn allows organisms to vary widely in form while still sharing very similar genetic information (thus explaining how we can share around 98% of our DNA with chimpanzees, and 50% with bananas!).

The complete set of genetic information an individual possesses is called their ‘genotype’. The genotype varies between individuals (apart from identical twins) and thus defines the biological differences between us. In contrast, the ‘phenotype’ is the complete set of observable properties of an organism. Genetics tries to understand the relationship between the genotype and particular phenotypic traits: for example, how does the genetic information contained in our DNA (genotype) influence our eye colour (phenotype)? As already mentioned, environmental factors play a significant role in altering the phenotype produced by a particular genotype; put explicitly, the phenotype is the result of the expression of the genotype in a particular environment.

Heritability

Roughly speaking, heritability is the influence that a person’s genetic inheritance has on their phenotype. More formally, it is the proportion of the total variance in a trait within a population that can be attributed to genetic effects: it tells you how much of the variation between individuals can be put down to genetic differences. For example, a heritability of 60% means that 60% of the variation in the trait across the population is associated with genetic variation; this is not the same as saying that 60% of an individual’s trait is determined by their genes. In narrow-sense heritability (the most commonly used form), ‘genetic effects’ include only what is directly determined by the genetic information passed on by the parents; variation caused by interactions between different genes, and between genes and the environment, is excluded. This is the most popular usage of heritability in science because it is far more predictive of breeding outcomes, and therefore tells us more about the ‘nature’ part of the nature-nurture question, than the alternative (broad-sense) conceptualisation of heritability.
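
In quantitative genetics this is usually written as h² = V_A / V_P, the ratio of additive genetic variance to total phenotypic variance. The toy simulation below (a purely additive model, ignoring gene-gene and gene-environment interactions, with made-up variances) shows how the two quantities relate:

```python
import random
import statistics

random.seed(42)

# Toy additive model: phenotype = additive genetic value + environmental deviation.
N = 10_000
additive = [random.gauss(0, 1.0) for _ in range(N)]   # V_A = 1.0
environ  = [random.gauss(0, 1.5) for _ in range(N)]   # V_E = 2.25
phenotype = [a + e for a, e in zip(additive, environ)]

V_A = statistics.variance(additive)
V_P = statistics.variance(phenotype)
h2 = V_A / V_P
print(f"narrow-sense heritability h^2 = V_A / V_P = {h2:.2f}")  # ~1.0 / 3.25 ≈ 0.31
```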

Uses and abuses

Genetic research can provide crucial information in the fight against certain diseases. Identifying genes that are predictive of various illnesses allows us to identify individuals who are vulnerable to a disease. This in turn allows preventive measures to be implemented to counter the possible appearance of the disease. Furthermore, once the genes that contribute to a disease are known, knowledge of how those genes are expressed helps reveal the cellular mechanisms behind the disease. This improves our understanding of how the disease progresses and operates, and therefore helps with identifying treatment opportunities. In reality, of course, genetics is rarely this simple. Many conditions that have a genetic basis (i.e. that show a significant level of heritability) appear to be influenced by mutations within a large number of different genes. Indeed, in many cases, especially with psychiatric disorders, it may be that conditions we treat as one unitary disorder are in fact a multitude of different genetic disorders that have very similar phenotypes. Nevertheless, despite these problems, genetic research is helping to uncover the biological basis of many illnesses.

One problem with genetics, and heritability in particular, is that of interpretation. There is often a mistaken belief that a high level of heritability signifies that environmental factors have little or no effect on a trait. This misunderstanding springs from ignoring the fact that estimates of heritability come from a particular population, in a particular environment. If you change the environment (or indeed the population) then the heritability estimate will change. This is because gene expression is affected by environmental factors, and so the influence of genetic information on a trait will always depend to some extent on the environment. As an example, a recent study showing that intelligence is highly heritable (1) led some right-wing commentators to use it as ‘proof’ of the intellectual inferiority of certain populations, because of their lower scores on IQ tests. Such an interpretation is then used to argue that policies relating to the equal treatment of people are flawed, because some people are ‘naturally’ better. Apart from the debatable logic of the argument itself, the interpretation of the genetic finding is flawed, because a high heritability of IQ does not imply that environmental differences have no effect on IQ scores. To illustrate this point, consider that the study in question estimated heritability in an exclusively Caucasian sample from countries with universal access to education. If you expanded the sample to include people who did not have access to education, the estimate of heritability would most likely fall, because you would have increased the influence of environmental factors within the population being studied. Ironically, therefore, you could argue that only by treating everyone equally would you be able to determine who is truly stronger on a particular trait! Whatever your views on equality, the most important lesson as regards genetics is that you cannot use estimates of heritability, however high, to suggest that differences in the environment have no effect on trait outcomes.
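
The same toy model makes the point about environments explicit: keep the genetic variation fixed, widen or narrow the environmental variation, and the heritability estimate changes even though nothing about the genes has changed. (The numbers below are invented purely for illustration.)

```python
import random
import statistics

random.seed(0)

def estimated_h2(env_sd, n=10_000):
    """Heritability estimate under a purely additive toy model."""
    additive = [random.gauss(0, 1.0) for _ in range(n)]      # same genetic variance each time
    environ  = [random.gauss(0, env_sd) for _ in range(n)]   # environmental variance differs
    phenotype = [a + e for a, e in zip(additive, environ)]
    return statistics.variance(additive) / statistics.variance(phenotype)

print("uniform environment  (small V_E): h^2 ~", round(estimated_h2(env_sd=0.5), 2))  # ~0.80
print("variable environment (large V_E): h^2 ~", round(estimated_h2(env_sd=2.0), 2))  # ~0.20
```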

_________________________________________________________________________________________________________

References

(1) Davies, G. et al (2011) Genome-wide association studies establish that human intelligence is highly heritable and polygenic. Molecular Psychiatry 16, 996-1005. http://www.nature.com/mp/journal/v16/n10/full/mp201185a.html

Although not directly cited, I found the following information useful when creating the post (and when trying to get my head around Genetics!).

Quantitative Genetics: measuring heritability. In Genetics and Human Behaviour: the ethical context. Nuffield Council on Bioethics. 2002.  http://www.nuffieldbioethics.org/sites/default/files/files/Genetics%20and%20behaviour%20Chapter%204%20-%20Quantitative%20genetics.pdf

Visscher, P.M., Hill, W.G. & Wray, N.R. (2008) Heritability in the genomics era – concepts and misconceptions. Nature Reviews Genetics, 9 255-266. http://www.ncbi.nlm.nih.gov/pubmed/18319743

Bargmann, C.I. & Gilliam, T.C. (2012) Genes & Behaviour (Kandel, E.R. et al (Eds)). In Principles of Neural Science (Fifth Edition). McGraw-Hill.

Life of a pathogen: pump some iron!

Has somebody set up a miniature weightlifting gym for microbes? Not yet, but just like you and me, bacteria need iron to stay alive. However, unlike us they don’t get iron as a supplement in their cereal – they have to find it for themselves. In bacteria, iron is needed to make proteins involved in vital processes such as respiration and DNA synthesis. With the stakes so high they need specialised ways to get iron, and more often than not they have to scrounge it from us, their human host.

Iron-scavenging molecules (called siderophores) are one way that bacteria can get iron from a host. In the human body the levels of free iron are kept very low, so the siderophores have to be very good at finding iron and then hanging on to it (high affinity). Once they’ve done this they need to get back into the bacterial cell via special transporters in the cell membrane (see figure below).

So, send out some scavengers and get loads of iron? Not so simple! Firstly, the whole process takes a lot of energy for the cell. In E. coli it takes 4 different proteins just to make the siderophore, plus another 4 proteins and some ATP (the energy currency of the cell) to get it back in again. Secondly, too much iron is toxic to the cell, so it needs to make sure that it only goes to all this trouble when it really needs to – in other words, it needs some gene regulation.

This is where it gets clever. Inside the cell there’s a protein called Fur (ferric uptake regulator) that keeps an eye on how much iron is in the cell and turns the genes for iron scavenging on and off. When there’s lots of iron in the cell the iron binds to Fur. This allows Fur to bind to the iron uptake genes and turn them off, so the cell doesn’t waste any resources or overload itself with iron (see figure below). When there’s not enough iron in the cell there’s no iron spare to bind to Fur, so Fur can’t bind to the DNA. This means that the genes are active and the proteins for iron scavenging are made.
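
The logic of the Fur switch boils down to a simple conditional, sketched below as a toy model (the threshold and iron levels are made up; real regulation is graded rather than all-or-nothing):

```python
# Toy model of Fur-mediated repression of the iron-uptake genes.
# Numbers and threshold are invented purely to illustrate the logic.
FUR_BINDING_THRESHOLD = 10.0  # arbitrary intracellular iron level

def iron_uptake_genes_active(intracellular_iron: float) -> bool:
    # High iron: iron-loaded Fur binds the uptake genes' promoters -> genes OFF.
    # Low iron: Fur has no iron bound, cannot bind the DNA -> genes ON.
    fur_is_iron_loaded = intracellular_iron >= FUR_BINDING_THRESHOLD
    return not fur_is_iron_loaded

for iron in (2.0, 8.0, 15.0):
    state = "ON (make siderophores)" if iron_uptake_genes_active(iron) else "OFF (conserve energy)"
    print(f"iron = {iron:>4}: uptake genes {state}")
```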

That’s a pretty good system, but a lot of pathogenic bacteria take it a step further. When pathogens enter the body they need to spring into action to make virulence factors – the proteins and molecules that allow them to survive in the body and do all the nasty things that they do. It would be a massive waste of energy to make these all the time, so they need to be able to activate them specifically when they enter a host. Bacteria don’t have eyes or GPS, so they have to sense the environment to work out where they are. A low iron level is one signal that they are inside a host, so it makes sense to use an iron-sensing protein to regulate other virulence factor genes (figure 3). For example, E. coli uses the Fur regulator to control virulence factor genes for fimbriae (fibres which can latch onto human cells), haemolysin (a toxin that breaks open red blood cells) and Shiga-like toxin (a toxin that helps E. coli to get inside human cells).

So, in the arms race of human vs. pathogen, it seems that bacteria have found a few sneaky solutions this time. Not only have they got around the body’s iron-restriction mechanisms, but they also use low iron levels as a trigger for deploying more deadly weapons.

Spooky goings on in Psychology!

Given that it is Halloween, it seems only right to discuss some recent psychology experiments relating to potential paranormal phenomena!

Can ‘psychic’ abilities be demonstrated during controlled experiments?

Can ‘psychics’ sense information others can’t?

Today the Merseyside Skeptics Society published the results of a ‘Halloween psychic challenge’. They invited a number of the UK’s top psychics* to attempt to prove their abilities under controlled conditions, although only two psychics accepted the invitation (1, 2). In the test each psychic had to sit in the presence of five different female volunteers who were not known to them. These volunteers acted as ‘sitters’ and the psychics had to attempt to perform a ‘reading’ on them; in effect, to use their putative psychic powers to obtain information about the sitter’s life and personality. During the reading the psychic was separated from the sitter by a screen so that the psychic could not actually see the sitter. The psychics were also not allowed to talk to the sitters. These conditions ensured that any information the psychics retrieved was not gathered through processes that could be explained by non-psychic means (e.g. cold reading or semantic inference). The psychics recorded their readings by writing them down.

A copy of the five readings made by each psychic (one for each sitter) was given to each sitter, who was asked to rate how well each reading described her and to choose which reading provided the best description. If the psychic abilities were genuine, then each sitter should pick the reading that was made for her as the most accurate. Of the 10 readings (from the two psychics for each of the five sitters), only one was correctly selected by the sitter as being about them – no more than one would expect by chance. Moreover, the average ‘accuracy ratings’ the sitters gave to the readings that were actually about them were low for both psychics (approximately 3.2 out of 10). What of the one reading that a sitter did identify as an accurate description (see 1 for a full transcript)? It is noticeable that the statements in this reading (some of which were not accurate) were either very general or could be inferred from the knowledge that the sitter was a young adult female (e.g. ‘wants children’). The (correct) statement that most impressed the sitter (‘wants to go to South America’) was also pretty general and is probably true of a decent proportion of young women. It can safely be concluded that even this ‘accurate’ reading happened by chance.
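
A quick calculation shows just how unremarkable one ‘hit’ in ten is. Each sitter picks one reading out of five, so a random guess succeeds a fifth of the time; treating the ten picks as independent (a simplification), chance alone predicts about two hits:

```python
from math import comb

p = 1 / 5          # chance a sitter picks her own reading from 5 at random
n = 10             # 5 sitters x 2 psychics

expected_hits = n * p
p_at_least_one = 1 - (1 - p) ** n
p_exactly_one = comb(n, 1) * p * (1 - p) ** (n - 1)

print(f"expected hits by chance: {expected_hits:.1f}")        # 2.0
print(f"P(at least one hit):     {p_at_least_one:.2f}")       # ~0.89
print(f"P(exactly one hit):      {p_exactly_one:.2f}")        # ~0.27
```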

In terms of the experimental design, it is important to note that both psychics had, prior to the experiment, agreed to the methodology in the belief that they would be able to demonstrate their psychic powers under such conditions. Likewise, both psychics gave high confidence ratings to the readings they produced during the experiment, suggesting that they didn’t think anything which occurred during the experiment had upset their psychic powers. The study could be criticised for its small sample size, although this was due to many psychics, including some of the better-known ones like Derek Acorah and Sally Morgan, apparently refusing to take part. It could therefore be argued that, despite the psychics involved in the study failing the test, other ‘better’ psychics might pass. However, such an argument remains merely speculative until such psychics agree to take part in controlled studies.

Although these negative results may not be surprising, I still think it might be of interest to perform the experiment a different way. The problem with relying on the sitters’ ratings is that they may reflect the sitters’ own attitudes towards psychic abilities (although all the sitters were apparently open to the idea of psychic powers being genuine). For example, even though the sitters were unaware of which reading was about them, they could in theory have given a low rating to an accurate reading to ensure that no psychic abilities were demonstrated. A better methodology might be to get each sitter to provide a self-description, and then ask the psychic to choose the description that they think best fits their reading of that person. Such a test would also reduce the problem of interpreting the accuracy of vague, general statements such as ‘wants children’ that psychics are prone to give. Another interesting idea would be to get psychics, along with non-psychics and self-confessed cold readers, to perform both a blind sitting (e.g. using a method similar to that described above) and a sitting where the reader can see, and perhaps talk to, the sitter. This could provide evidence as to whether claimed psychic abilities are really just a manifestation (even an unintentional one) of cold reading. If this were the case, one would expect no difference in performance between the three groups in the blind test, but both the cold readers and the psychics to perform better in the non-blind test (with no difference between psychics and cold readers in that condition).

Can we see into the future?

The second set of experiments I wish to discuss is potentially more exciting because there is at least a hint of positive results. Instead of testing the telepathy that psychics claim to possess (i.e. the ability to transfer information without the use of the known senses), these studies investigated the phenomenon of ‘retroactive influence’ in a random sample of participants. Retroactive influence is the phenomenon of current performance being influenced by future events. In effect, it suggests that people can (at least unconsciously) see into the future!

In a series of 9 well-controlled experiments, the psychologist Daryl Bem produced results that appear to show that participants’ responses in a variety of tasks were influenced by events that occurred after those responses had been made (3). What is most impressive about these results is that Bem used a succession of different paradigms to produce the same effect, ensuring that the effect was not just due to an artifact of one particular experimental design. In brief, this is what his results appear to demonstrate:

  1. Precognitive detection: Participants had to select one of two positions in which they thought an emotive picture would appear on a computer screen. However, the computer randomly decided where to place the picture only after the participant had made their selection. Nevertheless, participants’ performance suggested that they were able to predict the upcoming position of the photo at above-chance levels.
  2. Retroactive priming: In priming, the appearance of one stimulus (the ‘prime’) just before a second stimulus that the participant has to perform a task on can either improve or worsen reaction time on that task, depending on whether the prime is congruent or incongruent with the second, ‘task’ stimulus. For example, the appearance of a positive word prior to a negative image will slow reaction time on a valence-classification task for the image (i.e. deciding whether the image is positive or negative) because the valence of the word is incongruent with the valence of the image. Bem’s results suggest that this reaction-time effect also occurs when the prime is presented after the image, and after the participant has already made their response to it.
  3. Retroactive habituation: People tend to habituate to an image, for example an aversive image that has been seen before is rated as less aversive than one that has not been seen before. Bem demonstrated that this habituation can occur even when the repeated presentation occurs after the rating of the image is made (i.e. given the choice between two images, participants will select as less aversive the image that the computer will later present to them several times).
  4. Retroactive facilitation of recall: When participants had to recall a list of words, they were shown to be better at recalling items that they were later required to perform a separate task on, even though they were unaware of which items on the list they would be re-exposed to.

It is important to note that in all these experiments the selection (by computer) of which items would appear after the initial task was performed independently of the participant’s responses, so the results could not be due to the computer somehow using those responses to decide which stimuli to present.

These findings caused much controversy and discussion within the psychological research community. Recently, three independent attempts to replicate the ‘retroactive facilitation of recall’ effect have failed, producing null results despite using almost exactly the same method as Bem’s original study, and identical software (4). These failures of replication have highlighted problems in psychological research around the concepts of replication and the ‘file-drawer problem’ (5). There isn’t space to do justice to these issues here; suffice it to say that the jury is still out on Bem’s findings, at least partly because we can’t be sure whether other failed attempts to produce these effects remain unpublished, which would make Bem’s positive results appear more impressive than they actually are. Another potential problem that is yet to be fully addressed is the issue of experimenter bias. Again this is a complex issue, and it appears to be a particular problem in research into paranormal phenomena, because positive results consistently tend to come from researchers who believe in said phenomena, while negative results consistently come from sceptical researchers (see 6 for a discussion).

Retroactive facilitation of recall is currently the only one of Bem’s effects that others have attempted to replicate in an open manner (i.e. by registering the attempt with an independent body before data collection, and by publishing the results afterwards). Until more replication is attempted, the question of whether we can unconsciously see into the future must be considered open. Hopefully these topics will be the subject of much research in the future, allowing us to find out whether these effects are real or just the consequence of some other factor. It is worth mentioning at this point another paradigm that sometimes produces positive results regarding paranormal abilities. In experiments using the ganzfeld technique (where participants’ auditory and visual systems are flooded with white noise and uniform light respectively), there is some evidence that those experiencing such stimulation are able to ‘receive’ information from someone sitting in a separate room (see 7 for a review). This appears to be a potential demonstration of telepathy, although the effect is open to the same issues of replication and experimenter bias that surround Bem’s findings. Even ignoring these uncertainties, it should be noted that in these ganzfeld experiments, and in Bem’s studies, the sizes of the effects are very modest. For example, in Bem’s precognitive detection paradigm, participants’ overall performance was at 53%, compared to a chance level of 50%, while in the ganzfeld experiments performance (choosing which one of four stimuli was being ‘transmitted’) is at around 32% against a chance level of 25%. While these differences are found to be statistically significant (in some studies) because of the large number of participants or trials used, they don’t exactly represent impressive performance! Therefore, even if such paranormal phenomena were eventually proven genuine, this wouldn’t mean that the sort of mind-reading abilities claimed by psychics are actually possible!
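
To see why such small deviations from chance can still reach statistical significance, consider how the probability of getting 53% hits by luck alone shrinks as the number of trials grows (a rough sketch assuming simple independent binary trials, which glosses over the actual experimental designs):

```python
from statistics import NormalDist

def p_at_least_normal(k, n, p=0.5):
    """Normal approximation to P(Binomial(n, p) >= k), with continuity correction."""
    mean = n * p
    sd = (n * p * (1 - p)) ** 0.5
    return 1 - NormalDist(mean, sd).cdf(k - 0.5)

# 53% hits against a 50% chance level: significance depends heavily on the number of trials.
for n in (100, 1000, 10_000):
    k = round(0.53 * n)
    print(f"n = {n:6d}: P(>= {k} hits of {n} by chance) ~ {p_at_least_normal(k, n):.3g}")
# roughly 0.31, 0.031 and ~1e-9 respectively
```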

*note that in this article the term ‘psychics’ is used merely as a label to define people who claim to have psychic powers, its use does not represent acceptance that such powers actually exist.

References
1) http://www.guardian.co.uk/science/2012/oct/31/halloween-challenge-psychics-scientific-trial
2) http://www.merseysideskeptics.org.uk/
3) Bem, D. J. (2011). Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. Journal of Personality and Social Psychology, 100(3), 407-425. Link
4) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the Future: Three Unsuccessful Attempts to Replicate Bem’s ‘Retroactive Facilitation of Recall’ Effect. Plos One, 7(3). Link
5) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Replication, replication, replication. Psychologist, 25(5), 346-348. Link
6) Schlitz, M., Wiseman, R., Watt, C., & Radin, D. (2006). Of two minds: Skeptic-proponent collaboration within parapsychology. British Journal of Psychology, 97, 313-322. Link
7) Wackermann, J., Putz, P. & Allefeld, C. (2008) Ganzfeld-induced hallucinatory experience, its phenomenology and cerebral electrophysiology. Cortex 44, 1364-1378 Link

Image from ‘Seance on a wet afternoon’ (1964) Dir: Bryan Forbes, Distribution: Rank Organisation, Studio: Allied Film Makers.

Food for the brain: How diet affects mental health

Is a diet of junk food bad for your mental health?

In a recent article published in the Guardian (originally available on his personal website) George Monbiot looked at recent scientific evidence suggesting a link between ‘junk food’ and Alzheimer’s disease (1). This prompted me to think about the wider subject of nutrition and mental health. It’s an uncomfortable subject to consider especially if, like me, you enjoy a trip to the local takeaway and the ‘occasional’ alcoholic beverage. Nevertheless the availability and popularity of processed foods in modern industrial societies (2, 3) makes the impact of diet on brain function an issue that we all need to seriously consider.

Despite a significant amount of research being undertaken into how diet affects the brain, there appears to be little discussion of the subject in public discourse. This may be due to the scientific uncertainties inherent in studying diet and mental processes, especially when contrasted with the strong influence that the commercial interests of food manufacturers and retailers hold over government decision-making. Here I intend to briefly review the difficulties researchers face in studying this topic, and what we know so far about how diet may alter mental health.

The problems of studying nutrition

A major problem with the study of diet is that it is really particular nutrients within food (e.g. vitamins and minerals) that influence our brains, rather than the foods themselves. People can only really report their diets in terms of the foodstuffs they consume, and each foodstuff contains a variety of chemicals at varying levels, each of which may be harmful, beneficial or neutral to our health to differing extents. This makes it far from straightforward to map the relationship between foodstuffs and changes in health.

A second problem is that the impact of individual nutrients is likely to be mediated by other factors, such as the nutrient’s baseline level in the body, or the presence or absence of other nutrients. For example, nutrients that are known to be beneficial to human health when consumed in food often fail to produce positive results when consumed in supplementary form (e.g. vitamin pills), an effect that is most likely due to the absence (in supplements) of naturally co-occurring chemicals that facilitate the body’s uptake of the nutrient when it is consumed via food (4). Likewise, other factors that are independent of diet, such as age, genetics and the level of physical activity, are likely to influence the effect of nutrition on health (e.g. 5). It is unethical to systematically control and manipulate a person’s entire diet over the period of time necessary to identify diet-driven changes in mental processes, and it is impossible to fully control for the influence of non-diet factors over a similar time frame. It is therefore not possible to establish causality between individual foods and health outcomes with any certainty. Of course it is possible to perform such experiments on laboratory animals, but as these animals lack many of the cognitive functions that are disrupted in neurological diseases such as dementia, such studies are of limited use when considering the impact of nutrition on mental health in humans.

In light of these problems, the effect of nutrition on health is often studied via ‘cohort studies’, where large numbers of people are surveyed about their dietary habits and health over an extended period of time. Such studies are not only expensive and time-consuming to complete, but also rely on potentially unreliable self-report measures (see (6) for a discussion). Alternatively, the influence of individual nutrients is sometimes studied by giving one group of participants supplements containing the nutrient, and another group a placebo. This approach lacks the ecological validity of cohort studies, but allows tighter control over the intake of the nutrient involved, allowing its effects to be isolated. Neither method, however, overcomes the previously mentioned problems in establishing causality.

What we do know?

Given the complex relationship between food and nutrition, and the imprecision of self-report measures, diet is often characterised in cohort studies in broad terms. One relatively common distinction is between the so-called ‘Mediterranean diet’ and the ‘Western diet’. The former involves a high intake of fruit, vegetables, fish, cereals and unsaturated fats (the type of fat that tends to be found in nuts and seeds). In contrast, the ‘Western diet’ involves the frequent consumption of foods high in saturated fats, such as red meats and dairy products, as well as processed foods such as confectionery and ‘convenience’ foods. Studies tend to show that those whose diets more closely resemble the Mediterranean diet have lower rates of both dementia and mild cognitive impairment, even after confounding factors like age, socio-economic status and physical activity are controlled for (7). More specifically, it has been shown that a high intake of fruit and vegetables, as well as of omega-3 fats (dietary rather than through supplements), predicts a reduced likelihood of dementia (8); dementia levels in those with diets high in fruit and vegetables were 2.6%, compared with 5.7% for those with diets poor in fruit, vegetables and omega-3 fats.
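
For a sense of scale, those dementia figures correspond to roughly a doubling of risk, though the absolute difference is a few percentage points:

```python
# Crude comparison of the dementia rates quoted from the cohort study (8).
poor_diet_rate = 0.057   # diets poor in fruit, vegetables and omega-3 fats
good_diet_rate = 0.026   # diets high in fruit and vegetables

print(f"risk ratio ~ {poor_diet_rate / good_diet_rate:.1f}")                                   # ~2.2
print(f"absolute difference ~ {100 * (poor_diet_rate - good_diet_rate):.1f} percentage points")  # ~3.1
```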

The neurological effects of diet are not restricted to dementia, however. There is increasing evidence that diets high in saturated fat and sugars may contribute to behavioural problems in children and adolescents, including ADHD (9, 10). Similarly, artificial food additives, such as the colourings and preservatives commonly added to confectionery and soft drinks, appear to increase hyperactivity in children (11). For example, in a double-blind placebo trial (12) it was found that children regularly given a drink containing additives became more hyperactive (as measured by parent and teacher ratings, and by performance on a computerised attention task) than those given a placebo drink with the same frequency. This effect was present in both 3-year-old and 8-year-old children, suggesting that the influence of additives is not restricted to one particular stage of development.

Evidence also exists to suggest that deficiencies in a variety of vitamins and minerals may encourage depressive symptoms. For example, double-blind placebo trials consistently show that thiamine supplements improve mood (13), while other studies have suggested that low levels of vitamins B6 and E are implicated in depression (14). The effect of diet on mood may be self-reinforcing, as depressed individuals often turn to ‘comfort eating’ (13), which is likely to involve foods that are high in saturated fats, and which in turn may promote obesity, further depressing mood and self-esteem over the long term.

In what way do nutrients affect the brain?

Due to the aforementioned complexities in identifying the contribution of different nutrients, it has proven difficult to identify the exact mechanisms by which the under- or over-abundance of certain nutrients might affect the brain. However, two interrelated systems are thought to be most vulnerable to dietary factors: the neuroinflammatory response of brain neurons, and the processes surrounding insulin signalling within the brain (15). Neuroinflammation is the immune response to neuronal damage. It acts to preserve the damaged neuron and promote its recovery, but it can also cause damage to surrounding neurons. It is thought that the beneficial effect of diets high in fruit and vegetables may be partly due to the polyphenols present in plant matter limiting neuroinflammation in the brain (e.g. 16). In terms of the second system, insulin is involved in regulating the uptake of glucose by neurons, as well as maintaining their function and structure (17). Diets that are high in saturated fats appear to promote ‘insulin resistance’, which reduces the body’s ability to utilise insulin (hence the association between obesity and type II diabetes). This in turn negatively impacts the ability of neurons to function properly and to adapt to changes in the signalling patterns of other connecting neurons. This leads to reduced neural plasticity and an increased likelihood of chronic, maladaptive neuroinflammation, both of which are likely to interfere with normal cognitive functioning. This may be the mechanism by which frequent consumption of junk food leads to a greater risk of dementia (1).

Should I change what I eat?

While it is never possible to rule out the influence of confounding factors, the basic message one can take from these studies seems fairly intuitive: we are better off eating foods that can be thought of as ‘natural’ for humans to eat. Throughout history the human race has presumably relied mainly on fruits, vegetables, nuts and cereals, supplemented with small amounts of fish and meat. It therefore makes sense that these foods would be conducive to both our physical and mental health, as the research seems to suggest. In contrast, the convenience and affordability of seemingly unnatural foods such as confectionery, processed meats and ‘ready meals’ belies their damaging impact on our health. We could do our future selves a favour by avoiding the temptation these foods provide, and making the extra effort to eat healthily.

__________________________________________________________________________________________________________________

Image courtesy of www.freedigitalphotos.net

References

  1. http://www.monbiot.com/2012/09/10/the-mind-thieves/ (retrieved 24/09/2012).
  2. Popkin, B. M. (2004). The nutrition transition: An overview of world patterns of change. Nutrition Reviews, 62(7), S140-S143. <link>
  3. Thow, A. M. (2009). Trade liberalisation and the nutrition transition: mapping the pathways for public health nutritionists. Public Health Nutrition, 12(11), 2150-2158. <link>
  4. Morris, M. C. (2012) Nutritional determinants of cognitive aging and dementia. Proc Nutr Soc, 71(1), 1-13. <link>
  5. Dauncey, M. J. (2009). New insights into nutrition and cognitive neuroscience. Proceedings of the Nutrition Society, 68(4), 408-415 <link>
  6. http://www.sciencebrainwaves.com/uncategorized/the-dangers-of-self-report/ (retrieved 24/09/2012)
  7. Sofi, F., Abbate, R., Gensini, G. F., & Casini, A. (2010). Accruing evidence on benefits of adherence to the Mediterranean diet on health an updated systematic review and meta-analysis. American Journal of Clinical Nutrition, 92(5), 1189-1196. <link>
  8. Barberger-Gateau, P., Raffaitin, C., Letenneur, L., Berr, C., Tzourio, C., Dartigues, J. F., et al. (2007). Dietary patterns and risk of dementia – The three-city cohort study. Neurology, 69(20), 1921-1930 <link>
  9. Oddy, W. H., Robinson, M., Ambrosini, G. L., O’Sullivan, T. A., de Klerk, N. H., Beilin, L. J., et al. (2009). The association between dietary patterns and mental health in early adolescence. Preventive Medicine, 49(1), 39-44 <link>
  10. Howard, A. L., Robinson, M., Smith, G. J., Ambrosini, G. L., Piek, J. P., & Oddy, W. H. (2011). ADHD Is Associated With a “Western” Dietary Pattern in Adolescents. Journal of Attention Disorders, 15(5), 403-411 <link>
  11. Schab, D. W., & Trinh, N. T. (2004). Do Artificial Food Colors Promote Hyperactivity in Children with Hyperactive Syndromes? A Meta-Analysis of Double-Blind Placebo-Controlled Trials. Developmental and Behavioral Pediatrics, 25(6), 423-434 <link>
  12. McCann, D., Barrett, A., Cooper, A., Crumpler, D., Dalen, L., Grimshaw, K., . . . Stevenson, J. (2007). Food additives and hyperactive behaviour in 3-year-old and 8/9-year-old children in the community: A randomised, double-blinded, placebo controlled trial. Lancet, 370, 1560-1567. <link>
  13. Benton, D., & Donohoe, R. T. (1999). The effects of nutrients on mood. Public Health Nutr, 2(3A), 403-409. <link>
  14. Soh, N. L., Walter, G., Baur, L., & Collins, C. (2009). Nutrition, mood and behaviour: a review. Acta Neuropsychiatrica, 21(5), 214-227 <link>
  15. Parrott, M. D., & Greenwood, C. E. (2007). Dietary influences on cognitive function with aging: from high-fat diets to healthful eating. Ann N Y Acad Sci, 1114, 389-397. <link>
  16. Lim, G. P., Chu, T., Yang, F., Beech, W., Frautschy, S. A., & Cole, G. M. (2001). The curry spice curcumin reduces oxidative damage and amyloid pathology in an Alzheimer transgenic mouse. J Neurosci, 21(21), 8370-8377. <link>
  17. http://www.thealzheimerssolution.com/insulin-brain-function-and-alzheimers-disease-is-insulin-resistance-to-blame-for-alzheimers/ (retrieved 28/09/2012)

How delusions occur, and why they may be widespread!

Why do many people believe that Crop Circles are created by alien life forms?

It is a common occurrence to come across people who believe things that seem extraordinary, and who maintain that belief even in the face of huge amounts of contradictory evidence. For example, despite vast amounts of evidence suggesting otherwise, there are people who believe that aliens create crop circles, that astrology can predict their future, and that the next Adam Sandler movie will be any good. A delusion can be defined as an extraordinary belief that is strongly held despite the presence of seemingly overwhelming evidence to the contrary. Delusions are of particular interest to psychologists and neuroscientists because they occur in a number of neurological disorders, as well as in seemingly healthy individuals. For example, a variety of paranoid or grandiose delusions frequently occur in psychotic disorders such as schizophrenia. Delusions relating to various bizarre forms of misidentification, such as the belief that a loved one is an imposter (the Capgras delusion), can also occur, often in forms of dementia such as Alzheimer’s Disease, and even in older populations who do not exhibit any other noticeable cognitive impairment (1). Delusions of various types also occur in Parkinson’s disease, in depression, and as a result of other brain traumas such as those caused by strokes.

One error or two?
On a theoretical level there has traditionally been a distinction between 1-step and 2-step theories of delusions. 1-step theories (e.g. 2) suggest that a single perceptual deficit causes delusions: the delusion represents the most logical response to the bizarre perceptual information the brain is receiving as a result of that deficit. For example, paranoid delusions may be caused by a perceptual bias towards threat signals, which leads the sufferer to conclude that some overbearing threat must be present to explain the constant warnings coming from the sensory environment. In contrast, 2-step models (e.g. 3) argue that in addition to a perceptual deficit, there must also be a second, cognitive deficit. Such theories are motivated in part by the finding that some individuals exhibit very similar perceptual deficits to those with delusions, but nevertheless do not hold delusional beliefs. For example, there are individuals with bilateral damage to specific parts of the frontal lobe who, like patients with the Capgras delusion, experience a lack of familiarity when they come into contact with a particular close relative. However, in contrast to the Capgras patients, the frontal lobe patients do not come to believe that the relative is an imposter (4). Instead they are able to understand that it is their experience that has changed, rather than their relative. While 1-step theories suggest that delusions are caused by a single neuro-perceptual deficit, which varies depending on the nature of the delusion, 2-step theories require that an additional, separate deficit exists within the neural system involved in the formation and evaluation of beliefs. Variation in this second cognitive stage explains the likelihood of adopting a delusional belief in the context of disrupted perceptual experiences, and hence the difference between the Capgras and frontal lobe patients.

How are beliefs formed and updated?
If delusions are underpinned by a 2-step deficit, with the second, cognitive step being similar across delusional disorders, then the question arises: what is the exact nature of this cognitive deficit? Recently an answer has been proposed based on the insight that our ability to navigate the world is achieved through a process of inferential learning (e.g. 5). In short, it is proposed that the brain creates representations of how the external world is organised based on the information it receives. These models of the world by their nature encapsulate our belief system, as they contain representations of how different pieces of information are related, and of what is likely to occur in any given situation. These models also allow the brain to predict both upcoming external stimulation and internal experience. When actual experience differs from what is expected, signals communicating this discrepancy (referred to as prediction-error signals) are sent back to the areas that generated the prediction, with the purpose of updating the model from which the original prediction arose. This process, when working optimally, allows us to adapt to new, unexpected information, while at the same time enabling the majority of unexceptional information we encounter to be processed quickly and with minimum effort (because it has been predicted in advance).
Within this system the updating of beliefs can be framed using the principles of Bayesian inference: the decision as to whether to adopt one of (say) two explanations for an unexpected stimulus is taken by balancing the inherent probability of each explanation (based on the current model of the world that the individual holds) against the likelihood of the unexpected stimulus having occurred if each explanation were true. When faced with a surprising or anomalous experience, such as those caused by the perceptual deficits believed to underpin the first step of delusion formation, a belief will only change if the degree to which the sensation is more probable under the new belief than under the existing belief outweighs the existing belief’s advantage in inherent probability. In order to adopt an atypical or delusional belief, whose inherent probability would usually be very low, new evidence would have to appear that is almost inexplicable within the current belief system, while being fully explainable under the new belief. For example, to believe that the moon is made of cheese would probably require you to actually travel to the moon, dig a bit of it up, put it in your mouth and taste cheese. Any lesser form of evidence would be discarded as a coincidence or trick, as the inherent probability of the moon being made of cheese given your existing belief system is (or at least should be) extremely low!
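To make this balancing act concrete, here is a minimal sketch in Python of the Bayesian comparison described above. The two hypotheses and all of the probabilities are invented purely for illustration; they are not estimates taken from any of the cited studies.

```python
# Minimal sketch of the Bayesian belief comparison described above.
# All probabilities are invented for illustration only.

def posterior_odds(prior_h1, prior_h2, likelihood_h1, likelihood_h2):
    """Odds of hypothesis 1 over hypothesis 2 after observing the evidence:
    posterior odds = prior odds * likelihood ratio."""
    return (prior_h1 / prior_h2) * (likelihood_h1 / likelihood_h2)

# H1: "this person is an imposter"    H2: "this really is my relative"
# Evidence: the face looks right but feels completely unfamiliar.
odds = posterior_odds(
    prior_h1=0.0001, prior_h2=0.9999,        # imposters are inherently very unlikely
    likelihood_h1=0.90, likelihood_h2=0.05,  # but absent familiarity fits H1 far better
)
print(f"posterior odds (imposter : relative) = {odds:.4f}")
# ~0.0018: a likelihood ratio of 18 is nowhere near enough to overcome a prior
# ratio of roughly 1:10,000, so an optimal updater retains the ordinary belief.
```

Only when the evidence is essentially inexplicable under the existing belief (a likelihood ratio in the thousands, as in the moon-tasting example) should the odds tip in favour of the extraordinary explanation.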

Delusions: A problem with prediction error?
In delusions it is proposed that this process of error-dependent updating of beliefs is disrupted, most likely through the weight (or importance) given to various prediction-error signals being sub-optimal (e.g. 6, 7). If prediction-error signals are given undue weight, then potentially unimportant deviations from expectation become flagged as highly salient. This in turn means they are given unnecessary influence in updating our belief system. An anomalous experience that would normally not be treated as particularly relevant to understanding how the world works, either because of the unusual context in which it occurred or because of its infrequency, would, if this deficit existed, be treated as important enough to warrant a change in the individual’s belief system. In terms of Bayesian inference, a system which gives undue weight to prediction errors is one that is biased towards accepting the influence of new anomalous experiences without fully taking into account the relative inherent probabilities of the competing beliefs (which would usually strongly favour the non-delusional belief) (8). A less convincing anomalous experience would therefore be sufficient to overturn an existing non-delusional belief.
As an example, reconsider the aforementioned difference between patients with frontal lobe lesions and those with the Capgras delusion. In both types of patient, the feeling of familiarity that is expected to accompany the physical recognition of a known person is absent. In the non-deluded individual, while this discrepancy is noted, it is not used to adopt the ‘imposter explanation’: the correct weight is given to the prediction error, and it is therefore not strong enough to overturn an otherwise well-supported belief that the individual is who they claim to be (a belief backed by several other pieces of information). In contrast, the deluded individual gives far too much weight to the unexpected experience of non-familiarity, and the model is changed to accommodate it through the acquisition of the belief that the person is an imposter. As the prediction-error deficit in such cases is restricted to the perceptual system dedicated to familiarity processing, other evidence that contradicts the imposter hypothesis but comes from a different source (e.g. people telling the deluded individual that they are wrong) is not treated with the same weight as the experience of absent familiarity. The delusion is therefore maintained even in the light of strong contradictory evidence.
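One way to picture this is to add a single weighting parameter to the toy Bayesian comparison sketched earlier. The multiplicative weight on the log-likelihood ratio, and all of the numbers, are illustrative assumptions rather than a model taken from the cited papers, but they show how over-weighting one prediction error can flip the conclusion.

```python
# Hedged sketch: biasing the toy Bayesian comparison by over- or under-weighting
# the prediction error (modelled here as a multiplier on the log-likelihood ratio).
import math

def log_posterior_odds(log_prior_odds, log_likelihood_ratio, error_weight=1.0):
    # error_weight = 1.0 -> optimal updating
    # error_weight > 1.0 -> prediction errors given undue influence (delusion-like)
    # error_weight < 1.0 -> new evidence discounted (belief-bias-like)
    return log_prior_odds + error_weight * log_likelihood_ratio

log_prior = math.log(0.0001 / 0.9999)  # imposters are inherently very unlikely
log_lr = math.log(0.90 / 0.05)         # absent familiarity fits "imposter" far better

for weight in (1.0, 4.0, 0.5):
    lpo = log_posterior_odds(log_prior, log_lr, error_weight=weight)
    conclusion = "imposter" if lpo > 0 else "my relative"
    print(f"weight={weight}: log posterior odds = {lpo:+.2f} -> conclude '{conclusion}'")
```

With the weight inflated to 4, the same experience of absent familiarity that an optimal updater would shrug off is enough to tip the balance towards the imposter belief; a weight below 1 corresponds to the opposite distortion, the ‘belief bias’ discussed below, in which new evidence is discounted in favour of prior convictions.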

More widespread delusions
Whereas the Capgras delusion tends to be monothematic (i.e. it relates to just one known person having been replaced by an imposter, rather than people in general being imposters), faulty prediction-error signalling can also be used to explain more widespread delusional thinking such as paranoia. For example, one potential consequence of the incorrect updating of belief systems is that the individual’s model of the world will itself become further divorced from reality, making it less able to accurately predict upcoming stimulation. This in turn will lead to a further increase in the frequency of prediction errors, to the extent that surprising or anomalous information appears to occur with baffling frequency. If the deficit in prediction-error signalling exists across more than one perceptual domain, the inferential response might be to adopt a paranoid outlook to explain this constant uncertainty in the world. For example, a delusion that MI5 are spying on the sufferer might be the best explanation for a world where objects and strangers seem to take on a sinister level of salience, and unexpected events seem to happen with alarming frequency (6).

Is healthy belief formation optimal, or are we all deluded?
The strength of a model of delusions based on deficits in the processes of inferential learning is that it can also be used to explain the characteristics of general belief formation. For example, deficits in prediction-error signalling may explain why some otherwise healthy individuals tend to adopt a wide variety of irrational beliefs. Such people may lack the perceptual deficit that causes the bizarre but specific anomalous experiences suffered by individuals with clinical delusions, but they may share with the clinical group a general deficit in inferential reasoning which results in a tendency to accept unusual beliefs that are poorly supported by the available evidence. Along similar lines, variations from optimal (Bayesian) processing may explain more general cognitive biases that seem to be present in most people (including scientists!) and which are therefore presumably hard-wired in the human brain because they have some adaptive evolutionary advantage. For example, most people display a ‘belief bias’: the tendency to evaluate the validity of evidence based on their prior beliefs, rather than on the inherent validity of the evidence as could be assessed through logical reasoning (9). This bias could be said to be the result of our system of inferential learning being sub-optimal (in Bayesian terms) in the opposite direction to that seen in delusions, such that we have a bias towards evaluating beliefs more in terms of their inherent probability (as we see it) without fully taking into account new evidence.
More generally the processes of inferential learning and belief formation may be able to explain why people who have had relatively similar types of upbringing and experience can often exhibit very different sets of beliefs. These differences are likely to be in part due to differences in the process of belief formation between individuals. It would seem very unlikely that anybody’s brain is able to process information in strict accordance with Bayesian inference, given that neural signals are coded through the transmission of neurotransmitters between groups of neurons, a process that is naturally susceptible to a significant amount of noise. Differences in beliefs between people are presumably therefore inevitable, as is the likelihood that we all, at some time, adopt irrational convictions. Of course these are just things that I believe, and I may be deluded in believing them!

Image courtesy of www.freedigitalphotos.net

References
(1) Holt, A.E., & Albert, M.L. (2007) Cognitive Neuroscience of delusions in aging. Neuropsychiatric disease and treatment, 2 (2) 181-189. Link
(2) Maher, B.A. (1974) Delusional thinking and perceptual disorder. Journal of Individual Psychology, 30:98-113. Link
(3) Coltheart, M, Langdon, R. & McKay, R. (2011) Delusional Belief. Annual Review of Psychology, 62, 271-298 Link
(4) Tranel, D., Damasio, H. & Damasio, A.R. (1995) Double dissociation between overt and covert face recognition. Journal of Cognitive Neuroscience, 7(4) 425-432. Link
(5) Friston, K. (2003). Learning and inference in the brain. Neural Networks, 16(9), 1325-1352. Link
(6) Fletcher, P. C., & Frith, C. D. (2009). Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience, 10(1), 48-58. Link
(7) Corlett, P. R., Taylor, J. R., Wang, X. J., Fletcher, P. C., & Krystal, J. H. (2010). Toward a neurobiology of delusions. Progress in Neurobiology, 92(3), 345-369. Link
(8) McKay, R. (2012). Delusional Inference. Mind & Language, 27(3), 330-355. Link
(9) Markovits, H. & Nantel, G. (1989). The belief bias effect in the production and evaluation of logical conclusions. Memory & Cognition, 17(1) 11-17. Link

Consciousness In The Brain

 

You see, but you do not observe…

A Scandal in Bohemia, The Adventures of Sherlock Holmes: Arthur Conan Doyle

Can neuroscience provide an explanation as to how the brain enables us to consciously process information?

What is the distinction between seeing and observing? The term ‘seeing’ suggests a passive process, whereas observation clearly requires something additional: attention to a particular detail or details within the visual scene, the extraction of salient information, and perhaps the further evaluation of that information. Neuroscience has made great strides in understanding the functioning of our basic sensory mechanisms, such as those that allow seeing. This work has reached such a level that we are now coming close to being able to create ‘bionic eyes’; mechanical replicas which can mimic the workings of damaged parts of the visual system (1). However, it is a much harder task to fully understand the myriad of ‘higher order’ functions that serve to differentiate observation from mere seeing. These functions are the reason that human experience is much more than the sum of the output of our sensory systems. At the heart of this problem is the need to understand the phenomenon of consciousness. Consciousness can be difficult to define precisely, with different philosophers breaking it down into different sets of features (2), producing concepts that, perhaps inevitably, tend to be somewhat vague and potentially overlapping. However, the most fundamental aspect of consciousness would appear to be our ability to experience awareness of (certain) sensory information, and to impose our higher order abilities on that information. In short, given that the majority of sensory processing is performed outside of consciousness, how is it that certain information can be sectioned off and subjected to processes such as attention, evaluation and reflection, and how is it that we are aware of both the selected data and the cognitive processes we perform on it?

Brain waves and synchronisation
The simplest way of addressing the issue of consciousness is to compare the response of the brain under circumstances where the level of conscious awareness differs. It has long been known that states of consciousness (such as wakefulness, sleep and coma) are marked by differences in the pattern of ‘brain waves’; the oscillating electrical signals that are produced by the brain. It would seem sensible therefore to assume that such changes in the pattern of brain waves reflect, at least in part, changes in the functioning of the mechanism that enables consciousness. Similar changes in brain oscillations are also seen across a wide variety of brain areas during the performance of cognitive tasks, which of course also require the conscious processing of information. In general, cognitive processes appear not only to alter the power of such oscillations, but also to evoke an increase in synchronisation between them (such that the phase difference between the signals generated by the task-activated brain areas remains constant over time). Such synchronisation is believed to allow communication between disparate brain areas; so-called ‘communication through coherence’ (3). Take the simple example of one neuronal population passing a signal to another. To provide the greatest likelihood of the signal being received, the sending neurons must all fire at the same time (hence the oscillating nature of brain waves), thus maximising the signal sent to the receiving neurons. However, the timing of this signal is also important. To maximise the chance of the signal being propagated, the firing of the sending neurons must be timed so that the signal arrives when the receiving neurons are optimally receptive to it (or alternatively, if inhibition of signalling is required, when the receiving neurons are optimally insensitive to it). Therefore, when different brain areas need to communicate in order to facilitate cognitive processing, their patterns of neuronal firing must achieve coherence: they tend to synchronise such that (for unidirectional, excitatory signals at least) the conduction delay between the two areas is equal to the phase difference between the two oscillating signals.
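The timing argument can be illustrated with a toy simulation. This is only a sketch: the 40 Hz rhythm, the 5 ms conduction delay and the cosine ‘excitability’ profile are arbitrary stand-ins for real population dynamics.

```python
# Toy sketch of 'communication through coherence': the sender fires at the peaks
# of a 40 Hz rhythm, its spikes arrive after a fixed conduction delay, and their
# impact is scaled by the receiver's own oscillating excitability at arrival.
# All parameters are illustrative assumptions.
import numpy as np

freq = 40.0                                   # shared rhythm, Hz (gamma band)
delay = 0.005                                 # conduction delay, s (5 ms)
sender_peaks = np.arange(0, 1.0, 1.0 / freq)  # sender fires at its excitability peaks
arrivals = sender_peaks + delay               # when those spikes reach the receiver

# Sweep the phase lag of the receiver's rhythm relative to the sender's
lags = np.linspace(0, 2 * np.pi, 360, endpoint=False)
drive = np.array([np.cos(2 * np.pi * freq * arrivals - lag).mean() for lag in lags])

best_lag = lags[drive.argmax()]
print(f"received drive is maximal at a phase lag of {best_lag:.2f} rad")
print(f"phase advance accumulated over the conduction delay: {2 * np.pi * freq * delay:.2f} rad")
# Both values come out at ~1.26 rad: the signal is transmitted most effectively
# when the receiver's rhythm lags the sender's by exactly the conduction delay.
```

If inhibition rather than excitation were required, the optimal lag would instead place the arrivals in the trough of the receiver’s excitability cycle.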

Global Neuronal Workspace
As the cognitive tasks that produce neural synchrony all require conscious processing of some sort, we would expect that the experience of consciousness in general must rely on changes in synchrony between brain areas. Indeed, studies that have directly compared conscious vs non-conscious processing (e.g. comparing instances where the same stimulus is consciously perceived versus instances where it is not) have found an increase in synchronisation between distant cortical sites not directly related to the processing of the relevant sensory information (e.g. 4). Evidence from several MRI studies suggests that the location of these synchronising sites is consistent across different tasks, involving a specific set of areas in the frontal and parietal lobes as well as the thalamo-cortical circuits that control the flow of sensory information to and from the cortex (see 5 for a review). The relevance of this finding to consciousness is supported by evidence that the altered brain responses seen between different states of consciousness appear to be generated by a similar set of areas (6). This has led to the idea that these brain areas represent a ‘global neuronal workspace’ (GNW: 5, 7) that supports consciousness. The GNW system is thought to be able to orchestrate synchronisation between different sensory processing areas in such a way as to allow certain sensory representations to be amplified and maintained, while inhibiting others. As synchronisation facilitates neuronal communication, it may allow the specific information held within different sensory areas to form a single, multi-sensory representation within the workspace, explaining how the conscious experience of perception is of a unified sensation despite the fact that information from each sense is analysed separately (8 – the ‘perceptual binding’ problem). In addition, the parietal and frontal areas of the GNW contain a large number of neurons with long axons, which allow these areas to project information to a wide variety of disparate brain areas. This in turn is thought to allow them to make the representation held within the GNW available to the areas of the brain involved in higher processing functions. In effect, the amplified representation that is maintained by the GNW is also broadcast to these other processing sites, thus allowing higher order processing of conscious information. It is this selection and amplification of a specific representation, and its subsequent global availability (to other brain areas), which we experience as consciousness. The concept of synchronous firing and a global neuronal workspace may also help explain other aspects of the conscious experience, such as metacognition (our ability to perform mental processing on the outputs of other mental processing, e.g. to know what we know). Metacognition may simply be the conscious component of a much larger perceptual system that is continuously reflecting on our own activity and its likely consequences (9); the metacognition we experience consciously may therefore simply be the instances where this process reaches conscious access via the GNW and is therefore exposed to other higher order processing functions.

The consequences of a neural explanation of consciousness
The study of the neural basis of consciousness is an exciting but complex subject. It also raises significant philosophical questions. The idea that consciousness is merely a manifestation of the firing patterns of neurons and their arrangement vis-a-vis each other is not a particularly controversial conclusion from a neuroscience perspective, as one would expect every aspect of human cognition to manifest via changes in brain physiology. However, the topic is controversial more generally because it suggests that if something as core to our being, to our experience of being ‘human’, as consciousness is in fact solely reliant on biological mechanisms, then concepts such as the mind, the soul and free will are redundant. If there is no ‘ghost in the machine’ driving our conscious behaviour, then are we really nothing more than a collection of tissue; are we really just, in effect, extremely complex machines? This discussion has important implications for philosophy and morality (for an interesting discussion on this topic see 10). More optimistically, however, the ability to understand the biological underpinnings of consciousness can lead to a greater understanding of the basis of neurological disorders that cause the loss of conscious abilities, and of psychiatric symptoms that relate to the disruption of consciousness. For example, many people suffering from forms of psychosis experience what could be termed failures of consciousness, such that patterns of conscious thought become disordered, or they feel that their thoughts are being read or even controlled by others. An understanding of how the brain generates consciousness is surely an important step in identifying what has gone wrong in these situations, and potentially how they can be remedied.

                                                                                                                                                   

Image ‘Idea and Creative Concept’ by ‘Mr Lightman’, courtesy of freedigitalphotos.net http://www.freedigitalphotos.net/images/view_photog.php?photogid=3921

References
1. Mathieson et al (2012). Photovoltaic retinal prosthesis with high pixel density. Nature Photonics, 6, 391-397. http://www.nature.com/nphoton/journal/v6/n6/full/nphoton.2012.104.html
2. Gok, S.E., and Sayan, E. (2012) A philosophical assessment of computational models of consciousness. Cognitive Systems Research 17–18 (2012) 49–62. http://www.sciencedirect.com/science/article/pii/S1389041711000635
3. Fries, P. (2005) A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474-480. http://www.sciencedirect.com/science/article/pii/S1364661305002421
4. Doesburg, S.M., Green, J.J., McDonald, J.J., and Ward, L.M. (2009). Rhythms of consciousness: Binocular rivalry reveals large-scale oscillatory network dynamics mediating visual perception. PLoS ONE 4, e6142. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0006142
5. Dehaene, S. and Changeux, J.P. (2011). Experimental and Theoretical Approaches to Conscious Processing. Neuron, 70, 201-227. http://www.cell.com/neuron/abstract/S0896-6273%2811%2900258-3
6. Boly, M et al (2008) Intrinsic brain activity in altered states of consciousness – How conscious is the default mode of brain function? Annals of the New York Academy of Sciences. 1129, 119-129. http://www.ncbi.nlm.nih.gov/pubmed/18591474
7. Dehaene, S. & Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework, Cognition 79 1–37. http://www.jsmf.org/meetings/2003/nov/Dehaene_Cognition_2001.pdf
8. Varela, F., Lachaux, J.P., Rodriguez, E., and Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nat. Rev. Neurosci. 2, 229–239. http://www.nature.com/nrn/journal/v2/n4/abs/nrn0401_229a.html
9. Timmermans, B., Schilbach, L., Pasquali, A., and Cleeremans, A. (2012) Higher order thoughts in action: consciousness as an unconscious re-description process. Phil. Trans. R. Soc. B (2012) 367, 1412–1423. http://rstb.royalsocietypublishing.org/content/367/1594/1412.abstract
10. http://www.time.com/time/magazine/article/0,9171,1580394-1,00.html