Why are we ‘Looking for Aliens’?

The idea that there might be alien life elsewhere in the universe has captured the imaginations of generations of scientists, writers, artists and… well, pretty much everyone! Science Brainwaves has a fantastic *free* lecture coming up on Friday 8th November, where Dr Simon Goodwin will describe how astronomers are looking for life on other planets, and what it might be like. So, without giving away too many spoilers, I thought it would be the perfect opportunity to find out what got our ancestors thinking about aliens, and what we might do if we find them…

A 17th Century illustration of the heliocentric system suggested by Copernicus (by Andreas Cellarius, from the Harmonia Macrocosmica, 1660)
Picture from Wikipedia

It’s difficult to say (or at least difficult for me to say, with my limited resources and time!) when people first started thinking about the possibility of life on other planets. However, it’s fair to say that big astronomical discoveries have probably captured people’s imaginations throughout the ages – in the same way that the moon landing got everyone talking about little green men. One such breakthrough is the ‘Heliocentric Revolution’. Heliocentrism is the model of the solar system with the sun at the centre instead of the earth, an idea that has been around since at least the 3rd century BC. However, it was Copernicus who revived the idea in the 16th century, and it was expanded on by the works of Kepler (who calculated the orbits of the planets) and Galileo (who observed other planets by telescope). The spread of the idea that earth wasn’t the centre of the universe must have made our ancestors wonder what else could be out there. Earth was no longer special, just another planet orbiting the sun, so why shouldn’t there be other life-filled planets like ours?

 

alien contact

Top left: The Voyager Golden Record
Bottom left: The Pioneer Plaque
Right: The Arecibo Message (decoded and coloured)
All pictures are from Wikipedia

So far we’ve obviously not had much luck in finding life, but it’d probably be prudent to think about what we’d do if we do find it – especially if it’s intelligent. Stephen Hawking has been heard to offer an opinion on the subject:

“If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans,”

“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

Not the most optimistic of outlooks, but he’s got a point. Several attempts to contact alien life have been made by astronomers, but have they given away too much information? In 1972 and 1973 the ‘Pioneer Plaques’ were sent out on the Pioneer 10 and 11 spacecraft, followed in 1974 by the broadcast of the ‘Arecibo Message’ (both pictured right). A slightly more artistic message was sent out in 1977 on the ‘Voyager Golden Record’, which contained information on the sights and sounds of earth. It’s quite romantic to think that if these messages reached intelligent alien life they might just pop in for a cuppa to say ‘hi’, but the consequences could be a lot worse if the aliens were hostile (and if you’ve got a flair for the melodramatic).

As a microbiologist I can’t help but be a little cynical about grand ideas of intelligent life. At the moment we’ll probably be lucky to find some basic single-celled life – which I’ve heard doesn’t tend to be all that talkative (but which, as a microbiologist, I would find much more exciting anyway!). Anyway, who am I to say what we may or may not find (with all the experience of a 3rd year astrobiology module) – come and hear it from the expert at Science Brainwaves’ free Looking for Aliens Lecture!*

 

*did I mention it’s free? :P

Designer Babies – What’s It All About?

Throughout the social and scientific worlds, there is controversy surrounding the potential to genetically modify embryos to create ‘designer babies’. These are embryos that have been screened for genetic diseases, or selected so that they carry only qualities desired by the parents. However, there are many stories in the media which exaggerate and distort the facts – something that can even be seen in the term ‘designer babies’ itself. It is important to think about the likelihood and implications of this idea, and to outline what actually gave rise to the concept.

We could suggest that the idea of genetically engineered embryos, or the ideas that led to it, originated in 1978 with the first in-vitro fertilisation (IVF) treatment. The procedure gave, and still gives, hundreds of infertile couples a chance to have a child by transferring an egg fertilised in a laboratory into the mother’s uterus. It subsequently led to a procedure known as preimplantation genetic diagnosis (PGD). This is a technique used to profile the genome of an embryo – a form of genetic profiling and embryo screening, and a more technical and accurate way of thinking about ‘designer babies’. In terms of health benefits, using PGD means embryos can be screened outside the womb, so that embryos carrying only normal, healthy genes – free from genetic abnormalities – can be selected. Whilst this is how the technique is currently used, PGD could in the future be used to select any desired trait of a child, such as eye colour, intelligence or athleticism; to select embryos without a genetic disorder; to increase the chance of a successful pregnancy; to match a sibling in order to act as a donor; or for sex selection – in other words, to design your own baby. Selecting the gender of a child is already possible because only the X or Y chromosome needs to be identified, but other traits are more difficult due to the amount of genetic material required. Recent breakthroughs have meant that every chromosome in an embryo can be scanned for genes involved in anything from Down’s Syndrome to lactose intolerance using a single microchip, but how advanced is this, and what are the ethics behind it?

There is a large array of ethical, social and scientific concerns over the concept of creating a ‘perfect’ child. Some people worry that in the future there will be an imbalance between genders in the general population, especially in societies that favour boys over girls, such as China. Another key issue is that there is an element of eugenics to this idea – PGD could mean that people with ‘unattractive’ qualities become rarer, and society may come to discriminate against those who have not been treated. Taken to a more extreme perspective, it could be suggested that we may end up with a race of ‘super-humans’ and a divide between those who have been treated and those who haven’t. This selection of genotypes also suggests a potential deleterious effect on the human gene pool, meaning less genetic variation. Whilst at first this may seem positive, because you could eliminate genetic disorders such as haemophilia A before they become prevalent, it is also likely that new diseases may evolve and accidentally be introduced into the human race. With a reduced gene pool there is less variation for evolution to act on, and we would therefore be more susceptible to new diseases having a dramatic effect. It is clear from these concerns that regulations must be put in place and strictly enforced before any new advances are made.

So, how close are we to being able to ‘customise’ our children?
In terms of altering genes already present in the embryo, we are already well on our way to refining this technology. Scientists have been altering animal genes for years, and germline gene therapy is already being used on animals. Germline gene therapy is now being closely linked with, and developed alongside, PGD – and it could soon be used to change human genetics. Our germline cells are our sex cells (egg and sperm), and this branch of gene therapy essentially involves manipulating and adding new genes to these cells. The clear possibility, in terms of PGD, is that any trait could be added to an embryo to create a designer baby. This might involve adding a gene to stop a genetic disorder being expressed in a baby’s phenotype by correcting faults as they are identified by PGD, but it could also mean that only certain people will be able to advance in society.

On the other hand, however, before these ‘more advanced’ humans can be created we need to learn more about the genetic code. The basis of all genetic technologies lies in the human genome, and whilst PGD advances are ever-increasing, at present we can only use the technique to look at one or two genes at a time. We therefore cannot use it to alter the genes in embryos, which would logically lead us to think about gene therapy instead; but the current lack of technology, and the strict regulations around experimenting with germline gene therapy, make it unlikely that anyone will be able to create a completely designer baby in the near future.

Designing our babies is a reality that government bodies and various organisations are beginning to accept and address fully, with society’s view of the moral implications of PGD and gene therapy being a key factor in determining how far the concept can advance. There will be ever more debates and controversy over the acceptable applications of gene technologies in humans and human embryos.


The Downfall of Antibiotics

Unless you have missed the countless headlines over the past few years about MRSA and hospital superbugs, you are probably aware that antibiotic resistance is a huge problem in healthcare at the moment. It brings with it visions of a post-apocalyptic world of widespread plagues, death and destruction, and a strong desire to constantly wash your hands the moment you enter a hospital. This raises the question: how real is this problem?

The bad news is that it is very real. Around the world a plethora of diseases, from cystitis to TB, have shown levels of resistance to our current drug arsenal, even to those drugs we have kept back to use as a last resort. The even worse news is that, in terms of new drugs in the pipeline, the well is beginning to dry up. If infectious disease is a war, we are definitely losing.

Bacterial resistance is a problem which has become more widespread year on year. It is caused by a build-up of mutations in bacteria which stop the antibiotics from working properly, so that they no longer kill the germs or stop their growth. This is generally driven by the over-prescribing and misuse of antibiotics, such as GPs prescribing them for viral infections, or patients not finishing their prescribed course of drugs. It is also caused by the widespread use of antibiotics in agriculture to promote growth in cattle. This has created an environment where pathogens are constantly meeting and combating low levels of antibiotics, favouring resistant strains over susceptible ones.

Traditional antibiotics may no longer be a viable option, and this has sparked a search for alternatives to our current drugs which work in ways other than simply killing the bacteria. The good news is that there is a lot of promising research currently at different stages of development. One avenue that is showing potential is that of anti-adhesion therapies. These are drugs which prevent bacteria from gripping the cells of the body. If they cannot grab hold of the body, they cannot overcome the strategies we have evolved to stop them colonising us, such as the mucus in our airways or the flushing out of germs by urine in the urinary tract. This means they can’t colonise our bodies in high enough numbers to cause us harm. Because these anti-adhesion drugs only prevent bacteria from attaching to host cells, they do not put the same selective pressure on the bacteria and are therefore not likely to induce resistance, meaning they have high potential for use in the dystopian future we all fear.

There are a large number of ways by which we can stop bacteria from attaching to us. These therapies use different means to the same end: altering the interaction between bacteria and patient so that the bacteria no longer stick efficiently to cells. Described below is just a small sample of the strategies under investigation.

So, anti-adhesion strategies are not a new idea. Cranberry juice has long been used as a home-made remedy for urinary tract infections. It has been shown to be effective in clinical research; however, the downside is that the results are less than consistent, and there is still a great deal of debate about how it actually works. A component of cranberries and related berries has proven itself as an inhibitor of bacterial attachment, and the high sugar (specifically fructose) content of cranberry juice can work to block the ‘arms’ of bacteria, known as fimbriae, which grab onto cells. For those who aren’t big fans of cranberry juice, there is more good news: this anti-infection effect is not limited to cranberries; in fact, new research has identified compounds from several other plants, including tea and red wine.

On a less appealing note for you personally, there is also evidence to suggest that breast milk may contain a cocktail of ingredients that can prevent bacteria attaching to the recipient. Breast milk has been shown to contain hundreds of proteins, sugars and antibodies, some of which may be effective anti-adherent compounds against a myriad of diseases. This makes evolutionary sense, as providing infants with ‘anti-adherence milk’ gives them a regular protective coating of the digestive system at a time when the immune system is not yet completely up and running and children are at risk from so many infections.

An alternative strategy is the use of probiotics, which uses non-harmful species of bacteria to fight the harmful kind. In the battle to colonise our body, this strategy is akin to sending reinforcements to the good guys. Commonly used species include lactobacilli and bifidobacteria, which can be added to foods like yogurt. The medical use of probiotics is currently being trialled and is showing some success against a wide range of infections, from food poisoning to vaginosis to stomach ulcers.

The final questions we should ask are: are these therapies as effective as antibiotics? Are they just going to generate more complex resistant bugs? Would we be better off concentrating all our efforts on searching for new antibiotics? Unfortunately we can’t answer these questions, at least not by ourselves. What we do know is that we are entering a post-antibiotic era. The rule book has changed, and science may need to start playing catch-up.

The painful truth: Magnetic bracelets, the placebo effect & analgesia

Despite the widespread availability of evidence-based medicine in the western world, ‘alternative medicines’ are still commonly used. Such medicines are usually inspired by pre-scientific medical practices: those which have been passed down through generations. However, many established medical treatments also arise from traditional medical practices. For example, the use of aspirin as an analgesic (painkiller) has its roots in the historical use of tree bark for similar purposes. The difference between established medicines like aspirin and alternative medicines such as homeopathy is that the former have been found to be effective when exposed to rigorous scientific trials.

Can magnetic bracelets help relieve joint pain in conditions like Arthritis?


A form of alternative medicine that has recently been subjected to scientific scrutiny is the use of magnetic bracelets as a method of analgesia. If effective, such therapies would provide cheap and easy-to-implement treatments for chronic pain such as that experienced in arthritis. Unfortunately there is little evidence of such treatments being effective. A meta-analysis of randomised clinical trials looking at the use of magnet therapy to relieve pain found no statistically significant benefit to wearing magnetic bracelets (1). However, it can be argued that existing clinical trials may have been hampered by the difficulty of finding a suitable control condition.

The placebo effect

The ‘placebo effect’ is a broad term used to capture the influence that knowledge concerning an experimental manipulation might have on outcome measures. Consider a situation where you are trying to assess the effectiveness of a drug. To do this you might give the drug to a group of patients and compare their subsequent symptomatology to a control group of patients who do not get the drug. However even if the drug group show an improvement in symptoms compared to the control group, you cannot be certain whether this improvement is due to the chemical effects of the drug. This is because the psychological effects of knowing you are receiving a treatment may produce a beneficial effect on reported symptoms which would be absent from the control group. The solution to this problem is to give the control group an intervention that resembles the experimental treatment (i.e. a sugar pill instead of the actual drug). This ensures that both groups are exposed to the same treatment procedure, and therefore should experience the same psychological effects. Indeed this control treatment is often referred to as a ‘placebo’ because it is designed to control the placebo effect. The drug must exhibit an effect over and above the placebo treatment in order to be considered beneficial.

A requirement for any study wishing to control for the placebo effect is that the participants must be ‘blind’ (i.e. unaware) as to which intervention (treatment or placebo) they are getting. If the participant is aware that they are getting an ineffective placebo treatment, the positive psychological benefits of expecting an improvement in symptoms are likely to disappear, and thus the placebo won’t genuinely control for the psychological effects of receiving an intervention.

A placebo for magnetic bracelets

The obvious placebo for a magnetic bracelet is an otherwise identical non-magnetic bracelet. However, the problem with using non-magnetic bracelets as a control is that it is easy for the participant to identify which intervention they are getting, as magnetic and non-magnetic materials are easy to tell apart. This can be illustrated by considering a clinical trial which appeared to show that magnetic bracelets produce a significant pain-relief effect (2). In this study participants wore either a standard magnetic bracelet, a much weaker magnetic bracelet, or a non-magnetic (steel) bracelet. The standard magnetic bracelet was only found to reduce pain when compared to the non-magnetic bracelet. However, the researchers also found evidence that participants wearing the non-magnetic bracelet became aware that it was non-magnetic, and could therefore infer that they were in a control condition. This suggests that the difference between conditions might be due to a placebo effect, as the participants weren’t blind to the experimental manipulation.

This failure of blinding was not present for the other control condition (the weak magnetic bracelet), presumably because these bracelets were somewhat magnetic. As no statistically significant difference was found between the standard and weak magnetic bracelets, it could be concluded that magnetic bracelets have no analgesic effect. However, it could also be argued that if magnetism does reduce pain, the weaker bracelet may have provided a small beneficial effect which might have served to ‘cancel out’ the effect of the standard magnetic bracelet. The study could therefore be considered inconclusive, as neither of the control conditions was capable of isolating the effect of magnetism.

More recent research

Recent clinical trials conducted by researchers at the University of York have tried to solve the issue of finding a suitable control condition for magnetic bracelets. Stewart Richmond and colleagues (3) included a condition in which participants wore copper bracelets, in addition to the three conditions used in previous research, while investigating the effect of such bracelets on the symptoms of Osteoarthritis. As copper is non-magnetic it can act as a control in testing the hypothesis that magnetic metals relieve pain. However, as copper is also a traditional treatment for pain, it does not have the drawback of the non-magnetic bracelet regarding the expectation of success: the participant is likely to have the same expectation of a copper bracelet working as they would for a magnetic bracelet.

The study found no significant difference between any of the bracelets on most of the measures of pain, stiffness and physical function. The standard magnetic bracelet did perform better than the various controls on one sub-scale of one of the three measures of pain taken, but this isolated positive effect was considered likely to be spurious because of the number of comparisons relating to changes in pain that were performed during the study (see 4). The same group has recently published an almost identical study relating to the pain reported by individuals suffering from Rheumatoid Arthritis rather than Osteoarthritis (5). Using measures of pain, physical function and inflammation, they again found no significant differences in effect between the four different bracelet types.
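To get a feel for why one isolated positive result among many outcome measures is usually treated as spurious, here is a minimal sketch of the multiple comparisons problem. It simply assumes a set of independent comparisons, each tested at the conventional 5% significance level – a generic illustration, not a reconstruction of the trial’s actual analysis.

```python
# Chance of at least one 'significant' result appearing by luck alone when
# m independent comparisons are each tested at the conventional 5% level.
alpha = 0.05
for m in (1, 5, 10, 20):
    family_wise_error = 1 - (1 - alpha) ** m
    print(f"{m:>2} comparisons: P(at least one spurious hit) = {family_wise_error:.2f}")
```

With 20 comparisons there is roughly a 64% chance of at least one apparently ‘significant’ result even if the bracelets do nothing at all, which is why corrections for multiple comparisons matter.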

No effect?

The existing research literature seems to suggest that magnetic bracelets have no analgesic effect over and above a placebo effect. The use of a copper bracelet overcomes some of the problems of finding a suitable control condition against which to compare magnetic bracelets. One argument against using copper bracelets as a control is that, as they themselves are sometimes considered an ‘alternative’ treatment for pain, they may also have an analgesic effect. Such an effect could potentially cancel out any analgesic effect of the magnetic bracelets when statistical comparisons are performed. However, copper bracelets did not perform any better than the non-magnetic steel bracelets in either study (3, 5), despite the potential additional placebo effect that might apply in the copper bracelet condition. Indeed, on many of the measures of pain the copper bracelet actually performed worse than the non-magnetic bracelet. The copper bracelet can therefore be considered a reasonable placebo to use in research testing the analgesic effect of magnetic bracelets.

Despite the negative results of clinical trials, it may be wise not to entirely rule out a potential analgesic effect of magnetic bracelets. Across all three studies (2, 3, 5) the measures of pain were generally lowest in the standard magnetic bracelet group. Indeed, significant effects were found in two of the studies (2, 3), although these were confounded by the aforementioned problems concerning control conditions and multiple comparisons. Nevertheless it could be argued that, given the existing data, magnetic bracelets may have a small positive effect, but that this effect is not large or consistent enough to produce a statistically significant difference in clinical trials. This theory could be tested by conducting trials with far more patients (and thus greater statistical power), or by using a number of bracelets of differing magnetic strengths to see if any reported analgesic effect increases with the strength of the magnetic field. Until such research is performed, it is best to assume that magnetic bracelets do not have any clinically relevant analgesic effect.

Image courtesy of FreeDigitalPhotos.net

References

(1) Pittler MH, Brown EM, Ernst E. (2007) Static magnets for reducing pain: systematic review and meta-analysis of randomized trials. CMAJ 177(7): 736-42.

(2) Harlow T, Greaves C, White A, Brown L, Hart A, Ernst E. (2004) Randomised controlled trial of magnetic bracelets for relieving pain in osteoarthritis of the hip and knee. BMJ 329(7480): 1450-4.

(3) Richmond SJ, Brown SR, Campion PD, Porter AJL, Klaber Moffett JA, et al. (2009) Therapeutic effects of magnetic and copper bracelets in osteoarthritis: a randomised placebo-controlled crossover trial. Complement Ther Med 17(5–6): 249–56.

(4) https://en.wikipedia.org/wiki/Problem_of_multiple_comparisons

(5) Richmond SJ, Gunadasa S, Bland M, MacPherson H (2013) Copper Bracelets and Magnetic Wrist Straps for Rheumatoid Arthritis – Analgesic and Anti-Inflammatory Effects: A Randomised Double-Blind Placebo Controlled Crossover Trial. PLoS ONE 8(9):

Biotech for all – taking science back to its roots?

This morning I came across a very interesting TED talk by Ellen Jorgensen entitled “Biohacking — you can do it, too” (http://on.ted.com/gaqM). The basic premise is to make biotech accessible to all by setting up community labs, where anyone can learn to genetically engineer an organism or sequence a genome. This might seem like a very risky venture from an ethical point of view, but she actually makes a good argument for the project being at least as ethically sound as your average lab, with the worldwide community of ‘biohackers’ having agreed not only to abide by all local laws and regulations, but also to draw up its own code of ethics.

So what potential does this movement have as a whole? One thing it’s unlikely to lead to is bioterrorism, an idea that the media like to imply when they report on the project. The biohacker labs don’t have access to pathogens, and it’s very difficult to make a harmless microbe into a malicious one without access to at least the protein-coding DNA of a pathogen. Unfortunately, the example she gives of what biohacking *has* done is rather frivolous, with a story of how a German man identified the dog that had been fouling in his street by DNA testing. However, she does give other examples of how the labs could be used, from discovering your ancestry to creating a yeast biosensor. This is reminiscent of another biotech project called iGEM (igem.org), where teams of undergraduate students work over the summer to create some sort of functional biotech (sensors are a popular option) from a list of ‘biological parts’.


The Cambridge 2010 iGEM team made a range of colours of bioluminescent (glowing!) E. coli as part of their project.

My view is that Jorgensen’s biohacker project might actually have some potential to do great things. Professional scientists today do important work, but are often limited by bureaucracy and funding issues – making it very difficult to do science for the sake of science. Every grant proposal has to have a clear benefit for humanity, or in the private sector for the company’s wallet, which isn’t really how science works. The scientists of times gone by were often rich and curious people who made discoveries by tinkering and questioning the world around them, and even if they did have a particular aim in mind they weren’t constrained by the agendas of companies and funding bodies. Biohacking seems to bring the best of both worlds: a space with safety regulations and a moral code that allows anyone to do science for whatever off-the-wall or seemingly inconsequential project takes their fancy – taking science back to the age of freedom and curiosity.

Insights into the beginnings of microbiology

Pasteur Institute
Over the holidays I rediscovered a book I picked up in an antique shop a year or so ago called “Milestones in Microbiology”. I had assumed it was going to be a standard history book with lots of dates and names and events, but it turned out to be a collection of groundbreaking microbiology papers from the 16th century to the early 20th century – quite a special find for a microbiology student. Many of the papers included were written by familiar names such as Pasteur, Leeuwenhoek, Lister, Koch, Fleming and more, and the collection was compiled and translated by Thomas Brock (a familiar name to anyone who’s been set Brock’s Biology of Microorganisms as a first year text book!).

I’ve not yet read the whole collection, but having read the first few papers I’m very much sold. The early texts in the field of microbiology are not just intriguing but fairly accessible too. The style of writing is far less technical than today’s academic papers, as well as being in full prose (in those days journals didn’t have strict word limits). My favourite example of this so far is when Leeuwenhoek describes one of his test subjects as “a good fellow”, a comment that would be branded unnecessary and completely beside the point in today’s academic world!

It’s not often you get the chance to view groundbreaking scientific advances through the eyes of the scientists you get taught about in the textbooks. Reading the paper in which Leeuwenhoek first describes bacteria (or “little animals” as he calls them) feels like something of a privilege, as well as a trip back in time, so it’s definitely worth a read for anyone with an interest in the field. A more up-to-date version of the book seems to be available on Amazon, or for University of Sheffield students there are a few copies in Western Bank Library – enjoy!

On another note, if you’re interested in this sort of thing I’d also definitely recommend a trip to the Pasteur museum in Paris. I visited it a few years ago and, like the papers mentioned above, it’s a fascinating insight into the work of pioneering microbiologists. It’s a fairly understated part of the modern Pasteur Institute, with the museum situated in the building of the original institute. The museum contains plenty of scientific curiosities, such as Pasteur’s original experimental equipment, and documents his work from his early background in chemistry and stereoisomers up to his more famous vaccine and microbiological work. Finally, on a less biological theme, the museum also contains Pasteur’s living quarters and crypt, which were also part of the original institute building!


Want to lie convincingly? Get practicing!

Lying, the deliberate attempt to mislead someone, is a process that we all engage in at some time or another. Indeed, research has found that the average person lies at least once a day, suggesting that lying is a standard part of social interaction (1). Despite its common occurrence, lying is not an automatic process. Instead it represents an advanced cognitive function: a skill that requires more basic cognitive abilities to be present before it can emerge. To lie, an individual first needs to be able to appreciate the benefits of lying (e.g. a desire to increase social status) so that they have the motivation to behave deceitfully. Successful lying also requires ‘theory of mind’, the ability to understand what another person knows. This is necessary so that the would-be liar can spot firstly the opportunity to lie, and secondly what sort of deception might be required to produce a successful lie. Finally, lying requires the ability to generate a plausible and coherent, but nonetheless fabricated, description of an event. Given these prerequisites it is unlikely that we are ‘born liars’. Instead the ability to lie is believed to develop sometime between the ages of 2 and 4 (2). The fact that the ability to lie develops over time suggests that our performance of the ‘skill’ of lying should be sensitive to practice. Do people who lie more often become better at it?

Lying is tiring!
Lying is considered more cognitively demanding than telling the truth, due to the extra cognitive functions that need to be utilised to produce a lie. The idea that lying is cognitively demanding is supported both by behavioural data showing that deliberately producing a misleading response takes longer, and is more prone to error, than producing a truthful response (3), and by neurological data showing that lying requires additional activity in the prefrontal areas of the brain when compared to truth telling (4). These observable differences between truth telling and lying allow a measure of ‘lying success’ to be created. For example, a successful, or skilled, liar should be able to perform lies more quickly and accurately than a less successful liar, perhaps to the extent that there is no noticeable difference in performance between truth telling and lying in such individuals. Likewise, if the ability to lie is affected by practice, then practice should make lies appear more like the truth in terms of behavioural performance.
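As a rough illustration of how this behavioural ‘lie effect’ could be quantified, here is a minimal sketch using simulated reaction times and error counts. All of the numbers are invented for illustration and are not taken from the studies cited; the point is simply that the effect is the difference between lie trials and truth trials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reaction times (ms) for one participant; lies are assumed to be
# slower and more error-prone than truths (all numbers invented).
truth_rt = rng.normal(650, 80, size=100)   # 100 truthful responses
lie_rt = rng.normal(750, 90, size=100)     # 100 deceptive responses
truth_errors, lie_errors = 3, 9            # errors out of 100 trials each

# The behavioural 'lie effect': how much slower and less accurate lying is.
rt_effect = lie_rt.mean() - truth_rt.mean()
error_effect = (lie_errors - truth_errors) / 100

print(f"lie effect on reaction time: {rt_effect:.0f} ms slower")
print(f"lie effect on error rate: {error_effect:.0%} more errors")
# A highly skilled (or well-practised) liar would show effects close to zero.
```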

Practice makes perfect (but is this a lie)?
Despite the intuitive appeal of the idea that lying becomes easier with practice, much past research has failed to find an effect of practice on lying, either when measuring behavioural (3) or neuroimaging (5) markers of lying. Such results have led to the conclusion that lying may always be significantly more effortful than truth telling, no matter how practiced an individual is at deception.

A recent study (6) has re-examined this issue. They used a version of the ‘Sheffield Lie Test’ where participants are presented with a list of questions that require a yes/no response (e.g. ‘Did you buy chocolate today?’). The experiment involved three main phases. In the first, baseline phase, participants were required to respond truthfully to half the statements and to lie in response to the other half of the statements. In the middle, training phase, the statements were split into two groups. For a control group of statements the proportion that required a truthful response remained at 50% for all participants. For an experimental group of statements the proportion that required a truthful response was varied between participants. Participants either had to lie in response to 25%, 50% or 75% of these statements, thus giving the participants differing levels of ‘practice’ at lying. The final, test phase, was a repeat of the baseline phase. This design allowed two research questions to be assessed. Firstly the researchers could identify whether practice at lying reduced the ‘lie effect’ on reaction time and error rate (e.g. the increased reaction time and error rate that occurs when a participant is required to lie, compared to when they are required to tell the truth). Secondly the researchers could identify whether any reduction in the lie effect applied just to the statements on which the groups had experienced differing practice levels, or whether it also generalised to those statements where all groups had the same level of practice.

The results revealed that practice did produce an improvement in the ability to lie during the period when the training was actually taking place, and that this improvement applied to both the control statements and the experimental statements. The participants who had to lie more demonstrated reduced error rates and reaction times compared to those who had to lie less during the training phase. However in the test phase this improvement was only maintained for the set of statements where the frequency of lying had been manipulated. The group who had practiced lying on 75% of the experimental statements were no faster or more accurate at lying on the control statements than the group who had to lie in response to just 25% of the experimental statements. These results suggest that practice can make you better at lying, but this improvement is only sustained over time for the specific lies that you have rehearsed.

Some lies may be better than others!
One important criticism of most studies on the effect of practice on lying is that they tend to use questions or tasks that require binary responses (i.e. yes/no questions). However, in real life lying often involves the concoction of complex false narratives, a form of lying that is likely to be far more cognitively demanding than just saying ‘No’ in response to a question whose answer is ‘Yes’. Likewise, the lies tested in laboratory studies tend to be rehearsed, or at least prepared, lies. In contrast, many real-life lies are concocted at short notice, with the deceptive narrative being constructed in ‘real time’, whilst the person is in the process of lying. It is likely that the effect of training, and how that training generalises to other lies, will be different for these more advanced forms of lying than it is for the simpler types of lies that tend to be tested under laboratory conditions. Given this, if a psychologist tells you that we know for certain how practice impacts on the ability to deceive, you can be sure that they are lying!

________________________________________________________________________________________________________

References

(1) DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M. & Epstein, J.A. (1996) Lying in everyday life. Journal of Personality and Social Psychology, 70 (5) 979-995. http://smg.media.mit.edu/library/DePauloEtAl.LyingEverydayLife.pdf
(2) Ahern, E.C., Lyon, T.D. & Quas, J.A. (2011) Young Children’s Emerging Ability to Make False Statements. Developmental Psychology. 47 (1) 61-66. http://www.ncbi.nlm.nih.gov/pubmed/21244149
(3) Vendemia, J.M.C., Buzan, R.F., & Green, E.P. (2005) Practice effects, workload and reaction time in deception. American Journal of Psychology, 5, 413-429. http://www.jstor.org/discover/10.2307/30039073?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21101917386241
(4) Spence, S.A. (2008) Playing Devil’s Advocate: The case against fMRI lie detection. Legal and Criminological Psychology, 13, 11-25. http://psychsource.bps.org.uk/details/journalArticle/3154771/Playing-Devils-advocate-The-case-against-fMRI-lie-detection.html
(5) Johnson, R., Barnhardt, J., & Zhu, J. (2005) Differential effects of practice on the executive processes used for truthful and deceptive responses: an event-related brain potential study. Brain Research: Cognitive Brain Research, 24, 386-404. http://www.ncbi.nlm.nih.gov/pubmed/16099352
(6) Van Bockstaele, B., Verschuere, B., Moens, T., Suchotzki, K., Debey, E. & Spruyt, A. (2012) Learning to lie: effects of practice on the cognitive cost of lying. Frontiers in Psychology, November (3) 1-8. http://www.ncbi.nlm.nih.gov/pubmed/23226137

A matter of inheritance


Image courtesy of ‘Digital Dreams’ / FreeDigitalPhotos.net

The age-old ‘nature-nurture’ debate revolves around understanding to what extent various traits within a population are determined by biological or environmental factors. In this context ‘traits’ can include not only aspects of personality, but also physical differences (e.g. eye colour) and differences in the vulnerability to disease. Investigating the nature-nurture question is important because it can help us appreciate the extent to which biological and social interventions can affect things like disease vulnerabilities, and other traits that significantly affect life outcomes (e.g. intelligence). The ‘nurture’ part of this topic can be dealt with to some extent by research in disciplines such as Sociology and Psychology. In contrast genetic research is crucial to understanding the ‘nature’ part of the equation. Genetics also has relevance for the ‘nurture’ part of the debate because environmental factors such as stress and nutrition affect how genes perform their function (gene expression). Indeed genetic and environmental factors can interact in more complex ways; certain genetic traits can alter the probability of an organism experiencing certain environmental factors. For example a genetic trait towards a ‘sweet tooth’ is likely to increase the chances of the organism experiencing a high-sugar diet!

Given the importance of genetic information to understanding how organisms differ, I would argue that a basic knowledge of Genetics is essential for anyone interested in ‘life sciences’. This is true whether your interest is largely medical, psychological or social. Unfortunately if, like me, you skipped A-Level Biology for something more exciting (or A-Level Physics in my case!) you might find Genetics a bit of a mystery.

Some basic genetics

Genetic information is encoded in DNA (deoxyribonucleic acid). Sections of DNA that perform specific, separable functions are called genes. Genes are the units of genetic information that can be inherited from generation to generation. Most genes are arranged on long stretches of DNA called chromosomes, although a small proportion of genes are transmitted via cell mitochondria instead. Most organisms inherit two sets of chromosomes, one from each parent. Different genes perform different functions, mostly involving the creation of particular chemicals, often proteins, which influence how the organism develops. All cells in the body contain the DNA for all genes; however, only a subset of genes will be ‘expressed’ (i.e. perform their function) in each cell. This variation in gene expression between cells allows the fixed (albeit very large) number of genes to generate a vast number of different chemicals. This in turn allows organisms to vary widely in form while still sharing very similar genetic information (thus explaining how it can be that we share 98% of our DNA with monkeys, and 50% with bananas!).

The complete set of genetic information an individual has is called their ‘genotype’. The genotype varies between all individuals (apart from identical twins) and thus defines the biological differences between us. In contrast the ‘phenotype’ is the complete set of observable properties that can be assigned to an organism. Genetics tries to understand the relationship between the genotype and a particular individual phenotype (trait). For example how does the genetic information contained in our DNA (genotype) influence our eye colour (phenotype)? As already mentioned environmental factors play a significant role in altering the phenotype produced by a particular genotype. Explicitly the phenotype is the result of the expression of the genotype in a particular environment.

Heritability

Roughly speaking, heritability is the influence that a person’s genetic inheritance has on their phenotype. More formally, it is the proportion of the total variance in a trait within a population that can be attributed to genetic effects. It tells you how much of the variation between individuals can be attributed to genetic differences. Note that a heritability of, say, 60% is not the same as saying that 60% of an individual’s trait is determined by genetic information. In narrow-sense heritability (the most commonly used form), what counts as ‘genetic effects’ is only that which is directly determined by the genetic information passed on by the parents. This ignores variation caused by the interaction between different genes, and between genes and the environment. This is the most popular usage of heritability in science because it is far more predictive of breeding outcomes, and therefore tells us more about the nature part of the ‘nature-nurture’ question, than the alternative (broad-sense) conceptualisation of heritability.
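To make the ‘proportion of variance’ definition concrete, here is a minimal simulation sketch. The 60/40 split between genetic and environmental variance is invented purely for illustration; narrow-sense heritability then falls out as the ratio of additive genetic variance to total phenotypic variance (h² = V_A / V_P).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a trait in which additive genetic effects account for 60% of the
# variance and environmental effects for 40% (the split is invented).
n = 10_000
genetic_value = rng.normal(0.0, np.sqrt(0.6), size=n)  # additive genetic component
environment = rng.normal(0.0, np.sqrt(0.4), size=n)    # environmental component
phenotype = genetic_value + environment

# Narrow-sense heritability: additive genetic variance / total phenotypic variance.
h2 = genetic_value.var() / phenotype.var()
print(f"estimated heritability: {h2:.2f}")  # close to 0.6 for this population
```

Note that if the environmental variance in this toy population were increased, the estimated heritability would fall, even though the genes themselves haven’t changed – which is exactly the interpretation issue discussed below.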

Uses and abuses

Genetic research can provide crucial information in the fight against certain diseases. Identifying genes that are predictive of various illnesses allows us to identify individuals who are vulnerable to a disease. This then allows preventive measures to be implemented to counter the possible appearance of the disease. Furthermore, once the genes that contribute to a disease are known, knowledge of how those genes are expressed will help reveal the cellular mechanisms behind the disease. This improves our understanding of how the disease progresses and operates, and therefore helps with identifying treatment opportunities. In reality, of course, Genetics is rarely this simple. Many conditions that have a genetic basis (i.e. that show a significant level of heritability) appear to be influenced by mutations within a large number of different genes. Indeed, in many cases, especially with psychiatric disorders, it may be that conditions we treat as one unitary disorder are in fact a multitude of different genetic disorders that have very similar phenotypes. Nevertheless, despite these problems, genetic research is helping to uncover the biological basis of many illnesses.

One problem with Genetics, and heritability in particular, is that of interpretation. There is often a mistaken belief that a high level of heritability signifies that environmental factors have little or no effect on a trait. This misunderstanding springs from an ignorance of the fact that estimates of heritability come from within a particular population, in a particular environment. If you change the environment (or indeed the population) then the heritability level will change. This is because gene expression is affected by environmental factors, so the influence of genetic information on a trait will always depend to some extent on the environment. As an example, a recent study showing that intelligence was highly heritable (1) led some right-wing commentators to use it as ‘proof’ of the intellectual inferiority of certain populations, because of their lower scores on IQ tests. Such an interpretation is then used to argue that policies relating to equal treatment of people are flawed, because some people are ‘naturally’ better. Apart from the debatable logic of the argument itself, the interpretation of the genetic finding is flawed because a high heritability of IQ does not imply that environmental differences have no effect on IQ scores. To illustrate this point, consider that the study in question estimated heritability in an exclusively Caucasian sample from countries with universal access to education. If you expanded the sample to include those who did not have access to education, it would most likely reduce the estimate of heritability, as you would have increased the influence of environmental factors within the population being studied! Ironically, therefore, you could argue that only by treating everyone equally would you be able to determine who is truly stronger on a particular trait! Independent of your views on equality, the most important lesson as regards genetics is that you cannot use estimates of heritability, however high, to suggest that differences in the environment have no effect on trait outcomes.

_________________________________________________________________________________________________________

 References

(1) Davies, G. et al (2011) Genome-wide association studies establish that human intelligence is highly heritable and polygenic. Molecular Psychiatry 16, 996-1005. http://www.nature.com/mp/journal/v16/n10/full/mp201185a.html

Although not directly cited, I found the following information useful when creating the post (and when trying to get my head around Genetics!).

Quantitative Genetics: measuring heritability. In Genetics and Human Behaviour: the ethical context. Nuffield Council on Bioethics. 2002.  http://www.nuffieldbioethics.org/sites/default/files/files/Genetics%20and%20behaviour%20Chapter%204%20-%20Quantitative%20genetics.pdf

Visscher, P.M., Hill, W.G. & Wray, N.R. (2008) Heritability in the genomics era – concepts and misconceptions. Nature Reviews Genetics, 9 255-266. http://www.ncbi.nlm.nih.gov/pubmed/18319743

Bargmann, C.I. & Gilliam, T.C. (2012) Genes & Behaviour (Kandel, E.R. et al (Eds)). In Principles of Neural Science (Fifth Edition). McGraw-Hill.

Life of a pathogen: pump some iron!

Has somebody set up a miniature weightlifting gym for microbes? Not yet, but just like you and me, bacteria need iron to stay alive. However, unlike us they don’t get iron as a supplement in their cereal – they have to find it for themselves. In bacteria, iron is needed to make proteins involved in vital processes such as respiration and DNA synthesis. With the stakes so high they need specialised ways to get iron, and more often than not they have to scrounge it from us, their human host.

Iron-scavenging molecules (called siderophores) are one way that bacteria can get iron from a host. In the human body the levels of free iron are kept very low, so the siderophores have to be very good at finding iron and then hanging on to it (high affinity). Once they’ve done this they need to get back into the bacterial cell via special transporters in the cell membrane (see figure below).

So, send out some scavengers and get loads of iron? Not so simple! Firstly, the whole process takes a lot of energy for the cell. In E. coli it takes 4 different proteins just to make the siderophore, plus another 4 proteins and some ATP (the energy currency of the cell) to get it back in again. Secondly, too much iron is toxic to the cell, so it needs to make sure that it only goes to all this trouble when it really needs to – in other words, it needs some gene regulation.

This is where it gets clever. Inside the cell there’s a protein called Fur (ferric uptake regulator) that keeps an eye on how much iron is in the cell and turns the genes for iron scavenging on and off. When there’s lots of iron in the cell, the iron binds to Fur. This allows Fur to bind to the iron uptake genes and turn them off, so the cell doesn’t waste any resources or overload itself with iron (see figure below). When there’s not enough iron in the cell, there’s no iron spare to bind to Fur, so Fur can’t bind to the DNA. This means that the genes are active and the proteins for iron scavenging are made.
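For the programmatically minded, the on/off logic of this switch can be captured in a toy model. This is only an illustrative sketch of the behaviour described above – the function name and threshold are made up, and real Fur regulation is quantitative rather than a simple switch.

```python
def iron_scavenging_genes_active(intracellular_iron: float,
                                 fur_saturation_threshold: float = 1.0) -> bool:
    """Toy on/off model of Fur regulation (threshold and units are invented).

    Plenty of iron -> iron binds Fur -> iron-loaded Fur sits on the uptake
    genes and switches them OFF.
    Scarce iron    -> Fur stays empty -> it cannot bind the DNA, so the
    scavenging genes stay ON.
    """
    fur_is_iron_loaded = intracellular_iron >= fur_saturation_threshold
    return not fur_is_iron_loaded


print(iron_scavenging_genes_active(0.2))  # True: iron is scarce, make siderophores
print(iron_scavenging_genes_active(5.0))  # False: iron is plentiful, genes repressed
```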

That’s a pretty good system, but a lot of pathogenic bacteria take it a step further. When pathogens enter the body they need to spring into action to make virulence factors – the proteins and molecules that allow them to survive in the body and do all the nasty things that they do. It would be a massive waste of energy if they made these all the time, so they need to be able to activate them specifically when they enter a host. Bacteria don’t have eyes or GPS, so they have to sense the environment to work out where they are. Low iron levels are one signal that they are inside a host, so it makes sense to use an iron-sensing protein to regulate other virulence factor genes (figure 3). For example, E. coli uses the Fur regulator to regulate virulence factor genes for fimbriae (fibres which can latch onto human cells), haemolysin (a toxin that breaks open red blood cells) and Shiga-like toxin (a toxin that helps E. coli cells to get inside human cells).

So, in the arms race of human vs. pathogen, it seems that bacteria have found a few sneaky solutions this time. Not only have they got around the body’s iron restriction mechanisms, but they also use the low iron levels as a trigger for more deadly weapons.

Spooky goings on in Psychology!

Given that it is Halloween, it seems only right to discuss some recent psychology experiments relating to potentially paranormal phenomena!

Can ‘psychic’ abilities be demonstrated during controlled experiments?

Can ‘psychics’ sense information others can’t?

Today the Merseyside Sceptics Society published the results of a ‘Halloween psychic challenge’. They invited a number of the UK’s top psychics* to attempt to prove their abilities under controlled conditions, although only two psychics accepted the invitation (1, 2). In the test each psychic had to sit in the presence of 5 different female volunteers who were not known to them. These volunteers acted as ‘sitters’, and the psychics had to attempt to perform a ‘reading’ on them; in effect, to use their putative psychic powers to obtain information about the sitter’s life and personality. During the reading the psychic was separated from the sitter by a screen, such that the psychic could not actually see the sitter. The psychics were also not allowed to talk to the sitters. These conditions ensured that any information the psychics retrieved was not gathered through processes that could be explained using non-psychic means (e.g. cold reading or semantic inference). The psychics recorded their readings by writing them down.

A copy of the 5 readings made by each psychic (one for each sitter) was given to each sitter, and they were asked to rate how well each reading described them, and which reading provided the best description. If the psychic abilities were genuine, then each sitter should rate the reading that was made for them as the most accurate. Of the 10 readings (from the 2 psychics for each of the 5 sitters) only 1 was correctly selected by the sitter as being about them, no more than one would expect by chance. Moreover, the average ‘accuracy ratings’ provided by the sitters (for the readings that were actually about them) were low for both psychics (approximately 3.2 out of 10). What of the one reading that a sitter did identify as an accurate description (see 1 for a full transcript)? It is noticeable that in this reading the statements (some of which were not accurate) were either very general, or could be inferred from the knowledge that the sitter was a young adult female (e.g. ‘wants children’). The (correct) statement that most impressed the sitter (‘wants to go to South America’) was also pretty general, and is probably true of a decent proportion of young women. It can be safely concluded, therefore, that even this ‘accurate’ reading happened by chance.
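As a rough back-of-the-envelope check on that ‘no more than chance’ claim, assume each sitter simply has a 1-in-5 chance of picking the reading that was actually written for her. The sketch below follows only that assumption:

```python
from math import comb

n_selections = 10   # 5 sitters x 2 psychics, each picking a 'best' reading
p_hit = 1 / 5       # chance of a sitter picking the reading actually written for her

expected_hits = n_selections * p_hit              # correct picks expected by luck alone
p_at_least_one = 1 - (1 - p_hit) ** n_selections  # chance of one or more lucky hits
p_exactly_one = comb(n_selections, 1) * p_hit * (1 - p_hit) ** (n_selections - 1)

print(f"expected hits by chance: {expected_hits:.1f}")          # 2.0
print(f"P(at least one hit by chance): {p_at_least_one:.2f}")   # ~0.89
print(f"P(exactly one hit by chance): {p_exactly_one:.2f}")     # ~0.27
```

So a single correct identification out of ten is entirely unremarkable: pure guessing would be expected to produce about two.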

In terms of the experimental design, it is important to note that both psychics had, prior to the experiment, agreed to the methodology in the belief that they would be able to demonstrate their psychic powers under such conditions. Likewise, both psychics rated their confidence in the readings they gave during the experiment highly, suggesting that they didn’t think anything which occurred during the experiment had upset their psychic powers. The study could be criticised for its small sample size, although this is due to many psychics, including some of the better-known ones like Derek Acorah and Sally Morgan, apparently refusing to take part. It could therefore be argued that, despite the psychics involved in the study failing the test, other ‘better’ psychics might pass. However, such an argument remains merely speculative until such psychics agree to take part in controlled studies.

Although these negative results may not be surprising, I still think it would be of interest to perform the experiment a different way. The problem with relying on the sitters’ ratings is that they may reflect the sitters’ attitudes concerning psychic abilities (although all the sitters were apparently open to the idea of psychic powers being genuine). For example, even though the sitters were unaware of which reading was about them, they could theoretically have given a low rating to an accurate reading to ensure that no psychic abilities were demonstrated. A better methodology might be to get each sitter to provide a self-description, and then ask the psychic to choose the description that they think best fits their reading of the person. Such a test would also reduce the problems of interpreting the accuracy of the vague, general statements, such as ‘wants children’, that psychics are prone to give. Another interesting idea would be to get psychics, along with non-psychics and self-confessed cold readers, to perform both a blind sitting (e.g. using a method similar to that described above) and a sitting where the reader can see, and perhaps talk to, the sitter. This could provide evidence as to whether claimed psychic abilities are really just a manifestation (even if unintentional) of cold reading. If this were the case, one would expect no difference in performance between the three groups in the blind test, but both the cold readers and the psychics to perform better in the non-blind test (with no difference between psychics and cold readers in that condition).

Can we see into the future?

The second set of experiments that I wish to discuss is potentially more exciting, because there is at least a hint of positive results. Instead of testing the telepathy that psychics claim to possess (i.e. the ability to transfer information without the use of known senses), these studies investigated the phenomenon of ‘retroactive influence’ in a random sample of participants. Retroactive influence is the phenomenon of current performance being influenced by future events. In effect, it suggests that people can (at least unconsciously) see into the future!

In a series of 9 well-controlled experiments, the psychologist Daryl Bem produced results that appear to show that participants’ responses in a variety of tasks were influenced by events that occurred after those responses had been made (3). What is most impressive about these results is that Bem used a succession of different paradigms to produce the same effect, ensuring that the effect was not just due to an artifact of one particular experimental design. In brief, this is what his results appear to demonstrate:

  1. Precognitive detection: Participants had to select one of two positions in which they thought an emotive picture would appear on a computer screen. However, the computer randomly decided where to place the picture only after the participant had made their selection. Nevertheless, participants' performance suggested that they were able to predict the upcoming position of the picture at above-chance levels.
  2. Retroactive priming: In priming, the appearance of one stimulus (the 'prime') just before a second stimulus that the participant has to perform a task on can either improve or worsen reaction time on that task, depending on whether the prime is congruent or incongruent with the second, 'task' stimulus. For example, the appearance of a positive word prior to a negative image will slow reaction time on a valence classification task for the image (i.e. deciding whether the image is positive or negative), because the valence of the word is incongruent with the valence of the image. Bem's results suggest that this reaction-time effect also occurs when the prime is presented after both the image and the participant's response to it.
  3. Retroactive habituation: People tend to habituate to images; for example, an aversive image that has been seen before is rated as less aversive than one that has not been seen before. Bem demonstrated that this habituation can occur even when the repeated presentation happens after the rating of the image has been made (i.e. given the choice between two images, participants will select as less aversive the image that the computer will later present to them several times).
  4. Retroactive facilitation of recall: When participants had to recall a list of words, they were better at recalling the items that they were later required to perform a separate task on, even though at the time of recall they were unaware of which items on the list they would be re-exposed to.

It is important to note that in all these experiments the selection (by the computer) of which items would appear after the initial task was performed independently of the participant's responses, so the results could not be due to the computer somehow using the participant's responses to decide which stimuli to present.
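As a purely illustrative sketch (not Bem's actual software), the logic of that independence can be expressed in a few lines of Python: the participant's 'guess' is recorded first, and only then does the program randomly choose the target, without ever looking at the guess. Under these conditions the hit rate can only hover around 50%, so any reliable deviation above that in the real experiments is exactly what needs explaining.

```python
import random

def run_precognition_trial():
    """One trial of a (much simplified) precognitive-detection task.
    The participant guesses left/right FIRST; only afterwards does the
    computer randomly pick where the picture 'appears', independently
    of that guess, so under the null hypothesis hits occur 50% of the time."""
    guess = random.choice(["left", "right"])   # stand-in for the participant's response
    target = random.choice(["left", "right"])  # chosen after, and independent of, the guess
    return guess == target

if __name__ == "__main__":
    n_trials = 100_000
    hits = sum(run_precognition_trial() for _ in range(n_trials))
    print(f"Hit rate with no precognition: {hits / n_trials:.3f}  (expected ~0.50)")
```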

These findings caused much controversy and discussion within the psychological research community. Recently, three independent attempts to replicate the 'retroactive facilitation of recall' effect have failed, producing null results despite using almost exactly the same method as Bem's original study, and identical software (4). These failures of replication have highlighted problems in psychological research around the concepts of replication and the 'file-drawer problem' (5). There isn't space to do justice to these issues here; suffice to say that the jury is still out on Bem's findings, at least partly because we can't be sure whether other failed attempts to produce these effects remain unpublished, which would make Bem's positive results appear more impressive than they actually are. Another potential problem that is yet to be fully addressed is the issue of experimenter bias. Again, this is a complex issue, and it appears to be a particular problem in research into paranormal phenomena, because positive results consistently tend to come from researchers who believe in such phenomena, while negative results consistently come from sceptical researchers (see 6 for a discussion).

Retroactive facilitation of recall is currently the only one of Bem's effects that others have attempted to replicate in an open manner (i.e. by registering the attempt with an independent body before data collection, and by publishing the results afterwards). Until more replication is attempted, the question of whether we can unconsciously see into the future must be considered open to debate. Hopefully these topics will be subject to much research in the future, allowing us to find out whether these effects are real or just the consequence of some other factor.

It is worth mentioning at this point another paradigm that sometimes produces positive results regarding paranormal abilities. In Ganzfeld experiments (where participants' auditory and visual systems are flooded with white noise and uniform light respectively) there is some evidence that those experiencing such stimulation are able to 'receive' information from someone sitting in a separate room (see 7 for a review). This appears, therefore, to be a potential demonstration of telepathy, although the effect is open to the same issues of replication and experimenter bias that surround Bem's findings. Even ignoring these uncertainties, it should be noted that in these Ganzfeld experiments, and in Bem's study, the sizes of the effects are very modest. For example, in Bem's precognitive detection paradigm participants' overall performance was at 53%, compared to a chance level of 50%, while in the Ganzfeld experiments performance (choosing which one of four stimuli was being 'transmitted') is at around 32%, against a chance level of 25%. While these differences are found to be statistically significant (in some studies) because of the large number of participants or trials used, they don't exactly represent impressive performance! Therefore even if such paranormal phenomena were eventually proven to be genuine, this wouldn't mean that the sort of mind-reading abilities claimed by psychics are actually possible!
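To give a feel for how modest these effect sizes are, here is a rough Python calculation using the normal approximation to the binomial distribution, with trial counts chosen purely for illustration rather than taken from the actual studies. It shows roughly how many trials are needed before hit rates of 53% vs 50%, or 32% vs 25%, become statistically distinguishable from chance.

```python
from math import erf, sqrt

def one_sided_p(rate, chance, trials):
    """Approximate one-sided p-value for an observed hit rate against a chance
    level, using the normal approximation to the binomial distribution."""
    z = (rate - chance) / sqrt(chance * (1 - chance) / trials)
    return 0.5 * (1 - erf(z / sqrt(2)))

if __name__ == "__main__":
    # Hit rates quoted in the text: ~53% vs 50% chance (Bem's precognitive
    # detection) and ~32% vs 25% chance (Ganzfeld). Trial counts below are
    # illustrative only, not the numbers used in the actual studies.
    for label, rate, chance in [("53% vs 50%", 0.53, 0.50),
                                ("32% vs 25%", 0.32, 0.25)]:
        for trials in (100, 500, 1000, 5000):
            p = one_sided_p(rate, chance, trials)
            print(f"{label}: {trials:5d} trials -> one-sided p = {p:.4f}")
```

With only 100 trials neither hit rate reaches conventional significance; it takes several hundred trials (for the Ganzfeld figure) or around a thousand (for Bem's figure) before the p-values drop below 0.05, which is exactly why such small effects can be 'statistically significant' in large studies without being at all impressive in practical terms.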


*note that in this article the term ‘psychics’ is used merely as a label to define people who claim to have psychic powers, its use does not represent acceptance that such powers actually exist.

References
1) http://www.guardian.co.uk/science/2012/oct/31/halloween-challenge-psychics-scientific-trial
2) http://www.merseysideskeptics.org.uk/
3) Bem, D. J. (2011). Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. Journal of Personality and Social Psychology, 100(3), 407-425. Link
4) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the Future: Three Unsuccessful Attempts to Replicate Bem’s ‘Retroactive Facilitation of Recall’ Effect. Plos One, 7(3). Link
5) Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Replication, replication, replication. Psychologist, 25(5), 346-348. Link
6) Schlitz, M., Wiseman, R., Watt, C., & Radin, D. (2006). Of two minds: Skeptic-proponent collaboration within parapsychology. British Journal of Psychology, 97, 313-322. Link
7) Wackermann, J., Putz, P., & Allefeld, C. (2008). Ganzfeld-induced hallucinatory experience, its phenomenology and cerebral electrophysiology. Cortex, 44, 1364-1378. Link

Image from ‘Seance on a wet afternoon’ (1964) Dir: Bryan Forbes, Distribution: Rank Organisation, Studio: Allied Film Makers.