Some sense in sensory deprivation

How would you cope if you couldn’t hear, see or feel anything? How do sensory systems react when they have no information to process? Such questions may seem rather bizarre, but they are in fact the topic of sensory deprivation research. Sensory deprivation involves systematically preventing information from reaching one or more sensory modalities. As a research methodology it has a long and chequered history.

Some early sensory deprivation studies were funded by the CIA with the purpose of identifying effective methods of interrogation (1, 2). Such experiments involved attempts to completely remove all sensory information from an individual. Participants were left for long periods in secluded, sound-proofed rooms, wearing cardboard sleeves to reduce tactile stimulation and goggles to reduce visual stimulation. For the unfortunate participants, the consequences of exposure to these conditions were often hallucinations, anxiety and mental deterioration. The extent of the volunteers’ suffering was such that they were (eventually) paid compensation in recognition of their maltreatment. Indeed, the controversy surrounding these experiments led to the introduction of stricter ethical guidelines to prevent unwitting participants in behavioural research from being exposed to potential harm or distress (1).

Although these early, total sensory deprivation experiments were clearly unacceptable, more subtle forms of sensory deprivation are still used in human research. Sensory deprivation can, if implemented correctly, reveal important information about the functioning of the nervous system without causing any harm to those participating in the research. One example of the positive use of sensory deprivation concerns its role in improving our understanding of tinnitus. Back in the 1950s it was discovered that people with normal hearing often began to experience hallucinatory sounds similar to tinnitus if they were left in a silent, sound-proofed room (3). This finding led to the idea that hearing loss may contribute to the development of tinnitus. However, most tinnitus sufferers do not exhibit the sort of total hearing loss that is mimicked by a silent, sound-proofed room. Indeed some tinnitus sufferers demonstrate very good hearing! It appears therefore that processes other than hearing loss must be involved in the development of tinnitus.

Modern theories of tinnitus point the finger at the process of homeostatic plasticity: the mechanism by which the equilibrium of neurological systems is maintained through adjustments made to physiological processes (4). In the same way that a thermostat alters the activity of a heating system to maintain a certain temperature, the brain is thought to modify the degree of spontaneous firing within neuronal populations in order to maintain a consistent level of activity. In response to damage to the auditory nerve, homeostatic plasticity may cause the brain to apply a ‘gain’ to ongoing spontaneous activity within the auditory system. While this gain may reduce, or even remove, the impact that nerve damage has on hearing ability, it may also induce tinnitus by elevating baseline neural activity to a level similar to that evoked by genuine sounds (4).
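
To make the mechanism concrete, here is a hedged numerical sketch of that gain adjustment. All firing rates are invented for illustration; this is not the computational model from reference (4), just the arithmetic of the idea.

```python
# A minimal sketch of the homeostatic 'gain' idea, with invented firing rates.

TARGET_MEAN = 10.0    # mean firing rate (Hz) the system tries to maintain
SPONTANEOUS = 2.0     # baseline spontaneous firing (Hz)
SOUND_DRIVEN = 8.0    # additional firing evoked by real sounds (Hz)

def mean_activity(sound_input, gain):
    """Mean firing rate: gain applied to spontaneous plus sound-driven input."""
    return gain * (SPONTANEOUS + sound_input)

# Healthy system: a gain of 1.0 already meets the target.
assert mean_activity(SOUND_DRIVEN, gain=1.0) == TARGET_MEAN

# Nerve damage halves the sound-driven input, so homeostatic plasticity
# raises the gain until the target mean activity is restored...
damaged_input = SOUND_DRIVEN * 0.5
gain = TARGET_MEAN / (SPONTANEOUS + damaged_input)   # ~1.67

# ...but the same gain also amplifies spontaneous activity, pushing it
# towards levels that previously signalled a genuine quiet sound: tinnitus.
print(f"Spontaneous activity: {SPONTANEOUS:.1f} Hz -> {gain * SPONTANEOUS:.1f} Hz")
```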

Scientists funded by the British Tinnitus Association recently used a more refined sensory deprivation methodology to test whether homeostatic plasticity may contribute to tinnitus (5). Eighteen participants were asked to wear an earplug in one ear for a week. The earplug was specially designed to mimic the sort of high-frequency hearing loss that commonly occurs due to old age or noise damage. Altering the input in just one ear not only minimised the inconvenience for participants, it also allowed the effects of the auditory deprivation to be ethically tested over a far longer period than would otherwise be possible with more substantial deprivation. The methodology therefore provided a far more naturalistic assessment of the effects of hearing loss on the auditory system.

Fourteen of the 18 participants reported experiencing hallucinatory sounds during the week, with most of the sounds taking a form similar to those experienced in tinnitus. This confirmed that real-life forms of hearing loss are capable of inducing tinnitus-like symptoms. Crucially, the pitch of the hallucinated sounds matched the frequency spectrum of the deprivation induced by the earplugs: selective loss of hearing for high frequencies produced mainly high-frequency hallucinatory sounds. As the type of hearing loss induced in this study should only provoke homeostatic changes in the neuronal populations that process high-frequency sounds, this finding supports the idea that homeostatic plasticity contributes to the development of tinnitus.

Despite its somewhat inglorious history, sensory deprivation remains an extremely important methodological tool. By removing the influence of external stimuli, sensory deprivation provides a clearer view of the workings of internally-driven neurological processes such as homeostatic plasticity. As neurological disorders are often characterised by dysfunctions in these internal processes, sensory deprivation studies can provide invaluable insight into the causes of such disorders.

More information about research into the causes of tinnitus is available at http://www.tinnitus.org.uk/the-search-for-a-cure-1

References

(1) McCoy, A.W. (2007). Science in Dachau’s shadow: Hebb, Beecher, and the development of CIA psychological torture and modern medical ethics. Journal of the History of the Behavioral Sciences, 43(4), 401–417. <Link>

(2) Klein, N. (2007). The Shock Doctrine: The Rise of Disaster Capitalism (1st ed.). New York: Metropolitan Books/Henry Holt. <Link>

(3) Heller, M.F. & Bergman, M. (1953). Tinnitus aurium in normally hearing persons. The Annals of Otology, Rhinology and Laryngology, 62(1), 73–83. <Link>

(4) Schaette, R. & Kempter, R. (2006). Development of tinnitus-related neuronal hyperactivity through homeostatic plasticity after hearing loss: a computational model. European Journal of Neuroscience, 23, 3124–3138. <Link>

(5) Schaette, R., Turtle, C. & Munro, K.J. (2012). Reversible induction of phantom auditory sensations through simulated unilateral hearing loss. PLoS ONE, 7(6), e35238. doi:10.1371/journal.pone.0035238 <Link>

Want to lie convincingly? Get practicing!

Lying, the deliberate attempt to mislead someone, is a process that we all engage in at some time or another. Indeed, research has found that the average person lies at least once a day, suggesting that lying is a standard part of social interaction (1). Despite its common occurrence, lying is not an automatic process. Instead it represents an advanced cognitive function: a skill that requires more basic cognitive abilities to be present before it can emerge. To lie, an individual first needs to be able to appreciate the benefits of lying (e.g. a desire to increase social status) so that they have the motivation to behave deceitfully. Successful lying also requires ‘theory of mind’, the ability to understand what another person knows. This is necessary so that the would-be liar can spot firstly the opportunity to lie, and secondly what sort of deception might be required to produce a successful lie. Finally, lying requires the ability to generate a plausible and coherent, but nonetheless fabricated, description of an event. Given these prerequisites it is unlikely that we are ‘born liars’. Instead the ability to lie is believed to develop sometime between the ages of 2 and 4 (2). The fact that the ability to lie develops over time suggests that our performance of the ‘skill’ of lying should be sensitive to practice. Do people who lie more often become better at it?

Lying is tiring!
Lying is considered more cognitively demanding than telling the truth due to the extra cognitive functions that need to be utilised to produce a lie. The idea that lying is cognitively demanding is supported both by behavioural data showing that deliberately producing a misleading response takes longer, and is more prone to error, than producing a truthful response (3), and by neurological data showing that lying requires additional activity in the prefrontal areas of the brain when compared to truth telling (4). These observable differences between truth telling and lying allow a measure of ‘lying success’ to be created. For example, a successful, or skilled, liar should be able to perform lies more quickly and accurately than a less successful liar, perhaps to the extent that there is no noticeable difference in performance between truth telling and lying in such individuals. Likewise, if the ability to lie is affected by practice, then practice should make lies appear more like the truth in terms of behavioural performance, as illustrated in the sketch below.
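
As an illustration, this sketch computes such a ‘lie effect’ from invented reaction-time and error-rate data; none of these numbers come from the studies cited.

```python
from statistics import mean

# Hypothetical illustration of the behavioural 'lie effect': the gap in
# reaction time and error rate between lie trials and truth trials.
# A skilled (or well-practised) liar should show a smaller gap.

rt_truth = [612, 598, 640, 605, 587]   # ms, truthful responses (invented)
rt_lie = [701, 688, 725, 690, 712]     # ms, deceptive responses (invented)
err_truth, err_lie = 0.04, 0.11        # error proportions per condition

lie_effect_rt = mean(rt_lie) - mean(rt_truth)
lie_effect_err = err_lie - err_truth

print(f"Lie effect: +{lie_effect_rt:.0f} ms, +{lie_effect_err:.2f} error rate")
```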

Practice makes perfect (but is this a lie)?
Despite the intuitive appeal of the idea that lying becomes easier with practice, much past research has failed to find an effect of practice on lying, whether measuring behavioural (3) or neuroimaging (5) markers of deception. Such results have led to the conclusion that lying may always be significantly more effortful than truth telling, no matter how practiced an individual is at deception.

A recent study (6) has re-examined this issue using a version of the ‘Sheffield Lie Test’, in which participants are presented with a list of questions that require a yes/no response (e.g. ‘Did you buy chocolate today?’). The experiment involved three main phases. In the first, baseline phase, participants were required to respond truthfully to half the questions and to lie in response to the other half. In the middle, training phase, the questions were split into two sets. For a control set the proportion that required a truthful response remained at 50% for all participants. For an experimental set the proportion that required a truthful response varied between participants: they had to lie in response to 25%, 50% or 75% of these questions, thus giving them differing levels of ‘practice’ at lying. The final, test phase was a repeat of the baseline phase. This design allowed two research questions to be assessed. Firstly, the researchers could identify whether practice at lying reduced the ‘lie effect’ on reaction time and error rate (i.e. the increased reaction time and error rate that occurs when a participant is required to lie, compared to when they are required to tell the truth). Secondly, they could identify whether any reduction in the lie effect applied just to the questions on which the groups had experienced differing practice levels, or whether it also generalised to the questions where all groups had the same level of practice.
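
To make the training-phase manipulation concrete, here is a hypothetical sketch of how such trial lists might be generated; the item labels, list sizes and random seed are all invented.

```python
import random

# Sketch of the training phase: control items keep a 50% lie rate for
# everyone, while the lie rate on experimental items varies by group.

def make_trials(items, lie_proportion, rng):
    """Tag each item with the response type it requires."""
    n_lies = round(len(items) * lie_proportion)
    labels = ['lie'] * n_lies + ['truth'] * (len(items) - n_lies)
    rng.shuffle(labels)
    return list(zip(items, labels))

rng = random.Random(42)
control = [f"control_{i}" for i in range(20)]
experimental = [f"experimental_{i}" for i in range(20)]

for group_lie_rate in (0.25, 0.50, 0.75):
    trials = (make_trials(control, 0.50, rng)
              + make_trials(experimental, group_lie_rate, rng))
    rng.shuffle(trials)
    # Each group now practices lying at a different rate on the experimental
    # set, but at an identical rate on the control set.
    n_lies = sum(1 for _, label in trials if label == 'lie')
    print(f"Group {group_lie_rate:.0%}: {n_lies} lie trials of {len(trials)}")
```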

The results revealed that practice did improve the ability to lie during the period when the training was actually taking place, and that this improvement applied to both the control questions and the experimental questions: the participants who had to lie more demonstrated reduced error rates and reaction times compared to those who had to lie less during the training phase. However, in the test phase this improvement was only maintained for the set of questions on which the frequency of lying had been manipulated. The group who had practiced lying on 75% of the experimental questions were no faster or more accurate at lying on the control questions than the group who had to lie in response to just 25% of the experimental questions. These results suggest that practice can make you better at lying, but that the improvement is only sustained over time for the specific lies that you have rehearsed.

Some lies may be better than others!
One important criticism of most studies on the effect of practice on lying is that they tend to use questions or tasks that require binary responses (i.e. yes/no questions). In real life, however, lying often involves the concoction of complex false narratives, a form of lying that is likely to be far more cognitively demanding than just saying ‘No’ in response to a question whose answer is ‘Yes’. Likewise, the lies tested in laboratory studies tend to be rehearsed, or at least prepared, lies. In contrast, many real-life lies are concocted at short notice, with the deceptive narrative being constructed in ‘real time’, while the person is in the process of lying. It is likely that the effect of training, and how that training generalises to other lies, will be different for these more advanced forms of lying than it is for the simpler types of lies that tend to be tested under laboratory conditions. Given this, if a psychologist tells you that we know for certain how practice impacts on the ability to deceive, you can be sure that they are lying!

________________________________________________________________________________________________________

References

(1) DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M. & Epstein, J.A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995. http://smg.media.mit.edu/library/DePauloEtAl.LyingEverydayLife.pdf
(2) Ahern, E.C., Lyon, T.D. & Quas, J.A. (2011). Young children’s emerging ability to make false statements. Developmental Psychology, 47(1), 61–66. http://www.ncbi.nlm.nih.gov/pubmed/21244149
(3) Vendemia, J.M.C., Buzan, R.F. & Green, E.P. (2005). Practice effects, workload and reaction time in deception. American Journal of Psychology, 5, 413–429. http://www.jstor.org/discover/10.2307/30039073?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21101917386241
(4) Spence, S.A. (2008). Playing Devil’s advocate: The case against fMRI lie detection. Legal and Criminological Psychology, 13, 11–25. http://psychsource.bps.org.uk/details/journalArticle/3154771/Playing-Devils-advocate-The-case-against-fMRI-lie-detection.html
(5) Johnson, R., Barnhardt, J. & Zhu, J. (2005). Differential effects of practice on the executive processes used for truthful and deceptive responses: an event-related brain potential study. Brain Research: Cognitive Brain Research, 24, 386–404. http://www.ncbi.nlm.nih.gov/pubmed/16099352
(6) Van Bockstaele, B., Verschuere, B., Moens, T., Suchotzki, K., Debey, E. & Spruyt, A. (2012). Learning to lie: effects of practice on the cognitive cost of lying. Frontiers in Psychology, 3, 1–8. http://www.ncbi.nlm.nih.gov/pubmed/23226137

Spooky goings-on in psychology!

Given that it is Halloween, it seems only right to discuss some recent psychology experiments relating to potential paranormal phenomena!

Can ‘psychic’ abilities be demonstrated during controlled experiments?

Can ‘psychics’ sense information others can’t?

Today the Merseyside Skeptics Society published the results of a ‘Halloween psychic challenge’. They invited a number of the UK’s top psychics* to attempt to prove their abilities under controlled conditions, although only two psychics accepted the invitation (1, 2). In the test each psychic had to sit in the presence of five different female volunteers who were not known to them. These volunteers acted as ‘sitters’, and the psychics had to attempt to perform a ‘reading’ on each of them; in effect, to use their putative psychic powers to obtain information about the sitter’s life and personality. During the reading the psychic was separated from the sitter by a screen, such that the psychic could not actually see the sitter, and the psychics were not allowed to talk to the sitters. These conditions ensured that any information the psychics retrieved could not have been gathered through processes explainable by non-psychic means (e.g. cold reading or semantic inference). The psychics recorded their readings by writing them down.

A copy of the five readings made by each psychic (one for each sitter) was given to each sitter, who was asked to rate how well each reading described them, and which reading provided the best description. If the psychic abilities were genuine, then each sitter should rate the reading that was made for them as the most accurate. Of the 10 readings (from the two psychics for each of the five sitters) only one was correctly selected by the sitter as being about them; since each sitter had a one-in-five chance of picking their own reading, chance alone would be expected to produce two correct selections. Moreover, the average ‘accuracy ratings’ provided by the sitters (for the readings that were actually about them) were low for both psychics (approximately 3.2 out of 10). What of the one reading that a sitter did identify as an accurate description (see 1 for a full transcript)? It is noticeable that the statements in this reading (some of which were not accurate) were either very general, or could be inferred from the knowledge that the sitter was a young adult female (e.g. ‘wants children’). The (correct) statement that most impressed the sitter (‘wants to go to South America’) was also pretty general, and is probably true of a decent proportion of young women. It seems safe to conclude that even this ‘accurate’ reading was a chance hit.
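
For the sceptically minded, the chance baseline is easy to check; the trial counts below come from the study, everything else is elementary probability.

```python
# Each of the 5 sitters judged readings from 2 psychics, giving 10
# selections, each with a 1-in-5 chance of being correct by luck alone.
n, p = 10, 0.2

expected = n * p                     # correct picks expected by chance
p_at_least_one = 1 - (1 - p) ** n    # chance of one or more lucky hits

print(f"Expected correct picks by chance: {expected:.1f}")         # 2.0
print(f"P(at least one correct by chance): {p_at_least_one:.2f}")  # ~0.89
```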

In terms of the experimental design, it is important to note that both psychics had, prior to the experiment, agreed to the methodology in the belief that they would be able to demonstrate their psychic powers under such conditions. Likewise, both psychics rated their confidence in the readings they gave during the experiment highly, suggesting that they didn’t think anything which occurred during the experiment had upset their psychic powers. The study could be criticised for its small sample size, although this was due to many psychics, including some of the better-known ones like Derek Acorah and Sally Morgan, apparently refusing to take part. It could therefore be argued that although the psychics involved in the study failed the test, other ‘better’ psychics might pass. However, such an argument remains merely speculative until such psychics agree to take part in controlled studies.

Although these negative results may not be surprising, I still think it might be of interest to perform the experiment a different way. The problem with relying on the sitters’ ratings is that they may reflect the sitters’ attitudes concerning psychic abilities (although all the sitters were apparently open to the idea of psychic powers being genuine). For example, even though the sitters were unaware of which reading was about them, they could theoretically have given a low rating to an accurate reading to ensure that no psychic abilities were demonstrated. A better methodology might be to get each sitter to provide a self-description, and then ask the psychic to choose the description that they think best fits their reading of the person. Such a test would also reduce the problems of interpreting the accuracy of the vague, general statements (such as ‘wants children’) that psychics are prone to give. Another interesting idea would be to get psychics, along with non-psychics and self-confessed cold readers, to perform both a blind sitting (e.g. using a method similar to that described above) and a sitting where they can see, and perhaps talk to, the sitter. This could provide evidence as to whether claimed psychic abilities are really just a manifestation (even an unintentional one) of cold reading. If this were the case, one would expect no difference in performance between the three groups in the blind test, but both the cold readers and the psychics to perform better in the non-blind test (with no difference between psychics and cold readers in that condition).

Can we see into the future?

The second set of experiments I wish to discuss is potentially more exciting because there is at least a hint of positive results. Instead of testing the telepathy that psychics claim to possess (i.e. the ability to transfer information without the use of known senses), these studies investigated the phenomenon of ‘retroactive influence’ in a random sample of participants. Retroactive influence is the phenomenon of current performance being influenced by future events. In effect, it suggests that people can (at least unconsciously) see into the future!

In a series of nine well-controlled experiments the psychologist Daryl Bem produced results that appear to show that participants’ responses in a variety of tasks were influenced by events that occurred after those responses had been made (3). What is most impressive about these results is that Bem used a succession of different paradigms to produce the same effect, ensuring that the effect was not just due to an artifact in one particular experimental design. In brief, this is what his results appear to demonstrate:

  1. Precognitive detection: Participants had to select one of two positions in which they thought an emotive picture would appear on a computer screen. However, the computer randomly decided where to place the picture only after the participant had made their selection. Nevertheless, participants’ performance suggested that they were able to predict the upcoming positions of the pictures at above-chance levels.
  2. Retroactive priming: In priming, the appearance of one stimulus (the ‘prime’) just before a second stimulus that the participant has to perform a task on can either improve or worsen reaction time on that task, depending on whether the prime is congruent or incongruent with the second, ‘task’ stimulus. For example, the appearance of a positive word prior to a negative image will slow reaction time on a valence classification task for the image (i.e. deciding whether the image is positive or negative) because the valence of the word is incongruent with the valence of the image. Bem’s results suggest that this reaction time effect also occurs when the prime is presented after both the image and the participant’s response to it.
  3. Retroactive habituation: People tend to habituate to an image; for example, an aversive image that has been seen before is rated as less aversive than one that has not. Bem demonstrated that this habituation can occur even when the repeated presentation happens after the rating of the image has been made (i.e. given the choice between two images, participants will select as less aversive the image that the computer will later present to them several times).
  4. Retroactive facilitation of recall: When participants had to recall a list of words, they were shown to be better at recalling items that they were later required to perform a separate task on, even though they were unaware of which items on the list they would be re-exposed to.

It is important to note that in all these experiments the selection (by computer) of which items would appear after the initial task was performed independently of the participant’s responses, so the results could not be due to the computer somehow using those responses to decide which stimuli to present.

These findings caused much controversy and discussion within the psychological research community. Recently, three independent attempts to replicate the ‘retroactive facilitation of recall’ effect have failed, producing null results despite using almost exactly the same method as Bem’s original study, and identical software (4). These failures of replication have highlighted problems in psychological research around the concepts of replication and the ‘file-drawer problem’ (5). There isn’t space to do justice to these issues here; suffice it to say that the jury is still out on Bem’s findings, at least partly because we can’t be sure whether other failed attempts to produce these effects remain unpublished, thus making Bem’s positive results appear more impressive than they might actually be. Another potential problem that is yet to be fully addressed is the issue of experimenter bias. Again this is a complex issue, and it appears to be a particular problem in research into paranormal phenomena, because positive results consistently tend to come from researchers who believe in said phenomena, while negative results consistently come from sceptical researchers (see 6 for a discussion).

Retroactive facilitation of recall is currently the only one of Bem’s effects that others have attempted to replicate in an open manner (i.e. by registering the attempt with an independent body before data collection, and by publishing the results afterwards). Until more replication is attempted, the question of whether we can unconsciously see into the future must be considered open to debate. Hopefully these topics will be the subject of much research in the future, allowing us to find out whether these effects are real or just the consequence of some other factor. It is worth mentioning at this point another paradigm that sometimes produces positive results regarding paranormal abilities. In ganzfeld experiments (where participants’ auditory and visual systems are flooded with white noise and uniform light respectively), there is some evidence that those experiencing such stimulation are able to ‘receive’ information from someone sitting in a separate room (see 7 for a review). This appears to be a potential demonstration of telepathy, although the effect is open to the same issues of replication and experimenter bias that surround Bem’s findings. Even ignoring these uncertainties, it should be noted that in the ganzfeld experiments, and in Bem’s study, the effect sizes are very modest. For example, in Bem’s precognitive detection paradigm participants’ overall performance was 53% against a chance level of 50%, while in the ganzfeld experiments performance (choosing which one of four stimuli was being ‘transmitted’) is around 32% against a chance level of 25%. While these differences are statistically significant (in some studies) because of the large number of participants or trials used, they don’t exactly represent impressive performance! Therefore, even if such paranormal phenomena were eventually proven genuine, this wouldn’t mean that the sort of mind-reading abilities claimed by psychics are actually possible!
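
To see how such a modest hit rate can nonetheless reach statistical significance, here is a hedged sketch; only the 53% and 50% figures come from the text above, and the trial counts are invented.

```python
from scipy.stats import binomtest

# With a fixed 53% hit rate against a 50% chance level, statistical
# significance is driven almost entirely by the number of trials.
for n_trials in (100, 1000, 10000):
    hits = round(0.53 * n_trials)
    result = binomtest(hits, n_trials, p=0.5, alternative='greater')
    print(f"{n_trials:>6} trials: p = {result.pvalue:.4g}")

# ~100 trials: p ~ 0.31 (nowhere near significant); ~10,000 trials:
# p < 0.0001 -- the same 3-point edge over chance becomes 'highly
# significant' purely through sample size.
```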

*Note that in this article the term ‘psychics’ is used merely as a label for people who claim to have psychic powers; its use does not represent acceptance that such powers actually exist.

References
(1) http://www.guardian.co.uk/science/2012/oct/31/halloween-challenge-psychics-scientific-trial
(2) http://www.merseysideskeptics.org.uk/
(3) Bem, D.J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. Link
(4) Ritchie, S.J., Wiseman, R. & French, C.C. (2012). Failing the future: Three unsuccessful attempts to replicate Bem’s ‘retroactive facilitation of recall’ effect. PLoS ONE, 7(3). Link
(5) Ritchie, S.J., Wiseman, R. & French, C.C. (2012). Replication, replication, replication. The Psychologist, 25(5), 346–348. Link
(6) Schlitz, M., Wiseman, R., Watt, C. & Radin, D. (2006). Of two minds: Skeptic-proponent collaboration within parapsychology. British Journal of Psychology, 97, 313–322. Link
(7) Wackermann, J., Putz, P. & Allefeld, C. (2008). Ganzfeld-induced hallucinatory experience, its phenomenology and cerebral electrophysiology. Cortex, 44, 1364–1378. Link

Image from ‘Seance on a wet afternoon’ (1964) Dir: Bryan Forbes, Distribution: Rank Organisation, Studio: Allied Film Makers.

The dangers of self-report

A common methodology in behavioural science is to use self-report questionnaires to gather data. Data from these questionnaires can be used to identify relationships between scores on the variable(s) that the questionnaire is assumed to measure and either performance on behavioural tasks, physiological measures taken during an experiment, or even scores obtained from other questionnaires (some studies just report the correlations between batches of self-report measures!). Self-report measures are popular for a number of reasons. Firstly, they represent a ‘cheap’ way (in terms of both time and cost) of obtaining data. Secondly, they can easily be administered to large samples, especially with the advent of online questionnaire distribution sites such as Survey Monkey. Finally, they can be used to measure constructs that would be difficult to capture with behavioural or physiological measures (for example, facets of personality such as introversion). The issue of self-report methodology is important because studies that use this method are regularly reported in the media (see http://www.bbc.co.uk/news/health-17209448 for a recent example) and therefore have a significant impact on how the general public perceives scientific research. I therefore think it is important to discuss the potential problems with self-report measures.

Most (but certainly not all) questionnaires used in behavioural research undergo testing for reliability, to check that they produce consistent results when applied to the same population over time. More importantly, they are normally also tested for validity, to check that the questionnaire measures what it claims to measure. Such tests follow the logic that the questionnaire should discriminate between participants in a similar way to relevant non-self-report measures. For example, scores on a questionnaire measuring depression should be able to discriminate between depressed patients and controls, while scores on a questionnaire measuring diet should predict the body fat percentage of respondents with reasonable accuracy. While such tests can increase confidence that a questionnaire measures what it claims to measure, they are not foolproof. For example, just because a depression questionnaire can discriminate between patients and controls does not mean that it measures depression well, as the two groups will likely differ in several ways. Likewise, a questionnaire that distinguishes between patients and controls may not be able to identify the (presumably) more subtle differences between depressed and non-depressed healthy individuals, or the range of depressive tendencies within the healthy population. In fact there are a large number of reasons why a questionnaire may not be entirely valid, including the following:

Honesty/Image management – Researchers who use self-report questionnaires are relying on the honesty of their participants. The degree to which this is a problem will undoubtedly vary with the topic of the questionnaire: for example, participants are less likely to be honest about measures relating to sexual behaviour or drug use than about caffeine consumption, although it is unwise to assume, even when measuring something relatively benign, that participants will always be truthful. Worse, the extent to which participants want to manage how they appear will no doubt vary with personality, which means that the level of dishonesty may vary significantly between the different groups that a study is trying to compare.

Introspective ability – Even if a participant is trying to be honest, they may lack the introspective ability to provide an accurate response to a question. We are probably all aware of people who appear to view themselves in a completely different light to how others see them, and all of us are, to some extent, unable to assess ourselves completely accurately. Therefore any self-report information we provide may be incorrect despite our best efforts to be honest and accurate.

Understanding – Participants may also vary in their understanding or interpretation of particular questions. This is less of a problem with questionnaires measuring concrete things like alcohol consumption, but it is a very big problem when measuring more abstract concepts such as personality. From personal experience, I have participated in an experiment where I was asked at regular intervals to report how ‘dominant’ I felt. As I can honestly say I don’t monitor my feelings of ‘dominance’ and how they change over time, I know that my responses to the question were pretty random. Even if I could conjure an understanding of what the question was getting at, it would be impossible to ensure that everyone who completed the questionnaire interpreted it in the same way that I did.

Rating scales – Many questionnaires use rating scales to allow respondents to provide more nuanced responses than just yes/no. While yes/no questions often appear restrictive, rating scales bring their own problems. People interpret and use scales differently: what I might rate as ‘8’ on a 10-point scale, someone with the same opinion might only rate as a ‘6’, because they interpret the meanings of the scale points differently. Indeed, research suggests that people have characteristic ways of filling out rating scales (1): some are ‘extreme responders’ who favour the edges of the scale, whereas others hug the midpoints and rarely use the outermost points. This naturally produces differences in scores between participants that reflect something other than what the questionnaire was designed to measure. A related problem is that of producing nonsense distinctions. For example, studies sometimes give participants a huge rating scale, such as a 1–100 scale for rating confidence in a decision about whether two lines are the same length (2). Is anyone really capable of segmenting their certainty over such a decision into 100 different units? Is there really any meaningful difference, even within the same individual, between a certainty of 86 and a certainty of 72 in such a paradigm? Any differences found in such experiments therefore run the risk of being spurious.
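
One partial remedy for such response styles, sketched below with invented ratings, is to standardise each respondent’s scores against their own mean and spread (a common analytic fix in general, not something proposed in reference 1).

```python
import numpy as np

# Two invented respondents with the same relative opinions but different
# response styles: one uses the extremes, one hugs the midpoint.
ratings = np.array([
    [9.0, 1.0, 9.0, 1.0, 5.0],   # extreme responder
    [6.0, 4.0, 6.0, 4.0, 5.0],   # midpoint hugger
])

# Within-person standardisation: subtract each respondent's own mean and
# divide by their own spread, making the two rows directly comparable.
z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)
print(np.round(z, 2))   # the two rows come out identical
```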

Response bias – This refers to an individual’s tendency to respond in a certain way, regardless of the actual evidence they are assessing. For example, on a yes/no questionnaire asking about personal experiences, some participants might be biased towards responding ‘yes’ (i.e. they may only require minimal evidence to decide on a ‘yes’ response, so if an experience has happened only once they may still respond ‘yes’ to a question asking whether they have had that experience). Other participants may have a conservative response bias and only respond positively if the experience in question has happened regularly. This is a particular problem when the relationship between different questionnaires is assessed, as a correlation between two questionnaires may simply reflect participants’ response bias being consistent across both, rather than any genuine relationship between the variables the questionnaires are measuring.
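
That last concern is easy to demonstrate with a small simulation; every quantity below is an invented standard normal draw.

```python
import numpy as np

# Two questionnaires measure genuinely unrelated traits, but each person
# carries a consistent response bias into both.
rng = np.random.default_rng(1)
n = 500
trait_a = rng.normal(size=n)   # true scores on trait A
trait_b = rng.normal(size=n)   # true scores on trait B, independent of A
bias = rng.normal(size=n)      # each person's yes-saying tendency

score_a = trait_a + bias       # observed questionnaire scores
score_b = trait_b + bias

print(np.corrcoef(trait_a, trait_b)[0, 1])   # ~0: no real relationship
print(np.corrcoef(score_a, score_b)[0, 1])   # ~0.5: spurious correlation
```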

Ordinal measures – Almost all self-report measures produce ordinal data. Ordinal data only tells you the order in which units can be ranked, not the distances between them. It is contrasted with interval data, which tells you the exact distances between units. The distinction is easiest to grasp by thinking of a race: the position in which each runner finishes is an ordinal measure, telling you who was fastest and slowest but not the relative differences between runners, whereas the finishing time is an interval measure, as it does capture those relative differences. Even when a questionnaire measures something that could be expressed in SI units, and is therefore theoretically on an interval scale (e.g. alcohol consumption), it is doubtful whether the responses can really be treated as interval data because of the problems relating to response accuracy raised above. More pertinently, most self-report measures in behavioural science relate to constructs, such as personality traits, that can’t be measured in interval units and are therefore always ordinal. The problem with ordinal data is not the data itself, but the common practice of applying parametric statistical techniques to it, because these tests make assumptions about the distribution of the data that cannot be met when the data is ordinal. Deviations from such assumptions can lead to incorrect inferences being made (3), bringing the conclusions of such studies into question.
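
One standard alternative, sketched here with invented 1–5 Likert responses, is a rank-based test that uses only the ordinal information in the data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

# Invented Likert (1-5) responses from two groups.
group_a = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 5])
group_b = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4])

print(ttest_ind(group_a, group_b))      # treats the scores as interval data
print(mannwhitneyu(group_a, group_b))   # uses only the rank order
```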

Control of the sample – This has become more of an issue with the advent of online questionnaire distribution sites like Survey Monkey. Previously a researcher had to be present when a participant completed a questionnaire; now the researcher need never meet any of their participants. While this allows much bigger samples to be collected much more quickly, it raises several concerns about the make-up of the sample. For example, there are few controls to stop the same person filling in the same questionnaire multiple times. There is also little disincentive for participants to give spurious responses, and little control over how much attention the participant pays to various parts of the questionnaire. Conversely, from personal experience I know that such questionnaires can be hard to complete because there is no way of asking the researcher to clarify the meaning of particular questions. Finally, because the researcher has lost control over the make-up of their sample, they may end up with a sample that is vastly skewed towards a certain type of person, as only certain types of people are likely to fill in such questionnaires. These issues existed before the advent of online data collection (e.g. 4), but collecting data ‘in absentia’ exacerbates them.

Although there are many problems with using self-report questionnaires, they will continue to be a popular methodology in behavioural science because of their utility. While it might be preferable for every variable a researcher wants to investigate to be manipulated systematically using behavioural techniques, this is in practice impossible: it would severely restrict what each individual research design could achieve, and would make certain topics effectively impossible to research. Self-report measures are therefore a necessary tool for behavioural research. Furthermore, some of the problems listed above can be countered through the careful design and application of self-report measures. For example, response bias can be attenuated by ‘reversing’ half the questions on a questionnaire, so that the variable is scored by positive responses on half the questions and negative responses on the other half, largely cancelling out any blanket response bias (see the sketch below). Likewise, statistical techniques are being devised to pick out dishonest reporting, a problem that can also be reduced by ensuring anonymity and confidentiality of responses (e.g. the researcher leaving the room while the participant completes the questionnaire). Given this, it would be wrong to dismiss any findings that rely on self-report measures. However, whenever you read about research where self-report measures have been used to draw conclusions about human behaviour, it is worth bearing in mind the multitude of problems associated with such measures, and how they might affect the validity of the conclusions drawn.
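
For illustration, here is a minimal sketch of that reverse-keying logic; the responses and keying pattern are invented.

```python
import numpy as np

# One respondent's answers to 8 items on a 1-5 scale; every second item is
# worded so that agreement counts *against* the trait being measured.
responses = np.array([4, 5, 2, 4, 1, 5, 2, 3])
reverse_keyed = np.array([False, True, False, True, False, True, False, True])

# Flip the reverse-keyed items (on a 1-5 scale, score -> 6 - score), so a
# blanket tendency to agree inflates both halves equally and largely cancels.
scored = np.where(reverse_keyed, 6 - responses, responses)
print(scored.mean())   # trait score after reverse-keying
```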

(1) Austin, E. J., Gibson, G. J., Deary, I. J., McGregor, M. J., & Dent, J. B. (1998). Individual response spread in self-report scales: personality correlations and consequences. Personality and Individual Differences, 24, 421–438. http://www.sciencedirect.com/science/article/pii/S019188699700175X

(2) Balakrishnan, J. D. (1999). Decision processes in discrimination: Fundamental misrepresentations of signal detection theory. Journal of Experimental Psychology: Human Perception & Performance, 25, 1189-1206. http://psycnet.apa.org/psycinfo/1999-11444-002

(3) Wilcox, R. R. (2005). Introduction to robust estimation and hypothesis testing. Academic Press. ISBN: 0127515429

(4) Fan, X., Miller, B. C., Park, K., Winward, B. W., Christensen, M., Grotevant, H. D., et al. (2006). An exploratory study about inaccuracy and invalidity in adolescent self-report surveys. Field Methods, 18, 223–244. http://fmx.sagepub.com/content/18/3/223.short

What is cognitive neuroscience, and why should anyone care?

I often have trouble explaining to people what I am doing for my PhD. This is not because the topic is so fiendishly complex that no-one else can understand it. Instead it comes from the fact that the area of study seems to fall between several different subject areas. When I tell people that I am doing my PhD within the Neuroscience department, I imagine this provokes images of test tubes, microscopes and pipettes, and perhaps associations with genetics, animal testing and stem cells. In reality I have little knowledge or experience of any of these topics, having last done ‘traditional’ lab work while I was at secondary school. If you asked me to dissect something, I would probably run a mile! When I instead say that I work within the Psychiatry department, this probably brings up an altogether different set of images: of drug therapies, ECT and perhaps ‘talking therapies’ such as CBT (cognitive behavioural therapy). In fact both of the above statements about my PhD are true, as the Psychiatry department sits within the Neuroscience department, but neither gives an accurate impression of what I actually do.

The best description of my area of research is ‘cognitive neuroscience’, but what does this mean? Cognitive neuroscience is the study of the neural basis of behaviour. Roughly, it bridges the gap between the biological sciences and behavioural sciences such as psychology and psychiatry. It attempts to determine how the brain achieves the legion of processes that it performs – crudely, ‘what part of the brain does what’! Cognitive neuroscience has only recently been seen as a separate area of study, partly because the advanced brain imaging techniques on which the discipline now heavily relies have only been developed within the last 30 years (according to Wikipedia, the term ‘cognitive neuroscience’ itself was coined in the back of a taxi in 1979!). However, scientists from various disciplines have been trying to understand how the brain functions, using whatever methods were available, since at least the 19th century.

Cognitive neuroscience relies heavily on work done within the behavioural sciences, which have served to define how human behaviour and cognition can be classified into concepts that can be studied. Unsurprisingly, therefore, cognitive neuroscience research normally involves the application of a behavioural task that has already been utilised without brain imaging techniques. One question this raises is: what does knowing how the brain achieves its functions tell us that purely behavioural science does not? Psychologists have been ably investigating the details of mental processes for well over a century without knowing (or even caring) which part(s) of the brain are involved. The knowledge that spatial processing is largely dependent on the hippocampus is not necessary for studying the intricacies of, and individual differences in, spatial processing. So what does an understanding of the neural basis of mental processes achieve?

Firstly, understanding the neural basis of a mental process can help distinguish between different theories of how that process is performed. Behavioural data is often not sufficient to distinguish between competing theories (e.g. whether a particular process is performed as a whole or split into component processes that are dealt with separately, and whether such component processes are performed in parallel or in series), and neuroimaging data can provide strong evidence in relation to these questions (1). Secondly, cognitive neuroscience can provide insight into areas of cognition that were difficult or impossible to address without neuroimaging techniques. For example, much work has been done on trying to understand what the brain does ‘at rest’ (i.e. when no task is being performed, effectively ‘mind wandering’), which can help us understand how the brain might work as a self-contained integrative mechanism (2). As, by definition, non-task-related mental processes can’t be manipulated systematically, it is hard to investigate them from a purely behavioural standpoint. Similarly, neuroimaging has enabled scientists to begin to uncover the neural basis of ‘consciousness’, raising interesting questions about how our experience of the world is constructed (3). These achievements of cognitive neuroscience help elucidate the nature of human thought and behaviour, shedding light on why we act the way we do.

On a larger scale, understanding how the brain is able to process such a large variety of information, and produce such a wide variety of responses, can help guide the design of artificial intelligence systems intended to mimic human abilities, facilitating advances in medicine and engineering. Finally, and perhaps most importantly, knowing how the brain produces certain responses can lead to the development of interventions to alter the functioning of the appropriate brain areas when those responses become problematic (e.g. in mental health disorders). One of the major aims of cognitive neuroscience is to identify the neural deficiencies that mark various psychiatric and neurodegenerative disorders. From this information it becomes possible, potentially, to identify methods of combating such deficiencies. Indeed, biological interventions are being developed that can target specific brain areas, offering great hope for improving the therapeutic treatment of mental disorders.

References

(1) Jonides, J. et al. (2006). What has functional neuroimaging told us about the mind? So many examples, so little space. Cortex, 42, 414–417. http://www-personal.umich.edu/~jjonides/pdf/2006_3.pdf

(2) Van den Heuvel, M.P. & Hulshoff Pol, H.E. (2010). Exploring the brain network: A review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8), 519–534. http://www.sciencedirect.com/science/article/pii/S0924977X10000684

(3) Dehaene, S. & Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70, 200–225. http://www.sciencedirect.com/science/article/pii/S0896627311002583