The dangers of self-report

A common methodology in behavioural science is to use self-report questionnaires to gather data. Data from these questionnaires can be used to identify relationships between scores on the variable(s) that the questionnaire is assumed to measure and performance on behavioural tasks, physiological measures taken during an experiment, or even scores obtained from other questionnaires (some studies just report the correlations between batches of self-report measures!). Self-report measures are popular for a number of reasons. Firstly, they represent a ‘cheap’ way (in terms of both time and cost) of obtaining data. Secondly, they can easily be administered to large samples, especially with the advent of online questionnaire distribution sites such as Survey Monkey. Finally, they can be used to measure constructs that would be difficult to capture with behavioural or physiological measures (for example, facets of personality such as introversion). The issue of self-report methodology is important because studies that use this method are regularly reported in the media, and therefore have a significant impact on how the general public perceive scientific research. I therefore think it is important to discuss the potential problems with self-report measures.

Most (but certainly not all) questionnaires used in behavioural research undergo testing for reliability, to check that they produce consistent results when applied to the same population over time. More importantly, they are normally also tested for validity, to check that the questionnaire measures what it claims to measure. Such tests follow the logic that the questionnaire should discriminate between participants in a similar way to relevant non-self-report measures. For example, scores on a questionnaire measuring depression should be able to discriminate between depressed patients and controls, while scores on a questionnaire measuring diet should be able to predict the body fat percentage of respondents with reasonable accuracy. While such tests can increase confidence that a questionnaire measures what it claims to measure, they are not foolproof. For example, just because a depression questionnaire can discriminate between patients and controls does not mean that it measures depression well, as the two groups will likely differ in several different ways. Likewise, a questionnaire that distinguishes between patients and controls may not be able to identify the (presumably) more subtle differences between depressed and non-depressed healthy individuals, or the range of depressive tendencies within the healthy population. In fact there are a large number of reasons why a questionnaire may not be entirely valid, including the following:

Honesty/Image management – Researchers who use self-report questionnaires are relying on the honesty of their participants. The degree to which this is a problem will undoubtedly vary with the topic of the questionnaire: participants are less likely to be honest about measures relating to sexual behaviour or drug use than about caffeine consumption, although it is unwise to assume, even when measuring something relatively benign, that participants will always be truthful. Worse, the extent to which participants want to manage how they appear will no doubt vary with personality, which means that the level of dishonesty may differ significantly between the groups that a study is trying to compare.

Introspective ability – Even if a participant is trying to be honest, they may lack the introspective ability to provide an accurate response to a question. We are probably all aware of people who appear to view themselves in a completely different light from how others see them, and we are all, to some extent, unable to assess ourselves completely accurately. Any self-report information we provide may therefore be incorrect despite our best efforts to be honest and accurate.

Understanding – Participants may also vary in their understanding or interpretation of particular questions. This is less of a problem with questionnaires measuring concrete things like alcohol consumption, but it is a very big problem when measuring more abstract concepts such as personality. From personal experience, I have participated in an experiment where I was asked at regular intervals to report how ‘dominant’ I felt. As I can honestly say I don’t monitor my feelings of ‘dominance’ and how they change over time, I know that my responses to the question were pretty much random. Even if I could conjure up an understanding of what the question was getting at, it would be impossible to ensure that everyone who completed the questionnaire interpreted the question in the same way that I did.

Rating scales – Many questionnaires use rating scales to allow respondents to provide more nuanced responses than a simple yes/no. While yes/no questions often appear restrictive, rating scales bring their own problems. People interpret and use scales differently: what I might rate as an ‘8’ on a 10-point scale, someone with the same opinion might rate as only a ‘6’ because they interpret the meanings of the scale points differently. There is research suggesting that people have different styles of filling out rating scales (1): some are ‘extreme responders’ who like to use the edges of the scale, whereas others hug the midpoint and rarely use the outermost points. This naturally produces differences in scores between participants that reflect something other than what the questionnaire was designed to measure. A related problem is that of producing nonsense distinctions. Studies sometimes appear where participants are given a huge rating scale to choose from, for example a scale of 1–100 to rate their confidence in a decision as to whether two lines are the same length (2). Is anyone really capable of segmenting their certainty over such a decision into 100 different units? Is there any meaningful difference, even within the same individual, between a certainty of 86 and a certainty of 72 in such a paradigm? Any differences found in such experiments therefore run the risk of being spurious.
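To make the scale-use problem concrete, here is a toy sketch (the mapping and all the numbers are invented purely for illustration) of how two respondents with an identical underlying opinion can produce different ratings simply because one stretches towards the scale ends while the other hugs the midpoint:

```python
def rate(latent, spread):
    """Map a latent opinion in [0, 1] onto a 1-10 rating scale.

    `spread` < 1 pulls ratings towards the midpoint (a 'midpoint hugger');
    `spread` > 1 pushes them towards the scale ends (an 'extreme responder').
    """
    centred = (latent - 0.5) * spread    # stretch or shrink around the midpoint
    score = 5.5 + centred * 9            # rescale onto the 1-10 range
    return max(1, min(10, round(score))) # clip to the ends of the scale

same_opinion = 0.75                      # identical underlying view for both people
print(rate(same_opinion, spread=1.5))    # extreme responder -> 9
print(rate(same_opinion, spread=0.5))    # midpoint hugger  -> 7
```

With the same latent opinion, the two hypothetical respondents land two scale points apart – a gap that reflects their style of scale use, not the opinion the scale is supposed to measure.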

Response bias – This refers to an individual’s tendency to respond in a certain way, regardless of the actual evidence they are assessing. For example, on a yes/no questionnaire asking about personal experiences, some participants might be biased towards responding ‘yes’: they may require only minimal evidence to settle on a yes response, so if an experience has happened only once they may still respond ‘yes’ to a question asking whether they have had that experience. Other participants may have a conservative response bias and only respond positively if the experience being asked about has happened regularly. This is a particular problem when the relationship between different questionnaires is assessed, as a correlation between two questionnaires may simply reflect a response bias that is consistent across both, rather than any genuine relationship between the variables the questionnaires measure.
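This correlation-by-bias artefact is easy to demonstrate with a small simulation (a hypothetical sketch, not a model of any real questionnaire): give each simulated participant a stable personal probability of answering ‘yes’, have them answer two questionnaires whose items are completely unrelated, and the two total scores correlate anyway:

```python
import random

random.seed(1)  # make the simulation reproducible

def yes_count(bias, n_items=20):
    """Total 'yes' responses on a questionnaire of unrelated yes/no items,
    where `bias` is the respondent's personal probability of saying yes."""
    return sum(random.random() < bias for _ in range(n_items))

# Each simulated participant has a stable personal response bias, which they
# carry into two questionnaires measuring completely unrelated constructs.
biases = [random.uniform(0.2, 0.8) for _ in range(500)]
q1 = [yes_count(b) for b in biases]
q2 = [yes_count(b) for b in biases]

def pearson(x, y):
    """Pearson correlation, hand-rolled to keep the sketch self-contained."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(q1, q2)
print(round(r, 2))  # a substantial positive correlation
```

By construction there is no relationship at all between the two ‘constructs’, yet the shared response bias alone produces a sizeable correlation between the scores.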

Ordinal measures – Almost all self-report measures produce ordinal data. Ordinal data only tells you the order in which units can be ranked, not the distances between them. It is contrasted with interval data, which tells you the exact distances between different units. This distinction is easiest to illustrate with a race. The position in which each runner finishes is an ordinal measure: it tells you who was fastest and slowest, but not the relative differences between the runners. In contrast, the finishing time is an interval measure, as it provides information about those relative differences. Even when a questionnaire measures something that could be expressed in SI units, and is therefore theoretically an interval scale (e.g. alcohol consumption), it is doubtful whether the responses can really be treated as interval data, because of the problems relating to response accuracy raised above. More pertinently, most self-report measures in behavioural science relate to constructs, such as personality measures, that can’t be measured in interval units and are therefore always ordinal. The problem with ordinal data is not the data itself, but the common practice of applying parametric statistical techniques to it, because such tests make assumptions about the distribution of the data that cannot be met when the data is ordinal. Deviations from these assumptions can lead to incorrect inferences (3), bringing the conclusions of such studies into question.
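The practical upshot can be shown with a tiny worked example (invented numbers, with hand-rolled correlation functions to keep it self-contained): because ordinal scores carry only order, any relabelling that preserves the order is ‘the same data’ – yet Pearson’s r, which treats the spacing between scores as meaningful, changes under the relabelling, while a rank-based statistic such as Spearman’s rho does not:

```python
def rank(xs):
    """Simple 1-based ranks (this example has no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

scores = [1, 2, 3, 4, 5]          # ordinal questionnaire scores
outcome = [2, 3, 5, 9, 20]        # some behavioural outcome

# A monotone relabelling of the scores: order preserved, spacing changed.
relabelled = [1, 2, 3, 4, 50]

print(round(pearson(scores, outcome), 3), round(pearson(relabelled, outcome), 3))
print(round(spearman(scores, outcome), 3), round(spearman(relabelled, outcome), 3))
```

Pearson’s r gives two different answers for what is, ordinally speaking, identical data, while Spearman’s rho is unchanged – one reason rank-based (non-parametric) techniques are usually the safer choice for questionnaire scores.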

Control of the sample – This has become more of an issue with the advent of online questionnaire distribution sites like Survey Monkey. Previously a researcher had to be present when a participant completed a questionnaire; now the researcher need never meet any of their participants. While this allows much bigger samples to be collected much more quickly, it raises several concerns about the make-up of the sample. There are few controls to stop the same person filling in the same questionnaire multiple times, little disincentive for participants to give spurious responses, and little control over how much attention the participant pays to the various parts of the questionnaire. Conversely, from personal experience, I know that these questionnaires can be hard to complete because there is no way of asking the researcher to clarify the meaning of particular questions. Finally, as the researcher has lost control over the make-up of their sample, they may end up with a sample heavily skewed towards a certain type of person, as only certain types of people are likely to fill in such questionnaires. These issues existed even before the advent of online data collection (e.g. (4)), but collecting data ‘in absentia’ exacerbates them.

Although there are many problems with using self-report questionnaires, they will continue to be a popular methodology in behavioural science because of their utility. While it might be preferable for every variable a researcher wants to investigate to be manipulated systematically using behavioural techniques, this is impossible in practice, as it would severely restrict what each individual research design could achieve and would make certain topics effectively impossible to research. Self-report measures are therefore a necessary tool for behavioural research. Furthermore, some of the problems listed above can be countered through the careful design and application of self-report measures. For example, response bias can be cancelled out by ‘reversing’ half the questions on a questionnaire, so that the variable is scored by positive responses on half the questions and negative responses on the other half. Likewise, statistical techniques are being devised to pick out dishonest reporting, a problem that can also be attenuated by ensuring the anonymity and confidentiality of responses (e.g. the researcher leaving the room while the participant completes the questionnaire). Given this, it would be wrong to dismiss any findings that rely on self-report measures. However, whenever you read about research in which self-report measures have been used to draw conclusions about human behaviour, it is worth bearing in mind the multitude of problems associated with such measures, and how they might affect the validity of the conclusions drawn.
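The reverse-keying fix mentioned above can be sketched in a few lines (hypothetical items and numbers, just to show the arithmetic): on a 1–5 scale a negatively keyed item is scored as 6 minus the response, so a respondent who drifts towards ‘agree’ on every item inflates half the items and deflates the other half, leaving the total unchanged:

```python
def score(responses, reversed_items):
    """Total score on a 1-5 Likert questionnaire where the item positions in
    `reversed_items` are negatively keyed (scored as 6 - response)."""
    return sum(6 - r if i in reversed_items else r
               for i, r in enumerate(responses))

# A hypothetical acquiescent respondent who drifts +1 towards 'agree' on
# every item, regardless of content.
true_answers = [3, 4, 2, 3]
biased = [r + 1 for r in true_answers]
reversed_items = {1, 3}            # half the items keyed in the opposite direction

print(score(true_answers, reversed_items))  # -> 10
print(score(biased, reversed_items))        # -> 10: the uniform bias cancels out
```

The cancellation only works for a bias that is uniform across items, which is why reverse keying attenuates acquiescence bias rather than curing every response-style problem.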

(1) Austin, E. J., Gibson, G. J., Deary, I. J., McGregor, M. J., & Dent, J. B. (1998). Individual response spread in self-report scales: personality correlations and consequences. Personality and Individual Differences, 24, 421–438.

(2) Balakrishnan, J. D. (1999). Decision processes in discrimination: Fundamental misrepresentations of signal detection theory. Journal of Experimental Psychology: Human Perception and Performance, 25, 1189–1206.

(3) Wilcox, R. R. (2005). Introduction to robust estimation and hypothesis testing. Academic Press. ISBN: 0127515429

(4) Fan, X., Miller, B. C., Park, K., Winward, B. W., Christensen, M., Grotevant, H. D., et al. (2006). An exploratory study about inaccuracy and invalidity in adolescent self-report surveys. Field Methods,18, 223–244.

Rob Hoskin

Received a PhD from the Neuroscience Department of Sheffield University. Views expressed in blog posts do not necessarily represent the views of the Science Brainwaves organisation.

49 thoughts to “The dangers of self-report”

  1. Rob,
    This is a FANTASTIC article. What is the best way to contact you to follow-up with a few of your comments?

    all the best,

    1. Jeremy

      I’ve sent you an email with some contact details (I don’t put my email address on the site to avoid getting spammed). Alternatively, if you have any general comments about the blog you can just use this comment thread so that others can see.


  2. Great article, I am using this for my university assignment and tried to access one of the links on your reference list (Personality and Individual Differences) but it will not allow me to log in. Would it be possible for you to email me the full text?

    Thank you.

    1. Hi, your university should hold a subscription to that journal. If you try the link via your university account it should bring up the .pdf file (i.e. log into your university mail account first, then try to access the article).

  3. An excellent article! Really reflects my own current predicament concerning gathering data on procrastination. While evaluating the self-report questionnaire measuring procrastination, I found that many people who I knew personally gave themselves fewer points on their level of procrastination than was actually the case.

  4. Great article, have you published an article on this problem in a scientific review or presented it at a conference? I would like to cite you in my PhD thesis. Best regards

      1. I’m glad you added this response, I am using this article to address limitations in self-report measures in my PhD thesis as well. Thank you for your contributions to research

  5. This is a FANTASTIC article. What is the best way to contact you to follow-up with a few of your comments?
    I would like to find studies on whether self-report can measure abilities or intelligence.
    all the best,

    1. It’s probably best to post any questions in the comments section under this blog. Self-report techniques are sometimes used to measure abilities, but it is much better to use actual behavioural tests to assess them.

