(Mis)interpreting Science

Just how many times have you seen a crazy sensationalist headline like this?

Unfortunately, it seems to be quite common. While the problem stems from various factors, the consequence is the same: inaccurate, grossly simplified stories that mislead the public, sometimes on purpose, sometimes not. (*Just so you know, I made that headline up!) Now, I am all for giving the public the right tools so they can question the stories for themselves. So here is what you need to know about misinterpretations in science in order to question the claims the right way. In my view, misinterpreting science falls into three camps, but project design is by far the most essential step for circumventing such issues. This includes (but is not limited to) sample size, use of controls, double-blind experiments and, importantly, whether the experiment can be replicated. But my focus here is what comes after the experiment: the results.

Done by: The Scientists

The one bugbear that gets picked up on the most is when something is “statistically significant”, particularly in cases where there is a large sample size but a small difference in effect. Significance is the probability that a result occurred by chance (instead of, say, because of a certain treatment). But even when there is statistical significance (between giving the treatment or not), the size of the effect between the two groups can be so small as to make the results meaningless in practice. An example of this by IFL Science is the pattern seen when taking aspirin to reduce heart attacks.
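To see how this plays out, here is a toy simulation of my own (not from any of the studies mentioned, and with entirely invented numbers): two groups of 100,000 people whose outcomes differ by a sliver. With a sample this large, the p-value comes out "significant" even though the effect size is negligible.

```python
# Toy illustration: huge sample + tiny true difference = "significant" but
# practically meaningless. All numbers here are invented for the sketch.
import math
import random
import statistics

random.seed(42)

n = 100_000
control = [random.gauss(100.0, 15.0) for _ in range(n)]
treated = [random.gauss(100.5, 15.0) for _ in range(n)]  # true effect: +0.5

diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
z = diff / se

# Two-sided p-value from the normal approximation
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Cohen's d: the effect size in standard-deviation units
d = diff / statistics.stdev(control + treated)

print(f"p = {p:.2e}, Cohen's d = {d:.3f}")  # tiny p-value, tiny effect
```

The p-value answers "could this be chance?"; the effect size (here, Cohen's d of roughly 0.03, conventionally "negligible") answers the question that actually matters: "is the difference big enough to care about?"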

One common phrase that I have heard a lot when a cause-and-effect study has been conducted is “Well of course the two are linked!” Nothing cheeses me off more. To suggest that two effects are directly linked is not always correct, as Nature writes. There just might be a factor that we haven’t considered at all, or don’t even know about yet.

It’s the time of the year when people come down with colds. The person sitting next to you on the bus just sneezed and the next thing you know, you’re coughing all over the place. As the famous phrase goes, this too shall pass. It is just a matter of time before you heal on your own. But if you believe that holding figs under your armpits for one week will make your cold go away, and then you do that, you might mistakenly start to think that it was the figs that healed you. This is known as regression to the mean: assuming that doing something different or new caused a change in something that would have disappeared with time anyway. Ben Goldacre in his book Bad Science illustrates this perfectly with his example on homeopathy.
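The fig “cure” can be sketched numerically. In this hypothetical Python simulation (my own invented numbers, not Goldacre’s), we pick out the 10% of people who feel worst on day 1; by day 8 they feel much better on average with no treatment at all, simply because extreme measurements drift back towards the average.

```python
# Regression to the mean: select the people who score worst today and,
# untreated, their average score will be better next week.
import random
import statistics

random.seed(1)

def symptom_score(true_severity):
    """A noisy daily measurement around each person's true severity."""
    return true_severity + random.gauss(0, 2)

severities = [random.gauss(5, 1) for _ in range(10_000)]
day1 = [symptom_score(s) for s in severities]
day8 = [symptom_score(s) for s in severities]  # no treatment in between

# Pick the 10% who felt worst on day 1 (the people reaching for the figs)
cutoff = sorted(day1)[-1000]
worst = [i for i, score in enumerate(day1) if score >= cutoff]

before = statistics.mean(day1[i] for i in worst)
after = statistics.mean(day8[i] for i in worst)
print(f"worst group: day 1 = {before:.1f}, day 8 = {after:.1f}")
```

The people who felt worst on day 1 were partly genuinely ill and partly just having an unusually bad day; a week later the bad-day component has reset, so the group improves whether or not any figs were involved.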

Scientists love to model things. It’s a fantastic treat to model something and then extrapolate from it what might happen. Unfortunately, models are not always accurate depictions of the real world, and drawing conclusions by extrapolating from them is inherently risky. However, let’s not can this method entirely, because models can always be updated as new research comes in.

Done by: Scientists, Media and the Public

For a scientist to fudge their results is a big no-no. But it has been known to happen. Or perhaps the scientist might just ignore a study that claims the opposite of theirs. This is cherry-picking, and it is something that everyone does to some extent: only selecting results, or giving them due credit, when they fit with your favourite theory, and ignoring all the other evidence. This is definitely frowned upon during the peer review process. I’m pretty sure that anyone who has ever submitted a paper has seen the words “But have you read/seen this paper by _____?” written in the margins. But it is not just scientists that do this. It is also fairly common in the media, who will most likely not bother to mention that other study because it conflicts with their message or target audience. The public, too, are liable to cherry-picking. They might use one paper to judge an entire topic (without even acknowledging that it may have been retracted!). Essentially, people will only ever read or listen to those who agree with their viewpoint, no matter how logical, evidence-based or valid an opposing argument is; this is what is known as confirmation bias. It is why anyone who is anti-GM will probably be signed up to a magazine or website with the words “organic”, “natural” and “health” in its title.

Done by: Media outlets

This is quite possibly the easiest way to annoy a scientist. Even if the scientists haven’t deliberately selected their results to suit a particular view or opinion, all the media has to do is use suitable words and omit specific details to create a sensationalist headline. And are you trying to sell a product? Then use scientific jargon. Anyone who has paid attention to anti-ageing cream adverts on TV will know exactly what I mean! In a hilarious anecdote, Ben Goldacre explains how a newspaper tried to use a scientific formula to turn something as idle as Jessica Alba’s bum wiggle into something scientifically newsworthy.

And finally. . .
This is of course just a brief summary of a few issues that can lead to misinterpretations, and if all this has left a bad taste in your mouth and you don’t know which way to turn, then fear not! Sense about Science has come to the rescue with their campaign Ask for Evidence, where you can. . . well, you know, ask! I also highly recommend Ben Goldacre’s book Bad Science; it makes for some great and funny reading on cases where interpretations have gone sour for all kinds of reasons. But the take-home message from all of this is simple:
Don’t be bamboozled by the claims around you, make sure you question everything.

From: http://unearthedcomics.com/comics/plaid/


Danae Dodge

I received my PhD in Scientific Archaeology from the University of Sheffield in 2011, specialising in ancient DNA and anthropology. For my profile, see my websites: http://independent.academia.edu/DanaeDodge https://www.linkedin.com/pub/danae-dodge/9b/868/389 I started getting involved in Science Brainwaves as a volunteer in 2010. I have volunteered at presentations and events (such as the British Science Festival in 2011) and even participated in the Science is Vital protest march in October 2010. My first blog for Science Brainwaves was "Ancient Humans: Who were they? And who got it on?", which was the written version of a talk I gave for the Natural History Society at the University of Sheffield on 5 December 2011. I also have a public engagement page dedicated to ancient DNA, which I encourage both the public and specialists to join: https://plus.google.com/communities/115424956261446503473