Almost every child goes through a dinosaur phase. In some cases, it’s a frenzied week of roaring and leaving spiky plastic models all over the floor, before a combination of sore feet and a sore throat drives you on to the next stage of development. In my case, it lasted about 5 years. I owned sacks of dinosaur toys, a library’s worth of dinosaur books, and irritated my friends by criticising the accuracy of their dinosaur games (You can’t play with a dinosaur from the Cretaceous and a dinosaur from the Jurassic at the same time. You just cannot.) Eventually, peer pressure made me decide that dinosaurs were for little kids, and I forgot about them for a decade or so.
But last year, I took a module in Palaeobiology – the study of extinct organisms – as part of my degree. I was back in the realm of dinosaurs – older, wiser, but still embarrassingly excited. Then, as I delved deeper into my external reading, I found some papers that shook my world, shattered my dreams, and generally slapped my childhood in the face. My dinosaur books had been lying to me about my favourite dinosaur of all time: Deinonychus.
Deinonychus (pronounced Die-NON-ik-uss) was a mean guy. Resembling its smaller, superstar cousin the Velociraptor, Deinonychus nonetheless has its own claims to fame.
Before the 1960s, scientists took a pretty dim view of dinosaurs. The consensus was that they were all stupid, sluggish and cold-blooded, and probably died out because they couldn’t cope with the same challenges that we sleek, sexy mammals can. But that view started to fall apart when John Ostrom took a closer look at Deinonychus. He suggested that these animals were speedy, intelligent pack-hunters who worked together to bring down large prey, using the fearsome sickle-shaped claw on each foot to disembowel their victims. Like wolves. Slashy Captain Hook wolves. This image of Deinonychus helped create a revolution in the way that we think about dinosaurs, and it was still championed in all my dinosaur books. As the sort of child who didn’t bat an eyelid at the bloodiest scenes of Watership Down, it inspired me. Over several years, I built up a portfolio of really creepy drawings of dinosaurs killing each other, made with nothing but a pencil and a red felt-tip pen, and ravaging packs of Deinonychus featured heavily in my “art”. On reflection, I feel lucky that my parents didn’t refer me to a child psychologist.
But in 2006, long after I’d abandoned dinosaurs in favour of blushing at teenage boys, some scientists decided to test out the theories about those fearsome feet. Phillip Manning and his team built an accurate hydraulic model of a Deinonychus leg, complete with terror-claw, and made it kick a pig carcass that had kindly volunteered to play the part of an herbivorous dinosaur. Yet far from slicing the carcass into ribbons of sandwich ham, the claws were AWFUL at doing any sort of tearing damage. Instead, they created small shallow puncture wounds that did very little to the surrounding tissue, let alone the internal organs. Not so much a river of blood and gore, then: if Deinonychus behaved like my books said, then the herbivores probably walked away with mildly painful wounds that cleared up in a week. Something else was going on with these bizarre claws. Stumped, Manning suggested that Deinonychus could have used its claws like crampons, allowing it to climb onto the backs of large prey and attack from there. So my vision of dramatic battles between massive herbivores and a fearsome pack of predators wasn’t totally shattered… yet.
It was thanks to a guy called Denver Fowler that my artwork really faded into fantasy. He noticed that modern eagles and hawks—known as raptors—also have one claw bigger than the others on their feet. However, you’ll never see a pack of eagles descending onto a cow in a field and slashing it to death, nor do they need climbing aids. These birds hunt by swooping onto smaller animals, then picking them to bits with their beaks, often while the prey is still alive. A struggling animal could be very dangerous to a bird of prey, potentially breaking its fragile bones, so it’s vital for the raptor to keep it pinned down firmly. This is where that claw comes in. By clamping down with their powerful modified talon, raptors immobilise their prey, allowing them to concentrate on their (very fresh) meal without distraction. Fowler compared the feet of raptors with those of their ancient cousin, Deinonychus, and found many similarities in their anatomy. The flexibility of the toe bearing that large claw may have come in handy not for delivering slashes… but for swivelling down into a death grip on small prey. That’s right—small prey. Those epic clashes I’d envisioned between huge herbivores and fierce little predators seemed less and less feasible.
So how did Deinonychus ACTUALLY live? Fowler envisions a solitary predator that pursued animals smaller or similar to its own size at high speed. It would then pounce on top of its victim and press it firmly to the ground, channelling its bodyweight through the tip of the powerful sickle-claws to prevent escape. Then it would have leaned forward and proceeded to rip its squirming dinner into bitesize chunks—gory, but not quite the image I’d held. Fowler hadn’t gone as far as to demonstrate that my favourite dinosaur was a peaceful vegetarian, but I have to admit—he’d stolen just a little bit of its badassery. This doesn’t mean Deinonychus stops being cool, though. In fact, it could teach us a lot about the early days of its modern relatives: the birds.
Fowler compared modern raptors with Deinonychus once more, and noticed how, when perching on struggling prey, raptors often beat their wings vigorously. This keeps the bird in a prime position on top of the prey, making sure its victim stays pressed to the ground. We’ve known for a while that many predatory dinosaurs like Deinonychus had feathers on their skin – perhaps the first chink to appear in their armour of terror. But scientists have long argued about how the particular lineage of feathery dinosaurs that evolved into birds first developed the “flight stroke”—the special high-powered downbeat of the wings that creates lift. Looking at Deinonychus inspired Fowler to come up with a new theory. If these dinosaurs also stability-flapped their feathered arms when making a kill, then over the generations this behaviour could have selected for greater upper body strength and the ability to beat the arms hard and fast – features that would later come in very useful when their descendants took to the air. Although Deinonychus was not a direct ancestor of birds—it appeared long after the first flying dinosaurs—it was closely related to them, so it’s likely that they shared similar behaviour. So by looking at how Deinonychus might have hunted, we can take steps towards unravelling one of the biggest, most controversial mysteries in all of Palaeobiology.
In future, then, perhaps we’ll look back on Deinonychus as having triggered a second revolution in how we see the dinosaurs. If I could tell that to my 7-year-old self, I hope she’d be consoled. Deinonychus… you might not be the psycho-killer of my imagination, but you’re still cool to me.
The idea that there might be alien life elsewhere in the universe has captured the imaginations of generations of scientists, writers, artists and… well, pretty much everyone! Science Brainwaves has a fantastic *free* lecture coming up on Friday 8th November, where Dr Simon Goodwin will describe how astronomers are looking for life on other planets, and what it might be like. So, without giving away too many spoilers, I thought it would be the perfect opportunity to find out what got our ancestors thinking about aliens, and what we might do if we find them…
It’s difficult to say (or at least difficult for me to say, with my limited resources and time!) when people first started thinking about the possibility of life on other planets. However, it’s fair to say that big astronomical discoveries have probably captured people’s imaginations throughout the ages – in the same way that the moon landing got everyone talking about little green men. One such breakthrough is the ‘Heliocentric Revolution’. Heliocentrism is the model of the solar system with the sun, rather than the earth, at the centre – an idea that has been around since at least the 3rd century BC. However, it was Copernicus who revived the idea in the 16th century, and it was expanded on by the works of Kepler (who calculated the orbits of the planets) and Galileo (who observed other planets by telescope). The spread of the idea that earth wasn’t the centre of the universe must have made our ancestors wonder what else could be out there. Earth was no longer special, just another planet orbiting the sun, so why shouldn’t there be other life-filled planets like ours?
So far we’ve obviously not had much luck in finding life, but it’d probably be prudent to think about what we’d do if we do find it – especially if it’s intelligent. Stephen Hawking has been heard to offer an opinion on the subject:
“If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans,”
“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”
Not the most optimistic of outlooks, but he’s got a point. Several attempts to contact alien life have been made by astronomers, but have they given away too much information? In 1972 and 1973 the ‘Pioneer Plaques’ were sent out on the Pioneer 10 and 11 spacecraft, followed in 1974 by the broadcast of the ‘Arecibo Message’ (both pictured right). A slightly more artistic message was sent out in 1977 on the ‘Voyager Golden Record’, which contained information on the sights and sounds of earth. It’s quite romantic to think that if these messages reached intelligent alien life they might just pop in for a cuppa to say ‘hi’, but the consequences could be a lot worse if the aliens were hostile (and if you’ve got a flair for the melodramatic).
As a microbiologist I can’t help but be a little cynical about grand ideas of intelligent life. At the moment we’ll probably be lucky to find some basic single celled life – which I’ve heard doesn’t tend to be all that talkative (but which as a microbiologist I would find much more exciting anyway!). Anyway, who am I to say what we may or may not find (with all the experience of a 3rd year astrobiology module) – come and hear it from the expert at Science Brainwaves’ free Looking for Aliens Lecture!*
*did I mention it’s free?
Throughout the social and scientific worlds, there is controversy surrounding the potential to genetically modify embryos to create ‘designer babies’: embryos screened for genetic diseases so that they carry only desired qualities chosen by the parents. However, many stories in the media exaggerate and distort the facts – and this can even be seen in the term ‘designer babies’ itself. It is important to think about the likelihood and implications of this idea, and to outline what actually gave rise to the concept.
We could suggest that the idea of genetically engineered embryos originated in 1978 with the first in-vitro fertilisation (IVF) treatment. The procedure gave, and still gives, hundreds of infertile couples a chance to have a child by transferring an egg fertilised in a laboratory into the mother’s uterus. It subsequently led to a procedure known as preimplantation genetic diagnosis (PGD). This is a technique used to profile the genome of an embryo – a form of genetic profiling and embryo screening, and a more accurate way of describing what is popularly meant by ‘designer babies’. In terms of health benefits, PGD means embryos can be screened outside the womb, so that only embryos carrying normal, healthy genes – free from genetic abnormalities – are selected. The technique is currently used to select embryos without a genetic disorder, to increase the chance of successful pregnancies, to match a sibling so the child can act as a donor, and for sex selection; in future it could, in principle, be used to select any desired trait, such as eye colour, intelligence or athleticism – in other words, to design your own baby. Selecting the gender of a child is already possible, because only the X or Y chromosome needs to be identified, but other traits are more difficult due to the amount of genetic material involved. Recent breakthroughs mean that every chromosome in an embryo can be scanned for genes involved in anything from Down’s Syndrome to lactose intolerance using a single microchip – but how advanced is this technology, and what are the ethics behind it?
There is a wide array of ethical, social and scientific concerns over the concept of creating a ‘perfect’ child. Some people worry that in the future there will be an imbalance between genders in the general population, especially in societies that favour boys over girls, such as China. A key issue is that there is an element of eugenics to the idea: PGD would mean that people with ‘unattractive’ qualities become rarer, and society may come to discriminate against those who have not been treated. Taken to an extreme, we could end up with a race of ‘super-humans’ and a divide between those who have been treated and those who haven’t. This selection of genotypes could also have a deleterious effect on the human gene pool, meaning less genetic variation. Whilst at first this may seem positive – you could eliminate genetic disorders such as haemophilia A before they are ever expressed – new diseases may still emerge, and a smaller gene pool leaves less variation for evolution to act on, making us more susceptible to those new diseases having a dramatic effect. It is clear from this that regulations must be put in place and strictly enforced before any new advances are made.
So, how close are we to being able to ‘customise’ our children?
In terms of altering genes already present in the embryo, we are well on our way to refining the technology. Scientists have been altering animal genes for years, and germline gene therapy is already used on animals. Germline gene therapy is now being closely linked with, and developed alongside, PGD – and it could soon be used to change human genetics. Our germline cells are our sex cells (egg and sperm), and this branch of gene therapy essentially involves manipulating these cells and adding new genes to them. The clear possibility, in terms of PGD, is that any trait could be added to an embryo to create a designer baby. This may involve adding a gene to stop a genetic disorder being expressed in a baby’s phenotype, correcting faults as they are identified by PGD – but it could also mean that only certain people are able to advance in society.
On the other hand, before these ‘more advanced’ humans can be created we need to learn more about the genetic code. The basis of all genetic technologies lies in the human genome, and whilst PGD is advancing rapidly, at present it can only be used to look at one or two genes at a time. We therefore cannot use it to alter the genes in embryos, which leads us logically to gene therapy – but the current lack of technology, and the strict regulations on experimenting with germline gene therapy, make it unlikely that anyone will be able to create a completely designer baby in the near future.
Designing our babies is a possibility that government bodies and various organisations are beginning to accept and address fully, and society’s view of the moral implications of PGD and gene therapy will be a key factor in determining how far the concept can advance. There will be ever more debate and controversy over the acceptable applications of gene technologies in humans and human embryos.
Unless you have missed the countless headlines over the past few years about MRSA and hospital superbugs, you are probably aware that antibiotic resistance is a huge problem in healthcare at the moment. It brings with it visions of a post-apocalyptic world of widespread plagues of death and destruction, and a strong desire to wash your hands the moment you enter a hospital. This raises the question: how real is the problem?
The bad news is that it is very real. Around the world, a plethora of diseases from cystitis to TB have shown levels of resistance to our current drug arsenal, even to those drugs we have kept back as a last resort. The even worse news is that, in terms of new drugs in the pipeline, the well is beginning to dry up. If infectious disease is a war, we are definitely losing.
Bacterial resistance is a problem which has become more widespread year on year. It is caused by a build-up of mutations in bacteria which stop the antibiotics from working properly, so that they no longer kill the germs or halt their growth. This is generally driven by the over-prescribing and misuse of antibiotics, such as GPs prescribing them for viral infections, or patients not finishing the course of their prescribed drugs. It is also caused by the widespread use of antibiotics in agriculture to promote growth in cattle. This has created an environment where pathogens are constantly meeting and combating low levels of antibiotics, favouring resistant strains over susceptible ones.
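The selection dynamic described above can be illustrated with a toy simulation (the survival rates below are invented purely for illustration): at a low antibiotic dose, resistant cells survive each exposure more often than susceptible ones, so even a rare resistant strain comes to dominate after a few cycles of exposure and regrowth.

```python
import random

random.seed(3)

# Toy model with made-up numbers: a low antibiotic dose kills half the
# susceptible cells each exposure, while resistant cells mostly survive.
SURVIVE_SUSCEPTIBLE = 0.50
SURVIVE_RESISTANT = 0.95

def one_generation(susceptible, resistant):
    # Each cell survives the dose with its strain's probability,
    # then the survivors double (regrowth between doses).
    s = sum(random.random() < SURVIVE_SUSCEPTIBLE for _ in range(susceptible))
    r = sum(random.random() < SURVIVE_RESISTANT for _ in range(resistant))
    return s * 2, r * 2

s, r = 990, 10   # start with just 1% of cells resistant
for _ in range(10):
    s, r = one_generation(s, r)

print(f"after 10 low-dose exposures: {r / (s + r):.0%} of cells are resistant")
```

The susceptible population merely holds steady (half die, survivors double), while the resistant strain nearly doubles every cycle, so it rapidly takes over the population without any new mutation being needed.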
Traditional antibiotics may no longer be a viable option, and this has sparked a search for alternatives to our current drugs which work in other ways than simply killing the bacteria. The good news is that there is a lot of promising research at different stages of development. One avenue showing potential is anti-adhesion therapy: drugs which prevent bacteria from gripping the cells of the body. If they cannot grab hold of us, they cannot overcome the strategies we have evolved to stop them colonising us, such as the mucus in our airways or the flushing out of germs by urine in the urinary tract, and so they cannot build up in high enough numbers to cause us harm. Because these anti-adhesion drugs do not kill bacteria, they put no selective pressure on them and are therefore unlikely to induce resistance, meaning they have high potential for use in the dystopian future we all fear.
There are a large number of ways by which we can stop bacteria from attaching to us. These therapies use different means to the same end: altering the interaction between bacteria and patient so that bacteria no longer stick efficiently to cells. Listed below is just a small sample of the strategies under investigation.
Anti-adhesion strategies are not a new idea. Cranberry juice has long been used as a home remedy for urinary tract infections. This has been shown to be effective in clinical research, although the results are less than consistent and there is still a great deal of debate about how it actually works. A component of cranberries and related berries has proven itself an inhibitor of bacterial attachment, and the high sugar (specifically fructose) content of cranberry juice can block the ‘arms’ of bacteria, known as fimbriae, which grab cells. For those who aren’t big fans of cranberry juice, there is more good news: this anti-infection effect is not limited to cranberries; new research has identified similar compounds in several other plants, including tea and red wine.
On a less appealing note for you personally, there is also evidence suggesting that breast milk may contain a cocktail of ingredients that prevent bacteria attaching to the recipient. Breast milk has been shown to contain hundreds of proteins, sugars and antibodies, some of which may be effective anti-adherence compounds against a myriad of diseases. This makes evolutionary sense: providing infants with ‘anti-adherence milk’ gives their digestive systems a regular protective coating at a time when the immune system is not yet completely up and running and children are at risk from so many infections.
An alternative strategy is the use of probiotics, which uses non-harmful species of bacteria to fight the harmful kind. In the battle to colonise our body, this strategy is akin to sending reinforcements to the good guys. Commonly used species include lactobacilli and bifidobacteria, which can be added to foods like yogurt. This medical use of probiotics is currently being trialled and is showing some success against a wide number of infections, ranging from food poisoning to vaginosis to stomach ulcers.
The final questions we should ask are: are these therapies as efficient as antibiotics? Are they just going to generate more complex resistant bugs? Would we be better off concentrating all our efforts on searching for new antibiotics? Unfortunately we can’t answer these questions, at least not yet. What we do know is that we are entering a post-antibiotic era. The rule book has changed, and science may need to start playing catch-up.
Despite the widespread availability of evidence-based medicine in the western world, ‘alternative medicines’ are still commonly used. Such medicines are usually inspired by pre-scientific medical practices; those which have been passed down through generations. However many established medical treatments also arise from traditional medical practices. For example the use of aspirin as an analgesic (pain killer) has its roots in the use of tree bark for similar purposes throughout history. The difference between established medicines like aspirin, and alternative medicines such as homeopathy, is that the former have been found to be effective when exposed to rigorous scientific trials.
A form of alternative medicine that has recently been subjected to scientific scrutiny is the use of magnetic bracelets as a method of analgesia. If effective, such therapies would provide cheap and easy-to-implement treatments for chronic pain such as that experienced in arthritis. Unfortunately there is little evidence that such treatments are effective. A meta-analysis of randomised clinical trials looking at the use of magnet therapy to relieve pain found no statistically significant benefit to wearing magnetic bracelets (1). However, it can be argued that existing clinical trials may have been hampered by the difficulty of finding a suitable control condition.
The placebo effect
The ‘placebo effect’ is a broad term used to capture the influence that knowledge concerning an experimental manipulation might have on outcome measures. Consider a situation where you are trying to assess the effectiveness of a drug. To do this you might give the drug to a group of patients and compare their subsequent symptomatology to a control group of patients who do not get the drug. However even if the drug group show an improvement in symptoms compared to the control group, you cannot be certain whether this improvement is due to the chemical effects of the drug. This is because the psychological effects of knowing you are receiving a treatment may produce a beneficial effect on reported symptoms which would be absent from the control group. The solution to this problem is to give the control group an intervention that resembles the experimental treatment (i.e. a sugar pill instead of the actual drug). This ensures that both groups are exposed to the same treatment procedure, and therefore should experience the same psychological effects. Indeed this control treatment is often referred to as a ‘placebo’ because it is designed to control the placebo effect. The drug must exhibit an effect over and above the placebo treatment in order to be considered beneficial.
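The logic of that comparison can be sketched with a toy simulation (every number here is invented for illustration): give anyone who receives a pill the same psychological boost, give only the drug group a true chemical effect, and see which comparison isolates the drug’s real contribution.

```python
import random

random.seed(0)

# Invented effect sizes, for illustration only: taking any pill adds a
# 'placebo' boost of ~2 points to a symptom-improvement score, and the
# drug's chemical action adds a further ~1 point.
PLACEBO_BOOST = 2.0
DRUG_EFFECT = 1.0

def mean_improvement(n, gets_pill, pill_is_drug):
    total = 0.0
    for _ in range(n):
        score = random.gauss(0, 1)        # natural patient-to-patient variation
        if gets_pill:
            score += PLACEBO_BOOST        # psychological effect of treatment
        if pill_is_drug:
            score += DRUG_EFFECT          # true chemical effect
        total += score
    return total / n

n = 10_000
drug      = mean_improvement(n, gets_pill=True,  pill_is_drug=True)
placebo   = mean_improvement(n, gets_pill=True,  pill_is_drug=False)
untreated = mean_improvement(n, gets_pill=False, pill_is_drug=False)

# Against an untreated group the drug looks ~3 points better, mixing the
# chemical and psychological effects together...
print(f"drug vs untreated: {drug - untreated:.2f}")
# ...but against a placebo group only the ~1-point chemical effect remains.
print(f"drug vs placebo:   {drug - placebo:.2f}")
```

Comparing against the untreated group overstates the drug’s benefit threefold in this sketch; only the placebo comparison isolates the chemical effect, which is exactly why the sugar-pill control group is needed.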
A requirement for any study wishing to control for the placebo effect is that the participants must be ‘blind’ (i.e. unaware) as to which intervention (treatment or placebo) they are getting. If the participant is aware that they are getting an ineffective placebo treatment, the positive psychological benefits of expecting an improvement in symptoms is likely to disappear, and thus the placebo won’t genuinely control for the psychological effects of receiving an intervention.
A placebo for magnetic bracelets
The obvious placebo for a magnetic bracelet is an otherwise identical non-magnetic bracelet. However, the problem with using non-magnetic bracelets as a control is that it is easy for the participant to identify which intervention they are getting, as magnetic and non-magnetic materials are simple to tell apart. This can be illustrated by considering a clinical trial which appeared to show that magnetic bracelets produce a significant pain-relief effect (2). In this study participants wore either a standard magnetic bracelet, a much weaker magnetic bracelet or a non-magnetic (steel) bracelet. The standard magnetic bracelet was only found to reduce pain when compared to the non-magnetic bracelet. However, the researchers also found evidence that participants wearing the non-magnetic bracelet became aware that it was non-magnetic, and therefore could infer that they were in a control condition. This suggests that the difference between conditions might be due to a placebo effect, as the participants weren’t blind to the experimental manipulation.
This failure of blinding was not present for the other control condition (weak magnetic bracelet) presumably because these bracelets were somewhat magnetic. As no statistically significant difference was found between the standard and weak magnetic bracelets it could therefore be concluded that the magnetic bracelets have no analgesic effect. However it could also be argued that if magnetism does reduce pain, the weaker bracelet may have provided a small beneficial effect which might have served to ‘cancel out’ the effect of the standard magnetic bracelet. The study could therefore be considered inconclusive as neither of the control conditions were capable of isolating the effect of magnetism.
More recent research
Recent clinical trials conducted by researchers at York University have tried to solve the issue of finding a suitable control condition for magnetic bracelets. Stewart Richmond and colleagues (3) included a condition where participants wore copper bracelets, in addition to the three conditions used in previous research, while investigating the effect of such bracelets on the symptoms of Osteoarthritis. As copper is non-magnetic, it can act as a control in testing the hypothesis that magnetic metals relieve pain. However, as copper is also a traditional treatment for pain, it does not share the non-magnetic steel bracelet’s drawback regarding the expectation of success: the participant is likely to have the same expectation of a copper bracelet working as they would for a magnetic bracelet.
The study found no significant difference between any of the bracelets on most of the measures of pain, stiffness and physical function. The standard magnetic bracelet did perform better than the various controls on one sub-scale of one of the three measures of pain taken, but this isolated positive effect was considered likely to be spurious because of the number of comparisons relating to changes in pain that were performed during the study (see 4). The same group has recently published an almost identical study on the pain reported by individuals suffering from Rheumatoid Arthritis rather than Osteoarthritis (5). Using measures of pain, physical function and inflammation, they again found no significant differences in effect between the four bracelet types.
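The multiple-comparisons worry can be illustrated with a short simulation (a sketch, not a model of the actual trial): even when two bracelets are truly identical, testing many pain sub-scales makes it quite likely that at least one comparison comes out ‘significant’ at the usual 5% threshold purely by chance.

```python
import random

random.seed(1)

# Under the null hypothesis (no real difference between bracelets), each
# individual statistical test is 'significant' with probability ALPHA.
ALPHA = 0.05

def any_spurious_hit(n_subscales):
    # Did at least one of the sub-scale comparisons cross the threshold?
    return any(random.random() < ALPHA for _ in range(n_subscales))

trials = 100_000  # simulated null studies
p1 = sum(any_spurious_hit(1) for _ in range(trials)) / trials
p10 = sum(any_spurious_hit(10) for _ in range(trials)) / trials

print(f" 1 sub-scale:  spurious 'effect' in {p1:.0%} of null studies")
print(f"10 sub-scales: spurious 'effect' in {p10:.0%} of null studies")
```

With ten independent sub-scales the chance of at least one spurious hit is 1 − 0.95¹⁰, roughly 40%, which is why an isolated positive result on a single sub-scale is treated with suspicion.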
The existing research literature seems to suggest that magnetic bracelets have no analgesic effect over and above a placebo effect. The use of a copper bracelet overcomes some of the problems of finding a suitable control condition to compare magnetic bracelets against. One argument against using copper bracelets as a control is that as they themselves are sometimes considered an ‘alternative’ treatment for pain, they may also have an analgesic effect. Such an effect could potentially cancel out any analgesic effect of the magnetic bracelets when statistical comparisons are performed. However copper bracelets did not perform any better than the non-magnetic steel bracelets in either study (3, 5) despite the potential additional placebo effect that might apply during the copper bracelets condition. Indeed on many of the measures of pain the copper bracelet actually performed worse than the non-magnetic bracelet. The copper bracelet can therefore be considered a reasonable placebo to use in research testing the analgesic effect of magnetic bracelets.
Despite the negative results of clinical trials, it may be wise not to rule out a potential analgesic effect of magnetic bracelets entirely. Across all three studies (2, 3, 5) the measures of pain were generally lowest in the standard magnetic bracelet group. Indeed, significant effects were found in two of the studies (2, 3), although these were confounded by the aforementioned problems concerning control conditions and multiple comparisons. Nevertheless it could be argued that, given the existing data, magnetic bracelets may have a small positive effect, but that this effect is not large or consistent enough to produce a statistically significant difference in clinical trials. This theory could be tested by conducting trials with far more patients (and thus greater statistical power), or by using a number of bracelets of differing magnetic strengths to see if any reported analgesic effect increases with the strength of the magnetic field. Until such research is performed, it is best to assume that magnetic bracelets do not have any clinically relevant analgesic effect.
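The statistical-power point can also be sketched numerically (the effect size is an invented assumption, not an estimate from the trials): suppose a small true effect of 0.2 standard deviations exists, and ask how often a two-group trial of a given size would detect it.

```python
import math
import random

random.seed(2)

# Assumed (hypothetical) true analgesic effect, in standard deviations.
EFFECT = 0.2

def trial_is_significant(n_per_group):
    # Simulate one two-group trial and apply a simple z-test at p < 0.05.
    magnet  = [random.gauss(EFFECT, 1) for _ in range(n_per_group)]
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    diff = sum(magnet) / n_per_group - sum(control) / n_per_group
    se = math.sqrt(2 / n_per_group)   # standard error of the difference
    return abs(diff) / se > 1.96      # two-sided 5% significance threshold

powers = {}
for n in (50, 200, 1000):
    detected = sum(trial_is_significant(n) for _ in range(2000))
    powers[n] = detected / 2000
    print(f"n = {n:4d} per group: effect detected in {powers[n]:.0%} of trials")
```

In this sketch a trial of 50 patients per group usually misses the small effect entirely, while very large trials detect it almost every time: a real but small benefit could plausibly hide below the significance threshold of modestly sized studies.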
(1) Pittler MH, Brown EM, Ernst E. (2007) Static magnets for reducing pain: systematic review and meta-analysis of randomized trials. CMAJ 177(7):736–42.
(2) Harlow T, Greaves C, White A, Brown L, Hart A, Ernst E. (2004) Randomised controlled trial of magnetic bracelets for relieving pain in osteoarthritis of the hip and knee. BMJ 329(7480):1450–4.
(3) Richmond SJ, Brown SR, Campion PD, Porter AJL, Klaber Moffett JA, et al. (2009) Therapeutic effects of magnetic and copper bracelets in osteoarthritis: a randomised placebo-controlled crossover trial. Complement Ther Med 17(5–6): 249–56.
(5) Richmond SJ, Gunadasa S, Bland M, MacPherson H (2013) Copper Bracelets and Magnetic Wrist Straps for Rheumatoid Arthritis – Analgesic and Anti-Inflammatory Effects: A Randomised Double-Blind Placebo Controlled Crossover Trial. PLoS ONE 8(9):
This morning I came across a very interesting TED talk by Ellen Jorgensen entitled “Biohacking — you can do it, too” (http://on.ted.com/gaqM). The basic premise is to make biotech accessible to all by setting up community labs, where anyone can learn to genetically engineer an organism or sequence a genome. This might seem like a very risky venture from an ethical point of view, but she actually makes a good argument for the project being at least as ethically sound as your average lab: the worldwide community of ‘biohackers’ has agreed not only to abide by all local laws and regulations, but has also drawn up its own code of ethics.
So what potential does this movement have as a whole? One thing it’s unlikely to lead to is bioterrorism, an idea that the media like to imply when they report on the project. The biohacker labs don’t have access to pathogens, and it’s very difficult to turn a harmless microbe into a malicious one without access to at least the protein-coding DNA of a pathogen. Unfortunately, the example she gives of what biohacking *has* done is rather frivolous: a story of how a German man identified the dog that had been fouling in his street by DNA testing. However, she does give other examples of how the labs could be used, from discovering your ancestry to creating a yeast biosensor. This echoes another biotech project called iGEM (igem.org), where teams of undergraduate students work over the summer to create some sort of functional biotech (sensors are a popular option) from a list of ‘biological parts’.
The Cambridge 2010 iGEM team made a range of colours of bioluminescent (glowing!) E. coli as part of their project.
My view is that Jorgensen’s biohacker project might actually have some potential to do great things. Present-day professional scientists do important work, but are often limited by bureaucracy and funding issues, making it very difficult to do science for the sake of science. Every grant proposal has to have a clear benefit for humanity, or in the private sector for the company’s wallet, which isn’t really how science works. The scientists of times gone by were often rich and curious people who made discoveries by tinkering and questioning the world around them, and even if they did have a particular aim in mind they weren’t constrained by the agendas of companies and funding bodies. Biohacking seems to bring the best of both worlds: a space with safety regulations and a moral code that allows anyone to do science for whatever off-the-wall or seemingly inconsequential project takes their fancy, taking science back to the age of freedom and curiosity.
Over the holidays I rediscovered a book I picked up in an antique shop a year or so ago called “Milestones in Microbiology”. I had assumed it was going to be a standard history book with lots of dates and names and events, but it turned out to be a collection of groundbreaking microbiology papers from the 16th century to the early 20th century – quite a special find for a microbiology student. Many of the papers included were written by familiar names such as Pasteur, Leeuwenhoek, Lister, Koch, Fleming and more, and the collection was compiled and translated by Thomas Brock (a familiar name to anyone who’s been set Brock’s Biology of Microorganisms as a first year text book!).
I’ve not yet read the whole collection, but having read the first few papers I’m very much sold. The early texts on the field of microbiology are not just intriguing but fairly accessible too. The style of writing is far less technical than today’s academic papers, as well as being in full prose (in those days journals didn’t have strict word limits). My favourite example of this so far is when Leeuwenhoek describes one of his test subjects as “a good fellow”, a comment that would be branded unnecessary and completely beside the point in today’s academic world!
It’s not often you get the chance to view groundbreaking scientific advances through the eyes of the scientists you get taught about in the textbooks. Reading the paper in which Leeuwenhoek first describes bacteria (or “little animals” as he calls them) feels like something of a privilege, as well as a trip back in time, so it’s definitely worth a read for anyone with an interest in the field. A more up-to-date version of the book seems to be available on Amazon, or for University of Sheffield students there are a few copies in Western Bank Library – enjoy!
On another note, if you’re interested in this sort of thing I’d also definitely recommend a trip to the Pasteur museum in Paris. I visited it a few years ago, and like the papers mentioned above it’s a fascinating insight into the work of pioneering microbiologists. It’s a fairly understated part of the modern Pasteur Institute, with the museum situated in the building of the original Pasteur Institute. The museum contains plenty of scientific curiosities, such as Pasteur’s original experimental equipment, and documents his work from his early background in chemistry and stereoisomers up to his more famous vaccine and microbiological work. Finally, on a less biological theme, the museum also contains Pasteur’s living quarters and crypt, which were also part of the original institute building!
Lying, the deliberate attempt to mislead someone, is a process that we all engage in at some time or another. Indeed, research has found that the average person lies at least once a day, suggesting that lying is a standard part of social interaction (1). Despite its common occurrence, lying is not an automatic process. Instead it represents an advanced cognitive function: a skill that requires more basic cognitive abilities to be present before it can emerge. To lie, an individual first needs to be able to appreciate the benefits of lying (e.g. a desire to increase social status) so that they have the motivation to behave deceitfully. Successful lying also requires ‘theory of mind’, or the ability to understand what another person knows. This is necessary so that the would-be liar can spot firstly the opportunity to lie, and secondly what sort of deception might be required to produce a successful lie. Finally, lying requires the ability to generate a plausible and coherent, but nonetheless fabricated, description of an event. Given these prerequisites it is unlikely that we are ‘born liars’. Instead the ability to lie is believed to develop sometime between the ages of 2 and 4 (2). The fact that the ability to lie develops over time suggests that our performance of the ‘skill’ of lying should be sensitive to practice. Do people who lie more often become better at it?
Lying is tiring!
Lying is considered more cognitively demanding than telling the truth due to the extra cognitive functions that need to be utilised to produce a lie. The idea that lying is cognitively demanding is supported both by behavioural data showing that deliberately producing a misleading response takes longer, and is more prone to error, than producing a truthful response (3), and by neurological data showing that lying requires additional activity in the prefrontal areas of the brain when compared to truth telling (4). These observable differences between truth telling and lying allow a measure of ‘lying success’ to be created. For example, a successful (or skilled) liar should be able to perform lies more quickly and accurately than a less successful liar, perhaps to the extent that there is no noticeable difference in performance between truth telling and lying in such individuals. Likewise, if the ability to lie is affected by practice, then practice should make lies appear more like the truth in terms of behavioural performance.
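As a concrete illustration of this behavioural measure, the ‘lie effect’ is simply the performance gap between deceptive and truthful responses. With entirely made-up reaction times (these numbers are not from any of the studies cited here) it could be computed like this:

```python
# Hypothetical reaction times (ms) from a yes/no deception task.
# The 'lie effect' is the average slowdown for deceptive answers.
truth_rts = [612, 598, 640, 575, 630, 601]
lie_rts = [701, 688, 735, 664, 720, 695]

def mean(xs):
    return sum(xs) / len(xs)

lie_effect = mean(lie_rts) - mean(truth_rts)
print(f"lie effect: {lie_effect:.0f} ms")  # prints: lie effect: 91 ms
```

A skilled (or well-practised) liar would be one for whom this difference shrinks towards zero; the same calculation can be done with error rates instead of reaction times.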
Practice makes perfect (but is this a lie)?
Despite the intuitive appeal of the idea that lying becomes easier with practice, much past research has failed to find an effect of practice on lying, either when measuring behavioural (3) or neuroimaging (5) markers of lying. Such results have led to the conclusion that lying may always be significantly more effortful than truth telling, no matter how practiced an individual is at deception.
A recent study (6) re-examined this issue. The researchers used a version of the ‘Sheffield Lie Test’, in which participants are presented with a list of questions that require a yes/no response (e.g. ‘Did you buy chocolate today?’). The experiment involved three main phases. In the first, baseline phase, participants were required to respond truthfully to half of the statements and to lie in response to the other half. In the middle, training phase, the statements were split into two groups. For a control group of statements the proportion that required a truthful response remained at 50% for all participants. For an experimental group of statements the proportion that required a truthful response was varied between participants: participants had to lie in response to 25%, 50% or 75% of these statements, thus giving them differing levels of ‘practice’ at lying. The final, test phase, was a repeat of the baseline phase. This design allowed two research questions to be assessed. Firstly, the researchers could identify whether practice at lying reduced the ‘lie effect’ on reaction time and error rate (i.e. the increase in reaction time and error rate that occurs when a participant is required to lie, compared to when they are required to tell the truth). Secondly, they could identify whether any reduction in the lie effect applied just to the statements on which the groups had experienced differing practice levels, or whether it also generalised to those statements where all groups had the same level of practice.
The results revealed that practice did produce an improvement in the ability to lie during the period when the training was actually taking place, and that this improvement applied to both the control statements and the experimental statements: the participants who had to lie more demonstrated reduced error rates and reaction times compared to those who had to lie less during the training phase. However, in the test phase this improvement was only maintained for the set of statements where the frequency of lying had been manipulated. The group who had practised lying on 75% of the experimental statements were no faster or more accurate at lying on the control statements than the group who had to lie in response to just 25% of the experimental statements. These results suggest that practice can make you better at lying, but this improvement is only sustained over time for the specific lies that you have rehearsed.
Some lies may be better than others!
One important criticism of most studies on the effect of practice on lying is that they tend to use questions or tasks that require binary responses (i.e. yes/no questions). However, in real life lying often involves the concoction of complex false narratives, a form of lying that is likely to be far more cognitively demanding than just saying ‘No’ in response to a question whose answer is ‘Yes’. Likewise the lies tested in laboratory studies tend to be rehearsed, or at least prepared, lies. In contrast many real-life lies are concocted at short notice, with the deceptive narrative being constructed in ‘real time’, whilst the person is in the process of lying. It is likely that the effect of training, and how that training generalises to other lies, will be different for these more advanced forms of lying than it is for the simpler types of lies that tend to be tested under laboratory conditions. Given this, if a psychologist tells you that we know for certain how practice impacts on the ability to deceive, you can be sure that they are lying!
(1) DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M. & Epstein, J.A. (1996) Lying in everyday life. Journal of Personality and Social Psychology, 70 (5) 979-995. http://smg.media.mit.edu/library/DePauloEtAl.LyingEverydayLife.pdf
(2) Ahern, E.C., Lyon, T.D. & Quas, J.A. (2011) Young Children’s Emerging Ability to Make False Statements. Developmental Psychology. 47 (1) 61-66. http://www.ncbi.nlm.nih.gov/pubmed/21244149
(3) Vendemia, J.M.C., Buzan, R.F., & Green, E.P. (2005) Practice effects, workload and reaction time in deception. American Journal of Psychology. 5, 413–429. http://www.jstor.org/discover/10.2307/30039073?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21101917386241
(4) Spence, S.A. (2008) Playing Devil’s Advocate: The case against fMRI lie detection. Legal and Criminological Psychology 13, 11-25. http://psychsource.bps.org.uk/details/journalArticle/3154771/Playing-Devils-advocate-The-case-against-fMRI-lie-detection.html
(5) Johnson, R., Barnhardt, J., & Zhu, J. (2005) Differential effects of practice on the executive processes used for truthful and deceptive responses: an event-related brain potential study. Brain Research: Cognitive Brain Research 24, 386–404. http://www.ncbi.nlm.nih.gov/pubmed/16099352
(6) Van Bockstaele, B., Verschuere, B., Moens, T., Suchotzki, K., Debey, E. & Spruyt, A. (2012) Learning to lie: effects of practice on the cognitive cost of lying. Frontiers in Psychology, November (3) 1-8. http://www.ncbi.nlm.nih.gov/pubmed/23226137
The age-old ‘nature-nurture’ debate revolves around understanding to what extent various traits within a population are determined by biological or environmental factors. In this context ‘traits’ can include not only aspects of personality, but also physical differences (e.g. eye colour) and differences in vulnerability to disease. Investigating the nature-nurture question is important because it can help us appreciate the extent to which biological and social interventions can affect things like disease vulnerabilities, and other traits that significantly affect life outcomes (e.g. intelligence). The ‘nurture’ part of this topic can be dealt with to some extent by research in disciplines such as Sociology and Psychology. In contrast, genetic research is crucial to understanding the ‘nature’ part of the equation. Genetics also has relevance for the ‘nurture’ part of the debate, because environmental factors such as stress and nutrition affect how genes perform their function (gene expression). Indeed, genetic and environmental factors can interact in more complex ways: certain genetic traits can alter the probability of an organism experiencing certain environmental factors. For example, a genetic trait towards a ‘sweet tooth’ is likely to increase the chances of the organism experiencing a high-sugar diet!
Given the importance of genetic information to understanding how organisms differ, I would argue that a basic knowledge of Genetics is essential for anyone interested in ‘life sciences’. This is true whether your interest is largely medical, psychological or social. Unfortunately if, like me, you skipped A-Level Biology for something more exciting (A-Level Physics in my case!) you might find Genetics a bit of a mystery.
Some basic genetics
Genetic information is encoded in DNA (deoxyribonucleic acid). Sections of DNA that perform specific, separable functions are called genes. Genes are the units of genetic information that can be inherited from generation to generation. Most genes are arranged on long stretches of DNA called chromosomes, although a small proportion of genes are transmitted via cell mitochondria instead. Most organisms inherit two sets of chromosomes, one from each parent. Different genes perform different functions, mostly involving the creation of particular chemicals, often proteins, which influence how the organism develops. All cells in the body contain the DNA for all genes; however, only a subset of genes will be ‘expressed’ (i.e. perform their function) in each cell. This variation in gene expression between cells allows the fixed (albeit very large) number of genes to generate a vast number of different chemicals. This in turn allows organisms to vary widely in form while still sharing very similar genetic information (thus explaining how it can be that we share 98% of our DNA with monkeys, and 50% with bananas!).
The complete set of genetic information an individual has is called their ‘genotype’. The genotype varies between all individuals (apart from identical twins) and thus defines the biological differences between us. In contrast the ‘phenotype’ is the complete set of observable properties that can be assigned to an organism. Genetics tries to understand the relationship between the genotype and a particular individual phenotype (trait). For example how does the genetic information contained in our DNA (genotype) influence our eye colour (phenotype)? As already mentioned environmental factors play a significant role in altering the phenotype produced by a particular genotype. Explicitly the phenotype is the result of the expression of the genotype in a particular environment.
Roughly speaking, heritability is the influence that a person’s genetic inheritance has on their phenotype. More formally, it is the proportion of the total variance in a trait within a population that can be attributed to genetic effects. It tells you how much of the variation between individuals can be attributed to genetic differences: a heritability of 60%, for example, means that 60% of the variation in the trait across the population is associated with genetic differences. Note that this is not the same as saying that 60% of an individual’s trait is determined by genetic information. In narrow-sense heritability (the most common form used), what counts as ‘genetic effects’ is only that which is directly determined by the genetic information passed on by the parents. This ignores variation caused by the interaction between different genes, and between genes and the environment. This is the most popular usage of heritability in science because it is far more predictive of breeding outcomes, and therefore tells us more about the nature part of the ‘nature-nurture’ question, than the alternative (broad-sense) conceptualisation of heritability.
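To make the ‘proportion of variance’ definition concrete, here is a minimal simulation sketch (with made-up variance numbers, not real data) of a classic way narrow-sense heritability is estimated: regressing offspring phenotypes on the midparent (parental average) phenotype, whose slope approximates h² = VA/VP:

```python
import random

random.seed(1)

# Illustrative variance components: the trait's variance is split
# between additive genetic effects (VA = 9) and environment (VE = 4).
GENETIC_SD = 3.0
ENV_SD = 2.0

def phenotype(genetic_value):
    """Observed trait = genetic value + environmental noise."""
    return genetic_value + random.gauss(0, ENV_SD)

families = []
for _ in range(20000):
    g_mum = random.gauss(0, GENETIC_SD)
    g_dad = random.gauss(0, GENETIC_SD)
    # Offspring inherit the parental average of additive effects, plus
    # Mendelian segregation noise that keeps genetic variance constant.
    g_child = (g_mum + g_dad) / 2 + random.gauss(0, GENETIC_SD * 0.5 ** 0.5)
    midparent = (phenotype(g_mum) + phenotype(g_dad)) / 2
    families.append((midparent, phenotype(g_child)))

# Slope of offspring phenotype on midparent phenotype estimates h^2.
n = len(families)
mx = sum(m for m, _ in families) / n
my = sum(o for _, o in families) / n
cov = sum((m - mx) * (o - my) for m, o in families) / n
var = sum((m - mx) ** 2 for m, _ in families) / n
h2 = cov / var
print(round(h2, 2))  # expect roughly VA / (VA + VE) = 9/13, about 0.69
```

Note that the recovered value depends on the environmental variance we chose as much as on the genetic variance, which is exactly the point about heritability being population- and environment-specific.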
Uses and abuses
Genetic research can provide crucial information in the fight against certain diseases. Identifying genes that are predictive of various illnesses allow us to identify individuals who are vulnerable to a disease. This then allows preventive measures to be implemented to counter the possible appearance of the disease. Furthermore once the genes that contribute to a disease are known, knowledge as to how those genes express will help reveal the cellular mechanisms behind the disease. This improves our understanding of how the disease progresses and operates, and therefore helps with identifying treatment opportunities. In reality of course Genetics is rarely this simple. Many conditions that have a genetic basis (i.e. that show a significant level of heritability) appear to be influenced by mutations within a large number of different genes. Indeed in many cases, especially with psychiatric disorders, it may be that conditions we treat as one unitary disorder are in fact a multitude of different genetic disorders that have very similar phenotypes. Nevertheless, despite these problems genetic research is helping to uncover the biological basis of many illnesses.
One problem with Genetics, and heritability in particular, is that of interpretation. There is often a mistaken belief that a high level of heritability signifies that environmental factors have little or no effect on a trait. This misunderstanding springs from ignorance of the fact that estimates of heritability come from a particular population, in a particular environment. If you change the environment (or indeed the population) then the heritability level will change. This is because gene expression is affected by environmental factors, and so the influence of genetic information on a trait will always be dependent to some extent on the environment. As an example, a recent study showing that intelligence was highly heritable (1) led some right-wing commentators to use it as ‘proof’ of the intellectual inferiority of certain populations, because of their lower scores on IQ tests. Such an interpretation is then used to argue that policies relating to equal treatment of people are flawed, because some people are ‘naturally’ better. Apart from the debatable logic of the argument itself, the actual interpretation of the genetic finding is flawed, because a high heritability of IQ does not suggest that environmental differences have no effect on IQ scores. To illustrate this point, consider that the study in question estimated heritability in an exclusively Caucasian sample from countries with universal access to education. If you expanded the sample to include those who did not have access to education, it would most likely reduce the estimate of heritability, as you would have increased the influence of environmental factors within the population being studied! Ironically, therefore, you could argue that only by treating everyone equally would you be able to determine who is truly stronger on a particular trait!
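The point that heritability is tied to a particular environment can be shown with two hypothetical populations that share exactly the same genetic variance but differ in how variable their environments are (all numbers are purely illustrative):

```python
# Heritability is a ratio of variances, so the same genes yield a
# different heritability figure in a different environment.
genetic_variance = 9.0  # identical in both hypothetical populations

for label, env_variance in [("equal access to education", 4.0),
                            ("unequal access to education", 16.0)]:
    h2 = genetic_variance / (genetic_variance + env_variance)
    print(f"{label}: heritability = {h2:.2f}")
```

Widening the environmental differences (here, from a variance of 4 to 16) drops the heritability from 0.69 to 0.36 without any change to the genetics, mirroring the argument above about expanding the IQ sample beyond countries with universal education.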
Independent of what your views on equality are, the most important lesson as regards genetics is that you cannot use estimates of heritability, however high, to suggest that differences in the environment have no effect on trait outcomes.
(1) Davies, G. et al (2011) Genome-wide association studies establish that human intelligence is highly heritable and polygenic. Molecular Psychiatry 16, 996-1005. http://www.nature.com/mp/journal/v16/n10/full/mp201185a.html
Although not directly cited, I found the following information useful when creating the post (and when trying to get my head around Genetics!).
Quantitative Genetics: measuring heritability. In Genetics and Human Behaviour: the ethical context. Nuffield Council on Bioethics. 2002. http://www.nuffieldbioethics.org/sites/default/files/files/Genetics%20and%20behaviour%20Chapter%204%20-%20Quantitative%20genetics.pdf
Visscher, P.M., Hill, W.G. & Wray, N.R. (2008) Heritability in the genomics era – concepts and misconceptions. Nature Reviews Genetics, 9 255-266. http://www.ncbi.nlm.nih.gov/pubmed/18319743
Bargmann, C.I. & Gilliam, T.C. (2012) Genes & Behaviour (Kandel, E.R. et al (Eds)). In Principles of Neural Science (Fifth Edition). McGraw-Hill.
Has somebody set up a miniature weightlifting gym for microbes? Not yet, but just like you and me, bacteria need iron to stay alive. However, unlike us they don’t get iron as a supplement in their cereal – they have to find it for themselves. In bacteria, iron is needed to make proteins involved in vital processes such as respiration and DNA synthesis. With the stakes so high they need specialised ways to get iron, and more often than not they have to scrounge it from us, their human host.
Iron scavenging molecules (called siderophores) are one way that bacteria can get iron from a host. In the human body the levels of free iron are kept very low, so the siderophores have to be very good at finding iron then hanging on to it (high affinity). Once they’ve done this they need to get back into the bacterial cell via special transporters in the cell membrane (see figure below).
So, send out some scavengers and get loads of iron? Not so simple! Firstly, the whole process takes a lot of energy for the cell. In E. coli it takes four different proteins just to make the siderophore, plus another four proteins and some ATP (the energy currency of the cell) to get it back in again. Secondly, too much iron is toxic to the cell, so it needs to make sure that it only goes to all this trouble when it really needs to – in other words, it needs some gene regulation.
This is where it gets clever. Inside the cell there’s a protein called Fur (ferric uptake regulator) that keeps an eye on how much iron is in the cell and turns the genes for iron scavenging on and off. When there’s lots of iron in the cell the iron binds to Fur. This allows Fur to bind to the iron uptake genes and turn them off, so the cell doesn’t waste any resources or overload itself with iron (see figure below). When there’s not enough iron in the cell there’s no iron spare to bind to Fur, so Fur can’t bind to the DNA. This means that the genes are active and the proteins for iron scavenging are made.
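The on/off logic described above can be caricatured in a few lines of code – a minimal sketch of the regulatory logic only, with an entirely arbitrary threshold, not a biochemical model:

```python
IRON_THRESHOLD = 10  # arbitrary units, purely illustrative

def iron_uptake_genes_active(iron_level):
    """Fur-style switch: iron-bound Fur represses the uptake genes."""
    fur_bound_to_iron = iron_level >= IRON_THRESHOLD
    # Only the iron-Fur complex can bind the DNA and turn genes off.
    fur_represses_genes = fur_bound_to_iron
    return not fur_represses_genes

print(iron_uptake_genes_active(25))  # iron-replete cell: False (scavenging off)
print(iron_uptake_genes_active(2))   # iron-starved cell: True (scavenging on)
```

In the real cell the ‘threshold’ emerges from the binding affinities of Fur for iron and for DNA rather than a hard cut-off, but the net effect is the same switch-like behaviour.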
That’s a pretty good system, but a lot of pathogenic bacteria take it a step further. When pathogens enter the body they need to spring into action to make virulence factors – the proteins and molecules that allow them to survive in the body and do all the nasty things that they do. It would be a massive waste of energy if they made these all the time, so they need to be able to activate them specifically when they enter a host. Bacteria don’t have eyes or GPS, so they have to sense the environment to work out where they are. Low iron levels are one signal that they are inside a host, so it makes sense to use an iron-sensing protein to regulate other virulence factor genes (figure 3). For example, E. coli uses the Fur regulator to regulate virulence factor genes for fimbriae (fibres which can latch onto human cells), haemolysin (a toxin that breaks open red blood cells) and Shiga-like toxin (a toxin that helps E. coli cells to get inside human cells).
So, in the arms race of human vs. pathogen it seems that bacteria have found a few sneaky solutions this time. Not only have they gotten around the body’s iron restriction mechanisms, but they also use the low iron levels as a trigger for more deadly weapons.