Animal intelligence.

I have covered a wide range of topics throughout my Biomedical Science degree, but one thing that really sparks my interest, despite not being strictly biomed-related, is the study of animal behaviour.
Animal rights have recently been a hot topic, especially after a viral article about the dolphin-killing ritual in the Faroe Islands shocked many readers. One main reason underlying animal cruelty may be that animals’ value is under-appreciated and their intelligence overlooked, because it is not immediately obvious. Animals can show emotions, and even compassion, both towards their own species and across species.
Whether it’s little zebrafish showing aggression in a mirror test, or elephants displaying caring behaviour, all sorts of animals are capable of actions that resemble many types of human emotional behaviour.

Everyday birds that are seen as a nuisance on the streets, such as pigeons, are often underrated because they are viewed as vermin. Their best-known ability is remarkably accurate homing. This may rely on their sense of smell, building a spatial map of the ‘olfactory positions’ of locations; detection of the earth’s magnetic field has also been suggested, though that theory is less well received.
Pigeons also show other capabilities, such as choosing between two options, tested with a string task on a touchscreen:

Pigeon performing the touchscreen string test
The test involved choosing between a string attached to an empty animated box (red string) and one attached to a box that appeared to contain food (green string). This demonstrated pigeons’ ability to learn and to associate a choice with a food reward.
They can also pass the mirror test, recognising their own reflection. However, this finding has been disputed, as the pigeons in the original experiments were trained to respond to a mirror. Although simple, passing these tests shows a lot of potential.

Crows can commit to memory faces associated with danger, and will call other crows to the area to learn and ‘mob’ the dangerous face. These memories may last a lifetime, so they can spread widely through the community by social learning.
Other species, like primates and cetaceans, are fairly well known for their ability to show complex behaviours.

Capuchin monkeys are known for displaying playful and lively behaviours to attract a mate. A recent amusing article showed females throwing stones at males as a form of ‘flirtatious’ behaviour. It is one of the only ways they can attract a mate, as this species shows no physical signals before copulation. For males it is much easier; rubbing their bodies with urine is effective in attracting females.
Dolphins also show complex behaviours: they communicate through their own series of clicks and whistles, each with a different interpretation. Another very interesting learnt behaviour is covering the snout with a sponge to protect it while foraging, a technique passed on to their young. Recently, though, people have begun to question whether dolphins are really as smart as they are made out to be, since they share some social characteristics with chickens, such as roaming in large groups and empathetic responses, and so may be on a similar intelligence wavelength; in my opinion, that should not be any more of a reason for their abuse. This has led to suggestions, which I agree with, that animals should be appreciated for their different types of intelligence rather than placed on a hierarchy-type scale.

This may rule out the possible ‘classification’ of dolphins as ‘non-human persons’. Cetaceans, the group to which whales and dolphins belong, are so highly respected for their complex intelligence that some countries have decided to ban keeping them in captivity and using dolphins for entertainment, on the grounds that their intelligence approaches the level at which they could be granted the rights of a ‘non-human person’.

In many species of varying family and size, there is an abundance of complex behaviours analogous to human ones, ranging from a caring gesture to heart-wrenching displays of empathy, even towards other species.
With many different animals, research and analysis will continue to reveal that there is more than meets the eye. Behaviour is a very complex topic, with opinions coming from many different perspectives, leading to differing definitions of the term ‘intelligent’.

Image is a screenshot from the video: TrendVideos32 (2013) Pigeons master touchscreen intelligence test. [Accessed 06 Feb 2014] http://www.youtube.com/watch?v=KEVUD9UM-FA

Science on the Origin of Life

Since the dawn of civilisation and the dreaming up of our early creation myths, the philosophical and scientific debate over the origin of life has enchanted people worldwide. In the thousands of years since early humans prayed to sky gods, have we got any closer to determining how life originated on earth? And can we even test any of the theories through the scientific method?

The most widely held theory is abiogenesis: the idea that the conditions present on the early earth when life was beginning, such as electrical activity and a dense atmosphere, resulted in the spontaneous creation of the building blocks of life. When these early conditions were replicated in the lab, in the iconic Miller-Urey experiment of 1953, some of the ingredients for life, such as amino acids, were produced.

The one major problem with this theory is that, just as in cooking, adding the ingredients together doesn’t automatically make the meal. Life is amazingly complex and intricate. Having the building blocks doesn’t account for how they organised themselves into the patterns we call life.
Scientists are working to fill in this gap, producing theories built on the original abiogenesis idea that attempt to explain how order was achieved. These include the deep-sea vent hypothesis, the coenzyme world and RNA world hypotheses, and the iron-sulfur world theory. Other theories stray away from abiogenesis altogether, such as the autocatalytic clay hypothesis, Gold’s “deep-hot biosphere” model, the lipid world and polyphosphates, to name a few.

One theory looks beyond the earth for the origin of life. This is panspermia: the theory that life originated in space. The main evidence offered for it is the claimed presence of dead microbes and fossils in debris collected from the stratosphere. However, this evidence has faced harsh criticism from the scientific world.

There are many questions yet to be answered by panspermia. In this theory, life, in the form of microbes, came to earth piggybacking on meteors and asteroids. How did the microbes survive the harsh conditions of space, and the harsher conditions of entering or exiting an atmosphere? Where did they come from? How did they then survive on a barren planet long enough to divide and evolve? The theory doesn’t really solve the fundamental question of where life originated, but it does extend the window of time over which life could have formed. In the history of the universe, earth is relatively new.
Science has yet to form a watertight theory of the origin of life, and it is questionable whether it ever will. Proponents of the many different ideas debate which comes closest to the events that occurred billions of years ago. Without concrete evidence, all we can do is continue to develop these ideas based on theories and assumptions.

Neolithic Revolution in the air!

The Neolithic Revolution was a key moment in the prehistory of humans. It sparked civilisation as we know it: settlements were established, crops were grown and animals were domesticated, transforming subsistence economies globally. Beginning in the Levant (Near East) around 12,000 years ago, the Neolithic Revolution spread into Europe 8,000 years ago and lasted up until 4,000 years ago, when the Bronze Age began.

The major question is: how did this revolution spread? Did the indigenous hunter-gatherers adopt farming solely through cultural transmission? Or did the farmers pass on their practices alongside their genes? These two models (see diagram), the cultural diffusion model (CDM) and the demic diffusion model (DDM), originally seen as polar opposite mechanisms of the spread, were debated throughout the 20th century. By identifying the proportions of Mesolithic/hunter-gatherer and Neolithic/farmer genes within the current gene pool (see diagram), the correct model could be identified.
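
As a toy illustration of what ‘identifying the proportion’ means quantitatively (my own sketch, not the method of any study listed below), the classic two-source admixture estimator compares allele frequencies in the mixed population with those in the two source populations; every frequency here is invented:

```python
# Bernstein's classic admixture estimator for a population formed by the
# mixing of two sources: m = (p_mix - p_A) / (p_B - p_A), where p_* are
# allele frequencies of a single marker. Frequencies below are invented
# purely for illustration; real studies combine many markers and use
# model-based methods.

def admixture_proportion(p_mix: float, p_a: float, p_b: float) -> float:
    """Estimated fraction of ancestry contributed by source B."""
    return (p_mix - p_a) / (p_b - p_a)

p_hunter_gatherer = 0.10  # hypothetical marker frequency, Mesolithic samples
p_farmer = 0.60           # hypothetical frequency, early Neolithic farmers
p_modern = 0.45           # hypothetical frequency, present-day population

share = admixture_proportion(p_modern, p_hunter_gatherer, p_farmer)
print(f"Estimated Neolithic farmer ancestry: {share:.0%}")  # 70%
```

Under a pure cultural diffusion model the farmer share would be near zero; under pure demic diffusion it would be near one.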

Classical genetic markers in present-day populations (such as blood groups) appear to lend support to the DDM, revealing a genetic cline from the Near East towards the west. But modern genetic markers can reflect population processes that have taken place both before and after the Neolithic spread. Ancient DNA (aDNA) instead provides a unique window of opportunity to look back into the past. Ancient DNA studies do come fraught with difficulties: over time DNA degrades and fragments into short molecules, which usually means any contaminating modern DNA is preferentially extracted and analysed instead. Nevertheless, strict and rigorous protocols exist to minimise contamination, and new technology has been optimised for aDNA extraction.

The archaeological record has shown that as farmers migrated across Europe, two different routes were taken, as indicated by distinct ceramic styles. One route was through Central Europe, from Hungary to Slovakia, Ukraine and through to Paris, as shown by the Linearbandkeramik (LBK) and Alföldi Vonaldiszes Kerámia (AVK) pottery styles. The other route, represented by the Impressed Ware/Cardial culture, was along the Mediterranean coast. aDNA studies have been conducted on samples from these different sites and cultures, and the picture that emerges is more complex than simply picking one model over the other. It certainly appears that the two routes have their own models: while the Central European/LBK route shows little to no genetic continuity between the Mesolithic hunter-gatherers and the Neolithic farmers, the Mediterranean route tends towards genetic continuity, and therefore a level of gene flow between the two populations, a pattern which even seems to extend up into Sweden.

But this is most certainly not the end of the story. For one thing, the genetic studies carried out were analysing mitochondrial DNA (mtDNA), which is inherited solely down the female line (men inherit their mothers’ mitochondrial DNA but do not pass it on). In one study of Spanish Neolithic samples, it was found that while the mtDNA belonged to hunter-gatherer groups from the Palaeolithic, the Y chromosome derived from the Neolithic Near East. This does seem to suggest that the roles of men and women during the advance of the Neolithic differed to some extent. Additionally, it also appears that the change to farming practices did not happen as rapidly as expected, and was not as clear-cut. Two recent papers (with a particular focus on Germany) found that hunter-gatherers and farmers lived alongside each other for about 2,000 years and, interestingly, that while the Mesolithic hunter-gatherers and the Neolithic farmers had their own distinctive gene pools, at some point in the Neolithic there were intermediary groups with shared ancestry and lifestyle, undoubtedly reflecting the transition that was taking place.

There is a level of difficulty in studying the past; we cannot always state processes or causes and effects with a perfect degree of certainty, but we can say what the evidence appears to suggest, and in this case it appears to suggest a high degree of complexity as the Neolithic Revolution took hold. There is never just one specific model that can answer our questions, and there will always be other lines of evidence to explore. The original question of how the Neolithic Revolution spread cannot be settled with one simple answer. It is never that easy. But as aDNA analyses show, we can still get one step closer to that very complicated answer.

More information:

Bollongino et al 2013 Science 342 (6157) 479-481

Brandt et al 2013 Science 342 (6155) 257-261

Gamba et al 2011 Molecular Ecology 21 (1) 45-56

Haak et al 2005 Science 310 (5750) 1016-1018

Lacan et al 2011 PNAS 108 (45) 18255-18259

Pinhasi et al 2012 Trends in Genetics 28 (10) 496-505

Skoglund et al 2012 Science 336 (6080) 466-469

Reproduction Revamp: Stick Insects and Going It Alone.


Timema cristinae: making a lack of a love life cool.

Love can be tough. If you wish awkward dates and trawling through match.com were a thing of the past, you could take a leaf out of this stick insect’s book. Tanja Schwander (University of Lausanne) studies how Timema stick insects are changing the dating game. Rather than reproducing with a partner, female Timema have developed the ability to produce offspring on their own. There could be a number of causes for this bizarre transition from sexual to non-sexual offspring production, so read on for a how-to guide to ditching dating.

Conversion to non-sexual reproduction may occur genetically. When female Timema are prevented from mating, some eggs that haven’t been fertilised by sperm hatch and develop. Could this virgin-birth scenario, reminiscent of biblical times, replace sexual reproduction in Timema? Or are virgin births merely a strategy that ensures female stick insects can carry on their line when opportunities to mate are thin on the ground?

Alternatively, a type of bacterial infection may stimulate non-sexual reproduction. The infecting bacteria are transmitted only through the female sex cell, the egg, so males slow the spread of the bacteria. In light of this, the bacteria have evolved a cunning strategy to eliminate males: inducing a kind of non-sexual reproduction that produces only female offspring. Could bacterial infection be the instigator of non-sexual reproduction?

Schwander’s studies of genetic data reveal that the virgin-birth scenario cannot explain the change in Timema reproduction. Conversion to non-sexual reproduction may occur genetically, but not via virgin births. To determine whether bacterial infection causes the stick insect’s lack of libido, Schwander cured the infection. This restored sexual reproduction and the production of male offspring, showing that bacterial infection can result in non-sexual reproduction. Watch this space; could Boots’ next bestseller be a bacterium that eliminates human males?

The rise and fall of the slasher dinosaur

Almost every child goes through a dinosaur phase. In some cases, it’s a frenzied week of roaring and leaving spiky plastic models all over the floor, before a combination of sore feet and a sore throat drives you onto the next stage of development. In my case, it lasted about 5 years. I owned sacks of dinosaur toys and a library’s worth of dinosaur books, and irritated my friends by criticising the accuracy of their dinosaur games (You can’t play with a dinosaur from the Cretaceous and a dinosaur from the Jurassic at the same time. You just cannot.) Eventually, peer pressure made me decide that dinosaurs were for little kids, and I forgot about them for a decade or so.

But last year, I took a module in Palaeobiology (the study of extinct organisms) as part of my degree. I was back in the realm of dinosaurs: older, wiser, but still embarrassingly excited. Then, as I delved deeper into my external reading, I found some papers that shook my world, shattered my dreams, and generally slapped my childhood in the face. My dinosaur books had been lying to me about my favourite dinosaur of all time: Deinonychus.

Deinonychus (pronounced Die-NON-ik-uss) was a mean guy. Resembling its smaller, superstar cousin the Velociraptor, Deinonychus nonetheless has its own claims to fame.

This particular specimen is a bit of a deviant, judging by his facial expression and his public nudity (we now know that Deinonychus probably had feathers)

This guy has a far more modern dress-sense

Before the 1960s, scientists took a pretty dim view of dinosaurs. The consensus was that they were all stupid, sluggish and cold-blooded, and probably died out because they couldn’t cope with the same challenges that we sleek, sexy mammals can. But that view started to fall apart when John Ostrom took a closer look at Deinonychus. He suggested that these animals were speedy, intelligent pack-hunters who worked together to bring down large prey, using the fearsome sickle-shaped claw on each foot to disembowel their victims. Like wolves. Slashy Captain Hook wolves. This image of Deinonychus helped create a revolution in the way that we think about dinosaurs, and it was still championed in all my dinosaur books. As the sort of child who didn’t bat an eyelid at the bloodiest scenes of Watership Down, it inspired me. Over several years, I built up a portfolio of really creepy drawings of dinosaurs killing each other, made with nothing but a pencil and a red felt-tip pen, and ravaging packs of Deinonychus featured heavily in my “art”. On reflection, I feel lucky that my parents didn’t refer me to a child psychologist.

But in 2006, long after I’d abandoned dinosaurs in favour of blushing at teenage boys, some scientists decided to test out the theories about those fearsome feet. Phillip Manning and his team built an accurate hydraulic model of a Deinonychus leg, complete with terror-claw, and made it kick a pig carcass that had kindly volunteered to play the part of an herbivorous dinosaur. Yet far from slicing the carcass into ribbons of sandwich ham, the claws were AWFUL at doing any sort of tearing damage. Instead, they created small shallow puncture wounds that did very little to the surrounding tissue, let alone the internal organs. Not so much a river of blood and gore, then: if Deinonychus behaved like my books said, then the herbivores probably walked away with mildly painful wounds that cleared up in a week. Something else was going on with these bizarre claws. Stumped, Manning suggested that Deinonychus could have used its claws like crampons, allowing it to climb onto the backs of large prey and attack from there. So my vision of dramatic battles between massive herbivores and a fearsome pack of predators wasn’t totally shattered… yet.

It was thanks to a guy called Denver Fowler that my artwork really faded into fantasy. He noticed that modern eagles and hawks (known as raptors) also have one claw bigger than the others on each foot. However, you’ll never see a pack of eagles descending onto a cow in a field and slashing it to death, nor do they need climbing aids. These birds hunt by swooping onto smaller animals, then picking them to bits with their beaks, often while the prey is still alive. A struggling animal could be very dangerous to a bird of prey, potentially breaking its fragile bones, so it’s vital for the raptor to keep it pinned down firmly. This is where that claw comes in. By clamping down with their powerful modified talon, raptors immobilise their prey, allowing them to concentrate on their (very fresh) meal without distraction. Fowler compared the feet of raptors with those of their ancient cousin, Deinonychus, and found many similarities in their anatomy. The flexibility of the toe bearing that large claw may have come in handy not for delivering slashes, but for swivelling down into a death grip on small prey. That’s right: small prey. Those epic clashes I’d envisioned between huge herbivores and fierce little predators seemed less and less feasible.

So how did Deinonychus ACTUALLY live? Fowler envisions a solitary predator that pursued animals smaller or similar to its own size at high speed. It would then pounce on top of its victim and press it firmly to the ground, channelling its bodyweight through the tip of the powerful sickle-claws to prevent escape.  Then it would have leaned forward and proceeded to rip its squirming dinner into bitesize chunks—gory, but not quite the image I’d held. Fowler hadn’t gone as far as to demonstrate that my favourite dinosaur was a peaceful vegetarian, but I have to admit—he’d stolen just a little bit of its badassery. This doesn’t mean Deinonychus stops being cool, though. In fact, it could teach us a lot about the early days of its modern relatives: the birds.

Fowler compared modern raptors with Deinonychus once more, and noticed how, when perching on struggling prey, raptors often beat their wings vigorously. This keeps the bird in a prime position on top of the prey, making sure its victim stays pressed to the ground. We’ve known for a while that many predatory dinosaurs like Deinonychus had feathers on their skin (perhaps the first chink to appear in their armour of terror). But scientists have long argued about how the particular lineage of feathery dinosaurs that evolved into birds first developed the “flight stroke”, the special high-powered downbeat of the wings that creates lift. Looking at Deinonychus inspired Fowler to come up with a new theory. If dinosaurs also stability-flapped their feathered arms when making a kill, then over the generations this could have selected for greater upper-body strength and the ability to beat the arms hard and fast, features that would later come in very useful when their descendants took to the air. Although Deinonychus was not a direct ancestor of birds (it appeared long after the first flying dinosaurs), it was closely related to them, so it’s likely that they shared similar behaviour. So by looking at how Deinonychus might have hunted, we can take steps towards unravelling one of the biggest, most controversial mysteries in all of Palaeobiology.

In future, then, perhaps we’ll look back on Deinonychus as triggering a second revolution in how we see the dinosaurs. If I told that to my 7-year-old self, I hope she’d have been consoled. Deinonychus… you might not be the psycho-killer of my imagination, but you’re still cool to me.

 Originally posted at http://notazookeeper.blogspot.co.uk/

Image credits:
Naked creepy Deinonychus: By Mistvan (own work) [GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

Fluffy Deinonychus: By Peng (own work, 6 July 2005) [GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

Why are we ‘Looking for Aliens’?

The idea that there might be alien life elsewhere in the universe has captured the imaginations of generations of scientists, writers, artists and… well, pretty much everyone! Science Brainwaves has a fantastic *free* lecture coming up on Friday 8th November, where Dr Simon Goodwin will describe how astronomers are looking for life on other planets, and what it might be like. So, without giving away too many spoilers, I thought it would be the perfect opportunity to find out what got our ancestors thinking about aliens, and what we might do if we find them…


A 17th Century illustration of the heliocentric system suggested by Copernicus (by Andreas Cellarius, from the Harmonia Macrocosmica, 1660)
Picture from Wikipedia

It’s difficult to say (or at least difficult for me to say, with my limited resources and time!) when people first started thinking about the possibility of life on other planets. However, it’s fair to say that big astronomical discoveries have probably captured people’s imaginations throughout the ages, in the same way that the moon landing got everyone talking about little green men. One such breakthrough is the ‘Heliocentric Revolution’. Heliocentrism is the model of the solar system with the sun, rather than the earth, at the centre, an idea that has been around since at least the 3rd century BC. However, it was Copernicus who revived the idea in the 16th Century, and it was expanded on by the works of Kepler (who calculated the orbits of the planets) and Galileo (who observed other planets by telescope). The spread of the idea that earth wasn’t the centre of the universe must have made our ancestors wonder what else could be out there. Earth was no longer special, just another planet orbiting the sun, so why shouldn’t there be other life-filled planets like ours?

Top left: The Voyager Golden Record
Bottom left: The Pioneer Plaque
Right: The Arecibo Message (decoded and coloured)
All pictures are from Wikipedia

So far we’ve obviously not had much luck in finding life, but it’d probably be prudent to think about what we’d do if we did find it, especially if it’s intelligent. Stephen Hawking has offered an opinion on the subject:

“If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans,”

“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

Not the most optimistic of outlooks, but he’s got a point. Several attempts to contact alien life have been made by astronomers, but have they given away too much information? In the early 1970s the ‘Pioneer plaques’ were sent out on the Pioneer 10 and 11 spacecraft, followed in 1974 by the broadcast of the ‘Arecibo Message’ (both pictured right). A slightly more artistic message was sent out in 1977 on the ‘Voyager Golden Record’, which contained information on the sights and sounds of earth. It’s quite romantic to think that if these messages reached intelligent alien life they might just pop in for a cuppa to say ‘hi’, but the consequences could be a lot worse if the aliens were hostile (and if you’ve got a flair for the melodramatic).
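
As a nerdy aside on how such messages are built to be decodable (a detail not covered above, but well documented): the Arecibo Message was exactly 1,679 bits long because 1,679 is the product of the two primes 23 and 73, so the only sensible way for a recipient to arrange the bits into a rectangle is a 23 x 73 grid, which is what reveals the picture. A quick sketch checking that claim:

```python
# The Arecibo Message is 1,679 bits because 1679 is semiprime (23 x 73):
# a recipient trying to lay the bits out as a rectangle has essentially
# one choice of grid, which reveals the image.

def prime_factors(n: int) -> list[int]:
    """Trial-division factorisation; fine for numbers this small."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(1679))  # [23, 73] -> a 23x73 (or 73x23) grid
```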

As a microbiologist I can’t help but be a little cynical about grand ideas of intelligent life. At the moment we’ll probably be lucky to find some basic single-celled life, which I’ve heard doesn’t tend to be all that talkative (but which, as a microbiologist, I would find much more exciting anyway!). Anyway, who am I to say what we may or may not find (with all the experience of a 3rd-year astrobiology module): come and hear it from the expert at Science Brainwaves’ free Looking for Aliens lecture!*

*did I mention it’s free? :P

Designer Babies: What’s It All About?

Throughout the social and scientific worlds, there is controversy surrounding the potential to genetically modify embryos to create ‘designer babies’: embryos that have been screened for genetic diseases and could, in principle, carry only desired qualities chosen by the parents. However, many stories in the media exaggerate and distort the facts, as can be seen in the term ‘designer babies’ itself. It is important to think about the likelihood and implications of this idea, and to outline what actually gave rise to the concept.

We could suggest that the idea of genetically engineered embryos originated in 1978 with the first in-vitro fertilisation (IVF) treatment. The procedure gave, and still gives, hundreds of infertile couples a chance to have a child by transferring an egg fertilised in a laboratory into the mother’s uterus. It subsequently led to a procedure known as preimplantation genetic diagnosis (PGD), a technique used to profile an embryo’s genome. PGD is a form of genetic profiling and embryo screening, and is a more technical and accurate way of thinking about ‘designer babies’. In terms of health benefits, PGD means embryos can be screened outside the womb, and embryos selected that carry only normal, healthy genes, free from genetic abnormalities. Whilst that is its current use, PGD could in future be used to select any desired trait of a child, such as eye colour, intelligence or athleticism; to select embryos without a genetic disorder; to increase successful pregnancies; to match a sibling in order to be a donor; or for sex selection: in short, to design your own baby. Selecting the sex of a child is already possible, because only the X or Y chromosome needs to be identified, but other traits are more difficult due to the amount of genetic material required. Recent breakthroughs mean that every chromosome in an embryo can be scanned for genes involved in anything from Down’s Syndrome to lactose intolerance using a single microchip. But how advanced is this, and what are the ethics behind it?

There is a large array of ethical, social and scientific concerns over the concept of creating a ‘perfect’ child. Some people worry that in the future there will be an imbalance between the sexes in the general population, especially in societies that favour boys over girls, such as China. A key issue is the element of eugenics in this idea: PGD would mean that people with ‘unattractive’ qualities become fewer, and society may come to discriminate against those who have not been treated. Taken to the extreme, we could end up with a race of ‘super-humans’ and a divide between those who have been treated and those who haven’t. This selection of genotypes also suggests a potential deleterious effect on the human gene pool, meaning less genetic variation. Whilst at first this may seem positive, because you could eliminate genetic disorders such as haemophilia A before they become prevalent, it is also likely that new diseases will evolve, and with a decreased gene pool and less capacity to adapt, we would be more susceptible to their dramatic effects. It is clear from this that regulations must be put in place and strictly enforced before any new advances are made.

So, how close are we to being able to ‘customise’ our children?
In terms of altering genes already present in the embryo, we are well on our way to refining this technology. Scientists have been altering animal genes for years, and germline gene therapy is already being used on animals. Germline gene therapy is now being developed in close connection with PGD, and it could soon be used to change human genetics. Our germline cells are our sex cells (egg and sperm), and this branch of gene therapy essentially involves manipulating these cells and adding new genes to them. The clear possibility, in combination with PGD, is that any trait could be added to an embryo to create a designer baby. This might mean adding a gene to stop a genetic disorder spotted by PGD from being expressed in the baby’s phenotype, but it could also mean that only certain people will be able to advance in society.

On the other hand, however, before these ‘more advanced’ humans can be created, we need to learn more about the genetic code. The basis of all genetic technologies lies in the human genome, and whilst PGD advances are ever-increasing, at present we can only use the technique to look at one or two genes at a time. We cannot use it to alter the genes in embryos, which logically leads us to gene therapy; but the current lack of technology, and the strict regulations on experimenting with germline gene therapy, make it unlikely that anyone will be able to create a completely designer baby in the near future.

Designing our babies is a reality that government bodies and various organisations are beginning to accept and address fully, and society’s view of the moral implications of PGD and gene therapy will be a key factor in determining how far the concept can advance; there will be ever more debate and controversy over the acceptable applications of gene technologies in humans and human embryos.

The Downfall of Antibiotics

Unless you have missed the countless headlines over the past few years about MRSA and hospital superbugs, you are probably aware that antibiotic resistance is a huge problem in healthcare at the moment. It brings with it visions of a post-apocalyptic world of widespread plague, death and destruction, and a strong desire to wash your hands the moment you enter a hospital. This raises the question: how real is the problem?

The bad news is that it is very real. Around the world, a plethora of diseases from cystitis to TB have shown levels of resistance to our current drug arsenal, even to those drugs we have kept back as a last resort. The even worse news is that, in terms of new drugs in the pipeline, the well is beginning to dry up. If infectious disease is a war, we are definitely losing.

Bacterial resistance is a problem which has become more widespread year on year. It is caused by a build-up of mutations in bacteria which stop antibiotics from working properly, so they no longer kill the germs or halt their growth. This is generally driven by the over-prescribing and misuse of antibiotics, such as GPs prescribing them for viral infections, or patients not finishing their prescribed course. It is also driven by the widespread use of antibiotics in agriculture to promote growth in cattle. All of this has created an environment where pathogens are constantly meeting and combating low levels of antibiotics, favouring resistant strains over susceptible ones.
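
To get a feel for how strong that favouring is, here is a toy simulation (my own illustration; every number in it is invented, and real population dynamics are far richer): if a low drug dose roughly cancels the growth of susceptible cells while a rare resistant mutant keeps even a modest growth edge, the mutant’s descendants dominate within a few hundred generations.

```python
# Toy model of selection under constant low-dose antibiotic exposure.
# All parameters are invented for illustration only.

susceptible = 1e9   # large initial susceptible population
resistant = 1.0     # a single resistant mutant cell

for _ in range(250):
    susceptible *= 1.00   # low dose roughly cancels susceptible growth
    resistant *= 1.10     # resistant cells keep a 10% per-generation edge

fraction = resistant / (resistant + susceptible)
print(f"Resistant fraction after 250 generations: {fraction:.1%}")  # ~95.7%
```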

Traditional antibiotics may no longer be a viable option, and this has sparked a search for alternatives to our current drugs which work in ways other than simply killing the bacteria. The good news is that there is a lot of promising research currently in different stages of development. One avenue showing potential is anti-adhesion therapy: drugs which prevent bacteria from gripping onto the cells of the body. If they cannot grab hold, they cannot overcome the strategies we have evolved to stop them colonising us, such as the mucus in our airways or the flushing-out of germs by urine in the urinary tract, and so they cannot colonise our bodies in high enough numbers to cause harm. Because these anti-adhesion drugs do not kill the bacteria, they put little selective pressure on them and so are less likely to induce resistance, meaning they have high potential for use in the dystopian future we all fear.

There are a large number of ways by which we can stop bacteria from attaching to us. These therapies use different means to the same end: altering the interaction between bacterium and patient so that bacteria no longer stick efficiently to cells. Listed below is just a small sample of the strategies under investigation.

Anti-adhesion strategies are not a new idea. Cranberry juice has long been used as a home remedy for urinary tract infections, and this has been shown to be effective in clinical research; the downside is that the result is less than consistent, and there is still a great amount of debate about how it actually works. A component of cranberries and related berries has proven itself as an inhibitor of bacterial attachment, and the high sugar (specifically fructose) content of cranberry juice can work to block the ‘arms’ of bacteria, known as fimbriae, which grab cells. For those who aren’t big fans of cranberry juice, there is more good news: this anti-infection effect is not limited to cranberries; in fact, new research has identified compounds from several other plants, including tea and red wine.

On a less appealing note for you personally, there is also evidence to suggest that breast milk may contain a cocktail of ingredients that can prevent bacterial attachment in the recipient. Breast milk has been shown to contain hundreds of proteins, sugars and antibodies, some of which may be effective anti-adherence compounds against a myriad of diseases. This makes evolutionary sense: providing infants with ‘anti-adherence milk’ gives them a regular protective coating of the digestive system at a time when the immune system is not yet completely up and running and children are at risk from so many infections.

An alternative strategy is the use of probiotics: non-harmful species of bacteria deployed to fight the harmful kind. In the battle to colonise our body, this strategy is akin to sending reinforcements to the good guys. Commonly used species include lactobacilli and bifidobacteria, which can be added to foods like yogurt. The medical use of probiotics is currently being trialled and is showing some success against a wide range of infections, from food poisoning to vaginosis to stomach ulcers.

The final questions we should ask are: are these therapies as effective as antibiotics? Are they just going to generate more complex resistant bugs? Would we be better off concentrating all our efforts on searching for new antibiotics? Unfortunately, we can’t yet answer these questions. What we do know is that we are entering a post-antibiotic era. The rule book has changed, and science may need to start playing catch-up.

The painful truth: Magnetic bracelets, the placebo effect & analgesia

Despite the widespread availability of evidence-based medicine in the western world, ‘alternative medicines’ are still commonly used. Such medicines are usually inspired by pre-scientific medical practices passed down through the generations. However, many established medical treatments also arise from traditional practices; for example, the use of aspirin as an analgesic (painkiller) has its roots in the use of willow bark for similar purposes throughout history. The difference between established medicines like aspirin and alternative medicines such as homeopathy is that the former have been found to be effective when exposed to rigorous scientific trials.

Can magnetic bracelets help relieve joint pain in conditions like arthritis?

A form of alternative medicine that has recently been subjected to scientific scrutiny is the use of magnetic bracelets as a method of analgesia. If effective, such therapies would provide cheap and easy-to-implement treatments for chronic pain such as that experienced in arthritis. Unfortunately, there is little evidence that such treatments are effective. A meta-analysis of randomised clinical trials looking at the use of magnet therapy to relieve pain found no statistically significant benefit to wearing magnetic bracelets (1). However, it can be argued that existing clinical trials may have been hampered by the difficulty of finding a suitable control condition.

The placebo effect

The ‘placebo effect’ is a broad term used to capture the influence that knowledge concerning an experimental manipulation might have on outcome measures. Consider a situation where you are trying to assess the effectiveness of a drug. To do this you might give the drug to a group of patients and compare their subsequent symptomatology to that of a control group of patients who do not get the drug. However, even if the drug group show an improvement in symptoms compared to the control group, you cannot be certain whether this improvement is due to the chemical effects of the drug, because the psychological effects of knowing you are receiving a treatment may produce a beneficial effect on reported symptoms which would be absent from the control group. The solution to this problem is to give the control group an intervention that resembles the experimental treatment (i.e. a sugar pill instead of the actual drug). This ensures that both groups are exposed to the same treatment procedure, and therefore should experience the same psychological effects. Indeed, this control treatment is often referred to as a ‘placebo’ because it is designed to control for the placebo effect. The drug must exhibit an effect over and above the placebo treatment in order to be considered beneficial.

A requirement for any study wishing to control for the placebo effect is that the participants must be ‘blind’ (i.e. unaware) as to which intervention (treatment or placebo) they are getting. If participants are aware that they are getting an ineffective placebo treatment, the positive psychological benefits of expecting an improvement in symptoms are likely to disappear, and thus the placebo won’t genuinely control for the psychological effects of receiving an intervention.

A placebo for magnetic bracelets

The obvious placebo for a magnetic bracelet is an otherwise identical non-magnetic bracelet. The problem with using non-magnetic bracelets as a control, however, is that it is easy for participants to identify which intervention they are getting, since it is easy to distinguish magnetic from non-magnetic materials. This can be illustrated by considering a clinical trial which appeared to show that magnetic bracelets produce a significant pain-relief effect (2). In this study participants wore either a standard magnetic bracelet, a much weaker magnetic bracelet or a non-magnetic (steel) bracelet. The standard magnetic bracelet was only found to reduce pain when compared to the non-magnetic bracelet. However, the researchers also found evidence that participants wearing the non-magnetic bracelet became aware that it was non-magnetic, and therefore could infer that they were in a control condition. This suggests that the difference between conditions might be due to a placebo effect, as the participants weren’t blind to the experimental manipulation.

This failure of blinding was not present for the other control condition (weak magnetic bracelet) presumably because these bracelets were somewhat magnetic. As no statistically significant difference was found between the standard and weak magnetic bracelets it could therefore be concluded that the magnetic bracelets have no analgesic effect. However it could also be argued that if magnetism does reduce pain, the weaker bracelet may have provided a small beneficial effect which might have served to ‘cancel out’ the effect of the standard magnetic bracelet. The study could therefore be considered inconclusive as neither of the control conditions were capable of isolating the effect of magnetism.

More recent research

Recent clinical trials conducted by researchers at the University of York have tried to solve the issue of finding a suitable control condition for magnetic bracelets. Stewart Richmond and colleagues (3) included a condition where participants wore copper bracelets, in addition to the three conditions used in previous research, while researching the effect of such bracelets on the symptoms of osteoarthritis. As copper is non-magnetic, it can act as a control in testing the hypothesis that magnetic metals relieve pain. However, as copper is also a traditional treatment for pain, it does not have the drawback of the non-magnetic bracelet regarding the expectation of success: participants are likely to have the same expectation of a copper bracelet working as they would for a magnetic bracelet.

The study found no significant difference between any of the bracelets on most of the measures of pain, stiffness and physical function. The standard magnetic bracelet did perform better than the various controls on one sub-scale of one of the three measures of pain taken, but this isolated positive effect was considered likely to be spurious because of the sheer number of comparisons relating to changes in pain that were performed during the study (see 4). The same group has recently published an almost identical study on the pain reported by individuals suffering from rheumatoid arthritis rather than osteoarthritis (5). Using measures of pain, physical function and inflammation, they again found no significant differences in effect between the four different bracelet types.
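
To see why a single positive sub-scale is treated as suspicious, here is a minimal simulation (my own sketch; the choice of 15 tests is an arbitrary example, not the number of comparisons in the trial) of how the chance of at least one spurious ‘significant’ result grows with the number of comparisons:

```python
# Under the null hypothesis a p-value is uniform on [0, 1], so each test
# has probability alpha of coming out 'significant' by chance alone.
# Analytically, P(at least one hit in k tests) = 1 - (1 - alpha)^k.

import random

def family_wise_error(n_tests: int, alpha: float = 0.05,
                      n_sims: int = 10_000) -> float:
    """Simulated probability that at least one null test passes alpha."""
    hits = sum(
        any(random.random() < alpha for _ in range(n_tests))
        for _ in range(n_sims)
    )
    return hits / n_sims

print(family_wise_error(1))   # ~0.05
print(family_wise_error(15))  # ~0.54: a spurious 'effect' is more likely than not
```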

No effect?

The existing research literature seems to suggest that magnetic bracelets have no analgesic effect over and above a placebo effect. The use of a copper bracelet overcomes some of the problems of finding a suitable control condition to compare magnetic bracelets against. One argument against using copper bracelets as a control is that, as they themselves are sometimes considered an ‘alternative’ treatment for pain, they may also have an analgesic effect; such an effect could potentially cancel out any analgesic effect of the magnetic bracelets when statistical comparisons are performed. However, copper bracelets did not perform any better than the non-magnetic steel bracelets in either study (3, 5), despite the potential additional placebo effect that might apply in the copper bracelet condition. Indeed, on many of the measures of pain the copper bracelet actually performed worse than the non-magnetic bracelet. The copper bracelet can therefore be considered a reasonable placebo for research testing the analgesic effect of magnetic bracelets.

Despite the negative results of clinical trials, it may be wise not to entirely rule out a potential analgesic effect of magnetic bracelets. Across all three studies (2, 3, 5) the measures of pain were generally lowest in the standard magnetic bracelet group. Indeed, significant effects were found in two of the studies (2, 3), although these were confounded by the aforementioned problems concerning control conditions and multiple comparisons. Nevertheless, it could be argued that, given the existing data, magnetic bracelets may have a small positive effect, but that this effect is not large or consistent enough to produce a statistically significant difference in clinical trials. This theory could be tested by conducting trials with far more patients (and thus greater statistical power), or by using bracelets of differing magnetic strengths to see if any reported analgesic effect increases with the strength of the magnetic field. Until such research is performed, it is best to assume that magnetic bracelets do not have any clinically relevant analgesic effect.
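
For a sense of what ‘far more patients’ might mean, here is a rough sample-size sketch using the standard normal-approximation formula for a two-arm comparison (the standardised effect size d = 0.2 is my own assumption of a ‘small’ effect, not a figure taken from the trials):

```python
# Approximate sample size per arm to detect a standardised effect d with
# a two-sided test at significance alpha and the desired power, using
# n ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.

from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough participants needed per arm in a two-group trial."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return round(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.2))  # ~392 per arm to reliably detect a small effect
```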

Image courtesy of FreeDigitalPhotos.net

References

(1) Pittler MH, Brown EM, Ernst E (2007) Static magnets for reducing pain: systematic review and meta-analysis of randomized trials. CMAJ 177(7):736-42.

(2) Harlow T, Greaves C, White A, Brown L, Hart A, Ernst E (2004) Randomised controlled trial of magnetic bracelets for relieving pain in osteoarthritis of the hip and knee. BMJ 329(7480):1450-4.

(3) Richmond SJ, Brown SR, Campion PD, Porter AJL, Klaber Moffett JA, et al. (2009) Therapeutic effects of magnetic and copper bracelets in osteoarthritis: a randomised placebo-controlled crossover trial. Complement Ther Med 17(5-6):249-56.

(4) https://en.wikipedia.org/wiki/Problem_of_multiple_comparisons

(5) Richmond SJ, Gunadasa S, Bland M, MacPherson H (2013) Copper bracelets and magnetic wrist straps for rheumatoid arthritis: analgesic and anti-inflammatory effects. A randomised double-blind placebo-controlled crossover trial. PLoS ONE 8(9).

Biotech for all – taking science back to its roots?

This morning I came across a very interesting TED talk by Ellen Jorgensen entitled “Biohacking — you can do it, too” (http://on.ted.com/gaqM). The basic premise is to make biotech accessible to all by setting up community labs, where anyone can learn to genetically engineer an organism or sequence a genome. This might seem like a very risky venture from an ethical point of view, but she actually makes a good argument that the project is at least as ethically sound as your average lab, with the worldwide community of ‘biohackers’ having agreed not only to abide by all local laws and regulations, but also to draw up its own code of ethics.

So what potential does this movement have as a whole? One thing it’s unlikely to lead to is bioterrorism, an idea that the media like to imply when they report on the project. The biohacker labs don’t have access to pathogens, and it’s very difficult to make a harmless microbe into a malicious one without access to at least the protein-coding DNA of a pathogen. Unfortunately, the example she gives of what biohacking *has* done is rather frivolous: a story of how a German man identified the dog that had been fouling in his street by DNA testing. However, she does give other examples of how the labs could be used, from discovering your ancestry to creating a yeast biosensor. This rings of another biotech project called iGEM (igem.org), where teams of undergraduate students work over the summer to create some sort of functional biotech (sensors are a popular option) from a list of ‘biological parts’.


The Cambridge 2010 iGEM team made a range of colours of bioluminescent (glowing!) E. coli as part of their project.

My view is that Jorgensen’s biohacker project might actually have the potential to do great things. Present-day professional scientists do important work, but are often limited by bureaucracy and funding issues, making it very difficult to do science for the sake of science. Every grant proposal has to have a clear benefit for humanity, or, in the private sector, for the company’s wallet, which isn’t really how science works. The scientists of times gone by were often rich, curious people who made discoveries by tinkering and questioning the world around them, and even if they did have a particular aim in mind they weren’t constricted to it by the agendas of companies and funding bodies. Biohacking seems to bring the best of both worlds: a space with safety regulations and a moral code that allows anyone to do science for whatever off-the-wall or seemingly inconsequential project takes their fancy, taking science back to the age of freedom and curiosity.