Wednesday, 11 June 2014

Yet more evidence for poor quality (science) reporting in animal studies

Hirst JA, Howick J, Aronson JK, Roberts N, Perera R, Koshiaris C, Heneghan C. The Need for Randomization in Animal Trials: An Overview of Systematic Reviews. PLoS One. 2014;9(6):e98856.

BACKGROUND AND OBJECTIVES: Randomization, allocation concealment, and blind outcome assessment have been shown to reduce bias in human studies. Authors from the Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies (CAMARADES) collaboration recently found that these features protect against bias in animal stroke studies. We extended the scope of the work from CAMARADES to include investigations of treatments for any condition.
METHODS: We conducted an overview of systematic reviews. We searched Medline and Embase for systematic reviews of animal studies testing any intervention (against any control) and we included any disease area and outcome. We included reviews comparing randomized versus not randomized (but otherwise controlled), concealed versus unconcealed treatment allocation, or blinded versus unblinded outcome assessment.
RESULTS: Thirty-one systematic reviews met our inclusion criteria: 20 investigated treatments for experimental stroke, 4 reviews investigated treatments for spinal cord diseases, while 1 review each investigated treatments for bone cancer, intracerebral hemorrhage, glioma, multiple sclerosis, Parkinson's disease, and treatments used in emergency medicine. In our sample 29% of studies reported randomization, 15% of studies reported allocation concealment, and 35% of studies reported blinded outcome assessment. We pooled the results in a meta-analysis, and in our primary analysis found that failure to randomize significantly increased effect sizes, whereas allocation concealment and blinding did not. In our secondary analyses we found that randomization, allocation concealment, and blinding reduced effect sizes, especially where outcomes were subjective.
CONCLUSIONS: Our study demonstrates the need for randomization, allocation concealment, and blind outcome assessment in animal research across a wide range of outcomes and disease areas. Since human studies are often justified based on results from animal studies, our results suggest that unduly biased animal studies should not be allowed to constitute part of the rationale for human trials.

If animal studies are not run like clinical trials in humans, they tend to overestimate the treatment effect, whether the work is MS-related or not. Randomisation means randomly allocating people or animals to treatments in a trial. Humans are genetically different, of different ages and sexes, etc., and live and eat differently from one another, and randomisation takes account of this.
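To make the idea concrete, here is a minimal sketch of simple randomisation in Python. The animal IDs, group labels and cohort size are invented for illustration; nothing here comes from the paper itself.

```python
import random

# Hypothetical cohort: 20 animals split evenly across two arms.
animal_ids = [f"rat_{i:02d}" for i in range(1, 21)]

rng = random.Random(42)   # fixed seed so the allocation can be audited later
shuffled = animal_ids[:]
rng.shuffle(shuffled)     # unbiased random permutation of the cohort

# First half of the shuffled list to treatment, second half to control,
# giving equal group sizes without anyone hand-picking animals.
half = len(shuffled) // 2
allocation = {aid: ("treatment" if i < half else "control")
              for i, aid in enumerate(shuffled)}

for aid in animal_ids:
    print(aid, "->", allocation[aid])
```

The point is that the allocation is decided by the random number generator, not by whoever happens to reach into the cage first.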

In animal studies this appears to make a difference, which is interesting, as the animals are often genetically identical, come from the same place, eat the same food and live in the same space, with cages that may be only a few centimetres apart. Do people subliminally pick on the animal that is different, or do they load studies with ropey-looking animals on their way to disease whilst leaving the healthy-looking ones for the drug?

Animal studies clearly fall short on quality of reporting, and the experimental design may be poor too. Before a company ploughs a few million into developing a drug, it tends to repeat experiments independently, so the dodgy stuff that litters the literature may fall by the wayside. Unfortunately, the media may get hold of poor-quality work, and then it's the next false hope.

Researchers do need to up their game

P.S. If researchers undertook all their studies like clinical trials... many papers would be as dull as dishwater.

P.P.S. People also need to realise that not all animal studies are about finding treatments that are useful for humans, and not all animal studies are about underpinning clinical trials. Many are about understanding a mechanism, from which treatments may or may not come. However, good experimental design makes it more likely that the mechanism identified is robust.

5 comments:

  1. Here are a few other ways lack of randomization and lack of blinding can affect the outcome of animal studies. Some of this comes from personal experience, from back in the Paleolithic, when I was in graduate school doing neuroanatomy in Carl Cotman's lab and hippocampal slice physiology in Gary Lynch's lab.

    --Let's say you have a bunch of rats arriving in the animal facility in a single bin. Which one are you going to pick up first? The one easiest to catch, who may be unusually docile or stupid. Which one are you going to pick up last? The one that was wiliest, fastest, or most fearful. But, you say, when I go to the animal facility the rats are in individual cages, and I just start with the one in the upper left and work my way down. Well someone put that first rat in that upper left cage, and I guarantee that a random number generator wasn't used.

    --You run an experiment with a rat, and something goes wrong. The slices don't survive, or the stain looks funny or something. Of course data from that animal isn’t included in the study. I don't think I've ever seen an "intent to treat" analysis in an animal study.

    --It can be too much of a hassle to completely randomize experimental treatments. So one day all the rats get treatment A, and the next they get treatment B. The rats getting treatment B are a day older, and the weather is different (not so much in southern California, but just possibly in the UK), and the person running the experiment has a stomach upset or has just had an argument with his/her significant other, etc. All of those things can introduce subtle biases (see the blocked-randomization sketch after this list).

    --And then, of course, there's malfeasance, such as cherry picking data that confirms a hypothesis. Or maybe you wrote the abstract to meet the conference deadline months before the experiments were completed, and then consciously or unconsciously made the data fit the abstract, and not the other way around.
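    Picking up on the third point above, here is a minimal sketch of one common fix, blocked (within-day) randomization, in which every day's session contains both treatments in a random order so that day-level confounds hit both arms. The schedule, group sizes and labels are hypothetical, not taken from the comment.

```python
import random

rng = random.Random(7)
# Hypothetical schedule: 4 animals run per day over 5 days, 2 per treatment,
# so age, weather and experimenter mood are balanced across A and B.
daily_block = ["A", "A", "B", "B"]

for day in range(1, 6):
    order = daily_block[:]
    rng.shuffle(order)    # randomize the run order within each day
    print(f"day {day}: run order = {order}")
```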

    Replies
    1. There is a group of people looking at the logistics of multi-centre animal studies and I think as part of the approach they will generate an online randomiser.

      If, as you say, you pick up the docile ones first and the racers last, you have a cage of racers tearing chunks out of each other as they establish the pecking order, so you have the bully and the stressed ones that don't get disease.

  2. Randomisation etc. will only work, however, if the original experiment/trial is designed correctly in the first place. In vitamin D RCTs you see the assumption that they are working with a drug whose supply they completely control. They then assume that the relationship between 25(OH)D and the drug is simple, and are surprised/happy when they see no effect/a large effect. The problem is that the distributions still overlap and they have not moved one group a significant amount away from the other. What they are seeing is random noise between the distributions, or the effect of people changing their behaviour towards the sun, etc. It just creates noise in the literature.
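     To illustrate the overlap point, here is a toy simulation with invented numbers, in which supplementation shifts mean 25(OH)D by far less than the within-group spread; the two arms' distributions then largely overlap and the measured difference is mostly sampling noise.

```python
import random
import statistics

rng = random.Random(0)
n = 50
# Invented numbers: the supplement shifts the mean by 5 units, but the
# within-group spread is 20 units, so the two distributions overlap heavily.
placebo = [rng.gauss(50, 20) for _ in range(n)]
treated = [rng.gauss(55, 20) for _ in range(n)]

print("placebo mean:", round(statistics.mean(placebo), 1))
print("treated mean:", round(statistics.mean(treated), 1))
print("pooled sd:   ", round(statistics.stdev(placebo + treated), 1))
# With this much overlap, any single trial's difference in means says more
# about sampling noise (and behaviour, e.g. sun exposure) than the supplement.
```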

    Replies
    1. Yes, but we are talking about animals here, and they are kept in light- and humidity-controlled environments.

    2. My point was that randomisation does not fix badly designed research, and there is more trouble caused by bad design than by lack of randomisation. I used vitamin D research because it is a place where RCTs have been used, but the design of the RCTs shows obvious flaws.

