Thursday, 15 October 2015

UK animal experiments lack reporting standards

Macleod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, et al. (2015) Risk of Bias in Reports of In Vivo Research: A Focus for Improvement. PLoS Biol 13(10): e1002273. doi:10.1371/journal.pbio.1002273 The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.



There are many reasons for the failure of animal experiments to translate into human benefit, and it is clear that many EAE studies lack sufficient quality control, and that the positive data are biased and unreproducible. I know that you know this, but it seems the authors of the papers just don't get it. EAE falls below the average, so you can do better.

EAEers just can't be arsed, or are too arrogant to change and adapt, and they serve to support the view held by some that animal experiments are a waste of time. The lack of perceived quality panders to that view, so come on you EAEers.....pull your socks up....because sooner or later this view will limit your grants...time is ticking. Change, and maybe you can slow the rate at which this happens.

It is still clear that animal studies fail to report on perceived measures of quality in experimental design and rigour. "Of over 1,000 publications from leading UK institutions, over two-thirds did not report even one of four items considered critical to reducing the risk of bias, and only one publication reported all four measures". So, as we are a leading institution, that paper must have been mine, as we address these issues.

People are lazy and the carrot does not work...the only way to get change is with the stick. But if you make a stick, you have to be willing to use it; otherwise you are toothless.

The journals can be that stick: if they enforce certain reporting standards, people will change. Charities and funders need to put their money where their mouths are and enforce their conditions....pull a bit of cash and things rapidly change.

In the UK, the MS Society, the Wellcome Trust and the Medical Research Council have made use of the ARRIVE guidelines a condition of grant. Whilst the ARRIVE guidelines in their current form are rather idealistic, many aspects are reasonable....yet they are ignored.

However, compliance is easy to monitor, so I sense a student project coming...Project name and shame :-).

2 comments:

  1. The responsibility can't lie solely with the journals, as the task of changing research culture is too great. Simply policing its output at the point of publication is like mopping up the leak from a tap when you should be fixing the plumbing! ARRIVE has been 'adopted' by many journals, but has not yet been fully implemented or enforced.

    All stakeholders must take responsibility for this cultural change.

    Funders must enforce the requirement, ask for evidence of its implementation, and demand that training/re-training is provided as part of the conditions of award.

    Research institutions must also make it a requirement, and build it into staff/student training programmes.

    Patient advocacy groups could also make a bit more noise about the state of research that is supposedly done in their name.

    Science appears to have lost sight of the prize whilst chasing its tail in the grant-publication-grant-publication-grant cycle.

    Outing, naming & shaming might help in the short term, though...

