Thursday, April 11, 2013

Reliability of neuroscience research questioned

Public release date: 10-Apr-2013

Contact: Philippa Walker
philippa.walker@bristol.ac.uk
+44 (0)117 928 7777
University of Bristol

New research has questioned the reliability of neuroscience studies, saying that conclusions could be misleading due to small sample sizes.

A team led by academics from the University of Bristol reviewed 48 neuroscience meta-analyses published in 2011 and concluded that the studies they summarised had an average statistical power of around 20 per cent, a finding which means the chance of the average study detecting the effect it was investigating is only one in five.
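
To make the one-in-five figure concrete, here is a minimal simulation sketch; the effect size and sample size are assumptions chosen for illustration, not values from the paper. With a modest true effect and a small group size, only about 20 per cent of simulated experiments reach statistical significance.

```python
# Minimal sketch (illustrative parameters, not from the paper): estimate the
# power of a small two-group study by simulating many experiments and counting
# how often a t-test reaches p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
effect_size = 0.5       # assumed true difference, in standard-deviation units
n_per_group = 12        # assumed (small) sample size per group
n_experiments = 10_000  # number of simulated experiments

significant = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect_size, 1.0, n_per_group)
    _, p_value = ttest_ind(treated, control)
    if p_value < 0.05:
        significant += 1

print(f"Estimated power: {significant / n_experiments:.2f}")  # roughly 0.2
```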

The paper, being published in Nature Reviews Neuroscience today [10 April], reveals that small, low-powered studies are 'endemic' in neuroscience, producing unreliable research which is inefficient and wasteful.

It focuses on how low statistical power, caused by small sample sizes, small effects under investigation, or both, can be misleading and produce a higher proportion of false scientific claims than high-powered studies.

It also illustrates how low power reduces a study's ability to detect any effects, and shows that when discoveries are claimed, they are more likely to be false or misleading.
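
The logic behind that claim can be sketched with the standard positive predictive value (PPV) argument the paper draws on: the probability that a significant result reflects a real effect depends on power, the significance threshold, and the prior odds that the tested effect exists. The prior-odds value in the sketch below is purely illustrative.

```python
# Sketch of the PPV argument: PPV = (power * R) / (power * R + alpha), where
# R is the assumed prior odds that a tested effect is real. R = 0.25 below is
# an illustrative assumption, not a figure from the paper.
def positive_predictive_value(power: float, alpha: float = 0.05,
                              prior_odds: float = 0.25) -> float:
    """Probability that a statistically significant finding reflects a real effect."""
    return (power * prior_odds) / (power * prior_odds + alpha)

for power in (0.8, 0.5, 0.2):
    print(f"power = {power:.1f} -> PPV = {positive_predictive_value(power):.2f}")

# With these illustrative numbers, dropping power from 0.8 to 0.2 cuts the
# chance that a claimed discovery is real from 0.80 to 0.50.
```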

The paper claims there is substantial evidence that a large proportion of research published in scientific literature may be unreliable as a consequence.

Another consequence is that effect sizes are overestimated, because smaller studies consistently give more positive results than larger studies. This was found to be the case for studies using a diverse range of methods, including brain imaging, genetics and animal studies.
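
The overestimation point can also be illustrated by simulation, again with assumed, illustrative parameters: if only statistically significant results are reported, the estimates that survive from small samples are inflated well above the true effect, a phenomenon sometimes called the winner's curse.

```python
# Sketch (assumed parameters, not from the paper): among experiments that
# reach significance, small samples yield inflated effect estimates.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
true_effect = 0.5  # assumed true group difference, in standard-deviation units

def mean_significant_estimate(n_per_group: int, n_experiments: int = 20_000) -> float:
    """Average estimated effect among simulated experiments with p < 0.05."""
    estimates = []
    for _ in range(n_experiments):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = ttest_ind(treated, control)
        if p_value < 0.05:
            estimates.append(treated.mean() - control.mean())
    return float(np.mean(estimates))

for n in (12, 100):
    print(f"n = {n:3d} per group: mean significant estimate = "
          f"{mean_significant_estimate(n):.2f} (true effect = {true_effect})")

# With 12 per group, the estimates that survive significance testing come out
# far above 0.5; with 100 per group the overestimation largely disappears.
```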

Kate Button, from the School of Social and Community Medicine, and Marcus Munafo, from the School of Experimental Psychology, led a team of researchers from Stanford University, the University of Virginia and the University of Oxford.

Button said: "There's a lot of interest at the moment in improving the reliability of science. We looked at neuroscience literature and found that, on average, studies had only around a 20 per cent chance of detecting the effects they were investigating, even if the effects are real. This has two important implications - many studies lack the ability to give definitive answers to the questions they are testing, and many claimed findings are likely to be incorrect or unreliable."

The study concludes that improving the standard of results in neuroscience, and enabling them to be more easily reproduced, is a key priority and requires attention to well-established methodological principles.

It suggests that existing scientific practices can be improved with small changes or additions to methodologies, such as acknowledging any limitations in the interpretation of results; disclosing methods and findings transparently; and working collaboratively to increase the total sample size, and therefore the power, of studies.
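
On the sample-size point, a quick back-of-the-envelope calculation (using the statsmodels library and an assumed medium effect size, not figures from the paper) shows why pooling data across groups matters: roughly five times as many participants per group are needed to move from 20 per cent power to the conventional 80 per cent.

```python
# Sketch with illustrative values: power of a small two-group study, and the
# per-group sample size needed for 80 per cent power at the same effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power with 12 participants per group and an assumed effect size of 0.5.
print(analysis.power(effect_size=0.5, nobs1=12, alpha=0.05))         # about 0.2

# Per-group sample size needed to reach 80 per cent power.
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8))  # about 64
```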

###

Paper

'Power failure: why small sample size undermines the reliability of neuroscience' by Katherine Button, John Ioannidis, Claire Mokrysz, Brian Nosek, Jonathan Flint, Emma Robinson and Marcus Munafo in Nature Reviews Neuroscience.

For further information, please contact Philippa Walker in the University of Bristol's Press Office on 0117 928 7777 or philippa.walker@bristol.ac.uk


Source: http://www.eurekalert.org/pub_releases/2013-04/uob-ron040913.php

