In an article for the Guardian, Kate Button, a research psychologist at the University of Bristol, argues that the small size of neuroscience studies undermines the reliability of the conclusions drawn from them.
Button analysed 48 published papers that included data from 49 meta-analyses and 730 primary studies, and gave the research a statistical ‘power’ rating.
She describes statistical power as:
Statistical power is the ability of a study to detect an effect (eg higher rates of cancer in smokers) given that an effect actually exists (smoking actually is associated with increased risk of cancer). Power is related to the size of the study sample (the number of smokers and non-smokers we test) and the size of the real effect (the magnitude of the increased risk associated with smoking). Larger studies have more power and can detect smaller, more subtle effects. Small studies have lower power and can only detect larger effects reliably.
The research showed that the statistical power of neuroscience research is around 20% – in other words, a typical study has only about a one-in-five chance of detecting a genuine effect. Much of the research is therefore less reliable because of small sample sizes. Writing in Nature, Button says:
Small sample sizes are appropriate if the true effects being estimated are genuinely large enough to be reliably observed in such samples. However, as small studies are particularly susceptible to inflated effect size estimates and publication bias, it is difficult to be confident in the evidence for a large effect if small studies are the sole source of that evidence. Moreover, many meta-analyses show small-study effects on asymmetry tests (that is, smaller studies have larger effect sizes than larger ones) but nevertheless use random-effect calculations, and this is known to inflate the estimate of summary effects (and thus also the power estimates). Therefore, our power calculations are likely to be extremely optimistic.
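To make the relationship between sample size and power concrete, here is a minimal Monte Carlo sketch. It is purely illustrative (not Button’s method): it repeatedly draws two samples, one shifted by a chosen effect size, runs an approximate z-test, and counts how often the true effect is detected.

```python
import random
import statistics

def estimated_power(n_per_group, effect_size, z_crit=1.96, trials=2000, seed=1):
    """Monte Carlo estimate of the power of a two-sample comparison.

    Draws two normal samples (group B shifted by `effect_size` standard
    deviations), applies an approximate two-sided z-test at alpha = 0.05,
    and returns the fraction of trials in which the real effect is detected.
    Illustrative sketch only, under simplified assumptions.
    """
    rng = random.Random(seed)
    detections = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # Standard error of the difference in means (equal group sizes)
        se = ((statistics.variance(a) + statistics.variance(b)) / n_per_group) ** 0.5
        z = abs((statistics.mean(b) - statistics.mean(a)) / se)
        if z > z_crit:
            detections += 1
    return detections / trials

# For a 'medium' effect (d = 0.5), power climbs with sample size:
# small groups usually miss the effect, larger groups usually find it.
for n in (15, 30, 60, 120):
    print(n, round(estimated_power(n, 0.5), 2))
```

Running this shows why small studies are a problem: with only a handful of participants per group, a genuine medium-sized effect is missed most of the time, which is exactly the low-power regime Button describes.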
I’m no statistician or neuroscientist, but the L&D profession is currently embracing neuroscience with open arms. Talk to a neuroscientist about their research and they will likely share similar caveats to Button’s. That has not stopped the growth of a ‘neuro’ industry, however.
It would seem incumbent on L&D professionals to really understand where the research has come from – the scale of it, and what might usefully be applied. I’d recommend reading these two articles by Button, and being aware of the biases that come with scientific research, e.g. positive findings are more likely to be published than negative ones.
In his session at the Learning Technologies conference, cognitive neuroscientist Dr Christian Jarrett said that if you are having trouble knowing what to believe, try taking the word ‘brain’ out of the claim. If it reads just as well without it, the claim probably doesn’t stand up.
Also, look for meta-analyses, and ask whether the person has an agenda, e.g. do they want to sell you something?
He suggested following these sites to stay up to date with research: