The feministphilosophers blog takes on the recent claim, from a study published in Science (Reproducibility in Psychology, Nosek et al., 2015), that some research in psychology is not replicable (i.e. the same findings are not found over repeated, similar investigations) and is, therefore, invalid. At least, that's the way it's being reported in articles such as this one on Slate, which states:

If it feels like you can find a study to back up any harebrained idea these days, you actually might be right, a new study says. On Thursday we, as humans, arrived at the zenith of irony when a study of studies was published and found—you guessed it—many studies were totally full of it, and overstated their findings. To be fair, the Reproducibility Project investigation, led by a University of Virginia psychologist did not set out to debunk every study in every discipline and only dealt with psychological-based studies published in three leading journals. The critique focused on social science research, so this does not necessarily disprove either of the studies you just read saying seven cups of coffee a day will, and will not, make you live longer.

It's a bit harsh, I know.

There are two related points to be made here. For me, the Science study's claim that many psychology research studies are not as valid as first reported is worrying for the research community. Well, no; it's rather the damning coverage from Slate and the like that is most damaging, because most readers will not (and why should they, I suppose) wade past the sensationalist headline and claims. Nor will most people stop to consider what we need or want research to do.

First, feministphilosophers do a good job of unpicking what the Science study really said (see the bullet points in particular), but it's rather more useful to consider what we think is useful to know from research and, indeed, what we think we can find out from research. Many psychology and social science studies are large-scale, aggregate, quantitative studies which aim to examine trends or relations between social variables (e.g. age and voting patterns). The idea is that the findings, if they are to be applicable to larger populations and useful, should be replicable across different samples in different contexts within the same population. The Science study suggests that many are not and are, therefore, not as useful as claimed. We've long known that such studies can be over-confident in their claims because the strength of their findings rests on statistical significance, which is a moveable object if ever there was one. They're also, however, among the first studies to be cited in mainstream media because their findings are accessible (e.g. older people are more likely to vote Tory*) and clear-cut. As research goes, then, these sorts of studies can be well-known and influential.

I'm not a fan of large-scale quantitative studies and find them limited in their interest and application. They have their uses, of course: we do need to know about large-scale changing social trends and what they mean for us, and they can serve as context for more in-depth qualitative research. But it is qualitative research (standalone or as part of a mixed-methods design) that is much more interesting and informative. It is through in-depth qualitative work that we can examine individual and group experiences, as well as social structures, processes and change, and that is how we learn much more about the social world. Quantitative research may have been found (somewhat) wanting by Nosek et al., but that's relevant only if we place all of our eggs in that basket.

Second, and perhaps more importantly, feministphilosophers make an excellent point that the Science article is an example of the research community doing exactly what it should be doing: checking and rechecking its findings. No research study of any kind is ever definitive in its findings (and if it claims to be, you should be very sceptical); it is only ever a contribution to broader knowledge about social issues. It is the job (nay, obligation) of the research community to constantly challenge, improve, critique and learn. A more positive reading of the Science article, then, is that research is doing just that:

After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. Humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation. The original studies examined here offered tentative evidence; the replications we conducted offered additional, confirmatory evidence. In some cases, the replications increase confidence in the reliability of the original results; in other cases, the replications suggest that more investigation is needed to establish the validity of the original findings. Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.

* Not an actual finding from any research I've ever read; it's just an example.