Been thinking a fair bit about publication bias in science recently. In brief, it is well known that the majority of published studies report positive effects, with few if any reporting null effects (failure to reject the null hypothesis). This has been dubbed the “file drawer problem”, referring to the large amount of unpublished data (most likely showing null effects) sitting in the file drawers of research labs across the world. This is an important problem, because it is a barrier both to accurately estimating the presence/strength of particular effects (e.g., being sleepy makes you more creative, people walk slower when thinking about elderly people, etc.) and to synthesizing the literature on an effect to integrate it into theories of what might be going on.
In the field of design-by-analogy research, for example, it is widely assumed that long-distance analogies (i.e., mappings from a superficially, and potentially structurally, quite different knowledge structure) hold the highest potential for inspiring creativity. This assumption follows from theoretical accounts of both analogy and creativity, and is widespread in industry, as evidenced by exercises like “forced associations”, which involve forcing mappings between random pairings of objects. There are studies that have shown direct evidence for this: Dahl and Moreau (2002) examined the effect of distance of analogy on novelty of designs in the domain of product design, Wilson et al. (2010) examined the benefits of “surface dissimilar examples” in biologically inspired design, and a study I ran in my first-year project looked at the effects of distance of analogy in an engineering design ideation task (Chan et al., 2011). So, there are positive demonstrations of an effect – but I have yet to see a meta-analysis in this area, and I have been running into some evidence suggesting that distance of analogy per se is not really where the action is in terms of inspiring creativity. First, in my Master’s thesis (which will hopefully be published soon!), I zoomed in on designers actually analogizing while brainstorming, and found that distant analogies tended to lead to ideas that were more similar to ideas already in hand, going against the notion that distant analogies (at least directly) help you think outside the box and arrive at really creative ideas. This, coupled with the observation that most studies that have shown effects of distant analogies (including my first-year project) have used an input-output methodology (give people analogies, see what they produce, without seeing what’s going on in between), makes me wonder whether the distant analogies are really doing all the work, or whether other things are going on, e.g., chains of analogies of mixed distance.
This idea connects with things I’ve been reading re: knowledge diversity in collaborative creativity, and makes me wonder if distant analogies by themselves are really the secret sauce, or if it’s more about the content of the particular analogies and how they combine with other ideas that may or may not be terribly different from what I’m currently thinking about.
Anyway, that’s a long rant, but an example nonetheless. The long and short of it is that I feel like there’s not a whole lot of cumulative progress in this area, and I worry that attempts at cumulative progress will be inadequate due to the file drawer problem. This is why I am very pleased and excited to see that progress is currently being made in addressing it. There’s been an explosion of interest in this topic in the past couple of years. There was the controversial “voodoo” cog neuro paper a couple years ago, and this very fresh paper documenting publication bias across a wide range of disciplines and countries. There was also a very recent special issue of Perspectives on Psychological Science that addressed this issue as manifested in psychological science.
There’s also a bunch of outlets that I’ve only recently become aware of. PLoS publishes null results as long as the study is methodologically strong. There are outlets like figshare and psychfiledrawer that allow researchers to share null results and replications. There’s even a journal devoted to publishing null results in psychology, though I am skeptical it will have an impact without some large-scale systemic changes in how the field values contributions and advances careers.
Ultimately, I think the solution needs to be at the social/cultural level: the real issue is the cultural pressure to run novel studies and publish positive effects (or else see your career go into the tank come tenure time). This is why the most encouraging signs of progress, to me, are on the front of open science and altmetrics, where the push is basically to reform the current career model of science (“publish [in top journals] or perish”) and reward broader contributions to science (e.g., replications, collaborations with others that don’t necessarily result in “positive effect” papers).
Looking forward to things getting better and better. 🙂