Monday, June 16, 2008

yes, I'm conducting a literature review

Most researchers have a financial incentive to obtain positive study outcomes. I don't have direct incentives, but my research center will not continue to obtain grants if we don't make our funders happy. And our funders are, for the most part, trying to maintain their jobs, which means the programs they run need to be funded and they don't want to risk this funding with bad news.

Even if researchers are not financially tied to positive study outcomes, they often are emotionally tied to the programs under study. Many of our funders are completely convinced that the programs they oversee "work," and no research would ever convince them otherwise, even if their jobs weren't on the line. It doesn't take too long to get emotionally invested in a project.

So we come up with something nice to say in our reports even if we don't find anything positive. And even though we own the rights to all the data we collect (one thing SUNY insists on when accepting grants), we don't publish negative or null results (in our case, usually meaning that an intervention did not have an effect).

Null results are rarely published even in more purely academic settings, simply because such studies are often less interesting to read. They're much more likely to be rejected by top-tier journals unless they refute something previously published with a larger sample or a better design. Both of my published papers report null results, and we didn't even bother submitting them to prestigious journals.

This is a problem because twenty people could conduct similar studies and, even if only one gets statistically significant results (which, at the conventional 0.05 significance level, is likely to happen by chance even when the intervention has no real effect), that one paper is likely to be added to the published literature and all the others trashed, if not by journal editors, then by the researchers themselves, who won't trouble to write and submit a paper that's unlikely to get into a good journal when they could be spending their time on more effective career-building activities.

Meta-analyses of published work are becoming more popular: to compensate for the small sample sizes of individual studies on a topic, they collect all the decent published studies and analyze them as a whole to see whether the pooled results are statistically meaningful. The problem is that meta-analysis cannot compensate for publication bias.
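To put a number on the "likely to happen by chance" claim above: if twenty independent studies each test an intervention with no real effect at the 0.05 significance level, the chance that at least one comes up "significant" is about 64%. Here's a minimal sketch of that arithmetic, with a Monte Carlo check; the setup (independent studies, p-values uniform under a true null) is an assumption for illustration, not something from the post.

```python
import random

ALPHA = 0.05      # conventional significance level
N_STUDIES = 20    # number of similar studies of a null effect

# Exact probability that at least one study is a false positive:
# 1 minus the probability that all twenty correctly find nothing.
p_at_least_one = 1 - (1 - ALPHA) ** N_STUDIES
print(f"P(at least one false positive) = {p_at_least_one:.3f}")  # ~0.642

# Quick simulation: under a true null hypothesis, each study's p-value
# is uniform on [0, 1], so it falls below ALPHA with probability 0.05.
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_STUDIES))
    for _ in range(trials)
)
print(f"Simulated frequency = {hits / trials:.3f}")
```

If only that one lucky study gets written up, the literature records a 1-in-1 success rate for an intervention that failed nineteen times out of twenty.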
