Monday, May 16, 2011

Meta-Research

...or the research of research, as I see it.  I recently read an article published in PLoS arguing that most published research findings are actually false (link).

This very idea makes me cringe.  If that's true, why do we even bother?  Why spend enormous amounts of money to support something that's false? 

The paper doesn't really address either of those points; instead, it explains how it can make such a claim in the first place.  Statistically, it is difficult to confirm anything without absolutely enormous amounts of data.  Of course, getting data sets that large for arbitrary experiments ranges from difficult to impossible, with infeasible being the common case.  This can put the statisticians at odds with the scientists.  The author's argument drives at exactly this point: the data sets typically used are not large enough to yield a statistically valid answer.
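The paper's argument can be sketched numerically using the positive predictive value of a "significant" finding - roughly, the probability that a result which clears the significance bar is actually true.  This is my own toy illustration, not the paper's analysis; the specific numbers below are assumptions chosen only to show the shape of the problem.

```python
# Toy sketch: how often is a "significant" finding actually true?
# All parameter values are illustrative assumptions, not figures
# from the PLoS paper.

def ppv(prior_odds, alpha=0.05, power=0.8):
    """Positive predictive value of a significant result.

    prior_odds: ratio of true to false hypotheses being tested
    alpha:      false-positive rate of the test
    power:      probability of detecting a real effect
    """
    true_positives = power * prior_odds   # real effects, correctly found
    false_positives = alpha * 1.0         # noise that clears the bar
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis:
print(round(ppv(prior_odds=1.0, power=0.8), 2))   # 0.94
# A small, underpowered study of a long-shot hypothesis:
print(round(ppv(prior_odds=0.1, power=0.2), 2))   # 0.29
```

Under these assumed numbers, an underpowered study of an unlikely hypothesis produces a "finding" that is more likely false than true, even with everyone playing by the rules.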

There is another problem.  People are people.  We are inherently biased.  It has been said that data is objective, and I used to believe that was true, at least in theory.  But then the question was posed to me: why don't scientists measure _everything measurable_ in an experiment?  Of course, that would mean an enormous amount of data, most of it probably irrelevant.  But do we really know it's irrelevant?  The answer is no.  Our bias shows not so much in what we measure, but in what we choose not to measure - those things we think are irrelevant.

Research costs money.  This usually means getting a grant.  Getting a grant usually means convincing someone that your research is going to do some good, be it curing a disease or (more commonly) making money.  With that in mind, why would anyone pay any amount of money for an application that reads, "We want to do X.  We drew it out of a hat.  We have no idea what it does, and we have no idea what could come of this"?  That is mostly unbiased (who put the ideas in the hat?).  It is also the least convincing argument I have ever heard for giving someone money.

Now try "We want to research X, because it seems to have an effect on weight retention.  If this is true, we could develop an effective drug for weight loss."  Now we have something profitable.  But here's the problem: everyone involved wants it to be true.  More than likely, even someone working within ethical bounds is going to act differently when the desired outcome is known ahead of time.  I've been told repeatedly that one should never do data analysis until all the data is in.  Yet we do not do this.  I have watched people stare at long-running experiments that appear to deviate from expectations.  Frequently the target is personified: "Why are you putting a band there?  You're supposed to put it over here!"  I do this sort of thing myself.  We already know what the experiment is "going" to do; we just need the formality of it actually doing it.

All in all, I think the paper is particularly interesting.  It gives a feel for how heterogeneous science really is, and it illustrates the ever-present (though shunned) human element.

Research shows that most research is wrong.
