More on How Medical Science Works In Practice

A few days back, I suggested that everyone with an interest in increasing human longevity would do well to become more familiar with the way science works in practice. How else to determine the importance of what you read in the popular press? Scientists are human like the rest of us, which means mistakes, poor first tries, overattachment to ideas, institutional bias and other human failings are mixed in with the long record of triumphs and progress produced by the scientific community - people acting in accordance with the scientific method.

A while back, an examination of just how much research turns out to be wrong caused something of a stir amongst the public at large - nothing new to those involved in science, of course, but many people treat the distinct findings of individual scientists and single teams with a touch too much reverence.

I could - and should - have added that most of what was written on these topics would turn out to be wrong. It might contain useful ideas, or prompt other people into useful directions, but it will be wrong. This is taken for granted by scientists; after all, the scientific method and the community that supports it form a system that makes useful, rapid, solid progress even though the individual components of that progress are largely flawed. Science is built by consensus and aggregation, a form of ongoing, distributed cross-checking of information. Every single collection of data could be 99% wrong, but you'll still get the right answer in the end if you have enough of those collections to compare.

The nature of the ongoing search for truth - or something close enough to be useful in the production of new technology - encourages me to be a late adopter, and to wait a decade or so where I can for the engine of science to clarify each new result. It only makes sense not to rush in at the cutting edge.

The scientific method and the community of science that surrounds it form a truly powerful machine - able to take the worst aspects of human nature, sailing atop a river of garbage specked with half-wrong answers, and spin that mix into the gold of technology. It doesn't matter what your right-to-wrong-to-nonsense ratio is when it comes to deciphering the world; so long as you have the will to progress and your sifting mechanism is good enough, accumulating a whole pile of right is just a matter of time.

The front line of science is a messy place; a mostly wrong messy place, as any of us who have spent time there know. A recent study claimed massive error rates across all scientific papers - which is no surprise to scientists. The closer to the edge of knowledge you come, the more wrong you'll find - a great frothing sea of wrong, enthusiastically generated by scientists in search of nuggets of right. It's all part of the process, and you have to step back from the details in order to see where the process is taking you. In any complex field - and biotechnology and medicine are about as complex as it gets outside astrophysics - validating truth takes time. Scratch any unanswered question and it'll bleed papers and reviews, a dozen for any given position on the topic.

Following up on the nature and character of wrong results, you'll find another paper well worth reading at PLoS Medicine. If you like to keep track of medical research, or are looking for specific answers in any field of new medicine, this should be required reading. It's one thing to see widely varied, changing, contradictory information presented by reputable researchers - but it's quite another to be able to put this in context, as a part of an ongoing and very human process, and therefore understand the likely weight behind each position.

In a recent article in PLoS Medicine, John Ioannidis quantified the theoretical basis for lack of replication by deriving the positive predictive value (PPV) of the truth of a research finding on the basis of a combination of factors. He showed elegantly that most claimed research findings are false. One of his findings was that the more scientific teams involved in studying the subject, the less likely the research findings from individual studies are to be true. The rapid early succession of contradictory conclusions is called the “Proteus phenomenon”. For several independent studies of equal power, Ioannidis showed that the probability of a research finding being true when one or more studies find statistically significant results declines with increasing number of studies.

As part of the scientific enterprise, we know that replication - the performance of another study statistically confirming the same hypothesis - is the cornerstone of science and replication of findings is very important before any causal inference can be drawn. While the importance of replication is also acknowledged by Ioannidis, he does not show how PPVs of research findings increase when more studies have statistically significant results. In this essay, we demonstrate the value of replication by extending Ioannidis' analyses to calculation of the PPV when multiple studies show statistically significant results.
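To make the arithmetic behind that concrete, here is a minimal sketch of how a post-study probability of truth can be estimated under the sort of assumptions used in these essays: a pre-study odds R that the tested relationship is real, a significance threshold alpha, a power of 1 - beta, and n equally powered, independent studies that all report statistically significant results. The function name and the example numbers (R = 0.1, alpha = 0.05, beta = 0.2) are my own illustrative choices, not figures taken from the papers.

# Illustrative sketch, not code from the quoted papers: the probability that a
# finding is true given that n independent, equally powered studies all report
# statistically significant results. R, alpha, beta, and n are assumed values.

def ppv(R, alpha=0.05, beta=0.2, n=1):
    """Post-study probability of truth after n concordant significant studies.

    R     -- pre-study odds that the tested relationship is true
    alpha -- type I error rate (significance threshold)
    beta  -- type II error rate, i.e. 1 - power
    n     -- number of independent studies, all statistically significant
    """
    true_positives = R * (1 - beta) ** n   # true relationships significant in every study
    false_positives = alpha ** n           # false relationships significant in every study by chance
    return true_positives / (true_positives + false_positives)

for n in (1, 2, 3):
    print(f"{n} significant stud{'y' if n == 1 else 'ies'}: PPV = {ppv(R=0.1, n=n):.3f}")

With those example numbers, a single significant study gives a PPV of roughly 0.6, two concordant studies push it above 0.95, and three above 0.99 - the quantitative version of listening when many teams converge on the same result.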

Here are the underpinnings of the common sense approach to scientific research - listen when many teams converge on the same findings, and expect widespread disagreement and contradictory findings in any young, but well-funded or popular field. Those of us following stem cell research these past few years have certainly seen a great deal of that, for example.

To finish up on this topic for today, here is another paper worth reading on the nature of truth and results in clinical trials, or the translation of research into action:

The credibility and replication of research findings evolve over time, as data accumulate. However, translation of postulated research promises to real-life biomedical applications is uncommon. In some fields of research, we may observe diminishing effects for the strength of research findings and rapid alternations of exaggerated claims and extreme contradictions - the “Proteus Phenomenon.” While these phenomena are probably more prominent in the basic sciences, similar manifestations have been documented even in clinical trials and they may undermine the credibility of clinical research. Significance-chasing bias may be in part responsible, but the greatest threat may come from the poor relevance and scientific rationale and thus low pre-study odds of success of research efforts. Given that we currently have too many research findings, often with low credibility, replication and rigorous evaluation become as important as or even more important than discovery. Credibility, replication, and translation are all desirable properties of research findings, but are only modestly correlated. In this essay, I discuss some of the evidence (or lack thereof) for the process of evolution and translation of research findings, with emphasis on the biomedical sciences.

All science is a process, not a printed statement of fact. The results are amazing and accelerating - if all too often taken for granted - when you consider the fallible nature of humanity. It goes to show that the scientific method is the only game in town when it comes to reliably providing the raw materials needed to advance and improve the state of being human.

