The Decline Effect and its implications for drug regulators

Kelvin Teo

Courtesy of Andres Rueda, Flickr

To kickstart things, consider this real-life scenario that took place in a research institution:

Principal investigator/boss: How did your analysis of the six genes go?

Student: Five of the genes behaved according to our original hypothesis; the sixth, however, followed a haphazard trend that doesn’t fit, and I have no explanation for it.

Boss: Never mind. Let’s just omit that errant gene from your report and concentrate on the five instead.

Last month, Jonah Lehrer wrote a thought-provoking article in the Annals of Science section of The New Yorker detailing a crisis facing the scientific method, under a catchy title – The Truth Wears Off. Lehrer opens with an assembly of drug company executives, psychiatrists and neuroscientists in a hotel conference room, gathered to hear news of a disconcerting nature: the therapeutic effect of second-generation antipsychotic drugs such as Abilify and Seroquel was declining. Compared with the clinical trials that initially reported dramatic improvements in the psychiatric symptoms of the early batches of patients, the later trials reported much smaller improvements.

The explanation Lehrer offers for this phenomenon is the Decline Effect. Such occurrences are not restricted to medicine alone; related fields such as psychology, ecology and even evolutionary biology are affected too. What typically happens is that a researcher conducts an experiment and initially observes a strong effect in the experimental group compared with the control group. When the same experiment is repeated on subsequent occasions, however, the difference in effect between the two groups seems to wane. It is almost as if the main foundation of scientific rigour, reproducibility, were turning flimsy, gnawing away at the plausibility of theories verified ages ago in experiments we have taken for granted as the truth.

Lehrer offers two explanations for this Decline Effect. The first is regression to the mean. The idea is that the initial experiment may have harboured a statistical fluke that over-dramatised the difference in observed effects between the experimental and control groups, and subsequent experiments simply cancelled out the earlier fluke. Regression to the mean, however, does not convincingly explain the decline of the antipsychotics, because prospective drugs must be put through randomised clinical trials – the gold standard of medical evidence – and show statistically significant therapeutic effects compared with controls. A randomised clinical trial, as the name suggests, has a randomisation process that should have reduced the chance of such a fluke.
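To make the regression-to-the-mean explanation concrete, here is a minimal simulation sketch; it is not taken from Lehrer’s article, and the effect size, noise level and trial counts are invented purely for illustration. It “publishes” the most impressive of many noisy trials of a drug with a genuinely modest effect, then replicates that experiment honestly: the selected first result looks dramatic, while the replications drift back towards the true, smaller effect.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.2        # assumed modest true drug-vs-placebo difference
NOISE = 1.0              # assumed patient-to-patient variability
N_CANDIDATE_TRIALS = 20  # how many noisy trials the "first" result is picked from

def run_trial(n_patients=50):
    """Return the observed drug-vs-placebo difference in one noisy trial."""
    drug = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(n_patients)]
    placebo = [random.gauss(0.0, NOISE) for _ in range(n_patients)]
    return sum(drug) / n_patients - sum(placebo) / n_patients

# The "published" initial result: the most impressive of many noisy trials.
initial_effect = max(run_trial() for _ in range(N_CANDIDATE_TRIALS))

# Straight replications of the same experiment.
replications = [run_trial() for _ in range(5)]

print(f"initial (selected) effect: {initial_effect:.2f}")
print(f"replicated effects       : {[round(r, 2) for r in replications]}")
print(f"true effect              : {TRUE_EFFECT}")
```

Run it with different seeds and the pattern persists: whichever result is singled out for being extreme tends to shrink on replication, which is all regression to the mean amounts to.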

The second explanation is more plausible and reflective of a pitfall researchers knowingly fall into – the tendency to stick to one’s preconceived hypothesis, to the extent of keeping data that fit it and excluding data that don’t, as illustrated by the scenario at the start of this article. This is known as confirmation bias, in which researchers selectively interpret or include favourable data whilst ignoring unfavourable data. It could account for the Decline Effect if the initial research was subject to confirmation bias while the subsequent studies were conducted with greater scientific integrity, freer from that bias, so that the difference in effect they observed was smaller than in the initial research.
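To see how selective inclusion inflates an effect, here is a toy example echoing the six-genes scenario above; the numbers are made up purely for illustration and are not real data. Dropping the one “errant” observation makes the average effect look considerably stronger than the full data support, which is exactly the kind of exaggeration a later, more complete analysis would appear to “decline” from.

```python
import statistics

# Hypothetical effects for six genes, invented purely to mirror the opening
# scenario; gene6 is the "errant" gene that contradicts the hypothesis.
gene_effects = {
    "gene1": 1.8, "gene2": 2.1, "gene3": 1.6,
    "gene4": 2.4, "gene5": 1.9, "gene6": -0.7,
}

all_six = list(gene_effects.values())
cherry_picked = [v for gene, v in gene_effects.items() if gene != "gene6"]

print(f"mean effect, all six genes       : {statistics.mean(all_six):.2f}")
print(f"mean effect, errant gene omitted : {statistics.mean(cherry_picked):.2f}")
```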

John Ioannidis, an epidemiologist at Stanford University, was concerned about the impact of confirmation bias on biomedical research. In 2005, Ioannidis published a paper in the Journal of the American Medical Association that looked at the 49 most-cited clinical research studies published in three major medical journals. Forty-five of those studies reported a positive effect of a medical intervention, and the majority were randomised clinical trials. Thirty-four of the studies had been replicated, and the results of the replications were indeed cause for worry: 41% of them contradicted the original study or showed a diminished effect (meaning that the original study had reported an exaggerated effect).

This is worrying because medical practice has always relied on such trials as the gold standard of evidence for therapeutic benefit. Evidence-based medicine drives the prescribing practices of medical practitioners and the treatment guidelines that medical societies and organisations formulate for particular conditions. The trouble with contradicted studies, or studies that exaggerate the effects of a drug, is that the decision to prescribe a drug requires weighing its therapeutic benefit against its adverse effects. Adverse effects can markedly reduce a patient’s quality of life, and are taken seriously in the decision to prescribe. If the researchers behind a drug were blinded by confirmation bias in the first place, medical practitioners would unknowingly be misled by its exaggerated promise, and their prescriptions could do more harm than good, with a negative impact on patients’ quality of life. The problems with confirmation bias led Richard Palmer, a biologist at the University of Alberta, to make an apt observation: “Our beliefs are a form of blindness.” What is particularly worrying is that this could lead to the blind leading the blind.

So what are the implications of the Decline Effect for drug regulators? One obvious move would be to prevent it from arising in the first place, and there are two possible approaches. The first, and more expensive, is for the drug regulatory body itself to replicate the clinical trial using rigorous methods free of confirmation bias. The second, which sounds more feasible, is to require a pharmaceutical company wishing to register or license a drug to conduct its trials within government healthcare establishments, for example government-owned tertiary healthcare centres. This would place the clinical trial directly under the noses of the drug regulatory bodies, which could monitor its progress and even its methodology – data collection, analysis and interpretation – and nip scientific integrity issues such as confirmation bias in the bud before they become a problem.

The Decline Effect in research, and in this case in clinical research, is definitely a cause for concern for the medical community. The methodology of research carried out by drug companies, or by entities linked to them, should increasingly be placed under the panopticon of drug regulatory bodies or even medical watchdog groups, so as to ensure that correct and reliable information is gleaned from such studies. That information is crucial to the functioning of the medical community and ultimately to the welfare of our patients.