New Study Highlights Misleading Pharmaceutical Research — the High Prevalence of Scientifically Meaningless Endpoints and Deliberately Confusing Risk Analysis

© 2011 Peter Free

 

26 August 2011

 

 

A UCLA study of recent pharmaceutical articles indicates a lack of scientific reasoning in them

 

Who would have “thunk” that profit-seeking might distort research methodology and reporting?

 

Studies about medications published in the most influential medical journals are frequently designed in a way that yields misleading or confusing results, new research suggests.

 

© 2011 Enrique Rivero, Results of medication studies in top medical journals may be misleading to readers, UCLA Health System (25 August 2011)

 

I’ve written about this problem here, here, and here.

 

 

Citation

 

Michael Hochman and Danny McCormick, Endpoint Selection and Relative (Versus Absolute) Risk Reporting in Published Medication Trials, Journal of General Internal Medicine, DOI: 10.1007/s11606-011-1813-7 (early online publication, 13 August 2011)

 

 

Peer review is no obstacle to publishing crappy medical science

 

The journals examined, for articles published from 01 June 2008 to 30 September 2010, were the:

 

Annals of Internal Medicine

 

Archives of Internal Medicine

 

British Medical Journal

 

Journal of the American Medical Association

 

Lancet

 

New England Journal of Medicine

 

These are generally regarded as the most prestigious general-interest journals in medicine.

 

 

Guilty of using the wrong measures, combined with misleading semantics

 

A high proportion of the pharmaceutical research articles included measures that genuine scientists frequently criticize as irrelevant, obviously inaccurate, or deliberately misleading:

 

Surrogate outcomes (37 percent of studies), which refer to intermediate markers, such as a heart medication's ability to lower blood pressure, but which may not be a good indicator of the medication's impact on more important clinical outcomes, like heart attacks.

 

Composite outcomes (34 percent), which consist of multiple individual outcomes of unequal importance lumped together — such as hospitalizations and mortality — making it difficult to understand the effects on each outcome individually.

 

Disease-specific mortality (27 percent), which measures deaths from a specific cause rather than from any cause; this may be a misleading measure because, even if a given treatment reduces one type of death, it could increase the risk of dying from another cause, to an equal or greater extent.

 

The new study also shows that 44 percent of study abstracts reported study results exclusively in relative — rather than absolute — numbers, which can be misleading.

 

© 2011 Enrique Rivero, Results of medication studies in top medical journals may be misleading to readers, UCLA Health System (25 August 2011) (emphasis added in last sentence)

 

 

Choosing the wrong outcome measures is bad medical science

 

Each of the above three outcome measures ignores proper scientific methodology:

 

Surrogate outcomes are irrelevant, unless the surrogate has been experimentally and quantitatively tied to the outcome the experiment is supposed to measure.

 

The reason that surrogates are so often used in pharmaceutical research is that it’s easier, cheaper, and less experimentally complex to measure and affect the surrogate than to do bona fide research into the actual causes of the outcome.

 

For example, take cholesterol levels.  I can make a drug to lower cholesterol pretty easily.  But seeing how that drug actually works in varied people’s bodies over the long term is arguably too complicated to be profitable.

 

So, if I’m a pharmaceutical company, I’ll treat the lab value, rather than concern myself with the actually relevant health outcome.
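
To make the surrogate problem concrete, here is a minimal Python sketch.  The numbers are invented purely for illustration; they are not taken from the Hochman-McCormick study or any real trial:

```python
# Illustrative sketch only: the numbers below are invented, not drawn
# from the Hochman-McCormick study.  It contrasts movement in a
# surrogate endpoint (cholesterol) with movement in the hard clinical
# outcome (heart attacks) that the surrogate is supposed to stand in for.

def percent_change(before, after):
    """Percent change relative to the baseline value."""
    return 100.0 * (after - before) / before

# Hypothetical trial averages.
cholesterol_before, cholesterol_after = 240.0, 190.0      # mg/dL (surrogate)
heart_attacks_before, heart_attacks_after = 20.0, 19.0    # per 1,000 patients (hard outcome)

print(f"Surrogate (cholesterol):      {percent_change(cholesterol_before, cholesterol_after):+.1f}%")
print(f"Hard outcome (heart attacks): {percent_change(heart_attacks_before, heart_attacks_after):+.1f}%")

# Prints roughly -20.8% for the surrogate but only -5.0% for the
# outcome.  A drug can look dramatic on the lab value while barely
# moving the clinical result, which is why a surrogate means nothing
# until it has been experimentally tied to the outcome.
```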

 

Using composite outcomes is even less defensible.

 

For example, it should be obvious (as Enrique Rivero points out) that hospitalizations are not related to mortality in any quantifiably detectable way.  If they were, most people who go to the hospital for one reason or another would die.  Either there, or soon after.  Which is obviously not the case.

 

The only real connection between hospitalizations and mortality is that most people who die in advanced countries have probably been hospitalized at one time or another.  This is a mostly meaningless correlation, since all of us eventually die.

 

Using composite outcomes is simply stupid.  There’s no kinder word for it.
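
A toy example makes the objection concrete.  Again, these numbers are invented for illustration only:

```python
# Illustrative sketch only, with invented numbers.  A composite endpoint
# such as "death or hospitalization" can show a headline benefit that
# comes entirely from the softer, more frequent component, while the
# component that matters most (death) does not change at all.

events_per_1000 = {
    "control": {"deaths": 10, "hospitalizations": 90},
    "drug":    {"deaths": 10, "hospitalizations": 70},
}

for arm, counts in events_per_1000.items():
    composite = counts["deaths"] + counts["hospitalizations"]
    print(f"{arm}: deaths = {counts['deaths']}, composite = {composite} per 1,000")

# The composite falls from 100 to 80 per 1,000, a headline "20 percent
# reduction," even though not a single death was prevented.
```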

 

Disease-specific mortality sounds good, but it constitutes an end-run around what is actually important: “Am I going to live longer or better using this drug?”

 

Studies that ignore “all-cause mortality” do so deliberately.  Even a decent high school science student would recognize that intervening to reduce cardiovascular mortality might have side effects that escalate one’s chances of dying from some other disease or unpleasant happenstance.

 

That, for example, is my caveat in regard to using statins in people who don’t have elevated cardiovascular risk factors.  Research indicates that statins do reduce cholesterol levels and cardiovascular mortality.  But research also seems to indicate that statins do not reduce all-cause mortality.

 

Statins’ apparent failure to reduce all-cause mortality means that patients taking them are probably going to croak at the same time in the future, just from something else.

 

In short, we get elevated medical costs for no apparent gain in longevity or, perhaps, quality of life.
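
The arithmetic behind the all-cause point is simple enough to sketch.  These figures are hypothetical, chosen only to illustrate the logic, not to characterize any actual statin trial:

```python
# Illustrative sketch only, with invented numbers.  Disease-specific
# mortality can improve while all-cause mortality, the number patients
# actually care about, stays exactly flat.

deaths_per_1000 = {
    "control": {"cardiovascular": 15, "other_causes": 25},
    "statin":  {"cardiovascular": 12, "other_causes": 28},
}

for arm, causes in deaths_per_1000.items():
    all_cause = sum(causes.values())
    print(f"{arm}: cardiovascular = {causes['cardiovascular']}, "
          f"all-cause = {all_cause} per 1,000")

# Cardiovascular deaths fall 20 percent (15 down to 12 per 1,000), but
# all-cause mortality is 40 per 1,000 in both arms.  The same number of
# people die; they just die of something else.
```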

 

 

Reporting relative risk, as opposed to absolute risk, is also deliberately misleading

 

If one has an already low risk for illness, reducing it further is (a) essentially meaningless, (b) likely to be expensive, and (c) accompanied by unnecessary and unpleasant side effects.

 

I’ve written about the logic of this here.

 

The reason that pharmaceutical results are so frequently reported in distorting relative risk terms is that (a) the drug’s action looks more impressive that way and (b) the perceived risk of not taking the drug looks larger:

 

"The way in which study results are presented is critical," McCormick said.

 

"It's one thing to say a medication lowers your risk of heart attacks from two-in-a-million to one-in-a-million, and something completely different to say a medication lowers your risk of heart attacks by 50 percent.

 

Both ways of presenting the data are technically correct, but the second way, using relative numbers, could be misleading."

 

© 2011 Enrique Rivero, Results of medication studies in top medical journals may be misleading to readers, UCLA Health System (25 August 2011)
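
McCormick’s own numbers make the distinction easy to compute.  Here is a minimal sketch; the number needed to treat is a standard companion statistic that I have added, not something the press release mentions:

```python
# A minimal sketch using McCormick's example: the risk of a heart attack
# falls from two-in-a-million to one-in-a-million.

def risk_summary(control_risk, treated_risk):
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    nnt = 1.0 / arr                     # number needed to treat to prevent one event
    return arr, rrr, nnt

arr, rrr, nnt = risk_summary(2e-6, 1e-6)
print(f"Absolute risk reduction: {arr * 1_000_000:.0f} in a million")
print(f"Relative risk reduction: {rrr:.0%}")   # the impressive-sounding 50%
print(f"Number needed to treat:  {nnt:,.0f}")  # a million patients per heart attack avoided
```

Both figures describe the same trial.  Only the relative one sounds like it is worth a prescription.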

 

 

Why such sloppy science? — Profit motive

 

In regard to profit-seeking’s distortion of sound science, the UCLA press release indicated:

 

While 45 percent of exclusively commercially funded trials used surrogate endpoints, only 29 percent of trials receiving non-commercial funding did.

 

And while 39 percent of exclusively commercially funded trials used disease-specific mortality, only 16 percent of trials receiving non-commercial funding did.

 

The researchers suggest that commercial sponsors of research may promote the use of outcomes that are most likely to indicate favorable results for their products, [Michael] Hochman said.

 

"For example, it may be easier to show that a commercial product has a beneficial effect on a surrogate marker like blood pressure than on a hard outcome like heart attacks," he said.

 

"In fact, studies in our analysis using surrogate outcomes were more likely to report positive results than those using hard outcomes like heart attacks."

 

© 2011 Enrique Rivero, Results of medication studies in top medical journals may be misleading to readers, UCLA Health System (25 August 2011) (paragraph split)

 

 

The moral? — Skepticism is necessary

 

Medical “research” is frequently more about profit than science and health.