Foolishly Designed and Implemented U.S. Clinical Medical Registries Fail to Track Much of Noticeably Useful Substance — What Else Is Not New?

© 2015 Peter Free

 

04 May 2015

 

 

Citation — to study

 

Heather Lyu, Michol Cooper, Kavita Patel, Michael Daniel, and Martin A. Makary, Prevalence and Data Transparency of National Clinical Registries in the United States, Journal for Healthcare Quality, DOI: 10.1097/jhq.0000000000000001 [sic] (published ahead of print, 24 April 2015)

 

Citation — to press release

 

Ekaterina Pasheva and Lauren Nelson, Study Questions Quality of U.S. Health Data, Johns Hopkins Medicine (30 April 2015)

 

 

Money-grubbing complacence?

 

Consider this finding:

 

 

A new study by Johns Hopkins researchers concludes that most U.S. clinical registries that collect data on patient outcomes are substandard and lack critical features necessary to render the information they collect useful for patients, physicians and policy makers.

 

Findings of the study, published ahead of print April 24 in the Journal for Healthcare Quality, reveal poor data monitoring and reporting that researchers say are hurting national efforts to[:]

 

study disease,

 

guide patient choice of optimal treatments,

 

formulate rational health policies

 

and

 

track in a meaningful way how well physicians and hospitals perform.

 

“Our results highlight the acute need to improve the way clinical outcomes data are collected and reported,” says senior investigator Marty Makary, M.D., M.P.H., professor of surgery at the Johns Hopkins University School of Medicine.

 

“Failure to measure and accurately track patient outcomes remains one of the greatest problems in modern health care, curtailing our ability to understand disease, evaluate treatments and make the health-care industry a value-driven marketplace.”

 

In addition, the failure to track patient outcomes in a systematic way is tantamount to not measuring the performance of a sector that claims one-fifth of the nation’s economy, the research team says.

 

“We found it’s the Wild West,” Makary says. “With a few notable exceptions, most registries are underdeveloped, underfunded and often are not based on sound scientific methodology.”

 

© 2015 Ekaterina Pasheva and Lauren Nelson, Study Questions Quality of U.S. Health Data, Johns Hopkins Medicine (30 April 2015) (paragraphs split and partially reformatted)

 

 

This was not (apparently) a haphazardly drawn conclusion

 

From the study’s abstract:

 

 

We identified 153 clinical registries of which [:]

 

47.7% (73) were health services registries,

 

43.1% (66) were disease registries,

 

and

 

9.2% (14) were combination registries.

 

The mean number of hospitals per registry was 1,693 . . . and the mean number of patients per registry was 1,160,492 . . . .

 

Among the 117 AMA [American Medical Association] specialty societies, [only] 16.2% (19) were affiliated with a registry.

 

Government funding was associated with 26.1% (40/153) of registries.

 

Of the 153 registries, [only] 23.5% (36) risk adjusted outcomes and 18.3% (23) audited data.

 

Mandatory public reporting of hospital outcomes for all participating hospitals was associated with [a ridiculously low] 2.0% (3/153) of registries.

 

© 2015 Heather Lyu, Michol Cooper, Kavita Patel, Michael Daniel, and Martin A. Makary, Prevalence and Data Transparency of National Clinical Registries in the United States, Journal for Healthcare Quality, DOI: 10.1097/jhq.0000000000000001 [sic] (published ahead of print, 24 April 2015) (at Abstract) (paragraph split, underlines added, my words in brackets and italics)

 

 

The bad news — in lay language

 

From Johns Hopkins:

 

 

Less than one-quarter of registries adjusted their results for differences in disease complexity — information statistically reflective of disparities in illness severity and socio-economic status among patients treated across hospitals. Unadjusted data, the researchers say, could be misleading and should be interpreted with great caution.

 

Less than one-fifth of registries contained independently entered data — information entered by clinicians other than the ones involved in care — an important principle in mitigating the well-established bias of self-reported data, the researchers say.

 

Although one-quarter of registries — 40 in total — were funded by taxpayers, only three shared their data publicly.

 

Of note, 84 percent (98 of 117) of U.S. recognized medical specialties had no national clinical registries — a significant gap in the efforts to compare the efficacy of treatments and evaluate the quality of care on a large scale.

 

The researchers say such failure to capture and measure patient outcomes is troubling because the insights gleaned from such information could have a direct and profound impact on scientific research and human lives.

 

“A robust clinical registry can tell doctors in real time what medications work well and which are harming patients, yet the infrastructure to achieve that is vastly under-supported,” says study co-author Michol Cooper, M.D., Ph.D., a surgical resident at the Johns Hopkins University School of Medicine.

 

“The same rigorous standards we use to evaluate how well a drug does ought to apply to the way we report patient outcomes data.”

 

© 2015 Ekaterina Pasheva and Lauren Nelson, Study Questions Quality of U.S. Health Data, Johns Hopkins Medicine (30 April 2015) (extracts, underlines added)

 

 

Which boils down to

 

Not only do we not reliably track anything of substance, we also do not report our findings to anyone who matters.

 

 

If you think that this is obtuse . . .

 

Don’t bet on anything changing. If we actually tracked what works and what does not, a whole lot of what happens now would have to stop. A bunch of folks would lose money and prestige.

 

 

The moral? — American health care is the equivalent of kids playing in an unscientifically designed sandbox

 

Fun for almost everyone, until one starts wondering whether much of what we do makes provable sense. Twenty percent of the American economy is based on a black box that no one tries to understand.

 

Ain’t capitalism grand? Illness is an immense profit source, and we need not care much whether we heal anyone. The “system” is — arguably and inferentially — mostly about looking and sounding good, rather than being good.