“Success Chasing” and “Spurious Beliefs” on the Part of Doctors, and Presumably Other Decision-Makers, Usually Lead to Worse Outcomes — the Value of Paying Attention to Failures in Order to Improve Performance — a Not so “Duh” Insight from a Brain Study

© 2011 Peter Free

 

02 December 2011

 

 

Ignoring failures would seem self-defeating to most of us, but —

 

Our culture’s infatuation with self-esteem and confidence has us selecting leaders who learn the least from their errors.  We seem to value braggadocio and hyper-self-confidence over actual competence.

 

For example, think about:

 

(a) the coming presidential election, our failure-prone President, and his political adversaries’ mostly clownish panel of Republican Party candidates;

 

or, more generally,

 

our society’s demonstrated tendency to select overly confident male blowhards for leadership positions, in preference to picking qualified, but less self-infatuated, women.

 

 

A small brain study suggests that ignoring mistakes is not a good idea

 

A research study of 35 physicians has suggested that higher-performing doctors — meaning those who had better patient outcomes within the study’s experimental setting — paid more attention to their treatment mistakes than to their successes.

 

Their lower-performing peers made more treatment mistakes because they paid more attention to their successes and too little to their errors.

 

This has implications for performance (and leadership) in other contexts.

 

 

Citation

 

Jonathan Downar, Meghana Bhatt, and P. Read Montague, Neural Correlates of Effective Learning in Experienced Medical Decision-Makers, PLoS ONE 6(11): e27768, doi:10.1371/journal.pone.0027768 (23 November 2011)

 

 

A caveat regarding this study’s scientific significance

 

Obviously, a study of only 35 people lacks the statistical power to generalize to the overall population, even of physicians.  And you might wonder why I’ve bothered with it.

 

The short answer is that this small bit of research parallels conclusions I’ve come to, over six decades, regarding humanity’s flawed reasoning ability.  Worse, it parallels what I have seen of our penchant for making “stuff” up, a trait that gets in the way of coping successfully with difficult situations, like the economic, military, and political morasses the United States is in today.

 

 

Complex decisions are often difficult to make accurately — contrary to what most of us seem to think, at least where political problems are concerned

 

The study’s authors said that:

 

For many decisions, it can be nearly impossible to pick out the few relevant factors from the many irrelevant factors, even with extensive experience.

 

A major stumbling block for learning in these multi-dimensional environments is the tendency to form spurious beliefs: i.e., to attribute a causal role to factors that have no actual bearing on the outcome.

 

© 2011 Jonathan Downar, Meghana Bhatt, and P. Read Montague, Neural Correlates of Effective Learning in Experienced Medical Decision-Makers, PLoS ONE 6(11): e27768, doi:10.1371/journal.pone.0027768 (23 November 2011) (at Introduction) (paragraph split)

 

 

“Spurious beliefs” — an example taken from a conversation with a new friend

 

Let’s transform the authors’ comments about “spurious beliefs” into an everyday context.

 

Yesterday, I made a new friend.  He is much farther to the political right than I am.  And I noticed that he made a number of pejorative “sound bite” statements attacking the Right Wing’s typical bogeymen.

 

For example, he told me that “Europe is a failure, due to its overly extensive social programs.”

 

Oh?

 

Why then is Germany in a much better economic situation than the United States?  Why does Germany export more than we do, and why is its manufacturing base broader and sounder?  And why is Germany’s income disparity significantly smaller than the United States’?

 

Note

 

If Germany is being visibly dragged down today, it is by its more profligate (and less thoughtful) colleagues in the Eurozone.

 

He had not considered these questions.  His statement is an example of a “spurious belief” at work.

 

He also said, “Government only redistributes wealth.  It doesn’t create it.”

 

What about Republican President Eisenhower’s Interstate Highway System?  Didn’t that significantly boost America’s ability to function efficiently in an economic sense?  Clearly, that “piece” of government-funded infrastructure created wealth.

 

Similarly, doesn’t government investment in scientific and medical inquiry that the market considers uneconomic sometimes yield large economic benefits down the road?  As with technological spinoffs from the space program, or medical advances made from investments administered by the National Institutes of Health?

 

My friend hadn’t thought about his asserted principles very deeply, or in a proof-gathering frame of mind.  And that is exactly the brain study’s point.

 

 

The experiment

 

The research team used functional magnetic resonance imaging to watch 35 experienced doctors’ brains as the doctors learned to use two fictional drugs over 64 “virtual” patient encounters in each of two consecutive training and testing phases.

 

The two drugs were fictitious.  This ensured that the doctors would not bring beliefs about real medications into the experiment.  By starting everyone from scratch, the research team could watch how doctors learn in new situations.

 

Note

 

For those unfamiliar with the actual practice of medicine, this experimental setup mimics what happens when physicians prescribe newly released medications and watch patient outcomes over time.  Does the drug work, in which contexts, and for which sub-groups of patients?  And so on.

 

This kind of finely tuned information is often not available from the manufacturers’ clinical trials.

 

The researchers set up the experiment this way:

 

Subjects were instructed that they would select treatments for a series of simulated patients with acute myocardial infarction (MI) [heart attack] in an emergency room setting.

 

For each patient, they viewed a simplified, 6-factor clinical history before selecting one of two fictional treatments (‘Levocyte’ and ‘Novotrin’).

 

They were instructed that both agents had some efficacy, but that they would need to learn by experience whether one medication was more effective than the other overall, or for certain types of patients.

 

Unknown to subjects, both medications had equal success rates of 50% overall.

 

However, one medication, Drug A . . . had a 75% success rate in patients with diabetes, but only a 25% success rate in patients without diabetes.

 

For Drug B, the opposite was true.

 

© 2011 Jonathan Downar, Meghana Bhatt, and P. Read Montague, Neural Correlates of Effective Learning in Experienced Medical Decision-Makers, PLoS ONE 6(11): e27768, doi:10.1371/journal.pone.0027768 (23 November 2011) (at Introduction) (paragraph split)

 

 

Don’t get thrown by the jargon

 

All this “medicalese” says is that (a) there were two drugs, (b) there were two medically distinguishable groups of “heart attack” patients, and (c) the doctors had to figure out which drug worked better for each group.

 

Keep in mind that the difference between 75 percent and 25 percent efficacy (in this sort of setting) is virtually unheard of in medicine.  That’s significant because — were our minds genuinely capable of asking the right questions and making the right inferences from evidence — such a sizeable difference “should” have been apparent.  Yet it wasn’t, as we shall see.
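
To make that concrete, here is a minimal Python sketch (my own construction, not the study’s software; every name in it is hypothetical).  It reproduces the stated payoff structure and tallies outcomes by drug and diabetes status:

    import random

    # Minimal sketch (not the study's code): simulate the stated payoff
    # structure, then tally outcomes by (drug, diabetes status).
    SUCCESS_RATE = {
        ("A", True): 0.75, ("A", False): 0.25,   # Drug A
        ("B", True): 0.25, ("B", False): 0.75,   # Drug B
    }

    def simulate(n_patients=64, seed=0):
        rng = random.Random(seed)
        tally = {key: [0, 0] for key in SUCCESS_RATE}   # [successes, trials]
        for _ in range(n_patients):
            diabetic = rng.random() < 0.5        # roughly half the cohort
            drug = rng.choice(["A", "B"])        # naive, unlearned choice
            success = rng.random() < SUCCESS_RATE[(drug, diabetic)]
            tally[(drug, diabetic)][0] += success
            tally[(drug, diabetic)][1] += 1
        return tally

    for (drug, diabetic), (wins, n) in sorted(simulate().items()):
        print(f"Drug {drug}, diabetes={diabetic}: {wins}/{n} successes")

Note that with only 64 encounters, each of the four drug-by-diabetes cells gets roughly 16 trials, so the 75/25 split is detectable but noisy.  That sampling noise is part of what makes the doctors’ task genuinely hard.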

 

 

This experimental setup exactly parallels decision-making in other fields

 

If I do A, what happens?  If I do B instead, what happens?

 

In real life, we don’t get our answers immediately.  It usually takes multiple attempts before we begin to (often dimly) see the connections between what we did, or didn’t do, and a specific outcome.

 

 

So what happened?

 

Comparatively assessed performance split the 35 physicians into two groups.  The 9 doctors in the more effective group chose the right drug 77 to 98 percent of the time.

 

The 26 physicians in the less effective group chose correctly 38 to 70 percent of the time.  Of these 26 lower-performing doctors, 17 performed no better than chance.

 

Note

 

If the performance distribution (from this tiny study) carries over into the general population of physicians, it means that roughly half of our doctors might as well be flipping coins, when it comes to making reasonably complex medical decisions.

 

That’s not a slam at anyone.  It merely indicates how complicated medical questions can be.  It also tends to support the virtue of having routine access to computerized diagnosis and treatment aids that are based on profession-wide experience and evidence.
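
To put “no better than chance” in perspective, here is a rough back-of-the-envelope calculation (mine, not the paper’s analysis) of how wide “chance” is over 64 test encounters:

    from math import comb

    # Back-of-the-envelope sketch (mine, not the paper's): with 64
    # encounters and a true 50% hit rate, find the central ~95% band
    # of scores that pure guessing produces.
    n = 64
    pmf = [comb(n, k) * 0.5**n for k in range(n + 1)]  # Binomial(64, 0.5)

    cum, lo, hi = 0.0, None, None
    for k, prob in enumerate(pmf):
        cum += prob
        if lo is None and cum > 0.025:
            lo = k                      # lower edge of the chance band
        if hi is None and cum >= 0.975:
            hi = k                      # upper edge of the chance band
            break

    print(f"Pure guessing scores between {lo}/{n} and {hi}/{n} "
          f"({lo/n:.0%} to {hi/n:.0%}) about 95% of the time.")

That band runs from roughly 38 to 62 percent correct, which overlaps most of the lower group’s 38-to-70-percent range.  Only the top group’s 77-to-98-percent scores sit clearly outside it.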

 

 

What distinguished the high performers from the lower ones?

 

Here are two really interesting findings:

 

On behavioral measures, more than two-thirds of physicians robustly incorporated spurious associations into their treatment algorithms.

 

Overall, subjects were nearly twice as likely to invent a spurious rule as to detect the correct one.

 

Low performers, who included irrelevant factors in their algorithms, showed stronger prefrontal and parietal activation for successes than for failures.

 

Conversely, high performers showed stronger prefrontal and parietal activation after treatment failures than successes.

 

The profile of activation in inferior parietal areas related to attention and salience similarly suggest that while low-performers pay special attention to successes, high performers attend more to failures during learning.

 

© 2011 Jonathan Downar, Meghana Bhatt, and P. Read Montague, Neural Correlates of Effective Learning in Experienced Medical Decision-Makers, PLoS ONE 6(11): e27768, doi:10.1371/journal.pone.0027768 (23 November 2011) (at Introduction) (paragraph split)

 

In other words, the better doctors learned from their mistakes because they paid attention to them.  The less competent ones did not because they didn’t.
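
As a toy illustration of why attention allocation matters (my construction, not the study’s model), compare two simple tally learners watching the same Drug A outcomes.  Factor 0 plays the role of diabetes, the only factor that matters; factors 1 through 5 are noise:

    import random

    # Toy model (mine, not the study's): a "success chaser" credits every
    # factor present on a success and ignores failures; a "failure
    # attender" also debits factors present on failures.

    def outcomes(n=2000, seed=1):
        rng = random.Random(seed)
        for _ in range(n):
            factors = [rng.random() < 0.5 for _ in range(6)]
            p_success = 0.75 if factors[0] else 0.25   # Drug A's true rule
            yield factors, rng.random() < p_success

    chaser = [0] * 6
    attender = [0] * 6

    for factors, success in outcomes():
        for i, present in enumerate(factors):
            if not present:
                continue
            if success:
                chaser[i] += 1
                attender[i] += 1
            else:
                attender[i] -= 1

    print("success-chaser scores: ", chaser)    # every factor looks causal
    print("failure-attender scores:", attender) # only factor 0 stands out

The success-only learner ends up crediting all six factors, which is how “spurious beliefs” arise.  The learner that also debits failures leaves the five irrelevant factors hovering near zero, so the true rule stands out.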

 

 

If most people (a) invent inaccurate guides to Reality and (b) pay no attention to inaccuracies or failures — how likely are we to be successful in coping with problems?

 

The authors made another relevant observation.  If experts make up mistaken rules and ignore evidence that disproves these rules, then it is not surprising that they often disagree about which supposedly causative factors are important in any given context.  The paper concluded that experiential (meaning anecdotal) learning alone is not enough to guarantee accuracy.

 

 

The moral? — Most of us don’t come close to recognizing Reality in what we think we know

 

We have inaccurate pictures of the world and how it works.  We don’t recognize where our Reality maps go astray.  And we ignore evidence that would correct our mistaken thinking, if we paid attention to it.

 

In this especially anti-scientific age, these characteristics have become increasingly prominent among our population and its leaders.

 

Our culture’s apparent approval of divorcing blather from any requirement of proof explains:

 

(a) what James Moore called the “GOP clown car,”

 

and

 

(b) why President Obama might win another term, despite having been inept, untruthful, or misfocused on virtually everything significant that he touched.

 

We see that Neural Correlates of Effective Learning in Experienced Medical Decision-Makers reaches significantly beyond its apparently narrow medical learning context.