Tuesday, January 19, 2010

Practice variation: Real data


That there is some variation in practice patterns among physicians, even for comparable patient populations, is inevitable. That its range is so wide is not, as often noted by Brent James. To the extent variation is not based on scientific evidence, it presents an impediment to process improvement that could reduce overuse and underuse in the delivery of medical care, or the amount of harm caused to patients. Why? Without some standardization, it is impossible to have a baseline against which to collect evidence as to the effect of proposed process improvement measures.

With help from friends at Blue Cross Blue Shield of MA, I offer an example. The issue here is the percentage of cases in which physicians choose to perform an endoscopy with biopsy on patients with gastroesophageal reflux disease (GERD), which often presents as heartburn.

The top chart shows how the average cost per episode varies among the four quartiles of all cases. Note a variation of almost 100% in cost between the bottom and top quartiles; that is, the average episode in the top quartile costs nearly twice as much as one in the bottom quartile. As noted by BCBS, the procedure cost is the single most important source of variation.

The second chart shows the variation, doctor by doctor, in the use of endoscopies with biopsies. The chart shows that 74 of the 331 gastroenterologists have a significantly higher than average use of this procedure.
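To give a concrete sense of what a flag like "significantly higher than average" can involve, here is a minimal sketch in Python. It is not the BCBS methodology; the physician counts and biopsy rates are invented purely for illustration, and the test is a simple normal approximation to the binomial, comparing each doctor's rate to the pooled average.

import math
import random

# Hypothetical data: for each gastroenterologist, (episodes seen, episodes
# in which an endoscopy with biopsy was performed). Numbers are invented
# for illustration; they are not the BCBS data.
random.seed(0)
doctors = []
for i in range(331):
    n = random.randint(15, 35)              # episodes for this doctor
    rate = random.uniform(0.35, 0.80)       # that doctor's underlying tendency
    k = sum(random.random() < rate for _ in range(n))
    doctors.append((f"GI-{i:03d}", n, k))

# Pooled average biopsy rate across all doctors
total_n = sum(n for _, n, _ in doctors)
total_k = sum(k for _, _, k in doctors)
p0 = total_k / total_n

# Flag doctors whose rate is significantly above the pooled average,
# using a one-sided normal approximation to the binomial (z > 1.96).
flagged = []
for name, n, k in doctors:
    expected = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = (k - expected) / sd
    if z > 1.96:
        flagged.append((name, k, n, z))

print(f"Pooled biopsy rate: {p0:.2f}")
print(f"{len(flagged)} of {len(doctors)} doctors flagged as significantly above average")

A real analysis would, of course, adjust for case mix and use more careful small-sample statistics, which is exactly the "my patients are different" question discussed below.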

The question that follows is whether this degree of variation is accounted for by the variation within the patient population. That is, if one were applying standards of evidence-based medicine, would the distribution look like this? Or is the distribution skewed by the habits and predispositions of doctors? Is it influenced by a fee-for-service payment regime that encourages more procedures than are necessary? Are some doctors more fearful of malpractice suits and therefore practicing defensive medicine?

I often hear doctors say, when they are presented with these kinds of data, that "my patients are different," and that the data don't prove anything. But that assertion usually has no quantitative support.

BCBS is providing a valuable service in sharing these data with the hospitals in Massachusetts. The BIDMC data indicate that our doctors, like all others, vary within and across practice groups in management of the conditions at hand. We are finding this to be a useful tool in evaluating our practice patterns, both within our own practices and in comparison to others. In the face of these kinds of numbers, it is important to ask the questions.

19 comments:

Anonymous said...

Like most data, these charts raise more questions than answers. For instance, practically every box in the 3rd and 4th quartiles of the first chart is higher, even the ones marked "FAC inpatient" and "FAC outpatient". What does FAC mean and does this have any significance? Also, in a reimbursed environment, how are some docs compensated so much more than others for the same procedure by the same insurance company? (or am I reading this incorrectly?) meniapb

But I think this sort of data is invaluable because, even as the docs seem to dismiss it with "my patients are different," most of them internally want to know why these differences exist. Even if it's presented at a departmental meeting and suggestions are made for further data mining, it will be useful. It's just that it's only scratching the surface so far.

nonlocal MD

Anonymous said...

As we say here in Boston, "Fuh shuah." This is not the complete data set, and there are breakdowns for each practice group.

I'll see if the BCBS folks can answer your FAC question.

Anonymous said...

Not sure what that gobbledy-gook after the ) in my first paragraph was; sorry for the poor proofreading. Maybe I had a seizure and didn't know it. (:

nonlocal

Steven Spear said...

American health care has not kept up with its own success. Medical science was once (and until fairly recently) rudimentary, and the ability to care for patients was constrained.

Then, the feel, intuition, and knack of the artisan, with a master-craftsman approach to treatment, may have made sense.

The same was once true in design professions like engineering and architecture.

That is no longer the case. The science has advanced incredibly far, along with the complexity of the systems of care through which that science is applied.

No longer is the artisan approach justified. Rather, a data-driven, disciplined approach is, in which the community of practice, not just individuals, relentlessly builds useful knowledge that is then applied rigorously.

Steve Spear
Sr. Lecturer MIT
Sr. Fellow, IHI
Author: Chasing the Rabbit: How Market Leaders Outdistance the Competition
http://ChasingTheRabbitBook.com

D Safran said...

FAC signifies "facility". So, for each quartile, FAC inpatient indicates the average % of episode costs accounted for by spending on "inpatient facility" and similarly, the % accounted for by spending on "outpatient facility" (FAC outpatient).

Steven K. Holderness said...

I'd like to offer two perspectives, one business-related and one personal. Having a clear set of key performance indicators (KPIs) is very important in determining whether goals and objectives are met (e.g., reducing operating costs, staying on budget, meeting sales objectives). More importantly, any KPI review has to ensure an apples-to-apples comparison, lest anyone jump to false conclusions and potentially push an organization or company down an incorrect business path. As Anonymous stated previously, "What does FAC mean and does this have any significance?" At a minimum, perhaps a definition of FAC could be provided as a caveat within the associated charts and graphs, alerting readers to the possible ramifications of how this information may (or may not) affect results.

On a personal note: while being treated for Hodgkin's Disease many years ago, after four months of chemo (twice a month for six months), my oncologist stated that there were no signs of cancer in my lymph nodes. Regardless, treatment would continue for the full six months, as this was the standard treatment protocol; otherwise, all results, favorable or unfavorable, would be skewed. As much as I wanted to stop treatment, my oncologist's explanation made perfect business sense to me. After treatment concluded, I completed annual tests to review my health. Those of you in the medical profession know the drill well; I was evaluated each year against an initial baseline of tests, one of which included gallium scans. My tests went well for several years until, once, the location of my testing changed hospitals. The hospital doctor responsible for testing questioned the gallium scan procedure, stating that an MRI or perhaps a CT scan would be preferable and would avoid exposing me to radioactive gallium. Ultimately, my oncologist overruled the onsite doctor and I endured my gallium scan as I had in the past. Why the gallium scan versus the others? Because the baseline, which included the gallium scans required at the time, was already set, and any change in tests would have skewed results going forward. Not only was my oncologist a good doctor, but a good business person as well.

In my personal example, my oncologist was steadfast in not deviating from the standard treatment (ABVD) and testing associated with Hodgkin's Disease at the time (2000). From my limited medical experience with cancer treatments, cancer trials aside, it appears that standards and protocols are closely followed and monitored for reasons that should be apparent to most. How is it that the medical profession appears to follow cancer treatment protocols closely, but, as your example above shows, examinations and biopsies vary with GERD, let alone other ailments? Why shouldn't the same discipline shown in following cancer treatment protocols be applied in your GERD example? As you note, it is impossible to improve processes, and realize any inherent benefits (e.g., improved treatment, reduced costs), without some baseline and standardization.

In any event, makes for an interesting and lively discussion!

Keith said...

The real question in terms of physician variability is how much of the expense designated as "procedure" is attributable to the physician's fee and how much to facility fees. Having had some "oscopies" myself in the past, it seems the largest part of the bill has always been the fee charged by the facility (more often than not, a hospital) rather than the physician. Some cost can be added by the physician, depending on whether he/she biopsies or uses more medication during the procedure, but this is insignificant compared to the facility fee.

In this respect, I have always been amazed that BC and other insurers tolerate paying so much more for outpatient oscopies done in a hospital setting than in outpatient clinics, where the cost is much lower. Rewarding institutions with higher reimbursement only allows them to build fancier facilities with enhanced amenities, but may not add value in terms of reduced errors and such. Thus you get the never-ending cycle of the rich getting richer that you have no doubt witnessed in the Boston area, where the Partners organization seems able to extract higher prices than the local competition despite exhibiting no added value by any measure. This cost likely shows up, I suspect, as the wide differential in the graph you exhibit.

If the intent is to show the cost differential between gastroenterologists, then Blue Cross needs to separate out those costs that are not influenced by the physician (unless you mean for GI docs to shop their services around to the hospital offering the lowest facility fee, which seems impractical) and remove them from the equation. It is a cost largely outside their ability to influence.

76 Degrees in San Diego said...

The education of physicians is primarily "case-based" throughout the years of training. We learn from our predecessors. We also fear missing a diagnosis and, in times of clinical uncertainty, order more tests (you would too). Changing behavior requires convincing evidence that is repeated. No physician ever wants to be named in a legal action about a missed diagnosis. Most feel that not being named in a suit is worth the extra work and cost (and, in some cases, the reward) of doing the additional test/procedure. If your mentor was named in a suit alleging "not enough done," you will learn and apply her/his lesson expeditiously. It is not just evidence; it has to be convincing evidence.

Steve Spear said...

Interesting point by 76 Degrees in San Diego about case based teaching.

Ideally, cases are used as specific examples to illustrate fundamental principles and concepts with the expectation that professionals can then make principle based decisions going forward. For instance, you do specific 'cases' in labs and such with the idea that you are learning general principles of chemistry, physics, engineering and the like.

Case based teaching fails if students walk away with anecdotal reasoning instead, encountering situations and looking for the set of events or circumstances that are a close approximation.

Under any circumstance, anecdotal reasoning is suspect, even more so for complicated situations. In a multivariate setting, one case does not show first-order or interaction effects well.

76's comments reinforce the notion of an artisan mentality, not a scientifically disciplined one.

Steve Spear

Sr. Lecturer, MIT
Sr. Fellow, IHI
http://ChasingTheRabbitBook.Com

Anonymous said...

Transferred from Facebook:

Emily: Wow. I had two endoscopies with biopsy, the second being a follow-up because the first one revealed "a minute focus" of Barrett's esophagus. But after the second biopsy was negative for BE (but all of a sudden positive for H. pylori??), I found out that the slides from the first endoscopy had been read twice, and only one of the pathologists had diagnosed BE. I also got a rubber-hose-up-the-nose test, in which a pH monitor was inserted in my tum for 24 hrs. Conclusion: equivocal results, *a lot* of tests (no idea how much they cost), and years of Prilosec to control symptoms - which I would've had even without the tests. (Sorry for the long anecdote...)

Bev: As a pathologist, I can comment that what probably happened was that, in the face of the pathologists' disagreement, they recommended obtaining more biopsies, since Barrett's can be patchy. There is controversy over how to manage Barrett's (e.g. how often to do endoscopy) since only a small percentage of Barrett's goes on to develop cancer... more reason for "evidence-based medicine."

Anonymous said...

Am I correct in seeing on the second graph that the average case load per doctor is 24? So the difference between an "average" and a "significantly higher" doctor is about 13/24 patients versus 18/24? Given that one wouldn't really EXPECT all doctors to have the same biopsy frequency -- the guy who gets lots of referrals from a transplant service full of immunosuppressed patients versus the guy who gets referrals from the community, or just your PCP's threshold for referral -- this doesn't seem very shocking to me. (A rough check of those two fractions is sketched below.)


At the least I would probably want to see some sort of look across procedures. If the same people doing more biopsies also do more colonoscopies and other procedures, then one has the makings of an argument.
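To put rough numbers on that intuition, here is a minimal sketch. It is not from BCBS; the 13/24 and 18/24 figures are simply the ones read off the graph above, and the Wald intervals are only an approximation.

import math

def wald_interval(k, n, z=1.96):
    """Approximate 95% confidence interval for a proportion k/n (Wald method)."""
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Figures read off the second chart: an "average" doctor biopsies roughly
# 13 of 24 GERD episodes; a flagged doctor roughly 18 of 24.
for label, k, n in [("average doctor", 13, 24), ("flagged doctor", 18, 24)]:
    p, lo, hi = wald_interval(k, n)
    print(f"{label}: {k}/{n} = {p:.2f}, approx 95% CI ({lo:.2f}, {hi:.2f})")

# With only about 24 episodes per doctor the two intervals overlap
# substantially, which is the commenter's point: some of the doctor-to-doctor
# spread is sampling noise plus differences in referral mix.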

Anonymous said...

Thank you. Your comment reminds me of a quote from Paul Batalden: "Measurement is a reductive act. We measure an aspect of a phenomenon. We often start with one or a few measures. A "natural" reaction is to want a more representative picture of the phenomenon -- hence a "breeder reactor" for measurement."

His point, I think, is that you have to start somewhere and see what you can learn, but if you think you are ever going to satisfy all of the data needs that definitively prove something, you will never reach that point of certainty.

76 Degrees in San Diego said...

Prof. Spear's concern is understood. However, published studies by medical educators point to case-based learning as superior to didactic lectures for conveying information to medical learners. In addition, in practice you are confronted by the person in front of you, as opposed to presenting a lecture or writing a publishable paper. Check the curriculum for the school up the river... the direction is for MORE case-based learning in the first two years. There is evidence of more effective physician practice following this course of education...

Anonymous said...

An "our patients are differnt" example right from BIDMC can be used to explore sources for the difference highlighted in the graphs you display here. An emerging GI diagnosis over the past 15 years is eosinophilic esophagitis. Patients often present as young, otherwise healthy individuals with symptoms of acid reflux not responsive to standard medical treatment, dysphagia, or food impaction. On endoscopy linear furrows or rings can be seen, but often the esophagus appears completely normal. It is not until a biopsy is taken that demonstrates eosinophils have infiltrated into the esophageal tissue that the diagnosis can be confirmed. BIDMC is emerging as the local and likely regional leader in correctly diagnosing and treating this often overlooked condition. Perhaps the knowledge that GERD-type symptoms not responding to standard treatment require biopsy independent of esophageal appearance is just the start to the potential differences the graph doesn't highlight between quartile 1 and quartile 4.

Anonymous said...

Re: Paul's response.

I agree that it's interesting data. But I think it's important to understand the null hypothesis for these arguments. This is very convincing data to reject the null hypothesis that physicians have similar rates of biopsies. But the null hypothesis that is relevant to policy and clinical practice is that physicians have similar rates of APPROPRIATE biopsies.

Not to be a downer. I'm glad you are working on this, and I would be very interested to see whether dissemination of this information changes practice at all (does "You order more CTs than 98% of other gastroenterologists" make them order fewer?). I am just touchy on the subject because I feel that the popular thinking is actually the opposite: based on the Dartmouth and other studies, there is now a widespread belief among policymakers and non-clinicians that practice variation is massive, intrinsically bad, and wasteful.

Anonymous said...

Let me hasten to clarify my previous comment to say that I DO believe practice variation can and must be reduced. My question is, how do we accomplish that specifically, taking into account all the variables the commenters have cited?

nonlocal

Anonymous said...

anon 7:25:

Based on the Dartmouth data, the degree of unexplained practice variation they demonstrate IS "massive, intrinsically bad and wasteful." If you were the GI doc who ordered more CT scans than 98% of the others, wouldn't you want to know why? (not guess why, or bluster why, or deny why).

nonlocal MD

Anonymous said...

Indeed, but the Dartmouth study is not the be-all and end-all. In particular, there has recently arisen a counter-movement suggesting that the Dartmouth investigators are at this point fairly biased in pushing their ideas. Studies from other centers have suggested that the problem is (like everything in life) more complicated than the simple answer the Dartmouth study suggests; for example, the congestive heart failure versus cost study that UCLA was part of.


Nobody questions that a lot of practice variation is bad and wasteful. But a lot of non-clinicians seem to think that the ideal world is one with virtually NO practice variation, and they are pushing hard for mechanisms to create this. And that's what I dispute. At this point everybody's on the "practice variation is bad" train, so it never hurts to throw one's contrarian voice out there. :)

J Taddeo said...

I’m from Focused Medical Analytics, the company that worked with BCBSMA to generate the variation analyses referenced here. Mr. Levy is correct: a physician’s predisposition, his/her fear of malpractice, and the reimbursement structure can all cause significant variation. What we find most often in our work is that physicians simply do not know what their peers are doing. Presenting physicians with clear, accurate, and non-judgmental data showing how they compare to their peers is a very powerful tool to engage them in changing behavior. We have many examples of how this process actually results in physicians changing the way they practice.