As payers place increasing emphasis on the value of care over the volume of care, oncology practices are increasingly looking for ways to compare their performance with that of their competitors. Benchmarking, as this is called, has historically been difficult in oncology because of barriers to the flow of information and the complexities of care. However, such performance comparisons are now becoming part of payment models, and practices realize they need comparative data to understand how well they perform relative to their competitors and whether there is room for improvement.
The newly established Merit-based Incentive Payment System (MIPS) from CMS uses comparative data to determine whether practices have improved their performance and should receive financial incentives or penalties. CMS' Oncology Care Model (OCM) also collects practice data on a variety of performance measures; in turn, practices receive comparative information that enables them to understand how well they are controlling expenses and managing patient outcomes.
But practices say this CMS information has a problem: it arrives months after the performance periods being measured, and much other information that would be useful is not available at all. This recognition of the need for comparative data—in real time—has convinced a growing number of oncology practices to begin sharing anonymized data to see how they measure up to others.
“There’s currently no way to create many of the benchmarks that practices most covet, because they require data from some outside source like Medicare, so the information you need either isn’t available at all or it comes 9 months late, which is nearly as bad,” according to Robert “Bo” Gamble, director of Strategic Practice Initiatives at the Community Oncology Alliance (COA). “The good news is that there are plenty of valuable benchmarks that can be compiled with data that practices have, and we are just beginning to tap their potential for operational improvement.”
Strictly defined, benchmarks are standardized measures of performance that businesses have long used to compare themselves with competitors, as well as to compare current and past performance. They have found their place in medicine—“best hospital” lists all compare institutional performance on standard metrics—but benchmarking remains rare among independent oncologists. Gamble estimates that no more than 10% of COA’s members have engaged in any benchmarking beyond in-house analyses of a current year’s performance in comparison with performances in years past.
Figure. Anonymized Benchmarks for an Individual Practice
COA has launched a program called COAnalyzer to help practices that already use benchmarks and entice others to give them a try. Any practice that submits anonymized information about its own performance on a variety of metrics will be able to see how its numbers fit into the range of figures supplied by other participating practices.
The first benchmarks from COA focus on oral prescription metrics, such as how fast each practice’s pharmacy fills orders, how likely its patients are to refill prescriptions promptly, and how long inventory sits on the shelves. This information can provide valuable insight into needed improvements. For example, a practice that uses those benchmarks and finds slow inventory turnover may be tying up too much capital by prepurchasing too many expensive drugs. A practice with an unusually large percentage of patients who do not refill prescriptions on time will know it must do more to get its patients to take medication as directed.
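The inventory and adherence metrics described above reduce to simple ratios. As an illustrative sketch only (the class, field names, and example figures are hypothetical, not COAnalyzer's actual data model):

```python
from dataclasses import dataclass

@dataclass
class PharmacyMetrics:
    """Illustrative oral-prescription benchmark inputs; all names are hypothetical."""
    cost_of_drugs_dispensed: float  # annual cost of drugs dispensed ($)
    average_inventory_value: float  # average value of on-hand inventory ($)
    refills_on_time: int            # refills picked up promptly
    refills_due: int                # refills that came due

    def inventory_days_on_hand(self) -> float:
        # Average days a dollar of inventory sits on the shelf; a high
        # number suggests capital tied up in prepurchased expensive drugs.
        turnover = self.cost_of_drugs_dispensed / self.average_inventory_value
        return 365.0 / turnover

    def on_time_refill_rate(self) -> float:
        # Share of due refills picked up promptly; a low rate flags an
        # adherence problem the practice should address with patients.
        return self.refills_on_time / self.refills_due

m = PharmacyMetrics(cost_of_drugs_dispensed=2_400_000,
                    average_inventory_value=300_000,
                    refills_on_time=880, refills_due=1000)
print(round(m.inventory_days_on_hand(), 1))  # 45.6 days on the shelf
print(m.on_time_refill_rate())               # 0.88
```

A practice well above peers on days-on-hand, or well below on refill rate, knows exactly where to look first.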
“As time goes on, we plan to roll out other helpful benchmarks,” Gamble said. “‘Accounts receivable days outstanding’ is a big one. Good practice managers know that number off the top of their heads, but we’re about to give them an opportunity to see how their numbers compare to industry norms. We’re also going to put out numbers for management ratios and staffing ratios.”
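Accounts receivable days outstanding is a standard ratio, and the peer comparison amounts to locating one practice's value in an anonymized distribution. A minimal sketch under assumed inputs (the function names and all figures, including the peer values, are invented for illustration):

```python
from bisect import bisect_left

def ar_days_outstanding(accounts_receivable: float, annual_charges: float) -> float:
    # Average days between billing and payment: the AR balance
    # divided by average charges per day.
    return accounts_receivable / (annual_charges / 365.0)

def percentile_among_peers(value: float, peer_values: list[float]) -> float:
    # Percent of anonymized peer practices whose value falls below ours.
    peers = sorted(peer_values)
    return 100.0 * bisect_left(peers, value) / len(peers)

days = ar_days_outstanding(accounts_receivable=1_500_000,
                           annual_charges=12_000_000)
print(round(days, 1))  # 45.6

# Hypothetical anonymized AR-days figures from other practices:
peers = [32.0, 38.0, 41.0, 44.0, 52.0, 60.0, 75.0]
print(round(percentile_among_peers(days, peers), 1))  # 57.1 -> slower collections than ~57% of peers
```

For a metric where lower is better, such as AR days, landing in a high percentile is the signal that collections need attention.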
Benchmarking may be relatively new to independent oncology practices, but it has already proven its value in many other sectors. The first major company to use benchmarks to compare itself with competitors was Xerox, which calculated in 1979 that its ratio of indirect to direct staff was twice as high as that of more successful competitors and that its copiers had 7 times as many defects as its competitors' did. Comparative data showed where the struggling company should focus its improvement efforts and facilitated a dramatic turnaround: Xerox cut its manufacturing costs in half and its product defects by 67%.