Quality Reporting Automation Stalls As a Result of Poor EHR Data Entry

Fill rates of data elements on electronic health records are a major contributing factor limiting the uptake of automated services.

Robert Moldwin, MD

Quality initiatives leveraging automated extraction of data elements from electronic health records (EHRs) have not panned out in the oncology sector. Although reporting of quality care measures is essential to determining reimbursement, authors of a recent report in JCO Clinical Cancer Informatics highlighted that fill rates of data elements on EHRs are a major contributing factor limiting the uptake of automated services.1

“Federal quality initiatives for oncology have recently centered around the automated extraction of high-quality data elements from EHRs,” the authors wrote. “[In the study,] we demonstrate that automated clinical quality calculations are not feasible at present, with negative implications for public health reporting, research, and other secondary uses of EHR data. Solving this problem will require widespread creation and adoption of common data elements along with improvements in the routine capture and exchange of structured data.”

Quality care incentives are at the core of reimbursement models for Medicare, which reward value over volume as part of the Quality Payment Program. The Medicare Access and CHIP Reauthorization Act of 2015 streamlined multiple quality programs under the Merit-based Incentive Payment System (MIPS). The established clinical quality markers, which adjust payments based on performance in the program, include several oncology-specific measures determined by the Centers for Medicare and Medicaid Services (CMS). The markers are evaluated in 4 domains (quality, cost, improvement activities, and promotion of interoperability) and serve as incentives for physicians to maximize reimbursements.2

There are several criticisms of MIPS, including whether the system accurately captures quality and whether the payout exceeds the cost of participation.2 Additionally, the steps necessary to collect and report these statistics place a burden on practice administration, and incomplete reporting has left valuable cost savings on the table.1,2 For example, results of a 2019 analysis of the time and financial costs associated with MIPS showed that physicians, clinical staff, and administrators spent a total of 201.7 hours annually on MIPS-related activities.2 Meanwhile, the cost of participation averaged $12,811 per physician.

Automation Stalls Without Available Data

Investigators of the feasibility study aimed to determine whether the data housed in EHRs could be used to automatically derive the clinical quality measures from oncology MIPS. Automating the extraction of appropriate data offers practices an opportunity to reduce the administrative burden involved in the process.1

For the analysis, investigators referenced the 19 clinical quality measures that were applicable to cancer care in 2018. Investigators accessed patient records through CancerLinQ, a learning health system developed by the American Society of Clinical Oncology (ASCO). The platform extracts data from participating practices, which submit EHR data used to populate a big-data platform. At the time of the analysis, there were 63 participating practices using 8 EHR vendors and a database of over 1.63 million patients with a diagnosis of primary cancer.1,3

The key takeaway from the study results was poor population of the 63 available data elements. Investigators reported a mean fill rate of only 23% across practices, and no entries were found for 14 data elements in any practice. At least 90% of practices provided information for 23 data elements.1 Fill rates varied across EHR vendors, with only 3 registration- and reimbursement-related options available in all platforms (diagnosis_code, age_dob, and gender). Diagnosis date (84.7%) and tobacco use (83.5%) were filled most frequently.

Thirty-two data elements had a fill rate of less than 10%, and 21 elements exceeded a 25% fill rate. As the fields narrowed, the bottom 50% of the data elements had a fill rate of 0.7%; the top elements had a fill rate of 42%. In total, 11 of the 19 oncology-related MIPS measures had 0% calculability because of no available data. Ultimately, only 2 measures, both concerning tobacco use, a common component in quality initiatives, had calculability greater than 1%, at 32.2% and 31.4%.1
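To make the metrics above concrete, a fill rate is the fraction of records with a non-empty value for a data element, and a measure is calculable for a record only when every element it depends on is filled. The following is a minimal sketch of that arithmetic, not the study's actual pipeline; the patient records are invented, and only the element names diagnosis_code, age_dob, gender, and tobacco_use are drawn from the article.

```python
# Hypothetical sketch of fill-rate and calculability computations.
# The records below are invented illustrations, not study data.
records = [
    {"diagnosis_code": "C50.9", "age_dob": "1955-03-01", "gender": "F", "tobacco_use": "never"},
    {"diagnosis_code": "C34.1", "age_dob": None, "gender": "M", "tobacco_use": None},
    {"diagnosis_code": None, "age_dob": "1948-07-12", "gender": None, "tobacco_use": "former"},
]

elements = ["diagnosis_code", "age_dob", "gender", "tobacco_use"]

def fill_rate(records, element):
    """Fraction of records with a non-empty value for this element."""
    filled = sum(1 for r in records if r.get(element) not in (None, ""))
    return filled / len(records)

rates = {e: fill_rate(records, e) for e in elements}

# A measure is calculable for a record only if every element it depends on
# is filled; e.g., a tobacco-use measure that needs both diagnosis_code
# and tobacco_use:
needed = ("diagnosis_code", "tobacco_use")
calculable = sum(
    1 for r in records
    if all(r.get(e) not in (None, "") for e in needed)
) / len(records)
```

With many required elements each filled inconsistently, the fraction of records where all of them coincide shrinks quickly, which is consistent with the low calculability the study reports.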

“One challenge is the complexity and frequent changes of oncology data elements, which do not lend themselves easily to standardization and maintenance of interoperable, computer-readable, and up-to-date data dictionaries,” the authors wrote. “Another challenge is the lack of a mandate to implement data capture standards within EHRs. Data in structured EHR fields vary widely among implementations because data capture standards have not been widely adopted by EHR systems, and also because practices do not routinely share data capture templates.”

The underfilling and inconsistency of data elements across EHRs have made automation unviable for MIPS reporting. Investigators noted that these results highlight the need for improvements in the routine capture of data and the creation of common data elements across practices. Another suggestion from the authors is a CMS-based incentive for community-based practices to develop national standards for structured data capture.

Investigators involved in one project, Minimal Common Oncology Data Elements (mCODE), have aimed to meet the expanding interoperability requirements for the documentation of data for patients with cancer.4 Six high-level domains have been developed by a working group at ASCO: patient data, laboratory and vitals, disease, genomics, treatment, and outcomes. In total, the 23 established mCODE profiles present an opportunity for the capture of 90 data elements, which would better represent the landscape of cancer care through shared knowledge.

Despite the notable data capture offered by projects such as mCODE, standard EHRs are not designed to capture the array of oncology-specific information, including staging, biomarkers, and adverse events, each of which provides quality care information.

The authors of the feasibility study noted that customization of EHRs further complicates the advancement of automation. “Significant customization to accommodate local clinical workflows and data capture preferences… such as clinical workflows, decreases standardization and interoperability and increases the difficulty of data extraction,” the authors wrote. They concluded that there is a long road ahead before the gap between quality measures and high-quality data can be closed.

References

  1. Schorer AE, Moldwin R, Koskimaki J, et al. Chasm between cancer quality measures and electronic health record data quality. JCO Clin Cancer Inform. Published online January 6, 2022. doi:10.1200/CCI.21.00128
  2. Khullar D, Bond AM, O’Donnell EM, Qian Y, Gans DN, Casalino LP. Time and financial costs of physician practices to participate in the Medicare merit-based incentive payment system. JAMA Health Forum. 2021;2(5):e210526. doi:10.1001/jamahealthforum.2021.0527
  3. Potter D, Brothers RM, Kolacevski A, et al. Development of CancerLinQ, a health information learning platform from multiple electronic health record systems to support improved quality of care. JCO Clin Cancer Inform. 2020;4:929-937. doi:10.1200/CCI.20.00064
  4. Osterman TJ, Terry M, Miller RS. Improving cancer data interoperability: the promise of the Minimal Common Oncology Data Elements (mCODE) initiative. JCO Clin Cancer Inform. 2020;4:993-1001. doi:10.1200/CCI.20.00059