Mapping the Quality Improvement Landscape

A Society of Critical Care Medicine (SCCM) workgroup focused on quality initiatives and relationships with external quality organizations has concluded an extensive review of the current quality landscape, public reporting and the expansion of comparative effectiveness research (CER). The critical care community should understand these initiatives, as they are sure to affect intensive care unit (ICU) practice as reimbursement and performance measures become increasingly tied to quality rather than quantity. The Society is monitoring many of these measures as they relate to critical care, and is dedicated to building awareness in this arena.
 
Overlapping Roles and Missions
A patchwork of organizations, agencies and committees is active in advancing quality measurement and improvement for critically ill and injured patients in the United States. Although these organizations represent patients, purchasers and healthcare providers, critical care constitutes only a small component of their portfolios. Substantial overlap and lack of integration among these groups result in under-representation of critical care priorities. Confusion about the individual and shared roles and missions of these agencies and organizations is pervasive among medical providers and the public, an issue the workgroup attempted to resolve.
 
The organizations with interest in critical care quality measurement and improvement and their roles in the quality arena are summarized in Table 1. To simplify this landscape and provide some insight into the primary and secondary missions and activities of these organizations, the workgroup established four major “quality mission” categories:
  • Measure development and validation (M)
  • Measure endorsement and approval for enforcement and publication (E)
  • Measure implementation and enforcement through credentialing and financial incentive mechanisms (I)
  • Promotion, by publicizing quality measure performance and leveraging the opinions of the public, legislators, purchasers and insurers of healthcare, and professional organizations (P)
These missions and operational goals overlap both within and among the agencies and organizations. Each has variable regulatory and operational capacity to influence and direct quality performance improvement.
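To make the taxonomy concrete, here is a minimal sketch (in Python, purely illustrative) of how the four mission categories can be encoded and combined; the organization roles shown are generic stand-ins, not the actual assignments in Table 1.

```python
from enum import Flag, auto

class QualityMission(Flag):
    """The workgroup's four 'quality mission' categories."""
    M = auto()  # Measure development and validation
    E = auto()  # Measure endorsement and approval
    I = auto()  # Implementation/enforcement via credentialing and incentives
    P = auto()  # Promotion and publicity of quality measure performance

# Illustrative, generic roles (not real organizations): a single entity
# can hold several missions at once, which is where overlap arises.
portfolio = {
    "measure developer":  QualityMission.M | QualityMission.P,
    "endorsement body":   QualityMission.E,
    "payer/credentialer": QualityMission.I | QualityMission.P,
}

for role, missions in portfolio.items():
    print(f"{role}: {missions}")
```

Because the categories are flags rather than exclusive labels, one organization can legitimately carry several missions at once, which is exactly the overlap the workgroup describes.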
 
Public Reporting
Patients are very interested in the quality of care provided by practitioners and health systems;(1) whether they use that information in a way that actually improves quality is less clear. Studies show that the way quality information is presented to patients strongly affects whether it is used.(2) Epidemiologic studies suggest that patients use available information less than might be expected. A landmark study showed that, five years after New York’s guide to coronary artery bypass graft surgery was made available, only 1% to 2% of patients knew the rating of their hospital or surgeon.(3) More recently, a well-designed study found that only 10% of Americans were using quality information to choose a hospital.(4)
 
Does public reporting work? Two systematic reviews reveal partial evidence pointing to “yes,” though neither was specific to critical care.(2,5) The weight of evidence suggests that public reporting efforts have prompted hospitals to work on improvement and have had moderate impact on consumers’ choices of health plans; in other areas, effects have been mixed or null. However, interpretation of these data is a complex exercise, as summed up in the title of an editorial accompanying one of these systematic reviews: “What Can We Say About the Impact of Public Reporting? Inconsistent Execution Yields Variable Results.”(6)
 
The Current State
The past decade has seen an explosion in voluntary, payer-required, and legislatively mandated public reporting. Although these efforts may appear to some as a new phenomenon, they have in fact been in operation for a while. These reports are now a way of life in some states: if this revolutionary idea had been a child born when surgeon-specific outcomes were first published in New York in 1991,(7) then he would be grown and going off to college this year. In general, existing public reporting can be thought of as specific to provider, group, health plan, or hospital. The Agency for Healthcare Research and Quality (AHRQ) maintains a publicly available compendium of free and commercial report cards; as of this writing, there are 209 such scorecards.(8) Other publicly available data include:
  • Provider-specific: New York, Massachusetts and Pennsylvania post provider-specific outcome data for various cardiovascular procedures.(9-12)
  • Practice group-specific: The Commonwealth of Massachusetts posts various quality measures at this level.
  • Hospital-specific: A large number of sources offer hospital-level information; the two most widely known are the Centers for Medicare & Medicaid Services’ Hospital Compare(13) and the Leapfrog Group.(14)
  • ICU-specific: At least one state, California, provides publicly available ICU mortality rates and data on process measures for ventilator-associated pneumonia.(15)
In addition, a broad array of states and health plans now require public reporting of hospital-acquired infections. A compendium is maintained by the nonprofit Consumers Union.(16)
 
Anticipated Future Directions
Public reporting requirements are expected to expand in the future. Existing data suggest that public reporting prompts healthcare organizations to attempt meaningful quality improvement. There are strong ethical arguments in favor of public reporting, but it is also important to consider the potential harms, including:
 
Reduced access to care. When outcome data are reported publicly, providers may (consciously or unconsciously) provide care only to those patients who are likely to have better outcomes. This limits access to care for the most desperately ill. Empirical evidence indicates that this has occurred in cardiac surgery and may occur in percutaneous coronary interventions.(17-20)
 
Increasing disparities. Although quality improvement may narrow disparities, there is some evidence that – if poorly designed – public reporting or pay-for-performance programs can also widen healthcare disparities.(21,22)
 
Incorrect conclusions. The creation of accurate comparative reports is methodologically and statistically complex.(23,24) This goes far beyond risk/case-mix adjustment (which carries its own set of important risks)(25) and requires complex accounting for hierarchical clustering among providers and hospitals.(23,24,26-28) Subtle alterations in statistical methodology result in meaningful changes in which hospitals are classified as outliers.(29) Without rigorous attention to these details, policymakers and patients may draw incorrect conclusions about the quality of ICUs.
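To illustrate why methodology matters, the sketch below is a minimal, invented example (simulated hospital volumes and deaths; not real data or any agency’s actual algorithm). It flags high-mortality outliers twice, once from raw rates and once after an empirical-Bayes-style shrinkage of the kind hierarchical models produce, and the two rules can disagree, especially for low-volume hospitals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ICUs: admission counts for six hospitals of very different
# sizes; the "true" mortality rate is the same (12%) everywhere.
n = np.array([40, 60, 120, 500, 800, 1500])   # admissions per hospital
deaths = rng.binomial(n, 0.12)                # simulated death counts

raw = deaths / n                              # raw mortality rates
pooled = deaths.sum() / n.sum()               # overall (grand mean) rate

# Empirical-Bayes-style shrinkage: each hospital's rate is pulled toward
# the pooled mean, with low-volume hospitals pulled hardest. k is a
# tuning "pseudo-count" standing in for the prior strength a fitted
# hierarchical model would estimate.
k = 200
shrunk = (deaths + k * pooled) / (n + k)

# Flag "outliers": rates more than 2 binomial standard errors above pooled.
se = np.sqrt(pooled * (1 - pooled) / n)
print("raw-rate outliers:", np.where(raw > pooled + 2 * se)[0])
print("shrunken outliers:", np.where(shrunk > pooled + 2 * se)[0])
```

With identical underlying data, the two rules can classify different hospitals as outliers; that sensitivity to modeling choices is exactly the concern raised by Glance et al.(29)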
 
Comparative Effectiveness Research
Several landmark events have occurred in the past two years,(30) beginning with the Institute of Medicine (IOM) report(31) calling for a new focus on research to support improved decision making about healthcare interventions for both physicians and patients. Those priorities were defined in subsequent publications by the IOM(30,32) and culminated in the American Recovery and Reinvestment Act of 2009 (ARRA),(33) which earmarked over $1 billion to support this research. About $300 million is targeted to AHRQ to expand its existing Effective Health Care Program. Of the remaining funds, the National Institutes of Health and the Office of the Secretary of Health and Human Services will each receive about $400 million. These funds will be distributed by the end of fiscal year 2010 and include the creation of a federal council on CER.
Subsequently, numerous societies have published what they believe to be their highest-priority items for focused research,(34-36) followed by learned commentary(37,38) on the initiative and those priorities. Of the initial 100 highest-priority topics defined by the IOM (the complete list can be viewed at www.iom.edu/cerpriorities), only three(32) are devoted to trauma, emergency medicine and critical care medicine. This is despite critical care consuming more than 0.66% of total U.S. gross domestic product per annum.(39)
 
Definition. The IOM has defined CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition, or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers and policy makers to make informed decisions that will improve healthcare at both the individual and population levels.”(31)
 
Applications to Critical Care. Strict inclusion and exclusion criteria are required in randomized controlled clinical trials (RCTs) to determine the benefit of a particular intervention under ideal circumstances. ICUs initiate and monitor multiple evidence-based treatment strategies on every patient, every day, each with a carefully defined, anticipated effectiveness demonstrated by RCTs conducted under the most ideal and controlled of circumstances. Conducting CER in environments where intensive care patients are treated would provide the opportunity to identify the interventions offering the most promise in this population of ever-increasing complexity. CER could clarify which subgroups of patients are likely to experience the greatest benefit, something the more limited applicability of an RCT cannot demonstrate. Professional organizations have written position statements regarding this type of research.(40) Traditional RCTs answer the question “does this work,” not “which is better,” so other methodologies may be needed.(32)
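As a toy illustration of the gap between trial efficacy and real-world effectiveness, the simulation below uses invented numbers (a severity-dependent treatment effect and a strict eligibility cutoff) to show how an estimate from a narrowly selected trial population can overstate the benefit seen across all comers:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Invented population: a severity score drives both trial eligibility
# and how well the intervention works.
severity = rng.uniform(0, 1, N)
treated = rng.integers(0, 2, N).astype(bool)

# Assume the intervention reduces mortality risk, but less at high severity.
base_risk = 0.10 + 0.30 * severity
effect = 0.08 * (1 - severity)                 # absolute risk reduction
risk = np.where(treated, base_risk - effect, base_risk)
died = rng.uniform(0, 1, N) < risk

def arr(mask):
    """Absolute risk reduction (control minus treated) within a subgroup."""
    return died[mask & ~treated].mean() - died[mask & treated].mean()

eligible = severity < 0.3                      # strict RCT inclusion criteria
print(f"efficacy (RCT-eligible only): {arr(eligible):.3f}")
print(f"effectiveness (all comers):   {arr(np.ones(N, bool)):.3f}")
```

CER-style analyses ask the complementary question: among the strategies actually in use, which works best in exactly this kind of heterogeneous population?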
 
Challenges. RAND COMPARE has published a thoughtful analysis measuring the potential of CER against nine dimensions of care:(41)
  • Spending           
  • Health
  • Consumer financial risk
  • Coverage
  • Waste
  • Capacity
  • Reliability
  • Operational feasibility
  • Patient experience
Although CER has great potential and theoretical applications to each domain, most effects of this research model have not been studied. A recent cost-effectiveness study found that 20% of treatments and preventive measures save money compared with an alternative, 4% to 6% increase costs and lead to worse outcomes, and 75% confer a benefit while increasing costs.(42) Historically, the ICU has not been an environment where consumers base their choice of provider or hospital on the cost of care. Unless the results of these initiatives actually lead to the adoption of more effective, efficient, compassionate and reliably appropriate care, the health of neither the individual nor the population will improve.
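For context on how such studies classify interventions, the snippet below shows a standard incremental cost-effectiveness ratio (ICER) calculation; all dollar and quality-adjusted life-year (QALY) figures are invented for illustration.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    gained when moving from the old strategy to the new one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# A "75%" case: more effective, but at higher cost.
print(icer(cost_new=58_000, cost_old=50_000, qaly_new=6.2, qaly_old=6.0))
# -> 40000.0 dollars per QALY gained

# A "20%" (cost-saving) case would have cost_new < cost_old with
# qaly_new >= qaly_old, so no cost-benefit trade-off arises.
```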
 
The Quality Initiatives and Relationships with External Quality Organizations Workgroup includes chair Ivor S. Douglas, MD; Daniel L. Herr, MD, FCCM; William A. Brock, MD, FCCM; Michael Howell, MD; Carol Thompson, CCRN, PhD, FCCM; and Lori Harmon, RRT, MBA.
 
References
 
1. Sofaer S, Crofton C, Goldstein E, et al. What do consumers want to know about the quality of care in hospitals? Health Serv Res. 2005; 40(6 Pt 2):2018‐2036.
2. Faber M, et al. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care. 2009; 47:1-8.
3. Schneider EC, et al. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998; 279:1638‐1642.
4. Kaiser Family Foundation and Agency for Healthcare Research and Quality. 2006 update on consumers' views of patient safety and quality information. Available at: http://www.kff.org/kaiserpolls/upload/7560.pdf. Accessed October 8, 2009.
5. Fung CH, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008; 148:111‐123.
6. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008; 148:160‐161.
7. Topol EJ, et al. Scorecard cardiovascular medicine. Its impact and future directions. Ann Intern Med. 1994; 120:65‐70.
8. Agency for Healthcare Research and Quality. Health care report card compendium. Agency for Healthcare Research and Quality Web site. Available at: http://www.talkingquality.gov/compendium. Accessed March 7, 2010.
9. New York State Department of Health. Cardiovascular disease data and statistics. New York State Department of Health Web site. Available at: http://www.health.state.ny.us/statistics/diseases/cardiovascular. Accessed June 28, 2009.
10. Pennsylvania Health Care Cost Containment Council. Cardiac surgery in Pennsylvania, 2006-2007. Available at: http://www.phc4.org/reports/cabg/07/default.htm. Accessed March 7, 2010.
11. Massachusetts Data Analysis Center (Mass-DAC). Adult coronary artery bypass graft surgery in the Commonwealth of Massachusetts: fiscal year 2007 report. Available at: http://www.massdac.org/sites/default/files/reports/CABG%20FY2007.pdf. Accessed June 28, 2009.
12. Massachusetts Data Analysis Center (Mass-DAC). Percutaneous coronary intervention in the Commonwealth of Massachusetts: fiscal year 2007 report. Available at: http://www.massdac.org/sites/default/files/reports/PCI%20FY2007.pdf. Accessed June 28, 2009.
13. Department of Health and Human Services. Hospital Compare. U.S. Department of Health and Human Services Web site. Available at: http://www.hospitalcompare.hhs.gov. Accessed June 28, 2009.
14. Leapfrog Group. Leapfrog hospital ratings. The Leapfrog Group Web site. Available at: http://www.leapfroggroup.org/cp. Accessed June 28, 2009.
15. California HealthCare Foundation. Rating hospital quality in California. California HealthCare Foundation Web site. Available at: http://www.calhospitalcompare.org. Accessed October 16, 2009.
16. Consumers Union. Hospital-acquired infections: state disclosure reports. Safe Patient Project Web site. Available at: http://www.safepatientproject.org/topic/hospital_acquired_infections. Accessed October 16, 2009.
17. Resnic FS, et al. The public health hazards of risk avoidance associated with public reporting of risk‐adjusted outcomes in coronary intervention. J Am Coll Cardiol. 2009; 53:825‐830.
18. Apolito RA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008; 155:267‐273.
19. Moscucci M, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol. 2005;45:1759‐1765.
20. Schneider EC, et al. Influence of cardiac‐surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists. N Engl J Med. 1996; 335:251‐256.
21. Casalino LP, et al. Will pay‐for‐performance and quality reporting affect health care disparities? Health Aff (Millwood). 2007; 26:w405‐w414.
22. Chien AT, et al. Pay for performance, public reporting, and racial disparities in health care: how are programs being designed? Med Care Res Rev. 2007; 64(5 Suppl):283S‐304S.
23. Shahian DM,  et al. Cardiac surgery report cards: comprehensive review and statistical critique. Ann Thorac Surg. 2001; 72:2155‐2168.
24. Shahian DM, et al. Massachusetts cardiac surgery report card: implications of statistical methodology. Ann Thorac Surg. 2005; 80:2106‐2113.
25. Iezzoni LI. The risks of risk adjustment. JAMA. 1997; 278:1600‐1607.
26. Landon BE, et al. Performance measurement in the small office practice: challenges and potential solutions. Ann Intern Med. 2008; 148:353‐357.
27. Landon BE, et al. Physician clinical performance assessment: prospects and barriers. JAMA. 2003; 290:1183‐1189.
28. Hofer TP,  et al. The unreliability of individual physician "report cards" for assessing the costs and quality of care of a chronic disease. JAMA. 1999; 281:2098‐2105.
29. Glance LG, et al. Impact of changing the statistical methodology on hospital and surgeon ranking: the case of the New York State cardiac surgery report card. Med Care. 2006; 44:311‐319.
30. Institute of Medicine. Board on Health Care Services Comparative Effectiveness Research Prioritization. Initial national priorities for comparative effectiveness research. Washington, DC: National Academies Press; 2009.
31. Eden J; Institute of Medicine (U.S.) Committee on Reviewing Evidence to Identify Highly Effective Clinical Services. Knowing what works in health care: a roadmap for the nation. Washington, DC: National Academies Press; 2008.
32. Luce BR, et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med. 2009; 151:206-209.
33. H.R. 1, American Recovery and Reinvestment Act of 2009. 1st Session, 111th Congress, 2009. Available at: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf.
34. Chalkidou K, et al. Comparative effectiveness research priorities: identifying critical gaps in evidence for clinical and health policy decision making. Int J Technol Assess Health Care. 2009; 25:241-248.
35. Gibbons RJ, et al. The American Heart Association's principles for comparative effectiveness research: a policy statement from the American Heart Association. Circulation. 2009; 119:2955‐2962.
36. Stoner M, et al. Society for Vascular Surgery position statement: Comparative effectiveness research in vascular disease management. J Vasc Surg. 2009; 49:1592‐1593.
37. Berenson RA, et al. Does telemonitoring of patients‐‐the eICU‐‐improve intensive care? Health Aff (Millwood). 2009; 28:w937‐w947.
38. Iglehart JK. Prioritizing comparative‐effectiveness research‐‐IOM recommendations. N Engl J Med. 2009; 361:325‐328.
39. Halpern NA, et al. Critical care medicine in the United States 2000-2005: an analysis of bed numbers, occupancy rates, payer mix, and costs. Crit Care Med. 2010; 38:65-71.
40. Drozda JP, et al. ACC 2009 Advocacy Position Statement: principles for comparative effectiveness research. J Am Coll Cardiol. 2009;54:1744-1746.
41. RAND Corporation. Effects of comparative effectiveness policy options. Santa Monica, CA: RAND Corporation; 2009.
42. Cohen JT, et al. Does preventive care save money? Health economics and the presidential candidates. N Engl J Med. 2008; 358:661‐663.