Research Reports

Practice Indicators of Suboptimal Care and Avoidable Adverse Events

A Content Analysis of a National Qualifying Examination

Bordage, Georges, MD, PhD; Meguerditchian, Ari-Nareg, MD, MSc; Tamblyn, Robyn, PhD

doi: 10.1097/ACM.0b013e3182a356af


Adverse events (AEs) represent unintentional actions by health care professionals that result in substantial losses in patient health status, complications, prolonged recovery, disability, or even death.1–3 As many as one-third of patients admitted to a hospital experience at least one AE either while receiving care or after their discharge.4–6 AEs that occur during a patient’s hospitalization alone represent a significant financial burden to health care systems, with an average estimated cost of over $9 billion per year in the United States.7,8 Close to half of these AEs are considered preventable, as they are the direct result of errors in diagnosis, delays in the detection or management of complications, breakdowns in communication, equipment failures, mislabeled drugs, or procedural problems.9–11 The convergence of increasingly complex care plans, an aging patient population, and multiple transfers between health care delivery sites further increases the risk that practitioners will provide suboptimal care (SOCR) and that preventable AEs will occur.12

A central mission of medical licensing authorities is to protect the public against unsafe health care practices by assessing physicians’ ability to provide optimal care and avoid AEs. This goal raises a key design question for licensure examinations—namely, to what extent does the content of these examinations assess physicians’ knowledge, decision making, and actions as they contribute to SOCR and AEs? Medical competence is a multidimensional construct that can be assessed in a number of ways depending on the practice model used.13 One element of competence is a physician’s ability to practice safely and avoid SOCR and AEs.

The purpose of our study was twofold: (1) to compile a list of physician-related practice indicators (PRINDs) that contribute to either causing or preventing SOCR and AEs, and (2) to determine the extent to which one national exam, the Medical Council of Canada (MCC) Qualifying Examination (QE), required for licensure in Canada, assessed these PRINDs. For the purpose of our study, we defined the PRINDs of SOCR and avoidable AEs as either a physician’s failure to act (errors of omission) or his or her inappropriate actions (errors of commission) that could produce a clinically relevant loss in health status and that occur with sufficient frequency to be considered a priority at the population level. Examples of such PRINDs (stated positively) include prescribing influenza vaccinations for high-risk patients; providing heart failure patients with written instructions on activity, diet, and medications and on what to do if symptoms worsen; and recognizing an abdominal aortic aneurysm or an ectopic pregnancy. Some PRINDs represent specific events, such as prescribing medications that produce central nervous system side effects in patients who are at risk for falling, whereas others are more general, such as intervening rapidly when a complication arises.

From an assessment perspective, this type of research is part of the process of gathering ongoing evidence for the validity of exams.14 Our overarching goal with this study was to begin to establish a framework for identifying important indicators of safe medical practices and prevalent practice problems and errors, and to determine how these indicators might relate to exam content. In our study, we addressed three of the five sources of construct validity evidence described in the Standards for Educational and Psychological Testing15: (1) content, as it relates to practice analysis and blueprinting; (2) response process, regarding scoring procedures; and (3) consequences, for medical schools and candidates taking national exams.

Method

We conducted our study in two parts. First, we compiled a list of PRINDs of SOCR and avoidable AEs. We then conducted a content analysis of the 2008 and 2009 MCC QEs.

PRINDs identification

To date, no one has systematically compiled a unique list of the knowledge and skills physicians need to avoid clinically relevant errors related to SOCR and preventable AEs or identified corresponding methods for assessing these skills. The purpose of this part of our study was threefold: (1) to compile a list of PRINDs that health care agencies and authorities judge to play a significant role in either causing or preventing SOCR and avoidable AEs, (2) to determine the extent to which physicians are responsible for practicing the behaviors associated with PRINDs, and (3) to estimate when during training and by which method these PRINDs could be assessed. To identify the relevant PRINDs, we searched PubMed and Google during fall 2009 and winter 2010 for reviews of important practice errors that experts had already judged to play a role in either causing or preventing SOCR and avoidable AEs. We did not limit our searches by date or publication type, and our sources included reviews of AE studies,16 medical litigation insurance case report advisories,17 patient safety institutes,18,19 recommended safe practices,20 health care quality and accreditation agencies,21–31 pay-for-performance analyses,32 and quality improvement initiatives.33–35 When multiple sources yielded the same PRIND, we collapsed them into a single indicator. To improve coherence, we then standardized each PRIND to capture both the problem (e.g., deterioration or death in asthma patients having an acute exacerbation) and the expected behavior needed to avoid it (e.g., prescribe fast-acting bronchodilators and/or systemic corticosteroids).

Then, in March 2010, we surveyed an expert panel of 17 physicians from the MCC test committees to determine the level of physician responsibility for avoiding these events and the appropriate methods for assessing the relevant skills. From the list of test committee members, and in consultation with MCC exam officials, we selected physician experts from a broad spectrum of specialties who had direct involvement in training programs and assessment methods and who could make informed judgments about the appropriateness, timing, and method of assessing PRINDs.

We used a Web-based survey to independently gather each expert’s ratings of each PRIND on three issues: (1) the extent to which the PRIND was under the direct control of the physician; (2) whether the PRIND should be assessed on a general medical examination, such as the MCC QE, on a specialty certification examination, or on both; and (3) the extent to which the PRIND could be assessed by a written examination, a performance-based examination, and an in-training evaluation during supervised practice. We obtained ethics approval for our survey from the institutional review board at McGill University. We used Likert scales to measure the extent of physician control and the suitability of each assessment method. We classified a PRIND as predominantly under the control of the physician when two-thirds or more of the experts rated it as ≥ 5 on a 7-point scale, where 1 represented no control and 7 represented complete control. We used the same two-thirds majority rule to determine whether the PRIND should be assessed on a general medical or a specialty-only examination. We classified an assessment method as suitable for testing a PRIND when 70% or more of the experts rated the PRIND as likely or definitely testable by that method.
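For concreteness, the two decision rules can be expressed as a short computation. The sketch below is illustrative only; the rating data, vote labels, and function names are hypothetical, not the survey's actual instruments.

```python
# A minimal sketch of the two classification rules described above,
# using hypothetical expert ratings; names and data are ours.

def physician_controlled(ratings):
    """Two-thirds or more of experts rate the PRIND >= 5 on the 7-point scale."""
    return sum(r >= 5 for r in ratings) / len(ratings) >= 2 / 3

def method_suitable(votes):
    """70% or more of experts rate the method as likely or definitely suitable."""
    favorable = sum(v in ("likely", "definitely") for v in votes)
    return favorable / len(votes) >= 0.70

# 17 hypothetical ratings of physician control for one PRIND
# (1 = no control, 7 = complete control): 15 of 17 are >= 5.
control_ratings = [7, 6, 5, 5, 6, 7, 4, 5, 6, 7, 5, 5, 3, 6, 7, 5, 6]
print(physician_controlled(control_ratings))  # True

# 17 hypothetical suitability votes for one assessment method: 13 of 17 favorable.
method_votes = ["definitely", "likely", "unlikely", "likely"] * 4 + ["likely"]
print(method_suitable(method_votes))  # True (13/17 = 76%)
```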

MCC QE content analysis

The purpose of this part of our study was to determine the extent to which one national licensing examination, the MCC QE, assessed the PRINDs. The MCC QE is required for licensure in Canada and consists of two parts (taken on two separate occasions) and six components.36 See List 1 for details.

List 1 Overview of the Parts and Components of the Medical Council of Canada’s Qualifying Examination

We analyzed the content of the fall 2008 and 2009 administrations of the MCC QE I and II. We based our analysis on the PRINDs that the experts identified as being appropriate for testing on a general medical examination. We analyzed the content of each multiple-choice question (MCQ) and clinical decision making (CDM) and objective structured clinical examination (OSCE) case using a four-point rating scale representing the extent to which a PRIND was tested: 0 indicated not at all; 1, the question or case included a PRIND topic but did not test the PRIND; 2, the question or case partially tested a PRIND; and 3, the question or case completely tested a PRIND. We also tallied the number of total and distinct PRINDs and the number and percentage of exam points attributed to each PRIND in each question or case.
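For illustration, the coding scheme and tallies can be expressed as a short computation; the record layout and identifiers below are hypothetical, not the MCC's.

```python
# A minimal sketch of the four-point coding and the tallies described above,
# using hypothetical item records.

# Each record: (item_id, prind_id or None, code), where the code is
# 0 = not tested, 1 = PRIND topic present but not tested,
# 2 = partially tested, 3 = completely tested.
items = [
    ("MCQ-001", "PRIND-12", 3),
    ("MCQ-002", None, 0),
    ("CDM-014", "PRIND-12", 2),
    ("OSCE-03", "PRIND-40", 3),
]

tested = [(i, p, c) for i, p, c in items if c >= 2]   # partially or completely tested
total_tested = len(tested)                             # total PRIND codings
distinct_prinds = len({p for _, p, _ in tested})       # distinct PRINDs tested
print(total_tested, distinct_prinds)  # 3 2
```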

For the purpose of our study and the ease of interpreting our findings, we used raw, unweighted scores for the compilations and analyses. The MCC, however, uses more complex, weighted, and scaled scoring procedures (e.g., the Knowledge component of Part I is worth 75%, whereas the CDM component is worth 25%).36 Thus, our findings are not scaled and represent a simple sum of item scores.
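To make the distinction concrete, the following sketch contrasts a raw sum of this kind with a weighted compensatory composite like the one the MCC describes for Part I; the candidate's component scores are hypothetical.

```python
# A minimal sketch contrasting this study's raw, unweighted sums with a
# weighted compensatory Part I composite (Knowledge 75%, CDM 25%);
# the component scores below are hypothetical.

knowledge_pts, knowledge_max = 120, 162  # raw MCQ points earned / available
cdm_pts, cdm_max = 27, 36                # raw CDM points earned / available

raw_sum = knowledge_pts + cdm_pts  # the simple sum used in our analyses
weighted = 0.75 * (knowledge_pts / knowledge_max) + 0.25 * (cdm_pts / cdm_max)
print(raw_sum, round(weighted, 3))  # 147 0.743
```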

We analyzed the exam content in order of decreasing scoring complexity, starting with the Part II OSCE checklists (all 48 cases used), followed by the Part I CDM key features37 scoring keys (all 158 cases used) and the Part I MCQ scores (810 five-option MCQs used from five random exams with 162 questions per exam, with an equal number of questions across six clinical disciplines). For the Part II OSCE exams, two raters (G.B., R.T.) independently analyzed the content of the cases, with an overall agreement rate of 89.1% (545/612 PRINDs); they resolved discrepancies by consensus. They disagreed not in identifying the PRINDs tested on the exams but, rather, in identifying instances where PRINDs could potentially be tested—that is, in moving from a code of 0 (not at all tested) to a code of 1 (PRIND included but not tested). Given the high degree of agreement between the two raters and the consensus reached, one rater (G.B.) completed the analysis of the Part I components.
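The agreement rate reported above is a simple percent agreement over paired codes, as in this sketch; the paired 0–3 codes below are hypothetical.

```python
# A minimal sketch of the raw percent-agreement calculation reported for
# the two OSCE raters, on hypothetical paired codes.

codes_a = [0, 1, 3, 2, 0, 0, 3, 1]
codes_b = [0, 0, 3, 2, 0, 1, 3, 1]

agree = sum(a == b for a, b in zip(codes_a, codes_b))
print(f"{agree}/{len(codes_a)} = {agree / len(codes_a):.1%}")  # 6/8 = 75.0%
```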

Results

PRINDs identification

Overall, we identified 92 unique PRINDs related to 76 health care problems. The MCC experts judged only two PRINDs to be beyond the control of the physician (85.7% rater agreement, intraclass correlation [ICC] = 0.342): one related to nursing activities regarding central line infections and the other related to hospital procedures for blood cross-matches.
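The article does not state which ICC form was used; for readers unfamiliar with the statistic, here is an illustrative one-way random-effects ICC(1) computed on hypothetical ratings (rows are PRINDs, columns are experts).

```python
# An illustrative one-way random-effects ICC(1) on hypothetical data;
# this is not necessarily the ICC form the authors used.

import numpy as np

ratings = np.array([
    [6, 7, 5, 6],
    [2, 3, 2, 1],
    [5, 5, 6, 5],
    [7, 6, 7, 7],
], dtype=float)

n, k = ratings.shape
grand_mean = ratings.mean()
row_means = ratings.mean(axis=1)
msb = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)          # between-PRIND mean square
msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within-PRIND mean square
icc1 = (msb - msw) / (msb + (k - 1) * msw)
print(round(icc1, 3))  # 0.903
```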

Of the remaining 90 PRINDs, 59 (66%) were behaviors or decisions expected of all physicians and suitable for assessment on a general medical examination (64.2% rater agreement, ICC = 0.456). These 59 PRINDs were related to 46 health care problems. Detailed lists of the health care problems and PRINDs expected of all physicians entering or in practice (Supplemental Digital Appendix 1) and of physicians entering or in specialty practice (Supplemental Digital Appendix 2) are available at http://links.lww.com/ACADMED/A150. Examples of the latter include ensuring the proper administration route for chemotherapy, assessing readiness to extubate, and managing wound dehiscence.

Experts considered written examinations to be effective assessment methods for 34 PRINDs (of 59; 58%), performance examinations for 36 PRINDs (of 59; 61%), and in-training evaluations during supervised practice for 54 PRINDs (of 59; 92%).

MCC QE content analysis

Of the 59 PRINDs, 36 (61%) were tested on the 2008 and 2009 MCC QEs. When a PRIND was tested, most often it was assessed completely: 99% for Part I Knowledge (31.8/32), 92% for Part I CDM (16.9/18.4), and 85% for Part II (8.3/9.8). Of the 36 PRINDs tested, 19 (53%) were tested only on one of the three main parts of the MCC QEs (5 unique to Part I Knowledge, 10 to Part I CDM, and 4 to Part II OSCE), 9 (25%) were tested on two of the three parts, and 8 (22%) were tested on all three parts.

The mean number of PRINDs tested per exam was highest for Part I Knowledge (32.2 PRINDs), followed by Part I CDM (18.4 PRINDs) and Part II OSCE (9.8 PRINDs) (see Row 1, Table 1). However, Part I Knowledge included the smallest percentage of questions per exam testing PRINDs (32/162; 20%) compared with Part I CDM (14/36; 39%) and Part II OSCE (5.25/12; 44%) (see Row 2, Table 1). The Part I CDM total test scores contained the largest percentage of points related to PRINDs (10.8/36; 30%) compared with Part I Knowledge (32/162; 20%) and Part II OSCE (68.5/1522.3; 5%) (see Row 3, Table 1). Finally, when a PRIND was tested, the percentage of points attributed to PRINDs was 78% (8.5/10.9) for Part I CDM and 99% (31.8/32) for Part I Knowledge, compared with only 10% (6.64/68.5) for Part II OSCE (see Row 4, Table 1).

Table 1: Average Number and Percentage of Practice Indicators Tested on Each Component of the 2008 and 2009 Medical Council of Canada Qualifying Examinations*

Discussion

Our list of PRINDs provides a framework for structuring case-based learning opportunities during undergraduate, graduate, and continuing professional development programs, and for selecting the content for licensing and certification examinations. Our findings also augment existing efforts to define the content of a patient safety curriculum for medicine.38–42

The behaviors that we identified in this study as needed to reduce the risk of SOCR and avoidable AEs predominantly reflect problems in hospital-based, specialty care. Studies of medical errors have generally been conducted in these settings. As studies of medical errors expand to assess patient safety problems in the community, home, and long-term care, different types of problems and PRINDs likely will emerge. For example, we did not identify any problems related to mental health and identified only a few related to maternal–child issues. The extensive health informatics infrastructure currently being developed will allow for the more accurate and timely detection of medical errors and the risk factors for such events across the health care continuum.43

Overall, about three-fifths of the PRINDs that we identified in this study were assessed on the 2008 and 2009 administrations of the MCC QE. Given the key role that licensure exams play in testing PRINDs related to causing and preventing SOCR and avoidable AEs, one could argue that such examinations need to test more PRINDs to better protect the public against suboptimal medical care. We also found a certain redundancy in the PRINDs tested across the different parts of the MCC QEs (e.g., 9 were tested on two of the three parts, and 8 were tested on all three), whereas certain PRINDs (23 in all) were never tested. Alternative blueprinting approaches could be used to reduce these redundancies, thus increasing the number of distinct PRINDs tested.

Our results raise interesting issues about response processes regarding scoring procedures. For example, whereas the percentage of cases testing one or more PRINDs per exam was relatively similar between CDM and OSCE cases, the methods used to score these cases produced a vast difference in the percentage of points attributed to PRINDs in the two components—30% versus 5%, a sixfold difference, even when one accounts for the fact that the CDM cases tested on average about twice as many PRINDs (18.4 versus 9.8). In other words, CDM scores better represented candidates’ ability to avoid SOCR and AEs than OSCE or Knowledge scores did. This scoring discrepancy is similar to that of patient management problems (PMPs) decades ago, which eventually led to their demise: for the PMPs, thoroughness masked the essential or key feature elements in the resolution of the cases.37 For PRINDs, the multidimensional aspects of the OSCE checklists (i.e., assessing data gathering, problem solving, communication, and the cultural, ethical, and organizational aspects of practice) and their scoring procedures mask our critical elements of interest—that is, a candidate’s ability to avoid SOCR or AEs. More focused scoring rubrics, like those in the study by Yudkowsky and colleagues,44 which included only clinically discriminating physical exam findings on the checklist, make the scoring more precise and reliable.

For cases testing a PRIND, vastly different scoring procedures guided the development of the scoring keys. The MCQs, by virtue of their limited focus, yielded an almost perfect match between a PRIND and the number of points attributed to it (99%). The CDM cases, by virtue of their key features approach, also had a high level of concordance between PRINDs and scores (78%). The scoring keys for CDM cases specifically focused on, and exclusively rewarded, the critical steps or actions in the resolution of the problem37—namely, the PRINDs. Illustrating this point is a case of respiratory failure in which all three key features tested a PRIND: (1) respond to lab results in a timely fashion; (2) prescribe fast-acting β-agonists for asthma patients having an acute exacerbation; and (3) prescribe influenza vaccination for high-risk patients. Consequently, all the points candidates could earn for that case were directly related to avoiding SOCR or AEs. In other cases, the PRIND is captured not by the candidate’s accumulation of points but by his or her loss of points. For example, in a case of abdominal pain (as in an ectopic pregnancy or aortic aneurysm), failure to intervene rapidly results in a zero score for the case overall, even if other appropriate actions are taken; despite such positive actions, the patient will deteriorate unless swift action is taken—that is, the patient will experience an avoidable AE. The zero score in this case directly reflects the nature and importance of the PRIND.
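The MCC's actual scoring keys are not public, but the gating logic just described (missing a critical action zero-scores the case regardless of other credit) can be sketched as follows, combining the two examples above; the case content and identifiers are hypothetical.

```python
# A sketch of key-features-style scoring with a critical-action gate;
# the key features and responses below are hypothetical.

def score_case(responses, key_features, critical):
    """Credit per key feature captured, but zero overall if any critical
    action (one whose omission produces an avoidable AE) is missed."""
    if any(c not in responses for c in critical):
        return 0.0
    return sum(kf in responses for kf in key_features) / len(key_features)

key_features = {"respond_to_labs_promptly", "prescribe_fast_acting_bronchodilator",
                "prescribe_influenza_vaccination"}
critical = {"respond_to_labs_promptly"}

# Misses the critical step: zero despite two other correct actions.
print(score_case({"prescribe_fast_acting_bronchodilator",
                  "prescribe_influenza_vaccination"}, key_features, critical))  # 0.0
# Performs all three key features.
print(score_case(key_features, key_features, critical))  # 1.0
```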

Finally, in a 2007 predictive validity study, Tamblyn and colleagues45 found that lower scores were associated with higher rates of complaints retained by regulatory authorities, for both CDM scores (a 51% increase in the relative rate of complaints retained per two-standard-deviation reduction in score) and OSCE communication scores (a 38% increase). The number of PRINDs alone does not explain the higher predictive rates of complaints retained in practice found in the Tamblyn study for CDM and communication scores. However, although Part I CDM and Part II OSCE had relatively similar percentages of cases testing PRINDs (39% and 44%, respectively), the much higher percentage of total test points attributed to PRINDs in Part I CDM (30%) compared with Part II OSCE (5%) might have contributed to the higher relative rate of complaints associated with lower scores for that component. The Part II communication score, with only half a point out of 100 (0.5%) related to PRINDs, measures something entirely different from the PRINDs we analyzed in our study.

Conclusions

The results from our study shed light on three aspects of validity related to assessing the PRINDs of SOCR and avoidable AEs as one of the multidimensional components of a candidate’s readiness to practice safely and independently: (1) content, as it relates to practice analysis and blueprinting; (2) response process, regarding scoring procedures; and (3) consequences, for medical schools and candidates taking national exams.

The set of PRINDs that we compiled represents a first step in defining competencies for safe medical practice. Yet we need to conduct a more comprehensive practice analysis to fully capture that domain, including, for example, patient safety problems and medical errors in the community, home, and long-term care. Depending on the candidates to be assessed and the practice analysis procedures used for criterion-referenced licensing examinations,46,47 we must decide which omissions and commissions occur frequently enough in practice, and with sufficient health consequences, to be considered essential competencies for a national exam intended for supervised or unsupervised practice. We then can incorporate such analyses and decisions into a broader, more comprehensive practice model for blueprinting that can guide the content selection for the entire exam.13 The data sources that we used in our study were mostly from Canada and the United States; broadening these sources could foster the development of a more global practice model.

In addition, the blueprinting strategy should guide the selection of the exam formats (e.g., MCQs, key features cases, or OSCEs) best suited to the competencies tested. According to our experts, all but one PRIND could be assessed using a written, performance, or in-training examination format. Moreover, of the 23 PRINDs not tested on the 2008 and 2009 MCC QEs, only 6 could not be assessed using a written or performance examination. The main reason we found for not testing a PRIND was not test format limitations but the absence of a blueprinting strategy for PRINDs.

In addition, we need to address the fact that different scoring procedures led to different representations of the content. The key features approach, with its focus on critical steps and actions, offers a mechanism whereby a physician’s mastery of the decisions or behaviors needed to avoid SOCR and AEs is clearly captured and conveyed.37 Scoring issues are crucial both in terms of how scores are scaled and reported and in terms of how different subscores are used in predictive studies, as Tamblyn and colleagues did with CDM and OSCE communication scores.45 Further complicating the issue are the fact that criterion-referenced licensing examinations focus on pass–fail cut-score decisions (versus norm-referenced examinations, which use the whole range of the measurement scale) and the fact that different components of the examination may carry different weights (e.g., CDM contributes 25% to a compensatory score, whereas Knowledge contributes 75%).36

Finally, the consequences of overtly testing candidates’ ability to avoid SOCR and AEs will undoubtedly prompt medical schools, program directors, and candidates taking the exams, as well as professional development and revalidation agencies, to pay closer attention to this important aspect of medical practice, much like OSCEs did for clinical exam skills two decades ago.48 As George Miller used to say, “Assessment drives the curriculum.” His statement is especially true when that call to avoid SOCR and AEs comes from the licensing authorities.

References

1. Matlow AG, Baker GR, Flintoft V, et al. Adverse events among children in Canadian hospitals: The Canadian Paediatric Adverse Events Study. CMAJ. 2012;184:E709–E718
2. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261–271
3. U.S. Food and Drug Administration. What is a serious adverse event? http://www.fda.gov/safety/medwatch/howtoreport/ucm053087.htm. Accessed June 24, 2013
4. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138:161–167
5. de Vries EN, Ramrattan MA, Smorenburg SM, Gouma DJ, Boermeester MA. The incidence and nature of in-hospital adverse events: A systematic review. Qual Saf Health Care. 2008;17:216–223
6. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30:581–589
7. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290:1868–1874
8. Vlayen A, Verelst S, Bekkering GE, Schrooten W, Hellings J, Claes N. Incidence and preventability of adverse events requiring intensive care admission: A systematic review. J Eval Clin Pract. 2012;18:485–497
9. Leape LL, Kabcenell AI, Gandhi TK, Carver P, Nolan TW, Berwick DM. Reducing adverse drug events: Lessons from a breakthrough series collaborative. Jt Comm J Qual Improv. 2000;26:321–331
10. Elder NC, Dovey SM. Classification of medical errors and preventable adverse events in primary care: A synthesis of the literature. J Fam Pract. 2002;51:927–932
11. Barach P, Berwick DM. Patient safety and the reliability of health care systems. Ann Intern Med. 2003;138:997–998
12. Laugaland K, Aase K, Barach P. Interventions to improve patient safety in transitional care—A review of the evidence. Work. 2012;41(suppl 1):2915–2924
13. Soto AC. Practice Analysis Studies: A Literature Review of Definitions, Concepts, Features, and Methodologies. A Report Prepared for the Medical Council of Canada. 2011. http://mcc.ca/wp-content/uploads/Technical-Reports-Soto-2011.pdf. Accessed June 24, 2013
14. Clauser BE, Margolis MJ, Case SM. Testing for Licensure and Certification in the Professions. In: Brennan RL, ed. Educational Measurement. 4th ed. 2006 Westport, Conn Praeger Publishers
15. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837
16. Brindley PG. Patient safety and acute care medicine: Lessons for the future, insights from the past. Crit Care. 2010;14:217
17. Canadian Medical Protective Association. Aortic dissections: “Tearing” apart the data. 2008 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/risk_id/2008/com_ri0812-e.cfm. Accessed June 24, 2013
18. Canadian Patient Safety Institute. Prevent Surgical Site Infections: Getting Started Kit. Safer Healthcare Now! 2011 http://www.saferhealthcarenow.ca/EN/Interventions/SSI/Documents/SSI%20Getting%20Started%20Kit.pdf. Accessed June 24, 2013
19. NHS. Core list of never events. 2009–2010 http://www.nrls.npsa.nhs.uk/resources/collections/never-events/core-list/. Accessed June 24, 2013
20. Alper E, Rosenberg EI, O’Brien KE, Fischer M, Durning SJ. Patient safety education at U.S. and Canadian medical schools: Results from the 2006 Clerkship Directors in Internal Medicine survey. Acad Med. 2009;84:1672–1676
21. Integrated Healthcare Association. California Pay for Performance Program. Measurement Year 2008 P4P Manual. 2008 Washington, DC National Committee for Quality Assurance http://www.iha.org/pdfs_documents/p4p_california/MY2008_P4PManual_Bookmarked.pdf. Accessed June 24, 2013
22. Joint Commission. Specifications Manual for Joint Commission National Quality Core Measures. 2013 Washington, DC Joint Commission https://manual.jointcommission.org/releases/TJC2013A/. Accessed June 24, 2013
23. National Quality Forum. Safe Practices for Better Healthcare—2009 Update. 2009 Washington, DC National Quality Forum http://www.qualityforum.org/Publications/2009/03/Safe_Practices_for_Better_Healthcare%E2%80%932009_Update.aspx. Accessed June 24, 2013
24. Millar J, Mattke S. Selecting Indicators for Patient Safety at the Health Systems Level in OECD Countries, OECD Health Technical Papers No. 18, OECD Patient Safety Panel. 2004 Paris, France Organisation for Economic Co-operation and Development http://www.oecd.org/dataoecd/53/26/33878001.pdf. Accessed June 24, 2013
25. Agency for Healthcare Research and Quality, Department of Health and Human Services. AHRQ Quality Indicators: Guide to Patient Safety Indicators. 2003 Washington, DC Department of Health and Human Services http://www.qualityindicators.ahrq.gov/Downloads/Software/SAS/V31/psi_guide_v31.pdf. Accessed June 24, 2013
26. Institute for Healthcare Improvement. How-to Guide: Prevent Pressure Ulcers. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventPressureUlcers.aspx. Accessed June 24, 2013
27. Centers for Medicare and Medicaid Services and the Joint Commission. Specifications Manual for National Hospital Quality Measures. Version 2.5. 2009 http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier4&cid=1203781887871. Accessed June 24, 2013
28. Institute for Healthcare Improvement. How-to Guide: Reduce Surgical Complications. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuideReduceSurgicalComplications.aspx. Accessed June 24, 2013
29. Institute for Healthcare Improvement. How-to Guide: Prevent Surgical Site Infections. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventSurgicalSiteInfection.aspx. Accessed June 24, 2013
30. Institute for Healthcare Improvement. How-to Guide: Prevent Ventilator-Associated Pneumonia. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventVAP.aspx. Accessed June 24, 2013
31. Institute for Healthcare Improvement. How-to Guide: Prevent Central Line-Associated Bloodstream Infections. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventCentralLineAssociatedBloodstreamInfection.aspx. Accessed June 24, 2013
32. Jones RS, Brown C, Opelka F. Surgeon compensation: “Pay for performance,” the American College of Surgeons National Surgical Quality Improvement Program, the Surgical Care Improvement Program, and other considerations. Surgery. 2005;138:829–836
33. Kern LM, Kaushal R. Health information technology and health information exchange in New York State: New initiatives in implementation and evaluation. J Biomed Inform. 2007;40(6 suppl):S17–S20
34. Nash DB. Change is coming! Prescriptions for Excellence in Health Care Newsletter Supplement. 2009;7:1–2
35. Reifsnyder JA. Improving the quality of care at the end of life. Prescriptions Excell Health Care. 2009;1:3–5
36. Medical Council of Canada. Examinations. http://mcc.ca/examinations/. Accessed June 24, 2013
37. Page G, Bordage G. The Medical Council of Canada’s key features project: A more valid written examination of clinical decision-making skills. Acad Med. 1995;70:104–110
38. Kirch DG, Boysen PG. Changing the culture in medical education to teach patient safety. Health Aff (Millwood). 2010;29:1600–1604
39. Singh R, Naughton B, Taylor JS, et al. A comprehensive collaborative patient safety residency curriculum to address the ACGME core competencies. Med Educ. 2005;39:1195–1204
40. Thompson DA, Cowan J, Holzmueller C, Wu AW, Bass E, Pronovost P. Planning and implementing a systems-based patient safety curriculum in medical education. Am J Med Qual. 2008;23:271–278
41. Ellis O. Putting safety on the curriculum. BMJ. 2009;339:b3725
42. Varkey P, Karlapudi S, Rose S, Swensen S. A patient safety curriculum for graduate medical education: Results from a needs assessment of educators and patient safety experts. Am J Med Qual. 2009;24:214–221
43. Motamedi SM, Posadas-Calleja J, Straus S, et al. The efficacy of computer-enabled discharge communication interventions: A systematic review. BMJ Qual Saf. 2011;20:403–415
44. Yudkowsky R, Otaki J, Lowenstein T, Riddle J, Nishigori H, Bordage G. A hypothesis-driven physical examination learning and assessment procedure for medical students: Initial validity evidence. Med Educ. 2009;43:729–740
45. Tamblyn R, Abrahamowicz M, Dauphinee D, et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA. 2007;298:993–1001
46. Smith IL, Hambleton RK. Content validity studies of licensing examinations. Educ Meas Issues Pract. 1990;9:7–10
47. Raymond MR. A practical guide to practice analysis for credentialing examinations. Educ Meas Issues Pract. 2002;21:25–37
48. Stillman PL, Haley HL, Regan MB, Philbin MM. Positive effects of a clinical performance assessment program. Acad Med. 1991;66:481–483
49. Institute for Healthcare Improvement. How-to Guide: Improved Care for Patients With Congestive Heart Failure. 2011 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuideImprovedCareforPatientswithCongestiveHeartFailure.aspx. Accessed June 26, 2013
50. Institute for Healthcare Improvement. How-to Guide: Improved Care for Acute Myocardial Infarction. 2011 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuideImprovedCareAMI.aspx. Accessed June 26, 2013
51. Lambie L, Mattke S, and the Members of the OECD Cardiac Care Panel. Selecting Indicators for the Quality of Cardiac Care at the Health Systems Level in OECD Countries. Paris, France OECD http://www.oecd.org/health/health-systems/33865450.pdf. Accessed June 26, 2013
52. Institute for Healthcare Improvement. How-to Guide: Prevent Harm From High-Alert Medications. 2012 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventHarmfromHighAlertMedications.aspx. Accessed June 26, 2013
53. Institute for Healthcare Improvement. How-to Guide: Prevent Adverse Drug Events (Medication Reconciliation). 2011 Cambridge, Mass Institute for Healthcare Improvement http://www.ihi.org/knowledge/Pages/Tools/HowtoGuidePreventAdverseDrugEvents.asp. Accessed June 26, 2013
54. Joint Commission. Assuring Medication Accuracy at Transitions in Care. Patient Safety Solutions. Volume 1, solution 6. 2007 Washington, DC Joint Commission http://www.who.int/patientsafety/solutions/patientsafety/PS-Solution6.pdf. Accessed June 26, 2013
55. Canadian Patient Safety Institute. Safer Healthcare Now! Campaign, How-to-Guide: Getting Started Kit: Prevention of Ventilator-Associated Pneumonia. 2012 http://www.saferhealthcarenow.ca/EN/Interventions/VAP/Documents/VAP%20Getting%20Started%20Kit.pdf. Accessed June 26, 2013
56. Neumayer L, Mastin M, Vanderhoof L, Hinson D. Using the Veterans Administration National Surgical Quality Improvement Program to improve patient outcomes. J Surg Res. 2000;88:58–61
57. Canadian Medical Protective Association. Abdominal aortic aneurysm in the emergency department. 2007 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/risk_id/2007/com_ri0607-e.cfm. Accessed June 24, 2013
58. Canadian Medical Protective Association. Breast cancer detection and diagnosis: Part 1. 2009 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infosheets/2009/com_is0994-e.cfm. Accessed June 24, 2013
59. Canadian Medical Protective Association. Timely diagnosis of ectopic pregnancy a key factor in reducing risk. 2009 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infoletters/2009/com_il0920_1-e.cfm. Accessed June 24, 2013
60. Canadian Medical Protective Association. Heads up—allergy shots can cause medico-legal reactions. 2008 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infoletters/2003/com_il0340_2-e.cfm. Accessed June 24, 2013
61. Canadian Medical Protective Association. Warfarin and INR monitoring: Are you on target? 2007 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infoletters/2007/com_il0730_1-e.cfm. Accessed June 24, 2013
62. Canadian Medical Protective Association. Suspecting sepsis in asplenic patients. 2008 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infoletters/2002/com_il0240_3-e.cfm. Accessed June 24, 2013
63. Canadian Medical Protective Association. Failure to make the diagnosis of colorectal cancer: A window of opportunity missed. 2008 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/risk_id/2008/com_ri0813-e.cfm. Accessed June 24, 2013
64. Canadian Medical Protective Association. Adverse medication events. 2008 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infoletters/1995/com_il9520_8-e.cfm. Accessed June 24, 2013
65. Canadian Medical Protective Association. Specimen and report mix-ups. 2007 http://www.cmpa-acpm.ca/cmpapd04/docs/resource_files/infosheets/2007/com_is0776-e.cfm. Accessed June 24, 2013

© 2013 by the Association of American Medical Colleges