Accreditation of undergraduate medical education (UGME) programs has existed in Canada and the United States for well over 100 years. Then, as now, the primary purposes of accreditation were to ensure the quality of medical education and to promote quality improvement, with the ultimate goal of providing optimal patient care.1–5 Direct linkages between accreditation and the quality of education are, however, difficult to establish.1–5
In their discussion on the role of accreditation in ensuring high-quality patient care, Boulet and van Zanten3 highlight the paucity of evidence linking accreditation with the betterment of education practices and call for additional markers of the impact of accreditation, such as longitudinal studies on schools pre and post accreditation, graduates’ performance on examinations taken later in their careers, and actual patient data.
So far, the literature on the impact of accreditation on the quality of medical education has predominantly focused on the use of student outcomes, such as performance on national examinations. However, student outcomes present several challenges with regard to data availability, comparability, and contamination. Rather than focusing on measures of the products of medical education programs (i.e., student data) as evidence of the impact of accreditation, one could look at the influence of accreditation on the processes put in place by medical education programs. The real power of accreditation could lie in its ability to foster a culture of quality improvement, in which all components of learners’ educational experiences, including the services provided to them as well as curricular content, are assessed.3,6,7 The conceptual model grounding this paper suggests that accreditation leads medical schools to commit resources to and engage in self-assessment activities that represent best practices of continuous quality improvement (CQI), which leads to the development of a CQI culture within schools (Figure 1). A CQI culture encourages medical schools to continuously work at improving the quality of the medical education offered, potentially resulting in positive patient outcomes.8 Fostering the development of a CQI culture should be the centerpiece of accreditation systems, as a culture of CQI would ensure a medical school’s ongoing self-assessment against best education practices.
Our conception augments Boulet and van Zanten’s framework3 by contributing a novel perspective to the existing studies on the validity of accreditation: that the impact of accreditation on the quality of medical education can be measured using markers other than student outcomes. While one paper9 does report on institutional CQI efforts managed by third parties and internal CQI activities at specific schools, the development of a CQI culture in medical education programs as a measure of accreditation impact has not been studied.
In the context of this paper, CQI is defined as any formal process initiated and overseen by medical schools to audit and improve the quality of education offered.9 These reviews assess compliance with accreditation standards9 and other markers of interest to schools, and are continuous10 (i.e., they continue even between scheduled accreditation visits). The review cycle is determined internally at each school, with increased scrutiny of standards or metrics that have been identified as challenging; the results of the reviews are used for the schools’ CQI purposes only and are not shared with accreditation systems.
This paper reviews the evidence available on the impact of accreditation of medical education programs and proposes the use of CQI instruments as novel markers of accreditation impact.
Effectiveness of Accreditation
Medical schools engage in accreditation processes in the belief that they contribute to improved outcomes and, in particular, to improved patient outcomes.4 For accreditation to work as planned, the standards must be perceived as valid11 and the process itself accepted.
Validity of standards
In a 1997 survey, 1,659 U.S. UGME stakeholders were asked to rate 44 Liaison Committee on Medical Education (LCME) standards related to teaching, learning, and evaluation on a five-point Likert-type scale, where 1 = no importance and 5 = highly important, based on their perceived importance as indicators of the quality of education and on the clarity of the evidence required for compliance.12 These stakeholders included medical school deans, educational administrators, LCME members and surveyors (current or having served in the previous five years), UGME program directors, medical students, residents, and practicing physicians. The means for all 44 standards, based on 701 (42%) respondents, varied from 3.94 to 4.87. Despite the statistical flaw associated with the use of parametric methods to analyze ordinal data, these results suggest that all 44 standards were deemed at least moderately important.
Using a similar method, 662 faculty members and students from 41 Korean medical schools rated the importance of 87 accreditation standards in 2000.13 In this case, the means ranged from 3.80 to 4.49. The limitation noted above on the use of parametric statistics with rating scales also applies to this study.
In 1999, members of the Institute for International Medical Education identified, as minimum essential requirements for the medical degree, a core of 60 competencies covering seven domains: professional values, attitudes, behavior, and ethics; scientific foundations of medicine; communication skills; clinical skills; population health and health systems; management of information; and critical thinking and research.14
A more recent study reports the perception of 13 education experts on the importance of 150 accreditation standards in promoting the quality of UGME programs.15 These 150 standards consisted of a combination of standards from several accreditation systems around the world. Fourteen (9%) standards covering categories ranging from “governance and administration” to “students” were unanimously felt to be essential.15 The small number of participants in this study, however, limits the validity of its conclusions.
The above studies12–15 demonstrate the perceived content validity of accreditation standards as assessed, in written statement form, through surveys of specific individuals and groups. This validity might not extend to the interpretation of standards by medical schools or to their application by accreditation systems. The correlational validity16 (i.e., the relationship between standards and graduates’ later achievement in residency or practice) has not been studied.
The general agreement on particular standards noted above suggests the presence of a core set that resonates across various geographies and cultures.14,15,17 Surveyors piloting global accreditation standards developed by the World Federation for Medical Education (WFME) noted the importance of adjusting international standards to local conditions, and others cautioned against applying the same standards to widely variable educational settings.18,19 Standards need to align with local social accountability mandates, with local educational and medical practices, and with the desired outcomes identified by local medical schools.20–25 Illustrating this need, the six accreditation systems recognized by the WFME specifically state as their purpose the fulfilling of social accountability mandates and the betterment of health care for their communities (Table 1).26–31
Acceptance of accreditation process
In spite of the perceived validity of accreditation standards, the merit of quality assurance processes, such as accreditation, is not always well accepted. Following the implementation of Teaching and Learning Quality Process Reviews in Hong Kong, a purposive sample of 22 academic staff at one university, interviewed about satisfying and unsatisfying incidents related to teaching and learning, cited several irritants: the burdensome and time-consuming aspects of quality processes, which distract from actual teaching; the disconnect of these processes from what faculty see as the essence of teaching and learning; their subversive demands on educational resources; their overprescriptive nature; and their overdependence on quantitative markers that are sometimes unsuitable but are still used because they are available.32
A similar study of 30 academics from 10 Australian universities reports their critique of quality assurance processes at their universities.33 Faculty members identified student evaluations of teaching, a common marker of teaching effectiveness used by accreditation systems, as unreliable and flawed. They questioned students’ ability to judge faculty teaching and noted the potential for students to manipulate teaching through their evaluations.33 Like their Hong Kong colleagues, they believed current quality mechanisms did not align with their notion of quality and deplored the quantitative approach to measuring quality and the “check box” nature of quality assurance systems.33 Although not drawn from the medical education literature, these perspectives on quality assurance processes appear to be generalizable to UGME accreditation.
Adding to the dissatisfaction surrounding accreditation processes is the lack of evidence on their effectiveness at improving the quality of medical education and patient care.3 As mentioned previously, direct linkages between accreditation and quality education are difficult to establish.4 Further, identifying valid markers of effectiveness across schools, states, and countries is challenging.34 Positive changes in compliance rates with accreditation standards only provide information on how well schools meet standards; that is, they do not necessarily reflect the quality of the education provided or the excellence of schools’ graduates.
Markers of effectiveness
Markers of effectiveness of accreditation reported in the literature focus on curricular transformations and the quality of programs’ output, including the number of curricular changes implemented by programs, organizational quality outcomes, and student outcomes.
Number of curricular changes.
Of 90 medical schools visited by the LCME from 1992 to 1997, 61 had previously been cited for shortcomings in their educational programs.35 At their subsequent visit, 34 (56%) of the schools that had been previously cited reported major changes accomplished or in progress. Of the 29 schools that had not been previously cited, this proportion reached 52% (or 15 schools).35 The similar rate of change in the cited and noncited schools seems to indicate a failure of accreditation to drive educational reform. It might also be that the cited schools implemented changes in response to accreditation (i.e., accreditation drove changes), while the noncited schools spontaneously undertook reforms, without the need to be challenged by the LCME. The design of the study, however, does not allow for the determination of which interpretation is more likely. Also of note, the importance and quality of the curricular changes implemented at the cited and noncited schools are not detailed in the report.
Between 2004 and 2012, the standards with which U.S. and Canadian UGME programs were most often found noncompliant were related to the provision of formative and summative assessment, central monitoring of the medical conditions encountered by students, central oversight of the curriculum, and existence of valid affiliation agreements between programs and their clinical affiliates.36 In the absence of an accreditation system, schools might or might not correct these deficiencies. Although its consequences are foreseeable, the impact of noncompliance with these standards on the quality of medical education has not been formally studied.
Organizational quality outcomes.
An extensive review of 22 accreditation systems in health and social service industries, including the Joint Commission on Accreditation of Healthcare Organizations and the National Committee for Quality Assurance, showed a modest but positive influence of accreditation on service quality and outcomes, and on supplier operations (e.g., staff turnover rate, changes in membership, adoption of innovative practices).37 The impact of accreditation for the health and social service industries was evaluated using measures of clinical quality and outcomes (e.g., care of myocardial infarction and appropriate dosing of methadone), mortality, and patient satisfaction.
Student outcomes.
Student outcomes—namely, performance on national examinations—are the most prevalent measures in the literature on the impact of accreditation on UGME.
In theory, comparing the performance of graduates from accredited and nonaccredited schools on national examinations reflects the impact of accreditation on the quality of medical education received. In Canada and the United States, all medical schools are accredited, preventing comparisons within these countries.3 Comparisons of the performance of Canadian and U.S. graduates with that of graduates from other countries on the same national or board examinations are difficult; several confounding factors, such as the quality of the applicants to these schools, schools’ entry requirements, and schools’ available resources, confuse the association between schools’ accreditation status and graduates’ performance.34 Graduates from other countries who attempt the United States Medical Licensing Examination (USMLE) represent a small sample of their respective schools’ graduates and might have different baseline characteristics and abilities than graduates from these schools who do not attempt the examinations. Further blurring the link between schools’ accreditation status and graduates’ performance is the fact that nonaccredited schools might have aligned their curricula, requirements for faculty qualifications, and human and physical resources with those of accredited schools. Nonetheless, positive associations have been found between school accreditation status and student performance on external examinations.34,38 For example, first-attempt pass rates on all three components of the USMLE were higher for Mexican and Philippine graduates of accredited schools than for Mexican and Philippine graduates of nonaccredited schools (47.4%, 72.9%, and 83.3% vs. 31.5%, 66.7%, and 74.3% for USMLE Steps 1, 2, and 3, respectively).34 Graduates from Caribbean and non-Caribbean accredited schools also showed higher first-attempt pass rates on the USMLE Step 2 than graduates of nonaccredited schools, with a much larger difference for Caribbean school graduates (85.2% vs. 71.9%) than for non-Caribbean school graduates (73.5% vs. 69.6%).38
The challenges associated with the use of student outcomes as measures of quality education (see above) and the difficulties in evaluating these outcomes, in spite of existing definitions of desired outcomes, are major deterrents to using outcome-based evaluation systems.4,39
In contrast to the three markers of effectiveness detailed here, the conceptual model grounding this paper proposes to look at the impact of accreditation through a CQI lens. In their paper, Boulet and van Zanten3 advocate for longitudinal studies to mitigate the limitations of cross-sectional studies. Repeated determinations of the CQI orientation of medical schools at various points throughout their accreditation cycles align with this recommendation and could provide additional evidence of the impact of accreditation.
Accreditation as a Quality Improvement Tool
Quality management in higher education is shifting from a mechanistic perspective of process control to a cultural view of quality where change, development, and innovation are encouraged and valued.6,40,41 To achieve continuous improvement, programs themselves, rather than external organizations, ideally initiate the audit process.39 Accreditation aims at promoting CQI in UGME programs with the belief that programs’ CQI efforts will lead to continuous improvement of the quality of education offered, of graduates’ competency, and ultimately of the patient care provided. Since CQI stands as a centerpiece of accreditation systems, measures of the impact of accreditation on UGME programs should include, as one marker, the programs’ CQI orientation.
Most accreditation systems are based on industrial quality management models drawn from business, such as International Organization for Standardization (ISO) standards or Total Quality Management, which control the quality of a program’s processes but not the quality of its product.42 These types of models are designed to ensure customer satisfaction. As students are often thought of as the customers of higher education institutions, these models give priority to student issues over education issues.43 As a consequence, student evaluation of teaching successfully shifts the teacher–learner power relations, as recognized by the Australian academics interviewed in the study cited earlier.33 Since medical students might not be the best judges of the quality of their education, some have proposed viewing them as products of, rather than as consumers of, medical schools, arguing that medical schools’ real customers are residency programs and physician employers.44,45 Others believe equating medical school graduates to the output of other service industries could lead to the production of qualified technicians with defined units of training but lacking in complex skills, such as clinical reasoning, since teaching and evaluating these skills will not have been integrated as quality measures.42,46
The successful implementation of new accreditation standards has sometimes been interpreted as a demonstration of cultural change in an education program. For example, in 2000, new accreditation criteria were developed for engineering education, which led to significant restructuring of curricula, instructional practices, and assessment activities. These changes resulted in a more uniform level of student experiences and outcomes and in a higher degree of faculty involvement in academic activities.47,48 Whether these positive findings reflected a cultural change or, rather, resulted from prescriptions from program leadership is unclear. Documentation of the sustainability of faculty participation would lend credence to the authors’ conclusions of a genuine cultural conversion.
Essential features of CQI include an organizational structure supporting quality improvement processes; the use of data to guide activities in an iterative way, designed with local conditions in mind; the use of competitive benchmarking; and the empowerment of teams to effect operational changes.49–51 Information about current and best practices; engagement of leadership, customers, and staff; and an infrastructure based on improvement knowledge are all key factors as well.52 Quality improvement is operationalized through the provision of adequate resources to carry out quality improvement projects; the use of indicators for building and maintaining customer relationships; the provision of training in quality awareness for employees, managers, and supervisors; and the development of procedures to monitor performance markers and client satisfaction.53
The Malcolm Baldrige National Quality Award (MBNQA) framework, widely validated in the fields of business, education, and health care, appropriately captures these essential features of quality.50,53–60 Although no information could be found on participation of medical schools in the Baldrige Performance Excellence Program, its education framework appears to be applicable to medical education contexts. The MBNQA survey instrument61 helps uncover the CQI orientation of organizations,55,57 where higher scores on a variety of performance indicators reflect a stronger CQI orientation.55,58,60 Its serial use provides information about positive and negative trends for individual features of CQI.59 If accreditation indeed fosters the development of a culture of CQI within medical schools, then, independent of accreditation-mandated CQI activities, accredited UGME programs should demonstrate a culture of CQI, evidenced by a higher degree of CQI implementation as measured by the MBNQA instrument, than nonaccredited programs. Consequently, the quality of UGME offered by programs with strong CQI orientations and the quality of their graduates would be expected to be higher. A strong CQI orientation, therefore, could serve as a proxy marker for the quality of graduates. The link between high-quality graduates and high-quality patient care, however, would remain an extrapolation. One confounder to establishing this link is the minimum of two years of postgraduate training that takes place between graduation from medical school and independent practice as a physician. The care provided by recently licensed physicians thus reflects not only their undergraduate training but also their postgraduate training, local practice patterns, the maturation of individuals, interval changes in the knowledge and practice of medicine, and evolving expectations with regard to patient care.
The use of patient care outcomes as indicators of the quality of UGME and of graduates, therefore, presents significant challenges.
Over the years, medical education has been held accountable to quality standards, which have continuously evolved to reflect best practices in education and changing societal needs. Accreditation is a costly exercise for programs.37,62,63 Yet, the evidence of its impact on medical education and patient care is lacking. If the true impact of accreditation lies in its ability to promote CQI within medical education programs, markers of its impact need to include CQI-related measures. Validated instruments to measure the CQI orientation of programs exist and can be borrowed from the business and management fields. Strong CQI orientation should lead to high-quality medical education and would serve as a proxy marker for the quality of graduates and possibly the quality of care they provide. It is time to move away from a focus on student outcomes as measures of the impact of accreditation and embrace additional markers, such as indicators of organizational CQI orientation.
1. van Zanten M, Norcini JJ, Boulet JR, Simon F. Overview of accreditation of undergraduate medical education programmes worldwide. Med Educ. 2008;42:930–937.
2. Cueto J Jr, Burch VC, Adnan NA, et al. Accreditation of undergraduate medical training programs: Practices in nine developing countries as compared with the United States. Educ Health (Abingdon). 2006;19:207–222.
3. Boulet J, van Zanten M. Ensuring high-quality patient care: The role of accreditation, licensure, specialty certification and revalidation in medicine. Med Educ. 2014;48:75–86.
4. Davis DJ, Ringsted C. Accreditation of undergraduate and graduate medical education: How do the standards contribute to quality? Adv Health Sci Educ Theory Pract. 2006;11:305–313.
5. Al Alwan I. Is accreditation a true reflection of quality? Med Educ. 2012;46:542–544.
6. Al-Shehri AM, Al-Alwan I. Accreditation and culture of quality in medical schools in Saudi Arabia. Med Teach. 2013;35(suppl 1):S8–S14.
7. Bardo JW. The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Dir Higher Educ. 2009;2009:47–58.
8. Huggan PJ, Samarasekara DD, Archuleta S, et al. The successful, rapid transition to a new model of graduate medical education in Singapore. Acad Med. 2012;87:1268–1273.
9. Barzansky B, Hunt D, Moineau G, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: Benefits and challenges. Med Teach. 2015;37:1032–1038.
10. Bishop JA. The Impact of the Academic Quality Improvement Program (AQIP) on the Higher Learning Institutions’ North Central Association Accreditation [PhD thesis]. 2004. Milwaukee, WI: Marquette University.
11. Browne J. Setting standards: Quality in accreditation. Med Educ. 2012;46:1017.
12. Kassebaum DG, Cutler ER, Eaglen RH. On the importance and validity of medical accreditation standards. Acad Med. 1998;73:550–564.
13. Yang EB. A study on the content validity and factor validity of accreditation standards for medical schools in Korea. Korean J Med Educ. 2002;14:85–97.
14. Institute for International Medical Education Core Committee. Global minimum essential requirements in medical education. Med Teach. 2002;24:130–135.
15. van Zanten M, Boulet JR, Greaves I. The importance of medical education accreditation standards. Med Teach. 2012;34:136–145.
16. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
17. Stern DT, Wojczak A, Schwartz MR. Accreditation of Medical Education: A Global Outcomes-Based Approach [Commission on Education of Health Professionals for the 21st Century working paper]. 2010. London, UK: The Lancet.
18. Hays R, Baravilala M. Applying global standards across national boundaries: Lessons learned from an Asia-Pacific example. Med Educ. 2004;38:582–584.
19. Ten Cate O. Point: Global standards in medical education—What are the objectives? Med Educ. 2002;36:602–604.
20. Hodges BD, Maniate JM, Martimianakis MA, Alsuwaidan M, Segouin C. Cracks and crevices: Globalization discourse and medical education. Med Teach. 2009;31:910–917.
21. Boelen C. A new paradigm for medical schools a century after Flexner’s report. Bull World Health Organ. 2002;80:592–593.
22. Boelen C, Woollard B. Social accountability and accreditation: A new frontier for educational institutions. Med Educ. 2009;43:887–894.
23. Boelen C, Heck JE. Defining and measuring the social accountability of medical schools. http://apps.who.int/iris/bitstream/10665/59441/1/WHO_HRH_95.7.pdf. Accessed March 10, 2017.
24. Gibbs T, McLean M. Creating equal opportunities: The social accountability of medical education. Med Teach. 2011;33:620–625.
25. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
26. Liaison Committee on Medical Education. Rules of procedure. http://lcme.org/publications/. Published March 2016. Accessed May 10, 2017.
27. Committee on Accreditation of Canadian Medical Schools. CACMS rules of procedure. https://cacms-cafmc.ca/sites/default/files/documents/CACMS_Rules_of_Procedure_July 2016.pdf. Published July 2016. Accessed May 10, 2017.
28. Caribbean Accreditation Authority for Education in Medicine and Other Health Professions. CAAM-HP and its mission. http://www.caam-hp.org. Accessed May 10, 2017.
29. Turkish National Accreditation Council for Medical Education. TEPDAD bylaws. http://www.uteak.org.tr/condensed_english_version/17. Accessed May 10, 2017.
30. Accreditation Commission on Colleges of Medicine. About. http://www.accredmed.org/about/default.html. Accessed May 10, 2017.
31. Korean Institute of Medical Education and Evaluation. Mission and history. http://www.kimee.or.kr/eng/b_01.html. Accessed January 15, 2017. [No longer available.]
32. Jones J, Saram DDD. Academic staff views of quality systems for teaching and learning: A Hong Kong case study. Qual Higher Educ. 2005;11:47–58.
33. Anderson G. Assuring quality/resisting quality assurance: Academics’ responses to “quality” in some Australian universities. Qual Higher Educ. 2006;12:161–173.
34. van Zanten M, McKinley D, Durante Montiel I, Pijano CV. Medical education accreditation in Mexico and the Philippines: Impact on student outcomes. Med Educ. 2012;46:586–592.
35. Kassebaum DG, Cutler ER, Eaglen RH. The influence of accreditation on educational change in U.S. medical schools. Acad Med. 1997;72:1127–1133.
36. Hunt D, Migdal M, Waechter DM, Barzansky B, Sabalis RF. The variables that lead to severe action decisions by the Liaison Committee on Medical Education. Acad Med. 2016;91:87–93.
37. Mays GP. Can Accreditation Work in Public Health? Lessons From Other Service Industries. 2004. Little Rock, AR: University of Arkansas for Medical Sciences.
38. Van Zanten M, Boulet JR. The association between medical education accreditation and examination performance of internationally educated physicians seeking certification in the United States. Qual Higher Educ. 2013;19:283–299.
39. Goroll AH, Sirio C, Duffy FD, et al.; Residency Review Committee for Internal Medicine. A new model for accreditation of residency programs in internal medicine. Ann Intern Med. 2004;140:902–909.
40. Ehlers UD. Understanding quality culture. Qual Assur Educ. 2009;17:343–363.
41. Stensaker B, Harvey L. Old wine in new bottles? A comparison of public and private accreditation schemes in higher education. Higher Educ Policy. 2006;19:65–85.
42. Hasan T. Doctors or technicians: Assessing quality of medical education. Adv Med Educ Pract. 2010;1:25–29.
43. Popli S. Ensuring customer delight: A quality approach to excellence in management education. Qual Higher Educ. 2005;11:17–24.
44. Bing-You RG. T2QM (teaching and total quality management) for medical teachers. Med Teach. 1997;19:205–207.
46. Leinster SJ. A comment on “T2QM (teaching and total quality management) for medical teachers” by R. G. Bing-You. Med Teach. 1997;19:208.
47. Volkwein JF, Lattuca LR, Harper BJ, Domingo RJ. Measuring the impact of professional accreditation on student experiences and learning outcomes. Res High Educ. 2007;48:251–282.
48. Prados JW, Peterson GD, Lattuca LR. Quality assurance of engineering education through accreditation: The impact of Engineering Criteria 2000 and its global influence. J Eng Educ. 2005;94:165–184.
49. Rubenstein L, Khodyakov D, Hempel S, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26:6–15.
50. Carman JM, Shortell SM, Foster RW, et al. Keys for successful implementation of total quality management in hospitals. Health Care Manage Rev. 2010;35:283–293.
51. Ettorchi-Tardy A, Levif M, Michel P. Benchmarking: A method for continuous quality improvement in health. Healthc Policy. 2012;7:e101–e119.
52. Brandrud AS, Schreiner A, Hjortdahl P, Helljesen GS, Nyen B, Nelson EC. Three success factors for continual improvement in healthcare: An analysis of the reports of improvement team members. BMJ Qual Saf. 2011;20:251–259.
53. Curkovic S, Melnyk S, Calantone R, Handfield R. Validating the Malcolm Baldrige National Quality Award framework through structural equation modelling. Int J Prod Res. 2000;38:765–791.
54. National Institute of Standards and Technology. Baldrige improvement tools. https://www.nist.gov/baldrige/self-assessing/improvement-tools. Updated September 19, 2016. Accessed May 10, 2017.
55. Rondeau KV, Wagar TH. Implementing CQI while reducing the work force: How does it influence hospital performance? Healthc Manage Forum. 2004;17:22–29.
56. Abdulla Badri M, Selim H, Alshare K, Grandon EE, Younis H, Abdulla M. The Baldrige education criteria for performance excellence framework: Empirical test and validation. Int J Qual Reliab Manage. 2005;23:1118–1157.
57. Shortell SM, O’Brien JL, Carman JM, et al. Assessing the impact of continuous quality improvement/total quality management: Concept versus implementation. Health Serv Res. 1995;30:377–401.
58. Wakefield BJ, Blegen MA, Uden-Holman T, Vaughn T, Chrischilles E, Wakefield DS. Organizational culture, continuous quality improvement, and medication administration error reporting. Am J Med Qual. 2001;16:128–134.
59. Shields JA, Jennings JL. Using the Malcolm Baldrige “are we making progress” survey for organizational self-assessment and performance improvement. J Healthc Qual. 2013;35:5–15.
60. Lee S, Choi KS, Kang HY, Cho W, Chae YM. Assessing the factors influencing continuous quality improvement implementation: Experience in Korean hospitals. Int J Qual Health Care. 2002;14:383–391.
61. National Institute of Standards and Technology. Are we making progress? https://www.nist.gov/sites/default/files/documents/2016/09/12/awmp.pdf. Revised 2015. Accessed May 10, 2017.
62. Stern D, Braithwaite J. Health sector accreditation research: A systematic review. Int J Qual Health Care. 2008;20:172–183.
63. Simpson DE, Golden DL, Rehm JM, Kochar MS, Simons KB. The costs versus the perceived benefits of an LCME institutional self-study. Acad Med. 1998;73:1009–1012.
Reference cited in Table 1 only