Research Reports

Measuring Reflection on Participation in Quality Improvement Activities for Maintenance of Certification

Wittich, Christopher M. MD, PharmD; Reed, Darcy A. MD, MPH; Ting, Henry H. MD, MBA; Berger, Richard A. MD, PhD; Nowicki, Kelly M. MA; Blachman, Morris J. PhD; Mandrekar, Jayawant N. PhD; Beckman, Thomas J. MD

doi: 10.1097/ACM.0000000000000323

To promote and document competence, the American Board of Medical Specialties has required since 1991 that certified medical specialists participate in maintenance of certification (MOC) programs.1–4 Components of MOC include maintaining a license to practice medicine (Part I), lifelong learning and self-assessment (Part II), demonstrated cognitive expertise (Part III), and practice performance assessment (Part IV).5,6

Several pathways have been approved for MOC Part IV activities, including practice improvement modules,7 structured peer review such as clinical laboratory improvement amendments, and contribution to outcomes databases such as the National Surgical Quality Improvement Program. Practicing physicians are often involved in quality improvement (QI) activities as part of their usual clinical practice. To capitalize on the value of these ongoing activities, the Mayo Clinic has in recent years partnered with specialty certification boards to offer physicians Part IV MOC credit for completing QI projects. Completed projects are reviewed for compliance with standards established by the boards, and if the standards are met, Part IV MOC credit is granted to all participating board diplomates.

Participation in QI often requires that physicians work in interprofessional teams, reflecting the system in which they provide care.8–10 A challenge in granting individual MOC credit for a team project is determining whether the physician has meaningfully participated in the project. Prior research on the assessment of interprofessional teams has examined attitudes toward interprofessional collaboration,11,12 skills necessary for effective teamwork,13–15 and relationships between teamwork and patient outcomes.16–19 Other studies have characterized teamwork climate within health care organizations.20–22 However, there remains a need for studies to determine how learning takes place most effectively within interprofessional teams and what characteristics contribute to these teams’ success. It will also be important to measure whether an individual team member meaningfully engaged in the activity.

Critical reflection—defined as “intellectual and affective activities in which individuals engage to explore their experiences in order to lead to a new understanding and appreciation”23—is an important process for physicians to engage in to learn and change behaviors.24 Critical reflection is also a component of practice-based learning and improvement and systems-based practice competencies.25 Although project outcomes are often the focus of interprofessional team evaluations, reflection on how the individual participated may be a better marker of engagement and learning.26

Critical reflection by health care professionals has been assessed using validated instruments.27–32 Prior research indicates that reflection may be a temporally stable trait,33 is influenced by the situation surrounding the reflection,28,33,34 and may be enhanced by interactions in a small group.35–37 As MOC becomes more dependent on team participation, it will be important to better understand the process of reflection, not just in individuals but among all members of health care teams. Learning about reflection on participation in QI activities may help those granting MOC credit to determine whether the physician was engaged and meaningfully participated in the activity. We are unaware of any previous research on reflection among members of interprofessional QI teams, or of instruments for evaluating reflection in the setting of QI teams.

As such, we sought to develop and validate a new instrument to measure reflection on participation in QI activities for MOC credit. We hypothesized that levels of reflection among QI team members would be associated with clinical relevance and impact of the project,28 participant characteristics such as professional degree and role on the QI team, and the size and diversity of the interprofessional teams. Therefore, the specific objectives of this study were (1) to develop and validate an instrument to measure reflection on experiences in QI activities and (2) to identify associations between reflection scores and factors related to QI projects, participants, and teams.

Method

Study design and participants

This was a prospective validation study that included all team participants who completed a QI project and received MOC credit through the Mayo Clinic from January 1, 2010, through July 31, 2012. Teams were from the Mayo Clinic campuses in Rochester, Minnesota; Scottsdale, Arizona; and Jacksonville, Florida, as well as a group of community-based clinical practices called the Mayo Clinic Health System in Minnesota, Wisconsin, and Iowa. After submission of their QI projects for MOC credit, all team members were required to complete a postproject attestation through an internal online program developed by the Mayo Clinic that included project information, participant demographic information, and responses to the reflection instrument items. Informed consent was not obtained because the data collected were part of the normal educational practice and no incentives were given to the participants. This study was deemed exempt by the Mayo Clinic institutional review board.

Reflection instrument development

In 2009, we developed a new instrument to measure reflection on participation in QI activities, drawn from the literature on critical reflection including Mezirow’s transformative learning theory24,25 and Kember’s levels of reflection27,28: habitual action, understanding, reflection, and critical reflection.27 The instrument was iteratively revised by a panel of Mayo experts (C.M.W., D.A.R., T.J.B.) with experience in instrument development and validation. Eventually, three items based on a five-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) were developed for each reflection level, for a total of 12 items (Supplemental Digital Appendix 1, https://links.lww.com/ACADMED/A210).
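As a concrete illustration of this structure, the sketch below scores a single participant’s responses by Kember level. It is a minimal example, not the study’s actual scoring code: the item identifiers and sample answers are invented placeholders (the true item wording is in Supplemental Digital Appendix 1). Only the five-point scale, the four levels, and the three-items-per-level layout come from the description above.

```python
from statistics import mean

# Five-point scale anchors described above.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Three items per Kember level; the IDs are placeholders for the actual
# item stems in Supplemental Digital Appendix 1.
INSTRUMENT = {
    "habitual_action":     ["HA1", "HA2", "HA3"],
    "understanding":       ["U1", "U2", "U3"],
    "reflection":          ["R1", "R2", "R3"],
    "critical_reflection": ["CR1", "CR2", "CR3"],
}

def score_responses(responses):
    """Average the 1-5 responses within each reflection level."""
    return {level: mean(responses[item] for item in items)
            for level, items in INSTRUMENT.items()}

# Example: a participant who agrees with every item except one
# habitual-action item.
answers = {item: LIKERT["agree"]
           for items in INSTRUMENT.values() for item in items}
answers["HA1"] = LIKERT["disagree"]
print(score_responses(answers))  # habitual_action ≈ 3.33; other levels = 4
```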

Data collection and analysis

Participants were asked to report their perceptions of the project’s impact: (1) Would it change their personal practice (yes, no); (2) would it change the health care system (yes, no); and (3) would it have an impact on health care (no impact; prevents minor incidents, such as an unnecessary blood draw or a medication error without patient impact; prevents moderate incidents, such as prolonged hospitalization or extra noninvasive testing; prevents severe incidents, such as permanent organ dysfunction; or prevents patient deaths). They also reported demographic data including their profession (physician, nurse, administrator, support staff, or other health care professional), gender (female, male), and QI role (team leader, QI expert, intervention design, data collection/interpretation, content expert, implementation, or maintenance/dissemination). Participants could choose one or more QI roles. From these data, team characteristics were calculated, including team size, professional diversity (the total number of professions represented on the team), and QI role diversity (the total number of QI roles represented on the team).
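As a sketch of how these team characteristics could be derived from attestation records, consider the toy pandas example below. The tabular layout, column names, and rows are assumptions made for illustration; they are not the study’s actual database schema.

```python
import pandas as pd

# One row per participant attestation (toy data; fields are hypothetical).
attestations = pd.DataFrame({
    "team_id":    [1, 1, 1, 2, 2],
    "profession": ["physician", "nurse", "physician",
                   "physician", "administrator"],
    "qi_roles":   [{"team leader", "content expert"},
                   {"data collection/interpretation"},
                   {"implementation"},
                   {"team leader", "intervention design"},
                   {"maintenance/dissemination"}],
})

team_stats = attestations.groupby("team_id").agg(
    team_size=("profession", "size"),
    professional_diversity=("profession", "nunique"),
)
# QI role diversity: distinct roles on the team, pooling the one-or-more
# roles each member selected.
team_stats["qi_role_diversity"] = (
    attestations.groupby("team_id")["qi_roles"]
    .apply(lambda sets: len(set().union(*sets)))
)
print(team_stats)
```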

The dimensionality of reflection instrument scores was explored using factor analysis with orthogonal rotation. The factor analytic approach was confirmatory, as we anticipated that the items would cluster into four categories corresponding to Kember’s27 levels of reflection. Factors were extracted using the minimal proportion criteria, and items with loadings greater than 0.40 were retained.38 Internal consistency reliability for items within factors and overall was calculated using Cronbach alpha, where alpha coefficients greater than 0.70 were considered acceptable.
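For readers who want to reproduce this style of analysis, the sketch below runs an orthogonally rotated (varimax) factor analysis and computes Cronbach alpha in Python. It is an approximation under stated assumptions: it uses the third-party factor_analyzer package, substitutes synthetic random responses for the study data, and fixes the number of factors at two rather than implementing the minimal proportion extraction rule.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / total_var)

# Synthetic stand-in for the 922 x 12 matrix of item responses.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(922, 12)),
                         columns=[f"item{i}" for i in range(1, 13)])

fa = FactorAnalyzer(n_factors=2, rotation="varimax")  # orthogonal rotation
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["factor1", "factor2"])
retained = loadings[loadings.abs().max(axis=1) > 0.40]  # loadings > 0.40 kept

print(retained)
print(f"overall Cronbach alpha = {cronbach_alpha(responses):.2f}")
```

With random data the loadings are meaningless; the point is the workflow: rotate, inspect loadings against the 0.40 retention cutoff, then check reliability against the 0.70 threshold.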

Item reflection scores were reported as means and standard deviations (SDs). After instrument development and validation, we used parametric and nonparametric tests, where appropriate, to examine associations between participants’ overall reflection scores and (1) project characteristics (changed personal practice, changed health care system, impact); (2) participant characteristics (profession, gender, QI role); and (3) team characteristics (team size, professional diversity, QI role diversity). Given the multiple comparisons, the level of statistical significance was conservatively set at P < .01. Statistical analyses were conducted using SAS (SAS Institute Inc., Cary, North Carolina).
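The analyses themselves were run in SAS; as a rough Python illustration of the parametric/nonparametric pairing described above, the snippet below compares overall reflection scores between two groups. The group labels and score values are fabricated for the example.

```python
from scipy import stats

# Hypothetical overall reflection scores (mean of the 12 items) for two
# participant groups; values are made up for illustration.
physicians    = [4.4, 4.1, 4.6, 4.2, 4.5, 4.3]
support_staff = [4.0, 3.9, 4.2, 4.1, 3.8]

t_stat, p_t = stats.ttest_ind(physicians, support_staff)     # parametric
u_stat, p_u = stats.mannwhitneyu(physicians, support_staff)  # nonparametric

ALPHA = 0.01  # conservative threshold given multiple comparisons
for name, p in [("t test", p_t), ("Mann-Whitney U", p_u)]:
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{name}: P = {p:.4f} ({verdict} at P < .01)")
```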

Results

Participant characteristics

A total of 922 participants on 118 teams completed QI projects and the reflection instrument. All participants were required to complete the attestations and reflection instruments, yielding a participant completion rate of 100%. Team participants (number; percentage) included physicians (567; 61.5%), nurses (132; 14.3%), administrators (106; 11.5%), support staff (48; 5.2%), and other health care professionals (69; 7.5%). The mean (SD) QI team size was 7.81 (6.71). Most participants (784; 85.0%) reported that their project changed their personal practice, and most (752; 81.6%) reported that their project changed the health care system.

Factor analysis

Factor analysis revealed a two-dimensional model of reflection on participation in a QI team (Table 1). The two identified factors were high reflection (9 items) and low reflection (3 items). Items in the high reflection factor corresponded to Kember’s highest levels of reflection: understanding, reflection, and critical reflection. Items in the low reflection factor corresponded to Kember’s lowest level, habitual action. The overall mean (SD) for the 12 items was 4.21 (0.56) on a five-point scale. Individual item means ranged from 3.69 to 4.52 (Table 2). The internal consistency reliabilities (Cronbach alphas) were very good (high reflection factor = 0.85, low reflection factor = 0.81, and overall = 0.85).

Table 1: Responses to Mayo Clinic’s Quality Improvement Project Reflection Instrument, 2010–2012: Factor Analysis and Item Loadings

Table 2: Responses to Mayo Clinic’s Quality Improvement Project Reflection Instrument, 2010–2012: Factors, Mean Scores, and Reliability

Table 3 shows the associations between reflection scores and project, participant, and team characteristics.

Table 3: Associations Between Quality Improvement (QI) Project Reflection Scores and Project, Participant, and Team Characteristics, Mayo Clinic, 2010–2012

Project characteristics.

Reflection scores (mean [SD]) were significantly higher if the participant reported that the project changed their personal practice (yes: 4.30 [0.51] versus no: 3.71 [0.57]; P < .0001) or changed the health care system (yes: 4.25 [0.54] versus no: 4.03 [0.62]; P < .0001). Additionally, reflection scores were significantly higher if the participant reported a higher impact of the project (no impact: 3.52 [0.70] versus prevents patient death: 4.42 [0.50]; P < .0001).

Participant characteristics.

Physicians’ reflection scores were significantly higher than support staff scores (4.27 [0.57] versus 4.07 [0.55], respectively; P = .0005). Additionally, reflection scores rose with the number of QI roles a participant reported (no roles: 3.92 [0.61] versus 7 roles: 4.58 [0.42]; P < .0001). There was no statistically significant association between reflection scores and participant gender.

Team characteristics.

The median team size was 6 (range 1–55), with a median professional diversity of 2.5 (range 1–5) and a median QI role diversity of 7 (range 2–7). There was no statistically significant association between team size and overall mean reflection scores. Adjusting for team size, there was no association between overall mean reflection scores and the teams’ professional diversity (P = .6056) or QI role diversity (P = .0602).

Discussion

To our knowledge, this is the first study of critical reflection on participation in interprofessional QI teams. We identified strong associations between participants’ reflection scores and the relevance and impact of QI projects, as well as associations between reflection scores and participants’ professions and levels of involvement in their projects. These findings have important implications for optimizing participant involvement in QI activities and engaging them in meaningful MOC activities.

The current study could inform the design of future QI education programs. We found that the perceived relevance of projects and the number of QI roles held by participants may be more important than team factors, such as size or professional makeup. Prior research supports this finding: physicians’ levels of reflection on adverse outcomes are higher when the event is generalizable and relevant.28 Furthermore, factors such as leadership, team climate, ability to influence the system, and physician involvement may enhance QI teams.39 We also found that physicians had higher reflection scores than any other type of team member. This may reflect their level of involvement, because physicians often initiate QI projects and serve as team leaders.

MOC programs are beginning to require physician participation in interprofessional teams for credit toward certification, so it will be important to develop methods for assessing an individual team member’s engagement and learning in team-related activities. In this study, we found that a participant’s level of reflection was higher if he or she was involved in multiple QI roles. Therefore, personal attestations or team reporting of an individual’s involvement in a QI project may be important markers of engagement.40 Future studies should explore the relationships between individuals’ reflection scores and project quality,41 and/or their levels of team participation. If such relationships were identified, then measurements of reflection might be used to justify granting MOC credit for participation in QI activities.

Previous studies have reported methods to assess interprofessional teams42–44; however, all of these studies focused on evaluation of the team rather than an individual’s contribution. Consequently, future research should explore whether outcomes, such as project quality, sustainability, or impact on patient care, are associated with the levels of individual and/or team reflection.

The measure of reflection on QI team participation described in this study is supported by strong validity evidence. Instrument content was based on reflection theory and iterative revision by a team of experts. Internal structure validity was supported by the factor analysis, which showed that the instrument separated participant reflection into low and high levels, and by excellent internal consistency reliability. Criterion validity was supported by associations between participants’ reflection scores and project relevance, as well as participants’ professions and levels of involvement. Future work should focus on associations between reflection, participant behaviors, and project outcomes.

This study has limitations. All participants were employed by the Mayo Clinic. However, the teams were from Arizona, Florida, Minnesota, Wisconsin, and Iowa, and the projects were completed in health care settings ranging from large academic medical centers to small community practices, which should improve the generalizability of the findings. Additionally, the study outcomes were limited to self-reported behavior changes. Therefore, future studies should consider incorporating objective, anonymized observations of participants’ behaviors by other members of the QI team.

Conclusion

In conclusion, we report that reflection on participation in interprofessional QI teams may be influenced by the relevance of the QI project and the participant’s level of involvement in it. Critical reflection by physicians is a necessary step for learning and changing behavior. With further study, we anticipate that our new measure of reflection will be useful for identifying ways to enhance physicians’ meaningful engagement in MOC and QI programs.

Acknowledgments: The authors wish to thank Allison Hartl for her technical assistance with the database.

References

1. Brennan TA, Horwitz RI, Duffy FD, Cassel CK, Goode LD, Lipner RS. The role of physician specialty board certification status in the quality movement. JAMA. 2004;292:1038–1043
2. Levinson W, King TE, Goldman L, Goroll AH, Kessler B. American Board of Internal Medicine maintenance of certification program. N Engl J Med. 2010;362:948–952
3. Drazen JM, Weinstein DF. Considering recertification. N Engl J Med. 2010;362:946–947
4. Iglehart JK, Baron RB. Ensuring physicians’ competence—is maintenance of certification the answer? N Engl J Med. 2012;367:2543–2549
5. Levinson W, Holmboe E. Maintenance of certification in internal medicine: Facts and misconceptions. Arch Intern Med. 2011;171:174–176
6. Levinson W, Holmboe E. Maintenance of certification: 20 years later. Am J Med. 2011;124:180–185
7. Holmboe ES, Meehan TP, Lynn L, Doyle P, Sherwin T, Duffy FD. Promoting physicians’ self-assessment and quality improvement: The ABIM diabetes practice improvement module. J Contin Educ Health Prof. 2006;26:109–119
8. Mills PD, Weeks WB. Characteristics of successful quality improvement teams: Lessons from five collaborative projects in the VHA. Jt Comm J Qual Saf. 2004;30:152–162
9. Anderson E, Thorpe L, Heney D, Petersen S. Medical students benefit from learning about patient safety in an interprofessional team. Med Educ. 2009;43:542–552
10. Varkey P, Reller MK, Resar RK. Basics of quality improvement in health care. Mayo Clin Proc. 2007;82:735–739
11. Thomas EJ, Sexton JB, Helmreich RL. Discrepant attitudes about teamwork among critical care nurses and physicians. Crit Care Med. 2003;31:956–959
12. Hawk C, Buckwalter K, Byrd L, Cigelman S, Dorfman L, Ferguson K. Health professions students’ perceptions of interprofessional relationships. Acad Med. 2002;77:354–357
13. Undre S, Sevdalis N, Healey AN, Darzi A, Vincent CA. Observational teamwork assessment for surgery (OTAS): Refinement and application in urological surgery. World J Surg. 2007;31:1373–1381
14. Clancy CM, Tornberg DN. TeamSTEPPS: Assuring optimal teamwork in clinical settings. Am J Med Qual. 2007;22:214–217
15. Mishra A, Catchpole K, McCulloch P. The Oxford NOTECHS System: Reliability and validity of a tool for measuring teamwork behaviour in the operating theatre. Qual Saf Health Care. 2009;18:104–108
16. Mazzocco K, Petitti DB, Fong KT, et al. Surgical team behaviors and patient outcomes. Am J Surg. 2009;197:678–685
17. Wheelan SA, Burchill CN, Tilin F. The link between teamwork and patients’ outcomes in intensive care units. Am J Crit Care. 2003;12:527–534
18. McCulloch P, Mishra A, Handa A, Dale T, Hirst G, Catchpole K. The effects of aviation-style non-technical skills training on technical performance and outcome in the operating theatre. Qual Saf Health Care. 2009;18:109–115
19. Schraagen JM, Schouten T, Smit M, et al. A prospective study of paediatric cardiac surgical microsystems: Assessing the relationships between non-routine events, teamwork and patient outcomes. BMJ Qual Saf. 2011;20:599–603
20. Hann M, Bower P, Campbell S, Marshall M, Reeves D. The association between culture, climate and quality of care in primary health care teams. Fam Pract. 2007;24:323–329
21. Bower P, Campbell S, Bojke C, Sibbald B. Team structure, team climate and the quality of care in primary care: An observational study. Qual Saf Health Care. 2003;12:273–279
22. Goh TT, Eccles MP, Steen N. Factors predicting team climate, and its relationship with quality of care in general practice. BMC Health Serv Res. 2009;9:138
23. Boud D, Keogh R, Walker D. Reflection: Turning Experience Into Learning. London, UK: Kogan Page; 1985
24. Mezirow J. Transformative learning: Theory to practice. In: Cranton P, ed. New Directions for Adult and Continuing Education. No. 74. San Francisco, Calif: Jossey-Bass; 1997:5–12
25. Wittich CM, Reed DA, McDonald FS, Varkey P, Beckman TJ. Transformative learning: A framework using critical reflection to link the improvement competencies in graduate medical education. Acad Med. 2010;85:1790–1793
26. Clark PG. Reflecting on reflection in interprofessional education: Implications for theory and practice. J Interprof Care. 2009;23:213–223
27. Kember D, Leung D, Jones A, et al. Development of a questionnaire to measure the level of reflective thinking. Assess Eval Higher Educ. 2000;25:381–395
28. Wittich CM, Lopez-Jimenez F, Decker LK, et al. Measuring faculty reflection on adverse patient events: Development and initial validation of a case-based learning system. J Gen Intern Med. 2011;26:293–298
29. Wittich CM, Beckman TJ, Drefahl MM, et al. Validation of a method to measure resident doctors’ reflections on quality improvement. Med Educ. 2010;44:248–255
30. Wittich CM, Pawlina W, Drake RL, et al. Validation of a method for measuring medical students’ critical reflections on professionalism in gross anatomy. Anat Sci Educ. 2012;6:232–238
31. Wald HS, Borkan JM, Taylor JS, Anthony D, Reis SP. Fostering and evaluating reflective capacity in medical education: Developing the REFLECT rubric for assessing reflective writing. Acad Med. 2012;87:41–50
32. Sobral DT. Medical students’ mindset for reflective learning: A revalidation study of the reflection-in-learning scale. Adv Health Sci Educ Theory Pract. 2005;10:303–314
33. Wittich CM, Reed DA, Drefahl MM, et al. Residents’ reflections on quality improvement: Temporal stability and associations with preventability of adverse patient events. Acad Med. 2011;86:737–741
34. Boenink AD, Oderwald AK, De Jonge P, Van Tilburg W, Smal JA. Assessing student reflection in medical practice. The development of an observer-rated instrument: Reliability, validity and initial experiences. Med Educ. 2004;38:368–377
35. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: A systematic review. Adv Health Sci Educ Theory Pract. 2009;14:595–621
36. Platzer H, Blake D, Ashford D. Barriers to learning from reflection: A study of the use of groupwork with post-registration nurses. J Adv Nurs. 2000;31:1001–1008
37. von Klitzing W. Evaluation of reflective learning in a psychodynamic group of nurses caring for terminally ill patients. J Adv Nurs. 1999;30:1213–1221
38. DeVellis RF. Scale Development: Theory and Applications. Newbury Park, Calif: Sage; 1991
39. Brennan SE, Bosch M, Buchan H, Green SE. Measuring team factors thought to influence the success of quality improvement in primary care: A systematic review of instruments. Implement Sci. 2013;8:1–17
40. Levine RE. Peer evaluation in team-based learning. In: Michaelsen LK, Parmelee DX, McMahon KK, Levine RE, eds. Team-Based Learning for Health Professions Education: A Guide to Using Small Groups to Improve Learning. Sterling, Va: Stylus Publishing; 2008:103–116
41. Leenstra JL, Beckman TJ, Reed DA, et al. Validation of a method for assessing resident physicians’ quality improvement proposals. J Gen Intern Med. 2007;22:1330–1334
42. Mellin EA, Bronstein L, Anderson-Butcher D, Amorose AJ, Ball A, Green J. Measuring interprofessional team collaboration in expanded school mental health: Model refinement and scale development. J Interprof Care. 2010;25:514–523
43. Orchard CA, King GA, Khalili H, Bezzina MB. Assessment of Interprofessional Team Collaboration Scale (AITCS): Development and testing of the instrument. J Contin Educ Health Prof. 2012;32:58–67
44. Symonds I, Cullen L, Fraser D. Evaluation of a formative interprofessional team objective structured clinical examination (ITOSCE): A method of shared learning in maternity education. Med Teach. 2003;25:38–41

© 2014 by the Association of American Medical Colleges