Academic Medicine: January 2012 - Volume 87 - Issue 1
doi: 10.1097/ACM.0b013e31823b55fa
Evaluating Reflective Writing

Fostering and Evaluating Reflective Capacity in Medical Education: Developing the REFLECT Rubric for Assessing Reflective Writing

Wald, Hedy S. PhD; Borkan, Jeffrey M. MD, PhD; Taylor, Julie Scott MD, MSc; Anthony, David MD, MSc; Reis, Shmuel P. MD, MHPE

Author Information

Dr. Wald is clinical associate professor of family medicine, Warren Alpert Medical School of Brown University, Providence, Rhode Island.

Dr. Borkan is professor and chair, Department of Family Medicine, Warren Alpert Medical School of Brown University, Providence, Rhode Island.

Dr. Taylor is associate professor of family medicine and director of clinical curriculum, Warren Alpert Medical School of Brown University, Providence, Rhode Island.

Dr. Anthony is assistant professor of family medicine and director of medical student education, Warren Alpert Medical School of Brown University, Providence, Rhode Island.

Dr. Reis is professor and immediate past chair, Section of Family Medicine and Department of Medical Education, Ruth and Bruce Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel, and adjunct clinical professor of family medicine, Warren Alpert Medical School of Brown University, Providence, Rhode Island.

Please see the end of this article for information about the authors.

Correspondence should be addressed to Dr. Wald, Warren Alpert Medical School of Brown University, Department of Family Medicine, 111 Brewster St., Pawtucket, RI 02860; telephone: (781) 424-2711; fax: (866) 372-7918; e-mail: hedy_wald@brown.edu.

First published online November 18, 2011

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A68.

Editor's Note: Commentaries on this article appear on pages 5 and 8.

Abstract

Purpose: Reflective writing (RW) curriculum initiatives to promote reflective capacity are proliferating within medical education. The authors developed a new evaluative tool that can be effectively applied to assess students' reflective levels and assist with the process of providing individualized written feedback to guide reflective capacity promotion.

Method: Following a comprehensive search and analysis of the literature, the authors developed an analytic rubric through repeated iterative cycles of development, including empiric testing and determination of interrater reliability, reevaluation and refinement, and redesign. Rubric iterations were applied in successive development phases to Warren Alpert Medical School of Brown University students' 2009 and 2010 RW narratives with determination of intraclass correlations (ICCs).

Results: The final rubric, the Reflection Evaluation for Learners' Enhanced Competencies Tool (REFLECT), consisted of four reflective capacity levels ranging from habitual action to critical reflection, with focused criteria for each level. The rubric also evaluated RW for transformative reflection and learning and for confirmatory learning. ICCs ranged from 0.376 to 0.748 across datasets and rater combinations; the final REFLECT iteration yielded an ICC of 0.632.

Conclusions: The REFLECT is a rigorously developed, theory-informed analytic rubric, demonstrating adequate interrater reliability, face validity, feasibility, and acceptability. The REFLECT rubric is a reflective analysis innovation supporting development of a reflective clinician via formative assessment and enhanced crafting of faculty feedback to reflective narratives.

Fostering reflective capacity within medical education helps develop critical thinking skills,1,2 inform clinical reasoning,3 and enhance professionalism4 among trainees. Reflection—the expertise-enhancing, metacognitive, tacit process5,6 whereby personal experience informs practice7—is integral to core professional practice competencies.8,9 Development of reflective capacity has been highlighted as necessary for effective use of feedback in medical education10,11 and is an essential aspect of self-regulated and lifelong learning.5,10 Reflection can guide practitioners as they encounter the complexity that is inherent to clinical practice, potentially influencing the choice of how to act in “difficult or morally ambiguous circumstances.”12 In this vein, the development of reflective practice has been associated with enhancing an individual's character or “virtue,” fostering a “habit of mind,”13 “dispositional tendency,”14 or “medical morality”15 with which to approach clinical reasoning and ethical or values-related16 dilemmas that may arise. It also helps in developing “phronesis”—adaptive expertise or practical wisdom to guide professionally competent clinical practice.13,17 Failure to reflect on one's own thinking processes, including critical examination of one's assumptions, beliefs, and conclusions, was recently described as a cognitive component of “physician overconfidence,” a contributing cause of diagnostic error in medicine.18 In line with this, research has offered promising new evidence of an association between analytical reflective reasoning and improved diagnostic accuracy in challenging cases.1

Definitions of reflective capacity abound, though they generally include the review, interpretation, and understanding of experiences to guide future behavior. For example, Mann and colleagues19 define reflective capacity as “critical analysis of knowledge and experience to achieve deeper meaning and understanding.” Theoretical pillars of reflective capacity include Schon's20 progression from knowing-in-action to surprise, reflection-in-action (“thinking on our feet”21), experimentation, and, finally, reflection-on-action (postexperience reflection), and Boud and colleagues'22 emphasis on addressing feelings in the reflective process. Moon23 introduces the component of meaning making to reflection in learning, and Mezirow24 links premise reflection with transformative or confirmatory learning, bringing additional depth and breadth to the conceptualization of reflection. Mann and colleagues19 describe two overarching dimensions in models of reflection: iterative and vertical. The iterative dimension of reflection is triggered by experience and produces a new understanding; the vertical dimension combines surface (descriptive) and deeper (analytic) levels of reflection.

Reflection is not necessarily intuitive, especially in students at initial stages of their medical careers. Thus, medical educators strive to implement innovative educational methods to promote development of reflective capacity early in the training process. The use of reflective writing (RW) to facilitate reflective practice is well documented.25–29 Curricula have included RW groups for students in clerkships and residencies, journaling, portfolios, video essays, and what we have termed “interactive” RW—integration of written feedback from faculty to foster learners' development of more sophisticated reflection skills.29 Pedagogic goals of professional development, insights into the process of patient care, and practitioner well-being have been addressed through the small-group RW process.30 RW, a subset of narrative medicine, cultivates self-awareness and builds narrative competence for clinical encounters through the processes of attending, representing, and affiliating that are shared between RW and clinical practice.25 RW embodies the “interpretative and narrative”31 qualities of practical medical reasoning. Narrative competence and emotional self-reflective ability, which may be cultivated through RW, can bolster resilience to emotionally challenging situations32 and promote capability in challenging communication encounters, such as breaking bad news.33

Mentors who skillfully support and challenge learners through noticing the reflective moment, making sense of the experience (including emotional responses), tolerating uncertainty (or the “messiness” of clinical practice at the “heart of professional expertise”20), and using new insights5,34 are essential to developing reflective capacity. Their guided written feedback about reflective narratives can promote a more in-depth reflective process.35,36 At the Warren Alpert Medical School (AMS) of Brown University, students receive guided, individualized feedback about their RW from interdisciplinary faculty29,37–39 during the Doctoring course40,41 and the family medicine clerkship.42 Faculty use a rigorously developed tool (the Brown Educational Guide to the Analysis of Narrative [BEGAN]) to enhance the educational impact of their written feedback about reflective narratives.43

The proliferation of RW curricula locally and internationally has created the need for a valid, reliable evaluative tool that can be effectively applied to assess students' levels of reflection and the development of reflective skills within RW pedagogy. Publications on the utility of RW in medical education have been largely anecdotal or based on student self-report. Although some suggest assessing students' levels of reflection to evaluate reflective learning outcomes,44,45 a recent comprehensive review concluded that measurement of reflection is at an early stage of development and that qualitative and exploratory research approaches are appropriate for achieving deeper understanding of this essential construct.19

There are significant limitations and challenges in applying available coding systems for analyzing written reflective journals to determine the extent and level of reflection. Proposed criteria for “grading” physiotherapy students' reflective journals,46 for example, lacked clear explication,47 and a reliable structured worksheet for assessing reflection level48 focused on depth to the exclusion of breadth of reflection.47 Plack and colleagues7 applied a modified Bloom's taxonomy to determine achievement of higher-order thinking in reflective journals, yet they only indirectly assessed reflection per se. Identification and coding of textual elements of journals for levels of reflection using Boud and colleagues'22 model was described as relatively difficult and not achieving sufficiently reliable outcomes.44,46 Plack and colleagues47 broadened the coding schema for reflective journals by including Schon's,20 Boud and colleagues',22 and Mezirow's24 theoretical frameworks; however, the schema did not integrate criteria within reflective levels, and the authors identified the need for further refinement of some operational definitions. In addition, our review of available criteria for assessing level of reflection revealed that existing criteria did not include Mezirow's24 transformative or confirmatory learning schemata49; in fact, we encountered a critique of his reflection levels (as used in current assessment formats) as inadequately describing the process of reflective thinking.46 Some recently published rubrics for reflective narrative analysis are limited either in scope50–52 or in building a validity argument.53 Lastly, the factorial validity of at least one self-report reflection instrument has been questioned.54 In light of the increased interest in formal assessment of level of reflection as an indicator of professional development of medical students and best teaching practices,35 we set out to design an empirically tested, concise, “user-friendly” evaluative paradigm stemming from our review of existing qualitative and quantitative measures and frameworks for reflective capacity.

The Reflection Evaluation for Learners' Enhanced Competencies Tool (REFLECT), a new rubric for evaluating medical students' levels of reflection and the development of those levels within RW pedagogy, is an innovative approach to assessing reflection that includes multiple fundamental domains of reflection. In this report, we describe the development of the rubric, present reliability and validity data, and discuss the rubric's application and potential use for enhancing the educational effects of reflective narratives in medical education.


Method and Results

Preliminary stage: Literature and model review

The development of the REFLECT rubric began in early 2008 with a comprehensive analysis of the literature, including theoretical models of reflection, RW pedagogy, elements of reflective practice, and existing assessment modalities in health professions education. By October 2008, we had completed literature searches in the PubMed database for relevant articles published from 1995 to 2008, using key words such as “reflection,” “reflective practice,” “reflective writing,” “reflection in medicine,” “reflection in medical education,” and “reflection in health professions education.” We then conducted ongoing literature searches until late 2010 (to inform the writing of this article) using “reflection,” “reflective writing,” and “reflection assessment” as key words, though articles from 2008 to 2010 were not included in the literature review for the development of the REFLECT.

We then used the snowball technique to extend the literature search from retrieved articles to other relevant sources. In sampling, the snowball technique is a method whereby existing study participants suggest, recruit, or assist in recruiting future subjects from among their acquaintances or contacts.55 In this case, it refers to careful review of the bibliographies of articles found through database searches to detect other relevant articles that might otherwise have been missed. From our review of the literature, we identified four existing modalities of reflection assessment: (1) scales (“paper and pencil” forms with responses scored by respondents), (2) thematic coding (qualitative analysis that codes themes in the narratives), (3) qualitative analysis (more elaborate qualitative analysis moving beyond themes into models), and (4) analytical instructional rubrics (theory-based delineation of dimensions or levels of an assessed construct).

We next examined these four approaches for their utility in the assessment of medical students' RW. Our deliberations were based on both theoretical and functional premises. We used anonymized analogical datasets of medical student RW exercises—sampled anew with each iteration—from the 2009 and 2010 Doctoring course and family medicine clerkship as anchors for the deliberation. Although our literature search uncovered an existing scale for measuring “personal reflection,”54 we did not use it for our analysis given its intended purpose for students' self-reported reflective capacity rather than for assessment of the construct within RW. Thematic coding26,27 with sole emphasis on extraction of themes was also inadequate for our evaluative aims because students' reflective levels within RW could not be determined with such a method. Similarly, qualitative analysis was deemed insufficient because of its inability to provide focused differentiation of reflective levels. The fourth approach, the analytical instructional rubric,56 is specifically used for assessment. Analytical instructional rubrics seemed to be the best choice for the assessment of reflective levels because they are based on a theoretical framework and can be tailor-made for specific purposes. An instructional rubric delineates the various dimensions or levels of an assessed construct, defining benchmarks for each, and can yield quantitative scores.57,58 The rubric format—used for both formative and summative purposes—may vary, though common features include quality level gradations on a continuum of strong to weak work product, as well as a relatively complex list of criteria or “what counts” in completing a project or mastering a skill.59 Our close examination of the four existing approaches led us to select an analytical instructional rubric as the evaluative paradigm for our own tool.

Iterative development of the initial rubric

Once we had determined which approach to use, we began the process of developing an actual analytical instructional rubric to assess students' reflective narratives. This was accomplished through an accepted methodology of thorough model review, listing criteria, designating quality levels, creating a rubric draft, and revising the draft.59 Several iterative cycles of development were required.

The first iterative cycle: Initial reflection rubric.

In the first cycle, we constructed an initial reflection rubric based on our comprehensive analysis of relevant theoretical models of reflection and existing reflection measurement instruments.60 After considering a broad range of possible elements, we reached consensus on five levels of reflection with associated criteria based on the theories of Schon,20 Boud and colleagues,22 Moon,23 and Mezirow.24 This rubric included the following levels: Level 1: Nonreflective: Habitual Action; Level 2: Nonreflective: Thoughtful Action; Level 3: Reflective; Level 4: Critically Reflective; and Level 5: Transformative Learning. We developed criteria or dimensions for each level (e.g., descriptive versus reflective stance, attending to emotions) based on a synthesis of literature descriptors. A session aimed at standardizing scoring on three RW samples followed. In this session, we presented the rationale for scoring, discussed and resolved scoring discrepancies, and reached consensus about scoring.

We obtained full formal institutional review board approval from the Memorial Hospital of Rhode Island prior to narrative analyses to allow cycles of empirical testing on actual examples of randomly selected medical students' RW. We applied the initial rubric to a dataset of all 93 second-year students' self-selected “best” RW “field notes” collected for evaluation (2008–2009). Three raters applied the initial reflection rubric to code subsets of these field notes, with an overlap of 10 randomly selected notes for reliability calculation, and interrater reliability was determined on these 10 overlapping notes using intraclass correlation (ICC; see Table 1). The distribution of students in each reflection level, according to our coding, was as follows: Level 1 = 0, Level 2 = 17, Level 3 = 38, Level 4 = 28, and Level 5 = 10.

[Table 1: interrater reliability (ICC) and internal consistency results across rubric iterations; presented as an image in the original.]

The second iterative cycle: The REFLECT rubric.

Next, we set out to modify the rubric on the basis of insights gained from further literature review (including review of literature gleaned from the original search, plus new search results), application of the initial reflection rubric to students' reflective narratives, and feedback obtained when we presented our initial findings at conferences. We reached consensus about definitions for the four reflection levels retained from the initial rubric and two possible outcomes of the reflective process, as well as a more precise delineation of criteria presented as a continuum of development. The four levels carried over from the initial rubric were Nonreflective: Habitual Action; Nonreflective: Thoughtful Action; Reflective; and Critically Reflective. The two possible learning outcomes require achievement of the Critically Reflective level and were defined as transformative learning and confirmatory learning.

We refined and elaborated criteria for mastering each of the four levels: voice and presence, description of conflict or disorienting dilemma (insight and reflection), attending to emotions, and critical analysis and meaning making. We also identified attention to assignment as an optional “minor” criterion to be addressed when relevant. During this iteration, we named the rubric REFLECT.

Using three raters, we applied the second iteration rubric to a sample of 10 new reflective narratives from the second-year Doctoring course and the family medicine clerkship and a sample of 10 field notes from a general surgery clerkship and again determined interrater reliability using ICC (see Table 1).

Third iteration.

After improving the tool and retesting it during the second iteration, we further reevaluated, refined, and redesigned the REFLECT in a third iteration. To empirically test the tool and determine its interrater reliability, we applied the rubric to a sample of 10 family medicine clerkship reflective narratives. We then applied the rubric to all 92 second-year Doctoring course students' self-selected “best” reflective narratives (2009–2010). We scored all narratives independently; four raters then independently scored 60 of the narratives, randomly split into batches of 10, so that each narrative was scored by two raters, and we computed ICCs for all six possible rater pairings.
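
The six pairings correspond to the C(4,2) = 6 ways of choosing 2 raters from 4. The following Python sketch is our illustration of this design, not the authors' code; the rater labels and the mapping of one batch per rater pair are assumptions consistent with, but not stated in, the text:

```python
# Illustrative reconstruction of the rater-assignment scheme: 60 narratives
# split into six batches of 10, with each batch double-scored by one of the
# C(4,2) = 6 possible rater pairs so that pairwise ICCs can be computed.
from itertools import combinations

raters = ["A", "B", "C", "D"]           # four raters (labels hypothetical)
narratives = list(range(60))            # 60 narrative IDs
batches = [narratives[i:i + 10] for i in range(0, 60, 10)]

# Pair each of the six rater combinations with one batch of 10 narratives.
assignment = dict(zip(combinations(raters, 2), batches))

for pair, batch in assignment.items():
    print(pair, "score narratives", batch[0], "through", batch[-1])
```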

Present iteration.

The present iteration of the REFLECT was informed by methodological consultation with additional content and psychometric experts and further close reading of the relevant literature. Our aim was to more precisely calculate interrater reliability data and to deliberate the role of the REFLECT rubric in formative versus summative assessment. Given our primary emphasis on analyzing quality of reflection within RW in a developmental context, we decided to omit assigned numbers for reflection “levels” to encourage use of the rubric for formative rather than summative purposes (Appendix 1).

REFLECT rubric application

The process of applying the REFLECT rubric to a reflective narrative consists of four steps:

1. Read the entire narrative.

2. Fragmentation: Zoom in to details (phrases/sentences) of the narrative to assess the presence and quality of all criteria (see Appendix 1). Determine which level each criterion represents.

3. Gestalt: Zoom out to consider the overall gestalt of the narrative (while taking into consideration the detailed analysis of Step 2). Determine which level the narrative as a whole achieves. If the Critically Reflective level is achieved, determine whether either or both learning outcomes (transformative or confirmatory learning) were also achieved.

4. Defend the assignment of level and learning outcomes with examples from the text. Do not “read between the lines.”

A sample reflective narrative and REFLECT rubric analysis is presented in Appendix 2. Another example can be seen in Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A68.
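
To make the four-step workflow concrete, the following Python sketch records the products of each step as a simple data structure. This is an illustration only: the level and criterion names come from the rubric as described above, but the field names are ours, and the gestalt level of Step 3 remains a holistic rater judgment that is recorded, not computed:

```python
# A minimal sketch of the REFLECT workflow as a data structure (assumed
# representation; the published rubric is an image in Appendix 1).
from dataclasses import dataclass, field

LEVELS = [
    "Habitual Action",       # nonreflective
    "Thoughtful Action",     # nonreflective
    "Reflective",
    "Critically Reflective",
]
CRITERIA = [
    "voice and presence",
    "description of conflict or disorienting dilemma",
    "attending to emotions",
    "critical analysis and meaning making",
]

@dataclass
class ReflectScore:
    criterion_levels: dict = field(default_factory=dict)  # Step 2: fragmentation
    gestalt_level: str = ""                               # Step 3: holistic judgment
    evidence: list = field(default_factory=list)          # Step 4: quotes from the text
    transformative: bool = False   # learning outcomes, meaningful only when the
    confirmatory: bool = False     # Critically Reflective level is achieved

# Hypothetical example of one rater's completed analysis.
score = ReflectScore(
    criterion_levels={c: "Reflective" for c in CRITERIA},
    gestalt_level="Reflective",
    evidence=["'I began to question my assumption that...'"],  # invented quote
)
print(score.gestalt_level, score.criterion_levels["attending to emotions"])
```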

Statistical analyses

We applied single-measure ICCs61 to all datasets and computed ICCs for each iteration of the REFLECT in the pilot developmental phases (Table 1). We used SPSS version 11.0 (IBM Corporation, Armonk, New York) to calculate ICCs. An ICC measures interrater reliability for two or more raters when ratings are ordinal or continuous and may be treated as interval data; it may also be used to assess test–retest reliability. An ICC may be conceptualized as the ratio of between-subjects variance to total variance. In single-measure reliability, individual ratings constitute the unit of analysis (i.e., single-measure reliability estimates the reliability of a single judge's rating). The single-measure ICC is the more conservative estimate and represents how much agreement one rater can be expected to have with other raters. We chose ICCs because the levels in the rubric iterations are ordinal data whose gradations are interpretable, with no “natural zero.” Each application of the developing rubric involved at least three raters.
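
The article states only that single-measure ICCs61 were used, not which Shrout and Fleiss form; the following Python sketch shows one common choice, ICC(2,1) (two-way random effects, absolute agreement, single measure), computed directly from the variance decomposition described above. The ratings matrix here is simulated and purely illustrative:

```python
# Sketch (not the authors' code) of the single-measure ICC(2,1) of
# Shrout and Fleiss (reference 61) for an n-narratives x k-raters
# matrix of rubric levels coded 1-4.
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Single-measure ICC, two-way random effects, absolute agreement."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # one mean per narrative
    col_means = ratings.mean(axis=0)   # one mean per rater

    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between narratives
    jms = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    ss_err = (((ratings - grand) ** 2).sum()
              - k * ((row_means - grand) ** 2).sum()
              - n * ((col_means - grand) ** 2).sum())
    ems = ss_err / ((n - 1) * (k - 1))                      # residual mean square

    return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)

# Hypothetical ratings: 10 overlapping narratives scored by 2 raters.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(10, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 2)), 1, 4).astype(float)
print(round(icc_2_1(ratings), 3))
```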

As Table 1 demonstrates, the ICCs varied across iterations. The decrease between iterations 1 and 2 may be attributable to insufficient rater training and/or lack of clarity in the definitions of levels and criteria. Some of the ensuing variation may be due to the use of different samples of field notes, each of which may have had different qualities, as well as to the small sample sizes in iterations 1 to 4. Further variation may be attributable to alterations in the criteria of the rubric's rating scale, made as part of the iterative process of scale development. The ICC for the current iteration is likely more stable because it is based on 60 field notes, though this is still a relatively small sample. Internal consistency, measured by Cronbach alpha, is also reported in Table 1 and ranges from 0.644 to 0.899.
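
For completeness, a minimal sketch of Cronbach alpha over the same kind of narratives-by-raters matrix, treating raters as "items"; this is one plausible reading of the internal consistency figures in Table 1, as the authors' exact computation is not specified:

```python
# Illustrative Cronbach alpha with raters treated as items (an assumption,
# not a description of the authors' analysis).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()   # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of per-narrative sums
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 10-narratives x 2-raters matrix of levels coded 1-4.
rng = np.random.default_rng(1)
base = rng.integers(1, 5, size=(10, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 2)), 1, 4).astype(float)
print(round(cronbach_alpha(ratings), 3))
```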


Discussion

RW initiatives within medical education have prospered as medical educators are called on to prepare students to become reflective clinicians.3,62 Increasing use of such pedagogy has led to interest in formal assessment of achieved level and qualities of reflection within narrative. The rationale for conducting theory-informed evaluation of RW includes obtaining a deeper understanding of the professional development of students, designing best teaching practices, and evaluating curriculum outcomes and effectiveness. Although written essay methodology may tap into important competencies such as empathy, personal reflection, and professionalism, effective assessment of RW can be challenging.32 We obtained encouraging results in ease of application and interrater reliability with the REFLECT rubric.

We deliberately chose an analytic rubric evaluation paradigm because it promotes a theory-informed evaluation of RW and supports learning and metacognition (“the act of monitoring and regulating one's thinking”).59 The content validity of the resulting framework is sound given the iterative process of instrument development we employed. Additionally, the components of the rubric (levels of reflection, criteria defining each level, and outcomes) are grounded in the reflection literature. Rubric levels capture developmental progression from habitual action to critical reflection. Criteria for each level are based in theory and clearly explicated. Fundamental, core processes of the reflection construct, including presence, recognizing “disorienting” dilemmas, critical analysis of assumptions, attending to emotions, and deriving meaning from the exercise, are all assessed with the rubric. An additional distinguishing feature of the REFLECT is the two possible learning outcomes of critical reflection—new understanding (transformative learning) and/or confirming one's frames of reference or meaning structures (confirmatory learning). Both of these delineated outcomes have relevance for gaining insight to guide present and future behavior.

The REFLECT rubric is currently used within AMS for structured RW paradigms within the Doctoring course and family medicine clerkship, though we could envision its application for products of spontaneous in-class RW assignments as well. Written feedback about students' RW is currently standard within the Doctoring course and the family medicine clerkship curricula, and faculty can use the BEGAN43 and/or REFLECT rubric tools to formulate this written feedback. Faculty assess overall “level” of reflection for research purposes, but students do not receive this information as feedback. Faculty do not assess quality of writing, in keeping with recent evidence of a lack of significant relationship between quality of writing and reflective content.63

Recently reported rubrics for “grading” RW exhibit similarities to and differences from the REFLECT. O'Sullivan and colleagues50 used a similar statistical approach in developing their reflection rubric, yet their rubric does not include various reflection domains. Kember and colleagues52 introduce a “transitional” phase between each of four reflection categories, though these categories are not elaborated. McNeill and colleagues51 offered a relatively cursory grading system without clear reference to theoretical underpinnings, and Devlin and colleagues'53 rubric is described as a feedback rubric based on a single typology. In general, we propose that the REFLECT rubric achieves a more comprehensive assessment than these recent rubric design efforts, strengthening its credibility within a growing pool of instruments designed for similar purposes.

The process of rubric development involved refining a pilot rubric through further immersion in the literature, application of the rubric to various datasets, and discussion until consensus was reached on specific criteria. The ICC scores at the present iteration demonstrate acceptable interrater reliability. Feasibility of scoring and acceptability to both raters and students are promising based on feedback from faculty development workshops and use in student instruction. We have received positive feedback about the REFLECT rubric for formative assessment of students' RW from faculty development workshops locally, nationally, and internationally. Further investigations, including feedback queries for students and faculty at AMS and multi-institutional collaboration, are planned. The generalizability of the REFLECT rubric is potentially limited, given its development and testing within a single institution, but we are currently undertaking efforts to improve generalizability by using the rubric within various health professions curricula at multiple institutions. We hope to soon complete and distribute a rubric “codebook” containing illustrative examples of rubric application to narratives to enhance feasibility and promote generalizability. Future directions include assessment of longitudinal reflective narratives at various stages of the professional life cycle and analysis of the effects of variables such as writing prompt design on rubric results.

We note some limitations to our work. Although we provide ample content evidence, further support from studies with larger samples will be required to establish robust internal structure validity. In addition, we recommend testing this rubric against other validated reflection evaluation tools.

We propose the use of the REFLECT rubric as a developmental tool within medical education. It is designed to help guide our learners toward achieving greater breadth and depth of reflective capacity within the developmental trajectory of becoming reflective practitioners.62 Such formative assessment and feedback may help foster expertise, promoting more effective self-evaluation64 and self-directed learning,65 as well as more thoughtful approaches to patient care.66 Although our efforts at standardization have yielded promising psychometric properties, we recommend using the REFLECT rubric for formative rather than summative assessment. In contrast to “quantifying” or “grading,” which may risk a lack of reflective authenticity by encouraging more formulaic approaches to reflection,67 we envision the REFLECT rubric as providing qualitative anchors to help educators both assess development of reflective capacity dimensions and formulate constructive, individualized feedback to students' reflective narratives. At this time, we counter calls for rubrics to be used for quantitative and summative assessment of learners.68 We urge caution in this regard because such use may prove counterproductive, potentially inhibiting the development of reflective capacity within interactive RW.

We plan to study the use of the REFLECT rubric to enhance the educational impact of RW feedback. We hope to examine both faculty's and students' perspectives on the effectiveness of rubric application for feedback formulation and promotion of reflective capacity. Given the current emphasis in medical education on measurable objectives, future research should determine the extent to which what is measured in text is a valid indicator of reflective activity and how this predicts or correlates with professionalism. Further research is needed to explore concurrent validity through the use of reflection scales,54 thematic analyses,26 and/or measures of reflective practice outcomes. We propose the inclusion of our rubric paradigm within such an approach as a means of enhancing “state of the art” reflection assessment. Studying medical schools that teach reflective practice has been suggested as a way to determine whether they are more likely to produce physicians who are able to improve patient care.69 Thus, the connection between medical education modalities such as RW-enhanced reflective capacity and quality clinical outcomes69 warrants further investigation.


Conclusions

RW and its assessment may enhance our understanding of the professional development of physicians and help guide pedagogic initiatives aimed at supporting this process. Metacognitive skills, including reflection, as well as dimensions of professionalism in effective patient care (such as self-awareness, empathy, and insight) and physician well-being, can potentially be fostered through RW exercises.30 We are hopeful that longitudinal investigations of RW exercises using the REFLECT rubric will assist educators as well as learners as they reflect on the efficacy of such curriculum initiatives. In essence, the use of the REFLECT rubric as part of the assessment tool kit has the potential to broaden the question of “How do doctors think?”70 to “How can we help doctors to think?” As interdisciplinary interest in RW and the role of reflective capacity in health care practice continues to grow, increased rigor in theory building, curriculum implementation, assessment, and outcomes research is called for to demonstrate the authenticity and sustainability of such constructs. Such efforts can help realize the promise of RW as a vehicle for promoting reflective capacity and its role in building professional identity, as well as for guiding development of medical expertise, leading to the formation of mindful, compassionate, and competent practitioners.


Acknowledgments:

The authors gratefully acknowledge Dr. Jonathan White for use of narratives in determination of rubric reliability, Drs. David Cook and William McGaghie for their comments on statistical design, and Dr. Rita Charon for insightful dialogue on rubric application. The authors also gratefully acknowledge Tina Charest, MD, and Eugene Cone for the use of their reflective narratives.


Funding/Support:

None.


Other disclosures:

None.


Ethical approval:

This study was reviewed and approved by the institutional review board of the Memorial Hospital of Rhode Island.


References

1Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42:468–475.

2Driessen E, van Tartwijk J, Dornan T. The self critical doctor: Helping students become more reflective. BMJ. 2008;336:827–830.

3Plack MM, Greenberg L. The reflective practitioner: Reaching for excellence in practice. Pediatrics. 2005;116:1546–1552.

4Stern DT, Papadakis M. The developing physician—Becoming a professional. N Engl J Med. 2006;355:1794–1799.

5Sandars J. The use of reflection in medical education: AMEE guide no. 44. Med Teach. 2009;31:685–695.

6Quirk M. Intuition and Metacognition in Medical Education. New York, NY: Springer Publishing Co.; 2006.

7Plack MM, Driscoll M, Marquez M, Cuppernull L, Maring J, Greenberg L. Assessing reflective writing on a pediatric clerkship by using a modified Bloom's taxonomy. Ambul Pediatr. 2007;7:285–291.

8Accreditation Council for Graduate Medical Education. The ACGME learning portfolio. http://www.acgme.org/acWebsite/portfolio/learn_alp_aboutalp.asp. Accessed September 28, 2011.

9General Medical Council. Tomorrow's Doctors: Outcomes and Standards for Undergraduate Medical Education. London, UK: General Medical Council; 2009.

10Sargeant JM, Mann KV, van der Vleuten CP, Metsemakers JF. Reflection: A link between receiving and using assessment feedback. Adv Health Sci Educ. 2009;14:399–410.

11Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302:1330–1331.

12Bryan CS, Babelay AM. Building character: A model for reflective practice. Acad Med. 2009;84:1283–1288.

13Epstein R. Reflection, perception and the acquisition of wisdom. Med Educ. 2008;42:1048–1050.

14Guillemin M, McDougall R, Gillam L. Developing “ethical mindfulness” in continuing professional development in healthcare: Use of a personal narrative approach. Camb Q Healthc Ethics. 2009;18:197–208.

15Huddle TS. Viewpoint: Teaching professionalism: Is medical morality a competency? Acad Med. 2005;80:885–891.

16Bolton G. The art of medicine. Writing values. Lancet. 2009;374:20–21.

17Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226–235.

18Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121(5 suppl):S2–S23.

19Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: A systematic review. Adv Health Sci Educ Theory Pract. 2009;14:595–621.

20Schon DA. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983.

21Smith MK. Donald Schon: Learning, practice, and change. Infed: The Encyclopaedia of Informal Education. http://www.infed.org/thinkers/et-schon.htm. Accessed September 28, 2011.

22Boud D, Keogh R, Walker D, eds. Reflection: Turning Experience Into Learning. London, UK: Kogan Page; 1985.

23Moon J. A Handbook of Reflective and Experiential Learning. London, UK: Routledge; 1999.

24Mezirow J. Transformative Dimensions of Adult Learning. San Francisco, Calif: Jossey-Bass; 1991.

25Charon R. Narrative Medicine—Honoring the Stories of Illness. New York, NY: Oxford University Press; 2006.

26Levine RB, Kern DE, Wright SM. The impact of prompted narrative writing during internship on reflective practice: A qualitative study. Adv Health Sci Educ Theory Pract. 2008;13:723–733.

27Brady DW, Corbie-Smith G, Branch WT. “What's important to you?”: The use of narratives to promote self-reflection and to understand the experiences of medical residents. Ann Intern Med. 2002;137:220–223.

28Kumagai AK. A conceptual framework for the use of illness narratives in medical education. Acad Med. 2008;83:653–658.

29Wald HS, Davis SW, Reis SP, Monroe AD, Borkan JM. Reflecting on reflections: Enhancement of medical education curriculum with structured field notes and guided feedback. Acad Med. 2009;84:830–837.

30Shapiro J, Kasman D, Shafer A. Words and wards: A model of reflective writing and its uses in medical education. J Med Humanit. 2006;27:231–244.

31Avrahami E, Reis SP. Narrative medicine. Isr Med Assoc J. 2009;11:216–219.

32Kuper A. Literature and medicine: A problem of assessment. Acad Med. 2006;81(10 suppl):S128–S137.

33Meitar D, Karnieli-Miller O, Eidelman S. The impact of senior medical students' personal difficulties on their communication patterns in breaking bad news. Acad Med. 2009;84:1582–1594.

34Paterson BL. Developing and maintaining reflection in clinical journals. Nurse Educ Today. 1995;15:211–220.

35Chretien K, Goldman E, Faselis C. The reflective writing class blog: Using technology to promote reflection and professional development. J Gen Intern Med. 2008;23:2066–2070.

36Wald HS. Guiding our learners in reflective writing. Lit Med. In press.

37Wald HS, Reis SP. A piece of my mind. Brew. JAMA. 2008;299:2255–2256.

38Wald HS. Teaching and learning moments. A reflective moment. Acad Med. 2009;84:633.

39Wald HS. I've got mail. Fam Med. 2008;40:393–394.

40Monroe A, Frazzano A, Ferri F, Borkan J, Macko M, Dube C. Doctoring: Course Syllabus. Providence, RI: Warren Alpert Medical School of Brown University; 2005–2007.

41Taylor J, Frazzano A, Macko M. Doctoring: Course Syllabus. Providence, RI: Warren Alpert Medical School of Brown University; 2008–2011.

42Wald HS, Anthony DA. Fostering Reflective Capacity With Interactive Reflective Writing. Alpert Medical School Family Medicine Clerkship Faculty and Student Guides. Providence, RI: Warren Alpert Medical School of Brown University: 2009–2011.

43Reis SP, Wald HS, Monroe AD, Borkan JM. Begin the BEGAN (The Brown Educational Guide to the Analysis of Narrative)—A framework for enhancing educational impact of faculty feedback to students' reflective writing. Patient Educ Couns. 2010;80:253–259.

44Wallman A, Lindblad AK, Hall S, Lundmark A, Ring L. A categorization scheme for assessing pharmacy students' levels of reflection during internships. Am J Pharm Educ. 2008;72:1–10.

45Chirema KD. The use of reflective journals in the promotion of reflection and learning in post-registration nursing students. Nurse Educ Today. 2007;27:192–202.

46Williams RM, Sundelin G. Assessing the reliability of grading reflective journal writing. J Phys Ther Educ. 2000;14:23–26.

47Plack MM, Driscoll M, Blissett S, McKenna R, Plack TP. A method for assessing reflective journal writing. J Allied Health. 2005;34:199–208.

48Pee B, Woodman T, Fry H, Davenport E. Appraising and assessing reflection in students' writing on a structured worksheet. Med Educ. 2002;36:575–585.

49Kember D, Jones A, Loke A, et al. Determining the level of reflective thinking from students' written journals using a coding scheme based on the work of Mezirow. Int J Lifelong Educ. 1999;18:18–30.

50O'Sullivan PS, Aronson L, Chittenden E, Niehaus B. Reflective ability rubric and user guide. MedEdPORTAL. August 26, 2010.

51McNeill H, Brown JM, Shaw NJ. First year specialist trainees' engagement with reflective practice in the e-portfolio. Adv Health Sci Educ. 2010;15:547–558.

52Kember D, McKay J, Sinclair K, Wong FKY. A four-category scheme for coding and assessing the level of reflection in written work. Assess Eval Higher Educ. 2008;33:369–379.

53Devlin MJ, Mutnick A, Balmer D, Richards BF. Clerkship-based reflective writing: A rubric for feedback. Med Educ. 2010;44:1143–1144.

54Aukes LC, Geertsma J, Cohen-Schotanus J, Zwierstra RP, Slaets JPJ. The development of a scale to measure personal reflection in medical practice and education. Med Teach. 2007;29:177–182.

55Goodman LA. Snowball sampling. Ann Math Stat. 1961;32:148–170.

56Boulet JR, Rebbecchi TA, Denton EC, McKinley DW, Whelan GP. Assessing the written communication skills of medical school graduates. Adv Health Sci Educ Theory Pract. 2004;9:47–60.

57Musial JL, Newman LR, Lown BA, Jones RN, Johansson A, Schwartzstein RM. Developing a scoring rubric for resident research presentations: A pilot study. J Surg Res. 2007;142:304–307.

58Newman LR. Developing a peer assessment of lecturing instrument: Lessons learned. Acad Med. 2009;84:1104–1110.

59Andrade H. Using rubrics to promote thinking and learning. Educ Leadersh. 2000;57:1–7.

60Wald HS, Reis SP, Borkan JM. Really good stuff: Development of a reflection rubric to evaluate students' reflective writing. Med Educ. 2009;43:1110–1111.

61Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86:420–428.

62Epstein R. Mindful practice. JAMA. 1999;282:833–839.

63Aronson L, Niehaus B, DeVries CD, Siegel JR, O'Sullivan P. Do writing and storytelling skill influence assessment of reflective ability in medical students' written reflections? Acad Med. 2011;33:220–225.

64Sadler DR. Evaluation and the improvement of academic learning. J Higher Educ. 1983;54:60–79.

65Nicol DJ, Macfarlane-Dick D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud Higher Educ. 2006;31:199–218.

66Kumagai AK. Forks in the road: Disruptions and transformation in professional development. Acad Med. 2010;85:1819–1820.

67Wald HS, Reis SP. Beyond the margins: Reflective writing and development of reflective capacity in medical education. J Gen Intern Med. 2010;25:746–749.

68Aronson L. Twelve tips for teaching reflection at all levels of medical education. Med Teach. 2011;33:200–205.

69Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.

70Montgomery K. How Doctors Think: Clinical Judgment and the Practice of Medicine. Oxford, UK: Oxford University Press; 2006.

[Appendix 1: the REFLECT rubric; presented as an image in the original.]

[Appendix 2: a sample reflective narrative with REFLECT rubric analysis; presented as an image in the original.]



© 2012 Association of American Medical Colleges
