Feasibility and Outcomes of Implementing a Portfolio Assessment System Alongside a Traditional Grading System

O’Brien, Celia Laird PhD; Sanguino, Sandra M. MD, MPH; Thomas, John X. PhD; Green, Marianne M. MD

doi: 10.1097/ACM.0000000000001168
Research Reports

Purpose Portfolios are a powerful tool to collect and evaluate evidence of medical students’ competence across time. However, comprehensive portfolio assessment systems that are implemented alongside traditional graded curricula at medical schools in the United States have not been described in the literature. This study describes the development and implementation of a longitudinal competency-based electronic portfolio system alongside a graded curriculum at a relatively large U.S. medical school.

Method In 2009, the authors developed a portfolio system that served as a repository for all student assessments organized by competency domain. Five competencies were selected for a preclerkship summative portfolio review. Students submitted reflections on their performance. In 2014, four clinical faculty members participated in standard-setting activities and used expert judgment and holistic review to rate students’ competency achievement as “progressing toward competence,” “progressing toward competence with some concern,” or “progressing toward competence pending remediation.” Follow-up surveys measured students’ and faculty members’ perceptions of the process.

Results Faculty evaluated 156 portfolios and showed high levels of agreement in their ratings. The majority of students achieved the “progressing toward competence” benchmark in all competency areas. However, 31 students received at least one concerning rating, which was not reflected in their course grades. Students’ perceptions of the system’s ability to foster self-assessment were mixed.

Conclusions The portfolio review process allowed faculty to identify students with a concerning rating in a behavioral competency who would not have been identified in a traditional grading system. Identification of these students allows for intervention and early remediation.

C.L. O’Brien is instructor, Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

S.M. Sanguino is associate professor, Departments of Pediatrics and Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

J.X. Thomas is professor, Departments of Physiology and Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

M.M. Green is associate professor, Departments of Medicine and Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: The institutional review board at Northwestern University reviewed this study and found it to be exempt (#STU00102669).

Previous presentations: A poster describing the portfolio system at the Northwestern University Feinberg School of Medicine was presented at the Association of American Medical Colleges Medical Education Meeting, November 2014, Chicago, Illinois.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A337.

Correspondence should be addressed to Marianne M. Green, Northwestern University Feinberg School of Medicine, 303 E. Chicago Ave., 1-003 Ward Building, Chicago, IL 60611; telephone: (312) 503-0394; e-mail: m-green@northwestern.edu.

Competency-based medical education (CBME) emphasizes ability, is learner centered, and focuses on ensuring that graduates have the skills to meet the needs of the health care system. Although the theoretical construct of CBME is sound, its implementation has been challenging.1–3 Medical educators have access to various instruments that yield reliable data allowing valid judgments about students’ achievement in areas such as medical knowledge and technical skills. However, behavioral competencies, such as professionalism, communication skills, and teamwork, are inherently more difficult to measure.

There has been criticism that breaking down complex tasks into smaller, measurable units may trivialize the task and risk the validity of the assessment.2,4 The recent adoption of entrustable professional activities highlights the need for medical schools to overcome these challenges5 and develop assessment systems across multiple competency domains. van der Vleuten et al6 and others7–9 argue that a strong assessment system includes a variety of instruments used over time, in different contexts, and by different evaluators to collect data. A robust picture of student achievement emerges when these evaluations are arranged together in a deliberate, longitudinal way and mapped to educational outcomes. This also allows for expert review of aggregated assessments to document students’ achievement across a variety of competency domains. An electronic portfolio provides an ideal tool to collect and evaluate this evidence and also may capture some of the other necessary conditions for CBME, including self-directed learning and reflection.10,11

Portfolio systems have been used for many years in Europe, Australia, and Canada.12–15 Their adoption in the United States for medical education has largely been limited to isolated programs and single competency areas.16,17 However, one medical school accredited by the Liaison Committee on Medical Education (LCME) with a relatively small class size of 32 students has no traditional grades and uses a summative portfolio review as the basis of all promotion decisions.18 The feasibility and impact of an electronic portfolio used for formative and summative assessments at a larger U.S. medical school with a traditional grading system have not been reported. The aim of this work was to develop and implement a comprehensive competency-based electronic portfolio system alongside a graded curriculum at a larger LCME-accredited medical school.

Method

Setting

Northwestern University Feinberg School of Medicine (Feinberg) is an urban medical school with approximately 160 students per class. Each class is divided into four colleges of approximately 40 students, each of which is led by a clinical faculty mentor. College mentors provide personal and professional support for students within their smaller learning communities and facilitate portfolio reviews as described below.

In 2008, Feinberg developed eight competencies to serve as the foundation for a new curriculum and assessment system, which launched in 2012. Six of the eight Feinberg competencies are based on the Accreditation Council for Graduate Medical Education competencies.19 The two additional competencies address community engagement and personal awareness and self-care. Each competency domain is divided into subcompetencies that include outcome-based educational objectives and developmental benchmarks (see Chart 1). All curricular objectives and assessment tools are mapped to these competencies and subcompetencies.

Chart 1

Feinberg’s undergraduate medical curriculum is divided into three phases. Phase 1 is the preclerkship phase, which lasts 19 months. Phase 2 includes required clerkships, and Phase 3 includes advanced clerkships and electives (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A337). In Phase 1, students are assigned a pass/fail grade after each integrated 6- to 10-week block based on their performance on written exams, objective structured clinical exams (OSCEs), small-group work, and other assignments. In Phases 2 and 3, students are assigned a grade of honors, high pass, pass, or fail for each required clerkship.

The goals of the Feinberg portfolio assessment system are to (1) provide a longitudinal perspective of students’ competency achievement and (2) foster skills for self-reflection and improvement. We used previously published best practices20,21 to develop an electronic portfolio incorporating student assessments organized by competency. These include peer evaluations, faculty evaluations of small-group and clinical performance, and OSCE results.

Students, academic deans, and college mentors have access to the portfolios. Since 2009, students have been required to write a reflection every six months addressing their progress in each competency and identifying areas for improvement by creating learning plans in at least two subcompetencies. College mentors are trained to facilitate a review of these formative portfolios and meet with each student semiannually to discuss her or his assessments, self-reflections, and learning plans. Mentors advise and counsel their students but have no role in grading. Because only students and mentors have access to these self-reflections, the formative review provides an opportunity for students to reflect freely on their performance.

Summative portfolio review process

We hypothesized that a summative portfolio review would better demonstrate achievement in competencies such as professionalism, teamwork, and communication skills than pass/fail grades. Additionally, we wanted to identify students who would benefit from remediation prior to starting clerkships. In 2012, once we had developed enough individual assessments of the targeted competencies to be confident that robust judgments about students’ achievement could be made, we implemented a summative portfolio review.

Toward the end of Phase 1, students prepare summative reflections for submission to the Summative Portfolio Review Committee. These reflections include a self-assessment about achievement in each competency domain; students must address any subcompetency that has at least 10 associated assessments. To support their self-assessments, students can electronically “tag” evidence from their portfolio or upload external documents that address specific competencies. Students demonstrate self-directed learning by referring to their prior learning plans and by showing improvement toward competency achievement.
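To make this selection rule concrete, the following is a minimal sketch (in Python) of how assessments mapped to competencies and subcompetencies might be filtered to identify the subcompetencies a student must address in the summative reflection, using the 10-assessment threshold described above. The class and field names are hypothetical illustrations, not the school’s actual software or data model.

from collections import defaultdict

# Hypothetical record of a single assessment mapped to the competency framework.
# Field names are illustrative only, not Feinberg's actual schema.
class Assessment:
    def __init__(self, competency, subcompetency, source, result):
        self.competency = competency        # e.g., "Effective communication and interpersonal skills"
        self.subcompetency = subcompetency  # e.g., a subcompetency label within that domain
        self.source = source                # e.g., "OSCE", "peer evaluation", "small-group evaluation"
        self.result = result                # score or narrative comment

def subcompetencies_to_address(assessments, threshold=10):
    """Return the subcompetencies with at least `threshold` associated assessments,
    i.e., those the student must address in the summative reflection."""
    counts = defaultdict(int)
    for a in assessments:
        counts[(a.competency, a.subcompetency)] += 1
    return sorted(key for key, count in counts.items() if count >= threshold)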

Five competencies were selected for the summative portfolio review process because a sufficient number of related assessments had been completed and because they were well suited to portfolio review. These included professional behavior and moral reasoning, systems awareness and team-based care, effective communication and interpersonal skills, continuous learning and quality improvement, and patient-centered medical care. Curriculum leaders and content experts developed a summative portfolio review rating instrument that linked the assessed subcompetencies to the following anchors: (1) target for improvement, (2) meets expectations, and (3) exceeds expectations. Descriptive behaviors were assigned to each anchor. The tool was designed using an iterative process that involved several rounds of review and feedback from students and mentors to ensure agreement and clarity.

In March 2014, we recruited four clinical faculty members to serve as reviewers on the Summative Portfolio Review Committee based on their experience educating and assessing medical trainees. All were familiar with the Feinberg competency framework, assessment processes, and components of the revised curriculum. Reviewers participated in seven hours of training (two sessions). The first session introduced them to the electronic portfolio system and associated assessment data. All reviewers were assigned a sample of five student portfolios to review. At the second session, reviewers completed a standard-setting exercise in which they discussed the reasoning behind their ratings and reached consensus through discussion and debate. Such exercises have been recommended by others to improve the reliability of judgments22 and allowed the reviewers to develop a shared understanding of the expectations for student achievement.

Each portfolio was evaluated by two reviewers who had access to the entire portfolio, with the exception of the formative reflections and learning plans that the students shared with their mentors prior to the summative review. To maximize the reliability of their assessments, the reviewers independently scored each portfolio, then met to reconcile any areas of disagreement. During the reconciliation process, reviewers again used discussion and debate to reach consensus. A third review was requested if consensus could not be reached (see Figure 1). Each reviewer was given four weeks to evaluate approximately 80 portfolios. Midway through the process, the full committee met to address questions and clarify concerns.

Figure 1

To assess competency achievement, reviewers evaluated (1) the contents of the portfolio, including any institutional assessment data and additional materials uploaded by students; (2) the quality of the reflections indicating students’ capacity for self-assessment of their strengths and weaknesses; and (3) the students’ ability for self-directed learning and improvement.

After scoring each subcompetency, reviewers made an overall judgment for each competency domain. The following outcomes were used: (1) progressing toward competence, (2) progressing toward competence with some concern, and (3) progressing toward competence pending remediation. Reviewers also provided narrative feedback highlighting each student’s individual strengths and weaknesses.

Students who received a “concern” rating were able to begin their clerkships but were required to meet with their mentor and develop a learning plan for improvement. Students who received a “pending remediation” rating were required to meet with the Educational Support Committee prior to beginning their clerkships to determine an appropriate remediation strategy. Students who disagreed with the outcome of the portfolio assessment process could appeal the results to the Student Promotion Committee.

To evaluate the outcomes of this process, we recorded each reviewer’s decisions before and after reconciliation to calculate the percent agreement between partners. We used descriptive statistics to analyze the frequency of final decisions across the cohort and within each competency.
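As an illustration of these calculations only (the underlying data set is not published with this article, and the variable names below are hypothetical), percent agreement and decision frequencies can be computed along the following lines in Python:

import pandas as pd

# One row per student per competency domain: each reviewer's independent rating
# before reconciliation and the final decision after reconciliation.
ratings = pd.DataFrame({
    "competency": ["professionalism", "communication", "professionalism"],
    "reviewer1_initial": ["progressing", "concern", "progressing"],
    "reviewer2_initial": ["progressing", "progressing", "progressing"],
    "final_decision": ["progressing", "concern", "progressing"],
})

# Percent agreement between paired reviewers before reconciliation.
pre_agreement = (ratings["reviewer1_initial"] == ratings["reviewer2_initial"]).mean() * 100

# Frequency of final decisions across the cohort and within each competency domain.
overall_counts = ratings["final_decision"].value_counts()
by_competency = ratings.groupby("competency")["final_decision"].value_counts()

print(f"Agreement before reconciliation: {pre_agreement:.0f}%")
print(overall_counts)
print(by_competency)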

We also surveyed students and reviewers after completion of the summative review process in 2014. The student survey contained 47 items about their experiences with the process, including time commitment, and their perceptions of portfolio assessment. The survey was administered using REDCap electronic data capture tools hosted at Northwestern University.23 We calculated descriptive statistics for the responses using SPSS version 22 (IBM Corp., Armonk, New York). For the purposes of this report, we focus on students’ general satisfaction with the process. Reviewers were surveyed by e-mail in the spring of 2014 and asked to provide feedback on the process, including time needed to complete the reviews.

This research was reviewed and determined to be exempt by the institutional review board at Northwestern University.

Resources

We developed our own electronic portfolio system because we found no commercial product that completely met our needs, and we wanted to ensure the flexibility to adapt and evolve our assessment tools over time. Development and maintenance of our portfolio system requires a full-time analyst and approximately 1.25 full-time equivalents for a Web developer. College mentors are compensated for their work, which includes formative portfolio review as well as teaching and advising. Portfolio reviewers were paid a fixed sum for the four weeks required to complete the reviews.

Results

A total of 156 portfolios were reviewed by at least two reviewers. Reviewers’ agreement on students’ overall progression in the five competency domains was at least 77% before reconciliation (see Table 1). After reconciliation, agreement rose to 98%. Only three portfolios required a third review. In one case, the reviewers disagreed about whether the student’s deficiencies fell into the domain of professionalism or communication skills. In the other two cases, a third review was requested because the students received more “concern” ratings than was typical, and the reviewers wanted reassurance about their decisions. The majority of students (125/156; 80%) achieved the “progressing toward competence” benchmark in all competency domains. Twenty students received a designation of “progressing toward competence with some concern” in one competency, and seven students received this designation in two or more competencies (see Table 2). The highest number of “concern” decisions was in professionalism (n = 16), followed by communication skills (n = 11). Four students received a “progressing toward competence pending remediation” rating in a single competency: two in professionalism and one each in communication skills and patient care.

Table 1

Table 2

The four students who required remediation met with the Educational Support Committee to determine their individualized remediation plans, which they completed over the course of two to three weeks. For three of the four students, their remediation plans required them to delay the start of their clerkships. All completed their remediation plans in a satisfactory manner. No students appealed the reviewers’ decisions.

All four summative portfolio reviewers completed the faculty survey. They reported that it took 1 to 2 hours to review each portfolio. The time spent reconciling decisions with the other reviewer varied widely—it took anywhere from 90 minutes to 5 hours to reach consensus. Writing each final narrative for the students took approximately 30 minutes to 1 hour. In their comments, the reviewers suggested that we increase the number of weeks or the number of reviewers for this work.
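For context, a rough, illustrative estimate based only on the per-portfolio review times reported above (and excluding reconciliation meetings and narrative writing) suggests why: reviewing approximately 80 portfolios at 1 to 2 hours each over four weeks implies roughly 20 to 40 hours of independent review per week.

# Illustrative workload estimate per reviewer, using the reported ranges only.
portfolios_per_reviewer = 80
review_hours_per_portfolio = (1, 2)   # reported range, in hours
weeks_available = 4

low = portfolios_per_reviewer * review_hours_per_portfolio[0] / weeks_available   # 20 hours/week
high = portfolios_per_reviewer * review_hours_per_portfolio[1] / weeks_available  # 40 hours/week
print(f"Independent review alone: {low:.0f}-{high:.0f} hours per week")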

We collected 79 responses from the 156 students invited to complete the student survey, a 51% response rate (see Table 3). The largest group of students (35/79; 44%) reported spending 6 to 10 hours preparing their summative portfolio. Only 3% (2/79) reported spending less than 1 hour; 23% (18/79) spent between 1 and 5 hours, and 24% (19/79) between 11 and 15 hours. Just 6% (5/79) said they spent more than 15 hours preparing their portfolio.

Table 3

Students responded favorably to the questions about the support they received from their mentors; 80% (63/79) agreed or strongly agreed that their mentor helped them understand their assessment data as those data related to the subcompetencies. The majority also agreed or strongly agreed that the portfolio process taught them to engage in self-reflection (46/79; 58%), and over half agreed or strongly agreed that the portfolio allowed them to highlight their strengths that were not otherwise evident to faculty (42/79; 53%). Over half agreed or strongly agreed that they were judged fairly throughout the portfolio process (41/79; 52%). However, only 35% (28/79) agreed or strongly agreed that developing specific learning plans helped focus their improvement efforts, and only 38% (30/79) agreed or strongly agreed that reviewing their portfolio strengthened their ability to self-assess their strengths and weaknesses.

A review of the survey free-text responses indicated that, while many students appreciated the opportunity to reflect, others felt that they already engaged regularly in self-assessment and self-directed learning without the portfolio process. A few students believed that reviewing the volume of narrative comments in the portfolio was too time consuming to be a useful exercise.

Discussion

The growing call for medical education programs to train learners to meet the needs of the public5,24 demands that we carefully consider the adequacy of our current assessment systems. The findings we have reported here demonstrate the feasibility of implementing a portfolio assessment system alongside a traditional grading system at a relatively large U.S. medical school. We believe that a more credible picture of students’ competence emerges when experienced faculty use their judgment to review a longitudinal collection of students’ performance data. Our portfolio-based assessment system allows us to understand our students’ achievement in behaviorally oriented competencies in a way that was not possible prior to its development. For example, students for whom deficiencies were identified during portfolio review had all passed the Phase 1 curricular blocks, and although we had a tracking system, similar to that at the University of California San Francisco School of Medicine,25 to identify professionalism issues, it only captured significant events. Less serious behaviors may not have garnered attention; however, when accumulated over time in multiple contexts, such issues painted a picture of concerning behavior. Through portfolio review, these behaviors were identified in a timely fashion. The same was true for other competencies as well. Thus, the portfolio assessment system provides a useful method to identify areas of concern, track students’ progress, and institute individualized remediation as needed.

Debate regarding the virtue of a high-stakes, summative portfolio review includes concerns that students will not feel free to reflect honestly about their performance and will submit only assessment evidence that paints them in a positive light.26 Some institutions using portfolios have attempted to prevent this by requiring an advisor to attest that students’ portfolios are representative of their performance.18,27 At Feinberg, our reviewers have access to all student assessments, and students are informed that the committee is making a judgment about their capacity for self-reflection and self-directed learning. If students do not discuss critical assessments in their reflections, reviewers may make judgments about the students’ competence in these areas.

Similar portfolio systems have been implemented at other large medical schools in Australia and the Netherlands.13,14 Although process differences exist between our portfolio system and those at these international schools, we similarly found that a portfolio is a feasible and worthwhile addition to an assessment system but that it requires dedicated time and resources to implement successfully. As the educational programs in Australia and Europe vary significantly from those in the United States, we felt that a description of such a portfolio system as implemented in a U.S. medical school was a valuable addition to the literature.

Self-reflection

Recognizing that physicians’ self-assessment capacity is poor28 and that reflective thinking can be developed,11 we designed our portfolio review process to enhance students’ self-reflection and self-directed learning skills. We deliberately designed the formative portfolio reviews to be between a mentor and student only. This format facilitates a safe environment for the discussion of students’ concerns and was highly valued by our students. A safe and supportive mentoring relationship is necessary for students to learn self-reflection skills.29

Unfortunately, the results of the student survey show that students’ opinions regarding the portfolio process’s ability to foster self-directed learning were mixed. While students seemed to appreciate the opportunity to interact with their mentor and to engage in self-reflection, some doubted the usefulness of activities such as creating learning goals and using the portfolio for self-assessment. The reasons for these responses varied. Some commented that they already appropriately engage in self-assessment, so the use of a portfolio is unnecessary. Given that medical students’ self-assessments can be inaccurate, particularly in the preclinical years,30 we believe that we can better educate students about the value of using external data to improve their self-assessment skills.31 We also plan to institute a coaching curriculum so that mentors can learn to better assist students in creating effective learning goals. Similar to the experiences in other schools,32 our students increasingly are accepting the portfolio process. Preliminary data from a survey administered to our next cohort of students revealed a more positive reaction to this system. Future studies will examine the factors that contribute to students’ acceptance of the portfolio process and their development of self-directed learning skills.

Meeting criteria for quality assessment

The traditional psychometric constructs of reliability and validity used to evaluate assessment methods may need to be modified to be used in competency assessments in general and with portfolios specifically.27,33,34 We used strategies from both quantitative and qualitative methods to establish trustworthiness in our system. From a quantitative standpoint, research shows that portfolio assessment reliability is enhanced when students are well prepared, content in the portfolios is uniform, and decisions are made by trained and experienced faculty who use clearly articulated criteria.26 However, given the amount of portfolio data that requires human interpretation, we also used criteria from qualitative methods—triangulation, prolonged engagement, and the establishment of an audit trail—to establish credibility (validity) and dependability (reliability) in the process.35

Regarding the quantitative methods we used, the standardized nature of our portfolios ensured that the content was comparable across students. Reviewers participated in training that included standard-setting activities so that they could work from a shared framework of assessment.22 These factors may explain why we found such high levels of agreement between reviewers even before reconciliation.

Regarding the qualitative methods we used, portfolio reviewers performed triangulation by comparing and contrasting information from numerous assessments collected via several methods. They were able to compare their judgments with those from a second (and sometimes a third) reviewer to corroborate their findings. As seasoned educators, our reviewers were familiar with the assessment data and curriculum. This prolonged engagement gave them important context that helped them filter large amounts of assessment data and identify important factors in each student’s performance. Finally, we took steps to establish an audit trail by ensuring that our process was well defined and transparent to students, faculty, and outside observers.36 In addition to the description in Figure 1, our process is documented in the medical school’s assessment policy, which was approved by the curriculum committee before also being published in the student handbook.

In addition to reliability and validity, other criteria used to judge quality assessments are feasibility, educational/catalytic effect, and acceptability.37 We have discussed the feasibility of our portfolio review process and believe that it meets the standards of acceptability, although more can be done to improve students’ perceptions of the process. Future research should address what impact the portfolio has on student learning.28,29

Limitations

Our study has the following limitations. First, our process represents that of only one institution, so we cannot necessarily generalize our findings to other settings. Second, the response rate to our student survey was relatively low. We asked students to complete the survey after their clerkships had begun, so they may have been too distracted by their clinical responsibilities to participate. Thus, we used caution when interpreting those results. Finally, we have yet to determine whether the contents of the portfolio can predict future performance.

Conclusions

In this article, we have described the development and successful implementation of a competency-based portfolio assessment system alongside an existing graded curriculum. The portfolio system fosters a culture of self-reflection, which may benefit learners throughout their careers. Students who perform above expected levels are recognized and receive positive feedback on their achievements. Most important, students whose behavior is concerning are identified early in the educational program, allowing us to intervene and provide tailored educational support. Traditional grading paradigms may not detect these issues early enough for educators to intervene. Future research is needed to determine which portfolio components predict clinical performance and whether students’ self-reflection and self-directed learning skills improve after engagement with the portfolio.

Acknowledgments: The authors wish to thank Diane B. Wayne, MD, for her thoughtful feedback on previous drafts of this article. In addition, the authors acknowledge Kenzie A. Cameron, PhD, MPH, for graciously donating her time and expertise to this project.

References

1. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: A systematic review. Acad Med. 2009;84:301–309.
2. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ. 2005;39:309–317.
3. Whitehead CR, Kuper A, Hodges B, Ellaway R. Conceptual and practical challenges in the assessment of physician competencies. Med Teach. 2015;37:245–251.
4. Huddle TS, Heudebert GR. Taking apart the art: The risk of anatomizing clinical competence. Acad Med. 2007;82:536–541.
5. Holmboe ES. Realizing the promise of competency-based medical education. Acad Med. 2015;90:411–413.
6. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–214.
7. Eva KW. On the generality of specificity. Med Educ. 2003;37:587–588.
8. Norcini JJ. Evaluation challenges in the era of outcomes-based education. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, PA: Mosby/Elsevier; 2008.
9. Hodges BD, Ginsburg S, Cruess R, et al. Assessment of professionalism: Recommendations from the Ottawa 2010 conference. Med Teach. 2011;33:354–363.
10. Gruppen LD. Competency-based education, feedback, and humility. Gastroenterology. 2015;148:4–7.
11. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: A systematic review. Adv Health Sci Educ Theory Pract. 2009;14:595–621.
12. Davis MH, Friedman Ben-David M, Harden RM, et al. Portfolio assessment in medical students’ final examinations. Med Teach. 2001;23:357–366.
13. Driessen E, van Tartwijk J, Vermunt JD, van der Vleuten CP. Use of portfolios in early undergraduate medical training. Med Teach. 2003;25:18–23.
14. O’Sullivan AJ, Harris P, Hughes CS, et al. Linking assessment to undergraduate student capabilities through portfolio examination. Assess Eval High Educ. 2012;37:379–391.
15. Hall P, Byszewski A, Sutherland S, Stodel EJ. Developing a sustainable electronic portfolio (ePortfolio) program that fosters reflective practice and incorporates CanMEDS competencies into the undergraduate medical curriculum. Acad Med. 2012;87:744–751.
16. O’Sullivan PS, Cogbill KK, McClain T, Reckase MD, Clardy JA. Portfolios as a novel approach for residency evaluation. Acad Psychiatry. 2002;26:173–179.
17. Kalet AL, Sanger J, Chase J, et al. Promoting professionalism through an online professional development portfolio: Successes, joys, and frustrations. Acad Med. 2007;82:1065–1072.
18. Dannefer EF, Henson LC. The portfolio approach to competency-based assessment at the Cleveland Clinic Lerner College of Medicine. Acad Med. 2007;82:493–502.
19. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–111.
20. Driessen EW, van Tartwijk J, Overeem K, Vermunt JD, van der Vleuten CP. Conditions for successful reflective use of portfolios in undergraduate medical education. Med Educ. 2005;39:1230–1235.
21. Friedman Ben David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE medical education guide no. 24: Portfolios as a method of student assessment. Med Teach. 2001;23:535–551.
22. Johnston B. Summative assessment of portfolios: An examination of different approaches to agreement over outcomes. Stud Higher Educ. 2004;29:395–412.
23. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
24. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
25. Papadakis MA, Loeser H, Healy K. Early detection and evaluation of professionalism deficiencies in medical students: One school’s approach. Acad Med. 2001;76:1100–1106.
26. Roberts C, Newble DI, O’Rourke AJ. Portfolio-based assessments in medical education: Are they valid and reliable for summative purposes? Med Educ. 2002;36:899–900.
27. Driessen E, van der Vleuten C, Schuwirth L, van Tartwijk J, Vermunt J. The use of qualitative research criteria for portfolio assessment as an alternative to reliability evaluation: A case study. Med Educ. 2005;39:214–220.
28. Ward M, Gruppen L, Regehr G. Measuring self-assessment: Current state of the art. Adv Health Sci Educ Theory Pract. 2002;7:63–80.
29. Eva KW, Regehr G. Self-assessment in the health professions: A reformulation and research agenda. Acad Med. 2005;80(10 suppl):S46–S54.
30. Blanch-Hartigan D. Medical students’ self-assessment of performance: Results from three meta-analyses. Patient Educ Couns. 2011;84:3–9.
31. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: A conceptual model. Acad Med. 2010;85:1212–1220.
32. Davis MH, Ponnamperuma GG, Ker JS. Student perceptions of a portfolio assessment process. Med Educ. 2009;43:89–98.
33. Schuwirth LW, van der Vleuten CP. Programmatic assessment and Kane’s validity perspective. Med Educ. 2012;46:38–48.
34. Govaerts M, van der Vleuten CP. Validity in work-based assessment: Expanding our horizons. Med Educ. 2013;47:1164–1174.
35. Lincoln YS, Guba EG. Naturalistic Inquiry. Beverly Hills, CA: Sage; 1985.
36. Tigelaar DEH, Dolmans DHJM, Wolfhagen IHAP, van der Vleuten CPM. Quality issues in judging portfolios: Implications for organizing teaching portfolio assessment procedures. Stud Higher Educ. 2005;30:595–610.
37. Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33:206–214.

Copyright © 2016 by the Association of American Medical Colleges