Regulation is ubiquitous in our society—be it the bailout of Wall Street, the Big Three American car manufacturers, or Main Street America. Regulation is also important to the field of medical education. Indeed, assessment in medical education is a form of professional regulation. After reviewing the related articles in this issue,1–3 we found regulation to be an implicit theme, and we believe the timely and provocative work of the authors raises questions about the current state of regulation in medical education. In this commentary, we outline several theoretical and practical issues on the topic of regulation.
Sociology and Regulation
From a sociological standpoint, physicians are professionals and experts. The term “profession” refers to a privileged, largely autonomous occupational group that has gained control of a specific, relevant area of work for society.4 Accordingly, a profession represents social closure—professionals within the field are responsible for the selection, training, and certification of their trainees. In medicine, we have enjoyed substantial autonomy in how we regulate the medical education process.
An “expert” is defined by a relationship—an individual is an “expert” relative to another (the “nonexpert”). The social benefits of consulting an expert are the time-efficient conveying of knowledge and reduction of uncertainty for the nonexpert. The expert’s role also determines the scope of accountability for one’s work. This is not a new concept; it was captured many centuries ago in the Bible and is often called the Matthew principle: “To whom much is given, much is expected.”
Expert professionals must balance what can appear to be a paradox—social closure from society (training of the expert professional) and intimate relationships with individuals in society (providing medical expertise for patients). This diametric relationship creates complex issues with regard to regulation. We will discuss this issue from two viewpoints: regulation from the standpoint of learning (self-regulated learning) and regulation from the standpoint of accountability.
Self-Regulated Learning
The last two decades have witnessed a major shift in education from a teacher-centered to a learner-centered focus. This shift not only acknowledges the importance of individual differences in all aspects of learning but also assumes that primary responsibility for, and control of, the educational process shifts from the teacher to the learner. This change has given rise to the concept of self-regulated learning, a multidimensional construct that embodies how learners plan, monitor, and evaluate their learning in relation to specific academic goals.5 Learners with self-regulatory “competence” are active participants who generate the thoughts, feelings, and actions necessary to attain their goals by actively planning, monitoring, and regulating their own cognition, motivation, and behavior, as well as aspects of their environment. This process emerges dynamically in three cyclical phases: forethought, performance, and self-reflection.5 During all phases, students’ motivational and emotional states are considered critical to successful learning.
The three accompanying articles in this issue address self-regulation from different perspectives of “self”—the learner, the teacher, and the academic leader (dean). From a self-regulated learning perspective, pass/fail grading would be expected to positively affect all aspects of students’ academic performance through improved learner well-being and satisfaction. For example, studies have shown that academic environments that emphasize learning and deemphasize grades (environments with so-called mastery goal structures) are associated with adaptive motivational beliefs (e.g., greater self-efficacy during the forethought phase) and greater use of deep processing strategies (and, presumably, more efficient learning) during the performance phase.6 Because the learners in the study by Bloodgood and colleagues1 were aware of their final overall percentages in the course, we were not surprised by the lack of difference in academic outcome measures for three of the four semesters. Indeed, theories of self-regulation would anticipate potentially improved performance when grades are deemphasized by moving to a pass/fail system. In future work, we think it would be beneficial to address other outcomes that further assess the effects of moving to a pass/fail system. For instance, would such a change be related to students’ goal orientations—that is, their focus on mastery learning and increased competence as opposed to a focus on demonstrating ability? Moreover, what is the impact of formal self-regulation training on learning and performance? Such training has been shown to improve performance across a wide range of expertise and in a wide variety of domains, from basketball and dart throwing to reading and science education.7 Will these benefits transfer to our unique learning environment?
White and colleagues2 addressed both learner and teacher self-regulation. Starting with the latter, the use of an online remediation system may free up time for busy clinician-educators, which could improve their satisfaction and well-being, thereby enhancing their effectiveness as teachers and “self-regulated” professionals. As for the former, having learners view a video of the correct way to perform a task, in addition to viewing their own performance, could be an effective way to model skillful performance and positively affect the self-reflective phase of self-regulation. Research suggests that optimal forms of self-regulatory training “are initially social in form but become increasingly self-directed.”8 By observing their own errors and learning from proficient models, students could theoretically acquire the necessary knowledge and skills. This type of “social-to-self” training involves several levels of skill, including an observational level (discriminating the correct form of a skill by watching a proficient model), an emulation level (duplicating the general form of a proficient model), a self-controlled level (practicing a skill in a structured environment without direct guidance from a model), and a self-regulated level (practicing a skill under dynamic, real-world conditions and adapting that skill based on performance outcomes). Evidence from other educational fields suggests that students who master each level in sequence function in a much more self-reliant, self-regulated manner.8 Thus, we believe the study by White and colleagues raises a number of intriguing questions that could inform theory and practice. For example, could an online remediation system be used to allow learners to progress along the social-to-self continuum of self-regulation?
Do all learners need direct feedback from teachers, and, if not, which skills are optimally (or exclusively) taught with direct observation and feedback (i.e., what are the limits of self-regulation)? Is electronic feedback equivalent (or even superior) to feedback given by preceptors who may be insufficiently trained in the to-be-learned skill? Should we train our teachers on how to give feedback electronically? Do the videos allow some learners to visualize success better than direct observation of faculty? Now that we have the technology, we believe this would be an interesting line of research. The literature from several fields outside of medicine argues that deliberate practice (i.e., effortful, focused practice on an area that needs improvement, usually under the guidance of a coach or mentor) is needed to achieve expertise. Research from other fields suggests that students who have learned to self-regulate are more likely to take part in deliberate practice and, therefore, are also more likely to attain expertise.5
Hauer and colleagues3 address a different aspect of self-regulation: specifically, how institutional dissatisfaction with (and lack of trust in) remediation appears to influence the implementation of academic consequences tied to remediation. The results of this study are somewhat troubling, given the amount of resources invested in these efforts. Defensive inferences (during the self-reflection phase) can be a powerful disincentive to action, and this article provides further evidence for this phenomenon. In part, the authors speak to the importance of considering the implications of self-regulation for all stakeholders in a medical school and, furthermore, suggest that faculty development efforts should target this issue.
Regulation as Accountability
The medical education profession has a societal obligation to ensure that our graduates are fit for future practice. The social closure associated with our profession means that there is an “audience” that is particularly at risk of being overlooked in medical education—the public. We believe the public’s interest is largely missing from these three papers. The public invests large amounts of resources—billions of dollars each year—in medical education through graduate medical education (GME) and research funds. In exchange for the privileges of being a profession, the medical education enterprise must be accountable to the public. The article by Hauer et al3 suggests we continue to perform poorly in this regard, and the article by White et al2 raises important issues about the role and responsibility of faculty.
As Hauer and colleagues3 note, the majority of medical schools remain dissatisfied with their remediation processes for clinical skills. Even more surprising is that most schools do not have meaningful processes even to attempt remediation. Thus, it seems that many students are potentially being “passed on” to residencies, perhaps with the hope that the deficiencies will be corrected during GME training. However, the data in this regard are not encouraging.9 Accordingly, the question becomes, should regulatory changes be considered at the institutional and faculty level to prevent trainees from progressing to the next phase of education when they may not be fit for such progression?
The accountability loci of the White et al study2 are complex, involving the institution, faculty, and individual. The hopeful news is that technology may support meaningful, self-regulatory behaviors. However, given current research showing that individuals perform self-assessment poorly when done in isolation,10 the findings of this study should be interpreted with caution, because the self-assessment scores did not change. Notwithstanding this caution, new methods to improve self-regulation—in particular, self-assessment—would be welcome. Physicians will spend the vast majority of their careers as practitioners under self-supervision. Although progress in maintenance of certification (MOC) processes will help to ensure that physicians are competent, the episodic nature of MOC means that physicians remain largely responsible for determining most of their learning activities. In this regard, we would argue that self-directed assessment seeking10 is a critical competency, and the work of White et al2 provides some optimism that technology can assist in starting this process in medical school.
We found it interesting that the faculty have, in effect, been largely removed from the accountability equation in the White et al study.2 We worry about the rationale and implications of an assessment process using technology as a means to “replace” faculty who are too busy with other responsibilities. If medical schools are ultimately social institutions accountable to the public, should we seek to create assessment methods whose primary purpose may be to place a Band-Aid on a dysfunctional system, or should we advocate for the resources needed to train and support faculty to meet our societal charge? These types of new methods and tools should be viewed as part of a comprehensive assessment package, not as a “work-around” to address shortcomings in educational systems. Research into the advantages and disadvantages of this technology would greatly assist with defining when and how such technology can assist faculty with delivering optimal curricula. Institutions are accountable for addressing system issues that impede assessment in the clinical setting.
Potential Future Directions
What ultimately matters is performance in the chaotic clinical environment, and it is the responsibility of the professional expert to provide, well, expertise. The expert professional recognizes what is needed in the clinical encounter, understanding that each patient has different needs. Accordingly, we must train our learners to become experts, and one critical aspect of this is the development of self-regulatory competence. Work that uses well-established theoretical frameworks to better understand medical education is needed. Importantly, this work should also consider the extent to which teachers use all available information to self-regulate their own actions, with the ultimate goal of providing learners with the academic scaffolding they need to excel. What, then, is the accountability of institutions and faculty for their learners? Should there be more severe external regulatory consequences for medical schools that fail to meet this responsibility?
A physician is an expert professional who is responsible for many—our patients, our trainees, and ourselves. We would argue that the high-stakes nature of providing quality care for patients mandates minimal tolerance of mediocrity in educating future practitioners. To this end, we believe the articles in this issue provide evidence and hope for the value and impact of self-regulation. These articles also remind us of the essential, professional charge of medical education—to fully embrace accountability to the public. For us, these timely studies and this commentary are only a starting point for this important discussion.
1 Bloodgood RA, Short GJ, Jackson JM, Martindale JR. A change to pass/fail grading in the first two years at one medical school results in improved psychological well-being. Acad Med. 2009;84:655–662.
2 White CB, Ross PT, Gruppen LD. Remediating students’ failed OSCE performances at one school: The effects of self-assessment, reflection, and feedback. Acad Med. 2009;84:651–654.
3 Hauer KE, Teherani A, Kerr KM, Irby DM, O’Sullivan PS. Consequences within medical schools for students with poor performance on a medical school standardized patient comprehensive assessment. Acad Med. 2009;84:663–668.
4 Evetts J, Mieg HA, Felt U. Professionalization, scientific expertise, and elitism: A sociological perspective. In: Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, eds. Cambridge Handbook of Expertise and Expert Performance. New York, NY: Cambridge University Press; 2007.
5 Zimmerman BJ. Attaining self-regulation: A social cognitive perspective. In: Boekaerts M, Pintrich PR, Zeidner M, eds. Handbook of Self-Regulation. San Diego, CA: Academic Press; 2000.
6 Wolters CA. Advancing achievement goal theory: Using goal structures and goal orientations to predict students’ motivation, cognition, and achievement. J Educ Psychol. 2004;96:236–250.
7 Azevedo R, Cromley JG. Does training on self-regulated learning facilitate students’ learning with hypermedia? J Educ Psychol. 2004;96:523–535.
8 Zimmerman BJ, Tsikalas KE. Can computer-based learning environments (CBLEs) be used as self-regulatory tools to enhance learning? Educ Psychol. 2005;40:267–271.
9 Papadakis MA, Arnold GK, Blank LL, Holmboe ES, Lipner RS. Performance during internal medicine residency training and subsequent disciplinary action by state licensing boards. Ann Intern Med. 2008;148:869–876.
10 Eva KW, Regehr G. “I’ll never play professional football” and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28:14–19.