Practicing physicians have a societal obligation to maintain their knowledge and skills through lifelong learning activities.1 To guide these activities, physicians set goals for learning, assess their own understanding, and then close any gaps by acquiring or updating the appropriate competencies. Often described as self-regulated learning,2 these types of behaviors, which some have contended are critical for safe and effective practice,3–5 are not explicitly assessed or taught in most medical schools. Further, scholars have argued that physicians may be quite unskilled at certain aspects of self-regulation, such as global self-assessment.6 And, although medical schools may assume that students will eventually learn how to effectively self-regulate, research suggests that most practicing physicians feel unprepared to do so.7
With these considerations in mind, we established three objectives for the present study. First, we measured students’ perceptions of the medical school learning environment and assessed how these perceptions related to their use of various self-regulated learning behaviors. Next, we sought to determine how students’ perceptions and behaviors correlated with their academic achievement. Finally, we examined the degree to which students’ perceptions and behaviors might change across medical school. By addressing these three interrelated objectives, we hoped to better understand the nature of self-regulated learning in medical school.
A key factor in most models of self-regulation is the learning environment.2,3,5 Researchers outside of medical education have focused considerable effort developing theoretical models to explain the relationships between the learning environment and student outcomes. One such model (Figure 1) formed the theoretical foundation of the present study. Derived from social cognitive theory,8 this model proposes that students’ perceptions of the learning environment are associated with their use of various learning strategies, some of which are adaptive (e.g., students’ awareness of their own thinking, which we refer to as metacognition) and others which are not (e.g., procrastination). In turn, these behaviors influence academic outcomes, including achievement and performance.
One way to conceptualize students’ perceptions of the learning environment comes from the achievement motivation literature. Motivation theorists have a long tradition of exploring the types of goals that individuals pursue in academic settings. Known as achievement goal theory, this framework provides a lens through which to explore the relationships between different learning environments and academic outcomes.9 Results from more than 20 years of empirical research suggest that teachers, through their use of various instructional practices, create different goal structures, which then influence students’ personal goals, academic behaviors, and achievement.9,10
Notwithstanding some debate as to the types of goal structures that exist in various educational settings, many contemporary motivation researchers define three primary goal structures.11 Mastery goal structures are learning environments that encourage students to focus on developing abilities, mastering new skills, accomplishing challenging tasks, and attempting to truly understand learning materials. Performance-approach goal structures are learning environments that focus on demonstrating proficiency and social comparisons of performance. Finally, performance-avoid goal structures are learning environments that encourage students to avoid the demonstration of incompetence—that is, to “avoid looking bad” in front of others.12 Mastery goal structures are generally thought to persuade students to adopt mastery goals and focus on adaptive behaviors aimed at improving learning (e.g., metacognition). Performance-approach and performance-avoid goal structures have been found to encourage students to adopt performance goals that focus on doing whatever it takes to outperform peers and avoid looking incompetent. In doing so, these performance-oriented goal structures often lead students toward less adaptive behaviors meant to demonstrate proficiency rather than develop deep understanding and mastery.11,12
In this study, we considered students’ perceptions of achievement goal structures at different phases of medical school. We also considered their reported use of three learning behaviors: metacognition, procrastination, and avoidance of help seeking (see Figure 1). Metacognition—or, more explicitly, students’ awareness of their own thinking and use of various control strategies such as planning, goal setting, and self-monitoring—has been shown to be an important predictor of learning.13 Procrastination can be defined as knowing that one is supposed to complete a task but failing to do so within the expected or desired time frame.14 Finally, help seeking can be defined as the extent to which students seek help when they need it. Research suggests that high-achieving, self-regulated learners tend to seek help when needed.15 In the present study, we measured the degree to which students avoided seeking help, even when they needed it, which can be thought of as the maladaptive side of the help-seeking coin.
Few studies have directly examined how students’ perceptions of the learning environment relate to their self-regulated learning behaviors,4 and, to our knowledge, there has been no work examining how students’ perceptions and behaviors correlate with performance or vary from the beginning to the end of medical school. The current study addressed these gaps by testing two hypotheses: (1) Students’ perceptions of mastery goal structures will be positively correlated with adaptive learning behaviors (metacognition) and negatively correlated with maladaptive behaviors (procrastination and avoidance of help seeking), and (2) students’ reported use of metacognitive control strategies will be positively correlated with their academic performance, whereas procrastination and help-avoidance behaviors will be negatively correlated with these same outcomes.
Given a dearth of previous work examining changes in perceptions of achievement goal structures and learning behaviors, we had no explicit hypotheses for how these variables might change from year 1 to year 4 of medical school. Accordingly, this portion of the study was exploratory.
The Uniformed Services University of the Health Sciences (USU) matriculates approximately 170 medical students annually. At the time of this study, USU offered a traditional four-year curriculum: two years of basic science courses followed by two years of clinical rotations (clerkships).
In May 2011, we invited all USU medical students (classes graduating in 2011 through 2014) to complete an online survey assessing their perceptions of medical school and clerkship learning environments, as well as their learning behaviors. The survey was developed as part of the university’s long-term career outcome study and included several previously validated survey scales. Data from the survey were linked to academic outcomes using a confidential participant identification number, and participation in the survey was voluntary. Ethical approval for the study was obtained from the USU institutional review board.
Thirty items from the survey were used in the present study, and there were two versions of the survey: one for preclinical students (years 1 and 2) and one for clinical students (years 3 and 4). The only difference between the surveys was whether the items referred to “classrooms” or “clerkships.” Both versions assessed students’ perceptions of achievement goal structures and their learning behaviors, each using three subscales for a total of six constructs.
Achievement goal structures. We measured students’ perceptions of the learning environment using three subscales adapted from the Patterns of Adaptive Learning Scale16: (1) A five-item mastery goal structures subscale assessed students’ perceptions that the purpose of engaging in academic work is to develop competence, (2) a three-item performance-approach goal structures subscale assessed students’ perceptions that the purpose of engaging in academic work is to demonstrate competence, and (3) a five-item performance-avoid goal structures subscale assessed students’ perceptions that the purpose of engaging in academic work is to avoid demonstrating incompetence. Several minor wording changes adapted the scales for the clerkship survey. For example, the performance-avoid goal structures item “showing others that you are not bad at class work is really important” was modified to “showing others that you are not bad at clinical work is really important.” All items employed a five-point, Likert-type response scale.
Learning behaviors. We measured students’ learning behaviors using three subscales: (1) An eight-item metacognition subscale from the Motivated Strategies for Learning Questionnaire17 assessed the frequency with which students used metacognitive control strategies (e.g., planning, goal setting, comprehension monitoring, performance regulation), (2) a four-item procrastination subscale assessed the frequency with which students disengaged academically or tended to delay getting started on academic work,14 and (3) a five-item avoidance-of-help-seeking subscale assessed the frequency with which students avoided asking for help even when they needed it.18 Again, minor wording changes to these previously validated subscales reflected adaptations to the specific medical education context, and all items employed a five-point, Likert-type response scale. Finally, although the three behaviors were self-reported, for brevity we refer to them as metacognition, procrastination, and avoidance of help seeking in the remainder of this article.
To assess performance, we calculated each student’s cumulative medical school grade point average (GPA). We used a weighted GPA created by multiplying each course grade by the number of contact hours for the given course (range: 0.0–4.0).
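To make the contact-hour weighting concrete, the calculation can be sketched as follows. The course grades and hours below are hypothetical illustrations, not data from the study:

```python
# Sketch of a contact-hour-weighted GPA, as described above.
# Each course grade (0.0-4.0) is weighted by that course's contact hours.
# All course data here are hypothetical.

def weighted_gpa(courses):
    """courses: list of (grade, contact_hours) tuples."""
    total_hours = sum(hours for _, hours in courses)
    return sum(grade * hours for grade, hours in courses) / total_hours

# Hypothetical example: three courses with different contact hours.
courses = [(4.0, 120), (3.0, 80), (2.0, 40)]
print(round(weighted_gpa(courses), 2))  # → 3.33
```

Note that a course with many contact hours (e.g., 120) pulls the cumulative GPA toward its grade far more than a short course does.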
At the time of the study, all third- and fourth-year students had finished the internal medicine clerkship. For these students, we assessed performance using three additional outcomes: clinical points, exam points, and Department of Medical Education Committee (DOMEC) referral. Clinical points were calculated using a point system that is weighted according to the number of clinics that students had with a given clinical teacher. When calculating clinical points at USU, the second six-week period of the clerkship was weighted more heavily (60% of students’ clinical points) than the first six-week period (40%) because students were expected to improve during the clerkship. Exam points were calculated as a weighted average of students’ scores on two locally developed exams (40% of their exam points) and the National Board of Medical Examiners “subject exam” in medicine (60%). Finally, students who received one or more grades (clinical or exam points) of less than “passing” were referred to the DOMEC for grade adjudication. A DOMEC referral is considered a marker for a struggling student.
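The two grading schemes described above amount to simple weighted averages; a minimal sketch, using hypothetical scores rather than actual student data:

```python
# Sketch of the clerkship grade weightings described above.
# All scores here are hypothetical.

def clinical_points(first_period, second_period):
    # Second six-week period weighted 60%, first period 40%,
    # because students were expected to improve during the clerkship.
    return 0.4 * first_period + 0.6 * second_period

def exam_points(local_exam_avg, nbme_subject):
    # NBME subject exam weighted 60%, locally developed exams 40%.
    return 0.4 * local_exam_avg + 0.6 * nbme_subject

# Hypothetical student scores:
print(clinical_points(80, 90))
print(exam_points(75, 85))
```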
Prior to analysis, we screened the data for accuracy and missing values and checked each survey item response pattern for normality. We then conducted a confirmatory factor analysis (CFA) to assess the convergent and discriminant validity of the 30 survey items. We used maximum likelihood estimation to estimate the parameters, and we inspected several goodness-of-fit statistics to evaluate model fit.19 Next, we subjected each of the six subscales to an internal consistency reliability analysis and computed a mean score for the items associated with a particular subscale (i.e., the six variables were unweighted composite scores). Third, we calculated descriptive statistics for the total sample, and, to investigate the representativeness of our sample, we compared students who completed the survey and those who did not on Medical College Admissions Test (MCAT) score and cumulative medical school GPA. Next, we conducted a correlation analysis to explore the associations among the survey variables and cumulative medical school GPA for all participants. We then completed a second correlation analysis to examine the relations between the survey variables and the clerkship outcomes (for participants in years 3 and 4). Finally, we conducted a one-way multivariate analysis of variance (MANOVA) to investigate whether class year was related to the survey variables. All analyses were completed using SPSS 20.0 (IBM Corporation, New York, New York).
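As one concrete illustration of the composite scoring step, each subscale score was the unweighted mean of its items. A minimal sketch, with a hypothetical respondent and a hypothetical item-to-subscale mapping:

```python
# Sketch of computing an unweighted composite subscale score, as described
# above: the mean of the items belonging to that subscale.
# The responses and item indices here are hypothetical.

def composite_score(response, item_indices):
    """Mean of the items belonging to one subscale."""
    return sum(response[i] for i in item_indices) / len(item_indices)

# Hypothetical: one respondent's answers on a 1-5 scale, with items 0-4
# assumed to form the mastery goal structures subscale.
answers = [4, 5, 3, 4, 4, 2, 1, 3]
mastery = composite_score(answers, range(5))
print(mastery)  # → 4.0
```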
Of the 678 students invited to participate, 304 (45%) completed the survey. Participants included 87 (29%) first-year students, 88 (29%) second-year students, 64 (21%) third-year students, and 65 (21%) fourth-year students. The sample included 223 men (73%) and 81 women (27%), which is representative of the medical student population at USU (71% men). Results from the MANOVA comparing those who completed the survey and those who did not revealed no statistically significant differences between the two groups on MCAT scores or cumulative medical school GPA, F(2, 633) = 0.72, P = .49.
Results from the CFA suggested that the survey demonstrated acceptable model fit. The chi-square test was statistically significant (χ2 [390, N = 304] = 853.92, P < .001), the normed fit index (NFI) was 0.82, the comparative fit index (CFI) was 0.89, and the root-mean-square error of approximation (RMSEA) was 0.06. Although it is desirable to have the NFI and CFI above 0.90, the results we observed were acceptable, and the RMSEA indicated satisfactory model fit.19 Taken together, the CFA substantiated the survey’s six-factor structure.
Cronbach alpha coefficients were calculated for each subscale to assess the internal consistency reliability of the scores. As indicated in Table 1, all alpha coefficients were well within the desired range, with actual values of 0.78 to 0.91.20
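For readers who wish to reproduce this step, Cronbach's alpha for a subscale can be computed from the item variances and the variance of the total score. The sketch below uses hypothetical response data, not data from the study:

```python
# Minimal sketch of Cronbach's alpha for one subscale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
# Rows are respondents, columns are items on a 1-5 scale (hypothetical data).
from statistics import pvariance

def cronbach_alpha(responses):
    k = len(responses[0])                 # number of items in the subscale
    items = list(zip(*responses))         # column-wise item scores
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: four respondents, three items.
data = [[4, 4, 5], [3, 3, 4], [2, 3, 2], [5, 4, 5]]
print(round(cronbach_alpha(data), 2))  # → 0.9
```

Population variances are used throughout; because alpha is a ratio of variances, using sample variances consistently yields the same result.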
Table 1 also presents the means and standard deviations of the six variables as well as the correlations between these variables and cumulative medical school GPA. As shown, there were several statistically significant correlations. Students’ perceptions of mastery goal structures were positively correlated with metacognition (r = 0.26, P < .01) and negatively correlated with procrastination (r = −0.16, P < .01) and avoidance of help seeking (r = −0.24, P < .01). Performance-approach goal structures were positively correlated with performance-avoid goal structures (r = 0.47, P < .01) and cumulative medical school GPA (r = 0.14, P < .05). Furthermore, students’ perceptions of performance-avoid goal structures were positively correlated with help avoidance (r = 0.24, P < .01), and students’ metacognition was negatively correlated with procrastination (r = −0.12, P < .05). Moreover, procrastination scores were positively correlated with avoidance of help seeking (r = 0.36, P < .01). Finally, students’ help-avoidance behaviors were negatively correlated with cumulative medical school GPA (r = −0.23, P < .01).
Next, we investigated the associations between the survey variables and the academic performance of the 126 third- and fourth-year students. As shown in Table 2, help avoidance was negatively correlated with exam points (r = −0.22, P < .05) and clinical points (r = −0.34, P < .01) and positively correlated with DOMEC referral (r = 0.20, P < .05). Clinical points were also negatively correlated with performance-avoid goal structures (r = −0.20, P < .05) and procrastination (r = −0.21, P < .05). Finally, exam points were negatively correlated with performance-approach goal structures (r = −0.20, P < .05), and DOMEC referral was negatively correlated with mastery goal structures (r = −0.19, P < .05).
Results from the MANOVA indicated that students in different class years had significantly different scores on the measured variables, F(18, 891) = 4.84, P < .001. Because the overall F test was statistically significant, we moved on to additional univariate analyses.21
Tests of between-subjects effects indicated that class year was related to mastery goal structures (F[3, 300] = 4.86, P < .01) and performance-avoid goal structures (F[3, 300] = 12.89, P < .01). We followed up these significant analyses of variance with Tukey HSD (honestly significant difference) post hoc tests, which indicated that perceptions of mastery goal structures decreased from first-year to second-year students (Cohen d = −0.24) and then increased from second-year to third-year students (Cohen d = 0.21). For performance-avoid goal structures, we found increases from first-year to third-year students (Cohen d = 0.46), first-year to fourth-year students (Cohen d = 0.29), and second-year to third-year students (Cohen d = 0.34). These small to moderate differences are depicted in Figure 2.
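The effect sizes reported here are standardized mean differences; a minimal sketch of Cohen's d with a pooled standard deviation, using hypothetical group scores rather than the study data:

```python
# Sketch of Cohen's d for two independent groups (e.g., two class years),
# using a pooled standard deviation. All group scores here are hypothetical.
from statistics import mean, variance

def cohen_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    # Pooled variance from the two sample variances (ddof = 1).
    pooled_var = ((n1 - 1) * variance(group1)
                  + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    # Sign convention: group2 minus group1, so a decline from
    # group1 to group2 yields a negative d.
    return (mean(group2) - mean(group1)) / pooled_var ** 0.5

year1 = [3.8, 4.0, 3.6, 4.2]  # hypothetical subscale scores
year2 = [3.4, 3.6, 3.2, 3.8]
print(round(cohen_d(year1, year2), 2))
```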
Taken together, results from this study largely confirmed our hypotheses. In our sample, students’ perceptions of mastery goal structures were positively correlated with their use of metacognitive control strategies and were negatively correlated with procrastination and DOMEC referral. That is, those students who felt that the learning environment was more focused on improvement and understanding reported greater metacognition and less procrastination. These students were also less likely to “struggle” during the clerkship years. Further, procrastination was negatively correlated with metacognition and clinical points, which suggests that students who reported more procrastination behaviors also used fewer metacognitive control strategies and received slightly worse evaluations from clerkship teachers, a finding that corroborates research conducted in other higher education contexts.14
Students’ perceptions of performance-approach goal structures were positively correlated with cumulative medical school GPA. Thus, students who felt that the learning environment was more about getting good grades received slightly higher GPAs. However, somewhat surprisingly, performance-approach goal structures were negatively correlated with exam points in the clerkship. On the other hand, performance-avoid goal structures, which were positively correlated with performance-approach goal structures, were negatively correlated with clinical points. Thus, the more students felt the learning environment was about avoiding looking bad, the lower their clinical scores in the internal medicine clerkship. Somewhat paradoxically, those students who struggled in the clerkship phase were the same students who were more concerned about not looking bad. Stated another way, “strugglers” did not want to look like they were struggling in front of their peers and teachers.
Help avoidance was positively correlated with performance-avoid goal structures and procrastination and negatively correlated with mastery goal structures and cumulative medical school GPA. Those who reported a greater tendency to avoid seeking help, even when they needed it, reported more procrastination behaviors and believed that the learning environment was more about avoiding looking bad in front of their peers and instructors. These students also felt that the learning environment was less focused on trying hard and truly understanding the material; ultimately, these students had lower cumulative medical school GPAs. These findings are in line with previous work reporting that students in classes with greater perceived emphasis on performance-avoid goals have higher levels of help-seeking-avoidance patterns.15,22
From the correlation analysis of the clerkship students, we also found that help avoidance was negatively correlated with exam points and clinical points and positively correlated with DOMEC referral. These findings suggest that students who had a greater tendency to avoid help also performed slightly worse in the internal medicine clerkship. Further, it is worth noting that the correlation between DOMEC referral and help avoidance (r = 0.20), although small, was similar in magnitude to the correlation between DOMEC referral and exam points (r = 0.24; data not shown). Thus, help-avoidance behaviors were almost as strongly correlated with DOMEC referral as were exam points. It seems that the students who were more reluctant to seek help were also those more likely to perform poorly in medical school during both the preclinical and clinical phases. This finding aligns with previous empirical work,22 and it has implications for how medical educators determine which students require the most attention and how we might create learning environments that encourage help seeking—a behavior that, by all accounts, is adaptive and contributes to self-regulated learning.4,15
In our sample, when viewed across medical school, the data indicate that students’ perceptions of mastery and performance-avoid goal structures varied as a function of their phase of training. Compared with second-year students, first- and third-year students reported significantly higher mastery goal structures. In other words, students in years 1 and 3 felt that the learning environment was more about understanding and mastery than did those in year 2. When one considers the timing of Step 1 of the United States Medical Licensing Examination, which is completed at the end of year 2 at USU, this finding may be less surprising. It may be that medical students are thinking more about performing well on Step 1 during the second year and less about mastery of the classroom material.
Additionally, students’ perceptions of performance-avoid goal structures differed significantly between first-year students and third- and fourth-year students, with the more senior students reporting that the learning environment was much more about not looking bad during the clerkships. As Figure 2 suggests, this performance-avoid environment continued to grow from the first year of medical school to the third year but then subsided somewhat in the fourth year. This dip in the fourth year could be related to the fact that, by May of the fourth year, medical students at USU have already been selected for residency training.
Finally, the sharpest increase in perceptions of performance-avoid goal structures occurred between the second and third years. The third year is a time when the bulk of student grades are determined by direct observation by faculty rather than by tests of knowledge. Thus, third-year student grades are determined by behaviors on a daily basis rather than by end-of-block tests, and so it seems that some students may react to this change with maladaptive behaviors.23 Moreover, these increased perceptions of performance-avoid goal structures may be related to the finding that trainees often feel pressure to demonstrate independence in clinical work in an effort to “lay claim to the identity of a doctor (as a member of a group of autonomous high achievers)” and in response to heavy clinical workloads and constant evaluations.24
Because of the correlational nature of the present study, definitive instructional implications for medical educators are difficult to draw. Nonetheless, in light of the largely adaptive nature of mastery goals and the oftentimes maladaptive nature of performance goals—particularly performance-avoid goals—motivation theorists have suggested several instructional practices that support mastery goal structures. In turn, these goal structures have the potential to encourage student adoption of mastery-oriented goals and use of adaptive help seeking. Recommended instructional practices include, for example, asking students to engage in personally meaningful and challenging tasks with flexible participation structures; giving students the opportunity to participate in creating the rules that affect their academic activities; recognizing and promoting mastery ideals, such as effort, risk taking, and creativity; and assessing students formatively using assessment and feedback procedures that evaluate progress and promote mastery of essential knowledge and skills, as opposed to procedures that focus on student performance relative to others.11,12,15,16 Notably, the efficacy of these practice recommendations in medical education contexts requires empirical support in the form of intervention studies. Such studies are an important step toward gathering evidence-based recommendations for how to design learning environments that promote the development of knowledge and skills in the short term and ongoing professional development in the long term.5
Our study had several limitations, including the single-institution, cross-sectional nature of the study design. Because we did not employ a longitudinal design, care must be taken not to overinterpret our findings, particularly with respect to the differences we observed across the four years of medical school. In future work, we plan to include longitudinal collection of these same data. A second limitation is the suboptimal response rate we obtained. Although our comparisons of study participants versus non-participants revealed no differences on two performance measures (MCAT and cumulative medical school GPA), we cannot rule out the possibility of response bias.25 Finally, like all self-reports, our survey has reliability and validity limitations, especially as they relate to measuring learning behaviors, which are subject to social desirability and recall bias. That said, our CFA and reliability analyses suggest that our survey instrument had reasonable psychometric properties.
Notwithstanding these limitations, we feel that this study lays an important conceptual foundation for understanding factors that may affect self-regulated learning in medical school. From a practical perspective, we anticipate that our results could help medical educators appreciate the influence that classroom environments, clinical settings, and teacher behaviors can have on students’ self-regulation and achievement. Such an appreciation may be a critical first step toward creating learning environments that encourage the lifelong learning behaviors that many regard as critical to safe and effective practice.3–5
Funding/Support: This study was part of the Long-Term Career Outcome Study. It was supported by an intramural grant from the dean, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Other disclosures: None.
Ethical approval: This study was approved by the institutional review board of the Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Disclaimer: The authors are U.S. government employees. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the Department of Defense, or the U.S. government.