The Educational Climate Inventory: Measuring Students’ Perceptions of the Preclerkship and Clerkship Settings

Krupat, Edward PhD; Borges, Nicole J. PhD; Brower, Richard D. MD; Haidet, Paul M. MD, MPH; Schroth, W. Scott MD, MPH; Fleenor, Thomas J. Jr MEd; Uijtdehaage, Sebastian PhD

doi: 10.1097/ACM.0000000000001730
Research Reports

Purpose To develop an instrument to assess educational climate, a critical aspect of the medical school learning environment that previous tools have not explored in depth.

Method Fifty items, capturing aspects of Dweck’s learning–performance distinction, were written to characterize students’ perceptions of the educational climate as learning/mastery oriented (where the goal is growth and development) versus performance oriented (where the goal is the appearance of competence). These items were included in a 2014 survey of first-, second-, and third-year students at six diverse medical schools. Students rated their preclerkship or clerkship experiences and provided demographic and other data. The final Educational Climate Inventory (ECI) was determined via exploratory and confirmatory factor analysis. Relationships between scale scores and other variables were calculated.

Results Responses were received from 1,441/2,590 students (56%). The 20-item ECI resulted, with three factors: centrality of learning and mutual respect; competitiveness and stress; and passive learning and memorization. Clerkship students’ ratings of their learning climate were more performance oriented than preclerkship students’ ratings (P < .001). Among preclerkship students, ECI scores were more performance oriented in schools with grading versus pass–fail systems (P < .04). Students who viewed their climate as more performance oriented were less satisfied with their medical school (P < .001) and choice of medicine as a career (P < .001).

Conclusions The ECI allows educators to assess students’ perceptions of the learning climate. It has potential as an evaluation instrument to determine the efficacy of attempts to move health professions education toward learning and mastery.

Supplemental Digital Content is available in the text.

E. Krupat is associate professor of psychology, Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston, Massachusetts. At the time of the research, the author was also director, Center for Evaluation, Harvard Medical School, Boston, Massachusetts.

N.J. Borges is assistant dean, Medical Education Research and Scholarship, and professor, Department of Pediatrics, University of Mississippi Medical Center, Jackson, Mississippi. At the time of the research, the author was on the faculty of Wright State University Boonshoft School of Medicine, Dayton, Ohio.

R.D. Brower is associate dean for medical education, associate professor of medical education, and clinical associate professor of neurology, Paul L. Foster School of Medicine at Texas Tech University Health Sciences Center, El Paso, Texas.

P.M. Haidet is professor of medicine, humanities, and public health sciences and director of medical education research, Penn State College of Medicine, Hershey, Pennsylvania.

W.S. Schroth was associate dean for administration, George Washington University School of Medicine and Health Sciences, Washington, DC, at the time of the research.

T.J. Fleenor Jr is project manager, Office of Educational Quality Improvement, Harvard Medical School, Boston, Massachusetts.

S. Uijtdehaage is professor of medicine and associate director, graduate programs in health professions education, Uniformed Services University of the Health Sciences, Bethesda, Maryland. At the time of the research, the author was on the faculty of the David Geffen School of Medicine at the University of California, Los Angeles, California.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: This study was reviewed and approved individually at each of the participating schools, with the exception of the George Washington University School of Medicine and Health Sciences, which ceded review to Harvard Medical School, the site of the principal investigator (E.K.).

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A452.

Correspondence should be addressed to Edward Krupat; e-mail: edkrupat@gmail.com.

Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government.

In settings from elementary school through residency, positive student perceptions of the learning environment have been shown to be associated with a variety of outcomes, such as greater motivation for learning, higher levels of achievement, and lower levels of burnout.1–6 Consistent with this, in the United States and Canada, the Liaison Committee on Medical Education7 and the Committee on Accreditation of Canadian Medical Schools8 explicitly require medical schools to monitor their learning environments, with the goal of ensuring that a positive environment exists. In revisiting the Flexner Report after 100 years, Cooke and colleagues9 encouraged the medical education community to pursue these same goals.

Several instruments exist to assess the medical school learning environment, and instruments that assess the graduate medical education environment are beginning to appear.10 However, three recent systematic reviews of medical school learning environment instruments, published in 2014,11 2012,12 and 2010,13 indicated that most of these instruments do not meet appropriate psychometric standards. Colbert-Getz and colleagues11 concluded that the majority provide some form of content validity evidence, but relatively few offer evidence of internal structure or response process, and they do not typically provide substantial evidence of their relationship to other relevant variables. Schönrock-Adema and colleagues12 identified another problem with the majority of these instruments: Most are neither informed by nor derived from conceptual or theoretical frameworks.

As noted by environmental psychologists, the “environment” can be characterized in many different ways.14,15 Most instruments that assess the medical school environment consider it broadly, from multiple perspectives. The Dundee Ready Educational Environment Measure,16,17 which has been used widely throughout the world, has a four-factor structure that captures student perceptions of everything from the faculty (“The teachers are knowledgeable”) to the social environment (“My social life is good”). The Medical School Learning Environment Survey,18,19 recently modified and used as part of a 28-school study,18 also covers a variety of domains, with items ranging from “The relationship between basic science and clinical material is clear” to “Upper-level students provide guidance to lower-level students.” The Johns Hopkins Learning Environment Scale20,21 has seven subscales ranging from community of peers (“How connected do you feel to other SOM [school of medicine] students?”) to inclusion and safety (“I am concerned that students are mistreated at the SOM”).

In contrast, the instrument we introduce here is focused on one critical aspect of the medical school environment, the educational climate. Adapting the well-respected theoretical model of Carol Dweck22–24 to medical training, we note that the educational values and culture of a medical school can be focused in one of two ways. In a learning- or mastery-oriented climate, the focus is on students’ development and growth, on creating a climate in which the curriculum (formal and informal) is geared toward not only increasing knowledge and skills but also encouraging critical thinking and self-directed learning. In such a climate, uncertainty is to be embraced rather than feared; faculty value students’ ability to reason, not just whether their answers are correct; and students receive a great deal of detailed feedback and are encouraged to reflect on how to improve. Recent discussions about the need to develop formative assessment systems are consistent with this approach.25,26

In contrast, a performance-oriented climate places the emphasis on the demonstration of competence rather than its development. In such a climate, students are concerned more about the appearance of competence than the growth of competence. They are likely to hide their uncertainty, while trying to seem confident in the eyes of others, and to avoid feedback for fear of its possible negative consequences.

In this article, we describe the development and initial validation of the Educational Climate Inventory (ECI), a new instrument designed to assess the perceived educational climate in medical school. We present data on the ECI’s psychometric properties and evidence bearing on its validity and reliability. In particular, we test two hypotheses, derived from Dweck’s framework, that bear on the construct validity of the ECI:

  1. Preclerkship students’ ECI ratings will be more performance oriented in medical schools that have a graded preclerkship assessment system than in those that use a pass–fail system.
  2. Students’ ratings of the climate for learning in the clerkship setting will be more performance oriented than students’ ratings of their experience of the preclerkship/classroom context.

Method

Data collection

Data were collected in spring 2014 at six medical schools. In alphabetical order, they were the David Geffen School of Medicine at the University of California, Los Angeles; the George Washington University School of Medicine and Health Sciences; Harvard Medical School; the Paul L. Foster School of Medicine at Texas Tech University Health Sciences Center El Paso; Penn State College of Medicine; and Wright State University Boonshoft School of Medicine. Although this was not a random sample of medical schools, the six are diverse in geographical location, size, and public–private status, and they differ considerably in their primary educational missions. At the time of data collection, three of these schools used a preclerkship grading system with at least three levels of grades, and three used a simple pass–fail system. All six schools gave grades to students in their core clerkships.

Institutional review board approval was received for data collection at all six schools. For the purposes of this analysis, the participating schools were randomly assigned identifiers A through F.

Scale development and administration

A pool of 50 items, phrased to be equally relevant to the medical school classroom or clinical setting, was generated for rating by students. Most items were written by the first author (E.K.), and all were reviewed for feedback and possible revision by the full set of coinvestigators. The items were written intentionally to capture various aspects of Dweck’s learning–performance distinction.22–24 The blueprint included specific dimensions, such as goal or focus of learning (e.g., application vs. retention of facts); nature of faculty–student relationships and student–student relationships; and presence of a hidden curriculum (i.e., inconsistencies between stated goals and actual policies and actions by faculty). The 50 items included both positively and negatively worded items. All responses were made using a four-point Likert scale, which ranged from “strongly agree” to “strongly disagree.” For analysis purposes, all negatively worded items were reverse coded so that a higher score always reflected a more learning-oriented climate.
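
To make this scoring rule concrete, the sketch below (in R, with hypothetical item names; not the survey’s actual items or the authors’ code) reverse-codes negatively worded items on the four-point scale so that higher values always reflect a more learning-oriented climate.

```
# Illustrative sketch only (hypothetical item names); assumes responses are
# coded 1-4, with 4 = "strongly agree."
set.seed(1)
responses <- data.frame(
  item_01 = sample(1:4, 10, replace = TRUE),  # positively worded item
  item_02 = sample(1:4, 10, replace = TRUE)   # negatively worded item
)

negatively_worded <- c("item_02")
responses[negatively_worded] <- 5 - responses[negatively_worded]  # 1<->4, 2<->3
```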

The instrument’s instructions asked students to provide their “feelings concerning the PRECLERKSHIP [or for third-year students, CLERKSHIP] learning environment as you have experienced it here at [medical school name].” Students were instructed to use the four-point scale to indicate the extent to which they agreed or disagreed with the 50 items as well as to rate their satisfaction with their medical school (2 items: “I am extremely satisfied with my medical school experience so far”; “If asked by a friend, I would strongly recommend that he or she go to my medical school”) and their choice of medicine as a career (1 item: “I feel certain that I made the right choice in going into medicine”). They were also asked to supply limited demographic information.

Finally, students were asked to rate their preference for being judged by faculty according to their effort versus their performance, by indicating the percentage they desired to be assessed based on effort and the percentage based on performance, to add up to 100%. Although students indicated in real numbers the relative weight that they preferred to be given to effort versus performance in assessments, for analysis purposes this variable was coded into three categories: those who preferred to be judged more by their performance (≥ 51%) than by their effort, those who preferred a 50–50 split, and those who preferred to be judged more by their effort (≥ 51%) than by their performance.
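
A minimal sketch of this recoding, assuming the performance weight was recorded as a whole-number percentage (variable names are hypothetical):

```
# Percentage weight each student wanted placed on performance (effort = 100 - performance).
pref_performance <- c(70, 50, 30, 100, 45)

# Three analysis categories: <= 49 -> judged more by effort, 50 -> even split, >= 51 -> more by performance.
pref_group <- cut(pref_performance,
                  breaks = c(-Inf, 49, 50, Inf),
                  labels = c("more by effort", "50-50 split", "more by performance"))
table(pref_group)
```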

A copy of the survey instrument that was sent to preclinical students is available as Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A452. The instrument sent to clinical students was identical, with the one exception noted above.

At each of the participating schools, the anonymous surveys were sent electronically at the end of the academic year to all first-, second-, and third-year students. Each school was given latitude as to procedural specifics (e.g., how many reminders to send, whether to offer an incentive) to match their standard operating procedures for student surveys.

Scale analysis

The analysis of the 50 items went through several stages and required the use of several related statistical packages.27–30 Our goal was to generate an ECI instrument that retained the best items based on several statistical criteria. We began by calculating mean scores and standard deviations for each item, to eliminate items with extreme means or insufficient variability.

Because the size of the total sample far exceeded the respondents-per-item ratio required to generate stable factor analytic results,31 we selected half the sample at random and performed an exploratory factor analysis (EFA) on the 50 items. For the EFA, we used maximum likelihood extraction with a promax rotation to allow for intercorrelation among the subscales. Factors were retained if they fell above the elbow of the generated scree plot and had eigenvalues > 1. Items within those factors were retained if they loaded ≥ 0.40 on their primary factor or, provided they were not factorially complex, if they loaded ≥ 0.35 on a single factor.
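
The exploratory step could be reproduced along the following lines with the psych package cited in the references; the data below are synthetic stand-ins, and the calls are a sketch of the procedure described above rather than the authors’ script.

```
library(psych)
set.seed(2014)

# Synthetic stand-in for the 50-item response matrix (1-4 Likert scale), for illustration only.
items <- as.data.frame(matrix(sample(1:4, 400 * 50, replace = TRUE), ncol = 50))
names(items) <- sprintf("item_%02d", 1:50)

# Random half of the sample for the exploratory analysis.
half <- sample(seq_len(nrow(items)), size = floor(nrow(items) / 2))
efa_data <- items[half, ]

scree(efa_data)  # inspect eigenvalues and the elbow to decide how many factors to retain

# Maximum likelihood extraction with an oblique (promax) rotation.
efa_fit <- fa(efa_data, nfactors = 3, rotate = "promax", fm = "ml")
print(efa_fit$loadings, cutoff = 0.35)  # screen items against the 0.40 / 0.35 loading rules
```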

Having established the underlying structure of the scale using the empirical findings of the EFA, we performed a confirmatory factor analysis (CFA) on the other half of the sample using only the retained items. This was done to verify the latent structure of the instrument, validate the pattern of item–factor relationships, and further determine the appropriate number of factors. We then conducted additional analyses, both within and across schools, using the summed total score of the retained items and each of the three subscale scores as outcomes; these analyses used t tests, one-way analysis of variance, and chi-square, as appropriate, for tests of statistical significance, and eta2 for tests of effect size. Specifically, for the sample as a whole, we tested whether ECI scores were associated with demographic variables, with the desire to be assessed based on performance versus effort, and with the learning context being rated (preclerkship vs. clerkship).
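
Continuing the sketch above, the confirmatory step on the held-out half could use lavaan (also cited in the references); the item-to-factor assignments shown are illustrative placeholders, not the published ECI key.

```
library(lavaan)

cfa_data <- items[-half, ]  # the half of the sample not used in the EFA

# Three-factor measurement model; item assignments here are hypothetical.
cfa_model <- '
  learning_respect =~ item_01 + item_02 + item_03 + item_04 + item_05
  compete_stress   =~ item_06 + item_07 + item_08
  passive_memorize =~ item_09 + item_10 + item_11
'

cfa_fit <- cfa(cfa_model, data = cfa_data)
fitMeasures(cfa_fit, c("rmsea", "cfi", "tli"))  # the fit indices reported in the Results
standardizedSolution(cfa_fit)                   # standardized item loadings
```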

We tested for differences in the ratings of first- and second-year students, who were rating the preclerkship climate, with the intent of combining these two groups and comparing their ratings of the preclerkship climate with the third-year students’ ratings of the clerkship climate. For the sample as a whole, we compared ratings from the three schools with preclerkship grades versus those from the three pass–fail schools. As another means of validation, we assessed whether individuals who scored higher on the ECI were more satisfied with their medical school experience (using a two-item index of satisfaction with a possible range of 2–8 on which higher scores indicate greater satisfaction) and whether they felt that they had made the correct decision to go into medicine (using a single four-point item, with higher scores indicating a correct decision). Finally, we investigated the satisfaction levels of preclerkship versus clerkship students, and the extent to which students in different years preferred to be judged more by their effort versus their performance.
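
As one concrete example of these comparisons, the sketch below (with simulated scores, not study data) runs a one-way analysis of variance on ECI total scores by training stage and computes eta2 as the ratio of the between-group to the total sum of squares.

```
set.seed(3)
# Simulated ECI total scores for illustration only.
eci_total <- c(rnorm(200, mean = 58, sd = 8), rnorm(100, mean = 53, sd = 8))
stage     <- factor(rep(c("preclerkship", "clerkship"), times = c(200, 100)))

fit <- aov(eci_total ~ stage)
summary(fit)

ss     <- summary(fit)[[1]][["Sum Sq"]]
eta_sq <- ss[1] / sum(ss)  # eta-squared: proportion of total variance explained by group
eta_sq
```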

Results

The sample

A total of 1,441 students completed the surveys, with 99 to 407 respondents from each of the six schools surveyed. The response rates of Schools A–F, respectively, were 59%, 47%, 78%, 49%, 81%, and 22%, yielding an overall response rate of 56% (1,441/2,590). Response rates were higher for preclerkship (first- and second-year) students compared with third-year students who had just completed their core clerkships (58% vs. 50%). The characteristics of the total sample, which was approximately half male (48%), are reported in Table 1.

Factor analysis and ECI item selection

None of the 50 items’ mean scores were too extreme, nor were their standard deviations too narrow; as a result, all items were retained for the EFA. Factor 1 accounted for 18% of the variance, Factor 2 accounted for 12%, and Factor 3 accounted for 7%. After eliminating items that did not meet the criteria for retention, we were left with three factors of 10, 6, and 4 items, respectively (see Table 2). Tests of internal consistency (Cronbach alpha) for each of the three factors showed satisfactory levels of reliability: α = 0.88 for Factor 1, α = 0.80 for Factor 2, and α = 0.71 for Factor 3, with an alpha for the scale as a whole of 0.95.
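
Internal consistency of this kind can be computed with psych::alpha; the item groupings below are hypothetical placeholders (reusing the synthetic `items` object from the Method sketch) rather than the published factor key.

```
library(psych)

# Hypothetical key: assume the first 10 columns of `items` form Factor 1.
factor1 <- items[, sprintf("item_%02d", 1:10)]
alpha(factor1)                              # subscale internal consistency
alpha(items[, sprintf("item_%02d", 1:20)])  # full 20-item scale
```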

For Factor 1, labeled “centrality of learning and mutual respect,” the highest loading items (0.65) were “In this medical school, we focus on a sense of discovery and the excitement of inquiry” and “I often get invaluable and supportive advice from faculty on how to improve.” Scores on this factor could range from 10 to 40. Higher scores, reflecting agreement with the included items, indicate a perception of the climate as more learning oriented. For Factors 2 and 3, the possible ranges were 6–24 and 4–16, respectively. Higher scores, indicating disagreement with the included items, reflect a perception of a learning-oriented climate. For Factor 2, labeled “competitiveness and stress,” the highest loading items were “The atmosphere here is highly competitive” and “Around here you have to act confidently even if you have little idea what you are doing.” For Factor 3, labeled “passive learning and memorization,” the highest loading items were “Most of what we do here is focused on the passive transfer of knowledge” and “Education here is all about memorizing as much content as you can.”
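
For instance, after reverse coding, subscale and total scores are simple item sums. The column groupings below are hypothetical and again reuse the synthetic data from the earlier sketch:

```
# Hypothetical factor keys: 10, 6, and 4 items, respectively.
f1 <- rowSums(items[, sprintf("item_%02d", 1:10)])   # possible range 10-40
f2 <- rowSums(items[, sprintf("item_%02d", 11:16)])  # possible range 6-24
f3 <- rowSums(items[, sprintf("item_%02d", 17:20)])  # possible range 4-16
eci_total <- f1 + f2 + f3                            # possible range 20-80
```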

The CFA using the randomized second half of the sample supported the three-factor structure extracted from the EFA. All retained items loaded onto their respective latent constructs. Loadings ranged from 0.48 to 0.65 for Factor 1; 0.46 to 0.74 for Factor 2; and 0.41 to 0.58 for Factor 3 (all P < .001). The fit of the three-factor model to the data was satisfactory, indicating that a three-factor structure was suitably descriptive of the data: root mean square error of approximation = 0.06; comparative fit index = 0.93; Tucker–Lewis index = 0.92.

Relationship of ECI scores to other variables

To determine whether variability existed across schools, we compared the mean scores for each school, both for the total scale and for each of the three factors (see Table 3). Scores for the six schools differed considerably (P < .001, eta2 = 0.08). Moreover, although the factors were moderately correlated, a school’s relative standing on any given factor did not ensure that it would hold the same relative standing for a different factor.

We then tested to see whether ECI mean scores differed according to several variables. As shown in Table 4, we found no differences in total score according to respondent sex or college major. However, students who preferred to be judged more by performance than by effort were significantly more likely to see their school’s environment as learning oriented, most notably on Factor 2 (competitiveness and stress), compared with students whose preference was to be judged more by effort than by performance.

We also confirmed both of our hypotheses. Combining the ratings of the first- and second-year students (after determining that their ratings did not differ significantly) and comparing them with the third-year students’ ratings, we found that the total scores of the clerkship students were significantly lower than those of the preclerkship students, indicating that the clerkship students rated their learning climate as significantly more performance oriented (preclerkship = 58.4; clerkship = 52.9; P < .001; eta2 = 0.07). Second, the scores of the preclerkship students in the three schools with a grading system were significantly more performance oriented than those of their peers in the three schools with a pass–fail system (pass–fail = 58.9; graded = 57.7; P = .04; eta2 = 0.01).

In addition, compared with students who perceived their learning climate as more mastery oriented, students who viewed their environments as more performance oriented were less satisfied with their own medical school (5.79 vs. 7.29; P < .001; eta2 = 0.25) as well as their decision to pursue medicine as a career (3.35 vs. 3.77; P < .001; eta2 = 0.10). This relationship was the same regardless of whether students were in their clinical or preclerkship years, although students in their clerkship year expressed less satisfaction than the preclerkship students both with their school (6.20 vs. 6.73; P < .02; eta2 = 0.03) and their career choice (3.39 vs. 3.65; P < .001; eta2 = 0.03).

Finally, a far greater percentage of preclerkship students indicated that they preferred to be judged more by their performance (56%) than by their effort (20%). This relationship was reversed among clerkship students, among whom a greater percentage preferred to be judged more by their effort (41%) than by their performance (27%) (P < .001).

Discussion

Increasingly, medical educators have been attempting to create educational climates that support student growth and development rather than climates that motivate students to hide uncertainty and avoid feedback. In this article, we have introduced and attempted to validate the ECI, an instrument that focuses specifically on this aspect of the broader learning environment. Based on a survey of students at six diverse U.S. medical schools, we identified three distinct factors—resulting in a 20-item scale—that capture students’ experience of the educational climate in the medical school classroom and the hospital wards.

Consistent with our predictions, we found that the clerkship experience was rated by students as more performance oriented than the medical school classroom experience, and that students’ ratings of the preclerkship learning climate were more performance oriented in schools with grades than in schools with a pass–fail system, although the effect size for the latter was small. In addition, ECI scores demonstrated a consistent set of associations with satisfaction: Students who perceived their environment as more performance oriented were less satisfied both with their choice of medicine as a career and with their medical school itself compared with students who perceived their learning climate as more mastery oriented.

These data set the stage for further exploration. For instance, while Smith and colleagues32 have demonstrated that the learning environment, as broadly defined, is more positive at schools with learning communities, it is not clear whether the presence of learning communities is associated with a more learning/mastery-oriented educational climate. Likewise, our finding that the clerkship climate is perceived as more performance oriented admits several possible explanations. These include differences between the actual educational climates of the classroom and the hospital wards; clinical students’ strong concerns about performance, their clinical grades, and the impact of their grades on residency selection; and the stress of patient care and the concern about making errors experienced in the clerkships. Relatively few studies33–35 have made direct comparisons between the experiences of students across the classroom–ward continuum, and because these have focused on different outcomes than those studied here, they do little to help select among the alternative explanations. Further investigation is needed to identify the causal link between students’ actual experiences on the wards and their ratings of the educational climate.

Reliability and validity of the ECI

Colbert-Getz and colleagues11 have suggested judging a given instrument according to four criteria relevant for non-high-stakes assessment: content validity, response process, internal structure, and relationship to other variables. Derived from a specific theoretical orientation, based on a large body of literature, and with items written to reflect specific dimensions of the learning–performance distinction, the ECI satisfies the first criterion of content validity.

As for response process, no formal piloting of items was conducted; however, the item pool was blueprinted to reflect various aspects of the learning–performance distinction, and the items were reviewed in advance by the coinvestigators, all experienced medical educators. The ECI stands up particularly well to the internal structure criterion, with items selected for inclusion based on thorough psychometric analysis. In addition, each of the three factors as well as the total scale has satisfactory reliability based on Cronbach alpha.

Limitations

Although based on a sample of students from six diverse medical schools, the findings reported in this study will need to be tested in a wider array of contexts and against a broader range of criteria to further generalize them and to validate the instrument.

One notable limitation concerns the variability in response rates across the schools, with an overall rate of 56% derived from individual school rates as low as 22% and as high as 81%. While many medical education researchers have set absolute thresholds below which they believe survey findings cannot be trusted, social scientists who have studied this question in depth have come to differing conclusions. In a special issue of Public Opinion Quarterly devoted to this issue, Singer36 concluded that “there is no minimum response rate below which a survey estimate is necessarily biased; and conversely, no response rate above which it is never biased.” More recently, Meterko and colleagues37 stated, “Results from ‘low’ response rate surveys may accurately represent attitudes of the population. Therefore, low response rates should not be cited as reasons to dismiss results as uninformative.” We acknowledge the reservations that some will have about possible bias introduced via low response rates at some schools. However, because we were attempting to draw tentative conclusions across schools rather than about any individual school, we believe that this first test of the ECI has produced trustworthy and informative findings.

Finally, all the items on Factor 1 are positively worded, and all those on Factors 2 and 3 are negatively worded. While this is not common, neither is it highly unusual; scales measuring optimism and pessimism, for example, share this structure.38

How students want to be assessed

While not bearing on the validity of the scale, a finding deserving of further study is that third-year students differed from their first- and second-year counterparts in how they would prefer to be judged by faculty. Although it is reasonable to infer that a learning-oriented climate is more conducive to educational growth, we place no such value distinction on students’ desire to be judged on performance versus effort. Demonstrated results (performance) and consistent hard work (effort) are both reasonable criteria by which students might prefer to be assessed.

It is not surprising that the majority of students in their preclerkship years indicated that they preferred to be judged by their performance; they had to be consistently high-achieving classroom performers to get into medical school. It is nonetheless ironic that many educators are attempting to create educational climates for these students that support learning and mastery, that is, environments consistent with assessment based on effort rather than performance. We were somewhat surprised to find the reverse pattern among students completing their core clerkships, where high levels of performance are expected by clinical faculty. One possible explanation is that once on the wards, students’ uncertainty about their clinical competence, and their fear that they may not be able to perform to the high standards of faculty and residents, lead them to want to be judged according to the one thing they can control: their effort. Consistent with this explanation, we would predict that the preference for being judged on effort versus performance may vary over time as learners move from settings in which they are novices (e.g., core clerkships or the first year of residency), where they may prefer assessment based on effort, to settings in which they are more experienced and confident (e.g., subinternships or the third year of residency), where they may prefer to be judged according to their ability to perform.

Conclusions

As medical educators attempt to create educational climates that support student growth and development, it is important that they have valid and reliable instruments capable of assessing this aspect of the medical school learning environment at a given moment or over time. We believe the ECI is such an instrument, enabling educators and researchers to assess the learning climate in a way that is both theoretically and practically meaningful. We envision the ECI being used in a number of ways—for example, as a key evaluation tool to determine the success of a medical school’s initiative to reform its curriculum and educational climate. With only minor modifications, the ECI could also be used to assess the educational climate in graduate medical education or other health professions.

Acknowledgments:

Statistical support was provided by data science specialist Ista Zahn at the Institute for Quantitative Social Science, Harvard University. Additional statistical advice was provided by Dr. James House, Angus Campbell Distinguished University Professor Emeritus, University of Michigan, Ann Arbor.

References

1. Genn JM. AMEE Medical Education Guide No. 23 (Part 2): Curriculum, environment, climate, quality and change in medical education—A unifying perspective. Med Teach. 2001;23:445–454.
2. Gracey CF, Haidet P, Branch WT, et al. Precepting humanism: Strategies for fostering the human dimensions of care in ambulatory settings. Acad Med. 2005;80:21–28.
3. O’Keefe PA, Ben-Eliyahu A, Linnenbrink-Garcia L. Shaping achievement goal orientations in a mastery-structured environment and concomitant changes in related contingencies of self-worth. Motiv Emot. 2013;37:50–64.
4. Ames C. Classrooms: Goals, structures, and student motivation. J Educ Psychol. 1992;84:261–271.
5. Dyrbye LN, Thomas MR, Harper W, et al. The learning environment and medical student burnout: A multicentre study. Med Educ. 2009;43:274–282.
6. Billings ME, Lazarus ME, Wenrich M, Curtis JR, Engelberg RA. The effect of the hidden curriculum on resident burnout and cynicism. J Grad Med Educ. 2011;3:503–510.
7. Liaison Committee on Medical Education. Functions and Structure of a Medical School: Standards for Accreditation of Medical Education Programs Leading to the MD Degree. 2016. Washington, DC: Liaison Committee on Medical Education; http://lcme.org/publications/. Accessed March 20, 2017.
8. Committee on Accreditation of Canadian Medical Schools (CACMS). CACMS Standards and Elements: Standards for Accreditation of Medical Education Programs Leading to the MD Degree. 2017. Ottawa, Ontario, Canada: Committee on Accreditation of Canadian Medical Schools (CACMS); https://cacms-cafmc.ca/accreditation-documents. Accessed March 20, 2017.
9. Cooke M, Irby DM, O’Brien BC. Educating Physicians: A Call for Reform of Medical School and Residency. 2010. Stanford, CA: Jossey-Bass.
10. Schönrock-Adema J, Visscher M, Raat AN, Brand PL. Development and validation of the Scan of Postgraduate Educational Environment Domains (SPEED): A brief instrument to assess the educational environment in postgraduate medical education. PLoS One. 2015;10:e0137872.
11. Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students’ and residents’ perceptions of the learning environment: Exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89:1687–1693.
12. Schönrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: Where is the theory? Adv Health Sci Educ Theory Pract. 2012;17:727–742.
13. Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: A systematic review. Med Teach. 2010;32:947–952.
14. Walsh WB, Craik KH, Price RH. Person-Environment Psychology: New Directions and Perspectives. 2000. Mahwah, NJ: Lawrence Erlbaum Associates.
15. Moos RH. The Social Climate Scales: An Overview. 1974. Palo Alto, CA: Consulting Psychologists Press.
16. Roff S, McAleer S. Robust DREEM factor analysis. Med Teach. 2015;37:602–603.
17. Sunkad MA, Javali S, Shivapur Y, Wantamutte A. Health sciences students’ perception of the educational environment of KLE University, India as measured with the Dundee Ready Educational Environment Measure (DREEM). J Educ Eval Health Prof. 2015;12:37.
18. Skochelak SE, Stansfield RB, Dunham L, et al. Medical student perceptions of the learning environment at the end of the first year: A 28-medical school collaborative. Acad Med. 2016;91:1257–1262.
19. Rosenbaum ME, Schwabbauer M, Kreiter C, Ferguson KJ. Medical students’ perceptions of emerging learning communities at one medical school. Acad Med. 2007;82:508–515.
20. Shochet RB, Colbert-Getz JM, Wright SM. The Johns Hopkins learning environment scale: Measuring medical students’ perceptions of the processes supporting professional formation. Acad Med. 2015;90:810–818.
21. Shochet RB, Colbert-Getz JM, Levine RB, Wright SM. Gauging events that influence students’ perceptions of the medical school learning environment: Findings from one institution. Acad Med. 2013;88:246–252.
22. Dweck CS. Mindset: The New Psychology of Success. 2006. New York, NY: Random House.
23. Grant H, Dweck CS. Clarifying achievement goals and their impact. J Pers Soc Psychol. 2003;85:541–553.
24. Dweck CS, Mangels JA, Good C. Motivational effects on attention, cognition, and performance. In: Dai D, Sternberg R, eds. Motivation, Emotion, and Cognition: Integrative Perspectives on Intellectual Functioning and Development. 2004. Mahwah, NJ: Lawrence Erlbaum Associates; 41–55.
25. Konopasek L, Norcini J, Krupat E. Focusing on the formative: Building an assessment system aimed at student growth and development. Acad Med. 2016;91:1492–1497.
26. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33:478–485.
27. Rosseel Y. lavaan: An R package for structural equation modeling. J Stat Softw. 2012;48:1–36.
28. Wickham H. ggplot2: Elegant Graphics for Data Analysis. 2009. New York, NY: Springer.
29. Revelle W. psych: Procedures for personality and psychological research. Version 1.6.9. Evanston, IL: Northwestern University. http://CRAN.R-project.org/package=psych. Accessed November 30, 2016. [No longer available.]
30. Talbot J. labeling: Axis labeling. R package version 0.3. https://cran.r-project.org/web/packages/labeling/index.html. Accessed March 7, 2017.
31. MacCallum RC, Widaman KF, Zhang S, Hong S. Sample size in factor analysis. Psychol Methods. 1999;4:84–99.
32. Smith SD, Dunham L, Dekhtyar M, et al. Medical student perceptions of the learning environment: Learning communities are associated with a more positive learning environment in a multi-institutional medical school study. Acad Med. 2016;91:1263–1269.
33. Soo J, Brett-MacLean P, Cave MT, Oswald A. At the precipice: A prospective exploration of medical students’ expectations of the pre-clerkship to clerkship transition. Adv Health Sci Educ Theory Pract. 2016;21:141–162.
34. Atherley AE, Hambleton IR, Unwin N, George C, Lashley PM, Taylor CG Jr. Exploring the transition of undergraduate medical students into a clinical clerkship using organizational socialization theory. Perspect Med Educ. 2016;5:78–87.
35. Small RM, Soriano RP, Chietero M, Quintana J, Parkas V, Koestler J. Easing the transition: Medical students’ perceptions of critical skills required for the clerkships. Educ Health (Abingdon). 2008;21:192.
36. Singer E. Introduction: Nonresponse rates in household surveys. Public Opin Q. 2006;70:637–645.
37. Meterko M, Restuccia JD, Stolzmann K, et al. Response rates, nonresponse rates, and data quality: Results from a National Survey of Senior Healthcare Leaders. Public Opin Q. 2015;79:130–144.
38. Gaudreau P, Blondin JP. Differential associations of dispositional optimism and pessimism with coping, goal attainment, and emotional adjustment during sport competition. Int J Stress Manag. 2004;11:245–269.

Supplemental Digital Content

© 2017 by the Association of American Medical Colleges