Persistent health disparities,1 changing population demographics,2 and growing numbers of insured patients following the enactment of the Affordable Care Act3 make it increasingly challenging for U.S. academic medical centers to graduate a diverse physician and scientific workforce prepared to advance high-quality, culturally competent health care and research. Therefore, it is imperative for institutions to assess their organizational capacity for diversity and inclusion and respond effectively to insights gained.4 Diversity scholars and practitioners have made strides in creating tactical metrics related to diversity and inclusion interventions, such as pipeline and mentoring programs, internships, equal opportunity plans, diversity councils, and affinity networks. Institutions have administered climate and culture surveys to identify differences in workplace perceptions between various demographic groups. However, beyond these tactical metrics, we lack an overall measure of an institution’s capacity to fully include and engage all of its members.
We created the Diversity Engagement Survey (DES) to measure how well academic medical centers are responding to the diversity of their community members (i.e., their faculty, staff, and students). By measuring the academic medicine environment through the lens of diversity and inclusion, the DES provides institutions with data on the level of active engagement by their members, their inclusive characteristics, and the degree to which their diverse groups experience inclusion.
In this article, we discuss the development and psychometric properties of the DES, which we propose as a tool for an institution’s diagnosis of engagement and inclusion efforts, for guiding improvement toward achieving diversity goals, and for benchmarking of academic medical centers’ progress toward engagement and inclusion.
The factors that undergird the DES emerged from years of study of the diversity, inclusion, and engagement literature and applied diversity management experience. Previous iterations of the instrument were used with 12 organizations (6 corporations, 4 hospital systems, 1 government agency, and 1 social service organization). These previous iterations were useful in evaluating perspectives about diversity among individuals at the participating institutions. However, they were not as effective in providing diagnostic data and strategic direction for future interventions. To improve the effectiveness of the instrument, we recognized the need to focus on how the cultural conditions of an institution are influenced by the interplay of engagement and inclusion.
Conceptual underpinnings of the DES
Unlike culture, climate, or general purpose engagement surveys—which are widely used in academic medical settings for assessing individuals’ perceptions of their own psychosocial experiences within an institution—the DES is designed to reveal the aspects of institutional culture and social dynamics related to engagement and inclusion that have been shown to be most strongly related to productivity and employee retention.5,6
Within the DES framework (described below), diversity is conceptualized as encompassing all aspects of human differences and is viewed as a core value that embodies inclusiveness, mutual respect, and awareness of multiple perspectives.7 Inclusion is conceptualized as a set of social processes that influence an individual’s access to information and sense of belonging, job security, and social support received from others.8,9 Without an institutional culture that supports the inclusion of the differences in perspectives, life experiences, and knowledge that individuals bring to the institution, the full potential of diversity cannot be realized.4
Engagement of every member of the institution is the foundation on which a truly inclusive academic medical center is built. Successful employee engagement is derived from meeting the basic intellectual and emotional needs of workers.10–14 Engagement results from cultural conditions that foster a shared sense of the vision and purpose of the organization as well as camaraderie and appreciation of employees’ contributions to the institution. A sense of vision and purpose provides employees with a compelling reason to contribute to the organization’s mission. Camaraderie gives employees a sense of belonging and provides them with opportunities to reach out and personally connect with those around them. Appreciation recognizes individuals’ contributions and values what each person brings to the organization. These are conditions for building inclusion within a diverse workforce as well as encouraging people to bring their full creative and innovative talents into the workplace.12,15,16
The DES framework
We identified eight engagement and inclusion factors, which formed our framework for developing the DES:
- Common purpose: Individuals experience a connection to the mission, vision, and values of the organization.
- Trust: Individuals have confidence that the policies, practices, and procedures of the organization will allow them to bring their best and full self to work.
- Appreciation of individual attributes: Individuals perceive that they are valued and can successfully navigate the organizational structure in their expressed group identity.
- Sense of belonging: Individuals experience their social group identity as being connected with and accepted in the organization.
- Access to opportunity: Individuals perceive that they are able to find and utilize support for their professional development and advancement.
- Equitable reward and recognition: Individuals perceive the organization as having equitable compensation practices and nonfinancial incentives.
- Cultural competence: Individuals believe the institution has the capacity to make creative use of its diverse workforce in a way that meets business goals and enhances performance.
- Respect: Individuals experience a culture of civility and positive regard for diverse perspectives and ways of knowing.
The DES instrument
We proposed survey items derived from a review of the literature and from our own experience in the field, mapped to the framework’s factors. The final DES consisted of 22 items chosen to reflect the eight engagement and inclusion factors (see Table 1). Each item was created to capture the essence of the relationship between the institution and its members, not individuals’ perceptions about how they, and those who share a group identity with them, perceive or experience institutional practices. All items were written in the first person and phrased positively. We also included a final open-ended question (“If you wish, please provide additional comments on the diversity and inclusion efforts”) to provide the respondents the opportunity to express any concerns, insights, or experiences related to their institutional context.
All responses on the 22-item instrument were scored on a 5-point Likert scale (5 = strongly agree to 1 = strongly disagree). Respondents could indicate if they were unable to evaluate an item. Items that respondents were unable to evaluate were scored as 3 (neither agree nor disagree) in our analysis. Given the small number of items, we judged the risk of participant acquiescence bias to be minimal.
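The scoring rule above can be sketched in a few lines. The study’s analyses were run in SAS and Stata; the Python sketch below is only an illustration of the recoding logic, and the response labels and function name are our own, not taken from the instrument’s codebook.

```python
# Illustrative sketch (not the authors' code): score a DES-style Likert item,
# mapping "unable to evaluate" responses to the neutral midpoint of 3.
import pandas as pd

# Hypothetical response labels; the scale runs 5 = strongly agree
# down to 1 = strongly disagree, per the scoring rule described above.
LIKERT_SCORES = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
    "Unable to evaluate": 3,  # scored as neutral in the analysis
}

def score_item(responses: pd.Series) -> pd.Series:
    """Convert raw response labels into 1-5 numeric item scores."""
    return responses.map(LIKERT_SCORES)

raw = pd.Series(["Agree", "Unable to evaluate", "Strongly disagree"])
print(score_item(raw).tolist())  # [4, 3, 1]
```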
In addition, the DES collected data on the characteristics of the respondents and their environment that may be useful in interpreting findings about diversity and inclusion:
- internal dimensions: race, ethnicity, age, gender, sexual orientation, and physical ability;
- external dimensions: religion, work experience, and languages spoken; and
- organizational dimensions: management status, functional level/classification, division/department, unit/group, work location, and seniority.
Pilot testing and survey implementation
Face and content validity of the survey were assessed and improved through a review panel consisting of representative respondents at the home medical institution of one of the authors. The same survey was piloted at an academic medical center in March 2011. After the pilot, an invitation to participate in the survey benchmarking process was sent through the Association of American Medical Colleges (AAMC) and the Group on Diversity and Inclusion to all AAMC member institutions. The survey was subsequently administered to 13 additional U.S. academic medical centers from March 2011 through April 2012. The participating academic medical centers were offered the instrument at no cost to their institution and were provided access to their survey results in aggregate form with the understanding that their results would be used to validate the instrument and create benchmark data. Data were collected, compiled, and provided to our research team by an external provider of survey management services.
The institutional review board of the University of Massachusetts Medical School provided an exemption waiver for the study in February 2011. The survey was implemented in a voluntary, anonymous manner to all participating institutions’ employees, including faculty, staff, and administrators, as well as students. Completion of the survey constituted consent. No incentives were provided for participation.
Quantitative analyses were performed using SAS 9.3 (SAS Institute Inc., Cary, North Carolina) and Stata 12 (StataCorp LP, College Station, Texas). Demographic characteristics of the respondents were summarized. Based on the development process described above, face and content validity of the instrument were established prior to pilot testing of the instrument.
Internal consistency reliability, a commonly used tool in psychometric evaluation, is an indicator of how well different items measure the same concept.17 We measured the internal consistency of the eight engagement and inclusion factors by calculating Cronbach alphas. Traditionally, Cronbach alpha values greater than or equal to 0.70 are deemed acceptable.
Construct validity is a measure of how meaningful an instrument is in actual use.17 More specifically, construct validity is demonstrated when a measure captures what it is intended to represent; in other words, a measure with high construct validity will behave according to a specified conceptual model. Based on the expected mapping of survey items to engagement and inclusion factors, we performed confirmatory factor analysis (CFA) via structural equation modeling to investigate construct validity and to examine the dimensionality of the DES. We examined item correlations and selected two representative fit indices—comparative fit index (CFI)18 and the standardized root mean square residual (SRMR)19—to assess model fit. CFI is an index that ranges from 0 to 1; values greater than 0.90 are considered an indicator of a good fitting model.18 The SRMR is an absolute measure of fit and is defined as the standardized difference between the observed and predicted correlation; models with an SRMR value less than or equal to 0.08 are considered good.19
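The two fit indices have simple closed forms. As a language-neutral illustration (the study used structural equation modeling software, not this code), the sketch below computes CFI from a fitted model's and a baseline model's chi-square statistics, and SRMR from observed versus model-implied correlation matrices; all the numbers are hypothetical.

```python
# Illustrative sketch: the two CFA fit indices used to judge model fit.
import numpy as np

def cfi(chi2_model: float, df_model: int, chi2_null: float, df_null: int) -> float:
    """CFI compares the model's noncentrality to that of the baseline (null) model."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1 - d_model / max(d_model, d_null)

def srmr(observed: np.ndarray, implied: np.ndarray) -> float:
    """Root mean square of the differences between observed and
    model-implied correlations (lower triangle, including diagonal)."""
    i, j = np.tril_indices_from(observed)
    return float(np.sqrt(np.mean((observed[i, j] - implied[i, j]) ** 2)))

# Hypothetical chi-square values for a fitted model and its null model.
print(round(cfi(350.0, 181, 2200.0, 231), 3))  # 0.914 -> acceptable (> 0.90)

obs = np.array([[1.0, 0.5], [0.5, 1.0]])
imp = np.array([[1.0, 0.4], [0.4, 1.0]])
print(round(srmr(obs, imp), 3))  # well under the 0.08 cutoff
```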
We also examined the ability of the DES to distinguish between institutions with higher and lower degrees of engagement and inclusion of their respondents. First, we calculated a mean score for each factor by institution and a separate grand mean DES score for each institution (where a higher score represents a more positive response and thus a greater degree of engagement). Next, we created a graphic display for each factor that arrayed mean institutional scores in ascending order. This display provided a visual method to examine the ability of the DES to distinguish between institutions with higher and lower degrees of engagement and inclusion. Then, we conducted a cluster analysis based on the grand mean DES score for each institution to determine whether patterns, observed graphically, resulted in different statistical clusters of institutions. Cluster analysis is a set of techniques designed to place objects into groups, suggested by the data, such that an object in a given cluster is more like other objects in that same cluster than objects in another cluster.20 We examined tree plots to determine the appropriate number of clusters. Using a complete linkage approach, we determined each institution’s cluster membership based on its grand mean DES score.
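The clustering step described above can be sketched as follows. This is a minimal Python illustration with hypothetical grand mean DES scores, not the study's actual data or code; it uses SciPy's hierarchical clustering with the same complete linkage approach.

```python
# Illustrative sketch: complete-linkage hierarchical clustering of
# institutions by grand mean DES score, cut into three clusters.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical grand mean DES scores for seven institutions.
grand_means = np.array([3.2, 3.3, 3.9, 4.0, 4.1, 4.6, 4.7]).reshape(-1, 1)

# Build the tree (dendrogram) with complete linkage, then cut it so that
# at most three clusters remain.
tree = linkage(grand_means, method="complete")
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)  # institutions with similar scores share a cluster label
```

In practice one would inspect the tree plot before choosing the number of clusters, as described above.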
We also sought to demonstrate the instrument’s usefulness in understanding specific disparities within a given institution by distinguishing between the experiences of different demographic groups. As one example of this type of analysis, first, we calculated differences in mean item scores for black respondents and white respondents within each institution. Next, for each item we ranked each academic medical center separately on the black respondent mean scores (ordered from highest to lowest) and the observed disparity (white respondent mean score − black respondent mean score) (ordered from lowest to highest) and calculated a Spearman correlation for the two rankings. We performed a separate analysis for each item. We repeated this graphical and statistical analysis using the grand mean of all DES items for black respondents and for white respondents at each institution.
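The ranking-and-correlation step can be illustrated with a small Python sketch. The scores below are hypothetical (the study's analyses were run in SAS/Stata); institutions are ranked on black respondent mean scores from highest to lowest and on disparity from lowest to highest, and the two rankings are then correlated.

```python
# Illustrative sketch: Spearman correlation between institutional rankings
# on black respondent mean scores and on the black-white disparity.
from scipy.stats import rankdata, spearmanr

# One hypothetical mean item score per institution.
black_means = [4.1, 3.6, 3.9, 3.2, 4.4]
white_means = [4.2, 4.0, 4.1, 3.9, 4.3]
disparities = [w - b for w, b in zip(white_means, black_means)]

rank_black = rankdata([-b for b in black_means])  # rank 1 = highest black mean
rank_disparity = rankdata(disparities)            # rank 1 = smallest disparity

rho, p_value = spearmanr(rank_black, rank_disparity)
print(rho)  # positive rho: higher black scores go with smaller disparities
```

In this toy example the two rankings agree perfectly, so rho is 1.0; the study's observed correlations of 0.70 to 0.95 reflect the same direction of association with real-world noise.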
As a final step in assessing the utility of the DES, we examined criterion validity, which is a measure of how well a construct predicts an outcome based on information from other variables.17 Here, we examined differences in DES factor mean scores based on key respondent characteristics suggested by the literature, such as race/ethnicity, gender, and sexual orientation. Respondents had the opportunity to self-identify as lesbian, gay, bisexual, transgender, queer, questioning, asexual, or other. For purposes of analysis and reporting we collapsed these responses into one category labeled LGBTQ.
The 13,694 respondents to the DES provided broad representation across each region of the United States. The average response rate across the 14 participating institutions was 26.7% (SD = 9.5), and institutional response rates ranged from 11% to 46%. (One institution did not provide the total number of possible respondents.) Approximately 66% (n = 8,435) of the respondents were female, and most were white (75%; n = 9,496). Most respondents reported heterosexual orientation (87%; n = 11,847). Duration of employment was equally distributed between respondents who reported less than five years (50%; n = 6,338) and those who reported five years or more (50%; n = 6,364) at their current institution.
The Cronbach alphas for the eight engagement and inclusion factors of the DES ranged from 0.68 to 0.85 (Table 1), with an overall Cronbach alpha of 0.96. The factors demonstrated acceptable levels of internal consistency reliability, with the exception of the common purpose factor, which was marginal (Cronbach alpha = 0.68).
CFA resulted in a CFI of 0.917 and an SRMR of 0.038. Both indices indicate an acceptable model fit and support our mapping of items to engagement and inclusion factors. An examination of item correlations with the latent constructs from CFA indicated that in general all items correlated well with the constructs they were intended to measure, with only three items (items 4, 14, and 20) having slightly lower correlations than desired (Figure 1). CFA results also revealed satisfactory loadings for all the items (Table 1). Similar to the results found in the item correlations and latent constructs, items 4 and 14 had slightly lower factor loadings than the other items; however, they were still within the threshold of acceptability (loading scores > 0.4).
The graphical displays of institutions’ mean engagement and inclusion factor scores clearly delineated institutions with higher, middle, and lower degrees of engagement and inclusion by their respondents (Figure 2). The formal cluster analysis based on institutions’ grand mean DES scores similarly yielded three distinct clusters of institutions, which accounted for 98% of the variation in the eigenvalues. Figure 2 illustrates the high degree of correspondence between the formal cluster analysis based on the grand mean DES and the graphical rankings of institutional performance on each factor.
We also found that greater disparity between black and white respondents at the institutional level was strongly correlated with lower black respondent scores. Spearman correlations for institutional rankings based on disparities and institutional rankings based on black respondent mean item scores ranged from 0.70 to 0.95 and were statistically significant for all items except 4 and 14 (see Supplemental Digital Table 1 at http://links.lww.com/ACADMED/A303). Similar findings based on the analysis for the grand mean of all items are illustrated in Figure 3. This figure also shows that there was great variability in both observed disparities and grand mean DES scores for black respondents. For only two institutions, the disparities favored black respondents (i.e., black respondents had higher grand mean DES scores than white respondents).
Analysis of the responses by demographic group revealed that black respondents and Hispanic/Latino respondents had lower mean factor scores than white respondents. Female respondents had lower mean factor scores than male respondents (Table 2). Respondents who reported their orientation as LGBTQ had lower mean factor scores than those who reported heterosexual orientation. This pattern persisted when analyses were restricted to respondents from institutions belonging to the highest cluster of engagement according to cluster analysis (results not shown).
Our findings suggest that the DES produces useful, reliable, and valid measurements of key phenomena essential to conditions that support diversity, engagement, and inclusion in academic medical centers. Additionally, the DES lends itself to both composite and subgroup analyses, which serve complementary yet distinct functions. The overall institutional scores support ranking and benchmarking, whereas subgroup analysis allows focused investigation about root causes that may be used in developing improvement plans. For example, if both an institution’s overall and subgroup scores for a given factor or item are equally low, changes in organization-wide policy may be needed. On the other hand, if the overall score is high but a subgroup score is low, a policy targeting the subgroup may be appropriate.
Overall, the Cronbach alpha results indicate that the DES is a reliable instrument. One possible explanation for the marginally low Cronbach alpha of the common purpose factor may be violation of the essential tau equivalence assumption,21 which is suggested because the observed variances of the two items comprising this factor were significantly different (data not shown). However, violation of this assumption usually leads to underestimation of the alpha coefficient, so it is reasonable to assume that the reported coefficient represents a lower bound for the true value. Because the entire DES has face validity based on existing literature and vetting with the review panel, we have chosen to retain the common purpose factor in the survey. Nonetheless, we will continue to monitor this factor closely as the DES is rolled out to more academic medical centers.
Additionally, we have demonstrated both construct and criterion validity. Fit indices from the CFA were acceptable, indicating appropriate model fit. Consistent with the literature,22–26 we found that black, Hispanic/Latino, female, and LGBTQ respondents had lower degrees of engagement than their counterpart respondents. We also found that the DES consistently separated participating institutions into three distinct clusters based on the grand mean DES score, supporting the instrument’s promise as a benchmarking tool to measure the progress of diversity interventions among academic medicine institutions.
The within-institution analysis revealed large variation in disparities between black and white respondents, suggesting the importance of future studies to determine how institutional characteristics, culture, and programming are related to observed disparities. The results of the within-institution analysis also suggest that when there is a disparity between black and white respondents, the difference occurs because black individuals are reporting lower degrees of engagement and inclusion than white individuals. We also found that institutions with the highest levels of engagement generally had higher engagement scores for black respondents and lower observed disparities between black and white respondents, compared with institutions with the lowest levels of engagement.
It should be noted that our sample of 14 institutions is not necessarily representative of the entire population of academic medical centers in the United States. For example, institutions that are experiencing diversity challenges or those that have been particularly active in promoting and integrating diversity may have been more likely to participate. However, concern about selection bias is somewhat mitigated because a significant number of institutions clustered in the middle range of scores. Nonetheless, because of concern about selection bias, we did not examine the relation between grand mean DES scores and institutional characteristics. Such studies will be appropriate as larger, representative samples of institutions become available. In addition, in-depth case studies of selected higher- and lower-performing institutions may yield findings to inform future interventions.
To build institutional capacity for diversity, institutions must start with an understanding of the extent to which their various groups feel included and engaged.25 This study shows that the DES provides a way of measuring the conditions through which the institutional culture fosters engagement and inclusion. As a diagnostic tool, it allows institutions to assess their engagement and inclusion efforts and helps them develop a strategy for achieving their diversity goals. As a benchmarking tool, the DES distinguishes institutions in their progress toward engagement and inclusion. Overall, the DES can support academic medical centers in assessing and building their institutional capacity to adapt and innovate during this time of transformation across all domains of health care and academic medicine in the United States.
1. Nelson A. Unequal treatment: Confronting racial and ethnic disparities in health care. J Natl Med Assoc. 2002;94:666–668.
2. Judy RW, D’Amico C. Workforce 2020: Work and Workers in the 21st Century. Indianapolis, Ind: Hudson Institute; 1997.
3. Kocher R, Emanuel EJ, DeParle NA. The Affordable Care Act and the future of clinical medicine: The opportunities and challenges. Ann Intern Med. 2010;153:536–539.
4. Ely RJ, Thomas DA. Cultural diversity at work: The effects of diversity perspectives on work group processes and outcomes. Adm Sci Q. 2001;46:229–262.
5. Cooke RA, Szumal JL. Measuring normative beliefs and shared behavioral expectations in organizations: The reliability and validity of the organizational culture inventory. Psychol Rep. 1993;72:1299–1330.
6. Mor Barak ME, Cherin DA, Berkman S. Organizational and personal dimensions in diversity climate: Ethnic and gender differences in employee perceptions. J Appl Behav Sci. 1998;34:82–104.
7. Association of American Medical Colleges. About GDI: Definitions. https://www.aamc.org/members/gdi/about/. Accessed July 14, 2015.
8. Hope Pelled L, Ledford GE, Mohrman SA. Demographic dissimilarity and workplace inclusion. J Manag Stud. 1999;36:1013–1031.
9. Schein EH. Organizational Culture and Leadership. San Francisco, Calif: Jossey-Bass; 1992.
10. Buckingham M, Coffman C. First, Break All the Rules: What the World’s Greatest Managers Do Differently. New York, NY: Simon & Schuster; 1999.
11. Kahn WA. Psychological conditions of personal engagement and disengagement at work. Acad Manag J. 1990;33:692–724.
12. Colan L. Engaging the Hearts and Minds of All Your Employees: How to Ignite Passionate Performance for Better Business Results. New York, NY: McGraw-Hill; 2008.
13. Harter JK, Schmidt FL, Keyes CL. Well-being in the workplace and its relationship to business outcomes: A review of the Gallup studies. In: Keyes CLM, Haidt J, eds. Flourishing: Positive Psychology and the Life Well-Lived. Washington, DC: American Psychological Association; 2003:205–224.
14. Volpone SD, Avery DR. Linkages between racioethnicity, appraisal reactions and employee engagement. J Appl Soc Psychol. 2012;42:252–270.
15. Davidson MN, Ferdman BM. Diversity and inclusion: What difference does it make? Ind Org Psychol. 2001;39(2):36–38.
16. Cox T. Creating the Multicultural Organization: A Strategy for Capturing the Power of Diversity. San Francisco, Calif: Jossey-Bass; 2001.
17. Litwin MS. The Survey Kit: Vol 8. How to Assess and Interpret Survey Psychometrics. 2nd ed. Thousand Oaks, Calif: Sage Publications; 2003.
18. Bentler PM. Comparative fit indexes in structural models. Psychol Bull. 1990;107:238–246.
19. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6:1–55.
20. Everitt BS, Landau S, Leese M, Stahl D. Cluster Analysis. 5th ed. New York, NY: Wiley & Sons; 2011.
21. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. Int J Med Educ. 2011;2:53–55.
22. UM ADVANCE Program. Assessing the academic work environment for science and engineering and social science faculty at the University of Michigan in 2006: Gender, race, and discipline in department and university related climate factors. 2008. http://www.advance.rackham.umich.edu/ADV-FacultyClimate-Rpt2-final.pdf. Accessed July 14, 2015.
23. Bilimoria D, Liang X. Gender Equity in Science and Engineering: Advancing Change in Higher Education. Routledge Studies in Management, Organizations, and Society. Florence, Ky: Routledge; 2012.
24. Bilimoria D, Stewart AJ. “Don’t ask, don’t tell”: The academic climate for lesbian, gay, bisexual, and transgender faculty in science and engineering. NWSA J. 2009;21:85–103.
25. Liang X, Bilimoria D. The representation and experience of women faculty in STEM fields. In: Burke RJ, Mattis MC, eds. Women and Minorities in Science, Technology, Engineering, and Mathematics: Upping the Numbers. Northampton, Mass: Edward Elgar Publishing; 2007:317–333.
26. Nivet MA. Commentary: Diversity 3.0: A necessary systems upgrade. Acad Med. 2011;86:1487–1489.