RIME: Assessment of Performance

Validity Evidence for a New Checklist Evaluating Consultations, The 5Cs Model

Kessler, Chad S., MD, MHPE; Kalapurayil, Priyanka S.; Yudkowsky, Rachel, MD, MHPE; Schwartz, Alan, PhD

doi: 10.1097/ACM.0b013e3182677944


The Accreditation Council for Graduate Medical Education (ACGME) recognizes communication and interpersonal skills as a core competency for all residents.1 Previous studies have shown these skills to be especially important when conducting consultations.2–9 Although physicians engage in consultations daily, no standardized model for conducting them exists. This lack of standardization leads to widely varying consultation styles, some of them inefficient and disorganized.1,3,4,10,11

A model to train physicians in the art of effective consultation is critical, but an assessment tool to evaluate those consultations is equally important. Prior studies have examined medical student and physician competence in gathering patient information after a consultation has occurred; for example, McKinley and colleagues12 used a modified version of the Leicester Assessment Package to assess consultations in a retrospective trial in 2000. However, to our knowledge, no studies have examined the communication that occurs between physicians during the consultation itself, and none provide validity evidence for a tool to assess consultations.

Validity evidence is categorized into five sources: content (content representativeness), response process (rater reliability and quality assurance), internal structure (psychometric characteristics of the assessment), relationship to other variables (relationship to other measures with similar or divergent constructs), and consequences (effects of the assessment).13 The primary purpose of this study is to establish validity evidence for a new checklist as a tool to measure the quality and effectiveness of consultations14 because, to our knowledge, no such tool currently exists.

Method

We devised this new checklist (Chart 1) based on the 7Cs model, a consultation model used in business.15 Our adaptation for medical consultations, the 5Cs model, comprises five aspects: contact, communicate, core question, collaboration, and closing the loop. To ground the model in medical consultation practice, we gathered relevant material from the following sources:

Chart 1 5Cs Model Checklist for Assessing Physician Consultations
  • a literature search (mining PubMed, EMBASE, Ovid, ERIC, BEME, PsycINFO, CINAHL, AcademicOneFile, ABI/Inform Global, and Academic Search Premier in 2010 using the terms consulting, consultation, negotiation, delegation, conflict resolution, education, and communication combined with the terms emergency medicine, emergency department, inter-professional, multi-disciplinary, relationship, and model),
  • a consensus panel of eight experts in medical education, emergency medicine (EM), and other medical specialties, and
  • a qualitative study.8

Our aim was to compile a list of factors critical to effective emergency department (ED) consultations. We operationalized these factors into 12 performance-measurable items and incorporated them into the checklist (Chart 1) to evaluate consultations, in particular the simulated, emergent telephone consultations performed by the residents in this trial.

Experts in clinical EM and in medical education created two simulated cases representing two frequent types of consultations: (1) a psychiatric consultation involving a patient with psychosis and (2) a surgical consultation centered on a patient with worsening abdominal pain. The cases were refined using a modified Delphi method in which psychiatry attending physicians, surgery attending physicians, and EM residents not otherwise involved in this study (four experts in total) reviewed, revised, and further developed the cases to incorporate qualities such as difficulty, interest, pertinence, authenticity, and instructional value.16

We evaluated the checklist for response process validity, internal structure validity, and validity related to other variables by analyzing the assessment data gathered from residents who used the 5Cs checklist in a prospective, randomized controlled trial of an intervention to teach consultation skills.

The University of Illinois at Chicago’s institutional review board approved this study (IRB number 2010-0307); all participating residents gave their informed consent before taking part in the trial. Participation was voluntary and independent of summative assessment. We offered no incentives for participation.

We invited 47 eligible EM and EM/internal medicine (IM) residents in postgraduate years (PGYs) 1 to 5 at a large, urban academic hospital in Chicago, Illinois, to participate during May 2010. Any resident in the EM or EM/IM residency program was eligible. We stratified residents by PGY level and assigned them to the control or intervention group using computer-generated random numbers. Using the newly developed checklist, we assessed all the residents on the quality and effectiveness of two emergent telephone consultations with a standardized physician consultant.14
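
For illustration, a minimal sketch of this kind of stratified random assignment appears below. The PGY strata and group labels come from the study, but everything else (the resident identifiers, the alternating allocation scheme, and the seed) is a hypothetical stand-in for the study's actual computer-generated randomization.

```python
import random

def stratified_assign(residents, seed=2010):
    """Assign residents to control/intervention, stratified by PGY level.

    residents: list of (resident_id, pgy_level) tuples.
    Within each PGY stratum, residents are shuffled and then alternated
    between the two groups so the strata stay roughly balanced.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    strata = {}
    for resident_id, pgy in residents:
        strata.setdefault(pgy, []).append(resident_id)
    assignment = {}
    for ids in strata.values():
        rng.shuffle(ids)
        for i, resident_id in enumerate(ids):
            assignment[resident_id] = "intervention" if i % 2 == 0 else "control"
    return assignment

# Hypothetical example: four residents across two PGY levels.
print(stratified_assign([("R01", 1), ("R02", 1), ("R03", 2), ("R04", 2)]))
```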

Clinical faculty taught the 5Cs model of consultation to the intervention group in a roughly 90-minute didactic session during which sample cases were demonstrated and reviewed. The training culminated in each resident practicing a case (unrelated to the cases used in the trial) and receiving feedback. After the training, the residents in the intervention group received note cards outlining the 5Cs model, and we asked them not to discuss the training with anyone until the study was complete. The control group received a didactic session of similar length (about 90 minutes) covering other communication skills, based on articles about medical communication and consultation unrelated to the 5Cs model.

Residents simulated the surgical and psychiatric consultations over the phone by calling a single standardized consultant, who answered all calls. We trained the standardized consultant (an EM/IM physician with 10 years of consultation experience but no prior knowledge of the 5Cs model) using mock encounters. The conversations between the residents and the standardized consultant were audio-recorded. Three raters, all attending physicians trained to use the checklist through mock encounters like those used to train the standardized consultant, evaluated the recorded phone consultations of all the residents.

To establish response process and internal structure validity evidence for the checklist, we performed the following calculations (a computational sketch follows the list):

  • intraclass correlations to measure interrater reliability,
  • Cronbach alpha across items to measure internal consistency,
  • item analysis to identify checklist items on which the intervention group scored significantly higher than the control group (at P < .05),
  • a generalizability analysis (G-study) using G String IV (Hamilton, Ontario, Canada) to determine reliability of the data under a fixed facet, and
  • a decision study (D-study) to determine reliability of the data under the variation of facets.
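
The sketch below makes the first three calculations concrete on a small, invented data set: a Shrout–Fleiss ICC(2,1) for interrater reliability, Cronbach alpha for internal consistency, and a per-item group comparison. The data, the ICC variant, and the choice of a two-sample t test for the item analysis are illustrative assumptions; the text does not specify which variants or univariate test the study used.

```python
import numpy as np
from scipy.stats import ttest_ind

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater. ratings: (n_subjects, n_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_total = np.sum((ratings - grand) ** 2)
    ms_err = (ss_total - (n - 1) * ms_rows - (k - 1) * ms_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def cronbach_alpha(items):
    """Cronbach alpha. items: (n_subjects, n_items) array; the study
    computed alpha across the 12 checklist items, once per rater."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Invented data: total checklist scores for 5 residents from 3 raters.
rater_scores = np.array([[9, 8, 9], [6, 6, 7], [8, 7, 8], [4, 5, 4], [10, 9, 10]], float)
print("ICC(2,1):", round(icc_2_1(rater_scores), 2))

# Invented data: one rater's scores on 4 of the items for 5 residents.
item_scores = np.array([[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1], [1, 1, 1, 1]], float)
print("alpha:", round(cronbach_alpha(item_scores), 2))

# Item analysis: compare groups on a single item (invented binary scores).
t, p = ttest_ind([1, 1, 1, 0, 1, 1, 1, 1], [0, 1, 0, 0, 1, 0, 1, 0])
print("item-level P:", round(p, 3))
```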

We conducted the G-study as a fully crossed analysis (3 raters × 2 cases × 12 items), treating the items as a fixed facet to determine generalizability with these specific items. The D-study estimated how many cases and/or raters would be needed to reach a reliability coefficient (phi) of 0.8, a threshold set a priori.
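
To make the D-study projection concrete, the sketch below applies the standard phi (absolute-error) formula for a fully crossed person × rater × case random design. The variance components are invented, chosen only so that the projections roughly mirror the pattern of results reported later; they are not the study's actual G String IV estimates.

```python
def phi_coefficient(var, n_raters, n_cases):
    """Phi coefficient for a crossed person (p) x rater (r) x case (c) design.

    var: dict of G-study variance components. Absolute error variance sums
    every non-person component, each divided by the number of conditions
    it is averaged over in the projected design.
    """
    abs_error = (var["r"] / n_raters + var["c"] / n_cases
                 + var["pr"] / n_raters + var["pc"] / n_cases
                 + (var["rc"] + var["prc"]) / (n_raters * n_cases))
    return var["p"] / (var["p"] + abs_error)

# Invented variance components, for illustration only.
vc = {"p": 1.00, "r": 0.05, "pr": 0.10, "c": 0.02, "pc": 0.08, "rc": 0.03, "prc": 0.12}
for n_r, n_c in [(3, 2), (2, 1), (1, 3)]:
    print(f"{n_r} rater(s) x {n_c} case(s): phi = {phi_coefficient(vc, n_r, n_c):.2f}")
```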

Validity evidence for the relationship of the checklist scores to other variables included correlations between mean scores on the checklist and mean scores on a global rating scale (GRS) for the two cases (Chart 2). We devised the GRS independently of the 5Cs model; it evaluated seven aspects of the consultations, based on topics identified by EM physicians, consultants, and a panel of experts.10 Three raters trained to use the GRS (distinct from the three raters trained to use the checklist) scored all the residents with this tool. We also calculated correlations between the checklist scores and scores assigned on a five-point Likert-type scale by a single psychiatrist and a single surgeon, who rated the psychiatric and surgical consultation calls, respectively.

Chart 2 Global Rating Scale (GRS) for Assessing Physician Consultations
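
The relationship-to-other-variables analysis reduces to Pearson correlations between per-resident mean scores on the two instruments. A minimal sketch with invented scores follows; the statistical software the study actually used is not named in the text.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-resident mean scores, for illustration only.
checklist_means = np.array([0.81, 0.55, 0.73, 0.42, 0.90, 0.60])
grs_means = np.array([5.8, 4.1, 5.2, 3.6, 6.3, 4.9])

r, p = pearsonr(checklist_means, grs_means)  # Pearson r and two-sided P value
print(f"r = {r:.2f}, P = {p:.4f}")
```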

Results

Of the 47 residents eligible to participate, 43 (91%) completed the assessment. The intervention group comprised 14 males and 5 females; the control group consisted of 18 males and 6 females. The intervention group included 6 PGY-1s, 7 PGY-2s, 3 PGY-3s, 2 PGY-4s, and 1 PGY-5, and the control group had 7 PGY-1s, 5 PGY-2s, 8 PGY-3s, 2 PGY-4s, and 2 PGY-5s.

Content validity evidence for the assessment includes the development of the checklist items from a theoretical model supported by a survey of content experts. Further content validity evidence lies in testing the checklist through simulated cases that were developed by content experts to reflect commonly presenting consultation topics and then piloted by residents and revised for clarity.

Response process evidence includes an interrater reliability of 0.94, as measured by intraclass correlation. Evidence of internal consistency included Cronbach alphas of 0.7, 0.8, and 0.7, respectively, for the three raters' checklist scores.

Internal structure validity evidence, as determined by our G-study, resulted in a phi coefficient of 0.89 for three raters each rating the same two cases. The D-study yielded a phi coefficient of 0.80 both for two raters rating the same single case and for one rater rating three different cases (the most likely design in an active clinical setting). Item analysis showed significantly higher scores for the intervention group (P < .05) on 9 of the 12 checklist items compared with the control group (Table 1). We detected no significant performance differences between the two groups on three checklist items: stating one's name, presenting an accurate account of information/case details, and speaking clearly. The intervention group also had a significantly higher mean score than the control group on the checklist as a whole (P < .0001).

Table 1: Univariate Analysis Comparing Intervention Group Residents’ Scores on Individual Physician Consultation Checklist Items With Those of Control Group Residents, 2010

Our analysis of the checklist’s validity as it relates to scores on a GRS yielded correlation coefficients (r) of 0.59 for the surgical case and 0.79 for the psychiatric case (n = 43, P < .0001), indicating moderate to strong correlations. The correlation coefficients (r) between the checklist scores and the ratings given by the psychiatrist and the surgeon for the emergent psychiatric and surgical consultation calls were 0.26 (P < .08) and 0.40 (P < .008), respectively.

Discussion

This study presents multiple sources of validity evidence for a new assessment of resident telephone consultation skills, the 5Cs model. The iterative, expert-driven, theory-based process used to develop both the checklist and the cases with which it was tested supports the content validity of this new consultation evaluation instrument. Response process was another source of validity evidence: The checklist yielded an intraclass correlation of 0.94, suggesting high interrater reliability.13 Internal reliability of the checklist, as measured by Cronbach alpha across the three raters, yielded coefficients at or above the accepted reliability threshold of 0.7.17 Internal structure validity evidence included significantly higher performance by the intervention group on 75% of the checklist items (P < .05), suggesting that the checklist reliably detects changes in performance attributable to the intervention. The D-study results suggest that an acceptable reliability of 0.80 is achievable with only three observations (a number both reasonable and feasible in a typical ED setting). Finally, moderately strong positive correlations of the checklist scores with the GRS scores for both the surgical and psychiatric cases support the validity of the checklist in its relationship to other variables.

The approach we took in this study to gather evidence of the consultation checklist’s validity, grounded in the framework of five evidence sources, is, as noted in the Standards for Educational and Psychological Testing,18 an acceptable and productive way to seek validity evidence for any type of assessment tool.

Consultation evaluation tools are critical to assess resident consultations on ACGME core competencies such as communication and interpersonal skills. Because, to our knowledge, no other tools to assess consultations are currently available, this study is innovative and emphasizes the need for methods to measure this competency in physicians. This trial proposes a new tool, the 5Cs model, for evaluating emergent resident telephone consultations in the ED. Early work using this checklist in simulated clinical care scenarios has shown that it is feasible and effective.14

Although this study establishes several sources of validity evidence for the 5Cs consultation checklist as a tool to assess emergent telephone consultations, it has limitations. The raters as well as the residents were aware of the purpose of the study; observer bias in the raters and evaluation apprehension in the residents may have affected the response process and introduced construct-irrelevant variance, a threat to the validity of the assessment.19 The instrument we used to test the validity of the checklist in relation to other variables (i.e., our GRS) has yet to be validated. We conducted this study in a controlled situation using standardized cases and a standardized consultant rather than in an environment that more accurately represents everyday ED practice.19 Only two cases, representing a psychiatric and a surgical consultation, respectively, were used in this study. In real-world circumstances, cases vary greatly in complexity and context, and attending–resident communication differs from one person to the next. This trial was also limited to one small group of 43 residents at a single academic hospital and represented only one type of consultation (i.e., a resident physician consulting with an attending about an emergent situation via the telephone). Future trials in different settings, involving a greater variety of cases, larger sample sizes, other types of consultations (e.g., resident-to-resident, face-to-face), and other learners (e.g., medical students), might yield further validity evidence to support using this checklist for other types of consultations. Other areas for future work include comparing the checklist with other ratings of consultations and evaluating different ways to teach learners, and to develop faculty, in the use of the 5Cs model. Further study is also needed to assess the impact of enhanced communication on patient safety.

Conclusions

Because effective communication and consultations are key components of quality health care, a tool to assess resident consultations can be a vehicle for change and improvement. The 5Cs model can potentially play an important role in improving communication among health providers and in enhancing patient safety. This study provides evidence, in the domains of content validity, response process validity, internal structure validity, and validity in relation to other variables, to support the use of this tool for assessing the quality and effectiveness of emergent phone consultations in the ED. Incorporating this checklist into graduate medical education programs across the country would create a standardized way to train and assess residents in performing consultations.

Acknowledgments: The authors wish to thank Dr. Weihua Gao, Dr. Frederick Kviz, and the Quantitative Biomedical Core, Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, for their assistance with statistical analysis. Special thanks to the study raters, consultant, and reviewers; to Dr. Saul Weiner, senior associate dean for educational affairs at the University of Illinois at Chicago; and to Dr. Richard Frankel, professor of medicine and geriatrics at the Indiana University School of Medicine, for their assistance in conducting this study. None of these individuals received compensation for their contributions.

Funding/Support: None.

Other disclosures: None.

Ethical approval: This study was approved by the institutional review board of the University of Illinois at Chicago (IRB number 2010-0307). All subjects gave informed consent before participating in the trial.

References

1. Accreditation Council for Graduate Medical Education. ACGME General Competencies and Outcomes Assessment for Designated Institutional Officials. http://www.acgme.org/acWebsite/irc/irc_competencies.asp. Accessed June 22, 2012.
2. Eisenberg EM, Murphy AG, Sutcliffe K, et al. Communication in emergency medicine: Implications for patient safety. Commun Monogr. 2005;72:390–413.
3. Reid C, Moorthy C, Forshaw K. Referral patterns: An audit into referral practice among doctors in emergency medicine. Emerg Med J. 2005;22:355–358.
4. Guertler AT, Cortazzo JM, Rice MM. Referral and consultation in emergency medicine practice. Acad Emerg Med. 1994;1:565–571.
5. Walker LG. Communication skills: When, not if, to teach. Eur J Cancer. 1996;32A:1457–1459.
6. The Joint Commission. Delays in treatment. Sentinel Event Alert. June 17, 2002;26. http://www.jointcommission.org/assets/1/18/SEA_26.pdf. Accessed June 11, 2012.
7. Cheung DS, Kelly JJ, Beach C, et al; Section of Quality Improvement and Patient Safety, American College of Emergency Physicians. Improving handoffs in the emergency department. Ann Emerg Med. 2010;55:171–180.
8. Kessler C, Kutka B, Badillo C. Consultation in the emergency department: A qualitative analysis and review. J Emerg Med. 2012;42:704–711.
9. Ouellette H, Kassarjian A, McLoud TC. Teaching the art of verbal consultation. J Am Coll Radiol. 2006;3:9–10.
10. Matthews A, Harvey CM, Schuster RJ, Durso FT. Emergency physician to admitting physician handovers: An exploratory study. Proc Hum Fact Ergon Soc Annu Meet. 2002;46:1511–1515.
11. Ye K, McD Taylor D, Knott JC, Dent A, MacBean CE. Handover in the emergency department: Deficiencies and adverse effects. Emerg Med Australas. 2007;19:433–441.
12. McKinley RK, Fraser RC, van der Vleuten C, Hastings AM. Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package. Med Educ. 2000;34:573–579.
13. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
14. Kessler CS, Afshar Y, Sardar G, et al. A prospective, randomized controlled study demonstrating a novel, effective model of transfer of care between physicians: The 5Cs of consultation. Acad Emerg Med. 2012. In press.
15. Cope M. The Seven Cs of Consulting: The Definitive Guide to the Consulting Process. 2nd ed. London, UK: Financial Times Prentice Hall; 2003.
16. Kim S, Phillips WR, Pinsky L, Brock D, Phillips K, Keary J. A conceptual framework for developing teaching cases: A review and synthesis of the literature across disciplines. Med Educ. 2006;40:867–876.
17. Nunnally JC. Psychometric Theory. New York, NY: McGraw-Hill; 1978.
18. American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 1999.
19. Singleton RA, Straits BC. Approaches to Social Research. 5th ed. New York, NY: Oxford University Press; 2010.
© 2012 Association of American Medical Colleges