Academic Medicine: July 2010 - Volume 85 - Issue 7
doi: 10.1097/ACM.0b013e3181d5a954
Graduate Medical Education

Measuring the Intensity of Resident Supervision in the Department of Veterans Affairs: The Resident Supervision Index

Byrne, John M. DO; Kashner, Michael PhD, JD; Gilman, Stuart C. MD, MPH; Aron, David C. MD, MS; Cannon, Grant W. MD; Chang, Barbara K. MD, MA; Godleski, Linda MD; Golden, Richard M. PhD; Henley, Steven S. MS; Holland, Gloria J. PhD; Kaminetzky, Catherine P. MD, MPH; Keitz, Sheri A. MD, PhD; Kirsh, Susan MD; Muchmore, Elaine A. MD; Wicker, Annie B.

Author Information

Dr. Byrne is associate chief of staff for education, VA Loma Linda Healthcare System, Loma Linda, California, and assistant professor of medicine, Loma Linda University School of Medicine, Loma Linda, California.

Dr. Kashner is associate director for program evaluation, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC, and professor, Department of Psychiatry, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas.

Dr. Gilman is director of advanced fellowships and professional development, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC, and clinical professor of health science, University of California, Irvine, Irvine, California.

Dr. Aron is associate chief of staff for education, Louis Stokes Cleveland VA Medical Center, Cleveland, Ohio, and professor of medicine, Case Western Reserve University School of Medicine, Cleveland, Ohio.

Dr. Cannon is associate chief of staff for education, George E. Wahlen VA Medical Center, Salt Lake City, Utah, and professor of medicine and Thomas E. and Rebecca D. Jeremy Presidential and Endowed Chair for Arthritis Research, University of Utah School of Medicine, Salt Lake City, Utah.

Dr. Chang is director, Medical and Dental Education, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC, and professor of medicine (emeritus), University of New Mexico School of Medicine, Albuquerque, New Mexico.

Dr. Godleski is associate chief of staff for education, VA Connecticut Healthcare System, New Haven, Connecticut, and associate professor of psychiatry, Yale School of Medicine, New Haven, Connecticut.

Dr. Golden is professor of cognitive science and engineering and program head, Undergraduate Cognition Science Program and Masters Program in Applied Cognition and Neuroscience, The School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas.

Mr. Henley is president, Martingale Research Corporation, Plano, Texas.

Dr. Holland is special assistant for policy and planning, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC.

Dr. Kaminetzky is associate chief of staff for education, Durham VA Medical Center, Durham, North Carolina, and assistant professor of medicine, Duke University School of Medicine, Durham, North Carolina.

Dr. Keitz is chief, Medical Service, Miami VA Medical Center, Miami, Florida, and associate dean, Miller School of Medicine, University of Miami, Miami, Florida.

Dr. Kirsh is associate professor of medicine, Case Western Reserve University School of Medicine, Cleveland, Ohio.

Dr. Muchmore is associate chief of staff for education, San Diego VA Medical Center, San Diego, California, and professor of clinical medicine and vice chair for education, Department of Medicine, University of California, San Diego, San Diego, California.

Ms. Wicker is research health science specialist, VA Loma Linda Healthcare System, Loma Linda, California.

Correspondence should be addressed to Dr. Byrne, Jerry L. Pettis Memorial VA Medical Center (14A), VA Loma Linda Healthcare System, 11201 Benton Street, Loma Linda, CA 92357; telephone: (909) 583-6004; fax: (909) 777-3828; e-mail: john.byrne3@va.gov.

Abstract

Purpose: To develop a survey instrument designed to quantify supervision by attending physicians in nonprocedural care and to assess the instrument's feasibility and reliability.

Method: In 2008, the Department of Veterans Affairs (VA) Office of Academic Affiliations convened an expert panel to adopt a working definition of attending supervision in nonprocedural patient care and to construct a survey to quantify it. The instrument's feasibility was field-tested on residents and their supervising attending physicians at primary care internal medicine clinics at the VA Loma Linda Healthcare System, in their encounters with randomly selected outpatients diagnosed with either major depressive disorder or diabetes. The authors assessed both interrater concurrent reliability and test–retest reliability.

Results: The expert panel adopted the VA's definition of resident supervision and developed the Resident Supervision Index (RSI) to measure supervision in terms of residents' case understanding, attending physicians' contributions to patient care through feedback to the resident, and attending physicians' time (minutes). The RSI was field-tested on 60 residents and 37 attending physicians for 148 supervision episodes from 143 patient encounters. Consent rates were 94% for residents and 97% for attending physicians; test–retest reliability intraclass correlations (ICCs) were 0.93 and 0.88, respectively. Concurrent reliability between residents' and attending physicians' reported time was an ICC of 0.69.

Conclusions: The RSI is a feasible and reliable measure of resident supervision that is intended for research studies in graduate medical education focusing on education outcomes, as well as studies assessing quality of care, patient health outcomes, care costs, and clinical workload.

The goal of graduate medical education (GME) is to produce competent, independent physicians who practice compassionate, safe, and effective care. One traditional approach used to achieve this goal is the apprenticeship or on-the-job-training model, through which residents provide clinical care under the oversight of a more senior physician.1 As the single most important aspect of this model, clinical supervision has been defined as “the provision of guidance and feedback on matters of personal, professional, and educational development in the context of a trainee's experience of providing safe and appropriate patient care.”2 It includes an assessment of overall professional growth,3 role modeling, mentoring, and clinical consultation.4 These elements have been captured by the definition of supervision offered by the Veterans Health Administration: “an intervention provided by a supervising practitioner to a resident . . . [that] is evaluative, [that] extends over time, and [that] has the simultaneous purposes of enhancing the professional functioning of the resident while monitoring the quality of professional services delivered . . . [and that] is exercised through observation, consultation, directing the learning of the resident, and role modeling.”5 Supervision is a unique intervention in GME that must address both the patient's and learner's needs in a specific clinical context3 and thus must serve the dual purpose of ensuring safe and effective care in the short term and contributing to the development of safe, effective, “self-regulating” independent practitioners in the long term.6

Therefore, clinical supervision in GME is a great concern not only for program directors, clinical supervisors, and residents7 but also for regulatory agencies,8,9 accrediting bodies,10,11 consumer groups,12 and the U.S. judicial system.13–15 Yet, little research has focused on how GME supervision should be defined conceptually or quantified scientifically.2,16 Moreover, concerns regarding patient safety have led to calls from the media and regulatory agencies to increase supervision.1,6 Although increased supervision has been shown to change clinical assessments, diagnoses, and treatment decisions17,18 and possibly to improve patient outcomes,16,19 supervision is not always accounted for in assessments of variations in the quality of resident-provided care20 or the education outcomes of resident training.16 Underscoring the importance of supervision in both patient care and resident education, Ulmer and colleagues,1 writing for the Institute of Medicine, and Fallon and colleagues21 have called for measurable standards of supervision. Currently, no valid, psychometrically tested instrument exists to measure supervision in nonprocedural settings.

Despite the need for quantifiable standards, supervision is principally measured by chart review of attending physician involvement.5,22 An exception to this model is surgical supervision, which is often assessed by “levels of supervision” that reflect an attending surgeon's involvement in procedural care.19,23 As part of the National Surgical Quality Improvement Program, the Department of Veterans Affairs (VA) developed a scale, assessed and recorded by operating room nurses, to measure physical presence and direct involvement in procedures. Attending physician supervision as measured on this scale has been correlated with improved outcomes.19 However, the intensity of procedural supervision, measured in terms of the attending physician's hands-on involvement or direction of the resident in a manual skill, does not necessarily adequately reflect the cognitive supervision that occurs in the assessment and management of patients in other settings. In this article, we report the recommendations from the VA's Expert Panel on Resident Supervision and describe the field-testing of its proposed Resident Supervision Index (RSI) for both feasibility and reliability.

Method

Content-valid RSI

The VA's Office of Academic Affiliations (OAA), under a grant from the Health Services Research and Development Service within VA's Office of Research and Development, assembled an Expert Panel on Resident Supervision. The panel's goals were to form by consensus a definition of resident supervision and to formulate a strategy for quantifying resident supervision through a resident-administered survey instrument, the RSI.

The expert panel consisted of five men and five women from eight universities across the United States whose expertise in clinical education covered the fields of general internal medicine, endocrinology, hematology, oncology, rheumatology, and psychiatry. The panel was supported by five advisors with expertise in mathematical and computational statistics, psychometrics, data management, and medical informatics. The panel also included OAA leadership.

The panel held seven scheduled conference calls during a two-month period beginning in March 2008; the schedule concluded with a face-to-face consensus meeting. Between scheduled calls, discussions were held on an e-mail forum made available to all panel members and advisors.

Feasibility and reliability

Once the expert panel reached consensus, the final version of the RSI was tested for feasibility and reliability at the VA Loma Linda Healthcare System. Four research assistants, chosen for their experience working in VA outpatient clinics, administered the RSI to both residents and attending physicians. Each research assistant took a three-hour course on how to administer the RSI, underwent a review of study goals, participated in practice encounters, and received an RSI instruction manual that contained case-based examples. Throughout data collection, the research assistants met weekly with the principal investigators and data managers to discuss and resolve data collection issues.

The RSI was designed to quantify the supervision provided by an attending physician to a resident physician for a specific patient-care encounter. At the start of each week of the study, we randomly selected 10 to 15 patients from a list of patients who had a diagnosis of diabetes or major depression and who had a scheduled appointment during the week with a resident physician. We obtained the list of patients from a computerized report generated from the VA's electronic medical record system, the Veterans Health Information Systems and Technology Architecture (VistA). We selected patients if they had an appointment in resident physician, primary care continuity, or ambulatory block rotation clinics. In the third month of the study (August 2008), we added two more clinics—vascular surgery and orthopedic surgery—to diversify the types of supervisory encounters. The study focused on diabetes and depression because of their high prevalence among VA patients and the complexity of the care required to manage these conditions. Study patients remained on a cohort list for four weeks. We developed a manual detailing these procedures and provided it to the research assistants.
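
To illustrate the weekly sampling step described above, the following is a minimal sketch in Python; the list of eligible patients, its field names, and the handling of the 10-to-15 target are stand-ins for the actual VistA report, which is not specified here.

```python
import random

def draw_weekly_cohort(eligible_patients, k_min=10, k_max=15, seed=None):
    """Randomly select 10 to 15 patients for the coming week's cohort list.

    `eligible_patients` is assumed to be a list of records (e.g., dicts with
    hypothetical keys such as 'patient_id', 'diagnosis', and 'clinic') drawn
    from the weekly report of scheduled resident-clinic appointments.
    """
    rng = random.Random(seed)
    k = min(rng.randint(k_min, k_max), len(eligible_patients))
    return rng.sample(eligible_patients, k)
```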

Research assistants obtained informed consent from both residents and attending physicians and completed a baseline questionnaire for each. The baseline questionnaire included demographic data, as well as information on undergraduate, medical school, and graduate medical education. At the end of their shift or workday, the participating residents were asked if they had had a supervisory encounter with any patient named on the patient cohort list. If so, the research assistants administered an RSI to both the resident and the attending physician for the supervision encounter. The RSI was readministered to both the resident and the attending physician, with a goal of readministration within 24 hours, either in person or by telephone; retesting could take place up to seven days after the initial encounter.

We determined feasibility from the rates of acceptance and withdrawal of informed consent and the rates of data capture. We assessed reliability by test–retest comparisons and by comparing responses from residents and attending physicians. Agreement was expressed in terms of Cohen kappa (κ),24 mean reported differences (bias), correlations, and intraclass correlation coefficients (ICCs) based on one-way random-effects models.25
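
For readers who wish to reproduce the agreement statistics, the sketch below implements Cohen kappa for paired categorical responses and the single-rater, one-way random-effects ICC (ICC(1,1) in Shrout and Fleiss's notation); the toy data are illustrative only and are not the study data.

```python
import numpy as np

def cohen_kappa(x, y):
    """Cohen kappa for two sets of categorical ratings of the same episodes."""
    x, y = np.asarray(x), np.asarray(y)
    cats = np.union1d(x, y)
    p_obs = np.mean(x == y)
    p_exp = sum(np.mean(x == c) * np.mean(y == c) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)

def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects, single-rater intraclass correlation.

    `ratings` is an (n_subjects x k_raters) array, e.g., test and retest
    minutes reported for each supervision episode.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy example: test vs. retest minutes for five episodes (illustrative only).
minutes = np.array([[30, 32], [45, 44], [20, 25], [60, 58], [35, 35]])
print(icc_oneway(minutes))
print(cohen_kappa([1, 1, 0, 1, 0], [1, 1, 0, 0, 0]))
```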

The study received approval from the VA Loma Linda Healthcare System Human Subjects Subcommittee and the Research and Development Committee. The study received a waiver for patients' informed consent because patient-related data were deidentified in the analysis.

Results

RSI survey instrument

The expert panel reached consensus on the RSI, which included five important concepts (Chart 1). In the first of these concepts, the intensity of supervision should be measured in terms of the attending physician's time (Chart 1, parts 1 and 2). The attending physician's time during the supervision encounter includes patient contact; directing the resident-provided care or speaking, asking, and answering questions; and making comments to the resident about the patient. Supervision time does not include the attending physician's provision of general education or clinical direction that is not specific to the study patient's care, or the attending physician's performance of administrative tasks, such as writing resident evaluations or searching for or making contact with the resident. In the second concept, supervision is viewed longitudinally, because it may involve preparations before patient encounters, as well as review of medical charts, test results, and treatment responses after patient encounters. Thus, to continuously capture all supervision encounters pertaining to a study patient, the RSI should be administered to residents at the end of each day or shift (because of its longitudinal nature, the second concept is not situated within a specific part of Chart 1). In the third concept, two types of supervision encounters were identified: resident–attending–patient (RAP) encounters occur when the resident, the attending physician, and the patient are all present; and resident–attending (RA) encounters occur when the resident and attending physician are present, but the patient is not (Chart 1, parts 1 and 2).

The expert panel also recommended that RA encounters be classified by mode of discussion, such as face-to-face, group versus individual, telephone, e-mail, text message, note left in patient's chart, or telemedicine or video conferencing. In addition, the panel recommended specifying the reason for the encounter, such as a general discussion of the case, review of the chart or test results, review of prior patient-care encounters, or patient-initiated contact, including telephone calls, e-mail, or letters. For RAP encounters, the attending physician's time was classified as time in contact with the patient, time discussing the encounter with the resident without patient contact, and time interacting directly with the patient when the resident was not present. When the resident, attending physician, and patient were present for the encounter, the attending physician's time was also classified by whether the resident was merely observing or was directly involved in patient care and by the location or actions of the attending physician, including participating in care, being in the room but not participating, being in the clinic area, being available only by phone or pager, or, alternatively, not being available. In the fourth concept, resident feedback is important and should include information on whether the resident's understanding of the case was enhanced and on the contribution by the attending physician to the patient's care (Chart 1, parts 3A and 3B, respectively). In the fifth concept, the expert panel suggested that supervision feedback include changes to or confirmation of resident-provided care with respect to patient history, examination findings, interpretation of diagnostic testing, diagnosis, assessment, or care plan (Chart 1, part 3).
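
As a concrete illustration of the data elements the panel recommended, the sketch below renders one supervision episode as a simple record; the field names and category labels are our own hypothetical shorthand for the concepts described above, not the actual wording of the RSI items.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RSIEpisode:
    """Illustrative record of one supervision episode (not the actual RSI form)."""
    encounter_type: str                           # "RAP" or "RA"
    discussion_mode: Optional[str] = None         # for RA: "face-to-face", "telephone", "e-mail", ...
    reason: Optional[str] = None                  # e.g., "case discussion", "chart/test review"
    minutes_discussion_without_patient: int = 0   # attending-resident discussion time
    minutes_attending_with_patient: int = 0       # attending time in contact with the patient
    minutes_attending_alone_with_patient: int = 0 # attending-patient time without the resident
    attending_availability: Optional[str] = None  # "participating", "in room", "in clinic area",
                                                  # "phone/pager only", or "unavailable"
    enhanced_case_understanding: bool = False     # did feedback enhance the resident's understanding?
    feedback_changed: List[str] = field(default_factory=list)  # e.g., ["assessment", "plan"]
```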

Feasibility

From June 9 through September 5, 2008, we invited a total of 80 residents who were rotating through the selected study clinics to participate in the study. Of these 80 residents, 75 (93.8%) gave informed consent and completed the study, 1 (1.3%) gave consent but later withdrew it, and 4 (5.0%) refused to provide consent. Participating residents were followed for a mean of 63 days (SD: 23; range: 2–88 days). All 38 attending physicians (100%) assigned to the study clinics provided informed consent; 1 later withdrew consent, which left 37 (97%) attending physicians who completed the study. Characteristics of the participating attending physicians and residents are shown in Table 1.

Among the eligible patients who met the study criteria, we randomly selected 143, for whom 148 supervision episodes occurred. The patients were cared for by 60 of the 75 residents and all 37 of the attending physicians.

The RSI was completed by 60 residents at the end of their clinical shift; the RSIs covered 145 (98.0%) of the 148 available supervision episodes identified for study. Including the test and retest for both residents and attending physicians, 547 RSI surveys were completed in 548 attempts. Residents' responses with regard to supervision intensity for the attending physicians' time (RSI item 2) and residents' feedback (RSI item 3) are provided in Table 2 and Table 3, respectively.

The data show wide variation in the amount of time that attending physicians spent with residents. During 140 (97%) of the 145 episodes, residents reported discussing the case with the attending physician for an average of 8 minutes per encounter, with a coefficient of variation (CV; σ/μ) of nearly 50% (Table 2). During 35 (24%) of those encounters, residents reported that the attending physician had participated directly in patient care for an average of 7.5 minutes, with a CV of 60%. All residents reported having direct contact with the patient while the attending physician was available, either in the clinic area (144 of 145) or by pager or phone (1 of 145). Residents also reported that feedback from discussions with attending physicians enhanced their case understanding in 130 (89.7%) of 145 encounters (Table 3). Feedback led to care changes in 47 (32.4%) cases, to plan changes in 45 (31%) cases, and to assessment changes in 16 (11%) cases; it led to changes in history, examination findings, diagnostic testing interpretation, or diagnosis in fewer than 6 (4%) cases each. All supervisory encounters between residents and attending physicians occurred while the patient was present in the clinic, and no supervisory encounters were recorded in this study without the patient present. Moreover, in no supervisory encounters did the attending physician spend time alone with the patient (i.e., without the resident present).

Test–retest reliability

ICCs and Pearson correlations (r) are reported for reported minutes; kappa agreement (κ) is reported for whether or not any time spent or contributions made by the attending physician were recorded on the RSI. Mean differences between tests and retests or between responses of residents and attending physicians are shown in Table 4.

We retested residents on 125 (86%) of 145 initial RSI tests within a mean of 1.4 days (SD: 1.3 days; range: 1.0 hours to 7.9 days). ICCs ranged from 0.85 to 0.95, reflecting overall agreement in reported minutes between test and retest. Tests agreed with retests on whether a discussion had occurred (κ = 0.75) and on whether feedback led to a care change (κ = 0.76). Differences between test and retest responses in mean reported minutes were small, amounting to less than 0.1% for overall time. In contrast, for the same 125 encounters, absolute error (absolute value of retest minus test) in reported total supervision time averaged 4.89 (SD: 8.76) minutes, or 13.3% of the mean reported 36.71 minutes. In 13 (10.6%) of 123 reportable cases (missing data excluded), residents were inconsistent in reporting whether feedback resulted in a care change (9 reported a change on retest only; 4 reported a change on the initial test only). Using mixed models to correct for resident nesting, we found that test–retest disagreement did not vary by the resident characteristics described in Table 1.
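
The nesting correction mentioned above can be sketched with standard mixed-model software. The example below, which assumes hypothetical column names and toy data rather than the study data set, fits a random intercept per resident so that residents contributing multiple episodes do not distort the apparent association between disagreement and resident characteristics; it illustrates the general approach rather than the authors' exact model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per retested supervision episode, with the
# absolute test-retest difference in reported minutes and a resident covariate.
df = pd.DataFrame({
    "abs_error_minutes": [2.0, 0.0, 5.0, 1.5, 10.0, 3.0, 4.0, 6.5],
    "pgy_level":         [1,   1,   2,   2,   3,    3,   1,   2],
    "resident_id":       ["r1", "r1", "r2", "r2", "r3", "r3", "r4", "r4"],
})

# A random intercept for each resident accounts for nesting (multiple episodes
# per resident); the fixed effect tests whether disagreement varies with PGY level.
model = smf.mixedlm("abs_error_minutes ~ pgy_level", data=df, groups=df["resident_id"])
result = model.fit()
print(result.summary())
```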

The 37 attending physicians completed an RSI for each of 143 episodes; retests on 132 (92%) of those episodes were administered within a mean of 1.4 days (SD: 1.5 days; range: 1.2 hours to 8.0 days) from the initial test. Test–retest agreement was comparable to that for the residents' reports (ICCs: 0.86–0.93; κ: 0.67–1.00). Compared with the residents, attending physicians reported 5.4% (P = .03) more supervision minutes on retest to describe direct patient contacts and 13.9% (P = .007) more clinical changes on retest to describe supervision feedback. Absolute error in reporting total supervision time averaged 5.38 (SD: 6.56) minutes per episode, or 13.8% of the total 38.95 supervision minutes reported by the attending physician. Absolute error varied with the attending physician's gender, but not with the gender of the resident. Female attending physicians had nearly twice the error of their male counterparts (4.85 minutes; 95% CI: 2.02, 7.68; t = 3.4; P = .001). Error was also larger when retests were conducted more than 24 hours after the test (2.54 minutes; 95% CI: 0.82, 4.97; t = 2.1; P = .043). In 21 (16.4%) of 128 reportable cases, the attending physicians changed their answers about whether supervision discussions had changed care (14 reported care changes on retest only; 7 reported care changes on the initial test only). This error rate is high, but there was little evidence that the error varied with the interval of time between test and retest or with the attending physician characteristics described in Table 1.

We compared reported minutes and changes in care between residents and attending physicians for the 140 episodes in which both the resident and the attending physician provided responses for the encounter. These concurrent reliability estimates were generally lower than the test–retest reliability estimates (Table 4). ICCs varied between 0.56 and 0.84. Differences in reported minutes, however, were relatively small: Attending physicians reported a nominal 3% more minutes (95% CI: −4%, 9.3%) for overall supervision and less than 2% more minutes (95% CI: −6%, 9%) for resident patient-care contact.
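
The relative bias reported here (attending versus resident minutes) can be summarized as a mean per-episode percent difference with a t-based confidence interval; the sketch below shows one reasonable way to compute such an estimate and is not necessarily the exact calculation used in the study.

```python
import numpy as np
from scipy import stats

def percent_difference_ci(attending_minutes, resident_minutes, alpha=0.05):
    """Mean percent difference in reported minutes (attending relative to resident)
    across paired episodes, with a t-based confidence interval."""
    a = np.asarray(attending_minutes, dtype=float)
    r = np.asarray(resident_minutes, dtype=float)
    pct = 100.0 * (a - r) / r          # per-episode percent difference
    mean = pct.mean()
    half = stats.t.ppf(1 - alpha / 2, len(pct) - 1) * stats.sem(pct)
    return mean, (mean - half, mean + half)

# Toy usage with paired minute reports for five episodes (illustrative only).
print(percent_difference_ci([40, 35, 50, 20, 30], [38, 36, 45, 22, 28]))
```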

Discussion

In this study, we worked to develop an instrument to quantify supervision and to test its psychometric properties in a nonprocedural clinical setting. An expert panel recommended that measurement of supervision intensity with the RSI should include items for the attending physician's physical presence, contribution to patient care, contribution to residents' understanding of the patient's case, and the amount of time spent in supervision. By administering and testing the RSI in primary care clinics with both resident and attending physicians, we found that it is feasible and has good psychometric properties. Other instruments have been developed to assess the quality of or satisfaction with resident supervision17,26 or to evaluate supervision as one measure of satisfaction with the learning environment,27,28 but the RSI is the first tool developed to quantitatively measure supervision in nonprocedural patient care.

As recommended in the Institute of Medicine's recent report, “Resident Duty Hours: Enhancing Sleep, Safety and Supervision,”1 measurable standards are needed to better understand the central role of supervision in GME. Although a few studies have suggested that patient outcomes are inversely related to supervision17,21 and that increased supervision is associated with improved educational outcomes,29,30 those studies lacked an explicit measurement of supervision. Furthermore, the effect of supervision on costs has received little study,31 and measurements of attending physicians' work in clinical supervision are limited to chart documentation.32 We believe that quantifying supervision increases the opportunity to scientifically evaluate and guide its role in ensuring patient safety, enhancing clinical outcomes, promoting professional growth, and determining a teaching facility's clinical workload and costs.

In the absence of quantifiable measurements of supervision, the expert panel's recommendation to use time to quantify supervision is not only intuitively appropriate but is also supported by previous work. In fact, supervision time is integral to the GME clinical learning model in which inexperienced residents require more supervision to ensure effective and safe patient care, and more experienced residents require more autonomy to further enhance their professional development. Several studies illustrate the importance of supervisory time.33–35 Xakellis and Gjerde33 found that the amount of teaching time was greatest for first-year residents in outpatient clinics, both in terms of the frequency of consultation with the attending physician and in terms of the time spent in consultation. In addition, in a comprehensive review of teaching in the ambulatory care setting, Irby34 found that the duration of interactions between residents and attending physicians in outpatient clinics decreased with increases in the levels of training. Finally, Griffith and colleagues35 found a decrease in test ordering when attending physicians were available and spent more time in supervision.

On its own, the amount of time spent is an incomplete measure of supervision intensity. In fact, one attending physician could potentially spend a great deal of time supervising without changing patient care or enhancing resident education, while another may spend very little time supervising but have a profound effect. Thus, in addition to minutes, the expert panel identified attending physician contact with the patient and feedback about the resident's findings, assessment, and plan as important components of quantifying supervision. The panel's position is consistent with prior studies showing that attending physicians' direct contact with patients often changes residents' findings and care plans.17,36 In addition to its effects on patient care, attending physicians' direct contact with patients has been found to contribute to residents' education.18 Furthermore, previous work showed that attending physician feedback need not change the resident's assessment and plan in order to contribute to education. In fact, confirmation of the resident's assessment of the patient is also perceived by residents to have educational value.18,37

The RSI has high test–retest reliability for supervision time (minutes) and the physical presence of the attending physician but a lower concurrent reliability between residents and attending physicians for supervision time, changes to patient care, and contributions to case understanding. The lower concurrent reliability for total minutes of supervision, which include the time the resident spent with the patient, may reflect the attending physicians' uncertainty about the amount of time that residents spend alone with patients and suggests that residents may provide the most reliable assessment of supervision time. However, the low ICCs between residents' and attending physicians' reporting of the effect of supervision on resident-provided care or on the contribution to resident understanding of the patient's case are more striking. Despite the lower concurrent reliability on the RSI patient-care items, the supervised resident physicians in our study reported changes in the assessment and plan in more than 30% of cases, a rate that is similar to that reported in other studies.18,36 Furthermore, resident physicians reported that attending physicians contributed to case understanding in nearly 90% of the cases and reported no contribution in only 12 supervisory encounters. In contrast to findings in a previous study,18 attending physicians in our study reported more frequently than did residents that the attendings had contributed to case understanding (95% and 90%, respectively). Cyran and colleagues18 noted that residents were more likely than were attending physicians to report attendings' contributions to teaching, diagnosis, and treatment decisions. Those authors also found that residents' gender and training level were correlated with these differences. Further analyses of our data, including correlations with other attending and resident characteristics, may shed light on these relationships.

The complexity of communication in case presentations also may have contributed to our findings. For example, perhaps resident physicians considered the alternative diagnoses or treatments presented by the attending physicians but were reluctant to discuss them because of uncertainty, fear of appearing incompetent, or deference to the attending physician. Therefore, differences in residents' and attending physicians' perceptions of the attendings' contributions may be due to gaps in communication. Others have shown that case presentations are hampered by a lack of transparency, residents' desire to avoid appearing uncertain, residents' desire for professional credibility and autonomy, differing perceptions of the purpose of oral presentations, and different types of communication used by trainees and faculty.38–43 Ende and colleagues44 showed that attending physicians mitigate their correction of interns to preserve the interns' self-esteem and level of responsibility. Instead, attending physicians provide clues and allow second chances, approaches that may lead to some ambiguity in the feedback regarding the patient's care.44 As the science of supervision advances, more research is needed to fully understand communication in supervisory encounters and its effect on patient care.

Our study has some limitations. First, it was conducted in one institution in a limited number of clinics and with a select group of patients. Second, the use of research assistants is a labor-intensive means of collecting supervision data that may not be practical in broad application of the tool. Third, we did not use independent observers to verify the amount of time spent in supervision. Fourth, assessing the length of resident encounters at the end of clinics or shifts and again in 24 hours may have decreased the accuracy of the estimates. However, despite the 24-hour retest interval that might bias against reliability, we were able to demonstrate high intraclass correlations on test–retest.

Conclusions

This study addresses the substantial need for evaluation of the supervision of residents and shows that it is possible to quantify resident supervision. The data provided here show that the RSI is an instrument with a high potential for successful measurement of resident supervision. Recognizing that supervision is a complex clinical interaction and acknowledging the inherent challenges of linking medical education with outcomes,45,46 we posit that the establishment of the RSI as a feasible and reliable instrument is a first step in studying the central role that supervision plays in GME. An enhanced understanding of supervision has the potential to benefit regulatory bodies, GME institutions and leaders, clinical teachers, residents, and, most important, patients, by improving the quality of the care that they receive. Future studies should compare RSI responses with direct observations to assess concurrent validity; develop scoring algorithms to summarize time and feedback data; explore alternative means of administering the RSI, such as through an electronic health record; and examine construct validity by comparing supervision intensity with resident-provided quality of care, patient health outcomes, education outcomes, and clinical workload.

Funding/Support:

This work was supported by grant no. SHP08-164 from the Department of Veterans Affairs, Office of Research and Development, Health Services Research and Development, and by grant no. R44CA139607 from the Small Business Innovation Research (SBIR) program of the National Cancer Institute (NCI).

Other disclosures:

None.

Ethical approval:

The Human Subjects Subcommittee and the Research and Development Committee of the VA Loma Linda Healthcare System provided ethical approval for this study.

Disclaimers:

The statements and descriptions expressed herein do not necessarily reflect the opinions or positions of the Department of Veterans Affairs or of the National Institutes of Health, Department of Health and Human Services.

References

1 Ulmer C, Wolman DM, Johns MME; Committee on Optimizing Graduate Medical Trainee (Resident) Hours and Work Schedule to Improve Patient Safety, National Research Council. Resident Duty Hours: Enhancing Sleep, Safety and Supervision. Washington, DC: Institute of Medicine of the National Academies, National Academy Press; 2008.

2 Kilminster S, Cottrell D, Grant J, Jolly B. AMEE Guide 27: Effective educational and clinical supervision. Med Teach. 2007;29:2–19.

3 Bernard JM, Goodyear RK. Fundamentals of Clinical Supervision. 4th ed. Columbus, Ohio: Pearson; 2009.

4 Hewson MG, Jensen NM. An inventory to improve clinical teaching in the general internal medicine clinic. Med Educ. 1990;24:518–527.

5 Department of Veterans Affairs, Veterans Health Administration. Resident Supervision: VHA Handbook 1400.1. Available at: http://www.va.gov/oaa/VHA_Handbook_14001_html.asp. Accessed January 13, 2010.

6 Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: A tradition worth defending? Acad Med. 2005;80(10 suppl):S106–S111.

7 Shojania KG, Fletcher KE, Saint S. Graduate medical education and patient safety: A busy—and occasionally hazardous—intersection. Ann Intern Med. 2006;145:592–598.

8 Evans L. Regulatory and legislative attempts at limiting medical resident work hours. J Leg Med. 2002;23:251–267.

9 Steinbrook R. The debate over residents' work hours. N Engl J Med. 2002;347:1296–1302.

10 Philibert I, Friedmann P, Williams WT; ACGME Work Group on Resident Duty Hours. New requirements for resident duty hours. JAMA. 2002;288:1112–1114.

11 Accreditation Council for Graduate Medical Education. Duty Hours Language. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_Lang703.pdf. Accessed January 13, 2010.

12 Gurjala A, Lurie P, Haroona L, et al. Petition to the Occupational Safety and Health Administration requesting that limits be placed on hours worked by medical residents (HRG Publication #1570). Available at: http://www.citizen.org/publications/release.cfm?ID=6771. Accessed January 13, 2010.

13 Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: A study of closed malpractice claims from five insurers. Arch Intern Med. 2007;167:2030–2036.

14 Asch DA, Parker RM. The Libby Zion case. N Engl J Med. 1988;318:771–775.

15 Feinstein AR. System, supervision, standards, and the epidemic of negligent medical errors. Arch Intern Med. 1997;157(12):1285–1289.

16 Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: A literature review. Med Educ. 2000;34:827–840.

17 Gennis VM, Gennis MA. Supervision in the outpatient clinic: Effects on teaching and patient care. J Gen Intern Med. 1993;8:378–380.

18 Cyran EM, Albertson G, Schilling LM, et al. What do attending physicians contribute in a house officer-based ambulatory continuity clinic? J Gen Intern Med. 2006;21:435–439.

19 Itani KMF, DePalma RG, Schifftner T, et al. Surgical resident supervision in the operating room and outcomes of care in Veterans Affairs hospitals. Am J Surg. 2005;190:725–731.

20 Mladenovic J, Shea JA, Duffy FD, Lynn LA, Holmboe ES, Lipner RS. Variation in internal medicine residency clinic practices: Assessing practice environments and quality of care. J Gen Intern Med. 2008;23:914–920.

21 Fallon WF Jr, Wears RL, Tepas JJ. Resident supervision in the operating room: Does this impact on outcome? J Trauma. 1993;35:556–560.

22 Association of American Medical Colleges. Medicare's Teaching Physician Documentation Instructions. Available at: http://www.aamc.org/advocacy/library/teachphys/mtpdi.pdf. Accessed January 13, 2010.

23 Chang BK. Resident supervision in VA teaching hospitals. ACGME Bull. September 2005:12–13.

24 Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.

25 Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86:420–427.

26 de Oliveira Filho GR, Dal Mago AJ, Soares Garcia JH, Goldschmidt R. An instrument designed for faculty supervision evaluation by anesthesia residents and its psychometric properties. Anesth Analg. 2008;107:1316–1322.

27 Byrne JM, Loo LK, Giang D. Monitoring and improving resident work environment across affiliated hospitals: A call for a national resident survey. Acad Med. 2009;84:199–205.

28 Cannon GW, Keitz SA, Holland GJ, et al. Factors determining medical student and resident satisfaction during VA training: Results from the Learners' Perceptions Survey. Acad Med. 2008;83:611–620.

29 Huang GC, Smith CC, Gordon CE, et al. Beyond the comfort zone: Residents assess their comfort performing inpatient medical procedures. Am J Med. 2006;119:71.e17–e24.

30 Osborn LM, Sargent JR, Williams SD. Effects of time-in-clinic, clinic setting, and faculty supervision on the continuity clinic experience. Pediatrics. 1993;91:1089–1093.

31 Feinglass J, Schroeder J, Martin G, Wallace W, Lyons J. The relationship of residents' autonomy and use of a teaching hospital's resources. Acad Med. 1991;66:549–552.

32 Coleman DL, Moran E, Serfilippi D, et al. Measuring physicians' productivity in a Veterans Affairs medical center. Acad Med. 2003;78:682–689.

33 Xakellis GC, Gjerde CL. Ambulatory medical education: Teachers' activities, teaching cost and residents' satisfaction. Acad Med. 1995;70:702–707.

34 Irby DM. Teaching and learning in ambulatory care settings: A thematic review of the literature. Acad Med. 1995;70:898–931.

35 Griffith CH, Desai NS, Wilson JF, Griffith EA, Powell KJ, Rich EC. Housestaff experience, workload, and test ordering in a neonatal intensive care unit. Acad Med. 1996;71:1106–1108.

36 Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: Grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:b128. doi: 10.1136/bmj.b128.

37 Sacchetti A, Carraccio C, Harris RH. Resident management of emergency department patients: Is closer attending supervision needed? Ann Emerg Med. 1992;21:749–752.

38 Laidley TL, Braddock CH, Fihn SD. Did I answer your question? Attending physicians' recognition of residents' perceived learning needs in ambulatory settings. J Gen Intern Med. 2000;15:46–50.

39 Paukert JL. When residents talk and teachers listen: A communication analysis. Acad Med. 2000;75(10 suppl):S65–S67.

40 Caldicott CV. What's wrong with this medical student today? Dysfluency on inpatient rounds. Ann Intern Med. 1998;128:607–610.

41 Anspach RR. Notes on the sociology of medical discourse: The language of case presentation. J Health Soc Behav. 1988;29:357–375.

42 Haber RJ, Lingard LA. Learning oral presentation skills: A rhetorical analysis with pedagogical and professional implications. J Gen Intern Med. 2001;16:308–314.

43 Lingard L, Schryer C, Garwood K, Spafford M. ‘Talking the talk’: School and workplace genre tension in clerkship case presentations. Med Educ. 2003;37:612–620.

44 Ende J, Pomerantz A, Erickson F. Preceptors' strategies for correcting residents in an ambulatory care medicine setting: A qualitative analysis. Acad Med. 1995;70:224–229.

45 Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.

46 Bordage G, Burack JH, Irby DM, Stritter FT. Education in ambulatory settings: Developing valid measures of educational outcomes, and other research priorities. Acad Med. 1998;73:743–750.

© 2010 Association of American Medical Colleges
