Academic Medicine, November 2001 - Volume 76 - Issue 11
Educating Physicians: Research Reports

A Clinical Performance Exercise for Medicine-Pediatrics Residents Emphasizing Complex Psychosocial Skills

Duke, Mary B. MD; Griffith, Charles H. III MD, MSPH; Haist, Steven A. MD, MS; Wilson, John F. PhD

Author Information

Dr. Duke is associate professor, Dr. Griffith is associate professor, and Dr. Haist is professor, Division of General Internal Medicine, and Dr. Wilson is professor, Department of Behavioral Science. All are at the University of Kentucky College of Medicine, Lexington.

Correspondence and requests for reprints should be addressed to Dr. Duke, K512 Kentucky Clinic, University of Kentucky, 740 S. Limestone, Lexington, KY 40536-0284; e-mail: mbduke2@uky.edu.

This project was supported in part by a Residency Training in Internal Medicine/Pediatrics grant from the Health Resources and Services Administration; the contents do not necessarily reflect the opinions or policies of the Health Resources and Services Administration.

Abstract

Purpose. To assess the skills of internal medicine-pediatrics (med-peds) residents in evaluating and counseling patients with complex psychosocial problems using a clinical performance exercise (CPE).

Method. The authors designed a 13-station CPE [nine standardized-patient (SP) stations and four non-SP stations]. Eight of the SP stations focused on counseling or assessing complex psychosocial needs, and three were videotaped and analyzed for specific verbal and nonverbal communication skills. Residents completed a written task for each station; SPs completed a checklist on interviewing and communication skills and a 52-item patient's-satisfaction survey. All first- and third-year residents (n = 25) from two academic years participated.

Results. The range of the average scores on the nine SP stations was 43–75%. The residents performed better with common problems (newborn hospital discharge instructions and cardiac risk-factor counseling) than with more complex problems that are less often encountered in the institution (HIV counseling), or problems less often recognized (adult survivor of childhood sexual abuse). As expected, third-year residents scored better than did first-year residents on the written “plan” part of the SP stations and on the non-SP stations. Third-year and first-year residents had similar scores, however, on measures of verbal and nonverbal communication and patient's satisfaction, and for gathering data and providing information.

Conclusion. This is the first performance-based evaluation of residents in a combined med-peds residency program. The stations addressed more complex clinical skills than those reported for objective structured clinical evaluations of residents.

In a recent survey, graduates of our combined internal medicine-pediatrics (med-peds) residency program indicated that during residency they had been prepared (and perhaps overprepared) to provide care to patients in the intensive care unit and to hospitalized patients generally. In contrast, the graduates believed they were underprepared to care for ambulatory patients, especially to counsel and evaluate patients with complex psychosocial problems.1 This phenomenon is not unique to our institution, but appears to be common in university-based residency training programs.2 Because residents are seldom directly observed interviewing or counseling patients, our ability to address this curricular deficiency or to give residents specific, individualized feedback regarding their strengths or weaknesses is limited.

To address the residents' perceptions of their lack of preparedness for managing psychosocial issues, we designed and implemented a clinical performance exercise (CPE) for med-peds residents that emphasized complex psychosocial and ambulatory skills. CPEs are widely used to evaluate medical students, and they have been used to evaluate residents in internal medicine,3,4 pediatrics,5 surgery,6 and family practice.7 To our knowledge, no interdisciplinary performance-based evaluation has been reported for residents in a combined med-peds residency program. Performance-based exercises for residents generally follow an objective structured clinical evaluation (OSCE) format, in which examinees perform specific, defined tasks.4,5,6,7 The stations typically involve common, straightforward medical conditions or tasks (e.g., chest pain history, ophthalmologic examination) and are limited in length to five to ten minutes. Longer (20–30 minute) performance-based exercises generally focus on common basic clinical conditions.3 In contrast, we were interested in evaluating residents' skills in managing more complex clinical encounters, especially those involving psychosocial issues, an area our needs assessment identified as a curricular weakness.

METHOD

We designed and implemented a 13-station CPE. Each station was 19 minutes in length, with one minute between stations. Nine of the stations involved a standardized patient (SP). Chart 1 lists the SP stations by presenting problem, diagnosis, primary discipline, gender of SP, and number of items on the checklist. Eight of these nine SP stations focused on either counseling or assessing complex psychosocial needs, and the ninth station required a focused history and physical examination of a common problem (chest pain/acute myocardial infarction) to prevent cueing residents to psychosocial problems. The residents had 14 minutes for the patient encounter, and they were responsible for pacing the encounter.

Chart 1. SP stations by presenting problem, diagnosis, primary discipline, gender of SP, and number of checklist items.

In the remaining five minutes of the CPE, the residents completed a written task related to the SP station (e.g., “write your assessment and plan for this patient”). During this time the SP also completed a detailed checklist of items. The checklists were developed by a panel of faculty experts and contained items thought to be important in addressing each problem, as well as items related to general interviewing and communication skills. Items were scored in a yes/no format. The SPs also rated the residents using a 52-question survey on general bedside manner, which consisted of six sections: (1) physician's attributes (e.g., attentive, bored, caring), (2) physician's behavior (e.g., angry/irritated, sympathetic/kind), (3) patient's satisfaction (e.g., “how satisfied were you with the visit?”), (4) physician's tone of voice (e.g., angry/irritated versus friendly/warm), (5) physician's nonverbal expressivity (e.g., volume of voice, very quiet versus very loud), and (6) physician's nonverbal physical behaviors (e.g., tense hands/fists, open or closed body posture).

The SPs had been trained for two to eight hours per station. Training time was dependent upon the SP's previous experience and the complexity of the case. Reliability testing was performed at the end of training to achieve at least a 90% agreement with two different SP trainers. Three stations were videotaped: an adult survivor of sexual abuse (“trouble sleeping”); domestic violence (“double vision/fall”); and telling bad news to the parent of a child newly diagnosed as having cystic fibrosis (“hospital follow-up pneumonia”). The four non-SP stations were: three adult and pediatric electrocardiograms (ECGs), ten pediatric and adult x-rays, 12 kodachromes of dermatologic disorders of adults and children, and a station of ten laboratory values or slides to interpret (e.g., arterial blood gases, Gram stains, blood smears).
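
As a rough illustration of the reliability criterion above, the following sketch computes simple percent agreement between an SP's yes/no checklist marks and a trainer's marks for the same rehearsal encounter. The data and the percent-agreement calculation are hypothetical assumptions for illustration; the paper specifies only that training continued until at least 90% agreement with two different SP trainers was achieved.

    # Hypothetical sketch (Python): percent agreement between an SP's yes/no checklist
    # marks and a trainer's marks; the paper states only a >= 90% agreement target.
    def percent_agreement(sp_marks, trainer_marks):
        """Return the fraction of checklist items on which the two raters agree."""
        if len(sp_marks) != len(trainer_marks):
            raise ValueError("Checklists must have the same number of items")
        matches = sum(a == b for a, b in zip(sp_marks, trainer_marks))
        return matches / len(sp_marks)

    # Example: a 20-item checklist with two disagreements -> 90%, meeting the threshold.
    sp_marks = [1, 1, 0, 1] * 5
    trainer_marks = list(sp_marks)
    trainer_marks[3] = 1 - trainer_marks[3]
    trainer_marks[10] = 1 - trainer_marks[10]
    print(f"Agreement: {percent_agreement(sp_marks, trainer_marks):.0%}")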

The examinations were administered on one Saturday each in the fall of 1997 and the fall of 1998 to all first- and third-year med-peds residents in one medical center. We chose to test first-year residents to collect information about early-residency abilities and needs. The third-year residents were chosen for comparison with the less experienced trainees and because any identified deficiencies could be addressed in their final one and a half years of training. Across the two administrations of the CPE, 12 first-year and 13 third-year residents (n = 25) participated; 11 of the 12 first-year residents and all 13 of the third-year residents completed all of the stations, and one first-year resident in 1997 missed two stations because of illness.

Each item on each checklist was classified as representing a particular skill: (1) introduction, (2) chief complaint, (3) history, (4) physical examination (some stations did not include a physical examination), (5) counseling, (6) communication, or (7) interpersonal skills. Scores were generated for each resident by station and by each of the seven skills. Non-SP stations were scored simply by the percentage of correct items. Scores were also compared by year of training to assess criterion-based validity. Simple descriptive statistics, Pearson correlations, t-tests, and chi-square analyses were used to analyze the data. Feedback was provided both to individuals and to each group of residents taking the examination.
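
As a minimal sketch of the scoring and analysis just described, the code below aggregates hypothetical yes/no checklist items into per-skill percentage scores for each resident and compares first- and third-year residents with an independent-samples t-test (one of the analyses the paper reports using). The data, resident identifiers, and grouping shown here are invented for illustration; the paper does not publish its analysis code.

    # Hypothetical sketch (Python): per-skill percentage scores from yes/no checklist
    # items, then a t-test comparing first- and third-year residents on one skill.
    from collections import defaultdict
    from scipy import stats

    # One row per checklist item: (resident_id, training_year, skill_category, scored_yes).
    # The seven skill categories in the paper: introduction, chief complaint, history,
    # physical examination, counseling, communication, interpersonal skills.
    checklist_items = [
        ("r01", 1, "counseling", 1), ("r01", 1, "counseling", 0),
        ("r02", 1, "counseling", 1), ("r02", 1, "counseling", 0), ("r02", 1, "counseling", 0),
        ("r03", 3, "counseling", 1), ("r03", 3, "counseling", 1),
        ("r04", 3, "counseling", 1), ("r04", 3, "counseling", 1), ("r04", 3, "counseling", 0),
    ]

    def skill_scores(items):
        """Return {(resident, year, skill): percentage of items scored yes}."""
        totals, correct = defaultdict(int), defaultdict(int)
        for resident, year, skill, scored_yes in items:
            key = (resident, year, skill)
            totals[key] += 1
            correct[key] += scored_yes
        return {k: 100.0 * correct[k] / totals[k] for k in totals}

    scores = skill_scores(checklist_items)

    # Criterion-based validity check: compare counseling scores by year of training.
    pgy1 = [v for (_, year, skill), v in scores.items() if year == 1 and skill == "counseling"]
    pgy3 = [v for (_, year, skill), v in scores.items() if year == 3 and skill == "counseling"]
    t_stat, p_value = stats.ttest_ind(pgy1, pgy3)
    print(f"Counseling: PGY-1 mean {sum(pgy1)/len(pgy1):.0f}%, "
          f"PGY-3 mean {sum(pgy3)/len(pgy3):.0f}%, p = {p_value:.3f}")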

RESULTS

The range of average scores on the nine SP stations was 43–75% (see Table 1). As a group, the residents did better on counseling for relatively common problems (newborn hospital discharge instructions, cardiac risk factor counseling) than they did on counseling for more complex situations that are less often encountered in our institution (HIV counseling) or assessing problems less frequently recognized (adult survivor of childhood sexual abuse).

Table 1. Average scores on the nine SP stations.

Variations between items on individual stations were illuminating. For example, the residents' overall performance on the station of an 8-year-old with abdominal pain and school refusal was generally good, with the residents taking detailed histories and performing adequate examinations. However, items relevant to counseling on that station were not performed well. Only eight of the residents specifically counseled the mother to keep the child, who obviously had non-organic abdominal pain, in school, arguably the most important piece of information to give a parent in that situation. In the station of an adult survivor of childhood sexual abuse who was experiencing insomnia and depression, all but one of the residents recognized depressive symptoms, but fewer than half identified or inquired into the possibility of childhood sexual abuse. In general, the residents recognized domestic violence, but few were able to formulate a safety plan for the victim.

First- and third-year residents scored similarly on the checklist's overall performance item and on the written assessment sections of the stations (70% correct versus 74%, respectively; p = NS). With regard to the “plan” part of the written exercises, however, the first-year residents included 43% of the important items, while the third-year residents included 70% (p < .0001). Overall, on the written sections, the first-year residents wrote 62% of the essential items (SD 3.8, range 58–68), versus 73% for the third-year residents (SD 4.2, range 68–79; p < .0001).

The residents were uniformly rated highly for communication and interviewing skills, and those results are not presented here. Patient's satisfaction as rated by the SPs on a seven-point scale was generally high for all residents (5.3, SD 0.7). No resident was uniformly rated low on patient's satisfaction. There was no difference between third-year residents and first-year residents in their communication skills or their ratings on the patient's-satisfaction item. The residents as a whole did well on the ECG, pediatric x-ray, and dermatology kodachrome interpretations, but they correctly interpreted only 50% (SD = 21) of the adult x-rays. The residents also performed poorly on interpretation of arterial blood gases (58%, SD = 19) and interpretation of Gram stain/peripheral blood smear (42%, SD = 24). The third-year residents performed significantly better than did the first-year residents on all non-SP stations and substations (overall non-SP station score 57% for first-year residents, 68% for third-year, p < .0001).

DISCUSSION

We successfully designed and implemented a 13-station CPE for residents in our combined med-peds residency program. All residents participated, and they seemed to value the feedback in areas of individual strengths and weaknesses. The CPE was extremely helpful in identifying specific curricular deficiencies of our med-peds residency program.

Our residents' checklist performances were similar to other reported SP-examination checklist performances,4,5 and to those reported for SPs' assessments of practicing clinicians.8 Faculty wishing to undertake a similar evaluation should remember that checklists are often constructed to include all potentially important items, perhaps representing the ideal; in reality, an adequate doctor-patient encounter need not exhaustively cover every item on a checklist.

Interestingly, the first- and third-year residents scored similarly on the overall checklist item across all stations and in the percentage correct for patients' assessments. In contrast, other studies have shown improvements by year of training for pediatrics residents,5 internal medicine residents,3 and surgical residents.6 Since we were testing residents on more complex psychosocial problems than have been reported for other SP examinations, we do not believe this represents a lack of criterion-based validity. For examinations focused on more straightforward “medical” problems, one would expect improved performances from residents with more training, as “medical” knowledge improves throughout residency. Since we focused our examinations on perceived curricular deficiencies, our third-year residents' performances may reflect the persistence of this deficiency, and one would not expect an improvement in these areas with our current curriculum. Additionally, with more complex cases involving psychosocial issues, improvement may not be expected until several years after training has been completed.

As in most studies, communication and interpersonal skills were rated highly. This may reflect innate characteristics in residents (warmth, empathy, and politeness) that remain stable over time, although, distressingly, some studies note a decline in residents' interpersonal skills with training.5 Likewise, satisfaction as rated by the SPs was uniformly high and did not differ between first- and third-year residents. There is evidence that patient's satisfaction as determined in SP exercises may predict real patients' satisfaction with a resident,10 so our CPE may be an indirect measure of patient's satisfaction. Because other studies have documented a lack of improvement in communication and interpersonal skills with training,5 the absence of a difference between our first- and third-year residents is not surprising. Studies that note improvements in communication skills generally have counted counseling items as communication skills, which may reflect knowledge content more than communication skill.9 In contrast, most of our communication items addressed general communication skills (e.g., use of open-ended questions). Other limitations of our study include its restriction to one institution and to a small number of residents, which may limit generalizability and the power to discern differences in the performances of first- and third-year residents.

Feedback was provided both individually and as a group to the participating residents. First, we convened the entire group of residents and presented to them the overall results of the group's performance, emphasizing group strengths and weaknesses that might suggest specific curricular deficiencies. Midway through this meeting, residents were provided folders with their individual results on station performance, including copies of all individual checklist items for review. Each resident then met individually with the med-peds program director (MD) to discuss his or her CPE performance. Residents were also supplied a copy of their videotaped sessions for review at home prior to their meeting with the program director.

Constraints of such an examination included the logistics of covering the residents' clinical duties during the CPE; coordinating coverage across two disciplines and finding a time when all residents could convene for feedback was challenging. Holding the CPE on a Saturday freed many residents from clinical duties, but at the cost of some residents' being displeased with the CPE's encroaching on their weekend. However, since this test occurred once a year, and the residents perceived the results of the CPE to be beneficial to the program, all of the residents in 1997 and 1998 participated. In conclusion, our residents and faculty were pleased with the information obtained from the CPE, both for the overall curriculum and for the individual residents.

REFERENCES

1. Burke MM, Haist SA, Griffith CH, Wilson JF, Mitchell CK. Preparation for practice: a survey of med/peds graduates. Teach Learn Med. 1999;11:80–4.

2. Camp BW, Gitterman B, Headley R, Ball V. Pediatric residency as preparation for primary care medicine. Arch Pediatr Adolesc Med. 1997;151:78–83.

3. Stillman PL, Swanson DB, Smee S, et al. Assessing clinical skills of residents with standardized patients. Ann Intern Med. 1986;105:762–77.

4. Dupras DM, Li JTC. Use of an objective structured clinical examination to determine clinical competence. Acad Med. 1995;70:1029–34.

5. Lane JL, Ziv A, Boulet JR. A pediatric clinical skills assessment using children as standardized patients. Arch Pediatr Adolesc Med. 1999;153:637–44.

6. Sloan DA, Donnelly MB, Johnson SB, Schwartz RW, Strodel WE. Use of an objective structured clinical examination (OSCE) to measure improvement in clinical competence during the surgical internship. Surgery. 1993;114:343–57.

7. Skinner BD, Newton WP, Curtis P. The educational value of an OSCE in a family practice residency. Acad Med. 1997;72:722–4.

8. Ramsey PG, Curtis JR, Paauw DS, Carline JD, Wenrich MD. History-taking and preventive medicine skills among primary care physicians: an assessment using standardized patients. Am J Med. 1998;104:152–8.

9. Gordon JJ, Saunders NA, Hennrikus D, Sanson-Fisher RW. Interns' performance with simulated patients at the beginning and the end of the intern year. J Gen Intern Med. 1992;7:57–62.

10. Tamblyn R, Abrahamowicz M, Schnarch B, Colliver JA, Benaroya S, Snell L. Can standardized patients predict real-patient satisfaction with the doctor-patient relationship? Teach Learn Med. 1994;6:36–47.

© 2001 Association of American Medical Colleges
