The past two decades have seen teaching and learning on inpatient rotations challenged in several ways, including shorter length of hospital stays despite greater illness severity; a focus on quality metrics and outcomes, as time-consuming as they are appropriate; a reduction in residents’ duty hours; and demands placed on programs to evaluate residents more intensely.1 Moreover, rounds often take place in conference rooms, not at the bedside; little time is devoted to teaching and learning clinical skills and clinical reasoning; discharge planning and issues pertaining to patient safety may not be systematically addressed; and learners may not be as engaged as they should be with patients’ experiences in the hospital and after discharge. Not surprisingly, recent research notes declining satisfaction levels among teaching attendings who also must adhere to rigorous requirements for documentation and engage more directly in patient care.2
To address these challenges, new systems for inpatient teaching have been proposed.3,4 A common denominator in these systems has been a reduction in patient census and/or a relaxed admission schedule, modifications that were not feasible for our hospital. Our goal, then, was to redesign the inpatient learning experience without either changing the algorithms for the assignment of admissions or reducing patient census, while addressing the aforementioned issues. To achieve this goal, as part of a new program called the Penn Presbyterian Chiefs’ Service (CS), we altered the design of rounds in several ways, focusing on high-reliability behaviors, clinical reasoning, bedside teaching, and principles of patient-centered care. Below, we describe the design and implementation of the CS, our initial impressions of the program, and plans for further evaluation and dissemination.
Penn Presbyterian Medical Center (PPMC) is a 300-bed teaching hospital affiliated with the Perelman School of Medicine of the University of Pennsylvania. The PPMC general medicine inpatient service consists of four equivalent teams, each comprising an attending, a senior resident, two interns, and one or two medical students, admitting on a four-day call cycle. In accordance with Accreditation Council for Graduate Medical Education standards, teams admit up to 10 patients per call day (5 per intern), with a total census cap of 20 patients per team. Rounds occur in the morning, starting at 7:30 AM on postcall days and at 9:00 AM on all other days, and ending at 10:30 AM. At baseline, rounds often took place in conference rooms; there were no formal standards for bedside rounds, even on the day of admission or discharge. Clinical (bedside) skills and diagnostic reasoning were not always emphasized during attending rounds, and issues pertaining to patient safety and quality improvement were not systematically discussed. An asynchronous schedule, with resident and attending “switch days” staggered to preserve continuity, also may have created inefficiency, as the team spent time adapting to each new attending’s approach.
The random assignment of students and residents to the four teams at PPMC provided an opportunity for a quasi-experimental design study in which the new CS team, with specially trained faculty and additional resources, could function as an intervention, enabling us to compare their outcomes with those of the other three teams. Our program evaluation plan, described in detail below, was developed concurrently with the creation of the CS program. The institutional review board of the University of Pennsylvania approved the study.
Developing the CS program
Over the course of six months in 2013, the steering committee, consisting of the department chief (J.E.), the site director for residency training (J.B.R.), the service chief (K.W.), a hospitalist who is the associate clerkship director (N.B.), and the chief medical officer (K.F.), followed an iterative process to develop the following goals for the CS intervention: (1) enhance bedside clinical skills; (2) promote a culture of patient safety; (3) emphasize diagnostic reasoning; (4) engage patients; and (5) provide learners with patient-specific, postdischarge follow-up, thereby underscoring the importance of safe care transitions.
Table 1 demonstrates how each of these goals was operationalized into core activities (or elements). Seven faculty members, including the steering committee, were recruited to be the attending physicians for several two-week blocks per year, spanning the entire academic year. These attendings were trained in the CS model. Additionally, a nurse (R.J.) was hired and trained as a care coordinator (CC), with responsibility for collecting postdischarge data via direct phone calls to discharged patients. This information was supplemented by reports from community health workers who follow up on high-risk patients as part of a system-wide initiative using the IMPaCT model, described elsewhere,5 and these data were shared with the team (see below). For efficiency, orientation materials, including an executive summary of the goals and elements of the CS program and “pocket cards” for each individual element, were developed and distributed to the CS team residents and students in advance. During a three-month pilot, faculty development consisted of biweekly steering committee meetings during planning and early implementation to discuss the refinement of the elements of the program and to calibrate faculty members’ individual implementation of the elements. Faculty not on the steering committee had a one-hour session with either J.E. or J.B.R. in addition to receiving the written materials and attending the periodic follow-up sessions.
Elements of the CS program
The huddle.
Rounds began with a brief attending-led team huddle designed to promote a culture of teamwork, reliability, and patient safety in the team’s activities. The huddle typically began with a high-level review of overnight events, including those involving patient safety and systems concerns. After addressing these issues, the team planned rounds, identifying the order in which patients would be seen: patients ready for discharge were usually seen first, then new patients, then follow-up patients. The huddle generally lasted no more than five minutes.
Bedside rounding.
Bedside rounding was the default modality for the CS program. For newly admitted patients, a formal structure was followed, with the attending standing on the right side of the bed, the presenter on the left, and the rest of the team surrounding them. Team members introduced themselves, and then a student or resident presented the case. Following the presentation, the attending reviewed, demonstrated, and discussed the history-taking and physical exam findings, briefly engaging the patient. Before the team exited the patient’s room, the patient was assured that a team member would return after rounds to share the team’s assessment and initial plan. Finally, the team completed the discussion of the case outside the room. Follow-up visits also took place at the bedside, focusing on changes in status and counseling the patient on his or her progress and the plan for the day.
Diagnostic time-out.
Diagnostic uncertainty was expected, and assessments were treated as a work in progress. At any time, any team member could request a diagnostic “time-out.” These impromptu meetings, conducted away from the bedside, were purposefully distinct from other rounding activities. They were intended to encourage reappraisal, stimulate analytic thinking, identify biases, and develop consensus around a diagnostic or management strategy. The time-out began with a brief problem presentation. The problem was reevaluated, and alternative diagnostic approaches were considered, with data “for” or “against” each acknowledged. The need for additional tests or expert consultation was discussed, and a new plan was established, with a rendezvous time for reassessment set by the attending.
Discharge rounds.
The team started rounds by seeing patients who were to be discharged. In the room, an intern or student summarized the hospitalization for the patient, including study results, diagnoses, treatment plans, and medications (highlighting medication changes). Particular attention was given to describing follow-up appointments and testing and to ensuring, using “teach-back” strategies, that the patient understood and agreed with the plan. Finally, the team encouraged the patient to provide feedback on the hospitalization and on the performance of the team itself. After the CC informed the patient that she would call him or her a few days after discharge, the team formally bade the patient farewell.
Postdischarge follow-up rounds.
This element was intended to broaden the team’s perspective on the hospitalization to include postdischarge follow-up information, both clinical and experiential, on each discharged patient. The CC gathered this information during a phone call to the patient, using a six-item questionnaire followed by a structured interview, which was transcribed in narrative form and typically spanned a few paragraphs. Once a week, on the “precall” day (the least busy in the cycle), the CC, accompanied by community health workers, presented follow-up information on patients discharged the previous week for discussion. These sessions typically lasted 30 to 45 minutes.
Evaluation and continuous improvement
The CS program was implemented in January 2014, following the six months of development described above, and has been in operation continuously since then, with the CS team working in parallel with the other three teams. The CS program now includes more than 700 inpatients. Our initial impressions, supplemented by preliminary evaluation data, have been positive. We have seen a renewed level of excitement among our attendings; indeed, many have expressed that now they would “never teach any other way.” Reactions from our learners have been positive as well. Residents have commented that the standard structure for rounds is efficient, especially on “switch days,” and that they appreciate the emphasis on bedside teaching and learning how patients fare after discharge. This latter process has provided great insight for attendings and learners into the ramifications of transitions of care, particularly for patients facing challenges related to their living situations and socioeconomic conditions. Other positive outcomes include an increased level of direct observation of learners’ clinical skills, such as history taking, physical examination, and counseling; this practice has facilitated the implementation of milestones-based evaluations. Informal feedback from CS patients regarding their experience has been encouraging and has supported the renewed focus on bedside rounding and on structured counseling at the time of discharge, along with postdischarge contact.
A formal evaluation process is under way, with a special focus on fidelity to ensure that the elements are implemented as described. This evaluation employs both qualitative and quantitative methods, as described in Table 1. It includes ethnographic observations of rounds by trained qualitative research staff, semistructured interviews with resident participants, and focus groups with the CS attendings, supplemented by individual diaries kept by attendings while on service. The evaluation also includes quantitative techniques relying on standard departmental surveys for learner satisfaction and other measures. Finally, patient satisfaction and outcome measures are being assessed, relying on the postdischarge telephone interview conducted by the CC and on other hospital-based data. This evaluation will inform our work as we refine the CS elements and possibly develop new ones. We intend to evaluate the individual elements as well as the overall culture, impact, and emergent properties of the program, as we are eager to see whether we have created a program whose “whole” is greater than the sum of its parts.
Implications for dissemination
We have been encouraged by the initial phase of the CS program—development, implementation, and evaluation—but appreciate the trade-offs that have been made. First, the intervention required substantial faculty development, both sessions before implementation and frequent meetings afterward to ensure that the approach was standardized and the elements well executed. Second, additional resources were required for our CC, who is assigned specifically to track CS patients and report back to the team. Third, with the emphasis on bedside teaching and the additional time allotted for activities such as diagnostic time-outs and conferences devoted specifically to postdischarge follow-up, less time was available for more traditional didactic teaching. Neither attendings nor residents have commented negatively on this issue in our discussions, but we suggest that those considering this program weigh these trade-offs. Finally, we acknowledge that rounding at the bedside can prolong rounds, perhaps beyond time constraints that must remain relatively rigid. However, we have found that the huddles enable the CS team to prioritize their time and activities and that diagnostic time-outs have allowed complex issues to be deferred and discussed at more appropriate times. Moreover, bedside rounds (in contrast to first discussing cases in a conference room) have proved to be efficient. As a result, our CS rounds fit the same schedule as the non-CS rounds. We anticipate that our evaluation will confirm this finding. Because the CS program, in contrast to others that have been described,2–4 does not change existing call schedules, numbers of admissions, or census caps, we feel that other institutions can adopt the model and adapt it to their needs, even for specialties other than internal medicine.
In summary, implementation of the CS program has enabled us to restructure inpatient rounds at our institution while achieving specific goals related to teaching, learning, and patient care, without altering admission or census capacity. The evaluation process, both quantitative and qualitative, is under way and may well provide data in support of disseminating this model within our own and other institutions.
Acknowledgments: The authors acknowledge and thank the following people: Breah Paciotti, Shimrit Keddem, MS, and Peter Cronholm, MD, of the Mixed-Methods Research Lab, and Judy Shea, PhD, for their advice and assistance in planning the forthcoming program evaluation; David Aizenberg, MD, Robert Cato, MD, and Judd Flesch, MD, for their continued participation and feedback as attendings in the CS program; and Lisa Bellini, MD, program director, and the Penn internal medicine residents for their continued participation and support.