Empirical Investigations

Feasibility of “Standardized Clinician” Methodology for Patient Training on Hospital-to-Home Transitions

Wehbe-Janek, Hania PhD; Hochhalter, Angela K. PhD; Castilla, Theresa MA; Jo, Chanhee PhD

Simulation in Healthcare: Journal of the Society for Simulation in Healthcare: February 2015 - Volume 10 - Issue 1 - p 4-13
doi: 10.1097/SIH.0000000000000053

INTRODUCTION

Hospital-to-home transitions are complex situations for providers and patients. Common responsibilities for patients include (a) coordinating and attending follow-up appointments, (b) changing to and then following new medication regimens, (c) coordinating services such as home health or physical therapy, and (d) monitoring symptoms for signs of worsening health.1 Discontinuity of care, injury, and medication error are prominent risks during hospital-to-home transitions and are among the reasons for potentially avoidable hospital readmissions.2–10

Furthermore, hospital-to-home responsibilities are often particularly complex for older adults and those with multiple chronic conditions, inadequate engagement in health care decisions, or inadequate health literacy.2,5,6,11,12 The majority of adults 65 years or older are living with multiple chronic conditions, and most are taking multiple long-term medications.13–15 Most also experience multiple medication changes and doctor appointments in the weeks after leaving the hospital.14,16

Reduction of hospital readmission rates remains a national priority in the United States, as demonstrated by the penalties hospitals now face for high readmission rates through the Affordable Care Act Hospital Readmissions Reduction Program.16 Additional complementary approaches to addressing transitions are therefore needed. Most evidence-based interventions for improving hospital-to-home transitions address hospital discharge processes, communication among clinicians, patient education, and patient empowerment to handle challenges after discharge.17–19 Most are delivered during the hospital stay and in the weeks after discharge. The complexities of patient responsibilities may lend themselves to a skills training intervention that prepares patients for future hospital-to-home transitions at a time when they are not simultaneously faced with stressors of an ongoing transition. Furthermore, patient preferences and capacity for active involvement in health care vary. Patients’ active engagement in health care and health behaviors is increasingly recognized as crucial to achieving high-quality outcomes.20,21 Clinical simulation methodology offers a number of advantages over other types of training and may therefore hold promise for training patients to engage in health care, especially engagement in complex situations. We therefore tested the feasibility and in vitro impact of a novel application of standardized patient methodology we call “standardized clinician (SC) methodology” for the training of older adults at risk for future hospitalizations. We also considered what learning, if any, occurs among the SCs through their experience role-playing in this training.

In SC methodology, the patient is in the role of the primary learner, and the clinician role is standardized and played by trained individuals. Evidence shows that experiential standardized patient training methodology is particularly effective for adult learners and allows learners to build skills that will transfer to similar situations in the future.22,23 Adult learning principles and the potential for skill transfer are particularly important when patient training is designed to develop skills for similar situations in the future. However, testing for transfer of skills to real-world settings in the future was beyond the scope of this pilot study. Standardized clinician methodology uses simulation scenarios that mimic hospital-to-home transitions. The simulations provide a safe and controlled environment to practice skills such as asking questions, expressing preferences, reconciling medications, and providing relevant information to providers. Evidence suggests that having the opportunity to practice and be evaluated in a controlled setting increases knowledge, confidence, and self-efficacy among health care providers.24,25 We hypothesized the same would be true for patients.

METHODS

Study Design

A randomized controlled trial was conducted with 2 intervention groups: (a) simulation exposure only or (b) full intervention. Figure 1 is a schematic of each group’s activities. At the beginning of day 1, all participants took part in scenario 1. After scenario 1, participants were randomly assigned to one of the intervention groups. The simulation exposure–only group was not involved in any additional activities after scenario 1 on day 1. The full-intervention group participants continued their activity on day 1 after randomization, with (a) group debriefing on scenario 1, (b) introduction to patient tools for hospital discharge (eg, checklists), (c) practice with a second scenario (scenario 2), and (d) group debriefing on scenario 2. Participants returned 3 to 7 weeks later, not necessarily within the same cohort, for a follow-up session, during which both groups completed a final scenario (scenario 3) and group debriefing.

FIGURE 1: Schematic of participant activities on (A) day 1 and (B) follow-up day (4–7 weeks later).

The primary outcomes were changes in observed patient behavior during the simulation scenarios at day 1 and follow-up (ie, behavior change in the training environment). Evaluation of feasibility was also a study aim. Feasibility was evaluated by loss to follow-up and participants’ evaluations of the program. In addition, SCs were asked to complete a program evaluation of their participation. The study was approved by the Scott & White Healthcare Institutional Review Board.

Participants

Participants were recruited through informational flyers and face-to-face contacts at clinics, health fairs, and community settings such as senior centers. Advertisements were also placed in local newspapers. Individuals were eligible to participate if they (a) were 65 years or older, (b) self-reported having 2 or more chronic illnesses, (c) were hospitalized within the past year in the health care system where the study was conducted, (d) spoke and read English, and (e) had transportation to the simulation center where the study was conducted. Participants were excluded if they (a) were currently undergoing cancer treatment, (b) self-reported a diagnosis of dementia, or (c) had hearing impairment that would keep them from participating in small group discussions.

Individuals who met eligibility requirements and verbally agreed to participate were scheduled for a day 1 session and a follow-up session. A maximum of 6 participants were scheduled to attend any 1 session, which kept learner groups relatively small. Sixty-seven participants enrolled.

Intervention Materials

Scenarios

Three scenarios were created by the investigators and an internist affiliated with the study. The order of the 3 scenarios was randomly assigned before the start of the study and applied consistently across all participants. Scenario 1 involved hospitalization for transient ischemic attack. Scenario 2 focused on hospitalization for mild heart attack. Scenario 3 covered hospitalization for pneumonia.

Each scenario included 3 steps: (A) talking with hospital personnel in preparation for hospital discharge, (B) reviewing medications and symptoms at home after discharge, and (C) attending a follow-up appointment with a primary care provider within a few weeks of discharge. Examples of each step are provided in the text document Supplemental Digital Content 1 (https://links.lww.com/SIH/A137). Within each scenario, steps A, B, and C followed the same hypothetical reason for hospitalization (transient ischemic attack, mild heart attack, or pneumonia). Figure 2 is a schematic of how scenarios progressed.

FIGURE 2: Schematic of steps A, B, and C for each scenario.

Before beginning scenario 1, participants were prebriefed on the SC methodology by an investigator (H.W.-J., A.K.H.). Prebriefing included a description of what to expect and a tour of the simulation rooms where scenarios would occur. Prebriefing was conducted to help individuals understand their role and the roles of the SCs. Cameras in each room were pointed out; participants were instructed that they were on live camera and could call out for help if needed. Live camera feeds were particularly important during times when participants were in the scenario rooms alone. After prebriefing, all participants completed scenario 1.

At the beginning of each scenario, participants reviewed a 1-page description of background information for the case. The description explained the participant’s role in the story, the reason for and duration of the hospitalization, and a few points about their living situation outside the hospital and expected functioning (eg, you live alone) (see text document, Supplemental Digital Content 1, https://links.lww.com/SIH/A137). Study personnel reviewed the description with individual participants in the room before the scenario began. When the participant was ready, study personnel exited the room. An SC entered the room, and the scenario began. The first step of the scenario centered on discussion about discharge instructions, which were provided on paper to be used again throughout the scenario.

The second step of the scenario simulated being at home after leaving the hospital. Participants were faced with challenges related to self-care activities such as taking medications and dealing with symptoms. Participants were seated alone in a room with (a) an intercom to simulate a telephone, (b) 3 empty medication bottles to simulate medicines they had at home before the hospitalization, (c) 3 empty medication bottles in a bag to simulate medicines they received from a pharmacy after discharge, and (d) a worksheet that asked 3 questions related to the condition for which the hospitalization had occurred (see text document, Supplemental Digital Content 1, https://links.lww.com/SIH/A137). Worksheet question 1 asked when and where the participant’s follow-up appointment was scheduled. The needed information was sometimes included on the written discharge instructions received at the beginning of the scenario. Sometimes, it was not. Question 2 asked for the names of the medications the participant should be taking after leaving the hospital and the number of pills per day. This information was available on the discharge instructions but did not always match the pill bottles in the room. Adding these extra mismatched pill bottles simulated the presence of a medication discrepancy discovered at home. The third question gave a symptom (eg, elevated temperature, swollen legs) and asked how the participant would respond. Sometimes, this information was available on the discharge instructions. Sometimes, it was not.

During the at-home scenario, participants had the option to use the intercom as a telephone by pressing the call button and stating the name (eg, doctor named on the discharge instructions) or role (eg, nurse at my clinic) of the person they wanted to call. Standardized clinicians were on the other end of each intercom and responded to participant questions with relevant information.

The last step of the scenario simulated a follow-up appointment with a primary care provider (see text document, Supplemental Digital Content 1, https://links.lww.com/SIH/A137).

Study personnel watched live video feeds of all simulation rooms during scenarios so that any problems could be addressed quickly (eg, trouble operating an intercom).

Debriefing and Patient Tools

Debriefing

Debriefing sessions were led by an investigator (A.K.H., H.W.-J.) in a conference room adjacent to simulation rooms using debriefing guides customized for each scenario. Debriefing followed each complete scenario for the full-intervention participants in a group setting. The simulation-only group participants debriefed after the follow-up scenario 3. Debriefing included general group reactions to the scenarios plus discussion of specific challenges embedded in the scenarios (see text document, Supplemental Digital Content 2, https://links.lww.com/SIH/A138, which describes the debriefing guide). Examples of challenges embedded in scenarios included medication discrepancies, missing appointment date, and appointment dates outside the follow-up window on the discharge instructions. Participants discussed whether they noticed these challenges and how they handled each. They were encouraged to discuss their own experiences in health care as they related to the scenarios. Key points that were discussed included the impact of communication and use of tools during hospital-to-home transitions, roles of family member caretakers, medication reconciliation, and the complex nature of going home from the hospital.

Patient Tools for Hospital-to-Home Transitions

The investigator leading the debriefing for scenario 2 followed debriefing with an introduction of patient tools. All tools presented were either publicly available or created by merging sets of publicly available materials. Selection of these tools was based on feedback from focus groups conducted during the formative stage of this project. Materials included the following:

  • Taking Care of Myself: A Guide for When I Leave the Hospital26
  • Care Transitions Program Personal Health Record17
  • A discharge checklist adapted from checklists available through the Care Transitions Program (caretransitions.org) and the Taking Care of Myself guide,26 and
  • Talking With Your Doctor guide for older adults (National Institute on Aging, 2005).27

Paper and electronic copies of each tool were available in the session. Participants were encouraged to take any tools they found useful. They were also encouraged to try using a tool in scenario 2 for practice and to see if they liked a selected tool.

After completing the study’s program evaluation, the simulation exposure–only group was given the option of taking any of the patient tool materials for their use in the future.

Standardized Clinician Training

Ten SCs were recruited from among nursing students and medical students at the study site and a local university. Standardized clinician training consisted of an explanation of the case stories and the challenges built into each. Standardized clinicians were trained to use scripts with instructions on how to interact with the patients. Some instructions directed SCs to be more supportive (eg, “Ask subject if he or she has any questions”) than others (eg, “Avoid requesting additional information”). Supportive and less supportive interaction styles were balanced within each SC role; the affiliated internist reviewed the SC interaction styles. Standardized clinicians received 6 hours of training in 2 sessions with an investigator (H.W.-J.) or program coordinator. Standardized clinicians role-played the stories with each other while the investigator or program coordinator was available to provide feedback and give additional instruction. Standardized clinicians were assigned to sessions based on their availability. Study coordinators served as back-up SCs only when necessary.

Measures

Self-Report Measures

Demographics

Participants reported on their demographics and common health problems they had experienced (see Table, Supplemental Digital Content 3, https://links.lww.com/SIH/A139, which describes participant demographics).

Ask, Understand, Remember Assessment

Ask, Understand, Remember Assessment (AURA) is a 4-item Likert scale that measures self-efficacy to ask, understand, and remember information during doctor visits.28 Scores on the AURA range from 4 to 16, with higher scores representing greater self-efficacy.

Patient Assessment of Chronic Illness Care

Patient Assessment of Chronic Illness Care (PACIC) is a 20-item assessment of how well a patient’s care for chronic illness matches elements of patient-centered care. The measure consists of a summary score and 5 subscale scores: patient activation, delivery system design/decision support, goal setting/tailoring, problem solving/contextual, and follow-up/coordination.29

Health Literacy Screen by Chew et al

Three questions from Chew et al30 screen for inadequate health literacy. Each question is scored on a Likert scale from 0 to 4. Items were scored by adding values associated with each response; higher scores indicate greater problems with health literacy.

Program Evaluation

The program evaluations were created specifically for this study. The study subjects’ program evaluation included 13 items on which participants rated their agreement on a scale ranging from 1 to 4 (“disagree a lot” to “agree a lot”, respectively) plus 4 open-ended questions. The program evaluation for the SCs included 13 items on which participants rated their agreement on a scale ranging from 1 to 5 (strongly disagree to strongly agree, respectively), plus 2 open-ended questions. From the SCs, we specifically solicited feedback on the value of their participation for their learning as health care providers, as well as on their interest in future participation in the program.

Behavioral Observations

All steps of the scenarios were video recorded. Four trained observers later coded preselected behaviors in each video using specified definitions. For each step, observers tallied the frequency of each of the 20 behaviors for which coding definitions were available. The 20 behaviors of interest fell into 5 categories: participant use of patient tools, participant display of skills for engaging in hospital-to-home transitions, SC use of partnership-building techniques, SC use of supportive techniques, and SC use of nonsupportive techniques (definitions of individual behaviors in each category appear in Table 1). Observers also scored the worksheet for each scenario using a protocol that produced a total “worksheet” score between 0 and 5 points (see text document, Supplemental Digital Content 4, https://links.lww.com/SIH/A140, which describes the worksheet scoring procedures).

TABLE 1: Coding Definitions

Two observers were randomly assigned to each scenario. In total, 67 baseline scenarios and 65 follow-up scenarios were coded by 2 observers each (a total of 134 baseline and 130 follow-up coded observations). Some observers were program coordinators who had filled in as SCs during intervention sessions, but no observer coded scenarios in which he or she had served in the SC role.
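
As a minimal illustration of how the per-step tallies described above can be rolled up into the 5 category totals used in the analyses, the base R sketch below aggregates hypothetical coded counts by participant, scenario, and category. The data layout, column names, and category labels are illustrative assumptions and do not reproduce the study's actual coding sheet.

    # Minimal sketch (base R): rolling per-step behavior tallies up to the
    # 5 category totals. The data layout and names below are hypothetical
    # illustrations, not the study's actual coding sheet.
    coded <- data.frame(
      participant = c(1, 1, 1, 2, 2, 2),
      scenario    = rep("baseline", 6),
      category    = c("engagement_skills", "use_of_tools", "sc_supportive",
                      "engagement_skills", "use_of_tools", "sc_supportive"),
      count       = c(4, 1, 3, 2, 0, 5)
    )

    # Sum tallies within participant x scenario x category
    category_totals <- aggregate(count ~ participant + scenario + category,
                                 data = coded, FUN = sum)

    # Reshape to one row per participant/scenario with one column per
    # category, the shape used for the within- and between-group comparisons
    wide <- reshape(category_totals,
                    idvar = c("participant", "scenario"),
                    timevar = "category",
                    direction = "wide")
    print(wide)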

Observer Training

Observers were research coordinators. Coding definitions were created for participant behaviors and SC behaviors through an iterative team process that began with definitions published by Cegala et al31 and Street et al32 in studies of patient-provider communication. The lead study coordinator (T.C.) was involved in all definition drafts and trained all observers. Four observers completed a minimum of 2 hours of training each with the lead study coordinator. Additional team meetings and training occurred throughout the coding process, totaling approximately 12 additional hours of team training per observer. Before coding for study purposes, observers practiced by coding 10 scenarios (3 steps per scenario). To check for consistency, the lead study coordinator randomly selected (from a list of random numbers generated from www.randomizer.org) 1 scenario from the practice coding and compared the coding across observers to evaluate the degree of shared understanding and application of the coding definitions. The lead study coordinator also met with each observer to discuss where his or her coding differed from others’. This practice and one-on-one discussion happened before coding was conducted for study data purposes. The same process was followed at regular intervals over time. When inconsistencies in coding were found, the lead study coordinator met with individual coders or with the group to discuss them.

Statistical Analysis

To examine the reliability of observations, pairwise intraclass correlations were computed for the 5 categories of combined measures coded by the 4 raters (Table 2). To test the impact of the intervention on participant behavior in the training settings, we compared changes in frequency of all observation codes from baseline to follow-up for the simulation exposure–only and full-intervention groups using 2-sample t tests. Linear regression models were fit for Use of Patient Tools, Engagement Skills, and Worksheet Score to compare changes in these variables from baseline to follow-up between the simulation exposure–only and full-intervention groups. The linear regression models controlled for PACIC scores, health literacy screening scores, and SC behaviors (ie, partnership-building and non–partnership-building techniques). We also used the nonparametric Mann-Whitney U test for nonnormally distributed data. A P value of less than 0.05 indicated statistical significance. R version 2.15.1 (R Development Core Team, 2012) was used for the statistical analysis.

TABLE 2: Pairwise Intraclass Correlations by 4 Coders
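
To make the analysis pipeline concrete, the R sketch below runs the tests named above on simulated data: a 2-sample t test on change scores, a linear regression adjusting for PACIC, health literacy, and observed SC behavior, a Mann-Whitney U test (wilcox.test in R), and a pairwise intraclass correlation for one rater pair. The variable names and the ICC estimator (from the irr package) are assumptions; the paper does not report the exact model specifications or the ICC implementation used.

    # Illustrative sketch of the analyses described above, run on simulated
    # data. Variable names are hypothetical; exact model specifications and
    # the ICC estimator used in the study are not reported in the paper.
    set.seed(1)
    n <- 60
    dat <- data.frame(
      group            = rep(c("sim_only", "full"), each = n / 2),
      change_engage    = rnorm(n),          # follow-up minus baseline engagement count
      change_worksheet = rnorm(n),          # follow-up minus baseline worksheet score
      pacic            = rnorm(n, 3, 0.5),  # PACIC summary score
      health_literacy  = sample(0:12, n, replace = TRUE),
      sc_partnership   = rpois(n, 2)        # observed SC partnership-building tallies
    )

    # 2-sample t test comparing baseline-to-follow-up change between groups
    print(t.test(change_engage ~ group, data = dat))

    # Linear regression of change in engagement skills, controlling for PACIC,
    # health literacy screening scores, and observed SC behavior
    print(summary(lm(change_engage ~ group + pacic + health_literacy + sc_partnership,
                     data = dat)))

    # Nonparametric Mann-Whitney U test for nonnormally distributed outcomes
    print(wilcox.test(change_worksheet ~ group, data = dat))

    # Pairwise intraclass correlation for one pair of raters (irr package)
    if (requireNamespace("irr", quietly = TRUE)) {
      ratings <- cbind(rater1 = rpois(30, 5), rater2 = rpois(30, 5))
      print(irr::icc(ratings, model = "twoway", type = "agreement", unit = "single"))
    }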

RESULTS

Participant Characteristics and Baseline Measures

Sixty-seven participants (100%) completed the baseline questionnaire and baseline simulation sessions. Sixty-five completed the 1-month follow-up session and program evaluation. Thus, retention of participants in the study, which required attendance at 2 sessions approximately 1 month apart, was 97%.

Participants had a mean (SD) age of 74 (6.3) years; the majority of participants were white (95.5%) and female (73.9%). The majority of participants (97.0%) reported that they had a regular doctor, nurse practitioner, or physician assistant that they usually see for care. The simulation exposure–only and full-intervention groups did not statistically differ on any demographic characteristics (see Table, Supplemental Digital Content 3, https://links.lww.com/SIH/A139, which describes participant demographics).

Within- and Between-Group Differences in Observed Behaviors

Table 3 reports the means and SDs of counts taken for observed behaviors, along with worksheet scores, by group and period (day 1 vs. follow-up) and within-group comparisons. The full-intervention group participants showed increases in observed tool possession (P = 0.014) and expression of their preferences and values (P = 0.043). However, they also showed a significant decrease in telling an SC about their hospital admission (tell about admission, P = 0.006). No other changes in observed behaviors were noted from day 1 to follow-up for the full-intervention group.

TABLE 3: Within-Group Comparisons of Observed Behavior Counts Reported as Means (SD)

The simulation exposure–only group showed improvement in worksheet scores (P = 0.002). The group also displayed fewer engagement skills (P = 0.021), including telling about the hospital admission (P = 0.003) and giving relevant health information (P = 0.016), from day 1 to follow-up.

Table 4 describes linear modeling for observed behaviors. A linear regression model was used to compare the 2 groups over day 1 and follow-up on the outcomes of observed participant behaviors and worksheet scores. Participant scores on the PACIC and health literacy screening questions were included in the models to account for potential differences in baseline capacity for engaging in health care. Observed SC accommodative and facilitative communications were also included in the models. Participant use of patient tools was positively associated with changes in SC facilitative communication (P = 0.042). Engagement skills were positively associated with changes in observed SC accommodative communication (P < 0.001). A significant intervention effect was detected for worksheet score; as shown in Table 4, the simulation exposure–only group improved more than the full-intervention group from day 1 to follow-up (P = 0.032).

TABLE 4: Linear Modeling for Selected Participant Behaviors and Worksheet Scores

Program Evaluation

Forty-one participants (63%) answered all 13 scaled items on the program evaluation. Program evaluation questions were each considered separately because the questions do not represent a validated instrument. Between-group differences in responses were statistically significant for 2 items: “This program made me more confident that I can get information and do things to help me get better after going home from hospitals,” and “The people acting like doctors and nurses in the sessions were like real doctors and nurses” (P = 0.005 and P = 0.002, respectively) (see Table, Supplemental Digital Content 5, https://links.lww.com/SIH/A141, which describes the participants’ responses to the program evaluation).

Program evaluations were completed by both groups and included 4 open-ended questions to which participants had the option to give written responses. Qualitative analyses were not conducted because of the limited number of responses (approximately 20 per question) and brief nature of responses. Review of the responses does provide information on the feasibility of the SC methodology for teaching patients skills for health care engagement (see Table, Supplemental Digital Content 6, https://links.lww.com/SIH/A142, which describes the open-ended responses by the participants of the program evaluation). When asked, “How did this program help you?” common responses noted new attention to instructions and confidence for being assertive. When asked, “What would make this program more useful?” the most common suggestions (outside of suggesting no changes) addressed the details of the methodology. Across open-ended questions, many comments suggested that participants enjoyed the experience and saw real-world value for themselves and others.

Of the 10 SCs who participated in the study, 8 responded to the SC program evaluation; 3 were nursing students and 5 were medical students at the time of their participation (see Table, Supplemental Digital Content 7, https://links.lww.com/SIH/A143, which describes the responses to the SCs’ program evaluation). As shown in the SC program evaluation, SCs reported positively on their participation. When SCs were asked about the most valuable component of their participation, the most common response was understanding the hospital-to-home transition process and the importance of communication.

DISCUSSION

This study examined the feasibility and initial impact of the SC methodology for teaching patient skills for engaging in complex health care situations. In particular, we tested the methodology when applied to the situation of older adults transitioning from a hospital setting back home. Because this was a pilot study of the methodology, we tested changes in observed behaviors occurring only during simulation scenarios. We expected the full-intervention group to show more improvement in engaged behaviors from baseline to follow-up compared with the simulation exposure–only group because the full-intervention group included debriefing and extra practice. Within-group comparisons identified improvements in some observed behaviors and the worksheet score, although the types of improvements observed were different in the 2 groups. That is, both groups showed some improvements in the simulated environment, suggesting that exposure to simulation scenarios may be enough to produce some initial behavior change. However, both groups showed reductions in the number of times participants told SCs about the hospital admission during scenarios. Alerting providers to a recent hospital admission can help facilitate coordination of care across settings. Participants may have all become comfortable with the scenarios over time and felt confident that the SCs were already aware of the situation. As additional work is conducted to refine the SC methodology, it will be important to test which types of patient engagement skills are more amenable to this training methodology and which training components are critical for achieving desired skill learning.

In the linear model, participants’ use of patient tools was associated with SC use of facilitative communication techniques. In scenarios where participants used their tools more, SCs also displayed more instances of encouraging involvement in ways such as asking for preferences. In scenarios where patients were observed actively interacting by requesting clarification, giving details of the simulated admission, or asserting preferences, SCs were observed affirming efforts to ask questions, inviting discussion about feelings, and inviting participation in decision making. These observed associations reflect the important interplay between participant behaviors and SC communication. For example, participants who asked more questions may have elicited more communication than was scripted. This finding is consistent with the findings reported by Street et al,32 demonstrating that patient and physician communications during medical appointments are mutually influential.

Data from the program evaluation addressed the feasibility of using SC methodology. Open-ended comments were supportive of the program and included some specific suggestions for improving the scenarios and experience. The receptivity of the participants to the training, including the high retention we observed (97%) in this study, suggests that older adults are willing to take part in training using SC methodology. Participant responses to scaled evaluation questions indicate that participants in both groups were more confident to handle hospital-to-home transitions after being exposed to the SC methodology. The majority of participants agreed that the program helped them to be more aware of the information on discharge instructions, especially information related to medications. In addition, most participants agreed that the intervention helped them realize the importance of asking questions and getting answers before discharge from the hospital. These results are similar to findings from other interventions aimed at improving transitions in care. For example, patients enrolled in the care transitions intervention reported that the intervention helped them improve their ability to self-manage medications and gain access to knowledge about warning signs and adverse events.33 In addition, a nurse practitioner intervention targeting at-risk seniors after hospital discharge found that the intervention improved participants’ self-efficacy for managing their health conditions.34 Although improved confidence may not directly translate into improved outcomes, participants’ evaluation of the program suggests the potential for improving self-efficacy and possibly patient engagement skills.

For this study, most SCs were medical or nursing students. During study development, we considered the likelihood that participation would affect their clinical training. The survey responses suggest that learning on the part of the SCs may have occurred: participating SCs found their involvement meaningful for understanding the hospital-to-home transition and the communication this complex situation requires. Future studies on the educational impact of serving as an SC are warranted to better understand this effect.

Limitations

The study sample consisted of a small, homogeneous group of individuals. More than 95% of the participants identified as white, and only 4.5% identified as other. More than half of the sample (64%) reported seeing a doctor, nurse practitioner, or physician assistant on a regular basis. In addition, the majority of the sample had completed high school, attended some college, or graduated from college. The study sample does not adequately represent individuals from underserved backgrounds (ie, minorities, individuals with lower than a high school education, and individuals without a regular source of care). Thus, the study should be repeated with a more diverse sample before findings can be generalized.

The skills and tools addressed by this pilot study of SC methodology were tailored to the typical hospital discharge and follow-up processes at the site where the pilot study was performed. Processes in the simulation scenarios and the tools introduced to the full-intervention group are not universally applicable. They represent Americanized tools—publicly available in the United States and designed for US health care consumers. The discharge summary used in the scenarios was the same format as that used at the hospital from which most participants had been discharged in the previous year. This local tailoring was intentional because the investigators assumed that skills training would be most useful when tailored to the likely circumstances in which those skills would be used in the future. However, the training materials would not be applicable to hospital-to-home transitions in other countries.

The pilot study used standardized scenarios that did not necessarily represent individuals’ medical histories and therefore may not have been directly applicable to every participant in the future. Scenarios were standardized in an attempt to control some of the variability among individual participants and across groups. Customized scenarios based on each participant’s unique medical history may be more directly useful as a patient education approach than general training on handling processes that are typical of the hospital-to-home transition or any other health care circumstance. Participants’ own experience as health care providers was not collected; addressing its potential impact was beyond the scope of this pilot and should be considered in future evaluations.

Intraclass correlations were sometimes low for pairs of raters on individual behaviors, making the mean intraclass correlations correspondingly low. Although observers were trained and discussed coding definitions on a regular basis, some of the behaviors of interest are subjective and were not easy for all coders to reliably count. However, coder agreement was strongest for the behaviors of primary interest—engagement skills. Simplification of coding definitions, limiting the total number of observers, and additional training may be necessary for future work on SC methodology using observed behaviors as an outcome.35

This pilot study did not attempt to connect scenario behaviors to real-world skill performance. That connection is important to investigate further for 2 distinct reasons. First, if performance during simulations approximates skill performance in real-world clinical settings, then SC methodology may be useful for assessing patient engagement skills. Such an assessment could prove useful for helping providers and delivery systems tailor support and education to patients’ current needs. Second, understanding the relationship of simulation performance to real-world performance is important for determining whether skills learned through SC methodology will transfer to real-world settings when patients face the stressors that characterize seeking medical care. In this pilot study, we looked at changes in behavior within the simulation setting. Using simulation as both the intervention and assessment method was considered acceptable by investigators in the pilot testing phase because the overall feasibility of the SC methodology was our primary concern. Assessment of SC methodology on real-world performance would uncouple the intervention and assessment, allowing stronger evaluation of impact. For example, a study design that randomizes patients to SC methodology or traditional care transitions education and then evaluates patient outcomes could determine the real-world impact of the SC methodology.

Implications and Next Steps

This study did not find evidence that the full intervention we tested leads to more robust behavior change or greater improvements in confidence than exposure to simulation training alone. However, changes in patient behaviors were observed in the simulation scenarios, and participants’ self-reports indicate that they valued the training experience and felt more confident after exposure to the methodology. We conclude that SC methodology is feasible for teaching patients skills to engage in complex health care situations. Additional work is needed to determine whether and how this methodology can be applied to better prepare patients to actively engage in real-world clinical experiences in ways that improve health outcomes.

Future iterations of this intervention can address some of the suggestions from study participants and further explore the potential benefits to clinicians and patients when experienced providers play SC roles. There may be benefit to both patients and providers when experienced providers or possibly medical residents participate in scenarios. Experienced providers would be more adept at improvising in response to patient participants’ interactions. In addition, experienced providers may have opportunities to hone their patient communication skills through the SC experience, particularly if included in the debriefing sessions.

Positive responses and comments from study participants support the further development and testing of SC methodology for patient skill training. We believe the next steps in the development of the methodology will test the impact on real-world health care engagement and outcomes. Inclusion of experienced providers and patients in the development and testing of the approach will optimize the potential of the approach for improving outcomes.

ACKNOWLEDGMENTS

The authors acknowledge Neil Coker, EMT-P, Director of the Temple College Clinical Simulation Center, for his assistance at the simulation center. The authors would also like to thank Dr Lisa Sklar, the internist affiliated with the study, for her assistance in case development, and Dr Wendy Hegefeld and Mr Glen Cryer for their assistance in editing this article.

REFERENCES

1. Kripalani S, Jackson AT, Schnipper JL, Coleman EA. Promoting effective transitions of care at hospital discharge: a review of key issues for hospitalists. J Hosp Med 2007; 2 (5): 314–323.
2. Belcher VN, Fried TR, Agostini JV, Tinetti ME. Views of older adults on patient participation in medication-related decision making. J Gen Intern Med 2006; 21 (4): 298–303.
3. Gazmararian JA, Baker DW, Williams MV, et al. Health literacy among Medicare enrollees in a managed care organization. JAMA 1999; 281 (6): 545–551.
4. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med 2009; 360 (14): 1418–1428.
5. Marcantonio ER, McKean S, Goldfinger M, Kleefield S, Yurkofsky M, Brennan TA. Factors associated with unplanned hospital readmission among patients 65 years of age and older in a Medicare managed care plan. Am J Med 1999; 107 (1): 13–17.
6. Naylor M. Transitional care for older adults: a cost-effective model. LDI Issue Brief 2004; 9: 1–4.
7. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. Adverse drug events occurring following hospital discharge. J Gen Intern Med 2005; 20 (4): 317–323.
8. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med 2003; 138 (3): 161–167.
9. de Vries EN, Ramrattan MA, Smorenburg SM, Gouma DJ, Boermeester MA. The incidence and nature of in-hospital adverse events: a systematic review. Qual Saf Health Care 2008; 17: 216–223.
10. Soop M, Fryksmark U, Koster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care 2009; 21 (4): 285–291.
11. Cresswell K, Fernando B, McKinstry B, Sheikh A. Adverse drug events in the elderly. Br Med Bull 2007; 83: 259–274.
12. Chugh A, Williams MV, Grigsby J, Coleman EA. Better transitions: improving comprehension of discharge instructions. Front Health Serv Manage 2009; 25 (3): 11–32.
13. Ward BW, Schiller JS. Prevalence of multiple chronic conditions among US adults: estimates from the National Health Interview Survey, 2010. Prev Chronic Dis 2013; 10: E65.
14. Mansur N, Weiss A, Hoffman A, Gruenewald T, Beloosesky Y. Continuity and adherence to long-term drug treatment by geriatric patients after hospital discharge: a prospective cohort study. Drugs Aging 2008; 25 (10): 861–870.
15. Ashman JJ, Beresovsky V. Multiple chronic conditions among US adults who visited physician offices: data from the National Ambulatory Medical Care Survey, 2009. Prev Chronic Dis 2013; 10: E64.
16. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system and fiscal year 2013 rates; hospitals’ resident caps for graduate medical education payment purposes; quality reporting requirements for specific providers and for ambulatory surgical centers. Final rule. Fed Regist 2012; 77 (170): 53257–53750.
17. Coleman EA, Smith JD, Frank JC, Min SJ, Parry C, Kramer AM. Preparing patients and caregivers to participate in care delivered across settings: the care transitions intervention. J Am Geriatr Soc 2004; 52 (11): 1817–1825.
18. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 2009; 150 (3): 178–187.
19. Naylor MD, Hirschman KB, Bowles KH, Bixby MB, Konick-McMahan J, Stephens C. Care coordination for cognitively impaired older adults and their caregivers. Home Health Care Serv Q 2007; 26 (4): 57–78.
20. Carman KL, Dardess P, Maurer M, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health Aff (Millwood) 2013; 32 (2): 223–231.
21. Coulter A, Ellins J. Effectiveness of strategies for informing, educating, and involving patients. BMJ 2007; 335 (7609): 24–27.
22. Nagoshi M, Williams S, Kasuya R, Sakai D, Masaki K, Blanchette L. Using standardized patients to assess the geriatrics medicine skills of medical students, internal medicine residents, and geriatrics medicine fellows. Acad Med 2004; 79 (7): 698–702.
23. LaMantia MA, Scheunemann LP, Viera AJ, Busby-Whitehead J, Hanson LC. Interventions to improve transitional care between nursing homes and hospitals: a systematic review. J Am Geriatr Soc 2010; 58 (4): 777–782.
24. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005; 27 (1): 10–28.
25. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA 1999; 282 (9): 861–866.
26. Agency for Healthcare Research and Quality. Taking Care of Myself: A Guide for When I Leave the Hospital. Rockville, MD: U.S. Department of Health and Human Services; 2010.
27. National Institute on Aging. A Guide for Older People: Talking with Your Doctor. Washington, DC: National Institutes of Health; 2005.
28. Clayman ML, Pandit AU, Bergeron AR, Cameron KA, Ross E, Wolf MS. Ask, understand, remember: a brief measure of patient communication self-efficacy within clinical encounters. J Health Commun 2010; 15 (Suppl 2): 72–79.
29. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care 2005; 43 (5): 436–444.
30. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med 2008; 23 (5): 561–566.
31. Cegala DJ, Street RL Jr, Clinch CR. The impact of patient participation on physicians’ information provision during a primary care medical interview. Health Commun 2007; 21 (2): 177–185.
32. Street RL Jr, Gordon HS, Ward MM, Krupat E, Kravitz RL. Patient participation in medical consultations: why some patients are more involved than others. Med Care 2005; 43 (10): 960–969.
33. Parry C, Kramer HM, Coleman EA. A qualitative exploration of a patient-centered coaching intervention to improve care transitions in chronically ill older adults. Home Health Care Serv Q 2006; 25 (3–4): 39–53.
34. Enguidanos S, Gibbs N, Jamison P. From hospital to home: a brief nurse practitioner intervention for vulnerable older adults. J Gerontol Nurs 2012; 38 (3): 40–50.
35. Haidet KK, Tate J, Divirgilio-Thomas D, Kolanowski A, Happ MB. Methods to improve reliability of video-recorded behavioral data. Res Nurs Health 2009; 32 (4): 465–474.
Keywords:

Older patients; Hospital readmission; Patient training; Simulated clinician intervention

© 2015 Society for Simulation in Healthcare