People over the age of 65 constituted 14.1% of the Canadian population in 2009.1 This percentage is expected to increase rapidly, reaching 23% to 25% of the total population by 2036.2 Given this trend in Canada and elsewhere, the World Health Organization has suggested that all undergraduate medical students, regardless of their future specialty, should receive clinical training in geriatrics.3 The national standards for medical school accreditation in Canada currently do not mandate geriatric medicine content at the undergraduate level; they require only that geriatric content be available for those who elect to take it. As a result, standards for geriatric training and for the number of hours devoted to its training vary widely by school in Canada.4 Currently, the majority of Canadian medical schools do not have a required geriatric clerkship. In an effort to address this paucity of geriatric education, the Canadian Geriatrics Society developed 20 geriatric core competencies (or learning outcomes) for medical students5; however, because neither geriatric rotations nor geriatric training are mandated, little opportunity exists to ensure that students have attained these competencies by the time they graduate.
Previous research has shown that a clerkship year including a geriatric rotation better prepares students to care for older adults as compared with a traditional clerkship year without a geriatric rotation.6 This finding contradicts the all-too-commonly held belief of both faculty and students7 that because trainees see older adults during most rotations (save pediatrics), specialized training in geriatrics is not required. Given that less than half of Canadian medical schools have mandatory geriatric clinical clerkships, many students do not have the opportunity to perform and practice comprehensive geriatric assessments, which research has shown to have a number of potential benefits for elderly patients, including reduced morbidity, reduced long-term care use, and improved recovery.8 Further, when clinical students and residents do have the opportunity to complete the history and physical examination of an older patient, they are generally not supervised by a geriatrician; thus, few opportunities exist for them to receive direct feedback on their geriatric assessment skills. A solution to this lack of constructive feedback might be to involve the patient and the caregiver (i.e., the person who accompanies the patient to the appointment) because they are typically the only people who directly observe the performance of the medical trainee during a comprehensive geriatric clinical assessment. As such, the patient/caregiver pair may be an invaluable resource for providing information on both the completeness of the assessment and the overall performance of the medical trainee.
Research regarding patient-provided feedback is scarce; we found only one systematic review. It identifies six patient-feedback instruments but concludes that the instruments lack a clear purpose when used to evaluate the consultation skills of an individual physician.9 Much of the research conducted to determine the educational potential of patient-provided feedback relates to the interpersonal skills and communication abilities of the doctor or to the patients’ perceptions of the technical care they receive.8–10 Further, to our knowledge, no patient-feedback instruments have been developed that examine a physician’s content-related skills specific to geriatrics. We did, however, find one study showing that elderly patients’ perceptions of their physicians’ medical knowledge and specific skills did not align with objective indicators (from case reports) of the physicians’ technical competence.10 Although the results of that study suggest that older adults’ perceptions are not useful in assessing the technical quality of care, other results suggest that patients can provide feedback on technical skills so long as they are asked specific questions about what actually occurred rather than for their perceptions of the interaction.11 If patients could accurately provide content-related feedback, medical trainees might be able to further develop their clinical skills despite the lack of supervision by physician educators.
The patients attending a geriatric clinic present some unique challenges with respect to giving feedback.12 This population, possibly as a result of their upbringing and socialization13,14 or because of a perceived power imbalance,15,16 may be less likely to give a critical evaluation of a medical trainee’s performance. Furthermore, the prevalence of cognitive impairment or other physical/sensory deficits can be significant and may affect the patient’s ability to recall accurately what the trainee (or any provider) asked during the history taking. This reality makes taking a collateral history from caregivers imperative during a comprehensive geriatric assessment. Despite these issues, patients who are older adults are often a willing source of feedback for medical trainees and are likely an underused source of teaching.17
The purpose of this study was to develop and evaluate a tool to be completed by patients/caregivers (i.e., the Comprehensive Geriatric Assessment Guide [CGAG]) that medical educators could potentially use to provide feedback to trainees about the content and completeness of their comprehensive geriatric assessment skills. As an initial step in this prospective tool’s development, we set out to determine the ability of patient/caregiver pairs attending a geriatric clinic to complete the CGAG accurately enough for it to be useful for providing feedback to medical trainees regarding their clinical performance and to help trainees improve their performance in the future.
Development and piloting of the CGAG
The concept for the CGAG originated from Dalhousie University and was based on the format of the Structured Communication Adolescent Guide.18 In early 2009, three geriatric content experts (including J.G. and L.D.) in the divisions of geriatric medicine at Dalhousie University and the University of Western Ontario (UWO) collaborated to achieve consensus on the components of a comprehensive geriatric clinical assessment that novice trainees should be able to master. Then, later in 2009, we piloted the CGAG both with standardized patient/caregiver pairs and with inpatients on a geriatric rehabilitation unit as part of a second-year clinical methods course at UWO. We made further revisions to the CGAG in an effort to address feasibility issues. The CGAG was understandable to standardized patient/caregiver pairs during this initial piloting. Faculty provided the patient feedback generated by this pilot testing to the trainees involved who, according to anecdotal evidence, generally received the feedback well, viewing it as constructive. This finding parallels the results of a recent study of general practitioners’ experience of patient evaluations conducted by Heje, Vedsted, and Olesen.19
The revised CGAG consists of 36 yes/no questions about the medical trainee’s completion or neglect of specific components of the comprehensive geriatric assessment, including whether the trainee asked for specific details about medications, inquired about activities of daily living, reviewed systems (e.g., bowel and bladder function, vision and hearing), queried social well-being, and performed a physical examination. After the pilot stage, we added a “don’t remember” option to deter patients and their caregivers from guessing if they could not remember whether or not the trainee covered a particular component during the assessment (Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A108).
We recruited patients from four outpatient geriatric clinics within an urban academic hospital organization located in Ontario, Canada. The sampled clinics included both general geriatric clinics and specialty clinics (e.g., the memory, movement, and day hospital clinics). During clinics, geriatricians would conduct geriatric assessments and, at the end, provide focused recommendations to patients. Although specific reasons for referral to the clinics varied, all patients received a similar geriatric assessment that captured the holistic health of the individual.
To be eligible to participate, patients needed to be at least 65 years old, and they or their caregiver had to be able to speak, read, or understand English well enough to complete the CGAG. We excluded participants if they had any hearing or cognitive impairment that precluded them from participating in conversations, or if they were unwilling to be audio-recorded. Patients attending the clinic either with or without a caregiver were eligible to participate.
A medical secretary e-mailed letters of information and consent, along with orientation material, to all potential trainees (i.e., clinical students in their third or fourth year of medical school and residents in their first or second year of postgraduate training) at least one week before the rotation orientation session. During orientation for the geriatric rotation, a research assistant (K.B.K.) invited all residents and medical students completing the rotation to participate in the study. During this orientation, we fully informed trainees of the purpose and protocols of the study, and we answered any questions they had about the study or their participation. We also informed trainees that we would use the results of the CGAG only to assess the ability of patients and their caregivers to accurately recall their medical interview, that the trainees would not receive feedback, and that the results would have no impact on their evaluations. We told trainees that we would not collect any of their personal data, and we obtained written consent from those trainees who agreed to participate. We did not offer the trainees any incentive to participate.
A medical secretary at each clinic mailed, as part of the routine patient orientation package, a letter of information and a consent form to patients and caregivers who were scheduled to see a consenting trainee. A research assistant (K.B.K.) approached patients and their caregivers immediately before their scheduled appointment. At this time, the research assistant described the study, its protocols (including participation and the rights to withdraw from the study), the benefits and risks of participation, and the confidentiality of records. The research assistant told patients that the purpose of the study (per the letter of information and consent) was to determine “how well you remember what was discussed during the assessment, and your ability to complete the survey tool.” The assistant answered any questions that either the patient or the caregiver had about the study and, if both the patient and caregiver agreed, obtained written consent from all of the individuals attending the appointment.
We told all participants (students, residents, patients, and caregivers) that their records and any associated research materials, including the paper-based CGAG and the electronic audio recording, would be kept confidential, would be reviewed by only the authors of this study, and would be stored in a locked cabinet or on a password-protected computer in a locked office. Because we recorded no personal identifying data, we told participants that it would be impossible to remove their particular record should they wish to withdraw from the study at a later date.
The study consisted of two phases: Phase 1 focused on determining the reliability of the CGAG, and Phase 2 focused on the evaluation of the patient/caregiver pairs’ ability to complete the CGAG accurately. For each phase of the study, we placed a digital audio recorder in the clinical assessment room of consenting participants in order to record the assessment. We also recorded the separate interviews of patients and caregivers if they were assessed independently. We conducted the study from January 2010 to January 2011, inclusive. The health science research ethics board of UWO gave ethical approval to this study.
Phase 1—Interrater reliability of CGAG. In Phase 1 (January to April 2010), two of us (K.B.K. and K.T.H.) independently listened to the audio recording of 10 comprehensive geriatric assessments. Each of us independently completed the CGAG immediately after listening to the clinical assessment. We then compared our two CGAGs for each assessment, and we addressed any discrepancies that arose through discussion with a third researcher (L.D.). We amended the CGAG form, as appropriate, usually through the provision of examples (e.g., we added “family, friends, caregivers” to clarify “supports”). The patient/caregiver pairs did not complete a CGAG in this phase.
Phase 2—Ability of patient/caregiver pairs to accurately complete the CGAG. In Phase 2 (April 2010 to January 2011), patient/caregiver pairs completed the newly revised (post-Phase 1) CGAG in a quiet room at the conclusion of the patient’s clinical assessment. We did not record, in the cases of patient and caregiver pairs who completed the CGAG together, who actually filled out the CGAG (i.e., whether it was the caregiver alone, the patient alone, or the patient and caregiver together). Although a researcher (K.B.K.) returned periodically to clarify the meaning of survey questions for patients and/or caregivers, we did not give any direct help in answering CGAG questions. After the first seven patient/caregiver pairs completed CGAGs, we realized that the question “The trainee didn’t speak to my caregiver as if I was not present” was possibly confusing for patients/caregivers. Subsequently, we provided patient/caregiver pairs with a brief explanation as to the meaning of this CGAG question.
A researcher (K.B.K.) listened to the recording(s) of all the assessments completed in Phase 2 and completed a CGAG for each based on that recording. Because the researcher was able to pause and relisten to the assessment as required, we considered the CGAGs completed by this researcher to be the “gold standard” for what the trainee actually addressed during the clinical assessment. Next, we compared the CGAG completed by the patient/caregiver for each assessment with this gold standard CGAG for the same assessment.
We used the percent agreement between the two raters (Phase 1), and between the patient/caregiver pair and the gold standard (Phase 2), as our measures of reliability. We used IBM SPSS Statistics (version 19, IBM Corporation, Armonk, New York) for our statistical analyses.
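As an illustration (not part of the original study materials), the per-question percent agreement used here can be sketched in a few lines of Python. The answer data below are hypothetical; each question receives one of three responses (“yes,” “no,” or “don’t remember”), agreement is the share of assessments on which two raters gave the same response, and the results are summarized as a mean and standard deviation across questions, matching the form in which we report them.

```python
# Minimal sketch of per-question percent agreement between two raters,
# summarized as mean +/- SD across questions. All data are hypothetical.
from statistics import mean, stdev

def percent_agreement(rater_a, rater_b):
    """Share (in %) of assessments on which both raters gave the same answer."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical answers for 3 questions across 4 assessments
q_answers_a = [["yes", "yes", "no", "yes"],
               ["no", "no", "no", "dont_remember"],
               ["yes", "no", "yes", "yes"]]
q_answers_b = [["yes", "yes", "no", "no"],
               ["no", "no", "no", "dont_remember"],
               ["yes", "no", "yes", "yes"]]

per_question = [percent_agreement(a, b)
                for a, b in zip(q_answers_a, q_answers_b)]
print(per_question)                        # per-question agreement
print(mean(per_question), stdev(per_question))  # summary across questions
```

The same calculation applies in Phase 2, with the gold standard rating taking the place of the second rater.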
Across Phases 1 and 2, 44 patient/caregiver pairs consented and participated in this study, resulting in 44 recorded clinical geriatric assessments. We invited 64 medical trainees to participate; 54 (84%) consented, and 31 (48%) actually participated after we gained their patient/caregiver consents. Because the focus of this investigation was the ability of patients/caregivers to use the CGAG, we did not collect and do not report any data (demographic, satisfaction, or clinical skill) on the medical trainees who participated.
Two recordings (of the 44) were inaudible because of recording error, leaving 42 assessments (95%) available for analysis (10 from Phase 1 and 32 from Phase 2). Eight medical trainees were audio-recorded more than once. Discomfort in being audio-recorded was the most prevalent reason medical trainees gave for declining to participate. Patients and their caregivers often cited the personal nature of the subject matter of the clinical assessment as a reason they declined to participate. Recordings of the patient assessment ranged from 1.5 to 3 hours in length, which is typical for geriatric assessments conducted in an academic training center.
Phase 1: Interrater reliability of CGAG
For Phase 1, seven medical trainees conducted 10 comprehensive geriatric assessments. Of these, only one patient attended the clinic without a caregiver (Table 1). Raw percent agreements between raters for the CGAG in Phase 1 were excellent (mean agreement across questions: 90.4% ± 14.3%). Only one question consistently resulted in different answers (i.e., Yes, No, or Don’t Remember) between the two raters: “The trainee asked about my financial situation” (percent agreement = 33.3%). We clarified this question by adding specific examples to the CGAG form (e.g., “sources of income,” “money worries”) before we used it in Phase 2.
Phase 2: Ability of patient/caregiver pairs to accurately complete the CGAG
Twenty-four medical trainees participated in Phase 2. In total, 32 patients and their caregivers completed CGAGs after the patient’s trainee-administered clinical assessment; 28 of the patients attended the assessment with a caregiver. The 4 patients attending the clinic alone were, on average, 82.5 (standard deviation = 6.1) years old (slightly older than the overall Phase 2 average age of 81.0 ± 8.1 years). See Table 1 for other patient demographic information. Overall, the patient/caregiver responder(s) used the “don’t remember” option only rarely (10 participants, 8 of whom attended with a caregiver), and those patients who did use it selected it for only 1 or 2 of the 36 questions asked. Although we did not collect the exact amount of time the patients/caregivers took to complete the CGAG, they had all completed the tool within 20 minutes, and two pairs completed it in approximately 2 minutes.
Percent agreements between the ratings of the patient/caregiver pairs and the gold standard ranged from 46.9% to 100%, with an average percent agreement of 88.5% ± 10.7% (Table 2). Thirty of 36 (83%) CGAG questions showed over 80% agreement between the patient/caregiver pairs and the gold standard. The lowest percent agreement between the patient/caregiver pairs and the gold standard was for the question relating to medication side effects. Patients attending the clinic without caregivers had an average percent agreement with the gold standard of 90.0%, whereas the 28 patients with caregivers had an average percent agreement with the gold standard of 88.8%.
To enhance the training of specific geriatric skills in medical school and the early years of residency training, we developed a patient response guide to provide patients and their caregivers with a method for providing feedback on the clinical assessment content and skills of medical trainees. Given the CGAG’s high interrater agreement (mean = 90.4%) and over 80% agreement between the patient/caregiver pairs and the gold standard on all but six questions, both professional and lay audiences seem to understand the CGAG. In other words, our study has introduced a useful guide through which patients may eventually enhance medical trainee performance by providing accurate feedback on the completeness of the trainees’ clinical assessments of older patients.
The results of our exploratory study support the assertion that patients and/or caregivers can provide accurate feedback when they are asked to reflect specifically on what actually occurred during their assessment.11 Patients and staff may have different views on service quality,18,20 and they may use a different set of criteria by which to judge satisfaction with a consultation.21 We developed the CGAG to include very few questions regarding perceptions of technical care quality, so that it would focus on the objective content of the clinical assessment. Even though the majority of the CGAG questions relate to objective indicators of assessment content, some may reflect the personal perceptions of patients and/or caregivers (e.g., “The trainee ensured I could hear them, without shouting at me”; “The trainee didn’t speak to my caregiver as if I was not present”; and “The trainee adapted the physical examination to accommodate my physical limitations”). Including these questions in the CGAG, despite their potentially subjective nature, was important because negative responses might suggest potential problems around rapport building or the communication ability of the medical trainee.
The finding that 6 of the 36 questions had low agreement, however, may indicate a lack of question clarity. For instance, the questions “The trainee asked me about any side effects” and “The trainee asked me about any immunizations I have had” both garnered low agreement when compared with the gold standard, suggesting that patients might have difficulty understanding what the question is asking. As well, patients and/or caregivers often answered the question “The trainee asked me about my financial situation” similarly to a question about the patient’s ability to manage money. This similarity suggests that the difference between the two questions may need to be more explicit. Further, the patient/caregiver pair may not have been certain when the medical trainee was testing the patient’s mood during the assessment, and this lack of clarity may have caused difficulties with the question “The trainee tested my mood.” The compound item “The trainee asked me about my ability to bathe, groom, and dress myself” was understandably difficult to answer because it asks more than one question. Differences between patient/caregiver answers and the gold standard on the question “The trainee watched me walk” likely stem from the fact that watching a patient walk is a task that does not lend itself to being determined from an audio recording. Unless the medical trainee explicitly states, “I want to see you walk now,” the gold standard rater has no way of knowing if the medical trainee attempted to assess the patient’s ability to walk.
Going forward, we will reword and/or add examples to the questions that garnered low agreement to improve clarity on future versions of the CGAG. Further, we will separate the items with compound questions into individual questions.
Although the results of our pilot study are positive, we must also acknowledge certain limitations. Our results are based on a limited number of participants recruited from outpatient clinics at an academic tertiary care institution. A larger-scale study involving a greater number of patients in more diverse settings would be helpful in determining the suitability and usability of the CGAG across other teaching environments such as emergency rooms, primary care clinics, and inpatient teaching units. Limited variability in CGAG agreement between the patient/caregiver pairs and the gold standard precluded the calculation of a kappa statistic, limiting interpretation of results to percent agreement. Further, our number of participants limited our ability to discern whether patient age affected the reliability of the CGAG; however, the assistance or contribution of caregivers would potentially mitigate any differences of patient age. In the two instances where the CGAG was completed within approximately two minutes, we presumed that the caregivers completed the CGAG alone without the input of the patient. Future users of the CGAG must guard against this occurrence because patients have their own perspectives of the clinical assessment and might have had important insights or opinions that differed from those of the caregiver. Instructions for future CGAGs might include directions stressing that the patient should be the one to answer the questions and that the caregiver should supply answers only if the patient does not remember or is unsure. Finally, researchers’ explanations of the item “The trainee didn’t speak to my caregiver as if I was not present” may have biased the answers of the patient/caregiver pairs. We will modify future versions of CGAGs to improve the clarity of this question.
The ability of patient/caregiver pairs to accurately complete the CGAG suggests that this form could be useful for providing medical trainees with accurate feedback about the content and completeness of their geriatric clinical assessments. Certain questions, however, will require further revision to ensure that patient/caregiver pairs have a clear understanding of what they are being asked. Patients who are older adults, although underused as a teaching resource, are often eager to contribute to the teaching of medical trainees.12 Implementing the guide in the clinical years of undergraduate and in postgraduate medical training may help to better prepare future physicians to care for older adults. The patient-completed form may provide trainees or their clinical educators with an opportunity to identify critical parts of the trainees’ geriatric clinical assessment that may require more focus, thereby enhancing the trainees’ clinical development. Furthermore, the implementation of CGAGs may provide the opportunity to track where some geriatric core competencies are addressed in the curriculum. We hope to incorporate the guide into the medical training at our institutions. The goal is to require each student to have at least two CGAGs completed during their clerkship year in any rotation during which they might expect to see older adults (i.e., nonpediatric rotations) and, ultimately, to improve the care of these patients.
Acknowledgments: The authors wish to thank the staff at study clinics for allowing the study to be completed alongside clinical work.
Funding/Support: This study was funded by two Faculty Support for Research in Education (FSRE) grants (R3106A04 and R3106A05) from the Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada.
Other disclosures: None.
Ethical approval: The university health sciences research ethics board and the clinical research impact committee of the hospital approved the study.
Disclaimer: The funding source supporting this research had no effect on the research design and method, the analysis and interpretation of results, or the acceptance or editing of the report.
Previous presentations: Diachun L, Klages KB, Hansen KT, Blake K, Gordon J. Comprehensive Geriatric Assessment Guide (CGAG): Elders can give content feedback to medical students. Poster presented at: Canadian Geriatrics Society 31st Annual Scientific Meeting; April 16, 2011; Vancouver, British Columbia, Canada.
Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A108.