BACKGROUND AND PURPOSE
Patient education is defined as “a planned learning experience using a combination of methods such as teaching, counselling and behaviour modification techniques which influence patients' knowledge and health behaviour”.1 Patient education provides a means for health professionals to communicate important information2 and to enhance patients' self-efficacy and self-management.3,4 It is considered an integral component of effective patient care,5 and specific approaches have demonstrated positive outcomes in physical therapy settings in relation to pain, disability, and function.6,7
Despite the important role of patient education in physical therapy, there are concerns relating to how patient education is practiced. For example, novice therapists reportedly place less importance on patient education than on other clinical skills8 and use less patient-centered approaches to education than their experienced colleagues.9,10 Similarly, compared to experienced physical therapists, student therapists place less importance on the use of educational activities that are considered patient-centered and report less ability to adjust their educational skills to the needs of the individual.11 The need to address these issues and prepare students as patient-centered therapists is consistently highlighted within the literature.10-14
Physical therapy programs have a core goal of ensuring that graduates meet the competencies required for professional practice, including patient-centered skills such as patient education.14 However, the evaluation of these competencies is a challenge, especially where reliable measures for specific competencies are not available.15 The challenge of assessing physical therapy students' patient education performance is confounded further by limited consensus on patient education competencies and a lack of appropriate measures to assess the performance of such competencies. To enhance the training of physical therapy students and to focus training toward the needs of the profession, further knowledge of professional competencies and reliable assessment tools are required. National practice standards within the United States,16 the United Kingdom,17 and Australia and New Zealand18 include patient education as a broad competency for preprofessional and professional programs and graduates. Despite the inclusion of patient education practice in these standards, specific, empirically derived competencies required for professional practice are beyond the scope of such broad guidelines. Patient education competencies for physical therapy have recently been determined19 and have subsequently been applied in educational settings and physical therapy training research.14,20,21 These competencies (Appendix 1, Supplemental Digital Content 1, http://links.lww.com/JOPTE/A36) offer an initial framework for training and assessment in this important area of practice.
For the assessment of patient education competency and performance, it is imperative to use a reliable performance tool. Clinical performance measures of patient interview, assessment, and management skills are widely represented throughout the health care literature, but there remain no studies or tools for measuring physical therapist performance of patient education skills.14 The purpose of this study was to develop a performance tool for assessing the patient education skills of physical therapy students and to examine the reliability of this measure. Such a tool may provide the means to gather useful data about patient education performance within physical therapy educational and professional settings and may ultimately improve the quality of patient education in practice.
Development of the Measure
An initial search of the published literature was performed. The goal was to locate existing published tools or publications to serve as models for measuring patient education skill performance in physical therapy. This review found no formal measure to assess the patient education skills of physical therapists or physical therapy students. Measures of patient education performance from other health professions were considered. For example, the CELI instrument23 is designed to measure communication and patient-centered behavior competencies performed by medical professionals across a medical consultation (including, but not focused on, patient education components). As expected, many of the items are specific to the medical profession and related medical settings and include a range of communication behaviors beyond patient education. It was therefore deemed necessary to develop a measure that a) is specific to physical therapy practices and settings, b) focuses on patient education competencies specific to physical therapy, and c) can be applied to an encounter where patient education is provided. A structured, step-wise process was therefore undertaken to develop an initial physical therapy performance tool for testing.
Empirically derived patient education competencies for physical therapists reported in a previous Delphi study19 were used to develop the initial performance tool. Delphi methods are useful in synthesizing information about a specific issue and have been used widely and successfully to identify and clarify roles and practice competencies in health care and education settings.20 This previous study generated 22 patient education competencies that were empirically recognized as competencies that all physical therapists should possess.19 These competency items include tasks related to the assessment of the patient, such as “seek patient perceptions and concerns using appropriate questioning,” direct educational activities such as “effectively explain the patient's condition,” and the evaluation of education, including “identify when educational needs have been met.” The spread of competencies across the consultation is consistent with the view of physical therapy as an educational endeavor consisting of teaching throughout and across the continuum of care.56 These competencies are outlined further in Appendix 1 (Supplemental Digital Content 1, http://links.lww.com/JOPTE/A36).
To establish internal validity of the tool, the appropriateness of each competency for patient education was considered through a review by a group of experts in physical therapy practice and education. This review aimed to access the “accumulated knowledge and expertise of others who have worked in the field”22 and to ensure that measurement items were appropriate for application to physical therapy practice. The panel consisted of 4 physical therapy academic staff, 2 physical therapy clinical educators, and 2 experienced physical therapists. All members had over 7 years of clinical experience (mean 11.2 years) and over 4 years of experience in clinical education (mean 6.2 years). All experts were targeted purposively and invited via e-mail to participate. Each panel member was interviewed individually by the lead investigator (R.F.) across 2 face-to-face meetings. Panel members reviewed each competency item from Forbes et al (2017) as a potential patient education performance item for physical therapy students. Panel members were asked about individual item clarity, representativeness, and relevance to the construct of patient education performance with application to a student population. Each member was asked to rate each item as “relevant,” “somewhat relevant,” “slightly relevant,” or “not relevant at all,” as is recommended to assess variability of rating among reviewers.24 Structured questions were also used to ask members about essential items that they felt were not included within the tool. Each member was then asked to rate several scoring options for each performance item, including a dichotomous yes/no response and 3-point, 4-point, and 5-point Likert scales. After expert consultation, all items were considered “relevant” by the panel members, and 6 of the 8 experts agreed on the 4-point Likert scale for scoring each performance item.
Further, a “not assessable” item was agreed upon to allow the examiner to select this response if the student has not had an opportunity to demonstrate this item (Table 1). There was also agreement that should an item be scored as “not assessable,” this item would be removed from the overall total, thereby reducing the potential maximum total score.
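This rescaling can be sketched in a few lines. The sketch below is illustrative only, not the published scoring procedure: it assumes each 4-point item is scored 0–3 (the actual anchors appear in Table 1) and represents a “not assessable” rating as `None`:

```python
def ptpe_score(item_scores, max_per_item=3):
    """Sum the assessable PTPE item ratings and the reduced maximum.

    Items rated "not assessable" (None here) are dropped from both the
    total and the maximum achievable score, as the panel agreed.
    """
    assessed = [s for s in item_scores if s is not None]
    total = sum(assessed)
    max_total = max_per_item * len(assessed)
    return total, max_total

# 11 hypothetical item ratings; the eighth item was "not assessable"
ratings = [3, 2, 2, 1, 2, 3, 1, None, 2, 0, 2]
print(ptpe_score(ratings))  # (18, 30): maximum reduced from 33 to 30
```

Reporting the total alongside the reduced maximum keeps scores comparable across encounters in which different numbers of items could be observed.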
Several individual items were refined from the original competency list into a more succinct version for the physical therapy patient education (PTPE) tool. For example, the 2 original competencies from Forbes et al (2017) of “select and use a range of appropriate learning content tailored to the patient” and “provide content that is in the best interests of the patient” were combined by the expert panel to form the performance item “selects and uses appropriate learning content tailored to the best interests of the patient.”
The final revision by panel members focused on ensuring that items were only those that could be demonstrated explicitly within a typical single patient consultation. These adjustments resulted in a final tool containing 11 items. The expert panel was then followed up to seek consensus on the organizational flow of the final tool (Table 1). The panel was also given an opportunity to provide feedback to improve the measure.24-26 The focus of the final instrument (Table 1) related to students' ability to provide patient education rather than the specific educational content (ie, how education was provided to the patient rather than what education was provided), although several items related specifically to patient education content (Table 1).
Physical therapy students from The University of Queensland (Australia) who were undertaking their final year of the program were recruited for participation as part of a wider randomized controlled trial.14 By this stage in the program, students had not yet participated in formal clinical experiences but had undertaken activities that utilized simulated patients (actors trained to portray patients in simulated clinical settings) and role play. A total of 164 students performed a 10-minute objective structured clinical examination (OSCE) during which they were required to perform patient education with a simulated patient (trained actor) who was experiencing pain and disability after a whiplash injury.14 Most students were female (57.1%) and spoke English as their first language (89.9%). All students undertook the OSCE and were recorded by a fixed video camera. To investigate interrater reliability, 45 OSCE performances were randomly selected. To investigate test–retest reliability, a further 45 performances were randomly selected from the total video performances; the same performances were used at both testing times. To investigate internal consistency, all 164 OSCE performances were used. Three academic physical therapy faculty members, different from those participating in the development and validation of the tool, were selected to assess student performance and thus interrater reliability. The video-recorded student–patient interaction performances were scored by each assessor using the PTPE tool. Two to 3 weeks (mean 16 days) after the initial scoring, each assessor reviewed and rescored the video performances in random order. Results of the 2 assessments were statistically compared to estimate reliability over time (test–retest reliability). The study was approved by the institutional research ethics committee, and participants provided informed consent before participation.
The intraclass correlation coefficient (ICC) for interrater and test–retest reliability was calculated for each individual item as well as for the total PTPE scores.30 This was performed using a single-measure, 2-way mixed model, which is defined as the proportion of the total variance due to the between-subject variance.31 The ICC is a relative measure of test–retest reliability, describing the closeness of test scores from the same individual in 2 or more sessions within a short period, and thus their consistency. The ICC was deemed appropriate because it measures how much of the total variance of scores can be attributed to differences between subjects32 and is suitable when replicate measures have no time sequence. In addition, the ICC has been widely used as a measure of reliability for Likert scales.23,49,50 Because there is no consensus regarding reliability criteria, results were classified as poor (ICC < 0.00), slight (ICC 0.00–0.20), fair (ICC 0.21–0.40), moderate (ICC 0.41–0.60), substantial (ICC 0.61–0.80), and almost perfect (ICC > 0.80).33 According to Bonett,27 a sample of 15 is sufficient for a reliability study with an estimated ICC of 0.9. Test–retest reliability and interrater reliability of the individual items and total measure scores were assessed using a 2-way random ICC with a 95% confidence interval34 in SPSS (SPSS, Inc, Chicago, IL). Internal consistency was calculated using Cronbach's α, with >0.8 considered an acceptable level.28,29
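As a concrete illustration of these statistics, the sketch below implements a single-measure ICC for a 2-way random effects model (ICC(2,1) in Shrout and Fleiss' notation34), Cronbach's α, and the qualitative bands above from scratch with numpy; it is a simplified stand-in for the SPSS procedures the study used, not a reproduction of them:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def icc_2_1(ratings):
    """Single-measure ICC, 2-way random effects, absolute agreement
    (ICC(2,1)). `ratings` is an (n_subjects, k_raters) matrix."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean squares
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def landis_koch(icc):
    """Qualitative reliability band (Landis & Koch, 1977)."""
    if icc < 0.0:
        return "poor"
    for cutoff, label in [(0.20, "slight"), (0.40, "fair"),
                          (0.60, "moderate"), (0.80, "substantial")]:
        if icc <= cutoff:
            return label
    return "almost perfect"

# Two raters in perfect agreement on four performances give an ICC of 1.0
perfect = [[3, 3], [1, 1], [2, 2], [0, 0]]
print(icc_2_1(perfect))   # 1.0
print(landis_koch(0.76))  # substantial
```

Because the single-measure ICC(2,1) penalizes systematic differences between raters (absolute agreement), it is a conservative choice for interrater comparisons of the kind reported here.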
Internal consistency was at an acceptable level (Cronbach's α > 0.811) for all assessors and across both testing occasions.29 The mean performance scores (and standard deviations) for individual items ranged from 1.10 (0.89) (item 10) to 2.89 (0.42) (item 1) at initial test and from 1.23 (0.79) (item 10) to 2.96 (0.39) (item 1) at retest. The test–retest ICC for the total PTPE score across all 10 items was 0.76 (Table 2). All individual items had a test–retest ICC of >0.61, indicating a substantial level of reliability.33 In the measurement of interrater reliability, agreement between the assessors ranged from 0.57 to 0.80 using a 2-way random effects model. Item 5, “Uses effective and engaging communication styles, language and/or materials that are tailored to the patient,” achieved only a moderate level of interrater reliability at initial test (ICC = 0.57) and retest (ICC = 0.59). Item 8, “provides family or caregivers with information,” was scored as “not assessable” by all raters because no family or caregivers were present within the patient consultations. Item 8 was therefore not included within the analyses, and the reliability of this item could not be determined.
DISCUSSION AND CONCLUSION
The PTPE performance tool was designed to assess physical therapy students' performance of patient education, which was identified as a gap in current educational and clinical assessment and research. The final PTPE tool reflected empirically derived observable patient education competencies that could be explicitly assessed within physical therapy settings.
The final PTPE tool is consistent with wider practice competencies, including communication and patient-centered care. The final items included tailoring educational content, language, and materials, and seeking the patient's perceptions and concerns.51 These competencies were not unexpected considering the support for addressing patient concerns within clinical settings52 and considering that tailoring of communication and content has a tangible impact on patient satisfaction and health outcomes.53 Communication as a wider competency also underpins the final PTPE items. Items specifically relating to communication factors include the use of questioning, effectively explaining the patient's condition, and effectively summarizing information. This supports the view that patients who receive accurate and easily digestible information about their condition are better able to understand and follow health care instructions.54 This finding was not surprising given that communication is considered the cornerstone of effective patient education,51 and when used effectively, it has a positive impact on important outcomes, including patient adherence, satisfaction, and effective self-management.55
There are several similarities between the PTPE instrument and those used to assess communication and patient education skills of physicians during medical consultations. For example, the CELI instrument23 includes patient education competencies, particularly in the domains of explaining information (eg, providing concise summaries and checking comprehension) and influencing (offering educational material and reinforcing problem-solving behaviors). The PTPE is unique in that it is the first tool that relates specifically to patient education competencies and is relevant to physical therapy settings.
Our results offer support for the PTPE tool as an assessment of patient education performance within physical therapy training settings through establishing reliability. Reliability is a necessary but not sufficient condition for validity.35,36 If a tool is not reliable within the intended population, then it cannot be considered valid as a performance measure. Furthermore, reliable assessment measures of student competencies are needed during preclinical and clinical experiences37 as well as at entry to the profession.38,39 Internal consistency describes the extent to which all the items in a test measure the same concept or construct; hence, it reflects the interrelatedness of the items within the test. The acceptable internal consistency of the PTPE suggests that the individual items correlate acceptably in measuring the construct of patient education performance. As a reliability estimate, the acceptable Cronbach's α also indicates a low level of measurement error associated with the tool.40 Test–retest reliability provides an indication of longitudinal stability, as stability is reduced by both test–retest unreliability and score change over time.41,42 Test–retest reliability was at a substantial level for all individual items of the measure and for the measure overall. This supports the concept that the PTPE maintains measurement stability over time.
The results of the interrater reliability analyses indicate that there is substantial reliability for 9 of the 10 individual items and for the overall PTPE measure. The high level of agreement for item 1, “Seeks patient perceptions and/or concerns using appropriate questioning,” is indicative of the explicit nature of assessing whether the student asks the patient questions about perceptions and concerns. This is an important competency for patient education, as seeking patients' perceptions and concerns is critical to understanding their learning needs.43-45 One item of the measure (item 5) had only a moderate level of interrater reliability across both testing occasions. This finding is not unexpected considering the subjective nature of measuring communication skills and the variation in components of effective communication,46 making this item more challenging for assessors to score. Another possible explanation for the lower reliability scores for this item is the high variation in communication skills demonstrated within and across the observed patient interactions. It must also be recognized that interrater reliability may be enhanced when raters establish a consensus guideline,47 which was not included within this study. The authors recommend that future users of the PTPE incorporate a consensus guideline among assessors to enhance objectivity and thus reliability.
Although the PTPE tool may be used within teaching and clinical education settings, physical therapy education providers should carefully consider how to use the tool in student training and performance assessment. The PTPE tool may potentially be used early within physical therapy programs as a framework for patient education practice and performance, especially considering its development from an empirically established competency consensus.19 Student and faculty awareness of these competencies may also provide a framework to provide experiential opportunities for patient education practice and stimulate growth in this area.
There are several limitations that must be considered within the current study. The development and testing of the PTPE tool took place at a single university with only final year students. Additional testing addressing reliability of the tool in earlier stages of training and in various locations, as well as within professional clinical settings, is indicated. Assessors used the tool while playing the video performances, meaning they were also able to pause or replay the video. This may limit application of these findings to student performances that are assessed in real time. Furthermore, reliability of the eighth item of the performance measure (“Provides family or caregivers with information”) could not be assessed in the OSCE because no family member or caregiver was present. This may potentially affect the overall reliability and internal consistency of the measure. This study used a convenience sample of content experts and reviewers for panel consultation and reliability, meaning the participants were selected based on the authors' knowledge of each individual's experience, background, and expertise rather than through a more formal process. Future studies applying the PTPE to various physical therapy professional settings are warranted, especially given that patient education skills are necessary across multiple settings.19,48
This study presents the PTPE tool as the first known empirically derived tool designed to assess patient education performance in physical therapy. This performance tool provides reliable assessment of physical therapy students' patient education skills. The tool demonstrates interrater reliability and internal consistency, and the results support the usability of the tool. Additional testing of the PTPE tool across a wider spectrum of physical therapy students and clinicians is warranted.
1. Bartlett EE. At last, a definition. Patient Educ Couns. 1983;7:323–324.
2. Hoving C, Visser A, Mullen PD, van den Borne B. A history of patient education by health professionals in Europe and North America: From authority to shared decision making education. Patient Educ Couns. 2010;78:275–281.
3. Nunez M, Nunez E, Yoldi C, Quinto L, Hernandez MV, Munoz-Gomez J. Health-related quality of life in rheumatoid arthritis: Therapeutic education plus pharmacological treatment versus pharmacological treatment only. Rheum Int. 2006;26:752–757.
4. Ndosi M, Johnson D, Young T, et al. Effects of needs-based patient education on self-efficacy and health outcomes in people with rheumatoid arthritis: A multicentre, single blind, randomised controlled trial. Ann Rheum Dis. 2015;75:1126–1132.
5. Cooper K, Smith BH, Hancock E. Patient-centredness in physiotherapy from the perspective of the chronic low back pain patient. Physiotherapy
6. Albaladejo C, Kovacs FM, Royuela A, del Pino R, Zamora J. The efficacy of a short education program and a short physical therapy program for treating low back pain in primary care: A cluster randomized trial. Spine. 2010;35:483–496.
7. Louw A, Diener I, Butler DS, Puentedura EJ. The effect of neuroscience education on pain, disability, anxiety, and stress in chronic musculoskeletal pain. Arch Phys Med Rehabil. 2011;92:2041–2056.
8. Jensen GM, Shepard KF, Hack LM. The novice versus the experienced clinician: Insights into the work of the physical therapist. Phys Ther. 1990;70:314–323.
9. Gyllensten AL, Gard G, Salford E, Ekdahl C. Interaction between patient and physical therapist: A qualitative study reflecting the physical therapist's perspective. Phys Ther Res Int. 1999;4:89–109.
10. Forbes R, Mandrusiak A, Smith M, Russell T. A comparison of patient education practices of novice and experienced physiotherapists in Australia. Musculoskelet Sci Pract. 2017;28:46–53.
11. Holmes C. The attitudes and perspectives of physical therapist students regarding patient education. J Phys Ther Educ. 1999;13:8–13.
12. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff (Millwood). 2010;29:1310–1318.
13. Sanders T, Foster NE, Bishop A, Ong BN. Biopsychosocial care and the physical therapy encounter: Physical therapists' accounts of back pain consultations. BMC Musculoskelet Disord. 2013;14:65.
14. Forbes R, Mandrusiak A, Smith M, Russell T. Training physiotherapy students to educate patients: A randomised controlled trial. Patient Educ Couns. 2017;101:295–303.
15. Jette DU, Bertoni A, Coots R, Johnson H, Mclaughlin C, Weisbach C. Clinical instructors' perceptions of behaviors that comprise entry-level clinical performance in physical therapist students: A qualitative study. Phys Ther. 2007;87:833–843.
19. Forbes R, Mandrusiak A, Smith M, Russell T. Identification of competencies for patient education using a Delphi approach. Physiotherapy
20. Forbes R, Mandrusiak A, Smith M, Russell T. Patient education: The relationship between training experiences and self-efficacy in new-graduates. J Phys Ther Educ. In press.
21. Keeney S, Hasson F, McKenna H. The Delphi Technique in Nursing and Health Research. Chichester, United Kingdom: Wiley-Blackwell; 2011.
22. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. 4th ed. Oxford, United Kingdom: Oxford University Press; 2008.
23. Wouda J, Zandbelt LC, Smets EMA, van de Wiel HBM. Assessment of physician competency in patient education and validity of a model-based instrument. Patient Educ Couns. 2011;85:92–98.
24. Polit DA, Beck CT. Essentials of Nursing Research: Methods, Appraisal, and Utilization. Vol 1. London, United Kingdom: Lippincott Williams & Wilkins; 2006.
25. Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: Conducting a content validity study in social work research. Soc Work Res. 2003;27:94–104.
26. Schilling LS, Dixon JK, Knafl KA, Grey M, Ives B, Lynn MR. Determining content validity of a self-report instrument for adolescents using a heterogeneous expert panel. Nurs Res. 2007;56:361–366.
27. Bonett D. Sample size requirements for estimating intraclass correlations with desired precision. Stat Med. 2002;21:1331–1335.
28. Nunnally J, Bernstein I. Psychometric Theory. New York, NY: McGraw-Hill; 1994.
29. Bland J, Altman D. Statistics notes: Cronbach's alpha. BMJ. 1997;314:275.
30. Domholdt E. Physical Therapy Research: Principles and Application. 2nd ed. Philadelphia, PA: WB Saunders; 2000.
31. Streiner DL. Starting at the beginning: An introduction to coefficient alpha and internal consistency. J Pers Assess. 2003;80:99–103.
32. Bravo G, Potvin L. Estimating the reliability of continuous measures with Cronbach's alpha or the intraclass correlation coefficient: Toward the integration of two traditions. J Clin Epidemiol. 1991;44:381–390.
33. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
34. Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86:420–428.
35. DeVon HA, Block ME, Moyle-Wright P, et al. A psychometric toolbox for testing validity and reliability. J Nurs Scholarsh. 2007;39:155–164.
36. Clark LA, Watson D. Constructing validity: Basic issues in objective scale development. Psychol Assess. 1995;7:309–319.
37. Gorman SL, Lazaro R, Fairchild J, Kennedy B. Development and implementation of an objective structured clinical examination (OSCE) in neuromuscular physical therapy. J Phys Ther Educ. 2010;24:62–68.
38. Capwell EM. Health education graduate standards: Expansion of the framework. Health Educ Behav. 1997;24:137.
39. Gruppen LD, Mangrulkar RS, Kolars JC. The promise of competency-based education in the health professions for improving global health. Hum Resour Health. 2012;10:43–48.
40. Tavakol M, Dennick R. Making sense of Cronbach's alpha. Int J Med Educ. 2011;2:53–55.
41. McCrae RR, Kurtz JE, Yamagata S, Terracciano A. Internal consistency, retest reliability, and their implications for personality scale validity. Pers Soc Psychol Rev. 2011;15:28–50.
42. Carmines EG, Zeller RA. Reliability and Validity Assessment. London, United Kingdom: Sage Publications; 1979.
43. Saha S, Beach MC, Cooper LA. Patient centeredness, cultural competence and healthcare quality. J Natl Med Assoc. 2008;100:1275–1285.
44. Lamiani G, Furey A. Teaching nurses how to teach: An evaluation of a workshop on patient education. Patient Educ Couns. 2009;75:270–273.
45. Anderson RW, Funnell MM. Patient empowerment: Myths and misconceptions. Patient Educ Couns. 2010;79:277–282.
46. Ong LML, de Haes JCJM, Hoos AM, Lammes FB. Doctor-patient communication: A review of the literature. Soc Sci Med. 1995;40:903–918.
47. Azuma H, Hori S, Nakanishi M, Fujimoto S, Ichikawa N, Furukawa TA. An intervention to improve the interrater reliability of clinical EEG interpretations. Psychiatry Clin Neurosci. 2003;57:485–489.
48. Svavarsdóttir MH, Siguroardottir AK, Steinsbeck A. Knowledge and skills needed for patient education for individuals with coronary heart disease: The perspective of health professionals. Eur J Cardiovasc Nurs. 2016;15:55–63.
49. Norman G. Likert scales: Levels of measurement and the “laws” of statistics. Adv Health Sci Educ Theory Pract. 2010;15:625–632.
50. Beal DJ, Dawson JF. On the use of Likert-type scales in multilevel data: Influence on aggregate variables. Org Res Meth. 2007;10:657–672.
51. Saha S, Beach MC, Cooper LA. Patient centeredness, cultural competence and healthcare quality. J Natl Med Assoc. 2008;100:1275.
52. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff (Millwood). 2010;29:1310–1318.
53. Noar SM, Benac CN, Harris MS. Does tailoring matter? Meta-analytic review of tailored print health behavior change interventions. Psychol Bull. 2007;133:673–693.
54. Shoeb M, Merel SE, Jackson MB, Anawalt BD. “Can we just stop and talk?” patients value verbal communication about discharge care plans. J Hosp Med. 2012;7:504–507.
55. Chewning B, Bylund CL, Shah B, Arora NK, Gueguen JA, Makoul G. Patient preferences for shared decisions: A systematic review. Patient Educ Couns. 2012;86:9–18.
56. Sluijs EM. A checklist to assess patient education practice: Development and reliability. Phys Ther. 1991;71:561–569.