RESEARCH ARTICLE

The Effect of Knowledge Translation Procedures on Application of Information From a Continuing Education Conference

Schreiber, Joseph PT, PhD, PCS; Dole, Robin L. PT, DPT, EdD, PCS

Pediatric Physical Therapy 24(3):p 259-266, Fall 2012. | DOI: 10.1097/PEP.0b013e31825be0c9

INTRODUCTION

Recent evidence suggests that pediatric physical therapists (PTs) have a positive attitude about the concept of evidence-based practice (EBP) and the importance of research evidence for the profession.1 Evidence-based practice implies the need to continually seek and find the best research evidence to guide clinical decisions and analyze the strength and quality of that evidence. Pediatric PTs are then expected to integrate the evidence with clinical experience, patient and family values, and clinical circumstances into an optimal clinical decision for an individual patient.2,3

Despite the positive attitude, multiple barriers limit the implementation of EBP by pediatric PTs.4,5 Berwick6 broadly classified barriers to EBP into 3 categories: practitioner, organization, and research.6 At the practitioner level, many pediatric PTs lack the skill and self-efficacy for seeking and analyzing evidence and often report a lack of time for these activities as a primary barrier.1,4,5,7–9 This lack of skill with EBP is problematic for many individuals who have been in practice for more than 10 years.10 Organizations often do not require or expect an EBP approach or provide dedicated time for employees to carry out EBP activities. Pediatric PTs who work in isolated settings and those who work in interdisciplinary teams are likely to experience additional organizational challenges to EBP.1,8,11 Negative perceptions about research and its relevance to practice also often serve as barriers to EBP.1,4,7,8,11 The evidence suggests that it is challenging for pediatric PTs to efficiently and effectively gather and translate the knowledge generated from primary research into clinical practice.

Many PTs attend continuing education conferences (CECs) as a means to gather knowledge and maintain and improve professional skill.11–14 Physical therapists highly value continuing education and view it as a critical component of professional development.11–14 Some evidence suggests that PTs prefer to use information from CECs to guide clinical decisions rather than information from primary research.11,15 Many states require that PTs accumulate a certain number of continuing education units each year as a condition for license renewal. However, although some evidence supports the effectiveness of traditional CECs in improving knowledge of attendees, minimal evidence supports the effectiveness of CECs in changing practitioner behavior or improving patient outcomes.5,11,14,16–19

The evidence for EBP and CECs seems to suggest that PTs struggle with translating knowledge into practice.11,20 Knowledge translation (KT) has been defined as the exchange, synthesis, and ethically sound application of knowledge within a complex system of interactions among researchers and users.21,22 An emerging body of research evidence supports the use of multicomponent, multifaceted, and interactive activities to foster KT, thereby increasing the application of research and evidence-based knowledge into clinical practice.12,14,16–19,23–26 For example, a recent study by Cleland et al12 demonstrated improved patient outcomes following a CEC augmented with follow-up discussion groups and an educational outreach session.12 Brown et al26 demonstrated positive changes in practitioner behavior related to fall prevention after implementing multifaceted behavior change strategies.26 Examples of these behavior change strategies include the use of opinion leaders, outreach visits, training manuals and risk factor checklists (also available online), working groups, and newsletters.26 Brennan et al14 demonstrated improved patient outcomes for a group of PTs who participated in follow-up interactive workshops and practice sessions when compared with a group who attended only a 1-time 2-day CEC.14 Other KT interventions that have been shown to be effective at supporting behavior change include the use of printed materials, identification of needs and priorities, the use of small group discussions, and reminders.20

Minimal research, thus far, has been done on effective KT for pediatric PTs. The purpose of this 1-group pretest-posttest design research project was to describe the effect of a multifaceted continuing education program, including explicit KT instruction and support, on the ability of pediatric PTs to apply knowledge learned during a CEC on pediatric tests and measures. Unique aspects of the CE program included the following: grouping attendees based on practice setting for case-based discussion segments; integrating attendee cases into large group discussion segments; providing detailed printed support materials; devoting a 90-minute session to KT; and hosting a follow-up Wiki discussion board for interaction among attendees and course instructors related to application of course information.

An additional, unique aspect of the CE was the requirement that each participant create an individualized KT plan using the checklist in Table 1. This was based on the notion that successful KT requires additional reflection by the user on the various factors that may affect the process. The first step for attendees was the explicit identification of the practice behavior change, based on self-analysis and reflection. Subsequently, attendees were asked to consider the effect of the source, compatibility of the content with current practice, and presentation medium of the underlying knowledge on the desired practice change.2731 Individual user characteristics, including incentive and readiness to change, were also considered.6,32,33(p282) Aspects of the behavior change itself, such as relative advantage over current practice, complexity, and capacity to trial the change in a limited number of cases initially, were identified.33(p15) Finally, the potential effect of unique contextual factors and barriers relative to the desired practice behavior change were considered.6,33

TABLE 1: Knowledge Translation Checklist

Attendees were provided with brief definitions and examples for each aspect of the checklist and were then asked to develop a KT plan on the basis of their individual practice settings. Specifically, evidence suggests that a new innovation is more likely to be adopted if it can be trialed on a small scale first.33(p15) Therefore, participants were encouraged to reflect on their clinical practice, identify ways to trial the practice change (use of an appropriate test/measure), and evaluate the results before attempting a more comprehensive change in practice behaviors. Similarly, other aspects of the practice change process were reviewed, and then, during the 90-minute KT session, each participant was required to complete the checklist with reference to his or her own practice setting. Course instructors provided consultation during this segment.

METHODS

All PTs attending the Pennsylvania Physical Therapy Association Conference on Standardized Tests and Measures in May 2010 had the opportunity to participate in this KT project and research study. All attendees completed a survey on knowledge and frequency of use of pediatric tests and measures and signed informed consent to participate in the research project. This project was approved by the Institutional Review Board at Widener University.

The survey was reviewed for face validity by several experienced pediatric PTs prior to the start of the project. Information on measures of impairments of body structure and function, activity, and participation was presented at the conference, and the survey was designed to gather information about knowledge and frequency of use of measures in each area. Respondents rated a set of skills related to 4 elements: selecting, administering, interpreting, and sharing the results of standardized measures in pediatrics. For each skill within an element, respondents were asked to first rate their knowledge of that skill and then rate how frequently they performed it. The survey included 9 items related to selecting, 4 items related to administering, 6 items related to interpreting, and 5 items related to sharing. Table 2 presents the scales used to rate knowledge and frequency for each of the skills.

TABLE 2: Survey Rating Scale

At the conclusion of the 2-day conference, all attendees who agreed to participate in additional KT training remained for a 90-minute session. These individuals represented 38% of the attendees. All attendees had the opportunity to participate in the extra session, and most who declined expressed an interest in participating but were unable to do so because of prior outside commitments. During this session, instruction and support in developing a KT plan were provided by one of the course instructors, as described previously. Participants identified the desired practice behavior change and then used the grid in Table 1 to consider all aspects of the KT process.

During the 16-week period following the conference, these individuals participated in an online discussion group (a Wiki) that included posting of questions, comments, and interaction with other group members and the course instructors related to course content and implementation of the KT plan. At the end of the 16-week time frame, the individuals completed the survey again and also had the opportunity to provide additional narrative comments about the implementation of the KT plan. At this point, 8 of the 11 individuals responded to repeated requests to complete the follow-up. See Table 3 for a summary of the attrition that occurred during each phase of the project.

TABLE 3: Participant Attrition Rate

Data Analysis

Descriptive analysis of preconference surveys for all attendees provided information on baseline knowledge and frequency of use of standardized assessments for this group of pediatric PTs. In addition, data from the pre- and postsurveys for the individuals who participated in the KT session were organized by content area (selecting, administering, interpreting, and sharing information about tests and measures) and then analyzed with the Wilcoxon signed rank test to identify changes in self-reported knowledge and frequency of behaviors within each area. Change scores were also calculated for each individual for each section of the survey. A positive score indicated an increase in knowledge or frequency of behavior from baseline to follow-up. Narrative comments on the Wiki and in the follow-up surveys were reviewed and compared with individual change scores to provide additional insight into the quantitative results.
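
As an illustration only (not the authors' analysis code), the sketch below shows how pre- and postsurvey ordinal ratings for one survey area might be compared with a Wilcoxon signed rank test and how individual change scores could be computed. The ratings, variable names, and use of the scipy library are hypothetical.

```python
# Illustrative sketch only: hypothetical 1-5 ratings for 8 participants in one
# survey area (e.g., frequency of interpreting), not the study's actual data.
from scipy.stats import wilcoxon

pre_ratings = [3, 3, 2, 4, 3, 2, 3, 4]   # baseline self-ratings
post_ratings = [4, 3, 3, 4, 4, 3, 4, 5]  # 16-week follow-up self-ratings

# Paired, nonparametric test for a shift in the ordinal ratings
stat, p_value = wilcoxon(pre_ratings, post_ratings)
print(f"Wilcoxon statistic = {stat}, P = {p_value:.3f}")

# Individual change scores: positive values indicate an increase in
# self-reported knowledge or frequency from baseline to follow-up
change_scores = [post - pre for pre, post in zip(pre_ratings, post_ratings)]
print(change_scores)  # e.g., [1, 0, 1, 0, 1, 1, 1, 1]
```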

RESULTS

Baseline Survey—All-Course Participants

Before the start of the Conference on Standardized Tests and Measures, all participants were invited to take a baseline survey to assess their level of knowledge on various elements of standardized testing and their frequency of use of those elements. A total of 29 participants completed this survey. The mean and mode ratings and standard deviations for each element of the survey are presented in Table 4. With the exception of the element of interpreting tests, all respondents' mode ratings were high for frequency of doing a task whereas the knowledge to do the task was typically rated in the middle of the scale. These participants rated their frequency of selecting, administering, and sharing standardized tests and measures most often at the highest level on the scale, “very frequently—greater than 15 times in the last 3 months.” Despite having significant opportunity to use standardized tests in their practice, these respondents rated their frequency of interpreting those tests lower than the frequency of selecting, administering, and sharing results of tests and measures. Knowledge of these various elements of standardized testing was most often rated in the middle of the scale, “working knowledge—I have sufficient working knowledge to perform this skill.”

TABLE 4: Baseline Survey: All Conference Participants (n = 29)

Baseline Survey—KT Participants

A total of 11 participants attended the postconference KT session, and all but 1 participated, at least in part, in the 16-week Wiki experience. Of the 11 participants, 8 completed the post-KT survey. The general demographics of those 8 participants indicated that this group ranged in years of practice from less than 1 year to more than 30 years. Years of practice in pediatric physical therapy ranged from less than 1 year to more than 15 years. None of the participants held specialty certification in pediatrics, and the highest earned degrees spanned the entire range of bachelor's, master's, and clinical doctorate (DPT) degrees. All of these pediatric PTs were primarily employed in either a school-based or early intervention setting. Baseline survey data from these 8 participants were compared with their postcourse and post-KT survey results.

In this KT group, precourse ratings favored higher levels for frequency over knowledge for all areas except interpreting (both rated in the middle of the scale) and selecting (rated as less frequent) (Table 5). This group differed from the larger group at precourse baseline in 2 areas: the KT group rated their frequency of selecting tests lower and their frequency of interpreting tests higher than the larger group did on the baseline survey. The KT group appeared to have less opportunity to select standardized assessments than they did for the other elements assessed. Statistical analysis using the Wilcoxon signed rank test revealed no difference between these 2 groups on the baseline survey (P values ranged from .116 to .886) in any area except the frequency of selecting tests (P = .044).

TABLE 5: Preconference Survey: Knowledge Translation Group (n = 8)

Post-KT Survey

The ratings of the 8 participants who completed the post-KT survey stayed the same or improved in all elements rated (Table 6). Wilcoxon signed rank tests were used to determine differences in this group from baseline to postsurvey. Statistically significant differences were noted in the knowledge for selecting (P = .000), administering (P = .024), interpreting (P = .000), and sharing (P = .000) standardized tests and measures. When analyzing changes in the frequency with which these participants performed each of these elements, only the frequency of interpreting tests and measures was statistically different from baseline to postsurvey (P = .026). This group was unchanged in their ratings from baseline to postsurvey for frequency of interpreting (rated in the middle of the scale in both instances), frequency of administering (rated high in both instances), knowledge for interpreting (rated in the middle in both instances), and frequency for sharing (rated high in both instances). The percentages of responses for each rating category for knowledge and frequency, comparing baseline with postsurvey, are presented in Figures 1 through 4.

TABLE 6: Postconference Survey: Knowledge Translation Group (n = 8)
Fig. 1: Selecting tests: Knowledge translation group.
Fig. 2: Administering tests: Knowledge translation group.
Fig. 3: Interpreting tests: Knowledge translation group.
Fig. 4: Sharing results of tests: Knowledge translation group.

Change Scores

Some participants made marked positive changes as indicated by the positive numbers in their pre- to postsurvey change scores, while others had a pattern of decreased ratings (Table 7). The participants reported 4 instances of a negative change in the knowledge ratings and 12 instances of negative change in frequency ratings.

TABLE 7: Individual Change Scores: Knowledge Translation Group

Information was also gathered regarding the frequency and substance of posts to the Wiki discussion board. Ten of the 11 individuals from the KT group participated in the online discussions, with the number of posts by each individual ranging from 2 to 7 (the total number of posts for all 10 participants was 43) and the number of page views ranging from 2 to 26 (the total number of page views for all participants was 103). The majority of the posts were questions or comments about application of the course information in clinical practice. The remainder of the posts included comments about barriers to implementation, reference to individual goals, and greetings and expressions of gratitude to the course instructors. Finally, 16 of the posts occurred within the first 2 weeks following the course, and the remainder (27) occurred up until approximately 12 weeks after the course.

DISCUSSION

In a 2007 survey of PTs, occupational therapists, and speech-language pathologists in Ontario, Hanna et al34 found that 59% of respondents reported using standardized measures in their practice daily to weekly.34 In addition, a 1999 survey found that 70% of service providers had used pediatric outcome measures in the previous 6 months.35 At the beginning of the conference, most attendees also reported selecting, administering, and sharing information about tests and measures very frequently (greater than 15 times in the last 3 months). This suggests that pediatric PTs are regularly using tests and measures and are modestly confident with their knowledge about the tests and measures. However, similar to the findings from the survey by Hanna et al,34 attendees reported a lower frequency on items related to interpreting tests and measures (not more than 2 times in the past 4 months). This suggests that PTs may not be as comfortable with interpretation of results from standardized testing, particularly with concepts such as Z scores, confidence intervals, and standard error of measurement.34 The PTs in this study may have reported relatively lower scores related to interpretation of tests and measures due to a lack of confidence with these statistical concepts. Therefore, although the tests may be used, a comprehensive understanding of the results may be limited. A comment on the Wiki from participant 4 reflects this: “I learned a lot about Z scores (at the conference).... It was one of those things I learned in stats class, but I forgot what it meant.”

Similar to previous reports,14,16–20 attendees at this conference indicated improved knowledge of course content. Despite the relatively low number of completed follow-up surveys, the 4-month time frame between baseline and follow-up, and the use of nonparametric statistics to analyze the ordinal data, this self-reported improvement in knowledge was statistically significant across all 4 areas of the survey: selection, administration, interpretation, and sharing. At the 4-month follow-up, most respondents reported “teaching knowledge” with this content, indicating sufficient knowledge to perform and teach this skill in a formal setting. The results suggest that this CE format was effective at moving participants to a knowledge level that would support their ability to instruct others, including workplace colleagues, on the information from the course and thereby increase the likelihood that the information could be effectively disseminated. Most respondents in the Hanna et al34 survey agreed that they would appreciate additional resources or assistance in learning about the use and interpretation of standardized measures, so ideally the attendees at this conference were able to serve as a knowledgeable resource for their colleagues as a result of attending this conference and participating in the KT activities.

Interestingly, according to the change scores, several KT participants reported a reduction in knowledge about some aspect of tests and measures. This may reflect an increased awareness of information previously unknown to these individuals. In other words, attendees may not have “known what they did not know” before attending this conference. Exposure to this new information, therefore, led to decreased self-confidence with this information and may have contributed to a decreased willingness to apply this content in clinical practice. In all instances where change scores indicated a decrease in knowledge, the participants also reported a decrease in frequency of that activity.

A key objective of the KT aspect of this course was to support the implementation of course information into clinical practice. Previous research indicates that transfer into practice for pediatric PTs may be challenging, even with a variety of KT activities.15 The results of this study demonstrated a significant increase in the frequency of interpretation of data from pediatric tests and measures. However, other frequency measures were not significantly different. One possible explanation is that the timing of the project coincided with summer vacation for participants providing school-based PT. The follow-up survey was administered at the end of September; therefore, some of the participants may have had decreased opportunities to perform the newly learned tests and measures just prior to completing the follow-up survey. In addition, at baseline, several participants reported “very frequently” for several items, leading to a ceiling effect related to the survey itself, as reflected in the change scores in Table 7. Finally, on the Wiki discussion board, several participants described limited time to administer tests and multidisciplinary examination procedures as barriers to implementing their individual KT plans.

Frequency of interpretation did improve, however, and change scores for several participants indicated an increase in frequency for some elements of standardized tests and measures. Comments from the Wiki also suggested an ongoing effort to apply the information from the course. More posts occurred more than 2 weeks after the end of the course than within the first 2 weeks, indicating sustained engagement with the program. For example, participant 9 commented 3 weeks after the course: “I have begun to add Single Task Measures data section directly to my Progress Note page for charting purposes. This prompts me to complete the tests more regularly,” and participant 7, also 3 weeks after the course: “I have been doing a lot of thinking about what I learned at the conference and have begun to apply it.”

The variety of change scores and comments related to implementation of course content may be related to each individual's readiness for change in this area. Prochaska's transtheoretical model suggests that individual readiness for change is variable and that interventions and outcome expectations should reflect this variability.32 In this project, some of the attendees may not have reached a stage of readiness for change and were therefore less likely to make substantive practice changes. Others were further along on the continuum of readiness for change and were therefore more likely to make substantive practice changes. The transtheoretical model suggests that, ideally, ongoing assessment and intervention that reflect the individual's readiness to change should be implemented, and that more static assessments and “one size fits all” interventions are unlikely to be successful for all individuals.32

Previous research evidence supports the use of multicomponent, multifaceted, and interactive activities to foster KT.12,14,16–19,23–26 In a study of pediatric PTs working in a publicly funded community children's rehabilitation program in Canada, Russell et al36 implemented a multifaceted KT intervention. In this program, PTs acted as knowledge brokers to facilitate the use of 4 evidence-based measurement tools in clinical practice. This knowledge broker model was designed to overcome many of the barriers to research transfer identified in the literature. The results suggested that PTs were able to increase knowledge and application of the tests and measures after a 6-month program and at a 12-month follow-up. However, the intensive nature of this program may limit the applicability for most clinicians.36 In contrast, an advantage to the structure of the multicomponent KT program in this project was the relative ease of implementation. The 2-day course format was similar to other CECs and included activities aimed at explicitly integrating course information with cases relevant to the practice area for each attendee. In addition, each attendee was required to develop 1 or 2 individualized goals related to application of course information. The KT instruction and individual program development required an additional 90-minute session. Participants were then able to use the Wiki as needed and when convenient.

LIMITATIONS

Several limitations must be considered when applying the results to pediatric practice. The results demonstrated that the 8 participants in this project were able to gain knowledge of selecting, administering, interpreting, and sharing results of standardized tests and measures in pediatrics. However, because no control group was included, it is not possible to determine whether participation in the course alone could have resulted in similar changes in knowledge. These individuals may have improved because of attendance at the CEC or because of some other factor unrelated to the inclusion of KT activities.

Another issue of concern is the level of attrition that occurred throughout this study. The authors attempted to collect follow-up survey data from those who completed the initial survey and attended the course but did not participate in the postcourse KT session and Wiki. That group could have served as a form of control, but the survey response rate in this group was so low (n = 3) that the data could not be used for a meaningful comparison. There was also attrition in the group of therapists who agreed to take part in the additional postcourse activities. This drop-off in participation is indicative of the problematic nature of both CECs as typically offered and survey research in general.11,37,38 The reduction in participation may have reflected a decreased willingness on the part of most attendees (18 of 29) to devote the additional time and effort to integration of KT as a component of CECs. Currently, there is no expectation or requirement that attendees at CECs demonstrate any change in knowledge or implementation of course content, and attendance is sufficient to obtain the continuing education units necessary to maintain licensure. Therefore, there was little incentive for attendees to participate in this aspect of the program.

CONCLUSION

In times of economic uncertainty, when resources for continuing education may be limited and expectations that PTs complete such education to maintain licensure have increased 3-fold over the last 2 decades, critics have raised concerns about the efficacy and cost-effectiveness of the current systems used among health care professionals.26,39–41 This investigation provides some evidence that supplemental efforts to support KT can be a low-cost and effective add-on to traditional CECs for pediatric PTs. Continuing education providers may consider offering such a “value-added” service to their customers by requiring presenters to use some of the techniques of KT as a part of their courses. Providers are also encouraged to include opportunities for participants to continue formal online dialogue with their colleagues and content experts long after the course sessions end.

REFERENCES

1. Schreiber J, Stern P, Marchetti G, Provident I, Turocy P. School-based pediatric physical therapists' perspectives on evidence-based practice. Pediatr Phys Ther. 2008;20:292–302.
2. American Physical Therapy Association. Code of ethics. http://www.apta.org/uploadedFiles/APTAorg/About_Us/Policies/HOD/Ethics/CodeofEthics.pdf#search=%22code of ethics%22. Published 2010. Accessed November 29, 2011.
3. Haynes B, Devereaux P, Guyatt G. Physicians' and patients' choices in evidence-based practice. BMJ. 2002;324:1350.
4. Schreiber J, Stern P. A review of the literature on evidence-based practice in physical therapy. Intern J Allied Health Sci Pract. 2005;3(4):17.
5. Schreiber J, Stern P, Marchetti G, Provident I. Strategies to promote evidence-based practice in pediatric physical therapy: a formative evaluation pilot project. Phys Ther. 2009;89:918–933.
6. Berwick D. Disseminating innovations in health care. JAMA. 2003;289(15):1969–1975.
7. Jette D, Bacon K, Batty C, et al. Evidence-based practice: beliefs, attitudes, knowledge, and behaviors of physical therapists. Phys Ther. 2003;83(9):786–805.
8. Salbach N, Jaglal S, Korner-Bitensky N, Rappolt S, Davis D. Practitioner and organizational barriers to evidence based practice of physical therapists for people with stroke. Phys Ther. 2007;87:1284–1303.
9. Schreiber J, Downey P, Traister J. Academic program support for evidence-based practice: a mixed-methods investigation. J Phys Ther Educ. 2009;23(1):36–43.
10. Salbach N, Guilcher S, Jaglal S, Davis D. Factors influencing information seeking by physical therapists providing stroke management. Phys Ther. 2009;89:1039–1050.
11. Rappolt S, Tassone M. How rehabilitation therapists gather, evaluate, and implement new knowledge. J Contin Educ Health Prof. 2002;22:170–180.
12. Cleland J, Fritz J, Brennan G, Magel J. Does continuing education improve physical therapists' effectiveness in treating neck pain? A randomized clinical trial. Phys Ther. 2009;89:38–47.
13. Landers M, McWhorter J, Krum L, Glovinsky D. Mandatory continuing education in physical therapy: survey of physical therapists in states with and states without a mandate. Phys Ther. 2005;85(9):861–871.
14. Brennan G, Fritz J, Hunter S. Impact of continuing education interventions on clinical outcomes of patients with neck pain who received physical therapy. Phys Ther. 2006;86:1251–1262.
15. Ketelaar M, Russell D, Gorter JW. The challenge of moving evidence-based measures into clinical practice: lessons in knowledge translation. Phys Occup Ther Pediatr. 2008;28:191–206.
16. Bero L, Grilli R, Grimshaw J, Harvey E, Oxman A, Thomson M. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ. 1998;317:465–468.
17. Davis D, O'Brien M, Freemantle N, Wolf F, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282(9):867–874.
18. Grimshaw J, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8 suppl 2):II-2–II-45.
19. McCluskey A, Lovarini M. Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005;5(40):1–12.
20. Menon A, Korner-Bitensky N, Kastner M, McKibbon K, Straus S. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–1032.
21. Canadian Institutes of Health Research. http://www.cihr-irsc.gc.ca/e/26574.html#defining. Published 2009. Accessed June 22, 2011.
22. Davis D, Evans M, Jadad A, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327:33–35.
23. Smith W. Evidence for the effectiveness of techniques to change physician behavior. Chest. 2000;118:8S–17S.
24. Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. J Health Serv Res Policy. 2003;8(3):165–170.
25. Brown C, Gottschalk M, Van Ness P, Fortinsky R, Tinetti M. Changes in physical therapy providers' use of fall prevention strategies following a multicomponent behavioral change intervention. Phys Ther. 2005;85(5):394–403.
26. Brown C, Gottschalk M, Van Ness P, Fortinsky R, Tinetti M. Changes in physical therapy providers' use of fall prevention strategies following a multicomponent behavioral change intervention. Phys Ther. 2005;85(5):394–403.
27. National Center for the Dissemination of Disability Research (NCDDR). A review of the literature on dissemination and knowledge utilization. Southwest Educational Development Laboratory; 1996:1–37.
28. Florio E, DeMartini J. The use of information by policy makers at the local community level. Knowledge. 1993:106–123.
29. Hutchinson J, Huberman M. Knowledge Dissemination and Utilization in Science and Mathematics Education: A Literature Review. Arlington, VA: National Science Foundation; 1993.
30. Klein S, Gwaltney M. Charting the education dissemination system. Knowledge. 1991:241–265.
31. Paisley W. Knowledge utilization: the role of new communications technologies. J Am Soc Inf Sci. 1993:222–234.
32. Prochaska J, Velicer W. The transtheoretical model of health behavior change. Am J Health Promot. 1997;12(1):38–48.
33. Rogers E. Diffusion of Innovations. 5th ed. New York, NY: Free Press; 2003.
34. Hanna S, Russell D, Bartlett D, Kertoy M, Rosenbaum P, Wynn K. Measurement practices in pediatric rehabilitation: a survey of physical therapists, occupational therapists, and speech-language pathologists in Ontario. Phys Occup Ther Pediatr. 2007;27(2):25–42.
35. Law M, King G, Russell D, MacKinnon E, Hurley P, Murphy C. Measuring outcomes in children's rehabilitation: a decision protocol. Arch Phys Med Rehabil. 1999;80:629–636.
36. Russell D, Rivard L, Walter S, et al. Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: a before-after intervention study. Implement Sci. 2010;5(92):1–17.
37. Andersen L. Occupational therapy practitioners' perceptions of the impact of continuing education activities on continuing competency. Am J Occup Ther. 2001;55:449–454.
38. Portney L, Watkins M. Foundations of Clinical Research: Applications to Practice. 3rd ed. Upper Saddle River, NJ: Prentice-Hall; 2009.
39. Little C. Mandatory continuing education: a survey of the literature and a comment on the implications for physical therapy. J Contin Educ Health Prof. 1993;13(2):159–167.
40. Austin T, Graber K. Variables influencing physical therapists' perceptions of continuing education. Phys Ther. 2007;87:1023–1036.
41. Jurisdiction License Reference Guide. Topic: Initial license requirements. https://www.fsbpt.org/download/JLRG_RequirementsInitialLicensure_200912.pdf. Accessed September 1, 2011.
Keywords:

clinical competence; evidence-based practice; follow-up studies; knowledge management; neuropsychological tests; medical continuing education/organization/administration; physical therapy specialty/education; survey research

            © 2012 Lippincott Williams & Wilkins, Inc.