THE ASSESSMENT of clinicians' need for knowledge, including the evaluation of gaps between research evidence and actual practice, is an important first step in evaluating the synthesis of knowledge found in systematic reviews and clinical practice guidelines (CPGs) and the uptake of that knowledge into practice.1,2 Needs assessment is defined by Kitson and Straus as a “systematic process for determining the size and the nature of the gap between current and more desirable knowledge, skills, attitudes, behaviors or outcomes.”1 Many strategies exist to evaluate needs.3 While chart audits and indicator-based methods are often cited because of their theoretical objectivity,1 other strategies, such as self-report measures, take user subjectivity into account.4 A form of integrated knowledge translation, in which end users are involved in the process of needs assessment, is hypothesized to improve user adherence to new knowledge in practice.5–7 In fact, it is recognized that overlooking clinicians' perspectives about an evidence-based intervention, including its perceived level of implementation and importance, theoretically lowers the chances of the innovation being adopted.8,9 Thus, trying to implement CPG recommendations in areas where users perceive no need for improvement or no priority is likely to result in low effort or even resistance, leading to suboptimal implementation results with little impact on clinical practice.
In addition to the assessment of the level of implementation and perceived importance, the perceived feasibility of implementation must be considered, as it can affect stakeholder engagement in implementation efforts. Feasibility is defined as the extent to which a new treatment, or innovation, can be successfully used or carried out within a given agency or setting.10,11 In a context of competing priorities, stakeholders will not readily devote time and effort to implementation challenges that are perceived as unrealistic, beyond stakeholder capacity, or beset by insurmountable obstacles. Thus, considering feasibility in addition to importance might maximize the likelihood of implementation success.
In 2016, a new CPG was published for the rehabilitation of adults with moderate to severe traumatic brain injury (MSTBI), developed in consultation with end users to ensure that the guideline would meet end user needs [Swaine et al, need for TBI guideline (article 1, in this issue); Lamontagne et al, survey of users (article 2, in this issue)]. The guideline contains 266 recommendations, among which 11 are categorized as fundamental and 104 as priority. The recommendations are grouped into 2 sections: Section 1: Components of the Optimal TBI Rehabilitation System and Section 2: Assessment and Rehabilitation of Brain Injury Sequelae. Each section is divided into subsections (7 for section 1 and 13 for section 2). Given the large number of recommendations in the CPG, and the fact that it is not feasible to move from dissemination to actual active implementation within multiple clinical rehabilitation settings at one time, it was important to have clinical settings identify priorities so that implementation projects could be focused and based on user needs.
The general aim of the current research project was to investigate the perceived gap between actual rehabilitation practice and the recommended practice from the Institut national d'excellence en santé et en services sociaux and the Ontario Neurotrauma Foundation (INESSS-ONF) CPG for the rehabilitation of adults with MSTBI in 2 provinces of Canada. The specific objectives were to determine the perceived level of implementation of a subset of key CPG recommendations and to examine the perceived importance and feasibility of implementation of the priority recommendations.
This study used a cross-sectional design, applying an electronic survey methodology according to the CHERRIES standards.12 Lockyer notes that “questionnaires can facilitate [group] self-reflection by asking about current practices or compliance with specific recommendations.”4
Since the CPG was developed in collaboration with and for service providers in public healthcare facilities (acute care and inpatient and outpatient rehabilitation settings providing services to persons with TBI) in Quebec and Ontario, these providers were invited to participate in the study. We took a program-level perspective (in contrast to individual clinician- or system-level perspectives),4 since implementation efforts at the program level were envisaged. We asked our partners at the INESSS and the ONF to validate the comprehensiveness of our list of eligible programs in Quebec and Ontario, respectively, to ensure that all eligible programs (ie, those offering rehabilitation to individuals with moderate to severe brain injury) were contacted. Clinical program managers of both general and specialized acute care and trauma rehabilitation programs completed the questionnaire with their clinical team's input.
In May 2016, program managers of eligible facilities were sent an e-mail originating from either the ONF in Ontario or the INESSS in Quebec announcing the upcoming electronic survey and describing the context, objectives, and overall study procedure. One month later, program managers received the electronic survey with detailed information about participation. Although the CPG contained 266 recommendations, participants were asked to review a subset of the core priority and fundamental recommendations. Program managers were asked to complete the survey with the input of key program professionals (coordinators, practice leaders, etc) in order to provide a broader perspective on the level of implementation of the CPG recommendations in their facilities. Participants were advised against making this a discipline-specific exercise (ie, asking different groups of professionals to examine recommendations only in their domain of expertise). The consultation was held between June and September 2016.
The electronic survey was specifically designed for this study with Microsoft Excel 2010. We used adaptive questioning12 to reduce participant burden. For each recommendation, participating managers were asked to identify the following:
- The current perceived level of implementation of the recommendation in their program on a 5-point scale: 1—fully implemented (in almost 100% of applicable patients); 2—mostly implemented (in approximately 75% of applicable patients); 3—partially implemented (in approximately 50% of applicable patients); 4—occasionally implemented (in approximately 25% of applicable patients); 5—not implemented (absent despite applicable patients).
- Only for those recommendations deemed not to be fully implemented, the degree of priority and feasibility to implement each recommendation on a 3-point scale: 1—high priority; 2—medium priority; 3—low priority; and 1—very feasible; 2—somewhat feasible; 3—not feasible.
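The two-stage response logic above can be sketched as a short validation routine. This sketch is purely illustrative (the actual survey was built in Microsoft Excel, not in code), and all names and structures below are hypothetical:

```python
# Illustrative sketch of the survey's adaptive-questioning rule: priority and
# feasibility ratings are collected only when a recommendation is NOT fully
# implemented. All identifiers here are invented for illustration.

IMPLEMENTATION = {1: "fully", 2: "mostly", 3: "partially",
                  4: "occasionally", 5: "not implemented"}
PRIORITY = {1: "high", 2: "medium", 3: "low"}
FEASIBILITY = {1: "very feasible", 2: "somewhat feasible", 3: "not feasible"}

def record_response(implementation, priority=None, feasibility=None):
    """Validate one recommendation's ratings under the skip rule."""
    response = {"implementation": IMPLEMENTATION[implementation]}
    if implementation == 1:
        # Fully implemented: the priority/feasibility questions are skipped.
        if priority is not None or feasibility is not None:
            raise ValueError("priority/feasibility not asked when fully implemented")
    else:
        response["priority"] = PRIORITY[priority]
        response["feasibility"] = FEASIBILITY[feasibility]
    return response
```

One consequence of this design, noted again in the Results, is that the denominator for priority and feasibility ratings is smaller than that for implementation ratings.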
Respondents were not able to select “not applicable” or “don't know” as response options. The questionnaire was pretested with 3 members of the project team and 3 members of the CPG development expert panel to verify its functionality and word clarity.
For each guideline section and recommendation, descriptive statistics were used to describe the proportion of respondents reporting the various levels of implementation, priority, and feasibility.
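As an illustration of this descriptive analysis, response proportions can be computed per level and then combined into the groupings reported in the Results (eg, “fully” or “mostly” implemented). The ratings below are invented for illustration only:

```python
# Illustrative computation of the descriptive statistics used in this study:
# the share of respondents at each response level for one recommendation.
# The data are hypothetical, not taken from the survey.

from collections import Counter

def level_proportions(ratings):
    """Return the proportion of respondents at each response level."""
    counts = Counter(ratings)
    n = len(ratings)
    return {level: count / n for level, count in counts.items()}

# e.g., 8 programs rating one recommendation's implementation level (1-5)
ratings = [1, 1, 2, 2, 2, 3, 4, 5]
props = level_proportions(ratings)

# proportion rated "fully" (1) or "mostly" (2) implemented
fully_or_mostly = props.get(1, 0) + props.get(2, 0)  # 5/8 = 0.625
```

The same grouping logic applies to the priority ratings (“high” or “medium”) and the feasibility ratings (“very” or “somewhat” feasible) summarized in Table 1.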
The survey was sent to 51 healthcare facilities across both provinces. Thirty-five provided either specialized inpatient/outpatient TBI rehabilitation or general rehabilitation to a range of patients including those with TBI, and 16 provided rehabilitation in acute care. Forty-four completed surveys were received (86.2%), 30 from rehabilitation sites (85.7%) and 14 from acute care sites (87.5%). Primary reasons given for nonparticipation were that the invited participants felt their setting did not treat a large enough volume of patients with brain injury or that the survey would take too long.
Results by guideline topic
Table 1 indicates the proportion of recommendations for each subsection of the guideline marked by respondents as not “fully” or “mostly” implemented, perceived as “medium” or “high” priority to implement and at least “somewhat” feasible to implement. The results are presented by type of facility (acute care or rehabilitation). Because of the adaptive questioning survey design, the option to respond to subsequent questions was conditional on the response to the previous question (ie, one could not indicate the degree of priority and feasibility if a recommendation was already deemed to be fully implemented). Thus, the number of possible responses for perceived level of implementation was greater than that for the degree of priority and feasibility.
The proportion of “partially” to “not” implemented recommendations varied from 8% to 34% across the guideline subsections. In Section 1: Components of the Optimal TBI Rehabilitation System, the “Subacute Rehabilitation” subsection had the greatest proportion of implemented recommendations, with only 8.0% perceived as “partially,” “occasionally,” or “not” implemented, whereas the subsection with the largest proportion of recommendations not implemented was “Key Components of TBI Rehabilitation” (34.1%). Within Section 2: Assessment and Rehabilitation of Brain Injury Sequelae, the “Comprehensive Assessment” subsection had the greatest proportion of implemented recommendations (96.5%), whereas the “Psychosocial and Adaptation Issues” subsection was the least implemented (24.8% “partially” to “not” implemented).
For most subsections (83.3%) of section 1, 60% or more of the managers noted that the recommendations currently not implemented were of either “high” or “medium” priority. Interestingly, for acute care respondents, the recommendations of the “Management of Disorders of Consciousness” subsection were not as well implemented relative to other subsections (33.3%) but were not a relative priority for acute care settings (50% indicated a high priority). The “Caregivers and Families” subsection was also not perceived as a priority in acute care settings. For all the subsections of section 1, when recommendations were deemed a priority, more than 80% of the respondents identified the implementation to be at least “somewhat” feasible.
In section 2, the “Psychosocial and Adaptation Issues” subsection was deemed not well implemented overall (24.8%) and in rehabilitation settings (29.2%) despite being rated as a priority by almost two-thirds of respondents (60.7%). Acute care respondents rated this subsection as the lowest priority (33.3%), but it was perceived as implemented by almost 91% of respondents. The “Neurobehavioral and Mental Health” subsection had a relatively large percentage of recommendations not currently implemented (21.5%), and approximately three-fourths (75.6%) of respondents indicated these were of “high” or “medium” priority. For acute care respondents, the “Substance Use Disorders” subsection had the highest proportion of recommendations not currently implemented (36.4%), and almost 90% of respondents (87.5%) indicated these recommendations were of “high” or “medium” priority. For all the subsections of section 2, when recommendations were deemed a priority, more than 70% of respondents identified the implementation as “very” or “somewhat” feasible.
Results by province
Tables 2 and 3 report, for acute care and rehabilitation settings, the recommendations deemed not implemented by at least two-thirds of responding programs, thus identifying them as important targets for provincial implementation efforts.
For acute care programs in Quebec, recommendation S 1.1 (targeting the screening of individuals with TBI for substance abuse) was felt not to be implemented by 6 of 7 programs. Each of these 6 programs reported that this recommendation was a priority and feasible. The Quebec-based rehabilitation programs reported that collaboration and continuity mechanisms were not implemented and were a priority; however, implementing this recommendation was deemed less feasible, with 2 programs feeling it was not feasible to implement. In addition, recommendation T 9.2 (specific target symptoms/behaviors should be clearly defined and monitored during pharmacological treatment) was reported as not implemented by 83.3% of programs.
For acute care programs in Ontario, 2 recommendations were identified as not implemented by 80% of respondents: (1) careful drug selection and monitoring to minimize potential adverse effects on arousal, cognition, motivation, and motor coordination (recommendation T 9.3) and (2) use of methylphenidate to enhance attentional function and speed of information processing (recommendation J 3.1). While careful drug selection was deemed a priority and implementable by all responding facilities, appropriate use of methylphenidate was a priority for only one facility. Regarding the rehabilitation programs in Ontario, more than 80% perceived collaboration and continuity mechanisms with addiction/substance use services and programs (recommendation A 2.2) and mental health services and programs (recommendation A 2.1) as not well implemented. These 2 recommendations received similar ratings of priority (80.0% and 89.0%, respectively) and feasibility (100% and 88.0%, respectively).
This study aimed to investigate the perceived gap between actual rehabilitation practice and recommended practice as presented in the INESSS-ONF CPG in 2 Canadian provinces. Overall, we found that a high proportion of the 109 investigated recommendations (ie, those identified as fundamental or priority in the CPG) were considered “fully” or “mostly” implemented, especially those in the “Assessment and Rehabilitation of Brain Injury Sequelae” section of the CPG. In general, the recommendations that were “partially” to “not” implemented were deemed priorities and feasible to implement.
It is both interesting and reassuring to observe that a large proportion of the recommendations were deemed “fully” or “mostly” implemented by the participants. While providing an optimistic vision of the quality of current TBI rehabilitation practices in Quebec and Ontario, this also might be an artifact of the method used. Indeed, the results of this study describe perceived levels of implementation rather than actual levels, which could have been determined through audits or through indicators from administrative databases. It is possible, as is the case for subjective outcome measurement,13,14 that participants' judgment was positively biased toward the desired outcome. However, our findings are based on clinical teams' perspectives, and they represent but one source of information to consider in orienting the implementation process.1 Other strategies, such as chart audit and the use of indicators, could triangulate our results and provide a much more complete picture of the degree of implementation.
The recommendations in the “Components of the Optimal TBI Rehabilitation System” section were deemed less implemented than those in the “Assessment and Rehabilitation of Brain Injury Sequelae” section, likely because the latter often apply at the clinician level and are easier to implement than recommendations addressing the complex, systemic issues covered in the former section. In fact, system-level changes are more difficult to make, as they often involve regulatory and legislative changes.15 Indeed, many recommendations related to mechanisms for collaboration, coordination, and continuity with other care providers were more frequently deemed “partially” or “not” implemented. Many implementation theories see the complexity of an intervention as a challenge to its implementation, and complexity increases as the number of organizational entities involved increases.16 Respondents were also able to provide comments about potential obstacles to implementation. For example, some respondents reported that TBI rehabilitation was not a funding priority in their regions and that this, rather than anything to do with feasibility, was the barrier.
There were some similarities in implementation priorities between the 2 provinces. Recommendations concerning the use of medications (R 10.3, J 6.2, J 3.1, and J 3.4) accounted for half of the recommendations deemed not yet implemented. Medication use is more likely to be under individual physician influence than team decision, and there may therefore be more variability in practice owing to differences in physician comfort and experience with the medication. Furthermore, patients and families may be hesitant to use medications, which may influence implementation. One potential barrier may be the quality of the evidence for the medications and fear of side effects in the absence of strong evidence. Medications are used for specific patients, in a small subset of all rehabilitation patients, and therefore could be perceived by team members as not in widespread use despite appropriate use by physicians. Another possibility is that the use of the medications was considered “off-label” in patients with brain injury (eg, acetylcholinesterase inhibitors) and therefore not approved or available on hospital formularies.
An important challenge is collaboration and the establishment of continuity mechanisms with addiction/substance use services and programs (recommendation A 2.2) and mental health services and programs (recommendation A 2.1). These were perceived as “not implemented” and a “priority” by the rehabilitation programs of both provinces. This result highlights the difficulties, at least in Canada, for persons living with brain injury in accessing mental health and addiction services17 that are not integrated into comprehensive rehabilitation programs or available to persons with a brain injury once they are discharged from rehabilitation.
Although there were commonalities in the recommendations deemed “priority” to implement, the provincial variations in healthcare organization are great enough that the implementation team envisaged 2 parallel tailored implementation processes. Indeed, the perceived level of implementation and priorities varied for Ontario and Quebec, indicating the different needs of rehabilitation programs. To optimize the implementation process, the provincial implementation teams determined that meeting the perceived clinical needs was more important than having a uniform implementation process. Other considerations for implementation strategies included the availability of implementation tools and the evaluation of implementation facilitators and obstacles. Consensus meetings at the program level are necessary to tailor implementation priorities and strategies to encourage successful implementation of the local priority guideline recommendations. Currently, such meetings are also occurring at the provincial level to identify and plan for implementation efforts that address the common priorities across each province.
The following strengths and weaknesses of this study should be considered. The relatively high participation rate of 86.2% indicates the relevance of the CPG to, and the engagement of, the participating programs. The self-reported evaluation of implementation level and perceived importance represents the perspectives of the respondents. However, there was no objective measurement of the current implementation level, so the results simply offer a noteworthy snapshot of end user perspectives about the care they provide. Involving end users in a needs assessment is key to the success of the process,1 but basing an entire implementation process on clinician perception has limitations,3 and further elements (actual level of implementation, facilitators of and obstacles to implementation, implementation climate, etc) have to be examined to inform the implementation process. Our results provide important contextual information and can strengthen the sharing of best practices, as they might initiate a form of social pressure,18 encouraging programs that perceived a low level of implementation for a given practice to connect with others with higher levels of implementation. It is important to keep in mind that our results represent the perceptions of clinical teams, which may have been swayed by group influence and social desirability bias; the responses do not necessarily represent the perceptions of individuals. Since it was not possible to vary the order of the survey questions, fatigue and respondent burden might have influenced the results observed for the later recommendations. Finally, given the method used, the results should not be generalized to other contexts without very careful consideration, and a similar process should be used to inform implementation efforts in other settings.
Assessment of clinical needs is a required step in closing the gap between knowledge and action. Basing this assessment on clinician perception provides a potentially useful and informative perspective for implementation efforts. Identifying perceived gaps in implementation and local priorities can serve to inform local implementation efforts in a way that is likely to increase commitment to implementation efforts and, by extension, the success of implementation. Identifying perceived gaps in implementation should serve as a starting point for planning formal implementation activities.
1. Kitson A, Straus SE. Identifying knowledge to action gaps. In: Straus S, Tetroe J, Graham ID, eds. Knowledge Translation in Health Care: Moving From Evidence to Practice. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2013:60–72.
2. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24.
3. Grant J. Learning needs assessment: assessing the need. BMJ. 2002;324(7330):156–159.
4. Lockyer J. Needs assessment: lessons learned. J Contin Educ Health Prof. 1998;18(3):190–192.
5. Canadian Institutes of Health Research. A Guide to Knowledge Translation Planning at CIHR: Integrated and End-of-Grant Approaches. Ottawa, ON, Canada: Canadian Institutes of Health Research; 2012.
6. Gagliardi AR, Berta W, Kothari A, Boyko J, Urquhart R. Integrated knowledge translation (IKT) in health care: a scoping review. Implement Sci. 2016;11(1):1.
7. Gagliardi AR, Dobrow MJ. Identifying the conditions needed for integrated knowledge translation (IKT) in health care organizations: qualitative interviews with researchers and research users. BMC Health Serv Res. 2016;16(1):256.
8. Farley K, Thompson C, Hanbury A, Chambers D. Exploring the feasibility of Conjoint Analysis as a tool for prioritizing innovations for implementation. Implement Sci. 2013;8(1):56.
9. Rogers EM. Diffusion of Innovations. New York, NY: Simon & Schuster; 2010.
10. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
11. Karsh B. Beyond usability: designing effective technology implementation systems to promote patient safety. Qual Saf Health Care. 2004;13(5):388–394.
12. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. 2004;6(3):e34.
13. Kaptchuk TJ. Intentional ignorance: a history of blind assessment and placebo controls in medicine. Bull Hist Med. 1998;72(3):389–433.
14. Colquhoun HL, Letts LJ, Law MC, MacDermid JC, Missiuna CA. Administration of the Canadian Occupational Performance Measure: effect on practice. Can J Occup Ther. 2012;79(2):120–128.
15. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.
16. Kochevar LK, Yano EM. Understanding health care organization needs and context: beyond performance gaps. J Gen Intern Med. 2006;21(suppl 2):S25–S29.
17. de Guise É, Banville F, Desjardins M, et al. Démarche réflexive d'analyse en partenariat sur l'élaboration de stratégies pour améliorer l'offre de services en santé mentale des personnes ayant subi un traumatisme craniocérébral modéré ou grave. Can J Community Ment Health. 2016;35(2):19–41.
18. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
Keywords: brain injury; implementation; practice guideline; survey

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.