Original Articles

The Future of INCOG (Is Now)

Bragge, Peter PhD; Bayley, Mark Theodore MD, FRCPC; Velikonja, Diana PhD, MScCP; Togher, Leanne PhD, BAppSc (Speech Path); Ponsford, Jennie PhD, AO, MA (Clinical Neuropsychology); Janzen, Shannon MSc; Harnett, Amber MSc, BScN, RN (c); Kua, Ailene MSc, PMP; Patsakos, Eleni MSc; McIntyre, Amanda RN; Teasell, Robert MD, FRCPC; Kennedy, Mary PhD, CCC-SLP; Marshall, Shawn MD, MSc, FRCPC

Editor(s): Bayley, Mark MD; Ponsford, Jennie AO, MA, PhD

Journal of Head Trauma Rehabilitation 38(1):103-107, January/February 2023. | DOI: 10.1097/HTR.0000000000000836

IT HAS BEEN 8 years since the first iteration of the INCOG clinical practice guidelines (CPGs) was published. Much has happened since 2014, and a considerable body of evidence has been published in the various domains of cognitive rehabilitation research represented in this special issue. Over this time, significant developments have emerged in the science of identifying, appraising, and distilling research evidence into practically applicable CPGs, as well as in implementation efforts to ensure meaningful change in care delivery.1,2

Many of these developments have been either driven or “supercharged” by the COVID-19 pandemic.3–5 The pandemic led to a global spotlight on science and—due to the importance of public health measures to control the virus—the role of science in guiding our day-to-day lives.5 Specifically, exponential increases in demand for science to support real-time decision-making led to a number of poorly designed and coordinated COVID trials and reviews. A more carefully planned evidence-to-practice pipeline would have been more helpful in guiding COVID responses.4 In this sense, the pandemic reinforces the original and ongoing mission of INCOG: to provide robust reviews of the best available cognitive rehabilitation research evidence to clinicians who want to facilitate and optimize patient recovery following traumatic brain injury (TBI). As such, this commentary provides key insights from the review and guideline sciences, highlighting their relevance to this and future INCOG updates.

Getting the question right: Codesigning and prioritizing research questions ensure that research effort is focused on areas where impact is most needed.

The original INCOG guidelines grew out of a series of codesign and evidence synthesis projects culminating in an international workshop in which 25 clinicians, researchers, and knowledge translation scientists representing 4 countries prioritized cognitive rehabilitation following TBI as an important area of knowledge translation focus.6 In parallel with the foundational work leading to INCOG 2014, the importance of creating, growing, and harnessing communities of practice has continued to emerge but with an increasing emphasis on the participation of patients with lived experience of injury or illness and its consequences. For example, it has been more than a decade since the establishment of the Patient Centered Outcomes Research Institute (PCORI), which placed renewed focus on the importance of patient input into “practical questions, relevant outcomes and study populations, and the possibility that treatment effects may differ across patient populations.”7(p1583) While experts in the field have command of scientific and medical knowledge, patients are best placed to provide perspectives on the issues that need to be addressed to optimize their function and quality of life.

The addition of patient and other perspectives (eg, those of service deliverers, policy makers, and funders) builds valuable insights into the development of health research questions and approaches and other areas of problem solving.7–9 For example, this can involve gathering qualitative insights into the experience of the impairment; how clinical interventions are experienced; to what extent these interventions meet real-world needs; and what tailoring may be required to better match these needs. Once the questions have been formulated, there are also opportunities for patients to be part of the research team. Thomas et al10 described how “citizen scientists” (community members with an interest in contributing to science but without formal scientific training) can partner with review researchers to undertake some of the many tasks within a systematic review. They are identified and trained through an online “Cochrane Crowd” platform, which now has almost 24 000 contributors across the world.11

The underlying thread that connects PCORI and “Cochrane Crowd” is the idea of meaningful involvement in health research through research codesign. This involves going beyond isolated activities without meaningful outcomes and clearly communicated outputs (like a one-off workshop with a patient group) to specifying explicit roles and responsibilities of research partners such as patients; recognizing their contributions accordingly; and transparently reporting their contributions to the final research outputs. Although this paints a picture of what research codesign looks like, our review of 23 codesign reviews published in 2021 concluded that while the concept and importance of codesign are acknowledged, the actual codesign process is rarely described in detail or evaluated.9 This can create frustration for those who participate in activities badged as “codesign” but that fall short of respectful and meaningful involvement. Although potentially helpful frameworks and strategies have been developed for examining the extent of patient and family engagement over the past decade, these are not embedded in routine practice.12,13

Addressing the challenge of doing “true codesign” is not easy; it involves developing new ways of working and additional resources. However, if research effort is directed to high-priority areas of need, the impact gains far outweigh these costs. Our experience of codesign in research question development has underlined this, as it has yielded unexpected and important insights.14,15 Relative to other interventions such as medicines and surgery, cognitive rehabilitation is highly interactive. The collaborative nature of rehabilitation underlines the need for meaningful involvement of persons with lived experience from the creation and development of effective treatment interventions to the implementation of CPGs. Such involvement has not been “business as usual” in CPGs, including INCOG. Therefore, our challenge is to explore methods of recognizing and harnessing this potential to ensure that each iteration of INCOG reflects the views and interests of the many groups it seeks to serve. This challenge extends beyond INCOG to the primary research that informs the guidelines, as there is little evidence of codesign in many published cognitive rehabilitation randomized controlled trials and other studies.

Streamlining the reviews that drive the guidelines: Technology has brought us closer to the “holy grail” of guidelines that are both comprehensive and up to date.

In parallel with the advances in cognitive rehabilitation that are reflected in INCOG 2022, there have been a number of developments in the science of developing CPGs. Systematic reviews—a comprehensive assembling of research literature in a defined area of medicine and the substrate of CPGs—are hundreds of years old, with James Lind's 1753 Treatise on Scurvy frequently acknowledged as the first example.16 Core systematic review activities—search, selection, synthesis, and interpretation—were progressively codified over the centuries that followed, culminating in the formation of the Cochrane Collaboration and evidence-based practice movement in the late 20th century. CPGs that translate review findings into statements of recommended practice, graded according to the strength of their underlying evidence, have also continued to evolve as an essential component of the end game of implementation/practice change.17

Technological and informational revolutions of the last 20 to 25 years have created a double-edged sword. Evidence is abundant and readily available—the number of journal articles in existence has been estimated to double approximately every 24 years18—and advances in technology have made this growing volume of research evidence instantly accessible; however, such vast amounts of information cannot be handled using traditional manual review approaches. Fortunately, technology has also advanced review methods in ways that accelerate systematic review processes. For example, the Covidence online platform can manage screening, selection, and data extraction tasks among 2 or more members of a research team, including automating the identification and resolution of conflicts between reviewers in selecting, appraising, and summarizing research studies.19,20 This is one of a staggering array of tools available for every step of the review process.

Several important manifestations of this over the last decade warrant mention. First, rapid reviews (in which systematic review methods are modified, for example, by focusing on review-level evidence or altering other review parameters) have evolved as a viable means of extracting key themes from published studies in much shorter time frames (generally several weeks) than previously required using traditional systematic review methods.21 Second, as illustrated by “Cochrane Crowd” earlier, larger communities of practice can distribute high volumes of work more efficiently. Finally, technology itself can dramatically accelerate reviews, from online platforms that manage review workflows to the use of artificial intelligence and machine learning to replace time-consuming manual tasks with increasing precision.22,23 These technology developments have led to the advent of “living” reviews and CPGs, which harness distributed human resources and machine effort to create reviews and update them continuously.10,24,25 Living INCOG guidelines would facilitate a process to update recommendations as soon as relevant new information becomes available. With the move to virtual meetings stemming from the COVID-19 pandemic, living guideline panels are more feasible and acceptable than ever. This opportunity to maximize timeliness and relevance unlocks the potential to update INCOG in real time, rather than after a number of years.

Although the COVID-19 pandemic has raised the profile of science and the role of research evidence, it has also laid bare preexisting and serious flaws in the evidence-to-practice pipeline. Poor coordination of COVID trials led to many of them being underpowered; similarly, review efforts were rushed, with insufficient attention to overlap between different review groups. Together, these failures produced “research waste at an unprecedented scale.”4(p183) Existing systems such as the PROSPERO global systematic review registration platform go some way to addressing these challenges, but COVID showed that review and trial registration alone is insufficient. Rather than individual tools and registration platforms, the future of evidence-based medicine is better envisioned at a system design level. There is bold, transformative thinking in this space, for example, efforts to link primary, review, and CPG research and associated data, rather than the existing, poorly connected “silos.” Various models of fully integrated “evidence ecosystems” have been proposed.26 For example, Nakagawa et al27 describe a fully open-access platform that enables primary, review, and other researchers to share data based on “FAIR” principles—findable, accessible, interoperable (ie, data can be integrated with other data and can be used across applications and workflows), and reusable.

What could such a platform look like for INCOG? The possibilities are tantalizing. An “INCOG research ecosystem” could facilitate numerous connected efforts:

  • Patients or stakeholders identifying an issue of importance to them (eg, “I want access to tools that can aid my memory”);
  • Clinicians posing interventions addressing this (an online memory portal or a new approach to memory retraining);
  • A large, globally coordinated trial of the new intervention with a robust sample size; access to research data and findings by all involved in developing the question, undertaking and participating in the trial;
  • Development of an INCOG recommendation based on the trial findings that could be fed into a connected guideline portal alongside related research inputs (such as ERABI);
  • Collection of audit data showing the extent to which the relevant recommendation is reflected by practice;
  • Gathering of information about barriers and facilitators to adoption; and
  • Planning of implementation or other follow-on research responding to the various research and audit findings.

Connecting recommendations to practice: The considerable research effort that goes into guidelines is wasted if implementation and connection to practice are inadequate or unsupported.

The challenge of connecting academic research to the clinical point of care, often characterized as “closing the evidence-practice gap,” has long been recognized. There are multiple facets to implementing guideline recommendations, starting with the recommendations themselves. A review by Kastner et al28 of factors associated with successful guideline implementation highlighted the importance of CPG content (process, evidence, clinical applicability, recommendation feasibility) and communication (simple, clear, and persuasive language; CPG format). This work led to the development of the Guideline Implementability Decision Excellence Model (GUIDE-M) for CPG developers.29 In developing and designing these new INCOG recommendations, we have given consideration to CPG implementation. This reflects our belief that better assessment, treatment, and outcomes for individuals with TBI are only possible if CPG teams keep implementation at the forefront of their thinking, and it is further underlined by our efforts to measure the extent to which clinical practice reflects awareness and use of INCOG guideline recommendations.2,30 As we found in reviewing previous cognitive rehabilitation guidelines, this focus on auditability (evaluating whether recommendations have been translated into clinical practice) is an area traditionally neglected in CPGs.31

It has been more than a quarter of a century since David Sackett and colleagues designed and tested an actual “evidence cart” comprising a computer, CD-ROM, and hard-copy knowledge resources in a hospital setting. Their efforts were ultimately hampered by the sheer volume and weight of 1990s-era technology.32 Today, all of this information can be readily stored on a handheld smartphone. The “ViaTherapy” app, which supports clinical decision-making following upper-extremity stroke, is an example of how modern technology can be harnessed to achieve Sackett's ambition. The app provides evidence-based recommendations tailored to a person recovering from an upper-extremity stroke, based on 4 questions asked of the treating clinician. The ViaTherapy recommendations are further prioritized on the basis of expert panel input, with a star rating indicating the most feasible and important therapies. Recommendations can also be filtered for clinicians who wish to provide group rehabilitation. Video demonstrations of each therapy and potential outcome measures for tracking progress are also provided.33,34

However, as decades of implementation science have shown, the existence of a resource such as ViaTherapy is not sufficient to achieve implementation into routine, sustainable practice. It has been shown that efforts to achieve this are more likely to be successful if barriers and facilitators to uptake across various contexts and settings are addressed.35 Two important considerations warrant mention in this regard.

First, knowledge and clinical practice should be viewed as a 2-way exchange rather than a 1-way street of guideline dissemination. The importance of this 2-way exchange is reflected in the development of the “learning healthcare systems” concept, created by the Institute of Medicine (IOM) following an evidence-based medicine roundtable in 2006. This approach views evidence-based practice as not just dissemination of information to support care (eg, using ViaTherapy) but also a continuous process of learning through implementation and making refinements based on insights gained from caregivers, patients, and families.26 This reflects the Knowledge to Action framework developed by Graham and colleagues,36 which has guided INCOG from the beginning. In a learning healthcare system, ViaTherapy implementation would not end with downloading the app. Insights into its utility; the feasibility, acceptability, and affordability of the recommended therapies; and new questions and research needs would be gathered from clinicians, patients, and families and used to iterate and improve the ViaTherapy resource. Our own work on how INCOG 2014 has translated into practice indicates that this type of continuous learning is presently lacking.2,30

Second, the healthcare system must provide an enabling environment that embeds the importance of evidence-based practice in clinical training; sets (and funds) evidence-based practice as an expectation; supports clinicians to understand and realize this ambition; and recognizes efforts to achieve it. Junior clinicians in all fields, including cognitive rehabilitation, can be overwhelmed in various ways, including their exposure to a health system that is almost continuously overstretched and underfunded; the expectations of patients and their families; and balancing the sheer volume of clinical work with a range of administrative and other workplace responsibilities. Opportunities to stay up to date with research evidence, reflect on what it means for practice, and learn new skills may be nonexistent, rare, or perceived as a low priority in environments that prioritize productivity (quantity of work) over quality of work. Furthermore, the new knowledge brought by graduates may be met by senior colleagues who are set in their approaches and routines of practice. However, investment in the participation of clinicians in a larger evidence ecosystem offers substantial downstream benefits to patients. The onus is therefore on health service and clinical managers to facilitate this opportunity. If this opportunity is lost, the work of INCOG and other CPGs risks remaining on shelves and in unused apps, where it cannot improve the outcomes and quality of life of those in need of the best cognitive rehabilitation we can and should offer.


1. Evidence-based Review of moderate to severe Acquired Brain Injury (ERABI). Published 2022. Accessed May 3, 2022.
2. Downing M, Bragge P, Ponsford J. Cognitive rehabilitation following traumatic brain injury: a survey of current practice in Australia. Brain Impair. 2019;20(1):24–36. doi:10.1017/BrImp.2018.12
3. Olalekan RM, Tuebi M, Ebikapaye O, Henry S, Oka JB, Olaolu OB. A beacon for dark times: rethinking scientific evidence for environmental and public health action in the coronavirus diseases 2019 era. MAR Microbiol. 2020;3:1–18.
4. Pearson H. How COVID broke the evidence pipeline. Nature. 2021;593(7858):182–185. doi:10.1038/d41586-021-01246-x
5. Bragge P, Becker U, Breu T, et al. How policymakers and other leaders can build a more sustainable post-COVID-19 “normal.” Discov Sustain. 2022;3(1):7. doi:10.1007/s43621-022-00074-x
6. Bayley MT, Teasell RW, Wolfe DL, et al. Where to build the bridge between evidence and practice? J Head Trauma Rehabil. 2014;29(4):268–276. doi:10.1097/HTR.0000000000000053
7. Selby JV, Beal AC, Frank L. The Patient-Centered Outcomes Research Institute (PCORI) national priorities for research and initial research agenda. JAMA. 2012;307(15):1583–1584. doi:10.1001/jama.2012.500
8. Olivier P, Wright P. Digital civics. Interactions. 2015;22(4):61–63. doi:10.1145/2776885
9. Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Policy Syst. 2020;18(1):17. doi:10.1186/s12961-020-0528-9
10. Thomas J, Noel-Storr A, Marshall I, et al; Living Systematic Review Network. Living systematic reviews: 2. Combining human and machine effort. J Clin Epidemiol. 2017;91:31–37. doi:10.1016/j.jclinepi.2017.08.011
11. The Cochrane Collaboration. Cochrane Crowd. Accessed April 11, 2022.
12. Carman KL, Dardess P, Maurer M, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health Aff. 2013;32(2):223–231. doi:10.1377/hlthaff.2012.1133
13. Gilbert N, Cousins JB. Advancing patient engagement in health service improvement: what can the evaluation community offer? Can J Progr Eval. 2017;32(2). doi:10.3138/cjpe.31120
14. Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The Global Evidence Mapping Initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11:92. doi:10.1186/1471-2288-11-92
15. Synnot AJ, Tong A, Bragge P, et al. Selecting, refining and identifying priority Cochrane reviews in health communication and participation in partnership with consumers and other stakeholders. Health Res Policy Syst. 2019;17(1):45. doi:10.1186/s12961-019-0444-z
16. Baron JH. Sailors' scurvy before and after James Lind—a reassessment. Nutr Rev. 2009;67(6):315–332. doi:10.1111/j.1753-4887.2009.00205.x
17. Bragge P. From centuries to hours: the journey of research into practice. Digit Gov Res Pract. 2022;3(2):1–13. doi:10.1145/3529166
18. Jinha A. Article 50 million: an estimate of the number of scholarly articles in existence. Learn Publ. 2010;23(3):258–263. doi:10.1087/20100308
19. Covidence. Better systematic review management. Accessed May 3, 2022.
20. van der Mierden S, Tsaioun K, Bleich A, Leenaars CHC. Software tools for literature screening in systematic reviews in biomedical research. ALTEX. 2019;36(3):508–517. doi:10.14573/altex.1902131
21. Hamel C, Michaud A, Thuku M, et al. Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol. 2021;129:74–85. doi:10.1016/j.jclinepi.2020.09.041
22. Yamada T, Yoneoka D, Hiraike Y, et al. Deep neural network for reducing the screening workload in systematic reviews for clinical guidelines: algorithm validation study. J Med Internet Res. 2020;22(12):e22422. doi:10.2196/22422
23. Giummarra MJ, Lau G, Gabbe BJ. Evaluation of text mining to reduce screening workload for injury-focused systematic reviews. Inj Prev. 2020;26(1):55–60. doi:10.1136/injuryprev-2019-043247
24. Negrini S, Ceravolo MG, Côté P, Arienti C. A systematic review that is “rapid” and “living”: a specific answer to the COVID-19 pandemic. J Clin Epidemiol. 2021;138:194–198. doi:10.1016/j.jclinepi.2021.05.025
25. Akl EA, Meerpohl JJ, Elliott J, Kahale LA, Schünemann HJ; Living Systematic Review Network. Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol. 2017;91:47–53. doi:10.1016/j.jclinepi.2017.08.009
26. Vandvik PO, Brandt L. Future of evidence ecosystem series: evidence ecosystems and learning health systems: why bother? J Clin Epidemiol. 2020;123:166–170. doi:10.1016/j.jclinepi.2020.02.008
27. Nakagawa S, Dunn AG, Lagisz M, et al. A new ecosystem for evidence synthesis. Nat Ecol Evol. 2020;4(4):498–501. doi:10.1038/s41559-020-1153-2
28. Kastner M, Bhattacharyya O, Hayden L, et al. Guideline uptake is influenced by six implementability domains for creating and communicating guidelines: a realist review. J Clin Epidemiol. 2015;68(5):498–509. doi:10.1016/j.jclinepi.2014.12.013
29. Brouwers MC, Makarski J, Kastner M, Hayden L, Bhattacharyya O; GUIDE-M Research Team. The Guideline Implementability Decision Excellence Model (GUIDE-M): a mixed methods approach to create an international resource to advance the practice guideline field. Implement Sci. 2015;10:36. doi:10.1186/s13012-015-0225-1
30. Nowell C, Downing M, Bragge P, Ponsford J. Current practice of cognitive rehabilitation following traumatic brain injury: an international survey. Neuropsychol Rehabil. 2020;30(10):1976–1995. doi:10.1080/09602011.2019.1623823
31. Bragge P, Pattuwage L, Marshall S, et al. Quality of guidelines for cognitive rehabilitation following traumatic brain injury. J Head Trauma Rehabil. 2014;29(4):277–289. doi:10.1097/HTR.0000000000000066
32. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the “evidence cart.” JAMA. 1998;280(15):1336–1338. doi:10.1001/jama.280.15.1336
33. Wolf SL, Kwakkel G, Bayley M, McDonnell MN. Best practice for arm recovery poststroke: an international application. Physiotherapy. 2016;102(1):1–4. doi:10.1016/
34. Hancock NJ, Collins K, Dorer C, Wolf SL, Bayley M, Pomeroy VM. Evidence-based practice “on-the-go”: using ViaTherapy as a tool to enhance clinical decision making in upper limb rehabilitation after stroke, a quality improvement initiative. BMJ Open Qual. 2019;8(3):e000592. doi:10.1136/bmjoq-2018-000592
35. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50. doi:10.1186/1748-5908-7-50
36. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24. doi:10.1002/chp.47
© 2023 The Authors. Published by Wolters Kluwer Health, Inc.