Educational innovation is not just a frequent practice but a deeply held value in health professions education. From relatively small, local efforts to national and even global movements, the pages of our health professions education journals are a testament to the community’s commitment to innovation. The purpose of innovation is to improve the status quo. Innovations are how we translate aspirational goals into activities or products that are anticipated to lead to the meaningful outcomes we desire. However, many innovations do not realize the lasting changes that they were intended to achieve. While some continue to evolve toward their aspirational purpose, many start to deviate problematically from their intended goals, devolve to the previous state, or are halted altogether. The construction of a durable bridge from aspirations to successful implementation to meaningful and lasting change can be elusive, and this elusiveness stems from the difficulty of introducing change into a complex ecosystem.
Because it cannot be assumed that an innovation will be successful in achieving its goals, it is important to evaluate each innovation in context. While the importance of program evaluation has been well understood for some time, the approaches and methods have shifted significantly over the decades. Before the 1980s, program evaluation was predominantly informed by the experimental paradigm, using controlled studies to determine whether the innovation had achieved its intended outputs. 1–4 Starting in the 1980s, there was a shift to the development of theory-oriented evaluations that acknowledged the need to understand the contextual and transformational processes of implementation in the real world. 5–7 The term “black box evaluation” symbolized the recognition that evaluation tools that merely examine outputs could not capture evidence about the inner workings of implementation efforts. 1,8,9 This was problematic since the lack of information about implementation, and how the innovation functioned in context, prevented any meaningful efforts at improvement or sensible decision making about continuing, modifying, or halting an innovation. Moreover, this approach to evaluation often overlooked, and perhaps perpetuated or even contributed to, unintended and adverse consequences. 10–13
Thus, since the 1990s, there has been a movement toward examining implementation fidelity, an approach to program monitoring and evaluation that examines the inner workings of an innovation in the context of real-world implementation. 14 The core question in the fidelity model of program evaluation is whether the activities of a new program, guideline, procedure, or tool are incorporated into a system as intended. 14–20 Fidelity measures aim to build a bridge between the introduction of an innovation, lessons learned during implementation, and the anticipated outcomes of change. 21 In health professions education, discussions about implementation fidelity and moving beyond “black box evaluation” approaches have reignited alongside efforts at curricular reform. 22–24 For example, van Melle and colleagues have developed a guiding framework that facilitates examining the fidelity of implementing the core components of a curricular innovation introduced across Canada. 25 Additionally, Onyura and colleagues have explored how and why faculty development initiatives take their form by evaluating the relationship between contexts, mechanisms, and outcomes. 26
While this approach has been of undisputed importance in advancing our thinking about program evaluation, the methods by which fidelity is examined in program evaluation efforts may inadvertently limit our thinking about the trajectory of the innovation over time. For example, fidelity assessment tends to capture in-depth snapshots in time about what is happening at the moment of the evaluation. These snapshots may represent an excellent cross section of the implementation phases, but they tend to be focused on the period when the institutional motivation for enacting the innovation is still at its peak (during implementation). Thus, resources (including financial, physical, and personnel resources, as well as active and vocal support from leadership) are still being heavily invested in the initiative. This raises the critical question of what will happen within a complex system when the institution’s attentional resources are allocated elsewhere. Further, fidelity of implementation inquiries tend to focus on whether the program is being delivered as intended, which leads to a focus on the mechanical or behavioral aspects of implementation. This focus may overlook the extent to which these behaviors reflect an adoption of the underlying values that those behaviors are intended to represent, or are merely performative acts by teachers and students during the “surveillance period” of the implementation. 27
This raises the question of whether the behaviors observed during implementation phases represent a meaningful, long-lasting change. It is estimated that 75% of innovations are explicitly abandoned or gradually devolve over time, which often leads to the cyclic reinvention of change initiatives. 28–30 This carousel of curricular renewal 31 is inconvenient, exhausting, and demotivating for all stakeholders within a system. Thus, there is a need to reflect on and advance approaches to monitoring and evaluating implementation that produce a comprehensive picture not only about current fidelity but also about the durability or longevity of the changes. As Hall and colleagues have argued, examining the implementation of an innovation must be considered a “marathon not a sprint.” 32
In implementation science literature, sustainability is a common term used when considering the potential longevity of an innovation. Although a shared definition has yet to be developed (a recent review of 209 original research articles identified 24 different definitions of sustainability 33), the most common understanding of sustainability involves continued support for the delivery of the innovation (e.g., financial and human capital) and continued achievement of the desired outcomes. 33 To date, however, there have been relatively few published studies that examine programs longitudinally, so there is little literature exploring the ways in which (and reasons that) innovations continue (and evolve) effectively, deviate problematically, or devolve once institutional attention is directed elsewhere. Thus, most discussions of sustainability in the context of innovations involve proposed mechanisms of ongoing resource allocation needed to maintain the program’s practices beyond the implementation phase. Because of this focus on resources, the sociological aspects of program longevity (ongoing stakeholder engagement in the goals of the program and the practices that will achieve those goals) are problematically ignored.
A new model that more effectively assesses the (potential) longevity of an innovation (and therefore increases the likelihood of longevity) is needed. The purpose of this paper is to develop an alternative model for evaluating the longevity of an innovation, borrowing from and building on the theories and models of implementation science.
A critical literature review was determined to be the most appropriate method for our purpose, since this approach seeks to identify relevant literature in a field to derive a new conceptual model that expands on existing theories and frameworks. 34 Using critical review methodology, the authors searched the implementation science literature for papers that contained markers or strategies to explore the longevity of an innovation once introduced into a system. While most of the articles were found in the academic literature, some gray literature sources were also identified through interactions with knowledgeable colleagues and via Google Scholar. Having identified relevant papers, the authors traced reference lists to identify models in the implementation science literature that might offer perspectives or approaches to understanding and exploring factors relevant to the evolving model being developed.
Three prominent implementation science models were identified as relevant to this critical review. Integrating notions from Normalization Process Theory (NPT), 35 the Consolidated Framework for Implementation Research (CFIR), 36 and Reflexive Monitoring in Action (RMA), 37 we offer a framework that highlights 6 questions that must be considered when evaluating the potential longevity of an innovation.
The foundation for the new framework began with the work of May and colleagues, who have suggested that the success of an implementation will be dependent on the extent to which the innovation is effectively embedded in the organizational context and integrated into professional practices, a process they describe as normalization. 35,38 Core features of normalization include the shared understanding of the impetus, importance, and practices of the change (coherence); contribution to the change by participating in and monitoring the activities of the innovation (cognitive participation); the interactional and relational work to perform new practices (collective action); and the work of evaluating the impact of the change (reflexive monitoring). 35 As such, in NPT, evaluating the longevity of an innovation involves examining the interactions between individuals and groups as they make sense of the work and appraise their capabilities and resources to contribute to collective change.
Contemplating monitoring and evaluation from a different perspective, Damschroder and colleagues have proposed that implementation success is multifaceted and involves factors beyond the work of individuals. They developed the CFIR 36 to guide program evaluators to look at the larger ecosystem of change. Thus, the CFIR model examines 5 primary domains: (1) the characteristics of the innovation, such as design quality and strength of evidence, trialability, and perceived relative advantage of the change; (2) the “outer” setting, such as external policies, pressures, needs, and resources; (3) the “inner” setting, such as the structure of the organization, culture, and readiness for change; (4) individual characteristics, such as knowledge and beliefs about the innovation and self-efficacy; and (5) the process of implementation, such as strategies used to introduce change into a system, and reflecting on and evaluating innovation outcomes. 36
These 2 frameworks can be complemented well by the RMA 37 framework. Reflexive monitoring promotes rational action by examining past experiences with an innovation to inform future plans to reach desired goals. 39 Thus, RMA promotes the use of collective reflection, and learning from these reflections, to accomplish long-term system-level change to reach the ambitions of the innovation. 37 RMA oscillates between close examination of the everyday experiences of the people implementing an innovation and uncovering system-level changes that are needed to sustain the innovation beyond implementation. 37 RMA resembles the cyclic activities of other monitoring and evaluation approaches, such as the Plan-Do-Study-Act model and other quality improvement initiatives in health care settings. 40 However, it differs from these approaches in 3 significant ways. First, reflexive monitoring is ideally suited for innovations that require radical transformation rather than innovations that require incremental changes within a system. 37 Second, RMA examines the longer-term aspirational goals of the innovation rather than exploring a segment of implementation over a short period of time. 37 Third, RMA examines the longevity of the innovation by oscillating between optimizing the activities of the innovation and the alignment of these activities with the goals of the innovation. 37 As implied by the name, reflexive monitoring is necessarily an ongoing process that should be enacted beyond the initial implementation period.
Putting these theories, frameworks, and practices together, it is evident that markers for successful implementation are rooted in how the system and people doing the work interact with both the features and activities of the innovation and the grand aspirations of the change. As such, our new model, Eco-Normalization (i.e., normalization of the values and practices of the innovation not just at the individual level but at the level of the entire ecosystem), is an expansion of previous literature in the form of a guiding framework for implementation research and evaluation that focuses on the potential longevity of change. By examining the interactions between the innovation, the institutional system in which the innovation is embedded, and the individual people doing the work, the Eco-Normalization model offers a framework to systematically explore the conditions and factors that might affect the meaningfulness and longevity of an implemented innovation as depicted in Figure 1. Primary-level interactions hold the aspirations of the change at the forefront when examining the interaction between these aspirations and each of 3 components: the design of the innovation, the local aspirations of stakeholders, and the aspirations of the system (Figure 1). Secondary-level interactions begin to explore more complex relationships between primary-level interactions: the compatibility of the innovation with the practices of the people doing the work, the ways in which the system impacts stakeholders, and how the features of the innovation fit with the features of the system. Assessing the extent to which the various components align (or not) at the primary and secondary levels of interaction produces 6 questions that form the basis of an Eco-Normalization program evaluation.
Interactions between change aspirations and innovation design: Does the innovation, as designed, align with the grand aspirations of change?
The alignment between the aspirational goals of change and the design of the innovation is a critical feature of implementation success. For example, in health professions education, grand aspirational goals may include development of a culture of personalized learning. However, translating these grand aspirations into an innovation with new practices and activities is founded on strategic intentions, which are propositions or assumptions about what is expected to happen and why, often illustrated in the form of a logic model or development of a program theory. 24 These assumptions can be proven, modified, or contested in practice, through a process that might resemble the evaluation of implementation fidelity. However, consistent with the tenets of RMA, the Eco-Normalization model includes the premise that innovations are not static, but a set of dynamic features that are expected to evolve in ever-changing contexts. Thus, considering the reality that innovations are expected to evolve, a critical feature of Eco-Normalization is cyclic monitoring to uncover if/how the activities and practices of the innovation are being altered in ways that might facilitate or hinder achievement of the grand aspirations behind the change.
Interactions between change aspirations and the system: Do the system goals align with the grand aspirations of change?
Innovations are generally incorporated into established inner settings that have deep cultural, procedural, fiscal, and structural roots. For example, personalized learning may be an aspiration underlying the change (which might require greater attention and resources dedicated to teaching and learning). While the institution might agree with this aspiration in principle, institutional leaders might have to balance this aspiration against other values and aspirations, such as financial responsibility to shareholders. Prioritizing financially lucrative activities might implicitly or explicitly deprioritize excellence in education and social accountability. Thus, in isolation, each of these aspirational goals may be priorities of the system; however, in combination, the conflict between the values that underlie the motivation for change and others that are exclusive to the system has the potential to hinder Eco-Normalization, or the longevity, of the change.
Interactions between change aspirations and the people doing the work: Do the stakeholders’ local aspirations align with the grand aspirations of change?
An assumption in the development of an innovation is that grand aspirations, such as improving patient care through better educational practices, align with the local aspirations of frontline stakeholders on the ground (the teachers, students, and administrative staff). However, while the grand aspirations may be supported in principle by those who are living the change, these grand aspirations are often very far removed from the concerns of daily practice. This can lead to a chasm between the impetus for change and the functional needs and values of those experiencing the change. For example, a grand aspiration of greater educational accountability might require clinical teachers to complete more frequent or more precise assessments of their learners, but this might be perceived by teachers as a “gatekeeping” practice that interferes with their more immediate goal of developing a safe learning environment and mentoring relationship with their learner. As another example, a grand aspiration might be the development of adaptive experts, such that the innovation promotes developing adaptive approaches to practice, but learners might have an ingrained mental model that values knowledge acquisition and certainty, leading them to reject efforts to teach them problem solving in contexts of uncertainty. Thus, the out-of-reach aspirations of developers may lead to the innovation deviating problematically or halting because of conflicts with the more local vision, values, and relative priorities of stakeholders on the ground, hindering meaningful longevity of the change.
Interactions between the system and the innovation: Does the innovation interact with the system in a way that will lead to the aspirations of change?
The implementation of an innovation impacts the collection of interrelated parts of a system. While it seems logical that introducing an innovation into an existing system requires changes to organizational structures, policies, priorities, tools, incentives, and delegated roles, some of these aspects are hidden, overlooked, or not prioritized. Underprioritization of these factors can significantly impact the trajectory of an innovation. For example, the activities of an innovation may not be compatible with the system, such as introducing a new clinical curriculum without considering its implications for patient care routines, or introducing a new course without considering how assessment in that course might interface with the larger assessment system. Such interactions, if not carefully considered and managed, can create (rather than solve) problems for effective integration of the innovation into systemic practices and policies. In these circumstances, systemic barriers can hinder collective action, which can lead to the innovation deviating problematically or halting, preventing the longevity of successful change.
Interactions between the people doing the work and the innovation: Does the innovation give meaning to the actions and agency of the people doing the work in ways that will lead to the aspirations of change?
Often, the onus of implementation failure or the lack of longevity of an innovation is placed on frontline implementers. The characteristics of individuals, such as knowledge and skills about the innovation, are frequently assumed to be the missing link. For example, challenges with implementation are met with more professional development initiatives that assume that implementers need more knowledge, skills, or convincing about the impetus for change and how to enact the activities of the innovation. However, there are many concurrent factors that may hinder the development of meaning and values that lead to action. In fact, frontline implementers may be acutely aware of the impetus for change and of how to undertake the activities as described by the developers, yet perceive the mandated innovation to be misaligned with local aspirations and functional priorities, and stifling to their sense of agency and power. Agency and power are imperative when implementing an innovation because they foster responsibility and relationships, which are critical ingredients of collective system-wide action. For example, overlooking the professional development of administrative staff facilitating implementation and neglecting to actively engage the receivers of an innovation can impact their sense of agency, power, and collectivity, derailing the longevity of successful change.
Interactions between the system and the people doing the work: Does the system support the people doing the work in ways that will lead to aspirations of change?
The compatibility and adaptability of the innovation and the system have a trickle-down effect on frontline implementers, or the people doing the work of the innovation. For example, the activities of an innovation may require the reorganization of workflows or the use of new tools and reallocation of time from clinical duties to teaching. If the system is rigid and static, and therefore unable to accommodate the new practices, then these practices become an added burden for frontline stakeholders. Without an ecosystem that supports the enactment of new activities of the innovation, these activities risk being reduced to a series of administrative rituals or bypassed altogether. This undermines the intentions of the change and promotes the problematic deviation and halting of the innovation, hindering successful long-term change.
In the proposed model of Eco-Normalization, innovations, the people doing the work, and the system are acknowledged to be symbiotic and not independent. Further, Eco-Normalization embraces and encourages forward planning that is dynamic rather than rigid, so the evolution of an innovation is considered a natural process and outcome of implementation rather than a sign of failure. As identities, roles, and activities evolve through the work of implementing something new, the innovation and the system, too, need to evolve during implementation to promote the longevity of the change.
The framing of Eco-Normalization is compatible with a growing conversation drawing a distinction between fidelity and integrity of an implementation. While most implementation science literature uses the terms fidelity, integrity, and adherence synonymously, 39 the available alternate definitions of implementation integrity reflect the ability to adapt and modify aspects of an innovation to be compatible with local needs and circumstances, while remaining true to the underlying philosophy or core components of the change. 41,42 Indeed, significant factors fostering the successful implementation of an innovation include the adaptability and compatibility of the innovation in context. 36 At first glance, thinking of the possible permutations and combinations of innovation components may seem threatening or unmanageable. However, asking the 6 critical evaluation questions iteratively, during and after the implementation phases, can advance an evidence-informed understanding of how and why innovations take their final form as well as the extent to which that final form continues to support the values and goals of change as initially envisioned. As such, program evaluation will evolve from snapshots of fidelity or adherence to the practices of an innovation to a reflective process that critically and cyclically examines the appropriateness of the innovation, including adaptations and modifications, and the longevity of the aspirational goals.
While innovations are designed with strategic intentions that are proposed to lead to desired outcomes, critical examination, reflection, and rational action are necessary to recognize evidence suggesting that these strategies may interact with the values and practices of the system and of those on the ground in ways that are antithetical to the broader aspirational goals. Progressing from “black box evaluations” to understanding the inner workings of implementation is a significant advancement in the community’s understanding of innovation success. Building on this understanding of implementation fidelity and implementation integrity, the Eco-Normalization model has the potential to enable program evaluators to better explore the longevity of innovations through thorough examination of what is being implemented, including adaptations and modifications. It frames the innovation as a dynamic rather than static set of practices, it treats the system and the workers on the ground as agentive partners with their own goals and agendas, and it acknowledges the interactions between all these factors as the ecosystem seeks its “new normal” following the perturbation of change. Indeed, Eco-Normalization judiciously examines the ecosystem of change: the grand versus local aspirations for change, the innovation, strategic intentions, the people doing the work of implementation, and the role of the system. When embarking on a new journey, Eco-Normalization critically and cyclically appraises innovation experiences to ensure the upcoming path continues to lead in the direction of the desired destination. Although still early in its development, we hope that, through this framework, innovators and evaluators will be able to peel back another layer of the elusiveness surrounding implementation and the ecosystem of change to deepen understanding of the features that contribute to (or hinder) the longevity of innovations in context.
1. Chen H, Rossi P. Issues in the theory-driven perspective. Eval Program Plann. 1989; 12:299–306
2. Rose C. Evaluation designs. POD Quarterly: J Prof Organ Dev Network Higher Educ. 1980;2:38–46.
3. Stufflebeam D. The use of experimental design in educational evaluation. J Educ Meas. 1971; 8:267–274
4. Campbell DT, Stanley JC. Experimental and quasi-experimental designs for research on teaching. In: Gage NL, ed. Handbook of Research on Teaching. Chicago, IL: Rand McNally, 1963
5. Chen H, Rossi P. Evaluating with sense: The theory-driven approach. Eval Rev. 1983; 7:283–302
6. Cordray D. Optimizing validity in program research: An elaboration of Chen and Rossi's theory-driven approach. Eval Program Plann. 1989; 12:379–385
7. Judd CM. Combining process and outcome evaluation. New Dir Program Eval. 1987; 23–41. doi:10.1002/ev.1457
8. Lipsey MW, Pollard JA. Driving toward theory in program evaluation: More models to choose from. Eval Program Plann. 1989; 12:317–328
9. Lipsey MW. Theory as method: Small theories of treatments. New Dir Program Eval. 1993;57:5–38
10. Jabeen S. Unintended outcomes evaluation approach: A plausible way to evaluate unintended outcomes of social development programmes. Eval Program Plann. 2018; 68:262–274
11. Jabeen S. Do we really care about unintended outcomes? An analysis of evaluation theory and practice. Eval Program Plann. 2016; 55:144–154
12. Merton RK. The unanticipated consequences of purposive social action. Am Sociol Rev. 1936; 1:894–904
13. Meyers WR. The Evaluation Enterprise. London, UK: Jossey-Bass Publishers, 1981
14. Rogers EM. Diffusion of Innovations. New York, NY: Free Press, 1995
15. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Educ Res. 2003; 18:237–256
16. O’Donnell C. Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Rev Educ Res. 2008; 78:33–84
17. Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clin Psychol Rev. 1998; 18:23–45
18. Moncher F, Prinz RJ. Treatment fidelity in outcome studies. Clin Psychol Rev. 1991; 11:247–266
19. Gresham FM, Gansle KA, Noell GH. Treatment integrity in applied behavior analysis with children. J Appl Behav Anal. 1993; 26:257–263
20. Power TP, Blom-Hoffman J, Clarke AT, Riley-Tillman TC, Kelleher C, Manz P. Reconceptualizing intervention integrity: A partnership-based framework for linking research with practice. Psychol Sch. 2005; 42:495–507
21. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007; 2:40
22. Van Melle E, Gruppen L, Holmboe ES, Flynn L, Oandasan I, Frank JR. International Competency-Based Medical Education Collaborators. Using contribution analysis to evaluate competency-based medical education programs: It’s all about rigor in thinking. Acad Med. 2017; 92:752–758
23. Oandasan I, Martin L, McGuire M, Zorzi R. Twelve tips for improvement-oriented evaluation of competency-based medical education. Med Teach. 2020; 42:272–277
24. Hamza DM, Ross S, Oandasan I. Process and outcome evaluation of a CBME intervention guided by program theory. J Eval Clin Pract. 2020; 26:1096–1104
25. Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J. International Competency-based Medical Education Collaborators. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019; 94:1002–1009
26. Onyura B, Ng SL, Baker LR, Lieff S, Millar BA, Mori B. A mandala of faculty development: Using theory-based evaluation to explore contexts, mechanisms and outcomes. Adv Health Sci Educ Theory Pract. 2017; 22:165–186
27. Onyura B, Baker L, Cameron B, Friesen F, Leslie K. Evidence for curricular and instructional design approaches in undergraduate medical education: An umbrella review. Med Teach. 2016; 38:150–161
28. Schneider J, Hall J. Why most product launches fail. Harvard Business Review. https://hbr.org/2011/04/why-most-product-launches-fail. Published April 2011. Accessed July 25, 2021
29. Kocina L. What percentage of new products fail and why? Media Relations Agency. https://www.publicity.com/marketsmart-newsletters/percentage-new-products-fail/?cn-reloaded=1. Published May 3, 2017. Accessed July 25, 2021
30. Viki T. Why innovation fails. Forbes. https://www.forbes.com/sites/tendayiviki/2018/02/28/why-innovation-fails/?sh=48a2163280be. Published February 28, 2018. Accessed July 25, 2021
31. Whitehead CR, Hodges BD, Austin Z. Captive on a carousel: Discourses of ‘new’ in medical education 1910–2010. Adv Health Sci Educ. 2013; 18:755–768
32. Hall AK, Rich J, Dagnone JD, et al. It’s a marathon, not a sprint: Rapid evaluation of competency-based medical education program implementation. Acad Med. 2020; 95:786–793
33. Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017; 12:110
34. Grant MJ, Booth A. A typology of reviews: An analysis of 14 review types and associated methodologies. Health Info Libr J. 2009; 26:91–108
35. May C, Finch T. Implementing, embedding, and integrating processes. Sociology. 2009; 43:535–554
36. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009; 4:50
37. van Mierlo B, Regeer B, van Amstel M, et al. Reflexive monitoring in action: A guide for monitoring system innovation projects. Communication and Innovation Studies. https://www.researchgate.net/publication/46383381_Reflexive_Monitoring_in_Action_A_guide_for_monitoring_system_innovation_projects. Published 2010. Accessed July 25, 2021
38. May CR, Finch T, Ballini L, et al. Evaluating complex interventions and health technologies using normalization process theory: Development of a simplified approach and web-enabled toolkit. BMC Health Serv Res. 2011; 11:245
39. Giddens A. Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis. Berkeley, CA: University of California Press, 1979
40. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2014; 23:290–298
41. CBD Program Evaluation Operations Team. Competence by Design (CBD) Implementation Pulse Check. Ottawa, ON, Canada: Royal College of Physicians and Surgeons of Canada, 2020. https://www.royalcollege.ca/rcsite/cbd/cbd-program-evaluation-e. Accessed September 1, 2021
42. LeMahieu P. What we need in education is more integrity (and less fidelity) of implementation. Carnegie Foundation for the Advancement of Teaching. https://www.carnegiefoundation.org/blog/what-we-need-in-education-is-more-integrity-and-less-fidelity-of-implementation. Published October 11, 2011. Accessed July 25, 2021