The transition to competency-based medical education (CBME) can present significant challenges. 1–3 This transition is not just a matter of getting used to new terminology. CBME substantially increases the complexity and burden of assessment practices and procedures. 4,5 While assessment in the pre-CBME era often generated 1 or 2 assessments per trainee per rotation, CBME programs, particularly those structured around entrustable professional activities (EPAs), often generate multiple weekly assessments for every trainee. 5
The successful implementation of CBME requires a systemic change in the use of assessment data. Electronic tracking systems have been established, competence committees have been formed to review the increased volume of EPA-based assessment data, 6 and new programmatic theories of assessment have been applied to these heterogeneous and longitudinal datasets, which, in turn, have changed the inferences that we draw from them. 7,8 Although this approach is predicated on aggregating data and using them to model trainee progress, how these data should be amalgamated, analyzed, and visualized in practice is not well established. 2 Defaulting to simple linear approaches runs the risk of collapsing nuanced data into reductionistic trends 9 and of nullifying the advantages of narrative feedback and other more sophisticated modeling and analysis. Further work will be needed to harness both the narrative and numerical data of CBME assessments to foster a balanced and insightful perspective on trainee performance.
Lessons from other fields illustrate the many different ways in which data might be aggregated and used, each of which has implications for the kinds of inferences that might be drawn. 10,11 For example, business 12 and sport 13 use increasingly sophisticated data collection and analysis techniques. Similar efforts in higher education originated with the advent of online learning platforms. 14 The resulting field of learning analytics has been defined as the “measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.” 15 Given the increasing availability of large assessment datasets in CBME, it is not surprising that learning analytics are increasingly being referenced within the medical education literature. CBME outcomes frameworks 16–18 and research agendas 19 describe a range of concepts related to learning analytics that medical education researchers are exploring both conceptually 20–23 and practically. 5,24
At first glance, learning analytics tools and practices have immense potential to leverage the extensive assessment datasets resulting from EPA-based CBME training programs to improve medical education and patient care. 11,20 However, these analytics have also been problematized. For instance, ten Cate et al warn about analytics implementations that overstandardize, constrain activity, impact educational efficiency and cost, and threaten the integrity of programs and their outcomes. 21 There is a need for mindful and deliberate exploration of and alignment between educational programs, the analytics approaches they employ, and the affordances of different implementation designs.
As the widespread utilization of learning analytics to interpret CBME assessment data becomes increasingly viable, it is more important than ever to chart the future path within this brave new world. 25 In this paper, we explore utopian and dystopian perspectives on learning analytics (see Boxes 1 and 2) in CBME with the goal of mapping a cautious but determined future for its application in our field (see Box 3) by incorporating principled guidance from the broader education and computer science literature.
A Possible Utopia
In a utopia, everything works to the best advantage of everyone. People are happy and able to thrive and grow as they will. Our utopian scenario (Box 1) reflects much of the hope (or perhaps hype) that has been expressed with regard to learning analytics 11,20,26; everything works, benefits multiply, and everyone wins. Of course, this is too good to be true.
Box 1 Fictional Case Study Describing a Utopian Scenario for Learning Analytics in a Competency-Based Medical Education System
Dr. Walzar leaned back from her computer screen and smiled. The other program directors at her hospital had been skeptical when she had proposed working with their information technology department to develop new software to address the challenges of their competency-based assessment system. They had heard similar promises of efficiency before the implementation of their electronic health record, but it had never lived up to the hype. This was different. Once she had shown them a working prototype, they were begging to have their programs brought on board. Setting up and optimizing their new data visualization and analytics suite had been a challenge, but the results were worth it. Her program administrator, unleashed from the shackles of organizing spreadsheet after spreadsheet of assessment data, was finally taking a vacation and, in her spare time, had been coming up with new ideas to improve their program. Her competence committee members were happy as their meetings went from monthly, full-day marathons to leisurely mornings once a quarter. And she was sure that they were making better decisions, too! Even her trainees were impressed with the improved feedback they were receiving after they incorporated their new faculty dashboard into their faculty development sessions. They had also noticed that the data-driven changes to their curriculum were giving them ample opportunity to get the experiences they needed to become competent physicians.
Most recently, she had connected with some of her colleagues with backgrounds in quality improvement. They were intrigued by her progress and wanted to explore the integration of clinical outcomes with her educational data. They had begun integrating data from their electronic health record into the assessment dashboard to quantify the quality of each trainee’s clinical care. She was excited about the potential to track clinical metrics alongside educational assessments and see how they could be used together. This new dashboard would provide trainees with insights about their own practice patterns that had direct impact on the health of their patients. Dr. Walzar was looking forward to working with her clinical operations team to create these new clinical learning analytics.
There had also been other benefits for her as a program director. She could readily access her trainees’ data and receive real-time alerts when concerning assessments or assessment patterns were found. She also tracked the assessment behaviors of her faculty members to identify abnormal behavior. For example, she thought that Dr. Bisen had been submitting sexist assessments for years, but until the computer identified it, she did not have any proof! Best of all, the submission of assessment data to their accrediting body was now automated. Their analyses showed that her program was in the top 5% of programs in the entire country in terms of the quantity and quality of their assessments. She would have to remember to highlight that accomplishment in her application for promotion later this year. All of this “learning analytics stuff” had panned out after all.
The utopian scenario demonstrates the potential benefits of data visualization and learning analytics in the context of assessing competence, both formatively and summatively. In particular, data visualization could allow for the efficient review of frequent, low-stakes EPA-based assessment data to help trainees as well as their preceptors and program directors to identify patterns of struggling, satisfactory, or excelling performance. 27–29 These techniques could feed into a program of organizational development through the simultaneous use of these assessment data for faculty development and continuous quality improvement. 22
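To illustrate what such a review might look like in practice, the following sketch is illustrative only: it assumes a hypothetical flat file of EPA observations (epa_assessments.csv) with a trainee identifier, observation date, and entrustment rating on a 5-point scale, and plots a rolling mean per trainee so that sustained patterns of struggling, satisfactory, or excelling performance stand out from isolated ratings.

```python
# Illustrative sketch only: plot rolling-mean entrustment ratings per trainee.
# Assumes a hypothetical CSV with columns: trainee_id, date, entrustment (1-5).
import pandas as pd
import matplotlib.pyplot as plt

epa = pd.read_csv("epa_assessments.csv", parse_dates=["date"])
epa = epa.sort_values(["trainee_id", "date"])

# A 10-observation rolling mean smooths isolated outlying ratings so that
# sustained patterns (struggling, satisfactory, excelling) stand out.
epa["rolling_mean"] = (
    epa.groupby("trainee_id")["entrustment"]
       .transform(lambda s: s.rolling(window=10, min_periods=3).mean())
)

fig, ax = plt.subplots(figsize=(8, 4))
for trainee, group in epa.groupby("trainee_id"):
    ax.plot(group["date"], group["rolling_mean"], label=str(trainee), alpha=0.7)
ax.set_xlabel("Observation date")
ax.set_ylabel("Rolling mean entrustment (1-5)")
ax.set_title("Longitudinal EPA entrustment trends (illustrative)")
ax.legend(loc="best", fontsize="small")
plt.tight_layout()
plt.show()
```

A rolling mean is a deliberately simple summary; a dashboard intended for competence committee use would also need to surface narrative comments, EPA-specific expectations, and the contextual information discussed later in this paper.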
For example, there is validity evidence to support the use of the Quality of Assessment of Learning (QuAL) score for the evaluation of the quality of narrative assessment comments. 30,31 The automation of QuAL score ratings with natural language processing algorithms could support summative assessment decisions by making it easier for competence committees to identify EPAs with high-quality narrative comments. These scores could also be amalgamated at the faculty member and/or program level to quantify the impact of interventions focused on improving narrative feedback over time. There are clear parallels with precision medicine 32: as individual and group performance is characterized in greater detail, individuals can benefit from that precision. This information may also help us to better align the learning environment with individual trainees’ needs and trajectories. 33
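As a rough sketch of what such automation might involve, the example below trains a simple regression model to approximate human quality ratings of narrative comments. It does not implement the QuAL instrument itself; the labeled file (rated_comments.csv), the column names, and the bag-of-words features are assumptions chosen for brevity, and any real system would require its own validity evidence before informing committee decisions.

```python
# Illustrative sketch only: approximate human quality ratings of narrative
# comments from text features. This is not the validated QuAL instrument.
# Assumes a hypothetical CSV with columns: comment, qual_score (0-5, human-rated).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rated = pd.read_csv("rated_comments.csv")
X_train, X_test, y_train, y_test = train_test_split(
    rated["comment"], rated["qual_score"], test_size=0.2, random_state=0
)

# Bag-of-words features are a deliberately simple baseline; richer language
# models could be substituted once local validity evidence supports them.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
model = Ridge(alpha=1.0)
model.fit(vectorizer.fit_transform(X_train), y_train)

predictions = model.predict(vectorizer.transform(X_test))
print("Mean absolute error vs. human ratings:",
      round(mean_absolute_error(y_test, predictions), 2))
```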
Advances in learning analytics could also help to address equity issues through the identification and quantification of sexist or racist assessment practices. Until these issues can be readily identified, it is unlikely that they will be addressed. Early work by Mueller et al 34 and Dayal et al 35 on gender bias in assessments could be expanded and automated to monitor for biases within the assessment data of individual faculty members, training programs, institutions, and specialties. This information could guide further investigation into rater biases and inform the development of interventions to detect and mitigate them.
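A minimal screening approach, sketched below under the assumption that assessment records can be linked to trainee demographic data, compares each faculty member's entrustment ratings across trainee gender. The file and column names are hypothetical, and a statistical difference is a prompt for careful human review rather than proof of bias.

```python
# Illustrative sketch only: screen each faculty member's entrustment ratings
# for gender differences. Assumes a hypothetical CSV with columns:
# assessor_id, trainee_gender, entrustment. A flag prompts human review only.
import pandas as pd
from scipy.stats import mannwhitneyu

ratings = pd.read_csv("epa_assessments_with_demographics.csv")

for assessor, group in ratings.groupby("assessor_id"):
    men = group.loc[group["trainee_gender"] == "man", "entrustment"]
    women = group.loc[group["trainee_gender"] == "woman", "entrustment"]
    if len(men) < 20 or len(women) < 20:
        continue  # too few observations to support a meaningful comparison
    statistic, p_value = mannwhitneyu(men, women, alternative="two-sided")
    if p_value < 0.01:  # conservative threshold given the many comparisons
        print(f"Review ratings from {assessor}: median "
              f"{men.median()} (men) vs {women.median()} (women), p={p_value:.3f}")
```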
Other utopian perspectives focus on mining and integrating multiple data sources, in particular clinical outcome data. 16,17 Work has already begun on the integration of clinical data and educational assessment 36,37 as well as the identification of resident-sensitive quality metrics 38,39 that move beyond the interdependence of faculty and trainee clinical outcomes. 40,41 In the future, we anticipate that collaboration between physicians with expertise in quality improvement and medical education will link trainee assessment data with the clinical outcomes of the patients that they have cared for. 42 To realize this clinical–educational integration, links will need to be established between learning analytics, practice analytics, and patient outcomes. Ideally, these connections would help to optimize the quality and cost of care using integrated analytics and visualizations that can be meaningfully interpreted to support individual-, program-, and system-level changes.
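As an illustration of where such a linkage might begin, the sketch below joins pseudonymized EPA assessments to encounter-level, resident-sensitive quality metrics. The files, identifiers, and metrics are hypothetical, and any observed association would need to be interpreted in light of the interdependence of health care teams.

```python
# Illustrative sketch only: link pseudonymized EPA assessments to hypothetical
# resident-sensitive quality metrics recorded for the same clinical encounters.
import pandas as pd

epa = pd.read_csv("epa_assessments_pseudonymized.csv")   # trainee_code, encounter_id, entrustment
quality = pd.read_csv("resident_sensitive_metrics.csv")  # encounter_id, metric_met (0/1)

linked = epa.merge(quality, on="encounter_id", how="inner")

# Summarize, per trainee, how entrustment ratings and encounter-level quality
# co-vary; attribution to the individual trainee remains an open question.
summary = (
    linked.groupby("trainee_code")
          .agg(mean_entrustment=("entrustment", "mean"),
               metric_pass_rate=("metric_met", "mean"),
               encounters=("encounter_id", "nunique"))
)
print(summary.sort_values("metric_pass_rate"))
```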
Of course, whether panopticism of this kind is utopian is debatable, as one person’s utopia might well be another person’s dystopia.
A Prospective Dystopia
In a dystopia, things are far from ideal and dissatisfaction is the norm. The pace of technological advancement makes it likely that it will not be long before software vendors are knocking on the doors of our institutions promising digital transformation. Our use of artificial intelligence will continue to improve, and it is inevitable that it will be applied more broadly to both our assessment and clinical data. If we do not have a structure in place to guide its implementation, our dystopian scenario (Box 2) may come to pass.
Box 2 Fictional Case Study Describing a Dystopian Scenario for Learning Analytics in Competency-Based Medical Education
One minute Dr. Walzar had been enjoying glowing feedback from her dean and fellow program directors on the flawless implementation of her data visualization and analytics system. The next minute it had come crashing down. It started with a notification that she received about one of her trainees. Jacob was well liked within the program and had been progressing well until the computer had flagged his recent assessment pattern as concerning. When the competence committee had seen similar flags in the past, they had not acted and, sure enough, within a few months the evidence demonstrated that these trainees were struggling. The competence committee members decided that this time they did not want to wait. They requested that Dr. Walzar call Jacob in for a meeting to check in.
During the meeting, she mentioned that their algorithm had flagged his EPA performance, but that the competence committee could not identify any problems just yet. He had become visibly upset, asking “What kind of fortune-telling garbage is this? Are you seriously saying you can predict the future?” From then on, Jacob was distrustful of her, the assessment system, and the training program. His clinical performance deteriorated. She wondered if the computer had been right and this was inevitable, or if it was the meeting itself that had set off his decline. Regardless, the problem had festered until she had no choice but to terminate him from the program. She had been served with a lawsuit regarding his dismissal the very next week.
The problems did not end there. Last week their accrediting body had been hacked. All their trainees’ assessment data were stolen and published online. It did not seem like there was anything that they could do to have them removed. She had heard whispers from her trainees that the major hospital groups had gotten access to these data and, without any explanation, had canceled the interviews of the trainees who had struggled during parts of their training.
Worst of all, one of the leaked assessments referenced the care received by a patient who was currently suing her trainee, colleagues, and hospital. During the trainee’s intensive care rotation, they had struggled to place a central line. The faculty member’s EPA assessment of the procedure indicated a low level of entrustment and provided helpful formative feedback that highlighted difficulties with technique. Unfortunately, this may have contributed to the patient’s development of a central line infection. The trainee’s lawyers were certain that this assessment was going to be cited in the lawsuit as evidence of negligence.
Forget about that promotion. She was worried that she was going to have to leave academic medicine altogether. She wished that she had never gotten started with this learning analytics stuff.
The dystopian scenario illustrates only some of the ways that the sophisticated analysis of trainees’ EPA data could be detrimental. 21 As we move forward, we must consider how we can maintain the benefits of the visualization and analysis of aggregated data in ways that can ultimately lead to improved patient outcomes without succumbing to the problems that it could present. Several areas are deserving of particular attention for those pushing forward innovative data solutions incorporating learning analytics.
Data security
Whether or not an analytics approach is used, the security of trainee assessment data is a concern. 43 While this problem has always existed, modern systems may be more prone to hacking and other digital data security breaches because assessment data are now stored in larger databases with numerous distributed access points. Additionally, the assessment data that we collect are increasingly detailed and sensitive. Unfortunately, as this transition has occurred, our higher education institutions have been subject to an increasing number of security breaches. 44 They have proven to be targets for hackers due to the large amount of valuable data they store about their trainees and employees, the decentralized organization of their information technology infrastructure, the large number of individuals who are able to access their networks, and their comparative lack of resources to combat this threat relative to government and big business. 45 The potential fallout from a leak of educational assessment data is a very real risk that should be considered.
Predictive analytics
In his 1956 science fiction short story The Minority Report, Philip K. Dick introduced the concept of “precrime”: that a person could be arrested and charged for a crime they had not yet committed but that had been predicted by a seemingly infallible system. 46 Predictive analytics are techniques that make inferences about future events, such as future trainee performance. 10 While the incorporation of predictive analytics has the potential to assist overwhelmed competence committees in identifying trainees who may be struggling and providing them with additional support, it is equally possible that prematurely labeling trainees as struggling or otherwise deficient may cause or exacerbate a trainee’s struggles by altering their and their program’s perceptions in unanticipated and potentially negative ways. 47 Other challenges, such as the ethics and legality of making high-stakes decisions on the basis of analytical predictions, also need to be considered.
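To make the mechanics (and the hazards) of such predictions concrete, the sketch below fits a simple logistic regression to hypothetical historical summaries of early trainee performance, labeled by whether a competence committee later identified the trainee as struggling. The files, features, and labels are assumptions; discrimination statistics such as the area under the receiver operating characteristic curve describe average model behavior and say nothing about whether acting on an individual prediction will help or harm.

```python
# Illustrative sketch only: estimate the probability that a trainee will later
# be identified as struggling, from hypothetical early-performance summaries.
# Any real use would require validity evidence and committee oversight.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

history = pd.read_csv("historical_trainee_summaries.csv")
features = history[["mean_entrustment_first_6mo",
                    "assessment_count_first_6mo",
                    "proportion_low_ratings_first_6mo"]]
labels = history["later_identified_as_struggling"]  # 0/1 from past committee decisions

model = LogisticRegression(max_iter=1000)

# Cross-validated discrimination is a starting point, not a licence to act on
# individual predictions without human judgment.
auc_scores = cross_val_score(model, features, labels, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", round(auc_scores.mean(), 2))

model.fit(features, labels)
current = pd.read_csv("current_trainee_summaries.csv")  # hypothetical file
current["predicted_risk"] = model.predict_proba(current[features.columns])[:, 1]
print(current[["trainee_code", "predicted_risk"]].sort_values("predicted_risk"))
```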
Data ownership and use
Ethical issues surrounding access, ownership, and governance of data have been identified as particular concerns with the growing use of learning analytics in medical education. 21,43 These concerns are paralleled within the broader educational literature, which identifies challenges including making high-stakes decisions from fragmented or incomplete data, applying insights derived from populations to individual trainees, inadequately (or simply not) seeking consent from trainees regarding the use of their data, failing to ensure appropriate stewardship of high-sensitivity data, and neglecting to consider the biases propagated by analyses. 48 These issues deserve particular consideration within medical education because EPA-based programs collect detailed, real-world assessments of professional competency in the care of individual patients, whereas the data routinely collected about students in other fields consist largely of relatively abstract alphanumeric grades.
It is also a concern if the analytics engine is a priori assumed to be the ultimate authority in determining progress toward independent practice. Checks and balances, including rights of appeal, need to be built into any such system. It is unlikely that broad institutional policies designed for higher education in general will be a perfect fit for the complex oversight of longitudinal EPA assessment data focused on trainee–patient interactions.
Clinical integration
Medical education is unusual in that a substantial amount of assessment data is collected on trainees within an apprenticeship model wherein trainees provide clinical care of significant consequence. As we consider the potential benefits of linking clinical outcomes with educational assessments, we must ensure that we do not unduly expose trainees to additional medicolegal risk. The Bawa-Garba case in the United Kingdom, wherein a pediatric trainee was convicted of gross negligence manslaughter based, in part, on reflections pulled from her electronic portfolio, suggests that this is more possible than we would like to believe. 49
While the dystopian scenario does not illustrate all of the concerns that have been raised regarding learning analytics in medical education, 21 it should serve as a stark reminder of the pitfalls that we face as we begin to use these new tools. Moving forward in a way that maximizes the potential of the field will require a considered and intentional approach.
Charting a Course
Most jurisdictions stand at an important juncture: the use of learning analytics to support CBME is being explored, but the policies and processes needed to govern these data and protect our trainees and institutions have not yet been established. We hope that the sharing of utopian and dystopian scenarios has conveyed both the need for this technology and the importance of building resilient, nuanced, and nimble systems of oversight. If we can rise to this challenge, the outcomes described in our third scenario (Box 3) are more likely.
Box 3 Fictional Case Study Describing the Safe and Ethical Implementation of Learning Analytics in Competency-Based Medical Education
As Dr. Walzar learned more about learning analytics, she realized just how close her program had come to some serious consequences. Working with her local information technology team, she studied the ethical and technical implications of her work and got ahead of the issues. With her newfound knowledge, Dr. Walzar also provided input and leadership on the development of data access standards that distinguished types of data based upon their sensitivity and clarified how data would be used for program evaluation and research. This planning shaped how assessment data were transferred to and stored by their accrediting body. While the transfer took longer than anticipated, major changes were made to ensure that only anonymized, aggregate data were uploaded. When the accrediting body was hacked despite having implemented the recommendations of a third-party security audit, none of their trainees’ assessment data were specifically identifiable.
She leveraged a co-design methodology to engage stakeholders including frontline faculty, current trainees, and even a few patients. Within a few years, she anticipated that these trainees would become their hospital’s newest faculty, and she saw their involvement both as a way to ensure the work was embraced by trainees and as an investment in the sustainability of their culture and system. It also increased transparency and trust in the work. The changes the stakeholders suggested made the system more relevant and meant that multiple concerns were addressed before the analytics were even implemented.
Her program’s use of predictive analytics to inform competence committee decisions was also delayed. In that time, they collected validity evidence for the use of these novel methodologies in supporting high-stakes assessment decisions. The competence committee members agreed that this resulted in clearer and more robust analytics that better supported their decisions. This scholarly approach also resulted in numerous publications for members of their department. They took the time to learn how to use the new methods to inform the individualized learning plans of every trainee. She worked extensively with local educators to build up a cadre of data-savvy faculty who were comfortable helping trainees to interpret and act upon the data, rather than simply leaving trainees to be flagged by the system.
Within a few years, Dr. Walzar, her trainees, and the institution became comfortable that the incorporation of learning analytics into their program had been done in a way that maximized the benefits to all parties while preventing negative consequences. They also continued to come up with new initiatives. For example, they had begun mapping key performance indicators for their trainees to improvements in the clinical care of vulnerable patient populations.
In his 1942 short story Runaround, Isaac Asimov proposed 3 laws of robotics as governing principles for robot behavior. 50 Despite being written in the early days of robotic technology, these principles continue to be discussed and debated to this day. Robotics and learning analytics share the commonality of being technologies that can evolve rapidly and unpredictably, limiting the long-term relevance of specific guidelines. Bearing this in mind, we believe that a similarly principled approach to the development and implementation of learning analytics is appropriate. Slade and Prinsloo have proposed 6 ethical principles for learning analytics that are broadly applicable within CBME. 48 Table 1 describes these principles and provides an example of their application within a CBME assessment program.
Table 1: Principles for an Ethical Framework for Learning Analytics Applied to CBME (Adapted From Slade and Prinsloo, 2013 48)
Learning analytics should be a moral practice
Learning analytics should not be used solely to drive decisions, but to gain greater understanding of learning for the benefit of the trainees, institutions, and patients within our health care systems. Care should be taken to build and/or adapt appropriate policies, systems, and cultures to guide their use. 43 For example, when implementing predictive analytics, we must carefully consider what we are looking for as well as how and why that information will benefit these groups. It will be crucial to ensure that the analytics which we allow to influence our decisions are supported by validity arguments that explore the assumptions, purpose, and systemic biases that are baked into our systems. The intentional consideration of the PAIR (Participation, Access, Inclusion, and Representation) principles has been proposed to ensure that the use of learning analytics aligns with institutional and community values. 51
Trainees as agents
Learning analytics should not be “done to” trainees but “done with” them. Co-design and involvement of trainees within the systems are imperative, especially in systems where senior trainees take more of a hands-on approach than attendings in nurturing and teaching their junior colleagues. Trainees should be involved as partners in the design and operation of our systems. Making them aware of what data are being collected and how they are being used will help them to engage with and benefit from these analyses. 28,52 This will be particularly important as predictive analytics are used to monitor trainee performance. Involving trainees in the development of these analyses will help them to understand their benefits and limitations while providing agency in an assessment program that matters to them. Successful co-design efforts also allow for the apprenticeship of future educational leaders whose continued agency will enhance sustainability as they contribute to the evolution and renewal of our data systems. 53–55 The human aspect of learning from data insights has been described in the continuing professional development literature and may translate to trainees. 56 More study is needed to understand what educational supports, policies, and systems will be required to create optimal learning systems and developmentally oriented organizations. 22,43
Trainee identity and performance are temporal dynamic constructs
As we implement learning analytics, we should seek to determine how educational and clinical outcomes can be and are impacted by our trainees’ contexts. 57 EPA assessments consist of observational data. These observations occur within the context of team-based health care and are completed by faculty members who may be influenced by implicit bias. 58 As we begin to interpret these data, it will be important to remember this context and consider the trainees’ growth over time. 57 This is particularly true when incorporating patient outcomes within the complex systems and teams that exist around our trainees. We will need nuanced science to understand the context in which each trainee works and to account for that context in determining how to attribute success. Recent work focused on examining and isolating resident-sensitive quality measures 38,39 and teasing apart the interdependence of health care teams 40,41 should guide the interpretation of these complex and nuanced data.
Trainee success is a complex and multidimensional phenomenon
We should train the educators who interpret and use learning analytics to sustain a degree of skepticism with respect to the inferences they draw from them. 21 Machine learning, natural language processing, and other advanced prediction tools are built on correlational techniques and cannot be assumed to identify causal relationships. 59 Similar to how we use clinical decision support tools, risk-based calculators, or other guidelines in clinical care, we should remain wary of the diagnostic imprecision of the tools that we use in education. Studying attributes such as the sensitivity and specificity of our tools will help us to understand their strengths and limitations and ensure that they contribute appropriately to low- and high-stakes assessment decisions. Above all, we must ensure that we do not confuse a correlated outcome with fate.
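As a minimal illustration of this kind of evaluation, the sketch below compares a flagging algorithm's output against subsequent competence committee judgments, which are treated here (itself an assumption) as the reference standard, and reports sensitivity, specificity, and positive predictive value. The file and column names are hypothetical.

```python
# Illustrative sketch only: evaluate a flagging algorithm against later
# competence committee judgments. Assumes a hypothetical CSV with columns:
# flagged (0/1), struggling (0/1, per the committee).
import pandas as pd

outcomes = pd.read_csv("flags_vs_committee_decisions.csv")

true_pos = ((outcomes["flagged"] == 1) & (outcomes["struggling"] == 1)).sum()
false_neg = ((outcomes["flagged"] == 0) & (outcomes["struggling"] == 1)).sum()
true_neg = ((outcomes["flagged"] == 0) & (outcomes["struggling"] == 0)).sum()
false_pos = ((outcomes["flagged"] == 1) & (outcomes["struggling"] == 0)).sum()

sensitivity = true_pos / (true_pos + false_neg)  # flagged, among those who struggled
specificity = true_neg / (true_neg + false_pos)  # not flagged, among those who did not
positive_predictive_value = true_pos / (true_pos + false_pos)

print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
print(f"Positive predictive value: {positive_predictive_value:.2f}")
```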
Transparency
Transparency is essential to develop and maintain the trust of our trainees. 60 There are numerous use cases for EPA assessment data that span from trainee learning and assessment through faculty development, program evaluation, quality improvement, research, and accreditation. 26 As it is not obvious that a trainee’s assessment data would be used for all of these purposes, they should be made aware of how their data will be used. Further, transparent policies should categorize the ownership and sensitivity of various types of data and explicitly outline who has access to each type of data for what purpose. Institutions will need to develop their own guidelines based upon their policies and local legislation. For example, using an anonymized, aggregated dataset of trainee assessments for internal program evaluation and accreditation purposes is a long-standing and standard practice, but it is important to formally inform learners that their data will be used in this way. On the other hand, the sharing of identifiable or potentially identifiable assessment data for research purposes is likely to require opt-out or opt-in consent processes depending upon the sensitivity of the data. Clarifying the division between program evaluation and research will be extremely important given the ethical exemptions and decreased scrutiny that are often extended to program evaluation work. 61 The limitations of the security procedures taken to ensure the privacy of trainee assessment data should also be made transparent. 62
Strengthening the connections between datasets supports their analysis but also makes them more vulnerable to hacking and other technical issues. 23 Data partitioning (i.e., separating where data are stored and linking on demand), anonymization (i.e., redacting data down to only coded identifiers so that trainees cannot be identified by the data alone), blockchain technology (i.e., using a decentralized ledger system to ensure data integrity and minimize points of failure), and other advanced data security techniques could increase security but may be beyond the current technological skill set of educational institutions. Taken together, institutional policies should transparently define how various types of trainee data can be used and what precautions are in place to ensure their security.
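A simple illustration of these ideas is sketched below: trainee identifiers are replaced with coded identifiers generated by a keyed hash, while the key is held by a data steward in a separately governed store (a basic form of data partitioning). The file names are hypothetical, and a real implementation would layer additional safeguards, such as access controls, audit logs, and encryption at rest, on top of this.

```python
# Illustrative sketch only: pseudonymize trainee identifiers with a keyed hash
# before an analytic dataset leaves the program. The secret key is stored in a
# separately governed location so that the data alone cannot identify trainees.
import hashlib
import hmac
import pandas as pd

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Return a stable coded identifier that cannot be reversed without the key."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

with open("key_held_by_data_steward.bin", "rb") as key_file:  # hypothetical key store
    secret_key = key_file.read()

epa = pd.read_csv("epa_assessments.csv")
epa["trainee_code"] = epa["trainee_id"].map(lambda i: pseudonymize(str(i), secret_key))
epa.drop(columns=["trainee_id"]).to_csv("epa_assessments_pseudonymized.csv", index=False)
```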
Higher education cannot afford to NOT use data
Finally, we will need resolve and courage to move forward with the use of learning analytics. These principles raise complicated and evolving concerns that will need to be addressed to do this safely and ethically. However, not using the assessment data that we collect within CBME to improve our training systems for the benefit of our future trainees and their patients would also be problematic. If we approach these new tools with the goal of constantly improving our people, programs, and systems, 22 then they can provide insight into how to improve the clinical care provided by our trainees.
Conclusions
Learning analytics have the potential to leverage the incredible amount of assessment data that are being collected by our EPA-based CBME assessment systems in ways that will benefit our trainees, programs, and systems. However, their utilization also presents new and potentially dangerous challenges. To move forward safely, we will need to consider and build upon the ethical principles of learning analytics that have been developed in the broader fields of education and computer science.
References
1. Frank JR, Snell L, Englander R, Holmboe ES; ICBME Collaborators. Implementing competency-based medical education: Moving forward. Med Teach. 2017; 39:568–573.
2. Hall AK, Rich J, Dagnone JD, et al. It’s a marathon, not a sprint: Rapid evaluation of competency-based medical education program implementation. Acad Med. 2020; 95:786–793.
3. Caverzagie KJ, Nousiainen MT, Ferguson PC, et al.; ICBME Collaborators. Overarching challenges to the implementation of competency-based medical education. Med Teach. 2017; 39:588–593.
4. Nousiainen MT, Mironova P, Hynes M, et al.; CBC Planning Committee. Eight-year outcomes of a competency-based residency training program in orthopedic surgery. Med Teach. 2018; 40:1042–1054.
5. Thoma B, Hall AK, Clark K, et al. Evaluation of a national competency-based assessment system in emergency medicine: A CanDREAM study. J Grad Med Educ. 2020; 12:425–434.
6. Pack R, Lingard L, Watling C, Cristancho S. Beyond summative decision making: Illuminating the broader roles of competence committees. Med Educ. 2020; 54:517–527.
7. Rich JV, Fostaty Young S, Donnelly C, et al. Competency-based education calls for programmatic assessment: But what does this look like in practice? J Eval Clin Pract. 2020; 26:1087–1095.
8. Holmboe ES, Yamazaki K, Hamstra SJ. The evolution of assessment: Thinking longitudinally and developmentally. Acad Med. 2020; 95(11 suppl):S7–S9.
9. Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Haala R. Numbers encapsulate, words elaborate: Towards the best use of comments for assessment and feedback on entrustment ratings. Acad Med. 2021; 96:S81–S86.
10. Lang C, Siemens G, Wise A, Gasevic D. The Handbook of Learning Analytics: First Edition. Alberta, CA: Society for Learning Analytics Research; 2017.
11. Chan T, Sebok-Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning analytics in medical education assessment: The past, the present, and the future. AEM Educ Train. 2018; 2:178–187.
12. Appelbaum D, Kogan A, Vasarhelyi M, Yan Z. Impact of business analytics and enterprise systems on managerial accounting. Int J Account Info Syst. 2017; 25:29–44.
13. Morgulev E, Azar OH, Lidor R. Sports analytics and the big-data era. Int J Data Sci Anal. 2018; 5:213–222.
14. Khalil M. Learning Analytics in Massive Open Online Courses [dissertation]. Styria, AT: Graz University of Technology; 2018.
15. Long P, Siemens G. Penetrating the fog: Analytics in learning and education. Italian J Educ Tech. 2014; 22:132–137.
16. Chan TM, Paterson QS, Hall AK, et al. Outcomes in the age of competency-based medical education: Recommendations for emergency medicine training in Canada from the 2019 symposium of academic emergency physicians. CJEM. 2020; 22:204–214.
17. Hall AK, Schumacher DJ, Thoma B, et al. Outcomes of competency-based medical education: A taxonomy for shared language. Med Teach. In press.
18. Van Melle E, Hall AK, Schumacher DJ, et al. Capturing outcomes of competency-based medical education: The call and the challenge. Med Teach. In press.
19. ten Cate O, Balmer DF, Caretta-Weyer H, Hatala R, Hennus MP, West DC. Entrustable professional activities and entrustment decision making: A development and research agenda for the next decade. Acad Med. 2021; 96:S96–S104.
20. McConville JF, Woodruff JN. A shared evaluation platform for medical training. N Engl J Med. 2021; 384:491–493.
21. ten Cate O, Dahdal S, Lambert T, et al. Ten caveats of learning analytics in health professions education: A consumer’s perspective. Med Teach. 2020; 42:673–678.
22. Thoma B, Caretta-Weyer H, Schumacher DJ, et al. Becoming a deliberately developmental organization: Using competency-based assessment data for organizational development. Med Teach. In press.
23. Ellaway RH, Topps D, Pusic M. Data, big and small: Emerging challenges to medical education scholarship. Acad Med. 2019; 94:31–36.
24. Holmboe ES, Yamazaki K, Nasca TJ, Hamstra SJ. Using longitudinal milestones data and learning analytics to facilitate the professional development of residents: Early lessons from three specialties. Acad Med. 2020; 95:97–103.
25. Carraccio C. Harnessing the potential futures of CBME here and now. Acad Med. 2021; 96:S6–S8.
26. Saqr M. Learning analytics and medical education. Int J Health Sci (Qassim). 2015; 9:V–VI.
27. Thoma B, Bandi V, Carey R, et al. Developing a dashboard to meet Competence Committee needs: A design-based research project. Can Med Educ J. 2020; 11:e16–e34.
28. Carey R, Wilson G, Bandi V, et al. Developing a dashboard to meet the needs of residents in a competency-based training program: A design-based research project. Can Med Educ J. 2020; 11:e31–e45.
29. Bandi V, Mondal D, Thoma B. Scope and impact of visualization in training professionals in academic medicine. In: Proceedings of Graphics Interface 2020. Toronto, ON: Canadian Human-Computer Communications Society; 2020;84–94.
30. Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. Reliability and validity evidence for the quality of assessment for learning (QuAL) Score. Acad Emerg Med. 2018; 25:S83.
31. Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. The quality of assessment of learning (Qual) score: Validity evidence for a scoring system aimed at rating short, workplace-based comments on trainee performance. Teach Learn Med. 2020; 32:319–329.
32. Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med. 2015; 372:793–795.
33. Hodges B. Assessment in the post-psychometric era: Learning to love the subjective and collective. Med Teach. 2013; 35:564–568.
34. Mueller AS, Jenkins TM, Osborne M, Dayal A, O’Connor DM, Arora VM. Gender differences in attending physicians’ feedback to residents: A qualitative analysis. J Grad Med Educ. 2017; 9:577–585.
35. Dayal A, O’Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med. 2017; 177:651–657.
36. Wong BM, Baum KD, Headrick LA, et al. Building the bridge to quality: An urgent call to integrate quality improvement and patient safety education with clinical care. Acad Med. 2020; 95:59–68.
37. Zafar MA, Diers T, Schauer DP, Warm EJ. Connecting resident education to patient outcomes: The evolution of a quality improvement curriculum in an internal medicine residency. Acad Med. 2014; 89:1341–1347.
38. Schumacher DJ, Holmboe ES, van der Vleuten C, Busari JO, Carraccio C. Developing resident-sensitive quality measures: A model from pediatric emergency medicine. Acad Med. 2018; 93:1071–1078.
39. Schumacher DJ, Martini A, Holmboe E, et al. Initial implementation of resident-sensitive quality measures in the pediatric emergency department: A wide range of performance. Acad Med. 2020; 95:1248–1255.
40. Sebok-Syer SS, Chahine S, Watling CJ, Goldszmidt M, Cristancho S, Lingard L. Considering the interdependence of clinical performance: Implications for assessment and entrustment. Med Educ. 2018; 52:970–980.
41. Sebok-Syer SS, Pack R, Shepherd L, et al. Elucidating system-level interdependence in electronic health record data: What are the ramifications for trainee assessment? Med Educ. 2020; 54:738–747.
42. Wong BM, Headrick LA. Application of continuous quality improvement to medical education. Med Educ. 2021; 55:72–81.
43. Thoma B, Warm E, Hamstra SJ, et al. Next steps in the implementation of learning analytics in medical education: Consensus from an international cohort of medical educators. J Grad Med Educ. 2020; 12:303–311.
44. Bongiovanni I. The least secure places in the universe? A systematic literature review on information security management in higher education. Comput Secur. 2019; 86:350–357.
45. Richardson MD, Lemoine PA, Stephens WE, Waller RE. Planning for cyber security in schools: The human factor. Educ Plann. 2020; 27:23–39.
46. Dick PK. The Minority Report. In: Minority Report. London, UK: Gollancz; 2002.
47. Steinert Y. The “problem” learner: Whose problem is it? AMEE guide no. 76. Med Teach. 2013; 35:e1035–e1045.
48. Slade S, Prinsloo P. Learning analytics: Ethical issues and dilemmas. Am Behav Sci. 2013; 57:1510–1529.
49. Cohen D. Back to blame: The Bawa-Garba case and the patient safety agenda. BMJ. 2017; 359:j5534.
50. Asimov I. Runaround. In: Astounding Science Fiction. 1942; 29:94–103.
51. Monroe-White T, Marshall B. Data Science Intelligence: Mitigating Public Value Failures Using PAIR Principles. Proceedings of the 2019 Pre-ICIS SIGDSA Symposium. 2019; 4.
52. Franz NK. The data party: Involving stakeholders in meaningful data analysis. J Extension. 2013; 51:1IAW2.
53. Chan T, Sherbino J; McMAP Collaborators. The McMaster Modular Assessment Program (McMAP): A theoretically grounded work-based assessment system for an emergency medicine residency program. Acad Med. 2015; 90:900–905.
54. Kwan BYM, Mbanwi A, Cofie N, et al. Creating a competency-based medical education curriculum for Canadian diagnostic radiology residency (Queen’s Fundamental Innovations in Residency Education)—Part 1: Transition to discipline and foundation of discipline stages [published online ahead of print March 4, 2020]. Can Assoc Radiol J. doi:10.1177/0846537119894723.
55. Buttemer S, Hall J, Berger L, Weersink K, Dagnone JD. Ten ways to get a grip on resident co-production within medical education change. Can Med Educ J. 2020; 11:e124–e129.
56. Kamhawy R, Chan TM, Mondoux S. Enabling positive practice improvement through data-driven feedback: A model for understanding how data and self-perception lead to practice change [published online ahead of print October 30, 2020]. J Eval Clin Pract. doi:10.1111/jep.13504.
57. Bates J, Ellaway RH. Mapping the dark matter of context: A conceptual scoping review. Med Educ. 2016; 50:807–816.
58. Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability scales: Outlining their usefulness for competency-based clinical assessment. Acad Med. 2016; 91:186–190.
59. Grimmer J. We are all social scientists now: How big data, machine learning, and causal inference work together. APSC. 2015; 48:80–83.
60. Pardo A, Siemens G. Ethical and privacy principles for learning analytics. British J Educ Tech. 2014; 45:438–450.
61. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. Ottawa, CA: Secretariat on Responsible Conduct of Research; 2018.
62. Prinsloo P, Slade S. Student privacy self-management: Implications for learning analytics. In: Proceedings of the Fifth International Conference on Learning Analytics and Knowledge. New York, NY: Association for Computing Machinery; 2015:83–92.