Research Reports

Competencies for the Use of Artificial Intelligence–Based Tools by Health Care Professionals

Russell, Regina G. PhD, MA, MEd1; Lovett Novak, Laurie PhD2; Patel, Mehool MD3; Garvey, Kim V. PhD, MS, MLIS4; Craig, Kelly Jean Thomas PhD5; Jackson, Gretchen P. MD, PhD6; Moore, Don PhD7; Miller, Bonnie M. MD, MMHC8

Academic Medicine 98(3):348–356, March 2023. DOI: 10.1097/ACM.0000000000004963

Artificial intelligence (AI) refers to computer science techniques that mimic human intelligence, including algorithms that leverage machine learning, deep learning, natural language processing, and neural networks. In health care, AI algorithms can transform incomprehensibly large and complex data sets into information that can guide clinical decisions. 1 These technologies have become increasingly sophisticated, and AI-generated outputs, such as risk scores, image interpretation, and health record summarization, are already being used to directly influence patient care. 2,3 AI-based tools include all instruments and software that incorporate these types of computational technologies into their operating systems.
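
For readers unfamiliar with how an AI-generated risk score of the kind described above is produced, the minimal sketch below (not from this study) trains a logistic regression on synthetic data and outputs a predicted probability for a single hypothetical patient. The features, cohort, and outcome model are all invented for illustration.

```python
# Illustrative sketch only: a toy clinical "risk score" model trained on
# synthetic data. Feature names and all numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic cohort of 500 patients: age (years), systolic BP (mm Hg), HbA1c (%)
X = np.column_stack([
    rng.normal(60, 12, 500),    # age
    rng.normal(130, 15, 500),   # systolic blood pressure
    rng.normal(6.5, 1.0, 500),  # HbA1c
])
# Synthetic binary outcome loosely tied to the features, for demonstration only
logits = 0.04 * (X[:, 0] - 60) + 0.03 * (X[:, 1] - 130) + 0.5 * (X[:, 2] - 6.5) - 1.0
y = rng.random(500) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# The "risk score" a clinician might see: a predicted probability for one patient
patient = np.array([[72, 145, 8.1]])
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted risk: {risk:.0%}")
```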

The implementation of AI-based tools in health care has given rise to a variety of practical and ethical concerns, including bias in tool development and justice in the distribution of subsequent health improvements. 4–7 As adoption increases, clinicians will encounter both beneficial and harmful effects of these tools and will therefore need to acquire baseline competencies for practicing effectively, efficiently, safely, and equitably in AI-influenced health care environments. However, the nature of such competencies has not yet been articulated.

In a review of programs offered at 6 medical schools, Paranjape et al 8 described learning activities that covered a range of AI-related topics, including advanced mathematics, health care data sets, and clinical applications of AI-based tools. Lee et al 9 conducted a scoping review of AI in undergraduate medical education and concluded that a clear academic structure and specified competencies were needed. In a recent commentary, Garvey et al 10 noted that structured learning programs rarely accompanied the testing and implementation of new AI-based tools in clinical workplaces, and they echoed the call for competencies. Although competencies have been published for clinical informatics, 11 the computational complexity of AI-based tools and their potential influence on practice and outcomes necessitate specific competencies aimed at ensuring safe and ethical deployment.

Competency-based education provides a conceptual framework for this study. 12–14 The basic assumption of competency-based education is that specific knowledge, skills, and attitudes (competencies) are necessary for professional practice and that these competency expectations can be developed for learners by analyzing core aspects of their professional environments. Once expectations are defined, learners can acquire the competencies along individualized learning pathways. 12–14 Our study’s goals were (1) to define AI-related clinical competencies for health care professionals and (2) to explore future uses of health care AI technologies and the organizational responsibilities required for oversight and management.

Method

From December 2020 through July 2021, we conducted a qualitative study in which we interviewed subject matter experts (SMEs) about the competencies health care professionals need to work effectively with AI-based tools. 15–19 Figure 1 outlines the study process.

Figure 1: Research protocol used for the development of frontline clinician competencies needed for the use of artificial intelligence–based tools in practice settings.

The multidisciplinary research team consisted of 5 faculty members (R.G.R., L.L.N., K.V.G., D.M., and B.M.M.) from Vanderbilt University School of Medicine and 3 members (M.P., K.J.T.C., and G.P.J.) from IBM Watson Health who have expertise in AI-related research and analytics. The team included 3 physicians (M.P., G.P.J., and B.M.M.), an anthropologist specializing in health care AI (L.L.N.), a science officer in the technology industry and former medical educator (K.J.T.C.), a specialist in educational technology (K.V.G.), and 2 medical education scholars with expertise in workplace and competency-based learning (D.M. and R.G.R.). To avoid undue influence, team members from IBM Watson Health did not participate in the interviews. The combined team collaborated on all other aspects of the study. The Vanderbilt University Institutional Review Board determined that the project posed minimal risk and designated it exempt.

Participants

We identified SMEs in the use of AI-based tools in health care settings through literature review, professional contacts, and snowball techniques (asking participants to recommend other SMEs) and invited them to participate in this study. 19 To be eligible for the study, individuals needed to work in U.S. health professions education or health informatics. This purposive sampling process ensured representation from multiple professions and included those with special expertise in ethics and equity. The SMEs received an honorarium for their participation in the study.

Data collection

Semistructured interviews were the primary source of data, although we also collected data from a demographic questionnaire and a form that solicited feedback on drafts of the competencies. Articles identified from the authors’ prior commentary informed development of the interview guide. 10 The semistructured interview protocol included an introductory script with a description of the study objectives and a working definition of AI in health care that was intended to establish a shared understanding for the interviews. Interviews were designed to answer the question, “What competencies do health care professionals need for use of AI-based tools in clinical care?” Interviews also explored future uses of health care AI technologies and organizational responsibilities for their oversight and management. Two pilot interviews were conducted to assess the quality and utility of the questions. After team reflection on the initial interviews, questions related to diversity, inclusion, and health equity were added. Data saturation, or repetition of the ideas being shared by the SMEs, was reached after 15 interviews. 20 The final interview guide is available in Supplemental Digital Appendix 1 (at https://links.lww.com/ACADMED/B334).

Interviews were conducted via videoconference between January and April 2021, audio-recorded with consent of the participants, and transcribed. The interviews lasted between 45 and 60 minutes. Interviews were distributed across Vanderbilt University School of Medicine team members based on availability, with 1 person designated as the primary interviewer for each. A secondary interviewer participated in all but 1 interview to ensure consistency and pose follow-up questions at selected points. To further ensure consistency, the entire research team met weekly to debrief interviews, review transcripts, and refine the interview guide.

The SMEs provided the following demographic elements after the interview: degree(s), title(s), profession, fields of expertise, age, years of professional experience, gender, and race or ethnicity. In July 2021, the SMEs received a form that contained the specific wording of each draft competency and subcompetency. Respondents were asked to provide open text comments and editorial suggestions on each and had the opportunity to provide summary comments. The demographic questionnaire and competency feedback form were administered through REDCap (Research Electronic Data Capture), a secure online data-capturing tool. 21

Data management and analysis

Interview transcripts were deidentified before being imported into the qualitative data analysis tool Dedoose. 22 Coding was distributed across the research team, with 2 members coding each interview. This coding involved excerpting sections of text from transcripts and assigning 1 or more codes to each excerpt. The team used both deductive and inductive coding methods. 23 In deductive coding, an existing competency taxonomy that integrated domains across health professions was applied. 24 In inductive coding, data were reviewed without an a priori coding scheme, allowing identification of new ideas not included in existing frameworks. The use of a previously developed competency framework supported initial alignment of concepts across coders, whereas the flexibility of open coding allowed discovery of unexpected insights.

Thematic analysis was used to explore and summarize ideas that surfaced during the coding process. 15,19,23 Six broad conceptual themes were identified through iterative team discussion of the coded data. These themes appeared repeatedly throughout interviews, and after analysis of 15 transcripts, inductive thematic saturation was reached. 20,25
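
One simplistic way to picture inductive thematic saturation, as hypothetically sketched below, is to track whether successive transcripts still contribute codes not seen before; saturation is the point at which they stop. The transcripts and codes here are invented placeholders, not study data.

```python
# Hypothetical sketch: watching for inductive saturation by counting the
# new codes each successive transcript contributes. Codes are invented.
transcripts = [
    {"bias", "workflow", "ethics"},
    {"workflow", "tech literacy"},
    {"ethics", "tech literacy", "teamwork"},
    {"bias", "teamwork"},          # no new codes from here on: saturation
    {"workflow", "ethics"},
]

seen: set[str] = set()
for i, codes in enumerate(transcripts, start=1):
    new = codes - seen   # codes not observed in any earlier transcript
    seen |= codes
    print(f"Transcript {i}: {len(new)} new code(s) {sorted(new)}")
```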

On the basis of areas of expertise, 1 researcher was assigned to each conceptual theme for translation into competency statements. Using excerpts that mapped to the themes, researchers also created subcompetencies, which provided the specificity needed for future curriculum development. A second team member was assigned to review each draft and suggest revisions. The entire team then reviewed the compiled list, deduplicated similar subcompetencies, and iterated the wording and framework until consensus was reached. This consensus document was used to construct the follow-up form, which was distributed via email to the SMEs to solicit feedback on competency statements.

After collection of feedback from the SMEs, primary and secondary reviewers were assigned to each competency and its subcompetencies. The primary reviewer codified the submitted responses using the following rubric: no change, edit wording, delete, or add new subcompetency. The primary reviewer then used the codified responses to revise statements as needed. The secondary reviewer confirmed codification and evaluated the revisions for accuracy and completeness. The revised competency list was reviewed and finalized using a consensus process across the combined research team.
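
As a small, hypothetical illustration of the four-way rubric described above, the sketch below codifies a handful of invented feedback items and tallies them; the subcompetency labels and responses are not from the study.

```python
# Hypothetical sketch of the feedback-codification rubric. All items invented.
from collections import Counter
from enum import Enum

class Action(Enum):
    NO_CHANGE = "no change"
    EDIT_WORDING = "edit wording"
    DELETE = "delete"
    ADD_NEW = "add new subcompetency"

feedback = [
    ("1a", Action.NO_CHANGE),
    ("1b", Action.EDIT_WORDING),
    ("2c", Action.EDIT_WORDING),
    ("3e", Action.ADD_NEW),
]

# Tally how often each rubric action was applied across responses
tally = Counter(action for _, action in feedback)
for action, n in tally.items():
    print(f"{action.value}: {n}")
```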

Weekly meetings encouraged ongoing reflection on the impact of team members’ diverse perspectives and career experiences on data interpretation. Mixed pairings for transcript coding, statement writing, and revisions from the SME feedback encouraged deeper engagement within the team, increasing the credibility of the summary findings. 25

Results

Fifteen SMEs were interviewed, including 10 men and 5 women. The SMEs ranged in age from 34 to 73 years, and 3 were from racial or ethnic minority groups. Clinical professions represented by the SMEs included medicine (n = 9), nursing (n = 2), and pharmacy (n = 1; Table 1). Three additional nonclinical scholars added expertise in ethics (n = 1), business and education (n = 1), and social medicine (n = 1). Although many SMEs reported multiple roles, 11 were selected primarily for their expertise in health care AI, biomedical informatics, and/or the ethical application of AI in the workplace; 2 for their expertise in health professions education; and 2 because of dual expertise in health professions education and health care AI.

Table 1: Professional Backgrounds of Subject Matter Experts (SMEs)

Key themes and selected comments

Researchers identified 6 conceptual themes based on iterative analysis of coded excerpts. These themes were shaped deductively by the initial interprofessional competency framework as well as inductively by open coding, which reflected the diverse expertise of the research team. Themes and representative comments are outlined below. Comments have been minimally edited for length and clarity.

The need for foundational knowledge.

The SMEs described a need for foundational knowledge about the types of AI tools deployed in clinical care, their purposes, the qualities of data, and the contributions of related fields to their development. The SMEs stressed that clinicians need not become experts in any of these disciplines but instead should gain general informatics competency and a high-level understanding of the components of AI, including data inputs, generated outputs, and the nature of algorithms.

You have to start with fundamental informatics competency education for all our health professionals. What are the technologies you use? How do you use them efficiently? What’s data? How do you collect data? Why do we need accurate data?… Then, we move up to more complex stuff like: What is AI? (SME 4)

Some described this as a new form of literacy.

We have no choice that more health care is going to be funneled through technology, so that comfort, it’s not quite numeracy and it’s not quite literacy. It’s like tech literacy. (SME 3)

The SMEs also described a need to “know what you need to know” about a specific tool to safely apply it to any given patient or population of patients.

We learn enough about pharmacology that if someone came in and said, “I have these magic beans” and you said “Okay, I’ve had pharmacology. How do they work?” And they said “Well, it’s just magic.” You’d probably want to know a little more about the magic before you would give it to your patients. AI is kind of like magic beans the way it gets sold to people, and I want all of us to be able to ask the right questions. What’s the technology behind it? How has it been tested? Did you do external validation? What kind of data sets? (SME 6)

Finally, the SMEs acknowledged that some foundational knowledge may already be covered in existing courses but would need to be applied explicitly in the context of AI.

This is something I think they already get some training on. Just the basic notion of sensitivity, specificity, the accuracy measures because there’s a tendency to say, “Ooh, you know, AI is going to solve everything.” But no particular method is perfect. (SME 14)
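
To make the measures SME 14 references concrete, the short worked example below computes them from an invented confusion matrix; the counts are illustrative only and do not come from this study.

```python
# Worked example of basic accuracy measures for a hypothetical AI screening
# tool, from an invented confusion matrix of 1,000 patients.
tp, fn = 90, 10    # diseased patients correctly flagged / missed by the tool
fp, tn = 45, 855   # healthy patients incorrectly flagged / correctly cleared

sensitivity = tp / (tp + fn)                # 0.90: share of disease detected
specificity = tn / (tn + fp)                # 0.95: share of healthy cleared
ppv = tp / (tp + fp)                        # 0.67: chance a flag is a true case
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, accuracy={accuracy:.2f}")
```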

Ethical, legal, regulatory, social, economic, and political issues.

Many of the SMEs discussed the potential for AI-based tools to worsen health inequities if implemented without deliberate action to ensure fairness. Several cited problems that occurred when AI-based technologies were implemented in criminal justice, education, and housing before sufficient attention was given to potential negative consequences. On the basis of these concerns, the SMEs described a need for all clinicians to understand the social, ethical, legal, and regulatory issues that will determine whether AI-based tools will narrow or widen health disparities and health care gaps.

We have seen what AI does when it ignores equity and inclusion in criminal justice, in education, in housing, in you name it. We should not have done it in those fields, and we certainly cannot do it in health care. (SME 15)

Beyond understanding, the SMEs described a need for clinicians to develop a sense of personal and shared responsibility for ensuring ethical deployment and appropriate evaluation of AI tools.

Many clinicians allow the system to run itself rather than saying, “I have a responsibility and part of that is to speak up. I’m not going to be a bystander on this one.” (SME 15)

The SMEs uniformly stated that clinicians should still hold primary professional and legal responsibility for all aspects of patient care and for clinical outcomes, regardless of the support provided by AI-based tools.

I think the ethical piece that the students would need is this notion of the physician not abdicating their professional duty in the process. (SME 10)

Several expressed, in poignant terms, concerns about equity and the potential for worsening disparities, and they described the political and structural factors that could impede equitable deployment.

So where does it leave those who don’t have access to technology or who are left behind because of educational, language, or other barriers? Where does an AI revolution leave people who fall in that bucket? And where does the health care model start to impede, in terms of the way reimbursements are structured, the entire system? Where might it introduce gaps that AI essentially exacerbates rather than helps close? Because I can see that being a problem down the line. (SME 14)

Clinician roles and responsibilities and the nature of the clinical encounter.

The SMEs agreed that the great promise of AI-based tools is to improve clinical care by helping clinicians manage massive amounts of data derived from diverse sources.

Historic data, current data. Trying to piece all that together at the point of care with whatever’s going on in the environment of the patient…. The symptomatology. Their treatment. We need these tools to help us. Our minds are not infinite. We are distracted. We get tired. We can’t keep up with what’s going on. We need these tools to be able to provide us the support to help make good decisions. (SME 4)

The SMEs stressed that the outputs of AI-based tools should be used to augment and support clinical decision-making and that clinicians should exercise judgment in applying those outputs in caring for individual patients, ideally using shared decision-making models.

René Laënnec discovered that you could hear better with a tube you could listen through that was eventually refined to the modern stethoscope. Similarly, AI tools should be viewed like a stethoscope that increases your diagnostic or therapeutic ability but shouldn’t replace what goes on between your ears. (SME 7)

The physician’s job is to interpret that guideline or presentation in the context of their patient and then through a shared decision-making process generate a care plan that’s going to best meet the needs of the patient where they are at that time, understanding that there may be some tradeoffs. (SME 2)

The SMEs described a need for enhanced communication skills in explaining to patients the outputs of AI-based tools that might influence their care but also believed that these tools may present an opportunity to enhance humanistic aspects of clinician–patient relationships.

If we’re saying we’re going to benefit from offsetting some of the cognitive load of the decision making and incorporating this tool to help us do that, can we reinfuse that cognitive load in the humanistic interactions? (SME 10)

Although the SMEs emphasized augmentation and not replacement, they acknowledged that some aspects of clinical care might eventually be replaced by AI-based capabilities.

You need to understand that technology is going to replace some of the stuff you do. And guess what? That is okay. There are other aspects of your skill set that are going to remain really important. But hanging onto stuff that can be done better by either technology or other points in the system? It’s important for both the patients and the system. AI, I think, is going to push that. (SME 2)

Finally, the SMEs believed that clinicians should incorporate patient-generated data when clinically appropriate.

It’s incredibly valuable information, right? It’s so much better than the one shot of, for instance, a blood pressure measurement when they’re in the office. Now you’ve got much more accurate data about their day-to-day life and how things work. And so, we need to lean into it. It offers tremendous potential. It’s all the stuff that we can’t do because we see folks so episodically. But we will need to work together to figure out how to process it and make the most of it. (SME 10)

The impact on team dynamics and workflows.

Several SMEs discussed the impact that AI-based tools would have on established workflows and recommended implementation processes that explicitly address workflow changes.

We should be able to ask the question, “What is our workflow and where does this technology fit into that?” And that’s whether it’s the presentation of a risk model or it’s the robot that delivers medications or the IV [intravenous] pump. How will this fit in, and what do we need to change about our workflows, if anything, to accommodate this new helper? (SME 1)

The SMEs acknowledged that these disrupted workflows could have an impact on interprofessional teams and the relationships between team members.

The interprofessional team would also have to have a shared understanding of what that augmented intelligence can and can’t do within that team aspect. But you could imagine that being an important part, particularly around complex care issues like advanced cancers. (SME 2)

Concerns about bias and representativeness of data sets.

The SMEs discussed concerns about bias at multiple levels, including personal, organizational, and systemic bias, and at multiple points in the tool creation process. Concerns were particularly strong when SMEs described the bias that can be built into AI-based tools through the characteristics of the populations used for training and validation data sets.

The problem there is that the people that are using those tools don’t know about the individual studies that went into the data underlying the tool. There may have been exclusion criteria for, like, people under 70 weren’t admitted into the trial, or children weren’t admitted into the trial, and so on. And that may not have been true of all of the trials, but enough of them that it would bias the underlying statistical basis for predictions for outliers. So, it’s very dangerous to apply a tool, in general AI tools, if you don’t know what their boundary conditions are. (SME 7)
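
A minimal sketch of the "boundary condition" check SME 7 describes might look like the following: before trusting a tool's output, compare the patient against the population the tool was trained and validated on. The tool metadata, thresholds, and check logic here are invented assumptions for illustration.

```python
# Hypothetical sketch: flag patients who fall outside the population an
# AI tool was trained on. Cohort metadata below is invented.
from dataclasses import dataclass

@dataclass
class TrainingCohort:
    min_age: int
    max_age: int
    included_children: bool

COHORT = TrainingCohort(min_age=18, max_age=70, included_children=False)

def within_boundary(age: int) -> bool:
    """Return True if the patient resembles the tool's training population."""
    if age < 18 and not COHORT.included_children:
        return False
    return COHORT.min_age <= age <= COHORT.max_age

for age in (8, 45, 82):
    status = "in-bounds" if within_boundary(age) else "OUT OF BOUNDS: use with caution"
    print(f"age {age}: {status}")
```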

Representativeness of data becomes especially critical in decisions about applying specific AI tools to underrepresented patients and populations.

And we’ve seen issues with that in facial recognition and policing space where tools have been sold that were not trained on minority faces and then inappropriately flagged people as being suspects for things where they were not even anywhere near the vicinity. There’s an analog in medicine to that as well. (SME 14)

The SMEs recommended a community engagement and empowerment approach to mitigate bias.

Bringing in patients and their caregivers to the table is broadening and widening. And, yes, that slows things down. I agree. But if you wanted to find progress for everyone, then we need to empower and have everyone at the table and give them a power to speak and influence and shape. (SME 15)

Continuing professional development.

The SMEs stated that the rapid pace of change created an imperative for initiatives across the phases of health professions education and noted that junior learners might be more advanced than their supervisors.

This is where I think co-production and co-learning can be helpful, creating space for faculty to kind of learn with their students and the residents in areas that they themselves may not have been exposed to. (SME 2)

AI-related competencies for health care professionals

Ultimately, 6 AI-related competency domain statements and 25 subcompetencies for health care professionals were formulated and refined from analysis of the interviews, identification of themes, incorporation of expert feedback, and iterative consensus development across the study team. Although the health professions competency taxonomy 24 facilitated initial coding, we found that several of the AI-related competencies mapped to multiple established domains and did not limit ourselves to this taxonomy in constructing our final framework.

The competency domain statements are as follows: (1) basic knowledge of AI: explain what AI is and describe its health care applications; (2) social and ethical implications of AI: explain how social, economic, and political systems influence AI-based tools and how these relationships impact justice, equity, and ethics; (3) AI-enhanced clinical encounters: carry out AI-enhanced clinical encounters that integrate diverse sources of information in creating patient-centered care plans; (4) evidence-based evaluation of AI-based tools: evaluate the quality, accuracy, safety, contextual appropriateness, and biases of AI-based tools and their underlying data sets in providing care to patients and populations; (5) workflow analysis for AI-based tools: analyze and adapt to changes in teams, roles, responsibilities, and workflows resulting from implementation of AI-based tools; and (6) practice-based learning and improvement regarding AI-based tools: participate in continuing professional development and practice-based improvement activities related to use of AI tools in health care.

Figure 2 depicts the competency domains. List 1 presents the AI-related competencies along with their subcompetencies. As an example of profession-specific translation, Supplemental Digital Appendix 2 (at https://links.lww.com/ACADMED/B334) lists these competencies mapped to the Accreditation Council for Graduate Medical Education core competency domains 26 and approximate level of difficulty.

Figure 2: A diagram of the 6 competency domains identified through thematic analysis of interviews with 15 subject matter experts in health professions education and health care artificial intelligence (AI) conducted between January and April 2021. The competency domain of practice-based learning and improvement regarding AI-based tools undergirds the other 5 competencies, and together they support the safe and ethical use of AI-based tools in clinical care.

Discussion

Despite the SMEs’ general enthusiasm for clinical use of AI-based tools, a sense of caution permeated nearly all interviews. Potential dangers have been cataloged across the history of AI development, including inappropriate categorization and labeling of people, predictive models based on skewed data sets, and powerful technologies that inadvertently maintained existing inequalities. 4,27 A recent critical review of the history of AI claims that “artificial intelligence is neither artificial nor intelligent.” 4 Outputs from AI-based tools are determined by available data, and therefore system “intelligence” is determined by the structure, assumptions, and scope of underlying data sources, which originate from human decisions. Thus, a strong ethical orientation and commitment to equity should lie at the heart of all AI-related competencies.

Numerous authors and professional organizations have called for systematic approaches to imparting AI-related knowledge, skills, and attitudes and for the development of a list of competencies that would guide this teaching and learning. 7–10,28–36 Competencies have been developed for informatics in medical 37 and nursing education, 38 and although some of these are relevant to AI, they lack the specificity needed for the use of powerful AI-based tools in clinical care. Although authors have made recommendations for AI-related topics to be included in health professions curricula 10 and others have cited exemplar programs, 17,18 no list of competencies had been formulated.

This study addresses that gap by generating a list of AI-related clinical competencies for health care professionals through thematic analysis of semistructured expert interviews. An interprofessional approach was deliberately chosen because all who provide clinical care will eventually interface with AI-based tools. Findings suggest significant alignment across health professions, indicating potential opportunities for interprofessional learners to develop competencies collaboratively.

Although this study focused on individual competencies, the SMEs emphasized that competent individuals need to function within “AI-capable organizations,” which can evaluate and monitor the structure, outputs, and outcomes associated with AI-based tools. These organizations in turn need support from regulatory systems focused on safety and fairness. An evaluation process similar to that for drug and medical device development was recently described for AI-based tools, 39 whereas others have suggested a process similar to that for new laboratory tests. 40 However, in both models, the distribution of responsibilities (and thus capabilities) between local organizations and regulatory bodies remains to be clarified.

Even without direct questioning, transparency emerged as an essential characteristic of the organizations and systems needed to support competent clinicians. Open and well-communicated processes at the organizational level can ensure that patients and clinicians are aware of embedded tools. Transparency also serves as a safeguard to ensure that tools will function primarily for the benefit of patients and populations and not for powerful competing interests. This transparency requires that all relevant stakeholders have seats at decision-making tables and that standardized approaches are used for implementing new tools. 5

Individual and organizational competency demands system-wide transparency, with regulatory standards requiring clear labeling about how AI-based tools are constructed, the questions they are engineered to answer, and the populations used for training and validation. 1,41–43 Tools with complex neural networks and evolving algorithms will be difficult to interrogate and will require additional ongoing oversight. 39,43
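
As a hypothetical illustration of the kind of structured labeling this paragraph calls for (in the spirit of the model facts labels of reference 42, though not reproducing that schema), the sketch below records a tool's intended question and training population as machine-readable metadata. Every field and value is invented.

```python
# Hypothetical sketch of structured transparency metadata for an AI tool,
# loosely inspired by "model facts labels" (reference 42). All values invented.
model_facts = {
    "name": "Example sepsis risk model",          # hypothetical tool
    "intended_question": "6-hour risk of sepsis in adult inpatients",
    "training_population": {
        "sites": 1,
        "years": "2015-2019",
        "n": 40_000,
        "exclusions": ["patients under 18"],
    },
    "validation": {"type": "temporal holdout", "auroc": 0.82},
    "update_policy": "retrained quarterly; monitored for dataset shift",
}

for key, value in model_facts.items():
    print(f"{key}: {value}")
```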

The SMEs also discussed the need for a new cadre of health care professionals with advanced training in areas related to the implementation of AI-based tools who would establish and oversee the needed organizational and regulatory processes. The nature of this workforce and its training, whether through degree programs, fellowships, or certificate programs, also needs further definition.

Finally, competency-based education assumes there are diverse ways in which individuals gain competencies; thus, this report does not make specific recommendations for teaching and learning approaches. However, the competency list (List 1) provides a blueprint for educators who wish to design, implement, and measure results of initiatives aiming to help learners acquire them. The AI-related competencies that individual clinicians might already possess are likely to vary across and within learner levels; therefore, we did not assign competencies to distinct phases of the educational continuum. Instead, Supplemental Digital Appendix 2 (at https://links.lww.com/ACADMED/B334) suggests which competencies could be considered entry level and which could be considered more advanced.

List 1

Competencies for the Use of Artificial Intelligence (AI)–Based Tools by Health Care Professionals

  • 1. Basic Knowledge of AI: Explain what artificial intelligence is and describe its health care applications.
    • a. Identify the range of health-related AI applications.
    • b. Describe contributions from the disciplines of data science, computer science, and informatics to the development of health care AI tools.
    • c. Summarize the factors that influence the quality of data and explain how they impact the outputs of AI-based applications.
    • d. Explain how different approaches to data visualization can affect interpretation of the outputs of AI-based tools and the subsequent actions that might be taken.
    • e. Describe the statistical properties of AI-based tools and explain how they should be used in interpreting outputs.
  • 2. Social and Ethical Implications of AI: Explain how social, economic, and political systems influence AI-based tools and how these relationships impact justice, equity, and ethics.
    • a. Acknowledge personal responsibility for fairness and equity in the use of AI-based tools in health care.
    • b. Describe how system-level factors and regulatory structures influence the implementation of AI-based tools in health care.
    • c. Identify and evaluate how personal and structural biases can impact health data and the outputs of AI-based tools.
    • d. Recognize the potential for use of AI-based tools to reduce or exacerbate health disparities and participate in debiasing activities to mitigate negative impacts.
    • e. Appraise the ethical issues for clinicians, patients, and populations raised by various design, implementation, and use scenarios involving AI.
  • 3. AI-Enhanced Clinical Encounters: Carry out AI-enhanced clinical encounters that integrate diverse sources of information in creating patient-centered care plans.
    • a. Recognize that clinicians are responsible for all patient care decisions, including those that involve support from AI-based tools, and exercise judgment in applying AI-generated recommendations.
    • b. Discern a patient’s information needs, preferences, numeracy, and health literacy levels regarding the use of AI-based tools in their care.
    • c. Explain to patients the concepts of risk and uncertainty as they relate to the outputs of AI-based tools and describe practical implications for their care.
    • d. Integrate information derived from multiple AI and non-AI sources in patient-centered decision-making processes that result in personalized care plans.
    • e. Demonstrate comfort and humility in caring for data-empowered patients and incorporate patient-reported data and outcomes in developing care plans.
    • f. Apply methods of data visualization to facilitate patient understanding of AI-derived data, with sensitivity to possible differential impacts related to race, ethnicity, sex, gender, and social determinants of health.
    • g. Describe how AI-based tools can be used to enhance access and quality of care in remote and underserved settings.
  • 4. Evidence-Based Evaluation of AI-Based Tools: Evaluate the quality, accuracy, safety, contextual appropriateness, and biases of AI-based tools and their underlying data sets in providing care to patients and populations.
    • a. Access critical information about specific AI-based tools before applying them to patient care, including sources and representativeness of training data, algorithm performance for the question being asked, and how they were validated.
    • b. Describe how the scope and quality of data sets used in development of AI tools influence their applicability to specific patients and populations.
    • c. Identify potential biases in the design of an AI-based tool and the implications of those biases for patient care and population health.
    • d. Collaborate with patients, caregivers, informaticians, and others in the ongoing monitoring of AI-based applications and communicate feedback through established organizational channels.
  • 5. Workflow Analysis for AI-Based Tools: Analyze and adapt to changes in teams, roles, responsibilities, and workflows resulting from implementation of AI-based tools.
    • a. Participate collaboratively in team-based discussions that analyze changing roles, responsibilities, and workflows associated with the adoption of novel AI-based tools and help implement necessary changes.
    • b. Effectively use AI-based tools to facilitate critical communications between all members of health care teams.
    • c. Recognize data and informatics professionals as valuable members of health care teams and collaborate with them in the design of AI tools that address clinical problems.
    • d. Contribute to micro- and macro-system decision-making processes regarding which AI-based tools should augment and which should replace parts of current health care practices.
  • 6. Practice-Based Learning and Improvement Regarding AI-Based Tools: Participate in continuing professional development and practice-based improvement activities related to use of AI tools in health care.

Limitations

Although we purposively selected the SMEs for their diverse areas of expertise, 15 interviews provided a small sampling of perspectives. For example, the sample did not include social workers, hospital administrators, or other health care professionals who will interact with AI-based tools. In addition, the study was restricted to experts working in the United States, and international experts would likely add different views. Future research could engage a wider group of experts to determine which competency elements are most critical for different care settings, team roles, and specialty contexts. Furthermore, this study focused only on patient-centered, clinical uses of AI and did not include tools that facilitate business processes or system-level operations. Finally, given the rapid pace of change, we acknowledge that the competencies will need to adapt and that this list should be frequently revisited.

Conclusions

This qualitative study using expert interviews and thematic analysis identified 6 competency domains and 25 subcompetencies needed for the use of AI-based tools by health care professionals. The competency statements and subcompetencies can be used to guide future teaching and learning programs for health care professionals across the phases of education. The development of ethically competent individuals and organizations is critical if the potential benefits of AI-based tools are to be maximized and their potential harms diminished.

Acknowledgments:

The authors would like to thank Nandini Ovalasumuthovu for expert program management assistance; Anita Preininger, PhD, and Fernando Suarez, MD, PhD, for performing early phases of the study; William Stead, MD, for his advice and feedback; and all the subject matter experts who were interviewed for this study.

References

1. Matheny M, Israni ST, Ahmed M, Whicher D, eds. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: National Academy of Medicine; 2019.
2. Roosli E, Rice B, Hernandez-Boussard T. Bias at warp speed: How AI may contribute to the disparities gap in the time of COVID-19. J Am Med Inform Assoc. 2021;28:190–192.
3. Ross C. As the FDA clears a flood of AI tools, missing data raise troubling questions on safety and fairness. STAT+. https://www.statnews.com/2021/02/03/fda-clearances-artificial-intelligence-data. Published February 3, 2021. Accessed August 16, 2022.
4. Crawford K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press; 2020.
5. Finlayson SG, Subbaswamy A, Singh K, et al. The clinician and dataset shift in artificial intelligence. N Engl J Med. 2021;385:283–286.
6. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva, Switzerland: World Health Organization; 2021.
7. Lomis KD, Jeffries P, Palatta A, et al. Artificial Intelligence for Health Professions Educators. Washington, DC: National Academy of Medicine; 2021.
8. Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5:e16048.
9. Lee J, Wu AS, Li D, Kulasegaram KM. Artificial intelligence in undergraduate medical education: A scoping review. Acad Med. 2021;96(11 suppl):S62–S70.
10. Garvey KV, Craig KJT, Russell RG, et al. The potential and the imperative: The gap in AI-related clinical competencies and the need to close it. Med Sci Educ. 2021;31:2055–2060.
11. Gardner RM, Overhage JM, Steen EB, et al. Core content for the subspecialty of clinical informatics. J Am Med Inform Assoc. 2009;16:153–157.
12. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
13. Hodges B, Lingard L, eds. The Question of Competence: Reconsidering Medical Education in the Twenty-First Century. Ithaca, NY: Cornell University Press; 2012.
14. Klingstedt JL. Philosophical basis for competency-based education. Educ Technol. 1972;12:10–14.
15. Rubin HJ, Rubin IS. Qualitative Interviewing: The Art of Hearing Data. Newbury Park, CA: SAGE Publications; 2004:304.
16. Jordan J, Clarke SO, Coates WC. A practical guide for conducting qualitative research in medical education: Part 1-how to interview. AEM Educ Train. 2021;5:e10646.
17. Coates WC, Jordan J, Clarke SO. A practical guide for conducting qualitative research in medical education: Part 2-coding and thematic analysis. AEM Educ Train. 2021;5:e10645.
18. Clarke SO, Coates WC, Jordan J. A practical guide for conducting qualitative research in medical education: Part 3-using software for qualitative analysis. AEM Educ Train. 2021;5:e10644.
19. Saldana J. Fundamentals of Qualitative Research. Oxford, England: Oxford University Press; 2011.
20. Saunders B, Sim J, Kingstone T, et al. Saturation in qualitative research: Exploring its conceptualization and operationalization. Qual Quant. 2018;52:1893–1907.
21. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
22. Dedoose. Version 8.0.35. SocioCultural Research Consultants; 2018. https://dedoose.com/. Accessed September 21, 2022.
23. Bernard HR, Wutich A, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Newbury Park, CA: SAGE Publications; 2016.
24. Englander R, Cameron T, Ballard AJ, Dodge J, Bull J, Aschenbrener CA. Toward a common taxonomy of competency domains for the health professions and competencies for physicians. Acad Med. 2013;88:1088–1094.
25. Varpio L, Ajjawi R, Monrouxe LV, O’Brien BC, Rees CE. Shedding the cobra effect: Problematising thematic emergence, triangulation, saturation and member checking. Med Educ. 2017;51:40–50.
26. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRResidency2020.pdf. Accessed August 16, 2022.
27. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–453.
28. Hodges BD. Ones and zeros: Medical education and theory in the age of intelligent machines. Med Educ. 2020;54:691–693.
29. Masters K. Artificial intelligence in medical education. Med Teach. 2019;41:976–980.
30. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med. 2018;93:1107–1109.
31. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: Systematic review. JMIR Med Educ. 2020;6:e19285.
32. Alrassi J, Katsufrakis PJ, Chandran L. Technology can augment, but not replace, critical human skills needed for patient care. Acad Med. 2021;96:37–43.
33. Harish V, Morgado F, Stern AD, Das S. Artificial intelligence and clinical decision making: The new nature of medical uncertainty. Acad Med. 2021;96:31–36.
34. James CA, Wheelock KM, Woolliscroft JO. Machine learning: The next paradigm shift in medical education. Acad Med. 2021;96:954–957.
35. Cutrer WB, Spickard WA III, Triola MM, et al. Exploiting the power of information in medical education. Med Teach. 2021;43:S17–S24.
36. McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med. 2020;3:86.
37. Hersh W, Biagioli F, Scholl G, et al. From competencies to competence. In: Health Professionals’ Education in the Age of Clinical Information Systems, Mobile Computing and Social Networks. Amsterdam, the Netherlands: Elsevier; 2017:269–287.
38. Technology Informatics Guiding Education Reform. Informatics competencies for every practicing nurse: Recommendations from the TIGER collaborative. Chicago, IL: Technology Informatics Guiding Education Reform; 2007:34. http://www.tigersummit.com. Accessed August 16, 2022.
39. Park Y, Jackson GP, Foreman MA, Gruen D, Hu J, Das AK. Evaluating artificial intelligence in medicine: Phases of clinical research. JAMIA Open. 2020;3:326–331.
40. Yu KH, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf. 2019;28:238–241.
41. Eaneff S, Obermeyer Z, Butte AJ. The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA. 2020;324:1397–1398.
42. Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit Med. 2020;3:41.
43. Petersen C, Smith J, Freimuth RR, et al. Recommendations for the safe, effective use of adaptive CDS in the US healthcare system: An AMIA position paper. J Am Med Inform Assoc. 2021;28:677–684.

Copyright © 2022 by the Association of American Medical Colleges