Technology is changing the way the world communicates, as well as how we learn, remember and transform information. Fueled by social media and other communication technologies, e-learning has become an essential part of coursework in higher education, including health professions education.1 Programs in allied health, dentistry, medicine, nursing, pharmacy and other disciplines are offering online classes and embracing emerging technologies to expand learning opportunities for their students.1 One such innovative pedagogy is microlearning, which refers to small lesson modules and short-term activities intended to teach and reinforce course objectives.2 This pedagogy is asynchronous, allowing learners to control the place, method and time of access to information.2 The theory behind acquiring knowledge in the form of small units is not new.3,4 However, because the notion of microlearning is relatively new, there is no standardized definition for it.3
Microlearning has developed alongside Web 2.0 and its emphasis on user-generated content, accessibility and interoperability.5 The term “Web 2.0” was coined by Darcy DiNucci in 1999 and popularized by Tim O’Reilly and Dale Dougherty in 2004.5 Web 2.0 includes blogs, wikis, social media platforms and applications that enable participants to generate large amounts of information and share it immediately via the World Wide Web.6 Microlearning harnesses these technologies to engage students and to promote self-determined learning, also known as heutagogy.7,8 For example, students engaged in microlearning can take advantage of small, targeted and manageable chunks of information while pursuing self-determined learning activities.4,9 In contrast to memorizing content, as in older instructional designs, microlearning encourages students to engage with information that is as dynamic and contemporary as possible.2,9 Microlearning delivers small, discrete learning units in a step-by-step approach.2 It is also referred to as “micro- or bite-sized content”, “micro-courses” and “just-enough information”.2,3 Although microlearning is characterized by the size of its content, it is also distinguished by time: learning occurs within minutes or seconds rather than hours, days or months, which is often referred to as “just-in-time” learning.3,4
Empirical inquiries must be grounded in theoretical frameworks or theories, as these are the foundation from which knowledge is shaped.10 Learning theories have traditionally focused on individual concepts, learners and learning situations.9 The theoretical basis of microlearning is connectivism, which emphasizes the human capability to form connections between ideas, with each other and with different sources of information.11 Connectivism builds on the idea that the brain uses neural networks to process information.2 Learning theorist George Siemens applied this concept to learning, asserting that learning occurs when internal neural networks are formed as a result of the development and use of external networks between ideas, people and places.2 These connections between ideas in individual learners’ brains are formed, developed and maintained in what Siemens calls learning ecologies.2 Learning ecologies vary in size, scope and complexity, but all are composed of networks of individuals and information sources. Microlearning is thus grounded in connectivism, encompassing both the neural networks in individuals’ brains and the broader learning ecologies in which they participate.2,9,11
As learners are bombarded with an overwhelming amount of information, microlearning helps break that information down into smaller units that can be processed more easily.1,8,9 Learning is then focused on making connections between and among the small units, which is a foundation of critical thinking and clinical reasoning.2,9,11 This is particularly important in health professions education, which changes constantly with advancements in medicine and healthcare delivery systems.12 Indeed, microlearning has been endorsed by many health professions educators, programs and organizations alike to facilitate student learning, training and continuing education.13-16 Examples of microlearning in health professions education include: i) gamification and mobile apps for pediatric nurses to learn how to reduce central line-associated bloodstream infections13; ii) micro-photographic images and videos delivered via smartphones to teach pathology students14; iii) visual applications to teach the symptoms and management of common diagnoses15; and iv) exercises to enhance pharmacy students’ communication, critical-thinking and problem-solving skills.16 However, despite its popularity and applicability to a wide range of health disciplines, microlearning remains difficult to define in terms of its features and processes.3,17 A scoping review of the literature on this topic, using the Joanna Briggs Institute (JBI) methodology,18 will be undertaken to provide an overview of microlearning in health professions education. This review will address that gap by clarifying how microlearning is defined and by identifying the instructional designs and outcomes that might inform practice for health professions educators.
To that end, we approach this scoping review with two questions: i) How is microlearning defined and designed as an educational strategy in health professions education? and ii) What outcomes associated with microlearning have been measured in health professions students? Pedagogical outcomes will be categorized according to Kirkpatrick's four levels of evaluation (reaction, learning, behavior and results), the most widely used program evaluation strategy in both traditional classrooms and mobile learning in health professions education.19,20 Our findings will inform educational researchers, faculty and academic administrators on the application of microlearning, pinpoint gaps in the literature and help identify opportunities for instructional designers and subject matter experts to improve course content.
The current review will consider studies worldwide that include health professions students who were exposed to microlearning as an instructional strategy to acquire knowledge, skills, attitude, confidence, commitment and/or behavioral change.19,20 For this review, health professions students will be defined as undergraduate medical students, pre- or postlicensure nursing students, dentistry students, pharmacy students and allied health professions students.
The concepts of interest for the proposed scoping review are the definition of microlearning, the instructional design of microlearning, and the assessment of Kirkpatrick's outcomes for health professions students in didactic and clinical courses. Kirkpatrick's hierarchy will be used to differentiate levels of pedagogical outcomes.19 Level 1 is “reaction”, where learners respond to the learning event with a positive attitude, such as satisfaction or engagement.19 Level 2 is “learning”, where learners acquire knowledge, skills, confidence and commitment by engaging in the learning event.19 Level 3 is “behavior”, where learners apply their acquired knowledge, skills, confidence and commitment to real tasks, such as practical examinations or final course grades.19 Level 4 is “results”, where learners’ acquired knowledge, skills, confidence and/or commitment benefit patients or practice, for example, through improved patient safety or quality of care.19 Forms of microlearning will include micro- or bite-sized content, micro-courses, just-in-time learning and just-enough information.2-4
The context of the review will be academic settings, hospital training settings, community learning settings, clinical rotations, skills laboratories, virtual classes or any other settings where microlearning in the health professions has been introduced and evaluated. To answer the second review question, studies that only propose the concept of microlearning without presenting learning outcomes will be excluded.
Types of studies
The current scoping review will consider both experimental and quasi-experimental study designs, including randomized controlled trials, nonrandomized controlled trials, before-and-after studies, and interrupted time-series studies. In addition, analytical observational studies, including prospective and retrospective cohort studies, case-control studies and analytical cross-sectional studies, will be considered for inclusion. This review will also consider descriptive observational study designs, including case series, individual case reports and descriptive cross-sectional studies. Qualitative studies, including but not limited to designs such as phenomenology, grounded theory, ethnography, qualitative description, action research and feminist research, will also be considered. Systematic reviews that meet the inclusion criteria will also be considered, along with text and opinion articles. The search will be limited to studies published in English, but no date limits will be placed on the search. The reference lists of all included articles will be screened for additional studies.
The search strategy will follow a three-phase approach aimed at finding both published and unpublished studies. In the first phase, an initial limited search of PubMed (MEDLINE) and CINAHL will be undertaken, followed by an analysis of the words contained in the titles and abstracts and of the index terms used to describe the articles. This will inform the second phase of the search, in which the strategy will be finalized and tailored for each information source. A full search strategy for PubMed and CINAHL is detailed in Appendix I. In the third phase, the reference lists of included studies will be reviewed for any additional studies to consider.
The databases to be searched include PubMed (MEDLINE), CINAHL, Education Resources Information Center, Embase, PsycINFO and Education Full Text (H.W. Wilson). The search for unpublished studies will include the ProQuest Dissertations and Theses Global database.
Following the search, all identified citations will be collated and uploaded into EndNote V8.2 (Clarivate Analytics, PA, USA), and duplicates will be removed. Studies that meet the inclusion criteria will be retrieved in full, and their details will be imported into JBI's System for the Unified Management, Assessment and Review of Information (JBI SUMARI; Joanna Briggs Institute, Adelaide, Australia). Titles and abstracts will then be screened by two independent reviewers. The full text of selected studies will be retrieved and assessed in detail against the inclusion criteria. Full-text studies that do not meet the inclusion criteria will be excluded, and the reasons for exclusion will be provided in an appendix. The results of the search will be reported in full in the final report and presented in a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram.21 Any disagreements between the reviewers will be resolved through discussion or with a third reviewer.
Assessment of methodological quality
A formal assessment of methodological quality or confidence will not be performed, since this scoping review aims to provide an overview and map of the existing evidence regardless of quality, rather than to seek only the best evidence to answer our research questions.18
Data will be extracted from eligible publications included in the review using the standardized data extraction tool in JBI SUMARI. The data extraction instrument (Appendix II) was adapted from the JBI SUMARI tool to answer the review questions. The data will include specific details about the population, concept, context, study methods and key findings relevant to the review questions. As in the study selection process, any disagreements between the two reviewers will be resolved through discussion or with a third reviewer. For automated qualitative data analysis, NVivo 12 software (QSR International Pty Ltd, Doncaster, VIC, Australia) will be used to organize literature reviews and to present a synthesis of the key findings. In this method, each document stored in JBI SUMARI will be imported into NVivo as a “case” to facilitate unstructured text data management, analysis and report preparation. Each case will then be assigned attributes (i.e. study year, author(s), country, study purposes, study design, target audience, sample size, theoretical framework, definition of microlearning, course or instructional design, topic, technology platform, measurement tools, study key findings and Kirkpatrick's outcomes19). Texts pertinent to the review questions will be recorded in a data-extraction form, and each document will be coded, analyzed and synthesized so that main themes can emerge. The draft data extraction tool will be modified and revised as necessary during the process of extracting data from each study. Modifications will be detailed in the final report.
The extracted data will be presented in tabular form and as a narrative. The table will report: the distribution of studies by year/authors/countries of origin; purpose/aims; study design/health discipline(s)/target audience/sample size; theoretical framework, if presented; the definition of microlearning, if presented; course or instructional design, if presented; microlearning topic/technology platform or application; measurement tools; study key findings and outcomes according to Kirkpatrick's hierarchy, if presented (Appendix II). This table may be further refined and accompanied by graphic representations such as bar charts, pie charts and diagrams. A narrative description of the findings from a qualitative thematic analysis will also be presented.
The current work was supported by a 2018–2019 Duke AHEAD Supporting Health Professions Educators (DASHE) grant awarded to the first author (JCD).
Appendix I: Search strategy
CINAHL (via EBSCO)
Appendix II: Data extraction instrument
1. De Gagne JC. Teaching in online environments. In: Oermann MH, De Gagne JC, Philips BC, editors. Teaching in nursing and role of the educator: the complete guide to best practice in teaching, evaluation, and curriculum development. 2nd ed. New York: Springer; 2018. p. 95–111.
2. Hug T, editor. Didactics of microlearning: concepts, discourses and examples. Münster: Waxmann; 2017.
3. Torgerson C. The microlearning guide to microlearning. Kalamazoo: Torgerson Consulting; 2016.
4. Hug T. Mobile learning as “microlearning”: conceptual considerations towards enhancements of didactic thinking. In: Parsons D, editor. Refining current practices in mobile and blended learning: new applications. Hershey: IGI Global; 2012.
5. Newland B, Byles L. Changing academic teaching with Web 2.0 technologies. Innov Educ Teach Int 2014;51(3):315–325.
7. Blaschke LM, Hase S, Kenyon C, editors. Experiences in self-determined learning. Scotts Valley: CreateSpace Independent Publishing Platform; 2014.
8. Cosnefroy L, Carré P. Self-regulated and self-directed learning: why don’t some neighbors communicate? ISSDL 2014;11(2):1–12.
9. Bell F. Network theories for technology-enabled learning and social change: connectivism and actor network theory. In: Proceedings of the 7th International Conference on Networked Learning [Internet]; 2010 May 3–4 [cited 7 June 2018]; Aalborg, Denmark. Available from: http://usir.salford.ac.uk/9270/
10. Grant C, Osanloo A. Understanding, selecting, and integrating a theoretical framework in dissertation research: creating the blueprint for your “House”. Adm Issues J 2014;4(2):12–26.
11. Kahnwald N, Köhler T. Microlearning in virtual communities of practice? An explorative analysis of changing information behaviour. In: Proceedings of microlearning: micromedia & eLearning 2.0: getting the big picture [Internet]. Innsbruck: Innsbruck University Press; 2007 [cited 7 June 2018]. Available from: https://www.uibk.ac.at/iup/buch_pdfs/microlearning2006-druck.pdf
12. Christ-Libertin C. Leveraging technology: the Macy report's recommendation #4. J Contin Educ Nurs 2016;47(4):151–152.
13. Orwoll B, Chu K, Diane S, Fitzpatrick S, Meer C, Roy-Burman A. Engaging staff through social gamification: delivery of microlearning to improve safety and quality [Poster session: Quality and safety 15]. Crit Care Med 2014;42(12 Suppl 1):A1578.
15. Nelson M, Calandrella C, Foster D, Perera T. 147 Heads Up! An innovative use of smart phone technology to facilitate residency education. Ann Emerg Med 2017;70(4):S59.
16. Popovich NG, Katz NL. A microteaching exercise to develop performance-based abilities in pharmacy students. Am J Pharm Educ 2009;73(4):73.
17. Buchem I, Hamelmann H. Microlearning: a strategy for ongoing professional development. eLearning Pap 2010;21(7):1–15.
18. Peters MDJ, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Chapter 11: scoping reviews. In: Aromataris E, Munn Z, editors. Joanna Briggs Institute Reviewer's Manual [Internet]. Adelaide: Joanna Briggs Institute; 2017 [cited 7 June 2018]. Available from: https://reviewersmanual.joannabriggs.org/
19. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick's four levels of training evaluation. 1st ed. Alexandria: Association for Talent Development; 2016.
20. Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach 2007;29(2–3):210–218.
21. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097.