
Effects of reading media on reading comprehension in health professional education: a systematic review protocol

Fontaine, Guillaume1,2; Zagury-Orly, Ivry2,3,4; de Denus, Simon2,5; Lordkipanidzé, Marie2,5; Beauchesne, Marie-France5; Maheu-Cadotte, Marc-André1,2,6; White, Michel2,3; Thibodeau-Jarry, Nicolas2,3; Lavoie, Patrick1,2

Author Information
doi: 10.11124/JBISRIR-D-19-00348



A current and growing trend in undergraduate, graduate, and postgraduate health professional education (HPE) is the shift from paper-based learning materials to various types of digital media, such as computers, smartphones, or tablets. Studies investigating the impact of media on learning outcomes have yielded inconsistent findings.1,2 These inconsistencies may be explained by overlooked factors such as task characteristics (e.g., content, duration), participant characteristics (e.g., technological literacy), display technology (e.g., color screen versus black-and-white screen), and electronic features (e.g., animations, hyperlinks). These factors may act as confounding variables in the assessment of reading-related learning processes.3

Reading comprehension is the capacity to appraise, evaluate, integrate, and remember information.4 A recent meta-analysis on the effects of reading media on reading comprehension suggested an advantage of paper-based over digital-based reading when considering three moderators: time frame, text genre, and publication year.3 The advantages of paper-based over digital-based reading were observed in time-constrained settings and across text genres (i.e., in studies using informational texts only, or a mix of informational and narrative texts), and tended to increase in recent studies. Thus, paper-based reading may be preferable for the comprehension and long-term retention of information contained in a text.3 The use of digital media could lead to decreased comprehension of texts and retention of information, especially when texts are long or reading is time-constrained. These effects appear to be independent of the reader's age. In the long term, this could impact learners' ability to critique, integrate, and evaluate the information they read – a fundamental element of HPE.4

Paper-based reading may be more effective for several reasons. First, it is suggested that digital-based reading leads to overconfidence in one's perceived acquisition of knowledge, which may ultimately result in diminished understanding or integration.5 Second, emerging data indicate that reading with digital media may lead to more surface reading,6 which in turn impairs learning.5 Digital media are frequently used for rapid everyday tasks (e.g., social media), which may partially explain this tendency toward shallow reading. Third, learners' variable experience with technology is believed to moderate the effect of digital-based reading on comprehension.7 Even if students prefer digital media, this preference does not necessarily translate into increased reading comprehension.4 Finally, paper-based documents give the text a physical “presence,” which may facilitate learning.1,3,4 This feeling of physical presence could be associated with, for example, knowing that a particular sentence or concept is at the bottom of a particular page in a printed text.1,4 However, no substantial data exist to support these potential explanations.

While previous reviews have been conducted to assess the impact of reading media on reading comprehension,1-4,8,9 none has focused specifically on HPE. This is problematic, since reading comprehension has unique implications in the context of HPE. First, reading comprehension in the context of HPE is clinically relevant: sub-optimal reading comprehension, if not properly addressed, may lead to increased misconceptions, faulty decision-making, and a consequent increase in medical errors.10,11 Second, previous reviews have focused on heterogeneous populations. For instance, Delgado et al.3 conducted a systematic review that included a heterogeneous population across the following educational levels: elementary, middle, or high school; undergraduates; or graduates and professionals. However, due to the small number of studies with sufficient data to calculate effect sizes, the category “graduates or professionals” was omitted from the analysis. In addition, while the between-group effects were non-significant, none of the comparisons was relevant to this population of interest (i.e., health professional students, trainees, and residents). Third, although previous reviews investigated the impact of text genre (i.e., informational, narrative, or mixed) on reading comprehension, they did not investigate the differences in effects between theoretical texts and applied texts (i.e., texts that contain information to be applied in clinical practice). Finally, the quantity of medical knowledge to assimilate in order to graduate from an HPE program is growing exponentially: medical knowledge was estimated to double every 3.5 years in 2010 and every 0.2 years (i.e., 73 days) in 2020.12 Because knowledge is expanding faster than students' ability to assimilate it, it is essential to ensure optimal comprehension, integration, and retention.

Some studies have assessed the impact of reading media on reading comprehension in HPE. Notably, two studies conducted in this context found no difference between digital-based and paper-based reading in terms of comprehension.13,14 These two studies differed in reading time frame (free versus limited) and text genre (informational versus narrative). No strategies were used to enhance reading comprehension (e.g., highlighting, note taking). Thus, it would be relevant to investigate the impact of these variations on reading comprehension in the context of this review.

A search of PROSPERO, MEDLINE, the Cochrane Database of Systematic Reviews, and JBI Evidence Synthesis was conducted and no published or in-progress systematic reviews on the effects of reading media in HPE were identified.

Review question

Among students, trainees, and residents participating in HPE, what is the effect of digital-based reading versus paper-based reading on reading comprehension?

Inclusion criteria


This review will include studies conducted with undergraduate and graduate students of any age, in any health care context and from any discipline, who participate in health professional education (i.e., undergraduate or graduate courses or programs for health care professionals). The review will also include trainees and residents in medicine and other disciplines (i.e., individuals undertaking postgraduate training). Studies with individuals who have reading difficulties, cognitive impairments, or other related disorders (e.g., attention deficit hyperactivity disorder) will be excluded.


Studies that evaluate the effect of paper-based reading will be included. Paper-based reading is defined as reading texts printed on paper (e.g., printed books, printed articles).3 Studies assessing the impact of texts with wide-ranging characteristics (e.g., informational, narrative, linear, non-linear) will be included.15 Studies in which students were allowed to print the digital text will be excluded from the review.


Studies that compare the effects of paper-based reading directly to those of digital-based reading will be included. Studies that do not include a comparator will be excluded from the review. Digital-based reading is defined as “reading texts on digital screens, including computers, tablets, mobile phones, and e-readers.” It is important that the reading materials evaluated in studies are comparable across media (i.e., similar content, structure, and images); thus, studies will be excluded if the digital-based condition includes features such as videos, animations, hyperlinks,16 web navigation,17 gamification,18 and adaptivity.19


The primary outcome of this review is reading comprehension (i.e., the understanding of the textual content in paper or digital formats). More specifically, this review will consider studies reporting outcomes related to textual, inferential, and mixed types of reading comprehension. Textual reading comprehension is associated with reading tasks that ask “for specific details or shallow level of comprehension.”3 Inferential reading comprehension is equivalent to high-level comprehension, when reading tasks require “inferences based on parts of the texts, across parts, or involved previous knowledge.”3 Mixed reading comprehension is associated with reading tasks that require both types of reading comprehension.3 This review will consider all methods to assess reading comprehension, regardless of prior psychometric evaluation.

In addition, variables that could influence the relationship between interventions and outcomes, such as learners’ self-reported experience with using technology and preference for paper-based or digital-based reading, will be extracted when reported in included studies. The review will consider subjective measures of learners’ experience and preference (i.e., Likert-type questionnaires).

Types of studies

This review will comprise observational, quasi-experimental, and experimental study designs, including randomized controlled trials, non-randomized controlled trials, before-and-after studies, case-control studies, interrupted time-series studies, and cohort studies. This review will consider studies published in any language in peer-reviewed journals or peer-reviewed conference proceedings. This review will exclude qualitative studies, discussion papers, editorials, knowledge syntheses, dissertations, and theses.


The proposed systematic review will be conducted in accordance with the JBI methodology for systematic reviews of effectiveness20 and the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) checklist.21 The methods described in this systematic review protocol were piloted by review authors in previous reviews.18,22,23 The title of this review was registered in the JBI Registry on October 13, 2019. This protocol is pending registration in PROSPERO (CRD42020154519).

Search strategy

An initial limited search of MEDLINE was undertaken in August 2019 to identify relevant articles on the topic. The authors worked in collaboration with an experienced librarian to refine the search strategy to ensure sensitivity, specificity, and replicability across all databases. The search strategy is based on a combination of three concepts: i) students, trainees, and residents participating in HPE (population); ii) reading media (intervention); and iii) reading comprehension (outcome). The search strategy was first developed for MEDLINE (Appendix I), and then tailored to each bibliographical database.

Information sources

Systematic searches will be performed in six bibliographical databases: CINAHL (EBSCOhost; 1980 to present); Embase (Ovid SP; 1974 to present); ERIC (ProQuest; 1966 to present); MEDLINE (Ovid SP; 1946 to present); PsycINFO (EBSCOhost; 1967 to present); Web of Science – Science Citation Index (SCI) Expanded and Social Sciences Citation Index (SSCI; Clarivate Analytics; 1900 to present).

In addition to the search in bibliographical databases, reference lists of included studies will be manually screened to identify additional studies. Relevant journals (e.g., MedEdPORTAL) will be searched for additional studies, as will Google Scholar for related systematic reviews.

Study selection

All identified citations will be uploaded into EndNote v.X9.2 (Clarivate Analytics, PA, USA) and duplicates removed. Titles and abstracts will be screened by two independent reviewers for assessment against the inclusion criteria. Potentially relevant studies will be retrieved in full and their citation details imported into the JBI System for the Unified Management, Assessment and Review of Information (JBI SUMARI; JBI, Adelaide, Australia). The full text of selected citations will be assessed in detail against the inclusion criteria by two independent reviewers. Reasons for the exclusion of full-text studies that do not meet the inclusion criteria will be recorded and reported in the systematic review. At any time during the review process, disagreements will be resolved through discussion and consensus or via a third reviewer. The study selection process will be reported in a PRISMA flow diagram.21

Assessment of methodological quality

All included studies will be critically assessed by two independent reviewers. The standardized critical appraisal tools incorporated within JBI SUMARI will be used to assess the risk of bias of experimental, quasi-experimental, and observational studies.24 For experimental studies, reviewers will score a total of 13 criteria as met (yes), not met (no), unclear, or, where appropriate, not applicable (n/a) to that particular study. For quasi-experimental studies, reviewers will score a total of nine criteria using the same response scale. For observational studies (e.g., cohort studies), reviewers will select the appropriate checklist for each study design in the JBI Reviewer's Manual.24 Any disagreements that arise between the reviewers during the assessment of methodological quality will be resolved through discussion, or with a third reviewer. Where data are missing or clarification is needed, authors of papers will be contacted.

Studies will not be excluded on the grounds of their risk of bias, but the risk of bias will be reported when presenting the results. The risk of bias judgments will be summarized across different studies for each of the domains listed using the risk of bias graph and the risk of bias summary.

Data extraction

Data will be extracted independently by two reviewers from included studies using the standardized JBI data extraction tool.20 Any disagreements arising during this phase of the review will be resolved through discussion, or with a third reviewer. In cases where there is missing data or a need for clarification, authors of papers will be contacted. Data will be collected at the following levels:

  • Study level: study design, year of study, sample size, type of randomization, setting, country of study conduct, and corresponding author's contact information;
  • Participant level: type and number of participants, eligibility criteria, withdrawals and exclusions (loss to follow-up), age, sex, level of instruction, practice setting, self-reported experience with using technology, self-reported preference for paper-based or digital-based reading;
  • Intervention level: clinical topic (e.g., pharmacology), text length (i.e., number of words and number of pages; text will be categorized as either short [< 1000 words] or long [≥ 1000 words]),3 allowed reading time frame (i.e., free or limited), type of paper-based media (e.g., printed book, printed article) or type of digital device (e.g., computer, laptop, smartphone), text genre (i.e., information, narrative, or mixed),3 need for scrolling (i.e., yes or no), strategies used to enhance reading comprehension (e.g., use of highlighting, note taking);
  • Outcome level: name, time points measured, definition, unit of measurement, scales, validation of measurement tool, results.

Data synthesis

Characteristics of included studies will be synthesized at four levels (i.e., study level, participant level, intervention level, outcome level) in table format. For observational studies, results will be presented descriptively.

For quasi-experimental and experimental studies, as clinical and methodological diversity is anticipated, all summary intervention effect estimates will be presented using a random effects model. Because studies are not expected to use the same outcome measures or scales, data for continuous outcomes will be analyzed using standardized mean differences with 95% confidence intervals. Data for dichotomous outcomes will be analyzed using risk ratios and 95% confidence intervals. For studies with multiple intervention groups, each paired comparison relevant to this review will be included separately; however, shared intervention groups will be divided among the comparisons.25
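The standardized mean difference calculation described above can be sketched briefly. The analyses themselves will be run in RevMan; the following is only a minimal illustration of Hedges' g with an approximate 95% confidence interval, using the large-sample variance formula from standard meta-analysis texts:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI.

    The pooled SD weights each group's variance by its degrees of freedom;
    the correction factor J shrinks Cohen's d for small samples."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction
    g = j * d
    # Approximate variance of g (large-sample formula)
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    se = math.sqrt(var_g)
    return g, (g - 1.96 * se, g + 1.96 * se)
```

For example, two groups of 30 readers with mean comprehension scores of 10 versus 9 (SD 2 in both groups) give g ≈ 0.49.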

Meta-analyses will be undertaken to compare the effects of reading media on reading comprehension if: i) the interventions and the research questions are similar enough for pooling to make sense; and ii) there are at least two studies available for each outcome of interest. Meta-analyses will be conducted in Review Manager (RevMan) v5.3 (Copenhagen: The Nordic Cochrane Centre, Cochrane). A narrative summary of the results will be presented if it is not possible to conduct a meta-analysis.
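As an illustration of the random effects pooling performed by RevMan, the following sketch implements the DerSimonian-Laird estimator, assuming per-study effect estimates and their variances have already been extracted; it is a simplified stand-in for the software, not the review's analysis code:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random effects pooled estimate with a 95% CI.

    The between-study variance tau^2 is estimated from Cochran's Q;
    each study is then weighted by 1 / (within-study variance + tau^2)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When estimated heterogeneity is zero, the random effects estimate reduces to the fixed-effect (inverse-variance) estimate, as expected.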

Heterogeneity will first be assessed by examining the characteristics of included studies and the similarities and disparities between participants, interventions, and outcomes. Heterogeneity will then be assessed statistically using the standard chi-square and I2 tests within RevMan. A statistical significance level (P value) of 0.10 will be used for the chi-square statistic instead of the conventional level of 0.05, as this test is known to have low statistical power.24
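The two statistics named above can be sketched as follows; RevMan computes both internally, so this is only an illustration of the underlying arithmetic:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for between-study heterogeneity.

    Q is the inverse-variance-weighted sum of squared deviations from the
    fixed-effect pooled estimate; I^2 = max(0, (Q - df) / Q) * 100 is the
    percentage of total variability attributable to heterogeneity rather
    than chance."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Identical study effects yield Q = 0 and I2 = 0%, whereas strongly divergent, precisely estimated effects push I2 toward 100%.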

Subgroup analyses will be carried out to investigate heterogeneity when two or more studies are available for the outcome of interest. The following potential effect modifiers will be explored: type of paper-based or digital-based reading media; clinical topic of reading; discipline of health professional students; and study design.

If there are 10 or more studies included in the meta-analysis for the primary outcome (i.e., reading comprehension), a funnel plot will be generated using RevMan to assess publication bias; an asymmetrical funnel plot will be considered indicative of publication bias. If appropriate, to further assess publication bias, Egger's regression will be performed using IBM SPSS Statistics version 25 (Armonk, NY: IBM Corp).25 A P value ≤ 0.05 for the constant of the regression will be considered indicative of publication bias.
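Egger's test amounts to an ordinary least-squares regression of the standardized effect on precision, with an intercept (constant) far from zero signaling funnel plot asymmetry. A minimal sketch, omitting the t-test and P value for the intercept that SPSS would report:

```python
def eggers_regression(effects, ses):
    """Egger's regression for small-study (publication) bias.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    an intercept far from zero suggests funnel plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1 / s for s in ses]                   # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx           # relates to the underlying pooled effect
    intercept = my - slope * mx # Egger's bias term
    return intercept, slope
```

In practice the intercept's standard error and two-sided P value would be taken from the regression output, per the threshold stated above.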

Assessing certainty in the findings

A Summary of Findings will be created for the main intervention comparisons and will include the most important outcomes (e.g., reading comprehension) to draw conclusions about the certainty of the evidence. The quality of the evidence will be assessed independently for each outcome according to the five domains (risk of bias, inconsistency, indirectness, imprecision, and publication bias) established by the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines.26 Review authors will use GRADEpro (McMaster University, ON, Canada), based upon extracted data.


Acknowledgments

The authors thank Patrice Dupont (librarian at the Bibliothèque de la santé, Université de Montréal) for help drafting and piloting the search strategies, and Alexandra Lapierre for assistance with study selection, quality assessment, and data extraction.


Funding

This review constitutes the first phase of a research study funded by the “Cercle du Doyen” of the Faculty of Pharmacy at Université de Montréal. GF is a Canadian Institutes of Health Research Vanier Scholar. ML is a Fonds de Recherche du Québec en Santé (FRQS) Junior 1 Research Scholar. MAMC is a Fonds de Recherche du Québec en Santé (FRQS) Doctoral Fellow.

Appendix I: Search strategy



1. Wang S, Jiao H, Young MJ, Brooks T, Olson O. Comparability of computer-based and paper-and-pencil testing in K–12 reading assessments: a meta-analysis of testing mode effects. Educ Psychol Meas 2007; 68:5–24.
2. Kingston NM. Comparability of computer- and paper- administered multiple-choice tests for K–12 populations: a synthesis. Appl Meas Educ 2008; 22:22–37.
3. Delgado P, Vargas C, Ackerman R, Salmerón L. Don’t throw away your printed books: a meta-analysis on the effects of reading media on reading comprehension. Educ Res Rev 2018; 25:28–38.
4. Singer LM, Alexander PA. Reading on paper and digitally: What the past decades of empirical research reveal. Rev Educ Res 2017; 87:1007–1041.
5. Lauterman T, Ackerman R. Overcoming screen inferiority in learning and calibration. Comput Hum Behav 2014; 35:455–463.
6. Mangen A, Olivier G, Velay J-L. Comparing comprehension of a long text read in print book and on Kindle: where in the text and where in the story? Front Psychol 2019; 10 (38):1–11.
7. Chen G, Cheng W, Chang T-W, Zheng X, Huang R. A comparison of reading comprehension across paper, computer screens, and tablets: does tablet familiarity matter? J Comput Educ 2014; 1 (2):213–225.
8. Dillon A. Reading from paper versus screens: a critical review of the empirical literature. Ergonomics 1992; 35:1297–1336.
9. Noyes JM, Garland KJ. Computer- vs. paper-based tasks: are they equivalent? Ergonomics 2008; 51:1352–1375.
10. Bari A, Khan RA, Rathore AW. Medical errors; causes, consequences, emotional response and resulting behavioral change. Pak J Med Sci 2016; 32 (3):523–528.
11. Garrouste-Orgeas M, Philippart F, Bruel C, Max A, Lau N, Misset B. Overview of medical errors and adverse events. Ann Intensive Care 2012; 2 (1):2.
12. Densen P. Challenges and opportunities facing medical education. Trans Am Clin Climatol Assoc 2011; 122:48–58.
13. Green TD, Perera RA, Dance LA, Myers EA. Impact of presentation mode on recall of written text and numerical information: hard copy versus electronic. N Am J Psychol 2010; 12 (2):233–242.
14. Margolin SJ, Driscoll C, Toland MJ, Kegler JL. E-readers, computer screens, or paper: does reading comprehension change across media platforms? Appl Cognitive Psych 2013; 27 (4):512–519.
15. Zumbach J, Mohraz M. Cognitive load in hypermedia reading comprehension: influence of text type and linearity. Comput Hum Behav 2008; 24 (3):875–887.
16. Clark R, Mayer R. E-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. 4th ed. Hoboken, NJ: John Wiley & Sons; 2016.
17. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: a meta-analysis. JAMA 2008; 300 (10):1181–1196.
18. Maheu-Cadotte M-A, Cossette S, Dubé V, Fontaine G, Mailhot T, Lavoie P, et al. Effectiveness of serious games and impact of design elements on engagement and educational outcomes in healthcare professionals and students: a systematic review and meta-analysis protocol. BMJ Open 2018; 8:e019871.
19. Fontaine G, Cossette S, Maheu-Cadotte M-A, Mailhot T, Deschênes M-F, Mathieu-Dupuis G, et al. Efficacy of adaptive e-learning for health professionals and students: a systematic review and meta-analysis. BMJ Open 2019; 9 (8):e025252.
20. Tufanaru C, Munn Z, Aromataris E, Campbell J, Hopp L. Chapter 3: Systematic reviews of effectiveness. In: Aromataris E, Munn Z, editors. JBI Reviewer's Manual [Internet]. Adelaide: JBI; 2017.
21. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015; 4 (1):1–9.
22. Fontaine G, Cossette S, Maheu-Cadotte M-A, Deschênes M-F, Rouleau G, Lavallée A, et al. Effect of implementation interventions on nurses’ behaviour in clinical practice: a systematic review, meta-analysis and meta-regression protocol. Syst Rev 2019; 8 (1):305.
23. Fontaine G, Cossette S, Maheu-Cadotte MA, Mailhot T, Deschenes MF, Mathieu-Dupuis G. Effectiveness of adaptive e-learning environments on knowledge, competence, and behavior in health professionals and students: protocol for a systematic review and meta-analysis. JMIR Res Protoc 2017; 6 (7):e128.
24. Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, editors. JBI Reviewer's Manual [Internet]. Adelaide: JBI; 2017.
25. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]: The Cochrane Collaboration; 2011.

Keywords: books; health professional education; reading comprehension; reading media; systematic review

© 2020 JBI