Unprecedented numbers of international medical graduates (IMGs) currently seek to become licensed to practice in Canada.1 Despite existing opportunities, many foreign-trained physicians experience barriers to practice, often lacking medical literacy skills and understanding of the Canadian cultural context.2 The 2004 Report of the Canadian Task Force on Licensure of International Medical Graduates recommended the development of educational and self-assessment programs to assist IMGs in improving these skills and determining their readiness for licensure.2
The Web-based Communication & Cultural Competence (CCC) Program was developed to meet these needs. The CCC Program incorporates the Medical Council of Canada objectives concerning communication, culture, legal, ethical, and organizational (C2LEO) aspects of medicine important to practicing medicine in a Canadian context. Associated professional behaviors, tacitly understood by Canadian-trained physicians, are often obscure to IMGs.
Medical literacy can be considered a “complex and ill-defined domain.”3 Communication and cultural competence skills are not well taught through books. Face-to-face instruction and real-world experience are hard to access before practice, leaving IMGs with few educational options. Therefore, our goal was to create an easily accessible Web site simulating authentic real-world practice to support active cognitive engagement and complex knowledge work.
The educational problem was how to cognitively scaffold individual knowledge building in a Web-based environment, simulating the complexities of real-world experience, including discourse and social/cultural practices, while providing pertinent information and evidence, with opportunities for formative feedback, reflection, and self-assessment.
The educational approach taken was based on situated learning4 and knowledge building5 to support contextualized understanding of complex subject matter and development of expertise.6 Knowledge building theory5–8 extends the mentorship approach of situated learning by formalizing the use of cognitive scaffolds as permanent and explicit elements. Cognitive scaffolding can be defined as a framework to support progressive improvement of knowledge and understanding; the framework is intentionally set beyond learners’ current understanding and is often composed of constructive feedback, evidence, and explications to promote revision and knowledge building. Scaffolding knowledge building offers a way to address current criticism concerning self-assessment,9 self-reflection, and the tacit dimensions10 of practice.
Research on educational innovations is often limited to evaluations of effectiveness or worth, which does little to advance theory or applied understanding.11,12 Educational innovation research can be better informed by questions such as: How is knowledge, understanding, or performance improved? What is important to support improvement? Why is the innovation significant, and for whom? Research questions of this nature aim to clarify, rather than to justify, aspects of educational innovation.
Design research methods12–14 provide a methodological approach to investigating educational problems differently: inductively rather than deductively, and in situ. A focus on iterative improvement, rather than on worth, enables us to obtain insights, explanations, and different measures of significance.
Steven Kanter,15 the editor-in-chief of Academic Medicine, recently called for a shift in educational innovation research reporting. Kanter states, “Ideally, a report of an innovation will provide not just information but insight—the kind of insight that will sustain a cycle of progressive thinking that will lead to even newer and better ideas.” The current paper intends to contribute to the emerging discourse on educational innovation research by providing insight into design research outcomes of five knowledge building scaffolds that emerged from the context of this study on IMGs in the CCC Program.
The CCC Program Web site begins with information on the Canadian medical system and an assessment of medical literacy based on English language proficiency benchmarks. The E-learning program is composed of five cases and a Communications Skills Module. Each case deals with a different medical topic and related C2LEO objectives. For example, Case 1 deals with HIV, consent, confidentiality, and physician’s duty of care; Case 2 deals with end-of-life care and cross-cultural communication.
Five design strategies were created to scaffold individual online knowledge building and were evaluated in this study:
- Simulated doctor/patient video vignettes
- Contextualized resources providing evidence at the point of need
- “Knowledge Checks” with embedded concurrent feedback
- “Reflective Exercises” highlighting ethical dilemmas
- “Commentaries” with expert feedback prompting recursion
Design research methods
The research protocol was approved by the University of Toronto Health Sciences Research Ethics Board. This design research study was conducted in three stages through the University of Toronto, Toronto, Canada. Data were collected from IMG volunteers at strategic points in program development, from October 2005 to June 2007. Inductive feedback and verification analyses informed iterative development of the CCC Program, to its completion in December 2007, and implementation in January 2008.
Design Research Study 1 was conducted in-person, in a computer lab, on October 21, 2005, to gather feedback on information design, navigation, and relevance of initial components. Participants were provided with individual computers and high-speed Internet access and were asked to complete a paper-based Likert-scale survey while viewing the Web site. A section for qualitative comments was provided at the end of the survey.
Study 2 was conducted online during a six-week time period (October to November 2006) to examine participants’ pedagogic use of Case 1 and related C2LEO objectives. Data were automatically collected online as volunteers worked through the case, on their own computers and time. Participation patterns and knowledge test outcomes were analyzed in Excel and SPSS. Analysis was used for iterative design of Case 1 and prototype creation for further case development. A focus-group feedback session was also conducted during this phase, results of which are detailed elsewhere.16
Study 3 was conducted at two time periods with two different cohorts (September to December 2006 and March to June 2007) to examine participants’ pedagogic use of Case 2 components. Data collection and analysis methods were as in Study 2.
Design Research Study 1
In Study 1, 20 IMG volunteers examined the Language Assessment, the design of Case 1, and overall perceptions of the Web site. Almost all participants found the four listening and four reading scenarios in the Language Assessment informative and easy to navigate. Consistent with validation study results,17 participants found the reading tasks more difficult than the listening tasks.
Participant feedback on E-learning case usability, pedagogic design, and perceived value of content was examined to identify areas for improvement. Most participants (84.2%, 16/19) strongly agreed/agreed that the video vignettes were interesting, and 78.9% (15/19) indicated that the videos “brought the case to life,” although they were slow to download. Online references (77.8%, 14/18) and Knowledge Checks (73.7%, 14/19) were seen as helpful. Only 55.6% (10/18) of participants strongly agreed/agreed that the Commentaries on Reflective Exercises were clear (4/18 were equivocal, and 4/18 disagreed/strongly disagreed). Qualitative comments indicated that participants would prefer one right/wrong answer, as opposed to a series of options and associated commentaries on the strengths/weaknesses of various medical ethics decisions.
Almost all participants (88.9%, 16/18) strongly agreed/agreed that working through Case 1 helped them to gain new knowledge about practice in the Canadian context and, in particular, new understanding of patient consent and confidentiality (89.5%, 17/19).
Design Research Study 2
Data were collected in October to November 2006. Online participation and pedagogic use of Case 1 components, including Knowledge Checks, Reflective Exercises, and a Post-Module Quiz, were examined in Study 2.
The first Knowledge Check exercise is composed of an interactive laboratory form to fill out and a multiple-choice question. Participants (N = 39) were asked to “try again” if incorrect on the first try. On the second try (N = 38), participants were provided with the answers. Each part of the laboratory form was scored and given a value, as was the number of participant attempts. A surprisingly high number of unprompted third and fourth attempts was recorded: 39.5% of participants (15/38) repeated the exercise a third time, and 34.2% (13/38) a fourth time.
Educational design of Reflective Exercise options and commentaries, noted in Study 1, was reviewed but not revised. The format was retained for purposes of authenticity, where typically no single answer exists to an ethical dilemma. Each option was designed to express the nuanced strengths and weaknesses of choices. Commentaries on all options were automatically provided after participants selected an option. Commentaries were linked to relevant C2LEO objectives to provide further clarification and connections for relational understanding.
All (N = 42) participants completed Reflective Exercise 1 once. Although they were not prompted, many participants chose to complete this exercise three or four times. Fifty-five percent (23/42) of participants completed it twice; 52.2% (12/23) completed it three times, and 58.3% (7/12) completed it four times.
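Note that each repeat rate above is conditional: the percentage is taken over those who made the previous attempt, not over the whole cohort. This cascade can be tallied from per-participant attempt counts. The sketch below is illustrative only; the actual analyses were run in Excel and SPSS, and the function name and data layout here are assumptions:

```python
def repeat_cascade(attempt_counts):
    """Conditional repeat rates from per-participant attempt counts.

    For each attempt number k >= 2, report how many participants made
    at least k attempts, the base (those who made at least k - 1), and
    the percentage, rounded to one decimal place.
    """
    max_attempts = max(attempt_counts)
    cascade = []
    for k in range(2, max_attempts + 1):
        base = sum(1 for c in attempt_counts if c >= k - 1)
        repeated = sum(1 for c in attempt_counts if c >= k)
        cascade.append((k, repeated, base, round(100 * repeated / base, 1)))
    return cascade

# Reflective Exercise 1 (N = 42): 19 participants stopped after one
# attempt, 11 after two, 5 after three, and 7 after four.
counts = [1] * 19 + [2] * 11 + [3] * 5 + [4] * 7
print(repeat_cascade(counts))
# [(2, 23, 42, 54.8), (3, 12, 23, 52.2), (4, 7, 12, 58.3)]
```

The output reproduces the figures reported for Reflective Exercise 1: 54.8% (23/42) repeated at least once, 52.2% of those (12/23) repeated again, and 58.3% of those (7/12) a fourth time.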
Reflective Exercise 2 is composed of five options: four video- and text-based options and one text-only option. Again, results (N = 34) indicate strong unprompted participation patterns; 41.2% (14/34) of participants attempted this exercise a second time.
The Post-Module Quiz is composed of 10 multiple-choice questions. Data (N = 16) were analyzed to reflect the number of times a participant tried the test, as well as the score on each item. Participants were given indicators of correct/incorrect responses on the first try and prompted to try again. On the second try, participants were given the correct answers. Hence, it is not surprising to see scores increase with each try, particularly on the third. Total aggregate scores were 60.1% on the first try, 70.4% on the second, 83.6% on the third, and 87.4% on the fourth.
Participation pattern results were consistent with those previously obtained. Sixteen IMGs completed the Post-Module Quiz once and, when prompted, completed it twice. Unprompted, 31.3% (5/16) of participants completed it a third time.
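The per-try aggregate scores reported above can be read as the mean score across the participants who made each attempt. A minimal sketch of that computation, using synthetic data (the real analysis was conducted in Excel and SPSS; the function name and data layout are hypothetical):

```python
def per_try_means(scores):
    """Mean quiz score for each attempt number.

    scores: one list per participant, holding that participant's
    percent-correct score on each successive try. Participants who
    stopped earlier simply have shorter lists and are excluded from
    later attempts.
    """
    max_tries = max(len(s) for s in scores)
    means = []
    for t in range(max_tries):
        attempted = [s[t] for s in scores if len(s) > t]
        means.append(round(sum(attempted) / len(attempted), 1))
    return means

# Synthetic example: two participants, one of whom stopped after two tries.
print(per_try_means([[60, 70, 80], [50, 70]]))  # [55.0, 70.0, 80.0]
```

Because later attempts include only the participants who chose to repeat, per-try means compare partially different subgroups; this self-selection is one reason scores rise on the unprompted third and fourth tries.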
Design Research Study 3
Case 2 was developed based on the Case 1 model. Study 3 (n = 33) examined online pedagogic use of Case 2 components. Consistent with previous findings, strong participation patterns characterized use of Knowledge Checks and Reflective Exercises.
In summary, results of participant patterns in Reflective Exercises are particularly significant; unprompted, many participants chose to repeat each exercise numerous times. Similarly, although Knowledge Checks and the Post-Module Quiz prompted participants to try only twice, many participants chose to complete these exercises a third time. The importance of opportunities for knowledge revision to improve understanding is well recorded in the literature.5–8 Results indicate that the five design strategies examined scaffold individual online knowledge building, the implications of which are discussed below.
The Web-based CCC Program supports multipass knowledge building, defined by participant needs. Five knowledge building design scaffolds were validated in this study. Used in combination, they form a framework for cognitive layering, enabling participants to work iteratively with complex, authentic content at progressively higher levels while being supported with feedback and guided through the process.
Simulated doctor/patient digital video vignettes provided visual illustrations of nuanced professional behaviors, communication, and interactions. They helped make tacit knowledge10 explicit. Videos were linked to text-based C2LEO objectives and detailed commentaries to clarify ideas and create connections.
Simulated doctor/patient videos can be considered “mid-fidelity,” where “high-fidelity” would be an in-person encounter with simulated patients, and “low-fidelity” a text-based description of interactions. Mid-fidelity simulations retain domain complexity and authenticity resembling real-world practice, and Web-based access provides opportunities to revisit and review scenarios.
Many educational Web sites organize evidence-based resources in an online library, separate from learning content. In the CCC Program, resources were contextualized, woven into module content throughout the program, and listed on a bibliographic page. PDFs and URLs were embedded in each case, Knowledge Checks, Reflective Exercises, and C2LEOs. Resources were made available at the point of need to connect content and evidence.
Embedded, concurrent, and transformative feedback is a key knowledge building principle.7 Commonly, interactive online knowledge tests provide feedback only in terms of correct or incorrect answers. In comparison, “Knowledge Checks” were designed to go beyond identification of correct/incorrect responses to identify knowledge gaps and promote knowledge building. On final submission, participants were provided with feedback on each question that included in-depth explanations, evidence-based resources, and links to relevant C2LEOs.
A second novel form of embedded concurrent feedback was developed for the Communications Skills Module (not examined in this study). The interactive Observation Guide provides a list of behaviors that can be used when viewing videos. It is intended to help IMGs analyze doctor/patient interactions, to heighten awareness, and to cue attention to important dimensions of professional communication.
Reflective Exercise options and commentaries were constructed around ethical issues and clinical dilemmas with no one correct answer. Although some participants indicated a preference for a simple approach, a more realistic, complex one was retained. Options continued to be illustrated with video vignettes that delineate strengths, weaknesses, and consequences of different decisions. The commentaries provided in-depth explanations to scaffold self-reflection and knowledge building. The Reflective Commentaries are intended to prompt “reflection on reflection.”18 Recursive reflection provides support for higher-level metacognitive work that is deeper than self-reflection.
In addition to Reflective Commentaries, other types of commentaries were designed for other purposes in other cases and the Communications Skills Module, including Interpretive Commentaries, Application Commentaries, and Guiding/Mentor Commentaries. All commentaries were designed to provide explicit feedback from the expert perspective, to provoke internal dialogue, and to scaffold knowledge building.
Collectively, the five scaffolds can be used to advance knowledge work and deep understanding. Demand for accessible, authentic, Web-based learning requires new ideas to support complexity, self-assessment, and reflection. Scaffolding knowledge building can help remove barriers to understanding and assist IMGs in overcoming the challenges inherent in practicing in a new cultural context.
Advantages of Web-based learning are often cited as “anywhere, anytime,” but as the results of this study show, the advantages may extend beyond convenient access. Findings indicate that cognitive scaffolding supports individual Web-based knowledge building. These findings are consistent with educational research on the importance of opportunities for learning through revision of knowledge and ideas for continual improvement in understanding and practice.8
Scaffolds evaluated in this study may be context-dependent. Further research is required to examine use of these scaffolds across contexts and to determine whether they are generalizable principles. Additional research is needed to examine correlations between scaffolded knowledge building and performance in practice.
Insights on scaffolding knowledge building, provided herein, are intended to contribute to the debate and progress of educational innovation research. The CCC Program can be accessed at http://www.img-ccc.ca or through the Canadian Information Centre for International Medical Graduates at http://www.img-canada.ca.
We gratefully acknowledge Diana Tabak, Carol Shapiro, Marlene Scardamalia, Glenn Regehr, Avi Hyman, Meaghan Brierley, and Juho Park for their contributions. The CCC Program was funded by the Government of Ontario. We thank the Medical Council of Canada, College of Physicians and Surgeons of Ontario, and HealthForceOntario for their ongoing support.