Distributed medical education programs, defined as educational events and activities in multiple locations and learning environments outside of the traditional classroom or teaching hospital settings, are now an established worldwide trend.1–4 Some important drivers behind these programs include distorted disease patterns in academic tertiary teaching hospitals,5 the desire to address the health care needs of distributed populations,6 an interest in promoting primary care practice,7 the political influence of regional and rural populations,8 a pedagogical shift to self-directed learning,9 and the development of information and communication technology to support distance learning.10,11 Recently, these factors have converged with a global shortage of medical graduates12 and a consequent increase in enrollments at many medical schools that now exceed their capacity to educate their students in large, centralized teaching hospitals. As a result of what one author13 calls the perfect storm for educational reform, distributed medical education programs have sprouted up in many countries. Such programs require new and creative avenues for teaching and learning. They also pose challenges to course alignment, academic standards, and educational equivalence in institutions that are understandably concerned with the quality, reputation, and rankings of their academic offerings.
In this article, we describe the evolution of a widely distributed medical education program at an Australian university and its response to the governance and assessment challenges of course implementation at multiple sites, in diverse contexts, using different educational models.
Developing a Distributed Medical Education Program
From its inception in 1961 until about 10 years ago, the Faculty of Medicine at Monash University in Melbourne, Australia, had employed a 6-year, layered curriculum that was divided equally between preclinical and clinical training. In the last 10 years, however, in line with international trends, this traditional model was replaced by a 5-year curriculum built around four longitudinal themes that vertically integrate the basic biological, population, and social sciences with clinical practice across the entire 5 years of study. This curriculum is divided into 2 preclinical years, with a focus on social, population, and biomedical sciences, and 3 clinical years made up of multiple clerkships in the core clinical disciplines. In year three (clinical year one), internal medicine and general surgery, with their respective subdisciplines, are integrated into a single, hospital-based year that provides students with the foundations for clinical practice in these core clinical disciplines. In year four (clinical year two), women’s health (obstetrics–gynecology), children’s health (pediatrics), family practice (primary care), and psychiatry are clustered together in a single year partly based in family practice. In year five (clinical year three), students rotate through six-week, discipline-specific, hospital-based blocks in emergency medicine, aged care (geriatrics), internal medicine, and general surgery, plus one selective and one elective block to prepare for internship.
Also in the last 10 years, in response to government funding incentives to effectively double the number of students who graduate from medical school in Australia and to distribute more of these graduates to underserved regional and rural areas, Monash University rapidly expanded its enrollment numbers and established a distributed network of clinical schools offshore in Malaysia and in the rural southeastern and northwestern regions of the state of Victoria, where Melbourne is located.
Started in 2005, the Monash-affiliated Malaysian medical school adapted the Monash five-year curriculum, with the goal of accommodating the different national health priorities and residual burden of communicable diseases in Malaysia and customizing the teaching of care delivery models to fit the national public health system, which is based on primary health care. This customization process introduced new material into the curriculum that is useful for students in both Malaysia and Australia. The Malaysian medical school is accredited by the Australian Medical Council and grants Monash MBBS degrees, which are recognized for medical practice in Australia.
In 2006, in the southeastern region of Gippsland, Monash established a new regional graduate-entry medical school. Entry to this medical school, akin to the North American system, is restricted to students with previous undergraduate degrees and is based on their undergraduate academic performance. This four-year program compresses the first two years of the current Monash five-year curriculum into a single year and leaves the three clinical years largely unchanged.
Also in 2006, in the northwestern region, Monash collaborated with the University of Melbourne to establish a new model for medical education. In this model, known as the North Victorian Rural Medical Education Network (NVRMEN), dedicated cohorts of students from rural backgrounds spend the majority of their three clinical training years at regional and rural training sites in northern Victoria and are expected to meet the learning objectives of their respective medical courses.
These steps have resulted in four separate medical education programs at Monash University: (1) the original central metropolitan program, (2) the Malaysian program, (3) the Gippsland program, and (4) the NVRMEN program. Unlike at medical schools in North America, all students, excluding those in the Gippsland program, enter after graduating from high school, on the basis of their high school grades, a general aptitude test, and a face-to-face interview.
In the remainder of this article, we compare the educational models used in clinical year two, the most complex and most varied year of study, across the four programs, and we examine the assessment and governance strategies in one discipline (children’s health) to exemplify the alignment challenges faced by such distributed programs. Many of the issues that we highlight apply equally to the other disciplines and to other years of the Monash curriculum.
Clinical Year Two Educational Models
All four programs must address the learning objectives for the four disciplines that make up clinical year two in the Monash curriculum. Given their different contexts, available resources, and educational philosophies, each program was given the latitude to implement the curriculum in its own way, provided that the prescribed content is covered and the same assessment end points are achieved. Four quite distinct educational models have emerged (see Chart 1).
Central metropolitan program
The central metropolitan program continues to implement the curriculum in four separate, sequential nine-week, discipline-specific blocks at multiple sites in Melbourne. Except for family practice, these blocks are largely teaching-hospital-based and run by specialists and subspecialists. The approaches to education and practice at these teaching hospitals differ very little from those at North American teaching hospitals.
Malaysian program

The Malaysian program is located in the large southern city of Johor Bahru, where students move between large regional hospitals for their women’s health, children’s health, and psychiatry clerkships and affiliated primary health care clinics for their family practice clerkship. For two days a week throughout the academic year, students attend the same clinic for their family practice clerkship; on the remaining three days, they are allocated sequentially to three 12-week, discipline-specific blocks in the other three disciplines (women’s health, children’s health, and psychiatry).
The Malaysian faculty elected to replace the consecutive nine-week, discipline-specific blocks of the central program with 12-week clerkships at the main teaching hospitals in Johor Bahru, which continue to provide the students with exposure to a wide range of acute and common clinical conditions in women’s health, children’s health, and psychiatry. To provide additional longitudinal perspectives for all four disciplines, the faculty introduced yearlong clerkships at public primary care clinics, which implement one of the best models that we have seen of the World Health Organization’s district health systems, based on primary health care in developing countries.
Gippsland program

In the Gippsland program, each student’s primary clerkship is in a single family practice for the entire year, with short releases into three discipline-specific clerkships of varying length. In this integrated yearlong model, these family practices provide the main platform for learning in all four disciplines under the clinical supervision of family physicians. The students receive supplementary tutorials from specialists during their discipline-specific blocks.
This model was effectively pioneered14–16 elsewhere in Australia and was considered appropriate for a region with a scattered rural population, a well-developed network of family practices, and no clear concentration of specialist services in a single, large, regional center. This model was energetically supported by a strong primary care lobby committed to training a new generation of family physicians for remote, procedural family practice in Australia.
NVRMEN program

In the NVRMEN program, as in the Gippsland program, the four disciplines are horizontally integrated and delivered throughout the year. However, in this program, unlike in the Gippsland program, students’ time is divided equally between regional and rural semesters. During the rural semester, students are allocated to individual rural family practices, which, analogous to the Gippsland program, provide the platform for learning in all four disciplines under generalist supervision. During the regional semester, students spend one day a week in a regional family practice and the other four days in six-week sequential block rotations in women’s health, children’s health, and psychiatry under specialist supervision.
This model was informed by a detailed survey of generalists and specialists in the northwestern region of Victoria about their own and each other’s competency to deliver the Monash curriculum. Most family physicians and all specialists preferred a collaborative and shared approach to curriculum delivery. This finding led to the NVRMEN longitudinal model, implemented through distinct regional (specialist-supervised) and rural (generalist-supervised) semesters, which matched the high concentration of specialist services in the two regional centers and the geographic spread and availability of rural family practices.
The Children’s Health Case Study
The development of these new educational models precipitated a curriculum review at Monash University to ensure that students in the distributed programs had equivalent educational experiences and learning outcomes to those in the central program. In this section, we focus on a single discipline (children’s health) to exemplify the governance and assessment issues that emerged from the curriculum review and redesign process, although similar processes also occurred in the other three disciplines.
The following principles guided the review process: All programs should (1) be based on common learning objectives, (2) have clearly prescribed and common curriculum content, and (3) have an identical summative assessment. Wide latitude was, however, granted for programs to develop site-specific educational methods and modes of delivery.
Common learning objectives
For the Monash five-year curriculum, a set of common global learning objectives had been defined for all disciplines in clinical year two in an attempt to better integrate the four disciplines. The Children’s Health Discipline Group adopted this set of learning objectives unchanged for their review and redesign process (see List 1).
Common curriculum content
First, faculty needed a clear understanding of what constituted curriculum content for all programs to align standards and to achieve equivalence in teaching and assessment. Programs differed in what they considered to be essential content and the depth at which that content should be taught or studied. Priorities also diverged between quaternary academic teaching hospitals, regional and rural hospitals, and Malaysian health care settings.
In 2009, we embarked on a process of curriculum mapping. We analyzed the original curriculum and all accessory documents for content and mapped them against the previously adopted learning objectives. In doing this, we distinguished between a content taxonomy and a common presentations taxonomy. The content taxonomy was based on common and important health conditions that were mutually exclusive, whereas the common presentations taxonomy included extensive overlap. We recognized that, whereas health conditions provided an efficient taxonomy for defining content, presentations, based on symptoms and signs, provided an important real-life entry point for clinical learning and reasoning.
To deal with these dual perspectives, we next developed a grid of common and important health conditions on one axis and common presenting problems on the other. We divided the health conditions into subgroups by discipline and then further into three categories to guide the depth of student learning and expectations of detailed knowledge in summative assessments. These categories, referred to as relevance ratings (R), were
* R1: conditions that are common and important for the practice of a starting intern and require extensive knowledge about etiology, pathophysiology, clinical presentation, investigation, and detailed management;
* R2: conditions that are moderately common or important for the practice of a starting intern and require some knowledge about clinical presentation, investigation, and principles of management; and
* R3: conditions that require sufficient knowledge about clinical presentation to be considered in the differential diagnosis of a common presenting complaint.
Then, we developed a grid using a modified Delphi technique with input from faculty from all four programs. This grid directly emerged from cross-program collaboration to produce a shared curriculum framework. The children’s health grid identifies the content of the children’s health curriculum, the depth of teaching and learning for each of the listed health conditions, and the assessment standards of the discipline required for all Monash students. In Chart 2, we provide a sample of this grid with a single subheading of the perinatal/neonatal health conditions and a selected group of the presenting problems.
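The two-axis grid described above lends itself to a simple lookup structure. The sketch below is purely illustrative, assuming invented condition and presentation names rather than the actual Monash grid entries; it shows how relevance ratings can be used to filter conditions to the depth required for a given learning or assessment task:

```python
# Hypothetical sketch of a condition-by-presentation grid with
# relevance ratings (R1-R3). Condition and presentation names are
# invented placeholders, not the actual Monash grid content.
grid = {
    ("neonatal jaundice", "the yellow baby"): "R1",
    ("biliary atresia", "the yellow baby"): "R3",
    ("bronchiolitis", "the breathless child"): "R1",
    ("cystic fibrosis", "the breathless child"): "R2",
}

def conditions_for_presentation(presentation, min_rating="R3"):
    """Return (condition, rating) pairs mapped to a presenting problem,
    filtered to ratings at or above the requested depth (R1 deepest)."""
    order = {"R1": 1, "R2": 2, "R3": 3}
    return sorted(
        (cond, r)
        for (cond, pres), r in grid.items()
        if pres == presentation and order[r] <= order[min_rating]
    )

print(conditions_for_presentation("the yellow baby", "R1"))
# → [('neonatal jaundice', 'R1')]
```

Filtering by rating mirrors how the grid guides both teaching depth and blueprint sampling: R1 conditions anchor detailed-management questions, whereas R3 conditions appear only as differential-diagnosis distractors.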
Identical summative assessment
The logistical burden of testing a dispersed group of students in a single, end-of-year assessment has challenged the existing examination format, which traditionally included a combination of extended matching questions (EMQs) and objective structured clinical examinations (OSCEs).
Our task, then, was to design a summative assessment that
1. Was blueprinted in detail against the common curriculum content for children’s health;
2. Relied on content experts from all four programs for the writing, editing, standard setting, and final approval of all assessment items;
3. Contained sufficient observations to provide high reliability for the examination as a whole and for the discipline-specific components;
4. Provided standardized preexamination tutorials on all OSCE stations and cross-site moderation of examination formats and examiner performances; and
5. Could be feasibly conducted for the entire cohort in a single, end-of-year assessment at multiple sites across the Monash system.
Blueprinting refers to the methods and processes in assessment design through which congruence is achieved between educational content, learning objectives, and learning experiences. In early 2010, we conducted a single common blueprinting process for all four disciplines, in which discipline representatives from all four programs participated.
In advance of this blueprinting process, we created an assessment template consisting of all examinable health conditions for children’s health, women’s health, family practice, and psychiatry on one axis and the 18 global learning objectives on the other. Faculty from all disciplines then populated this template with OSCE and EMQ assessment items at the intersections between the health conditions and learning objectives. For children’s health, the conditions list used for blueprinting was consistent with that used for the grid presented earlier (see Chart 2). The availability of relevance ratings for each health condition ensured that R1, R2, and R3 conditions were appropriately distributed in the final blueprint.
The Children’s Health Discipline Group, composed of representatives from all four programs, wrote, edited, and set standards using the Ebel method for all examination items in the final summative assessment. They also were responsible for moderating the examination process across sites, including examiner training and the exchange of examiners between sites.
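For readers unfamiliar with the Ebel method, its cut-score arithmetic can be sketched briefly. The panel classifies each item by difficulty and relevance, agrees on an expected percentage correct for a borderline examinee in each cell, and takes the mean across items as the pass mark. The cell values and item classifications below are invented for illustration, not the Monash panel’s actual judgments:

```python
# Minimal sketch of Ebel-method standard setting.
# Panel-agreed expected proportion correct for a *borderline*
# examinee, by (difficulty, relevance) cell -- values invented.
expected_correct = {
    ("easy", "essential"): 0.90, ("easy", "important"): 0.80,
    ("medium", "essential"): 0.70, ("medium", "important"): 0.60,
    ("hard", "essential"): 0.50, ("hard", "important"): 0.40,
}

# Each examination item as classified by the panel (hypothetical).
items = [
    ("easy", "essential"),
    ("medium", "essential"),
    ("medium", "important"),
    ("hard", "important"),
]

# Ebel cut score = mean expected performance over all items.
cut_score = sum(expected_correct[i] for i in items) / len(items)
print(f"pass mark: {cut_score:.0%}")  # prints "pass mark: 65%"
```

In practice the judgments of multiple panelists are averaged, but the principle is the same: the cut score is anchored to item difficulty and relevance rather than to cohort performance.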
Implementation and reliability
The increase in the number of students taking the assessment, the requirement for a single, end-of-year summative assessment for all disciplines, and the need to conduct this assessment concurrently at multiple sites limited the examination to a total of 16 OSCE stations (4 per discipline). Our reliability measures for the whole OSCE examination in this format were suboptimal and were even lower when calculated for individual disciplines. Combining the EMQ assessment with the OSCE examination, however, achieved high overall and discipline-specific reliability.
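The gain in reliability from pooling the EMQ and OSCE observations follows a general psychometric principle: reliability rises as comparable observations are added. The Spearman-Brown prophecy formula approximates this effect; the numbers below are illustrative only and are not the actual Monash reliability results:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened by
    length_factor (e.g., 2.0 = doubled) with comparable items,
    per the Spearman-Brown prophecy formula."""
    k, r = length_factor, reliability
    return k * r / (1 + (k - 1) * r)

# A 4-station discipline-specific OSCE with modest reliability...
r_four_stations = 0.45
# ...combined with enough comparable EMQ observations to triple
# the effective test length (illustrative numbers only):
print(round(spearman_brown(r_four_stations, 3.0), 2))  # prints 0.71
```

This is why a 4-station discipline component can be unreliable on its own yet contribute to a dependable composite score once the written items are included.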
To provide some reassurance that student performance in clinical year two was broadly equivalent across the Monash system, we conducted a one-way analysis of variance to compare students’ total scores across the four programs (see Table 1). A post hoc test using the Hochberg GT2 procedure for unequal sample sizes indicated that the average score at Gippsland (mean = 140.23, standard deviation = 8.92) was significantly lower than that in Malaysia (mean = 147.21, standard deviation = 9.99). The effect size, determined by the Pearson correlation coefficient r, was very low (r = 0.18), indicating that, although this difference was statistically significant, its practical impact was minimal. No other pairwise comparisons between locations were significantly different.
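For a pairwise comparison, the effect size r can be derived from the t statistic as r = √(t²/(t² + df)). The sketch below illustrates this calculation using the published means and standard deviations with hypothetical group sizes; the actual cohort sizes needed to reproduce r = 0.18 exactly are not reported in the text:

```python
import math

def effect_size_r(mean1, sd1, n1, mean2, sd2, n2):
    """Effect size r for a two-group comparison via Welch's t:
    r = sqrt(t^2 / (t^2 + df))."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean2 - mean1) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Published Gippsland and Malaysia summaries with *hypothetical*
# group sizes (60 and 120), for illustration only:
r = effect_size_r(140.23, 8.92, 60, 147.21, 9.99, 120)
print(round(r, 2))
```

The conventional benchmarks (r ≈ 0.1 small, 0.3 medium, 0.5 large) support the interpretation that the observed difference, while statistically significant, was of minimal practical importance.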
Clinical Year Two Governance Structure
The educational management of this distributed medical education program requires an understanding of the relationship among three key constructs (see Figure 1). The first construct is place, represented by the four distinct programs in separate geographic locations with different educational models informed by the special characteristics of those locations. The faculty at each location determine and manage the where and how of learning, which are closely interrelated. The second construct is curriculum, represented by the four medical disciplines, the educational content of which must be equivalently covered irrespective of location. The faculty within each discipline determine and manage the curriculum content, or the what of learning. The third construct is time, represented by the MBBS educational managers, who determine and manage the student clerkships, the durations of those clerkships (yearlong, semester-long, and/or block), and the temporal sequence of learning activities that must be accommodated by all disciplines and programs. This construct encompasses the who and when of learning.
Place: The where and how of learning
Key faculty involved in clinical year two represent their programs on the management committee, on each of the discipline working groups, and on the assessment working groups. Their participation is essential to ensure that the specific health service and community context in which their students train and the educational models that have been customized for their settings are considered in all aspects of educational management and ongoing curriculum design and assessment.
Curriculum: The what of learning
Faculty within the disciplines remain, appropriately, the custodians of the curriculum, its learning objectives, its content, and its summative assessment. Discipline leaders from the central program convene and chair discipline-specific groups with key leaders in their discipline from each of the programs. The ongoing task of these groups is to define and modify the learning objectives and content, to write items and set standards for summative assessment, to train examiners, and to standardize examination policies and procedures for their discipline across all four programs. In doing this, the discipline leaders share control of the curriculum content, redefine content based on health priorities in regional, rural, and offshore practice environments, and come to understand and accommodate alternative educational methods and modes of delivery. These discipline groups also provide an important forum for presenting and sharing the varied educational experiences of students in different health care settings. These discussions have generated new ideas and innovations that have enhanced the experience of students in all programs.
Time: The who and when of learning
Educational managers from the central program, together with their counterparts in each of the disciplines and programs, are responsible for implementing the agreed-on requirements across the system. Key among their support roles are student placement and the scheduling of the many learning and assessment activities that must occur consistently across the system. They also provide administrative support for the management committees and working groups through which the managers, disciplines, and programs interact and communicate. The clinical year two management committee is the overarching management and decision-making body for this specific year of study, with representation from all four programs, all four disciplines, and key central managers. The committee is cochaired by two central program discipline leaders from any two of the four disciplines.
Geographically distributed medical education programs are emerging around the globe, and all face the common challenges of managing program diversity and achieving educational equivalence across multiple settings. In this article, we have described how one Australian medical school adapted to the rapid expansion and geographic redistribution of its medical education program and how it managed this change while simultaneously promoting educational innovation at its new sites. Our emphasis has been on system-wide processes for assessment and governance, which we believe are relevant to all distributed medical education programs irrespective of their pedagogy or the specific configuration of their educational models. To our knowledge, these important processes have received surprisingly little attention in the medical education literature to date.
An important innovation and critical step in ensuring equivalence between programs is the design of a framework that defines and weights curriculum content for both staff and students at all sites. This learning and assessment framework guides the scope and depth of teaching and learning and provides the basis for the blueprinting of all assessment items across the system. It is also a shared tool for self-directed learning and is used as scaffolding for clinical diagnostic reasoning. A second critical step is the recognition that educational equivalence can be achieved by the tight management of learning objectives, curriculum content, and assessment without constraining the educational models favored by individual programs, thereby affording wide latitude for innovation that benefits the system as a whole. A third critical step is the conceptualization of the relationship between place, time, and curriculum, from which an effective system of governance for a very complex program can be designed.
Our distributed medical education programs have additional benefits that we are still discovering. Within this single educational system, faculty are now actively experimenting with pedagogical practices that we hope will positively impact our learning objectives and assessment techniques in the future. These practices retain many classical Flexnerian perspectives17 while also exploring the educational impacts of longitudinal clerkships18 and the associated discourses on experiential,19 work-based,20 and sociocultural learning theories, such as situated learning and communities of practice.21 The opportunities for educational research in our system, which already incorporates a wide range of theoretical educational perspectives,22 not only are numerous but also will require the definition of educational end points and assessment methods that go beyond the narrow standard of clinical competence on which our current assessments are largely based. Future research in this area must address how to measure student professional formation, understanding of health systems, and effective management of chronic health conditions, and must assess the effect of different educational contexts and models on student values and career choice.
Already, we have seen an increase in the number of medical graduates as a result of our distributed medical education programs. Yet, although evidence from the literature indicates that enrolling medical students from rural and underserved areas increases the chances of their returning to rural practice,23 we have yet to find definitive evidence that rural enrollment combined with extended immersion in rural training programs, as occurs at Monash, better distributes graduates to rural and underserved areas in Australia.24,25 However, researchers are collecting longitudinal data26 from the 17 rural clinical schools at 16 universities across Australia to examine the impact of programs such as those at Monash on workforce shortages. Although the alleviation of these workforce shortages remains a primary goal of these rural clinical schools, we have demonstrated here the many other important contributions that such programs can make to the health care system.
Acknowledgments: The authors wish to recognize the very important contributions of all members of the Children’s Health Discipline Group at Monash University to the governance and assessment principles described in this article.
Other disclosures: None.
Ethical approval: Not applicable.