Academic Medicine, January 2002, Volume 77, Issue 1
Research Reports

New Medical Licensing Examination Using Computer‐based Case Simulations and Standardized Patients

Guagnano, Maria Teresa MD; Merlitti, Daniele MD; Manigrasso, Maria Rosaria MD; Pace-Palitti, Valeria MD; Sensi, Sergio MD


Author Information

Dr. Guagnano is associate professor of internal medicine, Dr. Merlitti is a computer assistant, Dr. Manigrasso and Dr. Pace-Palitti are medical assistants, and Dr. Sensi is senior professor and chief of the internal medicine section and head of the clinical education section, Department of Internal Medicine and Aging, University “G. D'Annunzio” Chieti, Italy.

Correspondence and requests for reprints should be addressed to Prof. Sensi, Clinica Medica, Policlinico Colle dell'Ara, Via Vestini, 66013 Chieti Scalo, Italy; telephone and fax: 39-0871-551562; e-mail: pdirenzo@unich.it.

The authors acknowledge funding support from the Ministry of the University, Scientific Research and Technology, for postgraduate medical education research, and from internal medicine residency resources. They thank the rector of Chieti University, the dean of the Medical School, and the president of the Board of Physicians for their contributions and support. They also thank the members of the Internal Medicine, Surgery, and Gynecology-Pediatrics Faculty Commissions for their collaboration and useful suggestions, which were instrumental in developing a national MLE based on the multimedia integrated pilot project.


Abstract

Purpose: To evaluate a new method, used for the first time in Italy, of administering the Medical Licensing Examination (MLE).

Method: Eighty medical school graduates taking the MLE were studied. The MLE was based on the Multimedia Integrated Pilot Project (MIPP), a single two-step examination that combines computer-based case simulations (step 1) and clinical encounters using standardized patients (step 2). Step 1 assessed mainly clinical knowledge and decision-making skills. Step 2 measured the ability to obtain a focused history, perform a relevant physical examination, prioritize a differential diagnosis and management plan, and provide patient education or counseling. The correlations between the total MIPP scores and the exam scores students obtained during the six-year medical school curriculum were evaluated.

Results: The step 1, step 2, and total MIPP scores were moderately correlated with the curriculum scores. A moderate correlation also existed between the step 1 and step 2 scores.

Conclusions: The MIPP is a good tool for assessing clinical competence. Internationally, computer-based and standardized-patient assessments are being used more often in licensing examinations. Continued use of this method could improve medical graduates' performances.

In Italy, the Medical Licensing Examination (MLE) is governed by a law enacted in 1956,1 and it is offered twice a year (spring and autumn) at each national university. Examinees must be tested, through the presentation of relevant clinical cases, by three examination commissions: internal medicine, general surgery, and pediatrics and gynecology. Italian clinical skills assessment is based on direct clinical encounters with three real patients presenting clinical problems in internal medicine, general surgery, obstetrics-gynecology, and pediatrics. After the encounters, the examinee must provide a written report with the patient's history, physical examination findings, and an initial management plan, and carry out a bedside discussion with the respective examination commission.

In the 1999 spring session of the MLE, Chieti University implemented computer-based case simulations and standardized patients (SPs) through the Multimedia Integrated Pilot Project (MIPP).2 The MIPP is a single two-step examination that combines computer-based long and short clinical case simulations with clinical encounters using SPs to assess examinees' clinical competence.

The MIPP is founded on the principle that clinical competence comprises knowledge, gestures, behaviors, and decision-making skills in patient management, taking into account cost-benefit considerations.3 Not all components of clinical competence can be assessed with a single tool. Compared with the previous method of administering the MLE, the MIPP may measure medical school graduates' levels of clinical competence in history taking, physical examination, and interpersonal skills more effectively and objectively, as well as measure their abilities to produce an initial diagnostic workup and a management plan.


METHOD

Protocol

In the first examination session of 1999, we examined 80 medical school graduates who had completed the curriculum during the 1997-98 academic year, including the six-month postgraduate internship period. The group of examinees was relatively homogeneous: 73 were graduates of our university, four were graduates of other Italian universities, and three were foreign medical graduates. Their mean age was 25 years (SD: 1), and 42 were women. Their lengths of study ranged from six to eight years: 15 had experienced a one-year delay, and five a two-year delay. The medical school curriculum is nationally regulated; students must pass 11 examinations in the preclinical period (six semesters of basic sciences) and 25 in the clinical period (six semesters of clinical sciences).

Instrument

The MIPP was composed of two parts.

Step 1: computer-based long and short clinical case simulations

Four long simulated clinical cases or problems and four short ones were presented simultaneously to 40 examinees at a time, in two computerized examination rooms, ensuring that all examinees encountered the same level of difficulty. The long clinical cases were selected from a database of 120 simulated clinical cases that were periodically updated and divided into different sections (e.g., cardiology, pneumonology, surgery, dermatology). Each of these computer-based case simulations provided the examinees with information about a patient's history, physical examination findings, and initial laboratory results. The examinees were then shown a succession of numbered decision options and were required to rate them, assigning each a score that reflected its appropriateness for patient management (diagnosis and therapy). The computer program also presented tutorial points to the examinees, mainly designed to test clinical knowledge and reasoning using true/false and multiple-choice questions (MCQs). The program, acting as a tutor, delivered questions that the examinees had to answer before proceeding. The MCQs involved proposed differential diagnoses; interpretation of such diagnostic studies as x-rays, electrocardiographic tracings, and ultrasonograms; and physical findings.4 Each long clinical simulation included between 35 and 55 decision options and five to 15 tutorial points, while each short case simulation contained five to eight decision options. The computer-based examination was evaluated with unweighted scores for the tutorial points (one point for each item answered correctly and no points for a wrong answer) and with weighted scores for the decision options; the weights were assigned according to diagnostic usefulness, risk, and cost-effectiveness.
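The article does not publish the scoring formula itself, so the Python sketch below only illustrates the general idea of combining unweighted tutorial points with weighted decision options. The data model, the exact-agreement rule for crediting a decision option, and the example weights are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class DecisionOption:
    """One numbered patient-management option in a simulated case (hypothetical model)."""
    examinee_rating: int  # appropriateness rating assigned by the examinee
    key_rating: int       # rating assigned by the case authors
    weight: float         # weight reflecting diagnostic usefulness, risk, and cost-effectiveness

def score_case(tutorial_answers, tutorial_key, decision_options):
    """Score one simulated case: unweighted tutorial points plus weighted decision options.

    Tutorial points (true/false and MCQ items) earn one point per correct answer and
    none for a wrong answer; a decision option contributes its weight when the
    examinee's rating matches the key. The matching rule is an assumption.
    """
    tutorial_score = sum(1 for given, key in zip(tutorial_answers, tutorial_key) if given == key)
    decision_score = sum(opt.weight for opt in decision_options
                         if opt.examinee_rating == opt.key_rating)
    return tutorial_score + decision_score

# Example: three tutorial points and two decision options from a short case
options = [DecisionOption(examinee_rating=2, key_rating=2, weight=1.5),
           DecisionOption(examinee_rating=0, key_rating=3, weight=2.0)]
print(score_case(["T", "B", "F"], ["T", "C", "F"], options))  # -> 3.5
```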

Step 2: clinical encounters with standardized patients

The second step of the MIPP examination consisted of four clinical encounters with SPs presenting common problems involving internal medicine, surgery, gynecology, and pediatrics. The examinees were given background information before each SP encounter, i.e., a description of the chief complaint. They were then required to obtain a focused history, perform a physical examination, and prepare post-encounter notes offering a differential diagnosis and a diagnostic workup plan. Each case required 25 minutes: 15 minutes for the SP encounter and ten minutes for the examinee to complete the post-encounter notes.5

For each case, the examination commission included three faculty physicians who directly observed the encounters with SPs. These observers were faculty physicians working in teaching hospitals who had been trained as observers and had participated in the SP examinations administered at the university since 1996. Two of the three faculty observers independently checked off items on a history and physical examination checklist developed by a panel of faculty members. The checklist contained a fixed number of items covering the information that should be obtained during a good medical interview. The faculty observers also assessed interpersonal skills through a standardized questionnaire6 that evaluated an examinee's interviewing skills, counseling skills, rapport, and manner; this questionnaire was rated on a five-point Likert scale.

For the SP examinations, scores were determined by counting the history and physical examination items each examinee correctly completed, relative to the total number of checklist items. For scoring purposes, only the items checked off by both observers were counted. At the end of the examination session, the entire commission evaluated the post-encounter notes; the note score was obtained by summing the correct keywords used for the history, physical examination, differential diagnosis, and diagnostic plan.
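These checklist and note-scoring rules lend themselves to a simple computation. The following sketch assumes each observer's checklist is represented as a set of item identifiers and that post-encounter notes are matched against a case-specific keyword list; both representations, and the substring matching, are illustrative assumptions rather than the commission's actual procedure.

```python
def sp_checklist_score(observer_a: set, observer_b: set, total_items: int) -> float:
    """Fraction of checklist items credited to the examinee.

    Per the scoring rule described above, only items checked off by both
    independent observers are counted.
    """
    credited = observer_a & observer_b  # items on which the two observers agree
    return len(credited) / total_items

def note_score(note_text: str, keywords: set) -> int:
    """Count correct keywords (history, physical exam, differential diagnosis, plan).

    A case-insensitive substring match stands in for the commission's manual review.
    """
    text = note_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

# Hypothetical example with a 20-item checklist
print(sp_checklist_score({1, 2, 3, 5, 8}, {1, 2, 3, 8, 9}, total_items=20))  # -> 0.2
print(note_score("Suspected acute cholecystitis; plan abdominal ultrasound",
                 {"cholecystitis", "ultrasound"}))                            # -> 2
```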

Analyses of MIPP Scores

The total MIPP score combined the computer-simulation step 1 score and the standardized-patient step 2 score. The highest possible score (step 1 + step 2) was 110/110, and ministry regulations set the pass-fail cutoff at 66/110. The total curriculum score was the mean score for all 36 examinations each examinee had taken; the preclinical and clinical scores were the mean scores for the 11 preclinical and 25 clinical examinations, respectively. The relationship between the MIPP examination score and the curriculum score was evaluated with Pearson's correlation coefficient. The data were analyzed using a standard statistical software package.
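As a concrete illustration of this analysis, the sketch below combines the two step scores on the 110-point scale, applies the 66/110 cutoff, and computes Pearson's correlation with curriculum scores. The equal-weight sum of the two steps and all numbers shown are assumptions for illustration; the article reports only the scale, the cutoff, and the use of Pearson's coefficient.

```python
import numpy as np
from scipy import stats

PASS_CUTOFF = 66  # pass-fail cutoff of 66/110 set by ministry regulations

def total_mipp_score(step1: float, step2: float) -> float:
    """Combine the step 1 and step 2 scores on the 110-point scale.

    An equal-weight sum is assumed here; the article does not give the weighting.
    """
    return step1 + step2

def passed(total: float) -> bool:
    """Apply the ministry's pass-fail cutoff."""
    return total >= PASS_CUTOFF

# Correlation between total MIPP scores and mean curriculum scores (illustrative data)
mipp_totals = np.array([88, 72, 95, 66, 101, 79])
curriculum_means = np.array([27.1, 25.4, 28.9, 24.8, 29.3, 26.0])
r, p = stats.pearsonr(mipp_totals, curriculum_means)
print(f"r = {r:.2f}, p = {p:.3f}")
```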


RESULTS

Table 1 shows that the step 1, step 2, and total MIPP scores were moderately correlated with the preclinical, clinical, and total curriculum scores (p <.001, p <.01, and p <.001, respectively). Furthermore, a moderate correlation existed between step 1 and step 2 scores (r =.44, p <.001). We also compared the checklists checked off by the faculty observers to assess the method's reliability. The faculty observers reliably recorded the information obtained from the history, the physical examination maneuvers performed, and the initial diagnostic and management plan, with inter-rater (first observer and second observer) reliability coefficients ranging from .8 to .9. Less agreement occurred with respect to communication-skills scores, with inter-rater reliabilities ranging from .7 to .8. Of the 80 examinees, seven (8.75%) failed to reach the pass-fail cutoff score.
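The article does not name the reliability statistic used for the two observers' checklists. Cohen's kappa, sketched below for binary checked/unchecked items, is one common choice for two-rater agreement; the choice of statistic and the data shown are assumptions for illustration only.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over binary (checked/unchecked) checklist items."""
    n = len(ratings_a)
    observed = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    p_a, p_b = sum(ratings_a) / n, sum(ratings_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Two observers' checklists for one encounter (1 = item checked off); illustrative data
obs1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
obs2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(obs1, obs2), 2))  # -> 0.52
```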


DISCUSSION

The MIPP appears to offer many innovations compared with previous MLE administrations, and better assessment methods are likely to improve the clinical examination performances of the examinees.7 We documented a good reliability coefficient between the ratings of the two observers. Testing the validity of a method for the assessment of clinical competence can be difficult because no evaluation method provides a “gold standard” for comparison.8,9 With SP encounters, global ratings by a panel of physician-faculty observers could represent a reliable criterion.10,11 The moderate correlation we found between the examinees' curriculum scores and their total MIPP scores on this new examination could be an acceptable validity criterion. Furthermore, the only moderate correlation between step 1 and step 2 scores could be explained by the fact that none of the graduates had received previous training with SPs; clinical encounters with SPs are not taught during the undergraduate period, only during the internship. In fact, students who had been trained in clinical encounters with SPs performed better on the SP examination the following year (unpublished data). We hope such correlation coefficients will improve in our future administrations of the MIPP.

In 1989, eight key North American medical organizations established the Clinical Skills Assessment Alliance (CSAA) to promote and refine the incorporation of SPs and computer simulations into standard assessment systems.12 Our previous experience using the MIPP for the Internal Medicine Residency admission examination, and now for the administration of the MLE, convinced us to incorporate the CSAA's suggestions. Our MIPP-based MLE was not a mere didactic experiment but a legal certifying examination valid for licensure, and the MIPP was conducted in accordance with the rules of the Italian University Ministry.

Other recent developments also prompted our development of the MIPP. In 1998 the Educational Commission for Foreign Medical Graduates coordinated a pilot study with seven Italian medical schools in which 173 students underwent clinical competence assessment using SPs, as part of an international study of clinical skills assessment.6 The participating students rated the study favorably: 68% regarded including SP encounters in the MLE as appropriate.13 More recently, the Federation of State Medical Boards and the National Board of Medical Examiners, the sponsoring organizations of the United States Medical Licensing Examination (USMLE), have been planning to implement computer-based testing for the USMLE. MCQs and true/false questions are no longer administered using paper and pencil but are delivered by computer, and computer-based case simulations are also part of the examination. Furthermore, the strategic plan for the USMLE includes the development of a standardized-patient test to assess clinical skills. This confirms an international tendency toward adopting a combination of these new assessment methods, as the CSAA suggested years ago.

An ideal clinical competence assessment tool should effectively, reliably, and objectively measure all the components of clinical competence: basic knowledge and performance, the latter including skills in history taking, performing a physical examination, formulating the most likely diagnosis, establishing a management plan, and communication and interpersonal relations.12 Presently, many medical school faculty members are aware that, for assessment purposes, real-life encounters or bedside oral examinations cannot be duplicated and are unreliable, subjective, unethical for the patient, and time-consuming. Computer-based case simulations can effectively assess skills in differential diagnosis, diagnostic test utilization, management planning, and basic knowledge, but they cannot evaluate the candidate's history taking, physical examination, and communication skills. We believe the combined use of computer-based simulations and SPs can provide a more objective, reproducible, and pertinent assessment of both knowledge and performance.


REFERENCES

1. Esami di Stato di Abilitazione all'esercizio delle professioni. In: Istituto Poligrafico e Zecca dello Stato (ed). Gazzetta Ufficiale della Repubblica Italiana No. 321. Roma, Italy, 1956.

2. Sutnick AI, Friedman M, Stillman PL, Norcini JJ, Wilson MP. International use of standardized patients. Teach Learn Med. 1994;6:33–5.

3. Miller GE. Conference summary. Acad Med. 1993;68:471–4.

4. Sensi S, Merlitti D, Murri R, Pace-Palitti V, Guagnano MT. Evaluation of learning progress in internal medicine using computer-aided clinical case simulation. Med Teach. 1995;17:321–5.

5. Stillman PL, Regan MB, Haley HL, Norcini JJ, Friedman M, Sutnick AI. The use of a patient note to evaluate clinical skills of first-year residents who are graduates of foreign medical schools. Acad Med. 1992;67(10 suppl):S57–S59.

6. Ziv A, Friedman Ben-David M, Sutnick AI, Gary NE. Lessons learned from six years of international administrations of the ECFMG's SP-based clinical skills assessment. Acad Med. 1998;73:84–91.

7. Petrusa ER, Blackwell TA, Rogers LP, Saydjari C, Parcel S, Guckian JC. An objective measure of clinical performance. Am J Med. 1987;83:34–41.

8. Stillman PL, Swanson DB, Smee S, et al. Assessing clinical skills of residents with standardized patients. Ann Intern Med. 1986;105:762–71.

9. Sutnick AI, Stillman PL, Norcini JJ. Pilot study of the use of the ECFMG clinical competence assessment to provide profiles of clinical competencies of graduates of foreign medical schools for residency directors. Acad Med. 1994;69:65–7.

10. Battles JB, Carpenter JL, McIntire DD, Wagner JM. Analyzing and adjusting for variables in a large-scale standardized-patient examination. Acad Med. 1994;69:370–6.

11. Swartz MH, Colliver JA, Bardes CL, Charon R, Fried ED, Moroff S. Validating the standardized-patient assessment administered to medical students in the New York City Consortium. Acad Med. 1997;72:619–26.

12. Miller GE. The Clinical Skills Assessment Alliance. Acad Med. 1994;69:285–7.

13. Sensi S, Friedman Ben-David M, Guagnano MT, et al. Assessment of clinical competence of medical school graduates in Italy with standardized patients—the opinion of the examinees. Rec Progr Med. 1998;89(11):575–7.


© 2002 Association of American Medical Colleges
