Original Article

Objective structured clinical examination for undergraduates

Is it a feasible approach to standardized assessment in India?

Bhatnagar, Kavita R; Saoji, Vivek A1; Banerjee, Amitav A2

Indian Journal of Ophthalmology: May–Jun 2011 - Volume 59 - Issue 3 - p 211-214
doi: 10.4103/0301-4738.81032

Introduction

There has been growing concern among medical educators about the quality of medical graduates trained in the various medical colleges of our country. Data based on faculty and student perceptions of the undergraduate curriculum indicate a need to lay more stress on practical skills during training and assessment.[1] It is well known that the way students learn is largely determined by the way they are assessed.[2] There is a need to rationalize the examination system by giving due emphasis to “formative” or internal assessment, and by supplementing the traditional long/short case examination with more valid and reliable instruments for the assessment of clinical skills, such as the Objective Structured Clinical Examination (OSCE), introduced in 1979 by Harden and Gleeson.[3] The OSCE allows the actual demonstration of applied knowledge and skills rather than testing knowledge alone.[4,5] The opportunity for formative as well as summative feedback makes the OSCE an excellent teaching tool as well.[6,7]

The OSCE is a reliable, established, and effective multistation test for the assessment of practical skills in an objective and transparent manner.[8] It also provides an opportunity to test students' attitudes and communication skills. The clinical competence to be tested is broken down into specific skills, each of which is tested separately. The examination is organized as a series of “stations” (usually 10–20; the more stations, the better the reliability) through which the candidates rotate. Each station focuses on a particular skill, such as taking a patient's history, examining a specific organ system, interpreting test results, or managing a patient. A checklist is prepared by breaking the skill being tested into its essential steps and the precautions to be observed. Each step done well and each precaution observed earns the student a score proportional to the importance of that step or precaution, with provision for negative scoring in the case of important omissions or mistakes. Objectivity in assessment is achieved by having each component tested at one fixed station by the same examiner and having the students rotate through several such stations.[3] The stations at which performance skills are tested (procedure stations) are “manned”: the examiner matches the student's performance against a checklist and assigns a score. There are usually 6–10 such stations. The answers at the “unmanned” stations are written on an answer sheet, submitted at the end of the examination, and matched against the checklist at the time of marking. Typically, a student spends 5 min at a station, which in practice means that approximately 10 students can be assessed in a period of 2 h.
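As a concrete illustration of this checklist-based marking, the short sketch below (in Python) scores one hypothetical manned station from a weighted checklist with negative marking for a critical omission. The checklist items, weights, and penalty value are invented for illustration and are not the checklists used in this study.

```python
# Hypothetical example of checklist-based OSCE station scoring (not the
# study's actual mark sheet). Each item carries a weight proportional to
# its importance; missing a critical item attracts a negative mark.

CHECKLIST = [
    # (item, weight, critical)
    ("Explains the procedure to the subject", 1.0, False),
    ("Positions the subject 6 m from the Snellen chart", 2.0, True),
    ("Tests each eye separately, occluding the other", 2.0, True),
    ("Records acuity in standard notation", 1.0, False),
]

CRITICAL_PENALTY = 1.0  # negative scoring for an important omission


def score_station(items_observed: set[str]) -> float:
    """Return the station score for the checklist items the examiner ticked."""
    score = 0.0
    for item, weight, critical in CHECKLIST:
        if item in items_observed:
            score += weight              # credit proportional to importance
        elif critical:
            score -= CRITICAL_PENALTY    # penalty for a critical omission
    return max(score, 0.0)               # do not let the station score go negative


# A candidate who performs everything except one critical step:
print(score_station({
    "Explains the procedure to the subject",
    "Positions the subject 6 m from the Snellen chart",
    "Records acuity in standard notation",
}))  # 1 + 2 + 1 - 1 = 3.0
```

In practice, the examiner simply ticks the items on a printed checklist; the station score is then the weighted sum of the ticked items minus any penalties.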

The OSCE is still at a very nascent stage in India. It was introduced a few years ago as the pattern of the practical (qualifying) examination for the Diplomate of National Board (DNB). It is not the pattern of examination for the master's and doctoral degrees in ophthalmology (MS and MD in ophthalmology) at the majority of universities in India, and it has not been introduced as a qualifying examination at the undergraduate level in any subject in Indian medical colleges. Undergraduate medical education is considered a continuum leading to postgraduate training and ultimately medical practice.[9] However, studies show a poor correlation between medical school performance and resident performance.[10] It has also been pointed out that medical students are often told that they will learn certain skills when they become residents, only to discover on becoming residents that they were expected to have learned those skills in medical school.[11] Considering this lacuna, we attempted to introduce the OSCE for undergraduates in the ophthalmology department of our medical college as part of the internal assessment, with a view to subsequent introduction into the university examination with time and experience.

Materials and Methods

A comprehensive 22-station OSCE was administered to 67 final year medical students as the end-of-attachment assessment in ophthalmology.

Faculty experts reviewed the standard textbook of ophthalmology and the Graduate Medical Education Regulations, 1997,[12] and on that basis the most common tasks that students at this level of training are expected to perform were listed by faculty consensus. We then defined the expected knowledge and skills for these tasks and based the OSCE on this list. Accordingly, we modified the clinical teaching for these students, and they were informed early during the posting about this change in the examination pattern.

The stations were designed to test a variety of problem-solving, technical, diagnostic, therapeutic, communication, examination, and history-taking skills. Table 1 shows the contents of each station and the domains it tested. For instance, station 1 used a healthy male volunteer on whom the candidate had to demonstrate the steps of distant visual acuity testing with a Snellen chart, in a fixed order. Another station had a healthy male playing the role of the son of a critically ill father; students had to motivate him toward eye donation on his father's behalf and educate him about myths and misconceptions about eye donation, thereby testing social and soft skills.

Table 1: Characteristics of individual stations

The class of 67 students was divided into three groups of 22, 22, and 23 students. The groups took the examination consecutively, one group immediately following the other on the same day. An examiner at each station assessed the candidate's performance against a prepared checklist. The checklist items were those deemed by the expert faculty to be critical to a competent performance; at least three experts were consulted in finalizing each station. The number of checklist items per station ranged from 7 to 10. Some of the stations were evaluated by senior postgraduates in ophthalmology trained by the authors, which had the dual benefit of removing any faculty bias and saving faculty time. The length of each station was 5 min. Feedback was obtained from the students as well as from the participating faculty.

Data analysis was done using SPSS version 15. Mean and SD scores for each station and for the examination overall were calculated. The reliability of the overall examination was examined by calculating Cronbach's alpha, an internal consistency statistic. The correlation coefficient, a measure of station discrimination, was calculated by correlating station scores with overall test scores using linear regression analysis. This added to the internal structure validity of the examination.[13]
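For readers without SPSS, the same two statistics can be reproduced with open-source tools. The sketch below (Python with numpy/pandas) computes Cronbach's alpha over a students × stations score matrix and each station's Pearson correlation with the overall test score as a discrimination index (for a single predictor, the standardized regression slope equals this correlation). The score matrix here is randomly generated placeholder data, not the study's results.

```python
# Minimal sketch: Cronbach's alpha and station-total correlations for an
# OSCE score matrix. The data are random placeholders (67 students x 22
# stations, marks 0-10), not the results reported in this study.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
scores = pd.DataFrame(
    rng.integers(0, 11, size=(67, 22)),
    columns=[f"station_{i + 1}" for i in range(22)],
)


def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of per-station variances / variance of total score)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")

# Station discrimination: Pearson correlation of each station with the total score
total = scores.sum(axis=1)
discrimination = scores.apply(lambda station: station.corr(total))
print(discrimination.sort_values(ascending=False))
```

A common refinement is the item-rest correlation, in which a station's own marks are excluded from the total before correlating, so that a station does not inflate its discrimination index by correlating with itself.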

Results

A total of 57 of the 67 students (85.07%) passed the OSCE (grade >60%). The scoring pattern agreed upon for the undergraduates, for an overall interpretation of their performance, is shown in Table 2. Forty of the 67 students (59.7%) scored above 70, 17 (25.37%) scored 60–69, and 10 (14.92%) scored between 40 and 59. Significant correlations between station score and total examination score were found for 19 of the 22 stations [Table 1]. The overall reliability of this OSCE, assessed by Cronbach's alpha,[14] was 0.778, which is very good for an end-of-attachment evaluation.

Table 2: Score interpretation and grading of the performance for competency level

Though the OSCE format was new and everyone was participating in an OSCE for the first time, the feedback indicated a high level of acceptance and a good correlation between self-rating and actual performance. Almost all participants agreed that the format should be used regularly for testing undergraduates.

Discussion

This pilot study was clear in its purpose and was meticulously planned and prepared in careful, systematic detail. The objectives were defined, blueprinting was done, and adequate attention was paid to the response process. The response process is defined here as evidence of data integrity such that all sources of error associated with test administration are controlled or eliminated to the maximum extent possible.[15]

We introduced the OSCE as a component of our end-of-attachment examination after the 1-month ophthalmology attachment for final-year medical students at our medical college in 2008. We found, as have others,[7,8] that this method of examining clinical skills is superior to the previously used oral examination in several ways and was preferred over the oral examination by both faculty members and students at our institution.

The OSCE has been shown to be an objective, valid, and reliable system for evaluating the clinical skills of students. All students are examined under similar conditions with identical problems. This contributes to a high degree of standardization, the lack of which is one of the main pitfalls of other examination formats, such as the traditional oral examination. The strength of the OSCE is the objectivity of the assessments obtained through the use of structured checklists. On the other hand, the time spent by those organizing an OSCE and the financial costs involved in implementing it substantially exceed those associated with a more traditional oral examination, and this is a major limiting factor in introducing the format.

An important feature of any examination process is the ability to reliably differentiate between the performance levels of candidates. Reliability is an important aspect of an assessment's validity evidence. It refers to the reproducibility of the scores on the assessment: high score reliability indicates that if the test were repeated over time, examinees would receive about the same scores on retesting as they received the first time. Unless assessment scores are reliable and reproducible, as in an experiment, it is nearly impossible to interpret their meaning; thus, validity evidence is lacking.[16,17] The overall reliability of the OSCE used here was 0.78, whereas the lower limit of reliability recommended for making pass–fail decisions is 0.80.[18,19] As this OSCE was part of an end-of-attachment examination, a Cronbach's alpha of 0.78 can be considered good.

The student feedback was well received by the faculty, and improvements were planned for the stations with low validity and internal consistency scores, as well as for those that students found difficult or unclear.

In conclusion, we believe that the OSCE we have implemented has contributed substantially to our ability to assess the clinical competence of students in an objective, reliable, and valid manner, albeit with some limitations. The OSCE tests clinical competence in bits and does not look at the patient in totality; to overcome this limitation, long cases may be combined with the OSCE. The OSCE needs considerable time and effort to plan and organize, so sustaining the motivation of the faculty might pose a challenge. The OSCE will also be an expensive proposition when each batch of 100–150 students, on average, must be examined, although the cost may come down with the development of OSCE station banks and a dedicated physical setting for conducting the examination. As the same patient is seen by a large number of students, the process may be taxing for the patient, compromising their cooperation. Finally, though we have reported certain statistical associations, we concede that these were based on a small sample; further studies are therefore required to corroborate our findings.

It requires determination and zeal on the part of faculty members to switch from the traditional method of examination to the more rational, objective, and methodical OSCE for undergraduates as well as for postgraduates (MD/MS) in ophthalmology.

Acknowledgments

We gratefully acknowledge Dr (Col) A Bhardwaj, Dr (Lt Col) N Ramchandran, Dr VN Kulkarni, and Dr D Muzumdar, all of whom are faculty at the Department of Ophthalmology, Bharati Vidyapeeth University Medical College, Pune, India, for their wholehearted support and active participation in the project. We also thank Dr Payal K Bansal, Associate Professor, Department of Medical Education, Maharashtra University of Health Sciences, Pune, India, for her excellent help and valuable inputs in carrying out this project.

References

1. Sood R, Adkoli BV. Medical education in India: Problems and prospects. J Indian Acad Clin Med. 2000;1:210-2.
2. Sood R, Paul VK, Mittal S, Adkoli BV, Sahni P. Assessment in medical education: Trends and tools. New Delhi: KL Wig CMET, AIIMS; 1995.
3. Sood R. A rational approach for the assessment of clinical competence of undergraduate medical students. J Assoc Physicians India. 1999;47:980-4.
4. Varkey P, Natt N. An Objective Structured Clinical Examination (OSCE) for the assessment of systems based practice and practice based learning and interpretation. ACGME Competency Assessment; 04-14-2008.
5. Varkey P, Reller MK, Smith A. An experiential interdisciplinary quality improvement education initiative. Am J Med Qual. 2006;21:317-22.
6. Varkey P, Natt N. The objective structured clinical examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33:48-53.
7. Kaufmann DM, Mann KV, Muijtjens AM, van der Vleuten CP. A comparison of standard setting procedures for an OSCE for undergraduate medical education. Acad Med. 2000;75:267-71.
8. Agarwal A, Batra B, Sood AK, Ramakantan R, Bhargava SK, Chidambaranathan N, et al. Objective structured clinical examination in radiology. Indian J Radiol Imaging. 2010;20:83-8.
9. Wilkinson TJ, Frampton CM. Assessing performance in final year medical students: Can a postgraduate measure be used in an undergraduate setting? Med Educ. 2003;37:233-40.
10. Kahn MJ, Merril WW, Anderson DS, Szerlip HM. Residency program directors' evaluations do not correlate with performance on a required 4th-year objective structured clinical exam. Teach Learn Med. 2001;13:9-12.
11. Lypson ML, Fronha JG, Gruppen LD, Wooliscraft JO. Assessing residents' competencies at the baseline: Identifying the gap. Acad Med. 2004;79:564-70.
12. Medical Council of India. Salient features of regulations on graduate medical education, 1997 [accessed 2008 Mar 10]. Available from: http://www.mciindia.org/know/rules/rules_mbbs.htm
13. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830-7.
14. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examinations. Br Med J. 1975;1:447-54.
15. Nicole M, Freeth D. Assessment of clinical skills: A new approach to an old problem. Nurse Educ Today. 1998;18:601-9.
16. Harden RM, Gleeson FA. Assessment of clinical competence using an Objective Structured Clinical Examination (OSCE). Med Educ. 1979;13:41-54.
17. Cohen R, Reznick RK. Reliability and validity of the objective structured clinical examination in assessing surgical residents. Am J Surg. 1990;160:302-5.
18. Bark H, Cohen R. Use of an objective structured clinical examination as a component of the final-year examination in small animal internal medicine and surgery. J Am Vet Med Assoc. 2002;221:1262-5.
19. Cronbach LJ, Gleser GC, Nanda H. The dependability of behavioral measurements: The theory of generalizability for scores and profiles. New York: John Wiley and Sons; 1972.

Source of Support: Nil

Conflict of Interest: None declared.

Keywords: Objective Structured Clinical Examination; ophthalmology; undergraduate

© 2011 Indian Journal of Ophthalmology | Published by Wolters Kluwer – Medknow