INTRODUCTION
“Clinical reasoning skill” is the ability to integrate and apply knowledge to clinical practice and is recognized as a core competency by medical educators and practitioners. It also involves critically weighing evidence and reflecting upon the process used to arrive at a diagnosis. Clinical experience and a substantial knowledge base are necessary to reach high levels of clinical reasoning ability. The Medical Council of India, in its document on the competencies expected of postgraduate students, states the requirement: “to plan a strategy of laboratory investigation of a given case, given the relevant clinical history and physical findings in a logical sequence, with a rational explanation of each step; be able to correctly interpret the laboratory data of such studies, and discuss their significance with a view to arrive at a diagnosis.”[1,2]
Modi et al. recommend that clinical reasoning be taught at all levels of medical training and that assessment be done throughout the course. They reviewed the teaching and assessment of clinical reasoning skills (CRS) and documented that problems with clinical reasoning arise from inadequate knowledge, flaws in data gathering, and an improper approach to information processing.[3]
However, assessing knowledge and competence is challenging with existing assessment methods, and higher-order thinking ability is often not assessed.
Assessments have a strong effect on student learning and also provide feedback on the effectiveness of teaching-learning methods and programs. The tests used for assessment should be objective, reproducible (reliable), and valid. In addition, they should be accepted by students, feasible for faculty to develop, promote learning, and be cost-effective. An objective test is independent of the examiner and their feelings, attitudes, and motives. A reliable test provides approximately the same result when repeated. A valid test measures what it is supposed to measure. Traditional assessments in postgraduate medical education in India are mainly in the form of written essays and oral viva; they rely on expert judgment, pave the way for subjectivity, and yield scores that are not always reproducible. This has led to the need to adopt newer assessment methods such as multiple choice questions (MCQs) and short answer questions (SAQs), structured oral examinations, objective structured clinical examinations, and the script concordance test (SCT). Higher cognitive clinical reasoning ability can be assessed using scenario-based MCQs, key feature tests, or the SCT. Furthermore, workplace-based assessments can be used in clinical settings.[3,4] Ten Cate et al. identified six skills and habits of clinicians as basic requirements for learning reasoning skills in medicine: (1) learning to talk with a clinical vocabulary, (2) identifying the clinical problem and generating a problem representation, (3) organizing case information and building an illness script mental library, (4) contrastive learning, (5) identifying discriminating information for hypothesis-driven inquiry, and (6) diagnostic verification.[5]
van Bruggen et al. conducted a literature search of the ERIC and PubMed databases and cross-references, which yielded 30 articles that helped determine suitable question types for computer-based assessment of CRS. They found SCTs, extended matching questions, comprehensive integrative puzzles, modified essay questions/SAQs, long menu questions, MCQs, multiple true/false questions, and virtual patients suitable for this purpose. They concluded that, regardless of the question type chosen, patient vignettes should be used as a standard stimulus format to assess clinical reasoning, and recommended further research to ensure that the combination of these question types produces valid assessments and reliable test results.[6]
SCT builds on the principles of illness script theory.[7] Students use a 5-point Likert-like scale (from −2 to +2) to indicate whether a new piece of information supports or refutes a diagnostic, follow-up, or treatment decision for the disease hypothesis in the scenario. The students' responses are then compared with those of an “expert panel,” who provide the “gold standard” answer (the one on which most of the experts agree). Students get maximum marks for choosing the same option as the expert panel. SCT scores have also been shown to consistently increase with increasing level of training.[8,9]
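As a concrete illustration, the minimal sketch below implements the aggregate scoring scheme commonly described in the SCT literature, in which each Likert option earns partial credit in proportion to the number of experts who chose it and the modal answer earns full credit; the panel responses shown are hypothetical, not drawn from this study.

```python
# Sketch of aggregate SCT scoring: the credit for each Likert
# option (-2..+2) equals the number of panel experts who chose it
# divided by the count of the modal ("gold standard") option, so a
# student matching the majority earns full marks.
from collections import Counter

def item_credits(expert_responses):
    """Map each chosen option to a partial credit in [0, 1]."""
    counts = Counter(expert_responses)
    modal_count = max(counts.values())
    return {option: n / modal_count for option, n in counts.items()}

def score_item(student_answer, expert_responses):
    """Credit earned by one student answer on one SCT item."""
    return item_credits(expert_responses).get(student_answer, 0.0)

# Hypothetical 12-member panel answering one item:
panel = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 2, 2]
print(score_item(1, panel))   # 1.0  -> matches the modal answer
print(score_item(0, panel))   # ~0.43 -> partial credit
print(score_item(-2, panel))  # 0.0  -> no expert chose it
```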
For the development of an SCT, a panel of 10–15 expert members is used, the number depending on the stakes of the examination (whether formative, for learning, or for certifying decisions). Experts need to be selected based on acknowledged community standards of expertise in the given field. Standardized, evidence-based guidelines for the selection of experts are not available in the literature; the usual recommendations are formal certification in the field, a specified number of years of practical experience, and an established reputation for sound clinical acumen. Experts from multiple disciplines may be engaged, depending on the span of the test's content.[10] No prior preparation is required to complete an SCT. Experts usually find the problems in the test appealing because they encounter them as day-to-day challenges in practice. Full anonymity should be ensured for the panel members. The time given for completion of the test is the same as that given to the students. Many researchers have shown that answers to script scenarios correlate with the candidate's level of education and can predict future performance in oral examinations in terms of “clinical reasoning” ability. The focus of SCT is to explore the structure and organization of the knowledge base, not the amount of knowledge; it tests the students' ability to link facts together and apply them to context-rich, authentic clinical problems.[8,11,12] SCTs featuring 20–25 cases, each with 3–4 nested questions, are known to provide reliable test scores.[13] Currently in India, in the discipline of pathology, tools for the assessment of reasoning skills are neither commonly developed nor used, particularly in the field of hematology. Postgraduate students are thus assessed only by traditional methods that test knowledge and some psychomotor skills. With the curricula now shifting to the competency-based medical education (CBME) paradigm, which focuses on the learning outcomes needed for carrying out professional tasks, more authentic assessments that test the skills needed for clinical decision-making are required. Among the challenges recognized in successfully introducing a CBME curriculum is the need to develop and benchmark assessment methods, a process noted to have taken 30 years in Western countries.[14] Hence, we undertook the development of a method to test the critical thinking ability and CRS expected of postgraduate students in the new CBME paradigm. This study pilots the process of development (construction) and validation of an SCT covering a small topic, “Coagulation,” within hematology, so that sharing our experience through this publication will enable us and others outside the local context to replicate it for the remaining topics in pathology (and in other similar disciplines) that require the development of SCTs, thereby making postgraduate student assessment in pathology serve its purpose in the new CBME paradigm.
MATERIALS AND METHODS
This study was undertaken in the department of pathology of a university teaching hospital in Western India, which has a diagnostic hematology laboratory to which various kinds of bleeding diatheses requiring systematic workup are referred. Ethical clearance was obtained from the Institutional Ethics Committee (Ref. BVDU/MC/95 dated September 3, 2016). The SCT for the topic coagulation was constructed as shown in Figure 1, following the guidelines of the Association for Medical Education in Europe (AMEE) Guide 75.[8]
Figure 1: The process of construction of the SCT for the topic using AMEE Guide 75. SCT: Script concordance test
The test was prepared by AK after a focused group discussion with three subject experts. The purpose, format, and intended outcomes of the test were explained to the experts, and the subtopics to be covered in the test were unanimously agreed upon. Preliminary construct and content validity was established by three faculty members from the department of pathology who were not actively involved in the rest of the study. The case vignettes originally prepared were then written in the stimulus-response (if, then) format and subjected to validation by a panel of experts. This panel consisted of 12 clinical and laboratory hematologists with more than 10 years of experience and expertise in the field. The rubric of terminologies used by panel members for reference in choosing the right answers during SCT validation is shown in Table 1.
Table 1: Terminologies used for reference based on purpose in the SCT validation process[8]
An example of a scoring system for use by panel members to capture the observed variability of responses by clinical experts when faced with specific clinical situations is shown in Table 2.[9,15,16]
Table 2: Using a scoring scheme to identify the “gold standard” reasoning by the expert panel for a script concordance test item
An example of the framework used for the vignettes developed in this initiative for the SCT[17] is shown in Table 3.
Table 3: Stimulus-response framework: vignettes developed for SCT on the topic coagulation
As part of the process of further validating the test scripts, only items having optimal gold-standard responses were included in the revised version of the test; items with a unanimous single response and items with overly varied responses were eliminated. This resulted in an SCT with 22 vignettes and 66 items. For the pilot testing, the test was administered to 17 postgraduate students (Year 2 and Year 3) from pathology and medicine. The purpose of the test being developed was explained to the students, and it was ensured that they understood the format. The time given to complete the test was the same (45 min) for the students and the experts. The students' reactions and satisfaction were recorded after they took the test. The analysis was performed using the SCT score calculator.[8]
Cronbach's α was calculated to assess the internal consistency reliability of the SCT tool. Cohen's d and u3 were calculated for the effect size, together with overlap and percentage superiority analyses between the student and expert group scores. The items were also analyzed for their discriminating power, and group comparisons were made.
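For readers wishing to replicate the analysis, the sketch below shows one standard way to compute Cronbach's α and Cohen's d; the score arrays are illustrative placeholders, not the study data.

```python
# Sketch of the reliability and effect-size computations; the
# score values below are illustrative, not drawn from this study.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (n_examinees, n_items) matrix of item scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

def cohens_d(experts, students):
    """Standardized mean difference using the pooled SD."""
    a, b = np.asarray(experts, float), np.asarray(students, float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) +
                         (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Illustrative total scores for a 12-expert panel and 17 students:
rng = np.random.default_rng(0)
expert_totals = rng.normal(55, 4, size=12)
student_totals = rng.normal(47, 4, size=17)
print(f"Cohen's d = {cohens_d(expert_totals, student_totals):.2f}")
```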
RESULTS
Analysis performed on the 22 vignettes comprising 66 items revealed a statistically significant difference between the scores of the experts and the students. Cronbach's α was 0.86, and Cohen's d was 1.9 with 34% overlap, giving a probability of superiority of 91%. The average scores of the second- and third-year postgraduate students from medicine were higher than those of the students from pathology. Item analysis based on item-total correlation identified 23 items as “Bad” (having a negative item-total correlation), 11 items as “Fair,” and 32 items as “Good.” Upon reanalysis after removing the bad items, Cronbach's α increased to 0.91 and Cohen's d rose to 3.9, with an overlap of 5% and a probability of superiority reaching 100% [Table 4 and Figure 2].
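As a hedged illustration of the item screening step described above, the sketch below computes corrected item-total correlations and flags items with negative values as “Bad”; the 0.2 cut-off separating “Fair” from “Good” items is an assumption, as the study does not state its exact threshold.

```python
# Sketch of the item screening: items with a negative corrected
# item-total correlation are flagged "Bad" and removed before the
# test statistics are recomputed. The 0.2 "Fair"/"Good" cut-off is
# an assumption, not a threshold stated in the study.
import numpy as np

def corrected_item_total(item_scores):
    """Correlate each item with the total of the remaining items."""
    x = np.asarray(item_scores, dtype=float)
    total = x.sum(axis=1)
    return np.array([np.corrcoef(x[:, j], total - x[:, j])[0, 1]
                     for j in range(x.shape[1])])

def classify_items(correlations, fair_cutoff=0.2):
    return ["Bad" if r < 0 else "Fair" if r < fair_cutoff else "Good"
            for r in correlations]

# Usage: keep only the items not labeled "Bad", then recompute
# Cronbach's alpha and Cohen's d on the reduced test.
```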
Table 4: Analysis of Cronbach's α and Cohen's d
Figure 2: Use of Cohen's d between experts and students to eliminate bad SCT items from the total items developed. SCT: Script concordance test
Cohen's d is an effect size used to indicate the standardised difference between two means. Cohen's u3 describes the percentage of scores in the lower-mean group that are exceeded by the average score in the higher-mean group.
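For reference, under the usual assumption of normally distributed scores with equal variances, these quantities follow from d by the standard formulas below (Φ denotes the standard normal cumulative distribution function):

```latex
d = \frac{\bar{X}_{\text{experts}} - \bar{X}_{\text{students}}}{s_{\text{pooled}}},
\qquad u_3 = \Phi(d),
\qquad \mathrm{OVL} = 2\,\Phi\!\left(-\frac{|d|}{2}\right),
\qquad \Pr(\text{superiority}) = \Phi\!\left(\frac{d}{\sqrt{2}}\right)
```

Substituting the reported d = 1.9 gives an overlap of 2Φ(−0.95) ≈ 34% and a probability of superiority of Φ(1.34) ≈ 91%, consistent with the values reported above.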
After removal of the bad SCT items, the difference between the scores of the expert panel and the students increased significantly, as revealed by the t-test and P values [Table 5 and Figure 3].
Table 5: Post removal of bad script concordance test items, improvement in the difference between the scores of the expert panel and the students
Figure 3: Difference in the mean scores among expert panel and students before and after elimination of bad SCT items. SCT: Script concordance test
Feedback obtained on a 5-point Likert scale from the panel of experts and the students about the development and validation process revealed that they perceived the SCT thus developed as a useful tool for the assessment of clinical reasoning skills. Figure 4 shows the extent of the benefits they perceived from the process and its use.
Figure 4: Feedback from subject experts and students about their perception (on a 5-point Likert scale) of the process and the usefulness of SCT as a tool for assessment of clinical reasoning skills. SCT: Script concordance test
DISCUSSION
Pedagogical methods and strategies shift from the predominantly lower cognitive (recall) focus of undergraduate medical education to higher cognitive levels during the postgraduate period, with an emphasis on the skills linked to professional competence that make students job-ready for professional tasks as specialists. The predominantly self-directed learning mode adopted by postgraduate students needs frequent specialty-supervisor assessment and feedback on their competency progression from novice to proficient professional.[18] Assessment of CRS has gained importance in the new CBME era of outcomes-based medical education, particularly for measuring higher cognitive outcomes such as critical thinking. In the field of pathology too, it helps to assess students' competency progression and their attainment of the investigative competencies needed to help clinicians manage patients. Comparing the students' performance with that of the experts gives specialty supervisors the opportunity to provide valid feedback to the learners. The number of vignettes/items used varies across specialties, and the difficulty of constructing the test has also been reported.[9,19,20] The present study established evidence in support of the utility of SCT for assessing CRS among postgraduate students: the newly developed test clearly discriminated the students' performance from that of the experts.
Our detailed description of the process of development, optimization, and validation of an SCT for the area of coagulation will enable faculty, not only in pathology but also in many other specialties, to develop and include this higher-order critical thinking test in their respective fields, where the new CBME curriculum demands its inclusion as an appropriate method of student assessment for measuring progression toward the outcome competencies listed in the CBME curriculum document.
Feedback obtained from the students and expert faculty affirms the potential of the SCT format to encourage critical thinking and the application of knowledge among postgraduate medical students, helping them build the capabilities needed to enter clinical practice.
Our experience is consistent with the previously published literature supporting the role of SCT in the assessment of CRS. A number of studies suggest that the test delivers a valid assessment of clinical reasoning, and researchers advocate SCT as allowing a “new perspective on the measurement and teaching of cognitive skills.” Wan et al., in their study assessing the construct validity of SCT, compared the scores of 105 medical students, 19 junior registrars, and 13 experienced general practitioners, and noted that candidates' scores increased with increasing level of clinical experience, adding to the current evidence in the international literature in support of the construct validity of SCT. Other authors have recommended prospective longitudinal studies with larger sample sizes.[13,21,22,23]
In our experience, the construction of the SCT vignettes, although initially tedious, was rewarding, as it serves the purpose. Developing the test vignettes required systematic effort to identify case scenarios that could be converted into vignettes with a certain degree of ambiguity, to ensure an appropriate level of difficulty, and to finalize the construct. Engaging the subject experts in the project required a detailed explanation of the process and the probable utility of the test; the process provided a good networking opportunity within the fraternity. The appreciation of the educational experience by the subject experts and the students was encouraging and gratifying. Forming the panel of experts was a relatively easy task; however, given their busy daily schedules, the experts in some instances needed follow-up to complete the test within the desired timeline. The process of item analysis and test validation required guidance from a statistical expert (SB). Removal of the bad items not only increased the difference between the scores of the panel and the respondents but also increased the variance within the panel of experts as well as within the respondents, which clearly shows the importance of item analysis and test validation for the optimization of an SCT. We followed the process of test optimization described in the literature.[8,17,19]
One criticism of SCT concerns the use of the 5-point Likert-like scale, which can lead to misunderstanding and to candidates gravitating towards the middle of the scale (central tendency bias) to obtain better test results. We did not find this tendency to interfere in our process, since the scoring depends only on choosing the option selected by the majority of the experts.
Feedback was obtained from the experts and students using validated questionnaires to understand their comfort with the test format, their perceptions of the utility and difficulty level of the test, their preference for an online format, and whether additional or different teaching-learning methods would be needed for students to take objective assessments of reasoning skills. The feedback revealed a lack of prior awareness of SCT. The test format was appreciated by both groups, and they considered the educational experience satisfactory. These findings indicate that SCTs in different areas of hematology education would be welcomed by both groups. Since assessment drives learning, the use of SCTs is likely to positively impact teaching-learning methods and the acquisition of CRS.
Educational strategies that can be adopted include exposure to a variety of clinical cases, development of illness scripts, sharing of expert strategies for arriving at a diagnosis, and encouraging reflection.[3] Ten Cate et al. (2017), while discussing “Teaching Clinical Reasoning,” point out the need to teach beyond repeated practice on a similar range of problems and observation of others engaged in the process. They state that it is important to take the term “reasoning” seriously and to avoid overemphasis on the “correct” diagnosis, a differential diagnosis being a legitimate endpoint of the process. Teachers should ask questions that probe possible explanations of findings. They recommend that clinical reasoning be taught in a step-by-step fashion, with an emphasis on formulating a correct and comprehensive differential diagnosis.[5,24]
CONCLUSION
Although the process is tedious, it is possible to develop a reliable and valid SCT for coagulation by following a systematic process. The experts and students found the test useful for the assessment of CRS and were satisfied with the educational experience. The CRS of postgraduate students could be assessed and compared with those of experts using the SCT. Analyzing the items for their discriminating power, eliminating items with a negative item-total correlation, and revising the test are important parts of the validation process. The involvement of a statistical expert is essential for the successful development of an SCT. Developing, validating, and piloting the test provided opportunities for networking within the hematology fraternity.
Our data highlight the importance of developing and using objective, modern tests such as the SCT to assess and help enhance CRS among medical students. Clinicians and educators from different fields may also find our experience helpful in constructing SCTs in their areas of interest.
We propose the use of SCT during formative assessments, which can lead to the accumulation of more experience across institutions. Further assessment of the test's impact on reasoning skills and of the resulting changes in teaching-learning methods would be the subsequent step.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Acknowledgment
Contributions from the participating students and the panel experts are acknowledged. This project was undertaken as part of the FAIMER Fellowship program by Anjali J Kelkar (AJK) under the guidance of Thomas V Chacko (TVC) and Shital Bhandary (SB). Support and contributions from the PSG-FRI faculty and fellows and the MEU unit at BVDUMC are acknowledged.
REFERENCES
1. Guidelines for Competency Based Postgraduate Training Programme for MD in Pathology. Available from: https://www.nmc.org.in/wp-content/uploads/2019/09/MD-Pathology.pdf. [Last accessed on 2022 Nov 22].
2. Roberti A, Roberti Mdo R, Pereira ER, Costa NM. Script concordance test in medical schools in Brazil: Possibilities and limitations. Sao Paulo Med J. 2016;134:116–20.
3. Modi JN, Anshu, Gupta P, Singh T. Teaching and assessing clinical reasoning skills. Indian Pediatr. 2015;52:787–94.
4. Thiessen N, Fischer MR, Huwendiek S. Assessment methods in medical specialist assessments in the DACH region – Overview, critical examination and recommendations for further development. GMS J Med Educ. 2019;36:Doc78.
5. Ten Cate O, Custers EJ, Durning SJ. Principles and Practice of Case-Based Clinical Reasoning Education: A Method for Preclinical Students. Cham, Switzerland: Springer Nature; 2017.
6. van Bruggen L, Manrique-van Woudenbergh M, Spierenburg E, Vos J. Preferred question types for computer-based assessment of clinical reasoning: A literature study. Perspect Med Educ. 2012;1:162–71.
7. Humbert AJ, Miech EJ. Measuring gains in the clinical reasoning of medical students: Longitudinal results from a school-wide script concordance test. Acad Med. 2014;89:1046–50.
8. Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: From theory to practice: AMEE guide no. 75. Med Teach. 2013;35:184–93.
9. Duggan P, Charlin B. Summative assessment of 5th year medical students' clinical reasoning by script concordance test: Requirements and challenges. BMC Med Educ. 2012;12:29.
10. Custers EJ. The script concordance test: An adequate tool to assess clinical reasoning? Perspect Med Educ. 2018;7:145–6.
11. Ben Hamida E, Ayadi I, Marrakchi Z, Quinton A. The script concordance test as a tool to evaluate clinical reasoning in neonatology. Tunis Med. 2017;95:326–30.
12. Mzoughi K, Zairi I, Kedous MA, El Mhamdi S, Ben Dhiab M, Mghaieth F, et al. Script concordance test as a sanctionnal evaluation in cardiology. Tunis Med. 2018;96:330–4.
13. Aldekhayel SA, Alselaim NA, Magzoub ME, Al-Qattan MM, Al-Namlah AM, Tamim H, et al. Constructing a question bank based on script concordance approach as a novel assessment methodology in surgical education. BMC Med Educ. 2012;12:100.
14. Chacko TV. Moving towards competency-based education: Challenges and the way forward. Arch Med Health Sci. 2014;2:247–53.
15. Lubarsky S, Durning S, Charlin B. AM last page. The script concordance test: A tool for assessing clinical data interpretation under conditions of uncertainty. Acad Med. 2014;89:1089.
16. Dory V, Gagnon R, Vanpee D, Charlin B. How to construct and implement script concordance tests: Insights from a systematic review. Med Educ. 2012;46:552–63.
17. Fournier JP, Demeester A, Charlin B. Script concordance tests: Guidelines for construction. BMC Med Inform Decis Mak. 2008;8:18.
18. Chacko TV. Emerging pedagogies for effective adult learning: From andragogy to heutagogy. Arch Med Health Sci. 2018;6:278–83.
19. Lineberry M, Kreiter CD, Bordage G. Threats to validity in the use and interpretation of script concordance test scores. Med Educ. 2013;47:1175–83.
20. Ahmadi SF, Khoshkish S, Soltani-Arabshahi K, Hafezi-Moghadam P, Zahmatkesh G, Heidari P, et al. Challenging script concordance test reference standard by evidence: Do judgments by emergency medicine consultants agree with likelihood ratios? Int J Emerg Med. 2014;7:34.
21. Wan MS, Tor E, Hudson JN. Construct validity of script concordance testing: Progression of scores from novices to experienced clinicians. Int J Med Educ. 2019;10:174–9.
22. Middeke A, Anders S, Schuelper M, Raupach T, Schuelper N. Training of clinical reasoning with a serious game versus small-group problem-based learning: A prospective study. PLoS One. 2018;13:e0203851.
23. Goos M, Schubach F, Seifert G, Boeker M. Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen. BMC Surg. 2016;16:57.
24. Jost M, Brüstle P, Giesler M, Rijntjes M, Brich J. Effects of additional team-based learning on students' clinical reasoning skills: A pilot study. BMC Res Notes. 2017;10:282.