
Empirical Investigations

Are Simulation Learning Objectives Educationally Sound? A Single-Center Cross-Sectional Study

Hui, Madeleine BHSc; Mansoor, Muqtasid; Sibbald, Matthew MD, PhD, FRCPC

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: April 2021 - Volume 16 - Issue 2 - p 105-113
doi: 10.1097/SIH.0000000000000507

INTRODUCTION

Leading institutions in simulation-based learning, notably the Royal College of Physicians and Surgeons of Canada (Royal College) and the International Nursing Association for Clinical Simulation and Learning (INACSL), emphasize the importance of well-designed learning objectives that define educational goals and help learners assess their proficiency and achievement of desired outcomes.1–3 The Royal College's "Standards for Accredited Simulation Activities (Section 3)" include educational standards that simulation programs are expected to uphold.1 Developers of simulation programs must perform a needs-based assessment of their target audience, create learning objectives that address the identified needs, and describe methods that enable the audience to participate in the activity.1 The Royal College highlights the importance of carefully selected verbs in the development of learning objectives and acknowledges the use of Bloom's Taxonomy to achieve this.4 Within Bloom's Taxonomy, certain verbs are associated with a specific level of the framework and can inform participants about the sophistication of learning expected of them.4 Similar expectations for simulation-based learning objectives appear in the INACSL's accreditation guideline, "Standards of Best Practice: Simulation."2,3 These standards identify learning objectives as guiding tools for assessing achievement of simulation outcomes.2,3 The INACSL cites both Bloom's Taxonomy and the SMART criteria as theoretical frameworks essential to formulating clearly defined, measurable learning objectives.2,3 It further cautions against the use of vague verbs, henceforth referred to as inappropriate verbs.2

Among the many competing theoretical frameworks within the educational sector, Bloom's Taxonomy and SMART criteria are widely recognized and are often used as standard educational guidelines.5,6

  • a) Bloom's Taxonomy: Bloom's Taxonomy was developed to classify educational goals and to enhance curriculum development and evaluation.7,8 It divides targeted skills and abilities into 6 categories: knowledge, comprehension, application, analysis, synthesis, and evaluation, with each successive level requiring a higher order of cognitive processing (Table 1).9,10,14
  • b) SMART criteria: The SMART criteria highlight 5 core concepts: specific, measurable, attainable, realistic, and timely (Table 1).11 The implementation of SMART goals guides educators in writing more focused learning objectives, which heighten motivation and commitment and elevate the likelihood of goal attainment.15
  • c) Inappropriate verbs: A set of inappropriate verbs has also been identified; these verbs are ill-advised for use in constructing learning objectives because of their subjectivity and vagueness in measuring outcomes (Table 1).10,13
TABLE 1 - Theoretical Frameworks for Learning Objectives
Framework | Category | Definition | Key Words | Examples From Study
Bloom's Taxonomy | Knowledge | Knowledge is the foundational skill required to retain specific information, including facts, definitions, and methodology. This stage is characterized by simple retrieval or recognition of information.9 | Recall, identify, recognize, acquire, distinguish, state, define, name, list, label, reproduce, order.10 | "Review abdominal examination and abdominal pain history." "Airway skills: recognize a patient in respiratory distress."
Bloom's Taxonomy | Comprehension | Comprehension is achieved when learners can paraphrase information, classify items, compare information, or explain concepts to others.9 Comprehension is more complex than simple recall, as it helps students reinforce previously learned concepts.9 | Translate, extrapolate, convert, interpret, abstract, transform, select, indicate, illustrate, represent, formulate, explain, classify, comprehend.10 | "Describe the nursing care for a client who has a mechanical pump." "Understand expectations/goals for MF2 clinical skills."
Bloom's Taxonomy | Application | Application refers to a learner's ability to apply acquired knowledge or skills in new situations. Application is achieved when the student can demonstrate a skill without instructional prompting when confronted with a task.7 | Apply, sequence, carry out, solve, prepare, operate, generalize, plan, repair, explain, predict, demonstrate, instruct, compute, use, perform, implement, employ.10 | "Demonstrate the ability to perform appropriate history-taking and physical examination for an emergency medicine patient." "Appropriately apply the ACLS algorithm."
Bloom's Taxonomy | Analysis | Analysis can be interpreted as critical thinking.9 It is the ability to understand the constituent components of a whole, as well as the relations and interactions among the individual parts.7 | Analyze, estimate, compare, observe, detect, classify, discover, discriminate, explore, distinguish, catalog, investigate, break down, order, determine, differentiate, dissect, contrast, examine, interpret.10 | "Learners are able to describe and demonstrate clinical judgment and clinical reasoning, and the nursing process, and how these 2 models interrelate and complement each other and are used in nursing practice."
Bloom's Taxonomy | Synthesis | Synthesis, defined as the combination of elements toward the formation of a whole, is demonstrated when a novel idea is produced in a given situation.7,9 During this stage, previous information can be integrated with newly learned information to create a new product.7 | Write, plan, integrate, formulate, propose, specify, produce, organize, theorize, design, build, systematize, combine, summarize, restate, argue, discuss, derive, relate, generalize, conclude.10 | "Able to share their own definition of nursing." "Demonstrate the ability to formulate an investigative and treatment plan for an emergency medicine patient."
Bloom's Taxonomy | Evaluation | Evaluation occurs during reflection and assessment, when judgment is cast upon the value, purpose, ideas, words, methods, etc, of the learning experience.7,9 This stage assesses the accuracy, efficiency, economic trade-off, or satisfaction of the learning experience.7 | Evaluate, verify, assess, test, judge, rank, measure, appraise, select, check, justify, determine, support, defend, criticize, weigh.10 | "Students will be able to reflect on your learning this term and assess where your learning needs are in relation to the skills and assessments introduced this term."
SMART Criteria | Specific | Targets a specific area for improvement and defines the goal of the activity.11,12 | — | Specific: "Demonstrate the ability to perform appropriate history-taking and physical examination of an unstable emergency medicine patient." Not specific: "Improved examination skills."
SMART Criteria | Measurable | Quantifies or suggests an indicator of progress and/or completion.11,12 | — | Measurable: "Students will be able to perform a basic neurological examination with the emphasis on correct technique rather than diagnosis or management." Not measurable: "Reflect on theory and related principles related to ambulating the patient."
SMART Criteria | Attainable | Assesses whether the goal can be achieved.12 | — | Attainable: "Learn to obtain efficient history." Not attainable: "Physical exams learned so far."
SMART Criteria | Realistic | Checks whether the activity is doable.12 | — | Realistic: "Revisit the pelvic model to re-acquaint the students to perform speculum exam." Not realistic: "Collaborate with other professions to establish common goals, provide care for individuals and caregiver, and facilitate shared decision-making, problem-solving and conflict-resolution."
SMART Criteria | Time Frame | Specifies when the results can be achieved and whether the allotted time is reasonable for completing the activity.11,12 | — | Time frame: "By the end of the session the learners will be more familiar with communication skills needed for patient interviews." No time frame: "Students will be able to describe the principles of growth."
Inappropriate Verbs | Present | Verbs that are subjective and vague in measuring learning outcomes.10,13 | Know, comprehend, understand, appreciate, familiarize, study, be aware, become acquainted with, gain knowledge of, cover, learn, realize.10 | "Become familiar with surroundings and equipment within the CSBL and understand regulations and policies associated with its use."
Background information for each theoretical framework, including the definitions and key words presented to raters during the training process. A selection of key examples of learning objectives, categorized to their appropriate levels within the frameworks, is showcased.
ACLS, Advanced Cardiac Life Support; CSBL, Centre for Simulation-Based Learning.

The increasing use of simulation-based learning in health professions training programs has added organizational complexity and put cost pressures on many simulation centers.16 Presumably, cost-conscious and educationally valid use of this scarce resource would include educationally sound learning objectives. Best practice guidelines and accreditation standards for simulation-based education stress the use of educationally sound learning objectives. However, it is unknown what constitutes an educationally sound learning objective and whether these conditions are met in simulation practice. Therefore, our research goal was to assess whether learning objectives were written in a goal-attainable manner from the perspective of these theoretical frameworks. This study will determine whether learning objectives comply with the frameworks set out by accreditation standards, namely, Bloom's Taxonomy, SMART criteria, and inappropriate verbs, and whether this classification relates to perceived student achievement.

METHODS

Study Design

A retrospective study was conducted at the Centre for Simulation-Based Learning at McMaster University between May 2016 and June 2018, involving the records of all simulation sessions with completed evaluation data. Session records included the learner group, session-specific learning objectives, and evaluation data. Learning objectives were collected from the Faculty of Health Sciences, represented by 21 different health profession programs that used the simulation center (Table 2). The number of faculty members, simulation sessions, and learning objectives belonging to each program can be found in Table 2.

TABLE 2 - Demographic Distribution of Sample Population
Program | No. Authors* | No. Simulation Sessions | No. Learning Objectives | Mean No. Objectives per Session (Min–Max) | Total No. Evaluations | Mean No. Evaluations per Session
Bachelor of Health Sciences Program | 8 | 5 | 15 | 2.44 (1–3) | 9 | 51
Program for Interprofessional Education | 3 | 4 | 9 | 3 (2–4) | 5 | 16
School of Medicine | 107 | 110 | 318 | 1.66 (0.5–5) | 494 | 5
School of Nursing | 38 | 30 | 235 | 2.41 (0.4–5) | 59 | 33
School of Nursing – Accelerated Stream | 1 | 1 | 5 | 3.63 (3–4) | 3 | 5
School of Rehab Sciences | 9 | 9 | 15 | 1.39 (1–4) | 16 | 8
Interprofessional Medical Education | 1 | 1 | 5 | 3 (1–5) | 2 | 3
Midwifery | 1 | 3 | 14 | 2.3 (1–3) | 4 | 14
Physician Assistants Program | 13 | 12 | 95 | 1.99 (1–3.63) | 18 | 34
Anesthesia | 14 | 10 | 48 | 3.17 (1–5) | 18 | 15
Cardiology | 1 | 1 | 3 | 3 (3–3) | 3 | 4
Emergency Medicine | 6 | 3 | 25 | 2.77 (1–4.49) | 4 | 18
Family Medicine | 1 | 1 | 1 | 1.5 (1–2) | 5 | 2
Geriatrics | 1 | 1 | 1 | 1 (1–1) | 1 | 5
Internal Medicine | 2 | 3 | 13 | 2.56 (1–5) | 10 | 4
Obstetrics and Gynecology | 3 | 3 | 11 | 2.33 (1–5) | 18 | 10
Pediatrics | 2 | 4 | 19 | 2.06 (0.23–5) | 14 | 7
Surgery | 3 | 2 | 4 | 1.48 (1–1.95) | 2 | 16
Other | 1 | 1 | 4 | 3.5 (3–4) | 2 | 18
External Programs | 3 | 3 | 5 | 2 (1–5) | 6 | 23
Unknown | 1 | 2 | 4 | 1.11 (1–3) | 22 | 9
Distribution of participating programs with learning objectives and student feedback entries. The mean number of objectives provided to students per session, the total number of evaluations, and the mean number of evaluations per session are shown.
*Some learning objectives were co-authored by multiple faculty members, who were treated as a single author of those learning objectives.
†Some faculty members co-facilitated multiple simulation sessions.

Before using the simulation center, each faculty member is required to perform a needs-based assessment and create appropriate learning objectives based on the SMART criteria. After a request to access the simulation laboratory has been submitted, faculty course coordinators review whether instructors have provided learning objectives for their simulation activity.

Data for student perception were aggregated from self-evaluation forms completed after learners finished their activities at the simulation laboratory. Regardless of program type or session activity, every student received the same evaluation form; the forms differed only in the learning objective statements pertaining to the targeted goal of that specific learning session. Students were provided with the following 4 statements on the evaluation form: (1) "Overall effectiveness of teaching for this session"; (2) "The learning objectives of this session were clear"; (3) "The session addressed the learning objective" (followed by the course- and simulation session-specific learning objective); and (4) "I was assessed and received feedback on these learning objectives." Students rated their level of agreement with each statement on a continuous scale from "strongly disagree" to "strongly agree" (Fig. 1). These statements became the basis for evaluating student perception in relation to the learning objectives.

FIGURE 1:
Students were provided with feedback forms to rate their level of agreement with each statement. Results were obtained on a continuous scale from strongly disagree to strongly agree across 4 criteria. The session-specific learning objective is in bold.

Classification

Individual learning objectives were assigned codes to denote the use of Bloom's Taxonomy, SMART criteria, and inappropriate verbs. For Bloom's Taxonomy, each learning objective was exclusively assigned to 1 of the 6 levels, based primarily on the levels' definitions or level-associated key terms. Learning objectives that were vague could not be classified and were excluded; examples of vague objectives included "introduction to abdominal examination" and "adrenal disorders." For the SMART criteria, the presence of each element was independently coded as either absent (0) or present (1). The presence of an inappropriate verb was also documented as absent (0) or present (1). Refer to Table 1 for definitions, key terms, and classification examples.
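To make the coding scheme concrete, the sketch below shows one way the codes could be represented and screened programmatically. This is an illustrative approximation only: the verb lists are abridged from Table 1, the helper names are our own, and the study's raters coded by judgment against the full framework definitions, not by string matching.

```python
# Illustrative sketch only (not the study's instrument): one way to represent
# a coded learning objective and run a naive first-pass keyword screen.
from dataclasses import dataclass, field

BLOOM_VERBS = {  # abridged key words per level (Table 1)
    "knowledge": ["recall", "identify", "recognize", "define", "list"],
    "comprehension": ["describe", "explain", "classify", "interpret"],
    "application": ["demonstrate", "perform", "apply", "implement"],
    "analysis": ["analyze", "compare", "differentiate", "contrast"],
    "synthesis": ["formulate", "design", "integrate", "propose"],
    "evaluation": ["evaluate", "assess", "judge", "appraise", "justify"],
}
INAPPROPRIATE_VERBS = ["know", "understand", "appreciate", "familiarize",
                       "be aware", "become acquainted with", "learn"]

@dataclass
class CodedObjective:
    text: str
    bloom_level: str | None = None  # exactly 1 of the 6 levels, or None if vague
    # Each SMART element coded absent (0) or present (1), as in the study.
    smart: dict = field(default_factory=lambda: {
        "specific": 0, "measurable": 0, "attainable": 0,
        "realistic": 0, "time_frame": 0})
    inappropriate_verb: int = 0  # absent (0) / present (1)

def screen(objective: str) -> CodedObjective:
    """Naive keyword screen; real coding required trained rater judgment."""
    coded = CodedObjective(text=objective)
    lowered = objective.lower()
    coded.inappropriate_verb = int(any(v in lowered for v in INAPPROPRIATE_VERBS))
    # Apply the study's round-2 rule: score the highest Bloom level mentioned.
    for level in ["evaluation", "synthesis", "analysis",
                  "application", "comprehension", "knowledge"]:
        if any(v in lowered for v in BLOOM_VERBS[level]):
            coded.bloom_level = level
            break
    return coded

print(screen("Demonstrate the ability to perform appropriate history-taking."))
```

A substring screen like this over-matches (eg, "use" inside longer words), which is one practical reason the study relied on human raters rather than automated matching.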

Interrater Reliability

Three raters, chosen for their heterogeneous backgrounds, were involved in the coding process: 2 university undergraduate students (M.H. and M.M.) and a faculty member with educational training in the Health Sciences Program (M.S.). All raters received training to develop a shared definition and mental model of the theoretical frameworks. The training process consisted of familiarization with both the theoretical and practical applications of the frameworks, rater calibration exercises, reiterative coding, and multiple discussions to resolve coding disparities. A randomly generated subset of 100 learning objectives was independently coded by 2 raters (M.H. and M.M.), blinded to the simulation sessions, to assess reliability. Any coding discrepancies between the 2 primary raters were discussed and resolved with an experienced third rater (M.S.) to address common themes of difference. The 3 components with the lowest interrater reliability (Bloom's Taxonomy and the "specific" and "measurable" components of the SMART criteria) were coded independently by the 2 raters and discussed to strengthen interrater reliability. For the reassessment of the specific and measurable components, 100 learning objectives were recoded, whereas a further 200 learning objectives were recoded for Bloom's Taxonomy. The remaining coding was done by a single rater (M.M.).

Statistical Analysis

κ values were calculated for each scored component of the learning objectives, and Pearson correlations were used to correlate learner evaluation feedback with each framework. Statistics were completed with SPSS v24 (IBM Corp., Armonk, NY). Post hoc analyses were conducted using Bonferroni correction for multiple comparisons among student perception ratings to minimize type 1 error (P < 0.002).
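The analyses were run in SPSS; for readers who want to reproduce the same quantities elsewhere, the following minimal Python sketch computes Cohen's κ for two raters' binary codes, a Pearson correlation between a coded element and session ratings, and the Bonferroni-adjusted threshold. All data arrays are hypothetical, and the comparison count of 25 (0.05/25 = 0.002, which matches the reported threshold) is our assumption rather than a figure stated in the paper.

```python
# Minimal sketch with hypothetical data: reproduces the paper's statistics
# (Cohen's kappa, Pearson correlation, Bonferroni threshold) outside SPSS.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Two raters' binary codes (absent = 0 / present = 1) for the same objectives.
rater1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa_score(rater1, rater2)  # chance-corrected agreement

# A coded element paired with mean student ratings per objective (7-point scale).
element_present = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
mean_rating = [6.1, 6.4, 6.0, 6.2, 6.5, 6.1, 6.4, 6.0, 6.2, 6.5]
r, p = pearsonr(element_present, mean_rating)

# Bonferroni: the paper reports P < 0.002, consistent with 0.05 / 25
# comparisons; the exact comparison count is our assumption.
alpha = 0.05 / 25

print(f"kappa = {kappa:.2f}, r = {r:.3f}, p = {p:.4f}, threshold = {alpha}")
```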

Ethics

This study design was reviewed and approved by the Hamilton Integrated Research Ethics Board, protocol 2507.

RESULTS

Learning Objectives

A total of 1693 faculty-designed learning objectives were identified in 722 sessions from 2016 to 2018, with 7047 corresponding student feedback forms (Table 2). Repeated learning objectives and surveys with incomplete responses were excluded, leaving 848 objectives for this study. Objectives were retrieved across 20 different programs, with most objectives created by the School of Medicine (37.5%) and the School of Nursing (27.7%, Table 2).

Rater Calibration

Rater agreement based on the coded learning objectives resulted in κ values of 0.69 for Bloom's Taxonomy, 0.25 to 0.96 for SMART criteria, and 0.86 for inappropriate verbs, all P < 0.01 (Table 3). The specific and measurable SMART elements had the greatest interrater variability. Common reasons for disagreement are listed in Table 3.

TABLE 3 - Coding Agreement Between Raters
Interrater agreement for coding of 200 learning objectives for Bloom's Taxonomy and 100 objectives for the SMART criteria and inappropriate verbs.

Bloom's Taxonomy (Round 1: κ = 0.32, P < 0.01; Round 2: κ = 0.69, P < 0.01)
  • Agreement/total (percentage agreement) by level: Knowledge 32/193 (16.6); Comprehension 17/193 (8.8); Application 80/193 (41.5); Analysis 9/193 (4.7); Synthesis 5/193 (2.6); Evaluation 7/193 (3.7).
  • Common reasons for disagreement: learning objectives indirectly referring to learning actions were interpreted differently; objectives occasionally referenced multiple levels of Bloom's Taxonomy (in the second round, the highest level mentioned in the objective was scored); verbs that were difficult to conceptualize, including "learn how," "reflect," and "will be able," made objectives difficult to score.

SMART Criteria
  • Specific: Round 1, 72/99 (72.7), κ = 0.33, P < 0.01; Round 2, 69/98 (70.41), κ = 0.25, P = 0.002. Common reasons for disagreement: definitions of the session's scope of practice were often unclear (eg, "learn intravenous insertion techniques"); learners' expertise and knowledge before the simulation activity were frequently not stated; summary sessions and evaluative objectives were found to be the least specific.
  • Measurable: Round 1, 78/99 (78.8), κ = 0.58, P < 0.01; Round 2, 76/98 (77.6), κ = 0.55, P < 0.01. Common reasons for disagreement: measurability was difficult to determine when tools were not specified within the learning objective; inappropriate verbs were difficult to assess because of their nonmeasurable properties.
  • Attainable: 97/98 (98.9); κ not calculated.*
  • Realistic: 93/98 (94.9); κ not calculated.*
  • Time frame: 97/98 (98.9); κ = 0.96, P < 0.01.

Inappropriate Verbs
  • Inappropriate verbs used: 95/99 (95.9); κ = 0.86, P < 0.01.

*κ values for attainable and realistic were not calculated: agreement between raters exceeded 95%, leaving insufficient objectives in one or more categories to compute interrater reliability.

Categorization of Objectives

Bloom's Taxonomy

Simulation sessions were commonly tailored toward application (53%) and knowledge (21.4%). Few objectives made use of comprehension (12.2%) or analysis (7.2%). Higher cognitive processes, namely, synthesis (2.3%) and evaluation (3.7%), appeared least often. A total of 140 statements were excluded from categorization under Bloom's Taxonomy because they lacked a clear indication of how the learning session would be conducted.

SMART Criteria

Most learning objectives seemed attainable (88.8%), realistic (85.0%), and measurable (60.8%). However, about half of the learning objectives were not specific (only 49.6% were specific). Only a small number of objectives contained a time frame (9.9%).

Inappropriate Verbs

Approximately 1 in 5 learning objectives contained inappropriate verbs (Table 4).

TABLE 4 - Learning Objective Classification and Student Ratings
Counts (n, %) reflect categorization by blinded raters; median scores reflect scoring by student evaluations.
Category | n | % | Overall Score, Median (95% CI) | Effectiveness of Teaching, Median (95% CI) | Clear Learning Objective, Median (95% CI) | Received Feedback on Learning Objective, Median (95% CI)
Bloom's Taxonomy: Knowledge | 151 | 21.4 | 6.22 (6.11–6.33) | 6.32 (6.26–6.38) | 6.28 (6.22–6.39) | 6.13 (6.10–6.22)
Bloom's Taxonomy: Comprehension | 86 | 12.2 | 6.10 (6.03–6.19) | 6.27 (6.19–6.32) | 6.27 (6.18–6.38) | 6.10 (6.05–6.14)
Bloom's Taxonomy: Application | 376 | 53.3 | 6.35 (6.30–6.41) | 6.39 (6.32–6.43) | 6.35 (6.33–6.43) | 6.20 (6.14–6.28)
Bloom's Taxonomy: Analysis | 51 | 7.2 | 6.04 (5.90–6.18) | 6.26 (6.19–6.38) | 6.23 (6.14–6.39) | 6.09 (5.91–6.20)
Bloom's Taxonomy: Synthesis | 16 | 2.3 | 6.30 (5.68–6.48) | 6.40 (6.01–6.53) | 6.41 (6.33–6.55) | 5.91 (5.69–6.27)
Bloom's Taxonomy: Evaluation | 26 | 3.7 | 6.34 (6.05–6.33) | 6.33 (6.00–6.48) | 6.40 (6.16–6.45) | 6.22 (5.90–6.51)
Specific: Absent | 427 | 50.4 | 6.35 (6.31–6.41) | 6.41 (6.35–6.46) | 6.40 (6.34–6.45) | 6.24 (6.20–6.32)
Specific: Present | 421 | 49.6 | 6.22 (6.18–6.29) | 6.36 (6.32–6.40) | 6.34 (6.32–6.40) | 6.12 (6.07–6.18)
Measurable: Absent | 331 | 39.2 | 6.29 (6.23–6.35) | 6.39 (6.33–6.44) | 6.36 (6.30–6.43) | 6.22 (6.21–6.30)
Measurable: Present | 514 | 60.8 | 6.30 (6.26–6.37) | 6.37 (6.34–6.41) | 6.38 (6.33–6.43) | 6.16 (6.13–6.20)
Attainable: Absent | 95 | 11.2 | 6.50 (6.40–6.60) | 6.52 (6.48–6.62) | 6.56 (6.48–6.67) | 6.39 (6.32–6.50)
Attainable: Present | 753 | 88.8 | 6.26 (6.23–6.31) | 6.35 (6.31–6.38) | 6.34 (6.37–6.39) | 6.15 (6.13–6.20)
Realistic: Absent | 127 | 15.0 | 6.45 (6.34–6.57) | 6.51 (6.47–6.60) | 6.55 (6.48–6.64) | 6.38 (6.28–6.45)
Realistic: Present | 718 | 85.0 | 6.26 (6.22–6.31) | 6.33 (6.30–6.38) | 6.33 (6.29–6.38) | 6.14 (6.12–6.20)
Time Frame: Absent | 764 | 90.1 | 6.32 (6.27–6.37) | 6.38 (6.35–6.42) | 6.39 (6.34–6.43) | 6.20 (6.18–6.25)
Time Frame: Present | 84 | 9.9 | 6.16 (5.98–6.26) | 6.29 (6.14–6.42) | 6.23 (6.05–6.34) | 5.90 (5.79–6.21)
Inappropriate Verbs: Absent | 659 | 77.8 | 6.32 (6.28–6.38) | 6.39 (6.36–6.42) | 6.39 (6.34–6.44) | 6.20 (6.17–6.26)
Inappropriate Verbs: Present | 188 | 22.2 | 6.21 (6.14–6.33) | 6.30 (6.20–6.38) | 6.28 (6.21–6.38) | 6.13 (6.09–6.21)
Student perceptions were reported on a 7-point continuous scale from completely disagree (1) to completely agree (7).
CI, confidence interval.

Learner's Perception

The relation of the theoretical frameworks of learning to student perception was examined using student ratings on 4 criteria: (1) overall effectiveness of teaching, (2) clarity of the learning objective, (3) whether the session addressed the learning objective, and (4) feedback received on those learning objectives. Median student ratings corresponding to each learning objective classification are shown in Table 4. Post hoc analysis for multiple testing used a Bonferroni-corrected significance threshold of P < 0.002.

Bloom's Taxonomy

An exploratory analysis revealed that the Bloom's Taxonomy classification of learning objectives did not correlate with student ratings across all 4 domains (P = not significant, data not shown). Activities aimed higher on Bloom's hierarchy did not necessarily result in a greater perception of academic success.

SMART Criteria

Correlations between student perception and the SMART criteria varied by element. Associations of the specificity and measurability criteria with learners' perception could not be assessed because of low interrater reliability. However, attainable learning objectives were significantly related to a lower overall evaluation score for the session (Pearson = −0.119, P = 0.001), effectiveness of teaching (Pearson = −0.151, P < 0.001), clarity of the learning objective (Pearson = −0.145, P < 0.001), and feedback on objectives (Pearson = −0.129, P < 0.001). No meaningful correlations could be drawn between realistic learning objectives and student perception. The inclusion of a time frame within learning objectives was negatively correlated with clarity of learning objectives (Pearson = −0.170, P < 0.001) and feedback on objectives (Pearson = −0.115, P < 0.001).

Inappropriate Verbs

Use of inappropriate verbs was not significantly related to student perception.

DISCUSSION

We set out to determine whether learning objectives aligned with the frameworks outlined within accreditation standards for simulation education. Most learning objectives resided within the lower half of Bloom's Taxonomy (87%), lacked specificity (50%) or a time frame (90%), and used inappropriate verbs (22%). No correlations were observed between student perception and the classification of learning objectives by Bloom's Taxonomy. For the SMART criteria, only slight negative correlations were seen in student ratings for learning objectives classified as attainable or containing a time frame. Student ratings for the clarity of the learning objective were also not significantly correlated with the use of inappropriate verbs.

Gaps Between Practice and Guidelines

Bloom's Taxonomy

The Royal College calls for the creation of learning objectives to select appropriate teaching methods.1,17 Simulation-based education assists students in applying skills, formulating arguments, and making judgments but has no direct effect on the recall or understanding of factual information.18 However, knowledge- and comprehension-based simulation learning objectives outnumber those that require higher cognitive modalities. This raises the question of why most learning objectives still reflect learning within the lower educational hierarchies and whether this constitutes efficient use of expensive simulation resources.

SMART Criteria

With only half of the objectives specific, two thirds measurable, and a time frame rarely present, current practice departs from the standards mandated by the INACSL.2 Poorly written learning objectives, coupled with broad descriptions and the absence of a measurable standard, create confusion about the activity's focus and the expected performance.13

Inappropriate Verbs

The INACSL discourages the use of vague terms because they hinder the selection of an appropriate method of instruction.2,3 However, these verbs are still being used.

Relationship to Learner Evaluation

Studies have previously correlated student ratings with achievement, which has influenced higher education to institutionalize student ratings for evaluating quality of learning.19,20 However, recent studies have challenged these views, claiming that the purpose of student ratings has become increasingly unclear.21 As such, the interpretation of student ratings as an accurate measure of learner outcome remains debated. Some studies claim that student ratings are strong predictors of educational outcome, citing that learner evaluations are reliable, stable, and relatively unaffected by various potentially biasing variables.19,20 Conversely, other studies conclude that student perception was never intended to act as a surrogate for learner outcome, stating that student ratings are not significantly correlated with academic success, do not reflect use of effective teaching methods, and lack expected correlations with other variables.22,23

Further investigations have yielded mixed results. Other studies claim that student success was well correlated with instructional competency; however, they were unable to observe any relationship between feedback and student achievement.24,25 Furthermore, the clarity of learning objectives was moderately correlated with student achievement, whereas the extent to which the instructor accomplished the learning objective was strongly correlated with student success.25,26 Overall, the current literature is inconclusive on the validity and application of student ratings as a predictor of educational outcome, suggesting that more in-depth research is required before the validity of ratings can be established.27,28

The complexity of this relationship is reflected in the findings of this study. The relation between learning objective categorization and student perception for specificity, measurability, and realism could not be assessed. However, slight negative correlations with student scores were observed for learning objectives with attainable and time frame qualities. Meanwhile, higher levels of Bloom's Taxonomy and the exclusion of inappropriate verbs were not indicative of greater perceived academic success. Such results complicate the interpretation of student ratings as a marker of perceived success. Despite the conflicting arguments in academia, educational communities and faculty members continue to collect and use student ratings. As student evaluations still influence the educational system, educators should be aware that the relation between student ratings and academic success is largely inconclusive. Therefore, further studies with conclusive findings are required for the interpretation and effective use of student ratings.

Recommendations

Many learning objectives are not written according to accreditation standards, potentially limiting student learning at simulation centers. Given the expense of simulation resources, educators should consider whether alternate educational techniques are more appropriate for objectives targeting knowledge and comprehension skills, reserving simulation resources for learning at or beyond the level of application within Bloom's Taxonomy. Educators should also include greater specificity and explicit time frames in their learning objectives to align with the SMART criteria, while eliminating the use of inappropriate verbs.

Implications for Low Interrater Reliability

Although rater calibration improved categorization with Bloom's Taxonomy, interrater reliability decreased for the SMART criteria of specificity and measurability. Interrater reliability remained low even after raters received training to develop a shared mental model of the learning frameworks. The low agreement between raters attests to the challenges of applying Bloom's Taxonomy and the specificity and measurability elements of the SMART criteria as real-world educational measurement tools. The discrepancy between raters emphasizes the difficulty of classifying learning objectives and warns educators that the defined taxonomies may be insufficient guiding tools.

Furthermore, the fact that low agreement persisted despite extra precautions to ensure consensus suggests that the low interrater reliability may not be attributable solely to the raters. Instead, the discrepancy may have emerged from inappropriate use of Bloom's Taxonomy and SMART criteria in writing the objectives themselves, with poorly written learning objectives revealed through mismatched codes. Although current practices settle on Bloom's Taxonomy and SMART criteria as the educational standards, the systematic implementation of these frameworks should be questioned. Are Bloom's Taxonomy and SMART criteria appropriate frameworks for formulating learning objectives? Future studies should investigate the establishment of a user-friendly, novel framework targeted specifically at simulation-based learning.

Strengths and Limitations

This single-center study may be confounded by a shared culture, which may limit its generalizability; however, the sample represents diverse programs, each with its own subculture, potentially mitigating this risk. Overall, these characteristics provide a real-world sample across multiple faculties at a single institution. Its value can be attributed to the diversity and completeness of the sampling, as all simulation-based learning objectives within the studied time frame were analyzed.

Because the researchers were blinded to the simulation sessions, this study focused specifically on the wording of the learning objectives, without knowledge of the simulation context in which each objective fits. This is both an advantage, as it allows enhanced scrutiny without contextual assumptions, and a disadvantage, as we cannot comment on the appropriateness of the objectives for the level of learning.

A total of 140 learning objectives were excluded from classification into their respective stages within Bloom's Taxonomy. Because these descriptors lacked clarity about the teaching instruction surrounding the simulation session, they could not be included in the analysis for Bloom's Taxonomy. However, the uncertainty of these objectives' purpose, owing to their vague descriptions, emphasizes the importance of creating specific, goal-oriented learning objectives.

Several limitations were also observed with regard to student perception. Because of limited variation and high scores in student ratings, a ceiling effect may mask true differences. In addition, scores may have been subject to social desirability bias: rather than giving a true reflection of their perception, students may have responded according to what they perceived was expected after completing a simulation activity.

Future Research

Further investigation is required to address the gaps uncovered through this study, and three areas of research are proposed. First, studies examining the link between learning objectives and outcomes are warranted to better understand how to focus simulation resources to optimize learning outcomes. Second, despite the efforts taken to achieve high consensus between raters, reliability remained low; future studies should determine whether low interrater reliability is consistent across other stakeholders. Finally, the appropriateness of Bloom's Taxonomy and SMART criteria as standards for learning objectives must be examined. Research should be dedicated to exploring the possibility of creating a novel framework better suited to simulation-based learning.

CONCLUSIONS

There is an evident gap between accreditation standards for simulation and current practices at McMaster University. Most learning objectives do not adhere to the theoretical frameworks outlined in the Royal College's and the INACSL's accreditation standards. Learning objectives at the simulation center were not optimally written, thereby stifling the potential of costly simulation resources in medical education. Activities conducted at the simulation center should reflect learning at or beyond the stage of application in Bloom's Taxonomy, contain greater emphasis on specificity and timeliness, and avoid the use of inappropriate verbs. We urge educators to re-evaluate and subsequently modify their learning objectives for better utilization of simulation resources to maximize student learning and educational experiences.

REFERENCES

1. CPD accreditation: Simulation-based learning activities. Royal College of Physicians and Surgeons of Canada. Available at: http://www.royalcollege.ca/rcsite/cpd/accreditation/cpd-accreditation-simulation-based-learning-activities-e. Accessed January 23, 2019.
2. INACSL Standards Committee: INACSL standards of best practice: Simulation outcomes and objectives. Clin Simul Nurs 2016;12:S13–S15.
3. Lioce L, Reed CC, Lemon D, et al. Standards of best practice: simulation standard III: participant objectives. Clin Simul Nurs 2013;9(6):S15–S18.
4. Sherbino J, Frank JR. Educational Design: A CanMEDS Guide for the Health Professions. Royal College of Physicians and Surgeons: Ottawa; 2011.
5. Summaries of Learning Theories and Models. Learning Theories. Available at: https://www.learning-theories.com/. Accessed January 23, 2019.
6. Learning Frameworks. RMIT University. Available at: https://emedia.rmit.edu.au/teachereducation/?q=Learning-frameworks. Accessed January 23, 2019.
7. Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR. Taxonomy of Educational Objectives: Handbook 1: Cognitive Domain. London: Longman Publishing Group; 1984.
8. Anderson LW. Objectives, evaluation, and the improvement of education. Stud Educ Eval 2005;31:102–113.
9. Adams NE. Bloom's taxonomy of cognitive learning objectives. J Med Libr Assoc 2015;103:152–153.
10. Guidebook for Planning, Developing & Delivering CHSE Activities. Continuing Health Sciences Education Program McMaster University. Hamilton: The CHSE Program; 2016.
11. Doran GT. There's a S.M.A.R.T. way to write management's goals and objectives. Manage Rev 1981;70:35–36.
12. Lawlor KB, Hornyak MJ. SMART goals: how the application of SMART goals can contribute to achievement of student learning outcomes. Dev Bus Simul Exp Learn 2017;39:259–267.
13. Chatterjee D, Corral J. How to write well-defined learning objectives. J Educ Perioper Med 2017;19:E610.
14. Krathwohl D. A revision of Bloom's Taxonomy: an overview. Theory Pract 2002;41:212–218.
15. Aghera A, Emery M, Bounds R, et al. A randomized trial of SMART goal enhanced debriefing after simulation to promote educational actions. West J Emerg Med 2018;19:112–120.
16. Centre for Simulation-Based Learning. McMaster University. Available at: http://simulation.mcmaster.ca/spp.html. Accessed January 25, 2019.
17. Accredited Activity Standards for the Maintenance of Certification (MOC) Program: Simulation-Based Activities (section 3), volume 3. Royal College of Physicians and Surgeons of Canada; 2018. Available at: https://caep.ca/wp-content/uploads/2018/06/Standards-for-MOC-Simulation-based-Activities-Section-3-January-2018-v.3.pdf. Accessed January 25, 2019.
18. Silvia C. The impact of simulations on higher level learning. Int J Public Pol 2012;18:397–422.
19. Marsh HW, Roche LA. Making students' evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility. Am Psychol 1997;52:1187–1197.
20. Lizzio A, Wilson K, Simons R. University students' perceptions of the learning environment and academic outcomes: implications for theory and practice. Stud High Educ 2002;27:27–52.
21. Darwin S. What contemporary work are student ratings actually doing in higher education. Stud Educ Eval 2017;54:13–21.
22. Uttl B, White CA, Gonzalez DW. Meta-analysis of faculty's teaching effectiveness: student evaluation of teaching ratings and student learning are not related. Stud Educ Eval 2017;54:22–42.
23. Linse AR. Interpreting and using student ratings data: guidance for faculty serving as administrators and on evaluation committees. Stud Educ Eval 2017;54:94–106.
24. Cohen PA. Student ratings of instruction and student achievement: a meta-analysis of multisection validity studies. Rev Educ Res 1981;51:281–309.
25. Feldman KA. The association between student ratings of specific instructional dimensions and student achievement: refining and extending the synthesis of data from multisection validity studies. Res High Educ 1989;30:583–645.
26. Centra JA. Student ratings of instruction and their relationship to student learning. Am Educ Res J 1977;14:17–24.
27. Abrami PC, d'Apollonia S, Cohen PA. Validity of student ratings of instruction: what we know and what we do not. J Educ Psychol 1990;82:219–231.
28. Dowell DA, Neal JA. A selective review of the validity of student ratings of teaching. J High Educ 1982;53:51–62.
Keywords:

Simulation-based learning; learning objectives; accreditation standards of simulation; theoretical frameworks of learning; Bloom's Taxonomy; SMART criteria; inappropriate verbs; student perceived success

    Copyright © 2020 Society for Simulation in Healthcare