Changing the Point of Reference

Kirkpatrick’s Evaluation Level 1 From the Learner’s Point of View

Park, Chan W., MD, FAAEM; Holtschneider, Mary Edel, MEd, MPA, BSN, RN-BC, NREMT-P, CPLP

Section Editor(s): Holtschneider, Mary Edel MEd, MPA, BSN, RN-BC, NREMT-P, CPLP

Journal for Nurses in Professional Development: May/June 2019 - Volume 35 - Issue 3 - p 167–169
doi: 10.1097/NND.0000000000000548
Departments: Simulation

Chan W. Park, MD, FAAEM, is Director, Simulation Education and Co-Director, Interprofessional Advanced Fellowship in Clinical Simulation, U.S. Department of Veterans Affairs, Durham VA Health Care System, and Adjunct Assistant Professor, Division of Emergency Medicine, Duke University Medical Center, Durham, North Carolina.

Mary Edel Holtschneider, MEd, MPA, BSN, RN-BC, NREMT-P, CPLP, is Simulation Education Coordinator and Co-Director, Interprofessional Advanced Fellowship in Clinical Simulation, U.S. Department of Veterans Affairs, Durham VA Health Care System, and Nursing Program Manager, Duke Area Health Education Center (AHEC), Duke University Medical Center, Durham, North Carolina.

The authors declare no conflicts of interest.

ADDRESS FOR CORRESPONDENCE: Mary Edel Holtschneider, U.S. Department of Veterans Affairs, Duke University Medical Center, Durham, North Carolina (e-mail: mary.holtschneider@gmail.com).

In the January/February issue, we introduced our current series looking at evaluations and how we can optimize them. Based on the large number of responses we have received to date, we sense this topic is one of enormous interest among our readers. Naturally, we continue to ask ourselves the question “Why? Why should simulation educators, and perhaps even more broadly, nursing professional development (NPD) practitioners in general, work toward enhancing the evaluation process?” In this column, we begin this exploratory journey by taking a fresh look at Dr. Kirkpatrick’s Four Levels of Evaluation. Only this time, we will look at the levels primarily through the lens of the learner rather than that of the educator. How does this change in the frame of reference affect or enhance the usefulness of Kirkpatrick’s levels, you might ask? That’s the beauty of this journey, so let us begin at the beginning.

HISTORICAL PERSPECTIVE

In 1959, Dr. Donald Kirkpatrick published a series of articles in the Journal of Training and Development entitled “Evaluating Reaction,” “Evaluating Learning,” “Evaluating Behavior,” and “Evaluating Results” to help supervisors improve their training programs. Over the past 60 years, the concepts and techniques introduced by Dr. Kirkpatrick have become widely known as the Kirkpatrick principles. The first level of evaluation, Level 1, known as “reaction,” has become the basis for many posttraining surveys and evaluations that nearly all learners have completed following a lecture or training course. To quote Dr. Kirkpatrick, the main thrust behind Level 1 evaluation is to measure “the participants’ reactions to a training event.” In other words, Level 1 evaluation was meant to provide meaningful information that measures the satisfaction level of the participants in a training session.

Over the past 60 years, however, what was once meant to be a meaningful learner satisfaction measurement tool has been reduced to what are known today simply as course/session “evals,” which are often a prerequisite for learners to obtain continuing education credit for their participation in a course. If we are honest with ourselves, we would agree that very few learners enjoy filling out the “eval” at the end of an educational session, and fewer still like being required to do so in order to obtain continuing education credit. Why is that? Who does not appreciate the opportunity to express their opinion on matters they find important?

CURRENT PRACTICE OF EVALUATIONS

Let us take a pop quiz on evaluations and how they are commonly used.

Question 1: Do the evaluations that you currently use resemble the ones that ask the learner to circle a score on a 1–5 Likert scale, with 1 being the worst and 5 being the best?

Answer: If you are like most educators, the answer to Question 1 is likely a “yes.” Why? Most likely because such evaluations are easy to write and can be quickly filled out by our learners before they bolt out of the training session. For the sake of educator expediency, that is, time and efficiency, most evaluations are brief and ask learners only simple questions. Although this type of evaluation serves a practical purpose for the educator, who is often seeking documentation that students attended the session and that learner feedback was “actively” sought, the learners rarely take the time, or are rarely given the opportunity, to provide meaningful feedback. Many learners may feel coerced into completing the evaluation because continuing education credit is tied to it, particularly if this documentation process is computerized on a learning management system.

Taking this route of expediency often comes at a high cost that may remain invisible. In fact, this type of evaluation process fails to convey the sincerity with which the educator prepared the content and delivery of the material in order to accomplish the following with Kirkpatrick’s levels:

  1. Maximize learner engagement and interest (Level 1).
  2. Optimize learner receptivity (Level 2).
  3. Effect lasting behavioral impact that is beneficial (Level 3).
  4. Generate results that improve the institution or organization (Level 4).

Question 2: Are most of the questions on the evaluation trainer centered rather than learner centered?

Answer: If you are unsure whether the evaluation you are using is trainer centered or learner centered, carefully consider how the questions are worded. Kirkpatrick and Kirkpatrick (2005) discuss these differences in the book Transferring Learning to Behavior: Using the Four Levels to Improve Performance. A trainer-centered question focuses on whether the course was organized, whether the objectives were met, and whether the facilitator delivered the content well. A learner-centered question focuses on the learner’s experience, that is, how they felt about the course materials, how their learning was enhanced by the facilitator, how much they engaged with the course and the material, and to what degree they felt that their questions and concerns were addressed. Though subtle, these wording changes demonstrate to the learner that the educational session is about them, not the facilitator.

Question 3: On your evaluation forms, are learners asked a calibrated question that lets them know you are genuinely interested in their feedback? Do you provide space for, and encourage, free responses?

Answer: A calibrated question generally refers to an open-ended question that begins with the word “what” or “how.” Generally speaking, one or two calibrated questions per survey or evaluation can produce amazingly insightful responses. For example, rather than a question that asks the learner to grade, on a 1–5 Likert scale, “The facilitator effectively taught the course material,” one may ask, “How did the facilitator effectively teach the course material?” or “What did the facilitator do to effectively teach the course material?” and leave space for the learner to write in their response.

First, this invites the learner to reflect on what was taught. Was it effective? Was it purposeful? What did the facilitator actually do? Was it conveyed meaningfully? Second, the “how” and “what” encourage the learner to think deeply about the experience they have just had and to paraphrase that experience in their own words. If there was value to the training, then there is a good chance they will share it with you. In fact, the learner will likely feel good about sharing their experience if it was positive, which will, in turn, leave a lasting positive impression of your session. They will likely return to their position speaking positively about the experience and encourage others to attend.

What if the experience was less than positive or simply negative? Even in such a case, by being asked a calibrated question, the learner is more likely to feel appreciated and valued. They are likely to let you know specifically “how” and “what” occurred that detracted from the experience. Here again, the learner is likely to leave the session feeling positive because they were encouraged to provide honest feedback. Thus, even a negative experience can leave a positive lasting memory of your session. How valuable is that?

NPD SCOPE AND STANDARDS—EVALUATION

The NPD Scope and Standards of Practice Standard 6 on Evaluation states that the NPD practitioner “evaluates progress toward attainment of outcomes” (Harper & Maloney, 2016). The competencies listed for the NPD generalist include “involving learners and stakeholders in the evaluation process” and “disseminating the evaluation results of learning activities.” Additional competencies for the NPD specialist include “formulating a systematic and effective evaluation plan aimed at measuring processes and outcomes that are relevant to programs, learners, and stakeholders” and “demonstrating program value based on achieved outcomes.” Striving to attain enhanced evaluation procedures and processes is an essential part of NPD practitioner practice. External organizations, such as the Institute of Medicine (2015), challenge those of us working in interprofessional healthcare education to measure and document our outcomes. Designing and developing stronger evaluation methods will continue to be a high priority for healthcare institutions.

SIMULATION EVALUATIONS

Given the unique nature of simulation, where learners are active participants in experiential learning, we have broader capacity to design more meaningful, learner-centered evaluation procedures. For example, let us look at reframed, learner-centered Level 1 evaluation questions for an emergency response in situ simulation on a med/surg floor (Likert 1–5 scale); a brief illustrative sketch of such a form follows the list:

  1. I was well engaged during the session (facilitator delivery).
  2. I was provided ample opportunity to practice the skills I am asked to learn (facilitator style).
  3. I was given ample opportunity to demonstrate my knowledge and skill (program evaluation).
  4. I experienced minimal distractions during the session (facility).

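For readers who administer evaluations electronically, the short Python sketch below shows one possible way to represent these reframed Likert items alongside a single calibrated, open-ended question. This is a minimal sketch only: the data structures, field names, and printed layout are illustrative assumptions, not part of the Kirkpatrick model or of any particular learning management system. The item wording is taken from the list above.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Minimal sketch of a Level 1 (reaction) form that pairs learner-centered
    # Likert items with one calibrated open-ended question. Item wording comes
    # from the list above; all structure and field names are illustrative.

    @dataclass
    class LikertItem:
        prompt: str                      # learner-centered statement
        focus: str                       # what the item evaluates
        response: Optional[int] = None   # 1 (worst) to 5 (best)

    @dataclass
    class CalibratedItem:
        prompt: str                      # open-ended "how" or "what" question
        response: str = ""               # free-text response

    @dataclass
    class Level1Evaluation:
        likert_items: List[LikertItem] = field(default_factory=list)
        calibrated_items: List[CalibratedItem] = field(default_factory=list)

    form = Level1Evaluation(
        likert_items=[
            LikertItem("I was well engaged during the session.", "facilitator delivery"),
            LikertItem("I was provided ample opportunity to practice the skills I am asked to learn.", "facilitator style"),
            LikertItem("I was given ample opportunity to demonstrate my knowledge and skill.", "program evaluation"),
            LikertItem("I experienced minimal distractions during the session.", "facility"),
        ],
        calibrated_items=[
            CalibratedItem("What did the facilitator do to effectively teach the course material?"),
        ],
    )

    # Render the form as a learner might see it.
    for item in form.likert_items:
        print(f"(1-5) {item.prompt}")
    print()
    for item in form.calibrated_items:
        print(f"(free response) {item.prompt}")

Keeping the calibrated item in the same form as the Likert items, rather than as an afterthought, signals to the learner that their free response is valued as much as their circled score.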
In this column, we have begun the journey of reflecting on Kirkpatrick’s Four Levels of Evaluation. Simply by changing the frame of reference from that of the educator to that of the learner, we can markedly improve learner satisfaction and the value of the responses learners provide. In our future columns, we will look more closely at Level 2 (learning) to facilitate the transition from intangible assessment (reaction) to something more tangible (learning). Please e-mail us at chan.park2@va.gov and mary.holtschneider@va.gov to share your experiences and ideas for improving our evaluation processes.

References

Harper M., & Maloney P. (Eds.) (2016). Nursing professional development: Scope and standards of practice. Chicago, IL: ANPD.
Institute of Medicine (2015). Measuring the impact of interprofessional education on collaborative practice and patient outcomes. Washington, DC: The National Academies Press. https://doi.org/10.17226/21726
Kirkpatrick D., & Kirkpatrick J. (2005). Transferring learning to behavior: Using the four levels to improve performance. San Francisco, CA: Berrett-Koehler Publishers.
© 2019 by Lippincott Williams & Wilkins, Inc.