
How to write better multiple-choice questions

Smith, Linda S., MS, PhD, RN, CLNC

doi: 10.1097/01.NURSE.0000546471.79886.85
Department: Learning Curve

Linda S. Smith is vice president for research at Data Design, Inc., in Horseshoe Bend, Ark. She holds a faculty position at Ozarka College in Melbourne, Ark., and is a member of the Nursing2018 editorial board.

The author has disclosed no financial relationships related to this article.

FOR DECADES, nurses in staff development, patient education, and formal nursing education programs have used multiple-choice questions to measure acquired knowledge and skill.1 Multiple-choice test items are used to assess recalled information and facts and to evaluate higher levels of learning such as application, evaluation, and analysis.

Deceptively simple, a good multiple-choice question can be difficult to write. This article explores the components of a good multiple-choice question, provides examples of bias, and identifies helpful guidelines for writing objective multiple-choice questions.


Ensuring validity and reliability

All knowledge assessment tools must be valid (measuring what they are designed and identified to measure) and reliable (consistent in the measurement of the learning that has taken place).1-3 Additionally, they must be objective, versatile, and efficient to take, score, and administer.1-3
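
To make the reliability concept concrete: for dichotomously scored (right/wrong) exams, one widely used internal-consistency estimate is the Kuder-Richardson formula 20 (KR-20). The sketch below is illustrative only; the function name and data layout are invented for this example, though the formula itself is standard psychometrics and is not drawn from this article.

```python
# Illustrative sketch (not from the article): the KR-20 reliability
# coefficient for an exam scored 0/1 per item. Values near 1 indicate
# high internal consistency; the data layout here is an assumption.

def kr20(responses):
    """responses: one list per test taker, each a list of 0/1 item scores."""
    n = len(responses)           # number of test takers
    k = len(responses[0])        # number of items
    # proportion answering each item correctly
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    pq_sum = sum(pi * (1 - pi) for pi in p)   # sum of item variances
    # population variance of the total scores
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq_sum / var)
```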

Multiple-choice questions are prevalent and versatile knowledge assessment tools that measure the learner's basic knowledge recall, as well as knowledge application, situational analysis, problem solving, and information synthesis, critique, and evaluation.2,4,5 Poorly written multiple-choice questions are neither valid nor reliable (see Glossary of terms). Apart from wasting time and resources, these questions may also introduce unfair bias and cultural insensitivity. Additionally, poorly written questions can sometimes benefit learners who have not acquired the necessary knowledge and penalize those who have achieved their learning goals.1,3,5,6

Multiple-choice components

Multiple-choice questions consist of a problem, or stem, and a list of possible solutions, or options. The key is the correct answer or answers; the incorrect alternatives are called distractors.

A good distractor is plausible enough to sidetrack learners who have not achieved the required knowledge goals, yet is readily eliminated by those who have.2,5

A well-written stem, which evolves directly from the stated learning objective(s) of the educational program, clearly presents a situation, problem, task statement, or question that triggers the correct answer(s) or option(s).5,7 Answer options include a list of possible solutions, one or more of which is correct; the others are plausible, but incorrect.1,5,7 As these questions are designed to measure acquired knowledge, the stem must be clear, relevant, and directly related to the learning objectives.2 (See Guidelines for writing multiple-choice test questions.)

Poorly written multiple-choice questions may reflect unintentional bias and cultural insensitivity. Biased questions do not test the participant's acquired knowledge and defeat the assumed fairness and impartial nature of the assessment. Testing biases create unfair advantages and disadvantages for test takers of diverse cultural backgrounds.5 As such, a biased question is no longer a valid or reliable learning assessment.

Avoiding biased and stereotyped language

Biased nomenclature must be avoided in test items. For example, certain words have alternative meanings, such as cup (Is it a tea cup, a mug, or a traditional apparatus used to promote healing?), soda (Is it baking soda, a carbonated beverage, a type of glass, or a natural mineral?), blue (Is it a color or a mood?), and so on. Another example is stereotyped language, such as man-up, punk, girl clothes, and so on. These may include covert and overt biases related to gender, age, disability, race or ethnicity, socioeconomic status, sexual orientation, political preferences, and generational or geographic groups.8-10 Culturally competent testing items present an unbiased representation of varied cultural groups from diverse physical and social backgrounds. Questions must portray members of cultural groups as capable, meaningful, active, independent, strong, and informed.9,10

The individual is far more important than his or her cultural characteristics.9 As such, these qualities are relevant only when they are pertinent to measuring the achievement of the learning objectives. Carefully worded multiple-choice questions address only that which assesses the achieved objective directly. Follow these principles to eliminate bias and stereotyping from multiple-choice tests. (See Avoid slang terms, colloquialisms, and imprecise language.)

  • Gender. Omit gender-related pronouns whenever possible. For example, instead of saying, The doctor said the results were better than he thought they would be, consider The physician said the results were better than expected. If the use of gender-specific pronouns is necessary, address the subject without portraying any gender as feeble, inferior, submissive, or incapable compared with another—or the opposite, portraying one gender as superior, dominant, or more capable than another.9,11 Additionally, avoid labeling people with such terms as male nurse and female physician.9-11
  • Disability. When identified in a question, patients with disabilities should be presented as capable and independent.9,11 Avoid negative terms such as deformed and crippled. Also avoid labeling patients by disease or disorder. For example, rather than “a diabetic,” say “a patient with diabetes.”
  • Slang. Idioms, slang expressions, and colloquialisms can be associated with or unique to specific cultural groups, geographic areas, or certain time periods. Because others may not understand these terms, they introduce cultural bias and ambiguity and should be avoided.

Slang terms and expressions that should be avoided in multiple-choice questions include once in a blue moon and baby boomer. Demeaning or pejorative terms that should never be used include frequent flyer and sundowner.8-12

  • Negative wording. Negatively worded stems and options must be avoided as well because they may present unnecessary linguistic complexity.6,8,13 For example, avoid constructions such as “All of the following are correct, except,” or “Which of the following is not correct?” For test takers, this type of question may be confusing and difficult to answer due to uncertainty regarding what is actually being asked.1,6,8,14,15
  • Unnecessary verbiage and redundancy. Avoid language with unclear or double meanings when drafting multiple-choice questions. Eliminate words or phrases not required to answer the question and assess the learning objective.2,5,11,13,14 Ill-defined and relative terms such as several, often, few, occasionally, maybe, many, usually, sometimes, and frequently should be avoided.1,6 Additionally, proper grammar, spelling, and punctuation must be applied consistently.1,14,15

Clear, concise stems measure learning more fairly and efficiently.2,5,15 Questions should address the problem directly, excluding all language other than that which expressly assesses the knowledge level of the test taker relative to the objective.

This also applies to multiple-choice question options. Language included in the stem should not be repeated in the options, as illustrated by this example:

Poor:

While performing a sterile dressing change, a nurse notices the left glove has touched the patient's gown. What should the nurse do next?

  1. The nurse should...
  2. The nurse should...
  3. The nurse should...
  4. The nurse should...

Better:

While performing a sterile dressing change, a nurse notices the left glove has touched the patient's gown. The nurse should... [This eliminates the repeated words in each option.]

Multiple-choice questions and exams test the learner's level of knowledge and skill in relation to the established learning objectives and outcomes.5 Questions should never be ambiguous or aim to "trick" participants. Pilot testing, peer review, and posttest item analyses should be performed on all exam questions. These steps help ensure that the test items are culturally competent, the multiple-choice questions are valid, and the exam is reliable.
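
Posttest item analyses usually include two standard statistics: the difficulty index (the proportion of test takers who answered the item correctly) and the discrimination index (how much better high scorers did on the item than low scorers). The sketch below is a hedged illustration; the function name, grouping fraction, and data layout are assumptions, though the statistics themselves are standard classical test theory.

```python
# Illustrative sketch (not from the article): classical item analysis
# for 0/1-scored questions. A discrimination index near zero or
# negative flags an item that may be flawed or miskeyed.

def item_analysis(responses, item, group_frac=0.27):
    """responses: one list per test taker, each a list of 0/1 item scores.
    item: index of the question to analyze."""
    n = len(responses)
    difficulty = sum(r[item] for r in responses) / n
    # rank test takers by total score, then compare top vs. bottom groups
    ranked = sorted(responses, key=sum, reverse=True)
    g = max(1, int(n * group_frac))      # upper/lower 27% is conventional
    upper = sum(r[item] for r in ranked[:g]) / g
    lower = sum(r[item] for r in ranked[-g:]) / g
    return difficulty, upper - lower
```

Items with very high or very low difficulty, or with low discrimination, are candidates for revision or removal before the exam is reused.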

Measuring acquired knowledge

Clear and concise multiple-choice questions are not easy to compose, but when constructed correctly, they serve as a reliable and valid measurement of acquired knowledge. Biased language and cultural incompetence can be eliminated through peer review, editing, and pilot testing.

Well-written multiple-choice questions measure the accomplishment of learning objectives efficiently and without bias, which is essential for continuous quality improvement initiatives. Consider the cultural backgrounds of all potential participants, and pay close attention to possible instances of bias that may put different cultural groups at a disadvantage.

REFERENCES

1. Nedeau-Cayo R, Laughlin D, Rus L, Hall J. Assessment of item-writing flaws in multiple-choice questions. J Nurses Prof Dev. 2013;29(2):52–57; quiz E1-E2.
2. Brame CJ. Writing good multiple-choice test questions. Vanderbilt University: Vanderbilt Center for Teaching. 2013. https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions.
3. Dellinges MA, Curtis DA. Will a short training session improve multiple-choice item-writing quality by dental school faculty? A pilot study. J Dent Educ. 2017;81(8):948–955.
4. Shabatu J. Using Bloom's Taxonomy to write effective learning objectives. Teaching Innovation & Pedagogical Support. 2018. https://tips.uark.edu/using-blooms-taxonomy.
5. Sutherland K, Schwartz J, Dickson P. Best practices for writing test items. J Nurs Regul. 2012;3(2):35–39.
6. DiSantis DJ. Writing good multiple-choice questions: a brief guide for radiologists. Radiographics. 2013;33(7):1865–1866.
7. Jong K. Annual conference reports: best practices in writing test items. Am Med Writers Assoc J. 2016;31(4):172–173.
8. Kim KH, Zabelina D. Cultural bias in assessment: can creativity assessment help. Int J Crit Pedagogy. 2015;6(2):129–148.
9. Peterson D. Avoiding bias and stereotypes—test design & delivery part 5. Questionmark. 2012. https://blog.questionmark.com/avoiding-bias-and-stereotypes-test-design-delivery-part-5.
10. Sosa K. Standardized testing and cultural bias. Bright Hub Education. 2012. http://www.brighthubeducation.com/student-assessment-tools/65699-standardized-testing-and-cultural-bias.
11. Appropriate language: overview. Purdue University Online Writing Lab. 2018. https://owl.purdue.edu/owl/general_writing/academic_writing/using_appropriate_language/index.html.
12. Bristol T, Brett AL. Test item writing: 3 Cs for successful tests. Teach Learn Nurs. 2015;10(2):100–103.
13. Lampe S, Tsaouse B. Linguistic bias in multiple-choice test questions. Creat Nurs. 2010;16(2):63–67.
14. Hall M. Tips for writing effective multiple-choice questions. Johns Hopkins University, Center for Educational Resources: The Innovative Instructor. 2016. https://ii.library.jhu.edu/2016/12/15/tips-for-writing-effective-multiple-choice-questions.
15. Zimmaro DM. Writing good multiple-choice exams. The University of Texas at Austin: Faculty Innovative Center. 2016. https://facultyinnovate.utexas.edu/sites/default/files/writing-good-multiple-choice-exams-fic-120116.pdf.
Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.