You Can’t Test Safety Competency With Your Crappy Tests


by Phil La Duke

If you’re hoping to ensure that the people taking your safety training have learned the material, then you probably use a posttest (a test given at the end of the session), and if you wrote this test it probably sucks. I used to write tests for a living and I am continually disgusted by what passes for an evaluative instrument—even those created by professional trainers. The problem stems from the fact that most of us grew up taking really poorly designed tests, and when tasked with creating a test of our own we tend to emulate what we know.

Is it a problem that our tests suck? Yes (and to those of you who think my use of the word “suck” is crude, in poor taste, or unprofessional, I say go straight to hell—when you start creating tests that don’t suck, I’ll clean up my act; until then…well, you get the picture). Using a poorly constructed test is worse than using no test at all because it takes time to build, complete, score, and record it, while adding no real value.

I should point out that most of you who create truly excremental tests (and I have seen many college professors who fall into this category) think that your tests rock it (they don’t). So what exactly is wrong with these tests? I’m glad you asked.

  1. Questions That Don’t Match the Course Objectives. Each question should correspond to one (and only one) of your course objectives. You identified the things you wanted people to learn in your objectives, so asking questions about anything else is just noise. People do (and should) cue in on the topics in the course that relate to the objectives and tend to place a lower priority on the trivia (anything that doesn’t match up to an objective).
  2. No Pretest. Pre- and posttests are a matched set. The pretest establishes baseline knowledge. If a person can pass the pretest without any instruction, he or she doesn’t really need the training (and in mandated regulatory training you will find that this is often the case; unfortunately, the law says we have to provide the training anyway). Pretests should use the exact same questions as the posttest (to ensure an apples-to-apples comparison between the learner’s skills and knowledge before and after the training). Pre- and posttest questions should be in a different order and should also mix the order of the distractors (one way to do this appears in the sketch that follows the example under point 4).
  3. True-or-False Questions. True-or-false questions are popular because ostensibly they’re easy to write. Unfortunately, good true-or-false questions are actually fairly difficult to construct. Even well-constructed true-or-false questions shouldn’t be used: people believe a person has a 50:50 shot at guessing correctly when, in fact, experts tell us that the chance of guessing correctly is much higher (around 66% the last time I looked it up). The problem is that many true-or-false questions provide grammatical clues that allow the reader to guess correctly. These clues are usually in the form of absolutes (must, always, most, least, etc.), and even if they don’t tip off the reader, these types of questions tend to measure reading comprehension skills far more than the participants’ grasp of the material.
  4. Poorly Written Multiple-Choice Questions. Some people smugly call multiple-choice questions “multiple-guess” questions. Do me a favor: next time someone tries to get cute by saying “multiple-guess,” crack them a good one in the mouth with the back of your hand; unless there are social consequences for our actions, people will never learn manners. Multiple-choice questions are (along with matching or fill-in-the-blank) the best kind of questions to ask, provided you construct them correctly. When writing a multiple-choice question, remember these tips:
    1. The key to effective multiple-choice questions lies in the distractors (the possible answers that aren’t correct). Eighty percent of poorly written multiple-choice questions have really, REALLY bad distractors that allow the person completing the test to use the process of elimination to arrive at the correct answer. That works something like this:

The capital of France is:

a) North Dakota

b) In Spain

c) Paris

d) All of the above

These distractors are horrible because: a) North Dakota is impossible, since a U.S. state cannot be the capital city of a European country; b) is similarly absurd, because the capital of France is not likely to be in Spain; and d) is absolutely wrong because North Dakota is not in Spain. (Note: never use distractors like “all of the above,” “none of the above,” or “a) and c).” A multiple-choice question should have only one correct answer.) Once we eliminate all the stupid distractors, we are left only with Paris. A better question is:

The capital of France is:

a) Cannes

b) Versailles

c) Paris

d) I don’t know.

You may be put off by the distractor, d) I don’t know, but this is a key to writing a good multiple-choice question. People will tend to guess anyway, but it gives them an out, and you will occasionally be pleasantly surprised by the person who bravely and honestly answers “I don’t know.” The added benefit of the “I don’t know” option is that it lets the instructor spend more time with participants who clearly aren’t achieving a learning objective.
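For readers who think in code, here’s a minimal Python sketch of a question built along these lines. Everything here is my own illustration (the Item class, its field names, and the seed values are hypothetical, nothing you must adopt): one objective per question, one correct answer, plausible distractors, a trailing “I don’t know” option, and shuffled answer order so the pretest and posttest forms (see point 2) don’t match.

import random

# Hypothetical structure: one objective, one correct answer, plain distractors.
class Item:
    def __init__(self, objective, stem, correct, distractors):
        self.objective = objective      # exactly one course objective
        self.stem = stem
        self.correct = correct          # exactly one correct answer
        self.distractors = distractors  # plausible, never "all of the above"

    def render(self, rng):
        # Shuffle the answers so the pretest and posttest forms differ,
        # then append "I don't know" as the final option.
        options = [self.correct] + list(self.distractors)
        rng.shuffle(options)
        options.append("I don't know.")
        return self.stem, options

# The improved example from above: every distractor is a French city,
# so elimination-by-absurdity gets the guesser nowhere.
item = Item(
    objective="Identify the capital of France",
    stem="The capital of France is:",
    correct="Paris",
    distractors=["Cannes", "Versailles"],
)

# Different seeds give the two forms different answer orders.
for form, seed in (("pretest", 1), ("posttest", 2)):
    stem, options = item.render(random.Random(seed))
    print(form + ":", stem, options)

Nothing about the code itself is sacred; the point is that the structure forces you to name the objective and confine yourself to a single right answer.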

  5. Too Few/Too Many Questions. I have found the sweet spot for the number of test questions is 20-25 (frankly, I seldom go over or under 20 questions). So assuming you have a course with five objectives (and what the hell is wrong with you if you have more than five?) and you ask four or five questions on each objective, your test should have 20 to 25 questions. But there’s more than simple multiplication here: fewer than 20 questions produces a sample that is too small to make valid statistical inferences, and more than 25 becomes unwieldy, taking too long to complete and score. (A quick blueprint check appears in the sketch after this list.)
  6. Lack of Test Validation. There are scientific methods for assessing the validity of tests, but you don’t have to go to that extreme to ensure that your tests are valid instruments. There is a simple test of test validity that I use: first, give the test to someone who doesn’t know the material (ideally someone who is good at taking tests). If that person is able to earn a passing score, then the test is too easy. Next, give the test (assuming the first person didn’t pass) to a subject matter expert; if the expert can’t get at least 90%, then the test is probably too difficult, or just poorly written (believe it or not, sometimes one or more of our distractors might be technically correct because of the way we worded it).
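To make those last two points concrete, here’s a back-of-the-napkin Python sketch of the blueprint count and the naive-taker/expert validity screen (it reuses the Item class from the earlier sketch). The function names are arbitrary, and the 80% passing threshold is just a placeholder; substitute whatever passing score your program actually uses.

# Count questions per objective and in total against the targets above.
def check_blueprint(items, objectives):
    counts = {obj: 0 for obj in objectives}
    for item in items:
        # A KeyError here means a question matches no objective (see point 1).
        counts[item.objective] += 1
    for obj, n in counts.items():
        if not 4 <= n <= 5:
            print(f"objective '{obj}' has {n} questions (want 4-5)")
    if not 20 <= len(items) <= 25:
        print(f"{len(items)} questions in total (want 20-25)")

# Scores are fractions between 0 and 1.
def screen_validity(naive_score, expert_score, passing=0.80):
    if naive_score >= passing:
        print("A test-savvy novice passed: the test is too easy.")
    elif expert_score < 0.90:
        print("The expert scored under 90%: too hard, or a distractor may be arguably correct.")
    else:
        print("Passes the quick validity screen.")

screen_validity(naive_score=0.55, expert_score=0.95)

This is no substitute for proper psychometric validation, but it catches the worst offenders before a single learner sits down.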

I know that this entry will largely fall on deaf ears (as I’ve said, I’ve met seasoned learning professionals who can’t write a decent test to save their lives), but if only one of you will throw away the tripe you’ve been using to ensure that workers have achieved their learning objectives relative to safety, I will be satisfied with my meager success in this area.

There is more….but this is enough.

