By Phil La Duke
I’ve taken a fair amount of flak about being behind on the latest and greatest in Safety theory. I’m not worried about getting a little flak from pompous, overblown theoreticians who pat me on the head and patronize me for being the poor, stupid author of an antiquated article filled with atavistic thinking. That’s fine, but keep it respectful or I will pour more vitriol and bile than you thought possible.
For starters, I have been a proponent, advocate, and user of predictive indicators since the late 1990s. But I have to tell you that I think the theoreticians are jumping the gun when they say that we can predict fatalities, or even injuries, at least without significant education in statistics and data analysis.
Predicting fatalities is a bit like predicting the weather; that is, difficult. The difficulty lies in the many variables that can influence the outcome, and many of those variables are unknown, making a precise prediction impossible.
There is a lot wrong with the average safety practitioner’s understanding of predictive analytics (but fortunately there are a host of consultants out there to “show them the way”). Let’s start with the difference between “prediction” and “foresight”. Prediction implies that one can, using statistical analysis, forecast a certain outcome with a high probability of accuracy. Statistical prediction is a recognized science and my point is not to belittle it; that having been said, it requires no small amount of expertise. Foresight means that, given a basic set of facts, a reasonable person can anticipate a likely result. Again, I am not belittling foresight, but even foresight requires no small amount of skill.
Using statistical analysis to “predict” fatalities is not quite as pat as it seems. For starters, many are using the word “predict” when they mean “foresee”. I can foresee that someone welding on a gas tank could cause an explosion. I don’t need anything more than a basic understanding of the relationship between gasoline fumes and explosions to be able to foresee an undesirable outcome.
Prediction is more likely to use data to produce a fairly vague prognosis; the data may show that enough variables exist that a fatality (or serious injury) will happen in a given area, and even the nature of the injury, but it’s difficult to say exactly when it will occur. There is also the problem of probability: each encounter has the same probability of causing an injury, so a lucky organization could have workers engaged in extremely risky behavior and never have an injury.
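To put a rough number on that “lucky organization” point, here is a minimal sketch. The per-exposure injury probability and the number of exposures are made-up illustrative figures, not real safety data:

```python
# Minimal sketch: how often pure luck produces a clean record.
# Both numbers below are assumed for illustration only.
p = 0.001            # assumed chance of injury per risky exposure
exposures = 500      # assumed number of risky exposures in a year

# Probability the organization sees zero injuries all year,
# treating each exposure as an independent trial:
p_clean_record = (1 - p) ** exposures
print(f"Chance of an injury-free year despite the risk: {p_clean_record:.1%}")
```

With these assumed numbers, roughly three out of five such organizations would go injury-free in a given year by chance alone, which is exactly why a clean record “predicts” very little on its own.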
In an article in EHS Today one of the leading proponents of predictive Safety offered four “truths” of predictive safety:
“#1: More inspections predict a safer worksite.” This is misleading, because it assumes that a) the inspections are effective in identifying the hazards that are most likely to cause an injury; and b) the inspections cover the entire workplace, i.e., they aren’t repeatedly conducted in the same places. It also assumes that all hazards create equal jeopardy, which we know is not true.
“Safety Truth #2: More inspectors, specifically more inspectors outside the safety function, predict a safer worksite.” Here again this is fraught with assumptions. It assumes that the additional inspectors are adept at finding hazards and are judicious in containing and correcting the hazards in a timely manner; this cannot be assumed. Furthermore, more inspectors don’t necessarily “predict” anything; rather, this statement flies in the face of sound statistical analysis. Where is the cause-and-effect link between more inspectors (who may or may not be able to identify hazards effectively) and a safer worksite? This “truth” relies only on quantitative data and ignores any and all qualitative data.
“Safety Truth #3: Too many “100 percent safe” inspections predict an unsafe worksite.” Again, there is no basis for prediction. There are many, MANY variables that could produce inspections that are “100 percent safe”. The author of this statement implies (and it is reasonable to infer) that the inspectors are either derelict in their duties or are missing hazards. The author may be right, but makes no allowance for the improbable scenario that all hazards have indeed been removed from the areas inspected.
“Safety Truth #4: Too many unsafe observations predict an unsafe worksite.” Here the author is mistaking foresight for predictability. This entire premise mistakes correlation for cause and effect and ignores the very real need for a sufficiently large sample size before any statistical inference can be made. Furthermore, it ignores the margin of error, the need for a normal distribution, and statistical outliers.
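The sample-size point is easy to demonstrate: with only a handful of observations, strong correlations routinely appear between completely unrelated series. A minimal sketch using purely random data (no real safety measurements are involved):

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Count how often two unrelated 5-point samples correlate "strongly".
trials, strong = 2000, 0
for _ in range(trials):
    xs = [random.random() for _ in range(5)]
    ys = [random.random() for _ in range(5)]
    if abs(pearson(xs, ys)) > 0.8:
        strong += 1

print(f"|r| > 0.8 in {strong / trials:.0%} of trials with n=5")
```

Even though the two samples are independent by construction, a nontrivial fraction of five-observation trials shows a “strong” correlation, which is why small observation counts cannot support statistical inference.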
They are on the right track, but too many people moving to “predictive analysis” don’t understand the differences between foresight and prediction, correlation and cause, and science and snake oil—the bottle has changed but the poison is the same.