Phil La Duke's Blog

Fresh perspectives on safety and Performance Improvement

Maybe You REALLY Can’t Fix Stupid


By Phil La Duke

In a recent entry on the Fuel Fix blog (http://fuelfix.com/blog/2014/05/12/human-errors-account-for-80-of-offshore-accidents-exec-says/), Oil & Gas executives were quoted as saying that 80% of offshore accidents are caused by human error.
According to the article, Jim Raney, director of engineering and technology at Anadarko, was addressing the Ocean Energy Safety Institute at the University of Houston when he said, “You can’t fix stupid…what’s the answer? A culture of safety. It has to be through leadership and supported through procedures — a safety management system.” I’m careful not to use the stupid brush to tar too many people in worker safety. Are there stupid people out there working? I think it’s safe to say yes. But can we blame 80% of worker injuries on stupidity? I don’t think so, at least not among the rank and file. Let’s face it, if 80% of your injuries are because of human error, as the article later suggests, you have some big issues, and I would be careful about whom I went around calling stupid.
Even Smart People Make Mistakes
I’m not going to beat up on Jim Raney. My guess is that at his level he isn’t doing the incident investigations personally, and therefore he is being fed conclusions by his safety practitioners that lead him to believe that the vast majority of the incidents happen because he has a bunch of idiots working for him. But stupidity is not the same as making a mistake, and while everyone makes mistakes (it’s a biological imperative) no one should have to die because of it. If there is stupidity in this process it lies with the person who designed it; he or she either refused to believe that people make mistakes or knew people would invariably make mistakes but refused to protect those who did. Stupid? It’s damned near depraved indifference and gross negligence.
Dispelling the “Operator Error” Myth
For years I taught problem-solving courses as part of lean implementations. For generations engineers (the folks typically charged with finding out what caused a quality defect) would ultimately conclude that someone screwed up; the report would conclude that “operator error” was the proximate and root cause. The problem was that the engineer never asked why the operator screwed up. I’ve written reams on performance inhibitors (worker fatigue, stress, distraction, drug use, and the like) that can cause even the smartest people to make mistakes, so I won’t revisit them now. But I wonder how many of the people behind that 80% figure had been working long hours on offshore rigs without a day off or with inadequate sleep? Keep anyone up for days on end working 16+ hour shifts in the elements and even the brightest among them will seem like a drooling idiot. Simply denouncing the people as stupid and then doing nothing about the system issue will not create a culture of safety; it will create a culture of stupidity. If I can go off on one of my well-celebrated tangents for a minute, why are Oil & Gas companies hiring so many stupid people? While you may not be able to fix stupid, you don’t have to hire it; you don’t have to seek out the dumbest in society and offer them a job.
Injuries Are Seldom Caused By a Single Root Cause
A part of the problem-solving training that I taught for many years dealt with selecting the right tool from the toolbox. Traditional root cause analysis, repetitive whys, and similar tools are designed for solving problems of a specific structure and a sudden occurrence, that is to say, issues that develop rapidly and happen in response to a single cause. Situation analysis, fishbone analysis, and other tools are better used for problems of a general structure and a gradual occurrence, in other words, incidents that are the product of multiple, interrelated elements. In these types of incidents, many factors have to be present to cause an injury, and it is only after a threshold is reached that we see a process failure. In my experience, injuries tend to be the product of multiple factors that contribute to the incident. As long as we continue to use inappropriate tools to find the cause of injuries we will continue to mask hazards instead of removing them. The fact that Oil & Gas executives are concluding that 80% of workers’ injuries are caused by “human error” leads me to question the methodology used to identify injury causes. Yes, people make mistakes, but if those mistakes are leading to injury you have more at play than stupid people; you also have a process that hurts people when they make mistakes.
Protect the Stupid
We may not all be stupid, but we all do stupid things from time to time—we make poor choices, take unreasonable risks, allow distraction, fatigue, or other factors to impair our performance, or generally act in a way at odds with our safety. Some seem to forget that not all safety is about prevention; probability of interaction is only PART of the formula, and the other key component is reduction of severity. Engineers use this formula when deciding which of the hierarchy of controls to apply to everything from the machines we use in the workplace to the consumer goods we use every day. If the probability of interaction is high (people will almost certainly interact with the hazard) but the severity is low (most of the people who interact with the hazard won’t be seriously injured), they will generally slap a “no-kidding?” warning label on it. But if the probability of interaction is low and the severity is lethal, they will take greater measures to protect people. I don’t believe that 80% of the Oil & Gas injuries are the fault of stupid people making mistakes; frankly it sounds suspiciously close to Heinrich’s Pyramid. But if the processes used in Oil & Gas are so fragile that human error is going to result in injury, the safety practitioners had better take bold initiatives to make these processes safer.
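For readers who like to see the arithmetic, here is a minimal sketch of how that probability-times-severity thinking can be mapped to the hierarchy of controls. This is my own illustration, not anything from the Fuel Fix article; the rating scales, cutoff scores, and control categories are invented for the example.

```python
# Illustration only: a crude probability-times-severity risk score mapped to a
# control tier. The scales and cutoffs below are invented for this sketch.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"first_aid": 1, "recordable": 2, "lost_time": 3, "fatal": 4}

def suggested_control(probability: str, severity: str) -> str:
    """Pick a control tier from probability of interaction and severity of outcome."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    # A lethal outcome demands a strong control even when interaction is unlikely,
    # which is the point made in the paragraph above.
    if severity == "fatal" or score >= 9:
        return "eliminate, substitute, or engineer the hazard out"
    if score >= 6:
        return "administrative controls (procedures, training)"
    return "warning label / PPE"

print(suggested_control("almost_certain", "first_aid"))  # high interaction, low harm -> warning label / PPE
print(suggested_control("rare", "fatal"))                # low interaction, lethal -> engineer it out
```

The particular numbers don’t matter; the point is that severity has to weigh at least as heavily as probability when deciding how far up the hierarchy of controls to go.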
They Have the Answer; They Just Don’t Know It
The last part of Raney’s statement, “It has to be through leadership and supported through procedures — a safety management system” is right on. Unfortunately, organizations can’t achieve a sustainable safety management system that is built on the belief that you can’t fix stupid. Leadership has to drive good decision making and has to reward and encourage worker engagement based on respect; and describing workers as “stupid” is far from respectful.

Filed under: culture change, Injury reporting, Just Culture, Phil La Duke, Safety Culture, Worker Safety

Has The Battle Against Distracted Driving Gone Too Far?


 

Photo courtesy of http://www.navideck.com/

By Phil La Duke

In the United States April is National Distracted Driving Awareness Month, so you can look forward to a barrage of earnest and well-intentioned campaigns to ensure that drivers are aware of the dangers of distracted driving.  Is distracted driving an issue? You bet.  The ubiquitous nature of cell phones[1] and smart devices—not to mention GPS systems, car radios, and myriad other sources of distraction—in use today makes the danger of a traffic accident much greater than it has been in the past.  According to www.distraction.gov, “An estimated 421,000 people were injured in motor vehicle crashes involving a distracted driver, this was a nine percent increase from the estimated 387,000 people injured in 2011.”  The problem is compounded by some of the other statistics from the www.distraction.gov website:

  • 10% of all drivers under the age of 20 involved in fatal crashes were reported as distracted at the time of the crash. This age group has the largest proportion of drivers who were distracted.
  • Drivers in their 20s make up 27 percent of the distracted drivers in fatal crashes. (NHTSA)
  • At any given daylight moment across America, approximately 660,000 drivers are using cell phones or manipulating electronic devices while driving, a number that has held steady since 2010. (NOPUS)
  • Engaging in visual-manual subtasks (such as reaching for a phone, dialing and texting) associated with the use of hand-held phones and other portable devices increased the risk of getting into a crash by three times. (VTTI)
  • Five seconds is the average time your eyes are off the road while texting. When traveling at 55 mph, that’s enough time to cover the length of a football field blindfolded. (2009, VTTI) (A quick arithmetic check of this figure appears after the list.)
  • Headset cell phone use is not substantially safer than hand-held use. (VTTI)
  • A quarter of teens respond to a text message once or more every time they drive. 20 percent of teens and 10 percent of parents admit that they have extended, multi-message text conversations while driving. (UMTRI)
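That football-field figure checks out with simple arithmetic; the back-of-the-envelope calculation below is my own and is not part of the NHTSA/VTTI material.

```python
# Distance covered in 5 seconds at 55 mph, compared to a football field.
mph = 55
seconds_eyes_off_road = 5

feet_per_second = mph * 5280 / 3600                  # about 80.7 ft/s
distance_ft = feet_per_second * seconds_eyes_off_road

print(f"{distance_ft:.0f} feet traveled")            # about 403 feet
print(f"{distance_ft / 360:.2f} football fields (120 yd including end zones)")
```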

Clearly some of these statistics are misleading, especially the ones involving teens.  When we read that teens are involved in the most accidents while distracted, it can lead us to believe that the problem is those damned irresponsible teenagers.  The fact is that texting is a relatively new communication vehicle that is disproportionately used by young people.  As those people mature, they don’t necessarily abandon the practice; rather, young people become a smaller percentage of those who use texting to communicate.  Also, while headset cellphone use is not substantially safer than a hand-held device, that is only true during the conversation itself; a hands-free device is significantly safer when placing or receiving a call.  But all of this aside, the response from safety pundits seems to be: don’t do anything in the car except drive (I’ve even seen an ominous statistic about the dangers of having a conversation with a passenger while driving).  This works on paper. Oh hell, who am I kidding; this is a stupid idea even on paper.  First of all, none of us are going to do this. Imagine the car ride where you ignore everything except the tasks required to drive.  You sit stone-faced while you and your passengers keep a solemn silence and you do nothing but scan the road, check your mirrors, and keep your hands at the ten and two position.

Some Distraction is Actually Valuable

Way back in college, when I was studying adult education, they taught us about how the mind works.  As you can imagine, classroom distraction can seriously disrupt the learning experience.  Now it’s been a long time since I was in college, but at the time experts calculated the attention span of the average American at something like two and a half minutes. [2]  The thinking is that our brains take in information for about two minutes and then spend about 30 seconds processing it.  At the end of a cycle we are most easily distracted because the brain is actively seeking out new information.  This cycle continues for about 10 minutes before—unless interrupted—the brain starts to fatigue. In other words, if we concentrate too intently for too long we start to stress ourselves.  Changing things every 10 minutes or so resets our brain and refreshes us.  After about four hours, however, even a proverbial change of scenery isn’t enough to keep us alert, and we quickly see diminishing returns; at about six hours we become fairly rubber-headed and incoherent.

I was thinking about this the other day as I was making a four-hour drive home from a client site.  My company has a strict “no cellphone use while driving” policy, and as a partner and leader I feel that I have to have a “no exceptions” standard of compliance for myself; if I can’t exhibit these behaviors myself, how then can I in good conscience hold others to this standard?  So there I am barreling along with the cruise control set (to ensure that I didn’t inadvertently creep up above the speed limit), listening to my iPod on auto shuffle so I don’t have to find another radio station or fiddle with selecting a song (I set it up to shuffle before leaving so I literally don’t have to touch or look at the device while driving).

Now this particular drive had me on a single expressway for all but the last 20 minutes, so I didn’t need directions, or a GPS, or even have to think about things like where my exit was or how far I was from my next turn.  Ostensibly this should have been the very safest driving experience (for most of my trip I was the only car on the road).

The lack of distraction meant that I soon started to feel very fatigued; I felt the beginnings of what they used to call “white-line fever,” where the hypnotic pattern of the dotted white lane markers made me feel drowsy and made it difficult to concentrate.  I was in a particularly desolate area where pulling over and resting for 15 minutes or so seemed not only stupid but potentially dangerous.  And even if it was the smart move, I wasn’t about to stop for fifteen minutes every hour and extend my already long car ride by an extra hour.  I did recognize the danger, however, and, drawing on my experience as a trainer, I minimized my risk by introducing…distractions.  First, I turned off the cruise control and began checking my speed periodically.  Next I began counting the number of deer I had seen on my trip home (13, in case you secretly wanted to know), and finally I would look at the mile markers and mentally calculate how long, at my current rate of speed, it would take me to get home. When I got to the next exit that had a gas station I got out and stretched my legs, filled up the tank (because gas was relatively cheap there), used the restroom, and stocked up on water and some snacks.

The result was that I was far less fatigued than I had been while driving in a distraction-free environment.  I was no longer on autopilot, and I believe I was safer because of the mild distraction.

For safety pundits to advocate that people drive without any distraction is the same old time-tested imbecility with which most safety professionals attack an emerging threat, that is, prohibition.  Prohibition is a dangerous and stupid approach to distracted driving.  Instead of telling people not to be distracted (which is like telling people to be taller) we need to encourage people to manage distractions.  After all, distraction in and of itself is not dangerous; rather, prolonged distraction is the problem. In fact, when we examine the examples of so-called distractions we’re really not talking about distractions at all; we are talking about changing the primary activity from driving to something else.  www.Distraction.gov offers these examples:

  • Texting
  • Using a cell phone or smartphone
  • Eating and drinking
  • Talking to passengers
  • Grooming
  • Reading, including maps
  • Using a navigation system
  • Watching a video
  • Adjusting a radio, CD player, or MP3 player

Clearly texting is dangerous: the average text takes about 15 seconds, and let’s face it, it is exceedingly rare that one sends or receives just one text, so the time spent with one’s eyes not on the road is likely best measured in minutes, not seconds.  But what about talking to passengers? This has been around since the invention of the automobile and, until the distraction hysteria, had never been taken seriously as a cause of a significant number of traffic accidents.  In fact, how many times have you had a passenger interrupt the conversation by alerting the driver to a hazard? Two pairs of eyes on the road are safer than one. Using a hands-free navigation system is clearly safer than reading a map, or cutting across three lanes of traffic so that you don’t miss an exit, or the not insignificant distraction of being lost and not knowing how to get back on track.

What’s the difference between prohibiting distraction and managing it? Scope.  Whenever any activity replaces driving (or working at heights, or operating machinery, or assembling a widget, or operating a crane) as the primary activity, we endanger safety.  Simply telling people NOT to do anything else except… hasn’t worked since the dawn of time (it only drives the prohibited behavior underground and does nothing to protect people), so we need to help people learn to manage distraction instead.  Clearly some of these behaviors (texting, reading emails, answering emails, reading a book) are just plain reckless, while others (having a conversation, eating, etc.) represent mild risks that, if managed properly, can actually reduce driver fatigue and make the roadways safer.

Beyond this, however, is an underlying cause: the privatization of driver’s education.  Drivers are far less prepared, in my opinion, to acquire good, safe driving habits and skills when they learn to drive from a place I wouldn’t trust to sell me a lawn mower rather than from our public schools.  We need to invest in driver training and do a better job of enforcing the laws on the books, and worry less about telling people not to drive while distracted; that is just another way of telling people to be more careful, and it won’t do anything but make us feel like we are doing something when we are not.

 

 

[1] According to the Pew Research Center, 91% of adults now own cellphones (I have to guess that this is in the United States, since the research wasn’t clear, but I know some estimate that worldwide there are more cellphones/smart devices than people on the planet; a claim I find dubious, but the fact that credible people are making it speaks to my point nonetheless).

[2] Surprisingly, this number wasn’t markedly lower than in other parts of the world, and it seems to be the way the human brain was designed; a physiological rather than a cultural phenomenon.

Filed under: Safety, Worker Safety

Six Simple Ways to Change Your Life


by Phil La Duke

Years ago I worked in talent development for one of the largest faith-based healthcare systems in the United States. I left it to pursue other career goals, but it never left me, at least not completely. The system was founded when two religious orders merged after discovering that the youngest member of either order was 78 years old. They came together to preserve a way of life that had existed for over 500 years. Sure, it ran hospitals, but more important was the spiritual community it had created. Faced with extinction, it set about an elaborate plan for turning its legacy over to the laity. I always took that very seriously. For me it wasn’t about organizational development or training, although those were certainly a big part of my job; rather, it was about preserving a way of life.
Some time ago I shared the podium at the Canadian Society of Safety Engineers with an anthropologist and National Geographic photographer who talked about cultural extinction (which, interestingly enough, he attributed to the growth of the written word). According to him, cultures are going extinct at a far faster rate than animals; it’s scary really, thousands of years of knowledge lost as cultures die daily. I was determined that I would do everything in my power to save this one culture that had been entrusted to me.
I wasn’t the only one so entrusted; there were scores of professionals whose primary jobs were to preserve the mission, culture, and vision of the consolidated order. One of the tools they had for preserving the culture was the Guiding Behaviors (note to the grammar vigilantes: I know this sounds like number disagreement, but the Guiding Behaviors is considered one tool). As I reflected this morning, as I do every morning, on these behaviors, it occurred to me that they would serve safety professionals as much as anyone else. I have changed the wording of some of them to make them less specific to healthcare, but I doubt the surviving members of the orders will mind too much.

“We support each other in service”
The first of the behaviors is “we support each other in service.” What better way for a safety professional to sum up his or her job? We don’t really save lives—not the way doctors or nurses do anyway—but we can always support people in making better decisions and, while not directly saving lives, influence people to save their own lives or the lives of a coworker.

“We communicate openly and honestly, respectfully, and directly”
I’ve written volumes about the importance of open and honest communication. I still believe that the only path for safety professionals to get respect is by truly respecting the people and organizations they serve. It’s disappointing how many safety professionals disparage the people they are charged with protecting. People who feel respected tend to respond respectfully. We must always strive not only to be truthful, but to be truly honest, and not just with the people we serve but with ourselves as well. And let us never confuse hurtful speech with honesty. Before speaking we should ask ourselves: is what I want to say true? Is it helpful? Is it intended to help someone or merely to make ourselves feel better? And finally, is it necessary? If all of these things aren’t true then maybe we should just keep it to ourselves.

“We are fully present”
Perhaps the behavior I struggle with the most is “we are fully present”. Being fully present means that you keep your mind on the job—no multitasking, no distractions, no dreaming about the weekend. While it’s easy to see how staying fully present on the job would greatly benefit most workers—distraction on the job can be deadly—we also need to be fully present as safety professionals. This means really participating in meetings and really listening (not just waiting to talk) and working with others to accomplish things. Keeping your head in the game every minute of every day is really tough and if you try to do it you will come home exhausted.
“We are all accountable”
“We are all accountable” means more than holding others accountable, although that is certainly a part of it. We also must strive to hold ourselves accountable. Each day we must ask ourselves if we earned our pay. Did we make a positive impact in people’s lives, not just in the context of safety, but did we make the workplace (and the world) a more pleasant place? Did we really bring our “A” game or did we merely phone it in? We must also remember that we have a duty to be just in holding others accountable. We do not stand in judgment above those we serve, but we owe it to the organization and to the entire population to hold people answerable—both positively and negatively.
“We trust and assume goodness in intentions”
People screw with our work, our day, and our heads on a daily basis. But trusting and assuming goodness in intentions has taught me one of the most powerful lessons of my life: we screw with our own work, our own day, and our own heads far more often than anyone else ever could. They say that forgiveness is a gift we give ourselves, and it begins by never taking the slight in the first place. Instead of assuming that the Operations leadership is throwing us under the bus we should ask the person some questions. Most often we will find that the person meant us no harm and was probably completely unaware of the issues he or she was creating for us. Assuming goodness in intentions brings a person real peace and strengthens relationships. There is a saying that if you keep meeting jerks all day long, the jerk is you. I say that if you assume goodness of intention in all you meet you will live in a world like you could never imagine. Send out good stimuli and you will receive good responses.
“We are continuous learners”
Too often we strive to teach. We are, after all, the experts in safety, and what good is that expertise unless we share it with the organization? We get sad and frustrated when people don’t want to listen to what we have to say. But when we are continuous learners, when we focus not on what we can teach others but on what we can learn from them, we find that we end up teaching others so much more of value than if we were to just spout facts at them. Continuous learning involves a lot of introspection—we have to examine our mistakes and try hard to understand why things went wrong and what we can do to fix them.
The World Loves a Hypocrite
While I try to live by these six simple statements I don’t always succeed; in fact I fail a lot. But the beauty of these guiding behaviors is that they are things to which I aspire. So now I charge you to share these aspirations with me. Try doing these six things for a week. You may fail, but remember that in some cases success comes not in the outcome but in the attempt.

Filed under: Behavior Based Safety, Hazard Management, Just Culture, Performance Improvement, Phil La Duke, Worker Safety

The Madness Of Measuring Nothing



By Phil La Duke

These days organizations live and die by measurements. It seems that no matter where we work we are confronted with the dreaded balanced scorecard, and so we are tasked with measuring “safety.” I’ve said for a long time that “the absence of injuries does not denote the presence of safety,” and zero injuries doesn’t tell us a heck of a lot about the risk of injuries within a given population.

The traditional measures of safety, i.e. injuries or days away or restricted time, don’t help us predict the likelihood of future performance, and yet that is the best we seem able to come up with. We play at leading indicators like near miss reporting (as somehow indicating the level of participation of workers in safety) but even these measurements are fraught with statistical noise that can lead us to conclusions not in evidence; so many of our indicators mislead us that one has to wonder if there is any value in them at all.

Recently I was asked to address a meeting of the senior leaders of a multi-national manufacturer, and I was asked what some good predictive measures for safety would be. I was pressed for time and I’m afraid I didn’t have the luxury of a prolonged discussion on metrics (my topic was Creating A Culture Of Safety Excellence). So given the fervor set in motion by my last three posts, I thought I would add a bit of metaphorical fuel to the fire and lay out for professional debate what I see as some good ways of correlating business measures to future performance in safety.

Risk Factor #1: Worker Stress and Distraction

Worker stress has a profound impact not only on human error, but on risk taking and workers’ health as well.  Highly stressed workers are distracted, and distraction leads to mistakes, which lead to injuries.  Some measures that I think directly correlate to worker stress are:

  • Worker absenteeism.  Absenteeism rates are indicators of both worker stress and worker competence. Research has shown that stressed workers tend to miss more work, and when a worker misses work, his or her job is done by someone less skilled, less practiced at the job, and therefore more likely to deviate from the standard.  In other words, the worker stuck doing the job is at greater risk of injury than the worker whose muscle memory is completing many of the tasks by rote.  Of course this isn’t universally the case, but it is true often enough to correlate, and when it comes to prediction, correlation is the best we’ve got.
  • Number of calls to employee assistance programs.  When we talk about worker distraction, we tend to think in terms of distractions borne in the workplace.  Workers who are worried about financial problems, divorce, or other “off-hours” problems while working face the same dangers as those distracted by work issues.  The number of calls to EAP lines can provide a good idea of how much distraction is in the workplace which correlates to human error, behavioral drift, lapses in judgement and ultimately  workplace injuries.
  • Worker turnover.  Employee turnover creates risk in much the same way absenteeism does: it introduces greater variation into our work processes, which in turn increases the risk of injuries.  The greater the worker turnover rate, the higher the risk of injuries as newer, less competent and skilled workers replace higher performing, more experienced workers.
  • Engagement survey scores. Engaged workers tend to do things because these things are the right thing to do.  The lower the level of employee engagement the higher the risk of worker injuries.

In all these cases we have to remember that we seldom have a perfect correlation (a case where every time factor A is true, factor B is also true), and even in those rare cases where there is a perfect correlation, such a condition does not mean that there is a cause-and-effect relationship between the two factors.  But since we are looking at the measurement’s predictive value, there is always a margin for error, statistical anomalies, and statistical outliers.  If we had a perfect way of predicting exactly where and when an injury would occur we would be using it.
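To make the point about correlation concrete, here is a minimal sketch of how one of these measures might be checked against injury history. The monthly figures are invented for the example, and plain Pearson correlation is just one reasonable choice of statistic.

```python
# Illustration only: correlating a leading indicator (monthly absenteeism rate)
# with recordable injuries. The numbers are invented; real data would come
# from HR and incident-tracking systems.
from math import sqrt

absenteeism_pct = [2.1, 2.4, 3.0, 2.8, 3.6, 4.1, 3.9, 4.5, 5.0, 4.8, 5.5, 6.0]
injuries        = [1,   1,   2,   1,   2,   3,   2,   3,   4,   3,   4,   5]

def pearson(x, y):
    """Plain Pearson correlation coefficient, no libraries required."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(absenteeism_pct, injuries)
print(f"correlation between absenteeism and injuries: r = {r:.2f}")
# A strong positive r suggests predictive value; it does NOT prove that
# absenteeism causes injuries, only that the two tend to move together.
```

A consistently strong coefficient across several such pairings, not a single month of data, is what would justify treating a measure as predictive.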

Risk Factor #2: Worker Incompetence

When we talk about worker incompetence, we’re not talking about the nincompoop  who doesn’t seem able to do even the most rudimentary task without screwing things up, rather, we are talking about the skill level at which a worker is able to perform his or her job.

There is a strong correlation between the level of mastery at which a worker performs the tasks associated with his or her job and the risk of injuries.  To that end these measurements are appropriate and predictive:

  • Required training % complete. Assuming that we require training because it is necessary to do one’s job, the lack of this training would indicate process variability.  Tracking the percentage of training provides us with a glimpse of how much risk a worker faces of being injured because he or she performed a task improperly.  The greater the percentage of people who have completed training the lower the risk of injury because of a gap in essential skills.
  • % of licenses and certificates expired. Just as the percentage of required training that is complete provides us with an understanding of approximately how many people are likely working out of process (it’s tough to do the job right simply by guessing), so too does the percentage of workers who are working despite having expired licenses and certificates.
  • Time to complete required training.  The longer it takes to complete required training, the longer a worker is exposed to the workplace risk associated with a skills gap.
  • Worker performance appraisal scores. This particular measure is tricky—it assumes that the worker appraisals are fair assessments of the worker’s ability to accurately complete tasks and do the job. Assuming that there is a robust worker performance appraisal process, lower-scoring individuals should be at greater risk than those who are performing at higher levels.

Risk Factor #3: Leader Incompetence.

Workers generally perform in ways for which they are rewarded and eschew behaviors for which they are punished. Low-performing leaders often exacerbate safety issues by behaving inappropriately in their interactions with workers. Some measures that I think directly correlate to leadership competency are:

  • 360 Reviews. 360 Reviews, that is, reviews where a leader’s team members, boss, and peers all contribute to the review, are often excellent indicators of how well a leader interacts with his or her team. The weaker the leader the higher the risk of process variation and hence a rise in the risk of injuries.
  • Leader performance review.  Leaders who perform poorly generally allow more variation into the work area; the higher the performance of the leader, the less likely workers will be harmed on his or her watch. It’s important to note that the leader’s performance review will most likely include things like the productivity of his or her team and general performance in areas like cost, quality, and efficiency; in other words, things that will either directly or indirectly impact the risk of injuries.
  • Worker morale.  Of course worker morale can be affected by a host of things unrelated to the leader, but it is heavily influenced by the performance of the leader.  Workers suffering from poor morale generally perform at lower levels that fall outside the process’s control limits.  The worse the morale, the higher the risk of variation and ultimately injuries.
  • % of safety reviews completed on time. I am not a fan of “behavioral observations”; I’ve always felt the time spent watching someone work could be better spent taking a more holistic view of worker safety by reviewing the risk conditions (procedural, physical, or behavioral).  That having been said, it is important that leaders conduct routine and repeated inspections of the workplace to identify hazards.  The percentage of safety reviews/tours/inspections/observations completed on time is, at least ostensibly, an indicator of how long workers are exposed to hazards.
  • % of performance reviews completed on time.  Completing performance reviews on time isn’t just about making employees feel good; it is also about assessing competency.  The more reviews that are completed on time, the more skills and performance gaps are identified in a timely manner.
  • % Attendance at safety meetings.  The percentage of safety meetings that a leader attends provides good insight into the level of priority the leader places on safety.

Risk Factor #4: Process Capability

Process variability creates risk: to the product, to the equipment, and to the workers.  The frequency and duration of nonstandard or out-of-process work is a good predictive indicator of the risk of injury.  Good measures of process capability (relative to safety) are:

  • % of nonstandard work.  Statistically speaking nonstandard work tends to be more dangerous and the injuries associated with nonstandard work tend to be more lethal than its standard counterpart.  The percentage of work that is nonstandard can indicate a substantial bump in risk associated with any operation.
  • % of jobs with completed JSAs.  A complete and current Job Safety Analysis (JSA) is crucial for the safe execution of work, yet I don’t know any company that has JSAs for 100% of its jobs, and many companies don’t have a good track record of keeping their JSAs current with the standard operating procedure. Understanding what percentage of your tasks have good and current JSAs is a good predictor of future risk (the higher the percentage, the lower the risk).
  • % of jobs with Standard Work Instructions. Personally, I prefer Standard Work Instructions (SWI) to JSAs (a good SWI should address all the safety concerns of a job), but SWIs suffer from the same problems that I discussed regarding JSAs above.
  • % behind in production.  I still have nightmares about my days working an assembly line and falling “in the hole”; screams of “man in the hole” booming above the cacophony of hand tools, presses, and industrial vehicles still give me chills.  Whenever workers are struggling to catch up because they are behind in production, the risk of injuries rises.
  • % parts shortages.  When there are part shortages (or tools shortages, or materials shortages, or labor shortages for those of you who work outside manufacturing) workers are forced to work outside the standard process.  This is incredibly dangerous because the standard process is designed with protections against injuries embedded in the tasks.  When a worker is working outside the process the organization is relying on luck to protect them.

Risk Factor #5: Worker Engagement In Safety

We’ve discussed worker engagement in a broad sense, but I think it is important enough to look at worker engagement specific to safety.  Engaged workers will work safely for no more reward than the knowledge that working safely is the right thing to do.  Worker engagement in safety can be measured by:

  • Number of reported near misses.  Some will argue, correctly, that near misses are lagging indicators, but whether or not a worker chooses to report a near miss correlates to the level of worker engagement in the safety process. This measurement, admittedly, is difficult to get accurately.  Since we don’t know the total actual number of near misses, we can’t say with certainty whether the current level of reporting is a high or low percentage.  Even so, the number of workers who report, even more so than the raw number of near misses, can provide a good glimpse into the level of importance workers place on safety.
  • Number of improvement suggestions.  Workers who take an interest in improving the organization are generally interested in finding and eliminating failure modes, including those failure modes that will ultimately place workers at risk of injury.  The greater the number of suggestions, the lower the risk.
  • Participation in continuous improvement workshops.  Eliminating variation, risk, and hazards is part and parcel of the continuous improvement process, so it should surprise no one that the level of participation in these activities correlates to the level of risk.
  • Number of worker grievances.  Worker grievances shed valuable light on many of the other risk factors identified here, and generally the greater the number of grievances, the higher the risk of injuries.
  • Number of disciplinary actions for safety violations. The number of disciplinary actions for safety violations is indicative of two things: the number of unsafe acts being committed and the extent to which these incidents are taken seriously.

Of course one has to be careful in designing and managing these measurements to avoid unintended consequences (for example, one could easily reduce the number of disciplinary actions by not applying appropriate discipline, or one could raise worker performance evaluations simply through “score inflation”), but the risk of these unintended consequences can be reduced by solid management practices and random sampling audits.

The Imperfection Of Predictive Measures

To some extent we can never have a perfect set of measures.  In many ways it’s like predicting the weather; since we are talking about probability there is always a chance that the organization will beat the odds.  In fact, there isn’t one of these measures against which I couldn’t construct a convincing argument.  What’s important is to use the measures that make sense and to use them in conjunction with one another.  One correlation does not a pattern make, but when we look at multiple areas of risk and analyze them in a holistic context we can find a more useful way to measure safety than counting bodies and broken bones.
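As one way of picturing what “using them in conjunction with one another” might look like, here is a minimal sketch of a weighted composite risk index built from the risk factors above. The weights, the 0-to-1 normalization, and the example scores are all invented; a real index would have to be calibrated against an organization’s own history.

```python
# Illustration only: rolling several of the risk factors above into one
# composite index. Each input is normalized to 0-1 (1 = worst observed),
# and the weights are invented for this sketch.

WEIGHTS = {
    "stress_and_distraction": 0.25,   # absenteeism, EAP calls, turnover, engagement
    "worker_competence":      0.20,   # training %, expired licenses and certificates
    "leader_competence":      0.20,   # 360s, reviews, safety meeting attendance
    "process_capability":     0.20,   # nonstandard work, JSAs/SWIs, shortages
    "engagement_in_safety":   0.15,   # near-miss reports, suggestions, grievances
}

def composite_risk(scores: dict) -> float:
    """Weighted average of normalized risk scores (0 = low risk, 1 = high risk)."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

site_a = {
    "stress_and_distraction": 0.7,
    "worker_competence":      0.4,
    "leader_competence":      0.5,
    "process_capability":     0.6,
    "engagement_in_safety":   0.3,
}

print(f"Site A composite risk: {composite_risk(site_a):.2f}")  # about 0.52 on a 0-1 scale
```

The value of an index like this is comparative, not absolute: tracked over time or across sites, it points to where risk is accumulating long before the injury counts do.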

 

 

Filed under: Worker Safety

You Can’t Test Safety Competency With Your Crappy Tests



by Phil La Duke

If you’re hoping to ensure that the people taking your safety training have learned the material, then you probably use a posttest (a test given at the end of the session), and if you wrote this test it probably sucks. I used to write tests for a living and I am continually disgusted by what passes for an evaluative instrument—even those that have been created by professional trainers. The problem stems from the fact that most of us grew up taking really poorly designed tests, and when tasked with creating a test of our own we tend to emulate what we know.

Is it a problem that our tests suck? Yes (and to those of you who think my use of the word “suck” is crude, in poor taste, or unprofessional, I say go straight to hell—when you start creating tests that don’t suck, I’ll clean up my act, until then…well, you get the picture). Using a poorly constructed test is worse than using no test at all because it takes time to build, complete, score, and record it while adding no real value.

I should point out that most of you who create truly excremental tests (and I have seen many college professors who fall into this category) think that your tests rock it (they don’t). So what exactly is wrong with these tests? I’m glad you asked.

  1. Questions that Don’t Match the Course Objectives. Each question should correspond to one (and only one) of your course objectives. You identified the things you wanted people to learn in your objectives, so asking questions about anything else is just noise. People do (and should) cue in on the topics in the course that relate to the objectives and tend to place a lower priority on the trivia (that which doesn’t match up to an objective).
  2. No Pretest. Pre- and posttests are a matched set. The pretest establishes baseline knowledge. If a person can pass the pretest without any instruction he or she doesn’t really need the training (and in mandated regulatory training you will find that this is often the case; unfortunately the law says we have to provide the training anyway). Pretests should be the exact same questions as the posttest (to ensure an apples-to-apples comparison between the learner’s skills and knowledge before and after the training). Pre- and posttest questions should be in a different order and should also mix the order of the distractors (a minimal sketch of this kind of shuffling and scoring appears after this list).
  3. True-or-False Questions. True-or-false questions are popular because ostensibly they’re easy to write. Unfortunately, good true-or-false questions are actually fairly difficult to construct. Even well-constructed true-or-false questions shouldn’t be used because while people believe that a person has a 50:50 shot at guessing correctly, experts tell us that the chance of guessing correctly is much higher (around 66% the last time I looked it up). The problem is that many true-or-false questions provide grammatical clues that allow the reader to guess correctly. These clues are usually in the form of absolutes (must, always, most, least, etc.), and even if they don’t tip off the reader, these types of questions tend to measure reading comprehension skills far more than the participants’ grasp of the material.
  4. Poorly-Written Multiple-Choice Questions. Some people smugly call multiple-choice questions “multiple-guess” questions. Do me a favor: next time someone tries to get cute by saying “multiple-guess,” crack them a good one in the mouth with the back of your hand; unless there are social consequences for our actions people will never learn manners. Multiple-choice questions are (along with matching or fill-in-the-blank) the best kind of questions to ask, provided you construct them correctly. When writing a multiple-choice question remember these tips:
    1. The key to effective multiple-choice questions lies in the distractors (the possible answers that aren’t correct). Eighty percent of poorly written multiple-choice questions have really, REALLY bad distractors that allow the person completing the test to use the process of elimination to arrive at the correct answer. That works something like this:

The capital of France is:

a) North Dakota

b) In Spain

c) Paris

d) All of the above

These distractors are horrible because, a) North Dakota is impossible since a U.S. State cannot be the capital city of a European country, b) is similarly absurd because the capital of France is not likely to be in Spain, and d) is absolutely wrong because North Dakota is not in Spain. (Note: never use distractors like all of the above, none of the above, or a) and c). A multiple-choice question should have only one correct answer). Once we eliminate all the stupid distractors we are left only with Paris. A better question is:

The capital of France is:

a) Cannes

b) Versailles

c) Paris

d) I don’t know.

You may be put off by the distractor, d) I don’t know, but this is a key to writing a good multiple-choice question. People will tend to guess anyway, but it gives them an out, and you will occasionally be pleasantly surprised by the person who bravely and honestly answers “I don’t know.” The added benefit of the “I don’t know” option is that it allows the instructor to spend more time with participants who clearly aren’t achieving a learning objective.

  5. Too Few/Many Questions. I have found the sweet spot for the number of test questions is 20-25 (frankly, I seldom go far over or under 20 questions). So assuming you have a course with five objectives (and what the hell is wrong with you if you have more than five?) and you ask four or five questions on each objective, your test should have 20 to 25 questions. But there’s more than simple multiplication here: fewer than 20 questions produces a sample that is too small to make valid statistical inferences, and more than 25 becomes unwieldy, taking too long to complete and score.
  6. Lack of Test Validation. There are scientific methods for assessing the validity of tests, but you don’t have to go to that extreme to ensure that your tests are valid instruments. There is a simple test of test validity that I use: first, give the test to someone who doesn’t know the material (ideally someone who is good at taking tests). If that person is able to earn a passing score then the test is too easy. Next give the test (assuming the first person didn’t pass) to a subject matter expert; if the expert can’t get at least 90% then the test is probably too difficult, or just poorly written (believe it or not, sometimes one or more of our distractors might be technically correct because of the way we worded it).
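Because the matched pre/post comparison, the shuffling of question and distractor order, and the scoring are all mechanical, they are easy to automate. Here is a minimal sketch of how that might look; the question bank, stems, and answers are placeholders invented for the example, not a recommendation of any particular tool.

```python
# Illustration only: building a matched pre/post test from one question bank,
# shuffling question order and distractor order, and comparing scores.
import random

QUESTION_BANK = [
    # (objective, stem, correct answer, distractors)
    ("lockout", "Before servicing a press you must...", "apply your own lock",
     ["tell a coworker", "work quickly", "I don't know"]),
    ("ppe", "Cut-resistant gloves are required when...", "handling sheet steel",
     ["signing paperwork", "walking the aisle", "I don't know"]),
]

def build_test(bank, seed):
    """Same questions every time; question order and distractor order vary by seed."""
    rng = random.Random(seed)
    test = []
    for objective, stem, correct, distractors in rng.sample(bank, len(bank)):
        options = [correct] + list(distractors)
        rng.shuffle(options)
        test.append((objective, stem, options, correct))
    return test

def score(test, answers_by_stem):
    """Fraction correct, with the learner's answers keyed by question stem."""
    right = sum(1 for (_, stem, _, correct) in test if answers_by_stem.get(stem) == correct)
    return right / len(test)

pretest = build_test(QUESTION_BANK, seed=1)    # given before training
posttest = build_test(QUESTION_BANK, seed=2)   # same items, different order

pre_answers = {"Before servicing a press you must...": "tell a coworker",
               "Cut-resistant gloves are required when...": "I don't know"}
post_answers = {"Before servicing a press you must...": "apply your own lock",
                "Cut-resistant gloves are required when...": "handling sheet steel"}

print(f"pre: {score(pretest, pre_answers):.0%}  post: {score(posttest, post_answers):.0%}")
```

The gain from pretest to posttest, question by question and objective by objective, is the number that actually tells you whether the training taught anything.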

I know that this entry will largely fall on deaf ears (as I’ve said, I’ve met seasoned learning professionals who can’t write a decent test to save their lives), but if only one of you will throw away the tripe you’ve been using to ensure that workers have achieved their learning objectives relative to safety, I will be satisfied with my meager success in this area.

There is more….but this is enough.

Filed under: Training & Safety, Worker Safety

What Every Operations Leader Should Know About Safety


 


By Phil La Duke

Every day I hear another safety professional bemoan the fact that Operations (or leadership) doesn’t support safety.  It’s a tired bleat from whiners who should know that I would have no patience for it.  I generally turn the conversation around and ask flat out what they have done to educate operations leaders on safety, and they begin to drone on and on about incident rates and lost work days and whatever the latest fad in safety they happen to currently be enamored of. As safety professionals we have to drive these eunuchs from our chosen field with knotted cords and send them scampering like shocked money changers.

It seems that every month or so I get a wild hair up my small intestine and advocate throwing a beating into some poor schmoo who’s trying to make a buck.  Maybe that’s unfair, but who cares; I care not one whit about fair when someone is trying to make a buck by undermining the foundation of a profession that, for all its warts, is ostensibly about keeping people alive long enough to toil another day. So for those of you who are reading this in hopes of yet another vicious attack against the ugly brutes shilling snake oil, sorry; you will be disappointed, perhaps on several levels.

But then I digress.  The target of this week’s blog is the self-castrated safety professional who simpers and yelps about the grave injustice of being saddled with clueless Operations managers who just don’t get it when it comes to safety.  I freely accept that there are many Operations folks who don’t get safety, but why is that? We’ve made the topic of worker safety about as interesting as the farm report.  You want to shut down the conversation with the hyper-caffeinated goofball seated next to you on a plane? You don’t tell them you sell insurance, or that you’re a realtor (when did real estate agents decide that their chosen profession needed to be pronounced real-TORE instead of realter? Call it what you want, you’re still selling real estate; case closed). No, to strangle the conversation in its infancy you simply need to say, “I work in worker safety, what do YOU do?” The conversation will die quicker than if you said you enjoy watching snuff films.

Let us assume that you’re truly able to have a frank conversation with Operations management about worker safety. What would you say? What are the five things you would want every Operations leader to know about safety? First of all, you need to have this conversation if you hold out any hope of making things better, and some of you, I’m convinced, don’t want that. Many of you are content only to be malcontents, to be the pitiful victims who are underappreciated; those of you who work so hard and receive so little reward.

For my part, here are the five things that every Operations manager should know:

  1. Injuries Aren’t Unavoidable.  Generally speaking, there is a correlation between a tightly controlled process that has little variation and a safe workplace.  When people get hurt it’s obviously out of process, as your process (unless it was designed by the Marquis de Sade) wasn’t designed to deliberately injure workers. So if a leader strives to make sure that people work within process (including things like following safety processes and procedures), they will tend to have fewer injuries.
  2. Injuries Are Inefficient and Cost A Lot.  When people get hurt it shuts or slows everything down; everything, and not just at the time of the injury, sometimes for weeks or months afterward and far beyond the confines of the area in which the worker was hurt.  Depending on how gruesome the injury (or Heaven forbid a fatality), the witnesses may be forever shaken by what they’ve seen; some may not be able to return to work ever (and this isn’t me being melodramatic, I’ve seen strong men unable to cope—and therefore work—because they saw a friend pulped and mangled before his or her agonizing death on a dirty factory floor). Even those who didn’t witness the event first hand are shaken, and the macabre cacophony that travels through the organization like ball lightning is sometimes far worse in its imaginings of the scene than the bloody reality. It’s tough to give work your all when you wonder if you will be the next to shuffle off this mortal coil in the name of building widgets. Okay, so maybe I am being melodramatic, but what’s a bit of melodrama between us safety guys?  The inefficiency goes on and on through investigations internal, corporate, and criminal.  It takes a lot of time to kill or cripple a worker, given all the paperwork and associated loss of production, and time is, after all, money.  So when the final cost of the carnage hits the bottom line it hardly seems worth it.
  3. If It Looks Dangerous It Is; So Shut It Down. Too often people assume that because the boss (whether it be the team leader or the CEO) allows an activity it must at a minimum be “safe enough”.  In a lot of those cases the boss is counting on the worker to make a judgment call and to keep him/herself out of harm’s way.  So on it goes, with both parties counting on the other to prevent the accident that will kill the worker.
  4. Giving People Credit For “a Little Common Sense” Is Like Giving Them Credit For Having Super Powers.  We could argue whether or not common sense exists ad nauseam, and all that would come of it would be that eventually I would want to backhand you right in the mouth; probably more than once.  The bottom line is that whether or not you believe common sense exists to any great extent (it doesn’t), trusting it to keep people from doing something they never foresaw or intended (i.e. injuring themselves or others) is a pretty stupid way to run a business.
  5. Work Is Intrinsically Unsafe and the Only Way to Make It A Bit Safer Is to Stay Actively Involved. All jobs carry with them some risk of injury, so leaders have to be mindful of the risks endemic to a job and, yes, actively work to reduce those risks to the lowest practicable level.  We can pretend that people don’t commit errors, make bad decisions, take risks, behave recklessly, and generally do stupid things.  We can act as if we live in a utopia where machines don’t malfunction, tools don’t wear out, and equipment never fails.  We can do these things, but when we do, we do nothing to reduce the risks and we count on luck to protect people.  Lucky people win lotteries, date people way more attractive than any sense of justice would allow, and find hundred-dollar bills on the ground. LUCKY PEOPLE DON’T NARROWLY ESCAPE DYING ON THE JOB.

Are these the right five? Are there really ten? Fifty? A thousand? Maybe you have others you think they should know, but if you think they need to know about how hard your job is, how to calculate incident rates, or how to conduct a JSA, I would put it to you that you’re probably as dumb as the Operations leader thinks you are; maybe even more so.

Filed under: culture change, Just Culture, Phil La Duke, risk management, Safety, Safety Culture, Worker Safety

Legitimizing Risk


goldfish jumping out of the water

By Phil La Duke

Several days ago the United States celebrated the signing of the Declaration of Independence, the first step toward its becoming a sovereign nation.  It was an event marked in the state of Michigan by the irresponsible and dangerous use of fireworks by drunken amateurs with no training.  Michigan recently and shortsightedly repealed a decades-long ban on these types of explosives.  Michigan also recently rescinded a requirement that motorcyclists wear helmets while riding.  The repeal of these two important laws designed to protect people is just the most recent erosion of public support for safety.  I’ve written at length about the alarming shift in public opinion toward the belief that at home or at work we have gone overboard with safety, so I won’t repeat myself.  Instead, I thought I would focus on how legalization (whether in the traditional legal sense or in the relaxation of work rules) endorses and legitimizes unsafe practices—if something is allowed, most people assume that it’s safe.

In a similar vein, if we have rules and regulations to which we turn a blind eye, we are effectively sending the message that the rules don’t matter; that they don’t really protect us, they are just a means to keep order in the workplace.  This further reinforces the idea that disregarding a safety regulation isn’t really putting anyone in harm’s way. When people believe an activity is safe they are more likely to take risks while engaging in said activity.  If we believe that a shortcut puts us at less risk than it in fact does, we sharply increase both the probability (if you believe that the amount of interaction increases the probability) and (perhaps) the severity of the injury (if, for instance, emergency response equipment is not maintained, or if the requirements for drills are ignored).

My intent is not to wax political; in broad strokes I don’t care whether the law requires helmets or outlaws fireworks, but the legitimization of hazards seems, at least to me, to be a growing problem—both internal to the workplace and external to it.  As companies pull themselves out of the economic hole it has been easy to let maintenance issues accumulate.  This in itself isn’t a bad thing; I’ve said for ages that the safest companies are those that went out of business because they were foolish with their spending and unable to prioritize expenditures.  But I’ve seen a rise in complacency and a lack of operational discipline that puts lives and livelihoods at risk.

Familiarity Breeds Contempt

As workers and companies become more comfortable with hazards, the hazards cease to motivate them to take reasonable care.  Why is it now necessary to ensure that industrial vehicles are checked for defects and taken out of service and repaired when it was no big deal a year ago; after all, nobody has been hurt in all that time?  The fact that companies were forced to take risks (putting off maintenance, keeping unsafe tools and equipment in service, letting training slide longer than one should, etc.) and suffered no meaningful consequences (governmental budget cuts have meant that, for many locations, surprise visits from regulators—and their subsequent findings—have been exceedingly rare) has left many wondering if those safety protections were necessary in the first place.

The company isn't alone in its cavalier attitude toward workplace hazards; workers, too, are more likely to take more and greater risks in this climate. Even as companies diligently try to create a more empowered approach to safety and improve their safety cultures, five or more years of neglect in respecting hazards have greatly increased individuals' tolerance for risk. This increased risk tolerance manifests in people doing things they might ordinarily have avoided. When we think of individuals taking risks, it's natural to think of front-line workers, but there are others taking risks whose faulty decision making is far more dangerous. When crew chiefs, foremen, supervisors, and others whose decisions carry severe consequences for a large population begin taking risks, the danger to workers grows exponentially. A single flawed decision from an Operations leader can set off a catastrophic chain of events that kills multiple workers and becomes international news.

Denial Isn’t Just A River In Egypt

For many companies the problem just isn't that bad; in the minds of some leaders, the fact that nothing bad has happened yet is pretty good proof that it never will. These organizations have been living in a collective denial (it's difficult knowing that you are operating at heightened risk simply because you can't afford to fix things, and it's comforting to believe that nothing is likely to happen, and that if something DOES happen, it most likely won't be serious). Unfortunately, risks tend to grow, and unless there is some form of intervention, hazards will continue to build until they reach a threshold where injury is all but certain. And years of under-reporting born of fear of job loss, along with corporate programs aimed at reducing "recordable" injuries rather than reducing risk and eliminating ALL injuries, have reinforced the idea that companies have things under control when, in fact, they may not.

Reversing the Trend

For most organizations the problem of legitimizing risk did not happen overnight; unfortunately, it needs to be reversed rapidly. One solution is a performance audit. A performance audit differs from a compliance audit in several important ways. While a compliance audit is designed to determine the gap between what is legally required and the current state of the organization, a performance audit goes far deeper. Its purpose is, in part, to assess the organization's tolerance for risk and to present, often in jarring terms, the areas where immediate action must be taken. Performance audits can open an organization's eyes to unseen hazards and risks and reverse decades of incremental complacency and underestimation of hazards. They are often pricey, and many organizations balk at the cost, especially as they are just beginning to limp out of the red and into the black, but these audits remain the best and most effective way of quickly breaking the trend toward legitimizing risk.

Filed under: Phil La Duke, Risk, risk management, Safety, Worker Safety

Safety Isn’t Immune to Hiring for Technical Skills and Firing for Interpersonal Skills



By Phil La Duke

In my last column, The Safety Side, in Fabricating and Metalworking magazine (http://www.fabricatingandmetalworking.com/2013/06/stereotypes-get-a-bad-rap/?goback=%2Egmr_1533957%2Egde_1533957_member_254035449) I wrote about personality styles, and how understanding how a person prefers to be treated, and tempering one's style of communication to meet another's needs, can make one not only a more effective safety professional but a more effective professional in whatever career one chooses to pursue. I posted, as is my habit, a link to the article to the many LinkedIn groups to which I belong. The response was generally positive, but not universally so. One reader posted:

“well i’m not quite sure to agree with what you are saying. i know first hand that most employers, supervisors, or just about most anyone does not like or like working with someone who speaks their mind. and i guess i fall under that category. i say what is on my mind and i don’t try and find precise words so feeling are not hurt or just misunderstood. it tends to piss people off, oh well that’s me and i’m not changing for anyone. i am very productive in any job/career i do though.”

I always appreciate it when people post comments, especially when they disagree (provided they can avoid attacking me personally, to which I typically respond in kind), and this was no exception. In this particular case, I was struck by how resolute the poster was in his position of "oh well that's me and I'm not changing for anyone." I responded, in part, that if what he is doing is working for him then he should keep at it. Maybe it is working for him, but many others who feel, and act, the way the poster does find themselves limited. These people seem to be offered far fewer advancement opportunities, pay raises, and plum assignments, and they tend to receive worse performance appraisals. Often they limit themselves without even knowing it, and many who ignore differences in personality styles find themselves forced to work twice as hard (or more) just to earn the same (or fewer) rewards as those who temper their styles to better relate to people who feel differently. It's easy for these people to blame the successful, branding them as suck-ups or favorites or some other pejorative that paints their success as undeserved and unfair.

The whole exchange reminded me of an adage Human Resources professionals have known for years: "we hire people for their technical skills and we fire them for their (lack of) interpersonal skills." I think this is particularly true in the field of worker safety, and it's a real problem. Of course we need competent and skilled safety professionals; that should go without saying. Safety professionals must be skilled in a lot of technical areas, and my intent is not to diminish that in any way. But there is a real need for safety professionals to be interpersonally adept; unless they can do their jobs in a way that encourages people to respect, and yes, even like them, they won't be effective for long.

The Good, The Bad, and The Ugly

There are four types of safety professionals: those who are technically good and interpersonally good, those who are technically inept AND interpersonally inept, those who are technically skilled but interpersonally clumsy, and those who are technically incompetent but politically adroit. I make no claim as to what percentage of safety professionals fall into which category, but it behooves us all to try to increase the population of safety professionals who are both technically and interpersonally masterful.

Technically Gifted Social Toads

If there is any truth to the idiom "we hire for the technical skills and fire for the interpersonal skills," then there is likely a disproportionate number of safety professionals who are well educated and skilled in the requirements of worker safety but clumsy with people. These people bulldoze their way through life and tend to alienate not just the rank and file but also leadership. They find it difficult to get funding, have their initiatives thwarted at every turn, and generally do their jobs in a haze of hostility and frustration. They start to see the organization, both front-line employees and leadership, as the enemy, as impediments to the work that needs to be done.

Some safety professionals may scoff at the idea that their success is rooted in whether or not people like them, and may even see popularity and safety as mutually exclusive (I have at least one colleague who, whenever he wants to avoid talking to someone on a plane, simply tells the other passenger that he works in safety). Many safety professionals who were drawn to the profession because of their love of rules and enforcement may find it difficult to understand the importance of having good interpersonal relationships with their constituency. In the most extreme cases the safety professional is dismissed and replaced by someone more "reasonable."

The Eighth Waste & Rain Man

I once worked with two different safety professionals who were universally seen as great guys—real sweethearts—but completely incompetent. One, who worked in an organization with an aggressive continuous improvement program aimed at eliminating the Seven Wastes,[1] earned the unfortunate nickname "the Eighth Waste" because of his simple-minded, albeit well-intentioned, ideas about the nature of safety. Rain Man was a similarly well-liked but uninformed safety professional. Neither was able to do much good, and when the organization began valuing the job a competent safety professional is supposed to do, both were faced with a choice: rapidly bring their skills up to an acceptable level or be summarily dismissed.

Taking Out The Trash

Obviously, we can't protect those in our number who are neither interpersonally skilled nor technically adroit, but they are out there. Fortunately, their numbers are rapidly declining. There was a time when people who weren't particularly skilled, but who hadn't committed an offense that would justify firing them, were put into safety because—at least in the early days of our profession—safety was seen as a function that was impossible to screw up. Many were put into the position after washing out of the job they were hired to do, and most kept their jobs until they were allowed to retire. Unfortunately, many of these retirees have decided to hang out a shingle and continue to ply their trade as consultants.[2] So be forewarned: the dregs of our profession have not gone gently into that good night; they have poorly made business cards, a crappy website, and are open for business.


[1] The “seven wastes” is a key component of the Toyota Production System, a continuous improvement system that forms the foundation of practically every world-class management system developed since. The Seven Wastes in TPS are defects/scrap, over-production, waiting, transportation, excess inventory, motion and excess processing.

[2] Before any of you retirees get all bent out of shape, puff up your chests and fire off a nasty missive, most retirees I know are more than competent professionals. In fact, when I was running Rockford Greene International I relied heavily on skilled retirees to deliver training and provide consulting services to my clients.

Filed under: Phil La Duke, Safety, Worker Safety

The Rise of The Safety Extremist


By Phil La Duke


“’Isms’ in my opinion are not good”
—Ferris Bueller, Ferris Bueller’s Day Off 

fa·nat·ic (fuh-nat-ik) noun

  1. a person with an extreme and uncritical enthusiasm or zeal, as in religion or politics.

ex·trem·ist (ik-stree-mist) noun

  1. a person who goes to extremes, especially in political matters.
  2. a supporter or advocate of extreme doctrines or practices.

I write provocative material. I deliberately try to elicit a visceral response and take people to a place where they can explore their deepest-held beliefs and question basic ideologies of safety. The latest in neuroscience suggests that our decisions are made, and our capacity to change resides, deep in our subconscious, beneath our defenses. When something strikes a nerve at that level it can be difficult to have a rational conversation, but in general, if it gets one to at least reconsider one's belief set, maybe it's worth it.

Why is it important to reexamine our deepest-held beliefs? Because the world is a dynamic place, and if our beliefs are static we become increasingly out of touch. If we cling blindly to our beliefs and lash out at anyone who threatens our worldview, we run the risk of becoming completely and dangerously out of touch with the realities of our profession and ending up as useless relics. That should be career suicide, but sadly even the most out-of-touch hacks can usually find work based on their years and years of experience. But what good is 40.2 years of experience if that experience consists chiefly of self-congratulatory affirmations and retreads of theories that are a century old?

Not that every new idea is a good one. There is as much crap spewed by the idea-du-jour pundits today as there ever has been, and just because an idea or theory is new doesn't make it any better than conventional wisdom. But it's important for any professional to consider new ideas and emerging thought with an open mind.

That's getting tougher and tougher to do in safety, owing to the rise of extremist thought in the field. The merest suggestion that we discard a safety truism is likely to create nothing short of public outrage. Take, for instance, the response to Heinrich's Pyramid. A recent thread on the social networking site LinkedIn elicited 3,186 comments ranging from intellectual banter to crackpot personal attacks. The thread quoted a recent assertion by EHS Today:

"Heinrich's assertion that 88% of accidents are the result of unsafe acts has been dismissed as something he just made up. There was no research behind it whatsoever." It then asked the simple question, "What's your opinion? And why?"

According to a recent article by Ashley Johnson in H+S Magazine, a poll the magazine conducted found that 86% of respondents believed either completely or somewhat in Heinrich's theories, while another 10% reported that they weren't familiar with them. The article is a scathing indictment of Heinrich's theories from experts who question his methods, his conclusions, and, generally speaking, nearly everything he had to say. The article was balanced by a half-hearted defense claiming that the numbers were never meant to be statistical predictors (they were, by the way) and that Heinrich never blamed the workers (he did; in fact, Heinrich was a devotee of eugenics and believed that one's race and ethnicity played a role in the likelihood that a worker would be injured or cause an injury to others).

What does this all have to do with extremism? Plenty. It demonstrates that, despite a growing body of evidence to the contrary, deeply held beliefs will hold sway. That in itself is not extremism, but it does create an environment where extremists thrive. Why do people cling to beliefs that have been refuted? (There are still people who deeply believe in the faked photos and film footage of the Loch Ness Monster and Bigfoot, even though the very perpetrators of these hoaxes have disproven them.[1]) People want to believe in what they're doing, and they dig in when others chip away at the foundation.

It's not just the Heinrich supporters who will lash out against any suggestion that doesn't support their worldview. If you don't believe me, just publish something critical of Behavior Based Safety. Within hours extremists and fanatics will marshal their forces and begin attacking you. The problem has grown to such an extent that several editors of leading safety magazines actively avoid the debate, more out of a desire to avoid arguing with fanatics than out of fear or intimidation. But intimidation of the press is a goal of extremists everywhere; the aim, from Al Qaeda to the Ku Klux Klan to the neo-Nazis to the safety extremists, is to discredit, attack, intimidate, and generally silence the media, which, if it is truly unbiased, will never buy their bill of goods.

Extremism Is Rooted In Fear

Let's suppose you have 40.2 years of experience in safety in which you served with distinction, and someone comes along and asserts something contrary to the foundation on which your entire experience is predicated. What happens to your credentials, your accomplishments, and your very identity as a safety professional when everything on which they are built crumbles? People will protect their beliefs with a wildness typically reserved for mother grizzlies defending their cubs; they will make ugly personal attacks and seek to gather like-minded souls close to them.

Extremism Loves Company

Social networking sites make it easy to reach out to a world of people. Some credit social networking with ushering in the Arab Spring, but it also has a darker side: it affords fanatics the opportunity to get their ideas out to sympathetic ears. Unfortunately, when it comes to safety, people are dying in the workplace while crackpots postulate theories that are given equal weight with those of responsible theorists. I will leave it to readers to decide which side of that equation I fall on.


[1] I'm speaking of the most famous Loch Ness Monster photo and the well-known film footage of a reputed Bigfoot. Both have been convincingly disproven by the very people who first produced them. If you want to believe in the Loch Ness Monster or Bigfoot, God bless you, but what was once the most compelling evidence has been disproven. And don't even get me started on crop circles.

Filed under: Behavior Based Safety, Phil La Duke, Safety, Safety Culture, Worker Safety

Misleading Indicators



“If you don’t know where you’re going, how do you know you aren’t already there?”

By Phil La Duke

Nearly every safety professional worth his or her salt has been told that he or she needs to look at both leading and lagging indicators. It's good advice; in fact, it's advice I've given many times in articles and speeches over the years. But in my last post (two weeks ago—I spent the last week at a customer site, and with the travel travails I just couldn't bring myself to hammer out a post; deepest apologies to my fans and detractors alike) I questioned the value of tracking (not reporting or investigating, mind you, just tracking) near misses. Well, as you can imagine, the weirdos, fanatics, and dullards came out in droves to sound off and huff and puff about things I never said (reading comprehension skills are at a disgraceful low these days). Not everyone who reads my stuff is a whack-job, however, and some of the cooler heads insisted that tracking near misses is important because near miss reporting is a key leading indicator. It's not…and it is, but like so much of life, it's complicated.

Near misses in themselves aren't leading indicators; they are things that almost killed or injured someone and, most importantly, they are events that happened in the past. Not that everything that happened in the past is automatically a lagging indicator, but unless you still cling to Heinrich's idea that there is a strict statistical correlation between the number of near misses and the number of fatalities, near misses are no more a leading indicator than your injury rate, lost work days, or first aid cases. They simply tell you that something almost happened, and nothing more. Now some of you might argue that if you have ENOUGH near misses you are bound to eventually have a fatality, but that doesn't hold up to careful scrutiny. Leading indicators are often expressions of probability, and like the proverbial coin that is tossed an infinite number of times, the probability of the outcome does not change because of the frequency of the toss. If you were to toss a fair coin 400 times and it came up tails every time, the probability that the 401st toss comes up heads is still 50:50.

So does knowing that tracking near misses doesn't really shed any light on what is likely to happen mean we should stop investigating them? Certainly not, but we really do need to stop thinking that the data is telling us things that it isn't. On the other hand, near miss reporting is indeed a leading indicator, if we accept (as I do) that people who report near misses are: a) more actively engaged in safety day-to-day (and I suppose someone could argue that this doesn't necessarily follow) and b) getting better at identifying hazards with every near miss they report (again, this is a leap of faith, but I believe it to be true in most cases). So if you want to gauge the robustness of your safety process, I suppose the level of participation in near miss reporting is a good indicator.
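A quick way to convince yourself of the coin-toss point is to simulate it. The sketch below is mine alone, not anything from a safety standard or study; it simply tosses a fair coin many times and shows that the frequency of heads immediately after a tails is the same as the overall frequency of heads. The past run tells you nothing about the next toss.

```python
import random

def estimate_conditional(trials: int = 200_000) -> tuple[float, float]:
    """Toss a fair coin `trials` times and compare the overall frequency
    of heads with the frequency of heads immediately following a tails.
    For a fair coin both converge on ~0.5: past tosses do not change the
    odds of the next one."""
    tosses = [random.random() < 0.5 for _ in range(trials)]  # True = heads
    overall = sum(tosses) / trials
    after_tails = [nxt for prev, nxt in zip(tosses, tosses[1:]) if not prev]
    return overall, sum(after_tails) / len(after_tails)

print(estimate_conditional())  # e.g. roughly (0.500, 0.499) -- both near 0.5
```

The same logic is why a long stretch without a serious incident tells you nothing, by itself, about whether the next shift will produce one.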

The whole exercise got me thinking about indicators, and how often safety professionals (and everyone else on God's green Earth, for that matter) are misled by data because of the erroneous belief that it is saying things that it isn't.

Causefusion

Regular readers of my blog will recognize the concept of "causefusion." The term was coined by Zachary Shore in his book Blunder: Why Smart People Make Bad Decisions, and he uses it to explain how people mistake correlation for cause and effect. According to Shore, causefusion works something like this[1]: people who floss their teeth live longer than people who don't floss or who floss irregularly; therefore, flossing your teeth makes you live longer. It makes sense, right? Yes, except that it is wrong. There are other explanations for the correlation; isn't it possible, for instance, that people who are more interested in their health overall are more likely to floss regularly? In a world where eager safety professionals provide data to Operations people who are hungry for quick fixes, causefusion happens a lot, and it's a real danger because it leads us away from the true causes of injuries and may blind us to real shortcomings in our processes.
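For readers who like to see the trap in action, here is a toy simulation (mine, not Shore's, with numbers invented purely for illustration). Flossing has no effect on lifespan in this model, yet flossers still come out ahead, because a hidden factor, overall health-consciousness, drives both behaviors:

```python
import random

def flossing_gap(n: int = 50_000) -> float:
    """Simulate a population in which health-consciousness (the hidden
    confounder) drives both regular flossing and a longer life, while
    flossing itself does nothing. Returns the apparent lifespan gap
    between flossers and non-flossers."""
    flossers, non_flossers = [], []
    for _ in range(n):
        health_conscious = random.random() < 0.5
        flosses = random.random() < (0.8 if health_conscious else 0.2)
        lifespan = random.gauss(82 if health_conscious else 75, 5)
        (flossers if flosses else non_flossers).append(lifespan)
    return sum(flossers) / len(flossers) - sum(non_flossers) / len(non_flossers)

print(f"Flossers appear to live ~{flossing_gap():.1f} years longer, "
      "even though flossing does nothing in this model.")
```

Swap "flossing" for a pet safety metric and "health-consciousness" for a generally well-run operation and you have the way causefusion shows up in injury data.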

Another way we can be misled by indicators is the paradigm effect. When we think of the word "paradigm" we think of the definitions "a typical example" or "viewpoint," but in the world of science there is another, lesser-known definition: "a worldview underlying the theories and methodology of a particular scientific subject." Joel Barker pointed out how damaging paradigms (in the scientific sense) can be. Barker believed that there are many instances where a worldview is so powerfully held that any new evidence that does not support it is ignored. Consider the dangers of ignoring critical new information about worker safety because you believe in a particular tool or methodology so strongly that you can't even consider another viewpoint.

A third way we mislead ourselves is by seeing patterns that aren't there. This phenomenon is wonderfully described in another book that I believe is important to the world of safety, Why We Make Mistakes: How We Look Without Seeing, Forget Things in Seconds, and Are All Pretty Sure We Are Way Above Average by Joseph T. Hallinan. According to Hallinan—and the latest brain research supports his contention—the human brain tends to see patterns even where there are none. So when safety professionals desperately seek answers and are under pressure to initiate action, the temptation to see patterns where there are none can be extreme.

Perhaps the most misleading indicator is one of the most common: zero recordables. Too often safety professionals (and Operations as well, for that matter) see a stretch of zero recordables as evidence that they are at far less risk of injuries and fatalities than they actually are. This isn't to say that they AREN'T at less risk, but there is nothing more than a correlation between the two; they might be good, but they are just as likely to be lucky.
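To put a rough number on how easily "lucky" can look like "good," here is a back-of-the-envelope sketch of my own (the rates are made up for illustration). If recordables arrive more or less at random, the chance of a spotless year at a site that still carries real risk is surprisingly high:

```python
import math

def p_zero_recordables(expected_per_year: float) -> float:
    """Probability of logging zero recordables in a year, assuming
    injuries arrive at random (Poisson) with the given expected count."""
    return math.exp(-expected_per_year)

# A site whose underlying risk works out to one expected recordable per
# year still posts a "perfect" year about 37% of the time; at half a
# recordable per year it happens roughly 61% of the time. Zero on the
# scoreboard, unchanged risk on the floor.
print(f"{p_zero_recordables(1.0):.0%}")
print(f"{p_zero_recordables(0.5):.0%}")
```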


[1] The example is mine and mine alone; don't get all huffy and bother Shore.

Filed under: Loss Prevention, Near Miss Reporting, Performance Improvement, Phil La Duke, Safety, Worker Safety

