Phil La Duke's Blog

Fresh perspectives on safety and Performance Improvement

An Inspection by Any Other Name


I’m often asked, by people both inside and outside the safety discipline, what the difference is between an audit and a safety inspection. An audit is typically an annual (or semi-annual) activity conducted by safety professionals to ensure compliance with safety regulations and internal policies. An auditor typically works from a checklist of items that must be verified or assessed, and audits are usually done either by an internal safety professional or by an external governmental agency. Audits are reactive. Audits are a “gotcha” that ostensibly is performed so that the auditing body—whether an internal department, OSHA, a Ministry of Labour, or some other governmental agency—can coach the organization. In fact, most audits result in negative consequences; for the most part they are feared and detested, and in the majority of cases rightfully so.

Safety inspections are regular, proactive activities designed to identify workplace hazards and contain or correct them before anyone gets hurt. Safety inspections are conducted by first-line supervisors and/or worker representatives (in union environments) and use a problem-solving, failure-mode (anticipating what could go wrong) approach. Inspections are proactive. The problem with safety inspections is that no matter what you call them (and there are myriad names for essentially the same activity), people associate them with the negative outcomes they associate with audits. The result is a well-intentioned but largely simple-minded attempt to rebrand the safety inspection to take away the sting associated with it.

In healthcare, Safety Rounding is growing in popularity. Safety Rounds are safety inspections adapted for use in matrix organizations. Like Safety Inspections, Safety Rounds are regular, proactive walk-throughs, but instead of first-line supervision conducting the rounds, volunteers take on the responsibility in addition to their normal jobs. The goal of a Safety Round is the same as that of a Safety Inspection, but the focus of Safety Rounds parallels the “Environment of Care” requirements of Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) audits. Unfortunately, the volunteer brigades tend to attract gung-ho staffers who don’t have much to do or who shirk their core responsibilities in favor of the new assignment. But even the best-intentioned volunteers lack the authority to hold accountable the people responsible for correcting hazards, and in short order the volunteers lose interest, become frustrated, or otherwise become ineffective. I’ve seen the same thing happen in lean implementations where 5S teams were staffed by volunteers; without the power to compel the first-line supervisor to correct issues, the same items are identified week after week, month after month.

But Safety Rounds aren’t without value. In fact, where the manager who owns the area is held accountable, Safety Rounds can be extremely effective. Safety Rounds tend to be more holistic than Safety Inspections, and often those conducting the rounds will ask hospital staff questions to gauge the effectiveness of required safety training. Safety Rounds may well be tied to Patient Safety, and when they are, their effectiveness tends to increase exponentially.

In Lean Manufacturing environments (which, believe it or not, aren’t restricted to manufacturing these days) Safety Inspections can be embedded into Layered Process Audits. From 2008 to 2009 I spent one week a month for 15 months working with a manufacturer in Mexico to completely integrate safety into its manufacturing operating system. One of the major breakthroughs we made was the integration of the safety inspection into a layered process audit. This had a profound impact on the effectiveness of the safety inspection because a) it met the requirement that a Layered Process Audit be conducted weekly and b) it documented all the process flaws in a database that made it easy for maintenance (or other departments) to correct them.
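The flaw-tracking piece of such an integration doesn’t have to be elaborate. Here is a minimal sketch in Python of what a findings log might look like; the names (`ProcessFlaw`, `FlawLog`) and fields are hypothetical illustrations, not the actual system we built:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ProcessFlaw:
    """One finding from a layered process audit / safety inspection."""
    area: str          # where the flaw was found
    description: str   # what is wrong
    owner: str         # department responsible for the fix (e.g. maintenance)
    found_on: date
    status: str = "open"   # "open" until corrected

class FlawLog:
    """Minimal database of audit findings, queryable by owning department."""
    def __init__(self) -> None:
        self._flaws: List[ProcessFlaw] = []

    def record(self, flaw: ProcessFlaw) -> None:
        self._flaws.append(flaw)

    def open_items_for(self, owner: str) -> List[ProcessFlaw]:
        """Everything a department still needs to correct."""
        return [f for f in self._flaws
                if f.owner == owner and f.status == "open"]
```

The point of the structure is the `owner` field: once every finding is routed to a department, the same item reappearing week after week becomes visible to management rather than quietly forgotten.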

Perhaps the most useless bastardization of the safety inspection is the safety observation. Safety observations are based on the belief that if a supervisor watches someone working, he or she can identify unsafe work practices and give the worker feedback on how to work more safely. This practice overlooks scientific principles that make it an expensive waste of time. For starters, safety observations assume that workers perform their tasks the same way every time and that the act of being observed will not alter the worker’s performance in any way. Years ago I worked in an automobile factory assembling seats. Once a year the engineers would do a time study: they would come and watch each operator work and count the steps involved in a given job. Knowing that the engineers were likely to heap as much work as they possibly could onto a job, the operators would routinely add steps, slow their pace, and otherwise skew the data they provided the observer. But even when operators are not trying to confound the results, the fact that their bosses are watching over their shoulders is likely to make them take more time and work more safely. Unless an organization intends to pay someone to watch every operator every moment of every day, it’s not likely that observations will bear much fruit, and it’s highly likely that they will add costs and ignore variation in human behavior.

Some organizations have taken to calling the safety inspection a safety tour, and in so doing soften the stigma of an inspection. I suppose that if renaming the activity makes it less threatening, then by all means we should rename it. My personal preference is to call it a Process Integrity Analysis, and I would not limit it to safety. We have to do a better job of integrating safety into work processes and stop calling safety out as a separate and discrete activity. A Process Integrity Analysis should include analysis of process capability and reliability, quality, total productive maintenance, 5S, and Job Safety Analysis. By examining a process holistically an organization can lower injuries, boost productivity, and increase quality. If we position the “Safety Inspection” as just another element of process improvement, Operations will stop viewing safety as an interruption of their jobs and start treating it as a critical discipline that drives productivity.


Filed under: Performance Improvement, Safety

Reluctance To Report Near Misses May Not Be Caused By Fear


Conventional wisdom holds that people won’t report near misses because they fear the repercussions of admitting that they screwed up in some way. I’ve been chewing on this for a while now and have concluded that this belief is, for the most part, wrong. But before we get into that, I should define my terms. A near miss is any event that almost resulted in an injury but didn’t.

Near misses give us an invaluable opportunity to learn about system failures and correct the root causes before a catastrophic incident happens (someone is killed or seriously injured, or there is substantial property damage). But people are reluctant to report these mishaps, and safety professionals and organizations struggle to convince people to document near misses. Why? Many, if not most, safety professionals land on “people are afraid they will get in trouble,” and I don’t doubt that is sometimes the case. But in recent weeks I have been working with a new organization and, as such, the pressure to conform to the new culture, while self-imposed, is formidable. Three times in the past two weeks I have been involved in near misses, and I did not report them. Why? Was I afraid? I wasn’t afraid of negative job repercussions; in fact, in each case I did nothing wrong. In the first case I was trying to turn off a light in a cubicle, and as I felt along the front of the fixture to locate the switch I instead crammed my palm into the clear plastic light cover; it hurt, but it didn’t injure me. Had I been hurrying, or had the plastic been jagged, or…a host of other conditions, I could have been injured. From a safety standpoint I could have been cut, burned, or even received an electrical shock. Clearly this is a system flaw—I was not behaving unsafely or working out of process, and yet the way cubes are lit is a poor design that encourages people to feel around for a switch instead of having the switch in plain view. At a minimum this condition is likely to eventually damage the plastic cover, which presumably has some purpose and function.

The second near miss was a slip on snow while walking down concrete steps into a traffic area. I slipped but managed to grab the handrail, and while I was off balance I didn’t fall. Another near miss. I did a quick analysis, and again, I as the worker was in no way negligent. I wasn’t walking too fast, I was wearing appropriate footwear, and I was walking in an area intended for pedestrians. The steps slope down and forward, and being smooth concrete, the slightest moisture (never mind ice and snow) can easily cause a loss of traction. To further complicate things, there is no pedestrian crossing marked, no stop sign, and no speed bumps. There are also no sidewalks from this parking lot to the entrance, forcing people to walk on the snow-covered grass or in traffic. Not only is an injury probable, but if an injury does occur the impact promises to be severe or even fatal.

The third near miss involved me catching the heel of my shoe on a step and falling forward. In this case I was again able to catch myself on the rail and felt only mild discomfort in my knee and ankle. Things most certainly could have been much worse, but I was lucky. In this case, as with the others, I was not distracted, I was following procedures, and I was not behaving unsafely.

As I’ve said, I didn’t report any of these near misses, and I’ve spent significant time reflecting on why. Here’s what I learned:

  1. After the first incident I asked a colleague whether the organization had a near miss reporting process. She asked me what that was. Clearly our safety jargon was getting in the way, so I asked her a different way: “How do we report injuries?” She explained that there was a system but she didn’t know what it was and that I should ask the department head. So, reason number 1: reporting a near miss is hard.
  2. Some time later I found the head of the department and asked about near miss reporting, and got the same general response: I don’t know. When she asked me why I was inquiring—not in an accusatory tone, but in more of a concerned, “Did you want to report something?” sort of way—I found myself dismissing the near miss as too trivial to report (when was the last time somebody died looking for a light switch?). Reason number 2: because there was no serious consequence resulting from the near miss, it didn’t seem worth reporting.
  3. After my near slip on the ice I noticed a group of people talking about the fact that the lack of sidewalks meant they had to walk in traffic and that the few sidewalks that did exist were slick with ice. I shared my experience with the icy steps, and one person responded, “If you call facilities they tell you that you have to fill out a work order, and even when you do they don’t do anything.” Reason number 3: because people believe that even serious safety concerns are ignored, what is the point in reporting near misses? The organization does not value the information.
  4. By the time I caught my heel on the step and almost fell, I was fully indoctrinated into a culture that did not report near misses, but I desperately wanted to avoid being one of those employees who ignore the problem. I mentally resolved to find the process and report these near misses. Then I mentally walked myself through the scenario of reporting all three and decided that a) I would look like an accident-prone klutz, b) I would be seen as Chicken Little, and c) nothing would be done with the information anyway. Reason number 4: the risk-to-reward ratio is stacked against me; I risk being seen as a fool, and there is no reward for reporting. I thought I would look ridiculous reporting something so trivial, and I wanted to make a good first impression.

For the record, this organization has an amazingly nurturing and employee-centric culture. Employees are developed and encouraged, and training is a key priority. And yet I was clearly and quickly “told” that near miss reporting was not a priority, that the information was not valued, and that the organization was not concerned with my safety, despite none of these things being true.

So what did I take away? Several things:

  1. People feel foolish when they do something that results in a near miss even if they did nothing wrong, and people who feel foolish are unlikely to advertise it.
  2. People will only report near misses if it is easy to do so and ideally if doing so is anonymous.
  3. If you solicit people to report hazards or near misses, you ought to be ready to respond quickly and effectively to the hazard.
  4. Even a veteran safety professional is not immune to organizational and peer pressure.
  5. If you insist on safety incentives, a good use for them is to provide incentive for near miss reporting.
  6. The fear of being made to look like a whiner or a wimp is greater than the desire to improve the safety of the workplace.
  7. If you want people to report things you have to have a system that is easy, accessible, and valued by the organization.  Advertising the process is key.
  8. You absolutely must have a blame-free reporting process.  If I was reluctant to report something that happened for which I was in no way responsible, how much more reluctant will I be for an incident where my behavior played a role in causing the incident?
  9. Near miss reporting will most likely happen only in cases where it is virtually impossible not to report. This needs to change, but unless near miss reporting is given the same priority as reporting a serious injury, we are doomed to a world of ignorance.
  10. We get what we measure. Nobody seemed all that interested in collecting my information, so I was certainly not going to push it and risk a negative outcome.
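On the point about making reporting easy and anonymous, the mechanics of a low-friction intake channel can be almost trivially simple. Here is a minimal sketch in Python (the function name and record fields are hypothetical; anonymity is the default, with attribution strictly opt-in):

```python
import time
import uuid
from typing import List, Optional

def report_near_miss(log: List[dict], description: str, location: str,
                     reporter: Optional[str] = None) -> str:
    """Append a near-miss report to the log and return its id.

    Anonymous by default: the reporter is recorded only if the person
    opts in, which lowers the social cost of reporting.
    """
    report = {
        "id": uuid.uuid4().hex,   # lets the reporter follow up without naming themselves
        "ts": time.time(),
        "description": description,
        "location": location,
    }
    if reporter is not None:
        report["reporter"] = reporter
    log.append(report)
    return report["id"]
```

Two required fields and an optional name is about as much as a reporter should have to supply; everything else (investigation, routing, trending) belongs on the organization’s side of the transaction.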

Sadly, while we safety professionals talk a good game when it comes to near miss reporting, we don’t execute well, because many of us start from the assumption that people won’t report near misses because they are afraid. Until we move beyond that mindset, our organizations will remain at significant risk and we will continue to significantly underestimate our risk of serious injuries and fatalities.

Filed under: Behavior Based Safety, Near Miss Reporting, Phil La Duke, Safety, Safety Culture

Ending The Checklist Mentality to Safety Inspections


Often while training people how to identify, contain, and correct hazards, I find that they miss obvious hazards because they are looking for items on a mental checklist; instead of viewing the workplace holistically, they look for one hazard at a time. Inspecting a work area for potential hazards is hard—in many instances the hazards are contextual, and given the right conditions virtually anything can increase the risk of injury. And as our familiarity with the workplace increases, our respect for workplace hazards diminishes until we become blind to the risks in a given area. If forced to find hazards in an area, people will indeed find them, but typically these will be obvious hazards that pose no serious risk to workers.

To prevent this dynamic, one should begin by asking a few questions:

What happens here?
If you ask people what they do, they will tend to answer in broad, general terms (“this is a deburring station”), so one will have to probe further. Ask the worker to describe the tasks in detail—lifting, walking, material flow, handling parts, attaching fasteners.
This detailed description of the basic elements of the process forces you to move away from the checklist and really think about the forces and inputs that go on in the area.

What could go wrong? What injuries have I seen in this area in the past?
Typically whoever is inspecting a work area is intimate with every problem one is likely to encounter there and can tick off a list of process failure modes, complete with a list of triggers. From there it’s easy to scan the area for those triggers.

What doesn’t belong here/what is out of place or out of process?
By targeting the sources of process variation we teach ourselves to focus on the critical few hazards that are most likely to seriously injure workers. This technique is also useful for eliminating the tendency to “pick the low-hanging fruit” and ignore those issues that tend to be more difficult to anticipate or readily observe.

What has changed since the last time you toured this area?
Variation creates problems in the workplace. And provided the system is stable, once the root causes of process hazards have been identified and corrected, one need only focus on things that have changed. On a side note, I start every incident investigation with the question, “What was different in this case from the way this operation is usually done?” I typically get a resolute “nothing,” to which I respond, “If that were true, either the worker would never get hurt or would get hurt every time. Since neither is the case, there must have been SOMETHING different this time.” Differences represent process variation, and where there is process variation there is heightened risk.

Holistic versus Category Based

Viewing the work area holistically—that is, as a complete system rather than as discrete elements—can be difficult if one doesn’t truly understand the process. And while this is easier in manufacturing than in non-production environments like a hospital ward or a warehouse, even viewing a manufacturing operation as a system can be very challenging. When we look for things with the potential to harm someone, the sheer magnitude of the hazards can be overwhelming, and a checklist is a logical tool for keeping one organized and ensuring one doesn’t miss anything. Unfortunately, because we are typically moving around while inspecting an area, we tend to inspect as we go, working down the list as we move geographically through the area. For a checklist to work, one would have to walk the entire area for each checklist item, and that’s just not sensible. But holistic inspection means the inspector must have an in-depth knowledge not only of the systems active in his or her work area, but of ergonomics, human factors, and, more specifically, each subset of the operation. Such knowledge is useful not only for improving safety but for all of SQDCME. Unfortunately, this kind of sophisticated knowledge of the work being performed is exceedingly rare in the modern workplace.

The Human Behavior Wildcard

The biggest source of process variability is differences in human behavior. People do stupid things, do things subconsciously, or simply vary the way they do things. This variation can combine with other process variation to create injury triggers. It is no secret that the majority of injuries have some behavioral component to their cause. Unfortunately, variation in human behavior is also the most difficult variable to control. Organizations, acting on the dubious advice of Behavior Based Safety advocates, have spent millions trying (largely in vain) to manipulate human behavior into making the workplace substantially safer. Most of this money was wasted, or resulted in significantly increased overhead and injury-prevention costs. These companies would have been better served investing in mistake-proofing their processes or in contingent measures that reduce the likely severity of an incident and protect workers.

Striking an Acceptable Balance

Ending the checklist mentality completely is neither possible nor desirable—the categorization and trending of hazards and injury root causes is beneficial and useful—but there are better ways than working from a checklist. When looking for hazards, take a page from Stephen Covey’s playbook: seek first to understand, and THEN work the checklist. In other words, take a failure modes and effects analysis look at the area before pulling out your checklist. Use the checklist to confirm the absence of hazards after you have walked the area instead of using it to prompt you to look for hazards. This may sound like a trifling distinction, but it may well mean the difference between identifying and correcting dozens of hazards and finding one or two.

Filed under: Behavior Based Safety, Safety, Worker Safety

When It Comes to Safety Management Systems, One Size Does Not Fit ALL


Every couple of years a new big thing comes into vogue in worker safety. Behavior Based Safety, Process Safety, and Safety Culture all share one thing in common: they promise a universal approach to worker safety, and it is a promise on which they seldom deliver. The problem isn’t with these approaches so much as with the erroneous belief that a single approach can address the diverse and divergent safety needs of dissimilar organizations. The issue lies in the difficulty of commercializing a single system so that vendors can sell it to a diverse client base. I should explain that I don’t think there is anything wrong with this practice; whether a company is selling training programs or high-end consulting services, it is common to start with some sort of common material or philosophy. There’s always a trade-off between a static, standard product and a dynamic, highly customized one. If one chooses a system that is most likely to meet one’s needs, one should expect to pay a premium for that customization.

But in many cases the providers of safety management systems position their offerings as a panacea, or at the very least imply that their systems will get results irrespective of the environments in which they are deployed. In the interest of fairness I should disclose that I currently help companies design and build worker safety management systems. That having been said, I don’t advocate a custom-designed system for every organization. It’s as wrong to recommend a customized system to everyone as it is to push a static system on all organizations.

Make/Buy Analysis

Whenever an organization considers making a major purchase, it typically does something called a Make/Buy Analysis. This activity, which goes by a variety of names, involves determining whether it’s better (usually cheaper) to go outside for a good or service or to assign it to someone inside the company instead. For years I was responsible for this kind of analysis, helping executives at the company where I worked determine whether it was wiser to fill a position with a new hire or to invest in the development of an existing employee who would then be promoted. I would dispassionately review the required skills and compare them against the skills of the existing employee. When deciding whether to buy a new system or build your own, the discipline involved in a Make/Buy Analysis can be useful, and the considerations are essentially the same:

Cost

No matter what the business decision, cost is usually a primary consideration. Even if the return on investment promises to be large, the cash outlay involved in a business solution can quickly torpedo it. While it may be true that it takes money to save money, it’s equally true that you have to have money to spend money. This is less about the actual expenditure than it is about cash flow, so merely arguing the merits of the investment may well fall on deaf ears.

Effectiveness

In the end, a safety management system that doesn’t work, or that fails to achieve peak efficiency, will make the safety advocate who championed it look frivolous and cost him or her credibility. It’s rare for an outside, canned system to fail outright, but fairly likely that the system will fail to achieve the promised results; this puts the safety professional in the position of having to make excuses for the shortfall.

Of course there is potential for an even more catastrophic failure of a homemade safety management system. Homemade systems are often a mishmash of theories that the safety professional cobbled together from books, speeches at professional conferences, or tools and tactics “borrowed” from pundits. The problem is exacerbated when safety professionals who, while skilled in safety and compliance, lack experience in culture change or systems design attempt to create a safety management system on their own.

Business Case

Business leaders are likely not only to expect a Return On Investment (ROI) but to expect a sizable ROI within a year, if not sooner. Conventional wisdom now holds that every expenditure should return more capital than the amount expended, and any cash outlay competes with other investment opportunities. Unfortunately, many safety departments lack the infrastructure to accurately measure the cost of not having a system, which makes it impossible to calculate a precise ROI.
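When the cost data does exist, the arithmetic itself is simple. Here is a minimal sketch of first-year ROI and payback period; the dollar figures in the example are purely illustrative, not drawn from any real program:

```python
def roi(annual_savings: float, cost: float) -> float:
    """First-year return on investment, as a fraction of the outlay."""
    return (annual_savings - cost) / cost

def payback_months(annual_savings: float, cost: float) -> float:
    """Months of savings needed to recover the outlay."""
    return 12 * cost / annual_savings

# Hypothetical: a $50,000 system that avoids $120,000/yr in injury costs
# returns 140% in year one and pays for itself in five months.
year_one_roi = roi(120_000, 50_000)        # 1.4, i.e. 140%
months = payback_months(120_000, 50_000)   # 5.0
```

The hard part, as the paragraph above notes, is not this division but producing a defensible `annual_savings` figure in the first place.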

The decision whether to adopt a canned program or build one internally will always require careful deliberation and analysis, but unless one takes great care at the outset, one is likely to end up with a system that is ineffectual and that degrades the organization’s confidence in the safety professional.
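One way to keep that deliberation dispassionate is to make the trade-offs explicit with a weighted-criteria comparison. This is only a sketch: the criteria, weights, and scores below are hypothetical, and every organization would choose its own.

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Sum of each criterion's score (1-10) times its weight."""
    return sum(scores[c] * weights[c] for c in weights)

# Illustrative weights: cost and effectiveness matter most here.
weights = {"cost": 0.4, "effectiveness": 0.4, "time_to_deploy": 0.2}

# Illustrative scores for the two options from the make/buy analysis.
make = {"cost": 6, "effectiveness": 8, "time_to_deploy": 4}  # build in-house
buy  = {"cost": 7, "effectiveness": 6, "time_to_deploy": 8}  # canned system

decision = "make" if weighted_score(make, weights) > weighted_score(buy, weights) else "buy"
```

With these particular numbers the canned system edges out the in-house build, but shift the effectiveness weight up and the answer flips, which is exactly the conversation the analysis is meant to force.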

Filed under: Loss Prevention, Safety, Safety Culture, Worker Safety
