In the previous article, we took a hard look at safety’s first problem – how it is studied. As safety leaders, we are asked to examine issues and incidents under political pressure and a preference for short explanations. If a safety issue cannot be condensed or explained with brevity, the assumption takes hold that it is not well understood.
Now, we’re going to take a closer look at Hollnagel’s (2017) second problem: how safety is measured. This is another difficult issue, especially for organizations that choose not to view safety as a system, and it is compounded by the industrial and commercial evangelism of “Zero” philosophies.
Popular key performance indicator sets all include regulated safety data, such as incident rates, severity, days away/restricted, and lost-time incidents. These have been used to describe the status of safety management for years, and unfortunately they show no signs of disappearing soon. Near misses and stop-work events have also been added to the growing list of KPIs, but they are largely of the same flavor.
By “the same flavor,” I mean that most KPIs used by procurement agents, safety personnel, and organizations share a common theme: they measure the instances where safety is not present. Compared with safety’s first problem, it’s nearly identical.
Total recordable incident rates (TRIRs) aggregate incidents and then break them down into categories such as job restriction, lost time, and first aid cases, but the common thread among them is that safety, in its traditional sense, is absent in most of those situations.
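To make the lagging nature of this metric concrete, here is a minimal sketch of the standard OSHA TRIR calculation, in which the 200,000-hour multiplier normalizes the rate to 100 full-time workers per year; the function and variable names are my own, not from any regulation:

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Total Recordable Incident Rate per 200,000 hours worked.

    200,000 = 100 full-time employees x 40 hours/week x 50 weeks/year,
    the standard OSHA normalization factor.
    """
    return (recordable_incidents * 200_000) / hours_worked

# A site with 3 recordable incidents over 400,000 hours worked:
print(trir(3, 400_000))  # → 1.5
```

Note that every input to this formula is an event where safety failed; nothing in it can tell a safety management system what is going well.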
For organizations claiming to have safety management systems, registered or self-declared, using these types of KPIs and zero philosophies represents a pretty serious issue. Systems, especially safety management systems, are designed to be adaptive. The complexity of interactions within the system requires feedback loops in order for the system to continue to function and adapt, as necessary (Mobus & Kalton, 2015).
Simply put, safety management systems can no longer make the adjustments needed to adapt properly if we feed them only incident, injury, and illness metrics. Managing incidents and near misses to zero gives your safety management system nothing to work with in terms of the adaptations or corrections that need to occur. Frequently used proactive metrics such as health and safety training, inspections, and audits can help identify problem areas within your system, but they are function- or process-specific.
Training is often mandated by regulation, so a good training metric would measure training delivered in excess of regulatory compliance. Inspections in most places are cursory and compliance-based, and have very little to do with measuring the effectiveness of a health and safety management system. Audits are a bit better, but they only work when they’re driven by a schedule rather than run as a response to issues.
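The suggestion above, measuring training in excess of regulation, can be sketched as a simple ratio. This is my own illustrative formulation, not an established KPI, and the names and figures are assumptions:

```python
def training_beyond_compliance(total_hours: float, required_hours: float) -> float:
    """Fraction of delivered training hours that exceed the regulatory minimum.

    Returns 0.0 when no training was delivered or when delivery did not
    exceed the mandated hours.
    """
    if total_hours <= 0:
        return 0.0
    return max(total_hours - required_hours, 0.0) / total_hours

# Hypothetical example: 120 hours delivered against an 80-hour mandate.
print(training_beyond_compliance(120.0, 80.0))
```

Unlike an incident rate, a ratio like this rises when the organization invests in capability beyond the compliance floor, which makes it a candidate leading indicator.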
Truly proactive metrics will be based on activities that occur when safety is present in a work process or setting. Safety professionals can use different learning frameworks within the organization to discover and share how workers identify and bridge the performance gap before it shows up as an incident rate or an injury.
Otherwise, if safety leaders are not aware of how things go right, can we really expect to pick the right answer when things go wrong?
Hollnagel, E. (2017). Safety-II In Practice: Developing the Resilience Potentials. New York: Routledge. ISBN: 9781138708921.
Mobus, G.E., & Kalton, M.C. (2015). Principles of Systems Science (Understanding Complex Systems). New York: Springer. ISBN: 9781493919192.