When Things Go Wrong, Let’s Do More of the Same


Originally posted on July 10, 2019 @ 7:03 PM

 

I always find it fascinating that when there is an increase in safety statistics or incidents, the call is always for greater vigilance, more checklists, more policing and more effort. What an amazing industry that still doesn’t know why things go wrong.

Rather than question the fundamental flaws in safety ideology and orthodoxy, or apply a bit of imagination, Safety assumes that what has been normalized is effective and that this was just a glitch. This is despite the fact that something went wrong. The response often comes back to blaming the individual or blaming the system. Ah, that’s it, we need to spend more money on a systems review and develop better systems. This will follow the last three reviews that had the same finding. Clever!

Unfortunately, the assumptions of orthodox safety inquiry are never questioned. The same old mindset is selected to undertake the same old investigation, under the same assumptions and the same ideological thinking, so that nothing in safety orthodoxy will be questioned.

The best way to keep the safety status quo is to get a regulator (or ex-regulator) to undertake an investigation to find out what they already know, using the same STEM tools that have been used over the past 20 years. In this way we can get the same outcome that was achieved last time and run a ‘safety blitz’ across the industry, under the assumption that wrong programming or a failure in systems is the problem. STEM thinking comes neatly packaged with no questioning of its flawed anthropology or flawed thinking about personhood, no knowledge of the human unconscious, and no understanding of social influence or the nature of human embodied decision making. In this way the inquiry can always conclude that the problem was the worker, typically in the style of the Danny Cheney tragedy (https://safetyrisk.net/theres-a-hole-in-your-investigation/).

Usually the outcome of an orthodox safety investigation is that someone was either ‘not careful’ or ‘complacent’, or perhaps that ‘human error’ was a cause. In the case of Danny Cheney, apparently he wanted to suicide that day!

In this way the inquiry thinks it has found something, when all of this is mere attribution and means nothing. This is all assisted by the behaviourist-cognitivist assumptions of James Reason, who established that decisions are made through ‘violations’ and ‘omissions’ (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1117770/).

One thing Safety does know is that there couldn’t be a fundamental flaw in the assumptions of safety thinking or systems. There couldn’t be a flaw in the assumptions of the behaviourist-cognitivist investigation. There couldn’t be any problem with STEM assumptions about personhood or its understanding of human decision making. All of these must remain unquestioned. It couldn’t be that there are other worldviews that could offer astounding insight into the way things go wrong. There is no other worldview (https://safetyrisk.net/can-there-be-other-valid-worldviews-than-safety/); there are only ‘known knowns’ and never ‘unknown unknowns’.

This is how Safety makes sure that things stay the same.


