The Safety Love Affair with AI

Hot on the heels of the SPoR March Newsletter on The Data Ethic of Power, we get more about the Safety love affair with AI.

And you don’t have to dig deep to find gems like Figure 1, AI and Surveillance (p.8).

Figure 1. AI and Surveillance


Do you remember the content of the last blog? Naming the bad as good? Spinning unethical practice as morally good? In many cases in Safety, ‘monitoring’ is marketed as ethically good.

Just coupling the notions of AI and surveillance together reveals the real agenda of Safety, especially given the complete silence of Safety on the Data Ethics of Power. Indeed, under the mantra of Duty of Care (to Safety) and the deontological ethic of Safety, any discussion of concerns about privacy is ludicrous. And where is this discussion located? In the Privacy Act, in Regulation. NOT in an ethic of care.

Poor old Safety can’t help itself; whenever it speaks about safety it is anchored to Regulation, not a duty to persons.

So, here we have an article concerned about so-called privacy, framed in its opening lines by the benefits of AI for safety and the semiotic of a worker in a truck being monitored through eye-scanning technology. And of course you need to make sure that the person in the picture is smiling, god bless surveillance.

Why is it that whenever Safety speaks it starts anchored to Legislation and Regulation, NOT an ethic of personhood? Because it doesn’t know how to.

If you really are concerned about the threats of AI, surveillance and the breach of personal power, you start with an ethic of personhood, NOT Regulation. Putting the emphasis on Regulation simply emboldens the power of Regulation over persons. Such discussion endorses and empowers the assumptions of Safety in behaviourism and engineering.

Then we get to the real agenda (p.7):

‘In the meantime, many companies, especially those working in high-risk industries, are embracing the use of artificial intelligence to support the management of health and safety risks’.

Ah, god bless Safety.

All that follows is justification for the power of Technique (Ellul) over persons, in the name of safety, named as good. The same paragraph is packed with marketing goop about Intenseye, which uses AI ‘responsibly’ (and probably sponsors the NSCA). Hmmm, I wonder if Intenseye has a deontological ethic? Just read the marketing on its website: all about standards but nothing about people. Or, read this: nothing on bias, nothing on methodology, no declared ethic, nothing on power and lots of spin about ‘well-being’. All wonderfully designed not to ask any critical questions or deconstruct assumptions.

Meanwhile, back in the NSCA article, it is all wonderfully packed in trust AI, trust the system, with its ‘round-the-clock monitoring’ and ‘real-time unsafe act notifications’.

Amazing. Hmmm, and how can AI interpret behaviours when it has no embodied knowledge with which to determine risk? How can AI ‘read’ the emotions and motivations of persons by surveillance of behaviours? How can a bunch of algorithms understand moral philosophy when no AI app on the planet will buy into such a discussion?

· There is no objective AI, it is all programmed.

· There is no machine learning, because it doesn’t learn, it reorganises algorithms.

· There is no objective AI, there is no neutral AI.

· All AI has in-programmed bias, based on the IT engineer who designed its moral programming with no awareness of ethics.
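The point about in-programmed bias can be shown in a minimal sketch (the features, labels and scenario here are purely hypothetical illustrations, not any vendor’s actual system): a one-neuron classifier whose ‘learning’ is nothing but arithmetic adjustment of numbers, and whose verdict on the very same worker flips depending entirely on which labels its designer chose.

```python
# Illustrative sketch only: 'machine learning' as parameter reorganisation,
# with the bias living in the designer's choice of labels.

def train(samples, labels, epochs=20, lr=0.1):
    # 'Learning' here is just repeated weight adjustment: no body, no
    # emotion, no understanding -- only numbers being rearranged.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(model, x):
    w, b = model
    return "unsafe act" if (w[0] * x[0] + w[1] * x[1] + b) > 0 else "safe"

# Hypothetical worker features: (hours on shift, eye-closure score),
# both scaled 0-1.
workers = [(0.1, 0.1), (0.1, 0.9), (0.9, 0.2), (0.9, 0.8)]

# The designer decides what counts as 'unsafe' -- the bias lives here.
labels_a = [0, 1, 0, 1]   # designer A: eye closure means unsafe
labels_b = [0, 0, 1, 1]   # designer B: long hours mean unsafe

model_a = train(workers, labels_a)
model_b = train(workers, labels_b)

# The same worker (short shift, drooping eyes) is judged oppositely
# depending on whose labels trained the model.
print(classify(model_a, (0.1, 0.9)))   # -> unsafe act
print(classify(model_b, (0.1, 0.9)))   # -> safe
```

Nothing in the machine ‘discovered’ what is unsafe; the verdict was fixed by the labels the designer supplied before any ‘learning’ began.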

Then the clanger at the end of this piece:

‘We must instead lift standards across industry to tackle the root cause of unsafe practices’.

A wonderful sing-song in tribute to James Reason.

So, for all those who are seduced by the nonsense of AI, just open your AI app and ask it (a computer) a moral question. For example: Is surveillance morally good? Is sex with a dog unethical? I had sex with a sex doll, is that OK? My computer is still spinning looking for an answer.

This is the answer ChatGPT gave me to my moral question, ‘Is surveillance morally good?’ (Figure 2. ChatGPT Answer).

Figure 2. ChatGPT Answer


Of course, AI can never understand the nuances of human culture, ethics or embodied knowing because it has no body, no unconscious and no Socialitie. Without fallible emotions, AI cannot know or learn. And AI can never be programmed to be either fallible or emotional because it has no body. Maybe ask your ChatGPT what it dreamt about last night?

In the same NSCA mag we also have a piece on Psychosocial Health, but it ensures that the legally binding name of Psychosocial ‘hazards’ is removed (p.13). It is all reframed as ‘Psychosocial Risk’. How amusing. Perhaps this is due to the integrity of the journal, the NSCA, or an ethic of silence that best suits the purposes of spin and marketing. The same marketing that proposes the mantra of ‘trust us with AI, we’re ethical’.

The best way to mask agenda is through reframing and silence.

The best way to understand what Safety is saying is to listen to its silences.
