The Delusions of AI, Risk and Safety


‘Congratulations, we have just shifted all our Psychosocial data on you into our hazards register.’

Can you just imagine this message being forwarded to you any time soon, as Safety takes on the task of implementing ISO 45003 and the Codes of Practice associated with ‘controlling’ Psychosocial ‘hazards’?

The recent move by Safety into the field of Psychosocial Health with ‘ISO 45003 Occupational health and safety management – Psychological health and safety at work – Guidelines for managing psychosocial risks’ is fraught with many problems, not least of which is the ethical management of data.

Other concerning aspects of this new standard and associated Codes of Practice have been discussed here:

https://safetyrisk.net/category/mental-health/psychosocial-safety/

In an industry that is yet to tackle an ‘ethic of risk’, or anything of any maturity about Ethics at all, this venture into Psychosocial health and the recording of Psychosocial issues as ‘hazards’ poses huge questions about confidentiality, trust, confession, openness, honesty and power.

In all I have read in ISO 45003, the Codes of Practice on Psychosocial health and anything on Ethics (e.g. the AIHS BoK Chapter on Ethics), there is simply no discussion of the nature of power. Yet the very essence of Psychosocial stress is the abuse of power.

Similarly, there is no discussion of the politics of power or the abuse of power in any Safety publication that raises the subject of ‘duty of care’ or moral obligation. Most often, the notion of moral obligation or ‘duty of care’ is focused on the Act and Regulation, NOT on the person abused or violated. The focus of a ‘duty of care’ is on legal obligation, not on an ethic of care or caring.

So, in the absence of an Ethic of Risk or any thought of a ‘Data Ethics of Power’, what is going to happen with the data collected on Psychosocial health as ‘hazards’ in the safety industry?

I cannot raise all the issues associated with this question in this Newsletter, but if you want to read further, here is a good source with which to commence an introduction to the problem of a Data Ethics of Power:

Hasselbalch, G. (2021). Data Ethics of Power: A Human Approach in the Big Data and AI Era. Edward Elgar, Cheltenham, UK.

Recently in Australia we have witnessed the vicious abuse of power inflicted on vulnerable persons by the Robodebt scheme, enacted by the previous conservative government (https://robodebt.royalcommission.gov.au/).

The Robodebt scheme highlights the problem of the Data Ethics of Power and of Big Data Sociotechnical Systems (BDSS) in the abuse of persons. Robodebt was concocted by the conservative government to target the poor and vulnerable who were supposedly ‘ripping off’ the welfare payment system in Australia. Robodebt is, symbolically, the enactment of conservative mythological anger against the poor.

At the heart of the Robodebt system was the creation of an algorithm, using Big Data systems, to victimise welfare recipients. It is estimated that this system resulted in at least 2,000 suicides. When it comes to ideology, the last thing that matters is safety!
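At the core of that algorithm was crude income ‘averaging’: annual income data from the ATO was divided evenly across the fortnights of the year and compared with what a person had declared while on benefits. Below is a minimal sketch of that logic in Python; the function names and figures are hypothetical and illustrative, not the actual Services Australia code. It shows how anyone with irregular income, such as a casual worker, was automatically manufactured into a ‘debtor’.

```python
# A minimal sketch of the income-averaging logic at the core of Robodebt.
# Names and figures are hypothetical -- this is NOT the actual Services
# Australia code, only an illustration of the flawed assumption.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Robodebt's flawed core assumption: that annual income reported to
    the ATO was earned evenly across all 26 fortnights of the year."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_ato_income: float,
                       declared_each_fortnight: list[float]) -> list[float]:
    """Flags a 'debt' wherever the averaged figure exceeds what the person
    actually declared -- even for fortnights they genuinely earned nothing."""
    average = averaged_fortnightly_income(annual_ato_income)
    return [max(average - declared, 0.0) for declared in declared_each_fortnight]

# A hypothetical casual worker: $13,000 earned over 10 fortnights of work,
# then 16 fortnights unemployed, honestly declaring $0 while on benefits.
declared = [1300.0] * 10 + [0.0] * 16
flags = flag_discrepancies(13_000.0, declared)

print(sum(1 for f in flags if f > 0))  # 16 fortnights falsely flagged
print(flags[-1])                       # 500.0 of 'undeclared' income each
```

Each honest fortnight of $0 income is converted by the averaging into $500 of supposedly undeclared earnings, and a ‘debt’ is raised with no human checking whether the person was even employed at the time.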

In June 2021, a Federal Court judge approved a settlement worth at least A$1.8 billion for people wrongly pursued under the conservative Government’s Robodebt scheme.

The court found that, during the Robodebt scheme, the conservative Government had unlawfully raised A$1.73 billion in debts against 433,000 people. Of this, $751 million was wrongly recovered from 381,000 people. Settlement payments to eligible group members of the Robodebt class action are detailed here: https://www.servicesaustralia.gov.au/information-for-people-who-got-class-action-settlement-notice?context=60271.

Early in the development of the Robodebt scheme, advice was given that it was unlawful, and this advice was ignored by Ministers and senior public servants (https://www.afr.com/politics/federal/tudge-never-queried-legality-of-robo-debt-commission-hears-20230131-p5ch0s; https://the-riotact.com/minister-vowed-to-double-down-on-robodedt-even-when-told-it-was-illegal-royal-commission-hears/639458).

At the heart of the Robodebt scheme was a conservative neoliberal ideology that created a Big Data Sociotechnical System (BDSS) enabling the unethical use of power. Once such a system is put into play, Big Data becomes an automated vehicle that enacts the vices of the ideology. This enabled conservative politicians to wash their hands of any ethical responsibility as the computer systems were allowed to take over.

We have seen similar abuses of data systems in the Cambridge Analytica scandal (https://en.wikipedia.org/wiki/Cambridge_Analytica), the Snowden affair (https://www.tandfonline.com/doi/full/10.1080/23753234.2020.1713017) and the COMPAS algorithm (https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/), and in countless other examples of the abuse of power using Big Data Sociotechnical Systems (BDSS).

Hasselbalch (2021, p. 87) calls these ‘Destiny Machines’ to highlight the way BDSS shape the destiny and misery of persons.

I just watched the miniseries The Capture, which imagines clandestine work by spy agencies using ‘deep fake’ technology in real time. However, whilst it is one thing to enjoy the imaginations of film and script writers, it is quite another to enter the QAnon world of treating such scripts as covert reality, and then to witness the enactment of a terrorist attack (https://www.theguardian.com/australia-news/2023/feb/16/wieambilla-shootings-australia-christian-terrorist-attack-queensland-police) based on ideology and misinformation.

We know that films and series such as The Capture, The Matrix, Black Mirror and The Social Dilemma, and books like The Black Box Society, capture the mind of popular culture and offer solace to the religious imagination (Ostwalt (2012), Secular Steeples; Lyden (2003), Film as Religion). Much of this genre helps generate the myth of the human brain-as-computer and the absurd ideas of Transhumanism (https://www.telecomreview.com/articles/reports-and-coverage/3925-digital-humanism-the-extent-of-our-hyper-digital-reality). Transhumanism is a faith-cult, just as the language of the mechanisation of humans is pure nonsense. Similarly, the mythology of ‘machine learning’ helps fabricate nonsense mythologies about AI. It was Baudrillard who aptly commented: ‘The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence’.

I wrote about the problem of brain-centrism and so-called ‘machine learning’ in my books Envisioning Risk (pp. 14-17) and Tackling Risk (pp. 24-28). The top 10 movies of all time are focused on metaphysical themes, as are many miniseries; Stranger Things is a good example. All of this feeds into the creation of belief about AI and shapes how people define personhood.

However, we now see that what follows in the wake of mythology about AI is a faith-belief about reality. This is evident in the current fixation with AI and programs like ChatGPT.

We have all been warned about the trajectory of this mythical thinking by Madsbjerg (2017) in Sensemaking: What Makes Human Intelligence Essential in the Age of the Algorithm (https://www.blinkist.com/en/books/sensemaking-en) and by Larson (2021) in The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.

It is at this point that a small dose of reality is helpful (https://www.spiceworks.com/tech/artificial-intelligence/articles/common-myths-about-ai/).

All the current excitement about AI is like a cultic illness. The absurd levels of mythology about the capability of AI, the exaggeration of that capability and the associated delusions should be ringing ethical alarm bells for all of us, all the more so when one considers that the safety industry now wants to record Psychosocial data as ‘hazards’.

What hope has Safety got for critical thinking when so many believe that The Matrix is real?

· https://www.sciencefocus.com/future-technology/the-matrix-simulation/

· https://www.scientificamerican.com/article/confirmed-we-live-in-a-simulation/

· https://www.engadget.com/a-glitch-in-the-matrix-review-sundance-simulation-theory-150015472.html

What hope has Safety got of behaving ethically and professionally with the data of persons as ‘Psychosocial hazards’ when it has no interest in Ethics? What hope can there be for the ethical use of data when Safety has such a love affair with Big Data, engineering and so-called ‘predictive analytics’? A few examples tell the story (see the sketch after this list):

· https://www.worley.com/our-thinking/improve-safety-with-predictive-analytics

· https://safetymanagement.eku.edu/blog/using-predictive-analytics-to-predict-and-prevent-workplace-injuries/

· https://usequantum.com/future-safety-predictive-safety-analytics/

· https://www.inxsoftware.com/wp-content/uploads/2020/11/predictive-analytics-for-safety.pdf
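To see why this love affair deserves suspicion, consider a deliberately naive sketch of ‘predictive safety analytics’. The site names, figures and logic below are hypothetical (not any vendor’s product); the point is that a model trained only on reported incidents cannot distinguish a safe site from a silenced one.

```python
# A deliberately naive sketch of 'predictive safety analytics'.
# Hypothetical data and logic, not any vendor's product -- it shows how
# such models reproduce the biases of the data they are fed.

# Hypothetical sites: true hazard level vs. willingness to report incidents.
sites = {
    "Site A": {"true_hazard": 0.9, "reporting_rate": 0.2},  # dangerous, silenced culture
    "Site B": {"true_hazard": 0.3, "reporting_rate": 0.9},  # safer, open culture
}

def predicted_risk(site: dict) -> float:
    """The model only ever sees *reported* incidents, so its 'risk' score
    is really hazard multiplied by willingness to speak up."""
    return site["true_hazard"] * site["reporting_rate"]

for name, site in sites.items():
    print(name, round(predicted_risk(site), 2))
# Site A 0.18  <- the most dangerous site is scored as the 'safest'
# Site B 0.27
```

A model like this rewards exactly the suppression of reporting, which is the opposite of what ‘speaking up’ campaigns claim to want.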

Just imagine this faith-lust for ‘control’ (the darling focus of Safety) coupled with this new-found desire to ‘control’ Psychosocial hazards, again with no discussion of a Data Ethics of Power or of the unethical abuse of Big Data Sociotechnical Systems (BDSS).

The very use of the language of ‘hazards’ with regard to Psychosocial health poses enormous problems for the use of data, and will predictably suppress any confession of abuse in the workplace, especially now that such data draws the Regulator into reporting!

 

So, we already have the AIHS and others focusing on the nonsense language of ‘futureproofing’ (https://www.aihs.org.au/events/nsw-safefest-future-proofing-our-profession; https://www.linkedin.com/pulse/future-proofing-safety-health-phil-walton), complete with the pooling of ignorance by amateurs projecting about professionalism. Keen (2008) tells us all about this in The Cult of the Amateur. And one can be sure that the words ‘surveillance’ and ‘ethics’ are nowhere to be found.

Without an Ethic of Risk, Safety will never be professional. Without an Ethic of Personhood, Safety will never stop its dehumanisation of persons in the name of safety, zero and ‘duty of care’.

Just imagine the noise of all these meaningless ‘speak up’ campaigns (https://www.safework.nsw.gov.au/search?query=Speak+Up) when workers learn that any Psychosocial information they give becomes a registered ‘hazard’. Just imagine linking the ‘Speak Up app’ to the Psychosocial hazards register with no ethic to guide what happens next. Just imagine what happens to that data without a Data Ethics of Power.

Without a Data Ethics of Power, one can be sure that this new venture by Safety into Psychosocial ‘hazards’ will create a new data nightmare (https://safetyrisk.net/welcome-to-the-nightmare-safety-creates-its-own-minefield-as-usual/).

Of course, it doesn’t have to be this way. In SPoR, the study of Ethics is foundational to the enactment of risk. Indeed, the workshop on an Ethic of Risk is currently being conducted for free and is oversubscribed (https://safetyrisk.net/free-online-workshops/). One can find more on the workshop here: https://cllr.com.au/product/an-ethic-of-risk-unit-17/

The workshop enables a positive, practical and comprehensive approach to risk as if persons and ethics matter.


