Faith in AI and AI Safety

DOWNLOAD PAPER HERE: Faith in AI and AI Safety

Introduction

The propaganda machine for AI is accelerating at full speed, with huge implications for WHS and AI Safety. The latest claim is that AI can identify crime a week before it happens (https://biologicalsciences.uchicago.edu/news/algorithm-predicts-crime-police-bias). Most of the reports on this are lazy ‘cut and paste’ non-journalism, and it is this kind of coverage that never raises the challenges or ethics of moral philosophy.

Most of what is reported in this story has no relevance to the problems of crime or criminal behaviour. Reporting of this kind says much more about faith-in-AI than about any real usefulness or usability.

Of course, any research into AI and crime detection doesn’t mention the longitudinal biases of AI in crime prevention, including biases against women, the poor, the disadvantaged and indigenous groups, or its inbuilt racism (Vallor, S., 2024. The AI Mirror, Oxford Uni Press, London).

Sometimes the hyper-propaganda machine when in full fantasy, suggests that AI is sentient. Fortunately, this delusion is quickly blown out of the water by experts in the field.

Purpose

This paper is directed to the risk and safety industry and urges caution in adopting AI into the sector.

The more I research and read about AI, the more I realise that belief in AI is a faith-system that says more about the presenter than about the realities of AI. Indeed, the many popularised myths created about AI have simply endorsed utopian hopes for miracles. There is simply no evidence for much of the fear-mongering that suggests AI will take over and destroy humans. This shares as much reality as War of the Worlds, Westworld or The Matrix.

All data is interpreted; no data is neutral or objective. AI systems are data machines. AI cannot ‘think’ nor ‘understand’ what it does. It cannot, and never can, ‘know’ the moral or embodied experience of living as a fallible person in the world.

AI is not human and never can be. AI simply mirrors and replicates the algorithms programmed into it. AI doesn’t ‘learn’ in the real meaning of the word, even though the language of ‘machine learning’ is applied to it. Machines are trained to replicate algorithms but can never know anything about lived experience.

The idea that humans are predictable, as this non-news story suggests, is based on faith in big data as fantasy, all wrapped up in a marketing strategy. The only way to have faith in prediction is to deny the fallibility of human persons, the world and mortality. We see the faux promises of predictive analytics (https://safetyiq.com/insight/how-to-use-ai-to-improve-workplace-safety-a-predictive-analytics-model/) and promises of revolutionising safety, but this just demonstrates more faith in data. Most of this material should be in the fiction section of the library.

The data being fed into AI about crime in this story comes from five cities in the USA where the historical record demonstrates extraordinary bias against minority groups. What is more, algorithms are never going to produce arrest warrants for crimes not yet committed. All this research will do is amplify the bias of the system fed into it and increase the surveillance, profiling and general targeting of minority groups compared to their white counterparts, making everything worse and less safe.

Faith in AI[1]

Faith-in-AI gives little thought to bias, ethics, moral meaning or the abuse of power, and rarely considers by-products. As Kailas states (https://www.dqindia.com/interview/can-ai-truly-predict-crime-insights-from-george-kailas-7655793):

‘Law enforcement’s actions based on this technology could result in deeper mistrust of the law within these communities’

And it doesn’t help even if one tries to take the bias out of the data fed into AI. Vallor (2024, p. 55ff) demonstrates that tinkering with the bias of data, or taking words or language out of the data, makes little difference and most often invalidates both the input and the outcome.

Faith in AI is founded on a number of myths fostered by scientism and engineering, namely that:

  • AI is a panacea for human work.
  • AI can do better what humans do best: critical thinking.
  • AI will one day be perfect.
  • AI has a Mind and can ‘think’.
  • AI is only limited by the data it is fed.
  • AI can do human work.
  • AI can learn.

All of these myths are fed by propaganda from vested interests.

The Failures of AI

So, let’s look at some reality. AI failures are extensive and harmful. And AI doesn’t get any more ethical or moral, no matter what is fed into it. Do some research. Here are just 10 examples:

 

Here is further research into specific fields.

Medical Care

Automated Vehicles

We have been fed propaganda since 2013 that automated cars would be flooding our roads within 10 years. Not so. This is just more fantasy. Despite all the projections of faith in AI, it simply cannot adapt and ‘think’ as humans do. The only automated vehicles in existence operate in controlled environments or on rails.

AI Dumbs Us Down

One of the main by-products of faith in AI is that it dumbs us down. The more we rely on being spoon-fed data by AI, the less critical we become, particularly in considering by-products, imagination and ethical concerns. The following is an example.

Fake Cases

Of course, the errors and limitations of AI are evident in all fields, such as Law, Education, Medicine and Multi-media. Here are just a few examples:

The Dreamers and Faith-Healers

As in all cults and faith systems, there are those who maintain the faith against all the evidence. The belief is that these are all just ‘hiccups’ and ‘teething problems’ and that things will get better and better with AI over time. Vallor calls this kind of faith-group the ‘longtimers’.

This faith-belief is founded on the original mythology and attributions of ‘bad faith’. Faith in myths is not real, and neither can faith in AI ever be real, as long as the brain-as-computer metaphor continues in popular AI mythology. All of this, including the transhumanism cult (https://www.forbes.com/sites/julianvigo/2018/09/24/the-ethics-of-transhumanism-and-the-cult-of-futurist-biotech/), is a religious faith system.

We see this faith system most prominently in the delusions of Elon Musk. Musk desires the realisation of Nietzsche’s ‘Übermensch’ (superman) and, with it, all the vices of a megalomaniac. The myths of transhumanism are maintained by extensive semiotics based on the myth of brain-as-computer. None of this is true. We even have absurd ideas that AI can somehow have some understanding of the unconscious and dreaming:

With all this in mind, let us now turn our attention to the risk and safety industry.

Metaphors, Meaning and Language

Whilst people get excited about Large Language Models, there can never be an AI that ‘understands’ what it generates. AI has no lived experience or embodied being with which to understand the meaning of language, its anchored gestures or the contradictions of metaphorical language. Metaphors are learned through embodied, gestural experience, and we often understand text by knowing what something isn’t. Poetical language cannot be ‘understood’ by AI. Life is not lived in a dictionary understanding of text. We know a great deal as humans by what is NOT said and by interpreting paralinguistic knowing.

Detecting if Someone Uses ChatGPT

https://writingcooperative.com/im-a-professional-editor-and-these-phrases-tell-me-you-used-chatgpt-23236708918f
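The linked article above describes stock phrases that an editor associates with ChatGPT output. As a rough illustration of the idea only (not the editor's actual method), a simple phrase scan can flag candidate text; the phrase list below is a hypothetical sample, not taken from the article.

```python
# Hypothetical sample of stock phrases often attributed to AI-generated
# prose; a serious checker would use a much larger, curated list.
TELLTALE_PHRASES = [
    "delve into",
    "in today's fast-paced world",
    "it is important to note",
    "a testament to",
    "as an ai language model",
]

def flag_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

sample = "In today's fast-paced world, safety teams must delve into AI."
print(flag_phrases(sample))  # ['delve into', "in today's fast-paced world"]
```

Of course, a keyword scan of this kind cannot ‘understand’ anything either; it only surfaces passages for a human reader to judge.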

Push-Back

Not all of the spin and propaganda about AI hits the spot. Remember Google Glass? Remember the billions spent on the prediction that we would all be wearing glasses that could read the environment and other people? It is the tale of yet another spectacular failure (https://www.investopedia.com/articles/investing/052115/how-why-google-glass-failed.asp). It turns out that humans don’t want to be scanned and probed for data.

This is just one example of push-back against the delusion that AI will ‘take over’ (https://www.theguardian.com/technology/2024/jan/07/artificial-intelligence-surveillance-workers). People want to live in a human world, not one dictated by machines. Google Glass is just one of a string of spectacular failures (https://www.heraldnet.com/opinion/google-glass-joins-list-of-spectacular-failures/).

As much as some geeks would love face recognition everywhere, many push back because they understand the ethical need for privacy.

Academia

We now know that as AI is brought into more sectors, it also brings with it unseen trade-offs and by-products that work against the very foundations of the profession that adopts it.

https://theconversation.com/vague-confusing-and-did-nothing-to-improve-my-work-how-ai-can-undermine-peer-review-251040?

Risk and Safety

Let’s explore the way in which AI is being marketed in and to the risk and safety sector. We will do this by discussing some current AI ideas being marketed to the risk and safety industry.

An AI Risk Score Card

(https://www.centreforwhs.nsw.gov.au/tools/ethical-use-of-artificial-intelligence-in-the-workplace-final-report)

Firstly, AI cannot ‘sense’ risk, neither can it ‘understand’ behaviours or ‘think’. Secondly, AI has no ‘sense’ of the emotional pressures of work or any comprehension of the hidden and unconscious factors that motivate and facilitate decision making. It cannot determine heuristics or sense psychosocial pressures that cannot be measured or observed. (None of the key drivers of risk and safety decision making can be measured.) Thirdly, AI has no real-world or lived experience in the workplace. All the data fed into AI about hazards and risks is about past events. In this way, AI is a machine that talks about the past; it can have no ‘imagination’ about the future.

Avoid this tool at all costs. It is a dangerous and misleading idea.

Risk Assessment

(https://www.hse-network.com/ai-in-health-and-safety-how-artificial-intelligence-is-transforming-risk-assessment/).

AI does not ‘revolutionize’ risk assessment and at best can only compile lists of what is fed into it from the past. AI cannot predict what it doesn’t know and has no ‘imagination’ with which to ‘explore’ risk potential.

Just look at the language of this propaganda. The first line is a give-away: ‘The capabilities of AI do not stop to amaze as technology continues to evolve.’ Words such as ‘amaze’ and ‘evolve’ are faith statements based on the author’s attributions. There is no evidence for either of these claims. AI has no ‘life’ with which to ‘evolve’.

Everything in this article is AI-faith: emotive language, promises and direct misinformation about the capabilities of AI. For example, its use in the health sector has been a disaster; it is not instrumental in that sector.

AI cannot predict hazards and it cannot replace inspectors. Unchecked AI has been demonstrated to be dangerous and harmful.

Nothing in this article is news; it is propaganda. None of the claims are true, nor supported by evidence. It is nothing more than a faith-in-AI ‘puff piece’. Look at language such as ‘AI insight’; this is fairy-tale stuff.

Avoid AI in safety propaganda like this.

Journalism and AI

One of the by-products of using AI in some professions is already a loss of integrity and critical thinking, and rising laziness:

Movie Myths Generating AI Myths

We know that many confuse the myths of the movies with real life; indeed, many conspiracy theories come from what AI can do on the screen. This raises great concern about the need for critical thinking.

AI Misinformation and Disinformation

Similar misinformation and disinformation abound, such as:

A sure give-away in all of this propaganda/spin is that there is never any mention of ethics, by-products, the need for caution, problems or limitations. The propaganda of safety and AI is populated by absurd claims of benefit without evidence and little mention of the risks and harms to persons.

Most of what appears in the pages of risk and safety journals about AI is sheer fantasy, wishing, naïve promises and a demonstrable faith in AI.

Most of the stuff purporting to be about AI in the risk and safety industry is marketing, spin and fiction. The evidence for this is overwhelming. All one has to do is undertake Critical Discourse Analysis (CDA) of any text in safety about AI and analyse its metaphors, emotive claims and lack of evidence.

Language and AI, Metaphors and Myths

The best way to pull apart the myth and faith statements made about AI is to use critical thinking, critical discourse analysis and skills in linguistics to deconstruct and challenge truth claims. Just look at the metaphors used in a text. Look at the truth claims and deconstruct the language in the text.
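The surface level of this kind of check can be partially mechanised as a crude first pass before closer reading. The sketch below is a rough illustration only, with an assumed word list; it is no substitute for proper Critical Discourse Analysis, which examines metaphors and truth claims in context rather than counting words.

```python
import re
from collections import Counter

# Assumed sample of emotive, faith-laden marketing terms; a genuine CDA
# examines metaphors and truth claims in context, not mere word counts.
LOADED_TERMS = {"revolutionise", "revolutionize", "amazing", "amaze",
                "transform", "evolve", "insight", "powerful", "predict"}

def loaded_term_counts(text: str) -> Counter:
    """Count loaded marketing terms in text using a crude prefix match."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = Counter()
    for word in words:
        for term in LOADED_TERMS:
            if word.startswith(term):
                hits[term] += 1
    return hits

ad = "This amazing AI will revolutionise safety and predict every hazard."
print(loaded_term_counts(ad))
```

A high density of such terms in a piece of marketing is a prompt for the human reader to deconstruct its truth claims, not a verdict in itself.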

AI Can’t ‘Learn’

The language of ‘learning’ and ‘intelligence’ for Artificial Intelligence is, of course, a misnomer. Some of the best material discrediting such belief is by Roger Penrose (English mathematician, mathematical physicist, philosopher of science and Nobel Laureate in Physics):

Mimetics is not How Humans Acquire Language

The assumption built into Large Language Models, that language is acquired through imitation, is not supported by any evidence about how humans acquire language. The behaviourist idea that humans learn language through mimetics was torn to shreds by Chomsky 50 years ago. Some better research is by Fuchs:

The Challenges of AI for the Risk and Safety Industry

The following are major challenges faced by the risk and safety industry concerning AI.

  1. Unfortunately, the risk and safety industry has no well-developed, mature or intelligent Ethic of Risk. For example, the AIHS BoK Chapter on Ethics is one of the most unprofessional and amateurish pieces ever published on ethics. This BoK chapter makes no mention of the nature of power, confuses morality and ethics, makes no mention of care or relational ethics, proposes naïve approaches to ethics like ‘check your gut’ and aligns ethics to duty without any reference to deontological bias. Unless the industry develops a mature and balanced approach to ethics, it will never be professional.
  2. The Risk and Safety curriculum doesn’t include education in Ethics, Politics, Generalist Studies, Culture or Critical Thinking. This means that propaganda and spin thrive throughout an industry more energised by slogans than substance. This is evidenced by the volume of propaganda about AI, none of it supported by demonstrable evidence that any of it ‘works’. Indeed, any critical thinking or criticism of safety is deemed non-compliant and anti-safety. In this way the industry never seeks debate or a contestation of ideas. Of course, with no education in Linguistics or Critical Discourse Analysis (CDA), Safety doesn’t even have the tools to analyse its own discourse or its trajectory.
  3. Data is not information, and no amount of data makes the perception of risk easier or better. The fixation of the risk and safety industry on data simply leads to ‘flooding’ and cognitive overload. Indeed, the flood of data generated by AI outputs actually makes the job of risk assessment more challenging and riskier. This leads to a vicious cycle in which trust in AI is generated simply because humans can’t process excessive amounts of meaningless data.
  4. The Safe Work Australia Emerging Challenges – Australian Work Health and Safety Strategy 2023–2033 makes no mention of ethics or the ethical challenges associated with WHS and the future of work (https://www.safeworkaustralia.gov.au/awhs-strategy_23-33/context/emerging-challenges). This is the kind of naïve approach to futures one expects from safety associations and statutory authorities that confuse psychosocial risks with ‘hazards’ and remain ignorant of ethics. Ethical competence, moral meaning and the by-products and trade-offs associated with the adoption of AI receive no mention in any discourse provided by statutory authorities or associations.
  5. A lack of critical reading in the risk and safety sector, which cycles within its own bubble, means that significant works on the threats of Technique (Ellul) and Technopoly (Postman) are unknown. Unfortunately, whenever Safety wants to know something it searches for an insider safety source for knowledge. This creates an extraordinary culture of naivety and insular thinking. What is more, a complete naivety about Religious Studies enables the risk and safety industry to be one of the most religious disciplines on the planet. This is due to a lack of critical thinking, a poor curriculum, belief in objectivity and an inability to reflect on ideology and philosophy. The soteriology (theory of salvation) of safety and its preoccupation with ‘saving lives’ has led to an anchoring in the ideology of zero and all the faith-myths required to believe in it. This blindness also facilitates a naivety about faith in AI.
  6. AI can only ‘see’ what it is programmed to ‘see’. AI is simply software in hardware that processes the data it is fed, including the bias of the risk and safety industry, which still understands psychosocial health as a ‘hazard’. The idea that AI can be of any assistance in psychosocial and cultural dimensions is sheer fantasy. AI cannot ‘sense’ anything, and what it ‘sees’ carries no understanding of the lived world. Therefore, it has no sense of perception or ability to interpret what it observes. AI has no sense of the human subjectivity required to perceive and assess risk according to experience. It cannot understand human motivation or any emotional drive in being.
  7. Much of what appears in WHS environments regarding AI are projections and attributions. It is as if stating something as true by some so-called authority makes it so. Unfortunately, the risk and safety industry rarely seeks out expertise in any field other than its own; rather, it uses sources from within OHS to provide so-called leadership on ethics, AI, culture and Linguistics. Some examples are:

8.    When it wants to know anything about AI, culture, Linguistics, mental health or Ethics, Safety assumes the best people to consult are those with no expertise in the matter. For example:

9.    Unfortunately, without Critical Discourse Analysis it is relatively easy to sell silver bullets and mythical ideology to an industry desperate to achieve zero. This fosters a naïve belief and faith in AI as if AI can do human work. If risk and safety were only about crunching data then AI would be useful. But, all that is human, fallible, mortal and subjective about lived experience is beyond AI. Most importantly, AI has no perception to interpret data in a moral and ethical way for the good of persons.

What to Do

1.    Don’t believe propaganda and marketing that claims AI can do what it cannot and never will do.

2.    Look at any semiotics that accompany a story on AI and safety and see if it seeks to affirm the brain-as-computer myth. If so, don’t waste any time reading it. Here are some examples of what you shouldn’t waste your time on:

Source: https://www.thesafestep.com.au/enhancing-workplace-safety-with-ai

Source: https://safetypedia.com/safety/artificial-intelligence-in-health-and-safety/

Source: https://ohsonline.com/articles/2024/02/16/acquiring-workplace-safety-in-the-data-age-via-ai-innovations.aspx

Source: https://safetyatworkblog.com/2023/01/27/chatgpt-article-on-psychosocial-hazards-at-work/

Source: https://rdariverina.org.au/news/2021/9/29/creating-a-new-digital-tool-to-help-manage-ai-whs-risks

Source: https://www.modsoft.com.tr/blog/artificial-intelligence/can-prevent-accidents-occupational-health-safety-using-artificial-intelligence-image-processing-methods-implement-proactive-measures/

Source: https://www.techedt.com/google-unveils-ai-model-that-shows-its-reasoning-process

Source: https://www.britsafe.org/safety-management/2024/ai-a-powerful-new-tool-for-managing-safety-risks

3.    One of the best ways to detect spin and propaganda in AI and safety is to see what the accompanying semiotics say. Most of what is anchored to discourse about AI and safety endorses myths about AI. Myths are made powerful when accompanied by semiotics that confirm the myth.

4.    Use AI for what it is designed for: crunching data and producing outputs that then need human interpretation and analysis.

5.    Don’t be seduced into the idea that AI will create a safer workplace.

6.    Don’t be seduced by the idea that the future can be predicted, least of all by AI. Anything that goes into AI is data about the past. You cannot manufacture prediction about the future from volumes of data about the past. If risk is about the unknown, how is AI going to tell you what it doesn’t know?

7.    Be extremely cautious about any claims that AI can interpret human behaviour. Don’t use AI to ‘monitor’ human behaviour; it is a minefield to leave the assessment of human action to a machine that has no lived experience.

8.    Don’t give any credibility to the marketing of AI that ignores the ethical, moral and cultural implications of what is being ‘sold’.

9.    If robotics are being used to replace humans remember to monitor the robots.

10. Be smart about attributions and the language of marketing. Hope, promises and projections are NOT reality. If you see language like ‘AI revolutionising safety’ (https://www.myosh.com/news/ai-powered-tool-revolutionises-workplace-safety-in-australia), ignore it. There is no substitute for human perception, subjective analysis, social meaning and relational action.

11. Remember that regardless of how AI is used, the Law, OHS Act and Regulation will hold any PCBU responsible for any decision making in the workplace, including outcomes from faith, belief and trust in AI.

12. Don’t be seduced into the idea that volume and substance equal significance. This is one of the foundation traps in risk and safety and why all the excess of paperwork doesn’t make people any safer.

13. Don’t use AI for surveillance.

14. One of the main hindrances to safety with AI is that Associations and Regulators are sponsors of zero ideology. The quest for this absolute blinds risk and safety to critical thinking and creative non-compliance.

15. There is no substitute for human-to-human person-to-person engagement in observation and conversations. The key to tackling risk is lived experience not anything that is ‘artificial’.

The Need for Wisdom and Imagination

The purpose of this discussion has been to engender caution and wisdom and to raise the alarm bells about harms associated with AI. It is one thing to run around the safety world screaming zero harm from the rooftops and quite another to push the early adoption of AI as if there is no harm associated with it.

References

Atkinson, R., and Moschella, D., (2024) Technology Fears and Scapegoats. 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy.  Palgrave Macmillan. Cham, Switzerland.

Banafa, A., (2024)  Introduction to Artificial Intelligence. Routledge. New York.

Borovick, H.,  (2024). AI and the Law, A Practical Guide to Using Artificial Intelligence Safely. Apress. New York.

Buolamwini, J., (2023)  Unmasking AI, My Mission to Protect What is Human in a World of Machines. Random House.  New York.

Boylan, M., and Teays, W., (eds.) (2022)  Ethics in the AI, Technology, and Information Age.  Rowman and Littlefield.  New York.

Coeckelbergh, M.,  (2020)  AI Ethics.  MIT Press. Cambridge Mass.

Dubber, M., Pasquale, F., and Das, S., (2020) The Oxford Handbook of Ethics and AI.  Oxford Uni Press.  London.

Ellul, J., (1964)  The Technological Society.  Vintage Books. New York. (https://monoskop.org/File:Ellul_Jacques_The_Technological_Society.pdf)

Hasselbalch, G.,  (2021) Data Ethics and Power. A Human Approach to Big Data and AI Era. Edward Elgar Publishing. Cheltenham.

Hendrycks, D., (2025)  Introduction to AI Safety, Ethics, and Society. CRC Press. Boca Raton. FL.

Madsbjerg, C., (2017)  Sensemaking, The Power of the Humanities in the Age of the Algorithm. Hachette Books.  New York.

Postman, N., (1992) Technopoly, The Surrender of Culture to Technology. Vintage Books. New York. (https://interesi.wordpress.com/wp-content/uploads/2017/10/technopoly.pdf)

Powell, J., and Kleiner, A., (2023) The AI Dilemma, 7 Principles for Responsible Technology. Berrett-Koehler Publishers. Oakland CA.

Richterich, A.,  (2018) The Big Data Agenda, Data Ethics and Critical Data Studies. University of Westminster Press. London.

Santoni de Sio, F., (2024)  Human Freedom in the Age of AI. Routledge. New York.

Vallor, S., (2024) The AI Mirror, How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford Uni Press. London.

[1] What is ‘faith in AI’? Faith is about hope without evidence. Faith can be religious in nature, but the term is not locked into religious discourse alone. For example, I have faith that my car will start when I turn the key, but I have no guarantee it will start until it does. I can have faith that my son is where he said he is, but until I have evidence to the contrary, I don’t know. Faith trusts in something without evidence. Faith lives ‘as-if’ there is some assurance about the thing believed.

