The Dangers of AI for Safety

I don’t use AI, never have, and have no intention of doing so.

We’ve done two videos on the dangers of AI for safety here: https://safetyrisk.net/is-ai-safe-two-videos/

And, have written on the ethics of using AI in safety here: https://safetyrisk.net/ai-ethics-and-ai-safety/

My last newsletter was devoted entirely to the challenges of AI to risk and safety: https://safetyrisk.net/quartely-newsletter-spor-news-april-2025/

It was pleasing to see that Australian authors are resisting the use of AI, even when lucrative payment is on the table: https://theconversation.com/new-research-reveals-australian-authors-say-no-to-ai-using-their-work-even-if-money-is-on-the-table-257243?

I understand the seduction of making things easy and taking shortcuts. I also understand the pressures of time for output. But deep down I have always been a person committed to Education and Learning. My journey into learning about learning started in 1969, and I have written about this journey here: https://www.humandymensions.com/product/tackling-risk/

I was so lucky in my development and at university to be mentored by some of the best educators in Australia.

Accordingly, I have devoted my life to Education and Learning, including many years of research and study into the nature of learning. This is what is known as ‘meta-learning’ and the philosophy of learning.

What drew me into engagement with the safety industry, and continues to do so, is its inability to learn and to understand learning. Even when it uses the word ‘learning’, it is not often about learning. Training and indoctrination are NOT learning.

One of the greatest trade-offs in using AI is the loss of learning.

What happens in the use of AI is that process is sacrificed for outcome.

Pushing outputs for safety is not about learning. Even when I see groups push the idea of ‘learning teams’, it’s about training, NOT learning. You don’t learn about learning from safety; that is not how the industry was set up. Education and learning are NOT about outcomes and outputs; this is where a preoccupation with performance takes you. Learning is NOT about performance.

The purpose of learning is not about measuring performance; indeed, it’s not about assessment. Even though there are over 16 different types of assessment (https://cloudassess.com/blog/types-of-assessment/), the measurement of learning and comprehension is still focused on outputs, NOT the quality of personhood. The purpose of learning is about being, living and learning, NOT how one performs. This is also why ethics is foundational to any notion of learning. That is, the way that learning is fostered and facilitated is just as important as any outcome.

This is why the purpose of safety should never be about the outcome of safety. This is why zero draws the industry away from learning, because its focus is on a numeric outcome not the ethical development of persons. And yet, when Safety wants to know about ethical learning it asks an engineer! (https://safetyrisk.net/safety-the-expert-in-everything-and-the-art-of-learning-nothing/)

This is the safety way.

Keeping everything ‘in-house’ ensures that safety doesn’t learn. Safety knows what it knows and knows that sources outside of safety don’t know what matters. In this way, Safety knows that any worldview outside of itself is wrong and ought not to be consulted. This is how to inhibit learning.

If you want to learn about learning, perhaps start by reading about Learning.

So, I don’t and won’t use AI because learning matters.

 

https://safetyrisk.net/the-dangers-of-ai-for-safety/