AI, Ethics and AI Safety

I read with interest the LexisNexis white paper Privacy and AI: Navigating Australia’s Shifting Regulatory Landscape. Apart from being a fairly typical marketing ploy, it contains no discussion of ethics, nor of the critical need to understand Technique (Ellul, Postman).

The idea that machines ‘learn’ is of course nonsense, because learning requires several essential things: consciousness, emotions, embodied being and mortality. Machines don’t ‘learn’; they repeat algorithms that modify algorithms. Computers are ‘programmed’ and ‘designed’; they don’t have a life of their own. Switch off the power source and they cease to exist. They can’t, and never will, have a consciousness about learning. All the fear of AI we read in the media, and the mythology of AI pushed by tech figures like Musk, is myth. This includes nonsense such as claims that robots will take over the world and people will have computer chips in their heads (https://www.scientificamerican.com/article/elon-musks-neuralink-has-implanted-its-first-chip-in-a-human-brain-whats-next/).

Most of the mythology constructed around AI rests on the erroneous metaphor of the computer applied to the human brain. None of it is true.

The human brain is nothing like a computer, and human decision making is 99% embodied. The brain doesn’t direct decision making but rather is informed by the body about what has already happened. Read any of these sources in neuroscience to find out more: https://safetyrisk.net/essential-readings-neuroscience-and-the-whole-person/

This week we read in the news about DeepSeek: just another Artificial Intelligence system that crunches data and spits out whatever it is programmed to produce.

What is most concerning about risk and safety moving into this AI space is that the industry is not equipped even to begin considering the moral and ethical issues associated with AI’s use. AI is not neutral or objective; it is ‘designed’, and if the designers maintain the myth of objectivity in all this, then many will be harmed.

How fascinating that a risk management company like LexisNexis offers no ethical comment on the issue of privacy and AI.

I have written before about ethics and AI, referencing the wonderful book by Hasselbalch (2021), Data Ethics of Power: A Human Approach in the Big Data and AI Era.

Unfortunately, the risk and safety industry has no foundation for the study of ethics in its curriculum and so is poorly equipped even to start a conversation about the threats of AI to human well-being, work and privacy.

When the best the AIHS can do on ethics is a BoK chapter that doesn’t even discuss the notion of power, you already know that Safety is naïve and incompetent in any space that considers ethics and moral philosophy. Being sensitised to power is the beginning of thinking about ethics (https://safetyrisk.net/being-sensitised-to-power-in-safety/).

If you are interested in learning about an Ethic of Risk we offer free books and programs that can help (https://cllr.com.au/product/an-ethic-of-risk-workshop-unit-17-elearning/).

https://safetyrisk.net/ai-ethics-and-ai-safety/