AI and Safety, Brutalism on Steroids


There has been so much hype about AI recently and so little discernment about what is hidden in the language and discourse of all the ‘noise’ on the issue. Much of what Ellul described 70 years ago is directly relevant to this discussion.

Ellul was a Sociologist, Philosopher and Lawyer who knew how principles and archetypes have a power, force and energy of their own, much greater than the people within them. Safety is one such Archetype.

Ellul discusses the principle of Technique: the ultimate quest for efficiency. In many ways, this quest for efficiency demonstrates an inability to cope with, understand and manage fallibility (https://www.humandymensions.com/product/fallibility-risk-living-uncertainty/).

When efficiency becomes the purpose, energy and force that drives decision making, humans (the bearers of fallibility) suffer; humans are harmed. When efficiency becomes the reason for being, then in comes another Archetype/principle to help it along: Propaganda, also discussed by Ellul 50 years ago. The quest for zero is the quest for efficiency, and so it is also a quest to harm.

We see these two forces, Technique and Propaganda, come together in so much of the recent hype about AI. At least those like Andrew Keen clearly name what is going on (https://engelsbergideas.com/notebook/chatgpt-what-could-possibly-go-wrong/). Of course, in an age of so little discernment (https://www.humandymensions.com/product/real-risk/), so much misinformation and so little critical thinking, the industry of Safety is ripe for the picking by the pitch of the AI Saviour.

The inevitable move of Safety into AI will no doubt be heralded as the great panacea for tackling risk but, don’t get too excited.

The latest research shows that if Safety moves into AI it will be more brutal than ever (https://neurosciencenews.com/ai-judge-rules-23238/)! Of course, this is why Safety is so attracted to it. There is nothing more satisfying to the schadenfreude of safety than harming someone in the name of good (https://safetyrisk.net/when-safety-delights-in-i-told-you-so/). There is nothing quite so delightful for blame and denial as lecturing others on the sins of fallibility.

The current level of propaganda about AI is sickening, including stupid talk of AI being sentient and conscious. How mad can people be to believe this stuff? Machines cannot be conscious, be fallible or have an unconscious. Machines cannot ‘think’, feel or ‘learn’. The repetition of disembodied actions is not learning. Yet so much hype and even fear is being circulated:

· https://www.theguardian.com/commentisfree/2023/may/26/future-ai-chilling-humans-threat-civilisation

· https://www.bbc.com/future/article/20230405-why-ai-is-becoming-impossible-for-humans-to-understand
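
To make the point concrete, here is a minimal sketch (in Python, invented for illustration and not drawn from any particular AI system) of what machine ‘learning’ actually amounts to under the hood: the repeated numerical adjustment of parameters to shrink an error score. There is no body, no feeling and no understanding in it, only arithmetic on data.

# A minimal sketch of machine "learning": fitting y = w*x + b by gradient descent.
# The "learning" is nothing more than repeated arithmetic that nudges two numbers.

xs = [1.0, 2.0, 3.0, 4.0]   # input data
ys = [2.0, 4.0, 6.0, 8.0]   # target outputs (here, y = 2x)

w, b = 0.0, 0.0             # the "model": just two numbers to be adjusted
lr = 0.01                   # learning rate: how large each adjustment is

for step in range(2000):
    # average gradient of the squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # the entire act of "learning": subtract a fraction of the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted parameters: w={w:.3f}, b={b:.3f}")   # converges towards w=2, b=0

Nothing in this loop resembles embodied human learning; it is curve-fitting, and scaling it up to billions of parameters changes the quantity of arithmetic, not its nature.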

Much of this fear and anxiety is based on disinformation, Cartesian brain-centrism, and engineering and behaviourist philosophy, the same methodologies that drive the IT sector.

BTW, the engineering, behaviourist and Transhumanist ideologies have no interest in moral philosophy, theology or ethics. This is why so much language about AI is Transhumanist, unethical and religious in nature.

This is why AI will be so seductive to an industry so easily seduced by utopian (https://safetyrisk.net/utopian-language-and-the-quest-for-perfection-in-safety/) and eugenic ideas (https://safetyrisk.net/safety-eugenics-and-the-engineering-of-risk-aversion/), Transhumanism (https://safetyrisk.net/zero-as-a-transhumanist-quest/) and the religious promises of Zero (https://safetyrisk.net/the-spirit-of-zero/).

 

The Safety industry is a hotbed for the propaganda of AI.

So, let’s take a deep breath for a moment and ask what is behind all this AI hype. Maybe just apply some simple critical questions from the SPoR Critical Political tool on understanding social politics.

[Image: critical questions from the SPoR Critical Political tool]

When we apply these questions and some simple Discourse Analysis, we can quickly discern what lies behind much of the disinformation and propaganda that floats about serving hidden interests in AI.

So far, Safety shows no interest in Discourse Analysis which is why it is so good at speaking nonsense to people (https://safetyrisk.net/safety-experts-in-speaking-nonsense-to-people/).

The first thing we need to remember is that AI is designed. Computers are ‘programmed’: a box with a set of algorithms, run by computers connected to other computers. They are not, and never will be, brains; they will never be embodied and will never ‘live’ or ‘be’. AI is artificial!

· https://www.theatlantic.com/ideas/archive/2023/05/humans-ai-jacy-reese-anthis-sociologist-perspective/673972/

· https://www.cbc.ca/news/science/ai-consciousness-how-to-recognize-1.6498068
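
In the same spirit, here is an illustrative sketch of what sits inside the ‘box’ once a system has been trained (the weights below are invented for the example): a fixed set of numbers and a deterministic calculation. Given the same input it returns the same output every time, with no awareness of what either means.

import math

# Invented example weights; a real system simply has vastly more of these numbers.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = 0.1

def ai_output(inputs):
    # A "prediction" is a weighted sum pushed through a squashing function.
    total = sum(w * x for w, x in zip(WEIGHTS, inputs)) + BIAS
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid: maps the sum to a number in (0, 1)

# Same input, same output, every time: computation, not 'thought'.
print(ai_output([1.0, 0.0, 2.0]))
print(ai_output([1.0, 0.0, 2.0]))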

What is scarier than AI is the behaviourist and engineering worldview that pushes the nonsense Transhumanist claim that AI is somehow ‘real’. So much of this is Cartesian nonsense with no understanding of fallibility, the embodied person or the Existentialist Phenomenology of human ‘being’. Think of all the things that AI isn’t and can never have:

· Skin knowing

· Feeling

· Emotions

· Heart

· Gut

· Soul, spirit

· Nervous, Endocrine, Reproductive, Immune systems

· Knowing through being

· Unconscious

· Dreaming

· Fallibility

· Personhood

· Ethics

The list is endless. The whole focus of AI is on data and output, not process and learning, and as such it is a dangerous erosion of what we understand being human to be.

· https://www.forbes.com/sites/bernardmarr/2019/05/07/artificial-intelligence-is-creating-a-fake-world-what-does-that-mean-for-humans/?sh=2b37f56032bf

· https://podglomerate.com/?episode=gary-marcus-why-smart-machines-will-probably-never-replicate-the-human-act-of-writing-and-how-writers-should-view-ai-suspiciously-like-a-hawk

None of this offers anything that will help humanise safety or help humans better tackle risk. Indeed, according to the research, any move of Safety into AI will simply deliver more brutalism and harm.

Of course, none of this is going to stop Safety from running full tilt at yet another delusion. As long as safety runs away from an understanding of fallibility, it will always be seduced by nonsense and the promise of simple fixes.

A reading of any of the following will quickly dispel any of the nonsense being circulated about AI:

· Claxton, G., (2009) The Wayward Mind, An Intimate History of The Unconscious. Abacus. London.

· Claxton, G., (2015) Intelligence in the Flesh. Yale University Press. New York.

· Colombetti, G., (2014) The Feeling Body, Affective Science Meets the Enactive Mind. MIT Press, London.

· Damasio, A., (1994) Descartes’ Error, Emotion, Reason, and The Human Brain. Penguin, New York.

· Damasio, A., (1999) The Feeling of What Happens, Body and Emotion in the Making of Consciousness. Harvest Books, New York.

· Damasio, A., (2003) Looking for Spinoza, Joy, Sorrow and the Feeling Brain. Harvest Books. New York.

· Damasio, A., (2010) Self Comes to Mind, Constructing the Conscious Brain. Pantheon Books. New York.

· Damasio, A., (2018) The Strange Order of Things, Life, Feeling and the Making of Cultures. Pantheon Books. New York.

· Damasio, A., (2021) Feeling and Knowing, Making Minds Conscious. Pantheon Books. New York.

· Durt, C., Fuchs, T., and Tewes, C., (eds.) (2017) Embodiment, Enaction, and Culture. MIT Press. London.

· Fuchs, T., (2018) Ecology of the Brain, The Phenomenology and Biology of the Embodied Mind. Oxford University Press. London.

· Fuchs, T., (2021) In Defense of the Human Being, Foundational Questions of an Embodied Anthropology. Oxford University Press. London.

· Ginot, E., (2015) The Neuropsychology of the Unconscious, Integrating Brain and Mind in Psychotherapy. Norton. New York.

· Johnson, M., (1987) The Body in the Mind, The Bodily Basis of Meaning, Imagination and Reason. University of Chicago Press. Chicago.

· Johnson, M., (2007) The Meaning of the Body, Aesthetics of Human Understanding. University of Chicago Press. Chicago.

· Johnson, M., (2014) Morality for Humans, Ethical Understanding from the Perspective of Cognitive Science. University of Chicago Press. Chicago.

· Johnson, M., (2017) Embodied Mind, Meaning and Reason. How Our Bodies Give Rise to Understanding. University of Chicago Press. Chicago.

· Lakoff, G., and Johnson, M., (1980). Metaphors We Live By. University of Chicago Press. Chicago.

· Lakoff, G., and Johnson, M., (1999). Philosophy in the Flesh, The Embodied Mind and Its Challenges to Western Thought. Basic Books, New York.

· Macknik, S., and Martinez-Conde, S., (2010) Sleights of Mind, What the Neuroscience of Magic Reveals About Our Everyday Deceptions. Henry Holt Co., New York.

· Meyer, C., Streeck, J., and Jordan, J. S., (2017). Intercorporeality, Emerging Socialities in Interaction. University of Chicago Press. Chicago.

· Noe, A., (2009) Out of Our Heads, Why You Are Not Your Brain and Other Lessons from The Biology of Consciousness. Hill and Wang. New York.

· Norretranders, T., (1991) The User Illusion, Cutting Consciousness Down to Size. Penguin. London.

· Panksepp, J., (1998) Affective Neuroscience, The Foundations of Human and Animal Emotions. Oxford University Press. London.

· Ravven, H., (2013). The Self Beyond Itself, An Alternative History of Ethics, the New Brain Sciences and the Myth of Free Will. The New Press. New York.

· Ramachandran, V. S., (2004) A Brief Tour of Human Consciousness. PI Books, New York.

· Robinson, K., (2011). Out of Our Minds, Learning to Be Creative. Capstone. London.

· Thompson, E., (2010) Mind in Life, Biology, Phenomenology, and the Science of the Mind. Belknap Press. London.

· Tversky, B., (2019) Mind in Motion, How Action Shapes Thoughts. Basic Books. New York.

· Van Der Kolk, B., (2015) The Body Keeps the Score, Brain, Mind and Body in the Healing of Trauma. Penguin, New York.

· Varela, F., Thompson, E., and Rosch, E., (1993) The Embodied Mind, Cognitive Science and Human Experience. MIT Press, London.

BTW, you won’t find any of these in the AIHS BoK or any safety curriculum across the globe.

So, where does this leave us? Guess what? Fallibility is not the enemy but rather the blessing of being a human person. The key is finding a methodology and methods that better help human persons tackle risk, and this is what SPoR does, and it works (https://www.humandymensions.com/product/it-works-a-new-approach-to-risk-and-safety/).

If you want to learn how to better tackle risk in a way that is Positive, Constructive, Practical, Rational, Visual, Verbal, Social, Relational, Person-Centric, Respectful, Ethical and Real, then it’s as easy as an email. Or join us for workshops commencing soon in Vienna, or later in Canberra in September.

These workshops will help you move away from the engineering/behaviourism/positivism approach to safety and from identifying with Safety and the myth of objectivity, and help you discover methods that actually work to humanise risk (https://www.humandymensions.com/product/it-works-a-new-approach-to-risk-and-safety/).


