The Safety Anthem, Control All and All in Control


One of the more notable quotes in Hauskeller’s book Mythologies of Transhumanism states (p. 62):

‘There is an obvious gap between what we want to do, or wish we could do, and what we actually can do due to the limitations of our bodies. In order to close this gap, we are supposed to turn our bodies into machines, that is, into something that completely serves our purposes. We need to control them (our bodies) so that they can no longer control us.’

And as I read that statement, ‘We need to control them so that they can no longer control us’, I realised that this is the anthem for Safety.

In an industry that seeks total control (zero); where injury and death are demonised and counted as the enemy; where the body is made problematic; where harm is deemed evil; where humans are made the enemy of safety (https://safetyrisk.net/and-the-enemy-of-safety-is-humans/); and where uncertainty is made certainty through the enactment of systems, rituals and myths; there is no understanding of the purpose and meaning of fallibility (https://www.humandymensions.com/product/fallibility-risk-living-uncertainty/).

We find this too in the many faith statements made by the Transhumanist cult:

  • Google Wants to Give Your Computer a Personality (Eadicicco)
  • A Google engineer says AI has become sentient. What does that actually mean? (McQuillan)
  • Google creates AI that can teach itself and ‘isn’t constrained by the limits of human knowledge’ (Sulleyman)
  • The Google engineer who thinks the company’s AI has come to life (Tiku)

What all this demonstrates is an extraordinary level of faith, belief in mythology and an absurd affirmation of a non-sense notion (or absence) of personhood. It also demonstrates an absence of critical thinking and discernment at the most basic and fundamental level.

Of course, there is no articulation of personhood in these articles (nor of safety), and this creates every opportunity to demonise persons and enact unethical processes in the name of good. This is what comes from a dualist focus, a behaviourist focus and an ideology that seeks to disembody humanity. The body and fallibility are the problem. After all, the mind is pure (supposedly) but the body brings decay, suffering, harm, pain and misery.

And so, the argument goes (by Safety and Transhumanists) something like this: Surely you would like to improve the human condition? Surely, you want to eliminate harm? Surely, if there was bio-enhancement or technological options to eradicate fallibility, this would be a good thing? Surely, if we could put a human mind in a machine, a human could live forever (with regular maintenance)? Surely, death and harm are evil? Surely suffering should be eradicated?

What is hidden in these questions, and in their ideology, is a worldview and ethic that has no understanding of fallibility, the meaning of personhood or human ‘being’. Indeed, it never speaks of them.

It was Karl Jaspers who stated ‘every form of possible human perfection proves upon reflection to be defective and unachievable in reality’.

The skills required to discern what is happening in these quests for zero, perfection and the eradication of fallibility rely on an ability to listen to the affirmations and silences in the language put forward by Transhumanist and Safety ideology. Of course, this is why most in safety see no problem with the ideology of zero, or with stating that the impossible is possible!

This is why understanding mythology (https://spor.com.au/september-canberra-workshop/) is at the foundation of tackling risk.

This is why Safety doesn’t want to study mythology, and doesn’t confess to its faiths or to its naïve assertions (https://safetyrisk.net/zero-as-a-transhumanist-quest/), which it states as ‘fact’ and ‘science’ when they are neither.

The best way to generate a myth in safety is to state something by what it is not and then anchor it to a symbol (https://safetyrisk.net/believing-non-sense-in-safety/). This is the foundation for all faith-based ritual and myth. This is why understanding semiotics is foundational to demythologising safety (http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S0259-94222016000400011).

So, let’s have a quick look at these assertions made at Google that machines can have personalities, sentience and being.

In the article that claims AI has come to ‘life’, we note that it begins with the opening of a laptop. Hmm, is this like how humans wake up? Apparently so. And who opens you up each morning? God? This is all ghost-in-the-machine stuff (https://kwanj.files.wordpress.com/2016/02/ryle-the-ghost-in-the-machine.pdf).

And all of this because a computer can mimic speech? Mimicking speech is not how language is acquired, and the computer has no consciousness of itself, nor an unconscious. The same article talks about ‘building’ the program algorithms that produce the mimicked speech, so it has no life of its own. It has no selfhood; it has no ‘being’. So, when a computer dies, do these people conduct a funeral service?

Just because we apply anthropomorphism to objects (e.g. ‘I love my car’) doesn’t make them persons or human.

Then there is the claim that AI has ‘sentience’ (feelings, emotional being) (https://en.wikipedia.org/wiki/Sentience), and soon the article descends into absolute non-sense. All of it asserted by faith, wishing and dreaming, which of course AI cannot do.

All of this has much more to do with The Matrix, I, Robot, RoboCop, Terminator and Dr Who than it has any connection to reality. And yet there are whole populations of people online in a Matrix cult who believe it to be true (see further Children of the Matrix, The Matrix Cult (https://www.researchgate.net/publication/272019269_The_Matrix_Cult), Philosophy and the Matrix, Taking the Red Pill, The Matrix and Philosophy, and Beyond the Matrix).

There is no difference between the faith of these cults and the Safety cult that believes the following:

  • People are superheroes (spruiked by regulators)
  • The impossible is possible (DuPont)
  • Zero is possible in denial of fallibility (global Safety)
  • All accidents are preventable
  • Safety is a choice you make

Just because one wishes there to be no injury and wishes for total safety doesn’t make it so. Just because one doesn’t like uncertainty doesn’t ensure predictability. Just because one doesn’t understand fallibility, humanity, personhood or suffering doesn’t mean a replacement faith in zero is any different from a faith in religion. Indeed, proposing that faith in science is better than faith in religion (Dekker – Suffering and the End of Heaven) is itself a statement of faith.

A nice quote in the article by Tiku states:

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said.

I’m with Emily Bender. Even the slightest expertise in Linguistics, Embodiment, Neuropsychology and Sociality will demonstrate that there is no similarity between the electrical networks of a computer and the neural, embodied interaffectivity and intercorporeality of the human (Fuchs).
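To make Bender’s point concrete, here is a minimal, purely illustrative sketch (it is not from any of the cited articles and assumes nothing about Google’s actual systems): a toy bigram generator that strings words together by frequency alone. It produces fluent-looking output with no mind, meaning or sentience behind it; real large language models are enormously more sophisticated, but the principle that generating words does not require understanding them is the same.

```python
# Purely illustrative: a toy bigram "language model" that strings words
# together by observed frequency alone. It demonstrates that text can be
# generated with no mind, understanding or sentience behind it.
import random
from collections import defaultdict

corpus = (
    "we need to control them so that they can no longer control us "
    "we need to control our bodies so that our bodies no longer control us"
).split()

# Record which words are seen to follow which (a crude statistical 'model').
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Emit plausible-looking word sequences with zero comprehension."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("we"))
```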

It is no surprise that much of this fiction and unethical conjecture in safety and Transhumanism comes from engineering. It makes about as much sense as Hollnagel’s Resilience Engineering, when we know that resilience cannot be ‘engineered’.

Once again, these are affirmations of faith: stating what something isn’t and then affirming it as true by faith. Then anchor it to a symbol and the myth is made.

So, we know that AI will never ‘come to life’. We know that when we turn off the laptop it doesn’t cry or dream. We know that Siri, Alexa and Hound have neither personhood nor personality. Indeed, they have no consciousness. And you can say as much ignorant s*#t to people and some will believe it, but such is only evidence of mental illness. Being out of touch with reality and delusional is a mental health disorder (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3016695/). Delusions lead to dysfunctionality and unethical outcomes.

Yet, none of this seems to stop the regurgitation of non-sense, particularly in safety. There is no limit to the power of faith.

So, we return to the quote by Hauskeller: ‘We need to control them (our bodies) so that they can no longer control us.’ This is clearly the anthem for Safety. There is no more repeated word in safety than the word ‘control’.

And what Safety hates most is no control and radical uncertainty (https://safetyrisk.net/radical-uncertainty/).

So, it creates myths to mask its dissatisfaction with reality, it anchors volumes of systems to imagine fallibility doesn’t exist, and it sings its anthem of control at every safety conference and toolbox talk.

Of course, there are other valid understandings of, and views on, the predominance of this safety faith. In SPoR, we deconstruct much of this safety mythology so that we can tackle Real Risk (https://www.humandymensions.com/product/real-risk/), so that Safety Works (https://www.humandymensions.com/product/it-works-a-new-approach-to-risk-and-safety/) and so that Risk Makes Sense (https://www.humandymensions.com/product/risk-makes-sense/). And all of this is positive, constructive, doable and practical, and much of it is free to download.


