One of the skills in critical thinking is deconstructing the assumptions underneath an argument. This skill is essential when reading anything about AI and safety.
It is critical in risk and safety to be able to deconstruct an argument and discern Real Risk (https://www.humandymensions.com/product/real-risk/) from hype, marketing, spin and propaganda. It is also critical to be able to understand the assumptions brought to any argument, as assumptions (and ethics) are often not declared, even by the proponent of an argument. Indeed, many in risk and safety don’t even realise they are promoting a behaviourist and engineering worldview, even when they claim the brand ‘differently’.
One of the challenges of critical thinking is owning one’s own assumptions, making oneself familiar with Transdisciplinary views and entering into dialogue with those views. Seeking Transdisciplinary dialogue is challenging because it relies on each discipline also wanting dialogue. Being open to conversation is the key. However, if the other worldview is closed to dialogue, then not much can be accomplished by engagement. This is especially difficult when a worldview is convinced that its view of the world is the only valid view.
For example, if one is convinced that only things that can be measured have meaning, then a discussion with SPoR will not be possible. Learning only happens in movement. The keys to dialogue are knowing one’s own assumptions, suspending agenda, open questioning and active listening.
This brings me to the current hype and marketing around AI.
· https://en.wikipedia.org/wiki/AI_takeover
· https://nypost.com/2022/02/20/could-artificial-intelligence-really-wipe-out-humanity/
· https://digitalgenius.com/will-ai-take-over-the-world/
· https://www.theatlantic.com/newsletters/archive/2023/01/is-this-the-start-of-an-ai-takeover/672628/
You could read all this stuff and become worried and alarmed, were it not for the fact that the assumptions of these articles are never declared. Most of this stuff is wishful thinking based on the idea that a human is a computer on a body. This assumption is neither true nor supported by any evidence. Such an assumption is maintained primarily by the brain-as-computer metaphor (see further Lakoff and Johnson, Metaphors We Live By).
If you are trying to work out the assumptions of a worldview, just investigate the linguistics and para-linguistics used and NOT used.
All of the current marketing around AI is founded on the assumption that the human brain is a computer, indeed, that cognition is computing. The metaphor of computing as thinking and learning is an assumption of much AI argument, hype and marketing. In the end, much of this hype is about selling products. Similarly, in risk and safety much hype and noise is made about trends and marketing, and yet underneath there is nothing different, just more of the same to sell a product (https://safetyrisk.net/entertainment-suckers-and-making-money-from-safety/).
Most of what you read about AI is founded on an assumption of disembodiment. AI is artificial and will never have a body nor any of the attributes of fallible human ‘being’. It has no unconscious, can’t dream or lucid dream, can’t sense, feel or emote, can’t hold social relationships, can’t think morally, nor hold any of the attributes of what it is to be phenomenologically human. It’s artificial and is NOT sentient. Understanding the essential nature of fallibility (https://www.humandymensions.com/product/fallibility-risk-living-uncertainty/) and its beauty for human phenomenological being is essential for dissecting much of the AI hype that circulates in the media and safety.
The rejection of fallibility by the safety-zero world, through its behaviourist-engineering assumptions, also sets up Safety for the great AI con. This is assisted by the data-centrism and measurement-centrism of the safety industry. The lack of critical thinking in the safety curriculum is also a factor in confirming safety mythology. We see this in recent safety conference promotions (https://fleming.events/hse-excellence-europe/). It also seems most attractive to a masculinist paradigm of power and control, e.g. at this conference only 4 of the 22 presenters are female.
Most of the linguistics in AI hype uses anthropomorphic language, and people simply accept it uncritically, e.g. ‘machine learning’. The acceptance of such language relies on ignorance about learning and the neuropsychology of human ‘being’. It also helps to have a binary assumption about matter and embodiment. Once these assumptions are accepted and unquestioned, it is then an easy step through confirmation bias to simply affirm one’s own assumptions by finding evidence that seems to fit the paradigm. It is also helpful not to explore Transdisciplinary evidence that disproves one’s own assumptions.
Of course, there is endless research that demonstrates the false consciousness of AI hype. The research of Damasio, Fuchs and Mark Johnson, and a host of research in embodiment and neuropsychology, dispels most of the hype in AI as mythology. As Baudrillard stated so well:
‘The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.’
This is not to say that AI cannot be helpful, but it is at best a tool. It’s like how Safety thinks about systems. In SPoR, we don’t serve systems; we use systems to serve us, and similarly this is how we should understand AI.
If you want to learn more about the linguistics and ethics of AI, you can study here: https://cllr.com.au/product/linguistics-flyer-unit-21/