Embodied Risk

One of the challenges with the engineering-behaviourist worldview of safety is that it has no answer to the question: what is the body for? (https://safetyrisk.net/what-is-the-human-body-for-in-safety/). (Other than that it is simply a carrier for a computer.)

So much of safety work is brain-centric and Cartesian. There is really no sophisticated understanding of human judgement and decision making in safety that includes the body. So often, the response to problems and accidents is more programming for the brain, assuming that accidents are caused by bad programming or free choice.

Neither of these is true or real. Such is the behaviourist worldview that dominates safety.

One of the wonderful aspects of the work of Paul Ricoeur is his rejection of the Cartesian-binary view of the person. His work on voluntary and involuntary ‘being’ is critical for understanding why humans do as they do.

In SPoR, when we use the word ‘Mind’ it never means brain. We don’t have a mind, we are a Mind. That is why the model of 1B3M (https://safetyrisk.net/body-memory-and-safety/) is so important to understanding personhood, ethics and decision making. It is why Triarchic thinking (https://safetyrisk.net/triarachic-thinking-in-spor/) is so important for understanding why people do as they do.

If the starting point for safety is an ethic of risk, then the starting point for understanding risk is the way persons make decisions. One thing is for sure: the behaviourist understanding of human decision making is pure mythology. A behaviourist account of why people do what they do is antiquated, has no research base, explains little and drives brutalism.

When we understand the nature of the human Will, we know, as Ricoeur states: ‘I will, I decide, I move my body and I consent’. All consciousness is consciousness of something.

Without the movement of the body, there is no learning. The regurgitation of data in a brain is NOT learning.

Without embodied movement there is no learning. Hence AI, which has no body, cannot know the embodied feeling of movement and so doesn’t learn. It just regurgitates algorithms. It is not ‘I think therefore I am’; it’s ‘I move therefore I am’.

When we accept the connectedness of the body, we accept its associated necessities: mortality, weaknesses, vulnerabilities and obligations. When we think of the human senses, we don’t just think of those inside the head; all knowing is embodied. For example, we feel and experience the world through our skin. AI cannot ‘feel’ because it has no fallible body.

When we accept the connectedness of the body as essential to personhood, we can no longer deny fallibility.

It really shouldn’t be that difficult. How long can you go without water? How long can you go without food? How long can you go without exercise, the sun, light and sleep? These are the basics of human being. None of these is a concern for AI, because AI can never know the necessities of being embodied.

It is the necessities of being embodied that make learning possible. Fallibility is a blessing, not the enemy. Only silly behaviourist injury-counting safety makes fallibility the enemy.

Our bodies impose limits on what we can do and so we learn how to live with the involuntary needs of being embodied. This is how we ‘know’ our limits. This is how we learn how to tackle risk. This is why Zero is nonsense.

Risk is not just an idea; it’s the reality of all that threatens human ‘being’. Risk is the lived reality of persons in learning. Zero is antithetical to living and learning.

This is why Safety ought to be about Real Risk (https://www.humandymensions.com/product/real-risk/) and learning how to tackle risk as fallible persons (https://www.humandymensions.com/product/fallibility-risk-living-uncertainty/).
