Is it possible to think oneself to death?
Context: In the Halo video game series, ``AI Rampancy'' is a plot point in Halo 4, explained as a divergence of the AI's underlying neural network lattice: too many instances end up thinking at cross-purposes, which eventually leads to the famous line about how it really is an AI ``thinking itself to death''.
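As an aside, the ``instances thinking at cross-purposes'' idea has a crude analogue in concurrent programming. Here is a purely illustrative toy (my own analogy, nothing from Halo's lore; all names are made up): two ``instances'' of a mind take turns doing a read-modify-write on one shared belief, and because each writes back a result based on a stale read, the updates interfere and the shared state drifts instead of settling.

```python
# Toy analogy: two ``instances'' updating one shared belief at cross-purposes.
# Generators give a deterministic interleaving, so the lost updates are
# reproducible rather than depending on a thread scheduler.

def instance(shared, delta, steps):
    """One instance of the mind; each iteration is one read-modify-write."""
    for _ in range(steps):
        read = shared["value"]           # read the shared belief...
        yield                            # ...the other instance runs here...
        shared["value"] = read + delta   # ...then write back a now-stale update

shared = {"value": 0}
a = instance(shared, +1, 3)  # one instance nudges the belief up,
b = instance(shared, -1, 3)  # the other nudges it down
for _ in zip(a, b):          # interleave them step by step
    pass

# With proper coordination the +1s and -1s would cancel back to 0;
# interleaved at cross-purposes, the state drifts away from 0 instead.
print(shared["value"])
```

Not a model of a brain, obviously, but it does capture the flavour of parallel agents corrupting a shared lattice rather than converging.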
I tried to think about it (heheh) from a human perspective to see how it might work out. But I immediately ran into two issues.
Firstly, for those of us (the majority) who have learnt some form of natural language, we seem to be stuck with the ``inner talking voice'' model of thinking. This means that when we have thoughts, these thoughts tend to come in the form of words, usually ordered in some sequential manner according to whatever we managed to grok out of the associated grammar. With such strong serialisation in place, I found it hard to apply the Halo version of ``thinking oneself to death'', since it seems that I cannot hold more than one ``inner talking voice'' at a time. I can, however, see how getting stuck in a reasoning loop of any sort (think of the Chinese idiom 钻牛角尖, roughly ``burrowing into a bull's horn'', i.e. obsessing over a dead end) could be one way of ``thinking oneself to death'' through seeming hopelessness. Since this is still serial in nature though, it doesn't have the cascading-failure feel that the Halo interpretation implies.
Secondly, much of human thought is massively parallel to begin with, though not all of the thought processes thus harnessed are associated with the conscious abstract thought implied in the Halo context. Unlike Cortana of Halo, we are still piloting a meat-shielded carbonated hydroxyapatite skeleton mecha that runs on chemically-driven hydraulics, and all that movement requires a fair bit of coordinated control to operate properly, something most AIs don't have to do. A large part of the human brain has brain-cycles dedicated to processing sensory input and the associated coordinated neurological firing, but since these are unconscious/subconscious in nature, we don't usually call them ``thinking''.
That said, I still conclude that it is possible to think oneself to death, though not in the manner Cortana describes in Halo. I can come up with at least two ways.
The first was already mentioned earlier: getting stuck in a reasoning loop with seemingly no solution to the reasoning problem. This can cause death not through resource deprivation in the biomechanical sense, but in the more abstract ``future energy'', a.k.a. ``hope'', sense. It is related to the placebo effect, where the body reacts according to the level of optimism/belief within the mind: someone is literally given a sugar pill, told that it can provide some therapeutic effect, and that effect duly occurs, despite the sugar pill being known to contain no active ingredient that could affect the particular symptom in any way. Now imagine a negative version of the placebo effect (the so-called nocebo effect) arising from a reasoning loop---that is how one might ``think oneself to death''. It is not the mind that dies from the overload; it is the body that fails from the hopelessness the mind has prescribed.
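The reasoning-loop idea above can even be sketched as a toy program (again, entirely my own illustration; the function and the ``rules'' are hypothetical): treat thinking as following rules from one thought to the next, where a resolution is a thought no rule applies to, and being stuck is revisiting a thought you have already had.

```python
# Toy sketch of a ``reasoning loop'': chase rewrite rules from a starting
# thought, and detect when the chain of thoughts revisits itself.

def chase(belief, rules, max_steps=100):
    """Follow rules from `belief`; report a resolution or a loop."""
    seen = set()
    current = belief
    for _ in range(max_steps):
        if current not in rules:    # no rule applies: the worry resolves
            return ("resolved", current)
        if current in seen:         # revisited a thought: stuck in a loop
            return ("looping", current)
        seen.add(current)
        current = rules[current]
    return ("gave up", current)

# A circular set of worries, each leading back around to the first.
rules = {"no job": "no money",
         "no money": "no confidence",
         "no confidence": "no job"}
print(chase("no job", rules))  # → ('looping', 'no job')
```

A real mind has no such loop detector it can consciously invoke, of course, which is rather the point: without one, the chase never returns.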
The second is more obvious: suicide. Literally thinking about how one can die, and then proceeding to make it come true through physical actions directed by the brain.
But ``thinking oneself to death'' as described in the ``AI Rampancy'' concept of Halo? Nah... brains don't work that way.