Wednesday, August 04, 2021

On Systems

The propensity for bad behaviour by humans to gain some form of advantage (real or otherwise) always boggles my mind. I suppose that's the source of inspiration for the well-known phrase ``The road to Hell is paved with good intentions''. It is also the primary reason why the Prisoner's Dilemma exists.

These types of behaviours are, surprisingly enough, much better understood by society's so-called lowlifes than by anyone else who claims to be more cultured or law-abiding. The high-born are likely to know this extremely well too, though there is always a fine line separating the upper-most strata of a society from its lower-most stratum. The high-born make the rules, and as the people benefitting from them, know the true extent of what the rules cover and, more importantly, do not cover. The lowlifes don't make the rules of polite society, but they live a more lawless existence than the vast majority of people, and thus have a stronger innate understanding of how base human nature can be. The two sources of understanding differ in presentation, but the essence behind the reasoning is more or less the same.

The real idiots in society are actually those who claim to be masters of the system---the engineers, the scientists, and anyone who claims technological superiority. They may know their machines and systems well from their own axiomatic approach, but they almost always fail to take into account how the very rules that they have put into their small universes may be bent to enable bad behaviour. They put too much faith in their ability to analyse systems, believing that since they designed the system, they possess mastery over everything it works with, and thus always know best---even coming up with terms like ``data-driven approach'' to justify their decisions, throwing shade at any decision process that involves qualifiable statements that are not quantifiable.

Ironic, isn't it? I am a techie, and in the same breath I am condemning myself as one of society's idiots.

I think I've mentioned before that not everything is quantifiable, and, perhaps more subtly, that the most important things in life and society, though qualifiable, may never really admit any meaningful quantification. I did not take the much bolder step of declaring that science is not the start of understanding but really the end of it, and that it is part of the ongoing dialogue to understand both the physical world and the manipulated worlds that are dominated by the various types of human interactions.

Science is always wrong by definition; what separates it from every other mode of inquiry is that it strongly encourages people to challenge everything declared to be correct in science through reproducible experiments. Reality is what science looks up to---reality is the ``ground truth'' that everyone likes to talk about, and if a model in science does not conform to reality, it is the model that is wrong, not reality.

That last point is a very important one and deserves to be stated clearly:
Reality is what science looks up to---reality is the ``ground truth'' that everyone likes to talk about, and if a model in science does not conform to reality, it is the model that is wrong, not reality.
It is the innate desire to spread discoveries gleaned from observation that guides our scientific endeavours, as well as the subsequent control that we may exert upon reality through the ramifications of the model we have used to explain the observed phenomenon. That's the big reason why basic scientific research that does not immediately lead to a product is so important---each increase in the number of axioms we possess within the framework of science increases the overall expressivity of the engineering outcomes exponentially.

But the ability to explain phenomena with models simplified from reality makes science practitioners arrogant---after a while, they ``get high on their own supply'', as some lowlifes may call it. They have so much faith in their systems that they demand reality conform to their models, as opposed to fixing their models to fit reality. This carries over into the land of the engineers of various rule-based systems, be they information systems, laws, or anything under the sun.

As long as society exists, there will always be a need for interaction, and when there is a need for interaction, systems of rules governing what is acceptable behaviour will exist. But like regular science, many of these rules are post-hoc, heavily generalised versions that serve the vast majority of rule-abiding people. They can never be both complete and easily transmitted---the two properties are contradictory in nature. Given that, it becomes clear that anyone who chooses to follow the rules not by their intention but by their words can effectively do whatever they want, without any bad consequences, since penalties/rewards are determined, for the sake of fairness, solely through the ``letter of the law'' as opposed to its ``spirit''.

So tools like ``automatic banning of reported spammers'' can be abused to mount denial-of-service attacks by perpetrators against their victims, as can rules like ``warrantless entry of a dwelling to enact arrest without giving the person a chance to escape'', or even ``be masked up at all times unless you are eating''. The intentions can be quite clear, but their presentation, for the sake of conciseness, transmission, and explanation, is sufficiently open that it can be abused to hell and back with little repercussion.
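The auto-ban abuse above can be sketched as a toy rule engine. Everything here---the class name, the threshold, the usernames---is invented purely for illustration; it shows how a rule that faithfully follows its own letter (``N reports means a ban'') is weaponised against its spirit (``stop spammers''):

```python
from collections import defaultdict

BAN_THRESHOLD = 3  # the letter of the rule: "3 reports => automatic ban"

class NaiveModerator:
    """A hypothetical moderator that enforces only the letter of the rule."""

    def __init__(self):
        self.reports = defaultdict(set)  # target -> set of reporters
        self.banned = set()

    def report(self, reporter, target):
        self.reports[target].add(reporter)
        # The rule only counts reports; it never asks whether they are
        # honest, so the spirit of the rule (stop spammers) is lost.
        if len(self.reports[target]) >= BAN_THRESHOLD:
            self.banned.add(target)

mod = NaiveModerator()
# A coordinated group weaponises the rule against an innocent user:
for sockpuppet in ("troll1", "troll2", "troll3"):
    mod.report(sockpuppet, "innocent_victim")

print("innocent_victim" in mod.banned)  # prints True: the victim is banned
```

Nothing in the sketch is malformed from the system's point of view---every report is a legal input, and the ban is exactly what the rule prescribes. The abuse lives entirely in the gap between the rule's words and its intention.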

Which brings me to something to really think about: as the world's information systems increase their reach beyond the skilled knowledge workers of the past into the larger society of the now and beyond, what is the ethically correct way of dealing with this propensity for bad behaviour by humans?

Maybe if we can solve that at scale, the solution will be what we have been calling ``artificial intelligence''.
