Monday, October 18, 2021

Machines as Masters?

Let's talk a bit about the ``recent'' craze over artificial intelligence techniques that revolve around the use of neural networks. Neural networks aren't exactly a new thing; in fact, they were one of the core ideas of artificial intelligence explored from the 1940s onwards. The Wikipedia article that I linked to can probably give a better summary than I care to write here, so I'm not even going to attempt one. What I want to do is to disabuse the hyper-optimistic notion that artificial intelligence based on neural networks (or other data-trained programs, really) is the Final Solution to all things involving humans. This line of thought came about during one of the debates I had with someone who told me to my face that I was too much of a theoretician to see the usefulness of what neural networks can bring, while I was pointing out the sheer recklessness with which we are using these tools, without proper understanding of their limitations, in the name of Capitalism.

The idea of creating a device that can automate decision making is a consequence of our highly successful automation of physical processes, as seen in the many electro-mechanical machines around us. In many ways, boolean logic is applied to measurement-based ``features'' (to borrow machine learning jargon), with particular actions following from the fulfilment of certain conditions. A crude example of such automated decision systems in the automation of our physical processes can be seen in ladder logic and its cousins in micro-controller logic. In many ways, much of the menial decision-making has already been automated in the form of programs---the world of artificial intelligence is already here even without technology like neural networks.
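
To make the analogy concrete, here's a minimal sketch in Python of the same shape of logic that a ladder rung or micro-controller routine encodes---boolean conditions over measured ``features'' deciding an action. The pump, its sensor names, and the thresholds are all hypothetical, purely for illustration.

    # A minimal sketch of ladder-logic-style decision automation in plain Python.
    # The pump, sensor names, and thresholds here are hypothetical.

    def pump_control(tank_level_pct: float, pressure_kpa: float, manual_override: bool) -> bool:
        """Decide whether the pump should run, using only boolean combinations
        of measurement-based features---the same shape of logic as a ladder rung."""
        low_level = tank_level_pct < 30.0       # feature: tank is running low
        safe_pressure = pressure_kpa < 500.0    # feature: line pressure within limits
        return (low_level and safe_pressure) or manual_override

    print(pump_control(tank_level_pct=25.0, pressure_kpa=320.0, manual_override=False))  # True
    print(pump_control(tank_level_pct=80.0, pressure_kpa=320.0, manual_override=False))  # False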

The key benefit of such program-based artificial intelligence is that almost every outcome arising from the execution of the program can be traced back as a reasoning path through the program itself, with the source code acting as its own documentation of the features it was examining and how it chose to combine them to arrive at a decision. This ability to trace reasoning paths from an outcome back through the program is part of what ``explainability'' means. When it comes to defending decisions (or rather, the actions that result from a decision) in human societies, having such a laid-out sequence of antecedents makes it easier to debate correctness/validity, since anyone interested can examine the antecedents and state whether they agree with the feature, the value of the feature at the time of evaluation, or even the correctness of the combination of features that sparked off a branch in the execution path.
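
As a toy illustration of such back-tracing, here's a Python sketch where the decision function returns not just the outcome but also the antecedents it evaluated along its execution path. The loan-screening features and thresholds are invented for the example.

    # A sketch of the kind of trace that explicit program logic gives for free.
    # The loan-screening features and thresholds are made up for illustration.

    def screen_application(income: float, debt_ratio: float, missed_payments: int):
        """Return a decision plus the antecedents (feature, value, test, result)
        that produced it, so the outcome can be back-traced to an execution path."""
        trace = []

        high_debt = debt_ratio > 0.4
        trace.append(("debt_ratio", debt_ratio, "debt_ratio > 0.4", high_debt))
        if high_debt:
            return "reject", trace

        poor_history = missed_payments >= 3
        trace.append(("missed_payments", missed_payments, "missed_payments >= 3", poor_history))
        if poor_history:
            return "manual_review", trace

        sufficient_income = income >= 30000.0
        trace.append(("income", income, "income >= 30000", sufficient_income))
        return ("approve" if sufficient_income else "reject"), trace

    decision, antecedents = screen_application(income=45000.0, debt_ratio=0.25, missed_payments=1)
    print(decision)                                  # approve
    for feature, value, test, result in antecedents:
        print(f"  {feature}={value}, tested ({test}) -> {result}")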

Assuming no exceptions or crashes, every outcome that the program produces must come from at least one execution path through its logic. This level of certainty, combined with the direct correspondence between features and outcomes along the execution path, can inspire trust in the program's correctness, and allows people to decide together how to fix the program if its outcome for a specific context is not acceptable. This is a good thing, because as human societies evolve over time, their needs and requirements change, and the associated programs may need to have their reasoning checked against the updated expectations and, if found lacking, fixed accordingly to preserve the correctness of the automated decision making. More importantly, the error of a single outcome can be used to correct the program.

Now let's go back to the neural network and, by extension, other computer programs that require training in order to achieve their purpose. These techniques exist because the problem spaces they are used in are either:
  1. Not well understood due to too many inter-relating variables; or
  2. Understood somewhat in the abstract, but facing a steep effort wall in converting that abstraction into the boolean logic form that digital micro-processors can handle.
As humans, we tend to use consensus as a means of determining correctness, and a consensus is often obtained through structured empiricism (the scientific method). So a logical extension of that reasoning process, when faced with problem spaces of this nature, is to try to get as much data as possible and ``make sense'' out of it.

There are some catches though:
  1. Data are not created equal: it's the amount of information relevant to the outcome that the data contain that is critical. If I want to know whether tomorrow is likely to rain, knowing that spotted dogs exist does not help me.
  2. Representation is the cornerstone of all intelligence. As humans, we cheat a little since we already have some built-in representations that we axiomatically take for granted---machines are dumb enough that we need to define representations for them explicitly. It's easy to go overboard too---see the curse of dimensionality (a back-of-the-envelope sketch follows this list). ``Dimensionality'' here refers to each feature that we introduce to our model/program.
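
To put a rough number on the second catch, here's a tiny Python sketch showing how the space a model has to cover blows up as features are added; the choice of 10 bins per feature axis is arbitrary.

    # A back-of-the-envelope look at the curse of dimensionality: covering each
    # feature axis at a fixed resolution of 10 bins means the number of cells
    # (and hence the data needed to populate them) grows exponentially with the
    # number of features. The 10-bin resolution is an arbitrary choice.

    BINS_PER_FEATURE = 10

    for num_features in (1, 2, 5, 10, 20):
        cells = BINS_PER_FEATURE ** num_features
        print(f"{num_features:>2} features -> {cells:.3e} cells to cover")
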
Much of the work [in machine learning] has been to deal with these two catches---it is important work because it helps to establish the baseline correctness of the approach. So all the different neural network architectures, including the number of layers, types of units, units per layer, connection schemata, regularisation, ensemble learning protocols and the like, exist to address these two major catches. Getting these right is more art than science for now, but who knows when that will be made rigorous enough that we can predict performance before doing any training/validation.

There is a related problem arising from trying to solve these two big catches: I call it the surjection problem. Put simply, it is usually the case that the number of unique inputs to the neural network far exceeds the number of unique outcomes that we want. By the pigeonhole principle, this means that there will exist some specific outcome that more than one unique input can reach. This is usually not a problem because we assume that the inputs to the neural network are sufficiently biased towards ``what is possible in the real world'' that the neural network will therefore ``only be used in possible real world situations''. We gain some confidence about this through the data collection protocol used to train the model/program.
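
Here's a small Python sketch of that pigeonhole argument, with a toy, untrained linear classifier standing in for the neural network: 1024 possible inputs must be squeezed into 4 outcomes, so collisions are unavoidable.

    # The pigeonhole argument in miniature: every possible binary input of length
    # 10 (1024 of them) is pushed through a toy, untrained linear classifier with
    # 4 output classes, and we count how many distinct inputs land on each class.
    import itertools
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(0)
    NUM_FEATURES, NUM_CLASSES = 10, 4
    weights = rng.normal(size=(NUM_FEATURES, NUM_CLASSES))  # stand-in for a trained network

    counts = Counter()
    for bits in itertools.product((0, 1), repeat=NUM_FEATURES):
        x = np.array(bits, dtype=float)
        counts[int(np.argmax(x @ weights))] += 1

    # By the pigeonhole principle, some class must absorb at least 1024/4 = 256 inputs.
    print(counts)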

So, where is the disabusement?

Disabusement #1: Neural networks are fallible, and therefore we need oversight when we choose to use them as a tool.  Neural networks are fallible because of the ``surjection problem'' that I raised earlier. The attack idea is this: say I want a specific outcome from a decision process that is built on a neural network. All I need to do is find some input that I can shove into the neural network to get that outcome. This attack can work because neural networks are surjective in nature and, more importantly, are not designed to be collision resistant: they were designed to achieve the right outcome given ``possible [in the real world]'' inputs, not to exclude any outcome given ``impossible [in the real world]'' inputs (more specifically, the behaviour of the neural network when fed with ``impossible'' inputs is not explicitly tested). There is a practical reason for this: the number of unique ``possible'' inputs is generally tractable compared with the number of unique ``impossible'' inputs.
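
Here's a minimal Python sketch of the attack idea, with a hypothetical toy network (random weights, no training) standing in for the real thing: we don't care what the input means, only that some search procedure drives the network towards the outcome we want.

    # The attack idea in miniature: given a fixed toy network and a target outcome
    # (``accept'' = positive score), blindly search for *some* input that produces
    # it, with no regard for whether that input is possible in the real world.
    # The two-layer network and its random weights are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # toy hidden layer
    w2 = rng.normal(size=16)                          # toy output weights

    def score(x):
        return float(np.tanh(x @ W1 + b1) @ w2)       # score > 0 means ``accept''

    x = rng.normal(size=8)
    for _ in range(2000):                 # simple hill climb on the input itself
        candidate = x + rng.normal(scale=0.1, size=8)
        if score(candidate) > score(x):
            x = candidate

    print(score(x))  # typically well above 0: we found *an* accepting input...
    print(x)         # ...that need not resemble anything a real sensor would emit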

Sounds impossible? Think again. These attacks are going to get more sophisticated over time as neural networks play larger roles in decision-making processes that matter. It may not be against facial recognition next time; it could be a small hijacking of communicated data to confuse a neural network controlling infrastructure, or something more coordinated and nefarious that attacks entire economies---the possibilities for mischief are endless. In fact, it can get much worse quickly, because there is a high chance that these neural networks will converge to a few trained ``super'' neural networks that everyone uses [in a service model], since training neural networks requires stupid amounts of [relevant] data and computational power that only a few big players have, making such attacks increasingly profitable from a return-on-investment-of-effort perspective. An example of such a ``super'' neural network that can be a basis for downstream use is GPT-3.

Not terrified enough? What if we tie the human institutions of punishment to the outcomes of these neural networks? What if you are automatically charged with a crime because a neural network wrongly matched CCTV footage to your profile picture? What if the justification for using these automated tools, with no oversight before acting on the final outcome, is the need to cope with increased transaction volume, population density, or the cost of deploying people to keep watch? Would you want to live in such a world?

Disabusement #2: Even if we have oversight, correcting the neural network so that its outcomes conform to our [society's] understanding of correctness requires a Herculean effort, akin to completely tearing down what was already built and rebuilding it.  The strength of the neural network is also its chief downfall when it comes to correction: since the behaviour of the neural network depends largely on the quality of the information in the input data, any correction demanded by the oversight requires providing additional data carrying the information relevant to that correction in order to change the neural network's behaviour. One might argue that there are still various hyperparameters that can be tweaked to tune the behaviour of the neural network, but the correspondence between such tweaks and the oversight-determined correction is sketchy at best.

Thus, training (and validating) with new data containing the information relevant to the required correction is still the better approach. Now, whether this requires a complete tear-down and re-training of the neural network depends on the chosen training protocol, but my gut says that the oversight is likely to be less convinced by a ``patching'' approach applied to the already trained neural network than by a re-train ``from scratch''. Part of the reason is that there is an ontological disconnect between how the neural network works and how it is trained---it is easier to gain peace of mind by starting ``with a clean slate'' on something that is not easy to understand than to worry about whether the new data has truly corrected the behaviour of the neural network, or whether it was just part of an overfitting phase that will fail the moment the next instance of a ``problematic type'' of input arrives.
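
To make the distinction concrete, here's a toy Python sketch of ``patching'' versus re-training ``from scratch'', with a plain logistic regression standing in for the neural network and synthetic data standing in for both the original training set and the correction set; the point is only the shape of the two workflows.

    # ``Patching'' versus re-training ``from scratch'', with a toy logistic
    # regression standing in for the neural network. The data sets are synthetic;
    # only the shape of the two workflows matters here.
    import numpy as np

    rng = np.random.default_rng(2)

    def train(X, y, w=None, steps=500, lr=0.1):
        """Plain gradient descent on the logistic loss; pass w to patch an existing model."""
        if w is None:
            w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    X_old, y_old = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200).astype(float)
    X_fix, y_fix = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20).astype(float)

    w_original = train(X_old, y_old)                      # the deployed model
    w_patched = train(X_fix, y_fix, w=w_original.copy())  # patch: keep weights, feed only correction data
    w_rebuilt = train(np.vstack([X_old, X_fix]),          # re-train from scratch on everything
                      np.concatenate([y_old, y_fix]))

    print(w_patched)
    print(w_rebuilt)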

Sadly, I don't think that we have the right theory to explain any of these things yet. A good theory characterising how much data is required to perform certain types of correction would go a long way towards making oversight and correction of neural networks as natural as fixing a regular program that is broken because of a wrong antecedent.

As an added bonus somewhat tangential to the gathering of data to train such neural networks, it is apparently possible to derive a [good enough] approximation of the individual data used to train the neural network (actual paper). Still think that you have ``nothing to hide'' and therefore that little bit of information you feed into the data-dredge is safe?

So where does this leave us? Should we abandon using neural networks in Real Life then? No, I didn't claim that---I claimed that we should not be hyper-optimistic that neural networks can completely replace humans in decision making and be the Final Solution. Even in electro-mechanical automation we still see operators present to keep an eye on things, despite those systems having decades of good performance data; how much more so for neural networks?

Legislation must make an effort to keep up, and we must bend neural networks to conform to human ethics, not bend human ethics to conform to neural networks. Even if this means a reduction in the efficiency of the neural network (for whatever definition of efficiency), it must be done.

Neural networks (and their allies) are powerful tools, but they should be treated as powerful tools and not as masters over us. Even the most powerful tool can cause a serious amount of damage when misused, and allowing neural networks to operate completely autonomously for ``big decisions'' that intersect with rules/norms/laws of human society is just an invitation for trouble at all levels, no matter what socio-economic class one is from.

After all (from Bill Vaughan):
To err is human, to really foul things up requires a computer.
Till the next update.
