Tuesday, November 27, 2007

When Life is a Markov Chain

Markov chains are an amazing type of weighted digraph. They are immensely useful for modelling complex systems, and are widely used for exactly that purpose. I had a taste of how to use Markov chains to model real-world behaviour, and even now, I'm amazed at how such a simple idea can become such a powerful modelling technique.

The key idea behind the Markov chain lies in a system having the "Markovian property"; this just means that the probability of transitioning to the next state depends only on the current state, and not on the states the system has been in before. This interesting property (or assumption) is what makes it tractable to compute the stationary distribution (the long-run proportion of time that the system spends in each state).
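As a minimal sketch of the Markov property, consider a toy three-state chain; the states and transition probabilities here are entirely made up for illustration. The point is that the `step` function looks only at the current state, never at the history of earlier states.

```python
import random

# A toy three-state chain; the states and probabilities are invented
# purely for illustration. Each entry gives P(next state | current state),
# and notably does not depend on anything that happened earlier.
TRANSITIONS = {
    "idle":       [("idle", 0.6), ("busy", 0.4)],
    "busy":       [("idle", 0.3), ("busy", 0.5), ("overloaded", 0.2)],
    "overloaded": [("busy", 0.7), ("overloaded", 0.3)],
}

def step(state):
    """Pick the next state using only the current state (the Markov property)."""
    states, probs = zip(*TRANSITIONS[state])
    return random.choices(states, weights=probs)[0]

random.seed(0)
state = "idle"
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```

However long the walk, the signature of `step` never changes: it takes one state and returns one state, which is the memorylessness in a nutshell.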

The stationary distribution is important when dealing with Markov chains because it captures the long-term behaviour of a system. For example, if the Markov chain models a queue, with the states being the number of jobs in the queue, then we have a means of determining the proportion of time the server spends in the "idle" state, as well as in the "busy" or even "overloaded" states.
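One way to see this concretely is to compute the stationary distribution of a hypothetical idle/busy/overloaded server by power iteration, i.e. repeatedly applying the transition matrix until the distribution settles. The transition matrix below is invented for illustration; only its structure matters.

```python
# Transition matrix for a hypothetical queue with states
# idle / busy / overloaded. The numbers are made up for
# illustration; each row sums to 1.
P = [
    [0.6, 0.4, 0.0],   # from idle
    [0.3, 0.5, 0.2],   # from busy
    [0.0, 0.7, 0.3],   # from overloaded
]

def stationary(P, iters=1000):
    """Power iteration: repeatedly apply pi <- pi P until it settles."""
    n = len(P)
    pi = [1.0 / n] * n          # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
print(dict(zip(["idle", "busy", "overloaded"], (round(p, 3) for p in pi))))
```

The resulting vector is exactly the "proportion of time in each state" reading: for this made-up matrix the server is idle roughly 37% of the time, busy about 49%, and overloaded about 14%.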

But is life a Markov chain? Are we really able to model our lives as some form of memoryless system of finitely (or infinitely) many discrete states? What if life were a Markov chain? Would that mean that what we did before does not contribute to the probability of what we will end up doing?

If life were a Markov chain, then it would appear that whatever we do next does not rely on whatever we did earlier. In a small way, this can be true. One often hears of people who have "re-made" themselves in a new environment, simply by capitalising on the fact that the people there do not know about their past, and rebuilding a new identity from there. But in general, it is probably not true that life can be modelled as a Markov chain, or at least not as one whose stationary distribution is defined. This is because there are states in our lives that we just cannot return to, like how we cannot return to childhood after reaching adulthood. In Markov chain terms, such states are transient: if a stationary distribution exists at all, the long-run proportion of time we spend in the state of being a child is as good as 0. There are also times when we keep alternating between states, albeit with probabilities that shift over time, so the chain would not even be time-homogeneous. These things hamper a perfect modelling of life as a Markov chain.
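The childhood point can be sketched with a two-state toy chain in which "adult" is absorbing, so "child" is transient. The 0.9 "stay a child" probability per step is of course made up; what matters is that the probability of still being in the transient state decays to 0, which is why the stationary distribution assigns it no weight.

```python
# A toy "life" chain: child -> adult is possible, adult -> child is not,
# so "child" is a transient state. The 0.9 probability is made up.
P = {
    "child": {"child": 0.9, "adult": 0.1},
    "adult": {"adult": 1.0},
}

def prob_child_after(n):
    """Probability of still being a child after n steps, starting as one.

    Since "adult" is absorbing, this is just P(child -> child) ** n.
    """
    return P["child"]["child"] ** n

for n in (10, 50, 100):
    print(n, prob_child_after(n))
```

After 100 steps the probability is already below one in ten thousand, and in the limit it vanishes entirely, matching the zero stationary weight on the transient state.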

There is an old saying that "you are because of what you were, and what you are determines what you will be"; this is the cause-and-effect relationship between past, present and future. It is a relatively profound concept, which hints that perhaps one's future isn't really as unpredictable as it seems. Yet at the same time, it embodies the possibility of changing one's present in order to affect the future. I think this dual nature of the statement reflects itself in the pessimistic and optimistic attitudes respectively. A pessimist will observe that since one's past implies one's present, and one's present implies one's future, then by transitivity of the implication, one's past implies one's future, and thus there is nothing one can do about it. An optimist will look at it and realise that the present is the now, and that the two "presents" in the saying are not necessarily the same; one's past brings one to one's present, but one can alter one's present to become one's future.

Pessimism and optimism, transitivity of implication, and Markov chains. Why do I suddenly bring this up? Because I can. Most of the knowledge that humans as a whole generate exists because we want to understand what is going on in the world; we want to know about our past, and how to deal with the future. In so many fields of human knowledge, we find a heavy reliance on past data and current understanding to try to determine what the future might bring. Even a field as concerned with the past as history is, in actuality, a post-mortem analysis of what people did and why they did it, in order to predict what can happen in the future given similar circumstances and slightly more information. This, in its own way, challenges the idea of the Markov chain as well as the transitivity of implication, and also questions how far one's predictions are shaped by one's predisposition towards pessimism or optimism.

So am I a pessimist or an optimist? I think it is fairly clear which side I lean towards; I'm cautiously optimistic, but at the same time I maintain an almost deliberate pessimism, in the belief that if I expect the worst and get something good instead, I will have done well.
