For notes and additional problems, see: http://mathsci2.appstate.edu/~hph/SageMath/
Markov Chain Example 1: Weather – A study of the weather in Tel Aviv showed that the sequence of wet and dry days could be predicted quite accurately as follows.
If the current day is dry then there is a
.250 probability of having a wet day the next day
.750 probability of having a dry day the next day
If the current day is wet then there is a
.662 probability of having a wet day the next day
.338 probability of having a dry day the next day
The set of outcomes is U = {wet, dry}, and the probabilities can be put in a transition matrix:
          | wet now | dry now
wet later |  .662   |  .250
dry later |  .338   |  .750
This matrix is a really nice way to characterize Markov chains. There is no ambiguity in the definition, provided one remembers that the i,j entry is the probability of going from state u_j to state u_i. Note also that the columns sum to 1, since each column lists probabilities for all possible future outcomes for a given current state.
The main question we want to answer eventually is, “what happens over the long term?” In the weather example, what is the long-term probability of a dry or wet day? To do this we need to be able to calculate the probability of a wet or dry day two days, three days, four days, etc., after a wet or dry day. These are called higher-order transition probabilities and are denoted P_{i,j}(k).
Theorem: If P is the transition matrix for a Markov chain, then P_{i,j}(k) is the i,j entry of P^k, the kth power of the matrix.
The probability that a dry day will occur 3 days after a wet day is .53463, from entry 2,1 of P^3:

              | wet now | dry now
wet in 3 days | .46537  | .39544
dry in 3 days | .53463  | .60456
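A quick way to check this in Sage (a minimal sketch using the matrix above):

    P = matrix(RDF, [[0.662, 0.250],
                     [0.338, 0.750]])   # columns: wet now, dry now
    print(P^3)                          # the 2,1 entry is about 0.53463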
Chains with the property that there is an integer k such that every state can be reached from every other state in exactly k steps are called regular chains. These chains have two interesting properties:
Theorem: In a regular chain, some power of the transition matrix has all of its entries positive.
Theorem: The powers of the transition matrix approach a matrix with all columns the same. Moreover, this common column – called the fixed-point probability vector – contains the long-term probabilities of being in each state.
In addition to the usual question, “if we start in state i, what is the probability we get to j in k steps?” we will ask the following question about regular chains:
What is the long term probability of being in state i?
Look at the weather example:
High powers of the transition matrix are all, to five decimal places,

          | wet now | dry now
wet later | .42517  | .42517
dry later | .57483  | .57483

so in the long run about 42.5% of days are wet and 57.5% are dry, regardless of today's weather.
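One way to find the fixed-point vector exactly in Sage is to take an eigenvector of P for eigenvalue 1 and rescale it so its entries sum to 1:

    P = matrix(QQ, [[662/1000, 250/1000],
                    [338/1000, 750/1000]])
    v = (P - 1).right_kernel().basis()[0]   # eigenvector for eigenvalue 1
    print(v / sum(v))                       # (125/294, 169/294), about (.42517, .57483)
    print((P^50).change_ring(RDF))          # both columns are already close to this vector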
Markov Chain Example 2: Russian Roulette – There is a revolver whose cylinder has six chambers, one of which holds a bullet. The cylinder is spun and then the gun is fired at a person’s head. After each firing, the person is either dead or alive. If the person survives, the cylinder is spun again and the gun is fired again. This is repeated until the person is dead.
The set of outcomes is U = {alive, dead}, and the probabilities are:
            | alive now | dead now
alive later |    5/6    |    0
dead later  |    1/6    |    1
A state that cannot be left is called an absorbing state. A chain that has at least one absorbing state, and in which an absorbing state can be reached from every non-absorbing state, is called an absorbing chain.
Theorem: As the number of stages approaches infinity in an absorbing chain, the probability of being in a non-absorbing state approaches 0.
Russian Roulette is an example of an absorbing chain. Note that very high powers of the matrix give:
            | alive now | dead now
alive later |     0     |    0
dead later  |     1     |    1

(the entries in the first row are not exactly 0, but (5/6)^k goes to 0 quickly as k grows).
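For instance, in Sage:

    P = matrix(QQ, [[5/6, 0],
                    [1/6, 1]])
    print((P^100).change_ring(RDF))   # the alive row is about 1.2e-8, essentially 0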
We will be interested in the following questions related to absorbing chains:
(1) What is the probability of ending up in absorbing state i when starting in state j?
(2) What is the expected number of times we will be in state j before being absorbed?
(3) What is the expected number of stages before being absorbed?
These can all be answered by studying the transition matrix. For any absorbing Markov chain, the transition matrix can be rewritten in the following form:
             | absorbing | nonabsorbing
absorbing    |     I     |      R
nonabsorbing |     0     |      Q
where I is an identity matrix, 0 is a matrix of zeros, and R and Q are submatrices giving the probabilities of going from nonabsorbing states to absorbing states and the probabilities of staying among the nonabsorbing states, respectively.
The key ideas are listed in the following theorem:
Theorem: In an absorbing Markov chain, the matrix N = (I - Q)^(-1) exists, and
(1) the expected number of times we are in state i given that we start in state j is given by the i,j entry of N;
(2) the expected number of steps before absorption given that the process starts in state i is the sum of column i of N;
(3) the probability of absorption in state i given that the process starts in state j is given by the i,j entry of the matrix RN.
For Russian Roulette, Q = [5/6] and R = [1/6], so N = (I - Q)^(-1) = [1/6]^(-1) = [6], the column sum of N is 6, and RN = [1/6][6] = [1].
Interpreting these numbers gives us:
(1) We expect to be alive 6 stages given that we start out alive.
(2) We expect to be alive 6 stages before dying.
(3) The probability of dying given that we start out alive is 1.
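These values are easy to check in Sage:

    Q = matrix(QQ, [[5/6]])   # alive -> alive
    R = matrix(QQ, [[1/6]])   # alive -> dead
    N = (identity_matrix(QQ, 1) - Q).inverse()
    print(N)       # [6]: expected number of stages alive
    print(R * N)   # [1]: absorption (death) is certain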
Not that interesting. Here is another problem:
A rat is dropped into chamber 1 in the maze, and wanders through the chambers at random until it finds the cheese in chamber 5. Assuming that at any stage the rat chooses a door out of any chamber at random, what is the expected number of stages before the rat finds the cheese? What is the expected number of times that each chamber is entered? If the rat is particularly stupid and only sees the cheese half the time when entering chamber 5, how do these answers change?
We need to set up a transition matrix. First notice that the problem implies room 5 is an absorbing state. So the matrix row for room 5 (the probabilities of entering room 5 next) looks like: 0, 1/3, 1/3, 0, 1, taking the columns to represent rooms 1 through 5 in order. Notice for example the 1/3 in position 2, which indicates that 1/3 of the time a rat in room 2 will enter room 5.
The row for room 1: 0, 1/3, 1/3, 0, 0
The row for room 2: 1/2, 0, 0, 1/2, 0 (the rat enters room 2 from room 1 or from room 4, each of which has two doors; with the entries placed this way each column of the finished matrix sums to 1, as it must).
Continuing in this way gives the matrix:
      room:   1     2     3     4     5
        1 [   0    1/3   1/3    0     0 ]
        2 [  1/2    0     0    1/2    0 ]
        3 [  1/2    0     0    1/2    0 ]
        4 [   0    1/3   1/3    0     0 ]
        5 [   0    1/3   1/3    0     1 ]

Deleting the row and column for the absorbing room 5 leaves Q, and

    N = (I - Q)^(-1) = [  2     1     1     1  ]
                       [ 3/2    2     1    3/2 ]
                       [ 3/2    1     2    3/2 ]
                       [  1     1     1     2  ]

The column sums of N are 6, 5, 5, 6, and RN = [1 1 1 1].
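Here is a Sage sketch of the whole computation, using the matrix above:

    P = matrix(QQ, [[0,   1/3, 1/3, 0,   0],
                    [1/2, 0,   0,   1/2, 0],
                    [1/2, 0,   0,   1/2, 0],
                    [0,   1/3, 1/3, 0,   0],
                    [0,   1/3, 1/3, 0,   1]])
    Q = P[0:4, 0:4]   # nonabsorbing block
    R = P[4:5, 0:4]   # nonabsorbing -> absorbing block
    N = (identity_matrix(QQ, 4) - Q).inverse()
    print(N)                              # expected number of visits to each room
    print([sum(c) for c in N.columns()])  # [6, 5, 5, 6]: expected stages before the cheese
    print(R * N)                          # [1 1 1 1]: the rat always finds the cheese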
Interpret these numbers!
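Finally, for the stupid rat, here is one possible model (an assumption about the setup: on entering room 5 the rat sees the cheese with probability 1/2 and otherwise wanders back out through one of room 5's two doors, so "in room 5, cheese unseen" becomes an extra nonabsorbing state):

    # Nonabsorbing states: rooms 1, 2, 3, 4, and "in room 5, cheese unseen";
    # from rooms 2 and 3 the rat is absorbed with probability 1/3 * 1/2 = 1/6
    Q = matrix(QQ, [[0,   1/3, 1/3, 0,   0],
                    [1/2, 0,   0,   1/2, 1/2],
                    [1/2, 0,   0,   1/2, 1/2],
                    [0,   1/3, 1/3, 0,   0],
                    [0,   1/6, 1/6, 0,   0]])
    N = (identity_matrix(QQ, 5) - Q).inverse()
    print(N.columns()[0])        # (3, 3, 3, 2, 1): expected visits starting in room 1
    print(sum(N.columns()[0]))   # 12: the expected number of stages doubles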