def. Markov Chain. A Stochastic Process $\{X_n, n = 0, 1, 2, \dots\}$ where:

  • $X_n$ takes values in a finite or countably infinite state space $S$
  • $X_{n+1}$ only depends on $X_n$, i.e. is memoryless: $P(X_{n+1} = j \mid X_n = i, X_{n-1}, \dots, X_0) = P(X_{n+1} = j \mid X_n = i) = P_{ij}$
  • $P = [P_{ij}]$ is a matrix that contains all transition probabilities, the transition matrix
  • Row sums are always 1, since the total probability of exiting a state should add to 1: $\sum_j P_{ij} = 1$
  • Some states may be absorbing; see Absorbing States below

thm. Chapman-Kolmogorov Eq. Given Markov chain $\{X_n\}$ and transition probability matrix $P$, the transition probability from state $i$ to $j$ in $n$ steps is denoted $P^n_{ij}$ and collected for all $i, j$ in matrix $P^{(n)}$, the n-step transition matrix. This is calculated as:

$$P^{n+m}_{ij} = \sum_k P^n_{ik} P^m_{kj} \quad\Longleftrightarrow\quad P^{(n+m)} = P^{(n)} P^{(m)}, \text{ so } P^{(n)} = P^n$$

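As a quick sanity check of both the transition-matrix definition and the Chapman-Kolmogorov equation, here is a minimal numerical sketch (the 3-state matrix and the numpy usage are my own illustration, not from the notes):

```python
import numpy as np

# A hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])
assert np.allclose(P.sum(axis=1), 1.0)  # row sums = 1

# The n-step transition matrix is just the n-th matrix power of P.
def n_step(P, n):
    return np.linalg.matrix_power(P, n)

# Chapman-Kolmogorov: P^(n+m) = P^(n) @ P^(m)
n, m = 3, 4
assert np.allclose(n_step(P, n + m), n_step(P, n) @ n_step(P, m))

# e.g. probability of going from state 0 to state 2 in 7 steps:
print(n_step(P, 7)[0, 2])
```
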
Absorbing States

def. Absorbing State is a state $i$ where once entered, it is never left, i.e. $P_{ii} = 1$. Motivation. Computing the probability that, from an initial state, the process will ever enter a certain set of states is a common question, yet it is difficult to calculate directly. The following formalizes this type of problem and offers a concise solution. Consider a Markov chain $\{X_n\}$, initial state $X_0 = i$, transition matrix $P$. Let $A \subset S$ and we want to know if $X$ ever enters any state within $A$ in $n$ transition steps. This is:

$$P(X_k \in A \text{ for some } k \le n \mid X_0 = i)$$

which is a pain in the ass to calculate. Instead, we construct a new Markov chain $\{Y_n\}$ with transition matrix $Q$ and states $(S \setminus A) \cup \{a\}$, where $a$ is an absorbing state. $Q$ is constructed accordingly to model this absorption, in the following way:

  • $Q_{ij} = P_{ij}$ for all $i, j \notin A$
  • $Q_{ia} = \sum_{j \in A} P_{ij}$, obviously only for $i \notin A$
  • $Q_{aa} = 1$, since it's an absorbing state

Then the original probability is

$$P(X_k \in A \text{ for some } k \le n \mid X_0 = i) = Q^n_{ia}$$

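A minimal sketch of this absorption construction (the 4-state chain, the target set $A$, and the helper name `absorb` are hypothetical, chosen only for illustration):

```python
import numpy as np

def absorb(P, A):
    """Collapse the states in A into a single absorbing state a.

    Returns Q with the kept states first (in original order) and the
    absorbing state a as the last row/column.
    """
    keep = [s for s in range(P.shape[0]) if s not in A]
    k = len(keep)
    Q = np.zeros((k + 1, k + 1))
    Q[:k, :k] = P[np.ix_(keep, keep)]                    # Q_ij = P_ij for i, j not in A
    Q[:k, k] = P[np.ix_(keep, sorted(A))].sum(axis=1)    # Q_ia = sum_{j in A} P_ij
    Q[k, k] = 1.0                                        # Q_aa = 1 (absorbing)
    return Q, keep

# Hypothetical 4-state chain; ask: starting from state 0, what is the
# probability of ever entering A = {3} within n = 10 steps?
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.2, 0.2, 0.2, 0.4],
])
A = {3}
Q, keep = absorb(P, A)
n = 10
i = keep.index(0)                 # row of the original state 0 in Q
a = Q.shape[0] - 1                # index of the absorbing state
print(np.linalg.matrix_power(Q, n)[i, a])   # = P(hit A within n steps | X_0 = 0)
```
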
Classification of States

def. Accessible. From state $i$, state $j$ is accessible iff:

$$P^n_{ij} > 0 \text{ for some } n \ge 0$$

def. Communication & Class. States $i$ and $j$ communicate ($i \leftrightarrow j$) iff $i$ accesses $j$ and vice versa. A class is a maximal set of states that all communicate with each other. Properties (a small sketch computing communication classes follows after this list):

  • Identity. $i$ communicates with itself
  • Transitive. If $i$ comm. with $j$, and $j$ with $k$, then $i$ comms. with $k$. Proof: pick $n, m$ with $P^n_{ij} > 0$ and $P^m_{jk} > 0$; by Chapman-Kolmogorov, $P^{n+m}_{ik} \ge P^n_{ij} P^m_{jk} > 0$, and symmetrically for the reverse direction.

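As a small illustrative sketch (my own, not from the notes; the function names are made up), accessibility is just reachability in the transition graph, and communicating classes are obtained by intersecting reachability in both directions:

```python
import numpy as np

def accessible(P):
    """R[i, j] = True iff j is accessible from i (P^n_ij > 0 for some n >= 0)."""
    n = P.shape[0]
    R = np.eye(n, dtype=bool) | (P > 0)     # 0- and 1-step reachability
    for k in range(n):                      # transitive closure (Floyd-Warshall style)
        R |= R[:, [k]] & R[[k], :]
    return R

def communicating_classes(P):
    """Group states into classes: i ~ j iff each is accessible from the other."""
    R = accessible(P)
    comm = R & R.T
    classes, seen = [], set()
    for i in range(P.shape[0]):
        if i not in seen:
            cls = {int(s) for s in np.flatnonzero(comm[i])}
            classes.append(cls)
            seen |= cls
    return classes

# Hypothetical example: states 0 and 1 communicate, state 2 is its own class.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.9, 0.0, 0.1],
    [0.0, 0.0, 1.0],
])
print(communicating_classes(P))   # [{0, 1}, {2}]
```
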
def. Irreducible Markov Chain. A Markov chain that contains only one class. As we also see below, an irreducible Markov chain has at most one recurrent class.

def. Recurrence and Transience. Initial state $X_0 = i$. Let $f_i = P(X_n = i \text{ for some } n \ge 1 \mid X_0 = i)$, i.e. the probability that we will ever visit state $i$ again. State $i$ is:

  • recurrent if $f_i = 1$: the process will visit $i$ again (and hence infinitely often)
  • transient if $f_i < 1$: the process will eventually stop visiting $i$

thm. Determination of Recurrence. State $i$ is:

  • recurrent if $\sum_{n=1}^{\infty} P^n_{ii}$ diverges
  • transient if $\sum_{n=1}^{\infty} P^n_{ii}$ converges

Proof. See (Proof) Markov Recurrence by Sum Divergence. This leads to the following corollaries, which are fairly self-evident. (A small numerical sketch of the divergence criterion follows below, after the list.)
  1. thm. Finite-state Markov chains always contain a recurrent state.
    • …but infinite-state Markov chains may have no recurrent states
  2. thm. Recurrence/transience is a class property. If states $i$ and $j$ communicate and $i$ is recurrent, so is $j$.
  3. thm. In an irreducible Markov chain, either all states are recurrent or all are transient; in the finite-state case the single class is recurrent and contains all states.

def. Positive/Null Recurrent. Let state $i$ be recurrent, and let $m_i = E[\min\{n \ge 1 : X_n = i\} \mid X_0 = i]$, the expected time of return. Then $i$ is:

  • positive recurrent if $m_i < \infty$
  • null recurrent if $m_i = \infty$

Intuition. Null recurrence is an odd variety. It means the process will definitely eventually return to $i$, but the expected waiting time for that return is infinite(!).

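To make the sum criterion concrete, here is a small sketch (my own example, not from the notes) using the simple random walk on the integers, for which the return probabilities have the closed form $P^{2n}_{00} = \binom{2n}{n} p^n (1-p)^n$: the partial sums keep growing when $p = 0.5$ (recurrent, in fact null recurrent, tying into the intuition above) and level off when $p = 0.7$ (transient).

```python
def partial_sum_of_returns(p, n_terms):
    """Partial sum of P^{2n}_{00} = C(2n, n) p^n (1-p)^n for the simple random
    walk on the integers, computed iteratively to stay in float range."""
    q = 1.0 - p
    term, total = 1.0, 0.0                  # term for n = 0 is C(0,0) p^0 q^0 = 1
    for n in range(1, n_terms + 1):
        # ratio C(2n,n)/C(2n-2,n-1) = (2n)(2n-1)/n^2, times the extra factor p*q
        term *= (2 * n) * (2 * n - 1) / (n * n) * p * q
        total += term
    return total

for p in (0.5, 0.7):
    for n_terms in (10**2, 10**4, 10**6):
        print(f"p={p}, sum of first {n_terms} terms ~ {partial_sum_of_returns(p, n_terms):.3f}")
# p = 0.5: the partial sums keep growing (like sqrt(n)), so the series diverges -> recurrent.
# p = 0.7: the partial sums stabilize quickly, so the series converges -> transient.
```
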
  • thm. An irreducible recurrent Markov chain does not have a stationary distribution iff it is null recurrent
  • thm. In a finite-state Markov chain, all recurrent states are positive recurrent.

def. Periodic/Aperiodic. A state $i$ is periodic with period $d$ iff:

  1. $P^n_{ii} = 0$ for all $n$ not a multiple of $d$
  2. $d$ is the largest such number (a small numerical sketch of computing the period follows after this list)
  • If $d = 1$, the state is aperiodic.
  • thm. Periodicity is a class property.
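
As a quick illustration (my own sketch; the helper `period` and the example chains are hypothetical), using the standard equivalent characterization of the period as $\gcd\{n \ge 1 : P^n_{ii} > 0\}$, truncated at a maximum power:

```python
import math
import numpy as np

def period(P, i, max_n=64):
    """Period of state i as gcd of the return times n <= max_n with P^n_ii > 0
    (exact once max_n is large enough for the chain at hand)."""
    d = 0
    Pn = np.eye(P.shape[0])
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = math.gcd(d, n)
    return d

# A deterministic 3-cycle: every state has period 3.
cycle = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])
print(period(cycle, 0))   # 3

# Adding a self-loop makes state 0 (and, by the class property, the others) aperiodic.
lazy = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])
print(period(lazy, 0))    # 1
```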

Stationary Distribution

Motivation. In certain Markov chains, as we run the process for a large number of steps, the fraction of time spent in each state approaches some number. This is equally the limit of the transition matrix as the number of transitions goes to infinity.

  • def. Ergodic state. A state that is positive recurrent and aperiodic.
  • def. Ergodic Markov Chain is a Markov chain with only ergodic states. Since ergodicity is naturally also a class property, we consider irreducible ergodic MCs, i.e. chains that “go in cycles”, will return in a finite amount of time, and have only one “cycle group”.

thm. Stationary Distribution. For an irreducible ergodic Markov chain there exists the limit:

$$\pi_j = \lim_{n \to \infty} P^n_{ij}, \quad \text{independent of } i$$

Notice first that each column has just one limiting value; the limit does not depend on the starting row $i$. Thus we index $\pi_j$ with just one index, the destination. $\pi_j$ for each column is the unique non-negative solution to:

$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_j \pi_j = 1$$

(A numerical sketch of solving this system is given at the end of this section.)

  • thm. If the MC is (just) irreducible, then such a solution exists iff the MC is positive recurrent
    • If a solution exists, i.e. the MC is positive recurrent, the solution will be unique
    • If the MC is aperiodic as well, $\pi_j$ is then also the limiting probability $\lim_{n \to \infty} P^n_{ij}$.
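
To close, a minimal numerical sketch (my own example, not from the notes): it solves $\pi = \pi P$ with $\sum_j \pi_j = 1$ as a linear system and checks that the rows of $P^n$ approach $\pi$ for an irreducible aperiodic chain.

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P with sum(pi) = 1 as a linear system."""
    n = P.shape[0]
    # (P^T - I) pi = 0, plus the normalization constraint sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical irreducible, aperiodic (hence ergodic) 3-state chain.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])
pi = stationary(P)
print(pi)                                   # stationary distribution
print(np.linalg.matrix_power(P, 100)[0])    # a row of P^100, close to pi
assert np.allclose(pi, pi @ P)              # pi is indeed invariant: pi = pi P
```

Solving the linear system only needs irreducibility and positive recurrence; the comparison with a row of $P^{100}$ additionally relies on aperiodicity, matching the last bullet above.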