Markov processes

Introduction

Markov property

For a process with the Markov property, the probability distribution of future states depends only on the current state, not on the earlier history.

\(P(x_{t+n}\mid x_t)=P(x_{t+n}\mid x_t, x_{t-1}, \dots)\)

Markov chains

Finite state Markov chains

Transition matrices

The transition matrix gives the probabilities of moving between discrete states: entry \(M_{ij}\) is the probability of moving from state \(j\) to state \(i\), so each column sums to \(1\).

We can get the probability of being in each state after one step by multiplying the state vector by the transition matrix:

\(Mv\)
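
As a minimal sketch in NumPy (the two-state matrix below is invented for illustration; columns of \(M\) correspond to the current state and sum to \(1\)):

```python
import numpy as np

# Hypothetical 2-state chain; entry M[i, j] = P(next = i | current = j),
# so each column sums to 1.
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])

v = np.array([1.0, 0.0])  # start in state 0 with certainty

v_next = M @ v            # distribution over states after one step
print(v_next)             # [0.9 0.1]
```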

Time-homogeneous Markov chains

For time-homogeneous Markov chains, the transition matrix is independent of time.

For these we can calculate the probability of being in any given state \(n\) steps in the future:

\(M^nv\)

For an irreducible, aperiodic chain, \(M^nv\) becomes independent of \(v\) as \(n\) tends to infinity: the initial state does not matter for the long-term probabilities.
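
A quick numerical check, reusing the illustrative matrix from above: two very different starting vectors are driven to the same distribution as \(n\) grows.

```python
import numpy as np

M = np.array([[0.9, 0.5],
              [0.1, 0.5]])

v1 = np.array([1.0, 0.0])  # start in state 0
v2 = np.array([0.0, 1.0])  # start in state 1

for n in (1, 5, 50):
    Mn = np.linalg.matrix_power(M, n)
    print(n, Mn @ v1, Mn @ v2)
# By n = 50 both products have converged to ~[0.833 0.167],
# regardless of the starting vector.
```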

How do we find the steady-state probabilities?

\(Mv=v\)

The eigenvectors! The steady state is the eigenvector of \(M\) with associated eigenvalue \(1\). For an irreducible, aperiodic chain this eigenvector is unique (up to scaling), and we can find it by iteratively multiplying any vector by \(M\) (power iteration).
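
A sketch of both routes to the steady state, again with the illustrative matrix from above: power iteration, cross-checked against the eigenvector of \(M\) with eigenvalue \(1\) from NumPy's eigendecomposition.

```python
import numpy as np

M = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Power iteration: repeatedly apply M to any starting distribution.
v = np.array([0.5, 0.5])
for _ in range(100):
    v = M @ v
print(v)  # ~[0.833 0.167]

# Cross-check: the eigenvector with eigenvalue 1, normalised to sum to 1.
vals, vecs = np.linalg.eig(M)
pi = vecs[:, np.argmax(np.isclose(vals, 1.0))].real
pi /= pi.sum()
print(pi)  # ~[0.833 0.167]
```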

Infinite state Markov chains

Markov model description

We can represent the transition matrix as a series of rules to reduce the number of dimensions:

\(P(x_t = x \mid x_{t-1} = y) = f(x, y)\)

We can represent states as numbers, rather than atomic labels. The states could be integers, or even continuous (real-valued).

In more complex models, states can be vectors.
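
A minimal sketch of such a rule-based chain on the integers (the drift-toward-zero rule below is invented for illustration): rather than storing an infinite transition matrix, the next-state distribution is computed from the current state by a function.

```python
import random

def step(x):
    # Illustrative rule: a random walk on the integers
    # with a slight drift back toward 0.
    if x > 0:
        p_up = 0.4
    elif x < 0:
        p_up = 0.6
    else:
        p_up = 0.5
    return x + 1 if random.random() < p_up else x - 1

x = 0
path = [x]
for _ in range(10):
    x = step(x)
    path.append(x)
print(path)
```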

Hidden Markov Models

Introduction

As well as the hidden Markov process \(X\), we have an observed process \(Y\), where each \(Y_t\) depends only on the current hidden state \(X_t\).
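
A minimal sketch of sampling from such a model with NumPy (the two-state transition and emission matrices below are invented for illustration): the hidden chain \(X\) steps according to \(M\), and each observation \(Y_t\) is drawn from a column of the emission matrix selected by \(X_t\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state HMM. Columns are the current hidden state.
M = np.array([[0.9, 0.5],   # hidden transition matrix P(x_t | x_{t-1})
              [0.1, 0.5]])
E = np.array([[0.8, 0.3],   # emission matrix P(y_t | x_t)
              [0.2, 0.7]])

x = 0
for t in range(5):
    y = rng.choice(2, p=E[:, x])  # observation depends only on current x
    print(f"t={t}: hidden x={x}, observed y={y}")
    x = rng.choice(2, p=M[:, x])  # hidden state makes a Markov step
```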

Dynamic Bayesian networks

Introduction