Lesson 15 of 15

Markov Chains

The Markov Property

A sequence $X_0, X_1, X_2, \ldots$ is a Markov chain if the future depends only on the present, not the past:

P(X_{n+1} = j \mid X_n = i, X_{n-1}, \ldots, X_0) = P(X_{n+1} = j \mid X_n = i)
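A minimal sketch of the property in code: when simulating a chain, the next state is sampled using only the current state, never the earlier path (the two-state transition probabilities here are illustrative, not part of the lesson's example):

```python
import random

# Illustrative transition probabilities for a two-state chain
P = {0: [0.9, 0.1],   # from state 0
     1: [0.2, 0.8]}   # from state 1

def simulate(start, steps, seed=42):
    """Sample a path; each transition looks only at the current state."""
    random.seed(seed)
    state, path = start, [start]
    for _ in range(steps):
        # the Markov property: `state` alone determines the next step,
        # the history stored in `path` is never consulted
        state = random.choices([0, 1], weights=P[state])[0]
        path.append(state)
    return path

print(simulate(0, 10))
```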

Transition Matrix

The transition matrix $P$ captures all single-step probabilities:

P_{ij} = P(X_{n+1} = j \mid X_n = i)

Each row sums to 1. Treating the distribution as a row vector, the $n$-step distribution is obtained by vector-matrix multiplication:

\mathbf{v}^{(n)} = \mathbf{v}^{(0)} \cdot P^n
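This can be sketched directly: multiply the row vector by $P$ once per step. The $2 \times 2$ matrix below is a hypothetical example chosen for illustration:

```python
def n_step(v0, P, n):
    """Compute v0 · P^n by applying v ← vP, n times."""
    v = list(v0)
    for _ in range(n):
        # row vector times matrix: new v[j] = sum_i v[i] * P[i][j]
        v = [sum(v[i] * P[i][j] for i in range(len(P)))
             for j in range(len(P))]
    return v

P = [[0.9, 0.1], [0.2, 0.8]]      # hypothetical transition matrix
print(n_step([1.0, 0.0], P, 2))   # distribution after 2 steps
```

Starting from state 0 with certainty, two steps give roughly [0.83, 0.17].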

Stationary Distribution

A distribution $\pi$ is stationary if $\pi P = \pi$. It is the long-run fraction of time spent in each state.

For an ergodic chain, iterating $v \leftarrow v P$ converges to $\pi$ from any starting distribution.

# Weather model: sunny(0) or rainy(1)
P = [[0.9, 0.1],   # from sunny: 90% stay sunny
     [0.2, 0.8]]   # from rainy: 80% stay rainy

# Approximate the stationary distribution (πP = π) by power iteration
v = [0.5, 0.5]
for _ in range(1000):
    v = [sum(v[i]*P[i][j] for i in range(2)) for j in range(2)]
print([round(x, 4) for x in v])  # [0.6667, 0.3333]

Analytically, balancing the probability flow between the two states, $\pi_0 \cdot 0.1 = \pi_1 \cdot 0.2$, gives $\pi_0/\pi_1 = 2$, so $\pi_0 = 2/3$ and $\pi_1 = 1/3$.
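A quick sanity check of the analytic answer, using the weather matrix above: multiplying $\pi = [2/3, 1/3]$ by $P$ should return $\pi$ unchanged.

```python
P = [[0.9, 0.1], [0.2, 0.8]]  # weather model from above
pi = [2/3, 1/3]               # claimed stationary distribution

# πP should equal π if π is stationary
pi_next = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
print(pi_next)  # both components match π up to floating-point error
```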

Your Task

Implement two functions:

  • markov_stationary(P) — returns the stationary distribution obtained by iterating 1000 steps from the uniform distribution $[1/n, \ldots, 1/n]$, each value rounded to 4 decimal places, one per line
  • markov_n_step(initial, P, n) — returns the $n$-step distribution $\mathbf{v}^{(0)} P^n$, each value rounded to 4 decimal places, one per line