Lesson 15 of 15
Markov Chains
The Markov Property
A sequence X_0, X_1, X_2, ... is a Markov chain if the future depends only on the present, not the past:

P(X_{n+1} = j | X_n = i, X_{n-1}, ..., X_0) = P(X_{n+1} = j | X_n = i)
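A minimal sketch of this property in code: sampling the next state consults only the current state, never the earlier history. The `transitions` matrix and `step` function below are illustrative names, using the two-state weather chain from later in this lesson.

```python
import random

random.seed(0)

# transitions[s] = [P(next = 0 | current = s), P(next = 1 | current = s)]
transitions = [[0.9, 0.1],
               [0.2, 0.8]]

def step(state):
    # Only the current state is used to sample the next one --
    # the rest of the path is irrelevant: the Markov property.
    return random.choices([0, 1], weights=transitions[state])[0]

path = [0]
for _ in range(10):
    path.append(step(path[-1]))
print(path)  # a length-11 random walk over states {0, 1}
```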
Transition Matrix
The transition matrix P captures all single-step probabilities, with entry P[i][j] = P(X_{n+1} = j | X_n = i):
Each row sums to 1. The n-step distribution is obtained by repeated vector-matrix multiplication: v_n = v_0 P^n, where v_0 is the initial distribution written as a row vector.
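One way to sketch this update is to multiply the row vector by P once per step; the function name `n_step` here is illustrative, not part of the lesson's required API:

```python
P = [[0.9, 0.1],
     [0.2, 0.8]]

def n_step(dist, P, n):
    # Each pass computes new[j] = sum_i dist[i] * P[i][j], i.e. dist @ P.
    for _ in range(n):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Distribution after 2 steps, starting from a sunny day:
print(n_step([1.0, 0.0], P, 2))  # ~ [0.83, 0.17]
```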
Stationary Distribution
A distribution pi is stationary if pi P = pi. It is the long-run fraction of time spent in each state.
For an ergodic chain, iterating v ← vP converges to pi from any starting distribution.
# Weather model: sunny(0) or rainy(1)
P = [[0.9, 0.1],  # from sunny: 90% stay sunny
     [0.2, 0.8]]  # from rainy: 80% stay rainy

# Stationary distribution: solve πP = π by iteration
v = [0.5, 0.5]
for _ in range(1000):
    v = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]
print([round(x, 4) for x in v])  # [0.6667, 0.3333]
Analytically: pi P = pi together with pi_0 + pi_1 = 1 gives 0.1 pi_0 = 0.2 pi_1, so pi = (2/3, 1/3).
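The stationary distribution can also be found as the left eigenvector of P with eigenvalue 1. A sketch using NumPy, assuming it is available in your environment:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Left eigenvectors of P are (right) eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector whose eigenvalue is numerically 1.
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize so the entries sum to 1

print(np.round(pi, 4))  # ~ [0.6667 0.3333]
```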
Your Task
Implement two functions:
- markov_stationary(P) — returns the stationary distribution by iterating 1000 steps from the uniform distribution, rounded to 4 decimal places, one value per line
- markov_n_step(initial, P, n) — returns the n-step distribution, rounded to 4 decimal places, one value per line