Lesson 15 of 18

Power Iteration

Finding the Dominant Eigenvalue

Power iteration is an iterative algorithm that finds the dominant eigenvalue of a matrix, i.e. the eigenvalue of largest magnitude. It converges when that eigenvalue is strictly larger in magnitude than all the others.

Starting from an initial vector (random, or simply all ones), repeatedly multiply by $\mathbf{A}$ and normalize:

import math

def power_iteration(A, num_iter=100):
    n = len(A)
    # Start with a ones vector
    v = [1.0] * n
    eigenvalue = 0.0
    for _ in range(num_iter):
        # Multiply A @ v
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Rayleigh quotient: λ ≈ (v · Av) / (v · v)
        num = sum(v[i] * Av[i] for i in range(n))
        den = sum(v[i] * v[i] for i in range(n))
        eigenvalue = num / den
        # Normalize
        norm = math.sqrt(sum(x**2 for x in Av))
        v = [x / norm for x in Av]
    return round(eigenvalue, 4)

A = [[3, 0], [0, 5]]
print(power_iteration(A))   # 5.0

Why It Works

Write the starting vector as a combination of eigenvectors. Each multiplication by $\mathbf{A}$ scales every eigencomponent by its eigenvalue, so the component along the dominant eigenvector grows fastest relative to the others. After enough iterations, the vector aligns with that eigenvector, and the Rayleigh quotient gives the eigenvalue.
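A quick sketch of that alignment, using an illustrative symmetric matrix whose eigenvectors are known (the matrix values here are chosen for the example, not taken from the lesson):

```python
import math

# A = [[2, 1], [1, 2]] has eigenvector [1, 1] with eigenvalue 3
# and eigenvector [1, -1] with eigenvalue 1.
A = [[2, 1], [1, 2]]
v = [1.0, 0.0]  # has components along BOTH eigenvectors

for _ in range(20):
    # Multiply by A, then normalize to unit length
    v = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

# The [1, -1] component shrinks by a factor of 1/3 each step,
# so v is now essentially the normalized dominant eigenvector.
print(v)  # close to [0.7071, 0.7071], i.e. [1/√2, 1/√2]
```

Each iteration multiplies the ratio between the two components by λ₂/λ₁ = 1/3, which is why convergence is fast when the eigenvalue gap is large.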

The Rayleigh Quotient

For a vector $\mathbf{v}$ and matrix $\mathbf{A}$:

$$\lambda \approx \frac{\mathbf{v}^T \mathbf{A} \mathbf{v}}{\mathbf{v}^T \mathbf{v}}$$

This converges to the dominant eigenvalue as $\mathbf{v}$ aligns with the dominant eigenvector.
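The quotient can be checked directly: at an exact eigenvector it returns the eigenvalue exactly, and for a nearby vector it is already close. The matrix below is an illustrative example, not part of the lesson's test cases:

```python
# [[2, 1], [1, 2]] has dominant eigenvalue 3 with eigenvector [1, 1].
A = [[2, 1], [1, 2]]

def rayleigh(A, v):
    """Rayleigh quotient: (v · Av) / (v · v)."""
    n = len(v)
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    num = sum(v[i] * Av[i] for i in range(n))
    den = sum(v[i] * v[i] for i in range(n))
    return num / den

print(rayleigh(A, [1.0, 1.0]))   # 3.0 — exact at the eigenvector
print(rayleigh(A, [1.0, 0.9]))   # ≈ 2.9945 — close, for a nearby vector
```

Note that the quotient does not require the vector to be normalized; the denominator handles scaling.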

Applications

  • PageRank — Google's algorithm is power iteration on the web graph
  • PCA — the first principal component corresponds to the dominant eigenvector
  • Markov chains — finding stationary distributions
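To make the Markov-chain connection concrete, here is a minimal sketch for a hypothetical 2-state chain. A column-stochastic transition matrix has dominant eigenvalue 1, and its stationary distribution is the corresponding eigenvector; the transition probabilities below are made up for illustration:

```python
# Column-stochastic transition matrix: column j gives the
# probabilities of moving from state j to each state.
P = [[0.9, 0.5],
     [0.1, 0.5]]

# Power iteration without explicit normalization: multiplying by a
# column-stochastic matrix preserves the total probability, so the
# vector stays a valid distribution at every step.
pi = [0.5, 0.5]
for _ in range(100):
    pi = [sum(P[i][j] * pi[j] for j in range(2)) for i in range(2)]

print(pi)  # stationary distribution, ≈ [0.8333, 0.1667]
```

The subdominant eigenvalue of this matrix is 0.4, so the error shrinks by more than half each step. This is essentially what PageRank does, on a much larger (and slightly modified) transition matrix.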

Your Task

Implement power_iteration(A, num_iter=100) that returns the dominant eigenvalue of $\mathbf{A}$, rounded to 4 decimal places.
