Lesson 15 of 15

Conjugate Gradient Method


The Conjugate Gradient (CG) method solves symmetric positive definite systems $Ax = b$ iteratively, without factoring $A$. It is the algorithm of choice for large sparse systems.

Key Idea

CG builds a sequence of search directions $\mathbf{p}_0, \mathbf{p}_1, \ldots$ that are A-conjugate (mutually orthogonal in the energy inner product $\langle \mathbf{u}, \mathbf{v} \rangle_A = \mathbf{u}^T A \mathbf{v}$). Each step minimises the error over an expanding Krylov subspace.
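The energy inner product is easy to check numerically. A minimal sketch, using a small SPD matrix and a pair of hand-picked directions chosen for illustration (not produced by CG itself):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # SPD example matrix

def a_inner(u, v, A):
    """Energy inner product <u, v>_A = u^T A v."""
    return u @ A @ v

# p1 was chosen so that p0^T A p1 = 0, i.e. the pair is A-conjugate,
# even though p0 and p1 are not orthogonal in the usual sense.
p0 = np.array([1.0, 0.0])
p1 = np.array([1.0, -4.0])
print(a_inner(p0, p1, A))  # 0.0 -> A-conjugate
print(p0 @ p1)             # 1.0 -> not ordinarily orthogonal
```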

Algorithm

$$\mathbf{x}_0 = \mathbf{0}, \quad \mathbf{r}_0 = \mathbf{b}, \quad \mathbf{p}_0 = \mathbf{b}$$

For $k = 0, 1, 2, \ldots$:

$$\alpha_k = \frac{\mathbf{r}_k^T \mathbf{r}_k}{\mathbf{p}_k^T A \mathbf{p}_k} \quad (\text{optimal step size})$$

$$\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{p}_k, \qquad \mathbf{r}_{k+1} = \mathbf{r}_k - \alpha_k A \mathbf{p}_k$$

$$\beta_k = \frac{\mathbf{r}_{k+1}^T \mathbf{r}_{k+1}}{\mathbf{r}_k^T \mathbf{r}_k} \quad (\text{correction factor}), \qquad \mathbf{p}_{k+1} = \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k$$

In exact arithmetic, CG converges in at most $n$ iterations for an $n \times n$ SPD system; in practice it is stopped once the residual norm falls below a tolerance.
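The update rules above translate almost line-for-line into code. One possible sketch, with a residual tolerance and iteration cap added as illustrative stopping criteria (the lesson's task only specifies `conjugate_gradient(A, b)`):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A via CG."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = b.shape[0]
    if max_iter is None:
        max_iter = n          # exact arithmetic: at most n steps

    x = np.zeros(n)           # x_0 = 0
    r = b.copy()              # r_0 = b - A x_0 = b
    p = r.copy()              # p_0 = r_0
    rs_old = r @ r            # r_k^T r_k

    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # optimal step size along p_k
        x += alpha * p              # x_{k+1} = x_k + alpha_k p_k
        r -= alpha * Ap             # r_{k+1} = r_k - alpha_k A p_k
        rs_new = r @ r
        beta = rs_new / rs_old      # correction factor beta_k
        p = r + beta * p            # p_{k+1} = r_{k+1} + beta_k p_k
        rs_old = rs_new
    return x
```

Note that $A$ appears only in the product $A\mathbf{p}_k$, which is why CG suits large sparse matrices: a matrix-vector product is all that is required.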

Example

$$A = \begin{pmatrix}4 & 1\\1 & 3\end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix}1\\2\end{pmatrix} \quad \Rightarrow \quad \mathbf{x} = \begin{pmatrix}1/11\\7/11\end{pmatrix} \approx \begin{pmatrix}0.0909\\0.6364\end{pmatrix}$$
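The exact solution can be confirmed with a direct solve, which is a handy reference when testing an iterative method on a small system:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Direct solve as a ground-truth reference for the 2x2 example.
x = np.linalg.solve(A, b)
print(x)  # approximately [0.0909, 0.6364], i.e. (1/11, 7/11)
```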

Your Task

Implement `conjugate_gradient(A, b)` that solves $Ax = b$ for symmetric positive definite $A$.
