International
Tables for
Crystallography
Volume F
Crystallography of biological macromolecules
Edited by M. G. Rossmann and E. Arnold

International Tables for Crystallography (2006). Vol. F, ch. 15.1, p. 322

Section 15.1.5.2.1. The conjugate-gradient method

K. Y. J. Zhang,a K. D. Cowtanb* and P. Mainc

a Division of Basic Sciences, Fred Hutchinson Cancer Research Center, 1100 Fairview Ave N., Seattle, WA 98109, USA, b Department of Chemistry, University of York, York YO1 5DD, England, and c Department of Physics, University of York, York YO1 5DD, England
Correspondence e-mail:  cowtan+email@ysbl.york.ac.uk


The conjugate-gradient method does not require the inversion of the normal matrix, so the solution to a large system of linear equations can be obtained very quickly.

Starting from a trial solution to equations (15.1.5.4)[link], such as a null vector, [\delta \rho_0 \left({\bf x}\right) = {\bf 0}, \eqno(15.1.5.9)] the initial residual is [{\bf r}_0 = {\bf J}^T \left(\varepsilon - {\bf J}\delta \rho_0 \left({\bf x}\right)\right) \eqno(15.1.5.10)] and the initial search step is [{\bf p}_0 = {\bf r}_0. \eqno(15.1.5.11)]

The iterative process is as follows. The new shift to the density is [\delta \rho_{k + 1} \left({\bf x}\right) = \delta \rho_k \left({\bf x}\right) + \alpha _k {\bf p}_k, \eqno(15.1.5.12)] where [\alpha _k = {\bf r}_k^T {\bf p}_k / {\bf q}_k^T {\bf q}_k \eqno(15.1.5.13)] and [{\bf q}_k = {\bf Jp}_k. \eqno(15.1.5.14)] The new residual is [{\bf r}_{k + 1} = {\bf r}_k - \alpha _k {\bf s}_k, \eqno(15.1.5.15)] where [{\bf s}_k = {\bf J}^T {\bf q}_k. \eqno(15.1.5.16)] The next search step which conjugates with the residual is [{\bf p}_{k + 1} = {\bf r}_{k + 1} + \beta _k {\bf p}_k, \eqno(15.1.5.17)] where [\beta _k = - {\bf r}_{k + 1}^T {\bf s}_k / {\bf q}_k^T {\bf q}_k. \eqno(15.1.5.18)]

The process is iterated with increasing k until convergence is reached, when [\left| {{\bf r}_{k + 1} - {\bf r}_k } \right| \rightarrow 0.]
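The iteration of equations (15.1.5.9)[link]–(15.1.5.18)[link] can be sketched in code as follows. This is an illustrative sketch only: the function name is invented, and a small dense J is used for clarity, whereas in the crystallographic application J is never formed explicitly and the products involving J are evaluated by FFTs.

```python
import numpy as np

def conjugate_gradient_lsq(J, eps, tol=1e-12):
    """Solve the normal equations J^T J x = J^T eps for x = delta_rho,
    following equations (15.1.5.9)-(15.1.5.18), without ever forming
    or inverting the normal matrix J^T J."""
    m, n = J.shape
    x = np.zeros(n)                    # (15.1.5.9): null trial solution
    r = J.T @ (eps - J @ x)            # (15.1.5.10): initial residual
    p = r.copy()                       # (15.1.5.11): initial search step
    for _ in range(n):                 # at most n steps for an exact solution
        q = J @ p                      # (15.1.5.14)
        alpha = (r @ p) / (q @ q)      # (15.1.5.13)
        x = x + alpha * p              # (15.1.5.12): update the shift
        s = J.T @ q                    # (15.1.5.16)
        r_new = r - alpha * s          # (15.1.5.15): update the residual
        if np.linalg.norm(r_new - r) < tol:   # convergence test
            r = r_new
            break
        beta = -(r_new @ s) / (q @ q)  # (15.1.5.18)
        p = r_new + beta * p           # (15.1.5.17): next conjugate step
        r = r_new
    return x
```

Because the search vectors are mutually conjugate, the loop terminates in at most n steps, but in practice a satisfactory solution is reached after far fewer iterations.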

The number of iterations required for an exact solution is equal to the number of unknowns, because the search vector at each step is orthogonal to all the previous search vectors. However, a very satisfactory solution can normally be reached after very few iterations, which makes the conjugate-gradient method a very efficient and fast procedure for solving a system of equations. Note that the normal matrix never appears explicitly, although it is implicit in (15.1.5.10)[link] and (15.1.5.16)[link]; both the inversion of the normal matrix and multiplication by it are avoided completely. Most of the calculation comes from the formation of the matrix-vector products in (15.1.5.10)[link], (15.1.5.14)[link] and (15.1.5.16)[link]. These can be expressed as convolutions and can therefore be performed using FFTs, saving considerable time.
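As a minimal sketch of the convolution idea (assuming, for illustration, a one-dimensional periodic map and a known real kernel h; the function names are invented), applying J amounts to a circular convolution with h, and applying J^T to the corresponding correlation, both computable with FFTs:

```python
import numpy as np

def apply_J(p, h):
    """Stand-in for the product Jp: circular convolution of p with the
    kernel h, evaluated in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(p)))

def apply_Jt(q, h):
    """Stand-in for the product J^T q: the transpose of a circular
    convolution is a correlation, i.e. conjugation of h in Fourier space."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(q)))
```

Each product then costs O(N log N) via the FFT instead of the O(N^2) of an explicit matrix-vector multiplication, which is what makes the conjugate-gradient inner loop cheap for a large map.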

The solution for [\delta\rho\left({\bf x}\right)] at the end of the conjugate-gradient iteration is substituted into equation (15.1.5.6)[link] to give a new solution for [\rho\left({\bf x}\right)]. The solution to the system of nonlinear equations (15.1.5.3)[link] is obtained when the Newton–Raphson iteration has converged.
