International Tables for Crystallography, Volume B: Reciprocal space. Edited by U. Shmueli.

International Tables for Crystallography (2006). Vol. B, ch. 3.3, pp. 361–367.


R. Diamond*

MRC Laboratory of Molecular Biology, Hills Road, Cambridge CB2 2QH, England
Correspondence e-mail: rd10@cam.ac.uk

3.3.1.2. Orthogonal (or rotation) matrices


It is a basic requirement for any graphics or molecular-modelling system to be able to control and manipulate the orientation of the structures involved, and this is achieved using orthogonal matrices, which are the subject of these sections.

3.3.1.2.1. General form


If a vector v is expressed in terms of its components resolved onto an axial set of vectors X, Y, Z which are of unit length and mutually perpendicular and right handed in the sense that [({\bf X} \times {\bf Y}) \cdot {\bf Z} = + 1], and if these components are [v_{I}], and if a second set of axes X′, Y′, Z′ is similarly established, with the same origin and chirality, and if v has components [v'_{I}] on these axes then [v'_{I} = a_{Ij} v_{j},] in which [a_{IJ}] is the cosine of the angle between the ith primed axis and the jth unprimed axis. Evidently the elements [a_{IJ}] comprise a matrix R, such that any row represents one of the primed axial vectors, such as X′, expressed as components on the unprimed axes, and each column represents one of the unprimed axial vectors expressed as components on the primed axes. It follows that [{\bi R}^{T} = {\bi R}^{-1}] since elements of the product [{\bi R}^{T} {\bi R}] are scalar products among perpendicular unit vectors.

A real matrix whose transpose equals its inverse is said to be orthogonal.

Since X, Y and Z can simultaneously be superimposed on X′, Y′ and Z′ without deformation or change of scale the relationship is one of rotation, and orthogonal matrices are often referred to as rotation matrices. The operation of replacing the vector v by Rv corresponds to rotating the axes from the unprimed to the primed set with v itself unchanged. Equally, the same operation corresponds to retaining fixed axes and rotating the vector in the opposite sense. The second interpretation is the one more frequently helpful since conceptually it corresponds more closely to rotational operations on objects, and it is primarily in this sense that the following is written.

If three vectors u, v and w form the edges of a parallelepiped, then its volume V is [V = {\bf u} \cdot ({\bf v} \times {\bf w}) = \varepsilon_{ijk} u_{i} v_{j} w_{k}] and if these vectors are transformed by the matrix R as above, then the transformed volume V′ is [V' = \varepsilon_{lmn} u'_{l} v'_{m} w'_{n} = \varepsilon_{lmn} a_{li} a_{mj} a_{nk} u_{i} v_{j} w_{k}.] But the determinant of R is given by [|{\bi R}| \varepsilon_{IJK} = \varepsilon_{lmn} a_{lI} a_{mJ} a_{nK}] so that [V' = |{\bi R}|V] and the determinant of R must therefore be +1 for a transformation which is a pure rotation. Nevertheless orthogonal matrices with determinant −1 exist though these do not describe a pure rotation. They may always be described as the product of a pure rotation and inversion through the origin and are referred to here as improper rotations. In what follows all references to orthogonal matrices refer to those with positive determinant only, unless stated otherwise.

An important general form of an orthogonal matrix in three dimensions was derived as equation (1.1.4.32) and is [{\bi R} = \pmatrix{l^{2} + (m^{2} + n^{2}) \cos \theta &lm(1 - \cos \theta) - n \sin \theta &nl(1 - \cos \theta) + m \sin \theta\cr lm(1 - \cos \theta) + n \sin \theta &m^{2} + (n^{2} + l^{2}) \cos \theta &mn(1 - \cos \theta) - l \sin \theta\cr nl(1 - \cos \theta) - m \sin \theta &mn(1 - \cos \theta) + l \sin \theta &n^{2} + (l^{2} + m^{2}) \cos \theta\cr}] or [{R}_{IJ} = (1 - \cos \theta) l_{I}l_{J} + \delta_{IJ} \cos \theta - \varepsilon_{IJk}l_{k} \sin \theta,] in which l, m and n are the direction cosines of the axis of rotation (which are the same when referred to either set of axes under either interpretation) and θ is the angle of rotation. In this form, and with R operating on column vectors on the right, the sign of θ is such that, when viewed along the rotation axis from the origin towards the point lmn, the object is rotated clockwise for positive θ with a fixed right-handed axial system. If, under the same viewing conditions, the axes are to be rotated clockwise through θ with the object fixed then the components of vectors in the object, on the new axes, are given by R with the same lmn and with θ negated. This is the transpose of R, and if R is constructed from a product, as below, then each factor matrix in the product must be transposed and their order reversed to achieve this. Note that if, for a given rotation, the viewing direction from the origin is reversed, l, m, n and θ are all reversed and the matrix is unchanged.
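The code examples in this and the following subsections are minimal sketches in Python with NumPy; the function names are ours, introduced for illustration, and not drawn from any library. This first sketch builds R from the direction cosines and θ exactly as displayed above and checks the defining properties [{\bi R}^{T} = {\bi R}^{-1}] and [|{\bi R}| = +1].

```python
import numpy as np

def axis_angle_matrix(axis, theta):
    """R_IJ = (1 - cos th) l_I l_J + delta_IJ cos th - eps_IJk l_k sin th."""
    l = np.asarray(axis, float)
    l = l / np.linalg.norm(l)              # direction cosines (l, m, n)
    K = np.array([[0.0, -l[2], l[1]],
                  [l[2], 0.0, -l[0]],
                  [-l[1], l[0], 0.0]])     # K[I, J] = -eps_IJk l_k
    return ((1.0 - np.cos(theta)) * np.outer(l, l)
            + np.cos(theta) * np.eye(3)
            + np.sin(theta) * K)

R = axis_angle_matrix([1.0, 1.0, 1.0], np.deg2rad(120.0))
assert np.allclose(R @ R.T, np.eye(3))     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # proper rotation
```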

Any rotation about a reference axis such that two of the direction cosines are zero is termed a primitive rotation, and it is frequently a requirement to generate or to interpret a general rotation as a product of primitive rotations.

A second important general form is based on Eulerian angles and is the product of three such primitives. It is [\eqalign{ {\bi R} &= \pmatrix{\cos \varphi_{3} &- \sin \varphi_{3} &0\cr \sin \varphi_{3} &\cos \varphi_{3} &0\cr 0 &0 &1\cr} \pmatrix{\cos \varphi_{2} &0 &\sin \varphi_{2}\cr 0 &1 &0\cr - \sin \varphi_{2} &0 &\cos \varphi_{2}\cr} \pmatrix{\cos \varphi_{1} &- \sin \varphi_{1} &0\cr \sin \varphi_{1} &\cos \varphi_{1} &0\cr 0 &0 &1\cr}\cr &= \pmatrix{\cos \varphi_{3} \cos \varphi_{2} \cos \varphi_{1} - \sin \varphi_{3} \sin \varphi_{1} &- \cos \varphi_{3} \cos \varphi_{2} \sin \varphi_{1} - \sin \varphi_{3} \cos \varphi_{1} &\cos \varphi_{3} \sin \varphi_{2}\cr \sin \varphi_{3} \cos \varphi_{2} \cos \varphi_{1} + \cos \varphi_{3} \sin \varphi_{1} &- \sin \varphi_{3} \cos \varphi_{2} \sin \varphi_{1} + \cos \varphi_{3} \cos \varphi_{1} &\sin \varphi_{3} \sin \varphi_{2}\cr - \sin \varphi_{2} \cos \varphi_{1} &\sin \varphi_{2} \sin \varphi_{1} &\cos \varphi_{2}\cr}}] which is commonly employed in four-circle diffractometers for which [\varphi = - \varphi_{1}], [\chi = \varphi_{2}] and [\omega = - \varphi_{3}]. In terms of the fixed-axes–moving-object conceptualization this corresponds to a rotation [\varphi_{1}] about Z followed by [\varphi_{2}] about Y followed by [\varphi_{3}] about Z. In the familiar diffractometer example, when [\chi = 0] the φ and ω axes are both vertical and equivalent. If φ is altered first, then the χ axis is still in the direction of a fixed Y axis, but if ω is altered first it is not. Since all angles are to be rotations about fixed axes to describe a rotating object it follows that it is φ rather than ω which corresponds to [\varphi_{1}]. In general, when rotating parts are mounted on rotating parts the rotation closest to the moved object must be applied first, forming the right-most factor in any multiple transformation, with the rotation closest to the fixed part as the left-most factor, assuming data supplied as column vectors on the right.

Given an orthogonal matrix, in either numerical or analytical form, it may be required to discover θ and the axis of rotation, or to factorize it as a product of primitives. From the first form we see that the vector [v_{I} = \varepsilon_{Ijk}a_{jk},] consisting of the antisymmetric part of R, has elements [-2 \sin \theta] times the direction cosines l, m, n, which establishes the direction immediately, and normalization using [l^{2} + m^{2} + n^{2} = 1] determines [\sin \theta]. Furthermore, the trace is [1 + 2 \cos \theta] so that the quadrant of θ is also fixed. This method fails, however, if the matrix is symmetrical, which occurs if [\theta = \pi]. In this case only the direction of the axis is required, which is given by [l: m: n = (a_{23})^{-1}: (a_{31})^{-1}: (a_{12})^{-1}] for non-zero elements, or [l = \sqrt{{\textstyle{1\over 2}} (a_{11} + 1)}] etc., with the signs chosen to satisfy [a_{12} = 2lm] etc.
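A sketch of this extraction follows; the sign-fixing step for θ = π assumes, for brevity, that the axis has a non-zero X component, whereas a robust version would take the largest diagonal element as its reference.

```python
import numpy as np

def axis_and_angle(R, tol=1e-8):
    """Recover the axis direction cosines and theta from a proper orthogonal R."""
    # v_I = eps_Ijk a_jk has elements -2 sin(theta) (l, m, n)
    v = np.array([R[1, 2] - R[2, 1],
                  R[2, 0] - R[0, 2],
                  R[0, 1] - R[1, 0]])
    s = 0.5 * np.linalg.norm(v)            # |sin(theta)|
    c = 0.5 * (np.trace(R) - 1.0)          # trace = 1 + 2 cos(theta)
    if s > tol:                            # generic case
        return -v / (2.0 * s), np.arctan2(s, c)
    if c > 0.0:                            # theta = 0: axis undefined
        return None, 0.0
    # theta = pi: R = 2 l l^T - I, so l_I = sqrt((a_II + 1) / 2), with
    # relative signs fixed by a_IJ = 2 l_I l_J (assumes l non-zero on X)
    l = np.sqrt(np.maximum(np.diag(R) + 1.0, 0.0) / 2.0)
    if R[0, 1] < 0.0:
        l[1] = -l[1]
    if R[0, 2] < 0.0:
        l[2] = -l[2]
    return l, np.pi

axis, theta = axis_and_angle(axis_angle_matrix([1.0, 2.0, 2.0], 0.9))
assert np.allclose(axis, np.array([1.0, 2.0, 2.0]) / 3.0)
assert np.isclose(theta, 0.9)
```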

The Eulerian form may be factorized by noting that [\tan \varphi_{1} = - a_{32}/a_{31}, \tan \varphi_{3} = a_{23}/a_{13}, \cos \varphi_{2} = a_{33}]. There is then freedom to choose the sign of [\sin \varphi_{2}], but the choice then fixes the quadrants of [\varphi_{1}] and [\varphi_{3}] through the elements in the last row and column, and the primitives may then be constructed. These expressions for [\varphi_{1}] and [\varphi_{3}] fail if [\sin \varphi_{2} = 0], in which case the rotation reduces to a primitive rotation about Z with angle [(\varphi_{1} + \varphi_{3}), \varphi_{2} = 0], or [(\varphi_{3} - \varphi_{1}), \varphi_{2} = \pi].
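In code, with [\sin \varphi_{2}] chosen non-negative; in the degenerate cases the single determinate angle is reported in [\varphi_{3}] with [\varphi_{1} = 0], which is a convention of this sketch rather than of the text.

```python
import numpy as np

def euler_matrix(phi1, phi2, phi3):
    """R = Rz(phi3) Ry(phi2) Rz(phi1), the Eulerian product shown above."""
    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    def Ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return Rz(phi3) @ Ry(phi2) @ Rz(phi1)

def euler_factorize(R, tol=1e-12):
    """Recover (phi1, phi2, phi3), choosing sin(phi2) >= 0."""
    s2 = np.hypot(R[0, 2], R[1, 2])            # |sin(phi2)|
    if s2 < tol:                               # degenerate: rotation about Z
        if R[2, 2] > 0.0:                      # phi2 = 0: only phi1 + phi3 fixed
            return 0.0, 0.0, np.arctan2(R[1, 0], R[0, 0])
        return 0.0, np.pi, np.arctan2(-R[1, 0], -R[0, 0])  # phi3 - phi1 fixed
    phi1 = np.arctan2(R[2, 1], -R[2, 0])       # tan(phi1) = -a32 / a31
    phi3 = np.arctan2(R[1, 2], R[0, 2])        # tan(phi3) =  a23 / a13
    phi2 = np.arctan2(s2, R[2, 2])             # cos(phi2) =  a33
    return phi1, phi2, phi3

angles = (0.3, 0.5, -0.8)
assert np.allclose(euler_factorize(euler_matrix(*angles)), angles)
```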

Eulerian angles are unlikely to be the best choice of primitive angles unless they are directly related to the parameters of a system, as with the diffractometer. It is often more important that the changes to primitive angles should be quasi-linearly related to θ for any small rotations, which is not the case with Eulerian angles when the required rotation axis is close to the X axis. In such a case linearized techniques for solving for the primitive angles will fail. Furthermore, if the required rotation is about Z only [(\varphi_{1} + \varphi_{3})] is determinate.

Quasi-linear relationships between θ and the primitive rotations arise if the primitives are one each about X, Y and Z. Any order of the three factors may be chosen, but the choice must then be adhered to since these factors do not commute. For sufficiently small rotations the primitive rotations are then [l\theta], [m\theta] and [n\theta], whilst for larger θ linearized iterative techniques for finding the primitive rotations are likely to be convergent and well conditioned.

The three-dimensional space of the angles [\varphi_{1}, \varphi_{2}] and [\varphi_{3}] in either case is non-linearly related to θ. In the Eulerian case the worst non-linearities occur at the origin of φ-space. Equally severe non-linearities occur in the quasi-linear case also but are 90° away from the origin and less likely to be troublesome.

Neither of the foregoing general forms of orthogonal matrix has ideally convenient properties. The first is inconvenient because it uses four non-equivalent variables l, m, n and θ, with a linking equation involving l, m and n, so that they cannot be treated as independent variables for analytical purposes. The second form (the product of primitives) is not ideal because the three angles, though independent, are not equivalent, the non-equivalence arising from the non-commutation of the primitive factors. In the remainder of this section we give two further forms of orthogonal matrix which each use three variables which are independent and strictly equivalent, and a third form using four whose squares sum to unity.

The first of these is based on the diagonal and uses the three independent variables p, q, r, from which we construct the auxiliary variables [\eqalign{P &= \pm \sqrt{1 + p - q - r},\ Q = \pm \sqrt{1 - p + q - r},\cr R &= \pm \sqrt{1 - p - q + r},\ S = \pm \sqrt{1 + p + q + r},}] then [\displaylines{{\bi R} = \pmatrix{p &{\textstyle{1\over 2}}[PQ - RS] &{\textstyle{1\over 2}}[PR + QS]\cr {\textstyle{1\over 2}}[PQ + RS] &q &{\textstyle{1\over 2}}[QR - PS]\cr {\textstyle{1\over 2}}[PR - QS] &{\textstyle{1\over 2}}[QR + PS] &r\cr}}] is orthogonal with positive determinant for any of the sixteen sign combinations. The signs of P, Q, R and S are, respectively, the signs of the direction cosines of the rotation axis and of [\sin \theta]. Using also [T = \sqrt{4 - S^{2}}], which may be deemed positive without loss of generality, [\displaylines{l = P/T, m = Q/T, n = R/T, \sin \theta = ST/2,\cr \cos \theta = 1 - T^{2}/2 = S^{2}/2 - 1.}]

Although p, q and r are independent, the point [pqr] is bound, by the requirement that P, Q, R and S be real, to lie within a tetrahedron whose vertices are the points [111], [[1\bar{1}\bar{1}]], [[\bar{1}1\bar{1}]] and [[\bar{1}\bar{1}1]], corresponding to the identity and to 180° rotations about each of the axes. The facts that the identity occurs at a vertex of the feasible region and that [(1 - \cos \theta)], rather than [\sin \theta], is linear on p, q and r in this vicinity make this form suitable only for substantial rotations.
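A sketch of this form; the sign quadruple supplies the chosen signs of P, Q, R and S. With p = q = r = 0 and all signs positive it reproduces the 120° rotation about [111], whose matrix is a cyclic permutation.

```python
import numpy as np

def diagonal_form(p, q, r, signs=(1, 1, 1, 1)):
    """Orthogonal R from its diagonal (p, q, r) and a choice of the signs
    of P, Q, R, S (the signs of the axis direction cosines and of sin(theta))."""
    sP, sQ, sR, sS = signs
    P = sP * np.sqrt(max(1.0 + p - q - r, 0.0))
    Q = sQ * np.sqrt(max(1.0 - p + q - r, 0.0))
    R_ = sR * np.sqrt(max(1.0 - p - q + r, 0.0))
    S = sS * np.sqrt(max(1.0 + p + q + r, 0.0))
    return np.array([[p, (P*Q - R_*S) / 2.0, (P*R_ + Q*S) / 2.0],
                     [(P*Q + R_*S) / 2.0, q, (Q*R_ - P*S) / 2.0],
                     [(P*R_ - Q*S) / 2.0, (Q*R_ + P*S) / 2.0, r]])

# p = q = r = 0 with all signs positive: 120 degrees about [111]
assert np.allclose(diagonal_form(0.0, 0.0, 0.0),
                   np.array([[0.0, 0.0, 1.0],
                             [1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0]]))
```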

The second form consists in defining a rotation vector r with components u, v, w such that [u = lt], [v = mt], [w = nt] with [t = \tan (\theta/2)] and [{\bf r} \cdot {\bf r} = t^{2}]. Then the matrix [\displaylines{{\bi R} = \pmatrix{\displaystyle{1 + u^{2} - v^{2} - w^{2}\over 1 + t^{2}} &\displaystyle{2(uv - w)\over 1 + t^{2}} &\displaystyle{2(uw + v)\over 1 + t^{2}}\cr \noalign{\vskip3pt} \displaystyle{2(uv + w)\over 1 + t^{2}} &\displaystyle{1 - u^{2} + v^{2} - w^{2}\over 1 + t^{2}} &\displaystyle{2(vw - u)\over 1 + t^{2}}\cr \noalign{\vskip3pt} \displaystyle{2(uw - v)\over 1 + t^{2}} &\displaystyle{2(vw + u)\over 1 + t^{2}} &\displaystyle{1 - u^{2} - v^{2} + w^{2}\over 1 + t^{2}}\cr}\cr \noalign{\vskip5pt} R_{IJ} = (1 + t^{2})^{-1} [\delta_{IJ} (1 - u_{k}u_{k}) + 2(u_{I}u_{J} - \varepsilon_{IJl}u_{l})]}] is orthogonal and the variables u, v, w are independent, equivalent and unbounded, and, unlike the previous form, small rotations are quasi-linear on these variables. As examples, [{\bf r} = [100]] gives 90° about X, [{\bf r} = [111]] gives 120° about [111].

R then transforms a vector d according to [{\bi R}{\bf d} = {\bf d} + {2\over 1 + t^{2}} \{({\bf r} \times {\bf d}) + [{\bf r} \times ({\bf r} \times {\bf d})]\}.]
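The following sketch constructs this matrix and checks it both against the axis-angle sketch above and against the expression for Rd.

```python
import numpy as np

def gibbs_matrix(r):
    """R from the rotation vector r = tan(theta/2) (l, m, n):
    R_IJ = [delta_IJ (1 - r.r) + 2 (r_I r_J - eps_IJl r_l)] / (1 + r.r)."""
    r = np.asarray(r, float)
    t2 = r @ r                                 # t^2 = tan^2(theta/2)
    K = np.array([[0.0, -r[2], r[1]],
                  [r[2], 0.0, -r[0]],
                  [-r[1], r[0], 0.0]])         # K[I, J] = -eps_IJl r_l
    return (np.eye(3) * (1.0 - t2) + 2.0 * (np.outer(r, r) + K)) / (1.0 + t2)

# r = [100] is 90 degrees about X
assert np.allclose(gibbs_matrix([1.0, 0.0, 0.0]),
                   axis_angle_matrix([1.0, 0.0, 0.0], np.pi / 2.0))

# the matrix agrees with R d = d + 2 [(r x d) + r x (r x d)] / (1 + t^2)
r, d = np.array([0.2, -0.1, 0.4]), np.array([0.3, -1.0, 2.0])
assert np.allclose(gibbs_matrix(r) @ d,
                   d + 2.0 * (np.cross(r, d) + np.cross(r, np.cross(r, d)))
                   / (1.0 + r @ r))
```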

Multiplying two such matrices together allows us to establish the manner in which the rotation vectors [{\bf r}_{1}] and [{\bf r}_{2}] combine. [{\bf r} = {{\bf r}_{2} + {\bf r}_{1} + {\bf r}_{2} \times {\bf r}_{1}\over 1 - {\bf r}_{2} \cdot {\bf r}_{1}}] for a rotation [{\bf r}_{1}] followed by [{\bf r}_{2}], so that rotations expressed in terms of rotation angles and axes may be compounded into a single such rotation without the need to form and decompose a product matrix.

Note that if [{\bf r}_{1}] and [{\bf r}_{2}] are parallel this reduces to the formula for the tangent of the sum of two angles, and that if [{\bf r}_{1} \cdot {\bf r}_{2} = 1] the combined rotation is always 180°. Note, too, that reversing the order of application of the rotations reverses only the vector product.
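A sketch of this composition rule, checked against the corresponding matrix product (gibbs_matrix is the sketch above):

```python
import numpy as np

def compose(r2, r1):
    """Rotation vector of r1 followed by r2; undefined when r2 . r1 = 1,
    i.e. when the combined rotation is 180 degrees."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    return (r2 + r1 + np.cross(r2, r1)) / (1.0 - r2 @ r1)

r1, r2 = np.array([0.1, 0.2, -0.3]), np.array([-0.4, 0.5, 0.2])
assert np.allclose(gibbs_matrix(r2) @ gibbs_matrix(r1),
                   gibbs_matrix(compose(r2, r1)))
```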

If three rotations [{\bf r}_{1}, {\bf r}_{2}] and [{\bf r}_{3}] are applied successively, [{\bf r}_{1}] first, then their combined rotation is [\eqalign{{\bf r}=\;&[{\bf r}_{3} (1 - {\bf r}_{1} \cdot {\bf r}_{2}) + {\bf r}_{2} (1 + {\bf r}_{3} \cdot {\bf r}_{1}) + {\bf r}_{1} (1 - {\bf r}_{3} \cdot {\bf r}_{2})\cr &+ {{\bf r}_{3} \times {\bf r}_{2} + {\bf r}_{3} \times {\bf r}_{1} + {\bf r}_{2} \times {\bf r}_{1}}]\cr&\times[{1 - {\bf r}_{1} \cdot {\bf r}_{2} - {\bf r}_{2} \cdot {\bf r}_{3} - {\bf r}_{3} \cdot {\bf r}_{1} - {\bf r}_{3} \cdot ({\bf r}_{2} \times {\bf r}_{1})}]^{-1}.}]

Note the irregular pattern of signs in the numerator.

Similar ideas, using a vector of magnitude [\sin (\theta/2)], are developed in Aharonov et al. (1977).

The third form of orthogonal matrix uses four variables, λ, μ, ν and σ, which comprise a four-dimensional vector [\boldrho], such that [\lambda = ls], [\mu = ms], [\nu = ns] with [s = \sin (\theta/2)] and [\sigma = \cos (\theta/2)]. In terms of these variables [{\bi R} = \pmatrix{(\lambda^{2} - \mu^{2} - \nu^{2} + \sigma^{2}) &2(\lambda \mu - \nu \sigma) &2(\lambda \nu + \mu \sigma)\cr 2(\mu \lambda + \nu \sigma) &(-\lambda^{2} + \mu^{2} - \nu^{2} + \sigma^{2}) &2(\mu \nu - \lambda \sigma)\cr 2(\lambda \nu - \mu \sigma) &2(\mu \nu + \lambda \sigma) &(-\lambda^{2} - \mu^{2} + \nu^{2} + \sigma^{2})\cr}.] Two further matrices S and T may be defined (Diamond, 1988), [{\bi S} = \pmatrix{-\sigma &{\phantom-}\nu &-\mu &\lambda\cr -\nu &-\sigma &{\phantom-}\lambda &\mu\cr {\phantom-}\mu &-\lambda &-\sigma &\nu\cr {\phantom-}\lambda &{\phantom-}\mu &{\phantom-}\nu &\sigma\cr} \hbox{ and } {\bi T} = \pmatrix{{\phantom-}\sigma &-\nu &{\phantom-}\mu &\lambda\cr {\phantom-}\nu &{\phantom-}\sigma &-\lambda &\mu\cr -\mu &{\phantom-}\lambda &{\phantom-}\sigma &\nu\cr -\lambda &-\mu &-\nu &\sigma\cr},] which are themselves orthogonal (though S has determinant −1) and which have the property that [{\bi S}^{2} = \pmatrix{{\bi R} &{\bf 0}\cr {\bf 0}^{T} &1\cr}] so that, for example, if homogeneous coordinates are being employed (Section 3.3.1.1.2) [\pmatrix{x'\cr y'\cr z'\cr w\cr} = \pmatrix{-\sigma &{\phantom-}\nu &-\mu &\lambda\cr -\nu &-\sigma &{\phantom-}\lambda &\mu\cr {\phantom-}\mu &-\lambda &-\sigma &\nu\cr {\phantom-}\lambda &{\phantom-}\mu &{\phantom-}\nu &\sigma\cr} \pmatrix{-\sigma &{\phantom-}\nu &-\mu &\lambda\cr -\nu &-\sigma &{\phantom-}\lambda &\mu\cr {\phantom-}\mu &-\lambda &-\sigma &\nu\cr {\phantom-}\lambda &{\phantom-}\mu &{\phantom-}\nu &\sigma\cr} \pmatrix{x\cr y\cr z\cr w\cr}] is a rotation of (x, y, z, w) through the angle θ about the axis (l, m, n). With suitably pipelined hardware this forms an efficient means of applying rotations since the 'overhead' of establishing S is so trivial.
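A sketch of this four-variable form; ρ is unit-normalized by construction, and the check compares it with the axis-angle sketch above.

```python
import numpy as np

def quaternion_matrix(rho):
    """R from rho = (lambda, mu, nu, sigma) with lambda = l sin(theta/2) etc.
    and sigma = cos(theta/2), exactly as displayed above."""
    L, M, N, S = rho
    return np.array([
        [L*L - M*M - N*N + S*S, 2.0*(L*M - N*S),        2.0*(L*N + M*S)],
        [2.0*(M*L + N*S),       -L*L + M*M - N*N + S*S, 2.0*(M*N - L*S)],
        [2.0*(L*N - M*S),       2.0*(M*N + L*S),        -L*L - M*M + N*N + S*S]])

theta = np.deg2rad(70.0)
l = np.array([2.0, -1.0, 2.0]) / 3.0               # unit axis
rho = np.append(l * np.sin(theta / 2.0), np.cos(theta / 2.0))
assert np.allclose(quaternion_matrix(rho), axis_angle_matrix(l, theta))
```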

T has the property that the rotation vector [\boldrho] arising from a concatenation of n rotations is [{\boldrho} = {\bi T}_{n} {\bi T}_{n - 1} \ldots {\bi T}_{1} {\boldrho}_{0},] in which [{\boldrho}_{0}^{T}] is the vector (0, 0, 0, 1) which defines a null rotation. This equation may be used as a basis for factorizing a given rotation into a concatenation of rotations about designated axes (Diamond, 1990a).

Finally, an exact rotation of the vector d may be obtained without using matrices at all by writing [{\bf d} = {\textstyle\sum\limits_{0}^{\infty}} {\bf d}_{n}] in which [{\bf d}_{n} = {1\over n} ({\boldtheta} \times {\bf d}_{n - 1})] and [{\bf d}_{0}] is the initial position which is to be rotated. Here [\boldtheta] is a vector with direction cosines l, m and n, and magnitude equal to the required rotation angle in radians (Diamond, 1966). This method is particularly efficient when [|{\boldtheta}| \ll 1] or when the number of vectors to be transformed is small since the overhead of establishing R is eliminated and the process is simple to program. It is the three-dimensional analogue of the power series for sin θ and cos θ and has the same convergence properties.
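A sketch of the series method; the iteration cap is our safeguard, the terms themselves decaying factorially.

```python
import numpy as np

def rotate_series(theta_vec, d, tol=1e-15, max_terms=60):
    """Rotate d through |theta_vec| about theta_vec using
    d = sum_n d_n with d_n = (theta x d_{n-1}) / n (Diamond, 1966)."""
    theta_vec = np.asarray(theta_vec, float)
    term = np.asarray(d, float).copy()          # d_0
    total = term.copy()
    for n in range(1, max_terms):
        term = np.cross(theta_vec, term) / n    # d_n
        total += term
        if np.linalg.norm(term) <= tol * np.linalg.norm(total):
            break
    return total

v = np.array([1.0, 2.0, 3.0])
th = np.deg2rad(37.0)
assert np.allclose(rotate_series(th * np.array([0.0, 0.0, 1.0]), v),
                   axis_angle_matrix([0.0, 0.0, 1.0], th) @ v)
```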

3.3.1.2.2. Measurement of rotations and strains from coordinates


Given the coordinates of a molecular fragment it is often a requirement to relate the fragment to its image in some standard orientation by a transformation which may be required to be a pure rotation, or may be required to be a combination of rotation and strain. Of the methods reviewed in this section all except (iv) are concerned with pure rotation, ignoring any strain that may be present, and give the best rigid-body superposition. In all these methods, unless inhomogeneous strain is being considered, the best possible superposition is obtained if the centroids of the two images are first brought into coincidence by translation and treated as the origin.

Methods (i) to (v) seek transformations which perform the superposition and impose on these, in various ways, the requirements of orthogonality for the rotational part. All these methods therefore need some defence against the indeterminacy that arises in the general transformation if one or both of the fragments is planar and, if improper rotations are to be excluded, a further defence if the fragment and its image are of opposite chirality. Methods (vi) and (vii) pay no attention to the general transformation and work with variables which are intrinsically rotational in character; they always produce an orthogonal transformation with positive determinant, and planar fragments cause no degeneracy and need no special attention. Even collinear atoms cause no problem: the superposition is performed correctly, though with an arbitrary rotation about the length of the line present in the result. These methods are therefore to be preferred over the earlier ones unless the purpose of the operation is to detect differences of chirality, although this, too, can be detected with a simple test.

In this review we adopt the same notation for all the methods which, unavoidably, means that symbols are used in ways which differ from the original publications. We use the symbol x for the vector set which is to be rotated and X for the vector set whose orientation is not to be altered, and write the residuals as [e_{IA} = D_{Ij} x_{jA} - X_{IA}] and, by choice of origin, [W_{a} x_{Ia} = W_{a} X_{Ia} = 0_{I}] for weights W. The quadratic residual to be minimized is [E = W_{a} e_{ia} e_{ia}] and we define the matrix [M_{IJ} = W_{a} x_{Ia} X_{Ja}] and use l for the direction cosines of the rotation axis.

  • (i) McLachlan's first method (McLachlan, 1972, 1982) is iterative and conceptually the simplest. It sets [D_{IJ} = A_{Ik} R_{kJ}] in which A and R are both orthogonal with R being a current estimate of D and A being an adjustment which, at the beginning of each cycle, has a zero angle associated with it. One iterative cycle estimates a non-trivial A, after which the product AR replaces R. [A_{IJ} = (1 - \cos \theta) l_{I} l_{J} + \delta_{IJ} \cos \theta - \varepsilon_{IJk} l_{k} \sin \theta] and [\left({\partial A_{IJ}\over \partial \theta}\right)_{\theta = 0} = - \varepsilon_{IJk} l_{k},] therefore [\eqalign{\left({\partial E\over \partial \theta}\right)_{\theta = 0} &= 2 W_{a} \left({\partial A_{ij}\over \partial \theta}\right)_{\theta = 0} R_{jk} x_{ka} (A_{il}R_{lm}x_{ma} - X_{ia})\cr &= 2\varepsilon_{ijl} R_{jk} M_{ki} l_{l}.}] For this to vanish for all possible rotation axes l the vector [g_{L} = \varepsilon_{ijL} R_{jk} M_{ki}] must vanish, i.e. at the end of the iteration R must be such that the matrix [N_{JI} = R_{Jk} M_{kI}] is symmetrical. The vector g represents the couple exerted on the rotating body by forces [2 W_{A} (R_{Ij} x_{jA} - X_{IA})] acting at the atoms. Choosing [l_{L} = g_{L}/|{\bf g}|] gives the greatest [|\partial {E}/\partial \theta|_{\theta = 0}] and [(\partial E/\partial \theta)] vanishes when [\tan \theta = {\varepsilon_{ijk} N_{ji} l_{k}\over N_{pq} (l_{p} l_{q} - \delta_{pq})}] in which N is constructed from the current R matrix. A is then constructed from l and this θ and AR replaces R. The process is iterative because a couple about some new axis can appear when rotation about g eliminates the couple about g.

    Note that for each rotation axis l there are two values of θ, differing by π, which reduce [|{\bf g}|] to zero, corresponding to maximum and minimum values of E. The minimum is that which makes [{\partial^{2} E\over \partial \theta^{2}} = 2(\hbox{tr } N - l_{i} N_{ij} l_{j})] positive. Adding π to θ alters R and N and negates this quantity.

    Note, too, that the process is essentially characterized as that which makes the product RM symmetrical with R orthogonal. We return to this point in (iii).

  • (ii) Kabsch's method (Kabsch, 1976, 1978) minimizes E with respect to the nine elements of D, subject to the six constraints [D_{kI} D_{kJ} - \delta_{IJ} = 0_{IJ},] by using an auxiliary function [F = L_{ij} (D_{ki} D_{kj} - \delta_{ij})] in which L is symmetric containing six Lagrange multipliers. The Lagrangian function [G = E + F] then has minima with respect to the elements of D at locations which are dependent, inter alia, on the elements of L. By suitably choosing L a minimum of G may be brought into coincidence with the constrained minimum of E. A minimum of G occurs where [{\partial G\over \partial D_{IJ}} = 2D_{Ik} (S_{Jk} + L_{Jk}) - 2 M_{JI} = 0_{IJ}] and the [9 \times 9] matrix [{\partial^{2} G\over \partial D_{MK} \partial D_{IJ}} = 2\delta_{MI} (S_{JK} + L_{JK})] is positive definite, block diagonal, and has [S_{JK} = W_{a} x_{Ja} x_{Ka}] which is symmetrical. Thus L must be chosen so as to make the symmetric matrix [({\bi S} + {\bi L})] such that [{\bi D} ({\bi S} + {\bi L})^{T} = {\bi M}^{T}] with D orthogonal, or [{\bi RN} = {\bi M}^{T}] with R replacing D since we are now confined to the orthogonal case, and N is symmetric and positive definite. (A numerical sketch of this solution, and of that of method (vii), is given after this list.)

  • (iii) Comparison of the Kabsch and McLachlan methods. Using the initials of these authors as subscripts, we have seen that the Kabsch solution involves solving [{\bi R}_{\rm WK} {\bi N}_{\rm WK} = {\bi M}^{T}] for an orthogonal matrix [{\bi R}_{\rm WK}] given that [{\bi N}_{\rm WK}] is symmetrical and positive definite. Thus [{\bi MM}^{T} = {\bi N}_{\rm WK}^{T} {\bi R}_{\rm WK}^{T} {\bi R}_{\rm WK} {\bi N}_{\rm WK} = {\bi N}_{\rm WK}^{2}] and [{\bi R}_{\rm WK} = {\bi M}^{T} ({\bi MM}^{T})^{-1/2}.]

    By comparison, the McLachlan treatment leads to an orthogonal R matrix satisfying [{\bi R}_{\rm ADM} = {\bi N}_{\rm ADM} {\bi M}^{-1}] in which [{\bi N}_{\rm ADM}] is also symmetric and positive definite, which similarly leads to [{\bi R}_{\rm ADM} = ({\bi M}^{T} {\bi M})^{1/2} {\bi M}^{-1}.]

    These seemingly different expressions for [{\bi R}_{\rm WK}] and [{\bi R}_{\rm ADM}] are, in fact, equal, as the following shows [{\bi R}_{\rm WK} = {\bi R}_{\rm ADM} {\bi R}_{\rm ADM}^{-1} {\bi R}_{\rm WK} = {\bi R}_{\rm ADM} {\bi MN}_{\rm ADM}^{-1} {\bi M}^{T} {\bi N}_{\rm WK}^{-1},] therefore [\eqalign{ {\bi R}_{\rm WK}^{T} {\bi R}_{\rm WK} &= {\bi I}\cr &= {\bi N}_{\rm WK}^{-1} {\bi MN}_{\rm ADM}^{-1} {\bi M}^{T} {\bi R}_{\rm ADM}^{T} {\bi R}_{\rm ADM} {\bi MN}_{\rm ADM}^{-1} {\bi M}^{T} {\bi N}_{\rm WK}^{-1}.}] Multiplying on both sides by [{\bi N}_{\rm WK}] gives [{\bi N}_{\rm WK}^{2} = ({\bi MN}_{\rm ADM}^{-1} {\bi M}^{T})^{2},] and since both N matrices are positive definite [{\bi N}_{\rm WK} = {\bi MN}_{\rm ADM}^{-1} {\bi M}^{T}] and conversely [{\bi N}_{\rm ADM} = {\bi M}^{T} {\bi N}_{\rm WK}^{-1} {\bi M},] therefore [{\bi R}_{\rm WK} = {\bi M}^{T} {\bi M}^{T-1} {\bi N}_{\rm ADM} {\bi M}^{-1} = {\bi R}_{\rm ADM}.]

  • (iv) Diamond's first method. This method (Diamond, 1976a) differs from the previous ones in that the transformation D is allowed to be a general transformation which is then factorized into the product of an orthogonal matrix R and a symmetrical matrix T. The transformation of x to fit X is thus interpreted as the combination of homogeneous strain and pure rotation in which x is subjected to strain and the result is rotated. [\eqalign{{\bi D} &= {\bi RT}\cr {\bi T}^{2} &= {\bi D}^{T} {\bi D}\cr {\bi T} &= ({\bi D}^{T} {\bi D})^{1/2}\cr {\bi R} &= {\bi D} ({\bi D}^{T} {\bi D})^{-1/2}.}] Furthermore, the solution for D is [{\bi D} = {\bi M}^{T} {\bi S}^{-1}] (in the notation of Kabsch), so that [{\bi R} = {\bi M}^{T} {\bi S}^{-1} ({\bi S}^{-1} {\bi MM}^{T} {\bi S}^{-1})^{-1/2}] which may be compared with the results of the previous paragraph.

    Although this R matrix by itself (i.e. applied without T) does not produce the best rotational superposition (i.e. smallest E), it is the one which exactly superposes the only three vectors in x whose mutual dispositions are conserved, on their equivalents in X, so that the rotation so found is arguably the best defined one.

    Alternatives based on [{\bi D} = {\bi TR}], [{\bi D}^{-1} = {\bi RT}], [{\bi D}^{-1} = {\bi TR}] are all easily derived, and these ideas are extended by Diamond (1976a) to include non-homogeneous strains also.

  • (v) McLachlan's second method. This method (McLachlan, 1979) is based on the properties of the [6\times 6] matrix [\pmatrix{{\bi0} &{\bi M}\cr {\bi M}^{T} &{\bi 0}\cr}] and is immune to singularity of M. If p and q are three-dimensional vectors such that [({\bf p}^{T}, {\bf q}^{T})] is an eigenvector of this matrix then [\pmatrix{{\bi 0} &{\bi M}\cr {\bi M}^{T} &{\bi 0}\cr} \pmatrix{{\bf p}\cr {\bf q}\cr} = \pmatrix{{\bi M}{\bf q}\cr {\bi M}^{T}{\bf p}\cr} = \pmatrix{{\bf p}\lambda\cr {\bf q}\lambda\cr}.]

    If q is negated the second equality is maintained provided λ is also negated. Therefore an orthogonal [6\times 6] matrix [\pmatrix{{\bi H} &{\bi H}\cr {\bi K} &{\bi -K}\cr}] (consisting of [3\times 3] partitions) exists for which [\pmatrix{{\bi H}^{T} &{\bi K}^{T}\cr {\bi H}^{T} &{\bi -K}^{T}\cr} \pmatrix{{\bi 0} &{\bi M}\cr {\bi M}^{T} &{\bi 0}\cr} \pmatrix{{\bi H} &{\bi H}\cr {\bi K} &{\bi -K}\cr} = \pmatrix{{\boldLambda} &{\bi 0}\cr {\bi 0} &{-\boldLambda}\cr}] in which [\boldLambda] is diagonal and contains non-negative eigenvalues. The reverse transformation shows that [{\bi M} = 2 {\bi H\boldLambda K}^{T}] and multiplying the eigenvectors together gives [{\bi H}^{T} {\bi H} = {\bi K}^{T} {\bi K} = {\textstyle{1\over 2}} {\bi I} = {\bi HH}^{T} = {\bi KK}^{T}.] Therefore [2{\bi KH}^{T} {\bi M} = 4{\bi KH}^{T} {\bi H}\boldLambda {\bi K}^{T} = 2{\bi K}\boldLambda {\bi K}^{T},] but [2{\bi KH}^{T}] is orthogonal and [2{\bi K}\boldLambda {\bi K}^{T}] is symmetrical, therefore [by paragraphs (i) and (iii) above] [2{\bi KH}^{T}] is the required rotation. Similarly, forming [\displaylines{{\bi M}^{T} = 2{\bi K}\boldLambda {\bi H}^{T}\cr \noalign{\vskip5pt} 2{\bi M}^{T} {\bi H}\boldLambda^{-1} {\bi H}^{T} = 4{\bi K}\boldLambda {\bi H}^{T} {\bi H}\boldLambda^{-1} {\bi H}^{T} = 2{\bi KH}^{T}}] corresponds to the Kabsch formulation [paragraphs (ii) and (iii)] since [2{\bi H}\boldLambda^{-1} {\bi H}^{T}] is symmetrical and the same rotation, [2{\bi KH}^{T}], appears.

    Note that the determinant of the orthogonal matrix so found is twice the product of the determinants of H and of K, and since the positive eigenvalues are collected into [\boldLambda] it follows that the sign of the determinant of M is the same as the sign of the determinant of the resulting orthogonal matrix. If this is negative it means that the best superposition is obtained if one vector set is inverted and that x and X are of opposite chirality.

    Expanding the expression for E, the weighted sum of squares of errors, for an orthogonal transformation shows that this is least when the trace of the product RM is greatest. In this treatment [\hbox{tr} ({\bi RM}) = \hbox{tr} (2{\bi KH}^{T}\cdot 2{\bi H}\boldLambda {\bi K}^{T}) = \hbox{tr} (2{\bi K}\boldLambda {\bi K}^{T}) = \hbox{tr} (\boldLambda).] Hence, if the eigenvalues in [\boldLambda] and −[\boldLambda] are arranged in decreasing order of modulus, and if the determinant of M is negative, then exchanging the third and sixth columns of [\pmatrix{{\bi H} &{\bi H}\cr {\bi K} &{\bi -K}\cr}] produces a product [2{\bi KH}^{T}] with positive determinant (i.e. a proper rotation) at minimum cost in residual. Similarly, if M is singular and one or more eigenvalues in [\boldLambda] vanishes it is necessary only to complete an orthonormal set of eigenvectors such that the determinants of H and K have the same sign.

  • (vi) MacKay's method. MacKay (1984) was the first to consider the rotational superposition problem in terms of the vector r of Section 3.3.1.2.1. Using quaternion algebra he showed that if a vector x is rotated to [{\bf X} = {\bi R}{\bf x}] then [({\bf X} - {\bf x}) = {\bf r}\times ({\bf X} + {\bf x}),] where [|{\bf r}| = \tan (\theta/2)] and the direction of r is the axis of rotation, as may also be shown from elementary considerations. MacKay then solves this for the vector r by least squares given the vector pairs X and x. The individual errors are [e_{IA} = \varepsilon_{Ijk} r_{j} (X_{kA} + x_{kA}) - (X_{IA} - x_{IA})] and [E = W_{a} e_{ia} e_{ia}.] Setting [\partial E/\partial r_{P} = 0_{P}] gives [\displaylines{W_{a} \varepsilon_{iPk} \varepsilon_{ilm} r_{l} (X_{ka} + x_{ka}) (X_{ma} + x_{ma})\cr = W_{a} \varepsilon_{iPk} (X_{ka} + x_{ka}) (X_{ia} - x_{ia})}] which reduces to [2{\bf V} = - ({\bi Q} + {\bi Q}_{0}){\bf r}] in which [\eqalign{{\bi Q} &= {\bi M} + {\bi M}^{T} - 2{\bi I} \hbox{tr } {\bi M}\cr {\bi Q}_{0} &= {\bi S} + {\bi S}' - {\bi I} (\hbox{tr } {\bi S} + \hbox{tr}\; {\bi S}')\cr V_{I} &= \varepsilon_{Ijk} M_{jk}\cr S_{IJ} &= W_{a} x_{Ia} x_{Ja}\cr S'_{IJ} &= W_{a} X_{Ia} X_{Ja}.}] Thus a direct solution for r is obtained, [{\bf r} = -2({\bi Q}_{0} + {\bi Q})^{-1}{\bf V},] the elements of which are u, v and w, and may be used to construct the orthogonal matrix as in Section 3.3.1.2.1. [{\bi Q} + {\bi Q}_{0}] may be obtained directly from [{\bi X} + {\bi x}].

    If the requisite rotation is 180°, [({\bi Q}_{0} + {\bi Q})] is singular and cannot be inverted. In this case any row or column of the adjoint of [({\bi Q}_{0} + {\bi Q})] is a vector in the direction of the axis. Normalizing this vector to unity, giving l, gives the requisite orthogonal matrix as [{\bi R} = 2{\bi ll}^{T} - {\bi I}.]

    Note that MacKay's residual E is quadratic in r. E therefore has one minimum and no maximum, and the minimum is reached on the first cycle of least squares. By contrast, the objective function E that is minimized in methods (i), (ii), (v) and (vii) has one minimum, one maximum and two saddle points in the space of the vector r, as shown in (vii).

    It may be shown (Diamond, 1989) that if MacKay's solution vector r is denoted by [{\bf r}_{M}] and that given by the other methods [except (iv)] by [{\bf r}_{O}] then [{\bf r}_{M} = {\bf r}_{O} - {\bi A}^{-1} {\bi B}{\bf r}_{O}] in which A and B are real symmetric, positive semi-definite. A is positive definite unless all the individual vector sums [({\bf X} + {\bf x})] are parallel, as can happen when the best rotation is 180°. Thus the MacKay method only gives the same result as the other methods if:

    • (a) the initial orientation is optimal, for then [{\bf r}_{O} = {\bf 0}], or

    • (b) perfect fitting is possible, for then [{\bi B} = {\bf 0}], or

    • (c) all the residual vectors (after fitting by [{\bf r}_{O}]) are parallel to [{\bf r}_{O}], for then B is singular such that [{\bi B}{\bf r}_{O} = {\bf 0}]. In general, [|{\bf r}_{M}|\leq |{\bf r}_{O}|]. [{\bf r}_{O}] may be found by iterating [{\bf r}_{M}], but x must be replaced by Rx on each iteration.

  • (vii) Diamond's second method. This is closely related to MacKay's method, but uses a four-dimensional vector [\boldrho] with components λ, μ, ν and σ in which λ, μ and ν are the direction cosines of the rotation axis multiplied by [\sin (\theta/2)] and σ is [\cos (\theta/2)]. In terms of such a vector Diamond (1988) showed that [E = E_{0} - 2\boldrho^{T} {\bi P}\boldrho] in which E is the weighted sum of squares of coordinate differences, as before, [E_{0}] is its value before any rotation is applied and P is the matrix [{\bi P} = \pmatrix{{\bi Q} &{\bf V}\cr {\bf V}^{T} &0\cr}.] The rotation matrix R corresponding to the vector [\boldrho] is then the last of the forms for R given in Section 3.3.1.2.1. (The resulting eigenproblem is realized in the code sketch following this list.)

    The minimum E is therefore [E_{0}] minus twice the largest eigenvalue of P since [\boldrho^{T} \boldrho = 1], and a stationary value of E occurs when [\boldrho] is any of the four eigenvectors of P. E thus has a maximum, a minimum and two saddle points, in general, and its value may be determined before any coordinates are transformed. Diamond also showed that the orientations giving these stationary values are related by the operations of 222 symmetry. Equivalent results have also been obtained by Kearsley (1989).

    As an alternative to solving a [4\times 4] eigenproblem, Diamond also showed that the vector r, as in MacKay's solution, may be obtained by iterating [\eqalign{\alpha_{0} &= E_{0}/2\cr {\bf r}_{n} &= (\alpha_{n} {\bi I - Q})^{-1} {\bf V}\cr \alpha_{n+1} &= {{\bf V} \cdot {\bf r}_{n} + \alpha_{n} r_{n}^{2}\over 1 + r_{n}^{2}}}] which has the property that if X and x are exactly superposable then [\alpha_{0}] is the exact solution and no iteration is necessary. If X and x are similar but not exactly superposable then a small number of iterations may be required to reach a stable r vector, though the matrix [{\bi Q}_{0}] is not required. As in MacKay's solution, [(\alpha {\bi I} - {\bi Q})] is singular at the end of the iteration if the required rotation is 180°, but the MacKay and Diamond methods both have the advantage that improper rotations are never generated by these means, and methods based on P and [\boldrho] rather than Q and r are trouble-free for 180° rotations. The iterative loop in this method does not require Rx to be redetermined on each cycle.

    Finally, it may be shown that if [p_{1}, p_{2}, p_{3}, p_{4}] are the eigenvalues of P arranged in descending order and [p_{1} - p_{2} - p_{3} + p_{4}] is negative, then a closer superposition may be obtained by reversing the chirality of one of the vector sets, and the R matrix constructed from [{\boldrho_{4}}] optimally superimposes Rx onto − X, the enantiomer of X (Diamond, 1990b).
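As promised in (ii) and (vii) above, the following sketch renders both families of solution numerically: the closed form of methods (ii), (iii) and (v) through a singular value decomposition (playing the role of the factorization [{\bi M} = 2{\bi H}\boldLambda {\bi K}^{T}]) with the chirality fix of (v), and method (vii) as a [4\times 4] eigenproblem. These are our NumPy renderings under the conventions of this section, not the authors' implementations; x and X are 3 × n arrays already centred on their weighted centroids, and quaternion_matrix and axis_angle_matrix are the sketches from Section 3.3.1.2.1.

```python
import numpy as np

def best_rotation_svd(x, X, W=None):
    """Proper rotation R minimizing E = sum_a W_a |R x_a - X_a|^2, i.e.
    maximizing tr(RM) with M_IJ = W_a x_Ia X_Ja  [methods (ii)/(iii)/(v)]."""
    W = np.ones(x.shape[1]) if W is None else np.asarray(W, float)
    M = (x * W) @ X.T                      # M[I, J] = sum_a W_a x_Ia X_Ja
    U, s, Vt = np.linalg.svd(M.T)
    d = np.sign(np.linalg.det(U @ Vt))     # -1 signals opposite chirality
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def best_rotation_quaternion(x, X, W=None):
    """Method (vii): the eigenvector of P with the largest eigenvalue is
    rho = (lambda, mu, nu, sigma), from which R follows directly."""
    W = np.ones(x.shape[1]) if W is None else np.asarray(W, float)
    M = (x * W) @ X.T
    Q = M + M.T - 2.0 * np.eye(3) * np.trace(M)
    V = np.array([M[1, 2] - M[2, 1],       # V_I = eps_Ijk M_jk
                  M[2, 0] - M[0, 2],
                  M[0, 1] - M[1, 0]])
    P = np.zeros((4, 4))
    P[:3, :3] = Q
    P[:3, 3] = P[3, :3] = V
    vals, vecs = np.linalg.eigh(P)         # eigenvalues in ascending order
    return quaternion_matrix(vecs[:, -1])

# exactly superposable test data: both methods recover the applied rotation
x = np.array([[0.0, 1.2, -0.7, 2.0],
              [0.0, 0.3, 1.1, -1.4],
              [0.0, -0.5, 0.4, 0.1]])
x -= x.mean(axis=1, keepdims=True)         # centre on the centroid
R0 = axis_angle_matrix([1.0, 2.0, 2.0], 0.9)
X = R0 @ x
assert np.allclose(best_rotation_svd(x, X), R0)
assert np.allclose(best_rotation_quaternion(x, X), R0)
```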

3.3.1.2.3. Orthogonalization of impure rotations


There are several ways of deriving a strictly orthogonal matrix from a given approximately orthogonal matrix, among them the following.

  • (i) The Gram–Schmidt process. This is probably the simplest and the easiest to compute. If the given matrix consists of three column vectors [{\bf v}_{1}, {\bf v}_{2}] and [{\bf v}_{3}] (later referred to as primers) which are to be replaced by three column vectors [{\bf u}_{1}, {\bf u}_{2}] and [{\bf u}_{3}] then the process is [\eqalign{{\bf u}_{1} &= {\bf v}_{1}/|{\bf v}_{1}|\cr {\bf u}_{2} &= {\bf v}_{2} - ({\bf u}_{1} \cdot {\bf v}_{2}) {\bf u}_{1}\cr {\bf u}_{2} &= {\bf u}_{2}/|{\bf u}_{2}|\cr {\bf u}_{3} &= {\bf v}_{3} - ({\bf u}_{1} \cdot {\bf v}_{3}) {\bf u}_{1} - ({\bf u}_{2} \cdot {\bf v}_{3}) {\bf u}_{2}\cr {\bf u}_{3} &= {\bf u}_{3}/|{\bf u}_{3}|.}]

    As successive vectors are established, each vector v has subtracted from it its components in the directions of established vectors, and the remainder is normalized. The method will fail at the normalization step if the vectors v are not linearly independent. Otherwise, the process may be extended to any number of dimensions.

    The weakness of the method is that, though [{\bf u}_{1}] differs from [{\bf v}_{1}] only in scale, [{\bf u}_{N}] may differ grossly from [{\bf v}_{N}] as the various columns are not treated equivalently.

  • (ii) A preferable method, which treats all vectors equivalently, is to replace M iteratively by [{\textstyle{1\over 2}} ({\bi M} + {\bi M}^{T-1})]; a short sketch of this iteration in code is given after this list.

    Defining the residual matrix E as [{\bi E} = {\bi MM}^{T} - {\bi I},] then on each iteration E is replaced by [{\bi E}^{2} ({\bi MM}^{T})^{-1}/4] and convergence necessarily ensues.

  • (iii) A third method resolves M into its symmetric and antisymmetric parts [{\bi S} = {\textstyle{1\over 2}} ({\bi M} + {\bi M}^{T}),\quad {\bi A} = {\textstyle{1\over 2}} ({\bi M} - {\bi M}^{T}),\quad {\bi M} = {\bi S} + {\bi A}] and constructs an orthogonal matrix for which only S is altered. A determines l, m, n and θ as shown in Section 3.3.1.2.1, and from these a new S may be constructed.

  • (iv) A fourth method is to treat the general matrix M as a combination of pure strain and pure rotation. Setting [{\bi M} = {\bi RT}] with R orthogonal and T symmetrical gives [{\bi T} = ({\bi M}^{T} {\bi M})^{1/2}, \quad {\bi R} = {\bi M} ({\bi M}^{T} {\bi M})^{-1/2}.]

    The rotation so found is the one which exactly superposes those three mutually perpendicular directions which remain mutually perpendicular under the transformation M.

    [{\bi T} - {\bi I}] is then the strain tensor of an unrotated body.

    Writing [{\bi M} = {\bi TR}], [{\bi T} = ({\bi MM}^{T})^{1/2}], [{\bi R} = ({\bi MM}^{T})^{-1/2} {\bi M}] may also be useful, in which [{\bi T} - {\bi I}] is the strain tensor of a rotated body. See also Section 3.3.1.2.2 (iv).
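As noted in (ii) above, the averaging iteration is a few lines of code; this sketch adds an iteration cap as a safeguard (our addition) and works in any dimension. The residual E = MMᵀ − I is squared on each pass, so convergence is quadratic.

```python
import numpy as np

def orthogonalize(M, tol=1e-14, max_iter=50):
    """Method (ii): replace M by (M + inv(M)^T) / 2 until M M^T - I vanishes."""
    M = np.asarray(M, float).copy()
    for _ in range(max_iter):
        E = M @ M.T - np.eye(M.shape[0])   # residual matrix
        if np.abs(E).max() < tol:
            break
        M = 0.5 * (M + np.linalg.inv(M).T)
    return M

# a slightly impure rotation, e.g. from accumulated round-off
R = axis_angle_matrix([1.0, 2.0, 2.0], 0.7)   # sketch from Section 3.3.1.2.1
M = R + 1e-4 * np.random.default_rng(0).standard_normal((3, 3))
Q = orthogonalize(M)
assert np.allclose(Q @ Q.T, np.eye(3))
```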

3.3.1.2.4. Eigenvalues and eigenvectors of orthogonal matrices


If R is the orthogonal matrix given in Section 3.3.1.2.1 in terms of the direction cosines l, m and n of the axis of rotation, then it is clear that (l, m, n) is an eigenvector of R with eigenvalue unity because [{\bi R} \pmatrix{l\cr m\cr n\cr} = \pmatrix{l\cr m\cr n\cr}.]

Consideration of the determinant [|{\bi R} - \lambda {\bi I}| = 0] shows that the sum of the three eigenvalues is [1 + 2 \cos \theta] and that their product is unity. Hence the three eigenvalues are 1, [e^{i\theta}] and [e^{-i\theta}]. Since R is real, its product with any real vector is also real, yet its product with an eigenvector must, in general, be complex. Thus the eigenvectors must themselves be complex.
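These eigenvalues are easily confirmed numerically (axis_angle_matrix is the sketch from Section 3.3.1.2.1):

```python
import numpy as np

theta = np.deg2rad(50.0)
R = axis_angle_matrix([2.0, 1.0, 2.0], theta)
# eigenvalues of a proper rotation: 1, exp(+i theta), exp(-i theta)
expected = np.array([1.0, np.exp(1j * theta), np.exp(-1j * theta)])
assert np.allclose(np.sort_complex(np.linalg.eigvals(R)),
                   np.sort_complex(expected))
```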

The remaining two eigenvectors u may be found using the results of Section 3.3.1.2.1 (q.v.) according to [{\bi R}{\bf u} = {\bf u} + {2\over 1 + t^{2}} \{({\bf r} \times {\bf u}) + [{\bf r} \times ({\bf r} \times {\bf u})]\} = {\bf u} e^{\pm i\theta} = {\bf u} {1 \pm it\over 1 \mp it},] which is solved by any vector of the form [{\bf u} = {\bf l} \times {\bf v} \mp i{\bf l} \times ({\bf l} \times {\bf v})] for any real vector v, where l is the normalized axis vector, [{\bf l}t = {\bf r}], [|{\bf l}| = 1], [t = \tan (\theta /2)]. Eigenvectors for the two eigenvalues may have unrelated v vectors though the sign choices are coupled. If the vector v is rotated about l through an angle φ the corresponding vector u is multiplied by [e^{-i\varphi}] and remains an eigenvector. Using superscript signs to denote the sign of θ in the eigenvalue with which each vector is associated, the matrix [{\bi U} = ({\bf l},\ {\bf u}^{+},\ {\bf u}^{-})] has the properties that [{\bi RU} = {\bi U} \pmatrix{1 &0 &0\cr 0 &e^{i\theta} &0\cr 0 &0 &e^{-i\theta}\cr}] and [{\bi U}^{* T} {\bi U} = \pmatrix{1 &0 &0\cr 0 &2|{\bf l} \times {\bf v}^{+}|^{2} &0\cr 0 &0 &2|{\bf l} \times {\bf v}^{-}|^{2}\cr}] which places restrictions on v if this is to be the identity. Note that the 23 element vanishes even in the absence of any relationship between [{\bf v}^{+}] and [{\bf v}^{-}].

A convenient form for U, symmetrical in the elements of l, is obtained by setting [{\bf v}^{+} = {\bf v}^{-} = [{111}]] and is [{\bi U} = \pmatrix{l &\{(m - n) - i[l(l + m + n) - 1]\}/d &\{(m - n) + i[l(l + m + n) - 1]\}/d\cr m &\{(n - l) - i[m(l + m + n) - 1]\}/d &\{(n - l) + i[m(l + m + n) - 1]\}/d\cr n &\{(l - m) - i[n(l + m + n) - 1]\}/d &\{(l - m) + i[n(l + m + n) - 1]\}/d\cr}] in which the normalizing denominator is given by [d = 2 \sqrt{1 - lm - mn -nl}.]

References

Aharonov, Y., Farach, H. A. & Poole, C. P. (1977). Non-linear vector product to describe rotations. Am. J. Phys. 45, 451–454.
Diamond, R. (1966). A mathematical model-building procedure for proteins. Acta Cryst. 21, 253–266.
Diamond, R. (1976a). On the comparison of conformations using linear and quadratic transformations. Acta Cryst. A32, 1–10.
Diamond, R. (1988). A note on the rotational superposition problem. Acta Cryst. A44, 211–216.
Diamond, R. (1989). A comparison of three recently published methods for superimposing vector sets by pure rotation. Acta Cryst. A45, 657.
Diamond, R. (1990a). On the factorisation of rotations with special reference to diffractometry. Proc. R. Soc. London Ser. A, 428, 451–472.
Diamond, R. (1990b). Chirality in rotational superposition. Acta Cryst. A46, 423.
Kabsch, W. (1976). A solution for the best rotation to relate two sets of vectors. Acta Cryst. A32, 922–923.
Kabsch, W. (1978). A discussion of the solution for the best rotation to relate two sets of vectors. Acta Cryst. A34, 827–828.
Kearsley, S. K. (1989). On the orthogonal transformation used for structural comparisons. Acta Cryst. A45, 208–210.
Mackay, A. L. (1984). Quaternion transformation of molecular orientation. Acta Cryst. A40, 165–166.
McLachlan, A. D. (1972). A mathematical procedure for superimposing atomic coordinates of proteins. Acta Cryst. A28, 656–657.
McLachlan, A. D. (1979). Gene duplications in the structural evolution of chymotrypsin. Appendix: Least squares fitting of two structures. J. Mol. Biol. 128, 49–79.
McLachlan, A. D. (1982). Rapid comparison of protein structures. Acta Cryst. A38, 871–873.







































