Chapter 16.2. The maximum-entropy method
G. Bricogne

Laboratory of Molecular Biology, Medical Research Council, Cambridge CB2 2QH, England

The maximum-entropy principle is discussed in a general context, and its adaptation to crystallography is described. There is an intimate connection between the maximum-entropy method and an enhancement of the probabilistic techniques of conventional direct methods known as the `saddlepoint method'.

Keywords: direct methods; Jaynes' maximum-entropy principle; maximum entropy; random-atom model; recentring; saddlepoint method; source entropy.
The modern concept of entropy originated in the field of statistical thermodynamics, in connection with the study of large material systems in which the number of internal degrees of freedom is much greater than the number of externally controllable degrees of freedom. This concept played a central role in the process of building a quantitative picture of the multiplicity of microscopic states compatible with given macroscopic constraints, as a measure of how much remains unknown about the detailed fine structure of a system when only macroscopic quantities attached to that system are known. The collection of all such microscopic states was introduced by Gibbs under the name `ensemble', and he deduced his entire formalism for statistical mechanics from the single premise that the equilibrium picture of a material system under given macroscopic constraints is dominated by that configuration which can be realized with the greatest combinatorial multiplicity (i.e. which has maximum entropy) while obeying these constraints.
The notions of ensemble and the central role of entropy remained confined to statistical mechanics for some time, then were adopted in new fields in the late 1940s. Norbert Wiener studied Brownian motion, and subsequently time series of random events, by similar methods, considering in the latter an ensemble of messages, i.e. `a repertory of possible messages, and over that repertory a measure determining the probability of these messages' (Wiener, 1949). At about the same time, Shannon created information theory and formulated his fundamental theorem relating the entropy of a source of random symbols to the capacity of the channel required to transmit the ensemble of messages generated by that source with an arbitrarily small error rate (Shannon & Weaver, 1949). Finally, Jaynes (1957, 1968, 1983) realized that the scope of the principle of maximum entropy could be extended far beyond the confines of statistical mechanics or communications engineering, and could provide the basis for a general theory (and philosophy) of statistical inference and `data processing'.
The relevance of Jaynes' ideas to probabilistic direct methods was investigated by the author (Bricogne, 1984). It was shown that there is an intimate connection between the maximum-entropy method and an enhancement of the probabilistic techniques of conventional direct methods known as the `saddlepoint method', some aspects of which have already been dealt with in Section 1.3.4.5.2 in Chapter 1.3 of IT B (Bricogne, 2001).
Statistical communication theory uses as its basic modelling device a discrete source of random symbols, which at discrete times t = 1, 2, 3, …, randomly emits a `symbol' taken out of a finite alphabet of n symbols {s_1, s_2, …, s_n}. Sequences of such randomly produced symbols are called `messages'.
An important numerical quantity associated with such a discrete source is its entropy per symbol H, which gives a measure of the amount of uncertainty involved in the choice of a symbol. Suppose that successive symbols are independent and that symbol i has probability q_i. Then the general requirements that H should be a continuous function of the q_i, should increase with increasing uncertainty, and should be additive for independent sources of uncertainty, suffice to define H uniquely as

\[ H = -k \sum_{i=1}^{n} q_i \log q_i, \]

where k is an arbitrary positive constant [Shannon & Weaver (1949), Appendix 2] whose value depends on the unit of entropy chosen. In the following we use a unit such that k = 1.
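As a minimal numerical sketch of this definition, the short Python fragment below (an illustrative aside; the distributions are arbitrary) evaluates H for a uniform and for a biased four-symbol source, showing that the uniform case attains the largest value, log n.

```python
import math

def entropy(q, k=1.0):
    """Entropy per symbol, H = -k * sum_i q_i log q_i (natural logarithms, k = 1)."""
    return -k * sum(p * math.log(p) for p in q if p > 0.0)

uniform = [0.25, 0.25, 0.25, 0.25]   # all four symbols equally probable
biased  = [0.70, 0.10, 0.10, 0.10]   # one symbol strongly preferred

print(entropy(uniform), math.log(4))  # equal: the uniform source attains H_max = log n
print(entropy(biased))                # strictly smaller than log 4
```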
These definitions may be extended to the case where the alphabet is a continuous space S endowed with a uniform measure μ: in this case the entropy per symbol is defined as

\[ H(q) = -\int_{S} q(s) \log q(s)\, \mathrm{d}\mu(s), \]

where q is the probability density of the distribution of symbols with respect to measure μ.
Two important theorems [Shannon & Weaver (1949), Appendix 3] provide a more intuitive grasp of the meaning and importance of entropy: (1) the entropy per symbol H measures the average amount of information (e.g. the number of binary digits, for a suitable choice of the constant k) needed to encode each symbol of the messages emitted by the source; (2) for large N, the messages of length N emitted by the source fall into two classes: a collection of `reasonably probable' messages, all of comparable probability, whose number is approximately

\[ \exp(NH), \qquad (16.2.2.3) \]

and the remaining messages, whose total probability becomes negligible as N increases.
The entropy H of a source is thus a direct measure of the strength of the restrictions placed on the permissible messages by the distribution of probabilities over the symbols, lower entropy being synonymous with greater restrictions. In the two cases above, the maximum values of the entropy, H_max = log n and H_max = log μ(S) respectively, are reached when all the symbols are equally probable, i.e. when q is a uniform probability distribution over the symbols. When this distribution is not uniform, the usage of the different symbols is biased away from this maximum freedom, and the entropy of the source is lower; by Shannon's theorem (2), the number of `reasonably probable' messages of a given length emanating from the source decreases accordingly.
The quantity that measures most directly the strength of the restrictions introduced by the non-uniformity of q is the difference H − H_max, since the proportion of N-atom random structures which remain `reasonably probable' in the ensemble of the corresponding source is exp[N(H − H_max)]. This difference may be written (using continuous rather than discrete distributions)

\[ \mathcal{S}_m(q) = H(q) - H_{\max} = -\int_{S} q(s) \log \frac{q(s)}{m(s)}\, \mathrm{d}\mu(s), \qquad (16.2.2.4) \]

where m(s) is the uniform distribution which is such that

\[ \int_{S} m(s)\, \mathrm{d}\mu(s) = 1. \]
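To make the effect of this difference concrete, the following sketch (with arbitrarily chosen numbers) evaluates the relative entropy of a biased distribution q with respect to a uniform m, and the resulting fraction exp[N(H − H_max)] of messages that remain `reasonably probable' for a message length N = 100.

```python
import math

def relative_entropy(q, m):
    """S_m(q) = -sum_i q_i log(q_i / m_i); equals H(q) - H_max when m is uniform."""
    return -sum(qi * math.log(qi / mi) for qi, mi in zip(q, m) if qi > 0.0)

q = [0.70, 0.10, 0.10, 0.10]   # biased symbol usage
m = [0.25, 0.25, 0.25, 0.25]   # uniform prior prejudice

S = relative_entropy(q, m)     # negative; zero only when q = m
N = 100                        # message length (number of symbols, or atoms)
print(S, math.exp(N * S))      # the surviving fraction of `reasonably probable' messages
```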
From the fundamental theorems just stated, which may be recognized as Gibbs' argument in a different guise, Jaynes' own maximum-entropy argument proceeds with striking lucidity and constructive simplicity, along the following lines: the probabilities assigned to the symbols should reflect the available information, and nothing but that information; among all distributions compatible with the data, the distribution of maximum entropy is the only assignment which does not implicitly assume information one does not possess, since by Shannon's theorems any other choice would restrict the ensemble of `reasonably probable' messages more severely than the data themselves warrant.
The only requirement for this analysis to be applicable is that the `ranges of possibilities' to which it refers should be representable (or well approximated) by ensembles of abstract messages emanating from a random source. The entropy to be maximized is then the entropy per symbol of that source.
The final form of the maximum-entropy criterion is thus that q(s) should be chosen so as to maximize, under the constraints expressing the knowledge of newly acquired data, its entropy relative to the `prior prejudice' m(s) which maximizes H in the absence of these data.
Jaynes (1957) solved the problem of explicitly determining such maximum-entropy distributions in the case of general linear constraints, using an analytical apparatus first exploited by Gibbs in statistical mechanics.
The maximum-entropy distribution q^ME(s), under the prior prejudice m(s), satisfying the linear constraint equations

\[ \mathcal{C}_j(q) = c_j, \qquad j = 1, 2, \ldots, M, \qquad (16.2.2.5) \]

where the 𝒞_j are linear constraint functionals defined by given constraint functions C_j(s) through

\[ \mathcal{C}_j(q) = \int_{S} q(s)\, C_j(s)\, \mathrm{d}\mu(s), \]

and the c_j are given constraint values, is obtained by maximizing with respect to q the relative entropy defined by equation (16.2.2.4). An extra constraint is the normalization condition

\[ \mathcal{C}_0(q) = \int_{S} q(s)\, \mathrm{d}\mu(s) = 1, \]

to which it is convenient to give the label j = 0, so that it can be handled together with the others by putting C_0(s) = 1, c_0 = 1.
By a standard variational argument, this constrained maximization is equivalent to the unconstrained maximization of the functional

\[ \mathcal{S}_m(q) + \sum_{j=0}^{M} \lambda_j\, \mathcal{C}_j(q), \qquad (16.2.2.7) \]

where the λ_j are Lagrange multipliers whose values may be determined from the constraints. This new variational problem is readily solved: if q(s) is varied to q(s) + δq(s), the resulting variations in the functionals 𝒮_m and 𝒞_j will be

\[ \delta\mathcal{S}_m = -\int_{S} \{1 + \log[q(s)/m(s)]\}\, \delta q(s)\, \mathrm{d}\mu(s) \quad \text{and} \quad \delta\mathcal{C}_j = \int_{S} C_j(s)\, \delta q(s)\, \mathrm{d}\mu(s), \qquad (16.2.2.8) \]

respectively. If the variation of the functional (16.2.2.7) is to vanish for arbitrary variations δq(s), the integrand in the expression for that variation from (16.2.2.8) must vanish identically. Therefore the maximum-entropy density distribution q^ME(s) satisfies the relation

\[ -\{1 + \log[q^{\mathrm{ME}}(s)/m(s)]\} + \sum_{j=0}^{M} \lambda_j\, C_j(s) = 0, \]

and hence

\[ q^{\mathrm{ME}}(s) = m(s) \exp(\lambda_0 - 1) \exp\!\Bigl[\sum_{j=1}^{M} \lambda_j\, C_j(s)\Bigr]. \]

It is convenient now to separate the multiplier λ_0 associated with the normalization constraint by putting

\[ \exp(\lambda_0 - 1) = \frac{1}{Z}, \]

where Z is a function of the other multipliers λ_1, …, λ_M. The final expression for q^ME(s) is thus

\[ q^{\mathrm{ME}}(s) = \frac{m(s)}{Z} \exp\!\Bigl[\sum_{j=1}^{M} \lambda_j\, C_j(s)\Bigr]. \qquad (\mathrm{ME1}) \]
The values of Z and of λ_1, …, λ_M may now be determined by solving the initial constraint equations. The normalization condition demands that

\[ Z(\lambda_1, \ldots, \lambda_M) = \int_{S} m(s) \exp\!\Bigl[\sum_{j=1}^{M} \lambda_j\, C_j(s)\Bigr] \mathrm{d}\mu(s). \qquad (\mathrm{ME2}) \]

The generic constraint equations (16.2.2.5) determine λ_1, …, λ_M by the conditions that

\[ \int_{S} q^{\mathrm{ME}}(s)\, C_j(s)\, \mathrm{d}\mu(s) = c_j \]

for j = 1, 2, …, M. But, by Leibniz's rule of differentiation under the integral sign, these equations may be written in the compact form

\[ \frac{\partial (\log Z)}{\partial \lambda_j} = c_j, \qquad j = 1, 2, \ldots, M. \qquad (\mathrm{ME3}) \]
Equations (ME1), (ME2) and (ME3) constitute the maximum-entropy equations.
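The following Python sketch solves equations (ME1)–(ME3) numerically on a small discretized alphabet, finding the multipliers λ_j by Newton iteration on the conditions ∂(log Z)/∂λ_j = c_j (log Z is convex in the λ's, so the iteration is well behaved). The grid, the constraint functions and the constraint values are arbitrary choices made for the illustration, and solve_maxent is simply a name coined here.

```python
import numpy as np

def solve_maxent(m, C, c, n_iter=50):
    """Solve the maximum-entropy equations ME1-ME3 on a discrete grid.
    m : (n,) prior prejudice, normalized so that m.sum() == 1 (grid weights absorbed)
    C : (M, n) values of the constraint functions C_j at the grid points
    c : (M,) constraint values
    Returns the distribution q_ME (n,) and the multipliers lam (M,)."""
    lam = np.zeros(len(c))
    for _ in range(n_iter):
        w = m * np.exp(C.T @ lam)          # m(s) exp[sum_j lam_j C_j(s)]
        Z = w.sum()                        # (ME2), discretized
        q = w / Z                          # (ME1)
        g = C @ q - c                      # <C_j> - c_j, i.e. d(log Z)/d(lam_j) - c_j
        # the Hessian of log Z is the covariance matrix of the C_j under q
        H = (C * q) @ C.T - np.outer(C @ q, C @ q)
        lam -= np.linalg.solve(H + 1e-12 * np.eye(len(c)), g)   # Newton step
    return q, lam

# toy alphabet: 200 points on [0, 1) with a uniform prior prejudice
s = np.linspace(0.0, 1.0, 200, endpoint=False)
m = np.full_like(s, 1.0 / len(s))
C = np.vstack([np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)])   # two constraint functions
c = np.array([0.30, 0.10])                                      # their required expectations

q_me, lam = solve_maxent(m, C, c)
print(np.allclose(C @ q_me, c))   # (ME3) is satisfied: <C_j> matches c_j
```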
The maximal value attained by the entropy is readily found:

\[ \mathcal{S}_m(q^{\mathrm{ME}}) = -\int_{S} q^{\mathrm{ME}}(s) \log \frac{q^{\mathrm{ME}}(s)}{m(s)}\, \mathrm{d}\mu(s) = -\int_{S} q^{\mathrm{ME}}(s) \Bigl[-\log Z + \sum_{j=1}^{M} \lambda_j\, C_j(s)\Bigr] \mathrm{d}\mu(s), \]

i.e. using the constraint equations

\[ \mathcal{S}_m(q^{\mathrm{ME}}) = \log Z - \sum_{j=1}^{M} \lambda_j\, c_j. \]

The latter expression may be rewritten, by means of equations (ME3), as

\[ \mathcal{S}_m(q^{\mathrm{ME}}) = \log Z - \sum_{j=1}^{M} \lambda_j \frac{\partial (\log Z)}{\partial \lambda_j}, \]

which shows that, in their dependence on the λ's, the entropy and log Z are related by Legendre duality.
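Continuing the numerical sketch given after the maximum-entropy equations (all names carried over from that fragment), the entropy computed directly from q^ME can be checked against the dual expression log Z − Σ_j λ_j c_j:

```python
import numpy as np

# continues the previous sketch: q_me, lam, C, c, m are as defined there
S_direct = -np.sum(q_me * np.log(q_me / m))      # -sum q log(q/m), the relative entropy
log_Z = np.log(np.sum(m * np.exp(C.T @ lam)))    # (ME2)
S_dual = log_Z - lam @ c                         # log Z - sum_j lam_j c_j
print(np.isclose(S_direct, S_dual))              # True: the two expressions agree
```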
Jaynes' theory relates this maximal value of the entropy to the prior probability 𝒫(c) of the vector c of simultaneous constraint values, i.e. to the size of the sub-ensemble of messages of length N that fulfil the constraints embodied in (16.2.2.5), relative to the size of the ensemble of messages of the same length when the source operates with the symbol probability distribution given by the prior prejudice m. Indeed, it is a straightforward consequence of Shannon's second theorem (Section 16.2.2) as expressed in equation (16.2.2.3) that

\[ \mathcal{P}(c) \approx \exp(\mathcal{S}^{\mathrm{tot}}), \]

where

\[ \mathcal{S}^{\mathrm{tot}} = N\, \mathcal{S}_m(q^{\mathrm{ME}}) \]

is the total entropy for N symbols.
The standard setting of probabilistic direct methods (Hauptman & Karle, 1953; Bertaut, 1955a,b; Klug, 1958) uses implicitly as its starting point a source of random atomic positions. This can be described in the terms introduced in Section 16.2.2.1 by using a continuous alphabet whose symbols s are fractional coordinates x in the asymmetric unit of the crystal, the uniform measure μ being the ordinary Lebesgue measure d³x. A message of length N generated by that source is then a random N-equal-atom structure.
The traditional theory of direct methods assumes a uniform distribution q(x) of random atoms and proceeds to derive joint distributions of structure factors belonging to an N-atom random structure, using the asymptotic expansions of Gram–Charlier and Edgeworth. These methods have been described in Section 1.3.4.5.2.2 of IT B (Bricogne, 2001) as examples of applications of Fourier transforms. The reader is invited to consult this section for terminology and notation. These joint distributions of complex structure factors are subsequently used to derive conditional distributions of phases when the amplitudes are assigned their observed values, or of a subset of complex structure factors when the others are assigned certain values. In both cases, the largest structure-factor amplitudes are used as the conditioning information.
It was pointed out by the author (Bricogne, 1984) that this procedure can be problematic, as the Gram–Charlier and Edgeworth expansions have good convergence properties only in the vicinity of the expectation values of each structure factor: as the atoms are assumed to be uniformly distributed, these series afford an adequate approximation for the joint distribution 𝒫(F) only near the origin of structure-factor space, i.e. for small values of all the structure amplitudes. It is therefore incorrect to use these local approximations to 𝒫(F) near F = 0 as if they were the global functional form for that function `in the large' when forming conditional probability distributions involving large amplitudes.
These limitations can be overcome by recognizing that, if the locus 𝒯 (a high-dimensional torus) defined by the large structure-factor amplitudes to be used in the conditioning data is too extended in structure-factor space for a single asymptotic expansion of 𝒫(F) to be accurate everywhere on it, then 𝒯 should be broken up into sub-regions, and different local approximations to 𝒫(F) should be constructed in each of them. Each of these sub-regions will consist of a `patch' of 𝒯 surrounding a point F* located on 𝒯. Such a point F* is obtained by assigning `trial' phase values to the known moduli, but these trial values do not necessarily have to be viewed as `serious' assumptions concerning the true values of the phases: rather, they should be thought of as pointing to a patch of 𝒯 and to a specialized asymptotic expansion of 𝒫(F) designed to be the most accurate approximation possible to 𝒫(F) on that patch. With a sufficiently rich collection of such constructs, 𝒫(F) can be accurately calculated anywhere on 𝒯.
These considerations lead to the notion of recentring. Recentring the usual Gram–Charlier or Edgeworth asymptotic expansion for 𝒫(F) away from F = 0, by making trial phase assignments that define a point F* on 𝒯, is equivalent to using a non-uniform prior distribution of atoms q(x), reproducing the individual components of F* among its Fourier coefficients. The latter constraint leaves q(x) highly indeterminate, but Jaynes' argument given in Section 16.2.2.3 shows that there is a uniquely defined `best' choice for it: it is that distribution q^ME(x) having maximum entropy relative to a uniform prior prejudice m(x), and having the corresponding values U* of the unitary structure factors for its Fourier coefficients. This distribution has the unique property that it rules out as few random structures as possible on the basis of the limited information available in F*.
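As a schematic illustration of this construction, the sketch below builds q^ME(x) for a one-dimensional toy `crystal' whose Fourier coefficients reproduce a few trial unitary structure-factor values; the values in U_trial are invented for the example, and the routine solve_maxent is the one defined in the earlier sketch. Each reflection h contributes two real constraint functions, cos(2πhx) and sin(2πhx), with constraint values Re U*_h and Im U*_h.

```python
import numpy as np

# hypothetical trial unitary structure factors U*_h for a 1D toy crystal
U_trial = {1: 0.4 * np.exp(2j * np.pi * 0.10),
           2: 0.3 * np.exp(2j * np.pi * 0.45),
           3: 0.2 * np.exp(2j * np.pi * 0.80)}

x = np.linspace(0.0, 1.0, 512, endpoint=False)   # fractional coordinate grid
m = np.full_like(x, 1.0 / len(x))                # uniform prior prejudice m(x)

# each reflection h gives two real constraints: <cos 2*pi*h*x> = Re U*_h and <sin 2*pi*h*x> = Im U*_h
C = np.vstack([f(2 * np.pi * h * x) for h in U_trial for f in (np.cos, np.sin)])
c = np.concatenate([[U.real, U.imag] for U in U_trial.values()])

q_me, lam = solve_maxent(m, C, c)                # routine from the earlier sketch

# the Fourier coefficients of q_ME reproduce the trial values U*_h
U_check = {h: np.sum(q_me * np.exp(2j * np.pi * h * x)) for h in U_trial}
print(all(np.isclose(U_check[h], U_trial[h]) for h in U_trial))
```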
In terms of the statistical mechanical language used in Section 16.2.1, the trial structure-factor values F* used as constraints would be the macroscopic quantities that can be controlled externally, while the 3N atomic coordinates would be the internal degrees of freedom of the system, whose entropy should be a maximum under these macroscopic constraints.
It is possible to solve explicitly the maximum-entropy equations (ME1) to (ME3) derived in Section 16.2.2.4 for the crystallographic case that has motivated this study, i.e. for the purpose of constructing q^ME(x) from the knowledge of a set of trial structure-factor values F*. These derivations are given in §3.4 and §3.5 of Bricogne (1984). Extensive relations with the algebraic formalism of traditional direct methods are exhibited in §4, and connections with the theory of determinantal inequalities and with the maximum-determinant rule of Tsoucaris (1970) are studied in §6 of the same paper. The reader interested in these topics is invited to consult this paper, as space limitations preclude their discussion in the present chapter.
The saddlepoint method constitutes an alternative approach to the problem of evaluating the joint probability 𝒫(F) of structure factors when some of the moduli in F are large. It is shown in §5 of Bricogne (1984), and in more detail in Section 1.3.4.5.2.2 of Chapter 1.3 of IT B (Bricogne, 2001), that there is complete equivalence between the maximum-entropy approach to the phase problem and the classical probabilistic approach by the method of joint distributions, provided the latter is enhanced by the adoption of the saddlepoint approximation.
References

Bertaut, E. F. (1955a). La méthode statistique en cristallographie. I. Acta Cryst. 8, 537–543.
Bertaut, E. F. (1955b). La méthode statistique en cristallographie. II. Quelques applications. Acta Cryst. 8, 544–548.
Bricogne, G. (1984). Maximum entropy and the foundations of direct methods. Acta Cryst. A40, 410–445.
Bricogne, G. (2001). Fourier transforms in crystallography: theory, algorithms and applications. In International Tables for Crystallography, Vol. B, Reciprocal Space, edited by U. Shmueli, ch. 1.3. Dordrecht: Kluwer Academic Publishers.
Hauptman, H. & Karle, J. (1953). Solution of the Phase Problem. I. The Centrosymmetric Crystal. ACA Monograph No. 3. Pittsburgh: Polycrystal Book Service.
Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev. 106, 620–630.
Jaynes, E. T. (1968). Prior probabilities. IEEE Trans. Syst. Sci. Cybern. SSC-4, 227–241.
Jaynes, E. T. (1983). Papers on Probability, Statistics and Statistical Physics, edited by R. D. Rosenkrantz. Dordrecht: Reidel.
Klug, A. (1958). Joint probability distributions of structure factors and the phase problem. Acta Cryst. 11, 515–543.
Shannon, C. E. & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Tsoucaris, G. (1970). A new method of phase determination. The maximum determinant rule. Acta Cryst. A26, 492–499.
Wiener, N. (1949). Extrapolation, Interpolation and Smoothing of Stationary Time Series. Cambridge, MA: MIT Press.