International Tables for Crystallography, Volume C: Mathematical, physical and chemical tables
Edited by E. Prince

International Tables for Crystallography (2006). Vol. C. ch. 8.2, pp. 689-692
https://doi.org/10.1107/97809553602060000610

Chapter 8.2. Other refinement methods

E. Prince (a) and D. M. Collins (b)

(a) NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA; (b) Laboratory for the Structure of Matter, Code 6030, Naval Research Laboratory, Washington, DC 20375-5341, USA

Least squares is a powerful data-fitting method when the distribution of statistical fluctuations in the data is approximately normal, or Gaussian, but it can perform poorly if the distribution function has longer tails than a Gaussian distribution. Chapter 8.2 discusses several procedures that work better than least squares when the normality condition is not satisfied. Maximum-likelihood methods, which are identical to least squares for a normal distribution, can be designed to be optimum for any distribution. Other methods are robust, because they work well over a broad range of distributions, and resistant, because they are insensitive to the presence in the data of points that disagree with the model. Maximum-entropy methods are particularly useful when there are insufficient data.

Keywords: entropy maximization; maximum-likelihood methods; robust/resistant methods.

Chapter 8.1 discusses structure refinement by the method of least squares, which has a long history of successful use in data fitting and in the statistical analysis of results. It is an excellent technique to use in a wide range of practical problems, it is easy to implement, and it usually gives results that are straightforward and unambiguous. If a set of observations, [y_i], is an unbiased estimate of the values of model functions, [M_i({\bf x})], a properly weighted least-squares estimate is the best linear unbiased estimate of the parameters, x, provided the variances of the p.d.f.s of the populations from which the observations are drawn are finite. This assumes, however, that the model is correct and complete, an assumption whose validity may not be easy to justify. Furthermore, least squares tends to perform poorly when the distribution of errors in the observations has longer tails than a normal, or Gaussian, distribution. For these reasons, a number of other procedures have been developed that attempt to retain the strengths of least squares while being less sensitive to departures from the ideal conditions that are implicitly assumed. In this chapter, we discuss several of these methods. Two of them, maximum-likelihood methods and robust/resistant methods, are closely related to least squares. A third uses a function that is mathematically related to the entropy function of thermodynamics and statistical mechanics, and is therefore referred to as the maximum-entropy method. For a discussion of the particular application of least squares to structure refinement with powder data that has become known as the Rietveld method (Rietveld, 1969), see Chapter 8.6.

8.2.1. Maximum-likelihood methods

In Chapter 8.1, structure refinement is presented as finding the answer to the question: `given a set of observations drawn randomly from populations whose means are given by a model, M(x), for some set of unknown parameters, x, how can we best determine the means, variances and covariances of a joint probability density function that describes the probabilities that the true values of the elements of x lie in certain ranges?'. For a broad class of density functions for the observations, the linear estimate that is unbiased and has minimum variances for all parameters is given by the properly weighted method of least squares. The problem can also be stated in a slightly different manner: `given a model and a set of observations, what is the likelihood of observing those particular values, and for what values of the parameters of the model is that likelihood a maximum?'. The set of parameters that answers this question is the maximum-likelihood estimate.

Suppose the ith observation is drawn from a population whose p.d.f. is [\Phi_i(\Delta_i)], where [\Delta_i=[y_i-M_i({\bf x})]/s_i], x is the set of `true' values of the parameters, and [s_i] is a measure of scale appropriate to that observation. If the observations are independent, their joint p.d.f. is the product of the individual, marginal p.d.f.s:
[\Phi_J({\bf \Delta})=\textstyle\prod\limits_{i=1}^n\Phi_i(\Delta_i). \eqno (8.2.1.1)]
The function [\Phi_i(\Delta_i)] can also be viewed as a conditional p.d.f. for [y_i] given [M_i({\bf x})] or, equivalently, as a likelihood function for x given an observed value of [y_i], in which case it is written [l_i({\bf x}|y_i)]. Because a value actually observed logically must have a finite, positive likelihood, the density function in (8.2.1.1) and its logarithm will be maximum for the same values of x:
[\ln [l({\bf x}|{\bf y})]=\textstyle\sum\limits_{i=1}^n\ln [l_i({\bf x}|y_i)]. \eqno (8.2.1.2)]
In the particular case where the error distribution is normal, and [\sigma_i], the standard uncertainty of the ith observation, is known, then
[\Phi_i(\Delta_i)={1\over \sqrt{2\pi}\,\sigma_i}\exp\bigl(-(1/2)\{[y_i-M_i({\bf x})]/\sigma_i\}^2\bigr), \eqno (8.2.1.3)]
and the logarithm of the likelihood function is maximum when
[S=\textstyle\sum\limits_{i=1}^n\{[y_i-M_i({\bf x})]/\sigma_i\}^2 \eqno (8.2.1.4)]
is minimum, so that the maximum-likelihood estimate and the least-squares estimate are identical.
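
For the normal case, this equivalence is easy to check numerically. The following sketch (a toy one-parameter model, with [M_i(x)=x], invented purely for illustration) evaluates the log-likelihood of (8.2.1.2) and (8.2.1.3) and the weighted sum of squares (8.2.1.4) on a grid of parameter values and confirms that the maximum of one coincides with the minimum of the other.

import numpy as np

rng = np.random.default_rng(4)
sigma = np.array([0.1, 0.2, 0.15, 0.3, 0.1])     # known standard uncertainties (invented)
x_true = 1.7
y = x_true + rng.normal(0.0, sigma)              # observations of the model M_i(x) = x

x_grid = np.linspace(1.0, 2.5, 3001)             # trial values of the single parameter
resid = (y[:, None] - x_grid[None, :]) / sigma[:, None]
# log-likelihood, equations (8.2.1.2) and (8.2.1.3)
log_l = np.sum(-0.5 * resid**2 - np.log(np.sqrt(2.0 * np.pi) * sigma[:, None]), axis=0)
# weighted sum of squares, equation (8.2.1.4)
S = np.sum(resid**2, axis=0)

print("x at maximum log-likelihood:         ", x_grid[np.argmax(log_l)])
print("x at minimum weighted sum of squares:", x_grid[np.argmin(S)])   # the same value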

For an error distribution that is not normal, the maximum-likelihood estimate will be different from the least-squares estimate, but it will, in general, involve finding a set of parameters for which a sum of terms like those in (8.2.1.2) is a maximum (or the sum of the negatives of such terms is a minimum). It can thus be expressed in the general form: find the minimum of the sum
[S=\textstyle\sum\limits_{i=1}^n\rho(\Delta_i), \eqno (8.2.1.5)]
where ρ is defined by ρ(x) = −ln [Φ(x)], and Φ(x) is the p.d.f. of the error distribution appropriate to the observations. If [\rho(x)=x^2/2], the method is least squares. If the error distribution is the Cauchy distribution, [\Phi(x)=[\pi(1+x^2)]^{-1}], then [\rho(x)=\ln(1+x^2)], which increases much more slowly than [x^2] as |x| increases, causing large deviations to have much less influence than they do in least squares.
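
A short numerical comparison (the function names and trial deviations below are illustrative choices, not part of the text) makes the difference concrete: the Cauchy form of ρ grows only logarithmically, so a wildly discrepant observation contributes far less to the sum S than it would under least squares.

import numpy as np

def rho_normal(x):
    """rho(x) = x**2 / 2: the least-squares case (constant terms dropped)."""
    return 0.5 * x**2

def rho_cauchy(x):
    """rho(x) = ln(1 + x**2), from Phi(x) = 1 / (pi * (1 + x**2))."""
    return np.log1p(x**2)

for d in [0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"|Delta| = {d:5.1f}   normal rho = {rho_normal(d):8.2f}   "
          f"Cauchy rho = {rho_cauchy(d):6.2f}")
# The Cauchy rho grows only logarithmically, so a single large deviation
# contributes far less to S than it would in least squares.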

Although there is no need for ρ(x) to be a symmetric function of x (the error distribution can be skewed), it may be assumed to have a minimum at x = 0, so that dρ(x)/dx = 0 there. A series expansion about the origin therefore begins with the quadratic term, and
[\rho(x)=(x^2/2)\left(1+\textstyle\sum\limits_{k=1}^\infty a_kx^k\right). \eqno (8.2.1.6)]
The procedure is thus equivalent to a variant of least squares in which the weights are functions of the deviation.

8.2.2. Robust/resistant methods

Properly weighted least squares gives the best linear estimate for a very broad range of distributions of random errors in the data, and it gives the maximum-likelihood estimate if that error distribution is normal, or Gaussian. The best linear estimator may nevertheless not be a very good one, and the error distribution may not be well known. It is therefore important to ask how good an estimation procedure is when the conditions for which it was designed are not satisfied. Refinement procedures may be classified according to the extent to which they possess two properties known as robustness and resistance. A procedure is said to be robust if it works well for a broad range of error distributions, and resistant if its results are not strongly affected by fluctuations in any small subset of the data. Because least squares is a linear estimator, the influence of any single data point on the parameter estimates increases without limit as the difference between the observation and the model increases. Least squares therefore works poorly if the actual error distribution contains large deviations with a frequency that substantially exceeds that expected from a normal distribution. Further, it has the undesirable property that it will improve the fit of a few wildly discrepant data points by making the fit of many points a little worse. Least squares is therefore neither robust nor resistant.

Tukey (1974) has listed a number of properties a procedure should have in order to be robust and resistant. Because least squares works well when the error distribution is normal, the procedure should behave like least squares for small deviations whose distribution is similar to the normal distribution. It should de-emphasize large differences between the model and the data, and it should connect these extremes smoothly. A procedure suggested by Tukey was applied to crystal structure refinement by Nicholson, Prince, Buchanan & Tucker (1982). It corresponds to a fitting function ρ(Δ) [equation (8.2.1.5)] of the form
[\rho(\Delta)=(\Delta^2/2)(1-\Delta^2+\Delta^4/3), \quad |\Delta|\lt 1,]
[\rho(\Delta)=1/6, \quad |\Delta|\geq 1, \eqno (8.2.2.1)]
where [\Delta_i=[y_i-M_i({\bf x})]/s_i], and [s_i] is a resistant measure of scale.

In order to see what is meant by a resistant measure, consider a large sample of observations, [y_i], with a normal distribution. The sample mean,
[\overline{y}=(1/n)\textstyle\sum\limits_{i=1}^ny_i, \eqno (8.2.2.2)]
is an unbiased estimate of the population mean. Contamination of the sample by a small number of observations containing large, systematic errors, however, would have a large effect on the estimate. The median value of [y_i] is also an unbiased estimate of the population mean, but it is virtually unaffected by a few contaminating points. Similarly, the sample variance,
[s^2=[1/(n-1)]\textstyle\sum\limits_{i=1}^n(y_i-\overline{y})^2, \eqno (8.2.2.3)]
is an unbiased estimate of the population variance, but, again, it is strongly affected by a few discrepant points, whereas [(0.7413r_q)^2], where [r_q] is the interquartile range, the difference between the first and third quartile observations, is an estimate of the population variance that is almost unaffected by a small number of discrepant points. The median and the interquartile range are thus resistant quantities that can be used to estimate the mean and variance of a population distribution when the sample may contain points that do not belong to the population. A value of the scale parameter, [s_i], for use in evaluating the quantities in (8.2.2.1), that has proved to be useful is [s_i=9|\delta_m|\sigma_i], where [|\delta_m|] represents the median value of [|[y_i-M_i({\bf x})]/\sigma_i|], the median absolute deviation, or MAD.
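
The following sketch (the contaminated sample is an invented illustration) shows how strongly a few discrepant points disturb the sample mean and variance, equations (8.2.2.2) and (8.2.2.3), while the median, the interquartile-range estimate of the variance and the median absolute deviation are left almost unchanged.

import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=1.0, size=1000)
sample[:5] = 100.0                        # five wildly discrepant observations

mean, variance = sample.mean(), sample.var(ddof=1)

median = np.median(sample)
q1, q3 = np.percentile(sample, [25, 75])
var_iqr = (0.7413 * (q3 - q1))**2         # resistant estimate of the population variance
mad = np.median(np.abs(sample - median))  # raw median absolute deviation of the sample

print(f"mean   = {mean:7.3f}   sample variance (8.2.2.3) = {variance:9.3f}")
print(f"median = {median:7.3f}   variance from IQR         = {var_iqr:9.3f}")
print(f"MAD    = {mad:7.3f}")
# The mean and sample variance are pulled far away from 10 and 1 by the five
# contaminating points; the median, IQR and MAD estimates are barely moved.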

Implementation of a procedure based on the function given in (8.2.2.1) involves modification of the weights used in each cycle by
[\varphi(\Delta)=(1-\Delta^2)^2, \quad |\Delta|\lt 1,]
[\varphi(\Delta)=0, \quad |\Delta|\geq 1. \eqno (8.2.2.4)]
Because of this weight modification, the procedure is sometimes referred to as `iteratively reweighted least squares'. It should be recognized, however, that the function that is minimized is more complex than a sum of squares. In a strict application of the Gauss–Newton algorithm (see Section 8.1.3) to the minimization of this function, each term in the summations that form the normal-equations matrix contains a factor [\omega(\Delta_i)], where [\omega(\Delta)={\rm d}^2\rho/{\rm d}\Delta^2=1-6\Delta^2+5\Delta^4]. This factor actually gives some data points a negative effective `weight', because the sum is actually reduced by making the fit to those points worse. The inverse of this normal-equations matrix is not an estimate of the variance–covariance matrix; for that, the unmodified weights, equal to [1/\sigma_i^2], must be used, but, because the more discrepant points have been deliberately down-weighted relative to the ideal weights, the variances are, in general, underestimated. A recommended procedure (Huber, 1973; Nicholson et al., 1982) is to calculate the normal-equations matrix using the unmodified weights, invert that matrix, and premultiply the inverse by an estimate of the variance of the residuals (Section 8.4.1) computed using the modified weights and (n − p) degrees of freedom. Huber showed that this estimate is biased low, and suggested multiplication by a number, [c^2], greater than one and given by
[c=[1+ps_\omega^2/n\overline{\omega}^2]/\overline{\omega}, \eqno (8.2.2.5)]
where [\overline{\omega}] is the mean value, and [s_\omega^2] the variance, of [\omega(\Delta_i)] over the entire data set. The conditions under which this expression is derived are not well satisfied in the crystallographic problem, but, if n/p is large and [\overline{\omega}] is not too much less than one, the value of c will be close to [1/\overline{\omega}]. [\overline{\omega}] plays the role of a `variance efficiency factor': the variances are approximately those that would be achieved in a least-squares fit to a data set with normally distributed errors that contained [n\overline{\omega}] data points.
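
A compact sketch of one cycle of such an iteratively reweighted scheme for a linear model, y ≈ Ax, is given below. The design matrix, the toy data, the starting values and the fixed number of cycles are assumptions made for illustration; the weight modification of equation (8.2.2.4) is applied with the MAD-based scale [s_i=9|\delta_m|\sigma_i] described above, and the variance-correction factors discussed in this paragraph are not included.

import numpy as np

def irls_cycle(A, y, sigma, x):
    """One reweighted cycle using the modified weights of equation (8.2.2.4)."""
    resid = (y - A @ x) / sigma                   # [y_i - M_i(x)] / sigma_i
    delta_m = np.median(np.abs(resid))            # median absolute deviation
    s = 9.0 * delta_m * sigma                     # scale s_i = 9 |delta_m| sigma_i
    Delta = (y - A @ x) / s
    phi = np.where(np.abs(Delta) < 1.0, (1.0 - Delta**2)**2, 0.0)   # eq. (8.2.2.4)
    w = phi / sigma**2                            # modified weights
    W = np.diag(w)
    # weighted normal equations: (A^T W A) dx = A^T W (y - A x)
    dx = np.linalg.solve(A.T @ W @ A, A.T @ W @ (y - A @ x))
    return x + dx

# invented straight-line example with a few outliers
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])
sigma = np.full_like(t, 0.1)
y = 2.0 + 3.0 * t + rng.normal(0.0, 0.1, t.size)
y[::10] += 5.0                                    # wildly discrepant points

x = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary least-squares start
for _ in range(10):
    x = irls_cycle(A, y, sigma, x)
print("robust/resistant estimate:", x)            # close to the generating (2, 3)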

Robust/resistant methods have been discussed in detail by Huber (1981), Belsley, Kuh & Welsch (1980), and Hoaglin, Mosteller & Tukey (1983). An analysis by Wilson (1976) shows that a fitting procedure gives unbiased estimates if
[\sum_{i=1}^n\left[\left({\partial w_i \over \partial y_{ci}}\right)\left({{\rm d}y_{ci} \over {\rm d}x}\right)\sigma_i^2\right]=2\sum_{i=1}^n\left[\left({\partial w_i \over \partial y_{oi}}\right)\left({{\rm d}y_{ci} \over {\rm d}x}\right)\sigma_i^2\right], \eqno (8.2.2.6)]
where [y_{oi}] and [y_{ci}] are the observed and calculated values of [y_i], respectively. Least squares is the case where all terms on both sides of the equation are equal to zero; the weights are fixed. In maximum-likelihood estimation or robust/resistant estimation, the effective weights are functions of the deviation, causing possible introduction of bias. Equation (8.2.2.6), however, suggests that the estimates will still be unbiased if the sums on both sides are zero, which will be the case if the error distribution and the weight-modification function are both symmetric about Δ = 0.

Note that the fact that two different weighting schemes applied to the same data lead to different values for the estimate does not necessarily imply that either value is biased. As long as the observations are unbiased estimates of the values of the model functions, any weighting scheme gives unbiased estimates of the model parameters, although some weighting schemes will make those estimates more precise than others. Bias can be introduced if a procedure systematically causes fluctuations in one direction to be weighted more heavily than fluctuations in the other. For example, in the Rietveld method (Chapter 8.6), the observations are counts of quanta, which are subject to fluctuation according to the Poisson distribution, for which the probability of observing k counts per unit time is
[\Phi(k)=\lambda^k\exp(-\lambda)/k!. \eqno (8.2.2.7)]
The mean and the variance of this p.d.f. are both equal to λ, so that the ideal weighting would be [w_i=1/\lambda_i]. However, [\lambda_i] is not known a priori, and must be estimated. The usual procedure is to take [k_i] as an estimate of [\lambda_i], but this is an unbiased estimate only asymptotically for large k (Box & Tiao, 1973), and, furthermore, it causes observations that have negative, random errors to be weighted more heavily than observations that have positive ones. This correlation can be removed by using, after a preliminary cycle of refinement, [M_i(\widehat{{\bf x}})] as the estimate of [\lambda_i]. This might seem to make the weights dependent on the calculated values, so that the right-hand side of (8.2.2.6) is no longer zero, but that applies only if the weights are changed during the refinement. There is thus no conflict with the result in equation (8.1.2.9). In practice, in any case, many other sources of uncertainty are much more important than any possible bias that could be introduced by this effect.

8.2.3. Entropy maximization

8.2.3.1. Introduction

Entropy maximization, like least squares, is of interest primarily as a framework within which to find or adjust the parameters of a model. Rationalization of the name `entropy maximization' by analogy to thermodynamics is controversial, but there is formal proof (Shore & Johnson, 1980; Johnson & Shore, 1983) supporting entropy maximization as the unique method of inference that satisfies basic consistency requirements (Livesey & Skilling, 1985). The proof consists of discovering the consequences of four consistency axioms, which may be stated informally as follows:

  • (1) the result of the inference should be unique;

  • (2) the result of the inference should be invariant under any transformation of the coordinate system;

  • (3) it should not matter whether independent information is accounted for independently or jointly;

  • (4) it should not matter whether independent subsystems are treated separately in conditional problems or collected and treated jointly.

The term `entropy' is used in this chapter as a name only: the name for variation functions that include the form [\varphi\ln\varphi], where [\varphi] may represent a probability or, more generally, a positive proportion. Any positive measure, either observed or derived, of the relative apportionment of a characteristic quantity among the observations can serve as the proportion.

The method of entropy maximization may be formulated as follows: given a set of n observations, [y_i], that are measurements of quantities that can be described by model functions, [M_i({\bf x})], where x is a vector of parameters, find the prior, positive proportions, [\mu_i=f(y_i)], and the values of the parameters for which the positive proportions [\varphi_i=f[M_i({\bf x})]] maximize the sum
[S=-\textstyle\sum\limits_{i=1}^n\varphi_i^{\prime}\ln(\varphi_i^{\prime}/\mu_i^{\prime}), \eqno (8.2.3.1)]
where [\varphi_i^{\prime}=\varphi_i\big/\sum\varphi_j] and [\mu_i^{\prime}=\mu_i\big/\sum\mu_j]. S is called the Shannon–Jaynes entropy. For some applications (Collins, 1982), it is desirable to include in the variation function additional terms or restraints that give S the form
[S=-\textstyle\sum\limits_{i=1}^n\varphi_i^{\prime}\ln(\varphi_i^{\prime}/\mu_i^{\prime})+\lambda_1\xi_1({\bf x},{\bf y})+\lambda_2\xi_2({\bf x},{\bf y})+\ldots, \eqno (8.2.3.2)]
where the λs are undetermined multipliers, but we shall discuss here only applications where [\lambda_i=0] for all i, so that an unrestrained entropy is maximized. A necessary condition for S to be a maximum is that its gradient vanish. Using
[{\partial S \over \partial x_j}=\sum_{i=1}^n\left({\partial S \over \partial \varphi_i}\right)\left({\partial \varphi_i\over\partial x_j}\right) \eqno (8.2.3.3)]
and
[{\partial S \over \partial \varphi_i}=\sum_{k=1}^n\left({\partial S \over \partial \varphi_k^{\prime}}\right)\left({\partial \varphi_k^{\prime}\over\partial \varphi_i}\right), \eqno (8.2.3.4)]
straightforward algebraic manipulation gives equations of the form
[\sum_{i=1}^n\left\{{\partial \varphi_i \over \partial x_j}-\varphi_i^{\prime}\left(\sum_{k=1}^n{\partial \varphi_k \over \partial x_j}\right)\right\}\ln\left({\varphi_i^{\prime} \over \mu_i^{\prime}}\right)=0. \eqno (8.2.3.5)]
It should be noted that, although the entropy function should, in principle, have a unique stationary point corresponding to the global maximum, there are occasional circumstances, particularly with restrained problems in which the undetermined multipliers are not all zero, where it may be necessary to verify that a stationary solution actually maximizes the entropy.
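
As a small illustration (the numerical proportions below are invented), the Shannon–Jaynes entropy of equation (8.2.3.1) can be evaluated directly; it is never positive, and it reaches its maximum value of zero only when the normalized proportions are equal to the normalized prior.

import numpy as np

def shannon_jaynes_entropy(phi, mu):
    """S = -sum phi'_i ln(phi'_i / mu'_i), with phi' and mu' normalized, eq. (8.2.3.1)."""
    phi_n = np.asarray(phi, float) / np.sum(phi)
    mu_n = np.asarray(mu, float) / np.sum(mu)
    return -np.sum(phi_n * np.log(phi_n / mu_n))

mu = np.array([1.0, 1.0, 1.0, 1.0])                       # uniform prior proportions
print(shannon_jaynes_entropy([1.0, 1.0, 1.0, 1.0], mu))   # 0.0: the maximum, phi' = mu'
print(shannon_jaynes_entropy([4.0, 1.0, 1.0, 1.0], mu))   # negative: less uniform than the prior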

8.2.3.2. Some examples

For an example of the application of the maximum-entropy method, consider (Collins, 1984) a collection of diffraction intensities in which various subsets have been measured under different conditions, such as on different films or with different crystals. All systematic corrections have been made, but it is necessary to put the different subsets onto a common scale. Assume that every subset has measurements in common with some other subset, and that no collection of subsets is isolated from the others. Let the measurement of intensity [I_h] in subset i be [J_{hi}], and let the scale factor that puts intensity [I_h] on the scale of subset i be [k_i]. Equation (8.2.3.1) becomes
[S=-\sum_{h=1}^n\sum_{i=1}^m(k_iI_h)^{\prime}\ln\left[{(k_iI_h)^{\prime} \over J_{hi}^{\prime}}\right], \eqno (8.2.3.6)]
where the term is zero if [I_h] does not appear in subset i. Because [k_i] and [I_h] are parameters of the model, equations (8.2.3.5) become
[\sum_{i=1}^mk_i\ln\left[{(k_iI_h)^{\prime} \over J_{hi}^{\prime}}\right]-\sum_{h=1}^n\sum_{i=1}^m(k_iI_h)^{\prime}\left(\sum_{l=1}^mk_l\right)\ln\left[{(k_iI_h)^{\prime} \over J_{hi}^{\prime}}\right]=0 \eqno (8.2.3.7a)]
and
[\sum_{h=1}^nI_h\ln\left[{(k_iI_h)^{\prime} \over J_{hi}^{\prime}}\right]-\sum_{h=1}^n\sum_{i=1}^m(k_iI_h)^{\prime}\left(\sum_{l=1}^nI_l\right)\ln\left[{(k_iI_h)^{\prime}\over J_{hi}^{\prime}}\right]=0. \eqno (8.2.3.7b)]
These simplify to
[\ln I_h=Q-\textstyle\sum\limits_{i=1}^mk_i^{\prime}\ln(k_i/J_{hi}) \eqno (8.2.3.8a)]
and
[\ln k_i=Q-\textstyle\sum\limits_{h=1}^nI_h^{\prime}\ln(I_h/J_{hi}), \eqno (8.2.3.8b)]
where
[Q=\textstyle\sum\limits^n_{h=1}\;\textstyle\sum\limits^m_{i=1}(k_iI_h)^{\prime}\ln[(k_iI_h)/J_{hi}]. \eqno (8.2.3.8c)]
Equations (8.2.3.8) may be solved iteratively, starting with the approximations [k_i=\sum_{h=1}^nJ_{hi}] and Q = 0.
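
The sketch below is a literal transcription of this iteration for an invented, complete data set (every intensity measured in every subset, so the zero terms for missing measurements do not arise); the simulated intensities, scale factors and noise level are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 3                                     # n intensities, m subsets
I_true = rng.uniform(10.0, 100.0, size=n)
k_true = np.array([1.0, 2.5, 0.4])               # true subset scale factors
J = np.outer(I_true, k_true) * (1.0 + 0.02 * rng.standard_normal((n, m)))

k = J.sum(axis=0)                                # starting value k_i = sum_h J_hi
Q = 0.0
for _ in range(50):
    I = np.exp(Q - np.log(k[None, :] / J) @ (k / k.sum()))      # eq. (8.2.3.8a)
    k = np.exp(Q - (I / I.sum()) @ np.log(I[:, None] / J))      # eq. (8.2.3.8b)
    kI = np.outer(I, k)
    Q = np.sum(kI / kI.sum() * np.log(kI / J))                  # eq. (8.2.3.8c)

# the solution is defined only up to a common multiplicative constant
print("scale-factor ratios:", k / k[0])          # close to k_true / k_true[0]
print("spread of I / I_true:", (I / I_true).std() / (I / I_true).mean())   # small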

The standard uncertainties of scale factors and intensities are not used in the solution of equations (8.2.3.8), and must be computed separately. They may be estimated on a fractional basis from the variances of the estimated population means [\langle J_{hi}/I_h\rangle] for a scale factor and [\langle J_{hi}/k_i\rangle] for an intensity, respectively. The maximum-entropy scale factors and scaled intensities are relative, and either set may be multiplied by an arbitrary, positive constant without affecting the solution.

For another example, consider the maximum-entropy fit of a linear function to a set of independently distributed variables. Let [y_i] represent an observation drawn from a population with mean [a_0+a_1x_i] and finite variance [\sigma_i^2]; we wish to find the maximum-entropy estimates of [a_0] and [a_1]. Assume that the mismatch between the observation and the model is normally distributed, so that its probability density is the positive proportion
[\varphi_i=\varphi(\Delta_i)=(2\pi\sigma_i^2)^{-1/2}\exp(-\Delta_i^2/2\sigma_i^2), \eqno (8.2.3.9)]
where [\Delta_i=y_i-(a_0+a_1x_i)]. The prior proportion is given by
[\mu_i=\varphi(0)=(2\pi\sigma_i^2)^{-1/2}. \eqno (8.2.3.10)]
Letting [A_\varphi=1\big/\sum\varphi_i], equations (8.2.3.5) become
[\textstyle\sum\limits_{i=1}^n\left[\varphi_i\Delta_i/\sigma_i^2-A_\varphi\,\varphi_i\left(\textstyle\sum\limits_{j=1}^n\varphi_j\Delta_j/\sigma_j^2\right)\right]\Delta_i^2/\sigma_i^2=0 \eqno (8.2.3.11a)]
and
[\textstyle\sum\limits_{i=1}^n\left[\varphi_i\Delta_i\,x_i/\sigma_i^2-A_\varphi\,\varphi_i\left(\textstyle\sum\limits_{j=1}^n\varphi_j\Delta_jx_j/\sigma_j^2\right)\right]\Delta_i^2/\sigma_i^2=0, \eqno (8.2.3.11b)]
which simplify to
[\eqalignno{&\left(\matrix{\sum\limits_{i=1}^nw_i &\sum\limits_{i=1}^nw_ix_i \cr \sum\limits_{i=1}^nw_ix_i &\sum\limits_{i=1}^nw_ix_i^2}\right)\left({a_0 \atop a_1}\right)\cr &\quad=\left(\matrix{\sum\limits_{i=1}^nw_i\left(y_i-\sigma_i^2A_\varphi\sum\limits_{j=1}^n\varphi_j\Delta_j/\sigma_j^2\right)\cr \sum\limits_{i=1}^nw_i\left(y_ix_i-\sigma_i^2A_\varphi\sum\limits_{j=1}^n\varphi_j\Delta_jx_j/\sigma_j^2\right)}\right), &(8.2.3.12)}]
where [w_i] may be interpreted as a weight and is given by [w_i=\varphi_i\Delta_i^2/\sigma_i^4]. Equations (8.2.3.12) may be solved iteratively, starting with the approximations that the sums over j on the right-hand side are zero and that [w_i=1.0] for all i, that is, using the solution to the corresponding, unweighted least-squares problem. Resetting [w_i] after each iteration by only half the indicated amount defeats a tendency towards oscillation. Approximate standard uncertainties for the parameters, [a_0] and [a_1], may be computed by conventional means after setting to zero the sums over j on the right-hand side of equations (8.2.3.12). (See, however, the discussion of computing variance–covariance matrices in Section 8.1.2.) Note that [w_i] is small for both small and large values of [|\Delta_i|]. Thus, in contrast to the robust/resistant methods (Section 8.2.2), which de-emphasize only the large differences, this method down-weights both the small and the large differences and adjusts the parameters on the basis of the moderate-sized mismatches between model and data. The procedure used in this two-dimensional, linear model can be extended to linear models, and to linear approximations to nonlinear models, in any number of dimensions using methods discussed in Chapter 8.1.
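
A sketch of this iteration for an invented straight-line data set follows; the simulated data, the starting values and the number of cycles are illustrative assumptions, and the half-step reset of the weights follows the suggestion above.

import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 40)
sigma = np.full_like(x, 0.5)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)

# starting values: unweighted least squares (w_i = 1, right-hand sums over j zero)
a0, a1 = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
w = np.ones_like(x)

for _ in range(100):
    delta = y - (a0 + a1 * x)                                # Delta_i
    phi = np.exp(-delta**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    A_phi = 1.0 / phi.sum()
    w_new = phi * delta**2 / sigma**4                        # w_i = phi_i Delta_i^2 / sigma_i^4
    w = 0.5 * (w + w_new)                                    # half-step reset of the weights
    s0 = A_phi * np.sum(phi * delta / sigma**2)              # sums over j, right-hand side
    s1 = A_phi * np.sum(phi * delta * x / sigma**2)
    lhs = np.array([[w.sum(), (w * x).sum()],
                    [(w * x).sum(), (w * x**2).sum()]])
    rhs = np.array([np.sum(w * (y - sigma**2 * s0)),
                    np.sum(w * (y * x - sigma**2 * s1))])
    a0, a1 = np.linalg.solve(lhs, rhs)                       # eq. (8.2.3.12)

# the result should lie close to the generating values a0 = 1, a1 = 2
print(f"maximum-entropy estimate: a0 = {a0:.3f}, a1 = {a1:.3f}")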

The maximum-entropy method has been described (Jaynes, 1979) as being `maximally noncommittal with respect to all other matters; it is as uniform (by the criterion of the Shannon information measure) as it can be without violating the given constraint[s]'. Least squares, because it gives minimum-variance estimates of the parameters of a model, and therefore of all functions of the model, including the predicted values of any additional data points, might be similarly described as `maximally committal' with regard to the collection of more data. Least squares and maximum entropy can therefore be viewed as the extremes of a range of methods, classified according to the degree of a priori confidence in the correctness of the model, with the robust/resistant methods lying somewhere in between (although generally closer to least squares). Maximum-entropy methods can be used when it is desirable to avoid prejudice in favour of a model because of doubt as to the model's correctness.

References

Belsley, D. A., Kuh, E. & Welsch, R. E. (1980). Regression diagnostics. New York: John Wiley.
Box, G. E. P. & Tiao, G. C. (1973). Bayesian inference in statistical analysis. Reading, MA: Addison-Wesley.
Collins, D. M. (1982). Electron density images from imperfect data by iterative entropy maximization. Nature (London), 298, 49–51.
Collins, D. M. (1984). Scaling by entropy maximization. Acta Cryst. A40, 705–708.
Hoaglin, D. C., Mosteller, F. & Tukey, J. W. (1983). Understanding robust and exploratory data analysis. New York: John Wiley.
Huber, P. J. (1973). Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1, 799–821.
Huber, P. J. (1981). Robust statistics. New York: John Wiley.
Jaynes, E. T. (1979). Where do we stand on maximum entropy? The maximum entropy formalism, edited by R. D. Levine & M. Tribus, pp. 44–49. Cambridge, MA: Massachusetts Institute of Technology.
Johnson, R. W. & Shore, J. E. (1983). Comments on and correction to `Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy'. IEEE Trans. Inf. Theory, IT-29, 942–943.
Livesey, A. K. & Skilling, J. (1985). Maximum entropy theory. Acta Cryst. A41, 113–122.
Nicholson, W. L., Prince, E., Buchanan, J. & Tucker, P. (1982). A robust/resistant technique for crystal structure refinement. Crystallographic statistics: progress and problems, edited by S. Ramaseshan, M. F. Richardson & A. J. C. Wilson, pp. 220–263. Bangalore: Indian Academy of Sciences.
Rietveld, H. M. (1969). A profile refinement method for nuclear and magnetic structures. J. Appl. Cryst. 2, 65–71.
Shore, J. E. & Johnson, R. W. (1980). Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory, IT-26, 26–37.
Tukey, J. W. (1974). Introduction to today's data analysis. Critical evaluation of chemical and physical structural information, edited by D. R. Lide & M. A. Paul, pp. 3–14. Washington: National Academy of Sciences.
Wilson, A. J. C. (1976). Statistical bias in least-squares refinement. Acta Cryst. A32, 994–996.
