International Tables for Crystallography (2006). Vol. B, Chapter 1.3, pp. 25–98.
https://doi.org/10.1107/97809553602060000551

Chapter 1.3. Fourier transforms in crystallography: theory, algorithms and applications
G. Bricogne

MRC Laboratory of Molecular Biology, Hills Road, Cambridge CB2 2QH, England, and LURE, Bâtiment 209D, Université Paris-Sud, 91405 Orsay, France

In the first part of this chapter, the mathematical theory of the Fourier transformation is cast in the language of Schwartz's theory of distributions, allowing Fourier transforms, Fourier series and discrete Fourier transforms to be treated together. Next the numerical computation of the discrete Fourier transform is discussed. One-dimensional algorithms are examined first, including the Cooley–Tukey algorithm, the Good (or prime factor) algorithm, the Rader algorithm and the Winograd algorithms. Multidimensional algorithms are then covered. The last part of the chapter surveys the crystallographic applications of Fourier transforms.

Keywords: Fourier transforms; topology; distributions; convergence; convolution; convolution theorem; Heisenberg's inequality; Hardy's theorem; reciprocity theorem; Sobolev spaces; Fourier series; Poisson summation formula; Toeplitz forms; Szegö's theorem; correlation; correlation functions; sampling; Cooley–Tukey algorithm; Good algorithm; prime factor algorithm; Chinese remainder theorem; Rader algorithm; Winograd algorithms; multidimensional algorithms; Nussbaumer–Quandalle algorithm; structure factors; Friedel's law; Patterson functions; determinantal inequalities; crystallographic groups; Parseval's theorem; generalized multiplexing; electron density; molecular envelopes; noncrystallographic symmetry; molecular averaging; Green's theorem; least-squares methods; fast Fourier transforms; helical symmetry; fibres; macromolecular crystallography; probability theory.
Since the publication of Volume II of International Tables, most aspects of the theory, computation and applications of Fourier transforms have undergone considerable development, often to the point of being hardly recognizable.
The mathematical analysis of the Fourier transformation has been extensively reformulated within the framework of distribution theory, following Schwartz's work in the early 1950s.
The computation of Fourier transforms has been revolutionized by the advent of digital computers and of the Cooley–Tukey algorithm, and progress has been made at an ever-accelerating pace in the design of new types of algorithms and in optimizing their interplay with machine architecture.
These advances have transformed both theory and practice in several fields which rely heavily on Fourier methods; much of electrical engineering, for instance, has become digital signal processing.
By contrast, crystallography has remained relatively unaffected by these developments. From the conceptual point of view, old-fashioned Fourier series are still adequate for the quantitative description of X-ray diffraction, as this rarely entails consideration of molecular transforms between reciprocal-lattice points. From the practical point of view, three-dimensional Fourier transforms have mostly been used as a tool for visualizing electron-density maps, so that only moderate urgency was given to trying to achieve ultimate efficiency in these relatively infrequent calculations.
Recent advances in phasing and refinement methods, however, have placed renewed emphasis on concepts and techniques long used in digital signal processing, e.g. flexible sampling, Shannon interpolation, linear filtering, and interchange between convolution and multiplication. These methods are iterative in nature, and thus generate a strong incentive to design new crystallographic Fourier transform algorithms making the fullest possible use of all available symmetry to save both storage and computation.
As a result, need has arisen for a modern and coherent account of Fourier transform methods in crystallography which would provide:
The rapid pace of progress in these fields implies that such an account would be struck by quasi-immediate obsolescence if it were written solely for the purpose of compiling a catalogue of results and formulae `customized' for crystallographic use. Instead, the emphasis has been placed on a mode of presentation in which most results and formulae are derived rather than listed. This does entail a substantial mathematical overhead, but has the advantage of preserving in its `native' form the context within which these results are obtained. It is this context, rather than any particular set of results, which constitutes the most fertile source of new ideas and new applications, and which alone has any hope of remaining useful in the long run.
These conditions have led to the following choices:
In keeping with this philosophy, the theory is presented first, followed by the crystallographic applications. There are `forward references' from mathematical results to the applications which later invoke them (thus giving `real-life' examples rather than artificial ones), and `backward references' as usual. In this way, the internal logic of the mathematical developments – the surest guide to future innovations – can be preserved, whereas the alternative solution of relegating these to appendices tends on the contrary to obscure that logic by subordinating it to that of the applications.
It is hoped that this attempt at an overall presentation of the main features of Fourier transforms and of their ubiquitous role in crystallography will be found useful by scientists both within and outside the field.
The Fourier transformation and the practical applications to which it gives rise occur in three different forms which, although they display a similar range of phenomena, normally require distinct formulations and different proof techniques:
At the same time, the most useful property of the Fourier transformation – the exchange between multiplication and convolution – is mathematically the most elusive and the one which requires the greatest caution in order to avoid writing down meaningless expressions.
It is the unique merit of Schwartz's theory of distributions (Schwartz, 1966) that it affords complete control over all the troublesome phenomena which had previously forced mathematicians to settle for a piecemeal, fragmented theory of the Fourier transformation. By its ability to handle rigorously highly `singular' objects (especially δ-functions, their derivatives, their tensor products, their products with smooth functions, their translates and lattices of these translates), distribution theory can deal with all the major properties of the Fourier transformation as particular instances of a single basic result (the exchange between multiplication and convolution), and can at the same time accommodate the three previously distinct types of Fourier theories within a unique framework. This brings great simplification to matters of central importance in crystallography, such as the relations between
All these properties become subsumed under the same theorem.
This striking synthesis comes at a slight price, which is the relative complexity of the notion of distribution. It is first necessary to establish the notion of topological vector space and to gain sufficient control (or, at least, understanding) over convergence behaviour in certain of these spaces. The key notion of metrizability cannot be circumvented, as it underlies most of the constructs and many of the proof techniques used in distribution theory. Most of Section 1.3.2.2 builds up to the fundamental result at the end of Section 1.3.2.2.6.2, which is basic to the definition of a distribution in Section 1.3.2.3.4 and to all subsequent developments.

The reader mostly interested in applications will probably want to reach this section by starting with his or her favourite topic in Section 1.3.4, and following the backward references to the relevant properties of the Fourier transformation, then to the proof of these properties, and finally to the definitions of the objects involved. Hopefully, he or she will then feel inclined to follow the forward references and thus explore the subject from the abstract to the practical. The books by Dieudonné (1969) and Lang (1965) are particularly recommended as general references for all aspects of analysis and algebra.
Throughout this text, $\mathbb{R}$ will denote the set of real numbers, $\mathbb{Z}$ the set of rational (signed) integers and $\mathbb{N}$ the set of natural (unsigned) integers. The symbol $\mathbb{R}^n$ will denote the Cartesian product of $n$ copies of $\mathbb{R}$:
$$\mathbb{R}^n = \mathbb{R} \times \ldots \times \mathbb{R} \quad (n \text{ times}, \; n \geq 1),$$
so that an element $x$ of $\mathbb{R}^n$ is an $n$-tuple of real numbers:
$$x = (x_1, \ldots, x_n).$$
Similar meanings will be attached to $\mathbb{Z}^n$ and $\mathbb{N}^n$.

The symbol $\mathbb{C}$ will denote the set of complex numbers. If $z \in \mathbb{C}$, its modulus will be denoted by $|z|$, its conjugate by $\bar{z}$ (not $z^*$), and its real and imaginary parts by $\operatorname{Re}(z)$ and $\operatorname{Im}(z)$:
$$\operatorname{Re}(z) = \tfrac{1}{2}(z + \bar{z}), \qquad \operatorname{Im}(z) = \tfrac{1}{2i}(z - \bar{z}).$$

If $X$ is a finite set, then $|X|$ will denote the number of its elements. If mapping $f$ sends an element $x$ of set $X$ to the element $f(x)$ of set $Y$, the notation $f : x \mapsto f(x)$ will be used; the plain arrow $\to$ will be reserved for denoting limits, as in
$$\lim_{\rho \to \infty} \left(1 + \frac{x}{\rho}\right)^{\rho} = e^x.$$

If $X$ is any set and $S$ is a subset of $X$, the indicator function $\chi_S$ of $S$ is the real-valued function on $X$ defined by
$$\chi_S(x) = 1 \text{ if } x \in S, \qquad \chi_S(x) = 0 \text{ if } x \notin S.$$
The set $\mathbb{R}^n$ can be endowed with the structure of a vector space of dimension $n$ over $\mathbb{R}$, and can be made into a Euclidean space by treating its standard basis as an orthonormal basis and defining the Euclidean norm:
$$\|x\| = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}.$$
By misuse of notation, $x$ will sometimes also designate the column vector of coordinates of $x \in \mathbb{R}^n$; if these coordinates are referred to an orthonormal basis of $\mathbb{R}^n$, then
$$\|x\| = (x^T x)^{1/2},$$
where $x^T$ denotes the transpose of $x$.

The distance between two points $x$ and $y$ defined by $d(x, y) = \|x - y\|$ allows the topological structure of $\mathbb{R}$ to be transferred to $\mathbb{R}^n$, making it a metric space. The basic notions in a metric space are those of neighbourhoods, of open and closed sets, of limit, of continuity, and of convergence (see Section 1.3.2.2.6.1).

A subset $S$ of $\mathbb{R}^n$ is bounded if $\sup \|x - y\| < \infty$ as $x$ and $y$ run through $S$; it is closed if it contains the limits of all convergent sequences with elements in $S$. A subset $K$ of $\mathbb{R}^n$ which is both bounded and closed has the property of being compact, i.e. that whenever $K$ has been covered by a family of open sets, a finite subfamily can be found which suffices to cover $K$. Compactness is a very useful topological property for the purpose of proof, since it allows one to reduce the task of examining infinitely many local situations to that of examining only finitely many of them.
Let $\varphi$ be a complex-valued function over $\mathbb{R}^n$. The support of $\varphi$, denoted Supp $\varphi$, is the smallest closed subset of $\mathbb{R}^n$ outside which $\varphi$ vanishes identically. If Supp $\varphi$ is compact, $\varphi$ is said to have compact support.

If $t \in \mathbb{R}^n$, the translate of $\varphi$ by $t$, denoted $\tau_t \varphi$, is defined by
$$(\tau_t \varphi)(x) = \varphi(x - t).$$
Its support is the geometric translate of that of $\varphi$:
$$\operatorname{Supp} \tau_t \varphi = \{x + t \mid x \in \operatorname{Supp} \varphi\}.$$

If $A$ is a non-singular linear transformation in $\mathbb{R}^n$, the image of $\varphi$ by $A$, denoted $A^{\#}\varphi$, is defined by
$$(A^{\#}\varphi)(x) = \varphi(A^{-1}x).$$
Its support is the geometric image of Supp $\varphi$ under $A$:
$$\operatorname{Supp} A^{\#}\varphi = A(\operatorname{Supp} \varphi).$$

If $S$ is a non-singular affine transformation in $\mathbb{R}^n$ of the form
$$S(x) = A(x) + b$$
with $A$ linear, the image of $\varphi$ by $S$ is $S^{\#}\varphi = \tau_b(A^{\#}\varphi)$, i.e.
$$(S^{\#}\varphi)(x) = \varphi[A^{-1}(x - b)].$$
Its support is the geometric image of Supp $\varphi$ under $S$:
$$\operatorname{Supp} S^{\#}\varphi = S(\operatorname{Supp} \varphi).$$

It may be helpful to visualize the process of forming the image of a function by a geometric operation as consisting of applying that operation to the graph of that function, which is equivalent to applying the inverse transformation to the coordinates $x$. This use of the inverse later affords the `left-representation property' [see Section 1.3.4.2.2.2(e)] when the geometric operations form a group, which is of fundamental importance in the treatment of crystallographic symmetry (Sections 1.3.4.2.2.4, 1.3.4.2.2.5).
When dealing with functions in $n$ variables and their derivatives, considerable abbreviation of notation can be obtained through the use of multi-indices.

A multi-index $p$ is an $n$-tuple of natural integers: $p = (p_1, \ldots, p_n)$. The length of $p$ is defined as
$$|p| = \sum_{i=1}^{n} p_i,$$
and the following abbreviations will be used:
$$x^p = x_1^{p_1} x_2^{p_2} \ldots x_n^{p_n}, \qquad p! = p_1! \, p_2! \ldots p_n!, \qquad \binom{p}{q} = \binom{p_1}{q_1} \ldots \binom{p_n}{q_n} \quad (q \leq p),$$
$$D^p f = \frac{\partial^{|p|} f}{\partial x_1^{p_1} \partial x_2^{p_2} \ldots \partial x_n^{p_n}}.$$

Leibniz's formula for the repeated differentiation of products then assumes the concise form
$$D^p(fg) = \sum_{q \leq p} \binom{p}{q} (D^{p-q} f)(D^q g),$$
while the Taylor expansion of $f$ to order $m$ about $x = a$ reads
$$f(x) = \sum_{|p| \leq m} \frac{1}{p!} [D^p f(a)] (x - a)^p + o(\|x - a\|^m).$$

In certain sections the notation $\nabla f$ will be used for the gradient vector of $f$, and the notation $(\nabla \nabla^T) f$ for the Hessian matrix of its mixed second-order partial derivatives:
$$\nabla = \begin{pmatrix} \partial/\partial x_1 \\ \vdots \\ \partial/\partial x_n \end{pmatrix}, \qquad (\nabla \nabla^T) f = \left(\frac{\partial^2 f}{\partial x_i \, \partial x_j}\right)_{i, j = 1, \ldots, n}.$$
The Riemann integral used in elementary calculus suffers from the drawback that vector spaces of Riemann-integrable functions over $\mathbb{R}^n$ are not complete for the topology of convergence in the mean: a Cauchy sequence of integrable functions may converge to a non-integrable function.

To obtain the property of completeness, which is fundamental in functional analysis, it was necessary to extend the notion of integral. This was accomplished by Lebesgue [see Berberian (1962), Dieudonné (1970), or Chapter 1 of Dym & McKean (1972) and the references therein, or Chapter 9 of Sprecher (1970)], and entailed identifying functions which differed only on a subset of zero measure in $\mathbb{R}^n$ (such functions are said to be equal `almost everywhere'). The vector spaces $L^p(\mathbb{R}^n)$ consisting of function classes $f$ modulo this identification for which
$$\|f\|_p = \left(\int_{\mathbb{R}^n} |f(x)|^p \, \mathrm{d}^n x\right)^{1/p} < \infty$$
are then complete for the topology induced by the norm $\|\cdot\|_p$: the limit of every Cauchy sequence of functions in $L^p$ is itself a function in $L^p$ (Riesz–Fischer theorem).
The space $L^1(\mathbb{R}^n)$ consists of those function classes $f$ such that
$$\|f\|_1 = \int_{\mathbb{R}^n} |f(x)| \, \mathrm{d}^n x < \infty,$$
which are called summable or absolutely integrable. The convolution product
$$(f * g)(x) = \int_{\mathbb{R}^n} f(y) g(x - y) \, \mathrm{d}^n y = \int_{\mathbb{R}^n} f(x - y) g(y) \, \mathrm{d}^n y = (g * f)(x)$$
is well defined; combined with the vector space structure of $L^1$, it makes $L^1$ into a (commutative) convolution algebra. However, this algebra has no unit element: there is no $f \in L^1$ such that $f * g = g$ for all $g \in L^1$; it has only approximate units, i.e. sequences $(f_\nu)$ such that $f_\nu * g$ tends to $g$ in the $L^1$ topology as $\nu \to \infty$. This is one of the starting points of distribution theory.
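The role of approximate units can be illustrated by a minimal numerical sketch in Python (assuming NumPy; the narrow normalized Gaussians and the grid sizes below are illustrative choices, not taken from the text): the $L^1$ distance $\|f_\nu * g - g\|_1$ shrinks as the approximate unit sharpens, although no single summable function can make it exactly zero for every $g$.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
g = np.exp(-np.abs(x))                                            # a summable function g in L^1(R)

for nu in [1, 4, 16, 64]:
    f_nu = np.sqrt(nu / (2 * np.pi)) * np.exp(-0.5 * nu * x**2)   # normalized Gaussian, unit integral
    conv = np.convolve(f_nu, g, mode="same") * dx                 # Riemann-sum approximation of f_nu * g
    print(nu, np.sum(np.abs(conv - g)) * dx)                      # approximate ||f_nu * g - g||_1
# The L^1 error decreases as nu grows: (f_nu) acts as an approximate unit,
# but no element of L^1 can serve as an exact unit for convolution.
```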
The space $L^2(\mathbb{R}^n)$ of square-integrable functions can be endowed with a scalar product
$$(f, g) = \int_{\mathbb{R}^n} \overline{f(x)} \, g(x) \, \mathrm{d}^n x,$$
which makes it into a Hilbert space. The Cauchy–Schwarz inequality
$$|(f, g)| \leq [(f, f)(g, g)]^{1/2}$$
generalizes the fact that the absolute value of the cosine of an angle is less than or equal to 1.

The space $L^\infty(\mathbb{R}^n)$ is defined as the space of functions $f$ such that
$$\|f\|_\infty = \operatorname*{ess\,sup}_{x \in \mathbb{R}^n} |f(x)| < \infty.$$
The quantity $\|f\|_\infty$ is called the `essential sup norm' of $f$, as it is the smallest positive number which $|f(x)|$ exceeds only on a subset of zero measure in $\mathbb{R}^n$. A function $f \in L^\infty$ is called essentially bounded.
Let $f \in L^1(\mathbb{R}^m)$, $g \in L^1(\mathbb{R}^n)$. Then the function
$$f \otimes g : (x, y) \mapsto f(x) g(y)$$
is called the tensor product of $f$ and $g$, and belongs to $L^1(\mathbb{R}^m \times \mathbb{R}^n)$. The finite linear combinations of functions of the form $f \otimes g$ span a subspace of $L^1(\mathbb{R}^m \times \mathbb{R}^n)$ called the tensor product of $L^1(\mathbb{R}^m)$ and $L^1(\mathbb{R}^n)$ and denoted $L^1(\mathbb{R}^m) \otimes L^1(\mathbb{R}^n)$.
The integration of a general function over $\mathbb{R}^m \times \mathbb{R}^n$ may be accomplished in two steps according to Fubini's theorem. Given $F \in L^1(\mathbb{R}^m \times \mathbb{R}^n)$, the functions
$$x \mapsto \int_{\mathbb{R}^n} F(x, y) \, \mathrm{d}^n y \qquad \text{and} \qquad y \mapsto \int_{\mathbb{R}^m} F(x, y) \, \mathrm{d}^m x$$
exist for almost all $x \in \mathbb{R}^m$ and almost all $y \in \mathbb{R}^n$, respectively, are integrable, and
$$\int_{\mathbb{R}^m \times \mathbb{R}^n} F(x, y) \, \mathrm{d}^m x \, \mathrm{d}^n y = \int_{\mathbb{R}^m} \left[\int_{\mathbb{R}^n} F(x, y) \, \mathrm{d}^n y\right] \mathrm{d}^m x = \int_{\mathbb{R}^n} \left[\int_{\mathbb{R}^m} F(x, y) \, \mathrm{d}^m x\right] \mathrm{d}^n y.$$
Conversely, if any one of the integrals
$$\int_{\mathbb{R}^m \times \mathbb{R}^n} |F| \, \mathrm{d}^m x \, \mathrm{d}^n y, \qquad \int_{\mathbb{R}^m} \left[\int_{\mathbb{R}^n} |F| \, \mathrm{d}^n y\right] \mathrm{d}^m x, \qquad \int_{\mathbb{R}^n} \left[\int_{\mathbb{R}^m} |F| \, \mathrm{d}^m x\right] \mathrm{d}^n y$$
is finite, then so are the other two, and the identity above holds. It is then (and only then) permissible to change the order of integrations.

Fubini's theorem is of fundamental importance in the study of tensor products and convolutions of distributions.
Geometric intuition, which often makes `obvious' the topological properties of the real line and of ordinary space, cannot be relied upon in the study of function spaces: the latter are infinite-dimensional, and several inequivalent notions of convergence may exist. A careful analysis of topological concepts and of their interrelationship is thus a necessary prerequisite to the study of these spaces. The reader may consult Dieudonné (1969, 1970), Friedman (1970), Trèves (1967) and Yosida (1965) for detailed expositions.
Most topological notions are first encountered in the setting of metric spaces. A metric space $E$ is a set equipped with a distance function $d$ from $E \times E$ to the non-negative reals which satisfies:
(i) $d(x, y) = d(y, x)$ for all $x, y \in E$ (symmetry);
(ii) $d(x, y) = 0$ if and only if $x = y$ (separation);
(iii) $d(x, z) \leq d(x, y) + d(y, z)$ for all $x, y, z \in E$ (triangular inequality).

By means of $d$, the following notions can be defined: open balls, neighbourhoods; open and closed sets, interior and closure; convergence of sequences, continuity of mappings; Cauchy sequences and completeness; compactness; connectedness. They suffice for the investigation of a great number of questions in analysis and geometry (see e.g. Dieudonné, 1969).
Many of these notions turn out to depend only on the properties of the collection $\mathcal{O}(E)$ of open subsets of $E$: two distance functions leading to the same $\mathcal{O}(E)$ lead to identical topological properties. An axiomatic reformulation of topological notions is thus possible: a topology in $E$ is a collection $\mathcal{O}(E)$ of subsets of $E$ which satisfy suitable axioms and are deemed open irrespective of the way they are obtained. From the practical standpoint, however, a topology which can be obtained from a distance function (called a metrizable topology) has the very useful property that the notions of closure, limit and continuity may be defined by means of sequences. For non-metrizable topologies, these notions are much more difficult to handle, requiring the use of `filters' instead of sequences.
In some spaces $E$, a topology may be most naturally defined by a family of pseudo-distances $(d_\alpha)_{\alpha \in A}$, where each $d_\alpha$ satisfies (i) and (iii) but not (ii). Such spaces are called uniformizable. If for every pair $(x, y) \in E \times E$ of distinct points there exists $\alpha \in A$ such that $d_\alpha(x, y) \neq 0$, then the separation property can be recovered. If furthermore a countable subfamily of the $d_\alpha$ suffices to define the topology of $E$, the latter can be shown to be metrizable, so that limiting processes in $E$ may be studied by means of sequences.
The function spaces $E$ of interest in Fourier analysis have an underlying vector space structure over the field $\mathbb{C}$ of complex numbers. A topology on $E$ is said to be compatible with a vector space structure on $E$ if vector addition [i.e. the map $(x, y) \mapsto x + y$] and scalar multiplication [i.e. the map $(\lambda, x) \mapsto \lambda x$] are both continuous; $E$ is then called a topological vector space. Such a topology may be defined by specifying a `fundamental system $S$ of neighbourhoods of $0$', which can then be translated by vector addition to construct neighbourhoods of other points $x \neq 0$.
A norm $\nu$ on a vector space $E$ is a non-negative real-valued function on $E$ such that
(i′) $\nu(\lambda x) = |\lambda| \nu(x)$ for all $\lambda \in \mathbb{C}$ and $x \in E$;
(ii′) $\nu(x) = 0$ if and only if $x = 0$;
(iii′) $\nu(x + y) \leq \nu(x) + \nu(y)$ for all $x, y \in E$.
Subsets of $E$ defined by conditions of the form $\nu(x) \leq r$ with $r > 0$ form a fundamental system of neighbourhoods of 0. The corresponding topology makes $E$ a normed space. This topology is metrizable, since it is equivalent to that derived from the translation-invariant distance $d(x, y) = \nu(x - y)$. Normed spaces which are complete, i.e. in which all Cauchy sequences converge, are called Banach spaces; they constitute the natural setting for the study of differential calculus.
A semi-norm $\sigma$ on a vector space $E$ is a positive real-valued function on $E$ which satisfies (i′) and (iii′) but not (ii′). Given a set $\Sigma$ of semi-norms on $E$ such that any pair $(x, y)$ in $E \times E$ is separated by at least one $\sigma \in \Sigma$, let $B$ be the set of those subsets $\Gamma_{\sigma, r}$ of $E$ defined by a condition of the form $\sigma(x) \leq r$ with $\sigma \in \Sigma$ and $r > 0$; and let $S$ be the set of finite intersections of elements of $B$. Then there exists a unique topology on $E$ for which $S$ is a fundamental system of neighbourhoods of 0. This topology is uniformizable since it is equivalent to that derived from the family of translation-invariant pseudo-distances $(x, y) \mapsto \sigma(x - y)$. It is metrizable if and only if it can be constructed by the above procedure with $\Sigma$ a countable set of semi-norms. If furthermore $E$ is complete, $E$ is called a Fréchet space.
If $E$ is a topological vector space over $\mathbb{C}$, its dual $E^*$ is the set of all linear mappings from $E$ to $\mathbb{C}$ (which are also called linear forms, or linear functionals, over $E$). The subspace of $E^*$ consisting of all linear forms which are continuous for the topology of $E$ is called the topological dual of $E$ and is denoted $E'$. If the topology on $E$ is metrizable, then the continuity of a linear form $T \in E'$ at $f \in E$ can be ascertained by means of sequences, i.e. by checking that the sequence $[T(f_j)]$ of complex numbers converges to $T(f)$ in $\mathbb{C}$ whenever the sequence $(f_j)$ converges to $f$ in $E$.
At the end of the 19th century, Heaviside proposed under the name of `operational calculus' a set of rules for solving a class of differential, partial differential and integral equations encountered in electrical engineering (today's `signal processing'). These rules worked remarkably well but were devoid of mathematical justification (see Whittaker, 1928). In 1926, Dirac introduced his famous δ-function [see Dirac (1958), pp. 58–61], which was found to be related to Heaviside's constructs. Other singular objects, together with procedures to handle them, had already appeared in several branches of analysis [Cauchy's `principal values'; Hadamard's `finite parts' (Hadamard, 1932, 1952); Riesz's regularization methods for certain divergent integrals (Riesz, 1938, 1949)] as well as in the theories of Fourier series and integrals (see e.g. Bochner, 1932, 1959). Their very definition often verged on violating the rigorous rules governing limiting processes in analysis, so that subsequent recourse to limiting processes could lead to erroneous results; ad hoc precautions thus had to be observed to avoid mistakes in handling these objects.
In 1945–1950, Laurent Schwartz proposed his theory of distributions (see Schwartz, 1966), which provided a unified and definitive treatment of all these questions, with a striking combination of rigour and simplicity. Schwartz's treatment of Dirac's δ-function illustrates his approach in a most direct fashion. Dirac's original definition reads:
(i) $\delta(x) = 0$ for $x \neq 0$;
(ii) $\int_{\mathbb{R}^n} \delta(x) \, \mathrm{d}^n x = 1$.
These two conditions are irreconcilable with Lebesgue's theory of integration: by (i), δ vanishes almost everywhere, so that its integral in (ii) must be 0, not 1.

A better definition consists in specifying that
(iii) $\int_{\mathbb{R}^n} \delta(x) \varphi(x) \, \mathrm{d}^n x = \varphi(0)$
for any function φ sufficiently well behaved near $x = 0$. This is related to the problem of finding a unit for convolution (Section 1.3.2.2.4). As will now be seen, this definition is still unsatisfactory. Let the sequence $(f_\nu)$ in $L^1(\mathbb{R}^n)$ be an approximate convolution unit, e.g.
$$f_\nu(x) = \left(\frac{\nu}{2\pi}\right)^{n/2} \exp\left(-\tfrac{1}{2}\nu \|x\|^2\right).$$
Then for any well behaved function φ the integrals
$$\int_{\mathbb{R}^n} f_\nu(x) \varphi(x) \, \mathrm{d}^n x$$
exist, and the sequence of their numerical values tends to $\varphi(0)$. It is tempting to combine this with (iii) to conclude that δ is the limit of the sequence $(f_\nu)$ as $\nu \to \infty$. However, $\lim_\nu f_\nu = 0$ almost everywhere in $\mathbb{R}^n$ and the crux of the problem is that
$$\varphi(0) = \lim_{\nu \to \infty} \int_{\mathbb{R}^n} f_\nu(x) \varphi(x) \, \mathrm{d}^n x \neq \int_{\mathbb{R}^n} \left[\lim_{\nu \to \infty} f_\nu(x)\right] \varphi(x) \, \mathrm{d}^n x = 0,$$
because the sequence $(f_\nu)$ does not satisfy the hypotheses of Lebesgue's dominated convergence theorem.

Schwartz's solution to this problem is deceptively simple: the regular behaviour one is trying to capture is an attribute not of the sequence of functions $(f_\nu)$, but of the sequence of continuous linear functionals
$$T_\nu : \varphi \mapsto \int_{\mathbb{R}^n} f_\nu(x) \varphi(x) \, \mathrm{d}^n x,$$
which has as a limit the continuous functional
$$T : \varphi \mapsto \varphi(0).$$
It is the latter functional which constitutes the proper definition of δ. The previous paradoxes arose because one insisted on writing down the simple linear operation $T$ in terms of an integral.
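This distinction can be made concrete by a minimal numerical sketch in Python (assuming NumPy; the Gaussian family, test function and grid are illustrative choices): the values $T_\nu(\varphi)$ converge to $\varphi(0)$ even though the functions $f_\nu$ themselves tend to 0 at every fixed point away from the origin.

```python
import numpy as np

x = np.linspace(-5, 5, 10001)
dx = x[1] - x[0]
phi = np.cos(x) * np.exp(-x**2 / 8)             # a smooth, well behaved test function; phi(0) = 1

for nu in [1, 10, 100, 1000]:
    f_nu = np.sqrt(nu / (2 * np.pi)) * np.exp(-0.5 * nu * x**2)
    T_nu_phi = np.sum(f_nu * phi) * dx           # the functional T_nu applied to phi
    at_half = f_nu[np.argmin(np.abs(x - 0.5))]   # pointwise value of f_nu away from the origin
    print(f"nu={nu:5d}  T_nu(phi)={T_nu_phi:.5f}  f_nu(0.5)={at_half:.2e}")
# T_nu(phi) tends to phi(0) = 1, while f_nu(x) tends to 0 for every fixed x != 0:
# the limiting behaviour belongs to the functionals T_nu, not to the functions f_nu.
```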
The essence of Schwartz's theory of distributions is thus that, rather than try to define and handle `generalized functions' via sequences such as $(f_\nu)$ [an approach adopted e.g. by Lighthill (1958) and Erdélyi (1962)], one should instead look at them as continuous linear functionals over spaces of well behaved functions.
There are many books on distribution theory and its applications. The reader may consult in particular Schwartz (1965, 1966), Gel'fand & Shilov (1964), Bremermann (1965), Trèves (1967), Challifour (1972), Friedlander (1982), and the relevant chapters of Hörmander (1963) and Yosida (1965). Schwartz (1965) is especially recommended as an introduction.
The guiding principle which leads to requiring that the functions φ above (traditionally called `test functions') should be well behaved is that correspondingly `wilder' behaviour can then be accommodated in the limiting behaviour of the $f_\nu$ while still keeping the integrals $\int_{\mathbb{R}^n} f_\nu(x) \varphi(x) \, \mathrm{d}^n x$ under control. Thus the test functions are required to be infinitely differentiable and, where needed, of compact support.

To ensure further the continuity of functionals such as $T_\nu$ with respect to the test function φ as the $f_\nu$ go increasingly wild, very strong control will have to be exercised in the way in which a sequence $(\varphi_j)$ of test functions will be said to converge towards a limiting φ: conditions will have to be imposed not only on the values of the functions $\varphi_j$, but also on those of all their derivatives. Hence, defining a strong enough topology on the space of test functions φ is an essential prerequisite to the development of a satisfactory theory of distributions.
With this rationale in mind, the following function spaces will be defined for any open subset Ω of $\mathbb{R}^n$ (which may be the whole of $\mathbb{R}^n$):
(a) $\mathcal{E}(\Omega)$ is the space of complex-valued functions over Ω which are indefinitely differentiable;
(b) $\mathcal{D}(\Omega)$ is the subspace of $\mathcal{E}(\Omega)$ consisting of functions with (unspecified) compact support contained in Ω;
(c) $\mathcal{D}_K(\Omega)$ is the subspace of $\mathcal{D}(\Omega)$ consisting of functions whose (compact) support is contained within a fixed compact subset $K$ of Ω.
When Ω is unambiguously defined by the context, we will simply write $\mathcal{E}$, $\mathcal{D}$, $\mathcal{D}_K$.

It sometimes suffices to require the existence of continuous derivatives only up to finite order m inclusive. The corresponding spaces are then denoted $\mathcal{E}^{(m)}$, $\mathcal{D}^{(m)}$, $\mathcal{D}_K^{(m)}$, with the convention that if $m = 0$, only continuity is required.
The topologies on these spaces constitute the most important ingredients of distribution theory, and will be outlined in some detail.
It is defined by the family of semi-norms
$$\varphi \in \mathcal{E}(\Omega) \mapsto \sigma_{p, K}(\varphi) = \sup_{x \in K} |D^p \varphi(x)|,$$
where p is a multi-index and K a compact subset of Ω. A fundamental system S of neighbourhoods of the origin in $\mathcal{E}(\Omega)$ is given by subsets of $\mathcal{E}(\Omega)$ of the form
$$V(m, \varepsilon, K) = \{\varphi \in \mathcal{E}(\Omega) \mid |p| \leq m \Rightarrow \sigma_{p, K}(\varphi) < \varepsilon\}$$
for all natural integers m, positive real ɛ, and compact subsets K of Ω. Since a countable family of compact subsets K suffices to cover Ω, and since restricted values of ɛ of the form $\varepsilon = 1/N$ lead to the same topology, S is equivalent to a countable system of neighbourhoods and hence $\mathcal{E}(\Omega)$ is metrizable.

Convergence in $\mathcal{E}(\Omega)$ may thus be defined by means of sequences. A sequence $(\varphi_\nu)$ in $\mathcal{E}(\Omega)$ will be said to converge to 0 if for any given $V(m, \varepsilon, K)$ there exists $\nu_0$ such that $\varphi_\nu \in V(m, \varepsilon, K)$ whenever $\nu > \nu_0$; in other words, if the $\varphi_\nu$ and all their derivatives $D^p \varphi_\nu$ converge to 0 uniformly on any given compact K in Ω.
It is defined by the family of semi-norms
$$\varphi \in \mathcal{D}_K(\Omega) \mapsto \sigma_p(\varphi) = \sup_{x \in K} |D^p \varphi(x)|,$$
where K is now fixed. The fundamental system S of neighbourhoods of the origin in $\mathcal{D}_K(\Omega)$ is given by sets of the form
$$V(m, \varepsilon) = \{\varphi \in \mathcal{D}_K(\Omega) \mid |p| \leq m \Rightarrow \sigma_p(\varphi) < \varepsilon\}.$$
It is equivalent to the countable subsystem of the $V(m, 1/N)$, hence $\mathcal{D}_K(\Omega)$ is metrizable.

Convergence in $\mathcal{D}_K(\Omega)$ may thus be defined by means of sequences. A sequence $(\varphi_\nu)$ in $\mathcal{D}_K(\Omega)$ will be said to converge to 0 if for any given $V(m, \varepsilon)$ there exists $\nu_0$ such that $\varphi_\nu \in V(m, \varepsilon)$ whenever $\nu > \nu_0$; in other words, if the $\varphi_\nu$ and all their derivatives $D^p \varphi_\nu$ converge to 0 uniformly in K.
It is defined by the fundamental system of neighbourhoods of the origin consisting of sets of the form $V[(m), (\varepsilon)]$, where (m) is an increasing sequence $(m_\nu)$ of integers tending to $+\infty$ and (ɛ) is a decreasing sequence $(\varepsilon_\nu)$ of positive reals tending to 0, as $\nu \to \infty$.

This topology is not metrizable, because the sets of sequences (m) and (ɛ) are essentially uncountable. It can, however, be shown to be the inductive limit of the topology of the subspaces $\mathcal{D}_K(\Omega)$, in the following sense: V is a neighbourhood of the origin in $\mathcal{D}(\Omega)$ if and only if its intersection with $\mathcal{D}_K(\Omega)$ is a neighbourhood of the origin in $\mathcal{D}_K(\Omega)$ for any given compact K in Ω.

A sequence $(\varphi_\nu)$ in $\mathcal{D}(\Omega)$ will thus be said to converge to 0 in $\mathcal{D}(\Omega)$ if all the $\varphi_\nu$ belong to some $\mathcal{D}_K(\Omega)$ (with K a compact subset of Ω independent of ν) and if $(\varphi_\nu)$ converges to 0 in $\mathcal{D}_K(\Omega)$.

As a result, a complex-valued functional T on $\mathcal{D}(\Omega)$ will be said to be continuous for the topology of $\mathcal{D}(\Omega)$ if and only if, for any given compact K in Ω, its restriction to $\mathcal{D}_K(\Omega)$ is continuous for the topology of $\mathcal{D}_K(\Omega)$, i.e. maps convergent sequences in $\mathcal{D}_K(\Omega)$ to convergent sequences in $\mathbb{C}$.

This property of $\mathcal{D}(\Omega)$, i.e. having a non-metrizable topology which is the inductive limit of metrizable topologies in its subspaces $\mathcal{D}_K(\Omega)$, conditions the whole structure of distribution theory and dictates that of many of its proofs.
A distribution T on Ω is a linear form over $\mathcal{D}(\Omega)$, i.e. a map
$$T : \varphi \mapsto \langle T, \varphi \rangle$$
which associates linearly a complex number $\langle T, \varphi \rangle$ to any $\varphi \in \mathcal{D}(\Omega)$, and which is continuous for the topology of that space. In the terminology of Section 1.3.2.2.6.2, T is an element of $\mathcal{D}'(\Omega)$, the topological dual of $\mathcal{D}(\Omega)$.

Continuity over $\mathcal{D}(\Omega)$ is equivalent to continuity over $\mathcal{D}_K(\Omega)$ for all compact K contained in Ω, and hence to the condition that for any sequence $(\varphi_\nu)$ in $\mathcal{D}(\Omega)$ such that
(i) Supp $\varphi_\nu$ is contained in some compact K independent of ν,
(ii) the sequences $(|D^p \varphi_\nu|)$ converge uniformly to 0 on K for all multi-indices p,
then the sequence of complex numbers $\langle T, \varphi_\nu \rangle$ converges to 0 in $\mathbb{C}$.

If the continuity of a distribution T requires (ii) for $|p| \leq m$ only, T may be defined over $\mathcal{D}^{(m)}(\Omega)$ and thus $T \in \mathcal{D}'^{(m)}(\Omega)$; T is said to be a distribution of finite order m. In particular, for $m = 0$, $\mathcal{D}^{(0)}(\Omega)$ is the space of continuous functions with compact support, and a distribution $T \in \mathcal{D}'^{(0)}(\Omega)$ is a (Radon) measure as used in the theory of integration. Thus measures are particular cases of distributions.

Generally speaking, the larger a space of test functions, the smaller its topological dual:
$$m < m' \Rightarrow \mathcal{D}'^{(m)}(\Omega) \subset \mathcal{D}'^{(m')}(\Omega).$$
This clearly results from the observation that if the φ's are allowed to be less regular, then less wildness can be accommodated in T if the continuity of the map $\varphi \mapsto \langle T, \varphi \rangle$ with respect to φ is to be preserved.
Let f be a complex-valued function over Ω such that
$$\int_K |f(x)| \, \mathrm{d}^n x$$
exists for any given compact K in Ω; f is then called locally integrable.

The linear mapping from $\mathcal{D}(\Omega)$ to $\mathbb{C}$ defined by
$$\varphi \mapsto \int_\Omega f(x) \varphi(x) \, \mathrm{d}^n x$$
may then be shown to be continuous over $\mathcal{D}(\Omega)$. It thus defines a distribution $T_f \in \mathcal{D}'(\Omega)$:
$$\langle T_f, \varphi \rangle = \int_\Omega f(x) \varphi(x) \, \mathrm{d}^n x.$$
As the continuity of $\varphi \mapsto \langle T_f, \varphi \rangle$ only requires that $\varphi \in \mathcal{D}^{(0)}(\Omega)$, $T_f$ is actually a Radon measure.

It can be shown that two locally integrable functions f and g define the same distribution, i.e.
$$\langle T_f, \varphi \rangle = \langle T_g, \varphi \rangle \quad \text{for all } \varphi \in \mathcal{D}(\Omega),$$
if and only if they are equal almost everywhere. The classes of locally integrable functions modulo this equivalence form a vector space denoted $L^1_{\mathrm{loc}}(\Omega)$; each element of $L^1_{\mathrm{loc}}(\Omega)$ may therefore be identified with the distribution $T_f$ defined by any one of its representatives f.
A distribution $T \in \mathcal{D}'(\Omega)$ is said to vanish on an open subset ω of Ω if it vanishes on all functions in $\mathcal{D}(\omega)$, i.e. if $\langle T, \varphi \rangle = 0$ whenever $\varphi \in \mathcal{D}(\omega)$.

The support of a distribution T, denoted Supp T, is then defined as the complement of the set-theoretic union of those open subsets ω on which T vanishes; or equivalently as the smallest closed subset of Ω outside which T vanishes.

When $T = T_f$ for $f \in L^1_{\mathrm{loc}}(\Omega)$, then Supp $T_f$ = Supp $f$, so that the two notions coincide. Clearly, if Supp T and Supp φ are disjoint subsets of Ω, then $\langle T, \varphi \rangle = 0$.

It can be shown that any distribution $T \in \mathcal{D}'$ with compact support may be extended from $\mathcal{D}$ to $\mathcal{E}$ while remaining continuous, so that $T \in \mathcal{E}'$; and that conversely, if $T \in \mathcal{E}'$, then its restriction T to $\mathcal{D}$ is a distribution with compact support. Thus, the topological dual $\mathcal{E}'$ of $\mathcal{E}$ consists of those distributions in $\mathcal{D}'$ which have compact support. This is intuitively clear since, if the condition of having compact support is fulfilled by T, it needs no longer be required of φ, which may then roam through $\mathcal{E}$ rather than $\mathcal{D}$.
A sequence $(T_j)$ of distributions will be said to converge in $\mathcal{D}'$ to a distribution T as $j \to \infty$ if, for any given $\varphi \in \mathcal{D}$, the sequence of complex numbers $(\langle T_j, \varphi \rangle)$ converges in $\mathbb{C}$ to the complex number $\langle T, \varphi \rangle$.

A series $\sum_{j} T_j$ of distributions will be said to converge in $\mathcal{D}'$ and to have distribution S as its sum if the sequence of partial sums $S_k = \sum_{j=0}^{k} T_j$ converges to S.

These definitions of convergence in $\mathcal{D}'$ assume that the limits T and S are known in advance, and are distributions. This raises the question of the completeness of $\mathcal{D}'$: if a sequence $(T_j)$ in $\mathcal{D}'$ is such that the sequence $(\langle T_j, \varphi \rangle)$ has a limit in $\mathbb{C}$ for all $\varphi \in \mathcal{D}$, does the map
$$\varphi \mapsto \lim_{j \to \infty} \langle T_j, \varphi \rangle$$
define a distribution $T \in \mathcal{D}'$? In other words, does the limiting process preserve continuity with respect to φ? It is a remarkable theorem that, because of the strong topology on $\mathcal{D}$, this is actually the case. An analogous statement holds for series. This notion of convergence does not coincide with any of the classical notions used for ordinary functions: for example, the sequence $(\varphi_\nu)$ with $\varphi_\nu(x) = \cos \nu x$ converges to 0 in $\mathcal{D}'(\mathbb{R})$, but fails to do so by any of the standard criteria.
An example of convergent sequences of distributions is provided by sequences which converge to δ. If $(f_\nu)$ is a sequence of locally summable functions on $\mathbb{R}^n$ such that
(i) $\int_{\|x\| < b} f_\nu(x) \, \mathrm{d}^n x \to 1$ as $\nu \to \infty$ for all $b > 0$;
(ii) $\int_{a \leq \|x\| \leq 1/a} |f_\nu(x)| \, \mathrm{d}^n x \to 0$ as $\nu \to \infty$ for all $0 < a < 1$;
(iii) there exist $d > 0$ and $M > 0$ such that $\int_{\|x\| < d} |f_\nu(x)| \, \mathrm{d}^n x < M$ for all ν;
then the sequence $(T_{f_\nu})$ of distributions converges to δ in $\mathcal{D}'(\mathbb{R}^n)$.
As a general rule, the definitions are chosen so that the operations coincide with those on functions whenever a distribution is associated to a function.
Most definitions consist in transferring to a distribution T an operation which is well defined on $\varphi \in \mathcal{D}$ by `transposing' it in the duality product $\langle T, \varphi \rangle$; this procedure will map T to a new distribution provided the original operation maps $\mathcal{D}$ continuously into itself.
The reverse operation from differentiation, namely calculating the `indefinite integral' of a distribution S, consists in finding a distribution T such that $T' = S$.

For all $\chi \in \mathcal{D}$ such that $\chi = \psi'$ with $\psi \in \mathcal{D}$, we must have
$$\langle T, \chi \rangle = -\langle S, \psi \rangle.$$
This condition defines T in a `hyperplane' $\mathcal{H}$ of $\mathcal{D}$, whose equation
$$\langle 1, \chi \rangle \equiv \int \chi(x) \, \mathrm{d}x = 0$$
reflects the fact that ψ has compact support.

To specify T in the whole of $\mathcal{D}$, it suffices to specify the value of $\langle T, \varphi_0 \rangle$ where $\varphi_0 \in \mathcal{D}$ is such that $\langle 1, \varphi_0 \rangle = 1$: then any $\varphi \in \mathcal{D}$ may be written uniquely as
$$\varphi = \lambda \varphi_0 + \psi'$$
with $\lambda = \langle 1, \varphi \rangle$, and T is defined by
$$\langle T, \varphi \rangle = \lambda \langle T, \varphi_0 \rangle - \langle S, \psi \rangle.$$
The freedom in the choice of $\varphi_0$ means that T is defined up to an additive constant.
The product $\alpha T$ of a distribution T on $\mathbb{R}^n$ by a function α over $\mathbb{R}^n$ will be defined by transposition:
$$\langle \alpha T, \varphi \rangle = \langle T, \alpha \varphi \rangle \quad \text{for all } \varphi \in \mathcal{D}.$$
In order that $\alpha T$ be a distribution, the mapping $\varphi \mapsto \alpha \varphi$ must send $\mathcal{D}$ continuously into itself; hence the multipliers α must be infinitely differentiable. The product of two general distributions cannot be defined. The need for a careful treatment of multipliers of distributions will become clear when it is later shown (Section 1.3.2.5.8) that the Fourier transformation turns convolutions into multiplications and vice versa.

If T is a distribution of order m, then α needs only have continuous derivatives up to order m. For instance, δ is a distribution of order zero, and
$$\alpha \delta = \alpha(0) \delta$$
is a distribution provided α is continuous; this relation is of fundamental importance in the theory of sampling and of the properties of the Fourier transformation related to sampling (Sections 1.3.2.6.4, 1.3.2.6.6). More generally, $D^p \delta$ is a distribution of order $|p|$, and the following formula holds for all α with continuous derivatives up to order $|p|$:
$$\alpha (D^p \delta) = \sum_{q \leq p} (-1)^{|p - q|} \binom{p}{q} (D^{p - q}\alpha)(0) \, D^q \delta.$$

The derivative of a product is easily shown to be
$$D_i(\alpha T) = (D_i \alpha) T + \alpha (D_i T),$$
and generally for any multi-index p
$$D^p(\alpha T) = \sum_{q \leq p} \binom{p}{q} (D^{p - q}\alpha)(D^q T).$$
Given a distribution S on $\mathbb{R}^n$ and an infinitely differentiable multiplier function α, the division problem consists in finding a distribution T such that $\alpha T = S$.

If α never vanishes, $T = S/\alpha$ is the unique answer. If $n = 1$, and if α has only isolated zeros of finite order, it can be reduced to a collection of cases where the multiplier is $x^m$, for which the general solution can be shown to be of the form
$$T = U + \sum_{i=0}^{m-1} c_i D^i \delta,$$
where U is a particular solution of the division problem $x^m U = S$ and the $c_i$ are arbitrary constants.

In dimension $n > 1$, the problem is much more difficult, but is of fundamental importance in the theory of linear partial differential equations, since the Fourier transformation turns the problem of solving these into a division problem for distributions [see Hörmander (1963)].
Let σ be a smooth non-singular change of variables in $\mathbb{R}^n$, i.e. an infinitely differentiable mapping from an open subset Ω of $\mathbb{R}^n$ to Ω′ in $\mathbb{R}^n$, whose Jacobian
$$J(\sigma) = \det\left[\frac{\partial \sigma(x)}{\partial x}\right]$$
vanishes nowhere in Ω. By the implicit function theorem, the inverse mapping $\sigma^{-1}$ from Ω′ to Ω is well defined.

If f is a locally summable function on Ω, then the function $\sigma^{\#}f$ defined by
$$(\sigma^{\#}f)(x) = f[\sigma^{-1}(x)]$$
is a locally summable function on Ω′, and for any $\varphi \in \mathcal{D}(\Omega')$ we may write:
$$\int_{\Omega'} (\sigma^{\#}f)(x) \varphi(x) \, \mathrm{d}^n x = \int_{\Omega} f(y) \varphi[\sigma(y)] \, |J(\sigma)| \, \mathrm{d}^n y$$
by the substitution $x = \sigma(y)$. In terms of the associated distributions
$$\langle T_{\sigma^{\#}f}, \varphi \rangle = \langle T_f, |J(\sigma)| \, (\sigma^{-1})^{\#}\varphi \rangle.$$
This operation can be extended to an arbitrary distribution T by defining its image $\sigma^{\#}T$ under coordinate transformation σ through
$$\langle \sigma^{\#}T, \varphi \rangle = \langle T, |J(\sigma)| \, (\sigma^{-1})^{\#}\varphi \rangle,$$
which is well defined provided that σ is proper, i.e. that $\sigma^{-1}(K)$ is compact whenever K is compact.

For instance, if $\sigma : x \mapsto x + a$ is a translation by a vector a in $\mathbb{R}^n$, then $|J(\sigma)| = 1$; $\sigma^{\#}$ is denoted by $\tau_a$, and the translate $\tau_a T$ of a distribution T is defined by
$$\langle \tau_a T, \varphi \rangle = \langle T, \tau_{-a}\varphi \rangle.$$

Let $A : x \mapsto Ax$ be a linear transformation defined by a non-singular matrix A. Then $J(A) = \det A$, and
$$\langle A^{\#}T, \varphi \rangle = |\det A| \, \langle T, (A^{-1})^{\#}\varphi \rangle.$$
This formula will be shown later (Sections 1.3.2.6.5, 1.3.4.2.1.1) to be the basis for the definition of the reciprocal lattice.
In particular, if $A = -I$, where I is the identity matrix, A is an inversion through a centre of symmetry at the origin, and denoting $A^{\#}T$ by $\breve{T}$ we have:
$$\langle \breve{T}, \varphi \rangle = \langle T, \breve{\varphi} \rangle.$$
T is called an even distribution if $\breve{T} = T$, an odd distribution if $\breve{T} = -T$.

If $A = \lambda I$ with $\lambda > 0$, A is called a dilation and
$$\langle A^{\#}T, \varphi \rangle = \lambda^n \langle T, (A^{-1})^{\#}\varphi \rangle.$$
Writing symbolically δ as $\delta(x)$ and $A^{\#}\delta$ as $\delta(x/\lambda)$, we have:
$$\delta(x/\lambda) = \lambda^n \delta(x).$$
If $n = 1$ and f is a function with isolated simple zeros $x_j$, then in the same symbolic notation
$$\delta[f(x)] = \sum_j \frac{1}{|f'(x_j)|} \, \delta(x - x_j),$$
where each $1/|f'(x_j)|$ is analogous to a `Lorentz factor' at zero $x_j$.
The purpose of this construction is to extend Fubini's theorem to distributions. Following Section 1.3.2.2.5, we may define the tensor product $L^1_{\mathrm{loc}}(\mathbb{R}^m) \otimes L^1_{\mathrm{loc}}(\mathbb{R}^n)$ as the vector space of finite linear combinations of functions of the form
$$f \otimes g : (x, y) \mapsto f(x) g(y),$$
where $f \in L^1_{\mathrm{loc}}(\mathbb{R}^m)$ and $g \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$.

Let $S_x$ and $T_y$ denote the distributions associated to f and g, respectively, the subscripts x and y acting as mnemonics for $\mathbb{R}^m$ and $\mathbb{R}^n$. It follows from Fubini's theorem (Section 1.3.2.2.5) that $f \otimes g \in L^1_{\mathrm{loc}}(\mathbb{R}^m \times \mathbb{R}^n)$, and hence defines a distribution over $\mathbb{R}^m \times \mathbb{R}^n$; the rearrangement of integral signs gives
$$\langle S_x \otimes T_y, \varphi_{x, y} \rangle = \langle S_x, \langle T_y, \varphi_{x, y} \rangle \rangle = \langle T_y, \langle S_x, \varphi_{x, y} \rangle \rangle$$
for all $\varphi_{x, y} \in \mathcal{D}(\mathbb{R}^m \times \mathbb{R}^n)$. In particular, if $\varphi(x, y) = u(x) v(y)$ with $u \in \mathcal{D}(\mathbb{R}^m)$ and $v \in \mathcal{D}(\mathbb{R}^n)$, then
$$\langle S \otimes T, u \otimes v \rangle = \langle S, u \rangle \langle T, v \rangle.$$

This construction can be extended to general distributions $S \in \mathcal{D}'(\mathbb{R}^m)$ and $T \in \mathcal{D}'(\mathbb{R}^n)$. Given any test function $\varphi \in \mathcal{D}(\mathbb{R}^m \times \mathbb{R}^n)$, let $\varphi_x$ denote the map $y \mapsto \varphi(x, y)$; let $\varphi_y$ denote the map $x \mapsto \varphi(x, y)$; and define the two functions $\theta(x) = \langle T, \varphi_x \rangle$ and $\omega(y) = \langle S, \varphi_y \rangle$. Then, by the lemma on differentiation under the $\langle , \rangle$ sign of Section 1.3.2.3.9.1, $\theta \in \mathcal{D}(\mathbb{R}^m)$, $\omega \in \mathcal{D}(\mathbb{R}^n)$, and there exists a unique distribution $S \otimes T$ such that
$$\langle S \otimes T, \varphi \rangle = \langle S, \theta \rangle = \langle T, \omega \rangle.$$
$S \otimes T$ is called the tensor product of S and T.

With the mnemonic introduced above, this definition reads identically to that given above for distributions associated to locally integrable functions:
$$\langle S_x \otimes T_y, \varphi_{x, y} \rangle = \langle S_x, \langle T_y, \varphi_{x, y} \rangle \rangle = \langle T_y, \langle S_x, \varphi_{x, y} \rangle \rangle.$$

The tensor product of distributions is associative:
$$(R \otimes S) \otimes T = R \otimes (S \otimes T).$$
Derivatives may be calculated by
$$D^p_x D^q_y (S_x \otimes T_y) = (D^p_x S_x) \otimes (D^q_y T_y).$$
The support of a tensor product is the Cartesian product of the supports of the two factors.
The convolution $f * g$ of two functions f and g on $\mathbb{R}^n$ is defined by
$$(f * g)(x) = \int_{\mathbb{R}^n} f(y) g(x - y) \, \mathrm{d}^n y = \int_{\mathbb{R}^n} f(x - y) g(y) \, \mathrm{d}^n y$$
whenever the integral exists. This is the case when f and g are both in $L^1(\mathbb{R}^n)$; then $f * g$ is also in $L^1(\mathbb{R}^n)$. Let S, T and W denote the distributions associated to f, g and $f * g$ respectively: a change of variable immediately shows that for any $\varphi \in \mathcal{D}(\mathbb{R}^n)$,
$$\langle W, \varphi \rangle = \int_{\mathbb{R}^n \times \mathbb{R}^n} f(x) g(y) \varphi(x + y) \, \mathrm{d}^n x \, \mathrm{d}^n y.$$
Introducing the map σ from $\mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}^n$ defined by $\sigma(x, y) = x + y$, the latter expression may be written:
$$\langle S_x \otimes T_y, \varphi \circ \sigma \rangle$$
(where $\circ$ denotes the composition of mappings) or by a slight abuse of notation:
$$\langle S_x \otimes T_y, \varphi(x + y) \rangle.$$

A difficulty arises in extending this definition to general distributions S and T because the mapping σ is not proper: if K is compact in $\mathbb{R}^n$, then $\sigma^{-1}(K)$ is a cylinder with base K and generator the `second bisector' $x + y = 0$ in $\mathbb{R}^n \times \mathbb{R}^n$. However, $\langle S_x \otimes T_y, \varphi \circ \sigma \rangle$ is defined whenever the intersection between Supp $(S \otimes T)$ and $\sigma^{-1}(\operatorname{Supp} \varphi)$ is compact.

We may therefore define the convolution $S * T$ of two distributions S and T on $\mathbb{R}^n$ by
$$\langle S * T, \varphi \rangle = \langle S \otimes T, \varphi \circ \sigma \rangle = \langle S_x \otimes T_y, \varphi(x + y) \rangle$$
whenever the following support condition is fulfilled:
`the set $\{(x, y) \mid x \in \operatorname{Supp} S, \; y \in \operatorname{Supp} T, \; x + y \in K\}$ is compact in $\mathbb{R}^n \times \mathbb{R}^n$ for all K compact in $\mathbb{R}^n$'.
The latter condition is met, in particular, if S or T has compact support. The support of $S * T$ is easily seen to be contained in the closure of the vector sum
$$\{x + y \mid x \in \operatorname{Supp} S, \; y \in \operatorname{Supp} T\}.$$

Convolution by a fixed distribution S is a continuous operation for the topology on $\mathcal{D}'$: it maps convergent sequences $(T_j)$ to convergent sequences $(S * T_j)$. Convolution is commutative: $S * T = T * S$.

The convolution of p distributions $T_1, \ldots, T_p$ with supports $A_1, \ldots, A_p$ can be defined by
$$\langle T_1 * \ldots * T_p, \varphi \rangle = \langle (T_1)_{x_1} \otimes \ldots \otimes (T_p)_{x_p}, \varphi(x_1 + \ldots + x_p) \rangle$$
whenever the following generalized support condition:
`the set $\{(x_1, \ldots, x_p) \mid x_1 \in A_1, \ldots, x_p \in A_p, \; x_1 + \ldots + x_p \in K\}$ is compact in $(\mathbb{R}^n)^p$ for all K compact in $\mathbb{R}^n$'
is satisfied. It is then associative. Interesting examples of associativity failure, which can be traced back to violations of the support condition, may be found in Bracewell (1986, pp. 436–437).
It follows from previous definitions that, for all distributions , the following identities hold:
The latter property is frequently used for the purpose of regularization: if T is a distribution, α an infinitely differentiable function, and at least one of the two has compact support, then $T * \alpha$ is an infinitely differentiable ordinary function. Since sequences $(\alpha_\nu)$ of such functions α can be constructed which have compact support and converge to δ, it follows that any distribution T can be obtained as the limit of infinitely differentiable functions $T * \alpha_\nu$. In topological jargon: $\mathcal{D}(\mathbb{R}^n)$ is `everywhere dense' in $\mathcal{D}'(\mathbb{R}^n)$. A standard function in $\mathcal{D}$ which is often used for such proofs is defined as follows: put
$$\theta(x) = \frac{1}{A} \exp\left(-\frac{1}{1 - \|x\|^2}\right) \text{ for } \|x\| \leq 1, \qquad \theta(x) = 0 \text{ for } \|x\| \geq 1,$$
with
$$A = \int_{\|x\| \leq 1} \exp\left(-\frac{1}{1 - \|x\|^2}\right) \mathrm{d}^n x$$
(so that θ is in $\mathcal{D}$ and is normalized), and put
$$\theta_\varepsilon(x) = \frac{1}{\varepsilon^n} \theta\left(\frac{x}{\varepsilon}\right) \quad \text{for } \varepsilon > 0.$$

Another related result, also proved by convolution, is the structure theorem: the restriction of a distribution $T \in \mathcal{D}'(\mathbb{R}^n)$ to a bounded open set Ω in $\mathbb{R}^n$ is a derivative of finite order of a continuous function.
Properties (i) to (iv) are the basis of the symbolic or operational calculus (see Carslaw & Jaeger, 1948; Van der Pol & Bremmer, 1955; Churchill, 1958; Erdélyi, 1962; Moore, 1971) for solving integro-differential equations with constant coefficients by turning them into convolution equations, then using factorization methods for convolution algebras (Schwartz, 1965).
Given a complex-valued function f on $\mathbb{R}^n$ subject to suitable regularity conditions, its Fourier transform $\mathcal{F}[f]$ and Fourier cotransform $\bar{\mathcal{F}}[f]$ are defined as follows:
$$\mathcal{F}[f](\xi) = \int_{\mathbb{R}^n} f(x) \exp(-2\pi i \xi \cdot x) \, \mathrm{d}^n x,$$
$$\bar{\mathcal{F}}[f](\xi) = \int_{\mathbb{R}^n} f(x) \exp(+2\pi i \xi \cdot x) \, \mathrm{d}^n x,$$
where $\xi \cdot x = \sum_{i=1}^{n} \xi_i x_i$ is the ordinary scalar product. The terminology and sign conventions given above are the standard ones in mathematics; those used in crystallography are slightly different (see Section 1.3.4.2.1.1). These transforms enjoy a number of remarkable properties, whose natural settings entail different regularity assumptions on f: for instance, properties relating to convolution are best treated in $L^1(\mathbb{R}^n)$, while Parseval's theorem requires the Hilbert space structure of $L^2(\mathbb{R}^n)$. After a brief review of these classical properties, the Fourier transformation will be examined in a space $\mathcal{S}(\mathbb{R}^n)$ particularly well suited to accommodating the full range of its properties, which will later serve as a space of test functions to extend the Fourier transformation to distributions.
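The definition can be checked directly by a minimal numerical sketch in Python (assuming NumPy; the grid and the test function are illustrative choices): a Riemann sum approximates the integral, and for the indicator function of $[-a, a]$ the result can be compared with the closed-form transform $\sin(2\pi a\xi)/(\pi\xi)$.

```python
import numpy as np

def fourier_transform(f, x, xi):
    """Riemann-sum approximation of F[f](xi) = integral of f(x) exp(-2*pi*i*xi*x) dx."""
    dx = x[1] - x[0]
    return np.array([np.sum(f * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

a = 1.0
x = np.linspace(-8, 8, 8001)
f = np.where(np.abs(x) <= a, 1.0, 0.0)            # indicator function of [-a, a]

xi = np.linspace(-2.3, 2.3, 11)
numeric = fourier_transform(f, x, xi)
exact = 2 * a * np.sinc(2 * a * xi)               # sin(2*pi*a*xi)/(pi*xi), with the value 2a at xi = 0

print(np.max(np.abs(numeric - exact)))            # small; limited by the grid spacing at the box edges
```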
There exists an abundant literature on the `Fourier integral'. The books by Carslaw (1930), Wiener (1933), Titchmarsh (1948), Katznelson (1968), Sneddon (1951, 1972), and Dym & McKean (1972) are particularly recommended.
Both transformations $\mathcal{F}$ and $\bar{\mathcal{F}}$ are obviously linear maps from $L^1$ to $L^\infty$ when these spaces are viewed as vector spaces over the field $\mathbb{C}$ of complex numbers.

$\mathcal{F}$ and $\bar{\mathcal{F}}$ turn translations into phase shifts:
$$\mathcal{F}[\tau_a f](\xi) = \exp(-2\pi i a \cdot \xi) \, \mathcal{F}[f](\xi), \qquad \bar{\mathcal{F}}[\tau_a f](\xi) = \exp(+2\pi i a \cdot \xi) \, \bar{\mathcal{F}}[f](\xi).$$

Under a general linear change of variable with non-singular matrix A, the transform of $A^{\#}f$ is
$$\mathcal{F}[A^{\#}f](\xi) = \int_{\mathbb{R}^n} f(A^{-1}x) \exp(-2\pi i \xi \cdot x) \, \mathrm{d}^n x = |\det A| \int_{\mathbb{R}^n} f(y) \exp[-2\pi i (A^T \xi) \cdot y] \, \mathrm{d}^n y$$
by the substitution $x = Ay$, i.e.
$$\mathcal{F}[A^{\#}f](\xi) = |\det A| \, \mathcal{F}[f](A^T \xi) = |\det A| \, \{[(A^{-1})^T]^{\#} \mathcal{F}[f]\}(\xi),$$
and similarly for $\bar{\mathcal{F}}$. The matrix $(A^{-1})^T$ is called the contragredient of matrix A.

Under an affine change of coordinates $x \mapsto S(x) = A(x) + b$ with non-singular matrix A, the transform of $S^{\#}f$ is given by
$$\mathcal{F}[S^{\#}f](\xi) = \mathcal{F}[\tau_b(A^{\#}f)](\xi) = \exp(-2\pi i \xi \cdot b) \, |\det A| \, \mathcal{F}[f](A^T \xi),$$
with a similar result for $\bar{\mathcal{F}}$, replacing −i by +i.
The kernels of the Fourier transformations $\mathcal{F}$ and $\bar{\mathcal{F}}$ satisfy the following identities:
$$\exp(\pm 2\pi i \xi \cdot x) = \overline{\exp(\mp 2\pi i \xi \cdot x)} = \exp[\mp 2\pi i \xi \cdot (-x)].$$
As a result the transformations $\mathcal{F}$ and $\bar{\mathcal{F}}$ themselves have the following `conjugate symmetry' properties [where the notation $\bar{f}(x) = \overline{f(x)}$ and $\breve{f}(x) = f(-x)$ of Section 1.3.2.2.2 will be used]:
$$\mathcal{F}[\bar{f}] = \overline{\bar{\mathcal{F}}[f]} = \overline{\mathcal{F}[\breve{f}]}, \qquad \mathcal{F}[\breve{f}] = \bar{\mathcal{F}}[f].$$
Therefore,
(i) f real (i.e. $f = \bar{f}$) $\Leftrightarrow$ $\mathcal{F}[f](-\xi) = \overline{\mathcal{F}[f](\xi)}$: the transform of a real function is Hermitian-symmetric;
(ii) f centrosymmetric (i.e. $f = \breve{f}$) $\Leftrightarrow$ $\mathcal{F}[f] = \bar{\mathcal{F}}[f]$;
(iii) f real and centrosymmetric $\Leftrightarrow$ $\mathcal{F}[f]$ real and centrosymmetric.
Conjugate symmetry is the basis of Friedel's law (Section 1.3.4.2.1.4) in crystallography.
Another elementary property of $\mathcal{F}$ is its naturality with respect to tensor products. Let $f \in L^1(\mathbb{R}^m)$ and $g \in L^1(\mathbb{R}^n)$, and let $\mathcal{F}_x$ and $\mathcal{F}_y$ denote the Fourier transformations in $L^1(\mathbb{R}^m)$ and $L^1(\mathbb{R}^n)$, respectively. Then
$$\mathcal{F}_{x, y}[f \otimes g] = \mathcal{F}_x[f] \otimes \mathcal{F}_y[g].$$
Furthermore, if $F \in L^1(\mathbb{R}^m \times \mathbb{R}^n)$, then $\mathcal{F}_y[F]$ is summable as a function of x and $\mathcal{F}_x[F]$ is summable as a function of y, and
$$\mathcal{F}_{x, y}[F] = \mathcal{F}_x[\mathcal{F}_y[F]] = \mathcal{F}_y[\mathcal{F}_x[F]].$$
This is easily proved by using Fubini's theorem and the fact that $(\xi, \eta) \cdot (x, y) = \xi \cdot x + \eta \cdot y$, where $\xi \in \mathbb{R}^m$ and $\eta \in \mathbb{R}^n$. This property may be written:
$$\mathcal{F}_{x, y} = \mathcal{F}_x \otimes \mathcal{F}_y.$$
If f and g are summable, their convolution $h = f * g$ exists and is summable, and
$$\mathcal{F}[h](\xi) = \int_{\mathbb{R}^n} \left[\int_{\mathbb{R}^n} f(y) g(x - y) \, \mathrm{d}^n y\right] \exp(-2\pi i \xi \cdot x) \, \mathrm{d}^n x.$$
With the change of variable $z = x - y$, so that
$$\exp(-2\pi i \xi \cdot x) = \exp(-2\pi i \xi \cdot y) \exp(-2\pi i \xi \cdot z),$$
and with Fubini's theorem, rearrangement of the double integral gives
$$\mathcal{F}[f * g] = \mathcal{F}[f] \times \mathcal{F}[g]$$
and similarly
$$\bar{\mathcal{F}}[f * g] = \bar{\mathcal{F}}[f] \times \bar{\mathcal{F}}[g].$$
Thus the Fourier transform and cotransform turn convolution into multiplication.
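The theorem can be verified on a grid with a minimal numerical sketch in Python (assuming NumPy; the two functions and grid are illustrative choices): the Riemann-sum transform of the sampled convolution coincides with the product of the transforms.

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
xi = np.linspace(-2, 2, 9)

f = np.exp(-np.pi * x**2)                        # two summable functions
g = np.where(np.abs(x) <= 1, 0.5, 0.0)

def ft(h):
    return np.array([np.sum(h * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

conv = np.convolve(f, g, mode="same") * dx       # (f * g) sampled on the same grid

print(np.max(np.abs(ft(conv) - ft(f) * ft(g))))  # essentially zero: convolution has become multiplication
```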
In general, $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[g]$ are not summable, and hence cannot be further transformed; however, as they are essentially bounded, their products with the Gaussians $\exp(-t \|\xi\|^2)$ are summable for all $t > 0$, and it can be shown that
$$f = \lim_{t \to 0} \bar{\mathcal{F}}[\exp(-t \|\xi\|^2) \, \mathcal{F}[f]],$$
where the limit is taken in the topology of the $L^1$ norm $\|\cdot\|_1$. Thus $\mathcal{F}$ and $\bar{\mathcal{F}}$ are (in a sense) mutually inverse, which justifies the common practice of calling $\bar{\mathcal{F}}$ the `inverse Fourier transformation'.
If $f \in L^1(\mathbb{R}^n)$, i.e. is summable, then $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ exist and are continuous and essentially bounded:
$$\|\mathcal{F}[f]\|_\infty \leq \|f\|_1, \qquad \|\bar{\mathcal{F}}[f]\|_\infty \leq \|f\|_1.$$
In fact one has the much stronger property, whose statement constitutes the Riemann–Lebesgue lemma, that $\mathcal{F}[f](\xi)$ and $\bar{\mathcal{F}}[f](\xi)$ both tend to zero as $\|\xi\| \to \infty$.
Let us now suppose that $n = 1$ and that $f \in L^1(\mathbb{R})$ is differentiable with $f' \in L^1(\mathbb{R})$. Integration by parts yields
$$\mathcal{F}[f'](\xi) = \int_{-\infty}^{+\infty} f'(x) \exp(-2\pi i \xi x) \, \mathrm{d}x = [f(x) \exp(-2\pi i \xi x)]_{-\infty}^{+\infty} + (2\pi i \xi) \int_{-\infty}^{+\infty} f(x) \exp(-2\pi i \xi x) \, \mathrm{d}x.$$
Since f′ is summable, f has a limit when $x \to \pm\infty$, and this limit must be 0 since f is summable. Therefore
$$\mathcal{F}[f'](\xi) = (2\pi i \xi) \, \mathcal{F}[f](\xi)$$
with the bound
$$\|2\pi \xi \, \mathcal{F}[f]\|_\infty \leq \|f'\|_1,$$
so that $|\mathcal{F}[f](\xi)|$ decreases faster than $1/|\xi|$ as $|\xi| \to \infty$.

This result can be easily extended to several dimensions and to any multi-index m: if f is summable and has continuous summable partial derivatives up to order $|m|$, then
$$\mathcal{F}[D^m f](\xi) = (2\pi i \xi)^m \, \mathcal{F}[f](\xi)$$
and
$$\|(2\pi \xi)^m \mathcal{F}[f]\|_\infty \leq \|D^m f\|_1.$$
Similar results hold for $\bar{\mathcal{F}}$, with $2\pi i \xi$ replaced by $-2\pi i \xi$. Thus, the more differentiable f is, with summable derivatives, the faster $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ decrease at infinity.

The property of turning differentiation into multiplication by a monomial has many important applications in crystallography, for instance differential syntheses (Sections 1.3.4.2.1.9, 1.3.4.4.7.2, 1.3.4.4.7.5) and moment-generating functions [Section 1.3.4.5.2.1(c)].
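A quick numerical check of $\mathcal{F}[f'] = (2\pi i\xi)\,\mathcal{F}[f]$ may be made in Python (assuming NumPy; the Gaussian test function and grids are illustrative choices), taking $f(x) = e^{-\pi x^2}$ with $f'(x) = -2\pi x\, e^{-\pi x^2}$:

```python
import numpy as np

x = np.linspace(-12, 12, 4801)
dx = x[1] - x[0]
xi = np.linspace(-3, 3, 13)

f = np.exp(-np.pi * x**2)
fp = -2 * np.pi * x * np.exp(-np.pi * x**2)      # f'(x), written out analytically

def ft(h):
    return np.array([np.sum(h * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

print(np.max(np.abs(ft(fp) - 2j * np.pi * xi * ft(f))))   # ~0: differentiation becomes multiplication by 2*pi*i*xi
```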
Conversely, assume that f is summable on $\mathbb{R}^n$ and that f decreases fast enough at infinity for $x^m f$ also to be summable, for some multi-index m. Then the integral defining $\mathcal{F}[f]$ may be subjected to the differential operator $D^m_\xi$, still yielding a convergent integral: therefore $D^m_\xi \mathcal{F}[f]$ exists, and
$$D^m_\xi \mathcal{F}[f](\xi) = \mathcal{F}[(-2\pi i x)^m f](\xi)$$
with the bound
$$\|D^m_\xi \mathcal{F}[f]\|_\infty \leq \|(2\pi x)^m f\|_1.$$
Similar results hold for $\bar{\mathcal{F}}$, with $-2\pi i x$ replaced by $+2\pi i x$. Thus, the faster f decreases at infinity, the more $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ are differentiable, with bounded derivatives. This property is the converse of that described in Section 1.3.2.4.2.8, and their combination is fundamental in the definition of the function space $\mathcal{S}(\mathbb{R}^n)$ in Section 1.3.2.4.4.1, of tempered distributions in Section 1.3.2.5, and in the extension of the Fourier transformation to them.
An extreme case of the last instance occurs when f has compact support: then $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ are so regular that they may be analytically continued from $\mathbb{R}^n$ to $\mathbb{C}^n$ where they are entire functions, i.e. have no singularities at finite distance (Paley & Wiener, 1934). This is easily seen for $\mathcal{F}[f]$: giving vector $\xi \in \mathbb{R}^n$ a vector $\eta \in \mathbb{R}^n$ of imaginary parts leads to
$$\mathcal{F}[f](\xi + i\eta) = \int_{\mathbb{R}^n} f(x) \exp[-2\pi i (\xi + i\eta) \cdot x] \, \mathrm{d}^n x = \mathcal{F}[\exp(2\pi \eta \cdot x) f](\xi),$$
where the latter transform always exists since $\exp(2\pi \eta \cdot x) f(x)$ is summable with respect to x for all values of η. This analytic continuation forms the basis of the saddlepoint method in probability theory [Section 1.3.4.5.2.1(f)] and leads to the use of maximum-entropy distributions in the statistical theory of direct phase determination [Section 1.3.4.5.2.2(e)].

By Liouville's theorem, an entire function in $\mathbb{C}^n$ cannot vanish identically on the complement of a compact subset of $\mathbb{R}^n$ without vanishing everywhere: therefore $\mathcal{F}[f]$ cannot have compact support if f has, and hence $\mathcal{D}(\mathbb{R}^n)$ is not stable by Fourier transformation.
Let f belong to $L^1(\mathbb{R}^n) \cap L^2(\mathbb{R}^n)$, i.e. be both summable and square-integrable. Then
$$\bar{\mathcal{F}}[\mathcal{F}[f]] = \mathcal{F}[\bar{\mathcal{F}}[f]] = f,$$
equality being taken as `almost everywhere' equality. This again leads to calling $\bar{\mathcal{F}}$ the `inverse Fourier transformation' rather than the Fourier cotransformation.
$\mathcal{F}$ and $\bar{\mathcal{F}}$ preserve the $L^2$ norm:
$$\|\mathcal{F}[f]\|_2 = \|\bar{\mathcal{F}}[f]\|_2 = \|f\|_2 \quad \text{(Parseval/Plancherel)}.$$
This property, which may be written in terms of the inner product (,) in $L^2(\mathbb{R}^n)$ as
$$(\mathcal{F}[f], \mathcal{F}[g]) = (\bar{\mathcal{F}}[f], \bar{\mathcal{F}}[g]) = (f, g),$$
implies that $\mathcal{F}$ and $\bar{\mathcal{F}}$ are unitary transformations of $L^2(\mathbb{R}^n)$ into itself, i.e. infinite-dimensional `rotations'.
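Parseval's identity is easy to illustrate numerically in Python (assuming NumPy; the test function and grids are illustrative choices). With the present convention there is no $2\pi$ factor, so $\int |f|^2\,\mathrm{d}x$ and $\int |\mathcal{F}[f](\xi)|^2\,\mathrm{d}\xi$ agree directly:

```python
import numpy as np

x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2) * (1 + np.cos(3 * x))       # a rapidly decreasing test function

xi = np.linspace(-6, 6, 1201)
dxi = xi[1] - xi[0]
F = np.array([np.sum(f * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

norm_x = np.sum(np.abs(f) ** 2) * dx                  # ||f||_2^2
norm_xi = np.sum(np.abs(F) ** 2) * dxi                # ||F[f]||_2^2
print(norm_x, norm_xi)                                # the two values agree: the transform is an isometry
```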
Some light can be shed on the geometric structure of these rotations by the following simple considerations. Note that
$$\mathcal{F}^2[f](x) = \bar{\mathcal{F}}[\mathcal{F}[f]](-x) = f(-x) = \breve{f}(x),$$
so that $\mathcal{F}^4$ (and similarly $\bar{\mathcal{F}}^4$) is the identity map. Any eigenvalue of $\mathcal{F}$ or $\bar{\mathcal{F}}$ is therefore a fourth root of unity, i.e. ±1 or ±i, and $L^2(\mathbb{R}^n)$ splits into an orthogonal direct sum
$$H_0 \oplus H_1 \oplus H_2 \oplus H_3,$$
where $\mathcal{F}$ (respectively $\bar{\mathcal{F}}$) acts in each subspace $H_k$ ($k = 0, 1, 2, 3$) by multiplication by $(-i)^k$ [respectively $(+i)^k$]. Orthonormal bases for these subspaces can be constructed from Hermite functions (cf. Section 1.3.2.4.4.2). This method was used by Wiener (1933, pp. 51–71).
In , the convolution theorem (when applicable) and the Parseval/Plancherel theorem are not independent. Suppose that f, g,
and
are all in
(without questioning whether these properties are independent). Then
may be written in terms of the inner product in
as follows:
i.e.
Invoking the isometry property, we may rewrite the right-hand side as so that the initial identity yields the convolution theorem.
To obtain the converse implication, note that where conjugate symmetry (Section 1.3.2.4.2.2
) has been used.
These relations have an important application in the calculation by Fourier transform methods of the derivatives used in the refinement of macromolecular structures (Section 1.3.4.4.7).
The duality established in Sections 1.3.2.4.2.8 and 1.3.2.4.2.9 between the local differentiability of a function and the rate of decrease at infinity of its Fourier transform prompts one to consider the space $\mathcal{S}(\mathbb{R}^n)$ of functions f on $\mathbb{R}^n$ which are infinitely differentiable and all of whose derivatives are rapidly decreasing, so that for all multi-indices k and p
$$(x^k D^p f)(x) \to 0 \quad \text{as } \|x\| \to \infty.$$
The product of $f \in \mathcal{S}$ by any polynomial over $\mathbb{R}^n$ is still in $\mathcal{S}$ ($\mathcal{S}$ is an algebra over the ring of polynomials). Furthermore, $\mathcal{S}$ is invariant under translations and differentiation.
If $f \in \mathcal{S}$, then its transforms $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ are
(i) infinitely differentiable because f is rapidly decreasing;
(ii) rapidly decreasing because f is infinitely differentiable;
hence $\mathcal{F}[f]$ and $\bar{\mathcal{F}}[f]$ are in $\mathcal{S}$: $\mathcal{S}$ is invariant under $\mathcal{F}$ and $\bar{\mathcal{F}}$.

Since $L^1 \supset \mathcal{S}$ and $L^2 \supset \mathcal{S}$, all properties of $\mathcal{F}$ and $\bar{\mathcal{F}}$ already encountered above are enjoyed by functions of $\mathcal{S}$, with all restrictions on differentiability and/or integrability lifted. For instance, given two functions f and g in $\mathcal{S}$, then both fg and $f * g$ are in $\mathcal{S}$ (which was not the case with $L^1$ nor with $L^2$) so that the reciprocity theorem inherited from $L^2$ allows one to state the reverse of the convolution theorem first established in $L^1$:
$$\mathcal{F}[fg] = \mathcal{F}[f] * \mathcal{F}[g], \qquad \bar{\mathcal{F}}[fg] = \bar{\mathcal{F}}[f] * \bar{\mathcal{F}}[g].$$
Gaussian functions are particularly important elements of $\mathcal{S}$. In dimension 1, a well known contour integration (Schwartz, 1965, p. 184) yields
$$\mathcal{F}[\exp(-\pi x^2)](\xi) = \bar{\mathcal{F}}[\exp(-\pi x^2)](\xi) = \exp(-\pi \xi^2),$$
which shows that the `standard Gaussian' $\exp(-\pi x^2)$ is invariant under $\mathcal{F}$ and $\bar{\mathcal{F}}$. By a tensor product construction, it follows that the same is true of the standard Gaussian
$$G(x) = \exp(-\pi \|x\|^2)$$
in dimension n:
$$\mathcal{F}[G](\xi) = \bar{\mathcal{F}}[G](\xi) = G(\xi).$$
In other words, G is an eigenfunction of $\mathcal{F}$ and $\bar{\mathcal{F}}$ for eigenvalue 1 (Section 1.3.2.4.3.4).
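The self-duality of the standard Gaussian under the present convention is easy to confirm with a minimal numerical sketch in Python (assuming NumPy; grids are illustrative choices):

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
G = np.exp(-np.pi * x**2)                             # standard Gaussian, n = 1

xi = np.linspace(-3, 3, 25)
FG = np.array([np.sum(G * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

print(np.max(np.abs(FG - np.exp(-np.pi * xi**2))))    # ~0: F[G] = G
```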
A complete system of eigenfunctions may be constructed as follows. In dimension 1, consider the family of functions where D denotes the differentiation operator. The first two members of the family
are such that
, as shown above, and
hence
We may thus take as an induction hypothesis that
The identity
may be written
and the two differentiation theorems give:
Combination of this with the induction hypothesis yields
thus proving that
is an eigenfunction of
for eigenvalue
for all
. The same proof holds for
, with eigenvalue
. If these eigenfunctions are normalized as
then it can be shown that the collection of Hermite functions
constitutes an orthonormal basis of
such that
is an eigenfunction of
(respectively
) for eigenvalue
(respectively
).
In dimension n, the same construction can be extended by tensor product to yield the multivariate Hermite functions (where
is a multi-index). These constitute an orthonormal basis of
, with
an eigenfunction of
(respectively
) for eigenvalue
(respectively
). Thus the subspaces
of Section 1.3.2.4.3.4
are spanned by those
with
.
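The eigenvalue property can be checked numerically with any convenient normalization of the Hermite functions. The sketch below, in Python (assuming NumPy; the particular normalization $h_m(x) = H_m(\sqrt{2\pi}\,x)\exp(-\pi x^2)$ with the physicists' Hermite polynomials is an illustrative choice, not the one used in the text), verifies $\mathcal{F}[h_m] = (-i)^m h_m$ for the first few m:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-12, 12, 4801)
dx = x[1] - x[0]
xi = np.linspace(-4, 4, 33)

def hermite_function(m, t):
    """h_m(t) = H_m(sqrt(2*pi)*t) * exp(-pi*t^2): an eigenfunction of F for eigenvalue (-i)^m."""
    coeffs = np.zeros(m + 1)
    coeffs[m] = 1.0                      # selects the physicists' Hermite polynomial H_m
    return hermval(np.sqrt(2 * np.pi) * t, coeffs) * np.exp(-np.pi * t**2)

for m in range(5):
    h = hermite_function(m, x)
    Fh = np.array([np.sum(h * np.exp(-2j * np.pi * s * x)) * dx for s in xi])
    err = np.max(np.abs(Fh - (-1j)**m * hermite_function(m, xi)))
    print(m, err)                        # small for each m: F[h_m] = (-i)^m h_m
```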
General multivariate Gaussians are usually encountered in the non-standard form
$$G_A(x) = \exp(-\tfrac{1}{2} x^T A x),$$
where A is a symmetric positive-definite matrix. Diagonalizing A as $E \Lambda E^T$ with $E E^T$ the identity matrix, and putting $A^{1/2} = E \Lambda^{1/2} E^T$, we may write
$$G_A(x) = G\left[\left(\frac{A}{2\pi}\right)^{1/2} x\right],$$
i.e.
$$G_A = [(2\pi A^{-1})^{1/2}]^{\#} G,$$
hence (by Section 1.3.2.4.2.3)
$$\mathcal{F}[G_A] = |\det(2\pi A^{-1})|^{1/2} \, [(A/2\pi)^{1/2}]^{\#} G,$$
i.e.
$$\mathcal{F}[G_A](\xi) = |\det(2\pi A^{-1})|^{1/2} \, G[(2\pi A^{-1})^{1/2} \xi],$$
i.e. finally
$$\mathcal{F}[G_A] = |\det(2\pi A^{-1})|^{1/2} \, G_{4\pi^2 A^{-1}}.$$
This result is widely used in crystallography, e.g. to calculate form factors for anisotropic atoms (Section 1.3.4.2.2.6) and to obtain transforms of derivatives of Gaussian atomic densities (Section 1.3.4.4.7.10).
The result just obtained, which also holds for $\bar{\mathcal{F}}$, shows that the `peakier' $G_A$, the `broader' $\mathcal{F}[G_A]$. This is a general property of the Fourier transformation, expressed in dimension 1 by the Heisenberg inequality (Weyl, 1931):
$$\left(\int x^2 |f(x)|^2 \, \mathrm{d}x\right)\left(\int \xi^2 |\mathcal{F}[f](\xi)|^2 \, \mathrm{d}\xi\right) \geq \frac{1}{16\pi^2} \left(\int |f(x)|^2 \, \mathrm{d}x\right)^2,$$
where, by a beautiful theorem of Hardy (1933), equality can only be attained for f Gaussian. Hardy's theorem is even stronger: if both f and $\mathcal{F}[f]$ behave at infinity as constant multiples of G, then each of them is everywhere a constant multiple of G; if both f and $\mathcal{F}[f]$ behave at infinity as constant multiples of G multiplied by a polynomial, then each of them is a finite linear combination of Hermite functions. Hardy's theorem is invoked in Section 1.3.4.4.5 to derive the optimal procedure for spreading atoms on a sampling grid in order to obtain the most accurate structure factors.
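The inequality can be checked numerically with a minimal sketch in Python (assuming NumPy; grids and the comparison function are illustrative choices): the ratio of left-hand side to right-hand side is 1 for the standard Gaussian and exceeds 1 for other profiles.

```python
import numpy as np

x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
xi = np.linspace(-5, 5, 2001)
dxi = xi[1] - xi[0]

def heisenberg_ratio(f):
    F = np.array([np.sum(f * np.exp(-2j * np.pi * s * x)) * dx for s in xi])
    lhs = (np.sum(x**2 * np.abs(f)**2) * dx) * (np.sum(xi**2 * np.abs(F)**2) * dxi)
    rhs = (np.sum(np.abs(f)**2) * dx) ** 2 / (16 * np.pi**2)
    return lhs / rhs                   # >= 1, with equality only for a Gaussian

print(heisenberg_ratio(np.exp(-np.pi * x**2)))   # ~1.0 (equality)
print(heisenberg_ratio(np.exp(-np.abs(x))))      # clearly > 1
```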
The search for optimal compromises between the confinement of f to a compact domain in x-space and of $\mathcal{F}[f]$ to a compact domain in ξ-space leads to consideration of prolate spheroidal wavefunctions (Pollack & Slepian, 1961; Landau & Pollack, 1961, 1962).
A final formal property of the Fourier transform, best established in $\mathcal{S}$, is its symmetry: if f and g are in $\mathcal{S}$, then by Fubini's theorem
$$\langle \mathcal{F}[f], g \rangle = \int_{\mathbb{R}^n} \left[\int_{\mathbb{R}^n} f(x) \exp(-2\pi i \xi \cdot x) \, \mathrm{d}^n x\right] g(\xi) \, \mathrm{d}^n\xi = \int_{\mathbb{R}^n} f(x) \left[\int_{\mathbb{R}^n} g(\xi) \exp(-2\pi i \xi \cdot x) \, \mathrm{d}^n\xi\right] \mathrm{d}^n x = \langle f, \mathcal{F}[g] \rangle.$$
This possibility of `transposing' $\mathcal{F}$ (and $\bar{\mathcal{F}}$) from the left to the right of the duality bracket will be used in Section 1.3.2.5.4 to extend the Fourier transformation to distributions.
Other ways of writing Fourier transforms in $\mathbb{R}^n$ exist besides the one used here. All have the form
$$\mathcal{F}_{h, \omega}[f](\xi) = \frac{1}{h^n} \int_{\mathbb{R}^n} f(x) \exp(-i\omega \xi \cdot x) \, \mathrm{d}^n x,$$
where h is real positive and ω real non-zero, with the reciprocity formula written:
$$f(x) = \frac{1}{k^n} \int_{\mathbb{R}^n} \mathcal{F}_{h, \omega}[f](\xi) \exp(+i\omega \xi \cdot x) \, \mathrm{d}^n\xi,$$
with k real positive. The consistency condition between h, k and ω is
$$h k = \frac{2\pi}{|\omega|}.$$
The usual choices are:
(i) $\omega = \pm 2\pi$, $h = k = 1$ (as here);
(ii) $\omega = \pm 1$, $h = 1$, $k = 2\pi$;
(iii) $\omega = \pm 1$, $h = k = \sqrt{2\pi}$.
It should be noted that conventions (ii) and (iii) introduce numerical factors of 2π in convolution and Parseval formulae, while (ii) breaks the symmetry between $\mathcal{F}$ and $\bar{\mathcal{F}}$.
It was found in Section 1.3.2.4.2 that the usual space of test functions $\mathcal{D}$ is not invariant under $\mathcal{F}$ and $\bar{\mathcal{F}}$. By contrast, the space $\mathcal{S}$ of infinitely differentiable rapidly decreasing functions is invariant under $\mathcal{F}$ and $\bar{\mathcal{F}}$, and furthermore transposition formulae such as
$$\langle \mathcal{F}[f], g \rangle = \langle f, \mathcal{F}[g] \rangle$$
hold for all $f, g \in \mathcal{S}$. It is precisely this type of transposition which was used successfully in Sections 1.3.2.3.9.1 and 1.3.2.3.9.3 to define the derivatives of distributions and their products with smooth functions.

This suggests using $\mathcal{S}$ instead of $\mathcal{D}$ as a space of test functions φ, and defining the Fourier transform $\mathcal{F}[T]$ of a distribution T by
$$\langle \mathcal{F}[T], \varphi \rangle = \langle T, \mathcal{F}[\varphi] \rangle$$
whenever T is capable of being extended from $\mathcal{D}$ to $\mathcal{S}$ while remaining continuous. It is this latter proviso which will be subsumed under the adjective `tempered'. As was the case with the construction of $\mathcal{D}'$, it is the definition of a sufficiently strong topology (i.e. notion of convergence) in $\mathcal{S}$ which will play a key role in transferring to the elements of its topological dual $\mathcal{S}'$ (called tempered distributions) all the properties of the Fourier transformation.
Besides the general references to distribution theory mentioned in Section 1.3.2.3.1, the reader may consult the books by Zemanian (1965, 1968). Lavoine (1963) contains tables of Fourier transforms of distributions.
A notion of convergence has to be introduced in $\mathcal{S}(\mathbb{R}^n)$ in order to be able to define and test the continuity of linear functionals on it.

A sequence $(\varphi_j)$ of functions in $\mathcal{S}$ will be said to converge to 0 if, for any given multi-indices k and p, the sequence $(x^k D^p \varphi_j)$ tends to 0 uniformly on $\mathbb{R}^n$.

It can be shown that $\mathcal{D}(\mathbb{R}^n)$ is dense in $\mathcal{S}(\mathbb{R}^n)$. Translation is continuous for this topology. For any linear differential operator $P(D) = \sum_p a_p D^p$ and any polynomial $Q(x)$ over $\mathbb{R}^n$, $(\varphi_j) \to 0$ implies $[Q(x) P(D) \varphi_j] \to 0$ in the topology of $\mathcal{S}$. Therefore, differentiation and multiplication by polynomials are continuous for the topology on $\mathcal{S}$.

The Fourier transformations $\mathcal{F}$ and $\bar{\mathcal{F}}$ are also continuous for the topology of $\mathcal{S}$. Indeed, let $(\varphi_j)$ converge to 0 for the topology on $\mathcal{S}$. Then, by Section 1.3.2.4.2,
$$\|(2\pi \xi)^k D^p \mathcal{F}[\varphi_j]\|_\infty \leq \|D^k [(2\pi x)^p \varphi_j]\|_1.$$
The right-hand side tends to 0 as $j \to \infty$ by definition of convergence in $\mathcal{S}$, hence $\xi^k D^p \mathcal{F}[\varphi_j] \to 0$ uniformly, so that $(\mathcal{F}[\varphi_j]) \to 0$ in $\mathcal{S}$ as $j \to \infty$. The same proof applies to $\bar{\mathcal{F}}$.
A distribution $T \in \mathcal{D}'(\mathbb{R}^n)$ is said to be tempered if it can be extended into a continuous linear functional on $\mathcal{S}$.

If $\mathcal{S}'(\mathbb{R}^n)$ is the topological dual of $\mathcal{S}(\mathbb{R}^n)$, and if $S \in \mathcal{S}'(\mathbb{R}^n)$, then its restriction to $\mathcal{D}$ is a tempered distribution; conversely, if $T \in \mathcal{D}'$ is tempered, then its extension to $\mathcal{S}$ is unique (because $\mathcal{D}$ is dense in $\mathcal{S}$), hence it defines an element S of $\mathcal{S}'$. We may therefore identify $\mathcal{S}'$ and the space of tempered distributions.

A distribution with compact support is tempered, i.e. $\mathcal{S}' \supset \mathcal{E}'$. By transposition of the corresponding properties of $\mathcal{S}$, it is readily established that the derivative, translate or product by a polynomial of a tempered distribution is still a tempered distribution.

These inclusion relations may be summarized as follows: since $\mathcal{S}$ contains $\mathcal{D}$ but is contained in $\mathcal{E}$, the reverse inclusions hold for the topological duals, and hence $\mathcal{S}'$ contains $\mathcal{E}'$ but is contained in $\mathcal{D}'$.
A locally summable function f on $\mathbb{R}^n$ will be said to be of polynomial growth if $|f(x)|$ can be majorized by a polynomial in $\|x\|$ as $\|x\| \to \infty$. It is easily shown that such a function f defines a tempered distribution $T_f$ via
$$\langle T_f, \varphi \rangle = \int_{\mathbb{R}^n} f(x) \varphi(x) \, \mathrm{d}^n x.$$
In particular, polynomials over $\mathbb{R}^n$ define tempered distributions, and so do functions in $\mathcal{S}$. The latter remark, together with the transposition identity (Section 1.3.2.4.4), invites the extension of $\mathcal{F}$ and $\bar{\mathcal{F}}$ from $\mathcal{S}$ to $\mathcal{S}'$.
The Fourier transform $\mathcal{F}[T]$ and cotransform $\bar{\mathcal{F}}[T]$ of a tempered distribution T are defined by
$$\langle \mathcal{F}[T], \varphi \rangle = \langle T, \mathcal{F}[\varphi] \rangle, \qquad \langle \bar{\mathcal{F}}[T], \varphi \rangle = \langle T, \bar{\mathcal{F}}[\varphi] \rangle$$
for all test functions $\varphi \in \mathcal{S}$. Both $\mathcal{F}[T]$ and $\bar{\mathcal{F}}[T]$ are themselves tempered distributions, since the maps $\varphi \mapsto \mathcal{F}[\varphi]$ and $\varphi \mapsto \bar{\mathcal{F}}[\varphi]$ are both linear and continuous for the topology of $\mathcal{S}$. In the same way that x and ξ have been used consistently as arguments for φ and $\mathcal{F}[\varphi]$, respectively, the notation $T_x$ and $\mathcal{F}[T]_\xi$ will be used to indicate which variables are involved.

When T is a distribution with compact support, its Fourier transform may be written
$$\mathcal{F}[T_x]_\xi = \langle T_x, \exp(-2\pi i \xi \cdot x) \rangle,$$
since the function $x \mapsto \exp(-2\pi i \xi \cdot x)$ is in $\mathcal{E}$ while $T_x \in \mathcal{E}'$. It can be shown, as in Section 1.3.2.4.2, to be analytically continuable into an entire function over $\mathbb{C}^n$.
The duality between differentiation and multiplication by a monomial extends from $\mathcal{S}$ to $\mathcal{S}'$ by transposition:
$$\mathcal{F}[D^p_x T]_\xi = (2\pi i \xi)^p \, \mathcal{F}[T]_\xi, \qquad D^p_\xi \mathcal{F}[T]_\xi = \mathcal{F}[(-2\pi i x)^p T]_\xi.$$
Analogous formulae hold for $\bar{\mathcal{F}}$, with i replaced by −i.

The formulae expressing the duality between translation and phase shift, e.g.
$$\mathcal{F}[\tau_a T]_\xi = \exp(-2\pi i a \cdot \xi) \, \mathcal{F}[T]_\xi, \qquad \tau_a(\mathcal{F}[T]) = \mathcal{F}[\exp(2\pi i a \cdot x) T],$$
between a linear change of variable and its contragredient, e.g.
$$\mathcal{F}[A^{\#}T] = |\det A| \, [(A^{-1})^T]^{\#} \mathcal{F}[T],$$
are obtained similarly by transposition from the corresponding identities in $\mathcal{S}$. They give a transposition formula for an affine change of variables $x \mapsto S(x) = A(x) + b$ with non-singular matrix A:
$$\mathcal{F}[S^{\#}T] = \exp(-2\pi i \xi \cdot b) \, \mathcal{F}[A^{\#}T] = \exp(-2\pi i \xi \cdot b) \, |\det A| \, [(A^{-1})^T]^{\#} \mathcal{F}[T],$$
with a similar result for $\bar{\mathcal{F}}$, replacing −i by +i.

Conjugate symmetry is obtained similarly:
$$\mathcal{F}[\bar{T}] = \overline{\bar{\mathcal{F}}[T]}, \qquad \mathcal{F}[\breve{T}] = \bar{\mathcal{F}}[T],$$
with the same identities for $\bar{\mathcal{F}}$.
The tensor product property also transposes to tempered distributions: if $S \in \mathcal{S}'(\mathbb{R}^m)$ and $T \in \mathcal{S}'(\mathbb{R}^n)$,
$$\mathcal{F}[S_x \otimes T_y] = \mathcal{F}[S]_\xi \otimes \mathcal{F}[T]_\eta.$$

Since δ has compact support,
$$\mathcal{F}[\delta]_\xi = \langle \delta_x, \exp(-2\pi i \xi \cdot x) \rangle = 1_\xi,$$
i.e. $\mathcal{F}[\delta] = 1$. It is instructive to show that conversely $\mathcal{F}[1] = \delta$ without invoking the reciprocity theorem. Since $D_j 1 = 0$ for all $j = 1, \ldots, n$, transposition gives $\xi_j \mathcal{F}[1] = 0$ for all j, and it follows from Section 1.3.2.3.9.4 that $\mathcal{F}[1] = c\delta$; the constant c can be determined by using the invariance of the standard Gaussian G established in Section 1.3.2.4.3:
$$\langle \mathcal{F}[1]_\xi, G_\xi \rangle = \langle 1_x, \mathcal{F}[G]_x \rangle = \langle 1_x, G_x \rangle = 1;$$
hence $c = 1$. Thus, $\mathcal{F}[1] = \delta$.

The basic properties above then read (using multi-indices to denote differentiation):
$$\mathcal{F}[D^p \delta]_\xi = (2\pi i \xi)^p, \qquad \mathcal{F}[x^p]_\xi = \left(\frac{i}{2\pi}\right)^{|p|} D^p \delta_\xi,$$
$$\mathcal{F}[\delta_{(a)}]_\xi = \exp(-2\pi i a \cdot \xi), \qquad \mathcal{F}[\exp(2\pi i a \cdot x)]_\xi = \delta_{(a)},$$
with analogous relations for $\bar{\mathcal{F}}$, i becoming −i. Thus derivatives of δ are mapped to monomials (and vice versa), while translates of δ are mapped to `phase factors' (and vice versa).
The previous results now allow a self-contained and rigorous proof of the reciprocity theorem between $\mathcal{F}$ and $\bar{\mathcal{F}}$ to be given, whereas in traditional settings (i.e. in $L^1$ and $L^2$) the implicit handling of δ through a limiting process is always the sticking point.

Reciprocity is first established in $\mathcal{S}$ as follows: for $\varphi \in \mathcal{S}$,
$$\bar{\mathcal{F}}[\mathcal{F}[\varphi]](x) = \langle \mathcal{F}[\varphi]_\xi, \exp(2\pi i \xi \cdot x) \rangle = \langle \varphi_y, \mathcal{F}_\xi[\exp(2\pi i \xi \cdot x)]_y \rangle = \langle \varphi_y, \delta_{(x)} \rangle = \varphi(x),$$
and similarly
$$\mathcal{F}[\bar{\mathcal{F}}[\varphi]](x) = \varphi(x).$$
The reciprocity theorem is then proved in $\mathcal{S}'$ by transposition:
$$\bar{\mathcal{F}}[\mathcal{F}[T]] = \mathcal{F}[\bar{\mathcal{F}}[T]] = T \quad \text{for all } T \in \mathcal{S}'.$$
Thus the Fourier cotransformation $\bar{\mathcal{F}}$ in $\mathcal{S}'$ may legitimately be called the `inverse Fourier transformation'.

The method of Section 1.3.2.4.3 may then be used to show that $\mathcal{F}$ and $\bar{\mathcal{F}}$ both have period 4 in $\mathcal{S}'$.
Multiplier functions for tempered distributions must be infinitely differentiable, as for ordinary distributions; furthermore, they must grow sufficiently slowly as $\|x\| \to \infty$ to ensure that $\alpha \varphi \in \mathcal{S}$ for all $\varphi \in \mathcal{S}$ and that the map $\varphi \mapsto \alpha \varphi$ is continuous for the topology of $\mathcal{S}$. This leads to choosing for multipliers the subspace $\mathcal{O}_M$ of $\mathcal{E}$ consisting of infinitely differentiable functions of polynomial growth, all of whose derivatives are also of polynomial growth. It can be shown that if f is in $\mathcal{O}_M$, then the associated distribution $T_f$ is in $\mathcal{S}'$ (i.e. is a tempered distribution); and that conversely, if T is in $\mathcal{S}'$, then $\alpha T$ is in $\mathcal{S}'$ for all $\alpha \in \mathcal{O}_M$.
Corresponding restrictions must be imposed to define the space of those distributions T whose convolution
with a tempered distribution S is still a tempered distribution: T must be such that, for all
is in
; and such that the map
be continuous for the topology of
. This implies that S is `rapidly decreasing'. It can be shown that if f is in
, then the associated distribution
is in
; and that conversely if T is in
is in
for all
.
The two spaces and
are mapped into each other by the Fourier transformation
and the convolution theorem takes the form
The same identities hold for
. Taken together with the reciprocity theorem, these show that
and
establish mutually inverse isomorphisms between
and
, and exchange multiplication for convolution in
.
It may be noticed that most of the basic properties of and
may be deduced from this theorem and from the properties of δ. Differentiation operators
and translation operators
are convolutions with
and
; they are turned, respectively, into multiplication by monomials
(the transforms of
) or by phase factors
(the transforms of
).
Another consequence of the convolution theorem is the duality established by the Fourier transformation between sections and projections of a function and its transform. For instance, in , the projection of
on the x, y plane along the z axis may be written
its Fourier transform is then
which is the section of
by the plane
, orthogonal to the z axis used for projection. There are numerous applications of this property in crystallography (Section 1.3.4.2.1.8
) and in fibre diffraction (Section 1.3.4.5.1.3
).
The special properties of in the space of square-integrable functions
, such as Parseval's identity, can be accommodated within distribution theory: if
, then
is a tempered distribution in
(the map
being continuous) and it can be shown that
is of the form
, where
is the Fourier transform of u in
. By Plancherel's theorem,
.
This embedding of into
can be used to derive the convolution theorem for
. If u and v are in
, then
can be shown to be a bounded continuous function; thus
is not in
, but it is in
, so that its Fourier transform is a distribution, and
Spaces of tempered distributions related to can be defined as follows. For any real s, define the Sobolev space
to consist of all tempered distributions
such that
These spaces play a fundamental role in the theory of partial differential equations, and in the mathematical theory of tomographic reconstruction – a subject not unrelated to the crystallographic phase problem (Natterer, 1986).
Let be the subset of
consisting of those points with (signed) integer coordinates; it is an n-dimensional lattice, i.e. a free Abelian group on n generators. A particularly simple set of n generators is given by the standard basis of
, and hence
will be called the standard lattice in
. Any other `non-standard' n-dimensional lattice Λ in
is the image of this standard lattice by a general linear transformation.
If we identify any two points in whose coordinates are congruent modulo
, i.e. differ by a vector in
, we obtain the standard n-torus
. The latter may be viewed as
, i.e. as the Cartesian product of n circles. The same identification may be carried out modulo a non-standard lattice Λ, yielding a non-standard n-torus
. The correspondence to crystallographic terminology is that `standard' coordinates over the standard 3-torus
are called `fractional' coordinates over the unit cell; while Cartesian coordinates, e.g. in ångströms, constitute a set of non-standard coordinates.
Finally, we will denote by I the unit cube and by
the subset
A distribution is called periodic with period lattice
(or
-periodic) if
for all
(in crystallography the period lattice is the direct lattice).
Given a distribution with compact support , then
is a
-periodic distribution. Note that we may write
, where
consists of Dirac δ's at all nodes of the period lattice
.
Conversely, any -periodic distribution T may be written as
for some
. To retrieve such a `motif'
from T, a function ψ will be constructed in such a way that
(hence has compact support) and
; then
. Indicator functions (Section 1.3.2.2
) such as
or
cannot be used directly, since they are discontinuous; but regularized versions of them may be constructed by convolution (see Section 1.3.2.3.9.7
) as
, with ɛ and η such that
on
and
outside
. Then the function
has the desired property. The sum in the denominator contains at most
non-zero terms at any given point x and acts as a smoothly varying `multiplicity correction'.
Throughout this section, `periodic' will mean `$\mathbb{Z}^n$-periodic'.
Let , and let [s] denote the largest integer
. For
, let
be the unique vector
with
. If
, then
if and only if
. The image of the map
is thus
modulo
, or
.
If f is a periodic function over , then
implies
; we may thus define a function
over
by putting
for any
such that
. Conversely, if
is a function over
, then we may define a function f over
by putting
, and f will be periodic. Periodic functions over
may thus be identified with functions over
, and this identification preserves the notions of convergence, local summability and differentiability.
Given , we may define
since the sum only contains finitely many non-zero terms; φ is periodic, and
. Conversely, if
we may define
periodic by
, and
by putting
with ψ constructed as above.
By transposition, a distribution defines a unique periodic distribution
by
; conversely,
periodic defines uniquely
by
.
We may therefore identify -periodic distributions over
with distributions over
. We will, however, use mostly the former presentation, as it is more closely related to the crystallographer's perception of periodicity (see Section 1.3.4.1
).
The content of this section is perhaps the central result in the relation between Fourier theory and crystallography (Section 1.3.4.2.1.1).
Let with r defined as in Section 1.3.2.6.2
. Then
,
hence
, so that
:
-periodic distributions are tempered, hence have a Fourier transform. The convolution theorem (Section 1.3.2.5.8
) is applicable, giving:
and similarly for
.
It is readily shown that Q is tempered and periodic, so that , while the periodicity of r implies that
Since the first factors have single isolated zeros at
in
,
(see Section 1.3.2.3.9.4
) and hence by periodicity
; convoluting with
shows that
. Thus we have the fundamental result:
so that
i.e., according to Section 1.3.2.3.9.3
,
The right-hand side is a weighted lattice distribution, whose nodes are weighted by the sample values
of the transform of the motif
at those nodes. Since
, the latter values may be written
By the structure theorem for distributions with compact support (Section 1.3.2.3.9.7
),
is a derivative of finite order of a continuous function; therefore, from Section 1.3.2.4.2.8
and Section 1.3.2.5.8
,
grows at most polynomially as
(see also Section 1.3.2.6.10.3
about this property). Conversely, let
be a weighted lattice distribution such that the weights
grow at most polynomially as
. Then W is a tempered distribution, whose Fourier cotransform
is periodic. If T is now written as
for some
, then by the reciprocity theorem
Although the choice of
is not unique, and need not yield back the same motif as may have been used to build T initially, different choices of
will lead to the same coefficients
because of the periodicity of
.
The Fourier transformation thus establishes a duality between periodic distributions and weighted lattice distributions
. The pair of relations are referred to as the Fourier analysis and the Fourier synthesis of T, respectively (there is a discrepancy between this terminology and the crystallographic one, see Section 1.3.4.2.1.1
). In other words, any periodic distribution
may be represented by a Fourier series (ii), whose coefficients are calculated by (i). The convergence of (ii) towards T in
will be investigated later (Section 1.3.2.6.10
).
Let Λ denote the non-standard lattice consisting of all vectors of the form , where the
are rational integers and
are n linearly independent vectors in
. Let R be the corresponding lattice distribution:
.
Let A be the non-singular matrix whose successive columns are the coordinates of vectors
in the standard basis of
; A will be called the period matrix of Λ, and the mapping
will be denoted by A. According to Section 1.3.2.3.9.5
we have
for any
, and hence
. By Fourier transformation, according to Section 1.3.2.5.5
,
which we write:
with
is a lattice distribution:
associated with the reciprocal lattice
whose basis vectors
are the columns of
. Since the latter matrix is equal to the adjoint matrix (i.e. the matrix of co-factors) of A divided by det A, the components of the reciprocal basis vectors can be written down explicitly (see Section 1.3.4.2.1.1
for the crystallographic case
).
A distribution T will be called Λ-periodic if for all
; as previously, T may be written
for some motif distribution
with compact support. By Fourier transformation,
so that
is a weighted reciprocal-lattice distribution, the weight attached to node
being
times the value
of the Fourier transform of the motif
.
This result may be further simplified if T and its motif are referred to the standard period lattice
by defining t and
so that
,
,
. Then
hence
so that
in non-standard coordinates, while
in standard coordinates.
The reciprocity theorem may then be written: in non-standard coordinates, or equivalently:
in standard coordinates. It gives an n-dimensional Fourier series representation for any periodic distribution over
. The convergence of such series in
will be examined in Section 1.3.2.6.10
.
Let be a distribution with compact support (the `motif'). Its Fourier transform
is analytic (Section 1.3.2.5.4
) and may thus be used as a multiplier.
We may rephrase the preceding results as follows:
Thus the Fourier transformation establishes a duality between the periodization of a distribution by a period lattice Λ and the sampling of its transform at the nodes of lattice reciprocal to Λ. This is a particular instance of the convolution theorem of Section 1.3.2.5.8
.
At this point it is traditional to break the symmetry between and
which distribution theory has enabled us to preserve even in the presence of periodicity, and to perform two distinct identifications:
Let , so that
. Let R be the lattice distribution associated to lattice Λ, with period matrix A, and let
be associated to the reciprocal lattice
. Then we may write:
i.e.
This identity, which also holds for , is called the Poisson summation formula. Its usefulness follows from the fact that the speed of decrease at infinity of φ and
are inversely related (Section 1.3.2.4.4.3
), so that if one of the series (say, the left-hand side) is slowly convergent, the other (say, the right-hand side) will be rapidly convergent. This procedure has been used by Ewald (1921)
[see also Bertaut (1952)
, Born & Huang (1954)
] to evaluate lattice sums (Madelung constants) involved in the calculation of the internal electrostatic energy of crystals (see Chapter 3.4
in this volume on convergence acceleration techniques for crystallographic lattice sums).
When φ is a multivariate Gaussian then
and Poisson's summation formula for a lattice with period matrix A reads:
or equivalently
with
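As a numerical illustration, the following short script (a minimal one-dimensional sketch; the function names and the test values of the lattice constant and Gaussian width are ours, not from the text) checks Poisson's summation formula for a Gaussian on the lattice $a\mathbb{Z}$, whose reciprocal lattice is $(1/a)\mathbb{Z}$. When the Gaussian is broad the direct sum converges slowly but the reciprocal sum converges very rapidly, and vice versa, which is precisely the trade-off exploited in Ewald's method.

```python
import math

def gauss(x, t):
    """Gaussian phi(x) = exp(-pi (x/t)^2)."""
    return math.exp(-math.pi * (x / t) ** 2)

def gauss_ft(xi, t):
    """Its Fourier transform (e^{-2 pi i x xi} convention): t * exp(-pi (t xi)^2)."""
    return t * math.exp(-math.pi * (t * xi) ** 2)

def direct_sum(a, t, kmax=200):
    """Left-hand side: sum of phi over the lattice a * Z."""
    return sum(gauss(a * k, t) for k in range(-kmax, kmax + 1))

def reciprocal_sum(a, t, mmax=200):
    """Right-hand side: (1/a) * sum of the transform over the reciprocal lattice (1/a) * Z."""
    return sum(gauss_ft(m / a, t) for m in range(-mmax, mmax + 1)) / a

a, t = 0.7, 2.0                    # arbitrary test values
lhs = direct_sum(a, t)
rhs = reciprocal_sum(a, t)
print(lhs, rhs, abs(lhs - rhs))    # the two sums agree to machine precision
```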
Let and
be two Λ-periodic distributions, the motifs
and
having compact support. The convolution
does not exist, because S and T do not satisfy the support condition (Section 1.3.2.3.9.7
). However, the three distributions R,
and
do satisfy the generalized support condition, so that their convolution is defined; then, by associativity and commutativity:
By Fourier transformation and by the convolution theorem: Let
,
and
be the sets of Fourier coefficients associated to S, T and
, respectively. Identifying the coefficients of
for
yields the forward version of the convolution theorem for Fourier series:
The backward version of the theorem requires that T be infinitely differentiable. The distribution is then well defined and its Fourier coefficients
are given by
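The backward version may be checked numerically on trigonometric polynomials: the Fourier coefficients of the pointwise product are the convolution of the two coefficient sequences. The sketch below (coefficient values and helper names are illustrative assumptions, not taken from the text) compares a direct computation of the product's coefficients with the convolution of the coefficient sequences.

```python
import cmath

# Fourier coefficients of two 1-periodic trigonometric polynomials
# (hypothetical test data; keys are the orders m, values the coefficients c_m).
s = {-1: 0.5 + 0.2j, 0: 1.0, 2: -0.3j}
t = {0: 0.7, 1: 1.5, -2: 0.25}

def evaluate(coeffs, x):
    """f(x) = sum_m c_m exp(2 pi i m x)."""
    return sum(c * cmath.exp(2j * cmath.pi * m * x) for m, c in coeffs.items())

def coefficient_of_product(k, n=256):
    """k-th Fourier coefficient of the pointwise product s(x) t(x), by the
    rectangle rule (exact here, since the integrand has degree much less than n)."""
    total = 0.0
    for j in range(n):
        x = j / n
        total += evaluate(s, x) * evaluate(t, x) * cmath.exp(-2j * cmath.pi * k * x)
    return total / n

def coefficient_by_convolution(k):
    """Backward convolution theorem: u_k = sum_h s_h t_{k-h}."""
    return sum(c * t.get(k - m, 0.0) for m, c in s.items())

for k in range(-3, 4):
    print(k, coefficient_of_product(k), coefficient_by_convolution(k))
```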
Toeplitz forms were first investigated by Toeplitz (1907, 1910
, 1911a
). They occur in connection with the `trigonometric moment problem' (Shohat & Tamarkin, 1943
; Akhiezer, 1965
) and probability theory (Grenander, 1952
) and play an important role in several direct approaches to the crystallographic phase problem [see Sections 1.3.4.2.1.10
, 1.3.4.5.2.2
(e)]
. Many aspects of their theory and applications are presented in the book by Grenander & Szegö (1958)
.
Let $f \in L^1(\mathbb{R}/\mathbb{Z})$ be real-valued, so that its Fourier coefficients satisfy the relations $c_{-m}(f) = \overline{c_m(f)}$. The Hermitian form in $n + 1$ complex variables
$T_n[f](\mathbf{x}) = \sum_{\mu=0}^{n} \sum_{\nu=0}^{n} \overline{x_\mu}\, x_\nu\, c_{\mu-\nu}$
is called the nth Toeplitz form associated to f. It is a straightforward consequence of the convolution theorem and of Parseval's identity that $T_n[f]$ may be written:
$T_n[f](\mathbf{x}) = \int_0^1 \Bigl| \sum_{\nu=0}^{n} x_\nu \exp(2\pi i \nu x) \Bigr|^2 f(x)\,\mathrm{d}x.$
It was shown independently by Toeplitz (1911b), Carathéodory (1911) and Herglotz (1911) that a function $f \in L^1$ is almost everywhere non-negative if and only if the Toeplitz forms $T_n[f]$ associated to f are positive semidefinite for all values of n.
This is equivalent to the infinite system of determinantal inequalities
$D_n = \det\,[(c_{\mu-\nu})_{0 \le \mu,\,\nu \le n}] \ge 0 \quad \text{for all } n.$
The $D_n$ are called Toeplitz determinants. Their application to the crystallographic phase problem is described in Section 1.3.4.2.1.10.
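A small numerical illustration (a sketch using numpy; the test function and helper names are ours) builds the Toeplitz matrices of a manifestly non-negative function on the circle and verifies that their determinants and eigenvalues are non-negative, in accordance with the criterion just stated.

```python
import numpy as np

def fourier_coeff(f, m, n=1024):
    """m-th Fourier coefficient of a 1-periodic function, by the rectangle rule."""
    x = np.arange(n) / n
    return np.mean(f(x) * np.exp(-2j * np.pi * m * x))

# A non-negative test function on the circle: f(x) = 1 + cos(2 pi x) >= 0.
f = lambda x: 1.0 + np.cos(2 * np.pi * x)

for n in range(1, 6):
    # (n+1) x (n+1) Toeplitz matrix with entries c_{mu - nu}
    T = np.array([[fourier_coeff(f, mu - nu) for nu in range(n + 1)]
                  for mu in range(n + 1)])
    det = np.linalg.det(T).real
    eigmin = np.linalg.eigvalsh(T).min()
    print(n, round(det, 6), round(eigmin, 6))   # all determinants and eigenvalues >= 0
```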
The eigenvalues of the Hermitian form are defined as the
real roots of the characteristic equation
. They will be denoted by
It is easily shown that if for all x, then
for all n and all
. As
these bounds, and the distribution of the
within these bounds, can be made more precise by introducing two new notions.
We may now state an important theorem of Szegö (1915, 1920
). Let
, and put
,
. If m and M are finite, then for any continuous function
defined in the interval [m, M] we have
In other words, the eigenvalues
of the
and the values
of f on a regular subdivision of ]0, 1[ are equally distributed.
Further investigations into the spectra of Toeplitz matrices may be found in papers by Hartman & Wintner (1950, 1954
), Kac et al. (1953)
, Widom (1965)
, and in the notes by Hirschman & Hughes (1977)
.
The investigation of the convergence of Fourier series and of more general trigonometric series has been the subject of intense study for over 150 years [see e.g. Zygmund (1976)]. It has been a constant source of new mathematical ideas and theories, being directly responsible for the birth of such fields as set theory, topology and functional analysis.
This section will briefly survey those aspects of the classical results in dimension 1 which are relevant to the practical use of Fourier series in crystallography. The books by Zygmund (1959), Tolstov (1962)
and Katznelson (1968)
are standard references in the field, and Dym & McKean (1972)
is recommended as a stimulant.
The space $L^1(\mathbb{R}/\mathbb{Z})$ consists of (equivalence classes of) complex-valued functions f on the circle which are summable, i.e. for which
$\|f\|_1 = \int_0^1 |f(x)|\,\mathrm{d}x < +\infty.$
It is a convolution algebra: if f and g are in $L^1(\mathbb{R}/\mathbb{Z})$, then $f * g$ is in $L^1(\mathbb{R}/\mathbb{Z})$.
The mth Fourier coefficient of f,
$c_m(f) = \int_0^1 f(x) \exp(-2\pi i m x)\,\mathrm{d}x,$
is bounded: $|c_m(f)| \le \|f\|_1$, and by the Riemann–Lebesgue lemma $c_m(f) \to 0$ as $m \to \infty$. By the convolution theorem, $c_m(f * g) = c_m(f)\,c_m(g)$.
The pth partial sum of the Fourier series of f,
$S_p(f)(x) = \sum_{|m| \le p} c_m(f) \exp(2\pi i m x),$
may be written, by virtue of the convolution theorem, as $S_p(f) = D_p * f$, where
$D_p(x) = \sum_{|m| \le p} \exp(2\pi i m x) = \frac{\sin[(2p+1)\pi x]}{\sin \pi x}$
is the Dirichlet kernel. Because $D_p$ comprises numerous slowly decaying oscillations, both positive and negative, $S_p(f)$ may not converge towards f in a strong sense as $p \to \infty$. Indeed, spectacular pathologies are known to exist where the partial sums, examined pointwise, diverge everywhere (Zygmund, 1959, Chapter VIII). When f is piecewise continuous, but presents isolated jumps, convergence near these jumps is marred by the Gibbs phenomenon: $S_p(f)$ always `overshoots the mark' by about 9%, the area under the spurious peak tending to 0 as $p \to \infty$ but not its height [see Larmor (1934) for the history of this phenomenon].
By contrast, the arithmetic mean of the partial sums, also called the pth Cesàro sum,
$C_p(f) = \frac{1}{p+1}\,[S_0(f) + S_1(f) + \cdots + S_p(f)],$
converges to f in the sense of the $L^1$ norm: $\|C_p(f) - f\|_1 \to 0$ as $p \to \infty$. If furthermore f is continuous, then the convergence is uniform, i.e. the error is bounded everywhere by a quantity which goes to 0 as $p \to \infty$. It may be shown that $C_p(f) = F_p * f$, where
$F_p(x) = \sum_{|m| \le p} \Bigl(1 - \frac{|m|}{p+1}\Bigr) \exp(2\pi i m x) = \frac{1}{p+1}\Bigl[\frac{\sin(p+1)\pi x}{\sin \pi x}\Bigr]^2$
is the Fejér kernel. $F_p$ has over $D_p$ the advantage of being everywhere positive, so that the Cesàro sums $C_p(f)$ of a positive function f are always positive.
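These contrasting behaviours are easy to reproduce numerically. The sketch below (the step function and the truncation order p are illustrative choices of ours) computes the ordinary partial sums and the Cesàro sums of the indicator function of [0, 1/2): the former overshoot the jump by about 9% (Gibbs phenomenon), while the latter remain between 0 and 1 because the Fejér kernel is positive.

```python
import cmath

def coeff(m):
    """Fourier coefficients of the 1-periodic step: f = 1 on [0, 1/2), 0 on [1/2, 1)."""
    if m == 0:
        return 0.5
    return (1 - cmath.exp(-1j * cmath.pi * m)) / (2j * cmath.pi * m)

def partial_sum(p, x):
    """Ordinary (Dirichlet) partial sum S_p(f)(x)."""
    return sum(coeff(m) * cmath.exp(2j * cmath.pi * m * x)
               for m in range(-p, p + 1)).real

def cesaro_sum(p, x):
    """Cesaro (Fejer) sum C_p(f)(x): triangular weighting of the coefficients."""
    return sum((1 - abs(m) / (p + 1)) * coeff(m) * cmath.exp(2j * cmath.pi * m * x)
               for m in range(-p, p + 1)).real

p = 50
grid = [k / 2000 for k in range(2000)]
print(max(partial_sum(p, x) for x in grid))   # ~1.09: Gibbs overshoot near the jumps
print(max(cesaro_sum(p, x) for x in grid))    # <= 1: the Fejer kernel is positive
print(min(cesaro_sum(p, x) for x in grid))    # >= 0
```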
The de la Vallée Poussin kernel has a trapezoidal distribution of coefficients and is such that
if
; therefore
is a trigonometric polynomial with the same Fourier coefficients as f over that range of values of m.
The Poisson kernel with
gives rise to an Abel summation procedure [Tolstov (1962
, p. 162); Whittaker & Watson (1927
, p. 57)] since
Compared with the other kernels,
has the disadvantage of not being a trigonometric polynomial; however,
is the real part of the Cauchy kernel (Cartan, 1961
; Ahlfors, 1966
):
and hence provides a link between trigonometric series and analytic functions of a complex variable.
Other methods of summation involve forming a moving average of f by convolution with other sequences of functions besides
of
which `tend towards δ' as
. The convolution is performed by multiplying the Fourier coefficients of f by those of
, so that one forms the quantities
For instance the `sigma factors' of Lanczos (Lanczos, 1966
, p. 65), defined by
lead to a summation procedure whose behaviour is intermediate between those using the Dirichlet and the Fejér kernels; it corresponds to forming a moving average of f by convolution with
which is itself the convolution of a `rectangular pulse' of width
and of the Dirichlet kernel of order p.
A review of the summation problem in crystallography is given in Section 1.3.4.2.1.3.
The space $L^2(\mathbb{R}/\mathbb{Z})$ of (equivalence classes of) square-integrable complex-valued functions f on the circle is contained in $L^1(\mathbb{R}/\mathbb{Z})$, since by the Cauchy–Schwarz inequality
$\|f\|_1^2 = \Bigl(\int_0^1 |f(x)| \cdot 1\,\mathrm{d}x\Bigr)^2 \le \int_0^1 |f(x)|^2\,\mathrm{d}x \int_0^1 1^2\,\mathrm{d}x = \|f\|_2^2 < +\infty.$
Thus all the results derived for $L^1$ hold for $L^2$, a great simplification over the situation in $\mathbb{R}$ or $\mathbb{R}^n$ where neither $L^1$ nor $L^2$ was contained in the other.
However, more can be proved in $L^2$, because $L^2$ is a Hilbert space (Section 1.3.2.2.4) for the inner product
$(f, g) = \int_0^1 \overline{f(x)}\,g(x)\,\mathrm{d}x,$
and because the family of functions $\{\exp(2\pi i m x)\}_{m \in \mathbb{Z}}$ constitutes an orthonormal Hilbert basis for $L^2$.
The sequence of Fourier coefficients $c_m(f)$ of $f \in L^2$ belongs to the space $\ell^2(\mathbb{Z})$ of square-summable sequences:
$\sum_{m \in \mathbb{Z}} |c_m(f)|^2 < +\infty.$
Conversely, every element $c = (c_m)$ of $\ell^2$ is the sequence of Fourier coefficients of a unique function in $L^2$. The inner product
$(c, d) = \sum_{m \in \mathbb{Z}} \overline{c_m}\,d_m$
makes $\ell^2$ into a Hilbert space, and the map from $L^2$ to $\ell^2$ established by the Fourier transformation is an isometry (Parseval/Plancherel):
$\|f\|_2 = \|c(f)\|_2,$
or equivalently:
$(f, g) = (c(f), c(g)).$
This is a useful property in applications, since (f, g) may be calculated either from f and g themselves, or from their Fourier coefficients $c(f)$ and $c(g)$ (see Section 1.3.4.4.6 for crystallographic applications).
By virtue of the orthogonality of the basis $\{\exp(2\pi i m x)\}_{m \in \mathbb{Z}}$, the partial sum $S_p(f)$ is the best mean-square fit to f in the linear subspace of $L^2$ spanned by $\{\exp(2\pi i m x)\}_{|m| \le p}$, and hence (Bessel's inequality)
$\sum_{|m| \le p} |c_m(f)|^2 \le \|f\|_2^2.$
The use of distributions enlarges considerably the range of behaviour which can be accommodated in a Fourier series, even in the case of general dimension n where classical theories meet with even more difficulties than in dimension 1.
Let be a sequence of complex numbers with
growing at most polynomially as
, say
. Then the sequence
is in
and even defines a continuous function
and an associated tempered distribution
. Differentiation of
times then yields a tempered distribution whose Fourier transform leads to the original sequence of coefficients. Conversely, by the structure theorem for distributions with compact support (Section 1.3.2.3.9.7
), the motif
of a
-periodic distribution is a derivative of finite order of a continuous function; hence its Fourier coefficients will grow at most polynomially with
as
.
Thus distribution theory allows the manipulation of Fourier series whose coefficients exhibit polynomial growth as their order goes to infinity, while those derived from functions had to tend to 0 by virtue of the Riemann–Lebesgue lemma. The distribution-theoretic approach to Fourier series holds even in the case of general dimension n, where classical theories meet with even more difficulties (see Ash, 1976) than in dimension 1.
Let be such that
has compact support K. Let φ be sampled at the nodes of a lattice
, yielding the lattice distribution
. The Fourier transform of this sampled version of φ is
which is essentially Φ periodized by period lattice
, with period matrix A.
Let us assume that Λ is such that the translates of K by different period vectors of Λ are disjoint. Then we may recover Φ from by masking the contents of a `unit cell'
of Λ (i.e. a fundamental domain for the action of Λ in
) whose boundary does not meet K. If
is the indicator function of
, then
Transforming both sides by
yields
i.e.
since
is the volume V of
.
This interpolation formula is traditionally credited to Shannon (1949), although it was discovered much earlier by Whittaker (1915)
. It shows that φ may be recovered from its sample values on
(i.e. from
) provided
is sufficiently fine that no overlap (or `aliasing') occurs in the periodization of Φ by the dual lattice Λ. The interpolation kernel is the transform of the normalized indicator function of a unit cell of Λ containing the support K of Φ.
If K is contained in a sphere of radius $1/\Delta$ and if Λ and $\Lambda^*$ are rectangular, the length of each basis vector of Λ must be greater than $2/\Delta$, and thus the sampling interval must be smaller than $\Delta/2$. This requirement constitutes the Shannon sampling criterion.
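A one-dimensional sketch of Shannon interpolation is given below (the test function, sampling interval and truncation length are illustrative assumptions of ours). The function $\mathrm{sinc}^2$ is band-limited, its transform being a triangle supported on [−1, 1], so any sampling interval smaller than 1/2 satisfies the criterion and the truncated interpolation series reproduces the function closely.

```python
import math

def sinc(t):
    """Normalized sinc: sin(pi t) / (pi t)."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def phi(x):
    """A band-limited test function: its transform is a triangle supported on [-1, 1]."""
    return sinc(x) ** 2

delta = 0.4          # sampling interval; the criterion here requires delta < 1/2
K = 500              # number of samples kept on each side (truncation of the series)

def shannon_interpolate(x):
    """Whittaker-Shannon interpolation from the samples phi(k * delta)."""
    return sum(phi(k * delta) * sinc((x - k * delta) / delta)
               for k in range(-K, K + 1))

for x in (0.13, 0.57, 1.91):
    print(x, phi(x), shannon_interpolate(x))   # the values agree closely
```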
Let be a period lattice in
with matrix A, and let
be the lattice reciprocal to
, with period matrix
. Let
be defined similarly, and let us suppose that
is a sublattice of
, i.e. that
as a set.
The relation between and
may be described in two different fashions: (i) multiplicatively, and (ii) additively.
Let us now consider the two reciprocal lattices and
. Their period matrices
and
are related by:
, where
is an integer matrix; or equivalently by
. This shows that the roles are reversed in that
is a sublattice of
, which we may write:
The above relations between lattices may be rewritten in terms of the corresponding lattice distributions as follows: where
and
are (finite) residual-lattice distributions. We may incorporate the factor
in (i) and
into these distributions and define
Since , convolution with
and
has the effect of averaging the translates of a distribution under the elements (or `cosets') of the residual lattices
and
, respectively. This process will be called `coset averaging'. Eliminating
and
between (i) and (ii), and
and
between
and
, we may write:
These identities show that period subdivision by convolution with
(respectively
) on the one hand, and period decimation by `dilation' by
on the other hand, are mutually inverse operations on
and
(respectively
and
).
Finally, let us consider the relations between the Fourier transforms of these lattice distributions. Recalling the basic relation of Section 1.3.2.6.5,
i.e.
and similarly:
Thus (respectively
), a decimated version of
(respectively
), is transformed by
into a subdivided version of
(respectively
).
The converse is also true: i.e.
and similarly
Thus (respectively
), a subdivided version of
(respectively
) is transformed by
into a decimated version of
(respectively
). Therefore, the Fourier transform exchanges subdivision and decimation of period lattices for lattice distributions.
Further insight into this phenomenon is provided by applying to both sides of (iv) and (v) and invoking the convolution theorem:
These identities show that multiplication by the transform of the period-subdividing distribution
(respectively
) has the effect of decimating
to
(respectively
to
). They clearly imply that, if
and
, then
Therefore, the duality between subdivision and decimation may be viewed as another aspect of that between convolution and multiplication.
There is clearly a strong analogy between the sampling/periodization duality of Section 1.3.2.6.6 and the decimation/subdivision duality, which is viewed most naturally in terms of subgroup relationships: both sampling and decimation involve restricting a function to a discrete additive subgroup of the domain over which it is initially given.
The usual presentation of this duality is not in terms of lattice distributions, but of periodic distributions obtained by convolving them with a motif.
Given , let us form
, then decimate its transform
by keeping only its values at the points of the coarser lattice
; as a result,
is replaced by
, and the reverse transform then yields
which is the coset-averaged version of the original
. The converse situation is analogous to that of Shannon's sampling theorem. Let a function
whose transform
has compact support be sampled as
at the nodes of
. Then
is periodic with period lattice
. If the sampling lattice
is decimated to
, the inverse transform becomes
hence becomes periodized more finely by averaging over the cosets of
. With this finer periodization, the various copies of Supp Φ may start to overlap (a phenomenon called `aliasing'), indicating that decimation has produced too coarse a sampling of φ.
Let be such that
has compact support (
is said to be band-limited). Then
is
-periodic, and
is such that only a finite number of points
of
have a non-zero Fourier coefficient
attached to them. We may therefore find a decimation
of
such that the distinct translates of Supp
by vectors of
do not intersect.
The distribution Φ can be uniquely recovered from by the procedure of Section 1.3.2.7.1
, and we may write:
these rearrangements being legitimate because
and
have compact supports which are intersection-free under the action of
. By virtue of its
-periodicity, this distribution is entirely characterized by its `motif'
with respect to
:
Similarly, φ may be uniquely recovered by Shannon interpolation from the distribution sampling its values at the nodes of is a subdivision of
). By virtue of its
-periodicity, this distribution is completely characterized by its motif:
Let and
, and define the two sets of coefficients
Define the two distributions
and
The relation between ω and Ω has two equivalent forms:
By (i), . Both sides are weighted lattice distributions concentrated at the nodes of
, and equating the weights at
gives
Since
,
is an integer, hence
By (ii), we have Both sides are weighted lattice distributions concentrated at the nodes of
, and equating the weights at
gives
Since
,
is an integer, hence
Now the decimation/subdivision relations between and
may be written:
so that
with
, hence finally
Denoting by
and
by
, the relation between ω and Ω may be written in the equivalent form
where the summations are now over finite residual lattices in standard form.
Equations (i) and (ii) describe two mutually inverse linear transformations and
between two vector spaces
and
of dimension
.
[respectively
] is the discrete Fourier (respectively inverse Fourier) transform associated to matrix N.
The vector spaces and
may be viewed from two different standpoints:
These two spaces are said to be `isomorphic' (a relation denoted ≅), the isomorphism being given by the one-to-one correspondence:
The second viewpoint will be adopted, as it involves only linear algebra. However, it is most helpful to keep the first one in mind and to think of the data or results of a discrete Fourier transform as representing (through their sets of unique weights) two periodic lattice distributions related by the full, distribution-theoretic Fourier transform.
We therefore view (respectively
) as the vector space of complex-valued functions over the finite residual lattice
(respectively
) and write:
since a vector such as ψ is in fact the function
.
The two spaces and
may be equipped with the following Hermitian inner products:
which makes each of them into a Hilbert space. The canonical bases
and
and
and
are orthonormal for their respective product.
By virtue of definitions (i) and (ii), so that
and
may be represented, in the canonical bases of
and
, by the following matrices:
When N is symmetric, and
may be identified in a natural manner, and the above matrices are symmetric.
When N is diagonal, say , then the tensor product structure of the full multidimensional Fourier transform (Section 1.3.2.4.2.4
)
gives rise to a tensor product structure for the DFT matrices. The tensor product of matrices is defined as follows:
Let the index vectors
and
be ordered in the same way as the elements in a Fortran array, e.g. for
with
increasing fastest,
next fastest,
slowest; then
where
and
where
The DFT inherits most of the properties of the Fourier transforms, but with certain numerical factors (`Jacobians') due to the transition from continuous to discrete measure.
The Fourier transformation's most remarkable property is undoubtedly that of turning convolution into multiplication. As distribution theory has shown, other valuable properties – such as the shift property, the conversion of differentiation into multiplication by monomials, and the duality between periodicity and sampling – are special instances of the convolution theorem.
This property is exploited in many areas of applied mathematics and engineering (Campbell & Foster, 1948; Sneddon, 1951
; Champeney, 1973
; Bracewell, 1986
). For example, the passing of a signal through a linear filter, which results in its being convolved with the response of the filter to a δ-function `impulse', may be modelled as a multiplication of the signal's transform by the transform of the impulse response (also called transfer function). Similarly, the solution of systems of partial differential equations may be turned by Fourier transformation into a division problem for distributions. In both cases, the formulations obtained after Fourier transformation are considerably simpler than the initial ones, and lend themselves to constructive solution techniques.
Whenever the functions to which the Fourier transform is applied are band-limited, or can be well approximated by band-limited functions, the discrete Fourier transform (DFT) provides a means of constructing explicit numerical solutions to the problems at hand. A great variety of investigations in physics, engineering and applied mathematics thus lead to DFT calculations, to such a degree that, at the time of writing, about 50% of all supercomputer CPU time is alleged to be spent calculating DFTs.
The straightforward use of the defining formulae for the DFT leads to calculations of size for N sample points, which become unfeasible for any but the smallest problems. Much ingenuity has therefore been exerted on the design and implementation of faster algorithms for calculating the DFT (McClellan & Rader, 1979
; Nussbaumer, 1981
; Blahut, 1985
; Brigham, 1988
). The most famous is that of Cooley & Tukey (1965)
which heralded the age of digital signal processing. However, it had been preceded by the prime factor algorithm of Good (1958
, 1960
), which has lately been the basis of many new developments. Recent historical research (Goldstine, 1977
, pp. 249–253; Heideman et al., 1984
) has shown that Gauss essentially knew the Cooley–Tukey algorithm as early as 1805 (before Fourier's 1807 work on harmonic analysis!); while it has long been clear that Dirichlet knew of the basis of the prime factor algorithm and used it extensively in his theory of multiplicative characters [see e.g. Chapter I of Ayoub (1963)
, and Chapters 6 and 8 of Apostol (1976)
]. Thus the computation of the DFT, far from being a purely technical and rather narrow piece of specialized numerical analysis, turns out to have very rich connections with such central areas of pure mathematics as number theory (algebraic and analytic), the representation theory of certain Lie groups and coding theory – to list only a few. The interested reader may consult Auslander & Tolimieri (1979)
; Auslander, Feig & Winograd (1982
, 1984
); Auslander & Tolimieri (1985)
; Tolimieri (1985)
.
One-dimensional algorithms are examined first. The Sande mixed-radix version of the Cooley–Tukey algorithm only calls upon the additive structure of congruence classes of integers. The prime factor algorithm of Good begins to exploit some of their multiplicative structure, and the use of relatively prime factors leads to a stronger factorization than that of Sande. Fuller use of the multiplicative structure, via the group of units, leads to the Rader algorithm; and the factorization of short convolutions then yields the Winograd algorithms.
Multidimensional algorithms are at first built as tensor products of one-dimensional elements. The problem of factoring the DFT in several dimensions simultaneously is then examined. The section ends with a survey of attempts at formalizing the interplay between algorithm structure and computer architecture for the purpose of automating the design of optimal DFT code.
It was originally intended to incorporate into this section a survey of all the basic notions and results of abstract algebra which are called upon in the course of these developments, but time limitations have made this impossible. This material, however, is adequately covered by the first chapter of Tolimieri et al. (1989) in a form tailored for the same purposes. Similarly, the inclusion of numerous detailed examples of the algorithms described here has had to be postponed to a later edition, but an abundant supply of such examples may be found in the signal processing literature, for instance in the books by McClellan & Rader (1979)
, Blahut (1985)
, and Tolimieri et al. (1989)
.
Throughout this section we will denote by $e(t)$ the expression $\exp(2\pi i t)$, $t \in \mathbb{R}$. The mapping $t \mapsto e(t)$ has the following properties:
$e(t_1 + t_2) = e(t_1)\,e(t_2), \qquad e(-t) = \overline{e(t)} = [e(t)]^{-1}, \qquad e(t) = 1 \Leftrightarrow t \in \mathbb{Z}.$
Thus e defines an isomorphism between the additive group $\mathbb{R}/\mathbb{Z}$ (the reals modulo the integers) and the multiplicative group of complex numbers of modulus 1. It follows that the mapping $\ell \mapsto e(\ell/N)$, where $\ell \in \mathbb{Z}$ and N is a positive integer, defines an isomorphism between the one-dimensional residual lattice $\mathbb{Z}/N\mathbb{Z}$ and the multiplicative group of Nth roots of unity.
The DFT on N points then relates vectors X and $X^*$ in W and $W^*$ through the linear transformations:
$F(N): \quad X^*(k^*) = \frac{1}{N} \sum_{k \in \mathbb{Z}/N\mathbb{Z}} X(k)\,e(-k^* k / N),$
$\bar{F}(N): \quad X(k) = \sum_{k^* \in \mathbb{Z}/N\mathbb{Z}} X^*(k^*)\,e(+k^* k / N).$
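In terms of code, the defining formulae translate directly into the following $O(N^2)$ sketch (the normalization, with 1/N on the forward transform, follows the convention written above; the function names are ours), which also checks the reciprocity between the transform and the cotransform.

```python
import cmath

def e(t):
    """e(t) = exp(2 pi i t)."""
    return cmath.exp(2j * cmath.pi * t)

def dft(X):
    """Forward DFT by the defining formula (1/N normalization on this side)."""
    N = len(X)
    return [sum(X[k] * e(-ks * k / N) for k in range(N)) / N for ks in range(N)]

def idft(Xs):
    """Inverse (co)transform: no normalizing factor with this convention."""
    N = len(Xs)
    return [sum(Xs[ks] * e(ks * k / N) for ks in range(N)) for k in range(N)]

X = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5]      # arbitrary test data, N = 6
Y = idft(dft(X))
print(all(abs(x - y) < 1e-12 for x, y in zip(X, Y)))   # True: reciprocity
```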
The presentation of Gentleman & Sande (1966) will be followed first [see also Cochran et al. (1967)
]. It will then be reinterpreted in geometric terms which will prepare the way for the treatment of multidimensional transforms in Section 1.3.3.3.
Suppose that the number of sample points N is composite, say . We may write k to the base
and
to the base
as follows:
The defining relation for
may then be written:
The argument of
may be expanded as
and the last summand, being an integer, may be dropped:
This computation may be decomposed into five stages, as follows:
If the intermediate transforms in stages (ii) and (iv)
are performed in place, i.e. with the results overwriting the data, then at stage (v)
the result
will be found at address
. This phenomenon is called scrambling by `digit reversal', and stage (v)
is accordingly known as unscrambling.
The initial N-point transform has thus been performed as
transforms
on
points, followed by
transforms
on
points, thereby reducing the arithmetic cost from
to
. The phase shifts applied at stage (iii)
are traditionally called `twiddle factors', and the transposition between
and
can be performed by the fast recursive technique of Eklundh (1972)
. Clearly, this procedure can be applied recursively if
and
are themselves composite, leading to an overall arithmetic cost of order N log N if N has no large prime factors.
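The following sketch (a minimal illustration; the index-map conventions and function names are ours, and only one level of factorization is performed) implements the decomposition for $N = N_1 N_2$ and checks it against the defining formula. Applying it recursively to the partial transforms gives the usual $N \log N$ behaviour.

```python
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def dft_naive(X):
    """Defining formula, 1/N normalization on the forward side."""
    N = len(X)
    return [sum(X[k] * e(-ks * k / N) for k in range(N)) / N for ks in range(N)]

def dft_cooley_tukey(X, N1, N2):
    """One level of the Cooley-Tukey factorization for N = N1 * N2.
    Index maps assumed here: k = k1 + N1*k2, k* = k2* + N2*k1* (digit reversal)."""
    N = N1 * N2
    # N1 partial transforms of length N2 on the decimated subsequences
    inner = [dft_naive([X[k1 + N1 * k2] for k2 in range(N2)]) for k1 in range(N1)]
    # twiddle factors e(-k2* k1 / N)
    for k1 in range(N1):
        for k2s in range(N2):
            inner[k1][k2s] *= e(-k2s * k1 / N)
    # N2 transforms of length N1; results stored at k* = k2* + N2*k1*
    out = [0j] * N
    for k2s in range(N2):
        col = dft_naive([inner[k1][k2s] for k1 in range(N1)])
        for k1s in range(N1):
            out[k2s + N2 * k1s] = col[k1s]
    return out

X = [complex(k * k % 7, k % 3) for k in range(12)]   # arbitrary data, N = 12 = 3 x 4
print(max(abs(a - b) for a, b in zip(dft_cooley_tukey(X, 3, 4), dft_naive(X))))  # ~1e-15
```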
The Cooley–Tukey factorization may also be derived from a geometric rather than arithmetic argument. The decomposition is associated to a geometric partition of the residual lattice
into
copies of
, each translated by
and `blown up' by a factor
. This partition in turn induces a (direct sum) decomposition of X as
where
According to (i),
is related to
by decimation by
and offset by
. By Section 1.3.2.7.2
,
is related to
by periodization by
and phase shift by
, so that
the periodization by
being reflected by the fact that
does not depend on
. Writing
and expanding
shows that the phase shift contains both the twiddle factor
and the kernel
of
. The Cooley–Tukey algorithm is thus naturally associated to the coset decomposition of a lattice modulo a sublattice (Section 1.3.2.7.2
).
It is readily seen that essentially the same factorization can be obtained for , up to the complex conjugation of the twiddle factors. The normalizing constant
arises from the normalizing constants
and
in
and
, respectively.
Factors of 2 are particularly simple to deal with and give rise to a characteristic computational structure called a `butterfly loop'. If , then two options exist:
By repeated factoring of the number N of sample points, the calculation of and
can be reduced to a succession of stages, the smallest of which operate on single prime factors of N. The reader is referred to Gentleman & Sande (1966)
for a particularly lucid analysis of the programming considerations which help implement this factorization efficiently; see also Singleton (1969)
. Powers of two are often grouped together into factors of 4 or 8, which are advantageous in that they require fewer complex multiplications than the repeated use of factors of 2. In this approach, large prime factors P are detrimental, since they require a full
-size computation according to the defining formula.
The set of congruence classes of integers modulo an integer N [see e.g. Apostol (1976)
, Chapter 5] inherits from
not only the additive structure used in deriving the Cooley–Tukey factorization, but also a multiplicative structure in which the product of two congruence classes mod N is uniquely defined as the class of the ordinary product (in
) of representatives of each class. The multiplication can be distributed over addition in the usual way, endowing
with the structure of a commutative ring.
If N is composite, the ring $\mathbb{Z}/N\mathbb{Z}$ has zero divisors. For example, let $N = N_1 N_2$, let $n_1 \equiv N_1$ mod N, and let $n_2 \equiv N_2$ mod N: then $n_1 n_2 \equiv 0$ mod N. In the general case, a product of non-zero elements will be zero whenever these elements collect together all the factors of N. These circumstances give rise to a fundamental theorem in the theory of commutative rings, the Chinese Remainder Theorem (CRT), which will now be stated and proved [see Apostol (1976), Chapter 5; Schroeder (1986), Chapter 16].
Let $N = N_1 N_2 \cdots N_d$ be factored into a product of pairwise coprime integers, so that g.c.d.$(N_i, N_j) = 1$ for $i \ne j$. Then the system of congruence equations
$\ell \equiv \ell_j \ (\mathrm{mod}\ N_j), \qquad j = 1, \ldots, d,$
has a unique solution $\ell$ mod N. In other words, each $\ell \in \mathbb{Z}/N\mathbb{Z}$ is associated in a one-to-one fashion to the d-tuple $(\ell_1, \ell_2, \ldots, \ell_d)$ of its residue classes in $\mathbb{Z}/N_1\mathbb{Z}, \mathbb{Z}/N_2\mathbb{Z}, \ldots, \mathbb{Z}/N_d\mathbb{Z}$.
The proof of the CRT goes as follows. Let
$Q_j = \frac{N}{N_j} = \prod_{i \ne j} N_i.$
Since g.c.d.$(N_j, Q_j) = 1$ there exist integers $n_j$ and $q_j$ such that
$n_j N_j + q_j Q_j = 1, \qquad j = 1, \ldots, d;$
then the integer
$\ell = \sum_{i=1}^{d} \ell_i q_i Q_i \ \mathrm{mod}\ N$
is the solution. Indeed,
$\ell \equiv \ell_j q_j Q_j \ (\mathrm{mod}\ N_j)$
because all terms with $i \ne j$ contain $N_j$ as a factor; and
$q_j Q_j \equiv 1 \ (\mathrm{mod}\ N_j)$
by the defining relation for $q_j$.
It may be noted that so that the
are mutually orthogonal idempotents in the ring
, with properties formally similar to those of mutually orthogonal projectors onto subspaces in linear algebra. The analogy is exact, since by virtue of the CRT the ring
may be considered as the direct product
via the two mutually inverse mappings:
The mapping defined by (ii) is sometimes called the `CRT reconstruction' of
from the
.
These two mappings have the property of sending sums to sums and products to products, i.e: (the last proof requires using the properties of the idempotents
). This may be described formally by stating that the CRT establishes a ring isomorphism:
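A minimal sketch of the CRT reconstruction (ii) is given below; it uses Python's built-in modular inverse (`pow(Q, -1, N)`) to obtain the coefficients $q_j$, and the moduli and test value are arbitrary illustrative choices of ours.

```python
from math import gcd

def crt(residues, moduli):
    """Chinese remainder reconstruction: find l mod N = prod(moduli) with
    l = residues[j] (mod moduli[j]) for all j, the moduli being pairwise coprime."""
    N = 1
    for n in moduli:
        N *= n
    l = 0
    for lj, Nj in zip(residues, moduli):
        Qj = N // Nj                      # product of the other moduli
        assert gcd(Nj, Qj) == 1           # pairwise coprimality
        qj = pow(Qj, -1, Nj)              # q_j Q_j = 1 (mod N_j)
        l += lj * qj * Qj
    return l % N

moduli = [7, 8, 9]                        # pairwise coprime, N = 504
k = 401
residues = [k % n for n in moduli]
print(crt(residues, moduli) == k)         # True: the correspondence is one-to-one
```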
The CRT will now be used to factor the N-point DFT into a tensor product of d transforms, the jth of length .
Let the indices k and be subjected to the following mappings:
Then Cross terms with
vanish since they contain all the factors of N, hence
Dividing by N, which may be written as
for each j, yields
and hence
Therefore, by the multiplicative property of
,
Let be described by a one-dimensional array
indexed by k. The index mapping (i)
turns X into an element of
described by a d-dimensional array
; the latter may be transformed by
into a new array
. Finally, the one-dimensional array of results
will be obtained by reconstructing
according to (ii)
.
The prime factor algorithm, like the Cooley–Tukey algorithm, reindexes a 1D transform to turn it into d separate transforms, but the use of coprime factors and CRT index mapping leads to the further gain that no twiddle factors need to be applied between the successive transforms (see Good, 1971). This makes up for the cost of the added complexity of the CRT index mapping.
The natural factorization of N for the prime factor algorithm is thus its factorization into prime powers: is then the tensor product of separate transforms (one for each prime power factor
) whose results can be reassembled without twiddle factors. The separate factors
within each
must then be dealt with by another algorithm (e.g. Cooley–Tukey, which does require twiddle factors). Thus, the DFT on a prime number of points remains undecomposable.
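The sketch below illustrates the principle for $N = N_1 N_2$ with coprime factors. It uses one common choice of index maps (a `Ruritanian'-style map on the input indices and the CRT map on the output indices; the specific maps and the function names are our illustrative assumptions, not necessarily those of the text): with such maps the kernel factorizes exactly, and no twiddle factors appear between the two stages.

```python
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def dft_naive(X):
    N = len(X)
    return [sum(X[k] * e(-ks * k / N) for k in range(N)) / N for ks in range(N)]

def dft_good(X, N1, N2):
    """Prime factor (Good) algorithm for N = N1*N2 with gcd(N1, N2) = 1.
    Input indices are remapped as k = (N2*k1 + N1*k2) mod N and output indices
    through the CRT (k* -> (k* mod N1, k* mod N2)); the kernel then factors
    exactly, so no twiddle factors are needed between the two stages."""
    N = N1 * N2
    # reindex the data into an N1 x N2 array
    Y = [[X[(N2 * k1 + N1 * k2) % N] for k2 in range(N2)] for k1 in range(N1)]
    # row transforms (length N2), then column transforms (length N1)
    Y = [dft_naive(row) for row in Y]
    Y = [list(col) for col in zip(*[dft_naive([Y[k1][k2s] for k1 in range(N1)])
                                    for k2s in range(N2)])]
    # Y is now indexed [k1*][k2*]; read the results out through the CRT map
    return [Y[ks % N1][ks % N2] for ks in range(N)]

X = [complex(3 * k % 11, k % 5) for k in range(15)]    # arbitrary data, N = 15 = 3 x 5
print(max(abs(a - b) for a, b in zip(dft_good(X, 3, 5), dft_naive(X))))   # ~1e-15
```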
The previous two algorithms essentially reduce the calculation of the DFT on N points for N composite to the calculation of smaller DFTs on prime numbers of points, the latter remaining irreducible. However, Rader (1968) showed that the p-point DFT for p an odd prime can itself be factored by invoking some extra arithmetic structure present in $\mathbb{Z}/p\mathbb{Z}$.
The ring $\mathbb{Z}/p\mathbb{Z}$ has the property that its $p - 1$ non-zero elements, called units, form a multiplicative group $U(p)$. In particular, all units $r$ have a unique multiplicative inverse in $\mathbb{Z}/p\mathbb{Z}$, i.e. a unit $r^{-1}$ such that $r r^{-1} \equiv 1$ mod p. This endows $\mathbb{Z}/p\mathbb{Z}$ with the structure of a finite field.
Furthermore, $U(p)$ is a cyclic group, i.e. consists of the successive powers $g^m$ mod p of a generator g called a primitive root mod p (such a g may not be unique, but it always exists). For instance, for $p = 7$, $U(7)$ is generated by $g = 3$, whose successive powers mod 7 are:
$g^0 = 1, \quad g^1 = 3, \quad g^2 = 2, \quad g^3 = 6, \quad g^4 = 4, \quad g^5 = 5$
[see Apostol (1976), Chapter 10].
The basis of Rader's algorithm is to bring to light a hidden regularity in the matrix by permuting the basis vectors
and
of
as follows:
where g is a primitive root mod p.
With respect to these new bases, the matrix representing will have the following elements:
Thus the `core'
of matrix
, of size
, formed by the elements with two non-zero indices, has a so-called skew-circulant structure because element
depends only on
. Simplification may now occur because multiplication by
is closely related to a cyclic convolution. Introducing the notation
we may write the relation
in the permuted bases as
where Z is defined by
,
.
Thus may be obtained by cyclic convolution of C and Z, which may for instance be calculated by
where × denotes the component-wise multiplication of vectors. Since p is odd,
is always divisible by 2 and may even be highly composite. In that case, factoring
by means of the Cooley–Tukey or Good methods leads to an algorithm of complexity p log p rather than
for
. An added bonus is that, because
, the elements of
can be shown to be either purely real or purely imaginary, which halves the number of real multiplications involved.
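A compact sketch of Rader's algorithm for a prime p is given below (the helper names and test data are ours; the cyclic convolution of length p − 1 is computed directly here, although in practice it would itself be evaluated by FFTs or by the short-convolution techniques discussed later).

```python
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def dft_naive(X):
    N = len(X)
    return [sum(X[k] * e(-ks * k / N) for k in range(N)) / N for ks in range(N)]

def primitive_root(p):
    """Smallest generator of the multiplicative group U(p), p prime."""
    for g in range(2, p):
        if len({pow(g, m, p) for m in range(p - 1)}) == p - 1:
            return g

def cyclic_convolve(a, b):
    n = len(a)
    return [sum(a[i] * b[(m - i) % n] for i in range(n)) for m in range(n)]

def dft_rader(X):
    """Rader's algorithm for a p-point DFT, p an odd prime: the `core' of the
    transform becomes a cyclic convolution of length p - 1."""
    p = len(X)
    g = primitive_root(p)
    ginv = pow(g, -1, p)
    Z = [X[pow(ginv, b, p)] for b in range(p - 1)]        # permuted data
    C = [e(-pow(g, m, p) / p) for m in range(p - 1)]       # skew-circulant core column
    conv = cyclic_convolve(Z, C)
    out = [sum(X) / p] + [0j] * (p - 1)
    for a in range(p - 1):
        out[pow(g, a, p)] = (X[0] + conv[a]) / p
    return out

X = [complex(k * k % 7, (2 * k + 1) % 4) for k in range(11)]   # p = 11
print(max(abs(u - v) for u, v in zip(dft_rader(X), dft_naive(X))))   # ~1e-15
```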
This idea was extended by Winograd (1976, 1978
) to the treatment of prime powers
, using the cyclic structure of the multiplicative group of units
. The latter consists of all those elements of
which are not divisible by p, and thus has
elements. It is cyclic, and there exist primitive roots g modulo
such that
The
elements divisible by p, which are divisors of zero, have to be treated separately just as 0 had to be treated separately for
.
When , then
with
. The results
are p-decimated, hence can be obtained via the
-point DFT of the
-periodized data Y:
with
When , then we may write
where
contains the contributions from
and
those from
. By a converse of the previous calculation,
arises from p-decimated data Z, hence is the
-periodization of the
-point DFT of these data:
with
(the
-periodicity follows implicitly from the fact that the transform on the right-hand side is independent of
).
Finally, the contribution from all
may be calculated by reindexing by the powers of a primitive root g modulo
, i.e. by writing
then carrying out the multiplication by the skew-circulant matrix core as a convolution.
Thus the DFT of size may be reduced to two DFTs of size
(dealing, respectively, with p-decimated results and p-decimated data) and a convolution of size
. The latter may be `diagonalized' into a multiplication by purely real or purely imaginary numbers (because
) by two DFTs, whose factoring in turn leads to DFTs of size
and
. This method, applied recursively, allows the complete decomposition of the DFT on
points into arbitrarily small DFTs.
When , the same method can be applied, except for a slight modification in the calculation of
. There is no primitive root modulo
for
: the group
is the direct product of two cyclic groups, the first (of order 2) generated by −1, the second (of order
) generated by 3 or 5. One then uses a representation
and the reindexed core matrix gives rise to a two-dimensional convolution. The latter may be carried out by means of two 2D DFTs on
points.
The cyclic convolutions generated by Rader's multiplicative reindexing may be evaluated more economically than through DFTs if they are re-examined within a new algebraic setting, namely the theory of congruence classes of polynomials [see, for instance, Blahut (1985), Chapter 2; Schroeder (1986
), Chapter 24].
The set, denoted , of polynomials in one variable with coefficients in a given field
has many of the formal properties of the set
of rational integers: it is a ring with no zero divisors and has a Euclidean algorithm on which a theory of divisibility can be built.
Given a polynomial , then for every
there exist unique polynomials
and
such that
and
is called the residue of
modulo
. Two polynomials
and
having the same residue modulo
are said to be congruent modulo
, which is denoted by
If is said to be divisible by
. If
only has divisors of degree zero in
, it is said to be irreducible over
(this notion depends on
). Irreducible polynomials play in
a role analogous to that of prime numbers in
, and any polynomial over
has an essentially unique factorization as a product of irreducible polynomials.
There exists a Chinese remainder theorem (CRT) for polynomials. Let be factored into a product of pairwise coprime polynomials [i.e.
and
have no common factor for
]. Then the system of congruence equations
has a unique solution
modulo
. This solution may be constructed by a procedure similar to that used for integers. Let
Then
and
are coprime, and the Euclidean algorithm may be used to obtain polynomials
and
such that
With
, the polynomial
is easily shown to be the desired solution.
As with integers, it can be shown that the 1:1 correspondence between and
sends sums to sums and products to products, i.e. establishes a ring isomorphism:
These results will now be applied to the efficient calculation of cyclic convolutions. Let and
be two vectors of length N, and let
be obtained by cyclic convolution of U and V:
The very simple but crucial result is that this cyclic convolution may be carried out by polynomial multiplication modulo
: if
then the above relation is equivalent to
Now the polynomial
can be factored over the field of rational numbers into irreducible factors called cyclotomic polynomials: if d is the number of divisors of N, including 1 and N, then
where the cyclotomics
are well known (Nussbaumer, 1981
; Schroeder, 1986
, Chapter 22). We may now invoke the CRT, and exploit the ring isomorphism it establishes to simplify the calculation of
from
and
as follows:
When N is not too large, i.e. for `short cyclic convolutions', the are very simple, with coefficients 0 or ±1, so that (i)
only involves a small number of additions. Furthermore, special techniques have been developed to multiply general polynomials modulo cyclotomic polynomials, thus helping keep the number of multiplications in (ii)
and (iii)
to a minimum. As a result, cyclic convolutions can be calculated rapidly when N is sufficiently composite.
It will be recalled that Rader's multiplicative indexing often gives rise to cyclic convolutions of length $p - 1$ for p an odd prime. Since $p - 1$ is highly composite for all $p \le 50$ other than 23 and 47, these cyclic convolutions can be performed more efficiently by the above procedure than by DFT.
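The basic equivalence stated above, namely that cyclic convolution of length N is multiplication of the associated polynomials modulo $z^N - 1$, can be checked directly; the further splitting of this product over the cyclotomic factors via the polynomial CRT is what the combined algorithms exploit. The sketch below (test vectors are arbitrary choices of ours) compares the two computations.

```python
def cyclic_convolve(u, v):
    """Direct cyclic convolution: w_k = sum_i u_i v_{(k-i) mod N}."""
    n = len(u)
    return [sum(u[i] * v[(k - i) % n] for i in range(n)) for k in range(n)]

def poly_mult_mod_zN_minus_1(u, v):
    """Multiply the polynomials U(z) = sum u_i z^i and V(z) = sum v_j z^j,
    then reduce modulo z^N - 1 by folding the exponents back mod N."""
    n = len(u)
    w = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[(i + j) % n] += ui * vj
    return w

U = [1, 2, 0, -1, 3]
V = [4, 0, 1, 2, -2]
print(cyclic_convolve(U, V))
print(poly_mult_mod_zN_minus_1(U, V))   # identical results
```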
These combined algorithms are due to Winograd (1977, 1978
, 1980
), and are known collectively as `Winograd small FFT algorithms'. Winograd also showed that they can be thought of as bringing the DFT matrix F to the following `normal form':
where
The elements on the diagonal of B can be shown to be either real or pure imaginary, by the same argument as in Section 1.3.3.2.3.1. Matrices A and C may be rectangular rather than square, so that intermediate results may require extra storage space.
From an algorithmic point of view, the distinction between one-dimensional (1D) and multidimensional DFTs is somewhat blurred by the fact that some factoring techniques turn a 1D transform into a multidimensional one. The distinction made here, however, is a practical one and is based on the dimensionality of the indexing sets for data and results. This section will therefore be concerned with the problem of factoring the DFT when the indexing sets for the input data and output results are multidimensional.
The DFT was defined in Section 1.3.2.7.4 in an n-dimensional setting and it was shown that when the decimation matrix N is diagonal, say
, then
has a tensor product structure:
This may be rewritten as follows:
where the I's are identity matrices and × denotes ordinary matrix multiplication. The matrix within each bracket represents a one-dimensional DFT along one of the n dimensions, the other dimensions being left untransformed. As these matrices commute, the order in which the successive 1D DFTs are performed is immaterial.
This is the most straightforward method for building an n-dimensional algorithm from existing 1D algorithms. It is known in crystallography under the name of `Beevers–Lipson factorization' (Section 1.3.4.3.1), and in signal processing as the `row–column method'.
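The row–column method is easily expressed in code: transform along one dimension at a time, and the result agrees with the defining double sum. In the sketch below (function names and test array are ours; the normalization follows the 1D examples above), the order of the two passes is immaterial since the corresponding matrices commute.

```python
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def dft_naive(X):
    N = len(X)
    return [sum(X[k] * e(-ks * k / N) for k in range(N)) / N for ks in range(N)]

def dft2_direct(X):
    """2D DFT by the defining double sum (1/(N1*N2) normalization)."""
    N1, N2 = len(X), len(X[0])
    return [[sum(X[k1][k2] * e(-(h1 * k1 / N1 + h2 * k2 / N2))
                 for k1 in range(N1) for k2 in range(N2)) / (N1 * N2)
             for h2 in range(N2)] for h1 in range(N1)]

def dft2_row_column(X):
    """Row-column (Beevers-Lipson) method: 1D transforms along each dimension in turn."""
    rows = [dft_naive(row) for row in X]                    # transform along k2
    cols = [dft_naive(list(col)) for col in zip(*rows)]     # then along k1; indexed [h2][h1]
    return [list(r) for r in zip(*cols)]                    # transpose back to [h1][h2]

X = [[complex(i * j % 5, (i + j) % 3) for j in range(4)] for i in range(3)]
A, B = dft2_direct(X), dft2_row_column(X)
print(max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(4)))   # ~1e-16
```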
Substantial reductions in the arithmetic cost, as well as gains in flexibility, can be obtained if the factoring of the DFT is carried out in several dimensions simultaneously. The presentation given here is a generalization of that of Mersereau & Speake (1981), using the abstract setting established independently by Auslander, Tolimieri & Winograd (1982)
.
Let us return to the general n-dimensional setting of Section 1.3.2.7.4, where the DFT was defined for an arbitrary decimation matrix N by the formulae (where
denotes
):
with
Let us now assume that this decimation can be factored into d successive decimations, i.e. that and hence
Then the coset decomposition formulae corresponding to these successive decimations (Section 1.3.2.7.1
) can be combined as follows:
with
. Therefore, any
may be written uniquely as
Similarly:
so that any
may be written uniquely as
with
. These decompositions are the vector analogues of the multi-radix number representation systems used in the Cooley–Tukey factorization.
We may then write the definition of with
factors as
The argument of e(–) may be expanded as
The first summand may be recognized as a twiddle factor, the second and third as the kernels of
and
, respectively, while the fourth is an integer which may be dropped. We are thus led to a `vector-radix' version of the Cooley–Tukey algorithm, in which the successive decimations may be introduced in all n dimensions simultaneously by general integer matrices. The computation may be decomposed into five stages analogous to those of the one-dimensional algorithm of Section 1.3.3.2.1
:
The initial -point transform
can thus be performed as
transforms
on
points, followed by
transforms
on
points. This process can be applied successively to all d factors. The same decomposition applies to
, up to the complex conjugation of twiddle factors, the normalization factor
being obtained as the product of the factors
in the successive partial transforms
.
The geometric interpretation of this factorization in terms of partial transforms on translates of sublattices applies in full to this n-dimensional setting; in particular, the twiddle factors are seen to be related to the residual translations which place the sublattices in register within the big lattice. If the intermediate transforms are performed in place, then the quantity will eventually be found at location
so that the final results will have to be unscrambled by a process which may be called `coset reversal', the vector equivalent of digit reversal.
Factoring by 2 in all n dimensions simultaneously, i.e. taking , leads to `n-dimensional butterflies'. Decimation in time corresponds to the choice
, so that
is an n-dimensional parity class; the calculation then proceeds by
Decimation in frequency corresponds to the choice
,
, so that
labels `octant' blocks of shape M; the calculation then proceeds through the following steps:
i.e. the
parity classes of results, corresponding to the different
, are obtained separately. When the dimension n is 2 and the decimating matrix is diagonal, this analysis reduces to the `vector radix FFT' algorithms proposed by Rivard (1977)
and Harris et al. (1977)
. These lead to substantial reductions in the number M of multiplications compared to the row–column method: M is reduced to
by simultaneous
factoring, and to
by simultaneous
factoring.
The use of a non-diagonal decimating matrix may bring savings in computing time if the spectrum of the band-limited function under study is of such a shape as to pack more compactly in a non-rectangular than in a rectangular lattice (Mersereau, 1979). If, for instance, the support K of the spectrum Φ is contained in a sphere, then a decimation matrix producing a close packing of these spheres will yield an aliasing-free DFT algorithm with fewer sample points than the standard algorithm using a rectangular lattice.
Suppose that the decimation matrix N is diagonal and let each diagonal element be written in terms of its prime factors:
where m is the total number of distinct prime factors present in the
.
The CRT may be used to turn each 1D transform along dimension i into a multidimensional transform with a separate `pseudo-dimension' for each distinct prime factor of
; the number
, of these pseudo-dimensions is equal to the cardinality of the set:
The full n-dimensional transform thus becomes μ-dimensional, with
.
We may now permute the μ pseudo-dimensions so as to bring into contiguous position those corresponding to the same prime factor ; the m resulting groups of pseudo-dimensions are said to define `p-primary' blocks. The initial transform is now written as a tensor product of m p-primary transforms, where transform j is on
points [by convention, dimension i is not transformed if
]. These p-primary transforms may be computed, for instance, by multidimensional Cooley–Tukey factorization (Section 1.3.3.3.1
), which is faster than the straightforward row–column method. The final results may then be obtained by reversing all the permutations used.
The extra gain with respect to the multidimensional Cooley–Tukey method is that there are no twiddle factors between p-primary pieces corresponding to different primes p.
The case where N is not diagonal has been examined by Guessoum & Mersereau (1986).
Suppose that the CRT has been used as above to map an n-dimensional DFT to a μ-dimensional DFT. For each [κ runs over those pairs (i, j) such that
], the Rader/Winograd procedure may be applied to put the matrix of the κth 1D DFT in the CBA normal form of a Winograd small FFT. The full DFT matrix may then be written, up to permutation of data and results, as
A well known property of the tensor product of matrices allows this to be rewritten as and thus to form a matrix in which the combined pre-addition, multiplication and post-addition matrices have been precomputed. This procedure, called nesting, can be shown to afford a reduction of the arithmetic operation count compared to the row–column method (Morris, 1978
).
Clearly, the nesting rearrangement need not be applied to all μ dimensions, but can be restricted to any desired subset of them.
Nussbaumer's approach views the DFT as the evaluation of certain polynomials constructed from the data (as in Section 1.3.3.2.4). For instance, putting
, the 1D N-point DFT
may be written
where the polynomial Q is defined by
Let us consider (Nussbaumer & Quandalle, 1979) a 2D transform of size
:
By introduction of the polynomials
this may be rewritten:
Let us now suppose that is coprime to N. Then
has a unique inverse modulo N (denoted by
), so that multiplication by
simply permutes the elements of
and hence
for any function f over
. We may thus write:
where
Since only the value of polynomial
at
is involved in the result, the computation of
may be carried out modulo the unique cyclotomic polynomial
such that
. Thus, if we define:
we may write:
or equivalently
For N an odd prime p, all non-zero values of $k_1$ are coprime with p, so that the $p \times p$-point DFT may be calculated by (1) forming the required polynomials for each $k_2$; (2) evaluating them at $\omega^{k_1}$ for all $k_1 \neq 0$; (3) reindexing; and (4) treating the terms with $k_1 = 0$ separately. Step (1) is a set of p `polynomial transforms' involving no multiplications; step (2) consists of p DFTs on p points each, since evaluating a polynomial with coefficients $y_m$ at the p points $\omega^{k_1}$ amounts to forming the sums $\sum_{m} y_m \omega^{m k_1}$; step (3) is a permutation; and step (4) is a p-point DFT. Thus the 2D DFT on $p \times p$ points, which takes 2p p-point DFTs by the row–column method, involves only $p + 1$ p-point DFTs; the other DFTs have been replaced by polynomial transforms involving only additions.
This procedure can be extended to n dimensions, and reduces the number of 1D p-point DFTs from $n p^{n-1}$ for the row–column method to $(p^{n} - 1)/(p - 1)$, at the cost of introducing extra additions in the polynomial transforms.
A similar algorithm has been formulated by Auslander et al. (1983) in terms of Galois theory.
The mathematical analysis of the structure of DFT computations has brought to light a broad variety of possibilities for reducing or reshaping their arithmetic complexity. All of them are `analytic' in that they break down large transforms into a succession of smaller ones.
These results may now be considered from the converse `synthetic' viewpoint as providing a list of procedures for assembling them:
The simplest DFTs may then be assembled into a global algorithm in many different ways. The diagrams in Fig. 1.3.3.1 illustrate a few of the options available to compute a 400-point DFT. They may differ greatly in their arithmetic operation counts.
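The following Python sketch gives a crude feel for how the choice of factorization affects cost; the cost model (naive small-DFT butterflies plus one twiddle pass per stage) is an illustrative assumption, not the operation counts of Fig. 1.3.3.1.

```python
# Rough mixed-radix Cooley-Tukey cost model for a 400-point DFT (illustrative only).
from math import prod

def cooley_tukey_cost(radices):
    N = prod(radices)
    butterflies = N * sum(radices)        # naive small-DFT cost at each stage
    twiddles = N * (len(radices) - 1)     # one twiddle-factor pass between stages
    return butterflies + twiddles

for radices in [(400,), (2, 200), (20, 20), (16, 25), (2, 2, 2, 2, 5, 5)]:
    print(radices, cooley_tukey_cost(radices))
```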
To obtain a truly useful measure of the computational complexity of a DFT algorithm, its arithmetic operation count must be tempered by computer architecture considerations. Three main types of trade-offs must be borne in mind:
Many of the mathematical developments above took place in the context of single-processor serial computers, where f.p. addition is substantially cheaper than f.p. multiplication but where integer address arithmetic has to compete with f.p. arithmetic for processor cycles. As a result, the alternatives to the Cooley–Tukey algorithm hardly ever led to particularly favourable trade-offs, thus creating the impression that there was little to gain by switching to more exotic algorithms.
The advent of new machine architectures with vector and/or parallel processing features has greatly altered this picture (Pease, 1968; Korn & Lambiotte, 1979
; Fornberg, 1981
; Swarztrauber, 1984
):
Another major consideration is that of data flow [see e.g. Nawab & McClellan (1979)]. Serial machines only have few registers and few paths connecting them, and allow little or no overlap between computation and data movement. New architectures, on the other hand, comprise banks of vector registers (or `cache memory') besides the usual internal registers, and dedicated ALUs can service data transfers between several of them simultaneously and concurrently with computation.
In this new context, the devices described in Sections 1.3.3.2 and 1.3.3.3
for altering the balance between the various types of arithmetic operations, and reshaping the data flow during the computation, are invaluable. The field of machine-dependent DFT algorithm design is thriving on them [see e.g. Temperton (1983a
,b
,c
, 1985
); Agarwal & Cooley (1986
, 1987
)].
In order to explore systematically all possible algorithms for carrying out a given DFT computation, and to pick the one best suited to a given machine, attempts have been made to develop:
Task (i) can be accomplished by systematic use of a tensor product notation to represent the various stages into which the DFT can be factored (reindexing, small transforms on subsets of indices, twiddle factors, digit-reversal permutations).
Task (ii) may for instance use the Winograd CBA normal form for each small transform, then apply the rules governing the rearrangement of tensor product
and ordinary product × operations on matrices. The matching of these rearrangements to the architecture of a vector and/or parallel computer can be formalized algebraically [see e.g. Chapter 2 of Tolimieri et al. (1989)
].
Task (iii) is a complex search which requires techniques such as dynamic programming (Bellman, 1958
).
Johnson & Burrus (1983) have proposed and tested such a scheme to identify the optimal trade-offs between prime factor nesting and Winograd nesting of small Winograd transforms. In step (ii)
, they further decomposed the pre-addition matrix A and post-addition matrix C into several factors, so that the number of design options available becomes very large: the N-point DFT when N has four factors can be calculated in over $10^{12}$ distinct ways.
This large family of nested algorithms contains the prime factor algorithm and the Winograd algorithms as particular cases, but usually achieves greater efficiency than either by reducing the f.p. multiplication count while keeping the number of f.p. additions small.
There is little doubt that this systematic approach will be extended so as to incorporate all available methods of restructuring the DFT.
The central role of the Fourier transformation in X-ray crystallography is a consequence of the kinematic approximation used in the description of the scattering of X-rays by a distribution of electrons (Bragg, 1915; Duane, 1925
; Havighurst, 1925a
,b
; Zachariasen, 1945
; James, 1948a
, Chapters 1 and 2; Lipson & Cochran, 1953
, Chapter 1; Bragg, 1975
).
Let $\rho(\mathbf{X})$ be the density of electrons in a sample of matter contained in a finite region V which is being illuminated by a parallel monochromatic X-ray beam with wavevector $\mathbf{K}_0$. Then the far-field amplitude scattered in a direction corresponding to wavevector $\mathbf{K} = \mathbf{K}_0 + \mathbf{H}$ is proportional to
$$F(\mathbf{H}) = \int_{V} \rho(\mathbf{X}) \exp(2\pi i\, \mathbf{H} \cdot \mathbf{X})\, \mathrm{d}^{3}\mathbf{X}.$$
In certain model calculations, the `sample' may contain not only volume charges, but also point, line and surface charges. These singularities may be accommodated by letting ρ be a distribution, and writing
$$F(\mathbf{H}) = \langle \rho_{\mathbf{X}},\ \exp(2\pi i\, \mathbf{H} \cdot \mathbf{X}) \rangle.$$
F is still a well behaved function (analytic, by Section 1.3.2.4.2.10) because ρ has been assumed to have compact support.
If the sample is assumed to be an infinite crystal, so that ρ is now a periodic distribution, the customary limiting process by which it is shown that F becomes a discrete series of peaks at reciprocal-lattice points (see e.g. von Laue, 1936; Ewald, 1940
; James, 1948a
p. 9; Lipson & Taylor, 1958
, pp. 14–27; Ewald, 1962
, pp. 82–101; Warren, 1969
, pp. 27–30) is already subsumed under the treatment of Section 1.3.2.6
.
Let ρ be the distribution of electrons in a crystal. Then, by definition of a crystal, ρ is Λ-periodic for some period lattice Λ (Section 1.3.2.6.5) so that there exists a motif distribution $\rho^{0}$ with compact support such that
$$\rho = R * \rho^{0}, \qquad \text{where } R = \sum_{\mathbf{X} \in \Lambda} \delta_{(\mathbf{X})}.$$
The lattice Λ is usually taken to be the finest for which the above representation holds.
Let Λ have a basis $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3)$ over the integers, these basis vectors being expressed in terms of a standard orthonormal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ as
$$\mathbf{a}_k = \sum_{j = 1}^{3} A_{jk}\, \mathbf{e}_j.$$
Then the matrix $\mathbf{A} = (A_{jk})$ is the period matrix of Λ (Section 1.3.2.6.5) with respect to the unit lattice with basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$, and the volume V of the unit cell is given by $V = |\det \mathbf{A}|$.
By Fourier transformation, the periodization by Λ becomes a multiplication by the lattice distribution $R^{*} = \sum_{\mathbf{H} \in \Lambda^{*}} \delta_{(\mathbf{H})}$ associated to the reciprocal lattice $\Lambda^{*}$ (up to a factor of 1/V). The basis vectors $\mathbf{a}_1^{*}, \mathbf{a}_2^{*}, \mathbf{a}_3^{*}$ of $\Lambda^{*}$ have coordinates in $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ given by the columns of $(\mathbf{A}^{-1})^{T}$, whose expression in terms of the cofactors of A (see Section 1.3.2.6.5) gives the familiar formulae involving the cross product of vectors for $n = 3$. The H-distribution F of scattered amplitudes may be written
$$F = \sum_{\mathbf{H} \in \Lambda^{*}} F(\mathbf{H})\, \delta_{(\mathbf{H})}$$
and is thus a weighted reciprocal-lattice distribution, the weight $F(\mathbf{H})$ attached to each node $\mathbf{H} \in \Lambda^{*}$ being the value at H of the transform of the motif $\rho^{0}$. Taken in conjunction with the assumption that the scattering is elastic, i.e. that H only changes the direction but not the magnitude of the incident wavevector $\mathbf{K}_0$, this result yields the usual forms (Laue or Bragg) of the diffraction conditions: $\mathbf{H} \in \Lambda^{*}$, and simultaneously H lies on the Ewald sphere.
By the reciprocity theorem, ρ can be recovered if F is known for all $\mathbf{H} \in \Lambda^{*}$ as follows [Section 1.3.2.6.5, e.g. (iv)]:
$$\rho(\mathbf{X}) = \frac{1}{V} \sum_{\mathbf{H} \in \Lambda^{*}} F(\mathbf{H}) \exp(-2\pi i\, \mathbf{H} \cdot \mathbf{X}).$$
These relations may be rewritten in terms of standard, or `fractional crystallographic', coordinates by putting $\mathbf{X} = \mathbf{A}\mathbf{x}$ and $\mathbf{H} = (\mathbf{A}^{-1})^{T}\mathbf{h}$, so that a unit cell of the crystal corresponds to $\mathbf{x} \in \mathbb{R}^{3}/\mathbb{Z}^{3}$, and that $\mathbf{H} \cdot \mathbf{X} = \mathbf{h} \cdot \mathbf{x}$. Defining $\rho(\mathbf{x})$ and $F(\mathbf{h})$ by $\rho(\mathbf{x}) = V\rho(\mathbf{X})$ and $F(\mathbf{h}) = F(\mathbf{H})$, so that ρ is now expressed in electrons per unit cell, we have
$$F(\mathbf{h}) = \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \rho(\mathbf{x}) \exp(2\pi i\, \mathbf{h} \cdot \mathbf{x})\, \mathrm{d}^{3}\mathbf{x}, \qquad \rho(\mathbf{x}) = \sum_{\mathbf{h} \in \mathbb{Z}^{3}} F(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}).$$
These formulae are valid for an arbitrary motif distribution $\rho^{0}$, provided the convergence of the Fourier series for ρ is considered from the viewpoint of distribution theory (Section 1.3.2.6.10.3).
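On a finite grid, the conventions just stated map directly onto the discrete transforms provided by NumPy; the following sketch (with a synthetic two-`atom' density, not real data) treats np.fft.ifftn as the analysis ρ → F and np.fft.fftn as the synthesis F → ρ.

```python
# Fourier analysis and synthesis in the convention used here: rho in electrons per
# unit cell, F(h) = integral of rho(x) exp(+2*pi*i h.x) over one cell, so that
# rho(x) = sum_h F(h) exp(-2*pi*i h.x).  On an N1 x N2 x N3 grid this corresponds
# to numpy's ifftn (rho -> F) and fftn (F -> rho).
import numpy as np

N = (16, 16, 16)
x = np.stack(np.meshgrid(*[np.arange(n) / n for n in N], indexing='ij'), axis=-1)

def gaussian_atom(centre, b=0.02):
    d = x - np.asarray(centre)
    d -= np.round(d)                       # minimum-image convention in fractional coords
    return np.exp(-np.sum(d * d, axis=-1) / b)

rho = 6.0 * gaussian_atom((0.1, 0.2, 0.3)) + 8.0 * gaussian_atom((0.6, 0.4, 0.7))

F = np.fft.ifftn(rho)                      # structure factors F(h), indices taken mod N
rho_back = np.fft.fftn(F).real             # Fourier synthesis recovers the map
assert np.allclose(rho, rho_back)
print("F(000) =", F[0, 0, 0].real)         # the cell average of rho in this normalization
```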
The experienced crystallographer may notice the absence of the familiar factor 1/V from the expression for ρ just given. This is because we use the (mathematically) natural unit for ρ, the electron per unit cell, which matches the dimensionless nature of the crystallographic coordinates x and of the associated volume element $\mathrm{d}^{3}\mathbf{x}$. The traditional factor 1/V was the result of the somewhat inconsistent use of x as an argument but of $\mathrm{d}^{3}\mathbf{X}$ as a volume element to obtain ρ in electrons per unit volume (e.g. Å³). A fortunate consequence of the present convention is that nuisance factors of V or 1/V, which used to abound in convolution or scalar product formulae, are now absent.
It should be noted at this point that the crystallographic terminology regarding ρ and F differs from the standard mathematical terminology introduced in Section 1.3.2.4.1 and applied to periodic distributions in Section 1.3.2.6.4: F is the inverse Fourier transform of ρ rather than its Fourier transform, and the calculation of ρ is called a Fourier synthesis in crystallography even though it is mathematically a Fourier analysis. The origin of this discrepancy may be traced to the fact that the mathematical theory of the Fourier transformation originated with the study of temporal periodicity, while crystallography deals with spatial periodicity; since the expression for the phase factor of a plane wave is $\exp[2\pi i(\nu t - \mathbf{K} \cdot \mathbf{X})]$, the difference in sign between the contributions from time versus spatial displacements makes this conflict unavoidable.
In many cases, $\rho^{0}$ is a sum of translates of atomic electron-density distributions. Assume there are n distinct chemical types of atoms, with $N_j$ identical isotropic atoms of type j described by an electron distribution $\sigma_j(\mathbf{x})$ about their centre of mass. According to quantum mechanics each $\sigma_j$ is a smooth rapidly decreasing function of x, i.e. $\sigma_j \in \mathscr{S}$, hence $\bar{\sigma}_j \in \mathscr{S}$, and (ignoring the effect of thermal agitation)
$$\rho^{0}(\mathbf{x}) = \sum_{j = 1}^{n} \sum_{k_j = 1}^{N_j} \sigma_j(\mathbf{x} - \mathbf{x}_{k_j}),$$
which may be written (Section 1.3.2.5.8)
$$\rho^{0} = \sum_{j = 1}^{n} \sigma_j * \left[ \sum_{k_j = 1}^{N_j} \delta_{(\mathbf{x}_{k_j})} \right].$$
By Fourier transformation:
$$F(\mathbf{h}) = \sum_{j = 1}^{n} \bar{\sigma}_j(\mathbf{h}) \left[ \sum_{k_j = 1}^{N_j} \exp(2\pi i\, \mathbf{h} \cdot \mathbf{x}_{k_j}) \right].$$
Defining the form factor $f_j$ of atom j as a function of h to be
$$f_j(\mathbf{h}) = \bar{\sigma}_j(\mathbf{h}) = \int_{\mathbb{R}^{3}} \sigma_j(\mathbf{x}) \exp(2\pi i\, \mathbf{h} \cdot \mathbf{x})\, \mathrm{d}^{3}\mathbf{x},$$
we have
$$F(\mathbf{h}) = \sum_{j = 1}^{n} f_j(\mathbf{h}) \sum_{k_j = 1}^{N_j} \exp(2\pi i\, \mathbf{h} \cdot \mathbf{x}_{k_j}).$$
If $\mathbf{X}$ and $\mathbf{H}$ are the real- and reciprocal-space coordinates in Å and Å$^{-1}$, and if $\sigma_j(\|\mathbf{X}\|)$ is the spherically symmetric electron-density function for atom type j, then
$$f_j(\|\mathbf{H}\|) = \int_{0}^{\infty} 4\pi \|\mathbf{X}\|^{2}\, \sigma_j(\|\mathbf{X}\|)\, \frac{\sin(2\pi \|\mathbf{H}\| \|\mathbf{X}\|)}{2\pi \|\mathbf{H}\| \|\mathbf{X}\|}\, \mathrm{d}\|\mathbf{X}\|.$$
More complex expansions are used for electron-density studies (see Chapter 1.2
in this volume). Anisotropic Gaussian atoms may be dealt with through the formulae given in Section 1.3.2.4.4.2
.
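As a numerical illustration of the radial formula just given, the following Python sketch computes the form factor of a Gaussian `atom' by direct quadrature and compares it with the analytical Gaussian transform; the values of Z and a are arbitrary, not tabulated scattering-factor data.

```python
# Check that a spherical Gaussian density rho(r) = Z (a/pi)^(3/2) exp(-a r^2)
# (r in angstroms) has form factor f(s) = Z exp(-pi^2 s^2 / a) (s in 1/angstroms),
# using  f(s) = 4*pi * integral r^2 rho(r) sin(2*pi*s*r)/(2*pi*s*r) dr.
import numpy as np

Z, a = 6.0, 5.0                          # illustrative values only
r = np.linspace(1e-6, 10.0, 20000)
dr = r[1] - r[0]
rho = Z * (a / np.pi) ** 1.5 * np.exp(-a * r * r)

def form_factor(s):
    kernel = np.sinc(2.0 * s * r)        # np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(4.0 * np.pi * r * r * rho * kernel) * dr

for s in (0.0, 0.2, 0.5):
    print(s, form_factor(s), Z * np.exp(-np.pi ** 2 * s * s / a))
```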
The convergence of the Fourier series for ρ is usually examined from the classical point of view (Section 1.3.2.6.10). The summation of multiple Fourier series meets with considerable difficulties, because there is no natural order in $\mathbb{Z}^{3}$ to play the role of the natural order in $\mathbb{Z}$ (Ash, 1976). In crystallography, however, the structure factors $F(\mathbf{h})$ are often obtained within spheres $\|\mathbf{h}\| \leq \Delta^{-1}$ for increasing resolution (decreasing Δ). Therefore, successive estimates of ρ are most naturally calculated as the corresponding partial sums (Section 1.3.2.6.10.1):
$$S_{\Delta}(\rho)(\mathbf{x}) = \sum_{\|\mathbf{h}\| \leq \Delta^{-1}} F(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}).$$
This may be written
$$S_{\Delta}(\rho)(\mathbf{x}) = (D_{\Delta} * \rho)(\mathbf{x}),$$
where $D_{\Delta}$ is the `spherical Dirichlet kernel'
$$D_{\Delta}(\mathbf{x}) = \sum_{\|\mathbf{h}\| \leq \Delta^{-1}} \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}).$$
$D_{\Delta}$ exhibits numerous negative ripples around its central peak. Thus the `series termination errors' incurred by using $S_{\Delta}(\rho)$ instead of ρ consist of negative ripples around each atom, and may lead to a Gibbs-like phenomenon (Section 1.3.2.6.10.1) near a molecular boundary.
As in one dimension, Cesàro sums (arithmetic means of partial sums) have better convergence properties, as they lead to a convolution by a `spherical Fejér kernel' which is everywhere positive. Thus Cesàro summation will always produce positive approximations to a positive electron density. Other positive summation kernels were investigated by Pepinsky (1952) and by Waser & Schomaker (1953)
.
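A one-dimensional analogue (the argument above concerns the spherical kernels in 3D) can be used to visualize the difference between truncated and Cesàro-summed series; in the following Python sketch the `density' is a narrow box, an assumption made only for illustration.

```python
# Series-termination ripples: the truncated Fourier sum of a non-negative density
# goes negative near sharp features, while the Cesaro (Fejer-weighted) sum stays
# essentially non-negative because the Fejer kernel is non-negative.
import numpy as np

M = 512
x = np.arange(M) / M
rho = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)      # narrow box centred at x = 0.5
F = np.fft.ifft(rho)                                  # coefficients, rho(x) = sum F(h) e(-hx)
h = np.abs(np.fft.fftfreq(M, d=1.0 / M))              # |h| as integers

def partial_sum(hmax):
    return np.fft.fft(F * (h <= hmax)).real           # Dirichlet-kernel truncation

def cesaro_sum(hmax):
    weights = np.clip(1.0 - h / (hmax + 1), 0.0, None)   # triangular (Fejer) weights
    return np.fft.fft(F * weights).real

print("min of truncated sum:", partial_sum(20).min())    # clearly negative ripples
print("min of Cesaro sum   :", cesaro_sum(20).min())     # >= 0 up to rounding error
```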
If the wavelength λ of the incident X-rays is far from any absorption edge of the atoms in the crystal, there is a constant phase shift in the scattering, and the electron density may be considered to be real-valued. Then
$$F(-\mathbf{h}) = \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \rho(\mathbf{x}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x})\, \mathrm{d}^{3}\mathbf{x} = \overline{F(\mathbf{h})}.$$
Thus if $F(\mathbf{h}) = |F(\mathbf{h})| \exp[i\varphi(\mathbf{h})]$, then
$$|F(-\mathbf{h})| = |F(\mathbf{h})| \quad \text{and} \quad \varphi(-\mathbf{h}) = -\varphi(\mathbf{h}).$$
This is Friedel's law (Friedel, 1913). The set $\{F(\mathbf{h})\}$ of Fourier coefficients is said to have Hermitian symmetry.
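In discrete form, Friedel's law is the Hermitian symmetry of the transform of a real array, which the following minimal NumPy check illustrates on an arbitrary real `density'.

```python
# Friedel's law for a real-valued density: F(-h) = conj(F(h)).
import numpy as np

rng = np.random.default_rng(1)
rho = rng.random((8, 8, 8))                     # any real-valued 'electron density'
F = np.fft.ifftn(rho)

h = (2, 5, 1)
minus_h = tuple((-i) % n for i, n in zip(h, rho.shape))
assert np.isclose(F[h], np.conj(F[minus_h]))    # equal amplitudes, opposite phases
```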
If λ is close to some absorption edge(s), the proximity to resonance induces an extra phase shift, whose effect may be represented by letting the form factors $f_j$ take on complex values. Let
$$f_j = f_j^{0} + f_j' + i f_j''$$
and correspondingly, by termwise Fourier transformation,
$$F = F^{0} + F' + iF''.$$
Since $f_j^{0} + f_j'$ and $f_j''$ are both real, $F^{0} + F'$ and $F''$ are both Hermitian symmetric, hence
$$\overline{F(-\mathbf{h})} = (F^{0} + F')(\mathbf{h}) - iF''(\mathbf{h}),$$
while
$$F(\mathbf{h}) = (F^{0} + F')(\mathbf{h}) + iF''(\mathbf{h}).$$
Thus $F(-\mathbf{h}) \neq \overline{F(\mathbf{h})}$, so that Friedel's law is violated. The components $F^{0} + F'$ and $F''$, which do obey Friedel's law, may be expressed as:
$$(F^{0} + F')(\mathbf{h}) = \tfrac{1}{2}\big[F(\mathbf{h}) + \overline{F(-\mathbf{h})}\big], \qquad F''(\mathbf{h}) = \tfrac{1}{2i}\big[F(\mathbf{h}) - \overline{F(-\mathbf{h})}\big].$$
By Section 1.3.2.4.3.3 and Section 1.3.2.6.10.2
,
Usually
is real and positive, hence
, but the identity remains valid even when
is made complex-valued by the presence of anomalous scatterers.
If $\{G(\mathbf{h})\}$ is the collection of structure factors belonging to another electron density ρ′ with the same period lattice as ρ, then
$$\sum_{\mathbf{h} \in \mathbb{Z}^{3}} \overline{F(\mathbf{h})}\, G(\mathbf{h}) = \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \overline{\rho(\mathbf{x})}\, \rho'(\mathbf{x})\, \mathrm{d}^{3}\mathbf{x}.$$
Thus, norms and inner products may be evaluated either from structure factors or from `maps'.
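A discrete version of this statement can be checked directly; in the normalization assumed here (electrons per cell, fractional volume element 1/(N1·N2·N3) per grid point), Parseval's identity reads as follows.

```python
# Discrete Parseval check: cell integral of rho1*rho2 equals sum_h F1(h)*conj(F2(h)).
import numpy as np

rng = np.random.default_rng(2)
rho1, rho2 = rng.random((12, 12, 12)), rng.random((12, 12, 12))
F1, F2 = np.fft.ifftn(rho1), np.fft.ifftn(rho2)

lhs = np.mean(rho1 * rho2)                 # Riemann sum with d^3x = 1/(N1*N2*N3)
rhs = np.sum(F1 * np.conj(F2)).real
assert np.isclose(lhs, rhs)
```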
Let ρ and ρ′ be two electron densities referred to crystallographic coordinates, with structure factors $\{F(\mathbf{h})\}$ and $\{G(\mathbf{h})\}$, so that
$$\rho(\mathbf{x}) = \sum_{\mathbf{h}} F(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}), \qquad \rho'(\mathbf{x}) = \sum_{\mathbf{h}} G(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}).$$
The distribution $\rho * \rho'$ is well defined, since the generalized support condition (Section 1.3.2.3.9.7) is satisfied. The forward version of the convolution theorem implies that if
$$(\rho * \rho')(\mathbf{x}) = \sum_{\mathbf{h}} W(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}),$$
then $W(\mathbf{h}) = F(\mathbf{h})\, G(\mathbf{h})$.
If either ρ or ρ′ is infinitely differentiable, then the distribution ρρ′ exists, and if we analyse it as
$$\rho(\mathbf{x})\rho'(\mathbf{x}) = \sum_{\mathbf{h}} Y(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}),$$
then the backward version of the convolution theorem reads:
$$Y(\mathbf{h}) = \sum_{\mathbf{k} \in \mathbb{Z}^{3}} F(\mathbf{k})\, G(\mathbf{h} - \mathbf{k}) = (F * G)(\mathbf{h}).$$
The cross correlation between
and
is the
-periodic distribution defined by:
If
and
are locally integrable,
Let
The combined use of the shift property and of the forward convolution theorem then gives immediately:
hence the Fourier series representation of
:
Clearly,
, as shown by the fact that permuting F and G changes
into its complex conjugate.
The auto-correlation of ρ is defined as
$$P(\mathbf{t}) = \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \rho(\mathbf{x})\, \rho(\mathbf{x} + \mathbf{t})\, \mathrm{d}^{3}\mathbf{x}$$
and is called the Patterson function of ρ. If ρ consists of point atoms, i.e.
$$\rho(\mathbf{x}) = \sum_{j = 1}^{N} Z_j\, \delta_{(\mathbf{x}_j)},$$
then
$$P(\mathbf{t}) = \sum_{j = 1}^{N} \sum_{k = 1}^{N} Z_j Z_k\, \delta_{(\mathbf{x}_k - \mathbf{x}_j)}$$
contains information about interatomic vectors. It has the Fourier series representation
$$P(\mathbf{t}) = \sum_{\mathbf{h} \in \mathbb{Z}^{3}} |F(\mathbf{h})|^{2} \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{t}),$$
and is therefore calculable from the diffraction intensities alone. It was first proposed by Patterson (1934, 1935a,b) as an extension to crystals of the radially averaged correlation function used by Warren & Gingrich (1934) in the study of powders.
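The following Python sketch computes a Patterson map from intensities alone for a few point-like `atoms' at arbitrary positions; the origin peak is the largest, and the printed value corresponds to one interatomic vector.

```python
# Patterson synthesis from intensities: P(u) = sum_h |F(h)|^2 exp(-2*pi*i h.u).
import numpy as np

N = 32
rho = np.zeros((N, N, N))
atoms = {(3, 5, 7): 8.0, (10, 20, 4): 6.0, (25, 9, 30): 6.0}   # arbitrary positions
for pos, z in atoms.items():
    rho[pos] = z

F = np.fft.ifftn(rho)
patterson = np.fft.fftn(np.abs(F) ** 2).real

# Off-origin peak at the interatomic vector (10,20,4) - (3,5,7) = (7, 15, -3) mod N;
# the global maximum is the origin peak.
print(patterson[7, 15, (4 - 7) % N], patterson[0, 0, 0])
```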
Shannon's sampling and interpolation theorem (Section 1.3.2.7.1) takes two different forms, according to whether the property of finite bandwidth is assumed in real space or in reciprocal space.
It was shown at the end of Section 1.3.2.5.8 that the convolution theorem establishes, under appropriate assumptions, a duality between sectioning a smooth function (viewed as a multiplication by a δ-function in the sectioning coordinate) and projecting its transform (viewed as a convolution with the function 1 everywhere equal to 1 as a function of the projection coordinate). This duality follows from the fact that
and
map
to
and
to
(Section 1.3.2.5.6
), and from the tensor product property (Section 1.3.2.5.5
).
In the case of periodic distributions, projection and section must be performed with respect to directions or subspaces which are integral with respect to the period lattice if the result is to be periodic; furthermore, projections must be performed only on the contents of one repeating unit along the direction of projection, or else the result would diverge. The same relations then hold between principal central sections and projections of the electron density and the dual principal central projections and sections of the weighted reciprocal lattice, e.g. etc.
When the sections are principal but not central, it suffices to use the shift property of Section 1.3.2.5.5. When the sections or projections are not principal, they can be made principal by changing to new primitive bases B and $B^{*}$ for Λ and $\Lambda^{*}$, respectively, the transition matrices P and $P^{*}$ to these new bases being related by $P^{*} = (P^{-1})^{T}$ in order to preserve duality. This change of basis must be such that one of these matrices (say, P) should have a given integer vector u as its first column, u being related to the line or plane defining the section or projection of interest.
The problem of constructing a matrix P given u received an erroneous solution in Volume II of International Tables (Patterson, 1959), which was subsequently corrected in 1962. Unfortunately, the solution proposed there is complicated and does not suggest a general approach to the problem. It therefore seems worthwhile to record here an effective procedure which solves this problem in any dimension n (Watson, 1970
).
Let $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ be a primitive integral vector, i.e. g.c.d.$(u_1, \ldots, u_n) = 1$. Then an $n \times n$ integral matrix P with $\det P = 1$ having u as its first column can be constructed by induction as follows. For $n = 1$ the result is trivial. For $n = 2$ it can be solved by means of the Euclidean algorithm, which yields integers $z_1, z_2$ such that $u_1 z_2 - u_2 z_1 = 1$, so that we may take
$$P = \begin{pmatrix} u_1 & z_1 \\ u_2 & z_2 \end{pmatrix}.$$
Note that, if $\mathbf{z} = (z_1, z_2)$ is a solution, then $\mathbf{z} + m\mathbf{u}$ is another solution for any $m \in \mathbb{Z}$. For $n \geq 3$, write $\mathbf{u} = (u_1, d\mathbf{z})$ with $d = \text{g.c.d.}(u_2, \ldots, u_n)$ and $\mathbf{z} = (u_2/d, \ldots, u_n/d)$, so that both $(u_1, d)$ and $\mathbf{z}$ are primitive. By the inductive hypothesis there is an integral $2 \times 2$ matrix V with $(u_1, d)$ as its first column, and an integral $(n - 1) \times (n - 1)$ matrix Z with z as its first column, with $\det V = 1$ and $\det Z = 1$.
Now put
$$P = \begin{pmatrix} 1 & \\ & Z \end{pmatrix} \begin{pmatrix} V & \\ & I_{n - 2} \end{pmatrix},$$
i.e. the product of the matrix acting by Z on the last $n - 1$ coordinates with the matrix acting by V on the first two coordinates. The first column of P is
$$(u_1,\ d z_1,\ \ldots,\ d z_{n - 1})^{T} = \mathbf{u},$$
and its determinant is 1, QED.
The incremental step from dimension $n - 1$ to dimension n is the construction of the $2 \times 2$ matrix V, for which there exist infinitely many solutions labelled by an integer $m_n \in \mathbb{Z}$. Therefore, the collection of matrices P which solve the problem is labelled by $n - 1$ arbitrary integers $(m_2, m_3, \ldots, m_n)$. This freedom can be used to adjust the shape of the basis B.
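The n = 2 base case of this construction is easy to implement; the following Python sketch (helper names are ours, not from the text) completes a primitive vector (u1, u2) to a unimodular integral matrix via the extended Euclidean algorithm.

```python
# Complete a primitive integer vector (u1, u2) to an integral matrix of determinant 1.
def ext_gcd(p, q):
    """Return (g, a, b) with a*p + b*q = g = gcd(p, q)."""
    if q == 0:
        return p, 1, 0
    g, a, b = ext_gcd(q, p % q)
    return g, b, a - (p // q) * b

def complete_to_unimodular_2d(u1, u2):
    g, a, b = ext_gcd(u1, u2)
    assert g == 1, "u must be primitive"
    # columns: u = (u1, u2) and z = (-b, a);  det = u1*a - u2*(-b) = a*u1 + b*u2 = 1
    return [[u1, -b], [u2, a]]

P = complete_to_unimodular_2d(3, 5)
print(P, P[0][0] * P[1][1] - P[0][1] * P[1][0])   # determinant 1
```

Adding any integer multiple of u to the second column gives the other solutions mentioned in the text.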
Once P has been chosen, the calculation of general sections and projections is transformed into that of principal sections and projections by the changes of coordinates: and an appeal to the tensor product property.
Booth (1945a) made use of the convolution theorem to form the Fourier coefficients of `bounded projections', which provided a compromise between 2D and 3D Fourier syntheses. If it is desired to compute the projection on the (x, y) plane of the electron density lying between the planes
and
, which may be written as
The transform is then
giving for coefficient
:
Another particular instance of the convolution theorem is the duality between differentiation and multiplication by a monomial (Sections 1.3.2.4.2.8, 1.3.2.5.8). In the present context, differentiation of ρ corresponds to multiplication of its Fourier coefficients by monomials in the reciprocal-space coordinates, the result being expressible either in Cartesian or in crystallographic coordinates. A particular case of the Cartesian form is the Laplacian $\nabla^{2}\rho$ of ρ, whose Fourier coefficients are those of ρ multiplied by $-4\pi^{2}\|\mathbf{H}\|^{2}$. The crystallographic form has been used with first- or second-order monomials to compute `differential syntheses' and refine the location of maxima (or other stationary points) in electron-density maps. Indeed, the values at x of the gradient vector $\nabla\rho$ and Hessian matrix $(\nabla\nabla^{T})\rho$ are readily obtained as
$$(\nabla\rho)(\mathbf{x}) = \sum_{\mathbf{h} \in \mathbb{Z}^{3}} (-2\pi i\, \mathbf{h})\, F(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}), \qquad [(\nabla\nabla^{T})\rho](\mathbf{x}) = \sum_{\mathbf{h} \in \mathbb{Z}^{3}} (-4\pi^{2}\, \mathbf{h}\mathbf{h}^{T})\, F(\mathbf{h}) \exp(-2\pi i\, \mathbf{h} \cdot \mathbf{x}),$$
and a step of Newton iteration towards the nearest stationary point of ρ will proceed by
$$\mathbf{x} \longmapsto \mathbf{x} - \big\{[(\nabla\nabla^{T})\rho](\mathbf{x})\big\}^{-1}\, (\nabla\rho)(\mathbf{x}).$$
The modern use of Fourier transforms to speed up the computation of derivatives for model refinement will be described in Section 1.3.4.4.7.
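The following Python sketch illustrates a `differential synthesis' of this kind under the Fourier-series conventions assumed above: the gradient and Hessian of ρ at a point are accumulated from the structure factors, and a few Newton steps locate a nearby density maximum. The structure factors are generated from a synthetic single-peak density, not from real data.

```python
# Gradient/Hessian of rho(x) = sum_h F(h) exp(-2*pi*i h.x) and a Newton refinement
# of a peak position (all quantities synthetic, for illustration only).
import numpy as np

N = 16
grid = np.stack(np.meshgrid(*[np.arange(N) / N] * 3, indexing='ij'), axis=-1)
d = grid - np.array([0.31, 0.52, 0.17]); d -= np.round(d)
rho = np.exp(-np.sum(d * d, axis=-1) / 0.01)          # single Gaussian 'atom'
F = np.fft.ifftn(rho)                                  # its structure factors
h = np.stack(np.meshgrid(*[np.fft.fftfreq(N, 1.0 / N)] * 3, indexing='ij'), axis=-1)

def grad_hess(x):
    phase = F * np.exp(-2j * np.pi * (h @ x))
    grad = np.tensordot(phase, -2j * np.pi * h, axes=([0, 1, 2], [0, 1, 2])).real
    hess = -4 * np.pi ** 2 * np.einsum('ijkc,ijk,ijkd->cd', h, phase, h).real
    return grad, hess

x = np.array([0.28, 0.55, 0.15])                       # starting guess near the peak
for _ in range(5):
    g, H = grad_hess(x)
    x = x - np.linalg.solve(H, g)                      # Newton step towards the maximum
print(x)                                               # ~ (0.31, 0.52, 0.17)
```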
The converse property is also useful: it relates the derivatives of the continuous transform $\bar{\mathscr{F}}[\rho^{0}]$ to the moments of $\rho^{0}$:
$$\frac{\partial^{m_1 + m_2 + m_3} \bar{\mathscr{F}}[\rho^{0}]}{\partial H_1^{m_1}\, \partial H_2^{m_2}\, \partial H_3^{m_3}}(\mathbf{H}) = \int_{\mathbb{R}^{3}} (2\pi i X_1)^{m_1} (2\pi i X_2)^{m_2} (2\pi i X_3)^{m_3}\, \rho^{0}(\mathbf{X}) \exp(2\pi i\, \mathbf{H} \cdot \mathbf{X})\, \mathrm{d}^{3}\mathbf{X}.$$
For $m_1 + m_2 + m_3 = 2$ and $\mathbf{H} = \mathbf{0}$, this identity gives the well known relation between the Hessian matrix of the transform $\bar{\mathscr{F}}[\rho^{0}]$ at the origin of reciprocal space and the inertia tensor of the motif $\rho^{0}$. This is a particular case of the moment-generating properties of $\bar{\mathscr{F}}$, which will be further developed in Section 1.3.4.5.2.
The classical results presented in Section 1.3.2.6.9 can be readily generalized to the case of triple Fourier series; no new concept is needed, only an obvious extension of the notation.
Let ρ be real-valued, so that Friedel's law holds and $F(-\mathbf{h}) = \overline{F(\mathbf{h})}$. Let $\mathcal{H}$ be a finite set of indices comprising the origin: $\mathcal{H} = \{\mathbf{h}_0 = \mathbf{0}, \mathbf{h}_1, \ldots, \mathbf{h}_n\}$. Then the Hermitian form in $n + 1$ complex variables
$$T_{\mathcal{H}}[\rho](\mathbf{u}) = \sum_{j,k = 0}^{n} F(\mathbf{h}_j - \mathbf{h}_k)\, u_j \overline{u_k}$$
is called the Toeplitz form of order $\mathcal{H}$ associated to ρ. By the convolution theorem and Parseval's identity,
$$T_{\mathcal{H}}[\rho](\mathbf{u}) = \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \rho(\mathbf{x}) \left| \sum_{j = 0}^{n} u_j \exp(2\pi i\, \mathbf{h}_j \cdot \mathbf{x}) \right|^{2} \mathrm{d}^{3}\mathbf{x}.$$
If ρ is almost everywhere non-negative, then for all $\mathcal{H}$ the forms $T_{\mathcal{H}}[\rho]$ are positive semi-definite and therefore all Toeplitz determinants $D_{\mathcal{H}}[\rho]$ are non-negative, where
$$D_{\mathcal{H}}[\rho] = \det\big\{\big[F(\mathbf{h}_j - \mathbf{h}_k)\big]_{j,k = 0, \ldots, n}\big\}.$$
The Toeplitz–Carathéodory–Herglotz theorem given in Section 1.3.2.6.9.2 states that the converse is true: if $D_{\mathcal{H}}[\rho] \geq 0$ for all $\mathcal{H}$, then ρ is almost everywhere non-negative. This result is known in the crystallographic literature through the papers of Karle & Hauptman (1950), MacGillavry (1950), and Goedkoop (1950), following previous work by Harker & Kasper (1948) and Gillis (1948a,b).
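A one-dimensional illustration of this connection is immediate: building a Toeplitz matrix from the Fourier coefficients of a non-negative density (here a squared trigonometric polynomial, chosen arbitrarily) yields a positive semi-definite matrix and hence a non-negative determinant.

```python
# Toeplitz matrix built from Fourier coefficients of a non-negative 1D density:
# its eigenvalues (and hence its determinant) are non-negative up to rounding error.
import numpy as np

M = 256
x = np.arange(M) / M
rho = np.abs(1 + 0.7 * np.exp(2j * np.pi * x) + 0.3 * np.exp(6j * np.pi * x)) ** 2
F = np.fft.ifft(rho)                       # Fourier coefficients of rho (>= 0 by construction)

n = 8
T = np.array([[F[(j - k) % M] for k in range(n + 1)] for j in range(n + 1)])
eigvals = np.linalg.eigvalsh(T)            # T is Hermitian since rho is real
print(eigvals.min(), np.linalg.det(T).real)
```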
Szegö's study of the asymptotic distribution of the eigenvalues of Toeplitz forms as their order tends to infinity remains valid. Some precautions are needed, however, to define the notion of a sequence of finite subsets of indices tending to infinity: it suffices that the $\mathcal{H}_k$ should consist essentially of the reciprocal-lattice points h contained within a domain of the form $k\Omega$ (k-fold dilation of Ω) where Ω is a convex domain in $\mathbb{R}^{3}$ containing the origin (Widom, 1960). Under these circumstances, the eigenvalues $\lambda_{\nu}^{(k)}$ of the Toeplitz forms $T_{\mathcal{H}_k}[\rho]$ become equidistributed with the sample values $\rho_{\nu'}^{(k)}$ of ρ on a grid satisfying the Shannon sampling criterion for the data in $\mathcal{H}_k$ (cf. Section 1.3.2.6.9.3).
A particular consequence of this equidistribution is that the geometric means of the $\lambda_{\nu}^{(k)}$ and of the $\rho_{\nu'}^{(k)}$ are equal, and hence as in Section 1.3.2.6.9.4
$$\lim_{k \to \infty} \big\{D_{\mathcal{H}_k}[\rho]\big\}^{1/|\mathcal{H}_k|} = \exp\left\{ \int_{\mathbb{R}^{3}/\mathbb{Z}^{3}} \log \rho(\mathbf{x})\, \mathrm{d}^{3}\mathbf{x} \right\},$$
where $|\mathcal{H}_k|$ denotes the number of reflections in $\mathcal{H}_k$. Complementary terms giving a better comparison of the two sides were obtained by Widom (1960, 1975) and Linnik (1975).
This formula played an important role in the solution of the 2D Ising model by Onsager (1944) (see Montroll et al., 1963
). It is also encountered in phasing methods involving the `Burg entropy' (Britten & Collins, 1982
; Narayan & Nityananda, 1982
; Bricogne, 1982
, 1984
, 1988
).
The description of a crystal given so far has dealt only with its invariance under the action of the (discrete Abelian) group of translations by vectors of its period lattice Λ.
Let the crystal now be embedded in Euclidean 3-space, so that it may be acted upon by the group M(3) of rigid (i.e. distance-preserving) motions of that space. The group M(3) contains a normal subgroup T(3) of translations, and the quotient group M(3)/T(3) may be identified with the 3-dimensional orthogonal group O(3). The period lattice Λ of a crystal is a discrete uniform subgroup of T(3).
The possible invariance properties of a crystal under the action of are captured by the following definition: a crystallographic group is a subgroup Γ of
if
The two properties are not independent: by a theorem of Bieberbach (1911), they follow from the assumption that Λ is a discrete subgroup of
which operates without accumulation point and with a compact fundamental domain (see Auslander, 1965
). These two assumptions imply that G acts on Λ through an integral representation, and this observation leads to a complete enumeration of all distinct Γ's. The mathematical theory of these groups is still an active research topic (see, for instance, Farkas, 1981
), and has applications to Riemannian geometry (Wolf, 1967
).
This classification of crystallographic groups is described elsewhere in these Tables (Wondratschek, 2005), but it will be surveyed briefly in Section 1.3.4.2.2.3
for the purpose of establishing further terminology and notation, after recalling basic notions and results concerning groups and group actions in Section 1.3.4.2.2.2
.
The books by Hall (1959) and Scott (1964)
are recommended as reference works on group theory.
Let Γ be a crystallographic group, Λ the normal subgroup of its lattice translations, and G the finite factor group . Then G acts on Λ by conjugation [Section 1.3.4.2.2.2
(d)
] and this action, being a mapping of a lattice into itself, is representable by matrices with integer entries.
The classification of crystallographic groups proceeds from this observation in the following three steps:
Step 1 leads to the following groups, listed in association with the crystal system to which they later give rise:
and the extension of these groups by a centre of inversion. In this list ⋉ denotes a semi-direct product [Section 1.3.4.2.2.2
(d)
], α denotes the automorphism
, and
(the group of permutations on three letters) operates by permuting the copies of
(using the subgroup
of cyclic permutations gives the tetrahedral subsystem).
Step 2 leads to a list of 73 equivalence classes called arithmetic classes of representations
, where
is a
integer matrix, with
and
. This enumeration is more familiar if equivalence is relaxed so as to allow conjugation by rational
matrices with determinant ± 1: this leads to the 32 crystal classes. The difference between an arithmetic class and its rational class resides in the choice of a lattice mode
. Arithmetic classes always refer to a primitive lattice, but may use inequivalent integral representations for a given geometric symmetry element; while crystallographers prefer to change over to a non-primitive lattice, if necessary, in order to preserve the same integral representation for a given geometric symmetry element. The matrices P and
describing the changes of basis between primitive and centred lattices are listed in Table 5.1.3.1
and illustrated in Figs. 5.1.3.2
to 5.1.3.8
, pp. 80–85, of Volume A of International Tables (Arnold, 2005
).
Step 3 gives rise to a system of congruences for the systems of non-primitive translations
which may be associated to the matrices
of a given arithmetic class, namely:
first derived by Frobenius (1911)
. If equivalence under the action of
is taken into account, 219 classes are found. If equivalence is defined with respect to the action of the subgroup
of
consisting only of transformations with determinant +1, then 230 classes called space-group types are obtained. In particular, associating to each of the 73 arithmetic classes a trivial set of non-primitive translations
yields the 73 symmorphic space groups. This third step may also be treated as an abstract problem concerning group extensions, using cohomological methods [Ascher & Janner (1965)
; see Janssen (1973)
for a summary]; the connection with Frobenius's approach, as generalized by Zassenhaus (1948)
, is examined in Ascher & Janner (1968)
.
The finiteness of the number of space-group types, established in dimension 3, was shown by Bieberbach (1912) to hold in arbitrary dimension. The reader interested in N-dimensional space-group theory for
may consult Brown (1969)
, Brown et al. (1978)
, Schwarzenberger (1980
), and Engel (1986)
. The standard reference for integral representation theory is Curtis & Reiner (1962)
.
All three-dimensional space groups G have the property of being solvable, i.e. that there exists a chain of subgroups
$$G = G_r \supset G_{r - 1} \supset \ldots \supset G_1 \supset G_0 = \{e\},$$
where each $G_{i - 1}$ is a normal subgroup of $G_i$ and the factor group $G_i/G_{i - 1}$ is a cyclic group of some order $m_i$. This property may be established by inspection, or deduced from a famous theorem of Burnside [see Burnside (1911), pp. 322–323] according to which any group G such that $|G| = p^{\alpha} q^{\beta}$, with p and q distinct primes, is solvable; in the case at hand, $p = 2$ and $q = 3$. The whole classification of 3D space groups can be performed swiftly by a judicious use of the solvability property (L. Auslander, personal communication).
Solvability facilitates the indexing of elements of G in terms of generators and relations (Coxeter & Moser, 1972; Magnus et al., 1976
) for the purpose of calculation. By definition of solvability, elements $g_1, g_2, \ldots, g_r$ may be chosen in such a way that the cyclic factor group $G_i/G_{i - 1}$ is generated by the coset $g_i G_{i - 1}$. The set $\{g_1, g_2, \ldots, g_r\}$ is then a system of generators for G such that the defining relations [see Brown et al. (1978), pp. 26–27] have the particularly simple form of power relations $g_i^{m_i} \in G_{i - 1}$ and conjugation relations $g_j^{-1} g_i g_j \in G_{j - 1}$ for $i < j$, with the right-hand sides written as ordered words in the generators of lower index. Each element g of G may then be obtained uniquely as an `ordered word':
$$g = g_r^{k_r}\, g_{r - 1}^{k_{r - 1}} \cdots g_1^{k_1}, \qquad 0 \leq k_i < m_i \ \text{for all } i,$$
using the algorithm of Jürgensen (1970). Such generating sets and defining relations are tabulated in Brown et al. (1978, pp. 61–76). An alternative list is given in Janssen (1973, Table 4.3, pp. 121–123, and Appendix D, pp. 262–271).
The action of a crystallographic group Γ may be written in terms of standard coordinates in $\mathbb{R}^{3}/\mathbb{Z}^{3}$ as
$$\mathbf{x} \longmapsto S_g(\mathbf{x}) = \mathbf{R}_g\mathbf{x} + \mathbf{t}_g, \qquad g \in G,$$
with $\theta : g \longmapsto \mathbf{R}_g$ the representation of G by the rotational parts. An important characteristic of the representation θ is its reducibility, i.e. whether or not it has invariant subspaces other than $\{\mathbf{0}\}$ and the whole of $\mathbb{R}^{3}$. For triclinic, monoclinic and orthorhombic space groups, θ is reducible to a direct sum of three one-dimensional representations; for trigonal, tetragonal and hexagonal groups, it is reducible to a direct sum of two representations, of dimension 2 and 1, respectively; while for tetrahedral and cubic groups, it is irreducible.
By Schur's lemma (see e.g. Ledermann, 1987), any matrix which commutes with all the matrices $\mathbf{R}_g$ for $g \in G$ must be a scalar multiple of the identity in each invariant subspace.
In the reducible cases, the reductions involve changes of basis which will be rational, not integral, for those arithmetic classes corresponding to non-primitive lattices. Thus the simplification of having maximally reduced representation has as its counterpart the use of non-primitive lattices.
The notions of orbit, isotropy subgroup and fundamental domain (or asymmetric unit) for the action of G on $\mathbb{R}^{3}/\mathbb{Z}^{3}$ are inherited directly from the general setting of Section 1.3.4.2.2.2. Points x for which $G_{\mathbf{x}} \neq \{e\}$ are called special positions, and the various types of isotropy subgroups which may be encountered in crystallographic groups have been labelled by means of Wyckoff symbols. The representation operators $S_g^{\#}$ acting on functions of x have the form:
$$[S_g^{\#} f](\mathbf{x}) = f\big(S_g^{-1}\mathbf{x}\big) = f\big(\mathbf{R}_g^{-1}(\mathbf{x} - \mathbf{t}_g)\big).$$
The operators $R_g^{\#}$ associated to the purely rotational part of each transformation $S_g$ will also be used. Note the relation:
$$S_g^{\#} = \tau_{\mathbf{t}_g}\, R_g^{\#},$$
where $\tau_{\mathbf{t}}$ denotes translation by t.
Let a crystal structure be described by the list of the atoms in its unit cell, indexed by $k \in K$. Let the electron-density distribution about the centre of mass of atom k be described by $\sigma_k$ with respect to the standard coordinates x. Then the motif $\rho^{0}$ may be written as a sum of translates:
$$\rho^{0} = \sum_{k \in K} \tau_{\mathbf{x}_k}\sigma_k,$$
and the crystal electron density is $\rho = R * \rho^{0}$.
Suppose that ρ is invariant under Γ. If $\mathbf{x}_{k_1}$ and $\mathbf{x}_{k_2}$ are in the same orbit, say $\mathbf{x}_{k_2} = S_g(\mathbf{x}_{k_1})$, then
$$\tau_{\mathbf{x}_{k_2}}\sigma_{k_2} = S_g^{\#}\big(\tau_{\mathbf{x}_{k_1}}\sigma_{k_1}\big).$$
Therefore if $\mathbf{x}_k$ is a special position and thus $G_{\mathbf{x}_k} \neq \{e\}$, then
$$\tau_{\mathbf{x}_k}\sigma_k = S_g^{\#}\big(\tau_{\mathbf{x}_k}\sigma_k\big) \qquad \text{for all } g \in G_{\mathbf{x}_k}.$$
This identity implies that
$$\mathbf{R}_g\mathbf{x}_k + \mathbf{t}_g = \mathbf{x}_k \pmod{\Lambda}$$
(the special position condition), and that
$$\sigma_k = R_g^{\#}\sigma_k,$$
i.e. that $\sigma_k$ must be invariant by the pure rotational part of $G_{\mathbf{x}_k}$. Trueblood (1956) investigated the consequences of this invariance on the thermal vibration tensor of an atom in a special position (see Section 1.3.4.2.2.6 below).
Let J be a subset of K such that $\{\mathbf{x}_j\}_{j \in J}$ contains exactly one atom from each orbit. An orbit decomposition yields an expression for $\rho^{0}$ in terms of symmetry-unique atoms:
$$\rho^{0} = \sum_{j \in J} \left[ \sum_{\gamma \in G/G_{\mathbf{x}_j}} S_{\gamma}^{\#}\big(\tau_{\mathbf{x}_j}\sigma_j\big) \right]$$
or equivalently
$$\rho^{0}(\mathbf{x}) = \sum_{j \in J} \sum_{\gamma \in G/G_{\mathbf{x}_j}} \sigma_j\big[\mathbf{R}_{\gamma}^{-1}(\mathbf{x} - \mathbf{t}_{\gamma}) - \mathbf{x}_j\big].$$
If the atoms are assumed to be Gaussian, write
$$\sigma_j(\mathbf{X}) = \frac{Z_j}{\sqrt{\det(2\pi \mathbf{U}_j)}} \exp\left(-\tfrac{1}{2}\, \mathbf{X}^{T}\mathbf{U}_j^{-1}\mathbf{X}\right) \qquad \text{in Cartesian coordinates},$$
where $Z_j$ is the total number of electrons, and where the matrix $\mathbf{U}_j$ combines the Gaussian spread of the electrons in atom j at rest with the covariance matrix of the random positional fluctuations of atom j caused by thermal agitation.
In crystallographic coordinates:
$$\sigma_j(\mathbf{x}) = \frac{Z_j}{\sqrt{\det(2\pi \mathbf{Q}_j)}} \exp\left(-\tfrac{1}{2}\, \mathbf{x}^{T}\mathbf{Q}_j^{-1}\mathbf{x}\right), \qquad \mathbf{Q}_j = \mathbf{A}^{-1}\mathbf{U}_j(\mathbf{A}^{-1})^{T}.$$
If atom k is in a special position $\mathbf{x}_k$, then the matrix $\mathbf{Q}_k$ must satisfy the identity
$$\mathbf{R}_g\mathbf{Q}_k\mathbf{R}_g^{T} = \mathbf{Q}_k$$
for all g in the isotropy subgroup of $\mathbf{x}_k$. This condition may also be written in Cartesian coordinates as
$$\mathbf{T}_g\mathbf{U}_k\mathbf{T}_g^{T} = \mathbf{U}_k,$$
where $\mathbf{T}_g = \mathbf{A}\mathbf{R}_g\mathbf{A}^{-1}$. This is a condensed form of the symmetry properties derived by Trueblood (1956).
An elementary discussion of this topic may be found in Chapter 1.4
of this volume.
Having established that the symmetry of a crystal may be most conveniently stated and handled via the left representation of G given by its action on electron-density distributions, it is natural to transpose this action by the identity of Section 1.3.2.5.5
:
for any tempered distribution T, i.e.
whenever the transforms are functions.
Putting , a
-periodic distribution, this relation defines a left action
of G on
given by
which is conjugate to the action
in the sense that
The identity $S_g^{\#}\rho = \rho$ expressing the G-invariance of ρ is then equivalent to the identity
$$F(\mathbf{h}^{T}\mathbf{R}_g) = \exp(-2\pi i\, \mathbf{h}^{T}\mathbf{t}_g)\, F(\mathbf{h}) \qquad \text{for all } g \in G$$
between its structure factors, i.e. (Waser, 1955a).
If G is made to act on $\mathbb{Z}^{3}$ via
$$\mathbf{h} \longmapsto \mathbf{h}^{T}\mathbf{R}_g,$$
the usual notions of orbit, isotropy subgroup (denoted $G_{\mathbf{h}}$) and fundamental domain may be attached to this action. The above relation then shows that the spectrum $\{F(\mathbf{h})\}$ is entirely known if it is specified on a fundamental domain $D^{*}$ containing one reciprocal-lattice point from each orbit of this action.
A reflection h is called special if $G_{\mathbf{h}} \neq \{e\}$. Then for any $g \in G_{\mathbf{h}}$ we have $\mathbf{h}^{T}\mathbf{R}_g = \mathbf{h}^{T}$, and hence
$$F(\mathbf{h}) = \exp(-2\pi i\, \mathbf{h}^{T}\mathbf{t}_g)\, F(\mathbf{h}),$$
implying that $F(\mathbf{h}) = 0$ unless $\mathbf{h}^{T}\mathbf{t}_g \equiv 0 \pmod{1}$. Special reflections h for which $\mathbf{h}^{T}\mathbf{t}_g \not\equiv 0 \pmod{1}$ for some $g \in G_{\mathbf{h}}$ are thus systematically absent. This phenomenon is an instance of the duality between periodization and decimation of Section 1.3.2.7.2: if $\mathbf{t}_g \neq \mathbf{0}$, the projection of ρ on the direction of h has a period which is an integral submultiple of the lattice period in that direction, hence its transform (which is the portion of F supported by the central line through h) will be decimated, giving rise to the above condition.
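As a concrete illustration, the following Python sketch applies this absence test to a 2₁ screw axis along b (the operation x → (−x, y + 1/2, −z), as in P2₁); the convention that h transforms by right multiplication with R_g follows the relation quoted above.

```python
# Systematic-absence test for a 2_1 screw axis along b: a reflection fixed by the
# rotational part (h R = h) is absent unless h.t is an integer.
import numpy as np

R = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, -1]])
t = np.array([0.0, 0.5, 0.0])

def absent(h):
    h = np.asarray(h)
    fixed = np.array_equal(h @ R, h)                 # 'special' reflection
    return fixed and not np.isclose((h @ t) % 1.0, 0.0)

for k in range(1, 6):
    print((0, k, 0), "absent" if absent((0, k, 0)) else "allowed")
# (0,k,0) is absent for odd k; reflections not fixed by R are unaffected by this test.
```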
A reflection h is called centric if $-\mathbf{h}$ belongs to the orbit of h, i.e. if the orbit of h contains $-\mathbf{h}$. Then $-\mathbf{h}^{T} = \mathbf{h}^{T}\mathbf{R}_{\gamma}$ for some coset γ in $G/G_{\mathbf{h}}$, so that the following relation must hold:
$$F(-\mathbf{h}) = \exp(-2\pi i\, \mathbf{h}^{T}\mathbf{t}_{\gamma})\, F(\mathbf{h}).$$
In the absence of dispersion, Friedel's law $F(-\mathbf{h}) = \overline{F(\mathbf{h})}$ gives rise to the phase restriction:
$$\varphi(\mathbf{h}) \equiv \pi\, \mathbf{h}^{T}\mathbf{t}_{\gamma} \pmod{\pi}.$$
The value of the restricted phase is independent of the choice of coset representative γ. Indeed, if $\gamma'$ is another choice, then $\gamma' = g\gamma$ with $g \in G_{\mathbf{h}}$ and by the Frobenius congruences $\mathbf{t}_{\gamma'} \equiv \mathbf{R}_{g}\mathbf{t}_{\gamma} + \mathbf{t}_{g} \pmod{\Lambda}$, so that
$$\mathbf{h}^{T}\mathbf{t}_{\gamma'} \equiv \mathbf{h}^{T}\mathbf{R}_{g}\mathbf{t}_{\gamma} + \mathbf{h}^{T}\mathbf{t}_{g} \pmod{1}.$$
Since $\mathbf{h}^{T}\mathbf{R}_{g} = \mathbf{h}^{T}$, and $\mathbf{h}^{T}\mathbf{t}_{g} \equiv 0 \pmod{1}$ if h is not a systematic absence: thus
$$\varphi(\mathbf{h}) \equiv \pi\, \mathbf{h}^{T}\mathbf{t}_{\gamma'} \equiv \pi\, \mathbf{h}^{T}\mathbf{t}_{\gamma} \pmod{\pi}.$$
The treatment of centred lattices may be viewed as another instance of the duality between periodization and decimation (Section 1.3.2.7.2): the periodization of the electron density by the non-primitive lattice translations has as its counterpart in reciprocal space the decimation of the transform by the `reflection conditions' describing the allowed reflections, the decimation and periodization matrices being each other's contragredient.
The reader may consult the papers by Bienenstock & Ewald (1962) and Wells (1965)
for earlier approaches to this material.
Structure factors may be calculated from a list of symmetry-unique atoms by Fourier transformation of the orbit decomposition formula for the motif given in Section 1.3.4.2.2.4
:
i.e. finally:
In the case of Gaussian atoms, the atomic transforms are or equivalently
Two common forms of equivalent temperature factors (incorporating both atomic form and thermal motion) are
In the first case, does not depend on
, and therefore:
In the second case, however, no such simplification can occur:
These formulae, or special cases of them, were derived by Rollett & Davies (1955)
, Waser (1955b)
, and Trueblood (1956)
.
The computation of structure factors by applying the discrete Fourier transform to a set of electron-density values calculated on a grid will be examined in Section 1.3.4.4.5.
A formula for the Fourier synthesis of electron-density maps from symmetry-unique structure factors is readily obtained by orbit decomposition: where L is a subset of
such that
contains exactly one point of each orbit for the action
of G on
. The physical electron density per cubic ångström is then $\rho(\mathbf{x})/V$, with V in Å³.
In the absence of anomalous scatterers in the crystal and of a centre of inversion −I in Γ, the spectrum has an extra symmetry, namely the Hermitian symmetry expressing Friedel's law (Section 1.3.4.2.1.4
). The action of a centre of inversion may be added to that of Γ to obtain further simplification in the above formula: under this extra action, an orbit
with
is either mapped into itself or into the disjoint orbit
; the terms corresponding to
and
may then be grouped within the common orbit in the first case, and between the two orbits in the second case.
The general statement of Parseval's theorem given in Section 1.3.4.2.1.5 may be rewritten in terms of symmetry-unique structure factors and electron densities by means of orbit decomposition.
In reciprocal space, for each l, the summands corresponding to the various
are equal, so that the left-hand side is equal to
In real space, the triple integral may be rewritten as (where D is the asymmetric unit) if
and
are smooth densities, since the set of special positions has measure zero. If, however, the integral is approximated as a sum over a G-invariant grid defined by decimation matrix N, special positions on this grid must be taken into account:
where the discrete asymmetric unit D contains exactly one point in each orbit of G in
.
The standard convolution theorems derived in the absence of symmetry are readily seen to follow from simple properties of functions (denoted simply e in formulae which are valid for both signs), namely:
These relations imply that the families of functions
both generate an algebra of functions, i.e. a vector space endowed with an internal multiplication, since (i) and (ii) show how to `linearize products'.
Friedel's law (when applicable) on the one hand, and the Fourier relation between intensities and the Patterson function on the other hand, both follow from the property
When crystallographic symmetry is present, the convolution theorems remain valid in their original form if written out in terms of `expanded' data, but acquire a different form when rewritten in terms of symmetry-unique data only. This rewriting is made possible by the extra relation (Section 1.3.4.2.2.5)
or equivalently
The kernels of symmetrized Fourier transforms are not the functions e but rather the symmetrized sums for which the linearization formulae are readily obtained using (i), (ii) and (iv) as
where the choice of sign in ± must be the same throughout each formula.
Formulae defining the `structure-factor algebra' associated to G were derived by Bertaut (1955c
, 1956b
,c
, 1959a
,b
) and Bertaut & Waser (1957)
in another context.
The forward convolution theorem (in discrete form) then follows. Let then
with
The backward convolution theorem is derived similarly. Let then
with
Both formulae are simply orbit decompositions of their symmetry-free counterparts.
Consider two model electron densities and
with the same period lattice
and the same space group G. Write their motifs in terms of atomic electron densities (Section 1.3.4.2.2.4
) as
where
and
label the symmetry-unique atoms placed at positions
and
, respectively.
To calculate the correlation between and
we need the following preliminary formulae, which are easily established: if
and f is an arbitrary function on
, then
hence
and
The cross correlation between motifs is therefore
which contains a peak of shape
at the interatomic vector
for each
,
,
,
.
The cross-correlation between the original electron densities is then obtained by further periodizing by
.
Note that these expressions are valid for any choice of `atomic' density functions and
, which may be taken as molecular fragments if desired (see Section 1.3.4.4.8
).
If G contains elements g such that has an eigenspace
with eigenvalue 1 and an invariant complementary subspace
, while
has a non-zero component
in
, then the Patterson function
will contain Harker peaks (Harker, 1936
) of the form
[where
represent the action of g in
] in the translate of
by
.
In 1929, W. L. Bragg demonstrated the practical usefulness of the Fourier transform relation between electron density and structure factors by determining the structure of diopside from three principal projections calculated numerically by 2D Fourier summation (Bragg, 1929). It was immediately realized that the systematic use of this powerful method, and of its extension to three dimensions, would entail considerable amounts of numerical computation which had to be organized efficiently. As no other branch of applied science had yet needed this type of computation, crystallographers had to invent their own techniques.
The first step was taken by Beevers & Lipson (1934) who pointed out that a 2D summation could be factored into successive 1D summations. This is essentially the tensor product property of the Fourier transform (Sections 1.3.2.4.2.4
, 1.3.3.3.1
), although its aspect is rendered somewhat complicated by the use of sines and cosines instead of complex exponentials. Computation is economized to the extent that the cost of an $N \times N$ transform grows with N as $2N^{3}$ rather than $N^{4}$. Generalization to 3D is immediate, reducing computation size from $N^{6}$ to $3N^{4}$ for an $N \times N \times N$ transform. The complication introduced by using expressions in terms of sines and cosines is turned to advantage when symmetry is present, as certain families of terms are systematically absent or are simply related to each other; multiplicity corrections must, however, be introduced. The necessary information was tabulated for each space group by Lonsdale (1936)
, and was later incorporated into Volume I of International Tables.
The second step was taken by Beevers & Lipson (1936) and Lipson & Beevers (1936)
in the form of the invention of the `Beevers–Lipson strips', a practical device which was to assist a whole generation of crystallographers in the numerical computation of crystallographic Fourier sums. The strips comprise a set of `cosine strips' tabulating the functions
$$A\cos\!\left(\frac{2\pi\, h m}{60}\right)$$
and a set of `sine strips' tabulating the functions
$$A\sin\!\left(\frac{2\pi\, h m}{60}\right)$$
for the 16 arguments $m = 0, 1, \ldots, 15$. Function values are rounded to the nearest integer, and those for other arguments m may be obtained by using the symmetry properties of the sine and cosine functions. A Fourier summation of the form
$$\sum_{j} \left[ A_j \cos\!\left(\frac{2\pi\, h_j m}{60}\right) + B_j \sin\!\left(\frac{2\pi\, h_j m}{60}\right) \right]$$
is then performed by selecting the n cosine strips labelled $(A_j, h_j)$ and the n sine strips labelled $(B_j, h_j)$
, placing them in register, and adding the tabulated values columnwise. The number 60 was chosen as the l.c.m. of 12 (itself the l.c.m. of the orders of all possible non-primitive translations) and of 10 (for decimal convenience). The limited accuracy imposed by the two-digit tabulation was later improved by Robertson's sorting board (Robertson, 1936a
,b
) or by the use of separate strips for each decimal digit of the amplitude (Booth, 1948b
), which allowed three-digit tabulation while keeping the set of strips within manageable size. Cochran (1948a)
found that, for most structures under study at the time, the numerical inaccuracies of the method were less than the level of error in the experimental data. The sampling rate was subsequently increased from 60 to 120 (Beevers, 1952
) to cope with larger unit cells.
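The following Python sketch emulates the strip method on a small example: each strip is an integer-rounded table of A cos(2πhm/60) or A sin(2πhm/60) for m = 0, …, 15, and a synthesis is obtained by columnwise addition; amplitudes and indices are arbitrary.

```python
# Emulation of Beevers-Lipson strip summation (illustrative amplitudes and indices).
import numpy as np

m = np.arange(16)

def cosine_strip(A, h):
    return np.rint(A * np.cos(2 * np.pi * h * m / 60)).astype(int)

def sine_strip(A, h):
    return np.rint(A * np.sin(2 * np.pi * h * m / 60)).astype(int)

terms = [(35, 1), (-20, 2), (12, 5)]                   # (amplitude, index h) pairs
summed = sum(cosine_strip(A, h) for A, h in terms)     # columnwise addition of strips
exact = sum(A * np.cos(2 * np.pi * h * m / 60) for A, h in terms)
print(np.max(np.abs(summed - exact)))                  # rounding error stays small
```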
Further gains in speed and accuracy were sought through the construction of special-purpose mechanical, electro-mechanical, electronic or optical devices. Two striking examples are the mechanical computer RUFUS built by Robertson (1954, 1955
, 1961
) on the principle of previous strip methods (see also Robertson, 1932
) and the electronic analogue computer X-RAC built by Pepinsky, capable of real-time calculation and display of 2D and 3D Fourier syntheses (Pepinsky, 1947
; Pepinsky & Sayre, 1948
; Pepinsky et al., 1961
; see also Suryan, 1957
). The optical methods of Lipson & Taylor (1951
, 1958
) also deserve mention. Many other ingenious devices were invented, whose descriptions may be found in Booth (1948b)
, Niggli (1961)
, and Lipson & Cochran (1968)
.
Later, commercial punched-card machines were programmed to carry out Fourier summations or structure-factor calculations (Shaffer et al., 1946a,b
; Cox et al., 1947
, 1949
; Cox & Jeffrey, 1949
; Donohue & Schomaker, 1949
; Grems & Kasper, 1949
; Hodgson et al., 1949
; Greenhalgh & Jeffrey, 1950
; Kitz & Marchington, 1953
).
The modern era of digital electronic computation of Fourier series was initiated by the work of Bennett & Kendrew (1952), Mayer & Trueblood (1953)
, Ahmed & Cruickshank (1953b)
, Sparks et al. (1956)
and Fowweather (1955)
. Their Fourier-synthesis programs used Beevers–Lipson factorization, the program by Sparks et al. being the first 3D Fourier program useable for all space groups (although these were treated as P1 or $P\bar{1}$ by data expansion). Ahmed & Barnes (1958)
then proposed a general programming technique to allow full use of symmetry elements (orthorhombic or lower) in the 3D Beevers–Lipson factorization process, including multiplicity corrections. Their method was later adopted by Shoemaker & Sly (1961)
, and by crystallographic program writers at large.
The discovery of the FFT algorithm by Cooley & Tukey in 1965, which instantly transformed electrical engineering and several other disciplines, paradoxically failed to have an immediate impact on crystallographic computing. A plausible explanation is that the calculation of large 3D Fourier maps was a relatively infrequent task which was not thought to constitute a bottleneck, as crystallographers had learned to settle most structural questions by means of cheaper 2D sections or projections. It is significant in this respect that the first use of the FFT in crystallography by Barrett & Zwick (1971) should have occurred as part of an iterative scheme for improving protein phases by density modification in real space, which required a much greater number of Fourier transformations than any previous method. Independently, Bondot (1971)
had attracted attention to the merits of the FFT algorithm.
The FFT program used by Barrett & Zwick had been written for signal-processing applications. It was restricted to sampling rates of the form , and was not designed to take advantage of crystallographic symmetry at any stage of the calculation; Bantz & Zwick (1974)
later improved this situation somewhat.
It was the work of Ten Eyck (1973) and Immirzi (1973
, 1976
) which led to the general adoption of the FFT in crystallographic computing. Immirzi treated all space groups as P1 by data expansion. Ten Eyck based his program on a versatile multi-radix FFT routine (Gentleman & Sande, 1966
) coupled with a flexible indexing scheme for dealing efficiently with multidimensional transforms. He also addressed the problems of incorporating symmetry elements of order 2 into the factorization of 1D transforms, and of transposing intermediate results by other symmetry elements. He was thus able to show that in a large number of space groups (including the 74 space groups having orthorhombic or lower symmetry) it is possible to calculate only the unique results from the unique data within the logic of the FFT algorithm. Ten Eyck wrote and circulated a package of programs for computing Fourier maps and re-analysing them into structure factors in some simple space groups (P1, P1̄, P2, P2/m, P2₁, P222, P2₁2₁2₁, Pmmm). This package was later augmented by a handful of new space-group-specific programs contributed by other crystallographers (P2₁2₁2, I222, P3₁21, P4₁2₁2). The writing of such programs is an undertaking of substantial complexity, which has deterred all but the bravest: the usual practice is now to expand data for a high-symmetry space group to the largest subgroup for which a specific FFT program exists in the package, rather than attempt to write a new program. Attempts have been made to introduce more modern approaches to the calculation of crystallographic Fourier transforms (Auslander, Feig & Winograd, 1982
; Auslander & Shenefelt, 1987
; Auslander et al., 1988
) but have not gone beyond the stage of preliminary studies.
The task of fully exploiting the FFT algorithm in crystallographic computations is therefore still unfinished, and it is the purpose of this section to provide a systematic treatment such as that (say) of Ahmed & Barnes (1958) for the Beevers–Lipson algorithm.
Ten Eyck's approach, based on the reducibility of certain space groups, is extended by the derivation of a universal transposition formula for intermediate results. It is then shown that space groups which are not completely reducible may nevertheless be treated by three-dimensional Cooley–Tukey factorization in such a way that their symmetry may be fully exploited, whatever the shape of their asymmetric unit. Finally, new factorization methods with built-in symmetries are presented. The unifying concept throughout this presentation is that of `group action' on indexing sets, and of `orbit exchange' when this action has a composite structure; it affords new ways of rationalizing the use of symmetry, or of improving computational speed, or both.
A finite set of reflections can be periodized without aliasing by the translations of a suitable sublattice
of the reciprocal lattice
; the converse operation in real space is the sampling of ρ at points X of a grid of the form
(Section 1.3.2.7.3
). In standard coordinates,
is periodized by
, and
is sampled at points
.
In the absence of symmetry, the unique data are
They are connected by the ordinary DFT relations: or
and
or
In the presence of symmetry, the unique data are
– or
in real space (by abuse of notation, D will denote an asymmetric unit for x or for m indifferently);
– in reciprocal space.
The previous summations may then be subjected to orbital decomposition, to yield the following `crystallographic DFT' (CDFT) defining relations: with the obvious alternatives in terms of
. Our problem is to evaluate the CDFT for a given space group as efficiently as possible, in spite of the fact that the group action has spoilt the simple tensor-product structure of the ordinary three-dimensional DFT (Section 1.3.3.3.1
).
Two procedures are available to carry out the 3D summations involved as a succession of smaller summations:
Clearly, a symmetry expansion to the largest fully reducible subgroup of the space group will give maximal decomposability, but will require computing more than the unique results from more than the unique data. Economy will follow from factoring the transforms in the subspaces within which the space group acts irreducibly.
For irreducible subspaces of dimension 1, the group action is readily incorporated into the factorization of the transform, as first shown by Ten Eyck (1973).
For irreducible subspaces of dimension 2 or 3, the ease of incorporation of symmetry into the factorization depends on the type of factorization method used. The multidimensional Cooley–Tukey method (Section 1.3.3.3.1) is rather complicated; the multidimensional Good method (Section 1.3.3.3.2.2
) is somewhat simpler; and the Rader/Winograd factorization admits a generalization, based on the arithmetic of certain rings of algebraic integers, which accommodates 2D crystallographic symmetries in a most powerful and pleasing fashion.
At each stage of the calculation, it is necessary to keep track of the definition of the asymmetric unit and of the symmetry properties of the numbers being manipulated. This requirement applies not only to the initial data and to the final results, where these are familiar; but also to all the intermediate quantities produced by partial transforms (on subsets of factors, or subsets of dimensions, or both), where they are less familiar. Here, the general formalism of transposition (or `orbit exchange') described in Section 1.3.4.2.2.2 plays a central role.
Suppose that the space-group action is reducible, i.e. that for each
by Schur's lemma, the decimation matrix must then be of the form
if it is to commute with all the
.
Putting and
, we may define
and write
(direct sum) as a shorthand for
We may also define the representation operators and
acting on functions of
and
, respectively (as in Section 1.3.4.2.2.4
), and the operators
and
acting on functions of
and
, respectively (as in Section 1.3.4.2.2.5
). Then we may write
and
in the sense that g acts on
by
and on
by
Thus equipped we may now derive concisely a general identity describing the symmetry properties of intermediate quantities of the form which arise through partial transformation of F on
or of
on
. The action of
on these quantities will be
and hence the symmetry properties of T are expressed by the identity Applying this relation not to T but to
gives
i.e.
If the unique were initially indexed by
(see Section 1.3.4.2.2.2
), this formula allows the reindexing of the intermediate results
from the initial form
to the final form
on which the second transform (on
) may now be performed, giving the final results
indexed by
which is an asymmetric unit. An analogous interpretation holds if one is going from
to F.
The above formula solves the general problem of transposing from one invariant subspace to another, and is the main device for decomposing the CDFT. Particular instances of this formula were derived and used by Ten Eyck (1973); it is useful for orthorhombic groups, and for dihedral groups containing screw axes
$n_{m}$ with g.c.d.$(m, n) = 1$
. For comparison with later uses of orbit exchange, it should be noted that the type of intermediate results just dealt with is obtained after transforming on all factors in one summand.
A central piece of information for driving such a decomposition is the definition of the full asymmetric unit in terms of the asymmetric units in the invariant subspaces. As indicated at the end of Section 1.3.4.2.2.2, this is straightforward when G acts without fixed points, but becomes more involved if fixed points do exist. To this day, no systematic `calculus of asymmetric units' exists which can automatically generate a complete description of the asymmetric unit of an arbitrary space group in a form suitable for directing the orbit exchange process, although Shenefelt (1988)
has outlined a procedure for dealing with space group P622 and its subgroups. The asymmetric unit definitions given in Volume A of International Tables are incomplete in this respect, in that they do not specify the possible residual symmetries which may exist on the boundaries of the domains.
Methods for factoring the DFT in the absence of symmetry were examined in Sections 1.3.3.2 and 1.3.3.3
. They are based on the observation that the finite sets which index both data and results are endowed with certain algebraic structures (e.g. are Abelian groups, or rings), and that subsets of indices may be found which are not merely subsets but substructures (e.g. subgroups or subrings). Summation over these substructures leads to partial transforms, and the way in which substructures fit into the global structure indicates how to reassemble the partial results into the final results. As a rule, the richer the algebraic structure which is identified in the indexing set, the more powerful the factoring method.
The ability of a given factoring method to accommodate crystallographic symmetry will thus be determined by the extent to which the crystallographic group action respects (or fails to respect) the partitioning of the index set into the substructures pertaining to that method. This remark justifies trying to gain an overall view of the algebraic structures involved, and of the possibilities of a crystallographic group acting `naturally' on them.
The index sets and
are finite Abelian groups under component-wise addition. If an iterated addition is viewed as an action of an integer scalar $n \in \mathbb{Z}$ via
$$n \cdot \mathbf{m} = \mathbf{m} + \mathbf{m} + \ldots + \mathbf{m} \quad (n \ \text{times, for} \ n > 0),$$
then an Abelian group becomes a module over the ring $\mathbb{Z}$ (or, for short, a $\mathbb{Z}$-module), a module being analogous to a vector space but with scalars drawn from a ring rather than a field. The left actions of a crystallographic group G by $g : \mathbf{m} \longmapsto S_g(\mathbf{m})$ can be combined with this $\mathbb{Z}$ action as follows:
$$\left( \sum_{g \in G} a_g\, g \right) \cdot \mathbf{m} = \sum_{g \in G} a_g\, S_g(\mathbf{m}).$$
This provides a left action, on the indexing sets, of the set
of symbolic linear combinations of elements of G with integral coefficients. If addition and multiplication are defined in
by
and
with
then
is a ring, and the action defined above makes the indexing sets into
-modules. The ring
is called the integral group ring of G (Curtis & Reiner, 1962
, p. 44).
From the algebraic standpoint, therefore, the interaction between symmetry and factorization can be expected to be favourable whenever the indexing sets of partial transforms are -submodules of the main
-modules.
Suppose, as in Section 1.3.3.3.2.1, that the decimation matrix N may be factored as
. Then any grid point index
in real space may be written
with
and
determined by
These relations establish a one-to-one correspondence
between
and the Cartesian product
of
and
, and hence
as a set. However, the correspondence is not an isomorphism of Abelian groups, since in general the addition of indices is not component-wise: there can be a `carry' from the addition of the first components into the second components. Therefore it is not an isomorphism of $\mathbb{Z}G$-modules either, which shows that the incorporation of symmetry into the Cooley–Tukey algorithm is not a trivial matter.
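A one-dimensional Python sketch makes the point concrete: for N = N1·N2 the correspondence m ↔ (m1, m2) with m = m1 + N1·m2 is a bijection of sets, but componentwise addition of the pairs disagrees with addition of the m values whenever a carry occurs.

```python
# 1D illustration of the 'carry' obstruction in the Cooley-Tukey index splitting.
N1, N2 = 4, 3
N = N1 * N2

def join(m1, m2):                 # (m1, m2) -> m
    return (m1 + N1 * m2) % N

def split(m):                     # m -> (m1, m2)
    return m % N1, (m // N1) % N2

# the map is a bijection of sets:
assert sorted(join(m1, m2) for m2 in range(N2) for m1 in range(N1)) == list(range(N))

a, b = (3, 1), (2, 2)
componentwise = ((a[0] + b[0]) % N1, (a[1] + b[1]) % N2)
true_sum = split((join(*a) + join(*b)) % N)
print(componentwise, true_sum)    # differ: 3 + 2 = 5 produces a carry into m2
```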
Let act on I through
and suppose that N `integerizes' all the non-primitive translations
so that we may write
with
and
determined as above. Suppose further that N,
and
commute with
for all
, i.e. (by Schur's lemma, Section 1.3.4.2.2.4
) that these matrices are integer multiples of the identity in each G-invariant subspace. The action of g on
leads to
which we may decompose as
with
and
Introducing the notation the two components of
may be written
with
The term is the geometric equivalent of a carry or borrow: it arises because
, calculated as a vector in
, may be outside the unit cell
, and may need to be brought back into it by a `large' translation with a non-zero component in the
space; equivalently, the action of g may need to be applied around different permissible origins for different values of
, so as to map the unit cell into itself without any recourse to lattice translations. [Readers familiar with the cohomology of groups (see e.g. Hall, 1959
; MacLane, 1963
) will recognize
as the cocycle of the extension of
G-modules described by the exact sequence
.]
Thus G acts on I in a rather complicated fashion: although does define a left action in
alone, no action can be defined in
alone because
depends on
. However, because
,
and
are left actions, it follows that
satisfies the identity
for all g,
in G and all
in
. In particular,
for all
, and
This action will now be used to achieve optimal use of symmetry in the multidimensional Cooley–Tukey algorithm of Section 1.3.3.3.2.1. Let us form an array Y according to
for all
but only for the unique
under the action
of G in
. Except in special cases which will be examined later, these vectors contain essentially an asymmetric unit of electron-density data, up to some redundancies on boundaries. We may then compute the partial transform on
:
Using the symmetry of
in the form
yields by the procedure of Section 1.3.3.3.2
the transposition formula
By means of this identity we can transpose intermediate results initially indexed by
so as to have them indexed by
We may then apply twiddle factors to get
and carry out the second transform
The final results are indexed by
which yield essentially an asymmetric unit of structure factors after unscrambling by:
The transposition formula above applies to intermediate results when going backwards from F to , provided these results are considered after the twiddle-factor stage. A transposition formula applicable before that stage can be obtained by characterizing the action of G on h (including the effects of periodization by
) in a manner similar to that used for m.
Let with
We may then write
with
Here
and
are defined by
and
Let us then form an array according to
for all
but only for the unique
under the action of G in
, and transform on
to obtain
Putting
and using the symmetry of F in the form
where
yields by a straightforward rearrangement
This formula allows the transposition of intermediate results Z from an indexing by to an indexing by
We may then apply the twiddle factors to obtain
and carry out the second transform on
The results, indexed by
yield essentially an asymmetric unit of electron densities by the rearrangement
The equivalence of the two transposition formulae up to the intervening twiddle factors is readily established, using the relation which is itself a straightforward consequence of the identity
To complete the characterization of the effect of symmetry on the Cooley–Tukey factorization, and of the economy of computation it allows, it remains to consider the possibility that some values of may be invariant under some transformations
under the action
.
Suppose that has a non-trivial isotropy subgroup
, and let
. Then each subarray
defined by
satisfies the identity
so that the data for the transform on
have residual symmetry properties. In this case the identity satisfied by
simplifies to
which shows that the mapping
satisfies the Frobenius congruences (Section 1.3.4.2.2.3
). Thus the internal symmetry of subarray
with respect to the action of G on
is given by
acting on
via
The transform on need only be performed for one out of
distinct arrays
(results for the others being obtainable by the transposition formula), and this transform is
-symmetric. In other words, the following cases occur:
The symmetry properties of the -transform may themselves be exploited in a similar way if
can be factored as a product of smaller decimation matrices; otherwise, an appropriate symmetrized DFT routine may be provided, using for instance the idea of `multiplexing/demultiplexing' (Section 1.3.4.3.5
). We thus have a recursive descent procedure, in which the deeper stages of the recursion deal with transforms on fewer points, or of lower symmetry (usually both).
The same analysis applies to the -transforms on the subarrays
, and leads to a similar descent procedure.
In conclusion, crystallographic symmetry can be fully exploited to reduce the amount of computation to the minimum required to obtain the unique results from the unique data. No such analysis had previously been available in cases where the asymmetric units in real and reciprocal space are not parallelepipeds. An example of this procedure will be given in Section 1.3.4.3.6.5.
This procedure was described in Section 1.3.3.3.2.2. The main difference with the Cooley–Tukey factorization is that if
, where the different factors are pairwise coprime, then the Chinese remainder theorem reindexing makes
isomorphic to a direct sum.
where each p-primary piece is endowed with an induced
-module structure by letting G operate in the usual way but with the corresponding modular arithmetic. The situation is thus more favourable than with the Cooley–Tukey method, since there is no interference between the factors (no `carry'). In the terminology of Section 1.3.4.2.2.2
, G acts diagonally on this direct sum, and results of a partial transform may be transposed by orbit exchange as in Section 1.3.4.3.4.1
but without the extra terms μ or η. The analysis of the symmetry properties of partial transforms also carries over, again without the extra terms. Further simplification occurs for all p-primary pieces with p other than 2 or 3, since all non-primitive translations (including those associated to lattice centring) disappear modulo p.
Thus the cost of the CRT reindexing is compensated by the computational savings due to the absence of twiddle factors and of other phase shifts associated with non-primitive translations and with geometric `carries'.
Within each p-primary piece, however, higher powers of p may need to be split up by a Cooley–Tukey factorization, or carried out directly by a suitably adapted Winograd algorithm.
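A short numerical sketch may help visualize the reindexing. The fragment below (illustrative sizes only) carries out a one-dimensional DFT of length N = N1 N2 with gcd(N1, N2) = 1 as a genuine two-dimensional DFT, after Chinese remainder reindexing of the input and the complementary reindexing of the output; no twiddle factors appear between the two stages, in contrast with the Cooley–Tukey scheme:

```python
import numpy as np

# Prime-factor / CRT reindexing of a 1-D DFT of length N = N1*N2, gcd(N1,N2)=1.
N1, N2 = 5, 4
N = N1 * N2
x = np.random.rand(N) + 1j * np.random.rand(N)

t2 = pow(N2, -1, N1)        # N2^{-1} mod N1
t1 = pow(N1, -1, N2)        # N1^{-1} mod N2

# Input reindexing by the Chinese remainder theorem: n <-> (n mod N1, n mod N2)
n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing='ij')
y = x[(n1 * N2 * t2 + n2 * N1 * t1) % N]

# Two sets of short DFTs, with no intervening twiddle factors
Y = np.fft.fft2(y)

# Output reindexing: k = k1*N2 + k2*N1 mod N
k1, k2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing='ij')
X = np.empty(N, dtype=complex)
X[(k1 * N2 + k2 * N1) % N] = Y

assert np.allclose(X, np.fft.fft(x))
```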
As was the case in the absence of symmetry, the two previous classes of algorithms can only factor the global transform into partial transforms on prime numbers of points, but cannot break the latter down any further. Rader's idea of using the action of the group of units to obtain further factorization of a p-primary transform has been used in `scalar' form by Auslander & Shenefelt (1987)
, Shenefelt (1988)
, and Auslander et al. (1988)
. It will be shown here that it can be adapted to the crystallographic case so as to take advantage also of the possible existence of n-fold cyclic symmetry elements
in a two-dimensional transform (Bricogne & Tolimieri, 1990
). This adaptation entails the use of certain rings of algebraic integers rather than ordinary integers, whose connection with the handling of cyclic symmetry will now be examined.
Let G be the group associated with a threefold axis of symmetry: with
. In a standard trigonal basis, G has matrix representation
in real space,
in reciprocal space. Note that
and that
so that
and
are conjugate in the group of
unimodular integer matrices. The group ring
is commutative, and has the structure of the polynomial ring
with the single relation
corresponding to the minimal polynomial of
. In the terminology of Section 1.3.3.2.4
, the ring structure of
is obtained from that of
by carrying out polynomial addition and multiplication modulo
, then replacing X by any generator of G. This type of construction forms the very basis of algebraic number theory [see Artin (1944
, Section IIc) for an illustration of this viewpoint], and
as just defined is isomorphic to the ring
of algebraic integers of the form
under the identification
. Addition in this ring is defined component-wise, while multiplication is defined by
In the case of a fourfold axis, with
, and
is obtained from
by carrying out polynomial arithmetic modulo
. This identifies
with the ring
of Gaussian integers of the form
, in which addition takes place component-wise while multiplication is defined by
In the case of a sixfold axis, with
, and
is isomorphic to
under the mapping
since
.
Thus in all cases where
is an irreducible quadratic polynomial with integer coefficients.
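The arithmetic of these rings is easily made explicit. In the sketch below (written independently of any crystallographic code, with elements a + bx stored as integer pairs), multiplication is carried out modulo the minimal polynomial of the generator: X² + X + 1 for a threefold axis, X² + 1 for a fourfold axis.

```python
# Arithmetic in the rings of algebraic integers attached to cyclic axes,
# with a + b*x represented as the integer pair (a, b).

def mul_eisenstein(p, q):
    """(a + b*w)(c + d*w) with w^2 = -1 - w  (threefold axis, Z[X]/(X^2+X+1))."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c - b * d)

def mul_gaussian(p, q):
    """(a + b*i)(c + d*i) with i^2 = -1  (fourfold axis, Z[X]/(X^2+1))."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

w = (0, 1)          # w has order 3: w*w*w = 1
assert mul_eisenstein(mul_eisenstein(w, w), w) == (1, 0)
i = (0, 1)          # i has order 4
assert mul_gaussian(mul_gaussian(mul_gaussian(i, i), i), i) == (1, 0)
```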
The actions of G on lattices in real and reciprocal space (Sections 1.3.4.2.2.4, 1.3.4.2.2.5
) extend naturally to actions of
on
in which an element
of
acts via
in real space, and via
in reciprocal space. These two actions are related by conjugation, since
and the following identity (which is fundamental in the sequel) holds:
Let us now consider the calculation of a two-dimensional DFT with n-fold cyclic symmetry
for an odd prime
. Denote
by
. Both the data and the results of the DFT are indexed by
: hence the action of
on these indices is in fact an action of
, the latter being obtained from
by carrying out all integer arithmetic in
modulo p. The algebraic structure of
combines the symmetry-carrying ring structure of
with the finite field structure of
used in Section 1.3.3.2.3.1
, and holds the key to a symmetry-adapted factorization of the DFT at hand.
The structure of depends on whether
remains irreducible when considered as a polynomial over
. Thus two cases arise:
These two cases require different developments.
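Which of the two cases applies for a given odd prime p can be decided by elementary means; the following sketch (the choice of primes is illustrative) simply looks for roots of the quadratic attached to a threefold axis in GF(p):

```python
# Test whether P(X) = X^2 + X + 1 remains irreducible modulo an odd prime p by
# looking for roots in GF(p).  When a root exists the quotient ring splits into
# two pieces; when none exists, the quotient is the finite field GF(p^2).

def roots_mod_p(p, poly=(1, 1, 1)):
    """Roots of c2*X^2 + c1*X + c0 modulo p (empty list if irreducible)."""
    c2, c1, c0 = poly
    return [x for x in range(p) if (c2 * x * x + c1 * x + c0) % p == 0]

for p in (5, 7, 11, 13):
    r = roots_mod_p(p)
    print(p, 'splits' if r else 'irreducible', r)
# X^2 + X + 1 has roots mod p exactly when p = 1 (mod 3): 7 and 13 split,
# while 5 and 11 leave it irreducible.
```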
Most crystallographic Fourier syntheses are real-valued and originate from Hermitian-symmetric collections of Fourier coefficients. Hermitian symmetry is closely related to the action of a centre of inversion in reciprocal space, and thus interacts strongly with all other genuinely crystallographic symmetry elements of order 2. All these symmetry properties are best treated by factoring by 2 and reducing the computation of the initial transform to that of a collection of smaller transforms with less symmetry or none at all.
The computation of a DFT with Hermitian-symmetric or real-valued data can be carried out at half the cost of an ordinary transform, essentially by `multiplexing' pairs of special partial transforms into general complex transforms, and then `demultiplexing' the results on the basis of their symmetry properties. The treatment given below is for general dimension n; a subset of cases for was treated by Ten Eyck (1973)
.
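The simplest instance of this multiplexing idea, helpful to keep in mind throughout this section, is the packing of two real-valued vectors into a single complex vector and the recovery of their individual transforms from the Hermitian symmetry of the result. The one-dimensional numpy sketch below (the array length is arbitrary) does exactly that:

```python
import numpy as np

# Two real-valued vectors are packed into one complex vector, transformed once,
# and separated again using the Hermitian symmetry of transforms of real data.
n = 8
x1 = np.random.rand(n)
x2 = np.random.rand(n)

Z = np.fft.fft(x1 + 1j * x2)          # one complex transform for two real ones
Zrev = np.conj(np.roll(Z[::-1], 1))   # conj(Z[-k]) = conj(Z[(n-k) mod n])

F1 = 0.5 * (Z + Zrev)                 # Hermitian-symmetric part -> transform of x1
F2 = -0.5j * (Z - Zrev)               # remaining part          -> transform of x2

assert np.allclose(F1, np.fft.fft(x1))
assert np.allclose(F2, np.fft.fft(x2))
```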
A vector is said to be Hermitian-antisymmetric if
Its transform
then satisfies
i.e. is purely imaginary.
If X is Hermitian-antisymmetric, then is Hermitian-symmetric, with
real-valued. The treatment of Section 1.3.4.3.5.1
may therefore be adapted, with trivial factors of i or
, or used as such in conjunction with changes of variable by multiplication by
.
The matrix is its own contragredient, and hence (Section 1.3.2.4.2.2
) the transform of a symmetric (respectively antisymmetric) function is symmetric (respectively antisymmetric). In this case the group
acts in both real and reciprocal space as
. If
with both factors diagonal, then
acts by
i.e.
The symmetry or antisymmetry properties of X may be written with
for symmetry and
for antisymmetry.
The computation will be summarized as with the same indexing as that used for structure-factor calculation. In both cases it will be shown that a transform
with
and M diagonal can be computed using only
partial transforms
instead of
.
Conjugate symmetry (Section 1.3.2.4.2.3) implies that if the data X are real and symmetric [i.e.
and
], then so are the results
. Thus if
contains a centre of symmetry, F is real symmetric. There is no distinction (other than notation) between structure-factor and electron-density calculation; the algorithms will be described in terms of the former. It will be shown that if
, a real symmetric transform can be computed with only
partial transforms
instead of
.
If X is real antisymmetric, then its transform is purely imaginary and antisymmetric. The double-multiplexing techniques used for real symmetric transforms may therefore be adapted with only minor changes involving signs and factors of i.
So far the multiplexing technique has been applied to pairs of vectors with similar types of parity-related and/or conjugate symmetry properties, in particular the same value of ɛ.
It can be generalized so as to accommodate mixtures of vectors with different symmetry characteristics. For example if is Hermitian-symmetric and
is Hermitian-antisymmetric, so that
is real-valued while
has purely imaginary values, the multiplexing process should obviously form
(instead of
if both had the same type of symmetry), and demultiplexing consists in separating
The general multiplexing formula for pairs of vectors may therefore be written where ω is a phase factor (e.g. 1 or i) chosen in such a way that all non-exceptional components of
and
(or
and
) be embedded in the complex plane
along linearly independent directions, thus making multiplexing possible.
It is possible to develop a more general form of multiplexing/demultiplexing for more than two vectors, which can be used to deal with symmetry elements of order 3, 4 or 6. It is based on the theory of group characters (Ledermann, 1987).
All the necessary ingredients are now available for calculating the CDFT for any given space group.
Space group P1 is dealt with by the methods of Section 1.3.4.3.5.1 and
by those of Section 1.3.4.3.5.4
.
A general monoclinic transformation is of the form with
a diagonal matrix whose entries are
or
, and
a vector whose entries are 0 or
. We may thus decompose both real and reciprocal space into a direct sum of a subspace
where
acts as the identity, and a subspace
where
acts as minus the identity, with
. All usual entities may be correspondingly written as direct sums, for instance:
We will use factoring by 2, with decimation in frequency when computing structure factors, and decimation in time when computing electron densities; this corresponds to with
,
. The non-primitive translation vector
then belongs to
, and thus
The symmetry relations obeyed by
and F are as follows: for electron densities
or, after factoring by 2,
while for structure factors
with its Friedel counterpart
or, after factoring by 2,
with Friedel counterpart
When calculating electron densities, two methods may be used.
Almost all orthorhombic space groups are generated by two monoclinic transformations and
of the type described in Section 1.3.4.3.6.2
, with the addition of a centre of inversion
for centrosymmetric groups. The only exceptions are Fdd2 and Fddd which contain diamond glides, in which some non-primitive translations are `square roots' not of primitive lattice translations, but of centring translations. The generic case will be examined first.
To calculate electron densities, the unique octant of data may first be transformed on (respectively
) as in Section 1.3.4.3.6.2
using the symmetry pertaining to generator
. These intermediate results may then be expanded by generator
by the formula of Section 1.3.4.3.3
prior to the final transform on
(respectively
). To calculate structure factors, the reverse operations are applied in the reverse order.
The two exceptional groups Fdd2 and Fddd only require a small modification. The F-centring causes the systematic absence of parity classes with mixed parities, leaving only (000) and (111). For the former, the phase factors in the symmetry relations of Section 1.3.4.3.6.2
become powers of (−1) so that one is back to the generic case. For the latter, these phase factors are odd powers of i which it is a simple matter to incorporate into a modified multiplexing/demultiplexing procedure.
All the symmetries in this class of groups can be handled by the generalized Rader/Winograd algorithms of Section 1.3.4.3.4.3, but no implementation of these is yet available.
In groups containing axes of the form with g.c.d.
along the c direction, the following procedure may be used (Ten Eyck, 1973)
:
These are usually treated as their orthorhombic or tetragonal subgroups, as the body-diagonal threefold axis cannot be handled by ordinary methods of decomposition.
The three-dimensional factorization technique of Section 1.3.4.3.4.1 allows a complete treatment of cubic symmetry. Factoring by 2 along all three dimensions gives four types (i.e. orbits) of parity classes:
Orbit exchange using the threefold axis thus allows one to reduce the number of partial transforms from 8 to 4 (one per orbit). Factoring by 3 leads to a reduction from 27 to 11 (in this case, further reduction to 9 can be gained by multiplexing the three diagonal classes with residual threefold symmetry into a single class; see Section 1.3.4.3.5.6
). More generally, factoring by q leads to a reduction from
to
. Each of the remaining transforms then has a symmetry induced from the orthorhombic or tetragonal subgroup, which can be treated as above.
No implementation of this procedure is yet available.
Lattice centring is an instance of the duality between periodization and decimation: the extra translational periodicity of ρ induces a decimation of described by the `reflection conditions' on h. As was pointed out in Section 1.3.4.2.2.3
, non-primitive lattices are introduced in order to retain the same matrix representation for a given geometric symmetry operation in all the arithmetic classes in which it occurs. From the computational point of view, therefore, the main advantage in using centred lattices is that it maximizes decomposability (Section 1.3.4.2.2.4
); reindexing to a primitive lattice would for instance often destroy the diagonal character of the matrix representing a dyad.
In the usual procedure involving three successive one-dimensional transforms, the loss of efficiency caused by the duplication of densities or the systematic vanishing of certain classes of structure factors may be avoided by using a multiplexing/demultiplexing technique (Ten Eyck, 1973):
The three-dimensional factorization technique of Section 1.3.4.3.4.1 is particularly well suited to the treatment of centred lattices: if the decimation matrix N contains as a factor
a matrix which `integerizes' all the non-primitive lattice vectors, then centring is reflected by the systematic vanishing of certain classes of vectors of decimated data or results, which can simply be omitted from the calculation. An alternative possibility is to reindex on a primitive lattice and use different representative matrices for the symmetry operations: the loss of decomposability is of little consequence in this three-dimensional scheme, although it substantially complicates the definition of the cocycles
and
.
The preceding sections have been devoted to showing how the raw computational efficiency of a crystallographic Fourier transform algorithm can be maximized. This section will briefly discuss another characteristic (besides speed) which a crystallographic Fourier transform program may be required to possess if it is to be useful in various applications: a convenient and versatile mode of presentation of input data or output results.
The standard crystallographic FFT programs (Ten Eyck, 1973, 1985
) are rather rigid in this respect, and use rudimentary data structures (lists of structure-factor values, and two-dimensional arrays containing successive sections of electron-density maps). It is frequently the case that considerable reformatting of these data or results must be carried out before they can be used in other computations; for instance, maps have to be converted from 2D sections to 3D `bricks' before they can be inspected on a computer graphics display.
The explicitly three-dimensional approach to the factorization of the DFT and the use of symmetry offers the possibility of richer and more versatile data structures. For instance, the use of `decimation in frequency' in real space and of `decimation in time' in reciprocal space leads to data structures in which real-space coordinates are handled by blocks (thus preserving, at least locally, the three-dimensional topological connectivity of the maps) while reciprocal-space indices are handled by parity classes or their generalizations for factors other than 2 (thus making the treatment of centred lattices extremely easy). This global three-dimensional indexing also makes it possible to carry symmetry and multiplicity characteristics for each subvector of intermediate results for the purpose of automating the use of the orbit exchange mechanism.
Brünger (1989) has described the use of a similar three-dimensional factoring technique in the context of structure-factor calculations for the refinement of macromolecular structures.
Fourier transform (FT) calculations play an indispensable role in crystallography, because the Fourier transformation is inherent in the diffraction phenomenon itself.
Besides this obligatory use, the FT has numerous other applications, motivated more often by its mathematical properties than by direct physical reasoning (although the latter can be supplied after the fact). Typically, many crystallographic computations turn out to be convolutions in disguise, which can be speeded up by orders of magnitude through a judicious use of the FT. Several recent advances in crystallographic computation have been based on this kind of observation.
Bragg (1929) was the first to use this type of calculation to assist structure determination. Progress in computing techniques since that time was reviewed in Section 1.3.4.3.1
.
The usefulness of the maps thus obtained can be adversely affected by three main factors:
Limited resolution causes `series-termination errors' first investigated by Bragg & West (1930), who used an optical analogy with the numerical aperture of a microscope. James (1948b)
gave a quantitative description of this phenomenon as a convolution with the `spherical Dirichlet kernel' (Section 1.3.4.2.1.3
), which reflects the truncation of the Fourier spectrum by multiplication with the indicator function of the limiting resolution sphere. Bragg & West (1930)
suggested that the resulting ripples might be diminished by applying an artificial temperature factor to the data, which performs a further convolution with a Gaussian point-spread function. When the electron-density map is to be used for model refinement, van Reijen (1942)
suggested using Fourier coefficients calculated from the model when no observation is available, as a means of combating series-termination effects.
Errors in the data introduce errors in the electron-density maps, with the same mean-square value by virtue of Parseval's theorem. Special positions accrue larger errors (Cruickshank & Rollett, 1953; Cruickshank, 1965a
). To minimize the mean-square electron-density error due to large phase uncertainties, Blow & Crick (1959)
introduced the `best Fourier' which uses centroid Fourier coefficients; the associated error level in the electron-density map was evaluated by Blow & Crick (1959)
and Dickerson et al. (1961a
,b
).
Computational errors used to be a serious concern when Beevers–Lipson strips were used, and Cochran (1948a) carried out a critical evaluation of the accuracy limitations imposed by strip methods. Nowadays, the FFT algorithm implemented on digital computers with a word size of at least 32 bits gives results accurate to six decimal places or better in most applications (see Gentleman & Sande, 1966
).
Various approaches to the phase problem are based on certain modifications of the electron-density map, followed by Fourier analysis of the modified map and extraction of phase information from the resulting Fourier coefficients.
Sayre (1952a) derived his `squaring method equation' for structures consisting of equal, resolved and spherically symmetric atoms by observing that squaring such an electron density is equivalent merely to sharpening each atom into its square. Thus
where
is the ratio between the form factor
common to all the atoms and the form factor
for the squared version of that atom.
Most of the central results of direct methods, such as the tangent formula, are an immediate consequence of Sayre's equation. Phase refinement for a macromolecule by enforcement of the squaring method equation was demonstrated by Sayre (1972, 1974
).
A category of phase improvement procedures known as `density modification' is based on the pointwise application of various quadratic or cubic `filters' to electron-density maps after removal of negative regions (Hoppe & Gassmann, 1968; Hoppe et al., 1970
; Barrett & Zwick, 1971
; Gassmann & Zechmeister, 1972
; Collins, 1975
; Collins et al., 1976
; Gassmann, 1976
). These operations are claimed to be equivalent to reciprocal-space phase-refinement techniques such as those based on the tangent formula. Indeed the replacement of
by
, where P is a polynomial
yields
and hence gives rise to the convolution-like families of terms encountered in direct methods. This equivalence, however, has been shown to be rather superficial (Bricogne, 1982
) because the `uncertainty principle' embodied in Heisenberg's inequality (Section 1.3.2.4.4.3
) imposes severe limitations on the effectiveness of any procedure which operates pointwise in both real and reciprocal space.
In applying such methods, sampling considerations must be given close attention. If the spectrum of extends to resolution Δ and if the pointwise non-linear filter involves a polynomial P of degree n, then P(
) should be sampled at intervals of at most
to accommodate the full bandwidth of its spectrum.
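A one-dimensional numerical experiment makes the rule transparent: squaring a band-limited signal doubles the extent of its spectrum, so that even a quadratic filter already requires twice the Shannon sampling rate of the unmodified density. The grid size and band limit below are illustrative choices:

```python
import numpy as np

# A degree-2 pointwise filter doubles the bandwidth of a band-limited signal.
n, kmax = 256, 10
k = np.fft.fftfreq(n, d=1.0 / n).astype(int)       # integer frequencies

rng = np.random.default_rng(1)
F = np.zeros(n, dtype=complex)
F[:kmax + 1] = rng.standard_normal(kmax + 1) + 1j * rng.standard_normal(kmax + 1)
F[0] = F[0].real                                    # keep rho real-valued
F[-kmax:] = np.conj(F[kmax:0:-1])                   # Friedel (Hermitian) mates

rho = np.fft.ifft(F).real                           # band-limited 'density'
G = np.fft.fft(rho ** 2)                            # spectrum of the squared density

print(np.abs(k[np.abs(F) > 1e-10]).max())           # 10: band limit of rho
print(np.abs(k[np.abs(G) > 1e-10]).max())           # 20: band limit of rho^2
```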
Crystals of proteins and nucleic acids contain large amounts of mother liquor, often in excess of 50% of the unit-cell volume, occupying connected channels. The well ordered electron density corresponding to the macromolecule thus occupies only a periodic subregion
of the crystal. Thus
implying the convolution identity between structure factors (Main & Woolfson, 1963
):
which is a form of the Shannon interpolation formula (Sections 1.3.2.7.1
, 1.3.4.2.1.7
; Bricogne, 1974
; Colman, 1974
).
It is often possible to obtain an approximate `molecular envelope' from a poor electron-density map
, either interactively by computer graphics (Bricogne, 1976
) or automatically by calculating a moving average of the electron density within a small sphere S. The latter procedure can be implemented in real space (Wang, 1985
). However, as it is a convolution of
with
, it can be speeded up considerably (Leslie, 1987
) by computing the moving average
as
This remark is identical in substance to Booth's method of computation of `bounded projections' (Booth, 1945a) described in Section 1.3.4.2.1.8
, except that the summation is kept three-dimensional.
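A compact implementation of this reciprocal-space route to the moving average may be sketched as follows; the grid size, sphere radius and final threshold are all illustrative choices rather than recommended values:

```python
import numpy as np

# Moving average of a map within a small sphere, computed by the convolution
# theorem (one forward and one inverse FFT), as a basis for envelope estimation.
n = 64                                    # grid points per cell edge (hypothetical)
rho = np.random.rand(n, n, n)             # stand-in for a poor electron-density map

# Indicator of a sphere of radius r (in grid units), built on the periodic grid
idx = np.fft.fftfreq(n, d=1.0 / n)        # signed grid coordinates 0..n/2, -n/2..-1
X, Y, Z = np.meshgrid(idx, idx, idx, indexing='ij')
r = 5.0
chi = ((X**2 + Y**2 + Z**2) <= r**2).astype(float)
chi /= chi.sum()                          # normalize so the result is an average

rho_bar = np.fft.ifftn(np.fft.fftn(rho) * np.fft.fftn(chi)).real

# A crude envelope estimate is then obtained by thresholding the smoothed map
envelope = rho_bar > rho_bar.mean()
```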
The iterative use of the estimated envelope for the purpose of phase improvement (Wang, 1985
) is a submethod of the previously developed method of molecular averaging, which is described below. Sampling rules for the Fourier analysis of envelope-truncated maps will be given there.
Macromolecules and macromolecular assemblies frequently crystallize with several identical subunits in the asymmetric unit, or in several crystal forms containing the same molecule in different arrangements. Rossmann & Blow (1963) recognized that intensity data collected from such structures are redundant (Sayre, 1952b
) and that their redundancy could be a source of phase information.
The phase constraints implied by the consistency of geometrically redundant intensities were first derived by Rossmann & Blow (1963), and were generalized by Main & Rossmann (1966)
. Crowther (1967
, 1969
) reformulated them as linear eigenvalue equations between structure factors, for which he proposed an iterative matrix solution method. Although useful in practice (Jack, 1973
), this reciprocal-space approach required computations of size
for N reflections, so that N could not exceed a few thousand.
The theory was then reformulated in real space (Bricogne, 1974), showing that the most costly step in Crowther's procedure could be carried out much more economically by averaging the electron densities of all crystallographically independent subunits, then rebuilding the crystal(s) from this averaged subunit, flattening the density in the solvent region(s) by resetting it to its average value. This operation is a projection [by virtue of Section 1.3.4.2.2.2
(d)
]. The overall complexity was thus reduced from
to N log N. The design and implementation of a general-purpose program package for averaging, reconstructing and solvent-flattening electron-density maps (Bricogne, 1976
) led rapidly to the first high-resolution determinations of virus structures (Bloomer et al., 1978
; Harrison et al., 1978
), with
.
The considerable gain in speed is a consequence of the fact that the masking operations used to retrieve the various copies of the common subunit are carried out by simple pointwise multiplication by an indicator function in real space, whereas they involve a convolution with
in reciprocal space.
The averaging by noncrystallographic symmetries of an electron-density map calculated by FFT – hence sampled on a grid which is an integral subdivision of the period lattice – necessarily entails the interpolation of densities at non-integral points of that grid. The effect of interpolation on the structure factors recalculated from an averaged map was examined by Bricogne (1976). This study showed that, if linear interpolation is used, the initial map should be calculated on a fine grid, of size Δ/5 or Δ/6 at resolution Δ (instead of the previously used value of Δ/3). The analysis about to be given applies to all interpolation schemes which consist in a convolution of the sampled density with a fixed interpolation kernel function K.
Let be a
-periodic function. Let K be the interpolation kernel in `normalized' form, i.e. such that
and scaled so as to interpolate between sample values given on a unit grid
; in the case of linear interpolation, K is the `trilinear wedge'
where
Let
be sampled on a grid
, and let
denote the function interpolated from this sampled version of
. Then:
where
, so that
The transform of thus consists of
Thus is not band-limited even if
is. Supposing, however, that
is band-limited and that grid
satisfies the Shannon sampling criterion, we see that there will be no overlap between the different bands:
may therefore be recovered from the main band by compensating its attenuation, which is approximately a temperature-factor correction.
For numerical work, however, must be resampled onto another grid
, which causes its transform to become periodized into
This now causes the main band
to become contaminated by the ghost bands
of the translates
of
.
Aliasing errors may be minimized by increasing the sampling rate in grid well beyond the Shannon minimum, which rapidly reduces the r.m.s. content of the ghost bands.
The sampling rate in grid need only exceed the Shannon minimum to the extent required to accommodate the increase in bandwidth due to convolution with
, which is the reciprocal-space counterpart of envelope truncation (or solvent flattening) in real space.
Green's theorem stated in terms of distributions (Section 1.3.2.3.9.1) is particularly well suited to the calculation of the Fourier transforms
of indicator functions. Let f be the indicator function
and let S be the boundary of U (assumed to be a smooth surface). The jump
in the value of f across S along the outer normal vector is
, the jump
in the normal derivative of f across S is
, and the Laplacian of f as a function is (almost everywhere) 0 so that
. Green's theorem then reads:
The function satisfies the identity
. Therefore, in Cartesian coordinates:
i.e.
where n is the outer normal to S. This formula was used by von Laue (1936)
for a different purpose, namely to calculate the transforms of crystal shapes (see also Ewald, 1940
). If the surface S is given by a triangulation, the surface integral becomes a sum over all faces, since n is constant on each face. If U is a solid sphere with radius R, an integration by parts gives immediately:
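The result for the sphere is easily checked numerically. In the sketch below the closed form [sin u − u cos u]/(2π²s³), with u = 2πsR, is rederived from the radial reduction of the Fourier integral (crystallographic convention exp(2πi s·x)) rather than quoted from the text, and compared with a direct quadrature; R and s are arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the transform of the indicator of a solid sphere of
# radius R, evaluated at reciprocal radius s.  The radial reduction of the 3-D
# integral is  int_0^R 4*pi*r^2 * sin(2*pi*s*r)/(2*pi*s*r) dr.
R, s = 1.7, 0.9

def sphere_transform_closed_form(R, s):
    u = 2.0 * np.pi * s * R
    return (np.sin(u) - u * np.cos(u)) / (2.0 * np.pi**2 * s**3)

numeric, _ = quad(lambda r: 4.0 * np.pi * r**2 * np.sinc(2.0 * s * r), 0.0, R)
print(numeric, sphere_transform_closed_form(R, s))   # the two values agree
```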
An atomic model of a crystal structure consists of a list of symmetry-unique atoms described by their positions, their thermal agitation and their chemical identity (which can be used as a pointer to form-factor tables). Form factors are usually parameterized as sums of Gaussians, and thermal agitation by a Gaussian temperature factor or tensor. The formulae given in Section 1.3.4.2.2.6 for Gaussian atoms are therefore adequate for most purposes. High-resolution electron-density studies use more involved parameterizations.
Early calculations were carried out by means of Bragg–Lipson charts (Bragg & Lipson, 1936) which gave a graphical representation of the symmetrized trigonometric sums Ξ of Section 1.3.4.2.2.9.
The approximation of form factors by Gaussians goes back to the work of Vand et al. (1957)
and Forsyth & Wells (1959)
. Agarwal (1978)
gave simplified expansions suitable for medium-resolution modelling of macromolecular structures.
This method of calculating structure factors is expensive because each atom sends contributions of essentially equal magnitude to all structure factors in a resolution shell. The calculation is therefore of size for N atoms and
reflections. Since N and
are roughly proportional at a given resolution, this method is very costly for large structures.
Two distinct programming strategies are available (Rollett, 1965) according to whether the fast loop is on all atoms for each reflection, or on all reflections for each atom. The former method was favoured in the early days when computers were unreliable. The latter was shown by Burnett & Nordman (1974)
to be more amenable to efficient programming, as no multiplication is required in calculating the arguments of the sine/cosine terms: these can be accumulated by integer addition, and used as subscripts in referencing a trigonometric function table.
Robertson (1936b) recognized the similarity between the calculation of structure factors by Fourier summation and the calculation of Fourier syntheses, the main difference being of course that atomic coordinates do not usually lie exactly on a grid obtained by integer subdivision of the crystal lattice. He proposed to address this difficulty by the use of his sorting board, which could extend the scale of subdivision and thus avoid phase errors. In this way the calculation of structure factors became amenable to Beevers–Lipson strip methods, with considerable gain of speed.
Later, Beevers & Lipson (1952) proposed that trigonometric functions attached to atomic positions falling between the grid points on which Beevers–Lipson strips were based should be obtained by linear interpolation from the values found on the strips for the closest grid points. This amounts (Section 1.3.4.4.3.4
) to using atoms in the shape of a trilinear wedge, whose form factor was indicated in Section 1.3.4.4.3.4
and gives rise to aliasing effects (see below) not considered by Beevers & Lipson.
The correct formulation of this idea came with the work of Sayre (1951), who showed that structure factors could be calculated by Fourier analysis of a sampled electron-density map previously generated on a subdivision
of the crystal lattice Λ. When generating such a map, care must be taken to distribute onto the sample grid not only the electron densities of all the atoms in the asymmetric motif, but also those of their images under space-group symmetries and lattice translations. Considerable savings in computation occur, especially for large structures, because atoms are localized: each atom sends contributions to only a few grid points in real space, rather than to all reciprocal-lattice points. The generation of the sampled electron-density map is still of complexity
for N atoms and
reflections, but the proportionality constant is smaller than that in Section 1.3.4.4.4
by orders of magnitude; the extra cost of Fourier analysis, proportional to
, is negligible.
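A one-dimensional toy version of this procedure (cell, atomic positions and Gaussian width all invented for the purpose) builds a sampled density from a few Gaussian `atoms', Fourier-analyses it with an FFT, and compares the result with the direct summation over atoms; the Gaussian transform exp(−2π²bh²) is the standard one for an atom of variance b:

```python
import numpy as np

# Structure factors by Fourier analysis of a sampled 1-D density.
n_grid = 128                                  # sampling points along a cell of edge 1
x_atoms = np.array([0.13, 0.47, 0.62])        # fractional atomic coordinates
b = 0.01                                      # variance of the Gaussian atoms

x = np.arange(n_grid) / n_grid
rho = np.zeros(n_grid)
for xj in x_atoms:
    u = (x - xj + 0.5) % 1.0 - 0.5            # minimum-image distance (tails negligible)
    rho += np.exp(-u**2 / (2.0 * b)) / np.sqrt(2.0 * np.pi * b)

F_fft = np.fft.fft(rho) / n_grid              # Fourier analysis of the sampled map

h = np.arange(1, 10)
f_h = np.exp(-2.0 * np.pi**2 * b * h**2)      # transform of one Gaussian atom
F_direct = np.array([f * np.sum(np.exp(-2j * np.pi * hh * x_atoms))
                     for hh, f in zip(h, f_h)])

print(np.max(np.abs(F_fft[h] - F_direct)))    # small: limited by aliasing and tails
```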
The idea of approximating a Fourier transform by a discrete transform on sampled values had already been used by Whittaker (1948), who tested it on the first three odd Hermite functions and did not consider the problem of aliasing errors. By contrast, Sayre gave a lucid analysis of the sampling problems associated to this technique. If the periodic sampled map is written in the form of a weighted lattice distribution (as in Section 1.3.2.7.3
) as
then its discrete Fourier transform yields
so that each correct value
is corrupted by its aliases
for
.
To cure this aliasing problem, Sayre used `hypothetical atoms' with form factors equal to those of standard atoms within the resolution range of interest, but set to zero outside that range. This amounts to using atomic densities with built-in series-termination errors, which has the detrimental effect of introducing slowly decaying ripples around the atom which require incrementing sample densities at many more grid points per atom.
Sayre considered another cure in the form of an artificial temperature factor B (Bragg & West, 1930) applied to all atoms. This spreads each atom on more grid points in real space but speeds up the decay of its transform in reciprocal space, thus allowing the use of a coarser sampling grid in real space. He discounted it as spoiling the agreement with observed data, but Ten Eyck (1977)
pointed out that this agreement could be restored by applying the negative of the artificial temperature factor to the results. This idea cannot be carried to extremes: if B is chosen too large, the atoms will be so spread out in real space as each to occupy a sizeable fraction of the unit cell and the advantage of atom localization will be lost; furthermore, the form factors will fall off so rapidly that round-off error amplification will occur when the results are sharpened back. Clearly, there exists an optimal combination of B and sampling rate yielding the most economical computation for a given accuracy at a given resolution, and a formula will now be given to calculate it.
Let us make the simplifying assumption that all atoms are roughly equal and that their common form factor can be represented by an equivalent temperature factor . Let
be the resolution to which structure factors are wanted. The Shannon sampling interval is
. Let σ be the oversampling rate, so that the actual sampling interval in the map is
: then consecutive copies of the transform are separated by a distance
in reciprocal space. Let the artificial temperature factor
be added, and let
The worst aliasing occurs at the outer resolution limit
, where the `signal' due to an atom is proportional to
while the `noise' due to the closest alias is proportional to
Thus the signal-to-noise ratio, or quality factor, Q is
If a certain value of Q is desired (e.g. for 1% accuracy), then the equation
defines B in terms of
and Q.
The overall cost of the structure-factor calculation from N atoms is then
where and
are constants depending on the speed of the computer used. This overall cost may be minimized with respect to σ for given
and Q, determining the optimal B (and hence
) in passing by the above relation.
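The optimization can be carried out numerically along the following lines. The sketch assumes the usual Debye–Waller form exp[−(B_eq + B)s²/4] for both the signal and the aliasing noise, with the closest alias centred at 2σ/Δ in reciprocal space; the cost model, with its hypothetical volume V, constants c1 and c2 and exponents, is an illustrative stand-in rather than the expression referred to in the text:

```python
import numpy as np
from scipy.optimize import minimize_scalar

B_eq  = 20.0      # equivalent temperature factor of the atoms (A^2)
delta = 2.0       # target resolution (A)
Q     = 100.0     # desired signal-to-(aliasing-)noise ratio
V     = 1.0e5     # hypothetical unit-cell volume (A^3)

def B_for_quality(sigma):
    """Artificial B making signal/alias = Q at the resolution limit 1/delta."""
    s_lim   = 1.0 / delta                   # outer resolution limit
    s_alias = 2.0 * sigma / delta - s_lim   # distance from the closest alias centre
    return 4.0 * np.log(Q) / (s_alias**2 - s_lim**2) - B_eq

def cost(sigma, n_atoms=10000, c1=1.0, c2=5e-5):
    B = max(B_for_quality(sigma), 0.0)
    width = np.sqrt(B_eq + B)                         # real-space extent of one atom
    pts_per_atom = (width * 2.0 * sigma / delta)**3   # grid points receiving its density
    n_grid = V * (2.0 * sigma / delta)**3             # total number of grid points
    return c1 * n_atoms * pts_per_atom + c2 * n_grid * np.log(n_grid)

res = minimize_scalar(cost, bounds=(1.05, 5.0), method='bounded')
print('optimal sigma = %.2f, B = %.1f A^2' % (res.x, max(B_for_quality(res.x), 0.0)))
```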
Sayre (1951) did observe that applying an artificial temperature factor in real space would not create series-termination ripples: the resulting atoms would have a smaller effective radius than his hypothetical atoms, so that step (i)
would be faster. This optimality of Gaussian smearing is ultimately a consequence of Hardy's theorem (Section 1.3.2.4.4.3
).
Some methods of phase determination rely on maximizing a certain global criterion involving the electron density, of the form
, under constraint of agreement with the observed structure-factor amplitudes, typically measured by a
residual C. Several recently proposed methods use for
various measures of entropy defined by taking
or
(Bricogne, 1982
; Britten & Collins, 1982
; Narayan & Nityananda, 1982
; Bryan et al., 1983
; Wilkins et al., 1983
; Bricogne, 1984
; Navaza, 1985
; Livesey & Skilling, 1985
). Sayre's use of the squaring method to improve protein phases (Sayre, 1974
) also belongs to this category, and is amenable to the same computational strategies (Sayre, 1980
).
These methods differ from the density-modification procedures of Section 1.3.4.4.3.2 in that they seek an optimal solution by moving electron densities (or structure factors) jointly rather than pointwise, i.e. by moving along suitably chosen search directions
[or
].
For computational purposes, these search directions may be handled either as column vectors of sample values on a grid in real space, or as column vectors of Fourier coefficients
in reciprocal space. These column vectors are the coordinates of the same vector
in an abstract vector space
of dimension
over
, but referred to two different bases which are related by the DFT and its inverse (Section 1.3.2.7.3
).
The problem of finding the optimum of S for a given value of C amounts to achieving collinearity between the gradients and
of S and of C in
, the scalar ratio between them being a Lagrange multiplier. In order to move towards such a solution from a trial position, the dependence of
and
on position in
must be represented. This involves the
Hessian matrices H(S) and H(C), whose size precludes their use in the whole of
. Restricting the search to a smaller search subspace of dimension n spanned by
we may build local quadratic models of S and C (Bryan & Skilling, 1980
; Burch et al., 1983
) with respect to n coordinates X in that subspace:
The coefficients of these quadratic models are given by scalar products:
which, by virtue of Parseval's theorem, may be evaluated either in real space or in reciprocal space (Bricogne, 1984
). In doing so, special positions and reflections must be taken into account, as in Section 1.3.4.2.2.8.
Scalar products involving S are best evaluated by real-space grid summation, because H(S) is diagonal in this representation; those involving C are best calculated by reciprocal-space summation, because H(C) is at worst
block-diagonal in this representation. Using these Hessian matrices in the wrong space would lead to prohibitively expensive convolutions instead of scalar (or at worst
matrix) multiplications.
Since the origins of X-ray crystal structure analysis, the calculation of crystallographic Fourier series has been closely associated with the process of refinement. Fourier coefficients with phases were obtained for all or part of the measured reflections on the basis of some trial model for all or part of the structure, and Fourier syntheses were then used to complete and improve this initial model. This approach is clearly described in the classic paper by Bragg & West (1929), and was put into practice in the determination of the structures of topaz (Alston & West, 1929
) and diopside (Warren & Bragg, 1929
). Later, more systematic methods of arriving at a trial model were provided by the Patterson synthesis (Patterson, 1934
, 1935a
,b
; Harker, 1936
) and by isomorphous replacement (Robertson, 1935
, 1936c
). The role of Fourier syntheses, however, remained essentially unchanged [see Robertson (1937)
for a review] until more systematic methods of structure refinement were introduced in the 1940s. A particularly good account of the processes of structure completion and refinement may be found in Chapters 15 and 16 of Stout & Jensen (1968)
.
It is beyond the scope of this section to review the vast topic of refinement methods: rather, it will give an account of those aspects of their development which have sought improved power by exploiting properties of the Fourier transformation. It is of more than historical interest that some recent advances in the crystallographic refinement of macromolecular structures had been anticipated by Cochran and Cruickshank in the early 1950s.
Hughes (1941) was the first to use the already well established multivariate least-squares method (Whittaker & Robinson, 1944
) to refine initial estimates of the parameters describing a model structure. The method gained general acceptance through the programming efforts of Friedlander et al. (1955)
, Sparks et al. (1956)
, Busing & Levy (1961)
, and others.
The Fourier relations between and F (Section 1.3.4.2.2.6
) are used to derive the `observational equations' connecting the structure parameters
to the observations
comprising the amplitudes and their experimental variances for a set
of unique reflections.
The normal equations giving the corrections δu to the parameters are then where
To calculate the elements of A, write:
hence
In the simple case of atoms with real-valued form factors and isotropic thermal agitation in space group P1, where
being a fractional occupancy.
Positional derivatives with respect to are given by
so that the corresponding
subvector of the right-hand side of the normal equations reads:
The setting up and solution of the normal equations lends itself well to computer programming and has the advantage of providing a thorough analysis of the accuracy of its results (Cruickshank, 1965b, 1970
; Rollett, 1970
). It is, however, an expensive task, of complexity
, which is unaffordable for macromolecules.
It was the use of Fourier syntheses in the completion of trial structures which provided the incentive to find methods for computing 2D and 3D syntheses efficiently, and led to the Beevers–Lipson strips. The limited accuracy of the latter caused the estimated positions of atoms (identified as peaks in the maps) to be somewhat in error. Methods were therefore sought to improve the accuracy with which the coordinates of the electron-density maxima could be determined. The naive method of peak-shape analysis from densities recalculated on a grid using high-accuracy trigonometric tables entailed 27 summations per atom.
Booth (1946a) suggested examining the rapidly varying derivatives of the electron density rather than its slowly varying values. If
then the gradient vector
of
at
can be calculated by means of three Fourier summations from the
vector of Fourier coefficients
Similarly, the Hessian matrix of
at
can be calculated by six Fourier summations from the unique elements of the symmetric matrix of Fourier coefficients:
The scalar maps giving the components of the gradient and Hessian matrix of will be called differential syntheses of 1st order and 2nd order respectively. If
is approximately but not exactly a maximum of
, then the Newton–Raphson estimate of the true maximum
is given by:
This calculation requires only nine accurate Fourier summations (instead of 27), and this number is further reduced to four if the peak is assumed to be spherically symmetrical.
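In modern notation the procedure amounts to a single Newton step computed from Fourier summations. The toy three-dimensional sketch below (one isolated Gaussian `peak', arbitrary coefficients and positions) iterates the step a few times and recovers the true peak position:

```python
import numpy as np

# Peak refinement from 1st- and 2nd-order differential syntheses: gradient and
# Hessian of rho at a trial point are Fourier summations, followed by a
# Newton-Raphson step.
hmax = 8
h = np.array([(h1, h2, h3)
              for h1 in range(-hmax, hmax + 1)
              for h2 in range(-hmax, hmax + 1)
              for h3 in range(-hmax, hmax + 1)])
x_true = np.array([0.31, 0.52, 0.18])                      # position of a lone peak
F = np.exp(-0.1 * (h**2).sum(axis=1)) * np.exp(-2j * np.pi * h @ x_true)

def grad_hess(x):
    """Gradient and Hessian of rho(x) = sum_h F(h) exp(2*pi*i*h.x)."""
    t = F * np.exp(2j * np.pi * h @ x)
    grad = (t @ (2j * np.pi * h)).real
    hess = np.einsum('k,ki,kj->ij', (t * (2j * np.pi) ** 2).real, h, h)
    return grad, hess

x = np.array([0.30, 0.535, 0.17])             # trial position near the peak
for _ in range(5):
    g, H = grad_hess(x)
    x = x - np.linalg.solve(H, g)
print(x, x_true)                              # the Newton iterates converge to the peak
```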
The resulting positions are affected by series-termination errors in the differential syntheses. Booth (1945c, 1946c)
proposed a `back-shift correction' to eliminate them, and extended this treatment to the acentric case (Booth, 1946b
). He cautioned against the use of an artificial temperature factor to fight series-termination errors (Brill et al., 1939
), as this could be shown to introduce coordinate errors by causing overlap between atoms (Booth, 1946c
, 1947a
,b
).
Cruickshank was able to derive estimates for the standard uncertainties of the atomic coordinates obtained in this way (Cox & Cruickshank, 1948; Cruickshank, 1949a
,b
) and to show that they agreed with those provided by the least-squares method.
The calculation of differential Fourier syntheses was incorporated into the crystallographic programs of Ahmed & Cruickshank (1953b) and of Sparks et al. (1956)
.
Having defined the now universally adopted R factors (Booth, 1945b) as criteria of agreement between observed and calculated amplitudes or intensities, Booth proposed that R should be minimized with respect to the set of atomic coordinates
by descending along the gradient of R in parameter space (Booth, 1947c
,d
). This `steepest descents' procedure was compared with Patterson methods by Cochran (1948d)
.
When calculating the necessary derivatives, Booth (1948a, 1949
) used the formulae given above in connection with least squares. This method was implemented by Qurashi (1949)
and by Vand (1948
, 1951
) with parameter-rescaling modifications which made it very close to the least-squares method (Cruickshank, 1950
; Qurashi & Vand, 1953
; Qurashi, 1953
).
Cochran (1948b,c
, 1951a
) undertook to exploit an algebraic similarity between the right-hand side of the normal equations in the least-squares method on the one hand, and the expression for the coefficients used in Booth's differential syntheses on the other hand (see also Booth, 1948a
). In doing so he initiated a remarkable sequence of formal and computational developments which are still actively pursued today.
Let be the electron-density map corresponding to the current atomic model, with structure factors
; and let
be the map calculated from observed moduli and calculated phases, i.e. with coefficients
. If there are enough data for
to have a resolved peak at each model atomic position
, then
while if the calculated phases
are good enough,
will also have peaks at each
:
It follows that
where the summation is over all reflections in
or related to
by space-group and Friedel symmetry (overlooking multiplicity factors!). This relation is less sensitive to series-termination errors than either of the previous two, since the spectrum of
could have been extrapolated beyond the data in
by using that of
[as in van Reijen (1942)
] without changing its right-hand side.
Cochran then used the identity in the form
to rewrite the previous relation as
(the operation
[] on the first line being neutral because of Friedel symmetry). This is equivalent to the vanishing of the
subvector of the right-hand side of the normal equations associated to a least-squares refinement in which the weights would be
Cochran concluded that, for equal-atom structures with
for all j, the positions
obtained by Booth's method applied to the difference map
are such that they minimize the residual
with respect to the atomic positions. If it is desired to minimize the residual of the ordinary least-squares method, then the differential synthesis method should be applied to the weighted difference map
He went on to show (Cochran, 1951b
) that the refinement of temperature factors could also be carried out by inspecting appropriate derivatives of the weighted difference map.
This Fourier method was used by Freer et al. (1976) in conjunction with a stereochemical regularization procedure to refine protein structures.
Cruickshank consolidated and extended Cochran's derivations in a series of classic papers (Cruickshank, 1949b, 1950
, 1952
, 1956
). He was able to show that all the coefficients involved in the right-hand side and normal matrix of the least-squares method could be calculated by means of suitable differential Fourier syntheses even when the atoms overlap. This remarkable achievement lay essentially dormant until its independent rediscovery by Agarwal in 1978 (Section 1.3.4.4.7.6
).
To ensure rigorous equivalence between the summations over (in the expressions of least-squares right-hand side and normal matrix elements) and genuine Fourier summations, multiplicity-corrected weights were introduced by:
where G h denotes the orbit of h and
its isotropy subgroup (Section 1.3.4.2.2.5
). Similarly, derivatives with respect to parameters of symmetry-unique atoms were expressed, via the chain rule, as sums over the orbits of these atoms.
Let be the label of a parameter
belonging to atoms with label j. Then Cruickshank showed that the pth element of the right-hand side of the normal equations can be obtained as
, where
is a differential synthesis of the form
with
a polynomial in (h, k, l) depending on the type of parameter p. The correspondence between parameter type and the associated polynomial extends Booth's original range of differential syntheses, and is recapitulated in the following table.
Unlike Cochran's original heuristic argument, this result does not depend on the atoms being resolved.
Cruickshank (1952) also considered the elements of the normal matrix, of the form
associated with positional parameters. The
block for parameters
and
may be written
which, using the identity
becomes
(Friedel's symmetry makes
redundant on the last line). Cruickshank argued that the first term would give a good approximation to the diagonal blocks of the normal matrix and to those off-diagonal blocks for which
and
are close. On this basis he was able to justify the `n-shift rule' of Shoemaker et al. (1950)
. Cruickshank gave this derivation in a general space group, but using a very terse notation which somewhat obscures it. Using the symmetrized trigonometric structure-factor kernel
of Section 1.3.4.2.2.9
and its multiplication formula, the above expression is seen to involve the values of a Fourier synthesis at points of the form
.
Cruickshank (1956) showed that this analysis could also be applied to the refinement of temperature factors.
These two results made it possible to obtain all coefficients involved in the normal equations by looking up the values of certain differential Fourier syntheses at or at
. At the time this did not confer any superiority over the standard form of the least-squares procedure, because the accurate computation of Fourier syntheses was an expensive operation. The modified Fourier method was used by Truter (1954)
and by Ahmed & Cruickshank (1953a)
, and was incorporated into the program system described by Cruickshank et al. (1961)
. A more recent comparison with the least-squares method was made by Dietrich (1972)
.
There persisted, however, some confusion about the nature of the relationship between Fourier and least-squares methods, caused by the extra factors which make it necessary to compute a differential synthesis for each type of atom. This led Cruickshank to conclude that `in spite of their remarkable similarities the least-squares and modified-Fourier methods are fundamentally distinct'.
Agarwal (1978) rederived and completed Cruickshank's results at a time when the availability of the FFT algorithm made the Fourier method of calculating the coefficients of the normal equations much more economical than the standard method, especially for macromolecules.
As obtained by Cruickshank, the modified Fourier method required a full 3D Fourier synthesis
Agarwal disposed of the latter dependence by pointing out that the multiplication involved is equivalent to a real-space convolution between the differential synthesis and , the standard electron density
for atom type j (Section 1.3.4.2.1.2
) smeared by the isotropic thermal agitation of that atom. Since
is localized, this convolution involves only a small number of grid points. The requirement of a distinct differential synthesis for each parameter type, however, continued to hold, and created some difficulties at the FFT level because the symmetries of differential syntheses are more complex than ordinary space-group symmetries. Jack & Levitt (1978)
sought to avoid the calculation of difference syntheses by using instead finite differences calculated from ordinary Fourier or difference Fourier maps.
In spite of its complication, this return to the Fourier implementation of the least-squares method led to spectacular increases in speed (Isaacs & Agarwal, 1978; Agarwal, 1980
; Baker & Dodson, 1980
) and quickly gained general acceptance (Dodson, 1981
; Isaacs, 1982a
,b
, 1984
).
Lifchitz [see Agarwal et al. (1981), Agarwal (1981)
] proposed that the idea of treating certain multipliers in Cruickshank's modified differential Fourier syntheses by means of a convolution in real space should be applied not only to
, but also to the polynomials
which determine the type of differential synthesis being calculated. This leads to convoluting
with the same ordinary weighted difference Fourier synthesis, rather than
with the differential synthesis of type p. In this way, a single Fourier synthesis, with ordinary (scalar) symmetry properties, needs to be computed; the parameter type and atom type both intervene through the function
with which it is convoluted. This approach has been used as the basis of an efficient general-purpose least-squares refinement program for macromolecular structures (Tronrud et al., 1987
).
This rearrangement amounts to using the fact (Section 1.3.2.3.9.7) that convolution commutes with differentiation. Let
be the inverse-variance weighted difference map, and let us assume that parameter
belongs to atom j. Then the Agarwal form for the pth component of the right-hand side of the normal equations is
while the Lifchitz form is
A very simple derivation of the previous results will now be given, which suggests the possibility of many generalizations.
The weighted difference map has coefficients
which are the gradients of the global residual with respect to each
:
By the chain rule, a variation of each
by
will result in a variation of R by
with
The
operation is superfluous because of Friedel symmetry, so that
may be simply written in terms of the Hermitian scalar product in
:
If
is the transform of
, we have also by Parseval's theorem
We may therefore write
which states that
is the functional derivative of R with respect to
.
The right-hand side of the normal equations has the gradient of R with respect to the pth parameter as its pth element, and by the same argument this may be written as the real-space scalar product of the difference map with the derivative of the calculated density with respect to that parameter. If the parameter belongs to atom j, only the contribution of that atom to the density depends on it, so that the scalar product reduces to one between the difference map and the derivative of the smeared density of atom j. By the identity of Section 1.3.2.4.3.5, this is identical to Lifchitz's expression. The present derivation in terms of scalar products [see Brünger (1989) for another presentation of it] is conceptually simpler, since it invokes only the chain rule [other uses of which have been reviewed by Lunin (1985)] and Parseval's theorem; economy of computation is obviously related to the good localization of the atomic density derivatives compared with the differential syntheses. Convolutions, whose meaning is less clear, are no longer involved; they were a legacy of having first gone over to reciprocal space via differential syntheses in the 1940s.
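As a concrete illustration of this scalar-product formulation, the following minimal one-dimensional sketch (hypothetical data, a single Gaussian atom and a resolution cutoff; none of the names below come from the text) computes the gradient of a least-squares residual with respect to an atomic coordinate twice: once directly in reciprocal space by the chain rule, and once by forming the weighted difference map by FFT and taking its real-space scalar product with the derivative of the atomic density, as Parseval's theorem guarantees.

```python
import numpy as np

# Minimal 1D illustration (hypothetical data): one Gaussian atom in a unit cell,
# residual R = sum_h w_h (|F_calc(h)| - F_obs(h))^2 over a resolution sphere.
N = 256                                   # grid points across the unit cell [0, 1)
x = np.arange(N) / N
b = 0.03                                  # width of the thermally smeared atom

def atom_density(x0):
    d = (x - x0 + 0.5) % 1.0 - 0.5        # periodically wrapped distance to the atom
    return np.exp(-d ** 2 / (2 * b ** 2))

x0 = 0.3
rho = atom_density(x0)
drho_dx0 = -np.gradient(rho, 1.0 / N)     # d(rho)/d(x0) = -d(rho)/dx on the grid

h = np.fft.fftfreq(N) * N                 # integer Miller indices
sel = (np.abs(h) <= 20) & (h != 0)        # resolution cutoff, F(000) excluded
F_calc = np.fft.fft(rho)
rng = np.random.default_rng(0)
F_obs = np.abs(F_calc) * (1 + 0.05 * rng.standard_normal(N))   # fake observed moduli
w = np.ones(N)                            # unit weights (inverse variances)

# (1) gradient by the chain rule, entirely in reciprocal space
dF_dx0 = np.fft.fft(drho_dx0)
phase = F_calc[sel] / np.abs(F_calc[sel])
dmod = np.real(np.conj(phase) * dF_dx0[sel])            # d|F_calc|/d(x0)
grad_recip = np.sum(2 * w[sel] * (np.abs(F_calc[sel]) - F_obs[sel]) * dmod)

# (2) the same gradient via the weighted difference map and Parseval's theorem
D_coef = np.zeros(N, dtype=complex)
D_coef[sel] = 2 * w[sel] * (np.abs(F_calc[sel]) - F_obs[sel]) * phase
D_map = N * np.fft.ifft(D_coef)           # factor N compensates numpy's 1/N in ifft
grad_real = np.real(np.sum(D_map * drho_dx0))           # scalar product <D, d(rho)/d(x0)>

print(grad_recip, grad_real)              # the two values agree to rounding error
```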
Cast in this form, the calculation of derivatives by FFT methods appears as a particular instance of the procedure described in connection with variational techniques (Section 1.3.4.4.6) to calculate the coefficients of local quadratic models in a search subspace; this is far from surprising since varying the electron density through a variation of the parameters of an atomic model is a particular case of the `free' variations considered by the variational approach. The latter procedure would accommodate in a very natural fashion the joint consideration of an energetic (Jack & Levitt, 1978; Brünger et al., 1987; Brünger, 1988; Brünger et al., 1989; Kuriyan et al., 1989) or stereochemical (Konnert, 1976; Sussman et al., 1977; Konnert & Hendrickson, 1980; Hendrickson & Konnert, 1980; Tronrud et al., 1987) restraint function (which would play the role of S) and of the crystallographic residual (which would be C). It would even have over the latter the superiority of affording a genuine second-order approximation, albeit only in a subspace, hence the ability to detect negative curvature and the resulting bifurcation behaviour (Bricogne, 1984). Current methods are unable to do this because they use only first-order models, and this is known to degrade severely the overall efficiency of the refinement process.
The impossibility of carrying out a full-matrix least-squares refinement of a macromolecular crystal structure, caused by excessive computational cost and by the paucity of observations, led Diamond (1971) to propose a real-space refinement method in which stereochemical knowledge was used to keep the number of free parameters to a minimum. Refinement took place by a least-squares fit between the `observed' electron-density map and a model density consisting of Gaussian atoms. This procedure, coupled to iterative recalculation of the phases, led to the first highly refined protein structures obtained without using full-matrix least squares (Huber et al., 1974; Bode & Schwager, 1975; Deisenhofer & Steigemann, 1975; Takano, 1977a,b).
Real-space refinement takes advantage of the localization of atoms (each parameter interacts only with the density near the atom to which it belongs) and gives the most immediate description of stereochemical constraints. A disadvantage is that fitting the `observed' electron density amounts to treating the phases of the structure factors as observed quantities, and to ignoring the experimental error estimates on their moduli. The method is also much more vulnerable to series-termination errors and accidentally missing data than the least-squares method. These objections led to the progressive disuse of Diamond's method, and to a switch towards reciprocal-space least squares following Agarwal's work.
The connection established above between the Cruickshank–Agarwal modified Fourier method and the simple use of the chain rule affords a partial refutation both of the premises of Diamond's method and of the objections made against it:
The calculation of the inner products from a sampled gradient map D requires even more caution than that of structure factors via electron-density maps, described in Section 1.3.4.4.5, because the derivatives of the atomic densities have transforms which extend even further in reciprocal space than those of the atomic densities themselves. Analytically, if the atomic densities are Gaussians, their derivatives are finite sums of multivariate Hermite functions (Section 1.3.2.4.4.2), and hence the same is true of their transforms. The difference map D must therefore be finely sampled, and the relation between error and sampling rate may be investigated as in Section 1.3.4.4.5. An examination of the sampling rates commonly used (e.g. one third of the resolution) shows that they are insufficient. Tronrud et al. (1987) propose to relax this requirement by applying an artificial temperature factor to the atomic densities (cf. Section 1.3.4.4.5) and the negative of that temperature factor to D, a procedure of questionable validity because the latter `sharpening' operation is ill defined [the corresponding exponential factor grows at infinity and hence does not define a tempered distribution, so the associativity properties of convolution may be lost]. A more robust procedure would be to compute the scalar product by means of a more sophisticated numerical quadrature formula than a mere grid sum.
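The need for fine sampling can be checked directly on a small example; the sketch below (a hypothetical one-dimensional Gaussian atom and a smooth synthetic difference map, unrelated to any particular program) compares the grid-sum estimate of the inner product at several sampling rates against a finely sampled reference value.

```python
import numpy as np

# 1D check of how the grid-sum estimate of <D, d(sigma)/dx> depends on sampling.
# Hypothetical data: sigma is a Gaussian 'atom', D a smooth synthetic difference map.
b = 0.04                                   # atomic width (fraction of the cell edge)

def inner_product(N):
    x = np.arange(N) / N
    sigma = np.exp(-((x - 0.5) ** 2) / (2 * b ** 2))
    dsigma_dx = -(x - 0.5) / b ** 2 * sigma          # derivative: its transform extends
                                                     # further in reciprocal space than sigma's
    D = np.cos(2 * np.pi * 7 * x) + 0.3 * np.sin(2 * np.pi * 11 * x)   # synthetic map
    return np.sum(D * dsigma_dx) / N                 # grid-sum quadrature of the integral

reference = inner_product(4096)                      # finely sampled 'truth'
for N in (32, 64, 128, 256):
    est = inner_product(N)
    print(N, est, abs(est - reference))              # error shrinks as sampling is refined
```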
Certain correlation functions can be useful for detecting the presence of multiple copies of the same molecule (known or unknown) in the asymmetric unit of a crystal of unknown structure.
Suppose that a crystal contains one or several copies of a molecule μ in its asymmetric unit. If the electron density of that molecule in some reference position and orientation is given, then the electron density of the crystal is built from copies of it, each placed by a rigid motion describing the position and orientation of the jth copy with respect to the reference copy. It is assumed that each such copy is in a general position, so that there is no isotropy subgroup. The methods of Section 1.3.4.2.2.9, applied to this expression, lead to an expression for the auto-correlation of the crystal electron density as a sum of terms correlating every copy of the molecule with every other copy and with itself.
If μ is unknown, consider the subfamily σ of terms in this auto-correlation in which each copy is correlated with itself: these terms are rotated copies of the `self-Patterson' of the molecule. The scalar product between the Patterson and a rotated version of itself, in which R is a variable rotation, will have a peak whenever R brings two of these copies of the self-Patterson into coincidence, i.e. whenever R coincides with a rotation relating two copies of the molecule. If the interference from terms in the Patterson other than those present in σ is not too serious, the `self-rotation function' (Rossmann & Blow, 1962; Crowther, 1972) will show the same peaks, from which the rotations relating the copies may be determined, either individually or jointly if for instance they form a group.
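A schematic numerical version of this idea, in two dimensions and with entirely synthetic data, is sketched below: the overlap of a Patterson-like auto-correlation with a rotated copy of itself is computed as a function of the rotation angle, and local maxima appear at angles related to the rotation between the two copies of the motif. This is only an illustration of the principle, not an implementation of the Rossmann–Blow or Crowther formulations.

```python
import numpy as np
from scipy.ndimage import rotate

# 2D toy self-rotation function: a 'molecule' plus a copy rotated by 60 degrees.
# Purely illustrative data.
n = 128
y, x = np.mgrid[0:n, 0:n]

def blob(cx, cy, s=3.0):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2))

# an asymmetric 'molecule' made of a few Gaussian atoms, centred in the box
motif = blob(64, 64) + blob(74, 64) + blob(64, 78) + 0.5 * blob(70, 58)
copy = rotate(motif, 60, reshape=False, order=1)     # second copy, rotated by 60 degrees
density = motif + copy

# Patterson-like function = auto-correlation, computed via FFT
F = np.fft.fft2(density)
patterson = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F) ** 2)))   # origin at the centre

angles = np.arange(0, 181, 5)
for a in angles:
    rot = rotate(patterson, a, reshape=False, order=1)
    score = np.sum(patterson * rot)
    print(a, score)     # expect local maxima near 0, 60, 120 and 180 degrees
                        # (the Patterson is centrosymmetric)
```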
If μ is known, then its self-Patterson may be calculated, and the rotations relating the copies to the reference orientation may be found by examining the `cross-rotation function', which will have peaks at those rotations. Once the rotations are known, the various reoriented copies of μ may be Fourier-analysed into structure factors. The cross terms between distinct copies in the Patterson then contain `motifs' whose Fourier coefficients are products involving the structure factors of two distinct copies, translated by the vector between their positions. Therefore the `translation functions' (Crowther & Blow, 1967) will have peaks at the corresponding intermolecular vectors, signalling the detection of these motifs.
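The translation search can likewise be carried out with FFTs; the sketch below (again entirely synthetic 2D data) performs a phased correlation between an `observed' map and a correctly oriented model to recover its position. It illustrates the FFT mechanics of a translation search rather than the specific Crowther–Blow functions.

```python
import numpy as np

# Schematic FFT-based translation search in 2D (illustrative only: a phased
# correlation between an 'observed' map and an already correctly oriented
# model, not the Crowther-Blow translation functions themselves).
n = 128
y, x = np.mgrid[0:n, 0:n]

def blob(cx, cy, s=3.0):
    # periodic Gaussian blob standing in for an atom
    dx = (x - cx + n / 2) % n - n / 2
    dy = (y - cy + n / 2) % n - n / 2
    return np.exp(-(dx ** 2 + dy ** 2) / (2 * s ** 2))

def molecule(col0, row0):
    return blob(col0, row0) + blob(col0 + 10, row0) + blob(col0, row0 + 14)

model = molecule(10, 12)                   # model in a reference position
row_shift, col_shift = 37, 58              # unknown placement to be recovered
observed = molecule(10 + col_shift, 12 + row_shift)

F_obs = np.fft.fft2(observed)
F_mod = np.fft.fft2(model)
T = np.real(np.fft.ifft2(F_obs * np.conj(F_mod)))   # correlation for every trial shift
print(np.unravel_index(np.argmax(T), T.shape))      # expected: (row_shift, col_shift)
```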
The theory of diffraction by helical structures (Cochran et al., 1952; Klug et al., 1958) has played an important part in the study of polypeptides, of nucleic acids and of tobacco mosaic virus.
Let f be a reasonably regular function in two-dimensional real space. Going over to polar coordinates and writing, by slight misuse of notation, f(r, φ) for the value of f at the point with polar coordinates (r, φ), we may use the periodicity of f with respect to φ to expand it as a Fourier series (Byerly, 1893), with angular coefficients which are functions of r. Similarly, in reciprocal space, going over to polar coordinates and expanding F as a Fourier series in the reciprocal angular variable gives angular coefficients which are functions of the reciprocal radius; a phase factor is introduced into the nth coefficient for convenience in the forthcoming step.
The Fourier transform relation between f and F may then be written in terms of these angular coefficients. Observing that the scalar product of a real-space vector and a reciprocal-space vector depends only on the two radii and on the difference of the two polar angles, and using the classical integral representation of the Bessel functions (Watson, 1944), we obtain a relation in which each angular coefficient of F depends only on the angular coefficient of f of the same order n; by the uniqueness of the Fourier expansion of F, the nth coefficient of F is thus expressed as an integral transform of the nth coefficient of f. The inverse Fourier relationship leads to an integral of the same form expressing the nth coefficient of f in terms of that of F. The integral transform involved in these two reciprocal relations is called the Hankel transform (see e.g. Titchmarsh, 1922; Sneddon, 1972) of order n.
Let ρ be the electron-density distribution in a fibre, which is assumed to have translational periodicity with period 1 along z, and to have compact support with respect to the (x, y) coordinates. Thus ρ may be written as the periodization along z of a motif of compact support. By the tensor product property, the inverse Fourier transform of ρ consists of `layers' labelled by the integer l, each layer being the transform of the corresponding axial Fourier coefficient of the motif. Changing to polar coordinates in the (x, y) plane and in the corresponding reciprocal plane decomposes the calculation of F from ρ, and that of ρ from F, into steps which involve a 2D Fourier series with respect to the two periodic coordinates φ and z, and Hankel transforms along the radial coordinates. The two periodicities in φ and z are independent, so that all combinations of indices (n, l) occur in the Fourier summations.
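The first stage of this calculation, the 2D Fourier series over the two periodic coordinates φ and z, reduces to a 2D FFT at each sampled radius; a minimal sketch on a hypothetical cylindrically sampled density (the radial Hankel stage is omitted) is:

```python
import numpy as np

# Angular/axial Fourier analysis of a fibre motif sampled on a cylindrical grid.
# rho[ir, iphi, iz] is the density at radius r[ir], angle 2*pi*iphi/Nphi and
# height iz/Nz (period 1 along z). Hypothetical smooth test density.
Nr, Nphi, Nz = 16, 64, 32
r = np.linspace(0.1, 1.0, Nr)
phi = 2 * np.pi * np.arange(Nphi) / Nphi
z = np.arange(Nz) / Nz
R, PHI, Z = np.meshgrid(r, phi, z, indexing='ij')
rho = np.exp(-3 * R) * (1 + 0.5 * np.cos(3 * PHI + 2 * np.pi * 5 * Z))

# Coefficients rho_{n,l}(r): Fourier series over the two periodic coordinates.
coeffs = np.fft.fft2(rho, axes=(1, 2)) / (Nphi * Nz)
n_idx = np.fft.fftfreq(Nphi) * Nphi        # angular order n
l_idx = np.fft.fftfreq(Nz) * Nz            # layer index l

# The test density has a single travelling component cos(3*phi + 2*pi*5*z), so
# the dominant non-zero coefficients occur at (n, l) = (3, 5) and (-3, -5).
# A subsequent Hankel transform of order n along r would complete the calculation of F.
mag = np.abs(coeffs)
i = np.unravel_index(np.argmax(mag[:, 1:, :]), mag[:, 1:, :].shape)   # skip the n = 0 column
print(int(n_idx[i[1] + 1]), int(l_idx[i[2]]))
```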
Helical symmetry involves a `clutching' between the two (hitherto independent) periodicities in φ (period 2π) and z (period 1) which causes a subdivision of the period lattice and hence a decimation (governed by `selection rules') of the Fourier coefficients.
Let i and j be the basis vectors along the angular coordinate (measured in turns) and along z. The integer lattice with basis (i, j) is a period lattice for the dependence of the electron density ρ of an axially periodic fibre, considered in Section 1.3.4.5.1.3, on these two periodic coordinates. Suppose the fibre now has helical symmetry, with u copies of the same molecule in t turns, where g.c.d.(u, t) = 1. Using the Euclidean algorithm, suitable positive integers λ and μ may be found, and the period lattice for the dependence of ρ on the two periodic coordinates may then be defined in terms of new basis vectors I and J, expressible in terms of the original basis. If α and β are coordinates along I and J, respectively, the two coordinate systems are related by the corresponding change of basis. By Fourier transformation, the transformations between the indices conjugate to these coordinates are given by the contragredients of those between the coordinates. It follows that the indices of the surviving Fourier coefficients obey selection rules describing the decimation of the transform. These rules imply that only certain orders n contribute to a given layer l.
The 2D Fourier analysis may now be performed by analysing a single subunit referred to the coordinates α and β, and then reindexing to obtain only the allowed coefficients. This is u times faster than analysing u subunits with respect to the original periodic coordinates.
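The effect of the selection rules is easily enumerated; assuming the common convention l = tn + um (with m any integer) for u subunits in t turns, and noting that other sign conventions exist, a short sketch listing the allowed Bessel orders n on the first few layer lines of a helix with 49 subunits in 3 turns is:

```python
# Enumerate allowed Bessel orders n on each layer line l for a helix with
# u subunits in t exact turns, using the selection rule l = t*n + u*m
# (m any integer). Sign conventions vary between authors; this is one common
# choice, assumed here purely for illustration.
def allowed_orders(u, t, l, n_max=60):
    return [n for n in range(-n_max, n_max + 1) if (l - t * n) % u == 0]

# Example: a helix with 49 subunits in 3 turns.
u, t = 49, 3
for l in range(0, 7):
    print(l, allowed_orders(u, t, l))
```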
The Fourier transformation plays a central role in the branch of probability theory concerned with the limiting behaviour of sums of large numbers of independent and identically distributed random variables or random vectors. This privileged role is a consequence of the convolution theorem and of the `moment-generating' properties which follow from the exchange between differentiation and multiplication by monomials. When the limit theorems are applied to the calculation of joint probability distributions of structure factors, which are themselves closely related to the Fourier transformation, a remarkable phenomenon occurs, which leads to the saddlepoint approximation and to the maximum-entropy method.
The material in this section is not intended as an introduction to probability theory [for which the reader is referred to Cramér (1946), Petrov (1975) or Bhattacharya & Rao (1976)], but only as an illustration of the role played by the Fourier transformation in certain specific areas which are used in formulating and implementing direct methods of phase determination.
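The two Fourier mechanisms invoked above are easy to verify numerically on a toy distribution; the sketch below (a hypothetical discrete distribution, unrelated to structure-factor statistics) checks that the characteristic function of a sum of independent copies is the product of the individual characteristic functions, and that the mean is recovered from the derivative of the characteristic function at the origin.

```python
import numpy as np

# Numerical check of two Fourier facts used throughout this section:
# (i)  the characteristic function of a sum of independent variables is the
#      product of the individual characteristic functions (convolution theorem);
# (ii) moments are obtained from derivatives of the characteristic function.
# Hypothetical discrete distribution p on the integers 0..7.
p = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.07, 0.03])
p = p / p.sum()

N = 1024                                   # zero-padded length (avoids wrap-around)
phi = np.fft.fft(p, N)                     # discrete characteristic function of p

# (i) distribution of the sum of 3 i.i.d. copies: direct convolution vs phi**3
p3_direct = np.convolve(np.convolve(p, p), p)
p3_fft = np.real(np.fft.ifft(phi ** 3))[: p3_direct.size]
print(np.max(np.abs(p3_direct - p3_fft)))  # should be at machine precision

# (ii) mean from the derivative of the characteristic function at the origin,
#      estimated by a finite difference in the transform variable
mean_direct = np.sum(np.arange(p.size) * p)
t = 1e-6
phi_t = np.sum(p * np.exp(1j * t * np.arange(p.size)))
mean_from_phi = np.imag(phi_t - 1.0) / t   # phi'(0) = i * mean
print(mean_direct, mean_from_phi)
```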
The methods of probability theory just surveyed were applied to various problems formally similar to the crystallographic phase problem [e.g. the `problem of the random walk' of Pearson (1905)] by Rayleigh (1880, 1899, 1905, 1918, 1919) and Kluyver (1906). They became the basis of the statistical theory of communication with the classic papers of Rice (1944, 1945).
The Gram–Charlier and Edgeworth series were introduced into crystallography by Bertaut (1955a,b,c, 1956a) and by Klug (1958), respectively, who showed them to constitute the mathematical basis of numerous formulae derived by Hauptman & Karle (1953). The saddlepoint approximation was introduced by Bricogne (1984) and was shown to be related to variational methods involving the maximization of certain entropy criteria. This connection exhibits most of the properties of the Fourier transform at play simultaneously, and will now be described as a final illustration.
Acknowledgements
Many aspects of the theory of discrete Fourier transform algorithms and of its extension to incorporate crystallographic symmetry have been the focus of a long-standing collaborative effort between Professor Louis Auslander, Professor Richard Tolimieri, their co-workers and the writer. I am most grateful to them for many years of mathematical stimulation and enjoyment, for introducing me to the `big picture' of the discrete Fourier transform which they have elaborated over the past decade, and for letting me describe here some of their unpublished work. In particular, the crystallographic extensions of the Rader/Winograd algorithms presented in Section 1.3.4.3.4.3 were obtained by Richard Tolimieri, in a collaboration partially supported by NIH grant GM 32362 (to the writer).
I am indebted to the Editor for many useful and constructive suggestions of possible improvements to the text, only a few of which I have been able to implement. I hope to incorporate many more of them in the future. I also wish to thank Dr D. Sayre for many useful comments on an early draft of the manuscript.
This contribution was written during the tenure of a Visiting Fellowship at Trinity College, Cambridge, with partial financial support from Trinity College and the MRC Laboratory of Molecular Biology. I am most grateful to both institutions for providing ideal working conditions.
References