International Tables for Crystallography (2006). Vol. F, Chapter 19.6, pp. 451-463. https://doi.org/10.1107/97809553602060000703

Chapter 19.6. Electron cryomicroscopy
Department of Biological Sciences, Purdue University, West Lafayette, Indiana 47907-1392, USA, and Medical Research Council, Laboratory of Molecular Biology, Hills Road, Cambridge CB2 2QH, England

This chapter gives a complete but concise introduction to electron cryomicroscopy. The theoretical and practical background is discussed together with a review of applications to the three-dimensional structural analysis of biological macromolecules or macromolecular assemblies. It includes basic descriptions of how to analyse the structures of molecules arranged in the form of two-dimensional crystals, helical arrays or as single particles with or without symmetry. The chapter concludes by anticipating trends towards increased automation, development of better electronic cameras and increasing use of electron tomography for analysis of cell structure.

Keywords: cryo EM; electron cryomicroscopy; electron microscopy; electron scattering; helical particles; icosahedral particles; image processing in cryo EM; phasing; radiation damage; three-dimensional image reconstruction; two-dimensional crystals.
Diffraction and imaging techniques are one manifestation of the use of scattering of beams or wavefronts by objects to analyse the structure of that object (Fig. 19.6.2.1). Such methods can be used in general for the examination of structures of any size, ranging from elementary subnuclear particles all the way up to the structure of the earth's core. For macromolecular structure determination, there are two main differences between the use of electrons and X-rays to probe structure. The most important is that the scattering cross section is about 105 times greater for electrons than it is for X-rays, so significant scattering using electrons is obtained for crystals or other specimens that are 1 to 10 nm thick, whereas scattering or absorption of a similar fraction of an illuminating X-ray beam requires crystals that are 100 to 500 µm thick. The second main difference is that electrons are much more easily focused than X-rays since they are charged particles that can be deflected by electric or magnetic fields. As a result, electron lenses are much superior to X-ray lenses and can be used to produce a magnified image of an object as easily as a diffraction pattern. This then allows the electron microscope to be switched back and forth instantly between imaging and diffraction modes so that the image of a single molecule at any magnification can be obtained as conveniently as the electron diffraction pattern of a thin crystal. By contrast, X-ray microscopy has been much less valuable than X-ray diffraction, but may be useful for imaging at the cellular level.
In the early years of electron microscopy of macromolecules, electron micrographs of molecules embedded in a thin film of heavy-atom stains (Brenner & Horne, 1959; Huxley & Zubay, 1960
) were used to produce pictures which were interpreted directly. Beginning with the work of Klug (Klug & Berger, 1964), a more rigorous approach to image analysis led first to the interpretation of the two-dimensional (2D) images as the projected density summed along the direction of view and then to the ability to reconstruct the three-dimensional (3D) object from which the images arose (DeRosier & Klug, 1968; Hoppe et al., 1968), with subsequent more sophisticated treatment of image contrast transfer (Erickson & Klug, 1971).
Later, macromolecules were examined by electron diffraction and imaging without the use of heavy-atom stains by embedding the specimens in either a thin film of glucose (Unwin & Henderson, 1975) or in a thin film of rapidly (Dubochet, Lepault et al., 1982
; Dubochet et al., 1988) or slowly (Taylor & Glaeser, 1974) frozen water, which required the specimen to be cooled while it was examined in the electron microscope. This use of unstained specimens thus led to the structure determination of the molecules themselves, rather than the structure of a `negative stain' excluding volume, and has created the burgeoning field of 3D electron microscopy of macromolecules. Many of the image-analysis techniques now used for studying unstained specimens originated from those used in the analysis of negatively stained samples.
At the time of writing (2000), hundreds of medium-resolution structures of macromolecular assemblies (e.g. ribosomes), spherical and helical viruses, and larger protein molecules have been determined by electron cryomicroscopy in ice. Three atomic-resolution structures have been obtained by electron cryomicroscopy of thin 2D crystals embedded in glucose, trehalose or tannic acid (Henderson et al., 1990; Kühlbrandt et al., 1994; Nogales et al., 1998), where specimen cooling reduced the effect of radiation damage. The medium-resolution density distributions can often be interpreted in terms of the chemistry of the structure if a high-resolution model of one or more of the component pieces has already been obtained by X-ray, electron microscopy, or NMR methods. As a result, electron microscopy is being transformed from a niche methodology into a powerful technique for which, in some cases, no alternative approach is possible. This article outlines the key aspects of electron cryomicroscopy (cryo EM) and 3D image reconstruction. Further information can be obtained from several reviews (e.g. Amos et al., 1982
; Glaeser, 1985; Chiu, 1986; Dubochet et al., 1988; Stewart, 1990; Koster et al., 1997; Walz & Grigorieff, 1998; Baker et al., 1999; Yeager et al., 1999) and a book (Frank, 1996). Recommended textbooks that describe general aspects of electron microscopy are those by Cowley (1975), Spence (1988) and Reimer (1989).
A schematic overview of scattering and imaging in the electron microscope is depicted in Fig. 19.6.2.1. For biological electron microscopy and diffraction, the incident beam is normally parallel and monochromatic. The incident electron beam then passes through the specimen and individual electrons are either unscattered or scattered by the atoms of the specimen. This scattering occurs either elastically, with no loss of energy and therefore no energy deposition in the specimen, or inelastically, with consequent energy loss by the scattered electron and accompanying energy deposition in the specimen, resulting in radiation damage. The electrons emerging from the specimen are then collected by the imaging optics, shown here for simplicity as a single lens, but in practice consisting of a complex system of five or six lenses with intermediate images being produced at successively higher magnification at different positions down the column. Finally, in the viewing area, either the electron-diffraction pattern or the image can be seen directly by eye on the phosphor screen, or detected by a TV or CCD camera, or recorded on photographic film or an image plate.
The coherent, elastically scattered electrons contain all the high-resolution information describing the structure of the specimen. The amplitudes and phases of the scattered electron beams are directly related to the amplitudes and phases of the Fourier components of the atomic distribution in the specimen. When the scattered beams are recombined with the unscattered beam in the image, they create an interference pattern (the image) which, for thin specimens, is related approximately linearly to the density variations in the specimen. The information about the structure of the specimen can then be retrieved by digitization and computer-based image processing, as described below (Sections 19.6.4.5 and 19.6.4.6
). The elastic scattering cross sections for electrons are not as simply related to the atomic composition as happens with X-rays. With X-ray diffraction, the scattering factors are simply proportional to the number of electrons in each atom, normally equal to the atomic number. Since elastically scattered electrons are in effect diffracted by the electrical potential inside atoms, the scattering factor for electrons depends not only on the nuclear charge but also on the size of the surrounding electron cloud which screens the nuclear charge. As a result, electron scattering factors in the resolution range of interest in macromolecular structure determination (up to
Å−1) are very sensitive to the effective radius of the outer valency electrons and therefore depend sensitively on the chemistry of bonding. Although this is a fascinating field in itself with interesting work already carried out by the gas-phase electron-diffraction community (e.g. Hargittai & Hargittai, 1988
), it is still an area where much work remains to be done. At present, it is probably adequate to think of the density obtained in macromolecular structure analysis by electron microscopy as roughly equivalent to the electron density obtained by X-ray diffraction but with the contribution from hydrogen atoms being somewhat greater relative to carbon, nitrogen and oxygen.
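The screening argument above can be made quantitative with the standard Mott-Bethe relation (quoted here for orientation; it is not given explicitly in this chapter), which expresses the electron scattering factor of an atom in terms of its X-ray scattering factor:

\[
f_{\mathrm{e}}(s) \;=\; \frac{m_{0}e^{2}}{8\pi\varepsilon_{0}h^{2}}\,\frac{Z - f_{\mathrm{x}}(s)}{s^{2}},
\qquad s = \frac{\sin\theta}{\lambda},
\]

where Z is the atomic number. Because the numerator is the difference between the nuclear charge and the scattering of the surrounding electron cloud, small changes in the outer valence electrons, which hardly alter f_x, can change f_e appreciably at low spatial frequency; this is the bonding sensitivity referred to above.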
Those electrons which are inelastically scattered lose energy to the specimen by a number of mechanisms. The energy-loss spectrum for a typical biological specimen is dominated by the large cross section for plasmon scattering in the energy range 20–30 eV with a continuum in the distribution which decreases up to higher energies. At discrete high energies, specific inner electrons in the K shell of carbon, nitrogen or oxygen can be ejected with corresponding peaks in the energy-loss spectrum appearing at 200–400 eV. Any of these inelastic interactions produces an uncertainty in the position of the scattered electron (by Heisenberg's uncertainty principle) and as a result, the resolution of any information present in the energy-loss electron signal extends only to low resolutions of around 15 Å (Isaacson et al., 1974). Consequently, the inelastically scattered electrons are generally considered to contribute little except noise to the images.
The most important consequence of inelastic scattering is the deposition of energy into the specimen. This is initially transferred to secondary electrons which have an average energy (20 eV) that is five or ten times greater than the valency bond energies. These secondary electrons interact with other components of the specimen and produce numerous reactive chemical species, including free radicals. In ice-embedded samples, these would be predominantly highly reactive hydroxyl free radicals that arise from the frozen water molecules. In turn, these react with the embedded macromolecules and create a great variety of radiation products such as modified side chains, cleaved polypeptide backbones and a host of molecular fragments. From radiation-chemistry studies, it is known that thiol or disulfide groups react more quickly than aliphatic groups and that aromatic groups, including nucleic acid bases, are the most resistant. Nevertheless, the end effect of the inelastic scattering is the degradation of the specimen to produce a cascade of heterogeneous products, some of which resemble the starting structure more closely than others. Some of the secondary electrons also escape from the surface of the specimen, causing it to charge up during the exposure. As a rough rule, for 100 kV electrons the dose that can be used to produce an image in which the starting structure at high resolution is still recognizable is about 1 e Å−2 for organic or biological materials at room temperature, 5 e Å−2 for a specimen near liquid-nitrogen temperature (−170 °C) and 10 e Å−2 for a specimen near liquid-helium temperature (4–8 K). However, individual experimenters will often exceed these doses if they wish to enhance the low-resolution information in the images, which is less sensitive to radiation damage. The effects of radiation damage due to electron irradiation are essentially identical to those from X-ray or neutron irradiation for biological macromolecules except for the amount of energy deposition per useful coherent elastically scattered event (Henderson, 1995). For electrons scattered by biological structures at all electron energies of interest, the number of inelastic events exceeds the number of elastic events by a factor of three to four, so that 60 to 80 eV of energy is deposited for each elastically scattered electron. This limits the amount of information in an image of a single biological macromolecule. Consequently, the 3D atomic structure cannot be determined from a single molecule but requires the averaging of the information from at least 10 000 molecules in theory, and even more in practice (Henderson, 1995
). Crystals used for X-ray or neutron diffraction contain many orders of magnitude more molecules.
It is possible to collect both the elastically and the inelastically scattered electrons simultaneously with an energy analyser and, if a fine electron beam is scanned over the specimen, then a scanning transmission electron micrograph displaying different properties of the specimen can be obtained. Alternatively, conventional transmission electron microscopes to which an energy filter has been added can be used to select out a certain energy band of the electrons from the image. Both these types of microscope can contribute in other ways to the knowledge of structure, but in this article, we concentrate on high-voltage phase-contrast electron microscopy of unstained macromolecules most often embedded in ice, because this is the method of widest impact and whose results encompass all resolutions both complementary to and competitive with those from X-ray diffraction.
The important properties of the image in terms of defocus, astigmatism and the presence and effect of amplitude or phase contrast are discussed below (Sections 19.6.4.4 and 19.6.4.6
). The best-quality incident electron beam is produced by a field emission gun (FEG). This is because the electrons from a FEG are emitted from a very small volume at the tip, which is the apparent source size. Once these electrons have been collected by the condenser lens and used to produce the illuminating beam, that beam of electrons is then very parallel (divergence of ~10−2 mrad) and therefore spatially coherent. Similarly, because the emitting tip of a FEG is not heated as much as a conventional thermionic tungsten source, the thermal energy spread of the electrons is relatively small (0.5 to 1.0 eV) and, as a result, the illuminating beam is monochromatic and therefore temporally coherent. Electron beams can also be produced by a normal heated tungsten source, which gives a less parallel beam with a larger energy spread, but is nevertheless adequate for electron cryomicroscopy if the highest resolution images are not required.
The determination of 3D structure by cryo EM methods follows a common scheme for all macromolecules (Fig. 19.6.4.1). A more detailed discussion of the individual steps as applied to different classes of macromolecules appears in subsequent sections. Briefly, each specimen must be prepared in a relatively homogeneous aqueous form (1D or 2D crystals or a suspension of single particles in a limited number of states) at relatively high concentration, rapidly frozen (vitrified) as a thin film, transferred into the electron microscope and photographed by means of low-dose selection and focusing procedures. The resulting images, if recorded on film, must then be digitized. Digitized images are then processed using a series of computer programs that allow different views of the specimen to be combined into a 3D reconstruction that can be interpreted in terms of other available structural, biochemical and molecular data.
Radiation damage by the illuminating electron beam generally allows only one good picture (micrograph) to be obtained from each molecule or macromolecular assembly. In this micrograph, the signal-to-noise ratio of the 2D projection image is normally too small to determine the projected structure accurately. This implies firstly that it is necessary to average many images of different molecules taken from essentially the same viewpoint to increase the signal-to-noise ratio, and secondly that many of these averaged projections, taken from different directions, must be combined to build up the information necessary to determine the 3D structure of the molecule. Thus, the two key concepts are: (1) averaging to a greater or lesser extent depending on resolution, particle size and symmetry to increase the signal-to-noise ratio; and (2) the combination of different projections to build a 3D map of the structure.
In addition, there are various technical corrections that must be made to the image data to allow an unbiased model of the structure to be obtained. These include correction for the phase contrast-transfer function (CTF – see Fig. 19.6.4.4 in Section 19.6.4.4
for a description of the CTF and Section 19.6.4.6 for its correction) and for the effects of beam tilt. For crystals, it is also possible to combine electron-diffraction amplitudes with image phases to produce a more accurate structure (Unwin & Henderson, 1975), and in general to correct for loss of high-resolution contrast for any reason by `sharpening' the data by application of a negative temperature factor (Havelka et al., 1995).
The idea of increasing the signal-to-noise ratio in electron images of unstained biological macromolecules by averaging was discussed in 1971 (Glaeser, 1971) and demonstrated in 1975 (Unwin & Henderson, 1975
), though earlier work on stained specimens had shown the value of averaging to increase the signal-to-noise ratio. The improvement obtained, as in all repeated measurements, is a factor of √N in signal-to-noise ratio, where N is the number of times the measurement is made. The effect of averaging to produce an improvement in signal-to-noise ratio is seen most clearly in the processing of images from 2D crystals. Fig. 19.6.4.2
shows the results of applying a sequence of corrections, beginning with averaging, to 2D crystals of bacteriorhodopsin in 2D space group p3. The panels show: (a, b) 2D averaging, (c) correction for the microscope contrast-transfer function (CTF), and (d) threefold crystallographic symmetry averaging of the phases and combination with electron-diffraction amplitudes. At each stage in the procedure the projected picture of the molecules gets clearer. The final stage results in a virtually noise-free projected structure for the molecule at atomic (3 Å) resolution.
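The √N behaviour quoted above is easy to verify numerically. The short sketch below (illustrative only; the test object, noise level and array sizes are arbitrary choices, not values from the chapter) averages N independent noisy copies of the same 2D `projection' and shows that the measured signal-to-noise ratio grows in proportion to √N.

import numpy as np

rng = np.random.default_rng(0)

# Toy "projection": a smooth blob standing in for a molecular image (arbitrary test object).
y, x = np.mgrid[-32:32, -32:32]
signal = np.exp(-(x**2 + y**2) / 200.0)
sigma_noise = 5.0                       # noise much larger than the signal, as in a low-dose image

def snr_after_averaging(n_images):
    # Average n independent noisy copies and estimate the remaining noise empirically.
    stack = signal + sigma_noise * rng.standard_normal((n_images,) + signal.shape)
    average = stack.mean(axis=0)
    residual_noise = (average - signal).std()
    return signal.std() / residual_noise

for n in (1, 4, 16, 64, 256):
    print(f"N = {n:3d}   SNR = {snr_after_averaging(n):6.3f}   sqrt(N) = {np.sqrt(n):5.1f}")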
The earliest successful application of the idea of combining projections to reconstruct the 3D structure of a biological assembly was made by DeRosier & Klug (1968). The idea is that each 2D projection corresponds after Fourier transformation to a central section of the 3D transform of the assembly. If enough independent projections are obtained, then the 3D transform will have been fully sampled and the structure can then be obtained by back-transformation of the averaged, interpolated and smoothed 3D transform. This procedure is shown schematically for the ubiquitous duck, which represents the molecule whose structure is being determined (Fig. 19.6.4.3).
In practice, the implementation of these concepts has been carried out in a variety of ways, since the experimental strategy and type of computer analysis used depends on the type of specimen, especially the molecular weight of the individual molecule, its symmetry and whether or not it assembles into an aggregate with one-dimensional (1D), two-dimensional (2D), or three-dimensional (3D) periodic order.
The symmetry of a macromolecule or supramolecular complex is the primary determinant of how specimen preparation, microscopy, and 3D image reconstruction are performed (Sections 19.6.4.4
–19.6.4.6). The classification of molecules according to their level of periodic order and symmetry (Table 19.6.4.1) provides a logical and convenient way to consider the means by which specimens are studied in 3D by microscopy.
Each type of specimen offers a unique set of challenges in obtaining 3D structural information at the highest possible resolution. The best resolutions achieved by 3D EM methods to date, at about 3–4 Å, have been obtained with several thin 2D crystals (Henderson et al., 1990; Kühlbrandt et al., 1994
; Nogales et al., 1998). These milestones have been achieved, in part, as a consequence of the excellent crystalline order exhibited by these specimens, but they are also attributable to dedicated efforts aimed at developing and refining a series of quantitative imaging and image-processing protocols, many of which are rooted in the principles and practice of Fourier-based methods used in X-ray crystallography.
With the exception of true 3D crystals, which must be sectioned to render them amenable (i.e. thin enough) to study by transmission electron microscopy, the resolutions obtained with biological specimens are generally dictated by the preservation of periodic order, and the symmetry and complexity of the object. Hence, studies of the helical acetylcholine receptor tubes (Miyazawa et al., 1999), the icosahedral hepatitis B virus capsid (Böttcher, Wynne & Crowther, 1997
), the 70S ribosome (Gabashvili et al., 2000) and the centriole (Kenney et al., 1997) have yielded 3D density maps at resolutions of 4.6, 7.4, 11.5 and 280 Å, respectively.
If high resolution were the sole objective of EM, it would be necessary, given the capabilities of existing technology, to try to form well ordered 2D crystals or helical assemblies of each macromolecule of interest. Indeed, a number of different crystallization techniques have been devised (e.g. Horne & Pasquali-Ronchetti, 1974; Yoshimura et al., 1990
; Kornberg & Darst, 1991; Jap et al., 1992; Kubalek et al., 1994; Rigaud et al., 1997; Hasler et al., 1998; Reviakine et al., 1998; Wilson-Kubalek et al., 1998) and some of these have yielded new structural information about otherwise recalcitrant molecules like RNA polymerase (Polyakov et al., 1998). However, despite the obvious technological advantages of having a molecule present in a highly ordered form, most macromolecules function not as highly ordered crystals or helices but instead as single particles (e.g. many enzymes) or, more likely, in concert with other macromolecules as occurs in supramolecular assemblies. Also, crystallization tends to constrain the number of conformational states a molecule can adopt and the crystal conformation might not be functionally relevant. Hence, though resolution may be restricted to much below that realized in the bulk of current X-ray crystallographic studies, cryo EM methods provide a powerful means to study molecules that resist crystallization in 1D, 2D or 3D. These methods allow one to explore the dynamic events, different conformational states (as induced, for example, by altering the microenvironment of the specimen) and macromolecular interactions that are the key to understanding how each macromolecule functions.
The goal in preparing specimens for cryomicroscopy is to keep the biological sample as close as possible to its native state in order to preserve the structure to atomic or near atomic resolution in the microscope and during microscopy. The methods by which numerous types of macromolecules and macromolecular complexes have been prepared for cryo EM studies are now well established (e.g. Dubochet et al., 1988). Most such methods involve cooling samples at a rate fast enough to permit vitrification (to a solid glass-like state) rather than crystallization of the bulk water. Noncrystalline biological macromolecules are typically vitrified by applying a small (often <10 µl) aliquot of a purified ~0.2–5 mg ml−1 suspension of sample to an EM grid coated with a carbon or holey carbon support film. The grid, secured with a pair of forceps and suspended over a container of ethane or propane cryogen slush (maintained near its freezing point by a reservoir of liquid nitrogen), is blotted nearly dry with a piece of filter paper. The grid is then plunged into the cryogen, and the sample, if thin enough (∼0.2 µm or less), is vitrified in millisecond or shorter time periods (Mayer & Astl, 1992
; Berriman & Unwin, 1994; White et al., 1998).
The ability to freeze samples with a timescale of milliseconds affords cryo EM one of its unique and, as yet, perhaps most under-utilized advantages: capturing and visualizing dynamic structural events that occur over time periods of a few milliseconds or longer. Several devices that allow samples to be perturbed in a variety of ways as they are plunged into a cryogen have been described (e.g. Subramaniam et al., 1993; Berriman & Unwin, 1994
; Siegel et al., 1994; Trachtenberg, 1998; White et al., 1998). Examples of the use of such devices include spraying acetylcholine onto its receptor to cause the receptor channel to open (Unwin, 1995), lowering the pH of an enveloped virus sample to initiate early events of viral fusion (Fuller et al., 1995), inducing a temperature jump with a flash-tube system to study phase transitions in liposomes (Siegel & Epand, 1997), or mixing myosin S1 fragments with F-actin to examine the geometry of the crossbridge powerstroke in muscle (Walker et al., 1999).
Crystalline (2D) samples can fortunately often be prepared for cryo EM by means of simpler procedures, and vitrification of the bulk water is not always essential to achieve success (Cyrklaff & Kühlbrandt, 1994). Such specimens may be applied to the carbon film on an EM grid by normal adhesion methods, washed with 1–2% solutions of solutes like glucose, trehalose, or tannic acid, wicked with filter paper to remove excess solution, air dried, loaded into a cold holder (see below), inserted into the microscope, and, finally, cooled to liquid-nitrogen temperature.
Specimen preparation for cryomicroscopy is, of course, easier to describe than perform (`the Devil is in the details'). Success or failure depends critically on many factors such as: sample properties (pI, presence of lipids etc.); sample concentration (usually much higher than that needed for negative staining) and temperature; stability, age and wetting properties of the support film and need for glow-discharging (Dubochet, Groom & Müller-Neuteboom, 1982) or use of lipids (Vénien-Bryan & Fuller, 1994
) to render the film hydrophilic or hydrophobic; time of sample adsorption to the film; humidity near the sample; extent of blotting and time elapsed before freeze-plunging; and concentrations and types of solutes present in the aqueous sample or the need to remove them (Trinick & Cooper, 1990; Vénien-Bryan & Fuller, 1994). Lastly, the experience and persistence of the microscopist may be critical in judging which factors are most important. Fortunately, cryo EM has matured enough to demonstrate that a wide variety of fragile macromolecular assemblies can be preserved and imaged in a near-native state.
Alternative procedures exist for each step of sample preparation. Particulate specimens (i.e. single particles) are usually prepared on holey carbon films, which are sometimes glow-discharged to enhance the spreading of the specimen. Continuous carbon films, carbon-coated plastic films and even bare grids (Adrian et al., 1984) have been used as supports for different specimens. Several techniques and freezing devices have been developed for producing uniformly thin, vitrified samples (e.g. Taylor & Glaeser, 1976
; Dubochet, Chang et al., 1982; Bellare et al., 1988; Dubochet et al., 1988; Trinick & Cooper, 1990). All subsequent steps, up to and including the recording of images in the microscope (Section 19.6.4.4), are carried out in a manner that maintains the sample below −170 °C to avoid devitrification, which occurs at ∼−140 °C and leads to recrystallization of the bulk water to form cubic ice (Dubochet, Lepault et al., 1982; Lepault et al., 1983).
These steps include transfer of the grid from the cryogen into liquid nitrogen, where it may be stored indefinitely, and then into a cryo specimen holder that is cooled with liquid nitrogen (e.g. Dubochet et al., 1988). The cold holder is rapidly but carefully inserted into the electron microscope to minimize condensation of water vapour onto the cold holder tip, otherwise such water ruins the high vacuum of the microscope and also contaminates the specimen. Indeed, because the cold specimen itself is an efficient trap for any contaminant, most cryo EM is performed on microscopes equipped with blade-type anticontaminators (e.g. Homo et al., 1984
) that permit individual EM grids to be viewed for periods of up to several hours. Also, cryo holders are subject to greater instabilities than conventional, room-temperature holders owing to the temperature gradient between microscope and specimen and because boiling of the liquid-nitrogen coolant in the Dewar of the cold holder transmits vibrations to the specimen. The maximum instrumental resolving power of most modern microscopes (∼0.7–2 Å) cannot yet be realized with commercially available cold holders, which promise stability in the 2–4 Å range.
Once the vitrified specimen is inserted into the microscope and sufficient time is allowed (∼15 min) for the specimen stage to stabilize to minimize drift and vibration, microscopy is performed to generate a set of images that, with suitable processing procedures, can later be used to produce a reliable 3D reconstruction of the specimen at the highest possible resolution. To achieve this goal, imaging must be performed at an electron dose that minimizes beam-induced radiation damage to the specimen, with the objective lens of the microscope defocused to enhance phase contrast from the weakly scattering, unstained biological specimen, and under conditions that keep the specimen below the devitrification temperature and minimize its contamination.
The microscopist locates specimen areas suitable for photography by searching the EM grid at very low magnification to keep the irradiation level very low (<0.05 e Å−2 s−1) while assessing sample quality: Is the sample vitrified and is its thickness optimal? Are the concentration and distribution of particles, or the size of the 2D crystal, optimal? In microscopes operated at 200 keV or higher, where image contrast is very weak, it is helpful to perform the search procedure with the assistance of a CCD camera or a video-rate TV-intensified camera system. CCD cameras are gaining popularity because imaging conditions (defocus level, astigmatism, specimen drift or vibration etc.) can be accurately monitored and adjusted by computing the image Fourier transform online (Sherman et al., 1996) and also because in some cases the distribution of single particles can be seen at low or moderate magnifications (Olson et al., 1997). For some specimens, like thin 2D crystals, searching is conveniently performed by viewing the low-magnification high-contrast image produced by slightly defocusing the electron-diffraction pattern using the diffraction lens.
After a desired specimen area is identified, the microscope is switched to high-magnification mode for focusing and astigmatism correction. These adjustments are typically performed in a region ∼2–10 µm away from the chosen area at the same or higher magnification than that used for photography. The choice of magnification, defocus level, accelerating voltage, beam coherence, electron dose and other operating conditions is dictated by several factors. The most significant ones are the size of the particle or crystal unit cell being studied, the anticipated resolution of the images and the requirements of the image processing needed to compute a 3D reconstruction to the desired resolution. For most specimens at required resolutions from 3 to 30 Å, images are typically recorded at 25 000–50 000× magnification with an electron dose of between 5 and 20 e Å−2 . Even lower magnification, down to 10 000×, can be used if high resolution is not required, and higher magnification, up to 75 000×, can be used if good specimen areas are easy to locate. These conditions yield micrographs of sufficient optical density (OD 0.2–1.5) and image resolution for subsequent image processing steps (Sections 19.6.4.5 and 19.6.4.6
). Most modern EMs provide some mode of low-dose operation for imaging beam-sensitive, vitrified biological specimens. Dose levels may be measured directly (e.g. with a Faraday cup) or they may be estimated from a calibrated microscope exposure meter (e.g. Baker & Amos, 1978).
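As a concrete illustration of the bookkeeping involved (a sketch; the flux, magnification and exposure values below are placeholders, not values from the chapter): because the instrument magnifies the specimen by M in each direction, the flux per unit area reaching the film or CCD is M² times smaller than the flux through the specimen, so the specimen dose is the flux measured at the recording plane multiplied by M² and by the exposure time.

def specimen_dose_e_per_A2(flux_at_film_e_per_um2_per_s, magnification, exposure_s):
    # Electrons crossing 1 A^2 of specimen are spread over magnification**2 A^2 at the
    # recording plane, so specimen dose = recorded flux x M**2 x exposure time.
    flux_e_per_A2_per_s = flux_at_film_e_per_um2_per_s / 1.0e8   # 1 um^2 = 1e8 A^2
    return flux_e_per_A2_per_s * magnification**2 * exposure_s

# Illustrative numbers only: ~0.8 e/um^2/s at the film plane, 36 000x magnification, 1 s exposure.
print(f"{specimen_dose_e_per_A2(0.8, 36_000, 1.0):.1f} e/A^2")   # about 10 e/A^2 at the specimen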
The intrinsic low contrast of unstained specimens makes it impossible to observe and focus on specimen details directly as is routine with stained or metal-shadowed specimens. Focusing, aimed to enhance phase contrast in the recorded images but minimize beam damage to the desired area, is achieved by judicious defocusing on a region that is adjacent to the region to be photographed and preferably situated on the microscope tilt axis. The appropriate focus level is set by adjusting the appearance of either the Fresnel fringes that occur at the edges of holes in the carbon film or the `phase granularity' from the carbon support film (e.g. Agar et al., 1974).
Unfortunately, electron images do not give a direct rendering of the specimen density distribution. The relationship between image and specimen is described by the contrast-transfer function (CTF) which is characteristic of the particular microscope used, the specimen and the conditions of imaging. The microscope CTF arises from the objective-lens focal setting and from the spherical aberration present in all electromagnetic lenses, and varies with the defocus and accelerating voltage according to equation (19.6.4.1), an expression which includes both phase and amplitude contrast components. First, however, it might be useful to describe briefly the essentials of amplitude contrast and phase contrast, two concepts carried over from optical microscopy. Amplitude contrast refers to the nature of the contrast in an image of an object which absorbs the incident illumination or scatters it in any other way so that a proportion of it is lost. As a result, the image appears darker where greater absorption occurs. Phase contrast is required if an object is transparent (i.e. it is a pure phase object) and does not absorb but only scatters the incident illumination. Biological specimens for cryo EM are almost pure phase objects and the scattering is relatively weak, so the simple theory of image formation by a weak phase object applies (Spence, 1988
; Reimer, 1989). An exactly in-focus image of a phase object has no contrast variation since all the scattered illumination is focused back to equivalent points in the image of the object from which it was scattered. In optical microscopy, the use of a quarter wave plate can retard the phase of the direct unscattered beam, so that an in-focus image of a phase object has very high `Zernike' phase contrast. However, there is no simple quarter wave plate for electrons, so instead phase contrast is created by introducing phase shifts into the diffracted beams by adjustment of the excitation of the objective lens so that the image is slightly defocused. In addition, since all matter is composed of atoms and the electric potential inside each atom is very high near the nucleus, even the electron-scattering behaviour of the light atoms found in biological molecules deviates from that of a weak phase object, but for a deeper discussion of this the reader should refer to Reimer (1989) or Spence (1988). In practice, the proportion of `amplitude' contrast is about 7% at 100 kV and 5% at 200 kV for low-dose images of protein molecules embedded in ice.
The overall dependence of the CTF on resolution, wavelength, defocus and spherical aberration is

CTF(ν) = −[(1 − F_amp²)^(1/2) sin χ(ν) + F_amp cos χ(ν)],  where  χ(ν) = πλν²(Δf − ½C_s λ²ν²),   (19.6.4.1)

ν is the spatial frequency (in Å−1), F_amp is the fraction of amplitude contrast, λ is the electron wavelength (in Å), given by λ = 12.3/(V + 10⁻⁶V²)^(1/2) (= 0.037, 0.025 and 0.020 Å for 100, 200 and 300 keV electrons, respectively), V is the voltage (in volts), Δf is the underfocus (in Å) and C_s is the spherical aberration of the objective lens of the microscope (in Å).
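A minimal numerical sketch of this transfer function is given below (Python/NumPy; the defocus, voltage and C_s values in the example are arbitrary illustrations, and the damping envelope discussed next is omitted):

import numpy as np

def ctf(nu, underfocus_A, voltage_V, cs_A, amp_contrast=0.07):
    # Contrast-transfer function in the form given above; nu in 1/Angstrom,
    # underfocus and Cs in Angstrom, voltage in volts, amp_contrast ~0.07 at 100 kV.
    lam = 12.3 / np.sqrt(voltage_V + 1e-6 * voltage_V**2)      # electron wavelength in Angstrom
    chi = np.pi * lam * nu**2 * (underfocus_A - 0.5 * cs_A * lam**2 * nu**2)
    return -(np.sqrt(1.0 - amp_contrast**2) * np.sin(chi) + amp_contrast * np.cos(chi))

# Example: 1.5 um underfocus, 200 kV, Cs = 2 mm, sampled out to 1/3 per Angstrom.
nu = np.linspace(0.0, 1.0 / 3.0, 7)
print(np.round(ctf(nu, underfocus_A=1.5e4, voltage_V=2.0e5, cs_A=2.0e7), 3))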
In addition, this CTF is attenuated by an envelope or damping function which depends upon the spatial and temporal coherence of the beam, specimen drift and other factors (Erickson & Klug, 1971; Frank, 1973
; Wade & Frank, 1977; Wade, 1992). Fig. 19.6.4.4 shows a few representative CTFs for different amounts of defocus on a normal and a FEG microscope. Thus, for a particular defocus setting of the objective lens, phase contrast in the electron image is positive and maximal only at a few specific spatial frequencies. Contrast is either lower than maximal, completely absent, or it is opposite (inverted or reversed) from that at other frequencies. Hence, as the objective lens is focused, the electron microscopist selectively accentuates image details of a particular size. For this discussion, we ignore inelastic scattering, which makes some limited contribution at low resolution to images as a result of the effect of chromatic aberration on the energy-loss electrons in thick specimens or samples embedded in thick layers of vitrified water (Langmore & Smith, 1992). Inelastically scattered electrons can be largely removed by use of microscopes equipped with electron-energy filtering devices (e.g. Langmore & Smith, 1992; Koster et al., 1997; Zhu et al., 1997), but this also leaves fewer electrons to form the image.
Images are typically recorded 0.8–3.0 µm underfocus to enhance specimen features in the 20–40 Å size range and thereby facilitate phase-origin and specimen-orientation search procedures carried out in the image-processing steps (Section 19.6.4.8), but this level of underfocus also enhances contrast in lower-resolution maps, which may help in interpretation. To obtain results at better than 10–15 Å resolution, it is essential to record, process and combine data from several micrographs that span a range of defocus levels (e.g. Unwin & Henderson, 1975
; Böttcher, Wynne & Crowther, 1997). This strategy assures good information transfer at all spatial frequencies up to the limiting resolution but requires careful compensation for the effects of the microscope CTF during image processing. Also, the recording of image focal pairs or focal series from a given specimen area can be beneficial in determining origin and orientation parameters for processing of images of single particles (e.g. Cheng et al., 1992; Trus et al., 1997; Conway & Steven, 1999).
Many high-resolution cryo EM studies are now performed with microscopes operated at 200 keV or higher and with FEG electron sources (e.g. Zemlin, 1992; Zhou & Chiu, 1993
; Zemlin, 1994; Mancini et al., 1997). The high coherence of a FEG source ensures that phase contrast in the images remains strong out to high spatial frequencies, even for highly defocused images. The use of higher voltages provides potentially higher resolution (greater depth of field – i.e. less curvature of the Ewald sphere – owing to the smaller electron-beam wavelength), better beam penetration (less multiple scattering), reduced problems with specimen charging of the kind that plague microscopy of unstained or uncoated vitrified specimens (Brink et al., 1998) and reduced phase shifts associated with beam tilt.
Images are recorded on photographic film or on a CCD camera with either flood-beam or spot-scan procedures. Film, with its advantages of low cost, large field of view and high resolution (∼10 µm), has remained the primary image-recording medium for most cryo EM applications, despite disadvantages of high background fog and need for chemical development and digitization. CCD cameras provide image data directly in digital form and with very low background noise, but suffer from higher cost, limited field of view, limited spatial resolution caused by poor point spread characteristics and a fixed pixel size (24 µm). They are useful, for example, for precise focusing and adjustment of astigmatism (e.g. Krivanek & Mooney, 1993; Sherman et al., 1996
). With conventional flood-beam imaging, the electron beam (generally >2–5 µm diameter) illuminates an area of specimen that exceeds what is recorded in the micrograph. In spot-scan imaging, which decreases the beam-induced specimen drift often seen in flood illumination, a 2000 Å or smaller diameter beam is scanned across the specimen in a square or hexagonal pattern while the image is recorded (Downing, 1991). This method is beneficial in the examination of 2D crystalline specimens at near-atomic resolutions (Henderson et al., 1990; Nogales et al., 1998) and has also been used to study some icosahedral viruses (e.g. Zhou et al., 1994; Zhao et al., 1995).
For studies in which specimens must be tilted to collect 3D data, such as with 2D crystals, single particles that adopt preferred orientations on the EM grid, or specimens requiring tomography, microscopy is performed in essentially the same way as described above. However, the limited tilt range (60–70°) of most microscope goniometers can lead to non-isotropic resolution in the 3D reconstructions (the `missing cone' problem), and tilting generates a constantly varying defocus across the field of view in a direction normal to the tilt axis. The effects caused by this varying defocus level must be corrected in high-resolution applications (Henderson et al., 1990
) or they can be partially corrected during spot-scan microscopy if the defocus of the objective lens is varied in proportion to the distance between the beam and tilt axis (Zemlin, 1989).
Before any image analysis or classification of the molecular images can be done, a certain amount of preliminary checking and normalization is required to ensure there is a reasonable chance that a homogeneous population of molecular images has been obtained. First, good-quality micrographs are selected in which the electron exposure is correct, there is no image drift or blurring, and there is minimal astigmatism and a reasonable amount of defocus to produce good phase contrast. This is usually done by visual examination and optical diffraction.
Once the best pictures have been chosen, the micrographs must be scanned and digitized on a suitable densitometer. The sizes of the steps between digitization of optical density, and the size of the sample aperture over which the optical density is averaged by the densitometer, must be sufficiently small to sample the detail present in the image at fine enough intervals (DeRosier & Moore, 1970). Normally, a circular (or square) sample aperture of diameter (or length of side) equal to the step between digitization is used. This avoids digitizing overlapping points, without missing any of the information recorded in the image. The size of the sample aperture and digitization step depends on the magnification selected and the resolution required. A value of one-quarter to one-third of the required limit of resolution (measured in µm on the emulsion) is normally ideal, since it avoids having too many numbers (and therefore wasting computer resources) without losing anything during the measurement procedure. For a 40 000× image, on which a resolution of 10 Å at the specimen is required, a step size of 10 µm [= (1/4)(10 Å × 40 000/10 000 Å µm−1)] would be suitable.
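The same arithmetic can be expressed as a small helper function (a sketch; the function name and the default sampling fraction of one quarter are choices made here, not part of the original text):

def scan_step_um(resolution_A, magnification, fraction=0.25):
    # Densitometer step (micrometres on the emulsion) that samples a target specimen
    # resolution at `fraction` of that resolution (one quarter to one third is typical).
    feature_on_film_A = resolution_A * magnification
    return fraction * feature_on_film_A / 1.0e4        # 1 um = 1e4 Angstrom

print(scan_step_um(10.0, 40_000))   # 10.0 um, matching the worked example above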
The best area of an image of a helical or 2D crystal specimen can then be boxed off using a soft-edged mask. For images of single particles, a stack of individual particles can be created by selecting out many small areas surrounding each particle. In the later steps of image processing, because the orientation and position of each particle are refined by comparing the amplitudes and phases of their Fourier components, it is important to remove spurious features around the edge of each particle and to make sure the different particle images are on the same scale. This is normally done by masking off a circular area centred on each particle and floating the density so that the average around the perimeter becomes zero (DeRosier & Moore, 1970). The edge of the mask is apodized, which means the application of a soft cosine bell shape to the original densities so they taper towards the background level. Finally, to compensate for variations in the exposure due to ice thickness or electron dose, most workers normalize the stack of individual particle images so that the mean density and mean density variation over the field of view are set to the same values for all particles (Carrascosa & Steven, 1978).
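A minimal sketch of the masking, floating, apodization and normalization steps just described is given below (NumPy; the box size, mask radius and soft-edge width are illustrative choices, not values prescribed by the chapter):

import numpy as np

def float_and_mask(img, radius, soft_edge=6.0):
    # Circularly mask a boxed particle image, float it so that the average density
    # around the mask perimeter is zero, and apodize the edge with a cosine bell.
    n = img.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r = np.hypot(x, y)
    t = np.clip((radius + soft_edge - r) / soft_edge, 0.0, 1.0)
    mask = 0.5 - 0.5 * np.cos(np.pi * t)               # 1 inside, smooth fall-off to 0 at the edge
    perimeter = (r > radius) & (r < radius + soft_edge)
    floated = img - img[perimeter].mean()
    return floated * mask

def normalize(img):
    # Set the mean to 0 and the standard deviation to 1 to compensate for differences
    # in ice thickness or exposure between particles.
    return (img - img.mean()) / img.std()

# Example with a random 64 x 64 "particle" image.
rng = np.random.default_rng(1)
particle = normalize(float_and_mask(rng.standard_normal((64, 64)), radius=24.0))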
Once some good particles or crystalline areas for 1D or 2D crystals have been selected, digitized and masked and their intensity values have been normalized, true image processing can begin.
Although the general concepts of signal averaging, together with combining different views to reconstruct the 3D structure, are common to the different computer-based procedures which have been implemented, it is important to emphasize one or two preliminary points. First, a homogeneous set of particles must be selected for inclusion in the 3D reconstruction. This selection may be made by eye to eliminate obviously damaged particles or impurities, or by the use of multivariate statistical analysis (van Heel & Frank, 1981) or some other classification scheme. This allows a subset of the particle images to be used to determine the structure of a better-defined entity. All image-processing procedures require the determination of the same parameters that are needed to specify unambiguously how to combine the information from each micrograph or particle. These parameters are: the magnification, defocus, astigmatism and, at high resolution, the beam tilt for each micrograph; the electron wavelength used (i.e. accelerating voltage of the microscope); the spherical aberration coefficient,
C_s, of the objective lens; and the orientation and phase origin for each particle or unit cell of the 1D, 2D or 3D crystal. There are 13 parameters for each particle, eight of which may be common to each micrograph and two or three (C_s, accelerating voltage, magnification) to each microscope. The different general approaches that have been used in practice to determine the 3D structure of different classes of macromolecular assemblies from one or more electron micrographs are listed in Table 19.6.4.2.
†Electron tomography is the subject of an entire issue of Journal of Structural Biology [(1997), 120, pp. 207–395] and a book edited by Frank (1992).
The precise way in which each general approach codes and determines the particle or unit-cell parameters varies greatly and is not described in detail. Much of the computer software used in image-reconstruction studies is relatively specialized compared with that used in the more mature field of macromolecular X-ray crystallography. In part, this may be attributed to the large diversity of specimen types amenable to cryo EM and reconstruction methods. As a consequence, image-reconstruction software is evolving quite rapidly, and references to software packages cited in Table 19.6.4.2 are likely to become quickly outdated. Extensive discussion of algorithms and software packages in use at this time may be found in a number of recent special issues of Journal of Structural Biology (Vol. 116, No. 1; Vol. 120, No. 3; Vol. 121, No. 2; Vol. 125, Nos. 2–3).
In practice, attempts to determine or refine some parameters may be affected by the inability to determine accurately one of the other parameters. The solution of the structure is therefore an iterative procedure in which reliable knowledge of the parameters that describe each image is gradually built up to produce a more and more accurate structure until no more information can be squeezed out of the micrographs. At this point, if any of the origins or orientations are wrongly assigned, there will be a loss of detail and a decrease in signal-to-noise ratio in the map. If a better-determined or higher-resolution structure is required, it would then be necessary to record images on a better microscope or to prepare new specimens and record better pictures.
The reliability and resolution of the final reconstruction can be measured using a variety of indices. For example, the differential phase residual (DPR) (Frank et al., 1981), the Fourier shell correlation (FSC) (van Heel, 1987b
) and the Q factor (van Heel & Hollenberg, 1980) are three such measures. The DPR is the mean phase difference, as a function of resolution, between the structure factors from two independent reconstructions, often calculated by splitting the image data into two halves. The FSC is a similar calculation of the mean correlation coefficient between the complex structure factors of the two halves of the data as a function of resolution. The Q factor is the mean ratio of the vector sum of the individual structure factors from each image divided by the sum of their moduli, again calculated as a function of resolution. Perfectly accurate measurements would have values of the DPR, FSC and Q factor of 0°, 1.0 and 1.0, respectively, whereas random data containing no information would have values of 90°, 0.0 and 0.0. The spectral signal-to-noise ratio (SSNR) criterion has been advocated as the best of all (Unser et al., 1989): it effectively measures, as a function of resolution, the overall signal-to-noise ratio (squared) of the whole of the image data. It is calculated by taking into consideration how well all the contributing image data agree internally.
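As an illustration of how such a criterion is evaluated in practice, the following sketch computes an FSC-type curve between two 3D maps (NumPy; the shell width, map size and the use of the real part of the cross-spectrum are choices made here, and production packages differ in such details):

import numpy as np

def fourier_shell_correlation(map1, map2, n_shells=20):
    # Correlation of the complex Fourier coefficients of two 3D maps,
    # averaged over spherical shells of spatial frequency (out to Nyquist).
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freqs = [np.fft.fftfreq(n) for n in map1.shape]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, 0.5, n_shells + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        num = np.real(np.sum(f1[shell] * np.conj(f2[shell])))
        den = np.sqrt(np.sum(np.abs(f1[shell])**2) * np.sum(np.abs(f2[shell])**2))
        curve.append(num / den if den > 0 else 0.0)
    return edges[1:], np.array(curve)

# Two noisy copies of the same toy map give a high correlation at all frequencies;
# two unrelated maps give values scattered about zero.
rng = np.random.default_rng(2)
base = rng.standard_normal((32, 32, 32))
shells, fsc = fourier_shell_correlation(base + 0.5 * rng.standard_normal(base.shape),
                                        base + 0.5 * rng.standard_normal(base.shape))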
An example of a strategy for determining the 3D structure of a new and unknown molecule, which has no symmetry and does not crystallize, is to use images of negatively stained particles to establish a preliminary low-resolution model, and then to align and refine images of the unstained, ice-embedded particles against that model.
For large single particles with no symmetry, particles with higher symmetry or crystalline arrays, it is usually possible to miss out the negative-staining steps and go straight to alignment of particle images from ice embedding, because the particle or crystal tilt angles can be determined internally from comparison of phases along common lines in reciprocal space or from the lattice or helix parameters from a 2D or 1D crystal.
The following discussion briefly outlines for a few specific classes of macromolecule the general strategy for carrying out image processing and 3D reconstruction.
For 2D crystals, the general 3D reconstruction approach consists of the following steps. First, a series of micrographs of single 2D crystals are recorded at different tilt angles, with random azimuthal orientations. Each crystal is then unbent using cross-correlation techniques to identify the precise position of each unit cell (Henderson et al., 1986), and amplitudes and phases of the Fourier components of the average of that particular view of the structure are obtained for the transform of the unbent crystal. The reference image used in the cross-correlation calculation can either be a part of the whole image masked off after a preliminary round of averaging by reciprocal-space filtering of the regions surrounding the diffraction spots in the transform, or it can be a reference image calculated from a previously determined 3D model. The amplitudes and phases from each image are then corrected for the CTF and beam tilt (Henderson et al., 1986
, 1990; Havelka et al., 1995) and merged with data from many other crystals by scaling and origin refinement, taking into account the proper symmetry of the 2D space group of the crystal. Finally, the whole data set is fitted by least squares to constrained amplitudes and phases along the lattice lines (Agard, 1983) prior to calculating a map of the structure. The initial determination of the 2D space group can be carried out by a statistical test of the phase relationships in one or two images of untilted specimens (Valpuesta et al., 1994). The absolute hand of the structure is automatically correct since the 3D structure is calculated from images whose tilt axes and tilt angle are known. Nevertheless, care must be taken not to make any of a number of trivial mistakes that would invert the hand.
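The cross-correlation step at the heart of unbending can be sketched as follows (illustrative only; this is not the algorithm of the programs cited above, and the peak test is deliberately simple):

import numpy as np

def cross_correlation_map(image, reference):
    # FFT-based cross-correlation of a crystal image with a reference patch;
    # peaks in the returned map mark the positions of individual unit cells.
    padded = np.zeros_like(image)
    ry, rx = reference.shape
    padded[:ry, :rx] = reference - reference.mean()
    cc = np.fft.ifft2(np.fft.fft2(image - image.mean()) * np.conj(np.fft.fft2(padded)))
    return np.real(cc)

def peak_positions(cc, threshold):
    # Very simple peak picking: pixels above threshold that exceed all eight neighbours.
    peaks = []
    for i in range(1, cc.shape[0] - 1):
        for j in range(1, cc.shape[1] - 1):
            patch = cc[i - 1:i + 2, j - 1:j + 2]
            if cc[i, j] >= threshold and cc[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

The deviations of the detected peaks from the positions expected for an ideal lattice define the distortion field that is then used to re-interpolate (unbend) the crystal image.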
The basic steps involved in processing and 3D reconstruction of helical specimens are as follows. A series of micrographs of vitrified particles suspended over holes in a perforated carbon support film is recorded. The micrographs are digitized and Fourier transformed to determine image quality (astigmatism, drift, defocus, presence and quality of layer lines, etc.). Individual particle images are boxed, floated, and apodized within a rectangular mask. The parameters of helical symmetry (number of subunits per turn and pitch) must be determined by indexing the computed diffraction patterns. If necessary, simple spline-fitting procedures may be employed to `straighten' images of curved particles (Egelman, 1986), and the image data may be reinterpolated (Owen et al., 1996) to provide more precise sampling of the layer-line data in the computed transform. Once a preliminary 3D structure is available, a much more sophisticated refinement of all the helical parameters can be used to unbend the helices on to a predetermined average helix so that the contributions of all parts of the image are correctly treated (Beroukhim & Unwin, 1997). The layer-line data are extracted from each particle transform and two phase-origin corrections are made: one to shift the phase origin to the helix axis (at the centre of the particle image) and the other to correct for effects caused by having the helix axis tilted out of the plane normal to the electron beam in the electron microscope. The layer-line data are separated into near- and far-side data, corresponding to contributions from the near and far sides of each particle imaged. Finally, the relative rotations and translations needed to align the different transforms are determined so the data may be merged and a 3D reconstruction computed by Fourier–Bessel inversion procedures (DeRosier & Moore, 1970).
The typical strategy for processing and 3D reconstruction of icosahedral particles consists of the following steps: First, a series of micrographs of a monodisperse distribution of particles, normally suspended over holes in a perforated carbon support film, is recorded. After digitization of the micrographs, individual particle images are boxed and floated with a circular mask. The astigmatism and defocus of each micrograph is measured from the sum of intensities of the Fourier transforms of all particle images (Zhou et al., 1996). Auto-correlation techniques are then used to estimate the particle phase origins, which coincide with the centre of each particle where all rotational symmetry axes intersect (Olson & Baker, 1989
). The view orientation of each particle, defined by three Eulerian angles, is determined either by means of common and cross-common lines techniques or with the aid of model-based procedures (e.g. Crowther, 1971; Fuller et al., 1996; Baker et al., 1999). Once a set of self-consistent particle images is available, an initial low-resolution 3D reconstruction is computed by merging these data with Fourier–Bessel methods (Crowther, 1971). This reconstruction then serves as a reference for refining the orientation, origin and CTF parameters of each of the included particle images, for rejecting `bad' images, and for increasing the size of the data set by including new particle images from additional micrographs taken at different defocus levels. A new reconstruction, computed from the latest set of images, serves as a new reference and the above refinement procedure is repeated until no further improvements, as measured by the reliability criteria mentioned above, are made.
Once a reliable 3D map is obtained, computer graphics and other visualization tools may be used as aids in interpreting morphological details and understanding biological function in the context of biochemical and molecular studies and complementary X-ray crystallographic and other biophysical measurements.
Initially, for low-resolution studies (>10 Å) and where the structure is unknown, the gross shape (molecular envelope) of the macromolecule is best visualized with volume-rendering programs (e.g. Conway et al., 1996; Sheehan et al., 1996
; Spencer et al., 1997
). Such programs establish a density threshold, above which all density is represented as a solid and below which all density is invisible (representing possible solvent regions). Choice of the threshold that accurately represents the solvent-excluded density can prove problematic, especially if the microscope CTF is uncorrected (e.g. Conway et al., 1996
). For qualitative examination of maps, a threshold at 1.5 or 2 standard deviations above the background noise level provides a practical choice. Another, semi-quantitative, approach is to adjust the threshold to produce a volume consistent with the expected total molecular mass. This procedure is prone to error because the volume is sensitive to small changes in contour level, which, in turn, is highly sensitive to scaling and CTF correction. Caution should therefore be exercised in drawing conclusions based on volume fluctuations of less than 20% of that expected. As a general guide, solid-surface rendering in the range 80 to 120% of the expected volume gives reasonable shape and connectivity.
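As a minimal illustration of the semi-quantitative approach, the contour level can be chosen so that the enclosed volume matches the expected molecular mass, assuming a typical protein partial specific volume of about 0.73 cm³ g⁻¹ (roughly 1.21 Å³ per dalton). The following numpy sketch uses hypothetical variable names and ignores solvent and scaling subtleties.

```python
import numpy as np

def threshold_for_mass(density, voxel_size, mass_da, a3_per_dalton=1.21):
    """Contour level enclosing a volume that matches the expected mass.

    density       : 3D numpy array holding the cryo EM map
    voxel_size    : voxel edge length in angstroms
    mass_da       : expected molecular mass in daltons
    a3_per_dalton : volume per dalton, assuming a protein partial
                    specific volume of ~0.73 cm^3/g (~1.21 A^3 per Da)
    """
    target_volume = mass_da * a3_per_dalton              # expected volume, A^3
    n_voxels = int(round(target_volume / voxel_size**3)) # voxels above threshold
    values = np.sort(density.ravel())[::-1]              # densities, descending
    n_voxels = min(max(n_voxels, 1), values.size)
    return values[n_voxels - 1]

# Hypothetical example: a 4 MDa particle in a map sampled at 2 A/voxel
# level = threshold_for_mass(em_map, voxel_size=2.0, mass_da=4.0e6)
```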
A complete description of available graphical tools for visualizing 3D density maps is beyond the scope of this discussion, but it is worth noting several of the principles by which 3D data can be rendered. Stereo images (e.g. Liu et al., 1994; Agrawal et al., 1996
; Taveau, 1996
; Winkler et al., 1996
; Nogales et al., 1997
; Kolodziej et al., 1998
; Gabashvili et al., 2000
) provide a powerful way to convey the 3D structure. Also, as in X-ray crystallographic applications, stereo viewing is essential for exploring details of secondary and tertiary structural information in high-resolution 3D maps (e.g. Henderson et al., 1990
; Böttcher, Wynne & Crowther, 1997
). Additional visualization tools include: use of false colour to highlight distinct components (e.g. Yeager et al., 1994
; Cheng et al., 1995
; Hirose et al., 1997
; Metoz et al., 1997
; Zhou et al., 1999
) and a variety of computer `sectioning' or image-projection algorithms that produce cut-open views (e.g. Vigers et al., 1986
; Cheng et al., 1994
; Fuller et al., 1995
), spherical sections (e.g. Baker et al., 1991
), icosahedrally cut surfaces (Böttcher, Kiselev et al., 1997
), polar sections (Fuller et al., 1995
), cylindrical sections (e.g. Hirose et al., 1997
), radial projections (e.g. Dryden et al., 1993
), and radial depth cueing (Spencer et al., 1997
), which conveys an immediate, and often quantitative, view of the radial placement of details in a map (Grimes et al., 1997
). Animation also provides an alternative approach to enhance the viewer's perception of the 3D structure (e.g. van Heel et al., 1996
; Frank et al., 1999
). All these rendering methods should always be carefully described so the reader may distinguish representation from result.
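As a concrete example of such sectioning tools, a spherically averaged radial density profile, the quantity underlying radial projections and the choice of radii for spherical or cylindrical sections, can be computed directly from the map. The numpy sketch below assumes the particle is centred in the map and is intended only as an illustration.

```python
import numpy as np

def radial_profile(density, voxel_size):
    """Spherically averaged radial density profile about the map centre.

    Returns (radii in angstroms, mean density in each spherical shell);
    useful for locating capsid shells or choosing radii for spherical
    or cylindrical sections.
    """
    nz, ny, nx = density.shape
    z, y, x = np.indices(density.shape)
    r = np.sqrt((x - nx / 2.0)**2 + (y - ny / 2.0)**2 + (z - nz / 2.0)**2)
    shell = r.astype(int).ravel()
    sums = np.bincount(shell, weights=density.ravel())
    counts = np.bincount(shell)
    radii = np.arange(sums.size) * voxel_size
    return radii, sums / np.maximum(counts, 1)
```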
Difference imaging and density-map modelling are examples of two additional techniques that can sometimes enhance interpretation of 3D cryo EM data.
Difference imaging is a very powerful tool, long employed by structural biologists outside the cryo EM field, that permits small (or large) differences among closely related structures to be examined. One of the great advantages of cryomicroscopy of ice-embedded specimens over microscopy using negative stains is that cryo EM difference imaging yields more reliable results, as confirmed by correlation with biochemical and immunological data (e.g. Baker et al., 1990; Stewart et al., 1993
; Vénien-Bryan & Fuller, 1994
; Yeager et al., 1994
; Hoenger & Milligan, 1997
; Lawton et al., 1997
; Stewart et al., 1997
; Böttcher et al., 1998
; Conway et al., 1998
; Sharma et al., 1998
; Zhou, Macnab et al., 1998
). However, reliable interpretation is only possible if the difference maps are carefully calculated (i.e. from two maps calculated to the same resolution and scaled in such a way that the differences are minimized). Subtraction of two maps, each having an intrinsic noise level, guarantees that the difference map will always be noisier than either of the parent maps, and noise in the difference map is what determines the significance of features in it. Careful statistical analysis is an important prerequisite in attributing significance to and interpreting particular features. One critical test is to see whether a difference of a similar size can be found between independent determinations of the same structure. Differences that occur at symmetry axes must be treated cautiously, because the noise level there is greater.
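A minimal sketch of the map subtraction itself, assuming the two maps are already sampled identically and filtered to the same resolution, is given below; the least-squares scaling and the use of the difference-map standard deviation as a significance yardstick follow the general practice described above rather than any specific published protocol.

```python
import numpy as np

def difference_map(map_a, map_b):
    """Scale map_b onto map_a by least squares and return the difference.

    The maps are assumed to be sampled identically and already filtered
    to the same resolution.  A linear scale and offset minimizing the
    summed squared differences is applied before subtraction; the
    standard deviation of the difference map provides a first yardstick
    for judging the significance of individual features.
    """
    scale, offset = np.polyfit(map_b.ravel(), map_a.ravel(), 1)
    diff = map_a - (scale * map_b + offset)
    return diff, diff.std()

# Features would normally be examined only well above the noise (several
# standard deviations) and checked against a control difference computed
# between independent reconstructions of the same structure.
```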
The combination of X-ray and cryo EM data provides a powerful tool for interpreting structures [e.g. see the review by Baker & Johnson (1996)]. A high-resolution X-ray model can be docked into a cryo EM density map with greater precision than the nominal resolution of the map. Several similar protocols have been developed for fitting X-ray data to cryo EM reconstructions (e.g. Wikoff et al., 1994
; Che et al., 1998
; Volkmann & Hanein, 1999
; Wriggers et al., 1999
). First, because magnification in an electron microscope can vary, the absolute magnification of the reconstruction must be established to within a few per cent. In addition, the relative scale factor for the density must be calculated. Determination of absolute magnification and relative scaling may be accomplished by several means: (1) comparing the EM map with clear features in the X-ray structure of an individual component (Stewart et al., 1993
), (2) using radial density profile information derived from scattering experiments (e.g. Cheng et al., 1995
), or (3) using the X-ray structure of an entire assembly when this is available (e.g. Speir et al., 1995
). When a single component is used for scaling, it is necessary to refine the scale as the proper position and orientation of that component become better determined. Next, the resolution of the density distribution of the reconstruction must be matched to that in the X-ray structure. For example, fitting of the high-resolution model of the adenovirus hexon to the EM density map of the virus itself was accomplished by convolving the X-ray structure with the point-spread function for the EM reconstruction (Stewart et al., 1993
). An alternative procedure is simply to normalize the EM map so that it has the same range of density values as the corresponding X-ray map (e.g. Luo et al., 1993
; Wikoff et al., 1994
; Ilag et al., 1995
). A maximum-entropy approach (Skoglund et al., 1996
) was also used to treat CTF effects to improve the correspondence between the adenovirus hexon X-ray structure and the corresponding density in the EM map (Stewart et al., 1993
).
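The resolution-matching and range-normalization steps might be sketched as follows; here a Gaussian low-pass filter is used as a simple stand-in for the point-spread function of the reconstruction, which is an assumption of this illustration rather than the procedure of the cited studies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_resolution(model_map, resolution, voxel_size):
    """Blur a model-derived density to roughly the EM map resolution.

    A Gaussian whose width corresponds to the target resolution is used
    as a crude stand-in for the point-spread function of the
    reconstruction; the conversion below is only approximate.
    """
    sigma_voxels = resolution / (2.0 * np.pi * voxel_size)
    return gaussian_filter(model_map, sigma_voxels)

def normalize_range(em_map, ref_map):
    """Linearly rescale em_map so its density range matches ref_map."""
    scaled = (em_map - em_map.min()) / (em_map.max() - em_map.min())
    return scaled * (ref_map.max() - ref_map.min()) + ref_map.min()
```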
The next step in the modelling procedure is to fit the X-ray structure interactively to the cryo EM density using a display program such as O (Jones et al., 1991), at which point a subjective estimate of the quality of the fit can be made. One criterion for the quality of the fit is whether the hand of the structure can be identified. Because 3D density maps are generated from projected views of the structure, an arbitrary hand will emerge during refinement unless explicit steps are taken to determine the absolute hand. Thus, in the absence of relatively high resolution data (which, for example, would reveal the hand of features like α-helices), the absolute hand must be determined from other types of data such as shadowing (e.g. Belnap et al., 1996
) or comparison of the orientations of the same particle imaged at different tilt angles (e.g. Finch, 1972
; Belnap et al., 1997
). The match with X-ray data (of known hand) serves as an unambiguous determination of hand for the EM map. At this stage of fitting it is important to determine whether the X-ray model should be fitted as a single rigid body (e.g. Stewart et al., 1993
) or as two or more partly independent domains (e.g. Grimes et al., 1997
; Che et al., 1998
; Volkmann & Hanein, 1999
).
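A simple check of hand, once a model of known hand is available, is to compare the fit against the map and against its mirror image. The sketch below assumes model-derived densities on the same grid as the EM map; in practice the model must be re-fitted to the mirrored map before the two correlations are compared, so the re-fitted density is passed in as a separate, hypothetical argument.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two equally sampled maps."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def check_hand(em_map, fitted_model_map, refitted_model_map_for_mirror):
    """Compare a fitted model against the map and against its mirror image.

    Inverting the map along one axis (em_map[::-1]) produces the
    enantiomer; the model is re-fitted to that mirrored map before the
    correlations are compared.  The higher correlation indicates the
    correct hand.
    """
    return {'direct': correlation(em_map, fitted_model_map),
            'mirrored': correlation(em_map[::-1, :, :],
                                    refitted_model_map_for_mirror)}
```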
The quality and uniqueness of the X-ray/EM map fit are then assessed, often simply by calculating an R factor between the two maps. It may be necessary first to mask out density that is not part of the structure being fitted by the X-ray data (e.g. Stewart et al., 1993; Liu et al., 1994
; Cheng et al., 1995
). The uniqueness of the fit is then tested by rotating and shifting the X-ray model in the EM map and noting changes in the R factor (e.g. Che et al., 1998
). An objective fitting procedure, either in reciprocal or real space, is necessary for refining and checking the uniqueness of the result of the interactive fitting. The program X-PLOR (Brünger et al., 1987
) can be used for reciprocal-space refinement after the two maps are modified to avoid ripple and edge effects due to masking and differences in contrast (e.g. Wikoff et al., 1994
; Grimes et al., 1997
; Hewat et al., 1997
). Real-space refinement has been performed using other programs (e.g. Volkmann & Hanein, 1999
; Wriggers et al., 1999
). A few examples include fitting the Sindbis capsid protein to the Ross River virion map (Cheng et al., 1995
), fitting the VP7 viral protein into the bluetongue virus core map (Grimes et al., 1997
), fitting the Ncd motor domain to microtubules (Wriggers et al., 1999
), and fitting of two separate macromolecules, the myosin S1 subfragment and the N-terminal domain of human T-fimbrin, to reconstructions of complexes between these molecules and actin filaments (Volkmann & Hanein, 1999
). Comparison between results of real- and reciprocal-space fitting can prove informative. For example, reciprocal-space fitting is usually not constrained to avoid interpenetration of the densities, an issue more easily addressed in real-space fitting. If the two approaches yield different fits, it may be necessary to consider conformation changes between the X-ray and the EM structure. This type of analysis is best performed with quantitative model-fitting routines such as those currently being developed (e.g. Volkmann & Hanein, 1999
; Wriggers et al., 1999
).
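The uniqueness test described above, shifting (or rotating) the fitted model and monitoring the R factor, can be sketched as a small grid search. The snippet below assumes the fitted coordinates have already been converted to a density on the same grid as the EM map; it is illustrative and is not the algorithm of any of the cited programs.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def r_factor(obs, calc):
    """Simple real-space R factor between two maps on the same grid."""
    scale = np.sum(obs * calc) / np.sum(calc * calc)
    return np.sum(np.abs(obs - scale * calc)) / np.sum(np.abs(obs))

def translation_scan(em_map, model_map, max_shift=2):
    """Shift the model map on a small voxel grid and record the R factor.

    A well-determined fit shows a single clear minimum at zero offset;
    a rotational scan can be added in the same spirit.
    """
    results = {}
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                moved = nd_shift(model_map, (dz, dy, dx), order=1)
                results[(dz, dy, dx)] = r_factor(em_map, moved)
    return results
```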
Correlation of EM and X-ray data is not limited to situations in which a high-resolution structure from X-ray diffraction is used to enhance the interpretation of an EM reconstruction. For example, in several virus crystallography studies, an EM map was used to help solve the phase problem in the X-ray structure determination. This approach has the advantage that it avoids the need for heavy-atom derivatives, which produce only small changes in the scattering of a large object such as a virus and also often introduce problems of non-isomorphism. The first such example was the determination of the structure of cowpea chlorotic mottle virus (CCMV), in which an EM reconstruction was used to construct an initial model for phasing the X-ray data (Speir et al., 1995). The atomic coordinates of southern bean mosaic virus were placed into the CCMV EM density map, and a rotation function was calculated to 15 Å and used to orient the model with respect to the crystallographic data. Phases were calculated from the oriented model and then extended to 4 Å by the use of standard phase-extension and noncrystallographic symmetry procedures. The envelope from this 4 Å map was then used to construct a polyalanine model from which phases to 3.5 Å were calculated.
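The essential Fourier step of such phasing, computing low-resolution structure factors from an EM-derived model density placed in the crystal cell, is sketched below for the simplest case of a cubic P1 cell; space-group symmetry, scaling, solvent flattening and phase extension, which real crystallographic packages handle, are deliberately ignored.

```python
import numpy as np

def starting_phases(model_density, d_min, cell_edge):
    """Low-resolution structure-factor phases from an EM-derived model.

    model_density : 3D array of the model density placed in a cubic P1
                    cell (symmetry, scaling and solvent handling ignored)
    d_min         : resolution limit (angstroms) of the starting phase set
    cell_edge     : cubic cell edge in angstroms
    Returns amplitudes and phases (degrees) for reflections with d >= d_min.
    """
    f = np.fft.fftn(model_density)
    n = model_density.shape[0]
    indices = np.fft.fftfreq(n, d=1.0 / n)              # integer Miller indices
    h, k, l = np.meshgrid(indices, indices, indices, indexing='ij')
    inv_d2 = (h**2 + k**2 + l**2) / cell_edge**2        # 1/d^2 for a cubic cell
    mask = (inv_d2 > 0) & (inv_d2 <= 1.0 / d_min**2)
    return np.abs(f[mask]), np.degrees(np.angle(f[mask]))
```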
Three additional virus examples include: (1) The crystallographic analysis of the core of bluetongue virus (BTV), in which the cryo EM map of the BTV core (Prasad et al., 1992) was used to position the X-ray-derived structure of the VP7 capsid protein so that a pseudo-atomic model could be generated (Grimes et al., 1997
). This model was then used to calculate initial phases for the X-ray data for the whole core (Grimes et al., 1998
). (2) The cryo EM map of human rhinovirus type 14 (HRV14) complexed with neutralizing Fab 17-1A (Liu et al., 1994
) was used in the solution of the X-ray structure of the same complex. Here, the structures of the isolated Fab and the uncomplexed virus had been solved previously. The EM map of the HRV–Fab complex was used to position these components and provide a pseudo-atomic phasing model for the X-ray data (Smith et al., 1996
). (3) The EM structure for the hepatitis B cores at 7.4 Å (Böttcher, Wynne & Crowther, 1997
) was used to provide initial phasing for solving the atomic structure by X-ray crystallography at 3.3 Å (Wynne et al., 1999
).
Two non-viral examples of the use of EM data in X-ray crystallography include: (1) The structure of bacteriorhodopsin, solved at 3.5 and 3.0 Å resolution by electron microscopy (Grigorieff et al., 1996; Kimura et al., 1997
), was used to allow solution of several 3D crystal forms by molecular replacement (Pebay-Peyroula et al., 1997
; Essen et al., 1998
; Luecke et al., 1998
). (2) Density maps of the 50S ribosomal subunits from two species obtained by cryo EM (Frank et al., 1995
; Ban et al., 1998
) were used to help solve the X-ray crystal structure of the Haloarcula marismortui 50S subunit (Ban et al., 1998
).
The new generation of intermediate-voltage (∼300 kV) FEG microscopes becoming available is now making it much easier to obtain higher-resolution images which, by use of larger defocus values, have good image contrast at both very low and very high resolution. The greater contrast at low resolution greatly facilitates particle-alignment procedures, and the increased contrast resulting from the high-coherence illumination helps to increase the signal-to-noise ratio for the structure at high resolution. Cold stages are constantly being improved, with several liquid-helium stages now in operation (e.g. Fujiyoshi et al., 1991; Zemlin et al., 1996
). Two of these are commercially available from JEOL and FEI/Philips/Gatan. The microscope vacuums are improving so that the bugbear of ice contamination in the microscope, which prevents prolonged work on the same grid, is likely to disappear soon. The improved drift and vibration performance of the cold stage means longer (and therefore more coherently illuminated) exposures at higher resolution can be recorded more easily. Hopefully, the first atomic structure of a single-particle macromolecular assembly solved by electron microscopy will soon become a reality.
Finally, three additional likely trends include: (1) increased automation, both in the recording of micrographs, with the use of spot-scan procedures and remote microscope operation (Kisseberth et al., 1997; Hadida-Hassan et al., 1999
) and in every aspect of image processing; (2) production of better electronic cameras (e.g. CCD or pixel detectors); and (3) increased use of dose-fractionated, tomographic tilt series to extend EM studies to the domain of larger supramolecular and cellular structures (McEwen et al., 1995
; Baumeister et al., 1999
).
Acknowledgements
We are greatly indebted to all our colleagues at Purdue and Cambridge for their insightful comments and suggestions, to B. Böttcher, R. Crowther, J. Frank, W. Kühlbrandt and R. Milligan for supplying images used in Fig. 19.6.5.1, which gives some examples of the best work done recently, and J. Brightwell for editorial assistance. TSB was supported in part by grant GM33050 from the National Institutes of Health.
References