Nanostructure Science and Technology
Series Editor:
David J. Lockwood, FRSC
National Research Council of Canada
Ottawa, Ontario, Canada
Thomas Vogt • Wolfgang Dahmen • Peter Binev
Editors
Modeling Nanoscale Imaging
in Electron Microscopy
Editors
Thomas Vogt
NanoCenter and Department
of Chemistry and Biochemistry
University of South Carolina
1212 Greene Street
Columbia, SC 29208
USA
Peter Binev
Department of Mathematics
and Interdisciplinary Mathematics Institute
University of South Carolina
1523 Greene Street
Columbia, SC 29208
USA
Wolfgang Dahmen
Institut für Geometrie
und Praktische Mathematik
Department of Mathematics


RWTH Aachen
52056 Aachen
Germany
ISSN 1571-5744
ISBN 978-1-4614-2190-0 e-ISBN 978-1-4614-2191-7
DOI 10.1007/978-1-4614-2191-7
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2012931557
© Springer Science+Business Media, LLC 2012
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are
not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject
to proprietary rights.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Imaging with electrons, in particular using scanning transmission electron
microscopy (STEM), will become increasingly important in the near future, es-
pecially in the materials and life sciences. Understanding cellular interaction
networks will enable transformative research such as “visual proteomics,” where
spatial arrangements of the proteome or particular subsets of proteins will be
mapped out. In the area of heterogeneous catalysis, which in many cases relies on nanoparticles deposited onto supports, recently achieved advances in imaging and characterization of catalysts and precatalysts are transforming the field and allowing an increasingly rational design of multifunctional catalysts. Advances
in nanoscale manufacturing will require picometer resolution and control as well

as the elimination of routine visual inspection by humans to become viable and
implemented in “real” manufacturing environments. There are (at least) two major
obstructions to fully exploiting the information provided by electron microscopy.
On the one hand, a major bottleneck in all these applications is currently
the “human-in-the-loop,” resulting in slow and labor-intensive selection and accu-
mulation of images. A “smart” microscope in which instrument control, image
prescreening, image recognition, and machine learning techniques are integrated
would transform the use of electron imaging in materials science, biology, and other
fields of research by combining fast and reliable imaging with automated high-
throughput analysis for applications such as combinatorial chemical synthesis in catalysis or the
multiple “omics” in biology.
On the other hand, even if environmental perturbations could be completely
avoided, a principal dilemma remains: the acquired images offer only an “ambiguous reflection” of reality due to inherently noisy data, and this is the primary issue addressed in this volume. The noise structure is highly complex and far from being fully understood. In particular, it depends
in a complex way on the electron dose deployed per unit area. Low noise
levels require a high dose that, in turn, may cause damage. In most cases, high-
energy electrons damage biological and organic matter and thus require special
techniques for imaging when using electron microscopes with beams in the 100–300
kV range. Experiments are frequently performed at “nonbiological” temperatures
(i.e., cryo-electron microscopy) to reduce damage. But even when investigating
inorganic material at the atomic resolution level, relatively low dose image acquisi-
tion is often required to avoid damaging the sample. This again impacts significantly
the signal-to-noise ratio of the resulting images. The required low doses necessitate
new paradigms for imaging, more sophisticated data “denoising” and image analysis
as well as simulation techniques. In combination with ongoing experimental work to
reduce the environmental impact during nano-imaging experiments (e.g., vibrations,

temperature, acoustic, and electromagnetic interference), we have begun to develop
and apply nonlinear probabilistic techniques. They are enhanced by learning theory
to significantly reduce noise by systematically exploiting repetitive similarities of
patterns within each frame as well as across a series of frames combined with new
registration techniques. Equating “low electron dose” with “few measurements”
is an intriguing idea that is going to radically alter image analysis—and even
acquisition—using techniques derived from “Compressed Sensing,” an emerging
new paradigm in signal processing. A key component here is to use randomness to
extract the essential information from signals with “sparse information content” by
reducing the number of measurements in ranges where the signal is sparse. Working
first with inorganic materials allows us to validate our methods by selecting on the
basis of high-resolution images an object to be imaged at lower resolution. Building
on the insights gained through these experiments, we can then proceed to image silicate or organic
materials which cannot be exposed to high energy electrons for extended periods of
time. Examples of such an approach are given in Chap. 4.
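To make the “few measurements” idea concrete, the following Python sketch recovers a sparse signal from a small number of random linear measurements by iterative soft-thresholding. It is a minimal, generic illustration rather than any of the algorithms developed in Chap. 4, and the signal length, sparsity level, number of measurements, and regularization parameter are arbitrary illustrative choices.

```python
import numpy as np

# Minimal compressed-sensing illustration (generic, not the algorithms of Chap. 4):
# recover a sparse vector from m << n random linear measurements.
rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                      # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random (Gaussian) measurement matrix
y = A @ x_true                             # the m measurements ("few measurements")

# Iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm of A
lam = 0.01                                 # regularization weight (illustrative)
for _ in range(2000):
    r = x + step * A.T @ (y - A @ x)       # gradient step on the data-fit term
    x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In an actual microscopy setting the measurement process is dictated by the instrument (for example, by which probe positions are visited) rather than by a dense random matrix, and the sparsity typically lives in a transform domain; those issues are taken up in the chapter on compressed sensing and electron microscopy.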
Part of our work has greatly benefitted from three workshops organized at the
University of South Carolina by the Interdisciplinary Mathematics Institute and
the NanoCenter entitled “Imaging in Electron Microscopy” in 2009 and 2010 and
“New Frontiers in Imaging and Sensing” in 2011. At these workshops world-class
practitioners of electron microscopy, engineers, and mathematicians began to discuss
and initiate innovative strategies for image analysis in electron microscopy.
The goal of our work is to develop and apply novel methods from signal and
image processing, harmonic analysis, approximation theory, numerical analysis, and
learning theory. Simulation is an important and necessary component of electron
image analysis in order to assess errors of extracted structural parameters and
better understand the specimen–electron interactions. It thereby helps improve the
image as well as calibrate and assess the electron optics and their deviations due
to environmental effects such as acoustic noise, temperature drifts, radio-frequency
interferences, and stray AC and DC magnetic fields. The intuition-based approach based on Z²-contrast can be misleading if, for instance, in certain less compact structures electron channeling effects are not correctly taken into account.
Over the last 3 years, we have established a global research collaboration
anchored around electron microscopists at USC (Thomas Vogt, Douglas Blom) and elsewhere, such as Angus Kirkland (Oxford) and Nigel Browning (UC Davis and LLNL), with mathematicians at USC's Interdisciplinary Mathematics Institute (Peter Binev, Robert Sharpley), Ronald DeVore (Texas A&M), and Wolfgang Dahmen
(RWTH Aachen). These collaborations are critical in exploring novel denoising,
and nonlocal algorithms, as well as new methods to exploit Compressed Sensing for
nanoscale chemical imaging. This book is to be seen as a progress report on these
efforts.
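The patch-similarity idea behind the nonlocal denoising algorithms mentioned here (and developed in the later chapter on nonlocal means applied to HAADF–STEM) can be sketched in a few lines of Python. This is a generic textbook non-local means filter, not the algorithm used in this volume; the patch size, search window, filtering parameter h, and test image are illustrative choices only.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Generic non-local means: each pixel becomes a weighted average of pixels
    whose surrounding patches look similar. Illustrative sketch only."""
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s                  # center in padded coordinates
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)        # patch dissimilarity
                    weights.append(np.exp(-d2 / h**2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

# Small synthetic test: a smooth ramp image corrupted by Gaussian noise.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
print("residual std before:", np.std(noisy - clean))
print("residual std after: ", np.std(nlm_denoise(noisy) - clean))
```

The same weighting scheme extends naturally across a series of frames by letting the search window range over neighboring frames as well.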
We thought it was helpful to have Professor Michael Dickson (Philosophy,
University of South Carolina) address issues of realism and perception of nano-
images and how we might think of them in a “Kantian” way.
Chapters 2 and 3 are from well-established practitioners in the field of scanning
transmission electron microscopy, led by Professors Nigel Browning and Angus
Kirkland from the University of California Davis and Oxford University, respec-
tively. Both chapters exemplify what it means to “image at the edge” and push the
method to its current limitations; these limitations might be pushed back a bit further using different image analysis techniques.
Chapters 4 and 5 rely heavily on two facilities at USC: many experimental
data were taken on a JEOL JEM-2100F (200 kV) microscope with field emission
gun, spherical aberration corrector, STEM mode, High Angle Annular Dark Field
detector (HAADF), EELS, EDX, and tomography mode. This instrument routinely provides sub-Angstrom image resolution and elemental resolution at the atomic
level and is operated by Dr. Douglas Blom. Second, we have a state-of-the-art floating-point parallel computing cluster based on general-purpose graphics processing units (GPGPUs); each GPGPU is, in effect, a mini-supercomputer packed into a graphics card and used for floating-point
operations. Our major electron imaging simulation code is written in the CUDA programming language and uses a single-precision FFT routine from the CUFFT
library. We have been able to simulate inorganic structures of unprecedented
complexity using this hardware. These simulations were performed by Sonali Mitra, a Ph.D. student working under the supervision of Drs. Vogt and Blom in the
Department of Chemistry and Biochemistry at the University of South Carolina.
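For readers wondering why an FFT library sits at the core of such a simulation code: multislice-type electron image simulations repeatedly propagate the electron wave from slice to slice through the specimen, which is cheapest to do in Fourier space. The NumPy sketch below shows a single Fresnel propagation step; it is only a schematic, double-precision CPU analogue of what a production code does with single-precision CUFFT transforms on the GPU, and the grid size, sampling, wavelength, and slice thickness are illustrative values rather than parameters of the code described above.

```python
import numpy as np

# Schematic Fresnel propagation step of a multislice-type calculation
# (a generic sketch, not the production CUDA/CUFFT code described above).
nx = ny = 256                      # real-space grid (illustrative)
dx = 0.2e-10                       # sampling (m), ~0.2 Angstrom per pixel
lam = 2.51e-12                     # electron wavelength at 200 kV (m)
dz = 2.0e-10                       # slice thickness (m), illustrative

# Incident plane wave modulated by a (purely illustrative) random phase grating.
rng = np.random.default_rng(0)
psi = np.exp(1j * 0.05 * rng.normal(size=(ny, nx)))

# Fresnel propagator in reciprocal space: P(k) = exp(-i * pi * lam * dz * k^2).
kx = np.fft.fftfreq(nx, d=dx)
ky = np.fft.fftfreq(ny, d=dx)
k2 = kx[None, :] ** 2 + ky[:, None] ** 2
propagator = np.exp(-1j * np.pi * lam * dz * k2)

# One slice-to-slice step: FFT, multiply by the propagator, inverse FFT.
psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)
print("mean wave intensity after propagation (should stay ~1):", np.mean(np.abs(psi) ** 2))
```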
The work by Amit Singer and Yoel Shkolnisky (Chap. 6) is a tour-de-force in
explaining the mathematical theory that cryo-transmission electron microscopy is based on. What appears to many practitioners of electron microscopy as “black art” is
deeply rooted in fundamental mathematics. This chapter illustrates the deep-rooted
connections between imaging and applied mathematics, illustrating what Eugene
Wigner coined in 1960 as the “unreasonable effectiveness of mathematics in the nat-
ural sciences” (Communications on Pure and Applied Mathematics 13 (1): 1–14).
We believe that the combination of state-of-the-art imaging using aberration-
corrected electron microscopy with applied and computational mathematics will
enable a “new age” of imaging in both the hard and soft sciences. This will leverage
the huge infrastructure investments that have been made globally over the past 10
years in national laboratories, universities, and selected companies.
Tom Vogt would like to thank the Korean Ministry of Science, Education, and
Technology for a Global Research Laboratory grant and the National Academies
Keck Future Initiative for support. We all would like to acknowledge the support
from the NanoCenter, the Interdisciplinary Mathematics Institute, and the College
of Arts and Sciences at the University of South Carolina for the realization of the
above-mentioned workshops that helped shape our ideas presented in this volume.

Contents
Kantianism at the Nano-scale 1
Michael Dickson
The Application of Scanning Transmission Electron Microscopy (STEM) to the Study of Nanoscale Systems 11
N.D. Browning, J.P. Buban, M. Chi, B. Gipson, M. Herrera,
D.J. Masiel, S. Mehraeen, D.G. Morgan, N.L. Okamoto,
Q.M. Ramasse, B.W. Reed, and H. Stahlberg
High Resolution Exit Wave Restoration 41
Sarah J. Haigh and Angus I. Kirkland
Compressed Sensing and Electron Microscopy 73
Peter Binev, Wolfgang Dahmen, Ronald DeVore, Philipp Lamby,
Daniel Savu, and Robert Sharpley
High-Quality Image Formation by Nonlocal Means Applied
to High-Angle Annular Dark-Field Scanning Transmission
Electron Microscopy (HAADF–STEM) 127
Peter Binev, Francisco Blanco-Silva, Douglas Blom,
Wolfgang Dahmen, Philipp Lamby, Robert Sharpley,
and Thomas Vogt
Center of Mass Operators for Cryo-EM—Theory and Implementation 147
Amit Singer and Yoel Shkolnisky
Index 179

Kantianism at the Nano-scale¹
Michael Dickson
1 Introduction
The smallest object that the human eye can detect has dimensions of around
50 microns. So there is a sense in which a sphere that is, say, 10 microns in
diameter, is invisible to us. Some philosophers have argued that the invisibility, to
us, of a 10-micron sphere has epistemological significance: in particular, our knowledge about and our understanding of such things may be qualitatively different from our knowledge and understanding of directly observable objects. Along with

many other philosophers, I find this view untenable. It seems clear that although
they are not directly observable to us, 10-micron spheres are nonetheless the same sort of thing as their larger cousins (the 50-micron spheres). Indeed, there are creatures whose visual apparatus works more or less as ours does that can directly see 10-micron spheres.
However, at first blush, nano-objects raise issues of a different order of magni-
tude, literally. Being much smaller than a single wavelength of visible light, they
are not visible to anybody, not even in principle. For example, creatures (such as
ourselves) whose visual apparatus works via edge detection in the visible spectrum
could never see a nano-object. The nanoworld thus provides an epistemic challenge
to those (such as myself) who would argue that we do in fact have decent epistemic
access to the unobservable world. How exactly do we have this access, and what do
¹ Thanks to audiences at the University of South Carolina NanoCenter and the University of
California at Irvine Department of Logic and Philosophy of Science for helpful comments and
questions.
M. Dickson (✉)
Department of Philosophy and USC NanoCenter, University of South Carolina,
Columbia, SC 29208, USA
T. Vogt et al. (eds.), Modeling Nanoscale Imaging in Electron Microscopy,
Nanostructure Science and Technology, DOI 10.1007/978-1-4614-2191-7_1,
© Springer Science+Business Media, LLC 2012
our representations of the nanoworld really mean? Clearly they cannot mean “what
we would see, were we small enough” or some such thing. So what do they mean?
The central suggestion of this paper is that a more or less Kantian understanding

of what we are doing when we create scientific representations (and more specifi-
cally, for present purposes, images)—whether they be of 10-micron spheres or of
carbon nanotubes—resolves this epistemological puzzle. It shows how, and in what
sense, we can have genuine knowledge of objects that we either do not, or cannot
even in principle, observe directly.
After a brief discussion (Sect. 2) of the nature of nano-images, I will quickly
review (Sect. 3) some aspects of the philosophical debate about our epistemic access
to the “unobservable”. In Sect. 4, I present in broad outline a more or less Kantian
(really, neo-Kantian) account of science, one that I argue resolves the philosophical
debates while respecting the science. In Sect. 5, I conclude by applying this view to
nano-images.
2 Nano-Images: Seeing the Invisible?
Many images of nanoscale objects and their properties seem to present nano-objects
as “what one would see if one were small enough”. Artists’ renditions are especially
noteworthy here, as they frequently show shadows “caused by” three-dimensional
structure, changes in reflectance “caused by” changes in contour, and so on. The
scales on which these structures are depicted to occur are often in the vicinity of a
single Angstrom.
Images created for scientific consumption and study often have similar features.
Many STM and AFM images contain what appear to be shadows and other
visual elements that are reminiscent of the middle-sized three-dimensional world
(for example, again apparent changes in reflectance).
How do these elements get into the image? The production of images from the
raw data produced by an STM, AFM, or a host of other microscopes is very complex,
involving much processing of the data, feedback from data (at various stages of
processing) to the configuration of the instrument and even to the preparation of the
sample, and so on. Often much of the work of transforming data into a visual image
is done by more or less off-the-shelf software (in the form of a graphics library) that
was specifically developed for image production (for example, in animated movies).
Some bemoan, or at least highlight as epistemically (and even ethically) prob-

lematic, the fact that Hollywood or video game industry software is used for these
scientific purposes, and that various methods are used to “clean up” or “simplify”
images in ways that may be, so the complaint goes, misleading. Pitt ([9], 157),
for example, warns that “the images these instruments produce do not allow us to
see atoms in the same way that we see trees.” Indeed, elsewhere ([10]) he questions
whether these machines are “producing an honest replication of the object/surface in
question”. Pitt is especially concerned about the fact that these machines (along with
the software) produce images that are, as we observed above, quite similar in their
features to images of everyday middle-sized dry goods, and thus liable to produce
serious misunderstanding. The epistemological and ethical consequences are, he
argues, quite serious.
However, purveyors of the imaging products are not at all shy about the
provenance of their software, and its capacity for creating familiar-looking images.
Consider, for example, this statement from PNI regarding their AFMs:
“We have taken advantage of some of the latest software and video graphics
capabilities developed for video games to make NanoRuleC fast, intuitive, and easy
to use. It gives the AFM user the ability to rapidly visualize data in new ways and
gain new insights via controlled 3-dimensional imagery [8].”
For the researcher who must produce and use images from an AFM, ease of use
and rapid visualization are no doubt virtues. On the one hand, there is no doubt that
nano-images may be misleading in various ways. They may convey—especially to
the untrained consumer—a sense of order and controllability that is far beyond what
the systems “really” have.
Reflective scientists do seem to acknowledge both sides. Goodsell, for example,
is enthusiastic about the power of imaging technology and embraces the fact that
nano-images can make nano-objects seem familiar: “Imagery is playing an impor-
tant role as nanotechnology matures by making the invisible world of the nanoscale
comprehensible and familiar” ([3], 44). On the other hand, he is concerned that
“Pictures carry with them an insidious danger; images are so compelling that they

may compete with the scientific facts” ([3], 47).
Broadly speaking, we find here two opposing attitudes to these images. On the
one hand, one might say that the human visual system is constructed in such a
way (or, as Pitt would have it, “trained” or developed in such a way) that it will
interpret these images in the same way that it interprets analogous images of the
middle-sized objects of everyday experience, i.e., wrongly, and thus the images are
inherently misleading. The more that “Hollywood” has to do with these images, the
worse off they are. Instead, one should use modes of presentation that do not mislead
the visual system in this way. For example, one might use line scans or other modes
of representation that do not suggest a simple 3-dimensional representation to our
visual system as if the image were a photograph of the nano-object. One might think
of this attitude as more or less “realist” in the sense that it insists that in order to be
true, in order not to mislead the viewer about the true nature of the properties that are
being represented, we ought to choose modes of representation that push the viewer
away from misinterpretation and, in particular in this case, from visualization. (This
point applies as well to viewers who are savvy enough to know how correctly to
interpret the image, for their visual systems will be just as misled as the rest of ours.
They interpret correctly despite the pseudo-representational elements of the image.)
On the other hand, one might argue that the (scientific) purpose of these
images is to provide (or to suggest) a theoretical representation that is sufficient
to make predictions about future observations, and to suggest modes of control and
manipulation of nano-objects. But success on these scores does not imply that the
theoretical representation suggested to us by the image accurately depicts the object
“as it is”. We know (and, as scientists, care) only that the representation is sufficient
to support these other purposes. In this case, there is no problem with the images.
They clearly suggest (among other things) certain spatial, structural, properties of
the objects that are being studied, and as it turns out, presuming the objects to behave
as if they have those properties does lead (sometimes) to successful predictions
and manipulations. Whether they actually have the properties is not something that

science can verify, for science can, according to this attitude, do no better than to
make accurate predictions about observation. One might think of this attitude as
more or less antirealist, inasmuch as it sets aside as irrelevant to science the issue of
truth and focuses on predictive and manipulative success.
3 The Epistemic Significance of Observability
This debate is not new. It is perhaps most familiar to contemporary philosophers
of science in the form of the debate over van Fraassen’s [11] claim that direct
observability marks the line between scientific claims that we may legitimately
believe (or be said to know), and those that we should merely accept for the
purposes of doing science (e.g., prediction and manipulation). So, on this view,
I can legitimately believe (and possibly even know) that my dog is brown, but not
that a hydrogen atom has one electron. (Of course, one can and ought to accept the
latter claim for the purposes of doing science; it is, in that sense, well justified.)
A recent discussion of van Fraassen’s view will be helpful here. One worry
(from the beginning) about van Fraassen’s view is that the distinction itself is vague,
and that the obvious ways of making it precise are inherently circular. Muller [4,5]
is correct to notice that one of the most powerful objections to van Fraassen’s view
came from Musgrave [7], who, translated into the current context, argued thus:
Premise 1: It is correct to accept the wave theory of light, including whatever it
tells us about the observable.
Premise 2: The wave theory of light implies that certain nano-objects are strictly
unobservable.
Premise 3: This implication of the wave theory of light is, clearly, an implication
about unobservables.
Premise 4: The constructive empiricist accepts, but does not believe, theoretical
claims about what is unobservable.
Conclusion: The constructive empiricist does not believe that it is not the case that
nano-objects are unobservable.
This conclusion is a problem, because van Fraassen wants to rely on science to
tell him which things are unobservable (and this strategy is quite reasonable, lest

one appear to be engaging in armchair science), but the argument suggests that he
cannot do so. Hence he cannot draw the distinction that he wants.
Muller proposes a solution, which involves taking “observable” to be more or less
co-extensive with “directly perceptible by the senses unaided”. I find this solution
bizarre, because it makes epistemic considerations depend in a very odd way on
personal idiosyncrasies—for example, the sort of justification that I have of certain
claims may be different from the sort of justification that those of you who are not
as blind as me have. Yours is directly visual and can support legitimate beliefs. Mine
is indirect and theoretical, involving an appeal to the laws of optics as they apply
to eyeglasses, and as such cannot support belief but only acceptance. This view
will have a hard time making good sense of scientific knowledge. The epistemic
quality of scientific knowledge-claims does not depend on who is uttering them
(for example, whether that person happens to wear eyeglasses), but on the overall
state of the scientific enterprise geared toward the verification of the claim in
question.²
Although I have not established the point here,³ I believe that van Fraassen's
position does naturally lead to this dilemma—either the distinction between the
observable and the unobservable must be established without appeal to science, or
it must be co-extensive with the distinction between what is directly perceptible
and what is not directly perceptible, and therefore different for different individuals.
Neither option is very attractive.
But lacking a viable distinction between which scientific representations to
understand realistically and which to understand instrumentally, it seems then that
we are left with either the “fully realist” position that all scientific representations
should aspire to be “true representations”, or the “fully antirealist” position that
all scientific representations are nothing more than instruments for prediction,

manipulation, and other scientific or technological activities.
4 A Neo-Kantian Understanding of Science
There is a third way [1]. The view that I would like us to consider combines aspects
of both of these attitudes. The basic idea is that the first attitude is correct
insofar as it acknowledges that we are “wired” to see the image in a certain way—
the visual stimulus provided by the image prompts us to apply certain concepts
to the image. (For example, where there are appropriate changes in reflectance,
we see three-dimensional spatial contour.) The first attitude is incorrect to see an
epistemological problem here, for according to the (Kantian) attitude that I will
outline below, science is not about the properties of things independently of how
they are conceptualized by us.
² There are other reasons to think that the perceptible/imperceptible distinction is not epistemically
relevant. Consider graphene. What are we to make of the fact that we can see (albeit through an
optical microscope, but the point clearly extends to other cases where unaided perception applies)
flakes of graphene whose thickness, by all reasonable accounts, is less than we can discern? Can we seriously entertain agnosticism (acceptance but not belief) regarding the existence or properties of objects (e.g., atoms or small molecules) that could apparently be (and indeed can be) of the overall dimensions of the thickness of graphene? And what of the flakes themselves? Is their width and
breadth real, but their thickness not?
³ In particular, the discussion has advanced beyond the paper by Muller. See Muller and van
Fraassen [6] and the references therein.
In other words, the second attitude is correct insofar as it acknowledges
that scientific claims ultimately must “refer themselves” to human observations.
(And these are epistemically relevant for us. Indeed, on what else could we base our
knowledge?) Indeed, the Kantian attitude goes a step further and says that science
is about the properties of physical objects as conceptualized by us. This process of
conceptualization (of the stimuli that we get via observation) is what brings theory

to bear on the physical world. However, the second attitude is incorrect insofar as its
instrumentalism implies that we ought not to draw inferences about the properties of
unobservable objects from these images. We can and should draw such inferences—
the fact that these inferences concern unobservable objects “as conceptualized by
us” makes their conclusions no less objective.
In short: images of nano-objects portray the properties, as conceivable by us, of
very small things. These are the empirically meaningful properties. They represent
the manner in which attributions of various theoretical properties to nano-objects
become observationally grounded in some possible perceptions.
Something like this general view of science has a pedigree going back to Kant.⁴
Prior to Kant, Hume had argued against many contemporary accounts of empirical
knowledge on the grounds that we can never have any good reasons to think that
the representations of the world that we have “in our heads” are in fact faithful
representations of the things in the world, for whenever we seek to compare
our representations with “the world,” we necessarily first represent the world in some way or another to ourselves, and thus we end up comparing representation with representation, not representation with “the world unrepresented”. Hume was led to
skepticism. Kant, on the other hand, took scientific knowledge as given, and sought
to understand how scientific knowledge is possible at all, in light of Hume’s critique.
His solution to Hume’s problem was what he called a new “Copernican revo-
lution”. Just as Copernicus made it clear that we are not observing the objective
motions of heavenly bodies directly, but their motions relative to ourselves, so Kant
turned Hume on his head and argued that science is not about those “things in
themselves” in the world, to which we can never have direct mental access. True, our
access to those things is always mediated by our perceptual and conceptual faculties
(which is what gives rise to Hume’s problem), but science is about those things as
perceived and conceived by us. It is nonetheless entirely objective, because there are
objective facts about how we perceive and conceive.
For example, on Kant’s view we necessarily represent external objects as existing

in space and we do so, necessarily, in accordance with the laws of Euclidean
geometry. Those laws therefore become objective facts about space (a plausible
view when Newtonian physics was taken to be true). Similarly, Kant argued that we
conceive of things in such a way that places them in various relations, for example,
causal relations.
⁴ Interpretation of Kant is both complex and controversial. I generally follow the views of Friedman
[2], though I am here leaving out almost all of the details.
At least some aspects of Kant’s view are called seriously into question by modern
physics. For example, the theory of relativity strongly suggests that the laws of
Euclidean geometry are not facts about space. In response to relativity (amongst
other mathematical and scientific developments), some philosophers developed
a “neo-Kantian” view according to which what structures our perceptions and
conceptions of the objects of scientific inquiry flows not from unavoidable facts
about human perceptive and cognitive faculties, but from the categories and modes
of conception that are best suited to provide a framework within which the scientific
theory is conceivable and expressible. For example, Euclidean geometry (among
other things) provides a suitable framework within which one can conceive and
express Newtonian mechanics.
The notion of “best suited” can be spelled out in various ways, but simplicity
often plays a role. For example, it is possible to express general relativity in a fully
Euclidean geometry, but the laws become unwieldy. They are far simpler (although
the precise definition of simplicity is at best an unresolved matter) when expressed
in a non-Euclidean geometry. On this neo-Kantian view, then, this non-Euclidean geometrical structure provides the framework within which it is possible to conceive
and express Einstein’s theory, and thus its laws govern the structure within which
we make scientific spatial observations.
Note that just as, for Kant, the facts of Euclidean geometry are a priori, i.e.,
prior to experience in the sense of being the (for him, necessary) form of spatial

representation, so also, on this neo-Kantian view, the non-Euclidean geometry
employed in general relativity is a priori, i.e., prior to experience in the sense of
being the (now contingent insofar as general relativity could be replaced by another
theory) form of spatial representation.
Again, there is no pejorative or epistemically worrying sense in which this neo-
Kantian view is “subjective” or “antirealist”. We do not simply choose a form of
representation. We discover which form of representation best suits the development
of a successful theory that accommodates our perceptions represented in this way.
Neither, however, is this view that of the traditional realist who presumes that
the representations that we find in successful scientific theories are in some sense
“isomorphic” to the things in themselves (or approximately so), unperceived and
unconceived by us. Science is not about those things on this view. It is about things
as perceived and conceived by us.
5 Nano-Images
How does this view apply to nano-images? Recall the two views mentioned
above. The “fully realist” position was that scientific images (and indeed all
representations) must strive to be “just like” (i.e., in some sense, isomorphic, or
at any rate homomorphic, to) their target. In the current context of nano-images,
this view faces two challenges. First, current practices employed in the depiction of
nano-objects do not seem to conform to it. Neither artistic renditions nor scientific
images of nano-objects pretend to be “accurate depictions”—they contain many
elements (reflectance, color, etc.) that nobody thinks are properties of the objects
that they depict. Second, even if we abstract from those properties, supposing them
to be inessential to the representational content of the images, we are stuck with the
fact that we have no way to verify that the objects really have the properties depicted
by the image (e.g., such and such spatial structure) apart from employing the very
technology and assumptions used to create the images in the first place. Unlike, say,
the image of a distant mountain produced by binoculars, we cannot “go there and
check”. Instead, in essence we presume that the objects in question have certain

types of property (e.g., spatial structure) and then design instruments to measure
and manipulate that structure.
On the fully antirealist view, these presumptions are pure “as if”: they turn out
to be useful for the purposes of making predictions and producing technological
marvels, but they have nothing to do with “truth” or “the world”.
On the neo-Kantian view, the antirealist’s mistake here lies in presuming that
science (properly understood, epistemologically speaking⁵) was ever about anything
other than the world as perceived and conceived by us. Nano-images are perfectly
“objective” and “accurate” and “true” from this point of view. We structure our
representations of nano-objects as we do because it is the best way (so far as we
know right now) to theorize about them. The antirealist points out that science
cannot verify that our representations faithfully depict the “thing in itself”. The neo-
Kantian suggests that the very idea that such a feat could be accomplished—even for
middle-sized dry goods!—is incoherent. We do not, and never could, have “direct
access” to “things in themselves”—we always perceive and conceive them in one
way or another, and what is verified (if anything) about a scientific theory is that it
conforms (or not) to things as perceived and conceived in this way.
On this neo-Kantian view, both the practices surrounding the generation of nano-
images, and the procedures that we use to verify those images make perfect sense.
In the first place, we should depict nano-objects as having color, casting shadows,
etc. Why? Because we conceive of them as having spatial structure, and as far as we
(human observers) are concerned, objects with spatial structure are like that. In other
words, if we wish to convey to another (or to ourselves) an image of an object with
spatial structure, we ought to include such properties as color and shading in the
image. Indeed, if we fail to include such properties, we are likely to fail to convey
the intended spatial structure, and thus fail to represent the object as perceived and
conceived by us. Note that it does not follow that we cannot (or should not) add
as a proviso that it is impossible “actually to see” the objects in this way. One can

understand perfectly well that nano-objects are smaller than a wavelength of light,
⁵ The point here is not that practitioners have always understood their practice in this way, but
that understanding the practice in this way gives it epistemic credibility (i.e., we can legitimately
say that the practice produces knowledge) without doing serious violence to (for example,
misrepresenting) the practice itself.
and yet admit that including features such as color and shadow enables portraying
them as having spatial structure.
In the second place, there is nothing wrong with the procedures of verification
used to verify that the images we get from our instruments “accurately” represent
the objects. The antirealist’s contention against the realist is that our instruments
and procedures assume that nano-objects have the sorts of properties that we are
attributing to them. This contention is correct, but on the neo-Kantian view misses
the point of “verification” in the first place. The point is not to verify that the objects
“really have” the properties in question, but to verify that the theories⁶ that we are building, based on the (often tacit and granted) assumption that the objects have such properties, are panning out so far.
Of course, nothing guarantees that these procedures of verification will work out.
In the most extreme case, it could turn out that what is at fault is the very manner
in which we conceive of nano-objects. In this case, we will be forced to rethink the
very foundations of our scientific activity (as we have been in the case of quantum
theory, where the usual modes of conception have broken down). Indeed, one of the
exciting aspects of research in this area is precisely that such a result is possible.
References
1. Dickson M (2004) The view from nowhere: quantum reference frames and quantum uncer-
tainty. Stud Hist Philos Mod Phys 35:195–220
2. Friedman M (2010) Synthetic history reconsidered. In: Domski M, Dickson M (eds) Discourse
on a new method: essays at the intersection of history and philosophy of science. Open Court
Press, Chicago, pp 571–813
3. Goodsell D (2006) Seeing the nanoscale. NanoToday 1:44–49
4. Muller FA (2004) Can constructive empiricism adopt the concept of observability? Philos Sci
71:637–654
5. Muller FA (2005) The deep black sea: observability and modality afloat. Br J Philos Sci
56:61–99
6. Muller FA, van Fraassen BC (2008) How to talk about unobservables. Analysis 68:197–205
7. Musgrave A (1985) Constructive empiricism and realism. In: Churchland P, Hooker CA (eds)
Images of science. University of Chicago Press, Chicago, pp 196–208
8. Pacific Nanotechnology, Inc. (2003) Press Release: “New Image Display and Analysis Soft-
ware for Atomic Force Microscopy”. 17 March. Available online at freelibrary.com
9. Pitt J (2004) The epistemology of the very small. In: Baird D, Nordmann A, Schummer J (eds)
Discovering the Nanoscale. IOS Press, Amsterdam, pp 157–163
10. Pitt J (2005) When is an image not an image? Techné: Research in Philosophy and Technology 8:23–33
11. Van Fraassen BC (1981) The scientific image. Clarendon Press, Oxford
⁶ I have nothing grand in mind by using the term ‘theory’ here. It is being used here to refer
to models, hypotheses, simple assertions (such as ‘each molecule of X is surrounded by several
molecules of Y ’) and so on.
The Application of Scanning Transmission
Electron Microscopy (STEM) to the Study
of Nanoscale Systems
N.D. Browning, J.P. Buban, M. Chi, B. Gipson, M. Herrera, D.J. Masiel,
S. Mehraeen, D.G. Morgan, N.L. Okamoto, Q.M. Ramasse, B.W. Reed,
and H. Stahlberg

Abstract In this chapter, the basic principles of atomic resolution scanning
transmission electron microscopy (STEM) will be described. Particular attention
will be paid to the benefits of the incoherent Z-contrast imaging technique for
structural determination and the benefits of aberration correction for improved
spatial resolution and sensitivity in the acquired images. In addition, the effect
that the increased beam current in aberration corrected systems has on electron
beam-induced structural modifications of inorganic systems will be discussed.
N.D. Browning ()
Department of Chemical Engineering and Materials Science, University of California-Davis,
One Shields Ave, Davis, CA 95618, USA
Department of Molecular and Cellular Biology, University of California-Davis,
One Shields Ave, Davis, CA 95618, USA
Chemical and Materials Sciences Division, Pacific Northwest National Laboratory, 902 Battelle
Boulevard, Richland, WA 99352, USA
J.P. Buban • S. Mehraeen
Department of Molecular and Cellular Biology, University of California-Davis,
One Shields Ave, Davis, CA 95618, USA
M. Chi
Materials Science Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
B. Gipson • H. Stahlberg
C-CINA, Biozentrum, University Basel, WRO-1058 Mattenstrasse, CH-4058 Basel, Switzerland
M. Herrera
Departamento de Ciencia de los Materiales e Ingeniería Metalurgica y Química Inorgánica,
Facultad de Ciencias, Universidad de Cádiz, Pol. Rio San Pedro,
11510 Puerto Real (Cádiz), Spain
T. Vogt et al. (eds.), Modeling Nanoscale Imaging in Electron Microscopy,
Nanostructure Science and Technology, DOI 10.1007/978-1-4614-2191-7_2,
© Springer Science+Business Media, LLC 2012
Procedures for controlling the electron dose will be described along with image
processing methods that enable quantified information to be extracted from STEM
images. Several examples of the use of aberration-corrected STEM for the study
of nanoscale systems will be presented: a quantification of vacancies in clathrate systems, a quantification of N doping in GaAs, a quantification of the size distribution in nanoparticle catalysts, and an observation of variability in dislocation core composition along a low-angle grain boundary in SrTiO₃. The potential for
future standardized methods to reproducibly quantify structures determined by
STEM and/or high-resolution TEM will also be discussed.
1 Introduction
Transmission electron microscopy (TEM) has long played a key role in driving
our scientific understanding of extended defects and their control of the properties

of materials—from the earliest TEM observations of dislocations [1] through to
the current use of aberration-corrected TEMs to determine the atomic structure
of grain boundaries [2]. With the current generation of aberration corrected and
monochromated TEMs, we can now obtain images with a spatial resolution
approaching 0.05 nm in both the plane-wave, phase-contrast TEM and the focused
probe, Z-contrast scanning-TEM (STEM) modes of operation [3–5]. In addition to
the increase in the spatial resolution, aberration correctors also provide an increase
in the beam current and consequently the signal-to-noise levels (contrast) in the
acquired images. This means that small differences in structure and composition
can be more readily observed and, for example, in the STEM mode of operation,
complete 2-D atomic resolution elemental maps can be generated using electron
energy loss spectroscopy (EELS) [6, 7]. Furthermore, the EEL spectra that are
obtained using a monochromated microscope also show vast improvements over
the spectra that could be obtained a few years ago—allowing bonding state changes
to be observed from core-loss spectra with high precision [8] and the low-loss region
D.J. Masiel • D.G. Morgan
Department of Chemical Engineering and Materials Science, University of California-Davis,
One Shields Ave, Davis, CA 95618, USA
N.L. Okamoto
Department of Materials Science and Engineering, Kyoto University, Yoshida, Sakyo-ku,
Kyoto 606–8501, Japan
B.W. Reed
Condensed Matter and Materials Division, Physical and Life Sciences Directorate,
Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94550, USA
Q.M. Ramasse
SuperSTEM Laboratory, J Block, STFC Daresbury, Daresbury WA4 4AD, UK

of the spectrum to be used to map fluctuations in optical properties [9–11]. Taken together, these newly developed capabilities for (S)TEM provide a comprehensive
set of tools to measure, quantify, and understand the atomic scale properties of
nanoscale materials, interfaces, and defects.
However, although the tools now exist to obtain very high-quality images from
nanoscale materials, defects, and interfaces, as yet there has been very little work to
quantify the information contained in them—other than to identify a structure and
report composition variations in obvious cases across a hetero-interface. Images of
individual interfaces, grain boundaries, or dislocations are typically presented as
being “representative” of the whole structure with little proof that this is actually
the case. In addition, the history of the sample is usually poorly defined in terms
of its synthesis, preparation for the TEM, and beam irradiation history (which
can easily have a significant effect on the structure, particularly when aberration-
corrected microscopes are used). This is in stark contrast to the work that has been
performed using TEMs for structural biology, where quantifying the information
present in low-dose images has been the major emphasis of research for over 20
years [12–24]. Image processing and analysis methods for the study of organic
systems can now routinely cope with variations across an image caused by sample
movement and noise and can quantify the contribution of each—leading to a well-
defined measurement of resolution and the accurate incorporation of these essential
experimental factors into the structure determination procedure. In the case of the
analysis of point defects in nanoscale systems, dislocations, grain boundaries, and
interfaces by aberration-corrected (S)TEM, the lack of periodicity in the structure,
large composition variations, and a sensitivity of the structure to beam modification
actually make the experimental considerations very similar to those employed for
organic systems. We can therefore use the image processing/analysis tools that have
already been defined for structural biology to provide an unprecedented atomic scale
characterization of nanoscale materials, defects, and interfaces—potentially even
defining the effect of single atom composition variations on the structure and the

subsequent properties.
2 Z-Contrast Imaging in STEM
The main principle behind the scanning transmission electron microscope is to use
the electron lenses to form a small focused beam (probe) of electrons on the surface
of the specimen [25] (Fig. 1a). As this electron probe is scanned across the surface of
the specimen, the electrons that are scattered by the specimen are collected in a
series of detectors that cover different angular ranges—the signal in each detector
therefore contains a different part of the physics of the interaction of the beam with
the specimen [26]. A 2-D image is created by displaying the output from one of these
detectors as a function of the beam position as it is scanned across the specimen.
Most STEM images use a high-angle annular dark field (HAADF) detector, in which
the scattering that is collected is proportional to the Rutherford scattering cross-
section that has a second-power Z² dependence on the atomic number Z of the
[Fig. 1 schematic labels: Aperture; Sample; To EELS Detector; ADF Detector; Object Function; Probe Function; Image; panels (a) and (b)]
Fig. 1 (a) The geometry of the probe, detector and sample produce an overlapping CBED pattern
at the detector plane. (b) The Z-contrast image (and electron energy loss spectrum) can, to a first
approximation, be treated as a convolution between the probe intensity profile and the scattering
cross section for the signal of interest (i.e. inelastic or elastic). The two probes shown illustrate the

effect of aberration correction on the final image
scattering center—giving rise to the name Z-contrast imaging. From the earliest
images of individual heavy atoms on a light support [25], the technique evolved to
be able to image crystals with atomic spatial resolution [27]. In the remainder of this
section, the principles behind the spatial resolution and the compositional sensitivity
of the method will be described and the effect of aberration correction discussed.
2.1 Basic Concepts of Z-contrast Imaging
As described above, a Z-contrast image [27–32] is formed by collecting the
high-angle scattering on an annular detector and synchronously displaying its
integrated output on a TV screen or computer monitor while the electron probe
is scanned across the specimen. Detecting the scattered intensity at high angles
and integrating it over a large angular range effectively averages coherent effects
between atomic columns in the specimen, allowing each atom to be considered to
scatter independently with a cross-section approaching a Z² dependence on atomic
number (Fig. 1b). This cross-section forms an object function that is strongly peaked
at the atom sites. The detected intensity is, to a first approximation, a convolution
of this object function with the probe intensity profile. The small width of the
object function (∼0.1 Å) means that the spatial resolution is limited only by the probe size of the microscope. For a crystalline material in a zone-axis orientation, where the atomic spacing is greater than the probe size (∼0.1 nm for the JEOL 2100 C_s-corrected STEM at UC-Davis, ∼0.1 nm for the Nion-corrected VG STEM at Lawrence Berkeley National Laboratory (LBNL), and 0.05–0.1 nm for the C_s-corrected FEI Titans at Lawrence Livermore National Laboratory (LLNL), LBNL,
and Oak Ridge National Laboratory (ORNL)—these microscopes were used to
obtain the results presented later in this chapter), the atomic columns can be
illuminated individually. Therefore, as the probe is scanned over the specimen, an
atomic resolution compositional map is generated in which the intensity depends
on the average atomic number of the atoms in the column. An important feature of
this method is that changes in focus and thickness do not cause contrast reversals
in the image, so that atomic sites can be identified unambiguously during the
experiment. As the images can be interpreted directly in real time while working
on the microscope, they can be used to position the probe to obtain electron energy
loss spectra from defined locations in the structure [33–39], thus permitting a full
spectroscopic analysis to be correlated with the image on the atomic scale.
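The incoherent model just described translates directly into a computation: to a first approximation, the image is the convolution of a Z²-weighted object function (sharp peaks at the projected column positions) with the probe intensity profile. The Python sketch below illustrates this with a Gaussian stand-in for the probe and a toy two-column specimen; the pixel size, probe width, and atomic numbers are illustrative only, and quantitative interpretation of real intensities requires the full image simulations discussed below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy illustration of incoherent Z-contrast image formation: a Z^2-weighted
# object function convolved with the probe intensity profile. All numbers
# (pixel size, probe width, atomic numbers) are illustrative only.
npix = 256
pixel_size = 0.1                    # Angstrom per pixel (illustrative)

# Object function: delta-like peaks at two column positions, weighted by Z^2.
obj = np.zeros((npix, npix))
obj[128, 100] = 38 ** 2             # e.g. a strontium column (Z = 38)
obj[128, 156] = 22 ** 2             # e.g. a titanium column (Z = 22)

# Probe intensity profile approximated here by a Gaussian of ~1 Angstrom FWHM.
fwhm_angstrom = 1.0
sigma_pixels = (fwhm_angstrom / pixel_size) / 2.355
image = gaussian_filter(obj, sigma=sigma_pixels)

# The heavier column appears brighter; in this incoherent model a change of
# focus only broadens the probe profile and does not reverse the contrast.
print("peak ratio (Sr column / Ti column):", image[128, 100] / image[128, 156])
```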
Since the initial development of the Z-contrast imaging technique, there have
been many studies that have confirmed the general concept of incoherent imaging
described above—in particular, identifying the location of atomic columns in the
image is straightforward. However, interpretation of the intensities within the atomic
columns seen in the images is a little more complicated than the simple incoherent
model suggests [38–42]. If you want to interpret the absolute intensities in the
individual columns in terms of the presence of vacancies and impurities, then first
principles simulations of the atomic structures must be accompanied by image
simulations—there are currently several available packages to perform these simula-
tions [43,44]. As the aim of this chapter is to discuss the applications of quantitative
imaging in STEM, we will not discuss the details of the simulations further here,
other than to mention in the subsequent sections when simulations were used.
2.2 Aberration Correction
In conventional high-resolution TEM imaging and in atomic resolution Z-contrast
imaging, the resolution of the final image is limited by the aberrations in the
principal imaging lens. For STEM, this means the aberrations in the final probe-
forming lens—which determines the spatial extent of the electron beam on the
surface of the specimen. As with other high-resolution methods, defocus can be

used to balance out the effects of aberrations up to some optimum value, usually
called the Scherzer defocus, with a resolution given by
d = 0.43 (C_s λ³)^(1/4)    (1)
As can be seen from this equation, there are two principal factors that control resolution—the wavelength λ of the electrons (determined by the acceleration voltage of the microscope) and the spherical aberration coefficient C_s of the lens. For typical C_s values in uncorrected state-of-the-art 200 kV TEM/STEM microscopes (C_s ≈ 0.5 mm), this gives an optimum probe size of ∼0.12 nm [45]. This equation also shows the two methods that can increase the spatial resolution—higher voltage and lower C_s.
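As a back-of-the-envelope check of (1), the short Python snippet below evaluates the relativistically corrected electron wavelength at 200 kV and the resulting optimum probe size for C_s = 0.5 mm; it gives roughly 0.13 nm, in line with the ∼0.12 nm figure quoted above.

```python
import math

# Back-of-the-envelope evaluation of d = 0.43 (C_s * lambda^3)^(1/4) from (1).
h = 6.626e-34      # Planck constant (J s)
m0 = 9.109e-31     # electron rest mass (kg)
e = 1.602e-19      # elementary charge (C)
c = 2.998e8        # speed of light (m/s)

V = 200e3          # acceleration voltage (V)
Cs = 0.5e-3        # spherical aberration coefficient (m), i.e. 0.5 mm

# Relativistically corrected electron wavelength.
lam = h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))
d = 0.43 * (Cs * lam**3) ** 0.25

print(f"electron wavelength at 200 kV: {lam * 1e12:.2f} pm")  # ~2.51 pm
print(f"optimum probe size:            {d * 1e9:.2f} nm")     # ~0.13 nm
```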
Fig. 2 The effect of spherical aberration (a) can be corrected to create a smaller, more intense
electron probe (b)
In the late 1990s there were two main efforts underway to establish C_s correctors for TEM [3] and STEM [4]. For the formation of a STEM probe, the effect of C_s correction is shown schematically in Fig. 2. Practically, the effect of C_s
correction means that a larger area of the lens is free from spherical aberration,
allowing larger apertures to be used and a higher resolution to be obtained [46].
An important corollary to the increase in spatial resolution is that the larger aperture
size means that the smaller probe that is formed can have up to an order of magnitude
more beam current than a conventional STEM probe [6]. Now that spherical
aberration has essentially been removed as the limitation in the probe size, higher
order aberrations are the limiting factors. As was the case with the Scherzer defocus,
the aberration corrector can now be adjusted to compensate for those higher order
aberrations by tuning C_s itself to an optimal value. Although, due to the complexity of the multipole electron optics of correctors, many more parameters actually control the probe size, (1) can be modified to yield the probe full-width-at-half-maximum of a simplified system limited only by the 5th-order aberration C_5 [47]:
d = 0.37 (C_5 λ⁵)^(1/6)    (2)

For a state-of-the-art aberration-corrected STEM, the probe size can now approach
0.05 nm [5] and designs are currently being implemented that should push resolution
even further to ∼0.03 nm [48]. Aberration-corrected STEMs are now becoming
the standard for high-resolution imaging, with many applications to solve materials
science problems being present in the literature [49–53].
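Equation (2) can be evaluated in the same back-of-the-envelope way. Because of the 1/6 power, the C_5-limited probe size is remarkably insensitive to the actual value of C_5; the C_5 values in the sketch below are assumed, purely illustrative numbers and are not taken from any of the instruments discussed in this chapter.

```python
# Illustrative evaluation of d = 0.37 (C_5 * lambda^5)^(1/6) from (2).
# The C_5 values below are assumed, illustrative numbers only.
lam = 2.51e-12                     # electron wavelength at 200 kV (m)
for C5_mm in (1.0, 10.0, 100.0):
    C5 = C5_mm * 1e-3              # convert mm to m
    d = 0.37 * (C5 * lam**5) ** (1.0 / 6.0)
    print(f"assumed C5 = {C5_mm:5.0f} mm  ->  C5-limited probe size = {d * 1e9:.3f} nm")
```

A hundredfold change in the assumed C_5 changes the limit by only about a factor of two (from roughly 0.025 nm to 0.055 nm here), which is consistent with the 0.03–0.05 nm probe sizes quoted above.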
Another advantage of the aberration corrector for STEM is the increased
usefulness of other imaging signals not traditionally exploited in scanning mode.
Instead of the annular-shaped detector used for Z-contrast imaging, a detector placed directly on the optic axis will form a bright-field image, which can be shown by simple
optical reciprocity considerations to be equivalent to a conventional high-resolution
TEM phase contrast image. Thanks to the larger aberration-free area of the electron
wavefront, the collection angle for this bright field detector can be increased in a
corrected instrument and high-quality images can be obtained [26]. As a supplement
to the Z-contrast images described above, simultaneous phase-contrast images can
