16. IMAGE FEATURE EXTRACTION
An image feature is a distinguishing primitive characteristic or attribute of an image.
Some features are natural in the sense that such features are defined by the visual appearance of an image, while other, artificial features result from specific manipulations of an image. Natural features include the luminance of a region of pixels and gray scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features.
Image features are of major importance in the isolation of regions of common
property within an image (image segmentation) and subsequent identification or
labeling of such regions (image classification). Image segmentation is discussed in Chapter 17. References 1 to 4 provide information on image classification techniques.
This chapter describes several types of image features that have been proposed
for image segmentation and classification. Before introducing them, however,
methods of evaluating their performance are discussed.
16.1. IMAGE FEATURE EVALUATION
There are two quantitative approaches to the evaluation of image features: prototype
performance and figure of merit. In the prototype performance approach for image
classification, a prototype image with regions (segments) that have been indepen-
dently categorized is classified by a classification procedure using various image
features to be evaluated. The classification error is then measured for each feature
set. The best set of features is, of course, that which results in the least classification
error. The prototype performance approach for image segmentation is similar in
nature. A prototype image with independently identified regions is segmented by a
segmentation procedure using a test set of features. Then, the detected segments are
compared to the known segments, and the segmentation error is evaluated. The
problems associated with the prototype performance methods of feature evaluation
are the integrity of the prototype data and the fact that the performance indication is
dependent not only on the quality of the features but also on the classification or seg-
mentation ability of the classifier or segmenter.
The figure-of-merit approach to feature evaluation involves the establishment of
some functional distance measurements between sets of image features such that a
large distance implies a low classification error, and vice versa. Faugeras and Pratt
(5) have utilized the Bhattacharyya distance (3) figure-of-merit for texture feature
evaluation. The method should be extensible for other features as well. The Bhatta-
charyya distance (B-distance for simplicity) is a scalar function of the probability
densities of features of a pair of classes defined as
$$B(S_1, S_2) = -\ln \int \left[ p(\mathbf{x}|S_1)\, p(\mathbf{x}|S_2) \right]^{1/2} d\mathbf{x} \tag{16.1-1}$$
where $\mathbf{x}$ denotes a vector containing individual image feature measurements with conditional density $p(\mathbf{x}|S_i)$. It can be shown (3) that the B-distance is related monotonically to the Chernoff bound for the probability of classification error using a Bayes classifier. The bound on the error probability is
$$P \le \left[ P(S_1)\, P(S_2) \right]^{1/2} \exp\{ -B(S_1, S_2) \} \tag{16.1-2}$$
where $P(S_i)$ represents the a priori class probability. For future reference, the Chernoff error bound is tabulated in Table 16.1-1 as a function of B-distance for equally likely feature classes.
For Gaussian densities, the B-distance becomes
$$B(S_1, S_2) = \frac{1}{8} (\mathbf{u}_1 - \mathbf{u}_2)^T \left[ \frac{\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2}{2} \right]^{-1} (\mathbf{u}_1 - \mathbf{u}_2) + \frac{1}{2} \ln \left\{ \frac{ \left| \dfrac{\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2}{2} \right| }{ |\boldsymbol{\Sigma}_1|^{1/2}\, |\boldsymbol{\Sigma}_2|^{1/2} } \right\} \tag{16.1-3}$$
where $\mathbf{u}_i$ and $\boldsymbol{\Sigma}_i$ represent the feature mean vector and the feature covariance matrix of the classes, respectively. Calculation of the B-distance for other densities is generally difficult. Consequently, the B-distance figure of merit is applicable only for Gaussian-distributed feature data, which fortunately is the common case. In practice, features to be evaluated by Eq. 16.1-3 are measured in regions whose class has been determined independently. Sufficient feature measurements need to be taken so that the feature mean vector and covariance can be estimated accurately.
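As a concrete illustration, the following minimal sketch (assuming NumPy is available; the function names and example values are illustrative, not from the text) evaluates the Gaussian B-distance of Eq. 16.1-3 and the corresponding Chernoff error bound of Eq. 16.1-2.

```python
import numpy as np

def bhattacharyya_distance(u1, u2, sigma1, sigma2):
    """Gaussian Bhattacharyya distance, Eq. 16.1-3."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    sigma1, sigma2 = np.asarray(sigma1, float), np.asarray(sigma2, float)
    sigma_avg = (sigma1 + sigma2) / 2.0
    d = u1 - u2
    # Mean-separation (quadratic) term.
    term1 = 0.125 * d @ np.linalg.solve(sigma_avg, d)
    # Covariance-mismatch (log determinant ratio) term.
    term2 = 0.5 * np.log(np.linalg.det(sigma_avg)
                         / np.sqrt(np.linalg.det(sigma1) * np.linalg.det(sigma2)))
    return term1 + term2

def chernoff_bound(b, p1=0.5, p2=0.5):
    """Error probability bound of Eq. 16.1-2."""
    return np.sqrt(p1 * p2) * np.exp(-b)

# Example: two classes of two Gaussian features each.
b = bhattacharyya_distance([0.0, 0.0], [1.0, 1.0], np.eye(2), 1.5 * np.eye(2))
print(b, chernoff_bound(b))
```

As a check, `chernoff_bound(1.0)` for equally likely classes reproduces the 1.84 × 10^-1 entry of Table 16.1-1.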
BS
1
S
2
,() p x S
1
()p x S
2
()[]
12⁄
xd




ln–=
p x S
i
()
PPS
1
()PS
2
()[]
12⁄
BS
1

S
2
,()–{}exp≤
PS
i
()
BS
1
S
2
,()
1
8

u
1
u
2
–()
T
Σ
ΣΣ
Σ
1
Σ
ΣΣ
Σ
2
+
2




1

u
1
u
2
–()
1
2

1
2

Σ
ΣΣ
Σ
1
Σ
ΣΣ
Σ
2
+
Σ
ΣΣ
Σ
1
12⁄

Σ
ΣΣ
Σ
2
12⁄






ln+=
Σ
ΣΣ
Σ
i
TABLE 16.1-1. Relationship of Bhattacharyya Distance and Chernoff Error Bound

     B     Error Bound
     1     1.84 × 10^-1
     2     6.77 × 10^-2
     4     9.16 × 10^-3
     6     1.24 × 10^-3
     8     1.68 × 10^-4
    10     2.27 × 10^-5
    12     2.07 × 10^-6
16.2. AMPLITUDE FEATURES
The most basic of all image features is some measure of image amplitude in terms of
luminance, tristimulus value, spectral value, or other units. There are many degrees
of freedom in establishing image amplitude features. Image variables such as lumi-
nance or tristimulus values may be utilized directly, or alternatively, some linear,
nonlinear, or perhaps noninvertible transformation can be performed to generate
variables in a new amplitude space. Amplitude measurements may be made at specific image points [e.g., the amplitude $F(j, k)$ at pixel coordinate $(j, k)$], or over a neighborhood centered at $(j, k)$. For example, the average or mean image amplitude in a $W \times W$ pixel neighborhood is given by

$$M(j, k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} F(j + m, k + n) \tag{16.2-1}$$
where $W = 2w + 1$. An advantage of a neighborhood, as opposed to a point measurement, is a diminishing of noise effects because of the averaging process. A disadvantage is that object edges falling within the neighborhood can lead to erroneous measurements.
The median of pixels within a $W \times W$ neighborhood can be used as an alternative amplitude feature to the mean measurement of Eq. 16.2-1, or as an additional feature. The median is defined to be that pixel amplitude in the window for which one-half of the pixels are equal or smaller in amplitude, and one-half are equal or greater in amplitude. Another useful image amplitude feature is the neighborhood standard deviation, which can be computed as
$$S(j, k) = \frac{1}{W} \left[ \sum_{m=-w}^{w} \sum_{n=-w}^{w} \left[ F(j + m, k + n) - M(j + m, k + n) \right]^2 \right]^{1/2} \tag{16.2-2}$$
In the literature, the standard deviation image feature is sometimes called the image dispersion. Figure 16.2-1 shows an original image and the mean, median, and standard deviation of the image computed over a small neighborhood.
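A minimal sketch of these neighborhood amplitude features follows (assuming NumPy and SciPy are available; for brevity, the moving-variance form below subtracts the window mean $M(j, k)$ rather than $M(j + m, k + n)$ as in Eq. 16.2-2, a common approximation):

```python
import numpy as np
from scipy import ndimage

def neighborhood_features(F, W=7):
    """W x W moving mean (Eq. 16.2-1), median, and standard deviation."""
    F = F.astype(float)
    mean = ndimage.uniform_filter(F, size=W)        # M(j, k)
    median = ndimage.median_filter(F, size=W)
    mean_sq = ndimage.uniform_filter(F ** 2, size=W)
    # Window variance = E{F^2} - E{F}^2; clip small negative rounding error.
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    return mean, median, std

rng = np.random.default_rng(0)
mean, median, std = neighborhood_features(rng.random((64, 64)))
print(mean.mean(), median.mean(), std.mean())
```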
The mean and standard deviation of Eqs. 16.2-1 and 16.2-2 can be computed
indirectly in terms of the histogram of image pixels within a neighborhood. This
leads to a class of image amplitude histogram features. Referring to Section 5.7, the
first-order probability distribution of the amplitude of a quantized image may be
defined as
$$P(b) = P_R\left[ F(j, k) = r_b \right] \tag{16.2-3}$$
where $r_b$ denotes the quantized amplitude level for $0 \le b \le L - 1$. The first-order histogram estimate of $P(b)$ is simply
FIGURE 16.2-1. Image amplitude features of the washington_ir image: (a) Original; (b) 7 × 7 pyramid mean; (c) 7 × 7 standard deviation; (d) 7 × 7 plus median.
$$P(b) \approx \frac{N(b)}{M} \tag{16.2-4}$$
where M represents the total number of pixels in a neighborhood window centered about $(j, k)$, and $N(b)$ is the number of pixels of amplitude $r_b$ in the same window.
The shape of an image histogram provides many clues as to the character of the
image. For example, a narrowly distributed histogram indicates a low-contrast
image. A bimodal histogram often suggests that the image contains an object with a
narrow amplitude range against a background of differing amplitude. The following
measures have been formulated as quantitative shape descriptions of a first-order
histogram (6).
Mean:

$$S_M \equiv \bar{b} = \sum_{b=0}^{L-1} b\, P(b) \tag{16.2-5}$$

Standard deviation:

$$S_D \equiv \sigma_b = \left[ \sum_{b=0}^{L-1} (b - \bar{b})^2 P(b) \right]^{1/2} \tag{16.2-6}$$

Skewness:

$$S_S = \frac{1}{\sigma_b^3} \sum_{b=0}^{L-1} (b - \bar{b})^3 P(b) \tag{16.2-7}$$

Kurtosis:

$$S_K = \frac{1}{\sigma_b^4} \sum_{b=0}^{L-1} (b - \bar{b})^4 P(b) - 3 \tag{16.2-8}$$

Energy:

$$S_N = \sum_{b=0}^{L-1} \left[ P(b) \right]^2 \tag{16.2-9}$$

Entropy:

$$S_E = -\sum_{b=0}^{L-1} P(b) \log_2 \{ P(b) \} \tag{16.2-10}$$

The factor of 3 inserted in the expression for the Kurtosis measure normalizes $S_K$ to
zero for a zero-mean, Gaussian-shaped histogram. Another useful histogram shape
measure is the histogram mode, which is the pixel amplitude corresponding to the
histogram peak (i.e., the most commonly occurring pixel amplitude in the window).
If the histogram peak is not unique, the pixel at the peak closest to the mean is usu-
ally chosen as the histogram shape descriptor.
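A minimal sketch of the first-order measures of Eqs. 16.2-5 to 16.2-10, plus the mode (assuming NumPy; the function and variable names are illustrative):

```python
import numpy as np

def histogram_features(window, L=256):
    """First-order histogram shape measures of Eqs. 16.2-5 to 16.2-10."""
    b = np.arange(L)
    counts = np.bincount(window.ravel().astype(int), minlength=L)
    P = counts / counts.sum()                       # Eq. 16.2-4 estimate
    mean = (b * P).sum()                            # S_M, Eq. 16.2-5
    sd = np.sqrt(((b - mean) ** 2 * P).sum())       # S_D, Eq. 16.2-6
    skew = ((b - mean) ** 3 * P).sum() / sd ** 3    # S_S, Eq. 16.2-7
    kurt = ((b - mean) ** 4 * P).sum() / sd ** 4 - 3  # S_K, Eq. 16.2-8
    energy = (P ** 2).sum()                         # S_N, Eq. 16.2-9
    nz = P > 0                                      # avoid log2(0)
    entropy = -(P[nz] * np.log2(P[nz])).sum()       # S_E, Eq. 16.2-10
    mode = b[np.argmax(P)]                          # histogram mode
    return dict(mean=mean, sd=sd, skew=skew, kurt=kurt,
                energy=energy, entropy=entropy, mode=mode)

# Example: features of a 7 x 7 window of random 8-bit amplitudes.
rng = np.random.default_rng(0)
print(histogram_features(rng.integers(0, 256, (7, 7))))
```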

Second-order histogram features are based on the definition of the joint probability distribution of pairs of pixels. Consider two pixels $F(j, k)$ and $F(m, n)$ that are located at coordinates $(j, k)$ and $(m, n)$, respectively, and, as shown in Figure 16.2-2, are separated by $r$ radial units at an angle $\theta$ with respect to the horizontal axis. The joint distribution of image amplitude values is then expressed as
$$P(a, b) = P_R\left[ F(j, k) = r_a,\; F(m, n) = r_b \right] \tag{16.2-11}$$
where $r_a$ and $r_b$ represent quantized pixel amplitude values. As a result of the discrete rectilinear representation of an image, the separation parameters $(r, \theta)$ may assume only certain discrete values. The histogram estimate of the second-order distribution is
$$P(a, b) \approx \frac{N(a, b)}{M} \tag{16.2-12}$$
where M is the total number of pixels in the measurement window and $N(a, b)$ denotes the number of occurrences for which $F(j, k) = r_a$ and $F(m, n) = r_b$. If the pixel pairs within an image are highly correlated, the entries in $P(a, b)$ will be clustered along the diagonal of the array. Various measures, listed below, have been proposed (6,7) as measures that specify the energy spread about the diagonal of $P(a, b)$.
Autocorrelation:

$$S_A = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a\, b\, P(a, b) \tag{16.2-13}$$
FIGURE 16.2-2. Relationship of pixel pairs.
Covariance:

$$S_C = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - \bar{a})(b - \bar{b})\, P(a, b) \tag{16.2-14a}$$

where

$$\bar{a} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a\, P(a, b) \tag{16.2-14b}$$

$$\bar{b} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} b\, P(a, b) \tag{16.2-14c}$$

Inertia:

$$S_I = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - b)^2\, P(a, b) \tag{16.2-15}$$

Absolute value:

$$S_V = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} |a - b|\, P(a, b) \tag{16.2-16}$$

Inverse difference:

$$S_F = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} \frac{P(a, b)}{1 + (a - b)^2} \tag{16.2-17}$$

Energy:

$$S_G = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} \left[ P(a, b) \right]^2 \tag{16.2-18}$$

Entropy:

$$S_T = -\sum_{a=0}^{L-1} \sum_{b=0}^{L-1} P(a, b) \log_2 \{ P(a, b) \} \tag{16.2-19}$$
The utilization of second-order histogram measures for texture analysis is considered in Section 16.6.
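A sketch of estimating $P(a, b)$ and a few of the spread measures (assuming NumPy; the one-pixel horizontal separation and the function names are illustrative choices):

```python
import numpy as np

def cooccurrence(F, dr=0, dc=1, L=256):
    """Estimate the second-order histogram P(a, b) of Eq. 16.2-12
    for a pixel separation of (dr, dc)."""
    a = F[:F.shape[0] - dr, :F.shape[1] - dc].ravel().astype(int)
    b = F[dr:, dc:].ravel().astype(int)
    N = np.zeros((L, L))
    np.add.at(N, (a, b), 1)            # count pair occurrences N(a, b)
    return N / N.sum()

def spread_measures(P):
    L = P.shape[0]
    a, b = np.meshgrid(np.arange(L), np.arange(L), indexing='ij')
    nz = P > 0
    return {
        'autocorrelation': (a * b * P).sum(),           # S_A, Eq. 16.2-13
        'inertia': ((a - b) ** 2 * P).sum(),            # S_I, Eq. 16.2-15
        'inverse_difference':
            (P / (1 + (a - b) ** 2)).sum(),             # S_F, Eq. 16.2-17
        'energy': (P ** 2).sum(),                       # S_G, Eq. 16.2-18
        'entropy': -(P[nz] * np.log2(P[nz])).sum(),     # S_T, Eq. 16.2-19
    }

rng = np.random.default_rng(0)
F = rng.integers(0, 256, (64, 64))
print(spread_measures(cooccurrence(F)))
```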
16.3. TRANSFORM COEFFICIENT FEATURES
The coefficients of a two-dimensional transform of a luminance image specify the
amplitude of the luminance patterns (two-dimensional basis functions) of a trans-
form such that the weighted sum of the luminance patterns is identical to the image.
By this characterization of a transform, the coefficients may be considered to indi-
cate the degree of correspondence of a particular luminance pattern with an image
field. If a basis pattern is of the same spatial form as a feature to be detected within

the image, image detection can be performed simply by monitoring the value of the
transform coefficient. The problem, in practice, is that objects to be detected within
an image are often of complex shape and luminance distribution, and hence do not
correspond closely to the more primitive luminance patterns of most image trans-
forms.
Lendaris and Stanley (8) have investigated the application of the continuous two-
dimensional Fourier transform of an image, obtained by a coherent optical proces-
sor, as a means of image feature extraction. The optical system produces an electric
field radiation pattern proportional to
$$F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x, y) \exp\{ -i(\omega_x x + \omega_y y) \}\, dx\, dy \tag{16.3-1}$$
where $(\omega_x, \omega_y)$ are the image spatial frequencies. An optical sensor produces an output
$$M(\omega_x, \omega_y) = \left| F(\omega_x, \omega_y) \right|^2 \tag{16.3-2}$$
proportional to the intensity of the radiation pattern. It should be observed that $F(\omega_x, \omega_y)$ and $F(x, y)$ are unique transform pairs, but $M(\omega_x, \omega_y)$ is not uniquely related to $F(x, y)$. For example, $M(\omega_x, \omega_y)$ does not change if the origin of $F(x, y)$ is shifted. In some applications, the translation invariance of $M(\omega_x, \omega_y)$ may be a benefit. Angular integration of $M(\omega_x, \omega_y)$ over the spatial frequency plane produces a spatial frequency feature that is invariant to translation and rotation. Representing $M(\omega_x, \omega_y)$ in polar form, this feature is defined as
$$N(\rho) = \int_{0}^{2\pi} M(\rho, \theta)\, d\theta \tag{16.3-3}$$
where $\theta = \arctan\{ \omega_y / \omega_x \}$ and $\rho^2 = \omega_x^2 + \omega_y^2$. Invariance to changes in scale is an attribute of the feature
$$P(\theta) = \int_{0}^{\infty} M(\rho, \theta)\, d\rho \tag{16.3-4}$$
The Fourier domain intensity pattern $M(\omega_x, \omega_y)$ is normally examined in specific regions to isolate image features. As an example, Figure 16.3-1 defines regions for the following Fourier features:
Horizontal slit:

$$S_1(m) = \int_{\omega_y(m)}^{\omega_y(m+1)} \int_{-\infty}^{\infty} M(\omega_x, \omega_y)\, d\omega_x\, d\omega_y \tag{16.3-5}$$

Vertical slit:

$$S_2(m) = \int_{-\infty}^{\infty} \int_{\omega_x(m)}^{\omega_x(m+1)} M(\omega_x, \omega_y)\, d\omega_x\, d\omega_y \tag{16.3-6}$$

Ring:

$$S_3(m) = \int_{0}^{2\pi} \int_{\rho(m)}^{\rho(m+1)} M(\rho, \theta)\, d\rho\, d\theta \tag{16.3-7}$$

Sector:

$$S_4(m) = \int_{\theta(m)}^{\theta(m+1)} \int_{0}^{\infty} M(\rho, \theta)\, d\rho\, d\theta \tag{16.3-8}$$
FIGURE 16.3-1. Fourier transform feature masks.
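For the discrete case, a rough sketch of ring and sector energy features follows (assuming NumPy; the bin counts and edges are arbitrary illustrative choices, not the masks of Figure 16.3-1):

```python
import numpy as np

def fourier_ring_sector_features(F, n_rings=4, n_sectors=4):
    """Ring and sector energy features (discrete analogs of
    Eqs. 16.3-7 and 16.3-8) from the centered power spectrum."""
    M = np.abs(np.fft.fftshift(np.fft.fft2(F))) ** 2   # Eq. 16.3-2 analog
    rows, cols = M.shape
    y = np.arange(rows) - rows // 2
    x = np.arange(cols) - cols // 2
    xx, yy = np.meshgrid(x, y)
    rho = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx) % np.pi      # fold the symmetric half-plane
    ring_edges = np.linspace(0, rho.max() + 1e-9, n_rings + 1)
    sector_edges = np.linspace(0, np.pi, n_sectors + 1)
    rings = [M[(rho >= ring_edges[i]) & (rho < ring_edges[i + 1])].sum()
             for i in range(n_rings)]
    sectors = [M[(theta >= sector_edges[i]) & (theta < sector_edges[i + 1])].sum()
               for i in range(n_sectors)]
    return np.array(rings), np.array(sectors)

rng = np.random.default_rng(0)
rings, sectors = fourier_ring_sector_features(rng.random((64, 64)))
print(rings, sectors)
```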
For a discrete image array $F(j, k)$, the discrete Fourier transform
$$F(u, v) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k) \exp\left\{ \frac{-2\pi i}{N} (uj + vk) \right\} \tag{16.3-9}$$
FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays: (a) Rectangle; (b) Rectangle transform; (c) Ellipse; (d) Ellipse transform; (e) Triangle; (f) Triangle transform.
for $u, v = 0, \ldots, N - 1$ can be examined directly for feature extraction purposes. Horizontal slit, vertical slit, ring, and sector features can be defined analogous to Eqs. 16.3-5 to 16.3-8. This concept can be extended to other unitary transforms, such as the Hadamard and Haar transforms. Figure 16.3-2 presents discrete Fourier transform log magnitude displays of several geometric shapes.
16.4. TEXTURE DEFINITION
Many portions of images of natural scenes are devoid of sharp edges over large
areas. In these areas, the scene can often be characterized as exhibiting a consistent
structure analogous to the texture of cloth. Image texture measurements can be used
to segment an image and classify its segments.
Several authors have attempted qualitatively to define texture. Pickett (9) states that “texture is used to describe two dimensional arrays of variations... The elements and rules of spacing or arrangement may be arbitrarily manipulated, provided a characteristic repetitiveness remains.” Hawkins (10) has provided a more detailed
description of texture: “The notion of texture appears to depend upon three ingredi-
ents: (1) some local 'order' is repeated over a region which is large in comparison to
the order's size, (2) the order consists in the nonrandom arrangement of elementary
parts and (3) the parts are roughly uniform entities having approximately the same
dimensions everywhere within the textured region.” Although these descriptions of
texture seem perceptually reasonable, they do not immediately lead to simple quan-
titative textural measures in the sense that the description of an edge discontinuity
leads to a quantitative description of an edge in terms of its location, slope angle,

and height.
Texture is often qualitatively described by its coarseness in the sense that a patch
of wool cloth is coarser than a patch of silk cloth under the same viewing conditions.
The coarseness index is related to the spatial repetition period of the local structure.
A large period implies a coarse texture; a small period implies a fine texture. This
perceptual coarseness index is clearly not sufficient as a quantitative texture mea-
sure, but can at least be used as a guide for the slope of texture measures; that is,
small numerical texture measures should imply fine texture, and large numerical
measures should indicate coarse texture. It should be recognized that texture is a
neighborhood property of an image point. Therefore, texture measures are inher-
ently dependent on the size of the observation neighborhood. Because texture is a
spatial property, measurements should be restricted to regions of relative uniformity.
Hence it is necessary to establish the boundary of a uniform textural region by some
form of image segmentation before attempting texture measurements.
Texture may be classified as being artificial or natural. Artificial textures consist of
arrangements of symbols, such as line segments, dots, and stars placed against a
neutral background. Several examples of artificial texture are presented in Figure
16.4-1 (9). As the name implies, natural textures are images of natural scenes con-
taining semirepetitive arrangements of pixels. Examples include photographs
of brick walls, terrazzo tile, sand, and grass. Brodatz (11) has published an album of
photographs of naturally occurring textures. Figure 16.4-2 shows several natural
texture examples obtained by digitizing photographs from the Brodatz album.
FIGURE 16.4-1. Artificial texture.
16.5. VISUAL TEXTURE DISCRIMINATION
A discrete stochastic field is an array of numbers that are randomly distributed in

amplitude and governed by some joint probability density (12). When converted to
light intensities, such fields can be made to approximate natural textures surpris-
ingly well by control of the generating probability density. This technique is useful
for generating realistic appearing artificial scenes for applications such as airplane
flight simulators. Stochastic texture fields are also an extremely useful tool for
investigating human perception of texture as a guide to the development of texture
feature extraction methods.
In the early 1960s, Julesz (13) attempted to determine the parameters of stochas-
tic texture fields of perceptual importance. This study was extended later by Julesz
et al. (14–16). Further extensions of Julesz’s work have been made by Pollack (17),
FIGURE 16.4-2. Brodatz texture fields: (a) Sand; (b) Grass; (c) Wool; (d) Raffia.
Purks and Richards (18), and Pratt et al. (19). These studies have provided valuable
insight into the mechanism of human visual perception and have led to some useful
quantitative texture measurement methods.
Figure 16.5-1 is a model for stochastic texture generation. In this model, an array $W(j, k)$ of independent, identically distributed random variables passes through a linear or nonlinear spatial operator $O\{\cdot\}$ to produce a stochastic texture array $F(j, k)$. By controlling the form of the generating probability density $p(W)$ and the spatial operator, it is possible to create texture fields with specified statistical properties. Consider a continuous amplitude pixel $x_0$ at some coordinate $(j, k)$ in $F(j, k)$. Let the set $\{z_1, z_2, \ldots, z_J\}$ denote neighboring pixels, but not necessarily nearest geometric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fashion. The conditional probability density of $x_0$ conditioned on the state of its neighbors is given by
$$p(x_0 | z_1, \ldots, z_J) = \frac{p(x_0, z_1, \ldots, z_J)}{p(z_1, \ldots, z_J)} \tag{16.5-1}$$
The first-order density $p(x_0)$ employs no conditioning, the second-order density $p(x_0 | z_1)$ implies that $J = 1$, the third-order density implies that $J = 2$, and so on.
16.5.1. Julesz Texture Fields
In his pioneering texture discrimination experiments, Julesz utilized Markov process
state methods to create stochastic texture arrays independently along rows of the
array. The family of Julesz stochastic arrays is defined below.
1. Notation. Let $x_n = F(j, k - n)$ denote a row neighbor of pixel $x_0$ and let $P(m)$, for $m = 1, 2, \ldots, M$, denote a desired probability generating function.
2. First-order process. Set $x_0 = m$ for a desired probability function $P(m)$. The resulting pixel probability is
$$P(x_0) = P(x_0 = m) = P(m) \tag{16.5-2}$$
FIGURE 16.5-1. Stochastic texture field generation model.
3. Second-order process. Set $F(j, 1) = m$ for $P(m) = 1/M$, and set $x_0 = (x_1 + m)\,\mathrm{MOD}\{M\}$, where the modulus function $p\,\mathrm{MOD}\{q\} \equiv p - [q \times (p \div q)]$ for integers p and q. This gives a first-order probability

$$P(x_0) = \frac{1}{M} \tag{16.5-3a}$$

and a transition probability

$$p(x_0 | x_1) = P[x_0 = (x_1 + m)\,\mathrm{MOD}\{M\}] = P(m) \tag{16.5-3b}$$

4. Third-order process. Set $F(j, 1) = m$ for $P(m) = 1/M$, and set $F(j, 2) = n$ for $P(n) = 1/M$. Choose $x_0$ to satisfy $2x_0 = (x_1 + x_2 + m)\,\mathrm{MOD}\{M\}$. The governing probabilities then become

$$P(x_0) = \frac{1}{M} \tag{16.5-4a}$$

$$p(x_0 | x_1) = \frac{1}{M} \tag{16.5-4b}$$

$$p(x_0 | x_1, x_2) = P[2x_0 = (x_1 + x_2 + m)\,\mathrm{MOD}\{M\}] = P(m) \tag{16.5-4c}$$
This process has the interesting property that pixel pairs along a row are
independent, and consequently, the process is spatially uncorrelated.
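A minimal sketch of row-wise generation of the first- and second-order processes (assuming NumPy; a uniform P(m) is used for illustration):

```python
import numpy as np

def julesz_rows(rows, cols, M=2, order=1, seed=0):
    """Generate a Julesz stochastic array row by row.
    order=1: independent pixels; order=2: x0 = (x1 + m) MOD M,
    where x1 is the left row neighbor and m is drawn from P(m)."""
    rng = np.random.default_rng(seed)
    F = np.zeros((rows, cols), dtype=int)
    F[:, 0] = rng.integers(0, M, rows)          # seed the first column
    for k in range(1, cols):
        m = rng.integers(0, M, rows)            # draw m from uniform P(m)
        if order == 1:
            F[:, k] = m
        else:
            F[:, k] = (F[:, k - 1] + m) % M     # Eq. 16.5-3 recursion
    return F

# Two 64 x 64 binary fields for side-by-side comparison.
A = julesz_rows(64, 64, order=1)
B = julesz_rows(64, 64, order=2, seed=1)
print(A.mean(), B.mean())
```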
Figure 16.5-2 contains several examples of Julesz texture field discrimination
tests performed by Pratt et al. (19). In these tests, the textures were generated
according to the presentation format of Figure 16.5-3. In these and subsequent
visual texture discrimination tests, the perceptual differences are often small. Proper
discrimination testing should be performed using high-quality photographic trans-
parencies, prints, or electronic displays. The following moments were used as sim-
ple indicators of differences between generating distributions and densities of the
stochastic fields.
$$\eta = E\{x_0\} \tag{16.5-5a}$$

$$\sigma^2 = E\{ [x_0 - \eta]^2 \} \tag{16.5-5b}$$

$$\alpha = \frac{E\{ [x_0 - \eta][x_1 - \eta] \}}{\sigma^2} \tag{16.5-5c}$$

$$\theta = \frac{E\{ [x_0 - \eta][x_1 - \eta][x_2 - \eta] \}}{\sigma^3} \tag{16.5-5d}$$
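These moments can be estimated directly from a field; a sketch (assuming NumPy, with $x_1$ and $x_2$ taken as the first and second row neighbors of $x_0$):

```python
import numpy as np

def field_moments(F):
    """Sample estimates of Eqs. 16.5-5a to 16.5-5d."""
    F = F.astype(float)
    eta = F.mean()                                    # Eq. 16.5-5a
    var = ((F - eta) ** 2).mean()                     # Eq. 16.5-5b
    x0, x1, x2 = F[:, 2:], F[:, 1:-1], F[:, :-2]      # shifted row neighbors
    alpha = ((x0 - eta) * (x1 - eta)).mean() / var    # Eq. 16.5-5c
    theta = (((x0 - eta) * (x1 - eta) * (x2 - eta)).mean()
             / var ** 1.5)                            # Eq. 16.5-5d
    return eta, np.sqrt(var), alpha, theta

rng = np.random.default_rng(0)
print(field_moments(rng.random((64, 64))))
```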
The examples of Figure 16.5-2a and b indicate that texture field pairs differing in their first- and second-order distributions can be discriminated. The example of Figure 16.5-2c supports the conjecture, attributed to Julesz, that differences in third-order, and presumably higher-order, distribution texture fields cannot be perceived provided that their first- and second-order distributions are pairwise identical.
FIGURE 16.5-2. Field comparison of Julesz stochastic fields; $\eta_A = \eta_B = 0.500$. (a) Different first order: $\sigma_A = 0.289$, $\sigma_B = 0.204$. (b) Different second order: $\sigma_A = 0.289$, $\sigma_B = 0.289$; $\alpha_A = 0.250$, $\alpha_B = -0.250$. (c) Different third order: $\sigma_A = 0.289$, $\sigma_B = 0.289$; $\alpha_A = 0.000$, $\alpha_B = 0.000$; $\theta_A = 0.058$, $\theta_B = -0.058$.
16.5.2. Pratt, Faugeras, and Gagalowicz Texture Fields
Pratt et al. (19) have extended the work of Julesz et al. (13–16) in an attempt to study
the discriminability of spatially correlated stochastic texture fields. A class of Gaus-
sian fields was generated according to the conditional probability density
$$p(x_0 | z_1, \ldots, z_J) = \frac{\left[ (2\pi)^{J+1} \left| \mathbf{K}_{J+1} \right| \right]^{-1/2} \exp\left\{ -\frac{1}{2} (\mathbf{v}_{J+1} - \boldsymbol{\eta}_{J+1})^T \mathbf{K}_{J+1}^{-1} (\mathbf{v}_{J+1} - \boldsymbol{\eta}_{J+1}) \right\}}{\left[ (2\pi)^{J} \left| \mathbf{K}_{J} \right| \right]^{-1/2} \exp\left\{ -\frac{1}{2} (\mathbf{v}_{J} - \boldsymbol{\eta}_{J})^T \mathbf{K}_{J}^{-1} (\mathbf{v}_{J} - \boldsymbol{\eta}_{J}) \right\}} \tag{16.5-6a}$$
where

$$\mathbf{v}_J = \left[ z_1 \;\, \cdots \;\, z_J \right]^T \tag{16.5-6b}$$

$$\mathbf{v}_{J+1} = \left[ x_0 \;\; \mathbf{v}_J^T \right]^T \tag{16.5-6c}$$
The covariance matrix of Eq. 16.5-6a is of the parametric form
FIGURE 16.5-3. Presentation format for visual texture discrimination experiments.
$$\mathbf{K}_{J+1} = \begin{bmatrix} \sigma^2 & \sigma^2 \alpha & \sigma^2 \beta & \sigma^2 \gamma & \cdots \\ \sigma^2 \alpha & & & & \\ \sigma^2 \beta & & \mathbf{K}_J & & \\ \sigma^2 \gamma & & & & \\ \vdots & & & & \end{bmatrix} \tag{16.5-7}$$
where $\alpha, \beta, \gamma, \ldots$ denote correlation lag terms. Figure 16.5-4 presents an example of the row correlation functions used in the texture field comparison tests described below.
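One simple way to realize such a nearest-neighbor row correlation is a first-order autoregression along rows; a sketch (assuming NumPy; this is an illustrative generator, not necessarily the one used in reference 19):

```python
import numpy as np

def gaussian_markov_field(rows, cols, eta=0.5, sigma=0.167, alpha=0.75,
                          seed=0):
    """Row-correlated Gaussian field: each pixel is a first-order
    autoregression on its left neighbor with lag correlation alpha,
    mean eta, and standard deviation sigma."""
    rng = np.random.default_rng(seed)
    F = np.zeros((rows, cols))
    F[:, 0] = rng.normal(0.0, sigma, rows)
    noise_sd = sigma * np.sqrt(1.0 - alpha ** 2)   # keeps variance at sigma^2
    for k in range(1, cols):
        F[:, k] = alpha * F[:, k - 1] + rng.normal(0.0, noise_sd, rows)
    return F + eta

# Fields analogous to the Figure 16.5-5a comparison.
A = gaussian_markov_field(64, 64, alpha=0.750)
B = gaussian_markov_field(64, 64, alpha=0.900, seed=1)
print(A.std(), B.std())
```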
Figures 16.5-5 and 16.5-6 contain examples of Gaussian texture field comparison
tests. In Figure 16.5-5, the first-order densities are set equal, but the second-order
nearest neighbor conditional densities differ according to the covariance function plot
of Figure 16.5-4a. Visual discrimination can be made in Figure 16.5-5, in which the
correlation parameter differs by 20%. Visual discrimination has been found to be
marginal when the correlation factor differs by less than 10% (19). The first- and
second-order densities of each field are fixed in Figure 16.5-6, and the third-order
FIGURE 16.5-4. Row correlation factors for stochastic field generation. Dashed line, field A; solid line, field B. (a) Constrained second-order density; (b) constrained third-order density.
conditional densities differ according to the plan of Figure 16.5-4b. Visual discrimination is possible. The test of Figure 16.5-6 seemingly provides a counterexample to the Julesz conjecture. In this test, $p_A(x_0) = p_B(x_0)$ and $p_A(x_0, x_1) = p_B(x_0, x_1)$, but $p_A(x_0, x_1, x_2) \ne p_B(x_0, x_1, x_2)$. However, the general second-order density pairs $p_A(x_0, z_j)$ and $p_B(x_0, z_j)$ are not necessarily equal for an arbitrary neighbor $z_j$, and therefore the conditions necessary to disprove Julesz's conjecture are violated.
To test the Julesz conjecture for realistically appearing texture fields, it is necessary to generate a pair of fields with identical first-order densities, identical Markovian type second-order densities, and differing third-order densities for every
FIGURE 16.5-5. Field comparison of Gaussian stochastic fields with different second-order nearest neighbor densities; $\eta_A = \eta_B = 0.500$, $\sigma_A = \sigma_B = 0.167$. (a) $\alpha_A = 0.750$, $\alpha_B = 0.900$. (b) $\alpha_A = 0.500$, $\alpha_B = 0.600$.

FIGURE 16.5-6. Field comparison of Gaussian stochastic fields with different third-order nearest neighbor densities; $\eta_A = \eta_B = 0.500$, $\sigma_A = \sigma_B = 0.167$, $\alpha_A = \alpha_B = 0.750$. (a) $\beta_A = 0.563$, $\beta_B = 0.600$. (b) $\beta_A = 0.563$, $\beta_B = 0.400$.
pair of similar observation points in both fields. An example of such a pair of fields
is presented in Figure 16.5-7 for a non-Gaussian generating process (19). In this
example, the texture appears identical in both fields, thus supporting the Julesz
conjecture.
Gagalowicz has succeeded in generating a pair of texture fields that disprove the
Julesz conjecture (20). However, the counterexample, shown in Figure 16.5-8, is not
very realistic in appearance. Thus, it seems likely that if a statistically based texture
measure can be developed, it need not utilize statistics greater than second-order.
FIGURE 16.5-7. Field comparison of correlated Julesz stochastic fields with identical first- and second-order densities, but different third-order densities: $\eta_A = 0.500$, $\eta_B = 0.500$; $\sigma_A = 0.167$, $\sigma_B = 0.167$; $\alpha_A = 0.850$, $\alpha_B = 0.850$; $\theta_A = 0.040$, $\theta_B = -0.027$.

FIGURE 16.5-8. Gagalowicz counterexample.
Because a human viewer is sensitive to differences in the mean, variance, and
autocorrelation function of the texture pairs, it is reasonable to investigate the
sufficiency of these parameters in terms of texture representation. Figure 16.5-9 pre-
sents examples of the comparison of texture fields with identical means, variances,
and autocorrelation functions, but different nth-order probability densities. Visual
discrimination is readily accomplished between the fields. This leads to the conclu-
sion that these low-order moment measurements, by themselves, are not always suf-
ficient to distinguish texture fields.
16.6. TEXTURE FEATURES
As noted in Section 16.4, there is no commonly accepted quantitative definition of
visual texture. As a consequence, researchers seeking a quantitative texture measure
have been forced to search intuitively for texture features, and then attempt to evalu-
ate their performance by techniques such as those presented in Section 16.1. The following subsections describe several texture features of historical and practical importance. References 20 to 22 provide surveys on image texture feature extraction.
Randen and Husoy (23) have performed a comprehensive study of many texture fea-
ture extraction methods.

FIGURE 16.5-9. Field comparison of correlated stochastic fields with identical means, variances, and autocorrelation functions, but different nth-order probability densities generated by different processing of the same input field. The input array consists of uniform random variables raised to the 256th power. Computed moments: $\eta_A = 0.413$, $\eta_B = 0.412$; $\sigma_A = 0.078$, $\sigma_B = 0.078$; $\alpha_A = 0.915$, $\alpha_B = 0.917$; $\theta_A = 1.512$, $\theta_B = 0.006$.
16.6.1. Fourier Spectra Methods
Several studies (8,24,25) have considered textural analysis based on the Fourier
spectrum of an image region, as discussed in Section 16.2. Because the degree of
texture coarseness is proportional to its spatial period, a region of coarse texture

should have its Fourier spectral energy concentrated at low spatial frequencies. Con-
versely, regions of fine texture should exhibit a concentration of spectral energy at
high spatial frequencies. Although this correspondence exists to some degree, diffi-
culties often arise because of spatial changes in the period and phase of texture pat-
tern repetitions. Experiments (10) have shown that there is considerable spectral
overlap of regions of distinctly different natural texture, such as urban, rural, and
woodland regions extracted from aerial photographs. On the other hand, Fourier
spectral analysis has proved successful (26,27) in the detection and classification of
coal miner’s black lung disease, which appears as diffuse textural deviations from
the norm.
16.6.2. Edge Detection Methods
Rosenfeld and Troy (28) have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array $E(j, k)$ is produced by some edge detector such that $E(j, k) = 1$ for a detected edge and $E(j, k) = 0$ otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. This texture measure is defined as

$$T(j, k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} E(j + m, k + n) \tag{16.6-1}$$

where $W = 2w + 1$ is the dimension of the observation window. A variation of this approach is to substitute the edge gradient for the edge map array in Eq. 16.6-1. A generalization of this concept is presented in Section 16.6.4.
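A sketch of this edge-density measure (assuming NumPy and SciPy; the Sobel detector and the threshold rule are illustrative stand-ins for whatever detector is actually used):

```python
import numpy as np
from scipy import ndimage

def edge_density(F, w=3, threshold=0.1):
    """Texture measure of Eq. 16.6-1: fraction of detected edge
    points in a W x W window, W = 2w + 1."""
    gx = ndimage.sobel(F.astype(float), axis=1)
    gy = ndimage.sobel(F.astype(float), axis=0)
    gradient = np.hypot(gx, gy)
    E = (gradient > threshold * gradient.max()).astype(float)  # edge map
    W = 2 * w + 1
    # Box average of the edge map = (1/W^2) * double sum of E.
    return ndimage.uniform_filter(E, size=W)

rng = np.random.default_rng(0)
T = edge_density(rng.random((64, 64)))
print(T.mean())
```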
16.6.3. Autocorrelation Methods
The autocorrelation function has been suggested as the basis of a texture measure
(28). Although it has been demonstrated in the preceding section that it is possible to
generate visually different stochastic fields with the same autocorrelation function,
this does not necessarily rule out the utility of an autocorrelation feature set for nat-
ural images. The autocorrelation function is defined as
$$A_F(m, n) = \sum_{j} \sum_{k} F(j, k)\, F(j - m, k - n) \tag{16.6-2}$$
for computation over a $W \times W$ window with $-T \le m, n \le T$ pixel lags. Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift $(m, n)$ than will a region of fine texture. Thus, texture coarseness should be proportional to the spread of the autocorrelation function. Faugeras and Pratt (5) have proposed the following set of autocorrelation spread measures:
$$S(u, v) = \sum_{m=0}^{T} \sum_{n=-T}^{T} (m - \eta_m)^u (n - \eta_n)^v A_F(m, n) \tag{16.6-3a}$$

where

$$\eta_m = \sum_{m=0}^{T} \sum_{n=-T}^{T} m\, A_F(m, n) \tag{16.6-3b}$$

$$\eta_n = \sum_{m=0}^{T} \sum_{n=-T}^{T} n\, A_F(m, n) \tag{16.6-3c}$$
In Eq. 16.6-3, computation is only over one-half of the autocorrelation function
because of its symmetry. Features of potential interest include the profile spreads
S(2, 0) and S(0, 2), the cross-relation S(1, 1), and the second-degree spread S(2, 2).
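A sketch of these spread measures (assuming NumPy; the autocorrelation is computed circularly via np.roll and normalized to unit sum so that $\eta_m$ and $\eta_n$ behave as means):

```python
import numpy as np

def autocorrelation_spread(F, T=4):
    """Eq. 16.6-2 autocorrelation over lags m = 0..T, n = -T..T
    (one-half of the lag plane, by symmetry), then the spread
    features S(u, v) of Eq. 16.6-3."""
    F = F.astype(float)
    A = np.zeros((T + 1, 2 * T + 1))
    for m in range(0, T + 1):
        for n in range(-T, T + 1):
            shifted = np.roll(np.roll(F, m, axis=0), n, axis=1)
            A[m, n + T] = (F * shifted).sum()     # circular Eq. 16.6-2
    A /= A.sum()                                   # normalize weights
    mm, nn = np.meshgrid(np.arange(T + 1), np.arange(-T, T + 1),
                         indexing='ij')
    eta_m = (mm * A).sum()                         # Eq. 16.6-3b
    eta_n = (nn * A).sum()                         # Eq. 16.6-3c
    def S(u, v):                                   # Eq. 16.6-3a
        return ((mm - eta_m) ** u * (nn - eta_n) ** v * A).sum()
    return {'S(2,0)': S(2, 0), 'S(0,2)': S(0, 2),
            'S(1,1)': S(1, 1), 'S(2,2)': S(2, 2)}

rng = np.random.default_rng(0)
print(autocorrelation_spread(rng.random((64, 64))))
```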
Figure 16.6-1 shows perspective views of the autocorrelation functions of the four Brodatz texture examples (5). Bhattacharyya distance measurements of these texture fields, performed by Faugeras and Pratt (5), are presented in Table 16.6-1. These B-distance measurements indicate that the autocorrelation shape features are marginally adequate for the set of four shape features, but unacceptable for fewer features. Tests by Faugeras and Pratt (5) verify that the B-distances are low for the stochastic field pairs of Figure 16.5-9, which have the same autocorrelation functions but are visually distinct.
FIGURE 16.6-1. Perspective views of autocorrelation functions of Brodatz texture fields: (a) Sand; (b) Grass; (c) Wool; (d) Raffia.
TABLE 16.6-1. Bhattacharyya Distance of Texture Feature Sets for Prototype Texture Fields: Autocorrelation Features

    Field Pair        Set 1 (a)   Set 2 (b)   Set 3 (c)
    Grass - sand        5.05        4.29        2.92
    Grass - raffia      7.07        5.32        3.57
    Grass - wool        2.37        0.21        0.04
    Sand - raffia       1.49        0.58        0.35
    Sand - wool         6.55        4.93        3.14
    Raffia - wool       8.70        5.96        3.78
    Average             5.21        3.55        2.30

    (a) Set 1: S(2,0), S(0,2), S(1,1), S(2,2).
    (b) Set 2: S(1,1), S(2,2).
    (c) Set 3: S(2,2).
16.6.4. Decorrelation Methods
Stochastic texture fields generated by the model of Figure 16.5-1 can be described quite compactly by specification of the spatial operator $O\{\cdot\}$ and the stationary first-order probability density $p(W)$ of the independent, identically distributed generating process $W(j, k)$. This observation has led to a texture feature extraction procedure, developed by Faugeras and Pratt (5), in which an attempt has been made to invert the model and estimate its parameters. Figure 16.6-2 is a block diagram of their decorrelation method of texture feature extraction. In the first step of the method, the spatial autocorrelation function $A_F(m, n)$ is measured over a texture field to be analyzed. The autocorrelation function is then used to develop a whitening filter, with an impulse response $H_W(j, k)$, using techniques described in Section 19.2. The whitening filter is a special type of decorrelation operator. It is used to generate the whitened field

$$\hat{W}(j, k) = F(j, k) \circledast H_W(j, k) \tag{16.6-4}$$

This whitened field, which is spatially uncorrelated, can be utilized as an estimate of the independent generating process $W(j, k)$ by forming its first-order histogram.
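A rough sketch of the whitening step (assuming NumPy; here the whitening filter is built in the frequency domain as the inverse square root of the measured power spectrum, one common construction, rather than the specific Section 19.2 design):

```python
import numpy as np

def whiten_field(F, eps=1e-8):
    """Approximately decorrelate a texture field by flattening its
    power spectrum, then return the whitened field and its
    first-order histogram estimate."""
    F = F.astype(float) - F.mean()
    spectrum = np.fft.fft2(F)
    power = np.abs(spectrum) ** 2
    # Unit-magnitude spectrum ~ spatially uncorrelated output field.
    W_hat = np.real(np.fft.ifft2(spectrum / np.sqrt(power + eps)))
    hist, edges = np.histogram(W_hat, bins=64)
    return W_hat, hist / hist.sum()

rng = np.random.default_rng(0)
W_hat, p = whiten_field(rng.random((64, 64)))
print(p[:8])
```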
FIGURE 16.6-2. Decorrelation method of texture feature extraction.
FIGURE 16.6-3. Whitened Brodatz texture fields: (a) Sand; (b) Grass; (c) Wool; (d) Raffia.