CHAPTER 1
1.1
Let
    r_u(k) = E[u(n) u*(n - k)]   (1)
    r_y(k) = E[y(n) y*(n - k)]   (2)
We are given that
    y(n) = u(n + a) - u(n - a)   (3)
Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get
    r_y(k) = E[(u(n + a) - u(n - a))(u*(n + a - k) - u*(n - a - k))]
           = 2r_u(k) - r_u(2a + k) - r_u(-2a + k)
1.2
We know that the correlation matrix R is Hermitian; that is,
    R^H = R
Given that the inverse matrix R^-1 exists, we may write
    R^-1 R = I
where I is the identity matrix. Taking the Hermitian transpose of both sides:
    R R^-H = I
Hence,
    R^-H = R^-1
That is, the inverse matrix R^-1 is Hermitian.
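This result is easy to confirm numerically. The sketch below (not part of the original solution) uses an arbitrary Hermitian positive-definite test matrix built as A A^H + I:

```python
# Numeric check of Problem 1.2: the inverse of a Hermitian matrix is Hermitian.
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary Hermitian, positive-definite test matrix R = A A^H + I.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + np.eye(4)          # Hermitian and nonsingular

R_inv = np.linalg.inv(R)

# R^H = R and (R^-1)^H = R^-1, as shown in the solution.
assert np.allclose(R, R.conj().T)
assert np.allclose(R_inv, R_inv.conj().T)
```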
1.3
For the case of a two-by-two matrix, we may write
    R_u = R_s + R_ν
        = [r11  r12; r21  r22] + [σ^2  0; 0  σ^2]
        = [r11 + σ^2   r12; r21   r22 + σ^2]
For R_u to be nonsingular, we require
    det(R_u) = (r11 + σ^2)(r22 + σ^2) - r12 r21 > 0
With r12 = r21 for real data, this condition reduces to
    (r11 + σ^2)(r22 + σ^2) - r12^2 > 0
Since this is quadratic in σ^2, we may impose the following condition on σ^2 for nonsingularity of R_u:
    σ^2 > (1/2)(r11 + r22)[(1 - 4Δr/(r11 + r22)^2)^(1/2) - 1]
where Δr = r11 r22 - r12^2.
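As a numerical sanity check (not part of the original solution), the threshold above is the larger root of the quadratic det(R_u) in σ^2; the values of r11, r22, r12 below are arbitrary test numbers:

```python
# Numeric check of Problem 1.3: det(Ru) = (r11 + s2)(r22 + s2) - r12^2 is zero
# exactly at the threshold and positive for any sigma^2 above it.
import numpy as np

r11, r22, r12 = 1.0, 2.0, 0.8   # arbitrary test values
delta_r = r11 * r22 - r12**2

def det_Ru(s2):
    return (r11 + s2) * (r22 + s2) - r12**2

# Threshold from the solution: (1/2)(r11+r22)[(1 - 4*delta_r/(r11+r22)^2)^(1/2) - 1]
thresh = 0.5 * (r11 + r22) * (np.sqrt(1.0 - 4.0 * delta_r / (r11 + r22)**2) - 1.0)

assert np.isclose(det_Ru(thresh), 0.0)      # threshold is a root of the quadratic
assert thresh <= 0.0                        # for delta_r >= 0 the threshold is negative,
assert det_Ru(0.0) > 0.0 and det_Ru(1.0) > 0.0  # so any sigma^2 > 0 keeps Ru nonsingular
```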
1.4
We are given
    R = [1  1; 1  1]
This matrix is nonnegative definite because
    a^T R a = [a1, a2] [1  1; 1  1] [a1; a2]
            = a1^2 + 2 a1 a2 + a2^2
            = (a1 + a2)^2 ≥ 0 for all a1 and a2
with equality when a1 = -a2, so R is nonnegative definite but not positive definite. (Positive definiteness is stronger than nonnegative definiteness.) Moreover, the matrix R is singular because
    det(R) = (1)(1) - (1)(1) = 0
Hence, it is possible for a matrix to be nonnegative definite and yet singular.
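The quadratic form and the determinant can be checked directly; the small script below (an added sanity check, not part of the original solution) also lists the eigenvalues of R:

```python
# Numeric check of Problem 1.4: for R = [[1, 1], [1, 1]] the quadratic form
# a^T R a = (a1 + a2)^2 vanishes at a = [1, -1]^T, and det(R) = 0.
import numpy as np

R = np.array([[1.0, 1.0],
              [1.0, 1.0]])

a = np.array([1.0, -1.0])
assert np.isclose(a @ R @ a, 0.0)           # quadratic form can reach zero
assert np.isclose(np.linalg.det(R), 0.0)    # R is singular

# Eigenvalues are 0 and 2: nonnegative definite, but not positive definite.
assert np.allclose(np.sort(np.linalg.eigvalsh(R)), [0.0, 2.0])
```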
1.5
(a) Write the correlation matrix R_{M+1} in the partitioned form
    R_{M+1} = [r(0)  r^H; r  R_M]   (1)
Let
    R_{M+1}^-1 = [a  b^H; b  C]   (2)
where a, b, and C are to be determined. Multiplying (1) by (2):
    I_{M+1} = [r(0)  r^H; r  R_M] [a  b^H; b  C]
where I_{M+1} is the identity matrix. Therefore,
    r(0)a + r^H b = 1   (3)
    r a + R_M b = 0   (4)
    r b^H + R_M C = I_M   (5)
    r(0)b^H + r^H C = 0^T   (6)
From Eq. (4):
    b = -R_M^-1 r a   (7)
Hence, from (3) and (7):
    a = 1/(r(0) - r^H R_M^-1 r)   (8)
Correspondingly,
    b = -R_M^-1 r/(r(0) - r^H R_M^-1 r)   (9)
From (5):
    C = R_M^-1 - R_M^-1 r b^H
      = R_M^-1 + R_M^-1 r r^H R_M^-1/(r(0) - r^H R_M^-1 r)   (10)
As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6):
    r(0)b^H + r^H C = -r(0) r^H R_M^-1/(r(0) - r^H R_M^-1 r) + r^H R_M^-1
                      + r^H R_M^-1 r r^H R_M^-1/(r(0) - r^H R_M^-1 r)
                    = 0^T
We have thus shown that
    R_{M+1}^-1 = [0  0^T; 0  R_M^-1] + a [1; -R_M^-1 r] [1  -r^H R_M^-1]
where the scalar a is defined by Eq. (8).
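The partitioned-inverse identity of part (a) can be verified numerically; the sketch below (an added check, with an arbitrary Hermitian positive-definite test matrix) compares it against a direct matrix inversion:

```python
# Numeric check of Problem 1.5(a): R_{M+1}^-1 = [[0, 0^T], [0, R_M^-1]] + a u u^H
# with u = [1; -R_M^-1 r] and a = 1/(r(0) - r^H R_M^-1 r).
import numpy as np

rng = np.random.default_rng(1)
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R_full = A @ A.conj().T + np.eye(M + 1)     # Hermitian, positive definite

r0 = R_full[0, 0].real                      # r(0)
r = R_full[1:, 0]                           # border vector r
R_M = R_full[1:, 1:]                        # lower-right block R_M
R_M_inv = np.linalg.inv(R_M)

a = 1.0 / (r0 - r.conj() @ R_M_inv @ r)     # Eq. (8)
u = np.concatenate(([1.0], -R_M_inv @ r))   # [1; -R_M^-1 r]

block = np.zeros((M + 1, M + 1), dtype=complex)
block[1:, 1:] = R_M_inv
R_inv = block + a * np.outer(u, u.conj())

assert np.allclose(R_inv, np.linalg.inv(R_full))
```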
(b) Alternatively, we may partition the correlation matrix R_{M+1} in the backward form
    R_{M+1} = [R_M  r^B*; r^BT  r(0)]   (11)
where r^B* denotes the backward-arranged, complex-conjugated version of the vector r. Let
    R_{M+1}^-1 = [D  e; e^H  f]   (12)
where D, e, and f are to be determined. Multiplying (11) by (12):
    I_{M+1} = [R_M  r^B*; r^BT  r(0)] [D  e; e^H  f]
Therefore,
    R_M D + r^B* e^H = I   (13)
    R_M e + r^B* f = 0   (14)
    r^BT e + r(0) f = 1   (15)
    r^BT D + r(0) e^H = 0^T   (16)
From (14):
    e = -R_M^-1 r^B* f   (17)
Hence, from (15) and (17):
    f = 1/(r(0) - r^BT R_M^-1 r^B*)   (18)
Correspondingly,
    e = -R_M^-1 r^B*/(r(0) - r^BT R_M^-1 r^B*)   (19)
From (13):
    D = R_M^-1 - R_M^-1 r^B* e^H
      = R_M^-1 + R_M^-1 r^B* r^BT R_M^-1/(r(0) - r^BT R_M^-1 r^B*)   (20)
As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus
    r^BT D + r(0)e^H = r^BT R_M^-1 + r^BT R_M^-1 r^B* r^BT R_M^-1/(r(0) - r^BT R_M^-1 r^B*)
                       - r(0) r^BT R_M^-1/(r(0) - r^BT R_M^-1 r^B*)
                     = 0^T
We have thus shown that
    R_{M+1}^-1 = [R_M^-1  0; 0^T  0] + f [-R_M^-1 r^B*; 1] [-r^BT R_M^-1  1]
where the scalar f is defined by Eq. (18).
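The backward partitioning of part (b) admits the same kind of numerical check (an added sketch with arbitrary test data, the border now in the last row and column):

```python
# Numeric check of Problem 1.5(b): R_{M+1}^-1 = [[R_M^-1, 0], [0^T, 0]] + f u u^H
# with u = [-R_M^-1 r_b; 1] and f = 1/(r(0) - r_b^H R_M^-1 r_b).
import numpy as np

rng = np.random.default_rng(2)
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R_full = A @ A.conj().T + np.eye(M + 1)     # Hermitian, positive definite

R_M = R_full[:M, :M]
r_b = R_full[:M, M]                         # plays the role of r^B*
r0 = R_full[M, M].real                      # r(0)
R_M_inv = np.linalg.inv(R_M)

f = 1.0 / (r0 - r_b.conj() @ R_M_inv @ r_b)   # Eq. (18)
u = np.concatenate((-R_M_inv @ r_b, [1.0]))   # [-R_M^-1 r^B*; 1]

block = np.zeros((M + 1, M + 1), dtype=complex)
block[:M, :M] = R_M_inv
R_inv = block + f * np.outer(u, u.conj())

assert np.allclose(R_inv, np.linalg.inv(R_full))
```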
1.6
(a) We express the difference equation describing the first-order AR process u(n) as
    u(n) = v(n) + w1 u(n - 1)
where w1 = -a1. Solving this equation by repeated substitution, we get
    u(n) = v(n) + w1 v(n - 1) + w1^2 u(n - 2)
         = ...
         = v(n) + w1 v(n - 1) + w1^2 v(n - 2) + ... + w1^(n-1) v(1)   (1)
Here we have used the initial condition
    u(0) = 0
or, equivalently,
    u(1) = v(1)
Taking the expected value of both sides of Eq. (1) and using
    E[v(n)] = µ   for all n,
we get the geometric series
    E[u(n)] = µ + w1 µ + w1^2 µ + ... + w1^(n-1) µ
            = µ (1 - w1^n)/(1 - w1),   w1 ≠ 1
            = µ n,                     w1 = 1
This result shows that if µ ≠ 0, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition
    |a1| < 1 or |w1| < 1
then
    E[u(n)] → µ/(1 - w1)   as n → ∞
Under this condition, we say that the AR process is asymptotically stationary to order one.
(b) When the white noise process v(n) has zero mean, the AR process u(n) will likewise have zero mean. Then
    var[v(n)] = σ_v^2
    var[u(n)] = E[u^2(n)]   (2)
Substituting Eq. (1) into (2), and recognizing that for the white noise process
    E[v(n)v(k)] = σ_v^2,   n = k
                = 0,       n ≠ k   (3)
we get the geometric series
    var[u(n)] = σ_v^2 (1 + w1^2 + w1^4 + ... + w1^(2n-2))
              = σ_v^2 (1 - w1^(2n))/(1 - w1^2),   w1^2 ≠ 1
              = σ_v^2 n,                          w1^2 = 1
When |a1| < 1 or |w1| < 1, then
    var[u(n)] ≈ σ_v^2/(1 - w1^2) = σ_v^2/(1 - a1^2)   for large n
(c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n - k)]. Substituting Eq. (1) into this formula, and using Eq. (3), we get
    E[u(n)u(n - k)] = σ_v^2 (w1^k + w1^(k+2) + ... + w1^(k+2n-2))
                    = σ_v^2 w1^k (1 - w1^(2n))/(1 - w1^2),   w1^2 ≠ 1
                    = σ_v^2 n,                               w1^2 = 1
For |a1| < 1 or |w1| < 1, we may therefore express this autocorrelation function as
    r(k) = E[u(n)u(n - k)]
         ≈ σ_v^2 w1^k/(1 - w1^2)   for large n
Case 1: 0 < a1 < 1
In this case, w1 = -a1 is negative, and r(k) varies with k as follows:
    [Figure: r(k) versus k for k = -4, ..., +4; successive samples alternate in sign and decay in magnitude as |k| increases.]
Case 2: -1 < a1 < 0
In this case, w1 = -a1 is positive, and r(k) varies with k as follows:
    [Figure: r(k) versus k for k = -4, ..., +4; all samples are positive and decay as |k| increases.]
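In place of the sketches lost from this copy, the two patterns can be tabulated from the large-n formula r(k) ≈ σ_v^2 w1^|k|/(1 - w1^2); the values a1 = ±0.5 below are arbitrary test choices:

```python
# Numeric check of the two cases in Problem 1.6(c): the sign pattern of r(k)
# follows the sign of w1 = -a1.
import numpy as np

sigma_v2 = 1.0
ks = np.abs(np.arange(-4, 5))          # |k| for k = -4, ..., +4

def r_of_k(w1):
    return sigma_v2 * w1**ks / (1 - w1**2)

# Case 1: 0 < a1 < 1 -> w1 = -a1 < 0 -> r(k) alternates in sign with k.
r_neg = r_of_k(-0.5)
assert np.all(np.sign(r_neg) == (-1.0) ** ks)

# Case 2: -1 < a1 < 0 -> w1 = -a1 > 0 -> r(k) > 0 for every k.
r_pos = r_of_k(0.5)
assert np.all(r_pos > 0)

# In both cases the magnitude decays as |k| grows away from 0.
assert np.all(np.abs(r_neg[ks >= 1]) < np.abs(r_neg[ks == 0]))
```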
1.7
(a) The second-order AR process u(n) is described by the difference equation
    u(n) = u(n - 1) - 0.5u(n - 2) + v(n)
Hence,
    w1 = 1
    w2 = -0.5
and the AR parameters equal
    a1 = -1
    a2 = 0.5
Accordingly, we write the Yule-Walker equations as
    [r(0)  r(1); r(1)  r(0)] [1; -0.5] = [r(1); r(2)]
(b) Writing the Yule-Walker equations in expanded form:
    r(0) - 0.5 r(1) = r(1)
    r(1) - 0.5 r(0) = r(2)
Solving the first relation for r(1):
    r(1) = (2/3) r(0)   (1)
Solving the second relation for r(2):
    r(2) = (1/6) r(0)   (2)
(c) Since the noise v(n) has zero mean, so will the AR process u(n). Hence,
    var[u(n)] = E[u^2(n)]
              = r(0)
We know that
    σ_v^2 = Σ_{k=0}^{2} a_k r(k)
          = r(0) + a1 r(1) + a2 r(2)   (3)
Substituting (1) and (2) into (3), and solving for r(0), we get
    r(0) = σ_v^2/(1 + (2/3)a1 + (1/6)a2) = 1.2
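These numbers are quick to verify; the script below (an added check, not part of the original solution) confirms that r(1) = (2/3)r(0) and r(2) = (1/6)r(0) satisfy both Yule-Walker relations and Eq. (3):

```python
# Numeric check of Problem 1.7 with a1 = -1, a2 = 0.5, r(0) = 1.2.
import numpy as np

a1, a2 = -1.0, 0.5
r0 = 1.2
r1 = (2.0 / 3.0) * r0
r2 = (1.0 / 6.0) * r0

# Yule-Walker equations in expanded form:
assert np.isclose(r0 - 0.5 * r1, r1)
assert np.isclose(r1 - 0.5 * r0, r2)

# Eq. (3): sigma_v^2 = r(0) + a1 r(1) + a2 r(2), then invert for r(0).
sigma_v2 = r0 + a1 * r1 + a2 * r2
assert np.isclose(r0, sigma_v2 / (1 + (2.0 / 3.0) * a1 + (1.0 / 6.0) * a2))
```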
1.8
By definition,
    P_0 = average power of the AR process u(n)
        = E[|u(n)|^2]
        = r(0)   (1)
where r(0) is the autocorrelation function of u(n) for zero lag. We note that the Yule-Walker equations uniquely relate the sets
    {r(1)/r(0), r(2)/r(0), ..., r(M)/r(0)}   and   {a1, a2, ..., aM}
Equivalently, except for the scaling factor r(0), the set {r(1), r(2), ..., r(M)} is in one-to-one correspondence with {a1, a2, ..., aM}.   (2)
Combining Eqs. (1) and (2):
    {r(0), r(1), r(2), ..., r(M)}   ↔   {P_0, a1, a2, ..., aM}   (3)
1.9
(a) The transfer function of the MA model of Fig. 2.3 is
    H(z) = 1 + b1* z^-1 + b2* z^-2 + ... + bK* z^-K
(b) The transfer function of the ARMA model of Fig. 2.4 is
    H(z) = (b0* + b1* z^-1 + b2* z^-2 + ... + bK* z^-K)/(1 + a1* z^-1 + a2* z^-2 + ... + aM* z^-M)
(c) The ARMA model reduces to an AR model when
    b1 = b2 = ... = bK = 0
(only the constant term b0 remains in the numerator). It reduces to an MA model when
    a1 = a2 = ... = aM = 0
1.10
We are given
    x(n) = v(n) + 0.75v(n - 1) + 0.25v(n - 2)
Taking the z-transforms of both sides:
    X(z) = (1 + 0.75z^-1 + 0.25z^-2) V(z)
Hence, the transfer function of the MA model is
    X(z)/V(z) = 1 + 0.75z^-1 + 0.25z^-2
              = 1/(1 + 0.75z^-1 + 0.25z^-2)^-1   (1)
Using long division, we may perform the following expansion of the denominator in Eq. (1):
    (1 + 0.75z^-1 + 0.25z^-2)^-1
        = 1 - (3/4)z^-1 + (5/16)z^-2 - (3/64)z^-3 - (11/256)z^-4 + (45/1024)z^-5
          - (91/4096)z^-6 + (93/16384)z^-7 + (85/65536)z^-8 - (627/262144)z^-9
          + (1541/1048576)z^-10 + ...
        ≈ 1 - 0.75z^-1 + 0.3125z^-2 - 0.0469z^-3 - 0.043z^-4 + 0.0439z^-5
          - 0.0222z^-6 + 0.0057z^-7 + 0.0013z^-8 - 0.0024z^-9 + 0.0015z^-10   (2)
(a) M = 2
Retaining terms in Eq. (2) up to z^-2, we may approximate the MA model with an AR model of order two as follows:
    X(z)/V(z) ≈ 1/(1 - 0.75z^-1 + 0.3125z^-2)
(b) M = 5
Retaining terms in Eq. (2) up to z^-5, we obtain the following approximation in the form of an AR model of order five:
    X(z)/V(z) ≈ 1/(1 - 0.75z^-1 + 0.3125z^-2 - 0.0469z^-3 - 0.043z^-4 + 0.0439z^-5)
(c) M = 10
Finally, retaining terms in Eq. (2) up to z^-10, we obtain the following approximation in the form of an AR model of order ten:
    X(z)/V(z) ≈ 1/D(z)
where D(z) is given by the polynomial on the right-hand side of Eq. (2).
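The long division in Eq. (2) is equivalent to the recurrence c_n = -(0.75 c_{n-1} + 0.25 c_{n-2}) with c_0 = 1; the exact-arithmetic check below (added, not part of the original solution) reproduces all eleven coefficients:

```python
# Numeric check of Eq. (2) in Problem 1.10: power-series coefficients of
# 1/(1 + 0.75 z^-1 + 0.25 z^-2), computed in exact rational arithmetic.
from fractions import Fraction

c = [Fraction(1)]                       # c_0 = 1
for n in range(1, 11):
    prev1 = c[n - 1]
    prev2 = c[n - 2] if n >= 2 else Fraction(0)
    c.append(-(Fraction(3, 4) * prev1 + Fraction(1, 4) * prev2))

assert c[1] == Fraction(-3, 4)
assert c[2] == Fraction(5, 16)
assert c[3] == Fraction(-3, 64)
assert c[4] == Fraction(-11, 256)
assert c[5] == Fraction(45, 1024)
assert c[6] == Fraction(-91, 4096)
assert c[7] == Fraction(93, 16384)
assert c[8] == Fraction(85, 65536)
assert c[9] == Fraction(-627, 262144)
assert c[10] == Fraction(1541, 1048576)
```

In particular, the coefficient of z^-7 is 93/16384 (some printings show 16283, a typographical error).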
1.11
(a) The filter output is
    x(n) = w^H u(n)
where u(n) is the tap-input vector. The average power of the filter output is therefore
    E[|x(n)|^2] = E[w^H u(n) u^H(n) w]
                = w^H E[u(n) u^H(n)] w
                = w^H R w
(b) If u(n) is extracted from a zero-mean white noise process of variance σ^2, we have
    R = σ^2 I
where I is the identity matrix. Hence,
    E[|x(n)|^2] = σ^2 w^H w
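Part (b) can be confirmed with a one-line computation; w and σ^2 below are arbitrary test values (an added check, not part of the original solution):

```python
# Numeric check of Problem 1.11(b): for R = sigma^2 I, w^H R w = sigma^2 w^H w.
import numpy as np

sigma2 = 0.7
w = np.array([1.0 + 2.0j, -0.5j, 0.25])   # arbitrary complex tap weights

R = sigma2 * np.eye(3)
power = (w.conj() @ R @ w).real           # w^H R w

assert np.isclose(power, sigma2 * np.vdot(w, w).real)
```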
1.12
(a) The process u(n) is a linear combination of Gaussian samples. Hence, u(n) is
Gaussian.
(b) From inverse filtering, we recognize that v(n) may also be expressed as a linear
combination of samples represented by u(n). Hence, if u(n) is Gaussian, then v(n) is
also Gaussian.
1.13
(a) From the Gaussian moment factoring theorem:
    E[(u1 u2*)^k] = E[u1 ... u1 u2* ... u2*]
                  = k! E[u1 u2*] ... E[u1 u2*]
                  = k! (E[u1 u2*])^k   (1)
(b) Putting u2 = u1 = u, Eq. (1) reduces to
    E[|u|^(2k)] = k! (E[|u|^2])^k
1.14
It is not permissible to interchange the order of expectation and limiting operations in Eq.
(1.113). The reason is that the expectation is a linear operation, whereas the limiting
operation with respect to the number of samples N is nonlinear.
1.15
The filter output is
    y(n) = Σ_i h(i) u(n - i)
Similarly, we may write
    y(m) = Σ_k h(k) u(m - k)
Hence,
    r_y(n, m) = E[y(n) y*(m)]
              = E[Σ_i h(i) u(n - i) Σ_k h*(k) u*(m - k)]
              = Σ_i Σ_k h(i) h*(k) E[u(n - i) u*(m - k)]
              = Σ_i Σ_k h(i) h*(k) r_u(n - i, m - k)
1.16
The mean-square value of the filter output in response to a white noise input is
    P_o = 2σ^2 Δω/π
The value P_o is linearly proportional to the filter bandwidth Δω. This relation holds irrespective of how small Δω is compared with the mid-band frequency of the filter.
1.17
(a) The variance of the filter output is
    σ_y^2 = 2σ^2 Δω/π
We are given
    σ^2 = 0.1 volt^2
    Δω = 2π × 1 rad/s
Hence,
    σ_y^2 = (2 × 0.1 × 2π)/π = 0.4 volt^2
(b) The pdf of the filter output y is
    f(y) = (1/(σ_y √(2π))) exp(-y^2/(2σ_y^2))
         = (1/(0.63 √(2π))) exp(-y^2/0.8)
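The arithmetic can be double-checked in a few lines (an added sanity check, not part of the original solution):

```python
# Numeric check of Problem 1.17: sigma_y^2 = 2*sigma^2*delta_omega/pi.
import numpy as np

sigma2 = 0.1                 # volt^2
delta_omega = 2.0 * np.pi    # rad/s
sigma_y2 = 2.0 * sigma2 * delta_omega / np.pi

assert np.isclose(sigma_y2, 0.4)                   # output variance, volt^2
assert np.isclose(np.sqrt(sigma_y2), 0.63, atol=0.01)  # sigma_y in the pdf prefactor
assert np.isclose(2.0 * sigma_y2, 0.8)             # denominator of y^2/0.8 in the pdf
```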
1.18
(a) We are given
    U_k = (1/N^(1/2)) Σ_{n=0}^{N-1} u(n) exp(-jnω_k),   k = 0, 1, ..., N-1
where u(n) is real valued and
    ω_k = (2π/N)k
Hence,
    E[U_k U_l*] = (1/N) E[Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} u(n)u(m) exp(-jnω_k + jmω_l)]
                = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} exp(-jnω_k + jmω_l) E[u(n)u(m)]
                = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} exp(-jnω_k + jmω_l) r(n - m)
                = (1/N) Σ_{m=0}^{N-1} exp(jmω_l) Σ_{n=0}^{N-1} r(n - m) exp(-jnω_k)   (1)
By definition, we also have
    Σ_{n=0}^{N-1} r(n) exp(-jnω_k) = S_k
Moreover, since r(n) is periodic with period N, we may invoke the time-shifting property of the discrete Fourier transform to write
    Σ_{n=0}^{N-1} r(n - m) exp(-jnω_k) = exp(-jmω_k) S_k
Thus, recognizing that ω_k = (2π/N)k and that Σ_{m=0}^{N-1} exp(jm(ω_l - ω_k)) equals N for l = k and zero otherwise, Eq. (1) reduces to
    E[U_k U_l*] = (S_k/N) Σ_{m=0}^{N-1} exp(jm(ω_l - ω_k))
                = S_k,   l = k
                = 0,     otherwise
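The decorrelation in part (a) can be verified numerically: with a periodic autocorrelation, the covariance matrix is circulant and the normalized DFT diagonalizes it. The spectrum S below is an arbitrary nonnegative, symmetric test sequence (an added check, not part of the original solution):

```python
# Numeric check of Problem 1.18(a): for periodic r(n), E[U_k U_l*] vanishes
# for l != k, and the diagonal recovers S_k.
import numpy as np

N = 8
S = np.array([3.0, 1.0, 0.5, 2.0, 4.0, 2.0, 0.5, 1.0])  # S_k >= 0, S_k = S_{N-k}
r = np.fft.ifft(S).real                  # periodic autocorrelation, period N

n = np.arange(N)
C = r[(n[:, None] - n[None, :]) % N]     # circulant covariance E[u(n)u(m)] = r(n - m)

# Normalized DFT: U = F u with F[k, n] = exp(-j*2*pi*k*n/N)/sqrt(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
cross = F @ C @ F.conj().T               # matrix of E[U_k U_l*]

# Off-diagonal entries vanish (uncorrelated spectral samples); diagonal is S_k.
assert np.allclose(cross, np.diag(S), atol=1e-10)
```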
(b) Part (a) shows that the complex spectral samples U_k are uncorrelated. If they are Gaussian, then they will also be statistically independent. Hence,
    f_U(U_0, U_1, ..., U_{N-1}) = (1/((2π)^N det(Λ))) exp(-(1/2) U^H Λ^-1 U)
where
    U = [U_0, U_1, ..., U_{N-1}]^T
    Λ = (1/2) E[UU^H]
      = (1/2) diag(S_0, S_1, ..., S_{N-1})
    det(Λ) = (1/2^N) Π_{k=0}^{N-1} S_k
Therefore,
    f_U(U_0, U_1, ..., U_{N-1}) = (1/((2π)^N 2^-N Π_{k=0}^{N-1} S_k)) exp(-Σ_{k=0}^{N-1} |U_k|^2/S_k)
                                = π^-N exp(-Σ_{k=0}^{N-1} (|U_k|^2/S_k + ln S_k))
1.19
The mean-square value of the increment process dz(ω) is
    E[|dz(ω)|^2] = S(ω) dω
Hence E[|dz(ω)|^2] is measured in watts.
1.20
The third-order cumulant of a process u(n) is
    c_3(τ1, τ2) = E[u(n) u(n + τ1) u(n + τ2)]
                = third-order moment
All odd-order moments of a zero-mean Gaussian process are known to be zero; hence,
    c_3(τ1, τ2) = 0
The fourth-order cumulant is
    c_4(τ1, τ2, τ3) = E[u(n) u(n + τ1) u(n + τ2) u(n + τ3)]
                      - E[u(n) u(n + τ1)] E[u(n + τ2) u(n + τ3)]
                      - E[u(n) u(n + τ2)] E[u(n + τ1) u(n + τ3)]
                      - E[u(n) u(n + τ3)] E[u(n + τ1) u(n + τ2)]
For the special case of τ1 = τ2 = τ3 = 0, the fourth-order moment of a zero-mean Gaussian process of variance σ^2 is 3σ^4, and its second-order moment is σ^2; the fourth-order cumulant is then 3σ^4 - 3σ^4 = 0. Indeed, all cumulants of order higher than two are zero for a Gaussian process.
1.21
The trispectrum is
    C_4(ω1, ω2, ω3) = Σ_{τ1=-∞}^{∞} Σ_{τ2=-∞}^{∞} Σ_{τ3=-∞}^{∞} c_4(τ1, τ2, τ3) exp(-j(ω1 τ1 + ω2 τ2 + ω3 τ3))
Let the process be passed through a three-dimensional band-pass filter centered on ω1, ω2, and ω3. We assume that the bandwidth (along each dimension) is small compared with the respective center frequency. The average power of the filter output is then proportional to the trispectrum C_4(ω1, ω2, ω3).
1.22
(a) Starting with the formula
    c_k(τ1, τ2, ..., τ_{k-1}) = γ_k Σ_{i=-∞}^{∞} h_i h_{i+τ1} ... h_{i+τ_{k-1}}
the third-order cumulant of the filter output is
    c_3(τ1, τ2) = γ_3 Σ_{i=-∞}^{∞} h_i h_{i+τ1} h_{i+τ2}
where γ_3 is the third-order cumulant of the filter input. The bispectrum is
    C_3(ω1, ω2) = Σ_{τ1=-∞}^{∞} Σ_{τ2=-∞}^{∞} c_3(τ1, τ2) exp(-j(ω1 τ1 + ω2 τ2))
                = γ_3 Σ_{i=-∞}^{∞} Σ_{τ1=-∞}^{∞} Σ_{τ2=-∞}^{∞} h_i h_{i+τ1} h_{i+τ2} exp(-j(ω1 τ1 + ω2 τ2))
Hence,
    C_3(ω1, ω2) = γ_3 H(e^(jω1)) H(e^(jω2)) H*(e^(j(ω1+ω2)))
(b) From this formula, we immediately deduce that
    arg[C_3(ω1, ω2)] = arg[H(e^(jω1))] + arg[H(e^(jω2))] - arg[H(e^(j(ω1+ω2)))]
1.23
The output of a filter of impulse response h_i due to an input u(i) is given by the convolution sum
    y(n) = Σ_i h_i u(n - i)
The third-order cumulant of the filter output is, for example,
    c_3(τ1, τ2) = E[y(n) y(n + τ1) y(n + τ2)]
                = E[Σ_i h_i u(n - i) Σ_k h_k u(n + τ1 - k) Σ_l h_l u(n + τ2 - l)]
                = E[Σ_i h_i u(n - i) Σ_k h_{k+τ1} u(n - k) Σ_l h_{l+τ2} u(n - l)]
                = Σ_i Σ_k Σ_l h_i h_{k+τ1} h_{l+τ2} E[u(n - i) u(n - k) u(n - l)]
For an input sequence of independent and identically distributed random variables, we note that
    E[u(n - i) u(n - k) u(n - l)] = γ_3,   i = k = l
                                 = 0,     otherwise
Hence,
    c_3(τ1, τ2) = γ_3 Σ_{i=-∞}^{∞} h_i h_{i+τ1} h_{i+τ2}
In general, we may thus write
    c_k(τ1, τ2, ..., τ_{k-1}) = γ_k Σ_{i=-∞}^{∞} h_i h_{i+τ1} ... h_{i+τ_{k-1}}
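For a short FIR response, the bispectrum factorization of Problem 1.22 can be checked deterministically against the cumulant formula just derived; the taps, γ_3, and the frequency pairs below are arbitrary test values (an added check, not part of the original solution):

```python
# Numeric check: the 2-D transform of c3(t1, t2) = gamma3 * sum_i h_i h_{i+t1} h_{i+t2}
# equals gamma3 * H(w1) H(w2) H*(w1 + w2) for a real FIR response h.
import numpy as np

h = np.array([1.0, -0.4, 0.25])         # h_0, h_1, h_2 (arbitrary real taps)
gamma3 = 2.0                            # third-order cumulant of the iid input
L = len(h)

def c3(t1, t2):
    # gamma3 * sum_i h_i h_{i+t1} h_{i+t2}, with h_i = 0 outside 0..L-1
    total = 0.0
    for i in range(L):
        j, k = i + t1, i + t2
        if 0 <= j < L and 0 <= k < L:
            total += h[i] * h[j] * h[k]
    return gamma3 * total

def H(w):
    return sum(h[i] * np.exp(-1j * w * i) for i in range(L))

for w1, w2 in [(0.3, 1.1), (0.7, -0.2)]:
    C3 = sum(c3(t1, t2) * np.exp(-1j * (w1 * t1 + w2 * t2))
             for t1 in range(-L + 1, L)
             for t2 in range(-L + 1, L))
    assert np.isclose(C3, gamma3 * H(w1) * H(w2) * np.conj(H(w1 + w2)))
```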
1.24
By definition,
    r^(α)(k) = (1/N) Σ_{n=0}^{N-1} E[u(n) u*(n - k) exp(-j2παn)] exp(jπαk)
Hence,
    r^(α)(-k) = (1/N) Σ_{n=0}^{N-1} E[u(n) u*(n + k) exp(-j2παn)] exp(-jπαk)
and
    r^(α)*(k) = (1/N) Σ_{n=0}^{N-1} E[u*(n) u(n - k) exp(j2παn)] exp(-jπαk)
We are told that the process u(n) is cyclostationary, which means that
    E[u(n) u*(n + k) exp(-j2παn)] = E[u*(n) u(n - k) exp(j2παn)]
It follows therefore that
    r^(α)(-k) = r^(α)*(k)
1.25
For α = 0, the input to the time-average cross-correlator reduces to the squared amplitude
of a narrow-band filter with mid-band frequency ω. Correspondingly, the time-average
cross-correlator reduces to an average power meter. Thus, for α = 0, the instrumentation of
Fig. 1.16 reduces to that of Fig. 1.13.
CHAPTER 2
2.1
(a) Let
    w_k = x + jy
    p(-k) = a + jb
We may then write
    f = w_k p*(-k)
      = (x + jy)(a - jb)
      = (ax + by) + j(ay - bx)
Let
    f = u + jv
with
    u = ax + by
    v = ay - bx
Hence,
    ∂u/∂x = a,   ∂u/∂y = b
    ∂v/∂y = a,   ∂v/∂x = -b
From these results we immediately see that
    ∂u/∂x = ∂v/∂y
    ∂v/∂x = -∂u/∂y
In other words, the product term w_k p*(-k) satisfies the Cauchy-Riemann equations, and so this term is analytic.
(b) Let
    f = w_k* p(-k)
      = (x - jy)(a + jb)
      = (ax + by) + j(bx - ay)
Let
    f = u + jv
with
    u = ax + by
    v = bx - ay
Hence,
    ∂u/∂x = a,   ∂u/∂y = b
    ∂v/∂x = b,   ∂v/∂y = -a
From these results we immediately see that
    ∂u/∂x ≠ ∂v/∂y
    ∂v/∂x ≠ -∂u/∂y
In other words, the product term w_k* p(-k) does not satisfy the Cauchy-Riemann equations, and so this term is not analytic.
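A quick numerical illustration of the two cases (added, not part of the original solution): for an analytic function the difference-quotient estimate of the derivative is the same along the real and imaginary directions; p and w0 below are arbitrary test values.

```python
# Difference-quotient test of analyticity for Problem 2.1.
import numpy as np

p = 2.0 + 3.0j          # plays the role of p(-k)
w0 = 0.7 - 1.2j         # point at which we differentiate
eps = 1e-6

def deriv(fn, direction):
    return (fn(w0 + eps * direction) - fn(w0)) / (eps * direction)

f = lambda w: w * np.conj(p)        # case (a): satisfies Cauchy-Riemann
g = lambda w: np.conj(w) * p        # case (b): violates Cauchy-Riemann

assert np.isclose(deriv(f, 1.0), deriv(f, 1.0j))        # analytic
assert not np.isclose(deriv(g, 1.0), deriv(g, 1.0j))    # not analytic
```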
2.2
(a) From the Wiener-Hopf equation, we have
    w_o = R^-1 p   (1)
We are given
    R = [1  0.5; 0.5  1]
    p = [0.5; 0.25]
Hence, the inverse matrix R^-1 is
    R^-1 = [1  0.5; 0.5  1]^-1
         = (1/0.75) [1  -0.5; -0.5  1]
Using Eq. (1), we therefore get
    w_o = (1/0.75) [1  -0.5; -0.5  1] [0.5; 0.25]
        = (1/3) [1  -0.5; -0.5  1] [2; 1]
        = (1/3) [1.5; 0]
        = [0.5; 0]
(b) The minimum mean-square error is
    J_min = σ_d^2 - p^H w_o
          = σ_d^2 - [0.5, 0.25] [0.5; 0]
          = σ_d^2 - 0.25
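Parts (a) and (b) are easy to confirm with a direct solve (an added check, not part of the original solution):

```python
# Numeric check of Problem 2.2(a)-(b): solve the Wiener-Hopf equation for the
# given R and p, confirming wo = [0.5, 0]^T and p^H wo = 0.25.
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

wo = np.linalg.solve(R, p)
assert np.allclose(wo, [0.5, 0.0])

# Jmin = sigma_d^2 - p^H wo: the reduction term p^H wo equals 0.25.
assert np.isclose(p @ wo, 0.25)
```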
(c) The eigenvalues of the matrix R are the roots of the characteristic equation
    (1 - λ)^2 - (0.5)^2 = 0
That is, the two roots are
    λ1 = 0.5   and   λ2 = 1.5
The associated eigenvectors are defined by
    R q = λ q
For λ1 = 0.5, we have
    [1  0.5; 0.5  1] [q11; q12] = 0.5 [q11; q12]
Expanding:
    q11 + 0.5 q12 = 0.5 q11
    0.5 q11 + q12 = 0.5 q12
Therefore,
    q11 = -q12
Normalizing the eigenvector q1 to unit length, we therefore have
    q1 = (1/√2) [1; -1]