
Fundamentals of Statistical Signal Processing: Estimation Theory (Steven Kay): Chapter 2 Detailed Solutions

Question 1
The estimator is given as:

$$\hat{\sigma}^2 = \frac{1}{N}\sum_{n=0}^{N-1} x^2[n] \tag{1}$$

First, we find the expected value (which should equal $\sigma^2$ if the estimator is unbiased):

$$E\{\hat{\sigma}^2\} = \frac{1}{N}\sum_{n=0}^{N-1} E\{x^2[n]\} = \frac{1}{N}\sum_{n=0}^{N-1}\left(\mathrm{var}\{x[n]\} + E\{x[n]\}^2\right) = \frac{1}{N}\sum_{n=0}^{N-1}\left(\sigma^2 + 0^2\right) = \frac{N\sigma^2}{N} = \sigma^2 \tag{2}$$
This implies that the estimator is unbiased.
Next, we calculate the variance:
$$\mathrm{var}\{\hat{\sigma}^2\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{var}\{x^2[n]\} = \frac{N}{N^2}\,\mathrm{var}\{x^2[n]\} = \frac{\mathrm{var}\{x^2[n]\}}{N} \tag{3}$$

In the above equation,

$$\mathrm{var}\{x^2[n]\} = E\{x^4[n]\} - E\{x^2[n]\}^2 = E\{x^4[n]\} - \sigma^4 \tag{4}$$
Using our knowledge that $x[n]$ is normally distributed, we use the moment generating function to compute $E\{x^4[n]\}$.
The moment generating function, $\phi(t)$, for a Normal distribution is given as:

$$\phi(t) = \exp\left\{\mu t + \frac{\sigma^2 t^2}{2}\right\} \tag{5}$$
It is important to note that, in general,

$$\phi^{(n)}(0) = E\{X^n\}, \quad n \ge 1 \tag{6}$$
Since we are interested in $E\{x^4[n]\}$, we take the fourth derivative of $\phi(t)$ and evaluate it at $t = 0$:

$$\phi^{(4)}(t) = 3\sigma^4 \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\} + \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\}(t\sigma^2 + \mu)^4 + 6\sigma^2 \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\}(t\sigma^2 + \mu)^2 \tag{7}$$

Since $\mu = 0$ for $x[n]$,

$$\phi^{(4)}(0) = 3\sigma^4 = E\{x^4[n]\} \tag{8}$$
Substituting the obtained expressions, we obtain:

$$\mathrm{var}\{\hat{\sigma}^2\} = \frac{1}{N}\left(3\sigma^4 - \sigma^4\right) = \frac{2\sigma^4}{N} \tag{9}$$
Evidently, as $N \to \infty$, the variance decreases to zero and the estimator becomes better.
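As a sanity check, here is a minimal Monte Carlo sketch (the values of $\sigma^2$, $N$, and the trial count are arbitrary choices, not from the text) that should reproduce both moments derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, N, trials = 2.0, 100, 100_000   # assumed values for illustration

# Draw x[n] ~ N(0, sigma^2) and form sigma_hat^2 = (1/N) * sum(x[n]^2) per trial.
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
est = (x**2).mean(axis=1)

print(est.mean(), sigma2)              # ~2.0  -> unbiased
print(est.var(), 2 * sigma2**2 / N)    # ~0.08 -> variance 2*sigma^4/N
```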


Question 2
In a uniform distribution over $(a, b)$, $E\{x[n]\} = \frac{a+b}{2}$. For this question we have $a = 0$, $b = \theta$, so $E\{x[n]\} = \frac{\theta}{2}$.
Since an unbiased estimator satisfies $E\{\hat{\theta}\} = \theta$, we can simply average the samples and multiply the result by 2, so that:

$$\hat{\theta} = \frac{2}{N}\sum_{n=0}^{N-1} x[n] \tag{10}$$
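A quick simulation (the values of $\theta$ and $N$ are arbitrary choices) confirms that doubling the sample mean is unbiased for a $U(0, \theta)$ sample:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, N, trials = 3.0, 50, 100_000   # assumed values

# theta_hat = (2/N) * sum(x[n]) for x[n] ~ U(0, theta)
x = rng.uniform(0.0, theta, size=(trials, N))
theta_hat = 2.0 * x.mean(axis=1)

print(theta_hat.mean(), theta)        # ~3.0 -> unbiased
```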

Question 3
In Example 2.1, we saw that $x[n] = A + w[n]$, where $w[n]$ is WGN. The estimator then was:

$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] \tag{11}$$

This implies that the estimator is a linear combination of the $x[n]$: the samples are simply summed and averaged. Since a linear combination of jointly Gaussian random variables is itself Gaussian, the estimator is Gaussian (Normal).
We also know that the estimator is unbiased, and so its expected value is simply $A$. The variance is easily found:

$$\mathrm{var}\{\hat{A}\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{var}\{x[n]\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{var}\{w[n]\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\sigma^2 = \frac{\sigma^2}{N} \tag{12}$$

Hence we can say that the estimator is normally distributed with a mean of $A$ and a variance of $\sigma^2/N$.
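A short numerical check (here $A$, $\sigma$, and $N$ are assumed values) that the sample mean's empirical distribution matches $\mathcal{N}(A, \sigma^2/N)$:

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma, N, trials = 1.0, 2.0, 25, 200_000   # assumed values

A_hat = rng.normal(A, sigma, size=(trials, N)).mean(axis=1)

print(A_hat.mean(), A)                # ~1.0
print(A_hat.var(), sigma**2 / N)      # ~0.16
# The empirical 97.5% quantile should sit near A + 1.96*sigma/sqrt(N):
print(np.quantile(A_hat, 0.975), A + 1.96 * sigma / np.sqrt(N))
```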

Question 4
An averaging estimator in this problem is defined as

$$\hat{h} = \frac{1}{N}\sum_{i=0}^{N-1} h_i \tag{13}$$

$$E\{\hat{h}\} = \frac{1}{N}\sum_{i=0}^{N-1} E\{h_i\} = E\{h_i\} \tag{14}$$

$$\mathrm{var}\{\hat{h}\} = \frac{1}{N^2}\sum_{i=0}^{N-1}\mathrm{var}\{h_i\} = \frac{\mathrm{var}(h_i)}{N} = \frac{1}{N} \tag{15}$$
Mean:
For $\varepsilon = 1$, $E\{\hat{h}\} = h$ and $E\{h_i\} = h$. Similarly, for $\varepsilon = 0.5$, $E\{\hat{h}\} = 0.5h$ and $E\{h_i\} = 0.5h$.
So averaging does not improve the estimation of the mean: when $\varepsilon = 0.5$, the estimate is biased no matter how many measurements are averaged.
Variance: For $\varepsilon = 1$, $\mathrm{var}\{\hat{h}\} = 0.1$ and $\mathrm{var}\{h_i\} = 1$. Similarly, for $\varepsilon = 0.5$, $\mathrm{var}\{\hat{h}\} = 0.1$ and $\mathrm{var}\{h_i\} = 1$.
So averaging reduces the variance (which is to be expected). However, when $\varepsilon = 0.5$ the estimate actually becomes worse, as the distribution narrows around the wrong value.
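Both cases can be simulated directly. The sketch below assumes the model $h_i \sim \mathcal{N}(\varepsilon h, 1)$ with $N = 10$ (inferred from $\mathrm{var}\{\hat{h}\} = 0.1$ with $\mathrm{var}\{h_i\} = 1$); the value of $h$ itself is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)
h, N, trials = 5.0, 10, 100_000   # h assumed; N = 10 gives var = 1/N = 0.1

for eps in (1.0, 0.5):
    # Each raw measurement: h_i ~ N(eps*h, 1); estimator averages N of them.
    h_hat = rng.normal(eps * h, 1.0, size=(trials, N)).mean(axis=1)
    print(eps, h_hat.mean(), h_hat.var())   # mean -> eps*h, var -> 0.1
```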


Question 5
We are told that the estimator $\hat{\sigma}^2$ is unbiased, which means that $E\{\hat{\sigma}^2\} = \sigma^2$.
Next, $\hat{\sigma}^2$ is expressed as a scaled sum of two squares, $\frac{1}{2}(x^2[0] + x^2[1])$, where the $x[n]$ terms are normally distributed.
The chi-squared distribution has a similar form: a statistic $Y$ of the following form is chi-squared distributed:

$$Y = \sum_{i=1}^{k}\left(\frac{X_i - \mu_i}{\sigma_i}\right)^2 \tag{16}$$

The chi-squared distribution has a pdf (for $k = 2$) of:

$$p(y) = \frac{e^{-y/2}}{2\,\Gamma(1)} = \frac{1}{2}e^{-y/2} \tag{17}$$

Using our knowledge of $x[n]$ (zero mean, variance $\sigma^2$), we can write:

$$Y = \frac{x^2[0]}{\sigma^2} + \frac{x^2[1]}{\sigma^2} \tag{18}$$

Comparing this to the expression for $\hat{\sigma}^2$, we can see that

$$\hat{\sigma}^2 = \frac{Y\sigma^2}{2} \tag{19}$$

Next, if we know the pdf of a random variable $X$, it is possible to calculate the pdf of another variable $Y$ that is related to $X$. The basic relationship is:

$$|f_Y(y)\,dy| = |f_X(x)\,dx| \tag{20}$$

$$f_Y(y) = \frac{f_X(x)}{|dy/dx|} \tag{21}$$

So in our case, with $d\hat{\sigma}^2/dY = \sigma^2/2$,

$$p(\hat{\sigma}^2) = \frac{\frac{1}{2}e^{-\hat{\sigma}^2/\sigma^2}}{\sigma^2/2} = \frac{1}{\sigma^2}e^{-\hat{\sigma}^2/\sigma^2} \tag{22}$$

The pdf is only nonzero for $\hat{\sigma}^2 \ge 0$. From the equation it is evident that the pdf is a decaying exponential, which is not symmetric, so the estimator is not Gaussian.
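Since (22) is an exponential pdf with scale $\sigma^2$, the estimator should have mean $\sigma^2$ and variance $\sigma^4$ (consistent with $2\sigma^4/N$ from Question 1 at $N = 2$). A minimal Monte Carlo sketch, with $\sigma^2$ an assumed value, checks this:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, trials = 1.5, 500_000   # assumed variance

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, 2))
s = 0.5 * (x**2).sum(axis=1)    # the estimator (x^2[0] + x^2[1]) / 2

# An exponential with scale sigma^2 has mean sigma^2 and variance sigma^4.
print(s.mean(), sigma2)         # ~1.5
print(s.var(), sigma2**2)       # ~2.25
```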

Question 6
For the given estimator, we can define the mean and the variance:

$$\hat{A} = \sum_{n=0}^{N-1} a_n x[n] \tag{23}$$

$$E\{\hat{A}\} = \sum_{n=0}^{N-1} a_n E\{x[n]\} = \sum_{n=0}^{N-1} a_n A \tag{24}$$

$$\mathrm{var}\{\hat{A}\} = \sum_{n=0}^{N-1} a_n^2\,\mathrm{var}\{x[n]\} = \sum_{n=0}^{N-1} a_n^2 \sigma^2 \tag{25}$$

For the estimator to be unbiased we require $E\{\hat{A}\} = A$, which gives the constraint

$$\sum_{n=0}^{N-1} a_n = 1 \tag{26}$$

Then we minimize the variance using a Lagrange multiplier:

$$J = \mathrm{var}\{\hat{A}\} + \lambda\left(\sum_{n=0}^{N-1} a_n - 1\right) = \sigma^2\sum_{n=0}^{N-1} a_n^2 + \lambda\left(\sum_{n=0}^{N-1} a_n - 1\right) \tag{27}$$

$$\frac{\partial J}{\partial a_i} = 2a_i\sigma^2 + \lambda = 0 \tag{28}$$

$$a_i = -\frac{\lambda}{2\sigma^2} \tag{29}$$

The value of $a_i$ is a constant independent of $i$, which means all the $a_i$ are equal. Combined with the constraint, $N a_i = 1$, or $a_i = 1/N$.
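The same weights fall out of a numerical constrained minimization. Here is a sketch using SciPy's SLSQP solver; $N$ and $\sigma^2$ are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

N, sigma2 = 8, 1.0   # assumed values

# Minimize var = sigma^2 * sum(a_n^2) subject to sum(a_n) = 1.
res = minimize(
    lambda a: sigma2 * np.sum(a**2),
    x0=np.random.default_rng(5).uniform(size=N),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda a: np.sum(a) - 1.0}],
)
print(res.x)   # all weights ~1/N = 0.125
```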

Question 7
When we are interested in evaluating something of the form $P\{|\hat{\theta} - \theta| > \varepsilon\}$, we can look at z-scores (a.k.a. standard scores).
A z-score is a normalized metric, so we can write the claim to be shown (for two estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ with $\mathrm{var}(\hat{\theta}_1) < \mathrm{var}(\hat{\theta}_2)$) as:

$$\Pr\left\{\frac{|\hat{\theta}_1 - \theta|}{\sqrt{\mathrm{var}(\hat{\theta}_1)}} > \frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{\theta}_1)}}\right\} < \Pr\left\{\frac{|\hat{\theta}_2 - \theta|}{\sqrt{\mathrm{var}(\hat{\theta}_2)}} > \frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{\theta}_2)}}\right\} \tag{30}$$
The probability ($\Pr$ above) can be calculated as:

$$\Pr(x > a) = \int_a^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx \tag{31}$$

This rewrites the expression as:

$$\int_a^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx < \int_b^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx \tag{32}$$

Here, $a = \frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{\theta}_1)}}$ and $b = \frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{\theta}_2)}}$, and $a > b$ because $\mathrm{var}(\hat{\theta}_1) < \mathrm{var}(\hat{\theta}_2)$. Since we are integrating a decaying function, raising the lower limit shrinks the integral, so the error probability for $\hat{\theta}_1$ is smaller than that for $\hat{\theta}_2$.
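Writing the Gaussian tail probability via the complementary error function, $Q(a) = \frac{1}{2}\mathrm{erfc}(a/\sqrt{2})$, a small sketch (the variances and $\varepsilon$ are assumed values) shows the ordering numerically:

```python
from math import erfc, sqrt

def tail(a):
    # Pr(x > a) for a standard normal: Q(a) = erfc(a / sqrt(2)) / 2
    return 0.5 * erfc(a / sqrt(2.0))

eps, var1, var2 = 0.5, 0.2, 1.0       # assumed: var1 < var2
a, b = eps / sqrt(var1), eps / sqrt(var2)

print(2 * tail(a), 2 * tail(b))       # P{|error| > eps}: smaller for var1
```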

Question 8
Similarly to the previous question, we can normalize the probability expression and rewrite it in terms of an integral:

$$\Pr\left\{\frac{|\hat{A} - A|}{\sqrt{\mathrm{var}(\hat{A})}} > \frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{A})}}\right\} = 2\int_{\frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{A})}}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx \tag{33}$$

The lower limit of the integral is $\frac{\varepsilon}{\sqrt{\mathrm{var}(\hat{A})}}$, where $\mathrm{var}(\hat{A}) = \sigma^2/N$, making the limit $\frac{\varepsilon\sqrt{N}}{\sigma}$. The value of the integral decreases as the lower limit increases, since the integrand is decaying past $x = 0$ (and we are always past $x = 0$ here, because we are looking at absolute values). As $N \to \infty$, $\frac{\varepsilon\sqrt{N}}{\sigma} \to \infty$, and hence the probability $\to 0$.
Next, we look at the estimator $\check{A} = \frac{1}{2N}\sum_{n=0}^{N-1} x[n]$. The variance of this estimator is:

$$\mathrm{var}(\check{A}) = \frac{\sigma^2}{4N} \tag{34}$$


At first glance this might look like it will reduce the probability faster than the previous estimator, but if we analyze $\check{A}$ more closely, we notice that it is biased (its expected value is centered at $A/2$), and hence $\Pr\{|\check{A} - A| > \varepsilon\} \to 1$ as $N \to \infty$.
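A Monte Carlo sketch ($A$, $\sigma$, and $\varepsilon$ are assumed values) illustrates both limits: the unbiased estimator's error probability falls toward 0 while the biased estimator's rises toward 1:

```python
import numpy as np

rng = np.random.default_rng(6)
A, sigma, eps, trials = 1.0, 1.0, 0.1, 10_000   # assumed values

for N in (10, 100, 1000):
    x = rng.normal(A, sigma, size=(trials, N))
    a_hat = x.mean(axis=1)           # unbiased sample mean
    a_check = 0.5 * a_hat            # (1/(2N)) * sum(x[n]), centered at A/2
    print(N,
          np.mean(np.abs(a_hat - A) > eps),     # -> 0 as N grows
          np.mean(np.abs(a_check - A) > eps))   # -> 1 as N grows
```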

Question 9
In Example 2.1 the estimator was:

$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] \tag{35}$$

$$E\{\hat{A}\} = A \tag{36}$$
Now, for $\hat{A}^2$,

$$E\{\hat{A}^2\} = \frac{1}{N^2}\,E\left\{\left(\sum_{n=0}^{N-1} x[n]\right)^2\right\} = \frac{1}{N^2}\left(E\left\{\sum_{n=0}^{N-1} x[n]\right\}^2 + \mathrm{var}\left\{\sum_{n=0}^{N-1} x[n]\right\}\right) \tag{37}$$

$$E\{\hat{A}^2\} = \frac{1}{N^2}\left((NA)^2 + N\sigma^2\right) = \frac{N^2A^2 + N\sigma^2}{N^2} = A^2 + \frac{\sigma^2}{N} \tag{38}$$

Since the expected value is not equal to $A^2$, the estimator is biased (though it is asymptotically unbiased, since $\sigma^2/N \to 0$ as $N \to \infty$).
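A quick simulation ($A$, $\sigma$, and $N$ are assumed values) reproduces the bias term $\sigma^2/N$:

```python
import numpy as np

rng = np.random.default_rng(7)
A, sigma, N, trials = 1.0, 2.0, 20, 200_000   # assumed values

A_hat = rng.normal(A, sigma, size=(trials, N)).mean(axis=1)

print((A_hat**2).mean())      # ~1.2, not A^2 = 1
print(A**2 + sigma**2 / N)    # matches the derived A^2 + sigma^2/N
```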

Question 10
We already know that the estimator $\hat{A}$ is unbiased. We only need to find the expected value of $\hat{\sigma}^2$. Since the $N$ terms of the sum are identically distributed, each has the same expectation, so:

$$
\begin{aligned}
E\{\hat{\sigma}^2\} &= \frac{N}{N-1}\,E\{(x[n] - \hat{A})^2\} \\
&= \frac{N}{N-1}\left(E\{x[n] - \hat{A}\}^2 + \mathrm{var}\{x[n] - \hat{A}\}\right) \\
&= \frac{N}{N-1}\left[\left(E\{x[n]\} - E\left\{\frac{1}{N}\sum_{m=0}^{N-1} x[m]\right\}\right)^2 + \mathrm{var}\left\{\frac{N-1}{N}x[n] - \frac{1}{N}\sum_{m=0,\,m\ne n}^{N-1} x[m]\right\}\right] \\
&= \frac{N}{N-1}\left[(A - A)^2 + \frac{(N-1)^2\sigma^2}{N^2} + \frac{(N-1)\sigma^2}{N^2}\right] \\
&= \frac{N}{N-1}\cdot\frac{(N-1)\sigma^2}{N} = \sigma^2
\end{aligned}
$$
Thus the estimator is unbiased.
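NumPy's np.var exposes both normalizations through its ddof parameter, so the result can be checked directly ($A$, $\sigma^2$, and $N$ below are assumed values): ddof=1 divides by $N-1$, as in the estimator above, while ddof=0 divides by $N$ and is biased:

```python
import numpy as np

rng = np.random.default_rng(8)
A, sigma2, N, trials = 1.0, 2.0, 10, 200_000   # assumed values

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))

# ddof=1 divides by N-1 (the estimator above); ddof=0 divides by N.
print(np.var(x, axis=1, ddof=1).mean())   # ~2.0 = sigma^2, unbiased
print(np.var(x, axis=1, ddof=0).mean())   # ~1.8 = (N-1)/N * sigma^2, biased
```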

Question 11
We are dealing with a uniform distribution, the pdf of which is defined as:

$$f_X(x) = \frac{1}{b-a} = \frac{1}{1/\theta - 0} = \theta \tag{39}$$

Because we want an unbiased estimator, we also require

$$E\{\hat{\theta}\} = \theta \tag{40}$$

In general we can also write the expected value as

$$E\{\hat{\theta}\} = \theta = \int_{-\infty}^{\infty} g(x) f(x)\,dx \tag{41}$$

Here $g(x)$ is a measurable function of $x$, and $f(x)$ is the pdf of $x$. In our case:

$$\theta = \int_{-\infty}^{\infty} g(x) f(x)\,dx = \int_0^{1/\theta} g(x[0])\,\theta\,dx[0]$$

Cancelling out $\theta$, we get:

$$1 = \int_0^{1/\theta} g(x[0])\,dx[0] = \int_0^{1/\theta} g(u)\,du$$

Next we need to prove that no function $g(x[0])$ can satisfy this condition for all $\theta > 0$. Let's take two values of $\theta$, $\theta_1$ and $\theta_2$, that are not equal. Then,

$$1 = \int_0^{1/\theta_1} g(u)\,du$$

$$1 = \int_0^{1/\theta_2} g(u)\,du$$

Subtracting these two results in:

$$0 = \int_{1/\theta_2}^{1/\theta_1} g(u)\,du$$

Since this must hold for every pair $\theta_1 \ne \theta_2$, it is only possible when $g(u) = 0$ almost everywhere, which gives $E\{\hat{\theta}\} = 0 \ne \theta$ and makes the estimator biased. Hence no unbiased estimator of $\theta$ exists.
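To make the obstruction concrete, here is a small deterministic sketch (the candidate $g$ is hypothetical): for any fixed $g$, $E\{g(x[0])\} = \theta\int_0^{1/\theta} g(u)\,du$, and no fixed $g$ makes this equal $\theta$ for every $\theta$:

```python
import numpy as np

# E{g(x[0])} = integral_0^{1/theta} g(u) * theta du.  Unbiasedness would
# require this to equal theta for every theta > 0.
def expected_value(g, theta, k=200_000):
    width = (1.0 / theta) / k                 # midpoint rule on [0, 1/theta]
    mids = (np.arange(k) + 0.5) * width
    return theta * np.sum(g(mids)) * width

g = lambda u: 2.0 - 4.0 * u                   # hypothetical candidate g

for theta in (0.5, 1.0, 2.0, 4.0):
    print(theta, expected_value(g, theta))    # gives 2 - 2/theta, never theta
```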
