OTTAWA CANADA
ECONOMETRICS 5701
LECTURE 1
Instructor: Marcel Voia
If $z \sim N(0,1)$, then
$$z^2 \sim \chi^2(1).$$
If $z \sim N(0,1)$ and $x \sim \chi^2(n)$ are independent, then
$$t(n) = \frac{z}{\sqrt{x/n}}$$
has a $t$ distribution with $n$ degrees of freedom. If $x_1 \sim \chi^2(n_1)$ and $x_2 \sim \chi^2(n_2)$ are independent, then
$$F = \frac{x_1/n_1}{x_2/n_2} \sim F(n_1, n_2)$$
has an F distribution.
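These relationships are easy to verify by simulation. The sketch below (in Python rather than GAUSS, and not part of the original notes) builds $t$ and $F$ draws from independent normal and chi-square variables and compares their moments with the theoretical values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
n1, n2 = 5, 10

z = rng.standard_normal(N)       # z ~ N(0,1), so z**2 ~ chi2(1)
x1 = rng.chisquare(n1, N)        # x1 ~ chi2(n1), independent of z
x2 = rng.chisquare(n2, N)        # x2 ~ chi2(n2)

t_draws = z / np.sqrt(x1 / n1)       # t(n1) = z / sqrt(x/n)
f_draws = (x1 / n1) / (x2 / n2)      # F = (x1/n1)/(x2/n2) ~ F(n1, n2)

mean_z2 = np.mean(z**2)   # E[chi2(1)] = 1
var_t = np.var(t_draws)   # Var[t(n)] = n/(n-2) = 5/3 here
mean_f = np.mean(f_draws) # E[F(n1,n2)] = n2/(n2-2) = 1.25 here
```

With 200,000 draws the simulated moments land close to the theoretical ones, confirming the constructions above.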
1.1.4 The Gamma and Exponential Distribution
The Gamma distribution is used in studies of income distributions and production functions:
$$f(x) = \frac{\lambda^P}{\Gamma(P)} e^{-\lambda x} x^{P-1}, \quad x \ge 0, \ \lambda > 0, \ P > 0$$
$$E(x) = \frac{P}{\lambda}, \quad Var(x) = \frac{P}{\lambda^2}.$$
When $P = 1$ $\Rightarrow$ Exponential Distribution.
When $\lambda = \frac{1}{2}$, $P = \frac{n}{2}$ $\Rightarrow$ $\chi^2(n)$.
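A small simulation can confirm the gamma moments and the chi-square special case. The sketch below (Python, not part of the original notes) checks $E(x) = P/\lambda$ and $Var(x) = P/\lambda^2$, and that $\lambda = 1/2$, $P = n/2$ reproduces the $\chi^2(n)$ mean. Note that NumPy parameterizes the gamma by shape $P$ and scale $1/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
lam, P = 2.0, 3.0

# NumPy's gamma uses shape (= P) and scale (= 1/lambda)
draws = rng.gamma(shape=P, scale=1.0 / lam, size=N)
mean_est = draws.mean()   # should be near P/lam = 1.5
var_est = draws.var()     # should be near P/lam**2 = 0.75

# Special case lambda = 1/2, P = n/2: this is the chi2(n) distribution
n = 6
chi_draws = rng.gamma(shape=n / 2, scale=2.0, size=N)
mean_chi = chi_draws.mean()   # E[chi2(n)] = n = 6
```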
Second, in many cases we may not be able to say anything precise about
the properties of an estimator in a small sample but we may be able to derive
(approximate) results about the estimator as the sample size gets large.
This section describes some of the tools from statistical analysis.
The general case:
If $y = \alpha x^\beta$ then $E(y) = E(\alpha x^\beta) \neq \alpha [E(x)]^\beta$.
However, in the linear case:
If $y = \alpha + \beta x$ then $E(y) = E(\alpha + \beta x) = \alpha + \beta E(x)$,
another reason why linear models are easy to deal with!
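The difference between the two cases is easy to see numerically. A minimal sketch (Python, not part of the original notes), using an exponential $x$ with $E(x) = 1$ so that $E(x^2) = 2$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=500_000)   # E(x) = 1, E(x**2) = 2
alpha, beta = 2.0, 2.0

# Nonlinear case: E(alpha * x**beta) differs from alpha * E(x)**beta
e_nonlinear = np.mean(alpha * x**beta)    # near alpha * E(x**2) = 4
plug_in_mean = alpha * np.mean(x)**beta   # near alpha * 1**2 = 2

# Linear case: E(alpha + beta * x) = alpha + beta * E(x) holds exactly
e_linear = np.mean(alpha + beta * x)
rhs_linear = alpha + beta * np.mean(x)
```

Pushing the expectation through the nonlinear function understates the true mean by a factor of two here; in the linear case the two sides agree exactly.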
The Slutsky Theorem applies to random vectors (matrices) as well as to random scalars. For example, if $\operatorname{plim} W_n = \Omega$ then $\operatorname{plim} W_n^{-1} = \Omega^{-1}$.
The key results are therefore as follows:
• $x_n \xrightarrow{p} x \Rightarrow x_n \xrightarrow{d} x$

• If $x_n \xrightarrow{d} x$ and $g(\cdot)$ is a continuous function, then the Slutsky Theorem says $g(x_n) \xrightarrow{d} g(x)$.

• (Delta Method) If $\sqrt{n}\left(\hat{\theta} - \theta_0\right) \xrightarrow{d} N(0, \Sigma)$, then
$$\sqrt{n}\left(g(\hat{\theta}) - g(\theta_0)\right) \xrightarrow{d} N\left(0, \left.\frac{\partial g(\theta)}{\partial \theta'}\right|_{\theta_0} \Sigma \left.\frac{\partial g(\theta)}{\partial \theta'}\right|_{\theta_0}'\right)$$

• If $x_n \xrightarrow{d} x$ and $y_n \xrightarrow{p} a$, where $a$ is a constant, then
  (i) $x_n + y_n \xrightarrow{d} x + a$
  (ii) $x_n y_n \xrightarrow{d} ax$
  (iii) $\frac{x_n}{y_n} \xrightarrow{d} \frac{x}{a}$, provided $a \neq 0$.
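These results can be illustrated by simulation. In the sketch below (Python, not part of the original notes), $x_n$ is a standardized sample mean, so $x_n \xrightarrow{d} N(0,1)$ by the central limit theorem, while $y_n$ is a sample mean converging in probability to the constant $a = 3$:

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n, a = 10_000, 400, 3.0

# x_n ->d N(0,1): standardized mean of Uniform(0,1) draws (variance 1/12)
u = rng.uniform(0.0, 1.0, size=(reps, n))
x_n = np.sqrt(n) * (u.mean(axis=1) - 0.5) / np.sqrt(1.0 / 12.0)

# y_n ->p a: sample mean of draws with population mean a
y_n = rng.normal(a, 1.0, size=(reps, n)).mean(axis=1)

mean_sum = np.mean(x_n + y_n)  # x_n + y_n ->d x + a, so the mean is near a = 3
var_sum = np.var(x_n + y_n)    # Var(x + a) = Var(x) = 1
var_prod = np.var(x_n * y_n)   # x_n * y_n ->d a*x, so the variance is near a**2 = 9
```

The simulated moments of the sum and product match those of $x + a$ and $ax$, as results (i) and (ii) predict.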
1.2.5 Orders of Magnitude
Let $\{a_n\}$ be a sequence of real numbers, $\{X_n\}$ be a sequence of random variables and $\{g_n\}$ be a sequence of positive real numbers. Then,
1. $a_n$ is of smaller order (in magnitude) than $g_n$, denoted $a_n = o(g_n)$, if $\lim_{n \to \infty} \frac{a_n}{g_n} = 0$.
2. $a_n$ is at most of order (in magnitude) $g_n$, denoted $a_n = O(g_n)$, if there exists a real number $B$ such that $\frac{|a_n|}{g_n} \le B$ for all $n$.
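A quick numerical illustration (Python, not part of the original notes): $a_n = \log n$ is $o(n)$, since $\log n / n \to 0$, while $a_n = 3n + 5$ is $O(n)$, since $(3n+5)/n$ is bounded ($B = 8$ works for all $n \ge 1$):

```python
import math

# Small o: log(n)/n shrinks toward zero as n grows
small_o_ratios = [math.log(n) / n for n in (10, 100, 10_000, 1_000_000)]
shrinks = all(b < a for a, b in zip(small_o_ratios, small_o_ratios[1:]))

# Big O: (3n + 5)/n never exceeds the bound B = 8 for n >= 1
big_o_ratios = [(3 * n + 5) / n for n in (1, 10, 100, 1_000)]
bounded = max(big_o_ratios) <= 8.0
```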
1.2.6 Stabilizing Transformations
Finally, it is often the case that the limiting distribution, F (x), of a random
variable is a point (often zero). There is very little information in this point and
we often want to analyze the properties of a random variable before it collapses
to a spike. This can be achieved using a stabilizing transformation.
Consider the following statistic:
$$\bar{x} \sim \left(\mu, \frac{\sigma^2}{n}\right),$$
which says that $\bar{x}$ has a mean of $\mu$ and a variance of $\frac{\sigma^2}{n}$. We know that
$$\operatorname{plim} \bar{x} = \mu,$$
but as the sample size increases the distribution collapses to a point. This is known as a degenerate distribution.
Imagine we had another statistic with a distribution
$$\bar{x}' \sim \left(\mu, \theta \frac{\sigma^2}{n}\right).$$
This has the same property that $\operatorname{plim} \bar{x}' = \mu$, but even though it has a different variance it too will have the same degenerate distribution. As the sample size gets large enough we cannot distinguish between the two distributions. In other words, different distributions may be indistinguishable in the limit.
However, we may be able to define a transformation such that
$$z = h(\bar{x}) \xrightarrow{d} f(z)$$
and
$$z' = h(\bar{x}') \xrightarrow{d} f(z'),$$
where $f(z)$ is a well-defined limiting distribution. We now have a basis for comparison. This is a so-called stabilizing transformation.
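The sketch below (Python, not part of the original notes) shows the idea: the raw distribution of $\bar{x}$ collapses as $n$ grows, but the transformed statistic $z = \sqrt{n}(\bar{x} - \mu)$ keeps a stable variance $\sigma^2$ at every sample size:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, reps = 5.0, 2.0, 5_000

def sample_means(n):
    # reps independent samples of size n, each reduced to its sample mean
    return rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

# Untransformed: Var(xbar) = sigma**2 / n collapses toward zero
var_n10 = np.var(sample_means(10))       # about 0.4
var_n1000 = np.var(sample_means(1_000))  # about 0.004

# Stabilized: z = sqrt(n) * (xbar - mu) has variance sigma**2 = 4 at any n
var_z10 = np.var(np.sqrt(10) * (sample_means(10) - mu))
var_z1000 = np.var(np.sqrt(1_000) * (sample_means(1_000) - mu))
```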
This allows us to introduce our next desirable property of an estimator:
1.3 Consistency
An estimator is said to be consistent if its probability limit is equal to the true population parameter. In other words,
$$\operatorname{plim} \hat{\theta} = \theta.$$
Thus if, as the sample size increases, the bias and the variance decline and if
they both converge on zero when the sample is infinite, then the estimator is
consistent.
Note: an unbiased estimator is consistent provided its variance tends to zero; but not all consistent estimators are unbiased. An estimator may be biased in finite samples (i.e. $E(\hat{\theta}) \neq \theta$) but consistent ($\operatorname{plim} \hat{\theta} = \theta$). There is a large class of estimators with this property, the most important being IV and GMM estimators.
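A simple illustration of a biased-but-consistent estimator (not the IV or GMM case; a Python sketch that is not part of the original notes) is the variance estimator that divides by $n$ instead of $n - 1$: its finite-sample bias is $-\sigma^2/n$, which vanishes as $n \to \infty$:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = 4.0   # true population variance (normal population, sd = 2)

def var_mle(x, axis=None):
    # Divides by n, not n - 1: biased in finite samples, but consistent
    return np.mean((x - np.mean(x, axis=axis, keepdims=True)) ** 2, axis=axis)

# Finite-sample bias at n = 5: E[var_mle] = (n-1)/n * sigma2 = 3.2
reps, n_small = 100_000, 5
small = var_mle(rng.normal(0.0, 2.0, size=(reps, n_small)), axis=1)
bias_small = small.mean() - sigma2   # about -sigma2/n = -0.8

# Consistency: one large sample is already very close to sigma2
big_est = var_mle(rng.normal(0.0, 2.0, size=1_000_000))
```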
1.4 Using Distribution Theory: The Sampling Distribution of the Sample Mean
Imagine drawing a random sample of n observations from a population and computing a statistic B, for example the sample mean. If we drew another sample we would, obviously, get another value for the statistic. The sample mean is thus a random variable.
We are interested in deriving the sampling distribution of this sample mean
in cases where the variable can assume any value and can come from any kind
of distribution.
Proposition 3 If $X_1, \ldots, X_n$ are a random sample and can be regarded as identically and independently distributed draws from a population with mean $\mu$ and variance $\sigma^2$, then whatever the form of the distribution of $X$, the sampling distribution of the random variable $\bar{X}$ will have mean $\mu$ and variance $\frac{\sigma^2}{n}$.
Proof. We define the sample mean as
$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i = \frac{1}{n}(X_1 + X_2 + \ldots + X_n),$$
where $X_1, X_2, \ldots, X_n$ are the $n$ observations in the sample. It is assumed that the $X_i$ are identically and independently distributed. Since $n$ is constant,
$$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}E(X_1 + X_2 + \ldots + X_n).$$
Expectation is linear (the case in which Jensen's Inequality holds with equality), so the expectation of a sum is equal to the sum of the expectations, and the mean of each $X_i$ is, by definition, $\mu$:
$$E(\bar{X}) = \frac{1}{n}E(X_1 + X_2 + \ldots + X_n) = \frac{1}{n}(\mu + \mu + \ldots + \mu) = \frac{n\mu}{n} = \mu.$$
Thus the mean of the sampling distribution of $\bar{X}$ is equal to the population mean. The variance of the sample mean is given by
$$\sigma^2_{\bar{X}} = Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2} Var\left(\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n},$$
where the last step uses the independence of the $X_i$, so that the variance of the sum is the sum of the variances.
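Proposition 3 makes no normality assumption, so it can be checked with a deliberately non-normal population. The sketch below (Python, not part of the original notes) uses an exponential population with $\mu = 2$ and $\sigma^2 = 4$ and verifies that the sampling distribution of $\bar{X}$ has mean $\mu$ and variance $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma2 = 2.0, 4.0   # exponential(scale=2): mean 2, variance 4
reps, n = 100_000, 50

# reps independent samples of size n, each reduced to its sample mean
xbar = rng.exponential(scale=2.0, size=(reps, n)).mean(axis=1)

mean_xbar = xbar.mean()  # should be close to mu = 2
var_xbar = xbar.var()    # should be close to sigma2 / n = 0.08
```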
1.5 The link with estimation theory
Theorem 4 (The Central Limit Theorem) If $x_1, \ldots, x_n$ are a random sample from any probability distribution with finite mean $\mu$ and variance $\sigma^2$, then
$$\sqrt{n}(\bar{x} - \mu) \xrightarrow{d} N(0, \sigma^2),$$
which states that the limiting distribution of the mean of the sample is Normal.
The central limit theorem is absolutely crucial to econometrics, as it allows us to base inference on the properties of the sample mean on the assumption that its distribution can be approximated by a normal distribution, whatever the true (but unknown) population distribution.
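The normal approximation can be seen directly by simulation. In the sketch below (Python, not part of the original notes), the population is exponential, i.e. heavily skewed, yet the standardized sample mean behaves like $N(0,1)$: roughly 95% of its draws fall inside $\pm 1.96$:

```python
import numpy as np

rng = np.random.default_rng(7)
reps, n = 20_000, 200
mu, sigma = 1.0, 1.0    # exponential(1): mean 1, sd 1, strongly skewed

# Standardized sample means: sqrt(n) * (xbar - mu) / sigma
z = np.sqrt(n) * (rng.exponential(1.0, size=(reps, n)).mean(axis=1) - mu) / sigma

mean_z = z.mean()                     # close to 0
var_z = z.var()                       # close to 1
coverage = np.mean(np.abs(z) < 1.96)  # close to 0.95 under N(0,1)
```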
If you are working in GAUSS, you can open a ".txt" file as an editor file. Write your code in that file and save it using a short name (e.g. ass1.txt).
If you are working in Ox and you are programming using GAUSS code, save your editor file using the extension ".src" (e.g. ass1.src).
endo;
meanc(x3); @ check the mean of x3 @
@ compute the mean of the sample means and compare with the population mean meanc(x) @
x_average = (meanc(x1) + meanc(x2) + meanc(x3)) / 3;
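For comparison, the same computation in Python (not part of the original notes; x1, x2, x3 are hypothetical stand-ins for the samples generated earlier in the GAUSS session, here drawn from a N(10, 3^2) population):

```python
import numpy as np

rng = np.random.default_rng(8)
# Stand-ins for the samples x1, x2, x3 from the GAUSS session
x1, x2, x3 = (rng.normal(10.0, 3.0, size=1_000) for _ in range(3))

mean_x3 = x3.mean()   # check the mean of x3, like meanc(x3)
# Mean of the three sample means, to compare with the population mean 10
x_average = (x1.mean() + x2.mean() + x3.mean()) / 3
```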