
Mathematics

Academic year 2012-2013


Teacher: P. Gibilisco
Lecture 1 Monday, September 17, 2012 (11:00-13:00)
Introduction to the course. List of the notions required in advance to
follow the course. Sketch of the written examination. Some examples of
the applications of mathematics in economics (risk aversion and concavity).
An informal description of the central limit theorem and related mathematical
problems (integration of the function $f(x) := \exp(-x^2)$).
Improper integrals. Evaluation of the following integrals:

\int_1^{+\infty} \frac{1}{x}\, dx = +\infty, \qquad \int_1^{+\infty} \frac{1}{x^2}\, dx < +\infty
Sequences and series. The geometric series and its sum:

1 + q + q^2 + q^3 + \cdots = \frac{1}{1-q} \quad \text{if } |q| < 1
Prove that

\sum_{n=1}^{\infty} \frac{1}{n} = +\infty, \qquad \sum_{n=1}^{\infty} \frac{1}{n^2} < +\infty

using improper integrals.
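As a quick numerical companion (added here as an illustration, not part of the original lecture), this Python sketch compares the two partial sums: the harmonic sum tracks $\log N$, while the sum of $1/n^2$ stays bounded.

    import math

    for N in (10, 1000, 100000):
        h = sum(1.0 / n for n in range(1, N + 1))       # partial sum of 1/n
        s = sum(1.0 / n**2 for n in range(1, N + 1))    # partial sum of 1/n^2
        print(N, round(h, 4), round(math.log(N), 4), round(s, 4))
    # h grows like log(N) (divergence); s approaches pi^2/6 ~ 1.6449 (convergence)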
Lecture 2 Tuesday, September 18, 2012 (10:30-13:00)
Necessary condition for convergent series (Cauchy). Absolute convergence
implies convergence. The root test and the ratio test for nonnegative
and positive series.
Discuss the character of the following series:

\sum \frac{n^2 + 2}{2^n}, \qquad \sum \frac{n!}{n^n}
Show that the root and ratio tests cannot be used to study the series:

\sum_{n=1}^{\infty} \frac{1}{n}, \qquad \sum_{n=1}^{\infty} \frac{1}{n^2}
Give an alternative proof of the convergence of the series $\sum 1/n^2$ using the
inequality

\frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n}
Two positive series $\sum a_n$, $\sum b_n$ are asymptotically equivalent iff

\lim_{n \to +\infty} \frac{a_n}{b_n} = 1

Two asymptotically equivalent positive series have the same behavior (they
both converge or diverge). Using asymptotic equivalence study the character
of the following series:
\sum \frac{n+4}{4n^2 + 5}, \qquad \sum \left( n - \sqrt{n^2 - 1} \right)
Topology of the real line: open, closed, bounded sets. Weierstrass theorem.
Euclidean distance in the plane.
Lecture 3 Wednesday, September 19, 2012 (10:30-13:00)
History of the most important numbers in mathematics: 0, 1, e, π, i.
Topology of the plane. Sequences and limits in the plane. Inequalities
in the plane. Domains of functions from $\mathbb{R}^2$ to $\mathbb{R}$.
Find the domains of the functions:

\log(x^2 + y^2 - 9), \qquad \sqrt{x + y - 1}, \qquad \frac{y - x^2}{2x - 1}
Complex numbers. The fundamental theorem of algebra. Taylor polynomials
and series. Examples: exponential and trigonometric functions.
The complex exponential. Euler's formula:

e^{i\pi} + 1 = 0
Lecture 4 Thursday, September 20, 2012 (10:30-13:00)
Leibniz criterion for alternating series.
Calculate $f'(0)$ where

f(x) =
\begin{cases}
e^{-1/x^2}, & \text{if } x \neq 0 \\
0, & \text{if } x = 0
\end{cases}

For the above function $f^{(k)}(0) = 0$ for every k. This implies that the Taylor
series converges, but not to the function itself.
Power series. Radius of convergence and its calculation.
Show that the following power series

\sum_{k=0}^{\infty} x^k, \qquad \sum_{k=0}^{\infty} \frac{x^k}{(k+1)^2}, \qquad \sum_{k=0}^{\infty} \frac{x^k}{k+1}

have the same radius of convergence. Discuss what happens when $|x|$ is
equal to the radius.
Power series in the complex plane. Absolute convergence implies convergence
also for complex series.
The radius of the power series

\sum_{k=0}^{\infty} (-1)^k x^k = \frac{1}{1+x}, \qquad \sum_{k=0}^{\infty} (-1)^k x^{2k} = \frac{1}{1+x^2}

in the light of the complex plane.
Partial derivatives.
Counterexample: show that the function $f : \mathbb{R}^2 \to \mathbb{R}$ defined by

f(x, y) =
\begin{cases}
\left( \dfrac{x^2 y}{x^4 + y^2} \right)^2, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

a) is not continuous at the origin; b) has partial derivatives at the origin.
Introduction to multiple integrals. The Fubini theorem.
Let $Q = [1, 2] \times [1, 3]$ and $f(x, y) = x^3 \exp(yx^2)$. Show that

\iint_Q f(x, y)\, dx\, dy = \frac{1}{6}\left[ e^{12} - e^3 \right] - \frac{1}{2}\left[ e^4 - e \right]
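A hedged numeric check of this value (added here; the lecture computes it by hand via Fubini), using SciPy's dblquad:

    import math
    from scipy.integrate import dblquad

    # dblquad integrates func(y, x), with x as the outer variable
    f = lambda y, x: x**3 * math.exp(y * x**2)
    val, err = dblquad(f, 1, 2, lambda x: 1, lambda x: 3)   # x in [1,2], y in [1,3]
    exact = (math.exp(12) - math.exp(3)) / 6 - (math.exp(4) - math.e) / 2
    print(val, exact)   # the two numbers agree within the quadrature error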
Lecture 5 Friday, September 21, 2012 (10:30-13:00)
The indicator function of a set. Integrable functions. A counterexample:
the Dirichlet function. A function with few discontinuities is integrable.
A remark:

\iint_{\mathbb{R}^2} g(x)h(y)\, dx\, dy = \int_{\mathbb{R}} g(x)\, dx \int_{\mathbb{R}} h(y)\, dy
Normal domains. Prove that if

A = \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; x \in [-1, 1],\; -1 + x^2 \le y \le \sqrt{1 - x^2} \right\}

then

\iint_A xy\, dx\, dy = 0
Polar coordinates. Diffeomorphisms in $\mathbb{R}$ and $\mathbb{R}^2$. Jacobian matrix. The
change of variable formula:

\iint_A g(x, y)\, dx\, dy = \iint_{h^{-1}(A)} g(h(u, v))\, |\det J_h(u, v)|\, du\, dv
The polar coordinates case:

\iint_{\mathbb{R}^2} g(x, y)\, dx\, dy = \int_0^{2\pi} \int_0^{+\infty} g(\rho \cos \theta, \rho \sin \theta)\, \rho\, d\rho\, d\theta
The formula

\int_{\mathbb{R}} e^{-x^2}\, dx = \left( \iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\, dx\, dy \right)^{1/2} = \sqrt{\pi}
The gaussian density

\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}
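A short numerical sketch of both facts (added here, not from the lecture): the value of the Gauss integral and the normalization of the gaussian density.

    import math
    from scipy.integrate import quad

    val, _ = quad(lambda x: math.exp(-x**2), -math.inf, math.inf)
    print(val, math.sqrt(math.pi))     # both ~1.7724539

    dens, _ = quad(lambda x: math.exp(-x**2 / 2) / math.sqrt(2 * math.pi),
                   -math.inf, math.inf)
    print(dens)                        # ~1.0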
Using the formula

\text{Area}(A) = \iint_A 1\, dx\, dy = \iint_{h^{-1}(A)} |\det J_h(u, v)|\, du\, dv

prove that if

E = \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; 0 < x < y < 2x,\; 1 < xy < 2 \right\}

then

\text{Area}(E) = \log \sqrt{2}
Lecture 6 Monday, September 24, 2012 (12:00-14:00)
Taylor polynomial and the character of stationary points in one dimension.
Cartesian products of sets. Groups and fields. Vector spaces, linear
transformations, scalar products. Linear independence, basis. Characterization
of linear transformations $T : \mathbb{R}^n \to \mathbb{R}$ as scalar products.
Lecture 7 Tuesday, September 25, 2012 (11:30-14:00)
Gradient, directional derivative.
Counterexample: show that the function $f : \mathbb{R}^2 \to \mathbb{R}$ defined by

f(x, y) =
\begin{cases}
\left( \dfrac{x^2 y}{x^4 + y^2} \right)^2, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

a) is not continuous at the origin (already done!); b) has all the directional
derivatives.
The notion of differentiable function and the tangent plane. Stationary
points. If a function f is differentiable then f has directional derivatives in
every direction and moreover

\langle \nabla f, v \rangle = D_v f
Cauchy-Schwarz inequality. Continuity of a differentiable function. The
sufficient condition for differentiability at a point: existence of the partial
derivatives in a neighborhood and their continuity at the point.
Show that the function

f(x) =
\begin{cases}
x^2 \sin \frac{1}{x}, & \text{if } x \neq 0 \\
0, & \text{if } x = 0
\end{cases}

has a derivative everywhere in $\mathbb{R}$ but the derivative is not continuous at 0.
Using the above result prove that the function

f(x, y) =
\begin{cases}
(x^2 + y^2) \sin \dfrac{1}{\sqrt{x^2 + y^2}}, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

is differentiable even though it has discontinuous partial derivatives.
Find the stationary points of the following function:

f(x, y) = 2x^3 + y^3 - 3x^2 - 3y + 5
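A possible SymPy sketch for this exercise (added here; classification via the Hessian is treated later, in Lecture 12):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = 2*x**3 + y**3 - 3*x**2 - 3*y + 5
    grad = [sp.diff(f, v) for v in (x, y)]
    points = sp.solve(grad, [x, y], dict=True)   # (0, ±1) and (1, ±1)
    H = sp.hessian(f, (x, y))
    for p in points:
        # the signs of the Hessian eigenvalues classify the stationary point
        print(p, H.subs(p).eigenvals())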
Lecture 8 Wednesday, September 26, 2012 (11:30-14:00)
Matrices and their operations. Representation of linear transformations
by matrices. Determinant of matrices and the inverse of a matrix. Mixed
partial derivatives.
An example: the function $f(x, y) := xy + |y|$.
The Schwarz (or Young) theorem on the equality of mixed derivatives.
Hessian matrix. Differentiability for functions $f : \mathbb{R}^n \to \mathbb{R}^m$. The Jacobian
matrix. The chain rule for differentiable functions.
Lecture 9 Thursday, September 27, 2012 (11:30-14:00)
Deterministic phenomena: the discovery of Neptune. Random phenomena.
Axiomatization of geometry and of probability: Euclid and Kolmogorov.
The law of chance as an oxymoron. Suggested reading: David
Mumford, The dawning of the age of stochasticity. The birthday paradox.
The two pillars of probability theory: on one side analysis and measure theory;
on the other side gambling situations and coin-tossing. Probability spaces
and their properties (finitely additive case). Conditional probability, pairs
of independent events ($A \perp B$ means A and B are independent).

Exercise: prove that $A \perp B \implies A \perp B^c$.

Bayes formula.
Lecture 10 Friday, September 28, 2012 (11:30-14:00)
Continuity of probability measures is equivalent to σ-additivity. An
example of a probability measure that is not σ-additive: the density for
subsets of $\mathbb{N}$.
Exercise: throw a die two times. What is the probability of getting at least
one six?
Partitions.
(How to choose a place in a queue.) An urn contains w white balls and
r red balls. Choose a ball from the urn and leave it out (without looking at
its colour). Choose a second ball. What is the probability that the second
ball is white?
The false positive paradox. Pairwise independence and independence. A
counterexample: pairwise independence does not imply independence.
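A minimal Bayes computation for the false positive paradox (added here; the numbers 0.001, 0.99 and 0.05 are illustrative, not necessarily those used in class):

    prior = 0.001    # P(disease)
    sens = 0.99      # P(positive | disease)
    fpr = 0.05       # P(positive | healthy)
    p_pos = sens * prior + fpr * (1 - prior)     # law of total probability
    posterior = sens * prior / p_pos             # Bayes formula
    print(posterior)   # ~0.019: a positive test still leaves the disease unlikely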
Lecture 11 Monday, October 1, 2012 (11:30-13:45)
Curves and tangent vectors. First and second derivatives along curves.
Taylor formula in several variables. Positive definite and semidefinite matrices.
Cramer's rule for linear equation systems. Non-trivial solutions of
homogeneous systems. Eigenvalues and eigenvectors.
Exercise: prove that the exponential function $f(x) = a^x$ is an eigenvector
of the derivative operator.
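For the record, the one-line computation behind the exercise (added here), for $a > 0$:

D\, a^x = \frac{d}{dx}\, e^{x \ln a} = (\ln a)\, e^{x \ln a} = (\ln a)\, a^x

so $a^x$ is an eigenvector of D with eigenvalue $\ln a$.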
The characteristic polynomial of a matrix.
If H is symmetric all the eigenvalues are real: prove this theorem in the
$2 \times 2$ case.
Lecture 12 Tuesday, October 2, 2012 (11:30-14:00)
If H is a symmetric matrix let us study the function

\frac{\langle Hv, v \rangle}{|v|^2}, \qquad v \neq 0
Necessary and sufficient conditions for maxima and minima. Saddle
points.
The behavior of stationary points via the eigenvalues of the Hessian
matrix.
Exercise: study the stationary points of the function

f(x, y) = x^2 + y^3 - xy

Answer: (0, 0) is a saddle point, $(\frac{1}{12}, \frac{1}{6})$ is a local minimum.
Criteria to study the sign of the roots of a cubic equation. Consider the
following equation

\lambda^3 + a\lambda^2 + b\lambda + c = 0

and suppose that all the roots $\lambda_1, \lambda_2, \lambda_3$ are real. Then:
i) the roots are all negative if and only if a, b, c > 0;
ii) the roots are all positive if and only if a < 0, b > 0, c < 0.
Exercise: study the stationary points of the function

f(x, y, z) = x^2 + y^4 + y^2 + z^3 - 2xz

Answer: (0, 0, 0) is a saddle point, $(\frac{2}{3}, 0, \frac{2}{3})$ is a local minimum.
Lecture 13 Wednesday, October 3, 2012 (11:30-14:00)
Regular curves. Two examples of non-regular curves, for $t \in [-1, 1]$:

\gamma_1(t) = (t, |t|), \qquad \gamma_2(t) = (t^2, t^3)
Constrained optimization. Lagrangian function and Lagrange multipliers.

Maximize the function

f(x, y) = xy - y^2 + 3

subject to the constraint

g(x, y) = x + y^2 - 1 = 0

using a parametric representation of the constraint.
Consider the following function

f(x, y) =
\begin{cases}
x, & \text{if } y = 0 \\
0, & \text{if } y \neq 0
\end{cases}

Compute the directional derivatives at the origin along an arbitrary
direction v. Show that the equality

\langle \nabla f(0, 0), v \rangle = \frac{\partial f}{\partial v}(0, 0)

does not hold in general.
What can you conclude about the differentiability of the function f?
Counterexample: show that the function $f : \mathbb{R}^2 \to \mathbb{R}$ defined by

f(x, y) =
\begin{cases}
\left( \dfrac{x^2 y}{x^4 + y^2} \right)^2, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

a) is not continuous at the origin (already done!); b) satisfies $\lim_{x \to 0} f(x, mx) = f(0, 0)$ for every m.
Using polar coordinates show that the function

f(x, y) =
\begin{cases}
\dfrac{xy}{\sqrt{x^2 + y^2}}, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

is continuous.
Theorem. For a power series with radius R

f(x) = \sum_{n=0}^{\infty} a_n x^n

one has that $f \in C^{\infty}(I(0, R))$ and

f^{(k)}(x) = \sum_{n=k}^{\infty} n(n-1) \cdots (n-k+1)\, a_n x^{n-k}
Using the above result prove that if $|x| < 1$ then

\sum_{k=1}^{\infty} k x^k = \frac{x}{(1-x)^2}
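A sketch of the argument (added here): differentiate the geometric series term by term inside the radius of convergence and multiply by x:

\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k \implies \frac{1}{(1-x)^2} = \sum_{k=1}^{\infty} k x^{k-1} \implies \frac{x}{(1-x)^2} = \sum_{k=1}^{\infty} k x^k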
Lecture 14 Thursday, October 4, 2012 (11:30-14:00)
Some combinatorics.
Solution of the birthday paradox.
Coin flipping and the Bernoulli process.
Countable and uncountable sets. Cardinality of N, Q, R.
Random variables: discrete and continuous case.
Distribution of a random variable.
The most important discrete r.v.: Bernoulli, binomial, geometric and
Poisson distribution.
Expected value for discrete r.v.
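An exact computation for the birthday paradox discussed above (added here), counting the complementary probability that all birthdays are distinct:

    def birthday(n: int) -> float:
        """Probability that among n people at least two share a birthday."""
        p_distinct = 1.0
        for k in range(n):
            p_distinct *= (365 - k) / 365   # k-th person avoids k taken days
        return 1.0 - p_distinct

    print(birthday(23))   # ~0.507: more likely than not with only 23 people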
Lecture 15 Friday, October 5, 2012 (15:00-17:00)
Linearity and positivity of the expected value. Moments of a r.v. Variance,
covariance and their properties.
Cauchy-Schwarz inequality again. Correlation coefficient and scale invariance.
Independence for random variables.

X \perp Y \implies \text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y)
Mean value and variance for the Bernoulli and the binomial distributions.
Mean value for the geometric and Poisson distributions.
Continuous random variables.
Densities and absolutely continuous random variables. Moments, variance
and covariance for absolutely continuous random variables.
Mean value and variance for the uniform and exponential distributions.
Lecture 16 Monday, October 8, 2012 (11:30-13:30)
Subspaces of a vector space. Intersection of subspaces. Examples: polynomials
of degree at most n in $\mathbb{R}[X]$, kernels of linear transformations, eigenspaces.
The intersection of eigenspaces w.r.t. different eigenvalues is $\{0\}$.

\langle Av, w \rangle = \langle v, A^t w \rangle
Orthogonal matrices ($A^t = A^{-1}$). If A is orthogonal then $|A(x)| = |x|$
and $\det(A)^2 = 1$.
Complex roots of real polynomials (if $P(z) = 0$ then $P(\bar{z}) = 0$). A real
polynomial of odd degree always has a real root.
Solution of the quizzes for Simulation 1 (Calculus and Linear Algebra
parts).
Lecture 17 Tuesday, October 9, 2012 (11:30-14:00)
Solution of the quizzes for Simulation 1 (Optimization).
Maximize the function

f(x, y) = xy - y^2 + 3

subject to the constraint

g(x, y) = x + y^2 - 1 = 0

using the Lagrange multiplier.
Maximize the function

f(x, y) = x^2 + y^2

subject to the constraint

g(x, y) = \frac{x^2}{a^2} + \frac{y^2}{b^2} - 1 = 0

using: a) a parametrization of the curve; b) Lagrange multipliers.
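A SymPy sketch for the first problem (added here): solve the stationarity conditions of the Lagrangian $L = f - \lambda g$.

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    f = x*y - y**2 + 3
    g = x + y**2 - 1
    L = f - lam * g                                # the Lagrangian
    eqs = [sp.diff(L, v) for v in (x, y, lam)]     # dL/dlam = 0 is the constraint
    for s in sp.solve(eqs, [x, y, lam], dict=True):
        print(s, f.subs(s))                        # candidate points and values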
Using the equality

e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}

as a definition of the exponential function prove that $De^x = e^x$.
Symmetric and skew-symmetric matrices. Eigenvalues for the transpose
transformation.
Properties of the transpose and of the inverse matrix:

(AB)^t = B^t A^t, \qquad (AB)^{-1} = B^{-1} A^{-1}
Differentiating under the integral sign: if $f(x, t) = (2x + t^3)^2$ prove that

\frac{d}{dt} \int_0^1 f(x, t)\, dx = \int_0^1 \frac{\partial}{\partial t} f(x, t)\, dx
Using: i) differentiation under the integral sign; ii) mathematical induction;
iii) the substitution x = tu in the formula

\int_0^{+\infty} e^{-x}\, dx = 1

prove that

\int_0^{+\infty} x^n e^{-tx}\, dx = \frac{n!}{t^{n+1}}

which implies the Euler integral formula for n!, namely

\int_0^{+\infty} x^n e^{-x}\, dx = n!
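A numeric check of the Euler integral (added here):

    import math
    from scipy.integrate import quad

    for n in range(6):
        val, _ = quad(lambda x, n=n: x**n * math.exp(-x), 0, math.inf)
        print(n, val, math.factorial(n))   # val matches n!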
Lecture 18 Tuesday, October 9, 2012 (15:00-17:00)
Convergence of the binomial distribution to the Poisson.
Independence implies uncorrelatedness but not vice versa (example: X and
$X^2$ where X has a distribution symmetric w.r.t. 0).
If X is an a.c. r.v. with density $f_X$ then

F_X' = f_X
Moments for a.c. r.v.
Density of $aX + b$ from the density of X.
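For completeness, the change-of-variables formula behind this point (added here), for $a \neq 0$:

f_{aX+b}(y) = \frac{1}{|a|}\, f_X\!\left( \frac{y - b}{a} \right)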
Solution of the quizzes 1 and 3 for the Simulation 1, Probability.
The Gaussian distribution.
Lecture 19 Thursday, October 11, 2012 (11:30-14:00)
Using the substitution $x = \sqrt{t}\, u$ in the formula

\int_{\mathbb{R}} e^{-x^2/2}\, dx = \sqrt{2\pi}

and differentiation under the integral sign prove that

\int_{\mathbb{R}} x^2 e^{-tx^2/2}\, dx = \sqrt{2\pi}\, t^{-3/2}

and therefore if $X \sim \mathcal{N}(0, 1)$ then Var(X) = 1.
Solution of quiz 2 for the Simulation 1, Probability.
Convergence in probability.
Chebyshev's inequality.
The weak law of large numbers.
Random vectors. Marginals.
Lecture 20 Friday, October 12, 2012 (11:30-14:00)
The characteristic function, first properties. Moments and the characteristic
function.

X \perp Y \implies \varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t)

X \sim \text{Poisson}(\lambda) \implies \varphi_X(t) = \exp(\lambda(e^{it} - 1))

Prove that if $X \sim \text{Poisson}(\lambda)$, $Y \sim \text{Poisson}(\mu)$ and $X \perp Y$ then $X + Y \sim \text{Poisson}(\lambda + \mu)$:
a) directly; b) using the characteristic function.
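A sketch of part b) (added here): by independence the characteristic functions multiply,

\varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t) = e^{\lambda(e^{it}-1)}\, e^{\mu(e^{it}-1)} = e^{(\lambda+\mu)(e^{it}-1)}

which is the characteristic function of a Poisson(λ + μ), so the claim follows from the uniqueness of characteristic functions.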
Simulation n. 2 of the written examination (Probability part)
Exercise 1. 10 points (Not done yet!)
(X, Y) is a random vector with uniform density on $A = \{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 \le 1\}$.
Let U := X + Y.
i) Are X and Y independent?
ii) Calculate $F_U(-2)$, $F_U(0)$, $F_U(2)$.
iii) Let $f_X$ be the density of X: where does $f_X$ have its maximum?
Exercise 2. 10 points
$X_1, X_2, \ldots$ is a sequence of i.i.d. random variables with binomial distribution
$B(7, \frac{1}{3})$. Define $Y_i := (X_i - E(X_i))^2$ and

S_k := Y_1 + \cdots + Y_k

Find a real number $\ell$ such that for any $\varepsilon > 0$

\lim_{k \to +\infty} P\left( \left| \frac{S_k}{k} - \ell \right| \ge \varepsilon \right) = 0
Quiz 1. Correct answer: points 2; Wrong answer: points -1; No answer:
points 0
If Var(X + Y ) = Var(X) + Var(Y ) then X, Y are independent random
variables.
TRUE FALSE
Quiz 2.
If X, Y are gaussian random variables then X+Y is also gaussian. (Not
done yet!)
TRUE FALSE
Quiz 3.
If X has a density $f_X(\cdot)$ then $P(X = 2 \text{ or } X = 3) = 0$.
TRUE FALSE
Quiz 4.
If $X \sim B(2, \frac{1}{2})$ and $Y \sim \text{Poisson}(1)$ then $P(X = 0) < P(Y = 0)$.
TRUE FALSE
Quiz 5.
For the cumulative distribution function we always have $F_X(0) = 1$.
TRUE FALSE
Simulation n. 3 of the written examination (Probability part)
Exercise 1. 10 points (Not done yet!)
Two fair dice are rolled. Let us denote by X the result of the first die
and by Y the result of the second die. Let $Z := X - Y$.
a) What is the distribution of Z?
b) Calculate $\text{Cov}(Z, Z^2)$.
c) Are Z and $Z^2$ independent?
Exercise 2. 10 points
Let $g : \mathbb{R} \to \mathbb{R}$ be the function

g(x) := \frac{1}{x}\, \mathbf{1}_{[1,b]}(x), \qquad b > 1

a) Find b such that $g(\cdot)$ is a probability density.
b) Let X be a r.v. with density $g(\cdot)$. Calculate the c.d.f. $F_X(\cdot)$.
c) Calculate Var(X).
Quiz 1. Correct answer: points 2; Wrong answer: points -1; No answer:
points 0
If $E(Y \mid X) = E(Y)$ then Var(X + Y) = Var(X) + Var(Y). (Not done yet!)
TRUE FALSE
Quiz 2.
If $X \sim B(n, p)$ then $|X| \sim B(n, p)$.
TRUE FALSE
Quiz 3.
The c.d.f. $F_X$ is always a continuous function.
TRUE FALSE
Quiz 4.
If $X \sim \mathcal{N}(0, 1)$ then X and $X^2$ are uncorrelated.
TRUE FALSE
Quiz 5.
An event cannot be independent of itself.
TRUE FALSE
Lecture 21 Monday, October 15, 2012 (11:30-13:30)
Metric spaces. Norms and scalar products on vector spaces.
The space $C([0, 1])$ of continuous functions on the unit interval. The
subspace of the constant functions.
The sup norm and distance. Pointwise and uniform convergence
of functions. Uniform convergence implies pointwise convergence but not
vice versa.
Theorem: if $f_n$ converges to f uniformly then

\lim_{n \to +\infty} \int_0^1 f_n(x)\, dx = \int_0^1 f(x)\, dx
Counterexamples. 1) A sequence $f_n \to f$ pointwise such that $\int f_n \not\to \int f$.
2) A sequence $f_n \to f$ pointwise such that $\int f_n \to \int f$ but $f_n \not\to f$ uniformly.
The $L^2$ scalar product, norm and distance.
Lecture 22 Tuesday, October 16, 2012 (11:30-13:30)
Exercise. Prove that if $f \in C([0, 1])$, the constant nearest to f (w.r.t. the $L^2$
distance) is $\int_0^1 f$.
Linear independence, basis, dimension of vector spaces.
Examples: $\dim(\mathbb{R}^n) = n$; $\dim(\mathbb{R}[X]) = \infty$.
Invariance of the determinant under addition of linear combinations of
rows (columns).
Solution of the exercises of the Simulation n.1, Linear Algebra part.
Lecture 23 Wednesday, October 17, 2012 (10:30-13:00)
Introduction to differential equations. The Cauchy problem. Linear
differential equation of first order (homogeneous case): for an interval $I \subseteq \mathbb{R}$,
$x_0 \in I$ and $y_0 \in \mathbb{R}$ the unique solution of the Cauchy problem

\begin{cases}
y' = ay & \text{in } I \\
y(x_0) = y_0
\end{cases}

is given by

y(x) = y_0 \exp\left( \int_{x_0}^x a(s)\, ds \right)
Exercises

Solve the Cauchy problem

\begin{cases}
y' = \sqrt{x}\, y & x \ge 0 \\
y(0) = 2
\end{cases}

Solution: $y(x) = 2 \exp\left( \frac{2}{3} x \sqrt{x} \right)$.
Solve the Cauchy problem

\begin{cases}
y' = -xy & x \in \mathbb{R} \\
y(0) = 1
\end{cases}

Solution: $y(x) = e^{-x^2/2}$.
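A SymPy check of both Cauchy problems (added here):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    sol1 = sp.dsolve(y(x).diff(x) - sp.sqrt(x)*y(x), y(x), ics={y(0): 2})
    sol2 = sp.dsolve(y(x).diff(x) + x*y(x), y(x), ics={y(0): 1})
    print(sol1)   # expected: y(x) = 2*exp(2*x**(3/2)/3)
    print(sol2)   # expected: y(x) = exp(-x**2/2)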
If $X \sim \mathcal{N}(0, 1)$ prove (using differentiation under the integral sign and
integration by parts) that the characteristic function satisfies

\varphi_X'(t) = -t\, \varphi_X(t)

Deduce that $\varphi_X(t) = e^{-t^2/2}$.
Study the function $F(x, y, z) = x + 3y - z$ under the constraints

\begin{cases}
x^2 + y^2 - z = 0 \\
z - 2x - 4y = 0
\end{cases}
Lectures 24-25 Thursday, October 18, 2012 (11:00-14:00 and 15:00-16:00)
Standardized random variables

X^* := \frac{X - E(X)}{\sigma(X)}
Taylor formula for the characteristic function (especially the standardized
case).
Convergence in law (in distribution).
Example: if $X_n$ and X are random variables whose values are natural
numbers then convergence in law is equivalent to

P(X_n = k) \to P(X = k) \qquad \forall k \in \mathbb{N}
If $X_n \sim B(n, p)$ then $\varphi_{X_n}(t) = (pe^{it} + q)^n$.
Example: if $X_n \sim B(n, \frac{\lambda}{n})$ and $X \sim \text{Poisson}(\lambda)$ then $X_n \Rightarrow X$ in law.
Continuity theorem

X_n \Rightarrow X \text{ in law iff } \varphi_{X_n}(t) \to \varphi_X(t) \quad \forall t \in \mathbb{R}
Example: prove that if $X_n \sim B(n, \frac{\lambda}{n})$ and $X \sim \text{Poisson}(\lambda)$ then
$X_n \Rightarrow X$ in law using the characteristic function and the continuity
theorem.
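A numeric illustration (added here): for fixed λ the B(n, λ/n) probabilities approach the Poisson(λ) ones as n grows.

    from math import comb, exp, factorial

    lam, k = 2.0, 3
    for n in (10, 100, 1000):
        p = lam / n
        binom = comb(n, k) * p**k * (1 - p)**(n - k)   # P(X_n = k)
        poiss = exp(-lam) * lam**k / factorial(k)      # P(X = k)
        print(n, binom, poiss)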
Inversion theorem

X, Y \text{ have the same distribution iff } \varphi_X(t) = \varphi_Y(t) \quad \forall t \in \mathbb{R}

The limit

\left( 1 + \frac{c}{n} \right)^n \to e^c
CLT. $X_1, \ldots, X_n, \ldots$ i.i.d. random variables with finite mean and variance.
Then

S_n^* \Rightarrow \mathcal{N}(0, 1) \text{ in law}

Meaning of the CLT: the sum of (infinitely) many, small, independent
effects is (approximately) normal (gaussian).
Characteristic function of the gaussian.
Using the characteristic function prove that the sum of independent
Gaussian r.v. is Gaussian (not true without independence).
The $L^2$ scalar product for random variables. Example: a two-point
space and the euclidean distance.
Subspaces of random variables. The random variable $\varphi(Y)$ nearest to
X.
Conditional expectation for discrete random variables (linearity, positivity,
constants).

E(g(Y)X \mid Y) = g(Y)\, E(X \mid Y)

E(g(Y)\, E(X \mid Y)) = E(g(Y)X)

(Namely: any r.v. g(Y) is orthogonal to the r.v. $X - E(X \mid Y)$.)

Law of iterated expectations

E(E(X \mid Y)) = E(X)
E((X - g(Y))^2) = E((X - E(X \mid Y))^2) + E((g(Y) - E(X \mid Y))^2) \ge E((X - E(X \mid Y))^2)

Namely: $E(X \mid Y)$ is the random variable $\varphi(Y)$ nearest to X.
The conditional variance

\text{Var}(X \mid Y) = E((X - E(X \mid Y))^2 \mid Y)

\text{Var}(X \mid Y) = E(X^2 \mid Y) - E(X \mid Y)^2

The law of total variance

\text{Var}(X) = E(\text{Var}(X \mid Y)) + \text{Var}(E(X \mid Y))

S random sign, $X \sim \mathcal{N}(0, 1)$, $S \perp X$: then $SX \sim \mathcal{N}(0, 1)$.
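A simulation sketch of the last fact (added here): SX is again standard normal, although SX and X are clearly not independent.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal(100_000)
    S = rng.choice([-1.0, 1.0], size=X.size)   # random sign independent of X
    SX = S * X
    print(SX.mean(), SX.var())                 # ~0 and ~1
    print(np.cov(SX, X)[0, 1])                 # ~0: uncorrelated, yet dependent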
Lecture 26 Monday, October 22, 2012 (11:30-13:30)
The composition of affine transformations is affine. If

L \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} + b

then the Jacobian matrix is $J_L = A$.
Gaussian vectors: the standard case.
Gaussian vectors: the general case.
If X, Y are jointly gaussian random variables then $X \perp Y$ is equivalent
to Cov(X, Y) = 0.
Lecture 27 Tuesday, October 23, 2012 (11:00-14:00)
For which $x \in \mathbb{R}$ does the series

\sum_{n=0}^{\infty} \frac{(x-1)^n}{2^n + 3}

converge?
Find the radius of convergence of the series

\sum_{n=0}^{\infty} (2^n + 3^n) x^n
Let $D \subseteq \mathbb{R}^2$ be the bounded region delimited by the curves $y = x^2$, $y = 0$
and $x = 2$. Calculate in two different ways

\iint_D (x^2 + y^2)\, dx\, dy
Study the stationary points of the function

f(x, y) = x^2 - 2x + y^2 - 4y + 2
Solve the Cauchy problem

\begin{cases}
y' = xy & x \in \mathbb{R} \\
y(0) = 3
\end{cases}

Solution: $y(x) = 3 \exp\left( \frac{1}{2} x^2 \right)$.
Find the eigenvalues and eigenvectors of the matrix

B = \begin{pmatrix} 3 & -2 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
Exercise 1 of Simulation 1, Probability part.
Lecture 28 Wednesday, October 24, 2012 (11:00-14:00)
Maximize and minimize the function

f(x, y) = x^2 + 2y^2 + 3

subject to the constraint

2x^2 + y^2 = 4
Consider the three vectors

v_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 1 \\ 1 \\ k \end{pmatrix}, \qquad v_3 = \begin{pmatrix} 1 \\ k \\ 2 \end{pmatrix}
1) For which values of the parameter k do they form a basis of $\mathbb{R}^3$?
2) Let A be the $3 \times 3$ matrix whose columns are given by $v_1, v_2, v_3$, for the
values of k satisfying the condition of point 1). Solve the linear
equation system

A \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ k \\ 3 \end{pmatrix}

using: i) the Cramer rule; ii) substitution-elimination techniques.
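A SymPy sketch (added here), assuming the vectors are as reconstructed above:

    import sympy as sp

    k = sp.symbols('k')
    A = sp.Matrix([[1, 1, 1], [1, 1, k], [2, k, 2]])   # columns v1, v2, v3
    print(sp.factor(A.det()))          # -(k - 1)*(k - 2): basis iff k != 1, 2
    b = sp.Matrix([1, k, 3])
    print(sp.simplify(A.solve(b)))     # symbolic solution, valid for k != 1, 2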
If V is a vector space and $\langle \cdot, \cdot \rangle$ is a scalar product on V (with associated
norm $\|v\| = \sqrt{\langle v, v \rangle}$) prove the parallelogram identity

\|v + w\|^2 + \|v - w\|^2 = 2\|v\|^2 + 2\|w\|^2
Define, for $P = (x, y) \in \mathbb{R}^2$,

\|P\|_1 = |x| + |y|

i) Prove that $\| \cdot \|_1$ is a norm. ii) Prove, by a counterexample, that
the parallelogram identity does not hold (this implies that no scalar
product can induce the norm $\| \cdot \|_1$).
Study the convergence of the series

\sum_{n=1}^{\infty} \frac{\sin n}{n^2}
If $z = 2 - 3i$ then $\text{Im}(z^{-1}) = \frac{3}{13}$: true or false?
Quiz 4 of Simulation 1, Probability part.

S random sign, X A(0, 1), S X then E(SX[X) = E(SX) but


SX , X.
Conclusion: we have that

X \perp Y \implies E(X \mid Y) = E(X) \implies \text{Cov}(X, Y) = 0

while the implications cannot be reversed:

X \perp Y \;\not\Longleftarrow\; E(X \mid Y) = E(X) \;\not\Longleftarrow\; \text{Cov}(X, Y) = 0
Lecture 29 Thursday, October 25, 2012 (11:30-14:00)
Consider the function

f(x, y) =
\begin{cases}
(x^2 + y^2) \log(x^2 + y^2), & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

i) Is the function continuous? ii) Does it have partial derivatives
at (0, 0)? iii) Is the function differentiable?
Calculate

\iint_A xy\, dx\, dy

where

A = \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; x^2 + y^2 < 1,\; x^2 + y^2 < 2x,\; y > 0 \right\}
Study the convergence of the power series

\sum_{n=0}^{\infty} \frac{n+1}{n+2} \cdot \frac{x^n}{3^n}, \qquad \sum_{n=0}^{\infty} \frac{1}{n^2 + 2} \cdot \frac{x^{2n}}{2^n}
Counterexample: a series for which the root test succeeds and the
ratio test fails:

\sum_{n=1}^{\infty} 2^{((-1)^n - n)}
Quizzes

The union of two vector subspaces is a vector subspace.

For a square matrix, $A^2 = 0$ implies $A = 0$.

If $f \ge 0$ and $K := \int_{\mathbb{R}} f \neq 0, +\infty$, then $\frac{f}{K}$ is a density.
Exercise on Probability

$X \sim \text{Poisson}(\lambda)$, $Y \sim \text{Poisson}(\mu)$ and $X \perp Y$ implies

E(X \mid X + Y) = \frac{\lambda}{\lambda + \mu} (X + Y)
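A sketch of the computation (added here): given X + Y = n, X is binomial with success probability $\frac{\lambda}{\lambda+\mu}$,

P(X = k \mid X + Y = n) = \binom{n}{k} \left( \frac{\lambda}{\lambda+\mu} \right)^k \left( \frac{\mu}{\lambda+\mu} \right)^{n-k}

so $E(X \mid X + Y = n) = n \frac{\lambda}{\lambda+\mu}$, which is the stated identity.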
Lecture 30 Friday, October 26, 2012 (11:30-14:00)
Consider the function

f(x, y) =
\begin{cases}
xy\, \dfrac{x^2 - y^2}{x^2 + y^2}, & \text{if } (x, y) \neq (0, 0) \\
0, & \text{if } (x, y) = (0, 0)
\end{cases}

and show that $f_{xy}(0, 0) \neq f_{yx}(0, 0)$. What can you deduce about the
mixed derivatives of second order?
Let $V = C^{\infty}(\mathbb{R})$ be the vector space of the functions on the real line
that have derivatives of every order. Let $D : V \to V$ be the derivative
(linear) operator. Find the eigenvectors and eigenvalues of D.
If 0 is an eigenvalue of a linear transformation T then T is not injective.
True or false?
22
Every matrix $A \neq 0$ has an inverse. True or false?
Suppose that the matrix A is symmetric and that $\lambda, \mu$ are distinct
eigenvalues. If v is an eigenvector w.r.t. $\lambda$ and w is an eigenvector
w.r.t. $\mu$ then v, w are orthogonal. True or false?
If $(X, Y) \sim \mathcal{N}(b, \Sigma)$ and

\begin{pmatrix} U \\ V \end{pmatrix} = B \begin{pmatrix} X \\ Y \end{pmatrix} + c

then $(U, V) \sim \mathcal{N}(Bb + c, B \Sigma B^t)$.
Let us consider a Gaussian vector

\begin{pmatrix} X \\ Y \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 & 2 \\ 2 & 8 \end{pmatrix} \right)

Let V = X + Y.
i) Find the distribution of the random vector (X, V).
ii) Which random variables of the form $V = X + \alpha Y$ are independent
of X?
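A numeric sketch for point i) (added here): with $(X, V)^t = B (X, Y)^t$, the vector (X, V) is gaussian with mean Bb and covariance $B \Sigma B^t$.

    import numpy as np

    b = np.array([1.0, 1.0])
    Sigma = np.array([[1.0, 2.0], [2.0, 8.0]])
    B = np.array([[1.0, 0.0], [1.0, 1.0]])   # maps (X, Y) to (X, X + Y)
    print(B @ b)               # mean of (X, V): [1. 2.]
    print(B @ Sigma @ B.T)     # covariance of (X, V): [[1. 3.], [3. 13.]]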
Density of the polynomials in the space of continuous functions: the Bernstein
proof of the Weierstrass approximation theorem.
Lecture 31 Thursday, November 29, 2012 (11:00-13:00)
Optimization w.r.t. inequality constraints. Binding constraints. Complementary
slackness condition. Non-degenerate constraint qualification.
Examples from Simon-Blume: 18.7 at page 428, 18.9 at page 431, 18.10 at
page 435.