arXiv:1002.4546v1 [math.PR] 24 Feb 2010
Nonlinear Expectations and Stochastic
Calculus under Uncertainty
with Robust Central Limit Theorem and G-Brownian Motion
Shige PENG
Institute of Mathematics
Shandong University
250100, Jinan, China
peng@sdu.edu.cn
Version: first edition
Preface
This book is focused on recent developments in problems of probability models under uncertainty, using the notion of nonlinear expectations and, in particular, sublinear expectations. Roughly speaking, a nonlinear expectation E is a monotone and constant preserving functional defined on a linear space of random variables. We are particularly interested in sublinear expectations, i.e., E[X + Y] ≤ E[X] + E[Y] for all random variables X, Y, and E[λX] = λE[X] for λ ≥ 0.

A sublinear expectation E can be represented as the upper expectation of a subset of linear expectations {E_θ : θ ∈ Θ}, i.e., E[X] = sup_{θ∈Θ} E_θ[X]. In most cases, this subset is treated as an uncertain model of probabilities {P_θ : θ ∈ Θ}. For example, when a statistician obtains a sample {x_1, ..., x_N} of a random variable X, he can directly use the average (1/N) Σ_{i=1}^N x_i to calculate its mean, and in general he uses (1/N) Σ_{i=1}^N φ(x_i) for the mean of φ(X). We will discuss this issue in detail after the overview of our new law of large numbers (LLN) and central limit theorem (CLT).
A theoretical foundation of the above expectation framework is our new
LLN and CLT under sublinear expectations. Classical LLN and CLT have been
widely used in probability theory, statistics, data analysis as well as in many
practical situations such as financial pricing and risk management. They provide
a strong and convincing way to explain why in practice normal distributions are
so widely used. But a serious problem is that, in general, the i.i.d.
condition is difficult to satisfy. In practice, for most real-time processes
and data for which the classical trials and samplings become impossible, the
uncertainty of probabilities and distributions cannot be neglected. In fact, the
abuse of normal distributions in finance and many other industrial or commercial
domains has been criticized.
Our new CLT does not need this strong i.i.d. assumption. Instead of
fixing a probability measure P, we introduce an uncertain subset of probability
measures {P_θ : θ ∈ Θ} and consider the corresponding sublinear expectation
E[X] = sup_{θ∈Θ} E_θ[X]. Our main assumptions on the sequence X_1, X_2, ... are:

(i) Each X_i has the same distributional uncertainty, with μ = −E[−X_i] and μ̄ = E[X_i];

(ii) Any realization of X_1, ..., X_n does not change the distributional uncertainty of X_{n+1}.

Under E, we call X_1, X_2, ... identically distributed if condition (i) is
satisfied, and we call X_{n+1} independent from X_1, ..., X_n if condition (ii) is
fulfilled. Mainly under the above weak i.i.d. assumptions, we have proved
that for each continuous function φ with linear growth we have the following
LLN:

lim_{n→∞} E[φ(S_n/n)] = sup_{μ ≤ v ≤ μ̄} φ(v).

Namely, the uncertain subset of the distributions of S_n/n is approximately a
subset of Dirac measures {δ_v : μ ≤ v ≤ μ̄}.
In particular, if μ = μ̄ = 0, then S_n/n converges in law to 0. In this case, if
we assume furthermore that σ̄² = E[X_i²] and σ² = −E[−X_i²], i = 1, 2, ..., then
we have the following generalization of the CLT:

lim_{n→∞} E[φ(S_n/√n)] = E[φ(X)].

Here X is called G-normal distributed and denoted by N(0, [σ², σ̄²]). The
value E[φ(X)] can be calculated by defining u(t, x) := E[φ(x + √t X)], which
solves the partial differential equation (PDE) ∂_t u = G(u_xx) with
G(a) := (1/2)(σ̄² a⁺ − σ² a⁻), where a⁺ := max{a, 0} and a⁻ := max{−a, 0}.
If φ is a convex function, then

E[φ(X)] = (1/√(2πσ̄²)) ∫_{−∞}^{∞} φ(x) exp(−x²/(2σ̄²)) dx,

but if φ is a concave function, the above σ̄² must be replaced by σ². If σ = σ̄ = σ,
then N(0, [σ², σ̄²]) = N(0, σ²), which is a classical normal distribution.
This result provides a new way to explain a well-known puzzle: many practitioners,
e.g., traders and risk officials in financial markets, widely use normal
distributions without serious data analysis or even with inconsistent data. In
many typical situations E[φ(X)] can be calculated by using normal distributions
with a careful choice of parameters, but it is also a high-risk calculation if
the reasoning behind it has not been understood.
We call N(0, [σ², σ̄²]) the G-normal distribution. This new type of sublinear
distribution was first introduced in Peng (2006) [100] (see also [102], [103],
[104], [105]) for a new type of G-Brownian motion and the related calculus of
Itô's type. The main motivations were uncertainties in statistics, measures of
risk and superhedging in finance (see El Karoui, Peng and Quenez (1997) [44],
Artzner, Delbaen, Eber and Heath (1999) [3], Chen and Epstein (2002) [19],
Föllmer and Schied (2004) [51]). Fully nonlinear super-hedging is also a possible
domain of applications (see Avellaneda, Levy and Paras (1995) [5], Lyons
(1995) [82]; see also Cheridito, Soner, Touzi and Victoir (2007) [23], where a new
BSDE approach was introduced).
Technically we introduce a new method to prove our CLT on a sublinear
expectation space. This proof is short since we have borrowed a deep interior
estimate of fully nonlinear partial differential equations (PDE) from Krylov (1987)
[76]. In fact, the theory of fully nonlinear parabolic PDE plays an essential
role in deriving our new results of LLN and CLT. In the classical situation the
corresponding PDE becomes a heat equation, which is often hidden behind its
heat kernel, i.e., the normal distribution. In this book we use the powerful notion
of viscosity solutions for our nonlinear PDE, initially introduced by Crandall
and Lions (1983) [29]. This notion is especially useful when the equation is
degenerate. For readers' convenience, we provide an introductory chapter in
Appendix C. If readers are only interested in the classical non-degenerate cases,
the corresponding solutions become smooth (see the last section of Appendix C).
We define a sublinear expectation on the space of continuous paths from R₊
to R^d, which is an analogue of Wiener's law, and by which a G-Brownian motion
is formulated. Briefly speaking, a G-Brownian motion (B_t)_{t≥0} is a continuous
process with independent and stationary increments under a given sublinear
expectation E.
G-Brownian motion has a very rich and interesting new structure which non-
trivially generalizes the classical one. We can establish the related stochastic
calculus, especially Itô's integrals and the related quadratic variation process
⟨B⟩. A very interesting new phenomenon of our G-Brownian motion is that its
quadratic variation process ⟨B⟩ is also a continuous process with independent
and stationary increments, and thus can still be regarded as a Brownian motion.
The corresponding G-Itô's formula is obtained. We have also established the
existence and uniqueness of solutions to stochastic differential equations under
our stochastic calculus by the same Picard iterations as in the classical situation.
New norms were introduced in the notion of G-expectation, by which the
corresponding stochastic calculus becomes significantly more flexible and powerful.
Many interesting, attractive and challenging problems are also automatically
provided within this new framework.
In this book we adopt a novel method to present our G-Brownian motion
theory. In the first two chapters, as well as the first two sections of Chapter III,
our sublinear expectations are only assumed to be finitely sub-additive, instead
of σ-sub-additive. This is because all the related results obtained in this
part do not need the σ-sub-additivity assumption, and readers need not even
have a background in classical probability theory. In fact, in the whole of
the first five chapters we only use very basic knowledge of functional analysis,
such as the Hahn-Banach theorem (see Appendix A). A special situation is when
all the sublinear expectations in this book become linear. In this case this book
can still be considered as a new and very simple approach to teaching the
classical Itô stochastic calculus, since it does not need any knowledge
of probability theory. This is an important advantage of using expectation as our
basic notion.
The authentic probabilistic parts, i.e., the pathwise analysis of our G-Brownian
motion and the corresponding random variables, viewed as functions of the
G-Brownian path, are presented in Chapter VI. Here, just as in the classical P-sure
analysis, we introduce c-sure analysis for the G-capacity c. Readers who are not
interested in the deeper parts of the stochastic analysis of G-Brownian motion theory
need not read this chapter.
This book is based on the author's Lecture Notes [102] for several series of
lectures: the 2nd Workshop "Stochastic Equations and Related Topics", Jena,
July 23–29, 2006; Graduate Courses of the Yantai Summer School in Finance, Yantai
University, July 06–21, 2007; Graduate Courses of the Wuhan Summer School,
July 24–26, 2007; Mini-Course of the Institute of Applied Mathematics, AMSS,
April 16–18, 2007; Mini-courses at Fudan University, May 2007 and August 2009;
Graduate Courses of CSFI, Osaka University, May 15–June 13, 2007; Minerva
Research Foundation Lectures at Columbia University in Fall 2008; the Mini-
Workshop on G-Brownian motion and G-expectations in Weihai, July 2009; a
series of talks in the Department of Applied Mathematics, Hong Kong Polytechnic
University, November–December 2009; and an intensive course at the WCU Center
for Financial Engineering, Ajou University. The hospitality and encouragement
of the above institutions and the enthusiasm of the audiences were the
main engine driving these lecture notes. I am grateful for the many comments and
suggestions given during those courses, especially from Li Juan and Hu Mingshang.
During the preparation of this book, a special reading group was organized with
members Hu Mingshang, Li Xinpeng, Xu Xiaoming, Lin Yiqing, Su Chen, Wang
Falei and Yin Yue. They proposed very helpful suggestions for the revision of
the book. Hu Mingshang and Li Xinpeng made a great effort on the final
edition. Their efforts were decisively important to the realization of this book.
Contents

Chapter I Sublinear Expectations and Risk Measures
1 Sublinear Expectations and Sublinear Expectation Spaces
2 Representation of a Sublinear Expectation
3 Distributions, Independence and Product Spaces
4 Completion of Sublinear Expectation Spaces
5 Coherent Measures of Risk
Notes and Comments

Chapter II Law of Large Numbers and Central Limit Theorem
1 Maximal Distribution and G-normal Distribution
2 Existence of G-distributed Random Variables
3 Law of Large Numbers and Central Limit Theorem
Notes and Comments

Chapter III G-Brownian Motion and Itô's Integral
1 G-Brownian Motion and its Characterization
2 Existence of G-Brownian Motion
3 Itô's Integral with G-Brownian Motion
4 Quadratic Variation Process of G-Brownian Motion
5 The Distribution of ⟨B⟩
6 G-Itô's Formula
7 Generalized G-Brownian Motion
8 G̃-Brownian Motion under a Nonlinear Expectation
9 Construction of G̃-Brownian Motions under Nonlinear Expectation
Notes and Comments

Chapter IV G-martingales and Jensen's Inequality
1 The Notion of G-martingales
2 On G-martingale Representation Theorem
3 G-convexity and Jensen's Inequality for G-expectations
Notes and Comments

Chapter V Stochastic Differential Equations
1 Stochastic Differential Equations
2 Backward Stochastic Differential Equations
3 Nonlinear Feynman-Kac Formula
Notes and Comments

Chapter VI Capacity and Quasi-Surely Analysis for G-Brownian Paths
1 Integration Theory associated to an Upper Probability
2 G-expectation as an Upper Expectation
3 G-capacity and Paths of G-Brownian Motion
Notes and Comments

Appendix A Preliminaries in Functional Analysis
1 Completion of Normed Linear Spaces
2 The Hahn-Banach Extension Theorem
3 Dini's Theorem and Tietze's Extension Theorem

Appendix B Preliminaries in Probability Theory
1 Kolmogorov's Extension Theorem
2 Kolmogorov's Criterion
3 Daniell-Stone Theorem

Appendix C Solutions of Parabolic Partial Differential Equation
1 The Definition of Viscosity Solutions
2 Comparison Theorem
3 Perron's Method and Existence
4 Krylov's Regularity Estimate for Parabolic PDE

Bibliography
Index of Symbols
Index
Chapter I
Sublinear Expectations and
Risk Measures
The sublinear expectation is also called the upper expectation or the upper
prevision, and this notion is used in situations where the probability models
have uncertainty. In this chapter, we present the basic notion of sublinear
expectations and the corresponding sublinear expectation spaces. We give the
representation theorem of a sublinear expectation and the notions of distributions
and independence under the framework of sublinear expectations. We
also introduce a natural Banach norm of a sublinear expectation in order to get
the completion of a sublinear expectation space, which is a Banach space. As
a fundamentally important example, we introduce the notion of coherent risk
measures in finance. A large part of the notions and results in this chapter will be
used throughout this book.
1 Sublinear Expectations and Sublinear Expec-
tation Spaces
Let Ω be a given set and let H be a linear space of real valued functions defined
on Ω. In this book, we suppose that H satisfies c ∈ H for each constant c and
|X| ∈ H if X ∈ H. The space H can be considered as the space of random
variables.

Definition 1.1 A sublinear expectation E is a functional E : H → R satisfying

(i) Monotonicity: E[X] ≥ E[Y] if X ≥ Y.

(ii) Constant preserving: E[c] = c for c ∈ R.
(iii) Sub-additivity: For each X, Y ∈ H, E[X + Y] ≤ E[X] + E[Y].

(iv) Positive homogeneity: E[λX] = λE[X] for λ ≥ 0.

The triple (Ω, H, E) is called a sublinear expectation space. If only (i) and
(ii) are satisfied, E is called a nonlinear expectation and the triple (Ω, H, E)
is called a nonlinear expectation space.
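A minimal executable illustration of Definition 1.1 (our own toy model, not part of the book): on a finite sample space, the upper expectation over finitely many probability vectors is a sublinear expectation, and properties (i)-(iv) can be verified directly.

```python
import numpy as np

# Toy sublinear expectation on a three-point sample space Omega = {w1, w2, w3}:
# the "uncertain model" is a finite family of probability vectors, and
# E[X] is the upper (worst-case) expectation over that family.
PROBS = np.array([[0.2, 0.5, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.3, 0.3, 0.4]])

def E(X):
    """Upper expectation: sup over the family of linear expectations."""
    return float(np.max(PROBS @ np.asarray(X, dtype=float)))
```

Sub-additivity holds because a maximum of sums is at most the sum of maxima; constant preserving holds because each row of PROBS sums to one.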
Definition 1.2 Let E_1 and E_2 be two nonlinear expectations defined on (Ω, H).
E_1 is said to be dominated by E_2 if

E_1[X] − E_1[Y] ≤ E_2[X − Y] for X, Y ∈ H. (1.1)
Remark 1.3 From (iii), a sublinear expectation is dominated by itself. In many
situations, (iii) is also called the property of self-domination. If the inequality in
(iii) becomes equality, then E is a linear expectation, i.e., E is a linear functional
satisfying (i) and (ii).
Remark 1.4 (iii)+(iv) is called sublinearity. This sublinearity implies

(v) Convexity: E[αX + (1 − α)Y] ≤ αE[X] + (1 − α)E[Y] for α ∈ [0, 1].

If a nonlinear expectation E satisfies convexity, we call it a convex expectation.

The properties (ii)+(iii) imply

(vi) Cash translatability: E[X + c] = E[X] + c for c ∈ R.

In fact, we have

E[X] + c = E[X] − E[−c] ≤ E[X + c] ≤ E[X] + E[c] = E[X] + c.

For property (iv), an equivalent form is

E[λX] = λ⁺E[X] + λ⁻E[−X] for λ ∈ R.
In this book, we will systematically study sublinear expectation spaces.
In the following chapters, unless otherwise stated, we consider the following type of
sublinear expectation space (Ω, H, E): if X_1, ..., X_n ∈ H, then φ(X_1, ..., X_n) ∈ H
for each φ ∈ C_l.Lip(R^n), where C_l.Lip(R^n) denotes the linear space of functions φ
satisfying

|φ(x) − φ(y)| ≤ C(1 + |x|^m + |y|^m)|x − y| for x, y ∈ R^n,

for some C > 0 and m ∈ N depending on φ.

In this case X = (X_1, ..., X_n) is called an n-dimensional random vector,
denoted by X ∈ H^n.
Remark 1.5 It is clear that if X ∈ H then |X|, X^m ∈ H. More generally,
φ(X)ψ(Y) ∈ H if X, Y ∈ H and φ, ψ ∈ C_l.Lip(R). In particular, if X ∈ H then
E[|X|^n] < ∞ for each n ∈ N.

Here we use C_l.Lip(R^n) in our framework only for some technical convenience.
In fact our essential requirement is that H contains all constants and,
moreover, X ∈ H implies |X| ∈ H. In general, C_l.Lip(R^n) can be replaced by
any one of the following spaces of functions defined on R^n:
L^∞(R^n): the space of bounded Borel-measurable functions;

C_b(R^n): the space of bounded and continuous functions;

C_b^k(R^n): the space of bounded and k-times continuously differentiable functions with bounded derivatives of all orders less than or equal to k;

C_unif(R^n): the space of bounded and uniformly continuous functions;

C_b.Lip(R^n): the space of bounded and Lipschitz continuous functions;

L^0(R^n): the space of Borel measurable functions.
Next we give two examples of sublinear expectations.

Example 1.6 In a game we select a ball from a box containing W white, B
black and Y yellow balls. The owner of the box, who is the banker of the game,
does not tell us the exact numbers of W, B and Y. He or she only informs us
that W + B + Y = 100 and W = B ∈ [20, 25]. Let ξ be a random variable defined
by

ξ = 1 if we get a white ball; ξ = 0 if we get a yellow ball; ξ = −1 if we get a black ball.

Problem: how to measure a loss X = φ(ξ) for a given function φ on R.

We know that ξ takes the values (−1, 0, 1) with probabilities (p/2, 1 − p, p/2),
with uncertainty: p ∈ [μ, μ̄] = [0.4, 0.5].

Thus the robust expectation of X = φ(ξ) is

E[φ(ξ)] := sup_{P∈P} E_P[φ(ξ)] = sup_{p∈[μ,μ̄]} [ (p/2)(φ(1) + φ(−1)) + (1 − p)φ(0) ].

Here, ξ has distribution uncertainty.
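The sup over p in Example 1.6 is a one-dimensional optimization of an expression affine in p, so it is attained at an endpoint of [0.4, 0.5]. A small sketch (ours; the function name is hypothetical):

```python
# Robust expectation of phi(xi) in the ball game: the distribution of xi is
# (p/2, 1-p, p/2) on {-1, 0, 1}, with p uncertain in [p_lo, p_hi].
def robust_expectation(phi, p_lo=0.4, p_hi=0.5):
    # The expectation is affine in p, so the sup over [p_lo, p_hi]
    # is attained at an endpoint.
    ev = lambda p: p / 2 * (phi(1) + phi(-1)) + (1 - p) * phi(0)
    return max(ev(p_lo), ev(p_hi))
```

For instance, the loss φ(ξ) = |ξ| has robust expectation sup_p p = 0.5, while the mean of ξ itself carries no uncertainty: E[ξ] = −E[−ξ] = 0.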
Example 1.7 A more general situation is that the banker of a game can choose
among a set of distributions {F(θ, A)}_{A∈B(R), θ∈Θ} of a random variable ξ. In this
situation the robust expectation of a risk position φ(ξ) for some φ ∈ C_l.Lip(R)
is

E[φ(ξ)] := sup_{θ∈Θ} ∫_R φ(x) F(θ, dx).
Exercise 1.8 Prove that a functional E satisfies sublinearity if and only if it
satisfies convexity and positive homogeneity.

Exercise 1.9 Suppose that all elements in H are bounded. Prove that the
strongest sublinear expectation on H is

E^∞[X] := sup_{ω∈Ω} X(ω).

Namely, all other sublinear expectations are dominated by E^∞[·].
2 Representation of a Sublinear Expectation
A sublinear expectation can be expressed as a supremum of linear expectations.
Theorem 2.1 Let E be a functional defined on a linear space H satisfying
sub-additivity and positive homogeneity. Then there exists a family of linear
functionals {E_θ : θ ∈ Θ} defined on H such that

E[X] = sup_{θ∈Θ} E_θ[X] for X ∈ H,

and, for each X ∈ H, there exists θ_X ∈ Θ such that E[X] = E_{θ_X}[X].
Furthermore, if E is a sublinear expectation, then each corresponding E_θ is a
linear expectation.

Proof. Let Q = {E_θ : θ ∈ Θ} be the family of all linear functionals on H dominated
by E, i.e., E_θ[X] ≤ E[X] for all X ∈ H, E_θ ∈ Q.

We first prove that Q is non-empty. For a given X ∈ H, we set L = {aX : a ∈ R},
which is a subspace of H. We define I : L → R by I[aX] = aE[X], a ∈ R;
then I[·] forms a linear functional on L and I ≤ E on L. Since E[·] is sub-additive
and positively homogeneous, by the Hahn-Banach theorem (see Appendix A),
there exists a linear functional E_θ on H such that E_θ = I on L and E_θ ≤ E
on H. Thus E_θ is a linear functional dominated by E such that E_θ[X] = E[X].

We now define

E_Θ[X] := sup_{θ∈Θ} E_θ[X] for X ∈ H.

It is clear that E_Θ = E.

Furthermore, if E is a sublinear expectation, then for each nonnegative
element X ∈ H we have E_θ[X] = −E_θ[−X] ≥ −E[−X] ≥ 0. For each c ∈ R,
−E_θ[c] = E_θ[−c] ≤ E[−c] = −c and E_θ[c] ≤ E[c] = c, so we get E_θ[c] = c. Thus E_θ
is a linear expectation. The proof is complete. □
Remark 2.2 It is important to observe that the above linear expectation E_θ is
only finitely additive. A sufficient condition for the σ-additivity of E_θ is to
assume that E[X_i] → 0 for each sequence {X_i}_{i=1}^∞ of H such that X_i(ω) ↓ 0
for each ω. In this case, it is clear that E_θ[X_i] → 0, and we can apply the
well-known Daniell-Stone Theorem (see Theorem 3.3 in Appendix B) to find a
σ-additive probability measure P_θ on (Ω, σ(H)) such that

E_θ[X] = ∫_Ω X(ω) dP_θ, X ∈ H.

The corresponding model uncertainty of probabilities is the subset {P_θ : θ ∈ Θ},
and the corresponding uncertainty of distributions for an n-dimensional random
vector X in H is {F_X(θ, A) := P_θ(X ∈ A) : A ∈ B(R^n)}.
In many situations we are concerned with probability uncertainty where the
probabilities may be only finitely additive, so next we give another version
of the above representation theorem.

Let P_f be the collection of all finitely additive probability measures on
(Ω, F). We consider L_0^∞(Ω, F), the collection of risk positions with finite
values, which consists of risk positions X of the form

X(ω) = Σ_{i=1}^N x_i I_{A_i}(ω), x_i ∈ R, A_i ∈ F, i = 1, ..., N.

It is easy to check that, under the norm ‖·‖_∞, L_0^∞(Ω, F) is dense in L^∞(Ω, F).
For a fixed Q ∈ P_f and X ∈ L_0^∞(Ω, F) we define

E_Q[X] = E_Q[Σ_{i=1}^N x_i I_{A_i}] := Σ_{i=1}^N x_i Q(A_i) = ∫_Ω X(ω) Q(dω).

E_Q : L_0^∞(Ω, F) → R is a linear functional. It is easy to check that E_Q satisfies
(i) monotonicity and (ii) constant preserving. It is also continuous under ‖·‖_∞, since

|E_Q[X]| ≤ sup_{ω∈Ω} |X(ω)| = ‖X‖_∞.

Since L_0^∞ is dense in L^∞, E_Q can be continuously extended to a linear continuous
functional on L^∞(Ω, F).
Proposition 2.3 The linear functional E_Q[·] : L^∞(Ω, F) → R satisfies (i)
monotonicity and (ii) constant preserving. Conversely, each linear functional
η(·) : L^∞(Ω, F) → R satisfying (i) and (ii) induces a finitely additive probability
measure via Q_η(A) := η(I_A), A ∈ F. The corresponding expectation is η itself:

η(X) = ∫_Ω X(ω) Q_η(dω).
Theorem 2.4 A sublinear expectation E has the following representation: there
exists a subset Q ⊂ P_f such that

E[X] = sup_{Q∈Q} E_Q[X] for X ∈ H.

Proof. By Theorem 2.1, we have

E[X] = sup_{θ∈Θ} E_θ[X] for X ∈ H,

where each E_θ is a linear expectation on H dominated by E. We extend E_θ to
L^∞(Ω, σ(H)) by setting

Ẽ[X] := inf{E_θ[Y] : Y ≥ X, Y ∈ H}.

It is not difficult to check that Ẽ is a sublinear expectation on L^∞(Ω, σ(H)),
where σ(H) is the smallest σ-algebra generated by H, and that E_θ ≤ Ẽ on H.
By the Hahn-Banach theorem, E_θ can be extended from H to a linear functional
on L^∞(Ω, σ(H)) dominated by Ẽ; this extension satisfies (i) monotonicity and
(ii) constant preserving, so by Proposition 2.3 there exists Q_θ ∈ P_f such that

E_θ[X] = E_{Q_θ}[X] for X ∈ H.

Thus, with Q := {Q_θ : θ ∈ Θ}, we obtain

E[X] = sup_{Q∈Q} E_Q[X] for X ∈ H. □
3 Distributions, Independence and Product Spaces
We now give the notion of distributions of random variables under sublinear
expectations.

Let X = (X_1, ..., X_n) be a given n-dimensional random vector on a nonlinear
expectation space (Ω, H, E). We define a functional on C_l.Lip(R^n) by

F_X[φ] := E[φ(X)], φ ∈ C_l.Lip(R^n).

The triple (R^n, C_l.Lip(R^n), F_X) forms a nonlinear expectation space. F_X is
called the distribution of X under E. This notion is very useful for a sublinear
expectation space, in which case F_X is also a sublinear expectation. Furthermore
we can prove (see Remark 2.2) that there exists a family of probability
measures {F_X(θ, ·)}_{θ∈Θ} defined on (R^n, B(R^n)) such that

F_X[φ] = sup_{θ∈Θ} ∫_{R^n} φ(x) F_X(θ, dx) for each φ ∈ C_b.Lip(R^n).

Thus F_X[·] characterizes the uncertainty of the distributions of X.
Definition 3.1 Let X_1 and X_2 be two n-dimensional random vectors defined on
nonlinear expectation spaces (Ω_1, H_1, E_1) and (Ω_2, H_2, E_2), respectively. They
are called identically distributed, denoted by X_1 d= X_2, if

E_1[φ(X_1)] = E_2[φ(X_2)] for φ ∈ C_l.Lip(R^n).

It is clear that X_1 d= X_2 if and only if their distributions coincide. We say that
the distribution of X_1 is stronger than that of X_2 if E_1[φ(X_1)] ≥ E_2[φ(X_2)] for
each φ ∈ C_l.Lip(R^n).
Remark 3.2 In the case of sublinear expectations, X_1 d= X_2 implies that the
uncertainty subsets of the distributions of X_1 and X_2 are the same, e.g., in the
framework of Remark 2.2,

{F_{X_1}(θ_1, ·) : θ_1 ∈ Θ_1} = {F_{X_2}(θ_2, ·) : θ_2 ∈ Θ_2}.

Similarly, if the distribution of X_1 is stronger than that of X_2, then

{F_{X_1}(θ_1, ·) : θ_1 ∈ Θ_1} ⊇ {F_{X_2}(θ_2, ·) : θ_2 ∈ Θ_2}.
The distribution of X ∈ H has the following four typical parameters:

μ̄ := E[X], μ := −E[−X], σ̄² := E[X²], σ² := −E[−X²].

The intervals [μ, μ̄] and [σ², σ̄²] characterize the mean-uncertainty and the
variance-uncertainty of X, respectively.

A natural question is: can we find a family of distribution measures to
represent the above sublinear distribution of X? The answer is affirmative:
Lemma 3.3 Let (Ω, H, E) be a sublinear expectation space and let X ∈ H^d be
given. Then for each sequence {φ_n}_{n=1}^∞ ⊂ C_l.Lip(R^d) satisfying φ_n ↓ 0, we have
E[φ_n(X)] ↓ 0.
Proof. For each fixed N > 0,

φ_n(x) ≤ k_n^N + φ_1(x) I_{[|x|>N]} ≤ k_n^N + φ_1(x)|x|/N for each x ∈ R^d,

where k_n^N = max_{|x|≤N} φ_n(x). We then have

E[φ_n(X)] ≤ k_n^N + (1/N) E[φ_1(X)|X|].

It follows from φ_n ↓ 0 that k_n^N ↓ 0. Thus we have

lim_{n→∞} E[φ_n(X)] ≤ (1/N) E[φ_1(X)|X|].

Since N can be arbitrarily large, we get E[φ_n(X)] ↓ 0. □
Lemma 3.4 Let (Ω, H, E) be a sublinear expectation space and let F_X[φ] :=
E[φ(X)] be the sublinear distribution of X ∈ H^d. Then there exists a family of
probability measures {F_θ}_{θ∈Θ} defined on (R^d, B(R^d)) such that

F_X[φ] = sup_{θ∈Θ} ∫_{R^d} φ(x) F_θ(dx), φ ∈ C_l.Lip(R^d). (3.2)
Proof. By the representation theorem, for the sublinear expectation F_X[·]
defined on (R^d, C_l.Lip(R^d)) there exists a family of linear expectations
{f_θ : θ ∈ Θ} on (R^d, C_l.Lip(R^d)) such that

F_X[φ] = sup_{θ∈Θ} f_θ[φ], φ ∈ C_l.Lip(R^d).

By the above lemma, for each sequence {φ_n}_{n=1}^∞ in C_l.Lip(R^d) such that φ_n ↓ 0
on R^d, we have F_X[φ_n] ↓ 0, and thus f_θ[φ_n] ↓ 0 for each θ ∈ Θ. It follows from
the Daniell-Stone Theorem (see Theorem 3.3 in Appendix B) that, for each θ ∈ Θ,
there exists a unique probability measure F_θ(·) on (R^d, σ(C_b.Lip(R^d))) =
(R^d, B(R^d)) such that

f_θ[φ] = ∫_{R^d} φ(x) F_θ(dx).

Thus (3.2) holds. □
The following property is very useful in our sublinear expectation theory.
Proposition 3.6 Let (Ω, H, E) be a sublinear expectation space and let X, Y be
two random variables such that E[Y] = −E[−Y], i.e., Y has no mean-uncertainty.
Then we have

E[X + αY] = E[X] + αE[Y] for α ∈ R. (3.4)

In particular, if E[Y] = E[−Y] = 0, then E[X + αY] = E[X], and

E[X + c] = E[X] + c for c ∈ R. (3.5)

Proof. We first observe that E[−Y] = −E[Y]: by sub-additivity,
0 = E[0] = E[Y − Y] ≤ E[Y] + E[−Y], so E[−Y] ≥ −E[Y], and the assumption
E[Y] = −E[−Y] turns this inequality into an equality. Hence, by the equivalent
form of positive homogeneity,

E[αY] = α⁺E[Y] + α⁻E[−Y] = α⁺E[Y] − α⁻E[Y] = αE[Y] for α ∈ R.

Then, by sub-additivity,

E[X + αY] ≤ E[X] + E[αY] = E[X] + αE[Y],
E[X] = E[(X + αY) − αY] ≤ E[X + αY] + E[−αY] = E[X + αY] − αE[Y].

Thus (3.4) holds; (3.5) is the special case Y ≡ 1. □
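Proposition 3.6 can be observed concretely in a finite model (our own illustration, not from the book): if every linear expectation in the representing family assigns Y the same mean, adding αY shifts each of them by the same constant, so the supremum splits additively.

```python
import numpy as np

# Two probability vectors on a three-point space; Y is chosen so that both
# give it the same mean (no mean-uncertainty): p1 @ Y == p2 @ Y == 0.
PROBS = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.25, 0.50]])
Y = np.array([1.0, -1.0, 0.0])

def E(Z):
    """Upper expectation over the two-element family."""
    return float(np.max(PROBS @ np.asarray(Z, dtype=float)))
```

Since each row p of PROBS satisfies p @ Y = 0, we get p @ (X + αY) = p @ X for every α, so E[X + αY] = E[X] + αE[Y] holds exactly, while for a Y with mean-uncertainty only the sub-additive inequality would survive.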
Definition 3.8 A sequence of n-dimensional random vectors {η_i}_{i=1}^∞ defined
on a sublinear expectation space (Ω, H, E) is said to converge in distribution
(or converge in law) under E if for each φ ∈ C_b.Lip(R^n), the sequence
{E[φ(η_i)]}_{i=1}^∞ converges.

The following result is easy to check.

Proposition 3.9 Let {η_i}_{i=1}^∞ converge in law in the above sense. Then the
mapping F[·] : C_b.Lip(R^n) → R defined by

F[φ] := lim_{i→∞} E[φ(η_i)] for φ ∈ C_b.Lip(R^n)

is a sublinear expectation defined on (R^n, C_b.Lip(R^n)).
The following notion of independence plays a key role in the nonlinear
expectation theory.

Definition 3.10 In a nonlinear expectation space (Ω, H, E), a random vector
Y ∈ H^n is said to be independent from another random vector X ∈ H^m under
E[·] if for each test function φ ∈ C_l.Lip(R^{m+n}) we have

E[φ(X, Y)] = E[ E[φ(x, Y)]_{x=X} ].

Remark 3.11 In particular, for a sublinear expectation space (Ω, H, E), "Y is
independent from X" means that the uncertainty of distributions {F_Y(θ, ·) : θ ∈ Θ}
of Y does not change after the realization of X = x. In other words, the
conditional sublinear expectation of Y with respect to X is E[φ(x, Y)]_{x=X}. In
the case of linear expectation, this notion of independence is just the classical
one.

Remark 3.12 It is important to note that under sublinear expectations the
condition "Y is independent from X" does not automatically imply that X is
independent from Y.
Example 3.13 We consider a case where E is a sublinear expectation and
X, Y ∈ H are identically distributed with E[X] = E[−X] = 0 and σ̄² = E[X²] >
σ² = −E[−X²]. We also assume that E[|X|] = E[X⁺ + X⁻] > 0, thus
E[X⁺] = (1/2)E[|X| + X] = (1/2)E[|X|] > 0. In the case where Y is independent
from X, we have

E[XY²] = E[X⁺σ̄² − X⁻σ²] = (σ̄² − σ²)E[X⁺] > 0.

But if X is independent from Y, we have

E[XY²] = 0.
The independence property of two random vectors X, Y involves only the
joint distribution of (X, Y). The following result tells us how to construct
random vectors with given marginal distributions and with a specific direction
of independence.

Definition 3.14 Let (Ω_i, H_i, E_i), i = 1, 2, be two sublinear (resp. nonlinear)
expectation spaces. We denote

H_1 ⊗ H_2 := { Z(ω_1, ω_2) = φ(X(ω_1), Y(ω_2)) : (ω_1, ω_2) ∈ Ω_1 × Ω_2,
X ∈ H_1^m, Y ∈ H_2^n, φ ∈ C_l.Lip(R^{m+n}) },

and, for each random variable of the above form Z(ω_1, ω_2) = φ(X(ω_1), Y(ω_2)),

(E_1 ⊗ E_2)[Z] := E_1[φ̄(X)], where φ̄(x) := E_2[φ(x, Y)], x ∈ R^m.
It is easy to check that the triple (Ω_1 × Ω_2, H_1 ⊗ H_2, E_1 ⊗ E_2) forms a sublinear
(resp. nonlinear) expectation space. We call it the product space of the sublinear
(resp. nonlinear) expectation spaces (Ω_1, H_1, E_1) and (Ω_2, H_2, E_2). In this way,
we can define the product space

( ∏_{i=1}^n Ω_i, ⊗_{i=1}^n H_i, ⊗_{i=1}^n E_i )

of given sublinear (resp. nonlinear) expectation spaces (Ω_i, H_i, E_i), i =
1, 2, ..., n. In particular, when (Ω_i, H_i, E_i) = (Ω_1, H_1, E_1) we have the product
space of the form (Ω_1^n, H_1^⊗n, E_1^⊗n).
Let X and X̄ be two n-dimensional random vectors on a sublinear (resp.
nonlinear) expectation space (Ω, H, E). X̄ is called an independent copy of X if
X̄ d= X and X̄ is independent from X.
The following property is easy to check.

Proposition 3.15 Let X_i be an n_i-dimensional random vector on the sublinear
(resp. nonlinear) expectation space (Ω_i, H_i, E_i) for i = 1, ..., n, respectively.
We denote

Y_i(ω_1, ..., ω_n) := X_i(ω_i), i = 1, ..., n.

Then Y_i, i = 1, ..., n, are random vectors on (∏_{i=1}^n Ω_i, ⊗_{i=1}^n H_i, ⊗_{i=1}^n E_i).
Moreover we have Y_i d= X_i, and Y_{i+1} is independent from (Y_1, ..., Y_i) for each i.
Furthermore, if (Ω_i, H_i, E_i) = (Ω_1, H_1, E_1) and X_i d= X_1 for all i, then we
also have Y_i d= Y_1. In this case Y_i is said to be an independent copy of Y_1 for
i = 2, ..., n.
Remark 3.16 In the above construction the integer n can also be infinite. In
this case each random variable X ∈ ⊗_{i=1}^∞ H_i belongs to
(∏_{i=1}^k Ω_i, ⊗_{i=1}^k H_i, ⊗_{i=1}^k E_i)
for some positive integer k < ∞, and we set

⊗_{i=1}^∞ E_i[X] := ⊗_{i=1}^k E_i[X].
Remark 3.17 The situation "Y is independent from X" often appears when Y
occurs after X; thus a robust expectation should take the information of X into
account.
Exercise 3.18 Suppose X, Y ∈ H^d and Y is an independent copy of X. Prove
that for each a ∈ R and b ∈ R^d, a + ⟨b, Y⟩ is an independent copy of a + ⟨b, X⟩.
In a sublinear expectation space we have:

Example 3.19 We consider a situation where two random variables X and Y
in H are identically distributed and their common distribution is

F_X[φ] = F_Y[φ] = sup_{θ∈Θ} ∫_R φ(y) F(θ, dy) for φ ∈ C_l.Lip(R),

where, for each θ ∈ Θ, {F(θ, A)}_{A∈B(R)} is a probability measure on (R, B(R)).
In this case, "Y is independent from X" means that the joint distribution of X
and Y is

F_{X,Y}[ψ] = sup_{θ_1∈Θ} ∫_R ( sup_{θ_2∈Θ} ∫_R ψ(x, y) F(θ_2, dy) ) F(θ_1, dx)
for ψ ∈ C_l.Lip(R²).
Exercise 3.20 Let (Ω, H, E) be a sublinear expectation space. Prove that if
E[φ(X)] = E[φ(Y)] for any φ ∈ C_b.Lip, then it still holds for any φ ∈ C_l.Lip.
That is, we can replace C_l.Lip in Definition 3.1 by C_b.Lip.
4 Completion of Sublinear Expectation Spaces
Let (Ω, H, E) be a sublinear expectation space. We have the following useful
inequalities.

We first give the following well-known inequalities.

Lemma 4.1 For r > 0 and 1 < p, q < ∞ with 1/p + 1/q = 1, we have

|a + b|^r ≤ max{1, 2^{r−1}}(|a|^r + |b|^r) for a, b ∈ R, (4.6)

|ab| ≤ |a|^p/p + |b|^q/q. (4.7)
Proposition 4.2 For each X, Y ∈ H, we have

E[|X + Y|^r] ≤ 2^{r−1}(E[|X|^r] + E[|Y|^r]), (4.8)

E[|XY|] ≤ (E[|X|^p])^{1/p} · (E[|Y|^q])^{1/q}, (4.9)

(E[|X + Y|^p])^{1/p} ≤ (E[|X|^p])^{1/p} + (E[|Y|^p])^{1/p}, (4.10)

where r ≥ 1 and 1 < p, q < ∞ with 1/p + 1/q = 1.

In particular, for 1 ≤ p < p′, we have (E[|X|^p])^{1/p} ≤ (E[|X|^{p′}])^{1/p′}.
Proof. The inequality (4.8) follows from (4.6).

For the case E[|X|^p] · E[|Y|^q] > 0, we set

ξ = X / (E[|X|^p])^{1/p}, η = Y / (E[|Y|^q])^{1/q}.

By (4.7) we have

E[|ξη|] ≤ E[ |ξ|^p/p + |η|^q/q ] ≤ E[|ξ|^p/p] + E[|η|^q/q] = 1/p + 1/q = 1.

Thus (4.9) follows.

For the case E[|X|^p] · E[|Y|^q] = 0, we consider E[|X|^p] + ε and E[|Y|^q] + ε for
ε > 0. Applying the above method and letting ε ↓ 0, we get (4.9).

We now prove (4.10). We only consider the case E[|X + Y|^p] > 0. We have

E[|X + Y|^p] = E[|X + Y| · |X + Y|^{p−1}]
≤ E[|X| · |X + Y|^{p−1}] + E[|Y| · |X + Y|^{p−1}]
≤ (E[|X|^p])^{1/p} · (E[|X + Y|^{(p−1)q}])^{1/q}
+ (E[|Y|^p])^{1/p} · (E[|X + Y|^{(p−1)q}])^{1/q}.

Since (p − 1)q = p, we have (4.10). □
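Inequalities (4.8)-(4.10) are stated for an abstract sublinear E, and in particular hold for any upper expectation over a family of linear ones. A quick randomized check (our own sketch; the family of probability vectors is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# An upper expectation over three random probability vectors on four points.
PROBS = rng.dirichlet(np.ones(4), size=3)

def E(Z):
    return float(np.max(PROBS @ Z))

def check_holder_minkowski(X, Y, p=3.0):
    """Verify (4.9) (Hoelder) and (4.10) (Minkowski) for one pair X, Y."""
    q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1
    holder = (E(np.abs(X * Y))
              <= E(np.abs(X)**p)**(1/p) * E(np.abs(Y)**q)**(1/q) + 1e-12)
    minkowski = (E(np.abs(X + Y)**p)**(1/p)
                 <= E(np.abs(X)**p)**(1/p) + E(np.abs(Y)**p)**(1/p) + 1e-12)
    return holder and minkowski
```

Since the maximum over linear expectations is sublinear, Proposition 4.2 guarantees both checks succeed for every input pair.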
By (4.9), it is easy to deduce that (E[|X|^p])^{1/p} ≤ (E[|X|^{p′}])^{1/p′} for
1 ≤ p < p′. For each fixed p ≥ 1, we set ‖X‖_p := (E[|X|^p])^{1/p}, X ∈ H;
identifying elements with ‖X − Y‖_p = 0, this defines a norm, and we denote by
Ĥ_p the completion of H under this norm. Then (Ĥ_p, ‖·‖_p) is a Banach space. In
particular, when p = 1, we denote it by (Ĥ, ‖·‖).
For each X ∈ H, the mappings

X⁺(·) : H → H and X⁻(·) : H → H

satisfy

|X⁺ − Y⁺| ≤ |X − Y| and |X⁻ − Y⁻| = |(−X)⁺ − (−Y)⁺| ≤ |X − Y|.

Thus they are both contraction mappings under ‖·‖_p and can be continuously
extended to the Banach space (Ĥ_p, ‖·‖_p).
We can define the partial order "$\ge$" in this Banach space.
Definition 4.3 An element $X$ in $(\hat{\mathcal{H}}, \|\cdot\|)$ is said to be nonnegative, or $X \ge 0$, $0 \le X$, if $X = X^+$. We also write $X \ge Y$, or $Y \le X$, if $X - Y \ge 0$.
It is easy to check that $X \ge Y$ and $Y \ge X$ imply $X = Y$ in $(\hat{\mathcal{H}}_p, \|\cdot\|_p)$.
For each $X, Y \in \mathcal{H}$, note that
\[ |\mathbb{E}[X] - \mathbb{E}[Y]| \le \mathbb{E}[|X - Y|] \le \|X - Y\|_p. \]
We then can give the following definition.
Definition 4.4 The sublinear expectation $\mathbb{E}[\cdot]$ can be continuously extended to $(\hat{\mathcal{H}}_p, \|\cdot\|_p)$, on which it is still a sublinear expectation. We still denote the extended space by $(\Omega, \hat{\mathcal{H}}_p, \mathbb{E})$.
Let $(\Omega, \mathcal{H}, \mathbb{E}_1)$ be a nonlinear expectation space. $\mathbb{E}_1$ is said to be dominated by $\mathbb{E}$ if
\[ \mathbb{E}_1[X] - \mathbb{E}_1[Y] \le \mathbb{E}[X - Y] \quad \text{for } X, Y \in \mathcal{H}. \]
From this we can easily deduce that $|\mathbb{E}_1[X] - \mathbb{E}_1[Y]| \le \mathbb{E}[|X - Y|]$, thus the nonlinear expectation $\mathbb{E}_1[\cdot]$ can be continuously extended to $(\hat{\mathcal{H}}_p, \|\cdot\|_p)$, on which it is still a nonlinear expectation. We still denote the extended space by $(\Omega, \hat{\mathcal{H}}_p, \mathbb{E}_1)$.
Remark 4.5 It is important to note that $X_1, \cdots, X_n \in \hat{\mathcal{H}}$ does not imply $\varphi(X_1, \cdots, X_n) \in \hat{\mathcal{H}}$ for each $\varphi \in C_{l.Lip}(\mathbb{R}^n)$. Thus, when we talk about the notions of distributions, independence and product spaces on $(\Omega, \hat{\mathcal{H}}, \mathbb{E})$, the space $C_{l.Lip}(\mathbb{R}^n)$ is replaced by $C_{b.Lip}(\mathbb{R}^n)$ unless otherwise stated.
Exercise 4.6 Prove that the inequalities (4.8), (4.9) and (4.10) still hold for $(\Omega, \hat{\mathcal{H}}, \mathbb{E})$.
5 Coherent Measures of Risk
Let the pair $(\Omega, \mathcal{H})$ be such that $\Omega$ is a set of scenarios and $\mathcal{H}$ is the collection of all possible risk positions in a financial market.
If $X \in \mathcal{H}$, then for each constant $c$, $X \vee c$ and $X \wedge c$ are both in $\mathcal{H}$. One typical example in finance is that $X$ is tomorrow's price of a stock. In this case, any European call or put option with strike price $K$, of the form $(X - K)^+$ or $(K - X)^+$, is in $\mathcal{H}$.
A risk supervisor is responsible for taking a rule to tell traders, securities companies, banks or other institutions under his supervision which kinds of risk positions are unacceptable, and thus what minimum amount of risk capital should be deposited to make a position acceptable. The collection of acceptable positions is defined by
\[ \mathcal{A} = \{X \in \mathcal{H} : X \text{ is acceptable}\}. \]
This set has economically meaningful properties.
Definition 5.1 A set $\mathcal{A}$ is called a coherent acceptable set if it satisfies
(i) Monotonicity: $X \in \mathcal{A}$, $Y \ge X$ imply $Y \in \mathcal{A}$.
(ii) $0 \in \mathcal{A}$ but $-1 \notin \mathcal{A}$.
(iii) Positive homogeneity: $X \in \mathcal{A}$ implies $\lambda X \in \mathcal{A}$ for $\lambda \ge 0$.
(iv) Convexity: $X, Y \in \mathcal{A}$ imply $\alpha X + (1 - \alpha)Y \in \mathcal{A}$ for $\alpha \in [0, 1]$.
Remark 5.2 (iii)+(iv) imply
(v) Sublinearity: $X, Y \in \mathcal{A}$ imply $\mu X + \nu Y \in \mathcal{A}$ for $\mu, \nu \ge 0$.
Remark 5.3 If the set $\mathcal{A}$ only satisfies (i), (ii) and (iv), then $\mathcal{A}$ is called a convex acceptable set.
In this section we mainly study the coherent case. Once the rule of the acceptable set is fixed, the minimum requirement of risk deposit is then automatically determined.
Definition 5.4 Given a coherent acceptable set $\mathcal{A}$, the functional $\rho(\cdot)$ defined by
\[ \rho(X) = \rho_{\mathcal{A}}(X) := \inf\{m \in \mathbb{R} : m + X \in \mathcal{A}\}, \quad X \in \mathcal{H} \]
is called the coherent risk measure related to $\mathcal{A}$.
It is easy to see that
\[ \rho(X + \rho(X)) = 0. \]
Proposition 5.5 $\rho(\cdot)$ is a coherent risk measure, i.e., it satisfies the following four properties:
(i) Monotonicity: If $X \ge Y$ then $\rho(X) \le \rho(Y)$.
(ii) Constant preserving: $\rho(1) = -\rho(-1) = -1$.
(iii) Sub-additivity: For each $X, Y \in \mathcal{H}$, $\rho(X + Y) \le \rho(X) + \rho(Y)$.
(iv) Positive homogeneity: $\rho(\lambda X) = \lambda\rho(X)$ for $\lambda \ge 0$.
Proof. (i), (ii) are obvious.
We now prove (iii). Indeed,
\begin{align*}
\rho(X + Y) &= \inf\{m \in \mathbb{R} : m + (X + Y) \in \mathcal{A}\} \\
&= \inf\{m + n : m, n \in \mathbb{R},\ (m + X) + (n + Y) \in \mathcal{A}\} \\
&\le \inf\{m \in \mathbb{R} : m + X \in \mathcal{A}\} + \inf\{n \in \mathbb{R} : n + Y \in \mathcal{A}\} \\
&= \rho(X) + \rho(Y).
\end{align*}
To prove (iv), in fact the case $\lambda = 0$ is trivial; when $\lambda > 0$,
\[ \rho(\lambda X) = \inf\{m \in \mathbb{R} : m + \lambda X \in \mathcal{A}\} = \lambda\inf\{n \in \mathbb{R} : n + X \in \mathcal{A}\} = \lambda\rho(X), \]
where $n = m/\lambda$. $\square$
Obviously, if $\mathbb{E}$ is a sublinear expectation, then $\rho(X) := \mathbb{E}[-X]$ defines a coherent risk measure. Conversely, if $\rho$ is a coherent risk measure, then $\mathbb{E}[X] := \rho(-X)$ defines a sublinear expectation.
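This correspondence can be illustrated numerically. The sketch below (ours, not from the book; the scenario measures are invented) realizes $\mathbb{E}$ as an upper expectation over finitely many probability vectors and checks the four properties of Proposition 5.5 for $\rho(X) = \mathbb{E}[-X]$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scenario measures defining the sublinear expectation E[X] = max_P E_P[X].
P = [np.array([.25, .25, .25, .25]), np.array([.1, .2, .3, .4]),
     np.array([.4, .3, .2, .1])]

def E(values):                     # sublinear (upper) expectation
    return max(float(w @ values) for w in P)

def rho(X):                        # the induced coherent risk measure
    return E(-np.asarray(X, dtype=float))

X, Y = rng.normal(size=4), rng.normal(size=4)
lam = 2.5

assert rho(X + Y) <= rho(X) + rho(Y) + 1e-12           # sub-additivity
assert abs(rho(lam * X) - lam * rho(X)) < 1e-12        # positive homogeneity
assert abs(rho(np.ones(4)) + 1.0) < 1e-9               # rho(1) = -1
assert abs(rho(-np.ones(4)) - 1.0) < 1e-9              # rho(-1) = 1
assert rho(np.maximum(X, Y)) <= rho(X) + 1e-12         # monotonicity
print(rho(X))
```

Monotonicity is checked via $\max(X, Y) \ge X$, so $\rho(\max(X, Y)) \le \rho(X)$.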
Exercise 5.6 Let $\rho(\cdot)$ be a coherent risk measure. We can inversely define
\[ \mathcal{A}_\rho := \{X \in \mathcal{H} : \rho(X) \le 0\}. \]
Prove that $\mathcal{A}_\rho$ is a coherent acceptable set.
Recall a well-known characterization: $X \overset{d}{=} N(0, \Sigma)$ if and only if
\[ aX + b\bar{X} \overset{d}{=} \sqrt{a^2 + b^2}\,X \quad \text{for } a, b \ge 0, \tag{1.1} \]
where $\bar{X}$ is an independent copy of $X$. The covariance matrix $\Sigma$ is defined by $\Sigma = E[XX^T]$. We now consider the so-called G-normal distribution in the situation of probability model uncertainty. Its existence, uniqueness and characterization will be given later.
Definition 1.4 (G-normal distribution) A d-dimensional random vector $X = (X_1, \cdots, X_d)^T$ on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$ is called (centralized) G-normal distributed if
\[ aX + b\bar{X} \overset{d}{=} \sqrt{a^2 + b^2}\,X \quad \text{for } a, b \ge 0, \]
where $\bar{X}$ is an independent copy of $X$.
Remark 1.5 Noting that $\mathbb{E}[X + \bar{X}] = 2\mathbb{E}[X]$ and $\mathbb{E}[X + \bar{X}] = \mathbb{E}[\sqrt{2}X] = \sqrt{2}\mathbb{E}[X]$, we have $\mathbb{E}[X] = 0$; similarly, $\mathbb{E}[-X] = 0$. Hence a G-normal distributed random vector has no mean-uncertainty.
\[ \partial_t u - G(D_y u, D_x^2 u) = 0, \tag{1.6} \]
with Cauchy condition $u|_{t=0} = \varphi$, where $G : \mathbb{R}^d \times \mathbb{S}(d) \to \mathbb{R}$ is defined by (1.2) and $D^2 u = (\partial^2_{x_i x_j} u)_{i,j=1}^d$, $Du = (\partial_{x_i} u)_{i=1}^d$. The PDE (1.6) is called a G-equation.
In this book we will mainly use the notion of viscosity solution to describe the solution of this PDE. For the reader's convenience, we give a systematic introduction to the notion of viscosity solution and its related properties used in this book (see Appendix C, Sections 1-3). It is worth mentioning here that for the case where $G$ is non-degenerate, the viscosity solution of the G-equation becomes a classical $C^{1,2}$-solution (see Appendix C, Section 4). Readers without knowledge of viscosity solutions can simply understand solutions of the G-equation in the classical sense throughout the book.
Proposition 1.10 For the pair $(X, \eta)$ satisfying (1.5) and a function $\varphi \in C_{l.Lip}(\mathbb{R}^d \times \mathbb{R}^d)$, we define
\[ u(t, x, y) := \mathbb{E}[\varphi(x + \sqrt{t}X, y + t\eta)], \quad (t, x, y) \in [0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d. \]
Then we have
\[ u(t + s, x, y) = \mathbb{E}[u(t, x + \sqrt{s}X, y + s\eta)], \quad s \ge 0, \tag{1.7} \]
and there exist constants $C, k > 0$ such that
\[ |u(t, x, y) - u(t, \bar{x}, \bar{y})| \le C(1 + |x|^k + |y|^k + |\bar{x}|^k + |\bar{y}|^k)(|x - \bar{x}| + |y - \bar{y}|), \tag{1.8} \]
\[ |u(t + s, x, y) - u(t, x, y)| \le C(1 + |x|^k + |y|^k)(\sqrt{s} + s). \tag{1.9} \]
Moreover, $u$ is a viscosity solution of the G-equation (1.6).
Proof. Since
\begin{align*}
\mathbb{E}[\varphi(x + \sqrt{t}X, y + t\eta)] - \mathbb{E}[\varphi(\bar{x} + \sqrt{t}X, \bar{y} + t\eta)]
&\le \mathbb{E}[\varphi(x + \sqrt{t}X, y + t\eta) - \varphi(\bar{x} + \sqrt{t}X, \bar{y} + t\eta)] \\
&\le \mathbb{E}[C_1(1 + |X|^k + |\eta|^k + |x|^k + |y|^k + |\bar{x}|^k + |\bar{y}|^k)]\,(|x - \bar{x}| + |y - \bar{y}|) \\
&\le C(1 + |x|^k + |y|^k + |\bar{x}|^k + |\bar{y}|^k)(|x - \bar{x}| + |y - \bar{y}|),
\end{align*}
we have (1.8).
Let $(\bar{X}, \bar{\eta})$ be an independent copy of $(X, \eta)$. By (1.5),
\begin{align*}
u(t + s, x, y) &= \mathbb{E}[\varphi(x + \sqrt{t + s}X, y + (t + s)\eta)] \\
&= \mathbb{E}[\varphi(x + \sqrt{s}X + \sqrt{t}\bar{X}, y + s\eta + t\bar{\eta})] \\
&= \mathbb{E}\big[\mathbb{E}[\varphi(x + \sqrt{s}\tilde{x} + \sqrt{t}\bar{X}, y + s\tilde{y} + t\bar{\eta})]_{(\tilde{x},\tilde{y})=(X,\eta)}\big] \\
&= \mathbb{E}[u(t, x + \sqrt{s}X, y + s\eta)],
\end{align*}
we thus obtain (1.7). From this and (1.8) it follows that
\[ |u(t + s, x, y) - u(t, x, y)| = |\mathbb{E}[u(t, x + \sqrt{s}X, y + s\eta)] - u(t, x, y)| \le \mathbb{E}\big[C(1 + |x|^k + |y|^k + |X|^k + |\eta|^k)(\sqrt{s}|X| + s|\eta|)\big], \]
thus we obtain (1.9).
Now, for a fixed $(t, x, y) \in (0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d$, let $\psi \in C_b^{2,3}([0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d)$ be such that $\psi \ge u$ and $\psi(t, x, y) = u(t, x, y)$. By (1.7) and Taylor's expansion, it follows that, for $\delta \in (0, t)$,
\begin{align*}
0 &\le \mathbb{E}[\psi(t - \delta, x + \sqrt{\delta}X, y + \delta\eta) - \psi(t, x, y)] \\
&\le -\partial_t\psi(t, x, y)\delta + \mathbb{E}\big[\langle D_x\psi(t, x, y), X\rangle\sqrt{\delta} + \langle D_y\psi(t, x, y), \eta\rangle\delta + \tfrac{1}{2}\langle D_x^2\psi(t, x, y)X, X\rangle\delta\big] + \bar{C}(\delta^{3/2} + \delta^2) \\
&= -\partial_t\psi(t, x, y)\delta + \mathbb{E}\big[\langle D_y\psi(t, x, y), \eta\rangle + \tfrac{1}{2}\langle D_x^2\psi(t, x, y)X, X\rangle\big]\delta + \bar{C}(\delta^{3/2} + \delta^2) \\
&= -\partial_t\psi(t, x, y)\delta + G(D_y\psi, D_x^2\psi)(t, x, y)\delta + \bar{C}(\delta^{3/2} + \delta^2),
\end{align*}
where we have used $\mathbb{E}[\langle D_x\psi(t, x, y), X\rangle] = \mathbb{E}[-\langle D_x\psi(t, x, y), X\rangle] = 0$. From this it is easy to check that
\[ [\partial_t\psi - G(D_y\psi, D_x^2\psi)](t, x, y) \le 0. \]
Thus $u$ is a viscosity subsolution of (1.6). Similarly we can prove that $u$ is a viscosity supersolution of (1.6). $\square$
Corollary 1.11 If both $(X, \eta)$ and $(\bar{X}, \bar{\eta})$ satisfy (1.5) with the same $G$, i.e.,
\[ G(p, A) := \mathbb{E}\big[\tfrac{1}{2}\langle AX, X\rangle + \langle p, \eta\rangle\big] = \mathbb{E}\big[\tfrac{1}{2}\langle A\bar{X}, \bar{X}\rangle + \langle p, \bar{\eta}\rangle\big] \quad \text{for } (p, A) \in \mathbb{R}^d \times \mathbb{S}(d), \]
then $(X, \eta) \overset{d}{=} (\bar{X}, \bar{\eta})$. In particular, $X \overset{d}{=} -X$.
Proof. For each $\varphi \in C_{l.Lip}(\mathbb{R}^d \times \mathbb{R}^d)$, we set
\[ u(t, x, y) := \mathbb{E}[\varphi(x + \sqrt{t}X, y + t\eta)], \quad \bar{u}(t, x, y) := \mathbb{E}[\varphi(x + \sqrt{t}\bar{X}, y + t\bar{\eta})], \quad (t, x, y) \in [0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d. \]
By Proposition 1.10, both $u$ and $\bar{u}$ are viscosity solutions of the G-equation (1.6) with Cauchy condition $u|_{t=0} = \bar{u}|_{t=0} = \varphi$. It follows from the uniqueness of the viscosity solution that $u \equiv \bar{u}$. In particular,
\[ \mathbb{E}[\varphi(X, \eta)] = \mathbb{E}[\varphi(\bar{X}, \bar{\eta})]. \]
Thus $(X, \eta) \overset{d}{=} (\bar{X}, \bar{\eta})$. $\square$
Corollary 1.12 Let $(X, \eta)$ satisfy (1.5). For each $\psi \in C_{l.Lip}(\mathbb{R}^d)$ we define
\[ v(t, x) := \mathbb{E}[\psi(x + \sqrt{t}X + t\eta)]. \]
Then $v$ is the unique viscosity solution of the parabolic PDE
\[ \partial_t v - G(D_x v, D_x^2 v) = 0, \quad v|_{t=0} = \psi. \tag{1.11} \]
Moreover, we have $v(t, x + y) \equiv u(t, x, y)$, where $u$ is the solution of the PDE (1.6) with initial condition $u(t, x, y)|_{t=0} = \psi(x + y)$.
Example 1.13 Let $X$ be G-normal distributed. The distribution of $X$ is characterized by
\[ u(t, x) = \mathbb{E}[\varphi(x + \sqrt{t}X)], \quad \varphi \in C_{l.Lip}(\mathbb{R}^d). \]
In particular, $\mathbb{E}[\varphi(X)] = u(1, 0)$, where $u$ is the solution of the following parabolic PDE defined on $[0, \infty) \times \mathbb{R}^d$:
\[ \partial_t u - G(D^2 u) = 0, \quad u|_{t=0} = \varphi, \tag{1.12} \]
where $G = G_X(A) : \mathbb{S}(d) \to \mathbb{R}$ is defined by
\[ G(A) := \tfrac{1}{2}\mathbb{E}[\langle AX, X\rangle], \quad A \in \mathbb{S}(d). \]
The parabolic PDE (1.12) is called a G-heat equation.
It is easy to check that $G$ is a sublinear function defined on $\mathbb{S}(d)$. By Theorem 2.1 in Chapter I, there exists a bounded, convex and closed subset $\Theta \subset \mathbb{S}(d)$ such that
\[ \tfrac{1}{2}\mathbb{E}[\langle AX, X\rangle] = G(A) = \tfrac{1}{2}\sup_{Q\in\Theta} \mathrm{tr}[AQ], \quad A \in \mathbb{S}(d). \tag{1.13} \]
Since $G(A)$ is monotonic: $G(A_1) \ge G(A_2)$ for $A_1 \ge A_2$, it follows that
\[ \Theta \subset \mathbb{S}_+(d) = \{\theta \in \mathbb{S}(d) : \theta \ge 0\} = \{BB^T : B \in \mathbb{R}^{d\times d}\}, \]
where $\mathbb{R}^{d\times d}$ is the set of all $d \times d$ matrices. If $\Theta$ is a singleton $\{Q\}$, then $X$ is classical zero-mean normal distributed with covariance $Q$. In general, $\Theta$ characterizes the covariance uncertainty of $X$. We denote $X \overset{d}{=} N(\{0\} \times \Theta)$ (recall equation (1.4); we can set $(q, Q) \in \{0\} \times \Theta$).
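In dimension one, (1.13) can be made completely explicit. The following minimal sketch (ours, not from the book; $\sigma_{\mathrm{lo}}, \sigma_{\mathrm{hi}}$ are illustrative values) evaluates $G$ as a supremum over the interval of variances and checks sublinearity and monotonicity:

```python
def G(a, sig_lo=0.5, sig_hi=1.0):
    # d = 1 case of (1.13): G(a) = (1/2) sup_{q in [sig_lo^2, sig_hi^2]} a*q
    #                            = (1/2)(sig_hi^2 * a^+ - sig_lo^2 * a^-);
    # a*q is linear in q, so the sup is attained at an endpoint.
    return 0.5 * max(a * q for q in (sig_lo**2, sig_hi**2))

for a in (-2.0, -0.5, 0.0, 0.7, 3.0):
    for b in (-1.0, 0.4, 2.0):
        assert G(a + b) <= G(a) + G(b) + 1e-12   # sub-additivity
assert G(2.0) == 2.0 * G(1.0)                    # positive homogeneity
assert G(1.0) >= G(0.5)                          # monotonicity
print(G(1.0), G(-1.0))   # 0.5 -0.125
```

$G(1) = \tfrac{1}{2}\sigma_{\mathrm{hi}}^2$ and $G(-1) = -\tfrac{1}{2}\sigma_{\mathrm{lo}}^2$, reflecting the covariance uncertainty.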
When $d = 1$, we have $X \overset{d}{=} N(\{0\} \times [\underline{\sigma}^2, \bar{\sigma}^2])$ (which we also denote by $X \overset{d}{=} N(0, [\underline{\sigma}^2, \bar{\sigma}^2])$), where $\bar{\sigma}^2 = \mathbb{E}[X^2]$ and $\underline{\sigma}^2 = -\mathbb{E}[-X^2]$. The corresponding G-heat equation is
\[ \partial_t u - \tfrac{1}{2}\big(\bar{\sigma}^2(\partial^2_{xx}u)^+ - \underline{\sigma}^2(\partial^2_{xx}u)^-\big) = 0, \quad u|_{t=0} = \varphi. \]
For the case $\underline{\sigma}^2 > 0$, this equation is also called the Barenblatt equation.
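The one-dimensional G-heat equation is easy to solve numerically. The sketch below (ours, not from the book; the grid sizes and the crude frozen-boundary treatment are ad hoc choices) uses an explicit finite-difference scheme to compute $u(1, 0) = \mathbb{E}[\varphi(X)]$ for $X \overset{d}{=} N(0, [\sigma_{\mathrm{lo}}^2, \sigma_{\mathrm{hi}}^2])$:

```python
import numpy as np

def g_heat_expectation(phi, sig_lo, sig_hi, T=1.0, L=6.0, nx=241):
    # Explicit finite differences for the 1-d G-heat equation
    #   du/dt = (1/2)(sig_hi^2 (u_xx)^+ - sig_lo^2 (u_xx)^-),  u(0,.) = phi;
    # returns u(T, 0) = E[phi(sqrt(T) X)].
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / sig_hi**2        # CFL-type stability restriction
    n_steps = int(np.ceil(T / dt))
    dt = T / n_steps
    u = phi(x)
    for _ in range(n_steps):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * 0.5 * (sig_hi**2 * np.maximum(uxx, 0.0)
                            - sig_lo**2 * np.maximum(-uxx, 0.0))
        u[0], u[-1] = phi(x[0]), phi(x[-1])   # crude frozen boundary
    return u[nx // 2]

print(g_heat_expectation(lambda x: x**2, 0.5, 1.0))    # ~ sig_hi^2 = 1.0
print(g_heat_expectation(lambda x: -x**2, 0.5, 1.0))   # ~ -sig_lo^2 = -0.25
```

For convex $\varphi(x) = x^2$ the scheme returns approximately $\sigma_{\mathrm{hi}}^2$, and for concave $\varphi(x) = -x^2$ approximately $-\sigma_{\mathrm{lo}}^2$, in line with the two closed-form cases discussed next.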
In the following two typical situations, the calculation of $\mathbb{E}[\varphi(X)]$ is very easy:
For each convex function $\varphi$, we have
\[ \mathbb{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \varphi(\bar{\sigma}y)\exp\Big(-\frac{y^2}{2}\Big)dy. \]
Indeed, for each fixed $t \ge 0$, it is easy to check that the function $u(t, x) := \mathbb{E}[\varphi(x + \sqrt{t}X)]$ is convex in $x$:
\begin{align*}
u(t, \alpha x + (1 - \alpha)y) &= \mathbb{E}[\varphi(\alpha x + (1 - \alpha)y + \sqrt{t}X)] \\
&\le \alpha\mathbb{E}[\varphi(x + \sqrt{t}X)] + (1 - \alpha)\mathbb{E}[\varphi(y + \sqrt{t}X)] \\
&= \alpha u(t, x) + (1 - \alpha)u(t, y).
\end{align*}
It follows that $(\partial^2_{xx}u)^- = 0$, so the G-heat equation reduces to
\[ \partial_t u = \frac{\bar{\sigma}^2}{2}\partial^2_{xx}u, \quad u|_{t=0} = \varphi. \]
For each concave function $\varphi$, we have
\[ \mathbb{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \varphi(\underline{\sigma}y)\exp\Big(-\frac{y^2}{2}\Big)dy. \]
In particular,
\[ \mathbb{E}[X] = \mathbb{E}[-X] = 0, \quad \mathbb{E}[X^2] = \bar{\sigma}^2, \quad -\mathbb{E}[-X^2] = \underline{\sigma}^2 \]
and
\[ \mathbb{E}[X^4] = 3\bar{\sigma}^4, \quad -\mathbb{E}[-X^4] = 3\underline{\sigma}^4. \]
Example 1.14 Let $\eta$ be maximal distributed. The distribution of $\eta$ is characterized by the following parabolic PDE defined on $[0, \infty) \times \mathbb{R}^d$:
\[ \partial_t u - g(Du) = 0, \quad u|_{t=0} = \varphi, \tag{1.14} \]
where $g = g_\eta(p) : \mathbb{R}^d \to \mathbb{R}$ is defined by
\[ g_\eta(p) := \mathbb{E}[\langle p, \eta\rangle], \quad p \in \mathbb{R}^d. \]
It is easy to check that $g_\eta$ is a sublinear function, so there exists a bounded, convex and closed subset $\bar{\Theta} \subset \mathbb{R}^d$ such that
\[ g_\eta(p) = \sup_{q\in\bar{\Theta}} \langle p, q\rangle, \quad p \in \mathbb{R}^d. \tag{1.15} \]
By this characterization, we can prove that the distribution of $\eta$ is given by
\[ F_\eta[\varphi] = \mathbb{E}[\varphi(\eta)] = \sup_{v\in\bar{\Theta}} \varphi(v) = \sup_{v\in\bar{\Theta}} \int_{\mathbb{R}^d} \varphi(x)\,\delta_v(dx), \quad \varphi \in C_{l.Lip}(\mathbb{R}^d), \tag{1.16} \]
where $\delta_v$ is the Dirac measure concentrated at $v$. Namely it is the maximal distribution whose uncertainty subset of probabilities consists of the Dirac measures concentrated at points of $\bar{\Theta}$. We denote $\eta \overset{d}{=} N(\bar{\Theta} \times \{0\})$.
When $d = 1$, we have $\eta \overset{d}{=} N([\underline{\mu}, \bar{\mu}] \times \{0\})$ and
\[ g_\eta(p) := \mathbb{E}[p\eta] = \bar{\mu}p^+ - \underline{\mu}p^-, \quad p \in \mathbb{R}, \]
where $\bar{\mu} = \mathbb{E}[\eta]$ and $\underline{\mu} = -\mathbb{E}[-\eta]$.
For each $\varphi \in C_{l.Lip}(\mathbb{R}^{2d})$, let $u = u^\varphi$ denote the viscosity solution of the G-equation (1.6) with Cauchy condition $u|_{t=0} = \varphi$. We take
$\bar{\Omega} = \mathbb{R}^{2d}$, $\bar{\mathcal{H}} = C_{l.Lip}(\mathbb{R}^{2d})$ and $\bar{\omega} = (x, y) \in \mathbb{R}^{2d}$, and introduce the pair of random vectors $(\bar{X}, \bar{\eta})(\bar{\omega}) := (x, y)$. The corresponding sublinear expectation $\bar{\mathbb{E}}[\cdot]$ is defined by
\[ \bar{\mathbb{E}}[\varphi(\bar{X}, \bar{\eta})] := u^\varphi(1, 0, 0) \quad \text{for } \varphi \in C_{l.Lip}(\mathbb{R}^{2d}). \]
In particular, just setting $\varphi_0(x, y) = \tfrac{1}{2}\langle Ax, x\rangle + \langle p, y\rangle$, we can check that
\[ u^{\varphi_0}(t, x, y) = G(p, A)t + \tfrac{1}{2}\langle Ax, x\rangle + \langle p, y\rangle. \]
We thus have
\[ \bar{\mathbb{E}}\big[\tfrac{1}{2}\langle A\bar{X}, \bar{X}\rangle + \langle p, \bar{\eta}\rangle\big] = u^{\varphi_0}(1, 0, 0) = G(p, A), \quad (p, A) \in \mathbb{R}^d \times \mathbb{S}(d). \]
We construct a product space
\[ (\Omega, \mathcal{H}, \mathbb{E}) = (\bar{\Omega} \times \bar{\Omega},\ \bar{\mathcal{H}} \otimes \bar{\mathcal{H}},\ \bar{\mathbb{E}} \otimes \bar{\mathbb{E}}), \]
and introduce two pairs of random vectors
\[ (X, \eta)(\bar{\omega}_1, \bar{\omega}_2) = \bar{\omega}_1, \quad (\bar{X}, \bar{\eta})(\bar{\omega}_1, \bar{\omega}_2) = \bar{\omega}_2, \quad (\bar{\omega}_1, \bar{\omega}_2) \in \bar{\Omega} \times \bar{\Omega}. \]
By Proposition 3.15 in Chapter I, $(X, \eta) \overset{d}{=} (\bar{X}, \bar{\eta})$ and $(\bar{X}, \bar{\eta})$ is an independent copy of $(X, \eta)$.
We now prove that the distribution of $(X, \eta)$ satisfies condition (1.5). For each $\varphi \in C_{l.Lip}(\mathbb{R}^{2d})$ and each fixed $\lambda > 0$, $(\bar{x}, \bar{y}) \in \mathbb{R}^{2d}$, the function $v$ defined by $v(t, x, y) := u^\varphi(\lambda t, \bar{x} + \sqrt{\lambda}x, \bar{y} + \lambda y)$ solves the same G-equation (1.6). Thus
\[ \bar{\mathbb{E}}[\varphi(\bar{x} + \sqrt{\lambda}\bar{X}, \bar{y} + \lambda\bar{\eta})] = v(1, 0, 0) = u^\varphi(\lambda, \bar{x}, \bar{y}). \]
By the definition of $\mathbb{E}$, for each $t > 0$ and $s > 0$,
\begin{align*}
\mathbb{E}[\varphi(\sqrt{t}X + \sqrt{s}\bar{X}, t\eta + s\bar{\eta})] &= \mathbb{E}\big[\bar{\mathbb{E}}[\varphi(\sqrt{t}x + \sqrt{s}\bar{X}, ty + s\bar{\eta})]_{(x,y)=(X,\eta)}\big] \\
&= \mathbb{E}[u^\varphi(s, \sqrt{t}X, t\eta)] = u^{u^\varphi(s,\cdot,\cdot)}(t, 0, 0) \\
&= u^\varphi(t + s, 0, 0) \\
&= \mathbb{E}[\varphi(\sqrt{t + s}X, (t + s)\eta)].
\end{align*}
Namely $(\sqrt{t}X + \sqrt{s}\bar{X}, t\eta + s\bar{\eta}) \overset{d}{=} (\sqrt{t + s}X, (t + s)\eta)$, i.e., the pair $(X, \eta)$ satisfies (1.5) and is thus G-distributed.
3 Law of Large Numbers and Central Limit Theorem
Theorem 3.1 (Law of large numbers) Let $\{Y_i\}_{i=1}^{\infty}$ be a sequence of $\mathbb{R}^d$-valued random variables on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. We assume that $Y_{i+1} \overset{d}{=} Y_i$ and $Y_{i+1}$ is independent from $(Y_1, \cdots, Y_i)$ for each $i = 1, 2, \cdots$. Then the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ defined by
\[ \bar{S}_n := \frac{1}{n}\sum_{i=1}^n Y_i \]
converges in law to a maximal distribution, i.e.,
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(\eta)], \tag{3.17} \]
for all functions $\varphi \in C(\mathbb{R}^d)$ satisfying the linear growth condition $|\varphi(x)| \le C(1 + |x|)$, where $\eta$ is a maximal distributed random vector and the corresponding sublinear function $g : \mathbb{R}^d \to \mathbb{R}$ is defined by
\[ g(p) := \mathbb{E}[\langle p, Y_1\rangle], \quad p \in \mathbb{R}^d. \]
Remark 3.2 When $d = 1$, the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ converges in law to $N([\underline{\mu}, \bar{\mu}] \times \{0\})$, where $\bar{\mu} = \mathbb{E}[Y_1]$ and $\underline{\mu} = -\mathbb{E}[-Y_1]$. For the general case, the sum $\frac{1}{n}\sum_{i=1}^n Y_i$ converges in law to $N(\bar{\Theta} \times \{0\})$, where $\bar{\Theta} \subset \mathbb{R}^d$ is the bounded, convex and closed subset defined in Example 1.14. If we take in particular $\varphi(y) = d_{\bar{\Theta}}(y) = \inf\{|x - y| : x \in \bar{\Theta}\}$, then by (3.17) we have the following generalized law of large numbers:
\[ \lim_{n\to\infty} \mathbb{E}\Big[d_{\bar{\Theta}}\Big(\frac{1}{n}\sum_{i=1}^n Y_i\Big)\Big] = \sup_{\theta\in\bar{\Theta}} d_{\bar{\Theta}}(\theta) = 0. \tag{3.18} \]
If $Y_i$ has no mean-uncertainty, or in other words, $\bar{\Theta}$ is a singleton $\{\theta\}$, then (3.18) becomes
\[ \lim_{n\to\infty} \mathbb{E}\Big[\Big|\frac{1}{n}\sum_{i=1}^n Y_i - \theta\Big|\Big] = 0. \]
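A Monte-Carlo sketch of (3.18) in dimension one (ours, not from the book; the scenario family below, where each increment's mean wanders in $[\underline{\mu}, \bar{\mu}]$, is only a crude finite sample of the full uncertainty set of probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_to_interval(x, lo, hi):
    # Distance from the point x to the interval [lo, hi].
    return max(lo - x, x - hi, 0.0)

def sample_mean_under_uncertainty(n, mu_lo, mu_hi):
    # One scenario: each Y_i is drawn with its own mean mu_i picked
    # (here, at random) from [mu_lo, mu_hi] -- a stand-in for one member
    # of the uncertain family of probabilities.
    mus = rng.uniform(mu_lo, mu_hi, size=n)
    return rng.normal(mus, 1.0).mean()

# Under the LLN above, every scenario's empirical mean is attracted
# to the interval [mu_lo, mu_hi], so the worst distance tends to 0.
n, mu_lo, mu_hi = 20000, -0.5, 1.0
worst = max(dist_to_interval(sample_mean_under_uncertainty(n, mu_lo, mu_hi),
                             mu_lo, mu_hi) for _ in range(50))
print(worst)   # close to 0
```

Note that an adversary may also choose the means adaptively; the random choice here only samples part of the scenario set, which is enough to illustrate the attraction to $[\underline{\mu}, \bar{\mu}]$.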
Theorem 3.3 (Central limit theorem with zero-mean) Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of $\mathbb{R}^d$-valued random variables on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. We assume that $X_{i+1} \overset{d}{=} X_i$ and $X_{i+1}$ is independent from $(X_1, \cdots, X_i)$ for each $i = 1, 2, \cdots$. We further assume that
\[ \mathbb{E}[X_1] = \mathbb{E}[-X_1] = 0. \]
Then the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ defined by
\[ \bar{S}_n := \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i \]
converges in law to $X$, i.e.,
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(X)], \]
for all functions $\varphi \in C(\mathbb{R}^d)$ satisfying the linear growth condition, where $X$ is a G-normal distributed random vector and the corresponding sublinear function $G : \mathbb{S}(d) \to \mathbb{R}$ is defined by
\[ G(A) := \mathbb{E}\big[\tfrac{1}{2}\langle AX_1, X_1\rangle\big], \quad A \in \mathbb{S}(d). \]
Remark 3.4 When $d = 1$, the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ converges in law to $N(\{0\} \times [\underline{\sigma}^2, \bar{\sigma}^2])$, where $\bar{\sigma}^2 = \mathbb{E}[X_1^2]$ and $\underline{\sigma}^2 = -\mathbb{E}[-X_1^2]$. In particular, if $\bar{\sigma}^2 = \underline{\sigma}^2$, then it becomes a classical central limit theorem.
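A Monte-Carlo sketch of this CLT under volatility uncertainty (ours, not from the book; only a few constant-volatility scenarios are sampled, which suffices for the convex test function used, since for convex $\varphi$ the worst case is the highest volatility):

```python
import numpy as np

rng = np.random.default_rng(1)

def clt_scenario_mean(phi, sigmas, n=200, m=10000):
    # Empirical E_P[phi(S_n)], with S_n = (X_1 + ... + X_n)/sqrt(n), under
    # ONE fixed volatility scenario: the i-th increment has std sigmas[i mod k].
    sig = np.resize(np.asarray(sigmas, dtype=float), n)
    x = rng.normal(0.0, 1.0, size=(m, n)) * sig
    return float(phi(x.sum(axis=1) / np.sqrt(n)).mean())

sig_lo, sig_hi = 0.5, 1.0
phi = lambda s: s**2                     # a convex test function
vals = [clt_scenario_mean(phi, s) for s in ([sig_lo], [sig_hi], [sig_lo, sig_hi])]
print(vals)   # the all-sig_hi scenario gives the largest value, ~ sig_hi**2
```

The maximum over scenarios approaches $\mathbb{E}[\varphi(X)] = \bar{\sigma}^2$ for $X \overset{d}{=} N(0, [\underline{\sigma}^2, \bar{\sigma}^2])$, consistent with the convex-function formula of Example 1.13.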
The following theorem is a nontrivial generalization of the above two theorems.
Theorem 3.5 (Central limit theorem with law of large numbers) Let $\{(X_i, Y_i)\}_{i=1}^{\infty}$ be a sequence of $\mathbb{R}^d \times \mathbb{R}^d$-valued random vectors on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. We assume that $(X_{i+1}, Y_{i+1}) \overset{d}{=} (X_i, Y_i)$ and $(X_{i+1}, Y_{i+1})$ is independent from $((X_1, Y_1), \cdots, (X_i, Y_i))$ for each $i = 1, 2, \cdots$. We further assume that
\[ \mathbb{E}[X_1] = \mathbb{E}[-X_1] = 0. \]
Then the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ defined by
\[ \bar{S}_n := \sum_{i=1}^n \Big(\frac{X_i}{\sqrt{n}} + \frac{Y_i}{n}\Big) \]
converges in law to $X + \eta$, i.e.,
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(X + \eta)], \tag{3.19} \]
for all functions $\varphi \in C(\mathbb{R}^d)$ satisfying a linear growth condition, where the pair $(X, \eta)$ is G-distributed. The corresponding sublinear function $G : \mathbb{R}^d \times \mathbb{S}(d) \to \mathbb{R}$ is defined by
\[ G(p, A) := \mathbb{E}\big[\langle p, Y_1\rangle + \tfrac{1}{2}\langle AX_1, X_1\rangle\big], \quad A \in \mathbb{S}(d),\ p \in \mathbb{R}^d. \]
Thus $\mathbb{E}[\varphi(X + \eta)]$ can be calculated by Corollary 1.12.
The following result is equivalent to the above central limit theorem.
Theorem 3.6 We make the same assumptions as in Theorem 3.5. Then for each function $\varphi \in C(\mathbb{R}^d \times \mathbb{R}^d)$ satisfying a linear growth condition, we have
\[ \lim_{n\to\infty} \mathbb{E}\Big[\varphi\Big(\sum_{i=1}^n \frac{X_i}{\sqrt{n}}, \sum_{i=1}^n \frac{Y_i}{n}\Big)\Big] = \mathbb{E}[\varphi(X, \eta)]. \]
Proof. It is easy to prove Theorem 3.5 by Theorem 3.6. To prove Theorem 3.6 from Theorem 3.5, it suffices to define the pair of 2d-dimensional random vectors
\[ \bar{X}_i = (X_i, 0), \quad \bar{Y}_i = (0, Y_i) \quad \text{for } i = 1, 2, \cdots. \]
We have
\begin{align*}
\lim_{n\to\infty} \mathbb{E}\Big[\varphi\Big(\sum_{i=1}^n \frac{X_i}{\sqrt{n}}, \sum_{i=1}^n \frac{Y_i}{n}\Big)\Big] &= \lim_{n\to\infty} \mathbb{E}\Big[\varphi\Big(\sum_{i=1}^n \Big(\frac{\bar{X}_i}{\sqrt{n}} + \frac{\bar{Y}_i}{n}\Big)\Big)\Big] = \mathbb{E}[\varphi(\bar{X} + \bar{\eta})] \\
&= \mathbb{E}[\varphi(X, \eta)]
\end{align*}
with $\bar{X} = (X, 0)$ and $\bar{\eta} = (0, \eta)$. $\square$
To prove Theorem 3.5, we need the following norms to measure the regularity of a given real function $u$ defined on $Q = [0, T] \times \mathbb{R}^d$:
\begin{align*}
\|u\|_{C^{0,0}(Q)} &= \sup_{(t,x)\in Q} |u(t, x)|, \\
\|u\|_{C^{1,1}(Q)} &= \|u\|_{C^{0,0}(Q)} + \|\partial_t u\|_{C^{0,0}(Q)} + \sum_{i=1}^d \|\partial_{x_i} u\|_{C^{0,0}(Q)}, \\
\|u\|_{C^{1,2}(Q)} &= \|u\|_{C^{1,1}(Q)} + \sum_{i,j=1}^d \|\partial^2_{x_i x_j} u\|_{C^{0,0}(Q)}.
\end{align*}
For given constants $\alpha, \beta \in (0, 1)$, we denote
\begin{align*}
\|u\|_{C^{\alpha,\beta}(Q)} &= \sup_{\substack{x,y\in\mathbb{R}^d,\ x\ne y \\ s,t\in[0,T],\ s\ne t}} \frac{|u(s, x) - u(t, y)|}{|s - t|^{\alpha} + |x - y|^{\beta}}, \\
\|u\|_{C^{1+\alpha,1+\beta}(Q)} &= \|u\|_{C^{\alpha,\beta}(Q)} + \|\partial_t u\|_{C^{\alpha,\beta}(Q)} + \sum_{i=1}^d \|\partial_{x_i} u\|_{C^{\alpha,\beta}(Q)}, \\
\|u\|_{C^{1+\alpha,2+\beta}(Q)} &= \|u\|_{C^{1+\alpha,1+\beta}(Q)} + \sum_{i,j=1}^d \|\partial^2_{x_i x_j} u\|_{C^{\alpha,\beta}(Q)}.
\end{align*}
If, for example, $\|u\|_{C^{1+\alpha,2+\beta}(Q)} < \infty$, then $u$ is said to be a $C^{1+\alpha,2+\beta}$-function on $Q$.
We need the following lemma.
Lemma 3.7 We make the same assumptions as in Theorem 3.5. We further assume that there exists a constant $\beta > 0$ such that, for each $A, \bar{A} \in \mathbb{S}(d)$ with $A \ge \bar{A}$, we have
\[ \mathbb{E}[\langle AX_1, X_1\rangle] - \mathbb{E}[\langle\bar{A}X_1, X_1\rangle] \ge \beta\,\mathrm{tr}[A - \bar{A}]. \tag{3.20} \]
Then our main result (3.19) holds.
Proof. We first prove (3.19) for $\varphi \in C_{b.Lip}(\mathbb{R}^d)$. For a small but fixed $h > 0$, let $V$ be the unique viscosity solution of
\[ \partial_t V + G(DV, D^2 V) = 0, \quad (t, x) \in [0, 1 + h) \times \mathbb{R}^d, \quad V|_{t=1+h} = \varphi. \tag{3.21} \]
Since $(X, \eta)$ satisfies (1.5), we have
\[ V(h, 0) = \mathbb{E}[\varphi(X + \eta)], \quad V(1 + h, x) = \varphi(x). \tag{3.22} \]
Since (3.21) is a uniformly parabolic PDE and $G$ is a convex function, by the interior regularity of $V$ (see Appendix C), we have
\[ \|V\|_{C^{1+\alpha/2,2+\alpha}([0,1]\times\mathbb{R}^d)} < \infty \quad \text{for some } \alpha \in (0, 1). \]
We set $\delta = \frac{1}{n}$ and $\bar{S}_0 = 0$. Then
\begin{align*}
V(1, \bar{S}_n) - V(0, 0) &= \sum_{i=0}^{n-1} \big\{V((i + 1)\delta, \bar{S}_{i+1}) - V(i\delta, \bar{S}_i)\big\} \\
&= \sum_{i=0}^{n-1} \big\{[V((i + 1)\delta, \bar{S}_{i+1}) - V(i\delta, \bar{S}_{i+1})] + [V(i\delta, \bar{S}_{i+1}) - V(i\delta, \bar{S}_i)]\big\} \\
&= \sum_{i=0}^{n-1} \big\{I_\delta^i + J_\delta^i\big\}
\end{align*}
with, by Taylor's expansion,
\[ J_\delta^i = \partial_t V(i\delta, \bar{S}_i)\delta + \tfrac{1}{2}\big\langle D^2 V(i\delta, \bar{S}_i)X_{i+1}, X_{i+1}\big\rangle\delta + \big\langle DV(i\delta, \bar{S}_i), X_{i+1}\sqrt{\delta} + Y_{i+1}\delta\big\rangle, \]
\begin{align*}
I_\delta^i &= \int_0^1 \big[\partial_t V((i + \beta)\delta, \bar{S}_{i+1}) - \partial_t V(i\delta, \bar{S}_{i+1})\big]\delta\,d\beta + \big[\partial_t V(i\delta, \bar{S}_{i+1}) - \partial_t V(i\delta, \bar{S}_i)\big]\delta \\
&\quad + \big\langle D^2 V(i\delta, \bar{S}_i)X_{i+1}, Y_{i+1}\big\rangle\delta^{3/2} + \tfrac{1}{2}\big\langle D^2 V(i\delta, \bar{S}_i)Y_{i+1}, Y_{i+1}\big\rangle\delta^2 \\
&\quad + \int_0^1\int_0^1 \big\langle\Theta^i_{\beta\gamma}(X_{i+1}\sqrt{\delta} + Y_{i+1}\delta), X_{i+1}\sqrt{\delta} + Y_{i+1}\delta\big\rangle\,\gamma\,d\beta\,d\gamma
\end{align*}
with
\[ \Theta^i_{\beta\gamma} = D^2 V\big(i\delta, \bar{S}_i + \gamma\beta(X_{i+1}\sqrt{\delta} + Y_{i+1}\delta)\big) - D^2 V(i\delta, \bar{S}_i). \]
Thus
\[ \mathbb{E}\Big[\sum_{i=0}^{n-1} J_\delta^i\Big] - \mathbb{E}\Big[-\sum_{i=0}^{n-1} I_\delta^i\Big] \le \mathbb{E}[V(1, \bar{S}_n)] - V(0, 0) \le \mathbb{E}\Big[\sum_{i=0}^{n-1} J_\delta^i\Big] + \mathbb{E}\Big[\sum_{i=0}^{n-1} I_\delta^i\Big]. \tag{3.23} \]
We now prove that $\mathbb{E}\big[\sum_{i=0}^{n-1} J_\delta^i\big] = 0$. For $J_\delta^i$, note that
\[ \mathbb{E}\big[\big\langle DV(i\delta, \bar{S}_i), X_{i+1}\big\rangle\big] = \mathbb{E}\big[-\big\langle DV(i\delta, \bar{S}_i), X_{i+1}\big\rangle\big] = 0, \]
then, from the definition of the function $G$, we have
\[ \mathbb{E}[J_\delta^i] = \mathbb{E}\big[\partial_t V(i\delta, \bar{S}_i) + G\big(DV(i\delta, \bar{S}_i), D^2 V(i\delta, \bar{S}_i)\big)\big]\delta. \]
Combining the above two equalities with $\partial_t V + G(DV, D^2 V) = 0$ as well as the independence of $(X_{i+1}, Y_{i+1})$ from $((X_1, Y_1), \cdots, (X_i, Y_i))$, it follows that
\[ \mathbb{E}\Big[\sum_{i=0}^{n-1} J_\delta^i\Big] = \mathbb{E}\Big[\sum_{i=0}^{n-2} J_\delta^i\Big] = \cdots = 0. \]
Thus (3.23) can be rewritten as
\[ -\mathbb{E}\Big[-\sum_{i=0}^{n-1} I_\delta^i\Big] \le \mathbb{E}[V(1, \bar{S}_n)] - V(0, 0) \le \mathbb{E}\Big[\sum_{i=0}^{n-1} I_\delta^i\Big]. \]
But since both $\partial_t V$ and $D^2 V$ are uniformly $\frac{\alpha}{2}$-Hölder continuous in $t$ and $\alpha$-Hölder continuous in $x$ on $[0, 1] \times \mathbb{R}^d$, we then have
\[ |I_\delta^i| \le C\delta^{1+\alpha/2}\big(1 + |X_{i+1}|^{2+\alpha} + |Y_{i+1}|^{2+\alpha}\big). \]
It follows that
\[ \mathbb{E}[|I_\delta^i|] \le C\delta^{1+\alpha/2}\big(1 + \mathbb{E}[|X_1|^{2+\alpha} + |Y_1|^{2+\alpha}]\big). \]
Thus
\[ -C\Big(\frac{1}{n}\Big)^{\alpha/2}\big(1 + \mathbb{E}[|X_1|^{2+\alpha} + |Y_1|^{2+\alpha}]\big) \le \mathbb{E}[V(1, \bar{S}_n)] - V(0, 0) \le C\Big(\frac{1}{n}\Big)^{\alpha/2}\big(1 + \mathbb{E}[|X_1|^{2+\alpha} + |Y_1|^{2+\alpha}]\big). \]
As $n \to \infty$, we have
\[ \lim_{n\to\infty} \mathbb{E}[V(1, \bar{S}_n)] = V(0, 0). \tag{3.24} \]
On the other hand, for each $t, t' \in [0, 1 + h]$ and $x \in \mathbb{R}^d$ we have $|V(t, x) - V(t', x)| \le C(\sqrt{|t - t'|} + |t - t'|)$. Thus $|V(0, 0) - V(h, 0)| \le C(\sqrt{h} + h)$ and
\[ |\mathbb{E}[V(1, \bar{S}_n)] - \mathbb{E}[\varphi(\bar{S}_n)]| = |\mathbb{E}[V(1, \bar{S}_n)] - \mathbb{E}[V(1 + h, \bar{S}_n)]| \le C(\sqrt{h} + h). \]
It follows from (3.22) and (3.24) that
\[ \limsup_{n\to\infty} |\mathbb{E}[\varphi(\bar{S}_n)] - \mathbb{E}[\varphi(X + \eta)]| \le 2C(\sqrt{h} + h). \]
Since $h$ can be arbitrarily small, we have
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(X + \eta)]. \quad \square \]
Remark 3.8 From the proof we can check that the main assumption of identical distribution of $\{(X_i, Y_i)\}_{i=1}^{\infty}$ can be weakened to
\[ \mathbb{E}\big[\langle p, Y_i\rangle + \tfrac{1}{2}\langle AX_i, X_i\rangle\big] = G(p, A), \quad i = 1, 2, \cdots,\ p \in \mathbb{R}^d,\ A \in \mathbb{S}(d). \]
Another essential condition is $\mathbb{E}[|X_i|^{2+\delta}] + \mathbb{E}[|Y_i|^{1+\delta}] \le C$ for some $\delta > 0$. We do not need the condition $\mathbb{E}[|X_i|^n] + \mathbb{E}[|Y_i|^n] < \infty$ for each $n \in \mathbb{N}$.
We now give the proof of Theorem 3.5.
Proof of Theorem 3.5. For the case when the uniformly elliptic condition (3.20) does not hold, we first introduce a perturbation to prove the above convergence for $\varphi \in C_{b.Lip}(\mathbb{R}^d)$. According to Definition 3.14 and Proposition 3.15 in Chapter I,
we can construct a sublinear expectation space $(\bar{\Omega}, \bar{\mathcal{H}}, \bar{\mathbb{E}})$ and a sequence of three random vectors $\{(\bar{X}_i, \bar{Y}_i, \bar{\kappa}_i)\}_{i=1}^{\infty}$ such that, for each $n = 1, 2, \cdots$, $\{(\bar{X}_i, \bar{Y}_i)\}_{i=1}^n \overset{d}{=} \{(X_i, Y_i)\}_{i=1}^n$ and $(\bar{X}_{n+1}, \bar{Y}_{n+1}, \bar{\kappa}_{n+1})$ is independent from $\{(\bar{X}_i, \bar{Y}_i, \bar{\kappa}_i)\}_{i=1}^n$ and, moreover,
\[ \bar{\mathbb{E}}[\psi(\bar{X}_i, \bar{Y}_i, \bar{\kappa}_i)] = (2\pi)^{-d/2}\int_{\mathbb{R}^d} \mathbb{E}[\psi(X_i, Y_i, x)]\,e^{-|x|^2/2}\,dx \quad \text{for } \psi \in C_{l.Lip}(\mathbb{R}^{3d}). \]
We then use the perturbation $\bar{X}_i^\varepsilon = \bar{X}_i + \varepsilon\bar{\kappa}_i$ for a fixed $\varepsilon > 0$. It is easy to see that the sequence $\{(\bar{X}_i^\varepsilon, \bar{Y}_i)\}_{i=1}^{\infty}$ satisfies all the conditions in the above CLT; in particular,
\[ G_\varepsilon(p, A) := \bar{\mathbb{E}}\big[\tfrac{1}{2}\langle A\bar{X}_1^\varepsilon, \bar{X}_1^\varepsilon\rangle + \langle p, \bar{Y}_1\rangle\big] = G(p, A) + \frac{\varepsilon^2}{2}\mathrm{tr}[A]. \]
Thus $G_\varepsilon$ is strictly elliptic. We then can apply Lemma 3.7 to
\[ \bar{S}_n^\varepsilon := \sum_{i=1}^n \Big(\frac{\bar{X}_i^\varepsilon}{\sqrt{n}} + \frac{\bar{Y}_i}{n}\Big) = \sum_{i=1}^n \Big(\frac{\bar{X}_i}{\sqrt{n}} + \frac{\bar{Y}_i}{n}\Big) + \varepsilon J_n, \quad J_n = \sum_{i=1}^n \frac{\bar{\kappa}_i}{\sqrt{n}}, \]
lim
n
E[(
n
)] =
E[(
X + + )],
where ((
X, ), ( , 0)) is
G-distributed under
E[] and
G( p,
A) :=
E[
1
2
A(
X
1
,
1
)
T
, (
X
1
,
1
)
T
_
+
p, (
Y
1
, 0)
T
_
],
A S(2d), p R
2d
.
By Proposition 1.6, it is easy to prove that $(\tilde{X} + \varepsilon\tilde{\kappa}, \tilde{\eta})$ is $G_\varepsilon$-distributed and $(\tilde{X}, \tilde{\eta})$ is $G$-distributed. But we have
\[ |\mathbb{E}[\varphi(\bar{S}_n)] - \bar{\mathbb{E}}[\varphi(\bar{S}_n^\varepsilon)]| = |\bar{\mathbb{E}}[\varphi(\bar{S}_n^\varepsilon - \varepsilon J_n)] - \bar{\mathbb{E}}[\varphi(\bar{S}_n^\varepsilon)]| \le C\bar{\mathbb{E}}[|J_n|]\,\varepsilon \le C\varepsilon \]
and similarly,
\[ |\mathbb{E}[\varphi(X + \eta)] - \tilde{\mathbb{E}}[\varphi(\tilde{X} + \tilde{\eta} + \varepsilon\tilde{\kappa})]| = |\tilde{\mathbb{E}}[\varphi(\tilde{X} + \tilde{\eta})] - \tilde{\mathbb{E}}[\varphi(\tilde{X} + \tilde{\eta} + \varepsilon\tilde{\kappa})]| \le C\varepsilon. \]
Since $\varepsilon$ can be arbitrarily small, it follows that
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(X + \eta)] \quad \text{for } \varphi \in C_{b.Lip}(\mathbb{R}^d). \]
On the other hand, it is easy to check that $\sup_n \mathbb{E}[|\bar{S}_n|^2] + \mathbb{E}[|X + \eta|^2] < \infty$. We then can apply the following lemma to prove that the above convergence holds for $\varphi \in C(\mathbb{R}^d)$ with the linear growth condition. The proof is complete. $\square$
Lemma 3.9 Let $(\Omega, \mathcal{H}, \mathbb{E})$ and $(\bar{\Omega}, \bar{\mathcal{H}}, \bar{\mathbb{E}})$ be two sublinear expectation spaces, and let $Y_n \in \mathcal{H}$, $n = 1, 2, \cdots$, and $Y \in \bar{\mathcal{H}}$ satisfy $\sup_n \mathbb{E}[|Y_n|^p] + \bar{\mathbb{E}}[|Y|^p] < \infty$ for some $p > 1$. If the convergence $\lim_{n\to\infty} \mathbb{E}[\varphi(Y_n)] = \bar{\mathbb{E}}[\varphi(Y)]$ holds for each $\varphi \in C_{b.Lip}(\mathbb{R}^d)$, then it also holds for all functions $\varphi \in C(\mathbb{R}^d)$ with the growth condition $|\varphi(x)| \le C(1 + |x|^{p-1})$.
Proof. We first prove that the above convergence holds for $\varphi \in C_b(\mathbb{R}^d)$ with a compact support. In this case, for each $\varepsilon > 0$, we can find a $\bar{\varphi} \in C_{b.Lip}(\mathbb{R}^d)$ such that $\sup_{x\in\mathbb{R}^d} |\varphi(x) - \bar{\varphi}(x)| \le \frac{\varepsilon}{2}$. We have
\begin{align*}
|\mathbb{E}[\varphi(Y_n)] - \bar{\mathbb{E}}[\varphi(Y)]| &\le |\mathbb{E}[\varphi(Y_n)] - \mathbb{E}[\bar{\varphi}(Y_n)]| + |\bar{\mathbb{E}}[\varphi(Y)] - \bar{\mathbb{E}}[\bar{\varphi}(Y)]| + |\mathbb{E}[\bar{\varphi}(Y_n)] - \bar{\mathbb{E}}[\bar{\varphi}(Y)]| \\
&\le \varepsilon + |\mathbb{E}[\bar{\varphi}(Y_n)] - \bar{\mathbb{E}}[\bar{\varphi}(Y)]|.
\end{align*}
Thus $\limsup_{n\to\infty} |\mathbb{E}[\varphi(Y_n)] - \bar{\mathbb{E}}[\varphi(Y)]| \le \varepsilon$; since $\varepsilon$ is arbitrary, the convergence holds for such $\varphi$.
Now let $\varphi \in C(\mathbb{R}^d)$ satisfy $|\varphi(x)| \le C(1 + |x|^{p-1})$. For each $N > 0$ we can write $\varphi = \varphi_1 + \varphi_2$, where $\varphi_1 \in C_b(\mathbb{R}^d)$ has compact support, $\varphi_2(x) = 0$ for $|x| \le N$ and $|\varphi_2(x)| \le \frac{2C(1 + |x|^p)}{N}$ for all $x$. Then
\begin{align*}
|\mathbb{E}[\varphi(Y_n)] - \bar{\mathbb{E}}[\varphi_1(Y) + \varphi_2(Y)]| &\le |\mathbb{E}[\varphi_1(Y_n)] - \bar{\mathbb{E}}[\varphi_1(Y)]| + \mathbb{E}[|\varphi_2(Y_n)|] + \bar{\mathbb{E}}[|\varphi_2(Y)|] \\
&\le |\mathbb{E}[\varphi_1(Y_n)] - \bar{\mathbb{E}}[\varphi_1(Y)]| + \frac{2C}{N}\big(2 + \mathbb{E}[|Y_n|^p] + \bar{\mathbb{E}}[|Y|^p]\big) \\
&\le |\mathbb{E}[\varphi_1(Y_n)] - \bar{\mathbb{E}}[\varphi_1(Y)]| + \frac{\bar{C}}{N},
\end{align*}
where $\bar{C} = 2C(2 + \sup_n \mathbb{E}[|Y_n|^p] + \bar{\mathbb{E}}[|Y|^p])$. We thus have $\limsup_{n\to\infty} |\mathbb{E}[\varphi(Y_n)] - \bar{\mathbb{E}}[\varphi(Y)]| \le \frac{\bar{C}}{N}$. Since $N$ can be arbitrarily large, $\mathbb{E}[\varphi(Y_n)]$ must converge to $\bar{\mathbb{E}}[\varphi(Y)]$. $\square$
Exercise 3.10 Let $X_i \in \mathcal{H}$, $i = 1, 2, \cdots$, be such that $X_{i+1}$ is independent from $(X_1, \cdots, X_i)$, for each $i = 1, 2, \cdots$. We further assume that
\[ \mathbb{E}[X_i] = \mathbb{E}[-X_i] = 0, \]
\[ \lim_{i\to\infty} \mathbb{E}[X_i^2] = \bar{\sigma}^2 < \infty, \quad \lim_{i\to\infty} -\mathbb{E}[-X_i^2] = \underline{\sigma}^2, \]
\[ \mathbb{E}[|X_i|^{2+\delta}] \le M \quad \text{for some } \delta > 0 \text{ and a constant } M. \]
Prove that the sequence $\{\bar{S}_n\}_{n=1}^{\infty}$ defined by
\[ \bar{S}_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i \]
converges in law to $X$, i.e.,
\[ \lim_{n\to\infty} \mathbb{E}[\varphi(\bar{S}_n)] = \mathbb{E}[\varphi(X)] \quad \text{for } \varphi \in C_{b.Lip}(\mathbb{R}), \]
where $X \overset{d}{=} N(\{0\} \times [\underline{\sigma}^2, \bar{\sigma}^2])$.
In particular, if $\underline{\sigma}^2 = \bar{\sigma}^2$, it becomes a classical central limit theorem.
Notes and Comments
The contents of this chapter are mainly from Peng (2008) [105] (see also Peng
(2007) [101]).
The notion of G-normal distribution was first introduced by Peng (2006) [100] for the 1-dimensional case, and by Peng (2008) [104] for the multi-dimensional case. In the classical situation, a distribution satisfying equation (1.1) is said to be stable (see Lévy (1925) [77] and (1965) [78]). In this sense, our G-normal distribution can be considered as the most typical stable distribution under the framework of sublinear expectations.
Marinacci (1999) [83] used different notions of distributions and independence via capacity and the corresponding Choquet expectation to obtain a law of large numbers and a central limit theorem for non-additive probabilities (see also Maccheroni and Marinacci (2005) [84]). But since a sublinear expectation cannot be characterized by the corresponding capacity, our results cannot be derived from theirs. In fact, our results show that the limit in the CLT, under uncertainty, is a G-normal distribution in which the distribution uncertainty is not just a parameter of the classical normal distributions (see Exercise 2.2).
The notion of viscosity solutions plays a basic role in the definition and properties of the G-normal distribution and the maximal distribution. This notion was initially introduced by Crandall and Lions (1983) [29]. It is a fundamentally important notion in the theory of nonlinear parabolic and elliptic PDEs. Readers are referred to Crandall, Ishii and Lions (1992) [30] for rich references on the beautiful and powerful theory of viscosity solutions. For books on the theory of viscosity solutions and the related HJB equations, see Barles (1994) [8], Fleming and Soner (1992) [49] as well as Yong and Zhou (1999) [124].
We note that, for the case when the uniformly elliptic condition holds, the viscosity solution (1.10) becomes a classical $C^{1+\alpha/2,2+\alpha}$-solution (see Krylov (1987) [76] and the recent works of Cabré and Caffarelli (1997) [17] and Wang (1992) [119]). In the 1-dimensional situation, when $\underline{\sigma}^2 > 0$, the G-equation becomes the following Barenblatt equation:
\[ \partial_t u + \gamma|\partial_t u| = \Delta u, \quad |\gamma| < 1. \]
This equation was first introduced by Barenblatt (1979) [7] (see also Avellaneda, Lévy and Paras (1995) [5]).
Chapter III
G-Brownian Motion and Itô's Integral
The aim of this chapter is to introduce the concept of G-Brownian motion, to study its properties and to construct Itô's integral with respect to G-Brownian motion. We emphasize here that our definition of G-Brownian motion is consistent with the classical one in the sense that, if there is no volatility uncertainty, it reduces to a classical Brownian motion. Our G-Brownian motion also has independent increments with identical G-normal distributions. G-Brownian motion has a very rich and interesting new structure which non-trivially generalizes the classical one. We thus can establish the related stochastic calculus, especially Itô's integrals and the related quadratic variation process. A very interesting new phenomenon of our G-Brownian motion is that its quadratic variation process also has independent increments which are identically distributed. The corresponding G-Itô formula is obtained.
1 G-Brownian Motion and its Characterization
Definition 1.1 Let $(\Omega, \mathcal{H}, \mathbb{E})$ be a sublinear expectation space. $(X_t)_{t\ge0}$ is called a d-dimensional stochastic process if for each $t \ge 0$, $X_t$ is a d-dimensional random vector in $\mathcal{H}$.
Let $G(\cdot) : \mathbb{S}(d) \to \mathbb{R}$ be a given monotonic and sublinear function. By Theorem 2.1 in Chapter I, there exists a bounded, convex and closed subset $\Theta \subset \mathbb{S}_+(d)$ such that
\[ G(A) = \frac{1}{2}\sup_{B\in\Theta} \mathrm{tr}[AB], \quad A \in \mathbb{S}(d). \]
By Section 2 in Chapter II, we know that the G-normal distribution $N(\{0\} \times \Theta)$ exists.
We now give the definition of G-Brownian motion.
Definition 1.2 A d-dimensional process $(B_t)_{t\ge0}$ on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$ is called a G-Brownian motion if the following properties are satisfied:
(i) $B_0(\omega) = 0$;
(ii) For each $t, s \ge 0$, the increment $B_{t+s} - B_t$ is $N(\{0\} \times s\Theta)$-distributed and is independent from $(B_{t_1}, B_{t_2}, \cdots, B_{t_n})$, for each $n \in \mathbb{N}$ and $0 \le t_1 \le \cdots \le t_n \le t$.
Remark 1.3 We can prove that, for each $t_0 > 0$, $(B_{t+t_0} - B_{t_0})_{t\ge0}$ is a G-Brownian motion. For each $\lambda > 0$, $(\lambda^{-1/2}B_{\lambda t})_{t\ge0}$ is also a G-Brownian motion. This is the scaling property of G-Brownian motion, which is the same as that of the classical Brownian motion.
In the rest of this book we will denote
\[ B_t^a := \langle a, B_t\rangle \quad \text{for each } a = (a_1, \cdots, a_d)^T \in \mathbb{R}^d. \]
By the above definition we have the following proposition, which is important in stochastic calculus.
Proposition 1.4 Let $(B_t)_{t\ge0}$ be a d-dimensional G-Brownian motion on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. Then $(B_t^a)_{t\ge0}$ is a 1-dimensional $G_a$-Brownian motion for each $a \in \mathbb{R}^d$, where $G_a(\alpha) = \frac{1}{2}(\bar{\sigma}_{aa^T}^2\alpha^+ - \underline{\sigma}_{aa^T}^2\alpha^-)$,
\[ \bar{\sigma}_{aa^T}^2 = 2G(aa^T) = \mathbb{E}[\langle a, B_1\rangle^2], \quad \underline{\sigma}_{aa^T}^2 = -2G(-aa^T) = -\mathbb{E}[-\langle a, B_1\rangle^2]. \]
In particular, for each $t, s \ge 0$, $B_{t+s}^a - B_t^a \overset{d}{=} N(\{0\} \times [s\underline{\sigma}_{aa^T}^2, s\bar{\sigma}_{aa^T}^2])$.
Proposition 1.5 For each convex function $\varphi$, we have
\[ \mathbb{E}[\varphi(B_{t+s}^a - B_t^a)] = \frac{1}{\sqrt{2\pi s\bar{\sigma}_{aa^T}^2}}\int_{-\infty}^{\infty} \varphi(x)\exp\Big(-\frac{x^2}{2s\bar{\sigma}_{aa^T}^2}\Big)dx. \]
For each concave function $\varphi$ and $\underline{\sigma}_{aa^T}^2 > 0$, we have
\[ \mathbb{E}[\varphi(B_{t+s}^a - B_t^a)] = \frac{1}{\sqrt{2\pi s\underline{\sigma}_{aa^T}^2}}\int_{-\infty}^{\infty} \varphi(x)\exp\Big(-\frac{x^2}{2s\underline{\sigma}_{aa^T}^2}\Big)dx. \]
In particular, we have
\begin{align*}
\mathbb{E}[(B_t^a - B_s^a)^2] &= \bar{\sigma}_{aa^T}^2(t - s), & \mathbb{E}[(B_t^a - B_s^a)^4] &= 3\bar{\sigma}_{aa^T}^4(t - s)^2, \\
-\mathbb{E}[-(B_t^a - B_s^a)^2] &= \underline{\sigma}_{aa^T}^2(t - s), & -\mathbb{E}[-(B_t^a - B_s^a)^4] &= 3\underline{\sigma}_{aa^T}^4(t - s)^2.
\end{align*}
The following theorem gives a characterization of G-Brownian motion.
Theorem 1.6 Let $(B_t)_{t\ge0}$ be a d-dimensional process defined on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$ such that
(i) $B_0(\omega) = 0$;
(ii) For each $t, s \ge 0$, $B_{t+s} - B_t$ and $B_s$ are identically distributed and $B_{t+s} - B_t$ is independent from $(B_{t_1}, B_{t_2}, \cdots, B_{t_n})$, for each $n \in \mathbb{N}$ and $0 \le t_1 \le \cdots \le t_n \le t$;
(iii) $\mathbb{E}[B_t] = \mathbb{E}[-B_t] = 0$ and $\lim_{t\downarrow0} \mathbb{E}[|B_t|^3]\,t^{-1} = 0$.
Then $(B_t)_{t\ge0}$ is a G-Brownian motion with $G(A) = \frac{1}{2}\mathbb{E}[\langle AB_1, B_1\rangle]$, $A \in \mathbb{S}(d)$.
Proof. We only need to prove that $B_1$ is G-normal distributed and $B_t \overset{d}{=} \sqrt{t}B_1$.
We first prove that
\[ \mathbb{E}[\langle AB_t, B_t\rangle] = 2G(A)t, \quad A \in \mathbb{S}(d). \]
For each given $A \in \mathbb{S}(d)$, we set $b(t) = \mathbb{E}[\langle AB_t, B_t\rangle]$. Then $b(0) = 0$ and $|b(t)| \le |A|(\mathbb{E}[|B_t|^3])^{2/3} \to 0$ as $t \downarrow 0$. Since for each $t, s \ge 0$,
\begin{align*}
b(t + s) &= \mathbb{E}[\langle AB_{t+s}, B_{t+s}\rangle] = \mathbb{E}[\langle A(B_{t+s} - B_s + B_s), B_{t+s} - B_s + B_s\rangle] \\
&= \mathbb{E}[\langle A(B_{t+s} - B_s), B_{t+s} - B_s\rangle + \langle AB_s, B_s\rangle + 2\langle A(B_{t+s} - B_s), B_s\rangle] \\
&= b(t) + b(s),
\end{align*}
we have $b(t) = b(1)t = 2G(A)t$.
We now prove that $B_1$ is G-normal distributed and $B_t \overset{d}{=} \sqrt{t}B_1$. For this, we just need to prove that, for each fixed $\varphi \in C_{b.Lip}(\mathbb{R}^d)$, the function
\[ u(t, x) := \mathbb{E}[\varphi(x + B_t)], \quad (t, x) \in [0, \infty) \times \mathbb{R}^d \]
is the viscosity solution of the following G-heat equation:
\[ \partial_t u - G(D^2 u) = 0, \quad u|_{t=0} = \varphi. \tag{1.1} \]
We first prove that $u$ is Lipschitz in $x$ and $\frac{1}{2}$-Hölder continuous in $t$. In fact, for each fixed $t$, $u(t, \cdot) \in C_{b.Lip}(\mathbb{R}^d)$ since
\[ |u(t, x) - u(t, y)| = |\mathbb{E}[\varphi(x + B_t)] - \mathbb{E}[\varphi(y + B_t)]| \le \mathbb{E}[|\varphi(x + B_t) - \varphi(y + B_t)|] \le C|x - y|, \]
where $C$ is the Lipschitz constant of $\varphi$.
For each $\delta \in [0, t]$, since $B_t - B_\delta$ is independent from $B_\delta$, we also have
\begin{align*}
u(t, x) &= \mathbb{E}[\varphi(x + B_\delta + (B_t - B_\delta))] \\
&= \mathbb{E}\big[\mathbb{E}[\varphi(y + (B_t - B_\delta))]_{y=x+B_\delta}\big],
\end{align*}
hence
\[ u(t, x) = \mathbb{E}[u(t - \delta, x + B_\delta)]. \tag{1.2} \]
Thus
\begin{align*}
|u(t, x) - u(t - \delta, x)| &= |\mathbb{E}[u(t - \delta, x + B_\delta) - u(t - \delta, x)]| \\
&\le \mathbb{E}[|u(t - \delta, x + B_\delta) - u(t - \delta, x)|] \\
&\le \mathbb{E}[C|B_\delta|] \le C\sqrt{2G(I)}\sqrt{\delta}.
\end{align*}
To prove that $u$ is a viscosity solution of (1.1), we fix $(t, x) \in (0, \infty) \times \mathbb{R}^d$ and let $v \in C_b^{2,3}([0, \infty) \times \mathbb{R}^d)$ be such that $v \ge u$ and $v(t, x) = u(t, x)$. From (1.2) we have
\[ v(t, x) = \mathbb{E}[u(t - \delta, x + B_\delta)] \le \mathbb{E}[v(t - \delta, x + B_\delta)]. \]
Therefore by Taylor's expansion,
\begin{align*}
0 &\le \mathbb{E}[v(t - \delta, x + B_\delta) - v(t, x)] \\
&= \mathbb{E}[v(t - \delta, x + B_\delta) - v(t, x + B_\delta) + (v(t, x + B_\delta) - v(t, x))] \\
&= \mathbb{E}\big[-\partial_t v(t, x)\delta + \langle Dv(t, x), B_\delta\rangle + \tfrac{1}{2}\langle D^2 v(t, x)B_\delta, B_\delta\rangle + I_\delta\big] \\
&\le -\partial_t v(t, x)\delta + \tfrac{1}{2}\mathbb{E}[\langle D^2 v(t, x)B_\delta, B_\delta\rangle] + \mathbb{E}[I_\delta] \\
&= -\partial_t v(t, x)\delta + G(D^2 v(t, x))\delta + \mathbb{E}[I_\delta],
\end{align*}
where we also use $\mathbb{E}[\langle Dv(t, x), B_\delta\rangle] = \mathbb{E}[-\langle Dv(t, x), B_\delta\rangle] = 0$, and
\begin{align*}
I_\delta &= \int_0^1 \big[-\partial_t v(t - \beta\delta, x + B_\delta) + \partial_t v(t, x)\big]\delta\,d\beta \\
&\quad + \int_0^1\int_0^1 \big\langle(D^2 v(t, x + \gamma\beta B_\delta) - D^2 v(t, x))B_\delta, B_\delta\big\rangle\,\gamma\,d\beta\,d\gamma.
\end{align*}
With the assumption (iii) we can check that $\lim_{\delta\downarrow0} \mathbb{E}[|I_\delta|]\,\delta^{-1} = 0$, from which we get $\partial_t v(t, x) - G(D^2 v(t, x)) \le 0$, hence $u$ is a viscosity subsolution of (1.1). We can analogously prove that $u$ is a viscosity supersolution. Thus $u$ is a viscosity solution and $(B_t)_{t\ge0}$ is a G-Brownian motion. The proof is complete. $\square$
Exercise 1.7 Let $B_t$ be a 1-dimensional G-Brownian motion with $B_1 \overset{d}{=} N(\{0\} \times [\underline{\sigma}^2, \bar{\sigma}^2])$. Prove that for each $m \in \mathbb{N}$,
\[ \mathbb{E}[|B_t|^m] = \begin{cases} \sqrt{2/\pi}\,(m - 1)!!\,\bar{\sigma}^m t^{m/2}, & m \text{ is odd}, \\ (m - 1)!!\,\bar{\sigma}^m t^{m/2}, & m \text{ is even}. \end{cases} \]
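Since $|x|^m$ is convex, $\mathbb{E}[|B_t|^m]$ coincides with the classical absolute moment of $N(0, \bar{\sigma}^2 t)$, so the stated formula can be checked by ordinary Monte Carlo (a sketch, ours rather than the book's; $\bar{\sigma}$ and $t$ below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

def double_fact(k):
    # k!! with the convention (-1)!! = 0!! = 1
    return 1 if k <= 0 else k * double_fact(k - 2)

sig_hi, t, n = 1.3, 2.0, 1_000_000
z = rng.normal(0.0, sig_hi * np.sqrt(t), size=n)   # classical N(0, sig_hi^2 t)

results = {}
for m in (2, 3, 4):
    mc = float(np.mean(np.abs(z) ** m))
    if m % 2 == 1:
        exact = np.sqrt(2.0 / np.pi) * double_fact(m - 1) * sig_hi**m * t**(m / 2)
    else:
        exact = double_fact(m - 1) * sig_hi**m * t**(m / 2)
    results[m] = (mc, exact)
    print(m, mc, exact)
```

The Monte-Carlo values agree with the formula to within sampling error for each $m$.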
2 Existence of G-Brownian Motion
In the rest of this book, we denote by $\Omega = C_0^d(\mathbb{R}^+)$ the space of all $\mathbb{R}^d$-valued continuous paths $(\omega_t)_{t\in\mathbb{R}^+}$ with $\omega_0 = 0$, equipped with the distance
\[ \rho(\omega^1, \omega^2) := \sum_{i=1}^{\infty} 2^{-i}\Big[\Big(\max_{t\in[0,i]} |\omega_t^1 - \omega_t^2|\Big) \wedge 1\Big]. \]
For each fixed $T \in [0, \infty)$, we set $\Omega_T := \{\omega_{\cdot\wedge T} : \omega \in \Omega\}$. We will consider the canonical process $B_t(\omega) = \omega_t$, $t \in [0, \infty)$, for $\omega \in \Omega$.
For each fixed $T \in [0, \infty)$, we set
\[ L_{ip}(\Omega_T) := \big\{\varphi(B_{t_1\wedge T}, \cdots, B_{t_n\wedge T}) : n \in \mathbb{N},\ t_1, \cdots, t_n \in [0, \infty),\ \varphi \in C_{l.Lip}(\mathbb{R}^{d\times n})\big\}. \]
It is clear that $L_{ip}(\Omega_t) \subseteq L_{ip}(\Omega_T)$ for $t \le T$. We also set
\[ L_{ip}(\Omega) := \bigcup_{n=1}^{\infty} L_{ip}(\Omega_n). \]
Remark 2.1 It is clear that $C_{l.Lip}(\mathbb{R}^{d\times n})$, $L_{ip}(\Omega_T)$ and $L_{ip}(\Omega)$ are vector lattices. Moreover, note that $\varphi, \psi \in C_{l.Lip}(\mathbb{R}^{d\times n})$ imply $\varphi \cdot \psi \in C_{l.Lip}(\mathbb{R}^{d\times n})$, so $X, Y \in L_{ip}(\Omega_T)$ imply $X \cdot Y \in L_{ip}(\Omega_T)$. In particular, for each $t \in [0, \infty)$, $B_t \in L_{ip}(\Omega)$.
Let $G(\cdot) : \mathbb{S}(d) \to \mathbb{R}$ be a given monotonic and sublinear function. In the following, we want to construct a sublinear expectation on $(\Omega, L_{ip}(\Omega))$ such that the canonical process $(B_t)_{t\ge0}$ is a G-Brownian motion. For this, we first construct a sequence of d-dimensional random vectors $(\xi_i)_{i=1}^{\infty}$ on a sublinear expectation space $(\tilde{\Omega}, \tilde{\mathcal{H}}, \tilde{\mathbb{E}})$ such that $\xi_i$ is G-normal distributed and $\xi_{i+1}$ is independent from $(\xi_1, \cdots, \xi_i)$ for each $i = 1, 2, \cdots$.
We now introduce a sublinear expectation $\hat{\mathbb{E}}$ defined on $L_{ip}(\Omega)$ via the following procedure: for each $X \in L_{ip}(\Omega)$ with
\[ X = \varphi(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \cdots, B_{t_n} - B_{t_{n-1}}) \]
for some $\varphi \in C_{l.Lip}(\mathbb{R}^{d\times n})$ and $0 = t_0 < t_1 < \cdots < t_n < \infty$, we set
\[ \hat{\mathbb{E}}[\varphi(B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \cdots, B_{t_n} - B_{t_{n-1}})] := \tilde{\mathbb{E}}\big[\varphi\big(\sqrt{t_1 - t_0}\,\xi_1, \cdots, \sqrt{t_n - t_{n-1}}\,\xi_n\big)\big]. \]
The related conditional expectation of $X = \varphi(B_{t_1}, B_{t_2} - B_{t_1}, \cdots, B_{t_n} - B_{t_{n-1}})$ under $\Omega_{t_j}$ is defined by
\[ \hat{\mathbb{E}}[X|\Omega_{t_j}] = \hat{\mathbb{E}}[\varphi(B_{t_1}, B_{t_2} - B_{t_1}, \cdots, B_{t_n} - B_{t_{n-1}})|\Omega_{t_j}] := \psi(B_{t_1}, \cdots, B_{t_j} - B_{t_{j-1}}), \tag{2.3} \]
where
\[ \psi(x_1, \cdots, x_j) = \tilde{\mathbb{E}}\big[\varphi\big(x_1, \cdots, x_j, \sqrt{t_{j+1} - t_j}\,\xi_{j+1}, \cdots, \sqrt{t_n - t_{n-1}}\,\xi_n\big)\big]. \]
It is easy to check that $\hat{\mathbb{E}}[\cdot]$ consistently defines a sublinear expectation on $L_{ip}(\Omega)$ and that $(B_t)_{t\ge0}$ is a G-Brownian motion. Since $L_{ip}(\Omega_T) \subseteq L_{ip}(\Omega)$, $\hat{\mathbb{E}}[\cdot]$ is also a sublinear expectation on $L_{ip}(\Omega_T)$.
Definition 2.2 The sublinear expectation $\hat{\mathbb{E}}[\cdot] : L_{ip}(\Omega) \to \mathbb{R}$ defined through the above procedure is called a $G$-expectation. The corresponding canonical process $(B_t)_{t \geq 0}$ on the sublinear expectation space $(\Omega, L_{ip}(\Omega), \hat{\mathbb{E}})$ is called a $G$-Brownian motion.

In the rest of this book, when we talk about $G$-Brownian motion, we mean that the canonical process $(B_t)_{t \geq 0}$ is under the $G$-expectation.
Proposition 2.3 We list the properties of $\hat{\mathbb{E}}[\cdot \mid \Omega_t]$ that hold for each $X, Y \in L_{ip}(\Omega)$:

(i) If $X \geq Y$, then $\hat{\mathbb{E}}[X \mid \Omega_t] \geq \hat{\mathbb{E}}[Y \mid \Omega_t]$.
40 Chap.III G-Brownian Motion and Itos Integral
(ii) $\hat{\mathbb{E}}[\eta \mid \Omega_t] = \eta$, for each $t \in [0, \infty)$ and $\eta \in L_{ip}(\Omega_t)$.

(iii) $\hat{\mathbb{E}}[X \mid \Omega_t] - \hat{\mathbb{E}}[Y \mid \Omega_t] \leq \hat{\mathbb{E}}[X - Y \mid \Omega_t]$.

(iv) $\hat{\mathbb{E}}[\eta X \mid \Omega_t] = \eta^+ \hat{\mathbb{E}}[X \mid \Omega_t] + \eta^- \hat{\mathbb{E}}[-X \mid \Omega_t]$ for each $\eta \in L_{ip}(\Omega_t)$.

(v) $\hat{\mathbb{E}}[\hat{\mathbb{E}}[X \mid \Omega_t] \mid \Omega_s] = \hat{\mathbb{E}}[X \mid \Omega_{t \wedge s}]$; in particular, $\hat{\mathbb{E}}[\hat{\mathbb{E}}[X \mid \Omega_t]] = \hat{\mathbb{E}}[X]$.

For each $X \in L_{ip}(\Omega^t)$, $\hat{\mathbb{E}}[X \mid \Omega_t] = \hat{\mathbb{E}}[X]$, where $L_{ip}(\Omega^t)$ is the linear space of random variables of the form
\[
\varphi(B_{t_2} - B_{t_1}, B_{t_3} - B_{t_2}, \ldots, B_{t_{n+1}} - B_{t_n}),
\]
$n = 1, 2, \ldots$, $\varphi \in C_{l.Lip}(\mathbb{R}^{d \times n})$, $t_1, \ldots, t_n, t_{n+1} \in [t, \infty)$.
Remark 2.4 (ii) and (iii) imply
\[
\hat{\mathbb{E}}[X + \eta \mid \Omega_t] = \hat{\mathbb{E}}[X \mid \Omega_t] + \eta \quad \text{for } \eta \in L_{ip}(\Omega_t).
\]
We now consider the completion of the sublinear expectation space $(\Omega, L_{ip}(\Omega), \hat{\mathbb{E}})$. We denote by $L^p_G(\Omega)$, $p \geq 1$, the completion of $L_{ip}(\Omega)$ under the norm $\|X\|_p := (\hat{\mathbb{E}}[|X|^p])^{1/p}$. Similarly, we can define $L^p_G(\Omega_T)$, $L^p_G(\Omega^t_T)$ and $L^p_G(\Omega^t)$. It is clear that for each $0 \leq t \leq T < \infty$, $L^p_G(\Omega_t) \subseteq L^p_G(\Omega_T) \subseteq L^p_G(\Omega)$.
According to Sec.4 in Chap.I, $\hat{\mathbb{E}}[\cdot]$ can be continuously extended to a sublinear expectation on $(\Omega, L^1_G(\Omega))$, still denoted by $\hat{\mathbb{E}}[\cdot]$. We now consider the extension of conditional expectations. For each fixed $t \leq T$, the conditional $G$-expectation $\hat{\mathbb{E}}[\cdot \mid \Omega_t] : L_{ip}(\Omega_T) \to L_{ip}(\Omega_t)$ is a continuous mapping under $\|\cdot\|$. Indeed, we have
\[
\hat{\mathbb{E}}[X \mid \Omega_t] - \hat{\mathbb{E}}[Y \mid \Omega_t] \leq \hat{\mathbb{E}}[X - Y \mid \Omega_t] \leq \hat{\mathbb{E}}[|X - Y| \mid \Omega_t],
\]
then
\[
\big|\hat{\mathbb{E}}[X \mid \Omega_t] - \hat{\mathbb{E}}[Y \mid \Omega_t]\big| \leq \hat{\mathbb{E}}[|X - Y| \mid \Omega_t].
\]
We thus obtain
\[
\big\|\hat{\mathbb{E}}[X \mid \Omega_t] - \hat{\mathbb{E}}[Y \mid \Omega_t]\big\| \leq \|X - Y\|.
\]
It follows that $\hat{\mathbb{E}}[\cdot \mid \Omega_t]$ can also be extended as a continuous mapping
\[
\hat{\mathbb{E}}[\cdot \mid \Omega_t] : L^1_G(\Omega_T) \to L^1_G(\Omega_t).
\]
If the above $T$ is not fixed, then we can obtain $\hat{\mathbb{E}}[\cdot \mid \Omega_t] : L^1_G(\Omega) \to L^1_G(\Omega_t)$.
Remark 2.5 The above proposition also holds for $X, Y \in L^1_G(\Omega)$. But in (iv), $\eta \in L^1_G(\Omega_t)$ should be bounded, since $X, Y \in L^1_G(\Omega)$ does not imply $X \cdot Y \in L^1_G(\Omega)$.

In particular, we have the following independence:
\[
\hat{\mathbb{E}}[X \mid \Omega_t] = \hat{\mathbb{E}}[X], \quad \forall X \in L^1_G(\Omega^t).
\]
We give the following definition, similar to the classical one:
Definition 2.6 An $n$-dimensional random vector $Y \in (L^1_G(\Omega))^n$ is said to be independent from $\Omega_t$ for some given $t$ if for each $\varphi \in C_{b.Lip}(\mathbb{R}^n)$ we have
\[
\hat{\mathbb{E}}[\varphi(Y) \mid \Omega_t] = \hat{\mathbb{E}}[\varphi(Y)].
\]
Remark 2.7 Just as in the classical situation, the increments of $G$-Brownian motion $(B_{t+s} - B_t)_{s \geq 0}$ are independent from $\Omega_t$.

The following property is very useful.
Proposition 2.8 Let $X, Y \in L^1_G(\Omega)$ be such that $\hat{\mathbb{E}}[Y \mid \Omega_t] = -\hat{\mathbb{E}}[-Y \mid \Omega_t]$, for some $t \in [0, T]$. Then we have
\[
\hat{\mathbb{E}}[X + Y \mid \Omega_t] = \hat{\mathbb{E}}[X \mid \Omega_t] + \hat{\mathbb{E}}[Y \mid \Omega_t].
\]
In particular, if $\hat{\mathbb{E}}[Y \mid \Omega_t] = \hat{\mathbb{E}}[-Y \mid \Omega_t] = 0$, then $\hat{\mathbb{E}}[X + Y \mid \Omega_t] = \hat{\mathbb{E}}[X \mid \Omega_t]$.

Proof. This follows from the following two inequalities:
\[
\hat{\mathbb{E}}[X + Y \mid \Omega_t] \leq \hat{\mathbb{E}}[X \mid \Omega_t] + \hat{\mathbb{E}}[Y \mid \Omega_t],
\]
\[
\hat{\mathbb{E}}[X + Y \mid \Omega_t] \geq \hat{\mathbb{E}}[X \mid \Omega_t] - \hat{\mathbb{E}}[-Y \mid \Omega_t] = \hat{\mathbb{E}}[X \mid \Omega_t] + \hat{\mathbb{E}}[Y \mid \Omega_t]. \qquad \Box
\]
Example 2.9 For each fixed $a \in \mathbb{R}^d$, set $B^a_t := \langle a, B_t \rangle$. Then for each $0 \leq s \leq t$ we have
\[
\hat{\mathbb{E}}[B^a_t - B^a_s \mid \Omega_s] = 0, \qquad \hat{\mathbb{E}}[-(B^a_t - B^a_s) \mid \Omega_s] = 0,
\]
\[
\hat{\mathbb{E}}[(B^a_t - B^a_s)^2 \mid \Omega_s] = \overline{\sigma}^2_{aa^T}(t - s), \qquad \hat{\mathbb{E}}[-(B^a_t - B^a_s)^2 \mid \Omega_s] = -\underline{\sigma}^2_{aa^T}(t - s),
\]
\[
\hat{\mathbb{E}}[(B^a_t - B^a_s)^4 \mid \Omega_s] = 3\overline{\sigma}^4_{aa^T}(t - s)^2, \qquad \hat{\mathbb{E}}[-(B^a_t - B^a_s)^4 \mid \Omega_s] = -3\underline{\sigma}^4_{aa^T}(t - s)^2,
\]
where $\overline{\sigma}^2_{aa^T} = 2G(aa^T)$ and $\underline{\sigma}^2_{aa^T} = -2G(-aa^T)$.
Example 2.10 For each $a \in \mathbb{R}^d$, $n \in \mathbb{N}$, $0 \leq t \leq T$, $X \in L^1_G(\Omega_t)$ and $\varphi \in C_{l.Lip}(\mathbb{R})$, we have
\begin{align*}
\hat{\mathbb{E}}[X\varphi(B^a_T - B^a_t) \mid \Omega_t]
&= X^+ \hat{\mathbb{E}}[\varphi(B^a_T - B^a_t) \mid \Omega_t] + X^- \hat{\mathbb{E}}[-\varphi(B^a_T - B^a_t) \mid \Omega_t] \\
&= X^+ \hat{\mathbb{E}}[\varphi(B^a_T - B^a_t)] + X^- \hat{\mathbb{E}}[-\varphi(B^a_T - B^a_t)].
\end{align*}
In particular, we have
\[
\hat{\mathbb{E}}[X(B^a_T - B^a_t) \mid \Omega_t] = X^+ \hat{\mathbb{E}}[B^a_T - B^a_t] + X^- \hat{\mathbb{E}}[-(B^a_T - B^a_t)] = 0.
\]
This, together with Proposition 2.8, yields
\[
\hat{\mathbb{E}}[Y + X(B^a_T - B^a_t) \mid \Omega_t] = \hat{\mathbb{E}}[Y \mid \Omega_t], \quad Y \in L^1_G(\Omega).
\]
We also have
\[
\hat{\mathbb{E}}[X(B^a_T - B^a_t)^2 \mid \Omega_t] = X^+ \hat{\mathbb{E}}[(B^a_T - B^a_t)^2] + X^- \hat{\mathbb{E}}[-(B^a_T - B^a_t)^2]
= \big[X^+ \overline{\sigma}^2_{aa^T} - X^- \underline{\sigma}^2_{aa^T}\big](T - t)
\]
and
\[
\hat{\mathbb{E}}[X(B^a_T - B^a_t)^{2n-1} \mid \Omega_t] = X^+ \hat{\mathbb{E}}[(B^a_T - B^a_t)^{2n-1}] + X^- \hat{\mathbb{E}}[-(B^a_T - B^a_t)^{2n-1}]
= |X|\,\hat{\mathbb{E}}\big[(B^a_{T-t})^{2n-1}\big].
\]
Example 2.11 Since
\[
\hat{\mathbb{E}}[2B^a_s(B^a_t - B^a_s) \mid \Omega_s] = \hat{\mathbb{E}}[-2B^a_s(B^a_t - B^a_s) \mid \Omega_s] = 0,
\]
we have
\begin{align*}
\hat{\mathbb{E}}[(B^a_t)^2 - (B^a_s)^2 \mid \Omega_s]
&= \hat{\mathbb{E}}[(B^a_t - B^a_s + B^a_s)^2 - (B^a_s)^2 \mid \Omega_s] \\
&= \hat{\mathbb{E}}[(B^a_t - B^a_s)^2 + 2(B^a_t - B^a_s)B^a_s \mid \Omega_s] = \overline{\sigma}^2_{aa^T}(t - s).
\end{align*}
Exercise 2.12 Show that if $X \in L_{ip}(\Omega_T)$ and $\hat{\mathbb{E}}[X] = -\hat{\mathbb{E}}[-X]$, then $\hat{\mathbb{E}}[X] = E_P[X]$, where $P$ is a Wiener measure on $\Omega$.
Exercise 2.13 For each $s, t \geq 0$, we set $B^s_t := B_{t+s} - B_s$. Let $\eta = (\eta^{ij})^d_{i,j=1} \in L^1_G(\Omega_s; \mathbb{S}(d))$. Prove that
\[
\hat{\mathbb{E}}[\langle \eta B^s_t, B^s_t \rangle \mid \Omega_s] = 2G(\eta)t.
\]
3  Itô's Integral with G-Brownian Motion
Definition 3.1 For $T \in \mathbb{R}_+$, a partition $\pi_T$ of $[0, T]$ is a finite ordered subset $\pi_T = \{t_0, t_1, \ldots, t_N\}$ such that $0 = t_0 < t_1 < \cdots < t_N = T$. We set
\[
\mu(\pi_T) := \max\{|t_{i+1} - t_i| : i = 0, 1, \ldots, N - 1\}.
\]
We use $\pi^N_T = \{t^N_0, t^N_1, \ldots, t^N_N\}$ to denote a sequence of partitions of $[0, T]$ such that $\lim_{N \to \infty} \mu(\pi^N_T) = 0$.
Let $p \geq 1$ be fixed. We consider the following type of simple processes: for a given partition $\pi_T = \{t_0, \ldots, t_N\}$ of $[0, T]$ we set
\[
\eta_t(\omega) = \sum_{k=0}^{N-1} \xi_k(\omega)\, I_{[t_k, t_{k+1})}(t),
\]
where $\xi_k \in L^p_G(\Omega_{t_k})$, $k = 0, 1, 2, \ldots, N - 1$, are given. The collection of these processes is denoted by $M^{p,0}_G(0, T)$.
Definition 3.2 For an $\eta \in M^{p,0}_G(0, T)$ with $\eta_t(\omega) = \sum_{k=0}^{N-1} \xi_k(\omega) I_{[t_k, t_{k+1})}(t)$, the related Bochner integral is
\[
\int_0^T \eta_t(\omega)\,dt := \sum_{k=0}^{N-1} \xi_k(\omega)(t_{k+1} - t_k).
\]
For each $\eta \in M^{p,0}_G(0, T)$, we set
\[
\tilde{\mathbb{E}}_T[\eta] := \frac{1}{T}\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t\,dt\Big] = \frac{1}{T}\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} \xi_k(\omega)(t_{k+1} - t_k)\Big].
\]
It is easy to check that $\tilde{\mathbb{E}}_T : M^{p,0}_G(0, T) \to \mathbb{R}$ forms a sublinear expectation. We can then introduce a natural norm $\|\cdot\|_{M^p_G(0,T)}$, under which $M^{p,0}_G(0, T)$ can be extended to $M^p_G(0, T)$, which is a Banach space.
Definition 3.3 For each $p \geq 1$, we denote by $M^p_G(0, T)$ the completion of $M^{p,0}_G(0, T)$ under the norm
\[
\|\eta\|_{M^p_G(0,T)} := \Big(\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|^p\,dt\Big]\Big)^{1/p}.
\]
It is clear that $M^q_G(0, T) \subseteq M^p_G(0, T)$ for $1 \leq p \leq q$. We also use $M^p_G(0, T; \mathbb{R}^n)$ for all $n$-dimensional stochastic processes $\eta_t = (\eta^1_t, \ldots, \eta^n_t)$, $t \geq 0$, with $\eta^i_t \in M^p_G(0, T)$, $i = 1, 2, \ldots, n$.

We now give the definition of Itô's integral. For simplicity, we first introduce Itô's integral with respect to 1-dimensional $G$-Brownian motion. Let $(B_t)_{t \geq 0}$ be a 1-dimensional $G$-Brownian motion with $G(\alpha) = \frac{1}{2}(\overline{\sigma}^2 \alpha^+ - \underline{\sigma}^2 \alpha^-)$, where $0 \leq \underline{\sigma} \leq \overline{\sigma} < \infty$.
Definition 3.4 For each $\eta \in M^{2,0}_G(0, T)$ of the form
\[
\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\, I_{[t_j, t_{j+1})}(t),
\]
we define
\[
I(\eta) = \int_0^T \eta_t\,dB_t := \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j}).
\]
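The Riemann sum in Definition 3.4 is a plain weighted sum of increments; the sketch below (our own illustration with made-up names, using a deterministic toy path rather than a Brownian sample) simply evaluates it. The essential structural constraint is that each $\xi_j$ may only use information available at time $t_j$.

```python
def ito_integral_simple(xi, path, grid):
    # sum_j xi[j] * (B_{t_{j+1}} - B_{t_j}); each xi[j] must depend only on
    # the path up to grid[j] (this adaptedness is what makes (3.4)-(3.5) work).
    return sum(xi[j] * (path(grid[j + 1]) - path(grid[j]))
               for j in range(len(grid) - 1))

# toy arithmetic check with the non-random path B_t = t
grid = [0.0, 0.5, 1.0]
val = ito_integral_simple([2.0, 3.0], lambda t: t, grid)  # 2*0.5 + 3*0.5
```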
Lemma 3.5 The mapping $I : M^{2,0}_G(0, T) \to L^2_G(\Omega_T)$ is a continuous linear mapping and thus can be continuously extended to $I : M^2_G(0, T) \to L^2_G(\Omega_T)$. We have
\[
\hat{\mathbb{E}}\Big[\int_0^T \eta_t\,dB_t\Big] = 0, \tag{3.4}
\]
\[
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big] \leq \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,dt\Big]. \tag{3.5}
\]
Proof. From Example 2.10, for each $j$,
\[
\hat{\mathbb{E}}[\xi_j(B_{t_{j+1}} - B_{t_j}) \mid \Omega_{t_j}] = \hat{\mathbb{E}}[-\xi_j(B_{t_{j+1}} - B_{t_j}) \mid \Omega_{t_j}] = 0.
\]
We have
\begin{align*}
\hat{\mathbb{E}}\Big[\int_0^T \eta_t\,dB_t\Big]
&= \hat{\mathbb{E}}\Big[\int_0^{t_{N-1}} \eta_t\,dB_t + \xi_{N-1}(B_{t_N} - B_{t_{N-1}})\Big] \\
&= \hat{\mathbb{E}}\Big[\int_0^{t_{N-1}} \eta_t\,dB_t + \hat{\mathbb{E}}[\xi_{N-1}(B_{t_N} - B_{t_{N-1}}) \mid \Omega_{t_{N-1}}]\Big] \\
&= \hat{\mathbb{E}}\Big[\int_0^{t_{N-1}} \eta_t\,dB_t\Big].
\end{align*}
Then we can repeat this procedure to obtain (3.4).
We now give the proof of (3.5). Firstly, from Example 2.10, we have
\begin{align*}
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big]
&= \hat{\mathbb{E}}\Big[\Big(\int_0^{t_{N-1}} \eta_t\,dB_t + \xi_{N-1}(B_{t_N} - B_{t_{N-1}})\Big)^2\Big] \\
&= \hat{\mathbb{E}}\Big[\Big(\int_0^{t_{N-1}} \eta_t\,dB_t\Big)^2 + \xi_{N-1}^2(B_{t_N} - B_{t_{N-1}})^2
+ 2\Big(\int_0^{t_{N-1}} \eta_t\,dB_t\Big)\xi_{N-1}(B_{t_N} - B_{t_{N-1}})\Big] \\
&= \hat{\mathbb{E}}\Big[\Big(\int_0^{t_{N-1}} \eta_t\,dB_t\Big)^2 + \xi_{N-1}^2(B_{t_N} - B_{t_{N-1}})^2\Big] \\
&= \cdots = \hat{\mathbb{E}}\Big[\sum_{i=0}^{N-1} \xi_i^2 (B_{t_{i+1}} - B_{t_i})^2\Big].
\end{align*}
Then, for each $i = 0, 1, \ldots, N - 1$, we have
\begin{align*}
\hat{\mathbb{E}}\big[\xi_i^2(B_{t_{i+1}} - B_{t_i})^2 - \overline{\sigma}^2 \xi_i^2 (t_{i+1} - t_i)\big]
&= \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\xi_i^2(B_{t_{i+1}} - B_{t_i})^2 - \overline{\sigma}^2 \xi_i^2 (t_{i+1} - t_i) \mid \Omega_{t_i}]\big] \\
&= \hat{\mathbb{E}}\big[\overline{\sigma}^2 \xi_i^2 (t_{i+1} - t_i) - \overline{\sigma}^2 \xi_i^2 (t_{i+1} - t_i)\big] = 0.
\end{align*}
Finally, we have
\begin{align*}
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big]
&= \hat{\mathbb{E}}\Big[\sum_{i=0}^{N-1} \xi_i^2 (B_{t_{i+1}} - B_{t_i})^2\Big] \\
&\leq \hat{\mathbb{E}}\Big[\sum_{i=0}^{N-1} \xi_i^2 (B_{t_{i+1}} - B_{t_i})^2 - \overline{\sigma}^2 \sum_{i=0}^{N-1} \xi_i^2 (t_{i+1} - t_i)\Big] + \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{i=0}^{N-1} \xi_i^2 (t_{i+1} - t_i)\Big] \\
&\leq \sum_{i=0}^{N-1} \hat{\mathbb{E}}\big[\xi_i^2 (B_{t_{i+1}} - B_{t_i})^2 - \overline{\sigma}^2 \xi_i^2 (t_{i+1} - t_i)\big] + \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{i=0}^{N-1} \xi_i^2 (t_{i+1} - t_i)\Big] \\
&= \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{i=0}^{N-1} \xi_i^2 (t_{i+1} - t_i)\Big] = \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,dt\Big]. \qquad \Box
\end{align*}
Definition 3.6 We define, for a fixed $\eta \in M^2_G(0, T)$, the stochastic integral
\[
\int_0^T \eta_t\,dB_t := I(\eta).
\]
It is clear that (3.4) and (3.5) still hold for $\eta \in M^2_G(0, T)$.

We list some main properties of Itô's integral of $G$-Brownian motion. We denote, for some $0 \leq s \leq t \leq T$,
\[
\int_s^t \eta_u\,dB_u := \int_0^T I_{[s,t]}(u)\,\eta_u\,dB_u.
\]
Proposition 3.7 Let $\eta, \theta \in M^2_G(0, T)$ and let $0 \leq s \leq r \leq t \leq T$. Then we have:

(i) $\int_s^t \eta_u\,dB_u = \int_s^r \eta_u\,dB_u + \int_r^t \eta_u\,dB_u$.

(ii) $\int_s^t (\alpha \eta_u + \theta_u)\,dB_u = \alpha \int_s^t \eta_u\,dB_u + \int_s^t \theta_u\,dB_u$, if $\alpha$ is bounded and in $L^1_G(\Omega_s)$.

(iii) $\hat{\mathbb{E}}\big[X + \int_r^T \eta_u\,dB_u \mid \Omega_s\big] = \hat{\mathbb{E}}[X \mid \Omega_s]$ for $X \in L^1_G(\Omega)$.
We now consider the multi-dimensional case. Let $G(\cdot) : \mathbb{S}(d) \to \mathbb{R}$ be a given monotonic and sublinear function and let $(B_t)_{t \geq 0}$ be a $d$-dimensional $G$-Brownian motion. For each fixed $a \in \mathbb{R}^d$, we still use $B^a_t := \langle a, B_t \rangle$. Then $(B^a_t)_{t \geq 0}$ is a 1-dimensional $G_a$-Brownian motion with $G_a(\alpha) = \frac{1}{2}\big(\overline{\sigma}^2_{aa^T} \alpha^+ - \underline{\sigma}^2_{aa^T} \alpha^-\big)$, where $\overline{\sigma}^2_{aa^T} = 2G(aa^T)$ and $\underline{\sigma}^2_{aa^T} = -2G(-aa^T)$. Similar to the 1-dimensional case, we can define Itô's integral by
\[
I(\eta) := \int_0^T \eta_t\,dB^a_t, \quad \text{for } \eta \in M^2_G(0, T).
\]
We still have, for each $\eta \in M^2_G(0, T)$,
\[
\hat{\mathbb{E}}\Big[\int_0^T \eta_t\,dB^a_t\Big] = 0, \qquad
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB^a_t\Big)^2\Big] \leq \overline{\sigma}^2_{aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,dt\Big].
\]
Furthermore, Proposition 3.7 still holds for the integral with respect to $B^a_t$.
Exercise 3.8 Prove that, for a fixed $\eta \in M^2_G(0, T)$,
\[
\underline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,dt\Big] \leq \hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big] \leq \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,dt\Big],
\]
where $\overline{\sigma}^2 = \hat{\mathbb{E}}[B_1^2]$ and $\underline{\sigma}^2 = -\hat{\mathbb{E}}[-B_1^2]$.
Exercise 3.9 Prove that, for each $\eta \in M^p_G(0, T)$, we have
\[
\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|^p\,dt\Big] \leq \int_0^T \hat{\mathbb{E}}[|\eta_t|^p]\,dt.
\]
4  Quadratic Variation Process of G-Brownian Motion
We first consider the quadratic variation process of 1-dimensional $G$-Brownian motion $(B_t)_{t \geq 0}$ with $B_1 \overset{d}{=} N(\{0\} \times [\underline{\sigma}^2, \overline{\sigma}^2])$. Let $\pi^N_t$, $N = 1, 2, \ldots$, be a sequence of partitions of $[0, t]$. We consider
\[
B_t^2 = \sum_{j=0}^{N-1} \big(B^2_{t^N_{j+1}} - B^2_{t^N_j}\big)
= \sum_{j=0}^{N-1} 2B_{t^N_j}\big(B_{t^N_{j+1}} - B_{t^N_j}\big) + \sum_{j=0}^{N-1} \big(B_{t^N_{j+1}} - B_{t^N_j}\big)^2.
\]
As $\mu(\pi^N_t) \to 0$, the first term on the right side converges to $2\int_0^t B_s\,dB_s$ in $L^2_G(\Omega)$. The second term must then be convergent. We denote its limit by $\langle B \rangle_t$, i.e.,
\[
\langle B \rangle_t := \lim_{\mu(\pi^N_t) \to 0} \sum_{j=0}^{N-1} \big(B_{t^N_{j+1}} - B_{t^N_j}\big)^2 = B_t^2 - 2\int_0^t B_s\,dB_s. \tag{4.6}
\]
By the above construction, $(\langle B \rangle_t)_{t \geq 0}$ is an increasing process with $\langle B \rangle_0 = 0$. We call it the quadratic variation process of the $G$-Brownian motion $B$. It characterizes the part of statistic uncertainty of $G$-Brownian motion. It is important to keep in mind that $\langle B \rangle_t$ is not a deterministic process unless $\underline{\sigma} = \overline{\sigma}$, i.e., when $(B_t)_{t \geq 0}$ is a classical Brownian motion. In fact we have the following lemma.
Lemma 4.1 For each $0 \leq s \leq t < \infty$, we have
\[
\hat{\mathbb{E}}[\langle B \rangle_t - \langle B \rangle_s \mid \Omega_s] = \overline{\sigma}^2(t - s), \tag{4.7}
\]
\[
\hat{\mathbb{E}}[-(\langle B \rangle_t - \langle B \rangle_s) \mid \Omega_s] = -\underline{\sigma}^2(t - s). \tag{4.8}
\]
Proof. By the definition of $\langle B \rangle$ and Proposition 3.7 (iii),
\[
\hat{\mathbb{E}}[\langle B \rangle_t - \langle B \rangle_s \mid \Omega_s]
= \hat{\mathbb{E}}\Big[B_t^2 - B_s^2 - 2\int_s^t B_u\,dB_u \,\Big|\, \Omega_s\Big]
= \hat{\mathbb{E}}[B_t^2 - B_s^2 \mid \Omega_s] = \overline{\sigma}^2(t - s).
\]
The last step follows from Example 2.11. We then have (4.7). The equality (4.8) can be proved analogously, with the consideration of $\hat{\mathbb{E}}[-(B_t^2 - B_s^2) \mid \Omega_s] = -\underline{\sigma}^2(t - s)$. $\Box$

A very interesting point of the quadratic variation process $\langle B \rangle$ is that, just like the $G$-Brownian motion $B$ itself, the increment $\langle B \rangle_{s+t} - \langle B \rangle_s$ is independent from $\Omega_s$ and identically distributed with $\langle B \rangle_t$. In fact we have
Lemma 4.2 For each fixed $s, t \geq 0$, $\langle B \rangle_{s+t} - \langle B \rangle_s$ is identically distributed with $\langle B \rangle_t$ and independent from $\Omega_s$.
Proof. The results follow directly from
\begin{align*}
\langle B \rangle_{s+t} - \langle B \rangle_s
&= B^2_{s+t} - 2\int_0^{s+t} B_r\,dB_r - \Big[B_s^2 - 2\int_0^s B_r\,dB_r\Big] \\
&= (B_{s+t} - B_s)^2 - 2\int_s^{s+t} (B_r - B_s)\,d(B_r - B_s) = \langle B^s \rangle_t,
\end{align*}
where $\langle B^s \rangle$ is the quadratic variation process of the $G$-Brownian motion $B^s_t = B_{s+t} - B_s$, $t \geq 0$. $\Box$
We now define the integral of a process $\eta \in M^1_G(0, T)$ with respect to $\langle B \rangle$. We first define a mapping
\[
Q_{0,T}(\eta) = \int_0^T \eta_t\,d\langle B \rangle_t := \sum_{j=0}^{N-1} \xi_j\big(\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}\big) : M^{1,0}_G(0, T) \to L^1_G(\Omega_T).
\]
Lemma 4.3 For each $\eta \in M^{1,0}_G(0, T)$,
\[
\hat{\mathbb{E}}[|Q_{0,T}(\eta)|] \leq \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big]. \tag{4.9}
\]
Thus $Q_{0,T} : M^{1,0}_G(0, T) \to L^1_G(\Omega_T)$ is a continuous linear mapping. Consequently, $Q_{0,T}$ can be uniquely extended to $M^1_G(0, T)$. We still denote this mapping by
\[
\int_0^T \eta_t\,d\langle B \rangle_t := Q_{0,T}(\eta) \quad \text{for } \eta \in M^1_G(0, T).
\]
We still have
\[
\hat{\mathbb{E}}\Big[\Big|\int_0^T \eta_t\,d\langle B \rangle_t\Big|\Big] \leq \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big] \quad \text{for } \eta \in M^1_G(0, T). \tag{4.10}
\]
Proof. Firstly, for each $j = 1, \ldots, N - 1$, we have
\begin{align*}
\hat{\mathbb{E}}\big[|\xi_j|(B_{t_{j+1}} - B_{t_j})^2 - \overline{\sigma}^2 |\xi_j|(t_{j+1} - t_j)\big]
&= \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[|\xi_j|(B_{t_{j+1}} - B_{t_j})^2 \mid \Omega_{t_j}] - \overline{\sigma}^2 |\xi_j|(t_{j+1} - t_j)\big] \\
&= \hat{\mathbb{E}}\big[\overline{\sigma}^2 |\xi_j|(t_{j+1} - t_j) - \overline{\sigma}^2 |\xi_j|(t_{j+1} - t_j)\big] = 0.
\end{align*}
Then (4.9) can be checked as follows:
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\sum_{j=0}^{N-1} \xi_j\big(\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}\big)\Big|\Big]
&\leq \hat{\mathbb{E}}\Big[\sum_{j=0}^{N-1} |\xi_j|\big(\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}\big)\Big] \\
&\leq \hat{\mathbb{E}}\Big[\sum_{j=0}^{N-1} |\xi_j|\big[(B_{t_{j+1}} - B_{t_j})^2 - \overline{\sigma}^2(t_{j+1} - t_j)\big]\Big] + \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{j=0}^{N-1} |\xi_j|(t_{j+1} - t_j)\Big] \\
&\leq \sum_{j=0}^{N-1} \hat{\mathbb{E}}\big[|\xi_j|\big[(B_{t_{j+1}} - B_{t_j})^2 - \overline{\sigma}^2(t_{j+1} - t_j)\big]\big] + \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{j=0}^{N-1} |\xi_j|(t_{j+1} - t_j)\Big] \\
&= \hat{\mathbb{E}}\Big[\overline{\sigma}^2 \sum_{j=0}^{N-1} |\xi_j|(t_{j+1} - t_j)\Big] = \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big]. \qquad \Box
\end{align*}
Proposition 4.4 Let $0 \leq s \leq t$, $\xi \in L_{ip}(\Omega_s)$ and $X \in L^1_G(\Omega)$. Then
\[
\hat{\mathbb{E}}[X + \xi(B_t^2 - B_s^2)] = \hat{\mathbb{E}}[X + \xi(B_t - B_s)^2] = \hat{\mathbb{E}}[X + \xi(\langle B \rangle_t - \langle B \rangle_s)].
\]
Proof. By (4.6) and Proposition 3.7 (iii), we have
\[
\hat{\mathbb{E}}[X + \xi(B_t^2 - B_s^2)] = \hat{\mathbb{E}}\Big[X + \xi\Big(\langle B \rangle_t - \langle B \rangle_s + 2\int_s^t B_u\,dB_u\Big)\Big] = \hat{\mathbb{E}}[X + \xi(\langle B \rangle_t - \langle B \rangle_s)].
\]
We also have
\[
\hat{\mathbb{E}}[X + \xi(B_t^2 - B_s^2)] = \hat{\mathbb{E}}\big[X + \xi\big((B_t - B_s)^2 + 2(B_t - B_s)B_s\big)\big] = \hat{\mathbb{E}}[X + \xi(B_t - B_s)^2]. \qquad \Box
\]
Proposition 4.5 Let $\eta \in M^2_G(0, T)$. Then
\[
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big] = \hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,d\langle B \rangle_t\Big]. \tag{4.11}
\]
Proof. We first consider $\eta \in M^{2,0}_G(0, T)$ of the form
\[
\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\, I_{[t_j, t_{j+1})}(t)
\]
and then $\int_0^T \eta_t\,dB_t = \sum_{j=0}^{N-1} \xi_j(B_{t_{j+1}} - B_{t_j})$. From Proposition 3.7, we get
\[
\hat{\mathbb{E}}[X + 2\xi_j(B_{t_{j+1}} - B_{t_j})\xi_i(B_{t_{i+1}} - B_{t_i})] = \hat{\mathbb{E}}[X] \quad \text{for } X \in L^1_G(\Omega),\ i \neq j.
\]
Thus
\[
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big] = \hat{\mathbb{E}}\Big[\Big(\sum_{j=0}^{N-1} \xi_j(B_{t_{j+1}} - B_{t_j})\Big)^2\Big] = \hat{\mathbb{E}}\Big[\sum_{j=0}^{N-1} \xi_j^2(B_{t_{j+1}} - B_{t_j})^2\Big].
\]
From this and Proposition 4.4, it follows that
\[
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB_t\Big)^2\Big] = \hat{\mathbb{E}}\Big[\sum_{j=0}^{N-1} \xi_j^2\big(\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}\big)\Big] = \hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,d\langle B \rangle_t\Big].
\]
Thus (4.11) holds for $\eta \in M^{2,0}_G(0, T)$. We can continuously extend the above equality to the case $\eta \in M^2_G(0, T)$ and get (4.11). $\Box$
We now consider the multi-dimensional case. Let $(B_t)_{t \geq 0}$ be a $d$-dimensional $G$-Brownian motion. For each fixed $a \in \mathbb{R}^d$, $(B^a_t)_{t \geq 0}$ is a 1-dimensional $G_a$-Brownian motion. Similar to the 1-dimensional case, we can define
\[
\langle B^a \rangle_t := \lim_{\mu(\pi^N_t) \to 0} \sum_{j=0}^{N-1} \big(B^a_{t^N_{j+1}} - B^a_{t^N_j}\big)^2 = (B^a_t)^2 - 2\int_0^t B^a_s\,dB^a_s,
\]
where $\langle B^a \rangle$ is called the quadratic variation process of $B^a$. The above results also hold for $\langle B^a \rangle$. In particular,
\[
\hat{\mathbb{E}}\Big[\Big|\int_0^T \eta_t\,d\langle B^a \rangle_t\Big|\Big] \leq \overline{\sigma}^2_{aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big] \quad \text{for } \eta \in M^1_G(0, T)
\]
and
\[
\hat{\mathbb{E}}\Big[\Big(\int_0^T \eta_t\,dB^a_t\Big)^2\Big] = \hat{\mathbb{E}}\Big[\int_0^T \eta_t^2\,d\langle B^a \rangle_t\Big] \quad \text{for } \eta \in M^2_G(0, T).
\]
Let $a = (a_1, \ldots, a_d)^T$ and $\bar{a} = (\bar{a}_1, \ldots, \bar{a}_d)^T$ be two given vectors in $\mathbb{R}^d$. We then have the quadratic variation processes of $B^a$ and $B^{\bar{a}}$. We can define their mutual variation process by
\begin{align*}
\langle B^a, B^{\bar{a}} \rangle_t
&:= \tfrac{1}{4}\big[\langle B^a + B^{\bar{a}} \rangle_t - \langle B^a - B^{\bar{a}} \rangle_t\big] \\
&= \tfrac{1}{4}\big[\langle B^{a + \bar{a}} \rangle_t - \langle B^{a - \bar{a}} \rangle_t\big].
\end{align*}
Since $\langle B^{a - \bar{a}} \rangle = \langle B^{\bar{a} - a} \rangle = \langle -B^{a - \bar{a}} \rangle$, we see that $\langle B^a, B^{\bar{a}} \rangle_t = \langle B^{\bar{a}}, B^a \rangle_t$. In particular, we have $\langle B^a, B^a \rangle = \langle B^a \rangle$. Let $\pi^N_t$, $N = 1, 2, \ldots$, be a sequence of partitions of $[0, t]$. We observe that
\[
\sum_{k=0}^{N-1} \big(B^a_{t^N_{k+1}} - B^a_{t^N_k}\big)\big(B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big)
= \frac{1}{4}\sum_{k=0}^{N-1} \Big[\big(B^{a+\bar{a}}_{t_{k+1}} - B^{a+\bar{a}}_{t_k}\big)^2 - \big(B^{a-\bar{a}}_{t_{k+1}} - B^{a-\bar{a}}_{t_k}\big)^2\Big].
\]
Thus as $\mu(\pi^N_t) \to 0$ we have
\[
\lim_{N \to \infty} \sum_{k=0}^{N-1} \big(B^a_{t^N_{k+1}} - B^a_{t^N_k}\big)\big(B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big) = \langle B^a, B^{\bar{a}} \rangle_t.
\]
We also have
\begin{align*}
\langle B^a, B^{\bar{a}} \rangle_t
&= \tfrac{1}{4}\big[\langle B^{a+\bar{a}} \rangle_t - \langle B^{a-\bar{a}} \rangle_t\big] \\
&= \tfrac{1}{4}\Big[\big(B^{a+\bar{a}}_t\big)^2 - 2\int_0^t B^{a+\bar{a}}_s\,dB^{a+\bar{a}}_s - \big(B^{a-\bar{a}}_t\big)^2 + 2\int_0^t B^{a-\bar{a}}_s\,dB^{a-\bar{a}}_s\Big] \\
&= B^a_t B^{\bar{a}}_t - \int_0^t B^a_s\,dB^{\bar{a}}_s - \int_0^t B^{\bar{a}}_s\,dB^a_s.
\end{align*}
Now for each $\eta \in M^1_G(0, T)$, we can consistently define
\[
\int_0^T \eta_t\,d\langle B^a, B^{\bar{a}} \rangle_t := \frac{1}{4}\int_0^T \eta_t\,d\langle B^{a+\bar{a}} \rangle_t - \frac{1}{4}\int_0^T \eta_t\,d\langle B^{a-\bar{a}} \rangle_t.
\]
Lemma 4.6 Let $\eta^N \in M^{2,0}_G(0, T)$, $N = 1, 2, \ldots$, be of the form
\[
\eta^N_t(\omega) = \sum_{k=0}^{N-1} \xi^N_k(\omega)\, I_{[t^N_k, t^N_{k+1})}(t)
\]
with $\mu(\pi^N_T) \to 0$ and $\eta^N \to \eta$ in $M^2_G(0, T)$, as $N \to \infty$. Then we have the following convergence in $L^2_G(\Omega_T)$:
\[
\sum_{k=0}^{N-1} \xi^N_k \big(B^a_{t^N_{k+1}} - B^a_{t^N_k}\big)\big(B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big) \to \int_0^T \eta_t\,d\langle B^a, B^{\bar{a}} \rangle_t.
\]
Proof. Since
\begin{align*}
\langle B^a, B^{\bar{a}} \rangle_{t^N_{k+1}} - \langle B^a, B^{\bar{a}} \rangle_{t^N_k}
&= \big(B^a_{t^N_{k+1}} - B^a_{t^N_k}\big)\big(B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big) \\
&\quad - \int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s - \int_{t^N_k}^{t^N_{k+1}} \big(B^{\bar{a}}_s - B^{\bar{a}}_{t^N_k}\big)\,dB^a_s,
\end{align*}
we only need to prove
\[
\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} \big(\xi^N_k\big)^2 \Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2\Big] \to 0.
\]
For each $k = 1, \ldots, N - 1$, we have
\begin{align*}
&\hat{\mathbb{E}}\Big[\big(\xi^N_k\big)^2 \Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2 - C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big] \\
&= \hat{\mathbb{E}}\Big[\hat{\mathbb{E}}\Big[\big(\xi^N_k\big)^2 \Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2 \,\Big|\, \Omega_{t^N_k}\Big] - C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big] \\
&\leq \hat{\mathbb{E}}\Big[C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2 - C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big] = 0,
\end{align*}
where $C = \overline{\sigma}^2_{aa^T}\,\overline{\sigma}^2_{\bar{a}\bar{a}^T}/2$.
Thus we have
\begin{align*}
&\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} \big(\xi^N_k\big)^2 \Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2\Big] \\
&\leq \hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} \big(\xi^N_k\big)^2 \Big[\Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2 - C\big(t^N_{k+1} - t^N_k\big)^2\Big]\Big]
+ \hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big] \\
&\leq \sum_{k=0}^{N-1} \hat{\mathbb{E}}\Big[\big(\xi^N_k\big)^2 \Big[\Big(\int_{t^N_k}^{t^N_{k+1}} \big(B^a_s - B^a_{t^N_k}\big)\,dB^{\bar{a}}_s\Big)^2 - C\big(t^N_{k+1} - t^N_k\big)^2\Big]\Big]
+ \hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big] \\
&\leq \hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} C\big(\xi^N_k\big)^2\big(t^N_{k+1} - t^N_k\big)^2\Big]
\leq C\,\mu(\pi^N_T)\,\hat{\mathbb{E}}\Big[\int_0^T |\eta^N_t|^2\,dt\Big].
\end{align*}
As $\mu(\pi^N_T) \to 0$, the proof is complete. $\Box$
Exercise 4.7 Let $B_t$ be a 1-dimensional $G$-Brownian motion and $\varphi$ be a bounded and Lipschitz function on $\mathbb{R}$. Show that
\[
\lim_{N \to \infty} \hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \varphi\big(B_{t^N_k}\big)\Big[\big(B_{t^N_{k+1}} - B_{t^N_k}\big)^2 - \big(\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k}\big)\Big]\Big|\Big] = 0,
\]
where $t^N_k = kT/N$, $k = 0, 1, \ldots, N - 1$.
Exercise 4.8 Prove that, for a fixed $\eta \in M^1_G(0, T)$,
\[
\underline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big] \leq \hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,d\langle B \rangle_t\Big] \leq \overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\,dt\Big],
\]
where $\overline{\sigma}^2 = \hat{\mathbb{E}}[B_1^2]$ and $\underline{\sigma}^2 = -\hat{\mathbb{E}}[-B_1^2]$.
5  The Distribution of ⟨B⟩

In this section, we first consider the 1-dimensional $G$-Brownian motion $(B_t)_{t \geq 0}$ with $B_1 \overset{d}{=} N(\{0\} \times [\underline{\sigma}^2, \overline{\sigma}^2])$.

The quadratic variation process $\langle B \rangle$ of $G$-Brownian motion $B$ is a very interesting process. We have seen that the $G$-Brownian motion $B$ is a typical process with variance uncertainty but without mean-uncertainty. In fact, $\langle B \rangle$ concentrates all the uncertainty of the $G$-Brownian motion $B$. Moreover, $\langle B \rangle$ itself is a typical process with mean-uncertainty. This fact will be applied to measure the mean-uncertainty of risk positions.
Lemma 5.1 We have
\[
\hat{\mathbb{E}}[\langle B \rangle_t^2] \leq 10\,\overline{\sigma}^4 t^2. \tag{5.12}
\]
Proof. Indeed,
\begin{align*}
\hat{\mathbb{E}}[\langle B \rangle_t^2]
&= \hat{\mathbb{E}}\Big[\Big(B_t^2 - 2\int_0^t B_u\,dB_u\Big)^2\Big]
\leq 2\hat{\mathbb{E}}[B_t^4] + 8\hat{\mathbb{E}}\Big[\Big(\int_0^t B_u\,dB_u\Big)^2\Big] \\
&\leq 6\overline{\sigma}^4 t^2 + 8\overline{\sigma}^2\,\hat{\mathbb{E}}\Big[\int_0^t B_u^2\,du\Big]
\leq 6\overline{\sigma}^4 t^2 + 8\overline{\sigma}^2 \int_0^t \hat{\mathbb{E}}[B_u^2]\,du = 10\,\overline{\sigma}^4 t^2. \qquad \Box
\end{align*}
Proposition 5.2 Let $(b_t)_{t \geq 0}$ be a process on a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ such that:

(i) $b_0 = 0$;

(ii) for each $t, s \geq 0$, $b_{t+s} - b_t$ is identically distributed with $b_s$ and independent from $(b_{t_1}, b_{t_2}, \ldots, b_{t_n})$ for each $n \in \mathbb{N}$ and $0 \leq t_1 \leq \cdots \leq t_n \leq t$;

(iii) $\lim_{t \downarrow 0} \hat{\mathbb{E}}[b_t^2]t^{-1} = 0$.

Then $b_t$ is $N([\underline{\mu}t, \overline{\mu}t] \times \{0\})$-distributed, with $\overline{\mu} = \hat{\mathbb{E}}[b_1]$ and $\underline{\mu} = -\hat{\mathbb{E}}[-b_1]$.

Proof. We first prove that $\hat{\mathbb{E}}[b_t] = \overline{\mu}t$ and $-\hat{\mathbb{E}}[-b_t] = \underline{\mu}t$. We set $\varphi(t) := \hat{\mathbb{E}}[b_t]$. Then $\varphi(0) = 0$ and $\lim_{t \downarrow 0} \varphi(t) = 0$. Since for each $t, s \geq 0$,
\[
\varphi(t + s) = \hat{\mathbb{E}}[b_{t+s}] = \hat{\mathbb{E}}[(b_{t+s} - b_s) + b_s] = \varphi(t) + \varphi(s),
\]
$\varphi(t)$ is linear and uniformly continuous in $t$, which means that $\hat{\mathbb{E}}[b_t] = \overline{\mu}t$. Similarly $-\hat{\mathbb{E}}[-b_t] = \underline{\mu}t$.
We now prove that $b_t$ is $N([\underline{\mu}t, \overline{\mu}t] \times \{0\})$-distributed. By Exercise 1.17 in Chap.II, we just need to prove that for each fixed $\varphi \in C_{b.Lip}(\mathbb{R})$, the function
\[
u(t, x) := \hat{\mathbb{E}}[\varphi(x + b_t)], \quad (t, x) \in [0, \infty) \times \mathbb{R},
\]
is the viscosity solution of the following parabolic PDE:
\[
\partial_t u - g(\partial_x u) = 0, \quad u|_{t=0} = \varphi, \tag{5.13}
\]
with $g(a) = \overline{\mu}a^+ - \underline{\mu}a^-$.
We first prove that $u$ is Lipschitz in $x$ and $\frac{1}{2}$-Hölder continuous in $t$. In fact, for each fixed $t$, $u(t, \cdot) \in C_{b.Lip}(\mathbb{R})$ since
\[
|\hat{\mathbb{E}}[\varphi(x + b_t)] - \hat{\mathbb{E}}[\varphi(y + b_t)]| \leq \hat{\mathbb{E}}[|\varphi(x + b_t) - \varphi(y + b_t)|] \leq C|x - y|.
\]
For each $\delta \in [0, t]$, since $b_t - b_\delta$ is independent from $b_\delta$, we have
\[
u(t, x) = \hat{\mathbb{E}}[\varphi(x + b_\delta + (b_t - b_\delta))] = \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\varphi(y + (b_t - b_\delta))]_{y = x + b_\delta}\big],
\]
hence
\[
u(t, x) = \hat{\mathbb{E}}[u(t - \delta, x + b_\delta)]. \tag{5.14}
\]
Thus
\[
|u(t, x) - u(t - \delta, x)| = |\hat{\mathbb{E}}[u(t - \delta, x + b_\delta) - u(t - \delta, x)]|
\leq \hat{\mathbb{E}}[|u(t - \delta, x + b_\delta) - u(t - \delta, x)|] \leq \hat{\mathbb{E}}[C|b_\delta|] \leq C_1 \delta^{1/2}.
\]
To prove that $u$ is a viscosity solution of the PDE (5.13), we fix a point $(t, x) \in (0, \infty) \times \mathbb{R}$ and let $v \in C^{2,2}_b([0, \infty) \times \mathbb{R})$ be such that $v \geq u$ and $v(t, x) = u(t, x)$. From (5.14), we have
\[
v(t, x) = \hat{\mathbb{E}}[u(t - \delta, x + b_\delta)] \leq \hat{\mathbb{E}}[v(t - \delta, x + b_\delta)].
\]
Therefore, by Taylor's expansion,
\begin{align*}
0 &\leq \hat{\mathbb{E}}[v(t - \delta, x + b_\delta) - v(t, x)] \\
&= \hat{\mathbb{E}}[v(t - \delta, x + b_\delta) - v(t, x + b_\delta) + (v(t, x + b_\delta) - v(t, x))] \\
&= \hat{\mathbb{E}}[-\partial_t v(t, x)\delta + \partial_x v(t, x)b_\delta + I_\delta] \\
&\leq -\partial_t v(t, x)\delta + \hat{\mathbb{E}}[\partial_x v(t, x)b_\delta] + \hat{\mathbb{E}}[I_\delta] \\
&= -\partial_t v(t, x)\delta + g(\partial_x v(t, x))\delta + \hat{\mathbb{E}}[I_\delta],
\end{align*}
where
\[
I_\delta = \int_0^1 \big[-\partial_t v(t - \beta\delta, x + b_\delta) + \partial_t v(t, x)\big]\delta\,d\beta
+ b_\delta \int_0^1 \big[\partial_x v(t, x + \beta b_\delta) - \partial_x v(t, x)\big]\,d\beta.
\]
With the assumption that $\lim_{t \downarrow 0} \hat{\mathbb{E}}[b_t^2]t^{-1} = 0$, we can check that
\[
\lim_{\delta \downarrow 0} \hat{\mathbb{E}}[|I_\delta|]\delta^{-1} = 0,
\]
from which we get $\partial_t v(t, x) - g(\partial_x v(t, x)) \leq 0$; hence $u$ is a viscosity subsolution of (5.13). We can analogously prove that $u$ is also a viscosity supersolution. It follows that $b_t$ is $N([\underline{\mu}t, \overline{\mu}t] \times \{0\})$-distributed. The proof is complete. $\Box$

It is clear that $\langle B \rangle$ satisfies all the conditions in Proposition 5.2, thus we immediately have
Theorem 5.3 $\langle B \rangle_t$ is $N([\underline{\sigma}^2 t, \overline{\sigma}^2 t] \times \{0\})$-distributed, i.e., for each $\varphi \in C_{l.Lip}(\mathbb{R})$,
\[
\hat{\mathbb{E}}[\varphi(\langle B \rangle_t)] = \sup_{\underline{\sigma}^2 \leq v \leq \overline{\sigma}^2} \varphi(vt). \tag{5.15}
\]
Corollary 5.4 For each $0 \leq t \leq T < \infty$, we have
\[
\underline{\sigma}^2(T - t) \leq \langle B \rangle_T - \langle B \rangle_t \leq \overline{\sigma}^2(T - t) \quad \text{in } L^1_G(\Omega).
\]
Proof. It is a direct consequence of
\[
\hat{\mathbb{E}}\big[\big(\langle B \rangle_T - \langle B \rangle_t - \overline{\sigma}^2(T - t)\big)^+\big] = \sup_{\underline{\sigma}^2 \leq v \leq \overline{\sigma}^2} (v - \overline{\sigma}^2)^+(T - t) = 0
\]
and
\[
\hat{\mathbb{E}}\big[\big(\langle B \rangle_T - \langle B \rangle_t - \underline{\sigma}^2(T - t)\big)^-\big] = \sup_{\underline{\sigma}^2 \leq v \leq \overline{\sigma}^2} (v - \underline{\sigma}^2)^-(T - t) = 0. \qquad \Box
\]
Corollary 5.5 We have, for each $t, s \geq 0$ and $n \in \mathbb{N}$,
\[
\hat{\mathbb{E}}\big[(\langle B \rangle_{t+s} - \langle B \rangle_s)^n \mid \Omega_s\big] = \hat{\mathbb{E}}[\langle B \rangle_t^n] = \overline{\sigma}^{2n} t^n \tag{5.16}
\]
and
\[
\hat{\mathbb{E}}\big[-(\langle B \rangle_{t+s} - \langle B \rangle_s)^n \mid \Omega_s\big] = \hat{\mathbb{E}}[-\langle B \rangle_t^n] = -\underline{\sigma}^{2n} t^n. \tag{5.17}
\]
We now consider the multi-dimensional case. For notational simplicity, we denote by $B^i := B^{e_i}$ the $i$-th coordinate of the $G$-Brownian motion $B$, under a given orthonormal basis $(e_1, \ldots, e_d)$ of $\mathbb{R}^d$. We denote
\[
(\langle B \rangle_t)_{ij} := \langle B^i, B^j \rangle_t.
\]
Then $\langle B \rangle_t$, $t \geq 0$, is an $\mathbb{S}(d)$-valued process. Since
\[
\hat{\mathbb{E}}[\langle AB_t, B_t \rangle] = 2G(A)t \quad \text{for } A \in \mathbb{S}(d),
\]
we have
\begin{align*}
\hat{\mathbb{E}}[(\langle B \rangle_t, A)]
&= \hat{\mathbb{E}}\Big[\sum_{i,j=1}^d a_{ij} \langle B^i, B^j \rangle_t\Big] \\
&= \hat{\mathbb{E}}\Big[\sum_{i,j=1}^d a_{ij}\Big(B^i_t B^j_t - \int_0^t B^i_s\,dB^j_s - \int_0^t B^j_s\,dB^i_s\Big)\Big] \\
&= \hat{\mathbb{E}}\Big[\sum_{i,j=1}^d a_{ij} B^i_t B^j_t\Big] = 2G(A)t \quad \text{for } A \in \mathbb{S}(d),
\end{align*}
where $(a_{ij})^d_{i,j=1} = A$.
Now we set, for each $\varphi \in C_{l.Lip}(\mathbb{S}(d))$,
\[
v(t, X) := \hat{\mathbb{E}}[\varphi(X + \langle B \rangle_t)], \quad (t, X) \in [0, \infty) \times \mathbb{S}(d).
\]
Let $\Sigma \subseteq \mathbb{S}_+(d)$ be the bounded, convex and closed subset such that
\[
G(A) = \frac{1}{2}\sup_{B \in \Sigma}(A, B), \quad A \in \mathbb{S}(d).
\]
Proposition 5.6 The function $v$ solves the following first order PDE:
\[
\partial_t v - 2G(Dv) = 0, \quad v|_{t=0} = \varphi,
\]
where $Dv = (\partial_{x_{ij}} v)^d_{i,j=1}$. We also have
\[
v(t, X) = \sup_{\Lambda \in \Sigma} \varphi(X + t\Lambda).
\]
Sketch of the Proof. We have
\[
v(t + \delta, X) = \hat{\mathbb{E}}\big[\varphi\big(X + \langle B \rangle_\delta + (\langle B \rangle_{t+\delta} - \langle B \rangle_\delta)\big)\big] = \hat{\mathbb{E}}[v(t, X + \langle B \rangle_\delta)].
\]
The rest of the proof is similar to the 1-dimensional case. $\Box$
Corollary 5.7 We have
\[
\langle B \rangle_t \in t\Sigma := \{t \times \gamma : \gamma \in \Sigma\},
\]
or equivalently, $d_{t\Sigma}(\langle B \rangle_t) = 0$, where $d_U(X) = \inf\big\{\sqrt{(X - Y, X - Y)} : Y \in U\big\}$.

Proof. Since
\[
\hat{\mathbb{E}}[d_{t\Sigma}(\langle B \rangle_t)] = \sup_{\Lambda \in \Sigma} d_{t\Sigma}(t\Lambda) = 0,
\]
it follows that $d_{t\Sigma}(\langle B \rangle_t) = 0$. $\Box$

Exercise 5.8 Complete the proof of Proposition 5.6.
6  G-Itô's Formula

In this section, we give Itô's formula for a $G$-Itô process $X$. For simplicity, we first consider the case where the function $\Phi$ is sufficiently regular.
Lemma 6.1 Let $\Phi \in C^2(\mathbb{R}^n)$ with $\partial_{x^\nu}\Phi, \partial^2_{x^\mu x^\nu}\Phi \in C_{b.Lip}(\mathbb{R}^n)$ for $\mu, \nu = 1, \ldots, n$. Let $s \in [0, T]$ be fixed and let $X = (X^1, \ldots, X^n)^T$ be an $n$-dimensional process on $[s, T]$ of the form
\[
X^\nu_t = X^\nu_s + \alpha^\nu(t - s) + \eta^{\nu ij}\big(\langle B^i, B^j \rangle_t - \langle B^i, B^j \rangle_s\big) + \beta^{\nu j}\big(B^j_t - B^j_s\big),
\]
where, for $\nu = 1, \ldots, n$, $i, j = 1, \ldots, d$, $\alpha^\nu$, $\eta^{\nu ij}$ and $\beta^{\nu j}$ are bounded elements in $L^2_G(\Omega_s)$ and $X_s = (X^1_s, \ldots, X^n_s)^T$ is a given random vector in $L^2_G(\Omega_s)$. Then we have, in $L^2_G(\Omega_t)$,
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\beta^{\nu j}\,dB^j_u + \int_s^t \partial_{x^\nu}\Phi(X_u)\alpha^\nu\,du
+ \int_s^t \Big[\partial_{x^\nu}\Phi(X_u)\eta^{\nu ij} + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u)\beta^{\mu i}\beta^{\nu j}\Big]\,d\langle B^i, B^j \rangle_u. \tag{6.18}
\]
Here we use the Einstein convention, i.e., each of the repeated indices $\mu$, $\nu$, $i$ and $j$ above implies summation.
Proof. For each positive integer $N$, we set $\delta = (t - s)/N$ and take the partition
\[
\pi^N_{[s,t]} = \{t^N_0, t^N_1, \ldots, t^N_N\} = \{s, s + \delta, \ldots, s + N\delta = t\}.
\]
We have
\begin{align*}
\Phi(X_t) - \Phi(X_s) &= \sum_{k=0}^{N-1} \big[\Phi\big(X_{t^N_{k+1}}\big) - \Phi\big(X_{t^N_k}\big)\big] \tag{6.19} \\
&= \sum_{k=0}^{N-1} \Big\{\partial_{x^\mu}\Phi\big(X_{t^N_k}\big)\big(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k}\big)
+ \frac{1}{2}\Big[\partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k}\big)\big(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k}\big)\big(X^\nu_{t^N_{k+1}} - X^\nu_{t^N_k}\big) + \eta^N_k\Big]\Big\},
\end{align*}
where
\[
\eta^N_k = \big[\partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k} + \theta_k\big(X_{t^N_{k+1}} - X_{t^N_k}\big)\big) - \partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k}\big)\big]\big(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k}\big)\big(X^\nu_{t^N_{k+1}} - X^\nu_{t^N_k}\big)
\]
with $\theta_k \in [0, 1]$.
We have
\begin{align*}
\hat{\mathbb{E}}\big[|\eta^N_k|^2\big]
&= \hat{\mathbb{E}}\Big[\Big|\big[\partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k} + \theta_k\big(X_{t^N_{k+1}} - X_{t^N_k}\big)\big) - \partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k}\big)\big]
\big(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k}\big)\big(X^\nu_{t^N_{k+1}} - X^\nu_{t^N_k}\big)\Big|^2\Big] \\
&\leq c\,\hat{\mathbb{E}}\big[\big|X_{t^N_{k+1}} - X_{t^N_k}\big|^6\big] \leq C[\delta^6 + \delta^3],
\end{align*}
where $c$ is the Lipschitz constant of $\{\partial^2_{x^\mu x^\nu}\Phi\}^n_{\mu,\nu=1}$ and $C$ is a constant independent of $k$. Thus
\[
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \eta^N_k\Big|^2\Big] \leq N \sum_{k=0}^{N-1} \hat{\mathbb{E}}\big[|\eta^N_k|^2\big] \to 0.
\]
The remaining terms in the summation on the right side of (6.19) are $\xi^N_t + \zeta^N_t$, with
\begin{align*}
\xi^N_t &= \sum_{k=0}^{N-1} \Big\{\partial_{x^\mu}\Phi\big(X_{t^N_k}\big)\Big[\alpha^\mu\big(t^N_{k+1} - t^N_k\big) + \eta^{\mu ij}\big(\langle B^i, B^j \rangle_{t^N_{k+1}} - \langle B^i, B^j \rangle_{t^N_k}\big) + \beta^{\mu j}\big(B^j_{t^N_{k+1}} - B^j_{t^N_k}\big)\Big] \\
&\qquad + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k}\big)\beta^{\mu i}\beta^{\nu j}\big(B^i_{t^N_{k+1}} - B^i_{t^N_k}\big)\big(B^j_{t^N_{k+1}} - B^j_{t^N_k}\big)\Big\}
\end{align*}
and
\begin{align*}
\zeta^N_t &= \frac{1}{2}\sum_{k=0}^{N-1} \partial^2_{x^\mu x^\nu}\Phi\big(X_{t^N_k}\big)\Big\{\Big[\alpha^\mu\big(t^N_{k+1} - t^N_k\big) + \eta^{\mu ij}\big(\langle B^i, B^j \rangle_{t^N_{k+1}} - \langle B^i, B^j \rangle_{t^N_k}\big)\Big] \\
&\qquad \times \Big[\alpha^\nu\big(t^N_{k+1} - t^N_k\big) + \eta^{\nu lm}\big(\langle B^l, B^m \rangle_{t^N_{k+1}} - \langle B^l, B^m \rangle_{t^N_k}\big)\Big] \\
&\qquad + 2\Big[\alpha^\mu\big(t^N_{k+1} - t^N_k\big) + \eta^{\mu ij}\big(\langle B^i, B^j \rangle_{t^N_{k+1}} - \langle B^i, B^j \rangle_{t^N_k}\big)\Big]\beta^{\nu l}\big(B^l_{t^N_{k+1}} - B^l_{t^N_k}\big)\Big\}.
\end{align*}
We observe that, for each $u \in [t^N_k, t^N_{k+1})$,
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\partial_{x^\mu}\Phi(X_u) - \sum_{k=0}^{N-1} \partial_{x^\mu}\Phi\big(X_{t^N_k}\big)I_{[t^N_k, t^N_{k+1})}(u)\Big|^2\Big]
&= \hat{\mathbb{E}}\big[\big|\partial_{x^\mu}\Phi(X_u) - \partial_{x^\mu}\Phi\big(X_{t^N_k}\big)\big|^2\big] \\
&\leq c^2\,\hat{\mathbb{E}}\big[\big|X_u - X_{t^N_k}\big|^2\big] \leq C[\delta + \delta^2],
\end{align*}
where $c$ is the Lipschitz constant of $\{\partial_{x^\mu}\Phi\}^n_{\mu=1}$ and $C$ is a constant independent of $k$. Thus $\sum_{k=0}^{N-1} \partial_{x^\mu}\Phi(X_{t^N_k})I_{[t^N_k, t^N_{k+1})}(\cdot)$ converges to $\partial_{x^\mu}\Phi(X_\cdot)$ in $M^2_G(0, T)$.
Similarly, $\sum_{k=0}^{N-1} \partial^2_{x^\mu x^\nu}\Phi(X_{t^N_k})I_{[t^N_k, t^N_{k+1})}(\cdot)$ converges to $\partial^2_{x^\mu x^\nu}\Phi(X_\cdot)$ in $M^2_G(0, T)$. From Lemma 4.6 as well as the definitions of the integrals with respect to $dt$, $dB_t$ and $d\langle B \rangle_t$, the limit of $\xi^N_t$ in $L^2_G(\Omega_t)$ is just the right hand side of (6.18). By the next Remark we also have $\zeta^N_t \to 0$ in $L^2_G(\Omega_t)$. We have then proved (6.18). $\Box$
Remark 6.2 To prove $\zeta^N_t \to 0$ in $L^2_G(\Omega_t)$, we use the following estimates: for $\psi^N \in M^{2,0}_G(0, T)$ with $\psi^N_t = \sum_{k=0}^{N-1} \xi^N_{t_k} I_{[t^N_k, t^N_{k+1})}(t)$, and $\pi^N_T = \{t^N_0, \ldots, t^N_N\}$ such that $\lim_{N \to \infty} \mu(\pi^N_T) = 0$ and $\hat{\mathbb{E}}\big[\sum_{k=0}^{N-1} |\xi^N_{t_k}|^2(t^N_{k+1} - t^N_k)\big] \leq C$ for all $N = 1, 2, \ldots$, we have
\[
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \xi^N_k\big(t^N_{k+1} - t^N_k\big)^2\Big|^2\Big] \to 0
\]
and, for any fixed $a, \bar{a} \in \mathbb{R}^d$,
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \xi^N_k\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)^2\Big|^2\Big]
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)^3\Big] \\
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\,\overline{\sigma}^6_{aa^T}\big(t^N_{k+1} - t^N_k\big)^3\Big] \to 0,
\end{align*}
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \xi^N_k\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)\big(t^N_{k+1} - t^N_k\big)\Big|^2\Big]
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\big(t^N_{k+1} - t^N_k\big)\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)^2\Big] \\
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\,\overline{\sigma}^4_{aa^T}\big(t^N_{k+1} - t^N_k\big)^3\Big] \to 0,
\end{align*}
as well as
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \xi^N_k\big(t^N_{k+1} - t^N_k\big)\big(B^a_{t^N_{k+1}} - B^a_{t^N_k}\big)\Big|^2\Big]
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\big(t^N_{k+1} - t^N_k\big)\big|B^a_{t^N_{k+1}} - B^a_{t^N_k}\big|^2\Big] \\
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\,\overline{\sigma}^2_{aa^T}\big(t^N_{k+1} - t^N_k\big)^2\Big] \to 0
\end{align*}
and
\begin{align*}
\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1} \xi^N_k\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)\big(B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big)\Big|^2\Big]
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\big(\langle B^a \rangle_{t^N_{k+1}} - \langle B^a \rangle_{t^N_k}\big)\big|B^{\bar{a}}_{t^N_{k+1}} - B^{\bar{a}}_{t^N_k}\big|^2\Big] \\
&\leq C\,\hat{\mathbb{E}}\Big[\sum_{k=0}^{N-1} |\xi^N_k|^2\,\overline{\sigma}^2_{aa^T}\overline{\sigma}^2_{\bar{a}\bar{a}^T}\big(t^N_{k+1} - t^N_k\big)^2\Big] \to 0.
\end{align*}
We now consider a $G$-Itô process of the form
\[
X^\nu_t = X^\nu_0 + \int_0^t \alpha^\nu_s\,ds + \int_0^t \eta^{\nu ij}_s\,d\langle B^i, B^j \rangle_s + \int_0^t \beta^{\nu j}_s\,dB^j_s.
\]
Proposition 6.3 Let $\Phi \in C^2(\mathbb{R}^n)$ with $\partial_{x^\nu}\Phi, \partial^2_{x^\mu x^\nu}\Phi \in C_{b.Lip}(\mathbb{R}^n)$ for $\mu, \nu = 1, \ldots, n$. Let $\alpha^\nu$, $\beta^{\nu j}$ and $\eta^{\nu ij}$, $\nu = 1, \ldots, n$, $i, j = 1, \ldots, d$, be bounded processes in $M^2_G(0, T)$. Then for each $t \geq 0$ we have, in $L^2_G(\Omega_t)$,
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\beta^{\nu j}_u\,dB^j_u + \int_s^t \partial_{x^\nu}\Phi(X_u)\alpha^\nu_u\,du
+ \int_s^t \Big[\partial_{x^\nu}\Phi(X_u)\eta^{\nu ij}_u + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u)\beta^{\mu i}_u\beta^{\nu j}_u\Big]\,d\langle B^i, B^j \rangle_u. \tag{6.20}
\]
Proof. We first consider the case where $\alpha$, $\eta$ and $\beta$ are step processes of the form
\[
\eta_t(\omega) = \sum_{k=0}^{N-1} \xi_k(\omega)\, I_{[t_k, t_{k+1})}(t).
\]
From the above lemma, it is clear that (6.20) holds true. Now let
\[
X^{\nu,N}_t = X^\nu_0 + \int_0^t \alpha^{\nu,N}_s\,ds + \int_0^t \eta^{\nu ij,N}_s\,d\langle B^i, B^j \rangle_s + \int_0^t \beta^{\nu j,N}_s\,dB^j_s,
\]
where $\alpha^N$, $\eta^N$ and $\beta^N$ are uniformly bounded step processes that converge to $\alpha$, $\eta$ and $\beta$ in $M^2_G(0, T)$ as $N \to \infty$, respectively. From Lemma 6.1,
\[
\Phi\big(X^N_t\big) - \Phi\big(X^N_s\big) = \int_s^t \partial_{x^\nu}\Phi\big(X^N_u\big)\beta^{\nu j,N}_u\,dB^j_u + \int_s^t \partial_{x^\nu}\Phi\big(X^N_u\big)\alpha^{\nu,N}_u\,du
+ \int_s^t \Big[\partial_{x^\nu}\Phi\big(X^N_u\big)\eta^{\nu ij,N}_u + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi\big(X^N_u\big)\beta^{\mu i,N}_u\beta^{\nu j,N}_u\Big]\,d\langle B^i, B^j \rangle_u. \tag{6.21}
\]
Since
\[
\hat{\mathbb{E}}\big[\big|X^{\nu,N}_t - X^\nu_t\big|^2\big] \leq C\,\hat{\mathbb{E}}\Big[\int_0^T \big[\big(\alpha^{\nu,N}_s - \alpha^\nu_s\big)^2 + \big|\eta^{\nu,N}_s - \eta^\nu_s\big|^2 + \big|\beta^{\nu,N}_s - \beta^\nu_s\big|^2\big]\,ds\Big],
\]
where $C$ is a constant independent of $N$, we can prove that, in $M^2_G(0, T)$,
\begin{align*}
\partial_{x^\nu}\Phi\big(X^N_\cdot\big)\eta^{\nu ij,N}_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\eta^{\nu ij}_\cdot, \\
\partial^2_{x^\mu x^\nu}\Phi\big(X^N_\cdot\big)\beta^{\mu i,N}_\cdot\beta^{\nu j,N}_\cdot &\to \partial^2_{x^\mu x^\nu}\Phi(X_\cdot)\beta^{\mu i}_\cdot\beta^{\nu j}_\cdot, \\
\partial_{x^\nu}\Phi\big(X^N_\cdot\big)\alpha^{\nu,N}_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\alpha^\nu_\cdot, \\
\partial_{x^\nu}\Phi\big(X^N_\cdot\big)\beta^{\nu j,N}_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\beta^{\nu j}_\cdot.
\end{align*}
We can then pass to the limit as $N \to \infty$ on both sides of (6.21) to get (6.20). $\Box$
In order to consider the general $\Phi$, we first prove a useful inequality. For the $G$-expectation $\hat{\mathbb{E}}$, we have the following representation (see Chap.VI):
\[
\hat{\mathbb{E}}[X] = \sup_{P \in \mathcal{P}} E_P[X] \quad \text{for } X \in L^1_G(\Omega), \tag{6.22}
\]
where $\mathcal{P}$ is a weakly compact family of probability measures on $(\Omega, \mathcal{B}(\Omega))$.
Proposition 6.4 Let $\eta \in M^p_G(0, T)$ with $p \geq 2$ and let $a \in \mathbb{R}^d$ be fixed. Then we have $\int_0^T \eta_t\,dB^a_t \in L^p_G(\Omega_T)$ and
\[
\hat{\mathbb{E}}\Big[\Big|\int_0^T \eta_t\,dB^a_t\Big|^p\Big] \leq C_p\,\hat{\mathbb{E}}\Big[\Big|\int_0^T \eta_t^2\,d\langle B^a \rangle_t\Big|^{p/2}\Big]. \tag{6.23}
\]
Proof. It suffices to consider the case where $\eta$ is a step process of the form
\[
\eta_t(\omega) = \sum_{k=0}^{N-1} \xi_k(\omega)\, I_{[t_k, t_{k+1})}(t).
\]
For each $\xi \in L_{ip}(\Omega_t)$ with $t \in [0, T]$, we have
\[
\hat{\mathbb{E}}\Big[\xi \int_t^T \eta_s\,dB^a_s\Big] = 0.
\]
From this we can easily get $E_P\big[\xi \int_t^T \eta_s\,dB^a_s\big] = 0$ for each $P \in \mathcal{P}$, which implies that $\big(\int_0^t \eta_s\,dB^a_s\big)_{t \in [0, T]}$ is a $P$-martingale. Similarly we can prove that
\[
M_t := \Big(\int_0^t \eta_s\,dB^a_s\Big)^2 - \int_0^t \eta_s^2\,d\langle B^a \rangle_s, \quad t \in [0, T],
\]
is a $P$-martingale for each $P \in \mathcal{P}$. By the Burkholder-Davis-Gundy inequalities, we have
\[
E_P\Big[\Big|\int_0^T \eta_t\,dB^a_t\Big|^p\Big] \leq C_p\,E_P\Big[\Big|\int_0^T \eta_t^2\,d\langle B^a \rangle_t\Big|^{p/2}\Big] \leq C_p\,\hat{\mathbb{E}}\Big[\Big|\int_0^T \eta_t^2\,d\langle B^a \rangle_t\Big|^{p/2}\Big],
\]
where $C_p$ is a universal constant independent of $P$. Thus we get (6.23). $\Box$
We now give the general $G$-Itô's formula.

Theorem 6.5 Let $\Phi$ be a $C^2$-function on $\mathbb{R}^n$ such that $\partial^2_{x^\mu x^\nu}\Phi$ satisfy a polynomial growth condition for $\mu, \nu = 1, \ldots, n$. Let $\alpha^\nu$, $\beta^{\nu j}$ and $\eta^{\nu ij}$, $\nu = 1, \ldots, n$, $i, j = 1, \ldots, d$, be bounded processes in $M^2_G(0, T)$. Then for each $t \geq 0$ we have, in $L^2_G(\Omega_t)$,
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\beta^{\nu j}_u\,dB^j_u + \int_s^t \partial_{x^\nu}\Phi(X_u)\alpha^\nu_u\,du
+ \int_s^t \Big[\partial_{x^\nu}\Phi(X_u)\eta^{\nu ij}_u + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u)\beta^{\mu i}_u\beta^{\nu j}_u\Big]\,d\langle B^i, B^j \rangle_u. \tag{6.24}
\]
Proof. By the assumptions on $\Phi$, we can choose a sequence of functions $\Phi_N \in C^2_0(\mathbb{R}^n)$ such that
\[
|\Phi_N(x) - \Phi(x)| + |\partial_{x^\nu}\Phi_N(x) - \partial_{x^\nu}\Phi(x)| + |\partial^2_{x^\mu x^\nu}\Phi_N(x) - \partial^2_{x^\mu x^\nu}\Phi(x)| \leq \frac{C_1}{N}\big(1 + |x|^k\big),
\]
where $C_1$ and $k$ are positive constants independent of $N$. Obviously, $\Phi_N$ satisfies the conditions in Proposition 6.3; therefore,
\[
\Phi_N(X_t) - \Phi_N(X_s) = \int_s^t \partial_{x^\nu}\Phi_N(X_u)\beta^{\nu j}_u\,dB^j_u + \int_s^t \partial_{x^\nu}\Phi_N(X_u)\alpha^\nu_u\,du
+ \int_s^t \Big[\partial_{x^\nu}\Phi_N(X_u)\eta^{\nu ij}_u + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi_N(X_u)\beta^{\mu i}_u\beta^{\nu j}_u\Big]\,d\langle B^i, B^j \rangle_u. \tag{6.25}
\]
For each fixed $T > 0$, by Proposition 6.4, there exists a constant $C_2$ such that
\[
\hat{\mathbb{E}}\big[|X_t|^{2k}\big] \leq C_2 \quad \text{for } t \in [0, T].
\]
Thus we can prove that $\Phi_N(X_t) \to \Phi(X_t)$ in $L^2_G(\Omega_t)$ and, in $M^2_G(0, T)$,
\begin{align*}
\partial_{x^\nu}\Phi_N(X_\cdot)\eta^{\nu ij}_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\eta^{\nu ij}_\cdot, \\
\partial^2_{x^\mu x^\nu}\Phi_N(X_\cdot)\beta^{\mu i}_\cdot\beta^{\nu j}_\cdot &\to \partial^2_{x^\mu x^\nu}\Phi(X_\cdot)\beta^{\mu i}_\cdot\beta^{\nu j}_\cdot, \\
\partial_{x^\nu}\Phi_N(X_\cdot)\alpha^\nu_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\alpha^\nu_\cdot, \\
\partial_{x^\nu}\Phi_N(X_\cdot)\beta^{\nu j}_\cdot &\to \partial_{x^\nu}\Phi(X_\cdot)\beta^{\nu j}_\cdot.
\end{align*}
We can then pass to the limit as $N \to \infty$ on both sides of (6.25) to get (6.24). $\Box$
Corollary 6.6 Let $\Phi$ be a polynomial and let $a, a_\nu \in \mathbb{R}^d$ be fixed for $\nu = 1, \ldots, n$. Then we have
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\,dB^{a_\nu}_u + \frac{1}{2}\int_s^t \partial^2_{x^\mu x^\nu}\Phi(X_u)\,d\langle B^{a_\mu}, B^{a_\nu} \rangle_u,
\]
where $X_t = (B^{a_1}_t, \ldots, B^{a_n}_t)^T$. In particular, we have, for $k = 2, 3, \ldots$,
\[
(B^a_t)^k = k\int_0^t (B^a_s)^{k-1}\,dB^a_s + \frac{k(k-1)}{2}\int_0^t (B^a_s)^{k-2}\,d\langle B^a \rangle_s.
\]
If $\hat{\mathbb{E}}$ becomes a linear expectation, then the above $G$-Itô's formula is the classical one.
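The power formula in Corollary 6.6 also has an exact discrete shadow: expanding $B_{t_{j+1}}^3 - B_{t_j}^3$ telescopically gives $B_T^3 = 3\sum_j B_{t_j}^2\Delta B_j + 3\sum_j B_{t_j}(\Delta B_j)^2 + \sum_j(\Delta B_j)^3$, whose three sums approximate $3\int B^2\,dB$, $3\int B\,d\langle B\rangle$ and a remainder that vanishes as the mesh shrinks. The sketch below is our own illustration under one fixed-volatility scenario and checks only the telescoping identity:

```python
import math
import random

random.seed(1)
n = 4000
dt = 1.0 / n
B = [0.0]
for _ in range(n):
    # one constant-volatility scenario, sigma = 0.8
    B.append(B[-1] + 0.8 * math.sqrt(dt) * random.gauss(0.0, 1.0))
dB = [B[j + 1] - B[j] for j in range(n)]

i1 = sum(B[j] ** 2 * dB[j] for j in range(n))  # Riemann sum for int B^2 dB
i2 = sum(B[j] * dB[j] ** 2 for j in range(n))  # Riemann sum for int B d<B>
r3 = sum(d ** 3 for d in dB)                   # remainder, O(dt^{1/2}) in L^2
residual = abs(B[-1] ** 3 - (3.0 * i1 + 3.0 * i2 + r3))
```

`residual` is zero up to floating-point rounding for every path; the content of the corollary is that, in $L^2_G$, the middle sum converges to the $d\langle B \rangle$ integral and the cubic remainder to zero.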
7  Generalized G-Brownian Motion

Let $G : \mathbb{R}^d \times \mathbb{S}(d) \to \mathbb{R}$ be a given continuous sublinear function monotonic in $A \in \mathbb{S}(d)$. Then by Theorem 2.1 in Chap.I, there exists a bounded, convex and closed subset $\Gamma \subseteq \mathbb{R}^d \times \mathbb{S}_+(d)$ such that
\[
G(p, A) = \sup_{(q, B) \in \Gamma} \Big[\frac{1}{2}\mathrm{tr}[AB] + \langle p, q \rangle\Big] \quad \text{for } (p, A) \in \mathbb{R}^d \times \mathbb{S}(d).
\]
By Chapter II, we know that there exists a pair of $d$-dimensional random vectors $(X, Y)$ which is $G$-distributed.

We now give the definition of the generalized $G$-Brownian motion.
Definition 7.1 A $d$-dimensional process $(B_t)_{t \geq 0}$ on a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a generalized $G$-Brownian motion if the following properties are satisfied:

(i) $B_0(\omega) = 0$;

(ii) for each $t, s \geq 0$, the increment $B_{t+s} - B_t$ is identically distributed with $B_s$ and is independent from $(B_{t_1}, B_{t_2}, \ldots, B_{t_n})$, for each $n \in \mathbb{N}$ and $0 \leq t_1 \leq \cdots \leq t_n \leq t$;

(iii) $\lim_{t \downarrow 0} \hat{\mathbb{E}}[|B_t|^3]t^{-1} = 0$.

The generator of such a process is obtained through the limit
\[
G(p, A) := \lim_{\delta \downarrow 0} \hat{\mathbb{E}}\Big[\langle p, B_\delta \rangle + \frac{1}{2}\langle AB_\delta, B_\delta \rangle\Big]\delta^{-1} \quad \text{for } (p, A) \in \mathbb{R}^d \times \mathbb{S}(d).
\]
Proof. We first prove that $\lim_{\delta \downarrow 0} \hat{\mathbb{E}}[\langle p, B_\delta \rangle + \frac{1}{2}\langle AB_\delta, B_\delta \rangle]\delta^{-1}$ exists. For each fixed $(p, A) \in \mathbb{R}^d \times \mathbb{S}(d)$, we set
\[
f(t) := \hat{\mathbb{E}}\Big[\langle p, B_t \rangle + \frac{1}{2}\langle AB_t, B_t \rangle\Big].
\]
Since
\[
|f(t + h) - f(t)| \leq \hat{\mathbb{E}}\big[(|p| + 2|A||B_t|)|B_{t+h} - B_t| + |A||B_{t+h} - B_t|^2\big] \to 0,
\]
we get that $f(t)$ is a continuous function. It is easy to prove that
\[
\hat{\mathbb{E}}[\langle q, B_t \rangle] = \hat{\mathbb{E}}[\langle q, B_1 \rangle]t \quad \text{for } q \in \mathbb{R}^d.
\]
Thus for each $t, s > 0$,
\[
|f(t + s) - f(t) - f(s)| \leq C\,\hat{\mathbb{E}}[|B_t|]s,
\]
where $C = |A|\,\hat{\mathbb{E}}[|B_1|]$. By (iii), there exists a constant $\delta_0 > 0$ such that $\hat{\mathbb{E}}[|B_t|^3] \leq t$ for $t \leq \delta_0$. Thus for each fixed $t > 0$ and $N \in \mathbb{N}$ such that $Nt \leq \delta_0$, we have
\[
|f(Nt) - Nf(t)| \leq \frac{3}{4}C(Nt)^{4/3}.
\]
From this and the continuity of $f$, it is easy to show that $\lim_{t \downarrow 0} f(t)t^{-1}$ exists. Thus we can get $G(p, A)$ for each $(p, A) \in \mathbb{R}^d \times \mathbb{S}(d)$. It is also easy to check that $G$ is a continuous sublinear function monotonic in $A \in \mathbb{S}(d)$.
We only need to prove that, for each fixed $\varphi \in C_{b.Lip}(\mathbb{R}^d)$, the function
\[
u(t, x) := \hat{\mathbb{E}}[\varphi(x + B_t)], \quad (t, x) \in [0, \infty) \times \mathbb{R}^d,
\]
is the viscosity solution of the following parabolic PDE:
\[
\partial_t u - G(Du, D^2 u) = 0, \quad u|_{t=0} = \varphi. \tag{7.26}
\]
We first prove that $u$ is Lipschitz in $x$ and $\frac{1}{2}$-Hölder continuous in $t$. In fact, for each fixed $t$, $u(t, \cdot) \in C_{b.Lip}(\mathbb{R}^d)$ since
\[
|\hat{\mathbb{E}}[\varphi(x + B_t)] - \hat{\mathbb{E}}[\varphi(y + B_t)]| \leq \hat{\mathbb{E}}[|\varphi(x + B_t) - \varphi(y + B_t)|] \leq C|x - y|.
\]
For each $\delta \in [0, t]$, since $B_t - B_\delta$ is independent from $B_\delta$,
\[
u(t, x) = \hat{\mathbb{E}}[\varphi(x + B_\delta + (B_t - B_\delta))] = \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\varphi(y + (B_t - B_\delta))]_{y = x + B_\delta}\big].
\]
Hence
\[
u(t, x) = \hat{\mathbb{E}}[u(t - \delta, x + B_\delta)]. \tag{7.27}
\]
Thus
\[
|u(t, x) - u(t - \delta, x)| = |\hat{\mathbb{E}}[u(t - \delta, x + B_\delta) - u(t - \delta, x)]|
\leq \hat{\mathbb{E}}[|u(t - \delta, x + B_\delta) - u(t - \delta, x)|] \leq \hat{\mathbb{E}}[C|B_\delta|] \leq C\sqrt{G(0, I) + 1}\,\delta^{1/2}.
\]
To prove that $u$ is a viscosity solution of (7.26), we fix a $(t, x) \in (0, \infty) \times \mathbb{R}^d$ and let $v \in C^{2,3}_b([0, \infty) \times \mathbb{R}^d)$ be such that $v \geq u$ and $v(t, x) = u(t, x)$. From (7.27), we have
\[
v(t, x) = \hat{\mathbb{E}}[u(t - \delta, x + B_\delta)] \leq \hat{\mathbb{E}}[v(t - \delta, x + B_\delta)].
\]
Therefore, by Taylor's expansion,
\begin{align*}
0 &\leq \hat{\mathbb{E}}[v(t - \delta, x + B_\delta) - v(t, x)] \\
&= \hat{\mathbb{E}}[v(t - \delta, x + B_\delta) - v(t, x + B_\delta) + (v(t, x + B_\delta) - v(t, x))] \\
&= \hat{\mathbb{E}}\Big[-\partial_t v(t, x)\delta + \langle Dv(t, x), B_\delta \rangle + \frac{1}{2}\langle D^2 v(t, x)B_\delta, B_\delta \rangle + I_\delta\Big] \\
&\leq -\partial_t v(t, x)\delta + \hat{\mathbb{E}}\Big[\langle Dv(t, x), B_\delta \rangle + \frac{1}{2}\langle D^2 v(t, x)B_\delta, B_\delta \rangle\Big] + \hat{\mathbb{E}}[I_\delta],
\end{align*}
where
\[
I_\delta = \int_0^1 \big[-\partial_t v(t - \beta\delta, x + B_\delta) + \partial_t v(t, x)\big]\delta\,d\beta
+ \int_0^1 \int_0^1 \big\langle \big(D^2 v(t, x + \gamma\beta B_\delta) - D^2 v(t, x)\big)B_\delta, B_\delta \big\rangle \gamma\,d\beta\,d\gamma.
\]
With the assumption (iii) we can check that $\lim_{\delta \downarrow 0} \hat{\mathbb{E}}[|I_\delta|]\delta^{-1} = 0$, from which we get $\partial_t v(t, x) - G(Dv(t, x), D^2 v(t, x)) \leq 0$; hence $u$ is a viscosity subsolution of (7.26). We can analogously prove that $u$ is a viscosity supersolution. Thus $u$ is a viscosity solution and $(B_t)_{t \geq 0}$ is a generalized $G$-Brownian motion. $\Box$
In many situations we are interested in a generalized $2d$-dimensional Brownian motion $(B_t, b_t)_{t \geq 0}$ such that $\hat{\mathbb{E}}[B_t] = \hat{\mathbb{E}}[-B_t] = 0$ and $\hat{\mathbb{E}}[|b_t|^2]/t \to 0$ as $t \to 0$. In this case $B$ is in fact a $G$-Brownian motion as defined in Definition 2.1 of Chapter II. Moreover, the process $b$ satisfies the properties of Proposition 5.2. We define $u(t, x, y) = \hat{\mathbb{E}}[\varphi(x + B_t, y + b_t)]$. By the above proposition it follows that $u$ is the solution of the PDE
\[
\partial_t u = G(D_y u, D^2_{xx} u), \quad u|_{t=0} = \varphi \in C_{l.Lip}(\mathbb{R}^{2d}),
\]
where $G$ is a sublinear function of $(p, A) \in \mathbb{R}^d \times \mathbb{S}(d)$ defined by
\[
G(p, A) := \hat{\mathbb{E}}\Big[\langle p, b_1 \rangle + \frac{1}{2}\langle AB_1, B_1 \rangle\Big].
\]
Here $\langle \cdot, \cdot \rangle$ denotes the scalar product of $\mathbb{R}^d$.
8 G-Brownian Motion under a Nonlinear Expectation

We can also define a G-Brownian motion on a nonlinear expectation space $(\Omega, \mathcal{H}, \tilde{\mathbb{E}})$.

Definition 8.1 A $d$-dimensional process $(B_t)_{t\ge0}$ on a nonlinear expectation space $(\Omega, \mathcal{H}, \tilde{\mathbb{E}})$ is called a (nonlinear) $\tilde{G}$-Brownian motion if the following properties are satisfied:
(i) $B_0(\omega) = 0$;
(ii) For each $t, s \ge 0$, the increment $B_{t+s} - B_t$ is identically distributed with $B_s$ and is independent from $(B_{t_1}, B_{t_2}, \ldots, B_{t_n})$, for each $n \in \mathbb{N}$ and $0 \le t_1 \le \cdots \le t_n \le t$;
(iii) $\lim_{t\downarrow0} \tilde{\mathbb{E}}[|B_t|^3]\, t^{-1} = 0$.
The following theorem gives a characterization of the nonlinear $\tilde{G}$-Brownian motion, and gives us the generator $\tilde{G}$ of our $\tilde{G}$-Brownian motion.

Theorem 8.2 Let $\tilde{\mathbb{E}}$ be a nonlinear expectation and $\hat{\mathbb{E}}$ be a sublinear expectation defined on $(\Omega, \mathcal{H})$. Let $\tilde{\mathbb{E}}$ be dominated by $\hat{\mathbb{E}}$, namely
\[
\tilde{\mathbb{E}}[X] - \tilde{\mathbb{E}}[Y] \le \hat{\mathbb{E}}[X - Y], \quad X, Y \in \mathcal{H}.
\]
Let $(B_t, b_t)_{t\ge0}$ be a given $\mathbb{R}^{2d}$-valued $\tilde{G}$-Brownian motion on $(\Omega, \mathcal{H}, \tilde{\mathbb{E}})$ such that $\hat{\mathbb{E}}[B_t] = \hat{\mathbb{E}}[-B_t] = 0$ and $\lim_{t\downarrow0}\tilde{\mathbb{E}}[|b_t|^2]/t = 0$. Then, for each fixed $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^{2d})$, the function
\[
u(t,x,y) := \tilde{\mathbb{E}}[\varphi(x + B_t, y + b_t)], \quad (t,x,y) \in [0,\infty)\times\mathbb{R}^{2d},
\]
is the viscosity solution of the following parabolic PDE:
\[
\partial_t u - \tilde{G}(D_y u, D^2_x u) = 0, \qquad u|_{t=0} = \varphi, \tag{8.28}
\]
where
\[
\tilde{G}(p, A) = \tilde{\mathbb{E}}\big[\langle p, b_1\rangle + \tfrac12\langle A B_1, B_1\rangle\big], \quad (p,A) \in \mathbb{R}^d\times\mathbb{S}(d).
\]
Remark 8.3 Let $G(p, A) := \hat{\mathbb{E}}[\langle p, b_1\rangle + \tfrac12\langle A B_1, B_1\rangle]$. Then the function $\tilde{G}$ is dominated by the sublinear function $G$ in the following sense:
\[
\tilde{G}(p, A) - \tilde{G}(p', A') \le G(p - p', A - A'), \quad (p,A), (p',A') \in \mathbb{R}^d\times\mathbb{S}(d). \tag{8.29}
\]
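The domination (8.29) can be checked numerically in a toy one-dimensional case (this sketch and all its numeric values are our own illustrative assumptions, not part of the text): take the sublinear $G(p,a) = \overline{\mu}p^+ - \underline{\mu}p^- + \frac12(\overline{\sigma}^2 a^+ - \underline{\sigma}^2 a^-)$ and, as a dominated $\tilde{G}$, any linear $\tilde{G}(p,a) = \mu_0 p + \frac12\sigma_0^2 a$ with $\mu_0 \in [\underline{\mu}, \overline{\mu}]$ and $\sigma_0^2 \in [\underline{\sigma}^2, \overline{\sigma}^2]$:

```python
import numpy as np

mu_lo, mu_hi = -0.5, 1.0
s2_lo, s2_hi = 0.5, 2.0
mu0, s20 = 0.3, 1.2          # coefficients of the dominated linear G~ (assumed)

def G(p, a):                 # sublinear generator (sup over mu and sigma^2 ranges)
    return mu_hi * max(p, 0.0) + mu_lo * min(p, 0.0) \
        + 0.5 * (s2_hi * max(a, 0.0) + s2_lo * min(a, 0.0))

def Gt(p, a):                # a linear generator dominated by G
    return mu0 * p + 0.5 * s20 * a

rng = np.random.default_rng(0)
worst = min(
    G(p - q, a - b) - (Gt(p, a) - Gt(q, b))
    for p, q, a, b in rng.uniform(-5, 5, size=(20000, 4))
)
print(worst)                 # domination (8.29) holds iff worst >= 0
```

The scan returns a nonnegative minimum, as expected: a linear generator with coefficients inside the uncertainty intervals is always dominated by the corresponding sublinear envelope.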
Proof of Theorem 8.2. We set
\[
f(t) = f_{p,A}(t) := \tilde{\mathbb{E}}\big[\langle p, b_t\rangle + \tfrac12\langle A B_t, B_t\rangle\big], \quad t \ge 0.
\]
Since
\[
|f(t+h) - f(t)| \le \hat{\mathbb{E}}\big[\,|p|\,|b_{t+h} - b_t| + (|p| + 2|A||B_t|)|B_{t+h} - B_t| + |A||B_{t+h} - B_t|^2\,\big] \to 0,
\]
we get that $f(t)$ is a continuous function. Since $\hat{\mathbb{E}}[B_t] = \hat{\mathbb{E}}[-B_t] = 0$, it follows from Proposition 3.7 that $\tilde{\mathbb{E}}[X + \langle p, B_t\rangle] = \tilde{\mathbb{E}}[X]$ for each $X \in \mathcal{H}$ and $p \in \mathbb{R}^d$. Thus
\begin{align*}
f(t+h) &= \tilde{\mathbb{E}}\big[\langle p, b_{t+h} - b_t\rangle + \langle p, b_t\rangle + \tfrac12\langle A(B_{t+h} - B_t), B_{t+h} - B_t\rangle + \tfrac12\langle A B_t, B_t\rangle\big]\\
&= \tilde{\mathbb{E}}\big[\langle p, b_h\rangle + \tfrac12\langle A B_h, B_h\rangle\big] + \tilde{\mathbb{E}}\big[\langle p, b_t\rangle + \tfrac12\langle A B_t, B_t\rangle\big]\\
&= f(t) + f(h).
\end{align*}
It then follows that $f(t) = f(1)t = \tilde{G}(p, A)t$. We now prove that the function $u$ is Lipschitz in $(x,y)$ and uniformly continuous in $t$. In fact, for each fixed $t$, $u(t,\cdot,\cdot) \in C_{b.\mathrm{Lip}}(\mathbb{R}^{2d})$ since
\[
|\tilde{\mathbb{E}}[\varphi(x+B_t, y+b_t)] - \tilde{\mathbb{E}}[\varphi(x'+B_t, y'+b_t)]|
\le \hat{\mathbb{E}}[|\varphi(x+B_t, y+b_t) - \varphi(x'+B_t, y'+b_t)|] \le C(|x-x'| + |y-y'|).
\]
For each $\delta \in [0,t]$, since $(B_t - B_\delta, b_t - b_\delta)$ is independent from $(B_\delta, b_\delta)$,
\begin{align*}
u(t,x,y) &= \tilde{\mathbb{E}}[\varphi(x + B_\delta + (B_t - B_\delta),\, y + b_\delta + (b_t - b_\delta))]\\
&= \tilde{\mathbb{E}}\big[\tilde{\mathbb{E}}[\varphi(\bar{x} + (B_t - B_\delta),\, \bar{y} + (b_t - b_\delta))]_{\bar{x}=x+B_\delta,\ \bar{y}=y+b_\delta}\big].
\end{align*}
Hence
\[
u(t,x,y) = \tilde{\mathbb{E}}[u(t-\delta, x+B_\delta, y+b_\delta)]. \tag{8.30}
\]
Thus
\[
|u(t,x,y) - u(t-\delta,x,y)| = |\tilde{\mathbb{E}}[u(t-\delta, x+B_\delta, y+b_\delta) - u(t-\delta,x,y)]|
\le \hat{\mathbb{E}}[|u(t-\delta, x+B_\delta, y+b_\delta) - u(t-\delta,x,y)|]
\le \hat{\mathbb{E}}[C(|B_\delta| + |b_\delta|)].
\]
It follows from (iii) of Definition 8.1 that $u(t,x,y)$ is continuous in $t$, uniformly in $(x,y) \in \mathbb{R}^{2d}$.
To prove that $u$ is a viscosity solution of (8.28), we fix a $(t,x,y) \in (0,\infty)\times\mathbb{R}^{2d}$ and let $v \in C^{2,3}_b([0,\infty)\times\mathbb{R}^{2d})$ be such that $v \ge u$ and $v(t,x,y) = u(t,x,y)$. From (8.30), we have
\[
v(t,x,y) = \tilde{\mathbb{E}}[u(t-\delta, x+B_\delta, y+b_\delta)] \le \tilde{\mathbb{E}}[v(t-\delta, x+B_\delta, y+b_\delta)].
\]
Therefore, by Taylor's expansion,
\begin{align*}
0 &\le \tilde{\mathbb{E}}[v(t-\delta, x+B_\delta, y+b_\delta) - v(t,x,y)]\\
&= \tilde{\mathbb{E}}[v(t-\delta, x+B_\delta, y+b_\delta) - v(t, x+B_\delta, y+b_\delta) + (v(t, x+B_\delta, y+b_\delta) - v(t,x,y))]\\
&= \tilde{\mathbb{E}}\big[-\partial_t v(t,x,y)\delta + \langle D_y v(t,x,y), b_\delta\rangle + \langle D_x v(t,x,y), B_\delta\rangle + \tfrac12\langle D^2_{xx} v(t,x,y)B_\delta, B_\delta\rangle + I_\delta\big]\\
&\le -\partial_t v(t,x,y)\delta + \tilde{\mathbb{E}}\big[\langle D_y v(t,x,y), b_\delta\rangle + \tfrac12\langle D^2_{xx} v(t,x,y)B_\delta, B_\delta\rangle\big] + \hat{\mathbb{E}}[I_\delta],
\end{align*}
where
\begin{align*}
I_\delta = {} & \int_0^1 [-\partial_t v(t-\beta\delta, x+B_\delta, y+b_\delta) + \partial_t v(t,x,y)]\delta\, d\beta\\
& + \int_0^1 \langle D_y v(t, x+B_\delta, y+\beta b_\delta) - D_y v(t,x,y),\, b_\delta\rangle\, d\beta\\
& + \int_0^1 \langle D_x v(t, x+\beta B_\delta, y+b_\delta) - D_x v(t,x,y),\, B_\delta\rangle\, d\beta\\
& + \int_0^1\!\!\int_0^1 \langle (D^2_{xx} v(t, x+\alpha\beta B_\delta, y+b_\delta) - D^2_{xx} v(t,x,y))B_\delta,\, B_\delta\rangle\,\alpha\, d\beta\, d\alpha.
\end{align*}
With the assumption (iii) we can check that $\lim_{\delta\downarrow0}\hat{\mathbb{E}}[|I_\delta|]\delta^{-1} = 0$, from which we get $\partial_t v(t,x,y) - \tilde{G}(D_y v(t,x,y), D^2_{xx} v(t,x,y)) \le 0$; hence $u$ is a viscosity subsolution of (8.28). We can analogously prove that $u$ is a viscosity supersolution. Thus $u$ is a viscosity solution. $\square$
9 Construction of $\tilde{G}$-Brownian Motions under Nonlinear Expectation

Let $G(\cdot) : \mathbb{R}^d\times\mathbb{S}(d) \to \mathbb{R}$ be a given sublinear function monotonic in $A \in \mathbb{S}(d)$ and let $\tilde{G}(\cdot) : \mathbb{R}^d\times\mathbb{S}(d) \to \mathbb{R}$ be a given function dominated by $G$ in the sense of (8.29). The construction of an $\mathbb{R}^{2d}$-dimensional $\tilde{G}$-Brownian motion $(B_t, b_t)_{t\ge0}$ under a nonlinear expectation $\tilde{\mathbb{E}}$, dominated by a sublinear expectation $\hat{\mathbb{E}}$, is based on an approach similar to the one introduced in Section 2. In fact we will see that, by our construction, $(B_t, b_t)_{t\ge0}$ is also a $G$-Brownian motion under the sublinear expectation $\hat{\mathbb{E}}$.
We denote by $\Omega = C^{2d}_0(\mathbb{R}^+)$ the space of all $\mathbb{R}^{2d}$-valued continuous paths $(\omega_t)_{t\in\mathbb{R}^+}$. For each fixed $T \in [0,\infty)$, we set $\Omega_T := \{\omega_{\cdot\wedge T} : \omega \in \Omega\}$. We will consider the canonical process $(B_t, b_t)(\omega) = \omega_t$, $t \in [0,\infty)$, for $\omega \in \Omega$. We also follow Section 2 to introduce the spaces of random variables $L_{ip}(\Omega_T)$ and $L_{ip}(\Omega)$ so as to define $\hat{\mathbb{E}}$ and $\tilde{\mathbb{E}}$ on $(\Omega, L_{ip}(\Omega))$.
To this purpose we first construct a sequence of $d$-dimensional random vectors $(X_i, \eta_i)_{i=1}^{\infty}$ on a sublinear expectation space $(\tilde{\Omega}, \mathcal{H}, \hat{\mathbb{E}})$ such that $(X_i, \eta_i)$ is $G$-distributed and $(X_{i+1}, \eta_{i+1})$ is independent from $((X_1, \eta_1), \ldots, (X_i, \eta_i))$ for each $i = 1, 2, \ldots$. By the definition of $G$-distribution, the function
\[
u(t,x,y) := \hat{\mathbb{E}}\big[\varphi(x + \sqrt{t}X_1,\, y + t\eta_1)\big], \quad t \ge 0,\ x, y \in \mathbb{R}^d,
\]
is the viscosity solution of the following parabolic PDE, which is the same as equation (1.6) in Chap. II:
\[
\partial_t u - G(D_y u, D^2_{xx} u) = 0, \qquad u|_{t=0} = \varphi \in C_{\mathrm{Lip}}(\mathbb{R}^{2d}).
\]
We also consider the PDE (for the existence, uniqueness, comparison and domination properties, see Theorem 2.6 in Appendix C)
\[
\partial_t \tilde{u} - \tilde{G}(D_y \tilde{u}, D^2_{xx} \tilde{u}) = 0, \qquad \tilde{u}|_{t=0} = \varphi \in C_{\mathrm{Lip}}(\mathbb{R}^{2d}),
\]
and denote by $\tilde{P}_t[\varphi](x,y) = \tilde{u}(t,x,y)$. Since $\tilde{G}$ is dominated by $G$, it follows from the domination theorem of viscosity solutions, i.e., Theorem 3.5 in Appendix C, that, for each $\varphi, \psi \in C_{b.\mathrm{Lip}}(\mathbb{R}^{2d})$,
\[
\tilde{P}_t[\varphi](x,y) - \tilde{P}_t[\psi](x,y) \le \hat{\mathbb{E}}\big[(\varphi - \psi)(x + \sqrt{t}X_1,\, y + t\eta_1)\big].
\]
We now introduce a sublinear expectation $\hat{\mathbb{E}}$ and a nonlinear expectation $\tilde{\mathbb{E}}$ defined on $L_{ip}(\Omega)$ via the following procedure: for each $X \in L_{ip}(\Omega)$ with
\[
X = \varphi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}, b_{t_n} - b_{t_{n-1}})
\]
for $\varphi \in C_{l.\mathrm{Lip}}(\mathbb{R}^{2d\times n})$ and $0 = t_0 < t_1 < \cdots < t_n < \infty$, we set
\[
\hat{\mathbb{E}}[\varphi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}, b_{t_n} - b_{t_{n-1}})]
:= \hat{\mathbb{E}}\big[\varphi(\sqrt{t_1 - t_0}\,X_1, (t_1 - t_0)\eta_1, \ldots, \sqrt{t_n - t_{n-1}}\,X_n, (t_n - t_{n-1})\eta_n)\big]
\]
and
\[
\tilde{\mathbb{E}}[\varphi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}, b_{t_n} - b_{t_{n-1}})] := \varphi_n(0, 0),
\]
where $\varphi_n \in C_{b.\mathrm{Lip}}(\mathbb{R}^{2d})$ is defined iteratively through
\begin{align*}
\varphi_1(x_1, y_1, \ldots, x_{n-1}, y_{n-1}) &= \tilde{P}_{t_n - t_{n-1}}[\varphi(x_1, y_1, \ldots, x_{n-1}, y_{n-1}, \cdot)](0, 0),\\
&\;\;\vdots\\
\varphi_{n-1}(x_1, y_1) &= \tilde{P}_{t_2 - t_1}[\varphi_{n-2}(x_1, y_1, \cdot)](0, 0),\\
\varphi_n(x_1, y_1) &= \tilde{P}_{t_1 - t_0}[\varphi_{n-1}(\cdot)](x_1, y_1).
\end{align*}
The related conditional expectation of $X = \varphi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}, b_{t_n} - b_{t_{n-1}})$ under $\Omega_{t_j}$ is defined by
\begin{align}
\hat{\mathbb{E}}[X|\Omega_{t_j}] &= \hat{\mathbb{E}}[\varphi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}, b_{t_n} - b_{t_{n-1}})|\Omega_{t_j}] \tag{9.31}\\
&:= \psi(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_j} - B_{t_{j-1}}, b_{t_j} - b_{t_{j-1}}),\notag
\end{align}
where
\[
\psi(x_1, y_1, \ldots, x_j, y_j) = \hat{\mathbb{E}}\big[\varphi(x_1, y_1, \ldots, x_j, y_j, \sqrt{t_{j+1} - t_j}\,X_{j+1}, (t_{j+1} - t_j)\eta_{j+1}, \ldots, \sqrt{t_n - t_{n-1}}\,X_n, (t_n - t_{n-1})\eta_n)\big].
\]
Similarly,
\[
\tilde{\mathbb{E}}[X|\Omega_{t_j}] = \varphi_{n-j}(B_{t_1} - B_{t_0}, b_{t_1} - b_{t_0}, \ldots, B_{t_j} - B_{t_{j-1}}, b_{t_j} - b_{t_{j-1}}).
\]
It is easy to check that $\hat{\mathbb{E}}[\cdot]$ (resp. $\tilde{\mathbb{E}}[\cdot]$) consistently defines a sublinear (resp. nonlinear) expectation on $(\Omega, L_{ip}(\Omega))$. Moreover $(B_t, b_t)_{t\ge0}$ is a $G$-Brownian motion under $\hat{\mathbb{E}}$ and a $\tilde{G}$-Brownian motion under $\tilde{\mathbb{E}}$.
Proposition 9.1 We list the properties of $\tilde{\mathbb{E}}[\cdot|\Omega_t]$ that hold for each $X, Y \in L_{ip}(\Omega)$:
(i) If $X \ge Y$, then $\tilde{\mathbb{E}}[X|\Omega_t] \ge \tilde{\mathbb{E}}[Y|\Omega_t]$.
(ii) $\tilde{\mathbb{E}}[X + \eta|\Omega_t] = \tilde{\mathbb{E}}[X|\Omega_t] + \eta$, for each $t \ge 0$ and $\eta \in L_{ip}(\Omega_t)$.
(iii) $\tilde{\mathbb{E}}[X|\Omega_t] - \tilde{\mathbb{E}}[Y|\Omega_t] \le \hat{\mathbb{E}}[X - Y|\Omega_t]$.
(iv) $\tilde{\mathbb{E}}[\tilde{\mathbb{E}}[X|\Omega_t]|\Omega_s] = \tilde{\mathbb{E}}[X|\Omega_{t\wedge s}]$; in particular, $\tilde{\mathbb{E}}[\tilde{\mathbb{E}}[X|\Omega_t]] = \tilde{\mathbb{E}}[X]$.
(v) For each $X \in L_{ip}(\Omega^t)$, $\tilde{\mathbb{E}}[X|\Omega_t] = \tilde{\mathbb{E}}[X]$, where $L_{ip}(\Omega^t)$ is the linear space of random variables of the form
\[
\varphi(W_{t_2} - W_{t_1}, W_{t_3} - W_{t_2}, \ldots, W_{t_{n+1}} - W_{t_n}),\quad
n = 1, 2, \ldots,\ \varphi \in C_{l.\mathrm{Lip}}(\mathbb{R}^{2d\times n}),\ t_1, \ldots, t_n, t_{n+1} \in [t, \infty),
\]
where $W_s := (B_s, b_s)$.
Since $\hat{\mathbb{E}}$ can be considered as a special nonlinear expectation dominated by itself, $\hat{\mathbb{E}}[\cdot|\Omega_t]$ also satisfies the above properties (i)--(v). Moreover:

Proposition 9.2 The conditional sublinear expectation $\hat{\mathbb{E}}[\cdot|\Omega_t]$ satisfies (i)--(v). Moreover $\hat{\mathbb{E}}[\cdot|\Omega_t]$ itself is sublinear, i.e.,
(vi) $\hat{\mathbb{E}}[X|\Omega_t] - \hat{\mathbb{E}}[Y|\Omega_t] \le \hat{\mathbb{E}}[X - Y|\Omega_t]$.
(vii) $\hat{\mathbb{E}}[\eta X|\Omega_t] = \eta^+\hat{\mathbb{E}}[X|\Omega_t] + \eta^-\hat{\mathbb{E}}[-X|\Omega_t]$ for each $\eta \in L_{ip}(\Omega_t)$.
We now consider the completion of the sublinear expectation space $(\Omega, L_{ip}(\Omega), \hat{\mathbb{E}})$. We denote by $L^p_G(\Omega)$, $p \ge 1$, the completion of $L_{ip}(\Omega)$ under the norm $\|X\|_p := (\hat{\mathbb{E}}[|X|^p])^{1/p}$. Similarly, we can define $L^p_G(\Omega_T)$, $L^p_G(\Omega^t_T)$ and $L^p_G(\Omega^t)$. It is clear that for each $0 \le t \le T < \infty$, $L^p_G(\Omega_t) \subseteq L^p_G(\Omega_T) \subseteq L^p_G(\Omega)$.

According to Sec. 4 in Chap. I, $\hat{\mathbb{E}}[\cdot]$ can be continuously extended to $(\Omega, L^1_G(\Omega))$. Moreover, since $\tilde{\mathbb{E}}$ is dominated by $\hat{\mathbb{E}}$, by Definition 4.4 in Chap. I, $(\Omega, L^1_G(\Omega), \hat{\mathbb{E}})$ forms a sublinear expectation space and $(\Omega, L^1_G(\Omega), \tilde{\mathbb{E}})$ forms a nonlinear expectation space. Since
\[
\tilde{\mathbb{E}}[X|\Omega_t] - \tilde{\mathbb{E}}[Y|\Omega_t] \le \hat{\mathbb{E}}[X - Y|\Omega_t] \le \hat{\mathbb{E}}[|X - Y|\,|\Omega_t],
\]
then
\[
|\tilde{\mathbb{E}}[X|\Omega_t] - \tilde{\mathbb{E}}[Y|\Omega_t]| \le \hat{\mathbb{E}}[|X - Y|\,|\Omega_t].
\]
We thus obtain
\[
\big\|\tilde{\mathbb{E}}[X|\Omega_t] - \tilde{\mathbb{E}}[Y|\Omega_t]\big\| \le \|X - Y\|.
\]
It follows that $\tilde{\mathbb{E}}[\cdot|\Omega_t]$ can also be extended as a continuous mapping
\[
\tilde{\mathbb{E}}[\cdot|\Omega_t] : L^1_G(\Omega_T) \to L^1_G(\Omega_t).
\]
If the above $T$ is not fixed, then we obtain
\[
\tilde{\mathbb{E}}[\cdot|\Omega_t] : L^1_G(\Omega) \to L^1_G(\Omega_t).
\]

Remark 9.3 The above propositions also hold for $X, Y \in L^1_G(\Omega)$. But in (vii), $\eta \in L^1_G(\Omega_t)$ should be bounded, since $X, Y \in L^1_G(\Omega)$ does not imply $X\cdot Y \in L^1_G(\Omega)$.

In particular, we have the following independence:
\[
\tilde{\mathbb{E}}[X|\Omega_t] = \tilde{\mathbb{E}}[X], \quad \forall X \in L^1_G(\Omega^t).
\]
We give the following definition, similar to the classical one:

Definition 9.4 An $n$-dimensional random vector $Y \in (L^1_G(\Omega))^n$ is said to be independent from $\Omega_t$ for some given $t$ if for each $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^n)$ we have
\[
\tilde{\mathbb{E}}[\varphi(Y)|\Omega_t] = \tilde{\mathbb{E}}[\varphi(Y)].
\]
Notes and Comments
Bachelier (1900) [6] proposed Brownian motion as a model for the fluctuations of the stock market, Einstein (1905) [42] used Brownian motion to give experimental confirmation of the atomic theory, and Wiener (1923) [121] gave a mathematically rigorous construction of Brownian motion. Here we follow Kolmogorov's idea (1956) [74] to construct G-Brownian motion by introducing an infinite dimensional function space and the corresponding family of infinite dimensional sublinear distributions, instead of the linear distributions in [74].
The notions of G-Brownian motion and the related stochastic calculus of It\^o's type were firstly introduced by Peng (2006) [100] for the 1-dimensional case and then in (2008) [104] for the multi-dimensional situation. It is very interesting that Denis and Martini (2006) [38] studied the super-pricing of contingent claims under model uncertainty of volatility. They introduced a norm on the space of continuous paths $\Omega = C([0,T])$ which corresponds to our $L^2_G$-norm and developed a stochastic integral. There is no notion of nonlinear expectation or of the related nonlinear distribution, such as G-expectation, conditional G-expectation, the related G-normal distribution and the notion of independence, in their paper. But on the other hand, powerful tools in capacity theory enable them to obtain pathwise results for random variables and stochastic processes through the language of "quasi-surely" (see e.g. Dellacherie (1972) [32], Dellacherie and Meyer (1978 and 1982) [33], Feyel and de La Pradelle (1989) [48]) in place of "almost surely" in classical probability theory.

The main motivations of G-Brownian motion were the pricing and risk measures under volatility uncertainty in financial markets (see Avellaneda, L\'evy and Paras (1995) [5] and Lyons (1995) [82]). It is well known that under volatility uncertainty the corresponding uncertain probabilities are mutually singular. This causes a serious problem for the related path analysis when one treats, e.g., path-dependent derivatives, under a classical probability space. Our G-Brownian motion provides a powerful tool for such types of problems.

Our new It\^o calculus for G-Brownian motion is of course inspired by It\^o's groundbreaking work since 1942 [65] on stochastic integration, stochastic differential equations and stochastic calculus through interesting books cited in Chapter IV. It\^o's formula given by Theorem 6.5 is from Peng [100], [104]. Gao (2009) [54] proved a more general It\^o formula for G-Brownian motion. An interesting problem is: can we get an It\^o formula in which the conditions correspond to the classical ones? Recently Li and Peng have solved this problem in [79].

Using the nonlinear Markovian semigroup known as Nisio's semigroup (see Nisio (1976) [86]), Peng (2005) [98] studied processes with Markovian properties under a nonlinear expectation.
Chapter IV

G-martingales and Jensen's Inequality

In this chapter, we introduce the notion of G-martingales and the related Jensen inequality for a new type of G-convex functions. Essentially different from the classical situation, the fact that $M$ is a G-martingale does not imply that $-M$ is a G-martingale.

1 The Notion of G-martingales

We now give the notion of G-martingales.

Definition 1.1 A process $(M_t)_{t\ge0}$ is called a G-martingale (respectively, G-supermartingale, G-submartingale) if for each $t \in [0,\infty)$, $M_t \in L^1_G(\Omega_t)$ and for each $s \in [0,t]$, we have
\[
\hat{\mathbb{E}}[M_t|\Omega_s] = M_s \quad (\text{respectively, } \le M_s,\ \ge M_s).
\]
Example 1.2 For each fixed $X \in L^1_G(\Omega)$, it is clear that $(\hat{\mathbb{E}}[X|\Omega_t])_{t\ge0}$ is a G-martingale.

Example 1.3 For each fixed $a \in \mathbb{R}^d$, it is easy to check that $(B^a_t)_{t\ge0}$ and $(-B^a_t)_{t\ge0}$ are G-martingales. The process $(\langle B^a\rangle_t - \sigma^2_{aa^T}t)_{t\ge0}$ is a G-martingale since
\begin{align*}
\hat{\mathbb{E}}[\langle B^a\rangle_t - \sigma^2_{aa^T}t\,|\Omega_s] &= \hat{\mathbb{E}}[\langle B^a\rangle_s - \sigma^2_{aa^T}t + (\langle B^a\rangle_t - \langle B^a\rangle_s)|\Omega_s]\\
&= \langle B^a\rangle_s - \sigma^2_{aa^T}t + \hat{\mathbb{E}}[\langle B^a\rangle_t - \langle B^a\rangle_s]\\
&= \langle B^a\rangle_s - \sigma^2_{aa^T}s.
\end{align*}
Similarly we can show that $(-(\langle B^a\rangle_t - \sigma^2_{aa^T}t))_{t\ge0}$ is a G-submartingale. The process $((B^a_t)^2)_{t\ge0}$ is a G-submartingale since
\begin{align*}
\hat{\mathbb{E}}[(B^a_t)^2|\Omega_s] &= \hat{\mathbb{E}}[(B^a_s)^2 + (B^a_t - B^a_s)^2 + 2B^a_s(B^a_t - B^a_s)|\Omega_s]\\
&= (B^a_s)^2 + \hat{\mathbb{E}}[(B^a_t - B^a_s)^2|\Omega_s]\\
&= (B^a_s)^2 + \sigma^2_{aa^T}(t-s) \ge (B^a_s)^2.
\end{align*}
Similarly we can prove that $((B^a_t)^2 - \sigma^2_{aa^T}t)_{t\ge0}$ and $((B^a_t)^2 - \langle B^a\rangle_t)_{t\ge0}$ are G-martingales.
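The role of $\sigma^2_{aa^T}$ above can be seen by viewing $\hat{\mathbb{E}}$ as an upper expectation over volatility scenarios. The sketch below is our own illustration under a simplifying assumption (the supremum is taken only over constant-volatility Gaussian scenarios, whereas the full representation involves a much larger family): it computes $\sup_\sigma E_\sigma[\varphi(B_1)]$ for $\varphi(x) = x^2$ and $\varphi(x) = -x^2$, showing the asymmetry $\hat{\mathbb{E}}[X] \ne -\hat{\mathbb{E}}[-X]$ that makes $((B_t)^2 - \overline{\sigma}^2 t)$ a G-martingale while its negative fails to be one:

```python
import numpy as np

s2_lo, s2_hi = 0.5, 2.0
nodes, weights = np.polynomial.hermite.hermgauss(40)

def E_sigma(f, s2):
    # classical expectation E[f(Z)], Z ~ N(0, s2), by Gauss-Hermite quadrature
    z = np.sqrt(2.0 * s2) * nodes
    return np.sum(weights * f(z)) / np.sqrt(np.pi)

sigmas2 = np.linspace(s2_lo, s2_hi, 201)
Ehat_sq = max(E_sigma(lambda x: x**2, s2) for s2 in sigmas2)       # -> s2_hi
Ehat_neg_sq = max(E_sigma(lambda x: -x**2, s2) for s2 in sigmas2)  # -> -s2_lo
print(Ehat_sq, Ehat_neg_sq)
```

The upper expectation of $(B_1)^2$ is $\overline{\sigma}^2 = 2.0$, while that of $-(B_1)^2$ is $-\underline{\sigma}^2 = -0.5$: the two variance bounds play asymmetric roles.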
In general, we have the following important property.

Proposition 1.4 Let $M_0 \in \mathbb{R}$, $\varphi = (\varphi^j)_{j=1}^d \in M^2_G(0,T;\mathbb{R}^d)$ and $\eta = (\eta^{ij})_{i,j=1}^d \in M^1_G(0,T;\mathbb{S}(d))$ be given and let
\[
M_t = M_0 + \int_0^t \varphi^j_u\, dB^j_u + \int_0^t \eta^{ij}_u\, d\langle B^i, B^j\rangle_u - \int_0^t 2G(\eta_u)\, du \quad \text{for } t \in [0,T].
\]
Then $M$ is a G-martingale. Here we still use the Einstein convention, i.e., the above repeated indices $i$ and $j$ imply summation.

Proof. Since $\hat{\mathbb{E}}[\int_s^t \varphi^j_u\, dB^j_u|\Omega_s] = \hat{\mathbb{E}}[-\int_s^t \varphi^j_u\, dB^j_u|\Omega_s] = 0$, we only need to prove that
\[
\bar{M}_t = \int_0^t \eta^{ij}_u\, d\langle B^i, B^j\rangle_u - \int_0^t 2G(\eta_u)\, du, \quad t \in [0,T],
\]
is a G-martingale. It suffices to consider the case where $\eta \in M^{1,0}_G(0,T;\mathbb{S}(d))$, i.e.,
\[
\eta_t = \sum_{k=0}^{N-1} \eta_{t_k} I_{[t_k, t_{k+1})}(t).
\]
We have, for $s \in [t_{N-1}, t_N]$,
\begin{align*}
\hat{\mathbb{E}}[\bar{M}_t|\Omega_s] &= \bar{M}_s + \hat{\mathbb{E}}[(\eta_{t_{N-1}}, \langle B\rangle_t - \langle B\rangle_s) - 2G(\eta_{t_{N-1}})(t-s)|\Omega_s]\\
&= \bar{M}_s + \hat{\mathbb{E}}[(A, \langle B\rangle_t - \langle B\rangle_s)]_{A=\eta_{t_{N-1}}} - 2G(\eta_{t_{N-1}})(t-s)\\
&= \bar{M}_s.
\end{align*}
Then we can repeat this procedure backward to prove the result for $s \in [0, t_{N-1}]$. $\square$
Corollary 1.5 Let $\eta \in M^1_G(0,T)$. Then for each fixed $a \in \mathbb{R}^d$, we have
\[
\sigma^2_{-aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\, dt\Big] \le \hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\, d\langle B^a\rangle_t\Big] \le \sigma^2_{aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\, dt\Big]. \tag{1.1}
\]
Proof. For each $\eta \in M^1_G(0,T)$, by the above proposition, we have
\[
\hat{\mathbb{E}}\Big[\int_0^T \eta_t\, d\langle B^a\rangle_t - \int_0^T 2G^a(\eta_t)\, dt\Big] = 0,
\]
where $G^a(\alpha) = \frac12(\sigma^2_{aa^T}\alpha^+ - \sigma^2_{-aa^T}\alpha^-)$. Applying this with $\eta_t$ replaced by $|\eta_t|$ and by $-|\eta_t|$ respectively, we obtain
\[
\hat{\mathbb{E}}\Big[\int_0^T |\eta_t|\, d\langle B^a\rangle_t - \sigma^2_{aa^T}\int_0^T |\eta_t|\, dt\Big] = 0,\qquad
\hat{\mathbb{E}}\Big[-\int_0^T |\eta_t|\, d\langle B^a\rangle_t + \sigma^2_{-aa^T}\int_0^T |\eta_t|\, dt\Big] = 0.
\]
From the sub-additivity of the G-expectation, we can easily get the result. $\square$
Remark 1.6 It is worth mentioning that for a G-martingale $M$, in general, $-M$ is not a G-martingale. But in Proposition 1.4, when $\eta \equiv 0$, $-M$ is still a G-martingale.

Exercise 1.7 (a) Let $(M_t)_{t\ge0}$ be a G-supermartingale. Show that $(-M_t)_{t\ge0}$ is a G-submartingale.
(b) Find a G-submartingale $(M_t)_{t\ge0}$ such that $(-M_t)_{t\ge0}$ is not a G-supermartingale.

Exercise 1.8 (a) Let $(M_t)_{t\ge0}$ and $(N_t)_{t\ge0}$ be two G-supermartingales. Prove that $(M_t + N_t)_{t\ge0}$ is a G-supermartingale.
(b) Let $(M_t)_{t\ge0}$ and $(-M_t)_{t\ge0}$ be two G-martingales. For each G-submartingale (respectively, G-supermartingale) $(N_t)_{t\ge0}$, prove that $(M_t + N_t)_{t\ge0}$ is a G-submartingale (respectively, G-supermartingale).
2 On the G-martingale Representation Theorem

How to formulate a G-martingale representation theorem is still a largely open problem. Xu and Zhang (2009) [122] obtained a martingale representation for a special symmetric G-martingale process. A more general situation has been proved by Soner, Touzi and Zhang (preprint, private communications). Here we present the formulation of this G-martingale representation theorem under a very strong assumption.

In this section, we consider a generator $G : \mathbb{S}(d) \to \mathbb{R}$ satisfying the uniformly elliptic condition, i.e., there exists a $\beta > 0$ such that, for each $A, \bar{A} \in \mathbb{S}(d)$ with $A \ge \bar{A}$,
\[
G(A) - G(\bar{A}) \ge \beta\,\mathrm{tr}[A - \bar{A}].
\]
For each $\varphi = (\varphi^j)_{j=1}^d \in M^2_G(0,T;\mathbb{R}^d)$ and $\eta = (\eta^{ij})_{i,j=1}^d \in M^1_G(0,T;\mathbb{S}(d))$, we use the following notations:
\[
\int_0^T \langle \varphi_t, dB_t\rangle := \sum_{j=1}^d \int_0^T \varphi^j_t\, dB^j_t;\qquad
\int_0^T (\eta_t, d\langle B\rangle_t) := \sum_{i,j=1}^d \int_0^T \eta^{ij}_t\, d\langle B^i, B^j\rangle_t.
\]
We first consider the representation of $\xi = \varphi(B_T - B_{t_1})$ for $0 \le t_1 \le T < \infty$.
Lemma 2.1 Let $\xi = \varphi(B_T - B_{t_1})$, $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^d)$. Then we have the following representation:
\[
\xi = \hat{\mathbb{E}}[\xi] + \int_{t_1}^T \langle \beta_t, dB_t\rangle + \int_{t_1}^T (\eta_t, d\langle B\rangle_t) - \int_{t_1}^T 2G(\eta_t)\, dt.
\]
Proof. We know that $u(t,x) = \hat{\mathbb{E}}[\varphi(x + B_T - B_t)]$ is the solution of the following PDE:
\[
\partial_t u + G(D^2 u) = 0,\ (t,x) \in [0,T]\times\mathbb{R}^d, \qquad u(T,x) = \varphi(x).
\]
For each $\varepsilon > 0$, by the interior regularity of $u$ (see Appendix C), we have
\[
\|u\|_{C^{1+\alpha/2,\,2+\alpha}([0,T-\varepsilon]\times\mathbb{R}^d)} < \infty \quad \text{for some } \alpha \in (0,1).
\]
Applying G-It\^o's formula to $u(t, B_t - B_{t_1})$ on $[t_1, T-\varepsilon]$, since $Du(t,x)$ is uniformly bounded, letting $\varepsilon \to 0$, we have
\begin{align*}
\xi &= \hat{\mathbb{E}}[\xi] + \int_{t_1}^T \partial_t u(t, B_t - B_{t_1})\, dt + \int_{t_1}^T \langle Du(t, B_t - B_{t_1}), dB_t\rangle + \frac12\int_{t_1}^T (D^2 u(t, B_t - B_{t_1}), d\langle B\rangle_t)\\
&= \hat{\mathbb{E}}[\xi] + \int_{t_1}^T \langle Du(t, B_t - B_{t_1}), dB_t\rangle + \frac12\int_{t_1}^T (D^2 u(t, B_t - B_{t_1}), d\langle B\rangle_t) - \int_{t_1}^T G(D^2 u(t, B_t - B_{t_1}))\, dt. \qquad\square
\end{align*}

Theorem 2.2 Let $\xi = \varphi(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_N} - B_{t_{N-1}})$, $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^{d\times N})$, $0 \le t_1 < \cdots < t_N = T < \infty$. Then we have the representation
\[
\xi = \hat{\mathbb{E}}[\xi] + \int_0^T \langle \beta_t, dB_t\rangle + \int_0^T (\eta_t, d\langle B\rangle_t) - \int_0^T 2G(\eta_t)\, dt.
\]
Proof. We only need to prove the case $\xi = \varphi(B_{t_1}, B_T - B_{t_1})$. We set, for each $(x,y) \in \mathbb{R}^{2d}$,
\[
u(t,x,y) = \hat{\mathbb{E}}[\varphi(x, y + B_T - B_t)];\qquad
\varphi_1(x) = \hat{\mathbb{E}}[\varphi(x, B_T - B_{t_1})].
\]
For each $x \in \mathbb{R}^d$, we denote $\bar{\xi} = \varphi(x, B_T - B_{t_1})$. By Lemma 2.1, we have
\[
\bar{\xi} = \varphi_1(x) + \int_{t_1}^T \langle D_y u(t, x, B_t - B_{t_1}), dB_t\rangle + \frac12\int_{t_1}^T (D^2_y u(t, x, B_t - B_{t_1}), d\langle B\rangle_t) - \int_{t_1}^T G(D^2_y u(t, x, B_t - B_{t_1}))\, dt.
\]
By the definitions of the integrals with respect to $dt$, $dB_t$ and $d\langle B\rangle_t$, we can replace $x$ by $B_{t_1}$ and get
\[
\xi = \varphi_1(B_{t_1}) + \int_{t_1}^T \langle D_y u(t, B_{t_1}, B_t - B_{t_1}), dB_t\rangle + \frac12\int_{t_1}^T (D^2_y u(t, B_{t_1}, B_t - B_{t_1}), d\langle B\rangle_t) - \int_{t_1}^T G(D^2_y u(t, B_{t_1}, B_t - B_{t_1}))\, dt.
\]
Applying Lemma 2.1 to $\varphi_1(B_{t_1})$, we complete the proof. $\square$
We then immediately have the following G-martingale representation theorem.

Theorem 2.3 Let $(M_t)_{t\in[0,T]}$ be a G-martingale with $M_T = \varphi(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_N} - B_{t_{N-1}})$, $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^{d\times N})$, $0 \le t_1 < t_2 < \cdots < t_N = T < \infty$. Then
\[
M_t = \hat{\mathbb{E}}[M_T] + \int_0^t \langle \beta_s, dB_s\rangle + \int_0^t (\eta_s, d\langle B\rangle_s) - \int_0^t 2G(\eta_s)\, ds, \quad t \le T.
\]
Proof. For $M_T$, by Theorem 2.2, we have
\[
M_T = \hat{\mathbb{E}}[M_T] + \int_0^T \langle \beta_s, dB_s\rangle + \int_0^T (\eta_s, d\langle B\rangle_s) - \int_0^T 2G(\eta_s)\, ds.
\]
Taking the conditional G-expectation on both sides of the above equality and using Proposition 1.4, we obtain the result. $\square$
3 G-convexity and Jensen's Inequality for G-expectations

A very interesting question is whether the well-known Jensen inequality still holds for G-expectations. First, we give a new notion of convexity.

Definition 3.1 A continuous function $h : \mathbb{R} \to \mathbb{R}$ is called G-convex if for each bounded $\xi \in L^1_G(\Omega)$, the following Jensen inequality holds:
\[
\hat{\mathbb{E}}[h(\xi)] \ge h(\hat{\mathbb{E}}[\xi]).
\]
In this section, we mainly consider $C^2$-functions.

Proposition 3.2 Let $h \in C^2(\mathbb{R})$. Then the following statements are equivalent:
(i) The function $h$ is G-convex.
(ii) For each bounded $\xi \in L^1_G(\Omega)$, the following Jensen inequality holds:
\[
\hat{\mathbb{E}}[h(\xi)|\Omega_t] \ge h(\hat{\mathbb{E}}[\xi|\Omega_t]) \quad \text{for } t \ge 0.
\]
(iii) For each $\varphi \in C^2_b(\mathbb{R}^d)$, the following Jensen inequality holds:
\[
\hat{\mathbb{E}}[h(\varphi(B_t))] \ge h(\hat{\mathbb{E}}[\varphi(B_t)]) \quad \text{for } t \ge 0.
\]
(iv) The following condition holds for each $(y, z, A) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{S}(d)$:
\[
G(h'(y)A + h''(y)zz^T) - h'(y)G(A) \ge 0. \tag{3.2}
\]
To prove the above proposition, we need the following lemmas.

Lemma 3.3 Let $\Phi : \mathbb{R}^d \to \mathbb{S}(d)$ be continuous with polynomial growth. Then
\[
\lim_{\delta\downarrow0} \hat{\mathbb{E}}\Big[\int_t^{t+\delta} (\Phi(B_s), d\langle B\rangle_s)\Big]\delta^{-1} = 2\hat{\mathbb{E}}[G(\Phi(B_t))]. \tag{3.3}
\]
Proof. If $\Phi$ is a Lipschitz function, it is easy to prove that
\[
\hat{\mathbb{E}}\Big[\Big|\int_t^{t+\delta} (\Phi(B_s) - \Phi(B_t), d\langle B\rangle_s)\Big|\Big] \le C_1\delta^{3/2},
\]
where $C_1$ is a constant independent of $\delta$. Thus
\[
\lim_{\delta\downarrow0} \hat{\mathbb{E}}\Big[\int_t^{t+\delta}(\Phi(B_s), d\langle B\rangle_s)\Big]\delta^{-1}
= \lim_{\delta\downarrow0} \hat{\mathbb{E}}\big[(\Phi(B_t), \langle B\rangle_{t+\delta} - \langle B\rangle_t)\big]\delta^{-1} = 2\hat{\mathbb{E}}[G(\Phi(B_t))].
\]
Otherwise, we can choose a sequence of Lipschitz functions $\Phi_N : \mathbb{R}^d \to \mathbb{S}(d)$ such that
\[
|\Phi_N(x) - \Phi(x)| \le \frac{C_2}{N}(1 + |x|^k),
\]
where $C_2$ and $k$ are positive constants independent of $N$. It is easy to show that
\[
\hat{\mathbb{E}}\Big[\Big|\int_t^{t+\delta}(\Phi(B_s) - \Phi_N(B_s), d\langle B\rangle_s)\Big|\Big] \le \frac{C\delta}{N}
\quad\text{and}\quad
\hat{\mathbb{E}}[|G(\Phi(B_t)) - G(\Phi_N(B_t))|] \le \frac{C}{N},
\]
where $C$ is a universal constant. Thus
\[
\Big|\hat{\mathbb{E}}\Big[\int_t^{t+\delta}(\Phi(B_s), d\langle B\rangle_s)\Big]\delta^{-1} - 2\hat{\mathbb{E}}[G(\Phi(B_t))]\Big|
\le \Big|\hat{\mathbb{E}}\Big[\int_t^{t+\delta}(\Phi_N(B_s), d\langle B\rangle_s)\Big]\delta^{-1} - 2\hat{\mathbb{E}}[G(\Phi_N(B_t))]\Big| + \frac{3C}{N}.
\]
Then we have
\[
\limsup_{\delta\downarrow0}\Big|\hat{\mathbb{E}}\Big[\int_t^{t+\delta}(\Phi(B_s), d\langle B\rangle_s)\Big]\delta^{-1} - 2\hat{\mathbb{E}}[G(\Phi(B_t))]\Big| \le \frac{3C}{N}.
\]
Since $N$ can be arbitrarily large, we complete the proof. $\square$
Lemma 3.4 Let $\varphi$ be a $C^2$-function on $\mathbb{R}^d$ such that $D^2\varphi$ satisfies a polynomial growth condition. Then we have
\[
\lim_{\delta\downarrow0}\big(\hat{\mathbb{E}}[\varphi(B_\delta)] - \varphi(0)\big)\delta^{-1} = G(D^2\varphi(0)). \tag{3.4}
\]
Proof. Applying G-It\^o's formula to $\varphi(B_\delta)$, we get
\[
\varphi(B_\delta) = \varphi(0) + \int_0^\delta \langle D\varphi(B_s), dB_s\rangle + \frac12\int_0^\delta (D^2\varphi(B_s), d\langle B\rangle_s).
\]
Thus we have
\[
\hat{\mathbb{E}}[\varphi(B_\delta)] - \varphi(0) = \frac12\hat{\mathbb{E}}\Big[\int_0^\delta (D^2\varphi(B_s), d\langle B\rangle_s)\Big].
\]
By Lemma 3.3, we obtain the result. $\square$
Lemma 3.5 Let $h \in C^2(\mathbb{R})$ satisfy (3.2). For each $\varphi \in C_{b.\mathrm{Lip}}(\mathbb{R}^d)$, let $u(t,x)$ be the solution of the G-heat equation:
\[
\partial_t u - G(D^2 u) = 0,\ (t,x) \in [0,\infty)\times\mathbb{R}^d, \qquad u(0,x) = \varphi(x). \tag{3.5}
\]
Then $\bar{u}(t,x) := h(u(t,x))$ is a viscosity subsolution of the G-heat equation (3.5) with initial condition $\bar{u}(0,x) = h(\varphi(x))$.

Proof. For each $\varepsilon > 0$, we denote by $u^\varepsilon$ the solution of the uniformly elliptic PDE
\[
\partial_t u^\varepsilon - G_\varepsilon(D^2 u^\varepsilon) = 0,\ (t,x) \in [0,\infty)\times\mathbb{R}^d, \qquad u^\varepsilon(0,x) = \varphi(x),
\]
where $G_\varepsilon(A) := G(A) + \varepsilon\,\mathrm{tr}[A]$, so that $u^\varepsilon \in C^{1,2}((0,\infty)\times\mathbb{R}^d)$. By simple calculation, we have
\[
\partial_t h(u^\varepsilon) = h'(u^\varepsilon)\partial_t u^\varepsilon = h'(u^\varepsilon)G_\varepsilon(D^2 u^\varepsilon)
\]
and
\[
\partial_t h(u^\varepsilon) - G_\varepsilon(D^2 h(u^\varepsilon)) = f_\varepsilon(t,x) := h'(u^\varepsilon)G(D^2 u^\varepsilon) - G(D^2 h(u^\varepsilon)) - \varepsilon h''(u^\varepsilon)|Du^\varepsilon|^2.
\]
Since $h$ satisfies (3.2), it follows that $f_\varepsilon \le -\varepsilon h''(u^\varepsilon)|Du^\varepsilon|^2$. We can also deduce that $|Du^\varepsilon|$, $h'(u^\varepsilon)$ and $h''(u^\varepsilon)$ are uniformly bounded and that $h(u^\varepsilon)$ uniformly converges to $h(u)$. Thus
\[
\partial_t h(u^\varepsilon) - G_\varepsilon(D^2 h(u^\varepsilon)) \le C\varepsilon,
\]
and letting $\varepsilon \downarrow 0$, the stability of viscosity solutions yields that $h(u)$ is a viscosity subsolution of (3.5). $\square$

Proof of Proposition 3.2. Clearly (ii)$\Rightarrow$(i), and (i)$\Rightarrow$(iii) is a special case. Once (iv) is established, Lemma 3.5, together with the iterated definition of the conditional expectation $\hat{\mathbb{E}}[\cdot|\Omega_t]$, yields, for each cylinder random variable $\xi \in L_{ip}(\Omega)$,
\[
\hat{\mathbb{E}}[h(\xi)|\Omega_t] \ge h(\hat{\mathbb{E}}[\xi|\Omega_t]), \quad t \ge 0,
\]
which is (ii). We then can extend this Jensen inequality, under the norm $\|\cdot\| = \hat{\mathbb{E}}[|\cdot|]$, to each bounded $\xi \in L^1_G(\Omega)$.
(iii)$\Rightarrow$(iv): for each $\varphi \in C^2_b(\mathbb{R}^d)$, we have $\hat{\mathbb{E}}[h(\varphi(B_t))] \ge h(\hat{\mathbb{E}}[\varphi(B_t)])$ for each $t \ge 0$. By Lemma 3.4, we know that
\[
\lim_{\delta\downarrow0}\big(\hat{\mathbb{E}}[\varphi(B_\delta)] - \varphi(0)\big)\delta^{-1} = G(D^2\varphi(0))
\]
and
\[
\lim_{\delta\downarrow0}\big(\hat{\mathbb{E}}[h(\varphi(B_\delta))] - h(\varphi(0))\big)\delta^{-1} = G(D^2 (h\circ\varphi)(0)).
\]
Thus we get
\[
G(D^2 (h\circ\varphi)(0)) \ge h'(\varphi(0))G(D^2\varphi(0)).
\]
For each $(y, z, A) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{S}(d)$, we can choose a $\varphi \in C^2_b(\mathbb{R}^d)$ such that $(\varphi(0), D\varphi(0), D^2\varphi(0)) = (y, z, A)$. Thus we obtain (iv).

(iv)$\Rightarrow$(iii): for each $\varphi \in C^2_b(\mathbb{R}^d)$, $u(t,x) = \hat{\mathbb{E}}[\varphi(x+B_t)]$ (respectively, $\bar{u}(t,x) = \hat{\mathbb{E}}[h(\varphi(x+B_t))]$) solves the G-heat equation (3.5). By Lemma 3.5, $h(u)$ is a viscosity subsolution of the G-heat equation (3.5). It follows from the maximum principle that $h(u(t,x)) \le \bar{u}(t,x)$. In particular, (iii) holds. $\square$
Remark 3.6 In fact, (i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii) still hold without the assumption $h \in C^2(\mathbb{R})$.

Proposition 3.7 Let $h$ be a G-convex function and let $X \in L^1_G(\Omega)$ be bounded. Then $Y_t = h(\hat{\mathbb{E}}[X|\Omega_t])$, $t \ge 0$, is a G-submartingale.

Proof. For each $s \le t$,
\[
\hat{\mathbb{E}}[Y_t|\Omega_s] = \hat{\mathbb{E}}\big[h(\hat{\mathbb{E}}[X|\Omega_t])\big|\Omega_s\big] \ge h\big(\hat{\mathbb{E}}[\hat{\mathbb{E}}[X|\Omega_t]|\Omega_s]\big) = h(\hat{\mathbb{E}}[X|\Omega_s]) = Y_s. \qquad\square
\]

Exercise 3.8 Suppose that $G$ satisfies the uniformly elliptic condition and $h \in C^2(\mathbb{R})$. Show that $h$ is G-convex if and only if $h$ is convex.
Notes and Comments
This chapter is mainly from Peng (2007) [102].

Peng (1997) [92] introduced a filtration-consistent (or time-consistent, or dynamic) nonlinear expectation, called g-expectation, via BSDE, and then in (1999) [94] established some basic properties of the g-martingale, such as the nonlinear Doob-Meyer decomposition theorem; see also Briand, Coquet, Hu, M\'emin and Peng (2000) [14], Chen, Kulperger and Jiang (2003) [20], Chen and Peng (1998) [21] and (2000) [22], Coquet, Hu, M\'emin and Peng (2001) [26] and (2002) [27], Peng (1999) [94], (2004) [97], Peng and Xu (2003) [107], Rosazza (2006) [112]. Our conjecture is that every property obtained for g-martingales must have its correspondence for G-martingales. But this conjecture is still far from being confirmed. Here we present some properties of G-martingales.

The problem of the G-martingale representation theorem was raised in Peng (2007) [102]. In Section 2, we only give a result for very regular random variables. Some very interesting developments on this important problem can be found in Soner, Touzi and Zhang (2009) [114] and Song (2009) [116].

Under the framework of g-expectation, Chen, Kulperger and Jiang (2003) [20], Hu (2005) [60], and Jiang and Chen (2004) [70] investigated the Jensen inequality for g-expectation. Recently, Jia and Peng (2007) [68] introduced the notion of g-convex function and obtained many interesting properties. Certainly a G-convex function concerns fully nonlinear situations.
Chapter V

Stochastic Differential Equations
In this chapter, we consider stochastic differential equations and backward stochastic differential equations driven by G-Brownian motion. The conditions for, and the proofs of, existence and uniqueness of a stochastic differential equation are similar to the classical situation. However, the corresponding problems for backward stochastic differential equations are not that easy; many of them are still open. We only give partial results in this direction.

1 Stochastic Differential Equations

In this chapter, we denote by $\bar{M}^p_G(0,T;\mathbb{R}^n)$, $p \ge 1$, the completion of $M^{p,0}_G(0,T;\mathbb{R}^n)$ under the norm $\big(\int_0^T \hat{\mathbb{E}}[|\eta_t|^p]\,dt\big)^{1/p}$. It is not hard to prove that $\bar{M}^p_G(0,T;\mathbb{R}^n) \subseteq M^p_G(0,T;\mathbb{R}^n)$. We consider all the problems in the space $\bar{M}^p_G(0,T;\mathbb{R}^n)$, and the sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is fixed.
We consider the following SDE driven by a $d$-dimensional G-Brownian motion:
\[
X_t = X_0 + \int_0^t b(s, X_s)\, ds + \int_0^t h_{ij}(s, X_s)\, d\langle B^i, B^j\rangle_s + \int_0^t \sigma_j(s, X_s)\, dB^j_s, \quad t \in [0,T], \tag{1.1}
\]
where the initial condition $X_0 \in \mathbb{R}^n$ is a given constant, and $b, h_{ij}, \sigma_j$ are given functions satisfying $b(\cdot, x), h_{ij}(\cdot, x), \sigma_j(\cdot, x) \in \bar{M}^2_G(0,T;\mathbb{R}^n)$ for each $x \in \mathbb{R}^n$ and the Lipschitz condition, i.e., $|\phi(t,x) - \phi(t,x')| \le K|x - x'|$, for each $t \in [0,T]$, $x, x' \in \mathbb{R}^n$, $\phi = b$, $h_{ij}$ and $\sigma_j$, respectively. Here the horizon $[0,T]$ can be arbitrarily large. The solution is a process $X \in \bar{M}^2_G(0,T;\mathbb{R}^n)$ satisfying the SDE (1.1).
We first introduce the following mapping on a fixed interval $[0,T]$:
\[
\Lambda_\cdot : \bar{M}^2_G(0,T;\mathbb{R}^n) \to \bar{M}^2_G(0,T;\mathbb{R}^n)
\]
by setting $\Lambda_t$, $t \in [0,T]$, with
\[
\Lambda_t(Y) = X_0 + \int_0^t b(s, Y_s)\, ds + \int_0^t h_{ij}(s, Y_s)\, d\langle B^i, B^j\rangle_s + \int_0^t \sigma_j(s, Y_s)\, dB^j_s.
\]
We immediately have the following lemma.

Lemma 1.1 For each $Y, Y' \in \bar{M}^2_G(0,T;\mathbb{R}^n)$, we have the following estimate:
\[
\hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2] \le C\int_0^t \hat{\mathbb{E}}[|Y_s - Y'_s|^2]\, ds, \quad t \in [0,T], \tag{1.2}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.
We now prove that SDE (1.1) has a unique solution. Multiplying both sides of (1.2) by $e^{-2Ct}$ and integrating on $[0,T]$, it follows that
\begin{align*}
\int_0^T \hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}\, dt
&\le C\int_0^T e^{-2Ct}\int_0^t \hat{\mathbb{E}}[|Y_s - Y'_s|^2]\, ds\, dt\\
&= C\int_0^T \int_s^T e^{-2Ct}\, dt\,\hat{\mathbb{E}}[|Y_s - Y'_s|^2]\, ds\\
&= \frac{1}{2}\int_0^T (e^{-2Cs} - e^{-2CT})\hat{\mathbb{E}}[|Y_s - Y'_s|^2]\, ds.
\end{align*}
We then have
\[
\int_0^T \hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}\, dt \le \frac{1}{2}\int_0^T \hat{\mathbb{E}}[|Y_t - Y'_t|^2]e^{-2Ct}\, dt. \tag{1.3}
\]
We observe that the following two norms are equivalent on $\bar{M}^2_G(0,T;\mathbb{R}^n)$:
\[
\Big(\int_0^T \hat{\mathbb{E}}[|Y_t|^2]\, dt\Big)^{1/2} \sim \Big(\int_0^T \hat{\mathbb{E}}[|Y_t|^2]e^{-2Ct}\, dt\Big)^{1/2}.
\]
From (1.3) we can obtain that $\Lambda(Y)$ is a contraction mapping. Consequently, we have the following theorem.

Theorem 1.2 There exists a unique solution $X \in \bar{M}^2_G(0,T;\mathbb{R}^n)$ of the stochastic differential equation (1.1).
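The contraction behind Theorem 1.2 can be visualized pathwise. This sketch is our own illustration (not part of the text): under any single scenario in which $d\langle B\rangle_t = \sigma_t^2\,dt$ with $\sigma_t \in [\underline{\sigma}, \overline{\sigma}]$, equation (1.1) becomes a classical SDE along each path, and the Picard iterates $Y \mapsto \Lambda(Y)$ collapse to the fixed point. Coefficients, the sampled path, and the Euler discretization of the three integrals are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 0.25, 200
dt = T / N
K = 0.2                                    # Lipschitz bound of b, h, sigma

sig = rng.uniform(0.7, 1.4, N)             # one volatility scenario
dB = sig * np.sqrt(dt) * rng.standard_normal(N)
dqv = sig**2 * dt                          # d<B>_t along this scenario

b = lambda x: -K * x
h = lambda x: K * np.sin(x)
s = lambda x: K * np.cos(x)

def Lam(Y):                                # Euler discretization of Lambda_t(Y)
    X = np.empty(N + 1)
    X[0] = 1.0
    incr = b(Y[:-1]) * dt + h(Y[:-1]) * dqv + s(Y[:-1]) * dB
    X[1:] = X[0] + np.cumsum(incr)
    return X

Y = np.ones(N + 1)                         # initial guess
gaps = []
for _ in range(20):
    Y_next = Lam(Y)
    gaps.append(np.max(np.abs(Y_next - Y)))
    Y = Y_next
print(gaps[0], gaps[-1])
```

The gap between successive iterates decays factorially, mirroring the weighted-norm estimate (1.3).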
We now consider the following linear SDE. For simplicity, we assume that $d = 1$ and $n = 1$:
\[
X_t = X_0 + \int_0^t (b_s X_s + \tilde{b}_s)\, ds + \int_0^t (h_s X_s + \tilde{h}_s)\, d\langle B\rangle_s + \int_0^t (\sigma_s X_s + \tilde{\sigma}_s)\, dB_s, \quad t \in [0,T], \tag{1.4}
\]
where $X_0 \in \mathbb{R}$ is given, $b_\cdot, h_\cdot, \sigma_\cdot$ are given bounded processes in $\bar{M}^2_G(0,T;\mathbb{R})$ and $\tilde{b}_\cdot, \tilde{h}_\cdot, \tilde{\sigma}_\cdot$ are given processes in $\bar{M}^2_G(0,T;\mathbb{R})$. By Theorem 1.2, we know that the linear SDE (1.4) has a unique solution.

Remark 1.3 The solution of the linear SDE (1.4) is
\[
X_t = \Gamma_t^{-1}\Big(X_0 + \int_0^t \tilde{b}_s\Gamma_s\, ds + \int_0^t (\tilde{h}_s - \sigma_s\tilde{\sigma}_s)\Gamma_s\, d\langle B\rangle_s + \int_0^t \tilde{\sigma}_s\Gamma_s\, dB_s\Big), \quad t \in [0,T],
\]
where
\[
\Gamma_t = \exp\Big(-\int_0^t b_s\, ds - \int_0^t (h_s - \tfrac12\sigma^2_s)\, d\langle B\rangle_s - \int_0^t \sigma_s\, dB_s\Big).
\]
In particular, if $b_\cdot, h_\cdot, \sigma_\cdot$ are constants and $\tilde{b}_\cdot, \tilde{h}_\cdot, \tilde{\sigma}_\cdot$ are zero, then $X$ is a geometric G-Brownian motion.

Definition 1.4 We call $X$ a geometric G-Brownian motion if
\[
X_t = \exp(\alpha t + \beta\langle B\rangle_t + \gamma B_t), \tag{1.5}
\]
where $\alpha, \beta, \gamma$ are constants.
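Formula (1.5) can be checked against the linear SDE under a single constant-volatility scenario $d\langle B\rangle_t = \sigma^2 dt$, $B_t = \sigma W_t$ for a classical Brownian path $W$ (an illustrative reduction of our own; all parameter values are assumptions). Along such a scenario, It\^o's formula gives $dX = X[(\alpha + (\beta + \tfrac12\gamma^2)\sigma^2)\,dt + \gamma\sigma\,dW]$, which we verify with an Euler scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, gamma, sigma = 0.05, 0.10, 0.20, 0.30
T, N = 0.5, 20000
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal(N)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, N + 1)

# exact geometric G-Brownian motion along this scenario (<B>_t = sigma^2 t)
X_exact = np.exp(alpha * t + beta * sigma**2 * t + gamma * sigma * W)

# Euler scheme for dX = X[(alpha + (beta + gamma^2/2) sigma^2) dt + gamma sigma dW]
X = np.empty(N + 1)
X[0] = 1.0
drift = alpha + (beta + 0.5 * gamma**2) * sigma**2
for k in range(N):
    X[k + 1] = X[k] * (1.0 + drift * dt + gamma * sigma * dW[k])

rel_err = np.max(np.abs(X - X_exact)) / np.max(X_exact)
print(rel_err)
```

The $\tfrac12\gamma^2$ correction in the drift is exactly the term that, in Remark 1.3, moves between the $d\langle B\rangle$ integrand and the exponential.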
Exercise 1.5 Prove that $\bar{M}^p_G(0,T;\mathbb{R}^n) \subseteq M^p_G(0,T;\mathbb{R}^n)$.

Exercise 1.6 Complete the proof of Lemma 1.1.
2 Backward Stochastic Differential Equations

We consider the following type of BSDE:
\[
Y_t = \hat{\mathbb{E}}\Big[\xi + \int_t^T f(s, Y_s)\, ds + \int_t^T h_{ij}(s, Y_s)\, d\langle B^i, B^j\rangle_s \,\Big|\, \Omega_t\Big], \quad t \in [0,T], \tag{2.6}
\]
where $\xi \in L^1_G(\Omega_T;\mathbb{R}^n)$ is given, and $f, h_{ij}$ are given functions satisfying $f(\cdot, y), h_{ij}(\cdot, y) \in \bar{M}^1_G(0,T;\mathbb{R}^n)$ for each $y \in \mathbb{R}^n$ and the Lipschitz condition, i.e., $|\phi(t,y) - \phi(t,y')| \le K|y - y'|$, for each $t \in [0,T]$, $y, y' \in \mathbb{R}^n$, $\phi = f$ and $h_{ij}$, respectively. The solution is a process $Y \in \bar{M}^1_G(0,T;\mathbb{R}^n)$ satisfying the above BSDE.

We first introduce the following mapping on a fixed interval $[0,T]$:
\[
\Lambda_\cdot : \bar{M}^1_G(0,T;\mathbb{R}^n) \to \bar{M}^1_G(0,T;\mathbb{R}^n)
\]
by setting $\Lambda_t$, $t \in [0,T]$, with
\[
\Lambda_t(Y) = \hat{\mathbb{E}}\Big[\xi + \int_t^T f(s, Y_s)\, ds + \int_t^T h_{ij}(s, Y_s)\, d\langle B^i, B^j\rangle_s \,\Big|\, \Omega_t\Big].
\]
We immediately have:

Lemma 2.1 For each $Y, Y' \in \bar{M}^1_G(0,T;\mathbb{R}^n)$, we have the following estimate:
\[
\hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|] \le C\int_t^T \hat{\mathbb{E}}[|Y_s - Y'_s|]\, ds, \quad t \in [0,T], \tag{2.7}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.
We now prove that BSDE (2.6) has a unique solution. Multiplying both sides of (2.7) by $e^{2Ct}$ and integrating on $[0,T]$, it follows that
\begin{align*}
\int_0^T \hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|]e^{2Ct}\, dt
&\le C\int_0^T\!\!\int_t^T \hat{\mathbb{E}}[|Y_s - Y'_s|]e^{2Ct}\, ds\, dt\\
&= C\int_0^T \hat{\mathbb{E}}[|Y_s - Y'_s|]\int_0^s e^{2Ct}\, dt\, ds\\
&= \frac{1}{2}\int_0^T \hat{\mathbb{E}}[|Y_s - Y'_s|](e^{2Cs} - 1)\, ds\\
&\le \frac{1}{2}\int_0^T \hat{\mathbb{E}}[|Y_s - Y'_s|]e^{2Cs}\, ds. \tag{2.8}
\end{align*}
We observe that the following two norms are equivalent on $\bar{M}^1_G(0,T;\mathbb{R}^n)$:
\[
\int_0^T \hat{\mathbb{E}}[|Y_t|]\, dt \sim \int_0^T \hat{\mathbb{E}}[|Y_t|]e^{2Ct}\, dt.
\]
From (2.8), we can obtain that $\Lambda(Y)$ is a contraction mapping. Consequently, we have the following theorem.

Theorem 2.2 There exists a unique solution $(Y_t)_{t\in[0,T]} \in \bar{M}^1_G(0,T;\mathbb{R}^n)$ of the backward stochastic differential equation (2.6).
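The same contraction can be watched at work in the simplest deterministic special case of (2.6), with $h_{ij} \equiv 0$ and a driver depending only on $y$, so that $Y_t = \xi + \int_t^T f(Y_s)\,ds$ (an illustration of our own; the driver, horizon and discretization are assumptions):

```python
import numpy as np

T, N, K = 1.0, 1000, 1.0
dt = T / N
xi = 2.0
f = lambda y: K * np.cos(y)           # Lipschitz driver, |f'| <= K

def Lam(Y):
    # Lambda_t(Y) = xi + \int_t^T f(Y_s) ds  (Riemann sum, deterministic case)
    out = np.empty(N + 1)
    out[N] = xi
    tail = np.cumsum((f(Y[1:]) * dt)[::-1])[::-1]   # sums over [t, T]
    out[:-1] = xi + tail
    return out

Y = np.full(N + 1, xi)
gaps = []
for _ in range(12):
    Y_next = Lam(Y)
    gaps.append(np.max(np.abs(Y_next - Y)))
    Y = Y_next
print(gaps[0], gaps[-1])
```

After $k$ iterations the gap is bounded by $(KT)^k/k!$ times the initial one, which is the factorial decay hidden in the weighted norm of (2.8).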
Let $Y^v$, $v = 1, 2$, be the solutions of the following BSDEs:
\[
Y^v_t = \hat{\mathbb{E}}\Big[\xi^v + \int_t^T (f(s, Y^v_s) + \phi^v_s)\, ds + \int_t^T (h_{ij}(s, Y^v_s) + \psi^{ij,v}_s)\, d\langle B^i, B^j\rangle_s \,\Big|\, \Omega_t\Big].
\]
Then the following estimate holds.

Proposition 2.3 We have
\[
\hat{\mathbb{E}}[|Y^1_t - Y^2_t|] \le Ce^{C(T-t)}\Big(\hat{\mathbb{E}}[|\xi^1 - \xi^2|] + \int_t^T \hat{\mathbb{E}}[|\phi^1_s - \phi^2_s| + |\psi^{ij,1}_s - \psi^{ij,2}_s|]\, ds\Big), \tag{2.9}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.

Proof. Similar to Lemma 2.1, we have
\[
\hat{\mathbb{E}}[|Y^1_t - Y^2_t|] \le C\Big(\int_t^T \hat{\mathbb{E}}[|Y^1_s - Y^2_s|]\, ds + \hat{\mathbb{E}}[|\xi^1 - \xi^2|] + \int_t^T \hat{\mathbb{E}}[|\phi^1_s - \phi^2_s| + |\psi^{ij,1}_s - \psi^{ij,2}_s|]\, ds\Big).
\]
By the Gronwall inequality (see Exercise 2.5), we conclude the result. $\square$

Remark 2.4 In particular, if $\xi^2 = 0$, $\phi^2_s = -f(s,0)$, $\psi^{ij,2}_s = -h_{ij}(s,0)$, $\phi^1_s = 0$, $\psi^{ij,1}_s = 0$, we obtain the estimate of the solution of the BSDE. Let $Y$ be the solution of the BSDE (2.6). Then
\[
\hat{\mathbb{E}}[|Y_t|] \le Ce^{C(T-t)}\Big(\hat{\mathbb{E}}[|\xi|] + \int_t^T \hat{\mathbb{E}}[|f(s,0)| + |h_{ij}(s,0)|]\, ds\Big), \tag{2.10}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.

Exercise 2.5 (The Gronwall inequality) Let $u(t)$ be a nonnegative function such that
\[
u(t) \le C + A\int_0^t u(s)\, ds \quad \text{for } 0 \le t \le T,
\]
where $C$ and $A$ are nonnegative constants. Prove that $u(t) \le Ce^{At}$ for $0 \le t \le T$.
Exercise 2.6 For each $\xi \in L^1_G(\Omega_T;\mathbb{R}^n)$, show that the process $(\hat{\mathbb{E}}[\xi|\Omega_t])_{t\in[0,T]}$ belongs to $\bar{M}^1_G(0,T;\mathbb{R}^n)$.

Exercise 2.7 Complete the proof of Lemma 2.1.
3 Nonlinear Feynman-Kac Formula
Consider the following SDE:
_
dX
t,
s
= b(X
t,
s
)ds +h
ij
(X
t,
s
)d
B
i
, B
j
_
s
+
j
(X
t,
s
)dB
j
s
, s [t, T],
X
t,
t
= ,
(3.11)
where L
2
G
(
t
; R
n
) is given and b, h
ij
,
j
: R
n
R
n
are given Lipschitz
functions, i.e., [(x) (x
)[ K[x x
[, for each x, x
R
n
, = b, h
ij
and
j
.
We then consider the associated BSDE:
\[
Y^{t,\xi}_s=\hat{\mathbb{E}}\Big[\Phi(X^{t,\xi}_T)+\int_s^T f(X^{t,\xi}_r,Y^{t,\xi}_r)\,dr+\int_s^T g_{ij}(X^{t,\xi}_r,Y^{t,\xi}_r)\,d\langle B^i,B^j\rangle_r\,\Big|\,\Omega_s\Big],\tag{3.12}
\]
where $\Phi:\mathbb{R}^n\to\mathbb{R}$ is a given Lipschitz function and $f,g_{ij}:\mathbb{R}^n\times\mathbb{R}\to\mathbb{R}$ are given Lipschitz functions, i.e., $|\varphi(x,y)-\varphi(x',y')|\le K(|x-x'|+|y-y'|)$ for each $x,x'\in\mathbb{R}^n$, $y,y'\in\mathbb{R}$, $\varphi=f$ and $g_{ij}$.
We have the following estimates:

Proposition 3.1 For each $\xi,\xi'\in L^2_G(\Omega_t;\mathbb{R}^n)$, we have, for each $s\in[t,T]$,
\[
\hat{\mathbb{E}}[|X^{t,\xi}_s-X^{t,\xi'}_s|^2|\Omega_t]\le C|\xi-\xi'|^2\tag{3.13}
\]
and
\[
\hat{\mathbb{E}}[|X^{t,\xi}_s|^2|\Omega_t]\le C(1+|\xi|^2),\tag{3.14}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.
Proof. It is easy to obtain
\[
\hat{\mathbb{E}}[|X^{t,\xi}_s-X^{t,\xi'}_s|^2|\Omega_t]\le C_1\Big(|\xi-\xi'|^2+\int_t^s\hat{\mathbb{E}}[|X^{t,\xi}_r-X^{t,\xi'}_r|^2|\Omega_t]\,dr\Big).
\]
By the Gronwall inequality, we obtain
\[
\hat{\mathbb{E}}[|X^{t,\xi}_s-X^{t,\xi'}_s|^2|\Omega_t]\le C_1e^{C_1T}|\xi-\xi'|^2.
\]
Similarly, we can get (3.14).
Corollary 3.2 For each $\xi\in L^2_G(\Omega_t;\mathbb{R}^n)$, we have
\[
\hat{\mathbb{E}}[|X^{t,\xi}_{t+\delta}-\xi|^2|\Omega_t]\le C(1+|\xi|^2)\delta\quad\text{for }\delta\in[0,T-t],\tag{3.15}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.

Proof. It is easy to obtain
\[
\hat{\mathbb{E}}[|X^{t,\xi}_{t+\delta}-\xi|^2|\Omega_t]\le C_1\int_t^{t+\delta}\big(1+\hat{\mathbb{E}}[|X^{t,\xi}_s|^2|\Omega_t]\big)\,ds.
\]
By Proposition 3.1, we obtain the result.
Proposition 3.3 For each $\xi,\xi'\in L^2_G(\Omega_t;\mathbb{R}^n)$, we have
\[
|Y^{t,\xi}_t-Y^{t,\xi'}_t|\le C|\xi-\xi'|\tag{3.16}
\]
and
\[
|Y^{t,\xi}_t|\le C(1+|\xi|),\tag{3.17}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.

Proof. For each $s\in[t,T]$, it is easy to check that
\[
|Y^{t,\xi}_s-Y^{t,\xi'}_s|\le C_1\hat{\mathbb{E}}\Big[|X^{t,\xi}_T-X^{t,\xi'}_T|+\int_s^T\big(|X^{t,\xi}_r-X^{t,\xi'}_r|+|Y^{t,\xi}_r-Y^{t,\xi'}_r|\big)\,dr\,\Big|\,\Omega_s\Big].
\]
Since
\[
\hat{\mathbb{E}}[|X^{t,\xi}_s-X^{t,\xi'}_s|\,|\Omega_t]\le\big(\hat{\mathbb{E}}[|X^{t,\xi}_s-X^{t,\xi'}_s|^2|\Omega_t]\big)^{1/2},
\]
we have
\[
\hat{\mathbb{E}}[|Y^{t,\xi}_s-Y^{t,\xi'}_s|\,|\Omega_t]\le C_2\Big(|\xi-\xi'|+\int_s^T\hat{\mathbb{E}}[|Y^{t,\xi}_r-Y^{t,\xi'}_r|\,|\Omega_t]\,dr\Big).
\]
By the Gronwall inequality, we obtain (3.16). Similarly we can get (3.17).
We are more interested in the case when $\xi=x\in\mathbb{R}^n$. Define
\[
u(t,x):=Y^{t,x}_t,\quad(t,x)\in[0,T]\times\mathbb{R}^n.\tag{3.18}
\]
By the above proposition, we immediately have the following estimates:
\[
|u(t,x)-u(t,x')|\le C|x-x'|,\tag{3.19}
\]
\[
|u(t,x)|\le C(1+|x|),\tag{3.20}
\]
where the constant $C$ depends only on the Lipschitz constant $K$.
Remark 3.4 It is important to note that $u(t,x)$ is a deterministic function of $(t,x)$, because $X^{t,x}_s$ and $Y^{t,x}_s$ are independent from $\Omega_t$.

Theorem 3.5 For each $\xi\in L^2_G(\Omega_t;\mathbb{R}^n)$, we have
\[
u(t,\xi)=Y^{t,\xi}_t.\tag{3.21}
\]
Proposition 3.6 We have, for $\delta\in[0,T-t]$,
\[
u(t,x)=\hat{\mathbb{E}}\Big[u(t+\delta,X^{t,x}_{t+\delta})+\int_t^{t+\delta}f(X^{t,x}_r,Y^{t,x}_r)\,dr+\int_t^{t+\delta}g_{ij}(X^{t,x}_r,Y^{t,x}_r)\,d\langle B^i,B^j\rangle_r\Big].\tag{3.22}
\]
Proof. Since $X^{t,x}_s=X^{t+\delta,X^{t,x}_{t+\delta}}_s$ for $s\in[t+\delta,T]$, we get $Y^{t,x}_{t+\delta}=Y^{t+\delta,X^{t,x}_{t+\delta}}_{t+\delta}$. By Theorem 3.5, we have $Y^{t,x}_{t+\delta}=u(t+\delta,X^{t,x}_{t+\delta})$, which implies the result.
For each $A\in S(n)$, $p\in\mathbb{R}^n$, $r\in\mathbb{R}$, we set
\[
F(A,p,r,x):=G(B(A,p,r,x))+\langle p,b(x)\rangle+f(x,r),
\]
where $B(A,p,r,x)$ is a $d\times d$ symmetric matrix with
\[
B_{ij}(A,p,r,x):=\langle A\sigma_i(x),\sigma_j(x)\rangle+\langle p,h_{ij}(x)+h_{ji}(x)\rangle+g_{ij}(x,r)+g_{ji}(x,r).
\]
Theorem 3.7 $u(t,x)$ is a viscosity solution of the following PDE:
\[
\begin{cases}
\partial_t u+F(D^2u,Du,u,x)=0,\\
u(T,x)=\Phi(x).
\end{cases}\tag{3.23}
\]
Proof. We first show that $u$ is a continuous function. By (3.19) we know that $u$ is a Lipschitz function in $x$. It follows from (2.10) and (3.14) that, for $s\in[t,T]$, $\hat{\mathbb{E}}[|Y^{t,x}_s|]\le C(1+|x|)$. Noting (3.15) and (3.22), we get $|u(t,x)-u(t+\delta,x)|\le C(1+|x|)(\delta^{1/2}+\delta)$ for $\delta\in[0,T-t]$. Thus $u$ is $\frac12$-H\"older continuous in $t$, which implies that $u$ is a continuous function. We can also show that, for each $p\ge 2$,
\[
\hat{\mathbb{E}}[|X^{t,x}_{t+\delta}-x|^p]\le C(1+|x|^p)\delta^{p/2}.\tag{3.24}
\]
Now for fixed $(t,x)\in(0,T)\times\mathbb{R}^n$, let $\psi\in C^{2,3}_b([0,T]\times\mathbb{R}^n)$ be such that $\psi\ge u$ and $\psi(t,x)=u(t,x)$. By (3.22), (3.24) and Taylor's expansion, it follows that, for $\delta\in(0,T-t)$,
\begin{align*}
0&\le\hat{\mathbb{E}}\Big[\psi(t+\delta,X^{t,x}_{t+\delta})-\psi(t,x)+\int_t^{t+\delta}f(X^{t,x}_r,Y^{t,x}_r)\,dr+\int_t^{t+\delta}g_{ij}(X^{t,x}_r,Y^{t,x}_r)\,d\langle B^i,B^j\rangle_r\Big]\\
&\le\tfrac12\hat{\mathbb{E}}\big[\big(B(D^2\psi(t,x),D\psi(t,x),\psi(t,x),x),\langle B\rangle_{t+\delta}-\langle B\rangle_t\big)\big]\\
&\quad+\delta\big(\partial_t\psi(t,x)+\langle D\psi(t,x),b(x)\rangle+f(x,\psi(t,x))\big)+C(1+|x|+|x|^2+|x|^3)\delta^{3/2}\\
&=\delta\big(\partial_t\psi(t,x)+F(D^2\psi(t,x),D\psi(t,x),\psi(t,x),x)\big)+C(1+|x|+|x|^2+|x|^3)\delta^{3/2};
\end{align*}
then it is easy to check that
\[
\partial_t\psi(t,x)+F(D^2\psi(t,x),D\psi(t,x),\psi(t,x),x)\ge 0.
\]
Thus $u$ is a viscosity subsolution of (3.23). Similarly we can prove that $u$ is a viscosity supersolution of (3.23).
Example 3.8 Let $B=(B^1,B^2)$ be a $2$-dimensional G-Brownian motion with
\[
G(A)=G_1(a_{11})+G_2(a_{22}),
\]
where
\[
G_i(a)=\tfrac12(\bar{\sigma}_i^2a^+-\underline{\sigma}_i^2a^-),\quad i=1,2.
\]
In this case, we consider the following $1$-dimensional SDE:
\[
dX^{t,x}_s=\mu X^{t,x}_s\,ds+\nu X^{t,x}_s\,d\langle B^1\rangle_s+\sigma X^{t,x}_s\,dB^2_s,\quad X^{t,x}_t=x,
\]
where $\mu$, $\nu$ and $\sigma$ are constants. The corresponding function $u$ is defined by
\[
u(t,x):=\hat{\mathbb{E}}[\Phi(X^{t,x}_T)].
\]
Then
\[
u(t,x)=\hat{\mathbb{E}}[u(t+\delta,X^{t,x}_{t+\delta})]
\]
and $u$ is the viscosity solution of the following PDE:
\[
\partial_t u+\mu x\,\partial_x u+2G_1(\nu x\,\partial_x u)+\sigma^2x^2G_2(\partial^2_{xx}u)=0,\quad u(T,x)=\Phi(x).
\]
Exercise 3.9 For each $\xi\in L^p_G(\Omega_t;\mathbb{R}^n)$ with $p\ge 2$, show that SDE (3.11) has a unique solution in $M^p_G(t,T;\mathbb{R}^n)$. Furthermore, show that the following estimates hold:
\[
\hat{\mathbb{E}}[|X^{t,x}_s-X^{t,x'}_s|^p]\le C|x-x'|^p,
\]
\[
\hat{\mathbb{E}}[|X^{t,x}_s|^p]\le C(1+|x|^p),
\]
\[
\hat{\mathbb{E}}[|X^{t,x}_{t+\delta}-x|^p]\le C(1+|x|^p)\delta^{p/2}.
\]
Notes and Comments

This chapter is mainly from Peng (2007) [102].

There are many excellent books on Itô's stochastic calculus and stochastic differential equations, founded by Itô's original paper [65], as well as on martingale theory. Readers are referred to Chung and Williams (1990) [25], Dellacherie and Meyer (1978 and 1982) [33], He, Wang and Yan (1992) [57], Itô and McKean (1965) [66], Ikeda and Watanabe (1981) [63], Kallenberg (2002) [72], Karatzas and Shreve (1988) [73], Øksendal (1998) [87], Protter (1990) [110], Revuz and Yor (1999) [111] and Yong and Zhou (1999) [124].
Linear backward stochastic differential equations (BSDEs) were first introduced by Bismut in (1973) [12] and (1978) [13]. Bensoussan developed this approach in (1981) [10] and (1982) [11]. The existence and uniqueness theorem for a general nonlinear BSDE was obtained in 1990 by Pardoux and Peng [88]. The present version of the proof is based on El Karoui, Peng and Quenez (1997) [44], which is also a very good survey on BSDE theory and its applications, especially in finance. The comparison theorem for BSDEs was obtained in Peng (1992) [90] for the case when $g$ is a $C^1$-function, and then in [44] when $g$ is Lipschitz. The nonlinear Feynman-Kac formula for BSDEs was introduced by Peng (1991) [89] and (1992) [91]. Here we obtain the corresponding Feynman-Kac formula under the framework of G-expectation. We also refer to Yong and Zhou (1999) [124], as well as to Peng (1997) [93] (in Chinese) and (2004) [95], for systematic presentations of BSDE theory. For contributions to the development of this theory, readers are referred to the literature listed in the Notes and Comments in Chap. I.
Chapter VI

Capacity and Quasi-Sure Analysis for G-Brownian Paths

In this chapter, we first present a general framework for an upper expectation defined on a metric space $(\Omega,\mathcal{B}(\Omega))$ and the corresponding capacity, in order to introduce quasi-sure analysis. These results are important for obtaining the pathwise analysis of G-Brownian motion.
1 Integration Theory associated to an Upper Probability

Let $\Omega$ be a complete separable metric space equipped with the distance $d$, $\mathcal{B}(\Omega)$ the Borel $\sigma$-algebra of $\Omega$ and $\mathcal{M}$ the collection of all probability measures on $(\Omega,\mathcal{B}(\Omega))$.

$L^0(\Omega)$: the space of all $\mathcal{B}(\Omega)$-measurable real functions;
$B_b(\Omega)$: all bounded functions in $L^0(\Omega)$;
$C_b(\Omega)$: all continuous functions in $B_b(\Omega)$.

All along this section, we consider a given subset $\mathcal{P}\subseteq\mathcal{M}$.

1.1 Capacity associated to $\mathcal{P}$

We denote
\[
c(A):=\sup_{P\in\mathcal{P}}P(A),\quad A\in\mathcal{B}(\Omega).
\]
One can easily verify the following theorem.
Theorem 1.1 The set function $c(\cdot)$ is a Choquet capacity, i.e. (see [24, 32]),

1. $0\le c(A)\le 1$, $\forall A\in\mathcal{B}(\Omega)$.
2. If $A\subseteq B$, then $c(A)\le c(B)$.
3. If $(A_n)_{n=1}^\infty$ is a sequence in $\mathcal{B}(\Omega)$, then $c(\bigcup_n A_n)\le\sum_n c(A_n)$.
4. If $(A_n)_{n=1}^\infty$ is an increasing sequence in $\mathcal{B}(\Omega)$: $A_n\uparrow A=\bigcup_n A_n$, then $c(A)=\lim_{n\to\infty}c(A_n)$.
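On a finite sample space these properties can be checked directly. A minimal Python sketch (our own toy illustration, not part of the original text; the family of three measures is arbitrary):

```python
import numpy as np

# A toy family of three probability measures on Omega = {0, 1, 2, 3}
family = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.40, 0.40, 0.10],
    [0.70, 0.10, 0.10, 0.10],
])

def capacity(A):
    """Upper probability c(A) = sup_{P in family} P(A)."""
    idx = sorted(A)
    return float(family[:, idx].sum(axis=1).max()) if idx else 0.0

# Properties 1-3 of Theorem 1.1 on this toy model:
assert 0.0 <= capacity({2}) <= 1.0
assert capacity({1}) <= capacity({1, 2})                   # monotonicity
assert capacity({0, 1}) <= capacity({0}) + capacity({1})   # subadditivity
print(capacity({0}), capacity({1}), capacity({0, 1}))
```

Note that $c$ is subadditive but in general not additive: here $c(\{0\})+c(\{1\})=0.7+0.4>c(\{0,1\})$, which is exactly why the classical additivity-based arguments must be replaced by capacity arguments in this section.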
Furthermore, we have

Theorem 1.2 For each $A\in\mathcal{B}(\Omega)$, we have
\[
c(A)=\sup\{c(K):K\text{ compact},\ K\subseteq A\}.
\]
Proof. It is simply because
\[
c(A)=\sup_{P\in\mathcal{P}}\ \sup_{\substack{K\text{ compact}\\ K\subseteq A}}P(K)=\sup_{\substack{K\text{ compact}\\ K\subseteq A}}\ \sup_{P\in\mathcal{P}}P(K)=\sup_{\substack{K\text{ compact}\\ K\subseteq A}}c(K).
\]
Definition 1.3 A set $A\in\mathcal{B}(\Omega)$ is called polar if $c(A)=0$. A property is said to hold quasi-surely (q.s.) if it holds outside a polar set.

Lemma 1.5 (Borel-Cantelli) Let $(A_n)_{n=1}^\infty$ be a sequence of Borel sets such that
\[
\sum_{n=1}^\infty c(A_n)<\infty.
\]
Then $\limsup_{n\to\infty}A_n$ is polar.

Proof. Apply the Borel-Cantelli lemma under each probability $P\in\mathcal{P}$.
The following theorem is Prokhorov's theorem.

Theorem 1.6 $\mathcal{P}$ is relatively compact if and only if for each $\varepsilon>0$, there exists a compact set $K$ such that $c(K^c)<\varepsilon$.

The following two lemmas can be found in [62].

Lemma 1.7 $\mathcal{P}$ is relatively compact if and only if for each sequence of closed sets $F_n\downarrow\emptyset$, we have $c(F_n)\downarrow 0$.
Proof. We outline the proof for the convenience of readers.

"$\Longrightarrow$" part: It follows from Theorem 1.6 that for each fixed $\varepsilon>0$, there exists a compact set $K$ such that $c(K^c)<\varepsilon$. Note that $F_n\cap K\downarrow\emptyset$; then there exists an $N>0$ such that $F_n\cap K=\emptyset$ for $n\ge N$, which implies $\lim_n c(F_n)\le\varepsilon$. Since $\varepsilon$ can be arbitrarily small, we obtain $c(F_n)\downarrow 0$.

"$\Longleftarrow$" part: For each $\varepsilon>0$ and each $k\in\mathbb{N}$, let $(A^k_i)_{i=1}^\infty$ be a sequence of open balls of radius $1/k$ covering $\Omega$. Observe that $(\bigcup_{i=1}^n A^k_i)^c\downarrow\emptyset$ as $n\to\infty$; then there exists an $n_k$ such that $c((\bigcup_{i=1}^{n_k}A^k_i)^c)<\varepsilon 2^{-k}$. Set $K=\bigcap_{k=1}^\infty\bigcup_{i=1}^{n_k}\overline{A^k_i}$. It is easy to check that $K$ is compact and $c(K^c)<\varepsilon$. Thus by Theorem 1.6, $\mathcal{P}$ is relatively compact.
Lemma 1.8 Let $\mathcal{P}$ be weakly compact. Then for each sequence of closed sets $F_n\downarrow F$, we have $c(F_n)\downarrow c(F)$.

Proof. We outline the proof for the convenience of readers. For each fixed $\varepsilon>0$, by the definition of $c(F_n)$, there exists a $P_n\in\mathcal{P}$ such that $P_n(F_n)\ge c(F_n)-\varepsilon$. Since $\mathcal{P}$ is weakly compact, there exist a subsequence $(P_{n_k})$ and a $P\in\mathcal{P}$ such that $P_{n_k}$ converges weakly to $P$. Thus
\[
P(F_m)\ge\limsup_{k\to\infty}P_{n_k}(F_m)\ge\limsup_{k\to\infty}P_{n_k}(F_{n_k})\ge\lim_{n\to\infty}c(F_n)-\varepsilon.
\]
Letting $m\to\infty$, we get $P(F)\ge\lim_{n\to\infty}c(F_n)-\varepsilon$, which yields $c(F_n)\downarrow c(F)$.
Following [62] (see also [35, 50]), the upper expectation of $\mathcal{P}$ is defined as follows: for each $X\in L^0(\Omega)$ such that $E_P[X]$ exists for each $P\in\mathcal{P}$,
\[
\mathbb{E}[X]=\mathbb{E}^{\mathcal{P}}[X]:=\sup_{P\in\mathcal{P}}E_P[X].
\]
It is easy to verify

Theorem 1.9 The upper expectation $\mathbb{E}[\cdot]$ of the family $\mathcal{P}$ is a sublinear expectation on $B_b(\Omega)$ as well as on $C_b(\Omega)$, i.e.,

1. for all $X,Y$ in $B_b(\Omega)$, $X\ge Y\Longrightarrow\mathbb{E}[X]\ge\mathbb{E}[Y]$;
2. for all $X,Y$ in $B_b(\Omega)$, $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$;
3. for all $\lambda\ge 0$, $X\in B_b(\Omega)$, $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$;
4. for all $c\in\mathbb{R}$, $X\in B_b(\Omega)$, $\mathbb{E}[X+c]=\mathbb{E}[X]+c$.
Moreover, it is also easy to check

Theorem 1.10 We have

1. Let $\mathbb{E}[X_n]$ and $\mathbb{E}[\sum_{n=1}^\infty X_n]$ be finite. Then $\mathbb{E}[\sum_{n=1}^\infty X_n]\le\sum_{n=1}^\infty\mathbb{E}[X_n]$.
2. Let $X_n\uparrow X$ and let $\mathbb{E}[X_n]$, $\mathbb{E}[X]$ be finite. Then $\mathbb{E}[X_n]\uparrow\mathbb{E}[X]$.
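The sublinearity properties in Theorem 1.9 can likewise be observed directly on a toy model (a hypothetical three-measure family on a four-point space; all names below are ours):

```python
import numpy as np

family = np.array([            # a hypothetical family of three measures
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.40, 0.40, 0.10],
    [0.70, 0.10, 0.10, 0.10],
])

def upper_E(X):
    """Upper expectation E[X] = sup_P E_P[X] over the family."""
    return float((family * np.asarray(X, dtype=float)).sum(axis=1).max())

X = np.array([1.0, -2.0, 3.0, 0.5])
Y = np.array([0.0, 1.0, -1.0, 2.0])

assert upper_E(X + Y) <= upper_E(X) + upper_E(Y) + 1e-12   # sub-additivity
assert abs(upper_E(2.5 * X) - 2.5 * upper_E(X)) < 1e-9     # pos. homogeneity
assert abs(upper_E(X + 7.0) - (upper_E(X) + 7.0)) < 1e-9   # constant shift
print(upper_E(X))
```

The maximum over finitely many linear expectations is automatically monotone, sub-additive, positively homogeneous and constant-preserving, which is the finite-dimensional shadow of the representation discussed in this section.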
Definition 1.11 The functional $\mathbb{E}[\cdot]$ is said to be regular if for each $(X_n)_{n=1}^\infty$ in $C_b(\Omega)$ such that $X_n\downarrow 0$ on $\Omega$, we have $\mathbb{E}[X_n]\downarrow 0$.
Similar to Lemma 1.7 we have:

Theorem 1.12 $\mathbb{E}[\cdot]$ is regular if and only if $\mathcal{P}$ is relatively compact.

Proof. "$\Longrightarrow$" part: For each sequence of closed subsets $F_n\downarrow\emptyset$ such that $F_n$, $n=1,2,\ldots$, are non-empty (otherwise the proof is trivial), there exists $(g_n)_{n=1}^\infty\subseteq C_b(\Omega)$ satisfying
\[
0\le g_n\le 1,\quad g_n=1\text{ on }F_n\quad\text{and}\quad g_n=0\text{ on }\{\omega\in\Omega:d(\omega,F_n)\ge\tfrac1n\}.
\]
We set $f_n=\bigwedge_{i=1}^n g_i$; it is clear that $f_n\in C_b(\Omega)$ and $\mathbf{1}_{F_n}\le f_n\downarrow 0$. The regularity of $\mathbb{E}[\cdot]$ implies $\mathbb{E}[f_n]\downarrow 0$ and thus $c(F_n)\downarrow 0$. It follows from Lemma 1.7 that $\mathcal{P}$ is relatively compact.

"$\Longleftarrow$" part: For each $(X_n)_{n=1}^\infty\subseteq C_b(\Omega)$ such that $X_n\downarrow 0$, we have
\[
\mathbb{E}[X_n]=\sup_{P\in\mathcal{P}}E_P[X_n]=\sup_{P\in\mathcal{P}}\int_0^\infty P(\{X_n\ge t\})\,dt\le\int_0^\infty c(\{X_n\ge t\})\,dt.
\]
For each fixed $t>0$, $\{X_n\ge t\}$ is a closed subset and $\{X_n\ge t\}\downarrow\emptyset$ as $n\uparrow\infty$. By Lemma 1.7, $c(\{X_n\ge t\})\downarrow 0$ and thus $\int_0^\infty c(\{X_n\ge t\})\,dt\downarrow 0$. Consequently $\mathbb{E}[X_n]\downarrow 0$.
1.2 Functional spaces

We set, for $p>0$,
\[
\mathcal{L}^p:=\{X\in L^0(\Omega):\mathbb{E}[|X|^p]=\sup_{P\in\mathcal{P}}E_P[|X|^p]<\infty\};
\]
\[
\mathcal{N}^p:=\{X\in L^0(\Omega):\mathbb{E}[|X|^p]=0\};
\]
\[
\mathcal{N}:=\{X\in L^0(\Omega):X=0,\ c\text{-q.s.}\}.
\]
It is seen that $\mathcal{L}^p$ and $\mathcal{N}^p$ are linear spaces and $\mathcal{N}^p=\mathcal{N}$, for each $p>0$. We denote $\mathbb{L}^p:=\mathcal{L}^p/\mathcal{N}$. As usual, we do not take care about the distinction between classes and their representatives.

Lemma 1.13 Let $X\in\mathbb{L}^p$. Then for each $\lambda>0$,
\[
c(\{|X|>\lambda\})\le\frac{\mathbb{E}[|X|^p]}{\lambda^p}.
\]
Proof. Just apply the Markov inequality under each $P\in\mathcal{P}$.
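The same toy setting as above illustrates the lemma: the Markov bound survives the supremum over measures because it holds under each individual $P$. A hedged sketch with our own names:

```python
import numpy as np

family = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.40, 0.40, 0.10],
    [0.70, 0.10, 0.10, 0.10],
])
X = np.array([0.5, -1.5, 2.0, 3.0])

def capacity(event):
    """c(event) = sup_P P(event) for a boolean event vector."""
    return float((family * event).sum(axis=1).max())

def upper_E(Z):
    return float((family * Z).sum(axis=1).max())

# Markov bound under the capacity, for several lambda and p
for lam in (0.5, 1.0, 2.0):
    for p in (1, 2):
        bound = upper_E(np.abs(X) ** p) / lam ** p
        assert capacity(np.abs(X) > lam) <= bound + 1e-12
print(capacity(np.abs(X) > 2.0))
```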
Similar to the classical results, we get the following proposition; the proof, which is analogous to the classical arguments, is omitted.

Proposition 1.14 We have

1. For each $p\ge 1$, $\mathbb{L}^p$ is a Banach space under the norm $\|X\|_p:=(\mathbb{E}[|X|^p])^{\frac1p}$.
2. For each $p<1$, $\mathbb{L}^p$ is a complete metric space under the distance $d(X,Y):=\mathbb{E}[|X-Y|^p]$.

We set
\[
\mathcal{L}^\infty:=\{X\in L^0(\Omega):\ \exists\text{ a constant }M\text{ s.t. }|X|\le M,\text{ q.s.}\};\qquad\mathbb{L}^\infty:=\mathcal{L}^\infty/\mathcal{N}.
\]
Proposition 1.15 Under the norm $\|X\|_\infty:=\inf\{M\ge 0:|X|\le M\text{ q.s.}\}$, $\mathbb{L}^\infty$ is a Banach space.

Proof. From $\{|X|>\|X\|_\infty\}=\bigcup_{n=1}^\infty\{|X|\ge\|X\|_\infty+\tfrac1n\}$ we know that $|X|\le\|X\|_\infty$ q.s., for each $X\in\mathbb{L}^\infty$. The rest of the proof is classical.
Proposition 1.17 Let $p\in(0,\infty]$ and let $(X_n)$ be a sequence in $\mathbb{L}^p$ which converges to $X$ in $\mathbb{L}^p$. Then there exists a subsequence $(X_{n_k})$ which converges to $X$ quasi-surely, in the sense that it converges to $X$ outside a polar set.
Proof. Let us assume $p\in(0,\infty)$; the case $p=\infty$ is obvious, since convergence in $\mathbb{L}^\infty$ implies convergence in $\mathbb{L}^p$ for every $p\in(0,\infty)$. One can extract a subsequence $(X_{n_k})$ such that $\mathbb{E}[|X-X_{n_k}|^p]\le 2^{-(p+1)k}$. By Lemma 1.13,
\[
c(\{|X-X_{n_k}|>2^{-k}\})\le 2^{kp}\,\mathbb{E}[|X-X_{n_k}|^p]\le 2^{-k},
\]
so by the Borel-Cantelli Lemma 1.5 the set $\limsup_k\{|X-X_{n_k}|>2^{-k}\}$ is polar, and $X_{n_k}\to X$ outside it.

We denote by $\mathbb{L}^p_b$ the completion of $B_b(\Omega)$ and by $\mathbb{L}^p_c$ the completion of $C_b(\Omega)$ under the norm $\|\cdot\|_p$. The space $\mathbb{L}^p_b$ admits the following characterization.

Proposition 1.18 For each $p>0$,
\[
\mathbb{L}^p_b=\{X\in\mathbb{L}^p:\lim_{n\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0\}.
\]
Proof. Let $X\in\mathbb{L}^p_b$; then there exists a sequence $(Y_n)_{n=1}^\infty$ in $B_b(\Omega)$ such that $\mathbb{E}[|X-Y_n|^p]\to 0$. Let $y_n=\sup_\omega|Y_n(\omega)|$ and $X_n=(X\wedge y_n)\vee(-y_n)$. Since $|X-X_n|\le|X-Y_n|$, we have $\mathbb{E}[|X-X_n|^p]\to 0$. This clearly implies that for any sequence $(\lambda_n)$ tending to $\infty$, $\lim_n\mathbb{E}[|X-(X\wedge\lambda_n)\vee(-\lambda_n)|^p]=0$.

Now we have, for all $n\in\mathbb{N}$,
\[
\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=\mathbb{E}[(|X|-n+n)^p\mathbf{1}_{\{|X|>n\}}]\le(1\vee 2^{p-1})\big(\mathbb{E}[(|X|-n)^p\mathbf{1}_{\{|X|>n\}}]+n^pc(\{|X|>n\})\big).
\]
The first term of the right hand side tends to $0$ since
\[
\mathbb{E}[(|X|-n)^p\mathbf{1}_{\{|X|>n\}}]=\mathbb{E}[|X-(X\wedge n)\vee(-n)|^p]\to 0.
\]
For the second term, since
\[
\frac{n^p}{2^p}\mathbf{1}_{\{|X|>n\}}\le\Big(|X|-\frac n2\Big)^p\mathbf{1}_{\{|X|>n\}}\le\Big(|X|-\frac n2\Big)^p\mathbf{1}_{\{|X|>\frac n2\}},
\]
we have
\[
\frac{n^p}{2^p}c(\{|X|>n\})=\frac{n^p}{2^p}\mathbb{E}[\mathbf{1}_{\{|X|>n\}}]\le\mathbb{E}\Big[\Big(|X|-\frac n2\Big)^p\mathbf{1}_{\{|X|>\frac n2\}}\Big]\to 0.
\]
Consequently, $\lim_n\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0$. Conversely, if this limit is $0$, then for $Y_n=(X\wedge n)\vee(-n)\in B_b(\Omega)$ we have $\mathbb{E}[|X-Y_n|^p]\le\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]\to 0$, so $X\in\mathbb{L}^p_b$.
Proposition 1.19 Let $X\in\mathbb{L}^1_b$. Then for each $\varepsilon>0$, there exists a $\delta>0$ such that for all $A\in\mathcal{B}(\Omega)$ with $c(A)\le\delta$, we have $\mathbb{E}[|X|\mathbf{1}_A]\le\varepsilon$.
Proof. For each $\varepsilon>0$, by Proposition 1.18, there exists an $N>0$ such that $\mathbb{E}[|X|\mathbf{1}_{\{|X|>N\}}]\le\frac\varepsilon2$. Take $\delta=\frac{\varepsilon}{2N}$. Then for a subset $A\in\mathcal{B}(\Omega)$ with $c(A)\le\delta$, we have
\[
\mathbb{E}[|X|\mathbf{1}_A]\le\mathbb{E}[|X|\mathbf{1}_A\mathbf{1}_{\{|X|>N\}}]+\mathbb{E}[|X|\mathbf{1}_A\mathbf{1}_{\{|X|\le N\}}]\le\mathbb{E}[|X|\mathbf{1}_{\{|X|>N\}}]+Nc(A)\le\varepsilon.
\]
Definition 1.20 A real function $X$ on $\Omega$ is said to be quasi-continuous (q.c.) if for each $\varepsilon>0$, there exists an open set $G$ with $c(G)<\varepsilon$ such that $X|_{G^c}$ is continuous. We say that $X:\Omega\to\mathbb{R}$ has a quasi-continuous version if there exists a quasi-continuous function $Y:\Omega\to\mathbb{R}$ with $X=Y$ q.s.

Proposition 1.24 Each element of $\mathbb{L}^p_c$ has a quasi-continuous version.

Proof. Let $(X_n)$ be a sequence in $C_b(\Omega)$ converging to $X$ in $\mathbb{L}^p$. Choose a subsequence $(X_{n_i})$ such that $\mathbb{E}[|X_{n_{i+1}}-X_{n_i}|^p]\le 2^{-2i}$ and set, for each $k\ge 1$,
\[
A_k=\bigcup_{i=k}^\infty\{|X_{n_{i+1}}-X_{n_i}|>2^{-i/p}\}.
\]
Thanks to the subadditivity property and the Markov inequality, we have
\[
c(A_k)\le\sum_{i=k}^\infty c(\{|X_{n_{i+1}}-X_{n_i}|>2^{-i/p}\})\le\sum_{i=k}^\infty 2^i\,\mathbb{E}[|X_{n_{i+1}}-X_{n_i}|^p]\le\sum_{i=k}^\infty 2^{-i}=2^{-k+1}.
\]
As a consequence, $\lim_{k\to\infty}c(A_k)=0$, so the Borel set $A=\bigcap_{k=1}^\infty A_k$ is polar. As each $X_{n_i}$ is continuous, $A_k$ is an open set for all $k\ge 1$. Moreover, for all $k$, $(X_{n_i})$ converges uniformly on $A^c_k$, so that the limit is continuous on each $A^c_k$. This yields the result.
The following theorem gives a concrete characterization of the space $\mathbb{L}^p_c$.

Theorem 1.25 For each $p>0$,
\[
\mathbb{L}^p_c=\{X\in\mathbb{L}^p:X\text{ has a quasi-continuous version and }\lim_{n\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0\}.
\]
Proof. We denote
\[
J_p=\{X\in\mathbb{L}^p:X\text{ has a quasi-continuous version and }\lim_{n\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0\}.
\]
Let $X\in\mathbb{L}^p_c$. We know by Proposition 1.24 that $X$ has a quasi-continuous version. Since $X\in\mathbb{L}^p_b$, we have by Proposition 1.18 that $\lim_n\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0$. Thus $X\in J_p$.

On the other hand, let $X\in J_p$ be quasi-continuous. Define $Y_n=(X\wedge n)\vee(-n)$ for all $n\in\mathbb{N}$. As $\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]\to 0$, we have $\mathbb{E}[|X-Y_n|^p]\to 0$. Moreover, for all $n\in\mathbb{N}$, as $Y_n$ is quasi-continuous, there exists a closed set $F_n$ such that $c(F^c_n)<\frac1{n^{p+1}}$ and $Y_n$ is continuous on $F_n$. It follows from Tietze's extension theorem that there exists $Z_n\in C_b(\Omega)$ such that
\[
|Z_n|\le n\quad\text{and}\quad Z_n=Y_n\text{ on }F_n.
\]
We then have
\[
\mathbb{E}[|Y_n-Z_n|^p]\le(2n)^pc(F^c_n)\le\frac{(2n)^p}{n^{p+1}}\to 0.
\]
So $\mathbb{E}[|X-Z_n|^p]\le(1\vee 2^{p-1})(\mathbb{E}[|X-Y_n|^p]+\mathbb{E}[|Y_n-Z_n|^p])\to 0$, and $X\in\mathbb{L}^p_c$.
Proposition 1.26 $\mathbb{L}^\infty_c:=\{X\in\mathbb{L}^\infty:X\text{ has a quasi-continuous version}\}$ is a closed linear subspace of $\mathbb{L}^\infty$.

Proof. For each Cauchy sequence $(X_n)_{n=1}^\infty$ of $\mathbb{L}^\infty_c$ under $\|\cdot\|_\infty$, we can find a subsequence $(X_{n_i})_{i=1}^\infty$ such that $\|X_{n_{i+1}}-X_{n_i}\|_\infty\le 2^{-i}$. We may further assume that each $X_n$ is quasi-continuous. Then it is easy to prove that for each $\varepsilon>0$, there exists an open set $G$ such that $c(G)<\varepsilon$ and
\[
|X_{n_{i+1}}-X_{n_i}|\le 2^{-i}\quad\text{for all }i\ge 1\text{ on }G^c,
\]
which implies that the limit belongs to $\mathbb{L}^\infty_c$.
As an application of Theorem 1.25, we can easily get the following results.

Proposition 1.28 Assume that $X:\Omega\to\mathbb{R}$ has a quasi-continuous version and that there exists a function $f:\mathbb{R}_+\to\mathbb{R}_+$ satisfying $\lim_{t\to\infty}\frac{f(t)}{t^p}=\infty$ and $\mathbb{E}[f(|X|)]<\infty$. Then $X\in\mathbb{L}^p_c$.

Proof. For each $\varepsilon>0$, there exists an $N>0$ such that $\frac{f(t)}{t^p}\ge\frac1\varepsilon$ for all $t\ge N$. Thus
\[
\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>N\}}]\le\varepsilon\,\mathbb{E}[f(|X|)\mathbf{1}_{\{|X|>N\}}]\le\varepsilon\,\mathbb{E}[f(|X|)].
\]
Hence $\lim_{N\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>N\}}]=0$. From Theorem 1.25 we infer $X\in\mathbb{L}^p_c$.
Lemma 1.29 Let $(P_n)_{n=1}^\infty\subseteq\mathcal{P}$ converge weakly to $P\in\mathcal{P}$. Then for each $X\in\mathbb{L}^1_c$, we have $E_{P_n}[X]\to E_P[X]$.

Proof. We may assume that $X$ is quasi-continuous; otherwise we can consider its quasi-continuous version, which does not change the value $E_Q[X]$ for each $Q\in\mathcal{P}$. For each $\varepsilon>0$, there exists an $N>0$ such that $\mathbb{E}[|X|\mathbf{1}_{\{|X|>N\}}]<\frac\varepsilon2$. Set $X_N=(X\wedge N)\vee(-N)$. We can find an open subset $G$ such that $c(G)<\frac{\varepsilon}{4N}$ and $X_N$ is continuous on $G^c$. By Tietze's extension theorem, there exists $Y\in C_b(\Omega)$ such that $|Y|\le N$ and $Y=X_N$ on $G^c$. Obviously, for each $Q\in\mathcal{P}$,
\[
|E_Q[X]-E_Q[Y]|\le E_Q[|X-X_N|]+E_Q[|X_N-Y|]\le\frac\varepsilon2+2N\cdot\frac{\varepsilon}{4N}=\varepsilon.
\]
It then follows that
\[
\limsup_{n\to\infty}E_{P_n}[X]\le\lim_{n\to\infty}E_{P_n}[Y]+\varepsilon=E_P[Y]+\varepsilon\le E_P[X]+2\varepsilon,
\]
and similarly $\liminf_{n\to\infty}E_{P_n}[X]\ge E_P[X]-2\varepsilon$. Since $\varepsilon$ can be arbitrarily small, we then have $E_{P_n}[X]\to E_P[X]$.
Remark 1.30 For a continuous $X$, the above lemma is Lemma 3.8.7 in [15].

Now we give an extension of Theorem 1.12.

Theorem 1.31 Let $\mathcal{P}$ be weakly compact and let $(X_n)_{n=1}^\infty\subseteq\mathbb{L}^1_c$ be such that $X_n\downarrow X$, q.s.. Then $\mathbb{E}[X_n]\downarrow\mathbb{E}[X]$.

Remark 1.32 It is important to note that $X$ does not necessarily belong to $\mathbb{L}^1_c$.
Proof. For the case $\mathbb{E}[X]>-\infty$: if there exists a $\delta>0$ such that $\mathbb{E}[X_n]>\mathbb{E}[X]+\delta$, $n=1,2,\ldots$, we then can find $P_n\in\mathcal{P}$ such that $E_{P_n}[X_n]>\mathbb{E}[X]+\delta-\frac1n$, $n=1,2,\ldots$. Since $\mathcal{P}$ is weakly compact, we then can find a subsequence $(P_{n_j})_{j=1}^\infty$ that converges weakly to some $P\in\mathcal{P}$. From this and Lemma 1.29, using $X_{n_j}\le X_{n_i}$ for $j\ge i$, it follows that
\[
E_P[X_{n_i}]=\lim_{j\to\infty}E_{P_{n_j}}[X_{n_i}]\ge\limsup_{j\to\infty}E_{P_{n_j}}[X_{n_j}]\ge\limsup_{j\to\infty}\Big(\mathbb{E}[X]+\delta-\frac1{n_j}\Big)=\mathbb{E}[X]+\delta,\quad i=1,2,\ldots.
\]
Letting $i\to\infty$, we get $E_P[X]\ge\mathbb{E}[X]+\delta$. This contradicts the definition of $\mathbb{E}[\cdot]$. The proof for the case $\mathbb{E}[X]=-\infty$ is analogous.
We immediately have the following corollary.

Corollary 1.33 Let $\mathcal{P}$ be weakly compact and let $(X_n)_{n=1}^\infty$ be a sequence in $\mathbb{L}^1_c$ decreasingly converging to $0$ q.s.. Then $\mathbb{E}[X_n]\downarrow 0$.
1.4 Kolmogorov's criterion

Definition 1.34 Let $I$ be a set of indices, and let $(X_t)_{t\in I}$ and $(Y_t)_{t\in I}$ be two processes indexed by $I$. We say that $Y$ is a quasi-modification of $X$ if for all $t\in I$, $X_t=Y_t$ q.s..

Remark 1.35 In the above definition, a quasi-modification is also called a modification in some papers.
We now give a Kolmogorov criterion for a process indexed by $\mathbb{R}^d$ with $d\in\mathbb{N}$.

Theorem 1.36 Let $p>0$ and let $(X_t)_{t\in[0,1]^d}$ be a process such that for all $t\in[0,1]^d$, $X_t$ belongs to $\mathbb{L}^p$. Assume that there exist positive constants $c$ and $\varepsilon$ such that
\[
\mathbb{E}[|X_t-X_s|^p]\le c|t-s|^{d+\varepsilon}.
\]
Then $X$ admits a modification $\widetilde X$ such that
\[
\mathbb{E}\Big[\Big(\sup_{s\neq t}\frac{|\widetilde X_t-\widetilde X_s|}{|t-s|^\alpha}\Big)^p\Big]<\infty,
\]
for every $\alpha\in[0,\varepsilon/p)$. As a consequence, paths of $\widetilde X$ are quasi-surely H\"older continuous of order $\alpha$ for every $\alpha<\varepsilon/p$, in the sense that there exists a Borel set $N$ of capacity $0$ such that for all $\omega\in N^c$, the map $t\mapsto\widetilde X_t(\omega)$ is H\"older continuous of order $\alpha$ for every $\alpha<\varepsilon/p$. Moreover, if $X_t\in\mathbb{L}^p_c$ for each $t$, then we also have $\widetilde X_t\in\mathbb{L}^p_c$.
Proof. Let $D$ be the set of dyadic points in $[0,1]^d$:
\[
D=\Big\{\Big(\frac{i_1}{2^n},\cdots,\frac{i_d}{2^n}\Big):n\in\mathbb{N},\ i_1,\cdots,i_d\in\{0,1,\cdots,2^n\}\Big\}.
\]
Let $\alpha\in[0,\varepsilon/p)$. We set
\[
M=\sup_{s,t\in D,\ s\neq t}\frac{|X_t-X_s|}{|t-s|^\alpha}.
\]
Thanks to the classical Kolmogorov criterion (see Revuz-Yor [111]), we know that for any $P\in\mathcal{P}$, $E_P[M^p]$ is finite and uniformly bounded with respect to $P$, so that
\[
\mathbb{E}[M^p]=\sup_{P\in\mathcal{P}}E_P[M^p]<\infty.
\]
As a consequence, the map $t\mapsto X_t$ is uniformly continuous on $D$ quasi-surely and so we can define
\[
\forall t\in[0,1]^d,\quad\widetilde X_t=\lim_{s\to t,\ s\in D}X_s.
\]
It is now clear that $\widetilde X$ satisfies the announced properties.
2 G-expectation as an Upper Expectation

In the following sections of this chapter, let $\Omega=C^d_0(\mathbb{R}_+)$ denote the space of all $\mathbb{R}^d$-valued continuous functions $(\omega_t)_{t\in\mathbb{R}_+}$ with $\omega_0=0$, equipped with the distance
\[
\rho(\omega^1,\omega^2):=\sum_{i=1}^\infty 2^{-i}\Big[\Big(\max_{t\in[0,i]}|\omega^1_t-\omega^2_t|\Big)\wedge 1\Big],
\]
and let $\bar\Omega=(\mathbb{R}^d)^{[0,\infty)}$ denote the space of all $\mathbb{R}^d$-valued functions $(\bar\omega_t)_{t\in\mathbb{R}_+}$. Let $\mathcal{B}(\Omega)$ denote the $\sigma$-algebra generated by all open sets and let $\mathcal{B}(\bar\Omega)$ denote the $\sigma$-algebra generated by all finite dimensional cylinder sets. The corresponding canonical process is $B_t(\omega)=\omega_t$ (respectively, $\bar B_t(\bar\omega)=\bar\omega_t$), $t\in[0,\infty)$, for $\omega\in\Omega$ (respectively, $\bar\omega\in\bar\Omega$). The spaces of Lipschitzian cylinder functions on $\Omega$ and $\bar\Omega$ are denoted respectively by
\[
L_{ip}(\Omega):=\{\varphi(B_{t_1},B_{t_2},\cdots,B_{t_n}):n\ge 1,\ t_1,\cdots,t_n\in[0,\infty),\ \varphi\in C_{l.Lip}(\mathbb{R}^{d\times n})\},
\]
\[
L_{ip}(\bar\Omega):=\{\varphi(\bar B_{t_1},\bar B_{t_2},\cdots,\bar B_{t_n}):n\ge 1,\ t_1,\cdots,t_n\in[0,\infty),\ \varphi\in C_{l.Lip}(\mathbb{R}^{d\times n})\}.
\]
Let $G(\cdot):S(d)\to\mathbb{R}$ be a given continuous monotonic and sublinear function. Following Sec. 2 in Chap. III, we can construct the corresponding G-expectation $\hat{\mathbb{E}}$ on $(\Omega,L_{ip}(\Omega))$. Due to the natural correspondence of $L_{ip}(\bar\Omega)$ and $L_{ip}(\Omega)$, we also construct a sublinear expectation $\bar{\mathbb{E}}$ on $(\bar\Omega,L_{ip}(\bar\Omega))$ such that $(\bar B_t(\bar\omega))_{t\ge 0}$ is a G-Brownian motion.

The main objective of this section is to find a weakly compact family of ($\sigma$-additive) probability measures on $(\Omega,\mathcal{B}(\Omega))$ to represent the G-expectation $\hat{\mathbb{E}}$. The following lemmas are variants of Lemmas 3.3 and 3.4.

Lemma 2.1 Let $0\le t_1<t_2<\cdots<t_m<\infty$ and let $(\varphi_n)_{n=1}^\infty\subseteq C_{l.Lip}(\mathbb{R}^{d\times m})$ satisfy $\varphi_n\downarrow 0$. Then $\bar{\mathbb{E}}[\varphi_n(\bar B_{t_1},\bar B_{t_2},\cdots,\bar B_{t_m})]\downarrow 0$.
We denote $\mathcal{T}:=\{\underline{t}=(t_1,\ldots,t_m):m\in\mathbb{N},\ 0\le t_1<t_2<\cdots<t_m<\infty\}$.

Lemma 2.2 Let $E$ be a finitely additive linear expectation on $L_{ip}(\bar\Omega)$ dominated by $\bar{\mathbb{E}}$. Then there exists a unique probability measure $Q$ on $(\bar\Omega,\mathcal{B}(\bar\Omega))$ such that $E[X]=E_Q[X]$ for each $X\in L_{ip}(\bar\Omega)$.
Proof. For each fixed $\underline{t}=(t_1,\ldots,t_m)\in\mathcal{T}$, by Lemma 2.1, for each sequence $(\varphi_n)_{n=1}^\infty\subseteq C_{l.Lip}(\mathbb{R}^{d\times m})$ satisfying $\varphi_n\downarrow 0$, we have $E[\varphi_n(\bar B_{t_1},\bar B_{t_2},\cdots,\bar B_{t_m})]\downarrow 0$. By the Daniell-Stone theorem (see Appendix B), there exists a unique probability measure $Q_{\underline{t}}$ on $(\mathbb{R}^{d\times m},\mathcal{B}(\mathbb{R}^{d\times m}))$ such that $E_{Q_{\underline{t}}}[\varphi]=E[\varphi(\bar B_{t_1},\bar B_{t_2},\cdots,\bar B_{t_m})]$ for each $\varphi\in C_{l.Lip}(\mathbb{R}^{d\times m})$. Thus we get a family of finite dimensional distributions $\{Q_{\underline{t}}:\underline{t}\in\mathcal{T}\}$. It is easy to check that $\{Q_{\underline{t}}:\underline{t}\in\mathcal{T}\}$ is consistent. Then by Kolmogorov's consistency theorem, there exists a probability measure $Q$ on $(\bar\Omega,\mathcal{B}(\bar\Omega))$ such that $\{Q_{\underline{t}}:\underline{t}\in\mathcal{T}\}$ is the family of finite dimensional distributions of $Q$. Assume that there exists another probability measure $\bar Q$ satisfying the condition; by the Daniell-Stone theorem, $Q$ and $\bar Q$ have the same finite-dimensional distributions. Then by the monotone class theorem, $Q=\bar Q$. The proof is complete.
Lemma 2.3 There exists a family of probability measures $\mathcal{P}_e$ on $(\bar\Omega,\mathcal{B}(\bar\Omega))$ such that
\[
\bar{\mathbb{E}}[X]=\max_{Q\in\mathcal{P}_e}E_Q[X],\quad\text{for }X\in L_{ip}(\bar\Omega).
\]
Proof. By the representation theorem of sublinear expectation and Lemma 2.2, it is easy to get the result.

For this $\mathcal{P}_e$, we define the associated capacity
\[
\bar c(A):=\sup_{Q\in\mathcal{P}_e}Q(A),\quad A\in\mathcal{B}(\bar\Omega),
\]
and the upper expectation for each $\mathcal{B}(\bar\Omega)$-measurable real function $X$:
\[
\bar{\mathbb{E}}[X]:=\sup_{Q\in\mathcal{P}_e}E_Q[X].
\]
Theorem 2.4 For $(\bar B_t)_{t\ge 0}$, there exists a continuous modification $(\widetilde B_t)_{t\ge 0}$ of $\bar B$ (i.e., $\bar c(\{\widetilde B_t\neq\bar B_t\})=0$, for each $t\ge 0$) such that $\widetilde B_0=0$.

Proof. By Lemma 2.3, the upper expectation $\sup_{Q\in\mathcal{P}_e}E_Q[\cdot]$ coincides with $\bar{\mathbb{E}}$ on $L_{ip}(\bar\Omega)$, hence
\[
\bar{\mathbb{E}}[|\bar B_t-\bar B_s|^4]=\bar d\,|t-s|^2\quad\text{for }s,t\in[0,\infty),
\]
where $\bar d$ is a constant depending only on $G$. By Theorem 1.36, there exists a continuous modification $\widetilde B$ of $\bar B$. Since $\bar c(\{\bar B_0\neq 0\})=0$, we can set $\widetilde B_0=0$. The proof is complete.
For each $Q\in\mathcal{P}_e$, let $Q\circ\widetilde B^{-1}$ denote the probability measure on $(\Omega,\mathcal{B}(\Omega))$ induced by $\widetilde B$ with respect to $Q$. We denote $\mathcal{P}_1:=\{Q\circ\widetilde B^{-1}:Q\in\mathcal{P}_e\}$. By Theorem 2.4, we get
\[
\sup_{P\in\mathcal{P}_1}E_P[|B_t-B_s|^4]=\bar{\mathbb{E}}[|\widetilde B_t-\widetilde B_s|^4]=\bar d\,|t-s|^2,\quad s,t\in[0,\infty).
\]
Applying the well-known moment criterion for tightness of Kolmogorov-Chentsov type (see Appendix B), we conclude that $\mathcal{P}_1$ is tight. We denote by $\mathcal{P}=\overline{\mathcal{P}_1}$ the closure of $\mathcal{P}_1$ under the topology of weak convergence; then $\mathcal{P}$ is weakly compact.
Now, we give the representation of the G-expectation.

Theorem 2.5 For each continuous monotonic and sublinear function $G:S(d)\to\mathbb{R}$, let $\hat{\mathbb{E}}$ be the corresponding G-expectation on $(\Omega,L_{ip}(\Omega))$. Then there exists a weakly compact family of probability measures $\mathcal{P}$ on $(\Omega,\mathcal{B}(\Omega))$ such that
\[
\hat{\mathbb{E}}[X]=\max_{P\in\mathcal{P}}E_P[X]\quad\text{for }X\in L_{ip}(\Omega).
\]
Proof. By Lemma 2.3 and Theorem 2.4, we have
\[
\hat{\mathbb{E}}[X]=\max_{P\in\mathcal{P}_1}E_P[X]\quad\text{for }X\in L_{ip}(\Omega).
\]
For each $X\in L_{ip}(\Omega)$, by Lemma 2.1, we get $\hat{\mathbb{E}}[|X-(X\wedge N)\vee(-N)|]\downarrow 0$ as $N\to\infty$. Noting that $\mathcal{P}=\overline{\mathcal{P}_1}$, by the definition of weak convergence, we get the result.
Remark 2.6 In fact, we can construct the family $\mathcal{P}$ in a more explicit way. Let $(W_t)_{t\ge 0}=(W^i_t)^d_{i=1,t\ge 0}$ be a $d$-dimensional Brownian motion on a probability space, with the filtration generated by $W$ denoted by $\mathcal{F}^W_t$. Now let $\Gamma$ be the bounded, closed and convex subset of $\mathbb{R}^{d\times d}$ such that
\[
G(A)=\frac12\sup_{\gamma\in\Gamma}\mathrm{tr}[A\gamma\gamma^T],\quad A\in S(d)
\]
(see (1.13) in Chap. II), and let $\mathcal{A}^\Gamma$ denote the collection of all $\Gamma$-valued $(\mathcal{F}^W_t)$-adapted processes. For each $\theta\in\mathcal{A}^\Gamma$, set
\[
B^\theta_t:=\int_0^t\theta_s\,dW_s,\quad t\ge 0,
\]
and let $\mathcal{P}_0$ be the collection of probability measures on the canonical space $(\Omega,\mathcal{B}(\Omega))$ induced by $\{B^\theta:\theta\in\mathcal{A}^\Gamma\}$. Then $\mathcal{P}=\overline{\mathcal{P}_0}$ (see [37] for details).
3 G-capacity and Paths of G-Brownian Motion

According to Theorem 2.5, we obtain a weakly compact family of probability measures $\mathcal{P}$ on $(\Omega,\mathcal{B}(\Omega))$ to represent the G-expectation $\hat{\mathbb{E}}[\cdot]$. For this $\mathcal{P}$, we define the associated G-capacity
\[
\hat c(A):=\sup_{P\in\mathcal{P}}P(A),\quad A\in\mathcal{B}(\Omega),
\]
and the upper expectation for each $X\in L^0(\Omega)$ which makes the following definition meaningful:
\[
\mathbb{E}[X]:=\sup_{P\in\mathcal{P}}E_P[X].
\]
By Theorem 2.5, we know that $\hat{\mathbb{E}}=\mathbb{E}$ on $L_{ip}(\Omega)$, thus the $\hat{\mathbb{E}}[|\cdot|]$-completion and the $\mathbb{E}[|\cdot|]$-completion of $L_{ip}(\Omega)$ are the same.
For each $T>0$, we also denote by $\Omega_T=C^d_0([0,T])$ the space equipped with the distance
\[
\rho(\omega^1,\omega^2)=\|\omega^1-\omega^2\|_{C^d_0([0,T])}:=\max_{0\le t\le T}|\omega^1_t-\omega^2_t|.
\]
We now prove that $L^1_G(\Omega)=\mathbb{L}^1_c$, where $\mathbb{L}^1_c$ is defined in Sec. 1. First, we need the following classical approximation lemma.
Lemma 3.1 For each $X\in C_b(\Omega)$ and $n=1,2,\cdots$, we denote
\[
X^{(n)}(\omega):=\inf_{\omega'\in\Omega}\{X(\omega')+n\|\omega-\omega'\|_{C^d_0([0,n])}\}\quad\text{for }\omega\in\Omega.
\]
Then the sequence $(X^{(n)})_{n=1}^\infty$ satisfies:

(i) $-M\le X^{(n)}\le X^{(n+1)}\le X$, where $M=\sup_{\omega\in\Omega}|X(\omega)|$;
(ii) $|X^{(n)}(\omega^1)-X^{(n)}(\omega^2)|\le n\|\omega^1-\omega^2\|_{C^d_0([0,n])}$ for $\omega^1,\omega^2\in\Omega$;
(iii) $X^{(n)}(\omega)\uparrow X(\omega)$ for each $\omega\in\Omega$.
Proof. (i) is obvious.

For (ii), we have
\[
X^{(n)}(\omega^1)-X^{(n)}(\omega^2)\le\sup_{\omega'\in\Omega}\big\{[X(\omega')+n\|\omega^1-\omega'\|_{C^d_0([0,n])}]-[X(\omega')+n\|\omega^2-\omega'\|_{C^d_0([0,n])}]\big\}\le n\|\omega^1-\omega^2\|_{C^d_0([0,n])}
\]
and, symmetrically, $X^{(n)}(\omega^2)-X^{(n)}(\omega^1)\le n\|\omega^1-\omega^2\|_{C^d_0([0,n])}$. Thus (ii) follows.
We now prove (iii). For each fixed $\omega\in\Omega$, let $\omega^n\in\Omega$ be such that
\[
X(\omega^n)+n\|\omega-\omega^n\|_{C^d_0([0,n])}\le X^{(n)}(\omega)+\frac1n.
\]
It is clear that $n\|\omega-\omega^n\|_{C^d_0([0,n])}\le 2M+1$, i.e., $\|\omega-\omega^n\|_{C^d_0([0,n])}\le\frac{2M+1}n$. Since $X\in C_b(\Omega)$, we get $X(\omega^n)\to X(\omega)$ as $n\to\infty$. We have
\[
X(\omega)\ge X^{(n)}(\omega)\ge X(\omega^n)+n\|\omega-\omega^n\|_{C^d_0([0,n])}-\frac1n,
\]
thus
\[
n\|\omega-\omega^n\|_{C^d_0([0,n])}\le|X(\omega)-X(\omega^n)|+\frac1n.
\]
We also have
\[
X(\omega^n)-X(\omega)+n\|\omega-\omega^n\|_{C^d_0([0,n])}\ge X^{(n)}(\omega)-X(\omega)\ge X(\omega^n)-X(\omega)+n\|\omega-\omega^n\|_{C^d_0([0,n])}-\frac1n.
\]
From the above two relations, we obtain
\[
|X^{(n)}(\omega)-X(\omega)|\le|X(\omega^n)-X(\omega)|+n\|\omega-\omega^n\|_{C^d_0([0,n])}+\frac1n\le 2\Big(|X(\omega^n)-X(\omega)|+\frac1n\Big)\to 0\quad\text{as }n\to\infty.
\]
Thus (iii) is obtained.
Proposition 3.2 For each $X\in C_b(\Omega)$ and $\varepsilon>0$, there exists a $Y\in L_{ip}(\Omega)$ such that $\mathbb{E}[|Y-X|]\le\varepsilon$.

Proof. By Lemma 3.1 and Corollary 1.33, we can choose $T>0$ and a bounded function $\bar X$ depending only on $\omega|_{[0,T]}$ such that $\mathbb{E}[|X-\bar X|]\le\varepsilon/3$, where $|\bar X(\omega)|\le M:=\sup_{\omega\in\Omega}|X(\omega)|$ and
\[
|\bar X(\omega)-\bar X(\omega')|\le\bar C\|\omega-\omega'\|_{C^d_0([0,T])}\quad\text{for }\omega,\omega'\in\Omega,
\]
with some constant $\bar C>0$.
Now for each positive integer $n$, we introduce a mapping $\omega^{(n)}(\cdot):\Omega\to\Omega$:
\[
\omega^{(n)}(\omega)(t)=\sum_{k=0}^{n-1}\mathbf{1}_{[t^n_k,t^n_{k+1})}(t)\,\frac{(t^n_{k+1}-t)\,\omega(t^n_k)+(t-t^n_k)\,\omega(t^n_{k+1})}{t^n_{k+1}-t^n_k}+\mathbf{1}_{[T,\infty)}(t)\,\omega(t),
\]
where $t^n_k=\frac{kT}n$, $k=0,1,\cdots,n$. We set $\bar X^{(n)}(\omega):=\bar X(\omega^{(n)}(\omega))$; then (with $\bar C$ the Lipschitz constant of $\bar X$)
\[
|\bar X^{(n)}(\omega)-\bar X^{(n)}(\omega')|\le\bar C\sup_{t\in[0,T]}|\omega^{(n)}(\omega)(t)-\omega^{(n)}(\omega')(t)|=\bar C\sup_{k\in\{0,\cdots,n\}}|\omega(t^n_k)-\omega'(t^n_k)|.
\]
Thus $\bar X^{(n)}$ depends only on $(\omega(t^n_0),\cdots,\omega(t^n_n))$ in a bounded Lipschitz way, i.e., $\bar X^{(n)}\in L_{ip}(\Omega)$.
We now choose a compact subset $K\subseteq\Omega$ such that $\mathbb{E}[\mathbf{1}_{K^c}]\le\varepsilon/(6M)$. Since $\sup_{\omega\in K}\sup_{t\in[0,T]}|\omega(t)-\omega^{(n)}(\omega)(t)|\to 0$ as $n\to\infty$, we can choose a sufficiently large $n_0$ such that
\[
\sup_{\omega\in K}|\bar X(\omega)-\bar X^{(n_0)}(\omega)|=\sup_{\omega\in K}|\bar X(\omega)-\bar X(\omega^{(n_0)}(\omega))|<\varepsilon/3.
\]
Set $Y:=\bar X^{(n_0)}$; it follows that
\[
\mathbb{E}[|X-Y|]\le\mathbb{E}[|X-\bar X|]+\mathbb{E}[|\bar X-\bar X^{(n_0)}|]\le\mathbb{E}[|X-\bar X|]+\mathbb{E}[\mathbf{1}_K|\bar X-\bar X^{(n_0)}|]+2M\,\mathbb{E}[\mathbf{1}_{K^c}]<\varepsilon.
\]
The proof is complete.
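The piecewise-linear interpolation map $\omega\mapsto\omega^{(n)}$ used in the proof of Proposition 3.2 can be sketched in code. A hedged illustration with our own function names, for a scalar path sampled on a fine grid:

```python
import numpy as np

def interpolate_path(ts, omega, T, n):
    """Piecewise-linear interpolation of the path omega (sampled at
    times ts) through its values at t_k = k*T/n, k = 0, ..., n,
    mimicking the map omega -> omega^(n); the path is left
    unchanged on [T, infinity)."""
    knots = np.linspace(0.0, T, n + 1)
    knot_vals = np.interp(knots, ts, omega)   # omega(t_k)
    out = np.interp(ts, knots, knot_vals)     # linear between the knots
    return np.where(ts >= T, omega, out)

# omega^(n) depends on omega only through omega(t_0), ..., omega(t_n),
# so a Lipschitz functional of omega^(n) is a cylinder function.
ts = np.linspace(0.0, 1.0, 2001)
omega = np.sin(8 * np.pi * ts) * np.sqrt(ts)
for n in (64, 512):
    err = np.max(np.abs(omega - interpolate_path(ts, omega, 1.0, n)))
    print(n, err)   # the sup-distance shrinks as n grows
```

This is exactly the mechanism of the proof: on a compact (hence equicontinuous) set of paths the sup-distance between $\omega$ and $\omega^{(n)}$ tends to $0$ uniformly, while $\bar X^{(n)}$ is a cylinder function of the finitely many knot values.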
By Proposition 3.2, we can easily get $L^1_G(\Omega)=\mathbb{L}^1_c$. Furthermore, we can get $L^p_G(\Omega)=\mathbb{L}^p_c$ for each $p>0$. Thus, we obtain a pathwise description of $L^p_G(\Omega)$ for each $p>0$:
\[
L^p_G(\Omega)=\{X\in L^0(\Omega):X\text{ has a quasi-continuous version and }\lim_{n\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0\}.
\]
Furthermore, $\hat{\mathbb{E}}[X]=\mathbb{E}[X]$ for each $X\in L^1_G(\Omega)$.
Exercise 3.3 Show that, for each $p>0$,
\[
L^p_G(\Omega_T)=\{X\in L^0(\Omega_T):X\text{ has a quasi-continuous version and }\lim_{n\to\infty}\mathbb{E}[|X|^p\mathbf{1}_{\{|X|>n\}}]=0\}.
\]
Notes and Comments

The results of this chapter for G-Brownian motion were mainly obtained by Denis, Hu and Peng (2008) [37] (see also Denis and Martini (2006) [38] and the related comments after Chapter III). Hu and Peng (2009) [58] then introduced an intrinsic and simple approach, which can be regarded as a combination and extension of the original Brownian motion construction approach of Kolmogorov (for more general stochastic processes) and a sort of cylinder-Lipschitz-functions technique already introduced in Chap. III. Section 1 is from [37], and Theorem 2.5 was first obtained in [37], whereas the contents of Sections 2 and 3 are mainly from [58].

The Choquet capacity was first introduced by Choquet (1953) [24]; see also Dellacherie (1972) [32] and the references therein for more properties. The capacitability of the Choquet capacity was first studied by Choquet [24] in the 2-alternating case; see Dellacherie and Meyer (1978 and 1982) [33], Huber and Strassen (1972) [62] and the references therein for the more general case. It seems that the notion of upper expectations was first discussed by Huber (1981) [61] in robust statistics. Recently, it was rediscovered in mathematical finance, especially in risk measure theory; see Delbaen (1992, 2002) [34, 35], Föllmer and Schied (2002, 2004) [50], etc.
Appendix A

Preliminaries in Functional Analysis

1 Completion of Normed Linear Spaces

In this section, we suppose that $\mathcal{X}$ is a linear space under the norm $\|\cdot\|$.

Definition 1.1 $\{x_n\}\subseteq\mathcal{X}$ is a Cauchy sequence if $\{x_n\}$ satisfies Cauchy's convergence condition:
\[
\lim_{n,m\to\infty}\|x_n-x_m\|=0.
\]
Definition 1.2 A normed linear space $\mathcal{X}$ is called a Banach space if it is complete, i.e., if every Cauchy sequence $\{x_n\}$ of $\mathcal{X}$ converges strongly to a point $x_\infty$ of $\mathcal{X}$:
\[
\lim_{n\to\infty}\|x_n-x_\infty\|=0.
\]
Such a limit point $x_\infty$ is uniquely determined, because of the triangle inequality $\|x-x'\|\le\|x-x_n\|+\|x_n-x'\|$.

The completeness of a Banach space plays an important role in functional analysis. We introduce the following theorem of completion.

Theorem 1.3 Let $\mathcal{X}$ be a normed linear space which is not complete. Then $\mathcal{X}$ is isomorphic and isometric to a dense linear subspace of a Banach space $\widetilde{\mathcal{X}}$, i.e., there exists a one-to-one correspondence $x\leftrightarrow\widetilde x$ of $\mathcal{X}$ onto a dense linear subspace of $\widetilde{\mathcal{X}}$ such that
\[
\widetilde{x+y}=\widetilde x+\widetilde y,\quad\widetilde{\alpha x}=\alpha\widetilde x,\quad\|\widetilde x\|=\|x\|.
\]
The space $\widetilde{\mathcal{X}}$ is uniquely determined up to isometric isomorphism.

For a proof see Yosida [125] (1980, p.56).
2 The Hahn-Banach Extension Theorem

Definition 2.1 Let $T_1$ and $T_2$ be two linear operators with domains $D(T_1)$ and $D(T_2)$ both contained in a linear space $\mathcal{X}$, and with ranges $R(T_1)$ and $R(T_2)$ both contained in a linear space $\mathcal{Y}$. Then $T_1=T_2$ if and only if $D(T_1)=D(T_2)$ and $T_1x=T_2x$ for all $x\in D(T_1)$. If $D(T_1)\subseteq D(T_2)$ and $T_1x=T_2x$ for all $x\in D(T_1)$, then $T_2$ is called an extension of $T_1$, or $T_1$ is called a restriction of $T_2$.
Theorem 2.2 (Hahn-Banach extension theorem in real linear spaces) Let $\mathcal{X}$ be a real linear space and let $p(x)$ be a real-valued function defined on $\mathcal{X}$ satisfying the following conditions:
\[
p(x+y)\le p(x)+p(y)\quad\text{(subadditivity)};
\]
\[
p(\alpha x)=\alpha p(x)\ \text{for }\alpha\ge 0\quad\text{(positive homogeneity)}.
\]
Let $L$ be a real linear subspace of $\mathcal{X}$ and let $f_0$ be a real-valued linear functional defined on $L$:
\[
f_0(\alpha x+\beta y)=\alpha f_0(x)+\beta f_0(y)\quad\text{for }x,y\in L\text{ and }\alpha,\beta\in\mathbb{R}.
\]
Let $f_0$ satisfy $f_0(x)\le p(x)$ on $L$. Then there exists a real-valued linear functional $F$ defined on $\mathcal{X}$ such that

(i) $F$ is an extension of $f_0$, i.e., $F(x)=f_0(x)$ for all $x\in L$;
(ii) $F(x)\le p(x)$ for $x\in\mathcal{X}$.

For a proof see Yosida [125] (1980, p.102).
Theorem 2.3 (Hahn-Banach extension theorem in normed linear spaces) Let $\mathcal{X}$ be a normed linear space under the norm $\|\cdot\|$, let $L$ be a linear subspace of $\mathcal{X}$ and let $f_1$ be a continuous linear functional defined on $L$. Then there exists a continuous linear functional $f$, defined on $\mathcal{X}$, such that

(i) $f$ is an extension of $f_1$;
(ii) $\|f_1\|=\|f\|$.

For a proof see for example Yosida [125] (1980, p.106).
For a proof see for example Yosida [125] (1980, p.106).
3 Dini's Theorem and Tietze's Extension Theorem

Theorem 3.1 (Dini's theorem) Let $\mathcal{X}$ be a compact topological space. If a monotone sequence of bounded continuous functions converges pointwise to a continuous function, then it also converges uniformly.

Theorem 3.2 (Tietze's extension theorem) Let $L$ be a closed subset of a normal space $\mathcal{X}$ and let $f:L\to\mathbb{R}$ be a continuous function. Then there exists a continuous extension of $f$ to all of $\mathcal{X}$ with values in $\mathbb{R}$.
Appendix B

Preliminaries in Probability Theory

1 Kolmogorov's Extension Theorem

Let $X$ be a random variable with values in $\mathbb{R}^n$ defined on a probability space $(\Omega,\mathcal{F},P)$. Denote by $\mathcal{B}$ the Borel $\sigma$-algebra on $\mathbb{R}^n$. We define $X$'s law of distribution $P_X$ and its expectation $E_P$ with respect to $P$ as follows, respectively:
\[
P_X(B):=P(\{\omega:X(\omega)\in B\});\qquad E_P[X]:=\int_{\mathbb{R}^n}x\,P_X(dx),
\]
where $B\in\mathcal{B}$. In fact, we have $P_X(B)=E_P[\mathbf{1}_B(X)]$.
Now let $(X_t)_{t \in T}$ be a stochastic process with values in $\mathbb{R}^n$ defined on a probability space $(\Omega, \mathcal{F}, P)$, where the parameter space $T$ is usually the half-line $[0, +\infty)$.
Definition 1.1 The finite-dimensional distributions of the process $(X_t)_{t \in T}$ are the measures $\mu_{t_1, \ldots, t_k}$ defined on $\mathbb{R}^{nk}$, $k = 1, 2, \ldots$, by
\[ \mu_{t_1, \ldots, t_k}(B_1 \times \cdots \times B_k) := P[X_{t_1} \in B_1, \ldots, X_{t_k} \in B_k], \quad t_i \in T, \ i = 1, 2, \ldots, k, \]
where $B_i \in \mathcal{B}$, $i = 1, 2, \ldots, k$.
The family of all finite-dimensional distributions determines many (but not all) important properties of the process $(X_t)_{t \in T}$.
Conversely, given a family $\{\mu_{t_1, \ldots, t_k} : t_i \in T, \ i = 1, 2, \ldots, k, \ k \in \mathbb{N}\}$ of probability measures on $\mathbb{R}^{nk}$, it is important to be able to construct a stochastic process $(Y_t)_{t \in T}$ with $\mu_{t_1, \ldots, t_k}$ as its finite-dimensional distributions. The following famous theorem states that this can be done provided that the $\mu_{t_1, \ldots, t_k}$ satisfy two natural consistency conditions.
Theorem 1.2 (Kolmogorov's extension theorem) For all $t_1, t_2, \ldots, t_k$, $k \in \mathbb{N}$, let $\mu_{t_1, \ldots, t_k}$ be probability measures on $\mathbb{R}^{nk}$ such that
\[ \mu_{t_{\sigma(1)}, \ldots, t_{\sigma(k)}}(B_1 \times \cdots \times B_k) = \mu_{t_1, \ldots, t_k}(B_{\sigma^{-1}(1)} \times \cdots \times B_{\sigma^{-1}(k)}) \]
for all permutations $\sigma$ of $\{1, 2, \ldots, k\}$, and
\[ \mu_{t_1, \ldots, t_k}(B_1 \times \cdots \times B_k) = \mu_{t_1, \ldots, t_k, t_{k+1}, \ldots, t_{k+m}}(B_1 \times \cdots \times B_k \times \mathbb{R}^n \times \cdots \times \mathbb{R}^n) \]
for all $m \in \mathbb{N}$, where the product on the right-hand side has a total of $k+m$ factors. Then there exist a probability space $(\Omega, \mathcal{F}, P)$ and a stochastic process $(X_t)$ on $\Omega$, $X_t : \Omega \to \mathbb{R}^n$, such that
\[ \mu_{t_1, \ldots, t_k}(B_1 \times \cdots \times B_k) = P[X_{t_1} \in B_1, \ldots, X_{t_k} \in B_k] \]
for all $t_i \in T$ and all Borel sets $B_i$, $i = 1, 2, \ldots, k$, $k \in \mathbb{N}$.
For a proof see Kolmogorov [74] (1956, p.29).
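To make the two consistency conditions concrete, here is a small numerical check (an illustration added here, not part of the original text) using the finite-dimensional distributions of standard Brownian motion, which are centered Gaussians with covariance $\mathrm{Cov}(B_s, B_t) = \min(s,t)$; for a Gaussian family, marginalizing out an extra time point deletes a row and column of the covariance matrix, and permuting the time points permutes it:

```python
import numpy as np

def bm_cov(times):
    # Covariance matrix of (B_{t_1}, ..., B_{t_k}) for standard Brownian
    # motion: Cov(B_s, B_t) = min(s, t).
    t = np.asarray(times, dtype=float)
    return np.minimum.outer(t, t)

times = [0.5, 1.0, 2.0]
S = bm_cov(times)

# Second consistency condition: integrating out an extra coordinate
# t_{k+1} = 3.0 amounts, for Gaussians, to deleting the last row/column.
S_ext = bm_cov(times + [3.0])
assert np.allclose(S_ext[:3, :3], S)

# First consistency condition: permuting time indices permutes rows/columns.
perm = [2, 0, 1]
S_perm = bm_cov([times[i] for i in perm])
assert np.allclose(S_perm, S[np.ix_(perm, perm)])
print("both Kolmogorov consistency conditions hold")
```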
2 Kolmogorov's Criterion
Definition 2.1 Suppose that $(X_t)$ and $(Y_t)$ are two stochastic processes defined on $(\Omega, \mathcal{F}, P)$. Then we say that $(X_t)$ is a version of (or a modification of) $(Y_t)$ if
\[ P(\omega : X_t(\omega) = Y_t(\omega)) = 1 \quad \text{for all} \ t. \]
Theorem 2.2 (Kolmogorov's continuity criterion) Suppose that the process $X = (X_t)_{t \ge 0}$ satisfies the following condition: for all $T > 0$ there exist positive constants $\alpha$, $\beta$, $D$ such that
\[ E[|X_t - X_s|^{\alpha}] \le D |t - s|^{1+\beta}, \quad 0 \le s, t \le T. \]
Then there exists a continuous version of $X$.
For a proof see Stroock and Varadhan [117] (1979, p.51).
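For example, Brownian motion satisfies the criterion with $\alpha = 4$, $\beta = 1$ and $D = 3$, since $E[|B_t - B_s|^4] = 3|t-s|^2$ for Gaussian increments. A short Monte Carlo sketch of this moment identity (an added illustration, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)

# For Brownian motion, E|B_t - B_s|^4 = 3|t - s|^2, so Kolmogorov's
# continuity criterion holds with alpha = 4, D = 3, beta = 1.
s, t, trials = 0.3, 0.7, 200_000
increments = rng.normal(0.0, np.sqrt(t - s), size=trials)  # B_t - B_s ~ N(0, t-s)
estimate = np.mean(increments**4)
print(estimate, 3 * (t - s) ** 2)  # Monte Carlo estimate vs exact value 0.48
```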
Let $E$ be a metric space and $\mathcal{B}$ be the Borel $\sigma$-algebra on $E$. We recall a few facts about the weak convergence of probability measures on $(E, \mathcal{B})$. If $P$ is such a measure, we say that a subset $A$ of $E$ is a $P$-continuity set if $P(\partial A) = 0$, where $\partial A$ is the boundary of $A$.
Proposition 2.3 For probability measures $P_n$ ($n \in \mathbb{N}$) and $P$, the following conditions are equivalent:
(i) For every bounded continuous function $f$ on $E$, $\lim_{n} \int f \, dP_n = \int f \, dP$;
(ii) For every bounded uniformly continuous function $f$ on $E$, $\lim_{n} \int f \, dP_n = \int f \, dP$;
(iii) For every closed subset $F$ of $E$, $\limsup_{n} P_n(F) \le P(F)$;
(iv) For every open subset $G$ of $E$, $\liminf_{n} P_n(G) \ge P(G)$;
(v) For every $P$-continuity set $A$, $\lim_{n} P_n(A) = P(A)$.
Definition 2.4 If $P_n$ and $P$ satisfy the equivalent conditions of the preceding proposition, we say that $(P_n)$ converges weakly to $P$.
Now let $\Pi$ be a family of probability measures on $(E, \mathcal{B})$.
Definition 2.5 A family $\Pi$ is weakly relatively compact if every sequence of $\Pi$ contains a weakly convergent subsequence.
Definition 2.6 A family $\Pi$ is tight if for every $\varepsilon \in (0,1)$ there exists a compact set $K_\varepsilon$ such that
\[ P(K_\varepsilon) \ge 1 - \varepsilon \quad \text{for every} \ P \in \Pi. \]
With this definition, we have the following theorem.
Theorem 2.7 (Prokhorov's criterion) If a family $\Pi$ is tight, then it is weakly relatively compact. If $E$ is a Polish space (i.e., a separable completely metrizable topological space), then a weakly relatively compact family $\Pi$ is tight.
Definition 2.8 If $(X_n)_{n \in \mathbb{N}}$ and $X$ are random variables taking their values in a metric space $E$, we say that $(X_n)$ converges in distribution (or converges in law) to $X$ if their laws $P_{X_n}$ converge weakly to the law $P_X$ of $X$.
We stress the fact that the $(X_n)$ and $X$ need not be defined on the same probability space.
Theorem 2.9 (Kolmogorov's criterion for weak compactness) Let $(X^n)$ be a sequence of $\mathbb{R}^d$-valued continuous processes defined on probability spaces $(\Omega^n, \mathcal{F}^n, P^n)$ such that
(i) the family $\{P^n_{X^n_0}\}$ of initial laws is tight in $\mathbb{R}^d$;
(ii) there exist three strictly positive constants $\alpha$, $\beta$, $\gamma$ such that for each $s, t \in \mathbb{R}_+$ and each $n$,
\[ E_{P^n}[|X^n_s - X^n_t|^{\alpha}] \le \beta |s - t|^{\gamma + 1}; \]
then the set $\{P^n_{X^n}\}$ of the laws of the $(X^n)$ is weakly relatively compact.
For the proof see Revuz and Yor [111] (1999, p.517).
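A numerical sketch of condition (i) of Proposition 2.3 (an illustration added here, not part of the original text): for normalized sums $S_n$ of i.i.d. uniform random variables, $E[f(S_n)]$ approaches $E[f(Z)]$ with $Z \sim N(0,1)$ for the bounded continuous test function $f(x) = 1/(1+x^2)$, which is convergence in law of $S_n$ to $Z$:

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):
    # a bounded continuous test function
    return 1.0 / (1.0 + x**2)

def mean_f_of_Sn(n, trials=200_000):
    # S_n = (U_1 + ... + U_n)/sqrt(n), U_i uniform on [-sqrt(3), sqrt(3)]
    # so that each U_i has mean 0 and variance 1.
    U = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(trials, n))
    return f(U.sum(axis=1) / np.sqrt(n)).mean()

target = f(rng.standard_normal(1_000_000)).mean()  # E[f(Z)], roughly 0.656
errs = [abs(mean_f_of_Sn(n) - target) for n in (1, 2, 10, 50)]
print(errs)  # the first entry is the largest; the later entries are near 0
```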
3 Daniell-Stone Theorem
Let $(\Omega, \mathcal{F}, \mu)$ be a measure space, on which we can define integration. One essential property of integration is its linearity; thus it can be seen as a linear functional on $L^1(\Omega, \mathcal{F}, \mu)$. This idea leads to another approach to defining the integral: Daniell's integral.
Definition 3.1 Let $\Omega$ be an abstract set and $\mathcal{H}$ be a linear space formed by a family of real-valued functions on $\Omega$. $\mathcal{H}$ is called a vector lattice if
\[ f \in \mathcal{H} \Rightarrow |f| \in \mathcal{H}, \ f \wedge 1 \in \mathcal{H}. \]
Definition 3.2 Suppose that $\mathcal{H}$ is a vector lattice on $\Omega$ and $I$ is a positive linear functional on $\mathcal{H}$, i.e.,
\[ f, g \in \mathcal{H}, \ \alpha, \beta \in \mathbb{R} \ \Rightarrow \ I(\alpha f + \beta g) = \alpha I(f) + \beta I(g); \]
\[ f \in \mathcal{H}, \ f \ge 0 \ \Rightarrow \ I(f) \ge 0. \]
If $I$ satisfies the following condition:
\[ f_n \in \mathcal{H}, \ f_n \downarrow 0 \ \Rightarrow \ I(f_n) \downarrow 0, \]
or equivalently,
\[ f_n \in \mathcal{H}, \ f_n \uparrow f \in \mathcal{H} \ \Rightarrow \ I(f) = \lim_{n} I(f_n), \]
then $I$ is called a Daniell integral on $\mathcal{H}$.
Theorem 3.3 (Daniell-Stone theorem) Suppose that $\mathcal{H}$ is a vector lattice on $\Omega$ and $I$ is a Daniell integral on $\mathcal{H}$. Then there exists a measure $\mu$ on $\mathcal{F}$, where $\mathcal{F} := \sigma(f : f \in \mathcal{H})$, such that $\mathcal{H} \subset L^1(\Omega, \mathcal{F}, \mu)$ and $I(f) = \mu(f)$ for $f \in \mathcal{H}$. Furthermore, if $1 \in \mathcal{H}^+$, where $\mathcal{H}^+ := \{f : \exists \, f_n \ge 0, \ f_n \in \mathcal{H} \ \text{such that} \ f_n \uparrow f\}$, then this measure $\mu$ is unique and is $\sigma$-finite.
For the proof see Dellacherie and Meyer [33] (1978, p.59), Dudley [41] (1995, p.142), or Yan [123] (1998, p.74).
Appendix C
Solutions of Parabolic Partial Differential Equations
1 The Definition of Viscosity Solutions
The notion of viscosity solutions was first introduced by Crandall and Lions (1981) [28] and (1983) [29] (see also Evans's contributions (1978) [45] and (1980) [46]) for first-order Hamilton-Jacobi equations, with the uniqueness proof given in [29]. The proof for the second-order case, for Hamilton-Jacobi-Bellman equations, was first developed by Lions (1982) [80] and (1983) [81] using stochastic control verification arguments. A breakthrough in the second-order PDE theory was achieved by Jensen (1988) [67]. For the other important contributions in the development of this theory we refer to the well-known user's guide of Crandall, Ishii and Lions (1992) [30]. For the reader's convenience, we systematically adapt those parts of [30] required in this book to the parabolic setting. However, to my knowledge, the presentation and the related proofs of the domination theorems seem to be a new generalization of the maximum principle presented in [30]. Books on this theory include, among others, Barles (1994) [8], Fleming and Soner (1992) [49], and Yong and Zhou (1999) [124].
Let $T > 0$ be fixed and let $O \subset [0,T] \times \mathbb{R}^N$. We set
\[ \mathrm{USC}(O) = \{\text{upper semicontinuous functions} \ u : O \to \mathbb{R}\}, \]
\[ \mathrm{LSC}(O) = \{\text{lower semicontinuous functions} \ u : O \to \mathbb{R}\}. \]
Consider the following parabolic PDE:
\[ \text{(E)} \quad \partial_t u - G(t, x, u, Du, D^2 u) = 0 \ \text{on} \ (0,T) \times \mathbb{R}^N, \qquad \text{(IC)} \quad u(0,x) = \varphi(x) \ \text{for} \ x \in \mathbb{R}^N, \qquad (1.1) \]
where $G : [0,T] \times \mathbb{R}^N \times \mathbb{R} \times \mathbb{R}^N \times S(N) \to \mathbb{R}$, $\varphi \in C(\mathbb{R}^N)$. We always suppose that $G$ is continuous and satisfies the following degenerate elliptic condition:
\[ G(t, x, r, p, X) \ge G(t, x, r, p, Y) \quad \text{whenever} \ X \ge Y. \qquad (1.2) \]
Next we recall the definition of viscosity solutions from Crandall, Ishii and Lions [30]. Let $u : (0,T) \times \mathbb{R}^N \to \mathbb{R}$ and $(t,x) \in (0,T) \times \mathbb{R}^N$. We denote by $\mathcal{P}^{2,+} u(t,x)$ (the parabolic superjet of $u$ at $(t,x)$) the set of triples $(a, p, X) \in \mathbb{R} \times \mathbb{R}^N \times S(N)$ such that
\[ u(s,y) \le u(t,x) + a(s-t) + \langle p, y-x \rangle + \tfrac{1}{2} \langle X(y-x), y-x \rangle + o(|s-t| + |y-x|^2), \]
and by $\bar{\mathcal{P}}^{2,+} u(t,x)$ its closure:
\[ \bar{\mathcal{P}}^{2,+} u(t,x) := \{(a,p,X) \in \mathbb{R} \times \mathbb{R}^N \times S(N) : \exists (t_n, x_n, a_n, p_n, X_n) \ \text{such that} \ (a_n, p_n, X_n) \in \mathcal{P}^{2,+} u(t_n, x_n) \ \text{and} \ (t_n, x_n, u(t_n,x_n), a_n, p_n, X_n) \to (t, x, u(t,x), a, p, X)\}. \]
Similarly, we define $\mathcal{P}^{2,-} u(t,x)$ (the parabolic subjet of $u$ at $(t,x)$) by
\[ \mathcal{P}^{2,-} u(t,x) := -\mathcal{P}^{2,+}(-u)(t,x) \]
and $\bar{\mathcal{P}}^{2,-} u(t,x)$ by
\[ \bar{\mathcal{P}}^{2,-} u(t,x) := -\bar{\mathcal{P}}^{2,+}(-u)(t,x). \]
Definition 1.1 (i) A viscosity subsolution of (E) on $(0,T) \times \mathbb{R}^N$ is a function $u \in \mathrm{USC}((0,T) \times \mathbb{R}^N)$ such that for each $(t,x) \in (0,T) \times \mathbb{R}^N$,
\[ a - G(t, x, u(t,x), p, X) \le 0 \quad \text{for} \ (a,p,X) \in \mathcal{P}^{2,+} u(t,x); \]
likewise, a viscosity supersolution of (E) on $(0,T) \times \mathbb{R}^N$ is a function $v \in \mathrm{LSC}((0,T) \times \mathbb{R}^N)$ such that for each $(t,x) \in (0,T) \times \mathbb{R}^N$,
\[ a - G(t, x, v(t,x), p, X) \ge 0 \quad \text{for} \ (a,p,X) \in \mathcal{P}^{2,-} v(t,x); \]
and a viscosity solution of (E) on $(0,T) \times \mathbb{R}^N$ is a function that is simultaneously a viscosity subsolution and a viscosity supersolution of (E) on $(0,T) \times \mathbb{R}^N$.
(ii) A function $u \in \mathrm{USC}([0,T) \times \mathbb{R}^N)$ is called a viscosity subsolution of (1.1) on $[0,T) \times \mathbb{R}^N$ if $u$ is a viscosity subsolution of (E) on $(0,T) \times \mathbb{R}^N$ and $u(0,x) \le \varphi(x)$ for $x \in \mathbb{R}^N$; the appropriate notions of a viscosity supersolution and a viscosity solution of (1.1) on $[0,T) \times \mathbb{R}^N$ are then obvious.
We now give the following equivalent definition (see Crandall, Ishii and Lions [30]).
Definition 1.2 A viscosity subsolution of (E), or $G$-subsolution, on $(0,T) \times \mathbb{R}^N$ is a function $u \in \mathrm{USC}((0,T) \times \mathbb{R}^N)$ such that for all $(t,x) \in (0,T) \times \mathbb{R}^N$ and $\phi \in C^2((0,T) \times \mathbb{R}^N)$ such that $u(t,x) = \phi(t,x)$ and $u < \phi$ on $(0,T) \times \mathbb{R}^N \setminus \{(t,x)\}$, we have
\[ \partial_t \phi(t,x) - G(t, x, \phi(t,x), D\phi(t,x), D^2\phi(t,x)) \le 0; \]
likewise, a viscosity supersolution of (E), or $G$-supersolution, on $(0,T) \times \mathbb{R}^N$ is a function $v \in \mathrm{LSC}((0,T) \times \mathbb{R}^N)$ such that for all $(t,x) \in (0,T) \times \mathbb{R}^N$ and $\phi \in C^2((0,T) \times \mathbb{R}^N)$ such that $v(t,x) = \phi(t,x)$ and $v > \phi$ on $(0,T) \times \mathbb{R}^N \setminus \{(t,x)\}$, we have
\[ \partial_t \phi(t,x) - G(t, x, \phi(t,x), D\phi(t,x), D^2\phi(t,x)) \ge 0; \]
and a viscosity solution of (E) on $(0,T) \times \mathbb{R}^N$ is a function that is simultaneously a viscosity subsolution and a viscosity supersolution of (E) on $(0,T) \times \mathbb{R}^N$. The definition of a viscosity solution of (1.1) on $[0,T) \times \mathbb{R}^N$ is the same as in the above definition.
2 Comparison Theorem
We will use the following well-known result in viscosity solution theory (see Theorem 8.3 of Crandall, Ishii and Lions [30]).
Theorem 2.1 Let $u_i \in \mathrm{USC}((0,T) \times \mathbb{R}^{N_i})$ for $i = 1, \ldots, k$. Let $\phi$ be a function defined on $(0,T) \times \mathbb{R}^{N_1 + \cdots + N_k}$ such that $(t, x_1, \ldots, x_k) \mapsto \phi(t, x_1, \ldots, x_k)$ is once continuously differentiable in $t$ and twice continuously differentiable in $(x_1, \ldots, x_k) \in \mathbb{R}^{N_1 + \cdots + N_k}$. Suppose that $\hat{t} \in (0,T)$, $\hat{x}_i \in \mathbb{R}^{N_i}$ for $i = 1, \ldots, k$ and
\[ w(t, x_1, \ldots, x_k) := u_1(t, x_1) + \cdots + u_k(t, x_k) - \phi(t, x_1, \ldots, x_k) \le w(\hat{t}, \hat{x}_1, \ldots, \hat{x}_k) \]
for $t \in (0,T)$ and $x_i \in \mathbb{R}^{N_i}$. Assume, moreover, that there exists an $r > 0$ such that for every $M > 0$ there exists a constant $C$ such that, for $i = 1, \ldots, k$,
\[ b_i \le C \ \text{whenever} \ (b_i, q_i, X_i) \in \mathcal{P}^{2,+} u_i(t, x_i), \ |x_i - \hat{x}_i| + |t - \hat{t}| \le r \ \text{and} \ |u_i(t,x_i)| + |q_i| + \|X_i\| \le M. \qquad (2.3) \]
Then for each $\varepsilon > 0$, there exist $b_i \in \mathbb{R}$ and $X_i \in S(N_i)$ such that
(i) $(b_i, D_{x_i}\phi(\hat{t}, \hat{x}_1, \ldots, \hat{x}_k), X_i) \in \bar{\mathcal{P}}^{2,+} u_i(\hat{t}, \hat{x}_i)$, $i = 1, \ldots, k$;
(ii)
\[ -\Big(\frac{1}{\varepsilon} + \|A\|\Big) I \le \begin{pmatrix} X_1 & & 0 \\ & \ddots & \\ 0 & & X_k \end{pmatrix} \le A + \varepsilon A^2; \]
(iii) $b_1 + \cdots + b_k = \partial_t \phi(\hat{t}, \hat{x}_1, \ldots, \hat{x}_k)$,
where $A = D^2_x \phi(\hat{t}, \hat{x}) \in S(N_1 + \cdots + N_k)$.
Observe that the above condition (2.3) is guaranteed by having each $u_i$ be a subsolution of a parabolic equation, as in the following two theorems.
In this section we give comparison theorems for $G$-solutions with different functions $G$.
(G) We assume that the functions
\[ G_i : [0,T] \times \mathbb{R}^N \times \mathbb{R} \times \mathbb{R}^N \times S(N) \to \mathbb{R}, \quad i = 1, \ldots, k, \]
are continuous in the following sense: for each $t \in [0,T)$, $v \in \mathbb{R}$, $x, y, p \in \mathbb{R}^N$ and $X \in S(N)$,
\[ |G_i(t, x, v, p, X) - G_i(t, y, v, p, X)| \le \bar{\omega}\big(1 + (T-t)^{-1} + |x| + |y| + |v|\big) \cdot \omega\big(|x-y| + |p| \cdot |x-y|\big), \]
where $\bar{\omega}, \omega : \mathbb{R}_+ \to \mathbb{R}_+$ are given continuous functions with $\omega(0) = 0$.
Theorem 2.2 (Domination Theorem) We are given constants $\lambda_i > 0$, $i = 1, \ldots, k$. Let $u_i \in \mathrm{USC}([0,T] \times \mathbb{R}^N)$ be subsolutions of
\[ \partial_t u - G_i(t, x, u, Du, D^2 u) = 0, \quad i = 1, \ldots, k, \qquad (2.4) \]
on $(0,T) \times \mathbb{R}^N$ such that $\big(\sum_{i=1}^k \lambda_i u_i(t,x)\big)^+ \to 0$ uniformly as $|x| \to \infty$. We assume that the functions $\{G_i\}_{i=1}^k$ satisfy assumption (G) and that the following domination condition holds:
\[ \sum_{i=1}^k \lambda_i G_i(t, x, v_i, p_i, X_i) \le 0 \qquad (2.5) \]
for each $(t,x) \in (0,T) \times \mathbb{R}^N$ and $(v_i, p_i, X_i) \in \mathbb{R} \times \mathbb{R}^N \times S(N)$ such that
\[ \sum_{i=1}^k \lambda_i v_i \ge 0, \quad \sum_{i=1}^k \lambda_i p_i = 0, \quad \sum_{i=1}^k \lambda_i X_i \le 0. \]
Then a domination also holds for the solutions: if the sum of the initial values $\sum_{i=1}^k \lambda_i u_i(0, \cdot)$ is a non-positive function on $\mathbb{R}^N$, then $\sum_{i=1}^k \lambda_i u_i(t, \cdot) \le 0$ for all $t > 0$.
Proof. We first observe that, for $\bar{\delta} > 0$ and each $1 \le i \le k$, the function defined by $\tilde{u}_i := u_i - \bar{\delta}/(T-t)$ is a subsolution of
\[ \partial_t \tilde{u}_i - \tilde{G}_i(t, x, \tilde{u}_i, D\tilde{u}_i, D^2 \tilde{u}_i) \le -\frac{\bar{\delta}}{(T-t)^2}, \]
where $\tilde{G}_i(t, x, v, p, X) := G_i(t, x, v + \bar{\delta}/(T-t), p, X)$. It is easy to check that the functions $\tilde{G}_i$ satisfy the same conditions as the $G_i$. Since $\sum_{i=1}^k \lambda_i u_i \le 0$ follows from $\sum_{i=1}^k \lambda_i \tilde{u}_i \le 0$ in the limit $\bar{\delta} \downarrow 0$, it suffices to prove the theorem under the additional assumptions:
\[ \partial_t u_i - G_i(t, x, u_i, Du_i, D^2 u_i) \le -c, \ \text{where} \ c = \bar{\delta}/T^2, \ \text{and} \ \lim_{t \to T} u_i(t,x) = -\infty \ \text{uniformly on} \ [0,T) \times \mathbb{R}^N. \qquad (2.6) \]
To prove the theorem, we assume to the contrary that
\[ \sup_{(t,x) \in [0,T) \times \mathbb{R}^N} \sum_{i=1}^k \lambda_i u_i(t,x) = m_0 > 0. \]
We will apply Theorem 2.1 with $x = (x_1, \ldots, x_k) \in \mathbb{R}^{kN}$,
\[ w(t,x) := \sum_{i=1}^k \lambda_i u_i(t, x_i), \qquad \phi(x) = \phi_\alpha(x) := \frac{\alpha}{2} \sum_{i=1}^{k-1} |x_{i+1} - x_i|^2. \]
For each large $\alpha > 0$, the maximum of $w - \phi_\alpha$ is achieved at some $(t^\alpha, x^\alpha)$ inside a compact subset of $[0,T) \times \mathbb{R}^{kN}$. Indeed, since
\[ M_\alpha = \sum_{i=1}^k \lambda_i u_i(t^\alpha, x_i^\alpha) - \phi_\alpha(x^\alpha) \ge m_0, \]
we conclude, by (2.6), that $t^\alpha$ must lie in an interval $[0, T_0]$ with $T_0 < T$, and $x^\alpha$ inside the compact set $\{x \in \mathbb{R}^{kN} : \sup_{t \in [0,T_0]} w(t,x) \ge \frac{m_0}{2}\}$. We can check that (see [30] Lemma 3.1)
\[ \text{(i)} \ \lim_{\alpha \to \infty} \phi_\alpha(x^\alpha) = 0, \qquad \text{(ii)} \ \lim_{\alpha \to \infty} M_\alpha = \lim_{\alpha \to \infty} \big[\lambda_1 u_1(t^\alpha, x_1^\alpha) + \cdots + \lambda_k u_k(t^\alpha, x_k^\alpha)\big] = \sup_{(t,x) \in [0,T) \times \mathbb{R}^N} \big[\lambda_1 u_1(t,x) + \cdots + \lambda_k u_k(t,x)\big] = \big[\lambda_1 u_1(\bar{t}, \bar{x}) + \cdots + \lambda_k u_k(\bar{t}, \bar{x})\big] = m_0, \qquad (2.7) \]
where $(\bar{t}, \bar{x})$ is a limit point of $(t^\alpha, x^\alpha)$. Since $u_i \in \mathrm{USC}$, for sufficiently large $\alpha$ we have
\[ \lambda_1 u_1(t^\alpha, x_1^\alpha) + \cdots + \lambda_k u_k(t^\alpha, x_k^\alpha) \ge \frac{m_0}{2}. \]
If $\bar{t} = 0$, we would have $\limsup_{\alpha \to \infty} \sum_{i=1}^k \lambda_i u_i(t^\alpha, x_i^\alpha) = \sum_{i=1}^k \lambda_i u_i(0, \bar{x}) \le 0$, in contradiction with $m_0 > 0$. We thus know that $\bar{t} > 0$, hence $t^\alpha > 0$ for large $\alpha$, and by Theorem 2.1 there exist $b_i^\alpha \in \mathbb{R}$ and $X_i \in S(N)$ such that
\[ \Big(b_i^\alpha, \frac{1}{\lambda_i} D_{x_i} \phi_\alpha(x^\alpha), X_i\Big) \in \bar{\mathcal{P}}^{2,+} u_i(t^\alpha, x_i^\alpha) \ \text{for} \ i = 1, \ldots, k, \qquad \sum_{i=1}^k \lambda_i b_i^\alpha = 0, \qquad (2.8) \]
and such that
\[ -\Big(\frac{1}{\varepsilon} + \|A\|\Big) I \le \begin{pmatrix} \lambda_1 X_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_k X_k \end{pmatrix} \le A + \varepsilon A^2, \qquad (2.9) \]
where $A = D^2 \phi_\alpha(x^\alpha)$; this implies $\sum_{i=1}^k \lambda_i X_i \le 0$. Set
\[ p_1 = \frac{1}{\lambda_1} D_{x_1} \phi_\alpha(x^\alpha) = \frac{\alpha}{\lambda_1}(x_1^\alpha - x_2^\alpha), \]
\[ p_2 = \frac{1}{\lambda_2} D_{x_2} \phi_\alpha(x^\alpha) = \frac{\alpha}{\lambda_2}(2 x_2^\alpha - x_1^\alpha - x_3^\alpha), \]
\[ \vdots \]
\[ p_{k-1} = \frac{1}{\lambda_{k-1}} D_{x_{k-1}} \phi_\alpha(x^\alpha) = \frac{\alpha}{\lambda_{k-1}}(2 x_{k-1}^\alpha - x_{k-2}^\alpha - x_k^\alpha), \]
\[ p_k = \frac{1}{\lambda_k} D_{x_k} \phi_\alpha(x^\alpha) = \frac{\alpha}{\lambda_k}(x_k^\alpha - x_{k-1}^\alpha). \]
Thus $\sum_{i=1}^k \lambda_i p_i = 0$. From this together with (2.8) and (2.6), it follows that
\[ b_i^\alpha - G_i(t^\alpha, x_i^\alpha, u_i(t^\alpha, x_i^\alpha), p_i, X_i) \le -c, \quad i = 1, \ldots, k. \]
By (2.7)(i), we also have $\lim_{\alpha \to \infty} |p_i| \cdot |x_i^\alpha - x_1^\alpha| = 0$. This, together with the domination condition (2.5) of $\{G_i\}$, implies
\[ c \sum_{i=1}^k \lambda_i = \sum_{i=1}^k \lambda_i b_i^\alpha + c \sum_{i=1}^k \lambda_i \le \sum_{i=1}^k \lambda_i G_i(t^\alpha, x_i^\alpha, u_i(t^\alpha, x_i^\alpha), p_i, X_i) \]
\[ \le \sum_{i=1}^k \lambda_i G_i(t^\alpha, x_1^\alpha, u_i(t^\alpha, x_i^\alpha), p_i, X_i) + \sum_{i=1}^k \lambda_i \big| G_i(t^\alpha, x_i^\alpha, u_i(t^\alpha, x_i^\alpha), p_i, X_i) - G_i(t^\alpha, x_1^\alpha, u_i(t^\alpha, x_i^\alpha), p_i, X_i) \big| \]
\[ \le \sum_{i=1}^k \lambda_i \, \bar{\omega}\big(1 + (T - T_0)^{-1} + |x_1^\alpha| + |x_i^\alpha| + |u_i(t^\alpha, x_i^\alpha)|\big) \cdot \omega\big(|x_i^\alpha - x_1^\alpha| + |p_i| \cdot |x_i^\alpha - x_1^\alpha|\big), \]
where in the last step the first sum is non-positive by (2.5). The right-hand side tends to zero as $\alpha \to \infty$, which induces a contradiction. The proof is complete.
Theorem 2.3 We assume that the functions $G_i = G_i(t,x,v,p,X)$, $i = 1, \ldots, k$, and $G = G(t,x,v,p,X)$ satisfy assumption (G). We also assume that $G$ dominates $\{G_i\}_{i=1}^k$ in the following sense: for each $(t,x) \in [0,\infty) \times \mathbb{R}^N$ and $(v_i, p_i, X_i) \in \mathbb{R} \times \mathbb{R}^N \times S(N)$,
\[ \sum_{i=1}^k G_i(t, x, v_i, p_i, X_i) \le G\Big(t, x, \sum_{i=1}^k v_i, \sum_{i=1}^k p_i, \sum_{i=1}^k X_i\Big). \qquad (2.10) \]
Moreover, for each $(t, r, x, p) \in [0,\infty) \times \mathbb{R} \times \mathbb{R}^N \times \mathbb{R}^N$ and $Y_1, Y_2 \in S(N)$ such that $Y_2 \le Y_1$,
\[ G(t, x, r, p, Y_2) - G(t, x, r, p, Y_1) \le 0, \]
and there exists a constant $\bar{C}$ such that
\[ |G(t, x, r_1, p, X_1) - G(t, x, r_2, p, X_2)| \le \bar{C}\big(|r_1 - r_2| + |X_1 - X_2|\big). \]
Let $u_i \in \mathrm{USC}([0,T] \times \mathbb{R}^N)$ be $G_i$-subsolutions and $u \in \mathrm{LSC}([0,T] \times \mathbb{R}^N)$ be a $G$-supersolution such that the $u_i$ and $u$ satisfy the polynomial growth condition. Then
\[ \sum_{i=1}^k u_i(t,x) \le u(t,x) \ \text{on} \ [0,T) \times \mathbb{R}^N \quad \text{provided that} \quad \sum_{i=1}^k u_i\big|_{t=0} \le u\big|_{t=0}. \]
Proof. For a fixed and sufficiently large constant $\mu$ we set $\xi(x) := (1 + |x|^2)^{l/2}$ and
\[ \tilde{u}_i(t,x) := u_i(t,x)\, \xi^{-1}(x)\, e^{-\mu t}, \ i = 1, \ldots, k, \qquad \tilde{u}_{k+1}(t,x) := -u(t,x)\, \xi^{-1}(x)\, e^{-\mu t}, \]
where $l$ is chosen large enough that $|\tilde{u}_i(t,x)| \to 0$ uniformly as $|x| \to \infty$. It is easy to check that, for each $i = 1, \ldots, k+1$, $\tilde{u}_i$ is a subsolution of
\[ \partial_t \tilde{u}_i + \mu \tilde{u}_i - \tilde{G}_i(t, x, \tilde{u}_i, D\tilde{u}_i, D^2 \tilde{u}_i) = 0, \]
where, for each $i = 1, \ldots, k$, the function $\tilde{G}_i(t,x,v,p,X)$ is given by
\[ e^{-\mu t} \xi^{-1} G_i\big(t, x, e^{\mu t} \xi v, \ e^{\mu t}(\xi p + v D\xi), \ e^{\mu t}(\xi X + p \otimes D\xi + D\xi \otimes p + v D^2 \xi)\big), \]
and $\tilde{G}_{k+1}(t,x,v,p,X)$ is given by
\[ -e^{-\mu t} \xi^{-1} G\big(t, x, -e^{\mu t} \xi v, \ -e^{\mu t}(\xi p + v D\xi), \ -e^{\mu t}(\xi X + p \otimes D\xi + D\xi \otimes p + v D^2 \xi)\big). \]
Observe that
\[ D\xi(x) = l\, \xi(x) (1 + |x|^2)^{-1} x, \qquad D^2 \xi(x) = \xi(x)\big[l(1+|x|^2)^{-1} I + l(l-2)(1+|x|^2)^{-2} x \otimes x\big]. \]
Thus both $\xi^{-1}(x)|D\xi(x)|$ and $\xi^{-1}(x)|D^2\xi(x)|$ converge to zero uniformly as $|x| \to \infty$.
From the domination condition (2.10), for each $(v_i, p_i, X_i) \in \mathbb{R} \times \mathbb{R}^N \times S(N)$, $i = 1, \ldots, k+1$, such that $\sum_{i=1}^{k+1} v_i = 0$, $\sum_{i=1}^{k+1} p_i = 0$ and $\sum_{i=1}^{k+1} X_i = 0$, we have
\[ \sum_{i=1}^{k+1} \tilde{G}_i(t, x, v_i, p_i, X_i) \le 0. \]
For $v, r \in \mathbb{R}$, $p \in \mathbb{R}^N$ and $X, Y \in S(N)$ such that $r \ge 0$ and $Y \ge 0$, since $G$ is still monotone in $X$,
\[ \tilde{G}_{k+1}(t, x, v, p, X) - \tilde{G}_{k+1}(t, x, v - r, p, X + Y) \le \tilde{G}_{k+1}(t, x, v, p, X) - \tilde{G}_{k+1}(t, x, v - r, p, X) \le (\bar{C} + C_1) r, \]
where the constant $C_1$ does not depend on $(t, x, v, p, X)$. We then apply the above theorem, choosing $\lambda_i = 1$, $i = 1, \ldots, k+1$; note that $\sum_{i=1}^{k+1} \tilde{u}_i|_{t=0} \le 0$. Moreover, for each $v_i \in \mathbb{R}$, $p_i \in \mathbb{R}^N$ and $X_i \in S(N)$ such that $v = \sum_{i=1}^{k+1} v_i \ge 0$, $\sum_{i=1}^{k+1} p_i = 0$ and $X = \sum_{i=1}^{k+1} X_i \le 0$, we have
\[ -\mu \sum_{i=1}^{k+1} v_i + \sum_{i=1}^{k+1} \tilde{G}_i(t, x, v_i, p_i, X_i) \]
\[ = -\mu v + \sum_{i=1}^{k} \tilde{G}_i(t, x, v_i, p_i, X_i) + \tilde{G}_{k+1}(t, x, v_{k+1} - v, p_{k+1}, X_{k+1} - X) + \tilde{G}_{k+1}(t, x, v_{k+1}, p_{k+1}, X_{k+1}) - \tilde{G}_{k+1}(t, x, v_{k+1} - v, p_{k+1}, X_{k+1} - X) \]
\[ \le -\mu v + \tilde{G}_{k+1}(t, x, v_{k+1}, p_{k+1}, X_{k+1}) - \tilde{G}_{k+1}(t, x, v_{k+1} - v, p_{k+1}, X_{k+1} - X) \]
\[ \le -\mu v + (\bar{C} + C_1) v \le 0, \]
provided $\mu$ is chosen larger than $\bar{C} + C_1$. It follows that all the conditions in Theorem 2.2 are satisfied. Thus we have $\sum_{i=1}^{k+1} \tilde{u}_i \le 0$, or equivalently, $\sum_{i=1}^k u_i(t,x) \le u(t,x)$ for $(t,x) \in [0,T) \times \mathbb{R}^N$.
The following comparison theorem is a direct consequence of the above domination theorem.
Theorem 2.4 (Comparison Theorem) We are given two functions $G = G(t,x,v,p,X)$ and $G_1 = G_1(t,x,v,p,X)$ satisfying condition (G). We also assume that, for each $(t,x,v,p,X) \in [0,\infty) \times \mathbb{R}^N \times \mathbb{R} \times \mathbb{R}^N \times S(N)$ and $Y \in S(N)$ such that $X \ge Y$,
\[ G(t,x,v,p,X) \ge G_1(t,x,v,p,X), \qquad (2.11) \]
\[ G(t,x,v,p,Y) \le G(t,x,v,p,X). \qquad (2.12) \]
We also assume that $G$ is a uniformly Lipschitz function in $v$ and $X$, namely, for each $(t,x) \in [0,\infty) \times \mathbb{R}^N$ and $(v,p,X), (v',p,X') \in \mathbb{R} \times \mathbb{R}^N \times S(N)$,
\[ |G(t,x,v,p,X) - G(t,x,v',p,X')| \le \bar{C}\big(|v - v'| + |X - X'|\big). \]
Let $u_1 \in \mathrm{USC}([0,T] \times \mathbb{R}^N)$ be a $G_1$-subsolution and $u \in \mathrm{LSC}([0,T] \times \mathbb{R}^N)$ be a $G$-supersolution on $(0,T) \times \mathbb{R}^N$ satisfying the polynomial growth condition. Then $u \ge u_1$ on $[0,T) \times \mathbb{R}^N$ provided that $u|_{t=0} \ge u_1|_{t=0}$. In particular this comparison holds in the case $G \equiv G_1$, where $G$ is a Lipschitz function in $(v, X)$ and satisfies the elliptic condition (2.12).
The following special case of the above domination theorem is also very useful.
Theorem 2.5 (Domination Theorem) We assume that $G_1$ and $G$ satisfy the same conditions given in the previous theorem, except that condition (2.11) is replaced by the following one: for each $(t,x) \in [0,\infty) \times \mathbb{R}^N$ and $(v,p,X), (v',p',X') \in \mathbb{R} \times \mathbb{R}^N \times S(N)$,
\[ G_1(t,x,v,p,X) - G_1(t,x,v',p',X') \le G(t, x, v - v', p - p', X - X'). \]
Let $u \in \mathrm{USC}([0,T] \times \mathbb{R}^N)$ be a $G_1$-subsolution and $v \in \mathrm{LSC}([0,T] \times \mathbb{R}^N)$ be a $G_1$-supersolution on $(0,T) \times \mathbb{R}^N$, and let $w$ be a $G$-supersolution; suppose that they satisfy the polynomial growth condition. If $(u - v)|_{t=0} \le w|_{t=0}$, then $u - v \le w$ on $[0,T) \times \mathbb{R}^N$.
The following theorem will be frequently used in this book. Let $G : \mathbb{R}^N \times S(N) \to \mathbb{R}$ be a given continuous sublinear function, monotone in $A \in S(N)$. Obviously, $G$ satisfies the conditions (G) of Theorem 2.3. We consider the following $G$-equation:
\[ \partial_t u - G(Du, D^2 u) = 0, \quad u(0,x) = \varphi(x). \qquad (2.13) \]
Theorem 2.6 Let $G : \mathbb{R}^N \times S(N) \to \mathbb{R}$ be a given continuous sublinear function, monotone in $A \in S(N)$. Then we have:
(i) If $u \in \mathrm{USC}([0,T] \times \mathbb{R}^N)$ with polynomial growth is a viscosity subsolution of (2.13) and $v \in \mathrm{LSC}([0,T] \times \mathbb{R}^N)$ with polynomial growth is a viscosity supersolution of (2.13), then $u \le v$.
(ii) If $u^{\varphi} \in C([0,T] \times \mathbb{R}^N)$ denotes the polynomial growth solution of (2.13) with initial condition $\varphi$, then $u^{\varphi + \psi} \le u^{\varphi} + u^{\psi}$.
(iii) If a given function $\bar{G} : \mathbb{R}^N \times S(N) \to \mathbb{R}$ is dominated by $G$, i.e.,
\[ \bar{G}(p, X) - \bar{G}(p', X') \le G(p - p', X - X') \quad \text{for} \ p, p' \in \mathbb{R}^N, \ X, X' \in S(N), \]
then for each $\varphi \in C(\mathbb{R}^N)$ satisfying the polynomial growth condition, there exists a unique $\bar{G}$-solution $\bar{u}^{\varphi}(t,x)$ on $[0,\infty) \times \mathbb{R}^N$ with initial condition $\bar{u}^{\varphi}|_{t=0} = \varphi$ (see the next section for the proof of existence), i.e.,
\[ \partial_t \bar{u}^{\varphi} - \bar{G}(D\bar{u}^{\varphi}, D^2 \bar{u}^{\varphi}) = 0, \quad \bar{u}^{\varphi}|_{t=0} = \varphi. \]
Moreover, $\bar{u}^{\varphi}(t,x) - \bar{u}^{\psi}(t,x) \le u^{\varphi - \psi}(t,x)$.
Proof. By the above theorems, it is easy to obtain the results.
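The sublinear case of (2.13) most used in this book is the $G$-heat equation, which in one dimension reads $\partial_t u - G(\partial^2_{xx} u) = 0$ with $G(a) = \frac{1}{2}(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-)$. The following explicit finite-difference sketch (an added illustration with illustrative grid parameters, not an algorithm from the text) computes its solution for the concave initial condition $\varphi(x) = -|x|$; since $\varphi$ is concave, only the $\underline{\sigma}$ part of $G$ acts, so $u(t,0)$ should be close to $-E[|\underline{\sigma} B_t|] = -\underline{\sigma}\sqrt{2t/\pi}$:

```python
import numpy as np

# Explicit finite-difference sketch for the 1-d G-heat equation
#   du/dt = G(u_xx),  G(a) = 0.5*(sbar**2 * max(a,0) - slow**2 * max(-a,0)),
# with illustrative initial condition phi(x) = -|x|.
slow, sbar = 0.5, 1.0
L, nx, T, nt = 10.0, 401, 1.0, 2000
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = T / nt
assert 0.5 * sbar**2 * dt / dx**2 <= 0.5  # CFL stability of the explicit scheme

def G(a):
    return 0.5 * (sbar**2 * np.maximum(a, 0.0) - slow**2 * np.maximum(-a, 0.0))

u = -np.abs(x)                      # phi
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * G(uxx)             # boundary values kept fixed (far field)

# For concave phi only the slow-volatility part acts, so u(T, 0) should be
# close to -slow*sqrt(2*T/pi), the classical heat-flow value.
print(u[nx // 2])
```

The same loop with $\varphi$ convex would instead select the $\bar{\sigma}$ branch, which is exactly the volatility-uncertainty mechanism behind the $G$-normal distribution.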
3 Perron's Method and Existence
The combination of Perron's method and viscosity solutions was introduced by H. Ishii [64]. For the convenience of the reader, we adapt the proof provided in Crandall, Ishii and Lions [30] to the parabolic situation.
We consider the following parabolic PDE:
\[ \partial_t u - G(t, x, u, Du, D^2 u) = 0 \ \text{on} \ (0,\infty) \times \mathbb{R}^N, \qquad u(0,x) = \varphi(x) \ \text{for} \ x \in \mathbb{R}^N, \qquad (3.14) \]
where $G : [0,\infty) \times \mathbb{R}^N \times \mathbb{R} \times \mathbb{R}^N \times S(N) \to \mathbb{R}$, $\varphi \in C(\mathbb{R}^N)$.
To discuss Perron's method, we will use the following notation: if $u : O \to [-\infty, \infty]$ where $O \subset [0,\infty) \times \mathbb{R}^N$, then
\[ u^*(t,x) = \lim_{r \downarrow 0} \sup\{u(s,y) : (s,y) \in O \ \text{and} \ \sqrt{|s-t| + |y-x|^2} \le r\}, \]
\[ u_*(t,x) = \lim_{r \downarrow 0} \inf\{u(s,y) : (s,y) \in O \ \text{and} \ \sqrt{|s-t| + |y-x|^2} \le r\}. \qquad (3.15) \]
One calls $u^*$ the upper semicontinuous envelope of $u$ and, similarly, $u_*$ the lower semicontinuous envelope of $u$.
Theorem 3.1 (Perron's method) Suppose that comparison holds for (3.14), and suppose that there exist a viscosity subsolution $\underline{u}$ and a viscosity supersolution $\bar{u}$ of (3.14) which satisfy
\[ \underline{u}_*(0,x) = \bar{u}^*(0,x) = \varphi(x) \]
for $x \in \mathbb{R}^N$. Then
\[ W(t,x) = \sup\{w(t,x) : \underline{u} \le w \le \bar{u} \ \text{and} \ w \ \text{is a viscosity subsolution of (3.14)}\} \]
is a viscosity solution of (3.14).
The proof consists of two lemmas; for their proofs we also refer to [1]. The first one is:
Lemma 3.2 Let $\mathcal{F}$ be a family of viscosity subsolutions of (3.14) on $(0,\infty) \times \mathbb{R}^N$. Let $w(t,x) = \sup\{u(t,x) : u \in \mathcal{F}\}$ and assume that $w^*(t,x) < \infty$ for all $(t,x) \in (0,\infty) \times \mathbb{R}^N$. Then $w^*$ is a viscosity subsolution of (3.14) on $(0,\infty) \times \mathbb{R}^N$.
Proof. Let $\phi \in C^2$ be such that $w^*(\hat{t}, \hat{x}) = \phi(\hat{t}, \hat{x})$ and $w^* < \phi$ on $(0,\infty) \times \mathbb{R}^N \setminus \{(\hat{t}, \hat{x})\}$. Choose $u_n \in \mathcal{F}$ and $(s_n, y_n) \to (\hat{t}, \hat{x})$ with $u_n(s_n, y_n) \to w^*(\hat{t}, \hat{x})$, and let $(t_n, x_n)$ be a maximum point of $u_n - \phi$ over a compact neighborhood $\bar{N}_r$ of $(\hat{t}, \hat{x})$; hence
\[ u_n(s,y) \le u_n(t_n, x_n) + \phi(s,y) - \phi(t_n, x_n) \quad \text{for} \ (s,y) \in \bar{N}_r. \]
Suppose that (passing to a subsequence if necessary) $(t_n, x_n) \to (\tilde{t}, \tilde{x})$ as $n \to \infty$. Putting $(s,y) = (s_n, y_n)$ in the above inequality and taking the limit inferior as $n \to \infty$, we obtain
\[ w^*(\hat{t}, \hat{x}) \le \liminf_{n \to \infty} u_n(t_n, x_n) + \phi(\hat{t}, \hat{x}) - \phi(\tilde{t}, \tilde{x}) \le w^*(\tilde{t}, \tilde{x}) + \phi(\hat{t}, \hat{x}) - \phi(\tilde{t}, \tilde{x}). \]
From the above inequalities and the assumption on $\phi$, we get $(\tilde{t}, \tilde{x}) = (\hat{t}, \hat{x})$ and $\lim_{n \to \infty} (t_n, x_n, u_n(t_n, x_n)) = (\hat{t}, \hat{x}, w^*(\hat{t}, \hat{x}))$. Since each $u_n$ is a viscosity subsolution,
\[ \partial_t \phi(t_n, x_n) - G(t_n, x_n, u_n(t_n, x_n), D\phi(t_n, x_n), D^2 \phi(t_n, x_n)) \le 0. \]
Letting $n \to \infty$, we conclude that $\partial_t \phi(\hat{t}, \hat{x}) - G(\hat{t}, \hat{x}, w^*(\hat{t}, \hat{x}), D\phi(\hat{t}, \hat{x}), D^2\phi(\hat{t}, \hat{x})) \le 0$.
The second lemma is the "bump" construction:
Lemma 3.3 Let $u$ be a viscosity subsolution of (3.14) on $(0,\infty) \times \mathbb{R}^N$. If $u_*$ fails to be a viscosity supersolution at some point $(s,z)$ with $s > 0$, then for any small $\kappa > 0$ there is a viscosity subsolution $U_\kappa$ of (3.14) on $(0,\infty) \times \mathbb{R}^N$ satisfying
\[ U_\kappa \ge u, \quad \sup(U_\kappa - u) > 0, \quad U_\kappa(t,x) = u(t,x) \ \text{for} \ \sqrt{|t-s| + |x-z|^2} \ge \kappa. \]
Proof. Since $u_*$ fails to be a viscosity supersolution at $(s,z)$, there is $\psi \in C^2$ with $u_*(s,z) = \psi(s,z)$, $u_* > \psi$ on $(0,\infty) \times \mathbb{R}^N \setminus \{(s,z)\}$ and
\[ \partial_t \psi(s,z) - G(s, z, \psi(s,z), D\psi(s,z), D^2\psi(s,z)) < 0. \]
The continuity of $G$ provides $r, \delta_1 > 0$ such that $\bar{N}_r = \{(t,x) : \sqrt{|t-s| + |x-z|^2} \le r\}$ is compact and
\[ \partial_t \psi - G(t, x, \psi + \delta, D\psi, D^2\psi) \le 0 \]
for all $(t, x, \delta) \in \bar{N}_r \times [0, \delta_1]$. Lastly, we obtain $\delta_2 > 0$ for which $u_* > \psi + \delta_2$ on $\partial \bar{N}_r$. Setting $\delta_0 = \min(\delta_1, \delta_2) > 0$, we define
\[ U = \begin{cases} \max(u, \psi + \delta_0) & \text{on} \ \bar{N}_r, \\ u & \text{elsewhere.} \end{cases} \]
By the above inequalities and Lemma 3.2, it is easy to check that $U$ is a viscosity subsolution of (3.14) on $(0,\infty) \times \mathbb{R}^N$. Obviously $U \ge u$, $\sup(U - u) > 0$ near $(s,z)$, and for $r$ small enough $U = u$ outside the $\kappa$-neighborhood of $(s,z)$.
Proof of Theorem 3.1. By Lemma 3.2, $W^*$ is a viscosity subsolution of (3.14), and by comparison $W^* \le \bar{u}$; hence $W = W^*$ and, in particular, $W_*(0,x) = W(0,x) = W^*(0,x) = \varphi(x)$ for $x \in \mathbb{R}^N$. If $W_*$ fails to be a viscosity supersolution at some point $(s,z)$, then by Lemma 3.3 there is a viscosity subsolution $U_\kappa$ with $\underline{u} \le U_\kappa \le \bar{u}$ and $\sup(U_\kappa - W) > 0$; since $W$ is the maximal viscosity subsolution between $\underline{u}$ and $\bar{u}$, we arrive at the contradiction $U_\kappa \le W$. Hence $W_*$ is a viscosity supersolution and $W = W^*$ is a viscosity solution of (3.14).
We now apply these results to the existence of a viscosity solution of the $G$-equation
\[ \partial_t u - G(Du, D^2 u) = 0, \quad u(0,x) = \varphi(x). \qquad (3.16) \]
Case 1: If $\varphi \in C^2_b(\mathbb{R}^N)$, then $\underline{u}(t,x) = \underline{M} t + \varphi(x)$ and $\bar{u}(t,x) = \bar{M} t + \varphi(x)$ are, respectively, classical subsolution and supersolution of (3.16), where $\underline{M} = \inf_{x \in \mathbb{R}^N} G(D\varphi(x), D^2\varphi(x))$ and $\bar{M} = \sup_{x \in \mathbb{R}^N} G(D\varphi(x), D^2\varphi(x))$. Obviously, $\underline{u}$ and $\bar{u}$ satisfy all the conditions in Theorem 3.1. By Theorem 2.6, we know that comparison holds for (3.16). Thus by Theorem 3.1, we obtain that the $G$-equation (3.16) has a viscosity solution.
Case 2: If $\varphi \in C_b(\mathbb{R}^N)$ with $\lim_{|x| \to \infty} \varphi(x) = 0$, then we can choose a sequence $\varphi_n \in C^2_b(\mathbb{R}^N)$ which converges uniformly to $\varphi$. For each $\varphi_n$, by Case 1, there exists a viscosity solution $u_n$. By the comparison theorem, it is easy to show that $(u_n)$ is uniformly convergent; denote the limit by $u$. Similarly to the proof of Lemma 3.2, it is easy to prove that $u$ is a viscosity solution of the $G$-equation (3.16) with initial condition $\varphi$.
Case 3: If $\varphi \in C(\mathbb{R}^N)$ with polynomial growth, then we can choose a large $l > 0$ such that $\bar{\varphi}(x) = \varphi(x) \xi^{-1}(x)$ satisfies the condition in Case 2, where $\xi(x) = (1 + |x|^2)^{l/2}$. It is easy to check that $u$ is a viscosity solution of the $G$-equation (3.16) if and only if $\bar{u}(t,x) = u(t,x) \xi^{-1}(x)$ is a viscosity solution of the following PDE:
\[ \partial_t \bar{u} - \tilde{G}(x, \bar{u}, D\bar{u}, D^2 \bar{u}) = 0, \quad \bar{u}(0,x) = \bar{\varphi}, \qquad (3.17) \]
where $\tilde{G}(x, v, p, X) = G(p + v \eta(x), \ X + p \otimes \eta(x) + \eta(x) \otimes p + v \kappa(x))$. Here
\[ \eta(x) := \xi^{-1}(x) D\xi(x) = l(1+|x|^2)^{-1} x, \]
\[ \kappa(x) := \xi^{-1}(x) D^2 \xi(x) = l(1+|x|^2)^{-1} I + l(l-2)(1+|x|^2)^{-2} x \otimes x. \]
Similarly to the above discussion, we obtain that there exists a viscosity solution of (3.17) with initial condition $\bar{\varphi}$. Thus there exists a viscosity solution of the $G$-equation (3.16).
We summarize the above discussion as a theorem.
Theorem 3.4 Let $\varphi \in C(\mathbb{R}^N)$ with polynomial growth. Then there exists a viscosity solution of the $G$-equation (3.16) with initial condition $\varphi$.
Theorem 3.5 Let the function $G$ be given as in the previous theorem and let $\bar{G}(t,x,p,X) : [0,\infty) \times \mathbb{R}^N \times \mathbb{R}^N \times S(N) \to \mathbb{R}$ be a given function satisfying condition (G) in Theorem 2.3. We assume that $\bar{G}$ is dominated by $G$ in the sense that
\[ \bar{G}(t,x,p,X) - \bar{G}(t,x,p',X') \le G(p - p', X - X') \]
for each $t, x, p, p'$ and $X, X'$. Then for each given $\varphi \in C(\mathbb{R}^N)$ satisfying the polynomial growth condition, the viscosity solution of $\partial_t \bar{u} - \bar{G}(t, x, D\bar{u}, D^2\bar{u}) = 0$ with $\bar{u}|_{t=0} = \varphi$ exists and is unique. Moreover, the comparison property also holds.
Proof. It is easy to check that a $G$-solution with $u|_{t=0} = \varphi$ is a $\bar{G}$-supersolution; similarly, a $G_*$-solution with $u|_{t=0} = \varphi$, where $G_*(p,X) := -G(-p,-X)$, is a $\bar{G}$-subsolution.
We now prove that comparison holds for $\bar{G}$-solutions. Let $u_1$ be a $\bar{G}$-supersolution and $u_2$ be a $\bar{G}$-subsolution with $u_1|_{t=0} = \varphi_1$ and $u_2|_{t=0} = \varphi_2$. By the above domination theorem we can obtain $u_2 - u_1 \le u$, where $u$ is a $G$-supersolution with $u|_{t=0} = \varphi_2 - \varphi_1$. On the other hand, it is easy to prove that, in the case $\varphi_1 \ge \varphi_2$, the function $u = (\varphi_2(x) - \varphi_1(x)) \mathbf{1}_{\{0\}}(t)$ is such a type of $G$-supersolution. Consequently, we have $u_2 - u_1 \le 0$, i.e., $u_1 \ge u_2$. This implies that the comparison holds for $\bar{G}$-equations. We then can apply Theorem 3.1 to prove that the $\bar{G}$-solution $u$ with $u|_{t=0} = \varphi$ exists.
4 Krylov's Regularity Estimate for Parabolic PDE
The proof of our new central limit theorem is based on powerful $C^{1+\alpha/2, 2+\alpha}$-regularity estimates for fully nonlinear parabolic PDEs obtained in Krylov [76]. A more recent result of Wang [119] (the version for elliptic PDEs was initially introduced in Cabre and Caffarelli [17]), using viscosity solution arguments, can also be applied.
For simplicity, we only consider the following type of PDE:
\[ \partial_t u + G(D^2 u, Du, u) = 0, \quad u(T,x) = \varphi(x), \qquad (4.18) \]
where $G : S(d) \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}$ is a given function and $\varphi \in C_b(\mathbb{R}^d)$.
Following Krylov [76], we fix constants $\delta > 0$, $K > 0$, $T > 0$ and set $Q = (0,T) \times \mathbb{R}^d$. Now we give the definitions of the classes $\mathfrak{G}(\delta, K, Q)$ and $\bar{\mathfrak{G}}(\delta, K, Q)$.
The following definition is according to Definition 5.5.1 in Krylov [76].
Definition 4.1 Let $G : S(d) \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}$ be given, written as $G(u_{ij}, u_i, u)$, $i, j = 1, \ldots, d$. We denote $G \in \mathfrak{G}(\delta, K, Q)$ if $G$ is twice continuously differentiable with respect to $(u_{ij}, u_i, u)$ and, for each real-valued $u_{ij} = u_{ji}$, $\bar{u}_{ij} = \bar{u}_{ji}$, $u_i$, $\bar{u}_i$, $u$, $\bar{u}$ and $\lambda_i$, the following inequalities hold:
\[ \delta |\lambda|^2 \le \sum_{i,j} \partial_{u_{ij}} G \, \lambda_i \lambda_j \le K |\lambda|^2, \]
\[ \Big| G - \sum_{i,j} u_{ij} \, \partial_{u_{ij}} G \Big| \le M^G_1(u) \Big(1 + \sum_i |u_i|^2 \Big), \]
\[ |\partial_u G| + \Big(1 + \sum_i |u_i|\Big) \sum_i |\partial_{u_i} G| \le M^G_1(u) \Big(1 + \sum_i |u_i|^2 + \sum_{i,j} |u_{ij}| \Big), \]
\[ [M^G_2(u, u_k)]^{-1} G_{(\bar{u})(\bar{u})} \le \sum_{i,j} |\bar{u}_{ij}| \Big( \sum_i |\bar{u}_i| + \Big(1 + \sum_{i,j} |u_{ij}|\Big) |\bar{u}| \Big) + \sum_i |\bar{u}_i|^2 \Big(1 + \sum_{i,j} |u_{ij}|\Big) + \Big(1 + \sum_{i,j} |u_{ij}|^3 \Big) |\bar{u}|^2, \]
where the arguments $(u_{ij}, u_i, u)$ of $G$ and of its derivatives are omitted, $\bar{u}$ stands for the direction $(\bar{u}_{ij}, \bar{u}_i, \bar{u})$, and
\[ G_{(\bar{u})(\bar{u})} := \sum_{i,j,r,s} \bar{u}_{ij} \bar{u}_{rs} \, \partial^2_{u_{ij} u_{rs}} G + 2 \sum_{i,j,r} \bar{u}_{ij} \bar{u}_r \, \partial^2_{u_{ij} u_r} G + 2 \sum_{i,j} \bar{u}_{ij} \bar{u} \, \partial^2_{u_{ij} u} G + \sum_{i,j} \bar{u}_i \bar{u}_j \, \partial^2_{u_i u_j} G + 2 \sum_i \bar{u}_i \bar{u} \, \partial^2_{u_i u} G + |\bar{u}|^2 \, \partial^2_{uu} G; \]
$M^G_1(u)$ and $M^G_2(u, u_k)$ are some continuous functions which grow with $|u|$ and $|u_k|$, and $M^G_2 \ge 1$.
Remark 4.2 Let $\delta I \le A = (a_{ij}) \le K I$. It is easy to check that
\[ G(u_{ij}, u_i, u) = \sum_{i,j} a_{ij} u_{ij} + \sum_i b_i u_i + c u \]
belongs to $\mathfrak{G}(\delta, K, Q)$.
The following definition is Definition 6.1.1 in Krylov [76].
Definition 4.3 Let a function $G = G(u_{ij}, u_i, u) : S(d) \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}$ be given. We write $G \in \bar{\mathfrak{G}}(\delta, K, Q)$ if there exists a sequence $G_n \in \mathfrak{G}(\delta, K, Q)$ converging to $G$ as $n \to \infty$ at every point $(u_{ij}, u_i, u) \in S(d) \times \mathbb{R}^d \times \mathbb{R}$ such that
(i) $M^{G_1}_i = M^{G_2}_i = \cdots =: M^G_i$, $i = 1, 2$;
(ii) for each $n = 1, 2, \ldots$, the function $G_n$ is infinitely differentiable with respect to $(u_{ij}, u_i, u)$;
(iii) there exist constants $\delta_0 =: \delta^G_0 > 0$ and $M_0 =: M^G_0 > 0$ such that
\[ G_n(u_{ij}, 0, M_0) \le -\delta_0, \qquad G_n(-u_{ij}, 0, -M_0) \ge \delta_0 \]
for each $n \ge 1$ and all symmetric nonnegative matrices $(u_{ij})$.
The following theorem is Theorem 6.4.3 in Krylov [76], which plays an important role in our proof of the central limit theorem.
Theorem 4.4 Suppose that $G \in \bar{\mathfrak{G}}(\delta, K, Q)$ and $\varphi \in C_b(\mathbb{R}^d)$ with $\sup_{x \in \mathbb{R}^d} |\varphi(x)| \le M^G_0$. Then PDE (4.18) has a solution $u$ possessing the following properties:
(i) $u \in C([0,T] \times \mathbb{R}^d)$, $|u| \le M^G_0$ on $\bar{Q}$;
(ii) there exists a constant $\alpha \in (0,1)$, depending only on $d$, $K$, $\delta$, such that for each $\kappa > 0$,
\[ \|u\|_{C^{1+\alpha/2, 2+\alpha}([0, T-\kappa] \times \mathbb{R}^d)} < \infty. \qquad (4.19) \]
Now we consider the $G$-equation. Let $G : \mathbb{R}^d \times S(d) \to \mathbb{R}$ be a given continuous sublinear function, monotone in $A \in S(d)$. Then there exists a bounded, convex and closed subset $\Theta \subset \mathbb{R}^d \times S_+(d)$ such that
\[ G(p, A) = \sup_{(q,B) \in \Theta} \Big[ \frac{1}{2} \mathrm{tr}[AB] + \langle p, q \rangle \Big] \quad \text{for} \ (p,A) \in \mathbb{R}^d \times S(d). \qquad (4.20) \]
The $G$-equation is
\[ \partial_t u + G(Du, D^2 u) = 0, \quad u(T,x) = \varphi(x). \qquad (4.21) \]
We set
\[ \tilde{u}(t,x) = e^{t-T} u(t,x). \qquad (4.22) \]
It is easy to check that $\tilde{u}$ satisfies the following PDE:
\[ \partial_t \tilde{u} + G(D\tilde{u}, D^2 \tilde{u}) - \tilde{u} = 0, \quad \tilde{u}(T,x) = \varphi(x). \qquad (4.23) \]
Suppose that there exists a constant $\delta > 0$ such that, for each $A, \bar{A} \in S(d)$ with $A \ge \bar{A}$, we have
\[ G(0, A) - G(0, \bar{A}) \ge \delta \, \mathrm{tr}[A - \bar{A}]. \qquad (4.24) \]
Since $G$ is continuous, it is easy to prove that there exists a constant $K > 0$ such that, for each $A, \bar{A} \in S(d)$ with $A \ge \bar{A}$, we have
\[ G(0, A) - G(0, \bar{A}) \le K \, \mathrm{tr}[A - \bar{A}]. \qquad (4.25) \]
Thus for each $(q, B) \in \Theta$, we have
\[ 2\delta I \le B \le 2K I. \]
By Remark 4.2, it is easy to check that
\[ \tilde{G}(u_{ij}, u_i, u) := G(u_i, u_{ij}) - u \in \bar{\mathfrak{G}}(\delta, K, Q), \]
and $\delta^{\tilde{G}}_0 = M^{\tilde{G}}_0$ can be any positive constant. By Theorem 4.4 and (4.22), we have the following regularity estimate for the $G$-equation (4.21).
Theorem 4.5 Let $G$ satisfy (4.20) and (4.24), $\varphi \in C_b(\mathbb{R}^d)$, and let $u$ be a solution of the $G$-equation (4.21). Then there exists a constant $\alpha \in (0,1)$, depending only on $d$, $G$, such that for each $\kappa > 0$,
\[ \|u\|_{C^{1+\alpha/2, 2+\alpha}([0, T-\kappa] \times \mathbb{R}^d)} < \infty. \qquad (4.26) \]
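The representation (4.20) expresses $G$ as a supremum of linear functions of $(p, A)$, from which sublinearity and monotonicity in $A$ follow immediately. A small numerical sketch of this (using an illustrative, finitely generated uncertainty set $\Theta$, not one from the text):

```python
import numpy as np

# Sketch of representation (4.20): G(p, A) = sup over (q, B) in Theta of
# { 0.5*tr[A B] + <p, q> }, here with a finite illustrative Theta in
# R^d x S_+(d).  G is a sup of linear maps, hence sublinear in (p, A).
d = 2
rng = np.random.default_rng(1)
Theta = []
for _ in range(50):
    q = rng.normal(size=d)
    M = rng.normal(size=(d, d))
    Theta.append((q, M @ M.T))        # B = M M^T is symmetric nonnegative

def G(p, A):
    return max(0.5 * np.trace(A @ B) + p @ q for q, B in Theta)

p1, p2 = rng.normal(size=d), rng.normal(size=d)
A1, A2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A1, A2 = A1 + A1.T, A2 + A2.T        # symmetric test arguments

# Sub-additivity and positive homogeneity of a supremum of linear maps:
assert G(p1 + p2, A1 + A2) <= G(p1, A1) + G(p2, A2) + 1e-9
assert abs(G(2.0 * p1, 2.0 * A1) - 2.0 * G(p1, A1)) < 1e-8
# Monotonicity in A: since every B in Theta is nonnegative, adding a
# nonnegative matrix H cannot decrease any term (tr[H B] >= 0), nor the sup.
assert G(p1, A1 + np.eye(d)) >= G(p1, A1) - 1e-9
print("sublinearity and monotonicity checks passed")
```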
Bibliography
[1] Alvarez, O. and Tourin, A. (1996) Viscosity solutions of nonlinear integro-differential equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 13(3), 293-317.
[2] Artzner, Ph., Delbaen, F., Eber, J.-M. and Heath, D. (1997) Thinking coherently. RISK 10, 68-71.
[3] Artzner, Ph., Delbaen, F., Eber, J.-M. and Heath, D. (1999) Coherent measures of risk. Mathematical Finance 9, 203-228.
[4] Atlan, M. (2006) Localizing volatilities, arXiv:math/0604316v1 [math.PR].
[5] Avellaneda, M., Levy, A. and Paras, A. (1995) Pricing and hedging derivative securities in markets with uncertain volatilities. Appl. Math. Finance 2, 73-88.
[6] Bachelier, L. (1900) Théorie de la spéculation. Annales Scientifiques de l'École Normale Supérieure.
Index of Symbols
$\hat{\mathbb{E}}$ : $G$-expectation, 39
$\mathcal{H}$ : space of random variables, 1
$L^0(\Omega)$ : space of all $\mathcal{B}(\Omega)$-measurable real functions, 91
$L^p_b$ : completion of $B_b(\Omega)$ under the norm $\|\cdot\|_p$, 95
$L^p_c$ : completion of $C_b(\Omega)$ under the norm $\|\cdot\|_p$, 95
$M^{p,0}_G(0,T)$ : space of simple processes, 42
$M^p_G(0,T)$ : completion of $M^{p,0}_G(0,T)$ under the norm $E[\int_0^T |\eta_t|^p \, dt]^{1/p}$, 43
$\bar{M}^p_G(0,T)$ : completion of $M^{p,0}_G(0,T)$ under the norm $(\int_0^T E[|\eta_t|^p] \, dt)^{1/p}$, 81
q.s. : quasi-surely, 92
$S(d)$ : space of $d \times d$ symmetric matrices, 18
$S_+(d)$ : space of non-negative $d \times d$ symmetric matrices, 22
$\rho$ : coherent risk measure, 14
$(\Omega, \mathcal{H}, \mathbb{E})$ : sublinear expectation space, 2
$\stackrel{d}{=}$ : identically distributed, 7
$\langle x, y \rangle$ : scalar product of $x, y \in \mathbb{R}^n$
$|x|$ : Euclidean norm of $x$
$(A, B)$ : inner product, $(A,B) := \mathrm{tr}[AB]$
Index
$G$-Brownian motion, 36
$G$-convex, 75
$G$-distributed, 19
$G$-equation, 19
$G$-expectation, 39
$G$-heat equation, 22
$G$-martingale, 71
$G$-normal distribution, 18
$G$-submartingale, 71
$G$-supermartingale, 71
Banach space, 107
Bochner integral, 43
Cauchy sequence, 107
Cauchy's convergence condition, 107
Central limit theorem with law of large numbers, 27
Central limit theorem with zero-mean, 26
Coherent acceptable set, 14
Coherent risk measure, 14
Complete, 107
Converge in distribution, 9, 111
Converge in law, 9, 111
Converge weakly, 111
Convex acceptable set, 14
Convex expectation, 2
Daniell's integral, 112
Daniell-Stone theorem, 112
Dini's theorem, 108
Distribution, 6
Domination Theorem, 116
Einstein convention, 56, 72
Extension, 108
Finite dimensional distributions, 109
Generalized $G$-Brownian motion, 61
Geometric $G$-Brownian motion, 83
Hahn-Banach extension theorem, 108
Identically distributed, 7
Independent, 9
Independent copy, 11
Kolmogorov's continuity criterion, 110
Kolmogorov's criterion for weak compactness, 111
Kolmogorov's extension theorem, 110
Law of large numbers, 25
Lower semicontinuous envelope, 122
Maximal distribution, 17
Mean-uncertainty, 7
Modification, 110
Mutual variation process, 49
Nonlinear expectation, 2
Nonlinear expectation space, 2
Nonlinear $G$-Brownian motion, 64
Parabolic subjet, 114
Parabolic superjet, 114
Polar, 92
Product space of sublinear expectation space, 10
Prokhorov's criterion, 111
Quadratic variation process, 46
Quasi-surely, 92
Regular, 93
Restriction, 108
Robust expectation, 3
Stochastic process, 35
Sublinear expectation, 1
Sublinear expectation space, 2
Sublinearity, 2
Tight, 111
Upper semicontinuous envelope, 122
Variance-uncertainty, 7
Vector lattice, 112
Version, 110
Viscosity solution, 114, 115
Viscosity subsolution, 114
Viscosity supersolution, 114, 115
Weakly relatively compact, 111