
Growth optimal portfolio selection
strategies with transaction costs

László Györfi and István Vajda
Department of Computer Science and Information Theory
Budapest University of Technology and Economics
1521 Stoczek u. 2, Budapest, Hungary
{gyorfi,vajda}@szit.bme.hu
February 16, 2008

Abstract

Discrete time infinite horizon growth optimal investment in stock
markets with transaction costs is considered. The stock processes
are modelled by homogeneous Markov processes. Assuming that the
distribution of the market process is known, we show two recursive
investment strategies such that, in the long run, the growth rate on
trajectories (in "liminf" sense) is greater than or equal to the growth
rate of any other investment strategy with probability 1.

Key words and phrases: portfolio selection, growth optimal investment,
recursive algorithm, proportional transaction cost, dynamic optimization.

The first author acknowledges the support of the Computer and Automation Research
Institute of the Hungarian Academy of Sciences.

1 Introduction

The problem of optimal investment with proportional transaction costs has
been formulated essentially in continuous time only, in the classical articles
of Davis and Norman [8], Taksar et al. [30], Shreve and Soner [28], etc.
Taksar, Klass and Assaf [30] investigate optimal investment in a continuous
time market with two assets, driven by a Wiener process, with proportional
transaction costs and a long run expected reward criterion. Akian,
Sulem and Taksar [1] extend these results to the case of several risky assets.
Papers dealing with growth optimal investment with transaction costs
in a discrete time setting are rare. Cover and Iyengar [18] formulated the
problem of horse race markets, where in every market period one of the
assets has a positive payoff and all the others pay nothing. Their model
included proportional transaction costs and they used a long run expected
average reward criterion. There are results for more general markets as
well. Iyengar [16] investigated growth optimal investment with several
assets assuming an independent and identically distributed (i.i.d.) sequence
of asset returns. Bobryk and Stettner [6] considered the case of portfolio
selection with consumption, when there are two assets, a bank account and
a stock. Furthermore, long run expected discounted reward and i.i.d. asset
returns were assumed. In the case of discrete time, the most far-reaching
study is Schäfer [26], who considered the maximization of the long run
expected growth rate with several assets and proportional transaction costs,
when the asset returns follow a stationary Markov process.
Other authors considered transaction costs with a fixed and a propor-
tional part: Morton and Pliska [23], Eastham and Hastings [9] (cf. [5], [24],
[25], [21] and see also the references therein). The optimality criterion was
either long run expected average reward or long run expected discounted
reward. These articles assumed a continuous time parameter and a geometric
Wiener price process or more general parametrized stochastic processes. A
renaissance of impulse control in portfolio optimization was brought about
by [9]. An impulsive strategy $\pi = ((N_0, 0), (N_1, \tau_1), (N_2, \tau_2), \dots)$ is a se-
quence of pairs $(N_i, \tau_i)$, where the $\tau_i$ are nondecreasing stopping times and the $N_i$
are the portfolio positions. At the random time $\tau_k$ the portfolio is changed
from $N_{k-1}$ to $N_k$.
Most of the above mentioned papers use some kind of method from
stochastic optimal control theory. Without exception, all of these papers con-
sider optimality in expected reward; none of them gives results on
almost sure optimality. In this paper we present two portfolio selection
strategies, and for a Markovian market we prove their almost sure optimal-
ity.
The rest of the paper is organized as follows. In Section 2 we introduce
the market model and describe the modelling of transaction costs. In Sec-
tion 3 we formulate the underlying Markov control problem, and Section
4 defines optimal portfolio selection strategies. The proofs are given in
Section 5.

2 Mathematical setup: investment with transaction cost

Consider a market of $d$ assets. A market vector $x = (x^{(1)}, \dots, x^{(d)})^T \in \mathbb{R}_+^d$ is
a vector of $d$ nonnegative numbers representing price relatives for a given
trading period. That is, the $j$-th component $x^{(j)} \ge 0$ of $x$ expresses the
ratio of the closing and opening prices of asset $j$. In other words, $x^{(j)}$ is the
factor by which capital invested in the $j$-th asset grows during the trading
period.
The investor is allowed to diversify his capital at the beginning of each
trading period according to a portfolio vector $b = (b^{(1)}, \dots, b^{(d)})^T$. The $j$-th
component $b^{(j)}$ denotes the proportion of the investor's capital invested
in asset $j$. Throughout the paper we assume that the portfolio vector $b$ has
nonnegative components with $\sum_{j=1}^{d} b^{(j)} = 1$. The fact that $\sum_{j=1}^{d} b^{(j)} = 1$
means that the investment strategy is self-financing and consumption of
capital is excluded. The non-negativity of the components of $b$ means that
short selling and buying stocks on margin are not permitted. To make the
analysis feasible, some simplifying assumptions are used that need to be
taken into account. We assume that assets are arbitrarily divisible and all
assets are available in unbounded quantities at the current price at any
given trading period. We also assume that the behavior of the market
is not affected by the actions of the investor using the strategies under
investigation.
Let $S_0$ denote the investor's initial capital. Then at the end of the
trading period the investor's wealth becomes
$$S_1 = S_0 \sum_{j=1}^{d} b^{(j)} x^{(j)} = S_0 \langle b, x \rangle,$$
where $\langle \cdot, \cdot \rangle$ denotes inner product.


The evolution of the market in time is represented by a sequence of
market vectors $x_1, x_2, \dots \in \mathbb{R}_+^d$, where the $j$-th component $x_i^{(j)}$ of $x_i$ denotes
the amount obtained after investing a unit capital in the $j$-th asset on the
$i$-th trading period. For $j \le i$ we abbreviate by $x_j^i$ the array of market
vectors $x_j, \dots, x_i$ and denote by $\Delta_d$ the simplex of all vectors $b \in \mathbb{R}_+^d$
with nonnegative components summing up to one. An investment strategy
is a sequence $B$ of functions
$$b_i : \left( \mathbb{R}_+^d \right)^{i-1} \to \Delta_d, \qquad i = 1, 2, \dots$$
so that $b_i(x_1^{i-1})$ denotes the portfolio vector chosen by the investor on the
$i$-th trading period, upon observing the past behavior of the market. We
write $b(x_1^{i-1}) = b_i(x_1^{i-1})$ to ease the notation.

In this section our presentation of the transaction cost problem utilizes
the formulation in [26]. Let $S_n$ denote the wealth at the end of market
day $n$, $n = 0, 1, 2, \dots$, where without loss of generality the investor's
initial capital $S_0$ is 1 dollar. At the beginning of a new market day $n+1$,
the investor sets up his new portfolio, i.e., buys/sells stocks according to
the actual portfolio vector $b_{n+1}$. During this rearrangement he has to pay
transaction costs, therefore at the beginning of the new market day $n+1$
the net wealth $N_n$ in the portfolio $b_{n+1}$ is less than $S_n$. Using the above
notations, the (gross) wealth $S_n$ at the end of market day $n$ is
$$S_n = N_{n-1} \langle b_n, x_n \rangle.$$
The rates of proportional transaction costs (commission factors) levied on
one asset are denoted by $0 < c_s < 1$ and $0 < c_p < 1$, i.e., the sale of 1 dollar's
worth of asset $i$ nets only $1 - c_s$ dollars, and similarly we take into account
the purchase of an asset such that the purchase of 1 dollar's worth of asset
$i$ costs an extra $c_p$ dollars. We consider the special case when the rates of
cost are constant over the assets.

Let us calculate the transaction cost to be paid when selecting the portfolio
$b_{n+1}$. Before rearranging the capital, at the $j$-th asset there are $b_n^{(j)} x_n^{(j)} N_{n-1}$
dollars, while after rearranging we need $b_{n+1}^{(j)} N_n$ dollars. If $b_n^{(j)} x_n^{(j)} N_{n-1} \ge b_{n+1}^{(j)} N_n$,
then we have to sell and the transaction cost at the $j$-th asset is
$$c_s \left( b_n^{(j)} x_n^{(j)} N_{n-1} - b_{n+1}^{(j)} N_n \right),$$
otherwise we have to buy and the transaction cost at the $j$-th asset is
$$c_p \left( b_{n+1}^{(j)} N_n - b_n^{(j)} x_n^{(j)} N_{n-1} \right).$$
Let $x^+$ denote the positive part of $x$. Thus, the gross wealth $S_n$ decomposes
into the sum of the net wealth and the cost in the following self-financing way:
$$N_n = S_n - c_s \sum_{j=1}^{d} \left( b_n^{(j)} x_n^{(j)} N_{n-1} - b_{n+1}^{(j)} N_n \right)^+ - c_p \sum_{j=1}^{d} \left( b_{n+1}^{(j)} N_n - b_n^{(j)} x_n^{(j)} N_{n-1} \right)^+,$$
or equivalently
$$S_n = N_n + c_s \sum_{j=1}^{d} \left( b_n^{(j)} x_n^{(j)} N_{n-1} - b_{n+1}^{(j)} N_n \right)^+ + c_p \sum_{j=1}^{d} \left( b_{n+1}^{(j)} N_n - b_n^{(j)} x_n^{(j)} N_{n-1} \right)^+.$$

Dividing both sides by $S_n$ and introducing the ratio
$$w_n = \frac{N_n}{S_n}, \qquad 0 < w_n < 1,$$
we get
$$1 = w_n + c_s \sum_{j=1}^{d} \left( \frac{b_n^{(j)} x_n^{(j)}}{\langle b_n, x_n \rangle} - b_{n+1}^{(j)} w_n \right)^+ + c_p \sum_{j=1}^{d} \left( b_{n+1}^{(j)} w_n - \frac{b_n^{(j)} x_n^{(j)}}{\langle b_n, x_n \rangle} \right)^+. \tag{2.1}$$

Remark 2.1. Equation (2.1) is used in the sequel. Examining this cost
equation, it turns out that for arbitrary portfolio vectors $b_n$, $b_{n+1}$ and
return vector $x_n$ there exists a unique cost factor $w_n \in [0, 1)$, i.e., the
portfolio is self-financing. The value of the cost factor $w_n$ at day $n$ is determined
by the portfolio vectors $b_n$ and $b_{n+1}$ as well as by the return vector $x_n$, i.e.,
$$w_n = w(b_n, b_{n+1}, x_n)$$
for some function $w$. If we want to rearrange our portfolio substantially,
then our net wealth decreases more considerably; however, it remains pos-
itive. Note also that the cost does not restrict the set of new portfolio
vectors, i.e., the optimization algorithm searches for the optimal vector $b_{n+1}$
within the whole simplex $\Delta_d$. The value of the cost factor ranges between
$$\frac{1 - c_s}{1 + c_p} \le w_n \le 1.$$
Starting with an initial wealth $S_0 = 1$ and $w_0 = 1$, the wealth $S_n$ at the
closing time of the $n$-th market day becomes
$$S_n = N_{n-1} \langle b_n, x_n \rangle = w_{n-1} S_{n-1} \langle b_n, x_n \rangle = \prod_{i=1}^{n} \left[ w(b_{i-1}, b_i, x_{i-1}) \langle b_i, x_i \rangle \right].$$

Introduce the notation
$$g(b_{i-1}, b_i, x_{i-1}, x_i) = \log\left( w(b_{i-1}, b_i, x_{i-1}) \langle b_i, x_i \rangle \right);$$
then the average growth rate becomes
$$\frac{1}{n} \log S_n = \frac{1}{n} \sum_{i=1}^{n} \log\left( w(b_{i-1}, b_i, x_{i-1}) \langle b_i, x_i \rangle \right) = \frac{1}{n} \sum_{i=1}^{n} g(b_{i-1}, b_i, x_{i-1}, x_i). \tag{2.2}$$
Our aim is to maximize this average growth rate.
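Continuing the illustrative sketch above (again our own, not the paper's), the wealth recursion $S_n = w_{n-1} S_{n-1} \langle b_n, x_n \rangle$ and the average growth rate (2.2) can be evaluated for given portfolio and return sequences, reusing `cost_factor`.

```python
def average_growth_rate(portfolios, returns, c_s=0.0015, c_p=0.0015):
    """portfolios[i] = b_{i+1} and returns[i] = x_{i+1} for i = 0, ..., n-1.
    Returns (1/n) log S_n as in equation (2.2), with w_0 = 1."""
    n = len(returns)
    log_S, w = 0.0, 1.0               # w holds w(b_{i-1}, b_i, x_{i-1}); w_0 = 1
    for i in range(n):
        b, x = portfolios[i], returns[i]
        log_S += np.log(w * np.dot(b, x))        # g(b_{i-1}, b_i, x_{i-1}, x_i)
        if i + 1 < n:                            # cost of moving b_i -> b_{i+1}
            w = cost_factor(b, portfolios[i + 1], x, c_s, c_p)
    return log_S / n
```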

Remark 2.2. In modelling the behavior of the evolution of the market, two
main approaches have been considered in the theory of sequential invest-
ment. One of them allows the market sequence $x_1, x_2, \dots$ to take completely
arbitrary values and no stochastic model is imposed on the mechanism gen-
erating the price relatives. In this approach the achieved wealth is compared
with that of the best in a class of reference strategies. For example, Cover
[7] considers the class of all constantly rebalanced portfolios (CRP) defined
by strategies $B$ for which $b_i(x_1^{i-1})$ equals a fixed portfolio vector indepen-
dently of $i$ and of the past $x_1^{i-1}$. Without transaction cost, Cover showed that
there exist investment strategies $B$ (so-called universal portfolios) which
perform almost as well as the best constantly rebalanced portfolio. The
advantage of this worst-case approach is that it avoids imposing statis-
tical models on the stock market and the results hold for all possible se-
quences $x_1^n$. In this sense this approach is extremely robust. Taking into
account the transaction cost, Iyengar [17], Iyengar and Cover [18], Kalai
and Blum [20], and Merhav et al. [22] introduced universal portfolio selec-
tion strategies. Another possibility is to assume that the market vectors
are realizations of a random process, and to describe a statistical model. The
advantage of this more classical view is that, for each process, an optimal
strategy may be determined (in a sense specified below), which depends
on the unknown distribution of the process, and uses the past market data
sequence to estimate the statistical features necessary to approximate the
optimal strategy.

In the sequel $x_i$ will be a random variable and is denoted by $X_i$. Let us
use the decomposition
$$\frac{1}{n} \log S_n = I_n + J_n, \tag{2.3}$$
where
$$I_n = \frac{1}{n} \sum_{i=1}^{n} \left( g(b_{i-1}, b_i, X_{i-1}, X_i) - \mathbf{E}\{ g(b_{i-1}, b_i, X_{i-1}, X_i) \mid X_1^{i-1} \} \right)$$
and
$$J_n = \frac{1}{n} \sum_{i=1}^{n} \mathbf{E}\{ g(b_{i-1}, b_i, X_{i-1}, X_i) \mid X_1^{i-1} \}.$$
$I_n$ is an average of martingale differences. Under mild conditions on the
support of the distribution of $X$, $g(b_{i-1}, b_i, X_{i-1}, X_i)$ is bounded, therefore
$I_n$ is an average of bounded martingale differences, which converges to 0
almost surely, since according to the Chow Theorem (cf. Theorem 3.3.1 in
Stout [29])
$$\sum_{i=1}^{\infty} \frac{1}{i^2} \mathbf{E}\left\{ \left( g(b_{i-1}, b_i, X_{i-1}, X_i) - \mathbf{E}\{ g(b_{i-1}, b_i, X_{i-1}, X_i) \mid X_1^{i-1} \} \right)^2 \right\} < \infty$$
implies that
$$I_n \to 0$$
almost surely. Thus, the asymptotic maximization of the average growth
rate $\frac{1}{n} \log S_n$ is equivalent to the maximization of $J_n$.
If the market process $\{X_i\}$ is a homogeneous and first order Markov
process then, for an appropriate portfolio selection $\{b_i\}$, we have that
$$\mathbf{E}\{ g(b_{i-1}, b_i, X_{i-1}, X_i) \mid X_1^{i-1} \}
= \mathbf{E}\{ \log( w(b_{i-1}, b_i, X_{i-1}) \langle b_i, X_i \rangle ) \mid X_1^{i-1} \}$$
$$= \log w(b_{i-1}, b_i, X_{i-1}) + \mathbf{E}\{ \log \langle b_i, X_i \rangle \mid X_1^{i-1} \}$$
$$= \log w(b_{i-1}, b_i, X_{i-1}) + \mathbf{E}\{ \log \langle b_i, X_i \rangle \mid b_i, X_{i-1} \}$$
$$\stackrel{\mathrm{def}}{=} v(b_{i-1}, b_i, X_{i-1}),$$
therefore the maximization of the average growth rate $\frac{1}{n} \log S_n$ is asymp-
totically equivalent to the maximization of
$$J_n = \frac{1}{n} \sum_{i=1}^{n} v(b_{i-1}, b_i, X_{i-1}). \tag{2.4}$$
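For illustration only, and under the hypothetical assumption that the known Markov kernel can at least be sampled (the paper assumes the distribution is known but does not prescribe any particular representation), the function $v(b, b', x) = \log w(b, b', x) + \mathbf{E}\{ \log \langle b', X_i \rangle \mid X_{i-1} = x \}$ could be estimated by Monte Carlo; `sample_next` below is our hypothetical sampler.

```python
def v_hat(b, b_next, x, sample_next, n_samples=10_000,
          c_s=0.0015, c_p=0.0015):
    """Monte Carlo estimate of v(b, b', x), reusing cost_factor above.
    sample_next(x, m) is an assumed sampler drawing m return vectors
    from the conditional distribution of the next market vector given x."""
    w = cost_factor(b, b_next, x, c_s, c_p)
    X = sample_next(x, n_samples)          # array of shape (n_samples, d)
    return np.log(w) + np.log(X @ b_next).mean()
```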

Remark 2.3. Without transaction cost, the fundamental limits deter-
mined in Algoet and Cover [2] reveal that the so-called log-optimum port-
folio $B^* = \{b^*(\cdot)\}$ is the best possible choice. More precisely, in trading
period $n$ let $b^*(\cdot)$ be such that
$$b_n^*(X_1^{n-1}) = \arg\max_{b(\cdot)} \mathbf{E}\left\{ \log \left\langle b(X_1^{n-1}), X_n \right\rangle \mid X_1^{n-1} \right\}.$$
If $S_n^* = S_n(B^*)$ denotes the capital achieved by a log-optimum portfolio
strategy $B^*$ after $n$ trading periods, then for any other investment strategy
$B$ with capital $S_n = S_n(B)$ and for any stationary and ergodic return
process $\{X_n\}_{-\infty}^{\infty}$,
$$\liminf_{n \to \infty} \frac{1}{n} \log \frac{S_n^*}{S_n} \ge 0 \qquad \text{almost surely}$$
and
$$\lim_{n \to \infty} \frac{1}{n} \log S_n^* = W^* \qquad \text{almost surely},$$
where
$$W^* = \mathbf{E}\left\{ \max_{b(\cdot)} \mathbf{E}\left\{ \log \left\langle b(X_{-\infty}^{-1}), X_0 \right\rangle \mid X_{-\infty}^{-1} \right\} \right\}$$
is the maximal possible growth rate of any investment strategy. Moreover,
Györfi and Schäfer [12] and Györfi, Lugosi and Udina [11] constructed em-
pirical (data dependent) log-optimum strategies in case of unknown distri-
butions. Note that for a first order Markovian return process
$$b_n(X_1^{n-1}) = b_n^*(X_{n-1}) = \arg\max_{b(\cdot)} \mathbf{E}\left\{ \log \left\langle b(X_{n-1}), X_n \right\rangle \mid X_{n-1} \right\}.$$
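As a rough sketch of the last display (ours; not the construction of [11] or [12]), $b_n^*(X_{n-1})$ can be approximated by maximizing the estimated conditional expected log-return over the simplex. The crude Dirichlet random search below merely stands in for a proper concave-programming solver; `argmax_on_simplex`, `sample_next` and `log_optimal_portfolio` are our names.

```python
def argmax_on_simplex(objective, d, n_candidates=20_000, seed=0):
    """Crude maximizer over the simplex Delta_d: evaluate the objective on
    random Dirichlet candidates and keep the best one."""
    cand = np.random.default_rng(seed).dirichlet(np.ones(d), size=n_candidates)
    vals = np.array([objective(b) for b in cand])
    return cand[int(np.argmax(vals))]

def log_optimal_portfolio(x_prev, sample_next, d, n_samples=5_000):
    """b*_n(X_{n-1}) of Remark 2.3: maximize the conditional expected
    log-return, ignoring transaction costs."""
    X = sample_next(x_prev, n_samples)
    return argmax_on_simplex(lambda b: np.log(X @ b).mean(), d)
```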

Remark 2.4. For the portfolio selection problem in case of transaction
cost, one might apply the log-optimum portfolio $b_n^*(X_{n-1})$. We tested
investment strategies on a standard set of New York Stock Exchange data
used by Cover [7], Kalai and Blum [20], and others. The NYSE data set
includes daily prices of 36 assets along a 22-year period (5651 trading days)
ending in 1985. This means that $d = 36$. However, for a usual value of the
transaction coefficient $c = c_s = c_p = 0.0015$ the portfolio $b_n^*(X_{n-1})$ has poor
performance, i.e., the resulting average growth rate is typically negative.
3 The related Markov control problem

Discrete time portfolio optimization with transaction cost is a special case
of the general Markov control process (MCP). A discrete time Markov
control process is defined by a five-tuple $(S, A, U(s), Q, r)$ (cf. [13]). $S$ is
a Borel space, called the state space; the action space $A$ is Borel, too; the
space of admissible actions $U(s)$ is a Borel subset of $A$. Let the set $K$ be
$\{(s, a) : s \in S,\ a \in U(s)\}$. The transition law is a stochastic kernel $Q(\cdot \mid s, a)$
on $S$ given $K$, and $r(s, a)$ is the reward function.
The evolution of the process is the following. Let $S_t$ denote the state
at time $t$, and let the action $A_t$ be chosen at that time. If $S_t = s$ and $A_t = a$, then
the reward is $r(s, a)$, and the process moves to $S_{t+1}$ according to the kernel
$Q(\cdot \mid s, a)$. A control policy is a sequence $\pi = \{\pi_n\}$ of stochastic kernels on
$A$ given the past states and actions, i.e., $\pi_n(\cdot \mid s_0, a_0, \dots, s_{n-1}, a_{n-1}, s_n)$ is a
randomized policy; it is the conditional probability distribution of the
randomization.

Two reward criteria are considered. The expected long run average re-
ward for $\pi$ is defined by
$$J(\pi) = \liminf_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} \mathbf{E}\, r(S_t, A_t). \tag{3.1}$$
The sample-path average reward is defined as
$$J(\pi) := \liminf_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} r(S_t, A_t). \tag{3.2}$$
In the theory of MCP most of the results correspond to (3.1), while just
a few present results for the sample-path criterion (3.2). Such sample-path
results can be found in [3] for bounded rewards and in [14], [15], [31], [19]
for unbounded rewards.

For Markov control processes, the main aim is to approach the maximum
asymptotic reward
$$J^* = \sup_{\pi} J(\pi),$$
which leads to a dynamic programming problem.


For portfolio optimization with transaction cost, we formulate the cor-
responding Markov control problem. Assume that there exist 0<a 1 < 1<
a2 < 1 such that
Xi 2 [a ; a ]d :
1 2

1: Let us dene the state space as:


S := (b; x)jb 2  ; x 2 [a ; a ]
n o
d
d 1 2 :

10
2: The action space is
A :=  :
d

3: In this model of transaction cost, the set of admissible actions is


U (b; x) :=  : d

This special form of admissible set of actions makes the optimization prob-
lem much easier.
4: The stochastic kernel is the transition probability distribution of the
Markov market process, describing the asset returns:

((
Q d b0 ; x0 )j(b; x); b0) := P (dx0jx) := PfdX = dx0jX = xg:
2 1

Note, that this corresponds to the assumption, that the market behaviour is
not aected by the investor. In general, the optimal strategy is randomized.
However, if the market is not inuenced by the trading then the optimal
strategy can be non-randomized, as it will be shown in the next section.

5: The reward function is:


r ((b; x); b0 ) = v (b; b0 ; x):

6: The sample-path average reward criterion is the following:


lim!1inf n1 r((b ; x ); b ) = liminf 1
n n

!1 n v (b ; b ; x )
X X
t 1 t 1 t t 1 t t 1
n n
t=1 t=1
= lim!1inf J : n
n

Remark 3.1. It should be noted that the methods of the MCP literature, more
precisely the theorems in [3], [14], [15], [31], [19], cannot be applied in our
case. However, we do use the formalism, the results on the existence of the
solution of discounted Bellman equations, and the basic idea of the vanishing
discount approach. The difficulties arise from the fact that we do not as-
sume a weakly continuous transition kernel (cf. [3]). But even if we assumed
weak or even strong continuity (continuity for bounded Borel measurable
functions) of $Q(dy \mid \cdot, \cdot)$, an additional ergodicity assumption would be nec-
essary. The usual uniform ergodicity assumption on $\{(b_t, x_t)\}$ is equivalent
to aperiodicity and Doeblin's condition (cf. [14]). However, the aperiodicity
of $\{(b_t, x_t)\}$ is not necessarily true; one can easily give counterexamples
for it.

4 Optimal portfolio selection algorithms

Before investigating the general optimization problem we may introduce a
suboptimal solution, called the naive portfolio, by a one-step optimization as
follows: put $b_1 = (1/d, \dots, 1/d)$ and for $i \ge 1$,
$$b_{i+1} = \arg\max_{b'} v(b_i, b', X_i). \tag{4.1}$$
Obviously, this portfolio has no global optimality property. However, in
some experiments on NYSE data it has significantly better performance
than the log-optimal portfolio.
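In the same illustrative setting as the earlier sketches (our names; `v_hat` and `argmax_on_simplex` were introduced above as hypothetical helpers), one step of the naive portfolio (4.1) might look as follows:

```python
def naive_portfolio_step(b_i, x_i, sample_next, d):
    """One update of the naive portfolio (4.1): maximize the estimated
    v(b_i, b', X_i) over b' in the simplex."""
    return argmax_on_simplex(
        lambda b_next: v_hat(b_i, b_next, x_i, sample_next), d)
```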

Remark 4.1. One may consider the long run expected average reward crite-
rion (3.1), to maximize the expected average growth rate of the wealth of the
investor:
$$\liminf_{n \to \infty} \frac{1}{n} \mathbf{E}\{ \log S_n \}.$$
This type of optimality criterion is used by Iyengar [16] and by Schäfer
[26]. Iyengar [16] investigates the problem of transaction cost if the market
process $\{X_i\}$ is a sequence of independent and identically distributed
(i.i.d.) random variables, and the marginal distribution is known. Using
a market and transaction model closely related to ours, with a different
control theoretic representation, it is shown that the optimal growth rate
can be uniquely characterized as $g$, where $(V, g)$ are the solutions of the
corresponding average cost optimality equation (a special version of the
Bellman equation):
$$g + V(b, x) = \max_{b'} \left\{ v_0(b, b', x) + \mathbf{E}\{ V(b', X_1) \} \right\},$$
where the function $v_0$ is similar to our $v$ above. The optimal strategy is
not randomized; it is defined as $b_1 = (1/d, \dots, 1/d)$ and for $i \ge 1$,
$$b_{i+1} = \arg\max_{b'} \left\{ v_0(b_i, b', X_i) + \mathbf{E}\{ V(b', X_1) \} \right\}.$$
Iyengar [16] proves that if the solution $(V, g)$ exists then this portfolio
selection has optimal expected growth rate $g$. In order to avoid imposing
some regularity conditions on the optimal policy for the convergence of
the value iteration algorithm, it is shown that for all $\epsilon > 0$ there exists
a continuous portfolio selection function with expected growth rate not
smaller than $g - \epsilon$.

Next we introduce two optimal portfolio selection rules. Let $0 < \delta < 1$
denote a discount factor. We apply a kind of vanishing discount approach,
formulated by the discounted Bellman equation:
$$F_\delta(b, x) = \max_{b'} \left\{ v(b, b', x) + (1 - \delta) \mathbf{E}\{ F_\delta(b', X_2) \mid X_1 = x \} \right\}. \tag{4.2}$$
We show that this discounted Bellman equation has a solution.
It is rather standard to prove that a continuous solution $F_\delta \in C(\Delta_d \times [a_1, a_2]^d)$
of (4.2) exists for any $0 < \delta < 1$. As the proof goes the usual
way, we give only a sketch of it. Let $H$ be the following operator:
$$H : h(b, x) \mapsto \max_{b'} \left\{ v(b, b', x) + (1 - \delta) \mathbf{E}\{ h(b', X_2) \mid X_1 = x \} \right\}.$$
One can show that $H : C(\Delta_d \times [a_1, a_2]^d) \to C(\Delta_d \times [a_1, a_2]^d)$ and that $H$ is
a contraction mapping. Then the Banach Fixed Point Theorem ensures that
the following value iteration converges to the solution: put $F_{\delta,1}(b, x) = 0$,
and
$$F_{\delta,m+1}(b, x) = \max_{b'} \left\{ v(b, b', x) + (1 - \delta) \mathbf{E}\{ F_{\delta,m}(b', X_2) \mid X_1 = x \} \right\},$$
for $m \ge 1$ (cf. Hernández-Lerma and Lasserre [13], Bertsekas and Shreve [4],
Schäfer [26]).
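For illustration, here is a finite-state sketch of this value iteration (ours, not the paper's construction): it discretizes both the portfolio and the return coordinates and assumes a transition matrix `P` on the return grid, whereas the paper works on the continuous state space $\Delta_d \times [a_1, a_2]^d$.

```python
def value_iteration(delta, b_grid, x_grid, P, v, n_iter=200):
    """Approximate the solution F_delta of (4.2) on a finite grid.
    b_grid: (m_b, d) candidate portfolios (also used as actions),
    x_grid: (m_x, d) return states, P: (m_x, m_x) transition matrix of the
    Markov chain on x_grid, v(b, b_next, x): one-step reward."""
    m_b, m_x = len(b_grid), len(x_grid)
    # reward tensor R[i, k, j] = v(b_i, b'_k, x_j)
    R = np.array([[[v(bi, bk, xj) for xj in x_grid]
                   for bk in b_grid]
                  for bi in b_grid])
    F = np.zeros((m_b, m_x))              # F[k, j] ~ F_delta(b_k, x_j)
    for _ in range(n_iter):
        EF = F @ P.T                      # EF[k, j] = E{F(b'_k, X_2) | X_1 = x_j}
        F = (R + (1.0 - delta) * EF[None, :, :]).max(axis=1)
    return F
```

Since the operator $H$ is a contraction with modulus $1 - \delta$, on the order of $\log(\mathrm{tol})/\log(1-\delta)$ sweeps suffice for a tolerance $\mathrm{tol}$.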

Strategy 1. Our first portfolio selection strategy is the following:
$$b_1 = (1/d, \dots, 1/d)$$
and
$$b_{i+1} = \arg\max_{b'} \left\{ v(b_i, b', X_i) + (1 - \delta_i) \mathbf{E}\{ F_{\delta_i}(b', X_{i+1}) \mid X_i \} \right\}, \tag{4.3}$$
for $i \ge 1$, where $0 < \delta_i < 1$ is a discount factor such that $\delta_i \downarrow 0$.
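In the same discretized toy setting as the `value_iteration` sketch (again our own illustration, not the paper's algorithm), one update of Strategy 1 could be written as follows; the conditional expectation is approximated at the grid point nearest to the observed return vector, and $F_{\delta_i}$ is recomputed at every step purely for brevity.

```python
def strategy1_step(b_i, x_i, delta_i, b_grid, x_grid, P, v):
    """One update (4.3): recompute F_{delta_i}, then pick the next portfolio
    from b_grid maximizing one-step reward plus discounted continuation."""
    F = value_iteration(delta_i, b_grid, x_grid, P, v)
    j = int(np.argmin(np.linalg.norm(x_grid - x_i, axis=1)))  # nearest return state
    EF = F @ P.T                  # EF[k, j] = E{F(b'_k, X_{i+1}) | X_i = x_grid[j]}
    scores = [v(b_i, b_next, x_i) + (1.0 - delta_i) * EF[k, j]
              for k, b_next in enumerate(b_grid)]
    return b_grid[int(np.argmax(scores))]
```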
Remark 4.2. A strategy similar to (4.3) was defined by Schäfer [26].
He introduces an additional asset to settle the transaction costs when the
portfolio is restructured.

Remark 4.3. A portfolio selection $\{b_i\}$ is called recursive if it has the
form
$$b_i = b_i(x_1^{i-1}) = b_i(b_{i-1}, x_{i-1}).$$
Obviously, the portfolio $\{b_i\}$ is recursive. The recursion in the definition of
the portfolio $\{b_i\}$ is not time invariant, i.e., it is a non-stationary portfolio
selection rule.
Now we state our result on the optimality of Strategy 1 with respect
to the sample-path average criterion:

Theorem 4.1 Assume
(i) that $\{X_i\}$ is a homogeneous and first order Markov process,
(ii) and that there exist $0 < a_1 < 1 < a_2 < \infty$ such that $a_1 \le X^{(j)} \le a_2$ for
all $j = 1, \dots, d$.
Choose the discount factors $\delta_i \downarrow 0$ such that
$$(\delta_i - \delta_{i+1})/\delta_{i+1}^2 \to 0$$
as $i \to \infty$, and
$$\sum_{n=1}^{\infty} \frac{1}{n^2 \delta_n^2} < \infty.$$
Then, for Strategy 1, the portfolio $\{b_i^*\}$ with capital $S_n^*$ is optimal in
the sense that for any portfolio strategy $\{b_i\}$ with capital $S_n$,
$$\liminf_{n \to \infty} \left( \frac{1}{n} \log S_n^* - \frac{1}{n} \log S_n \right) \ge 0$$
a.s.
Remark 4.4. According to Theorem 4.2.1 in Schäfer [26],
$$\liminf_{n \to \infty} \mathbf{E}\left( \frac{1}{n} \log S_n^* - \frac{1}{n} \log S_n \right) \ge 0,$$
i.e., the portfolio $\{b_i^*\}$ is optimal in expectation. Theorem 4.1 states that
the portfolio strategy $\{b_i^*\}$ is sample-path optimal, too, i.e., it is optimal
with probability one.

Remark 4.5. For the choice
$$\delta_i = i^{-\gamma},$$
with $0 < \gamma < 1/2$, the conditions of Theorem 4.1 are satisfied.

Remark 4.6. It is an open problem to prove that
$$\frac{1}{n} \log S_n^* = \frac{1}{n} \sum_{i=1}^{n} g(b_i^*, b_{i+1}^*, X_i, X_{i+1})$$
is convergent for an ergodic market process, and that the limit $W$ is not random.
A further problem is how to calculate $W$.

Strategy 2. Next, we introduce a portfolio with stationary (time invari-
ant) recursion such that this portfolio is a sample-path optimal policy, too.
For any integer $k \ge 1$, put
$$b_1^{(k)} = (1/d, \dots, 1/d)$$
and
$$b_{i+1}^{(k)} = \arg\max_{b'} \left\{ v(b_i^{(k)}, b', X_i) + (1 - \delta_k) \mathbf{E}\{ F_{\delta_k}(b', X_{i+1}) \mid X_i \} \right\}, \tag{4.4}$$
for $i \ge 1$. The portfolio $B^{(k)} = \{b_i^{(k)}\}$ is called the portfolio of expert $k$,
with capital $S_n(B^{(k)})$. Choose an arbitrary probability distribution $q_k > 0$,
and introduce the combined portfolio with capital
$$\tilde{S}_n = \sum_{k=1}^{\infty} q_k S_n(B^{(k)}).$$
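As a small numerical aside (ours; in practice the countable family of experts would be truncated to finitely many discount levels $\delta_k$), the combined capital $\tilde{S}_n$ is conveniently accumulated from the experts' log-wealths with a log-sum-exp:

```python
def combined_log_wealth(expert_log_wealths, q):
    """log of S~_n = sum_k q_k S_n(B^(k)) from the experts' log S_n(B^(k))."""
    a = np.log(np.asarray(q)) + np.asarray(expert_log_wealths)
    m = a.max()
    return m + np.log(np.exp(a - m).sum())
```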
Theorem 4.2 Assume (i) and (ii) of Theorem 4.1. Choose the dis-
count factors such that $\delta_k \downarrow 0$ as $k \to \infty$. Then, for Strategy 2,
$$\lim_{n \to \infty} \left( \frac{1}{n} \log S_n^* - \frac{1}{n} \log \tilde{S}_n \right) = 0$$
a.s.
The importance of Theorem 4.2 is that we can approach the optimal
average growth rate asymptotically with $\tilde{S}_n$. An important direction of our
future work is to construct an empirical version of this stationary rule, i.e.,
to get a data driven portfolio selection when the distribution of the market
process is unknown.

5 Proofs

Proof of Theorem 4.1. Introduce the following notation:
$$F_i(b, x) = F_{\delta_i}(b, x).$$
We have to show that
$$\liminf_{n \to \infty} \left( \frac{1}{n} \sum_{i=1}^{n} g(b_i^*, b_{i+1}^*, X_i, X_{i+1}) - \frac{1}{n} \sum_{i=1}^{n} g(b_i, b_{i+1}, X_i, X_{i+1}) \right) \ge 0$$
a.s. Because of the martingale difference argument in Section 2, one has
$$\liminf_{n \to \infty} \left( \frac{1}{n} \sum_{i=1}^{n} g(b_i^*, b_{i+1}^*, X_i, X_{i+1}) - \frac{1}{n} \sum_{i=1}^{n} g(b_i, b_{i+1}, X_i, X_{i+1}) \right)
= \liminf_{n \to \infty} \left( \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i, b_{i+1}, X_i) \right)$$
a.s., therefore we have to prove that
$$\liminf_{n \to \infty} \left( \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i, b_{i+1}, X_i) \right) \ge 0 \tag{5.1}$$
a.s. (4.3) implies that
$$F_i(b_i^*, X_i) = v(b_i^*, b_{i+1}^*, X_i) + (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) \mid b_{i+1}^*, X_i \}, \tag{5.2}$$
while for any portfolio $\{b_i\}$,
$$F_i(b_i, X_i) \ge v(b_i, b_{i+1}, X_i) + (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid b_{i+1}, X_i \}. \tag{5.3}$$
Because of (5.2) and (5.3), we get that
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i^*, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) \mid b_{i+1}^*, X_i \} \right)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i^*, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) \mid X_1^i \} \right)$$
and
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i, b_{i+1}, X_i)
\le \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid b_{i+1}, X_i \} \right)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} \right),$$
therefore
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i, b_{i+1}, X_i)
\ge \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i^*, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) \mid X_1^i \} \right)
- \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_i, X_i) - (1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} \right).$$
Apply the following identity:
$$(1 - \delta_i) \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} - F_i(b_i, X_i)
= \left( \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} - F_i(b_{i+1}, X_{i+1}) \right)
+ \left( F_i(b_{i+1}, X_{i+1}) - F_i(b_i, X_i) \right)
- \delta_i \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \}
= a_i + b_i + c_i.$$
Because of
$$F_i(b, x) = \max_{b'} \left\{ v(b, b', x) + (1 - \delta_i) \mathbf{E}( F_i(b', X_{i+1}) \mid X_i = x ) \right\},$$
we have that
$$\|F_i\|_\infty \le \|v\|_\infty + (1 - \delta_i) \|F_i\|_\infty,$$
therefore
$$\|F_i\|_\infty \le \frac{\|v\|_\infty}{\delta_i}$$
(cf. Lemma 4.2.3 in Schäfer [26]). $\{a_i\}$ is a sequence of martingale
differences such that
$$|a_i| \le 2 \|F_i\|_\infty \le \frac{2}{\delta_i} \|v\|_\infty,$$
therefore, because of $\sum_n \frac{1}{n^2 \delta_n^2} < \infty$, the Chow Theorem implies that
$$\frac{1}{n} \sum_{i=1}^{n} a_i \to 0 \tag{5.4}$$
a.s. (cf. Stout [29]).
Similarly to the bounding above, we have the equality
$$F_i(b, x) = \max_{b'} \left\{ v(b, b', x) + (1 - \delta_i) \mathbf{E}( F_i(b', X_{i+1}) \mid X_i = x ) \right\}$$
and the inequality
$$F_{i+1}(b, x) = \max_{b''} \left\{ v(b, b'', x) + (1 - \delta_{i+1}) \mathbf{E}( F_{i+1}(b'', X_{i+2}) \mid X_{i+1} = x ) \right\}
\ge v(b, b', x) + (1 - \delta_{i+1}) \mathbf{E}( F_{i+1}(b', X_{i+1}) \mid X_i = x )$$
with arbitrary $b'$. Taking the difference,
$$F_i(b, x) - F_{i+1}(b, x)
\le \max_{b'} \left\{ (1 - \delta_i) \mathbf{E}( F_i(b', X_{i+1}) \mid X_i = x ) - (1 - \delta_{i+1}) \mathbf{E}( F_{i+1}(b', X_{i+1}) \mid X_i = x ) \right\}$$
$$\le (1 - \delta_i) \| F_i - F_{i+1} \|_\infty + (\delta_i - \delta_{i+1}) \max_{b'} \left| \mathbf{E}( F_{i+1}(b', X_{i+1}) \mid X_i = x ) \right|
\le (1 - \delta_i) \| F_i - F_{i+1} \|_\infty + (\delta_i - \delta_{i+1}) \| F_{i+1} \|_\infty.$$
So we have
$$\| F_i - F_{i+1} \|_\infty \le \frac{\delta_i - \delta_{i+1}}{\delta_{i+1}} \| F_{i+1} \|_\infty.$$
Using that $\|F_{i+1}\|_\infty \le \frac{\|v\|_\infty}{\delta_{i+1}}$ and the assumption on the $\delta_i$'s, we get that
$$\| F_i - F_{i+1} \|_\infty \le \|v\|_\infty \frac{\delta_i - \delta_{i+1}}{\delta_{i+1}^2}$$
(cf. Lemma 4.2.3 in Schäfer [26]). Concerning the terms $b_i$,
$$\left| \frac{1}{n} \sum_{i=1}^{n} b_i \right|
= \left| \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_{i+1}, X_{i+1}) - F_i(b_i, X_i) \right) \right|$$
$$\le \left| \frac{1}{n} \sum_{i=1}^{n} \left( F_i(b_{i+1}, X_{i+1}) - F_{i+1}(b_{i+1}, X_{i+1}) \right) \right|
+ \left| \frac{1}{n} \sum_{i=1}^{n} \left( F_{i+1}(b_{i+1}, X_{i+1}) - F_i(b_i, X_i) \right) \right|$$
$$\le \frac{1}{n} \sum_{i=1}^{n} \| F_i - F_{i+1} \|_\infty
+ \left| \frac{1}{n} \left( F_{n+1}(b_{n+1}, X_{n+1}) - F_1(b_1, X_1) \right) \right|$$
$$\le \frac{1}{n} \sum_{i=1}^{n} \| F_i - F_{i+1} \|_\infty + \frac{\| F_{n+1} \|_\infty + \| F_1 \|_\infty}{n}$$
$$\le \|v\|_\infty \frac{1}{n} \sum_{i=1}^{n} \frac{|\delta_i - \delta_{i+1}|}{\delta_{i+1}^2} + \|v\|_\infty \frac{1/\delta_{n+1} + 1/\delta_1}{n}
\to 0 \tag{5.5}$$
by the conditions. Concerning the proof of (5.1), what is left to show is that
$$\liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \left( \delta_i \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) \mid X_1^i \} - \delta_i \mathbf{E}\{ F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} \right) \ge 0$$
a.s. The definition of $F_i$ implies that
$$F_i(b_{i+1}^*, X_{i+1}) - F_i(b_{i+1}, X_{i+1})
= \max_{b'} \left\{ v(b_{i+1}^*, b', X_{i+1}) + (1 - \delta_i) \mathbf{E}\{ F_i(b', X_{i+2}) \mid X_{i+1} \} \right\}
- \max_{b''} \left\{ v(b_{i+1}, b'', X_{i+1}) + (1 - \delta_i) \mathbf{E}\{ F_i(b'', X_{i+2}) \mid X_{i+1} \} \right\}$$
$$\ge \min_{b'} \left\{ \left( v(b_{i+1}^*, b', X_{i+1}) + (1 - \delta_i) \mathbf{E}\{ F_i(b', X_{i+2}) \mid X_{i+1} \} \right)
- \left( v(b_{i+1}, b', X_{i+1}) + (1 - \delta_i) \mathbf{E}\{ F_i(b', X_{i+2}) \mid X_{i+1} \} \right) \right\}$$
$$= \min_{b'} \left\{ v(b_{i+1}^*, b', X_{i+1}) - v(b_{i+1}, b', X_{i+1}) \right\}
\ge -2 \|v\|_\infty,$$
therefore
$$\frac{1}{n} \sum_{i=1}^{n} \delta_i \mathbf{E}\{ F_i(b_{i+1}^*, X_{i+1}) - F_i(b_{i+1}, X_{i+1}) \mid X_1^i \} \ge -\frac{2 \|v\|_\infty}{n} \sum_{i=1}^{n} \delta_i \to 0. \tag{5.6}$$
(5.4), (5.5) and (5.6) imply (5.1).

Proof of Theorem 4.2. Theorem 4.1 implies that
$$\liminf_{n \to \infty} \left( \frac{1}{n} \log S_n^* - \frac{1}{n} \log \tilde{S}_n \right) \ge 0$$
a.s. We have to show that
$$\liminf_{n \to \infty} \left( \frac{1}{n} \log \tilde{S}_n - \frac{1}{n} \log S_n^* \right) \ge 0 \tag{5.7}$$
a.s. Because of the definition of $\tilde{S}_n$,
$$\frac{1}{n} \log \tilde{S}_n = \frac{1}{n} \log \sum_{k=1}^{\infty} q_k S_n(B^{(k)})
\ge \frac{1}{n} \log \sup_k q_k S_n(B^{(k)})
= \sup_k \left( \frac{\log q_k}{n} + \frac{1}{n} \log S_n(B^{(k)}) \right),$$
therefore (5.7) follows from
$$\liminf_{n \to \infty} \sup_k \left( \frac{\log q_k}{n} + \frac{1}{n} \sum_{i=1}^{n} g(b_i^{(k)}, b_{i+1}^{(k)}, X_i, X_{i+1}) - \frac{1}{n} \sum_{i=1}^{n} g(b_i^*, b_{i+1}^*, X_i, X_{i+1}) \right) \ge 0$$
a.s., which is equivalent to
$$\liminf_{n \to \infty} \sup_k \left( \frac{\log q_k}{n} + \frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) \right) \ge 0 \tag{5.8}$$
a.s. (4.4) implies that
$$F_k(b_i^{(k)}, X_i) = v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) + (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) \mid b_{i+1}^{(k)}, X_i \}, \tag{5.9}$$
where $F_k(b, x) = F_{\delta_k}(b, x)$, while for any portfolio $\{b_i\}$,
$$F_k(b_i, X_i) \ge v(b_i, b_{i+1}, X_i) + (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}, X_{i+1}) \mid b_{i+1}, X_i \},$$
thus for the portfolio $\{b_i^*\}$
$$F_k(b_i^*, X_i) \ge v(b_i^*, b_{i+1}^*, X_i) + (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^*, X_{i+1}) \mid b_{i+1}^*, X_i \}. \tag{5.10}$$
Because of (5.9) and (5.10), we get that
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i)
\le \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^*, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^*, X_{i+1}) \mid b_{i+1}^*, X_i \} \right)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^*, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^*, X_{i+1}) \mid X_1^i \} \right)$$
and
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^{(k)}, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) \mid b_{i+1}^{(k)}, X_i \} \right)
= \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^{(k)}, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) \mid X_1^i \} \right),$$
therefore
$$\frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i)
\ge \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^{(k)}, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) \mid X_1^i \} \right)
- \frac{1}{n} \sum_{i=1}^{n} \left( F_k(b_i^*, X_i) - (1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}^*, X_{i+1}) \mid X_1^i \} \right).$$
Apply the following identity:
$$(1 - \delta_k) \mathbf{E}\{ F_k(b_{i+1}, X_{i+1}) \mid X_1^i \} - F_k(b_i, X_i)
= \left( \mathbf{E}\{ F_k(b_{i+1}, X_{i+1}) \mid X_1^i \} - F_k(b_{i+1}, X_{i+1}) \right)
+ \left( F_k(b_{i+1}, X_{i+1}) - F_k(b_i, X_i) \right)
- \delta_k \mathbf{E}\{ F_k(b_{i+1}, X_{i+1}) \mid X_1^i \}
= a_i + b_i + c_i.$$

Similarly to the proof of Theorem 4.1, the averages of the $a_i$'s and $b_i$'s tend to
zero a.s., so concerning (5.8) we have that, with probability one,
$$\liminf_{n \to \infty} \sup_k \left( \frac{\log q_k}{n} + \frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) \right)$$
$$\ge \sup_k \liminf_{n \to \infty} \left( \frac{\log q_k}{n} + \frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) \right)$$
$$= \sup_k \liminf_{n \to \infty} \left( \frac{1}{n} \sum_{i=1}^{n} v(b_i^{(k)}, b_{i+1}^{(k)}, X_i) - \frac{1}{n} \sum_{i=1}^{n} v(b_i^*, b_{i+1}^*, X_i) \right)$$
$$\ge \sup_k \liminf_{n \to \infty} \left( \frac{\delta_k}{n} \sum_{i=1}^{n} \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) \mid X_1^i \} - \frac{\delta_k}{n} \sum_{i=1}^{n} \mathbf{E}\{ F_k(b_{i+1}^*, X_{i+1}) \mid X_1^i \} \right).$$

The problem left is to show that the last term is non-negative a.s. Using
the definition of $F_k$,
$$F_k(b_{i+1}^{(k)}, X_{i+1}) - F_k(b_{i+1}^*, X_{i+1})
= \max_{b'} \left\{ v(b_{i+1}^{(k)}, b', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b', X_{i+2}) \mid X_{i+1} \} \right\}
- \max_{b''} \left\{ v(b_{i+1}^*, b'', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b'', X_{i+2}) \mid X_{i+1} \} \right\}$$
$$= \max_{b'} \min_{b''} \left\{ \left( v(b_{i+1}^{(k)}, b', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b', X_{i+2}) \mid X_{i+1} \} \right)
- \left( v(b_{i+1}^*, b'', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b'', X_{i+2}) \mid X_{i+1} \} \right) \right\}$$
$$\ge \min_{b''} \left\{ \left( v(b_{i+1}^{(k)}, b'', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b'', X_{i+2}) \mid X_{i+1} \} \right)
- \left( v(b_{i+1}^*, b'', X_{i+1}) + (1 - \delta_k) \mathbf{E}\{ F_k(b'', X_{i+2}) \mid X_{i+1} \} \right) \right\}$$
$$= \min_{b''} \left\{ v(b_{i+1}^{(k)}, b'', X_{i+1}) - v(b_{i+1}^*, b'', X_{i+1}) \right\}
\ge -2 \|v\|_\infty,$$
therefore
$$\sup_k \liminf_{n \to \infty} \frac{\delta_k}{n} \sum_{i=1}^{n} \mathbf{E}\{ F_k(b_{i+1}^{(k)}, X_{i+1}) - F_k(b_{i+1}^*, X_{i+1}) \mid X_1^i \}
\ge \sup_k \delta_k \left( -2 \|v\|_\infty \right) = 0$$
a.s., and (5.8) is proved.

References

[1] Akian, M., A. Sulem, and M. I. Taksar (2001): "Dynamic Optimization
of Long-term Growth Rate for a Portfolio with Transaction Costs and
Logarithmic Utility." Mathematical Finance, 11, 153-188.

[2] Algoet, P., and T. Cover (1988): "Asymptotic Optimality and Asymptotic
Equipartition Properties of Log-optimum Investment." Annals of
Probability, 16, 876-898.

[3] Arapostathis, A., V. S. Borkar, E. Fernández-Gaucherand, M. K.
Ghosh, and S. I. Marcus (1993): "Discrete-time Controlled Markov
Processes with Average Cost Criterion: a Survey." SIAM J. Control
Optimization, 31, 282-344.

[4] Bertsekas, D. P., and S. E. Shreve (1978): Stochastic Optimal Con-
trol: the Discrete Time Case. New York: Academic Press.

[5] Bielecki, T. R., and S. R. Pliska (2000): "Risk Sensitive Asset Manage-
ment with Transaction Costs." Finance and Stochastics, 4, 1-33.

[6] Bobryk, R. V., and L. Stettner (1999): "Discrete Time Portfolio Se-
lection with Proportional Transaction Costs." Probability and Math-
ematical Statistics, 19, 235-248.

[7] Cover, T. (1991): "Universal Portfolios." Mathematical Finance, 1,
1-29.

[8] Davis, M. H. A., and A. R. Norman (1990): "Portfolio Selection with
Transaction Costs." Mathematics of Operations Research, 15, 676-713.

[9] Eastham, J., and K. Hastings (1988): "Optimal Impulse Control of
Portfolios." Mathematics of Operations Research, 13, 588-605.

[10] Guo, X., and X. Cao (2005): "Optimal Control of Ergodic Continuous
Time Markov-chains with Average Sample-path Rewards." SIAM J.
Control Optimization, 44, 29-48.

[11] Györfi, L., G. Lugosi, and F. Udina (2006): "Nonparametric Kernel-
based Sequential Investment Strategies." Mathematical Finance, 16,
337-357.

[12] Györfi, L., and D. Schäfer (2003): "Nonparametric Prediction," in
Advances in Learning Theory: Methods, Models and Applications,
eds. J. A. K. Suykens, G. Horváth, S. Basu, C. Micchelli, and J. Van-
dewalle. IOS Press, 339-354.

[13] Hernández-Lerma, O., and J. B. Lasserre (1996): Discrete-Time
Markov Control Processes: Basic Optimality Criteria. New York:
Springer.

[14] Hernández-Lerma, O., and O. Vega-Amaya (1998): "Infinite-horizon
Markov Control Processes with Undiscounted Cost Criteria: from Av-
erage to Overtaking Optimality." Applicationes Mathematicae, 25,
153-178.

[15] Hernández-Lerma, O., O. Vega-Amaya, and G. Carrasco (1999):
"Sample-path Optimality and Variance-minimization of Average Cost
Markov Control Processes." SIAM J. Control Optimization, 38, 79-93.

[16] Iyengar, G. (2002): "Discrete Time Growth Optimal Investment with
Costs." Working Paper,
http://www.columbia.edu/~gi10/Papers/stochastic.pdf

[17] Iyengar, G. (2005): "Universal Investment in Markets with Transaction
Costs." Mathematical Finance, 15, 359-371.

[18] Iyengar, G., and T. Cover (2000): "Growth Optimal Investment in
Horse Race Markets with Costs." IEEE Transactions on Informa-
tion Theory, 46, 2675-2683.

[19] Lasserre, J. B. (1999): "Sample-path Average Optimality for Markov
Control Processes." IEEE Transactions on Automatic Control, 44,
1966-1971.

[20] Kalai, A., and A. Blum (1997): "Universal Portfolios with and without
Transaction Costs." Proceedings of the 10th Annual Conference on
Learning Theory, 309-313.

[21] Korn, R. (1998): "Portfolio Optimization with Strictly Positive Trans-
action Cost and Impulse Control." Finance and Stochastics, 2, 85-114.

[22] Merhav, N., E. Ordentlich, G. Seroussi, and M. J. Weinberger (2002):
"On Sequential Strategies for Loss Functions with Memory." IEEE
Transactions on Information Theory, 48, 1947-1958.

[23] Morton, A. J., and S. R. Pliska (1995): "Optimal Portfolio Manage-
ment with Transaction Costs." Mathematical Finance, 5, 337-356.

[24] Palczewski, J., and L. Stettner (2006): "Optimisation of Portfolio
Growth Rate on the Market with Fixed Plus Proportional Transac-
tion Cost." CIS, to appear in a special issue dedicated to Prof. T.
Duncan.

[25] Pliska, S. R., and K. Suzuki (2004): "Optimal Tracking for Asset Allo-
cation with Fixed and Proportional Transaction Costs." Quantitative
Finance, 4, 233-243.

[26] Schäfer, D. (2002): Nonparametric Estimation for Financial Invest-
ment under Log-Utility. PhD Dissertation, Mathematical Institute,
University of Stuttgart. Aachen: Shaker Verlag.

[27] Shreve, S. E., H. M. Soner, and G. L. Xu (1991): "Optimal Investment
and Consumption with Two Bonds and Transaction Costs." Mathe-
matical Finance, 1(3), 53-84.

[28] Shreve, S. E., and H. M. Soner (1994): "Optimal Investment and Con-
sumption with Transaction Costs." Annals of Applied Probability,
4, 609-692.

[29] Stout, W. F. (1974): Almost Sure Convergence. New York: Academic
Press.

[30] Taksar, M., M. Klass, and D. Assaf (1988): "A Diffusion Model for
Optimal Portfolio Selection in the Presence of Brokerage Fees." Math-
ematics of Operations Research, 13, 277-294.

[31] Vega-Amaya, O. (1999): "Sample-path Average Optimality of Markov
Control Processes with Strictly Unbounded Costs." Applicationes
Mathematicae, 26, 363-381.
