# SAMPLE LYAPUNOV EXPONENT FOR A CLASS OF LINEAR MARKOVIAN SYSTEMS OVER Z^d


Furuoya, T. and Shiga, T.
Osaka J. Math. 35 (1998), 35-72

SAMPLE LYAPUNOV EXPONENT FOR A CLASS OF LINEAR MARKOVIAN SYSTEMS OVER Z^d

TASUKU FURUOYA and TOKUZO SHIGA
1. Introduction
Let Z^d be the d-dimensional cubic lattice, and let {Y_i(t)}_{i∈Z^d} be independent copies of a one-dimensional Lévy process Y(t) defined on a probability space (Ω, F, P^Y). Regarding {Y_i(t)}_{i∈Z^d} as random noises, we consider the following linear stochastic partial differential equation (SPDE) over Z^d:

(1.1)  dξ_i(t) = κ Σ_{j∈Z^d} a(i,j) ξ_j(t) dt + ξ_i(t−) dY_i(t),

where κ > 0 is a constant and A = {a(i,j)}_{i,j∈Z^d} is an infinitesimal generator of a continuous time random walk on Z^d, i.e.

(1.2)  a(0,i) ≥ 0 (i ≠ 0),  Σ_{i∈Z^d} a(0,i) = 0,  a(i,j) = a(0, j−i)  (i, j ∈ Z^d).
Under a mild assumption on Y(t) the equation (1.1) is well-posed and the solution
defines a linear Markovian system in the sense of Liggett's book ([7], Chap. IX).
When {Y_i(t)} are independent copies of a standard Brownian motion, (1.1) is called the parabolic Anderson model, which has been extensively studied in [10], [8], [2], [3] from the viewpoint of intermittency. On the other hand, when Y_i(t) = −N_i(t) + t and {N_i(t)}_{i∈Z^d} are independent copies of a Poisson process with parameter one, (1.1) defines a linear system with deterministic births and random deaths introduced in [7], which was discussed from the viewpoint of ergodic problems. This process is a dual object of the survival probability of a random walker in a spatially and temporally fluctuating random environment, for which an asymptotic analysis was executed in [9]. The present form of the equation (1.1) was first treated in [1], where
1 Partly supported by Grant-in-Aid for Scientific Research (No. 07640283, No. 08454037), Ministry of Education, Science and Culture.
an asymptotic analysis of the moment Lyapunov exponents for solutions of (1.1) was discussed under a stronger moment condition on the Lévy measure.
In this paper we are concerned with asymptotic analysis of the sample Lyapunov exponent for the solutions of (1.1). Let ψ(z) be the characteristic exponent of the Lévy process Y(t),

(1.3)  ψ(z) = −(a²/2)z² + √−1 βz + ∫_{R\{0}} ( e^{√−1 zu} − 1 − √−1 zu I(|u| ≤ 1/2) ) ρ(du),

where I(A) stands for the indicator function of A ⊂ R, a and β are real constants, and ρ is a Radon measure on R \ {0} satisfying

(1.4)  ∫_{R\{0}} min{u², 1} ρ(du) < ∞.
In order to formulate the sample Lyapunov exponent we restrict our consideration to the situation that (1.1) admits nonnegative solutions with finite mean, which is realized by the following condition.

Condition [A]

(1.5)  ρ((−∞, −1)) = 0,

and

(1.6)  ∫_{(1,∞)} u ρ(du) < ∞.
Under the condition [A] there exists a unique nonnegative L¹(γ) solution with ξ_i(0) = 1 (i ∈ Z^d), which is denoted by ξ¹(t) = {ξ¹_i(t)} (the definition of L¹(γ) solutions is given later). We first establish that there exists a constant λ such that

(1.7)  lim_{t→∞} (1/t) log ξ¹_i(t) = λ  in L¹(P^Y).

λ = λ(κA; Y) is called the sample Lyapunov exponent in L¹-sense. In Section 3 we prove this result in a general setting and discuss some relations between the sample Lyapunov exponent and the almost sure Lyapunov exponent. In Section 4 we derive some inequalities on λ(κA; Y), from which it follows that λ(κA; Y) is continuous in κ > 0 and Y in a suitable sense.
Our main concern in the present paper is to investigate the asymptotics of λ(κA; Y) as κ ↘ 0. Assume further that Y(t) has zero mean, so that (1.3) turns into

(1.8)  ψ(z) = −(a²/2)z² + ∫_{(−1,∞)} ( e^{√−1 zu} − 1 − √−1 zu ) ρ(du).
Let

(1.9)  λ₀(Y) = −a²/2 + ∫_{[−1,∞)} (log(1+u) − u) ρ(du),

which coincides with the sample Lyapunov exponent in the non-interacting case, i.e. κ = 0. Note that −∞ ≤ λ₀(Y) < 0 in general, unless Y(t) ≡ 0.
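In the non-interacting case the solution of dξ(t) = ξ(t−) dY(t) is an explicit Doléans-Dade exponential, so (1.9) can be checked by simulation. The sketch below is not from the paper; the Gaussian coefficient a, jump size u0 and jump rate lam are illustrative choices for a zero-mean compound-Poisson-plus-Brownian Y:

```python
import numpy as np

# Monte Carlo check of (1.9) in the non-interacting case kappa = 0 (a sketch,
# not from the paper).  Take a zero-mean Levy process
#   Y(t) = a*B(t) + u0*(N(t) - lam*t),   N a rate-lam Poisson process,
# so rho = lam * delta_{u0}.  The solution of d(xi) = xi(s-) dY(s), xi(0) = 1,
# is the explicit Doleans-Dade exponential
#   xi(t) = exp(a*B(t) - a^2 t/2) * (1+u0)^N(t) * exp(-lam*u0*t),
# whose growth rate should be lambda0 = -a^2/2 + lam*(log(1+u0) - u0).
rng = np.random.default_rng(0)
a, u0, lam, t, n_paths = 1.0, 0.5, 2.0, 200.0, 2000

B = rng.normal(0.0, np.sqrt(t), n_paths)    # B(t)
N = rng.poisson(lam * t, n_paths)           # N(t)
log_xi = a * B - 0.5 * a**2 * t + N * np.log1p(u0) - lam * u0 * t

lambda0 = -0.5 * a**2 + lam * (np.log1p(u0) - u0)
estimate = log_xi.mean() / t
print(lambda0, estimate)   # the two values should be close
```

Since the exact solution is used, only Monte Carlo error remains; note that the growth rate is negative even though E(ξ(t)) = 1, the intermittency effect mentioned above.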
In particular, if we consider a two dimensional stochastic equation instead of (1.1), it is possible to obtain its precise asymptotics as κ ↘ 0 under some moment condition on the Lévy measure ρ in a neighbourhood of −1, which is discussed in Section 5. In the final section we discuss asymptotic estimates of λ(κA; Y) as κ ↘ 0 in two extremal cases involving a moment condition of ρ around −1, which combines with the inequalities on λ(κA; Y) to give a continuity result for λ(κA; Y) at κ = 0.

It is to be noted that the moment condition on ρ in a neighbourhood of −1 is significant for the asymptotics of λ(κA; Y) as κ ↘ 0. In fact, it is shown that if ρ({−1}) > 0, then λ(κA; Y) ≈ log κ for small κ > 0, which extends the previous result in [9]. On the other hand, if ρ = 0 and a ≠ 0, i.e. Y(t) = aB(t), it is known that λ(κA; Y) − λ₀(Y) ~ 1/log(1/κ) for small κ > 0, which was recently obtained by [3]. For a general Y(t), under a moment condition on ρ in a neighbourhood of −1, we obtain the same lower bound estimate as in the Brownian case and a slightly weaker upper bound estimate for λ(κA; Y) − λ₀(Y) as κ ↘ 0 than in that case.
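As a rough illustration of what λ(κA; Y) measures, the following sketch simulates (1.1) with Brownian noise (the parabolic Anderson model) on a periodic one-dimensional box. The box size, step size, horizon and the splitting scheme are all assumptions of this illustration, not the paper's method:

```python
import numpy as np

# Crude simulation of (1.1) with Brownian noise (the parabolic Anderson model)
# on a periodic 1-d box approximating Z^1.  Everything below (box size, dt,
# horizon, and the splitting scheme: Euler for the kappa*A part, an exact
# log-normal factor for the noise, which keeps xi positive) is an illustrative
# assumption, not taken from the paper.
rng = np.random.default_rng(1)
n, kappa, dt, T = 64, 0.1, 0.01, 50.0

xi = np.ones(n)
for _ in range(int(T / dt)):
    lap = np.roll(xi, 1) - 2.0 * xi + np.roll(xi, -1)   # (A xi)_i, nearest-neighbour walk
    xi = xi + kappa * lap * dt                          # diffusion step
    dB = rng.normal(0.0, np.sqrt(dt), n)
    xi = xi * np.exp(dB - 0.5 * dt)                     # exact noise step
lyap = np.log(xi).mean() / T
print(lyap)   # crude estimate of lambda(kappa*A; Y); here lambda0(Y) = -1/2
```

The estimate should fall between λ₀(Y) = −1/2 and 0, reflecting the monotonicity in κ discussed in Section 4.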
2. Well-posedness of the SPDE and Feynman-Kac formula
Let J be a countable set and A = (a(i,j))_{i,j∈J} be a J × J real matrix satisfying

(2.1)  a(i,j) ≥ 0 (i ≠ j),  Σ_{j∈J} a(i,j) = 0,  and  sup_{i∈J} |a(i,i)| < ∞,

and let {Y_i(t)}_{i∈J} be independent copies of a one-dimensional Lévy process Y(t) with the characteristic exponent ψ(z) of (1.3). It is assumed that {Y_i(t)}_{i∈J} are defined on a complete probability space (Ω, F, P^Y) with a filtration (F_t) such that {Y_i(t)} are (F_t)-adapted and {Y_i(t+r) − Y_i(t); i ∈ J, r > 0} are independent of F_t for each t ≥ 0.
Let us consider the following linear stochastic equation:

(2.2)  ξ_i(t) = ξ_i(0) + ∫₀ᵗ Σ_{j∈J} a(i,j) ξ_j(s) ds + ∫₀^{t+} ξ_i(s−) dY_i(s)  (i ∈ J).

To formulate a solution of (2.2) we first fix a positive and summable vector γ = {γ_i}_{i∈J} satisfying that for some constant Γ > 0

(2.3)  Σ_{i∈J} γ_i |a(i,j)| ≤ Γ γ_j  (j ∈ J),

and denote by L¹(γ) the totality of vectors ξ = {ξ_i}_{i∈J} such that ‖ξ‖_{L¹(γ)} = Σ_{i∈J} γ_i |ξ_i| < ∞.
ξ(t) = (ξ_i(t))_{i∈J} is an L¹(γ)-solution of (2.2) if the following three conditions are satisfied:
(a) for each i ∈ J, ξ_i(t) is an (F_t)-predictable and right continuous process with left limits defined on (Ω, F, (F_t), P^Y),
(b) ‖ξ(t)‖_{L¹(γ)} is locally bounded in t ≥ 0, P^Y-a.s.,
(c) ξ(t) = (ξ_i(t))_{i∈J} satisfies the equation (2.2).
The last term of (2.2) is Itô's stochastic integral, which is interpreted as follows. Consider the Lévy-Itô decomposition of Y_i(t),

(2.4)  Y_i(t) = aB_i(t) + βt + ∫_{[0,t]×(|u|≤1/2)} u Ñ_i(ds du) + ∫_{[0,t]×(|u|>1/2)} u N_i(ds du),

where {B_i(t)} are independent copies of a standard Brownian motion, {N_i(ds du)} are independent copies of a Poisson random measure on [0,∞) × R \ {0} with intensity measure ds ρ(du), and Ñ_i(ds du) = N_i(ds du) − ds ρ(du) (i ∈ J). Then

(2.5)  ∫₀^{t+} ξ_i(s−) dY_i(s) = a ∫₀ᵗ ξ_i(s) dB_i(s) + β ∫₀ᵗ ξ_i(s) ds + ∫_{[0,t]×(|u|≤1/2)} u ξ_i(s−) Ñ_i(ds du) + ∫_{[0,t]×(|u|>1/2)} u ξ_i(s−) N_i(ds du).
To guarantee the well-posedness of the equation (2.2) we impose the following condition:

(2.6)  ∫_{(|u|>1)} |u| ρ(du) < ∞.
Theorem 2.1. Assume (2.6). Let ξ(0) = {ξ_i(0)} be an F₀-measurable random vector satisfying ‖ξ(0)‖_{L¹(γ)} < ∞ P^Y-a.s. Then the equation (2.2) has a pathwise unique L¹(γ)-solution.
To be self-contained we give a brief proof of the theorem, although it may not be novel except for an L¹(γ) argument, which is needed due to the weaker assumption (2.6) than in [1]. We may assume that E^Y(‖ξ(0)‖_{L¹(γ)}) < ∞.
1. Let {J_n} be an increasing sequence of finite subsets of J with ∪_{n≥1} J_n = J. For n ≥ 1 consider the following finite-dimensional equation:

(2.7)  ξ_i^{(n)}(t) = ξ_i(0) + ∫₀ᵗ Σ_{j∈J_n} a(i,j) ξ_j^{(n)}(s) ds + ∫₀^{t+} ξ_i^{(n)}(s−) dY_i(s)  (i ∈ J_n),
       ξ_i^{(n)}(t) = ξ_i(0)  (i ∈ J \ J_n).

It is easy to see that (2.7) has a pathwise unique solution {ξ_i^{(n)}(t)}. Moreover, letting η_i^{(n)}(t) = sup_{0≤s≤t} |ξ_i^{(n)}(s)|, by the condition (2.6) one can see that E^Y(η_i^{(n)}(t)) < ∞ (i ∈ J).

2. We claim that there is a constant C > 0 such that for every n ≥ 1,

(2.8)  E^Y( sup_{0≤r≤t} | ∫₀^{r+} ξ_i^{(n)}(s−) dY_i(s) | ) ≤ (1/2) E^Y(η_i^{(n)}(t)) + C ∫₀ᵗ E^Y(η_i^{(n)}(s)) ds.
To show this we apply the following maximal inequality for martingales.

Lemma 2.1. For 0 < p ≤ 1 there is an absolute constant C_p > 0 such that for every t > 0,

(2.9)  E^Y( sup_{0≤r≤t} |M(r)|^p ) ≤ C_p E^Y( ⟨M⟩_t^{p/2} ),

where M(r) = a ∫₀ʳ ξ_i^{(n)}(s) dB_i(s) + ∫_{[0,r]×(|u|≤1/2)} u ξ_i^{(n)}(s−) Ñ_i(ds du) and ⟨M⟩ denotes its predictable quadratic variation.

Proof. The maximal inequalities for continuous martingales are found in Theorem III.3.1 of [6], not all of which are valid for discontinuous martingales in general. However, the inequality (2.9) can be proved for the discontinuous martingale above without any change of the argument there. □
Thus by (2.9) with p = 1 we have a constant C₁ > 0 such that

E^Y( sup_{0≤r≤t} |M(r)| ) ≤ C₁ E^Y( ( ( a² + ∫_{(|u|≤1/2)} u² ρ(du) ) ∫₀ᵗ ξ_i^{(n)}(s)² ds )^{1/2} )
≤ (1/2) E^Y(η_i^{(n)}(t)) + (1/2) C₁² ( a² + ∫_{(|u|≤1/2)} u² ρ(du) ) ∫₀ᵗ E^Y(η_i^{(n)}(s)) ds,

where the last step uses ξ_i^{(n)}(s)² ≤ η_i^{(n)}(t) η_i^{(n)}(s) and 2xy ≤ x² + y². Estimating the drift term and the large-jump term of (2.5) directly, this yields (2.8) with

C = C₁² ( a² + ∫_{(|u|≤1/2)} u² ρ(du) ) + |β| + ∫_{(|u|>1/2)} |u| ρ(du).

3. Combining (2.7) and (2.8) we get
E^Y(η_i^{(n)}(t)) ≤ 2 E^Y(|ξ_i(0)|) + ∫₀ᵗ Σ_{j∈J} M_{ij} E^Y(η_j^{(n)}(s)) ds,

from which

(2.10)  E^Y(η_i^{(n)}(t)) ≤ 2 Σ_{j∈J} (e^{tM})_{ij} E^Y(|ξ_j(0)|),

where M_{ij} = 2|a(i,j)| + Cδ_{ij} (i, j ∈ J). Particularly, by (2.3) and (2.10) it holds that

(2.11)  E^Y(‖η^{(n)}(t)‖_{L¹(γ)}) ≤ 2 e^{(2Γ+C)t} E^Y(‖ξ(0)‖_{L¹(γ)}).
4. For n > m, set

η_i^{(n,m)}(t) = sup_{0≤r≤t} |ξ_i^{(n)}(r) − ξ_i^{(m)}(r)|.

Applying the same argument as in step 2 we get

E^Y(η_i^{(n,m)}(t)) ≤ ∫₀ᵗ Σ_{j∈J} M_{ij} E^Y(η_j^{(n,m)}(s)) ds + I(i ∈ J_n \ J_m) ∫₀ᵗ Σ_{j∈J} |a(i,j)| E^Y(η_j^{(m)}(s)) ds,

from which it follows that

(2.12)  E^Y(‖η^{(n,m)}(t)‖_{L¹(γ)}) ≤ e^{(2Γ+C)t} ∫₀ᵗ Σ_{i∈J_n\J_m} γ_i Σ_{j∈J} |a(i,j)| E^Y(η_j^{(m)}(s)) ds.
Moreover, by (2.10) the r.h.s. of (2.12) vanishes as n and m tend to ∞. Accordingly there exists an L¹(γ)-valued stochastic process ξ(t) = {ξ_i(t)}_{i∈J} such that for every t > 0

lim_{n→∞} E^Y( sup_{0≤r≤t} ‖ξ^{(n)}(r) − ξ(r)‖_{L¹(γ)} ) = 0.

Furthermore, it is obvious that ξ(t) = {ξ_i(t)}_{i∈J} is an L¹(γ)-solution of (2.2). The proof of the pathwise uniqueness is routine, so it is omitted. □
REMARK 2.1. The condition (2.6) is necessary for the solution of (2.2) to have a finite first moment, that is, for E^Y(|ξ_i(t)|) < ∞ to hold. It would be rather delicate to discuss the well-posedness of (2.2) without the condition (2.6).
Next we discuss the Feynman-Kac representation for the solution of (2.2). Let (X(t), P_i) be a continuous time Markov chain with state space J generated by the infinitesimal matrix A = {a(i,j)}. We denote the associated transition matrix by p(t, i, j) = P_i(X(t) = j).
Theorem 2.2. Under the same assumption as in Theorem 2.1 the solution ξ(t) = {ξ_i(t)} is represented by

(2.13)  ξ_i(t) = E_i( ξ_{X(t)}(0) exp ∫₀^{t+} dZ_{X(t−s)}(s) : ∫₀^{t+} N_{X(t−s)}(ds, {−1}) = 0 ),

where {Z_i(t)} are complex-valued independent Lévy processes given by

(2.14)  Z_i(t) = aB_i(t) + ∫_{[0,t]×(|u|≤1/2)} log(1+u) Ñ_i(ds du) + ∫_{[0,t]×(|u|>1/2, u≠−1)} log|1+u| N_i(ds du) + mt + √−1 π N_i([0,t] × (−∞, −1)),

(2.15)  m = β − a²/2 + ∫_{(|u|≤1/2)} (log(1+u) − u) ρ(du),
and ∫₀^{t+} dZ_{X(t−s)}(s) denotes the stochastic integral of {Z_i} along the time-reversed path s ↦ X(t−s).
Proof. First we note that the r.h.s. of (2.13) converges absolutely P^Y-a.s., because under P^Y, conditioned on {X(s)}_{0≤s≤t}, the distributions of ∫₀^{t+} dZ_{X(t−s)}(s) and Z_i(t) coincide for given t > 0, and E^Y(|exp Z_i(t)|) < ∞ by (2.6). Thus,

E^Y( Σ_{i∈J} γ_i E_i( |ξ_{X(t)}(0)| |exp ∫₀^{t+} dZ_{X(t−s)}(s)| ) ) ≤ e^{Γt} E^Y(‖ξ(0)‖_{L¹(γ)}) E^Y(|exp Z_i(t)|) < ∞.
The rest is essentially the same as the proof of Lemma 3.1 of [9]. For p > 0, let

Y_i^{(p)}(t) = Y_i(t) + e^{−p} N_i([0,t] × {−1}).

Using Itô's formula (cf. [6, Theorem II.5.1]) one obtains the analogue (2.16) of the representation for exp ∫₀^{t+} dZ^{(p)}_{X(t−s)}(s), where Z^{(p)} is defined from Y^{(p)} as in (2.14).
Set

ξ_i^{(p)}(t) = E_i( ξ_{X(t)}(0) exp ∫₀^{t+} dZ^{(p)}_{X(t−s)}(s) ).

Using (2.16), the Markov property of (X(t), P_i) and a stochastic Fubini theorem one can easily see that ξ^{(p)}(t) solves the equation (2.2) with Y replaced by Y^{(p)}, so letting p → ∞ we obtain

ξ_i^{(∞)}(t) = E_i( ξ_{X(t)}(0) exp ∫₀^{t+} dZ_{X(t−s)}(s) : ∫₀^{t+} N_{X(t−s)}(ds, {−1}) = 0 ),

and

(2.17)  ξ_i^{(∞)}(t) = ξ_i(0) + ∫₀ᵗ Σ_{j∈J} a(i,j) ξ_j^{(∞)}(s) ds + ∫₀^{t+} ξ_i^{(∞)}(s−) dY_i(s).

It is easy to see that {ξ_i(t)} itself satisfies the equation (2.17) and that the pathwise uniqueness holds for solutions of (2.17). Therefore ξ_i(t) = ξ_i^{(∞)}(t) holds for every t ≥ 0, i ∈ J, P^Y-a.s., which completes the proof of Theorem 2.2. □
Corollary 2.1. Suppose that the condition [A] is fulfilled. Then the equation (2.2) has a unique nonnegative L¹(γ)-solution, if ξ(0) = {ξ_i(0)} ∈ L¹(γ) and ξ_i(0) ≥ 0 for all i ∈ J, P^Y-a.s.

The proof is immediate from Theorem 2.2, because {Z_i(t)}_{i∈J} are real-valued processes due to the condition (1.5).
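The representation (2.13) can be sanity-checked in a degenerate special case: two sites with deterministic "noise" Y_i(t) = β_i t, so that Z_i(t) = β_i t and the condition on N in (2.13) is vacuous. Then (2.2) is a linear ODE system and the Feynman-Kac average runs over a rate-κ two-state chain. All parameters below are illustrative assumptions:

```python
import numpy as np

# Sanity check of the Feynman-Kac formula (2.13) in a degenerate special case
# (two sites, deterministic "noise" Y_i(t) = beta_i * t).  Then (2.2) reads
#   d xi_i/dt = kappa*(xi_j - xi_i) + beta_i*xi_i,
# and (2.13) becomes  xi_i(t) = E_i[ exp( int_0^t beta_{X(t-s)} ds ) ]
# with X a rate-kappa two-state Markov chain.
rng = np.random.default_rng(2)
kappa, beta, t = 1.0, np.array([0.3, -0.7]), 2.0

# Direct solution of the linear ODE system by RK4.
M = np.array([[-kappa + beta[0], kappa], [kappa, -kappa + beta[1]]])
xi, h = np.ones(2), 1e-3
for _ in range(int(t / h)):
    k1 = M @ xi
    k2 = M @ (xi + 0.5 * h * k1)
    k3 = M @ (xi + 0.5 * h * k2)
    k4 = M @ (xi + h * k3)
    xi = xi + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Feynman-Kac average over paths of the chain started at site 0.
n_paths, total = 50_000, 0.0
for _ in range(n_paths):
    s_left, state, integral = t, 0, 0.0
    while True:
        hold = rng.exponential(1.0 / kappa)     # holding time before the next flip
        if hold >= s_left:
            integral += beta[state] * s_left
            break
        integral += beta[state] * hold
        s_left -= hold
        state = 1 - state
    total += np.exp(integral)
fk = total / n_paths
print(xi[0], fk)   # should agree up to Monte Carlo error
```

The time reversal s ↦ X(t−s) in (2.13) is invisible here because only the occupation times of the chain enter the exponent.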
3. Sample Lyapunov exponent

We first establish an existence theorem of the sample Lyapunov exponent in L¹-sense for a class of stochastic partial differential equations in a general setting. Let G be a topological abelian group with a Haar measure m. Suppose that we are given a system of random kernels {p^ω(s,y; t,x); 0 ≤ s < t, x, y ∈ G} defined on a probability space (Ω, F, P) with a filtration (F_s^t)_{0≤s≤t} that satisfies the following conditions.

[B-1] p^ω(s,y; t,x) is a jointly measurable non-negative random field in (s,y; t,x) satisfying that for 0 ≤ s < r < t and y, x ∈ G,

p^ω(s,y; t,x) = ∫_G m(dz) p^ω(s,y; r,z) p^ω(r,z; t,x).
[B-2] (independent increments)
(i) for any 0 ≤ s < t, {p^ω(s,y; t,x); x, y ∈ G} are F_s^t-measurable, and
(ii) for any 0 ≤ t₁ < t₂ < … < t_n, {F_{t₁}^{t₂}, F_{t₂}^{t₃}, …, F_{t_{n−1}}^{t_n}} are independent.
[B-3] (homogeneity) For any h > 0, z ∈ G, the probability laws of {p^ω(s,y; t,x); 0 ≤ s < t, x, y ∈ G} and {p^ω(s+h, y+z; t+h, x+z); 0 ≤ s < t, x, y ∈ G} coincide.
[B-4] Let u^ω_*(t,x) = ∫_G m(dy) p^ω(0,y; t,x). There exists a constant α > 0 such that for every t > 0,

E( u^ω_*(t,x) + u^ω_*(t,x)^{−α} ) < ∞.
Then we obtain

Theorem 3.1. Under the conditions [B-1]-[B-4] there exists a constant λ such that

(3.1)  lim_{t→∞} E( (1/t) log u^ω_*(t,x) ) = sup_{t>0} E( (1/t) log u^ω_*(t,x) ) = λ.

Moreover, for every p > 0,

(3.2)  lim_{t→∞} E( |(1/t) log u^ω_*(t,x) − λ|^p ) = 0.
We call λ the sample Lyapunov exponent of {p^ω(s,y; t,x); 0 ≤ s < t, x, y ∈ G}.
Corollary 3.1. Suppose that u^ω(0,x) is an F^0-measurable nonnegative random field satisfying that

(3.3)  E(u^ω(0,x)) + E(|log u^ω(0,x)|) is bounded in x ∈ G,

and set

(3.4)  u^ω(t,x) = ∫_G m(dy) u^ω(0,y) p^ω(0,y; t,x).

Then

(3.5)  lim_{t→∞} E( |(1/t) log u^ω(t,x) − λ| ) = 0.
The proof of Theorem 3.1 is rather standard in the situation that the moment Lyapunov exponent γ(p) = lim_{t→∞} (1/t) log E(u^ω_*(t,x)^p) is well-posed in a neighbourhood of p = 0, for which the condition [B-4] is an essential requirement. In fact, it will be shown that λ of (3.2) coincides with the left derivative of γ(p) at p = 0.

We may assume 0 < α < 1 in [B-4]. For −α ≤ p ≤ 1, let

M_p(t) = E( u^ω_*(t,x)^p ).
Noting that M_p(t) is constant in x ∈ G by [B-3], we have the following.

Lemma 3.1.
(i) M₁(t) = e^{ct} (t ≥ 0) for some real c.
(ii) For −α ≤ p < 0,

(3.6)  γ(p) = lim_{t→∞} (1/t) log M_p(t) = inf_{t>0} (1/t) log M_p(t)  exists,

and

(3.7)  γ(p) ≥ cp  for p ∈ [−α, 0].

(iii) For 0 < p ≤ 1,

(3.8)  γ(p) = lim_{t→∞} (1/t) log M_p(t) = sup_{t>0} (1/t) log M_p(t)  exists,

and

(3.9)  γ(p) ≤ cp  for p ∈ [0, 1].

(iv) γ(p) is a convex function of p ∈ [−α, 1] with γ(0) = 0, so that the left and right derivatives γ′(p−) and γ′(p+) exist for −α < p < 1. In particular, γ(p)/p is non-decreasing in p ∈ [−α, 0) ∪ (0, 1], and

(3.10)  lim_{p↗0} γ(p)/p = γ′(0−)  and  lim_{p↘0} γ(p)/p = γ′(0+).

Proof.
First we note that by [B-1]-[B-4], M₁(t+s) = M₁(t) M₁(s) (s, t > 0), which yields (i). Next, by [B-1],

u^ω_*(t+s, x) = ∫_G m(dy) u^ω_*(s,y) p̄^ω(s,y; s+t,x) · ∫_G m(dz) p^ω(s,z; t+s,x),

where

(3.11)  p̄^ω(s,y; s+t,x) = p^ω(s,y; s+t,x) ( ∫_G m(dz) p^ω(s,z; t+s,x) )^{−1}.

Using this, [B-2]-[B-4], and Jensen's inequality with the convex function x^p (p < 0), we see the subadditivity of log M_p(t), i.e.

M_p(t+s) ≤ M_p(t) M_p(s)  (s > 0, t > 0).

Thus we get

lim_{t→∞} (1/t) log M_p(t) = inf_{t>0} (1/t) log M_p(t) ≥ cp  (−α ≤ p < 0),

which yields (3.6) and (3.7), and a similar argument yields (iii). (iv) is elementary, so omitted. □
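The limit/extremum identities in (3.6) and (3.8) are instances of Fekete's subadditive lemma, recalled here for convenience (a standard fact, stated under the boundedness available above):

```latex
f(t+s) \le f(t) + f(s) \quad (s, t > 0), \quad f \text{ locally bounded}
\;\Longrightarrow\;
\lim_{t \to \infty} \frac{f(t)}{t} \;=\; \inf_{t > 0} \frac{f(t)}{t} \;\in\; [-\infty, \infty).
```

For p < 0 this applies to f(t) = log M_p(t); for 0 < p ≤ 1 the function −log M_p(t) is subadditive, which turns the infimum into the supremum in (3.8).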
The following lemma follows immediately from Lemma 3.1 by a standard large deviation argument, using the exponential Chebyshev inequality and (3.10).

Lemma 3.2.
(i) For any ε > 0 there exist c(ε) > 0 and t(ε) > 0 such that for t ≥ t(ε),

(3.12)  P( u^ω_*(t,x) ≥ e^{(γ′(0+)+ε)t} ) ≤ e^{−c(ε)t},

and

(3.13)  P( u^ω_*(t,x) ≤ e^{(γ′(0−)−ε)t} ) ≤ e^{−c(ε)t}.

(ii) For every p > 0 and ε > 0,

(3.14)  lim_{t→∞} E( |log u^ω_*(t,x)|^p ; u^ω_*(t,x) ≤ e^{(γ′(0−)−ε)t} ) = 0.

(iii) For every p > 0,

(3.15)  E( |(1/t) log u^ω_*(t,x)|^p )  is bounded in t > 0.
REMARK 3.1.
From (3.12), (3.13) and the Borel-Cantelli lemma it follows that

(3.16)  limsup_{n→∞} (1/n) log u^ω_*(n,x) ≤ γ′(0+)  P-a.s.

and

(3.17)  liminf_{n→∞} (1/n) log u^ω_*(n,x) ≥ γ′(0−)  P-a.s.

Accordingly, if γ(p) is differentiable at p = 0, it holds that

(3.18)  lim_{n→∞} (1/n) log u^ω_*(n,x) = γ′(0)  P-a.s.

However, it seems extremely difficult to verify the differentiability of γ(p). On the other hand, it is obvious that

(3.19)  liminf_{t→∞} (1/t) log u^ω_*(t,x) ≤ liminf_{n→∞} (1/n) log u^ω_*(n,x)  P-a.s.

It should be noted that the equality in (3.19) does not hold in general, which will be discussed later (see Theorem 3.3).
The following lemma is a key point for the proof of Theorem 3.1.

Lemma 3.3.

(3.20)  lim_{t→∞} (1/t) E( log u^ω_*(t,x) ) = γ′(0−).
Proof. Since log M_p(t) is subadditive in t > 0 if p < 0,

E( log u^ω_*(t,x) ) = lim_{p↗0} (1/p) log M_p(t)

is a superadditive function of t > 0, which implies

(3.21)  lim_{t→∞} (1/t) E(log u^ω_*(t,x)) = sup_{t>0} (1/t) E(log u^ω_*(t,x)) ≤ sup_{t>0} (1/t) log E(u^ω_*(t,x)) ≤ c.

Noting that γ(p,t) = log M_p(t) is convex in p, so that γ(p,t)/p is nondecreasing as p ↗ 0, we see that for −α ≤ p < 0,

(1/t) E(log u^ω_*(t,x)) ≥ γ(p,t)/(pt) → γ(p)/p  (t → ∞),
which yields

(3.22)  lim_{t→∞} (1/t) E(log u^ω_*(t,x)) ≥ γ′(0−).

On the other hand, by (3.6) and (3.21),

lim_{t→∞} (1/t) E(log u^ω_*(t,x)) = sup_{t>0} lim_{p↗0} γ(p,t)/(pt) ≤ lim_{p↗0} sup_{t>0} γ(p,t)/(pt) = lim_{p↗0} γ(p)/p,

which yields

(3.23)  lim_{t→∞} (1/t) E(log u^ω_*(t,x)) ≤ γ′(0−).

Hence (3.20) follows from (3.22) and (3.23). □
Proof of Theorem 3.1. Let X(t) = (1/t) log u^ω_*(t,x) − γ′(0−); then by Lemma 3.3,

(3.24)  lim_{t→∞} E(X(t)) = 0.

Moreover, by Lemma 3.2 we have

(3.25)  lim_{t→∞} E( X(t) ; X(t) ≤ −ε ) = 0  for every ε > 0.

(3.24) and (3.25) imply

lim_{t→∞} E(|X(t)|) = 0.

Moreover, combining this with (3.15) we see that for any p > 0, lim_{t→∞} E(|X(t)|^p) = 0, which completes the proof of Theorem 3.1. □
Proof of Corollary 3.1. Note that by (3.11) and Jensen's inequality,

log u^ω(t,x) − log u^ω_*(t,x) = log( ∫_G m(dy) u^ω(0,y) p̄^ω(0,y; t,x) ) ≥ ∫_G m(dy) p̄^ω(0,y; t,x) log u^ω(0,y).

Using log x ≤ x − 1 (x > 0), we also have

log u^ω(t,x) − log u^ω_*(t,x) ≤ ∫_G m(dy) u^ω(0,y) p̄^ω(0,y; t,x) − 1.

Hence (3.3), [B-2] and [B-3] imply that E(|log u^ω(t,x) − log u^ω_*(t,x)|) is bounded in t > 0; thus (3.5) follows from Theorem 3.1. □
EXAMPLE 3.1. Let G = R^d, m be the Lebesgue measure on R^d, and {W(t,x) : t ≥ 0, x ∈ R^d} be a continuous centered Gaussian field defined on a probability space (Ω, F, P^W) with filtration {F_t} satisfying that for each t > 0, W(t, ·) is F_t-adapted and that

(3.26)  E^W( W(s,x) W(t,y) ) = (t ∧ s) C(x − y)  (s, t ≥ 0, x, y ∈ R^d),

where C(x) is assumed to be a bounded smooth function on R^d. Then W(s,x) has a continuous modification. Let us consider the following stochastic partial differential equation (SPDE):

(3.27)  du(t,x) = (1/2) Δu(t,x) dt + u(t,x) dW(t,x).

It is easy to see that the SPDE (3.27) admits a fundamental solution {p^ω(s,y; t,x); 0 ≤ s < t, x, y ∈ R^d} which satisfies the conditions [B-1]-[B-3]. In this case [B-4] can be verified as follows. For an R^d-valued continuous function f(t) one can define a stochastic integral ∫₀ᵗ dW(s, f(s)) that is a centered Gaussian process satisfying

E^W( ∫₀ᵗ dW(s, f(s)) ∫₀ᵗ dW(s, g(s)) ) = ∫₀ᵗ C( f(s) − g(s) ) ds.

Using this stochastic integral we have a Feynman-Kac representation for the solution of (3.27) as follows:

(3.28)  u^ω_*(t,x) = E_x( exp ∫₀ᵗ dW(s, B(t−s)) ) exp( −C(0)t/2 ),

where (B(t), P_x) is a d-dimensional Brownian motion. Obviously E^W(u^ω_*(t,x)) = 1. On the other hand, noting that

E_x( exp ∫₀ᵗ dW(s, B(t−s)) )^{−1} ≤ E_x( exp ∫₀ᵗ (−dW(s, B(t−s))) ),

we get E^W( u^ω_*(t,x)^{−1} ) ≤ e^{C(0)t}, which verifies the condition [B-4]. Accordingly, by Theorem 3.1 the sample Lyapunov exponent is well-defined for the SPDE (3.27).
Next we apply Theorem 3.1 to show existence of the sample Lyapunov exponent for the solutions of the stochastic equation (1.1).

Now let J = Z^d, and let us consider the stochastic equation (2.2) with A = {a(i,j)} satisfying (1.2).

Theorem 3.2. Suppose that (1.2) and the condition [A] are fulfilled, and let ξ¹(t) = {ξ¹_i(t)}_{i∈Z^d} be the L¹(γ)-solution of the equation (1.1) with ξ_i(0) = 1 for all i ∈ Z^d. Then {ξ¹_i(t)}_{i∈Z^d} are non-negative and there exists a constant λ such that for every p > 0,

(3.29)  lim_{t→∞} E^Y( |(1/t) log ξ¹_i(t) − λ|^p ) = 0.

The nonnegativity of {ξ¹_i(t)}_{i∈Z^d} follows from Corollary 2.1. Let {p^ω(s,i; t,j); 0 ≤ s < t, i, j ∈ Z^d} be the fundamental solution of (2.2). Choosing F_s^t as the σ-field generated by {Y_i(r) − Y_i(s); s ≤ r ≤ t, i ∈ Z^d} and taking F^0 as the σ-field independent of {F_s^t; 0 ≤ s < t < ∞}, it is easy to verify the conditions [B-1]-[B-3]. In order to apply Theorem 3.1 it suffices to verify the condition [B-4], which will be reduced to the following lemma.
Lemma 3.4. Let {η_i(t)}_{i∈Z^d} be the solution of the following stochastic equation:

(3.30)  η_i(t) − 1 = ∫₀ᵗ Σ_j a(i,j) η_j(s) ds − ∫₀^{t+} η_i(s−) N_i( ds, [−1, −1/2] ).

Then

(3.31)  η_i(t) = P_i( ∫₀^{t+} N_{X(t−s)}( ds, [−1, −1/2] ) = 0 ),

and moreover for every 0 < α < 1,

(3.32)  E^Y( η_i(t)^{−α} ) < ∞  for every t ≥ 0, i ∈ Z^d.

Proof. (3.31) follows from Theorem 2.2, and (3.32) follows from a combination of Lemmas 2.3 and 2.4 in [9]. □
Proof of Theorem 3.2. By Theorem 2.2 we have

(3.33)  ξ¹_i(t) = E_i( exp ∫₀^{t+} dZ_{X(t−s)}(s) : ∫₀^{t+} N_{X(t−s)}(ds, {−1}) = 0 )
        ≥ E_i( exp ∫₀^{t+} dZ′_{X(t−s)}(s) : ∫₀^{t+} N_{X(t−s)}(ds, [−1, −1/2]) = 0 ),

where

(3.34)  Z_i(t) = aB_i(t) + ∫_{[0,t]×(|u|≤1/2)} log(1+u) Ñ_i(ds, du) + ∫_{[0,t]×((−1,−1/2]∪[1/2,∞))} log(1+u) N_i(ds, du) + mt,

where m is of (2.15), and

(3.35)  Z′_i(t) = Z_i(t) − ∫_{[0,t]×(−1,−1/2]} log(1+u) N_i(ds, du).

Define a random variable U and an event Λ by

U = exp ∫₀^{t+} dZ′_{X(t−s)}(s),   Λ = { ∫₀^{t+} N_{X(t−s)}(ds, [−1, −1/2]) = 0 }.

Then it is easy to see that for every p ≥ 1 and t > 0, E^Y( E_i(U)^p ) < ∞. Also, using Jensen's inequality and Hölder's inequality we see that for 0 < α < 1 and p, q > 1 with 1/p + 1/q = 1, E^Y( E_i(U : Λ)^{−α} ) is bounded by a product of powers of negative moments of E_i(U) and of η_i(t) = P_i(Λ), the solution of (3.30). Hence choosing q > 1 close to 1, by (3.32) we have

E^Y( E_i(U : Λ)^{−α} ) < ∞.

Combining this with (3.33) we obtain

E^Y( ξ¹_i(t)^{−α} ) < ∞  for t ≥ 0, i ∈ Z^d.

Thus the condition [B-4] is verified and the proof of Theorem 3.2 is completed. □
By virtue of Theorem 3.1 the sample Lyapunov exponent in L¹-sense is well-defined for a class of SPDEs satisfying the conditions [B-1]-[B-4]. However, the almost sure sample Lyapunov exponent is not well-defined in general. Concerning this problem we have the following result. Recall that γ(p) is the moment Lyapunov exponent of ξ¹_i(t), which is well-defined by Lemma 3.1.

Theorem 3.3. In the situation of Theorem 3.2 assume that

(3.36)  ρ({−1}) = 0  and  ∫_{(−1,−1/2)} |log(1+u)| ρ(du) < ∞.

Then

(3.37)  liminf_{t→∞} (1/t) log ξ¹_i(t) = γ′(0−) = λ(A; Y)  P^Y-a.s.,

and

(3.38)  limsup_{t→∞} (1/t) log ξ¹_i(t) ≤ γ′(0+)  P^Y-a.s.

To the contrary, if either ρ({−1}) > 0 or

(3.39)  ∫_{(−1,−1/2)} |log(1+u)| ρ(du) = ∞,

then

(3.40)  liminf_{t→∞} (1/t) log ξ¹_i(t) = −∞  P^Y-a.s.
Proof. 1. ρ({−1}) = 0 implies that ξ¹_i(t) > 0 holds for every t ≥ 0, so that by Itô's formula

(3.41)  log ξ¹_i(t) = κ ∫₀ᵗ ( Σ_j a(i,j) ξ¹_j(s) / ξ¹_i(s) ) ds + Z_i(t),

where Z_i(t) is of (3.34). From this it follows that

(3.42)  inf_{n≤t≤n+1} log ξ¹_i(t) ≥ log ξ¹_i(n) + κ a(0,0) + inf_{n≤t≤n+1} ( Z_i(t) − Z_i(n) ),

and

(3.43)  log ξ¹_i(n+1) − sup_{n≤t≤n+1} log ξ¹_i(t) ≥ κ a(0,0) + Z_i(n+1) − sup_{n≤t≤n+1} Z_i(t).
Since (3.36) implies that {sup_{n≤t≤n+1} |Z_i(t) − Z_i(n)|} are i.i.d. random variables with finite mean, we have

(3.44)  lim_{n→∞} (1/n) sup_{n≤t≤n+1} |Z_i(t) − Z_i(n)| = 0  P^Y-a.s.;

thus a combination of (3.42)-(3.44) with (3.16)-(3.17) implies (3.38) and

(3.45)  liminf_{t→∞} (1/t) log ξ¹_i(t) ≥ γ′(0−)  P^Y-a.s.

Therefore (3.37) follows from (3.45) and Theorem 3.2.

2. Next assume (3.39), since the case ρ({−1}) > 0 is trivial. It is easy to see that

(3.46)  limsup_{t→∞} (1/t) log ξ¹_i(t) < ∞  P^Y-a.s.

Let i ∈ Z^d be fixed, and let {(τ_i^n, U_i^n)} be the sequence of random variables such that

N_i(ds, du) restricted to [0,∞) × (−1, −1/2) equals Σ_n δ_{(τ_i^n, U_i^n)}(ds, du),

where δ_{(s,u)} stands for the unit mass at (s,u). Note that {U_i^n}_{n=1,2,…} is an i.i.d. sequence with distribution c ρ|_{(−1,−1/2)} and it holds that

(3.47)  lim_{n→∞} τ_i^n / n = c  P^Y-a.s.,

where c = ρ((−1, −1/2))^{−1}. Moreover, (3.39) implies that

(3.48)  limsup_{n→∞} (1/n) |log(1 + U_i^n)| = ∞  P^Y-a.s.

Since log ξ¹_i(τ_i^n) = log ξ¹_i(τ_i^n −) + log(1 + U_i^n), by (3.47) and (3.48) we have

limsup_{t→∞} (1/t) |log ξ¹_i(t)| = ∞  P^Y-a.s.

Thus (3.40) follows from this and (3.46). □
4. Inequalities for the sample Lyapunov exponent

By Theorem 3.2 the sample Lyapunov exponent is well-defined for the SPDE (1.1) over Z^d under the condition [A]; it is denoted by λ(κA; Y). Recall that

−∞ ≤ λ₀(Y) = −a²/2 + ∫_{[−1,∞)} (log(1+u) − u) ρ(du) < 0;

then λ₀(Y) > −∞ holds if and only if

(4.1)  ρ({−1}) = 0  and  ∫_{(−1,−1/2)} |log(1+u)| ρ(du) < ∞.

Let Y′(t) be another Lévy process with the characteristic exponent

(4.2)  ψ′(z) = −((a′)²/2) z² + ∫_{(−1,∞)} ( e^{√−1 zu} − 1 − √−1 zu ) ρ′(du).
Theorem 4.1. Assume that

(4.3)  |a| ≤ a′  and  ρ ≤ ρ′.

Then the following inequalities hold:

(4.4)  λ(A; Y′) ≤ λ(A; Y) ≤ 0.

(4.5)  (1/κ) λ(κA; Y) ≤ (1/κ′) λ(κ′A; Y)  if 0 < κ ≤ κ′.

(4.6)  0 ≤ λ(A; Y) − λ₀(Y) ≤ λ(A; Y′) − λ₀(Y′),

whenever ρ and ρ′ satisfy (4.1).
Corollary 4.1. Suppose that ρ and ρ′ satisfy (4.1). Then λ(κA; Y) is continuous in κ and Y in the following sense:

(4.7)  λ(κA; Y) is continuous in κ > 0, and

(4.8)  |λ(κA; Y) − λ(κA; Y′)| ≤ ((a′)² − a²)/2 + ∫_{(−1,∞)} |log(1+u) − u| |ρ − ρ′|_var(du),

where |ρ − ρ′|_var stands for the total variation measure of ρ − ρ′.
For the proof of Theorem 4.1 we apply the following comparison lemma, which is a modification of the comparison theorem of [4].

Let F be the totality of bounded C²-functions defined on [0,∞)^J, depending on finitely many components and having bounded first and second derivatives, such that D_iD_jf ≥ 0 (i, j ∈ Z^d).

Lemma 4.1. Assume the condition (4.3). Then for the solutions ξ(t) and ξ′(t) of the equation (2.2) associated with Y(t) and Y′(t) and ξ(0) = ξ′(0) ≥ 0, it holds that

(4.9)  E^Y( f(ξ(t)) ) ≤ E^{Y′}( f(ξ′(t)) )  for f ∈ F and t ≥ 0.
Proof. It is sufficient to prove (4.9) in the simpler situation that J is a finite set and ρ is compactly supported in (−1,0) ∪ (0,∞), because the general case can be reduced to this one by a standard approximation procedure. Let T_t and T′_t be the transition semigroups of the Markov processes ξ(t) and ξ′(t), and denote by L and L′ their infinitesimal generators respectively. In this case it is easy to see the following perturbation formula:

(4.10)  T_t f − T′_t f = ∫₀ᵗ T′_{t−s} (L − L′) T_s f ds.

Note that if f ∈ F,

(4.11)  (L′ − L)f(x) = ( ((a′)² − a²)/2 ) Σ_i x_i² D_i²f(x) + Σ_i ∫_{(−1,∞)} ( f(π_i^u x) − f(x) − u x_i D_i f(x) ) (ρ′ − ρ)(du),

where (π_i^u x)_i = (1+u) x_i and (π_i^u x)_j = x_j (j ≠ i). The second term on the r.h.s. of (4.11) equals

Σ_i ∫_{(−1,∞)} ( ∫₀^u x_i ( D_i f(π_i^v x) − D_i f(x) ) dv ) (ρ′ − ρ)(du) ≥ 0  for f ∈ F,

and the first term is obviously non-negative for f ∈ F. On the other hand it holds that

(4.12)  T_t f ∈ F  if  f ∈ F,
since by making use of the fundamental solution p^ω(s,i; t,j) of the equation (2.2), T_t f can be represented by

T_t f(x) = E^Y( f( Σ_i x_i p^ω(0,i; t,·) ) ),

which yields (4.12) by straightforward differentiations. Therefore (4.9) follows from (4.10)-(4.12). □
Proof of Theorem 4.1. Let ξ_i(0) = ξ′_i(0) = 1 (i ∈ Z^d). By Lemma 4.1 we have

E^Y( log ξ_i(t) ) ≥ E^{Y′}( log ξ′_i(t) ),

which yields (4.4). Note that by a time rescaling,

(4.13)  (1/κ) λ(κA, Y) = (1/κ′) λ( κ′A, Y((κ′/κ)·) ).

Hence a combination of (4.4) and (4.13) yields (4.5). To show (4.6) note that by Theorem 2.2,

ξ_i(t) = E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) ) exp(λ₀(Y)t),

where

Z̄_i(t) = aB_i(t) + ∫_{[0,t]×(−1,∞)} log(1+u) Ñ_i(ds, du).

By the assumption (4.3) we may assume that the probability laws of ({Y′_i(·)}, P^{Y′}) and ({Y_i(·) + Ŷ_i(·)}, P^Y × P^{Ŷ}) coincide, where (Ŷ_i(t), P^{Ŷ}) are independent copies of a Lévy process Ŷ(t) associated with (â = √((a′)² − a²), ρ̂ = ρ′ − ρ). Then

ξ′_i(t) = E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) exp ∫₀^{t+} dẐ_{X(t−s)}(s) ) exp(λ₀(Y′)t),

where {Ẑ_i(t)} are independent copies of a Lévy process Ẑ(t) satisfying E^{Ŷ}(Ẑ(t)) = 0 corresponding to Ŷ(t). Using Jensen's inequality we see
λ(A; Y′) − λ₀(Y′) − λ(A; Y) + λ₀(Y)

= lim_{t→∞} (1/t) E^Y × E^{Ŷ}( log [ E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) exp ∫₀^{t+} dẐ_{X(t−s)}(s) ) / E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) ) ] )

≥ lim_{t→∞} (1/t) E^Y × E^{Ŷ}( E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) · ∫₀^{t+} dẐ_{X(t−s)}(s) ) / E_i( exp ∫₀^{t+} dZ̄_{X(t−s)}(s) ) ) = 0,

which yields (4.6), since

E^{Ŷ}( ∫₀^{t+} dẐ_{X(t−s)}(s) ) = E^{Ŷ}( Ẑ_i(t) ) = 0.
Proof of Corollary 4.1. (4.7) follows from the scaling property (4.13) together with (4.6) applied to Y and the time-rescaled process Y((κ′/κ)·); (4.8) follows immediately from (4.6). □
Next we discuss a comparison of the sample Lyapunov exponents between finite systems and the infinite system of (2.2). Let Λ_n = (−n, n]^d ∩ Z^d, and let A^{(n)} = {a^{(n)}(i,j)} be the Λ_n × Λ_n matrix induced by A = (a(i,j)). We denote by λ^{(n)}(A; Y) the sample Lyapunov exponent of (2.2) with J = Λ_n and A^{(n)} = (a^{(n)}(i,j)). Furthermore we denote by λ^{(2)}(κ; Y) the sample Lyapunov exponent of the following two dimensional stochastic equation:

(4.14)  ξ₁(t) − ξ₁(0) = κ ∫₀ᵗ (ξ₂(s) − ξ₁(s)) ds + ∫₀^{t+} ξ₁(s−) dY₁(s),
        ξ₂(t) − ξ₂(0) = κ ∫₀ᵗ (ξ₁(s) − ξ₂(s)) ds + ∫₀^{t+} ξ₂(s−) dY₂(s),
where

(4.15)  κ = Σ_{j≠0} a(0,j),

and {Y₁(t), Y₂(t)} are independent copies of Y(t).

Theorem 4.2.

(4.16)  λ^{(2)}(κ; Y) ≤ λ^{(n)}(A; Y) ≤ λ(A; Y).
For the proof of the theorem we apply another comparison theorem. Let {J_n} be a partition of J, let {Y^n(t)} be independent Lévy processes associated with the characteristic exponent ψ(z) of (1.3), and set Y_i(t) = Y^n(t) if i ∈ J_n. Let us consider an equation similar to (2.2):

(4.17)  η_i(t) = η_i(0) + ∫₀ᵗ Σ_{j∈J} a(i,j) η_j(s) ds + ∫₀^{t+} η_i(s−) dY_i(s),

where η(0) = {η_i(0)} is assumed to be non-negative and E(‖η(0)‖_{L¹(γ)}) < ∞. By the same method as in Theorem 2.1 one can show that (4.17) has the pathwise unique L¹(γ)-solution η(t) = {η_i(t)}.

Now we compare it with the solution of (2.2).

Lemma 4.2. Let f ∈ F. If ξ(0) = η(0), then for every t ≥ 0,

(4.18)  E^Y( f(ξ(t)) ) ≤ E^Y( f(η(t)) ).

Proof. It is essentially the same as Lemma 2.2 of [9], where a special case is treated, so we omit it. □
Proof of Theorem 4.2. Let J₁ and J₂ be the set of all odd points and the set of all even points in Λ_n respectively. For i ∈ J_p we set Y_i(t) = Y_p(t) (p = 1, 2). For the solution (ξ₁^{(2)}(t), ξ₂^{(2)}(t)) of (4.14) with ξ₁(0) = ξ₂(0) = 1, we set η_i(t) = ξ_p^{(2)}(t) for i ∈ J_p (p = 1, 2). Then {η_i(t)}_{i∈Λ_n} is a solution of (4.17) with J = Λ_n and A^{(n)}. Furthermore, let ξ^{(n)}(t) = {ξ_i^{(n)}(t)}_{i∈Λ_n} be the solution of the equation (2.2) with ξ_i(0) = 1 (i ∈ Λ_n). Then by Lemma 4.2 we have

E^Y( log ξ₁^{(2)}(t) ) ≤ E^Y( log ξ₁^{(n)}(t) ),

which yields the first inequality of (4.16). The second inequality of (4.16) can be proved in a similar way. □
5. Two dimensional case

Recall the two dimensional stochastic equation

(4.14)  ξ₁(t) − ξ₁(0) = κ ∫₀ᵗ (ξ₂(s) − ξ₁(s)) ds + ∫₀^{t+} ξ₁(s−) dY₁(s),
        ξ₂(t) − ξ₂(0) = κ ∫₀ᵗ (ξ₁(s) − ξ₂(s)) ds + ∫₀^{t+} ξ₂(s−) dY₂(s),

and denote by λ^{(2)}(κ; Y) its sample Lyapunov exponent. In this section we investigate the asymptotics of λ^{(2)}(κ; Y) as κ ↘ 0.
Theorem 5.1. Assume that (1.8), the condition [A] and the following (5.1) are fulfilled:

(5.1)  ρ({−1}) = 0  and  ∫_{(−1,∞)} (log(1+u))² ρ(du) < ∞.

Then

(5.2)  λ^{(2)}(κ; Y) − λ₀(Y) ~ c / log(1/κ)  as κ ↘ 0,

where

(5.3)  c = (1/2) ( a² + ∫_{(−1,∞)} (log(1+u))² ρ(du) ).

Here α(κ) ~ β(κ) as κ ↘ 0 means lim_{κ↘0} α(κ)/β(κ) = 1.
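Theorem 5.1 can be probed numerically in the purely Gaussian case ρ = 0, a = 1, where λ₀(Y) = −1/2 and the theorem predicts λ^{(2)}(κ; Y) − λ₀(Y) ≈ (1/2)/log(1/κ) for small κ. The sketch below estimates λ^{(2)}(κ; Y) by the standard renormalization trick; the splitting scheme and all parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Monte Carlo probe of Theorem 5.1 in the purely Gaussian case rho = 0, a = 1,
# where lambda0(Y) = -1/2 and the theorem predicts
#   lambda^(2)(kappa; Y) - lambda0(Y) ~ (1/2)/log(1/kappa)  as kappa -> 0.
rng = np.random.default_rng(3)

def lyap2(kappa, T=100.0, dt=0.01, n_rep=10):
    """Estimate lambda^(2)(kappa; Y) for Y a standard Brownian motion."""
    steps, est = int(T / dt), []
    for _ in range(n_rep):
        xi, acc = np.ones(2), 0.0
        for _ in range(steps):
            xi = xi + kappa * (xi[::-1] - xi) * dt                 # exchange step
            xi = xi * np.exp(rng.normal(0.0, np.sqrt(dt), 2) - 0.5 * dt)
            m = xi.max()                                           # renormalize to avoid
            acc += np.log(m)                                       # under/overflow
            xi = xi / m
        est.append((acc + np.log(xi[0])) / T)
    return float(np.mean(est))

est = lyap2(0.1)
print(est)   # theory places the limit value strictly between -1/2 and 0
```

For κ = 0.1 the prediction c/log(1/κ) ≈ 0.22 suggests an estimate well above λ₀ = −1/2, though at this moderate κ the asymptotic formula is only indicative.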
Let ρ̂ be the Radon measure on R \ {0} defined by the following relation: for every continuous function f ≥ 0,

(5.4)  ∫_{R\{0}} ρ̂(du) f(u) = ∫_{(−1,∞)} ρ(du) ( f(log(1+u)) + f(−log(1+u)) ).

Note that the condition (5.1) is equivalent to

(5.5)  ∫_{R\{0}} ρ̂(du) u² < ∞.
Let Ẑ(t) be a Lévy process with the characteristic exponent

(5.6)  ψ̂(z) = −a²z² + ∫_{R\{0}} ( e^{√−1 zu} − 1 ) ρ̂(du),

and consider the following stochastic equation:

(5.7)  ζ(t) − ζ(0) = ∫₀ᵗ κ ( e^{−ζ(s)} − e^{ζ(s)} ) ds + Ẑ(t).

The equation (5.7) has the unique solution, which defines a one-dimensional Markov process (ζ(t), P_x) with the infinitesimal generator

(5.8)  Lf(x) = κ (e^{−x} − e^x) f′(x) + a² f″(x) + ∫_{R\{0}} ( f(x+y) − f(x) ) ρ̂(dy).
It is easy to see that (ζ(t), P_x) has the unique stationary probability measure μ^{(κ)}, which is characterized by

(5.9)  ∫_R μ^{(κ)}(dx) Lf(x) = 0  for f ∈ C₀²(R),

where C₀²(R) stands for the set of all C²-functions defined on R with compact support.

Note that μ^{(κ)} is indeed a symmetric measure on R, and that (5.9) is valid for f(x) = x² by the condition (5.1), so that

(5.10)  2κ ∫_R μ^{(κ)}(dx) x eˣ = a² + (1/2) ∫_{R\{0}} ρ̂(dx) x².

λ^{(2)}(κ; Y) is given in terms of μ^{(κ)} as follows.
Lemma 5.1.

(5.11)  λ^{(2)}(κ; Y) − λ₀(Y) = κ ( ∫_R μ^{(κ)}(dx) e^{−x} − 1 ).

Proof. Let (ξ₁(t), ξ₂(t)) be the solution of (4.14) under the assumption that the distribution of log ξ₁(0) coincides with μ^{(κ)} satisfying (5.9) and ξ₂(0) = 1. Then by Corollary 3.1 we have

(5.12)  λ^{(2)}(κ; Y) = lim_{t→∞} (1/t) E^Y( log ξ₁(t) ).
Let

(5.13)  ζ(t) = log( ξ₁(t) / ξ₂(t) ).

From (4.14) it follows that

(5.14)  log ξ₁(t) − log ξ₁(0) = κ ∫₀ᵗ ( e^{−ζ(s)} − 1 ) ds + aB₁(t) + λ₀(Y) t + ∫_{[0,t]×(−1,∞)} log(1+u) Ñ₁(ds, du),

and

(5.15)  ζ(t) − ζ(0) = κ ∫₀ᵗ ( e^{−ζ(s)} − e^{ζ(s)} ) ds + a(B₁(t) − B₂(t)) + ∫_{[0,t]×(−1,∞)} log(1+u) (Ñ₁ − Ñ₂)(ds, du),

which is equivalent to (5.7). Note further that ζ(t) is stationary, so it follows from (5.14) that

E^Y(log ξ₁(t)) − E^Y(log ξ₁(0)) = κt ( ∫_R μ^{(κ)}(dx) e^{−x} − 1 ) + λ₀(Y) t.

Thus (5.11) follows from this and (5.12). □
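In the purely Gaussian case ρ = 0 (an illustrative special case, not treated separately in the paper) the stationary measure of (5.7) is explicit, and the identity (5.10) can be verified by direct quadrature:

```python
import numpy as np

# Quadrature check of (5.10) in the purely Gaussian case rho = 0.  There the
# generator (5.8) is  Lf = kappa*(e^{-x} - e^x) f' + a^2 f'', whose stationary
# density is proportional to exp(-(kappa/a^2)(e^x + e^{-x})), and (5.10)
# reduces to  2*kappa * int x e^x mu(dx) = a^2.
a, kappa = 1.0, 0.5
x = np.linspace(-20.0, 5.0, 200001)
w = np.exp(-(kappa / a**2) * (np.exp(x) + np.exp(-x)))   # unnormalized density
lhs = kappa * np.sum(x * np.exp(x) * w) / np.sum(w)
print(lhs, 0.5 * a**2)   # should agree to quadrature accuracy
```

The integrand vanishes rapidly at both ends of the grid, so a plain Riemann sum suffices here.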
Proof of Theorem 5.1. First we give the upper bound. By (5.10),

(5.16)  κ ∫_R μ^{(κ)}(dx) x eˣ = c,

where c is given in (5.3). Set

(5.17)  M(κ) = ∫_R μ^{(κ)}(dx) eˣ.

Since g(x) = x log x is convex, using Jensen's inequality we get

κ M(κ) log M(κ) ≤ κ ∫_R μ^{(κ)}(dx) x eˣ = c.

Hence, denoting by g^{−1} the inverse function of g(x) = x log x on (1, ∞), we have M(κ) ≤ g^{−1}(c/κ) for small κ > 0, which yields

(5.18)  limsup_{κ↘0} κ M(κ) log(1/κ) ≤ c.

Since λ^{(2)}(κ; Y) − λ₀(Y) = κ(M(κ) − 1) by (5.11) and the symmetry of μ^{(κ)}, this proves the upper bound.
To get the lower bound we first assume

(5.19)    $c_1 = \displaystyle\int_{\mathbb{R}} \beta(dx)\, (e^{x} - 1)^2 = \int_{(-1,\infty)} \rho(du)\, u^2 < \infty.$

Then (5.9) is valid for $f(x) = e^{x}$ and

(5.20)    $\kappa \left( 1 - \displaystyle\int_{\mathbb{R}} \mu^{(\kappa)}(dx)\, e^{2x} \right) + (c_1 + \alpha^2)\, M(\kappa) = 0.$
Using Jensen's inequality and (5.16) we have

(5.21)    $\displaystyle\int_{\mathbb{R}} \mu^{(\kappa)}(dx)\, e^{2x} \ge M(\kappa) \exp\left( \dfrac{1}{M(\kappa)} \int_{\mathbb{R}} \mu^{(\kappa)}(dx)\, x e^{x} \right) = M(\kappa) \exp\left( \dfrac{c}{\kappa M(\kappa)} \right).$

Setting

$h_\kappa(x) = x e^{c/x} - \dfrac{c_2}{\kappa}\, x \quad \text{with} \quad c_2 = c_1 + \alpha^2,$

by (5.20) and (5.21) we get

(5.22)    $h_\kappa(\kappa M(\kappa)) \le \kappa.$
Since $h_\kappa(x)$ is decreasing on $(0, c)$ and $\kappa M(\kappa)$ vanishes as $\kappa \searrow 0$ by (5.18), setting $x(\kappa) = h_\kappa^{-1}(\kappa)$ we see that

(5.23)    $\kappa M(\kappa) \ge x(\kappa) \quad \text{for small } \kappa > 0.$
Note that $h_\kappa(x(\kappa)) = \kappa$ implies that $x(\kappa) \to 0$ as $\kappa \searrow 0$ and

$\dfrac{c}{x(\kappa)} = \log\left( \dfrac{\kappa}{x(\kappa)} + \dfrac{c_2}{\kappa} \right) \le \log \dfrac{1}{x(\kappa)} + \log \dfrac{2c_2}{\kappa} \quad \text{for small } \kappa > 0,$

from which and (5.23) it follows that

(5.24)    $\liminf_{\kappa \searrow 0}\, \log(1/\kappa) \left( \lambda^{(2)}(\kappa; Y) - \lambda_0(Y) \right) \ge \liminf_{\kappa \searrow 0}\, \log(1/\kappa)\, \kappa (M(\kappa) - 1) \ge \liminf_{\kappa \searrow 0}\, \log(1/\kappa)\, x(\kappa) \ge c.$
Thus we have shown the lower bound under the assumption (5.19). However, it is easy to remove (5.19) by virtue of Theorem 4.1. In fact, let $Y^{(n)}(t)$ be a Lévy process with the characteristic exponent $\psi(z)$ of (1.8) with $(\alpha, \rho_n)$, where $\rho_n$ is the restriction of $\rho$ to $[-1 + 1/n,\, n]$. Then by (5.24)

$\liminf_{\kappa \searrow 0}\, \log(1/\kappa) \left( \lambda^{(2)}(\kappa; Y^{(n)}) - \lambda_0(Y^{(n)}) \right) \ge \dfrac{1}{2} \left( \alpha^2 + \displaystyle\int_{[-1+1/n,\, n]} (\log(1+u))^2\, \rho(du) \right) \quad \text{for any } n \ge 1,$
so by Theorem 4.1 we get

$\liminf_{\kappa \searrow 0}\, \log(1/\kappa) \left( \lambda^{(2)}(\kappa; Y) - \lambda_0(Y) \right) \ge \lim_{n \to \infty} \liminf_{\kappa \searrow 0}\, \log(1/\kappa) \left( \lambda^{(2)}(\kappa; Y^{(n)}) - \lambda_0(Y^{(n)}) \right) = c,$

which yields the lower estimate of (5.2). □
6. Asymptotic estimates of $\lambda(\kappa A; Y)$ as $\kappa \searrow 0$

In this section we investigate asymptotic estimates of the sample Lyapunov exponent $\lambda(\kappa A; Y)$ as $\kappa \searrow 0$ for the SPDE (1.1) over $Z^d$ in two extreme cases, depending on the singularity of the Lévy measure $\rho$ in a neighbourhood of $-1$.
Let us begin with the non-singular case, in which (4.1) is satisfied. Recall that the solution $\xi(t) = \{\xi_i(t)\}$ with the initial condition $\xi_i(0) = 1$ ($i \in Z^d$) is represented by the following Feynman-Kac formula:

(6.1)    $\xi_i(t) = E_i\left( \exp \displaystyle\int_0^t dZ_{X(t-s)}(s) \right) \exp \lambda_0(Y)\, t,$
where $(X(t), P_i)$ denotes a continuous time random walk generated by $\kappa A$,

(6.2)    $Z_i(t) = \alpha B_i(t) + \displaystyle\int_{[0,t] \times (-1,\infty)} \log(1+u)\, \tilde N_i(ds, du) \quad (i \in Z^d),$
and by (4.1),

(6.3)    $-\infty < \lambda_0(Y) = -\dfrac{\alpha^2}{2} + \displaystyle\int_{(-1,\infty)} (\log(1+u) - u)\, \rho(du) < 0.$
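Since $\log(1+u) \le u$, the integrand in (6.3) is nonpositive, so $\lambda_0(Y)$ is negative whenever the noise is non-degenerate. For a finite (compound-Poisson) Lévy measure this is a one-line computation; the concrete jump sizes and masses below are arbitrary illustrative choices, not taken from the paper.

```python
import math

def lambda0(alpha, rho):
    """lambda_0(Y) = -alpha^2/2 + sum_u (log(1+u) - u) * rho({u}),
    for a finite Levy measure rho given as a dict {jump size u: mass}."""
    return -0.5 * alpha**2 + sum((math.log(1 + u) - u) * m
                                 for u, m in rho.items())

# illustrative noise: unit Gaussian part, jumps of size +1 and -1/2
val = lambda0(1.0, {1.0: 0.5, -0.5: 0.25})
```

Each summand $(\log(1+u) - u)\rho(\{u\})$ is $\le 0$, so `val` is strictly negative here.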
To investigate the asymptotics of $\lambda(\kappa A; Y)$ as $\kappa \searrow 0$ we use the following expression:

(6.4)    $\lambda(\kappa A; Y) - \lambda_0(Y) = \lim_{t \to \infty} \dfrac{1}{t} \log E_i\left( \exp \displaystyle\int_0^t dZ_{X(s)}(s) \right),$

since for fixed $t > 0$, $\{Z_i(s) : 0 \le s \le t, i \in Z^d\}$ and $\{Z_i(t) - Z_i(t-s) : 0 \le s \le t, i \in Z^d\}$ have the same probability law under $P_Y$, which implies that $E_i(\exp \int_0^t dZ_{X(t-s)}(s))$ and $E_i(\exp \int_0^t dZ_{X(s)}(s))$ have the same distribution under $P_Y$.
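The Feynman-Kac average in (6.4) can be explored by direct simulation. In the sketch below, the one-dimensional lattice, all parameter values, and the omission of the jump part of $Z$ are simplifying assumptions: one noise environment $\{\alpha B_i\}$ is drawn, and $\exp \int_0^t dZ_{X(s)}(s)$ is averaged over independent walk paths.

```python
import math
import random

def fk_average(kappa=0.4, alpha=0.5, t=2.0, dt=0.01, n_paths=2000, seed=1):
    """Monte Carlo sketch of E_0(exp int_0^t dZ_{X(s)}(s)) on Z^1:
    X is a rate-kappa nearest-neighbour walk; each site i carries an
    independent Brownian noise alpha * B_i (jump part of Z omitted)."""
    rng = random.Random(seed)
    steps = int(t / dt)
    noise = {}  # one fixed environment: increment dZ_i for (site, time step)

    def dz(i, k):
        if (i, k) not in noise:
            noise[(i, k)] = alpha * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        return noise[(i, k)]

    total = 0.0
    for _ in range(n_paths):
        x, integral = 0, 0.0
        for k in range(steps):
            integral += dz(x, k)           # collect noise at the current site
            if rng.random() < kappa * dt:  # walk jumps with rate kappa
                x += rng.choice((-1, 1))
        total += math.exp(integral)
    return total / n_paths
```

Because the environment is held fixed while paths are averaged, the result is a quenched quantity: it fluctuates with the realization of the noise, which is exactly the intermittency effect the moment and sample Lyapunov exponents quantify.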
For technical reasons we impose the following condition, stronger than (4.1).

Theorem 6.1. Assume further that $A = (a(i,j))$ is of finite range, i.e. for some $R > 0$,

$a(0,j) = 0 \quad \text{for } j \in Z^d \text{ with } |j| > R,$

and that

(6.5)    $\displaystyle\int_{(-1,-1/2)} \dfrac{1}{1+u}\, \rho(du) < \infty.$

Then there exist constants $c_1 > 0$, $c_2 > 0$ and $\kappa_0 > 0$ such that

(6.6)    $\dfrac{c_1}{\log(1/\kappa)} \le \lambda(\kappa A; Y) - \lambda_0(Y) \le c_2\, \dfrac{\log\log(1/\kappa)}{\log(1/\kappa)} \quad \text{for } 0 < \kappa < \kappa_0.$
Next we consider the following extremely singular case:

(6.7)    $\rho(\{-1\}) > 0.$

Theorem 6.2. Assume (6.7) in addition to the situation of Theorem 3.2. Then it holds that

(6.8)    $\lambda(\kappa A; Y) \asymp \log \kappa \quad \text{as } \kappa \searrow 0.$
For the proof of Theorem 6.1 we adopt the method of [8], which exploits an approximation of the continuous time random walk on $Z^d$ by a discrete time stochastic process. However, unlike the case of Brownian motions in [8], [3], the Lévy process $Y(t)$ lacks sufficient moment conditions, which makes the arguments more complicated.

In what follows we normalize $a(0,0) = -1$. First we quote the following lemma from [8].
Lemma 6.1 (Lemma 3.2 of [8]). Let $(X(t), P_i)$ be a continuous time random walk on $Z^d$ generated by $\kappa A$, and let $\Pi_t$ be the number of jump times of $X(\cdot)$ up to time $t$. Then $(\Pi_t)$ is a Poisson process with parameter $\kappa$. Moreover, there is a discrete time $Z^d$-valued stochastic process $(\bar X(n))$ satisfying

(6.9)    $|\bar X(n) - \bar X(n-1)| \le R \quad (n = 1, 2, \ldots),$

(6.10)

(6.11)    $\bar\Pi_n = \#\{1 \le m \le n : \bar X(m) \ne \bar X(m-1)\} \le \Pi_n \quad \text{and} \quad [\bar\Pi_n = 0] = [\Pi_n = 0].$
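The first assertion of Lemma 6.1 — that under the normalization $a(0,0) = -1$ the jump counter $\Pi_t$ is Poisson with parameter $\kappa$ — can be checked empirically. In the sketch below the rate, horizon, and sample size are arbitrary choices; sample mean and variance of the jump count are both compared with $\kappa t$.

```python
import random

def jump_count(kappa, t, rng):
    """Number of jumps of the walk up to time t: waiting times between
    jumps are Exp(kappa), so the count is Poisson(kappa * t)."""
    count, time = 0, 0.0
    while True:
        time += rng.expovariate(kappa)
        if time > t:
            return count
        count += 1

rng = random.Random(42)
samples = [jump_count(0.3, 10.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# for a Poisson(3.0) count, both mean and var should be close to 3.0
```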
Lemma 6.2. Conditioned on the sample paths $X(\cdot)$ and $\bar X(\cdot)$, it holds that

(6.12)    $E_Y\left( \exp \displaystyle\int_0^t dZ_{X(s)}(s) \right) = e^{|\lambda_0(Y)|\, t}, \quad P_i\text{-a.s.},$

and

(6.13)    $E_Y\left( \exp \displaystyle\int_0^n d(Z_{X(s)} - Z_{\bar X([s])})(s) \right) \le e^{c \bar\Pi_n}, \quad P_i\text{-a.s.},$

where

$c = \alpha^2 + \displaystyle\int_{(-1,\infty)} \dfrac{u^2}{1+u}\, \rho(du).$

Proof. Since $\int_0^t dZ_{X(s)}(s)$ is equivalent in law to the Lévy process $Z_i(t)$ of (6.2) for $P_i$-a.s. fixed $X(\cdot)$, (6.12) is obvious.

Note that for $i \ne j$,

$E_Y\left( \exp(Z_i(1) - Z_j(1)) \right) = \exp\left( \alpha^2 + \displaystyle\int_{(-1,\infty)} \left( u + \dfrac{1}{1+u} - 1 \right) \rho(du) \right) = e^{c},$

hence this and (6.10) yield (6.13). □
Lemma 6.3. Assume the condition (5.1). Then there exists a constant $C_1 > 0$ such that

(6.14)    $P_Y(Z(t) > \lambda t) \le \exp(-C_1 t \lambda^2) \quad \text{for } 0 < \lambda \le 1 \text{ and } t > 0.$

Proof. Note that

$M(a) = \log E_Y(\exp a Z(1)) = \dfrac{a^2 \alpha^2}{2} + \displaystyle\int_{(-1,\infty)} \left( (1+u)^a - 1 - a \log(1+u) \right) \rho(du)$

is a $C^2((0,1])$-function, and by (5.1)

(6.15)    $M''(0+) = \alpha^2 + \displaystyle\int_{(-1,\infty)} (\log(1+u))^2\, \rho(du) < \infty.$

Setting

$L(\lambda) = \sup_{0 < z \le 1} (z\lambda - M(z)) \quad (\lambda > 0),$

by the exponential Chebyshev inequality we get

(6.16)    $P_Y(Z(t) > \lambda t) \le \exp(-t L(\lambda)).$

So it suffices to show that

(6.17)    $C_1 \equiv \inf_{0 < \lambda \le 1} \dfrac{L(\lambda)}{\lambda^2} > 0.$

But, as is easily seen, $L(0) = 0$, $L(\lambda) > 0$ ($\lambda > 0$) and

$\lim_{\lambda \searrow 0} \dfrac{L(\lambda)}{\lambda^2} = \dfrac{1}{2 M''(0+)},$

which yields (6.17). □
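For a concrete noise — unit Gaussian part and a single jump size $u_0 = 1$ with unit intensity, an illustrative assumption rather than the paper's setting — the functions $M$, $L$ and the constant $C_1$ of (6.17) can be evaluated numerically:

```python
import math

ALPHA, U0 = 1.0, 1.0  # illustrative: unit Gaussian part, one jump size

def M(z):
    """log-moment generating function of Z(1) for rho = delta_{U0}."""
    return 0.5 * ALPHA**2 * z**2 + (1 + U0)**z - 1 - z * math.log(1 + U0)

def L(lam, steps=2000):
    """Legendre-type transform L(lam) = sup_{0 < z <= 1} (z*lam - M(z)),
    approximated on a grid."""
    return max(z * lam - M(z) for z in
               (k / steps for k in range(1, steps + 1)))

# ratios L(lam)/lam^2 on a grid of lam in (0, 1]; their infimum is C_1
ratios = [L(k / 100) / (k / 100) ** 2 for k in range(1, 101)]
C1 = min(ratios)
```

As $\lambda \searrow 0$ the ratio approaches $1/(2 M''(0+)) = 1/(2(\alpha^2 + (\log 2)^2))$, in line with the last step of the proof.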
Proof of Theorem 6.1. Let $0 < \epsilon < 1$ be fixed.

1. Note that

(6.18)    $\lim_{n \to \infty} \dfrac{1}{n} \log E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : \Pi_n = 0 \right) = \lim_{n \to \infty} \dfrac{1}{n} \left( Z_0(n) + \log P_0(\Pi_n = 0) \right) = -\kappa, \quad P_Y\text{-a.s.}$
2. Using Fubini's theorem and (6.12) we have

$E^Y\left( E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : \Pi_n > \epsilon n \right) \right) = e^{|\lambda_0(Y)|\, n}\, P_0(\Pi_n > \epsilon n) \le E_0\left( \exp\left( \dfrac{|\lambda_0(Y)|}{\epsilon}\, \Pi_n \right) \right) = \exp\left( \kappa n \left( e^{|\lambda_0(Y)|/\epsilon} - 1 \right) \right),$
so that by Chebyshev's inequality, for every $\delta > 0$,

$P^Y\left( \dfrac{1}{n} \log E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : \Pi_n > \epsilon n \right) > (\kappa + \delta) \left( e^{|\lambda_0(Y)|/\epsilon} - 1 \right) \right) \le \exp\left( -\delta n \left( e^{|\lambda_0(Y)|/\epsilon} - 1 \right) \right),$

which is summable in $n \ge 1$. Hence by the Borel-Cantelli lemma it holds that

(6.19)    $\limsup_{n \to \infty} \dfrac{1}{n} \log E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : \Pi_n > \epsilon n \right) \le \kappa \left( e^{|\lambda_0(Y)|/\epsilon} - 1 \right), \quad P_Y\text{-a.s.}$
3. Let $W_n^k$ be the totality of $Z^d$-valued paths $w = (w(m))_{0 \le m \le n}$ such that $|w(m) - w(m-1)| \le R$ ($1 \le m \le n$) and $\#\{1 \le m \le n : w(m) \ne w(m-1)\} = k$. Note that for a function $\delta(r)$ defined on $[0, \epsilon]$, if

$\displaystyle\int_0^n dZ_{w[s]}(s) \le n \delta\left( \dfrac{k}{n} \right) \quad \text{for every } 1 \le k \le \epsilon n \text{ and } w \in W_n^k,$

then

$E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : 1 \le \bar\Pi_n \le \epsilon n \right)$
$= \displaystyle\sum_{k=1}^{[\epsilon n]} \sum_{w \in W_n^k} \exp\left( \int_0^n dZ_{w[s]}(s) \right) E_0\left( \exp \int_0^n d(Z_{X(s)} - Z_{w[s]})(s) : 1 \le \bar\Pi_n \le \epsilon n,\ \bar\Pi_n = k,\ \bar X(\cdot) = w \right)$
$\le E_0\left( \exp\left( n\delta\left( \dfrac{\bar\Pi_n}{n} \right) + \displaystyle\int_0^n d(Z_{X(s)} - Z_{\bar X([s])})(s) \right) : 1 \le \bar\Pi_n \le \epsilon n \right).$
From this it follows that

$P^Y\left( E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : 1 \le \bar\Pi_n \le \epsilon n \right) \ge E_0\left( \exp 2n\delta\left( \dfrac{\bar\Pi_n}{n} \right) : \bar\Pi_n \le \epsilon n \right) \right)$
$\le \displaystyle\sum_{k=1}^{[\epsilon n]} \sum_{w \in W_n^k} P^Y\left( \int_0^n dZ_{w[s]}(s) > n\delta\left( \dfrac{k}{n} \right) \right)$
$\quad + P^Y\left( E_0\left( \exp\left( n\delta\left( \dfrac{\bar\Pi_n}{n} \right) + \displaystyle\int_0^n d(Z_{X(s)} - Z_{\bar X([s])})(s) \right) : 1 \le \bar\Pi_n \le \epsilon n \right) \ge E_0\left( \exp 2n\delta\left( \dfrac{\bar\Pi_n}{n} \right) : \bar\Pi_n \le \epsilon n \right) \right)$
$\le \displaystyle\sum_{k=1}^{[\epsilon n]} \sum_{w \in W_n^k} \exp\left( -C_1 n \delta\left( \dfrac{k}{n} \right)^2 \right) + E_0\left( \exp 2n\delta\left( \dfrac{\bar\Pi_n}{n} \right) : \bar\Pi_n \le \epsilon n \right)^{-1} E_0\left( \exp\left( n\delta\left( \dfrac{\bar\Pi_n}{n} \right) + c \bar\Pi_n \right) : 1 \le \bar\Pi_n \le \epsilon n \right)$
$= J_1(n) + J_2(n) \quad \text{(say)},$

where at the last inequality we used Lemma 6.3 together with Chebyshev's inequality.
Using Stirling's formula we get

$J_1(n) \le \text{const} \displaystyle\sum_{k=1}^{[\epsilon n]} k^{1/2} \exp\left( k \log \dfrac{n}{k} + 2dk \log R - C_1 n \delta\left( \dfrac{k}{n} \right)^2 \right).$

So, letting

(6.20)    $C_1 \delta(r)^2 = 3r \log \dfrac{1}{r} + 2dr \log R,$

we get

$J_1(n) \le \text{const} \displaystyle\sum_{k=1}^{[\epsilon n]} k^{1/2} \exp\left( 2k \log \dfrac{k}{n} \right),$

which yields that $\{J_1(n)\}$ is summable.

Next choose a small $\epsilon > 0$ so that $\delta(r)$ of (6.20) is increasing on $[0, \epsilon]$; then

$J_2(n) \le \exp\left( n \sup_{0 < r \le \epsilon} (cr - \delta(r)) \right).$
Accordingly, for small $\epsilon$, $\{J_2(n)\}$ is also summable; thus the Borel-Cantelli lemma implies that $P_Y$-almost surely

$\limsup_{n \to \infty} \dfrac{1}{n} \log E_0\left( \exp \displaystyle\int_0^n dZ_{X(s)}(s) : 1 \le \bar\Pi_n \le \epsilon n \right)$
$\le \lim_{n \to \infty} \dfrac{1}{n} \log E_0\left( \exp 2n\delta\left( \dfrac{\bar\Pi_n}{n} \right) : \bar\Pi_n \le \epsilon n \right) = \sup_{0 < r \le \epsilon} (2\delta(r) - I(r))$
$\le \text{const} \sup_{0 < r \le \epsilon} \left( \sqrt{r \log \dfrac{1}{r}} - r \log \dfrac{r}{\kappa} \right) \le \text{const}\, \dfrac{\log\log(1/\kappa)}{\log(1/\kappa)} \quad \text{for small } \kappa > 0,$

where $I(r)$ denotes the large deviation rate function of $\Pi_n/n$ (for the Poisson process of rate $\kappa$, $I(r) = r \log(r/\kappa) - r + \kappa$). This completes the proof of the upper bound of (6.6). On the other hand, the lower bound of (6.6) follows from Theorem 4.2 and Theorem 5.1, since the condition (5.1) is satisfied by (6.5). Therefore the proof of Theorem 6.1 is complete. □
Proof of Theorem 6.2. The proof is reduced to the case $\alpha = 0$ and $\rho = \delta_{\{-1\}}$, for which (6.8) was proved in [9]; by virtue of Theorem 1.4 of [9], for the corresponding Lévy process $Y'(t)$,

(6.21)    $\lambda(\kappa A; Y') \asymp \log \kappa \quad \text{as } \kappa \searrow 0.$

Furthermore, by the comparison result of Theorem 4.1 we obtain the upper estimate of (6.8).
On the other hand, by Theorem 2.2,

(6.22)    $\xi_i(t) \ge e^{\beta t}\, E_i\left( \exp \displaystyle\int_0^t dZ_{X(t-s)}(s) : \int_0^t N_{X(t-s)}(ds, \{-1\}) = 0 \right),$

where

$Z_i(t) = \alpha B_i(t) + \displaystyle\int_{[0,t] \times \{|u| < 1/2\}} \log(1+u)\, \tilde N_i(ds, du) + \int_{[0,t] \times ((-1,-1/2] \cup [1/2,\infty))} \log(1+u)\, N_i(ds, du)$

with

$\beta = \displaystyle\int_{[-1,\infty)} \left( 1\left( |u| < \dfrac{1}{2} \right) \log(1+u) - u \right) \rho(du).$

Let

$Z'_i(t) = \alpha B_i(t) + \displaystyle\int_{[0,t] \times \{|u| < 1/2\}} \log(1+u)\, \tilde N_i(ds, du).$
By the proof of Theorem 6.1,

(6.23)    $\lim_{t \to \infty} \dfrac{1}{t}\, E^Y\left( \log E_i\left( \exp\left( -\displaystyle\int_0^t dZ'_{X(t-s)}(s) \right) \right) \right) \le C \quad \text{for some constant } C \text{ and all small } \kappa > 0.$
By the previous result in [9] we know that

(6.24)    $\lim_{t \to \infty} \dfrac{1}{t} \log P_i\left( \displaystyle\int_0^t N_{X(t-s)}\left( ds, \left\{ |u| \ge \dfrac{1}{2} \right\} \right) = 0 \right) \asymp \log \kappa \quad \text{for small } \kappa > 0.$
Noting that by the Schwarz inequality

$P_i\left( \displaystyle\int_0^t N_{X(t-s)}\left( ds, \left\{ |u| \ge \dfrac{1}{2} \right\} \right) = 0 \right)$
$\le E_i\left( \exp \displaystyle\int_0^t dZ'_{X(t-s)}(s) : \int_0^t N_{X(t-s)}\left( ds, \left\{ |u| \ge \dfrac{1}{2} \right\} \right) = 0 \right)^{1/2} E_i\left( \exp\left( -\displaystyle\int_0^t dZ'_{X(t-s)}(s) \right) \right)^{1/2}$
$\le e^{|\beta| t}\, E_i\left( \exp \displaystyle\int_0^t dZ_{X(t-s)}(s) : \int_0^t N_{X(t-s)}(ds, \{-1\}) = 0 \right)^{1/2} E_i\left( \exp\left( -\displaystyle\int_0^t dZ'_{X(t-s)}(s) \right) \right)^{1/2},$

by (6.22)-(6.24) we have a constant $C > 0$ satisfying

$\lambda(\kappa A; Y) \ge C \log \kappa \quad \text{for small } \kappa > 0,$

which combines with the upper estimate to complete the proof of Theorem 6.2. □
A combination of Theorem 6.1 and Theorem 4.1 yields the following continuity result for $\lambda(\kappa A; Y)$ as $\kappa \searrow 0$.

Theorem 6.3. Assume that $A = (a(i,j))$ is of finite range.
(i) If (5.1) is fulfilled, then $\lim_{\kappa \searrow 0} \lambda(\kappa A; Y) = \lambda_0(Y) > -\infty$.
(ii) If (5.1) is violated, then $\lambda_0(Y) = -\infty$ and $\lim_{\kappa \searrow 0} \lambda(\kappa A; Y) = -\infty$.

Proof. Let $Y^{(n)}(t)$ be a Lévy process with the characteristic exponent (1.7) with $\rho_n = \rho|_{(-1+1/n,\infty)}$ in place of $\rho$. Then Theorem 6.1 is applicable to $Y^{(n)}(t)$, and it holds that

(6.25)    $\lim_{\kappa \searrow 0} \lambda(\kappa A; Y^{(n)}) = \lambda_0(Y^{(n)}).$

Also, by Theorem 4.1,

(6.26)    $\lambda_0(Y) \le \lambda(\kappa A; Y) \le \lambda(\kappa A; Y^{(n)}).$

Since $\lim_{n \to \infty} \lambda_0(Y^{(n)}) = \lambda_0(Y) \ge -\infty$, (6.25) and (6.26) yield (i) and (ii). □
REMARK 6.1. Assume the condition [A] together with $\sum_{j \in Z^d} |j|^2\, a(0,j) < \infty$, and that $Y(t)$ has zero mean.
(i) If $d = 1$ or $2$, then for every $\kappa > 0$ it holds that

(6.27)    $\lim_{t \to \infty} \xi_i(t) = 0 \quad \text{in probability} \quad (i \in Z^d)$

for every nonnegative solution $\xi(t) = \{\xi_i(t)\}$ of (1.1) satisfying $\sup_{i \in Z^d} E^Y(\xi_i(0)) < \infty$.
(ii) If $d \ge 3$, there is a constant $\kappa_0 > 0$ such that (6.27) holds for $0 < \kappa < \kappa_0$.

Proof. (i) can be proved by the same method as Theorem 4.5 in [7], Chap. IX. For (ii), apply Theorem 6.3, which asserts that $\lambda(\kappa A; Y) < 0$ for small $\kappa > 0$. Moreover, Corollary 3.1 implies (6.27). □
REMARK 6.2. Let $d \ge 3$, and let $(\Omega, \mathcal{F}, P_\xi, \xi(t) = \{\xi_i(t)\})$ be the linear Markovian system associated with (1.1). Remark 6.1 implies that if $\kappa > 0$ is small, any $Z^d$-shift invariant stationary distribution $\mu$ satisfying $E_\mu(\xi_i) < \infty$ coincides with $\delta_0$ (the point mass at $\xi_i = 0$ ($i \in Z^d$)). On the other hand, if

(6.28)    $\displaystyle\int u^2\, \rho(du) < \infty,$

then for large $\kappa > 0$, $(\Omega, \mathcal{F}, P_\xi, \xi(t) = \{\xi_i(t)\})$ has non-trivial stationary distributions, which can be proved by making use of standard second moment computations as in [7], Chap. IX. However, if (6.28) is violated, then $E_1(\xi_i(t)^2) = \infty$ for $t > 0$, so that second moment arguments do not apply. In this case it is an open problem how to show the existence of non-trivial stationary distributions for large $\kappa > 0$.
References

[1] H.S. Ahn, R.A. Carmona and S.A. Molchanov: Nonstationary Anderson model with Lévy potential, Lecture Notes in Control and Information Science 176 (1992), 1-11.
[2] R. Carmona and S.A. Molchanov: Parabolic Anderson Model and Intermittency, Mem. Amer. Math. Soc. 518 (1994).
[3] R. Carmona, S.A. Molchanov and F.G. Viens: Sharp upper bound on the almost-sure exponential behavior of a stochastic parabolic partial differential equation, Random Oper. and Stoch. Equ. 4 (1996), 43-49.
[4] J.T. Cox, K. Fleischmann and A. Greven: Comparison of interacting diffusions and an application to their ergodic theory, Probab. Th. Related Fields 105 (1996), 513-528.
[5] J.D. Deuschel and D.W. Stroock: Large Deviations, Academic Press, 1990.
[6] N. Ikeda and S. Watanabe: Stochastic Differential Equations and Diffusion Processes, North-Holland/Kodansha, 1981.
[7] T.M. Liggett: Interacting Particle Systems, Springer Verlag, 1985.
[8] T. Shiga: Ergodic theorems and exponential decay of sample paths for certain interacting diffusion systems, Osaka J. Math. 29 (1992), 789-807.
[9] T. Shiga: Exponential decay rate of the survival probability in a disastrous random environment, Probab. Th. Related Fields 108 (1997), 417-439.
[10] Y.B. Zeldovich, S.A. Molchanov, A.A. Ruzmaikin and D.D. Sokoloff: Intermittency, diffusion and generation in a non-stationary random medium, Soviet Sci. Rev. Sec. C 7 (1988), 7-110.
T. Furuoya
Department of Applied Physics
Tokyo Institute of Technology
Oh-okayama, Meguro
Tokyo 152-0033, Japan
T. Shiga
Department of Applied Physics
Tokyo Institute of Technology
Oh-okayama, Meguro
Tokyo 152-0033, Japan