
Kolmogorov continuity Theorem and related results

Geoffrey Boutard

M2 thesis, written under the supervision of Antoine Ayache

Contents

1 Introduction
2 Hölder spaces and their local versions
  2.1 Global Hölder regularity for bounded functions on R^d
  2.2 Local Hölder regularity for unbounded functions on R^d
3 Variance of increments and connections to path Hölder regularity
  3.1 Derivatives for Gaussian processes
  3.2 Kolmogorov continuity Theorem and related results
  3.3 Strong versions of the Kolmogorov continuity Theorem
4 Tightness criteria

1 Introduction

For a mean zero real-valued Gaussian process X = {X(t) : t ∈ R}, we are interested in determining the regularity of its trajectories (in other words, its sample paths). The well-known Kolmogorov continuity Theorem provides a useful criterion for establishing the existence of a modification of X with almost surely continuous sample paths; a strong version of this Theorem, due to Kolmogorov and Čentsov, even allows one to show that the sample paths are Hölder functions on each compact interval. More precisely, assuming there exist two constants C > 0 and α ∈ (0, 1] such that

E[|X(t + h) − X(t)|^2] ≤ C|h|^{2α}, for all t, h ∈ R,

there is a modification Y = {Y(t) : t ∈ R} of the process X such that, almost surely, the function t ↦ Y(t, ω) satisfies, on each compact interval, a Hölder condition of any arbitrary order γ ∈ (0, α) ⊂ (0, 1).

The notion of Hölder regularity can be extended to functions which are differentiable not only once but an arbitrary number of times. Thus the orders of their Hölder regularities need not belong to the bounded interval (0, 1); in fact, these orders can be any positive real numbers. To simplify the presentation of this memoir, we have limited ourselves to the noninteger case, which is in fact the most usual one. The main goal of the memoir is to present an extension of the Kolmogorov-Čentsov Theorem which provides a criterion for showing that the sample paths of a Gaussian process are, almost surely, Hölder functions of an arbitrary noninteger order bigger than 1. Another goal of the memoir is the study of weak relative compactness of a sequence of probability measures on a Banach space of continuous functions; it is known that a necessary and sufficient condition for such compactness is the tightness of the sequence. We focus on a tightness criterion whose statement is very close to that of the Kolmogorov-Čentsov Theorem.
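As an illustration of this criterion (added here for concreteness; fractional Brownian motion itself is not studied in this memoir), fractional Brownian motion with Hurst index H ∈ (0, 1) satisfies E[|B_H(t + h) − B_H(t)|^2] = |h|^{2H}, so the theorem yields a modification whose paths are Hölder of every order γ < H on compact intervals. The following sketch, assuming only NumPy, simulates such a process by a Cholesky factorization of its covariance and recovers the exponent 2α = 2H empirically.

```python
import numpy as np

# Illustration (not from the memoir): simulate fractional Brownian motion on (0, 1]
# via a Cholesky factorization of its covariance
# R(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2,
# then check that the empirical variance of increments scales like |h|^{2H}.
def fbm_paths(hurst, n_points=256, n_paths=200, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(1.0 / n_points, 1.0, n_points)        # avoid t = 0 (degenerate row)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n_points))   # tiny jitter for stability
    return t, chol @ rng.standard_normal((n_points, n_paths))

hurst = 0.7
t, paths = fbm_paths(hurst)
lags = np.array([1, 2, 4, 8, 16])
h = lags / len(t)                                          # grid spacing is 1/n_points
var_inc = [np.mean((paths[k:, :] - paths[:-k, :])**2) for k in lags]
slope, _ = np.polyfit(np.log(h), np.log(var_inc), 1)
print(f"estimated 2*alpha = {slope:.2f}  (theory: {2 * hurst:.2f})")
```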

2 Hölder spaces and their local versions

In the present section, we give a detailed presentation of the notion of Hölder regularity; our presentation is inspired by that of the article [And97]. In all the sequel, d denotes an arbitrary fixed positive integer.


2.1 Global Hölder regularity for bounded functions on R^d

Definition 2.1.1. Let α be in (0, 1) and let f be a real-valued function defined on R^d. We say that f is a Hölder function, or has Hölder regularity, of order α if it is bounded and if there exists a constant C > 0 such that for all x and y in R^d,

|f(y) − f(x)| ≤ C|y − x|^α.

We write f ∈ C^α(R^d).

We would like to have a definition of a Hölder function of order s > 1. If a real-valued function f satisfies, for some s > 1 and all x and y in R^d, |f(y) − f(x)| ≤ C|y − x|^s, then f is differentiable on R^d and Df(x) ≡ 0 for all x ∈ R^d, that is, f is constant on R^d. Hence we need another definition when the order is greater than 1.

Definition 2.1.2. Let s be a non-integer positive real; write s = m + α with m ∈ {0, 1, 2, . . . } and α ∈ (0, 1). We say that a real-valued function f defined on R^d is a Hölder function, or has Hölder regularity, of order s if it is m times continuously differentiable and satisfies: for each multi-index β of Z_+^d such that |β| ≤ m, the function ∂^β f is bounded, and for each multi-index β of Z_+^d such that |β| = m, the function ∂^β f is in the space C^α(R^d). We write f ∈ C^s(R^d).

With this definition, Hölder spaces satisfy the following inclusion property.

Proposition 2.1.3. Let s and t be two positive reals. If s ≤ t, then

C^t(R^d) ⊂ C^s(R^d).

We begin with the following lemma.

Lemma 2.1.4. Let α_1 and α_2 be two reals in (0, 1). If α_1 ≤ α_2, then

C^{α_2}(R^d) ⊂ C^{α_1}(R^d).

Proof of Lemma 2.1.4. Let f be in C^{α_2}(R^d); we prove that f lies in C^{α_1}(R^d). By definition, the function f is bounded on R^d by a constant M and there exists another constant C > 0 such that for all x and y in R^d,

|f(x) − f(y)| ≤ C|x − y|^{α_2}.

Suppose that |x − y| ≤ 1; then, as α_2 − α_1 ≥ 0, we have |x − y|^{α_2−α_1} ≤ 1, so

|f(x) − f(y)| ≤ C|x − y|^{α_1}|x − y|^{α_2−α_1} ≤ C|x − y|^{α_1}.

If |x − y| > 1, then |x − y|^{α_1} > 1 and

|f(x) − f(y)| ≤ 2M ≤ 2M|x − y|^{α_1}.

Hence, in all cases,

|f(x) − f(y)| ≤ max(2M, C)|x − y|^{α_1},

which finishes the proof.

Remark 2.1.5. From the previous proof we can notice that if f is bounded and there exists C > 0 such that, for all x and y in R^d, |f(x) − f(y)| ≤ C|x − y|, then for all α ≤ 1 there exists C' > 0 such that, for all x and y in R^d,

|f(x) − f(y)| ≤ C'|x − y|^α.

We now prove Proposition 2.1.3.

Proof of Proposition 2.1.3. In the sequel, we write

s = m_s + α_s and t = m_t + α_t,

with m_s and m_t in {0, 1, . . . } and α_s and α_t in (0, 1). Let f be in C^t(R^d); we prove that f is in C^s(R^d). Since s ≤ t, we have m_s ≤ m_t, so the function f is m_s times continuously differentiable; and for each multi-index β of Z_+^d such that |β| ≤ m_s, we have |β| ≤ m_t, so the function ∂^β f is bounded on R^d.

There are two cases. If m_s = m_t, then α_s ≤ α_t and we can conclude using Lemma 2.1.4: indeed, for each multi-index β of Z_+^d such that |β| = m_s, the function ∂^β f has Hölder regularity of order α_t, hence of order α_s.

In the second case, we suppose that m_s < m_t. Let β = (β_1, . . . , β_d) be a multi-index of Z_+^d such that |β| = m_s; then, for each i ∈ {1, . . . , d}, we denote by β^{i+} the multi-index defined as (β_1, . . . , β_{i−1}, β_i + 1, β_{i+1}, . . . , β_d). For all i ∈ {1, . . . , d}, we have |β^{i+}| = m_s + 1 ≤ m_t, so the functions ∂^{β^{i+}} f are bounded by a constant M and, by the mean value theorem, for all x and y of R^d,

|∂^β f(x) − ∂^β f(y)| ≤ Md|x − y|.

By the previous remark, since α_s ≤ 1, ∂^β f has Hölder regularity of order α_s. As this is true for each multi-index β, we can conclude that f lies in C^s(R^d), and the proposition is proved.
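These inclusions are strict. For instance (an example added here for illustration), take d = 1 and 0 < α_1 < α_2 < 1, and consider the bounded function f(x) = min(|x|, 1)^{α_1}. Since |a^{α_1} − b^{α_1}| ≤ |a − b|^{α_1} for all a, b ≥ 0, the function f lies in C^{α_1}(R); but it does not lie in C^{α_2}(R), because for 0 < |x| ≤ 1 one has |f(x) − f(0)|/|x|^{α_2} = |x|^{α_1−α_2}, which tends to +∞ as x → 0.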

The function x ↦ x^2, defined on R, satisfies, for all x and y in [−M, M] with M > 0,

|x^2 − y^2| = |x + y||x − y| ≤ 2M|x − y|.

But it is impossible to have an inequality of the kind

∀(x, y) ∈ R^2, |x^2 − y^2| ≤ C|x − y|,

where C is a positive constant which does not depend on x and y, since it would imply that

∀(x, y) ∈ R^2, x ≠ y, |x + y| ≤ C.

This leads us to the notion of locally Hölder functions.

2.2 Local Hölder regularity for unbounded functions on R^d

Definition 2.2.1. Let s be a non-integer, positive real number. We say that a real-valued function f, defined on R^d, is a locally Hölder function, or has local Hölder regularity, of order s if, and only if, for all functions ϕ infinitely differentiable and compactly supported (we write D(R^d) for the set of such functions), the function fϕ is in C^s(R^d). We denote by C^s_loc(R^d) this space.

We will give a characterization of these spaces.

Proposition 2.2.2. Let f be a function defined on R^d. Then f lies in C^s_loc(R^d), with s = m + α, m ∈ Z_+ and α ∈ (0, 1), if, and only if, f is m times continuously differentiable on R^d and, for all compact sets K of R^d and for each multi-index β of Z_+^d such that |β| = m, there exists a constant C(K) > 0 such that

∀(x, y) ∈ K × K, |∂^β f(y) − ∂^β f(x)| ≤ C(K)|y − x|^α.

Proof. Let f be a function defined on R^d. We suppose that f is in C^s_loc(R^d). The function f is m times continuously differentiable. Let K be a compact set of R^d, and let β be a multi-index in Z_+^d such that |β| = m. There is an integer p such that K ⊂ B(0, p) := {x ∈ R^d : |x| ≤ p}, and a function ϕ ∈ D(R^d) equal to 1 on B(0, p + 1); then the function fϕ is in C^s(R^d). Moreover, since ϕ is equal to 1 on B(0, p + 1), for all t in B(0, p), ∂^β(fϕ)(t) = ∂^β f(t)ϕ(t). Hence, as K ⊂ B(0, p), there exists a constant C(K) > 0 such that

∀(x, y) ∈ K × K, |∂^β f(y) − ∂^β f(x)| = |∂^β(fϕ)(y) − ∂^β(fϕ)(x)| ≤ C(K)|y − x|^α.


Let us suppose now that the function f is m times continuously differentiable on R^d and satisfies the following property: for all compact sets K of R^d and for each multi-index β of Z_+^d such that |β| = m, there exists a constant C(K) > 0 such that

∀(x, y) ∈ K × K, |∂^β f(y) − ∂^β f(x)| ≤ C(K)|y − x|^α.

Let ϕ be any function in D(R^d); there exists a compact set K such that ϕ|_{R^d\K} ≡ 0. Let β = (β_1, . . . , β_d) be in Z_+^d with |β| = m. We will prove that for all x and y in R^d we have the inequality

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| ≤ C|y − x|^α.

By the Leibniz formula, we have

∂^β(fϕ)(x) = ∑_{i_1=0}^{β_1} · · · ∑_{i_d=0}^{β_d} \binom{β_1}{i_1} · · · \binom{β_d}{i_d} ∂^{(i_1,...,i_d)}f(x) ∂^{β−(i_1,...,i_d)}ϕ(x),

and, by the triangle inequality,

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| ≤ ∑_{i_1=0}^{β_1} · · · ∑_{i_d=0}^{β_d} \binom{β_1}{i_1} · · · \binom{β_d}{i_d} |∂^{(i_1,...,i_d)}f(x) ∂^{β−(i_1,...,i_d)}ϕ(x) − ∂^{(i_1,...,i_d)}f(y) ∂^{β−(i_1,...,i_d)}ϕ(y)|.

Since f is m times continuously differentiable, for all i := (i_1, . . . , i_d) such that |(i_1, . . . , i_d)| ≤ m, the function ∂^i f is bounded on K by a constant c(K, i). Moreover, since ϕ is in D(R^d), the function ∂^{β−i}ϕ is also in D(R^d), so it is bounded by a constant c'(i) on R^d; as before, by the mean value theorem, we have ∂^{β−i}ϕ ∈ C^α(R^d). There exists then a constant M'(i) > 0 such that, for all (x, y) ∈ K × K,

|∂^{β−i}ϕ(y) − ∂^{β−i}ϕ(x)| ≤ M'(i)|y − x|^α.

In the same way, by the mean value theorem and Remark 2.1.5 when |i| < m, and by hypothesis when |i| = m, there exists a constant M(K, i) > 0 such that, for all (x, y) ∈ K × K,

|∂^i f(y) − ∂^i f(x)| ≤ M(K, i)|y − x|^α.

Hence,

|∂^i f(x) ∂^{β−i}ϕ(x) − ∂^i f(y) ∂^{β−i}ϕ(y)| ≤ |∂^i f(x) − ∂^i f(y)| |∂^{β−i}ϕ(x)| + |∂^i f(y)| |∂^{β−i}ϕ(x) − ∂^{β−i}ϕ(y)| ≤ (c'(i)M(K, i) + c(K, i)M'(i))|y − x|^α.

Let us write

C(K) = ∑_{i_1=0}^{β_1} · · · ∑_{i_d=0}^{β_d} \binom{β_1}{i_1} · · · \binom{β_d}{i_d} (c'(i)M(K, i) + c(K, i)M'(i)).

For all (x, y) ∈ K × K,

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| ≤ C(K)|y − x|^α.

As ∂^β(fϕ)(x) = 0 when x ∉ K, for all x ∉ K and y ∉ K,

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| = 0 ≤ C|y − x|^α.

Suppose now that x ∈ K and y ∉ K (the case x ∉ K and y ∈ K is similar); then, by the Leibniz formula,

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| = |∂^β(fϕ)(x)| ≤ ∑_{i_1=0}^{β_1} · · · ∑_{i_d=0}^{β_d} \binom{β_1}{i_1} · · · \binom{β_d}{i_d} |∂^{(i_1,...,i_d)}f(x) ∂^{β−(i_1,...,i_d)}ϕ(x)|.

Moreover, as f and all its partial derivatives are locally bounded (being continuous),

|∂^{(i_1,...,i_d)}f(x) ∂^{β−(i_1,...,i_d)}ϕ(x)| = |∂^{(i_1,...,i_d)}f(x)| |∂^{β−(i_1,...,i_d)}ϕ(x) − ∂^{β−(i_1,...,i_d)}ϕ(y)| ≤ C'(K)|x − y|^α.

Hence, there is a positive constant C > 0 such that, for all (x, y) ∈ R^d × R^d,

|∂^β(fϕ)(y) − ∂^β(fϕ)(x)| ≤ C|y − x|^α.

This finishes the proof of the proposition.

Using the inclusions of the Hölder spaces given before, we obtain the corresponding inclusions of the local Hölder spaces, that is:

Proposition 2.2.3. Let s and t be non-integer positive real numbers. If s ≤ t, then

C^t_loc(R^d) ⊂ C^s_loc(R^d).

We should keep in mind from the previous proof that if f is an m times continuously differentiable function on R^d which satisfies: for all compact sets K of R^d and for all multi-indices β of Z_+^d such that |β| = m, there exists a constant C(K) > 0 such that

∀(x, y) ∈ K × K, |∂^β f(y) − ∂^β f(x)| ≤ C(K)|y − x|,

then for all α ∈ (0, 1), all compact sets K of R^d, all integers m' ≤ m and all multi-indices β of Z_+^d such that |β| = m', there exists a constant C(K) > 0 such that

∀(x, y) ∈ K × K, |∂^β f(y) − ∂^β f(x)| ≤ C(K)|y − x|^α.

Moreover, thanks to this characterization, in the definition of the local Hölder spaces we can restrict ourselves to the functions τ_k, where k is in N, such that

• τ_k ∈ D(R^d),
• τ_k(R^d) = [0, 1],
• τ_k ≡ 1 on B(0, k), and
• τ_k ≡ 0 on R^d\B(0, k + 1).

Finally, the space C^s(R^d), with s = m + α, m ∈ Z_+ and α ∈ (0, 1), equipped with the norm

||f|| = ∑_{|β|=m} sup_{x,y∈R^d} |∂^β f(x) − ∂^β f(y)| / |x − y|^α + ∑_{|β|≤m} sup_{x∈R^d} |∂^β f(x)|,

is a Banach space. Let M be a fixed positive real number; we define C^s((−M, M)^d) as the space of the functions f which are m times continuously differentiable on (−M, M)^d and satisfy, for all x, y ∈ (−M, M)^d and for every multi-index β in Z_+^d with |β| = m, the inequality

|∂^β f(x) − ∂^β f(y)| ≤ C|x − y|^α.

Then, equipped with the norm

||f|| = ∑_{|β|=m} sup_{x,y∈(−M,M)^d} |∂^β f(x) − ∂^β f(y)| / |x − y|^α + ∑_{|β|≤m} sup_{x∈(−M,M)^d} |∂^β f(x)|,

C^s((−M, M)^d) is a Banach space.

Remark 2.2.4. Thanks to the previous characterization, we have

f ∈ C^s_loc(R^d) ⇔ ∀M > 0, f ∈ C^s((−M, M)^d).
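As an example of an order greater than 1 (added here for illustration), take d = 1 and f(x) = x|x|. The function f is continuously differentiable with f'(x) = 2|x|, and f' is Lipschitz on R, so by the characterization above (together with Remark 2.1.5) f lies in C^{1+α}_loc(R) for every α ∈ (0, 1). On the other hand, f does not belong to C^s_loc(R) for any noninteger s > 2, since f' is not differentiable at 0.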


3 Variance of increments and connections to path Hölder regularity

In this section, to simplify our presentation, we focus on stochastic processes indexed by R, or by compact intervals of it; yet most of the results we present can be extended to the more general framework where R is replaced by R^d, the dimension d being an arbitrary positive integer. Roughly speaking, the main point is that the behavior of the variances of the first and second order increments of a Gaussian process is closely related to the almost sure Hölder regularity of its paths.

3.1 Derivatives for Gaussian processes

Definition 3.1.1. Let {X(t) : t ∈ R} be a random process. We say that it is a Gaussian process if, for all positive integers n, all real numbers λ_1, . . . , λ_n and all t_1, . . . , t_n in R, the random variable ∑_{p=1}^{n} λ_p X(t_p) has a Gaussian distribution.

Proposition 3.1.2. Let {X_n : n ∈ N} be a real-valued Gaussian process with mean zero and let X be a random variable such that the sequence (X_n)_{n∈N} converges in probability to X (hence, X is a mean zero Gaussian random variable). Then, for all p ∈ (0, ∞), we have the convergence lim_{n→+∞} ||X_n − X||_{L^p(Ω)} = 0.

Proof. As the process {X_n : n ∈ N} is Gaussian, for all integers m and n, the random variable X_m − X_n has a Gaussian distribution with mean zero. Moreover, we have

lim_{m→+∞} X_n − X_m = X_n − X

in probability, so for all integers n the random variable X_n − X also has a Gaussian distribution with mean zero. Let σ_n^2 be the variance of X_n − X; then its characteristic function is given, for all t in R, by

φ_n(t) = e^{−σ_n^2 t^2/2}.

But we know that lim_{n→+∞} X_n − X = 0 in probability, hence in law, so for all real t, lim_{n→+∞} φ_n(t) = E[e^{i0t}] = 1; by continuity of the logarithm at 1, we get lim_{n→+∞} σ_n^2 = 0 and hence

lim_{n→+∞} ||X_n − X||_{L^2(Ω)} = 0.


Next observe that, for any p ∈ (0, +∞), if σ_n^2 ≠ 0, one has

E[|X_n − X|^p] = σ_n^p E[(|X_n − X|/σ_n)^p].

As the random variable (X_n − X)/σ_n has a Gaussian law with mean zero and variance one, the quantity C(p) := E[(|X_n − X|/σ_n)^p] is a positive constant that only depends on p. If σ_n^2 = 0 then, almost surely, X_n = X, hence E[|X_n − X|^p] = 0. Let ε > 0; then there exists n_0 ∈ N such that, for all n ≥ n_0, σ_n < ε. Finally,

∀n ≥ n_0, E[|X_n − X|^p] ≤ C(p)ε^p.

As this is true for all ε > 0,

lim_{n→+∞} ||X_n − X||_{L^p(Ω)} = 0.

The next result can be found in [Adl81] (pages 26-27).

Definition 3.1.3. Let {X(t) : t ∈ R} be a second order centered real-valued random process. If, for some t ∈ R, the limit

∂X(t) = lim_{h→0} (X(t + h) − X(t))/h

exists in the space L^2(Ω), then ∂X(t) is called the mean square derivative of X at t. If this limit exists for each t in R, then X is said to possess a mean square derivative.

Proposition 3.1.4. Let {X(t) : t ∈ R} be a second order centered real-valued random process with covariance function R : R × R → R, (s, t) ↦ R(s, t). If for all t in R the quantity ∂_s∂_t R(t, t) exists and is finite, then X possesses a mean square derivative ∂X, and the covariance function of the derivative process is given, for all s and t in R, by ∂_s∂_t R(s, t).

Proof. Let t be in R, and let (h_n)_{n∈N} be any real sequence such that lim_{n→+∞} h_n = 0 and, for all n in N, h_n ≠ 0. Then the sequence (Y_n)_{n∈N} defined by Y_n = (X(t + h_n) − X(t))/h_n converges in quadratic mean if, and only if, the sequence (E[Y_n Y_m])_{(m,n)∈N×N} converges to a finite limit as m and n tend independently to infinity. For all m and n in N,

E[Y_n Y_m] = (R(t + h_n, t + h_m) − R(t, t + h_m) − R(t + h_n, t) + R(t, t)) / (h_m h_n),

which has a finite limit as m and n tend independently to infinity because the quantity ∂_s∂_t R(t, t) exists and is finite. This is true for every sequence (h_n)_{n∈N}, so lim_{h→0} (X(t + h) − X(t))/h exists in the space L^2(Ω). This statement is true for all t in R.

Then we can define the process {∂X(t) : t ∈ R}, which is a second order random process; let R̃ be its covariance function. By the Hölder inequality, for all s and t in R,

R̃(s, t) = E[∂X(s)∂X(t)] = lim_{n→∞} (1/h_n^2) E[(X(s + h_n) − X(s))(X(t + h_n) − X(t))] = ∂_s∂_t R(s, t),

which finishes the proof.

If the process {X(t) : t ∈ R} is a real-valued centered Gaussian process and is almost surely differentiable on R, that is, there is a measurable set Ω∗ of probability 1 such that, for all ω in Ω∗, the function X(., ω) is differentiable on R, we let X'(., ω) be its derivative function; if ω is not in Ω∗, we define X'(., ω) ≡ 0. We say that {X'(t) : t ∈ R} is the derivative process of {X(t) : t ∈ R}. This is a real-valued centered Gaussian process and, by Propositions 3.1.2 and 3.1.4, X' is a mean square derivative process of X, and its covariance function R_{X'} satisfies, for all s and t in R,

R_{X'}(s, t) = ∂_s∂_t R_X(s, t).
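As a concrete illustration (an example added here), consider a centered process whose covariance is the squared-exponential kernel R(s, t) = e^{−(s−t)^2/2}, which is a valid covariance function. Writing R(s, t) = ρ(s − t) with ρ(τ) = e^{−τ^2/2}, a direct computation gives

∂_s∂_t R(s, t) = −ρ''(s − t) = (1 − (s − t)^2) e^{−(s−t)^2/2},

which exists and is finite at every point; by Proposition 3.1.4, such a process therefore possesses a mean square derivative, and the covariance function of the derivative process is (1 − (s − t)^2) e^{−(s−t)^2/2}.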

3.2 Kolmogorov continuity Theorem and related results

This paragraph details results from [CL04] about the regularity of sample paths. The first ones concern the continuity of the sample paths and the second ones their differentiability (paragraph 4.2 of [CL04]); their corollaries for Gaussian processes (called normal processes in [CL04]) can be found in paragraph 9.4. We finish this paragraph with results about the Hölder regularity of the trajectories.

Proposition 3.2.1. Let {X(t), t ∈ [0, 1]} be a stochastic process. Suppose that for all t and t + h in the interval [0, 1],

P(|X(t + h) − X(t)| ≥ g(h)) ≤ q(h),

where g and q are positive even functions of h, nonincreasing in a neighborhood of 0, and such that

∑_{n=1}^{+∞} g(2^{−n}) < +∞ and ∑_{n=1}^{+∞} 2^n q(2^{−n}) < +∞.

Then, there exists a modification {Y(t), t ∈ [0, 1]}, that is, a stochastic process such that for all t there exists a measurable set Ω(t) with P(Ω(t)) = 1 and, for each ω in Ω(t), X(t, ω) = Y(t, ω); moreover the sample paths of the process {Y(t), t ∈ [0, 1]} are, with probability 1, continuous on [0, 1], that is, there exists a measurable set Ω∗ such that P(Ω∗) = 1 and, for each ω in Ω∗, the function t ↦ Y(t, ω), defined on [0, 1], is continuous on [0, 1].

As a special case, using the Markov inequality, we get the next result.

Corollary 3.2.2. Let {X(t), t ∈ [0, 1]} be a stochastic process. Suppose that for all t and t + h in the interval [0, 1],

E[|X(t + h) − X(t)|^p] ≤ K|h| / |log|h||^{1+r},

where p < r and K are positive constants. Then, there exists a modification {Y(t), t ∈ [0, 1]} of {X(t), t ∈ [0, 1]} whose sample paths are, with probability 1, continuous on [0, 1].

Proposition 3.2.3. Let {X(t), t ∈ [0, 1]} be a stochastic process. Suppose that for all t and t + h in the interval [0, 1],

P(|X(t + h) − X(t)| ≥ g(h)) ≤ q(h),

and that for all t − h, t and t + h in the interval [0, 1],

P(|X(t + h) + X(t − h) − 2X(t)| ≥ g_1(h)) ≤ q_1(h),

where g, q, g_1 and q_1 are positive even functions of h, nonincreasing in a neighborhood of 0, and such that

∑_{n=1}^{+∞} g(2^{−n}) < +∞ and ∑_{n=1}^{+∞} 2^n q(2^{−n}) < +∞,

and

∑_{n=1}^{+∞} 2^n g_1(2^{−n}) < +∞ and ∑_{n=1}^{+∞} 2^n q_1(2^{−n}) < +∞.

Then, there exists a modification {Y(t), t ∈ [0, 1]} of {X(t), t ∈ [0, 1]} whose sample paths are, with probability 1, continuously differentiable, that is, there exists a measurable set Ω∗ such that P(Ω∗) = 1 and, for each ω in Ω∗, the function t ↦ Y(t, ω), defined on [0, 1], is continuously differentiable on [0, 1].

As in the case of continuity, we have more convenient hypotheses yielding the same kind of result.

Corollary 3.2.4. Let {X(t), t ∈ [0, 1]} be a stochastic process. Suppose that for all t and t + h in the interval [0, 1],

E[|X(t + h) − X(t)|^p] ≤ K|h| / |log|h||^{1+r}

and

E[|X(t + h) + X(t − h) − 2X(t)|^{p_1}] ≤ K|h|^{1+p_1} / |log|h||^{1+r_1},

where p < r, p_1 < r_1 and K are positive constants. Then, there exists a modification {Y(t), t ∈ [0, 1]} of {X(t), t ∈ [0, 1]} whose sample paths are, with probability 1, continuously differentiable on [0, 1].

As, for all a > 0 and b ∈ R,

lim_{x↓0} x^a (log|x|)^b = 0,

the previous conditions can be replaced by

E[|X(t + h) − X(t)|^p] ≤ K|h|^{1+α} and E[|X(t + h) + X(t − h) − 2X(t)|^{p_1}] ≤ K'|h|^{1+p_1+α_1},

for some positive K, K', α, α_1, p and p_1. For a Gaussian process, these assumptions can be rewritten.

Corollary 3.2.5. Let {X(t), t ∈ [0, 1]} be a mean zero, real-valued Gaussian process satisfying, for some constants K > 0, α > 0, α_1 > 0 and for all t, t + h, t − h in [0, 1],

E[|X(t + h) − X(t)|^2] ≤ K|h|^{2α}

and

E[|X(t + h) + X(t − h) − 2X(t)|^2] ≤ K|h|^{2+α_1}.

Then, there exists a modification {Y(t), t ∈ [0, 1]} of {X(t), t ∈ [0, 1]} whose sample paths are, with probability 1, continuously differentiable on [0, 1].

For a mean zero, real-valued Gaussian process defined on R, we get the following result.

Corollary 3.2.6. Let {X(t), t ∈ R} be a mean zero, real-valued Gaussian process satisfying: for each fixed M > 0, there are constants K(M) > 0, α(M) > 0, α_1(M) > 0 such that, for all t, t + h, t − h in [−M, M],

E[|X(t + h) − X(t)|^2] ≤ K(M)|h|^{2α(M)}

and

E[|X(t + h) + X(t − h) − 2X(t)|^2] ≤ K(M)|h|^{2+α_1(M)}.

Then, there exists a modification {Y(t), t ∈ R} of {X(t), t ∈ R} whose sample paths are, with probability 1, continuously differentiable on R.

Finally, the Kolmogorov-Čentsov Theorem provides a result about the Hölder regularity of the paths of a random process.

Proposition 3.2.7 (Kolmogorov-Čentsov Theorem). Let {X(t) : t ∈ [0, 1]} be a real-valued random process. Suppose that there are positive constants α, β and C such that, for all t and t + h in [0, 1],

E[|X(t) − X(t + h)|^α] ≤ C|h|^{1+β}.

Then there exist a modification {Y(t) : t ∈ [0, 1]} of the process {X(t) : t ∈ [0, 1]} and a measurable set Ω∗ of probability 1 such that, for all γ ∈ (0, β/α) and ω ∈ Ω∗, there is a constant C(ω) > 0 such that, for all t and t + h in [0, 1],

|Y(t, ω) − Y(t + h, ω)| ≤ C(ω)|h|^γ.

For a Gaussian process defined on [0, 1] we get the next corollaries.

Corollary 3.2.8. Let {X(t) : t ∈ [0, 1]} be a mean zero, real-valued Gaussian process. Suppose that there are positive constants s and C such that, for all t and t + h in [0, 1],

E[|X(t) − X(t + h)|^2] ≤ C|h|^{2s}.

Then there exist a modification {Y(t) : t ∈ [0, 1]} of the process {X(t) : t ∈ [0, 1]} and a measurable set Ω∗ of probability 1 such that, for all γ ∈ (0, s) and ω ∈ Ω∗, there is a constant C(ω) > 0 such that, for all t and t + h in [0, 1],

|Y(t, ω) − Y(t + h, ω)| ≤ C(ω)|h|^γ.

For a mean zero, real-valued Gaussian process {X(t) : t ∈ [−M, M]}, M > 0, considering the process {Y(t) : t ∈ [0, 1]} defined for all ω ∈ Ω and t ∈ [0, 1] by Y(t, ω) = X(M(2t − 1), ω), we get the following result.

Corollary 3.2.9. Let {X(t) : t ∈ [−M, M]} be a mean zero, real-valued Gaussian process. Suppose that there are positive constants s and C such that, for all t and t + h in [−M, M],

E[|X(t) − X(t + h)|^2] ≤ C|h|^{2s}.

Then there exist a modification {Y(t) : t ∈ [−M, M]} of the process {X(t) : t ∈ [−M, M]} and a measurable set Ω∗ of probability 1 such that, for all γ ∈ (0, s) and ω ∈ Ω∗, there is a constant C(ω) > 0 such that, for all t and t + h in [−M, M],

|Y(t, ω) − Y(t + h, ω)| ≤ C(ω)|h|^γ.

When {X(t), t ∈ [0, 1]} is a mean zero real-valued Gaussian process, we are thus only interested in the variances of the increments of order 1 or 2, that is, for all t, t + h and t − h in [0, 1], E[|X(t) − X(t + h)|^2] and E[|X(t + h) + X(t − h) − 2X(t)|^2]. Let us denote by R the covariance function of the process {X(t), t ∈ [0, 1]}. As the process is centered, the variance of the increments, of order 1 for instance, can be expressed with R. Indeed,

E[|X(t) − X(t + h)|^2] = (R(t + h, t + h) − R(t + h, t)) − (R(t, t + h) − R(t, t)),

where increments of order 1 appear in the first and in the second variable of R. If R is a local Hölder function, then we can hope for inequalities of the kind

E[|X(t) − X(t + h)|^2] ≤ C|h|^{2s},

with positive constants C and s. This is what we study in the next paragraph.
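As a basic example of this situation (added here for illustration), take standard Brownian motion on [0, 1], whose covariance is R(s, t) = min(s, t). For t ∈ [0, 1] and h > 0 the identity above gives

E[|X(t) − X(t + h)|^2] = ((t + h) − t) − (t − t) = h = |h|^{2s} with s = 1/2,

so Corollary 3.2.8 provides a modification whose sample paths are, almost surely, Hölder of every order γ < 1/2.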

3.3 Strong versions of the Kolmogorov continuity Theorem

Let {X(t), t ∈ R} be a mean zero real-valued Gaussian process. We denote by R_X its covariance function, defined by

∀t_1, t_2 ∈ R, R_X(t_1, t_2) = Cov(X(t_1), X(t_2)) = E[X(t_1)X(t_2)].

In the sequel, we detail conditions on this covariance function allowing one to conclude about the almost sure local Hölder regularity of the sample paths. We begin with two lemmas.

Lemma 3.3.1. Let s be a constant in (0, 1)\{1/2} and suppose that R_X ∈ C^{2s}_loc(R^2). Then there exist a modification {Z(t), t ∈ R} of {X(t), t ∈ R} and a measurable set Ω∗ such that P(Ω∗) = 1 and, for all ε > 0 small enough and each ω in Ω∗, the function

Z(., ω) : R → R, t ↦ Z(t, ω)

lies in C^{s−ε}_loc(R).


Proof. Let k be in N and work with the process {Y_k(t), t ∈ R} defined by

∀ω ∈ Ω, ∀t ∈ R, Y_k(t, ω) = τ_k(t)X(t, ω).

Then for all t ∈ [−k, k] we have Y_k(t) = X(t). The process {Y_k(t), t ∈ R} is Gaussian, centered and real-valued; its covariance function is given, for all s and t in R, by

Cov(Y_k(s), Y_k(t)) = τ_k(s)τ_k(t)R_X(s, t).

As the function τ_k is in D(R), the function (s, t) ↦ τ_k(s)τ_k(t) lies in D(R^2). Since R_X ∈ C^{2s}_loc(R^2), the covariance function of the process {Y_k(t), t ∈ R} is in C^{2s}(R^2). In order to use the Kolmogorov-Čentsov theorem, we compute, for all t and t + h in [−k, k], E[|Y_k(t + h) − Y_k(t)|^2].

There are two cases. Suppose s ∈ (0, 1/2); then 2s ∈ (0, 1) and, using the fact that R_{Y_k} ∈ C^{2s}(R^2), we have

E[|Y_k(t + h) − Y_k(t)|^2] = (R_{Y_k}(t + h, t + h) − R_{Y_k}(t + h, t)) − (R_{Y_k}(t, t + h) − R_{Y_k}(t, t))
  ≤ |R_{Y_k}(t + h, t + h) − R_{Y_k}(t + h, t)| + |R_{Y_k}(t, t + h) − R_{Y_k}(t, t)|
  ≤ C_k|(h, 0)|^{2s} + C_k|(0, h)|^{2s}
  ≤ 2C_k|h|^{2s},

where C_k is a positive constant. In the second case, s ∈ (1/2, 1), so 2s ∈ (1, 2) and, using the fact that ∂_{t_2}R_{Y_k} ∈ C^{2s−1}(R^2), we have

E[|Y_k(t + h) − Y_k(t)|^2] = (R_{Y_k}(t + h, t + h) − R_{Y_k}(t + h, t)) − (R_{Y_k}(t, t + h) − R_{Y_k}(t, t))
  = ∫_{t}^{t+h} [∂_{t_2}R_{Y_k}(t + h, u) − ∂_{t_2}R_{Y_k}(t, u)] du
  ≤ ∫_{t}^{t+h} |∂_{t_2}R_{Y_k}(t + h, u) − ∂_{t_2}R_{Y_k}(t, u)| du
  ≤ ∫_{t}^{t+h} C'_k|(h, 0)|^{2s−1} du
  ≤ C'_k|h|^{2s},

where C'_k is a positive constant. In all cases there exists a constant C''_k > 0 such that, for all t and t + h in [−k, k],

E[|Y_k(t + h) − Y_k(t)|^2] ≤ C''_k|h|^{2s}.

Hence, for every integer k, by the Kolmogorov-Čentsov theorem there exist a modification {Z_k(t), t ∈ [−k, k]} of Y_k and a measurable set Ω∗(k), with P(Ω∗(k)) = 1, such that for all ε > 0 small enough and each ω ∈ Ω∗(k), there is C(ω, k) < +∞ such that, for all t and t + h in [−k, k],

|Z_k(t + h, ω) − Z_k(t, ω)| ≤ C(ω, k)|h|^{s−ε}.

Since N is countable, Ω∗_1 = ∩_{k∈N} Ω∗(k) is a measurable set such that P(Ω∗_1) = 1. Let ω ∈ Ω∗_1; the previous inequality proves that, for all integers k, the function Z_k(., ω) is continuous on [−k, k]. Moreover, for all integers k and all t ∈ [−k, k], by definition of the process Y_k, Z_k(t, ω) = Y_k(t, ω) = X(t, ω) a.s. Hence, by continuity, we can define the process {Z(t), t ∈ R} by, for all t ∈ R and ω ∈ Ω∗ (Ω∗ is a measurable set of probability 1, included in Ω∗_1),

Z(t, ω) = Z_k(t, ω) when t ∈ [−k, k],

and, when ω ∉ Ω∗, Z(t, ω) = 0 for all real t. So the process Z is a modification of the process X.

Let ω ∈ Ω∗ and let K be a compact set of R. There exists an integer k such that K ⊂ [−k, k], and there exists C(ω, K) < +∞ such that, for all t and t + h in K ⊂ [−k, k],

|Z_k(t + h, ω) − Z_k(t, ω)| ≤ C(ω, K)|h|^{s−ε}.

Thus, combining the latter inequality with the definition of Z, we get, for all t and t + h in K,

|Z(t + h, ω) − Z(t, ω)| ≤ C(ω, K)|h|^{s−ε};

then, using the characterization of the local Hölder spaces, we have, almost surely, for all ε > 0 small enough, that the function

Z(.) : R → R, t ↦ Z(t)

lies in C^{s−ε}_loc(R).

In fact, we have proved the following corollary, which we will also call the Kolmogorov-Čentsov Theorem in all the sequel.

Corollary 3.3.2 (Kolmogorov-Čentsov Theorem). Let {X(t) : t ∈ R} be a mean zero, real-valued Gaussian process. Suppose that there is a positive constant s and, for each M > 0, a constant C(M) such that, for all t and t + h in [−M, M],

E[|X(t) − X(t + h)|^2] ≤ C(M)|h|^{2s}.

Then there exist a modification {Y(t) : t ∈ R} of the process {X(t) : t ∈ R} and a measurable set Ω∗ of probability 1 such that, for every compact set K of R, each γ ∈ (0, s) and ω ∈ Ω∗, there is a constant C(K, ω) > 0 such that, for all t and t + h in K,

|Y(t, ω) − Y(t + h, ω)| ≤ C(K, ω)|h|^γ,

that is, the function Y(., ω) is in C^γ_loc(R).

Lemma 3.3.3. Let s be a constant in (1, 2)\{3/2} and suppose that R_X ∈ C^{2s}_loc(R^2). Then there exist a modification {Z(t), t ∈ R} of {X(t), t ∈ R} and a measurable set Ω∗ such that P(Ω∗) = 1 and, for all ε > 0 small enough and all ω in Ω∗, the function

Z(., ω) : R → R, t ↦ Z(t, ω)

lies in C^{s−ε}_loc(R).

Proof. Using the inclusions of the local Hölder spaces, for all δ > 0 small enough, the covariance function of X satisfies R_X ∈ C^{2−δ}_loc(R^2); hence, by Lemma 3.3.1, there exists a modification {Z(t), t ∈ R} of the process X such that, for all ε > 0 small enough, the sample paths of Z lie in C^{1−ε}_loc(R) almost surely. Since Z is a modification of X, both processes have the same finite-dimensional distributions, so in the sequel we work with the process Z and R_Z (= R_X).

Let M > 0. In order to use the differentiability criterion in the Gaussian case, we compute, for all t, t + h and t − h in [−M, M],

A(t, h) := E[|Z(t + h) − Z(t)|^2] and B(t, h) := E[|Z(t + h) + Z(t − h) − 2Z(t)|^2].

By the inclusions of the local Hölder spaces, R_Z ∈ C^{1/2}_loc(R^2); as [−M, M]^2 is a compact set of R^2, there exists K(M) > 0 such that, for all t, t + h in [−M, M],

A(t, h) = (R_Z(t + h, t + h) − R_Z(t + h, t)) − (R_Z(t, t + h) − R_Z(t, t)) ≤ K(M)|h|^{1/2}.

Then, for all t, t + h and t − h in [−M, M],

B(t, h) = R_Z(t + h, t + h) + R_Z(t − h, t − h) + 4R_Z(t, t) − 4R_Z(t, t + h) − 4R_Z(t, t − h) + 2R_Z(t + h, t − h)
  = ∫_{t−h}^{t} ∫_{t}^{t+h} [∂_{t_1}∂_{t_2}R_Z(y, x + h) − ∂_{t_1}∂_{t_2}R_Z(y, x)] dy dx
    + ∫_{t−h}^{t} ∫_{t−h}^{t} [∂_{t_1}∂_{t_2}R_Z(y, x) − ∂_{t_1}∂_{t_2}R_Z(y + h, x)] dy dx.

As in the previous proof, there are two cases. In the first one, s ∈ (1, 3/2), so 2s ∈ (2, 3) and, by hypothesis, ∂_{t_1}∂_{t_2}R_Z ∈ C^{2s−2}_loc(R^2); hence

B(t, h) ≤ ∫_{t−h}^{t} ∫_{t}^{t+h} |∂_{t_1}∂_{t_2}R_Z(y, x + h) − ∂_{t_1}∂_{t_2}R_Z(y, x)| dy dx
    + ∫_{t−h}^{t} ∫_{t−h}^{t} |∂_{t_1}∂_{t_2}R_Z(y, x) − ∂_{t_1}∂_{t_2}R_Z(y + h, x)| dy dx
  ≤ C(M) ∫_{t−h}^{t} ∫_{t}^{t+h} |(0, h)|^{2s−2} dy dx + C(M) ∫_{t−h}^{t} ∫_{t−h}^{t} |(−h, 0)|^{2s−2} dy dx
  ≤ 2C(M)|h|^{2s},

where C(M) is a positive constant. In the second case, s ∈ (3/2, 2), so ∂_{t_1}∂^2_{t_2}R_Z ∈ C^{2s−3}_loc(R^2). Hence, using the symmetry of R_Z, we have

B(t, h) = ∫_{t−h}^{t} ∫_{t−h}^{t} ∫_{0}^{h} [∂_{t_1}∂^2_{t_2}R_Z(y + h, x + z) − ∂_{t_1}∂^2_{t_2}R_Z(y, x + z)] dz dy dx
  ≤ ∫_{t−h}^{t} ∫_{t−h}^{t} ∫_{0}^{h} |∂_{t_1}∂^2_{t_2}R_Z(y + h, x + z) − ∂_{t_1}∂^2_{t_2}R_Z(y, x + z)| dz dy dx
  ≤ C'(M) ∫_{t−h}^{t} ∫_{t−h}^{t} ∫_{0}^{h} |(h, 0)|^{2s−3} dz dy dx
  ≤ C'(M)|h|^{2s},

where C'(M) is a positive constant. In all cases, we have

B(t, h) ≤ C''(M)|h|^{2+2(s−1)},

with s − 1 > 0 and C''(M) > 0. Hence, by Corollary 3.2.6, there exists a modification of Z whose sample paths are almost surely continuously differentiable on R. As both this modification and Z have continuous paths, they are indistinguishable, so Z itself has sample paths almost surely continuously differentiable on R. That is, there exists a measurable set Ω∗ of probability 1 such that, for all ω ∈ Ω∗, the function Z(., ω) is continuously differentiable on R; and the covariance function of the derivative process Z' satisfies R_{Z'} = ∂_{t_1}∂_{t_2}R_Z.

Then we use the Kolmogorov-Čentsov Theorem; there are two cases. Let M > 0 be fixed. If s ∈ (1, 3/2), then 2(s − 1) ∈ (0, 1) and, using the fact that ∂_{t_1}∂_{t_2}R_Z ∈ C^{2(s−1)}_loc(R^2), there is a constant C(M) > 0 such that, for all t and t + h in [−M, M],

E[|Z'(t + h) − Z'(t)|^2] = (∂_{t_1}∂_{t_2}R_Z(t + h, t + h) − ∂_{t_1}∂_{t_2}R_Z(t + h, t)) − (∂_{t_1}∂_{t_2}R_Z(t, t + h) − ∂_{t_1}∂_{t_2}R_Z(t, t))
  ≤ |∂_{t_1}∂_{t_2}R_Z(t + h, t + h) − ∂_{t_1}∂_{t_2}R_Z(t + h, t)| + |∂_{t_1}∂_{t_2}R_Z(t, t + h) − ∂_{t_1}∂_{t_2}R_Z(t, t)|
  ≤ C(M)(|(0, h)|^{2(s−1)} + |(0, h)|^{2(s−1)})
  ≤ 2C(M)|h|^{2(s−1)}.

In the second case, s ∈ (3/2, 2), so 2(s − 1) − 1 ∈ (0, 1) and, using the fact that ∂_{t_1}∂^2_{t_2}R_Z ∈ C^{2(s−1)−1}_loc(R^2), there is a constant C'(M) > 0 such that

E[|Z'(t + h) − Z'(t)|^2] ≤ ∫_{t}^{t+h} |∂_{t_1}∂^2_{t_2}R_Z(t + h, u) − ∂_{t_1}∂^2_{t_2}R_Z(t, u)| du
  ≤ ∫_{t}^{t+h} C'(M)|(h, 0)|^{2(s−1)−1} du
  ≤ C'(M)|h|^{2(s−1)}.

In all cases there exists a constant C''(M) > 0 such that, for all t and t + h in [−M, M],

E[|Z'(t + h) − Z'(t)|^2] ≤ C''(M)|h|^{2(s−1)}.

Hence, by Corollary 3.3.2, there exist a modification {Z̃(t), t ∈ R} of Z' and a measurable set Ω∗_1 such that P(Ω∗_1) = 1 and, for each compact set K of R and each ω ∈ Ω∗_1, there is C(ω, K) < +∞ such that, for all t and t + h in K,

|Z̃(t + h, ω) − Z̃(t, ω)| ≤ C(ω, K)|h|^{(s−1)−ε},

for all ε > 0 small enough. The processes Z̃ and Z' have almost surely continuous paths, so, almost surely,

∀t ∈ R, Z̃(t) = Z'(t).

So {Z(t), t ∈ R} is a modification of {X(t), t ∈ R} such that, almost surely, Z(.) is continuously differentiable and Z'(.) ∈ C^{s−1−ε}_loc(R), that is, Z(.) ∈ C^{s−ε}_loc(R), for all ε > 0 small enough. This finishes the proof of the lemma.

Theorem 3.3.4. Let s be a positive noninteger constant with 2s ∉ N, and suppose that R_X ∈ C^{2s}_loc(R^2). Then there exist a modification {Y(t), t ∈ R} of {X(t), t ∈ R} and a measurable set Ω∗ such that P(Ω∗) = 1 and, for all ε > 0 small enough and all ω in Ω∗, the function

Y(., ω) : R → R, t ↦ Y(t, ω)

lies in C^{s−ε}_loc(R).

Proof. Let s = n + α, where n lies in {0, 1, . . . } and α ∈ (0, 1). By Lemmas 3.3.1 and 3.3.3, we can suppose n ≥ 2; then, using the inclusion result for the local Hölder spaces, there exists a modification {Y(t), t ∈ R} of {X(t), t ∈ R} whose sample paths lie almost surely in C^{2−ε}_loc(R), for all ε small enough. Moreover, the derivative process {Y'(t), t ∈ R} is a Gaussian process such that

R_{Y'} ∈ C^{2s'}_loc(R^2),

with s' = n − 1 + α, where n − 1 lies in {0, 1, . . . } and α ∈ (0, 1). By induction on n, there is a modification {Z(t), t ∈ R} of Y' such that, almost surely, for all ε > 0 small enough, its sample paths lie in C^{s'−ε}_loc(R). As Y' is almost surely continuous, we have, almost surely, Y'(t) = Z(t) for all t ∈ R. Hence, almost surely, for all ε > 0 small enough, the sample paths of the process Y, a modification of the process X, are in C^{s'+1−ε}_loc(R) = C^{s−ε}_loc(R).

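For example (an illustration added here), consider the squared-exponential covariance R_X(s, t) = e^{−(s−t)^2/2} used as an example at the end of Section 3.1. It is infinitely differentiable on R^2 and all its partial derivatives are bounded, so R_X ∈ C^{2s}_loc(R^2) for every noninteger s with 2s ∉ N; Theorem 3.3.4 then provides, for each such s, a modification of the corresponding centered Gaussian process whose sample paths lie almost surely in C^{s−ε}_loc(R) for all ε > 0 small enough.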

4 Tightness criteria

In all the sequel, M denotes an arbitrary fixed positive constant. The presentation we detail is based on [Bil68] (pages 55-56 and 95). Let (P_n)_{n∈N} be a sequence of probability measures on C, the Borel σ-algebra of C([−M, M]^d), the Banach space of the real-valued continuous functions on [−M, M]^d equipped with the uniform norm. Notice that C is generated by the cylinders, that is, the finite-dimensional sets of the form

{y ∈ C([−M, M]^d) : (y(t_1), . . . , y(t_n)) ∈ A_1 × . . . × A_n},

where n ∈ N, t_1, . . . , t_n ∈ [−M, M]^d, and A_1, . . . , A_n belong to B(R), the Borel σ-algebra of R. Also notice that the weak convergence of (P_n)_{n∈N} in the sense of the finite-dimensional distributions (i.e. when one restricts to cylinders) does not necessarily imply its weak convergence in C([−M, M]^d); generally speaking, for this to be the case the sequence (P_n)_{n∈N} has to satisfy an additional condition, typically weak relative compactness, i.e. every infinite (countable) subset of (P_n)_{n∈N} contains a subsequence that converges weakly in C([−M, M]^d). Usually, it is not very convenient to establish weak relative compactness by making a direct use of its definition; therefore, more convenient criteria have been introduced. In this section we present some of them, which rely on the notion of tightness.

Definition 4.1. Let S be a metric space equipped with the Borel σ-algebra S = B(S). We say that a probability measure P on S is tight if for each ε > 0 there exists a compact set K of S such that P(K) > 1 − ε.

This notion can be extended to a family of probability measures in the following way.

Definition 4.2. A family Π of probability measures on a metric space (S, S) is said to be tight if for each ε > 0 there exists a compact set K such that

∀ P ∈ Π, P(K) > 1 − ε.

The following important theorem shows that the notions of tightness and relative compactness are closely connected.

Theorem 4.3 (Prohorov Theorem). If a family of probability measures is tight, then it is relatively compact. Moreover, the converse is true when S is separable and complete.

As C([−M, M]^d) is separable and complete, if a family of probability measures on it is relatively compact, then it is tight.

By the Prohorov Theorem, in the case of (C([−M, M]^d), C), weak relative compactness and tightness are therefore equivalent notions; so in the sequel we focus on the latter notion. Let (P_n)_{n∈N} be a sequence of probability measures on (C([−M, M]^d), C). For x in C([−M, M]^d) and δ ∈ (0, 1) we define

w_x(δ) = sup{|x(s) − x(t)| : s, t ∈ [−M, M]^d, ∀l = 1, . . . , d, |s_l − t_l| ≤ δ}.

Proposition 4.4. Let (P_n)_{n∈N} be a sequence of probability measures on the measurable space (C([−M, M]^d), C). Then this sequence is tight if the two following conditions hold:

1. for each positive η, there exists a constant a such that

∀n ∈ N, P_n(x : |x(0)| > a) ≤ η;

2. for each positive ε and η, there exist constants a > 0 and δ ∈ (0, 1), and an integer n_0, such that

∀n ≥ n_0, P_n(x : w_x(δ) ≥ ε) ≤ η.

In the sequel, for each t = (t_1, . . . , t_d) ∈ [−M, M]^d, we set

∆(t, δ) = (∏_{l=1}^{d} [t_l, t_l + δ]) ∩ [−M, M]^d.

Corollary 4.5. Let (P_n)_{n∈N} be a sequence of probability measures on the measurable space (C([−M, M]^d), C). Then it is tight if the two following conditions hold:

1. for each η > 0, there exists a > 0 such that

∀n ∈ N, P_n(x : |x(0)| > a) ≤ η;

2. for every ε > 0 and η > 0, there exist δ ∈ (0, 1) and n_0 ∈ N such that, for all t ∈ [−M, M]^d,

∀n ≥ n_0, (1/δ^d) P_n(x : sup_{s∈∆(t,δ)} |x(s) − x(t)| ≥ ε) ≤ η.


Proof. Let

A_t = {x : sup_{s∈∆(t,δ)} |x(s) − x(t)| ≥ ε}.

Let s and t be in [−M, M]^d; there exist a unique (i_1(s), . . . , i_d(s)) ∈ Z^d and a unique (i_1(t), . . . , i_d(t)) ∈ Z^d such that

s ∈ ∏_{l=1}^{d} [i_l(s)δ, (i_l(s) + 1)δ) and t ∈ ∏_{l=1}^{d} [i_l(t)δ, (i_l(t) + 1)δ).

In fact, we also have the following inequalities: for all l ∈ {1, . . . , d}, |i_l(s)δ| ≤ M + 1 and |i_l(t)δ| ≤ M + 1. If we make the assumption that, for all l ∈ {1, . . . , d}, |s_l − t_l| ≤ δ, then by definition

∀l ∈ {1, . . . , d}, i_l(s) = ±1 + i_l(t) or i_l(s) = i_l(t).

We now estimate |x(s) − x(t)|:

|x(s) − x(t)| ≤ |x(s) − x(i_1(s)δ, . . . , i_d(s)δ)| (= µ_1)
  + |x(i_1(s)δ, . . . , i_d(s)δ) − x(i_1(t)δ, . . . , i_d(t)δ)| (= µ_2)
  + |x(i_1(t)δ, . . . , i_d(t)δ) − x(t)| (= µ_3);

hence, if |x(s) − x(t)| ≥ 3ε, then there exists i ∈ {1, 2, 3} such that µ_i ≥ ε. Suppose now that w_x(δ) ≥ 3ε; then, by continuity of x, there exist s, t ∈ [−M, M]^d such that ∀l = 1, . . . , d, |s_l − t_l| ≤ δ and |x(s) − x(t)| ≥ 3ε, and finally we have the inclusion

{x : w_x(δ) ≥ 3ε} ⊂ ∪_{(i_1,...,i_d)∈Υ_M} {x : sup_{s∈∆((i_1δ,...,i_dδ),δ)} |x(s) − x(i_1δ, . . . , i_dδ)| ≥ ε},

where

Υ_M = Z^d ∩ [−δ^{−1}(M + 1), δ^{−1}(M + 1)]^d.

Hence, for all n ≥ n_0,

P_n(x : w_x(δ) ≥ 3ε) ≤ ∑_{(i_1,...,i_d)∈Υ_M} P_n(x : sup_{s∈∆((i_1δ,...,i_dδ),δ)} |x(s) − x(i_1δ, . . . , i_dδ)| ≥ ε).

Moreover, one has by hypothesis

P_n(x : sup_{s∈∆((i_1δ,...,i_dδ),δ)} |x(s) − x(i_1δ, . . . , i_dδ)| ≥ ε) ≤ δ^d η;

therefore, denoting by [·] the integer part function and by Card(A) the number of elements of a finite set A included in Z^d, one has, for all n ≥ n_0,

P_n(x : w_x(δ) ≥ 3ε) ≤ Card(Υ_M) δ^d η ≤ (1 + 2[(M + 1)/δ])^d δ^d η ≤ (1 + 2(M + 1))^d η.

This finishes the proof of the corollary.

Definition 4.6. Let (X_n)_{n∈N} be a sequence of real-valued random variables defined on (Ω, F, P). We say that (X_n)_{n∈N} is bounded in probability if, for every positive ε, there exists a positive M_ε such that, for all integers n, P(|X_n| > M_ε) < ε.

Let (X_n)_{n∈N} be a sequence of (C([−M, M]^d), C)-valued random variables defined on (Ω, F, P). For each n, let P_n be the probability distribution of X_n. The sequence (X_n)_{n∈N} is bounded in probability if, and only if, the sequence (P_n)_{n∈N} is tight.

Proposition 4.7. If the P_n's are the probability distributions of (C([−M, M]^d), C)-valued random variables X_n defined on the same probability space (Ω, F, P), then the sequence (P_n)_{n∈N} is tight if the following two conditions are satisfied:

1. the sequence of the real-valued random variables (X_n(0))_{n∈N} is bounded in probability;

2. there are constants C > 0, γ ≥ 0 and α > 0 such that, for all integers n, all t_1 and t_2 in [−M, M]^d and each positive λ,

P(|X_n(t_2) − X_n(t_1)| ≥ λ) ≤ (C/λ^γ)|t_2 − t_1|^{d+α}.

Thanks to the Markov inequality, in order to show that the latter condition holds, it is sufficient to prove the existence of γ ≥ 0, α > 0 and C > 0 such that, for all n ∈ N and (t_1, t_2) ∈ [−M, M]^d × [−M, M]^d, one has

E[|X_n(t_1) − X_n(t_2)|^γ] ≤ C|t_1 − t_2|^{d+α};

in the Gaussian case, this boils down to proving that there are β > 0 and C' > 0 for which

E[|X_n(t_1) − X_n(t_2)|^2] ≤ C'|t_1 − t_2|^β

is satisfied, for any n ∈ N and (t_1, t_2) ∈ [−M, M]^d × [−M, M]^d.
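To make the last reduction explicit (a step left implicit in the text): if G is a centered Gaussian random variable, then E[|G|^γ] = c_γ (E[G^2])^{γ/2} for every γ > 0, where c_γ denotes the γ-th absolute moment of a standard Gaussian variable. Hence, if E[|X_n(t_1) − X_n(t_2)|^2] ≤ C'|t_1 − t_2|^β, choosing any γ > 2d/β gives

E[|X_n(t_1) − X_n(t_2)|^γ] ≤ c_γ (C')^{γ/2} |t_1 − t_2|^{βγ/2} = c_γ (C')^{γ/2} |t_1 − t_2|^{d+α}, with α = βγ/2 − d > 0,

which is exactly the moment bound to which the Markov inequality is applied.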

References

[Adl81] R. J. Adler. The Geometry of Random Fields. John Wiley and Sons, 1981.

[Bil68] P. Billingsley. Convergence of Probability Measures. John Wiley and Sons, 1968.

[CL04] H. Cramér and M. Leadbetter. Stationary and Related Stochastic Processes: Sample Function Properties and Their Applications. Dover Publications, Inc., Mineola, 2004.
