Limit theorems for p-Variations of stable processes
– Diplomarbeit –
Humboldt-Universität zu Berlin
Mathematisch-Naturwissenschaftliche Fakultät II
Institut für Mathematik
submitted by Claudia Hein
1 Introduction
2 Stochastic processes
2.1 Lévy processes
2.1.1 Definition and characterisation
2.1.2 Examples and stable processes
2.1.3 Properties of stable processes
2.2 Convergence of processes
2.2.1 The Skorokhod topology
2.2.2 Criteria for convergence
3 p-variation
3.1 p-variation
3.1.1 Definition and examples
3.1.2 Finiteness in case of stable processes
3.2 Limit behaviour of the Brownian motion
4 The results – p-variation of stable processes
4.1 Central limit theorem for random variables with infinite variance
4.2 p-variation for stable processes
4.3 Adding processes
5 Proofs for section 4.2
5.1 Proof of theorem 4.6 - finite dimensional distributions
5.2 Proof of theorem 4.6 - tightness
1 Introduction
Modelling the climate has been of great interest lately. Paleoclimatic data is very helpful for the understanding of its dynamics. In particular, time series obtained from the Greenland ice core are of interest; finding a suitable model for these data is the purpose of studies like [3]. They also state that the model of a diffusion that is driven by a Brownian motion is not appropriate for these data. The temperature, which can be obtained by the analysis of the calcium signal in the ice, has quite abrupt changes. That is why the assumption that the underlying stochastic process has jumps is natural. The model considered is a jump diffusion of the form
\[ X_t = x - \int_0^t U'(X_{s-})\, \mathrm{d}s + \varepsilon L_t, \]
where L is a stable Lévy process and ε is small. It is not clear what U looks like, although one can conjecture it to be a double-well potential, as the temperature in the time series mostly remains in the neighbourhood of one of two states. The time it needs to change from the warm state to the cold one or vice versa is very short. This kind of diffusion has been studied in [13], [14] and [12], so properties like the exit times from the wells are well known by now.
One remaining problem is to calibrate the model. The aim of this work is to develop a method to extract the characteristics of the underlying stochastic process; in this model this is equivalent to detecting the parameter α. Though the function U is often assumed to be a double-well potential, there is no evidence for this. That is why the assumptions on U should be as weak as possible.

Our approach is based on [6]. The idea is to analyse one property of the diffusion that is determined mainly by the underlying process. Comparing a Lévy process and a Lebesgue integral, one noticeable difference is the smoothness of the paths. As the p-variation can be regarded as a 'measure' of smoothness, this is the property we will study here. More precisely, we analyse the behaviour, as n → ∞, of
\[ V_p^n(X)_t = \sum_{i=1}^{[nt]} \bigl| X_{i/n} - X_{(i-1)/n} \bigr|^p \]
for a stochastic process X and positive t.

It is well known that Lebesgue integrals have finite variation, so the limit is zero for every p bigger than one. We will see that this is not true for stable processes, where the limit is always positive because of the jumps of the process. The question is whether in the sum of these two processes the limit is also dominated by the Lévy process.
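To see why smooth integrals cannot contribute for p > 1, consider for instance a Lipschitz path f with constant K; over the equidistant grid used above the sums satisfy
\[ \sum_{i=1}^{[nt]} \bigl| f(\tfrac{i}{n}) - f(\tfrac{i-1}{n}) \bigr|^p \;\le\; [nt]\, K^p\, n^{-p} \;\le\; K^p\, t\, n^{1-p} \longrightarrow 0 \qquad (p > 1). \]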
The first results in this work provide an understanding of the p-variation of α-stable Lévy processes themselves. With the help of a central limit theorem we can see that the limit of V_p^n(L) for a stable process L is again a stable process, in the sense of weak convergence in the Skorokhod topology. The stability index of the limit is determined by the stability index of L and by p.

Having understood the behaviour of the p-variation of stable processes, we then concentrate on the conditions that stochastic processes have to satisfy in order not to interfere with the limit of the stable process. One sufficient condition is that the limit of V_p^n has to be zero for some positive p. That is exactly what we looked for, as Lebesgue integrals satisfy this condition.

Now that we have a limit theorem for the diffusion that only depends on the underlying stochastic process, it is possible to develop statistical tests to identify this process. As U only influences the integral part of the diffusion, there is no need to make any assumptions on this function for these tests.

The structure of this work is as follows. Chapter two gives a short introduction to the theory of stochastic processes in general and Lévy processes in particular. This is followed by an explanation of the convergence of stochastic processes, with some criteria for weak convergence in the Skorokhod topology.

The subject of chapter three is the p-variation. After the definition and some basic examples we discuss the condition for finiteness in the case of α-stable processes. Afterwards we give some results on the limit behaviour of the Brownian motion for a better classification of the later results.

Finally, in chapter four we develop the results for stable processes. After studying the limit behaviour for stable processes themselves we begin to add other processes. Note that all conditions on these processes concern their smoothness; there are no conditions on independence from the stable process. Hence, these theorems are applicable to our diffusion, as its two terms of course depend heavily on each other.

For better readability we omit the proofs in this chapter. They are the content of chapters five and six, together with an introduction to a more general central limit theorem.
2 Stochastic processes

2.1 Lévy processes
2.1.1 Definition and characterisation
First we will collect some facts about Lévy processes. They are mostly taken from the book by Applebaum [1], where a good introduction to this topic can be found.

In this thesis we concentrate on the one-dimensional case and deal with real-valued stochastic processes on a probability space (Ω, F, P). Hence we have a family of random variables (X_t)_{t≥0} with values in R.

To handle these processes more easily we will only look at a certain class with very nice properties.

Definition 2.1 We say that a stochastic process X = (X_t)_{t≥0} is a Lévy process if
1. X_0 = 0 almost surely,
2. X has independent increments, i.e. for 0 ≤ t_1 < … < t_n the random variables X_{t_2} − X_{t_1}, …, X_{t_n} − X_{t_{n−1}} are independent,
3. X has stationary increments, i.e. X_t − X_s has the same law as X_{t−s} for 0 ≤ s ≤ t,
4. X is stochastically continuous, i.e. for every ε > 0 and s ≥ 0 we have P(|X_t − X_s| > ε) → 0 as t → s.
Every Lévy process has a modification with càdlàg paths (i.e. right continuous with left limits) that is itself a Lévy process (Theorem 2.1.7, [1]). From now on we will consider such a modification.

There is a close relationship between Lévy processes and a class of random variables, the class with infinitely divisible distributions.
Definition 2.2 Let Y be a real-valued random variable. Y is called infinitely divisible if for all n ∈ N there exist independent and identically distributed (i.i.d.) random variables Y_{1,n}, …, Y_{n,n} such that
\[ Y \overset{d}{=} Y_{1,n} + \dots + Y_{n,n}. \]
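A simple example: every normal random variable is infinitely divisible, since for Y ∼ N(μ, σ²) one can take i.i.d. summands
\[ Y_{i,n} \sim N\bigl(\tfrac{\mu}{n}, \tfrac{\sigma^2}{n}\bigr), \qquad Y_{1,n} + \dots + Y_{n,n} \overset{d}{=} N(\mu, \sigma^2) \overset{d}{=} Y. \]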
We will see that infinitely divisible random variables (and with them Lévy processes, too) have a closed form for their characteristic functions. This important characterisation is called the Lévy-Khinchin formula.

Theorem 2.1 (Lévy-Khinchin formula, Theorem 8.1, [16]) Let Y be an infinitely divisible random variable. Then its characteristic function has the following form:
\[ \mathbb{E}\, e^{i\lambda Y} = \exp\Bigl( i\gamma\lambda - \tfrac{1}{2} A \lambda^2 + \int_{\mathbb{R}} \bigl( e^{i\lambda y} - 1 - i\lambda y\, 1_{\{|y| \le 1\}} \bigr)\, \nu(\mathrm{d}y) \Bigr), \qquad \lambda \in \mathbb{R}, \]
where γ ∈ R, A ≥ 0 and ν is a measure on R with ν({0}) = 0 and ∫ min(1, y²) ν(dy) < ∞. The triple (γ, A, ν) in this representation is unique.
It is possible to generalise this representation to Lévy processes. For a Lévy process X and t ≥ 0 the characteristic function of X_t is given by
\[ \mathbb{E}\, e^{i\lambda X_t} = \exp\Bigl( t \Bigl( i\gamma\lambda - \tfrac{1}{2} A \lambda^2 + \int_{\mathbb{R}} \bigl( e^{i\lambda y} - 1 - i\lambda y\, 1_{\{|y| \le 1\}} \bigr)\, \nu(\mathrm{d}y) \Bigr) \Bigr). \]
Here (γ, A, ν) is called the characteristic triple of the process X. The so-called Lévy measure ν characterises the law of the jumps of the process. There are other representations of the characteristic function that differ in the truncation function, here 1_{\{|y| \le 1\}}. There is a characteristic triple for each truncation function. However, the difference between these triples is not very big and one can easily calculate one triple from another.
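For instance, the standard conversion between the truncation function h(y) = y 1_{\{|y| \le 1\}} and another truncation function $\tilde h$ leaves A and ν unchanged and only shifts the drift parameter:
\[ \tilde{\gamma} = \gamma + \int_{\mathbb{R}} \bigl( \tilde{h}(y) - h(y) \bigr)\, \nu(\mathrm{d}y). \]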
2.1.2 Examples and stable processes
The definition of Lévy processes is still quite general. We will give some typical examples of Lévy processes in this subsection.

Example 2.1 (Brownian motion) The most famous Lévy process is Brownian motion, where the increments have normal distributions. The Lévy measure equals zero, so this process has no jumps. Brownian motion and Brownian motion with drift are the only Lévy processes without jumps. The characteristic triple is (γ, A, 0) with γ = 0.

Another class of Lévy processes is the family of stable processes. A stable random variable can be represented as a weighted sum of independent random variables with the same distribution.
Definition 2.3 A random variable Y has a stable distribution if for every n ∈ N there exist independent random variables Y_{1,n}, …, Y_{n,n} with the same distribution as Y and constants a_n > 0 and b_n ∈ R such that
\[ Y_{1,n} + \dots + Y_{n,n} \overset{d}{=} a_n Y + b_n. \]
It is called strictly stable if b_n = 0.

Obviously every stable random variable is infinitely divisible. Strictly stable Lévy processes, which we will deal with in this work, can be characterised by a number α ∈ (0, 2]. They are often called α-stable Lévy processes and admit an explicit solution of the equation in the definition above, namely a_n = n^{1/α}: for any α-stable process X, t ≥ 0 and n ∈ N we can find i.i.d. random variables X_t^1, …, X_t^n, each with the law of X_t, such that
\[ X_t^1 + \dots + X_t^n \overset{d}{=} n^{1/\alpha} X_t. \]
Theorem 2.2 (Theorems 14.14, 14.15, [16]) Let Y be an α-stable random variable for α ∈ (0, 2). If Y is non-trivial, then for λ ≥ 0
\[ \mathbb{E}\, e^{i\lambda Y} = \exp\Bigl( -c \lambda^{\alpha} \bigl( 1 - i\beta \tan\tfrac{\pi\alpha}{2} \bigr) + i\tau\lambda \Bigr) \qquad \text{for } \alpha \neq 1, \]
\[ \mathbb{E}\, e^{i\lambda Y} = \exp\Bigl( -c \lambda \bigl( 1 + i\beta \tfrac{2}{\pi} \log \lambda \bigr) + i\tau\lambda \Bigr) \qquad \text{for } \alpha = 1, \]
with c > 0, β ∈ [−1, 1] and τ ∈ R. Here c, β and τ are uniquely determined by Y. The random variable Y is strictly α-stable if and only if τ = 0 (if α ≠ 1) or β = 0 (if α = 1). Furthermore Y is rotation invariant if and only if for λ ≥ 0
\[ \mathbb{E}\, e^{i\lambda Y} = e^{-c|\lambda|^{\alpha}} \]
with c > 0.
We only deal with strictly stable processes in this work. From now on, stability means strict stability.
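As a quick numerical illustration of the stability property (a sketch only; it assumes that scipy's levy_stable with β = 0 yields a symmetric, hence strictly stable, law in its default parameterisation), one can compare a sum of n i.i.d. copies with the rescaled variable n^{1/α} Y:

```python
# Sketch: check Y_{1,n} + ... + Y_{n,n} =d n^(1/alpha) * Y for a symmetric alpha-stable law.
import numpy as np
from scipy import stats

alpha, n, m = 1.5, 10, 20_000
rng = np.random.default_rng(0)

# m independent sums of n i.i.d. symmetric alpha-stable variables
summands = stats.levy_stable.rvs(alpha, 0.0, size=(m, n), random_state=rng)
sums = summands.sum(axis=1)

# m samples of the rescaled single variable n^(1/alpha) * Y
rescaled = n ** (1 / alpha) * stats.levy_stable.rvs(alpha, 0.0, size=m, random_state=rng)

# the empirical quantiles of the two samples should roughly agree
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(sums, qs))
print(np.quantile(rescaled, qs))
```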
2.1.3 Properties of stable processes
Despite the simple and closed form of their characteristic functions, the distributions even of rotation invariant α-stable processes can rarely be expressed in terms of density functions or distribution functions. There are only two exceptions: the normal distribution for α = 2 and the Cauchy distribution for α = 1. However, there are some very helpful characteristics that all stable processes have in common.

In the proofs of the theorems in chapter 3 we will often use the self-similarity of strictly stable processes.

Definition 2.4 Let X = (X_t)_{t≥0} be a stochastic process on R. It is called self-similar if for every a > 0 there is b > 0 such that
\[ (X_{at})_{t \ge 0} \overset{d}{=} (b X_t)_{t \ge 0}. \]

Strictly stable processes are self-similar. Moreover, we know the exact scaling constants, as we will see in the following theorem.

Theorem 2.3 (Theorem 13.15, Definition 13.16) Let X = (X_t)_{t≥0} be a strictly α-stable process. Then for a > 0
\[ (X_{at})_{t \ge 0} \overset{d}{=} (a^{1/\alpha} X_t)_{t \ge 0}. \]
Stable processes have a lot in common with Brownian motion. However, there are some fundamental properties in which α-stable processes with α < 2 and Brownian motion differ. All α-stable processes with α ∈ (0, 2) are heavy-tailed. From Theorem 1, page 576 of [4] it follows that for an α-stable process X with α ∈ (0, 2) and for t > 0 there exist constants c_1, c_2 > 0 such that
\[ \mathbb{P}(X_t > x) \sim c_1 x^{-\alpha} \quad \text{and} \quad \mathbb{P}(X_t < -x) \sim c_2 x^{-\alpha} \qquad \text{as } x \to \infty. \]
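Since the tails are of exact Pareto type, α can in principle be read off from a log-log fit of the empirical tail. The following sketch illustrates this on simulated symmetric stable data; the sample size and the cut-off of the largest observations are arbitrary illustrative choices, not prescribed by [4]:

```python
# Sketch: estimate alpha from the empirical tail P(|X| > x) ~ c x^(-alpha).
import numpy as np
from scipy import stats

alpha_true = 1.2
rng = np.random.default_rng(1)
x = stats.levy_stable.rvs(alpha_true, 0.0, size=100_000, random_state=rng)

abs_x = np.sort(np.abs(x))[-1000:]              # the 1000 largest absolute values
surv = np.arange(len(abs_x), 0, -1) / len(x)    # empirical P(|X| > abs_x[i])

slope, intercept = np.polyfit(np.log(abs_x), np.log(surv), 1)
print("estimated alpha:", -slope)               # should be roughly alpha_true
```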
Another aspect in which the α-stable case for α ∈ (0, 2) differs from the Brownian case is the existence of moments. Brownian motion has moments of all orders. This is not true if α < 2: processes with α < 2 already fail to possess second order moments. Example 25.10 in [16] tells us that there is a strict boundary for the existence of moments. Hence for α ∈ (0, 2), X α-stable and t > 0 we have the equivalence
\[ \mathbb{E}\,|X_t|^p < \infty \iff p < \alpha. \]
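This boundary is consistent with the tail behaviour above: writing the moment as an integral over the tails,
\[ \mathbb{E}\,|X_t|^p = \int_0^\infty p\, x^{p-1}\, \mathbb{P}(|X_t| > x)\, \mathrm{d}x, \]
and using that P(|X_t| > x) behaves like a constant times x^{-α} for large x, the integral converges at infinity if and only if p − 1 − α < −1, i.e. p < α.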
2.2 Convergence of processes

To study the convergence of processes we first need a set that the paths of these processes belong to and a topology on this set. The results and definitions in this section are mostly taken from [2] and [5].
2.2.1 The Skorokhod topology
All the processes we deal with have càdlàg paths (that is, right continuous with left limits), so the space of such functions is the appropriate space to define the topology on.

Definition 2.5 Let D be the set of all càdlàg functions f : R_+ → R.

We now endow D with a topology that makes this set a Polish space. That makes the space of measures on D, together with the weak topology (that we will define later on), a Polish space as well. In these spaces we can apply the theorem of Arzelà-Ascoli to characterise compact sets. This is very useful for finding criteria of convergence.
The definition of this topology is not very intuitive at first sight, but we will see that it meets our demands. In fact there is a metrisable topology on D, the Skorokhod topology, for which this space is Polish and for which convergence of sequences can be characterised as follows (the proof of existence can be found in Theorem 1.14, [5]). First we introduce the concept of time changes.

Definition 2.6 A time change is a continuous and strictly increasing function λ : R_+ → R_+ with λ(0) = 0 and λ(t) ↑ ∞ as t ↑ ∞.

Now we can characterise convergence in the Skorokhod topology. A sequence of functions (f_n)_{n∈N} ⊂ D converges to f ∈ D in the Skorokhod topology if and only if there exists a sequence (λ_n)_{n∈N} of time changes such that, as n → ∞,
\[ \sup_{t \ge 0} |\lambda_n(t) - t| \longrightarrow 0 \quad \text{and} \quad \sup_{0 \le t \le N} |f_n(\lambda_n(t)) - f(t)| \longrightarrow 0 \ \text{for every } N \in \mathbb{N}. \]
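A small example of how the time changes absorb a drifting jump: let f_n = 1_{[1+1/n, ∞)} and f = 1_{[1, ∞)}. The uniform distance sup_t |f_n(t) − f(t)| equals 1 for every n, but with the time change
\[ \lambda_n(t) = \begin{cases} (1 + \tfrac{1}{n})\, t, & 0 \le t \le 1, \\ t + \tfrac{1}{n}, & t > 1, \end{cases} \]
we have sup_t |λ_n(t) − t| = 1/n → 0 and f_n(λ_n(t)) = f(t) for all t, so f_n → f in the Skorokhod topology.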
With the Skorokhod topology we already have one possible form of convergence of stochastic processes, namely pathwise convergence with respect to this topology. However, this is not adequate because it is a very strong form of convergence. Especially if we want to extend the convergence in law of the finite dimensional marginal distributions of random processes to the entire process, we need a notion of convergence of measures on the path space of the process.
Since Lévy processes have càdlàg paths, the laws of our stochastic processes are measures on D. The Skorokhod topology induces the weak topology on this space of measures as follows.

Definition 2.7 We say that a sequence of measures (μ_n)_{n≥0} on D converges weakly to a measure μ on D if for every open set G with respect to the Skorokhod topology we have the inequality
\[ \liminf_{n \to \infty} \mu_n(G) \ge \mu(G). \]

Definition 2.8 Let (μ_n)_{n≥0} be a family of probability measures on D (equipped with its Borel σ-algebra). The family is tight if for every ε > 0 there exists a compact set K_ε (with respect to the Skorokhod topology) such that
\[ \mu_n(K_\varepsilon) \ge 1 - \varepsilon \quad \text{for all } n. \]

Theorem 2.4 (Theorem 13.1, [2]) Let (X^n)_{n≥0} be a sequence of stochastic processes with càdlàg paths. If their finite-dimensional distributions converge to the finite-dimensional distributions of a stochastic process X and the family is tight, then X^n converges weakly to X (X^n →d X).
2.2.2 Criteria for convergence
To show the weak convergence of stochastic processes we have to check the convergence of the finite-dimensional distributions and tightness. This is why we need criteria for the tightness of stochastic processes.

The first theorem already merges the tightness condition and the convergence of the finite-dimensional distributions, as the assumptions are such that the central limit theorem can be applied.

Theorem 2.5 (Theorem 16.1, [2]) Suppose the random variables ξ_i, i ∈ N, are independent and identically distributed with mean 0 and finite, positive variance σ². Define the partial sum processes $X^{(n)}_t := \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{[nt]} \xi_i$. Then, as n → ∞,
\[ \bigl( X^{(n)}_t \bigr)_{t \ge 0} \xrightarrow{d} (W_t)_{t \ge 0}, \]
where W is a Brownian motion, independent of X^{(n)}, n ∈ N.

The assumptions of this theorem are quite strong; in particular, the existence of the variance fails if we deal with stable Lévy processes. The theorem we apply instead uses stopping times.
Theorem 2.6 (Aldous' criterion, Theorem VI.4.5, [5]) Let (F^n)_{n∈N} be a sequence of filtrations and (X^n)_{n∈N} a sequence of real-valued càdlàg processes such that X^n is adapted to F^n. For N ∈ N we denote by T^n_N the set of all F^n-stopping times that are bounded by N. The sequence (X^n)_{n∈N} is tight if the following two conditions are satisfied.

1. For all N ∈ N and ε > 0 there are n_0 ∈ N and K ∈ R_+ with
\[ \mathbb{P}\Bigl( \sup_{t \le N} |X^n_t| > K \Bigr) \le \varepsilon \quad \text{for all } n \ge n_0. \]

2. For all N ∈ N and ε > 0 we have
\[ \lim_{\theta \downarrow 0} \limsup_{n \to \infty} \; \sup_{\substack{S,T \in T^n_N \\ S \le T \le S + \theta}} \mathbb{P}\bigl( |X^n_T - X^n_S| \ge \varepsilon \bigr) = 0. \]
3 p-variation
In this chapter we introduce the concept of the p-variation as a measure for the smoothness of a function or of the path of a stochastic process. Later we will examine the p-variation of the Brownian motion, in preparation for the fourth chapter, where we will treat α-stable processes in general.
3.1 p-variation
In this section we will introduce the p-variation of stochastic processes. After the definition we will give some characteristic examples to see how the behaviour of the p-variation of a stochastic process is linked to the smoothness of its paths.
3.1.1 Definition and examples
Definition 3.1 We say that a tuple of real points Π = {t_0, …, t_n}, n ∈ N, is a partition of an interval [a, b] ⊂ R if a = t_0 < … < t_n = b. The mesh of a partition Π is defined as
\[ |\Pi| := \sup_{1 \le i \le n} |t_i - t_{i-1}|. \]
The set of partitions of the interval [a, b] is denoted by Π([a, b]).

Definition 3.2 Let f : [a, b] → R be a function and p > 0 a positive number. The p-variation of f on the interval [a, b] is defined as
\[ V_p(f, [a, b]) := \sup_{\Pi \in \Pi([a,b])} \sum_{i=1}^{n} |f(t_i) - f(t_{i-1})|^p. \]
For a stochastic process X = (X_t)_{t≥0} we use a pathwise definition of the p-variation, that is, V_p(X)_t(ω) := V_p(X_·(ω), [0, t]) for t ≥ 0 and ω ∈ Ω. Furthermore we set
\[ \gamma(f, [a, b]) := \inf\{ p > 0 : V_p(f, [a, b]) < \infty \}. \]
We extend this definition to stochastic processes in the same way we did with the p-variation. For all stochastic processes we deal with, it depends neither on t nor on ω, so we can just write γ(X).

In the following examples we will see how this value can be interpreted as a measure for the regularity of the sample paths of stochastic processes.

Example 3.1 (Poisson process) The Poisson process is a pure jump process with a finite number of jumps in every finite interval. The jumps have height 1, so the p-variation equals the number of jumps in the observed interval for every p > 0. The compound Poisson process has jumps with random heights, but their number is still finite. So its p-variations differ for varying p, but they are still finite for every p.

Example 3.2 (Absolutely continuous functions) Absolutely continuous functions have finite variation (p-variation with p = 1). For p < 1 the p-variation is infinite if the function is not constant. Lebesgue integrals are absolutely continuous, so they also have finite variation.

Example 3.3 (Brownian motion) The sample paths of the Brownian motion are not very smooth: the p-variation is infinite for p ≤ 2 and finite for p > 2.
3.1.2 Finiteness in case of stable processes
We will now concentrate on the behaviour of the p-variation of stable processes. So let L = (L_t)_{t≥0} be an α-stable Lévy process with α ∈ (0, 2). We will see that γ(L) only depends on α.

First we introduce another characteristic value of Lévy processes, the Blumenthal-Getoor index.

Definition 3.3 Let L = (L_t)_{t≥0} be a Lévy process with Lévy measure ν. The Blumenthal-Getoor index β_L is defined as
\[ \beta_L := \inf\Bigl\{ p > 0 : \int_{|x| \le 1} |x|^p \, \nu(\mathrm{d}x) < \infty \Bigr\}. \]

There is a close relationship between the Blumenthal-Getoor index of a process L and γ(L). These results were first established in [15]. In that paper a slightly different form of the Lévy-Khinchin formula is used, and as a consequence the Lévy measure differs from our definition, but this has no influence on β_L.
Theorem 3.1 (Theorem 4.1, [15]) Let L = (L_t)_{t≥0} be a Lévy process with Blumenthal-Getoor index β_L. If 0 < p < β_L then P(V_p(L)_t = ∞) = 1 for every t > 0.

Theorem 3.2 (Theorem 4.2, [15]) Let L = (L_t)_{t≥0} be a Lévy process with Blumenthal-Getoor index β_L. If β_L < p ≤ 1 then P(V_p(L)_t < ∞) = 1 for every t > 0.

These two theorems tell us that for a Lévy process L the Blumenthal-Getoor index β_L and γ(L) coincide if β_L < 1, and that γ(L) ≥ β_L if β_L ≥ 1. In [9] it was shown that in the latter case equality holds as well.

Theorem 3.3 (Theorem 2, [9]) Let L = (L_t)_{t≥0} be a Lévy process with A = 0 and Blumenthal-Getoor index β_L ∈ [1, 2]. If p > β_L then P(V_p(L)_t < ∞) = 1 for every t > 0.

We summarise all the facts above in the following corollary.
Corollary 3.1 Let L = (L_t)_{t≥0} be a Lévy process. If L is α-stable with α ∈ (0, 2), then γ(L) and α coincide.

Proof The theorems above give us the equality of the Blumenthal-Getoor index β_L and γ(L). However, since L is stable we are able to state the explicit form of the Lévy measure ν. With some constants c_1 and c_2 we have
\[ \nu(\mathrm{d}x) = c_1\, 1_{\{x<0\}}\, \frac{\mathrm{d}x}{|x|^{1+\alpha}} + c_2\, 1_{\{x>0\}}\, \frac{\mathrm{d}x}{|x|^{1+\alpha}}. \]
Now it is easy to verify that
\[ \int_{|x| \le 1} |x|^p \, \nu(\mathrm{d}x) = (c_1 + c_2) \int_0^1 x^{p-1-\alpha}\, \mathrm{d}x < \infty \iff p > \alpha, \]
so β_L = α. This implies that γ(L) = β_L = α.
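A small simulation makes this visible (a sketch only; the exponents p = 1 and p = 1.9 are arbitrary choices on either side of α, and scipy's levy_stable with β = 0 is assumed to give a symmetric, strictly stable law): the discretised p-variation sums of a simulated α-stable path blow up for p < α and stay of order one for p > α.

```python
# Sketch: p-variation sums of a simulated alpha-stable path for p below and above alpha.
import numpy as np
from scipy import stats

alpha = 1.5
rng = np.random.default_rng(5)

for n in (100, 1000, 10_000):
    # increments of L over the grid i/n have the law n^(-1/alpha) * L_1 (self-similarity)
    incr = n ** (-1 / alpha) * stats.levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    print(n, np.sum(np.abs(incr) ** 1.0), np.sum(np.abs(incr) ** 1.9))  # p = 1 grows, p = 1.9 does not
```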
3.2 Limit behaviour of the Brownian motion
Before we deal with α-stable processes for α ∈ (0, 2), we will study the case α = 2. This way we are able to compare the results for the Brownian motion with those for other stable processes.

Until now we have studied the p-variation of processes, i.e. the supremum over all partitions. If this supremum is finite, it is often more meaningful to study the limit as the mesh of the partition Π = {t_0, …, t_n} goes to zero, that is
\[ \lim_{|\Pi| \to 0} \sum_{i=1}^{n} |X_{t_i} - X_{t_{i-1}}|^p. \]
In what follows we specify the sequence of partitions to be equidistant. So we define the n-th approximation.
Definition 3.4 Let (X_t)_{t≥0} be a stochastic process and p > 0 a real number. For n ∈ N we define the new stochastic process V_p^n(X) by
\[ V_p^n(X)_t := \sum_{i=1}^{[nt]} \bigl| X_{i/n} - X_{(i-1)/n} \bigr|^p, \qquad t \ge 0. \]
By [s] we denote the integer part of s ∈ R: [s] := sup{i ∈ Z : i ≤ s}.
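A minimal sketch of this definition in code (assuming an equidistant grid of step 1/n on [0, 1] and an ad hoc helper v_pn): for a simulated Brownian path the approximations behave as expected, i.e. the case p = 2 stabilises around t = 1, p = 3 tends to zero and p = 1 grows like n^{1/2}.

```python
# Sketch: the n-th approximation V_p^n(X)_1 of the p-variation for a Brownian path.
import numpy as np

def v_pn(path, p):
    """Sum over i of |X_{i/n} - X_{(i-1)/n}|^p for a path sampled on an equidistant grid."""
    return np.sum(np.abs(np.diff(path)) ** p)

rng = np.random.default_rng(2)
for n in (100, 10_000, 1_000_000):
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1 / n), size=n))])
    print(n, v_pn(w, 2), v_pn(w, 3), v_pn(w, 1) / np.sqrt(n))  # ~1, ~0, ~sqrt(2/pi)
```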
There are some results by Barndorff-Nielsen and Shephard ([10], [11]) on the p-variation of the Brownian motion and of semimartingales in general. However, they concentrate on different subjects, such as the limit behaviour for more general partitions than the equidistant one. We are more interested in generalisations in the direction of adding processes.

We look for convergence properties of V_p^n(X) if X is a Brownian motion (in this section) or another stable process (in the next chapter). So in our case V_p^n(X)_t is always a sum of independent and identically distributed random variables, and it is quite natural to apply the two main theorems for such sums: the law of large numbers and the central limit theorem.

In addition to the p-variation of the Brownian motion we want to develop conditions for another process that, if added, does not influence the limit behaviour of the approximated p-variation.

First we use the law of large numbers (see for example [8], p. 251) to get an impression.
Proof We will first show the convergence for V_p^n(W)_t and then show that the difference is asymptotically negligible.

1. Using the scaling property and the independent increments of the Brownian motion, it suffices to show the convergence of a normalised sum of independent, identically distributed variables; here convergence in distribution is equivalent to convergence in probability, since the limit is a Dirac measure.

2. To prove the convergence in total we first assume that p ≤ 1; then the triangle inequality yields the required bound on the difference. If p > 1, we use Minkowski's inequality to get a similar result.
This result is consistent with our knowledge about the p-variation of the Brownian motion. If we fix p = 2 we get convergence without compensation. If p < 2 the expression diverges, whereas it converges to zero for p > 2.

To get more information about the limit behaviour we now apply the central limit theorem. The result in the next theorem is even stronger, as we obtain weak convergence of processes instead of just convergence of the one-dimensional distributions.

Theorem 3.5 Let W be a Brownian motion. For every p > 0 we have the convergence
\[ \Bigl( n^{p/2 - 1/2}\, V_p^n(W)_t - t\, n^{1/2}\, \mathbb{E}(|W_1|^p) \Bigr)_{t \ge 0} \xrightarrow{d} \operatorname{var}(|W_1|^p)^{1/2}\, (W'_t)_{t \ge 0} \quad \text{as } n \to \infty, \]
where W' is a Brownian motion that is independent of W.
Proof We will bring the expression into a form to which Theorem 2.5 applies. Therefore we use the scaling property of the Brownian motion, namely that (W_{at})_{t≥0} has the same law as (a^{1/2} W_t)_{t≥0} for a > 0. With ∆_i^n W := W_{i/n} − W_{(i−1)/n} we can write
\[
\operatorname{var}(|W_1|^p)^{-1/2} \Bigl( n^{p/2-1/2}\, V_p^n(W)_t - [nt]\, n^{-1/2}\, \mathbb{E}(|W_1|^p) \Bigr)
= \frac{\sum_{i=1}^{[nt]} \bigl( n^{p/2} |\Delta_i^n W|^p - \mathbb{E}(|W_1|^p) \bigr)}{\bigl( n \operatorname{var}(|W_1|^p) \bigr)^{1/2}}
\overset{d}{=} \frac{\sum_{i=1}^{[nt]} \bigl( |\Delta_i^1 W|^p - \mathbb{E}(|W_1|^p) \bigr)}{\bigl( n \operatorname{var}(|W_1|^p) \bigr)^{1/2}},
\]
to which we can apply Theorem 2.5. We know that (nt − [nt]) n^{−1/2} E(|W_1|^p) → 0 as n → ∞, so the desired result follows.
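A Monte Carlo check of this convergence at t = 1 (a sketch only; E(|W_1|^p) and var(|W_1|^p) are replaced by empirical estimates rather than closed-form values):

```python
# Sketch: the normalised statistic of Theorem 3.5 should be roughly N(0, var(|W_1|^p)).
import numpy as np

p, n, m = 3.0, 2_000, 5_000
rng = np.random.default_rng(3)

z = rng.normal(size=1_000_000)
mean_p, var_p = np.mean(np.abs(z) ** p), np.var(np.abs(z) ** p)

vals = np.empty(m)
for k in range(m):
    incr = rng.normal(0.0, np.sqrt(1 / n), size=n)
    v_pn = np.sum(np.abs(incr) ** p)
    vals[k] = n ** (p / 2 - 0.5) * v_pn - np.sqrt(n) * mean_p

print(np.mean(vals), np.var(vals), var_p)   # mean ~ 0, the two variances should be close
```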
In the next chapter we will see that for α-stable processes with α < 2 we can add another process with certain properties (for example continuous with finite variation) without disturbing this asymptotic behaviour. Unfortunately this is not enough in the Brownian case considered here, as the following example shows.

Example 3.4 Take p = 2 and the (deterministic) process Y_t = t^{1/8}. This process is continuous on the interval [0, 1] and has finite variation. From Theorem 3.5 we know that
\[ n^{1/2}\, V_2^n(W)_1 - n^{1/2}\, \mathbb{E}|W_1|^2 \xrightarrow{d} \operatorname{var}(|W_1|^2)^{1/2}\, N, \]
where N is a standard normally distributed random variable. This behaviour changes if Y is added to W.
4 The results – p-variation of stable processes
In this chapter we are interested in the behaviour of the p-variation of a stable process. As in the previous chapter we will see the convergence in distribution and the convergence in probability of V_p^n.

The idea for these studies comes from [6]. The focus of that paper is the development of limit theorems for the p-variation of integrals with respect to a stable process. We will briefly summarise them and simplify the results by stating the theorems just for the stable processes.

The first result is an application of the law of large numbers; it gives a statement about the asymptotic behaviour of the p-variation.

Theorem 4.1 (Theorem 2.1, [6]) Suppose that (L_t)_{t≥0} is an α-stable Lévy process with α ∈ (0, 2) and (Y_t)_{t≥0} is a stochastic process which satisfies
of second order
The problem with these restrictions on the power p is that they directly correlate with restrictions on the process Y: the smaller p is chosen, the smoother the paths of Y have to be. In particular, if p < 1 a (non-constant) Lebesgue integral does not satisfy this condition, and this is the reason why the following theorem does not meet our needs.

Theorem 4.2 Let L be an α-stable Lévy process with α ∈ (0, 2). Fix 0 < p < α/2 and assume that the stochastic process Y satisfies

To extend these limit theorems to larger p we need limit theorems that do not require second order moments. We introduce them in the next section.

4.1 Central limit theorem for random variables with infinite variance

There is a generalised central limit theorem that deals with the case where the random variables have infinite variance. In this section we introduce some results we will need later on. They are mainly taken from [4].

In the central limit theorem for random variables with finite variance there is just one distribution (the normal distribution) that sums of i.i.d. random variables may converge to. This is different if we consider random variables with infinite second order moments. Now there is a class of distributions that is a good candidate: the class of infinitely divisible distributions we already encountered in the introduction to Lévy processes.
In what follows we deal with stable distributions as limit laws. There are some criteria for the convergence of sums of i.i.d. random variables. The limit distribution of such a normalised sum depends on the distribution of the summands, especially on their tail asymptotics. Every stable random variable has a family of distributions for which it serves as limit distribution. This family is called the domain of attraction.
Definition 4.1 Let Z be a real-valued random variable and (X_i)_{i∈N} a family of i.i.d. random variables. We say that (X_i)_{i∈N} belongs to the domain of attraction of Z if there are constants a_n ∈ R_+, b_n ∈ R, n ∈ N, such that
\[ \frac{X_1 + \dots + X_n - b_n}{a_n} \xrightarrow{d} Z \quad \text{as } n \to \infty. \]
Only stable distributions (including the normal distribution) can occur as limits Z for which the stated convergence holds.
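As an illustration (a sketch under the assumption that a classical Pareto law with tail exponent α < 1 is used, so that no centring constants b_n are needed), the normalised sums of Pareto variables stabilise in distribution as n grows:

```python
# Sketch: Pareto-tailed summands lie in the domain of attraction of an alpha-stable law.
import numpy as np

alpha, m = 0.8, 50_000
rng = np.random.default_rng(4)

for n in (10, 100, 1000):
    x = rng.pareto(alpha, size=(m, n)) + 1.0        # classical Pareto samples on [1, infinity)
    s = x.sum(axis=1) / n ** (1 / alpha)            # (S_n - b_n) / a_n with a_n = n^(1/alpha), b_n = 0
    print(n, np.quantile(s, [0.25, 0.5, 0.75]))     # quantiles settle as n grows
```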
We first have to introduce regularly varying functions in order to have some criteria for belonging to a domain of attraction.

Definition 4.2 A positive function f defined on [0, ∞) varies regularly with exponent ρ if and only if it is of the form
\[ f(x) = x^{\rho}\, L(x), \]
where L is slowly varying, i.e. L(tx)/L(x) → 1 as x → ∞ for every t > 0.

The next theorem states that the tails of the summands have to behave like the tails of the limit distributions and that they have to be sufficiently balanced.
Theorem 4.3 (Corollary 2, p. 578, [4]) A law ν with distribution function F belongs to the domain of attraction of a stable distribution with exponent α < 2 if and only if its tails satisfy the balancing condition
\[ \frac{1 - F(x)}{1 - F(x) + F(-x)} \longrightarrow r, \qquad \frac{F(-x)}{1 - F(x) + F(-x)} \longrightarrow 1 - r \qquad \text{as } x \to \infty, \]
with r ∈ [0, 1], and if the tail sum 1 − F(x) + F(−x) varies regularly with exponent −α, 0 < α < 2. The latter condition is fully equivalent to
\[ \frac{x^2 \bigl[ 1 - F(x) + F(-x) \bigr]}{\mu(x)} \longrightarrow \frac{2 - \alpha}{\alpha}, \qquad x \to \infty, \]
where μ(x) denotes the truncated second moment ∫_{|y| ≤ x} y² F(dy).