Fig. 10. Mean density ρ vs. p for different rules evolving under the third self-synchronization method. The density of the system decreases linearly with p.
The behavior reported for the first self-synchronization method is obtained again in this case. Rule 18 undergoes a phase transition for a critical value of ˜p. For ˜p greater than the critical value, the method is able to find the stable structure of the system (Sanchez & Lopez-Ruiz, 2006). For the rest of the rules the freezing phase is not found. The dynamics generates patterns where the different marginally stable structures randomly compete. Hence the DA density decays linearly with ˜p (see Fig. 8).
4.3 Third Self-Synchronization Method
At last, we introduce another type of stochastic element in the application of the rule Φ. Given an integer number L, the surrounding of site i is redefined at each time step. A site i_l is randomly chosen among the L neighbors of site i to the left, (i−L, …, i−1). Analogously, a site i_r is randomly chosen among the L neighbors of site i to the right, (i+1, …, i+L). The rule Φ is now applied on the site i using the triplet (i_l, i, i_r) instead of the usual nearest neighbors of the site. This new version of the rule is called Φ_L, with Φ_{L=1} = Φ. Then the operator Γ_p acts in an identical way as in the first method. Therefore, the dynamical evolution law is:
σ(t+1) = Γ_p[(σ_1(t), σ_2(t))] = Γ_p[(σ(t), Φ_L[σ(t)])]    (13)
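To make the update concrete, here is a minimal Python sketch of one iteration of the third method (not the authors' code). It assumes that Γ_p acts by restoring the previous value σ_i(t) with probability p at every site where Φ_L[σ(t)] differs from σ(t), which is one reading of the "memory" coupling of the first method; rule 18, periodic boundaries and the parameters N, L and p are illustrative choices.

```python
import random

# Lookup table for elementary rule 18 (Wolfram numbering): the neighborhood
# (left, center, right) is mapped to the new value of the central cell.
RULE_18 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 0, (0, 1, 0): 0, (0, 0, 1): 1, (0, 0, 0): 0}

def third_method_step(sigma, p, L, rule=RULE_18):
    """One iteration of sigma(t+1) = Gamma_p[(sigma(t), Phi_L[sigma(t)])]."""
    N = len(sigma)
    evolved = []
    for i in range(N):
        # Phi_L: the rule is applied to (i_l, i, i_r), where i_l and i_r are
        # drawn at random among the L neighbors to the left and to the right.
        i_l = (i - random.randint(1, L)) % N   # periodic boundary conditions
        i_r = (i + random.randint(1, L)) % N
        evolved.append(rule[(sigma[i_l], sigma[i], sigma[i_r])])
    # Gamma_p (assumed reading): where the evolved copy differs from sigma(t),
    # keep the old value with probability p, otherwise accept the new one.
    return [old if old != new and random.random() < p else new
            for old, new in zip(sigma, evolved)]

# Illustrative run: N = 100 random sites, range L = 3, coupling p = 0.6.
state = [random.randint(0, 1) for _ in range(100)]
for _ in range(400):
    state = third_method_step(state, p=0.6, L=3)
```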
The DA density as a function of p is plotted in Fig. 9 for rule 18 and in Fig. 10 for other rules. It can be observed again that rule 18 is a singular case that, even for different L, maintains the memory and continues to self-synchronize. It means that the influence of the rule is even more important than the randomness in the choice of the surrounding sites. The system self-synchronizes and decays to the corresponding stable structure. On the contrary, for the rest of the rules, the DA density decreases linearly with p even for L = 1, as shown in Fig. 10.
Fig. 11. Space-time configurations of automata with N = 100 sites iterated during T = 400 time steps evolving under rules 18 and 150 for p < p_c. Left panels show the automaton evolution in time (increasing from top to bottom) and the right panels display the evolution of the corresponding DA.
The systems oscillate randomly among their different marginally stable structures, as in the previous methods (Sanchez & Lopez-Ruiz, 2006).
5 Symmetry Pattern Transition in Cellular Automata with Complex Behavior
In this section, the stochastic synchronization method introduced in the former sections (Morelli & Zanette, 1998) for two CA is specifically used to find symmetrical patterns in the evolution of a single automaton. To achieve this goal the stochastic operator, described below, is applied to sites symmetrically located from the center of the lattice. It is shown that a symmetry transition takes place in the spatio-temporal pattern. The transition forces the automaton to evolve toward complex patterns that have mirror symmetry with respect to the central axis of the pattern. In consequence, this synchronization method can also be interpreted as a control technique for stabilizing complex symmetrical patterns.
Cellular automata are extended systems, in our case one-dimensional strings composed of N sites or cells. Each site is labeled by an index i = 1, …, N, with a local variable s_i carrying a binary value, either 0 or 1. The set of site values at time t represents a configuration (state or pattern) σ_t of the automaton. During the automaton evolution, a new configuration σ_{t+1} at time t+1 is obtained by the application of a rule or operator Φ to the present configuration (see former section):

σ_{t+1} = Φ[σ_t].    (14)
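For reference, a short sketch of the operator Φ for an elementary rule, assuming the usual Wolfram encoding of the rule number and periodic boundary conditions (both are conventions of this sketch, not stated in the chapter):

```python
def rule_table(rule_number):
    """Map each neighborhood (l, c, r) to the value encoded by the Wolfram rule number."""
    return {(l, c, r): (rule_number >> (4 * l + 2 * c + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def phi(sigma, table):
    """One synchronous update sigma_{t+1} = Phi[sigma_t] with periodic boundaries."""
    N = len(sigma)
    return [table[(sigma[i - 1], sigma[i], sigma[(i + 1) % N])] for i in range(N)]

# Rule 150 updates each cell to the XOR of the three neighborhood values.
print(phi([0, 1, 0, 0, 1, 1, 0, 1], rule_table(150)))
```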
Fig. 12. Space-time configurations of automata with N = 100 sites iterated during T = 400 time steps evolving under rules 18 and 150 for p > p_c. The space symmetry of the evolving patterns is clearly visible.
5.1 Self-Synchronization Method by Symmetry
Our present interest (Sanchez & Lopez-Ruiz, 2008) resides in those CA evolving under rules capable of showing asymptotic complex behavior (rules of class III and IV). The technique applied here is similar to the synchronization scheme introduced by Morelli and Zanette (Morelli & Zanette, 1998) for two CA evolving under the same rule Φ. The strategy supposes that the two systems have partial knowledge of each other. At each time step and after the application of the rule Φ, both systems compare their present configurations Φ[σ_t^1] and Φ[σ_t^2] along all their extension and they synchronize a percentage p of the total of their different sites. The location of the percentage p of sites that are going to be put equal is decided at random and, for this reason, it is said to be a stochastic synchronization. If we call this stochastic operator Γ_p, its action over the couple (Φ[σ_t^1], Φ[σ_t^2]) can be represented by the expression:
(σ_{t+1}^1, σ_{t+1}^2) = Γ_p(Φ[σ_t^1], Φ[σ_t^2]) = (Γ_p ∘ Φ)(σ_t^1, σ_t^2)    (15)
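A minimal sketch of one coupled step of this scheme follows. The chapter only states that a randomly chosen fraction p of the differing sites is "put equal"; selecting each differing site independently with probability p, copying the value of the first automaton onto the second, and the use of rule 90 are all assumptions of this sketch.

```python
import random

def phi_rule90(sigma):
    """Elementary rule 90: each cell becomes the XOR of its two neighbors (periodic)."""
    N = len(sigma)
    return [sigma[i - 1] ^ sigma[(i + 1) % N] for i in range(N)]

def coupled_step(sigma1, sigma2, p, phi=phi_rule90):
    """One step of (sigma1, sigma2) -> Gamma_p(Phi[sigma1], Phi[sigma2])."""
    a, b = phi(sigma1), phi(sigma2)
    for i in range(len(a)):
        # Each site where the evolved copies differ is made equal with
        # probability p (an approximation of picking exactly a fraction p).
        if a[i] != b[i] and random.random() < p:
            b[i] = a[i]
    return a, b

# Illustrative run measuring the remaining fraction of differing sites.
s1 = [random.randint(0, 1) for _ in range(200)]
s2 = [random.randint(0, 1) for _ in range(200)]
for _ in range(500):
    s1, s2 = coupled_step(s1, s2, p=0.3)
hamming_density = sum(x != y for x, y in zip(s1, s2)) / len(s1)
```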
Fig. 13. Asymptotic density of the DA for different rules (30, 122 and 150) plotted as a function of the coupling probability p. Different values of p_c for each rule appear clearly at the points where ρ → 0. The automata with N = 4000 sites were iterated during T = 500 time steps. The mean values of the last 100 steps were used for the density calculations.
Rule   18    22    30    54    60    90    105   110   122   126   146   150   182
p_c    0.25  0.27  1.00  0.20  1.00  0.25  0.37  1.00  0.27  0.30  0.25  0.37  0.25

Table 1. Numerically obtained values of the critical probability p_c for different rules displaying complex behavior. Rules that cannot sustain symmetric patterns need full coupling of the symmetric sites, i.e. p_c = 1.
The same strategy can be applied to a single automaton with an even number of sites (Sanchez & Lopez-Ruiz, 2008). Now the evolution equation, σ_{t+1} = (Γ_p ∘ Φ)[σ_t], given by the successive action of the two operators Φ and Γ_p, can be applied to the configuration σ_t as follows:
1. the deterministic operator Φ for the evolution of the automaton produces Φ[σ_t], and,
2. the stochastic operator Γ_p produces the result Γ_p(Φ[σ_t]), in such a way that, if sites symmetrically located from the center are different, i.e. s_i ≠ s_{N−i+1}, then Γ_p sets s_{N−i+1} equal to s_i with probability p; Γ_p leaves the sites unchanged with probability 1−p (see the sketch below).
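A sketch of the corresponding operator for a single automaton (again selecting each differing mirror pair independently with probability p as an approximation; `phi` stands for any one-step rule update such as the one sketched earlier):

```python
import random

def gamma_p_symmetry(sigma, p):
    """Where mirror sites differ, set s_{N-i+1} = s_i with probability p
    (0-based indices: the mirror of site i is site N-1-i)."""
    s = list(sigma)
    N = len(s)
    for i in range(N // 2):
        j = N - 1 - i
        if s[i] != s[j] and random.random() < p:
            s[j] = s[i]
    return s

def symmetric_step(sigma, p, phi):
    """One iteration sigma_{t+1} = (Gamma_p o Phi)[sigma_t]."""
    return gamma_p_symmetry(phi(sigma), p)
```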
A simple way to visualize the transition to a symmetric pattern is to split the automaton into two subsystems (σ_t^1, σ_t^2),
• σ_t^1, composed of the set of sites s(i) with i = 1, …, N/2, and
• σ_t^2, composed of the set of symmetrically located sites s(N−i+1) with i = 1, …, N/2,
and to display the evolution of the difference automaton (DA), whose sites take the value 1 where the two subsystems differ and the value 0 where they coincide.
The mean density ρ_t of active sites of the difference automaton represents the Hamming distance between the sets σ^1 and σ^2. It is clear that the automaton will display a symmetric pattern when lim_{t→∞} ρ_t = 0. For class III and IV rules, a symmetry transition controlled by the parameter p is found. The transition is characterized by the DA behavior:
when p < p_c → lim_{t→∞} ρ_t ≠ 0 (complex non-symmetric patterns),
when p > p_c → lim_{t→∞} ρ_t = 0 (complex symmetric patterns).
The critical value p_c of the parameter signals the transition point.
In Fig. 11 the space-time configurations of automata evolving under rules 18 and 150 are shown for p < p_c. The automata are composed of N = 100 sites and were iterated during T = 400 time steps. Left panels show the automaton evolution in time (increasing from top to bottom) and the right panels display the evolution of the corresponding DA. For p < p_c, complex structures can be observed in the evolution of the DA. As p approaches its critical value p_c, the structures in the evolution of the DA become scarcer, reminiscent of the problem of structures trying to percolate the plane (Pomeau, 1986; Sanchez & Lopez-Ruiz, 2005-a). In Fig. 12 the space-time configurations of the same automata are displayed for p > p_c. Now, the space symmetry of the evolving patterns is clearly visible.
Table 1 shows the numerically obtained values of p_c for different rules displaying complex behavior. It can be seen that some rules cannot sustain symmetric patterns unless those patterns are forced by fully coupling the totality of the symmetric sites (p_c = 1). The rules whose local dynamics verify φ(s_1, s_0, s_2) = φ(s_2, s_0, s_1) can evidently sustain symmetric patterns, and these structures are induced for p_c < 1 by the method explained here.
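This left-right symmetry condition can be checked directly from the rule number; a small sketch under the Wolfram bit encoding assumed earlier:

```python
def is_left_right_symmetric(rule_number):
    """True if the local rule satisfies phi(s1, s0, s2) == phi(s2, s0, s1)."""
    bit = lambda l, c, r: (rule_number >> (4 * l + 2 * c + r)) & 1
    return all(bit(l, c, r) == bit(r, c, l)
               for l in (0, 1) for c in (0, 1) for r in (0, 1))

# Of the rules in Table 1, exactly those with p_c < 1 pass this test:
print([r for r in (18, 22, 30, 54, 60, 90, 105, 110, 122, 126, 146, 150, 182)
       if is_left_right_symmetric(r)])     # rules 30, 60 and 110 are excluded
```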
Finally, in Fig. 13 the asymptotic density of the DA, ρ_t for t → ∞, is plotted for different rules as a function of the coupling probability p. The values of p_c for the different rules appear clearly at the points where ρ → 0.
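A rough sketch of how such a curve could be estimated numerically, reusing the helper functions `rule_table`, `phi` and `gamma_p_symmetry` sketched earlier in this section (lattice size, time horizon and averaging window are illustrative; smaller runs will be noisier):

```python
import random

def asymptotic_da_density(rule_number, p, N=1000, T=500, tail=100):
    """Mean DA density over the last `tail` steps of a run of length T."""
    table = rule_table(rule_number)
    s = [random.randint(0, 1) for _ in range(N)]
    densities = []
    for t in range(T):
        s = gamma_p_symmetry(phi(s, table), p)
        if t >= T - tail:
            differing_pairs = sum(s[i] != s[N - 1 - i] for i in range(N // 2))
            densities.append(differing_pairs / (N // 2))
    return sum(densities) / len(densities)

# Rough scan to locate p_c, e.g. for rule 122 (expected near 0.27 from Table 1):
# for p in (0.15, 0.20, 0.25, 0.30, 0.35):
#     print(p, asymptotic_da_density(122, p))
```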
6 Conclusion
A method to measure statistical complexity in extended systems has been implemented. It has been applied to a transition to spatio-temporal complexity in a coupled map lattice and to a transition to synchronization in two stochastically coupled cellular automata (CA). The statistical indicator shows a peak just in the transition region, clearly marking the change of dynamical behavior in the extended system.
Inspired by stochastic synchronization methods for CA, different schemes for self-synchronization of a single automaton have also been proposed and analyzed. Self-synchronization of a single automaton can be interpreted as a strategy for searching and controlling the structures of the system that are constant in time. In general, it has been found that a competition among all such structures is established, and the system ends up oscillating randomly among them. However, rule 18 occupies a unique position among all rules because, even with a random choice of the neighboring sites, the automaton is able to reach the configuration constant in time.
Also, a transition from asymmetric to symmetric patterns in time-dependent extended systems has been described. It has been shown that one-dimensional cellular automata, started from fully random initial conditions, can be forced to evolve into complex symmetrical patterns by stochastically coupling a proportion p of pairs of sites located at equal distance from the center of the lattice. A nontrivial critical value of p must be surpassed in order to obtain symmetrical patterns during the evolution. This strategy could be used as an alternative way to classify the cellular automata rules with complex behavior between those that support time-dependent symmetric patterns and those which do not.
7 References
Anteneodo, C. & Plastino, A.R. (1996). Some features of the statistical LMC complexity. Phys. Lett. A, Vol. 223, No. 5, 348-354.
Argentina, M. & Coullet, P. (1997). Chaotic nucleation of metastable domains. Phys. Rev. E, Vol. 56, No. 3, R2359-R2362.
Bennett, C.H. (1985). Information, dissipation, and the definition of organization. In: Emerging Syntheses in Science, David Pines, (Ed.), 297-313, Santa Fe Institute, Santa Fe.
Boccaletti, S.; Kurths, J.; Osipov, G.; Valladares, D.L. & Zhou, C.S. (2002). The synchronization of chaotic systems. Phys. Rep., Vol. 366, No. 1-2, 1-101.
Calbet, X. & López-Ruiz, R. (2001). Tendency toward maximum complexity in a non-equilibrium isolated system. Phys. Rev. E, Vol. 63, No. 6, 066116(9).
Chaitin, G. (1966). On the length of programs for computing finite binary sequences. J. Assoc. Comput. Mach., Vol. 13, No. 4, 547-569.
Chaté, H. & Manneville, P. (1987). Transition to turbulence via spatio-temporal intermittency. Phys. Rev. Lett., Vol. 58, No. 2, 112-115.
Crutchfield, J.P. & Young, K. (1989). Inferring statistical complexity. Phys. Rev. Lett., Vol. 63, No. 2, 105-108.
Feldman, D.P. & Crutchfield, J.P. (1998). Measures of statistical complexity: Why? Phys. Lett. A, Vol. 238, No. 4-5, 244-252.
Grassberger, P. (1986). Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys., Vol. 25, No. 9, 907-915.
Hawking, S. (2000). "I think the next century will be the century of complexity", In San José Mercury News, Morning Final Edition, January 23.
Houlrik, J.M.; Webman, I. & Jensen, M.H. (1990). Mean-field theory and critical behavior of coupled map lattices. Phys. Rev. A, Vol. 41, No. 8, 4210-4222.
Ilachinski, A. (2001). Cellular Automata: A Discrete Universe, World Scientific, Inc., River Edge, NJ.
Kaneko, K. (1989). Chaotic but regular posi-nega switch among coded attractors by cluster-size variation. Phys. Rev. Lett., Vol. 63, No. 3, 219-223.
Kolmogorov, A.N. (1965). Three approaches to the definition of quantity of information. Probl. Inform. Theory, Vol. 1, No. 1, 3-11.
Lamberti, W.; Martin, M.T.; Plastino, A. & Rosso, O.A. (2004). Intensive entropic non-triviality measure. Physica A, Vol. 334, No. 1-2, 119-131.
Lempel, A. & Ziv, J. (1976). On the complexity of finite sequences. IEEE Trans. Inform. Theory.
López-Ruiz, R. & Pérez-Garcia, C. (1991). Dynamics of maps with a global multiplicative coupling. Chaos, Solitons and Fractals, Vol. 1, No. 6, 511-528.
López-Ruiz, R. (1994). On Instabilities and Complexity, Ph.D. Thesis, Universidad de Navarra, Pamplona, Spain.
López-Ruiz, R.; Mancini, H.L. & Calbet, X. (1995). A statistical measure of complexity. Phys. Lett. A, Vol. 209, No. 5-6, 321-326.
López-Ruiz, R. & Fournier-Prunaret, D. (2004). Complex behaviour in a discrete logistic model for the symbiotic interaction of two species. Math. Biosc. Eng., Vol. 1, No. 2, 307-324.
López-Ruiz, R. (2005). Shannon information, LMC complexity and Rényi entropies: a straightforward approach. Biophys. Chem., Vol. 115, No. 2-3, 215-218.
Lovallo, M.; Lapenna, V. & Telesca, L. (2005). Transition matrix analysis of earthquake magnitude sequences. Chaos, Solitons and Fractals, Vol. 24, No. 1, 33-43.
Martin, M.T.; Plastino, A. & Rosso, O.A. (2003). Statistical complexity and disequilibrium. Phys. Lett. A, Vol. 311, No. 2-3, 126-132.
McKay, C.P. (2004). What is life? PLOS Biology, Vol. 2, No. 9, 1260-1263.
Menon, G.I.; Sinha, S. & Ray, P. (2003). Persistence at the onset of spatio-temporal intermittency in coupled map lattices. Europhys. Lett., Vol. 61, No. 1, 27-33.
Morelli, L.G. & Zanette, D.H. (1998). Synchronization of stochastically coupled cellular automata. Phys. Rev. E, Vol. 58, No. 1, R8-R11.
Perakh, M. (2004). Defining complexity. In online site: On Talk Reason, paper: www.talkreason.org/articles/complexity.pdf
Pomeau, Y. & Manneville, P. (1980). Intermittent transition to turbulence in dissipative dynamical systems. Commun. Math. Phys., Vol. 74, No. 2, 189-197.
Pomeau, Y. (1986). Front motion, metastability and subcritical bifurcations in hydrodynamics. Physica D, Vol. 23, No. 1-3, 3-11.
Rolf, J.; Bohr, T. & Jensen, M.H. (1998). Directed percolation universality in asynchronous evolution of spatiotemporal intermittency. Phys. Rev. E, Vol. 57, No. 3, R2503-R2506.
Rosso, O.A.; Martín, M.T. & Plastino, A. (2003). Tsallis non-extensivity and complexity measures. Physica A, Vol. 320, 497-511.
Rosso, O.A.; Martín, M.T. & Plastino, A. (2005). Evidence of self-organization in brain electrical activity using wavelet-based informational tools. Physica A, Vol. 347, 444-464.
Sánchez, J.R. & López-Ruiz, R. (2005-a). A method to discern complexity in two-dimensional patterns generated by coupled map lattices. Physica A, Vol. 355, No. 2-4, 633-640.
Sánchez, J.R. & López-Ruiz, R. (2005-b). Detecting synchronization in spatially extended discrete systems by complexity measurements. Discrete Dyn. Nat. Soc., Vol. 2005, No. 3, 337-342.
Sánchez, J.R. & López-Ruiz, R. (2006). Self-synchronization of Cellular Automata: an attempt to control patterns. Lect. Notes Comp. Sci., Vol. 3993, No. 3, 353-359.
Sánchez, J.R. & López-Ruiz, R. (2008). Symmetry pattern transition in Cellular Automata with complex behavior. Chaos, Solitons and Fractals, Vol. 37, No. 3, 638-642.
Shiner, J.S.; Davison, M. & Landsberg, P.T. (1999). Simple measure for complexity. Phys. Rev. E, Vol. 59, No. 2, 1459-1464.
Toffoli, T. & Margolus, N. (1987). Cellular Automata Machines: A New Environment for Modeling, The MIT Press, Cambridge, Massachusetts.
Wolfram, S. (1983). Statistical mechanics of cellular automata. Rev. Mod. Phys., Vol. 55, No. 3, 601-644.
Yu, Z. & Chen, G. (2000). Rescaled range and transition matrix analysis of DNA sequences. Comm. Theor. Phys. (Beijing, China), Vol. 33, No. 4, 673-678.
Zimmermann, M.G.; Toral, R.; Piro, O. & San Miguel, M. (2000). Stochastic spatiotemporal intermittency and noise-induced transition to an absorbing phase. Phys. Rev. Lett., Vol. 85, No. 17, 3612-3615.
Zero-sum stopping game associated
with threshold probability
Yoshio Ohtsubo
Kochi University
Japan
Abstract
We consider a zero-sum stopping game (Dynkin's game) with a threshold probability criterion in discrete time stochastic processes. We first obtain a fundamental characterization of the value function of the game and of the optimal stopping times for both players, as in the classical Dynkin's game, but the value function of the game and the optimal stopping time for each player depend upon a threshold value. We also give properties of the value function of the game with respect to the threshold value. These are applied to an independent model and we explicitly find a value function of the game and optimal stopping times for both players in a special example.
1 Introduction
In the classical Dynkin's game, a standard criterion function is the expected reward (e.g. Dynkin (1969) and Neveu (1975)). It is, however, known that this criterion is quite insufficient to characterize the decision problem from the point of view of the decision maker and it is necessary to select other criteria to reflect the variability of risk features for the problem (e.g. White (1988)). In an optimal stopping problem, Denardo and Rothblum (1979) consider an optimal stopping problem with an exponential utility function as a criterion function in a finite Markov decision chain and use linear programming to compute an optimal policy. Kadota et al. (1996) investigate an optimal stopping problem with a general utility function in a denumerable Markov chain. They give a sufficient condition for a one-step look ahead (OLA) stopping time to be optimal and characterize a property of an OLA stopping time for risk-averse and risk-seeking utilities. Bojdecki (1979) formulates an optimal stopping problem which is concerned with maximizing the probability of a certain event and gives necessary and sufficient conditions for the existence of an optimal stopping time. He also applies the results to a version of the discrete-time disorder problem. Ohtsubo (2003) considers optimal stopping problems with a threshold probability criterion in a Markov process, characterizes optimal values and finds optimal stopping times for finite and infinite horizon cases, and in Ohtsubo (2003) he also investigates an optimal stopping problem with an analogous objective for discrete time stochastic processes; these are applied to a secretary problem, a parking problem and job search problems.
On the other hand, many authors propose a variety of criteria and investigate Markov decision processes for their criteria, instead of the standard criteria, that is, the expected discounted total reward and the average expected reward per unit time (see White (1988) for a survey). In particular, White (1993), Wu and Lin (1999), Ohtsubo and Toyonaga (2002) and Ohtsubo (2004) consider a problem in which we minimize a threshold probability. Such a problem is called a risk minimizing problem and is applicable to the percentile of the losses or Value-at-Risk (VaR) in finance (e.g. Filar et al. (1995) and Uryasev (2000)).
In this paper we consider Dynkin's game with a threshold probability in a random sequence. In Section 3 we characterize the value function of the game and optimal stopping times for both players and show that the value function of the game has the properties of a distribution function with respect to the threshold value, except for right continuity. In Section 4 we investigate an independent model, as an application of our game, and we explicitly find a value function, which is right continuous, and optimal stopping times for both players.
2 Formulation of problem
Let (Ω, F, P) be a probability space and (F_n)_{n∈N} an increasing family of sub-σ-fields of F, where N = {0, 1, 2, …} is the discrete time space. Let X = (X_n)_{n∈N}, Y = (Y_n)_{n∈N}, W = (W_n)_{n∈N} be sequences of random variables defined on (Ω, F, P) and adapted to (F_n) such that X_n ≤ W_n ≤ Y_n almost surely (a.s.) for all n ∈ N and P(sup_n X_n^+ + sup_n Y_n^− < ∞) = 1, where x^+ = max(0, x) and x^− = (−x)^+. The second assumption holds if the random variables sup_n X_n^+ and sup_n Y_n^− are integrable, which are the standard conditions given in the classical Dynkin's game. Also let Z be an arbitrary integrable random variable on (Ω, F, P). For each n ∈ N, we denote by Γ_n the class of (F_n)-stopping times τ such that τ ≥ n a.s.
We consider the following zero-sum stopping game. There are two players; the first and the second player choose stopping times τ and σ in Γ_0, respectively. Then the reward paid to the first player by the second is equal to
g(τ, σ) = X_τ I_(τ<σ) + Y_σ I_(σ<τ) + W_τ I_(τ=σ<∞) + Z I_(τ=σ=∞),
where I_A is the indicator function of a set A in F. In the classical Dynkin's game the aim of the first player is to maximize the expected gain E[g(τ, σ)] with respect to τ ∈ Γ_0 and that of the second is to minimize this expectation with respect to σ ∈ Γ_0. In our problem the objective of the first player is to minimize the threshold probability P[g(τ, σ) ≤ r] with respect to τ ∈ Γ_0 and the second maximizes this probability with respect to σ ∈ Γ_0, for a given threshold value r.
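To fix ideas, a minimal sketch of the payoff along one realized path; encoding the stopping times as integer indices, with None standing for an infinite stopping time, is a convention of this sketch:

```python
import math

def payoff(tau, sigma, X, Y, W, Z):
    """g(tau, sigma) for one realized path of the sequences X, Y, W and the variable Z."""
    t = math.inf if tau is None else tau
    s = math.inf if sigma is None else sigma
    if t == s == math.inf:
        return Z                 # neither player ever stops
    if t < s:
        return X[t]              # the first player stops first
    if s < t:
        return Y[s]              # the second player stops first
    return W[t]                  # simultaneous stopping at a finite time

# The first player chooses tau to make P[g(tau, sigma) <= r] small,
# the second player chooses sigma to make it large.
```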
We can define the processes of minimax and maxmin values corresponding to our problem by

V̄_n(r) = ess inf_{τ∈Γ_n} ess sup_{σ∈Γ_n} P[g(τ, σ) ≤ r | F_n],
V̲_n(r) = ess sup_{σ∈Γ_n} ess inf_{τ∈Γ_n} P[g(τ, σ) ≤ r | F_n],
respectively, where P[g(τ, σ) ≤ r | F_n] is the conditional probability of the event {g(τ, σ) ≤ r} given F_n. See Neveu (1975) for the definition of ess sup and ess inf. We also define the sequences of minimax and maxmin values v̄_n(r) = inf_{τ∈Γ_n} sup_{σ∈Γ_n} P[g(τ, σ) ≤ r] and v̲_n(r) = sup_{σ∈Γ_n} inf_{τ∈Γ_n} P[g(τ, σ) ≤ r].
For a threshold value r, define X_n(r) = I_(X_n ≤ r), Y_n(r) = I_(Y_n ≤ r), W_n(r) = I_(W_n ≤ r) and Z(r) = I_(Z ≤ r). Since X_n ≤ W_n ≤ Y_n, we see that Y_n(r) ≤ W_n(r) ≤ X_n(r) for all r. Thus our problem is just a special version of the classical Dynkin's game for a fixed threshold value r.
We first have the three propositions below, for a fixed r, from the results on Dynkin's game (e.g. see Neveu (1975) and Ohtsubo (2000)). In the following propositions, the notation mid(a, b, c) denotes the middle value among the constants a, b and c. For example, when a < b < c then mid(a, b, c) = b. If a < b, then mid(a, b, c) = max(a, min(b, c)) = min(b, max(a, c)).
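A tiny sketch of the mid operator and the identities just stated:

```python
def mid(a, b, c):
    """Middle value of the three arguments."""
    return sorted((a, b, c))[1]

assert mid(1, 2, 3) == 2
assert mid(1, 2, 0) == max(1, min(2, 0)) == min(2, max(1, 0)) == 1   # the a < b case
```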
Proposition 3.1. Let r be arbitrary.
(a) For each n ∈ N, V̄_n(r) = V̲_n(r), say V_n(r), and v̄_n(r) = v̲_n(r) = E[V_n(r)], say v_n(r).
(b) (V_n(r)) is the unique sequence of random variables satisfying the equalities
V_n = mid(X_n(r), Y_n(r), E[V_{n+1} | F_n]),  n ∈ N,
and the inequalities
X̲_n(r) ≤ V_n ≤ Ȳ_n(r),  n ∈ N,
where (X̲_n(r)) is the largest submartingale dominated by min(X_n(r), E[Z(r) | F_n]) and (Ȳ_n(r)) is the smallest supermartingale dominating max(Y_n(r), E[Z(r) | F_n]), that is,
Proposition 3.2. Let r be arbitrary. For each k, n with k ≥ n, γ^k_n(r) ≥ γ^{k+1}_n(r), and for each n ∈ N, lim_{k→∞} γ^k_n(r) = X̲_n(r).
For k ≥ n, let
β^k_k(r) = X̲_k(r),
β^k_n(r) = mid(X_n(r), Y_n(r), E[β^k_{n+1}(r) | F_n]),  n < k.
Proposition 3.3. Let r be arbitrary. For each k ≥ n, β^k_n(r) ≤ β^{k+1}_n(r), and for each n, lim_{k→∞} β^k_n(r) = V_n(r).
Theorem 3.1. For each n, V_n(·) has the properties of a distribution function on R except for right continuity.
Proof. We first notice that Z(r) = I_(Z≤r) is a nondecreasing function in r. From the definition of conditional expectation and the dominated convergence theorem, E[Z(r)|F_k] is also nondecreasing in r for each k. Since X_k(r) = I_(X_k≤r) is nondecreasing in r for each k ∈ N, we see that γ^k_k(r) = min(X_k(r), E[Z(r)|F_k]) is a nondecreasing function in r. By induction, γ^k_n(r) is nondecreasing in r for each k ≥ n. Since the sequence {γ^k_n(r)}_{k≥n} of functions is nonincreasing and X̲_n(r) = lim_{k→∞} γ^k_n(r), it follows that X̲_n(r) is nondecreasing for each n, and in particular so is β^k_k(r) = X̲_k(r). Similarly, it follows by induction that β^k_n(r) is nondecreasing in r for each n ≤ k, since Y_n(r) is nondecreasing in r. From Proposition 3.3, the monotonicity of the sequence {β^k_n(r)}_{k≥n} implies that V_n(r) = lim_{k→∞} β^k_n(r) is a nondecreasing function in r.
Next, since we have V_n(r) ≤ X_n(r) and we see that X_n(r) = I_(X_n≤r) = 0 for a sufficiently small r, it follows that lim_{r→−∞} V_n(r) = 0. Similarly, we see that lim_{r→∞} V_n(r) = 1, since we have V_n(r) ≥ Y_n(r) and we see that Y_n(r) = 1 for a sufficiently large r. Thus the theorem is completely proved.
We give an example below in which the value function V_n(r) is not right continuous at some r.
Example 3.1. Let X_n = W_n = −1 and Y_n = 1/n for each n, and let Z = 1. We shall obtain the value function V_n(r) by Propositions 3.2 and 3.3. Since X_k(r) = I_[−1,∞)(r) and Z(r) = I_[1,∞)(r), we have γ^k_k(r) = I_[1,∞)(r). By induction, we easily see that γ^k_n(r) = I_[1,∞)(r) for each k ≥ n, and hence X̲_n(r) = lim_{k→∞} γ^k_n(r) = I_[1,∞)(r). Next, since Y_{k−1}(r) = I_[1/(k−1),∞)(r),
4 Independent model

We shall consider an independent sequence as a special model. Let (W_n)_{n∈N} be a sequence of independently distributed random variables with P(sup_n |W_n| < ∞) = 1, and let Z be a random variable which is independent of (W_n)_{n∈N}. For each n ∈ N let F_n be the σ-field generated by {W_k ; k ≤ n}. Also, for each n ∈ N, let X_n = W_n − c and Y_n = W_n + d, where c and d are positive constants.
Since F_n is independent of {W_k ; k > n}, the relation in Proposition 3.1 (b) is represented as
V_n(r) = mid(I_(W_n ≤ r+c), I_(W_n ≤ r−d), v_{n+1}(r)),  n ∈ N,
where v_{n+1}(r) = E[V_{n+1}(r)].
Example 4.1. Let W be a uniformly distributed random variable on the interval [0, 1] and assume that W_n has the same distribution as W for all n ∈ N and that 0 < c, d < 1/2. Then, since (W_n)_{n∈N} is a sequence of independently and identically distributed random variables, V_n(r) does not depend on n. Hence, letting V(r) = V_n(r), n ∈ N, and v(r) = E[V(r)], we have
V(r) = mid(I_(W≤r+c), I_(W≤r−d), v(r)).
When W < r − d, we have I_(W≤r+c) = I_(W≤r−d) = 1, so V(r) = 1. When W ≥ r + c, we have V(r) = 0, since I_(W≤r+c) = I_(W≤r−d) = 0. Thus we obtain
V(r) = I_(W≤r−d) + v(r) I_(r−d≤W<r+c).
Taking the expectation on both sides, we see that
v(r) = P(W ≤ r − d) + v(r) P(r − d ≤ W < r + c).
If r < d then we have v(r) = v(r) P(0 ≤ W < r + c). Since r < d < 1/2 < 1 − c, P(0 ≤ W < r + c) < 1 and hence v(r) = 0. If d ≤ r < 1 − c, then we obtain v(r) = (r − d)/(1 − c − d), since P(W ≤ r − d) = r − d and P(r − d ≤ W < r + c) = c + d. Similarly, if r ≥ 1 − c then we have v(r) = 1. Thus it follows that
v(r) = 0 for r < d,   v(r) = (r − d)/(1 − c − d) for d ≤ r < 1 − c,   v(r) = 1 for r ≥ 1 − c.
E[Ȳ(r)] = Ȳ(r) = Ȳ_n(r) = P[Z ≤ r] I_(−∞,d)(r) + I_[d,∞)(r).
Now v(r) is a distribution function in r. Let U be a random variable with distribution function v(r). Then we see that E[U] = (1 − c + d)/2.
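A quick numerical sanity check of this closed form (a sketch only: it iterates the one-step relation v = P(W ≤ r−d) + v·P(r−d ≤ W < r+c) for the uniform case; the values of c, d and r are illustrative):

```python
def v_uniform(r, c, d, iterations=200):
    """Fixed-point iteration of v = P(W <= r-d) + v * P(r-d <= W < r+c), W ~ U[0, 1]."""
    clip = lambda x: min(max(x, 0.0), 1.0)
    p_low = clip(r - d)                   # P(W <= r - d)
    p_mid = clip(r + c) - clip(r - d)     # P(r - d <= W < r + c)
    v = 0.5
    for _ in range(iterations):           # converges since p_mid < 1 here
        v = p_low + v * p_mid
    return v

c, d, r = 0.2, 0.1, 0.5
print(v_uniform(r, c, d), (r - d) / (1 - c - d))   # both approximately 0.5714
```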
We shall next compare our model with the classical Dynkin's game in this example. Let

J̄_n = ess inf_{τ∈Γ_n} ess sup_{σ∈Γ_n} E[g(τ, σ) | F_n],
J̲_n = ess sup_{σ∈Γ_n} ess inf_{τ∈Γ_n} E[g(τ, σ) | F_n]
be the minimax and maxmin value processes, respectively. Then we have J̄_n = J̲_n = J, say, since J̄_n = J̲_n does not depend upon n in this example. Also, by solving the relation
Bojdecki, T. (1979). Probability maximizing approach to optimal stopping and its application to a disorder problem. Stochastics, Vol. 3, 61-71.
Chow, Y.S.; Robbins, H. & Siegmund, D. (1971). Great Expectations: The Theory of Optimal Stopping. Houghton Mifflin, Boston.
DeGroot, M.H. (1970). Optimal Statistical Decisions. McGraw Hill, New York.
Denardo, E.V. & Rothblum, U.G. (1979). Optimal stopping, exponential utility, and linear programming. Math. Programming, Vol. 16, 228-244.
Dynkin, E.B. (1969). Game variant of a problem on optimal stopping. Soviet Math. Dokl., Vol. 10, 270-274.
Filar, J.A.; Krass, D. & Ross, K.W. (1995). Percentile performance criteria for limiting average Markov decision processes. IEEE Trans. Automat. Control, Vol. 40, 2-10.
Kadota, Y.; Kurano, M. & Yasuda, M. (1996). Utility-optimal stopping in a denumerable Markov chain. Bull. Informatics and Cybernetics, Vol. 28, 15-21.
Neveu, J. (1975). Discrete-Parameter Martingales. North-Holland, New York.
Ohtsubo, Y. (2000). The values in Dynkin stopping problem with some constraints. Mathematica Japonica, Vol. 51, 75-81.
Ohtsubo, Y. & Toyonaga, K. (2002). Optimal policy for minimizing risk models in Markov decision processes. J. Math. Anal. Appl., Vol. 271, 66-81.
Ohtsubo, Y. (2003). Value iteration methods in risk minimizing stopping problem. J. Comput. Appl. Math., Vol. 152, 427-439.
Ohtsubo, Y. (2003). Risk minimization in optimal stopping problem and applications. J. Operations Research Society of Japan, Vol. 46, 342-352.
Ohtsubo, Y. (2004). Optimal threshold probability in undiscounted Markov decision processes with a target set. Applied Math. Computation, Vol. 149, 519-532.
Shiryayev, A.N. (1978). Optimal Stopping Rules. Springer, New York.
Uryasev, S.P. (2000). Introduction to theory of probabilistic functions and percentiles (Value-at-Risk). In: Probabilistic Constrained Optimization, Uryasev, S.P., (Ed.), Kluwer Academic Publishers, Dordrecht, pp. 1-25.
White, D.J. (1988). Mean, variance and probabilistic criteria in finite Markov decision processes: a review. J. Optim. Theory Appl., Vol. 56, 1-29.
White, D.J. (1993). Minimising a threshold probability in discounted Markov decision processes. J. Math. Anal. Appl., Vol. 173, 634-646.
Wu, C. & Lin, Y. (1999). Minimizing risk models in Markov decision processes with policies depending on target values. J. Math. Anal. Appl., Vol. 231, 47-67.
Stochastic independence with respect to upper
and lower conditional probabilities defined
by Hausdorff outer and inner measures
Serena Doria
University G.d’Annunzio
Italy
1 Introduction
A new model of coherent upper conditional prevision is proposed in a metric space. It is defined by the Choquet integral with respect to the s-dimensional Hausdorff outer measure if the conditioning event has positive and finite Hausdorff outer measure in its dimension s. Otherwise, if the conditioning event has Hausdorff outer measure in its dimension equal to zero or infinity, it is defined by a 0-1 valued finitely, but not countably, additive probability.
If the conditioning event has positive and finite Hausdorff outer measure in its dimension, the coherent upper conditional prevision is proven to be monotone, comonotonically additive, submodular and continuous from below.
Given a coherent upper conditional prevision, the coherent lower conditional prevision is defined as its conjugate.
In Doria (2007) coherent upper and lower conditional probabilities are obtained when only 0-1 valued random variables are considered.
The aim of this chapter is to introduce a new definition of stochastic independence with respect to coherent upper and lower conditional probabilities defined by Hausdorff outer and inner measures.
A concept related to the definition of conditional probability is stochastic independence. In a continuous probability space where probability is usually assumed equal to the Lebesgue measure, we have that finite, countable and fractal sets (i.e. the sets with non-integer Hausdorff dimension) have probability equal to zero. For these sets the standard definition of independence, given by the factorization property, is always satisfied since both members of the equality are zero.
The notion of s-independence with respect to Hausdorff outer and inner measures is introduced to check probabilistic dependence for sets with probability equal to zero, which are always independent according to the standard definition given by the factorization property. Moreover, s-independence is compared with the notion of epistemic independence with respect to upper and lower conditional probabilities (Walley, 1991).
The outline of the chapter is the following.
In Section 2 it is proven that a conditional prevision defined by the Radon-Nikodym derivative may not be coherent, and examples are given.
Trang 18In Section 3 coherent upper conditional previsions are defined in a metric space by the
Cho-quet integral with respect to Hausdorff outer measure if the conditioning event has positive
and finite Hausdorff outer measure in its dimension Otherwise they are defined by a 0-1
valued finitely, but not countably, additive probability Their properties are proven
In Section 4 the notions of s-irrelevance and s-independence with respect to coherent upper and lower conditional probabilities defined by Hausdorff outer and inner measures are introduced. It is proven that the notions of epistemic irrelevance and s-irrelevance are not always related. In particular, we give conditions under which an event B is epistemically irrelevant to an event A but is not s-irrelevant. In the Euclidean metric space it is proven that a necessary condition for s-irrelevance between events is that the Hausdorff dimension of the two events and of their intersection is equal to the Hausdorff dimension of Ω. Finally, sufficient conditions for s-irrelevance between Souslin subsets of ℝⁿ are given.
In Section 5 some fractal sets are proven to be s-dependent since they do not satisfy the necessary condition for s-independence. In particular, the attractor of a finite family of similitudes and its boundary are proven to be s-dependent if the open set condition holds. Moreover, a condition under which two middle Cantor sets are s-dependent is given.
It is important to note that all these sets are stochastically independent according to the axiomatic definition given by the factorization property if probability is defined by the Lebesgue measure.
In Section 6 space-filling curves, such as the Peano curve and the Hilbert curve, are proven to be s-independent.
2 Conditional expectation and coherent conditional prevision
Partial knowledge is a natural interpretation of conditional probability. This interpretation is formalized differently in the axiomatic approach and in the subjective approach, where conditional probability is defined, respectively, by the Radon-Nikodym derivative or by the axioms of coherence. In both cases conditional probability is obtained as the restriction of conditional expectation, or conditional prevision, to the class of indicator functions of events.
Some critical situations, which highlight that the axiomatic definition of conditional probability is not always a useful tool to represent partial knowledge, have been proposed in the literature and are analyzed in this section. In particular, the role of the Radon-Nikodym derivative in the assessment of a coherent conditional prevision is investigated.
It is proven that, whenever the σ-field of the conditioning events is properly contained in the σ-field of the probability space and contains all singletons, the Radon-Nikodym derivative cannot be used as a tool to define coherent conditional previsions. This is due to the fact that one of the defining properties of the Radon-Nikodym derivative, namely measurability with respect to the σ-field of the conditioning events, contradicts a necessary condition for coherence.
This analysis points out the necessity of introducing a different tool to define coherent conditional previsions.
2.1 Conditional expectation and Radon-Nikodym derivative
In the axiomatic approach (Billingsley, 1986) conditional expectation is defined with respect to a σ-field G of conditioning events by the Radon-Nikodym derivative. Let (Ω, F, P) be a probability space, let G be a σ-field of subsets of Ω contained in F, and let X be an integrable random variable on (Ω, F, P). Define a measure ν on G by ν(G) = ∫_G X dP. This measure is finite and absolutely continuous with respect to P. So there exists a function, the Radon-Nikodym derivative denoted by E[X|G], defined on Ω, G-measurable, integrable and satisfying the functional equation

∫_G E[X|G] dP = ∫_G X dP for every G ∈ G.

If X is the indicator function of an event A belonging to F, then E[X|G] = E[A|G] = P[A|G] is a version of the conditional probability.
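The finite case makes this definition concrete. The following sketch (a toy illustration on a finite space, with an arbitrarily chosen partition and random variable; it is not the measure-theoretic construction above) computes E[X|G] as the atom-wise average and checks the functional equation on a union of atoms.

import numpy as np

# Toy illustration: on a finite space, conditional expectation with respect to the
# sigma-field generated by a partition is the atom-wise weighted average.
rng = np.random.default_rng(0)
n = 12
P = np.full(n, 1.0 / n)            # uniform probability on {0, ..., 11}
X = rng.normal(size=n)             # a random variable (bounded on a finite space)

partition = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]   # atoms

E_X_given_G = np.empty(n)
for atom in partition:
    # E[X|G] is constant on each atom and equals the conditional average there
    E_X_given_G[atom] = np.sum(X[atom] * P[atom]) / np.sum(P[atom])

# Functional equation: for every G in the sigma-field generated by the atoms,
# the integral of E[X|G] over G equals the integral of X over G.
G = np.concatenate([partition[0], partition[2]])   # a union of atoms
assert np.isclose(np.sum(E_X_given_G[G] * P[G]), np.sum(X[G] * P[G]))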
Conditional probability can be used to represent partial information (Billingsley, 1986, Section 33).
A probability space (Ω, F, P) can be used to represent a random phenomenon or an experiment whose outcome ω is drawn from Ω according to the probability given by P. Partial information about the experiment can be represented by a sub σ-field G of F in the following way: an observer does not know which ω has been drawn, but he knows, for each H ∈ G, whether ω belongs to H or to H^c. A sub σ-field G of F can thus be identified with partial information about the random experiment and, having fixed A in F, conditional probability can be used to represent partial knowledge about A given the information in G. If conditional probability is defined by the Radon-Nikodym derivative, denoted by P[A|G], then by the standard definition (Billingsley, 1986, p. 52) an event A is independent of the σ-field G if it is independent of each H ∈ G, that is, P[A|G] = P(A) with probability 1. In (Billingsley, 1986, Example 33.11) it is shown that the interpretation of conditional probability in terms of partial knowledge breaks down in certain cases. Let Ω = [0,1], let F be the Borel σ-field of [0,1] and let P be the Lebesgue measure on F. Let G be the sub σ-field of sets that are either countable or co-countable. Then P(A) is a version of the conditional probability P[A|G] defined by the Radon-Nikodym derivative, because P(G) is either 0 or 1 for every G ∈ G. So every event A is independent of the information represented by G, and this contradicts the fact that the information represented by G is complete, since G contains all the singletons of Ω.
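Spelling out the verification behind this example (a routine computation, added here for completeness): the defining equation of P[A|G] requires ∫_G P[A|G] dP = P(A ∩ G) for every G ∈ G. Taking the constant candidate P[A|G] = P(A), if G is countable both sides vanish, while if G is co-countable then P(G) = 1 and

∫_G P(A) dP = P(A) P(G) = P(A) = P(A ∩ G),

since P(A ∩ G^c) ≤ P(G^c) = 0; constants are G-measurable, so P(A) is indeed a version of P[A|G].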
2.2 Coherent upper conditional previsions
In the subjective probabilistic approach (de Finetti, 1970; Dubins, 1975; Walley, 1991) coherent upper conditional previsions P(·|B) are functionals, defined on a linear space of bounded random variables, satisfying the axioms of coherence.
In Walley (1991) coherent upper conditional previsions are defined when the conditioning events are the sets of a partition.
Definition 1. Let Ω be a non-empty set and let B be a partition of Ω. For every B ∈ B let K(B) be a linear space of bounded random variables defined on B. Then separately coherent upper conditional previsions are functionals P(·|B) defined on K(B) such that the following conditions hold for every X and Y in K(B) and every strictly positive constant λ:
• 1) P(X|B) ≤ sup(X|B);
• 2) P(λX|B) = λP(X|B) (positive homogeneity);
• 3) P(X+Y|B) ≤ P(X|B) + P(Y|B) (subadditivity);
• 4) P(B|B) = 1.
Coherent upper conditional previsions can always be extended to coherent upper previsions on the class L(B) of all bounded random variables defined on B.
Suppose that P̄(X|B) is a coherent upper conditional prevision on K; then its conjugate coherent lower conditional prevision is defined by P̲(X|B) = −P̄(−X|B). If for every X belonging to K we have P̄(X|B) = P̲(X|B) = P(X|B), then P(X|B) is called a coherent linear conditional prevision (de Finetti, 1970) and it is a linear positive functional on K.
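A minimal numerical sketch of these notions (assuming, purely for illustration, that the upper prevision arises as the upper envelope of a finite set of linear previsions on a four-point conditioning event; this is not the Hausdorff-measure model of this chapter) checks conditions 1)-4) of Definition 1 and the conjugacy relation.

import numpy as np

# Upper and lower conditional previsions on a conditioning event B with four points,
# taken as upper/lower envelopes of a small, arbitrarily chosen credal set.
credal_set = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
    [0.10, 0.20, 0.30, 0.40],
])

def upper(X):
    # upper prevision: largest expectation over the credal set
    return float(np.max(credal_set @ X))

def lower(X):
    # conjugate lower prevision
    return -upper(-X)

X = np.array([1.0, 2.0, 0.5, 3.0])
Y = np.array([0.0, 1.0, 1.0, 0.0])
lam = 2.5

assert upper(X) <= X.max()                           # 1) dominated by sup(X|B)
assert np.isclose(upper(lam * X), lam * upper(X))    # 2) positive homogeneity
assert upper(X + Y) <= upper(X) + upper(Y) + 1e-12   # 3) subadditivity
assert np.isclose(upper(np.ones(4)), 1.0)            # 4) P(B|B) = 1
assert lower(X) <= upper(X)                          # lower never exceeds upper

When the credal set shrinks to a single probability vector, upper and lower previsions coincide and the functional becomes linear, in the sense of Definition 2 below.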
Definition 2. Let Ω be a non-empty set and let B be a partition of Ω. For every B ∈ B let K(B) be a linear space of bounded random variables defined on B. Then linear coherent conditional previsions are functionals P(·|B) defined on K(B) such that the following conditions hold for every X and Y in K(B) and every strictly positive constant λ:
• 1) P(X|B) ≤ sup(X|B);
• 2) P(λX|B) = λP(X|B) (positive homogeneity);
• 3) P(X+Y|B) = P(X|B) + P(Y|B) (additivity);
• 4) P(B|B) = 1.
In Dubins (1975) coherent conditional probabilities are defined when the family of the conditioning events is a field of subsets of Ω.
Definition 3. Let Ω be a non-empty set and let F and G be two fields of subsets of Ω, with G ⊆ F. P is a finitely additive conditional probability on (F, G) if it is a real function defined on F × G0, where G0 = G − {∅}, such that the following conditions hold:
• I) given any H ∈ G0 and A1, ..., An ∈ F with Ai ∩ Aj = ∅ for i ≠ j, the function P(·|H) defined on F is such that P(A|H) ≥ 0, P(∪_{k=1}^{n} Ak|H) = ∑_{k=1}^{n} P(Ak|H) and P(Ω|H) = 1;
• II) P(H|H) = 1 if H ∈ G0;
• III) given E ∈ F, H ∈ F, with A ∈ G0 and EA ∈ G0, then P(EH|A) = P(E|A)P(H|EA).
From conditions I) and II) we have
II') P(A|H) = 1 if A ∈ F, H ∈ G0 and H ⊆ A.
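As a quick numerical check of condition III) (a toy example chosen only for illustration, with all conditioning events taken in G0): let Ω = {1, ..., 6} with the uniform probability, A = Ω, E = {2, 4, 6} and H = {4, 5, 6}. Then

P(EH|A) = P({4, 6}) = 1/3 and P(E|A) P(H|EA) = (1/2) · (2/3) = 1/3,

so the two sides of III) agree.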
These conditional probabilities are coherent in the sense of de Finetti, since conditions I), II), III) are sufficient for the coherence of P on C = F × G0 when F and G are fields of subsets of Ω with G ⊆ F, or when G is an additive subclass of F; otherwise, if F and G are two arbitrary families of subsets of Ω such that Ω ∈ F, the previous conditions are necessary for the coherence but not sufficient.
2.3 Coherent conditional previsions and the Radon-Nikodym derivative
In this subsection the role of the Radon-Nikodym derivative in the assessment of a coherent conditional prevision is analyzed.
The definitions of conditional expectation and coherent linear conditional prevision can be compared when the σ-field G is generated by the partition B. Let G be equal to, or contained in, the σ-field generated by a countable class C of subsets of F, and let B be the partition generated by the class C. Denote by Ω' = B and by ϕ_B the function from Ω to Ω' that associates to every ω ∈ Ω the atom B of the partition B that contains ω; then we have that P(A|G) = P(A|B) ∘ ϕ_B for every A ∈ F (Koch, 1997, p. 262).
The next theorem shows that, whenever the σ-field G of the conditioning events is properly contained in F and contains all singletons of [0,1], the conditional prevision defined by the Radon-Nikodym derivative is not coherent. This occurs because one of the defining properties of conditional expectation, namely measurability with respect to the σ-field of conditioning events, contradicts a necessary condition for coherence of a linear conditional prevision. A bounded random variable is called B-measurable, or measurable with respect to the partition B (Walley, 1991, p. 291), if it is constant on the atoms B of the partition. If, for every B belonging to B, the P(X|B) are coherent linear conditional previsions and X is B-measurable, then P(X|B) = X (Walley, 1991, p. 292). This necessary condition for coherence is not always satisfied if P(X|B) is defined by the Radon-Nikodym derivative.
Theorem 1. Let Ω = [0,1], let F be the Borel σ-field of [0,1] and let P be the Lebesgue measure on F. Let G be a sub σ-field properly contained in F and containing all singletons of [0,1]. Let B be the partition of all singletons of [0,1] and let X be the indicator function of an event A belonging to F − G. If we define the conditional prevision P(X|{ω}) equal to the Radon-Nikodym derivative with probability 1, that is
P(X|{ω}) = E[X|G]
except on a subset N of [0,1] of P-measure zero, then the conditional prevision P(X|{ω}) is not coherent.
Proof. If the equality P(X|{ω}) = E[X|G] holds with probability 1, then we have that, with probability 1, the linear conditional prevision P(X|{ω}) is different from X, the indicator function of A; in fact, having fixed A in F − G, the indicator function X is not G-measurable, so it does not satisfy one of the defining properties of the Radon-Nikodym derivative and therefore cannot be taken as the conditional expectation according to the axiomatic definition. So the linear conditional prevision P(X|{ω}) does not satisfy the necessary condition for coherence, P(X|{ω}) = X for every singleton {ω} of G.
Example 1 (Billingsley, 1986, Example 33.11). Let Ω = [0,1], let F be the Borel σ-field of Ω, let P be the Lebesgue measure on F and let G be the sub σ-field of F of sets that are either countable or co-countable. Let B be the partition of all singletons of Ω; if the linear conditional prevision is defined equal, with probability 1, to the conditional expectation defined by the Radon-Nikodym derivative, we have that
P(X|B) = E[X|G] = P(X).
So when X is the indicator function of an event A = [a, b] with 0 < a < b < 1, then P(X|B) = P(A), and it does not satisfy the necessary condition for coherence, that is P(X|{ω}) = X for every singleton {ω} of G.
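Concretely, the failure can be spelled out as follows: here P(A) = b − a, so the prescription assigns P(X|{ω}) = b − a for almost every ω ∈ [0,1], whereas coherence would require P(X|{ω}) = X(ω), that is 1 for ω ∈ [a, b] and 0 otherwise; since 0 < b − a < 1, the two values never agree.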
Theorem 1 and Example 1 make evident the necessity of introducing a new tool to define coherent linear conditional previsions.
3 Coherent upper conditional previsions defined by Hausdorff outer measures
In this section coherent upper conditional previsions are defined by the Choquet integral with respect to Hausdorff outer measures if the conditioning event B has positive and finite Hausdorff outer measure in its dimension. Otherwise, if the conditioning event B has Hausdorff outer measure in its dimension equal to zero or infinity, they are defined by a 0-1 valued finitely, but not countably, additive probability.
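As a concrete orientation (standard facts about Hausdorff measures, recalled here as examples rather than taken from this chapter): the middle third Cantor set C has Hausdorff dimension s = ln 2 / ln 3 and h^s(C) = 1, so 0 < h^s(C) < ∞ and the Choquet-integral definition applies when conditioning on C; the set ℚ ∩ [0,1] has Hausdorff dimension 0 and h^0(ℚ ∩ [0,1]) = ∞, since h^0 is the counting measure and the set is countably infinite, so in that case the 0-1 valued finitely additive probability is used instead.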