
Research Article

A Generalization of the Havrda-Charvat and Tsallis Entropy and Its Axiomatic Characterization

Satish Kumar¹ and Gurdas Ram²

¹ Department of Mathematics, College of Natural Sciences, Arba Minch University, Arba Minch, Ethiopia

² Department of Applied Sciences, Maharishi Markandeshwar University, Solan, Himachal Pradesh 173229, India

Correspondence should be addressed to Satish Kumar; drsatish74@rediffmail.com

Received 3 September 2013; Revised 20 December 2013; Accepted 20 December 2013; Published 19 February 2014

Academic Editor: Chengjian Zhang

Copyright © 2014 S. Kumar and G. Ram. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this communication, we characterize a measure of information of types $\alpha$, $\beta$, and $\gamma$ by taking certain axioms parallel to those considered earlier by Havrda and Charvat, along with the recursive relation
$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\,(p_1+p_2)^{\alpha/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\,(p_1+p_2)^{\beta/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right),$$
$\alpha \neq \gamma \neq \beta$, $\alpha, \beta, \gamma > 0$. Some properties of this measure are also studied. This measure includes Shannon's information measure as a special case.

1. Introduction

Shannon's measure of entropy for a discrete probability distribution
$$P = (p_1, \dots, p_n), \qquad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1, \tag{1}$$
given by
$$H(P) = -\sum_{i=1}^{n} p_i \log p_i, \tag{2}$$

has been characterized in several ways (see Aczél and Daróczy [1]). Out of the many ways of characterization, two elegant approaches are found in the work of (i) Faddeev [2], who uses the branching property, namely,

$$H_n(p_1,\dots,p_n) = H_{n-1}(p_1+p_2, p_3,\dots,p_n) + (p_1+p_2)\,H_2\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right), \tag{3}$$
$n = 3, 4, \dots$, for the above distribution $P$, as the basic postulate, and (ii) Chaundy and McLeod [3], who studied the functional equation

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i q_j) = \sum_{i=1}^{n} f(p_i) + \sum_{j=1}^{m} f(q_j), \quad \text{for } p_i \ge 0, \; q_j \ge 0. \tag{4}$$

Both of the above-mentioned approaches have been extensively exploited and generalized. The most general form of (4) has been studied by Sharma and Taneja [4], who considered the functional equation

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i q_j) = \sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i)\,g(q_j) + \sum_{i=1}^{n}\sum_{j=1}^{m} g(p_i)\,f(q_j),$$
$$\sum_{i=1}^{n} p_i = \sum_{j=1}^{m} q_j = 1, \qquad p_i \ge 0, \; q_j \ge 0. \tag{5}$$

http://dx.doi.org/10.1155/2014/505184


We define the information measure as

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1} \sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \tag{6}$$
for a complete probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$.
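As an informal numerical companion (not part of the original paper), the measure (6) can be implemented directly; the sketch below assumes a NumPy environment, and the function name H3 is chosen here for illustration.

```python
import numpy as np

def H3(p, alpha, beta, gamma):
    """Entropy of types (alpha, beta, gamma), equation (6).

    p is a complete probability distribution; alpha != gamma != beta,
    and all three parameters are positive.
    """
    p = np.asarray(p, dtype=float)
    norm = 2.0 ** ((gamma - alpha) / gamma) - 2.0 ** ((gamma - beta) / gamma)
    return np.sum(p ** (alpha / gamma) - p ** (beta / gamma)) / norm

# Normalization checks (these reappear as axioms (2) and (3) in Section 2):
print(H3([0.5, 0.5], alpha=2.0, beta=0.5, gamma=1.0))  # -> 1.0
print(H3([1.0, 0.0], alpha=2.0, beta=0.5, gamma=1.0))  # -> 0.0
```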

Measure (6) reduces to the entropy of type $\beta$ (or $\alpha$) when $\alpha = \gamma = 1$ (or $\beta = \gamma = 1$), given by
$$H_n(p_1,\dots,p_n;\beta) = \left(2^{1-\beta} - 1\right)^{-1}\left[\sum_{i=1}^{n} p_i^{\beta} - 1\right], \quad \beta \neq 1, \; \beta > 0. \tag{7}$$

When $\beta \to 1$, measure (7) reduces to Shannon's entropy [5], namely,
$$H_n(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i. \tag{8}$$
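This limit is easy to see numerically; the short check below (an editorial illustration, assuming NumPy) evaluates (7) for $\beta$ approaching 1 and compares with (8).

```python
import numpy as np

def H_beta(p, beta):
    """Entropy of type beta, equation (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** beta) - 1.0) / (2.0 ** (1.0 - beta) - 1.0)

def shannon(p):
    """Shannon entropy in bits, equation (8)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p = [0.7, 0.2, 0.1]
for beta in (1.5, 1.1, 1.01, 1.001):
    print(beta, H_beta(p, beta))   # approaches the Shannon value below
print("limit:", shannon(p))
```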

The measure (7) was characterized by many authors using different approaches. Havrda and Charvát [6] characterized (7) by an axiomatic approach. Daróczy [7] studied (7) by a functional equation. A joint characterization of the measures (7) and (8) has been done by the authors in two different ways: firstly, by a generalized functional equation having four different functions, and secondly, by an axiomatic approach. Later on, Tsallis [8] gave applications of (7) in physics.

To characterize strongly interacting statistical systems within a thermodynamical framework (complex systems in particular), it might be necessary to introduce generalized entropies. A series of such entropies have been proposed in the past, mainly to accommodate important empirical distribution functions to a maximum ignorance principle. The understanding of the fundamental origin of these entropies and of their deeper relations to complex systems is limited. The authors of [9] explore this question from first principles; they observed that the fourth Khinchin axiom is violated by strongly interacting systems in general and, assuming the first three Khinchin axioms, derived a unique entropy and classified the known entropies within equivalence classes.

For statistical systems that violate the four Shannon-Khinchin axioms, the entropy takes a more general form than the Boltzmann-Gibbs entropy. The framework of superstatistics allows one to formulate a maximum entropy principle with these generalized entropies, making them useful for understanding the distribution functions of non-Markovian or nonergodic complex systems. For such systems, where the composability axiom is violated, there exist only two ways to implement the maximum entropy principle: one uses escort probabilities and the other does not; the two ways are connected through a duality. The authors of [10] showed that this duality fixes a unique escort probability, derived a complete theory of the generalized logarithms, and gave the relationship between the functional forms of generalized logarithms and the asymptotic scaling behavior of the entropy.

Suyari [11] proposed a generalization of the Shannon-Khinchin axioms, which determines a class of entropies containing the well-known Tsallis and Havrda-Charvat entropies. The authors of [12] showed that the class of entropy functions determined by Suyari's axioms is wider than the one proposed by Suyari, and they generalized Suyari's axioms to characterize a recently introduced class of entropies obtained by averaging pseudoadditive information content.

In this communication, we characterize the measure (6) by taking certain axioms parallel to those considered earlier by Havrda and Charvát [6], along with the recursive relation (9). Some properties of this measure are also studied. The measure (6) satisfies the following recursive relation:

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma)$$
$$= \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,(p_1+p_2)^{\alpha/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,(p_1+p_2)^{\beta/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right),$$
$$\alpha \neq \gamma \neq \beta, \quad \alpha,\beta,\gamma > 0, \tag{9}$$
where $p_1 + p_2 > 0$, $A_{(\alpha,\gamma)} = 2^{(\gamma-\alpha)/\gamma} - 1$, and $A_{(\beta,\gamma)} = 2^{(\gamma-\beta)/\gamma} - 1$.

Consider
$$H(p_1,p_2,\dots,p_n;\alpha,\gamma) = A_{(\alpha,\gamma)}^{-1}\left[\sum_{i=1}^{n} p_i^{\alpha/\gamma} - 1\right], \quad \alpha \neq \gamma, \; \alpha,\gamma > 0,$$
$$H(p_1,p_2,\dots,p_n;\gamma,\beta) = A_{(\beta,\gamma)}^{-1}\left[\sum_{i=1}^{n} p_i^{\beta/\gamma} - 1\right], \quad \beta \neq \gamma, \; \beta,\gamma > 0. \tag{10}$$

Proof.
$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma)$$
$$= \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\left\{\left(p_1^{\alpha/\gamma} - p_1^{\beta/\gamma}\right) + \left(p_2^{\alpha/\gamma} - p_2^{\beta/\gamma}\right) + \cdots + \left(p_n^{\alpha/\gamma} - p_n^{\beta/\gamma}\right)\right\} - \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\left\{(p_1+p_2)^{\alpha/\gamma} - (p_1+p_2)^{\beta/\gamma} + \left(p_3^{\alpha/\gamma} - p_3^{\beta/\gamma}\right) + \cdots + \left(p_n^{\alpha/\gamma} - p_n^{\beta/\gamma}\right)\right\}$$
$$= \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\left\{p_1^{\alpha/\gamma} - p_1^{\beta/\gamma} + p_2^{\alpha/\gamma} - p_2^{\beta/\gamma} - (p_1+p_2)^{\alpha/\gamma} + (p_1+p_2)^{\beta/\gamma}\right\}$$
$$= \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\left\{p_1^{\alpha/\gamma} + p_2^{\alpha/\gamma} - (p_1+p_2)^{\alpha/\gamma}\right\} + \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\left\{(p_1+p_2)^{\beta/\gamma} - p_1^{\beta/\gamma} - p_2^{\beta/\gamma}\right\}$$
$$= \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}(p_1+p_2)^{\alpha/\gamma}\left[\frac{p_1^{\alpha/\gamma}}{(p_1+p_2)^{\alpha/\gamma}} + \frac{p_2^{\alpha/\gamma}}{(p_1+p_2)^{\alpha/\gamma}} - 1\right] + \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}(p_1+p_2)^{\beta/\gamma}\left[1 - \frac{p_1^{\beta/\gamma}}{(p_1+p_2)^{\beta/\gamma}} - \frac{p_2^{\beta/\gamma}}{(p_1+p_2)^{\beta/\gamma}}\right]$$
$$= \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,(p_1+p_2)^{\alpha/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,(p_1+p_2)^{\beta/\gamma}\,H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right), \tag{11}$$
which proves (9).
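As an informal numerical check (added in editing; it assumes NumPy, and the helper names A, H3, and H2param are illustrative), the recursive relation (9) can be verified directly from the definitions (6) and (10):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0  # A_(x,gamma) as defined after (9)

def H3(p, a, b, g):
    """Three-parameter measure, equation (6)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

def H2param(p, x, g):
    """Two-parameter measure from (10): A_(x,gamma)^(-1) [sum p^(x/g) - 1]."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** (x / g)) - 1.0) / A(x, g)

a, b, g = 2.0, 0.5, 1.0
p = np.array([0.2, 0.3, 0.5])
lhs = H3(p, a, b, g) - H3([p[0] + p[1], p[2]], a, b, g)
w = p[0] + p[1]
q = [p[0] / w, p[1] / w]
rhs = (A(a, g) / (A(a, g) - A(b, g)) * w ** (a / g) * H2param(q, a, g)
       + A(b, g) / (A(b, g) - A(a, g)) * w ** (b / g) * H2param(q, b, g))
print(np.isclose(lhs, rhs))  # -> True
```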

2. Set of Axioms

For characterizing a measure of information of types $\alpha$, $\beta$, and $\gamma$ associated with a probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, we introduce the following axioms:

(1) $H_n(p_1,\dots,p_n;\alpha,\beta,\gamma)$ is continuous in the region
$$p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1, \quad \alpha,\beta,\gamma > 0; \tag{12}$$

(2) $H_2(1, 0;\alpha,\beta,\gamma) = 0$;

(3) $H_2(1/2, 1/2;\alpha,\beta,\gamma) = 1$, $\alpha,\beta,\gamma > 0$;

(4)
$$H_n(p_1,\dots,p_{i-1}, 0, p_{i+1},\dots,p_n;\alpha,\beta,\gamma) = H_{n-1}(p_1,\dots,p_{i-1}, p_{i+1},\dots,p_n;\alpha,\beta,\gamma), \tag{13}$$
for every $i = 1, 2, \dots, n$;

(5)
$$H_{n+1}(p_1,\dots,p_{i-1}, v_{i1}, v_{i2}, p_{i+1},\dots,p_n;\alpha,\beta,\gamma) - H_n(p_1,\dots,p_{i-1}, p_i, p_{i+1},\dots,p_n;\alpha,\beta,\gamma)$$
$$= \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,p_i^{\alpha/\gamma}\,H_2\!\left(\frac{v_{i1}}{p_i},\frac{v_{i2}}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,p_i^{\beta/\gamma}\,H_2\!\left(\frac{v_{i1}}{p_i},\frac{v_{i2}}{p_i};\gamma,\beta\right),$$
$$\alpha \neq \gamma \neq \beta, \quad \alpha,\beta,\gamma > 0, \tag{14}$$
for every $v_{i1} + v_{i2} = p_i > 0$, $i = 1, 2, \dots, n$, where $A_{(\alpha,\gamma)} = 2^{(\gamma-\alpha)/\gamma} - 1$ and $A_{(\beta,\gamma)} = 2^{(\gamma-\beta)/\gamma} - 1$, $\alpha \neq \gamma \neq \beta$.

Theorem 1. If $\alpha \neq \gamma \neq \beta$; $\alpha,\beta,\gamma > 0$, then the axioms (1)–(5) determine a measure given by
$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0. \tag{15}$$

Before proving the theorem we prove some intermediate results based on the above axioms.

Lemma 2. If $v_k \ge 0$, $k = 1, 2, \dots, m$, and $\sum_{k=1}^{m} v_k = p_i > 0$, then
$$H_{n+m-1}(p_1,\dots,p_{i-1}, v_1,\dots,v_m, p_{i+1},\dots,p_n;\alpha,\beta,\gamma)$$
$$= H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,p_i^{\alpha/\gamma}\,H_m\!\left(\frac{v_1}{p_i},\dots,\frac{v_m}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,p_i^{\beta/\gamma}\,H_m\!\left(\frac{v_1}{p_i},\dots,\frac{v_m}{p_i};\gamma,\beta\right). \tag{16}$$

Proof. For $m = 1, 2$, the desired statement holds (cf. axiom (4)). Let us suppose that the result is true for numbers less than or equal to $m$; we will prove it for $m + 1$. We have

$$H_{n+m}(p_1,\dots,p_{i-1}, v_1,\dots,v_{m+1}, p_{i+1},\dots,p_n;\alpha,\beta,\gamma)$$
$$= H_{n+1}(p_1,\dots,p_{i-1}, v_1, L, p_{i+1},\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,L^{\alpha/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,L^{\beta/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right) \qquad (\text{where } L = v_2 + \cdots + v_{m+1})$$
$$= H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,p_i^{\alpha/\gamma}\,H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,p_i^{\beta/\gamma}\,H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\gamma,\beta\right) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\,L^{\alpha/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\,L^{\beta/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right)$$
$$= H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left\{p_i^{\alpha/\gamma}\,H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\gamma\right) + L^{\alpha/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right)\right\} + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left\{p_i^{\beta/\gamma}\,H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\gamma,\beta\right) + L^{\beta/\gamma}\,H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right)\right\}, \tag{17}$$
where $p_i = v_1 + L > 0$.

One more application of the induction premise yields

$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\alpha,\beta,\gamma\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\beta,\gamma\right) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left(\frac{L}{p_i}\right)^{\alpha/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left(\frac{L}{p_i}\right)^{\beta/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right). \tag{18}$$

For $\beta = \gamma$, (18) reduces to
$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\alpha,\gamma\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\gamma\right) + \left(\frac{L}{p_i}\right)^{\alpha/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right). \tag{19}$$
Similarly, for $\alpha = \gamma$, (18) reduces to

$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\gamma,\beta\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\gamma,\beta\right) + \left(\frac{L}{p_i}\right)^{\beta/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right). \tag{20}$$
Expression (17) together with (19) and (20) gives the desired result.

Lemma 3. If $v_{ij} \ge 0$, $j = 1, 2, \dots, m_i$, $\sum_{j=1}^{m_i} v_{ij} = p_i > 0$, $i = 1, 2, \dots, n$, and $\sum_{i=1}^{n} p_i = 1$, then
$$H_{m_1+\cdots+m_n}(v_{11}, v_{12}, \dots, v_{1m_1}; \cdots; v_{n1}, v_{n2}, \dots, v_{nm_n};\alpha,\beta,\gamma)$$
$$= H_n(p_1,p_2,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma}\,H_{m_i}\!\left(\frac{v_{i1}}{p_i},\dots,\frac{v_{im_i}}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma}\,H_{m_i}\!\left(\frac{v_{i1}}{p_i},\dots,\frac{v_{im_i}}{p_i};\gamma,\beta\right). \tag{21}$$
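Lemma 3 follows by repeated application of Lemma 2. A numerical illustration of (21) with two groups (an editorial sketch, assuming NumPy and the two-parameter form (10); helper names are arbitrary):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H3(p, a, b, g):
    """Three-parameter measure, equation (6)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

def H2param(p, x, g):
    """Two-parameter measure from (10)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** (x / g)) - 1.0) / A(x, g)

a, b, g = 2.0, 0.5, 1.0
v = [np.array([0.1, 0.3]), np.array([0.2, 0.1, 0.3])]  # refinements v_ij
p = np.array([vi.sum() for vi in v])                    # coarse distribution
lhs = H3(np.concatenate(v), a, b, g)
rhs = H3(p, a, b, g)
for pi, vi in zip(p, v):
    rhs += (A(a, g) / (A(a, g) - A(b, g)) * pi ** (a / g) * H2param(vi / pi, a, g)
            + A(b, g) / (A(b, g) - A(a, g)) * pi ** (b / g) * H2param(vi / pi, b, g))
print(np.isclose(lhs, rhs))  # -> True
```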

Lemma 4. If $F(n;\alpha,\beta,\gamma) = H_n(1/n,\dots,1/n;\alpha,\beta,\gamma)$, then
$$F(n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}} F(n;\gamma,\beta), \tag{22}$$
where $F(n;\alpha,\gamma) = A_{(\alpha,\gamma)}^{-1}\left(n^{(\gamma-\alpha)/\gamma} - 1\right)$, $\alpha \neq \gamma$, and
$$F(n;\gamma,\beta) = A_{(\beta,\gamma)}^{-1}\left(n^{(\gamma-\beta)/\gamma} - 1\right), \quad \beta \neq \gamma. \tag{23}$$


Proof. Replacing $m_i$ by $m$ in Lemma 3 and putting $v_{ij} = 1/mn$, $i = 1, 2, \dots, n$, $j = 1, 2, \dots, m$, where $m$ and $n$ are positive integers, we have

$$F(mn;\alpha,\beta,\gamma) = F(m;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left(\frac{1}{m}\right)^{(\alpha-\gamma)/\gamma} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left(\frac{1}{m}\right)^{(\beta-\gamma)/\gamma} F(n;\gamma,\beta), \tag{24}$$
$$F(mn;\alpha,\beta,\gamma) = F(n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left(\frac{1}{n}\right)^{(\alpha-\gamma)/\gamma} F(m;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left(\frac{1}{n}\right)^{(\beta-\gamma)/\gamma} F(m;\gamma,\beta). \tag{25}$$
Putting $m = 1$ in (24) and using $F(1;\alpha,\beta,\gamma) = 0$ (by axiom (2)), we get
$$F(n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}} F(n;\gamma,\beta), \tag{26}$$
which is (22).

Comparing the right-hand sides of (24) and (25), we get
$$F(m;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left(\frac{1}{m}\right)^{(\alpha-\gamma)/\gamma} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left(\frac{1}{m}\right)^{(\beta-\gamma)/\gamma} F(n;\gamma,\beta)$$
$$= F(n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\left(\frac{1}{n}\right)^{(\alpha-\gamma)/\gamma} F(m;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\left(\frac{1}{n}\right)^{(\beta-\gamma)/\gamma} F(m;\gamma,\beta). \tag{27}$$

Equation (27) together with (22) gives
$$A_{(\alpha,\gamma)}\left\{\left[1 - \left(\frac{1}{n}\right)^{(\alpha/\gamma)-1}\right] F(m;\alpha,\gamma) + \left[\left(\frac{1}{m}\right)^{(\alpha/\gamma)-1} - 1\right] F(n;\alpha,\gamma)\right\}$$
$$= A_{(\beta,\gamma)}\left\{\left[1 - \left(\frac{1}{n}\right)^{(\beta/\gamma)-1}\right] F(m;\gamma,\beta) + \left[\left(\frac{1}{m}\right)^{(\beta/\gamma)-1} - 1\right] F(n;\gamma,\beta)\right\}. \tag{28}$$

Putting $n = 2$ in (28) and using $F(2;\alpha,\beta,\gamma) = H_2(1/2, 1/2;\alpha,\beta,\gamma) = 1$, we get
$$A_{(\alpha,\gamma)}\left\{\left(1 - 2^{1-\alpha/\gamma}\right)F(m;\alpha,\gamma) - \left(1 - \left(\frac{1}{m}\right)^{(\alpha/\gamma)-1}\right)\right\} = A_{(\beta,\gamma)}\left\{\left(1 - 2^{1-\beta/\gamma}\right)F(m;\gamma,\beta) - \left(1 - \left(\frac{1}{m}\right)^{(\beta/\gamma)-1}\right)\right\} = C \quad (\text{say}). \tag{29}$$
That is, $A_{(\alpha,\gamma)}\{(1 - 2^{1-\alpha/\gamma})F(m;\alpha,\gamma) - (1 - (1/m)^{(\alpha/\gamma)-1})\} = C$, where $C$ is an arbitrary constant. For $m = 1$, we get $C = 0$. Thus, we have
$$F(m;\alpha,\gamma) = \frac{1 - m^{1-\alpha/\gamma}}{1 - 2^{1-\alpha/\gamma}} = A_{(\alpha,\gamma)}^{-1}\left(m^{1-\alpha/\gamma} - 1\right), \quad \alpha \neq \gamma. \tag{30}$$
Similarly,
$$F(m;\gamma,\beta) = \frac{1 - m^{1-\beta/\gamma}}{1 - 2^{1-\beta/\gamma}} = A_{(\beta,\gamma)}^{-1}\left(m^{1-\beta/\gamma} - 1\right), \quad \beta \neq \gamma, \tag{31}$$
which is (23).

Now (22) together with (23) gives
$$F(n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}} F(n;\gamma,\beta) = \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\left(n^{1-\alpha/\gamma} - n^{1-\beta/\gamma}\right). \tag{32}$$
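On the uniform distribution, the closed form (32) is immediate to confirm numerically; the snippet below is an editorial illustration (NumPy assumed, helper names arbitrary):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H3(p, a, b, g):
    """Three-parameter measure, equation (6)/(15)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

a, b, g, n = 3.0, 0.5, 2.0, 7
uniform = np.full(n, 1.0 / n)
closed_form = (n ** (1 - a / g) - n ** (1 - b / g)) / (A(a, g) - A(b, g))
print(np.isclose(H3(uniform, a, b, g), closed_form))  # -> True
```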

Proof of the Theorem. We prove the theorem for rationals; the continuity axiom (1) then extends the result to the reals. To this end, let $m$ and the $r_i$'s be positive integers such that $\sum_{i=1}^{n} r_i = m$, and put $p_i = r_i/m$, $i = 1, 2, \dots, n$; then an application of Lemma 3 gives

$$H_m\Bigl(\underbrace{\tfrac{1}{m},\dots,\tfrac{1}{m}}_{r_1}, \dots, \underbrace{\tfrac{1}{m},\dots,\tfrac{1}{m}}_{r_n};\alpha,\beta,\gamma\Bigr)$$
$$= H_n(p_1,p_2,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma}\,H_{r_i}\!\left(\frac{1}{r_i},\dots,\frac{1}{r_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma}\,H_{r_i}\!\left(\frac{1}{r_i},\dots,\frac{1}{r_i};\gamma,\beta\right). \tag{33}$$


That is,
$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = F(m;\alpha,\beta,\gamma) - \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma} F(r_i;\alpha,\gamma) - \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)} - A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma} F(r_i;\gamma,\beta). \tag{34}$$

Equation (34) together with (23) and (32) gives
$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \frac{1}{A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \tag{35}$$
which is (15). This completes the proof of the theorem.

3. Properties of Entropy of Types $\alpha$, $\beta$, and $\gamma$

The measure $H_n(P;\alpha,\beta,\gamma)$ characterized in the preceding section, where $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, is a probability distribution, satisfies certain properties, which are given in the following theorems.

Theorem 5. The measure $H_n(P;\alpha,\beta,\gamma)$ is nonnegative for $\alpha \neq \gamma \neq \beta$, $\alpha,\beta,\gamma > 0$.

Proof.

Case 1. $\alpha > \gamma$, $\beta < \gamma \Rightarrow \alpha/\gamma > 1$, $\beta/\gamma < 1$;
$$\Longrightarrow \sum_{i=1}^{n} p_i^{\alpha/\gamma} < 1, \quad \sum_{i=1}^{n} p_i^{\beta/\gamma} > 1, \quad \Longrightarrow \sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) < 0. \tag{36}$$
Since $\alpha > \gamma$ and $\beta < \gamma$, the factor $2^{1-\alpha/\gamma} - 2^{1-\beta/\gamma}$ is also negative, so we get
$$\left(2^{1-\alpha/\gamma} - 2^{1-\beta/\gamma}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) > 0. \tag{37}$$

Case 2. Similarly, for $\alpha < \gamma$ and $\beta > \gamma$, we get
$$\left(2^{1-\alpha/\gamma} - 2^{1-\beta/\gamma}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) > 0. \tag{38}$$
Therefore, from Case 1, Case 2, and axiom (2), we get
$$H_n(P;\alpha,\beta,\gamma) \ge 0. \tag{39}$$
This completes the proof of the theorem.
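A quick numerical spot-check of Theorem 5 (an editorial illustration, assuming NumPy; the Dirichlet sampler is an arbitrary way to draw random complete distributions):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H3(p, a, b, g):
    """Three-parameter measure, equation (15)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet(np.ones(5))
    assert H3(p, a=2.0, b=0.5, g=1.0) >= 0.0  # Case 1: alpha > gamma > beta
    assert H3(p, a=0.5, b=2.0, g=1.0) >= 0.0  # Case 2: alpha < gamma < beta
print("nonnegativity holds on all samples")
```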

Definition 6. We will use the following definition of a convex function. A function $f(\cdot)$ over the points in a convex set $R$ is convex $\cap$ if, for all $r_1, r_2 \in R$ and $\mu \in (0, 1)$,
$$\mu f(r_1) + (1-\mu) f(r_2) \le f(\mu r_1 + (1-\mu) r_2). \tag{40}$$
The function $f(\cdot)$ is convex $\cup$ if (40) holds with $\ge$ in place of $\le$.

Theorem 7. The measure $H_n(P;\alpha,\beta,\gamma)$ is a convex $\cap$ function of the probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, when either $\alpha > \gamma$ and $\beta \le \gamma$, or $\beta > \gamma$ and $\alpha \le \gamma$.

Proof. Consider $r$ probability distributions
$$P_k(X) = \{p_k(x_1),\dots,p_k(x_n)\}, \quad \sum_{i=1}^{n} p_k(x_i) = 1, \quad k = 1, 2, \dots, r, \tag{41}$$
associated with the random variable $X = (x_1,\dots,x_n)$. Consider $r$ numbers $(a_1,\dots,a_r)$ such that $a_k \ge 0$ and $\sum_{k=1}^{r} a_k = 1$, and define
$$P_o(X) = \{p_o(x_1),\dots,p_o(x_n)\}, \tag{42}$$
where
$$p_o(x_i) = \sum_{k=1}^{r} a_k p_k(x_i), \quad i = 1, 2, \dots, n. \tag{43}$$
Obviously, $\sum_{i=1}^{n} p_o(x_i) = 1$, and thus $P_o(X)$ is a bona fide distribution of $X$.

Let $\alpha > \gamma$ and $0 < \beta \le \gamma$; then we have
$$\sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) - H_n(P_o;\alpha,\beta,\gamma)$$
$$= \sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) - \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left\{\left[\sum_{k=1}^{r} a_k p_k(x_i)\right]^{\alpha/\gamma} - \left[\sum_{k=1}^{r} a_k p_k(x_i)\right]^{\beta/\gamma}\right\}$$
$$\le \sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) - \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(\sum_{k=1}^{r} a_k p_k(x_i)^{\alpha/\gamma} - \sum_{k=1}^{r} a_k p_k(x_i)^{\beta/\gamma}\right) = 0, \tag{44}$$
by Jensen's inequality, since $x^{\alpha/\gamma}$ is convex and $x^{\beta/\gamma}$ is concave for $\alpha > \gamma \ge \beta$, while $A_{(\alpha,\gamma)} - A_{(\beta,\gamma)} < 0$. Hence
$$\sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) - H_n(P_o;\alpha,\beta,\gamma) \le 0,$$
that is, $\sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) \le H_n(P_o;\alpha,\beta,\gamma)$, for $\alpha > \gamma$, $0 < \beta \le \gamma$.

By symmetry in $\alpha$, $\beta$, and $\gamma$, the above result also holds for $\beta > \gamma$ and $0 < \alpha \le \gamma$.
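The convex-$\cap$ (concavity) property of Theorem 7 can likewise be spot-checked numerically; the sketch below (an editorial illustration, assuming NumPy) compares the mixture of entropies with the entropy of the mixture from (43):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H3(p, a, b, g):
    """Three-parameter measure, equation (15)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

rng = np.random.default_rng(1)
a, b, g = 2.0, 0.5, 1.0  # alpha > gamma, beta <= gamma
for _ in range(1000):
    P = rng.dirichlet(np.ones(4), size=3)   # three distributions P_1..P_3
    w = rng.dirichlet(np.ones(3))           # mixture weights a_k
    mixture = w @ P                          # P_o from equation (43)
    lhs = sum(w[k] * H3(P[k], a, b, g) for k in range(3))
    assert lhs <= H3(mixture, a, b, g) + 1e-12  # convex-cap property
print("mixture entropy dominates on all samples")
```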


Theorem 8. The measure $H_n(P;\alpha,\beta,\gamma)$ satisfies the following relations:

(i) Generalized additivity:
$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\,H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\,H_n(P;\alpha,\beta,\gamma), \quad \alpha,\beta,\gamma > 0, \tag{45}$$
where
$$G_n(P;\alpha,\beta,\gamma) = \frac{1}{2}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right), \quad \alpha,\beta,\gamma > 0. \tag{46}$$

(ii) Subadditivity: for $\alpha, \beta > \gamma$, the measure $H_n(P;\alpha,\beta,\gamma)$ is subadditive; that is,
$$H_{nm}(P*Q;\alpha,\beta,\gamma) \le H_n(P;\alpha,\beta,\gamma) + H_m(Q;\alpha,\beta,\gamma), \tag{47}$$
where $P = (p_1,\dots,p_n)$ and $Q = (q_1,\dots,q_m)$, and
$$P*Q = (p_1 q_1, \dots, p_1 q_m, \dots, p_n q_1, \dots, p_n q_m) \tag{48}$$
are complete probability distributions.

Proof of (i). We have
$$H_{nm}(P*Q;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[(p_i q_j)^{\alpha/\gamma} - (p_i q_j)^{\beta/\gamma}\right]$$
$$= \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[(p_i q_j)^{\alpha/\gamma} - (p_i q_j)^{\beta/\gamma} + p_i^{\alpha/\gamma} q_j^{\beta/\gamma} - p_i^{\alpha/\gamma} q_j^{\beta/\gamma}\right]$$
$$= \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[p_i^{\alpha/\gamma}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right) - q_j^{\beta/\gamma}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right)\right]$$
$$= \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{i=1}^{n} p_i^{\alpha/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right) - \sum_{j=1}^{m} q_j^{\beta/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right)\right]. \tag{49}$$
Also,
$$H_{nm}(P*Q;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[(p_i q_j)^{\alpha/\gamma} - (p_i q_j)^{\beta/\gamma} + p_i^{\beta/\gamma} q_j^{\alpha/\gamma} - p_i^{\beta/\gamma} q_j^{\alpha/\gamma}\right]$$
$$= \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[q_j^{\alpha/\gamma}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right) - p_i^{\beta/\gamma}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right)\right]$$
$$= \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{j=1}^{m} q_j^{\alpha/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right) - \sum_{i=1}^{n} p_i^{\beta/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right)\right]. \tag{50}$$

Adding (49) and (50), we get
$$2H_{nm}(P*Q;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{i=1}^{n} p_i^{\alpha/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right) - \sum_{j=1}^{m} q_j^{\beta/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right)\right] + \left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{j=1}^{m} q_j^{\alpha/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right) - \sum_{i=1}^{n} p_i^{\beta/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right)\right]$$
$$= \sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} - q_j^{\beta/\gamma}\right) + \sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right),$$
so that
$$H_{nm}(P*Q;\alpha,\beta,\gamma) = \frac{1}{2}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} - q_j^{\beta/\gamma}\right) + \frac{1}{2}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma} + q_j^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right). \tag{51}$$
Using (46),

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\,H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\,H_n(P;\alpha,\beta,\gamma), \tag{52}$$
which is (45). This completes the proof of part (i).

Proof of (ii). From part (i), we have
$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\,H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\,H_n(P;\alpha,\beta,\gamma). \tag{53}$$
As $G_n(P;\alpha,\beta,\gamma) = (1/2)\sum_{i=1}^{n}(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}) \le 1$ for $\alpha,\beta \ge \gamma$, and the measures on the right are nonnegative (Theorem 5), we obtain
$$H_{nm}(P*Q;\alpha,\beta,\gamma) \le H_m(Q;\alpha,\beta,\gamma) + H_n(P;\alpha,\beta,\gamma). \tag{54}$$
This proves the subadditivity.
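A numerical spot-check of both parts of Theorem 8 (an editorial illustration, assuming NumPy; the parameter choices are arbitrary within the subadditive regime $\alpha, \beta > \gamma$):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H3(p, a, b, g):
    """Three-parameter measure, equation (15)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (a / g) - p ** (b / g)) / (A(a, g) - A(b, g))

def G(p, a, b, g):
    """Coefficient (46)."""
    p = np.asarray(p, dtype=float)
    return 0.5 * np.sum(p ** (a / g) + p ** (b / g))

a, b, g = 3.0, 2.0, 1.0            # alpha, beta > gamma
P = np.array([0.2, 0.3, 0.5])
Q = np.array([0.6, 0.4])
PQ = np.outer(P, Q).ravel()        # product distribution P * Q, equation (48)

lhs = H3(PQ, a, b, g)
rhs = G(P, a, b, g) * H3(Q, a, b, g) + G(Q, a, b, g) * H3(P, a, b, g)
print(np.isclose(lhs, rhs))                     # generalized additivity (45)
print(lhs <= H3(P, a, b, g) + H3(Q, a, b, g))   # subadditivity (47)
```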

4. Conclusion

In addition to the well-known information measures of Shannon, Rényi, Havrda-Charvat, Vajda [13], and Daróczy, we have characterized a measure which we call the $\alpha$, $\beta$, $\gamma$ information measure. We have given some basic axioms and properties, together with a recursive relation. Shannon's [5] measure is included in the $\alpha$, $\beta$, $\gamma$ information measure for the limiting cases $\alpha = \gamma = 1$ and $\beta \to 1$, or $\beta = \gamma = 1$ and $\alpha \to 1$. This measure is a generalization of the Havrda-Charvat entropy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] J. Aczél and Z. Daróczy, On Measures of Information and Their Characterization, Academic Press, New York, NY, USA, 1975.

[2] D. K. Faddeev, "On the concept of entropy of a finite probabilistic scheme," Uspekhi Matematicheskikh Nauk, vol. 11, no. 1(67), pp. 227–231, 1956.

[3] T. W. Chaundy and J. B. McLeod, "On a functional equation," Proceedings of the Edinburgh Mathematical Society, Series II, vol. 12, no. 43, pp. 6–7, 1960.

[4] B. D. Sharma and I. J. Taneja, "Functional measures in information theory," Funkcialaj Ekvacioj, vol. 17, pp. 181–191, 1974.

[5] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.

[6] J. Havrda and F. Charvát, "Quantification method of classification processes. Concept of structural α-entropy," Kybernetika, vol. 3, pp. 30–35, 1967.

[7] Z. Daróczy, "Generalized information functions," Information and Control, vol. 16, pp. 36–51, 1970.

[8] C. Tsallis, "Possible generalization of Boltzmann-Gibbs statistics," Journal of Statistical Physics, vol. 52, no. 1-2, pp. 479–487, 1988.

[9] R. Hanel and S. Thurner, "A comprehensive classification of complex statistical systems and an ab-initio derivation of their entropy and distribution functions," Europhysics Letters, vol. 93, no. 2, Article ID 20006, 2011.

[10] R. Hanel, S. Thurner, and M. Gell-Mann, "Generalized entropies and logarithms and their duality relations," Proceedings of the National Academy of Sciences of the United States of America, vol. 109, no. 47, pp. 19151–19154, 2012.

[11] H. Suyari, "Generalization of Shannon-Khinchin axioms to nonextensive systems and the uniqueness theorem for the nonextensive entropy," IEEE Transactions on Information Theory, vol. 50, no. 8, pp. 1783–1787, 2004.

[12] V. M. Ilić, M. S. Stanković, and E. H. Mulalić, "Comments on 'Generalization of Shannon-Khinchin axioms to nonextensive systems and the uniqueness theorem for nonextensive entropy'," IEEE Transactions on Information Theory, vol. 59, no. 10, pp. 6950–6952, 2013.

[13] I. Vajda, "Axioms for α-entropy of a generalized probability scheme," Kybernetika, vol. 4, pp. 105–112, 1968.
