Abstract

In this communication, we characterize a measure of information of types $\alpha$, $\beta$, and $\gamma$ by taking certain axioms parallel to those considered earlier by Havrda and Charvát, along with a recursive relation. Some properties of this measure are also studied. This measure includes Shannon's information measure as a special case.

1. Introduction

Shannon's measure of entropy for a discrete probability distribution $P = (p_1, p_2, \ldots, p_n)$, $p_i \geq 0$, $\sum_{i=1}^{n} p_i = 1$, given by
$$H(P) = -\sum_{i=1}^{n} p_i \log_2 p_i,$$
has been characterized in several ways (see Aczél and Daróczy [1]). Among the many characterizations, two elegant approaches are found in the work of (i) Faddeev [2], who uses the branching property, namely,
$$H_n(p_1, p_2, \ldots, p_n) = H_{n-1}(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H_2\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right), \qquad p_1 + p_2 > 0,$$
for the above distribution $P$, as the basic postulate, and (ii) Chaundy and McLeod [3], who studied the functional equation
$$\sum_{i=1}^{n} \sum_{j=1}^{m} f(p_i q_j) = \sum_{i=1}^{n} f(p_i) + \sum_{j=1}^{m} f(q_j). \tag{4}$$
Both of the above-mentioned approaches have been extensively exploited and generalized. The most general form of (4) has been studied by Sharma and Taneja [4]. We define the information measure $H(P; \alpha, \beta, \gamma)$ of types $\alpha$, $\beta$, and $\gamma$ in (6) for a complete probability distribution $P = (p_1, p_2, \ldots, p_n)$.
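As a quick numerical illustration (a sketch of ours, not part of the paper), one can check that the Shannon summand $f(p) = -p \log_2 p$ solves the Chaundy-McLeod equation (4):

```python
import numpy as np

def f(p):
    """The Shannon summand f(p) = -p * log2(p), with the convention f(0) = 0."""
    p = np.asarray(p, dtype=float)
    return np.where(p > 0, -p * np.log2(np.where(p > 0, p, 1.0)), 0.0)

p = np.array([0.5, 0.25, 0.25])
q = np.array([0.7, 0.3])

# Left-hand side of (4): sum of f over all products p_i * q_j.
lhs = f(np.outer(p, q)).sum()

# Right-hand side of (4): the two marginal sums.
rhs = f(p).sum() + f(q).sum()

print(lhs, rhs)  # the two values agree, so f solves (4)
```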

Measure (6) reduces to the entropy of type $\alpha$ (or type $\beta$) when $\beta = \gamma = 1$ (or $\alpha = \gamma = 1$), given by
$$H_\alpha(P) = \frac{\sum_{i=1}^{n} p_i^{\alpha} - 1}{2^{1-\alpha} - 1}, \qquad \alpha > 0, \ \alpha \neq 1. \tag{7}$$
When $\alpha \to 1$, measure (7) reduces to Shannon's entropy [5], namely,
$$H(P) = -\sum_{i=1}^{n} p_i \log_2 p_i. \tag{8}$$
The measure (7) has been characterized by many authors through different approaches. Havrda and Charvát [6] characterized (7) by an axiomatic approach. Daróczy [7] studied (7) by a functional equation. A joint characterization of the measures (7) and (8) has been given by the author in two different ways: first by a generalized functional equation having four different functions, and second by an axiomatic approach. Later on, Tsallis [8] gave applications of (7) in physics.
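To make the limiting behavior concrete, the following Python sketch (ours; it uses the standard forms of (7) and (8)) evaluates the type-$\alpha$ entropy for $\alpha$ approaching 1 and compares it with Shannon's entropy:

```python
import numpy as np

def havrda_charvat(p, alpha):
    """Type-alpha entropy (7): (sum p_i^alpha - 1) / (2^(1-alpha) - 1)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

def shannon(p):
    """Shannon entropy (8) in bits."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p))

p = np.array([0.5, 0.3, 0.2])
for alpha in [1.5, 1.1, 1.01, 1.001]:
    print(alpha, havrda_charvat(p, alpha))
print("limit ->", shannon(p))  # the values above approach this number
```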

To characterize strongly interacting statistical systems within a thermodynamical framework, complex systems in particular, it might be necessary to introduce generalized entropies. A series of such entropies has been proposed in the past, mainly to accommodate important empirical distribution functions within a maximum ignorance principle. The understanding of the fundamental origin of these entropies, and of their deeper relations to complex systems, is limited. The authors of [9] explore this question from first principles. They observed that the fourth Khinchin axiom is violated by strongly interacting systems in general and, assuming the first three Khinchin axioms, derived a unique entropy; they also classified the known entropies into equivalence classes.

For statistical systems that violate the four Shannon-Khinchin axioms, entropy takes a more general form than the Boltzmann-Gibbs entropy. The framework of superstatistics allows one to formulate a maximum entropy principle with these generalized entropies, making them useful for understanding the distribution functions of non-Markovian or nonergodic complex systems. For such systems, where the composability axiom is violated, there exist only two ways to implement the maximum entropy principle: one uses escort probabilities and the other does not. The two ways are connected through a duality. The authors of [10] showed that this duality fixes a unique escort probability, derived a complete theory of the generalized logarithms, and gave the relationship between the functional forms of generalized logarithms and the asymptotic scaling behavior of the entropy.

Suyari [11] proposed a generalization of the Shannon-Khinchin axioms, which determines a class of entropies containing the well-known Tsallis and Havrda-Charvát entropies. The authors of [12] showed that the class of entropy functions determined by Suyari's axioms is wider than the one proposed by Suyari, and they generalized Suyari's axioms to characterize a recently introduced class of entropies obtained by averaging pseudoadditive information content.

In this communication, we characterize the measure (6) by taking certain axioms parallel to those considered earlier by Havrda and Charvát [6], along with the recursive relation (9). Some properties of this measure are also studied.

The measure (6) satisfies the recursive relation (9), where $n \geq 3$, $\alpha, \beta, \gamma > 0$, and $p_1 + p_2 > 0$.

Proof. Writing out both sides of (9) from the definition (6) and simplifying, the two sides coincide, which proves (9).
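For the special case $\beta = \gamma = 1$, the recursion (9) should reduce to the familiar recursivity of the type-$\alpha$ entropy (7). The following Python sketch (ours, under that assumption) checks the type-$\alpha$ case numerically:

```python
import numpy as np

def havrda_charvat(p, alpha):
    """Type-alpha (Havrda-Charvat) entropy (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

alpha = 0.7
p = np.array([0.5, 0.3, 0.2])

# Left-hand side: entropy of the full three-point distribution.
lhs = havrda_charvat(p, alpha)

# Right-hand side: merge p1 and p2, then add the entropy of the split,
# weighted by (p1 + p2)**alpha, as in the type-alpha recursivity.
s = p[0] + p[1]
rhs = havrda_charvat([s, p[2]], alpha) \
    + s**alpha * havrda_charvat([p[0] / s, p[1] / s], alpha)

print(lhs, rhs)  # the two values agree up to floating-point error
```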

2. Set of Axioms

For characterizing a measure of information of types $\alpha$, $\beta$, and $\gamma$ associated with a probability distribution $P = (p_1, p_2, \ldots, p_n)$, $p_i \geq 0$, $i = 1, 2, \ldots, n$, $\sum_{i=1}^{n} p_i = 1$, we introduce the following axioms:
(1) $H(p_1, \ldots, p_n; \alpha, \beta, \gamma)$ is continuous in the region $p_i \geq 0$, $\sum_{i=1}^{n} p_i = 1$;
(2) $H(1, 0; \alpha, \beta, \gamma) = 0$;
(3) $H(\tfrac{1}{2}, \tfrac{1}{2}; \alpha, \beta, \gamma) = 1$;
(4) $H(p_1, \ldots, p_n, 0; \alpha, \beta, \gamma) = H(p_1, \ldots, p_n; \alpha, \beta, \gamma)$ for every $(p_1, \ldots, p_n)$;
(5) the recursive relation (9) holds for every $n \geq 3$, where $p_1 + p_2 > 0$ and $\alpha, \beta, \gamma > 0$.

Theorem 1. If $\alpha, \beta, \gamma > 0$ with $\alpha \neq \beta$, then the axioms (1)–(5) determine a measure given by (15), where $n \geq 2$ and $\sum_{i=1}^{n} p_i = 1$.
Before proving the theorem we prove some intermediate results based on the above axioms.

Lemma 2. If $n \geq 2$, $p_i \geq 0$, and $\sum_{i=1}^{n} p_i = 1$, then (16) holds.

Proof. To prove the lemma, we proceed by induction. For $n = 2$, the desired statement holds (cf. axiom (4)). Suppose now that the result is true for all numbers less than or equal to $n - 1$; we prove it for $n$. We have (17).
One more application of the induction premise yields (18). In the first special case, (18) reduces to (19); similarly, in the second, (18) reduces to (20). Expression (17) together with (19) and (20) gives the desired result.
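In the Havrda-Charvát setting that this lemma parallels, the splitting identity for the type-$\alpha$ entropy (7) reads $H_\alpha(p_1/m_1 \ (m_1 \text{ times}), \ldots, p_n/m_n \ (m_n \text{ times})) = H_\alpha(P) + \sum_i p_i^{\alpha} H_\alpha(1/m_i, \ldots, 1/m_i)$. The following sketch (ours, illustrating only this special case) verifies it numerically:

```python
import numpy as np

def havrda_charvat(p, alpha):
    """Type-alpha entropy (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

alpha = 2.5
p = np.array([0.5, 0.3, 0.2])
m = np.array([2, 3, 4])  # split each p_i into m_i equal parts

# Refined distribution: p_i / m_i repeated m_i times.
refined = np.concatenate([np.full(mi, pi / mi) for pi, mi in zip(p, m)])

lhs = havrda_charvat(refined, alpha)
rhs = havrda_charvat(p, alpha) + sum(
    pi**alpha * havrda_charvat(np.full(mi, 1.0 / mi), alpha)
    for pi, mi in zip(p, m)
)
print(lhs, rhs)  # the two values agree
```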

Lemma 3. If $p_i \geq 0$, $\sum_{i=1}^{n} p_i = 1$, $q_j \geq 0$, $\sum_{j=1}^{m} q_j = 1$, and $n, m \geq 2$, then (21) holds.

Proof. The proof of this lemma follows directly from Lemma 2.

Lemma 4. If $n$ and $m$ are positive integers, then (22) and (23) hold.

Proof. Replacing $n$ in Lemma 3 by suitable values and substituting the corresponding uniform distributions, whose sizes are positive integers, we have (24). Specializing (24) and using axiom (2), we get (25), which is (22).
Comparing the right-hand sides of (24) and (25), we get (26) and (27). Equation (27) together with (22) gives (28). Specializing (28) and simplifying, we get (29); that is, (30) holds, where the constant appearing in it is arbitrary.
For $n = 2$, the constant is fixed by the normalization axiom (3).
Thus, we have (31). Similarly, we obtain (32), which is (23).
Now (22) together with (23) gives the assertion of the lemma.
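Lemma 4 evaluates the measure on uniform distributions. For the type-$\alpha$ special case (7), the closed form $H_\alpha(1/n, \ldots, 1/n) = (n^{1-\alpha} - 1)/(2^{1-\alpha} - 1)$ is standard, and the following sketch (ours, covering only this special case) checks it; note that $n = 2$ returns 1, in agreement with the normalization axiom (3):

```python
import numpy as np

def havrda_charvat(p, alpha):
    """Type-alpha entropy (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

alpha = 2.0
for n in [2, 4, 8, 16]:
    uniform = np.full(n, 1.0 / n)
    closed_form = (n**(1.0 - alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)
    print(n, havrda_charvat(uniform, alpha), closed_form)  # columns 2 and 3 agree
```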

Proof of the Theorem. We prove the theorem for rationals; the continuity axiom then extends the result to the reals. To this end, let the $p_i$ be rational and choose positive integers $r_1, \ldots, r_n$ with $p_i = r_i / \sum_{k=1}^{n} r_k$; an application of Lemma 3 then gives (33), that is, (34). Equation (34) together with (23) and (32) gives (15).
This completes the proof of the theorem.

3. Properties of Entropy of Types $\alpha$, $\beta$, and $\gamma$

Let $P = (p_1, p_2, \ldots, p_n)$, $p_i \geq 0$, $\sum_{i=1}^{n} p_i = 1$, be a probability distribution. The measure $H(P; \alpha, \beta, \gamma)$ characterized in the preceding section satisfies certain properties, which are given in the following theorems:

Theorem 5. The measure $H(P; \alpha, \beta, \gamma)$ is nonnegative for all admissible values of the parameters $\alpha$, $\beta$, and $\gamma$.

Proof.
Case 1. In the first range of the parameter values, the quantities appearing in (6) satisfy the required inequalities, so we get $H(P; \alpha, \beta, \gamma) \geq 0$.
Case 2. Similarly, in the complementary range of the parameter values, we get $H(P; \alpha, \beta, \gamma) \geq 0$.
Therefore, from Case 1, Case 2, and axiom (2), we get $H(P; \alpha, \beta, \gamma) \geq 0$. This completes the proof of the theorem.
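As a numerical companion to Theorem 5 (a sketch of ours, exercising only the type-$\alpha$ special case (7), whose closed form is available), one can sample random distributions and exponents and confirm nonnegativity:

```python
import numpy as np

rng = np.random.default_rng(0)

def havrda_charvat(p, alpha):
    """Type-alpha entropy (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

# Sample random distributions from the simplex and random exponents,
# and confirm the type-alpha entropy never goes negative.
for _ in range(10_000):
    n = rng.integers(2, 8)
    p = rng.dirichlet(np.ones(n))
    alpha = rng.uniform(0.1, 5.0)
    if abs(alpha - 1.0) < 1e-6:
        continue  # alpha = 1 is the Shannon limit, excluded from (7)
    assert havrda_charvat(p, alpha) >= -1e-12
print("nonnegativity verified on 10,000 random cases")
```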

Definition 6. We will use the following definition of a convex function.
A function $f$ over the points $x$ of a convex set $S$ is convex if, for all $x_1, \ldots, x_r \in S$ and all $\lambda_1, \ldots, \lambda_r \geq 0$ with $\sum_{k=1}^{r} \lambda_k = 1$,
$$f\!\left(\sum_{k=1}^{r} \lambda_k x_k\right) \leq \sum_{k=1}^{r} \lambda_k f(x_k). \tag{40}$$
The function $f$ is concave if (40) holds with $\geq$ in place of $\leq$.
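As a quick numerical illustration of Definition 6 (a sketch of ours, not part of the paper), one can test an entropy functional against (40) by sampling mixtures; here we use the Shannon limit (8), which illustrates the concave case of the definition:

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon(p):
    """Shannon entropy (8) in bits, with the convention 0 * log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

# Test the defining inequality (40) with r = 2 mixture components:
# concavity means f(lam*P + (1-lam)*Q) >= lam*f(P) + (1-lam)*f(Q).
for _ in range(5_000):
    p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    lam = rng.uniform()
    mixed = shannon(lam * p + (1.0 - lam) * q)
    average = lam * shannon(p) + (1.0 - lam) * shannon(q)
    assert mixed >= average - 1e-12
print("concavity of Shannon entropy verified on 5,000 random mixtures")
```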

Theorem 7. The measure $H(P; \alpha, \beta, \gamma)$ is a convex function of the probability distribution $P = (p_1, p_2, \ldots, p_n)$, $p_i \geq 0$, $\sum_{i=1}^{n} p_i = 1$, when the parameters $\alpha$ and $\beta$ lie in either of two ranges that are symmetric in $\alpha$ and $\beta$.

Proof. Let there be $r$ distributions $P_1, \ldots, P_r$ associated with the random variable $X$.
Consider numbers $\lambda_1, \ldots, \lambda_r \geq 0$ such that $\sum_{k=1}^{r} \lambda_k = 1$, and define the mixture distribution (41), where (42) holds. Obviously, $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$; thus $P$ is a bona fide distribution of $X$.
For the first range of the parameters, we then obtain the inequality (40); that is, $H(P; \alpha, \beta, \gamma)$ is convex for these values of $\alpha$ and $\beta$.

By the symmetry of the measure in $\alpha$ and $\beta$, the above result is also true for the complementary range of the parameters.

Theorem 8. The measure satisfies the following relations:
(i) generalized additivity: (45) holds, where the constant in (45) is given by (46);
(ii) subadditivity: for the appropriate range of the parameters, the measure is subadditive; that is, (47) holds, where the distributions involved are complete probability distributions.

Proof of (i). We have (49) and also (50). Adding (49) and (50), we get (51). Using (46) in (51) yields (45). This completes the proof of part (i).

Proof of (ii). From part (i), we have (53). Since the additional term in (53) is nonpositive for the stated range of the parameters, (47) follows. This proves the subadditivity.
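The $\beta = \gamma = 1$ specialization makes both parts of Theorem 8 easy to check numerically: for the type-$\alpha$ entropy (7), the well-known pseudo-additivity constant is $c = 2^{1-\alpha} - 1$, and for $\alpha > 1$ the cross term $c\,H_\alpha(P)H_\alpha(Q)$ is nonpositive, giving subadditivity. The following sketch (ours, covering only this special case) illustrates both properties for independent distributions:

```python
import numpy as np

def havrda_charvat(p, alpha):
    """Type-alpha entropy (7)."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p**alpha) - 1.0) / (2.0**(1.0 - alpha) - 1.0)

alpha = 1.5
p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.5])

joint = np.outer(p, q).ravel()  # independent joint distribution of P and Q

hp, hq, hpq = (havrda_charvat(x, alpha) for x in (p, q, joint))
c = 2.0**(1.0 - alpha) - 1.0

# Generalized (pseudo-)additivity: H(PQ) = H(P) + H(Q) + c * H(P) * H(Q).
print(hpq, hp + hq + c * hp * hq)  # the two values agree

# Subadditivity for alpha > 1: the cross term c * H(P) * H(Q) is nonpositive,
# hence H(PQ) <= H(P) + H(Q).
assert hpq <= hp + hq + 1e-12
```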

4. Conclusion

In addition to the well-known information measures of Shannon [5], Rényi, Havrda-Charvát [6], Vajda [13], and Daróczy [7], we have characterized a measure which we call the $(\alpha, \beta, \gamma)$ information measure. We have given some basic axioms and properties together with a recursive relation. Shannon's [5] measure is included in the $(\alpha, \beta, \gamma)$ information measure in the limiting cases $\alpha \to 1$, $\beta = \gamma = 1$, and $\beta \to 1$, $\alpha = \gamma = 1$. This measure is a generalization of the Havrda-Charvát entropy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.