Research Article  Open Access
Litegebe Wondie, Satish Kumar, "A Joint Representation of Rényi’s and Tsalli’s Entropy with Application in Coding Theory", International Journal of Mathematics and Mathematical Sciences, vol. 2017, Article ID 2683293, 5 pages, 2017. https://doi.org/10.1155/2017/2683293
A Joint Representation of Rényi’s and Tsalli’s Entropy with Application in Coding Theory
Abstract
We introduce a quantity called the Rényi–Tsallis entropy of order $\alpha$ and discuss some of its major properties in relation to the Shannon entropy and other entropies in the literature. Further, we give an application in coding theory: a coding theorem analogous to the ordinary coding theorem for a noiseless channel is proved. The theorem states that the proposed entropy is a lower bound on the mean code word length.
1. Introduction
Let $\Delta_n = \{P = (p_1, p_2, \dots, p_n) : p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1\}$, $n \ge 2$, be the set of complete probability distributions. For any probability distribution $P \in \Delta_n$, Shannon [1] defined an entropy given as
$$H(P) = -\sum_{i=1}^{n} p_i \log p_i, \quad (1)$$
where the convention $0 \log 0 = 0$ is adopted (see Shannon [1]). Throughout this paper, logarithms are taken to the base $D$ ($D \ge 2$). A number of parametric generalizations of the Shannon entropy have been proposed in the literature, each reducing to (1) for specific values of the parameters. The presence of parameters makes an entropy more flexible from the application point of view. One of the first generalizations of (1) was proposed by Rényi [2] as
$$H_\alpha(P) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{n} p_i^{\alpha} \right), \quad \alpha > 0,\ \alpha \neq 1. \quad (2)$$
Another well-known entropy was proposed by Havrda and Charvát [3]:
$$H_\alpha^{HC}(P) = \frac{1}{2^{1-\alpha}-1} \left( \sum_{i=1}^{n} p_i^{\alpha} - 1 \right), \quad \alpha > 0,\ \alpha \neq 1. \quad (3)$$
Independently, Tsallis [4] proposed another parametric generalization of the Shannon entropy as
$$H_\alpha^{T}(P) = \frac{1}{1-\alpha} \left( \sum_{i=1}^{n} p_i^{\alpha} - 1 \right), \quad \alpha > 0,\ \alpha \neq 1. \quad (4)$$
Equations (3) and (4) have essentially the same expression except for the normalizing factor: the Havrda–Charvát entropy is normalized to 1, that is, $H_\alpha^{HC}(1/2, 1/2) = 1$, whereas the Tsallis entropy is not. Since both entropies yield the same result up to this factor, we call them the Tsallis–Havrda–Charvát entropy. Equations (2), (3), and (4) reduce to (1) as $\alpha \to 1$.
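As a concrete illustration, the entropies above can be computed numerically. The sketch below (using binary logarithms for definiteness; the function names are chosen here for illustration, not taken from the paper) checks that Rényi's entropy approaches Shannon's as $\alpha \to 1$ and that the Havrda–Charvát entropy is normalized to 1 on $(1/2, 1/2)$:

```python
import math

def shannon(p, base=2):
    """Shannon entropy H(P) = -sum p_i log p_i, with the convention 0 log 0 = 0."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

def renyi(p, alpha, base=2):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), equation (2)."""
    return math.log(sum(pi ** alpha for pi in p), base) / (1 - alpha)

def tsallis(p, alpha):
    """Tsallis entropy of order alpha (alpha > 0, alpha != 1), equation (4)."""
    return (sum(pi ** alpha for pi in p) - 1) / (1 - alpha)

def havrda_charvat(p, alpha):
    """Havrda-Charvát entropy (3): Tsallis up to the factor 2^(1-alpha) - 1."""
    return (sum(pi ** alpha for pi in p) - 1) / (2 ** (1 - alpha) - 1)

P = [0.5, 0.3, 0.2]
# As alpha -> 1, Rényi's entropy approaches the Shannon entropy (same base).
for alpha in (0.999, 1.001):
    assert abs(renyi(P, alpha) - shannon(P)) < 1e-2
# Havrda-Charvát is normalized: its value on (1/2, 1/2) equals 1 for any alpha.
assert abs(havrda_charvat([0.5, 0.5], 2.0) - 1.0) < 1e-12
```

The Tsallis and Havrda–Charvát values differ only by the constant factor $2^{1-\alpha}-1$ versus $1-\alpha$, as noted above.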
N. R. Pal and S. K. Pal [5, 6] have proposed an exponential entropy as
$$H(P) = \sum_{i=1}^{n} p_i e^{1-p_i}. \quad (5)$$
These authors claim that the exponential entropy has some advantages over Shannon's entropy, especially in the context of image processing. One such claim is that the exponential entropy has a fixed upper bound: for the uniform distribution, the entropy in (5) remains bounded as $n \to \infty$, as compared to the infinite limit (as $n \to \infty$) for the entropies in (1) and (2), and also for that in (3) when $0 < \alpha < 1$. Equation (5) was further generalized by Kvalseth [7] by introducing an additional parameter. In this paper, we introduce and study a new information measure, called the Rényi–Tsallis entropy of order $\alpha$, together with a new mean code word length, and discuss their relation to each other. In Section 2, the Rényi–Tsallis entropy is introduced and some of its major properties are discussed. In Section 3, an application of the proposed information measure in coding theory is given, and it is proved that the proposed information measure is a lower bound on the mean code word length.
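The boundedness claim can be illustrated numerically. The sketch below assumes the commonly quoted form $H(P) = \sum_i p_i e^{1-p_i}$ for Pal and Pal's entropy (the exact display was lost from the extracted text) and compares it with Shannon's entropy on uniform distributions:

```python
import math

def exp_entropy(p):
    # Pal and Pal's exponential entropy (assumed form): sum of p_i * e^(1 - p_i).
    return sum(pi * math.exp(1 - pi) for pi in p if pi > 0)

def shannon(p):
    # Shannon entropy in bits.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# For uniform distributions the exponential entropy stays below the constant e,
# while the Shannon entropy log2(n) grows without bound as n increases.
for n in (2, 10, 1000):
    u = [1.0 / n] * n
    assert exp_entropy(u) < math.e
assert shannon([1.0 / 2] * 2) < shannon([1.0 / 1000] * 1000)
```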
Now, in the next section, we propose a new parametric information measure.
2. A New Generalized Information Measure
Although various generalizations of the Shannon entropy already exist in the literature of information theory, we introduce a new information measure as given in (8). The second case in (8) is the well-known Shannon entropy.
The quantity (8) introduced in the present section is a joint representation of the Rényi and Tsallis entropies of order $\alpha$. Such a name is justified if the quantity shares some major properties with the Shannon entropy and other entropies in the literature. We study some such properties in the next theorem.
2.1. Properties of Proposed Entropy
Theorem 1. The parametric entropy has the following properties.
(1) Symmetry. The proposed entropy is a symmetric function of its arguments $p_1, p_2, \dots, p_n$.
(2) Nonnegativity. The proposed entropy is nonnegative for all $P \in \Delta_n$.
(3) Expansibility
(4) Decisivity
(5) Maximality
(6) Concavity. The proposed entropy is a concave function of the probability distribution $P = (p_1, p_2, \dots, p_n)$.
(7) Continuity. The proposed entropy is a continuous function of $P$ for all $\alpha > 0$.
Proof. Properties (1), (3), (4), and (5) follow immediately from the definition. For property (7), we know that $p_i^{\alpha}$ is continuous in $p_i \ge 0$ for all $\alpha > 0$; hence the proposed entropy, being a composition of continuous functions, is also continuous in this region.
Property (2)
Case 1 ($0 < \alpha < 1$). From (13), we obtain the inequality (14), since $0 < \alpha < 1$. It follows that the proposed entropy is nonnegative in this case, and we conclude that it is nonnegative for all $P \in \Delta_n$.
Case 2 ($\alpha > 1$). The proof follows the same lines as in Case 1. (Note that the inequality in (14) is reversed for $\alpha > 1$.)
Property (6). We now prove that the proposed entropy is a concave function of $P$.
Differentiating (8) twice with respect to $p_i$, we find that the second derivative is negative for the admissible range of $\alpha$. This implies that the proposed entropy is a concave function of $P$.
3. A Measure of Length
Let a finite set of $n$ input symbols be encoded using an alphabet of $D$ symbols; then it has been shown by Feinstein [8] that there is a uniquely decipherable code with lengths $n_1, n_2, \dots, n_n$ if and only if Kraft's inequality holds; that is,
$$\sum_{i=1}^{n} D^{-n_i} \le 1, \quad (18)$$
where $D$ is the size of the code alphabet. Furthermore, if
$$L = \sum_{i=1}^{n} p_i n_i \quad (19)$$
is the average code word length, then for a code satisfying (18), the inequality
$$L \ge H(P) \quad (20)$$
is also fulfilled, and equality holds if and only if
$$n_i = -\log_D p_i, \quad i = 1, 2, \dots, n. \quad (21)$$
If the lengths are chosen so that $L < H(P) + 1$, then by suitably encoding long sequences of source symbols, the average length per symbol can be made arbitrarily close to $H(P)$ (see Feinstein [8]). This is Shannon's noiseless coding theorem. By considering Rényi's entropy (2), a coding theorem analogous to the above noiseless coding theorem has been established by Campbell [9], who obtained bounds for his mean length in terms of $H_\alpha(P)$.
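A minimal numerical check of the statements above: Shannon code lengths $n_i = \lceil -\log_D p_i \rceil$ (a standard construction, not taken from this paper) always satisfy Kraft's inequality (18), and the resulting average length lies between $H(P)$ and $H(P) + 1$:

```python
import math

def shannon_code_lengths(p, D=2):
    """Shannon code: n_i = ceil(-log_D p_i); always satisfies Kraft's inequality."""
    return [math.ceil(-math.log(pi, D)) for pi in p]

def kraft_sum(lengths, D=2):
    """Left-hand side of Kraft's inequality (18)."""
    return sum(D ** -n for n in lengths)

def avg_length(p, lengths):
    """Average code word length L = sum p_i n_i, equation (19)."""
    return sum(pi * ni for pi, ni in zip(p, lengths))

def shannon_entropy(p, D=2):
    """Shannon entropy with base-D logarithms."""
    return -sum(pi * math.log(pi, D) for pi in p if pi > 0)

P = [0.4, 0.3, 0.2, 0.1]
n = shannon_code_lengths(P)       # lengths for a binary code alphabet (D = 2)
assert kraft_sum(n) <= 1          # (18) holds, so a UD code with these lengths exists
H, L = shannon_entropy(P), avg_length(P, n)
assert H <= L < H + 1             # the noiseless coding theorem bounds
```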
Kieffer [10] defined a class of decision rules and showed which one is the best for deciding which of two sources can be coded with the smaller expected cost for sequences of length $N$, as $N \to \infty$, where the cost of encoding a sequence is assumed to be a function of its length only. Further, Jelinek [11] showed that coding with respect to Campbell's mean length is useful in minimizing the problem of buffer overflow, which occurs when the source symbols are produced at a fixed rate and the code words are stored temporarily in a finite buffer.
There are many different codes whose lengths satisfy the constraint (18). To compare different codes and pick out an optimum code, it is customary to examine the mean length $L$ and to minimize this quantity. This is a good procedure if the cost of using a sequence of length $n_i$ is directly proportional to $n_i$. However, there may be occasions when the cost is more nearly an exponential function of $n_i$. This could be the case, for example, if the cost of encoding and decoding equipment were an important factor. Thus, in some circumstances, it might be more appropriate to choose a code which minimizes the quantity
$$\sum_{i=1}^{n} p_i D^{t n_i}, \quad (22)$$
where $t > 0$ is a parameter related to the cost. For reasons which will become evident later, we prefer to minimize a monotonic function of this quantity, namely $\frac{1}{t} \log_D \left( \sum_{i=1}^{n} p_i D^{t n_i} \right)$. Clearly, minimizing this will also minimize (22).
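Campbell's mean length, the monotone function of the exponentially weighted cost just described, can be sketched as follows (the code lengths below are illustrative): it reduces to the ordinary average length as $t \to 0$ and is non-decreasing in $t$, since larger $t$ penalizes long code words more heavily.

```python
import math

def campbell_length(p, lengths, t, D=2):
    """Campbell's mean length of order t: (1/t) * log_D sum p_i D^(t n_i)."""
    return math.log(sum(pi * D ** (t * ni) for pi, ni in zip(p, lengths)), D) / t

def avg_length(p, lengths):
    """Ordinary average code word length, equation (19)."""
    return sum(pi * ni for pi, ni in zip(p, lengths))

P = [0.4, 0.3, 0.2, 0.1]
n = [1, 2, 3, 3]
# As t -> 0 the exponential mean approaches the ordinary average length...
assert abs(campbell_length(P, n, 1e-6) - avg_length(P, n)) < 1e-3
# ...and it is non-decreasing in t.
ts = [0.1, 0.5, 1.0, 2.0]
vals = [campbell_length(P, n, t) for t in ts]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```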
In order to make the result of this paper more directly comparable with the usual coding theorem, we introduce a quantity which resembles the mean length. Let a code length of order $\alpha$ be defined by (23).
Remark 2. As $\alpha \to 1$, (23) reduces to the well-known mean code word length (19) studied by Shannon.
Remark 3. If all the $n_i$ are the same, say $n_i = n$ for every $i$, then (23) becomes $n$. This is a reasonable property for any measure of length to possess.
In the following theorem, we give a lower bound for the mean code word length (23) in terms of the proposed entropy.
Theorem 4. If $n_i$, $i = 1, 2, \dots, n$, denote the lengths of a uniquely decipherable code satisfying (18), then the inequality (25) holds; that is, the proposed entropy does not exceed the code length of order $\alpha$.
Proof. By Hölder's inequality,
$$\sum_{i=1}^{n} x_i y_i \ge \left( \sum_{i=1}^{n} x_i^{p} \right)^{1/p} \left( \sum_{i=1}^{n} y_i^{q} \right)^{1/q} \quad (26)$$
for all $x_i, y_i > 0$, $i = 1, 2, \dots, n$, where $\frac{1}{p} + \frac{1}{q} = 1$ with either $p < 1$ ($p \neq 0$) and $q < 0$, or $q < 1$ ($q \neq 0$) and $p < 0$; equality holds if and only if, for some constant $c$, $x_i^{p} = c\, y_i^{q}$ for all $i$. Note that the direction of Hölder's inequality is the reverse of the usual one for $p < 1$ (Beckenbach and Bellman [12], p. 19). Making the appropriate substitutions for $x_i$, $y_i$, $p$, and $q$ in (26) and simplifying using (18), the following cases arise.
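The reversed direction of Hölder's inequality for $p < 1$ can be verified numerically on sample data (the values below are arbitrary):

```python
def holder_sides(x, y, p):
    """Return (lhs, rhs) of the reverse Hölder inequality (26) for exponent p < 1."""
    q = p / (p - 1)                       # conjugate exponent: 1/p + 1/q = 1
    lhs = sum(a * b for a, b in zip(x, y))
    rhs = sum(a ** p for a in x) ** (1 / p) * sum(b ** q for b in y) ** (1 / q)
    return lhs, rhs

x = [0.5, 1.5, 2.0]
y = [1.0, 0.7, 0.3]
# For p in (0, 1) and for p < 0 the usual Hölder inequality reverses:
# sum x_i y_i >= (sum x_i^p)^(1/p) * (sum y_i^q)^(1/q).
for p in (0.5, -1.0):
    lhs, rhs = holder_sides(x, y, p)
    assert lhs >= rhs
```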
Case 1 (when $0 < \alpha < 1$). From (27), we get (28). Also, (29) holds. Thus, from (28) and (29), we may conclude that the bound in (25) holds in this case.
Case 2 (when $\alpha > 1$). From (30), we get (31). Also, (32) holds. From (31) and (32), we obtain the bound in (25) for this case.
Case 3. Equality in (25) is attained exactly when the equality condition of the reverse Hölder inequality above is met: for some constant $c$, $x_i^{p} = c\, y_i^{q}$ for all $i$. Applying this condition to our situation, with the $x_i$ and $y_i$ as specified, and using (18), the necessity follows. This proves the theorem.
Remark 5. Huffman [13] introduced a method for designing a variable-length source code which achieves performance close to Shannon's entropy bound. For individual code word lengths $n_i$, the average length $L$ of the Huffman code is always within one unit of Shannon's entropy; that is, $H(P) \le L < H(P) + 1$, where $H(P)$ is Shannon's measure of entropy. The Huffman coding scheme can also be applied to the code word lengths considered here, in which case the average length of the Huffman code satisfies an analogous bound. In Table 1, we develop the relation between the proposed entropy and the average code word length.
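The Huffman bound in this remark can be checked directly. The following sketch builds an optimal binary Huffman code with a heap and verifies $H(P) \le L < H(P) + 1$:

```python
import heapq
import math
from itertools import count

def huffman_lengths(p):
    """Code word lengths of an optimal binary Huffman code for distribution p."""
    tie = count()
    # Heap entries: (subtree probability, tiebreak, symbol indices in the subtree).
    heap = [(pi, next(tie), [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    depth = [0] * len(p)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # every merge adds one bit to these symbols
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return depth

P = [0.4, 0.3, 0.2, 0.1]
n = huffman_lengths(P)
L = sum(pi * ni for pi, ni in zip(P, n))
H = -sum(pi * math.log2(pi) for pi in P)
assert H <= L < H + 1                    # Huffman length is within one bit of entropy
```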

From Table 1, we observe that the average code word length exceeds the entropy.
4. Monotonic Behaviour of Mean Code Word Length
In this section, we study the monotonic behaviour of the mean code word length (23) with respect to the parameter $\alpha$. Let $P$ be a fixed set of probabilities. For different values of $\alpha$, the calculated values of the mean code word length are displayed in Tables 2 and 3.


Graphical representations of the monotonic behaviour of the mean code word length for $0 < \alpha < 1$ and for $\alpha > 1$ are shown in Figures 1 and 2, respectively. From the figures, it is clear that the mean code word length is monotonically increasing in $\alpha$ over both ranges.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
[1] C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, no. 4, pp. 623–656, 1948.
[2] A. Rényi, “On measures of entropy and information,” in Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, pp. 547–561, University of California Press, 1961.
[3] J. Havrda and F. S. Charvát, “Quantification method of classification processes. Concept of structural α-entropy,” Kybernetika, vol. 3, pp. 30–35, 1967.
[4] C. Tsallis, “Possible generalization of Boltzmann-Gibbs statistics,” Journal of Statistical Physics, vol. 52, no. 1-2, pp. 479–487, 1988.
[5] N. R. Pal and S. K. Pal, “Object-background segmentation using new definitions of entropy,” IEE Proceedings Part E: Computers and Digital Techniques, vol. 136, no. 4, pp. 284–295, 1989.
[6] N. R. Pal and S. K. Pal, “Entropy: a new definition and its applications,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 5, pp. 1260–1270, 1991.
[7] T. O. Kvalseth, “On exponential entropies,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 2822–2826, 2000.
[8] A. Feinstein, Foundations of Information Theory, McGraw-Hill, New York, NY, USA, 1956.
[9] L. L. Campbell, “A coding theorem and Rényi's entropy,” Information and Control, vol. 8, no. 4, pp. 423–429, 1965.
[10] J. C. Kieffer, “Variable-length source coding with a cost depending only on the code word length,” Information and Control, vol. 41, no. 2, pp. 136–146, 1979.
[11] F. Jelinek, “Buffer overflow in variable length coding of fixed rate sources,” IEEE Transactions on Information Theory, vol. 14, no. 3, pp. 490–501, 1968.
[12] E. F. Beckenbach and R. Bellman, Inequalities, Springer, New York, NY, USA, 1961.
[13] D. A. Huffman, “A method for the construction of minimum-redundancy codes,” Proceedings of the IRE, vol. 40, no. 9, pp. 1098–1101, 1952.
Copyright
Copyright © 2017 Litegebe Wondie and Satish Kumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.