Abstract

We show that a two-parameter extended entropy function is characterized by a functional equation. As a corollary of this result, we obtain that the Tsallis entropy function is characterized by a functional equation whose form differs from the one used by Suyari and Tsukada in 2009, restated as Proposition 2.1 in the present paper. We also give an interpretation of the functional equation appearing in our main theorem.

1. Introduction

Recently, generalized entropies have been studied from the mathematical point of view. The typical generalizations of Shannon entropy [1] are Rényi entropy [2] and Tsallis entropy [3]. The recent comprehensive book [4] and the review [5] help readers to understand Tsallis statistics. Rényi entropy and Tsallis entropy are defined by

$$R_q(X) = \frac{1}{1-q}\log\sum_{j=1}^{n} p_j^q, \qquad S_q(X) = \frac{\sum_{j=1}^{n}\left(p_j^q - p_j\right)}{1-q}, \qquad (q \neq 1,\ q > 0), \tag{1.1}$$

for a given information source $X = \{x_1,\ldots,x_n\}$ with probabilities $p_j \equiv \Pr(X = x_j)$. Both entropies recover Shannon entropy

$$S_1(X) \equiv -\sum_{j=1}^{n} p_j \log p_j \tag{1.2}$$

in the limit $q \to 1$. The uniqueness theorem for Tsallis entropy was first given in [6] and improved in [7].
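
The following minimal numerical sketch is an illustrative addition (it assumes Python with NumPy and is not part of the original derivation); it evaluates (1.1) and (1.2) for a small distribution and confirms that both generalized entropies approach Shannon entropy as $q \to 1$.

```python
# Illustrative check of (1.1) and (1.2); assumes NumPy is available.
import numpy as np

def renyi(p, q):
    return np.log(np.sum(p ** q)) / (1.0 - q)   # R_q, q != 1, q > 0

def tsallis(p, q):
    return np.sum(p ** q - p) / (1.0 - q)       # S_q, q != 1, q > 0

def shannon(p):
    return -np.sum(p * np.log(p))               # S_1

p = np.array([0.5, 0.3, 0.2])
for q in (0.999, 1.001):
    # all three values are approximately equal near q = 1
    print(renyi(p, q), tsallis(p, q), shannon(p))
```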

Throughout this paper, we call a parametrically extended entropy, such as Rényi entropy and Tsallis entropy, a generalized entropy. If we take $n=2$ in (1.2), we have the so-called binary entropy $s_b(x) = -x\log x - (1-x)\log(1-x)$. If we take $n=1$ in (1.2), we have Shannon's entropy function $f(x) = -x\log x$. In this paper, we treat the entropy function with two parameters. We note that the relative entropic function $-y f(x/y) = x(\log x - \log y)$ can be produced by the use of Shannon's entropy function $f(x)$.

We note that Rényi entropy has the additivity

$$R_q(X\times Y) = R_q(X) + R_q(Y), \tag{1.3}$$

but Tsallis entropy has the nonadditivity

$$S_q(X\times Y) = S_q(X) + S_q(Y) + (1-q)S_q(X)S_q(Y), \tag{1.4}$$

where $X\times Y$ means that $X$ and $Y$ are independent random variables. Therefore, the two entropies differ in a definitive way, although we have the simple relation between them

$$\exp\left(R_q(X)\right) = \exp_q\left(S_q(X)\right), \qquad (q\neq 1), \tag{1.5}$$

where the $q$-exponential function $\exp_q(x) \equiv \{1+(1-q)x\}^{1/(1-q)}$ is defined if $1+(1-q)x \geq 0$. Note that we have $\exp_q\left(S_q(X)\right) = \left(\sum_{j=1}^{n} p_j^q\right)^{1/(1-q)} > 0$.
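
As a quick illustration of (1.3)-(1.5), the following sketch (again an illustrative addition assuming NumPy, not part of the original text) checks the three relations for an independent pair.

```python
# Illustrative check of (1.3), (1.4), and (1.5) for independent X and Y.
import numpy as np

q = 1.5
px = np.array([0.6, 0.4])
py = np.array([0.7, 0.2, 0.1])
pxy = np.outer(px, py).ravel()                  # joint distribution of X x Y

renyi = lambda p: np.log(np.sum(p ** q)) / (1 - q)
tsallis = lambda p: np.sum(p ** q - p) / (1 - q)
exp_q = lambda x: (1 + (1 - q) * x) ** (1 / (1 - q))

print(np.isclose(renyi(pxy), renyi(px) + renyi(py)))          # (1.3)
print(np.isclose(tsallis(pxy),
                 tsallis(px) + tsallis(py)
                 + (1 - q) * tsallis(px) * tsallis(py)))      # (1.4)
print(np.isclose(np.exp(renyi(px)), exp_q(tsallis(px))))      # (1.5)
```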

Tsallis entropy can be rewritten as

$$S_q(X) = -\sum_{j=1}^{n} p_j^q \ln_q p_j, \tag{1.6}$$

where the $q$-logarithmic function (the inverse function of $\exp_q(\cdot)$) is defined by

$$\ln_q x \equiv \frac{x^{1-q}-1}{1-q}, \qquad (q\neq 1), \tag{1.7}$$

which converges to $\log x$ in the limit $q\to 1$.

Since Shannon entropy can be regarded as the expectation value of each value $-\log p_j$, we may consider, by analogy, that Tsallis entropy can be regarded as the $q$-expectation value of each value $-\ln_q p_j$, where the $q$-expectation value $E_q$ is defined by

$$E_q(X) \equiv \sum_{j=1}^{n} p_j^q x_j. \tag{1.8}$$

However, the $q$-expectation value $E_q$ lacks the fundamental property $E(1)=1$, so it was considered inadequate as a generalized definition of the usual expectation value. Then the normalized $q$-expectation value was introduced:

$$E_q^{(\mathrm{nor})}(X) \equiv \frac{\sum_{j=1}^{n} p_j^q x_j}{\sum_{i=1}^{n} p_i^q}, \tag{1.9}$$

and by using this, the normalized Tsallis entropy was defined by

$$S_q^{(\mathrm{nor})}(X) \equiv \frac{S_q(X)}{\sum_{j=1}^{n} p_j^q} = -\frac{\sum_{j=1}^{n} p_j^q \ln_q p_j}{\sum_{i=1}^{n} p_i^q}, \qquad (q\neq 1). \tag{1.10}$$

We easily find that we have the following nonadditivity relation for the normalized Tsallis entropy:

$$S_q^{(\mathrm{nor})}(X\times Y) = S_q^{(\mathrm{nor})}(X) + S_q^{(\mathrm{nor})}(Y) + (q-1)S_q^{(\mathrm{nor})}(X)S_q^{(\mathrm{nor})}(Y). \tag{1.11}$$
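
The relation (1.11) can also be confirmed numerically; the following sketch is an illustrative addition under the same assumptions as the previous ones.

```python
# Illustrative check of the nonadditivity (1.11) of the normalized
# Tsallis entropy (1.10); note the coefficient q - 1 instead of 1 - q.
import numpy as np

q = 0.5
px = np.array([0.6, 0.4])
py = np.array([0.7, 0.2, 0.1])
pxy = np.outer(px, py).ravel()

def tsallis_nor(p):
    return np.sum(p ** q - p) / (1 - q) / np.sum(p ** q)    # (1.10)

lhs = tsallis_nor(pxy)
rhs = (tsallis_nor(px) + tsallis_nor(py)
       + (q - 1) * tsallis_nor(px) * tsallis_nor(py))       # (1.11)
print(np.isclose(lhs, rhs))
```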

As for the details on the mathematical properties of the normalized Tsallis entropy, see [8], for example. See also [9] for the role of Tsallis entropy and the normalized Tsallis entropy in statistical physics. The difference between the two nonadditivity relations (1.4) and (1.11) is the sign of the coefficient $1-q$ in the third term of the right-hand sides.

We note that Tsallis entropy can also be rewritten as

$$S_q(X) = \sum_{j=1}^{n} p_j \ln_q \frac{1}{p_j}, \tag{1.12}$$

so that we may regard it as the usual expectation value $S_q(X) = E_1\left[\ln_q(1/p_j)\right]$, where $E_1$ means the usual expectation value $E_1[X] = \sum_{j=1}^{n} p_j x_j$. However, if we adopt this formulation in the definition of Tsallis conditional entropy, we lose an important property, namely the chain rule (see [10] for details). Therefore, we often adopt the formulation using the $q$-expectation value.

As a further generalization, a two-parameter extended entropy

$$S_{\kappa,r}(X) \equiv -\sum_{j=1}^{n} p_j \ln_{\kappa,r} p_j \tag{1.13}$$

was recently introduced in [11, 12] and systematically studied together with the generalized exponential function and the generalized logarithmic function $\ln_{\kappa,r}(x) \equiv x^r\left(\frac{x^\kappa - x^{-\kappa}}{2\kappa}\right)$. In the present paper, we treat a two-parameter extended entropy defined in the following form:

$$S_{\alpha,\beta}(X) \equiv \sum_{j=1}^{n} \frac{p_j^\alpha - p_j^\beta}{\beta-\alpha}, \qquad (\alpha,\beta\in\mathbb{R},\ \alpha\neq\beta), \tag{1.14}$$

for two positive numbers $\alpha$ and $\beta$. This form can be obtained by putting $\alpha = 1+r-\kappa$ and $\beta = 1+r+\kappa$ in (1.13), and it coincides with the two-parameter extended entropy studied in [13]. In addition, the two-parameter extended entropy (1.14) was axiomatically characterized in [14]. Furthermore, a two-parameter extended relative entropy was also axiomatically characterized in [15].
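
The reparametrization can be confirmed numerically; the following illustrative sketch (an addition, assuming NumPy) checks that (1.13) with $\alpha = 1+r-\kappa$ and $\beta = 1+r+\kappa$ agrees with (1.14).

```python
# Illustrative check that (1.13) equals (1.14) under the substitution
# alpha = 1 + r - kappa, beta = 1 + r + kappa (so beta - alpha = 2 kappa).
import numpy as np

kappa, r = 0.3, 0.1
alpha, beta = 1 + r - kappa, 1 + r + kappa

def ln_kr(x):
    return x ** r * (x ** kappa - x ** (-kappa)) / (2 * kappa)

p = np.array([0.5, 0.3, 0.2])
S_kr = -np.sum(p * ln_kr(p))                              # (1.13)
S_ab = np.sum((p ** alpha - p ** beta) / (beta - alpha))  # (1.14)
print(np.isclose(S_kr, S_ab))
```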

In [16], a characterization of the Tsallis entropy function was proven by using a functional equation. In the present paper, we show that the two-parameter extended entropy function

$$f_{\alpha,\beta}(x) = \frac{x^\alpha - x^\beta}{\beta-\alpha}, \qquad (\alpha,\beta\in\mathbb{R},\ \alpha\neq\beta), \tag{1.15}$$

can be characterized by a simple functional equation.

2. A Review of the Characterization of Tsallis Entropy Function by the Functional Equation

The following proposition was originally given in [16] with a simple and elegant proof. Here, we give an alternative proof along the lines of the proof given in [17].

Proposition 2.1 (see [16]). If the differentiable nonnegative function $f_q$ with positive parameter $q\in\mathbb{R}$ satisfies the following functional equation:

$$f_q(xy) + f_q((1-x)y) - f_q(y) = \left(f_q(x) + f_q(1-x)\right)y^q, \qquad (0<x<1,\ 0<y\leq 1), \tag{2.1}$$

then the function $f_q$ is uniquely given by

$$f_q(x) = -c_q x^q \ln_q x, \tag{2.2}$$

where $c_q$ is a nonnegative constant depending only on the parameter $q$.

Proof. If we put $y=1$ in (2.1), then we have $f_q(1)=0$. From here on, we assume $y\neq 1$. We also put $g_q(t) \equiv f_q(t)/t$; then we have

$$x g_q(xy) + (1-x)g_q((1-x)y) - g_q(y) = \left(x g_q(x) + (1-x)g_q(1-x)\right)y^{q-1}. \tag{2.3}$$

Putting $x=1/2$ in (2.3), we have

$$g_q\left(\frac{y}{2}\right) = g_q\left(\frac{1}{2}\right)y^{q-1} + g_q(y). \tag{2.4}$$

Substituting $y/2$ for $y$, we have

$$g_q\left(\frac{y}{2^2}\right) = g_q\left(\frac{1}{2}\right)\left\{y^{q-1} + \left(\frac{y}{2}\right)^{q-1}\right\} + g_q(y). \tag{2.5}$$

By repeating similar substitutions, we have

$$g_q\left(\frac{y}{2^N}\right) = g_q\left(\frac{1}{2}\right)y^{q-1}\left\{1 + \frac{1}{2^{q-1}} + \frac{1}{2^{2(q-1)}} + \cdots + \frac{1}{2^{(N-1)(q-1)}}\right\} + g_q(y) = g_q\left(\frac{1}{2}\right)y^{q-1}\left(\frac{2^{N(1-q)}-1}{2^{1-q}-1}\right) + g_q(y). \tag{2.6}$$

Then, we have

$$\lim_{N\to\infty}\frac{g_q\left(y/2^N\right)}{2^N} = 0, \tag{2.7}$$

due to $q>0$. Differentiating (2.3) with respect to $y$, we have

$$x^2 g_q'(xy) + (1-x)^2 g_q'((1-x)y) - g_q'(y) = (q-1)\left(x g_q(x) + (1-x)g_q(1-x)\right)y^{q-2}. \tag{2.8}$$

Putting $y=1$ in the above equation, we have

$$x^2 g_q'(x) + (1-x)^2 g_q'(1-x) + (1-q)\left(x g_q(x) + (1-x)g_q(1-x)\right) = -c_q, \tag{2.9}$$

where $c_q = -g_q'(1)$.
By integrating (2.3) from $2^{-N}$ to $1$ with respect to $y$ and changing variables, we have

$$\int_{2^{-N}x}^{x} g_q(t)\,dt + \int_{2^{-N}(1-x)}^{1-x} g_q(t)\,dt - \int_{2^{-N}}^{1} g_q(t)\,dt = \left(x g_q(x) + (1-x)g_q(1-x)\right)\frac{1-2^{-qN}}{q}. \tag{2.10}$$

By differentiating the above equation with respect to $x$, we have

$$g_q(x) - 2^{-N}g_q\left(2^{-N}x\right) - g_q(1-x) + 2^{-N}g_q\left(2^{-N}(1-x)\right) = \frac{1-2^{-qN}}{q}\left(g_q(x) + x g_q'(x) - g_q(1-x) - (1-x)g_q'(1-x)\right). \tag{2.11}$$

Taking the limit $N\to\infty$ in the above, we have, thanks to (2.7),

$$(1-x)g_q'(1-x) + (1-q)g_q(1-x) = x g_q'(x) + (1-q)g_q(x). \tag{2.12}$$

From (2.9) and (2.12), we have the following differential equation:

$$x g_q'(x) + (1-q)g_q(x) = -c_q. \tag{2.13}$$

This differential equation has the general solution

$$g_q(x) = -\frac{c_q}{1-q} + d_q x^{q-1}, \tag{2.14}$$

where $d_q$ is an integration constant depending on $q$. From $g_q(1)=0$, we have $d_q = c_q/(1-q)$. Thus, we have

$$g_q(x) = c_q\frac{x^{q-1}-1}{1-q}. \tag{2.15}$$

Finally, we have

$$f_q(x) = c_q\frac{x^q - x}{1-q} = -c_q x^q \ln_q x. \tag{2.16}$$

From $f_q(x)\geq 0$, we have $c_q\geq 0$.
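
As a sanity check on Proposition 2.1, the following illustrative sketch (an addition assuming NumPy) verifies numerically that the solution (2.2) satisfies the functional equation (2.1).

```python
# Illustrative check that f_q(x) = -c_q x^q ln_q x of (2.2) satisfies (2.1).
import numpy as np

q, c_q = 1.7, 2.0

def f_q(x):
    ln_q = (x ** (1 - q) - 1) / (1 - q)   # q-logarithm (1.7)
    return -c_q * x ** q * ln_q

x, y = 0.3, 0.8
lhs = f_q(x * y) + f_q((1 - x) * y) - f_q(y)
rhs = (f_q(x) + f_q(1 - x)) * y ** q
print(np.isclose(lhs, rhs))
```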
If we take the limit as $q\to 1$ in Proposition 2.1, we have the following corollary.

Corollary 2.2 (see [17]). If the differentiable nonnegative function $f$ satisfies the following functional equation:

$$f(xy) + f((1-x)y) - f(y) = \left(f(x) + f(1-x)\right)y, \qquad (0<x<1,\ 0<y\leq 1), \tag{2.17}$$

then the function $f$ is uniquely given by

$$f(x) = -c x\log x, \tag{2.18}$$

where $c$ is a nonnegative constant.

3. Main Results

In this section, we give a characterization of the two-parameter extended entropy function by a functional equation. Before we give our main theorem, we review the following result by Kannappan [18, 19].

Proposition 3.1 (see [18, 19]). Let $(p_1,\ldots,p_n)$ and $(q_1,\ldots,q_m)$ be two probability distributions. If the measurable function $f:(0,1)\to\mathbb{R}$ satisfies

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f\left(p_i q_j\right) = \sum_{i=1}^{n} p_i^\alpha \sum_{j=1}^{m} f\left(q_j\right) + \sum_{j=1}^{m} q_j^\beta \sum_{i=1}^{n} f\left(p_i\right), \tag{3.1}$$

for all $(p_1,\ldots,p_n)$ and $(q_1,\ldots,q_m)$ with fixed $m,n\geq 3$, then the function $f$ is given by

$$f(p) = \begin{cases} c\left(p^\alpha - p^\beta\right), & \alpha\neq\beta, \\ c\, p^\alpha \log p, & \alpha=\beta\neq 1, \\ c\, p\log p + b(mn-m-n)p + b, & \alpha=\beta=1, \end{cases} \tag{3.2}$$

where $c$ and $b$ are arbitrary constants.
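
The condition (3.1) can be checked numerically for the first case of (3.2); the following sketch is an illustrative addition (NumPy assumed).

```python
# Illustrative check of (3.1) for f(p) = c (p^alpha - p^beta),
# the first case of (3.2), with two arbitrary probability vectors.
import numpy as np

alpha, beta, c = 0.6, 1.5, 2.0
f = lambda p: c * (p ** alpha - p ** beta)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.1, 0.4, 0.2, 0.3])
lhs = np.sum(f(np.outer(p, q)))            # sum over all products p_i q_j
rhs = (np.sum(p ** alpha) * np.sum(f(q))
       + np.sum(q ** beta) * np.sum(f(p)))
print(np.isclose(lhs, rhs))
```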

Here, we review a two-parameter generalized Shannon additivity [14, equation (30)]:

$$\sum_{i=1}^{n}\sum_{j=1}^{m_i} s_{\alpha,\beta}\left(p_{ij}\right) = \sum_{i=1}^{n} p_i^\alpha \sum_{j=1}^{m_i} s_{\alpha,\beta}\left(p(j\mid i)\right) + \sum_{i=1}^{n} s_{\alpha,\beta}\left(p_i\right)\sum_{j=1}^{m_i} p(j\mid i)^\beta, \tag{3.3}$$

where $s_{\alpha,\beta}$ is a component of the trace form of the two-parameter entropy [14, equation (26)]:

$$S_{\alpha,\beta}\left(p_1,\ldots,p_n\right) = \sum_{i=1}^{n} s_{\alpha,\beta}\left(p_i\right). \tag{3.4}$$

Equation (3.3) was used to prove the uniqueness theorem for the two-parameter extended entropy in [14]. A tree-graphical interpretation of (3.3) was also given in [14]. The condition (3.1) can be read as the independent case ($p(j\mid i) = q_j$) of (3.3).

Here, we consider the simplest nontrivial case of (3.3). Take $\{p_{ij}\} = \{q_1, q_2, q_3\}$, $p_1 = q_1+q_2$, and $p_2 = q_3$. Then we have $p(1\mid 1) = q_1/(q_1+q_2)$, $p(2\mid 1) = q_2/(q_1+q_2)$, $p(1\mid 2) = 1$, and $p(2\mid 2) = 0$, so that (3.3) is written as

$$S_{\alpha,\beta}\left(q_1,q_2,q_3\right) = \left(q_1+q_2\right)^\alpha\left\{s_{\alpha,\beta}\left(\frac{q_1}{q_1+q_2}\right) + s_{\alpha,\beta}\left(\frac{q_2}{q_1+q_2}\right)\right\} + q_3^\alpha\left\{s_{\alpha,\beta}(1) + s_{\alpha,\beta}(0)\right\} + s_{\alpha,\beta}\left(q_1+q_2\right)\left\{\left(\frac{q_1}{q_1+q_2}\right)^\beta + \left(\frac{q_2}{q_1+q_2}\right)^\beta\right\} + s_{\alpha,\beta}\left(q_3\right). \tag{3.5}$$

If $s_{\alpha,\beta}$ is an entropic function, then it vanishes at $0$ and $1$, since the entropy carries no informational quantity in the deterministic cases; the above identity then reduces to

$$S_{\alpha,\beta}\left(q_1,q_2,q_3\right) = \left(q_1+q_2\right)^\alpha\left\{s_{\alpha,\beta}\left(\frac{q_1}{q_1+q_2}\right) + s_{\alpha,\beta}\left(\frac{q_2}{q_1+q_2}\right)\right\} + s_{\alpha,\beta}\left(q_1+q_2\right)\left\{\left(\frac{q_1}{q_1+q_2}\right)^\beta + \left(\frac{q_2}{q_1+q_2}\right)^\beta\right\} + s_{\alpha,\beta}\left(q_3\right). \tag{3.6}$$
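
The identity (3.6) can be verified numerically with the entropic function (1.15); the following sketch is an illustrative addition (NumPy assumed).

```python
# Illustrative check of (3.6) for
# s_{alpha,beta}(x) = (x^alpha - x^beta) / (beta - alpha).
import numpy as np

alpha, beta = 0.8, 1.3
s = lambda x: (x ** alpha - x ** beta) / (beta - alpha)

q1, q2, q3 = 0.2, 0.3, 0.5
u, v = q1 / (q1 + q2), q2 / (q1 + q2)
lhs = s(q1) + s(q2) + s(q3)                             # S_{alpha,beta}(q1,q2,q3)
rhs = ((q1 + q2) ** alpha * (s(u) + s(v))
       + s(q1 + q2) * (u ** beta + v ** beta) + s(q3))  # right-hand side of (3.6)
print(np.isclose(lhs, rhs))
```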

In the following theorem, we adopt a simpler condition than (3.1).

Theorem 3.2. If the differentiable nonnegative function $f_{\alpha,\beta}$ with two positive parameters $\alpha,\beta\in\mathbb{R}$ satisfies the following functional equation:

$$f_{\alpha,\beta}(xy) = x^\alpha f_{\alpha,\beta}(y) + y^\beta f_{\alpha,\beta}(x), \qquad (0<x,y\leq 1), \tag{3.7}$$

then the function $f_{\alpha,\beta}$ is uniquely given by

$$f_{\alpha,\beta}(x) = c_{\alpha,\beta}\frac{x^\beta - x^\alpha}{\alpha-\beta}, \quad (\alpha\neq\beta), \qquad f_{\alpha}(x) = -c_{\alpha}x^\alpha\log x, \quad (\alpha=\beta), \tag{3.8}$$

where $c_{\alpha,\beta}$ and $c_{\alpha}$ are nonnegative constants depending only on the parameters $\alpha$ (and $\beta$).

Proof. If we put $y=1$, then we have $f_{\alpha,\beta}(1)=0$ due to $x>0$. By differentiating (3.7) with respect to $y$, we have

$$x f_{\alpha,\beta}'(xy) = x^\alpha f_{\alpha,\beta}'(y) + \beta y^{\beta-1} f_{\alpha,\beta}(x). \tag{3.9}$$

Putting $y=1$ in (3.9), we have the following differential equation:

$$x f_{\alpha,\beta}'(x) - \beta f_{\alpha,\beta}(x) = -c_{\alpha,\beta}x^\alpha, \tag{3.10}$$

where we put $c_{\alpha,\beta} \equiv -f_{\alpha,\beta}'(1)$. Equation (3.10) can be rewritten as

$$x^{\beta+1}\left(x^{-\beta}f_{\alpha,\beta}(x)\right)' = -c_{\alpha,\beta}x^\alpha, \tag{3.11}$$

that is, we have

$$\left(x^{-\beta}f_{\alpha,\beta}(x)\right)' = -c_{\alpha,\beta}x^{\alpha-\beta-1}. \tag{3.12}$$

Integrating both sides of the above equation with respect to $x$, we have

$$x^{-\beta}f_{\alpha,\beta}(x) = -\frac{c_{\alpha,\beta}x^{\alpha-\beta}}{\alpha-\beta} + d_{\alpha,\beta}, \tag{3.13}$$

where $d_{\alpha,\beta}$ is an integration constant depending on $\alpha$ and $\beta$. Therefore, we have

$$f_{\alpha,\beta}(x) = -\frac{c_{\alpha,\beta}x^\alpha}{\alpha-\beta} + d_{\alpha,\beta}x^\beta. \tag{3.14}$$

By $f_{\alpha,\beta}(1)=0$, we have $d_{\alpha,\beta} = c_{\alpha,\beta}/(\alpha-\beta)$. Thus, we have

$$f_{\alpha,\beta}(x) = \frac{c_{\alpha,\beta}\left(x^\beta - x^\alpha\right)}{\alpha-\beta}. \tag{3.15}$$

Also, by $f_{\alpha,\beta}(x)\geq 0$, we have $c_{\alpha,\beta}\geq 0$.
As for the case $\alpha=\beta$, the proof proceeds in a similar way.
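
As a sanity check on Theorem 3.2, the following illustrative sketch (an addition assuming NumPy) verifies that both branches of (3.8) satisfy the functional equation (3.7).

```python
# Illustrative check that the solutions (3.8) satisfy (3.7),
# for alpha != beta and for the case alpha == beta.
import numpy as np

def satisfies_37(f, x, y, a, b):
    return np.isclose(f(x * y), x ** a * f(y) + y ** b * f(x))

a, b, c = 0.7, 1.4, 1.0
f_ab = lambda x: c * (x ** b - x ** a) / (a - b)   # branch alpha != beta
f_a = lambda x: -c * x ** a * np.log(x)            # branch alpha == beta

print(satisfies_37(f_ab, 0.3, 0.8, a, b))          # (3.7)
print(satisfies_37(f_a, 0.3, 0.8, a, a))           # (3.7) with beta = alpha
```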

Remark 3.3. We can derive (3.6) from our condition (3.7). First, we easily have $f_{\alpha,\beta}(0) = f_{\alpha,\beta}(1) = 0$ from our condition (3.7). In addition, for $q = q_1+q_2$, we have

$$S_{\alpha,\beta}\left(q\cdot\frac{q_1}{q},\ q\cdot\frac{q_2}{q},\ q_3\right) = f_{\alpha,\beta}\left(q\cdot\frac{q_1}{q}\right) + f_{\alpha,\beta}\left(q\cdot\frac{q_2}{q}\right) + f_{\alpha,\beta}\left(q_3\right) = q^\alpha f_{\alpha,\beta}\left(\frac{q_1}{q}\right) + \left(\frac{q_1}{q}\right)^\beta f_{\alpha,\beta}(q) + q^\alpha f_{\alpha,\beta}\left(\frac{q_2}{q}\right) + \left(\frac{q_2}{q}\right)^\beta f_{\alpha,\beta}(q) + f_{\alpha,\beta}\left(q_3\right) = \left(q_1+q_2\right)^\alpha\left\{f_{\alpha,\beta}\left(\frac{q_1}{q_1+q_2}\right) + f_{\alpha,\beta}\left(\frac{q_2}{q_1+q_2}\right)\right\} + f_{\alpha,\beta}\left(q_1+q_2\right)\left\{\left(\frac{q_1}{q_1+q_2}\right)^\beta + \left(\frac{q_2}{q_1+q_2}\right)^\beta\right\} + f_{\alpha,\beta}\left(q_3\right). \tag{3.16}$$

Thus, we may interpret that our condition (3.7) contains an essential part of the two-parameter generalized Shannon additivity.
Note that we can reproduce the two-parameter entropic function by the use of $f_{\alpha,\beta}$ as

$$-y f_{\alpha,\beta}\left(\frac{x}{y}\right) = \frac{x^\alpha y^{1-\alpha} - x^\beta y^{1-\beta}}{\alpha-\beta}, \tag{3.17}$$

with $c_{\alpha,\beta}=1$ for simplicity. This leads to the two-parameter extended relative entropy [15]

$$D_{\alpha,\beta}\left(x_1,\ldots,x_n \,\|\, y_1,\ldots,y_n\right) \equiv \sum_{j=1}^{n} \frac{x_j^\alpha y_j^{1-\alpha} - x_j^\beta y_j^{1-\beta}}{\alpha-\beta}. \tag{3.18}$$

See also [20] for the first appearance of the Tsallis relative entropy (generalized Kullback-Leibler information).
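
The following illustrative sketch (an addition assuming NumPy) evaluates (3.18) and checks that it approaches the Kullback-Leibler divergence as $(\alpha,\beta)\to(1,1)$.

```python
# Illustrative check that the two-parameter relative entropy (3.18)
# tends to the Kullback-Leibler divergence as (alpha, beta) -> (1, 1).
import numpy as np

def D(x, y, alpha, beta):
    return np.sum((x ** alpha * y ** (1 - alpha)
                   - x ** beta * y ** (1 - beta)) / (alpha - beta))

x = np.array([0.5, 0.3, 0.2])
y = np.array([0.4, 0.4, 0.2])
print(D(x, y, 1.001, 0.999))        # close to the KL divergence below
print(np.sum(x * np.log(x / y)))    # Kullback-Leibler divergence
```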
If we take $\alpha=q$, $\beta=1$ or $\alpha=1$, $\beta=q$ in Theorem 3.2, we have the following corollary.

Corollary 3.4. If the differentiable nonnegative function $f_q$ with a positive parameter $q\in\mathbb{R}$ satisfies the following functional equation:

$$f_q(xy) = x^q f_q(y) + y f_q(x), \qquad (0<x,y\leq 1,\ q\neq 1), \tag{3.19}$$

then the function $f_q$ is uniquely given by

$$f_q(x) = -c_q x^q \ln_q x, \tag{3.20}$$

where $c_q$ is a nonnegative constant depending only on the parameter $q$.

Here, we give an interpretation of the functional equation (3.19) from the viewpoint of Tsallis statistics.

Remark 3.5. We assume that we have the following two functional equations for $0<x$, $y\leq 1$:

$$f_q(xy) = y f_q(x) + x f_q(y) + (1-q)f_q(x)f_q(y), \qquad f_q(xy) = y^q f_q(x) + x^q f_q(y) + (q-1)f_q(x)f_q(y). \tag{3.21}$$

These equations lead to the following equations for $0<x_i,y_j\leq 1$:

$$f_q\left(x_i y_j\right) = y_j f_q\left(x_i\right) + x_i f_q\left(y_j\right) + (1-q)f_q\left(x_i\right)f_q\left(y_j\right), \qquad f_q\left(x_i y_j\right) = y_j^q f_q\left(x_i\right) + x_i^q f_q\left(y_j\right) + (q-1)f_q\left(x_i\right)f_q\left(y_j\right), \tag{3.22}$$

where $i=1,\ldots,n$ and $j=1,\ldots,m$. Taking the summation over $i$ and $j$ on both sides, we have

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f_q\left(x_i y_j\right) = \sum_{i=1}^{n} f_q\left(x_i\right) + \sum_{j=1}^{m} f_q\left(y_j\right) + (1-q)\sum_{i=1}^{n} f_q\left(x_i\right)\sum_{j=1}^{m} f_q\left(y_j\right), \tag{3.23}$$

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f_q\left(x_i y_j\right) = \sum_{j=1}^{m} y_j^q \sum_{i=1}^{n} f_q\left(x_i\right) + \sum_{i=1}^{n} x_i^q \sum_{j=1}^{m} f_q\left(y_j\right) + (q-1)\sum_{i=1}^{n} f_q\left(x_i\right)\sum_{j=1}^{m} f_q\left(y_j\right), \tag{3.24}$$

under the condition $\sum_{i=1}^{n} x_i = \sum_{j=1}^{m} y_j = 1$. If the function $f_q(x)$ is given by (3.20), then the two functional equations above coincide with the two nonadditivity relations given in (1.4) and (1.11).
On the other hand, by adding the two functional equations in (3.21), we have

$$f_q(xy) = \left(\frac{x^q+x}{2}\right)f_q(y) + \left(\frac{y^q+y}{2}\right)f_q(x), \qquad (0<x,y\leq 1,\ q\neq 1). \tag{3.25}$$

In a similar way to the proof of Theorem 3.2, we can show that the functional equation (3.25) uniquely determines the function $f_q$ in the form given in (3.20). Therefore, we can conclude that the two functional equations in (3.21), which correspond to the nonadditivity relations (1.4) and (1.11), also characterize the Tsallis entropy function.
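
The claims in this remark can be checked numerically; the following illustrative sketch (an addition assuming NumPy) verifies that $f_q(x) = -x^q\ln_q x$ (i.e., (3.20) with $c_q=1$) satisfies both equations in (3.21) and their average (3.25).

```python
# Illustrative check that f_q(x) = (x^q - x)/(1 - q) = -x^q ln_q x
# satisfies both equations in (3.21) and the averaged equation (3.25).
import numpy as np

q = 1.3
f = lambda x: (x ** q - x) / (1 - q)

x, y = 0.4, 0.6
e1 = y * f(x) + x * f(y) + (1 - q) * f(x) * f(y)            # first of (3.21)
e2 = y ** q * f(x) + x ** q * f(y) + (q - 1) * f(x) * f(y)  # second of (3.21)
e3 = ((x ** q + x) / 2) * f(y) + ((y ** q + y) / 2) * f(x)  # (3.25)
print(np.isclose(f(x * y), e1),
      np.isclose(f(x * y), e2),
      np.isclose(f(x * y), e3))
```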
If we again take the limit as $q\to 1$ in Corollary 3.4, we have the following corollary.

Corollary 3.6. If the differentiable nonnegative function $f$ satisfies the following functional equation:

$$f(xy) = y f(x) + x f(y), \qquad (0<x,y\leq 1), \tag{3.26}$$

then the function $f$ is uniquely given by

$$f(x) = -c x\log x, \tag{3.27}$$

where $c$ is a nonnegative constant.

4. Conclusion

As we have seen, the two-parameter extended entropy function can be uniquely determined by a simple functional equation. An interpretation related to a tree-graphical structure was also given as a remark.

Recently, the extensive behaviours of generalized entropies were studied in [21–23]. Our condition (3.7) may be seen as an extensive form. However, the author has not yet found any relation between the functional equation (3.7) and the extensive behaviours of the generalized entropies. This problem is beyond the scope of the present paper, but it is quite interesting as a future work.

Acknowledgments

This paper is dedicated to Professor Kenjiro Yanagi on his 60th birthday. The author would like to thank the anonymous reviewers for providing valuable comments to improve the paper. The author was partially supported by the Japanese Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Encouragement of Young Scientists (B) 20740067.