#### Abstract

The Nakagami distribution is considered. The classical maximum likelihood estimator of its scale parameter is obtained. The Bayesian method of estimation is then employed to estimate the scale parameter of the Nakagami distribution by using Jeffreys’, the extension of Jeffreys’, and quasi priors under three different loss functions. A simulation study is also conducted in the R software.

#### 1. Introduction

The Nakagami distribution can be considered a flexible lifetime distribution. It has been used to model the attenuation of wireless signals traversing multiple paths (for details see Hoffman [1]), the fading of radio signals, data arising in communication engineering, and so forth. The distribution may also be employed to model failure times of a variety of products (and electrical components) such as ball bearings, vacuum tubes, and electrical insulation. It is also widely considered in biomedical fields, for example, to model the time to the occurrence of tumors and the appearance of lung cancer. It also has applications in medical imaging studies to model ultrasound data, especially in echocardiography (a heart efficiency test). Shanker et al. [2] and Tsui et al. [3] used the Nakagami distribution to model ultrasound data in medical imaging studies. This distribution is extensively used in reliability theory and reliability engineering and to model the constant hazard rate portion because of its memoryless property. Yang and Lin [4] investigated and derived the statistical model of the spatial-chromatic distribution of images. Through extensive evaluation of large image databases, they discovered that a two-parameter Nakagami distribution suits the purpose well. Kim and Latchman [5] used the Nakagami distribution in their analysis of multimedia.

The probability density function (pdf) of the Nakagami distribution is given by
$$f(x;m,\Omega)=\frac{2m^{m}}{\Gamma(m)\,\Omega^{m}}\,x^{2m-1}\exp\!\left(-\frac{m}{\Omega}x^{2}\right),\qquad x>0,\ m\ge\frac{1}{2},\ \Omega>0,\tag{1}$$
where $\Omega$ and $m$ are the scale and the shape parameters, respectively.
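For concreteness, the density (1) can be evaluated numerically. The following Python sketch (illustrative only; the parameter values $m=2$, $\Omega=1.5$ are arbitrary choices, not from the paper) checks that the pdf integrates to one:

```python
import math

def nakagami_pdf(x, m, omega):
    """pdf (1): f(x; m, Omega) = 2 m^m / (Gamma(m) Omega^m) x^(2m-1) exp(-m x^2 / Omega), x > 0."""
    return 2.0 * m**m / (math.gamma(m) * omega**m) * x**(2 * m - 1) * math.exp(-m * x * x / omega)

# sanity check: the density should integrate to 1 (trapezoidal rule on (0, 10])
m, omega = 2.0, 1.5
h, n = 1e-3, 10000
total = h * (sum(nakagami_pdf(i * h, m, omega) for i in range(1, n))
             + 0.5 * nakagami_pdf(n * h, m, omega))
print(round(total, 4))  # -> 1.0
```

The upper integration limit 10 is safe here because the tail mass beyond it is negligible for these parameter values.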

#### 2. Materials and Methods

There are two main philosophical approaches to statistics. The first is the classical approach, founded by Professor R. A. Fisher in a series of fundamental papers around 1930. In the classical approach we use the same method as obtained by Ahmad et al. [6].

The alternative approach is the Bayesian approach, which goes back to Reverend Thomas Bayes. In this approach, parameters are treated as random variables and the data are treated as fixed. Recently the Bayesian estimation approach has received great attention from researchers; among them are Al-Aboud [7], who studied Bayesian estimation for the extreme value distribution using progressively censored data and an asymmetric loss, and Ahmed et al. [8], who considered a Bayesian survival estimator for the Weibull distribution with censored data. An important prerequisite in this approach is the appropriate choice of prior(s) for the parameters. Very often, priors are chosen according to one’s subjective knowledge and beliefs. The other integral part of Bayesian inference is the choice of loss function. A number of symmetric and asymmetric loss functions have been shown to be functional; see Pandey et al. [9], Al-Athari [10], S. P. Ahmad and K. Ahmad [11], Ahmad et al. [12, 13], and so forth.

Theorem 1. *Let $x_{1},x_{2},\ldots,x_{n}$ be a random sample of size $n$ having pdf (1); then the maximum likelihood estimator of the scale parameter $\Omega$, when the shape parameter $m$ is known, is given by*
$$\hat{\Omega}_{\mathrm{ML}}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}.$$

*Proof.* The likelihood function of the pdf (1) is given by
$$L(\Omega\mid\underline{x})=\left(\frac{2m^{m}}{\Gamma(m)\,\Omega^{m}}\right)^{n}\prod_{i=1}^{n}x_{i}^{2m-1}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right).\tag{2}$$
The log-likelihood function is given by
$$\log L=n\log\frac{2m^{m}}{\Gamma(m)}-nm\log\Omega+(2m-1)\sum_{i=1}^{n}\log x_{i}-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}.\tag{4}$$
Differentiating (4) with respect to $\Omega$ and equating to zero, we get
$$\frac{\partial\log L}{\partial\Omega}=-\frac{nm}{\Omega}+\frac{m}{\Omega^{2}}\sum_{i=1}^{n}x_{i}^{2}=0\ \Longrightarrow\ \hat{\Omega}_{\mathrm{ML}}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}.\tag{5}$$
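As a quick numerical check of Theorem 1 (a Python sketch, not part of the paper; the values $m=2$, $\Omega=1.5$ are assumed for illustration), one can exploit the fact that if $X$ follows a Nakagami$(m,\Omega)$ law then $X^{2}$ follows a gamma law with shape $m$ and scale $\Omega/m$:

```python
import math
import random

def mle_omega(xs):
    """Maximum likelihood estimate of the scale Omega (the known shape m cancels): sum(x_i^2)/n."""
    return sum(x * x for x in xs) / len(xs)

# simulate Nakagami(m, Omega) data via X = sqrt(G), G ~ Gamma(shape=m, scale=Omega/m)
random.seed(1)
m, omega = 2.0, 1.5
xs = [math.sqrt(random.gammavariate(m, omega / m)) for _ in range(20000)]
print(abs(mle_omega(xs) - omega) < 0.05)  # the estimate concentrates near the true Omega
```

With 20,000 observations the standard error of the estimate is well below the 0.05 tolerance used here.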

##### 2.1. Loss Functions Used in This Paper

(i) The quadratic loss function, which is given by
$$l_{1}\big(\hat{\Omega},\Omega\big)=\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2},\tag{6}$$
is a symmetric loss function; $\Omega$ and $\hat{\Omega}$ represent the true and estimated values of the parameter.

(ii) The Al-Bayyati loss function is of the form
$$l_{2}\big(\hat{\Omega},\Omega\big)=\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2},\tag{7}$$
which is an asymmetric loss function; $\Omega$ and $\hat{\Omega}$ represent the true and estimated values of the parameter.

(iii) The entropy loss function is given by
$$l_{3}(\delta)=b\,[\delta-\log\delta-1],\qquad \delta=\frac{\hat{\Omega}}{\Omega},\ b>0,\tag{8}$$
where $\Omega$ and $\hat{\Omega}$ represent the true and estimated values of the parameter.
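Assuming the standard forms of these three losses as written above, they can be sketched in Python (the paper's own computations were done in R; this is only an illustration):

```python
import math

# Loss functions of Section 2.1; est and theta are the estimated and true scale values
def quadratic_loss(est, theta):
    """Symmetric quadratic loss l1 = ((est - theta)/theta)^2."""
    return ((est - theta) / theta) ** 2

def al_bayyati_loss(est, theta, c2):
    """Asymmetric Al-Bayyati loss l2 = theta^c2 * (est - theta)^2."""
    return theta ** c2 * (est - theta) ** 2

def entropy_loss(est, theta, b=1.0):
    """Entropy loss l3 = b(delta - log delta - 1) with delta = est/theta; zero iff est == theta."""
    d = est / theta
    return b * (d - math.log(d) - 1.0)

# each loss vanishes at the true value and is positive elsewhere
print(quadratic_loss(1.5, 1.5), entropy_loss(1.5, 1.5))  # -> 0.0 0.0
```

Note the asymmetry of $l_{2}$ and $l_{3}$: overestimation and underestimation by the same amount are penalized differently, unlike under $l_{1}$ rescaling.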

#### 3. Bayesian Method of Estimation

In this section Bayesian estimation of the scale parameter of the Nakagami distribution is carried out by using various priors under different symmetric and asymmetric loss functions.

##### 3.1. Posterior Density under Jeffreys’ Prior

Let $x_{1},x_{2},\ldots,x_{n}$ be a random sample of size $n$ having the probability density function (1) and the likelihood function (2).

Jeffreys’ prior for $\Omega$ is given by
$$g(\Omega)\propto\frac{1}{\Omega}.\tag{9}$$
By using the Bayes theorem, we have
$$\pi_{1}(\Omega\mid\underline{x})\propto L(\Omega\mid\underline{x})\,g(\Omega).\tag{10}$$
Using (2) and (9) in (10),
$$\pi_{1}(\Omega\mid\underline{x})=k\,\Omega^{-(nm+1)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right),\tag{11}$$
where $k$ is independent of $\Omega$ and
$$k^{-1}=\int_{0}^{\infty}\Omega^{-(nm+1)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega=\frac{\Gamma(nm)}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm}}.\tag{12}$$
Using the value of $k$ in (11),
$$\pi_{1}(\Omega\mid\underline{x})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm}}{\Gamma(nm)}\,\Omega^{-(nm+1)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right).\tag{13}$$

##### 3.2. Posterior Density under Extension of Jeffreys’ Prior

Let $x_{1},x_{2},\ldots,x_{n}$ be a random sample of size $n$ having the probability density function (1) and the likelihood function (2).

The extension of Jeffreys’ prior for $\Omega$ is given by
$$g(\Omega)\propto\frac{1}{\Omega^{2c_{1}}},\qquad c_{1}>0.\tag{14}$$
By using the Bayes theorem, we have
$$\pi_{2}(\Omega\mid\underline{x})\propto L(\Omega\mid\underline{x})\,g(\Omega).\tag{15}$$
Using (2) and (14) in (15),
$$\pi_{2}(\Omega\mid\underline{x})=k\,\Omega^{-(nm+2c_{1})}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right),\tag{16}$$
where $k$ is independent of $\Omega$ and
$$k^{-1}=\int_{0}^{\infty}\Omega^{-(nm+2c_{1})}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega=\frac{\Gamma(nm+2c_{1}-1)}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+2c_{1}-1}}.\tag{17}$$
Thus
$$k=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+2c_{1}-1}}{\Gamma(nm+2c_{1}-1)}.\tag{18}$$
By using the value of $k$ in (16), we have
$$\pi_{2}(\Omega\mid\underline{x})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+2c_{1}-1}}{\Gamma(nm+2c_{1}-1)}\,\Omega^{-(nm+2c_{1})}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right).\tag{19}$$

##### 3.3. Posterior Density under Quasi Prior

Let $x_{1},x_{2},\ldots,x_{n}$ be a random sample of size $n$ having the probability density function (1) and the likelihood function (2).

The quasi prior for $\Omega$ is given by
$$g(\Omega)\propto\frac{1}{\Omega^{d}},\qquad d>0.\tag{20}$$
By using the Bayes theorem, we have
$$\pi_{3}(\Omega\mid\underline{x})\propto L(\Omega\mid\underline{x})\,g(\Omega).\tag{21}$$
Using (2) and (20) in (21),
$$\pi_{3}(\Omega\mid\underline{x})=k\,\Omega^{-(nm+d)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right),\tag{22}$$
where $k$ is independent of $\Omega$ and
$$k^{-1}=\int_{0}^{\infty}\Omega^{-(nm+d)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega=\frac{\Gamma(nm+d-1)}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+d-1}}.\tag{23}$$
Using the value of $k$ in (22),
$$\pi_{3}(\Omega\mid\underline{x})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+d-1}}{\Gamma(nm+d-1)}\,\Omega^{-(nm+d)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right).\tag{24}$$

#### 4. Bayesian Estimation by Using Jeffreys’ Prior under Different Loss Functions

Theorem 2. *Assuming the loss function $l_{1}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{1}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+1}.\tag{25}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the quadratic loss function $l_{1}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\pi_{1}(\Omega\mid\underline{x})\,d\Omega.\tag{26}$$
Using (13) in (26), we get
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm}}{\Gamma(nm)}\,\Omega^{-(nm+1)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{27}$$
On solving (27), we get
$$R(\hat{\Omega})=1-2\hat{\Omega}\,\frac{nm}{m\sum_{i=1}^{n}x_{i}^{2}}+\hat{\Omega}^{2}\,\frac{nm(nm+1)}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{2}}.\tag{28}$$
Minimization of the risk with respect to $\hat{\Omega}$ (setting $\partial R/\partial\hat{\Omega}=0$) gives us the optimal estimator:
$$\hat{\Omega}_{1}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+1}.\tag{29}$$

Theorem 3. *Assuming the loss function $l_{2}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{2}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm-c_{2}-1}.\tag{30}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the Al-Bayyati loss function $l_{2}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\pi_{1}(\Omega\mid\underline{x})\,d\Omega.\tag{31}$$
On substituting (13) in (31), we have
$$R(\hat{\Omega})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm}}{\Gamma(nm)}\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\,\Omega^{-(nm+1)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{32}$$
Solving (32), we get
$$R(\hat{\Omega})=\hat{\Omega}^{2}E\big(\Omega^{c_{2}}\big)-2\hat{\Omega}E\big(\Omega^{c_{2}+1}\big)+E\big(\Omega^{c_{2}+2}\big),\qquad E\big(\Omega^{r}\big)=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{r}\Gamma(nm-r)}{\Gamma(nm)}.\tag{33}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{2}=\frac{E\big(\Omega^{c_{2}+1}\big)}{E\big(\Omega^{c_{2}}\big)}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm-c_{2}-1}.\tag{34}$$

Theorem 4. *Assuming the loss function $l_{3}(\delta)$, the Bayes estimate of the scale parameter $\Omega$, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{3}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}.\tag{35}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the entropy loss function $l_{3}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}b\left[\frac{\hat{\Omega}}{\Omega}-\log\frac{\hat{\Omega}}{\Omega}-1\right]\pi_{1}(\Omega\mid\underline{x})\,d\Omega.\tag{36}$$
Using (13) in (36), we get
$$R(\hat{\Omega})=b\left[\hat{\Omega}\,E\big(\Omega^{-1}\big)-\log\hat{\Omega}+E(\log\Omega)-1\right].\tag{37}$$
On solving (37), we get
$$\frac{\partial R}{\partial\hat{\Omega}}=b\left[E\big(\Omega^{-1}\big)-\frac{1}{\hat{\Omega}}\right]=b\left[\frac{nm}{m\sum_{i=1}^{n}x_{i}^{2}}-\frac{1}{\hat{\Omega}}\right]=0.\tag{38}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{3}=\big[E\big(\Omega^{-1}\big)\big]^{-1}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}.\tag{39}$$
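The three Bayes estimates of this section depend on the data only through $T=\sum_{i=1}^{n}x_{i}^{2}$. A small Python sketch (illustrative only; the data values and the choice $c_{2}=1$ are hypothetical) collects them:

```python
def bayes_estimates_jeffreys(xs, m, c2):
    """Bayes estimates of the scale under Jeffreys' prior, with T = sum(x_i^2):
    quadratic loss: mT/(nm+1); Al-Bayyati loss: mT/(nm-c2-1); entropy loss: T/n."""
    n, T = len(xs), sum(x * x for x in xs)
    return {
        "quadratic": m * T / (n * m + 1),
        "al_bayyati": m * T / (n * m - c2 - 1),
        "entropy": T / n,
    }

xs = [0.7, 1.1, 0.9, 1.4, 0.8]
est = bayes_estimates_jeffreys(xs, m=2.0, c2=1.0)
# the entropy-loss estimate coincides with the MLE of Theorem 1
print(abs(est["entropy"] - sum(x * x for x in xs) / len(xs)) < 1e-12)  # -> True
```

The quadratic-loss estimate is always a shrinkage of the MLE by the factor $nm/(nm+1)$, as the closed forms above show.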

#### 5. Bayesian Estimation by Using the Extension of Jeffreys’ Prior under Different Loss Functions

Theorem 5. *Assuming the loss function $l_{1}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$ under the extension of Jeffreys’ prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{4}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}}.\tag{40}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the quadratic loss function $l_{1}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\pi_{2}(\Omega\mid\underline{x})\,d\Omega.\tag{41}$$
Using (19) in (41), we get
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+2c_{1}-1}}{\Gamma(nm+2c_{1}-1)}\,\Omega^{-(nm+2c_{1})}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{42}$$
On solving (42), we get
$$R(\hat{\Omega})=1-2\hat{\Omega}\,\frac{nm+2c_{1}-1}{m\sum_{i=1}^{n}x_{i}^{2}}+\hat{\Omega}^{2}\,\frac{(nm+2c_{1}-1)(nm+2c_{1})}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{2}}.\tag{43}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{4}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}}.\tag{44}$$

*Remark 6.* By replacing $c_{1}=\frac{1}{2}$ in (44), the same Bayes estimate is obtained as in (29).

Theorem 7. *Assuming the loss function $l_{2}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$ under the extension of Jeffreys’ prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{5}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}-c_{2}-2}.\tag{45}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the Al-Bayyati loss function $l_{2}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\pi_{2}(\Omega\mid\underline{x})\,d\Omega.\tag{46}$$
On substituting (19) in (46), we have
$$R(\hat{\Omega})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+2c_{1}-1}}{\Gamma(nm+2c_{1}-1)}\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\,\Omega^{-(nm+2c_{1})}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{47}$$
Solving (47), we get
$$R(\hat{\Omega})=\hat{\Omega}^{2}E\big(\Omega^{c_{2}}\big)-2\hat{\Omega}E\big(\Omega^{c_{2}+1}\big)+E\big(\Omega^{c_{2}+2}\big),\qquad E\big(\Omega^{r}\big)=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{r}\Gamma(nm+2c_{1}-1-r)}{\Gamma(nm+2c_{1}-1)}.\tag{48}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{5}=\frac{E\big(\Omega^{c_{2}+1}\big)}{E\big(\Omega^{c_{2}}\big)}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}-c_{2}-2}.\tag{49}$$

*Remark 8.* By replacing $c_{1}=\frac{1}{2}$ in (49), the same Bayes estimate is obtained as in (34).

Theorem 9. *Assuming the loss function $l_{3}(\delta)$, the Bayes estimate of the scale parameter $\Omega$ under the extension of Jeffreys’ prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{6}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}-1}.\tag{50}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the entropy loss function $l_{3}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}b\left[\frac{\hat{\Omega}}{\Omega}-\log\frac{\hat{\Omega}}{\Omega}-1\right]\pi_{2}(\Omega\mid\underline{x})\,d\Omega.\tag{51}$$
Using (19) in (51), we get
$$R(\hat{\Omega})=b\left[\hat{\Omega}\,E\big(\Omega^{-1}\big)-\log\hat{\Omega}+E(\log\Omega)-1\right],\qquad E\big(\Omega^{-1}\big)=\frac{nm+2c_{1}-1}{m\sum_{i=1}^{n}x_{i}^{2}}.\tag{52}$$
On solving (52), we get
$$\frac{\partial R}{\partial\hat{\Omega}}=b\left[E\big(\Omega^{-1}\big)-\frac{1}{\hat{\Omega}}\right]=0.\tag{53}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{6}=\big[E\big(\Omega^{-1}\big)\big]^{-1}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+2c_{1}-1}.\tag{54}$$

*Remark 10.* By replacing $c_{1}=\frac{1}{2}$ in (54), the same Bayes estimate is obtained as in (39).

#### 6. Bayesian Estimation by Using Quasi Prior under Different Loss Functions

Theorem 11. *Assuming the loss function $l_{1}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$ under the quasi prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{7}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d}.\tag{55}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the quadratic loss function $l_{1}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\pi_{3}(\Omega\mid\underline{x})\,d\Omega.\tag{56}$$
Using (24) in (56), we get
$$R(\hat{\Omega})=\int_{0}^{\infty}\left(\frac{\hat{\Omega}-\Omega}{\Omega}\right)^{2}\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+d-1}}{\Gamma(nm+d-1)}\,\Omega^{-(nm+d)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{57}$$
On solving (57), we get
$$R(\hat{\Omega})=1-2\hat{\Omega}\,\frac{nm+d-1}{m\sum_{i=1}^{n}x_{i}^{2}}+\hat{\Omega}^{2}\,\frac{(nm+d-1)(nm+d)}{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{2}}.\tag{58}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{7}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d}.\tag{59}$$

*Remark 12.* By replacing $d=1$ in (59), the same Bayes estimate is obtained as in (29).

Theorem 13. *Assuming the loss function $l_{2}(\hat{\Omega},\Omega)$, the Bayes estimate of the scale parameter $\Omega$ under the quasi prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{8}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d-c_{2}-2}.\tag{60}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the Al-Bayyati loss function $l_{2}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\pi_{3}(\Omega\mid\underline{x})\,d\Omega.\tag{61}$$
On substituting (24) in (61), we have
$$R(\hat{\Omega})=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{nm+d-1}}{\Gamma(nm+d-1)}\int_{0}^{\infty}\Omega^{c_{2}}\big(\hat{\Omega}-\Omega\big)^{2}\,\Omega^{-(nm+d)}\exp\!\left(-\frac{m}{\Omega}\sum_{i=1}^{n}x_{i}^{2}\right)d\Omega.\tag{62}$$
Solving (62), we get
$$R(\hat{\Omega})=\hat{\Omega}^{2}E\big(\Omega^{c_{2}}\big)-2\hat{\Omega}E\big(\Omega^{c_{2}+1}\big)+E\big(\Omega^{c_{2}+2}\big),\qquad E\big(\Omega^{r}\big)=\frac{\left(m\sum_{i=1}^{n}x_{i}^{2}\right)^{r}\Gamma(nm+d-1-r)}{\Gamma(nm+d-1)}.\tag{63}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{8}=\frac{E\big(\Omega^{c_{2}+1}\big)}{E\big(\Omega^{c_{2}}\big)}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d-c_{2}-2}.\tag{64}$$

*Remark 14.* By replacing $d=1$ in (64), the same Bayes estimate is obtained as in (34).

Theorem 15. *Assuming the loss function $l_{3}(\delta)$, the Bayes estimate of the scale parameter $\Omega$ under the quasi prior, if the shape parameter $m$ is known, is of the form*
$$\hat{\Omega}_{9}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d-1}.\tag{65}$$

*Proof.* The risk function of the estimator $\hat{\Omega}$ under the entropy loss function $l_{3}$ is given by the formula
$$R(\hat{\Omega})=\int_{0}^{\infty}b\left[\frac{\hat{\Omega}}{\Omega}-\log\frac{\hat{\Omega}}{\Omega}-1\right]\pi_{3}(\Omega\mid\underline{x})\,d\Omega.\tag{66}$$
Using (24) in (66), we get
$$R(\hat{\Omega})=b\left[\hat{\Omega}\,E\big(\Omega^{-1}\big)-\log\hat{\Omega}+E(\log\Omega)-1\right],\qquad E\big(\Omega^{-1}\big)=\frac{nm+d-1}{m\sum_{i=1}^{n}x_{i}^{2}}.\tag{67}$$
On solving (67), we get
$$\frac{\partial R}{\partial\hat{\Omega}}=b\left[E\big(\Omega^{-1}\big)-\frac{1}{\hat{\Omega}}\right]=0.\tag{68}$$
Minimization of the risk with respect to $\hat{\Omega}$ gives us the optimal estimator:
$$\hat{\Omega}_{9}=\big[E\big(\Omega^{-1}\big)\big]^{-1}=\frac{m\sum_{i=1}^{n}x_{i}^{2}}{nm+d-1}.\tag{69}$$

*Remark 16.* By replacing $d=1$ in (69), the same Bayes estimate is obtained as in (39).
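The estimators of Sections 4–6 share one pattern: for a prior proportional to $\Omega^{-a}$ (with $a=1$ for Jeffreys’, $a=2c_{1}$ for its extension, and $a=d$ for the quasi prior), each loss function merely shifts the denominator of $m\sum x_{i}^{2}$. A Python sketch (assuming the estimator forms derived above; the data values are hypothetical) that also checks the reductions stated in the remarks:

```python
def bayes_estimate(xs, m, loss, a, c2=1.0):
    """Bayes estimate of the scale for a prior proportional to Omega^(-a);
    a=1: Jeffreys', a=2*c1: extension of Jeffreys', a=d: quasi prior.
    With T = sum(x_i^2), the loss only shifts the denominator of m*T."""
    n, T = len(xs), sum(x * x for x in xs)
    denom = {
        "quadratic": n * m + a,            # ((est - Omega)/Omega)^2 loss
        "al_bayyati": n * m + a - c2 - 2,  # Omega^c2 * (est - Omega)^2 loss
        "entropy": n * m + a - 1,          # delta - log(delta) - 1 loss
    }[loss]
    return m * T / denom

# Remarks 6, 8, 10: the extension with c1 = 1/2 (a = 2*c1 = 1) reduces to Jeffreys' (a = 1);
# Remarks 12, 14, 16: the quasi prior with d = 1 (a = d = 1) does as well.
xs = [0.7, 1.1, 0.9, 1.4, 0.8]
for loss in ("quadratic", "al_bayyati", "entropy"):
    assert bayes_estimate(xs, 2.0, loss, a=2 * 0.5) == bayes_estimate(xs, 2.0, loss, a=1.0)
print("remarks verified")
```

The entropy-loss case with $a=1$ recovers the MLE of Theorem 1, matching (39).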

#### 7. Results and Discussion

We primarily studied classical maximum likelihood estimation and Bayesian estimation for the Nakagami distribution using Jeffreys’, the extension of Jeffreys’, and quasi priors under three different symmetric and asymmetric loss functions. Our main focus was to obtain the estimate of the scale parameter of the Nakagami distribution. The mathematical derivations were checked by using different data sets, and the estimates were obtained.

To illustrate, we generated random samples of sizes 25, 50, and 100, representing small, medium, and large data sets, from the Nakagami distribution in the R software; a simulation study was carried out 3,000 times for each pair of values of the shape and scale parameters, for the chosen values of the extension parameter $c_{1}$ and of the loss parameter $c_{2}$. The process was iterated 2,000 times, and the estimates of the scale parameter under each method were calculated. The results are presented in Tables 1, 2, and 3, respectively.
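The study itself was run in R; the following Python analogue of one cell of such a table is purely illustrative (the values $n=25$, $m=2$, $\Omega=1.5$, $c_{2}=1$ are assumed for the sketch, not taken from the paper's tables):

```python
import random

def simulate_mse(n, m, omega, c2=1.0, reps=5000, seed=42):
    """Monte Carlo MSE of the MLE and of the Jeffreys'-prior Bayes estimates of the scale.
    Samples T = sum(x_i^2) directly, since X ~ Nakagami(m, Omega) implies X^2 ~ Gamma(m, Omega/m)."""
    random.seed(seed)
    mse = {"mle": 0.0, "quadratic": 0.0, "al_bayyati": 0.0}
    for _ in range(reps):
        T = sum(random.gammavariate(m, omega / m) for _ in range(n))
        ests = {
            "mle": T / n,
            "quadratic": m * T / (n * m + 1),
            "al_bayyati": m * T / (n * m - c2 - 1),
        }
        for name, est in ests.items():
            mse[name] += (est - omega) ** 2 / reps
    return mse

mse = simulate_mse(n=25, m=2.0, omega=1.5)
print(mse["quadratic"] < mse["mle"])  # the shrinkage under quadratic loss lowers the MSE here
```

Comparisons of this kind, over the priors, losses, and sample sizes, are what the tables report.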

#### 8. Conclusion

In this paper we have generated three types of data sets with different sample sizes from the Nakagami distribution. These data sets were simulated in the R software, and the behavior of the data was examined for the purpose of parameter estimation. With these data sets we obtained the estimate of the scale parameter of the Nakagami distribution under three different symmetric and asymmetric loss functions by using three different priors. With these results one can also compare the loss functions and the priors.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.