Statistics provides mathematical tools for scientific investigation in fields such as engineering, medicine, and biology. Statistical methods are continually improved, and statisticians seek accurate ways to solve practical problems. One such problem is parameter estimation, which can be expressed as an inverse problem when the independent variables are highly correlated. The main goal of this paper is to interpret the parameter estimates of the double generalized Rayleigh distribution in a regression model using a wavelet basis. The standard likelihood-based regression methods are difficult to use in practice: the noise level usually makes the estimates unstable, and multicollinearity leads to widely varying estimates. In such problems, estimating the features of the underlying truth is complicated, so it is reasonable to use a mixed method that combines a fully Bayesian approach with a wavelet basis. The usual wavelet procedure is to choose a wavelet basis, compute the wavelet coefficients, and use these coefficients to remove Gaussian noise; the data are then recovered by inverting the wavelet coefficients. Some wavelet bases provide a shift-invariant wavelet transform, which simultaneously improves smoothness, recovery, and squared-error performance. The proposed method combines a penalized maximum likelihood approach, a penalty term, and wavelet tools. In this paper, real data are modeled using double generalized Rayleigh distributions, which are used to estimate the wavelet coefficients of the sample with numerical tools. Wavelet approaches are recommended in practical applications because they reduce the noise level; this is useful since real data are often corrupted by noise, a significant cause of most numerical estimation problems.
A simulation study is carried out using MCMC tools to estimate the underlying features, an essential task in statistics.

1. Introduction

Parameter estimation, aimed at providing an interpretable model, is often the biggest challenge in statistics since data may contain noise, blur, or both. Such problems arise in science, geophysics, engineering, and medicine, and they have received much attention from researchers over the past decade. In practical applications, the biggest challenge in estimating the unknown parameters is that real data usually contain white noise. Hence, a pretreatment may reduce the noise and provide a suitable fit. More precisely, pretreatment is used in statistical approaches to data corrupted with white noise arising from the data-collection equipment. Two types of statistical tools are usually involved in processing the data. The first is data pretreatment, which is applied to reduce the correlation among independent variables or the noise level. The second is model calibration, which can be related to Bayesian and wavelet methods. The key issues are therefore working with many unknown features relative to the number of observations, and an ill-posed or ill-conditioned model; that is, maximum likelihood estimation is unsuitable for estimating the underlying parameters. A widespread problem is the study of real data collected by magnetometer or voltage readings, which are usually highly correlated. Processing is needed since the sample's measured spectral characteristics may suffer from noise and blur. Statistically, several established methods can be applied, such as classical thresholding approaches. Early work on this procedure can be found in [1, 2], which introduced a new tool for removing noise (see [3, 4] for explicit motivation). Bayesian approaches have been studied using different probability distributions in many fields over the last century; for example, the exponential distribution was used in [5], and various other density distributions have also been applied.
For example, the authors in [6] studied the exponential distribution and estimated its parameters, whereas those in [7] employed the Weibull distribution to estimate the parameters using censored data. Also, the authors in [8] studied the Rayleigh distribution using censored data. Hence, the idea of this article is to combine Bayesian and wavelet methods for estimating the underlying parameters. Wavelets are powerful mathematical tools that can be applied to reduce the impact of multicollinearity problems. A wavelet basis can be viewed as a time-localized counterpart of the Fourier transform. However, a main attraction of wavelet approaches is that it is easy to choose between different wavelet bases. Many summaries of this topic have been written. For example, Mallat [9] states that the probability density function of wavelet coefficients is notably peaked and centered at zero. The algorithm of the discrete wavelet transform can be found in [10]. Among wavelet transforms, the stationary basis is recommended for reconstruction (see [11] for more details). Wavelets have received much attention from scientists, and several authors have analyzed real statistical applications (see [12] for a direct result). Different approaches to the use of wavelets can be found in [13], and considerable detail about wavelets can be found in [14]. The central concept of the Bayesian approach is the careful construction of prior beliefs; when the rules are built carefully, the model provides a good fit after the estimation process. There are several papers on Bayesian methods (see [15], which studied Bayesian approaches in the wavelet domain). Wavelets via Bayesian approaches are studied in many articles, such as [16], and more details about the combination of Bayesian and wavelet methods can be found in [17]. Besides, the MCMC algorithm extracts a sample from the posterior at each iteration of the simulation, which is useful when the posterior distribution is too complicated for an analytic solution.
The simplest type of MCMC algorithm can be found in [18] and can be implemented to extract a sample. More details about MCMC tools can be found in [19–21]. Moreover, the estimation of the unknown parameters of the double generalized Rayleigh DGRay () distributions is proposed to provide a new tool, where for some indexes . In practical terms, this type of investigation is sometimes called a "level-dependent" model, since the distribution parameters are estimated at each level , especially when the measurable characteristics are assumed under two or more different conditions. For example, some wavelet coefficients have defects and are close to zero, whereas wavelet coefficients without defects may take values far from zero. Consider the linear inverse problem defined by

y = Kθ + ε,

with observed measurement y, vector of unknown parameters θ, and errors ε. Furthermore, the noise ε is usually assumed to be independent and identically normally distributed, ε ~ N(0, σ²I). Consider the unknown parameters defined by

θ = Wᵀd,

where W is an orthonormal matrix containing the wavelet basis. Hence, the unknown parameters θ can be represented by their discrete wavelet transform d = Wθ, and the stationary transform is used in this article, so the number of wavelet coefficients equals the number of observations. Also, the wavelet coefficients of the observed data are defined by

d* = Wy,

where d* is the set of wavelet coefficients of y and ε* = Wε is the corresponding set of wavelet-domain errors, where ε* ~ N(0, σ²I) since W is orthonormal. Level-dependent models play a significant role in wavelet applications; this procedure allows us to investigate the value of the unknown parameters at each resolution of wavelet coefficients. There are numerous methods for specifying the values of the unknown parameters of the double generalized Rayleigh distributions. Moreover, MCMC algorithms are implemented to investigate the unknown parameters from complicated or nonstandard posterior distributions [22]. In statistics, many tools can be applied to estimate parameters, such as the EM and MCMC algorithms.
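The wavelet-domain representation of the inverse problem can be illustrated with a minimal numpy sketch. The notation (W, theta, y, d_star) and the identity forward operator K = I are assumptions made here for illustration only, not taken from the paper:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix W (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    # scaling (averaging) and wavelet (differencing) rows, normalised
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)
    return np.vstack([top, bottom])

rng = np.random.default_rng(0)
n = 8
W = haar_matrix(n)                                   # orthonormal: W @ W.T == I
theta = np.array([1., 1., 1., 1., 5., 5., 5., 5.])   # piecewise-constant signal
y = theta + rng.normal(scale=0.1, size=n)            # y = theta + noise (K = I here)
d_star = W @ y                                       # empirical wavelet coefficients d* = W y
theta_hat = W.T @ d_star                             # inverting the transform recovers y
```

Because W is orthonormal, white noise in the data stays white in the wavelet domain, which is the property the level-dependent model exploits.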
In this article, two types of estimators are considered: the first is the posterior mean (PM), and the second is the maximum a posteriori (MAP) estimator.

Figure 1 illustrates the shape of the double Rayleigh distribution for different values of . It can be seen that as , the double Rayleigh density approaches infinity, and this type of distribution can be used to fit the density of the empirical wavelet coefficients. More precisely, the wavelet coefficients are concentrated near zero, which is captured by the double generalized Rayleigh distribution with and . In other words, the double Rayleigh density approaches infinity as approaches zero when and , which is consistent with Mallat's observation. This article is structured as follows: the double generalized Rayleigh distribution is introduced in Section 2. All technical arguments are presented in Sections 3 and 4. Numerical work confirming the method's features and a simulation study investigating its estimation properties are provided in Sections 5 and 6. Section 7 applies the proposed rule to real data. The final summary and conclusions are presented in Section 8.

2. Double Generalized Rayleigh Distribution

The generalized Rayleigh DGRay () distribution was proposed by Aykroyd et al. as a generalized distribution. They derived the properties of the model, such as the cumulative and survivor functions. Aykroyd et al. [23] also showed that the generalized Rayleigh distribution fits data well, and they used Bayesian approaches to estimate its unknown parameters. In this paper, a double generalized Rayleigh distribution is used to model the wavelet coefficients, that is, as the density of the wavelet coefficients. Let a single wavelet coefficient at level have the probability density function (pdf) given by, where denotes the absolute value. The cumulative distribution function (cdf) is defined by, the survivor function (sf) is given by, and the hazard rate function (hrf) is given by, where and . For some indexes, . The parameters and are shape parameters, and is a location parameter. Setting and in (4)–(6) yields the standard double Rayleigh distribution with parameter .
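As a hedged illustration only, the sketch below assumes the widely used generalized Rayleigh (Burr type X) density, symmetrised over the real line via the absolute value; this is one plausible concrete form and not necessarily the exact pdf of equations (4)–(6). With shape alpha = 1 it reduces to a double Rayleigh, and for small shape values the density is unbounded at the location parameter, matching the behaviour described for Figure 1:

```python
import numpy as np

def dgr_pdf(d, alpha, lam, mu=0.0):
    """Hypothetical double generalized Rayleigh density: the Burr type X
    (generalized Rayleigh) pdf symmetrised over the real line.
    alpha: shape, lam: scale-like parameter, mu: location (assumed names)."""
    z = lam * np.abs(d - mu)
    return alpha * lam * z * np.exp(-z**2) * (1.0 - np.exp(-z**2))**(alpha - 1.0)

# numerical check that the assumed density integrates to one (alpha = 2 here;
# for alpha < 1/2 this form diverges at mu, as described for Figure 1)
x = np.linspace(-10.0, 10.0, 200001)
pdf = dgr_pdf(x, alpha=2.0, lam=1.0)
area = float(np.sum((pdf[1:] + pdf[:-1]) * np.diff(x)) / 2.0)
```

The trapezoid check is a quick sanity test that the symmetrised form is a proper density.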

3. Bayesian Approach

In statistics, Bayesian tools play an important role; the approach has two key components. The first is the likelihood, connecting the observations and the unknown parameters, say , where and are the sets of underlying parameters and observations, respectively. The second is the prior distribution, say ; combining the two yields the posterior distribution. Assume the link between the model of and the unknown wavelet coefficients , where is the variance of the data and can be assumed by, using equation (2), the marginal likelihood given by

The result of the previous integration can be found in [24]. The equivalent likelihood is defined by

In addition, the posterior distribution for given is, where is the number of coefficients at each level . Hence, the value of is suggested as . The main reason for choosing the double generalized Rayleigh distribution is that as and , the proposed density approaches infinity, which accords with Mallat's observation on the distribution of wavelet coefficients. Clearly, equation (12) can be used to estimate the unknown parameters given , and these unknown parameters can then be employed to describe the reconstruction. Hence, the unknown parameters are collected into one set, say with , and the previous form (12) becomes, where , and suppose that at level . Aykroyd et al. considered gamma prior densities for and with hyperprior parameters and . Also, a gamma distribution is proposed for with hyperparameters , with density function

Then, the posterior density of a single value of with parameters , , and at level , given the data , is given by, and the joint posterior density given the data, , can be written as

The hyperprior parameters , , and can be fixed as follows: let the expectation and variance of at resolution be and , where . Solving the following equations, the corresponding hyperprior parameters can be defined as and .
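For a shape-rate gamma prior, the moment-matching step above has a closed-form solution: with mean E = a/b and variance V = a/b², solving gives b = E/V and a = E²/V. A minimal sketch (the function name is hypothetical):

```python
def gamma_hyperparams(mean, var):
    """Solve E = a/b and V = a/b**2 for a Gamma(shape=a, rate=b) prior:
    b = E/V and a = E**2/V."""
    b = mean / var
    a = mean * mean / var
    return a, b

a, b = gamma_hyperparams(2.0, 0.5)
# check: Gamma(8, 4) has mean 8/4 = 2 and variance 8/16 = 0.5
```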

4. Stationary Approaches

The vital task in wavelet approaches is to choose a basis. The construction of a wavelet basis starts with two functions. The first is the scaling or father function , whose main task is to compute the scaling coefficients. The other is the wavelet or mother function , which is used to calculate the wavelet coefficients. Several wavelet bases are now available with different degrees of smoothness; the Haar basis is the simplest version of the wavelet transform. Moreover, several established wavelet families have been demonstrated (see [25–29] for details). Stationary wavelet transforms (SWTs) have attracted much attention in many applications over the last few years. In particular, the classical stationary wavelet transform was introduced in [30], while the authors in [31, 32] applied it at that time as the maximal overlap discrete wavelet transform.

In 1995, Nason extended the discrete wavelet transform and called it the "stationary" transform. In the same year, Coifman and Donoho [33] proposed a related tool based on stationary wavelet coefficients, sometimes referred to as "cycle spinning." In general, the SWT can be described as "filling in the gaps" between the decimated wavelet coefficients; that is, no computation is missing between two different values of wavelet coefficients. Nason stated that this leads to an overdetermined, redundant representation of the original data (see the example below for more explanation). The procedure gives a shift-invariant denoising tool, which simultaneously shows improvements in reconstruction quality (see Coifman and Donoho). As an example of the SWT, the Haar wavelet is applied to the data . The first and second sets of the scaling and detail coefficients can be computed: where , , , and are the transform matrices at level , respectively. With fewer vanishing moments, the smoothness of the corresponding reconstruction decreases. In this paper, the Daubechies father and mother functions with vanishing moments are used to provide a smooth reconstruction.
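The Haar example above can be sketched in code: at level 1 the stationary transform keeps every shift rather than decimating, so both coefficient sequences have the same length as the input (periodic boundary handling is an assumption made here):

```python
import numpy as np

def haar_swt_level1(x):
    """One level of the stationary (undecimated) Haar transform: every
    shift is kept, so the output "fills in the gaps" left by decimation."""
    x = np.asarray(x, dtype=float)
    x_next = np.roll(x, -1)                 # periodic (circular) boundary
    scaling = (x + x_next) / np.sqrt(2.0)   # smooth/scaling coefficients
    detail = (x - x_next) / np.sqrt(2.0)    # detail/wavelet coefficients
    return scaling, detail

s, d = haar_swt_level1([1.0, 3.0, 2.0, 4.0])
# both s and d have length 4; the decimated transform would keep only 2 of each
```

This redundancy is what makes the transform shift-invariant: shifting the input simply rotates the coefficient sequences.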

The stationary wavelet transform coefficients are plotted in Figure 2; it can be seen that each level has the same number of wavelet coefficients. Figure 3 shows the scaling and wavelet functions for Daubechies wavelets with vanishing moments. Table 1 lists the wavelet filter coefficients for the compactly supported Daubechies wavelets, phase . Here, we present the idea of Daubechies wavelets, omitting some technical details.

5. Numerical Methods

The goal of Bayesian computation is to extract a posterior sample of the unknown parameters . Such computational problems can be viewed as inverse problems, and some tools can make the estimation more efficient. These include the standard version of the MCMC algorithms, the Metropolis-Hastings sampler, used to extract a random sample from the posterior rule in (16). The procedure of this technique for parameter estimation through the MCMC approach can be found in [34]; for more information, see [35, 36] and more recent works such as [37].

Figure 4 shows a diagram of the proposed procedure, which starts with data corrupted by noise. The data are transformed to wavelet coefficients, which are used to estimate the unknown wavelet coefficients with the suggested method; the underlying signal is then calculated by inverting the estimated wavelet coefficients. The main idea of the MCMC algorithms is that the parameter can take any value in the parameter space , say is the current value. Then, at each step, the MCMC sampler creates new values, say . Each parameter is updated separately, so the MCMC algorithm amounts to a random walk. More precisely, the general framework of the tool is defined as follows:
(i) Start with an initial value for and for each level , that is, for the parameters, let .
(ii) For times .

(1) Generate a new value , where . Hence, a new value of the prior parameters is proposed around the current value with a variance parameter for each resolution , chosen to obtain an acceptable convergence rate.
(2) Compute the posterior distribution in (16).

For , that is, for each wavelet coefficient :
(a) Generate a new wavelet coefficient , where .
(b) Again, compute the posterior distribution in (16).
(c) Generate .
(d) When , accept the proposal and set and ; else, set and .

Hence, all parameters are generated from the Gaussian distribution, with the current value of the parameter as the mean of the normal distribution and a variance updated according to the acceptance rate. The essential point is that a random value is proposed around the current value, both below and above it, with proposal variances chosen to achieve a target acceptance rate; any point in the parameter space can be proposed. The authors in [38] recommend an acceptance rate between 20% and 30%. Hence, we consider the following gamma prior density for the variance of the noise , where the starting point is computed from the finest level of the wavelet coefficients (see Nason).
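The updates in steps (1)-(2) and (a)-(d) amount to a random-walk Metropolis-Hastings sampler. The one-parameter sketch below uses a stand-in standard normal log-target rather than the posterior in (16); all names are illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, prop_sd=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose theta' ~ N(theta, prop_sd^2)
    and accept with probability min(1, post(theta') / post(theta))."""
    rng = np.random.default_rng(seed)
    theta, lp = float(theta0), log_post(float(theta0))
    chain, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=prop_sd)      # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # acceptance on log scale
            theta, lp = prop, lp_prop
            accepted += 1
        chain.append(theta)
    return np.array(chain), accepted / n_iter

# stand-in target: standard normal log-density (not the paper's posterior)
chain, rate = metropolis_hastings(lambda t: -0.5 * t * t, theta0=0.0)
```

In practice, prop_sd would be tuned so that the returned acceptance rate falls in the 20-30% band cited from [38].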

Once the sample is collected from the posterior rule, the posterior mean of can be calculated by, and the posterior variance can be calculated by, where and are the number of runs and the burn-in, respectively. Hence, the posterior sample provides a straightforward way to compute point and interval estimates. For the MAP rule, the previous procedure is changed into the simulated annealing process of Geman and Geman; this process can converge more quickly than the posterior mean. More precisely, the MAP estimate is taken as the final iteration ; in other words, the sample mean and variance cannot be computed. The maximum a posteriori (MAP) estimator is defined as, where indicates the final iteration of the run of the MCMC algorithms.
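The posterior mean and variance above can be sketched directly from the retained draws; the toy chain values and burn-in length here are illustrative:

```python
import numpy as np

def posterior_summaries(chain, burn_in):
    """Posterior mean PM = (1/(R - B)) * sum_{r > B} theta_r and the
    matching sample variance, discarding the first `burn_in` draws."""
    kept = np.asarray(chain, dtype=float)[burn_in:]
    return kept.mean(), kept.var(ddof=1)

toy_chain = [5.0, 2.1, 1.9, 2.0, 2.2, 1.8]   # first draw treated as burn-in
pm, pvar = posterior_summaries(toy_chain, burn_in=1)
# pm = 2.0, pvar = 0.025
```

The MAP rule, by contrast, keeps only the final iterate, which is why no sample variance (and hence no interval) is available for it.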

6. Simulation

The investigation of the proposed rule is now considered, and the results are compared to some established wavelet-based methods. The authors in [39] introduced four test signals: bumps, Doppler, heavisine, and blocks. These functions were corrupted by independent Gaussian noise . Different sample sizes, and 128, are studied to investigate the proposed method's performance, and the four test functions were simulated. Various wavelet bases were used: Daubechies with was applied for the test functions heavisine, Doppler, and bumps, while the Haar basis was used for blocks. The starting level was , as recommended in [40]. The results of the estimation were evaluated by the average mean squared error (AMSE), defined by, where and are the number of data points and the number of runs of the MCMC algorithms, respectively. The results from the proposed method at the -th run of the MCMC algorithms are denoted by .
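As defined above, the AMSE averages squared errors over both the data points and the MCMC runs; a minimal sketch with assumed array shapes:

```python
import numpy as np

def amse(estimates, truth):
    """AMSE = (1/(M*n)) * sum over runs m and points i of
    (theta_hat[m, i] - theta[i])**2, with estimates of shape (M, n)."""
    estimates = np.atleast_2d(np.asarray(estimates, dtype=float))
    return float(np.mean((estimates - np.asarray(truth, dtype=float))**2))

truth = np.array([1.0, 2.0])
est = np.array([[1.1, 2.0], [0.9, 2.2]])
# AMSE = (0.01 + 0.0 + 0.01 + 0.04) / 4 = 0.015
```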

The proposed estimators were compared to various methods: the Bayesian wavelet thresholding (BAYES.THR) method of Abramovich and Silverman, the ABWS rule of Chipman, Kolaczyk, and McCulloch, and the BAMS rule of Vidakovic and Ruggeri. Table 2 shows the simulation results when the decimated and the nonstationary wavelet transforms were used; it reports the AMSE. Two bases are used in our simulation: the first is the basis with zero vanishing moments, and the other is the Daubechies wavelets with vanishing moments. The proposed technique always gives the best reconstructions. The main interest is in improving the reconstruction, which is most visible when the sample size is large, because extensive observations contain substantial information about the features of the signal. In general, the MAP method provides a fair resolution on the test functions; even its worst results are better than those of the competing wavelet rules. The biggest drawback of the MAP estimate is that confidence intervals cannot be computed, because only the final posterior sample is kept.

7. Application to Medical Data

The suggested method is applied to real-world inductance plethysmography data to evaluate the performance of the proposed rules against state-of-the-art methods. The Department of Anaesthesia at the Bristol Royal Infirmary collected these observations: 2048 equally spaced points. Readers can obtain these data within WaveThresh using data(BabyECG); the structure of the sleep state can be loaded using data(BabySS). Figure 5 shows the plots of BabyECG and the sleep state. The aim of investigating the BabyECG data was to identify the sleep state successfully from the observations. These data have been studied by other authors (for example, [41]). The reconstruction of the unbalanced Haar approach (red line) is illustrated in Figure 6; it is not easy to describe every moment using the unbalanced Haar method or to draw general conclusions about the babies' sleep state. Figures 7 and 8 show the reconstructions of the underlying feature with the MAP method using the Haar wavelet basis and the Daubechies wavelet with N = 8 vanishing moments. In our reconstruction, the value of the shape is set within the interval . Table 3 shows the results of the simulations using the MAP and PM estimators. As the level decreases, the value of increases; in contrast, the value of the parameter changes only slightly.

8. Conclusion

In this article, we show various ways in which Bayesian rules and wavelet methods can be used successfully in a practical problem. A procedure for estimating the scale parameters, k and , of the double generalized Rayleigh distribution was developed and applied to the BabyECG sample. This approach combines a level-dependent wavelet method with Bayesian approaches. Prior probability distributions for the parameters were assumed to be gamma distributions. Bayesian point estimates were proposed for artificial samples under squared-error loss. The simulation studies show that the proposed rules work well, and the proposed Bayesian estimate performs better than the existing state-of-the-art methods on the signal functions by reducing the AMSE. Numerical results were obtained to compare the theoretical performance. The main observations from the numerical results are summarized as follows:
(i) From the results in Table 2, the suggested method provides excellent results for artificial data.
(ii) Estimation under the PM and MAP methods provides better results than the other established wavelet denoising methods according to the MSE.
(iii) The suggested method allows us to describe the main features of the real data, especially when the number of observations is large.

This paper has confirmed that the wavelet approach provides attractive alternatives to other established wavelet methods, especially when underlying signals are inhomogeneous.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest.


Acknowledgments

This study was funded by Taif University Researchers Supporting Project number TURSP-2020/279, Taif University, Taif, Saudi Arabia.