International Scholarly Research Notices


Review Article | Open Access

Volume 2012 | Article ID 919234 | 9 pages | https://doi.org/10.5402/2012/919234

Single Channel Speech Enhancement Techniques in Spectral Domain

Academic Editor: D. Aggelis
Received: 13 Feb 2012
Accepted: 30 Apr 2012
Published: 22 Jul 2012

Abstract

This paper presents single-channel speech enhancement techniques in the spectral domain. One of the most famous single-channel speech enhancement techniques is the spectral subtraction method proposed by S. F. Boll in 1979. In this method, an estimated speech spectrum is obtained by simply subtracting a pre-estimated noise spectrum from an observed one. Hence, the spectral subtraction method is not concerned with speech spectral properties. It is well known that the spectral subtraction method produces an annoying artificial noise in the extracted speech signal. On the other hand, recent successful speech enhancement methods positively utilize speech properties and achieve an efficient speech enhancement capability. This paper presents a historical review of several speech estimation techniques and explicitly states the differences in their theoretical backgrounds. Moreover, to evaluate their speech enhancement capabilities, we perform computer simulations. The results show that an adaptive speech enhancement method based on MAP estimation gives the best noise reduction capability in comparison to the other speech enhancement methods presented in this paper.

1. Introduction

In recent years, speech enhancement has been required in a wide range of applications including mobile communication and speech recognition systems, where a major example is the cell phone shown in Figure 1. Many speech enhancement methods have been established over the past decades [1–15]. These techniques can be classified into time domain methods and spectral domain methods. Most recent major speech enhancement techniques belong to the spectral domain class, which is preferably used in cell phones. In this paper, we focus on spectral domain speech enhancement techniques that employ a single microphone.

The spectral subtraction method [3] is one of the most popular methods among numerous noise reduction techniques in the spectral domain. This method achieves noise reduction by simply subtracting a pre-estimated noise spectral amplitude from an observed spectral amplitude, where the spectral phase is not processed. The spectral subtraction method is easy to implement and effectively reduces stationary noises. However, it incurs an artificial noise, called musical noise, which is caused by speech estimation errors. Because the spectral subtraction method is not concerned with speech spectral information, it often gives estimation errors. Ephraim and Malah have proposed the MMSE-STSA (Minimum Mean Square Error-Short-Time Spectral Amplitude) method [4], which utilizes a speech PDF (Probability Density Function) and a noise PDF. In [4], the speech and noise PDFs were modeled by Rayleigh and Gauss density functions, respectively. This method gives an optimal solution of the estimated speech signal in the sense of MMSE-STSA (the solution reduces to the Wiener filter [5] if we assume Gauss distributions for both the speech and noise PDFs). Although the MMSE-STSA method gives an estimated speech signal with less musical noise, it requires more complicated computations; for example, the solution requires calculation of the modified Bessel function. Moreover, as pointed out by some researchers, real speech histograms do not fit the Rayleigh function employed in [4].

A more efficient method based on maximum a posteriori (MAP) estimation has been established by Lotter and Vary [11]. Lotter and Vary modeled the speech PDF by a parametric super-Gaussian function controlled by two shape parameters. The parametric super-Gaussian function was developed from a histogram made from a large amount of real speech data in a single narrow SNR (Signal to Noise Ratio) interval. The noise suppression capability of this method is superior to the Wiener filter. However, the residual noise is still persistently perceived. Andrianakis and White were aware that the speech PDF may change over SNR intervals [12]. They utilized three histograms made from speech signals in three different narrow SNR intervals and approximated them with Gamma density functions. As reported in [12], changing these three speech PDFs according to the SNR can improve the noise reduction capability. While Andrianakis and White discretely change the speech PDF, Tsukamoto et al. continuously change the speech PDF according to the SNR [13]. They employed the parametric super-Gaussian function proposed in [11] and adaptively changed its shape parameters according to the SNR. Recently, Thanhikam et al. [16] refined this approach by making and evaluating many real speech histograms from various narrow SNR intervals. As shown in [16], this method has a very strong noise reduction capability in comparison to other traditional speech enhancement methods, and hence it is effective especially in low SNR environments.

In the following sections, we present a historical review of useful speech enhancement methods mentioned above and compare their speech enhancement capabilities by computer simulations.

2. Speech Enhancement in Spectral Domain

This section presents several speech enhancement techniques, including both traditional and recent methods. In particular, we carefully explain the differences between them.

2.1. General Speech Enhancement System

First, we explain a general single-channel speech enhancement system in the spectral domain.

We assume that an observed signal is a sum of a speech signal and a noise signal given as
$$y(t) = x(t) + d(t), \qquad (1)$$
where $y(t)$ is the observed signal at time $t$, and $x(t)$ and $d(t)$ denote the speech signal and the noise signal, respectively. We assume that $x(t)$ is uncorrelated with $d(t)$ throughout the paper. Taking the DFT of (1), we have
$$Y_k(n) = \sum_{t=0}^{N-1} y(nQ+t)\,h(t)\,e^{-j2\pi t k/N} \quad (k=0,1,\ldots,N-1), \qquad (2)$$
where $N$, $n$, and $k$ denote the frame length, the frame index, and the frequency bin index, respectively. The analysis frame is shifted by $Q$ samples, where $Q = N/2$ is used throughout the paper. The function $h(t)$ denotes an analysis window, where the Hanning window of size $N$ is used as $h(t)$. The DFT spectrum $Y_k(n)$ can be rewritten as
$$Y_k(n) = X_k(n) + D_k(n), \qquad (3)$$
where $X_k(n)$ and $D_k(n)$ are the $k$th spectra of $x(t)$ and $d(t)$, respectively. The enhanced speech spectrum $\hat{X}_k(n)$ is given as
$$\hat{X}_k(n) = G_k(n)\,Y_k(n), \qquad (4)$$
where $G_k(n)$ is a spectral gain. The enhanced speech is obtained as the observed spectrum $Y_k(n)$ multiplied by the spectral gain $G_k(n)$. Hence, the speech enhancement capability depends only on the spectral gain.

A general speech enhancement system is illustrated in Figure 2, where the value of the spectral gain $G_k(n)$ depends on the employed speech enhancement algorithm. We see from (3) and (4) that the ideal spectral gain is given as
$$G_{k,\mathrm{opt}}(n) = 1 - \frac{D_k(n)}{Y_k(n)}. \qquad (5)$$
This spectral gain perfectly recovers the original speech signal as the enhanced speech. Since the ideal spectral gain above cannot be obtained directly from $Y_k(n)$, we have to approximate it by introducing additional assumptions on the speech or the noise signals.
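To make this framework concrete, the following Python sketch (an illustration only, not the authors' implementation) applies an arbitrary spectral gain frame by frame with a half-overlapped Hanning window and reconstructs the enhanced signal by overlap-add. The callable `gain_fn` is a hypothetical placeholder standing in for any of the gain rules derived below; in practice it would internally track the noise variance and the a priori and a posteriori SNRs.

```python
import numpy as np

def enhance(y, gain_fn, N=256):
    """Generic gain-based enhancer: X_hat_k(n) = G_k(n) * Y_k(n), Eq. (4).

    y       : noisy time-domain signal
    gain_fn : callable mapping the complex spectrum Y_k(n) of one frame to a
              real-valued gain vector G_k(n) (hypothetical placeholder)
    N       : DFT size; the frame shift is Q = N/2 as in the paper
    """
    Q = N // 2
    h = np.hanning(N)                                   # analysis window h(t)
    x_hat = np.zeros(len(y))
    for start in range(0, len(y) - N + 1, Q):
        Y = np.fft.rfft(y[start:start + N] * h)         # Eq. (2)
        X_hat = gain_fn(Y) * Y                          # Eq. (4)
        x_hat[start:start + N] += np.fft.irfft(X_hat, N)  # overlap-add synthesis
    return x_hat
```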

In the following sections, we give some typical spectral gains which have been derived from respective assumptions on the speech or the noise. To avoid redundant expressions, we omit the indices $n$ and $k$ when they do not play an important role.

2.2. Spectral Subtraction

The simplest and most famous speech enhancement technique is the spectral subtraction method proposed by Boll in 1979 [3]. This method simply subtracts a pre-estimated noise spectral amplitude from an observed one to obtain the estimated speech spectral amplitude. In the spectral subtraction method, the spectral phase is not modified; that is, the estimated speech spectral phase is identical to the observed one. This is based on the fact that the spectral phase is unimportant in comparison to the spectral amplitude in human speech perception [17]. The spectral subtraction method is achieved by using the following spectral gain:
$$G_{\mathrm{SS}} = 1 - \frac{|\hat{D}|}{|Y|}, \qquad (6)$$
where $|\hat{D}|$ is the pre-estimated noise spectral amplitude. Usually, we choose $|\hat{D}| = E[|D|]$. We note that (6) is an absolute-value version of (5).
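As a minimal sketch of (6), assuming the pre-estimated noise amplitude $|\hat{D}|$ has already been obtained (for example, by averaging noise-only frames), the gain can be computed per bin as follows; the flooring at zero is a common practical safeguard and an assumption here, not part of (6) itself.

```python
import numpy as np

def spectral_subtraction_gain(Y, D_hat_mag, eps=1e-12):
    """Spectral subtraction gain G_SS = 1 - |D_hat|/|Y|, Eq. (6).

    Y         : complex observed spectrum Y_k(n) of one frame
    D_hat_mag : pre-estimated noise spectral amplitude |D_hat| (same shape)
    """
    G = 1.0 - D_hat_mag / (np.abs(Y) + eps)   # eps avoids division by zero
    return np.maximum(G, 0.0)                 # half-wave rectification (assumed safeguard)
```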

The spectral subtraction method is not concerned with speech spectral properties. As a result, the estimated speech signal includes many estimation errors. The estimation errors produce isolated spectra in the estimated speech signal. This noise is called "musical noise" and is perceived as an annoying sound by humans. To obtain an estimated speech signal with less musical noise, we should introduce speech properties into the speech enhancement scheme. In the following sections, we present some speech enhancement methods that take speech probabilistic properties into account.

2.3. Wiener Filter

In this section, we explain the Wiener filter [5], which utilizes the probabilistic properties of both the speech and the noise spectra. It is well known that the Wiener filter provides an estimated speech signal with less musical noise in comparison to the spectral subtraction method.

To derive the Wiener filter, we assume that the speech spectrum $X$ is uncorrelated with the noise spectrum $D$ and that $E[X]=0$, $E[|X|^2]=\sigma_x^2$, $E[D]=0$, and $E[|D|^2]=\sigma_d^2$. The Wiener filter is obtained by minimizing the following cost function:
$$J = E\left[\left|\hat{X} - X\right|^2\right] = E\left[\left|X - GY\right|^2\right], \qquad (7)$$
where $E[\cdot]$ denotes the expected value. We can rewrite $J$ as
$$J = E\left[|X|^2\right] + |G|^2 E\left[|Y|^2\right] - G\,E\left[XY^*\right] - G^*E\left[X^*Y\right] = \sigma_x^2 + |G|^2\left(\sigma_x^2+\sigma_d^2\right) - G\sigma_x^2 - G^*\sigma_x^2. \qquad (8)$$
Differentiating $J$ with respect to $G^*$ gives
$$\frac{\partial J}{\partial G^*} = G\left(\sigma_x^2+\sigma_d^2\right) - \sigma_x^2. \qquad (9)$$
Setting (9) to zero and solving it with respect to $G$, we have the spectral gain of the Wiener filter given as
$$G_{\mathrm{Wiener}} = \frac{\sigma_x^2}{\sigma_x^2+\sigma_d^2} = \frac{\xi}{1+\xi}, \qquad (10)$$
where $\xi = \sigma_x^2/\sigma_d^2$ is the a priori SNR. The Wiener filter requires one parameter $\xi$, or equivalently the two variances $\sigma_x^2$ and $\sigma_d^2$.
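The Wiener gain (10) is trivial to compute once the a priori SNR is available; the sketch below assumes $\xi$ is supplied externally, for example by the decision-directed estimator described later in Section 3.

```python
import numpy as np

def wiener_gain(xi):
    """Wiener filter gain G = xi / (1 + xi), Eq. (10).

    xi : a priori SNR (scalar or per-bin array), sigma_x^2 / sigma_d^2
    """
    xi = np.asarray(xi, dtype=float)
    return xi / (1.0 + xi)
```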

2.4. MMSE-STSA Method

In this section, we explain a historically important speech enhancement method, namely the MMSE-STSA method [4] proposed by Ephraim and Malah in 1984. Ephraim and Malah proposed not only an efficient spectral gain but also an efficient estimation technique for the a priori SNR.

The MMSE-STSA method is derived by minimizing a conditional mean square value of the short-time spectral amplitude. The cost function to be minimized is given by
$$J_{\mathrm{MMSE}} = E\left[\left|\hat{X}-X\right|^2 \mid Y\right] = \int_{-\infty}^{\infty}|X|^2 p(X\mid Y)\,dX + \left|\hat{X}\right|^2 - \hat{X}\int_{-\infty}^{\infty}X^* p(X\mid Y)\,dX - \hat{X}^*\int_{-\infty}^{\infty}X\,p(X\mid Y)\,dX, \qquad (11)$$
where $p(X\mid Y)$ denotes the conditional PDF of $X$. The estimated speech spectrum which minimizes $J_{\mathrm{MMSE}}$ is given as
$$\hat{X}_{\mathrm{MMSE}} = \int_{-\infty}^{\infty}X\,p(X\mid Y)\,dX = E[X\mid Y]. \qquad (12)$$
As shown in [6], when we assume $p(X)$ and $p(D)$ to be Gauss functions, (12) produces the Wiener filter again. On the other hand, Ephraim and Malah considered the PDFs of the speech spectral amplitude and phase, that is, $p(|X|)$ and $p(\angle X)$. They assumed $p(|X|)$ and $p(\angle X)$ to be the Rayleigh distribution and the uniform distribution, respectively [18]. They assumed $p(D)$ to be a Gauss function, where the noise variance $\sigma_d^2$ is assumed to split equally into the real and imaginary parts. These PDFs are expressed as
$$p(|X|) = \frac{2|X|}{\sigma_x^2}\exp\left\{-\frac{|X|^2}{\sigma_x^2}\right\}, \qquad (13)$$
$$p(\angle X) = \frac{1}{2\pi}, \qquad (14)$$
$$p(Y\mid X) = \frac{1}{\pi\sigma_d^2}\exp\left\{-\frac{|Y-X|^2}{\sigma_d^2}\right\}, \qquad (15)$$
where $p(Y\mid X)$ corresponds to $p(D)$. Assuming $p(X) = p(|X|)\,p(\angle X)$, we can calculate (12) by using the relation $p(X\mid Y) = p(Y\mid X)p(X)/p(Y)$. After tedious and complex computations, the spectral gain is given as [4]
$$G_{\mathrm{MMSE}} = \frac{\sqrt{\pi v}}{2\gamma}\exp\left(-\frac{v}{2}\right)\left[(1+v)\,I_0\!\left(\frac{v}{2}\right) + v\,I_1\!\left(\frac{v}{2}\right)\right], \qquad (16)$$
where $I_i(\cdot)$ is the modified Bessel function of order $i$ and
$$v = \frac{\xi}{1+\xi}\gamma, \qquad \gamma = \frac{|Y|^2}{\sigma_d^2}. \qquad (17)$$
Here, $\gamma$ is called the a posteriori SNR. As shown in [4], the optimal spectral phase in the sense of MMSE-STSA is identical to the observed one. Hence, $G_{\mathrm{MMSE}}$ is also a real value. The MMSE-STSA solution, $G_{\mathrm{MMSE}}$, is completely characterized by $\sigma_d^2$, $\xi$, and $\gamma$. When the noise variance $\sigma_d^2$ is known or can be estimated, $\gamma$ is simply obtained from the observed spectrum. On the other hand, estimating the a priori SNR $\xi$ is difficult, although it is required by many other spectral speech enhancers. One of the valuable contributions in [4] is a useful estimation method for $\xi$, called the decision-directed method. We will show it and use it to estimate $\xi$ in Section 3.
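A possible implementation of (16)-(17) is sketched below. It uses the exponentially scaled Bessel functions `i0e` and `i1e` from SciPy, which already include the factor $\exp(-v/2)$ of (16) and thus avoid numerical overflow for large $v$; this numerical trick is an implementation choice, not part of [4].

```python
import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled modified Bessel functions

def mmse_stsa_gain(xi, gamma):
    """Ephraim-Malah MMSE-STSA gain, Eqs. (16)-(17).

    xi    : a priori SNR
    gamma : a posteriori SNR, |Y|^2 / sigma_d^2
    """
    v = xi * gamma / (1.0 + xi)                          # Eq. (17)
    # i0e(x) = exp(-x) * I_0(x), so exp(-v/2) * I_i(v/2) = i_ie(v/2)
    return (np.sqrt(np.pi * v) / (2.0 * gamma)) * (
        (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0))
```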

2.5. MAP Estimation Method

As confirmed in many studies, the spectral gain $G_{\mathrm{MMSE}}$ derived in the previous section is superior to the spectral subtraction method. However, $G_{\mathrm{MMSE}}$ is not easy to implement due to its large computational complexity. Indeed, we can obtain a theoretically more relevant and reasonable spectral gain from the same cost function shown in (11). The MMSE-STSA method chooses $\hat{X} = E[X\mid Y]$ to minimize (11). Here, we note that $E[X\mid Y]$ is the best choice when the PDF is an even function such as a Gauss function. Because the Rayleigh distribution is an asymmetric function, $\hat{X} = E[X\mid Y]$ is not appropriate. In the MAP estimation method [6], the best choice for minimizing (11) is instead the speech spectrum that maximizes $p(X\mid Y)$.

To illustrate the difference between the MMSE-STSA solution and the MAP solution, we show examples with specific PDFs. Figures 3(a) and 3(b) show the Gauss and Rayleigh distributions, respectively. Here, the horizontal axis denotes the value of an argument $x$ and the vertical axis is the PDF $p(x)$. The vertical dotted lines denote the argument values giving the mean value and the maximum value of $p(x)$, respectively. The former corresponds to the MMSE-STSA solution and the latter to the MAP solution. As shown in Figure 3(a), the MMSE-STSA solution is identical to the MAP solution for the Gauss distribution, which is an even function. On the other hand, the two solutions differ for the asymmetric Rayleigh distribution, as shown in Figure 3(b). Obviously, we should choose the MAP solution rather than the MMSE-STSA solution to minimize the cost function (11).

To obtain the MAP solution, we have to maximize the conditional PDF $p(X\mid Y)$. Based on Bayes' rule, we have [6]
$$p(X\mid Y) = \frac{p(Y\mid X)\,p(X)}{p(Y)} \propto p(Y\mid X)\,p(X). \qquad (18)$$
The MAP estimation finds the argument $X$ which maximizes $p(X\mid Y)$, that is,
$$\hat{X} = \arg\max_X p(X\mid Y) = \arg\max_X p(Y\mid X)\,p(X) = \arg\max_X \ln\{p(Y\mid X)\,p(X)\}. \qquad (19)$$
We assume the same PDFs (13) to (15) and $p(X) = p(|X|)\,p(\angle X)$. After calculating $\ln\{p(Y\mid X)\,p(X)\}$ and differentiating it with respect to $|X|$ (or $\angle X$), we set the obtained derivative to zero and solve it with respect to $|X|$ (or $\angle X$). Then, we have [6]
$$G_{\mathrm{MAP}} = \frac{\xi + \sqrt{\xi^2 + 2(1+\xi)\,\xi/\gamma}}{2(1+\xi)}. \qquad (20)$$
Since the MAP solution of $\angle X$ is identical to the observed spectral phase, $G_{\mathrm{MAP}}$ is also a real value. We see that $G_{\mathrm{MAP}}$ consists of $\xi$ and $\gamma$ only; thus, its computational complexity is extremely low in comparison to (16).
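Since (20) involves only elementary operations on $\xi$ and $\gamma$, its implementation is essentially a one-liner; the sketch below assumes both SNRs are given per frequency bin.

```python
import numpy as np

def map_gain(xi, gamma):
    """MAP spectral amplitude gain under the Rayleigh speech prior, Eq. (20)."""
    return (xi + np.sqrt(xi ** 2 + 2.0 * (1.0 + xi) * xi / gamma)) / (2.0 * (1.0 + xi))
```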

2.6. Lotter's Spectral Gain

In the previous section, we obtained a MAP solution for speech enhancement under the assumption that the PDF of the speech spectral amplitude can be modeled as the Rayleigh distribution. However, some researchers have pointed out that there exist other appropriate speech PDFs [8–11]. In 2005, Lotter and Vary proposed an original speech spectral amplitude PDF. This PDF was derived from a real speech histogram made from a large amount of real speech data. In the same manner as in the previous section, the speech spectral amplitude and phase were separately modeled in [11]. The PDF of the spectral phase was again modeled as the uniform distribution defined in (14). Lotter and Vary modeled the PDF of the speech spectral amplitude as a super-Gaussian function represented by
$$p(|X|) = \frac{\mu^{\nu+1}}{\Gamma(\nu+1)}\,\frac{|X|^{\nu}}{\sigma_x^{\nu+1}}\exp\left(-\mu\frac{|X|}{\sigma_x}\right), \qquad (21)$$
where $\Gamma(\cdot)$ is the Gamma function and $\mu$ and $\nu$ are the shape parameters which determine the shape of the above PDF. Using (21), (14), and (15), the same procedure as in the previous section gives the MAP solution expressed as
$$G_{\mathrm{L\cdot MAP}} = u + \sqrt{u^2 + \frac{\nu}{2\gamma}}, \qquad (22)$$
$$u = \frac{1}{2} - \frac{\mu}{4}\sqrt{\frac{1}{\gamma\xi}}. \qquad (23)$$
The MAP solution of the speech spectral phase is also identical to the observed one, and thus $G_{\mathrm{L\cdot MAP}}$ is a real value. Lotter and Vary reported that the most appropriate shape parameters are $\mu = 1.74$ and $\nu = 0.126$ [11]. The spectral gain $G_{\mathrm{L\cdot MAP}}$ also consists of $\xi$ and $\gamma$ only; hence, it is easy to implement.
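A sketch of (22)-(23) with the shape parameters reported in [11] is given below; as before, $\xi$ and $\gamma$ are assumed to be estimated elsewhere.

```python
import numpy as np

def lotter_map_gain(xi, gamma, mu=1.74, nu=0.126):
    """Lotter-Vary super-Gaussian MAP gain, Eqs. (22)-(23),
    with the shape parameters mu = 1.74 and nu = 0.126 from [11]."""
    u = 0.5 - mu / (4.0 * np.sqrt(gamma * xi))       # Eq. (23)
    return u + np.sqrt(u ** 2 + nu / (2.0 * gamma))  # Eq. (22)
```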

2.7. Adaptive Speech PDF Method

In [11], the shape parameters of the speech spectral amplitude PDF, $\mu$ and $\nu$, were derived from a large amount of speech data in a single narrow SNR interval. However, in a practical situation, a speech signal includes both activity segments and pause segments. Since the value of the speech spectral amplitude is always zero in the pause segments, we expect that its PDF can be modeled as a delta function. On the other hand, in the speech activity segments, the PDF of the speech spectral amplitude obeys other functions. Tsukamoto et al. noticed this fact and investigated an adaptive method that changes the PDF of the speech spectral amplitude according to the SNR [13]. They chose Lotter's PDF defined in (21) as the adaptive PDF, because its shape is easily controlled by $\nu$ and $\mu$. Here, we show examples of Lotter's PDF with different shape parameters in Figure 4. We see from this figure that the PDF can fit the exponential distribution and the Rayleigh distribution by adjusting the shape parameters. Utilizing real speech histograms, Tsukamoto et al. derived adaptive shape parameters and showed their effectiveness through computer simulations [13]. This basic idea is useful for speech enhancement in a practical situation. Unfortunately, the reliability of the derived adaptive shape parameters is comparatively low, because they were derived from only two speech histograms.

To refine Tsukamoto's adaptive shape parameters, Thanhikam et al. made and evaluated many real speech histograms in various narrow SNR intervals [16]. They fitted the speech histograms with (21) and revealed an interesting curve of the shape parameters over the narrow SNR intervals. The shape parameters obtained as the fitting results and the derived curves are shown in Figures 5(a) and 5(b), where the narrow SNR was calculated as $P = 10\log_{10}\xi$ [dB]. The lines in the figures denote the curves obtained by the least mean square method. These curves describe the relation between the shape parameters and $P$. Table 1 shows the formulations of the derived shape parameter functions of $P$, where the derived shape parameters are denoted by $R_k^{\mu}(n)$ and $R_k^{\nu}(n)$, and
$$F[x] = \begin{cases} x, & x > 0 \\ 0, & \text{otherwise.} \end{cases} \qquad (24)$$


Table 1: Shape parameter functions, where $R_k^{\mu}(n) = F[a_0 P_k(n) + b_0]$ and $R_k^{\nu}(n) = F[c_0 P_k(n) + d_0]$.

SNR range [dB]        | a_0    | b_0  | c_0    | d_0
----------------------|--------|------|--------|------
P_k(n) ≤ 20           | -0.087 | 3.50 |  0.060 | -1.04
20 < P_k(n) ≤ 33      |  0.045 | 0.84 |  0.060 | -1.04
33 < P_k(n) ≤ 49      | -0.079 | 4.90 | -0.035 |  2.11
49 < P_k(n) ≤ 65      | -0.011 | 1.60 |  0.039 | -1.56
65 < P_k(n)           | -0.074 | 5.60 |  0     |  1.00

Thanhikam et al. used an averaged value of $R_k^{\mu}(n)$ and $R_k^{\nu}(n)$ to determine the present PDF shape of the speech spectral amplitude. Their "adaptive" MAP solution is as follows:

šŗš‘˜(š‘›)=š‘¢š‘˜īƒŽ(š‘›)+š‘¢2š‘˜šœˆ(š‘›)+š‘˜(š‘›)2š›¾š‘˜,š‘¢(š‘›)(25)š‘˜1(š‘›)=2āˆ’šœ‡š‘˜(š‘›)4ī”š›¾š‘˜Ģ‚šœ‰(š‘›)š‘˜,šœ‡(š‘›)(26)š‘˜(š‘›)=š›¼šœ‡š‘˜(š‘›āˆ’1)+(1āˆ’š›¼)š‘…šœ‡š‘˜šœˆ(š‘›),(27)š‘˜(š‘›)=&š›¼šœˆš‘˜(š‘›āˆ’1)+(1āˆ’š›¼)š‘…šœˆš‘˜(š‘›),(28) where š›¼ is the forgetting factor and šœ‡š‘˜(š‘›) and šœˆš‘˜(š‘›) are the adaptive shape parameters. In [16], they put š›¼=0.98, šœ‡š‘˜(0)=20, šœˆš‘˜(0)=0. This paper also use these settings.

In the next section, we compare the speech enhancement capabilities of the spectral gains presented in this paper.

3. Speech Enhancement Simulation

To compare the speech enhancement capabilities of the spectral gains derived in this paper, we first explain the common conditions for the speech enhancement simulations. After that, we show the simulation results and discuss them.

3.1. Common Conditions

The speech enhancement methods explained in this paper commonly require the noise variance $\sigma_{d,k}^2(n)$, the a priori SNR $\xi_k(n)$, and the a posteriori SNR $\gamma_k(n)$. To obtain these parameters, the following estimation methods were used.

First, the noise variance was calculated by using the weighted noise estimator proposed in [19]. This method can update the estimated noise variance even while a speech signal is present. The weighted noise estimator calculates an instantaneous noise power by using the weight $W_k(n)$ shown in Figure 6. Here, $\theta$ and $\hat{\gamma}_H$ are constant values; [19] recommends $\theta = 7$ and $\hat{\gamma}_H = 10$. As shown in Figure 6, $W_k(n)$ is a function of $\hat{\gamma}_k(n)$ given as
$$\hat{\gamma}_k(n) = 10\log_{10}\frac{|Y_k(n)|^2}{\sigma_{d,k}^2(n-1)}. \qquad (29)$$
The noise variance $\sigma_{d,k}^2(n)$ is updated as
$$\sigma_{d,k}^2(n) = \beta\,\sigma_{d,k}^2(n-1) + (1-\beta)\,W_k(n)\,|Y_k(n)|^2, \qquad (30)$$
where $\beta$ is a forgetting factor and $\beta = 0.92$ was used.
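A sketch of the update (29)-(30) is shown below. The exact weight curve $W_k(n)$ is given only graphically in Figure 6 (following [19]); the piecewise-linear weight used here (full weight up to $\theta$ dB, zero weight above $\theta + \hat{\gamma}_H$ dB) is therefore only an assumed stand-in, combined with the recommended constants $\theta = 7$ and $\hat{\gamma}_H = 10$.

```python
import numpy as np

def update_noise_variance(Y, sigma_d2_prev, beta=0.92, theta=7.0, gamma_H=10.0):
    """Weighted noise variance update, Eqs. (29)-(30).

    Y             : complex observed spectrum of the current frame
    sigma_d2_prev : per-bin noise variance estimate of the previous frame
    """
    gamma_hat = 10.0 * np.log10(np.abs(Y) ** 2 / (sigma_d2_prev + 1e-12))  # Eq. (29)
    # Assumed weight curve standing in for Figure 6: 1 below theta dB,
    # 0 above theta + gamma_H dB, linear in between.
    W = np.clip(1.0 - (gamma_hat - theta) / gamma_H, 0.0, 1.0)
    return beta * sigma_d2_prev + (1.0 - beta) * W * np.abs(Y) ** 2        # Eq. (30)
```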

Next, the a posteriori SNR was directly calculated as
$$\gamma_k(n) = \frac{|Y_k(n)|^2}{\sigma_{d,k}^2(n)}. \qquad (31)$$

Finally, the a priori SNR was calculated by using the decision-directed method proposed in [4]. The decision-directed method is given by
$$\xi_k(n) = \alpha_{\mathrm{snr}}\frac{\left|\hat{X}_k(n-1)\right|^2}{\sigma_{d,k}^2(n-1)} + \left(1-\alpha_{\mathrm{snr}}\right)F\left[\gamma_k(n)-1\right], \qquad (32)$$
where $\alpha_{\mathrm{snr}}$ is a forgetting factor and $\alpha_{\mathrm{snr}} = 0.98$ was used according to [4].
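A per-bin sketch of the decision-directed rule (32), using the rectification $F[\cdot]$ from (24) to keep the estimate nonnegative, might look as follows; the small constant guarding the division is an added assumption.

```python
import numpy as np

def decision_directed_xi(X_hat_prev, sigma_d2_prev, gamma, alpha_snr=0.98):
    """Decision-directed a priori SNR estimate, Eq. (32).

    X_hat_prev    : estimated speech spectrum of the previous frame
    sigma_d2_prev : noise variance estimate of the previous frame
    gamma         : current a posteriori SNR, Eq. (31)
    """
    return (alpha_snr * np.abs(X_hat_prev) ** 2 / (sigma_d2_prev + 1e-12)
            + (1.0 - alpha_snr) * np.maximum(gamma - 1.0, 0.0))
```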

The common speech enhancement system is shown in Figure 7, where the numbers denote the order of the estimation procedures. Of course, the spectral gain estimation depends on the employed speech enhancement method. In the simulations, the observed signal $y(t)$ was a female speech signal $x(t)$ corrupted by a practical tunnel noise $d(t)$ at SNR = 0 dB, where the noise was recorded in an expressway tunnel in Japan. All the signals used in the simulations were sampled at 8 kHz, and the DFT size was 256 (the FFT was used instead of the DFT). For objective evaluation, we utilized the SNR defined as
$$\mathrm{SNR} = 10\log_{10}\frac{\sum_{t=0}^{L}|x(t)|^2}{\sum_{t=0}^{L}|x(t)-\hat{x}(t)|^2}, \qquad (33)$$
where $L$ denotes the number of samples in the time domain. We also utilized another evaluation function given as [17]
$$\mathrm{LR} = \frac{1}{J}\sum_{n=0}^{J-1}\frac{1}{N}\sum_{k=0}^{N-1}\log\left(\frac{|X_k(n)|}{|\hat{X}_k(n)|} + \frac{|\hat{X}_k(n)|}{|X_k(n)|} - 1\right), \qquad (34)$$
where $J$ is the number of frames. The LR (Likelihood Ratio) denotes a spectral distance between the original speech and the estimated one; hence, a perfect speech estimate gives LR = 0.
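The two objective measures (33)-(34) can be computed as follows. The spectra passed to the LR function are assumed to be the clean and estimated amplitude spectrograms arranged as (frames x bins) arrays, and the small constant `eps` (an added assumption) guards against empty bins.

```python
import numpy as np

def snr_db(x, x_hat):
    """Output SNR in dB, Eq. (33)."""
    return 10.0 * np.log10(np.sum(np.abs(x) ** 2)
                           / np.sum(np.abs(x - x_hat) ** 2))

def likelihood_ratio(X, X_hat, eps=1e-12):
    """Spectral distance LR, Eq. (34); LR = 0 for a perfect estimate.

    X, X_hat : (J frames, N bins) arrays of clean and estimated amplitudes.
    """
    r = (np.abs(X) + eps) / (np.abs(X_hat) + eps)
    return np.mean(np.log(r + 1.0 / r - 1.0))
```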

3.2. Simulation Results

Speech enhancement simulations were carried out to compare the presented speech enhancement methods. The chosen methods were the spectral subtraction method [3] and the Wiener filter [5] as traditional methods, Lotter's spectral gain [11] as a MAP method using a fixed speech PDF, and the adaptive speech PDF method [16] as a recent method.

Table 2 shows the results of the objective evaluation for each method, where both the best SNR and the best LR were obtained with the adaptive speech PDF method proposed by Thanhikam et al. [16]. We see from this table that the Wiener filter and Lotter's method also gave comparatively good SNR and LR results in comparison to the spectral subtraction method. The waveforms of the simulation results are shown in Figures 8(a)–8(e), and the respective spectrograms are shown in Figures 9(a)–9(e). From Figures 8(b) and 9(b), we see that the spectral subtraction method left much residual noise. The main reason may be that the spectral subtraction method does not use any speech spectral information. The residual noise is perceived as annoying musical noise. From Figures 8(c) and 9(c), we see that the Wiener filter is superior to the spectral subtraction method for speech enhancement. The Wiener filter gave an estimated speech with less musical noise, although the amount of residual noise was comparatively large. From the waveform shown in Figure 8(d), we can confirm that Lotter's spectral gain method effectively reduces the noise in some segments. However, its spectrogram shown in Figure 9(d) shows that Lotter's spectral gain method emphasized isolated spectra, that is, musical noise. As a result, it also causes a perceptual problem. In Figures 8(e) and 9(e), such estimation errors cannot be confirmed. This implies that the adaptive PDF method proposed by Thanhikam et al. is appropriate for reducing the noise in speech pause segments. However, in the speech activity segments, we can confirm that some speech spectral components also vanished. The output speech quality of the adaptive speech PDF method may be improved by adjusting the forgetting factor in the adaptive shape parameters of the speech PDF.


š‘† š‘Š šæ š“

SNR [dB]6.814.512.714.8
LR141.829.027.97.0

S: spectral subtraction in (6), W: Wiener filter in (10), L: Lotterā€™s spectral gain in (22), A: adaptive PDF in (25).

4. Conclusion

Single-channel speech enhancement methods have been extensively studied for decades. This paper has presented several spectral gain methods among numerous studies. Of course, there exist various noisy situations, and hence we cannot choose a single best speech enhancement system among them. We have simply tried to explicitly describe the theoretical backgrounds of the chosen speech enhancement methods. The noise reduction capability of the speech enhancement methods was roughly compared for an arbitrary noisy speech, although the simulation results may change slightly when different noise and speech signals are used. From the obtained simulation results, we confirmed that the MAP estimation methods gave a good noise reduction performance. In particular, the recently proposed adaptive speech PDF method reduced the noise signal strongly and hence did not produce musical noise in speech pause segments. In the speech activity segments, however, we perceived small-level musical noise and a degradation of the speech. Such degradation tends to become large as the noise increases. Future work in speech enhancement includes the development of an effective noise reduction method which can give a good performance for a noisy speech signal with an SNR less than 0 dB.

References

  1. M. Muneyasu and A. Taguchi, Nonlinear Digital Signal Processing, Asakura Publishing, Tokyo, Japan, 1999.
  2. A. Kawamura, Y. Iiguni, and Y. Itoh, "A noise reduction method based on linear prediction with variable step-size," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E88-A, no. 4, pp. 855–861, 2005.
  3. S. F. Boll, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 27, no. 2, pp. 113–120, 1979.
  4. Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 32, no. 6, pp. 1109–1121, 1984.
  5. B. Widrow, J. G. R. Glover Jr., J. M. McCool et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.
  6. P. J. Wolfe and S. J. Godsill, "Efficient alternatives to the Ephraim and Malah suppression rule for audio signal enhancement," EURASIP Journal on Applied Signal Processing, vol. 2003, no. 10, pp. 1043–1051, 2003.
  7. R. J. McAulay and M. L. Malpass, "Speech enhancement using a soft-decision noise suppression filter," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, no. 2, pp. 137–145, 1980.
  8. B. Chen and P. C. Loizou, "Speech enhancement using a MMSE short time spectral amplitude estimator with Laplacian speech modeling," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), pp. I1097–I1100, March 2005.
  9. R. Martin, "Speech enhancement based on minimum mean-square error estimation and supergaussian priors," IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 845–856, 2005.
  10. S. Gazor and W. Zhang, "Speech enhancement employing Laplacian-Gaussian mixture," IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 896–904, 2005.
  11. T. Lotter and P. Vary, "Speech enhancement by MAP spectral amplitude estimation using a super-Gaussian speech model," EURASIP Journal on Applied Signal Processing, vol. 2005, no. 7, pp. 1110–1126, 2005.
  12. I. Andrianakis and P. R. White, "Speech spectral amplitude estimators using optimally shaped Gamma and Chi priors," Speech Communication, vol. 51, no. 1, pp. 1–14, 2009.
  13. Y. Tsukamoto, A. Kawamura, and Y. Iiguni, "Speech enhancement based on MAP estimation using a variable speech distribution," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E90-A, no. 8, pp. 1587–1593, 2007.
  14. A. Kawamura, W. Thanhikam, and Y. Iiguni, "A speech spectral estimator using adaptive speech probability density function," in Proceedings of EUSIPCO 2010, pp. 1549–1552, August 2010.
  15. W. Thanhikam, A. Kawamura, and Y. Iiguni, "Speech enhancement using speech model parameters refined by two-step technique," in Proceedings of the 2nd APSIPA Annual Summit and Conference, p. 11, December 2010.
  16. W. Thanhikam, A. Kawamura, and Y. Iiguni, "Speech enhancement based on real speech PDF in various narrow SNR intervals," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E95-A, no. 3, pp. 623–630, 2012.
  17. S. Furui, Digital Speech Processing, Tokai University Press, Tokyo, Japan, 1985.
  18. S. L. Miller and D. G. Childers, Probability and Random Processes, Elsevier/Academic Press, 2004.
  19. M. Kato, A. Sugiyama, and M. Serizawa, "Noise suppression with high speech quality based on weighted noise estimation and MMSE STSA," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E85-A, no. 7, pp. 1710–1718, 2002.

Copyright © 2012 Arata Kawamura et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
