Review Article  Open Access
Arata Kawamura, Weerawut Thanhikam, Youji Iiguni, "Single Channel Speech Enhancement Techniques in Spectral Domain", International Scholarly Research Notices, vol. 2012, Article ID 919234, 9 pages, 2012. https://doi.org/10.5402/2012/919234
Single Channel Speech Enhancement Techniques in Spectral Domain
Abstract
This paper presents single-channel speech enhancement techniques in the spectral domain. One of the most famous single-channel speech enhancement techniques is the spectral subtraction method proposed by S. F. Boll in 1979. In this method, an estimated speech spectrum is obtained by simply subtracting a pre-estimated noise spectrum from an observed one. Hence, the spectral subtraction method takes no account of speech spectral properties, and it is well known to produce an annoying artificial noise in the extracted speech signal. Recent successful speech enhancement methods, on the other hand, actively exploit speech properties and achieve efficient speech enhancement. This paper presents a historical review of several speech estimation techniques and explicitly states the differences between their theoretical backgrounds. Moreover, to evaluate their speech enhancement capabilities, we perform computer simulations. The results show that an adaptive speech enhancement method based on MAP estimation gives the best noise reduction capability in comparison with the other speech enhancement methods presented in this paper.
1. Introduction
In recent years, speech enhancement has been required in a wide range of applications including mobile communication and speech recognition systems; a major example is the cell phone shown in Figure 1. Many speech enhancement methods have been established over the decades [1–15]. These techniques can be classified into time domain methods and spectral domain methods. Most recent speech enhancement techniques are spectral domain methods, which are preferably used in cell phones. In this paper, we focus on spectral domain speech enhancement techniques that employ a single microphone.
The spectral subtraction method [3] is one of the most popular among the numerous noise reduction techniques in the spectral domain. This method achieves noise reduction by simply subtracting a pre-estimated noise spectral amplitude from an observed spectral amplitude, where the spectral phase is not processed. The spectral subtraction method is easy to implement and effectively reduces stationary noises. However, it incurs an artificial noise, called musical noise, which is caused by speech estimation errors. Because the spectral subtraction method takes no account of speech spectral information, it often produces such estimation errors. Ephraim and Malah have proposed the MMSE-STSA (Minimum Mean Square Error Short-Time Spectral Amplitude) method [4], which utilizes a speech PDF (Probability Density Function) and a noise PDF. In [4], the speech and noise PDFs were modeled by Rayleigh and Gauss density functions, respectively. This method gives an optimal solution for the estimated speech signal in the MMSE-STSA sense (the solution reduces to the Wiener filter [5] if we assume Gauss distributions for both the speech and noise PDFs). Although the MMSE-STSA method gives an estimated speech signal with less musical noise, it requires more complicated computations; for example, the solution requires evaluation of the modified Bessel function. Moreover, as pointed out by some researchers, real speech histograms do not fit the Rayleigh function employed in [4].
A more efficient method based on maximum a posteriori (MAP) estimation has been established by Lotter and Vary [11]. Lotter and Vary modeled the speech PDF by a parametric super-Gaussian function controlled by two shape parameters. The parametric super-Gaussian function was developed from a histogram made from a large amount of real speech data in a single narrow SNR (Signal-to-Noise Ratio) interval. The noise suppression capability of this method is superior to that of the Wiener filter. However, the residual noise is still persistently perceived. Andrianakis and White observed that the speech PDF may change across SNR intervals [12]. They utilized three histograms made from speech signals in three different narrow SNR intervals and approximated them with Gamma density functions. As reported in [12], switching among these three speech PDFs according to the SNR can improve the noise reduction capability. While Andrianakis and White change the speech PDF discretely, Tsukamoto et al. change it continuously according to the SNR [13]. They employed the parametric super-Gaussian function proposed in [11] and adaptively changed its shape parameters according to the SNR. Recently, Thanhikam et al. [16] refined this approach by making and evaluating many real speech histograms drawn from various narrow SNR intervals. As shown in [16], this method has a very strong noise reduction capability in comparison with other traditional speech enhancement methods, and hence it is effective especially in low SNR environments.
In the following sections, we present a historical review of useful speech enhancement methods mentioned above and compare their speech enhancement capabilities by computer simulations.
2. Speech Enhancement in Spectral Domain
This section presents several speech enhancement techniques including both traditional methods and recent methods. Particularly, we will carefully explain the difference between them.
2.1. General Speech Enhancement System
Firstly, we explain a general single-channel speech enhancement system in the spectral domain.
We assume that an observed signal is a sum of a speech signal and a noise signal:
$$y(t) = s(t) + d(t), \qquad (1)$$
where $y(t)$ is the observed signal at time $t$, and $s(t)$ and $d(t)$ denote the speech signal and the noise signal, respectively. We assume that $s(t)$ is uncorrelated with $d(t)$ throughout the paper. Taking the DFT of (1), we have
$$Y(\lambda, k) = \sum_{t=0}^{N-1} w(t)\, y(t + \lambda N_s)\, e^{-j 2\pi k t / N}, \qquad (2)$$
where $N$, $\lambda$, and $k$ denote the frame length, the frame index, and the frequency bin index, respectively. The analysis frame is shifted by $N_s$ samples throughout the paper. The function $w(t)$ denotes an analysis window function, where the Hanning window of size $N$ is used. The DFT spectrum can be rewritten as
$$Y(\lambda, k) = S(\lambda, k) + D(\lambda, k), \qquad (3)$$
where $S(\lambda, k)$ and $D(\lambda, k)$ are the $k$th spectra of $s(t)$ and $d(t)$, respectively. The enhanced speech spectrum is given as
$$\hat{S}(\lambda, k) = G(\lambda, k)\, Y(\lambda, k), \qquad (4)$$
where $G(\lambda, k)$ is a spectral gain. The enhanced speech is obtained as the observed signal multiplied by the spectral gain $G(\lambda, k)$. Hence, the speech enhancement capability depends only on the spectral gain.
A general speech enhancement system is illustrated in Figure 2, where the value of the spectral gain depends on the employed speech enhancement algorithm. We see from (3) and (4) that the ideal spectral gain is given as
$$G(\lambda, k) = \frac{S(\lambda, k)}{Y(\lambda, k)}. \qquad (5)$$
This spectral gain perfectly recovers the original speech signal as the enhanced speech. Since the ideal spectral gain above cannot be directly obtained from $Y(\lambda, k)$, we have to approximate it by introducing additional assumptions on the speech or the noise signals.
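The analysis/gain/synthesis pipeline described above can be sketched in Python. This is a minimal single-frame illustration only (a naive DFT instead of the FFT, no frame shifting or overlap-add), and the function names are our own, not taken from the paper.

```python
import cmath
import math

def dft(frame):
    """Naive DFT of a real frame (illustrative; an FFT is used in practice)."""
    N = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / N)
                for t in range(N)) for k in range(N)]

def idft(spec):
    """Inverse DFT, returning the real part of the synthesized frame."""
    N = len(spec)
    return [(sum(spec[k] * cmath.exp(2j * math.pi * k * t / N)
                 for k in range(N)) / N).real for t in range(N)]

def enhance_frame(y_frame, gain_fn):
    """Window one frame, apply a per-bin spectral gain G(k), and resynthesize."""
    N = len(y_frame)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * t / N) for t in range(N)]  # Hanning
    Y = dft([w[t] * y_frame[t] for t in range(N)])
    S_hat = [gain_fn(k, Y[k]) * Y[k] for k in range(N)]  # enhanced spectrum
    return idft(S_hat)

# With the identity gain G = 1 the windowed input frame is returned unchanged,
# confirming that all enhancement behavior lives in the gain function alone.
frame = [math.sin(0.3 * t) for t in range(8)]
out = enhance_frame(frame, lambda k, Yk: 1.0)
```

All the methods reviewed below differ only in how `gain_fn` is computed.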
In the following sections, we give some typical spectral gains, each derived from its respective assumptions on the speech or the noise. To avoid redundant expressions, we omit the indices $\lambda$ and $k$ when they do not play an important role.
2.2. Spectral Subtraction
The simplest and most famous speech enhancement technique is the spectral subtraction proposed by Boll in 1979 [3]. This method simply subtracts a pre-estimated noise spectral amplitude from the observed one to obtain the estimated speech spectral amplitude. In the spectral subtraction method, the spectral phase is not modified; that is, the estimated speech spectral phase is identical to the observed one. This is based on the fact that the spectral phase is unimportant in comparison with the spectral amplitude in human speech perception [17]. The spectral subtraction method is achieved by using the following spectral gain:
$$G_{\mathrm{SS}} = \max\!\left( \frac{|Y| - |\hat{D}|}{|Y|},\; 0 \right), \qquad (6)$$
where $|\hat{D}|$ is the pre-estimated noise spectral amplitude, usually obtained by averaging the observed spectral amplitude in speech pause segments. We note that formula (6) is an amplitude version of (5).
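The subtraction gain can be sketched as a small function. The flooring at zero (half-wave rectification of negative amplitudes) and the function name are our own illustrative choices, not fixed by [3]:

```python
def spectral_subtraction_gain(Y_amp, D_amp, floor=0.0):
    """Amplitude spectral subtraction gain: G = (|Y| - |D_hat|)/|Y|,
    floored (here at 0) so that the gain never becomes negative."""
    if Y_amp <= 0.0:
        return floor
    return max((Y_amp - D_amp) / Y_amp, floor)
```

When the noise estimate exceeds the observed amplitude the gain is clipped, which is exactly where the isolated spectral peaks (musical noise) originate.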
The spectral subtraction takes no account of speech spectral properties. As a result, the estimated speech signal includes many estimation errors. These estimation errors produce isolated spectra in the estimated speech signal. This noise is called “musical noise” and is perceived as an annoying sound by humans. To obtain an estimated speech signal with less musical noise, we should introduce speech properties into the speech enhancement scheme. In the following sections, we present some speech enhancement methods that take speech probabilistic properties into account.
2.3. Wiener Filter
In this section, we explain the Wiener filter [5], which utilizes the spectral probabilistic properties of both the speech and the noise. It is well known that the Wiener filter provides an estimated speech signal with less musical noise in comparison with the spectral subtraction method.
To derive the Wiener filter, we assume that the speech spectrum $S$ is uncorrelated with the noise spectrum $D$, and that $E[S] = E[D] = 0$, $E[|S|^2] = \sigma_s^2$, and $E[|D|^2] = \sigma_d^2$. The Wiener filter is obtained by minimizing the following cost function:
$$J = E\!\left[ |S - G Y|^2 \right], \qquad (7)$$
where $E[\cdot]$ denotes the expected value. We can rewrite $J$ as
$$J = \sigma_s^2 - 2 G \sigma_s^2 + G^2 \left( \sigma_s^2 + \sigma_d^2 \right). \qquad (8)$$
Differentiating $J$ with respect to $G$ gives
$$\frac{\partial J}{\partial G} = -2 \sigma_s^2 + 2 G \left( \sigma_s^2 + \sigma_d^2 \right). \qquad (9)$$
Setting (9) to zero and solving with respect to $G$, we have the spectral gain of the Wiener filter:
$$G_{\mathrm{W}} = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_d^2} = \frac{\xi}{1 + \xi}, \qquad (10)$$
where $\xi = \sigma_s^2 / \sigma_d^2$ is the a priori SNR. The Wiener filter requires the one parameter $\xi$, or equivalently the two variances $\sigma_s^2$ and $\sigma_d^2$.
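The Wiener gain itself is a one-liner; this sketch just makes the parameterization by the a priori SNR explicit:

```python
def wiener_gain(xi):
    """Wiener spectral gain G = xi / (1 + xi), where xi is the a priori SNR
    (speech variance divided by noise variance)."""
    return xi / (1.0 + xi)
```

The gain runs from 0 (pure noise, xi = 0) toward 1 (noise-free, xi large), which matches its role as the minimizer of the quadratic cost above.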
2.4. MMSESTSA Method
In this section, we explain a historically important speech enhancement method, the MMSE-STSA method [4] proposed by Ephraim and Malah in 1984. Ephraim and Malah proposed not only an efficient spectral gain but also an efficient estimation technique for the a priori SNR.
The MMSE-STSA method is derived by minimizing a conditional mean square value of the short-time spectral amplitude $A = |S|$. The cost function to be minimized is given by
$$J = E\!\left[ \left( A - \hat{A} \right)^2 \,\middle|\, Y \right] = \int_0^\infty \left( A - \hat{A} \right)^2 p(A \mid Y)\, dA, \qquad (11)$$
where $p(A \mid Y)$ denotes the conditional PDF of $A$. The estimated speech spectral amplitude which minimizes $J$ is given as
$$\hat{A} = E[A \mid Y] = \int_0^\infty A\, p(A \mid Y)\, dA. \qquad (12)$$
As shown in [6], when we assume Gauss functions for the speech and noise spectral PDFs, (12) produces the Wiener filter again. On the other hand, Ephraim and Malah considered the PDFs of the speech spectral amplitude and phase, that is, $p(A)$ and $p(\alpha)$. They assumed $p(A)$ and $p(\alpha)$ to be the Rayleigh distribution and the uniform distribution, respectively [18], and assumed the noise spectrum to be Gaussian, where the noise variance splits equally into real and imaginary parts. These PDFs are expressed as
$$p(A) = \frac{2A}{\sigma_s^2} \exp\!\left( -\frac{A^2}{\sigma_s^2} \right), \qquad (13)$$
$$p(\alpha) = \frac{1}{2\pi}, \qquad 0 \le \alpha < 2\pi, \qquad (14)$$
$$p(Y \mid A, \alpha) = \frac{1}{\pi \sigma_d^2} \exp\!\left( -\frac{\left| Y - A e^{j\alpha} \right|^2}{\sigma_d^2} \right), \qquad (15)$$
where $\alpha$ corresponds to the speech spectral phase. Assuming that $A$ and $\alpha$ are mutually independent, we can calculate (12) by using the relation $p(A \mid Y) = \int_0^{2\pi} p(Y \mid A, \alpha)\, p(A)\, p(\alpha)\, d\alpha \,/\, p(Y)$. After tedious and complex computations, the spectral gain is given as [4]
$$G_{\mathrm{MMSE}} = \frac{\sqrt{\pi v}}{2 \gamma} \exp\!\left( -\frac{v}{2} \right) \left[ (1 + v)\, I_0\!\left( \frac{v}{2} \right) + v\, I_1\!\left( \frac{v}{2} \right) \right], \qquad (16)$$
where $I_n$ is the modified Bessel function of order $n$ and
$$v = \frac{\xi}{1 + \xi}\, \gamma, \qquad \gamma = \frac{|Y|^2}{\sigma_d^2}. \qquad (17)$$
Here, $\gamma$ is called the a posteriori SNR. As shown in [4], the optimal spectral phase in the MMSE-STSA sense is identical to the observed one. Hence, $G_{\mathrm{MMSE}}$ is also a real value. The MMSE-STSA solution is completely characterized by $|Y|$, $\xi$, and $\gamma$. When the noise variance is known or can be estimated, $\gamma$ is simply obtained from the observed spectrum. On the other hand, estimating the a priori SNR $\xi$ is difficult, although it is required by many other spectral speech enhancers as well. One of the valuable contributions of [4] is a useful estimation method for $\xi$, called the decision-directed method. We will show and use it to estimate $\xi$ in Section 3.
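The MMSE-STSA gain above can be sketched in a few lines. To stay self-contained we evaluate the modified Bessel functions by their power series, which is adequate for moderate arguments; this is our own illustrative implementation, not code from [4].

```python
import math

def bessel_i(order, x, terms=60):
    """Modified Bessel function I_n(x) via its power series
    sum_m (x/2)^(2m+n) / (m! * (m+n)!), for small orders (0 and 1 here)."""
    return sum((x / 2.0) ** (2 * m + order)
               / (math.factorial(m) * math.factorial(m + order))
               for m in range(terms))

def mmse_stsa_gain(xi, gamma):
    """Ephraim-Malah MMSE-STSA spectral gain:
    v = xi/(1+xi) * gamma,
    G = sqrt(pi*v)/(2*gamma) * exp(-v/2) * [(1+v)*I0(v/2) + v*I1(v/2)]."""
    v = xi / (1.0 + xi) * gamma
    return (math.sqrt(math.pi * v) / (2.0 * gamma) * math.exp(-v / 2.0)
            * ((1.0 + v) * bessel_i(0, v / 2.0) + v * bessel_i(1, v / 2.0)))
```

For large $v$ the gain approaches the Wiener gain $\xi/(1+\xi)$. Production code would use an exponentially scaled Bessel evaluation (e.g. `scipy.special.ive`) to avoid overflow at large $v$.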
2.5. MAP Estimation Method
As confirmed in many studies, the spectral gain $G_{\mathrm{MMSE}}$ derived in the previous section is superior to the spectral subtraction method. However, it is not easy to implement due to its large computational complexity. In fact, a more computationally reasonable spectral gain can be obtained from the same cost function shown in (11). The MMSE-STSA method chooses the conditional mean $E[A \mid Y]$ to minimize (11). We can note that the conditional mean coincides with the peak of the conditional PDF when that PDF is an even function such as a Gauss function. Because the Rayleigh distribution is an asymmetric function, this coincidence no longer holds. The MAP estimation method [6] instead employs the speech spectrum maximizing the conditional PDF as the estimate.
To illustrate the difference between the MMSE-STSA solution and the MAP solution, we show examples for specific PDFs. Figures 3(a) and 3(b) show the Gauss and Rayleigh distributions, respectively. Here, the horizontal axis denotes the value of the argument and the vertical axis is the PDF. The vertical dotted lines denote the argument values giving the mean value and the maximum value of the PDF, respectively. The former corresponds to the MMSE-STSA solution and the latter to the MAP solution. As shown in Figure 3(a), the MMSE-STSA solution is identical to the MAP solution for the Gauss distribution, which is an even function. On the other hand, the two solutions differ for the asymmetric Rayleigh distribution, as shown in Figure 3(b).
(a) Gauss distribution
(b) Rayleigh distribution
To obtain the MAP solution, we have to maximize the conditional PDF $p(A, \alpha \mid Y)$. Based on Bayes' rule, we have [6]
$$p(A, \alpha \mid Y) = \frac{p(Y \mid A, \alpha)\, p(A)\, p(\alpha)}{p(Y)}. \qquad (18)$$
The MAP estimation finds the arguments which maximize $p(A, \alpha \mid Y)$, that is,
$$\left( \hat{A}, \hat{\alpha} \right) = \arg\max_{A, \alpha}\, p(A, \alpha \mid Y). \qquad (19)$$
We assume the same PDFs (13)–(15) for $p(A)$, $p(\alpha)$, and $p(Y \mid A, \alpha)$. After calculating $\ln p(A, \alpha \mid Y)$ and differentiating it with respect to $A$ (or $\alpha$), we set the obtained derivative to zero and solve it with respect to $A$ (or $\alpha$). Then, we have [6]
$$G_{\mathrm{MAP}} = \frac{\xi + \sqrt{\xi^2 + 2\, (1 + \xi)\, \dfrac{\xi}{\gamma}}}{2\, (1 + \xi)}. \qquad (20)$$
Since the MAP solution of the spectral phase is identical to the observed spectral phase, $G_{\mathrm{MAP}}$ is also a real value. We see that $G_{\mathrm{MAP}}$ consists of $\xi$ and $\gamma$ only; thus its computational complexity is extremely low in comparison with (16).
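A sketch of this joint MAP gain, in the closed form that follows from setting the derivative of the log posterior to zero under the Rayleigh amplitude and uniform phase priors:

```python
import math

def map_gain_rayleigh(xi, gamma):
    """Joint MAP spectral gain for a Rayleigh amplitude prior and uniform
    phase prior: G = (xi + sqrt(xi^2 + 2*(1+xi)*xi/gamma)) / (2*(1+xi))."""
    return ((xi + math.sqrt(xi * xi + 2.0 * (1.0 + xi) * xi / gamma))
            / (2.0 * (1.0 + xi)))
```

As the a posteriori SNR grows, the square-root term shrinks and the gain tends to the Wiener gain $\xi/(1+\xi)$, using only elementary operations and no Bessel functions.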
2.6. Lotter's Spectral Gain
In the previous section, we obtained a MAP solution for speech enhancement under the assumption that the PDF of the speech spectral amplitude can be modeled by the Rayleigh distribution. However, some researchers pointed out that other speech PDFs are more appropriate [8–11]. In 2005, Lotter and Vary proposed an original speech spectral amplitude PDF, derived from a histogram made from a large amount of real speech data. In the same manner as in the previous section, the speech spectral amplitude and phase were separately modeled in [11]. The PDF of the spectral phase was again modeled as the uniform distribution defined in (14). Lotter and Vary modeled the PDF of the speech spectral amplitude as the super-Gaussian function
$$p(A) = \frac{\mu^{\nu + 1}}{\Gamma(\nu + 1)}\, \frac{A^{\nu}}{\sigma_s^{\nu + 1}} \exp\!\left( -\mu \frac{A}{\sigma_s} \right), \qquad (21)$$
where $\Gamma(\cdot)$ is the Gamma function and $\nu$ and $\mu$ are the shape parameters which determine the shape of the above PDF. Using (21), (14), and (15), the same procedure as in the previous section gives the MAP solution expressed as
$$G_{\mathrm{L}} = u + \sqrt{u^2 + \frac{\nu}{2 \gamma}}, \qquad u = \frac{1}{2} - \frac{\mu}{4 \sqrt{\gamma \xi}}. \qquad (22)$$
The MAP solution of the speech spectral phase is again identical to the observed one, and thus $G_{\mathrm{L}}$ is a real value. Lotter and Vary reported the most appropriate shape parameters in [11] as $\nu = 0.126$ and $\mu = 1.74$. The spectral gain $G_{\mathrm{L}}$ also consists of $\xi$ and $\gamma$ only; hence it is easy to implement.
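The super-Gaussian MAP gain is equally light to compute; a sketch with the shape parameters exposed as arguments (the defaults are the values commonly quoted for [11]):

```python
import math

def lotter_map_gain(xi, gamma, nu=0.126, mu=1.74):
    """MAP spectral gain under the parametric super-Gaussian amplitude prior:
    u = 1/2 - mu / (4*sqrt(gamma*xi)),  G = u + sqrt(u^2 + nu/(2*gamma))."""
    u = 0.5 - mu / (4.0 * math.sqrt(gamma * xi))
    return u + math.sqrt(u * u + nu / (2.0 * gamma))
```

Because $\nu$ and $\mu$ enter only through $u$ and the square-root term, reshaping the speech prior costs nothing extra at run time, which is what makes the adaptive variant in the next section practical.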
2.7. Adaptive Speech PDF Method
In [11], the shape parameters $\nu$ and $\mu$ of the speech spectral amplitude PDF were derived from a large amount of speech data in a single narrow SNR interval. However, in a practical situation, a speech signal includes both activity segments and pause segments. Since the speech spectral amplitude is always zero in the pause segments, its PDF there is expected to behave like a delta function, while in the activity segments the PDF of the speech spectral amplitude obeys other functions. Tsukamoto et al. noticed this and investigated an adaptive method that changes the PDF of the speech spectral amplitude according to the SNR [13]. They chose Lotter's PDF defined in (21) as the adaptive PDF, because its shape is easily controlled by $\nu$ and $\mu$. Figure 4 shows examples of Lotter's PDF with different shape parameters. We see from this figure that the PDF can fit the exponential distribution and the Rayleigh distribution by adjusting the shape parameters. Utilizing real speech histograms, Tsukamoto et al. derived adaptive shape parameters and showed their effectiveness through computer simulations [13]. This basic idea is useful for speech enhancement in practical situations. Unfortunately, the reliability of the derived adaptive shape parameters is comparatively low, because they were derived from only two speech histograms.
To refine Tsukamoto's adaptive shape parameters, Thanhikam et al. made and evaluated many real speech histograms in various narrow SNR intervals [16]. They fitted the speech histograms with (21) and revealed an interesting relation between the shape parameters and the narrow SNR. The shape parameters obtained as the fitting results and the derived curves are shown in Figures 5(a) and 5(b). The lines in the figures denote the curves obtained by the least-squares method; these curves express the shape parameters as functions of the narrow SNR. Table 1 shows the formulations of the derived shape parameter functions, where we denote the derived adaptive shape parameters by $\nu_a$ and $\mu_a$.

(a)
(b)
Thanhikam et al. used averaged values of the adaptive shape parameters to determine the present PDF shape of the speech spectral amplitude. Their “adaptive” MAP solution takes the same form as (22), with the fixed shape parameters replaced by the adaptive ones:
$$G_{\mathrm{A}} = u + \sqrt{u^2 + \frac{\nu_a}{2 \gamma}}, \qquad u = \frac{1}{2} - \frac{\mu_a}{4 \sqrt{\gamma \xi}}, \qquad (25)$$
where the averaging is controlled by a forgetting factor and $\nu_a$ and $\mu_a$ are the adaptive shape parameters. In [16], fixed settings of the forgetting factor and the related constants are recommended; this paper also uses those settings.
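The adaptive idea can be sketched as follows. The SNR smoothing with a forgetting factor follows the description above, but the mapping functions `nu_fn` and `mu_fn` are hypothetical placeholders standing in for the fitted curves of Table 1, which are not reproduced here; the constants are ours, not those of [16].

```python
import math

def adaptive_map_gain(xi, gamma, xi_bar_prev, eta=0.98,
                      nu_fn=lambda s: max(0.05, 0.3 / (1.0 + s)),
                      mu_fn=lambda s: 1.0 + 1.0 / (1.0 + s)):
    """Sketch of an adaptive-PDF MAP gain: smooth the a priori SNR with a
    forgetting factor, map the smoothed SNR to shape parameters through
    (hypothetical) curves, and apply the super-Gaussian MAP gain (22).
    Returns the gain and the updated smoothed SNR."""
    xi_bar = eta * xi_bar_prev + (1.0 - eta) * xi  # smoothed a priori SNR
    nu, mu = nu_fn(xi_bar), mu_fn(xi_bar)          # SNR-dependent PDF shape
    u = 0.5 - mu / (4.0 * math.sqrt(gamma * xi))
    return u + math.sqrt(u * u + nu / (2.0 * gamma)), xi_bar
```

The point of the sketch is the structure: the prior sharpens toward a delta-like shape in pauses (low smoothed SNR) and relaxes in speech activity, with the gain formula itself unchanged.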
In the next section, we compare the speech enhancement capabilities of the spectral gains presented in this paper.
3. Speech Enhancement Simulation
To compare the speech enhancement capabilities of the spectral gains derived in this paper, we first explain the common conditions for the speech enhancement simulations. After that, we show the simulation results and discuss them.
3.1. Common Conditions
The speech enhancement methods explained in this paper commonly require the noise variance $\sigma_d^2$, the a priori SNR $\xi$, and the a posteriori SNR $\gamma$. To obtain these parameters, the following estimation methods were used.
Firstly, the noise variance $\sigma_d^2$ was calculated by using the weighted noise estimator proposed in [19]. This method can update the estimated noise variance even while a speech signal exists. The weighted noise estimator calculates an instantaneous noise power by weighting the observed power $|Y|^2$ with the weight $W(\gamma)$ shown in Figure 6, which is a function of the a posteriori SNR $\gamma$ controlled by two constant values recommended in [19]. The noise variance is updated as
$$\sigma_d^2(\lambda, k) = \delta\, \sigma_d^2(\lambda - 1, k) + (1 - \delta)\, W(\gamma)\, |Y(\lambda, k)|^2,$$
where $\delta$ is a forgetting factor.
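A sketch of this weighted update in the spirit of [19]. The threshold and slope of the weight function are hypothetical stand-ins for the constants recommended there, since Figure 6 defines the actual shape:

```python
def update_noise_variance(sigma_d2, Y_amp2, delta=0.98,
                          gamma_th=2.0, slope=0.5):
    """Recursive noise-variance update: the new observation |Y|^2 is
    down-weighted when the instantaneous a posteriori SNR suggests that
    speech is present. gamma_th and slope are hypothetical constants."""
    gamma = Y_amp2 / sigma_d2 if sigma_d2 > 0.0 else float("inf")
    weight = 1.0 if gamma <= gamma_th else max(0.0, 1.0 - slope * (gamma - gamma_th))
    return delta * sigma_d2 + (1.0 - delta) * weight * Y_amp2
```

With a weight of 1 the estimator tracks stationary noise; with a weight near 0 it coasts through speech frames instead of absorbing speech energy into the noise estimate.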
Next, the a posteriori SNR was directly calculated as
$$\gamma(\lambda, k) = \frac{|Y(\lambda, k)|^2}{\sigma_d^2(\lambda, k)}.$$
Lastly, the a priori SNR was calculated by using the decision-directed method proposed in [4]:
$$\hat{\xi}(\lambda, k) = \alpha\, \frac{|\hat{S}(\lambda - 1, k)|^2}{\sigma_d^2(\lambda - 1, k)} + (1 - \alpha) \max\{ \gamma(\lambda, k) - 1,\; 0 \},$$
where $\alpha$ is a forgetting factor and $\alpha = 0.98$ was used according to [4].
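The decision-directed estimator is a one-line recursion; a sketch with the previous frame's enhanced power passed in explicitly:

```python
def decision_directed_xi(S_hat_prev_amp2, sigma_d2, gamma, alpha=0.98):
    """Decision-directed a priori SNR estimate:
    xi = alpha * |S_hat(prev)|^2 / sigma_d^2 + (1 - alpha) * max(gamma - 1, 0)."""
    return (alpha * S_hat_prev_amp2 / sigma_d2
            + (1.0 - alpha) * max(gamma - 1.0, 0.0))
```

The first term carries over the previous frame's decision; the second term is the maximum-likelihood estimate from the current frame, rectified at zero.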
The common speech enhancement system is shown in Figure 7, where the numbers denote the order of the estimation procedures. Of course, the spectral gain estimation depends on the employed speech enhancement method. In the simulations, the observed signal was a female speech signal corrupted with a practical tunnel noise at SNR = 0 dB, where the noise was recorded in an expressway tunnel in Japan. All the signals used in the simulations were sampled at 8 kHz, and the DFT size was 256 (the FFT was used instead of the DFT). For objective evaluation, we utilized the SNR defined as
$$\mathrm{SNR} = 10 \log_{10} \frac{\sum_{t} s^2(t)}{\sum_{t} \left( s(t) - \hat{s}(t) \right)^2}\ \text{[dB]},$$
where the sums run over all samples in the time domain. We also utilized the LR (Likelihood Ratio) measure [17], averaged over all frames. The LR denotes a spectral distance between the original speech and the estimated one; hence a perfect speech estimate gives LR = 0.
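The objective SNR used above is straightforward to compute; a sketch (the LR measure is omitted, as it additionally needs an LPC analysis of both signals):

```python
import math

def output_snr_db(s, s_hat):
    """Objective SNR of an enhanced signal against the clean reference:
    10 * log10( sum s^2 / sum (s - s_hat)^2 ), in dB."""
    num = sum(x * x for x in s)
    den = sum((x - y) ** 2 for x, y in zip(s, s_hat))
    return 10.0 * math.log10(num / den)
```

A perfect estimate drives the denominator to zero (SNR to infinity), while an output identical to the 0 dB noisy observation scores 0 dB.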
3.2. Simulation Results
Speech enhancement simulations were carried out to compare the presented speech enhancement methods. The chosen methods were the spectral subtraction method [3] and the Wiener filter [5] as traditional methods, Lotter's spectral gain [11] as a MAP method using a fixed speech PDF, and the adaptive speech PDF method [16] as a recent method.
Table 2 shows the results of the objective evaluation for each method, where both the best SNR and the best LR were obtained by the adaptive speech PDF method proposed by Thanhikam et al. [16]. We see from this table that the Wiener filter and Lotter's method also gave comparatively good SNR and LR results in comparison with the spectral subtraction method. The waveforms of the simulation results are shown in Figures 8(a)–8(e), and the respective spectrograms are shown in Figures 9(a)–9(e). From Figures 8(b) and 9(b), we see that the spectral subtraction method left a large amount of residual noise. The main reason may be that the spectral subtraction method does not use any speech spectral information. The residual noise is perceived as an annoying musical noise. From Figures 8(c) and 9(c), we see that the Wiener filter is superior to the spectral subtraction method for speech enhancement. The Wiener filter gave an estimated speech with less musical noise, although the amount of residual noise was comparatively large. From the waveform shown in Figure 8(d), we can confirm that Lotter's spectral gain method effectively reduces the noise in some segments. However, its spectrogram shown in Figure 9(d) reveals that Lotter's spectral gain method emphasized isolated spectra, that is, musical noises, which again causes a perception problem. In Figures 8(e) and 9(e), such estimation errors cannot be confirmed. This implies that the adaptive PDF method proposed by Thanhikam et al. is appropriate for reducing the noise in speech pause segments. However, in the speech activity segments, we can confirm that some speech spectral components also vanished. The output speech quality of the adaptive speech PDF method may be improved by adjusting the forgetting factor in the adaptive shape parameters of the speech PDF.
(a) Observed signal
(b) Spectral subtraction shown in (6)
(c) Wiener filter shown in (10)
(d) MAP estimation using Lotter’s PDF shown in (22)
(e) MAP estimation using adaptive PDF shown in (25)
(a) Observed signal
(b) Spectral subtraction shown in (6)
(c) Wiener filter shown in (10)
(d) MAP estimation using Lotter’s PDF shown in (22)
(e) MAP estimation using adaptive PDF shown in (25)
4. Conclusion
Single channel speech enhancement methods have been extensively studied over the decades. This paper has presented some spectral gain methods among the numerous existing studies. Of course, various noisy situations exist, and hence no single speech enhancement system can be declared the best for all of them. We have tried to explicitly describe the theoretical backgrounds of the chosen speech enhancement methods. The noise reduction capability of the speech enhancement methods was roughly compared for one noisy speech example, although the simulation results may slightly change when different noise and speech signals are used. From the obtained simulation results, we confirmed that the MAP estimation methods gave good noise reduction performance. In particular, the recently proposed adaptive speech PDF method reduced the noise signal strongly and hence did not produce musical noise in speech pause segments. In the speech activity segments, however, we perceived a small-level musical noise and a degradation of the speech. Such degradation tends to become larger as the noise increases. Future work in speech enhancement includes the development of an effective noise reduction method which gives good performance for noisy speech signals with SNR less than 0 dB.
References
[1] M. Muneyasu and A. Taguchi, Nonlinear Digital Signal Processing, Asakura Publishing, Tokyo, Japan, 1999.
[2] A. Kawamura, Y. Iiguni, and Y. Itoh, “A noise reduction method based on linear prediction with variable step-size,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E88-A, no. 4, pp. 855–861, 2005.
[3] S. F. Boll, “Suppression of acoustic noise in speech using spectral subtraction,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 27, no. 2, pp. 113–120, 1979.
[4] Y. Ephraim and D. Malah, “Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 32, no. 6, pp. 1109–1121, 1984.
[5] B. Widrow, J. R. Glover Jr., J. M. McCool et al., “Adaptive noise cancelling: principles and applications,” Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.
[6] P. J. Wolfe and S. J. Godsill, “Efficient alternatives to the Ephraim and Malah suppression rule for audio signal enhancement,” EURASIP Journal on Applied Signal Processing, vol. 2003, no. 10, pp. 1043–1051, 2003.
[7] R. J. McAulay and M. L. Malpass, “Speech enhancement using a soft-decision noise suppression filter,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, no. 2, pp. 137–145, 1980.
[8] B. Chen and P. C. Loizou, “Speech enhancement using a MMSE short time spectral amplitude estimator with Laplacian speech modeling,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), pp. I-1097–I-1100, March 2005.
[9] R. Martin, “Speech enhancement based on minimum mean-square error estimation and supergaussian priors,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 845–856, 2005.
[10] S. Gazor and W. Zhang, “Speech enhancement employing Laplacian-Gaussian mixture,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 896–904, 2005.
[11] T. Lotter and P. Vary, “Speech enhancement by MAP spectral amplitude estimation using a super-Gaussian speech model,” EURASIP Journal on Applied Signal Processing, vol. 2005, no. 7, pp. 1110–1126, 2005.
[12] I. Andrianakis and P. R. White, “Speech spectral amplitude estimators using optimally shaped Gamma and Chi priors,” Speech Communication, vol. 51, no. 1, pp. 1–14, 2009.
[13] Y. Tsukamoto, A. Kawamura, and Y. Iiguni, “Speech enhancement based on MAP estimation using a variable speech distribution,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E90-A, no. 8, pp. 1587–1593, 2007.
[14] A. Kawamura, W. Thanhikam, and Y. Iiguni, “A speech spectral estimator using adaptive speech probability density function,” in Proceedings of EUSIPCO 2010, pp. 1549–1552, August 2010.
[15] W. Thanhikam, A. Kawamura, and Y. Iiguni, “Speech enhancement using speech model parameters refined by two-step technique,” in Proceedings of the 2nd APSIPA Annual Summit and Conference, p. 11, December 2010.
[16] W. Thanhikam, A. Kawamura, and Y. Iiguni, “Speech enhancement based on real-speech PDF in various narrow SNR intervals,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E95-A, no. 3, pp. 623–630, 2012.
[17] S. Furui, Digital Speech Processing, Tokai University Press, Tokyo, Japan, 1985.
[18] S. L. Miller and D. G. Childers, Probability and Random Processes, Elsevier/Academic Press, 2004.
[19] M. Kato, A. Sugiyama, and M. Serizawa, “Noise suppression with high speech quality based on weighted noise estimation and MMSE STSA,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E85-A, no. 7, pp. 1710–1718, 2002.
Copyright
Copyright © 2012 Arata Kawamura et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.