ISRN Signal Processing
The latest articles from Hindawi Publishing Corporation. © 2014, Hindawi Publishing Corporation. All rights reserved.

Radar Coincidence Imaging under Grid Mismatch Tue, 22 Apr 2014 00:00:00 +0000
Radar coincidence imaging is an instantaneous imaging technique which does not depend on the relative motion between targets and radars. High-resolution, fine-quality images can be obtained using a single pulse, either for stationary targets or for complexly maneuvering ones. There are two image-reconstruction algorithms used for radar coincidence imaging, namely, the correlation method and the parameterized method. In comparison with the former, the parameterized method can achieve much higher resolution but is highly sensitive to grid mismatch. In the presence of grid mismatch, neither of the two algorithms can obtain recognizable high-resolution images. This problem largely limits the applicability of radar coincidence imaging in practical imaging scenes, where grid mismatch generally exists. This paper proposes a joint correlation-parameterization algorithm, which uses the correlation method to estimate the grid-mismatch error and then iteratively modifies the results of the parameterized method. The proposed algorithm can achieve high resolution with fine image quality under grid mismatch. Examples are provided to illustrate the improvement of the proposed method. Dongze Li, Xiang Li, Yongqiang Cheng, Yuliang Qin, and Hongqiang Wang Copyright © 2014 Dongze Li et al. All rights reserved.

Statistically Matched Wavelet Based Texture Synthesis in a Compressive Sensing Framework Mon, 17 Feb 2014 11:47:19 +0000
This paper proposes a statistically matched wavelet based textured image coding scheme for efficient representation of texture data in a compressive sensing (CS) framework.
Statistically matched wavelet based data representation causes most of the captured energy to be concentrated in the approximation subspace, while very little information remains in the detail subspace. We encode not the full-resolution statistically matched wavelet subband coefficients but only the approximation subband coefficients (LL), using a standard image compression scheme such as JPEG2000. The detail subband coefficients, that is, HL, LH, and HH, are jointly encoded in a compressive sensing framework. Compressive sensing theory has shown that it is possible to achieve a sampling rate lower than the Nyquist rate with acceptable reconstruction quality. The experimental results demonstrate that the proposed scheme can provide better PSNR and MOS at a similar compression ratio than conventional DWT-based image compression schemes in a CS framework and other wavelet based texture synthesis schemes such as HMT-3S. Mithilesh Kumar Jha, Brejesh Lall, and Sumantra Dutta Roy Copyright © 2014 Mithilesh Kumar Jha et al. All rights reserved.

Weighted Least Squares Based Detail Enhanced Exposure Fusion Mon, 17 Feb 2014 11:29:45 +0000
Many recent computational photography techniques help overcome the limited ability of standard digital cameras to handle the wide dynamic range of real-world scenes, which contain both brightly and poorly illuminated areas. In many of these techniques, it is desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. In this paper, we propose a novel technique for exposure fusion in which a Weighted Least Squares (WLS) optimization framework is utilized for weight map refinement. Computationally simple texture features (i.e., a detail layer extracted with the help of an edge-preserving filter) and a color saturation measure are used to quickly generate weight maps that control the contribution from an input set of multiexposure images.
Instead of employing intermediate High Dynamic Range (HDR) reconstruction and tone mapping steps, a well-exposed fused image is generated for display on conventional display devices. A further advantage of the present technique is that it is well suited for multifocus image fusion. Simulation results are compared with a number of existing single-resolution and multiresolution techniques to show the benefits of the proposed scheme for a variety of cases. Harbinder Singh, Vinay Kumar, and Sunil Bhooshan Copyright © 2014 Harbinder Singh et al. All rights reserved.

A Spectrum Sensing Scheme for Partially Polarized Waves over α-μ Generalized Gamma Fading Channels Sun, 09 Feb 2014 15:35:46 +0000
Schemes for spectrum-hole sensing in cognitive radio are developed based on the estimation of the Stokes parameters of monochromatic and quasimonochromatic polarized electromagnetic waves. Statistical information that includes the variations of the polarization state in both the presence and absence of the Primary User (PU) is accounted for. A detector based on the fluctuation of the Stokes parameters is analyzed, and its performance is compared with that of energy detectors, which use only scalar amplitude information to sense the PU signal. Cooperative spectrum sensing based on polarization with noisy reporting channels is also investigated. A cluster technique is proposed to reduce the bit error probability due to channel impairment. A closed-form expression for polarization detection is derived using the α-μ generalized fading model, which directly yields expressions for the special cases of the Nakagami-m and Weibull models as well as their derivatives. These expressions are verified using simulation. The results show that polarization spectrum sensing gives superior performance over the conventional energy detection method for a wide range of SNR. Mohamed A. Hankal, Islam A. Eshrah, and Hazim Tawfik Copyright © 2014 Mohamed A. Hankal et al.
All rights reserved.

Complex Cepstrum Based Voice Conversion Using Radial Basis Function Thu, 06 Feb 2014 17:02:24 +0000
The complex cepstrum vocoder is used to modify the speaker-specific characteristics of the source speaker's speech to those of the target speaker's speech. Low-time and high-time liftering are used to split the calculated cepstrum into the vocal tract and source excitation parameters. The obtained mixed-phase vocal tract and source excitation parameters with finite impulse response preserve the phase properties of the resynthesized speech frame. The radial basis function is explored to capture the nonlinear mapping function for modifying the complex cepstrum based real and imaginary components of the vocal tract and source excitation of the speech signal. The state-of-the-art Mel cepstrum envelope and the fundamental frequency (F0) are considered to represent the vocal tract and the source excitation of the speech frame, respectively. A radial basis function is used to capture and formulate the nonlinear relations between the Mel cepstrum envelopes of the source and target speakers. A mean and standard deviation approach is employed to modify the fundamental frequency (F0). The Mel log spectral approximation filter is used to reconstruct the speech signal from the modified Mel cepstrum envelope and fundamental frequency. The proposed complex cepstrum based model is compared with the state-of-the-art Mel cepstrum envelope based voice conversion model using objective and subjective evaluations. The evaluation measures reveal that the proposed complex cepstrum based voice conversion system approximates the converted speech signal with better accuracy than the Mel cepstrum envelope based model. Jagannath Nirmal, Suprava Patnaik, Mukesh Zaveri, and Pramod Kachare Copyright © 2014 Jagannath Nirmal et al. All rights reserved.
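The low-time/high-time liftering split described in the abstract above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the synthetic frame, the cutoff quefrency, and the function names are all assumptions made here for demonstration.

```python
import numpy as np

def complex_cepstrum(frame):
    """Complex cepstrum via the FFT: ifft(log|X| + j * unwrapped phase)."""
    spectrum = np.fft.fft(frame)
    log_spectrum = np.log(np.abs(spectrum)) + 1j * np.unwrap(np.angle(spectrum))
    return np.fft.ifft(log_spectrum)

def lifter_split(ceps, cutoff):
    """Low-time lifter keeps quefrencies below `cutoff` (vocal tract part);
    the high-time remainder stands in for the source excitation part."""
    n = len(ceps)
    low = np.zeros(n, dtype=complex)
    low[:cutoff] = ceps[:cutoff]
    low[n - cutoff + 1:] = ceps[n - cutoff + 1:]  # keep the symmetric tail
    high = ceps - low
    return low, high

# Toy frame: a damped oscillation with an offset, standing in for a voiced frame
t = np.arange(256)
frame = np.exp(-t / 80.0) * np.sin(2 * np.pi * t / 16.0) + 1.1
ceps = complex_cepstrum(frame)
vocal_tract, excitation = lifter_split(ceps, cutoff=20)
```

By construction the two lifters partition the cepstrum, so the vocal tract and excitation parts sum back to the full complex cepstrum.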
Weather Forecasting Using Sliding Window Algorithm Tue, 10 Dec 2013 16:06:36 +0000
To predict future weather conditions, the variation in conditions over past years must be utilized. The probability that the weather conditions of the day in question will match those of the same day in the previous year is very low, but the probability of a match within the adjacent fortnight of the previous year is much higher. Therefore, a sliding window of one week is moved across the corresponding fortnight of the previous year, and each window position is matched against the current year's week under consideration. The best-matching window is then used to predict the weather conditions. Month-wise results were computed over three years to check the accuracy. The results suggest that the method is quite efficient, with an average accuracy of 92.2%. Piyush Kapoor and Sarabjeet Singh Bedi Copyright © 2013 Piyush Kapoor and Sarabjeet Singh Bedi. All rights reserved.

Recovery of Missing Samples with Sparse Approximations Mon, 07 Oct 2013 15:10:51 +0000
In most missing-samples problems, the signals are assumed to be bandlimited; that is, the signals are assumed to be sparsely approximated by a known subset of the discrete Fourier transform basis vectors. We discuss the recovery of missing samples when the signals can be sparsely approximated by an unknown subset of certain unitary basis vectors. We propose the use of orthogonal matching pursuit to recover missing samples by sparse approximations. Benjamin G. Salomon Copyright © 2013 Benjamin G. Salomon. All rights reserved.
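The orthogonal matching pursuit recovery described in the abstract above can be sketched as follows. The choice of an orthonormal DCT basis, the sparsity level, and the missing-sample pattern are illustrative assumptions, not the paper's exact setup: the observed rows of the unitary basis matrix form the dictionary, OMP estimates the sparse coefficients, and the full basis then fills in the missing samples.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the atom of A most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

# Orthonormal DCT-II basis (one possible "certain unitary basis")
N = 64
k, n = np.meshgrid(np.arange(N), np.arange(N))
B = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
B[:, 0] *= np.sqrt(1.0 / N)
B[:, 1:] *= np.sqrt(2.0 / N)

# Sparse signal: 3 active basis vectors; 20 of 64 samples are missing
s_true = np.zeros(N)
s_true[[3, 17, 40]] = [1.0, -0.7, 0.4]
x = B @ s_true
observed = np.sort(np.random.default_rng(0).permutation(N)[:44])

s_hat = omp(B[observed, :], x[observed], n_nonzero=3)
x_hat = B @ s_hat  # missing samples recovered from the sparse estimate
```

With enough observed samples relative to the sparsity level, the greedy selection finds the correct support and the least-squares re-fit makes the recovery essentially exact.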
Keystroke Dynamics User Authentication Based on Gaussian Mixture Model and Deep Belief Nets Mon, 07 Oct 2013 09:17:32 +0000
User authentication using keystroke dynamics offers many advantages in the domain of cyber security, including no extra hardware cost, continuous monitoring, and nonintrusiveness. Many algorithms have been proposed in the literature. Here, we introduce two new algorithms to the domain: the Gaussian mixture model with the universal background model (GMM-UBM) and deep belief nets (DBN). Unlike most existing approaches, which only use genuine users' data at training time, these two generative model-based approaches leverage data from background users to enhance the model's discriminative capability without seeing the impostor's data at training time. These two new algorithms make no assumption about the underlying probability distribution and are fast for training and testing. They can also be extended to free-text use cases. Evaluations on the CMU keystroke dynamics benchmark dataset show over 58% reduction in the equal error rate over the best published approaches. Yunbin Deng and Yu Zhong Copyright © 2013 Yunbin Deng and Yu Zhong. All rights reserved.

A Novel Neuron in Kernel Domain Wed, 18 Sep 2013 17:22:29 +0000
A kernel-based neural network (KNN) is proposed as a neuron applicable to online learning with adaptive parameters. This neuron, with an adaptive kernel parameter, can classify data accurately without resorting to a multilayer error-backpropagation neural network. The proposed method, whose core is the kernel least-mean-square algorithm, reduces memory requirements through a sparsification technique, and the kernel spread adapts automatically. Our experiments reveal that this method is much faster and more accurate than previous online learning algorithms. Zahra Khandan and Hadi Sadoghi Yazdi Copyright © 2013 Zahra Khandan and Hadi Sadoghi Yazdi. All rights reserved.
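A minimal sketch of the kernel least-mean-square core mentioned in the abstract above: the prediction is a kernel expansion over stored centres, and each new sample adds a centre weighted by the learning rate times the instantaneous error. The sparsification step the authors use to bound memory is omitted here for brevity (every sample is stored as a centre), and the target function, kernel width, and step size are assumptions chosen for illustration.

```python
import numpy as np

def gauss_kernel(a, b, width=1.0):
    return np.exp(-(a - b) ** 2 / (2.0 * width ** 2))

class KLMS:
    """Kernel least-mean-square: f(x) = sum_i alpha_i * k(c_i, x), with a new
    centre c = x and coefficient alpha = eta * error added per sample."""
    def __init__(self, eta=0.3, width=0.7):
        self.eta, self.width = eta, width
        self.centres, self.alphas = [], []

    def predict(self, x):
        if not self.centres:
            return 0.0
        c = np.array(self.centres)
        return float(np.dot(self.alphas, gauss_kernel(c, x, self.width)))

    def update(self, x, d):
        e = d - self.predict(x)        # instantaneous error
        self.centres.append(x)         # grow the dictionary (no sparsification)
        self.alphas.append(self.eta * e)
        return e

# Online learning of a nonlinear map, y = sin(x)
rng = np.random.default_rng(1)
model = KLMS()
errors = [abs(model.update(x, np.sin(x))) for x in rng.uniform(-3, 3, 400)]
```

As training proceeds, the prediction error on new samples shrinks, which is the sense in which the neuron learns the nonlinear mapping online.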
Design of Adjustable Square-Shaped 2D IIR Filters Wed, 11 Sep 2013 11:34:50 +0000
This paper proposes an analytical design method for two-dimensional square-shaped IIR filters. The designed 2D filters are adjustable, since their bandwidth and orientation are specified by parameters appearing explicitly in the filter matrices. The design relies on a zero-phase low-pass 1D prototype filter. A frequency transformation is then applied to this filter, which yields a 2D filter with the desired square shape in the frequency plane. The proposed method combines the analytical approach with numerical approximations. Since the prototype transfer function is factorized into partial functions, the 2D filter is likewise described by a factorized transfer function, which is an advantage in implementation. Radu Matei Copyright © 2013 Radu Matei. All rights reserved.

Two-Channel Quadrature Mirror Filter Bank: An Overview Tue, 03 Sep 2013 15:34:59 +0000
During the last two decades, there has been substantial progress in multirate digital filters and filter banks, including the design of quadrature mirror filters (QMF). The two-channel QMF bank is extensively used in many signal processing fields, such as subband coding of speech signals, image processing, antenna systems, design of wavelet bases, biomedical engineering, and the digital audio industry. Therefore, new efficient design techniques continue to be proposed by several authors in this area. This paper presents an overview of analysis and design techniques for the two-channel QMF bank. Applications in the area of subband coding and future research trends are also discussed. S. K. Agrawal and O. P. Sahu Copyright © 2013 S. K. Agrawal and O. P. Sahu. All rights reserved.

Studies on Z-Window Based FIR Filters Sun, 01 Sep 2013 14:28:05 +0000
In the classification of window functions, Z-windows fall into the category of steerable side-lobe dip (SSLD) windows.
In this work, the application of these windows to the design of FIR filters with improved filter parameters has been explored. The number of dips and their respective positions in the side-lobe region are used jointly to tailor the window shape. Filter design relationships have been established and are included in this paper. In addition, an application of these Z-window based FIR filters to the design of a two-channel quadrature mirror filter (QMF) bank has been presented. Lower reconstruction and aliasing errors have been achieved in contrast to the Kaiser window based QMF bank. Rahul Pachauri, Rajiv Saxena, and Sanjeev N. Sharma Copyright © 2013 Rahul Pachauri et al. All rights reserved.

A Novel Optimized Golomb-Rice Technique for the Reconstruction in Lossless Compression of Digital Images Wed, 07 Aug 2013 16:26:43 +0000
The research trends available in the area of image compression are not adequate for some imaging applications that require good visual quality in processing. In general, the tradeoff between compression efficiency and picture quality is the most important criterion for validating such work. Existing algorithms for still image compression were developed with compression efficiency as the main consideration, giving least importance to visual quality. Hence, we propose a novel lossless image compression algorithm based on Golomb-Rice coding that is well suited to various types of digital images. In this work, we specifically address the problem of maintaining the compression ratio while achieving better visual quality in the reconstruction and a considerable gain in peak signal-to-noise ratio (PSNR). We considered medical images, satellite-extracted images, and natural images for inspection and propose a novel technique to increase the visual quality of the reconstructed image. Shaik. Mahaboob Basha and B. C.
Jinaga Copyright © 2013 Shaik. Mahaboob Basha and B. C. Jinaga. All rights reserved.

Single Channel Speech Enhancement Using Adaptive Soft-Thresholding with Bivariate EMD Wed, 31 Jul 2013 13:26:27 +0000
This paper presents a novel data-adaptive thresholding approach to single-channel speech enhancement. The noisy speech signal and fractional Gaussian noise (fGn) are combined to produce a complex signal. The fGn is generated using the noise variance roughly estimated from the noisy speech signal. Bivariate empirical mode decomposition (bEMD) is employed to decompose the complex signal into a finite number of complex-valued intrinsic mode functions (IMFs). The real and imaginary parts of the IMFs represent the IMFs of the observed speech and the fGn, respectively. Each IMF is divided into short time frames for local processing. The variance of the fGn IMF calculated within a frame is used as a reference term to classify the corresponding noisy speech frame as noise dominant or signal dominant. Only the noise-dominant frames are soft-thresholded to reduce the noise effects. Then, all the frames as well as the IMFs of speech are combined, yielding the enhanced speech signal. The experimental results show the improved performance of the proposed algorithm compared to recently reported methods. Md. Ekramul Hamid, Md. Khademul Islam Molla, Xin Dang, and Takayoshi Nakai Copyright © 2013 Md. Ekramul Hamid et al. All rights reserved.

Dynamically Measuring Statistical Dependencies in Multivariate Financial Time Series Using Independent Component Analysis Sun, 02 Jun 2013 08:55:20 +0000
We present a computationally tractable approach to dynamically measuring statistical dependencies in multivariate non-Gaussian signals.
The approach makes use of extensions of independent component analysis to calculate information coupling, as a proxy measure for mutual information, between multiple signals, and can be used to estimate the uncertainty associated with the information coupling measure in a straightforward way. We empirically validate the relative accuracy of the information coupling measure using a set of synthetic data examples and showcase its practical utility in analysing multivariate financial time series. Nauman Shah and Stephen J. Roberts Copyright © 2013 Nauman Shah and Stephen J. Roberts. All rights reserved.

Anisotropic Diffusion for Details Enhancement in Multiexposure Image Fusion Sun, 19 May 2013 14:25:21 +0000
We develop a multiexposure image fusion method based on texture features, which exploits the edge-preserving and intraregion smoothing properties of nonlinear diffusion filters based on partial differential equations (PDE). With the captured multiexposure image series, we first decompose images into base layers and detail layers to extract sharp details and fine details, respectively. The magnitude of the gradient of the image intensity is utilized to encourage smoothness at homogeneous regions in preference to inhomogeneous regions. Texture features of the base layer are then used to generate a decision mask that guides the fusion of the base layers in multiresolution fashion. Finally, a well-exposed fused image is obtained that combines the fused base layer and the detail layers at each scale across all the input exposures. The proposed algorithm skips the complex High Dynamic Range Image (HDRI) generation and tone mapping steps to produce a detail-preserving image for display on standard dynamic range display devices. Moreover, our technique is effective for blending flash/no-flash image pairs and multifocus images, that is, images focused on different targets.
Harbinder Singh, Vinay Kumar, and Sunil Bhooshan Copyright © 2013 Harbinder Singh et al. All rights reserved.

Adaptive Selection Combining Receiver over Time Varying Frequency Selective Fading Channel in Class-A Noise Mon, 13 May 2013 14:23:27 +0000
An adaptive selection combining (SC) scheme is proposed for a time-varying mobile communication channel in Class-A impulsive noise. The receiver adaptively selects one diversity branch out of the available branches and discards the others. This is performed by computing the maximum likelihood (ML) metric of each diversity branch and selecting the branch with the maximum metric. The proposed adaptive SC scheme dynamically adjusts the threshold value according to the time variations of the channel. Equalization and data detection are performed after combining, using maximum likelihood sequence estimation implemented by the Viterbi algorithm (MLSE-VA). The minimum survivor technique is employed to reduce the complexity of the receiver. Ahmed El-Sayed El-Mahdy Copyright © 2013 Ahmed El-Sayed El-Mahdy. All rights reserved.

NGFICA Based Digitization of Historic Inscription Images Wed, 08 May 2013 15:26:11 +0000
This paper addresses problems encountered during the digitization and preservation of inscriptions, such as perspective distortion and minimal distinction between foreground and background. In general, inscriptions possess neither a standard size and shape nor a colour difference between foreground and background. Hence, existing methods such as variance-based extraction and FastICA-based analysis fail to extract text from these inscription images. Natural gradient flexible ICA (NGFICA) is a suitable method for separating signals from a mixture of highly correlated signals, as it minimizes the dependency among the signals by considering the slope of the signal at each point. We propose an NGFICA based enhancement of inscription images.
The proposed method improves the word and character recognition accuracies of the OCR system by 65.3% (from 10.1% to 75.4%) and 54.3% (from 32.4% to 86.7%), respectively. Indu Sreedevi, Rishi Pandey, N. Jayanthi, Geetanjali Bhola, and Santanu Chaudhury Copyright © 2013 Indu Sreedevi et al. All rights reserved.

High-Resolution Direction-of-Arrival Estimation via Concentric Circular Arrays Thu, 28 Mar 2013 08:27:55 +0000
Estimating the direction of arrival (DOA) of source signals is an important research interest in application areas including radar, sonar, and wireless communications. In this paper, the problem of DOA estimation is addressed in detail for concentric circular antenna arrays (CCA) as an alternative to the well-known geometries of the uniform linear array (ULA) and uniform circular array (UCA). We define the steering matrix of the CCA geometry and investigate the performance of the array in the DOA-estimation problem through simulations using the MUSIC (Multiple Signal Classification) algorithm, varying the signal-to-noise ratio, the number of sensors, and the resolution angle of the sensor arrays. The results show that CCA geometries provide higher angular resolution than UCA geometries and require less physical area for the same number of sensor elements. However, as a cost-increasing effect, higher computational power is needed to estimate the DOA of source signals with CCAs than with ULAs. Serdar Ozgur Ata and Cevdet Isik Copyright © 2013 Serdar Ozgur Ata and Cevdet Isik. All rights reserved.

About a Partial Differential Equation-Based Interpolator for Signal Envelope Computing: Existence Results and Applications Thu, 07 Mar 2013 15:14:47 +0000
This paper models and solves the mathematical problem of interpolating characteristic points of signals by a partial differential equation (PDE) based approach.
The existence and uniqueness results are established in an appropriate space whose regularity is similar to that of cubic splines. We show how this space is suitable for the empirical mode decomposition (EMD) sifting process. Numerical schemes and computing applications are also presented for signal envelope calculation. The test results show the usefulness of the new PDE interpolator in some pathological cases, such as input functions that are not as regular as those handled by cubic splines. Image filtering tests further strengthen the demonstration of the PDE interpolator's performance. Oumar Niang, Abdoulaye Thioune, Éric Deléchelle, Mary Teuw Niane, and Jacques Lemoine Copyright © 2013 Oumar Niang et al. All rights reserved.

Instantaneous Granger Causality with the Hilbert-Huang Transform Wed, 20 Feb 2013 09:56:31 +0000
Current measures of causality and temporal precedence have limited frequency and time resolution and therefore may not be viable for detecting short periods of causality in specific frequencies. In addition, the presence of nonstationarities hinders the causality estimation of current techniques, as they are based on Fourier transforms or autoregressive model estimation. In this work we present a combination of techniques to measure causality and temporal precedence between stationary and nonstationary time series that is sensitive to frequency-specific short episodes of causality. This methodology provides a highly informative time-frequency representation of causality with existing causality measures. This is done by decomposing each time series into intrinsic oscillatory modes with an empirical mode decomposition algorithm and, subsequently, calculating their complex Hilbert spectrum. At each time point the cross-spectrum is calculated between the time series and used to measure coherency and to compute the transfer function and error covariance matrices using the Wilson-Burg method for spectral factorization.
The imaginary part of coherency can then be computed, as well as several Granger causality measures from the previous matrices. This work covers the most important theoretical background of these techniques and seeks to demonstrate the usefulness of this new approach while pointing out some of its qualities and drawbacks. João Rodrigues and Alexandre Andrade Copyright © 2013 João Rodrigues and Alexandre Andrade. All rights reserved.

A Review of Subspace Segmentation: Problem, Nonlinear Approximations, and Applications to Motion Segmentation Wed, 13 Feb 2013 15:03:02 +0000
The subspace segmentation problem is fundamental in many applications. The goal is to cluster data drawn from an unknown union of subspaces. In this paper we state the problem and describe its connection to other areas of mathematics and engineering. We then review the mathematical and algorithmic methods created to solve this problem and some of its particular cases. We also describe the problem of motion tracking in videos and its connection to the subspace segmentation problem and compare the various techniques for solving it. Akram Aldroubi Copyright © 2013 Akram Aldroubi. All rights reserved.

Extraction of Correlated Sparse Sources from Signal Mixtures Wed, 13 Feb 2013 14:03:47 +0000
A blind source separation method is described for extracting sources from data mixtures where the underlying sources are sparse and correlated. The approach used is to detect and analyze segments of time where one source exists on its own. The method does not assume independence of sources, and probability density functions are not assumed for any of the sources. A comparison is made between the proposed method and the Fast-ICA and Clusterwise PCA methods. It is shown that the proposed method works best for cases where the underlying sources are strongly correlated, because Fast-ICA assumes zero correlation between sources and Clusterwise PCA can be sensitive to overlap between sources.
However, for cases of sources that are sparse and weakly correlated with each other, Fast-ICA and Clusterwise PCA tend to perform better than the proposed method, the reason being that these methods appear to be more robust to changes in the algorithms' input parameters. In addition, because of the deflationary nature of the proposed method, its estimates tend to be more affected by noise than Fast-ICA's as the number of sources increases. The paper concludes with a discussion of potential applications for the proposed method. M. S. Woolfson, C. Bigan, J. A. Crowe, and B. R. Hayes-Gill Copyright © 2013 M. S. Woolfson et al. All rights reserved.

Seven Challenges in Image Quality Assessment: Past, Present, and Future Research Wed, 06 Feb 2013 16:17:13 +0000
Image quality assessment (IQA) has been a topic of intense research over the last several decades. With each year comes an increasing number of new IQA algorithms, extensions of existing IQA algorithms, and applications of IQA to other disciplines. In this article, I first provide an up-to-date review of research in IQA, and then I highlight several open challenges in this field. The first half of this article discusses key properties of visual perception, image quality databases, and existing full-reference, no-reference, and reduced-reference IQA algorithms. Yet, despite the remarkable progress that has been made in IQA, many fundamental challenges remain largely unsolved. The second half of this article highlights some of these challenges. I specifically discuss challenges related to the lack of complete perceptual models for natural images, for compound and suprathreshold distortions, and for multiple distortions and their interactive effects on images. I also discuss challenges related to IQA of images containing nontraditional distortions, and I discuss challenges related to computational efficiency.
The goal of this article is not only to help practitioners and researchers keep abreast of the recent advances in IQA, but also to raise awareness of the key limitations of current IQA knowledge. Damon M. Chandler Copyright © 2013 Damon M. Chandler. All rights reserved.

Spatial Resolution Analysis for Few-Views Discrete Tomography Based on MART-AP Algorithm Wed, 23 Jan 2013 10:52:53 +0000
We study a new MART-AP algorithm for few-views discrete tomography. Its efficiency for high-frequency structure reproduction is investigated in a numerical experiment in which we reconstruct a 2D model to estimate the spatial resolution limit. We estimate the modulation transfer function of the reconstruction algorithm and compare it with the modulation transfer function of projection distortions. Our results show that MART-AP has little influence on the contrast of the spatial structures being reproduced and can be used for high-resolution reconstruction when only a few projections are registered. Alexander B. Konovalov and Vitaly V. Vlasov Copyright © 2013 Alexander B. Konovalov and Vitaly V. Vlasov. All rights reserved.

An Overview on Image Forensics Thu, 10 Jan 2013 09:39:27 +0000
The aim of this survey is to provide a comprehensive overview of the state of the art in the area of image forensics. These techniques have been designed to identify the source of a digital image or to determine whether the content is authentic or modified, without any prior information about the image under analysis (and thus they are defined as passive). All these tools work by detecting the presence, the absence, or the incongruence of some traces intrinsically tied to the digital image by the acquisition device and by any other operation after its creation. The paper is organized by classifying the tools according to the point in the digital image's history at which the relevant footprint is left: acquisition-based methods, coding-based methods, and editing-based schemes.
Alessandro Piva Copyright © 2013 Alessandro Piva. All rights reserved.

Direct Recovery of Clean Speech Using a Hybrid Noise Suppression Algorithm for Robust Speech Recognition System Wed, 26 Dec 2012 10:45:32 +0000
A new log-power domain feature enhancement algorithm named NLPS is developed. It consists of two parts: direct solution of a nonlinear system model, and log-power subtraction. In contrast to other methods, the proposed algorithm does not need a prior speech/noise statistical model. Instead, it works by direct solution of the nonlinear function derived from the speech recognition system. Separate steps are utilized to refine the accuracy of the estimated cepstrum by log-power subtraction, which is the second part of the proposed algorithm. The proposed algorithm solves the speech probability distribution function (PDF) discontinuity problem caused by traditional spectral subtraction algorithms. The effectiveness of the proposed filter is extensively evaluated on the standard AURORA2 database. The results show that significant improvement can be achieved by incorporating the proposed algorithm. The proposed algorithm reaches a recognition rate of over 86% for noisy speech (averaged from SNR 0 dB to 20 dB), which represents a 48% error reduction over the baseline Mel-frequency Cepstral Coefficient (MFCC) system. Peng Dai, Ing Yann Soon, and Rui Tao Copyright © 2012 Peng Dai et al. All rights reserved.

DCT Watermarking Approach for Security Enhancement of Multimodal System Tue, 18 Dec 2012 11:12:02 +0000
We present a novel watermarking algorithm that supports the capacity demanded by multimodal biometric templates. The proposed technique embeds the watermark in the low-frequency AC coefficients of selected 8 × 8 DCT blocks. Block selection achieves perceptual transparency by exploiting the masking effects of the human visual system (HVS). Embedding is done by modulating the coefficient magnitude as a function of its estimated value.
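The idea of hiding watermark bits in low-frequency AC coefficients of 8 × 8 DCT blocks can be sketched as follows. Note that this toy uses simple parity quantization of a single coefficient rather than the authors' HVS-guided block selection and neighborhood-based magnitude modulation; the coefficient position, quantization step, and function names are assumptions made for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: block spectrum X = C @ block @ C.T."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

C = dct_matrix()

def embed_bit(block, bit, coeff=(0, 1), step=12.0):
    """Quantize one low-frequency AC coefficient to an even/odd multiple
    of `step` to encode a single watermark bit."""
    X = C @ block @ C.T
    q = np.round(X[coeff] / step)
    if int(q) % 2 != bit:
        q += 1                      # flip parity to match the bit
    X[coeff] = q * step
    return C.T @ X @ C              # back to the pixel domain

def extract_bit(block, coeff=(0, 1), step=12.0):
    X = C @ block @ C.T
    return int(np.round(X[coeff] / step)) % 2

rng = np.random.default_rng(2)
block = rng.uniform(0, 255, (8, 8))
marked = embed_bit(block, 1)
```

Because the DCT matrix is orthonormal, the round trip through the pixel domain preserves the quantized coefficient, so the bit survives and can be read back from the marked block.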
Neighborhood estimation uses the weighted DC coefficients from the eight neighboring DCT blocks, with the weights calculated from a local intrinsic property of the image. For our experimentation we have used iris and fingerprint templates, which are watermarked into standard test images. The robustness of the proposed algorithm is compared with several state-of-the-art methods when the watermarked image is subjected to common channel attacks. Mita Paunwala and S. Patnaik Copyright © 2012 Mita Paunwala and S. Patnaik. All rights reserved.

A Probabilistic Approach to Computerized Tracking of Arterial Walls in Ultrasound Image Sequences Mon, 17 Dec 2012 13:59:37 +0000
Tracking of arterial walls in ultrasound image sequences is useful for studying the dynamics of arteries. Manual delineation is prohibitively labour intensive, and existing methods of computerized segmentation are limited in terms of applicability and availability. This paper presents a probabilistic approach to the computerized tracking of arterial walls that is effective and easy to implement. In the probabilistic approach, given a point B with a probability of being in an arterial lumen of interest, the probability that a neighbouring point A is also part of the same lumen falls off in a Gaussian manner with increasing grayscale contrast between the two points. The efficacy of the probabilistic algorithm was evaluated by testing it on ultrasound images and image sequences of the carotid arteries and the abdominal aorta and on various laboratory ultrasound test objects. The results showed that the probabilistic algorithm produced robust and effective lumen segmentation in the majority of cases encountered.
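The Gaussian neighbour rule described in the abstract above can be sketched as a seeded, probability-propagating region growing: each neighbour inherits the current point's membership probability scaled by a Gaussian factor that decays with grayscale contrast. The propagation scheme, the contrast scale sigma, and the acceptance threshold are assumptions made here for illustration, not the authors' exact implementation.

```python
import numpy as np
from collections import deque

def track_lumen(image, seed, sigma=10.0, threshold=0.1):
    """Probabilistic region growing: P(neighbour in lumen) = P(current point
    in lumen) * exp(-contrast^2 / (2 sigma^2))."""
    h, w = image.shape
    prob = np.zeros((h, w))
    prob[seed] = 1.0
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                contrast = float(image[ny, nx]) - float(image[y, x])
                p = prob[y, x] * np.exp(-contrast ** 2 / (2 * sigma ** 2))
                if p > threshold and p > prob[ny, nx]:
                    prob[ny, nx] = p
                    queue.append((ny, nx))
    return prob > threshold

# Synthetic frame: a dark 16x16 lumen (value 20) inside bright tissue (200)
image = np.full((32, 32), 200.0)
image[8:24, 8:24] = 20.0
mask = track_lumen(image, seed=(16, 16))
```

Within the homogeneous lumen the contrast is zero, so the probability propagates undiminished; at the high-contrast wall the Gaussian factor collapses and growth stops, which is the mechanism behind the speckle immunity noted in the abstract.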
Comparison with a conventional region growing technique, based on intensity thresholding with a running regional intensity average, identified the main benefits of the probabilistic approach as increased immunity to speckle noise within the arterial lumen and reduced susceptibility to region overflow at boundary imperfections. Baris Kanber and Kumar Vids Ramnarine Copyright © 2012 Baris Kanber and Kumar Vids Ramnarine. All rights reserved.

Spectral Intrinsic Decomposition Method for Adaptive Signal Representation Thu, 13 Dec 2012 16:03:28 +0000
We propose a new method, called spectral intrinsic decomposition (SID), for the representation of nonlinear signals. This approach is based on the spectral decomposition of partial differential equation (PDE) based operators which interpolate the characteristic points of a signal. The SID components, which are the eigenvectors of these PDE interpolation operators, underlie the new signal decomposition-reconstruction method. The usefulness and efficiency of this method are illustrated, for signal reconstruction and denoising, in examples using artificial and pathological signals. Oumar Niang, Abdoulaye Thioune, Éric Deléchelle, and Jacques Lemoine Copyright © 2012 Oumar Niang et al. All rights reserved.
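The decomposition-reconstruction idea in the SID abstract above can be sketched in spirit with NumPy: build a symmetric operator, take its eigendecomposition, and represent the signal in the eigenvector basis. The simple second-difference smoothing operator below is our own stand-in for the paper's PDE interpolation operator, and the test signal is an arbitrary choice; only the spectral decompose-then-reconstruct mechanics are illustrated.

```python
import numpy as np

def smoothing_operator(n, strength=4.0):
    """I + strength * (second-difference penalty): a symmetric,
    diffusion-style operator standing in for a PDE interpolation operator."""
    D = np.diff(np.eye(n), 2, axis=0)       # (n-2) x n second differences
    return np.eye(n) + strength * D.T @ D

n = 128
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)

L = smoothing_operator(n)
eigvals, eigvecs = np.linalg.eigh(L)        # spectral decomposition of L
coeffs = eigvecs.T @ signal                 # project onto the SID-like basis

# Reconstruction from all components is exact; truncating to the smoothest
# components (smallest eigenvalues) would give a denoised approximation.
reconstruction = eigvecs @ coeffs
```

Since the operator is symmetric, its eigenvectors form an orthonormal basis, so keeping all components reconstructs the signal exactly, while discarding the high-eigenvalue (rough) components acts as a denoiser.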