International Journal of Biomedical Imaging. The latest articles from Hindawi Publishing Corporation. © 2016, Hindawi Publishing Corporation. All rights reserved.

Blind Source Parameters for Performance Evaluation of Despeckling Filters Thu, 19 May 2016 12:58:31 +0000 Speckle noise is inherent to transthoracic echocardiographic images, and a standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may therefore not reflect true filter performance on echocardiographic images. Instead, despeckling performance can be evaluated using blind assessment metrics such as the speckle suppression index (SSI), the speckle suppression and mean preservation index (SMPI), and the beta metric; these three parameters overcome the need for a noise-free reference image. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters, along with clinical validation. Noise is effectively suppressed using logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein’s unbiased risk estimation (SURE). In terms of SMPI, it is three times more effective than the wavelet-based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable, whereas the median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. Nagashettappa Biradar, M. L. Dewal, ManojKumar Rohit, Sanjaykumar Gowre, and Yogesh Gundge Copyright © 2016 Nagashettappa Biradar et al. All rights reserved.
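For readers implementing the blind metrics named above: the speckle suppression index (SSI) and SMPI admit short reference implementations. The sketch below assumes the commonly cited definitions (SSI as the ratio of coefficients of variation of the filtered and noisy images; SMPI with a multiplicative mean-preservation penalty); it is an illustration, not the authors' code, and pixel values are flattened to a plain list.

```python
from math import sqrt

def _mean(xs):
    return sum(xs) / len(xs)

def _std(xs):
    m = _mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def ssi(noisy, filtered):
    # Speckle Suppression Index: ratio of coefficients of variation
    # (filtered over noisy). Values below 1 indicate speckle suppression.
    return (_std(filtered) / _mean(filtered)) / (_std(noisy) / _mean(noisy))

def smpi(noisy, filtered):
    # Speckle suppression and Mean Preservation Index: the q term
    # penalizes filters that shift the mean intensity. Lower is better.
    q = 1 + abs(_mean(noisy) - _mean(filtered))
    return q * _std(filtered) / _std(noisy)

noisy = [100, 140, 80, 120, 90, 130]     # speckled patch (flattened)
smooth = [105, 115, 100, 120, 108, 112]  # same patch after filtering
print(ssi(noisy, smooth), smpi(noisy, smooth))
```

A mean-preserving smoothing filter drives both indices well below 1; a filter that suppresses speckle but biases the mean is still rewarded by SSI yet penalized by SMPI.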
Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation Mon, 09 May 2016 12:58:39 +0000 Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross-sectional images of the anterior and posterior segments of the eye. Corneal diseases can be diagnosed from these images, and corneal thickness maps can also assist in treatment and diagnosis. Automatic segmentation of cross-sectional images is essential, since manual segmentation is time-consuming and imprecise. In this paper, segmentation methods such as the Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries in OCT images. Using the segmentation of these boundaries in three-dimensional corneal data, we obtained thickness maps of the layers delimited by these borders. The mean and standard deviation of the thickness values for normal subjects in the epithelial, stromal, and whole cornea are calculated in the central, superior, inferior, nasal, and temporal zones (centered on the center of the pupil). To evaluate our approach, the automatic boundary results are compared with boundaries segmented manually by two corneal specialists. The quantitative results show that the GMM method segments the desired boundaries with the best accuracy. Hossein Rabbani, Rahele Kafieh, Mahdi Kazemian Jahromi, Sahar Jorjandi, Alireza Mehri Dehnavi, Fedra Hajizadeh, and Alireza Peyman Copyright © 2016 Hossein Rabbani et al. All rights reserved.

Comparison of Texture Features Used for Classification of Life Stages of Malaria Parasite Mon, 09 May 2016 12:09:01 +0000 Malaria is a vector-borne disease widely occurring in equatorial regions. Even after decades of malaria-control campaigns, it remains a high-mortality disease today owing to improper and late diagnosis.
To reduce the number of people affected by malaria, diagnosis should be early and accurate. This paper presents an automatic method for diagnosing the malaria parasite in blood images. Image processing techniques are used to diagnose the malaria parasite and to detect its life stages. Parasite stages are identified using statistical and textural features of the malaria parasite in blood images. This paper compares the texture-based features used individually and in combination. The comparison considers the accuracy, sensitivity, and specificity of the features for the same images in the database. Vinayak K. Bairagi and Kshipra C. Charpe Copyright © 2016 Vinayak K. Bairagi and Kshipra C. Charpe. All rights reserved.

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images Thu, 31 Mar 2016 17:45:45 +0000 Medical imaging systems often produce images with poor contrast, which must be enhanced before they are examined by medical professionals; this is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms improve medical images to different extents, and various quantitative metrics or measures evaluate the quality of an image. This paper suggests the most appropriate measures for two types of medical images, namely, brain cancer images and breast cancer images. Suneet Gupta and Rabins Porwal Copyright © 2016 Suneet Gupta and Rabins Porwal. All rights reserved.

Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography Thu, 17 Mar 2016 06:48:58 +0000 This work proposes a dedicated statistical algorithm to perform direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography.
It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels, which can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization, we achieved image noise for the realistic PCD that is up to 90% lower than that of material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. Thomas Weidinger, Thorsten M. Buzug, Thomas Flohr, Steffen Kappler, and Karl Stierstorfer Copyright © 2016 Thomas Weidinger et al. All rights reserved.

A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction Tue, 15 Mar 2016 16:55:53 +0000 Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, dictionary learning adaptively provides sparse representations of image features and effectively recovers image details.
The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. Hongyang Lu, Jingbo Wei, Qiegen Liu, Yuhao Wang, and Xiaohua Deng Copyright © 2016 Hongyang Lu et al. All rights reserved.

A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift Tue, 15 Mar 2016 12:21:18 +0000 Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS) is proposed. It is composed of three proven components: (i) the exponential wavelet transform, (ii) the iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. Yudong Zhang, Jiquan Yang, Jianfei Yang, Aijun Liu, and Ping Sun Copyright © 2016 Yudong Zhang et al. All rights reserved.
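For readers unfamiliar with component (ii) of EWISTARS: the core of any iterative shrinkage-thresholding algorithm is a gradient step on the data-fidelity term followed by soft thresholding. The sketch below is a generic ISTA loop for a tiny ℓ1-regularized least-squares problem, shown only to illustrate the mechanism; the exponential wavelet transform and random shift of the paper are omitted, and all names are illustrative.

```python
def soft_threshold(xs, t):
    # Proximal operator of t * ||x||_1: shrink each entry toward zero.
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in xs]

def ista(A, b, lam, step, iters):
    # Minimise 0.5 * ||A x - b||^2 + lam * ||x||_1 by alternating a
    # gradient step on the quadratic term with soft thresholding.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# With A = I the solution is just soft thresholding of b: large entries
# survive (shrunk by lam), small ones are zeroed out.
x = ista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.05], lam=0.1, step=1.0, iters=50)
print(x)
```

In CS-MRI the matrix would be replaced by an undersampled Fourier operator and the thresholding applied to wavelet coefficients, but the alternation is the same.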
Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent Tue, 08 Mar 2016 11:44:37 +0000 For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient’s left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that also works for small amounts of CA is desired. We propose two similarity measures: the first focuses on edges of the patient anatomy; the second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering is applied to the registration results of a sequence using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error over all 73 frames was 7.9 ± 6.3 mm, and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well-enhanced frame in each sequence. Temporal filtering reduced the error over all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well-contrasted sequences is in the range of other proposed manual registration methods. Matthias Hoffmann, Christopher Kowalewski, Andreas Maier, Klaus Kurzidim, Norbert Strobel, and Joachim Hornegger Copyright © 2016 Matthias Hoffmann et al. All rights reserved.

Detection of Cardiac Function Abnormality from MRI Images Using Normalized Wall Thickness Temporal Patterns Tue, 01 Mar 2016 11:13:32 +0000 Purpose. To develop a method for identifying abnormal myocardial function based on studying the normalized wall motion pattern during the cardiac cycle. Methods.
The temporal pattern of the normalized myocardial wall thickness is used as a feature vector to assess cardiac wall motion abnormality. Principal component analysis is used to reduce the feature dimensionality, and the maximum likelihood method is used to differentiate between normal and abnormal features. The proposed method was applied to a dataset of 27 cases from normal subjects and patients. Results. The developed method achieved 81.5%, 85%, and 88.5% accuracy for identifying abnormal contractility in the basal, midventricular, and apical slices, respectively. Conclusions. A novel feature vector, namely, the normalized wall thickness, has been introduced for detecting myocardial regional wall motion abnormality. The proposed method provides assessment of the regional myocardial contractility for each cardiac segment and slice; therefore, it could be a valuable tool for automatic and fast determination of regional wall motion abnormality from conventional cine MRI images. Mai Wael, El-Sayed H. Ibrahim, and Ahmed S. Fahmy Copyright © 2016 Mai Wael et al. All rights reserved.

Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping Mon, 15 Feb 2016 06:24:27 +0000 An integrated fabrication framework is presented for building heterogeneous objects (HEO) using digital microdroplet injection technology and rapid prototyping. A design and manufacturing method for heterogeneous-material parts, addressing both structure and material, was used to change the traditional process. The net node method was used for digital modeling, allowing multiple materials to be configured in time. The relationship among material, color, and jetting nozzle was established. The main contributions are combining structure, material, and visualization in one process and providing a digital model for manufacturing. From the given model, it is concluded that the method is effective for HEO: basic HEO can be obtained using microdroplet rapid prototyping with the proposed model.
The model could be used in 3D biomanufacturing. Na Li, Jiquan Yang, Chunmei Feng, Jianfei Yang, Liya Zhu, and Aiqing Guo Copyright © 2016 Na Li et al. All rights reserved.

Parallel Digital Watermarking Process on Ultrasound Medical Images in Multicores Environment Thu, 11 Feb 2016 07:17:01 +0000 Advances in communication network technology have facilitated the transmission of digital medical images to healthcare professionals via internal networks or public networks (e.g., the Internet), but they also expose the transmitted images to security threats, such as tampering or the insertion of false data, which may cause inaccurate diagnosis and treatment. Medical image distortion cannot be tolerated for diagnostic purposes; thus, digital watermarking of medical images is introduced. So far, most watermarking research has been done on single-frame medical images, which is impractical in a real environment. In this paper, digital watermarking of multiframe medical images is proposed. To speed up multiframe watermarking, parallel watermark processing utilizing multicore technology is introduced. Experimental results show that the elapsed time of parallel watermark processing is much shorter than that of sequential processing. Hui Liang Khor, Siau-Chuin Liew, and Jasni Mohd. Zain Copyright © 2016 Hui Liang Khor et al. All rights reserved.

Patch-Based Segmentation with Spatial Consistency: Application to MS Lesions in Brain MRI Sun, 24 Jan 2016 13:44:09 +0000 This paper presents an automatic lesion segmentation method based on similarities between multichannel patches. A patch database is built using training images for which the label maps are known. For each patch in the testing image, similar patches are retrieved from the database. The matching labels for these patches are then combined to produce an initial segmentation map for the test case.
Finally, an iterative patch-based label refinement process based on the initial segmentation map is performed to ensure the spatial consistency of the detected lesions. The method was evaluated on multiple sclerosis (MS) lesion segmentation in magnetic resonance images (MRI) of the brain, with an evaluation done for each image in the MICCAI 2008 MS lesion segmentation challenge. The results are shown to compete with the state of the art in the challenge. We conclude that the proposed lesion segmentation algorithm provides a promising new approach for local segmentation and global detection in medical images. Roey Mechrez, Jacob Goldberger, and Hayit Greenspan Copyright © 2016 Roey Mechrez et al. All rights reserved.

Insight into the Molecular Imaging of Alzheimer’s Disease Sun, 10 Jan 2016 13:47:21 +0000 Alzheimer’s disease is a complex neurodegenerative disease affecting millions of individuals worldwide. Earlier, it was diagnosed only via clinical assessment and confirmed by postmortem brain histopathology. The development of validated biomarkers for Alzheimer’s disease has given impetus to improved diagnostics and accelerated the development of new therapies. Functional imaging such as positron emission tomography (PET), single photon emission computed tomography (SPECT), functional magnetic resonance imaging (fMRI), and proton magnetic resonance spectroscopy provides a means of detecting and characterising the regional changes in brain blood flow, metabolism, and receptor binding sites that are associated with Alzheimer’s disease. Multimodal neuroimaging techniques have indicated changes in brain structure and metabolic activity, and an array of neurochemical variations that are associated with neurodegenerative diseases. Radiotracer-based PET and SPECT potentially provide sensitive, accurate methods for the early detection of disease.
This paper presents a review of neuroimaging modalities such as PET and SPECT and of selected imaging biomarkers/tracers used for the early diagnosis of AD. Neuroimaging with such biomarkers and tracers could achieve much higher diagnostic accuracy for AD and related disorders in the future. Abishek Arora and Neeta Bhagat Copyright © 2016 Abishek Arora and Neeta Bhagat. All rights reserved.

Imaging Performance of Quantitative Transmission Ultrasound Wed, 28 Oct 2015 11:35:51 +0000 Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high-resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high-resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. Mark W. Lenox, James Wiskin, Matthew A. Lewis, Stephen Darrouzet, David Borup, and Scott Hsieh Copyright © 2015 Mark W. Lenox et al. All rights reserved.

Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities Thu, 08 Oct 2015 13:51:48 +0000 Haralick texture features are a well-known mathematical method for detecting lung abnormalities and give the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, the performance of the proposed method is reported through statistical evaluation of the different features. CT datasets from thirty-seven patients with either lung tumor or pulmonary edema were included in this study.
The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation to extract the lungs, and finally Haralick texture features are computed to detect the type of abnormality within the lungs. Despite the low contrast and high noise in the images, the proposed algorithms produce promising results in detecting lung abnormalities in most patients relative to normal cases, and suggest that some features are significantly more discriminative than others. Nourhan Zayed and Heba A. Elnemr Copyright © 2015 Nourhan Zayed and Heba A. Elnemr. All rights reserved.

Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching Thu, 08 Oct 2015 12:12:42 +0000 Dynamic and longitudinal lung CT imaging produces 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration-based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of , which was statistically significantly better ( value ) than the 3D method (). Compared to the registration-based 4D method, our method obtained better or similar performance but was 58.6% faster. Also, the method can easily be expanded to process 4D CT data sets consisting of several volumes. Gurman Gill and Reinhard R.
Beichel Copyright © 2015 Gurman Gill and Reinhard R. Beichel. All rights reserved.

Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features Tue, 15 Sep 2015 14:14:35 +0000 Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze lung CT images, automatically segment the lungs, and classify each lung as normal or cancerous. Using lung CT datasets from 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features from the regions of interest (ROIs). Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, k-nearest neighbour (k-NN), support vector machine (SVM), naïve Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that our proposed CAD system succeeded in differentiating between normal and cancerous lungs, achieving 95% accuracy with the linear classifier. Eman Magdy, Nourhan Zayed, and Mahmoud Fakhr Copyright © 2015 Eman Magdy et al. All rights reserved.

Clutter Mitigation in Echocardiography Using Sparse Signal Separation Wed, 24 Jun 2015 09:38:03 +0000 In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts.
The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used to learn the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence as well as to experimental cardiac ultrasound data. In both cases, MCA is demonstrated to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. Javier S. Turek, Michael Elad, and Irad Yavneh Copyright © 2015 Javier S. Turek et al. All rights reserved.

Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models Tue, 02 Jun 2015 12:34:53 +0000 This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of using the new features to identify GBM in T1- and T2-weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations on the MR images. Multiclassifier techniques were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate GBM from normal tissue. In a comparative study against principal component analysis (PCA) and wavelet-based features, GMM features demonstrated the best performance.
For T1-WI, the accuracy was 97.05% (AUC = 92.73%) with 0.00% missed detections and 2.95% false alarms. For T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detections and 0.00% false alarms. In FLAIR mode, the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detections and 5.89% false alarms. These experimental results are promising for characterizing the heterogeneity of GBM and hence for its early treatment. Ahmad Chaddad Copyright © 2015 Ahmad Chaddad. All rights reserved.

Image Processing Algorithms and Measures for the Analysis of Biomedical Imaging Systems Applications Mon, 27 Apr 2015 10:02:20 +0000 Karen Panetta, Sos Agaian, Jean-Charles Pinoli, and Yicong Zhou Copyright © 2015 Karen Panetta et al. All rights reserved.

Automatic Extraction of Blood Vessels in the Retinal Vascular Tree Using Multiscale Medialness Wed, 22 Apr 2015 13:49:27 +0000 We propose an algorithm for vessel extraction in retinal images. The first step applies anisotropic diffusion filtering to the initial vessel network in order to restore disconnected vessel lines and eliminate noisy lines. In the second step, a multiscale line-tracking procedure detects all vessels having similar dimensions at a chosen scale. Computing the individual image maps requires several steps. First, a number of points are preselected using the eigenvalues of the Hessian matrix; these points are expected to be near a vessel axis. Then, for each preselected point, the response map is computed from gradient information of the image at the current scale. Finally, the multiscale image map is derived by combining the individual image maps at different scales (sizes). Two publicly available datasets have been used to test the performance of the suggested method: the main dataset is the STARE project’s dataset, and the second is the DRIVE dataset.
Experimental results on the STARE dataset show a maximum average accuracy of around 94.02%; on the DRIVE database, the maximum average accuracy reaches 91.55%. Mariem Ben Abdallah, Jihene Malek, Ahmad Taher Azar, Philippe Montesinos, Hafedh Belmabrouk, Julio Esclarín Monreal, and Karl Krissian Copyright © 2015 Mariem Ben Abdallah et al. All rights reserved.

Recovering 3D Shape with Absolute Size from Endoscope Images Using RBF Neural Network Thu, 09 Apr 2015 09:00:09 +0000 Medical diagnosis judges the status of a polyp from its size and 3D shape, estimated from its endoscope image. However, the doctor judges the status empirically from the endoscope image, and more accurate 3D shape recovery from the 2D image has been demanded to support this judgment. As a fast recovery method, the VBW (Vogel-Breuß-Weickert) model recovers 3D shape under point-light-source illumination and perspective projection. However, the VBW model recovers only relative shape; the shape cannot be recovered at its exact size. Here, shape modification is introduced to recover the exact shape from the VBW result. A radial basis function neural network (RBF-NN) is introduced to learn the mapping between input and output: the input is the gradient parameters output by the VBW model for a generated sphere, and the output is the true gradient parameters of that sphere. The learned mapping modifies the gradient, and the depth is recovered from the modified gradient parameters. Performance of the proposed approach is confirmed via computer simulation and real experiments. Seiya Tsuda, Yuji Iwahori, M. K. Bhuyan, Robert J. Woodham, and Kunio Kasugai Copyright © 2015 Seiya Tsuda et al. All rights reserved.
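The Hessian-eigenvalue preselection step described in the retinal vessel abstract above can be illustrated on a toy image. The sketch below uses finite-difference second derivatives and flags pixels whose most negative eigenvalue falls below a threshold (a bright ridge on a dark background, as a vessel appears after suitable preprocessing); it is a single-scale illustration under assumed conventions, not the authors' implementation.

```python
from math import sqrt

def hessian_eigs(img, y, x):
    # 2x2 Hessian via central finite differences at pixel (y, x).
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    tr = ixx + iyy
    d = sqrt((ixx - iyy) ** 2 + 4 * ixy ** 2)
    return (tr - d) / 2.0, (tr + d) / 2.0  # (smaller, larger) eigenvalue

def preselect(img, thresh):
    # Keep interior pixels with strong negative curvature across a
    # bright ridge -- candidates expected to lie near a vessel axis.
    pts = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            lo, _ = hessian_eigs(img, y, x)
            if lo < -thresh:
                pts.append((y, x))
    return pts

# Toy image: a single bright horizontal "vessel" on row 4.
img = [[0.0] * 9 for _ in range(9)]
img[4] = [1.0] * 9
print(preselect(img, 1.0))
```

At a real scale, the image would first be smoothed with a Gaussian of matching width so that vessels of that width respond; the eigenvector of the most negative eigenvalue then points across the vessel.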
Edge Detection in Digital Images Using Dispersive Phase Stretch Transform Mon, 23 Mar 2015 13:14:44 +0000 We describe a new computational approach to edge detection and its application to biomedical images. Our digital algorithm transforms the image by emulating the propagation of light through a physical medium with a specific warped diffractive property. We show that the output phase of the transform reveals transitions in image intensity and can be used for edge detection. Mohammad H. Asghari and Bahram Jalali Copyright © 2015 Mohammad H. Asghari and Bahram Jalali. All rights reserved.

Classifying Dementia Using Local Binary Patterns from Different Regions in Magnetic Resonance Images Sun, 22 Mar 2015 14:11:02 +0000 Dementia is an evolving challenge in society, and no disease-modifying treatment exists. Diagnosis can be demanding, and MR imaging may aid as a noninvasive method to increase prediction accuracy. We explored the use of 2D local binary patterns (LBP) extracted from FLAIR and T1 MR images of the brain, combined with a Random Forest classifier, in an attempt to discern patients with Alzheimer's disease (AD), Lewy body dementia (LBD), and normal controls (NC). Analysis was conducted in areas with white matter lesions (WML) and in all of the white matter (WM). Results from 10-fold nested cross-validation are reported as mean accuracy, precision, and recall, with standard deviation in brackets. The best result we achieved was in the two-class problem NC versus AD + LBD, with a total accuracy of 0.98 (0.04). In the three-class problem AD versus LBD versus NC and the two-class problem AD versus LBD, we achieved 0.87 (0.08) and 0.74 (0.16), respectively. The performance using 3D T1 images was notably better than when using FLAIR images. The WM region gave results similar to those from the WML region. Our study demonstrates that LBP texture analysis of brain MR images can be successfully used for computer-based dementia diagnosis.
Ketil Oppedal, Trygve Eftestøl, Kjersti Engan, Mona K. Beyer, and Dag Aarsland Copyright © 2015 Ketil Oppedal et al. All rights reserved.

Optic Disc Segmentation by Balloon Snake with Texture from Color Fundus Image Mon, 16 Mar 2015 06:39:12 +0000 A well-established method for the diagnosis of glaucoma is examination of the optic nerve head in fundus images, as glaucomatous patients tend to have larger cup-to-disc ratios. The difficulty of optic disc segmentation is due to fuzzy boundaries and peripapillary atrophy (PPA). In this paper, a novel method for optic nerve head segmentation is proposed. It uses template matching to find the region of interest (ROI). Vessels in the ROI are erased using PDE inpainting, which makes the boundary smoother. A novel optic disc segmentation approach using image texture is explored: a clustering method based on image texture is employed before the optic disc segmentation step to remove edge noise such as the cup boundary and vessels. We replace the image force in the snake with image texture, and the initial contour of the balloon snake is placed inside the optic disc to avoid the PPA. The experimental results show the superior performance of the proposed method compared to some traditional segmentation approaches; an average segmentation Dice coefficient of 94% has been obtained. Jinyang Sun, Fangjun Luan, and Hanhui Wu Copyright © 2015 Jinyang Sun et al. All rights reserved.

High Performance GPU-Based Fourier Volume Rendering Thu, 19 Feb 2015 12:07:46 +0000 Fourier volume rendering (FVR) is a significant visualization technique that has been widely used in digital radiography. Thanks to its lower time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are computationally complex.
Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation, generating attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive, capable platform that can deliver far greater raw computational power than the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. By executing the rendering pipeline entirely on recent GPU architectures, the proposed implementation achieves a speed-up of 117x over a single-threaded hybrid implementation that uses the CPU and GPU together. Marwan Abdellah, Ayman Eldeib, and Amr Sharawi Copyright © 2015 Marwan Abdellah et al. All rights reserved. Incidence of Brain Abnormalities Detected on Preoperative Brain MR Imaging and Their Effect on the Outcome of Cochlear Implantation in Children with Sensorineural Hearing Loss Tue, 20 Jan 2015 10:26:57 +0000 The incidence of sensorineural hearing loss (SNHL) has increased gradually over the past decades. High-resolution computed tomography (HRCT) and magnetic resonance (MR) imaging, as an important part of preimplantation evaluation for children with SNHL, provide detailed information about the inner ear, the vestibulocochlear nerve, and the brain, helping to select suitable candidates for cochlear implantation (CI). Brain abnormalities are not rare in the brain MR imaging of SNHL children; however, their influence on the outcome of CI has not been clarified. 
After retrospectively analyzing the CT and MR imaging of 157 children with SNHL who underwent preoperative evaluation from June 2011 to February 2013 in our hospital and following them for a period of months, we found that white matter change, which might be associated with a history of medical conditions, was the most common brain abnormality. CI was usually still beneficial to the children with brain abnormalities, and short-term hearing improvement could be achieved. Further study with more patients and longer follow-up is needed to confirm our results. Xiao-Quan Xu, Fei-Yun Wu, Hao Hu, Guo-Yi Su, and Jie Shen Copyright © 2015 Xiao-Quan Xu et al. All rights reserved. Automated Classification of Glandular Tissue by Statistical Proximity Sampling Sun, 18 Jan 2015 09:19:48 +0000 Due to the complexity of biological tissue and variations in staining procedures, features based on the explicit extraction of properties from subglandular structures in tissue images may have difficulty generalizing over an unrestricted set of images and staining variations. We circumvent this problem with an implicit representation that is both robust and highly descriptive, especially when combined with a multiple instance learning approach to image classification. The new feature method describes tissue architecture based on glandular structure. It statistically represents the relative distribution of tissue components around lumen regions, while preserving spatial and quantitative information, as a basis for diagnosing and analyzing different areas within an image. We demonstrate the efficacy of the method in extracting discriminative features for obtaining high classification rates for tubular formation in both healthy and cancerous tissue, which is an important component in Gleason and tubule-based Elston grading. 
The proposed method may be used for glandular classification in other tissue types as well, and is more generally applicable as a region-based feature descriptor in image analysis settings where the image represents a bag with a certain label (or grade) and the region-based feature vectors represent instances. Jimmy C. Azar, Martin Simonsson, Ewert Bengtsson, and Anders Hast Copyright © 2015 Jimmy C. Azar et al. All rights reserved. Computer-Assisted Segmentation of Videocapsule Images Using Alpha-Divergence-Based Active Contour in the Framework of Intestinal Pathologies Detection Thu, 18 Dec 2014 00:10:27 +0000 Visualization of the entire length of the gastrointestinal tract through natural orifices is a challenge for endoscopists. Videoendoscopy is currently the “gold standard” technique for diagnosis of different pathologies of the intestinal tract. Wireless capsule endoscopy (WCE) was developed in the 1990s as an alternative to videoendoscopy that allows direct examination of the gastrointestinal tract without any need for sedation. Nevertheless, the systematic review by a specialist of the 50,000 (small bowel) to 150,000 (colon) images of a complete WCE acquisition remains time-consuming and challenging due to the poor quality of WCE images. In this paper, a semiautomatic segmentation approach for the analysis of WCE images is proposed. Based on active contour segmentation, the proposed method introduces alpha-divergences, a flexible statistical similarity measure that can be adapted to different types of gastrointestinal pathologies. Segmentation results using the proposed approach are shown on different types of real-case examinations, from (multi)polyp segmentation to radiation enteritis delineation. L. Meziou, A. Histace, F. Precioso, O. Romain, X. Dray, B. Granado, and B. J. Matuszewski Copyright © 2014 L. Meziou et al. All rights reserved. 
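To make the alpha-divergence mentioned in the capsule endoscopy abstract concrete, the sketch below implements one common (Amari) parameterization for two discrete region histograms. The authors' exact formulation, and how it enters the active contour energy, may differ; this only illustrates the family of measures:

```python
import numpy as np

def alpha_divergence(p, q, alpha=0.5, eps=1e-12):
    """Amari alpha-divergence between two discrete distributions.

    For normalized p and q: D_a(p||q) = (1 - sum(p^a * q^(1-a))) / (a*(1-a)).
    alpha = 0.5 gives four times the squared Hellinger distance; the limits
    alpha -> 1 and alpha -> 0 recover the two Kullback-Leibler divergences.
    """
    p = np.asarray(p, dtype=float) + eps  # eps guards against zero bins
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha))

# identical region histograms have (numerically) zero divergence,
# disjoint ones a large divergence; the contour evolves to maximize
# the dissimilarity between inside and outside statistics
h = [0.2, 0.3, 0.5]
same = alpha_divergence(h, h)
diff = alpha_divergence([1, 0, 0], [0, 1, 0])
```

Tuning `alpha` is what gives the measure its flexibility across pathology types: different values weight the tails of the histograms differently.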
Ischemic Stroke Detection System with a Computer-Aided Diagnostic Ability Using an Unsupervised Feature Perception Enhancement Method Tue, 09 Dec 2014 12:20:50 +0000 We propose an ischemic stroke detection system with a computer-aided diagnostic ability using a four-step unsupervised feature perception enhancement method. In the first step, preprocessing, we use a cubic curve contrast enhancement method to enhance image contrast. In the second step, we use a series of methods to extract the brain tissue area identified during preprocessing, and we propose an unsupervised region growing algorithm to segment this area and detect abnormal regions in the brain images. The brain is centered on a horizontal line, and the white matter of the brain’s inner ring is split into eight regions. In the third step, we use a coinciding regional location method to find, in each cerebral hemisphere, the hybrid area of locations where a stroke may have occurred. Finally, we make corrections and mark the stroke area in red. In the experiment, we tested the system on 90 computed tomography (CT) images from 26 patients and, with the assistance of two radiologists, verified that our proposed system has computer-aided diagnostic capabilities. Our results show a stroke diagnosis sensitivity of 83%, compared to 31% when radiologists use conventional diagnostic images. Yeu-Sheng Tyan, Ming-Chi Wu, Chiun-Li Chin, Yu-Liang Kuo, Ming-Sian Lee, and Hao-Yan Chang Copyright © 2014 Yeu-Sheng Tyan et al. All rights reserved.
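The preprocessing step of the stroke detection system above can be sketched with a generic cubic contrast curve. The authors' exact polynomial is not given in the abstract, so this uses the standard smoothstep cubic s = 3r² − 2r³ purely as an illustration of how a cubic mapping stretches mid-range contrast:

```python
import numpy as np

def cubic_contrast(img):
    """Illustrative cubic contrast enhancement (not the authors' exact curve).

    Intensities are normalized to [0, 1] and mapped through
    s = 3r^2 - 2r^3, whose slope exceeds 1 in the mid-range: mid-gray
    differences (e.g. tissue vs. early infarct) are stretched while
    the dark and bright extremes are compressed.
    """
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    r = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return 3.0 * r**2 - 2.0 * r**3  # rescale to original range if needed

# mid-range values 0.4 and 0.6 are pushed further apart
mid = cubic_contrast(np.array([0.0, 0.4, 0.5, 0.6, 1.0]))
```

The mapping is monotonic, so it enhances perceptual contrast without reordering intensities, which keeps the subsequent region growing step well defined.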