International Journal of Biomedical Imaging The latest articles from Hindawi Publishing Corporation © 2016, Hindawi Publishing Corporation. All rights reserved. Anatomy-Correlated Breast Imaging and Visual Grading Analysis Using Quantitative Transmission Ultrasound™ Mon, 26 Sep 2016 11:36:06 +0000 Objectives. This study presents correlations between cross-sectional anatomy of human female breasts and Quantitative Transmission (QT) Ultrasound, performs discriminant classifier analysis to validate the speed-of-sound correlations, and performs a visual grading analysis comparing QT Ultrasound with mammography. Materials and Methods. Human cadaver breasts were imaged using QT Ultrasound, sectioned, and photographed. Biopsies confirmed the microanatomy, and areas were correlated with QT Ultrasound images. Measurements were taken in live subjects from QT Ultrasound images, and speed-of-sound values for each identified anatomical structure were plotted. Finally, a visual grading analysis was performed on images to determine whether radiologists’ confidence in identifying breast structures with mammography (XRM) is comparable to QT Ultrasound. Results. QT Ultrasound identified all major anatomical features of the breast, and speed-of-sound calculations showed specific values for different breast tissues. Using linear discriminant analysis, overall accuracy was 91.4%. In the visual grading analysis, readers scored the image quality on QT Ultrasound as better than on XRM in 69%–90% of breasts for specific tissues. Conclusions. QT Ultrasound provides accurate anatomic information and high tissue specificity using speed-of-sound information. Quantitative Transmission Ultrasound can distinguish different types of breast tissue with high resolution and accuracy. John C. Klock, Elaine Iuanow, Bilal Malik, Nancy A. Obuchowski, James Wiskin, and Mark Lenox Copyright © 2016 John C. Klock et al. All rights reserved.
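As a toy illustration of how speed-of-sound values can separate tissue types, the sketch below assigns a measurement to the nearest class mean. The tissue names and speeds are hypothetical placeholders, and a nearest-mean rule is only a simplified stand-in for the linear discriminant analysis actually used in the study.

```python
# Toy nearest-class-mean classifier over speed-of-sound values.
# The tissue speeds below are illustrative placeholders, not the
# paper's measured values; the study itself applied linear
# discriminant analysis to QT Ultrasound speed maps.

TISSUE_SPEED = {          # mean speed of sound in m/s (hypothetical)
    "fat": 1430.0,
    "glandular": 1510.0,
    "skin": 1570.0,
}

def classify_speed(speed_mps):
    """Assign a voxel's speed-of-sound value to the nearest tissue mean."""
    return min(TISSUE_SPEED, key=lambda t: abs(TISSUE_SPEED[t] - speed_mps))
```

For example, `classify_speed(1445.0)` falls closest to the hypothetical fat mean and is labeled accordingly.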
Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing Sun, 04 Sep 2016 11:15:36 +0000 The retinal fundus image plays an important role in the diagnosis of retina-related diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing such diseases. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image was processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, the image with the basic information of the background was fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image was denoised by a two-stage denoising method including fourth-order PDEs and the relaxed median filter. The retinal image databases, including the DRIVE database, the STARE database, and the DIARETDB1 database, were used to evaluate image enhancement effects. The results show that the method can enhance the retinal fundus image prominently. Unlike some other fundus image enhancement methods, the proposed method can directly enhance color images. Peishan Dai, Hanwei Sheng, Jianmei Zhang, Ling Li, Jing Wu, and Min Fan Copyright © 2016 Peishan Dai et al. All rights reserved. Automated Fovea Detection in Spectral Domain Optical Coherence Tomography Scans of Exudative Macular Disease Wed, 31 Aug 2016 14:33:31 +0000 In macular spectral domain optical coherence tomography (SD-OCT) volumes, detection of the foveal center is required for accurate and reproducible follow-up studies, structure–function correlation, and measurement grid positioning.
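The normalized-convolution step of the enhancement method above can be sketched in one dimension, assuming NumPy is available; the domain transform and color handling of the actual method are omitted here.

```python
import numpy as np

def normalized_convolution(signal, certainty, kernel):
    """Smooth `signal` where `certainty` in [0, 1] marks reliable samples.

    Classic normalized convolution: convolve the certainty-weighted data
    and the certainty itself, then divide, so that low-certainty samples
    do not drag the estimate down.
    """
    num = np.convolve(signal * certainty, kernel, mode="same")
    den = np.convolve(certainty, kernel, mode="same")
    return num / np.maximum(den, 1e-12)

# With full certainty this reduces to ordinary smoothing.
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
c = np.ones_like(f)
k = np.array([0.25, 0.5, 0.25])
out = normalized_convolution(f, c, k)
```

Setting a sample's certainty to zero excludes it from the estimate, which is how unreliable pixels can be interpolated from their neighbors.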
However, disease can cause severe obscuring or deformation of the fovea, thus presenting a major challenge in automated detection. We propose a fully automated fovea detection algorithm to extract the fovea position in SD-OCT volumes of eyes with exudative maculopathy. The fovea is classified into three main appearances to both specify the detection algorithm used and reduce computational complexity. Based on foveal type classification, the fovea position is computed based on retinal nerve fiber layer thickness. Mean absolute distance between system and clinical expert annotated fovea positions from a dataset comprising 240 SD-OCT volumes was 162.3 µm in cystoid macular edema and 262 µm in nAMD. The presented method has cross-vendor functionality, while demonstrating accurate and reliable performance close to typical expert interobserver agreement. The automatically detected fovea positions may be used as landmarks for intra- and cross-patient registration and to create a joint reference frame for extraction of spatiotemporal features in “big data.” Furthermore, reliable analyses of retinal thickness, as well as retinal structure–function correlation, may be facilitated. Jing Wu, Sebastian M. Waldstein, Alessio Montuoro, Bianca S. Gerendas, Georg Langs, and Ursula Schmidt-Erfurth Copyright © 2016 Jing Wu et al. All rights reserved. Kinect-Based Correction of Overexposure Artifacts in Knee Imaging with C-Arm CT Systems Tue, 19 Jul 2016 12:10:04 +0000 Objective. To demonstrate a novel approach of compensating overexposure artifacts in CT scans of the knees without attaching any supporting appliances to the patient. C-Arm CT systems offer the opportunity to perform weight-bearing knee scans on standing patients to diagnose diseases like osteoarthritis. However, one serious issue is overexposure of the detector in regions close to the patella, which cannot be tackled with common techniques. Methods.
A Kinect camera is used to algorithmically remove overexposure artifacts close to the knee surface. Overexposed near-surface knee regions are corrected by extrapolating the absorption values from more reliable projection data. To achieve this, we develop a cross-calibration procedure to transform surface points from the Kinect to CT voxel coordinates. Results. Artifacts in both knee phantoms are reduced significantly in the reconstructed data and a major part of the truncated regions is restored. Conclusion. The results emphasize the feasibility of the proposed approach. The accuracy of the cross-calibration procedure can be increased to further improve correction results. Significance. The correction method can be extended to a multi-Kinect setup for use in real-world scenarios. Using depth cameras does not require prior scans and offers the possibility of a temporally synchronized correction of overexposure artifacts. Johannes Rausch, Andreas Maier, Rebecca Fahrig, Jang-Hwan Choi, Waldo Hinshaw, Frank Schebesch, Sven Haase, Jakob Wasza, Joachim Hornegger, and Christian Riess Copyright © 2016 Johannes Rausch et al. All rights reserved. Fluorescence-Guided Resection of Malignant Glioma with 5-ALA Mon, 27 Jun 2016 09:46:42 +0000 Malignant gliomas are extremely difficult to treat, with no specific curative treatment. On the other hand, photodynamic medicine represents a promising technique for neurosurgeons in the treatment of malignant glioma. The resection rate of malignant glioma has increased from 40% to 80% owing to 5-aminolevulinic acid-photodynamic diagnosis (ALA-PDD). Furthermore, ALA is very useful because it has no serious complications. Based on previous research, it is apparent that protoporphyrin IX (PpIX) accumulates abundantly in malignant glioma tissues after ALA administration.
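The Kinect-to-CT cross-calibration described earlier requires mapping surface points between coordinate frames. A standard way to estimate such a rigid transform from paired points is the Kabsch algorithm, sketched below with NumPy; this is a generic illustration, not the paper's exact procedure.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    `src`, `dst`: (N, 3) arrays of paired points (for example, Kinect
    surface points and their CT voxel coordinates). Classic Kabsch
    algorithm via SVD of the cross-covariance matrix.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given at least three non-collinear correspondences, the estimated transform can then be applied to every Kinect surface point.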
Moreover, it is evident that the mechanism underlying PpIX accumulation in malignant glioma tissues involves an abnormality in porphyrin-heme metabolism, specifically decreased ferrochelatase enzyme activity. During resection surgery, the macroscopic fluorescence of PpIX visible to the naked eye is more sensitive than magnetic resonance imaging, and real-time spectral detection of PpIX is the most sensitive method. In the future, chemotherapy with new anticancer agents, immunotherapy, and new methods of radiotherapy and gene therapy will be developed; however, ALA will play a key role in malignant glioma treatment before the development of these new treatments. In this paper, we provide an overview and present the results of our clinical research on ALA-PDD. Sadahiro Kaneko and Sadao Kaneko Copyright © 2016 Sadahiro Kaneko and Sadao Kaneko. All rights reserved. Application of Artificial Neural Network Models in Segmentation and Classification of Nodules in Breast Ultrasound Digital Images Thu, 16 Jun 2016 09:37:22 +0000 This research presents a methodology for the automatic detection and characterization of breast sonographic findings. We performed the tests in ultrasound images obtained from breast phantoms made of tissue-mimicking material. When the results were satisfactory, we applied the same techniques to clinical examinations. The process starts with preprocessing (Wiener filter, equalization, and median filter) to minimize noise. Then, five segmentation techniques were investigated to determine the most concise representation of the lesion contour, leading us to select the self-organizing map (SOM) neural network as the most suitable. After delimiting the object, the most expressive features were selected for the morphological description of the finding, generating the input data for the Multilayer Perceptron (MLP) neural classifier. The accuracy achieved during training with simulated images was 94.2%, producing an AUC of 0.92.
To evaluate generalization, classification was performed on a group of images unknown to the system, both simulated and clinical, resulting in accuracies of 90% and 81%, respectively. The proposed classifier proved to be an important tool for diagnosis in breast ultrasound. Karem D. Marcomini, Antonio A. O. Carneiro, and Homero Schiabel Copyright © 2016 Karem D. Marcomini et al. All rights reserved. Blind Source Parameters for Performance Evaluation of Despeckling Filters Thu, 19 May 2016 12:58:31 +0000 Speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, the speckle suppression and mean preservation index (SMPI), and the beta metric. The need for a noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein’s unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet-based generalized likelihood estimation approach.
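The speckle suppression index mentioned above can be computed as a ratio of coefficients of variation; the sketch below uses one common definition and leaves out SMPI and the beta metric.

```python
from statistics import pstdev, mean

def speckle_suppression_index(original, filtered):
    """SSI = (sigma_f / mu_f) * (mu_o / sigma_o).

    Ratio of the coefficients of variation of the filtered and original
    images; values below 1 indicate speckle suppression. This is one
    common definition (the paper also uses SMPI and the beta metric).
    """
    cv_o = pstdev(original) / mean(original)
    cv_f = pstdev(filtered) / mean(filtered)
    return cv_f / cv_o
```

Because the metric compares the noisy input against its own filtered output, no noise-free reference image is needed, which is the point of these blind parameters.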
The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior sampling-based Bayesian estimation, hybrid median, and probabilistic patch-based filters are acceptable, whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. Nagashettappa Biradar, M. L. Dewal, ManojKumar Rohit, Sanjaykumar Gowre, and Yogesh Gundge Copyright © 2016 Nagashettappa Biradar et al. All rights reserved. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation Mon, 09 May 2016 12:58:39 +0000 Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross-sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images, and corneal thickness maps can also assist in treatment and diagnosis. The need for automatic segmentation of cross-sectional images is inevitable since manual segmentation is time-consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of the pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that the GMM method segments the desired boundaries with the best accuracy.
Hossein Rabbani, Rahele Kafieh, Mahdi Kazemian Jahromi, Sahar Jorjandi, Alireza Mehri Dehnavi, Fedra Hajizadeh, and Alireza Peyman Copyright © 2016 Hossein Rabbani et al. All rights reserved. Comparison of Texture Features Used for Classification of Life Stages of Malaria Parasite Mon, 09 May 2016 12:09:01 +0000 Malaria is a vector-borne disease widely occurring in equatorial regions. Even after decades of malaria control campaigns, it remains a high-mortality disease today due to improper and late diagnosis. To reduce the number of people affected by malaria, diagnosis should be early and accurate. This paper presents an automatic method for diagnosis of the malaria parasite in blood images. Image processing techniques are used to diagnose the malaria parasite and to detect its stages. The diagnosis of parasite stages is done using statistical and textural features of the malaria parasite in blood images. This paper compares the texture-based features used individually and in groups. The comparison is made by considering the accuracy, sensitivity, and specificity of the features for the same images in the database. Vinayak K. Bairagi and Kshipra C. Charpe Copyright © 2016 Vinayak K. Bairagi and Kshipra C. Charpe. All rights reserved. Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images Thu, 31 Mar 2016 17:45:45 +0000 Medical imaging systems often produce images that are poor in contrast and therefore require enhancement before they are examined by medical professionals. This is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms improve medical images to different extents, and various quantitative metrics or measures evaluate the quality of an image.
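As an example of such a quantitative measure, the sketch below computes the simple Michelson contrast; the specific measures evaluated in the paper may differ.

```python
def michelson_contrast(pixels):
    """Michelson contrast (Lmax - Lmin) / (Lmax + Lmin), in [0, 1].

    A simple global contrast measure: 0 for a flat image, approaching 1
    when the image spans the full dynamic range down to zero.
    """
    lo, hi = min(pixels), max(pixels)
    if hi + lo == 0:
        return 0.0
    return (hi - lo) / (hi + lo)
```

Comparing such a measure before and after enhancement gives a first, crude quantitative check that the enhancement actually increased contrast.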
This paper suggests the most appropriate measures for two types of medical images, namely, brain cancer images and breast cancer images. Suneet Gupta and Rabins Porwal Copyright © 2016 Suneet Gupta and Rabins Porwal. All rights reserved. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography Thu, 17 Mar 2016 06:48:58 +0000 This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization, we were able to reduce image noise for the realistic PCD by up to 90% compared to material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. Thomas Weidinger, Thorsten M. Buzug, Thomas Flohr, Steffen Kappler, and Karl Stierstorfer Copyright © 2016 Thomas Weidinger et al. All rights reserved.
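The flavor of statistical iterative reconstruction with parallel pixel updates can be illustrated with the classic MLEM update for a Poisson model, sketched below with NumPy. This is a simplified stand-in: the paper's algorithm uses separable surrogates of the negative log Poisson likelihood for a polychromatic transmission model, which is considerably more involved.

```python
import numpy as np

def mlem_step(A, y, x):
    """One MLEM update for a Poisson model y ~ Poisson(A @ x).

    Classic multiplicative update; every component of x is updated
    simultaneously, in the spirit of the parallel surrogate-based
    updates described in the paper (whose model differs).
    """
    y_hat = A @ x
    return x * (A.T @ (y / y_hat)) / (A.T @ np.ones_like(y))

def neg_log_poisson(A, y, x):
    """Negative log Poisson likelihood, up to a constant in y."""
    y_hat = A @ x
    return float(np.sum(y_hat - y * np.log(y_hat)))
```

Each iteration keeps the estimate positive and monotonically decreases the negative log-likelihood, which is the convexity property such algorithms exploit.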
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction Tue, 15 Mar 2016 16:55:53 +0000 Reconstructing images from their noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. Hongyang Lu, Jingbo Wei, Qiegen Liu, Yuhao Wang, and Xiaohua Deng Copyright © 2016 Hongyang Lu et al. All rights reserved. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift Tue, 15 Mar 2016 12:21:18 +0000 Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (EWISTARS) is proposed.
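At the core of shrinkage-thresholding CS reconstruction is a repeated gradient step followed by soft thresholding. A minimal ISTA sketch, assuming NumPy and ignoring the wavelet transform and random shift of the actual method:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm: shrink each entry toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, y, lam, step, iters=100):
    """Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by iterative
    shrinkage-thresholding: gradient step, then soft threshold."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x
```

The step size must not exceed the reciprocal of the largest eigenvalue of `A.T @ A` for convergence; accelerated variants and transform-domain versions build on exactly this iteration.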
It is composed of three components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. Yudong Zhang, Jiquan Yang, Jianfei Yang, Aijun Liu, and Ping Sun Copyright © 2016 Yudong Zhang et al. All rights reserved. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent Tue, 08 Mar 2016 11:44:37 +0000 For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient’s left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that also works for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering is applied to the registration results of a sequence using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm, and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well-enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well-contrasted sequences is in the range of other proposed manual registration methods.
Matthias Hoffmann, Christopher Kowalewski, Andreas Maier, Klaus Kurzidim, Norbert Strobel, and Joachim Hornegger Copyright © 2016 Matthias Hoffmann et al. All rights reserved. Detection of Cardiac Function Abnormality from MRI Images Using Normalized Wall Thickness Temporal Patterns Tue, 01 Mar 2016 11:13:32 +0000 Purpose. To develop a method for identifying abnormal myocardial function based on studying the normalized wall motion pattern during the cardiac cycle. Methods. The temporal pattern of the normalized myocardial wall thickness is used as a feature vector to assess cardiac wall motion abnormality. Principal component analysis is used to reduce the feature dimensionality, and the maximum likelihood method is used to differentiate between normal and abnormal features. The proposed method was applied to a dataset of 27 cases from normal subjects and patients. Results. The developed method achieved 81.5%, 85%, and 88.5% accuracy for identifying abnormal contractility in the basal, midventricular, and apical slices, respectively. Conclusions. A novel feature vector, namely, the normalized wall thickness, has been introduced for detecting myocardial regional wall motion abnormality. The proposed method provides assessment of the regional myocardial contractility for each cardiac segment and slice; therefore, it could be a valuable tool for automatic and fast determination of regional wall motion abnormality from conventional cine MRI images. Mai Wael, El-Sayed H. Ibrahim, and Ahmed S. Fahmy Copyright © 2016 Mai Wael et al. All rights reserved. Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping Mon, 15 Feb 2016 06:24:27 +0000 An integrated fabrication framework is presented to build heterogeneous objects (HEO) using digital microdroplet injection technology and rapid prototyping. A design and manufacturing method for heterogeneous-material parts, covering both structure and material, was used to replace the traditional process.
The net-node method was used for digital modeling, which can configure multiple materials in time. The relationship between material, color, and jetting nozzle was established. The main contributions are combining structure, material, and visualization in one process and providing the digital model for manufacturing. From the given model, it is concluded that the method is effective for HEO; using microdroplet rapid prototyping and the model given in the paper, HEO can essentially be fabricated. The model could be used in 3D biomanufacturing. Na Li, Jiquan Yang, Chunmei Feng, Jianfei Yang, Liya Zhu, and Aiqing Guo Copyright © 2016 Na Li et al. All rights reserved. Parallel Digital Watermarking Process on Ultrasound Medical Images in Multicores Environment Thu, 11 Feb 2016 07:17:01 +0000 Advances in communication network technology have facilitated the transmission of digital medical images to healthcare professionals via internal or public networks (e.g., the Internet), but they also expose the transmitted images to security threats, such as image tampering or insertion of false data, which may cause inaccurate diagnosis and treatment. Medical image distortion is not to be tolerated for diagnosis purposes; thus digital watermarking of medical images is introduced. So far, most watermarking research has been done on single-frame medical images, which is impractical in a real environment. In this paper, digital watermarking of multiframe medical images is proposed. In order to speed up multiframe watermarking processing time, parallel watermark processing utilizing multicore technology is introduced. Experimental results show that the elapsed time of parallel watermark processing is much shorter than that of sequential processing. Hui Liang Khor, Siau-Chuin Liew, and Jasni Mohd. Zain Copyright © 2016 Hui Liang Khor et al. All rights reserved.
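The parallel multiframe idea can be sketched with a thread pool that watermarks one frame per task. The LSB embedding below is a deliberately simple placeholder, not the paper's watermarking scheme:

```python
from concurrent.futures import ThreadPoolExecutor

def embed_lsb(frame, bits):
    """Embed watermark bits into the least significant bits of pixels.

    Returns a new frame; the original pixel list is left untouched.
    """
    out = list(frame)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(frame, n):
    """Read back the first n least-significant-bit watermark bits."""
    return [p & 1 for p in frame[:n]]

def watermark_frames(frames, bits, workers=4):
    """Watermark every frame concurrently, one pool task per frame."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: embed_lsb(f, bits), frames))
```

Because frames are independent, the work divides cleanly across cores; a process pool would serve the same purpose for CPU-bound watermarking schemes.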
Patch-Based Segmentation with Spatial Consistency: Application to MS Lesions in Brain MRI Sun, 24 Jan 2016 13:44:09 +0000 This paper presents an automatic lesion segmentation method based on similarities between multichannel patches. A patch database is built using training images for which the label maps are known. For each patch in the testing image, similar patches are retrieved from the database. The matching labels for these patches are then combined to produce an initial segmentation map for the test case. Finally an iterative patch-based label refinement process based on the initial segmentation map is performed to ensure the spatial consistency of the detected lesions. The method was evaluated in experiments on multiple sclerosis (MS) lesion segmentation in magnetic resonance images (MRI) of the brain. An evaluation was done for each image in the MICCAI 2008 MS lesion segmentation challenge. Results are shown to compete with the state of the art in the challenge. We conclude that the proposed algorithm for segmentation of lesions provides a promising new approach for local segmentation and global detection in medical images. Roey Mechrez, Jacob Goldberger, and Hayit Greenspan Copyright © 2016 Roey Mechrez et al. All rights reserved. Insight into the Molecular Imaging of Alzheimer’s Disease Sun, 10 Jan 2016 13:47:21 +0000 Alzheimer’s disease is a complex neurodegenerative disease affecting millions of individuals worldwide. Earlier it was diagnosed only via clinical assessments and confirmed by postmortem brain histopathology. The development of validated biomarkers for Alzheimer’s disease has given impetus to improve diagnostics and accelerate the development of new therapies. 
Functional imaging like positron emission tomography (PET), single photon emission computed tomography (SPECT), functional magnetic resonance imaging (fMRI), and proton magnetic resonance spectroscopy provides a means of detecting and characterising the regional changes in brain blood flow, metabolism, and receptor binding sites that are associated with Alzheimer’s disease. Multimodal neuroimaging techniques have indicated changes in brain structure and metabolic activity, and an array of neurochemical variations that are associated with neurodegenerative diseases. Radiotracer-based PET and SPECT potentially provide sensitive, accurate methods for the early detection of disease. This paper presents a review of neuroimaging modalities like PET, SPECT, and selected imaging biomarkers/tracers used for the early diagnosis of AD. Neuroimaging with such biomarkers and tracers could achieve a much higher diagnostic accuracy for AD and related disorders in the future. Abishek Arora and Neeta Bhagat Copyright © 2016 Abishek Arora and Neeta Bhagat. All rights reserved. Imaging Performance of Quantitative Transmission Ultrasound Wed, 28 Oct 2015 11:35:51 +0000 Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. Mark W. Lenox, James Wiskin, Matthew A. 
Lewis, Stephen Darrouzet, David Borup, and Scott Hsieh Copyright © 2015 Mark W. Lenox et al. All rights reserved. Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities Thu, 08 Oct 2015 13:51:48 +0000 The Haralick texture features are a well-known mathematical method for detecting lung abnormalities, giving the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features represents the reported performance of the proposed method. CT datasets from thirty-seven patients with either lung tumor or pulmonary edema were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. Despite low contrast and high noise in the images, the proposed algorithms produce promising results in detecting lung abnormalities in most patients compared with normal cases and suggest that some features are significantly more discriminative than others. Nourhan Zayed and Heba A. Elnemr Copyright © 2015 Nourhan Zayed and Heba A. Elnemr. All rights reserved. Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching Thu, 08 Oct 2015 12:12:42 +0000 Dynamic and longitudinal lung CT imaging produce 4D lung image data sets, enabling applications like radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets.
This yields an initial segmentation for the 4D volume, which is then refined by using a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration-based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of , which was statistically significantly better ( value ) than the 3D method (). Compared to the registration-based 4D method, our method obtained better or similar performance, but was 58.6% faster. Also, the method can be easily expanded to process 4D CT data sets consisting of several volumes. Gurman Gill and Reinhard R. Beichel Copyright © 2015 Gurman Gill and Reinhard R. Beichel. All rights reserved. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features Tue, 15 Sep 2015 14:14:35 +0000 Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using lung CT datasets from 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method has been used to extract features from ROIs. Then, the significant AM-FM features have been selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, k-nearest neighbour (k-NN), support vector machine (SVM), naïve Bayes, and linear classifiers have been used with the selected AM-FM features.
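One of the classifiers named above, k-nearest neighbour, can be sketched in a few lines of pure Python; the feature vectors here are toy stand-ins for the selected AM-FM features.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest training samples.

    `train` is a list of (feature_vector, label) pairs; distances are
    Euclidean. A toy stand-in for the k-NN classifier applied to the
    selected AM-FM features in the paper.
    """
    by_dist = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]
```

With well-separated clusters, a query near one cluster inherits that cluster's label by majority vote.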
The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that our proposed CAD system succeeded in differentiating between normal and cancerous lungs, achieving 95% accuracy with the linear classifier. Eman Magdy, Nourhan Zayed, and Mahmoud Fakhr Copyright © 2015 Eman Magdy et al. All rights reserved. Clutter Mitigation in Echocardiography Using Sparse Signal Separation Wed, 24 Jun 2015 09:38:03 +0000 In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used to learn the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as to experimental cardiac ultrasound data. In both cases, MCA is demonstrated to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In the experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. Javier S. Turek, Michael Elad, and Irad Yavneh Copyright © 2015 Javier S. Turek et al. All rights reserved.
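As a point of reference for the SVF baseline compared against MCA above, clutter suppression by singular value filtering can be sketched in a few lines of NumPy: the slow-time ensemble is unfolded into a pixels × frames matrix, and the leading singular components, which are dominated by near-stationary clutter, are zeroed out. This is a minimal illustrative sketch under stated assumptions, not the filter implementation evaluated in the paper; the function name and the single-component default are hypothetical.

```python
import numpy as np

def svd_clutter_filter(frames, n_clutter=1):
    """Suppress clutter in a (n_pixels, n_frames) slow-time ensemble.

    The leading singular components capture the near-stationary clutter,
    so zeroing them out leaves mostly the moving tissue/blood signal.
    """
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    s = s.copy()
    s[:n_clutter] = 0.0   # discard the clutter-dominated components
    return (U * s) @ Vt   # reassemble the ensemble without them
```

On a synthetic mix of a strong static (rank-1) clutter component and a weak random signal, removing a single singular component recovers the moving-signal part almost exactly, which is the intuition behind the PCA-based filtering that MCA is benchmarked against.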
Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models Tue, 02 Jun 2015 12:34:53 +0000 This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of identifying GBM with the new features using T1- and T2-weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations on the MR images. Multiclassifier techniques were used to evaluate the performance of the feature-based scheme in terms of its capability to discriminate GBM from normal tissue. In a comparative study against principal component analysis (PCA) and wavelet-based features, GMM features demonstrated the best performance. For the T1-WI, the accuracy was 97.05% (AUC = 92.73%) with 0.00% missed detection and 2.95% false alarm. In the T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detection and 0.00% false alarm. In FLAIR mode the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detection and 5.89% false alarm. These experimental results are promising for characterizing GBM heterogeneity and hence supporting early treatment. Ahmad Chaddad Copyright © 2015 Ahmad Chaddad. All rights reserved. Image Processing Algorithms and Measures for the Analysis of Biomedical Imaging Systems Applications Mon, 27 Apr 2015 10:02:20 +0000 Karen Panetta, Sos Agaian, Jean-Charles Pinoli, and Yicong Zhou Copyright © 2015 Karen Panetta et al. All rights reserved. Automatic Extraction of Blood Vessels in the Retinal Vascular Tree Using Multiscale Medialness Wed, 22 Apr 2015 13:49:27 +0000 We propose an algorithm for vessel extraction in retinal images.
The first step applies anisotropic diffusion filtering to the initial vessel network in order to restore disconnected vessel lines and eliminate noisy lines. In the second step, a multiscale line-tracking procedure detects all vessels having similar dimensions at a chosen scale. Computing the individual image maps requires several steps. First, a number of points are preselected using the eigenvalues of the Hessian matrix; these points are expected to lie near a vessel axis. Then, for each preselected point, the response map is computed from gradient information of the image at the current scale. Finally, the multiscale image map is derived by combining the individual image maps at different scales (sizes). Two publicly available datasets were used to test the performance of the proposed method: the STARE project’s dataset and the DRIVE dataset. The experimental results on the STARE dataset show a maximum average accuracy of around 94.02%, while on the DRIVE database the maximum average accuracy reaches 91.55%. Mariem Ben Abdallah, Jihene Malek, Ahmad Taher Azar, Philippe Montesinos, Hafedh Belmabrouk, Julio Esclarín Monreal, and Karl Krissian Copyright © 2015 Mariem Ben Abdallah et al. All rights reserved. Recovering 3D Shape with Absolute Size from Endoscope Images Using RBF Neural Network Thu, 09 Apr 2015 09:00:09 +0000 Medical diagnosis judges the status of a polyp from its size and 3D shape as seen in its endoscope image. In practice, however, the doctor judges the status empirically from the endoscope image, and more accurate 3D shape recovery from the 2D image has been demanded to support this judgment. As a high-speed method, the VBW (Vogel-Breuß-Weickert) model recovers 3D shape under the conditions of point light source illumination and perspective projection.
However, the VBW model recovers only relative shape; it cannot recover the shape with its absolute size. Here, a shape-modification step is introduced to recover the exact shape from the VBW result. A radial basis function neural network (RBF-NN) learns the mapping between input and output: the input is the gradient parameters output by the VBW model for a generated sphere, and the output is the true gradient parameters of that sphere. The learned mapping corrects the gradient, and the depth is then recovered from the modified gradient parameters. The performance of the proposed approach is confirmed via computer simulation and real experiments. Seiya Tsuda, Yuji Iwahori, M. K. Bhuyan, Robert J. Woodham, and Kunio Kasugai Copyright © 2015 Seiya Tsuda et al. All rights reserved. Edge Detection in Digital Images Using Dispersive Phase Stretch Transform Mon, 23 Mar 2015 13:14:44 +0000 We describe a new computational approach to edge detection and its application to biomedical images. Our digital algorithm transforms the image by emulating the propagation of light through a physical medium with a specific warped diffractive property. We show that the output phase of the transform reveals transitions in image intensity and can be used for edge detection. Mohammad H. Asghari and Bahram Jalali Copyright © 2015 Mohammad H. Asghari and Bahram Jalali. All rights reserved. Classifying Dementia Using Local Binary Patterns from Different Regions in Magnetic Resonance Images Sun, 22 Mar 2015 14:11:02 +0000 Dementia is an evolving challenge in society, and no disease-modifying treatment exists. Diagnosis can be demanding, and MR imaging may serve as a noninvasive method to increase prediction accuracy.
We explored the use of 2D local binary patterns (LBP) extracted from FLAIR and T1 MR images of the brain, combined with a Random Forest classifier, in an attempt to discriminate patients with Alzheimer's disease (AD), Lewy body dementia (LBD), and normal controls (NC). Analysis was conducted in areas with white matter lesions (WML) and in all of the white matter (WM). Results from 10-fold nested cross validation are reported as mean accuracy, precision, and recall, with standard deviation in brackets. The best result we achieved was in the two-class problem NC versus AD + LBD, with a total accuracy of 0.98 (0.04). In the three-class problem AD versus LBD versus NC and the two-class problem AD versus LBD, we achieved 0.87 (0.08) and 0.74 (0.16), respectively. The performance using 3DT1 images was notably better than when using FLAIR images, and the WM region gave results similar to those from the WML region. Our study demonstrates that LBP texture analysis in brain MR images can be successfully used for computer-based dementia diagnosis. Ketil Oppedal, Trygve Eftestøl, Kjersti Engan, Mona K. Beyer, and Dag Aarsland Copyright © 2015 Ketil Oppedal et al. All rights reserved.
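The basic 2D LBP descriptor used in the dementia study above can be sketched in plain NumPy. This minimal version computes the simplest single-radius, 8-neighbour code plus a normalized code histogram as the texture feature vector; the function names are illustrative assumptions, and the study's full pipeline (regional analysis, Random Forest classification) is not reproduced.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour LBP codes (radius 1) for the interior pixels."""
    center = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    h, w = image.shape
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior pixels.
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set the bit where the neighbour is at least as bright as the center.
        codes |= (neighbour >= center).astype(np.int64) << bit
    return codes

def lbp_histogram(image, bins=256):
    """Normalized LBP-code histogram, usable as a texture feature vector."""
    hist, _ = np.histogram(lbp_8(image), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

On a perfectly flat patch every neighbour ties with the center, so every bit is set and all codes equal 255; textured patches spread mass across the histogram, which is what makes the descriptor useful as a classifier input.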