Computational and Mathematical Methods in Medicine The latest articles from Hindawi Publishing Corporation © 2015, Hindawi Publishing Corporation. All rights reserved. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems Wed, 29 Jul 2015 16:02:05 +0000 Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, no single algorithm is suitable for all types of data, conditions, and applications; each algorithm has its own advantages, limitations, and deficiencies. Hence, research into novel and effective approaches for unsupervised data classification remains active. In this paper the Biogeography-Based Optimization (BBO) algorithm, a heuristic inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Like other population-based algorithms, BBO starts with an initial population of candidate solutions to an optimization problem and an objective function that is evaluated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the results were compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification. Parvaneh Shabanzadeh and Rubiyah Yusof Copyright © 2015 Parvaneh Shabanzadeh and Rubiyah Yusof. All rights reserved. 
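The abstract above does not give the modified BBO operators, but the general idea of BBO-driven clustering can be illustrated with a minimal sketch: each "habitat" is a candidate set of centroids, fitness is the within-cluster sum of squared errors, and fitter habitats emigrate centroid values to less fit ones. All names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sse(centroids, data):
    """Within-cluster sum of squared errors: each point is assigned
    to its nearest centroid."""
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))

def bbo_cluster(data, k=3, pop=20, iters=100, p_mut=0.05, seed=0):
    """Toy BBO-style clustering: habitats are candidate centroid sets."""
    rng = np.random.default_rng(seed)
    # Each habitat holds k candidate centroids drawn from the data.
    habitats = data[rng.choice(len(data), size=(pop, k), replace=True)]
    for _ in range(iters):
        fit = np.array([sse(h, data) for h in habitats])
        habitats = habitats[np.argsort(fit)]      # best (lowest SSE) first
        # Linear immigration rates: worse habitats accept more features;
        # habitat 0 is never touched (elitism).
        lam = np.arange(pop) / (pop - 1)
        for i in range(1, pop):
            for j in range(k):
                if rng.random() < lam[i]:
                    # Migrate a centroid from a fitter habitat.
                    src = rng.integers(0, i)
                    habitats[i, j] = habitats[src, j]
                if rng.random() < p_mut:
                    # Mutation: replace centroid with a random data point.
                    habitats[i, j] = data[rng.integers(len(data))]
    fit = np.array([sse(h, data) for h in habitats])
    return habitats[int(np.argmin(fit))]
```

In a real adaptation the migration and mutation operators would act on cluster assignments or encoded solutions rather than raw centroids; the sketch only shows the population/migration mechanics the abstract refers to.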
Dual Energy Method for Breast Imaging: A Simulation Study Mon, 13 Jul 2015 11:22:59 +0000 Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this end, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast-to-noise ratio (CNR) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels. V. Koukou, N. Martini, C. Michail, P. Sotiropoulou, C. Fountzoula, N. Kalyvas, I. Kandarakis, G. Nikiforidis, and G. Fountos Copyright © 2015 V. Koukou et al. All rights reserved. Modelling Optimal Control of Cholera in Communities Linked by Migration Mon, 13 Jul 2015 06:40:12 +0000 A mathematical model for the dynamics of cholera transmission with permissible controls between two connected communities is developed and analysed. 
The dynamics of the disease in the adjacent communities are assumed to be similar, with the main differences reflected only in the transmission and disease-related parameters. This assumption is based on the fact that adjacent communities often have different living conditions, and movement is inclined toward the community with better living conditions. Community-specific reproduction numbers are given, assuming movement of susceptible, infected, and recovered individuals between communities. We carry out sensitivity analysis of the model parameters using the Latin Hypercube Sampling scheme to ascertain the degree of effect the parameters and controls have on progression of the infection. Using principles from optimal control theory, a temporal relationship between the distribution of controls and severity of the infection is ascertained. Our results indicate that implementation of controls such as proper hygiene, sanitation, and vaccination across both affected communities is likely to annihilate the infection within half the time it would take through self-limitation. In addition, although an infection may still break out in the presence of controls, it may be up to 8 times less devastating compared with the case when no controls are in place. J. B. H. Njagarah and F. Nyabadza Copyright © 2015 J. B. H. Njagarah and F. Nyabadza. All rights reserved. Automated Delineation of Vessel Wall and Thrombus Boundaries of Abdominal Aortic Aneurysms Using Multispectral MR Images Sun, 05 Jul 2015 07:31:50 +0000 A correct patient-specific identification of the abdominal aortic aneurysm is useful for both diagnosis and treatment stages, as it locates the disease and represents its geometry. The actual thickness and shape of the arterial wall and the intraluminal thrombus are of great importance when predicting the rupture of abdominal aortic aneurysms. 
The authors describe a novel method for delineating both the internal and external contours of the aortic wall, which allows distinguishing between vessel wall and intraluminal thrombus. The method is based on an active shape model and texture statistical information. The method was validated with eight MR patient studies. There was high correspondence between automatic and manual measurements for the vessel wall area. The resulting segmented images presented a mean Dice coefficient with respect to manual segmentations of 0.88 and a mean modified Hausdorff distance of 1.14 mm for the internal face of the arterial wall, and 0.86 and 1.33 mm for the external face. Preliminary results of the segmentation show high correspondence between automatic and manual measurements for the vessel wall and thrombus areas. However, since the dataset is small, the conclusions cannot be generalized. B. Rodriguez-Vila, J. Tarjuelo-Gutierrez, P. Sánchez-González, P. Verbrugghe, I. Fourneau, G. Maleux, P. Herijgers, and E. J. Gomez Copyright © 2015 B. Rodriguez-Vila et al. All rights reserved. Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea Thu, 02 Jul 2015 09:14:26 +0000 In order to investigate the transmission mechanism of individuals infected with Ebola virus, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovered) epidemic model. The basic reproduction number is defined. A mathematical analysis of the existence and stability of the disease-free equilibrium and the endemic equilibrium is given. As an application of the model, we use the reported infection and death cases in Guinea to estimate the parameters of the model by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze its sensitivity and uncertainty by partial rank correlation coefficients. 
Zhiming Li, Zhidong Teng, Xiaomei Feng, Yingke Li, and Huiguo Zhang Copyright © 2015 Zhiming Li et al. All rights reserved. Undersampled MR Image Reconstruction with Data-Driven Tight Frame Wed, 24 Jun 2015 12:09:26 +0000 Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years, supported by compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack the adaptability to capture structure information or suffer from a high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of the data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared to two state-of-the-art methods on four datasets, and encouraging performance has been achieved by DDTF-MRI. Jianbo Liu, Shanshan Wang, Xi Peng, and Dong Liang Copyright © 2015 Jianbo Liu et al. All rights reserved. Advances in Computational Methods for Genetic Diseases Thu, 18 Jun 2015 12:39:23 +0000 Francesco Camastra, Roberto Amato, Maria Donata Di Taranto, and Antonino Staiano Copyright © 2015 Francesco Camastra et al. All rights reserved. Accelerated Compressed Sensing Based CT Image Reconstruction Thu, 18 Jun 2015 08:16:22 +0000 In X-ray computed tomography (CT), an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. 
We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had a 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. SayedMasoud Hashemi, Soosan Beheshti, Patrick R. Gill, Narinder S. Paul, and Richard S. C. Cobbold Copyright © 2015 SayedMasoud Hashemi et al. All rights reserved. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets Mon, 15 Jun 2015 14:03:06 +0000 An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. 
The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to their narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. Charita Bhikha, Arne Andreasen, Erik I. Christensen, Robyn F. R. Letts, Adam Pantanowitz, David M. Rubin, Jesper S. Thomsen, and Xiao-Yue Zhai Copyright © 2015 Charita Bhikha et al. All rights reserved. The Prioritization of Clinical Risk Factors of Obstructive Sleep Apnea Severity Using Fuzzy Analytic Hierarchy Process Mon, 15 Jun 2015 08:56:26 +0000 Recently, there has been a shortage of sleep laboratories that can accommodate patients in a timely manner. Delayed diagnosis and treatment may lead to worse outcomes, particularly in patients with severe obstructive sleep apnea (OSA). For this reason, prioritization in the polysomnography (PSG) queue should be based on disease severity. To date, there have been conflicting data on whether clinical information can predict OSA severity. A total of 1,042 suspected OSA patients underwent a diagnostic PSG study at the Siriraj Sleep Center during 2010-2011. A total of 113 variables were obtained from sleep questionnaires and anthropometric measurements. Nineteen groups of clinical risk factors, consisting of 42 variables, were categorized for each OSA severity level. 
This study aimed to rank these factors by employing the Fuzzy Analytic Hierarchy Process approach based on a normalized weight vector. The results revealed that the first-ranked clinical risk factor in the Severe, Moderate, Mild, and No OSA groups was nighttime symptoms. The overall sensitivity/specificity of the approach for these groups was 92.32%/91.76%, 89.52%/88.18%, 91.08%/84.58%, and 96.49%/81.23%, respectively. We propose that urgent PSG appointments should take into account the clinical risk factors of the Severe OSA group. In addition, screening to distinguish Mild from No OSA patients in the sleep center setting using symptoms during sleep is also recommended (sensitivity = 87.12% and specificity = 72.22%). Thaya Maranate, Adisak Pongpullponsak, and Pimon Ruttanaumpawan Copyright © 2015 Thaya Maranate et al. All rights reserved. A Forward Dynamic Modelling Investigation of Cause-and-Effect Relationships in Single Support Phase of Human Walking Sun, 14 Jun 2015 09:46:52 +0000 Mathematical gait models often fall into one of two categories: simple and complex. There is a large leap in complexity between the model types, meaning the effects of individual gait mechanisms get overlooked. This study investigated the cause-and-effect relationships between gait mechanisms and the resulting kinematics and kinetics, using a sequence of mathematical models of increasing complexity. The focus was on the sagittal plane and single support only. Starting with an inverted pendulum (IP) model, extended to include a HAT (head-arms-trunk) segment and an actuated hip moment, further complexities were added one by one. These were a knee joint, an ankle joint with a static foot, heel rise, and finally a swing leg. The presence of a knee joint and an ankle moment (during foot flat) were shown to largely influence the initial peak in the vertical GRF curve. The second peak in this curve was achieved through a combination of heel rise and the presence of a swing leg. 
Heel rise was also shown to reduce errors in the horizontal GRF prediction in the second half of single support. The swing leg is important for centre-of-mass (CM) deceleration in late single support. These findings provide evidence for the specific effects of each gait mechanism. Michael McGrath, David Howard, and Richard Baker Copyright © 2015 Michael McGrath et al. All rights reserved. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach Sun, 14 Jun 2015 06:29:29 +0000 This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. A planar homography estimate for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. The Root Mean Square (RMS) error of control and validation points was lower with homography than without it for both surface and underwater cameras. With homography, the RMS errors of control and validation points were similar between surface and underwater cameras. Without homography, the RMS error of control points was greater for underwater than for surface cameras, and the opposite was observed for validation points. It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. Kelly de Jesus, Karla de Jesus, Pedro Figueiredo, João Paulo Vilas-Boas, Ricardo Jorge Fernandes, and Leandro José Machado Copyright © 2015 Kelly de Jesus et al. All rights reserved. 
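The planar homography estimation used for image rectification in the swimming study can itself be computed with the direct linear transformation: stack two linear constraints per point correspondence and take the null vector via SVD. The following is a minimal sketch of that standard procedure, not the study's calibration code; function names are illustrative.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 planar homography H with dst ~ H @ src from
    >= 4 point correspondences, via DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the constraint
        # matrix A, derived from u = (h1.p)/(h3.p), v = (h2.p)/(h3.p).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography (up to scale) is the right singular vector with
    # the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:]
```

For noisy correspondences one would normalize the coordinates first (Hartley normalization) and use more than the minimum four points, as the study does with its large point combinations.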
Genetic Consequences of Antiviral Therapy on HIV-1 Wed, 10 Jun 2015 07:41:06 +0000 A variety of enzyme inhibitors have been developed to combat HIV-1; however, the fast evolutionary rate of this virus commonly leads to the emergence of resistance mutations that finally allow the mutant virus to survive. This review explores the main genetic consequences of HIV-1 molecular evolution during antiviral therapies, including viral genetic diversity and molecular adaptation. The role of recombination in the generation of drug resistance is also analyzed. Besides the investigation and discussion of published works, an evolutionary analysis of protease-coding genes collected from patients before and after treatment with different protease inhibitors is included to validate previous studies. Finally, the review discusses the importance of considering the genetic consequences of antiviral therapies in models of HIV-1 evolution, which could improve current genotypic resistance testing and treatment design. Miguel Arenas Copyright © 2015 Miguel Arenas. All rights reserved. Nanodosimetry-Based Plan Optimization for Particle Therapy Mon, 08 Jun 2015 06:00:50 +0000 Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify the physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the feasibility of the optimization. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA, and nanodosimetric descriptors were calculated. 
The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to a biologically uniform response in tumor cells and tissues. Margherita Casiraghi and Reinhard W. Schulte Copyright © 2015 Margherita Casiraghi and Reinhard W. Schulte. All rights reserved. Revisiting Warfarin Dosing Using Machine Learning Techniques Thu, 04 Jun 2015 15:19:15 +0000 Determining the appropriate dosage of warfarin is an important yet challenging task. Several prediction models have been proposed to estimate a therapeutic dose for patients. The models are either clinical models, which contain clinical and demographic variables, or pharmacogenetic models, which additionally contain genetic variables. In this paper, a new methodology for warfarin dosing is proposed. The patients are initially classified into two classes: the first class contains patients who require doses of >30 mg/wk and the second class contains patients who require doses of ≤30 mg/wk. This phase is performed using relevance vector machines. In the second phase, the optimal dose for each patient is predicted by two clinical regression models that are customized for each class of patients. The prediction accuracy of the model was 11.6 in terms of root mean squared error (RMSE) and 8.4 in terms of mean absolute error (MAE). 
This was 15% and 5% lower, in terms of RMSE, than the IWPC and Gage models (the most widely used models in practice), respectively. In addition, the proposed model was compared with the fixed-dose approach of 35 mg/wk and with the model proposed by Sharabiani et al., and its superior performance was demonstrated in terms of both MAE and RMSE. Ashkan Sharabiani, Adam Bress, Elnaz Douzali, and Houshang Darabi Copyright © 2015 Ashkan Sharabiani et al. All rights reserved. Medical Image Fusion Based on Rolling Guidance Filter and Spiking Cortical Model Wed, 03 Jun 2015 11:55:59 +0000 Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Although numerous medical image fusion methods have been proposed, most of these approaches are sensitive to noise and usually lead to fused-image distortion and image information loss. Furthermore, they lack universality when dealing with different kinds of medical images. In this paper, we propose a new medical image fusion method to overcome the aforementioned issues of existing methods. It is achieved by combining the rolling guidance filter (RGF) and the spiking cortical model (SCM). Firstly, the saliency of the medical images is captured by RGF. Secondly, a self-adaptive threshold for the SCM is obtained by utilizing the mean and variance of the source images. Finally, the fused image is obtained from the SCM motivated by the RGF coefficients. Experimental results show that the proposed method is superior to other currently popular methods in both subjective visual performance and objective criteria. Liu Shuaiqi, Zhao Jie, and Shi Mingzhu Copyright © 2015 Liu Shuaiqi et al. All rights reserved. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data Wed, 03 Jun 2015 07:57:39 +0000 In the past decade, researchers in oncology have sought to develop survival prediction models using gene expression data. 
The least absolute shrinkage and selection operator (lasso) has been widely used to select genes that truly correlate with a patient’s survival. The lasso selects genes for prediction by shrinking a large number of coefficients of the candidate genes towards zero, based on a tuning parameter that is often determined by cross-validation (CV). However, this method can pass over (or fail to identify) true positive genes (i.e., it identifies false negatives) in certain instances, because the lasso tends to favor the development of a simple prediction model. Here, we attempt to monitor the identification of false negatives by developing a method for estimating the number of true positive (TP) genes for a series of values of the tuning parameter, assuming a mixture distribution for the lasso estimates. Using our developed method, we performed a simulation study to examine its precision in estimating the number of TP genes. Additionally, we applied our method to a real gene expression dataset and found that it was able to identify genes correlated with survival that a CV method was unable to detect. Shuhei Kaneko, Akihiro Hirakawa, and Chikuma Hamada Copyright © 2015 Shuhei Kaneko et al. All rights reserved. The Technological Growth in eHealth Services Wed, 03 Jun 2015 07:56:59 +0000 The infusion of information communication technology (ICT) into health services is emerging as an active area of research. It has several advantages, but perhaps the most important one is providing medical benefits to everyone, irrespective of geographic boundaries, in a cost-effective and timely manner, along with global expertise and holistic services. This paper provides a systematic review of the technological growth in eHealth services. The present study reviews and analyzes the role of four important technologies, namely, satellite, internet, mobile, and cloud, for providing health services. 
Shilpa Srivastava, Millie Pant, Ajith Abraham, and Namrata Agrawal Copyright © 2015 Shilpa Srivastava et al. All rights reserved. Monte Carlo Calculation of Radioimmunotherapy with 90Y-, 177Lu-, 131I-, 124I-, and 188Re-Nanoobjects: Choice of the Best Radionuclide for Solid Tumour Treatment by Using TCP and NTCP Concepts Tue, 02 Jun 2015 07:32:44 +0000 Radioimmunotherapy has shown that the use of monoclonal antibodies combined with a radioisotope such as 131I or 90Y remains ineffective for the treatment of solid and radioresistant tumours. Previous simulations have revealed that an increase in the number of 90Y atoms labelled to each antibody or nanoobject could be a solution to improve treatment output. It now seems important to assess the treatment output and toxicity when radionuclides such as 90Y, 177Lu, 131I, 124I, and 188Re are used. Tumour control probability (TCP) and normal tissue complication probability (NTCP) curves versus the number of radionuclides per nanoobject were computed with MCNPX to evaluate treatment efficacy for solid tumours and to predict the incidence of surrounding side effects. Analyses were carried out for two solid tumour sizes of 0.5 and 1.0 cm radius and for a nanoobject (i.e., a radiolabelled antibody) distributed uniformly or nonuniformly throughout a solid tumour (e.g., non-small-cell lung cancer (NSCLC)). 90Y and 188Re are the best candidates for solid tumour treatment when only one radionuclide is coupled to one carrier. Furthermore, regardless of the radionuclide properties, high values of TCP can be reached without toxicity if the number of radionuclides per nanoobject increases. S. Lucas, O. Feron, B. Gallez, B. Masereel, C. Michiels, and T. Vander Borght Copyright © 2015 S. Lucas et al. All rights reserved. 
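The TCP and NTCP concepts in the radioimmunotherapy abstract above are usually evaluated with standard radiobiological models. The sketch below uses the generic Poisson/linear-quadratic TCP and the Lyman-Kutcher-Burman NTCP for a uniform dose; these are textbook forms with illustrative parameter values, not the MCNPX-based calculation of the study.

```python
import math

def tcp_poisson(dose_gy, n_clonogens=1e7, alpha=0.35, beta=0.035):
    """Poisson TCP under the linear-quadratic model: the tumour is
    controlled when no clonogenic cell survives, so
    TCP = exp(-N0 * SF(D)) with SF(D) = exp(-alpha*D - beta*D^2)."""
    sf = math.exp(-alpha * dose_gy - beta * dose_gy ** 2)
    return math.exp(-n_clonogens * sf)

def ntcp_lkb(dose_gy, td50=45.0, m=0.15):
    """Lyman-Kutcher-Burman NTCP for a uniformly irradiated organ:
    standard normal CDF of t = (D - TD50) / (m * TD50)."""
    t = (dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Plotting both functions against dose gives the familiar sigmoid TCP/NTCP curves; the study's novelty lies in sweeping the number of radionuclides per nanoobject, which changes the delivered dose entering such models.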
Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis Tue, 02 Jun 2015 06:48:46 +0000 Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) from the Laryngograph, together with 33 features from a prosodic analysis module, was used to model the listeners’ ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx. These correlations were approximately the same as the interrater agreement among the human raters. CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for meaningful objective support of perceptual analysis. Tino Haderlein, Cornelia Schwemmle, Michael Döllinger, Václav Matoušek, Martin Ptok, and Elmar Nöth Copyright © 2015 Tino Haderlein et al. All rights reserved. Preliminary Investigation of Microdosimetric Track Structure Physics Models in Geant4-DNA and RITRACKS Mon, 01 Jun 2015 13:12:32 +0000 The major differences between the physics models in the Geant4-DNA and RITRACKS Monte Carlo packages are investigated. 
Proton and electron ionisation interactions and electron excitation interactions in water are investigated in the current work. While these packages use similar semiempirical physics models for the inelastic cross-sections, the implementation of these models is demonstrated to be significantly different. This is demonstrated in a simple Monte Carlo simulation designed to identify differences in the interaction cross-sections. Michael Douglass, Scott Penfold, and Eva Bezak Copyright © 2015 Michael Douglass et al. All rights reserved. The Influence of DNA Configuration on the Direct Strand Break Yield Mon, 01 Jun 2015 12:37:51 +0000 Purpose. To study the influence of DNA configuration on the direct damage yield. No indirect effect has been accounted for. Methods. The Geant4-DNA code was used to simulate the interactions of protons and alpha particles with geometrical models of the A-, B-, and Z-DNA configurations. The direct total, single, and double strand break yields and the site-hit probabilities were determined. Certain features of the energy deposition process were also studied. Results. A slight increase of the site-hit probability as a function of the incident particle linear energy transfer was found for each DNA configuration. Each DNA form presents a well-defined site-hit probability, independent of the particle linear energy transfer. Approximately 70% of the inelastic collisions and ~60% of the absorbed dose are due to secondary electrons. These fractions are slightly higher for protons than for alpha particles at the same incident energy. Conclusions. The total direct strand break yield for a given DNA form depends weakly on the DNA conformation topology. This yield is practically determined by the target volume of the DNA configuration. However, the double strand break yield increases with the packing ratio of the DNA double helix; thus, it depends on the DNA conformation. M. A. Bernal, C. E. deAlmeida, S. Incerti, C. Champion, V. Ivanchenko, and Z. 
Francis Copyright © 2015 M. A. Bernal et al. All rights reserved. Electrical Neuroimaging with Irrotational Sources Sun, 31 May 2015 07:58:39 +0000 This paper discusses theoretical aspects of the modeling of the sources of the EEG (i.e., the bioelectromagnetic inverse problem or source localization problem). Using the Helmholtz decomposition (HD) of the current density vector (CDV) of the primary current into an irrotational (I) and a solenoidal (S) part, we show that only the irrotational part can contribute to the EEG measurements. In particular, we present for the first time the HD of a dipole and of a pure irrotational source. We show that, for both kinds of sources, I extends over all of space, independently of whether the source is spatially concentrated (as the dipole is) or not. However, the divergence remains confined to a region coinciding with the expected location of the sources, confirming that it is the divergence rather than the CDV that really defines the spatial extension of the generators, from which it follows that an irrotational source model (ELECTRA) is always physiologically meaningful as long as the divergence remains confined to the brain. Finally, we show that the irrotational source model remains valid for the most general electrodynamics model of the EEG in inhomogeneous anisotropic dispersive media, and thus far beyond the (quasi)static approximation. Rolando Grave de Peralta Menendez and Sara Gonzalez Andino Copyright © 2015 Rolando Grave de Peralta Menendez and Sara Gonzalez Andino. All rights reserved. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences Thu, 28 May 2015 12:36:32 +0000 Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. 
Unfortunately, most existing methods are heuristic algorithms which, because the problem is NP-complete, cannot reveal all OPSMs. In particular, deep OPSMs, which correspond to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSMs are missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to mine frequent sequential patterns efficiently. Finally, experiments were conducted on gene and synthetic datasets. The results demonstrate the effectiveness and efficiency of this method. Yun Xue, Zhengling Liao, Meihang Li, Jie Luo, Qiuhua Kuang, Xiaohui Hu, and Tiechen Li Copyright © 2015 Yun Xue et al. All rights reserved.
Statistical and Computational Methods for Genetic Diseases: An Overview Thu, 28 May 2015 11:29:41 +0000 The identification of the causes of genetic diseases has been carried out by several approaches of increasing complexity. Innovations in genetic methodologies have led to the production of large amounts of data that need the support of statistical and computational methods to be processed correctly. The aim of this paper is to provide an overview of statistical and computational methods, paying particular attention to methods for sequence analysis and complex diseases. Francesco Camastra, Maria Donata Di Taranto, and Antonino Staiano Copyright © 2015 Francesco Camastra et al. All rights reserved.
Optimization and Corroboration of the Regulatory Pathway of p42.3 Protein in the Pathogenesis of Gastric Carcinoma Thu, 28 May 2015 11:01:21 +0000 Aims.
To optimize and verify the regulatory pathway of p42.3 in the pathogenesis of gastric carcinoma (GC) using intelligent algorithms. Methods. Bioinformatics methods were used to analyze the structural domain features of the p42.3 protein. Proteins with the same domains and functions similar to those of p42.3 were screened out for reference, and a possible regulatory pathway of p42.3 was established by integrating the acting pathways of these proteins. Then, the similarity between the reference proteins and the p42.3 protein was computed by a multiparameter weighted summation method, and the result was taken as the prior probability of the initial node in a Bayesian network. The probability of occurrence of each candidate pathway was then calculated with the conditional probability formula, and the pathway with the maximum probability was regarded as the most probable pathway of p42.3. Finally, molecular biology experiments were conducted to confirm it. Results. In the Bayesian network for p42.3, the acting pathway “S100A11→RAGE→P38→MAPK→Microtubule-associated protein→Spindle protein→Centromere protein→Cell proliferation” had the highest probability, and it was also validated by biological experiments. Conclusions. The potentially important role of p42.3 in the occurrence of gastric carcinoma was verified by theoretical analysis and preliminary tests, which will help in studying the relationship between p42.3 and gastric carcinoma. Yibin Hao, Tianli Fan, and Kejun Nan Copyright © 2015 Yibin Hao et al. All rights reserved.
Unified Modeling of Familial Mediterranean Fever and Cryopyrin Associated Periodic Syndromes Thu, 28 May 2015 09:27:45 +0000 Familial Mediterranean fever (FMF) and cryopyrin-associated periodic syndromes (CAPS) are two prototypical hereditary autoinflammatory diseases, characterized by recurrent episodes of fever and inflammation resulting from mutations in the MEFV and NLRP3 genes, which encode the Pyrin and Cryopyrin proteins, respectively.
Pyrin and Cryopyrin play key roles in the assembly of the multiprotein inflammasome complex, which regulates the activity of the enzyme Caspase 1 and its target cytokine, IL-1β. Overproduction of IL-1β by Caspase 1 is the main cause of the episodic fever and inflammatory findings in FMF and CAPS. We present a unifying dynamical model for FMF and CAPS in the form of coupled nonlinear ordinary differential equations. The model is composed of two subsystems, which capture the interactions and dynamics of the key molecular players and the insults on the immune system. One of the subsystems, which contains a coupled positive-negative feedback motif, captures the dynamics of inflammation formation and regulation. We perform a comprehensive bifurcation analysis of the model and show that it exhibits three modes, capturing the Healthy, FMF, and CAPS cases. The mutations in Pyrin and Cryopyrin are reflected in the values of three model parameters. We present extensive simulation results for the model that match clinical observations. Yasemin Bozkurt, Alper Demir, Burak Erman, and Ahmet Gül Copyright © 2015 Yasemin Bozkurt et al. All rights reserved.
Evolutionary Influenced Interaction Pattern as Indicator for the Investigation of Natural Variants Causing Nephrogenic Diabetes Insipidus Thu, 28 May 2015 09:07:40 +0000 The importance of short membrane sequence motifs has been shown in many studies, emphasizing the value of sequence motif analysis. Together with specific transmembrane helix-helix interactions, the analysis of interacting sequence parts is helpful for understanding the membrane protein folding process and how the three-dimensional fold is retained. Here we present a simple high-throughput analysis method for deriving mutational information from interacting sequence parts.
Applied to aquaporin water channel proteins, our approach supports the analysis of mutational variants within different interacting subsequences and, ultimately, the investigation of natural variants that cause diseases such as nephrogenic diabetes insipidus. In this work we demonstrate a simple method for large-scale membrane protein data analysis. As shown, the presented in silico analyses provide information about interacting sequence parts that are constrained by protein evolution. We present a simple graphical visualization medium for representing evolutionary influenced interaction pattern pairs (EIPPs), adapted to mutagenesis investigations of aquaporin-2, a protein whose mutants are involved in the rare endocrine disorder known as nephrogenic diabetes insipidus, and of membrane proteins in general. Furthermore, we present a new method to derive new evolutionary variations within EIPPs that can be used for further mutagenesis laboratory investigations. Steffen Grunert and Dirk Labudde Copyright © 2015 Steffen Grunert and Dirk Labudde. All rights reserved.
From Heuristic to Mathematical Modeling of Drugs Dissolution Profiles: Application of Artificial Neural Networks and Genetic Programming Tue, 26 May 2015 11:53:45 +0000 The purpose of this work was to develop a mathematical model of drug dissolution from solid lipid extrudates based on an empirical approach. Artificial neural network (ANN) and genetic programming (GP) tools were used. Sensitivity analysis of the ANNs allowed reduction of the original input vector. GP allowed creation of a mathematical equation in two major approaches: (1) direct modeling of the dissolution profile versus the extrudate diameter and the time variable and (2) indirect modeling through the Weibull equation. The ANNs also provided information about the minimum achievable generalization error and a way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found important for the drug dissolution: the extrudate diameter and the time variable.
The extrudate length was found not to be important. Both GP modeling approaches allowed the creation of relatively simple equations with predictive performance comparable to that of the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). Direct GP modeling of the dissolution profile versus extrudate diameter and time resulted in the most robust model. The work demonstrates how ANNs and GP can be combined to escape the black-box drawback of ANNs without losing their superior predictive performance. Open Source software was used to deliver the state-of-the-art models and modeling strategies. Aleksander Mendyk, Sinan Güres, Renata Jachowicz, Jakub Szlęk, Sebastian Polak, Barbara Wiśniowska, and Peter Kleinebudde Copyright © 2015 Aleksander Mendyk et al. All rights reserved.
Inside of the Linear Relation between Dependent and Independent Variables Mon, 25 May 2015 11:53:36 +0000 Simple and multiple linear regression analyses are statistical methods used to investigate the link between the activity/property of active compounds and their structural chemical features. One assumption of linear regression is that the errors follow a normal distribution. This paper introduces a new approach to solving simple linear regression in which no assumptions about the distribution of the errors are made. The proposed approach maximizes the probability of observing the event according to the random error. The use of the proposed approach is illustrated on ten classes of compounds with different activities or properties. The proposed method proved reliable and was shown to fit the observed data properly, compared with the conventional approach assuming normally distributed errors. Lorentz Jäntschi, Lavinia L. Pruteanu, Alina C. Cozma, and Sorana D. Bolboacă Copyright © 2015 Lorentz Jäntschi et al. All rights reserved.
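For context on the last abstract: under the classical assumption the authors relax, namely normally distributed errors, maximum-likelihood fitting of a simple linear model y = a + b·x reduces to ordinary least squares. The sketch below shows only that conventional baseline, not the authors' distribution-free estimator, and the data points are illustrative, not taken from the paper.

```python
def ols_simple(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x.

    Returns (intercept, slope). Equivalent to maximum-likelihood
    estimation when errors are assumed i.i.d. normal.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope = covariance(x, y) / variance(x), intercept from the means.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

if __name__ == "__main__":
    # Illustrative data, roughly y = 2x with noise.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1]
    a, b = ols_simple(xs, ys)
    print(a, b)
```

The point of the contrast drawn in the abstract is that this closed form is optimal only under the normality assumption; when the error distribution is unknown, the fitted line can be systematically misleading, which motivates the authors' assumption-free formulation.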