Special Issue: Scalable Data Mining Algorithms in Computational Biology and Biomedicine
Research Article | Open Access
Zhongwei Li, Beibei Sun, Yuezhen Xin, Xun Wang, Hu Zhu, "A Computational Method for Optimizing Experimental Environments for Phellinus igniarius via Genetic Algorithm and BP Neural Network", BioMed Research International, vol. 2016, Article ID 4374603, 6 pages, 2016. https://doi.org/10.1155/2016/4374603
A Computational Method for Optimizing Experimental Environments for Phellinus igniarius via Genetic Algorithm and BP Neural Network
Flavones, the secondary metabolites of the fungus Phellinus igniarius, have antioxidant and anticancer properties. Because of this great medicinal value, there is a large demand for flavones for medical use and research. Flavones extracted from natural Phellinus cannot meet the medical and research needs, since Phellinus is very rare in the natural environment and hard to cultivate artificially. The production of flavones therefore relies mainly on the fermentation culture of Phellinus, which makes the optimization of culture conditions an important problem. Previous work optimized the fermentation culture conditions, for example, by response surface methodology, which reported an optimal flavone production of 1532.83 μg/mL. To further optimize the fermentation culture conditions for flavones, this work proposes a hybrid intelligent algorithm combining a genetic algorithm and a BP neural network. Our method has intelligent learning ability and can overcome the limitation of large-scale biotic experiments. Through simulations, the optimal culture conditions are obtained and the flavone production is increased to 2200 μg/mL.
Phellinus is an ancient Chinese medicine with high medicinal value. Recent research confirmed that flavones, the secondary metabolites of Phellinus, can improve the human immune system, reduce the side effects of anticancer agents, and relieve patients' reactions to radiotherapy or chemotherapy. In addition, flavones have a positive effect on irregular menstruation and other gynecological diseases. There is great demand for flavone production, but natural Phellinus is very rare. Artificial culture of Phellinus is hard to implement because of the lack of culture technology and the long growth cycle of Phellinus. In 2008, Zeng et al. introduced a breeding method for Phellinus by protoplast fusion. Considering the limited production of Phellinus, it is necessary to develop processes for extracting flavones from Phellinus. Fermentation is usually used to produce the secondary metabolites of fungi, such as ethanol extracts of Phellinus baumii. It should be noted that different fermentation methods can generate secondary metabolites with different biological activities. Currently, the production of flavones is mainly based on the fermentation culture of Phellinus.
To increase the production of Phellinus, the fermentation culture conditions must be considered carefully, including the fermentation temperature, the pH value, the rotation speed of the centrifuge, the inoculation volume, and the seed age. The composition of the medium should also be taken into consideration. The many variables of fermentation conditions and culture media make optimization by biotic experiments a hard problem. Zhu et al. took the fermentation temperature, the inoculum size, the rotation speed of the centrifuge, and the bottling capacity as independent variables and the fermentation yield as the dependent variable. The quadratic regression orthogonal rotating combination design method was used to model the Phellinus linteus fermentation process, and the following optimal fermentation conditions of Phellinus linteus were obtained: a bottling volume of 120 mL, an inoculum size of 17 mL, a temperature of 26°C, and a centrifuge rotation speed of 135 r/min, at which the theoretical extreme value of fermentation mycelium production was 24.51 mg/mL. Other studies focused on fermentation parameters such as the dosage of carbon and nitrogen sources. In 2010, Zhu et al. used response surface methodology to derive optimum liquid fermentation conditions for the concentrations of corn starch, yeast extract, and VB1 and for the fermentation period, with a resulting mycelium (dry weight) production of P. linteus of 18.43 g/L [5, 6]. It should be noted that these studies were based on single-factor experiments, which made the outcomes depend on preset parameters. Machine learning strategies [7, 8] have been applied to solve such multivariable problems while relying on fewer biotic experiments [9, 10].
In this work, we focus on the optimization of fermentation conditions, namely the concentrations of glucose, maltose, mannitol, corn powder, yeast extract, copper sulfate, sodium chloride, ferrous sulfate, and vitamin B1. A hybrid algorithm combining a genetic algorithm (GA) and an artificial neural network (ANN) is introduced. This algorithm has intelligent learning ability and can overcome the limitation of large-scale biotic experiments. Through simulations, the optimal culture conditions are obtained and the flavone production is increased to 2200 μg/mL.
In this section, our method for optimizing the fermentation conditions with a BP neural network and a genetic algorithm is introduced.
2.1. BP Neural Network
The back propagation (BP) neural network, proposed by Rumelhart et al., is a multilayer feedforward network with an input layer, an intermediate (hidden) layer, and an output layer. It is now one of the most widely applied neural network models in practice. Every neuron in the input layer connects to every neuron in the hidden layer, and every neuron in the hidden layer connects to every output neuron; there are no connections between neurons in the same layer. A BP network can learn and store the relationship between input and output. In the learning process, back propagation updates the weights and threshold values of the network to minimize the error sum of squares. When a pair of learning samples is input into the network, errors are propagated backward from the output layer toward the input layer and the neuron activations are adjusted accordingly; as this backward error correction proceeds, the network's responses to the inputs become increasingly accurate.
A BP neural network is used here as the mathematical model of the fermentation conditions for Phellinus igniarius. The proposed network contains three layers of neurons:
(i) The input layer has 9 neurons for the values of the 9 fermentation factors: glucose, maltose, mannitol, corn powder, yeast extract, cupric sulfate, ferrous sulfate, sodium chloride, and vitamin B1.
(ii) The hidden layer has 11 neurons and generates the scaled estimate of the Phellinus yield. The number of hidden layer neurons is set to 11, since the variance between the predicted value and the actual value is minimal when the number of hidden neurons is 11.
(iii) The output layer has one neuron, which outputs the production of Phellinus igniarius.
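The 9-11-1 architecture above can be sketched as a minimal forward pass. This is an illustrative sketch only: the paper does not specify the activation functions, so sigmoid units and the random initialization below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 9 inputs (fermentation factors) -> 11 hidden neurons -> 1 output (scaled yield)
N_IN, N_HID, N_OUT = 9, 11, 1

# Weights and thresholds (biases); small random initialization (assumed)
W1 = rng.normal(0.0, 0.5, (N_HID, N_IN))
b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.5, (N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass: every input neuron feeds every hidden neuron, and
    every hidden neuron feeds the single output neuron; no connections
    exist within a layer."""
    h = sigmoid(W1 @ x + b1)   # hidden layer activations
    y = sigmoid(W2 @ h + b2)   # scaled yield estimate in (0, 1)
    return y

x = rng.uniform(0.0, 1.0, N_IN)  # one normalized condition vector
y = forward(x)
```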
The input and output values are normalized into the range [0, 1] by

x' = (x − x_min) / (x_max − x_min),

where x_max is the maximum value of the same class in the training set, x_min is the minimum value of the same class in the training set, x is the true value of the same class in the training set, and x' is the input or output value of the network.
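This min-max normalization (and its inverse, needed to map the network's scaled output back to real units) can be sketched as follows; the glucose values are hypothetical, not from the paper's training set.

```python
import numpy as np

def min_max_scale(x, x_min, x_max):
    """Scale a raw value into [0, 1] using the extremes of its class
    (column) in the training set: x' = (x - x_min) / (x_max - x_min)."""
    return (x - x_min) / (x_max - x_min)

def min_max_unscale(x_scaled, x_min, x_max):
    """Invert the scaling to recover a value in original units."""
    return x_scaled * (x_max - x_min) + x_min

# Example: hypothetical glucose concentrations (g/L)
glucose = np.array([5.0, 10.0, 20.0, 40.0])
lo, hi = glucose.min(), glucose.max()
scaled = min_max_scale(glucose, lo, hi)  # approximately [0, 0.143, 0.429, 1]
```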
The topological structure of the proposed BP neural network model is shown in Figure 1.
The Levenberg-Marquardt algorithm and the scaled conjugate gradient method are used for training the network. This training strategy uses both the first-derivative and the second-derivative information of the objective function, and each iteration updates the parameters as

w_{k+1} = w_k + α_k d_k,

where w_k is the vector of all the weights and thresholds in the network, d_k is the search direction in the vector space spanned by the components of w, and α_k is the step size that minimizes the objective along the direction d_k.
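The iterative update w_{k+1} = w_k + α_k d_k can be illustrated on a toy quadratic objective. This sketch uses plain steepest descent with an exact line search rather than the paper's Levenberg-Marquardt or scaled conjugate gradient methods; the matrix A and target t are arbitrary test data.

```python
import numpy as np

# Toy quadratic objective E(w) = ||A w - t||^2, standing in for the
# network's error sum of squares over weights-and-thresholds vector w.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
t = rng.normal(size=20)

def grad(w):
    # First derivative of E(w) = ||A w - t||^2
    return 2.0 * A.T @ (A @ w - t)

w = np.zeros(5)
for _ in range(100):
    d = -grad(w)                                  # search direction d_k
    # Exact line search for a quadratic: the step a_k minimizing E along d_k
    a = (d @ d) / (2.0 * d @ (A.T @ (A @ d)) + 1e-12)
    w = w + a * d                                 # w_{k+1} = w_k + a_k d_k
```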
To establish a training set, a series of experiments was completed as follows: the Phellinus strains were inoculated on PDA slant medium and cultivated at 28°C; 200 mL of PDA liquid medium was loaded into a 500 mL flask and, after inoculation, cultivated at 28°C with the centrifuge speed at 150 rpm and a natural pH; 250 mL shake flasks were then cultivated after inoculation with seed liquid, with a medium capacity of 100 mL, a temperature of 28°C, and a speed of 150 rpm. Data of experimental fermentation conditions with optimal production of Phellinus igniarius were selected from these experiments as the training set. The experimental conditions are shown in Table 1, in which "Glu" stands for glucose, "Mal" for maltose, "Mann" for mannitol, "CP" for corn powder, "CS" for cupric sulfate, "SC" for sodium chloride, "FS" for ferrous sulfate, and "TF" for total flavonoids.
Three parameters measure the accuracy and speed of the network: the obtained target error (MSE) is 0.018999, the operation time is 3 seconds, and the number of iterations is 577. The convergence of the neural network is shown in Figure 2.
As shown in Figure 2, the network converges rapidly at first, slows as it approaches the target solution, and finally converges to a best value. The coverage of the model is shown in Figure 3.
2.2. Genetic Algorithm for Optimization of Fermentation Condition
In this subsection, genetic algorithm (GA) is used to optimize the fermentation condition.
GA is an intelligent algorithm inspired by natural selection and genetic mechanisms. Following the basic law of natural evolution, "survival of the fittest" is the core mechanism of the genetic algorithm, and "reproduction," "crossover," and "mutation" operators are used in GA. The process has the following steps.
Step 1. Encode the individuals; since there are 9 variables under consideration, an individual contains 9 genes. For example, encode [glucose, maltose, mannitol, corn powder, yeast extract, copper sulfate, sodium chloride, ferrous sulfate, vitamin B1] as a real-valued vector [x1, x2, …, x9], where each xi lies in the concentration range of the corresponding factor and the unit in use is g/L.
Step 2. Select “good” individuals according to a fitness function.
Step 3. Remove individuals with low fitness.
Step 4. Perform crossover and mutation to generate new individuals.
Step 5. Generate a new generation for evaluation by fitness function, and go to Step 2.
The process can be repeated until the halting condition is matched.
In the iteration analysis, several different numbers of iterations were tested, with the crossover probability and the mutation probability varied over fixed ranges and the population size held constant. The chromosome size is 9, one gene for each of the factors glucose, maltose, mannitol, corn pulp powder, yeast extract, sodium chloride, ferrous sulfate, copper sulfate, and vitamin B1. The encoding strategy is floating-point (real) coding, so crossover and mutation operate directly on real values. The concentration ranges of the factors are as follows: glucose: 0–40 g/L, maltose: 0–40 g/L, mannitol: 0–40 g/L, corn pulp powder: 0–100 g/L, yeast extract: 0–100 g/L, copper sulfate: 0–0.5 g/L, sodium chloride: 0–10 g/L, ferrous sulfate: 0–0.5 g/L, and vitamin B1: 0–0.1 g/L.
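Steps 1–5 with real-coded chromosomes over these bounds can be sketched as follows. This is a simplified illustration, not the paper's implementation: the population size, generation count, and probabilities are assumed (the paper's values are not recoverable), truncation selection stands in for the paper's roulette operator, and `fitness` is a placeholder for the trained BP network.

```python
import random

# Concentration bounds (g/L) for the 9 factors, as given in the text
BOUNDS = [(0, 40), (0, 40), (0, 40), (0, 100), (0, 100),
          (0, 0.5), (0, 10), (0, 0.5), (0, 0.1)]
POP, GENS, PM = 50, 200, 0.05   # assumed values

def fitness(ind):
    """Placeholder: in the paper this is the trained BP network's
    predicted flavone yield for the condition vector `ind`."""
    return sum(x / (hi - lo) for x, (lo, hi) in zip(ind, BOUNDS))

def random_individual():
    # Step 1: a real-coded chromosome of 9 genes within the bounds
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Step 4: arithmetic crossover acts directly on the real-valued genes
    t = random.random()
    return [t * x + (1 - t) * y for x, y in zip(a, b)]

def mutate(ind):
    # Step 4: each gene is reset within its bounds with probability PM
    return [random.uniform(lo, hi) if random.random() < PM else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

random.seed(1)
pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)       # Step 2: rank by fitness
    survivors = pop[:POP // 2]                # Step 3: drop low-fitness half
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children                # Step 5: next generation
best = max(pop, key=fitness)
```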
In our method, the BP neural network obtained in Section 2.1 is used as the fitness function to select good individuals. The selection operator is roulette wheel selection, also known as the proportional selection operator. The basic idea is that the probability of each individual being selected is proportional to its fitness value. Assuming the population size is N and x_i is an individual with fitness f(x_i), the selection probability of x_i is

P(x_i) = f(x_i) / Σ_{j=1}^{N} f(x_j).
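The proportional selection rule P(x_i) = f(x_i) / Σ_j f(x_j) can be implemented directly; the three-individual population below is a made-up example for checking the proportions.

```python
import random

def roulette_select(population, fitnesses):
    """Roulette wheel (proportional) selection: individual i is chosen
    with probability f_i / sum_j f_j (fitnesses must be non-negative)."""
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for ind, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return ind
    return population[-1]   # guard against floating-point round-off

random.seed(0)
pop = ["A", "B", "C"]
fit = [1.0, 3.0, 6.0]       # "C" should be picked about 60% of the time
picks = [roulette_select(pop, fit) for _ in range(10000)]
frac_c = picks.count("C") / len(picks)   # roughly 0.6
```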
Since GA starts from randomly selected individuals, we performed repeated data experiments. Table 2 shows the optimized fermentation conditions for Phellinus igniarius, where:
(i) the average glucose is 15.1 g/L;
(ii) the average maltose is 15.264 g/L;
(iii) the average mannitol is 29.83 g/L;
(iv) the average corn pulp powder is 5.23 g/L;
(v) the average yeast extract is 5.14 g/L;
(vi) the average copper sulfate is 0.18 g/L;
(vii) the average sodium chloride is 9.89 g/L;
(viii) the average ferrous sulfate is 0.3 g/L;
(ix) the average vitamin B1 is 1.03 g/L.
The average production of Phellinus is 2200 μg/mL.
Our method has intelligent learning ability (via the BP neural network) and can overcome the limitation of large-scale biotic experiments. Through simulations, the optimal culture conditions are obtained, and the flavone production is increased to 2200 μg/mL from the previously known optimal result of 1532.83 μg/mL.
In this work, we focused on the optimization of fermentation conditions for Phellinus igniarius, including the concentrations of glucose, maltose, mannitol, corn powder, yeast extract, copper sulfate, sodium chloride, ferrous sulfate, and vitamin B1. A hybrid GA is proposed, in which a BP neural network trained on 25 groups of experimental data with optimal productions of Phellinus igniarius is used as the fitness function of the GA. The simulation results show that our method can overcome the limitation of large-scale biotic experiments. The optimal culture conditions are obtained and the flavone production is increased to 2200 μg/mL. Our work may also serve as a guide for "Precision Medicine" with personal SNP data and other tasks in bioinformatics [17, 18].
In our study, a BP neural network is used, which is a classical neural computing model. It would be of interest to use spiking neural network computing models for this optimization [19–22]. In the framework of membrane computing, cell-like and tissue-like computing models have been proved to be powerful bioinspired computing models. What would happen if these models were used to calculate the optimized conditions? Some web servers are useful for biological data processing, and it would be worthwhile to develop web servers for optimizing experimental conditions.
The authors declare that they have no competing interests.
The research is under the auspices of National Natural Science Foundation of China (nos. 41276135, 31172010, 61272093, 61320106005, 61402187, 61502535, 61572522, and 61572523), Program for New Century Excellent Talents in University (NCET-13-1031), 863 Program (2015AA020925), Fundamental Research Funds for the Central Universities (R1607005A), and China Postdoctoral Science Foundation funded project (2016M592267).
- Y. Yang, J. Hu, Y. Liu et al., “Antioxidant and cytotoxic activities of ethanolic extracts and isolated fractions of species of the genus Phellinus Quél. (Aphyllophoromycetideae),” International Journal of Medicinal Mushrooms, vol. 13, no. 2, pp. 145–152, 2011.
- N.-K. Zeng, Q.-Y. Wang, and M.-S. Su, “The breeding of Phellinus baumii by protoplast fusion,” Journal of Chinese Medicinal Materials, vol. 31, no. 4, pp. 475–478, 2008.
- Q. Shao, Y. Yang, T. Li et al., “Biological activities of ethanol extracts of Phellinus baumii (higher basidiomycetes) obtained by different fermentation methods,” International Journal of Medicinal Mushrooms, vol. 17, no. 4, pp. 361–369, 2015.
- Z. Zhu, N. Li, J. Wang, and X. Tang, “Establishment and analysis of the fermentation model of Phellinus igniarius,” AASRI Procedia, vol. 1, pp. 2–7, 2012.
- B. Gu, X. Sun, and V. S. Sheng, “Structural minimax probability machine,” IEEE Transactions on Neural Networks and Learning Systems, 2016.
- M. Zhu, Y. Wang, G. Zhang et al., “Liquid fermentation conditions for production of mycelium of Phellinus linteus,” China Brewing, vol. 29, no. 11, pp. 88–91, 2010.
- B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, “Incremental support vector learning for ordinal regression,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403–1416, 2015.
- B. Gu, V. S. Sheng, Z. Wang, D. Ho, S. Osman, and S. Li, “Incremental learning for v-support vector regression,” Neural Networks, vol. 67, pp. 140–150, 2015.
- X. Wen, L. Shao, Y. Xue, and W. Fang, “A rapid learning algorithm for vehicle classification,” Information Sciences, vol. 295, pp. 395–406, 2015.
- Z. Xia, X. Wang, X. Sun, and Q. Wang, “A secure and dynamic multi-keyword ranked search scheme over encrypted cloud data,” IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 2, pp. 340–352, 2016.
- D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
- L.-Y. Zhang and B.-R. Tao, “Ontology mapping based on Bayesian network,” Journal of Donghua University (English Edition), vol. 32, no. 4, pp. 681–687, 2015.
- C. Kwak, J. A. Ventura, and K. Tofang-Sazi, “Neural network approach for defect identification and classification on leather fabric,” Journal of Intelligent Manufacturing, vol. 11, no. 5, pp. 485–499, 2000.
- P. K. H. Phua and D. Ming, “Parallel nonlinear optimization techniques for training neural networks,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1460–1468, 2003.
- M. Gen and R. Cheng, Genetic Algorithms and Engineering Optimization, John Wiley & Sons, 2000.
- P. Li, M. Guo, C. Wang, X. Liu, and Q. Zou, “An overview of SNP interactions in genome-wide association studies,” Briefings in Functional Genomics, vol. 14, no. 2, pp. 143–155, 2015.
- B. Liu, F. Liu, X. Wang, J. Chen, L. Fang, and K. Chou, “Pse-in-One: a web server for generating various modes of pseudo components of DNA, RNA, and protein sequences,” Nucleic Acids Research, vol. 43, no. 1, pp. W65–W71, 2015.
- R. Wang, Y. Xu, and B. Liu, “Recombination spot identification based on gapped k-mers,” Scientific Reports, vol. 6, Article ID 23934, 2016.
- X. Zhang, B. Wang, and L. Pan, “Spiking neural P systems with a generalized use of rules,” Neural Computation, vol. 26, no. 12, pp. 2925–2943, 2014.
- T. Song and L. Pan, “On the universality and non-universality of spiking neural P systems with rules on synapses,” IEEE Transactions on NanoBioscience, vol. 14, no. 8, pp. 960–966, 2015.
- T. Song, Q. Zou, X. Liu, and X. Zeng, “Asynchronous spiking neural P systems with rules on synapses,” Neurocomputing, vol. 151, part 3, pp. 1439–1445, 2015.
- X. Wang, T. Song, F. Gong, and P. Zheng, “On the computational power of spiking neural P systems with self-organization,” Scientific Reports, vol. 6, Article ID 27624, 2016.
- X. Zhang, L. Pan, and A. Păun, “On the universality of axon P systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.
- X. Zhang, Y. Liu, B. Luo, and L. Pan, “Computational power of tissue P systems for generating control languages,” Information Sciences, vol. 278, pp. 285–297, 2014.
Copyright © 2016 Zhongwei Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.