Mathematical Problems in Engineering

Special Issue

Application of Discrete Mathematics in Urban Transportation System Analysis


Research Article | Open Access


Bo Yan, Yao Cui, Lin Zhang, Chao Zhang, Yongzhi Yang, Zhenming Bao, Guobao Ning, "Beam Structure Damage Identification Based on BP Neural Network and Support Vector Machine", Mathematical Problems in Engineering, vol. 2014, Article ID 850141, 8 pages, 2014.

Beam Structure Damage Identification Based on BP Neural Network and Support Vector Machine

Academic Editor: Rui Mu
Received: 10 Nov 2013
Revised: 29 Nov 2013
Accepted: 30 Nov 2013
Published: 06 Jan 2014


It is difficult to find cracks in marine structures by direct manual inspection. When cracks in critical components grow under the extreme offshore environment, the whole structure can fail, endangering the staff's safety and causing significant economic loss and marine environmental pollution. Early discovery of structural cracks is therefore very important. In this paper, a beam structure damage identification model based on intelligent algorithms is proposed to identify local cracks in supported beams of ocean platforms. Taking a simply supported beam with a single crack and with double cracks as examples, the displacement modes and strain modes of the beam are obtained. The results show that the strain mode difference curves change drastically only at the damaged locations, and that different degrees of damage produce different degrees of mutation in the difference curves. Models based on a support vector machine (SVM) and a BP neural network can then identify cracks in the supported beam intelligently, distinguishing the damage degrees of the sound, single-crack, and double-crack conditions. Furthermore, the two methods are compared. Both achieve good identification precision and adaptability, and the damage identification based on the SVM yields smaller errors.

1. Introduction

The design life of an offshore platform is usually 15 to 20 years. Its maintenance cost is extremely high but, compared with its purchase cost, acceptable. From an economic viewpoint, it is therefore important to evaluate new platforms, estimate the residual life of existing platforms, and prolong the service life of jacket platforms, so as to ensure production safety, improve production efficiency, extend lifespan, and save maintenance cost. Thus, an effective beam structure damage identification model is needed to detect damage in time, evaluate the damage degree, verify and improve current platform design methods, and provide references for future assessment of residual structural life.

There is a large literature on the damage identification problem. Kim and Melhem [1] summarized applications of wavelet analysis to damage checking and health monitoring in mechanical and other structures. Sun and Chang [2] used the wavelet packet transform to analyze structural measurement signals, defined a damage index based on wavelet packets, and combined it with a neural network to identify damage. In the 1970s, Cawley and Adams [3] proposed that vibration test data could be used to detect and study material damage. Elkordy et al. [4] performed structural damage detection with a BP network, training it on experimental data from a shaking table together with simulated finite element data, and then used the trained network to identify structural damage. Pandey and Barai [5] took displacements under static load as the inputs of a multilayer perceptron model to detect damage in steel bridges. Kirkegaard and Rytter [6] exploited the frequency changes before and after damage and used a BP neural network to locate damage and identify its degree in a steel beam. Vakil-Baghmisheh et al. [7] proposed a structural damage identification method based on a genetic algorithm, using an analytical model of a cracked cantilever beam to obtain structural frequencies by numerical simulation. Chou and Ghaboussi [8] formulated the damage problem as an optimization problem and solved it with genetic algorithms, using displacements measured at several static degrees of freedom to determine changes in the cross-sectional area and elastic modulus of structural elements. Other successful applications can be found in [9, 10].

This paper proposes a beam structure damage identification model based on intelligent algorithms to identify cracks in beams, since BP neural networks and SVMs have been successfully applied to this kind of complex problem [11–19]. The BP neural network and the SVM are applied to identify the damage degree of the beam intelligently under the sound, single-crack, and double-crack conditions.

This paper introduces beam structure damage identification models based on a BP neural network and on an SVM, respectively. The remainder of the paper is organized as follows. Section 2 describes the two identification models. Section 3 determines the input parameters. In Section 4, an empirical example is used to examine the effectiveness of the models. Conclusions are given in Section 5.

2. Beam Structure Damage Identification Model

2.1. Artificial Neural Network

An artificial neural network is a form of computational intelligence that simulates human thinking based on neural structure and physiology. Modern computers excel at calculation and fast information processing, but in handling complexity (schema awareness, pattern recognition, and decision making in complex environments) they are nowhere near human. A modern computer can only execute programs edited in advance under the stored-program architecture; it cannot adapt to a complex environment or learn within it. The brain works very differently from a computer system: it is a highly complex, nonlinear, parallel processing system formed from a vast number of interconnected basic units. Although a single neuron reacts about five orders of magnitude more slowly than a computer's basic unit (a logic gate), the number of neurons is huge and each can connect with thousands or more of other neurons, so the brain processes complex problems far faster than a computer.

Therefore, drawing on the brain's operating mechanism and organizational structure, researchers emulate its intelligence to look for better ways to store and process information. Ultimately, new integrated information systems based on intelligent computing are constructed that are closer to human intelligence and can be used to process complex information. In this paper, an artificial neural network is used to identify the damage degree of beam structures.

2.1.1. Neural Model

The model of a neuron, the basic unit of an ANN, is shown in Figure 1. Its basic elements are as follows.
(1) A group of connections. The connection strength is represented by the weight on each connection line: a positive weight denotes an active (excitatory) state, a negative weight a suppressive (inhibitory) state.
(2) A summation unit, which computes the weighted sum of the input signals.
(3) A nonlinear activation function, which provides the nonlinear mapping and limits the amplitude range of the neuron's output. Common activation functions include the piecewise linear function, the threshold function, the sigmoid function, and so on.
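As a concrete sketch of these three elements, the following minimal Python example (with illustrative weights, not values from the paper) computes a neuron output as the activated weighted sum of its inputs:

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """A single artificial neuron: the weighted sum of the inputs plus a
    bias, passed through a nonlinear activation function."""
    return activation(np.dot(w, x) + b)

# Example: three inputs with one suppressive (negative) connection weight.
x = np.array([0.5, -0.2, 0.1])
w = np.array([0.8, -0.4, 0.3])   # positive = active, negative = suppressive
out = neuron(x, w, b=0.1)
```

Here `np.tanh` stands in for the sigmoid-type activation mentioned above; any of the listed activation functions could be passed instead.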

2.1.2. BP Neural Network

The BP neural network is the network most widely used in beam structure damage identification. The BP algorithm consists of forward and backward (reverse) propagation. The network contains an input layer, a hidden layer, and an output layer, in which the neurons of each layer influence only the neurons of the next layer. In forward propagation, the signal is transmitted from the input layer to the hidden layer, processed there by the activation function, and passed to the output layer, which produces the output. The output is compared with the expected value, and the error is corrected through backward propagation. This process is repeated: during each backward pass, the weights are adjusted according to the error of the previous forward pass so as to reduce it. The calculation stops when the error meets the requirement.
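The forward and backward passes described above can be sketched as follows. This is a minimal illustrative implementation with one hidden layer and plain gradient descent, not the network configuration reported later in the paper; the toy data and all hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNetwork:
    """Minimal three-layer BP network (input -> hidden -> output) with
    sigmoid hidden units, a linear output, and plain gradient descent."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)  # hidden-layer activations
        return self.W2 @ self.h + self.b2        # linear output

    def backward(self, x, err):
        # err = prediction - target; propagate it back layer by layer
        delta_h = (self.W2.T @ err) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(err, self.h)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(delta_h, x)
        self.b1 -= self.lr * delta_h

    def train(self, X, Y, epochs=2000):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                err = self.forward(x) - y
                self.backward(x, err)

# Toy data: map 3-d feature vectors to a scalar "damage degree"
X = np.array([[0.1, 0.2, 0.0], [0.3, 0.1, 0.2],
              [0.5, 0.4, 0.3], [0.7, 0.6, 0.5]])
Y = np.array([[0.03125], [0.0625], [0.125], [0.15625]])
net = BPNetwork(3, 6, 1)
net.train(X, Y)
```

The stopping rule here is a fixed epoch count for simplicity; the text above instead stops when the error meets a requirement.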

The adjustment of BP network connection weights is well founded. However, the algorithm is a gradient descent search and therefore has some inherent characteristics, as follows.

(1) Slow Convergence. For complex problems, the BP algorithm may need many training repetitions to converge. Once the whole network reaches a certain training level, the convergence speed slows to a very low level and occupies the machine for a long time.

(2) Easy to Fall into Local Minima of the Error Function. Since the BP algorithm is a gradient descent method, the training results gradually approach a minimum of the error along the surface of the error function. For a complex problem, however, the error function is usually a surface in a high-dimensional space, and the training results easily fall into a local minimum rather than the global minimum. Therefore, although the network weights under the BP algorithm converge to a unique value, it is difficult to ensure that this value corresponds to the global minimum of the error surface.

(3) Instability of Training. The weight change is determined by the learning rate, and a large learning rate can make the system unstable. In the initial training of the network, a larger learning rate gives faster convergence and better error decrease, but this benefit is limited to the early stage. Later in training, a high learning rate may make the weight corrections too large, so that the error overshoots the minimum, the system never converges, and the whole system becomes unstable.

In the classic BP algorithm, the learning rate is usually set to a constant, which largely determines the performance of the algorithm. A high learning rate can improve efficiency but often causes excessive weight fluctuations and makes the system unstable; a low learning rate lengthens training and occupies the machine too long. To solve this problem, researchers have proposed a variety of adaptive learning rate methods. In this paper, a competitive learning method is used to improve the BP neural network.
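One simple adaptive learning rate scheme of the kind referred to above can be sketched as follows. The accept/reject rule and the growth/shrink factors are illustrative assumptions, not the competitive learning method actually used in this paper:

```python
import numpy as np

def adaptive_gd(grad, loss, w0, lr=0.5, up=1.05, down=0.7, steps=300):
    """Gradient descent with a simple adaptive learning rate: grow the
    rate while the loss keeps falling, shrink it (and reject the step)
    when the loss rises. This counteracts the fixed-rate instability
    discussed above."""
    w = np.asarray(w0, dtype=float)
    prev = loss(w)
    for _ in range(steps):
        cand = w - lr * grad(w)
        cur = loss(cand)
        if cur < prev:          # step helped: accept it and speed up
            w, prev = cand, cur
            lr *= up
        else:                   # overshoot: reject the step and slow down
            lr *= down
    return w, lr

# Ill-conditioned quadratic where a fixed large rate would oscillate
A = np.diag([1.0, 50.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_final, lr_final = adaptive_gd(grad, loss, [5.0, 5.0])
```

Starting from a learning rate of 0.5, the scheme shrinks the rate until the stiff direction stops diverging, then hovers near the largest stable value.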

2.2. Basic Theory of Support Vector Machine

Many traditional statistical methods are based on the law of large numbers and require large amounts of sample data as their theoretical basis, which often does not fit reality: in practice, situations with few samples are very common, and these methods then rarely give satisfactory results. Thus, in the last century, Vapnik and others deeply studied statistical learning theory (SLT), a theory specifically concerned with machine learning from a limited number of samples. Its statistical inference rules do not rely only on asymptotic properties and can find the optimal solution under conditions of limited information. In the mid-1990s, machine learning theory under limited-sample conditions was gradually developed and applied, and a relatively complete theoretical system was ultimately formed [20].

The support vector machine (SVM), proposed by Vapnik [21–23], is a statistics-based learning method. Working from limited sample data, the SVM balances the reasoning ability and the complexity of the model to achieve optimal results. The approach has unique advantages in solving high-dimensional pattern recognition, nonlinear problems, and small-sample events, and can also be used for regression analysis and other tasks [24].

2.2.1. The Feature of SVM

The basic features of the SVM for classification and regression problems are as follows [25].
(1) The SVM is designed for the case of limited samples: the objective is the optimal solution under the available data, not the limiting solution as the sample size tends to infinity.
(2) When the data are not linearly separable, the linearly inseparable data in the low-dimensional vector space are mapped by a nonlinear transformation into a high-dimensional vector space where they become linearly separable, and the analysis and calculation are carried out in that space according to the characteristics of the nonlinear part.
(3) According to the theory of structural risk minimization, an optimal separating hyperplane is sought in that space so as to minimize the empirical risk and confidence interval and optimize the overall learning result.

2.2.2. Principle of SVM

Generalized optimal separating hyperplane. Assume that the training data can be separated by a hyperplane without error. The hyperplane whose distance to its nearest training point is maximal is the optimal hyperplane.

To describe the hyperplane for training pairs (x_i, y_i) with y_i ∈ {−1, +1}, the following forms are used:

  w·x_i + b ≥ 1,  if y_i = +1,  (1)
  w·x_i + b ≤ −1, if y_i = −1.  (2)

The compact form of these inequalities is

  y_i (w·x_i + b) ≥ 1,  i = 1, …, l.  (3)

It is easy to verify that the optimal hyperplane is the one that satisfies condition (3) and attains the minimum of

  Φ(w) = (1/2) ‖w‖².

For a hyperplane w·x + b = 0 with ‖w‖ = 1, if a vector x is classified according to the form

  y = sgn(w·x + b),

and the training vectors satisfy y_i (w·x_i + b) ≥ Δ, the hyperplane is called a Δ-margin separating hyperplane. For the collection of Δ-margin separating hyperplanes there is the following VC dimension theorem: if the training vectors belong to a sphere of radius R, then the VC dimension h of the set of Δ-margin hyperplanes in an n-dimensional space is bounded by h ≤ min(⌈R²/Δ²⌉, n) + 1.

Nonlinear problems are transformed from the low-dimensional feature space into a high-dimensional feature space, where an optimal linear hyperplane can be obtained. Similarly, for a linearly inseparable problem, a nonlinear mapping function converts the input data from the low-dimensional space into the high-dimensional space, so that the problem becomes solvable. To solve the problem in the high-dimensional feature space, it suffices to evaluate a kernel function in the original space in place of the inner product. This is possible because the classification and optimization functions involve no operations among the training samples other than inner products.

Therefore, most of the nonlinear problems in original space can be transformed into a linear separable problem after space conversion, as shown in Figure 2.

However, the difficulty of transforming a nonlinear problem into a high-dimensional space is that the nonlinear mapping involved may be very complex. According to functional theory, as long as a kernel function satisfies Mercer's condition, it corresponds to the inner product in some space, so no explicit transformation process is needed. To avoid complex calculations in the high-dimensional space, the kernel function is used to replace the dot product in the optimal separating hyperplane, and the problem can be solved. This rests on the fact that the linear classification function involves no operations other than inner products between the support vectors of the training samples and the sample to be classified; likewise, during the solution process only inner products among the training samples are required. The classification function of this method in the sample space can be written as

  f(x) = sgn( Σ_i α_i y_i K(x_i, x) + b ).

The chosen kernel function must satisfy Mercer's condition, and different forms of kernel function produce different support vector machines (see Table 1).

Kernel function | Expression | Parameter

Linear kernel function | K(x, x_i) = x·x_i | —
Polynomial kernel function | K(x, x_i) = (x·x_i + 1)^d | degree d
Radial basis function (RBF) kernel function | K(x, x_i) = exp(−‖x − x_i‖² / (2σ²)) | width σ
Sigmoid kernel function | K(x, x_i) = tanh(v(x·x_i) + c) | v, c
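The four kernels in Table 1 can be written directly as functions. The parameter values below (degree d, width σ, slope v, offset c) are conventional illustrative choices, not settings from the paper:

```python
import numpy as np

def linear_kernel(x, xi):
    return np.dot(x, xi)

def polynomial_kernel(x, xi, d=3):
    return (np.dot(x, xi) + 1.0) ** d

def rbf_kernel(x, xi, sigma=1.0):
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, xi, v=1.0, c=-1.0):
    # Note: the sigmoid kernel satisfies Mercer's condition only for
    # certain (v, c) combinations.
    return np.tanh(v * np.dot(x, xi) + c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
```

For orthogonal unit vectors such as `x` and `y` above, the linear kernel gives 0 while the RBF kernel gives exp(−1) at σ = 1, which illustrates how different kernels induce different notions of similarity.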

3. Input Parameter Determination

In the process of damage identification, whether with an artificial neural network or a support vector machine, if the training samples are based on displacement mode parameters or their derivatives, the final recognition results may contain large errors and sometimes even become disordered, so the choice of input parameters must be treated carefully. The strain mode is a parameter very sensitive to damage, with the advantages of high accuracy, easy testing, mature analysis methods, and others. In fact, when an artificial neural network is trained, if the accuracy of the input parameters is high enough, the recognition of the damage degree is accurate and efficient; conversely, if the precision is insufficient, the recognition results cannot be guaranteed. Therefore, it is reasonable to select the strain mode difference as the input of the support vector machine model and the neural network in this paper. The flowchart of the two intelligent beam structure damage identification methods is shown in Figure 3.

4. Empirical Example

A simply supported beam with localized damage is shown in Figure 4. Its dimensions are: length 400 mm, width 10 mm, and height 2 mm. The beam is used to simulate conditions such as a single crack at the quarter span, double cracks at the quarter span, and so on. The crack length is 1/5 of the beam width, and the crack depth is 3.125%, 6.250%, 12.500%, or 15.625% of the effective section height. The crack width is 2 mm. The elastic modulus is 211 GPa, the density is 7850 kg/m³, and Poisson's ratio is 0.33. The model is built with the eight-node SOLID45 solid element of the ANSYS finite element analysis software. The mesh is divided into 25 equal parts horizontally, 16 vertically, and 40 along the length.

4.1. Intelligent Recognition with BP Neural Network

The training samples of the BP neural network are the strain mode differences of the first three consecutive modes of the beam. The input vector of the network is thus three-dimensional and the output vector is one-dimensional, representing the damage degree of one element. A three-layer network is therefore built with three input neurons and one output neuron. Repeated trials showed that the training effect (speed and accuracy) is best with 6 hidden neurons. The samples with damage degrees of 3.125%, 6.250%, 12.500%, and 15.625% are used for training, and the samples with a damage degree of 20% are used as test samples to verify the damage identification capability of the neural network. The effects of noise are also taken into account: random noise at levels of 1% and 3% is added to the strain modes when the damage cases are calculated. The test results of the network are shown in Tables 2 and 3.
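The noise injection described above can be sketched as follows. The multiplicative Gaussian form is an assumption, since the paper does not specify exactly how its 1% and 3% noise was generated:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(strain_mode_diff, level):
    """Perturb a strain-mode-difference vector with zero-mean random
    noise whose scale is `level` (e.g. 0.01 or 0.03) times each
    component, imitating measurement noise. The multiplicative form is
    an assumption, not the paper's documented procedure."""
    noise = rng.normal(0.0, 1.0, size=len(strain_mode_diff))
    return strain_mode_diff * (1.0 + level * noise)

# Hypothetical strain-mode-difference values, not the paper's data
clean = np.array([0.12, 0.45, 0.33])
noisy_1pct = add_noise(clean, 0.01)
noisy_3pct = add_noise(clean, 0.03)
```

The noisy vectors would then replace the clean ones when building the training and test samples, so that the network's robustness to measurement error can be checked.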

Working condition number | Damage element number | Ideal result of BP neural network | Actual result of BP neural network

No. 1 | 10 | 3.125% | 3.150%
No. 2 | 10 | 6.250% | 6.300%
No. 3 | 10 | 12.500% | 12.520%
No. 4 | 10 | 15.625% | 15.650%

Working condition number | Damage element number | Ideal result of BP neural network | Actual result of BP neural network

No. 1 | 10 | 3.125% | 3.160%
No. 2 | 10 | 6.250% | 6.310%
No. 3 | 10 | 12.500% | 12.540%
No. 4 | 10 | 15.625% | 15.680%

As can be seen from Table 2, when the noise level is 1%, the identification is good and the identification errors of all units are small; the largest relative error, which occurs in case 1, is only 0.8%. The recognition is still good when the noise level is 3%, but the largest error is slightly larger, reaching 1.12%, again in case 1. The recognition results at the 1% noise level are thus better, as can also be seen in Tables 4 and 5.

Working condition number | Damage element number | Ideal result of BP neural network | Actual result of BP neural network

No. 1 | 10 | 3.125% | 3.180%
No. 2 | 10 | 6.250% | 6.320%
No. 3 | 10 | 12.500% | 12.490%
No. 4 | 10 | 15.625% | 15.660%

Working condition number | Damage element number | Ideal result of BP neural network | Actual result of BP neural network

No. 1 | 10 | 3.125% | 3.190%
No. 2 | 10 | 6.250% | 6.350%
No. 3 | 10 | 12.500% | 12.460%
No. 4 | 10 | 15.625% | 15.690%

4.2. Intelligent Recognition with Support Vector Machine

The samples with damage extents of 3.125%, 6.250%, 12.500%, and 15.625% are taken as the training samples. The samples whose damage extent is 20% are used as test samples to verify the damage identification capability of the support vector machine. The input parameters are the strain mode differences of the first three structural orders. The damage recognition results are shown in Tables 6 and 7.
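A damage-degree regression of this kind can be sketched with an off-the-shelf SVM library. The strain-mode-difference features below are hypothetical stand-ins for the paper's modal data, and the SVR hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical strain-mode-difference features for the four training
# damage degrees; the paper's actual modal data is not reproduced here.
X_train = np.array([[0.10, 0.05, 0.02],
                    [0.21, 0.11, 0.05],
                    [0.43, 0.22, 0.10],
                    [0.55, 0.28, 0.13]])
y_train = np.array([0.03125, 0.0625, 0.125, 0.15625])  # damage degrees

# RBF-kernel support vector regression; C, gamma, and epsilon are
# illustrative settings, not values reported in the paper.
model = SVR(kernel="rbf", C=100.0, gamma=50.0, epsilon=0.001)
model.fit(X_train, y_train)

# Query an unseen intermediate damage state
pred = model.predict(np.array([[0.30, 0.15, 0.07]]))[0]
```

An RBF kernel is chosen here because Table 1 lists it among the admissible kernels; in practice C, γ, and ε would be tuned, for example by cross-validation on the training samples.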

Working condition number | Damage element number | Ideal result of SVM | Actual result of SVM

No. 1 | 10 | 3.125% | 3.200%
No. 2 | 10 | 6.250% | 6.290%
No. 3 | 10 | 12.500% | 12.510%
No. 4 | 10 | 15.625% | 15.640%

Working condition number | Damage element number | Ideal result of SVM | Actual result of SVM

No. 1 | 10 | 3.125% | 3.140%
No. 2 | 10 | 6.250% | 6.290%
No. 3 | 10 | 12.500% | 12.470%
No. 4 | 10 | 15.625% | 15.650%

4.3. The Comparison of Recognition Performance between BP Neural Networks and Support Vector Machine

(1) Comparison of Run Time. The BP neural network needs at least 37 iterations on average to reach the specified error, with an average running time of 3 minutes and 12 seconds; the SVM needs only about one minute to achieve its results. This shows that the SVM learns and converges quickly and approximates the nonlinear mapping well.

(2) Comparison of Recognition Results. The difference in prediction error between the SVM model and the BP neural network model is small; in some units the error is very small and does not affect the discrimination of the damaged elements. Comparing the errors of the two methods, the support vector machine is slightly better than the BP neural network model. This is because the support vector machine is built on VC dimension theory and the structural risk minimization principle: its generalization ability is stronger, it effectively avoids overlearning, and it can find the global optimal solution. The support vector machine algorithm is therefore more accurate for locating damage in beam structures, and its recognition accuracy for single and double cracks is better than that of the BP neural network, making it the better approach for identifying the degree of injury. The average recognition accuracy of the two methods for the structural damage degree is shown in Table 8.

Type | Working condition number | Recognition efficiency of BP neural network | Recognition efficiency of SVM

Single crack | 1 | 99.9% | 100%
Double cracks | 2 | 99.6% | 99.9%

4.4. Performance Analysis of the Identification Models Based on BP Neural Networks and SVM

The results of the improved SVM and BP neural network models are significantly better than those of the ordinary SVM or BP neural network models, since the improved results are obtained through continuous interactive analysis and refinement of the intelligent models. This suggests that the support vector machine model and the BP neural network model can effectively remove outliers and thus ensure higher prediction accuracy. The computational inspiration of the BP neural network comes from the structure and function of biological neural networks: its neurons are interconnected in groups that process linked information, and in most cases it is an adaptive system. Compared with the SVM model, however, the BP neural network algorithm has difficulty achieving satisfactory results, while the SVM model identifies beam structure damage better. The computational complexity of the SVM depends on the number of support vectors, and the SVM can reach the global optimum while the BP neural network tends to fall into a local optimum. The support vector machine is therefore a powerful tool for identifying the degree of structural damage.

5. Conclusions

This paper expounds the basic theories of the neural network and the support vector machine and uses the two methods to locate damage in locally damaged beam structures, with the strain mode differences selected as input parameters. In the example of a simply supported beam, the strain mode differences of the sound condition, the quarter-span single-crack condition, the quarter-span double-crack condition, and the midspan double-crack condition are used; the crack depths of these conditions are 3.125%, 6.250%, 12.500%, and 15.625%, respectively. These samples are taken as training samples, and the samples with a 20% damage degree serve as test samples to verify the damage identification capabilities of the support vector machine and the BP neural network. To consider noise effects, noise at levels of 1% and 3% is added for the BP neural network. Both methods achieve good identification precision and adaptability under the single-crack and double-crack conditions, and the beam structure damage identification model based on the SVM has smaller error, shorter run time, and better accuracy.

Thus, the main contributions of this paper can be summarized as follows. First, it develops models to identify beam structure damage, which should help in taking reasonable and effective measures to reduce the harm caused by damage. Second, to improve identification accuracy, beam structure damage identification models based on the support vector machine and the BP neural network are used to identify the damage level. The performance of the proposed models can provide valuable insight for researchers as well as practitioners.


Acknowledgments

This work was supported by grants from the Fundamental Research Funds for the Central Universities (nos. 3132013337-4-5 and 3132013079).


References

1. H. Kim and H. Melhem, “Damage detection of structures by wavelet analysis,” Engineering Structures, vol. 26, no. 3, pp. 347–362, 2004.
2. Z. Sun and C. C. Chang, “Structural damage assessment based on wavelet packet transform,” Journal of Structural Engineering, vol. 128, no. 10, pp. 1354–1361, 2002.
3. P. Cawley and R. D. Adams, “Improved frequency resolution from transient tests with short record lengths,” Journal of Sound and Vibration, vol. 64, no. 1, pp. 123–132, 1979.
4. M. F. Elkordy, K. C. Chang, and G. C. Lee, “Neural networks trained by analytically simulated damage states,” Journal of Computing in Civil Engineering, vol. 7, no. 2, pp. 130–145, 1993.
5. P. C. Pandey and S. V. Barai, “Multilayer perceptron in damage detection of bridge structures,” Computers and Structures, vol. 54, no. 4, pp. 597–608, 1995.
6. P. H. Kirkegaard and A. Rytter, “The use of neural networks for damage detection and location in a steel member,” in Neural Networks and Combinatorial Optimization in Civil and Structural Engineering, pp. 1–9, Civil-Comp Press, Edinburgh, UK, 1993.
7. M.-T. Vakil-Baghmisheh, M. Peimani, M. H. Sadeghi, and M. M. Ettefagh, “Crack detection in beam-like structures using genetic algorithms,” Applied Soft Computing Journal, vol. 8, no. 2, pp. 1150–1160, 2008.
8. J.-H. Chou and J. Ghaboussi, “Genetic algorithm in structural damage detection,” Computers and Structures, vol. 79, no. 14, pp. 1335–1353, 2001.
9. W. J. Yi and X. Liu, “Damage diagnosis of structures by genetic algorithms,” Engineering Mechanics, vol. 18, no. 2, pp. 64–71, 2001.
10. Y. Y. Lee and K. W. Liew, “Detection of damage location in a beam using the wavelet analysis,” International Journal of Structural Stability and Dynamics, vol. 1, no. 3, pp. 455–465, 2001.
11. B.-Z. Yao, C.-Y. Yang, J.-B. Yao, and J. Sun, “Tunnel surrounding rock displacement prediction using support vector machine,” International Journal of Computational Intelligence Systems, vol. 3, no. 6, pp. 843–852, 2010.
12. B. Yao, C. Yang, J. Hu, J. Yao, and J. Sun, “An improved ant colony optimization for flexible job shop scheduling problems,” Advanced Science Letters, vol. 4, no. 6-7, pp. 2127–2131, 2011.
13. B. Z. Yao, P. Hu, M. H. Zhang, and S. Wang, “Artificial bee colony algorithm with scanning strategy for periodic vehicle routing problem,” SIMULATION, vol. 89, no. 6, pp. 762–770, 2013.
14. B. Yu, W. H. K. Lam, and M. L. Tam, “Bus arrival time prediction at bus stop with multiple routes,” Transportation Research C, vol. 19, no. 6, pp. 1157–1170, 2011.
15. B. Yu and Z. Z. Yang, “An ant colony optimization model: the period vehicle routing problem with time windows,” Transportation Research E, vol. 47, no. 2, pp. 166–181, 2011.
16. B. Yu, Z. Z. Yang, and S. Li, “Real-time partway deadheading strategy based on transit service reliability assessment,” Transportation Research A, vol. 46, no. 8, pp. 1265–1279, 2012.
17. Y. Bin, Y. Zhongzhen, and Y. Baozhen, “Bus arrival time prediction using support vector machines,” Journal of Intelligent Transportation Systems, vol. 10, no. 4, pp. 151–158, 2006.
18. B. Yu, Z.-Z. Yang, and B. Yao, “An improved ant colony optimization for vehicle routing problem,” European Journal of Operational Research, vol. 196, no. 1, pp. 171–176, 2009.
19. H. Zhou, W. Li, C. Zhang, and J. Liu, “Ice breakup forecast in the reach of the Yellow River: the support vector machines approach,” Hydrology and Earth System Sciences Discussions, vol. 6, no. 2, pp. 3175–3198, 2009.
20. M. K. Mayer, “A network parallel genetic algorithm for the one machine sequencing problem,” Computers & Mathematics with Applications, vol. 37, no. 3, pp. 71–78, 1999.
21. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
22. V. N. Vapnik, “An overview of statistical learning theory,” IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
23. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 2000.
24. B. Dengiz, F. Altiparmak, and A. E. Smith, “Local search genetic algorithm for optimal design of reliable networks,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 3, pp. 179–188, 1997.
25. M. L. M. Beckers, E. P. P. A. Derks, W. J. Melssen, and L. M. C. Buydens, “Parallel processing of chemical information in a local area network—III. Using genetic algorithms for conformational analysis of biomacromolecules,” Computers and Chemistry, vol. 20, no. 4, pp. 449–457, 1996.

Copyright © 2014 Bo Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
