Research Article  Open Access
Heart Disease Diagnosis Utilizing Hybrid Fuzzy Wavelet Neural Network and Teaching Learning Based Optimization Algorithm
Abstract
Among the various diseases that threaten human life is heart disease. This disease is considered one of the leading causes of death in the world. The medical diagnosis of heart disease is a complex task and must be performed accurately. Therefore, software has been developed based on advanced computer technologies to assist doctors in the diagnostic process. This paper uses a hybrid of the teaching learning based optimization (TLBO) algorithm and the fuzzy wavelet neural network (FWNN) for heart disease diagnosis. The TLBO algorithm is applied to enhance the performance of the FWNN. The hybrid of the TLBO algorithm with the FWNN is used to classify the Cleveland heart disease dataset obtained from the University of California at Irvine (UCI) machine learning repository. The performance of the proposed method (TLBO-FWNN) is estimated using k-fold cross-validation based on mean square error (MSE), classification accuracy, and execution time. The experimental results show that TLBO-FWNN diagnoses heart disease effectively, with 90.29% accuracy, and performs better than other methods in the literature.
1. Introduction
Heart disease is a term that refers to any disturbance that makes the heart function abnormally [1]. When the coronary arteries are narrowed or blocked, the blood flow to the myocardium is decreased. This is the main cause of heart disease in humans. There are several risk factors for this disease, including diabetes, smoking, obesity, a family history of heart disease, high cholesterol, and high blood pressure [1–3].
Actually, the incidence rate of heart disease is on the rise. Every year about 720,000 Americans have a heart attack. Among them, 515,000 have their first heart attack and 205,000 people have a second (or third, etc.) heart attack. Heart disease causes the death of about 600,000 people in the United States every year, which makes it responsible for one in every four deaths [4].
Due to the large number of patients with heart disease, it became necessary to have a powerful tool that works accurately and efficiently for diagnosing this disease and helping the physicians make decisions about their patients. This is because the process of diagnosis and decision making is a difficult task and needs a lot of experience and skill.
Recently, a lot of research has been published in the field of medical diagnosis of heart disease. In 2008, Kahramanli and Allahverdi designed a hybrid system that combines a fuzzy neural network (FNN) and an artificial neural network (ANN), trained by a backpropagation (BP) algorithm. The results showed that the proposed method is comparable with other methods [5]. In 2009, Das et al. proposed a system for diagnosing heart disease using a neural network ensemble model, which combines several individual neural networks trained on the same task. However, this method increased the complexity and therefore the execution time [3]. In 2011, Khemphila and Boonjing presented a classification approach to diagnose heart disease using a multilayer perceptron (MLP) with a backpropagation learning algorithm, together with a feature selection algorithm that reduced the input from 13 features to 8. The results showed that the accuracy rate was enhanced by 1.1% on the training data set and by 0.82% on the testing data set [6]. In 2013, Beheshti et al. applied centripetal accelerated particle swarm optimization (CAPSO) to evolve the learning of an artificial neural network (ANN), which was used to classify a heart disease data set. The results showed that the diagnosis rate still needed improvement [7]. Although the main objective of all these studies was to make the diagnosis of heart disease more accurate and efficient, the classification accuracy did not reach a desirable level in any of them.
Moreover, a fuzzy neural network (FNN) is the combination of a neural network and fuzzy logic in one system; it contains both the interpretability and inference ability of fuzzy logic to handle uncertainty and the self-learning ability of a neural network to improve approximation accuracy [8–10]. In spite of the fact that the FNN/NN has many advantages, it still suffers from some drawbacks, such as slow training speed, high approximation error, and poor convergence [8].
In recent years, many researchers have emphasized the use of the wavelet neural network (WNN), which combines the wavelet function and a neural network. The WNN integrates the learning capability of the NN with the decomposition capability [8, 11], orthogonality [12], and time-frequency localization properties [10, 12–14] of the wavelet function. The main advantages of the WNN are better generalization capability [15], faster learning [8, 15], and smaller approximation errors and network sizes than the NN [8]. Therefore, it is able to overcome the obstacles of the FNN/NN, especially in highly nonlinear systems [8].
According to the mentioned properties of the WNN and fuzzy system, the fuzzy wavelet neural network (FWNN) is considered in our work. The FWNN has been applied in several areas: function learning [16], chaotic time series identification [14], and the approximation of functions, identification of nonlinear dynamic plants, and prediction of chaotic time series [17]. The FWNN combines the main advantages of a fuzzy system, the wavelet function, and neural networks; it therefore brings the low-level learning and good computational ability of the WNN into a fuzzy system, and the high-level, human-like reasoning of a fuzzy system into the WNN [8].
The training process of the FWNN is a crucial task and requires robust optimization techniques to make the performance of this network more accurate and efficient. According to [18–21], which compare several naturally inspired optimization algorithms, such as particle swarm optimization (PSO) [22], differential evolution (DE) [23], teaching learning based optimization (TLBO) [24], artificial bee colony (ABC) [25], and the firefly algorithm (FA) [26], the TLBO algorithm is accurate, effective, and efficient, and shows superior performance compared to the others. An attractive property of the TLBO algorithm is that it has a simple mathematical model and is one of the most powerful tools for finding the optimal solution in a short computational time [18]. In addition, TLBO has balanced exploration and exploitation abilities, so it does not get stuck in local minima [27].
In accordance with the above-mentioned advantages of both FWNN and TLBO, in this paper a new method (TLBO-FWNN) is proposed to increase the efficiency of the heart disease diagnosis process.
The rest of this paper is organized as follows: section two presents background on the FWNN; section three explains the TLBO algorithm; section four illustrates the proposed TLBO-FWNN method; section five describes the heart disease data set; section six explains k-fold cross-validation; section seven presents and discusses the experimental results; and section eight concludes the paper.
2. The Background of Fuzzy Wavelet Neural Network (FWNN)
One of the most commonly used methods for diagnosing heart disease is the artificial neural network (ANN) [6, 7]. The ANN is considered an effective artificial intelligence method when it has enough training data, but describing an ANN structure is a difficult task [28]. Fuzzy logic [29] has the ability to deal with cognitive uncertainties in a human-like manner [30] and is used to improve the ability of the neural network and increase its learning rate [31]. So, the combination of a neural network and fuzzy logic in one system leads to another powerful computational tool called the fuzzy neural network (FNN), which combines the advantages of both approaches [30, 31].
Most fuzzy neural networks use the sigmoid function as the activation function in the hidden layer, but this type of function leads the training algorithm to converge to local minima [32] and generally decreases the convergence speed of the network. Therefore, the wavelet function is used as an alternative to the sigmoid function [33, 34]. A wavelet is a waveform of limited duration whose mean value over that duration is zero. This function has two parameters, the dilation parameter and the translation parameter [33].
The combination of wavelet theory with a fuzzy system and a neural network generates the so-called fuzzy wavelet neural network (FWNN) [13]. The structure of the FWNN is described in Figure 1 and contains seven layers [17, 33, 34].

(i) In the first layer, the number of nodes is equal to the number of input variables.

(ii) In the second layer, each node represents one fuzzy set and calculates the membership value of an input variable to that fuzzy set.

(iii) In the third layer, each node represents one fuzzy rule, so the number of nodes equals the number of fuzzy rules. The output of this layer (the firing strength of the jth rule) is

z_j = AND_{i=1,...,n} mu_ij(x_i),

where AND refers to the fuzzy AND operation, implemented here as a product, and mu_ij is the membership function used to calculate the membership degree of the input variable x_i. In this study, a Gaussian membership function is used:

mu_ij(x_i) = exp(-((x_i - c_ij) / sigma_ij)^2),

where c_ij and sigma_ij represent the center and the width of the membership function, respectively.

(iv) The fourth layer contains the wavelet functions in its neurons, which represent the consequent parts of the fuzzy rules. In the FWNN, the jth fuzzy rule has the form

R_j: IF x_1 is A_1j AND ... AND x_n is A_nj THEN the output is y_j,

where j = 1, ..., m (m is the number of rules), x_i is the ith input, i = 1, ..., n (n is the number of input parameters), and y_j is the output of the jth wavelet neural network (WNN). In the WNN, described in Figure 2, the output is calculated as

y_j = sum_k w_jk * psi_{a_jk, b_jk}(x),

where w_jk are the weight coefficients and psi_{a,b} is a set of wavelet functions, called a wavelet family, defined as

psi_{a,b}(t) = |a|^{-1/2} * psi((t - b) / a),

where a and b represent the dilation and translation parameters, respectively, and psi is the mother wavelet, here the Mexican hat wavelet

psi(t) = (1 - t^2) * exp(-t^2 / 2).

(v) In the fifth layer, the outputs of the third layer are multiplied by the outputs of the fourth layer:

o_j = z_j * y_j, j = 1, ..., m,

where m is the number of fuzzy rules and wavelet functions.

(vi) In the sixth layer, the output involves two parts. The first part aggregates the outputs of the fifth layer, u_1 = sum_j o_j, and the second part aggregates the outputs of the third layer, u_2 = sum_j z_j.

(vii) The seventh layer represents the defuzzification process, which computes the overall output of the FWNN as

y = u_1 / u_2.
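As an illustration, the seven-layer forward pass described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function and variable names are illustrative, and it assumes Gaussian memberships and Mexican hat wavelets as stated in the text.

```python
import numpy as np

def mexican_hat(t):
    # Mother wavelet: psi(t) = (1 - t^2) * exp(-t^2 / 2)
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def fwnn_output(x, c, sigma, w, a, b):
    """Forward pass of a simplified FWNN.
    x: inputs, shape (n,); c, sigma: Gaussian MF centers/widths, shape (m, n);
    w: wavelet weights, shape (m, n); a, b: dilation/translation, shape (m, n).
    Returns the defuzzified scalar output y = u1 / u2."""
    # Layer 2: Gaussian membership degrees
    mu = np.exp(-((x - c) / sigma) ** 2)              # (m, n)
    # Layer 3: rule firing strengths (product as fuzzy AND)
    z = mu.prod(axis=1)                               # (m,)
    # Layer 4: consequent WNN output of each rule
    psi = mexican_hat((x - b) / a) / np.sqrt(np.abs(a))
    y_rules = (w * psi).sum(axis=1)                   # (m,)
    # Layer 5: multiply firing strengths by rule outputs
    o = z * y_rules
    # Layers 6-7: aggregate both parts and defuzzify
    return o.sum() / z.sum()
```

With one rule and one input centered at the input value, the membership degree is 1 and the network output reduces to the rule's wavelet output, which is a quick sanity check on the layer arithmetic.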
3. Review of Teaching Learning Based Optimization (TLBO) Algorithm
The teaching learning based optimization (TLBO) algorithm is a recent optimization algorithm proposed by Rao et al. in 2011 [24]. It is inspired by the way learners receive teaching from a teacher in a classroom. The teaching process happens either through the teacher, who is considered a highly learned person with great influence on the output of the students (teacher phase), or through interaction among the learners themselves (learner phase) [20, 21, 24, 35–37].
TLBO is a population-based algorithm in which the population is considered a class of learners. Each learner represents a solution, and the dimensions of each solution, viewed as the different subjects offered to the learners, represent the parameters involved in the objective function of the given optimization problem. The evaluation (fitness value) of each solution represents a student's grades, and the learner with the best fitness value is considered the teacher [36, 38–40].
The main characteristic of this algorithm is that it does not contain any algorithm-specific parameters; it includes common parameters only [27, 41]. The implementation of the TLBO algorithm proceeds as follows [24, 38].

(1) Define the optimization problem and initialize the common parameters: the population size ps, which represents the number of learners; the dimension d of each learner, which represents the subjects offered to the learners; the maximum number of iterations; and the constraint variables L and U, which denote the lower and upper boundaries, respectively.

(2) Generate the initial population X randomly, with ps rows and d columns, within [L, U], and then calculate the objective function value f(X_i) of each solution X_i, where i = 1, ..., ps. Sort the solutions in ascending order of f (ascending order is convenient for finding a minimum value; a maximum can be obtained by multiplying the objective by -1 before sorting). Therefore, the first learner after sorting is considered the best solution (the teacher, X_teacher).

(3) In the teacher phase, calculate the mean of the population column-wise:

M = (m_1, m_2, ..., m_d), with m_k = (1/ps) * sum_{i=1,...,ps} X_{i,k}.

(4) The teacher tries to improve the grade average of the students using

X_new,i = X_i + r * (X_teacher - T_F * M),

where X_new,i represents the improved learner, X_i represents the current learner, r is a random number in the interval [0, 1], X_teacher is the desired mean, M is the current mean [42], and T_F is a teaching factor that is not a parameter of the TLBO algorithm: it is chosen randomly as T_F = round(1 + rand(0, 1)), that is, either 1 or 2, and it decides how much of the mean is changed [43]. In X_new,i, if the value of any variable is less than L or bigger than U, it is set equal to L or U, respectively [35]. X_new,i replaces X_i only if it yields a better objective value.

(5) In the learner phase, a learner interacts randomly with other learners to enhance his or her knowledge.

(6) Randomly select two learners X_i and X_j, where i is not equal to j, and compute

X_new,i = X_i + r * (X_i - X_j) if f(X_i) < f(X_j),
X_new,i = X_i + r * (X_j - X_i) otherwise.

In X_new,i, if the value of any variable is less than L or bigger than U, it is set equal to L or U, respectively. Again, X_new,i replaces X_i only if it improves the objective.

(7) Modify duplicate solutions by applying a mutation process to randomly selected dimensions of the duplicates before executing the next iteration, in order to avoid being trapped in local optima.

(8) Sort the results in ascending order of the objective value.

(9) Repeat steps (3) to (8) until the termination condition is satisfied.
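The teacher and learner phases above can be sketched compactly in NumPy. This is a minimal illustrative sketch of standard TLBO for minimization, not the authors' code; the function name, the greedy acceptance of improved learners, and the test function are assumptions for demonstration.

```python
import numpy as np

def tlbo_minimize(f, dim, ps=20, iters=100, low=-5.0, high=5.0, seed=0):
    """Minimal TLBO sketch: teacher phase then learner phase each iteration,
    with greedy acceptance and clamping to [low, high]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, (ps, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: move learners toward the teacher
        teacher = X[fit.argmin()]
        mean = X.mean(axis=0)
        Tf = rng.integers(1, 3)            # teaching factor: 1 or 2
        Xnew = np.clip(X + rng.random((ps, dim)) * (teacher - Tf * mean),
                       low, high)
        fnew = np.apply_along_axis(f, 1, Xnew)
        better = fnew < fit
        X[better], fit[better] = Xnew[better], fnew[better]
        # Learner phase: each learner interacts with a random partner
        j = rng.permutation(ps)
        sign = np.where(fit < fit[j], 1.0, -1.0)[:, None]
        Xnew = np.clip(X + rng.random((ps, dim)) * sign * (X - X[j]),
                       low, high)
        fnew = np.apply_along_axis(f, 1, Xnew)
        better = fnew < fit
        X[better], fit[better] = Xnew[better], fnew[better]
    return X[fit.argmin()], fit.min()
```

Note that the only tunable quantities are the common parameters (population size, dimension, iterations, bounds), consistent with TLBO having no algorithm-specific parameters.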
4. The Proposed Method of TLBO-FWNN
To increase the accuracy of the heart disease diagnosis process using the FWNN, a robust optimization algorithm should be used for FWNN training. Thus, the methodology of this study includes two important procedures. The first is constructing the structure of the FWNN for the heart dataset, to be used in both the training and testing phases. The second is the training process for the constructed FWNN using the TLBO algorithm. For the training process, a sample of data related to heart disease, called the training data, is used as the input to the FWNN. Then, the mean square error (MSE), which measures the difference between the actual output and the desired output of the FWNN, is calculated as

MSE(t) = (1/N) * sum_{i=1,...,N} (d_i - y_i(t))^2,

where N represents the number of input patterns, t refers to the iteration number, d_i represents the desired output, and y_i(t) represents the actual output of the FWNN.
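The MSE objective is a one-line computation; the following small helper, with an illustrative name, shows it explicitly for the desired and actual outputs of one pass over the training patterns.

```python
import numpy as np

def mse(desired, actual):
    # MSE = (1/N) * sum((d_i - y_i)^2) over the N training patterns
    d = np.asarray(desired, dtype=float)
    y = np.asarray(actual, dtype=float)
    return float(((d - y) ** 2).mean())
```

For example, with desired labels [1, 0, 1] and predictions [1, 0, 0], exactly one of three patterns is wrong by 1, so the MSE is 1/3.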
The MSE serves as the objective function of the TLBO algorithm and is used to evaluate each individual in the population. The population in this algorithm is a set of solutions; each solution corresponds to one learner, and the values of a solution correspond to the parameters (subjects) to be updated in the FWNN. In this study, these parameters are the linkage weights of the FWNN and the wavelet parameters (dilation and translation). The values of these parameters are initialized randomly and then updated by the TLBO algorithm to obtain optimal values with the minimum error rate and the highest classification accuracy.
In the testing phase, the optimal values obtained from the training phase with the testing data will be used as the input variables to test the FWNN trained by the TLBO algorithm. The output of the FWNN is calculated and compared with the desired output to investigate the learning ability of the FWNN to classify the heart dataset.
The main steps of training the FWNN using the TLBO algorithm are as follows.

(1) Initialize randomly the values of each learner (i.e., the weights, dilation parameters, and translation parameters) within the interval [L, U], where L and U represent the lower and upper boundaries, respectively. Then, initialize the common parameters of the TLBO algorithm: the population size, the maximum iteration number, and the dimension.

(2) Set Cycle = 1.

(3) Evaluate each learner by calculating its objective function value based on the FWNN, which gives the error rate of that learner.

(4) Update the weight, dilation, and translation parameters using the TLBO algorithm.

(5) Keep the best learner, which represents the teacher (the best values of the weights, dilation, and translation parameters).

(6) Set Cycle = Cycle + 1.

(7) Repeat steps (3) to (6) until the maximum iteration number is reached.

Figure 3 represents the flowchart of the TLBO-FWNN method.
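The training loop above can be sketched generically: each learner is a flat vector holding all FWNN parameters, and the TLBO update drives the network's MSE down. This sketch is an assumption-laden illustration (names, bounds, and the use of only the teacher phase are mine, for brevity), not the authors' implementation; `fwnn_mse` stands in for a function that loads the parameter vector into the FWNN and returns its training MSE.

```python
import numpy as np

def train_fwnn_with_tlbo(fwnn_mse, dim, ps=50, max_iter=500,
                         low=-1.0, high=1.0, seed=0):
    """Sketch of steps (1)-(7): learners are flat vectors of FWNN weights,
    dilations, and translations; fwnn_mse(vector) returns the training MSE."""
    rng = np.random.default_rng(seed)
    # (1) initialize each learner randomly within [low, high]
    learners = rng.uniform(low, high, (ps, dim))
    errors = np.array([fwnn_mse(v) for v in learners])
    best = learners[errors.argmin()].copy()       # (5) keep the teacher
    for _ in range(max_iter):                     # (2), (6), (7) cycle loop
        # (3)-(4) one teacher-phase TLBO move (learner phase omitted here)
        Tf = rng.integers(1, 3)
        cand = learners + rng.random((ps, dim)) * (
            best - Tf * learners.mean(axis=0))
        cand = np.clip(cand, low, high)
        cand_err = np.array([fwnn_mse(v) for v in cand])
        improved = cand_err < errors
        learners[improved], errors[improved] = cand[improved], cand_err[improved]
        best = learners[errors.argmin()].copy()
    return best, float(errors.min())
```

Because candidates are accepted only when they reduce the MSE, the best error is non-increasing over cycles, which mirrors keeping the teacher at step (5).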
5. The Heart Disease Dataset
The Cleveland heart disease dataset was obtained from the Cleveland Clinic Foundation, collected by Robert Detrano. This data was used to predict the presence or the absence of heart disease. It includes 303 instances, but only 297 instances of them were used in this study because 6 instances were missing some of their attributes. The heart dataset contains 160 normal instances and 137 abnormal instances. Each instance has 76 attributes but all published experiments prefer to use 14 of them. Tables 1 and 2 briefly illustrate the properties of these attributes [44].


6. K-Fold Cross-Validation
During the training process of a neural network, the use of k-fold cross-validation makes the results of the testing process more reliable [45] because it guarantees that all data are used for both training and testing. In k-fold cross-validation, the data is randomly divided into k equal parts called folds. One fold is selected for testing and the remaining k - 1 folds are used for training. This process is repeated k times, and finally all testing results are averaged to produce a single estimate [45, 46]. In this study, fivefold cross-validation (k = 5) is used.
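The fold construction can be sketched as follows; the function name is illustrative, and the random permutation stands in for the random division described above.

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Randomly split n instances into 5 roughly equal folds and yield
    (train_idx, test_idx) pairs, so every instance is tested exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)                 # random division of the data
    folds = np.array_split(idx, 5)           # 5 roughly equal parts
    for k in range(5):
        test = folds[k]                      # one fold for testing
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test                    # remaining 4 folds for training
```

For the 297 usable Cleveland instances this yields folds of 59 or 60 instances, and averaging the 5 per-fold test accuracies gives the single reported estimate.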
7. Experimental Results and Discussion
In this study, the performance of the proposed system for diagnosing the presence (1) or absence (0) of heart disease is investigated on the Cleveland heart disease dataset. We measured the error rate, the classification rate, and the time taken. The heart dataset is divided into 5 folds: one fold is used for testing and the other 4 folds for training. Each fold has approximately 60 instances, and each instance has 13 attributes, as shown in Table 1. This process is repeated 5 times. As mentioned previously, the TLBO algorithm has only common parameters: the population size (ps) and the dimension of each solution (d). In this research, d is equal to 81, which covers the weight, dilation, and translation parameters. The appropriate value of ps is not known in advance, so the training process is repeated in three separate experiments with ps set to 50, 100, and 150. The maximum iteration number, which represents the stopping condition, is set to 500. In the FWNN, the classification of the heart dataset is based on the numbers of fuzzy sets, fuzzy rules, and wavelet functions, all equal to three, in addition to the value of each attribute.
The error rate (the percentage of incorrect classifications) for training the FWNN with the TLBO algorithm on the Cleveland heart disease dataset is illustrated in Table 3. The results show that TLBO-FWNN reached its minimum error rate (0.0585) when the population size was 150.

Moreover, we notice from Table 4, which reports the correct classification percentage, that the best classification accuracy (94.1422%) is obtained when the population size is 150. Increasing the population size lengthens training, as shown in Table 5: the average training time is 138.31 minutes when the population size is 150, versus 45.39 minutes when it is 50. Also, the classification accuracy and the error rate are very close to each other for the two population sizes 100 and 150, as shown in Tables 3 and 4.


Tables 6 and 7 illustrate, respectively, the MSE and the classification accuracy of testing the FWNN on unseen heart data, using the optimal parameter values obtained from the TLBO algorithm. In Table 6, the minimum average error rate (0.0970) is obtained when the population size is 100. In addition, the highest average classification rate (90.2909%) is also acquired when the population size is 100.


Moreover, the input variables of the FNN part are automatically normalized during the fuzzification process, so the input variables of the WNN part should be normalized too, to make them homogeneous with those of the FNN. The data normalization is done by finding the maximum value of each column of input variables and then dividing each value in that column by this maximum value.
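The column-max normalization described above is a one-liner in NumPy; this small sketch (with an illustrative name) assumes the attribute columns have positive maxima, which holds for the heart dataset's attributes.

```python
import numpy as np

def normalize_by_column_max(X):
    # Divide each value in a column by that column's maximum value,
    # assuming every column maximum is positive (true for these attributes).
    X = np.asarray(X, dtype=float)
    return X / X.max(axis=0)
```

After this step every attribute lies in (0, 1], matching the scale of the fuzzified inputs.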
The error rate of training the FWNN with the TLBO algorithm on the normalized heart disease dataset is illustrated in Table 8. The results show that TLBO-FWNN reached the minimum error rate across all experiments (0.0489) when the population size was 100.

In addition, as shown in Table 9, which reports the correct classification percentage, the highest average classification accuracy (95.0964%) is obtained when the population size equals 100.

The average time taken was only 88.498 minutes when the population size was 100, as shown in Table 10.

Finally, Tables 11 and 12 report, respectively, the MSE and the classification accuracy when testing the FWNN on normalized unseen heart data. In Table 11, the lowest average testing error rate (0.0997) is obtained when the population size is 50. In Table 12, the highest average classification rate (90.0213%) is likewise acquired when the population size is 50.


In conclusion, comparing the maximum classification rates obtained when testing the FWNN on normalized and non-normalized data, the results show that the classification accuracy on non-normalized data (90.2909%) is slightly better than that on normalized data (90.0213%).
Also, to investigate the performance of the proposed TLBO-FWNN method, a comparison with eight recently proposed methods from the literature was carried out on the same dataset.
As shown in Table 13 and Figure 4, the proposed TLBO-FWNN method achieves the best classification accuracy (90.2909%) for diagnosing heart disease compared with the other methods, while the GSA+MLP method performed worst among them.

8. Conclusion and Future Work
In this paper, a hybrid of the fuzzy wavelet neural network and the teaching learning based optimization algorithm was used to classify the presence or absence of heart disease. The teaching learning based optimization (TLBO) algorithm was proposed for training the fuzzy wavelet neural network (FWNN). The simulation results show that with a medium population size (100), TLBO-FWNN gives good results in a comparatively short time. In addition, these results demonstrate that the TLBO-FWNN method outperforms other published methods, giving the highest classification accuracy.
In addition, some directions could enhance the performance of the TLBO-FWNN method in the future, such as using the TLBO algorithm to evolve the structure of the FWNN or utilizing another optimization algorithm to enhance the learning of the FWNN.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] K. Polat, S. Güneş, and S. Tosun, "Diagnosis of heart disease using artificial immune recognition system and fuzzy weighted pre-processing," Pattern Recognition, vol. 39, no. 11, pp. 2186–2193, 2006.
[2] P. Gayathri and N. Jaisankar, "Comprehensive study of heart disease diagnosis using data mining and soft computing techniques," International Journal of Engineering & Technology, vol. 5, no. 3, pp. 2947–2958, 2013.
[3] R. Das, I. Turkoglu, and A. Sengur, "Effective diagnosis of heart disease through neural networks ensembles," Expert Systems with Applications, vol. 36, no. 4, pp. 7675–7680, 2009.
[4] Centers for Disease Control and Prevention, 2014, http://www.cdc.gov/heartdisease/facts.htm.
[5] H. Kahramanli and N. Allahverdi, "Design of a hybrid system for the diabetes and heart diseases," Expert Systems with Applications, vol. 35, no. 1-2, pp. 82–89, 2008.
[6] A. Khemphila and V. Boonjing, "Heart disease classification using neural network and feature selection," in Proceedings of the 21st International Conference on Systems Engineering (ICSEng '11), pp. 406–409, IEEE Computer Society, Las Vegas, Nev, USA, August 2011.
[7] Z. Beheshti, S. M. H. Shamsuddin, E. Beheshti, and S. S. Yuhaniz, "Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis," Soft Computing, pp. 1–18, 2013.
[8] Y. Wang, T. Mai, and J. Mao, "Adaptive motion/force control strategy for non-holonomic mobile manipulator robot using recurrent fuzzy wavelet neural networks," Engineering Applications of Artificial Intelligence, vol. 34, pp. 137–153, 2014.
[9] E. Lughofer, "On-line assurance of interpretability criteria in evolving fuzzy systems—achievements, new concepts and open issues," Information Sciences, vol. 251, pp. 22–46, 2013.
[10] S. Tzeng, "Design of fuzzy wavelet neural networks using the GA approach for function approximation and system identification," Fuzzy Sets and Systems, vol. 161, no. 19, pp. 2585–2596, 2010.
[11] E. Karatepe and T. Hiyama, "Fuzzy wavelet network identification of optimum operating point of non-crystalline silicon solar cells," Computers and Mathematics with Applications, vol. 63, no. 1, pp. 68–82, 2012.
[12] V. S. Kodogiannis, M. Amina, and I. Petrounias, "A clustering-based fuzzy wavelet neural network model for short-term load forecasting," International Journal of Neural Systems, vol. 23, no. 5, Article ID 1350024, 2013.
[13] M. Shahriari Kahkeshi, F. Sheikholeslam, and M. Zekri, "Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems," ISA Transactions, vol. 52, no. 3, pp. 342–350, 2013.
[14] Y. Bodyanskiy and O. Vynokurova, "Hybrid adaptive wavelet-neuro-fuzzy system for chaotic time series identification," Information Sciences, vol. 220, pp. 170–179, 2013.
[15] C. M. Lin, A. B. Ting, C. F. Hsu, and C. M. Chung, "Adaptive control for MIMO uncertain nonlinear systems using recurrent wavelet neural network," International Journal of Neural Systems, vol. 22, no. 1, pp. 37–50, 2012.
[16] D. W. C. Ho, P.-A. Zhang, and J. Xu, "Fuzzy wavelet networks for function learning," IEEE Transactions on Fuzzy Systems, vol. 9, no. 1, pp. 200–211, 2001.
[17] M. Davanipoor, M. Zekri, and F. Sheikholeslam, "Fuzzy wavelet neural network with an accelerated hybrid learning algorithm," IEEE Transactions on Fuzzy Systems, vol. 20, no. 3, pp. 463–470, 2012.
[18] S. C. Satapathy, A. Naik, and K. Parvathi, "Teaching learning based optimization for neural networks learning enhancement," in Swarm, Evolutionary, and Memetic Computing, pp. 761–769, Springer, New York, NY, USA, 2012.
[19] J. Salah Aldeen and R. A. Wahid, "A comparative study among some natural-inspired optimization algorithms," Journal of Education and Science, in press.
[20] R. Rao and G. Waghmare, "Solving composite test functions using teaching-learning-based optimization algorithm," in Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA '13), vol. 199, pp. 395–403, Springer, 2013.
[21] R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems," Information Sciences, vol. 183, pp. 1–15, 2012.
[22] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Western Australia, Australia, December 1995.
[23] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems," Computer-Aided Design, vol. 43, no. 3, pp. 303–315, 2011.
[25] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Tech. Rep., Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey, 2005.
[26] X.-S. Yang, "Firefly algorithms for multimodal optimization," in Stochastic Algorithms: Foundations and Applications, pp. 169–178, Springer, New York, NY, USA, 2009.
[27] E. Uzlu, M. Kanka, A. Akpinar, T. Dede, and A. Akpınar, "Estimates of energy consumption in Turkey using neural networks with the teaching-learning-based optimization algorithm," Energy, 2014.
[28] H. Malek, M. M. Ebadzadeh, and M. Rahmati, "Three new fuzzy neural networks learning algorithms based on clustering, training error and genetic algorithm," Applied Intelligence, vol. 37, no. 2, pp. 280–289, 2012.
[29] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338–353, 1965.
[30] M. M. Gupta and D. H. Rao, "On the principles of fuzzy neural networks," Fuzzy Sets and Systems, vol. 61, no. 1, pp. 1–18, 1994.
[31] T. Hassanzadeh, K. Faez, and G. Seyfi, "A speech recognition system based on structure equivalent fuzzy neural network trained by firefly algorithm," in Proceedings of the International Conference on Biomedical Engineering (ICoBE '12), pp. 63–67, IEEE, Penang, Malaysia, February 2012.
[32] A. K. Alexandridis and A. D. Zapranis, "Wavelet neural networks: a practical guide," Neural Networks, vol. 42, pp. 1–27, 2013.
[33] R. H. Abiyev and O. Kaynak, "Fuzzy wavelet neural networks for identification and control of dynamic plants—a novel structure and a comparative study," IEEE Transactions on Industrial Electronics, vol. 55, no. 8, pp. 3133–3140, 2008.
[34] R. H. Abiyev and O. Kaynak, "Identification and control of dynamic plants using fuzzy wavelet neural networks," in Proceedings of the IEEE International Symposium on Intelligent Control (ISIC '08), pp. 1295–1301, September 2008.
[35] V. Toğan, "Design of planar steel frames using teaching-learning based optimization," Engineering Structures, vol. 34, pp. 225–232, 2012.
[36] A. Baykasoğlu, A. Hamzadayi, and S. Y. Köse, "Testing the performance of teaching-learning based optimization (TLBO) algorithm on combinatorial problems: flow shop and job shop scheduling cases," Information Sciences, vol. 276, pp. 204–218, 2014.
[37] R. V. Rao and G. Waghmare, "A comparative study of a teaching-learning-based optimization algorithm on multi-objective unconstrained and constrained functions," Journal of King Saud University—Computer and Information Sciences, 2013.
[38] W. Cheng, F. Liu, and L. Li, "Size and geometry optimization of trusses using teaching-learning-based optimization," International Journal of Optimization in Civil Engineering, vol. 3, no. 3, pp. 431–444, 2013.
[39] R. V. Rao, V. J. Savsani, and J. Balic, "Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems," Engineering Optimization, vol. 44, no. 12, pp. 1447–1462, 2012.
[40] R. V. Rao, V. D. Kalyankar, and G. Waghmare, "Parameters optimization of selected casting processes using teaching-learning-based optimization algorithm," Applied Mathematical Modelling, 2014.
[41] G. Waghmare, "Comments on 'A note on teaching-learning-based optimization algorithm'," Information Sciences, vol. 229, pp. 159–169, 2013.
[42] P. V. Babu, S. C. Satapathy, M. K. Samantula, P. K. Patra, and B. N. Biswal, "Teaching learning based optimized mathematical model for data classification problems," in Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA '13), pp. 487–496, Springer, 2013.
[43] R. Venkata Rao and V. D. Kalyankar, "Parameter optimization of modern machining processes using teaching-learning-based optimization algorithm," Engineering Applications of Artificial Intelligence, vol. 26, no. 1, pp. 524–531, 2013.
[44] R. Detrano, Cleveland heart disease data set, V.A. Medical Center, Long Beach, and Cleveland Clinic Foundation; donor: David W. Aha, 1988.
[45] N. A. Baykan and N. Yilmaz, "A mineral classification system with multiple artificial neural network using k-fold cross validation," Mathematical and Computational Applications, vol. 16, no. 1, pp. 22–30, 2011.
[46] S. T. Ishikawa and V. C. Gulick, "An automated mineral classifier using Raman spectra," Computers and Geosciences, vol. 54, pp. 259–268, 2013.
Copyright
Copyright © 2014 Jamal Salahaldeen Majeed Alneamy and Rahma Abdulwahid Hameed Alnaish. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.