Research Article  Open Access
Hadi Sadoghi Yazdi, Hessam Jahani Fariman, Jaber Roohi, "Gait Recognition Based on Invariant Leg Classification Using a Neuro-Fuzzy Algorithm as the Fusion Method", International Scholarly Research Notices, vol. 2012, Article ID 289721, 9 pages, 2012. https://doi.org/10.5402/2012/289721
Gait Recognition Based on Invariant Leg Classification Using a Neuro-Fuzzy Algorithm as the Fusion Method
Abstract
This paper presents a human gait recognition algorithm based on leg gesture separation. The main innovation of this paper is twofold: gait recognition using leg gesture classification, which is invariant to covariate conditions during the walking sequence and focuses only on lower-body motions, and a neuro-fuzzy combiner classifier (NFCC), which yields a high-precision recognition system. Finally, the performance of the proposed algorithm is validated on the HumanID Gait Challenge data set (HGCD), the largest gait benchmarking data set, with 122 subjects and varying realistic parameters including viewpoint, shoe, surface, carrying condition, and time, and is compared with a recent gait recognition algorithm.
1. Introduction
In the last decade, there has been great interest in applying human biometrics for identification and verification purposes, for instance, in video surveillance and human recognition. There has been much research on ear and face recognition, body tracking, and hand gesture recognition, and recently on gait recognition for human identification. Compared with other biometrics, such as hand geometry, iris, face, voice, signature, and fingerprint [1], human gait has some advantages that make gait recognition an ideal method for identification. For instance, gait recognition needs no subject cooperation, and it can operate without interrupting or interfering with the subject's activities [2]. In other words, we can recognize people by their gait regardless of their clothes or the background [3]. Further, gait is difficult to conceal or disguise in scenarios like bank robbery, where other biometrics such as face or fingerprint may be impossible to capture. Moreover, it is unobtrusive and effective for identification at long distances, as in surveillance applications in public places [4].
Previous works have been classified under similar covariate conditions (e.g., clothing, surface, carrying). In this paper we propose an improved and novel classification method which is based only on the different gestures of the leg during walking, without body-part tracking, and is invariant to different covariate conditions.
As indicated in Figure 1, a closer look at the sequences of sample energy halation images shows that, because the upper body changes negligibly during the walking cadence and added objects (e.g., a carried bag or a worn coat) have little effect on the gait, a higher recognition rate can be obtained by focusing only on leg gestures and deriving the recognition accurately from their classification.
To review the fundamentals of the usual gait recognition pipeline: in the walking process, the functional versatility of the body joints allows the lower and upper limbs to readily accommodate stairs, doorways, changing surfaces, and obstacles in the path of progression. Efficiency in these endeavors depends upon free joint mobility and muscle activity that is selective in timing and intensity. Energy conservation is optimal in the normal pattern of limb action. A person performs his or her walking pattern in a fairly repeatable and unique way, and medical research has been trying to apply these gait patterns to the treatment of pathologically abnormal patients [4].
The approach of this paper can be briefly summarized as the following procedure.
Five states of human gait are extracted after background estimation and human detection in the scene. Leg gestures are classified over the directional chain code of the bottom part of the silhouette contour. A spatiotemporal database, namely, the Energy Halation Image (EHI), is constructed over the bottom part of the human silhouette from the training film sequences for the five leg gestures separately. The Eigen space of the energy halation is applied to a multilayer perceptron neural network. Five neural network systems recognize people, but with only a medium recognition rate, so a neuro-fuzzy fusion technique is used to obtain a high recognition rate. Experiments are performed over a suitable database including 20 samples for eight people, each sample having approximately 100 frames. A 99% recognition rate of the proposed system is obtained over 10 test samples.
1.1. Recent Works
Leg gesture studies have various applications. Among these, some interesting works indicate the importance of leg gesture classification, as in [5–7]. In [8], matching between stored prototypes and silhouette images helps state classification. The viewpoint of [8] is pattern matching and state recognition using a hidden Markov model, which helps to insert prior knowledge of gait into state recognition.
Infrared thermal imaging was applied to collect gait video, and an infrared thermal gait database was established in [9]; infrared is useful for detecting the human body and removing noise from complex backgrounds and illumination variations. Reference [10] shows that using Principal Component Analysis (PCA) on accelerometer-based gait data gives a large improvement in the performance of a gait recognition system.
Reference [2] argues that selecting the most relevant gait features that are invariant to changes in gait covariate conditions is the key to developing a gait recognition system that works without subject cooperation, so [2] proposes the Gait Entropy Image to perform automatic feature selection on each pair of gallery and probe gait sequences. The performance of gait recognition decreases for low-resolution (LR) sequences; [11] proposes a method for this problem, a new algorithm called super-resolution with manifold sampling and back-projection, which learns the high-resolution (HR) counterparts of LR test images from a collection of HR/LR training gait image patch pairs.
Reference [12] presents a novel framework for gait recognition augmented with soft biometric information. Geometric gait analysis is based on Radon transforms and on gait energy images. User height and stride length information is extracted and utilized in a probabilistic framework for the detection of soft biometric features of substantial discrimination power.
In [13] fusion of multiple gait features was explored within the framework of the factorial hidden Markov model (FHMM). The FHMM has a multiple-layer structure and provides an alternative for combining several gait features without concatenating them into a single augmented feature. Besides, feature concatenation was used to directly concatenate the features, and the parallel HMM (PHMM) was introduced as a decision-level fusion scheme, which employs traditional fusion rules to combine the recognition results at the decision level.
Tactile ground surface indicators installed on sidewalks help visually impaired people walk safely. The visually impaired distinguish the indicators by stepping into their convexities and following them. However, these indicators sometimes cause the non-visually impaired to stumble. In [5] the effects of these indicators were studied by comparing the kinematic and kinetic variables of walking on paths with and without indicators.
Another interest in gait identification is detecting gait degeneration due to ageing, which might be closely linked to the causes of falls; this would help in taking appropriate measures to prevent falls. As in many other developed countries, falls in the older population have been identified as a major health issue in Australia [6]. Ageing influences gait patterns, posing constant threats to locomotor balance control, and in [7] automatic recognition of young-old gait types from their respective gait patterns has been studied using support vector machines.
Biomechanical analysis of gait has been successfully applied in human clinical gait analysis [14]. With regard to gait recognition, a major early result from psychology is by Johansson [15], who used point-light displays to demonstrate the ability of humans to rapidly distinguish human locomotion from other motion patterns. Cutting and Kozlowski [16] showed that this ability also extends to the recognition of friends.
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not yet understood or characterized [17]. The biggest limitation in human motion analysis is the underlying difficulty of tracking the human body for subsequent interpretation [18, 19].
As a solution for identifying human gait from a sequence of segmented noisy silhouettes in low-resolution video, a model-based gait cycle extraction based on the prediction-based hierarchical active shape model (ASM) is presented in [1]. Moreover, [20] presents a new gait recognition method that does not presume strict lab conditions for its operation.
As mentioned, gait recognition is an effective way of identification from a distance, but there are two obstacles in this situation. First, in the low-resolution case the performance of gait recognition is degraded by noisy images. Furthermore, as a usual step of gait recognition, the gait sequences are projected onto a nonoptimal low-dimensional subspace to reduce the data complexity, which again degrades gait recognition performance. A new algorithm is proposed in [11], called super-resolution with manifold sampling and back-projection (SRMS), which learns the high-resolution (HR) counterparts of LR test images from a collection of HR/LR training gait image patch pairs.
1.2. Contributions and Motivation
Recognizing gait via body decomposition into details and fusion of the parts has not been observed in the literature. The main contribution of this paper is gesture classification for human gait recognition, but several other new elements can be found in this paper: (a) a new spatiotemporal database, namely, energy halation; (b) five-feature-space generation using the leg gesture concept; (c) human gait recognition based on leg gesture classification; (d) a neuro-fuzzy-based combiner classifier (NFCC); (e) presentation of a complete gait recognition system.
Low performance of human gait recognition systems is one of the motivations of the proposed method. Human detection in the scene, object tracking, and the capability of classifiers over time-dependent features are some of the problems that lead to low recognition rates. So, we try to present a complete system for human gait recognition which addresses many of these aspects.
2. The Proposed Method
The block diagram of the proposed method is abstracted in Figure 2. The five parts of this system are as follows and are explained in the next subsections: (i) background estimation, (ii) leg gesture recognizer, (iii) energy halation image construction (spatiotemporal database), (iv) gait recognition in Eigen space, (v) neuro-fuzzy-based combiner classifier.
2.1. Background Estimation
Several approaches are known for separating foreground from background. If the background is known, simple thresholding yields the foreground. One suitable approach to object detection is background estimation. This paper uses probability density function (PDF) estimation of each pixel [21]. A Gaussian PDF can approximately model scene variation due to flicker, CCD noise, and shadow. The mean and variance of the Gaussian PDF are obtained with the running estimates

μ_t = α I_t + (1 − α) μ_{t−1},   (1)
σ_t² = α (I_t − μ_t)ᵀ (I_t − μ_t) + (1 − α) σ_{t−1}²,   (2)

which can accept scene variations, where I_t is the pixel's current value at location (x, y), μ_{t−1} the previous average, and σ_{t−1}² the previous variance; T denotes transpose; α is an empirical weight often chosen as a tradeoff between stability and quick update. At each frame time t, the pixel's value can then be classified as a foreground pixel if the inequality |I_t − μ_t| > k σ_t holds, where k is a threshold value. Results of human detection in the scene are shown in Figure 3.
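The running-Gaussian update of (1) and (2) can be sketched in a few lines. The following is a minimal illustration for grayscale frames held as NumPy arrays; the function name and the values of `alpha` and `k` are assumptions for the example, not the paper's tuned settings.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of the per-pixel running-Gaussian background model.

    frame, mean, var are float arrays of the same shape (grayscale here
    for simplicity; the paper models each pixel's value vector).
    alpha is the empirical update weight; k scales the foreground test.
    Returns the updated mean and variance and a boolean foreground mask.
    """
    # Running estimates of the Gaussian parameters, as in (1) and (2).
    mean = alpha * frame + (1.0 - alpha) * mean
    diff = frame - mean
    var = alpha * diff ** 2 + (1.0 - alpha) * var
    # A pixel is foreground when it deviates more than k standard deviations.
    foreground = np.abs(diff) > k * np.sqrt(var)
    return mean, var, foreground
```

In practice the model is iterated over every frame of the sequence, and the foreground mask is the binary human blob used in the next stage.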
2.2. Leg Gesture Recognizer
After background estimation and human detection in the scene, a binary human image (blob) is obtained. After cutting the bottom of the blob image (waist to sole), the distribution function of the directional chain code is extracted from the blob contour. After normalizing the chain code to its maximum, a multilayer perceptron neural network (MLPNN) is used with this feature for leg gesture recognition. The block diagram of the leg gesture classifier is shown in Figure 3. For training the proposed artificial neural network, we have five states of five people (five images for each person). So after creating the image database, we named the images with the following scheme: (i) the first digit denotes the state of the person (1–5); (ii) the second digit denotes the person (1–5); (iii) the third digit denotes the number of the image of each person (1–5).
Therefore we now have 125 named images in the database for training. Moreover, we considered five different angles in the video sequence of samples for each state, like the one in Figure 4. Then, by considering 5 different angle states (Figure 5) in our program, we obtain the angles of every one of the five gait states, which are shown in Figure 6.
One part of the leg gesture classifier is the gesture database, which is necessary for training the MLPNN with the backpropagation algorithm. Five states are determined for the leg gesture, depending on the frame rate and type of application. Figure 4 shows these five states for a number of people. The gesture database is collected from a set of films which includes 160 sequences of eight people. The manually obtained gesture database includes the five leg states, with 100 images collected for each state. The extracted directional chain-code distribution is shown in Figure 7(a), and Figure 7(b) shows the directional chain-code histograms for the different states.
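The directional chain-code feature described above can be sketched as follows. This is an illustrative implementation, assuming the contour is an ordered list of points with unit steps between successive points; the helper name `chain_code_histogram` is ours, not the paper's.

```python
import numpy as np

# Freeman 8-direction codes keyed by neighbour offset (dx, dy).
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(contour):
    """8-bin directional chain-code histogram of an ordered contour.

    contour: list of (x, y) points with unit steps between neighbours.
    Returns the histogram normalised to its maximum, as described above.
    """
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        hist[DIRS[(x1 - x0, y1 - y0)]] += 1
    return hist / hist.max()
```

The resulting 8-dimensional vector is the input feature of the gesture MLPNN.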
The trained neural network cannot classify leg gestures perfectly, but this problem is compensated for by the creation of the spatiotemporal database and the use of the combiner classifier.
2.3. Energy Halation Image Construction (Spatiotemporal Database)
Spatiotemporal databases are used for compact representation of film sequences and appear in many applications such as image retrieval, gesture analysis, action recognition, and behavioral recognition in the scene.
In this subsection we propose a spatiotemporal database similar to the motion history image (MHI) of [22]; its pseudocode follows, and the results are named energy halation images (EHIs).
Each input frame belongs to one of the five leg gestures and is used in the generation of the five energy halation images.
(1) Initializing: let EHI_s (s = 1 to 5) be forced to zeros with dimension 220 × 90; let k = 0, where k is the frame index.
(2) k = k + 1.
(3) B_k = blob matrix of the kth frame with size (m, n); s is the state of the leg (1 to 5). Note: (m, n) is less than (220, 90) for each blob size.
(4) Add zero rows and columns bilaterally to B_k so that it becomes a 220 × 90 matrix.
(5) EHI_s = EHI_s + B_k, where s is the state of the leg gesture.
(6) If it is not the end of the sequence, go to step (2).
(7) End.
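The pseudocode above maps directly to a short implementation. The following sketch assumes binary blob arrays and per-frame gesture labels are already available; the function name and the symmetric padding convention are illustrative.

```python
import numpy as np

def build_ehi(frames, states, shape=(220, 90)):
    """Accumulate Energy Halation Images per leg-gesture state.

    frames: list of binary blob arrays, each no larger than `shape`;
    states: predicted leg-gesture label (1..5) for each frame.
    Each blob is zero-padded bilaterally to `shape` and summed into
    the EHI of its state, following the pseudocode above.
    """
    ehi = {s: np.zeros(shape) for s in range(1, 6)}
    for blob, s in zip(frames, states):
        m, n = blob.shape
        top = (shape[0] - m) // 2
        left = (shape[1] - n) // 2
        padded = np.zeros(shape)
        padded[top:top + m, left:left + n] = blob
        ehi[s] += padded          # step (5): EHI_s = EHI_s + B_k
    return ehi
```

Running this over a walking sequence yields the five spatiotemporal images per subject described next.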
The obtained results include five energy halation images for each input sequence. As an example, Figure 8 shows the five energy halation images for three people.
2.4. Gait Recognition in Eigen Space
As in face recognition and similar applications, we use the Eigen space transform to reduce the dimensions of the energy halation images before applying them to the MLP neural network. An MLPNN is trained over each leg gesture for human gait recognition, so five trained MLPNNs are created and used for human identification; each network recognizes people separately based on different features (the energy halation over each leg gesture).
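The Eigen-space step can be sketched as a standard PCA computed via SVD. This is an illustrative reading of the step, with `k` standing in for the reduced dimensionality (the networks described later take 50 inputs); the function name is ours.

```python
import numpy as np

def eigen_space(ehis, k=50):
    """Project flattened EHIs onto the top-k eigenvectors (PCA).

    ehis: array of shape (n_samples, H*W), each row a flattened
    energy halation image. Returns the sample mean, the top-k
    eigenvectors, and the k-dimensional projections fed to the MLP.
    """
    mean = ehis.mean(axis=0)
    centered = ehis - mean
    # SVD of the centred data gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                # top-k eigenvectors (rows)
    coords = centered @ basis.T   # k-dim feature vector per sample
    return mean, basis, coords
```

A test image is projected with the same mean and basis before being fed to the corresponding network.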
The recognition rate of each individual network is not satisfactory for a good human gait recognizer, so we combine the neural network outputs using a neuro-fuzzy-based combiner classifier, which is described in the next subsection.
2.5. Neuro-Fuzzy-Based Combiner Classifier
Neuro-fuzzy systems have been proved to give significant results in modeling nonlinear functions and have been used frequently in the literature, for example, for fishing predictions [23], vehicular navigation [24], identifying turbine speed dynamics [25], radio frequency power amplifier linearization [26], microwave applications [27], image denoising [28, 29], prediction in cleaning with high-pressure water [30], sensor calibration [31], fetal electrocardiogram extraction from the ECG signal captured from the mother [32], and identification of normal and glaucomatous eyes [33].
In a neuro-fuzzy system, the membership functions (MFs) are extracted from a data set that describes the system behavior. The neuro-fuzzy system learns features in the data set and adjusts the system parameters according to a given error criterion. In a fused architecture, NN learning algorithms are used to determine the parameters of the fuzzy inference system. Fusion of classifier outputs with a linear combiner was addressed in [34]; in this paper, we use a nonlinear combiner classifier based on a neuro-fuzzy system, for the first time in human gait recognition.
3. Experimental Results
A set of films including 160 sequences of eight people is used as the database. The frame rate is 25 frames per second, and the image size is 352 × 288 pixels. Some images from the database are shown in Figure 9.
The leg gesture recognizer is a three-layer MLP neural network with eight input neurons, fifteen neurons in the hidden layer, and five output neurons, which categorizes input frames into the 5 states. An example of this stage is shown in Figure 10.
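As a rough illustration of the 8-15-5 architecture (not the trained network of the paper), a forward pass can be sketched with random weights; the function name and the sigmoid/softmax choices are assumptions for the example.

```python
import numpy as np

def mlp_forward(hist, w1, b1, w2, b2):
    """Forward pass of an 8-15-5 gesture MLP: sigmoid hidden layer,
    softmax output giving a probability for each leg-gesture state."""
    h = 1.0 / (1.0 + np.exp(-(hist @ w1 + b1)))   # 15 hidden neurons
    logits = h @ w2 + b2                           # 5 output neurons
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((8, 15)), np.zeros(15)
w2, b2 = rng.standard_normal((15, 5)), np.zeros(5)
probs = mlp_forward(rng.random(8), w1, b1, w2, b2)  # chain-code histogram in
state = int(np.argmax(probs)) + 1                   # predicted state, 1..5
```

In the actual system the weights are learned with backpropagation over the labeled gesture database.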
As mentioned before, each gesture helps categorize the frame sequence into five energy halation images, and five MLP neural networks are trained over 10 film sequences for 8 people. Each network has 50 input neurons, three hidden layers with 100, 90, and 40 neurons, and 8 output neurons. In the testing phase, the confusion matrices captured for two of the networks are shown in Tables 1 and 2. These tables show that fusion of the networks increases performance; for example, network 2 can recognize person 1, but network 1 cannot. The confusion matrix after applying the neuro-fuzzy combiner is shown in Table 3. The recognition rate increases to 99.8% over the test patterns, whereas learning of the neuro-fuzzy system was performed over the learning patterns.



As an approach to evaluating our proposed method (gait recognition based on NFCC), we also compared it with different algorithms evaluated on the HumanID Gait Challenge Dataset (HGCD), including a recent gait recognition algorithm (Table 4).
 
*The average performance under different experiments of HGCD. 
4. Conclusion
An interesting idea was explored in this paper: human gait recognition based on leg gesture classification. The paper also introduced a new spatiotemporal gait database (the Energy Halation Image) and a neuro-fuzzy-based combiner classifier (NFCC). To overcome the limited recognition rate, we proposed a system for gait feature fusion: five spatiotemporal databases were built and their Eigen-space features were applied to five neural networks separately. The performance of each NN on test samples was low (about 70% to 80%), so we used a neuro-fuzzy combiner classifier to mix the neural network outputs, for the first time in gait recognition. The result of combining the neural network outputs was satisfying.
Appendix
Neuro-Fuzzy Inference System Architecture
Neural networks (NNs) have demonstrated a powerful capability of expressing relationships between input-output variables; in fact it is always possible to develop a structure that approximates a function with a given precision. However, there is still distrust of NN identification capability in some applications. Fuzzy set theory plays an important role in dealing with uncertainty in plant modeling applications. Neuro-fuzzy systems are fuzzy systems which use NNs to determine their properties (fuzzy sets and fuzzy rules) by processing data samples. Neuro-fuzzy integration synthesizes the merits of both NNs and fuzzy systems in a complementary way to overcome their disadvantages: the fusion of an NN and fuzzy logic in neuro-fuzzy models possesses both the low-level learning and computational power of NNs and the advantages of high-level human-like reasoning of fuzzy systems. For identification, the hybrid neuro-fuzzy system called ANFIS combines an NN and a fuzzy system. ANFIS has been proved to give significant results in modeling nonlinear functions. In ANFIS, the membership functions (MFs) are extracted from a data set that describes the system behavior; ANFIS learns features in the data set and adjusts the system parameters according to a given error criterion. In a fused architecture, NN learning algorithms are used to determine the parameters of the fuzzy inference system. Below, we have summarized the advantages of the ANFIS technique.
(i) Real-time processing of instantaneous system input and output data. This property makes the technique useful for many operational research problems.
(ii) Offline adaptation instead of online system-error minimization, thus easier to manage, with no iterative algorithms involved.
(iii) System performance is not limited by the order of the function, since it is not represented in polynomial format.
(iv) Fast learning time.
(v) System performance tuning is flexible, as the number of membership functions and training epochs can be altered easily.
(vi) The simple if-then rule declaration and the ANFIS structure are easy to understand and implement.
A typical architecture of ANFIS is shown in Figure 11, in which a circle indicates a fixed node and a square indicates an adaptive node. For simplicity, we consider two inputs x and y and one output f in the FIS. The ANFIS used in this paper implements a first-order Sugeno fuzzy model. Among many FIS models, the Sugeno fuzzy model is the most widely used for its high interpretability, computational efficiency, and built-in optimal and adaptive techniques. For a first-order Sugeno fuzzy model, a common rule set with two fuzzy if-then rules can be expressed as follows:

Rule 1: if x is A_1 and y is B_1, then f_1 = p_1 x + q_1 y + r_1;
Rule 2: if x is A_2 and y is B_2, then f_2 = p_2 x + q_2 y + r_2,

where A_i and B_i are fuzzy sets in the antecedent and p_i, q_i, and r_i are the design parameters that are determined during the training process. As in Figure 11, the ANFIS consists of five layers.
Layer 1: every node in this layer is an adaptive node with a node function O_{1,i} = μ_{A_i}(x) (and similarly μ_{B_i}(y)), where x and y are the inputs of node i and μ can adopt any fuzzy membership function (MF). In this paper, Gaussian MFs are used, μ(x) = exp(−(x − c)² / 2σ²), where c is the center of the Gaussian membership function and σ is the standard deviation of the cluster.
Layer 2: every node in the second layer represents the firing strength of a rule by multiplying the incoming signals and forwarding the product: w_i = μ_{A_i}(x) μ_{B_i}(y), i = 1, 2.
Layer 3: the ith node in this layer calculates the ratio of the ith rule's firing strength to the sum of all rules' firing strengths, w̄_i = w_i / (w_1 + w_2), where w̄_i is referred to as the normalized firing strength.
Layer 4: the node function in this layer is O_{4,i} = w̄_i f_i = w̄_i (p_i x + q_i y + r_i), where w̄_i is the output of layer 3 and {p_i, q_i, r_i} is the parameter set. Parameters in this layer are referred to as the consequent parameters.
Layer 5: the single node in this layer computes the overall output as the summation of all incoming signals: f = Σ_i w̄_i f_i = (w_1 f_1 + w_2 f_2) / (w_1 + w_2).
It is seen from the ANFIS architecture that when the values of the premise parameters are fixed, the overall output can be expressed as a linear combination of the consequent parameters: f = (w̄_1 x) p_1 + (w̄_1 y) q_1 + w̄_1 r_1 + (w̄_2 x) p_2 + (w̄_2 y) q_2 + w̄_2 r_2.
The hybrid learning algorithm combining the least-squares method and the backpropagation (BP) algorithm can be used to solve this problem. This algorithm converges much faster since it reduces the dimension of the search space of the BP algorithm. During the learning process, the premise parameters in layer 1 and the consequent parameters in layer 4 are tuned until the desired response of the FIS is achieved. The hybrid learning algorithm has a two-step process. First, while holding the premise parameters fixed, the functional signals are propagated forward to layer 4, where the consequent parameters are identified by the least-squares method. Second, the consequent parameters are held fixed while the error signals, the derivative of the error measure with respect to each node output, are propagated from the output end to the input end, and the premise parameters are updated by the standard BP algorithm.
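The five ANFIS layers can be traced in code. The following sketch implements a forward pass of a two-rule first-order Sugeno model with Gaussian MFs; all parameter values are made up for illustration (in practice they come from the hybrid learning described above), and the function name is ours.

```python
import numpy as np

def anfis_forward(x, y, centers, sigmas, consequents):
    """Forward pass of a two-rule first-order Sugeno ANFIS.

    centers, sigmas: shape (2, 2) Gaussian MF parameters for (A_i, B_i);
    consequents: shape (2, 3) rows (p_i, q_i, r_i).
    """
    gauss = lambda v, c, s: np.exp(-((v - c) ** 2) / (2 * s ** 2))
    # Layer 1: membership grades; Layer 2: firing strengths w_i.
    w = np.array([gauss(x, centers[i, 0], sigmas[i, 0]) *
                  gauss(y, centers[i, 1], sigmas[i, 1]) for i in range(2)])
    wbar = w / w.sum()                       # Layer 3: normalisation
    f = consequents @ np.array([x, y, 1.0])  # Layer 4: f_i = p_i x + q_i y + r_i
    return float(wbar @ f)                   # Layer 5: weighted-sum output

out = anfis_forward(0.3, 0.7,
                    centers=np.array([[0.0, 0.0], [1.0, 1.0]]),
                    sigmas=np.ones((2, 2)),
                    consequents=np.array([[1.0, 2.0, 0.5], [-1.0, 0.5, 1.0]]))
```

The combiner used in this paper applies the same structure, with the outputs of the five gait networks as FIS inputs.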
References
 [1] D. Kim, D. Kim, and J. Paik, "Gait recognition using active shape model and motion prediction," IET Computer Vision, vol. 4, no. 1, pp. 25–36, 2010.
 [2] K. Bashir, T. Xiang, and S. Gong, "Gait recognition without subject cooperation," Pattern Recognition Letters, vol. 31, no. 13, pp. 2052–2060, 2010.
 [3] J. B. Hayfron-Acquah, M. S. Nixon, and J. N. Carter, "Automatic gait recognition by symmetry analysis," Pattern Recognition Letters, vol. 24, no. 13, pp. 2175–2183, 2003.
 [4] C. Chen, J. Liang, and X. Zhu, "Gait recognition based on improved dynamic Bayesian networks," Pattern Recognition, vol. 44, no. 4, pp. 988–995, 2011.
 [5] Y. Kobayashi, T. Takashima, M. Hayashi, and H. Fujimoto, "Gait analysis of people walking on tactile ground surface indicators," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 13, no. 1, pp. 53–59, 2005.
 [6] B. Fildes, Injuries Among Older People: Falls at Home and Pedestrian Accidents, Dove Publications, Melbourne, Fla, USA, 1994.
 [7] R. K. Begg, M. Palaniswami, and B. Owen, "Support vector machines for automated gait classification," IEEE Transactions on Biomedical Engineering, vol. 52, no. 5, pp. 828–838, 2005.
 [8] Z. Ziheng, A. Prügel-Bennett, and R. I. Damper, "A Bayesian framework for extracting human gait using strong prior knowledge," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, pp. 1738–1752, 2006.
 [9] Z. Xue, D. Ming, W. Song, B. Wan, and S. Jin, "Infrared gait recognition based on wavelet transform and support vector machine," Pattern Recognition, vol. 43, no. 8, pp. 2904–2910, 2010.
 [10] P. Bours and R. Shrestha, "Eigensteps: a giant leap for gait recognition," in Proceedings of the 2nd International Workshop on Security and Communication Networks (IWSCN '10), pp. 1–6, May 2010.
 [11] J. Zhang, J. Pu, C. Chen, and R. Fleischer, "Low-resolution gait recognition," IEEE Transactions on Systems, Man, and Cybernetics Part B, vol. 40, no. 4, pp. 986–996, 2010.
 [12] K. Moustakas, D. Tzovaras, and G. Stavropoulos, "Gait recognition using geometric features and soft biometrics," IEEE Signal Processing Letters, vol. 17, no. 4, pp. 367–370, 2010.
 [13] C. Chen, J. Liang, H. Zhao, H. Hu, and J. Tian, "Factorial HMM and parallel HMM for gait recognition," IEEE Transactions on Systems, Man and Cybernetics Part C, vol. 39, no. 1, pp. 114–123, 2009.
 [14] M. W. Whittle, "Clinical gait analysis: a review," Human Movement Science, vol. 15, no. 3, pp. 369–387, 1996.
 [15] G. Johansson, "Visual motion perception," Scientific American, vol. 232, no. 6, pp. 76–88, 1975.
 [16] J. E. Cutting and L. T. Kozlowski, "Recognition of friends by their walk," Bulletin of the Psychonomic Society, vol. 9, pp. 353–356, 1977.
 [17] S. Sarkar, P. J. Phillips, Z. Liu, I. R. Vega, P. Grother, and K. W. Bowyer, "The HumanID gait challenge problem: data sets, performance, and analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 162–177, 2005.
 [18] J. W. Davis and S. R. Taylor, "Analysis and recognition of walking movements," in Proceedings of the International Conference on Pattern Recognition, vol. 16, no. 1, pp. 315–318, Québec City, QC, Canada, August 2002.
 [19] S. L. Dockstader and N. S. Imennov, "Prediction for human motion tracking failures," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 411–421, 2006.
 [20] X. Huang and N. V. Boulgouris, "Gait recognition for random walking patterns and variable body postures," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 1726–1729, March 2010.
 [21] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.
 [22] R. V. Babu and K. R. Ramakrishnan, "Recognition of human actions using motion history extracted from the compressed video," Image and Vision Computing, vol. 22, no. 8, pp. 597–607, 2004.
 [23] A. I. Nuno, B. Arcay, J. M. Cotos, and J. Varela, "Optimisation of fishing predictions by means of artificial neural networks, ANFIS, functional networks and remote sensing images," Expert Systems with Applications, vol. 29, no. 2, pp. 356–363, 2005.
 [24] A. Noureldin, A. El-Shafie, and M. R. Taha, "Optimizing neuro-fuzzy modules for data fusion of vehicular navigation systems using temporal cross-validation," Engineering Applications of Artificial Intelligence, vol. 20, no. 1, pp. 49–61, 2007.
 [25] N. Kishor, S. P. Singh, and A. S. Raghuvanshi, "Adaptive intelligent hydro turbine speed identification with water and random load disturbances," Engineering Applications of Artificial Intelligence, vol. 20, no. 6, pp. 795–808, 2007.
 [26] K. C. Lee and P. Gardner, "Adaptive neuro-fuzzy inference system (ANFIS) digital predistorter for RF power amplifier linearization," IEEE Transactions on Vehicular Technology, vol. 55, no. 1, pp. 43–51, 2006.
 [27] E. D. Übeyli and I. Güler, "Adaptive neuro-fuzzy inference system to compute quasi-TEM characteristic parameters of micro-shield lines with practical cavity sidewall profiles," Neurocomputing, vol. 70, no. 1–3, pp. 296–304, 2006.
 [28] H. Qin and S. X. Yang, "Adaptive neuro-fuzzy inference systems based approach to nonlinear noise cancellation for images," Fuzzy Sets and Systems, vol. 158, no. 10, pp. 1036–1063, 2007.
 [29] P. Çivicioglu, "Using uncorrupted neighborhoods of the pixels for impulsive noise suppression with ANFIS," IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 759–773, 2007.
 [30] G. Daoming and C. Jie, "ANFIS for high-pressure water jet cleaning prediction," Surface and Coatings Technology, vol. 201, no. 3-4, pp. 1629–1634, 2006.
 [31] A. Depari, A. Flammini, D. Marioli, and A. Taroni, "Application of an ANFIS algorithm to sensor data processing," IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 1, pp. 75–79, 2007.
 [32] K. Assaleh, "Extraction of fetal electrocardiogram using adaptive neuro-fuzzy inference systems," IEEE Transactions on Biomedical Engineering, vol. 54, no. 1, pp. 59–68, 2007.
 [33] M. L. Huang, H. Y. Chen, and J. J. Huang, "Glaucoma detection using adaptive neuro-fuzzy inference system," Expert Systems with Applications, vol. 32, no. 2, pp. 458–468, 2007.
 [34] L. I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, John Wiley & Sons, Hoboken, NJ, USA, 2004.
Copyright
Copyright © 2012 Hadi Sadoghi Yazdi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.