Engineering Applications of Intelligent Monitoring and Control 2014
Research Article | Open Access
Pak Kin Wong, Chi Man Vong, Xiang Hui Gao, Ka In Wong, "Adaptive Control Using Fully Online Sequential Extreme Learning Machine and a Case Study on Engine Air-Fuel Ratio Regulation", Mathematical Problems in Engineering, vol. 2014, Article ID 246964, 11 pages, 2014. https://doi.org/10.1155/2014/246964
Adaptive Control Using Fully Online Sequential Extreme Learning Machine and a Case Study on Engine Air-Fuel Ratio Regulation
Abstract
Most adaptive neural control schemes are based on stochastic gradient-descent backpropagation (SGBP), which suffers from the local minima problem. Although the recently proposed regularized online sequential extreme learning machine (ReOSELM) can overcome this issue, it requires a batch of representative initial training data to construct a base model before online learning, and such initial data is usually difficult to collect in adaptive control applications. Therefore, this paper proposes an improved version of ReOSELM, entitled fully online sequential extreme learning machine (FOSELM). While retaining the advantages of ReOSELM, FOSELM discards the initial training phase and hence becomes suitable for adaptive control applications. To demonstrate its effectiveness, FOSELM was applied to the adaptive control of engine air-fuel ratio based on a simulated engine model. In addition, the controller parameters were analyzed, and it was found that a large hidden node number with a small regularization parameter leads to the best performance. A comparison between FOSELM and SGBP was also conducted. The results indicate that FOSELM achieves better tracking and convergence performance than SGBP, since FOSELM tends to learn the unknown engine model globally whereas SGBP tends to “forget” what it has learnt. This implies that FOSELM is preferable for adaptive control applications.
1. Introduction
Adaptive control is a powerful control scheme for dynamic systems with high uncertainty. Its principle is to self-adjust the characteristics of the controller online, based on the output feedback of the system, in a way that reduces the tracking error while maintaining stability. One remarkable development in adaptive control is the application of neural networks to the adaptive mechanism [1–3], often referred to as adaptive neural control. It is well known that neural networks can approximate any nonlinear relationship by means of different network parameters and activation functions. Therefore, by expressing the system uncertainty in terms of neural networks, an adaptive neural controller is able to handle arbitrary nonlinearities through the tuning of its unknown network parameters. With this attractive feature, adaptive neural control has been extensively used in many controller design problems and practical applications [4–8].
Nevertheless, in most typical neural controllers, the parameter adjustment method, or so-called adaptive law, is based on the backpropagation (BP) algorithm [9]. The critical drawback of this algorithm is that it is a gradient-descent based learning method which may easily converge to local minima [10, 11]. Therefore, it usually takes more steps than necessary for the controller to achieve satisfactory performance. For instance, the simulation results in an early work on adaptive neural control [1] showed that thousands of updating steps were needed before the controller could finally achieve the desired convergence. Moreover, in some recent studies, such as [5, 6], neural controllers were shown to perform better than traditional proportional-integral-derivative controllers, yet they still take many time steps to settle every time the desired output changes. These results indicate that the system dynamics are not globally approximated. Another disadvantageous property of BP is that it updates the parameters in all the layers of the neural network, leading to a long processing time and hence a slow convergence speed.
In order to address the issues of BP, Huang et al. [12, 13] proposed a simple and fast algorithm entitled extreme learning machine (ELM). This algorithm trains only a single-hidden-layer feedforward neural network. Unlike BP, where all the parameters need to be tuned, ELM learns the unknown nonlinear relation by updating only the output weights (the parameters between the hidden layer and the output layer of the neural network); the parameters in the hidden layer are randomly initialized and remain unchanged. Due to its simple structure and learning mechanism, ELM runs much faster (up to thousands of times [11–13]) than traditional BP. Meanwhile, ELM is also superior to BP in terms of generalization performance and accuracy, as verified in many recent works [11, 14–18]. In this sense, employing ELM in adaptive neural control should lead to better control performance. Yet the original ELM algorithm is only suitable for batch learning. To learn the model online, online sequential ELM (OSELM) was proposed in [19]. While achieving the same performance as batch ELM, OSELM can update the network parameters sequentially, whether the data arrive one-by-one or chunk-by-chunk. Therefore, by replacing BP with OSELM, a better and faster adaptive neural controller should be obtained.
However, there are some factors limiting the direct application of OSELM to adaptive control. Firstly, OSELM is not robust to noisy data. Secondly, the initial parameters of OSELM, which are randomly generated, can easily lead to singular and ill-posed problems [20]. These problems significantly affect the model, so that the generalization performance can degrade to an unacceptable level. Furthermore, theoretically speaking, OSELM is not a fully online sequential learning algorithm; it requires a chunk of representative initial data to train a base ELM model prior to the online sequential learning. Such a chunk of representative initial data is usually difficult to obtain in adaptive control problems, and the number of initial data cannot be less than the number of hidden nodes. All of these together highly restrict the use of OSELM.
In order to deal with the aforesaid problems, regularized OSELM (ReOSELM), proposed by Huynh and Won [20], can be used. In ReOSELM, the norm of the output weights is added to the objective function to avoid singular and ill-posed problems. At the same time, a regularization parameter is included for the trade-off between the minimization of the output weight norm and the training error. With the introduction of the regularization parameter, the number of training data can also be less than the number of hidden nodes. A base model, however, is still required in ReOSELM. To overcome this limitation, this paper proposes a fully online version of ELM, entitled fully online sequential extreme learning machine (FOSELM). The proposed FOSELM is derived from ReOSELM, so it retains all the advantages and properties of ReOSELM, with the only difference that the batch training phase is discarded. Due to the removal of the batch training phase, FOSELM can easily be applied to any adaptive control problem. For demonstration purposes, this paper presents the application of FOSELM to adaptive engine air-fuel ratio (AFR) control based on a simulated engine model. The influence of the parameters (regularization parameter and hidden node number) is analyzed in simulation. To verify the effectiveness of FOSELM, stochastic gradient-descent BP (SGBP), a sequential learning variant of BP, is also applied to the same adaptive AFR control problem for comparison.
The organization of this paper is as follows. A brief review of ELM and its variants is provided in Section 2. The details of the proposed FOSELM are presented in Section 3. The application of FOSELM to adaptive engine AFR control and the related discussions are given in Section 4. Finally, conclusions are drawn in Section 5.
2. Review of ELM and Its Variants
This section briefly reviews the related work of ELM, including basic ELM, regularized ELM (ReELM), OSELM, and ReOSELM, in order to provide necessary background.
2.1. ELM and Regularized ELM
ELM [13] is an emerging technique for training feedforward neural networks without iterations. It consists of only one hidden layer, in which the input weights are randomly generated and need not be tuned. The output weights are optimized using a Moore-Penrose pseudoinverse instead of a gradient-descent method. Apart from the number of hidden nodes, no other parameters have to be manually chosen [13, 19]. For a network with one hidden layer and L hidden nodes, the output function is

f(x) = h(x)β, (1)

where h(x) = [h_1(x), ..., h_L(x)] is the output vector of the hidden layer feature mapping with respect to the input x, and β = [β_1, ..., β_L]^T is the vector of output weights between the hidden layer and the output nodes.
For a training dataset D = {(x_i, t_i)}, i = 1, ..., N, with N samples, a matrix H can be used to represent the hidden layer output. The size of H is N × L, and each row of H is a training sample after feature mapping.
The goal of basic ELM is to minimize the training error; that is,

minimize: ||Hβ − T||, (2)

where T = [t_1, ..., t_N]^T is the vector of real targets t_i with respect to the samples x_i from D. Mathematically, it is a multiple linear regression problem. The solution β of (2) is

β = H†T, (3)

where H† is the Moore-Penrose generalized inverse of the matrix H. If H^T H is nonsingular, the orthogonal projection method can be used to calculate the pseudoinverse of H:

H† = (H^T H)^{-1} H^T. (4)

Thus, β can be rewritten as

β = (H^T H)^{-1} H^T T. (5)
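The batch ELM procedure above can be sketched numerically. The following is a minimal illustrative example (not the paper's MATLAB code), assuming a sigmoid activation and a toy one-dimensional regression task; all variable names are the sketch's own.

```python
import numpy as np

# Minimal batch ELM sketch: random hidden layer + pseudoinverse output weights.
rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    # h(x) = sigmoid(x W + b); one row per sample -> N x L matrix H
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy data: learn t = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)

L = 30                           # number of hidden nodes
W = rng.normal(size=(1, L))      # random input weights (never tuned)
b = rng.normal(size=(1, L))      # random biases (never tuned)

H = hidden_output(X, W, b)
# Output weights via the Moore-Penrose pseudoinverse: beta = pinv(H) T, as in (3)
beta = np.linalg.pinv(H) @ T

pred = H @ beta
print("training RMSE:", float(np.sqrt(np.mean((pred - T) ** 2))))
```

Only `beta` is computed from data; the single pseudoinverse replaces all gradient iterations, which is where ELM's speed advantage over BP comes from.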
Since basic ELM is based on the empirical risk minimization principle (see (2)), the trained model tends to overfit [20, 21]. Therefore, ReELM was proposed in [21] as an improved version of ELM. A similar work has also been introduced by the authors of ELM in [22], and a more detailed explanation can be found in [23]. The goal of ReELM is to minimize not only the training error but also the norm of the output weights; that is,

minimize: ||Hβ − T||² and ||β||². (6)
The optimization problem of ReELM for a single output node can then be formulated as follows:

minimize: λ||β||² + Σ_{i=1}^{N} ξ_i²,
subject to: h(x_i)β = t_i − ξ_i, i = 1, ..., N, (7)

where λ is the user-specified parameter that provides a trade-off between the training error and the norm of the output weights, N is the number of training data, and ξ_i is the error for the i-th training data (also known as a slack variable). The solution of β can be calculated by

β = (λI + H^T H)^{-1} H^T T. (8)
According to Bartlett’s theory [24], this resulting solution tends to have better and more stable generalization performance, as verified in [21–23].
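The regularized solution (8) can be sketched in a few lines. The sketch below is illustrative only: `H` and `T` are random stand-ins for a hidden layer output and targets, deliberately chosen with fewer samples than hidden nodes (N < L), a regime where the unregularized (5) would fail because H^T H is singular.

```python
import numpy as np

# Regularized ELM solution sketch: beta = (lam*I + H^T H)^{-1} H^T T
rng = np.random.default_rng(1)
N, L = 20, 50                    # fewer samples than hidden nodes (N < L)
H = rng.normal(size=(N, L))      # stand-in hidden layer output
T = rng.normal(size=(N, 1))     # stand-in targets

lam = 1e-3                       # regularization parameter (trade-off term)
# With lam > 0 the L x L system is nonsingular even though rank(H^T H) <= N
beta_reg = np.linalg.solve(lam * np.eye(L) + H.T @ H, H.T @ T)

print("||beta|| =", float(np.linalg.norm(beta_reg)))
print("residual =", float(np.linalg.norm(H @ beta_reg - T)))
```

A small positive λ keeps the norm of β bounded while still driving the training residual close to zero, which is exactly the trade-off (7) encodes.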
2.2. OSELM and ReOSELM
OSELM, which originates from basic ELM, is an online sequential learning algorithm that can learn data not only one-by-one but also chunk-by-chunk with fixed or varying chunk size [19]. It consists of two phases: an initialization phase and a sequential learning phase. In the initialization phase, a base ELM model is trained using a small chunk of initial training data. For instance, the output weights for an initial training dataset D_0 with N_0 training samples are obtained as

β_0 = P_0 H_0^T T_0, (9)

where

P_0 = (H_0^T H_0)^{-1}, (10)

and H_0 and T_0 are the hidden layer output matrix and target vector for D_0, respectively.
Then, in the sequential learning phase, when a new chunk of training data arrives, the output weights are updated by

P_{k+1} = P_k − P_k H_{k+1}^T (I + H_{k+1} P_k H_{k+1}^T)^{-1} H_{k+1} P_k, (11)
β_{k+1} = β_k + P_{k+1} H_{k+1}^T (T_{k+1} − H_{k+1} β_k), (12)

where k indicates the k-th arriving chunk of training data, with k starting from zero, and H_{k+1} is the hidden layer output for the (k+1)-th arriving chunk.
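The two-phase procedure, with the recursive updates (11) and (12), can be sketched and checked against the batch solution. The example below is illustrative (random stand-in data, hypothetical sizes); after all chunks are consumed, the sequentially learned weights coincide with the batch least-squares weights over the whole dataset, which is the defining property of OSELM.

```python
import numpy as np

# OSELM sketch: initialization phase + chunk-by-chunk sequential phase.
rng = np.random.default_rng(2)
L = 5
H_all = rng.normal(size=(40, L))   # stand-in hidden layer outputs, 40 samples
T_all = rng.normal(size=(40, 1))   # stand-in targets

# Initialization phase, (9)-(10): the first N0 >= L samples train the base model
N0 = 10
H0, T0 = H_all[:N0], T_all[:N0]
P = np.linalg.inv(H0.T @ H0)
beta = P @ H0.T @ T0

# Sequential phase: remaining data arrives in chunks of 5
for start in range(N0, 40, 5):
    Hk = H_all[start:start + 5]
    Tk = T_all[start:start + 5]
    # (11): P_{k+1} = P_k - P_k Hk^T (I + Hk P_k Hk^T)^{-1} Hk P_k
    P = P - P @ Hk.T @ np.linalg.inv(np.eye(len(Hk)) + Hk @ P @ Hk.T) @ Hk @ P
    # (12): beta_{k+1} = beta_k + P_{k+1} Hk^T (Tk - Hk beta_k)
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

# The sequential result matches the batch least-squares solution on all data
beta_batch = np.linalg.pinv(H_all) @ T_all
print("max |sequential - batch| =", float(np.abs(beta - beta_batch).max()))
```

Note that the inverse in (11) is only of a small chunk-sized matrix, so each update is cheap regardless of how much data has already been seen.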
One major problem in OSELM is that, if the term H_0^T H_0 is singular, then (10) is unsolvable. Therefore, to avoid the singular problem, OSELM requires that the initial training dataset D_0 have at least L (the hidden node number) distinct samples. To improve this situation, ReOSELM [20] adds a regularization term to (10); that is,

P_0 = (H_0^T H_0 + λI)^{-1}. (13)
According to ridge regression theory, adding a small positive value to the diagonal of H_0^T H_0 also avoids the singular problem when the number of initial training data is less than the number of hidden nodes. Therefore, ReOSELM removes the constraint suffered in OSELM, making it suitable for cases where the number of initial data is small (e.g., adaptive control problems). In addition, similar to ReELM, the term λ in (13) mainly controls the relative importance between the training error and the norm of the output weights. The theory behind the improvement of ReOSELM over OSELM can be explained by the same reasoning as ReELM over basic ELM.
3. Proposed FOSELM
In this section, an improved ReOSELM, namely, fully online sequential extreme learning machine (FOSELM), is proposed. It does not need a small chunk of initial training data to construct a base model but achieves the same performance as ReOSELM.
Consider an initial training dataset D_0 with a corresponding hidden layer output matrix H_0. Using (9) and (13), the output weights β_0 are calculated as

β_0 = K_0^{-1} H_0^T T_0, (14)

where K_0 = H_0^T H_0 + λI and P_0 = K_0^{-1}.
Suppose now a new training dataset D_1 arrives with a corresponding hidden layer output matrix H_1. By considering both training datasets D_0 and D_1 and using (9) and (13) again, the output weights β_1 should be obtained as

β_1 = K_1^{-1} (H_0^T T_0 + H_1^T T_1), (15)

where K_1 = H_0^T H_0 + H_1^T H_1 + λI. Now, expanding the last two terms on the right-hand side of (15):

H_0^T T_0 = K_0 β_0 = (K_1 − H_1^T H_1) β_0, (16)
K_1 = K_0 + H_1^T H_1. (17)
Then, combining (15), (16), and (17), β_1 is obtained as

β_1 = K_1^{-1} (K_1 β_0 − H_1^T H_1 β_0 + H_1^T T_1) = β_0 + K_1^{-1} H_1^T (T_1 − H_1 β_0). (18)
Now, considering only D_1, β_1 can be obtained as

β_1 = (H_1^T H_1 + λI)^{-1} H_1^T T_1. (19)
Comparing (18) and (19), it is obvious that (19) can be obtained from (18) if and only if β_0 = 0 and K_0 = λI. Therefore, by initializing β_0 = 0 and P_0 = K_0^{-1} = (1/λ)I, the initial training dataset D_0 can be omitted, while a model for D_1 can still be constructed. In other words, the batch training in the initialization phase of ReOSELM is automatically integrated into FOSELM. Thereby, FOSELM becomes a fully online sequential learning algorithm and can still achieve the same learning performance as ReOSELM.
To make it clear, the proposed FOSELM algorithm is rewritten as below.
Step 1. Assign random values to the input weights, and set β_0 = 0 and P_0 = (1/λ)I.
Step 2. For the (k+1)-th arriving training data: (a) calculate the hidden layer output matrix H_{k+1}; (b) update the output weights β_{k+1} using (11) and (12).
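The two steps above can be sketched and verified numerically. The following illustrative example (random stand-in data, hypothetical sizes) starts from β_0 = 0 and P_0 = (1/λ)I with no initial batch, feeds samples one-by-one through the same recursive updates, and confirms the result equals the batch regularized solution (λI + H^T H)^{-1} H^T T over all seen data.

```python
import numpy as np

# FOSELM sketch: fully online learning with no initialization batch.
rng = np.random.default_rng(3)
L, lam = 8, 1e-3
H_all = rng.normal(size=(30, L))   # stand-in hidden layer outputs
T_all = rng.normal(size=(30, 1))   # stand-in targets

# Step 1: beta_0 = 0 and P_0 = (1/lam) I (random input weights are implicit in H)
P = (1.0 / lam) * np.eye(L)
beta = np.zeros((L, 1))

# Step 2: learn from data arriving one-by-one via (11) and (12)
for i in range(30):
    Hk = H_all[i:i + 1]            # 1 x L hidden layer output for one sample
    Tk = T_all[i:i + 1]
    P = P - P @ Hk.T @ np.linalg.inv(np.eye(1) + Hk @ P @ Hk.T) @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

# Equals the batch regularized (ReELM-style) solution on all the seen data
beta_batch = np.linalg.solve(lam * np.eye(L) + H_all.T @ H_all, H_all.T @ T_all)
print("max |FOSELM - batch| =", float(np.abs(beta - beta_batch).max()))
```

This is the practical content of the derivation: choosing β_0 = 0 and P_0 = (1/λ)I makes the "initialization phase" a trivial assignment rather than a batch training step.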
In short, FOSELM is a fully online sequential learning algorithm. It is simpler than ReOSELM and easier to implement. Compared with OSELM and ReOSELM, FOSELM is more suitable for learning problems in which the training data is difficult to collect in advance. To emphasize the advantages of FOSELM, a detailed comparison among OSELM, ReOSELM, and FOSELM is summarized in Table 1.

As noted in [19], the sequential learning algorithm (11)-(12) of OSELM is similar to the recursive least-squares (RLS) algorithm, so all the convergence results of RLS can be applied. It should be noted that ReOSELM and FOSELM in fact share the same sequential update equations with OSELM, so the convergence results of RLS apply to all three algorithms: OSELM, ReOSELM, and FOSELM. In other words, if any of the three algorithms is applied to the adaptive controller, the controller stability can be guaranteed.
4. Case Study on Adaptive Engine AFR Control
To demonstrate the usefulness of the proposed algorithm, FOSELM is applied to the adaptive control of engine AFR based on a simulated engine model. The effectiveness of FOSELM in this application is discussed and a comparison with SGBP is provided in this section.
4.1. Adaptive Engine AFR Control
Engine AFR refers to the mass ratio of air to fuel present in the engine. It is a parameter that critically affects the engine emissions, brake-specific fuel consumption, and power [25]. In general, the AFR can be set to different values for different purposes. For example, using gasoline as the fuel, the AFR should be controlled to the stoichiometric AFR of gasoline, 14.7:1, in order to keep the maximum conversion efficiency of the three-way catalytic converter [25]. When higher engine torque is demanded, the AFR should be controlled to 12.5:1 in order to achieve the best engine power. For the best brake-specific fuel consumption, the AFR should be set to 16:1. Consequently, controlling the AFR is essential for maintaining the desired engine performance. However, the combustion process of an engine is a complex dynamic system that involves many uncertainties [11, 15]. Therefore, for illustrative purposes, this paper applies the adaptive control scheme, based on the proposed FOSELM, to AFR control.
Theoretically, the dynamics of AFR can be described by a discrete approximated model in which the control appears linearly [1]:

y(k+1) = f(y(k), ..., y(k−n+1)) + g(y(k), ..., y(k−n+1)) u(k), (20)

where y is the AFR, u is the control input, k is the time step, n is the system order, and g must be a nonzero function. If both f and g are known, the following control law can be used to exactly track the desired AFR, y_d:

u(k) = (y_d(k+1) − f(y(k), ..., y(k−n+1))) / g(y(k), ..., y(k−n+1)). (21)
Therefore, assuming that the FOSELM consists of two function estimates f̂ and ĝ, the purpose of FOSELM is to adaptively learn f and g by self-tuning the parameters of f̂ and ĝ, based on the error from the system output feedback (i.e., the difference between y and y_d). The engine AFR control scheme is illustrated in Figure 1.
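The certainty-equivalence idea behind (21) can be sketched as follows. This is a toy illustration, not the paper's engine model: `f_hat` and `g_hat` are hypothetical stand-ins for the learned functions, and the first-order plant is invented for the example.

```python
# Control law (21): choose u(k) so the estimated model output hits y_d(k+1).
def control_law(y_hist, y_d_next, f_hat, g_hat):
    g = g_hat(y_hist)
    assert g != 0.0, "g must be a nonzero function for (21) to be valid"
    return (y_d_next - f_hat(y_hist)) / g

# Toy plant: y(k+1) = 0.5*y(k) + 2.0*u(k); with perfect estimates, tracking
# is exact in one step.
f_hat = lambda y: 0.5 * y[-1]   # hypothetical learned f
g_hat = lambda y: 2.0           # hypothetical learned g (nonzero)

u = control_law([14.7], 12.5, f_hat, g_hat)   # drive AFR from 14.7 toward 12.5
y_next = 0.5 * 14.7 + 2.0 * u                 # plant response, tracks 12.5
print(u, y_next)
```

In the actual scheme, f̂ and ĝ are the FOSELM models being updated online, so tracking improves as the estimates converge to the true f and g.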
The purpose of the controller is to control the amount of fuel injected into the engine so that the corresponding AFR matches the target AFR. The control signal u in (21) is the fuel injection time of the injectors: the longer the fuel injection time, the larger the amount of fuel injected. To simplify the problem and focus on the performance of FOSELM, a simulated engine model (a 465Q gasoline engine at an engine speed of 3500 rpm and a manifold pressure of 85 kPa), with dynamics in the form of (20), is taken from [26].
Two reference AFR outputs (y_d) are used to evaluate the performance of FOSELM. The first is a square wave whose amplitude changes between 12.5 and 14.7 every 50 steps; this tests the step response of the adaptive controller. The other reference command is a sine wave whose amplitude varies between 12.5 and 14.7 with a period of 100 steps; this tests the continuous tracking performance of the controller. In addition, all the simulations in the following sections were implemented in MATLAB and executed on a PC with an Intel Core i7 CPU and 4 GB of RAM.
4.2. Performance of FOSELM
The performance of FOSELM on the adaptive AFR controller is evaluated in three cases. As compared to OSELM, a regularization parameter λ is introduced in FOSELM; therefore, the first case tests the effectiveness of λ. Moreover, as there is another important parameter in FOSELM, namely, the hidden node number L, the influence of these two parameters on the performance of FOSELM should also be analyzed. Thus, in the second case, simulations under various λ and L values are presented. In the last case, disturbances are introduced to the reference command in order to test the robustness of FOSELM.
4.2.1. Effectiveness of λ
In this case, the FOSELM used in the adaptive AFR controller was run under two λ values: λ = 0 and λ = 0.001. For both situations, L was set to 30. The simulation results are presented in Figure 2.
It can be seen that the tracking performance of FOSELM without the regularization term (λ = 0) is quite poor compared to that with λ > 0. This is mainly due to the singular problem. It should be noted that, when λ = 0, FOSELM becomes a special case of OSELM that trains the base model using the first arriving training sample. As the training samples in this adaptive problem arrive one-by-one, the number of initial training data in the first step is also one. This violates the restriction of OSELM that the initial training dataset should have at least L distinct samples in the initial phase. Thus, the term H_0^T H_0 is determined solely by the first arriving training sample, and a singular H_0^T H_0 is inevitable. Furthermore, the rank of P_k (used in the update phase, as given in (12)) basically remains at one in the sequential learning phase, so the learning performance of FOSELM without λ is simply equal to that of an OSELM with only one hidden node, even if a large number of hidden nodes is set. As a result, the tracking error of FOSELM without λ cannot be eliminated.
4.2.2. Influence of λ and L
Referring to (7), the regularization parameter λ mainly controls the relative importance of the two terms: the training error and the norm of the output weights. If λ is small, reducing the training error is more important; otherwise, minimizing the norm of the output weights is more important. Therefore, by setting λ to a small value, the tracking error is rapidly reduced. However, if λ is too small (very close to 0), the matrix H^T H + λI tends to become singular, which leads to the situation suffered in the first case. Moreover, the hidden node number L is another factor that can affect the performance of FOSELM. It mainly controls the dimensionality of the model. Since, in all the variants of ELM, the parameters in the hidden nodes are randomly generated, it follows that the larger the hidden node number, the better the representation power. As λ is already introduced in the FOSELM model, the overfitting problem due to a large hidden node number is avoided. To analyze the influence of λ and L on the model, simulations were run at four different λ values and three different L values (10, 30, and 50). The norm of the output weights of the FOSELM at each step was recorded. The results are shown in Figure 3.
Figure 3: (a) L = 10; (b) L = 30; (c) L = 50.
The results in Figure 3 show that the influence of L is more significant than that of λ. As shown in Figure 3(a), when L is small, the norm of the output weights tends to “blow up,” especially when λ is close to zero. This “blow up” often leads to bad generalization performance [24]. On the contrary, as shown in Figure 3(c), if L is set to a large number, say 50, then different λ values do not affect the norm of the output weights much; the trend at different λ basically remains the same. This indicates that the norm of the output weights is not very sensitive to λ, as long as λ is not zero and L is sufficiently large. In order to investigate how the change of L affects the controller performance, simulations were run at two different L values, with a fixed λ. The results are shown in Figure 4.
From Figure 4, it can be seen that a FOSELM with a larger number of hidden nodes has better tracking performance. This verifies the idea that the hidden node number is proportional to the representation power. As a remark, these simulation results are in accordance with the approximation proof of ELM by Huang et al. [13].
4.2.3. Robustness of FOSELM
One powerful feature of an adaptive controller is its robustness to disturbance. Therefore, to evaluate the robustness of FOSELM in this adaptive control application, two disturbances were introduced to the reference AFR command at two different time steps. The simulations were run at λ = 0.001 and L = 50. The results are presented in Figure 5. They show that, although disturbances are introduced, the FOSELM adaptive controller converges quickly back to the reference command, indicating that FOSELM is quite robust to disturbance.
4.3. Comparison to SGBP
SGBP, a variant of BP for sequential learning applications [9], is a typical algorithm for adaptive neural control. In order to show the benefits of FOSELM over SGBP in adaptive neural control, SGBP was also applied to the adaptive engine AFR control problem, and a comparison between FOSELM and SGBP was carried out. Similar to FOSELM, SGBP has two parameters, namely, the learning rate and the hidden node number. In the comparison, the hidden node number for SGBP is 20, and three learning rates were assigned. For FOSELM, the hidden node number is set to 50, with a regularization parameter of 0.001 again. The results of the two methods are provided in Figure 6.
The comparative results in Figure 6 imply two things. The first is that SGBP is quite sensitive to its learning rate. As shown in Figure 6(a), a small learning rate leads to a slow convergence speed, while, as shown in Figure 6(c), a large learning rate leads to oscillation in the convergence process. It was found from some preliminary tests (not shown here) that the learning rate is also strongly associated with the hidden node number. In other words, expert experience is necessary to determine the structure and parameters of SGBP [11]. Compared to SGBP, FOSELM is less sensitive to its parameters. Usually, the regularization parameter λ can be set to a small value such as 0.001, and the hidden node number L can be set to a large value such as 50. This has already been verified in the previous section.
Another implication of Figure 6 is that FOSELM achieves better global control performance than SGBP. Referring to Figure 6(b), every time the amplitude changes, several steps are required by SGBP to adapt to the desired reference. This shows that SGBP always tends to “forget” what it has learnt. The reason behind this phenomenon is that SGBP updates both the input weights and the output weights to achieve the desired output, and so may easily fall into local minima (i.e., become optimal only for the most recently arrived data). Thus, when the desired output changes, the weights need to be adjusted again to track it. In contrast, the theory behind FOSELM is to seek a global optimum (i.e., optimal for all the seen data). Hence, as shown in Figure 6(d), once the model is learnt, the controller can directly adapt to the desired output even if it changes frequently. In fact, referring back to Figure 3, the norm of the output weights becomes stable after several learning steps, which also verifies that the model has been learnt and no further update is required. This unique feature of FOSELM is highly suitable for adaptive control applications.
5. Conclusions
In this paper, a novel fully online learning algorithm entitled FOSELM is proposed for adaptive neural control. It keeps the same learning performance as ReOSELM but discards the initial batch training phase adopted in ReOSELM. Without the initial training phase, FOSELM becomes easier to implement and more suitable for online learning tasks in which the training data is difficult to provide in advance, for example, adaptive control problems.
To demonstrate its effectiveness, the proposed FOSELM is applied to the adaptive control of engine AFR based on a simulated engine model. As the performance of FOSELM is determined by two important parameters, namely, the regularization parameter and the hidden node number, the influence of these parameters was analyzed. Furthermore, a comparison between FOSELM and SGBP on the adaptive control application was also carried out. The results imply the following.
(1) Without a regularization parameter, FOSELM becomes a special case of OSELM and suffers from the singular problem; with a regularization parameter, both the singular and overfitting problems are solved simultaneously, leading to better tracking performance.
(2) The norm of the output weights is less sensitive to the regularization parameter but sensitive to the number of hidden nodes. It is found that a large number of hidden nodes with a small regularization parameter results in a smaller norm of output weights and better tracking performance.
(3) SGBP always tends to “forget” what it has learnt, as the algorithm easily falls into a local optimum. Conversely, FOSELM seeks a global optimum; thus, once the model is learnt globally, the controller can directly adapt to the desired output even if the desired output changes frequently. This is the unique feature of FOSELM.
As FOSELM achieves better tracking and convergence performance than traditional SGBP, it is preferable for adaptive neural control applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The research is supported by the University of Macau Research Grant, Grant nos. MYRG081(Y1L2)FST12WPK, MYRG077(Y1L2)FST13WPK, and MYRG075(Y1L2)FST13VCM, and Macau FDCT Grant FDCT/075/2013/A.
References
[1] F. C. Chen, "Back-propagation neural networks for nonlinear self-tuning adaptive control," IEEE Control Systems Magazine, vol. 10, no. 3, pp. 44–48, 1990.
[2] C. Y. Lee and J. J. Lee, "Adaptive control for uncertain nonlinear systems based on multiple neural networks," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 34, no. 1, pp. 325–333, 2004.
[3] Y. J. Liu, C. L. P. Chen, G. X. Wen, and S. Tong, "Adaptive neural output feedback tracking control for a class of uncertain discrete-time nonlinear systems," IEEE Transactions on Neural Networks, vol. 22, no. 7, pp. 1162–1167, 2011.
[4] C. C. Tsai, H. C. Huang, and S. C. Lin, "Adaptive neural network control of a self-balancing two-wheeled scooter," IEEE Transactions on Industrial Electronics, vol. 57, no. 4, pp. 1420–1428, 2010.
[5] Y. Xiaofang, W. Yaonan, S. Wei, and W. Lianghong, "RBF networks-based adaptive inverse model control system for electronic throttle," IEEE Transactions on Control Systems Technology, vol. 18, no. 3, pp. 750–756, 2010.
[6] J. Z. Peng and R. Dubay, "Identification and adaptive neural network control of a DC motor system with dead-zone characteristics," ISA Transactions, vol. 50, no. 4, pp. 588–598, 2011.
[7] L. S. Guo and L. Parsa, "Model reference adaptive control of five-phase IPM motors based on neural network," IEEE Transactions on Industrial Electronics, vol. 59, no. 3, pp. 1500–1508, 2012.
[8] G. Q. Xia, X. C. Shao, A. Zhao, and H. Y. Wu, "Adaptive neural network control with backstepping for surface ships with input dead-zone," Mathematical Problems in Engineering, vol. 2013, Article ID 530162, 9 pages, 2013.
[9] Y. LeCun, L. Bottou, G. B. Orr, and K. R. Müller, "Efficient backprop," in Neural Networks: Tricks of the Trade, G. B. Orr and K. R. Müller, Eds., pp. 9–50, Springer, Berlin, Germany, 1998.
[10] H. J. Rong and G. S. Zhao, "Direct adaptive neural control of nonlinear systems with extreme learning machine," Neural Computing and Applications, vol. 22, pp. 577–586, 2013.
[11] K. I. Wong, P. K. Wong, C. S. Cheung, and C. M. Vong, "Modelling of diesel engine performance using advanced machine learning methods under scarce and exponential data set," Applied Soft Computing, vol. 13, pp. 4428–4441, 2013.
[12] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, Budapest, Hungary, July 2004.
[13] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[14] C. J. Lu and Y. J. E. Shao, "Forecasting computer products sales by integrating ensemble empirical mode decomposition and extreme learning machine," Mathematical Problems in Engineering, vol. 2012, Article ID 831201, 15 pages, 2012.
[15] K. I. Wong, P. K. Wong, C. S. Cheung, and C. M. Vong, "Modeling and optimization of biodiesel engine performance using advanced machine learning methods," Energy, vol. 55, pp. 519–528, 2013.
[16] P. K. Wong, C. M. Vong, C. S. Cheung, and K. I. Wong, "Diesel engine modelling using extreme learning machine under scarce and exponential data sets," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 21, supplement 2, pp. 87–98, 2013.
[17] Y. Lan, Z. J. Hu, Y. C. Soh, and G. B. Huang, "An extreme learning machine approach for speaker recognition," Neural Computing and Applications, vol. 22, pp. 417–425, 2013.
[18] A. Iosifidis, A. Tefas, and I. Pitas, "Minimum class variance extreme learning machine for human action recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, pp. 1968–1979, 2013.
[19] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[20] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden-layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[21] W. Deng, Q. Zheng, and C. Lin, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, Nashville, Tenn, USA, April 2009.
[22] G. B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1–3, pp. 155–163, 2010.
[23] G. B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
[25] P. K. Wong, H. C. Wong, and C. M. Vong, "Online time-sequence incremental and decremental least squares support vector machines for engine air-ratio prediction," International Journal of Engine Research, vol. 13, no. 1, pp. 28–40, 2012.
[26] G. Y. Li, Application of Intelligent Control and MATLAB to Electronically Controlled Engines, Publishing House of Electronics Industry, Beijing, China, 1st edition, 2007 (in Chinese).
Copyright
Copyright © 2014 Pak Kin Wong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.