Time-Delay Systems and Its Applications in Engineering 2014
A Novel Fractional-Order PID Controller for Integrated Pressurized Water Reactor Based on Wavelet Kernel Neural Network Algorithm
This paper presents a novel wavelet kernel neural network (WKNN) built on a wavelet kernel function. It is applicable to online learning with adaptive parameters and is applied to parameter tuning of a fractional-order PID (FOPID) controller, which can handle the time delay problem of complex control systems. The wavelet kernel function, obtained by combining the wavelet function with the kernel function, is adopted and validated as an admissible kernel for neural networks. Compared with the conventional wavelet neural network, the most innovative character of the WKNN is its rapid convergence and high precision in the parameter-updating process. Furthermore, the integrated pressurized water reactor (IPWR) system is established with RELAP5, and a novel control strategy combining the WKNN and fuzzy logic rules is proposed to shorten the control time and exploit experiential knowledge sufficiently. Finally, experimental results verify that the proposed control strategy and controller are practicable and reliable in an actual complicated system.
With the tightening supply of energy and the worsening of environmental pollution around the globe, the reliability, cleanliness, and security of nuclear technology help it gain increasing traction. The majority of marine nuclear power plants adopt an integrated design. Since vessels demand strong mobility and their operating conditions change dramatically, the nuclear reactor and system devices must accommodate radical load changes. Reactor power control dominates the whole nuclear power plant system and decides the security and reliability of the entire device. Hence, the operation control strategy and controller design should accord with these particularities, which facilitates power control and adjustment and ensures steady operation. Like other industrial systems, the reactor is a time delay system. The time delay effect degrades system performance, possibly causing oscillation and even destroying stability [1, 2]. Thus, it is of practical significance to research the time delay control of nuclear reactors.
In numerous problems encountered in modern control, the time delay issue is of central importance and is also difficult to treat. There is increasing interest among scientists and engineers in the time delay problem of industrial systems. Classical control theory makes extensive use of Smith control, Dahlin control, and so on [3–5]. In recent years, novel control algorithms have also achieved results for time delay systems [6, 7]; for instance, Yang and Zhang studied a robust network control method for T-S fuzzy systems with uncertainty and time delay. Besides, Shi and Yu proposed the two-mode-dependent robust control synthesis of networked control systems with random delay. Moreover, a class of uncertain singular time delay systems has been controlled by sliding mode [10, 11], and an optimal sliding surface has been designed for sliding mode control of time delay uncertain systems. Industrial systems, especially large-scale complex industrial systems, possess the feature of process delay. The control of these systems depends on both the current state and past states, which makes the control problem difficult. As a mature controller with strong availability, the PID controller and its variants have been applied in industrial processes for large-scale, sophisticated dynamic systems, primarily because of a relatively simple structure that is easily comprehended and implemented [13–15]. In recent years, considerable interest in time delay has been directed at the fractional-order PID controller, and many researchers have presented beneficial methods to design stabilizing fractional-order PID (FOPID) controllers for varying time delay systems [16–19]. This paper adopts a neural network (NN) to tune the FOPID controller online to control the nuclear reactor power, keeping the coolant average temperature constant under load changes.
The NN is an effective method for PID parameter tuning, and the wavelet neural network (WNN) improves the efficiency of the NN by replacing the nonlinear excitation function with a wavelet function. However, the WNN still follows the gradient descent method to adjust weights, which inevitably causes deficient generalization performance, ill-posedness, and overfitting. Motivated by this drawback, the algorithm core, least mean squares (LMS), should be exploited to improve the WNN. Both LMS and kernel LMS (KLMS) are derived from the mean squared error cost function [20–22]. The LMS algorithm is famous for being easy to comprehend and implement but cannot be utilized in the nonlinear domain. With the wide spread of kernel theory, the novel LMS algorithm with the kernel trick has drawn considerable attention in learning machines, including neural networks [23–26]. Currently, kernel functions (such as the Gaussian kernel) are applied mostly in support vector machines (SVMs), which project linearly nonseparable data into another space where they become separable. The RBF network is a sort of NN that frequently uses the Gaussian kernel. But this kernel cannot approximate the target appropriately when it encounters more complex signals in the calculation process. Several studies indicate that the Gaussian kernel in RBF networks and SVMs cannot ideally generate a set of complete bases in the subspace by translation. Some potential wavelet functions can be transformed into kernel functions, which are competitive in producing a set of complete bases in the square integrable space by stretching and translating. Meanwhile, the RBF network needs to identify the model of the object and obtain the Jacobian information when it tunes the network weights, and it is hard to gain the Jacobian information of a complex industrial system. The conventional WNN, by contrast, can tune the PID controller without such information and is more appropriate as the parent body for improvement.
Thus, this paper proposes a novel neural network based on the wavelet kernel function and combines it with fuzzy logic rules to achieve online tuning of the FOPID controller. Experimental results validate that the control strategy and controller design improve accuracy and speed, control the nuclear reactor power operation, and enhance the overall efficiency.
The remainder of this paper is organized as follows. Section 2 provides a short introduction to the LMS algorithm and the kernel LMS algorithm. Section 3 presents the related theory of the wavelet kernel function and proposes the wavelet kernel neural network, including a discussion of adaptive parameters. The model of the nuclear reactor is structured with the RELAP5 program in Section 4. The control strategy and the fractional-order PID controller are then designed with fuzzy logic and the wavelet kernel neural network in Section 5. In Section 6, experiments illustrate the effectiveness of the wavelet kernel neural network and the fractional-order PID controller. As some controllers face the problem of losing efficacy in practice, Section 6 also simulates and analyzes the control process on the basis of the reactor model. Finally, the conclusion of this paper is given in Section 7.
2. The Description of Kernel LMS Algorithm
In this section, the kernel LMS algorithm is introduced as a short review preceding the wavelet kernel neural network. Kernel LMS is an approach to performing the LMS algorithm in a kernel feature space. In the LMS algorithm, the hypothesis space is composed of all linear operators on the input space, embedded in a Hilbert space, where the linear operator is realized by the standard inner product. The LMS algorithm minimizes the instantaneous error of the system output, $e(n) = d(n) - w^{T}(n)x(n)$, where $d(n)$ is the desired value, $w(n)$ is the weight vector, $x(n)$ is the input vector, and $n$ is the time instant. Using the straightforward stochastic gradient descent update rule, the weight vector is updated from the input vector as $w(n+1) = w(n) + \eta e(n)x(n)$, where $\eta$ is the step-size. By training step by step, the weight is updated as a linear combination of the previous weight and the present input vector.
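As a concrete illustration, the LMS recursion above can be written in a few lines. The sketch below is not from the paper; the filter length, step-size, and test signal are arbitrary illustrative choices.

```python
import numpy as np

def lms(x, d, eta=0.05, dim=3):
    """LMS: w(n+1) = w(n) + eta * e(n) * x(n), with e(n) = d(n) - w(n)^T x(n)."""
    w = np.zeros(dim)
    errors = []
    for n in range(dim, len(x)):
        xn = x[n - dim:n]          # input vector: the last `dim` samples
        e = d[n] - w @ xn          # instantaneous error
        w = w + eta * e * xn       # stochastic gradient descent update
        errors.append(e)
    return w, np.array(errors)

# Identify a known 3-tap linear filter from its input/output data
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
true_w = np.array([0.4, -0.2, 0.1])
d = np.zeros_like(x)
for n in range(3, len(x)):
    d[n] = true_w @ x[n - 3:n]
w, errors = lms(x, d)
```

The learned weights converge to the true filter taps, and the final weight is indeed a linear combination of the observed input vectors, as the text states.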
The LMS algorithm learns linear patterns satisfactorily but cannot achieve the same sound effect for nonlinear patterns. Motivated by this drawback, Pokharel and other scholars extended the LMS algorithm to the reproducing kernel Hilbert space (RKHS) by the kernel trick [28, 29]. Kernel methods map the input $x$ into a high dimensional space (HDS) $F$. By Mercer's theorem, any kernel function $\kappa(x, x')$ induces a mapping $\varphi$ such that $\kappa(x, x') = \langle \varphi(x), \varphi(x') \rangle$.
When the input in (2) consists of $\varphi(x(n))$ rather than $x(n)$, where $\varphi(x(n))$ is the infinite-dimensional feature vector transformed from the input vector $x(n)$, the LMS algorithm is no longer applicable, and the instantaneous error of the system output becomes $e(n) = d(n) - \Omega^{T}(n)\varphi(x(n))$, where $d(n)$ is the desired value and $\Omega(n)$ is the weight vector in the RKHS.
Setting $\Omega(0) = 0$ for convenience, (2) converts to $\Omega(n) = \eta \sum_{i=1}^{n} e(i)\varphi(x(i))$, and the final output in the step-by-step updating of the learning algorithm is $y(n) = \Omega^{T}(n-1)\varphi(x(n)) = \eta \sum_{i=1}^{n-1} e(i)\kappa(x(i), x(n))$. From (4)–(6), the KLMS algorithm remedies the defect of the LMS algorithm, namely its relatively small dimensional input space, and improves the well-posedness of learning in the infinite dimensional space with finite training data. As an application of adaptive filter theory, the neural network can take advantage of the KLMS algorithm, which endows its learning with more efficiency and reliability.
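The KLMS recursion can be sketched directly from this expansion: the predictor stores past inputs as centers with coefficients $\eta e(i)$. The Gaussian kernel, bandwidth, learning rate, and toy signal below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gauss_kernel(a, b, sigma=0.2):
    # Gaussian kernel kappa(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def klms(X, d, eta=0.5, kernel=gauss_kernel):
    """Kernel LMS: y(n) = eta * sum_i e(i) * kappa(x(i), x(n))."""
    centers, coeffs, preds = [], [], []
    for xn, dn in zip(X, d):
        y = sum(c * kernel(xc, xn) for c, xc in zip(coeffs, centers))
        e = dn - y               # prediction error e(n)
        centers.append(xn)       # the expansion grows by one term per sample
        coeffs.append(eta * e)   # coefficient eta * e(n)
        preds.append(y)
    return np.array(preds)

# Nonlinear toy regression that plain LMS cannot fit: d = sin(3x)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 1))
d = np.sin(3 * X[:, 0])
preds = klms(X, d)
late_error = np.mean(np.abs(d[-50:] - preds[-50:]))
```

Note that the weight vector in the RKHS is never formed explicitly; only kernel evaluations between stored inputs and the new input are needed, which is exactly the kernel trick described above.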
3. Wavelet Kernel Neural Network
With the kernel LMS reviewed above, the wavelet kernel neural network is proposed in this section on the basis of the wavelet kernel function and the conventional wavelet neural network: the wavelet kernel function is acquired by the Mercer theorem, and a more competent parameter-updating method is built on the conventional WNN. First, the wavelet kernel function is constructed and shown to be an admissible neural network kernel. Second, the wavelet kernel neural network is proposed, including the neuron explanation and weight updating. Then the modulations of the adaptive variable step-size and the wavelet kernel width are discussed in Sections 3.3 and 3.4.
3.1. The Wavelet Kernel Function
Wavelet analysis expresses or approximates functions or signals with a family of functions generated by dilations and translations of a mother wavelet $\psi(x)$. Let $L^{1}(R)$ denote the linear integrable space and $L^{2}(R)$ the square integrable space. When the wavelet function $\psi(x) \in L^{1}(R) \cap L^{2}(R)$, the wavelet function group can be defined as $\psi_{a,b}(x) = |a|^{-1/2}\psi((x-b)/a)$, where $a \neq 0$ is a dilation factor and $b$ is a translation factor. And $\psi(x)$ satisfies the admissibility condition [32, 33] $C_{\psi} = \int_{R} |\hat{\psi}(\omega)|^{2}/|\omega|\, d\omega < \infty$, where $\hat{\psi}(\omega)$ is the Fourier transform of $\psi(x)$. For a common multidimensional wavelet function, if the one-dimensional wavelet function $\psi(x)$ satisfies condition (8), the product of one-dimensional (1D) wavelet functions can be described by tensor theory [29, 31]: $\Psi(x) = \prod_{i=1}^{N}\psi(x_{i})$, where $x = (x_{1}, x_{2}, \ldots, x_{N})$.
The conception above is the foundation of wavelet analysis and theory. Note that the wavelet kernel function for the neural network is a special kernel function derived from wavelet theory; it must be a horizontal floating (translation-invariant) function and satisfy the condition of the Mercer theorem, which can be summarized as follows.
Lemma 1. Let $k(x, x')$ be a continuous symmetric kernel on a bounded domain. Then $k(x, x')$ is a valid kernel function for the wavelet neural network if and only if, for any nonzero function $g$ with $\int g^{2}(\xi)\, d\xi < \infty$, the following condition is satisfied: $\iint k(x, x')g(x)g(x')\, dx\, dx' \geq 0$.
With regard to horizontal floating functions, there is an analogous theory for the support vector machine (SVM). Allowing for the affiliation between SVM and NN, an approved horizontal floating kernel function of the SVM is also effective in the NN. When $\psi(x)$ is a mother wavelet, $a$ is a dilation factor, and $b$ is a translation factor, $a, b \in R$, $x \in R^{N}$, the horizontal floating translation-invariant wavelet kernels can be expressed as $k(x, x') = \prod_{i=1}^{N}\psi\left(\frac{x_{i} - x_{i}'}{a}\right)$.
Since few wavelet kernel functions satisfying the horizontal floating condition are available up to now, the neural network in this paper adopts an existing wavelet kernel function, the Morlet wavelet function: $\psi(x) = \cos(1.75x)\exp(-x^{2}/2)$.
The Morlet wavelet kernel function is then defined as $k(x, x') = \prod_{i=1}^{N}\cos\left(1.75\,\frac{x_{i} - x_{i}'}{a}\right)\exp\left(-\frac{(x_{i} - x_{i}')^{2}}{2a^{2}}\right)$.
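A direct implementation of this kernel is straightforward. The sketch below (dimension and test points are arbitrary choices) also checks two properties any admissible translation-invariant kernel must have: symmetry, and $k(x, x) = 1$, since $\psi(0) = 1$ for the Morlet wavelet.

```python
import numpy as np

def morlet_kernel(x, xp, a=1.0):
    """Morlet wavelet kernel:
    K(x, x') = prod_i cos(1.75 u_i) * exp(-u_i^2 / 2), with u = (x - x') / a."""
    u = (np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-u ** 2 / 2)))

x = np.array([0.3, -0.5, 1.2])
y = np.array([1.0, 0.2, -0.4])
k_xy = morlet_kernel(x, y, a=0.8)
```

The kernel depends only on the difference x − x′, which is exactly the horizontal floating (translation-invariant) property required above.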
3.2. The Wavelet Kernel-LMS Neural Network
The neural network based on the wavelet kernel function addresses the performance of a multilayer perceptron trained with the backpropagation algorithm by adding KLMS theory to the wavelet neural network. This paper utilizes the fully connected multilayer perceptron (MLP) structure and, for clarity, chooses three layers. The network consists of an input layer, a hidden layer, and an output layer. The activation functions in the hidden layer and the output layer are the wavelet kernel function and the sigmoid function, respectively. The weights between the input layer and the hidden layer are denoted $W^{(1)}$, and those between the hidden layer and the output layer are denoted $W^{(2)}$.
In the WKNN, hypothesize that the input set is $X$ and that the desired output set and actual output set are $D$ and $Y$, respectively; then the cost function is defined as $E(n) = \frac{1}{2}\sum_{k} e_{k}^{2}(n)$, where $e_{k}(n) = d_{k}(n) - y_{k}(n)$.
In the hidden layer, the signal of a single neuron is provided by the input layer, and the induced local field and output of the hidden layer can be written as $v_{j}(n) = \sum_{i=1}^{M} w_{ji}^{(1)}(n)x_{i}(n)$ and $h_{j}(n) = K(v_{j}(n))$, where $w_{ji}^{(1)}$ is the weight from the $i$th input neuron to the $j$th hidden neuron, $x_{i}$ is the input of the $i$th input neuron, $M$ represents the number of neurons in the input layer, and $K(\cdot)$ is the wavelet kernel function of this layer.
In the output layer, the signal of a single neuron is provided by the hidden layer, and the induced local field and output of the output layer can be written as $u_{k}(n) = \sum_{j=1}^{P} w_{kj}^{(2)}(n)h_{j}(n)$ and $y_{k}(n) = f(u_{k}(n))$, where $w_{kj}^{(2)}$ is the weight from the $j$th hidden neuron to the $k$th output neuron, $h_{j}$ is the output of the $j$th hidden neuron, $P$ represents the number of neurons in the hidden layer, and $f(\cdot)$ is the sigmoid function of this layer.
Similar to the WNN algorithm, this paper performs a gradient search to solve for the optimum weights. For convenience, the bias is chosen to be zero. Then, based on (17), the weight matrix between the output layer and the hidden layer is updated with the adaptive variable step-size; the method to find an appropriate step-size is illustrated in the following subsection. By the kernel trick and (17), the corresponding local gradient of the output layer can be determined.
The weight matrix between the hidden layer and the input layer is then obtained on the basis of (18), and the local gradient of the hidden neurons can be determined accordingly.
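Since the update equations are not reproduced here, the following is only a hedged sketch of a three-layer network of the kind described: a Morlet wavelet activation in the hidden layer, a sigmoid output, and a plain gradient step on the output-layer weights. The layer sizes, the learning rate, and the simplification of training only the output weights are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def morlet(v):
    # Morlet wavelet used as the hidden-layer activation
    return np.cos(1.75 * v) * np.exp(-v ** 2 / 2)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(x, W1, W2):
    """Three-layer WKNN sketch: input -> wavelet hidden -> sigmoid output."""
    h = morlet(W1 @ x)   # induced local field, then wavelet activation
    y = sigmoid(W2 @ h)  # output layer with sigmoid activation
    return h, y

def train_step(x, d, W1, W2, eta=0.5):
    """One gradient step on the output-layer weights (a deliberate simplification)."""
    h, y = forward(x, W1, W2)
    e = d - y                      # output error
    delta = e * y * (1 - y)        # sigmoid local gradient
    W2 = W2 + eta * np.outer(delta, h)
    return W2, float(np.sum(e ** 2))

# Train toward the constant target 0.6, mirroring the first numerical example
rng = np.random.default_rng(2)
W1 = 0.5 * rng.standard_normal((6, 2))
W2 = 0.5 * rng.standard_normal((1, 6))
x = np.array([0.5, -0.3])
for _ in range(1000):
    W2, err = train_step(x, np.array([0.6]), W1, W2)
```

With the hidden weights frozen, the hidden activations act as a fixed kernel feature vector, so the output-layer update has the same form as the (K)LMS recursions of Section 2.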
3.3. The Updating of Adaptive Variable Step
According to the algorithm presented above (see Section 2), the adaptive variable step can be regarded as a significant coefficient with the ability to influence the adaptivity and effectiveness of the algorithm. The stability and convergence of the algorithm are influenced by the step-size parameter, and the choice of this parameter reflects a compromise between the dual requirements of rapid convergence and small misadjustment. A large prediction error causes the step-size to increase to provide faster tracking, while a small prediction error results in a decrease in the step-size to yield smaller misadjustment. Therefore, this paper improves the NLMS algorithm derived from the LMS algorithm, making it suitable for multidimensional nonlinear structures and avoiding the limitation of linear structures.
NLMS addresses the main problem of the LMS algorithm, its constant step-size, by designing an adaptive step-size that avoids instability and speeds up convergence: $w(n+1) = w(n) + \frac{\mu}{\|x(n)\|^{2}}e(n)x(n)$, where $\mu/\|x(n)\|^{2}$ is regarded as the variable step-size compared with the LMS algorithm. But NLMS has little ability to perform in nonlinear spaces. By the kernel trick, the kernel function can not only improve the wavelet neural network but also update the step-size of the neural network self-adaptively. As (17) reveals, the output is a consequence of a nonlinear function. Since this function is continuously differentiable, it can be matched by innumerable linear functions following the definition of the differential calculus.
Theorem 2. The weight updating satisfies the NLMS condition in the linear integrable space with the adaptive variable step-size. The weights are then expanded to a matrix of suitable dimension and adjusted accordingly, where the nonlinear function is continuously differentiable everywhere and each element has its own step-size.
Proof. Hypothesize that the interval is small enough that a linear function exists which coincides with the nonlinear function on the closed interval; the corresponding relations are therefore satisfied.
Consider the corresponding equations, minimize their cost functions, and the adaptive step-size can then be inferred as follows:
Equations (27) and (28) reveal that the step-size is independent of the intercept of the linear function and relates only to the input and the slope. Motivated by the slight error introduced by the linear approximation, the slope in (28) is replaced by the derivative of the nonlinear function, which improves the accuracy and completes the proof.
On the basis of the discussion above, (22) can be represented as a special case of Theorem 2 when the slope of the function is 1. Theorem 2 expands NLMS from the linear space to the multidimensional nonlinear space and broadens the application of NLMS. Compared with the variable step-size in (22), (27) and (28) can be defined as the adaptive variable step-size of NLMS in the extended space.
When the denominator is too small, problems caused by numerical calculation have to be considered. To overcome this difficulty, (28) is altered to include a regularization constant $\varepsilon$, which is set to a sufficiently small positive value. The weight updating in (20) is more complicated than that between the output layer and the hidden layer, because the updating in (20) is based on the output layer; meanwhile, every neuron of the output layer has a relationship with the updating process between the $j$th neuron of the hidden layer and the previous layer. In other words, every neuron of the output layer contributes to the error. Therefore, this paper adopts the same step-size strategy there, and the elements of the step-size matrix are given accordingly.
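A minimal sketch of the safeguarded normalized step: the step-size is $\mu_{0}/(\varepsilon + \|x\|^{2})$, so a vanishing input cannot blow up the update. The constants and the identification task below are illustrative assumptions.

```python
import numpy as np

def nlms_step(w, x, d, mu0=0.5, eps=1e-6):
    """One NLMS update with a regularized step-size mu0 / (eps + ||x||^2)."""
    e = d - w @ x                   # a priori error
    step = mu0 / (eps + x @ x)      # eps keeps the step finite for tiny ||x||
    return w + step * e * x, e

# Identify a 2-tap system; some inputs are made nearly zero on purpose
rng = np.random.default_rng(3)
true_w = np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(500):
    scale = rng.choice([1e-4, 1.0])        # occasional near-zero excitation
    x = scale * rng.standard_normal(2)
    w, e = nlms_step(w, x, true_w @ x)
```

Without `eps`, the near-zero inputs would produce enormous steps; with it, the effective gain along each input direction never exceeds `mu0`, so the recursion stays stable.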
3.4. The Modulation of Wavelet Kernel Width
The goal of minimizing the mean square error can be achieved by gradient search. If the kernel-space cost function is differentiable, the wavelet kernel width of the neural network can also be adapted by the gradient algorithm. As in the conventional wavelet neural network, the gradient search method can be utilized to update the wavelet kernel parameter owing to the differentiability of the wavelet kernel function. Besides the synaptic weights among the various layers of the network, there is another coefficient, the dilation factor $a$ in (11), that also influences the mean square error. With respect to the cost function (14), the dilation factor is corrected by $a(n+1) = a(n) - \eta_{a}\,\partial E(n)/\partial a$, where $\eta_{a}$ is the constant step-size and $\partial E(n)/\partial a$ is the partial derivative, that is, the gradient. According to the chain rule of calculus, this gradient propagates through the kernel output, and, allowing for (19) and (33), the updating formulation of the dilation factor follows.
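Because the Morlet kernel is smooth in the dilation factor, $\partial K/\partial a$ is available in closed form and can feed the gradient update above. The one-dimensional sketch below (test points, error value, and step-size chosen arbitrarily) verifies the analytic derivative against a finite difference.

```python
import numpy as np

def morlet_k(u):
    # 1D Morlet wavelet: cos(1.75 u) * exp(-u^2 / 2)
    return np.cos(1.75 * u) * np.exp(-u ** 2 / 2)

def dmorlet_da(x, xp, a):
    """dK/da for K = morlet_k((x - x') / a), by the chain rule:
    dK/da = (dK/du) * (du/da), with du/da = -u / a."""
    u = (x - xp) / a
    dK_du = -np.exp(-u ** 2 / 2) * (1.75 * np.sin(1.75 * u) + u * np.cos(1.75 * u))
    return dK_du * (-u / a)

# One gradient step on the dilation factor for a single error term:
# a <- a - eta_a * dE/da, with dE/da proportional to -e * dK/da in this sketch
a, x, xp, e, eta_a = 0.8, 0.9, 0.2, 0.3, 0.1
a_next = a - eta_a * (-e * dmorlet_da(x, xp, a))
```

In the full network, $\partial E/\partial a$ would also carry the factors from the output layer via (19) and (33); the sketch isolates only the kernel-width term.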
The WKNN procedure is summarized in Algorithm 1.
4. The Model Design of IPWR
To reduce the volume of the pressure vessel, the integrated reactor generally uses a compact once-through steam generator (OTSG). Since there is no separator in the steam generator, the superheated steam produced by the OTSG can improve the thermal efficiency of the secondary loop systems. Compared with the natural-circulation U-tube steam generator, the OTSG has a simpler structure but less water volume in the heat transfer tubes. Since it is difficult to measure the water level in the OTSG, especially during variable load operation, the OTSG needs a high-efficiency control system to ensure safe operation. The primary coolant average temperature is an important parameter for ensuring steam generator heat exchange. A control scheme holding the coolant average temperature constant can achieve accurate control of the reactor power and avoid generating large coolant volume changes in the loop.
In the integrated pressurized water reactor (IPWR) system, the essential equations of the thermal-hydraulic model are as follows:
(1) mass continuity equations;
(2) momentum conservation equations;
(3) energy conservation equations;
(4) noncondensables in the gas phase;
(5) boron concentration in the liquid field;
(6) the point-reactor kinetics equations.
The RELAP5 thermal-hydraulic model solves eight field equations for eight primary dependent variables: pressure ($P$), phasic specific internal energies ($U_{g}$, $U_{f}$), vapor volume fraction (void fraction) ($\alpha_{g}$), phasic velocities ($v_{g}$, $v_{f}$), noncondensable quality ($X_{n}$), and boron density ($\rho_{b}$). The independent variables are time ($t$) and distance ($x$). The secondary dependent variables used in the equations are the phasic densities ($\rho_{g}$, $\rho_{f}$), phasic temperatures ($T_{g}$, $T_{f}$), saturation temperature ($T_{s}$), and the noncondensable mass fraction in the noncondensable gas phase ($X_{ni}$). The point-reactor kinetics model in RELAP5 computes both the immediate fission power and the power from the decay of fission fragments.
5. The Control Strategy and FOPID Controller Design
A new control law is demanded in nuclear devices. When the nuclear power plant operates stably under different conditions, the variation of the main parameters on the primary and secondary sides is indeterminate. The operation strategy keeps the coolant average temperature constant under load changes. The WKNN fuzzy fractional-order PID system achieves coolant average temperature invariability by controlling the reactor power system (Figure 4).
The five parameters, three coefficients and two exponents, are acquired in two ways: fuzzy logic and the wavelet kernel neural network. The idea of combining the methods arises from the fact that the power level of a nuclear reactor tends to alter dramatically in a short time and then vary only slightly within a certain period. Because the WKNN excitation function is bounded, the output range of the FOPID coefficients cannot be wide enough. The initial parameters of the FOPID controller are set by the WKNN and fuzzy logic. Moreover, the exponents of the FOPID controller need not vary with small changes. Consequently, the fuzzy logic rule is executed to tune the five FOPID parameters when the reactor power changes remarkably, while the WKNN keeps updating and tuning the coefficients on the basis of the fuzzy logic all the while. The process is illustrated in Figure 5.
Based on theoretical analysis and design experience, a fuzzy enhanced fractional-order PID controller is adopted to make the reactor power follow the steam flow and maintain a constant coolant average temperature. The power control system regards the average temperature deviation and the steam flow as the main control signal and the auxiliary control signal, respectively. Since the thermal output of the reactor is proportional to the steam output of the steam generator, considering the steam flow variation allows the power demand to be calculated immediately, which impels the reactor power to follow the secondary loop load changes and improves the mobility of the equipment. When the load alters, the reactor power demand is determined by a design equation involving the time step, a conversion coefficient, the design parameters, the new steam flow of the steam generator, and the coolant average temperature deviation $\Delta T_{av} = T_{av,set} - T_{av}$, where $T_{av,set}$ is the setpoint value and $T_{av}$ is the measured average temperature, $T_{av} = (T_{in} + T_{out})/2$, with $T_{in}$ and $T_{out}$ being the coolant temperatures at the core inlet and outlet, respectively. The signal from the signal processor is generated accordingly.
Following the fuzzy logic principle, the fuzzy FOPID controller takes the error $e$ and its rate of change as incoming signals. It adjusts the five parameters $K_{p}$, $K_{i}$, $K_{d}$, $\lambda$, and $\mu$ online, with the fractional factors $\lambda$ and $\mu$ satisfying their admissible conditions. The coordination principle establishes the relationship among $e$, its rate of change, and the parameters. The fuzzy subsets of the fuzzy quantities and the membership functions adopt normal distribution functions. When $e$ is large, a larger proportional action and a smaller derivative action obtain a more rapid system response and avoid differential overflow caused by the initial step; a smaller integral action then prevents exaggerated overshoot. When $e$ is medium, the proportional action should be reduced properly; meanwhile, the remaining parameters should be moderate to decrease the system overshoot and keep the response fast. If $e$ is small, larger proportional and integral action can fade the steady-state error, while a moderate derivative action keeps the output from vibrating around the setpoint and guarantees the anti-interference performance of the system. For more information, one may refer to the literature.
The WKNN proposed in Section 3 is adopted to improve the performance of FOPID parameter tuning, and a three-layer structure is applied. The WKNN inputs and outputs are defined accordingly.
Owing to the boundedness of the sigmoid function, the WKNN output parameters for the enhanced PID controller are confined to the range (0, 1). To solve this problem, an expertise improvement unit is added between the WKNN and the power demand calculator in Figure 4, which corrects the orders of magnitude of the parameters. This control strategy overcomes the limitation of the NN activation function and takes advantage of a priori knowledge from researchers in the reactor power domain, which endows the algorithm with reliability and simplifies the fuzzy logic rule setting to reduce the control complexity.
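The paper does not spell out the correction rule of the expertise improvement unit; the sketch below shows one plausible form: an affine map from each bounded network output onto an engineering range fixed by prior knowledge. All parameter names and range values here are hypothetical illustrations, not the paper's settings.

```python
def expertise_scale(nn_out, ranges):
    """Map bounded WKNN outputs in (0, 1) onto engineering parameter ranges.
    `ranges` holds (low, high) bounds chosen from operator experience;
    the bounds used below are illustrative, not the paper's values."""
    return {name: lo + y * (hi - lo)
            for (name, (lo, hi)), y in zip(ranges.items(), nn_out)}

ranges = {"Kp": (0.0, 50.0), "Ki": (0.0, 5.0), "Kd": (0.0, 10.0),
          "lam": (0.1, 1.9), "mu": (0.1, 1.9)}
params = expertise_scale([0.5, 0.2, 0.1, 0.45, 0.5], ranges)
```

This keeps every delivered parameter inside a range vetted by experience, regardless of what the bounded network emits.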
6. Simulation Results and Discussion
In this section, simulations of the WKNN and the WKNN fractional-order PID controller are carried out to examine the performance and assess the accuracy of the algorithm proposed in the previous sections, and the performance of the WKNN algorithm is compared experimentally with that of the WNN algorithm. To display the results clearly, the simulations are divided into two parts. One consists of numerical examples in a MATLAB program, covering the WKNN algorithm and its fractional-order PID application in a time delay system. The other is written in FORTRAN and connected with RELAP5, implementing the reactor power control.
6.1. Numerical Examples
To demonstrate the performance of the WKNN algorithm, two simulations are conducted in comparison with the WNN algorithm. The structure of the neural network can be consulted in Figure 3, and the initial weights are generated randomly. The first experiment shows the behavior of the WKNN and WNN in a stationary environment whose desired output is 0.6 (Figures 6 and 7). The second operates on a time-varying process (Figures 8 and 9).
Figures 6–9 adopt the WKNN and WNN to calculate the regression process in the constant and time-varying environments, respectively, and both methods bring relatively large system error in the initial period. In Figures 6 and 7, the constant output and straight regression process emphasize the faster convergence of the algorithms. In Figures 8 and 9, the time variation makes the regression process more difficult than in the previous experiment. First, there is overshoot from 0 s to 50 s in the WNN, which makes the error of the WNN decrease slowly; by the time the error of the WNN starts to reduce, that of the WKNN has already approached zero. Second, comparing the two error curves, it cannot be ignored that the WKNN curve has sharp points while the WNN curve is relatively smooth. Third, although the initial parameters are random, the regression process is periodical. The figures imply that the convergence rate and regression quality of the WKNN are better than those of the WNN and that the initial condition brings no particular benefit to the WKNN. The argument above indicates preliminarily that the WKNN algorithm reduces the amplitude of error variation and performs better than the WNN. However, it remains to be verified whether the WKNN is more appropriate for FOPID controller tuning in time delay systems. Therefore, the proposed algorithm is used to tune the FOPID controller for a known dynamic time delay control system, and the performance of the unit step response is analyzed. The control process is displayed in Figures 10 and 11.
From Figures 10 and 11, when the time delay system utilizes FOPID controllers of the same order, the overshoot of the WNN-FOPID controller and the oscillation frequency and amplitude of its output are all obviously higher than those of the WKNN-FOPID controller. The control curves and error curves of the different algorithms confirm that the overall tuning performance of the WKNN is better than that of the WNN. To validate the practicability of the algorithm and its independence from the model, the next subsection applies the strategy of Section 5 to control a more complex nuclear reactor model.
6.2. RELAP5 Operation Example
This paper establishes the integrated pressurized water reactor (IPWR) model with the RELAP5 transient analysis program, and the control strategy in Figure 5 is applied to control the reactor power. Through the analysis of the load-shedding limiting condition of the nuclear power unit, the reliability and availability of the proposed control method are demonstrated. In the load-shedding process, the secondary feedwater is reduced from 100% FP to 60% FP in 10 s and then reduced to 20% FP. The operating characteristics are displayed in Figure 12.
Figure 12(a) displays the variation of the once-through steam generator (OTSG) secondary feedwater flow and steam flow. Owing to the small water capacity of the OTSG, the steam flow decreases rapidly when the water flow decreases. The decrease in steam flow then reduces the heat removed by the feedwater, and the primary coolant average temperature increases. The reactor power control system acts on the deviation of the average temperature and introduces negative reactivity to bring the power down and keep the temperature constant. The variation of the outlet and inlet temperatures is shown in Figure 12(c). The primary coolant flow stays the same, and the core temperature difference is reduced to guarantee heat removal. In the process of rapid load variation, the change of coolant temperature causes the coolant volume to change and then leads to the variation of the pressurizer pressure (Figure 12(d)). The pressure maximum approaches 15.62 MPa; however, it drops rapidly with the power decrease and volume contraction. The pressure then reaches a stable state when the coolant temperature stabilizes.
7. Conclusion
The goal of this paper is to present a novel wavelet kernel neural network. Before the WKNN is proposed, the wavelet kernel function is confirmed to be valid in neural networks. The WKNN takes advantage of KLMS and NLMS, which endow the algorithm with reliability, stability, and good convergence. On the basis of the WKNN, a novel control strategy for FOPID controller design is proposed. It overcomes the boundary problem of the activation functions in the network and sufficiently utilizes the experiential knowledge of researchers. Finally, by establishing the model of the IPWR, the method above is applied in a load-shedding simulation, and the simulation results validate that the proposed method is practicable and reliable in an actual complicated system.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are grateful for the funding by grants from the National Natural Science Foundation of China (Project nos. 51379049 and 51109045), the Fundamental Research Funds for the Central Universities (Project nos. HEUCFX41302 and HEUCF110419), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, Heilongjiang Province (no. LC2013C21), and the Young College Academic Backbone of Heilongjiang Province (no. 1254G018). The authors also thank the anonymous reviewers for improving the quality of this paper.
References
S. Haykin, Adaptive Filter Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
W. Liu, J. C. Principe, and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley & Sons, 1st edition, 2010.
H. Fan and Q. Song, "A linear recurrent kernel online learning algorithm with sparse updates," Neural Networks, vol. 50, pp. 142–153, 2014.
W. Gao, J. Chen, C. Richard, J. Huang, and R. Flamary, "Kernel LMS algorithm with forward-backward splitting for dictionary learning," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '13), pp. 5735–5739, Vancouver, BC, Canada, 2013.
F. Wu and Y. Zhao, "Novel reduced support vector machine on Morlet wavelet kernel function," Control and Decision, vol. 21, no. 8, pp. 848–856, 2006.
S. Haykin, Neural Networks and Learning Machines, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2008.
B. Scholkopf, C. J. C. Burges, and A. J. Smola, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
S. M. Sloan, R. R. Schultz, and G. E. Wilson, RELAP5/MOD3 Code Manual, United States Nuclear Regulatory Commission, Idaho National Engineering Laboratory, 1998.
G. Xia, J. Su, and W. Zhang, "Multivariable integrated model predictive control of nuclear power plant," in Proceedings of the IEEE 2nd International Conference on Future Generation Communication and Networking Symposia (FGCNS '08), vol. 4, pp. 8–11, 2008.
S. Das, I. Pan, and S. Das, "Fractional order fuzzy control of nuclear reactor power with thermal-hydraulic effects in the presence of random network induced delay and sensor noise having long range dependence," Energy Conversion and Management, vol. 68, pp. 200–218, 2013.