Abstract

This paper focuses on a discrete-time neural identifier applied to a linear induction motor (LIM), whose model is assumed to be unknown. This neural identifier is robust in the presence of external and internal uncertainties. The proposed scheme is based on a discrete-time recurrent high-order neural network (RHONN) trained with a novel algorithm based on the extended Kalman filter (EKF) and particle swarm optimization (PSO), using an online series-parallel configuration. Real-time results are included in order to illustrate the applicability of the proposed scheme.

1. Introduction

Linear induction motors (LIM) are special electrical machines, in which the electrical energy is converted directly into mechanical energy of translatory motion. The strongest interest in these machines arose in the early 1970s; however, in the late 1970s, the research intensity and the number of publications dropped. After 1980, LIM found their first noticeable applications in the transportation industry, automation, and home appliances, among others [1, 2]. LIM have many excellent performance features, such as high starting thrust force, elimination of gears between motor and motion devices, reduction of mechanical losses and of the size of motion devices, high-speed operation, and silence [1, 3]. The driving principles of the LIM are similar to those of the traditional rotary induction motor (RIM), but its control characteristics are more complicated than those of the RIM, and its parameters are time varying due to changes in the operating conditions, such as speed, temperature, and rail configuration.

Modern control systems usually require detailed knowledge about the system to be controlled; such knowledge should be represented in terms of differential or difference equations. This mathematical description of the dynamic system is called the model. There can be different motives for establishing mathematical descriptions of dynamic systems, such as simulation, prediction, fault detection, and control system design. In this sense, there are basically two ways to obtain a model: it can be derived in a deductive manner using the laws of physics, or it can be inferred from a set of data collected during a practical experiment. The first method can be simple, but in many cases it is excessively time-consuming; it would be unrealistic or impossible to obtain an accurate model in this way. The second method, which is commonly referred to as system identification [4], can be a useful shortcut for deriving mathematical models. Although system identification does not always result in an accurate model, a satisfactory one can often be obtained with reasonable effort. The main drawback is the requirement to conduct a practical experiment which brings the system through its range of operation [5].

Due to their nonlinear modeling characteristics, neural networks have been successfully applied in control systems, pattern classification, pattern recognition, and identification problems. The best-known training approach for recurrent neural networks (RNN) is backpropagation through time [6]. However, it is a first-order gradient descent method, and hence its learning speed can be very slow. Another well-known training algorithm is the Levenberg-Marquardt one [6]; its principal disadvantage is that it cannot guarantee finding the global minimum, and its learning speed can also be slow, depending on the initialization. In the past years, extended Kalman filter (EKF) based algorithms have been introduced to train neural networks [7]. With an EKF-based algorithm, the learning convergence is improved [6]. EKF training of neural networks, both feedforward and recurrent ones, has proven to be reliable for many applications [6]. However, EKF training requires the heuristic selection of some design parameters, which is not always an easy task [7].

On the other hand, the particle swarm optimization (PSO) technique, which is based on the behavior of a flock of birds or a school of fish, is a type of evolutionary computing technique [8]. It has been shown that the PSO training algorithm takes fewer computations and is faster than the backpropagation algorithm in training neural networks to achieve the same performance [8].

In this paper, a recurrent high-order neural network (RHONN) is used to design the proposed neural identifier for nonlinear systems whose mathematical model is assumed to be unknown. The learning algorithm for the RHONN is implemented using an extended Kalman filter with particle swarm optimization (EKF-PSO) based algorithm. A class of multi-input multi-output (MIMO) discrete-time nonlinear systems is considered, for which a neural identifier is developed [9]; this identifier is then applied to a discrete-time unknown nonlinear system. The identifier is based on a RHONN [10], trained with an EKF-PSO based algorithm. The applicability of this scheme is illustrated experimentally for a linear induction motor (LIM).

The remainder of the paper is organized as follows. Section 2 is devoted to describing the neural model, based on the RHONN, in which the training phase relies on an extended Kalman filter able to deal with the nonlinearity of the model, and the improvement of the EKF training algorithm based on a particle swarm optimization strategy. Section 3 reports the experimental results of the proposed method applied to the problem of identifying a three-phase linear induction motor. Finally, Section 4 includes the conclusions and future work.

2. Preliminaries

Throughout this paper, $k$ is used as the sampling step, $k \in 0 \cup \mathbb{Z}^{+}$, $|\cdot|$ as the absolute value, and $\|\cdot\|$ as the Euclidean norm for vectors and as any adequate norm for matrices.

Consider a MIMO nonlinear system:
$$\chi(k+1) = F\left(\chi(k), u(k)\right), \qquad y(k) = h\left(\chi(k)\right), \tag{1}$$
where $\chi \in \mathbb{R}^{n}$ is the state, $u \in \mathbb{R}^{m}$ is the input, $y \in \mathbb{R}^{p}$ is the output, and finally, $F(\cdot)$ and $h(\cdot)$ are nonlinear functions.

2.1. Discrete-Time Recurrent High-Order Neural Networks

The use of multilayer neural networks is well known for pattern recognition and for modelling of static systems. The NN is trained to learn an input-output map. Theoretical works have proven that, even with just one hidden layer, a NN can uniformly approximate any continuous function over a compact domain, provided that the NN has a sufficient number of synaptic connections ([11, 12]).

For control tasks, extensions of the first-order Hopfield model called recurrent high-order neural networks (RHONN), which present more interactions among the neurons, were proposed in [13]. Additionally, the RHONN model is very flexible and allows us to incorporate a priori information about the system structure into the neural model. Besides, discrete-time neural networks are better suited for real-time implementations [7].

Consider the following discrete-time recurrent high-order neural network (RHONN):
$$x_i(k+1) = w_i^{\top} z_i\left(x(k), u(k)\right), \quad i = 1, \ldots, n, \tag{2}$$
where $x_i$ ($i = 1, 2, \ldots, n$) is the state of the $i$th neuron, $L_i$ is the respective number of higher-order connections, $\{I_1, I_2, \ldots, I_{L_i}\}$ is a collection of nonordered subsets of $\{1, 2, \ldots, n+m\}$, $n$ is the state dimension, $m$ is the number of external inputs, $w_i$ ($i = 1, 2, \ldots, n$) is the respective online adapted weight vector, and $z_i(x(k), u(k))$ is given by
$$z_i\left(x(k), u(k)\right) = \begin{bmatrix} z_{i_1} \\ z_{i_2} \\ \vdots \\ z_{i_{L_i}} \end{bmatrix} = \begin{bmatrix} \prod_{j \in I_1} \xi_{i_j}^{d_{i_j}(1)} \\ \prod_{j \in I_2} \xi_{i_j}^{d_{i_j}(2)} \\ \vdots \\ \prod_{j \in I_{L_i}} \xi_{i_j}^{d_{i_j}(L_i)} \end{bmatrix}, \tag{3}$$
with $d_{i_j}(\cdot)$ being nonnegative integers, and $\xi_i$ is defined as follows:
$$\xi_i = \begin{bmatrix} \xi_{i_1} \\ \vdots \\ \xi_{i_n} \\ \xi_{i_{n+1}} \\ \vdots \\ \xi_{i_{n+m}} \end{bmatrix} = \begin{bmatrix} S(x_1) \\ \vdots \\ S(x_n) \\ u_1 \\ \vdots \\ u_m \end{bmatrix}. \tag{4}$$
In (4), $u = [u_1, u_2, \ldots, u_m]^{\top}$ is the input vector to the neural network and $S(\cdot)$ is defined by
$$S(x) = \frac{1}{1 + \exp(-\beta x)}, \quad \beta > 0. \tag{5}$$
Consider the problem of approximating the $i$th plant state of the general discrete-time nonlinear system (1) by the following discrete-time RHONN [14]:
$$\chi_i(k+1) = w_i^{*\top} z_i\left(x(k), u(k)\right) + \epsilon_{z_i}, \tag{6}$$
where $\chi_i$ is the $i$th plant state and $\epsilon_{z_i}$ is a bounded approximation error, which can be reduced by increasing the number of adjustable weights [13]. Assume that there exists an ideal unknown weight vector $w_i^*$ such that $\|\epsilon_{z_i}\|$ can be minimized on a compact set $\Omega_{z_i} \subset \mathbb{R}^{L_i}$. The ideal weight vector $w_i^*$ is an artificial quantity required only for analytical purposes and is defined as
$$w_i^* = \arg \min_{w_i} \left\{ \sup_{x, u} \left| \chi_i(k+1) - w_i^{\top} z_i\left(x(k), u(k)\right) \right| \right\}. \tag{7}$$
It is assumed to be unknown, and it is the optimal set which renders the minimum approximation error, defined as $\epsilon_{z_i} = \chi_i(k+1) - w_i^{*\top} z_i(x(k), u(k))$; $\epsilon_{z_i}$ is the $i$th component of $\epsilon_z$ [5, 13]. Let us define its estimate as $w_i$ and the estimation error as
$$\tilde{w}_i(k) = w_i^{*}(k) - w_i(k). \tag{8}$$

Due to this fact, we use $w_i$ as the approximation of the ideal weight vector $w_i^*$, and the modelling error corresponds to $\tilde{w}_i(k)$. The estimate $w_i$ is used for the stability analysis, which will be discussed later. Since $w_i^*$ is constant, $w_i^*(k+1) = w_i^*(k)$ for all $k$.
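To make the neuron model concrete, the following is a minimal sketch (not taken from the original paper) of how a single RHONN neuron prediction of the form (2)-(5) could be computed; the function and variable names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x, beta=1.0):
    """Sigmoid activation S(x) = 1 / (1 + exp(-beta * x)) as in (5)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def rhonn_neuron(w, index_sets, exponents, x, u, beta=1.0):
    """One-step prediction x_i(k+1) = w_i^T z_i(x(k), u(k)) for one neuron.

    w          : online adapted weight vector (length L_i)
    index_sets : list of L_i index subsets I_1, ..., I_{L_i} over xi = [S(x); u]
    exponents  : list of L_i arrays with the nonnegative integer powers d_ij
    x, u       : current state and input vectors
    """
    xi = np.concatenate([sigmoid(x, beta), u])        # xi = [S(x_1..x_n), u_1..u_m]
    z = np.array([np.prod(xi[idx] ** d)               # high-order terms z_i as in (3)
                  for idx, d in zip(index_sets, exponents)])
    return float(w @ z)                               # x_i(k+1) = w_i^T z_i

# Example: a neuron driven by S(x_1), S(x_2), and the first input (assuming n = 6).
# rhonn_neuron(np.ones(3), [[0], [1], [6]], [[1], [1], [1]], np.zeros(6), np.ones(2))
```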

2.2. The EKF Training Algorithm

It is well known that Kalman filtering (KF) estimates the state of a linear system with additive state and output white noise [15]. For KF-based neural network training, the network weights become the states to be estimated. In this case, the error between the neural network output and the measured plant output can be considered as the additive white noise [6]. Although the white noise assumption is seldom satisfied, the developed algorithm has proven to be efficient in real applications [6, 7]. Due to the fact that the neural network mapping is nonlinear, an EKF-type of training is required [10]. The training goal is to find the weight values which minimize the prediction error. In this paper, an EKF-based training algorithm is used as follows:
$$
\begin{aligned}
w_i(k+1) &= w_i(k) + \eta_i K_i(k) e_i(k), \\
K_i(k) &= P_i(k) H_i(k) M_i(k), \\
P_i(k+1) &= P_i(k) - K_i(k) H_i^{\top}(k) P_i(k) + Q_i(k), \qquad i = 1, \ldots, n,
\end{aligned} \tag{9}
$$
with
$$
\begin{aligned}
M_i(k) &= \left[ R_i(k) + H_i^{\top}(k) P_i(k) H_i(k) \right]^{-1}, \\
e_i(k) &= y(k) - \widehat{y}(k),
\end{aligned} \tag{10}
$$
where $e_i(k)$ is the output estimation error, $P_i(k) \in \mathbb{R}^{L_i \times L_i}$ is the weight estimation error covariance matrix at step $k$, $w_i \in \mathbb{R}^{L_i}$ is the weight (state) vector, $L_i$ is the respective number of neural network weights, $y \in \mathbb{R}^{p}$ is the plant output, $\widehat{y} \in \mathbb{R}^{p}$ is the NN output, $n$ is the number of states, $K_i \in \mathbb{R}^{L_i \times p}$ is the Kalman gain matrix, $Q_i \in \mathbb{R}^{L_i \times L_i}$ is the NN weight estimation noise covariance matrix, $R_i \in \mathbb{R}^{p \times p}$ is the error noise covariance, and $H_i \in \mathbb{R}^{L_i \times p}$ is a matrix in which each entry $H_{ij}$ is the derivative of the $i$th neural output with respect to the $j$th neural network weight $w_{ij}$, given as follows:
$$H_{ij}(k) = \left[ \frac{\partial \widehat{y}_i(k)}{\partial w_{ij}(k)} \right]^{\top}, \tag{11}$$
where $i = 1, \ldots, n$ and $j = 1, \ldots, L_i$. Usually $P_i$, $Q_i$, and $R_i$ are initialized as diagonal matrices, with entries $P_i(0)$, $Q_i(0)$, and $R_i(0)$, respectively. Because the entries $P_i(0)$, $Q_i(0)$, and $R_i(0)$ are typically defined heuristically, in this paper we propose the use of a PSO algorithm in order to compute such entries online and thereby improve the EKF training algorithm, as follows.
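As a complement to (9)-(11), the following is a minimal sketch of one EKF training step for the weights of a single neuron, under the stated dimension assumptions; it is illustrative only, and the function and variable names are not taken from the paper.

```python
import numpy as np

def ekf_update(w, P, H, e, Q, R, eta=1.0):
    """One EKF training step, cf. (9)-(10), for the i-th neuron.

    w   : weight vector, shape (L_i,)
    P   : weight estimation error covariance, shape (L_i, L_i)
    H   : derivatives of the neural output w.r.t. the weights, shape (L_i, p)
    e   : output estimation error y - y_hat, shape (p,)
    Q   : weight estimation noise covariance, shape (L_i, L_i)
    R   : error noise covariance, shape (p, p)
    eta : learning rate
    """
    M = np.linalg.inv(R + H.T @ P @ H)   # M_i(k)
    K = P @ H @ M                        # Kalman gain K_i(k)
    w_new = w + eta * (K @ e)            # weight update
    P_new = P - K @ H.T @ P + Q          # covariance update
    return w_new, P_new
```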

2.3. PSO Improvement for EKF Training Algorithm

Particle swarm optimization (PSO) is a swarm intelligence technique developed by Kennedy and Eberhart in 1995 [16]. In fact, the natural flocking and swarming behavior of birds and insects inspired them to develop the PSO algorithm. This technique has been used in several optimization and engineering problems [8, 17–19]. In the basic PSO technique proposed by Kennedy and Eberhart [16], a large number of particles move around in a multidimensional space, and each particle memorizes its position vector and velocity vector as well as the position at which it has acquired its best fitness. Furthermore, related particles can share the data of their best-fitness positions. The velocity of each particle is updated with the best positions acquired by all particles over the iterations and the best positions acquired by the related particles over the generations [20].

To improve the performance of the basic PSO algorithm, several new versions have been proposed. First, the concept of an inertia weight was developed to better control exploration and exploitation [8, 20, 21]. Then, the research by Clerc [22] indicated that a constriction factor may be necessary to ensure convergence of the particle swarm algorithm. After these two important modifications of the basic PSO were introduced, the multiphase particle swarm optimization (MPSO), the particle swarm optimization with Gaussian mutation, the quantum particle swarm optimization, a modified PSO with an increasing inertia weight schedule, the Gaussian particle swarm optimization (GPSO), and the guaranteed convergence PSO (GCPSO) were introduced in [23].

In this paper, the algorithm proposed in [8] is used in order to determine the design parameters for the EKF learning algorithm. Initially, a set of random solutions, or particles, is considered. A random velocity is given to each particle, and the particles are flown through the problem space. Each particle has a memory, which is used to keep track of its previous best position and the corresponding fitness. The best position of each individual particle is stored as $pbest$; in other words, $pbest$ is the best position acquired by a particle during the course of its movement within the swarm. A second value, called $gbest$, is the best position attained by any particle in the swarm. The basic concept of the PSO technique lies in accelerating each particle towards its $pbest$ and $gbest$ locations at each time step. The PSO algorithm used in this paper is depicted in Figure 1 and can be defined as follows [8].
(1) Initialize a population of particles with random positions and velocities in the problem space.
(2) For each particle, evaluate the desired optimization fitness function.
(3) Compare the particle's fitness evaluation with its $pbest$; if the current value is better than $pbest$, then set $pbest$ equal to the current location.
(4) Compare the current fitness evaluation with the population's overall previous best; if the current value is better than $gbest$, then set $gbest$ to the current particle's position and index.
(5) Update the particle's velocity and position as follows.

The velocity of the $i$th particle in dimension $d$ is given by
$$v_{id}(k+1) = \omega\, v_{id}(k) + c_1 r_1 \left( pbest_{id} - x_{id}(k) \right) + c_2 r_2 \left( gbest_{d} - x_{id}(k) \right), \tag{12}$$
and the position vector of the $i$th particle in dimension $d$ is updated as follows:
$$x_{id}(k+1) = x_{id}(k) + v_{id}(k+1), \tag{13}$$
where $\omega$ is the inertia weight, $c_1$ is the cognition acceleration constant, $c_2$ is the social acceleration constant, and $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$.
(6) Repeat from step 2 until a criterion is met, usually a sufficiently good fitness or a maximum number of iterations or epochs.

If the velocity of a particle exceeds $V_{\max}$ (the maximum velocity defined for the particles), it is reduced to $V_{\max}$. Thus, the resolution and fitness of the search depend on $V_{\max}$. If $V_{\max}$ is too high, particles move in larger steps, and the solution reached may not be as good as expected; if $V_{\max}$ is too low, particles take a long time to reach the desired solution [8]. As explained above, PSO is a very suitable technique to deal with noisy problems, such as the one considered here; besides, PSO has shown good results in optimization problems [8]. It is therefore used to optimize the entries of the Kalman filter covariance matrices instead of selecting them heuristically. For this purpose, each particle represents one of the Kalman covariance entries.
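The sketch below shows one way the PSO loop of steps (1)-(6), with velocity clamping at $V_{\max}$, could be implemented to search for the EKF design parameters; the swarm size, bounds, inertia weight, and acceleration constants are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=50, bounds=(1e-6, 1e4),
        w_inertia=0.7, c1=1.5, c2=1.5, v_max=None):
    """Basic PSO minimizing `fitness`; each particle is a candidate vector of
    EKF design parameters (e.g., diagonal entries of P(0), Q, and R)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros((n_particles, dim))                     # particle velocities
    v_max = 0.2 * (hi - lo) if v_max is None else v_max
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)                    # clamp at V_max
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                             # update pbest
        pbest[better], pbest_f[better] = x[better], f[better]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:                         # update gbest
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```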

2.4. Neural Identification

In the literature, the capacity of the RHONN to identify nonlinear systems in continuous time [13] as well as in discrete time [7, 24] is reported; however, the accuracy of the results depends strongly on the selected RHONN structure as well as on its training. For real-time implementations, the discrete-time RHONN has shown good results; however, as demonstrated in [25], the identification error can be minimized by increasing the number of high-order connections, although it cannot be eliminated for real-life problems. On the other hand, it is possible to reduce the identification error by means of an adequate training algorithm. The EKF training algorithm has proven to be reliable for many applications, particularly for real-time implementations [6, 24]. However, EKF training requires the heuristic selection of some design parameters, which is not always an easy task; besides, the selection of these design parameters directly affects the bound of the identification error [24]. Therefore, a systematic methodology to select the design parameters is an important contribution for neural identification of unknown discrete-time nonlinear systems.

Now, the RHONN (6) is trained with the EKF-PSO algorithm defined above to identify the nonlinear system (1). First, a RHONN structure is proposed; such a structure may or may not have physical significance, although for control tasks it is better to consider a RHONN structure with physical significance. Then, the EKF (9)-(10) is used to implement the online series-parallel training of the RHONN. However, as mentioned above, the EKF training algorithm requires the adequate selection of several design parameters; in particular, the covariance matrices and the learning rate directly affect the identification error, and it is hard to determine suitable values for them. In order to simplify the tuning of the training algorithm and to improve the identification process, the use of the standard PSO algorithm (Figure 1) to determine such parameters is proposed in this paper. Finally, an improved identification scheme for unknown discrete-time nonlinear systems is obtained.
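To illustrate how the pieces fit together, the sketch below outlines one possible online series-parallel identification loop; it reuses the illustrative `sigmoid`, `ekf_update`, and `pso` sketches given earlier, and, because each neuron output is linear in its weights, the derivative matrix $H_i$ reduces to $z_i$. The data layout and field names are assumptions, not the authors' implementation.

```python
import numpy as np

def identify(plant_states, inputs, neurons, fitness):
    """Series-parallel RHONN identification with EKF-PSO training (sketch).

    plant_states : array (N, n) of measured/observed plant states
    inputs       : array (N, m) of applied inputs
    neurons      : list of dicts with keys 'w', 'idx', 'exp', 'P', 'Q', 'R'
    fitness      : function evaluated by PSO to rate candidate EKF parameters
    """
    theta, _ = pso(fitness, dim=3)             # PSO selects entries for P(0), Q, R
    for nrn in neurons:
        L = nrn["w"].size
        nrn["P"] = theta[0] * np.eye(L)
        nrn["Q"] = theta[1] * np.eye(L)
        nrn["R"] = theta[2] * np.eye(1)
    N = plant_states.shape[0]
    errors = np.zeros((N - 1, len(neurons)))
    for k in range(N - 1):
        xi = np.concatenate([sigmoid(plant_states[k]), inputs[k]])
        for i, nrn in enumerate(neurons):
            z = np.array([np.prod(xi[idx] ** d)            # high-order terms z_i
                          for idx, d in zip(nrn["idx"], nrn["exp"])])
            x_hat = float(nrn["w"] @ z)                     # one-step prediction
            e = np.array([plant_states[k + 1, i] - x_hat])  # identification error
            H = z.reshape(-1, 1)                            # dx_hat/dw = z
            nrn["w"], nrn["P"] = ekf_update(nrn["w"], nrn["P"], H, e,
                                            nrn["Q"], nrn["R"])
            errors[k, i] = e[0]
    return errors
```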

3. Linear Induction Motor Application

In this section, the above developed scheme is applied to identify a three-phase linear induction motor. It is important to note that the proposed scheme is developed assuming that the plant model, parameters, and external disturbances (load torque) are unknown.

3.1. Motor Model

In order to illustrate the applicability of the proposed scheme, in this section the proposed neural identifier is applied to the model of a LIM discretized by the Euler technique, which for the purposes of this paper is considered unknown [26–28]. Its state variables are the mover position, the linear velocity, the α and β secondary flux components, and the α and β primary current components; its inputs are the α and β primary voltage components. Its parameters are the winding resistance per phase, the secondary resistance per phase, the magnetizing inductance per phase, the primary and secondary inductances per phase, the load disturbance, the viscous friction and iron-loss coefficient, the number of pole pairs, and the sampling period [26].
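For reference, the Euler discretization of a generic continuous-time model follows the simple pattern sketched below; this is a generic illustration and not the LIM equations themselves, which are treated as unknown in this work.

```python
import numpy as np

def euler_step(f, x, u, T):
    """Forward-Euler discretization: x(k+1) = x(k) + T * f(x(k), u(k)).

    f : continuous-time dynamics, f(x, u) -> dx/dt
    T : sampling period
    """
    return x + T * np.asarray(f(x, u))
```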

It is important to note that this mathematical model is considered unknown for the design of the neural identifier. It is only included in this paper for completeness purposes.

3.2. Neural Identifier Design

The proposed neural identifier consists of six neural states $x_1, \ldots, x_6$, which identify the position, the linear velocity, the α and β secondary flux components, and the α and β primary current components, respectively. For this application, only the fluxes are considered unmeasurable. The training is performed online, using a series-parallel configuration as shown in Figure 2. Both the NN and the LIM states are initialized randomly. The associated covariance matrices are computed using the PSO algorithm, and the RHONN weights are updated with the EKF as in (9). The input signals (the α and β primary voltage components) are selected as chirp functions.
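As an illustration of the chirp excitation, the snippet below generates a pair of linear chirp voltage signals; the sampling period, amplitude, frequency range, and phase shift are assumptions for illustration only and are not the values used in the experiments.

```python
import numpy as np
from scipy.signal import chirp

T = 0.0005                                   # assumed sampling period [s]
t = np.arange(0.0, 10.0, T)                  # 10 s excitation window
# Linear chirps sweeping from 0 Hz to 60 Hz; the beta component is phase shifted.
u_alpha = 50.0 * chirp(t, f0=0.0, t1=10.0, f1=60.0, method='linear')
u_beta = 50.0 * chirp(t, f0=0.0, t1=10.0, f1=60.0, method='linear', phi=-90.0)
```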

3.2.1. Reduced-Order Nonlinear Observer

The proposed neural identifier requires the full state measurement assumption [26]. However, for real-time implementations, measuring the secondary fluxes is a difficult task. Here, a reduced-order nonlinear observer is designed for the fluxes on the basis of the speed and current measurements, using the flux dynamics of the LIM model in Section 3.1; the observer and its stability proof are presented in [29].

3.3. Experimental Results

The proposed scheme is depicted in Figure 2. The experiments are performed using a benchmark whose schematic representation is depicted in Figure 3. Figure 4 shows the experimental benchmark for the LIM.

The methodology used to implement the experimental identifier is as follows:
(1) to validate and test the algorithm via simulation in Matlab/Simulink, using a plant model and its respective parameters;
(2) to download the validated identifier to the DS1104 board;
(3) to replace the simulated model state variable values by the induction motor measurements (currents and angular position), acquired through the DS1104 board A/D ports, and by the calculated (flux) state variable values;
(4) to send back, through the DS1104 board, the input signals (voltages) defined as chirp signals;
(5) to process the input signals through the space vector pulse width modulation (SVPWM) power stage;
(6) to apply the SVPWM output to the induction motor.

Experimental results are presented as follows. Figure 5 displays the identification performance for the position; it can be observed that there is noise in the plant signal; however, the RHONN is capable of identifying the position signal. Figure 6 illustrates the identification performance for the linear velocity; it is possible to note that the identification errors remain bounded and decay quickly to a minimum value. Figures 7 and 8 present the identification performance for the fluxes in the α and β phases, respectively; for both variables the identification is achieved in less than 100 ms with adequate accuracy. Figures 9 and 10 portray the identification performance for the currents in the α and β phases, respectively. From both figures, it is possible to appreciate that a suitable identification is performed; however, the evolution of each variable is different due to several factors, such as the structure selected for each neural state variable and/or the presence of external and internal disturbances, which are not necessarily equal for each component. Finally, Figure 11 shows the identification errors; it can be seen that all of them remain bounded. Besides, from Figure 11 it is possible to note that the bounds of the identification errors are different for each state variable, since each one can be affected differently by the internal and external disturbances of the system; in this particular experiment, it can be observed that the β components of the current and flux are more affected than the α ones. This could be an instrumentation problem; however, from the point of view of neural network training it can be seen as an opportunity: even in the presence of internal and external disturbances, the neural network performs an adequate identification of the state variables, and the inclusion of PSO improves the identification scheme, as explained below. It is important to consider that the experimental results depicted in Figures 5 to 11 have been obtained in open loop with chirp functions as inputs for the LIM, in order to excite most of the plant dynamics; moreover, the neural states are initialized randomly.

3.4. Comparison of the EKF-PSO Algorithm for Neural Identification

In order to evaluate the performance of the EKF-PSO algorithm, it is compared with the typical EKF algorithm [24]; both training algorithms are used to identify a LIM in real time. The RHONN structures are exactly the same as in Section 3.2; only the training algorithm is changed in order to compare their performance.

Table 1 includes a comparison between the proposed EKF-PSO learning algorithm and the EKF one.

Results included in Table 1 show that the proposed methodology improves on the results obtained with the EKF training algorithm.

4. Conclusions

This paper has presented the application of recurrent high-order neural networks to the identification of discrete-time nonlinear systems. The training of the neural networks was performed online using an extended Kalman filter improved with a PSO algorithm. Experimental results illustrate the applicability of the proposed identification methodology for the online identification of a three-phase linear induction motor. In our experiments, the proposed neural identifier proved to capture very well the complexity of unknown discrete-time nonlinear systems. Finally, the use of PSO to improve the identification results has been experimentally illustrated in this paper.

Acknowledgments

The authors acknowledge the support of CONACYT Mexico, through Projects 103191Y, 106838Y, and 156567Y.