Abstract

This paper treats some problems related to nonlinear system identification. A neural network model for identifying nonlinear dynamic systems, supported by a stability analysis, is presented. A constrained adaptive stable backpropagation (CSBP) updating law is derived and used in the proposed identification approach. The backpropagation training algorithm is modified to obtain an adaptive learning rate guaranteeing convergence stability. The proposed learning rule is thus the backpropagation algorithm under the condition that the learning rate belongs to a specified range defining the stability domain; when this condition is satisfied, unstable phenomena during the learning process are avoided. A Lyapunov analysis leads to the expression of a convenient adaptive learning rate verifying the convergence stability criteria. Finally, the elaborated training algorithm is applied in several simulations, and the results confirm the effectiveness of the CSBP algorithm.

1. Introduction

The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, where they offer an attractive paradigm for a broad range of adaptive complex systems. In recent years, ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature-extraction tasks. Examples include optical character recognition, speech recognition, and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, many different ANN architectures and learning schemes have been proposed to meet varying needs such as robustness and stability.

The area of system identification has received significant attention over the past decades and is now a fairly mature field with many powerful methods at the disposal of control engineers. Most online system identification methods to date rely on recursive techniques, such as recursive least squares, and apply to systems that are linear in the parameters.

During the past few years, several authors [1–3] have suggested neural networks for nonlinear dynamical black-box modelling. The problem of designing a mathematical model of a process using only observed data has attracted much attention, both from an academic and an industrial point of view. Neural models can be used either as simulators or as models.

Recently, feedforward neural networks have been shown to yield successful results in system identification and control [4]. Such neural networks are static input/output mapping schemes that can approximate a continuous function to an arbitrary degree of accuracy. These results have also been extended to recurrent neural networks [5, 6].

Recent results show that neural network techniques are very effective for identifying a broad category of complex nonlinear systems when complete model information cannot be obtained. The Lyapunov approach has been used directly to obtain stable training algorithms for continuous-time neural networks [7–9]. Stability results for neural networks can be found in [10, 11], and the stability of learning algorithms has been discussed in [6, 12].

It is well known that conventional identification algorithms are stable for ideal plants [13–15]. In the presence of disturbances or unmodeled dynamics, these adaptive procedures can easily become unstable. The lack of robustness in parameter identification was demonstrated in [10] and became a hot issue in the 1980s; several robust modification techniques were proposed in [13, 14]. The weight-adjusting algorithms of neural networks are a type of parameter identification, and the normal gradient algorithm is stable when the neural-network model can match the nonlinear plant exactly [6]. Generally, some modifications to the normal gradient algorithm or backpropagation should be applied so that the learning process is stable. For example, in [12, 16], some hard restrictions were added to the learning law, and in [11], the dynamic backpropagation has been modified with NLq stability constraints.

The paper is organized as follows. Section 2 describes the neural identifier structure considered in this paper and the usual backpropagation algorithm. In Section 3, through a stability analysis, a constrained adaptive stable backpropagation (CSBP) algorithm is proposed to provide a stable adaptive updating process. Three simulation examples demonstrate the effectiveness of the suggested algorithm in Section 4.

2. Preliminaries

The main concern of this section is to introduce the feedforward neural network, which is the adopted architecture, as well as some concepts of the backpropagation training algorithm. Consider a discrete-time input-output nonlinear system of the general form

y(k + 1) = f(y(k), ..., y(k − n + 1), u(k), ..., u(k − m + 1)),

where f(·) is an unknown nonlinear function and u(k) and y(k) are the plant input and output. The neural model for the plant can be expressed as

ŷ(k + 1) = N(x(k), W),

where x(k) is the regressor collecting the past plant inputs and outputs, and W is the weight parameter vector for the neural model.

A typical multilayer feedforward neural network is shown in Figure 1, where net_j is the jth hidden neuron input and o_j is the jth hidden neuron output; i, j, and k index the neurons; w_ij (an element of W) is the weight between input neuron i and hidden neuron j, while v_j (an element of V) is the weight between hidden neuron j and the output neuron. For all neurons, the nonlinear activation function is sigmoidal, and the output of the considered NN is obtained by propagating the regressor through the hidden layer to the output neuron. Training the neural model consists of adjusting the weight parameters so that the network emulates the nonlinear plant dynamics; the input-output training data are obtained from the operating history of the plant.
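To make the architecture concrete, the following minimal sketch (Python/NumPy) computes the output of a small three-layer network of the kind just described. The 2-3-1 dimensions, the logistic form of the sigmoid, and the linear output neuron are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid activation used in every hidden neuron of this sketch.
    return 1.0 / (1.0 + np.exp(-x))

def nn_forward(x, W, V):
    """Forward pass of a three-layer feedforward network.

    x : input (regressor) vector, shape (n_in,)
    W : input-to-hidden weights, shape (n_hidden, n_in)
    V : hidden-to-output weights, shape (n_hidden,)
    Returns the scalar network output and the hidden activations.
    """
    net = W @ x            # hidden neuron inputs  net_j = sum_i w_ij * x_i
    o = sigmoid(net)       # hidden neuron outputs o_j
    y_hat = float(V @ o)   # single (linear) output neuron
    return y_hat, o

# Example: a 2-3-1 network evaluated on one regressor [y(k), u(k)].
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 2))
V = 0.1 * rng.standard_normal(3)
y_hat, _ = nn_forward(np.array([0.5, -0.2]), W, V)
```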

Using gradient descent, the weight connecting neuron i to neuron j is updated as

w_ij(k + 1) = w_ij(k) − ε ∂J(k)/∂w_ij(k),

where J(k) = (1/2) e²(k) is the quadratic identification criterion, e(k) = y(k) − ŷ(k) is the identification error, and ε is the learning rate. The partial derivatives are calculated with respect to the vectors of weights W and V. The backpropagation algorithm has become the most popular method for training the multilayer perceptron [1]. As noted above, some modifications of the normal gradient algorithm or backpropagation are generally required so that the learning process remains stable; for example, hard restrictions were added to the learning law in [12, 16], and the dynamic backpropagation was modified with NLq stability constraints in [11].
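A minimal sketch of the corresponding backpropagation step is given below. It assumes the quadratic cost J = ½e² with e = y − ŷ, a linear output neuron, and a single shared learning rate eps; these choices, and all variable names, are illustrative rather than the paper's exact notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, y, W, V, eps):
    """One backpropagation update of W (input-to-hidden) and V (hidden-to-output).

    Minimizes J = 0.5 * e**2 with e = y - y_hat by plain gradient descent.
    """
    net = W @ x
    o = sigmoid(net)
    y_hat = V @ o
    e = y - y_hat                       # identification error e(k)

    # Gradients of J with respect to V and W (chain rule).
    dJ_dV = -e * o                      # dJ/dv_j = -e * o_j
    delta = -e * V * o * (1.0 - o)      # backpropagated hidden-layer error
    dJ_dW = np.outer(delta, x)          # dJ/dw_ij = delta_j * x_i

    # Gradient-descent updates with learning rate eps.
    V_new = V - eps * dJ_dV
    W_new = W - eps * dJ_dW
    return W_new, V_new, e
```

One such step would typically be applied once per sample drawn from the plant's operating history.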

Research on modified training algorithms for feedforward neural networks has become a challenging field. This research involves the development of heuristic techniques, which arise from a study of the distinctive performance of the standard backpropagation algorithm. Such heuristic techniques include varying the learning rate [17], using momentum [18], and rescaling variables [19].
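As an illustration of one such heuristic, the short sketch below (Python) adds a momentum term [18] to a plain gradient step; the coefficient values are assumptions chosen only for illustration.

```python
def update_with_momentum(w, grad, prev_step, eps=0.05, alpha=0.9):
    """Gradient-descent step augmented with a momentum term.

    w         : current weight (scalar or NumPy array)
    grad      : gradient of the cost J with respect to w
    prev_step : weight change applied at the previous iteration
    Returns the updated weight and the step just taken.
    """
    step = -eps * grad + alpha * prev_step  # momentum smooths the weight trajectory
    return w + step, step
```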

3. Stability Analysis and CSBP Algorithm Formulation

In the literature, the Lyapunov synthesis [4, 5] consists of selecting a positive candidate function V and computing an adaptation law ensuring its decrease, that is, V̇ ≤ 0 for continuous systems and ΔV(k) = V(k + 1) − V(k) ≤ 0 for discrete-time systems. Under these assumptions, the function V is called a Lyapunov function and guarantees the stability of the system. Our objective is the determination of a stabilizing adaptation law ensuring the stability of the identification scheme presented in what follows and the boundedness of the output signals. The stability of the learning process in an identification approach leads to better modelling and guaranteed performance. The proposed learning rule is the backpropagation algorithm with a constrained learning rate; when this constraint is satisfied, unstable phenomena during the learning process are avoided. This problem has been treated in the neural identification literature, and the present work can be seen as a solution to an extended version of it. The originality of this work lies in the constraints themselves. In fact, a choice of the learning rate respecting the proposed constraints ensures an efficient and stable identification, which is not the case with an arbitrary learning rate, especially when it does not belong to the specified stability domain. In the proposed approach, and through the calculation results derived below, the learning rate is computed iteratively at each instant with respect to the elaborated constraints. The following assumptions are made for system (1).
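For illustration, a generic instance of this synthesis for a discrete-time weight-error system is sketched below; the candidate function is the standard weight-error choice and the derivation is given only to fix ideas, not as a restatement of the exact development that follows.

```latex
% Candidate Lyapunov function on the weight error \tilde{W}(k) = W(k) - W^{*}:
V(k) = \operatorname{tr}\!\bigl[\tilde{W}(k)\,\tilde{W}^{\mathsf{T}}(k)\bigr] \ge 0 .
% Decrement along the learning dynamics W(k+1) = W(k) - \varepsilon\,\nabla_{W} J(k):
\Delta V(k) = V(k+1) - V(k)
            = -2\varepsilon\,\operatorname{tr}\!\bigl[\nabla_{W}J(k)\,\tilde{W}^{\mathsf{T}}(k)\bigr]
              + \varepsilon^{2}\,\bigl\|\nabla_{W}J(k)\bigr\|^{2}.
% Stability in the sense of Lyapunov requires \Delta V(k) \le 0, which is a
% second-degree inequality in \varepsilon and hence defines an admissible range
% 0 < \varepsilon < \varepsilon_{\max}(k) for the learning rate.
```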

Assumption 1. The unknown nonlinear function f(·) is continuous and differentiable.

Assumption 2. The system output can be measured, and its initial values are assumed to lie in a compact set.

Theorem 1. The stability in the Lyapunov sense of the identification scheme is guaranteed for a learning rate ε verifying the inequality given in (7), where ∇J denotes the gradient of J with respect to the weight vector.

Proof. Consider the Lyapunov function V(k) = tr[W̃(k) W̃ᵀ(k)], where tr(·) denotes the matrix trace operation, W̃(k) = W(k) − W*, and W* denotes the optimal value of the weight parameter vector.
The computation of ΔV(k) = V(k + 1) − V(k) under the adopted adaptation law, which is the gradient algorithm, involves the partial derivatives of the network output with respect to the weights. Collecting these terms into two quantities A and B, ΔV(k) can be written as a second-degree expression in the learning rate ε. The stability condition ΔV(k) ≤ 0 is satisfied only if this second-degree expression is nonpositive; solving the corresponding second-degree inequality in ε establishes the result presented in (7).
Using the expressions of A and B, the bound in (7) is expressed in terms of quantities depending only on the network outputs and their partial derivatives with respect to the weights.
The previous result applies when the backpropagation adaptation law uses the same learning rate for all the weights of the neural network. An extension is presented next; it considers two different constrained learning rates, which improves the efficiency of the first algorithm.

Theorem 2. Let ε₁ and ε₂ be the learning rates for the two sets of tuning parameters W and V of the neural identifier. Then asymptotic convergence is guaranteed if the learning rates are chosen to satisfy condition (21).

Lemma 1. If the learning rates are chosen according to condition (21), then the convergence condition ΔV(k) ≤ 0 holds.

Proof. Consider a Lyapunov function of the same form as in Theorem 1, now defined on the parameter errors of both weight vectors W and V.
The computation of ΔV(k) under the two-rate gradient adaptation law, followed by bounding the resulting terms with the adopted matrix norm, establishes the theorem results.
The stability condition ΔV(k) ≤ 0 is then satisfied only if the learning rates verify condition (21).

Remark 1. In the simulations, learning rates are chosen within the defined stability range in order to demonstrate the effectiveness of the proposed CSBP algorithm. The learning rate which guarantees convergence corresponds to the upper bound of the stability range reduced by a small positive value δ that guarantees the convergence stability condition.
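As a sketch of how Remark 1 might be implemented, the fragment below (Python) computes an adaptive learning rate at every iteration as an upper bound minus a small margin delta. The bound used here, 2/(1 + ||∇J||²), is only an illustrative placeholder standing in for the actual bounds (7) and (21); delta and the function names are likewise assumptions.

```python
import numpy as np

def constrained_learning_rate(grad_J, delta=1e-3):
    """Adaptive learning rate chosen just inside an assumed stability range.

    grad_J : gradient of the cost J with respect to all trainable weights,
             flattened into one vector.
    delta  : small positive margin keeping eps strictly below the bound.
    NOTE: eps_max below is a placeholder; the paper's bounds are (7) and (21).
    """
    eps_max = 2.0 / (1.0 + float(np.dot(grad_J, grad_J)))
    return max(eps_max - delta, 1e-6)

# Typical use inside a training loop:
#   grad = np.concatenate([dJ_dW.ravel(), dJ_dV.ravel()])
#   eps  = constrained_learning_rate(grad)
#   W   -= eps * dJ_dW
#   V   -= eps * dJ_dV
```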

4. Simulation Results

In this section, two discrete-time systems and an additional identification example are considered to demonstrate the effectiveness of the results discussed above.

4.1. First-Order System

The considered system is a well-known benchmark in the literature of neural adaptive control and identification; it is described by the recurrent equation given in [2]. For the neural model, a three-layer NN was selected with two input nodes, three hidden nodes, and one output node. Sigmoidal activation functions were employed in all the nodes.

The weights are initialized to small random values, and the learning rate is evaluated at each iteration through (21). It is also observed that training performs very well when the learning rate is small.

A sinusoidal input signal is chosen. The simulations are carried out in two cases over 120 iterations: two learning rate values are fixed, one inside and one outside the learning rate range presented in (7).
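For illustration, the self-contained sketch below (Python/NumPy) reproduces the structure of this experiment: a 2-3-1 sigmoidal network trained on-line for 120 iterations under a sinusoidal excitation, with the learning rate recomputed at every step. The plant recurrence, the input expression, the learning-rate bound, and the linear output neuron are all assumptions introduced for the sketch; the paper's exact expressions are those omitted above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plant(y, u):
    # Hypothetical first-order benchmark plant (assumption; the paper's exact
    # recurrence is the omitted equation attributed to [2]).
    return y / (1.0 + y ** 2) + u ** 3

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((3, 2))   # input-to-hidden weights (2 inputs, 3 hidden)
V = 0.1 * rng.standard_normal(3)        # hidden-to-output weights

y, errors = 0.0, []
for k in range(120):
    u = np.sin(2.0 * np.pi * k / 25.0)  # assumed sinusoidal excitation
    x = np.array([y, u])                # regressor [y(k), u(k)]

    # Neural model prediction and identification error.
    o = sigmoid(W @ x)
    y_hat = V @ o
    y_next = plant(y, u)
    e = y_next - y_hat

    # Backpropagation gradients of J = 0.5 * e**2.
    dJ_dV = -e * o
    dJ_dW = np.outer(-e * V * o * (1.0 - o), x)

    # Adaptive learning rate kept inside an assumed stability range (cf. Remark 1).
    g2 = float(np.dot(dJ_dV, dJ_dV) + np.sum(dJ_dW ** 2))
    eps = max(2.0 / (1.0 + g2) - 1e-3, 1e-6)

    V -= eps * dJ_dV
    W -= eps * dJ_dW
    y = y_next
    errors.append(abs(e))               # identification error history
```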

Simulation results are given through Figures 2 and 3.

Figures 2 and 3 show that if the learning rate belongs to the range defined in (7), the stability of the identification scheme is guaranteed and the identification objectives are satisfied. Outside this variation domain of the learning rate, the identification is unstable and the identification objectives are unreachable.

4.2. Second-Order System

A second example is used to illustrate the effectiveness of the proposed constrained updating law. Consider a second-order nonlinear discrete-time plant.

The process dynamics are interesting: the plant behaves as a first-order lowpass filter for input signal amplitudes of about 0.1, as a linear second-order system for small input amplitudes, and as a nonlinear second-order system for large input amplitudes [20].

For the neural model, a three-layer NN was selected with three input nodes, three hidden nodes, and one output node. Sigmoidal activation functions were employed in all the nodes.

The weights are initialized to small random values, and the learning rate parameter is computed instantaneously. A sinusoidal input signal is chosen. The simulations are carried out in two cases over 120 iterations: two learning rate values are fixed, one inside and one outside the learning rate range presented in (7).
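With respect to the sketch of the previous example, only the regressor and the hidden-layer weight shape change here. A possible construction, assuming the ordering [y(k), y(k−1), u(k)] for the three network inputs, is:

```python
import numpy as np

def make_regressor(y_k, y_km1, u_k):
    """Regressor for the second-order neural model: three inputs feeding the
    assumed 3-3-1 network (ordering [y(k), y(k-1), u(k)] is an assumption)."""
    return np.array([y_k, y_km1, u_k])

# Inside the identification loop, the only changes are:
#   x = make_regressor(y[k], y[k - 1], u[k])
# with the input-to-hidden weight matrix W of shape (3, 3) instead of (3, 2).
```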

Simulation results are given through Figures 4 and 5.

The simulation results in Figures 4 and 5 show that a learning rate arbitrarily chosen outside the predefined stability domain leads to an unstable identification of the considered system, whereas a learning rate belonging to the range verifying the stability condition ensures the tracking capability and the stability of the identification scheme.

Example 1 (identification of a semiconductor manufacturing process). This example illustrates the advantage and effectiveness of the approach in terms of its on-line self-tuning property and stability. We consider here a SISO first-order linear process of the form given in [21], involving process parameters and an autoregressive coefficient, where N denotes a noise term that follows an ARMA process driven by a uniformly distributed sequence. Here, the current output of the plant depends on four previous outputs and four previous inputs. In this case, a feedforward neural network with four input nodes, fed with the appropriate past values of the plant input and output, is used; only these four values are fed into the FFNN to determine the output. In training the FFNN, we used 100 epochs. The testing input signal is used to determine the identification results and is given by (37).
The weights are initialized to very small random values, and the learning rate parameter is calculated at each iteration.
The simulations are realized in the two cases: two learning rate values are fixed, one inside and one outside the learning rate range presented in (21) (see Figures 6 and 7).
In order to compare performance, learning rates are chosen inside and outside the learning rate stability range.
When an adaptive constrained learning rate inside the stability domain is adopted, faster convergence, stability, and tracking capability are guaranteed.
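A sketch of a possible data-generating mechanism for this example is given below (Python/NumPy). The process parameters, the ARMA coefficients, and the uniform-noise range are purely illustrative placeholders, since the actual values are not recoverable from the text above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder parameters (illustrative only; the paper's values are omitted above).
a, b = 0.6, 0.4        # process parameters
c = 0.3                # autoregressive coefficient of the noise model
d = 0.5                # moving-average coefficient of the noise model

def simulate(n_steps, u):
    """First-order linear process with additive ARMA noise (assumed structure):
       y(k) = a*y(k-1) + b*u(k-1) + N(k),
       N(k) = c*N(k-1) + xi(k) + d*xi(k-1),  xi ~ uniform(-0.05, 0.05)."""
    y = np.zeros(n_steps)
    N = np.zeros(n_steps)
    xi = rng.uniform(-0.05, 0.05, n_steps)
    for k in range(1, n_steps):
        N[k] = c * N[k - 1] + xi[k] + d * xi[k - 1]
        y[k] = a * y[k - 1] + b * u[k - 1] + N[k]
    return y

# Assumed sinusoidal test input; the paper's test signal is the one given by (37).
u = np.sin(2.0 * np.pi * np.arange(200) / 25.0)
y = simulate(200, u)
```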

5. Conclusion

To avoid unstable phenomena during the learning process, a constrained stable backpropagation (CSBP) algorithm has been proposed, guaranteeing a stable adaptive updating process. A Lyapunov analysis was carried out in order to derive new updating formulations that contain a set of inequality constraints. Both the convergence rate and the tracking capability of the CSBP algorithm are mainly determined by the learning rate: a larger learning rate gives faster convergence but poorer tracking capability, while a smaller learning rate gives slower convergence but better tracking capability. With the CSBP algorithm, faster convergence, stability, and tracking capability are guaranteed. The applicability and effectiveness of the presented approach are demonstrated through simulation examples.