Abstract

This paper presents a gain-scheduling design technique that relies upon neural models to approximate plant behaviour. The controller design is based on generic model control (GMC) formalisms and linearization of the neural model of the process. As a result, a PI controller action is obtained, where the gain depends on the state of the system and is adapted instantaneously on-line. The algorithm is tested on a nonisothermal continuous stirred tank reactor (CSTR), considering both single-input single-output (SISO) and multi-input multi-output (MIMO) control problems. Simulation results show that the proposed controller provides satisfactory performance during set-point changes and disturbance rejection.

1. Introduction

Most industrial plants are nonlinear in nature, but process control often relies on the traditional linear PID algorithm because of its simplicity and wide acceptance in industry. This choice can be justified by the fact that nonlinear processes can be approximated by a linear model close to steady state; therefore, for tight regulatory control around an operating point, a linear controller is adequate [1]. Nevertheless, for nonlinear processes whose nonlinearities are strong and whose operating conditions change widely during operation, linear controller designs are inadequate and more effective alternatives should be considered [2]. The availability of powerful computing tools has opened the way to implementing advanced process control strategies in which system nonlinearities can be taken into account.

Model-based feedback control can be a valid alternative to the linear, model-free PI algorithm, and the use of parameters that adapt as the process moves across operating conditions is a possible solution for controlling nonlinear, time-variant processes [3]. Adaptive systems are also used to compensate for input delay, as proposed by Na et al. [4], where an adaptive NN observer was designed for nonlinear systems. The generic model control (GMC) of Lee and Sullivan [5] is probably one of the simplest nonlinear model-based control techniques to install and maintain. In the chemical engineering field, the GMC strategy has been investigated for the control of reactors [6, 7], batch cooling crystallizers [8], batch and semibatch polymerization reactors [9, 10], and, recently, multistage flash desalination [11].

In this work, the GMC algorithm was used in conjunction with a dynamic neural network model, which describes the nonlinear relationship between controlled outputs and manipulated inputs [12, 13]. The main advantage of the proposed algorithm is that a PI-like controller structure is obtained, where the nonlinear dependence of the process gain on the operating conditions is handled by a gain-scheduling control scheme. The proposed algorithm is simple to implement and can be obtained without detailed knowledge of the plant; it is therefore well suited for industrial applications, where standard solutions are generally preferred. The performance of the proposed technique for nonlinear process control was tested on two case studies, a SISO [14] and a MIMO [15] control problem for a nonisothermal continuous stirred tank reactor (CSTR). These two cases, which are well-known benchmarks for testing control methodologies, were selected because their strong nonlinearities, due to the Arrhenius dependence of the kinetic rate on temperature, lead to a very rapid response of the process variables in regions of high conversion and a very mild response in regions of low conversion. The need for adaptive control strategies is therefore particularly evident.

2. Neural Network Model

There are several possibilities for building a dynamic neural network, and they may be classified into two main classes: time-lagged feedforward networks (TLFN) and recurrent neural networks (RNN) (cf. [16]). In the TLFN the dynamics are generally accounted for in the input layer as a linear combination of present and past values of the inputs, followed by a static nonlinear mapping between inputs and outputs; in the RNN the memory mechanism is brought inside the nonlinear mapping, that is, recurrent connections are applied among some or all layers (cf. Jordan or Elman networks). Both architectures are able to describe the dynamic behaviour of a process, but training and stability become more of an issue when moving from the TLFN to the RNN (cf. [16]).

In this work, a multilayer feed-forward neural network with recurrent neurons in the output layer is used to describe the process dynamics. The net topology is shown in Figure 1 and can be considered a special case of the TLFN in which the dynamics are moved from the input to the output layer; in this way, the advantages of the TLFN over the RNN are maintained.

The equations that describe the neuron output evolution are reported in the following:

$$\tau_j \frac{dy_j}{dt} + y_j = \sum_{i} w_{ji}\, z_i, \quad (1a)$$

$$z_i = \sigma\!\left(\sum_{k} v_{ik}\, u_k\right), \quad (1b)$$

where the activation function is chosen as

$$\sigma(x) = \frac{1}{1 + e^{-x}}. \quad (2)$$

The weight $w_{ji}$ is the interconnection between the $j$th output and the $i$th hidden neuron; $v_{ik}$ is the connection between the $k$th input and the $i$th hidden neuron; $\mathbf{z}$ is the output vector of the hidden layer; and $\mathbf{u}$ is the input vector. It is worth noting that the nonlinear plant characteristic modelled by (1a)-(1b) is stored in the weights between the input and output layers, and this represents the long-term prediction capability, while the locally recurrent neurons in the output layer may be thought of as a short-term (time) representation of the information (cf. [16]).

In the present work, this neural model is applied to develop a gain-scheduling control scheme. In order to obtain the simplest controller structure, the number of hidden neurons is set equal to the number of manipulated variables. As the results reported in the following sections demonstrate, this choice does not affect the neural model performance. Under this assumption, for a SISO system the neural network expression becomes

$$\tau \frac{d\hat{y}}{dt} + \hat{y} = w\, z(u, \mathbf{v}), \quad (3)$$

where $\hat{y}$ is the predicted output and $u$ is the manipulated variable. The variable $z$ is the only output of the hidden neuron, which depends nonlinearly on the unique manipulated variable, $u$, since a sigmoidal activation function is used for the neurons in the hidden layer. The vector $\mathbf{v}$ represents the other net inputs necessary to describe the plant properly. For the sake of brevity, the neural network for a MIMO system is not reported, since it can be straightforwardly derived from (3).
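To make the structure of (1a)-(3) concrete, the following sketch (a minimal Python illustration with made-up weight values, not the trained network of this work) simulates the SISO model: a single sigmoidal hidden neuron driven by the manipulated input $u$ and the auxiliary inputs $\mathbf{v}$, followed by a first-order, locally recurrent output neuron advanced with an explicit Euler step.

```python
import numpy as np

# Sketch of the SISO dynamic neural model (3): one sigmoidal hidden neuron
# fed by the manipulated input u and the auxiliary inputs v, and a
# first-order (locally recurrent) output neuron with time constant tau.
# All numerical values below are hypothetical placeholders.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SISODynamicNet:
    def __init__(self, v_u, v_aux, w, tau):
        self.v_u = v_u                    # hidden-layer weight on the manipulated input u
        self.v_aux = np.asarray(v_aux)    # hidden-layer weights on the other net inputs v
        self.w = w                        # weight between hidden and output neuron
        self.tau = tau                    # time constant of the output neuron
        self.y_hat = 0.0                  # state of the recurrent output neuron

    def hidden(self, u, v):
        """Output z of the sigmoidal hidden neuron, cf. (1b)."""
        return sigmoid(self.v_u * u + self.v_aux @ np.asarray(v))

    def step(self, u, v, dt):
        """Advance the first-order output neuron by one sampling interval, cf. (1a)."""
        z = self.hidden(u, v)
        dy_dt = (self.w * z - self.y_hat) / self.tau
        self.y_hat += dt * dy_dt
        return self.y_hat

# Example with made-up numbers: constant inputs over ten sampling steps.
net = SISODynamicNet(v_u=0.8, v_aux=[0.2, -0.5], w=1.5, tau=2.0)
for _ in range(10):
    y_pred = net.step(u=0.5, v=[0.4, 0.6], dt=0.2)
```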

3. Neural Based Gain-Scheduling Control

Starting from the nonlinear neural model, a GMC [5] approach is used to synthesize the gain-scheduling control. In order to define the controller action, the desired output behaviour has to be specified in the form of a trajectory. Then the process model is used directly to synthesize the controller required to cause the process output to follow this trajectory. A good choice [5] for the reference trajectory is reported in

$$\frac{dy}{dt} = K_1\left(y_{sp} - y\right) + K_2 \int_0^t \left(y_{sp} - y\right) dt', \quad (4)$$

where, for a given desired output $y_{sp}$, a suitable selection of the parameters $K_1$ and $K_2$ can be made to achieve a variety of responses in $y$. Substituting the reference trajectory in (3), the action of the manipulated variable can be implicitly derived from the model. Of course, the control action in this case is not linear, and it is difficult to implement. Referring to Ogunnaike and Ray [17], a GMC control structure can lead to a PI-like controller if a first-order system is driven to follow a first-order reference trajectory, that is, setting $K_2 = 0$ in (4).

Letting $\hat{y} = y$, the process model (3) can be used directly to obtain the controller required to cause the process to follow trajectory (4), by solving for the manipulated input $u$. It should be pointed out that, using model (3), no explicit solution is available for the manipulated input, and the controller equation has to be solved numerically. The controller algorithm can be simplified by linearizing the right-hand side of (3) with respect to $u$ around the current operating point $u_0$; hence the model for a SISO system becomes

$$\tau \frac{d\hat{y}}{dt} + \hat{y} = K u + c, \quad (5)$$

where $c = w\,z(u_0, \mathbf{v}) - K u_0$ collects the terms that do not depend on $u$. In the following, the coefficient of the manipulated input will be indicated with $K$, as defined by (6). The gain depends on the state of the system through the derivative of $z$ with respect to the input $u$ of the neural model:

$$K = w \left.\frac{\partial z}{\partial u}\right|_{u_0,\, \mathbf{v}}. \quad (6)$$

The nonlinearity of the system is taken into account by calculating, at every sampling time, the derivative of $z$ with respect to the input from the neural model (3).
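Because the hidden neuron is a logistic sigmoid, the derivative $\partial z/\partial u$ in (6) has the closed form $z(1-z)\,v_u$, so the scheduled gain can be evaluated analytically at each sampling time. A minimal sketch, reusing the hypothetical SISODynamicNet object defined above, is:

```python
# Scheduled gain of (6), evaluated at the current operating point (u, v).
# For a logistic sigmoid, dz/du = z * (1 - z) * v_u, so no numerical
# differentiation is required; the hidden-neuron output is simply reused.

def scheduled_gain(net, u, v):
    """Return K = w * dz/du evaluated at (u, v), cf. (6)."""
    z = net.hidden(u, v)
    dz_du = z * (1.0 - z) * net.v_u      # analytic derivative of the sigmoid
    return net.w * dz_du

K = scheduled_gain(net, u=0.5, v=[0.4, 0.6])
```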

The PI controller action is derived at this point by setting $K_2 = 0$ in the reference trajectory (4), leading to

$$\frac{dy}{dt} = K_1\left(y_{sp} - y\right), \quad (7)$$

or

$$\frac{dy}{dt} = K_1 \varepsilon, \quad (8)$$

where $\varepsilon = y_{sp} - y$ represents the usual feedback error term. Integrating (8), the following expression for the desired trajectory is obtained:

$$y(t) = y(0) + K_1 \int_0^t \varepsilon\, dt'. \quad (9)$$

The control action required to cause the process to follow the trajectory described by (8) and (9) may be obtained by substituting the desired trajectory expressions in the linearized model (5), leading to

$$\tau K_1 \varepsilon + y(0) + K_1 \int_0^t \varepsilon\, dt' = K u + c, \quad (10)$$

and then solving for $u$:

$$u = \frac{1}{K}\left[\tau K_1 \varepsilon + K_1 \int_0^t \varepsilon\, dt' + y(0) - c\right]. \quad (11)$$

In this way, a PI control law is obtained, where $\varepsilon$ represents the actual error, $\tau$ is equal to the time constant of the output neuron, $K_1$ is a tuning parameter set as the inverse of the desired response time, and $K$ is the static gain of the linear first-order system obtained by linearizing the neural model of the system, as defined by (5) and (6). The gain $K$ is a function of the system status because it depends on the derivative of $z$ with respect to the input. The presence of the integral term compensates for the error of the approximated neural model.
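The resulting control law (11) can be implemented in discrete time as sketched below. The sketch builds on the hypothetical objects introduced above; the error integral is accumulated numerically, and the tuning value of $K_1$, the sampling time, and the input bounds are illustrative placeholders rather than the values used in the paper.

```python
import numpy as np

# Discrete-time sketch of the gain-scheduled PI law (11): the gain K is
# re-evaluated from the neural model at every sampling time, while K1
# (inverse of the desired response time) is the only tuning parameter.

class GainScheduledPI:
    def __init__(self, net, K1, dt, u_min, u_max):
        self.net = net
        self.K1 = K1
        self.dt = dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0          # running integral of the error
        self.y0 = None               # output value when the controller is switched on

    def update(self, y_sp, y_meas, u_prev, v):
        if self.y0 is None:
            self.y0 = y_meas
        eps = y_sp - y_meas
        self.integral += eps * self.dt
        K = scheduled_gain(self.net, u_prev, v)                  # adaptive gain, cf. (6)
        K = np.sign(K) * max(abs(K), 1e-6)                       # guard against a vanishing gain
        c = self.net.w * self.net.hidden(u_prev, v) - K * u_prev # linearization offset, cf. (5)
        tau = self.net.tau
        u = (tau * self.K1 * eps + self.K1 * self.integral + self.y0 - c) / K
        return float(np.clip(u, self.u_min, self.u_max))         # respect actuator limits

# Illustrative tuning: 1/K1 = 0.4 * tau_max with a hypothetical tau_max = 2.0.
controller = GainScheduledPI(net, K1=1.0 / (0.4 * 2.0), dt=0.2, u_min=0.0, u_max=1.0)
u_new = controller.update(y_sp=0.10, y_meas=0.08, u_prev=0.5, v=[0.4, 0.6])
```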

The advantage of such an approach is that the nonlinear behaviour of the process is considered because of the time-varying nature of the controller gain. As a result, since parameters are simply adjusted on-line for all process conditions, a standard PI controller structure is maintained. The proposed methodology has the scheme of a gain-scheduling control approach, as the block diagram in Figure 2 shows. It is important to note that in this case the use of a neural model removes the need for detailed process knowledge to define operating bands and for open loop tests to locally calibrate the controller gain within each band, which is the typical drawback of the gain-scheduling controller scheme.

4. Results and Discussion

As mentioned in the Introduction, two continuous stirred tank reactor systems are considered to show the performance of the proposed control algorithm. Both case studies are well-known benchmarks for testing advanced control methodologies [14, 15, 18].

4.1. Case 1

The first case study is the CSTR proposed by Lightbody and Irwin [18] and Ge et al. [14]. The system consists of a constant-volume reactor cooled by a single coolant stream, and the objective is to control the effluent concentration, $C_A$, by manipulating the coolant flow rate, $q_c$. The model equations are reported as follows:

$$\frac{dC_A}{dt} = \frac{q}{V}\left(C_{A0} - C_A\right) - k_0 C_A \exp\!\left(-\frac{E}{RT}\right), \quad (12a)$$

$$\frac{dT}{dt} = \frac{q}{V}\left(T_0 - T\right) + k_1 C_A \exp\!\left(-\frac{E}{RT}\right) + k_2 q_c \left[1 - \exp\!\left(-\frac{k_3}{q_c}\right)\right]\left(T_{c0} - T\right), \quad (12b)$$

where $k_1 = (-\Delta H)\, k_0 / (\rho C_p)$ and $k_2 = \rho_c C_{pc} / (\rho C_p V)$, with $k_3 = h A / (\rho_c C_{pc})$. The model parameters and nominal values used for this work are reported in Table 1.

The selected neural network model that describes the process is composed of two neurons in the input layer, one neuron in the hidden layer (equal to the number of manipulated variables), and one neuron in the output layer. The system has one natural input, which is the coolant flow rate, $q_c$. In order to provide more information on the state of the reactor, another input is selected from an energy balance around the cooling system. The net is trained using data obtained by simulating the reactor for 300 minutes (1500 points sampled every 0.2 minutes) and exciting the plant through step variations of the manipulated variable. The input was modified every 5 minutes, randomly varying the amplitude of the step in the range 94.7–108.0 l/min. The Levenberg-Marquardt algorithm is used to estimate the network's weights, with the sum of squared errors as the objective function, and the training is repeated for one hundred randomly generated sets of initial weights. The selected weights are the ones leading to the lowest error on the validation set, which comprises 30% of the total amount of data recorded.
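The excitation sequence described above can be reproduced with a short script along the following lines; `simulate_cstr` is a hypothetical stand-in for an integrator of (12a)-(12b) with the Table 1 parameters, which are not repeated here.

```python
import numpy as np

# Sketch of the training-data generation: the coolant flow rate is held
# constant for 5 minutes at a time, its level is drawn uniformly in
# 94.7-108.0 l/min, and the plant is sampled every 0.2 min for 300 min.

rng = np.random.default_rng(0)
dt, t_final, step_len = 0.2, 300.0, 5.0            # min
n_samples = int(t_final / dt)                      # 1500 points
samples_per_step = int(step_len / dt)

qc = np.repeat(rng.uniform(94.7, 108.0, n_samples // samples_per_step),
               samples_per_step)                   # piecewise-constant excitation signal

# concentrations, temperatures = simulate_cstr(qc, dt)   # hypothetical plant simulator of (12a)-(12b)

# 70/30 split between training and validation data, as described in the text.
n_train = int(0.7 * n_samples)
qc_train, qc_val = qc[:n_train], qc[n_train:]
```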

The capability of the network to reconstruct the system dynamics is shown in Figure 3, where the concentration and Jacobian values estimated by the neural model are compared to those calculated by integrating equations (12a)-(12b) of the true plant. In this case, the output was obtained by randomly changing the manipulated variable and corrupting the measured inputs to the network with noise, introducing an error of ±1°C in the temperature measurements and ±2 l/min in the coolant flow rate measurement. The results indicate that the neural model reliably represents the system behaviour, at least for the considered process conditions; in fact, the line representing the estimated variables completely masks the true values. When the plant model is not available, the ability of the network to reconstruct the system gain can be verified experimentally by performing an appropriate set of step tests.

The obtained dynamic neural model was then applied to construct the gain-scheduling PI controller described in Section 3. As a result, the only parameter to be tuned is the inverse of the desired response time, $K_1$, because the controller gain and the integral time are derived from the model (the latter is not adjusted in time). In order to guarantee the robustness of the control system, the inverse of $K_1$ is set 2.5 times smaller than the maximum characteristic time of the process, following the recommended choice for model-based controllers of keeping the desired closed-loop time constant of a first-order process greater than $0.2\tau$ [17]. The good fit shown in Figure 3 between the Jacobian calculated by the neural model and the true one guarantees that the gain will be properly adapted on-line.
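In other words, denoting by $\tau_{\max}$ the largest characteristic time of the process (notation introduced here only for illustration), the tuning rule used in this work reads

$$\frac{1}{K_1} = \frac{\tau_{\max}}{2.5} = 0.4\,\tau_{\max} > 0.2\,\tau_{\max},$$

so the recommended lower bound of [17] on the desired closed-loop time constant is satisfied with some margin.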

The adaptive control technique was tested in terms of set-point tracking, and the results were compared with those of a conventional PI controller, as reported in Ge et al. [14]. The set-point tracking results are shown in Figure 4, where the output variable, $C_A$, and the loads on the manipulated variable, $q_c$, are reported for the adaptive controller and the conventional PI. The results show that the adaptive controller exhibits good set-point tracking capability (short response time) without requiring excessive loads on the manipulated variable. Indeed, the curves representing the programmed set-point changes are almost completely masked by the ones representing the dynamic behaviour of the CSTR under the control of the gain-scheduled PI. A small overshoot is the price paid for the short response time. The conventional PI, on the other hand, shows a sluggish response with respect to the gain-scheduling controller.

4.2. Case 2

As a second case study, a CSTR in which an exothermic first-order reaction takes place is considered. This benchmark was proposed by Scott and Ray [15] to demonstrate the inadequacy of a standard PI controller for such nonlinear systems and is described in dimensionless form by the following differential equations:

$$\frac{dx_1}{dt} = q\left(x_{1f} - x_1\right) - \mathrm{Da}\, x_1 \exp\!\left(\frac{x_2}{1 + x_2/\gamma}\right), \quad (13a)$$

$$\frac{dx_2}{dt} = q\left(x_{2f} - x_2\right) + B\,\mathrm{Da}\, x_1 \exp\!\left(\frac{x_2}{1 + x_2/\gamma}\right) - \beta\left(x_2 - x_{2c}\right). \quad (13b)$$

According to Scott and Ray [15], the meanings of the constants and variables that appear in (13a) and (13b), along with the normalized variables used for training the network and for representing the results of the control system, are summarized in Table 2.

The control objective is to maintain both the bulk temperature and the concentration at their set-point values. The manipulated variables are the coolant temperature, $x_{2c}$, and the input feed flow rate, $q$, while the controlled variables are the reactant mole fraction, $x_1$, and the bulk temperature, $x_2$.

The dynamic neural model of the CSTR consists of a net with four neurons in the input layer, two neurons in the hidden layer, and two neurons in the output layer. The four inputs were selected with the aim of giving the net the most representative information about the process status. Indeed, two terms which represent, respectively, the heat flux exchanged through the cooling system and a measure of the heat flux carried out by the convective outlet stream were fed as inputs to the network, along with the two manipulated variables (cf. [13]).

The dynamic neural network was trained using 1000 data points generated with the CSTR dynamic simulator (sampling time equal to 0.1 min), randomly changing the dimensionless manipulated variables. The Levenberg-Marquardt algorithm was used to calculate the weights, with the sum of squared errors as the objective function, and the best model was selected by evaluating the predictions on the validation set. Also in this case, it was verified that the Jacobian of the neural model fits the exact one, which guarantees the robustness of the control system. These results are not reported for the sake of brevity.

As in the previous case, the only controller parameter to be tuned is the inverse of the desired response time, $K_1$, because the controller gain and the integral time are derived from the model. In this case, the inverse of $K_1$ is set 2.5 times smaller than the characteristic time of the process for both controllers [17].

The gain-scheduling PI controller was tested in terms of set-point tracking and disturbance rejection capabilities, performing the same test as in Scott and Ray [15]. The set-point changes and disturbances imposed on the system to test the performance of the proposed methodology are summarized in Table 3.

For the sake of brevity, only the performance of the gain-scheduling control structure at the point of maximum gain is reported in graphical form, while all the other results are reported in Table 4 using the integral squared error (ISE) as performance index:

$$\mathrm{ISE} = \int_0^{t_f} \left[y_{sp}(t) - y(t)\right]^2 dt. \quad (14)$$

The set-point tracking results are shown in Figure 5(b) for the first controlled variable, the reactant mole fraction, and in Figure 5(c) for the second controlled variable, the bulk temperature, while the loads on the manipulated variables are reported in Figure 5(a). The results show that the controller exhibits good set-point tracking capability (short response time) without requiring excessive loads on the manipulated variables. Indeed, the curves representing the programmed set-point changes (dashed curves) are completely masked by the ones representing the dynamic behaviour of the CSTR under the control of the gain-scheduled PI controller. The performance of the controller with respect to disturbance rejection is shown in Figures 5(d)–5(f). Also in this situation, the gain-scheduling controller performs well and exhibits good capability to quickly compensate for the upsets entering the CSTR.

As in the previous case, noisy measured inputs fed to the neural network did not affect the performance of the neural model. These results have already been published in a previous work [13] for a different model-based controller.

5. Conclusion

In this paper, the use of a gain-scheduling controller strategy based on a dynamic neural network model was presented. This technique was proposed to solve problems concerning the control of highly nonlinear systems, without requiring a controller structure unusual to industrial practice. Starting from a GMC approach, the resulting algorithm is a PI controller, with an adaptive gain based on the neural model.

The performance of the developed control technique was tested on two different nonisothermal CSTRs in which an exothermic, first-order reaction takes place. The controller was applied to a wide range of set-point changes because in this way the system is forced to operate at very critical conditions, and the robustness of the control system can be verified. The proposed control scheme was also tested in the presence of disturbances in order to demonstrate the capability of the net to properly adjust the controller gain values. Good results were obtained, indicating that the proposed algorithm properly adjusts the controller gain over a wide range of operating conditions. This means that the neural model is able to describe the essential features of the process and captures the essential nonlinearities through an effective local linear description. This characteristic enhances the robustness of the adaptive controller, because it performs well in the neighbourhood of the nominal operating conditions without incorporating a linear function in the neural network model [7].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

Stefania Tronci kindly acknowledges the Fondazione Banco di Sardegna for the financial support.