Special Issue: Modeling and Control of Complex Dynamic Systems: Applied Mathematical Aspects
Pareto Design of Decoupled Sliding-Mode Controllers for Nonlinear Systems Based on a Multiobjective Genetic Algorithm
This paper presents the Pareto design of decoupled sliding-mode controllers based on a multiobjective genetic algorithm for several fourth-order coupled nonlinear systems. In order to achieve an optimum controller, the decoupled sliding-mode controller is first applied to stabilize the fourth-order coupled nonlinear systems at the equilibrium point. Then, the multiobjective genetic algorithm is applied to search for the optimal coefficients of the decoupled sliding-mode control and improve the performance of the control system. The considered objective functions are the angle and distance errors. Finally, simulation results obtained in the MATLAB software environment are presented for the inverted pendulum, ball and beam, and seesaw systems to demonstrate the effectiveness of this technique.
Many control techniques have been used to investigate the control behavior of nonlinear systems [1–4]. Variable structure control with a sliding mode, commonly known as sliding-mode control, is a nonlinear control strategy that is well known for its guaranteed stability, robustness against parameter variations, fast dynamic response, and simplicity of implementation. Although the sliding-mode control method gives satisfactory performance for second-order systems, its performance for a fourth-order coupled system is questionable. For example, in an inverted pendulum system controlled by sliding-mode control, either the pole or the cart can be successfully controlled, but not both. A remedy for this problem is to decouple the states and apply a suitable control law to stabilize the whole system. Recently, a decoupled sliding-mode control has been proposed to cope with this issue. It provides a simple way to decouple a class of fourth-order nonlinear systems into two second-order subsystems such that each subsystem has a separate control objective expressed in terms of a sliding surface [5, 6]. An important consequence of using decoupled sliding-mode control is that the second subsystem is successfully incorporated into the first one via a two-level decoupling strategy.
It is important to note that for the design of both the sliding-mode control and the decoupled sliding-mode control, the sliding surface parameters must be determined properly. This choice is crucial for the performance of the control system. The problem can be solved using evolutionary optimization techniques such as the genetic algorithm [7–10]. In this paper, a new intelligent decoupled sliding-mode control scheme based on an improved multiobjective genetic algorithm is proposed. Using this optimization algorithm, the important parameters of the decoupled sliding-mode controller are optimized so as to decrease the position and angle errors simultaneously. The results obtained from this study illustrate important optimal design trade-offs among the objective functions that were discovered via the Pareto optimum design approach; such design facts could not be found without using the multiobjective Pareto optimization process. Finally, simulations are presented to show the feasibility and efficiency of the proposed Pareto optimum decoupled sliding-mode control for the nonlinear systems.
2. Sliding-Mode Control
The sliding-mode controller is a powerful robust control strategy for treating model uncertainties and external disturbances. Furthermore, it has been widely applied to the robust control of nonlinear systems [12–18]. In this section, we recall the general concepts of sliding-mode control for a second-order dynamic system. Suppose a nonlinear system is defined by the general state-space equation $\dot{x} = f(x) + B(x)u$, where $x \in \mathbb{R}^n$ is the state vector, $u \in \mathbb{R}^m$ is the input vector, $n$ is the order of the system, and $m$ is the number of inputs. Then, the sliding surface is given by $s = Ce$, where $C$ represents the coefficients or slope of the sliding surface and $e$ is the negative tracking error vector.
Usually, a time-varying sliding surface $s(x,t)$ is simply defined in the state space by the scalar equation $s(x,t) = \left(\frac{d}{dt} + \lambda\right)^{n-1} e$, where $\lambda$ is a strictly positive constant that can also be interpreted as the slope of the sliding surface. For instance, if $n = 2$ (for a second-order system), then $s = \dot{e} + \lambda e$, and hence $s$ is simply a weighted sum of the position and velocity errors from (2.4). The $n$th-order tracking problem is thereby replaced by a first-order stabilization problem in which the scalar $s$ is to be kept at zero by a governing reaching condition. By choosing the Lyapunov function $V = \frac{1}{2}s^2$, the condition $\dot{V} = s\dot{s} \le -\eta|s|$, equivalently $\frac{1}{2}\frac{d}{dt}s^2 \le -\eta|s|$, guarantees that the reaching condition is satisfied; this condition permits a nonswitching region. Here, $\eta$ is a strictly positive constant whose value is usually chosen based on some knowledge of the disturbances or the system dynamics in terms of known amplitudes.
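The second-order case above can be sketched in code; the function names and the specific reaching-condition check are illustrative, not from the paper:

```python
def sliding_surface(e, e_dot, lam):
    """Sliding variable for a second-order system: s = e_dot + lam * e."""
    return e_dot + lam * e

def reaching_condition_satisfied(s, s_dot, eta):
    """Eta-reaching condition: s * s_dot <= -eta * |s|."""
    return s * s_dot <= -eta * abs(s)

# On the surface s = 0, the error obeys e_dot = -lam * e and decays exponentially.
lam, eta = 2.0, 0.1
s = sliding_surface(1.0, -2.0, lam)   # state exactly on the surface -> s = 0.0
```

A controller would monitor the reaching condition while driving the state toward $s = 0$.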
In this control method, the control law is changed according to certain predefined rules that depend on the position of the system's error states with respect to the sliding surfaces; the states are switched between stable and unstable trajectories until they reach the sliding surface.
It can be shown that the sliding condition of (2.6) is always satisfied by the control law $u = u_{eq} - K\,\operatorname{sgn}(s)$, where $u_{eq}$ is the equivalent-control input, obtained by solving $\dot{s} = 0$, and $K$ is a strictly positive design parameter.
The $\operatorname{sgn}$ function produces high-frequency chattering in the control command. Using a proper definition of a thin boundary layer around the sliding surface, the chattering can be eliminated (Figure 1). This is accomplished by defining a boundary layer of thickness $\Phi$ and replacing the function $\operatorname{sgn}$ with the function $\operatorname{sat}$, defined as follows and shown in Figure 2: $\operatorname{sat}(s/\Phi) = s/\Phi$ if $|s/\Phi| \le 1$, and $\operatorname{sat}(s/\Phi) = \operatorname{sgn}(s/\Phi)$ otherwise.
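The boundary-layer saturation just described can be sketched as a small helper (the parameter name phi for the boundary-layer thickness is ours):

```python
def sat(s, phi):
    """Saturation function replacing sgn inside a boundary layer of
    thickness phi: returns s/phi when |s| <= phi, else the sign of s."""
    if abs(s) <= phi:
        return s / phi
    return 1.0 if s > 0 else -1.0

# Inside the layer the control varies linearly with s; outside it behaves like sgn,
# which is what removes the high-frequency chattering near s = 0.
```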
3. Inverted Pendulum System
In this section, the model of an inverted pendulum is recalled. The work deals with the stabilization control of a complicated, nonlinear, and unstable system. A pole, hinged to a cart moving on a track, is balanced upright by moving the cart via a DC motor. The observable state vector includes, respectively, the position of the cart, the angle of the pole with respect to the vertical axis, and their derivatives. The force that moves the cart is produced by the DC motor, whose input is the limited motor supply voltage. The system dynamic model is expressed in terms of the following parameters.
These parameters are the masses of the cart and the pole, the gravity acceleration, the half-length of the pole, and the overall moment of inertia of the cart and pole with respect to the system's centre of mass, together with the rotational friction coefficient of the pole and the horizontal friction coefficient of the cart (Figure 3). This system is a nonlinear fourth-order system that includes two second-order subsystems in canonical form: one for the cart position states and one for the pole angle states.
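As an illustration, the cart-pole equations of motion in their standard form with viscous friction can be coded as a state-derivative function; the symbols and numeric defaults below are illustrative, not taken from the paper:

```python
import numpy as np

def cart_pole_dynamics(state, F, M=1.0, m=0.1, l=0.5, J=0.006,
                       g=9.81, f_cart=0.1, f_pole=0.01):
    """State derivative of the cart-pole: state = [r, theta, r_dot, theta_dot],
    with r the cart position, theta the pole angle from the vertical, and F
    the motor force on the cart. All parameter defaults are hypothetical."""
    r, th, rd, thd = state
    # Mass matrix of the two coupled second-order equations
    A = np.array([[M + m,              m * l * np.cos(th)],
                  [m * l * np.cos(th), J + m * l ** 2]])
    # Right-hand side: applied force, centrifugal term, gravity, viscous friction
    b = np.array([F + m * l * thd ** 2 * np.sin(th) - f_cart * rd,
                  m * g * l * np.sin(th) - f_pole * thd])
    rdd, thdd = np.linalg.solve(A, b)
    return np.array([rd, thd, rdd, thdd])

# The upright position with zero velocity and zero force is an equilibrium;
# a small positive tilt makes the pole fall further (positive theta_ddot).
```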
4. Ball and Beam System
The ball and beam system is one of the most enduringly popular and important laboratory models for teaching control systems engineering. It is very simple to understand as a system, and the control techniques that can stabilize it cover many important classical and modern design methods. The system has one very important property: it is open-loop unstable. The system itself is very simple: a steel ball rolls on top of a long beam. The beam is mounted on the output shaft of an electrical motor, so the beam can be tilted about its center axis by applying an electrical control signal to the motor amplifier. The control job is to automatically regulate the position of the ball on the beam by changing the angle of the beam. This is a difficult control task because the ball does not stay in one place on the beam but moves with an acceleration that is approximately proportional to the tilt of the beam. In control terminology, the system is open-loop unstable because the system output (the ball position) grows without bound for a fixed input (beam angle). Feedback control must be used to stabilize the system and to keep the ball in a desired position on the beam.
Consider the ball and beam system depicted in Figure 4, whose dynamics are described in terms of the following parameters.
These parameters are the mass of the ball, the gravity acceleration, and the moment of inertia of the beam (Figure 4). The observable state vector includes, respectively, the position of the ball, the angle of the beam with respect to the horizontal axis, and their derivatives. This system is a nonlinear fourth-order system that includes two second-order subsystems in canonical form: one for the ball position states and one for the beam angle states.
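A minimal sketch of the ball's translational dynamics, using the widely taught simplified ball-and-beam model (which accounts for the ball's rolling inertia; the symbols and numeric defaults are illustrative, not from the paper):

```python
import numpy as np

def ball_accel(r, theta, theta_dot, m=0.11, R=0.015, Jb=9.9e-6, g=9.81):
    """Ball acceleration along the beam in the simplified model:
    (Jb / R**2 + m) * r_ddot = m * r * theta_dot**2 - m * g * sin(theta),
    where r is the ball position, theta the beam angle, R the ball radius,
    and Jb the ball's moment of inertia (all defaults hypothetical)."""
    return (m * r * theta_dot ** 2 - m * g * np.sin(theta)) / (Jb / R ** 2 + m)

# A positive beam tilt accelerates the ball toward the low end of the beam,
# which is the open-loop instability described in the text.
```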
5. Seesaw System
According to basic physical principles, in the seesaw mechanism, if the vertical line through the centre of gravity of the inverted wedge does not pass perpendicularly through the fulcrum, then the inverted wedge produces a torque and rotates until it reaches a stable state. To balance the inverted wedge, an external force must be applied to produce an appropriate opposing torque. For this reason, the inverted wedge is equipped with a cart to balance the unstable system. The cart can move to produce the appropriate torque against the internal force (Figure 5).
The observable state vector includes, respectively, the cart position, the wedge angle with respect to the vertical axis, and their derivatives. The system dynamic model is expressed in terms of the following parameters.
These parameters are the cart and wedge masses, the gravity acceleration, the height of the wedge, the height of the mass centre, the moment of inertia of the wedge, the rotational friction coefficient of the wedge, and the friction coefficient of the cart. This system is a nonlinear fourth-order system that includes two second-order subsystems in canonical form: one for the cart position states and one for the wedge angle states.
6. Decoupled Sliding-Mode Control
Consider a nonlinear fourth-order coupled system of the class considered above. Such a system includes two second-order subsystems in canonical form, and the sliding-mode control described in Section 2 can control only one of these subsystems. Hence, the basic idea of decoupled sliding-mode control is to design a control law such that a single input simultaneously controls the two coupled subsystems to accomplish the desired performance [5, 6, 19]. To achieve this goal, two sliding surfaces are defined, one for each subsystem:
Here, an intermediate signal (denoted $z$ in what follows) is a proportional value of the secondary sliding variable and has a proper range with respect to the primary state. A comparison of (6.2a) with (2.5) shows the meaning of (6.2a): the control objective in the first subsystem of (6.1) changes from driving its states to zero to driving them to the intermediate value $z$, which itself decays to zero. On the other hand, (6.2b) has the same meaning as (2.5), and its control objective is to drive the states of the second subsystem to zero. Now, let the control law for (6.2a) be a sliding mode with a boundary layer; the design parameters then include the inverse of the width of the boundary layer for the secondary surface, which transfers the secondary sliding variable to the proper range of $z$. Notice that $z$ in (6.5) is a decaying oscillation signal, since its bounding gain is strictly less than one. Moreover, in (6.2a), if $z = 0$, the first subsystem recovers its original control objective.
Now, the control sequence is as follows: when the secondary sliding variable is nonzero, the intermediate signal $z$ in (6.2a) causes (6.3) to generate a control action that reduces that variable; as it decreases, $z$ decreases too. Hence, in the limit the secondary sliding variable vanishes, $z$ vanishes with it, the primary sliding variable is driven to zero as well, and the control objective of both subsystems is achieved.
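A sketch of the two coupled surfaces and the bounded intermediate signal, following a common decoupled-SMC formulation (after Lo and Kuo); the names c1, c2, z_u, and phi_z are illustrative:

```python
def sat(s, phi):
    """Boundary-layer saturation: s/phi inside the layer, else the sign of s."""
    return s / phi if abs(s) <= phi else (1.0 if s > 0.0 else -1.0)

def decoupled_surfaces(x, c1, c2, z_u, phi_z):
    """Two coupled sliding surfaces for a fourth-order system
    x = (x1, x2, x3, x4). The secondary surface s2 drives a bounded
    intermediate signal z (|z| <= z_u), which shifts the target of the
    primary surface s1. All symbol names are illustrative."""
    x1, x2, x3, x4 = x
    s2 = c2 * x3 + x4            # surface of the second subsystem
    z = sat(s2, phi_z) * z_u     # bounded intermediate target
    s1 = c1 * (x1 - z) + x2      # primary surface with shifted objective
    return s1, s2, z

# When the second subsystem is at rest (x3 = x4 = 0), z = 0 and the primary
# surface reduces to the ordinary objective c1 * x1 + x2 = 0.
```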
7. Genetic Algorithm
Optimization in engineering design has always been of great importance and interest, particularly in solving complex real-world design problems. Basically, the optimization process is defined as finding a set of values for a vector of design variables that leads to an optimum value of an objective or cost function. In such single-objective optimization problems, there may or may not exist constraint functions on the design variables; these cases are referred to as constrained and unconstrained optimization problems, respectively. There are many calculus-based methods, including gradient approaches, that search for mostly local optimum solutions, and these are well documented in [20, 21]. However, basic difficulties of the gradient methods, such as their strong dependence on the initial guess, can cause them to find a local optimum rather than a global one. This has led to other heuristic optimization methods, particularly genetic algorithms (GAs), being used extensively during the last decade. Such nature-inspired evolutionary algorithms [22, 23] differ from traditional calculus-based techniques. The main difference is that GAs work with a population of candidate solutions, not a single point in the search space. This helps significantly to avoid being trapped in local optima, as long as the diversity of the population is well preserved.
Controller design is one such complex real-world problem, because the control parameters must be assigned. This parameter tuning is traditionally based on a trial-and-error procedure; however, the problem can also be solved via evolutionary algorithms, for example, genetic algorithms. In the existing literature, several previous works have applied evolutionary algorithms to control design, and overviews of evolutionary algorithms in control engineering are available. In particular, the pole placement procedure for designing a discrete-time regulator and an observer-based feedback control design have been formulated as multiobjective optimization problems and solved via genetic algorithms. Moreover, two decoupled sliding-mode control configurations have been designed for a scale model of an oil platform supply ship, with a genetic algorithm used for the optimization.
A simple genetic algorithm includes selection of individuals from the population based on fitness, together with crossover and mutation applied with certain probabilities to generate new individuals. As the genetic operations proceed, the maximum individual fitness and the average population fitness increase steadily. When applied to a problem, a GA uses a genetics-based mechanism to iteratively generate new solutions from the currently available ones. It then replaces some or all of the existing members of the current solution pool with the newly created members. The motivation behind the approach is that the quality of the solution pool should improve with the passage of time [22, 23].
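The loop described above can be sketched as a minimal real-coded GA; the operators (binary tournament selection, arithmetic crossover, uniform mutation) and all parameter values are generic choices, not the paper's:

```python
import random

def simple_ga(fitness, bounds, pop_size=20, generations=60,
              pc=0.8, pm=0.1, seed=1):
    """Minimal real-coded GA (maximization). bounds is a list of
    (low, high) intervals, one per decision variable."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # binary tournament on fitness
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < pc:      # arithmetic crossover
                w = rng.random()
                child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            else:
                child = list(p1)
            if rng.random() < pm:      # uniform mutation of one gene
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy example: maximize -(x - 3)^2 on [0, 10]; the GA should approach x = 3.
best = simple_ga(lambda ind: -(ind[0] - 3.0) ** 2, [(0.0, 10.0)])
```

In the paper's setting, the individuals would be the six-parameter vectors of the decoupled sliding-mode controller rather than this one-variable toy.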
8. Multiobjective Optimization
In multiobjective optimization problems, which are also called multicriteria or vector optimization problems, there are several objective or cost functions (a vector of objectives) to be optimized (minimized or maximized) simultaneously. These objectives often conflict with each other, so that as one objective function improves, another deteriorates. Therefore, there is no single optimal solution that is best with respect to all the objective functions. Instead, there is a set of optimal solutions, known as Pareto optimal solutions [29–32], which distinguishes the inherent nature of multiobjective optimization problems from that of single-objective ones.
In fact, multiobjective optimization is defined as finding a vector of decision variables that satisfies the constraints and gives acceptable values to all objective functions. Such multiobjective minimization based on the Pareto approach can be conducted using the following definitions.
8.1. Definition of Pareto Dominance
A vector $u = (u_1, \ldots, u_k)$ dominates a vector $v = (v_1, \ldots, v_k)$ (denoted $u \prec v$) if and only if $u_i \le v_i$ for all $i \in \{1, \ldots, k\}$ and $u_j < v_j$ for at least one index $j$.
8.2. Definition of Pareto Optimality
A point $x^* \in \Omega$ (where $\Omega$ is a feasible region in $\mathbb{R}^n$) is said to be Pareto optimal (minimal) if and only if there is no $x \in \Omega$ whose objective vector dominates that of $x^*$; that is, for all $x \in \Omega$, $F(x)$ does not dominate $F(x^*)$.
8.3. Definition of Pareto Set
For a given multiobjective optimization problem, the Pareto set $P^*$ is the set in the decision-variable space consisting of all the Pareto optimal vectors: $P^* = \{x \in \Omega \mid x \text{ is Pareto optimal}\}$.
8.4. Definition of Pareto Front
For a given multiobjective optimization problem, the Pareto front $PF^*$ is the set of objective-function vectors obtained from the decision-variable vectors in the Pareto set $P^*$, that is, $PF^* = \{F(x) \mid x \in P^*\}$. In other words, the Pareto front is the image of the Pareto set under the vector of objective functions.
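The definitions above translate directly into code (minimization convention; the function names are ours):

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every objective and
    strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(pts)   # (3, 4) and (5, 5) are dominated, so removed
```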
In fact, evolutionary algorithms have been widely used for multiobjective optimization because their natural properties, most notably their parallel, population-based search, suit these types of problems. Consequently, most of the difficulties and deficiencies of the classical methods in solving multiobjective optimization problems are eliminated: for example, there is no need for several runs to find all individuals of the Pareto front, nor for quantifying the importance of each objective using numerical weights. The original nondominated sorting procedure given by Goldberg was the catalyst for several different versions of multiobjective optimization algorithms [29, 30]. However, it is very important that the genetic diversity within the population be preserved sufficiently; this central issue in multiobjective optimization has been addressed by many related research works. Provided such genetic diversity, the premature convergence of multiobjective evolutionary algorithms is prevented, and the solutions are directed toward, and distributed along, the true Pareto front. The Pareto-based approach of NSGAII has recently been used in a wide range of engineering multiobjective optimization problems because of its simple yet efficient nondominance ranking procedure for yielding different levels of Pareto frontiers. However, the crowding approach in such state-of-the-art algorithms is not efficient as a diversity-preserving operator. In this paper, a diversity-preserving algorithm called the ε-elimination diversity algorithm is used as a multiobjective tool to search the definition space of the decision variables and return the optimum answers in Pareto form.
In this ε-elimination diversity approach, which replaces the crowding distance assignment of NSGAII, all clones and/or ε-similar individuals, judged by the Euclidean norm of the difference of two vectors, are recognized and simply eliminated from the current population. Therefore, based on a predefined value of ε as the elimination threshold (a fixed value of ε has been used in this paper), all the individuals in a front that lie within this limit of a particular individual are eliminated. It should be noted that such ε-similarity must exist both in the space of objectives and in the space of the associated design variables. This ensures that very different individuals in the space of design variables that merely exhibit ε-similarity in the space of objectives are not eliminated from the population. Evidently, the eliminated clones or ε-similar individuals are replaced in the population with the same number of newly generated random individuals. Meanwhile, this additionally helps to explore the search space of the given multiobjective optimization problem more efficiently.
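The elimination rule can be sketched as follows. As the text requires, an individual is removed only when it is ε-close to an already kept individual in both the objective space and the design-variable space; the function and threshold names are ours:

```python
import math

def eps_eliminate(objs, xs, eps=0.001):
    """Keep one representative of each eps-similar cluster. objs[i] and xs[i]
    are the objective vector and design-variable vector of individual i; an
    individual is dropped only if it is within eps (Euclidean norm) of a kept
    individual in BOTH spaces. Dropped individuals would then be replaced by
    fresh random ones elsewhere in the algorithm."""
    kept = []
    for f, x in zip(objs, xs):
        similar = any(math.dist(f, fk) <= eps and math.dist(x, xk) <= eps
                      for fk, xk in kept)
        if not similar:
            kept.append((f, x))
    return kept
```

Note that two individuals with near-identical objectives but very different design variables both survive, exactly the safeguard described above.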
9. Multiobjective Optimization of Decoupled Sliding Mode Control
As mentioned before, practical engineering applications require solving optimization problems that involve multiple design criteria, also called objective functions. Furthermore, the design criteria may conflict with each other, so that improving one of them deteriorates another. The inherently conflicting behavior of such objective functions leads to a set of optimal solutions, named Pareto solutions. These types of problems can be solved using evolutionary multiobjective optimization techniques. Here, for the multiobjective optimization of the decoupled sliding-mode controller, the design vector collects the selective parameters of the controller: two positive constants, the coefficients of the two sliding surfaces, the inverse of the width of the boundary layer, and the gain that transfers the secondary sliding variable to its proper range. The error of the position and the error of the angle are functions of this vector's components; that is, by selecting various values for the selective parameters, we change the position and angle errors. In this paper, we are concerned with choosing values of the selective parameters that minimize these two functions. Clearly, this is an optimization problem with two objective functions (the position and angle errors) and six decision variables, each restricted to a prescribed search interval. The following parameters of the genetic algorithm are considered.
These are the population size, the crossover and mutation probabilities, and the maximum number of generations, which also serves as the stopping criterion for the algorithm.
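A plausible sketch of how the two objectives could be evaluated from a simulated trajectory, assuming an integral-of-absolute-error form (the paper's exact error measure may differ; the names are ours):

```python
import numpy as np

def tracking_errors(t, position, angle):
    """Integrated absolute position and angle errors over a trajectory sampled
    at times t (a simple rectangle-rule integral; an assumed IAE-style cost)."""
    dt = np.diff(t)
    distance_error = float(np.sum(np.abs(position[:-1]) * dt))
    angle_error = float(np.sum(np.abs(angle[:-1]) * dt))
    return distance_error, angle_error

# Each candidate parameter vector is simulated once, and the resulting pair
# (distance_error, angle_error) is what the multiobjective GA minimizes.
```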
10. Simulation and Results for the Inverted Pendulum System
The simulation for the inverted pendulum system considered here is carried out in MATLAB, starting from fixed initial values of the states. The system parameters and constants used in the simulation are given in Table 1.
When the multiobjective genetic algorithm is applied, a Pareto front of the angle error and distance error is achieved, as demonstrated in Figure 6.
Figure 6 shows the chart resulting from the multiobjective optimization, in which all the presented points are nondominated with respect to each other. Each point in this chart represents a vector of selective parameters; choosing that vector for the decoupled sliding-mode controller yields the objective-function values corresponding to that point of the chart. The design variables and objective functions of the optimum design points A, B, and C are presented in Table 2.
Achieving several solutions, all of which are considered optimal, is a unique property of multiobjective optimization. Faced with a Pareto chart, the designer can easily choose a suitable compromise design point from among the different optimum points. According to the Pareto chart, point C was applied in the simulation, as shown in Figures 7, 8, 9, 10, and 11.
The numerical results show that the control action is bounded between −15 and 10 (N), and the sliding surface reaches zero during the simulation.
11. Simulation and Results for the Ball and Beam System
The initial values of the ball and beam system are fixed at the start of the simulation, and the system parameters and constants used in the simulation are given in Table 3.
When the multiobjective genetic algorithm is applied, a Pareto front of the angle error and distance error is achieved (Figure 12).
Figure 12 shows the Pareto front obtained from the modified NSGAII algorithm in an arbitrary run for the ball and beam system. In this figure, points A and C stand for the best distance error and angle error, respectively. Furthermore, point B could be a trade-off optimum choice when considering minimum values of both angle error and distance error. Table 4 illustrates the design variables and objective functions corresponding to the optimum design points A, B, and C.
The time responses of the ball and beam system related to point B are shown in Figures 13, 14, 15, 16, and 17. These figures demonstrate that the ball and beam system can be stabilized to the equilibrium point.
Furthermore, the simulation shows that the control action is bounded between −1.2 and 4 (N), and the sliding surface reaches zero during the simulation.
12. Simulation and Results for the Seesaw System
In this section, the simulation results for the seesaw system are investigated. The initial values of this system are fixed at the start of the simulation, and the system parameters used in the simulation are given in Table 5.
Figure 18 demonstrates a Pareto front of the two objective functions (angle error and distance error) achieved by the multiobjective genetic algorithm (i.e., the modified NSGAII).
It is clear that all points in Figure 18 are nondominated with respect to each other, and each point in this chart represents a vector of selective parameters for the decoupled sliding-mode controller. Moreover, choosing a better value for one objective function along the Pareto front causes a worse value for the other objective function. Here, point B has been chosen from Figure 18 to design an optimum decoupled sliding-mode controller (Figures 19, 20, 21, 22, and 23). The design variables and objective functions related to the optimum design points A, B, and C are detailed in Table 6.
13. Conclusion
This paper proposes the decoupled sliding-mode technique for stabilizing coupled nonlinear systems, with a multiobjective genetic algorithm employed to optimize two objective functions. The method is a universal design method suitable for various kinds of controlled systems. Using this method involves two steps. The first step is to design the decoupled sliding-mode controller for the nonlinear system. The second step is to apply the multiobjective optimization tool to search the definition space of the decision variables and to return the optimum answers in Pareto form. The simulation results for three different and typical control systems show the good control and robust performance of the proposed strategy.
J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 1991.
M. J. Mahmoodabadi, A. Bagheri, S. Arabani Mostaghim, and M. Bisheban, “Simulation of stability using Java application for Pareto design of controllers based on a new multi-objective particle swarm optimization,” Mathematical and Computer Modelling, vol. 54, no. 5-6, pp. 1584–1607, 2011.
J. C. Lo and Y. H. Kuo, “Decoupled fuzzy sliding-mode control,” IEEE Transactions on Fuzzy Systems, vol. 6, no. 3, pp. 426–435, 1998.
N. H. Moin, A. S. I. Zinober, and P. J. Harley, “Sliding mode control design using genetic algorithms,” in Proceedings of the 1st IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA '95), vol. 414, pp. 238–244, September 1995.
C. C. Wong and S. Y. Chang, “Parameter selection in the sliding mode control design using genetic algorithms,” Tamkang Journal of Science and Engineering, vol. 1, no. 2, pp. 115–122, 1998.
H. K. Khalil, Nonlinear Systems, MacMillan, New York, NY, USA, 1992.
J. Jing and Q. H. Wuan, “Intelligent sliding mode control algorithm for position tracking servo system,” International Journal of Information Technology, vol. 12, no. 7, pp. 57–62, 2006.
M. Dotoli, P. Lino, and B. Turchiano, “A decoupled fuzzy sliding mode approach to swing-up and stabilize an inverted pendulum,” in Proceedings of the 2nd IFAC Conference on Control Systems Design (CSD '03), pp. 113–120, Bratislava, Slovak Republic, 2003.
J. S. Arora, Introduction to Optimum Design, McGraw-Hill, New York, NY, USA, 1989.
S. S. Rao, Engineering Optimization: Theory and Practice, Wiley, New York, NY, USA, 1996.
D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
T. Back, D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics Publishing, New York, NY, USA, Oxford University Press, Oxford, UK, 1997.
C. M. Fonseca and P. J. Fleming, “Multiobjective optimal controller design with genetic algorithms,” in Proceedings of the International Conference on Control, vol. 1, pp. 745–749, March 1994.
G. Sánchez, M. Villasana, and M. Strefezza, “Multi-objective pole placement with evolutionary algorithms,” Lecture Notes in Computer Science, vol. 4403, pp. 417–427, 2007.
N. Srinivas and K. Deb, “Multiobjective optimization using nondominated sorting in genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.
C. M. Fonseca and P. J. Fleming, “Genetic algorithms for multi-objective optimization: formulation, discussion and generalization,” in Proceedings of the 5th International Conference on Genetic Algorithms, S. Forrest, Ed., pp. 416–423, Morgan Kaufmann, San Mateo, Calif, USA, 1993.
C. A. Coello Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
C. A. Coello Coello and R. L. Becerra, “Evolutionary multiobjective optimization using a cultural algorithm,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 6–13, IEEE Service Center, Piscataway, NJ, USA, 2003.