Journal of Applied Mathematics
Volume 2012 (2012), Article ID 639014, 22 pages
http://dx.doi.org/10.1155/2012/639014
Research Article

Pareto Design of Decoupled Sliding-Mode Controllers for Nonlinear Systems Based on a Multiobjective Genetic Algorithm

1Department of Mechanical Engineering, Faculty of Engineering, University of Guilan, P.O. Box 3756, Rasht, Iran
2Intelligent-Based Experimental Mechanics Center of Excellence, School of Mechanical Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran
3Department of Mechanical Engineering, Takestan Branch, Islamic Azad University, Takestan, Iran

Received 11 January 2012; Revised 4 April 2012; Accepted 8 April 2012

Academic Editor: Zhiwei Gao

Copyright © 2012 M. J. Mahmoodabadi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents the Pareto design of decoupled sliding-mode controllers based on a multiobjective genetic algorithm for several fourth-order coupled nonlinear systems. To achieve an optimum controller, the decoupled sliding-mode controller is first applied to stabilize the fourth-order coupled nonlinear systems at the equilibrium point. Then, the multiobjective genetic algorithm is applied to search for the optimal coefficients of the decoupled sliding-mode control in order to improve the performance of the control system. The considered objective functions are the angle and distance errors. Finally, simulation results obtained in the MATLAB software environment are presented for the inverted pendulum, ball and beam, and seesaw systems to demonstrate the effectiveness of this technique.

1. Introduction

Many control techniques have been used to investigate the behavior of nonlinear systems [1–4]. Variable structure control with sliding mode, commonly known as sliding-mode control, is a nonlinear control strategy well known for its guaranteed stability, robustness against parameter variations, fast dynamic response, and simplicity of implementation [1]. Although the sliding-mode control method gives satisfactory performance for second-order systems, its performance for a fourth-order coupled system is questionable. For example, in an inverted pendulum system controlled by sliding-mode control, either the pole or the cart can be successfully controlled, but not both. A remedy to this problem is to decouple the states and apply a suitable control law to stabilize the whole system. Recently, a decoupled sliding-mode control has been proposed to cope with this issue. It provides a simple way to decouple a class of fourth-order nonlinear systems into two second-order subsystems such that each subsystem has a separate control objective expressed in terms of a sliding surface [5, 6]. An important consequence of using the decoupled sliding-mode control is that the second subsystem is successfully incorporated into the first one via a two-level decoupling strategy.

It is very important to note that, for the design of both the sliding-mode control and the decoupled sliding-mode control, the sliding surface parameters must be determined properly. This point is crucial for the performance of the control system. The problem can be solved using evolutionary optimization techniques such as the genetic algorithm [7–10]. In this paper, a new intelligent decoupled sliding-mode control scheme based on an improved multiobjective genetic algorithm is proposed. Using this optimization algorithm, the important parameters of the decoupled sliding-mode controller are optimized so as to decrease the position and angle errors simultaneously. The results obtained from this study illustrate some important optimal design trade-offs among the objective functions, discovered via the Pareto optimum design approach. Such design facts could not be found without the multiobjective Pareto optimization process. Finally, simulations are presented to show the feasibility and efficiency of the proposed Pareto optimum decoupled sliding-mode control for nonlinear systems.

2. Sliding-Mode Control

The sliding-mode controller is a powerful robust control strategy for treating model uncertainties and external disturbances [11]. Furthermore, it has been widely applied to the robust control of nonlinear systems [12–18]. In this section, we recall the general concepts of sliding-mode control for a second-order dynamic system. Suppose a nonlinear system is defined by the general state-space equation

ẋ = f(x, u, t), (2.1)

where x ∈ Rⁿ is the state vector, u ∈ Rᵐ is the input vector, n is the order of the system, and m is the number of inputs. The sliding surface s(e, t) is then given by

s(e, t) = Hᵀe = 0, (2.2)

where H ∈ Rⁿ represents the coefficients or slope of the sliding surface. Here,

e = x − x_d (2.3)

is the negative tracking error vector.

Usually a time-varying sliding surface s(t) is simply defined in the state space Rⁿ by the scalar equation

s(e, t) = (d/dt + λ)^(n−1) e = 0, (2.4)

where λ is a strictly positive constant that can also be interpreted as the slope of the sliding surface. For instance, if n = 2 (a second-order system), then

s = ė + λe, (2.5)

and hence s is simply a weighted sum of the position and velocity errors, from (2.4). The nth-order tracking problem is thus replaced by a first-order stabilization problem in which the scalar s is to be kept at zero by a governing reaching condition. By choosing the Lyapunov function V(x) = (1/2)s², the following condition guarantees that the reaching condition is satisfied:

V̇(x) = s·ṡ < 0. (2.6)

The existence and convergence conditions can be rewritten as

s·ṡ ≤ −η|s|. (2.7)

This equation permits a nonswitching region. Here, η is a strictly positive constant whose value is usually chosen based on some knowledge of the disturbances or system dynamics in terms of known amplitudes.
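A minimal numerical sketch of (2.5) and (2.7) follows; the values chosen for λ, e, ė, and η are arbitrary illustrations, not taken from the paper:

```python
def sliding_surface(e, e_dot, lam):
    """s = e_dot + lambda*e for a second-order system (Eq. 2.5)."""
    return e_dot + lam * e

def reaching_satisfied(s, s_dot, eta):
    """Check the existence/convergence condition s*s_dot <= -eta*|s| (Eq. 2.7)."""
    return s * s_dot <= -eta * abs(s)

# Example: position error 0.5, velocity error -1.5, slope lambda = 2
s = sliding_surface(0.5, -1.5, 2.0)   # weighted sum of position and velocity errors
```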

In this control method, by changing the control law according to certain predefined rules which depend on the position of the error states of the system with respect to sliding surfaces, those states are switched between stable and unstable trajectories until they reach the sliding surface.

It can be shown that the sliding condition (2.6) is always satisfied by

u = u_eq − k·sgn(s), (2.8)

where u_eq is the equivalent control input, obtained from ṡ = 0, and k is a design parameter with k ≥ η.

The sgn function causes high-frequency chattering in the control command. Using a proper definition of a thin boundary layer around the sliding surface, the chattering can be eliminated (Figure 1). This is accomplished by defining a boundary layer of thickness Φ and replacing the sgn function with the sat function, defined as follows and shown in Figure 2:

sat(s/Φ) = sgn(s/Φ) if |s/Φ| ≥ 1, and s/Φ if |s/Φ| < 1. (2.9)
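The saturation function (2.9) takes only a few lines of code; here `phi` stands for the boundary-layer thickness Φ:

```python
def sat(s, phi):
    """sat(s/phi): equal to sgn(s/phi) outside the boundary layer
    (|s| >= phi), and linear, s/phi, inside it (Eq. 2.9)."""
    r = s / phi
    return max(-1.0, min(1.0, r))
```

Inside the layer the control varies smoothly instead of switching, which removes the high-frequency chattering at the cost of a small steady-state band around the surface.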

639014.fig.001
Figure 1: Sliding plane of a smooth controller.
639014.fig.002
Figure 2: Function sat(s/Φ) used to eliminate the chattering phenomenon in the sliding-mode controller.

3. Inverted Pendulum System

In this section, the model of an inverted pendulum is recalled. The work deals with the stabilization control of a complicated, nonlinear, and unstable system. A pole, hinged to a cart moving on a track, is balanced upright by moving the cart via a DC motor. The observable state vector of the system is x = [x1, x2, x3, x4]ᵀ, comprising, respectively, the position of the cart, the angle of the pole with respect to the vertical axis, and their derivatives. The force moving the cart may be expressed as F = αu, where the input u is the limited motor supply voltage. The system dynamic model is as follows:

ẋ1 = x3, ẋ2 = x4, ẋ3 = f1(x) + b1(x)u, ẋ4 = f2(x) + b2(x)u, (3.1)

where

f1(x) = [a(f_r − μx4² sin x2) − l cos x2 (μg sin x2 − Cx4)] / (J + μl sin² x2),
b1(x) = aα / (J + μl sin² x2),
f2(x) = [l cos x2 (f_r − μx4² sin x2) + μg sin x2 − Cx4] / (J + μl sin² x2),
b2(x) = αl cos x2 / (J + μl sin² x2), (3.2)

with

l = LM1 / (2(M2 + M1)), a = (l² + J) / (M2 + M1), μ = (M2 + M1)l. (3.3)

The masses of the cart and pole are, respectively, M2 and M1; g represents the gravitational acceleration; L is the half-length of the pole; and J is the overall moment of inertia of the cart and pole with respect to the system's centre of mass. C is the rotational friction coefficient of the pole, and f_r is the horizontal friction coefficient of the cart (Figure 3). This system is a nonlinear fourth-order system that comprises two second-order subsystems in the canonical form with states [x1, x3]ᵀ and [x2, x4]ᵀ.
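For illustration, the model (3.1)–(3.3) can be coded directly. The parameter values below are hypothetical stand-ins, not the values of Table 1, and the equation signs follow the reconstruction of (3.2) given above:

```python
import math

# Hypothetical parameter values (stand-ins; see Table 1 for the paper's values).
M1, M2, L, J, C, fr, g, alpha = 0.1, 1.0, 0.5, 0.006, 0.005, 0.1, 9.81, 1.0
l = L * M1 / (2 * (M2 + M1))          # (3.3)
a = (l**2 + J) / (M2 + M1)
mu = (M2 + M1) * l

def pendulum_derivatives(x, u):
    """Right-hand side of (3.1): returns [x1', x2', x3', x4']."""
    x1, x2, x3, x4 = x
    d = J + mu * l * math.sin(x2)**2  # common denominator in (3.2)
    f1 = (a * (fr - mu * x4**2 * math.sin(x2))
          - l * math.cos(x2) * (mu * g * math.sin(x2) - C * x4)) / d
    b1 = a * alpha / d
    f2 = (l * math.cos(x2) * (fr - mu * x4**2 * math.sin(x2))
          + mu * g * math.sin(x2) - C * x4) / d
    b2 = alpha * l * math.cos(x2) / d
    return [x3, x4, f1 + b1 * u, f2 + b2 * u]
```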

639014.fig.003
Figure 3: Inverted pendulum system.

4. Ball and Beam System

The ball and beam system is one of the most enduringly popular and important laboratory models for teaching control systems engineering. It is very simple to understand as a system, and the control techniques that can stabilize it cover many important classical and modern design methods. The system has one very important property: it is open-loop unstable. The setup itself is simple: a steel ball rolls on top of a long beam. The beam is mounted on the output shaft of an electric motor, so it can be tilted about its centre axis by applying an electrical control signal to the motor amplifier. The control task is to automatically regulate the position of the ball on the beam by changing the beam angle. This is a difficult control task because the ball does not stay in one place on the beam but moves with an acceleration that is approximately proportional to the tilt of the beam. In control terminology, the system is open-loop unstable because the system output (the ball position) increases without limit for a fixed input (beam angle). Feedback control must be used to stabilize the system and to keep the ball at a desired position on the beam.

Consider the ball and beam system depicted in Figure 4, whose dynamics are described by

ẋ1 = x3, ẋ2 = x4, ẋ3 = f1(x) + b1(x)u, ẋ4 = f2(x) + b2(x)u, (4.1)

where

f1(x) = (5/7)(x1x4² − g sin x2), b1(x) = 0,
f2(x) = −(2mx1x3x4 + mgx1 cos x2) / (mx1² + J), b2(x) = 1 / (mx1² + J). (4.2)

639014.fig.004
Figure 4: Ball and beam system.

The mass of the ball is m, g represents the gravitational acceleration, and J is the moment of inertia of the beam (Figure 4). The observable state vector of the system is x = [x1, x2, x3, x4]ᵀ, comprising, respectively, the position of the ball, the angle of the beam with respect to the horizontal axis, and their derivatives. This system is a nonlinear fourth-order system that comprises two second-order subsystems in the canonical form with states [x1, x3]ᵀ and [x2, x4]ᵀ.
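As with the pendulum, the model (4.1)–(4.2) translates directly into code. The parameter values below are hypothetical stand-ins, not those of Table 3:

```python
import math

# Hypothetical parameter values (stand-ins; see Table 3 for the paper's values).
m, J, g = 0.05, 0.02, 9.81

def ball_beam_derivatives(x, u):
    """Right-hand side of (4.1) with f1, b1, f2, b2 from (4.2)."""
    x1, x2, x3, x4 = x
    f1 = (5.0 / 7.0) * (x1 * x4**2 - g * math.sin(x2))
    b1 = 0.0                          # the input does not act on the ball directly
    f2 = -(2 * m * x1 * x3 * x4 + m * g * x1 * math.cos(x2)) / (m * x1**2 + J)
    b2 = 1.0 / (m * x1**2 + J)
    return [x3, x4, f1 + b1 * u, f2 + b2 * u]
```

Note that b1(x) = 0: the input torque reaches the ball only through the beam angle, which is exactly why the two subsystems are coupled.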

5. Seesaw System

According to basic physical concepts, in the seesaw mechanism, if the vertical line through the centre of gravity of the inverted wedge does not pass perpendicularly through the fulcrum, the inverted wedge produces a torque and rotates until it reaches a stable state. To balance the inverted wedge, an external force must be applied to produce an appropriate opposing torque. For this reason, the inverted wedge is equipped with a cart to balance the unstable system. The cart can move to produce the appropriate torque against the internal force (Figure 5).

639014.fig.005
Figure 5: Seesaw system.

The observable state vector is x = [x1, x2, x3, x4]ᵀ, comprising, respectively, the cart position, the wedge angle with respect to the vertical axis, and their derivatives. The system dynamic model is

ẋ1 = x3, ẋ2 = x4, ẋ3 = f1(x) + b1(x)u, ẋ4 = f2(x) + b2(x)u, (5.1)

where

f1(x) = g sin x2 − (T_c/m)x3, b1(x) = 1/m,
f2(x) = [Mgr2 sin x2 + mg√(x1² + r1²) sin(x2 + α) − f_p x4] / J, b2(x) = r1/J, (5.2)

with α = tan⁻¹(x1/r1).

The cart and wedge masses are, respectively, m and M; g represents the gravitational acceleration; r1 is the height of the wedge; r2 is the height of the mass centre; J is the moment of inertia of the wedge; f_p is the rotational friction coefficient of the wedge; and T_c is the friction coefficient of the cart. This system is a nonlinear fourth-order system that comprises two second-order subsystems in the canonical form with states [x1, x3]ᵀ and [x2, x4]ᵀ.
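The seesaw model (5.1)–(5.2) can be sketched the same way; again the parameter values are hypothetical stand-ins, not those of Table 5:

```python
import math

# Hypothetical parameter values (stand-ins; see Table 5 for the paper's values).
m, M, g = 1.0, 5.0, 9.81
r1, r2, J, fp, Tc = 0.5, 0.3, 1.0, 0.1, 0.05

def seesaw_derivatives(x, u):
    """Right-hand side of (5.1) with f1, b1, f2, b2 from (5.2)."""
    x1, x2, x3, x4 = x
    alpha = math.atan(x1 / r1)                 # alpha = tan^-1(x1/r1)
    f1 = g * math.sin(x2) - (Tc / m) * x3
    b1 = 1.0 / m
    f2 = (M * g * r2 * math.sin(x2)
          + m * g * math.sqrt(x1**2 + r1**2) * math.sin(x2 + alpha)
          - fp * x4) / J
    b2 = r1 / J
    return [x3, x4, f1 + b1 * u, f2 + b2 * u]
```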

6. Decoupled Sliding-Mode Control

Consider the nonlinear fourth-order coupled system expressed as

ẋ1 = x3, ẋ2 = x4, ẋ3 = f1(x) + b1(x)u, ẋ4 = f2(x) + b2(x)u. (6.1)

This system comprises two second-order subsystems in the canonical form with states [x1, x3]ᵀ and [x2, x4]ᵀ, and the sliding-mode control described in Section 2 can only control one of these subsystems. Hence, the basic idea of the decoupled sliding-mode control is to design a control law such that the single input u simultaneously controls the two coupled subsystems to accomplish the desired performance [5, 6, 19]. To achieve this goal, the following sliding surfaces are defined:

s1(x) = λ1(x2 − x2d − z) + (x4 − x4d) = 0, (6.2a)
s2(x) = λ2(x1 − x1d) + (x3 − x3d) = 0. (6.2b)

Here, z is a value proportional to s2, bounded to a proper range with respect to x2. A comparison of (6.2a) with (2.5) shows its meaning: the control objective in the first subsystem of (6.1) changes from x2 = x2d and x4 = x4d to x2 = x2d + z and x4 = x4d. On the other hand, (6.2b) has the same meaning as (2.5), and its control objectives are x1 = x1d and x3 = x3d. Now, let the control law for (6.2a) be a sliding mode with a boundary layer; then

u1 = û1 − G_f1 sat(s1(x)G_s1)/b2(x), G_f1, G_s1 > 0, (6.3)

with

û1 = b2(x)⁻¹(−f2(x) + ẍ2d − λ1x4 + λ1ẋ2d). (6.4)

Furthermore,

z = sat(s2G_s2)G_f2, 0 < G_f2 < 1, (6.5)

where G_s2 represents the inverse of the width of the boundary layer for s2, and G_f2 transfers s2 to the proper range of x2. Notice that in (6.5), z is a decaying oscillation signal, since G_f2 < 1. Moreover, in (6.2a), if s1 = 0, then x2 = x2d + z and x4 = x4d.

The control sequence is then as follows: when s2 ≠ 0, then z ≠ 0 in (6.2a) causes (6.3) to generate a control action that reduces s2; as s2 decreases, z decreases too. Hence, in the limit s2 → 0 with x1 → x1d, then z → 0 with x2 → x2d; so s1 → 0, and the control objective is achieved [19].
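The two-level decoupling logic of (6.2a)–(6.5) can be sketched as follows. This is a simplified illustration assuming all desired values x_id and their derivatives are zero; the plant-dependent quantities f2(x) and b2(x) are passed in by the caller:

```python
def sat(s, width=1.0):
    """Saturation with unit output range (see Eq. 2.9)."""
    return max(-1.0, min(1.0, s / width))

def decoupled_smc(x, f2, b2, Gf1, Gs1, lam1, Gf2, Gs2, lam2):
    """One evaluation of the decoupled sliding-mode control law,
    with all targets x_id and their derivatives taken as zero."""
    x1, x2, x3, x4 = x
    s2 = lam2 * x1 + x3              # (6.2b): surface of the second subsystem
    z = sat(s2 * Gs2) * Gf2          # (6.5): intermediate signal, |z| <= Gf2 < 1
    s1 = lam1 * (x2 - z) + x4        # (6.2a): objective shifted to x2 = z
    u_hat = (-f2 - lam1 * x4) / b2   # (6.4): equivalent control from s1' = 0
    return u_hat - Gf1 * sat(s1 * Gs1) / b2   # (6.3)
```

Any nonzero s2 leaks into s1 through z, so driving s1 to zero forces the controller to work on both subsystems with the single input u.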

7. Genetic Algorithm

Optimization in engineering design has always been of great importance and interest, particularly in solving complex real-world design problems. Basically, the optimization process is defined as finding a set of values for a vector of design variables that leads to an optimum value of an objective or cost function. In such single-objective optimization problems, there may or may not exist constraint functions on the design variables; the problems are, respectively, referred to as constrained or unconstrained. There are many calculus-based methods, including gradient approaches, that search for mostly local optimum solutions, and these are well documented in [20, 21]. However, some basic difficulties of the gradient methods, such as their strong dependence on the initial guess, can cause them to find a local optimum rather than a global one. This has led to other heuristic optimization methods, particularly genetic algorithms (GAs), being used extensively during the last decade. Such nature-inspired evolutionary algorithms [22, 23] differ from traditional calculus-based techniques. The main difference is that GAs work with a population of candidate solutions, not a single point in the search space, which helps significantly to avoid being trapped in local optima [24] as long as the diversity of the population is well preserved.

Controller design is one such complex real-world problem, because the control parameters must be assigned. This parameter tuning is traditionally based on a trial-and-error procedure; however, the problem can be solved via evolutionary algorithms, for example, genetic algorithms. In the existing literature, several previous works have applied evolutionary algorithms to control design. For an overview of evolutionary algorithms in control engineering, see [25]. In particular, the pole-placement procedure to design a discrete-time regulator in [26] and the observer-based feedback control design in [27] are formulated as multiobjective optimization problems and solved via genetic algorithms. Moreover, in [28], two decoupled sliding-mode control configurations are designed for a scale model of an oil platform supply ship, with a genetic algorithm used for the optimization.

A simple genetic algorithm includes selection of individuals from the population based on fitness, together with crossover and mutation applied with certain probabilities to generate new individuals. As the genetic operations proceed, the maximum individual fitness and the average population fitness increase steadily. When applied to a problem, a GA uses a genetics-based mechanism to iteratively generate new solutions from the currently available ones. It then replaces some or all of the existing members of the current solution pool with the newly created members. The motivation behind the approach is that the quality of the solution pool should improve with the passage of time [22, 23].

8. Multiobjective Optimization

In multiobjective optimization problems, also called multicriteria or vector optimization problems, there are several objective or cost functions (a vector of objectives) to be optimized (minimized or maximized) simultaneously. These objectives often conflict with each other, so that as one objective function improves, another deteriorates. Therefore, there is no single optimal solution that is best with respect to all the objective functions. Instead, there is a set of optimal solutions, known as Pareto optimal solutions [29–32], which significantly distinguishes the inherent nature of multiobjective optimization problems from single-objective ones.

In fact, multiobjective optimization has been defined as finding a vector of decision variables satisfying the constraints that gives acceptable values to all objective functions. Such multiobjective minimization based on the Pareto approach can be conducted using the following definitions [33].

8.1. Definition of Pareto Dominance

A vector U = [u1, u2, …, un] dominates a vector V = [v1, v2, …, vn] (denoted by U ≺ V) if and only if ∀i ∈ {1, 2, …, n}, ui ≤ vi, and ∃j ∈ {1, 2, …, n} such that uj < vj.
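This definition transcribes directly into a dominance test (for minimization):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u_i <= v_i for every i, and u_j < v_j for at least one j (Section 8.1)."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))
```

Note that equal vectors do not dominate each other, and neither does either of two vectors that each win on a different objective.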

8.2. Definition of Pareto Optimality

A point X* ∈ Ω (where Ω is the feasible region in Rⁿ) is said to be Pareto optimal (minimal) if and only if there is no X ∈ Ω that dominates X*. Equivalently, for every X ∈ Ω with X ≠ X*, there exists i ∈ {1, 2, …, m} such that f_i(X*) < f_i(X).

8.3. Definition of Pareto Set

For a given multiobjective optimization problem, the Pareto set P* is the set in the decision-variable space consisting of all the Pareto optimal vectors: P* = {X ∈ Ω | there is no X′ ∈ Ω such that F(X′) ≺ F(X)}.

8.4. Definition of Pareto Front

For a given multiobjective optimization problem, the Pareto front PT* is the set of objective-function vectors obtained from the decision-variable vectors in the Pareto set P*, that is, PT* = {F(X) = (f1(X), f2(X), …, fm(X)) | X ∈ P*}. In other words, the Pareto front PT* is the image of P* under the objective-function mapping.
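For a finite sample of objective vectors, the nondominated subset (the sample's Pareto front) can be extracted with a brute-force filter; a minimal sketch:

```python
def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors
    (minimization), i.e., the Pareto front of the sample (Section 8.4)."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

This O(n²) filter is enough for small populations; sorting-based procedures such as the one in NSGAII do the same job more efficiently inside the optimizer loop.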

In fact, evolutionary algorithms have been widely used for multiobjective optimization because of their natural properties suited to these types of problems, mostly their parallel, population-based search approach. Therefore, most of the difficulties and deficiencies of the classical methods in solving multiobjective optimization problems are eliminated: for example, there is no need for several runs to find all individuals of the Pareto front, or for quantifying the importance of each objective using numerical weights. The original nondominated sorting procedure given by Goldberg [22] was the catalyst for several different versions of multiobjective optimization algorithms [29, 30]. However, it is very important that the genetic diversity within the population be sufficiently preserved. This central issue in multiobjective optimization problems has been addressed by many related research works [34]. When such genetic diversity is well provided, the premature convergence of multiobjective evolutionary algorithms is prevented, and the solutions are directed along, and distributed over, the true Pareto front. The Pareto-based approach of NSGAII [33] has recently been used in a wide range of engineering multiobjective optimization problems because of its simple yet efficient nondominance ranking procedure for yielding different levels of Pareto frontiers. However, the crowding approach in such state-of-the-art multiobjective optimization algorithms [35] is not efficient as a diversity-preserving operator [36]. In this paper, a new diversity-preserving algorithm called the ε-elimination diversity algorithm [36] is used as the multiobjective tool that searches the space of decision variables and returns the optimum answers in Pareto form.
In this ε-elimination diversity approach, which replaces the crowding-distance assignment of NSGAII [33], all clones and ε-similar individuals, measured by the Euclidean norm of the difference of two vectors, are recognized and simply eliminated from the current population. Therefore, based on a predefined value of ε as the elimination threshold (ε = 0.01 is used in this paper), all individuals in a front lying within this limit of a particular individual are eliminated. It should be noted that such ε-similarity must exist both in the space of objectives and in the space of the associated design variables. This ensures that very different individuals in the space of design variables that happen to be ε-similar in the space of objectives are not eliminated from the population. Evidently, the clones or ε-similar individuals are replaced in the population by the same number of new, randomly generated individuals. This additionally helps to explore the search space of the given multiobjective optimization problem more efficiently [36].
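A simplified sketch of the ε-elimination step described above; the replacement of the flagged clones by random new individuals is left to the caller:

```python
import math

def eps_eliminate(designs, objectives, eps=0.01):
    """Identify epsilon-similar individuals: an individual is flagged as a
    clone only if it lies within eps (Euclidean norm) of an already-kept
    individual in BOTH the design-variable space and the objective space.
    Returns (kept indices, clone indices); clones are meant to be replaced
    by the same number of randomly generated individuals."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    keep, clones = [], []
    for i in range(len(designs)):
        similar = any(dist(designs[i], designs[j]) < eps
                      and dist(objectives[i], objectives[j]) < eps
                      for j in keep)
        (clones if similar else keep).append(i)
    return keep, clones
```

Requiring similarity in both spaces is the point emphasized in the text: two designs that look alike in objective space but differ in design-variable space are both kept.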

9. Multiobjective Optimization of Decoupled Sliding Mode Control

As mentioned before, practical engineering applications require solving optimization problems that involve multiple design criteria, also called objective functions. Furthermore, these design criteria may conflict with each other, so that improving one of them deteriorates another. The inherently conflicting behavior of such objective functions leads to a set of optimal solutions named Pareto solutions. These types of problems can be solved using evolutionary multiobjective optimization techniques. Here, for the multiobjective optimization of the decoupled sliding-mode controller, [G_f1, G_s1, λ1, G_f2, G_s2, λ2] is the vector of selective parameters of the controller. G_f1 and G_s1 are positive constants; λ1 and λ2 are the coefficients of the sliding surfaces; G_s2 represents the inverse of the width of the boundary layer of s2; and G_f2 transfers s2 to the proper range of x2. The position error and the angle error are functions of this vector's components; that is, by selecting various values for the selective parameters, we change the position and angle errors. In this paper, we are concerned with choosing values of the selective parameters that minimize these two functions. Clearly, this is an optimization problem with two objective functions (the position and angle errors) and six decision variables [G_f1, G_s1, λ1, G_f2, G_s2, λ2]. The admissible regions of the selective parameters are as follows:

G_f1, G_s1, G_s2: positive constants, G_f1, G_s1, G_s2 > 0;
λ1, λ2: coefficients of the sliding surfaces, λ1, λ2 > 0;
G_f2: transfers s2 to a proper range of x2, 0 < G_f2 < 1.

The following parameters of the genetic algorithm are considered.

Population size = 100, chromosome length = 48, number of generations = 300, crossover probability = 0.8, and mutation probability = 0.02. The stopping criterion for the algorithm is the maximum number of generations.
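These settings, together with the variable ranges above, can be collected in a configuration fragment. The open upper bounds are an assumption on our part: the paper states only the sign and interval constraints, without numeric upper limits:

```python
# GA settings from Section 9.
ga_settings = {
    "population_size": 100,
    "chromosome_length": 48,
    "generations": 300,          # also the stopping criterion
    "crossover_probability": 0.8,
    "mutation_probability": 0.02,
}

# Decision vector: [Gf1, Gs1, lam1, Gf2, Gs2, lam2]
# (None = no upper bound stated in the paper)
bounds = {
    "Gf1":  (0.0, None),   # positive constant
    "Gs1":  (0.0, None),   # positive constant
    "lam1": (0.0, None),   # sliding-surface coefficient, > 0
    "Gf2":  (0.0, 1.0),    # 0 < Gf2 < 1
    "Gs2":  (0.0, None),   # inverse boundary-layer width, > 0
    "lam2": (0.0, None),   # sliding-surface coefficient, > 0
}
```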

10. Simulation and Results for the Inverted Pendulum System

The simulation for the inverted pendulum system considered here is carried out in MATLAB. The initial values are as follows:

x1(0) = 0, x2(0) = π/6 rad, x3(0) = 0, x4(0) = 0. (10.1)

The system parameters and constants used in the simulation are given in Table 1.

tab1
Table 1: Inverted pendulum parameters.

When we apply the multiobjective genetic algorithm, we achieve a Pareto front of the angle error and distance error as demonstrated in Figure 6.

639014.fig.006
Figure 6: Pareto front of the angle error and distance error for the inverted pendulum.

Figure 6 shows the chart resulting from the multiobjective optimization, in which all the presented points are nondominated with respect to each other. Each point in this chart represents a vector of selective parameters; choosing it for the decoupled sliding-mode controller yields the objective-function values corresponding to that point of the chart. The design variables and objective functions of the optimum design points A, B, and C are presented in Table 2.

tab2
Table 2: Comparison among points A, B, and C for Figure 6.

Achieving several solutions, all of which are considered optimum, is a unique property of multiobjective optimization. Faced with a Pareto chart, the designer can easily choose a suitable compromise design point from among several different optimum points. According to the Pareto chart, we applied point C for the simulation, as shown in Figures 7, 8, 9, 10, and 11.

639014.fig.007
Figure 7: Simulation results for the pole angle.
639014.fig.008
Figure 8: Simulation results for the cart position.
639014.fig.009
Figure 9: Simulation results for the control action.
639014.fig.0010
Figure 10: Sliding surface 𝑠1(𝑥).
639014.fig.0011
Figure 11: Sliding surface 𝑠2(𝑥).

The simulation results (Figures 7, 8, 9, 10, and 11) show that the pole and the cart can be stabilized to the equilibrium point.

The numerical results show that the control action is bounded between −15 and 10 N, and that sliding surface s2(x) reaches zero during the simulation.

11. Simulation and Results for the Ball and Beam System

The initial values of the ball and beam system are considered in the following form:

x1(0) = 0.1 m, x2(0) = π/3 rad, x3(0) = 0, x4(0) = 0. (11.1)

The system parameters and constants used in the simulation are given in Table 3.

tab3
Table 3: Ball and beam system parameters.

When the multiobjective genetic algorithm is applied, a Pareto front of the angle error and distance error is achieved (Figure 12).

639014.fig.0012
Figure 12: Pareto front of the angle error and distance error for the ball and beam system.

Figure 12 shows the Pareto front obtained from the modified NSGAII algorithm in an arbitrary run for the ball and beam system. In this figure, points A and C stand for the best distance error and angle error, respectively. Furthermore, point B could be a trade-off optimum choice when considering minimum values of both angle error and distance error. Table 4 illustrates the design variables and objective functions corresponding to the optimum design points A, B, and C.

tab4
Table 4: Comparison among points A, B, and C for Figure 12.

The time responses of the ball and beam system related to point B are shown in Figures 13, 14, 15, 16, and 17. These figures demonstrate that the ball and beam system can be stabilized to the equilibrium point.

639014.fig.0013
Figure 13: Simulation results for the beam angle.
639014.fig.0014
Figure 14: Simulation results for the ball position.
639014.fig.0015
Figure 15: Simulation results for the control action.
639014.fig.0016
Figure 16: Sliding surface 𝑠1(𝑥).
639014.fig.0017
Figure 17: Sliding surface 𝑠2(𝑥).

Furthermore, the simulation shows that the control action is bounded between −1.2 and 4 N, and that sliding surface s2(x) reaches zero during the simulation.

12. Simulation and Results for the Seesaw System

In this section, the simulation results for the seesaw system are investigated. The initial values of this system are as follows:

x1(0) = 0.3 m, x2(0) = π/6 rad, x3(0) = 0, x4(0) = 0. (12.1)

The system parameters used in the simulation are given in Table 5.

tab5
Table 5: Seesaw system parameters.

Figure 18 demonstrates the Pareto front of the two objective functions (angle error and distance error) achieved by the multiobjective genetic algorithm (the modified NSGAII).

639014.fig.0018
Figure 18: Pareto front of the angle error and distance error for the seesaw system.

It is clear that all points in Figure 18 are nondominated with respect to each other, and each point in this chart represents a vector of selective parameters for the decoupled sliding-mode controller. Moreover, choosing a better value for one objective function on the Pareto front causes a worse value for the other. Here, point B has been chosen from Figure 18 to design an optimum decoupled sliding-mode controller (Figures 19, 20, 21, 22, and 23). The design variables and objective functions related to the optimum design points A, B, and C are detailed in Table 6.

tab6
Table 6: Comparison among points A, B, and C for Figure 18.
639014.fig.0019
Figure 19: Simulation results for the wedge angle.
639014.fig.0020
Figure 20: Simulation results for the cart position.
639014.fig.0021
Figure 21: Simulation results for the control action.
639014.fig.0022
Figure 22: Sliding surface 𝑠1(𝑥).
639014.fig.0023
Figure 23: Sliding surface 𝑠2(𝑥).

The simulations (Figures 19, 20, 21, 22, and 23) show that the seesaw system is stabilized to the equilibrium point after 3 seconds, and that the control effort is bounded between −5 and 10 N.

13. Conclusion

This paper proposes the decoupled sliding-mode technique for stabilizing coupled nonlinear systems, with a multiobjective genetic algorithm employed to optimize two objective functions. The method is a universal design method, suitable for various kinds of controlled plants. Using the method involves two steps. The first step is to design the decoupled sliding-mode controller for the nonlinear system. The second step is to apply the multiobjective optimization tool to search the space of decision variables and return the optimum answers in Pareto form. The simulation results on three different and typical control systems show the good control and robust performance of the proposed strategy.

References

  1. J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 1991.
  2. Z. Gao and S. X. Ding, “Actuator fault robust estimation and fault-tolerant control for a class of nonlinear descriptor systems,” Automatica, vol. 43, no. 5, pp. 912–920, 2007. View at Publisher · View at Google Scholar · View at Scopus
  3. Z. Gao, X. Shi, and S. X. Ding, “Fuzzy state/disturbance observer design for T-S fuzzy systems with application to sensor fault estimation,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 3, pp. 875–880, 2008. View at Publisher · View at Google Scholar · View at Scopus
  4. M. J. Mahmoodabadi, A. Bagheri, S. Arabani Mostaghim, and M. Bisheban, “Simulation of stability using Java application for Pareto design of controllers based on a new multi-objective particle swarm optimization,” Mathematical and Computer Modelling, vol. 54, no. 5-6, pp. 1584–1607, 2011. View at Publisher · View at Google Scholar · View at Scopus
  5. J. C. Lo and Y. H. Kuo, “Decoupled fuzzy sliding-mode control,” IEEE Transactions on Fuzzy Systems, vol. 6, no. 3, pp. 426–435, 1998. View at Google Scholar · View at Scopus
  6. A. Bagheri and J. J. Moghaddam, “Decoupled adaptive neuro-fuzzy (DANF) sliding mode control system for a Lorenz chaotic problem,” Expert Systems with Applications, vol. 36, no. 3, pp. 6062–6068, 2009. View at Publisher · View at Google Scholar · View at Scopus
  7. N. H. Moin, A. S. I. Zinober, and P. J. Harley, “Sliding mode control design using genetic algorithms,” in Proceedings of the 1st IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA '95), vol. 414, pp. 238–244, September 1995. View at Scopus
  8. C. C. Wong and S. Y. Chang, “Parameter selection in the sliding mode control design using genetic algorithms,” Tamkang Journal of Science and Engineering, vol. 1, no. 2, pp. 115–122, 1998. View at Google Scholar · View at Scopus
  9. P. C. Chen, C. W. Chen, and W. L. Chiang, “GA-based fuzzy sliding mode controller for nonlinear systems,” Mathematical Problems in Engineering, vol. 2008, Article ID 325859, 16 pages, 2008.
  10. J. Javadi-Moghaddam and A. Bagheri, “An adaptive neuro-fuzzy sliding mode based genetic algorithm control system for underwater remotely operated vehicle,” Expert Systems with Applications, vol. 37, no. 1, pp. 647–660, 2010.
  11. H. K. Khalil, Nonlinear Systems, MacMillan, New York, NY, USA, 1992.
  12. N. Yagiz and Y. Hacioglu, “Robust control of a spatial robot using fuzzy sliding modes,” Mathematical and Computer Modelling, vol. 49, no. 1-2, pp. 114–127, 2009.
  13. W. S. Lin and C. S. Chen, “Robust adaptive sliding mode control using fuzzy modelling for a class of uncertain MIMO nonlinear systems,” IEE Proceedings: Control Theory and Applications, vol. 149, no. 3, pp. 193–202, 2002.
  14. J. Jing and Q. H. Wuan, “Intelligent sliding mode control algorithm for position tracking servo system,” International Journal of Information Technology, vol. 12, no. 7, pp. 57–62, 2006.
  15. V. I. Utkin and H. C. Chang, “Sliding mode control on electro-mechanical systems,” Mathematical Problems in Engineering, vol. 8, no. 4-5, pp. 451–473, 2002.
  16. N. F. Al-Muthairi and M. Zribi, “Sliding mode control of a magnetic levitation system,” Mathematical Problems in Engineering, vol. 2004, no. 2, pp. 93–107, 2004.
  17. Z. L. Wan, Y. Y. Hou, T. L. Liao, and J. J. Yan, “Partial finite-time synchronization of switched stochastic Chua's circuits via sliding-mode control,” Mathematical Problems in Engineering, vol. 2011, Article ID 162490, 13 pages, 2011.
  18. C. Pukdeboon, “Optimal sliding mode controllers for attitude stabilization of flexible spacecraft,” Mathematical Problems in Engineering, vol. 2011, Article ID 863092, 20 pages, 2011.
  19. M. Dotoli, P. Lino, and B. Turchiano, “A decoupled fuzzy sliding mode approach to swing-up and stabilize an inverted pendulum,” in Proceedings of the 2nd IFAC Conference on Control Systems Design (CSD '03), pp. 113–120, Bratislava, Slovak Republic, 2003.
  20. J. S. Arora, Introduction to Optimum Design, McGraw-Hill, New York, NY, USA, 1989.
  21. S. S. Rao, Engineering Optimization: Theory and Practice, Wiley, New York, NY, USA, 1996.
  22. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
  23. T. Back, D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics Publishing, New York, NY, USA, Oxford University Press, Oxford, UK, 1997.
  24. G. Renner and A. Ekárt, “Genetic algorithms in computer aided design,” Computer Aided Design, vol. 35, no. 8, pp. 709–726, 2003.
  25. P. J. Fleming and R. C. Purshouse, “Evolutionary algorithms in control systems engineering: a survey,” Control Engineering Practice, vol. 10, no. 11, pp. 1223–1241, 2002.
  26. C. M. Fonseca and P. J. Fleming, “Multiobjective optimal controller design with genetic algorithms,” in Proceedings of the International Conference on Control, vol. 1, pp. 745–749, March 1994.
  27. G. Sánchez, M. Villasana, and M. Strefezza, “Multi-objective pole placement with evolutionary algorithms,” Lecture Notes in Computer Science, vol. 4403, pp. 417–427, 2007.
  28. E. Alfaro-Cid, E. W. McGookin, D. J. Murray-Smith, and T. I. Fossen, “Genetic algorithms optimisation of decoupled Sliding Mode controllers: simulated and real results,” Control Engineering Practice, vol. 13, no. 6, pp. 739–748, 2005.
  29. N. Srinivas and K. Deb, “Multiobjective optimization using nondominated sorting in genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.
  30. C. M. Fonseca and P. J. Fleming, “Genetic algorithms for multi-objective optimization: formulation, discussion and generalization,” in Proceedings of the 5th International Conference on Genetic Algorithms, S. Forrest, Ed., pp. 416–423, Morgan Kaufmann, San Mateo, Calif, USA, 1993.
  31. C. A. Coello and A. D. Christiansen, “Multiobjective optimization of trusses using genetic algorithms,” Computers and Structures, vol. 75, no. 6, pp. 647–660, 2000.
  32. C. A. Coello Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
  33. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  34. A. Toffolo and E. Benini, “Genetic diversity as an objective in multi-objective evolutionary algorithms,” Evolutionary Computation, vol. 11, no. 2, pp. 151–167, 2003.
  35. C. A. Coello Coello and R. L. Becerra, “Evolutionary multiobjective optimization using a cultural algorithm,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 6–13, IEEE Service Center, Piscataway, NJ, USA, 2003.
  36. K. Atashkari, N. Nariman-Zadeh, A. Pilechi, A. Jamali, and X. Yao, “Thermodynamic Pareto optimization of turbojet engines using multi-objective genetic algorithms,” International Journal of Thermal Sciences, vol. 44, no. 11, pp. 1061–1071, 2005.