Abstract

This paper presents the Pareto design of decoupled sliding-mode controllers based on a multiobjective genetic algorithm for several fourth-order coupled nonlinear systems. In order to achieve an optimum controller, the decoupled sliding-mode controller is first applied to stabilize the fourth-order coupled nonlinear systems at the equilibrium point. Then, the multiobjective genetic algorithm is applied to search for the optimal coefficients of the decoupled sliding-mode control so as to improve the performance of the control system. The considered objective functions are the angle and distance errors. Finally, simulation results obtained in the MATLAB software environment are presented for the inverted pendulum, ball and beam, and seesaw systems to demonstrate the effectiveness of this technique.

1. Introduction

There are many control techniques that have been used to investigate the control behavior of nonlinear systems [1–4]. Variable structure control with a sliding mode, commonly known as sliding-mode control, is a nonlinear control strategy that is well known for its guaranteed stability, robustness against parameter variations, fast dynamic response, and simplicity of implementation [1]. Although the sliding-mode control method gives a satisfactory performance for second-order systems, its performance for a fourth-order coupled system is questionable. For example, in an inverted pendulum system controlled by sliding-mode control, either the pole or the cart can be successfully controlled, but not both. A remedy to this problem is to decouple the states and apply a suitable control law to stabilize the whole system. Recently, a decoupled sliding-mode control has been proposed to cope with this issue. It provides a simple way to decouple a class of fourth-order nonlinear systems into two second-order subsystems such that each subsystem has a separate control objective expressed in terms of a sliding surface [5, 6]. An important consequence of using the decoupled sliding-mode control is that the second subsystem is successfully incorporated into the first one via a two-level decoupling strategy.

It is very important to note that, for the design of the sliding-mode control and the decoupled sliding-mode control, the sliding-surface parameters should be determined properly. This point is crucial for the performance of the control system. The problem can be solved using evolutionary optimization techniques such as the genetic algorithm [7–10]. In this paper, a new intelligent decoupled sliding-mode control scheme based on an improved multiobjective genetic algorithm is proposed. Using this optimization algorithm, the important parameters of the decoupled sliding-mode controller are optimized so as to decrease the position and angle errors simultaneously. The results obtained from this study illustrate that there are some important optimal design facts among the objective functions which have been discovered via the Pareto optimum design approach. Such design facts could not be found without using the multiobjective Pareto optimization process. Finally, simulations are presented to show the feasibility and efficiency of the proposed Pareto optimum decoupled sliding-mode control for the nonlinear systems.

2. Sliding-Mode Control

The sliding-mode controller is a powerful robust control strategy for treating model uncertainties and external disturbances [11]. Furthermore, it has been widely applied to the robust control of nonlinear systems [12–18]. In this section, we recall the general concepts of sliding-mode control for a second-order dynamic system. Suppose a nonlinear system is defined by the general state-space equation
$$\dot{x} = f(x,u,t), \tag{2.1}$$
where $x \in R^{n}$ is the state vector, $u \in R^{m}$ is the input vector, $n$ is the order of the system, and $m$ is the number of inputs. Then, the sliding surface $s(e,t)$ is given by
$$s(e,t) = H^{T} e = 0, \tag{2.2}$$
where $H \in R^{n}$ represents the coefficients or slope of the sliding surface. Here,
$$e = x - x_{d} \tag{2.3}$$
is the negative tracking error vector.

Usually, a time-varying sliding surface $s(t)$ is simply defined in the state space $R^{n}$ by the scalar equation
$$s(e,t) = \left(\frac{d}{dt} + \lambda\right)^{n-1} e = 0, \tag{2.4}$$
where $\lambda$ is a strictly positive constant that can also be interpreted as the slope of the sliding surface. For instance, if $n = 2$ (a second-order system), then
$$s = \dot{e} + \lambda e, \tag{2.5}$$
and hence $s$ is simply a weighted sum of the position and velocity errors from (2.4). The $n$th-order tracking problem is thus replaced by a first-order stabilization problem in which the scalar $s$ is to be kept at zero by a governing reaching condition. By choosing the Lyapunov function $V(x) = \tfrac{1}{2}s^{2}$, the following condition guarantees that the reaching condition is satisfied:
$$\dot{V}(x) = s\dot{s} < 0. \tag{2.6}$$
The existence and convergence conditions can be rewritten as
$$s\dot{s} \le -\eta\,|s|. \tag{2.7}$$
This equation permits a nonswitching region. Here, $\eta$ is a strictly positive constant, and its value is usually chosen based on some knowledge of the disturbances or system dynamics in terms of known amplitudes.

In this control method, the control law is changed according to certain predefined rules that depend on the position of the error states of the system with respect to the sliding surface; the states are thereby switched between stable and unstable trajectories until they reach the sliding surface.

It can be shown that the sliding condition of (2.6) is always satisfied by
$$u = u_{\mathrm{eq}} - k\,\mathrm{sgn}(s), \tag{2.8}$$
where $u_{\mathrm{eq}}$ is the equivalent-control input, obtained from $\dot{s} = 0$, and $k$ is a design parameter with $k \ge \eta$.

The sgn function produces high-frequency chattering in the control command. The chattering can be eliminated by defining a thin boundary layer around the sliding surface (Figure 1). This is accomplished by introducing a boundary layer of thickness $\Phi$ and replacing the function sgn with the function sat, defined as follows and shown in Figure 2:
$$\mathrm{sat}\!\left(\frac{s}{\Phi}\right) = \begin{cases} \mathrm{sgn}\!\left(\dfrac{s}{\Phi}\right), & \left|\dfrac{s}{\Phi}\right| \ge 1,\\[2mm] \dfrac{s}{\Phi}, & \left|\dfrac{s}{\Phi}\right| < 1. \end{cases} \tag{2.9}$$
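For illustration, the control law (2.8) with the boundary-layer smoothing (2.9) can be written as a short Python function for a generic second-order plant $\ddot{x} = f(x) + b(x)u$; the plant callables and the numerical gains below are placeholders chosen only for illustration, not values taken from this paper.

```python
import numpy as np

def smc_control(x, x_d, f, b, lam=2.0, k=5.0, phi=0.05):
    """Sliding-mode control for a second-order plant x_ddot = f(x) + b(x)*u.

    x   : current state [position, velocity]
    x_d : desired [position, velocity, acceleration]
    lam : slope of the sliding surface (lambda > 0), see (2.5)
    k   : switching gain with k >= eta, see (2.8)
    phi : boundary-layer thickness used to replace sgn by sat, see (2.9)
    """
    e, e_dot = x[0] - x_d[0], x[1] - x_d[1]        # tracking errors, (2.3)
    s = e_dot + lam * e                             # sliding surface, (2.5)
    u_eq = (x_d[2] - lam * e_dot - f(x)) / b(x)     # equivalent control from s_dot = 0
    return u_eq - k * np.clip(s / phi, -1.0, 1.0)   # (2.8) with sat instead of sgn
```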

3. Inverted Pendulum System

In this section, the model of an inverted pendulum is recalled. The task is the stabilization control of a complicated, nonlinear, and unstable system. A pole, hinged to a cart moving on a track, is balanced upwards by moving the cart via a DC motor. The observable state vector of the system is $x = [x_1, x_2, x_3, x_4]^{T}$, comprising, respectively, the position of the cart, the angle of the pole with respect to the vertical axis, and their derivatives. The force moving the cart may be expressed as $F = \alpha u$, where the input $u$ is the limited motor supply voltage. The system dynamic model is
$$\dot{x}_1 = x_3,\quad \dot{x}_2 = x_4,\quad \dot{x}_3 = f_1(x) + b_1(x)u,\quad \dot{x}_4 = f_2(x) + b_2(x)u, \tag{3.1}$$
where
$$f_1(x) = \frac{a\left(-f_r x_3 - \mu x_4^{2}\sin x_2\right) + l\cos x_2\left(\mu g \sin x_2 - C x_4\right)}{J + \mu l \sin^{2} x_2},\qquad b_1(x) = \frac{a\,\alpha}{J + \mu l \sin^{2} x_2},$$
$$f_2(x) = \frac{l\cos x_2\left(-f_r x_3 - \mu x_4^{2}\sin x_2\right) + \mu g \sin x_2 - C x_4}{J + \mu l \sin^{2} x_2},\qquad b_2(x) = \frac{l\,\alpha\cos x_2}{J + \mu l \sin^{2} x_2}, \tag{3.2}$$
with
$$l = \frac{L M_1}{2\left(M_2 + M_1\right)},\qquad a = l^{2} + \frac{J}{M_2 + M_1},\qquad \mu = \left(M_2 + M_1\right) l. \tag{3.3}$$

Masses of the cart and pole are, respectively, 𝑀2 and 𝑀1, 𝑔 represents the gravity acceleration, 𝐿 is the half length of the pole, and 𝐽 is the overall inertia moment of the cart and pole with respect to the system centre of mass. 𝐶 is the rotational friction coefficient of the pole, and 𝑓𝑟 is the horizontal friction coefficient of the cart (Figure 3). This system is a nonlinear fourth-order system that includes two second-order subsystems in the canonical form with states [𝑥1,𝑥3]𝑇 and [𝑥2,𝑥4]𝑇.

4. Ball and Beam System

The ball and beam system is one of the most enduringly popular and important laboratory models for teaching control systems engineering, because it is very simple to understand as a system, and the control techniques that can stabilize it cover many important classical and modern design methods. The system has a very important property: it is open-loop unstable. The system itself is very simple: a steel ball rolls on top of a long beam. The beam is mounted on the output shaft of an electric motor, so the beam can be tilted about its centre axis by applying an electrical control signal to the motor amplifier. The control task is to automatically regulate the position of the ball on the beam by changing the angle of the beam. This is a difficult control task because the ball does not stay in one place on the beam but moves with an acceleration that is approximately proportional to the tilt of the beam. In control terminology, the system is open-loop unstable because the system output (the ball position) increases without limit for a fixed input (beam angle). Feedback control must be used to stabilize the system and to keep the ball at a desired position on the beam.

Consider the ball and beam system depicted in Figure 4, whose dynamics are described by
$$\dot{x}_1 = x_3,\quad \dot{x}_2 = x_4,\quad \dot{x}_3 = f_1(x) + b_1(x)u,\quad \dot{x}_4 = f_2(x) + b_2(x)u, \tag{4.1}$$
where
$$f_1(x) = \frac{5}{7}\left(x_1 x_4^{2} - g\sin x_2\right),\qquad b_1(x) = 0,$$
$$f_2(x) = \frac{-m x_1\left(2 x_3 x_4 + g\cos x_2\right)}{m x_1^{2} + J},\qquad b_2(x) = \frac{1}{m x_1^{2} + J}. \tag{4.2}$$

The mass of the ball is 𝑚, 𝑔 represents the gravity acceleration, and 𝐽 is the inertia moment of the beam (Figure 4). The system observable state vector is 𝑥=[𝑥1,𝑥2,𝑥3,𝑥4]𝑇, including, respectively, the position of the ball, the angle of the beam with respect to the horizontal axis, and their derivatives. This system is a nonlinear fourth-order system that includes two second-order subsystems in the canonical form with states [𝑥1,𝑥3]𝑇 and [𝑥2,𝑥4]𝑇.
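As a concrete reading of (4.1)-(4.2), the sketch below codes the state derivative in Python under the standard rolling solid-ball conventions (the 5/7 factor and the beam torque balance); the numerical values of m, J, and g are placeholders rather than the values of Table 3.

```python
import numpy as np

# Placeholder parameters; Table 3 of the paper lists the values actually used.
m, J, g = 0.05, 0.02, 9.81   # ball mass [kg], beam inertia [kg*m^2], gravity [m/s^2]

def ball_and_beam(x, u):
    """State derivative of the ball and beam system, (4.1)-(4.2).

    x = [ball position, beam angle, ball velocity, beam angular velocity]
    u = torque applied to the beam
    """
    x1, x2, x3, x4 = x
    f1 = (5.0 / 7.0) * (x1 * x4**2 - g * np.sin(x2))   # rolling solid ball
    b1 = 0.0                                            # u does not act on the ball directly
    f2 = -m * x1 * (2.0 * x3 * x4 + g * np.cos(x2)) / (m * x1**2 + J)
    b2 = 1.0 / (m * x1**2 + J)
    return np.array([x3, x4, f1 + b1 * u, f2 + b2 * u])
```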

5. Seesaw System

According to basic physical concepts, in the seesaw mechanism, if the vertical line through the centre of gravity of the inverted wedge does not pass through the fulcrum, the inverted wedge experiences a torque and rotates until it reaches a stable state. To balance the inverted wedge, an external force must be applied to produce an appropriate opposing torque. For this reason, the inverted wedge is equipped with a cart to balance the unstable system. The cart can move so as to produce the appropriate counteracting torque (Figure 5).

The observable state vector is $x = [x_1, x_2, x_3, x_4]^{T}$, comprising, respectively, the cart position, the wedge angle with respect to the vertical axis, and their derivatives. The system dynamic model is
$$\dot{x}_1 = x_3,\quad \dot{x}_2 = x_4,\quad \dot{x}_3 = f_1(x) + b_1(x)u,\quad \dot{x}_4 = f_2(x) + b_2(x)u, \tag{5.1}$$
where
$$f_1(x) = g\sin x_2 - \frac{T_c}{m}x_3,\qquad b_1(x) = \frac{1}{m},$$
$$f_2(x) = \frac{M g r_2 \sin x_2 + m g \sqrt{x_1^{2} + r_1^{2}}\,\sin\!\left(x_2 + \alpha\right) - f_p x_4}{J},\qquad b_2(x) = \frac{r_1}{J}, \tag{5.2}$$
with $\alpha = \tan^{-1}\!\left(x_1 / r_1\right)$.

The cart and wedge masses are, respectively, 𝑚 and 𝑀, 𝑔 represents the gravity acceleration, 𝑟1 is the height of the wedge, 𝑟2 is the height of mass centre, 𝐽 is the inertia moment of the wedge, 𝑓𝑝 is the rotational friction coefficient of the wedge, and 𝑇𝑐 is the friction coefficient of the cart. This system is a nonlinear fourth-order system that includes two second-order subsystems in the canonical form with states [𝑥1,𝑥3]𝑇 and [𝑥2,𝑥4]𝑇.

6. Decoupled Sliding-Mode Control

Consider the nonlinear fourth-order coupled system expressed as
$$\dot{x}_1 = x_3,\quad \dot{x}_2 = x_4,\quad \dot{x}_3 = f_1(x) + b_1(x)u,\quad \dot{x}_4 = f_2(x) + b_2(x)u. \tag{6.1}$$
This system includes two second-order subsystems in the canonical form with states $[x_1, x_3]^{T}$ and $[x_2, x_4]^{T}$, and the sliding-mode control presented in Section 2 can only control one of these subsystems. Hence, the basic idea of the decoupled sliding-mode control is to design a control law such that the single input $u$ simultaneously controls the two coupled subsystems to accomplish the desired performance [5, 6, 19]. To achieve this goal, the following sliding surfaces are defined:
$$s_1(x) = \lambda_1\left(x_2 - x_{2d} - z\right) + \left(x_4 - x_{4d}\right) = 0, \tag{6.2a}$$
$$s_2(x) = \lambda_2\left(x_1 - x_{1d}\right) + \left(x_3 - x_{3d}\right) = 0. \tag{6.2b}$$

Here, $z$ is a value proportional to $s_2$ that has a proper range with respect to $x_2$. A comparison of (6.2a) with (2.5) shows the meaning of (6.2a): the control objective of the first subsystem of (6.1) changes from $x_2 = x_{2d}$ and $x_4 = x_{4d}$ to $x_2 = x_{2d} + z$ and $x_4 = x_{4d}$. On the other hand, (6.2b) has the same meaning as (2.5), and its control objectives are $x_1 = x_{1d}$ and $x_3 = x_{3d}$. Now, let the control law for (6.2a) be a sliding mode with a boundary layer; then
$$u_1 = \hat{u}_1 - \frac{G_{f1}\,\mathrm{sat}\!\left(s_1(x)\,G_{s1}\right)}{b_2(x)},\qquad G_{f1}, G_{s1} > 0, \tag{6.3}$$
with
$$\hat{u}_1 = b_2^{-1}(x)\left(-f_2(x) + \ddot{x}_{2d} - \lambda_1 x_4 + \lambda_1 \dot{x}_{2d}\right). \tag{6.4}$$
Also,
$$z = \mathrm{sat}\!\left(s_2\,G_{s2}\right) G_{f2},\qquad 0 < G_{f2} < 1, \tag{6.5}$$
where $G_{s2}$ represents the inverse of the width of the boundary layer for $s_2$, and $G_{f2}$ transfers $s_2$ to the proper range of $x_2$. Notice that in (6.5) $z$ is a decaying oscillation signal since $G_{f2} < 1$. Moreover, in (6.2a), if $s_1 = 0$, then $x_2 = x_{2d} + z$ and $x_4 = x_{4d}$.

Now, the control sequence is as follows: when $s_2 \neq 0$, then $z \neq 0$ in (6.2a), which causes (6.3) to generate a control action that reduces $s_2$; as $s_2$ decreases, $z$ decreases too. Hence, in the limit $s_2 \to 0$ with $x_1 \to x_{1d}$, then $z \to 0$ with $x_2 \to x_{2d}$; therefore $s_1 \to 0$, and the control objective is achieved [19].
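Putting (6.2a)-(6.5) together, a minimal Python sketch of the two-level decoupled sliding-mode law for the regulation case (all desired states equal to zero) is given below; the plant callables f2 and b2 and the numerical gains are placeholders standing in for the optimized parameters discussed in Section 9.

```python
import numpy as np

def dsmc_control(x, f2, b2, lam1=5.0, lam2=1.0,
                 Gf1=10.0, Gs1=20.0, Gf2=0.5, Gs2=2.0):
    """Decoupled sliding-mode control for x = [x1, x2, x3, x4], regulation to zero.

    f2, b2     : callables giving the drift and input gain of the second channel
    lam1, lam2 : sliding-surface slopes
    Gf1, Gs1, Gs2 > 0 and 0 < Gf2 < 1, as required in Section 9
    """
    x1, x2, x3, x4 = x
    s2 = lam2 * x1 + x3                                  # (6.2b) with x_d = 0
    z = Gf2 * np.clip(s2 * Gs2, -1.0, 1.0)               # (6.5): intermediate target for x2
    s1 = lam1 * (x2 - z) + x4                            # (6.2a) with x_d = 0
    u_hat = (-f2(x) - lam1 * x4) / b2(x)                 # (6.4) for the regulation case
    return u_hat - Gf1 * np.clip(s1 * Gs1, -1.0, 1.0) / b2(x)   # (6.3)
```

In a simulation loop, this control would be evaluated at every integration step and fed back into the fourth-order model (6.1).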

7. Genetic Algorithm

Optimization in engineering design has always been of great importance and interest, particularly in solving complex real-world design problems. Basically, the optimization process is defined as finding a set of values for a vector of design variables that leads to an optimum value of an objective or cost function. In such single-objective optimization problems, there may or may not exist constraint functions on the design variables, and the problems are, respectively, referred to as constrained or unconstrained optimization problems. There are many calculus-based methods, including gradient approaches, that search for mostly local optimum solutions, and these are well documented in [20, 21]. However, some basic difficulties of the gradient methods, such as their strong dependence on the initial guess, can cause them to find a local optimum rather than a global one. This has led to other heuristic optimization methods, particularly genetic algorithms (GAs), being used extensively during the last decade. Such nature-inspired evolutionary algorithms [22, 23] differ from traditional calculus-based techniques. The main difference is that GAs work with a population of candidate solutions, not a single point in the search space. This helps significantly to avoid being trapped in local optima [24] as long as the diversity of the population is well preserved.

A complex real-world problem of this kind is controller design, because the control parameters must be assigned. This parameter tuning is traditionally based on a trial-and-error procedure; however, the problem can be solved via evolutionary algorithms, for example, genetic algorithms. In the existing literature, several previous works have considered evolutionary algorithms for control design. For an overview of evolutionary algorithms in control engineering, see [25]. In particular, the pole-placement procedure for designing a discrete-time regulator in [26] and the observer-based feedback control design in [27] are formulated as multiobjective optimization problems and solved via genetic algorithms. Moreover, in [28], two decoupled sliding-mode control configurations are designed for a scale model of an oil platform supply ship, while a genetic algorithm is used for the optimization.

A simple genetic algorithm includes the selection of individuals from the population based on their fitness, together with crossover and mutation applied with some probabilities to generate new individuals. As the genetic operations proceed, the maximum individual fitness and the average population fitness increase steadily. When applied to a problem, a GA uses a genetics-based mechanism to iteratively generate new solutions from the currently available solutions. It then replaces some or all of the existing members of the current solution pool with the newly created members. The motivation behind the approach is that the quality of the solution pool should improve with the passage of time [22, 23].
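To make this mechanism concrete, a minimal single-objective GA of the kind just described (fitness-based selection, single-point crossover, and mutation) could be sketched as follows; the sphere-function fitness and all numerical settings here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(ind):
    # Illustrative fitness: maximize the negative sphere function.
    return -np.sum(ind**2)

def simple_ga(pop_size=50, n_genes=6, generations=100, pc=0.8, pm=0.02):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_genes))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection based on fitness.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Single-point crossover with probability pc.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, n_genes)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # Gaussian mutation with per-gene probability pm.
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0.0, 0.1, size=mask.sum())
        pop = children
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = simple_ga()
```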

8. Multiobjective Optimization

In multiobjective optimization problems, which are also called multicriteria or vector optimization problems, there are several objective or cost functions (a vector of objectives) to be optimized (minimized or maximized) simultaneously. These objectives often conflict with each other, so that as one objective function improves, another deteriorates. Therefore, there is no single optimal solution that is best with respect to all the objective functions. Instead, there is a set of optimal solutions, well known as Pareto optimal solutions [29–32], which distinguishes significantly the inherent nature of single-objective and multiobjective optimization problems.

In fact, multiobjective optimization has been defined as finding a vector of decision variables satisfying constraints that gives acceptable values for all objective functions. Such multiobjective minimization based on the Pareto approach can be conducted using the following definitions [33].

8.1. Definition of Pareto Dominance

A vector $U = [u_1, u_2, \ldots, u_n]$ dominates a vector $V = [v_1, v_2, \ldots, v_n]$ (denoted by $U \prec V$) if and only if $\forall i \in \{1, 2, \ldots, n\},\ u_i \le v_i\ \wedge\ \exists j \in \{1, 2, \ldots, n\}: u_j < v_j$.
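In code, this dominance test (for minimization) translates directly; the following Python helper is only an illustration of the definition.

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u dominates v (minimization): u is no worse than v
    in every objective and strictly better in at least one."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return bool(np.all(u <= v) and np.any(u < v))
```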

8.2. Definition of Pareto Optimality

A point $X^{\ast} \in \Omega$ ($\Omega$ is a feasible region in $R^{n}$) is said to be Pareto optimal (minimal) if and only if there is no $X \in \Omega$ that dominates $X^{\ast}$. Alternatively, this can be restated as follows: $\forall X \in \Omega,\ X \ne X^{\ast},\ \exists i \in \{1, 2, \ldots, m\}: f_i(X^{\ast}) < f_i(X)$.

8.3. Definition of Pareto Set

For a given multiobjective optimization problem, the Pareto set $P^{\ast}$ is the set in the decision-variable space consisting of all the Pareto optimal vectors, $P^{\ast} = \{X \in \Omega \mid \nexists\, X' \in \Omega:\ F(X') \prec F(X)\}$.

8.4. Definition of Pareto Front

For a given multiobjective optimization problem, the Pareto front $P_{T}^{\ast}$ is the set of vectors of objective functions obtained from the vectors of decision variables in the Pareto set $P^{\ast}$, that is, $P_{T}^{\ast} = \{F(X) = (f_1(X), f_2(X), \ldots, f_m(X)) : X \in P^{\ast}\}$. In other words, the Pareto front $P_{T}^{\ast}$ is the set of vectors of objective functions mapped from $P^{\ast}$.
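For a finite set of candidate designs (such as a GA population), the Pareto set and front of Definitions 8.3 and 8.4 can be extracted by pairwise dominance comparison; the following sketch (quadratic in the population size) is adequate for GA-sized populations and is given only as an illustration.

```python
import numpy as np

def pareto_set_and_front(X, F):
    """X: (N, n) decision vectors; F: (N, m) objective vectors (minimization).
    Returns the nondominated decision vectors and their objective vectors."""
    X, F = np.asarray(X, dtype=float), np.asarray(F, dtype=float)
    keep = []
    for i in range(len(F)):
        # i is kept unless some j is no worse in every objective and better in one.
        dominated = any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                        for j in range(len(F)) if j != i)
        if not dominated:
            keep.append(i)
    return X[keep], F[keep]
```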

In fact, evolutionary algorithms have been widely used for multiobjective optimization because their natural properties are suited to these types of problems. This is mostly because of their parallel, population-based search approach. Therefore, most of the difficulties and deficiencies of the classical methods in solving multiobjective optimization problems are eliminated. For example, there is no need for several runs to find all individuals of the Pareto front or for quantifying the importance of each objective using numerical weights. In this way, the original nondominated sorting procedure given by Goldberg [22] was the catalyst for several different versions of multiobjective optimization algorithms [29, 30]. However, it is very important that the genetic diversity within the population be preserved sufficiently. This main issue in multiobjective optimization problems has been addressed by many related research works [34]. Consequently, the premature convergence of multiobjective evolutionary algorithms is prevented, and the solutions are directed toward and distributed along the true Pareto front if such genetic diversity is well provided.

The Pareto-based approach of NSGA-II [33] has recently been used in a wide range of engineering multiobjective optimization problems because of its simple yet efficient nondominance ranking procedure for yielding different levels of Pareto frontiers. However, the crowding approach in such state-of-the-art multiobjective evolutionary algorithms [35] is not efficient as a diversity-preserving operator [36]. In this paper, a diversity-preserving mechanism called the ε-elimination diversity algorithm [36] is used as a multiobjective tool that searches the space of decision variables and returns the optimum answers in Pareto form. In this ε-elimination diversity approach, which replaces the crowding-distance assignment of NSGA-II [33], all the clones and/or ε-similar individuals, identified based on the Euclidean norm of two vectors, are recognized and simply eliminated from the current population. Therefore, based on a predefined value of ε as the elimination threshold (ε = 0.01 has been used in this paper), all the individuals in a front lying within this limit of a particular individual are eliminated. It should be noted that such ε-similarity must exist both in the space of objectives and in the space of the associated design variables. This ensures that very different individuals in the space of design variables that merely have ε-similarity in the space of objectives are not eliminated from the population. Evidently, the clones or ε-similar individuals are replaced in the population by the same number of newly generated random individuals. Meanwhile, this additionally helps to explore the search space of the given multiobjective optimization problem more efficiently [36].
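A rough Python sketch of the ε-elimination idea is given below: individuals that are ε-similar to an already kept individual, in both the objective space and the design-variable space (Euclidean norm below the threshold ε = 0.01), are eliminated and replaced by the same number of randomly generated individuals. This is only an illustrative reading of the operator in [36], not the authors' implementation.

```python
import numpy as np

def eps_elimination(X, F, bounds, eps=0.01, rng=np.random.default_rng()):
    """Replace epsilon-similar individuals with freshly generated random ones.

    X      : (N, n) design variables;  F : (N, m) objective values
    bounds : (n, 2) lower/upper limits used to regenerate eliminated individuals
    eps    : elimination threshold on the Euclidean distance
    """
    X, F = np.asarray(X, dtype=float), np.asarray(F, dtype=float)
    bounds = np.asarray(bounds, dtype=float)
    keep = []
    for i in range(len(X)):
        similar = any(np.linalg.norm(F[i] - F[j]) < eps and
                      np.linalg.norm(X[i] - X[j]) < eps for j in keep)
        if not similar:
            keep.append(i)
    n_new = len(X) - len(keep)
    X_new = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_new, X.shape[1]))
    # Objectives of the regenerated individuals must be evaluated by the caller.
    return np.vstack([X[keep], X_new])
```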

9. Multiobjective Optimization of Decoupled Sliding Mode Control

As mentioned before, practical engineering applications require solving optimization problems involving multiple design criteria, which are also called objective functions. Furthermore, the design criteria may conflict with each other, so that improving one of them deteriorates another. The inherent conflicting behavior of such objective functions leads to a set of optimal solutions named Pareto solutions. These types of problems can be solved using evolutionary multiobjective optimization techniques. Here, for the multiobjective optimization of the decoupled sliding-mode controller, the vector $[G_{f1}, G_{s1}, \lambda_1, G_{f2}, G_{s2}, \lambda_2]$ is the vector of selective parameters of the decoupled sliding-mode controller. $G_{f1}$ and $G_{s1}$ are positive constants, $\lambda_1$ and $\lambda_2$ are the coefficients of the sliding surfaces, $G_{s2}$ represents the inverse of the width of the boundary layer of $s_2$, and $G_{f2}$ transfers $s_2$ to the proper range of $x_2$. The position error and the angle error are functions of this vector's components. This means that by selecting different values for the selective parameters, we change the position and angle errors. In this paper, we are concerned with choosing values for the selective parameters that minimize the above two functions. Clearly, this is an optimization problem with two objective functions (the position and angle errors) and six decision variables $[G_{f1}, G_{s1}, \lambda_1, G_{f2}, G_{s2}, \lambda_2]$. The regions of the selective parameters are as follows: $G_{s2}$, $G_{f1}$, and $G_{s1}$ are positive constants, $G_{s2}, G_{f1}, G_{s1} > 0$; $\lambda_1$ and $\lambda_2$ are coefficients of the sliding surfaces, $\lambda_1, \lambda_2 > 0$; and $G_{f2}$ transfers $s_2$ to a proper range of $x_2$, $0 < G_{f2} < 1$. The following parameters of the genetic algorithm are considered.

Population size = 100, chromosome length = 48, number of generations = 300, crossover probability = 0.8, and mutation probability = 0.02. The stopping criterion for the algorithm is the maximum number of generations.
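To make the optimization setup concrete, the following sketch shows how the two objectives might be computed for one candidate parameter vector by simulating the closed loop; the simulation horizon, the forward-Euler integration, and the integral-of-absolute-error measures are assumptions made for illustration, since the paper does not state them explicitly. Any multiobjective GA (for instance, a modified NSGA-II with the ε-elimination operator of Section 8) can then minimize the returned pair of objectives over the parameter ranges listed above.

```python
import numpy as np

def evaluate(params, plant, controller, x0, dt=0.01, t_final=10.0):
    """Return (distance_error, angle_error) for one candidate parameter vector.

    params     : [Gf1, Gs1, lam1, Gf2, Gs2, lam2]
    plant      : callable (x, u) -> xdot of the fourth-order coupled system
    controller : callable (x, params) -> u, e.g. the decoupled SMC law of Section 6
    x0         : initial state [x1, x2, x3, x4]
    """
    x = np.array(x0, dtype=float)
    dist_err = ang_err = 0.0
    for _ in range(int(t_final / dt)):
        u = controller(x, params)
        x = x + dt * np.asarray(plant(x, u))   # forward-Euler step (assumed integrator)
        dist_err += abs(x[0]) * dt             # accumulated position (distance) error
        ang_err += abs(x[1]) * dt              # accumulated angle error
    return dist_err, ang_err
```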

10. Simulation and Results for the Inverted Pendulum System

The simulation of the inverted pendulum system considered here is carried out in the MATLAB software environment. The initial values are as follows:
$$x_1(0) = 0,\quad x_2(0) = \frac{\pi}{6}\ \mathrm{rad},\quad x_3(0) = 0,\quad x_4(0) = 0. \tag{10.1}$$
The system parameters and constants used in the simulation are given in Table 1.

When we apply the multiobjective genetic algorithm, we achieve a Pareto front of the angle error and distance error as demonstrated in Figure 6.

Figure 6 shows the chart resulting from the multiobjective optimization, in which all the presented points are nondominated with respect to each other. Each point in this chart represents a vector of selective parameters; if that vector is chosen for the decoupled sliding-mode controller, the analysis yields the objective-function values corresponding to that point of the chart. The design variables and objective functions of the optimum design points A, B, and C are presented in Table 2.

Achieving several solutions, all of which are considered optimum, is a unique property of multiobjective optimization. When facing a Pareto chart, the designer can easily choose a suitable trade-off design point among several different optimum points. According to the Pareto chart, point C was applied for the simulation, as shown in Figures 7, 8, 9, 10, and 11.

The simulation results (Figures 7, 8, 9, 10, and 11) show that the pole and the cart can be stabilized to the equilibrium point.

The numerical results show that the control action is bounded between −15 and 10 N, and the sliding surface $s_2(x)$ reaches zero during the simulation.

11. Simulation and Results for the Ball and Beam System

The initial values of the ball and beam system are considered in the following form:
$$x_1(0) = 0.1\ \mathrm{m},\quad x_2(0) = \frac{\pi}{3}\ \mathrm{rad},\quad x_3(0) = 0,\quad x_4(0) = 0. \tag{11.1}$$
The system parameters and constants used in the simulation are given in Table 3.

When the multiobjective genetic algorithm is applied, a Pareto front of the angle error and distance error is achieved (Figure 12).

Figure 12 shows the Pareto front obtained from the modified NSGAII algorithm in an arbitrary run for the ball and beam system. In this figure, points A and C stand for the best distance error and angle error, respectively. Furthermore, point B could be a trade-off optimum choice when considering minimum values of both angle error and distance error. Table 4 illustrates the design variables and objective functions corresponding to the optimum design points A, B, and C.

The time responses of the ball and beam system related to point B are shown in Figures 13, 14, 15, 16, and 17. These figures demonstrate that the ball and beam system can be stabilized to the equilibrium point.

Furthermore, the simulation shows that the control action is bounded between −1.2 and 4 N, and the sliding surface $s_2(x)$ reaches zero during the simulation.

12. Simulation and Results for the Seesaw System

In this section, the simulation results for the seesaw system are investigated. The initial values of this system are described by the following equations:
$$x_1(0) = 0.3\ \mathrm{m},\quad x_2(0) = \frac{\pi}{6}\ \mathrm{rad},\quad x_3(0) = 0,\quad x_4(0) = 0. \tag{12.1}$$
The system parameters used in the simulation are given in Table 5.

Figure 18 demonstrates the Pareto front of the two objective functions (angle error and distance error) achieved by the multiobjective genetic algorithm (i.e., the modified NSGA-II).

It is clear that all points in Figure 18 are nondominated with respect to each other, and each point in this chart represents a vector of selective parameters for the decoupled sliding-mode controller. Moreover, choosing a better value for any objective function on the Pareto front causes a worse value for another objective function. Here, point B has been chosen from Figure 18 to design an optimum decoupled sliding-mode controller (Figures 19, 20, 21, 22, and 23). The design variables and objective functions related to the optimum design points A, B, and C are detailed in Table 6.

The simulations (Figures 19, 20, 21, 22, and 23) show that the seesaw system is stabilized to the equilibrium point after 3 seconds, and the control effort is bounded between −5 and 10 N.

13. Conclusion

This paper proposes the decoupled sliding-mode technique for stabilizing coupled nonlinear systems, while the multiobjective genetic algorithm is employed to optimize two objective functions. The method is a universal design approach and is suitable for various kinds of control objects. Using this method involves two steps. The first step is to design the decoupled sliding-mode controller for the nonlinear system. The second step is to apply the multiobjective optimization tool to search the space of decision variables and to return the optimum answers in Pareto form. The simulation results on three different and typical control systems show the good control and robust performance of the proposed strategy.