Journal of Engineering
Volume 2013 (2013), Article ID 421543, 9 pages
http://dx.doi.org/10.1155/2013/421543
Research Article

Design of a PID Controller for a Suspension System by Back Propagation Neural Network

M. Heidari1 and H. Homaei2

1Mechanical Engineering Group, Aligudarz Branch, Islamic Azad University, Aligudarz, Iran
2Faculty of Engineering, Shahrekord University, Shahrekord, Iran

Received 14 January 2013; Accepted 10 February 2013

Academic Editor: Jie Zhou

Copyright © 2013 M. Heidari and H. Homaei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a neural network for designing a PID controller for a suspension system. The suspension system, modeled as a quarter car, simplifies the problem to a one-dimensional spring-damper system. A back propagation neural network (BPN) is used to determine the gain parameters of a PID controller for an automotive suspension system. The BPN method is found to be accurate and fast. The best results were obtained by the BPN with Levenberg-Marquardt training and 10 neurons in one hidden layer. Training was continued until the mean squared error fell below the target value. The desired error value was achieved, and the BPN was tested with data both used and not used for training. Once trained, the network can estimate the gain parameters of the PID controller under any condition. The inputs of the network are the automotive velocity and the overshoot percentage, settling time, and steady-state error of the suspension system response; the outputs of the net are the gain parameters of the PID controller. The resulting low relative error of the ANN model indicates the usability of the BPN in this area.

1. Introduction

Vehicle suspension serves the basic function of isolating the passengers and the chassis from the roughness of the road to provide a more comfortable ride. In other words, a very important role of the suspension system is ride control. Due to developments in control technology, electronically controlled suspensions have gained more interest. These suspensions have active components controlled by a microprocessor. By using this arrangement, significant improvements in vehicle response can be achieved. Selection of the control method is also important during the design process. The design of vehicle suspension systems is an active research field in which one of the objectives is to improve passenger comfort through the reduction of vibrations caused by internal engine and external road disturbances [1–3]. A design of a mixed suspension system (an actuator in tandem with a conventional passive suspension) for the axletree of a road vehicle, based on a linear model with 4 degrees of freedom (dof), has been realized in [4]. The authors proposed an optimal control law aimed at optimizing the suspension performance while ensuring that the magnitude of the forces generated by the two actuators and the total forces applied between wheel and body never exceeded given bounds. Neural network (NN) controllers working in parallel with McPherson strut-type independent suspensions have been realized in [5]. The major advantages of this control method were its robust structure and the adaptability of these types of controllers to vehicles. Hac [6] applied optimal linear preview control to the active suspension of a quarter-car model. An investigation of the variation of vertical vibrations of vehicles using a radial basis neural network (RBNN) has been presented in [7, 8]. The RBNN was employed to predict the desired values of acceleration amplitude for different road conditions such as concrete, waved stone, block paved, and country roads.
The proposed neural system was also tested for different natural frequencies and damping ratios. A methodology for the design of active/hybrid car suspension systems with the goal of maximizing passenger comfort (minimization of passenger acceleration) was presented by Spentzas and Kanarachos [9]. For this purpose, a neural network (NN) controller was proposed, which corresponds to a Taylor series approximation of the (unknown) nonlinear control function; owing to the numerous local minima, the NN was trained using a semistochastic parameter optimization method. Fuzzy-logic-based control has also been applied to vehicle active suspension systems, with the active suspension velocity and deflection as the two inputs to the fuzzy controller. The capability of fuzzy logic to model real-world situations has resulted in its wider application in diverse fields as well. A fuzzy-logic-based control for vehicle active suspension has been proposed, and its capabilities for the improvement of ride comfort and vehicle maneuverability have been studied through software simulation. A control scheme for an active suspension system using a quarter-car model has been proposed by Kim and Ro [10]. The authors showed that, due to the presence of nonlinearities such as a hardening spring, a quadratic damping force, and the “tire lift-off” phenomenon in a real suspension system, it was very difficult to achieve the desired performance using linear control techniques. To ensure robustness for a wide range of operating conditions, a sliding mode controller was designed and compared with an existing nonlinear adaptive control scheme in the literature. The sliding mode scheme utilizes a variant of a sky-hook damper system as a reference model, which does not require real-time measurement of the road input. A neural scheme was presented for controlling a bus suspension system.
The suspension system, modeled as a quarter bus, was used to simplify the problem to a one-dimensional spring-damper system. The proposed controller was such that the system always operated in a closed loop, which should lead to better performance characteristics [11].

As seen from previous studies, researchers have used NNs for the control of suspension systems, but in this study we use a BP neural network to estimate a PID controller. Moreover, constraints such as overshoot, settling time, and road condition in the NN-based design of a PID controller for a suspension system have not been examined by other authors. To the authors’ best knowledge, no previous studies covering all these issues are available.

In this paper, a BP neural network was investigated to estimate the gain parameters of a PID controller for an automotive suspension system. The paper is organized in the following manner. Section 2 contains a description of the mathematical model and the problem statement. Section 3 recalls the artificial neural network. Section 4 describes the network development. Simulation results and discussion of the problem are given in Section 5, and finally Section 6 gives the conclusions of this work.

2. Mathematical Model

A quarter-car suspension system shown in Figure 1 is used to simulate the control system. The dynamic equations of the suspension system are of the following form [12]:
$$m_s \ddot{x}_s = -k_s (x_s - x_u) - c_s (\dot{x}_s - \dot{x}_u) + f_a,$$
$$m_u \ddot{x}_u = k_s (x_s - x_u) + c_s (\dot{x}_s - \dot{x}_u) - k_u (x_u - x_r) - c_u (\dot{x}_u - \dot{x}_r) - f_a,$$
where $m_s$, $m_u$, $k_s$, $k_u$, $c_s$, and $c_u$ denote the mass, the stiffness, and the damping rate of the sprung and unsprung elements, respectively. Variables $x_s$, $x_u$, and $x_r$ are the displacements of body, wheel, and road, respectively.

Figure 1: A quarter-car model of suspension system.

Also the system is equipped with a hydraulic actuator placed between the sprung and unsprung masses to exert a force $f_a$ between them, with $f_a = A_p P_L$, where $P_L$ and $A_p$ are the fluid pressure in the lower cylinder chamber of the actuator and the piston area, respectively. Several points should be noted:
(1) the above equations are dynamic equations linearized at the equilibrium point, and the vehicle speed is constant;
(2) variables $x_s$, $x_u$, and $x_r$ are measured from the static equilibrium position;
(3) the linearized dynamic behavior of the tire through interaction with the road is justified while the tire is in contact with the road;
(4) the applied force on the tire can be considered as a disturbance force in the system.

Therefore, the road input can be represented as a disturbance force
$$f_d = k_u x_r + c_u \dot{x}_r,$$
where $f_d$ is the applied force on the tire from the road.

Equation (2) can be rewritten accordingly. Assuming that all of the initial conditions are zero, these equations represent the situation when the wheel goes up a bump. The dynamic equations above can be expressed in the form of transfer functions by taking the Laplace transform. Defining the common denominator
$$\Delta(s) = \left(m_s s^2 + c_s s + k_s\right)\left(m_u s^2 + (c_s + c_u) s + k_s + k_u\right) - (c_s s + k_s)^2,$$
when only the disturbance input is considered, $f_a$ is set to zero and the transfer function can be written as
$$\frac{X_s(s)}{X_r(s)} = \frac{(c_s s + k_s)(c_u s + k_u)}{\Delta(s)}.$$
When only the control input $f_a$ is considered, $x_r$ is set to zero and the transfer function can be written as
$$\frac{X_s(s)}{F_a(s)} = \frac{m_u s^2 + c_u s + k_u}{\Delta(s)}.$$
In this context, it is assumed that the car experiences a sinusoidal disturbance from the road, described by
$$x_r(t) = a \sin(\omega t),$$
where $V$ is the velocity of the car in m/sec, $\omega$ is the excitation frequency in rad/sec, and $m$ is the total unsprung and sprung mass.
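As a quick sanity check on the quarter-car transfer functions, the sketch below builds both numerators and the shared denominator as polynomials in $s$ and verifies the static gains: $X_s/F_a \to 1/k_s$ and $X_s/X_r \to 1$ as $s \to 0$. The parameter values are illustrative assumptions, not the values from the paper's Table 1.

```python
import numpy as np

# Illustrative quarter-car parameters (assumed, not the paper's Table 1)
ms, mu = 250.0, 50.0          # sprung / unsprung mass [kg]
ks, ku = 16000.0, 160000.0    # suspension / tire stiffness [N/m]
cs, cu = 1000.0, 0.0          # suspension / tire damping [N s/m]

# Polynomials in s, highest power first
A = [ms, cs, ks]              # m_s s^2 + c_s s + k_s
B = [cs, ks]                  # c_s s + k_s
C = [mu, cs + cu, ks + ku]    # m_u s^2 + (c_s + c_u) s + k_s + k_u
D = [cu, ku]                  # c_u s + k_u

# Common denominator Delta(s) = A*C - B^2
delta = np.polysub(np.polymul(A, C), np.polymul(B, B))

num_ctrl = [mu, cu, ku]       # numerator of X_s/F_a
num_dist = np.polymul(B, D)   # numerator of X_s/X_r

def gain(num, den, s):
    """Evaluate a transfer function num(s)/den(s) at a real s."""
    return np.polyval(num, s) / np.polyval(den, s)

print(gain(num_ctrl, delta, 0.0))  # static gain of X_s/F_a, equals 1/k_s
print(gain(num_dist, delta, 0.0))  # static gain of X_s/X_r, equals 1
```

The static-gain check is a standard way to catch sign or coefficient errors in such derivations: a constant actuator force deflects the body by $1/k_s$ per newton, and a constant road offset moves the body by the same amount.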

Assuming that each amplitude is completely decoupled and controlled independently from the other amplitudes, the control input signal is given by
$$u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt}. \quad (10)$$
In (10), $e(t) = x_d - x_s$ is the control error, where $x_d$ is the desired car amplitude of displacement and $x_s$ is the current measured car amplitude. $K_p$ is called the proportional gain, $K_i$ the integral gain, and $K_d$ the derivative gain.
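The PID law in (10) can be sketched in discrete time as follows. This is an illustrative Python implementation; the gain values shown are placeholders, not the values estimated by the BPN in this paper.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # running approximation of the integral term
        self.prev_err = 0.0   # previous error, for the derivative term

    def update(self, desired, measured):
        err = desired - measured              # e = x_d - x_s
        self.integral += err * self.dt        # rectangular integration
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Placeholder gains; in the paper these would come from the trained BPN
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(desired=0.0, measured=0.02)   # body displaced 2 cm from target
```

In a closed-loop simulation, `update` would be called once per time step with the measured body displacement, and `u` fed back as the actuator force $f_a$.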

3. Artificial Neural Networks

Artificial NNs are nonlinear mapping systems with a structure loosely based on principles observed in biological nervous systems. In greatly simplified terms as can be seen from Figure 2(a), a typical real neuron has a branching dendritic tree that collects signals from many other neurons in a limited area, a cell body that integrates collected signals and generates a response signal (as well as manages metabolic functions), and a long branching axon that distributes the response through contacts with dendritic trees of many other neurons. The response of each neuron is a relatively simple nonlinear function of its inputs and is largely determined by the strengths of the connections from its inputs. In spite of the relative simplicity of the individual units, systems containing many neurons can generate complex and interesting behaviours.

Figure 2: (a) A biological nervous system and (b) an artificial neuron model.

An ANN shown in Figure 3 is very loosely based on these ideas. In the most general terms, an NN consists of a large number of simple processors linked by weighted connections. By analogy, the processing nodes may be called neurons.

Figure 3: A layered feed-forward artificial NN.

Each node output depends only on information that is locally available at the node, either stored internally or arriving via the weighted connections. Each unit receives inputs from many other nodes and transmits its output to other nodes. By itself, a single processing element is not very powerful; it generates a scalar output with a single numerical value, which is a simple nonlinear function of its inputs. The power of the system emerges from the combination of many units in an appropriate way.

A network is specialized to implement different functions by varying the connection topology and the values of the connecting weights. Complex functions can be implemented by connecting units together with appropriate weights. In fact, it has been shown that a sufficiently large network with an appropriate structure and properly chosen weights can approximate with arbitrary accuracy any function satisfying certain broad constraints. Usually, the processing units have responses like (see Figure 2(b))
$$y_j = f\Big(\sum_i w_{ij} x_i\Big),$$
where $y_j$ are the output signals passed from the hidden layer to the output layer, $x_i$ are the inputs with weights $w_{ij}$, and $f$ is a simple nonlinear function such as the sigmoid or logistic function. This unit computes a weighted linear combination of its inputs and passes it through the nonlinearity to produce a scalar output. In general, $f$ is a bounded nondecreasing nonlinear function; the logistic function is a common choice. This model is, of course, a drastically simplified approximation of real nervous systems. The intent is to capture the major characteristics important in the information processing functions of real networks without worrying too much about the physical constraints imposed by biology. The impressive advantages of NNs are their capability to solve highly nonlinear and complex problems and their efficiency in processing imprecise and noisy data. Mainly, there are three types of training for NNs, namely, supervised training, graded training, and self-organization training. Supervised training, which is adopted in this study, is applied as follows:
(1) first, the dataset of the system, including input and output values, is established;
(2) the dataset is normalized according to the algorithm;
(3) then, the algorithm is run;
(4) finally, the desired output values corresponding to the inputs are obtained in the test phase.
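The single-unit response described above (a weighted sum passed through the logistic nonlinearity) can be sketched as follows; the weights and inputs are arbitrary illustrative values.

```python
import math

def logistic(x):
    """Logistic (sigmoid) activation: bounded, nondecreasing, nonlinear."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias=0.0):
    """One processing unit: weighted linear combination, then nonlinearity."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return logistic(net)

# net input = 0.4*0.5 + 0.3*(-1.0) + 0.1*2.0 = 0.1
y = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1])
```

However many inputs the unit receives, its output is always a single scalar in (0, 1); the power of the network comes from layering many such units.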

3.1. Back Propagation Neural Network

Back propagation neural network (BPN), developed by Rumelhart et al. [13], is the most prevalent of the supervised learning models of ANN. BPN uses the gradient steepest descent method to correct the weights of the interconnecting neurons. BPN easily handles interactions of processing elements by adding hidden layers. In the learning process of BPN, the interconnection weights are adjusted using an error convergence technique to obtain a desired output for a given input. In general, the error at the output layer of the BPN model propagates backward to the input layer through the hidden layer to obtain the final desired output. The gradient descent method is utilized to calculate the weights of the network and adjusts the interconnection weights to minimize the output error. The formulas used in this algorithm are as follows.
(1) The hidden layer results are
$$H_j = f\Big(\sum_i w_{ij} x_i\Big),$$
where $x_i$ and $w_{ij}$ are the input data and the weights of the input data, respectively, $f$ is the activation function, and $H_j$ is the result obtained from the hidden layer.
(2) The output layer results are
$$O_k = f\Big(\sum_j v_{jk} H_j\Big),$$
where $v_{jk}$ are the weights of the output layer and $O_k$ is the result obtained from the output layer.
(3) The activation functions used in the layers are logsig, tansig, and linear:
$$\mathrm{logsig}(x) = \frac{1}{1 + e^{-x}}, \qquad \mathrm{tansig}(x) = \frac{2}{1 + e^{-2x}} - 1, \qquad \mathrm{purelin}(x) = x.$$
(4) The errors made at the end of one cycle are
$$\delta_k = O_k (1 - O_k)(T_k - O_k), \qquad \delta_j = H_j (1 - H_j) \sum_k \delta_k v_{jk},$$
where $T_k$ is the result expected from the output layer, $\delta_k$ is the error occurring at the output layer, and $\delta_j$ is the error occurring at the hidden layer.
(5) The weights can be changed using these calculated error values according to (17):
$$v_{jk}^{\mathrm{new}} = v_{jk} + \eta\, \delta_k H_j + \alpha\, \Delta v_{jk}, \qquad w_{ij}^{\mathrm{new}} = w_{ij} + \eta\, \delta_j x_i + \alpha\, \Delta w_{ij},$$
where $v_{jk}$ are the weights of the output layer, $\Delta v_{jk}$ and $\Delta w_{ij}$ are the corrections made to the weights in the previous step, $\eta$ is the learning ratio, and $\alpha$ is the momentum term used to adjust the weights. In this paper, fixed values of $\eta$ and $\alpha$ are used.
(6) The square error occurring in one cycle can be found by (18):
$$E = \frac{1}{2} \sum_k (T_k - O_k)^2.$$
On completion of training the BPN, the relative error (RE) for each datum and the mean relative error (MRE) for all data are calculated according to (19):
$$\mathrm{RE} = \frac{|T - O|}{T} \times 100, \qquad \mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{RE}_i,$$
where $n$ is the number of data [14, 15].
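The steps above can be sketched end to end with a minimal NumPy implementation of a one-hidden-layer BPN (logsig hidden layer, linear output), trained by plain gradient descent without the momentum term. The data set, layer size, and learning rate are illustrative assumptions standing in for the paper's 4-input, 3-output PID data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the paper's 4-input / 3-output PID data set
X = rng.uniform(-1, 1, size=(80, 4))
T = np.tanh(X @ rng.uniform(-1, 1, size=(4, 3)))   # synthetic smooth targets

n_hidden, eta = 10, 0.1                            # 4-10-3 layout, assumed rate
W1 = rng.normal(0.0, 0.5, (4, n_hidden))           # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_hidden, 3))           # hidden -> output weights

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse(W1, W2):
    return np.mean((T - logsig(X @ W1) @ W2) ** 2)

err0 = mse(W1, W2)
for _ in range(500):
    # Forward pass
    H = logsig(X @ W1)                 # hidden layer (logsig)
    O = H @ W2                         # output layer (purelin)
    # Backward pass: output error, then back-propagated hidden error
    delta_o = T - O
    delta_h = (delta_o @ W2.T) * H * (1.0 - H)   # logsig derivative H(1-H)
    # Weight corrections (batch-averaged gradient steps)
    W2 += eta * H.T @ delta_o / len(X)
    W1 += eta * X.T @ delta_h / len(X)
err1 = mse(W1, W2)
```

The mean squared error `err1` after training is lower than the initial `err0`; adding the momentum term $\alpha \Delta w$ from (17) would simply carry a fraction of each previous correction into the next step.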

4. Network Development

4.1. Input and Output Data

The speed of the automotive is varied between 10 and 55 m/sec. The overshoot, settling time, and steady-state error of the system response are assumed to lie between 1% and 10%, 0.3 and 1.5 seconds, and 0% and 2%, respectively. These parameters are the input values of the network. Finally, the outputs of the net are the gain parameters of the PID controller.

4.2. Network Configuration

The nodes at the input and output layers are determined by the number of predictor and predicted variables. In this research, there are 4 nodes in the input layer due to the number of input variables, and 3 nodes in the output layer, for similar reasons. There are no rules for determining the exact number of hidden layers and the number of nodes in them. A large number of hidden-layer nodes will lead to overfitting at intermediate points and can slow down the operation of the NN. On the other hand, accurate output may not be achieved if too few hidden-layer nodes are included. The results show that the best configuration of the network is achieved with one hidden layer. The numbers of nodes in the input, hidden, and output layers are chosen as 4-10-3, respectively. The activation function is a sigmoid function in the input and hidden layers and a linear function in the output layer.

4.3. Preprocessing the Data

For proper working of the neural network, preprocessing of the input and output data is performed. The input values are normalized between −1 and 1, since the activation function in the input layer is a sigmoid function. Normalization is made by the following function:
$$x_n = 2\,\frac{x - x_{\min}}{x_{\max} - x_{\min}} - 1.$$
The output values are normalized between 0 and 1, consistent with the linear function in the output layer [16].
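The normalization map can be sketched as a small helper that linearly rescales a value from its physical range onto a target interval. The range limits below are illustrative, taken from the input ranges quoted in Section 4.1.

```python
def normalize(x, x_min, x_max, lo=-1.0, hi=1.0):
    """Linearly map x from [x_min, x_max] onto [lo, hi]."""
    return lo + (hi - lo) * (x - x_min) / (x_max - x_min)

# Car speed range 10-55 m/sec mapped to [-1, 1]: mid-range lands at 0
v = normalize(32.5, 10.0, 55.0)

# An output (e.g. a PID gain, illustrative range 0-1.4) mapped to [0, 1]
k = normalize(0.7, 0.0, 1.4, lo=0.0, hi=1.0)
```

The inverse map, applied to the network outputs after prediction, recovers the physical gain values from the normalized ones.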

4.4. Training of the Network

Once a network is structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly. During training, the weights are iteratively adjusted to minimize the network performance function. The mean square error, the average squared error between the network output and the target output, is used as the performance function. For the training of the network, the MATLAB Neural Network Toolbox is used [17]. The Levenberg-Marquardt algorithm is chosen to perform the training, with the default values suggested in [18]. In this work, three network-creation functions are used for training: newelm, newff, and newcf. The stopping criteria are that the mean square error should fall below the target value and the number of epochs (iterations) should not exceed 5000. The BPN learning process involves a forward propagation pass calculating the outputs using the inputs, weights, and neuron transfer functions, as well as a back propagation pass correcting the weights using the error between the predicted and target values. The major advantage of the BPN model is its ability to learn from examples without requiring prior knowledge of the domain problem. In addition, it is very effective in dealing with large amounts of data. The structure of the BPN model can easily be constructed according to the domain problem and the availability of data attributes.
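The stopping rule (mean square error below a target, or at most 5000 epochs) can be sketched as follows. Here `target_mse` is an assumed placeholder value, and `step_fn` stands in for one epoch of training returning the current error.

```python
def train(step_fn, target_mse=1e-5, max_epochs=5000):
    """Run training epochs until the MSE target or the epoch cap is hit.

    step_fn(epoch) performs one training epoch and returns the MSE.
    Returns the (epoch, error) pair at which training stopped.
    """
    for epoch in range(1, max_epochs + 1):
        err = step_fn(epoch)
        if err < target_mse:
            return epoch, err          # error criterion met
    return max_epochs, err             # epoch cap reached

# Toy error curve that halves each epoch, just to exercise the rule
epoch, err = train(lambda e: 1.0 * 0.5 ** e)
```

With a real network, `step_fn` would wrap one forward/backward pass over the training set; the toy curve above crosses the 1e-5 threshold at epoch 17.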

5. Numerical Results

Specifications of the suspension system used for simulation are given in Table 1. The control system is simulated subject to a road displacement shown by (8).

Table 1: System specifications.

In this study, the back propagation learning algorithm is used in a feed-forward, single-hidden-layer network. A variable transfer function is used as the activation function for both the hidden layer and the output layer. Several back propagation training algorithms were applied repeatedly until satisfactory training was achieved. The test data values used in the BPN are shown in Table 2. The names of the training algorithms are shown in Table 3. The activation functions used for the hidden layer and the output layer are shown in Table 4.

Table 2: The test data values set used in the BPN.
Table 3: The variable training methods.
Table 4: The variable activation functions in the layers.

The best combination for all methods used in this paper is logsig for the hidden layer and purelin for the output layer. In the hidden layer, the number of neurons is varied from 7 to 29. The data set available for $K_p$, $K_i$, and $K_d$ included 100 data patterns, where $K_p$ is the proportional gain, $K_i$ the integral gain, and $K_d$ the derivative gain of the PID controller. From these, 80 data patterns were used for training the network, and the remaining 20 patterns were randomly selected and used as the test data set. The regression value ($R$) of the output variables for the test data set for various numbers of neurons in the hidden layer is shown in Table 5. It should be noted that these data were completely unknown to the network. The closer this value is to unity, the better the prediction accuracy. The best value obtained is 0.9999, from the LM algorithm with 10 neurons in the hidden layer.
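The regression value $R$ reported in Table 5 is the correlation between the network predictions and the target values. It can be computed as sketched below; the sample numbers are made up for illustration, not taken from the paper's data set.

```python
import math

def regression_r(actual, predicted):
    """Pearson correlation between targets and network outputs.

    Values close to 1 indicate good prediction accuracy, as used
    for the R values in Table 5.
    """
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    va = sum((a - ma) ** 2 for a in actual)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov / math.sqrt(va * vp)

# Made-up targets vs predictions, close but not identical
r = regression_r([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9])
```

A perfect linear fit gives $R = 1$; the slightly perturbed predictions above already yield $R > 0.99$, which is why values such as 0.9999 indicate near-exact agreement.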

Table 5: The regression ($R$) values for PID gain parameters with various neurons in the hidden layer.

In Tables 6, 7, and 8, the results of training the network using nine different training algorithms, with 10 neurons in the hidden layer and logsig-purelin activation functions, are summarized. Each entry in the tables represents 14 different trials, where different random initial weights are used in each trial.

Table 6: The results of the variable training methods in the BPN with newelm function.
Table 7: The results of the variable training methods in the BPN with newcf function.
Table 8: The results of the variable training methods in the BPN with newff function.

The fastest algorithm for this problem is the LM. On average, it is over two times faster than the next fastest algorithm. This is the type of problem for which the LM algorithm is best suited.

In Tables 9, 10, and 11, a comparison between the actual gain parameters of the PID controller and the predictions of the artificial neural network for the LM method is presented. The actual values are obtained by code written in MATLAB. As can be seen, the error with the newelm function is very small.

Table 9: Comparison between actual gain parameters of PID and the BPN model (with newelm function).
Table 10: Comparison between actual gain parameters of PID and the BPN model (with newcf function).
Table 11: Comparison between actual gain parameters of PID and the BPN model (with newff function).

The controlled and uncontrolled responses of the sprung mass of the vehicle are compared in displacement and acceleration, as shown in Figures 4 and 5, respectively. Note that in these figures the dashed and solid lines are the uncontrolled and controlled cases of the sprung mass, respectively. The body displacement of the controlled system is very smooth, with maximum values of 0.0015 m and 0.0043 m, while the uncontrolled system exhibits high oscillations with maximum values of 0.078 m and 0.1578 m. Passenger comfort is provided by controlling the body acceleration, as shown in Figure 5. The controlled system successfully reduces the acceleration to zero after the disturbances pass, while the uncontrolled case shows high accelerations.

Figure 4: The body displacement.
Figure 5: The body acceleration.

6. Conclusions

The present study shows that the BPN is a suitable method for the analysis of a PID controller for a suspension system. The BPN was successfully applied to determine the gain parameters of a PID controller for a suspension system. Data for developing the ANN model were obtained by code written in MATLAB. Results from the ANN model are compared with the results from the classical model. The best regression value for the simulation is 0.9999, with the newelm function. The MRE value of the BPN model is 4.2%. The results show that the newelm function is more accurate than the newff and newcf functions. Also, the Levenberg-Marquardt training is faster than the other training methods. The BPN method also has the advantages of computational speed, low cost, and ease of use by people with little technical experience.

References

  1. H. Du and N. Zhang, “H∞ control of active vehicle suspensions with actuator time delay,” Journal of Sound and Vibration, vol. 301, no. 1-2, pp. 236–252, 2007.
  2. H. R. Karimi, “Optimal vibration control of vehicle engine-body system using Haar functions,” International Journal of Control, Automation and Systems, vol. 4, no. 6, pp. 714–724, 2006.
  3. H. R. Karimi, M. Zapateiro, and N. Luo, “An LMI approach to H∞ control of vehicle engine-body vibration systems with time-varying actuator delay,” Proceedings of the Institution of Mechanical Engineers I, vol. 222, pp. 883–894, 2008.
  4. A. Giua, C. Seatzu, and G. Usai, “A mixed suspension system for a half-car vehicle model,” Dynamics and Control, vol. 10, no. 4, pp. 375–397, 2000.
  5. R. Guclu and K. Gulez, “Neural network control of seat vibrations of a non-linear full vehicle model using PMSM,” Mathematical and Computer Modelling, vol. 47, no. 11-12, pp. 1356–1371, 2008.
  6. A. Hac, “Optimal linear preview control of active vehicle suspension,” Vehicle System Dynamics, vol. 21, no. 3, pp. 167–195, 1992.
  7. S. Yildirim and I. Uzmay, “Statistical analysis of vehicles' vibration due to road roughness using radial basis artificial neural network,” Applied Artificial Intelligence, vol. 15, no. 4, pp. 419–427, 2001.
  8. Ş. Yildirim and I. Uzmay, “Neural network applications to vehicle's vibration analysis,” Mechanism and Machine Theory, vol. 38, no. 1, pp. 27–41, 2003.
  9. K. Spentzas and S. A. Kanarachos, “Design of a non-linear hybrid car suspension system using neural networks,” Mathematics and Computers in Simulation, vol. 60, no. 3–5, pp. 369–378, 2002.
  10. C. Kim and P. I. Ro, “A sliding mode controller for vehicle active suspension systems with non-linearities,” Proceedings of the Institution of Mechanical Engineers D, vol. 212, no. 2, pp. 79–91, 1998.
  11. S. Yildirim, “Vibration control of suspension systems using a proposed neural network,” Journal of Sound and Vibration, vol. 277, no. 4-5, pp. 1059–1069, 2004.
  12. M. M. Fateh and S. S. Alavi, “Impedance control of an active suspension system,” Mechatronics, vol. 19, no. 1, pp. 134–140, 2009.
  13. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
  14. H. Zhang, W. Wu, and M. Yao, “Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks,” Neurocomputing, vol. 89, pp. 141–146, 2012.
  15. H. Shao and G. Zheng, “Convergence analysis of a back-propagation algorithm with adaptive momentum,” Neurocomputing, vol. 74, no. 5, pp. 749–752, 2011.
  16. D. Gao, Y. Kinouchi, K. Ito, and Z. Zhao, “Neural networks for event extraction from time series: a back propagation algorithm approach,” Future Generation Computer Systems, vol. 21, no. 7, pp. 1096–1105, 2005.
  17. H. Demuth and M. Beale, Matlab Neural Networks Toolbox, User’s Guide, The MathWorks, 1992–2001, http://www.mathworks.com.
  18. D. Ballabio and M. Vasighi, “A Matlab toolbox for self organizing maps and supervised neural network learning strategies,” Chemometrics and Intelligent Laboratory Systems, vol. 118, pp. 24–32, 2012.