Research Article | Open Access
Ayush Raizada, Pravin Singru, Vishnuvardhan Krishnakumar, Varun Raj, "Development of an Experimental Model for a Magnetorheological Damper Using Artificial Neural Networks (Levenberg-Marquardt Algorithm)", Advances in Acoustics and Vibration, vol. 2016, Article ID 7027259, 6 pages, 2016. https://doi.org/10.1155/2016/7027259
Development of an Experimental Model for a Magnetorheological Damper Using Artificial Neural Networks (Levenberg-Marquardt Algorithm)
This paper is based on an experimental study of the design and control of vibrations in automotive vehicles. Its objective is to develop a model of the highly nonlinear magnetorheological (MR) damper in order to maximize passenger comfort in an automotive vehicle. The behavior of the MR damper is studied under different loading conditions and current values. The input and output parameters of the system are used as training data to develop a suitable model using Artificial Neural Networks. To generate the training data, a test rig resembling a quarter car model was fabricated, in which a mechanical shaker excites the MR damper externally. With this test rig, the input and output data points are acquired by measuring the acceleration and force of the system at different points with an impedance head and accelerometers. The model is validated by measuring the error for the testing and validation data points. The output of the model is the optimum current to be supplied to the MR damper by a controller, which increases passenger comfort by minimizing the amplitude of the vibrations transmitted to the passenger. Besides its use in cars, bikes, and other automotive vehicles, the model can also be retrained and applied to civil structures to make them earthquake resistant.
Isolation of externally applied forces is the most important function of a suspension system. The suspension system comprises a spring element and a dissipative element which, when placed between the object to be protected and the excitation source, reduce the vibration transmitted to the object.
Suspension systems range from passive to active. The MR damper lies between these two extremes and behaves like a semiactive suspension system. The damping of a passive suspension system is a fixed property of the system and cannot be varied, whereas in an active suspension system the damping can be altered by using an actuator to apply an external force. This external force helps in improving the ride quality. The shortcomings of the active suspension system are that it requires a considerable external power source, is costly, and is difficult to incorporate in the system due to the added mass. A variation of the active suspension system is the semiactive, or adaptive, suspension system. In these systems the damping is varied by controlling the current, thereby changing the viscous properties of the damping elements in the suspension system. A PID neural network controller is one such controller used to develop a model to predict the displacement and velocity behavior of the MR damper. In comparison to active suspension, the power consumption of semiactive suspension systems is considerably less.
Magnetorheological (MR) dampers and electrorheological dampers are the most common examples of semiactive dampers. In the past few years, research on the MR damper has improved its capabilities and narrowed the gap between adaptive suspension systems and truly active suspension systems. Structurally, the MR damper is similar to a simple fluid damper, except that the viscosity of the fluid in the MR damper can be changed by altering the current in the system, which induces a magnetic field. The MR fluid is a non-Newtonian fluid composed of mineral oil with suspended iron nanoparticles. When there is no magnetic flux (zero current), the MR damper behaves like a normal fluid damper in which the iron nanoparticles are randomly oriented, as seen in Figure 1(a).
When a magnetic field is applied to the MR damper, the iron nanoparticles align themselves along the magnetic flux, as seen in Figures 1(b) and 1(c), and form chains which make the fluid semisolid. These chains reinforce the damping by obstructing the movement of the oil, since the magnetic field develops in the direction perpendicular to the flow of the oil. Hence, increasing the magnetic field increases the damping in the system.
The behavior of the MR damper is characterized by its highly nonlinear motion [2–6]. The force versus velocity plot forms a hysteresis loop, depicting the nonlinear character of the MR damper. Many parametric models exist, such as the Bingham model, Bingham Body model, Lee model, Spencer model, Bouc-Wen model, and Gamota-Filisko model, that portray the behavior of the MR damper. These models are difficult to implement throughout the working range of the MR damper due to the hysteresis and jump-type phenomena resulting from the specific properties of the MR fluid, such as viscoplasticity, viscoelastoplasticity, and viscoelasticity. It is also difficult to solve the equations of some of these models due to their numerical and analytical complexity. These drawbacks prevent the models from being implemented as governing equations in real-time controllers. In the low speed range the Bingham model can predict the rigid plastic behavior better than the involution model when the MR damper is loaded with a sinusoidal input. When triangular loading is used on the MR damper, the involution model predicts the behavior better than the Bingham model. The deformation in the hysteresis loop of the force-velocity and force-displacement graphs deviates from the Bouc-Wen model due to the force lag phenomenon in the MR damper. A modification of the Bouc-Wen model, the Bouc-Wen-Baber-Noori model, can to a certain extent describe this pinching hysteretic behavior. The evolutionary variable equation of the modified Bouc-Wen model depends on four parameters, γ, β, A, and n, where γ, β, and A control the shape and size of the hysteresis loop and the parameter n controls the smoothness of the transition from the elastic to the plastic region. A genetic algorithm assisted inverse method and nonlinear least-squares error optimization in MATLAB can be used to identify the parameters and develop a model [11, 12].
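For reference, one common form of the Bouc-Wen evolutionary variable equation found in the literature is the following; the symbol roles match the parameter description above, but the exact form used in any particular study may differ:

```latex
\dot{z} = -\gamma \, |\dot{x}| \, z \, |z|^{\,n-1} - \beta \, \dot{x} \, |z|^{\,n} + A \, \dot{x}
```

Here $z$ is the evolutionary (hysteretic) variable and $\dot{x}$ is the relative velocity across the damper; $\gamma$, $\beta$, and $A$ shape the hysteresis loop, while $n$ governs the sharpness of the elastic-to-plastic transition.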
2. Artificial Neural Networks
Artificial Neural Networks (ANNs) are an important tool for building models of dynamic systems and offer considerable advantages over other modeling tools. ANNs can work easily with data that are not linearly separable, which makes them ideal for applications such as machine condition monitoring, where the training data are sparse and the network has to generalize well. Several applications have demonstrated that a neural network can successfully recognize and classify different faults in a number of condition monitoring applications. A good general introduction to neural networks is provided by Haykin and also by Rojas. A multilayer perceptron (MLP) ANN has been used in this experiment. The MLP used here consists of one hidden layer and an output layer; the hidden layer has a logistic activation function,

f(v) = 1 / (1 + e^(−v)),  (1)

while the output layer uses a linear activation function,

f(v) = v,  (2)

where v is the sum of the weighted inputs to the neuron. Different sizes of the hidden layer were tested. The size of the output layer was set at two neurons for this particular application. Training of the ANN is carried out using the Levenberg-Marquardt algorithm discussed in Section 3. The network is trained using a training set and a validation set, while testing is carried out using a further test feature set. Performance is measured in terms of the network's success on unseen data in the test set. Training is stopped when the performance on the validation set starts to diverge from that on the training set (i.e., when the network becomes overtrained on the training set).
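The paper trains its network in MATLAB's toolbox; purely as an illustrative sketch, the forward pass of such a one-hidden-layer MLP, with a logistic hidden activation as in (1) and a linear output activation as in (2), can be written as follows (the weights here are made-up placeholders, not the trained network):

```python
import math

def logistic(v):
    """Logistic activation for the hidden layer, eq. (1): f(v) = 1/(1 + e^-v)."""
    return 1.0 / (1.0 + math.exp(-v))

def mlp_forward(x, W_hidden, b_hidden, W_out, b_out):
    """Forward pass of a one-hidden-layer MLP: logistic hidden units,
    linear output units (eq. (2))."""
    hidden = [logistic(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W_hidden, b_hidden)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W_out, b_out)]

# Tiny example: 2 inputs -> 2 hidden neurons -> 1 linear output.
W_h = [[0.5, -0.3], [0.1, 0.8]]
b_h = [0.0, 0.1]
W_o = [[1.0, -1.0]]
b_o = [0.2]
y = mlp_forward([1.0, 2.0], W_h, b_h, W_o, b_o)
```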
The neural network architecture is a fully connected feedforward network with a single hidden layer of 100 neurons, as shown in Figure 2. The size of the hidden layer is optimized using the network-growing approach: one hidden neuron is used initially, and neurons are added until there is no further improvement in performance. This optimization is carried out using the fitting tool in the Neural Network Toolbox in MATLAB.
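The network-growing search described above can be sketched as follows. Here `train_and_validate` is a hypothetical stand-in for a full training run (the paper uses MATLAB's fitting tool), and the stand-in validation-error curve at the bottom is purely illustrative:

```python
def grow_network(train_and_validate, max_hidden=100, patience=3):
    """Network-growing search: start with one hidden neuron and add neurons
    until the validation error stops improving for `patience` steps."""
    best_h, best_err, stall = 1, float("inf"), 0
    for h in range(1, max_hidden + 1):
        err = train_and_validate(h)  # validation MSE for h hidden neurons
        if err < best_err:
            best_h, best_err, stall = h, err, 0
        else:
            stall += 1
            if stall >= patience:
                break
    return best_h, best_err

# Illustrative stand-in: a validation error that bottoms out at 12 neurons.
fake_val_error = lambda h: (h - 12) ** 2 / 100.0 + 0.05
best_h, best_err = grow_network(fake_val_error)
```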
3. Levenberg-Marquardt Algorithm
A mathematical description of the Levenberg-Marquardt (LM) neural network training algorithm has been presented by Hagan and Menhaj. The LM algorithm was originally created [18, 19] to address the drawbacks of the Gauss-Newton (GN) method and the gradient descent algorithm. The quadratic convergence properties of the GN algorithm make it very fast; however, its success depends critically on the choice of initial weights. Since, in real-world problems, an appropriate set of initial values is not always available, the GN method is impractical for many applications. In comparison, the success of the gradient descent algorithm is less dependent on the initial choice of weight values. However, since the gradient descent algorithm approaches the minimum in a linear (first-order) manner, its speed of convergence is normally low, and it does not always possess adequate convergence properties.
The LM algorithm is a hybrid created by merging the advantages of the GN and gradient descent algorithms. Far from a minimum, LM behaves like gradient descent, using it to improve on an initial guess for the parameters; as it approaches the minimum of the cost function, it transforms into the GN method and attains near-quadratic convergence. The damping parameter that controls this transition is increased whenever a step fails to reduce the cost, pushing the algorithm back toward gradient descent behavior. The LM algorithm is used for curve fitting [13–15] and many other optimization problems [21–24]. Due to its desirable convergence capabilities, in many optimization applications the LM method is preferred over other optimization techniques [25–29].
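As a minimal sketch of the mechanism described above (not the toolbox implementation used in the paper), the following fits a two-parameter exponential model by Levenberg-Marquardt: the normal equations are damped by λ on the diagonal, λ shrinks after an accepted step (GN-like behavior) and grows after a rejected one (gradient-descent-like behavior):

```python
import math

def levenberg_marquardt(xs, ys, p0, max_iter=100, lam=1e-3):
    """Fit y = a*exp(-b*x) by LM: solve (J^T J + lam*I) delta = J^T r,
    shrinking lam on success and growing it on failure."""
    a, b = p0
    def cost(a, b):
        return sum((y - a * math.exp(-b * x)) ** 2 for x, y in zip(xs, ys))
    c = cost(a, b)
    for _ in range(max_iter):
        # Residuals and analytic Jacobian of the model w.r.t. (a, b).
        r  = [y - a * math.exp(-b * x) for x, y in zip(xs, ys)]
        Ja = [math.exp(-b * x) for x in xs]            # d f / d a
        Jb = [-a * x * math.exp(-b * x) for x in xs]   # d f / d b
        # Damped 2x2 normal equations, solved in closed form.
        A11 = sum(j * j for j in Ja) + lam
        A22 = sum(j * j for j in Jb) + lam
        A12 = sum(u * v for u, v in zip(Ja, Jb))
        g1  = sum(j * ri for j, ri in zip(Ja, r))
        g2  = sum(j * ri for j, ri in zip(Jb, r))
        det = A11 * A22 - A12 * A12
        da  = (A22 * g1 - A12 * g2) / det
        db  = (A11 * g2 - A12 * g1) / det
        c_new = cost(a + da, b + db)
        if c_new < c:            # accept step: move toward GN behavior
            a, b, c, lam = a + da, b + db, c_new, lam / 10
        else:                    # reject step: fall back toward gradient descent
            lam *= 10
        if c < 1e-12:
            break
    return a, b

# Noise-free synthetic data generated from a = 2.0, b = 1.3.
xs = [0.1 * i for i in range(21)]
ys = [2.0 * math.exp(-1.3 * x) for x in xs]
a_fit, b_fit = levenberg_marquardt(xs, ys, (1.0, 0.5))
```

On clean data the loop recovers the generating parameters to high accuracy; the same accept/reject logic is what lets LM interpolate between the two classical methods.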
4. Experimental Setup
The experimental setup consists of three components: the external actuation system, the data acquisition system, and the controller. The external actuation system was fabricated to excite the damper with an electrodynamic shaker. The system was designed to closely depict a quarter car model. The shaker transmits the road disturbances to the lower plate by means of a stinger, as seen in Figure 3. The lower plate is equivalent to the tire of the vehicle in the quarter car model. The lower plate is coupled to the upper plate through a LORD RD-8040-1 MR damper, which forms part of the suspension system. The upper plate behaves as the chassis of the vehicle. The MR damper is clamped to the upper and lower plates by means of L clamps.
The electrodynamic shaker (Model 2110E), with a load capacity of 489 N (sine-peak), is driven by a linear power amplifier (Model 2050E09), which provides the shaker with the input profile generated using the Spider 81 data acquisition system and the EDM software. The vibration signals, taken as inputs for the neural network, are measured using sensors. The signal is measured on the shaker and the lower plate using accelerometers “352C34 LW 155857” of sensitivity 101.3 mV/g and “352C68 SN 92017” of sensitivity 102.2 mV/g, respectively. To measure the force transmitted and the acceleration signals at the upper plate, an impedance head “288D01 SN 3176” with sensitivities of 98.73 mV/lbf for force and 101.3 mV/g for acceleration is used. The MR damper is operated using an external power source. The current in the damper is varied using a potentiometer-type device called the Wonder Box RD-3002-03. The current is measured using a multimeter.
The signal parameters from these sensors are used as inputs for the algorithm, and the current that is varied using the potentiometer is used as the output. This current can be provided directly to the control system to create a feedback loop.
5. Training Data

The training set for the ANN is obtained from experimentation using the setup described in Section 4. In the experimental setup there are 6 factors that are monitored to develop the training set. The experimentation is carried out by varying the frequency and current parameters. The data collected is divided into three sets: 70% is the training set, 15% is the testing set, and the remaining 15% is the validation set. Using this training set, the final model, which best fits the data with minimum error, is developed. Table 1 provides the configuration of the various experiments conducted by varying the parameters.
The input parameters are recorded for 10 seconds of the response. These signatures are as shown in Figure 4.
In the first set of tests the current is kept constant and the frequency is increased from 5 Hz to 7 Hz in steps of 0.1 Hz. The data is collected at a sampling rate of 20.48 kHz. In a similar fashion the data set is obtained for all the current values. This data is then consolidated to form the training and testing matrix. There are 4 input parameters taken into consideration in this system:
(1) Frequency of excitation.
(2) Acceleration of the shaker.
(3) Relative acceleration of the upper level (chassis) with respect to the lower level (tire).
(4) Force transmitted to the chassis.
The current in the damper coils is taken as the output parameter. Using these input-output parameters the algorithm is trained.
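As an illustrative sketch, assembling the four input parameters and the current target into samples and dividing them 70%/15%/15% might look like the following; the sample values here are placeholders, not measured data:

```python
import random

def split_dataset(samples, train=0.70, test=0.15, seed=0):
    """Shuffle and split samples into 70% training, 15% testing,
    and 15% validation subsets."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_train = int(train * len(samples))
    n_test  = int(test * len(samples))
    train_set = [samples[i] for i in idx[:n_train]]
    test_set  = [samples[i] for i in idx[n_train:n_train + n_test]]
    val_set   = [samples[i] for i in idx[n_train + n_test:]]
    return train_set, test_set, val_set

# Each sample pairs the four inputs (excitation frequency, shaker
# acceleration, relative acceleration, transmitted force) with the
# target current; placeholder values only.
samples = [((5.0 + 0.1 * i, 0.0, 0.0, 0.0), 0.25) for i in range(100)]
tr, te, va = split_dataset(samples)
```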
6. Results and Discussion
The best model is selected from the models generated in the different stages on the basis of performance. The network is trained using the Levenberg-Marquardt algorithm. The error histogram for the model obtained with 100 nodes in the hidden layer is shown in Figure 5. The large centre peaks of the histogram indicate very small errors; that is, the output is very close to the target values. The absence of spikes towards the ends implies that there are no gross mispredictions in the model.
The best training performance for the different iterations of the model is shown in Figure 6.
The training state for each iteration for the various parameters is shown in Figure 7.
Figure 8 shows the vibration signature for the force exerted on the upper mass of the fabricated structure. The figure shows a substantial reduction in the amplitude of the exerted force as a result of the optimization performed.
This work is the first of its kind to develop a model, using actual experimental data, that predicts the current value needed to adjust the damper force for the given input parameters and thereby limit the acceleration transmitted to the driver. This demonstrates the versatility and success of Artificial Neural Networks in highly nonlinear and hysteretic systems such as the MR damper. The optimization performed, based on the Artificial Neural Network, has reduced the amplitude of the force transmitted to the driver. The magnitude of the force transmitted is approximately one-tenth of that at the base of the vehicle. The current determined by this optimization has caused this reduction in the force transmitted to the passenger seated in the vehicle.
The authors declare that they have no competing interests.
The authors acknowledge BITS-Pilani and Department of Science and Technology, FIST program, for providing funding for the experimental setup.
- W. Liu, W.-K. Shi, D.-W. Liu, and T.-Y. Yan, “Experimental modeling of magneto-rheological damper and PID neural network controller design,” in Proceedings of the 6th International Conference on Natural Computation (ICNC '10), pp. 1674–1678, Yantai, China, August 2010.
- F. Gandhi and I. Chopra, “A time-domain non-linear viscoelastic damper model,” Smart Materials and Structures, vol. 5, no. 5, pp. 517–528, 1996.
- G. M. Kamath and N. M. Wereley, “Nonlinear viscoelastic-plastic mechanisms-based model of an electrorheological damper,” Journal of Guidance, Control, and Dynamics, vol. 20, no. 6, pp. 1125–1132, 1997.
- R. A. Snyder, G. M. Kamath, and N. M. Wereley, “Characterization and analysis of magnetorheological damper behaviour due to sinusoidal loading,” in Proceedings of the SPIE Symposium on Smart Materials and Structures, vol. 3989 of Proceedings of SPIE, pp. 213–229, Newport Beach, Calif, USA, 2000.
- R. Stanway, N. D. Sims, and A. R. Johnson, “Modelling and control of a magnetorheological vibration isolator,” in Smart Structures and Materials: Damping and Isolation, vol. 3989 of Proceedings of SPIE, pp. 184–193, March 2000.
- N. M. Wereley, L. Pang, and G. M. Kamath, “Idealized hysteresis modeling of electrorheological and magnetorheological dampers,” Journal of Intelligent Material Systems and Structures, vol. 9, no. 8, pp. 642–649, 1998.
- B. Sapiński and J. Filuś, “Analysis of parametric models of MR linear damper,” Journal of Theoretical and Applied Mechanics, vol. 41, no. 2, pp. 215–240, 2003.
- H. Fujitani, H. Sodeyama, K. Hata et al., “Dynamic performance evaluation of 200 kN magnetorheological damper,” in Proceedings of the SPIE’s 7th Annual International Symposium, vol. 3989, pp. 194–203, 2000.
- M. T. Braz-Cesar and R. C. Barros, “Experimental and numerical analysis of MR dampers,” in Proceedings of the 4th ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and Earthquake Engineering, Kos Island, Greece, June 2013.
- G. R. Peng, W. H. Li, H. Du, H. X. Deng, and G. Alici, “Modelling and identifying the parameters of a magneto-rheological damper with a force-lag phenomenon,” Applied Mathematical Modelling, vol. 38, no. 15-16, pp. 3763–3773, 2014.
- M. Giuclea, T. Sireteanu, D. Stancioiu, and C. W. Stammers, “Modelling of magnetorheological damper dynamic behaviour by genetic algorithms based inverse method,” Proceedings of the Romanian Academy, Series A, vol. 5, no. 1, pp. 1–10, 2004.
- P. Prakash and A. K. Pandey, “Performance of MR damper based on experimental and analytical modelling,” in Proceedings of the 22nd International Congress of Sound and Vibration (ICSV '15), Florence, Italy, July 2015.
- T. J. Doyle, R. Pathak, J. S. Wolinsky, and P. A. Narayana, “Automated proton spectroscopic image processing,” Journal of Magnetic Resonance, Series B, vol. 106, no. 1, pp. 58–63, 1995.
- S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 1999.
- R. Rojas, Neural Networks: A Systematic Introduction, Springer Science & Business Media, Berlin, Germany, 2013.
- L. B. Jack and A. K. Nandi, “Fault detection using support vector machines and artificial neural networks, augmented by genetic algorithms,” Mechanical Systems and Signal Processing, vol. 16, no. 2-3, pp. 373–390, 2002.
- M. T. Hagan and M. B. Menhaj, “Training feedforward networks with the Marquardt algorithm,” IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989–993, 1994.
- K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of Applied Mathematics, vol. 2, pp. 164–168, 1944.
- D. W. Marquardt, “An algorithm for least squares estimation of non-linear parameters,” Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
- S. Kollias and D. Anastassiou, “An adaptive least squares algorithm for the efficient training of artificial neural networks,” IEEE Transactions on Circuits and Systems, vol. 36, no. 8, pp. 1092–1101, 1989.
- J. Basu and L. Hazra, “Role of line search in least-squares optimization of lens design,” Optical Engineering, vol. 33, no. 12, pp. 4060–4066, 1994.
- A. S. Deo and I. D. Walker, “Adaptive non-linear least squares for inverse kinematics,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 186–193, May 1993.
- V. I. Dmitriev and T. Y. Shameeva, “Solution of an inverse problem for estimation of ionospheric parameters,” Computational Mathematics and Modeling, vol. 5, no. 2, pp. 157–161, 1994.
- M. Guarini, J. Urzua, A. Cipriano, and M. Matus, “Cardiac flow estimation using the radial arterial pressure waveform,” in Proceedings of the 25th Annual Summer Computer Simulation Conference, pp. 1016–1019, 1993.
- J. L. del Alamo, “Comparison among eight different techniques to achieve an optimum estimation of electrical grounding parameters in two-layered earth,” IEEE Transactions on Power Delivery, vol. 8, no. 4, pp. 1890–1899, 1993.
- P. G. Drennan, B. W. Smith, and D. Alexander, “Technique for the measurement of the in situ development rate,” in Proceedings of the Integrated Circuit Metrology, Inspection, and Process Control VIII, vol. 2196 of Proceedings of SPIE, pp. 449–465, San Jose, Calif, USA, May 1994.
- P. Ojala, J. Saarinen, P. Elo, and K. Kaski, “Novel technology independent neural network approach on device modelling interface,” IEE Proceedings: Circuits, Devices and Systems, vol. 142, no. 1, pp. 74–82, 1995.
- A. Swietlik, “Interactive approach to reconstruction of solids from range images,” in Applications of Artificial Intelligence: Machine Vision and Robotics, vol. 1964 of Proceedings of SPIE, pp. 202–210, International Society for Optics and Photonics, 1993.
- B. G. Kermani, S. S. Schiffman, and H. T. Nagle, “Performance of the Levenberg-Marquardt neural network training method in electronic nose applications,” Sensors and Actuators B: Chemical, vol. 110, no. 1, pp. 13–22, 2005.
- J. M. Twomey and A. E. Smith, “Performance measures, consistency, and power for artificial neural network models,” Mathematical and Computer Modelling, vol. 21, no. 1-2, pp. 243–258, 1995.
Copyright © 2016 Ayush Raizada et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.