ISRN Signal Processing
Volume 2011 (2011), Article ID 725108, 18 pages
Research Article

Iterative Smooth Variable Structure Filter for Parameter Estimation

Department of Mechanical Engineering, McMaster University, Hamilton, ON, Canada L8S 4L7

Received 28 February 2011; Accepted 20 March 2011

Academic Editors: I. Guler and E. Salerno

Copyright © 2011 Mohammad Al-Shabi and Saeid Habibi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The smooth variable structure filter (SVSF) is a recently proposed predictor-corrector filter for state and parameter estimation. The SVSF is based on the sliding mode control concept. It defines a hyperplane in terms of the state trajectory and then applies a discontinuous corrective action that forces the estimate to go back and forth across that hyperplane. The SVSF is robust and stable to modeling uncertainties making it suitable for fault detection application. The discontinuous action of the SVSF results in a chattering effect that can be used to correct modeling errors and uncertainties in conjunction with adaptive strategies. In this paper, the SVSF is complemented with a novel parameter estimation technique referred to as the iterative bi-section/shooting method (IBSS). This combined strategy is used for estimating model parameters and states for systems in which only the model structure is known. This combination improves the performance of the SVSF in terms of rate of convergence, robustness, and stability. The benefits of the proposed estimation method are demonstrated by its application to an electrohydrostatic actuator.

1. Introduction

State and parameter estimation techniques are widely used in applications such as signal processing, radar tracking, satellite systems, simultaneous localization and mapping, weather forecasting, economics, dynamic positioning and tracking systems, fault detection, control, instrumentation, and prediction [1, 2]. They extract system states and parameters from measurements [2]. Estimating states is important for control. When used for parameter estimation, filters can track changes in systems and are useful for monitoring and fault detection [3–5].

State and parameter estimation techniques have seen rapid development since the 1950s. The predominant methods include the Wiener filter [6–8], the Kalman filter (KF) [2, 4, 5], the H∞ filter [9, 10], the unscented Kalman filter [4, 11], the particle filter [11], the sliding mode observer (SMO), and the smooth variable structure filter (SVSF) [1, 3, 12–15]. Some of these methods have been combined with model adaptation strategies involving fuzzy logic [16] and neural networks [14, 15] in order to improve their performance, accuracy, and stability. In this paper, the recently proposed SVSF filtering method is combined with the iterative bi-section/shooting method (IBSS) to estimate the states and the parameters of linear systems when only their model structures are known. The system model is considered as follows (assuming the presence of system and measurement noise):

x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1},
z_k = H_k x_k + v_k. (1)

The SVSF is a model-based strategy, and the IBSS is an iterative search method. Their combination allows adaptation of the SVSF's filter model given modeling uncertainties. For comparison purposes, the IBSS is combined with both the KF and the SVSF. The IBSS method is discussed in Section 4; its combinations with the KF and the SVSF are described in Sections 4.4.1 and 4.4.2, respectively. In Section 5, the IBSS with the KF and the IBSS with the SVSF are applied to an uncertain electro-hydrostatic actuator. Conclusions are presented in Section 6. The nomenclature is presented in Table 1. Italic upper-case letters are used to denote matrices and vectors, while their elements are denoted by italic lower-case letters with subscripts i and/or j.

Table 1: Nomenclature.

2. The Kalman Filter

In 1960, Rudolf Kalman presented the Kalman filter (KF): a recursive, optimal, model-based estimator that falls under the predictor-corrector category for linear systems [2, 4]. The KF is an optimal filter for linear Gaussian problems as it minimizes the mean square error (MMSE) between the actual and the estimated states.

According to [2], the KF process may be divided into two stages: prediction and correction. In the prediction stage, the a priori estimate, x̂_{k|k-1}, is obtained by using a model of the system under consideration such that

x̂_{k|k-1} = A_{k-1} x̂_{k-1|k-1} + B_{k-1} u_{k-1},
ẑ_{k|k-1} = H_k x̂_{k|k-1},
P_{k|k-1} = A_{k-1} P_{k-1|k-1} A^T_{k-1} + Q_{k-1}. (2)

The a priori estimation errors are due to uncertainties as well as noise effects. The KF then uses the measurements and an optimal gain to refine the a priori estimates to an a posteriori form in what is referred to as an update step, such that

x̂_{k|k} = x̂_{k|k-1} + K_kalman (z_k − ẑ_{k|k-1}),
P_{k|k} = (I_{n×n} − K_kalman H_k) P_{k|k-1}, (3)

where K_kalman is the Kalman filter's gain, defined as

K_kalman = P_{k|k-1} H^T_k (H_k P_{k|k-1} H^T_k + R_k)^{-1}. (4)
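The two stages above can be sketched in a few lines. The following is a minimal illustration of eqs. (2)-(4) (an assumption-laden sketch, not the authors' implementation), with all matrices as numpy arrays:

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One predict/correct cycle of the linear KF, following eqs. (2)-(4)."""
    # Prediction stage: a priori state, output, and error covariance
    x_prior = A @ x + B @ u
    z_prior = H @ x_prior
    P_prior = A @ P @ A.T + Q

    # Correction stage: optimal gain, then a posteriori estimates
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - z_prior)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post
```

With a near-noiseless measurement (R → 0) the gain approaches the pseudo-inverse of H and the a posteriori state reproduces the measurement, as expected.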

The Kalman Filter’s limitations are its assumption of a Gaussian distribution for the noise and a largely known model. A number of different formulations of the Kalman Filter have been proposed to improve its performance [2].

3. The Smooth Variable Structure Filter

The variable structure filter (VSF) was first proposed in 2003 as a recursive predictor-corrector filter that is based on the sliding mode concept [3]. It uses a projection of the true state trajectory as a switching hyperplane and forces the estimated state trajectories to stay close to the true states. The filter reduces the effects of noise and modeling errors and overcomes some of the limitations of the Kalman filter in terms of robustness and uncertainties. The solution is not, however, optimal [1]. The concepts of VSF and its performance in terms of stability, accuracy, and the rate of convergence are discussed and quantified in [1, 3, 15]. In 2007, the smooth variable structure filter (SVSF) was proposed to make the VSF simpler and applicable for nonlinear systems [13, 14].

The SVSF is a robust filter that guarantees stability for bounded uncertainties, as shown in [1]. It has a secondary set of indicators of performance (one per estimated state or parameter) that provides information on the uncertainties of the filter model. The simplest form of the SVSF is applicable to linear systems that have a full-rank measurement matrix. Like the KF, the SVSF is a predictor-corrector method. However, the update stage is defined as

x̂_{k|k} = x̂_{k|k-1} + K_SVSF, (5)

where K_SVSF is the SVSF gain for linear systems with a full-rank measurement matrix, defined as

K_SVSF = H^{-1} (|e_{z,k|k-1}| + γ |e_{z,k-1|k-1}|) ∘ sgn(e_{z,k|k-1}), (6)

where γ_{n×n} is a diagonal matrix with elements γ_ii ≤ 1, ∘ denotes the element-wise (Schur) product, and e_{z,k|k-1} and e_{z,k|k} are the a priori and the a posteriori output estimation error vectors, defined as

e_{z,k|k-1} = z_k − ẑ_{k|k-1},
e_{z,k|k} = z_k − ẑ_{k|k}. (7)

The SVSF handles bounded uncertainties such as:
(i) inaccuracies in the estimation model,
(ii) system and measurement noise,
(iii) unknown initial conditions.

The SVSF is a robust recursive predictor-corrector estimation method that can effectively deal with uncertainties associated with initial conditions and modeling errors. It guarantees bounded-input bounded-output stability and the convergence of the estimation process by using the Lyapunov stability condition. The derivation of SVSF’s gain and its stability conditions can be found in [1] and are summarized in the following subsections.

3.1. The Lyapunov Stability Theorem

Let M_k be a Lyapunov function defined in terms of the a posteriori estimation error, such that

M_k = |e_{z,k|k}|. (8)

The estimation process is stable if

ΔM_k < 0, (9)

where ΔM_k represents the change in the Lyapunov function and in this case is defined as

ΔM_k = |e_{z,k|k}| − |e_{z,k-1|k-1}|. (10)

By substituting (10) into (9) and then rearranging, the following is obtained:

|e_{z,k|k}| − |e_{z,k-1|k-1}| < 0. (11)

Equation (11) is equivalent to the following:

|e_{z,k|k}| < |e_{z,k-1|k-1}|. (12)

To remove the absolute-value operator, |·|, both sides are expressed in the form of diagonal matrices and multiplied by their transposes as follows:

diag(e_{z,k|k}) diag(e_{z,k|k})^T < diag(e_{z,k-1|k-1}) diag(e_{z,k-1|k-1})^T, (13)

where diag(e_{z,k|k}) is the diagonal matrix of e_{z,k|k}.

The a posteriori output estimation error is obtained as follows (assuming the output matrix is well known):

e_{z,k|k} = H_k e_{x,k|k} + v_k. (14)

By substituting (14) into (13), (15) is obtained as

diag(H_k e_{x,k|k}) diag(H_k e_{x,k|k}) + diag(v_k) diag(v_k) + diag(H_k e_{x,k|k}) diag(v_k) + diag(v_k) diag(H_k e_{x,k|k})
< diag(H_{k-1} e_{x,k-1|k-1}) diag(H_{k-1} e_{x,k-1|k-1}) + diag(v_{k-1}) diag(v_{k-1}) + diag(H_{k-1} e_{x,k-1|k-1}) diag(v_{k-1}) + diag(v_{k-1}) diag(H_{k-1} e_{x,k-1|k-1}). (15)

If the measurement noise is stationary and white, then by taking the expectation of both sides, (15) is transformed to (16):

E[diag(H_k e_{x,k|k}) diag(H_k e_{x,k|k}) + diag(v_k) diag(v_k)] < E[diag(H_{k-1} e_{x,k-1|k-1}) diag(H_{k-1} e_{x,k-1|k-1}) + diag(v_{k-1}) diag(v_{k-1})], (16)

where E[diag(H_k e_{x,k|k}) diag(v_k)] and E[diag(v_k) diag(H_k e_{x,k|k})] vanish due to the white-noise assumption. For a diagonal, positive, and time-invariant measurement matrix, (16) reduces to (17):

E[diag(e_{x,k|k}) diag(e_{x,k|k})] < E[diag(e_{x,k-1|k-1}) diag(e_{x,k-1|k-1})]. (17)

Note that the assumptions pertaining to the measurement matrix are realistic, since most applications use linear sensors for feedback. Moreover, these sensors are well calibrated and their structures are well known [1]. Equation (17) is equivalent to the following:

E[|e_{x,k|k}|] < E[|e_{x,k-1|k-1}|]. (18)

From (18), the expectation of the a posteriori estimation error decreases over time (it converges towards the origin), which means that the filter is stable.

3.2. The Derivation of the SVSF’s Gain

The SVSF’s gain is derived to guarantee the stability condition of (18). Moreover, the gain must be larger than the uncertain dynamics of the estimation process, yet it should be bounded for bounded-input bounded-output stability.

Let γ be a diagonal positive matrix of dimensions n×n with elements less than unity, that is, 0 < γ_ii < 1; then

γ |e_{z,k-1|k-1}| < |e_{z,k-1|k-1}|. (19)

Adding the term |e_{z,k|k-1}| to both sides leads to the following:

γ |e_{z,k-1|k-1}| + |e_{z,k|k-1}| < |e_{z,k-1|k-1}| + |e_{z,k|k-1}|. (20)

The absolute value of the SVSF gain multiplied by the measurement matrix is set equal to the left-hand side of (20) as follows:

|H_k K_SVSF,k| = γ |e_{z,k-1|k-1}| + |e_{z,k|k-1}|. (21)

The sign of the gain is made equal to the sign of the a priori estimation error, e_{z,k|k-1}. This leads to (6). Note that the proposed gain satisfies the condition of being larger than the a priori estimation error. By applying the gain to the a priori estimate, the a posteriori estimated measurement is obtained as follows:

ẑ_{k|k} = ẑ_{k|k-1} + (|e_{z,k|k-1}| + γ |e_{z,k-1|k-1}|) ∘ sgn(e_{z,k|k-1}). (22)

Subtracting (22) from the measurement z_k leads to the following:

e_{z,k|k} = e_{z,k|k-1} − (|e_{z,k|k-1}| + γ |e_{z,k-1|k-1}|) ∘ sgn(e_{z,k|k-1}). (23)

Equation (23) can be rewritten in a simpler form by substituting |e_{z,k|k-1}| ∘ sgn(e_{z,k|k-1}) with e_{z,k|k-1}, as follows:

e_{z,k|k} = e_{z,k|k-1} − e_{z,k|k-1} − γ |e_{z,k-1|k-1}| ∘ sgn(e_{z,k|k-1}) = −γ |e_{z,k-1|k-1}| ∘ sgn(e_{z,k|k-1}). (24)

By taking the absolute value of both sides of (24), (25) is obtained as follows:

|e_{z,k|k}| = γ |e_{z,k-1|k-1}| < |e_{z,k-1|k-1}|. (25)

The error decays in time, which means that (12) and (18) are satisfied and the filter is stable.
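The error contraction of eq. (25) can be checked numerically. The sketch below (an illustration under the square, full-rank H assumption, not the authors' code) computes the gain of eq. (6) with an element-wise product and verifies that the a posteriori error magnitude shrinks exactly by the factor γ:

```python
import numpy as np

def svsf_gain(H, e_prior, e_post_prev, gamma):
    """SVSF gain of eq. (6) for a square, full-rank measurement matrix H;
    gamma (0 < gamma < 1) sets the error contraction rate per eq. (25)."""
    return np.linalg.inv(H) @ ((np.abs(e_prior) + gamma * np.abs(e_post_prev))
                               * np.sign(e_prior))

# With H = I, eq. (24) gives e_post = -gamma * |e_post_prev| * sgn(e_prior),
# so |e_post| = gamma * |e_post_prev| as stated in eq. (25).
H = np.eye(2)
e_prior = np.array([0.5, -0.2])        # a priori output error e_{z,k|k-1}
e_post_prev = np.array([0.4, 0.1])     # previous a posteriori error
K = svsf_gain(H, e_prior, e_post_prev, gamma=0.1)
e_post = e_prior - H @ K               # a posteriori error, eq. (23)
```
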

3.3. The Smoothing Boundary Layer

The essential idea behind the SVSF is that the estimated state switches back and forth across the actual state trajectory. This switching effect results in chattering that can be eliminated by replacing the sign function in (6) with a saturation function with a known boundary layer, referred to as the smoothing boundary layer. Inside the smoothing boundary layer, the corrective action is interpolated based on the ratio between the amplitude of the output's a priori estimation error and the smoothing boundary layer's width. Outside the smoothing boundary layer, the discontinuous corrective action is applied at its full amplitude. The SVSF assigns and requires one smoothing boundary layer per estimate. The SVSF gain becomes

K_SVSF = H^{-1} (|e_{z,k|k-1}| + γ |e_{z,k-1|k-1}|) ∘ sat(e_{z,k|k-1}, Ψ), (26)

where sat(e_{z,k|k-1}, Ψ) is a saturation vector defined as

sat(e_{z,k|k-1}, Ψ) = [sat(e_{z_1,k|k-1}, Ψ_1) ⋯ sat(e_{z_n,k|k-1}, Ψ_n)]^T, (27)

and sat(e_{z_i,k|k-1}, Ψ_{i,k}), i = 1, …, n, is the saturation function, defined as follows:

sat(e_{z_i,k|k-1}, Ψ_{i,k}) = { e_{z_i,k|k-1}/Ψ_{i,k},  |e_{z_i,k|k-1}| ≤ Ψ_{i,k};
                               sgn(e_{z_i,k|k-1}),    |e_{z_i,k|k-1}| > Ψ_{i,k}. (28)
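The piecewise saturation of eq. (28) translates directly into code; the following vectorized sketch is an illustration (not the authors' implementation):

```python
import numpy as np

def sat(e, psi):
    """Saturation function of eq. (28): linear interpolation inside the
    smoothing boundary layer of width psi, pure switching outside it."""
    e, psi = np.asarray(e, dtype=float), np.asarray(psi, dtype=float)
    return np.where(np.abs(e) <= psi, e / psi, np.sign(e))
```

Replacing sgn with sat in the gain of eq. (6) yields the smoothed gain of eq. (26).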

The boundary layer width Ψ_{i,k} is calculated by using the upper bound of the uncertainties [11]. If, due to changes in the system, additional uncertainties are added such that the amplitude of e_{z_i,k|k-1} grows larger than Ψ_{i,k}, then chattering will be observed [1]. For example, the a priori chattering signal has been tracked for a second-order system that is made to undergo parametric changes at time steps t1 = 4000 and t2 = 7000. These changes last for 1000 and 2500 time steps, respectively. The smoothing boundary layer was designed to enclose the existence subspace for the system before the parametric changes (t < 4000 time steps). Figure 1 shows chattering when uncertainties are injected into the model at t1 = 4000 and t2 = 7000 time steps. Moreover, the figure shows the duration of each uncertainty injection. The SVSF is very sensitive to added uncertainties and exhibits chattering that can be used for detecting the inception of a change in the system. This capability is very useful for certain applications such as fault detection.

Figure 1: The chattering as an indicator of parametric changes.

4. Iterative Bi-Section/Shooting Method

The iterative bisection/shooting method consists of two elements discussed in the following subsections.

4.1. The Bi-Section Method

The bisection method is a well-known numerical root-finder for 𝑓(𝑥)=0, and it is based on the following.

If f(x) is continuous over [a_i, a_e] and f(a_i) f(a_e) < 0, then there is at least one point a in the interval where f(a) = 0, as shown in Figure 2 [17, 19].

Figure 2: The bi-section's principle [17].

The bisection method starts by defining an interval that contains the root a:

a ∈ [a_i, a_e], (29)

where a_i and a_e are the interval boundaries, chosen to satisfy the following:

f(a_i) f(a_e) < 0. (30)

By taking the interval's middle point, a_m, the interval is divided into two subintervals. Based on the sign of f(a_m), one of these subintervals is chosen as the new interval for the next iteration:

If f(a_i) f(a_m) < 0, then the new interval is defined as [a_i, a_m];
else the new interval is defined as [a_m, a_e]. (31)

The interval is then divided in half iteratively until the width of the interval becomes smaller than a threshold, and the root is taken to be the midpoint of the final interval. This algorithm is summarized in Figure 3. If multiple zeros exist inside the interval, then only one of them will be obtained, depending on the interval size and its boundary values, as shown in Figure 4. Therefore, this method is stable as it always converges to one of the zeros. Moreover, the level of accuracy is controllable, with a maximum absolute error equal to half of the last interval (the threshold). However, its disadvantages are its slow rate of convergence and its sensitivity to the size of the interval [17]. Due to its stability and simplicity, this method has been used in many applications, for example, computing the H∞ norm of transfer functions in [19] and system identification in [20].
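The steps of Figure 3 amount to only a few lines of code. A minimal sketch (the function name and default tolerance are illustrative):

```python
def bisect(f, a_i, a_e, tol=1e-8):
    """Bisection root-finder: f must be continuous and change sign over
    [a_i, a_e] (the condition of eq. (30))."""
    if f(a_i) * f(a_e) >= 0:
        raise ValueError("f(a_i) and f(a_e) must have opposite signs")
    while (a_e - a_i) > tol:
        a_m = 0.5 * (a_i + a_e)      # interval midpoint
        if f(a_i) * f(a_m) < 0:      # eq. (31): root in the left half
            a_e = a_m
        else:                        # otherwise root in the right half
            a_i = a_m
    return 0.5 * (a_i + a_e)         # midpoint of the final interval
```

For example, `bisect(lambda x: x * x - 2, 1.0, 2.0)` converges to √2.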

Figure 3: The bi-section steps [17].
Figure 4: Multiple zeros and interval effects.
4.2. The Shooting Method

According to [17], the shooting method is a numerical technique used to solve a differential equation with boundary conditions (at time t_f), defined as

x^{(n)}(t) + x^{(n-1)}(t) + ⋯ + x(t) = x_p(t), for the boundary conditions [x(t_f) ⋯ x^{(n-1)}(t_f)]^T = x_f, (32)

by converting it to an initial condition problem (at time t_0) as follows:

x^{(n)}(t) + x^{(n-1)}(t) + ⋯ + x(t) = x_p(t), for the initial conditions [x(t_0) ⋯ x^{(n-1)}(t_0)]^T = x_0, (33)

where x_p is the input.

The shooting method is based on the same idea as hitting a target with a cannon projectile. If a cannon is used to hit a target, the muzzle angle must be adjusted properly; otherwise the projectile does not hit its target. If the adjustment process is done manually, then several trials are needed to achieve the proper angle, as shown in Figure 5. The first trial is done by adjusting the muzzle to an initial angle and then shooting the projectile. According to the projectile's final destination and its difference from the target location, the muzzle angle is adjusted, and another trial is made. The angle is adjusted several times (the trial is repeated) until the projectile hits its target. Similarly, the shooting method starts by guessing initial conditions, x̂_0, for the system, then finding the solutions of the system's equation over the entire domain up to the final values, x̂_f. By comparing the resultant final values with their actual values, x_f, the initial guess is refined and the process is repeated iteratively to minimize the error in the final (boundary) values. Once the error becomes smaller than a threshold value, iteration stops and the solution is adopted.
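As a toy illustration of this idea (not an example from the paper), consider the boundary value problem x'' = −x with x(0) = 0 and x(π/2) = 1, whose exact solution x(t) = sin t implies an initial slope x'(0) = 1. The sketch below integrates trial initial slopes forward and refines the guess by bisection on the terminal error:

```python
import math

def shoot(slope, t_f=math.pi / 2, n=2000):
    """Integrate x'' = -x from x(0) = 0, x'(0) = slope with a semi-implicit
    Euler scheme and return the terminal value x(t_f)."""
    dt = t_f / n
    x, v = 0.0, slope
    for _ in range(n):
        v -= x * dt
        x += v * dt
    return x

def find_slope(target=1.0, lo=0.0, hi=5.0, tol=1e-8):
    """Shooting method: bisect on the terminal error shoot(s) - target."""
    f = lambda s: shoot(s) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) < 0 else (mid, hi)
    return 0.5 * (lo + hi)
```

`find_slope()` recovers the exact initial slope of 1 up to the integration error.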

Figure 5: Adjusting the angle of the cannon muzzle manually to hit a target.
4.3. The Iterative Bi-Section/Shooting Method

The bi-section/shooting method is a combination used to iteratively extract model parameters from the measurements for systems in which only the model structure is known. The maximum number of parameters that can be estimated for an nth-order system using this method is n+1. The system's parameters are assumed to be constant during operation, and they are divided into two groups: the first group is of size n and is obtained by the bi-section/shooting method; the second group is extracted based on the measurements and the parameters from the first group. Its members are therefore stochastic variables with variances that are functions of the noise and of the modeling errors pertaining to the parameters in the first group. For example, in second-order systems, the gain and the natural frequency are the first-group parameters, and the damping ratio is the second-group parameter. The damping ratio may be extracted from measurements if the gain and the natural frequency are known. The estimate of the damping ratio becomes a stochastic variable with a variance that is a function of the noise and of the errors in the latter two parameters, as shown in Figure 6. The figure shows that if modeling errors are reduced, then the estimated parameter's variance is reduced, and its true value is obtained once the modeling errors become zero. Assuming that a curve connects the minimum-variance points in Figure 6, the smallest-variance point has a (graphical) derivative of zero, and the sign of the derivative changes across this point. By studying the sign of the derivative function, the minimum variance can be obtained using the bi-section method.

Figure 6: The effect of modeling errors on the extracted parameter, ξ, for a second-order system.

Parameter estimation is performed by defining a search interval (one per group-one parameter), estimating the group-two parameters, and then obtaining their variances. Based on the variance of the group-two parameters, the intervals of the group-one parameters are reduced using the bi-section method until a threshold is reached. For example, to implement this method for a first-order system with two model parameters, say X and Y, the following process is used.

One of the two parameters, say X, is chosen first. An interval is specified for this parameter, and five values, X_1 to X_5, are chosen, uniformly distributed along the interval; these are taken as the shooting method's initial guesses.

For each of these values, and using data segments of the measurement and input signals, the second parameter, Y_i, may be extracted using the inverse system model. Note that the extracted parameter Y_i is not constant but a function of time. Thus, five variance values of the second parameter are obtained from the shooting method, one for each of X_1 to X_5. As discussed earlier, each variance is a function of the noise as well as of the error in the first parameter estimate X_i.

The variance values are distributed as shown in Figure 7. Note that these points have a parabolic shape. By taking the derivative of the shape function (to obtain the root of the derivative, which represents the minimum variance) and using the bi-section method, a new subinterval is obtained.

Figure 7: Refining the interval by bisection method.

The algorithm is repeated iteratively until the width of the subinterval becomes smaller than a threshold. At each iteration, the first parameter is taken to be the midpoint of the resultant interval, and the second parameter is chosen as the mean of the corresponding extracted vector.
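A minimal sketch of this five-point refinement (illustrative only; `var_fn` stands for the variance of the extracted second parameter as a function of the trial first-parameter value, and bracketing via the sampled minimum is equivalent to checking the derivative signs for a parabolic curve):

```python
import numpy as np

def refine_minimum(var_fn, lo, hi, tol=1e-6):
    """Shrink [lo, hi] around the minimum-variance point until its width
    drops below tol; return the midpoint of the final interval."""
    while hi - lo > tol:
        p = np.linspace(lo, hi, 5)              # five evenly spaced values
        v = np.array([var_fn(x) for x in p])    # variance at each value
        j = int(np.argmin(v))                   # sampled minimum-variance point
        lo, hi = p[max(j - 1, 0)], p[min(j + 1, 4)]  # bracket the minimum
    return 0.5 * (lo + hi)
```

For a parabolic variance curve with its minimum at 0.3, `refine_minimum(lambda p: (p - 0.3) ** 2 + 1.0, 0.0, 1.0)` converges to 0.3.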

The algorithm of IBSS is demonstrated by the following example.

Example 1. This example demonstrates the application of the IBSS algorithm to a second-order system. The parameters are divided into two groups as previously discussed. The first group consists of the gain (B) and the natural frequency (ω_n); their estimation is performed by an outer loop and five inner loops. The second group consists of the damping ratio (ξ). Assume that the outer loop is related to B and the inner loops are related to ω_n. The computation loops are illustrated in Figure 8 and are as follows.

Figure 8: The IBSS algorithm for second-order system.

Outer Loop
(1) The algorithm selects one of the parameters from the first group, for example, B, for the outer loop. It relies on the availability of upper and lower bounds for that parameter. If the parameter is the gain B, then the lower and upper bounds are defined as B_1 and B_5, respectively.
(2) The algorithm defines five values within the above range, B_1 to B_5, where B_1 is the lowest value, B_5 is the highest value, and the values B_2 to B_4 are evenly distributed between B_1 and B_5.
(3) For each value B_i, an inner loop is created for the other parameter in group one, ω_n. Therefore, five inner loops are created. Each of the inner loops has the following process.

Inner Loop 𝑖, 𝑖=1,,5
(a) Upper and lower bounds are specified for ω_n, denoted as ω_{i5} and ω_{i1}, respectively.
(b) The algorithm defines five values within the above range, ω_{i1} to ω_{i5}, where ω_{i1} is the lowest value, ω_{i5} is the highest value, and the values ω_{i2} to ω_{i4} are evenly distributed between ω_{i1} and ω_{i5}.
(c) Each value ω_{ij} is taken as an initial guess for the shooting method, where i and j denote the outer and inner loops, respectively. The pair (B_i, ω_{ij}) is assumed to give the values of the unknown parameters B and ω_n. Note that if B and ω_n are known, the system satisfies the observability condition for the estimation of the remaining parameter (ξ) using the measurement. Therefore, the third parameter, ξ̂_{ij}, can be extracted by using the measurement (z), the input (u), the sampling time T_s, and the assigned pair (B_i, ω_{ij}) through filtering or by using the inverse model as follows:

ξ̂_{ij,k-1} = [B_i T_s u_{k-1} − ω²_{ij} T_s z_{1,k-1} − (z_{2,k} − z_{2,k-1})] / (2 ω_{ij} T_s z_{2,k-1}), (34)

where ξ̂_{ij,k-1} is the extracted damping ratio at time k−1 using the pair (B_i, ω_{ij}).
(d) ξ̂_{ij} = [ξ̂_{ij,k-d+1} ⋯ ξ̂_{ij,k}] is a stochastic variable segment that is a function of the system and measurement noise as well as modeling errors. The variance of each ξ̂_{ij}, σ_{ξ̂_{ij}}, is calculated for each corresponding ω_{ij}, which results in five values distributed as shown in Figure 7 (note the parabolic shape). The derivative of the curve is obtained by taking the differences between successive variance points. The sign of the derivative is examined and, using the bi-section method, a new subinterval is created from the old interval by reassigning the interval boundaries ω_{i1} and/or ω_{i5}, as shown in Table 2. Note that only one minimum value of the variance exists in the interval. The extreme cases (cases 1 and 5 in Table 2) treat the location of the minimum variance as being close to the interval boundaries.
(e) After defining the new interval, steps (b) to (d) are repeated iteratively until the width of the resultant interval of the natural frequency is smaller than a threshold, ε.
(f) Once the loop stops, the natural frequency ω_{ni} of that loop is taken to be the midpoint of the final interval.

Table 2: The new interval boundaries using the bi-section method.

End of the Inner Loop 𝑖
(4) The algorithm uses the measurements, the input, the sampling time, and the pairs (B_i, ω_{ni}) to obtain the damping ratio ξ̂_i for each inner loop as follows:

ξ̂_i = [B_i T_s u_{k-1} − ω²_{ni} T_s z_{1,k-1} − (z_{2,k} − z_{2,k-1})] / (2 ω_{ni} T_s z_{2,k-1}). (35)

(5) The variance of ξ̂_i, σ_{ξ̂_i}, is calculated for each corresponding pair (B_i, ω_{ni}), resulting in five values distributed as shown in Figure 7. Similarly to step (d), a new subinterval is created for the gain B by reassigning the interval boundaries B_1 and/or B_5, as discussed in step (d) and as shown in Table 2 (replacing ω_{ij} with B_i).
(6) After defining the new interval, steps (2) to (5) are repeated iteratively until the width of the resultant interval of the gain B is smaller than a threshold, ϱ.

End of the Outer Loop
Once the outer loop stops, the gain B is taken to be the midpoint of its final interval. One more inner loop is performed using the gain B and steps (a) to (f) to obtain ω_n. The damping ratio ξ̂ is then obtained by using the measurement, the input, the pair (B, ω_n), and the inverse model as follows:

ξ̂ = (1/d) Σ_{j=k-d+1}^{k} [B T_s u_{j-1} − ω²_n T_s z_{1,j-1} − (z_{2,j} − z_{2,j-1})] / (2 ω_n T_s z_{2,j-1}). (36)
Increasing the number of parameters results in more nested loops: for each inner loop there will be five sub-inner loops, for each sub-inner loop a further five sub-sub-inner loops, and so on. The algorithm's computational time therefore grows exponentially with the number of parameters, so it is only suitable for systems of low complexity and order.

4.4. The Iterative Bi-Section/Shooting Method Combined with the SVSF and the KF

The IBSS is used to refine the estimated model. However, it does not estimate the states. Therefore, the IBSS needs to be combined with a filter such as the KF or the SVSF in order to estimate the states. This combination can also be used to estimate observable parameters. In this study, the combination of the SVSF with the IBSS is presented and compared to the KF with the IBSS. In the example and algorithms presented, it is assumed that all model parameters and all states are extracted from the measurement signals; the only known information is the model structure.

4.4.1. The Iterative Bi-Section/Shooting with the KF

The iterative bi-section/shooting method is combined with the Kalman filter (IBSS/KF) to estimate the states and the parameters using data segments of the measurement signal. The segments are needed for the IBSS. As mentioned earlier, the parameters are assumed to be constant within each segment; otherwise the IBSS method will be misled. The IBSS element is used to refine the filter's model to reduce errors, and then the KF is used to obtain the states. The main shortcoming of the IBSS/KF is that if the system's parameters change, the IBSS/KF does not know when this change has occurred. Therefore, one segment of the measurements is taken at a time, in which the parameters are assumed to be constant. The segment is processed using the IBSS to obtain the parameters. In the next step, the KF estimates the states, as shown in Figure 9 and as follows.
(1) The measurement signal is divided into segments. The parameters of the system are assumed to be constant within each segment. Each segment is processed individually by the IBSS and the KF. The last time-step value in a segment is used as the initial condition for the next segment (for the KF).
(2) The IBSS is applied to the segment to achieve the best value of the filter's model parameters. The cost function (or goodness criterion) is the lowest variance in the estimation of the damping ratio, ξ (in the case of a second-order system).
(3) The KF estimates the states of the system in the segment based on the new model parameters from the IBSS.
(4) Steps 1 to 3 are repeated for all data segments.
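The fixed-length segmentation in step (1) can be sketched as follows (an illustrative helper, not from the paper; each segment is then passed to the IBSS and the KF in turn):

```python
def segment(signal, seg_len=200):
    """Split a measurement record into fixed-length segments; the model
    parameters are assumed constant within each segment."""
    return [signal[i:i + seg_len] for i in range(0, len(signal), seg_len)]
```
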

Figure 9: The IBSS/KF algorithm for second order systems.
4.4.2. The Iterative Bi-Section/Shooting SVSF

The advantage of combining the IBSS with the SVSF is that the SVSF's secondary indicators of performance can very accurately determine when a physical parameter has changed. This is very advantageous, since the IBSS requires that the parameters be constant over the interval during which they are estimated. This provides a dynamic segmentation ability to the IBSS/SVSF that is not possible with the IBSS/KF. The combined IBSS/SVSF enables the estimation of all of the states and all of the model parameters for low-order systems. The combined robust stability of the SVSF and the interval definition can lead to a stable overall process.

The iterative bi-section/shooting method is combined with the SVSF (IBSS/SVSF) to estimate the states and the parameters. Moreover, the SVSF's secondary indicators of performance are used to detect parametric changes in the system once they occur and to pass that information to the IBSS for interval selection. If chattering occurs when the boundary layer is set to a width that is a function of the upper bounds of uncertainties, then the upper bound has been breached and at least one of the parameters has changed. Hence, chattering provides a good indicator of the inception of change in systems. Once chattering occurs, the IBSS refines the estimated model. The SVSF then uses the refined model to continue estimating the states until the chattering condition reoccurs. The combined algorithm is summarized in Figure 10 and as follows.
(1) An SVSF with an appropriate smoothing boundary layer is used to estimate the states, and chattering is monitored.
(2) Once chattering occurs, the IBSS takes a segment of the measurement and processes it to obtain the model parameters and refine the filter's model.
(3) The SVSF then continues to obtain the estimates until another chattering condition occurs.
(4) Steps 1 to 3 are repeated.
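The chattering monitor of step (1) reduces to checking whether any a priori output error has left its smoothing boundary layer. A sketch (the widths shown are the Ψ values used later in Section 5; the function name is illustrative):

```python
import numpy as np

PSI = np.array([3e-6, 1.2e-5, 6e-3])   # boundary-layer widths, Section 5

def chattering(e_prior, psi=PSI):
    """True when any a priori output error exits its smoothing boundary
    layer, signalling that at least one model parameter has changed."""
    return bool(np.any(np.abs(e_prior) > psi))
```
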

Figure 10: The IBSS/SVSF algorithm.

In the following section, the methods are applied to an example problem.

5. Simulation Test

5.1. Simulation Setup

In this study, the proposed algorithms are tested by their application to a simulation model of an electrohydrostatic actuator (EHA) described in [13]. The EHA is a pump-controlled hydraulic system used in the aerospace industry, for example, in airplane ailerons [21]. The EHA is an integrated unit that consists of an electrical motor, a bidirectional pump, pressure and position sensors, and a linear actuator. Its hydraulic circuit is shown in Figure 11 [18, 22]. The EHA can be described by a third-order model defined in its discretized state-space form as

[x_{1,k+1}; x_{2,k+1}; x_{3,k+1}] = [1, T_s, 0; 0, 1, T_s; 0, −ω²_n T_s, 1 − 2ξω_n T_s] [x_{1,k}; x_{2,k}; x_{3,k}] + [0; 0; B T_s] u_k + [w_{1,k}; w_{2,k}; w_{3,k}],
z_{k+1} = I_{3×3} x_{k+1} + v_{k+1}, (37)

where x_1, x_2, and x_3 are the position, velocity, and acceleration, ω_n = √(2βA²_E/(M V_0)), B = 2DβA_E/(M V_0), and ξ = (1/(2√2)) (B_E V_0 + L M β)/√(M V_0 β A²_E); these parameters are defined and quantified in Table 3. The parameter β is the effective bulk modulus, and its value lies in the range (1×10⁸ to 3×10⁸) Pa. The effective bulk modulus is made to change randomly several times. As the effective bulk modulus changes, the parameters B, ω_n, and ξ also change. The number of parameters in the EHA system is the same as the number of parameters in a second-order system; therefore, the IBSS algorithm described in Example 1 is used. The output signals have been divided into segments of length 200 time steps for the IBSS/KF. In the IBSS/SVSF, segmentation is not required. Initially, the estimates are within the smoothing boundary layer. When a system (model) parameter changes, the filter estimates exit their smoothing boundary layer, thus inducing chattering. This provides a very effective mechanism for detecting change in the system model and is utilized in the IBSS/SVSF formulation in terms of segmentation. Hence, instead of taking segments continuously, a data segment of length 200 time steps is taken only once chattering is detected.
The changes in the parameters are made randomly, and each change lasts for more than 20000 time steps. Within a segment, the parameters are assumed to be constant. The IBSS attempts to estimate the filter parameters ω_n, ξ, and B, while the SVSF or the KF estimates the system states. The sampling time is 0.001 s. ω_n, B, and ξ randomly vary between 100 and 400 Hz, 1 and 100 mrad·s, and 0 and 1, respectively. The process and measurement noise are assumed to be white and Gaussian with a noise-to-signal ratio of 10%. For the KF, the system and measurement noise covariance matrices are defined as

Q = R = [5×10⁻¹⁴, 3×10⁻¹⁶, 3×10⁻¹⁴; 3×10⁻¹⁶, 1×10⁻¹², 1×10⁻¹³; 3×10⁻¹⁴, 1×10⁻¹³, 1×10⁻⁸], (38)

and the initial error covariance matrix has a value of P_0 = I_{3×3}. For the SVSF, the coefficient matrix γ has a value of γ = 0.1×I_{3×3}. The smoothing boundary layer is designed to have a width of Ψ = [3×10⁻⁶  1.2×10⁻⁵  6×10⁻³]^T. The input consists of a random signal superimposed on step changes, as shown in Figure 12.
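The discretized EHA model of eq. (37) is easy to assemble for a given parameter triple; a sketch (an illustrative helper, with hypothetical argument names):

```python
import numpy as np

def eha_model(omega_n, xi, gain, Ts=0.001):
    """State-transition matrix A and input vector B of the discretized
    third-order EHA model in eq. (37)."""
    A = np.array([[1.0, Ts, 0.0],
                  [0.0, 1.0, Ts],
                  [0.0, -omega_n ** 2 * Ts, 1.0 - 2.0 * xi * omega_n * Ts]])
    B = np.array([0.0, 0.0, gain * Ts])
    return A, B
```
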

Table 3: The parameters of the EHA proposed in [13].
Figure 11: EHA’s (a) components and (b) prototype [18].
Figure 12: The input signal to the IBSS simulation.
5.2. Simulation Results of the IBSS/KF Application

The results of the application of the IBSS/KF are shown in Figures 13–18. The figures show that the IBSS/KF gives good, stable, and robust performance despite the presence of modeling errors. The IBSS provides the KF with a tuned model, and the KF uses this model to estimate the states. The system and measurement noise affect the results: the estimation error grows as the noise amplitudes increase.

Figure 13: (a) 𝑥1's actual and estimated values, (b) estimation error of 𝑥1 obtained by using the IBSS/KF.
Figure 14: (a) 𝑥2's actual and estimated values, (b) estimation error of 𝑥2 obtained by using the IBSS/KF.
Figure 15: (a) 𝑥3's actual and estimated values, (b) estimation error of 𝑥3 obtained by using the IBSS/KF.
Figure 16: (a) 𝜉's actual and estimated values, (b) estimation error of 𝜉 obtained by using the IBSS/KF.
Figure 17: (a) 𝜔𝑛's actual and estimated values, (b) estimation error of 𝜔𝑛 obtained by using the IBSS/KF.
Figure 18: (a) gain's actual and estimated values, (b) estimation error of gain obtained by using the IBSS/KF.
5.3. Simulation Results of the IBSS/SVSF Application

The results of the application of the IBSS/SVSF are shown in Figures 19–24. The figures show that the IBSS/SVSF, similarly to the IBSS/KF, gives good, stable, and robust performance despite the presence of modeling errors.

Figure 19: (a) 𝑥1's actual and estimated values, (b) estimation error of 𝑥1 obtained by using the IBSS/SVSF.
Figure 20: (a) 𝑥2's actual and estimated values, (b) estimation error of 𝑥2 obtained by using the IBSS/SVSF.
Figure 21: (a) 𝑥3's actual and estimated values, (b) estimation error of 𝑥3 obtained by using the IBSS/SVSF.
Figure 22: (a) 𝜉's actual and estimated values, (b) estimation error of 𝜉 obtained by using the IBSS/SVSF.
Figure 23: (a) 𝜔𝑛's actual and estimated values, (b) estimation error of 𝜔𝑛 obtained by using the IBSS/SVSF.
Figure 24: (a) Gain's actual and estimated values, (b) estimation error of gain obtained by using the IBSS/SVSF.
5.4. Discussion

The two methods, IBSS/SVSF and IBSS/KF, are compared in terms of the following.

(i) The root mean square error ($\mathrm{RMSE}_j$), defined as
$$\mathrm{RMSE}_j=\sqrt{\frac{1}{\operatorname{length}(\mathbf{x})}\sum_{i=1}^{\operatorname{length}(\mathbf{x})}\left(y_{j_i}-\hat{y}_{j_i}\right)^2},\quad\text{for }y=x_1,x_2,x_3,\xi,\omega_n,\text{ and }B. \tag{39}$$

(ii) The maximum absolute error ($\mathrm{MaxError}_j$), equal to
$$\mathrm{MaxError}_j=\max\left|y_{j_i}-\hat{y}_{j_i}\right|,\quad\text{for }y=x_1,x_2,x_3,\xi,\omega_n,\text{ and }B. \tag{40}$$

(iii) The variance of the error ($\mathrm{VarError}_j$), equal to
$$\mathrm{VarError}_j=\frac{1}{\operatorname{length}(\mathbf{x})-1}\sum_{i=1}^{\operatorname{length}(\mathbf{x})}\left(\left(y_{j_i}-\hat{y}_{j_i}\right)-\frac{1}{\operatorname{length}(\mathbf{x})}\sum_{i=1}^{\operatorname{length}(\mathbf{x})}\left(y_{j_i}-\hat{y}_{j_i}\right)\right)^2,\quad\text{for }y=x_1,x_2,x_3,\xi,\omega_n,\text{ and }B. \tag{41}$$

Table 4 summarizes the comparison.
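The three metrics in (39)–(41) can be computed directly; the function names below are ours:

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error, as in (39)."""
    e = np.asarray(y) - np.asarray(y_hat)
    return float(np.sqrt(np.mean(e**2)))

def max_error(y, y_hat):
    """Maximum absolute error, as in (40)."""
    return float(np.max(np.abs(np.asarray(y) - np.asarray(y_hat))))

def var_error(y, y_hat):
    """Sample variance of the estimation error, as in (41)."""
    e = np.asarray(y) - np.asarray(y_hat)
    return float(np.sum((e - e.mean())**2) / (len(e) - 1))
```

Note that (41) is the sample variance of the error sequence, so a biased but consistently offset estimate can still have a small $\mathrm{VarError}_j$.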

Table 4: Comparison between the IBSS/KF and the IBSS/SVSF.

The IBSS/KF and the IBSS/SVSF both yield good results when estimating the states and the parameters. However, there are some differences between the two algorithms. The IBSS/KF cannot identify the exact time at which the parameters change. It divides the measurement signal into small segments and assumes that each segment is small enough that no changes happen within it. When a change does happen within a segment, the error increases, causing poor parameter estimates for that segment, as shown in Figure 25. The IBSS/SVSF does not have this problem: using the secondary indicators of performance allows the segment locations to be adapted according to the time at which the parameters change.
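The chattering-triggered segmentation that gives the IBSS/SVSF this advantage can be sketched as follows. The boundary-layer widths are the values reconstructed from the simulation setup (exponent signs assumed), and the helper names (`chattering_detected`, `collect_segments`) are illustrative, not the authors' implementation:

```python
import numpy as np

# Smoothing boundary-layer widths from the simulation setup
# (reconstructed values; signs of the exponents are assumed).
PSI = np.array([3e-6, 1.2e-5, 6e-3])
SEGMENT_LEN = 200

def chattering_detected(error, psi=PSI):
    """Chattering: any a priori estimation error leaves its
    smoothing boundary layer."""
    return bool(np.any(np.abs(error) > psi))

def collect_segments(errors, measurements, seg_len=SEGMENT_LEN):
    """Capture one measurement segment per detected model change,
    instead of segmenting the signal continuously."""
    segments, k = [], 0
    while k < len(errors):
        if chattering_detected(errors[k]):
            segments.append(measurements[k:k + seg_len])
            k += seg_len        # resume scanning after the captured segment
        else:
            k += 1
    return segments
```

Because a segment starts exactly when chattering flags a model change, no segment straddles two different system models, which is the failure mode of the fixed-length IBSS/KF segmentation.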

Figure 25: The error when changes happen within the segment.

Further to Figure 25, the IBSS/KF and the IBSS/SVSF are both able to estimate the states and the parameters. However, their results differ in terms of adaptation, variance of the error, and the time needed to estimate the parameters and the states, as shown in Table 4. The noise amplitude affects the IBSS/KF more than the IBSS/SVSF, and the profiles of the estimated parameters are smoother for the IBSS/SVSF than for the IBSS/KF, as shown in Figure 26.

Figure 26: Comparison between the IBSS/KF and the IBSS/SVSF in terms of the damping ratio profile.

The IBSS/SVSF requires less time than the IBSS/KF to estimate the states and the parameters. Continuously taking and analyzing every segment takes longer than taking and analyzing one segment per parameter-change interval. As a result, the IBSS/KF takes more than twelve times as long as the IBSS/SVSF, as shown in Table 4.

Changing the segment length does not affect the IBSS/SVSF, while it greatly impacts the IBSS/KF. For example, reducing the segment length to 100 time steps makes the IBSS/SVSF 1.3 times faster than the value reported in Table 4 without affecting its performance. This reduction can also lower the overall RMSE of the IBSS/KF, because the segments containing parametric changes become smaller. When a change occurs inside a segment, the model is incorrectly estimated because it is based on two partially different system models; if the segment is small, this error becomes negligible in the overall RMSE. However, this comes at the expense of computational time, which almost triples.

6. Conclusion

A novel iterative parameter estimation technique, referred to as the iterative bi-section/shooting method (IBSS), is proposed. The IBSS is a search technique used to obtain model parameters for systems in which only the model structure is known. The IBSS is combined with the SVSF and with the KF, and these combined methods are applied to an electrohydrostatic actuator with randomly changing parameters. The results show the superior performance of the IBSS/SVSF, which enables the extraction of all parameters and states using only the measurement signals.


References

1. S. Habibi, “Performance measures of the variable structure filter,” Transactions of the Canadian Society for Mechanical Engineering, vol. 29, no. 2, pp. 267–295, 2005.
2. M. Grewal and A. Andrews, Kalman Filtering: Theory and Practice Using MATLAB, John Wiley & Sons, Hoboken, NJ, USA, 2nd edition, 2001.
3. S. Habibi and R. Burton, “The variable structure filter,” Journal of Dynamic Systems, Measurement, and Control, vol. 125, no. 3, pp. 287–293, 2003.
4. R. Kalman, “A new approach to linear filtering and prediction problems,” Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
5. G. Welch and G. Bishop, An Introduction to the Kalman Filter, University of North Carolina, Chapel Hill, NC, USA, 2006.
6. S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
7. Y. Bar-Shalom, X. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Hoboken, NJ, USA, 2001.
8. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, John Wiley & Sons, Hoboken, NJ, USA, 1949.
9. D. Simon, “From here to infinity,” Embedded Systems Programming, vol. 14, no. 11, pp. 20–30, 2001.
10. D. Simon, Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches, John Wiley & Sons, Hoboken, NJ, USA, 2006.
11. B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, Boston, Mass, USA, 2004.
12. S. Habibi, “The extended variable structure filter,” Journal of Dynamic Systems, Measurement, and Control, vol. 128, no. 2, pp. 341–351, 2006.
13. S. Habibi, S. Wang, and R. Burton, A Smooth Variable Structure Filter for State Estimation, International Association of Science and Technology for Development, Washington, DC, USA, 2007.
14. S. Habibi, “Parameter estimation using a combined variable structure and Kalman filtering approach,” Journal of Dynamic Systems, Measurement, and Control, vol. 130, no. 5, pp. 051004-1–051004-14, 2008.
15. S. Habibi, The Variable Structure Filter and Its Application to a New Micro-Precision Actuation System, Mechanical Engineering Department, McMaster University, Hamilton, Canada, 2006.
16. L. Han and S. Habibi, A Fuzzy-Kalman Filtering Strategy for State Estimation, University of Saskatchewan, Saskatoon, Canada, 2004.
17. A. Kaw and E. Kalu, Numerical Methods with Applications: Abridged, Lulu, Raleigh, NC, USA, 1st edition, 2011.
18. S. R. Habibi and R. Burton, “Parameter identification for a high-performance hydrostatic actuation system using the variable structure filter concept,” Journal of Dynamic Systems, Measurement, and Control, vol. 129, no. 2, pp. 229–235, 2007.
19. S. Boyd, V. Balakrishnan, and P. Kabamba, “A bisection method for computing the H∞ norm of a transfer matrix and related problems,” Mathematics of Control, Signals, and Systems, vol. 2, no. 3, pp. 207–219, 1989.
20. R. E. Moore, Methods and Applications of Interval Analysis, SIAM, Philadelphia, Pa, USA, 1979.
21. S. Wang, Integrated control and estimation based on sliding mode control applied to electrohydraulic actuator, Ph.D. thesis, University of Saskatchewan, Saskatoon, Canada, 2007.
22. S. Habibi and A. Goldenberg, “Design of a new high-performance electrohydraulic actuator,” IEEE/ASME Transactions on Mechatronics, vol. 5, no. 2, pp. 158–164, 2000.