Journal of Applied Mathematics

Special Issue: Modeling and Control of Complex Dynamic Systems: Applied Mathematical Aspects

Research Article | Open Access | Volume 2012 | Article ID 457603

Ruili Dong, Yonghong Tan, Hui Chen, Yangqiu Xie, "Nonsmooth Recursive Identification of Sandwich Systems with Backlash-Like Hysteresis", Journal of Applied Mathematics, vol. 2012, Article ID 457603, 16 pages, 2012.

Nonsmooth Recursive Identification of Sandwich Systems with Backlash-Like Hysteresis

Academic Editor: Zhiwei Gao
Received: 29 Mar 2012
Revised: 10 Jun 2012
Accepted: 10 Jun 2012
Published: 15 Jul 2012


A recursive gradient identification algorithm based on the bundle method for sandwich systems with backlash-like hysteresis is presented in this paper. In this method, a dynamic parameter estimation scheme based on a subgradient is developed to handle the nonsmooth problem caused by the backlash embedded in the system. The search direction of the algorithm is estimated based on the so-called bundle method. Then, the convergence of the algorithm is discussed. After that, simulation results on a nonsmooth sandwich system are presented to validate the proposed estimation algorithm. Finally, the application of the proposed method to an X-Y moving positioning stage is illustrated.

1. Introduction

Usually, a sandwich system with backlash-like hysteresis is defined as a system in which a backlash-like hysteresis is sandwiched between two linear dynamic subsystems. In engineering applications, many mechanical systems, such as mechanical transmission systems, servo control systems, and hydraulic valve systems, can be described by such sandwich systems. The backlash-like hysteresis phenomenon is mainly caused by the gaps in transmission mechanisms such as gearboxes and ball screws.

Recently, identification of sandwich systems has become one of the interesting issues in the domain of modeling and control of complex systems. References [1–3] proposed recursive identification methods for sandwich systems with smooth nonlinearities. The main idea of those approaches is to extend linear system identification methods to smooth nonlinear cases. Moreover, there have been some methods for the identification of Hammerstein or Wiener systems with backlash-like hysteresis [4–8], most of which are modified linear system identification methods.

However, until today, there have been very few publications concerning the identification of sandwich systems with backlash-like hysteresis. Reference [9] proposed a method to identify such systems, but that approach is still based on the idea of extending a linear system identification method to nonlinear cases. Moreover, the switching functions in that method have a significant influence on the convergence speed of the algorithm.

In this paper, a recursive gradient algorithm based on the bundle method is proposed to identify the parameters of the sandwich model. In this algorithm, the effect of the nonsmoothness caused by the backlash-like hysteresis in the sandwich system is considered. In order to obtain the optimizing search direction at the nonsmooth points of the system, the Clarke subgradient technique is utilized based on the idea of the bundle method [10–12]. Compared with the above-mentioned available methods, the proposed method employs a nonsmooth optimization technique to identify nonsmooth sandwich systems with backlash-like hysteresis. Thus, it provides a new approach to on-line modeling of nonsmooth dynamic systems. A numerical example is presented to evaluate the performance of the proposed approach. Finally, experimental results on an 𝑋-𝑌 moving positioning stage are illustrated.

2. Brief Description of Sandwich Systems with Backlash

The structure of a sandwich system with backlash-like hysteresis is shown in Figure 1, in which a backlash-like hysteresis is embedded between the input and output linear subsystems, that is, 𝐿1(·) and 𝐿2(·). It is assumed that input 𝑢(𝑘) and output 𝑦(𝑘) can be measured directly, but the internal variables 𝑥(𝑘) and 𝑣(𝑘) are not measurable.

Suppose that both linear subsystems are stable and that the time delays q1 and q2 in 𝐿1(·) and 𝐿2(·) are known. The corresponding discrete-time models of 𝐿1(·) and 𝐿2(·) are, respectively,

$$x(k) = \sum_{i_2=1}^{n_a} a_{i_2}\, x(k-i_2) + \sum_{j_2=0}^{n_b} b_{j_2}\, u(k-q_1-j_2),\qquad
y(k) = \sum_{i_1=1}^{n_c} c_{i_1}\, y(k-i_1) + \sum_{j_1=0}^{n_d} d_{j_1}\, v(k-q_2-j_1), \tag{2.1}$$

where n_a and n_b are the orders of 𝐿1(·), q1 is its time delay, and a_{i2} and b_{j2} are its coefficients; n_c and n_d are the orders of 𝐿2(·), q2 is its time delay, and c_{i1} and d_{j1} are its coefficients. Let both b0 and d0 be equal to unity for a unique representation.

Note that the backlash-like hysteresis shown in Figure 1 is specified by the slopes m1 and m2 as well as the absolute thresholds D1 and D2, where 0 < m1 < ∞, 0 < m2 < ∞, 0 < D1 < ∞, and 0 < D2 < ∞. Hence, the discrete-time model of the backlash-like hysteresis is

$$v(k) = \begin{cases}
m_1\,[x(k) - D_1], & x(k) > \frac{v(k-1)}{m_1} + D_1,\ x(k) > x(k-1) & \text{(increase zone)},\\
v(k-1), & \frac{v(k-1)}{m_2} - D_2 \le x(k) \le \frac{v(k-1)}{m_1} + D_1 & \text{(memory zone)},\\
m_2\,[x(k) + D_2], & x(k) < \frac{v(k-1)}{m_2} - D_2,\ x(k) < x(k-1) & \text{(decrease zone)}.
\end{cases} \tag{2.2}$$
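As a concrete check of the three zones, one step of (2.2) can be traced through a load/unload cycle in a minimal Python sketch; the parameter values m1 = 1, m2 = 1.2, D1 = 0.5, D2 = 0.6 are taken from the simulation example in Section 5, and the function name is ours:

```python
def backlash(x, x_prev, v_prev, m1=1.0, m2=1.2, D1=0.5, D2=0.6):
    """One step of the backlash-like hysteresis model (2.2)."""
    if x > x_prev and x > v_prev / m1 + D1:   # increase zone
        return m1 * (x - D1)
    if x < x_prev and x < v_prev / m2 - D2:   # decrease zone
        return m2 * (x + D2)
    return v_prev                             # memory zone: output is held

# Trace a loading/unloading cycle of the internal signal x(k).
xs = [1.0, 2.0, 1.5, 0.0]
v, x_prev, vs = 0.0, 0.0, []
for x in xs:
    v = backlash(x, x_prev, v)
    vs.append(v)
    x_prev = x
```

Note how the third sample (x going from 2.0 down to 1.5) falls inside the memory zone, so the output stays at its previous value before the decrease branch finally activates.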

For convenience of describing the system, the discrete-time model of the backlash-like hysteresis can be rewritten as

$$m(k) = m_1 + (m_2 - m_1)\,g(k),\qquad
v_1(k) = m(k)\,[x(k) - D_1 g_1(k) + D_2 g_2(k)],$$
$$v(k) = v_1(k) + [v(k-1) - v_1(k)]\,[1 - g_1(k)]\,[1 - g_2(k)], \tag{2.3}$$

where the switching functions g(k), g1(k), and g2(k) are, respectively, defined as

$$g(k) = \begin{cases} 0, & \Delta x(k) > 0,\\ 1, & \Delta x(k) \le 0, \end{cases}\qquad
g_1(k) = \begin{cases} 1, & x(k) > \frac{v(k-1)}{m_1} + D_1,\ x(k) > x(k-1),\\ 0, & \text{else}, \end{cases}\qquad
g_2(k) = \begin{cases} 1, & x(k) < \frac{v(k-1)}{m_2} - D_2,\ x(k) < x(k-1),\\ 0, & \text{else}, \end{cases} \tag{2.4}$$

where Δx(k) = x(k) − x(k−1).
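The switching-function form is algebraically equivalent to the piecewise model: g1 and g2 select the increase and decrease branches, and when both are zero the previous output is held. A short sketch (helper names are ours) compares the two forms along a random walk of the internal signal:

```python
import random

def backlash_piecewise(x, xp, vp, m1, m2, D1, D2):
    # Piecewise form of the backlash-like hysteresis, cf. (2.2)
    if x > xp and x > vp / m1 + D1:
        return m1 * (x - D1)
    if x < xp and x < vp / m2 - D2:
        return m2 * (x + D2)
    return vp

def backlash_switching(x, xp, vp, m1, m2, D1, D2):
    # Switching-function form, cf. (2.3)-(2.4)
    g1 = 1 if (x > xp and x > vp / m1 + D1) else 0
    g2 = 1 if (x < xp and x < vp / m2 - D2) else 0
    g = 0 if (x - xp) > 0 else 1
    m = m1 + (m2 - m1) * g                 # slope selector m(k)
    v1 = m * (x - D1 * g1 + D2 * g2)
    return v1 + (vp - v1) * (1 - g1) * (1 - g2)

random.seed(0)
xp, vp_a, vp_b, max_gap = 0.0, 0.0, 0.0, 0.0
for _ in range(500):
    x = xp + random.uniform(-1.0, 1.0)
    vp_a = backlash_piecewise(x, xp, vp_a, 1.0, 1.2, 0.5, 0.6)
    vp_b = backlash_switching(x, xp, vp_b, 1.0, 1.2, 0.5, 0.6)
    max_gap = max(max_gap, abs(vp_a - vp_b))
    xp = x
```

The two trajectories agree exactly: when g1 = 1 the increase branch forces Δx > 0, so g = 0 and m(k) = m1; symmetrically for the decrease branch.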

Thus, (2.1)–(2.3) constitute the model of the sandwich system with backlash-like hysteresis. The unknown parameter vector of the model, 𝜽 ∈ R^{n_a+n_b+n_c+n_d+4}, can be written as

$$\boldsymbol{\theta} = \big[c_1, \ldots, c_{n_c},\ a_1, \ldots, a_{n_a},\ m_1,\ m_2,\ D_1,\ D_2,\ b_1, \ldots, b_{n_b},\ d_1, \ldots, d_{n_d}\big]^T. \tag{2.5}$$

According to the concept of the gradient algorithm, define the objective function as

$$Q\big(k, \hat{\boldsymbol{\theta}}(k)\big) = \frac{1}{2}\sum_{k=1}^{n} \big[y(k) - \hat{y}\big(k, \hat{\boldsymbol{\theta}}(k)\big)\big]^2 = \frac{1}{2}\sum_{k=1}^{n} f\big(k, \hat{\boldsymbol{\theta}}(k)\big), \tag{2.6}$$

where 𝜽̂ is the estimate of 𝜽 and ŷ(k, 𝜽̂(k)) is the output of the system model. The optimal estimate of 𝜽 is obtained by minimizing this criterion.

3. The Nonsmooth Estimation of the Sandwich Model with Backlash-Like Hysteresis

In this section, a gradient-based identification algorithm is proposed for the sandwich system with backlash-like hysteresis. Due to the nonsmoothness of the backlash, the gradients of the system output with respect to the parameters of the backlash do not exist at nonsmooth points. Smooth gradient-based methods applied directly to nonsmooth systems may fail to converge [13]. On the other hand, genetic algorithms [14] or Powell's method [15], which are based on derivative-free techniques, may be unreliable and inefficient when the system structure is complicated. Thus, a special approach is needed for this problem. The simplest one is to apply Clarke subgradients [11] to approximate the gradients at the nonsmooth points.

The basic idea of the bundle method is to approximate the subdifferential of the nonsmooth objective function Q(k, 𝜽̂(k)) with respect to 𝜽̂(k) by gathering the subgradients from previous iterations into a bundle. The gradient of Q(k, 𝜽̂(k)) can change discontinuously, and the change may not be small in the neighborhood of the minimum of the function, so the values of Q(k, 𝜽̂(k)) and ∂Q(k, 𝜽̂(k)) at a single point 𝜽̂(k) do not offer sufficient information about the local behavior of Q(k, 𝜽̂(k)). Details of the bundle method can be found in [10–12] and the references therein.

Considering that the sandwich system with backlash-like hysteresis is locally Lipschitz continuous, we have the following definition.

Definition 3.1 (see [11]). Let F: R^n × R → R be locally Lipschitz continuous. A Clarke subgradient of F at ξ is any dF(ξ) ∈ ∂F(ξ), where

$$\partial F(\xi) = \operatorname{conv}\Big\{\lim_{i\to\infty} \nabla F(\xi_i)\ :\ \xi_i \to \xi,\ \nabla F(\xi_i)\ \text{exists}\Big\}, \tag{3.1}$$

and "conv" denotes the convex hull of a set.
The set of all Clarke subgradients is the Clarke subdifferential of F at ξ, denoted by ∂F(ξ) [11].
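A one-dimensional illustration of Definition 3.1: for F(x) = |x|, the gradient equals −1 to the left of the kink and +1 to the right of it, so ∂F(0) = conv{−1, +1} = [−1, 1]. The following sketch (illustrative only; helper names are ours) collects the limiting gradients and forms convex combinations of them:

```python
def grad_abs(x):
    """Gradient of F(x) = |x| wherever it exists (x != 0)."""
    return 1.0 if x > 0 else -1.0

# Gradients along sequences approaching the kink at 0 from each side
left = [grad_abs(-10.0 ** -k) for k in range(1, 8)]
right = [grad_abs(+10.0 ** -k) for k in range(1, 8)]

# The Clarke subdifferential at 0 is conv{-1, +1} = [-1, 1]:
# every lam in [0, 1] gives a valid subgradient lam*(+1) + (1-lam)*(-1).
subgradients = [lam * 1.0 + (1 - lam) * (-1.0) for lam in (0.0, 0.25, 0.5, 1.0)]
```

This is exactly the structure the bundle method exploits: instead of a single (possibly misleading) gradient at the kink, it works with convex combinations of nearby gradients.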
Since the backlash-like hysteresis is a nonsmooth mapping, the gradients of v(k) with respect to the parameters of the backlash and 𝐿1(·) do not exist at nonsmooth points. Hence, collect the parameters of the backlash-like hysteresis and 𝐿1(·) in 𝝈 = {m1, m2, D1, D2, a1, …, a_{n_a}, b1, …, b_{n_b}} ∈ R^{4+n_a+n_b}. Considering the cost function (2.6), the gradients of f(·) with respect to 𝝈 do not exist at the nonsmooth points. At those points, the Clarke subdifferential of f(·) with respect to 𝝈, that is, ∂f(𝝈), can be obtained as

$$\partial f(\boldsymbol{\sigma}) = \operatorname{conv}\Big\{-\big[y(k) - \hat{y}\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big] \sum_{j_1=0}^{n_d} d_{j_1}\, \partial \hat{v}(k - j_1 - q_2)(\boldsymbol{\sigma})\Big\}, \tag{3.2}$$

where ∂v̂(k−j1−q2)(𝝈) = conv{∇v̂(k−j1−q2)(𝝈)}, and ∇v̂(k−j1−q2)(𝝈) is the gradient of v̂, the output of the backlash-like hysteresis, with respect to 𝝈 at the smooth points. The corresponding gradients of v̂(k−j1−q2) with respect to 𝝈 at the smooth points are

$$\nabla \hat{v}(k-j_1-q_2)(\boldsymbol{\sigma}) =
\begin{cases}
\big[\hat{x}(k-j_1-q_2) - \hat{D}_1(k-1),\ 0,\ -\hat{m}_1(k-1),\ 0,\\
\quad \hat{m}_1(k-1)\hat{x}(k-1-j_1-q_2), \ldots, \hat{m}_1(k-1)\hat{x}(k-n_a-j_1-q_2),\\
\quad \hat{m}_1(k-1)u(k-1-j_1-q_2-q_1), \ldots, \hat{m}_1(k-1)u(k-n_b-j_1-q_2-q_1)\big]^T, & \text{in increase zones},\\[4pt]
[0,\ 0,\ 0,\ 0,\ 0, \ldots, 0,\ 0, \ldots, 0]^T, & \text{in memory zones},\\[4pt]
\big[0,\ \hat{x}(k-j_1-q_2) + \hat{D}_2(k-1),\ 0,\ \hat{m}_2(k-1),\\
\quad \hat{m}_2(k-1)\hat{x}(k-1-j_1-q_2), \ldots, \hat{m}_2(k-1)\hat{x}(k-n_a-j_1-q_2),\\
\quad \hat{m}_2(k-1)u(k-1-j_1-q_2-q_1), \ldots, \hat{m}_2(k-1)u(k-n_b-j_1-q_2-q_1)\big]^T, & \text{in decrease zones},
\end{cases} \tag{3.3}$$

where x̂(k) = Σ_{i2=1}^{n_a} â_{i2} x̂(k−i2) + Σ_{j2=0}^{n_b} b̂_{j2} u(k−q1−j2), and the coefficients â_{i2} and b̂_{j2} are the corresponding estimated values at the previous step.
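The per-zone entries of (3.3) follow from differentiating the active branch of the backlash: in the increase zone v = m1[x − D1], so ∂v/∂m1 = x − D1 and ∂v/∂D1 = −m1, while the memory-zone gradient is zero. A finite-difference sanity check at a hypothetical operating point (values are ours):

```python
def v_increase(m1, D1, x):
    """Backlash output in the increase zone: v = m1 * (x - D1)."""
    return m1 * (x - D1)

m1, D1, x, eps = 1.0, 0.5, 2.0, 1e-6

# Analytic per-zone entries corresponding to (3.3)
dv_dm1 = x - D1          # partial derivative w.r.t. the slope m1
dv_dD1 = -m1             # partial derivative w.r.t. the threshold D1

# Central finite differences for comparison
fd_m1 = (v_increase(m1 + eps, D1, x) - v_increase(m1 - eps, D1, x)) / (2 * eps)
fd_D1 = (v_increase(m1, D1 + eps, x) - v_increase(m1, D1 - eps, x)) / (2 * eps)
```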
Hence, based on (3.2) and (3.3), the Clarke subdifferential of 𝑓() with respect to 𝝈 can be obtained at nonsmooth points of the system. Besides, as 𝐿2(·) is a smooth function, the gradients of 𝑓() with respect to the parameters of the linear subsystems 𝐿2(·) always exist. So, the Clarke subdifferential of 𝑓() with respect to all the unknown parameters of the sandwich system can be determined.
The proper Clarke subgradient direction t(k, 𝜽̂(k)) of f(·) with respect to the estimated parameters at nonsmooth points is derived from

$$\min_{\varphi,\,\mathbf{t}}\ \varphi(k) + \frac{1}{2}\big\|\mathbf{t}\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big\|^2 \quad \text{s.t.}\quad -\beta_j(k) + \big\langle \mathbf{h}_j(k),\ \mathbf{t}\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big\rangle \le \varphi(k),\quad j\in J_k, \tag{3.4}$$

where ‖·‖ denotes the Euclidean norm; J_k is a nonempty subset of {1, …, k}; φ(k) is the predicted amount of descent; h_j(k) ∈ ∂f(k, 𝜽̂_j(k)) for j ∈ J_k, where the 𝜽̂_j(k) are trial points from past iterations; β_j(k) = max{|α_j(k)|, γ(s_j(k))²} is the locality measure of the subgradient; γ ≥ 0 is the distance measure parameter (γ = 0 if f(k, 𝜽̂(k)) is convex); α_j(k) = f(k, 𝜽̂(k)) − f(k, 𝜽̂_j(k)) − h_j^T(k)[𝜽̂(k) − 𝜽̂_j(k)] is the linearization error; and s_j(k) = ‖𝜽̂(j) − 𝜽̂_j(j)‖ + Σ_{i=j}^{k−1} ‖𝜽̂(i+1) − 𝜽̂(i)‖ is the distance measure that estimates ‖𝜽̂(k) − 𝜽̂_j(k)‖ without the requirement to store the trial points 𝜽̂_j(k).
Solving (3.4) gives t(k) and φ(k), that is,

$$\mathbf{t}\big(k,\hat{\boldsymbol{\theta}}(k)\big) = \sum_{j\in J_k} \lambda_j^k \mathbf{h}_j(k) = \sum_{j\in J_k} \big[y(k) - \hat{y}\big(k,\hat{\boldsymbol{\theta}}_j(k)\big)\big]\lambda_j^k \mathbf{w}_j(k) = e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\mathbf{h}(k), \tag{3.5}$$

$$\varphi(k) = -\big\|\mathbf{t}\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big\|^2 - \sum_{j\in J_k}\lambda_j^k \beta_j(k), \tag{3.6}$$

where w_j(k) = ∂ŷ(k, 𝜽)/∂𝜽 |_{𝜽 = 𝜽̂_j(k)}, e(k, 𝜽̂(k)) = y(k) − ŷ(k, 𝜽̂_j(k)), h(k) = Σ_{j∈J_k} λ_j^k w_j(k), λ_j^k ≥ 0, and Σ_{j∈J_k} λ_j^k = 1.
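For a bundle containing just two subgradients, the dual of (3.4) reduces to a one-dimensional quadratic over the convex weight λ ∈ [0, 1], with the search direction taken as the negated aggregated subgradient (the classical bundle convention; this sketch and its inputs are illustrative, not the paper's estimator):

```python
def bundle_direction(xi1, xi2, b1, b2):
    """Two-subgradient instance of the direction-finding problem (3.4):
    minimize 0.5*||lam*xi1 + (1-lam)*xi2||^2 + lam*b1 + (1-lam)*b2
    over lam in [0, 1]; returns (lam, t, phi)."""
    d = [a - b for a, b in zip(xi1, xi2)]
    dd = sum(c * c for c in d)
    if dd == 0.0:
        lam = 0.0                 # identical subgradients: weight is arbitrary
    else:
        # Unconstrained minimizer of the quadratic, clipped to [0, 1]
        lam = -(sum(a * b for a, b in zip(xi2, d)) + b1 - b2) / dd
        lam = min(1.0, max(0.0, lam))
    agg = [lam * a + (1 - lam) * b for a, b in zip(xi1, xi2)]
    t = [-c for c in agg]                                   # search direction
    phi = -sum(c * c for c in t) - (lam * b1 + (1 - lam) * b2)  # predicted descent, cf. (3.6)
    return lam, t, phi

# Two subgradients lying on opposite sides of a kink, zero locality measures.
lam, t, phi = bundle_direction([1.0, 2.0], [-1.0, 2.0], 0.0, 0.0)
```

The aggregation cancels the oscillating first component of the two subgradients, which a single-subgradient step cannot do; this is precisely why the bundle yields a usable direction at nonsmooth points.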

Remark 3.2. If f(·) is convex, the model f(k, 𝜽̂(k)) is an underestimate of f(·), and the nonnegative linearization error α_j(k) measures how well the model approximates the original cost function. If f(·) is nonconvex, these facts no longer hold: α_j(k) may take a small or even negative value even though the trial point 𝜽̂_j(k) is located far from the current iteration point 𝜽̂(k), in which case the corresponding subgradient h_j(k) is worthless. For these reasons, the locality measure β_j(k) is introduced.
Therefore, the proposed recursive gradient estimation algorithm based on the bundle method for the sandwich model with backlash-like hysteresis is as follows.

Step 1. Select a starting point 𝜽̂0 ∈ R^{n_a+n_b+n_c+n_d+4} and a stopping parameter δ > 0. Calculate f(k, 𝜽̂0) and a vector h_j(k) ∈ ∂f(k, 𝜽̂0), where j ∈ J_k, J_k = {k0}, |J_k| ≤ k1, |J_k| is the number of elements of J_k, and k1 is a given positive number. Set β_j(k) = 0, k = k0, and the line search parameters

$$q \in (0, 0.5),\qquad \bar{q} \in (q, 1),\qquad \eta(0) \in (0, 1]. \tag{3.7}$$

Step 2. Calculate the optimal solution (φ(k), t(k, 𝜽̂(k))) from (3.2)–(3.6). If −φ(k) ≤ δ, then stop.

Step 3. Search for the largest step size η(k) ∈ [0, 1] such that η(k) ≥ η(0) and

$$f\big(k, \hat{\boldsymbol{\theta}}(k) + \eta(k)\mathbf{t}(k)\big) \le f\big(k, \hat{\boldsymbol{\theta}}(k)\big) + q\,\eta(k)\varphi(k), \tag{3.8}$$

where

$$\varphi(k) = f\big(k, \hat{\boldsymbol{\theta}}(k) + \mathbf{t}(k)\big) - f\big(k, \hat{\boldsymbol{\theta}}(k)\big) < 0. \tag{3.9}$$

If such an η(k) exists, we take a long step and set 𝜽̂(k+1) = 𝜽̂(k) + η(k)t(k) and 𝜽̃(k+1) = 𝜽̂(k+1), where 𝜽̃ denotes the trial point; go to Step 4.
Otherwise, if 0 < η(k) < η(0) and (3.8) holds, we take a short step and set 𝜽̂(k+1) = 𝜽̂(k) + η(k)t(k) and 𝜽̃(k+1) = 𝜽̂(k) + η̃(k)t(k), where η̃(k) > η(k); go to Step 5.
If η(k) = 0 and (3.8) holds, we take a null step and set 𝜽̂(k+1) = 𝜽̂(k) and 𝜽̃(k+1) = 𝜽̂(k) + η̃(k)t(k); go to Step 5.

Step 4. Let J_k = J_k ∪ {k+1} and k = k + 1; if k ≤ k1, then J_k = {1, …, k}, and if k > k1, then J_k = (J_{k−1} ∪ {k}) \ {k − k1}; then go to Step 2.

Step 5. Let J_k = J_k ∪ {k+1} and k = k + 1; if k ≤ k1, then J_k = {1, …, k}, and if k > k1, then J_k = (J_{k−1} ∪ {k}) \ {k − k1}. Choose a proper Clarke subgradient h_j(k) satisfying

$$-\beta_j\big(k, \tilde{\boldsymbol{\theta}}(k)\big) + \mathbf{h}_j^T\big(k, \tilde{\boldsymbol{\theta}}(k)\big)\,\mathbf{t}\big(k, \hat{\boldsymbol{\theta}}(k-1)\big) \ge \bar{q}\,\varphi\big(k, \hat{\boldsymbol{\theta}}(k-1)\big); \tag{3.10}$$

then go to Step 2.
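The interaction of the descent test (3.8) with the step-size search can be illustrated on a deliberately simplified one-dimensional problem. This toy sketch keeps only the direction computation, the backtracking step search, and the stopping test of Step 2; it omits the bundle J_k, the locality measures, and the short/null-step bookkeeping, so it is an illustration rather than the full estimator:

```python
def f(th):
    """Toy nonsmooth objective with a kink at th = 2."""
    return abs(th - 2.0)

def subgrad(th):
    """A Clarke subgradient of f (0 is a valid choice at the kink)."""
    if th > 2.0:
        return 1.0
    if th < 2.0:
        return -1.0
    return 0.0

th, q, delta = 0.0, 0.3, 1e-8
for _ in range(100):
    h = subgrad(th)
    t = -h                          # descent direction
    phi = -t * t                    # predicted descent, cf. (3.6) with one subgradient
    if phi >= -delta:               # stopping test, cf. Step 2
        break
    eta = 1.0
    # Largest step satisfying the descent test (3.8), found by halving
    while eta > 1e-12 and f(th + eta * t) > f(th) + q * eta * phi:
        eta *= 0.5
    th = th + eta * t               # step update, cf. Step 3
```

Starting from th = 0, two full steps reach the kink exactly, where the zero subgradient triggers the stopping test.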

Remark 3.3. In a long step, there is an obvious decrease in the value of the objective function. Hence, it is unnecessary to detect discontinuities in the gradient of f(·), and we simply set h_j(k) ∈ ∂f(k, 𝜽̂(k)). On the other hand, in short steps and null steps the gradient of f(·) is discontinuous. Condition (3.10) then guarantees that 𝜽̂(k) and 𝜽̃(k) lie on opposite sides of this discontinuity, and the new subgradient h_j(k) ∈ ∂f(k, 𝜽̃(k)) forces an obvious modification of the next search direction. Hence, the algorithm approximates the effective search direction at nonsmooth points based on the bundle method, which cannot be realized by smooth optimization techniques.

Remark 3.4. If the value of η(0) is too small, the convergence speed will be very sluggish, while if η(0) is too large, the algorithm may not converge. Hence, it is important to choose η(0) properly. Usually, η(0) is chosen empirically.

Remark 3.5. If all the Clarke subgradients were included in J_k, the required storage would grow without bound. Hence, the number of subgradients in J_k must be constrained. In the proposed algorithm, we impose the upper bound |J_k| ≤ k1, and k1 is specified empirically.

4. Convergence of the Estimation

For the convergence of the above-mentioned estimation algorithm, we have the following theorem.

Theorem 4.1. Suppose that η(k) and β_j(k) satisfy

$$0 < \eta(k) \le \frac{2e^2(k)\,\mathbf{h}(k)\mathbf{h}^T(k) - \beta_j(k)}{e^2(k)\,\mathbf{h}^T(k)\mathbf{h}(k)\,\big[1 + \mathbf{h}(k)\mathbf{h}^T(k)\big]}, \tag{4.1}$$

$$\beta_j(k) < 2e^2(k)\,\mathbf{h}(k)\mathbf{h}^T(k), \tag{4.2}$$

respectively; then the parameter estimate 𝜽̂ converges to a local optimal value.
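The role of bound (4.1) can be checked numerically on a scalar linear model ŷ = wθ with hypothetical values (w = 1, true parameter θ* = 2, β_j = 0): for any step size up to the bound, one update of the form (A.1) does not increase the quadratic function L of (A.4):

```python
# Scalar linear model y_hat = w * theta; the true output is w * theta_star,
# so the prediction error is e = w * (theta_star - theta).
w, theta_star = 1.0, 2.0

def lyap(theta):
    """Quadratic function L from (A.4): squared parameter error plus squared output error."""
    e = w * (theta_star - theta)
    return (theta - theta_star) ** 2 + e ** 2

theta = 0.0
e, h, beta = w * (theta_star - theta), w, 0.0

# Step-size bound (4.1); h*h plays the role of both h h^T and h^T h here.
eta_max = (2 * e * e * h * h - beta) / (e * e * h * h * (1 + h * h))

L0 = lyap(theta)
# One gradient update theta + eta*e*h, cf. (A.1), for step sizes up to the bound
L_after = [lyap(theta + eta * e * h) for eta in (0.25 * eta_max, 0.5 * eta_max, eta_max)]
```

The bound is sufficient rather than necessary: L may still decrease for somewhat larger steps, but within the bound the decrease is guaranteed.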

Proof. The proof of this theorem is given in the Appendix.

5. Simulation

The proposed approach is used to identify a numerical sandwich system with backlash-like hysteresis based on the measured system input and output. Suppose that the parameters of the backlash-like hysteresis are m1 = 1, m2 = 1.2, D1 = 0.5, and D2 = 0.6. The linear subsystems 𝐿1(·) and 𝐿2(·) are

$$x(k) = 0.1x(k-1) - 0.2x(k-2) + 1.5u(k-1), \tag{5.1}$$
$$y(k) = 1.2y(k-1) - 0.32y(k-2) + 2v(k-1) - 0.1v(k-2), \tag{5.2}$$

respectively.

That implies a1 = 0.1, a2 = −0.2, b0 = 1.5, c1 = 1.2, c2 = −0.32, d0 = 2, and d1 = −0.1. In the simulation, both b0 and d0 are assumed to be equal to unity for model uniqueness, which implies that the corresponding equivalent true values of the coefficients are ã1 = 0.1, ã2 = −0.2, c̃1 = 1.2, c̃2 = −0.32, d̃1 = −0.05, m̃1 = 3, m̃2 = 3.6, D̃1 = 0.33, and D̃2 = 0.4, respectively; this normalization does not affect the input-output behavior of the whole system.
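Assuming the sign pattern a2 = −0.2, c2 = −0.32, d1 = −0.1 stated above (which makes both linear subsystems stable), the open-loop sandwich system can be simulated directly; the helper names and the deterministic test input are ours:

```python
def backlash(x, xp, vp, m1=1.0, m2=1.2, D1=0.5, D2=0.6):
    """Backlash-like hysteresis (2.2) with the Section 5 parameter values."""
    if x > xp and x > vp / m1 + D1:
        return m1 * (x - D1)
    if x < xp and x < vp / m2 - D2:
        return m2 * (x + D2)
    return vp

def simulate(u):
    """Simulate the sandwich system (5.1)-(5.2) from zero initial conditions."""
    n = len(u)
    x = [0.0] * (n + 1)
    v = [0.0] * (n + 1)
    y = [0.0] * (n + 1)
    for k in range(1, n + 1):
        x[k] = (0.1 * x[k - 1]
                - 0.2 * (x[k - 2] if k >= 2 else 0.0)
                + 1.5 * u[k - 1])                        # input subsystem L1, (5.1)
        v[k] = backlash(x[k], x[k - 1], v[k - 1])        # nonsmooth middle block
        y[k] = (1.2 * y[k - 1]
                - 0.32 * (y[k - 2] if k >= 2 else 0.0)
                + 2.0 * v[k - 1]
                - 0.1 * (v[k - 2] if k >= 2 else 0.0))   # output subsystem L2, (5.2)
    return x, v, y

x, v, y = simulate([1.0, 1.0, 1.0])
```

Such a simulator supplies the input-output data (u(k), y(k)) on which the recursive identification algorithm operates; x(k) and v(k) remain internal, unmeasured signals.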

In the simulation, the excitation signal is a random sequence with variance σ² = 0.49. Choose δ = 1.0 × 10⁻⁴. In the proposed algorithm, based on Remark 3.4, select η(0) = 0.015, k1 = 6, 𝜽̂0 = [0, 0, 0, 0, 0.1, 0.1, 0.1, 0.1, 0]^T, and β1(k0) = 0, respectively. For comparison, the traditional gradient method is also used to estimate the parameters of the system. In that method, the nonsmooth points of the system are simply skipped, since the gradients do not exist there. The initial parameter values are the same as those used in the proposed method, and the optimization step size is chosen as 0.009.

Figure 2 compares the parameter convergence procedures of the proposed method and the traditional gradient method. In Figure 2, blue solid lines denote the convergence procedures of the parameters estimated by the proposed method, while red dotted lines show those determined by the traditional gradient method. From Figure 2, we note that the parameters of the backlash-like hysteresis converge more slowly than those of the linear submodels, especially the input linear submodel. Moreover, the proposed method achieves faster convergence than the traditional gradient method, and oscillations and sharp jumps occurred in the estimation procedure of the traditional gradient approach.

In the case that the system is affected by random noise, the proposed strategy can still achieve good convergence of the parameter estimates. In the simulation with noise, the signal-to-noise ratio (SNR) is equal to 46.5. All the initial values of the parameters are the same as those in the noise-free case.

Figure 3 compares the convergence procedures of the estimated parameters in the noisy case between the proposed method and the traditional gradient approach. As in the noise-free case, the blue solid lines denote the parameters estimated by the proposed method, while the red dotted lines show those estimated by the traditional gradient method. Obviously, the proposed method converges faster than the traditional gradient method.

6. Application to an 𝑋-𝑌 Moving Positioning Stage

The proposed identification approach is also applied to the modeling of an 𝑋-𝑌 moving positioning stage with the architecture shown in Figure 4. In this equipment, the movement of the work platform of each axis is driven by a DC servomotor through a ball-screw-nut mechanism which transforms the rotational shaft movement into linear displacement. The servomotor is controlled by a digital signal processor (TMS320LF-2407A). The displacement of each axis is measured by a linear encoder (RGF2000H125B). The signals of both phase A and phase B encoders are decoded by a quadrature decoding circuit which is based on the decoding chip (Agilent HCTL-2020).

In this system, the servomotor can be considered a second-order linear dynamic subsystem, and the movement of the work platform is also described by a linear second-order dynamic model. Due to its inherent characteristics, both a dead zone and backlash-like hysteresis exist in this system. In order to simplify the identification procedure, the dead zone is compensated by a compensator based on a dead-zone inverse model. Thus, in the identification, only the effect of the backlash-like hysteresis in the ball-screw-nut mechanism is considered. Therefore, the identified system is a typical sandwich system with backlash-like hysteresis. Due to limited space, only the identification procedure of axis A is presented. The corresponding models used to describe the behavior of axis A are as follows:

(1) the input linear model (𝐿1):
$$x(k) = a_1 x(k-1) + a_2 x(k-2) + b_0 u(k-1); \tag{6.1}$$

(2) the model of the backlash-like hysteresis:
$$v(k) = \begin{cases}
m_1[x(k) - D_1], & x(k) > \frac{v(k-1)}{m_1} + D_1,\ x(k) > x(k-1),\\
v(k-1), & \frac{v(k-1)}{m_2} - D_2 \le x(k) \le \frac{v(k-1)}{m_1} + D_1,\\
m_2[x(k) + D_2], & x(k) < \frac{v(k-1)}{m_2} - D_2,\ x(k) < x(k-1);
\end{cases} \tag{6.2}$$

(3) the output linear model (𝐿2):
$$y(k) = c_1 y(k-1) + c_2 y(k-2) + d_0 v(k) + d_1 v(k-1), \tag{6.3}$$

where y(k) is the moving speed of the work platform.

Based on the operating requirement, a sequence of square wave plus sinusoidal wave is used to excite the system within the operating range. The corresponding amplitude of the input varies in the range between −1.09 V and 1.05 V, and the sample period is 0.5 ms.

In this model, both b0 and d0 are set to one. The initial values of the other parameters are chosen as η(0) = 0.00116, μ = 1, 𝜽̂0 = [0, 0, 0, 0, 1, 1, 0.001, 0.001, 0]^T, and β(k0) = 0. After 6700 steps, convergence of the estimation is achieved. Figure 5 illustrates the corresponding parameter estimation procedure, which converges quickly. Figure 6 shows the corresponding mean square error (MSE) of the parameter estimation. The MSE decreases sharply at the beginning, and at the 180th step a local minimum is found. After that, the algorithm jumps out of the local minimum, and the MSE gradually converges to a constant of about 0.4.

Then, the corresponding model validation result is shown in Figure 7(a), while Figure 7(b) compares the input-output plots of the proposed model and the real data. The maximum relative modeling error is less than 11%, and the obtained model accurately approximates the behavior of the 𝑋-𝑌 moving positioning stage. Hence, it can be concluded that the proposed identification method is promising for engineering applications.

7. Conclusion

In this paper, a recursive gradient-based identification algorithm for sandwich systems with backlash-like hysteresis is proposed. The subgradient is applied to the search for the gradient direction at the nonsmooth points of the system, and the bundle method is utilized to find the proper search direction at those points. Simulation results have shown that the proposed algorithm provides an option for the identification of nonsmooth dynamic systems and a novel way to identify more complicated nonsmooth systems. The experimental results on the 𝑋-𝑌 stage also show that the proposed method has potential in engineering applications.


Appendix

Proof of Theorem 4.1

Based on (3.5) and Step 3 of the algorithm,
$$\hat{\boldsymbol{\theta}}(k+1) = \hat{\boldsymbol{\theta}}(k) + \eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k). \tag{A.1}$$

Subtracting the local optimal value 𝜽1 from both sides of (A.1) leads to
$$\hat{\boldsymbol{\theta}}(k+1) - \boldsymbol{\theta}_1 = \hat{\boldsymbol{\theta}}(k) - \boldsymbol{\theta}_1 + \eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k). \tag{A.2}$$

Rewrite (A.2) as
$$\bar{\boldsymbol{\theta}}(k+1) = \bar{\boldsymbol{\theta}}(k) + \eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k), \tag{A.3}$$
where 𝜽̄(k+1) = 𝜽̂(k+1) − 𝜽1.

Choose the quadratic function
$$L(k+1) = \bar{\boldsymbol{\theta}}^T(k+1)\bar{\boldsymbol{\theta}}(k+1) + e^2\big(k,\hat{\boldsymbol{\theta}}(k+1)\big). \tag{A.4}$$

According to (A.3),
$$\bar{\boldsymbol{\theta}}^T(k+1)\bar{\boldsymbol{\theta}}(k+1) - \bar{\boldsymbol{\theta}}^T(k)\bar{\boldsymbol{\theta}}(k) = 2\eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\bar{\boldsymbol{\theta}}^T(k)\mathbf{h}(k) + \eta^2(k)\,e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}^T(k)\mathbf{h}(k). \tag{A.5}$$

As f(k, 𝜽) = [y(k) − ŷ(k, 𝜽)]², if (3.5) holds, then the cutting-plane model is
$$\hat{e}_1(k,\boldsymbol{\theta}) = \max\Big\{\big[y(k) - \hat{y}\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big]^2 - 2e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big[\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}(k)\big]^T\mathbf{h}(k) - \beta_j(k)\Big\}. \tag{A.6}$$

Based on the definitions of β_j(k) and α_j(k), as well as the idea of the bundle method, we know that ê1(k, 𝜽) ≤ f(k, 𝜽). Thus, (A.6) yields
$$-2e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\big[\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}(k)\big]^T\mathbf{h}(k) \le \beta_j(k). \tag{A.7}$$

Choosing 𝜽 = 𝜽1 in (A.7) yields
$$2e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\bar{\boldsymbol{\theta}}^T(k)\mathbf{h}(k) \le \beta_j(k). \tag{A.8}$$

Based on (A.5) and (A.8), we obtain
$$\bar{\boldsymbol{\theta}}^T(k+1)\bar{\boldsymbol{\theta}}(k+1) - \bar{\boldsymbol{\theta}}^T(k)\bar{\boldsymbol{\theta}}(k) \le \eta(k)\beta_j(k) + \eta^2(k)\,e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}^T(k)\mathbf{h}(k). \tag{A.9}$$

According to the idea of the gradient algorithm and [16],
$$e\big(k,\hat{\boldsymbol{\theta}}(k+1)\big) = e\big(k,\hat{\boldsymbol{\theta}}(k)\big) + \Delta e\big(k,\hat{\boldsymbol{\theta}}(k)\big). \tag{A.10}$$

Hence, the change of e(k) is
$$\Delta e\big(k,\hat{\boldsymbol{\theta}}(k)\big) = \bigg[\frac{\partial e\big(k,\hat{\boldsymbol{\theta}}(k)\big)}{\partial \hat{\boldsymbol{\theta}}(k)}\bigg]^T \Delta\hat{\boldsymbol{\theta}}(k) = -\mathbf{h}(k)\,\Delta\hat{\boldsymbol{\theta}}(k). \tag{A.11}$$

According to (3.5) and Step 3 of the algorithm,
$$\Delta\hat{\boldsymbol{\theta}}(k) = \eta(k)\mathbf{t}(k) = \eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}^T(k). \tag{A.12}$$

From (A.11) and (A.12), we get
$$\Delta e\big(k,\hat{\boldsymbol{\theta}}(k)\big) = -\eta(k)\,e\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k)\mathbf{h}^T(k). \tag{A.13}$$

According to (A.10) and (A.13), we obtain
$$e^2\big(k,\hat{\boldsymbol{\theta}}(k+1)\big) - e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big) = -2\eta(k)\,e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k)\mathbf{h}^T(k) + \eta^2(k)\,e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\big[\mathbf{h}(k)\mathbf{h}^T(k)\big]^2. \tag{A.14}$$

Based on (A.4), (A.9), and (A.14),
$$L(k+1) - L(k) \le \eta(k)\Big[\beta_j(k) + \eta(k)\,e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}^T(k)\mathbf{h}(k)\big(1 + \mathbf{h}(k)\mathbf{h}^T(k)\big) - 2e^2\big(k,\hat{\boldsymbol{\theta}}(k)\big)\,\mathbf{h}(k)\mathbf{h}^T(k)\Big]. \tag{A.15}$$

If 0 < η(k) ≤ [2e²(k, 𝜽̂(k))h(k)h^T(k) − β_j(k)] / {e²(k, 𝜽̂(k))h^T(k)h(k)[1 + h(k)h^T(k)]} and β_j(k) < 2e²(k, 𝜽̂(k))h(k)h^T(k), then
$$L(k+1) - L(k) \le 0. \tag{A.16}$$
Hence, the parameter estimate 𝜽̂ converges to a local optimal value.


Acknowledgments

This work was supported by the projects of Shanghai Normal University (DZL811, DRL904, and DYL201005), the projects of the Shanghai Education Commission (11YZ92), the NSFC projects (Grant nos. 60971004 and 61171088), and the projects of the Science and Technology Commission of Shanghai (09220503000, 10JC1412200, and 09ZR1423400).


References

1. N. J. Bershad, P. Celka, and S. McLaughlin, "Analysis of stochastic gradient identification of Wiener-Hammerstein systems for nonlinearities with Hermite polynomial expansions," IEEE Transactions on Signal Processing, vol. 49, no. 5, pp. 1060–1071, 2001.
2. A. H. Tan and K. Godfrey, "Identification of Wiener-Hammerstein models using linear interpolation in the frequency domain (LIFRED)," IEEE Transactions on Instrumentation and Measurement, vol. 51, no. 3, pp. 509–521, 2002.
3. M. Boutayeb and M. Darouach, "Recursive identification method for MISO Wiener-Hammerstein model," IEEE Transactions on Automatic Control, vol. 40, no. 2, pp. 287–291, 1995.
4. V. Cerone and D. Regruto, "Bounding the parameters of linear systems with input backlash-like hysteresis," in Proceedings of the American Control Conference, pp. 4476–4481, Portland, Ore, USA, June 2005.
5. E.-W. Bai, "Identification of linear systems with hard input nonlinearities of known structure," Automatica, vol. 38, no. 5, pp. 853–860, 2002.
6. F. Giri, Y. Rochdi, F. Z. Chaoui, and A. Brouri, "Identification of Hammerstein systems in presence of hysteresis-backlash and hysteresis-relay nonlinearities," Automatica, vol. 44, no. 3, pp. 767–775, 2008.
7. R. Dong, Y. Tan, and H. Chen, "Recursive identification for dynamic systems with backlash," Asian Journal of Control, vol. 12, no. 1, pp. 26–38, 2010.
8. R. Dong, Q. Tan, and Y. Tan, "Recursive identification for dynamic systems with output backlash-like hysteresis and its convergence," International Journal of Applied Mathematics and Computer Science, vol. 19, no. 4, pp. 631–638, 2009.
9. R. Dong and R. Tan, "Online identification algorithm and convergence analysis for sandwich systems with backlash," International Journal of Control, Automation and Systems, vol. 9, no. 3, pp. 1–7, 2011.
10. C. Eitzinger, "Nonsmooth training of fuzzy neural networks," Soft Computing, vol. 8, pp. 443–448.
11. M. M. Mäkelä, M. Miettinen, L. Lukšan, and J. Vlček, "Comparing nonsmooth nonconvex bundle methods in solving hemivariational inequalities," Journal of Global Optimization, vol. 14, no. 2, pp. 117–135, 1999.
12. S. A. Miller, An inexact bundle method for solving large structured linear matrix inequality [Doctoral dissertation], University of California, Santa Barbara, Calif, USA, 2001.
13. C. Lemaréchal, "Nondifferentiable optimization," in Optimization, G. L. Nemhauser, A. H. G. Rinnooy Kan, and M. J. Todd, Eds., vol. 1, pp. 529–572, North-Holland, Amsterdam, The Netherlands, 1989.
14. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1998.
15. R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
16. C.-C. Ku and K. Y. Lee, "Diagonal recurrent neural networks for dynamic systems control," IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 144–156, 1995.

Copyright © 2012 Ruili Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
