Journal of Applied Mathematics
Volume 2012 (2012), Article ID 684074, 14 pages
http://dx.doi.org/10.1155/2012/684074
Research Article

Least-Squares Parameter Estimation Algorithm for a Class of Input Nonlinear Systems

1Key Laboratory of Advanced Process Control for Light Industry of Ministry of Education, Jiangnan University, Wuxi 214122, China
2School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China

Received 18 March 2012; Accepted 26 April 2012

Academic Editor: Morteza Rafei

Copyright © 2012 Weili Xiong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper studies least-squares parameter estimation algorithms for input nonlinear systems, including the input nonlinear controlled autoregressive (IN-CAR) model and the input nonlinear controlled autoregressive autoregressive moving average (IN-CARARMA) model. The basic idea is to obtain linear-in-parameters models by overparameterizing such nonlinear systems and to use the least-squares algorithm to estimate the unknown parameter vectors. It is proved that the parameter estimates consistently converge to their true values under the persistent excitation condition. A simulation example is provided.

1. Introduction

Parameter estimation has received much attention in many areas such as linear and nonlinear system identification and signal processing [1–9]. Nonlinear systems can be broadly divided into input nonlinear systems, output nonlinear systems, feedback nonlinear systems, input and output nonlinear systems, and so forth. Hammerstein models describe a class of input nonlinear systems consisting of a static nonlinear block followed by a linear dynamical subsystem [10, 11].

Nonlinear systems are common in industrial processes, for example, dead-zone nonlinearities and valve saturation nonlinearities. Many estimation methods have been developed to identify the parameters of nonlinear systems, especially Hammerstein nonlinear systems [12, 13]. For example, Ding et al. presented a least-squares-based iterative algorithm and a recursive extended least squares algorithm for Hammerstein ARMAX systems [14] and an auxiliary model-based recursive least squares algorithm for Hammerstein output error systems [15]. Wang and Ding proposed an extended stochastic gradient identification algorithm for Hammerstein-Wiener ARMAX systems [16].

Recently, Wang et al. derived an auxiliary model-based recursive generalized least-squares parameter estimation algorithm for Hammerstein output error autoregressive systems and auxiliary model-based RELS and MI-ELS algorithms for Hammerstein output error moving average systems using the key term separation principle [17, 18]. Ding et al. presented a projection estimation algorithm and a stochastic gradient (SG) estimation algorithm for Hammerstein nonlinear systems by using the gradient search and further derived a Newton recursive estimation algorithm and a Newton iterative estimation algorithm by using the Newton method (Newton-Raphson method) [19]. Wang and Ding studied least-squares-based and gradient-based iterative identification methods for Wiener nonlinear systems [20].

Fan et al. discussed the parameter estimation problem for Hammerstein nonlinear ARX models [21]. On the basis of the work in [14, 15, 21], this paper studies the identification problems and their convergence for input nonlinear controlled autoregressive (IN-CAR) models using the martingale convergence theorem and gives the recursive generalized extended least-squares algorithm for input nonlinear controlled autoregressive autoregressive moving average (IN-CARARMA) models.

Briefly, the paper is organized as follows. Section 2 derives a linear-in-parameters identification model and gives a recursive least squares identification algorithm for input nonlinear CAR systems. Section 3 analyzes the convergence properties of the proposed algorithm. Section 4 gives the recursive generalized extended least squares algorithm for input nonlinear CARARMA systems. Section 5 provides an illustrative example to show the effectiveness of the proposed algorithms. Finally, we offer some concluding remarks in Section 6.

2. The Input Nonlinear CAR Model and Estimation Algorithm

Let us introduce some notations first. The symbol $\mathbf{I}$ ($\mathbf{I}_n$) stands for an identity matrix of appropriate size ($n \times n$); the superscript $T$ denotes the matrix transpose; $\mathbf{1}_n$ represents an $n$-dimensional column vector whose elements are all 1; $|\mathbf{X}| = \det[\mathbf{X}]$ represents the determinant of the matrix $\mathbf{X}$; the norm of a matrix $\mathbf{X}$ is defined by $\|\mathbf{X}\|^2 = \mathrm{tr}[\mathbf{X}\mathbf{X}^T]$; $\lambda_{\max}[\mathbf{X}]$ and $\lambda_{\min}[\mathbf{X}]$ represent the maximum and minimum eigenvalues of the square matrix $\mathbf{X}$, respectively; $f(t) = o(g(t))$ means $f(t)/g(t) \to 0$ as $t \to \infty$; for $g(t) \geq 0$, we write $f(t) = O(g(t))$ if there exists a positive constant $\delta_1$ such that $|f(t)| \leq \delta_1 g(t)$.

2.1. The Input Nonlinear CAR Model

Consider the following input nonlinear controlled autoregressive (IN-CAR) system [14, 21]:
$$A(z)y(t) = B(z)\bar{u}(t) + v(t), \tag{2.1}$$
where $y(t)$ is the system output, $v(t)$ is a disturbance noise, and the output $\bar{u}(t)$ of the nonlinear block is a nonlinear function of a known basis $(f_1, f_2, \ldots, f_m)$ of the system input $u(t)$ [19]:
$$\bar{u}(t) = f(u(t)) = c_1 f_1(u(t)) + c_2 f_2(u(t)) + \cdots + c_m f_m(u(t)). \tag{2.2}$$
$A(z)$ and $B(z)$ are polynomials in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t) = y(t-1)$], defined as
$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_n z^{-n}, \qquad B(z) = b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + \cdots + b_n z^{-n}. \tag{2.3}$$
In order to ensure the identifiability of the parameters $b_i$ and $c_i$, without loss of generality, we suppose that $c_1 = 1$ or $b_1 = 1$ [14, 21].
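To make the model concrete, the following Python sketch simulates data from an IN-CAR system of the form (2.1)-(2.3). This is our own illustrative code, not from the paper; the function name, its signature, and the assumption that $A(z)$ and $B(z)$ share the same order $n$ are ours.

```python
import numpy as np

def simulate_in_car(u, a, b, c, basis, noise_std=0.0, rng=None):
    """Simulate A(z)y(t) = B(z)ubar(t) + v(t), where the nonlinear block
    gives ubar(t) = sum_j c[j] * basis[j](u(t)); A and B share order n."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(u, dtype=float)
    n, N = len(a), len(u)
    ubar = sum(cj * f(u) for cj, f in zip(c, basis))  # static nonlinear block (2.2)
    y = np.zeros(N)
    for t in range(N):
        # y(t) = -sum_i a_i y(t-i) + sum_i b_i ubar(t-i) + v(t)
        for i in range(1, n + 1):
            if t - i >= 0:
                y[t] += -a[i - 1] * y[t - i] + b[i - 1] * ubar[t - i]
        y[t] += noise_std * rng.standard_normal()
    return y, ubar
```

Pre-sample values $y(t)$, $u(t)$ for $t \le 0$ are taken as zero, which is a common but here assumed convention.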

Define the parameter vector $\boldsymbol{\vartheta}$ and the information vector $\boldsymbol{\psi}(t)$ as
$$\begin{aligned}
\boldsymbol{\vartheta} &= [\mathbf{a}^T, c_1\mathbf{b}^T, c_2\mathbf{b}^T, \ldots, c_m\mathbf{b}^T]^T \in \mathbb{R}^{n_0}, \quad n_0 := n + mn, \\
\mathbf{a} &= [a_1, a_2, \ldots, a_n]^T \in \mathbb{R}^{n}, \qquad \mathbf{b} = [b_1, b_2, \ldots, b_n]^T \in \mathbb{R}^{n}, \\
\boldsymbol{\psi}(t) &= [\boldsymbol{\psi}_0^T(t), \boldsymbol{\psi}_1^T(t), \boldsymbol{\psi}_2^T(t), \ldots, \boldsymbol{\psi}_m^T(t)]^T \in \mathbb{R}^{n_0}, \\
\boldsymbol{\psi}_0(t) &= [-y(t-1), -y(t-2), \ldots, -y(t-n)]^T \in \mathbb{R}^{n}, \\
\boldsymbol{\psi}_j(t) &= [f_j(u(t-1)), f_j(u(t-2)), \ldots, f_j(u(t-n))]^T \in \mathbb{R}^{n}, \quad j = 1, 2, \ldots, m.
\end{aligned} \tag{2.4}$$
From (2.1), we have
$$\begin{aligned}
y(t) &= [1 - A(z)]y(t) + B(z)\bar{u}(t) + v(t) \\
&= -\sum_{i=1}^{n} a_i y(t-i) + \sum_{i=1}^{n} b_i \sum_{j=1}^{m} c_j f_j(u(t-i)) + v(t) \\
&= -\sum_{i=1}^{n} a_i y(t-i) + \sum_{j=1}^{m} \sum_{i=1}^{n} c_j b_i f_j(u(t-i)) + v(t)
\end{aligned} \tag{2.5}$$
$$\begin{aligned}
&= -\sum_{i=1}^{n} a_i y(t-i) + c_1 b_1 f_1(u(t-1)) + c_1 b_2 f_1(u(t-2)) + \cdots + c_1 b_n f_1(u(t-n)) \\
&\quad + c_2 b_1 f_2(u(t-1)) + c_2 b_2 f_2(u(t-2)) + \cdots + c_2 b_n f_2(u(t-n)) + \cdots \\
&\quad + c_m b_1 f_m(u(t-1)) + c_m b_2 f_m(u(t-2)) + \cdots + c_m b_n f_m(u(t-n)) + v(t) \\
&= \boldsymbol{\psi}^T(t)\boldsymbol{\vartheta} + v(t).
\end{aligned} \tag{2.6}$$
An alternative way is to define the parameter vector $\boldsymbol{\theta}$ and the information vector $\boldsymbol{\varphi}(t)$ as
$$\begin{aligned}
\boldsymbol{\theta} &= [\mathbf{a}^T, b_1\mathbf{c}^T, b_2\mathbf{c}^T, \ldots, b_n\mathbf{c}^T]^T \in \mathbb{R}^{n_0}, \\
\mathbf{a} &= [a_1, a_2, \ldots, a_n]^T \in \mathbb{R}^{n}, \qquad \mathbf{c} = [c_1, c_2, \ldots, c_m]^T \in \mathbb{R}^{m}, \\
\boldsymbol{\varphi}(t) &= [\boldsymbol{\varphi}_0^T(t), \boldsymbol{\varphi}_1^T(t), \boldsymbol{\varphi}_2^T(t), \ldots, \boldsymbol{\varphi}_n^T(t)]^T \in \mathbb{R}^{n_0}, \\
\boldsymbol{\varphi}_0(t) &= [-y(t-1), -y(t-2), \ldots, -y(t-n)]^T \in \mathbb{R}^{n}, \\
\boldsymbol{\varphi}_j(t) &= [f_1(u(t-j)), f_2(u(t-j)), \ldots, f_m(u(t-j))]^T \in \mathbb{R}^{m}, \quad j = 1, 2, \ldots, n.
\end{aligned} \tag{2.7}$$
Then (2.5) can be written as
$$\begin{aligned}
y(t) &= -\sum_{i=1}^{n} a_i y(t-i) + \sum_{i=1}^{n} \sum_{j=1}^{m} b_i c_j f_j(u(t-i)) + v(t) \\
&= -\sum_{i=1}^{n} a_i y(t-i) + b_1 c_1 f_1(u(t-1)) + b_1 c_2 f_2(u(t-1)) + \cdots + b_1 c_m f_m(u(t-1)) \\
&\quad + b_2 c_1 f_1(u(t-2)) + b_2 c_2 f_2(u(t-2)) + \cdots + b_2 c_m f_m(u(t-2)) + \cdots \\
&\quad + b_n c_1 f_1(u(t-n)) + b_n c_2 f_2(u(t-n)) + \cdots + b_n c_m f_m(u(t-n)) + v(t) \\
&= \boldsymbol{\varphi}^T(t)\boldsymbol{\theta} + v(t).
\end{aligned} \tag{2.8}$$
Equations (2.6) and (2.8) are both linear-in-parameters identification models for Hammerstein CAR systems, obtained by overparameterization.
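The overparameterized information vector of (2.7) can be assembled from measured data as in the following sketch. This is our own illustrative code; in particular, the zero-padding of pre-sample values is an assumption.

```python
import numpy as np

def phi_vector(t, y, u, n, basis):
    """Information vector phi(t) of (2.7): negated past outputs, then,
    for each lag j = 1..n, the m basis functions evaluated at u(t-j)."""
    m = len(basis)
    phi = np.zeros(n + n * m)
    for i in range(1, n + 1):                       # phi_0(t) block
        phi[i - 1] = -y[t - i] if t - i >= 0 else 0.0
    for j in range(1, n + 1):                       # phi_j(t) blocks
        uval = u[t - j] if t - j >= 0 else 0.0
        for k, f in enumerate(basis):
            phi[n + (j - 1) * m + k] = f(uval)
    return phi
```

For example, with $n = 1$ and the basis $(u, u^2)$, $\boldsymbol{\varphi}(1) = [-y(0), u(0), u^2(0)]^T$.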

2.2. The Recursive Least Squares Algorithm

Minimizing the cost function
$$J(\boldsymbol{\theta}) = \sum_{j=1}^{t} \left[y(j) - \boldsymbol{\varphi}^T(j)\boldsymbol{\theta}\right]^2 \tag{2.9}$$
gives the following recursive least squares algorithm for computing the estimate $\hat{\boldsymbol{\theta}}(t)$ of $\boldsymbol{\theta}$ in (2.8):
$$\hat{\boldsymbol{\theta}}(t) = \hat{\boldsymbol{\theta}}(t-1) + \mathbf{P}(t)\boldsymbol{\varphi}(t)\left[y(t) - \boldsymbol{\varphi}^T(t)\hat{\boldsymbol{\theta}}(t-1)\right], \tag{2.10}$$
$$\mathbf{P}^{-1}(t) = \mathbf{P}^{-1}(t-1) + \boldsymbol{\varphi}(t)\boldsymbol{\varphi}^T(t), \quad \mathbf{P}(0) = p_0\mathbf{I}. \tag{2.11}$$
Applying the matrix inversion formula [22]
$$(\mathbf{A} + \mathbf{B}\mathbf{C})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{I} + \mathbf{C}\mathbf{A}^{-1}\mathbf{B}\right)^{-1}\mathbf{C}\mathbf{A}^{-1} \tag{2.12}$$
to (2.11) and defining the gain vector $\mathbf{L}(t) := \mathbf{P}(t)\boldsymbol{\varphi}(t) \in \mathbb{R}^{n_0}$, the algorithm in (2.10)-(2.11) can be equivalently expressed as
$$\begin{aligned}
\hat{\boldsymbol{\theta}}(t) &= \hat{\boldsymbol{\theta}}(t-1) + \mathbf{L}(t)\left[y(t) - \boldsymbol{\varphi}^T(t)\hat{\boldsymbol{\theta}}(t-1)\right], \\
\mathbf{L}(t) &= \mathbf{P}(t)\boldsymbol{\varphi}(t) = \frac{\mathbf{P}(t-1)\boldsymbol{\varphi}(t)}{1 + \boldsymbol{\varphi}^T(t)\mathbf{P}(t-1)\boldsymbol{\varphi}(t)}, \\
\mathbf{P}(t) &= \mathbf{P}(t-1) - \frac{\mathbf{P}(t-1)\boldsymbol{\varphi}(t)\boldsymbol{\varphi}^T(t)\mathbf{P}(t-1)}{1 + \boldsymbol{\varphi}^T(t)\mathbf{P}(t-1)\boldsymbol{\varphi}(t)} = \left[\mathbf{I} - \mathbf{L}(t)\boldsymbol{\varphi}^T(t)\right]\mathbf{P}(t-1), \quad \mathbf{P}(0) = p_0\mathbf{I}.
\end{aligned} \tag{2.13}$$
To initialize the algorithm, we take $p_0$ to be a large positive number, for example, $p_0 = 10^6$, and $\hat{\boldsymbol{\theta}}(0)$ to be a small real vector, for example, $\hat{\boldsymbol{\theta}}(0) = 10^{-6}\mathbf{1}_{n_0}$.
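A minimal implementation of one recursion of (2.13) might look as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def rls_update(theta, P, phi, y_t):
    """One recursion of (2.13): gain, parameter update, covariance update."""
    Pphi = P @ phi
    L = Pphi / (1.0 + phi @ Pphi)            # gain vector L(t)
    theta = theta + L * (y_t - phi @ theta)  # correct by the innovation
    P = P - np.outer(L, Pphi)                # P(t) = [I - L(t) phi^T(t)] P(t-1)
    return theta, P
```

Since $\mathbf{P}(t-1)$ is symmetric, $\mathbf{L}(t)\boldsymbol{\varphi}^T(t)\mathbf{P}(t-1)$ equals the outer product of $\mathbf{L}(t)$ with $\mathbf{P}(t-1)\boldsymbol{\varphi}(t)$, which the last line exploits to avoid the matrix inverse entirely.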

3. The Main Convergence Theorem

The following lemmas are required to establish the main convergence results.

Lemma 3.1 (Martingale convergence theorem: Lemma D.5.3 in [23, 24]). If $T_t$, $\alpha_t$, $\beta_t$ are nonnegative random variables, measurable with respect to a nondecreasing sequence of $\sigma$-algebras $\mathcal{F}_{t-1}$, and satisfy
$$E[T_t \mid \mathcal{F}_{t-1}] \leq T_{t-1} + \alpha_t - \beta_t, \quad \text{a.s.,} \tag{3.1}$$
then when $\sum_{t=1}^{\infty} \alpha_t < \infty$, one has $\sum_{t=1}^{\infty} \beta_t < \infty$, a.s., and $T_t \to T$, a.s. (a.s.: almost surely), where $T$ is a finite nonnegative random variable.

Lemma 3.2 (see [14, 21, 25]). For the algorithm in (2.10)-(2.11), for any $\gamma > 1$, the covariance matrix $\mathbf{P}(t)$ in (2.11) satisfies the following inequality:
$$\sum_{t=1}^{\infty} \frac{\boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma} < \infty, \quad \text{a.s.} \tag{3.2}$$

Theorem 3.3. For the system in (2.8) and the algorithm in (2.10)-(2.11), assume that $\{v(t), \mathcal{F}_t\}$ is a martingale difference sequence defined on a probability space $\{\Omega, \mathcal{F}, P\}$, where $\{\mathcal{F}_t\}$ is the $\sigma$-algebra sequence generated by the observations $\{y(t), y(t-1), \ldots, u(t), u(t-1), \ldots\}$, the noise sequence $\{v(t)\}$ satisfies
$$E[v(t) \mid \mathcal{F}_{t-1}] = 0, \qquad E[v^2(t) \mid \mathcal{F}_{t-1}] \leq \sigma^2 < \infty, \quad \text{a.s.} \; [23],$$
and $[\ln|\mathbf{P}^{-1}(t)|]^\gamma = o(\lambda_{\min}[\mathbf{P}^{-1}(t)])$ for some $\gamma > 1$. Then the parameter estimation error $\|\hat{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}\|$ converges to zero almost surely.

Proof. Define the parameter estimation error vector $\tilde{\boldsymbol{\theta}}(t) = \hat{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}$ and the stochastic Lyapunov function $T(t) = \tilde{\boldsymbol{\theta}}^T(t)\mathbf{P}^{-1}(t)\tilde{\boldsymbol{\theta}}(t)$. Let $\tilde{y}(t) := \boldsymbol{\varphi}^T(t)\hat{\boldsymbol{\theta}}(t-1) - \boldsymbol{\varphi}^T(t)\boldsymbol{\theta} = \boldsymbol{\varphi}^T(t)\tilde{\boldsymbol{\theta}}(t-1)$. According to the definitions of $\tilde{\boldsymbol{\theta}}(t)$ and $T(t)$ and using (2.10) and (2.11), we have
$$\begin{aligned}
\tilde{\boldsymbol{\theta}}(t) &= \tilde{\boldsymbol{\theta}}(t-1) + \mathbf{P}(t)\boldsymbol{\varphi}(t)\left[-\tilde{y}(t) + v(t)\right], \\
T(t) &= T(t-1) - \left[1 - \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)\right]\tilde{y}^2(t) + \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)v^2(t) \\
&\quad + 2\left[1 - \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)\right]\tilde{y}(t)v(t) \\
&\leq T(t-1) + \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)v^2(t) + 2\left[1 - \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)\right]\tilde{y}(t)v(t).
\end{aligned} \tag{3.3}$$
Here, we have used the inequality $1 - \boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t) = [1 + \boldsymbol{\varphi}^T(t)\mathbf{P}(t-1)\boldsymbol{\varphi}(t)]^{-1} \geq 0$. Because $\tilde{y}(t)$ and $\boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)$ are uncorrelated with $v(t)$ and are $\mathcal{F}_{t-1}$ measurable, taking the conditional expectation with respect to $\mathcal{F}_{t-1}$, we have
$$E[T(t) \mid \mathcal{F}_{t-1}] \leq T(t-1) + 2\boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)\sigma^2. \tag{3.4}$$
Since $\ln|\mathbf{P}^{-1}(t)|$ is nondecreasing, letting
$$V(t) = \frac{T(t)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma}, \quad \gamma > 1, \tag{3.5}$$
we have
$$E[V(t) \mid \mathcal{F}_{t-1}] \leq \frac{T(t-1)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma} + \frac{2\boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma}\sigma^2 \leq V(t-1) + \frac{2\boldsymbol{\varphi}^T(t)\mathbf{P}(t)\boldsymbol{\varphi}(t)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma}\sigma^2, \quad \text{a.s.} \tag{3.6}$$
By Lemma 3.2, the sum of the last term on the right-hand side for $t$ from 1 to $\infty$ is finite. Applying Lemma 3.1 to the previous inequality, we conclude that $V(t)$ converges a.s. to a finite random variable, say $V_0$; that is,
$$V(t) = \frac{T(t)}{\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma} \to V_0 < \infty, \quad \text{a.s.,} \qquad \text{or} \qquad T(t) = O\left(\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma\right), \quad \text{a.s.} \tag{3.7}$$
Thus, according to the definition of $T(t)$, we have
$$\left\|\tilde{\boldsymbol{\theta}}(t)\right\|^2 \leq \frac{\mathrm{tr}\left[\tilde{\boldsymbol{\theta}}^T(t)\mathbf{P}^{-1}(t)\tilde{\boldsymbol{\theta}}(t)\right]}{\lambda_{\min}\left[\mathbf{P}^{-1}(t)\right]} = \frac{O\left(\left[\ln\left|\mathbf{P}^{-1}(t)\right|\right]^\gamma\right)}{\lambda_{\min}\left[\mathbf{P}^{-1}(t)\right]} = \frac{o\left(\lambda_{\min}\left[\mathbf{P}^{-1}(t)\right]\right)}{\lambda_{\min}\left[\mathbf{P}^{-1}(t)\right]} \to 0, \quad \text{a.s.} \tag{3.8}$$
This completes the proof of Theorem 3.3.

According to the definition of $\boldsymbol{\theta}$ and the assumption $b_1 = 1$, the estimates $\hat{\mathbf{a}}(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_n(t)]^T$ and $\hat{\mathbf{c}}(t) = [\hat{c}_1(t), \hat{c}_2(t), \ldots, \hat{c}_m(t)]^T$ of $\mathbf{a}$ and $\mathbf{c}$ can be read from the first $n$ and the next $m$ entries of $\hat{\boldsymbol{\theta}}(t)$, respectively. Let $\hat{\theta}_i(t)$ be the $i$th element of $\hat{\boldsymbol{\theta}}(t)$. Referring to the definition of $\boldsymbol{\theta}$, the estimates $\hat{b}_j(t)$ of $b_j$, $j = 2, 3, \ldots, n$, may be computed by
$$\hat{b}_j(t) = \frac{\hat{\theta}_{n+(j-1)m+i}(t)}{\hat{c}_i(t)}, \quad j = 2, 3, \ldots, n; \; i = 1, 2, \ldots, m. \tag{3.9}$$
Notice that (3.9) yields $m$ redundant estimates of each $b_j$, one for each $i = 1, 2, \ldots, m$. Since we do not need $m$ such estimates $\hat{b}_j(t)$, one way is to take their average as the estimate of $b_j$ [14]; that is,
$$\hat{b}_j(t) = \frac{1}{m}\sum_{i=1}^{m} \frac{\hat{\theta}_{n+(j-1)m+i}(t)}{\hat{c}_i(t)}, \quad j = 2, 3, \ldots, n. \tag{3.10}$$
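The recovery of $\hat{\mathbf{a}}$, $\hat{\mathbf{c}}$, and the averaged $\hat{b}_j$ in (3.10) can be sketched as follows, assuming the $\boldsymbol{\theta}$ parameterization of (2.7) with $b_1 = 1$ (illustrative code of ours, not the authors'):

```python
import numpy as np

def extract_params(theta_hat, n, m):
    """Recover a, c and the averaged b from theta = [a; b1*c; b2*c; ...; bn*c]
    under the normalization b1 = 1, so the second block is c itself."""
    a_hat = theta_hat[:n]
    c_hat = theta_hat[n:n + m]              # b1 * c = c since b1 = 1
    b_hat = np.empty(n)
    b_hat[0] = 1.0
    for j in range(2, n + 1):               # average the m redundant ratios (3.10)
        block = theta_hat[n + (j - 1) * m : n + j * m]
        b_hat[j - 1] = np.mean(block / c_hat)
    return a_hat, b_hat, c_hat
```

Applied to the exact parameter vector of the later example, the $m$ ratios for $b_2$ all equal 1.68, so the average returns $b_2$ exactly.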

4. The Input Nonlinear CARARMA System and Estimation Algorithm

Consider the following input nonlinear controlled autoregressive autoregressive moving average (IN-CARARMA) system:
$$A(z)y(t) = B(z)\bar{u}(t) + \frac{D(z)}{\gamma(z)}v(t), \tag{4.1}$$
$$\begin{aligned}
\bar{u}(t) &= f(u(t)) = c_1 f_1(u(t)) + c_2 f_2(u(t)) + \cdots + c_m f_m(u(t)), \\
\gamma(z) &= 1 + \gamma_1 z^{-1} + \gamma_2 z^{-2} + \cdots + \gamma_{n_\gamma} z^{-n_\gamma}, \\
D(z) &= 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}.
\end{aligned} \tag{4.2}$$
Let
$$w(t) = \frac{D(z)}{\gamma(z)}v(t), \tag{4.3}$$
or
$$w(t) = [1 - \gamma(z)]w(t) + D(z)v(t) = -\sum_{i=1}^{n_\gamma} \gamma_i w(t-i) + \sum_{i=1}^{n_d} d_i v(t-i) + v(t). \tag{4.4}$$
Define the parameter vector $\boldsymbol{\theta}$ and the information vector $\boldsymbol{\varphi}(t)$ as
$$\begin{aligned}
\boldsymbol{\theta} &= [\boldsymbol{\theta}_1^T, \gamma_1, \gamma_2, \ldots, \gamma_{n_\gamma}, d_1, d_2, \ldots, d_{n_d}]^T \in \mathbb{R}^{n+mn+n_\gamma+n_d}, \\
\boldsymbol{\varphi}(t) &= [\boldsymbol{\varphi}_1^T(t), -w(t-1), -w(t-2), \ldots, -w(t-n_\gamma), v(t-1), v(t-2), \ldots, v(t-n_d)]^T \in \mathbb{R}^{n+mn+n_\gamma+n_d}, \\
\boldsymbol{\theta}_1 &= [\mathbf{a}^T, b_1\mathbf{c}^T, b_2\mathbf{c}^T, \ldots, b_n\mathbf{c}^T]^T \in \mathbb{R}^{n+nm}, \\
\boldsymbol{\varphi}_1(t) &= [\boldsymbol{\varphi}_0^T(t), \boldsymbol{\varphi}_1^T(t), \boldsymbol{\varphi}_2^T(t), \ldots, \boldsymbol{\varphi}_n^T(t)]^T \in \mathbb{R}^{n+nm}, \\
\mathbf{a} &= [a_1, a_2, \ldots, a_n]^T \in \mathbb{R}^{n}, \qquad \mathbf{c} = [c_1, c_2, \ldots, c_m]^T \in \mathbb{R}^{m}, \\
\boldsymbol{\varphi}_0(t) &= [-y(t-1), -y(t-2), \ldots, -y(t-n)]^T \in \mathbb{R}^{n}, \\
\boldsymbol{\varphi}_j(t) &= [f_1(u(t-j)), f_2(u(t-j)), \ldots, f_m(u(t-j))]^T \in \mathbb{R}^{m}, \quad j = 1, 2, \ldots, n.
\end{aligned} \tag{4.5}$$
Then (4.1) can be written as
$$\begin{aligned}
y(t) &= [1 - A(z)]y(t) + B(z)\bar{u}(t) + w(t) \\
&= -\sum_{i=1}^{n} a_i y(t-i) + \sum_{i=1}^{n} b_i \sum_{j=1}^{m} c_j f_j(u(t-i)) + w(t) \\
&= \boldsymbol{\varphi}_1^T(t)\boldsymbol{\theta}_1 + w(t) \\
&= \boldsymbol{\varphi}_1^T(t)\boldsymbol{\theta}_1 - \sum_{i=1}^{n_\gamma} \gamma_i w(t-i) + \sum_{i=1}^{n_d} d_i v(t-i) + v(t) = \boldsymbol{\varphi}^T(t)\boldsymbol{\theta} + v(t).
\end{aligned} \tag{4.6}$$
This is a linear-in-parameters identification model for IN-CARARMA systems.

The unknown $w(t-i)$ and $v(t-i)$ in the information vector $\boldsymbol{\varphi}(t)$ are replaced with their estimates $\hat{w}(t-i)$ and $\hat{v}(t-i)$, which gives the following recursive generalized extended least squares algorithm for estimating $\boldsymbol{\theta}$ in (4.6):
$$\begin{aligned}
\hat{\boldsymbol{\theta}}(t) &= \hat{\boldsymbol{\theta}}(t-1) + \mathbf{L}(t)\left[y(t) - \hat{\boldsymbol{\varphi}}^T(t)\hat{\boldsymbol{\theta}}(t-1)\right], \\
\mathbf{L}(t) &= \frac{\mathbf{P}(t-1)\hat{\boldsymbol{\varphi}}(t)}{1 + \hat{\boldsymbol{\varphi}}^T(t)\mathbf{P}(t-1)\hat{\boldsymbol{\varphi}}(t)}, \\
\mathbf{P}(t) &= \left[\mathbf{I} - \mathbf{L}(t)\hat{\boldsymbol{\varphi}}^T(t)\right]\mathbf{P}(t-1), \quad \mathbf{P}(0) = p_0\mathbf{I}, \\
\hat{\boldsymbol{\varphi}}(t) &= [\boldsymbol{\varphi}_1^T(t), -\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_\gamma), \hat{v}(t-1), \hat{v}(t-2), \ldots, \hat{v}(t-n_d)]^T, \\
\boldsymbol{\varphi}_1(t) &= [\boldsymbol{\varphi}_0^T(t), \boldsymbol{\varphi}_1^T(t), \boldsymbol{\varphi}_2^T(t), \ldots, \boldsymbol{\varphi}_n^T(t)]^T, \qquad \boldsymbol{\varphi}_0(t) = [-y(t-1), -y(t-2), \ldots, -y(t-n)]^T, \\
\boldsymbol{\varphi}_j(t) &= [f_1(u(t-j)), f_2(u(t-j)), \ldots, f_m(u(t-j))]^T, \\
\hat{w}(t) &= y(t) - \boldsymbol{\varphi}_1^T(t)\hat{\boldsymbol{\theta}}_1(t), \qquad \hat{v}(t) = y(t) - \hat{\boldsymbol{\varphi}}^T(t)\hat{\boldsymbol{\theta}}(t), \\
\hat{\boldsymbol{\theta}}(t) &= [\hat{\boldsymbol{\theta}}_1^T(t), \hat{\gamma}_1(t), \hat{\gamma}_2(t), \ldots, \hat{\gamma}_{n_\gamma}(t), \hat{d}_1(t), \hat{d}_2(t), \ldots, \hat{d}_{n_d}(t)]^T.
\end{aligned} \tag{4.7}$$
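One step of the recursion in (4.7) might be sketched as follows. This is our own illustrative code; the helper name and the convention that the caller maintains the histories of $-\hat{w}(t-i)$ and $\hat{v}(t-i)$ are assumptions.

```python
import numpy as np

def rgels_step(theta, P, phi1, neg_w_hist, v_hist, y_t):
    """One step of (4.7). `neg_w_hist` holds [-w_hat(t-1), ..., -w_hat(t-n_gamma)]
    and `v_hist` holds [v_hat(t-1), ..., v_hat(t-n_d)]."""
    phi = np.concatenate([phi1, neg_w_hist, v_hist])   # phi_hat(t)
    Pphi = P @ phi
    L = Pphi / (1.0 + phi @ Pphi)                      # gain L(t)
    theta = theta + L * (y_t - phi @ theta)            # parameter update
    P = P - np.outer(L, Pphi)                          # P(t) = [I - L phi^T] P(t-1)
    n1 = len(phi1)
    w_hat = y_t - phi1 @ theta[:n1]                    # w_hat(t)
    v_hat = y_t - phi @ theta                          # v_hat(t)
    return theta, P, w_hat, v_hat
```

The caller then pushes $-\hat{w}(t)$ and $\hat{v}(t)$ onto the histories before the next step.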

This paper presents a recursive least squares algorithm for IN-CAR systems and a recursive generalized extended least squares algorithm for IN-CARARMA systems with ARMA noise disturbances; these systems differ both from the input nonlinear controlled autoregressive moving average (IN-CARMA) systems in [14] and from the input nonlinear output error systems in [15].

5. Example

Consider the following IN-CAR system:
$$\begin{aligned}
A(z)y(t) &= B(z)\bar{u}(t) + v(t), \\
A(z) &= 1 + a_1 z^{-1} + a_2 z^{-2} = 1 - 1.35z^{-1} + 0.75z^{-2}, \\
B(z) &= b_1 z^{-1} + b_2 z^{-2} = z^{-1} + 1.68z^{-2}, \\
\bar{u}(t) &= f(u(t)) = c_1 u(t) + c_2 u^2(t) + c_3 u^3(t) = u(t) + 0.50u^2(t) + 0.20u^3(t), \\
\boldsymbol{\theta} &= [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6, \theta_7, \theta_8]^T = [a_1, a_2, c_1, c_2, c_3, b_2c_1, b_2c_2, b_2c_3]^T \\
&= [-1.350, 0.750, 1.000, 0.500, 0.200, 1.680, 0.840, 0.336]^T, \\
\boldsymbol{\theta}_s &= [a_1, a_2, b_2, c_1, c_2, c_3]^T = [-1.35, 0.75, 1.68, 1.00, 0.50, 0.20]^T.
\end{aligned} \tag{5.1}$$
In simulation, the input $\{u(t)\}$ is taken as a persistent excitation signal sequence with zero mean and unit variance, and $\{v(t)\}$ as a white noise sequence with zero mean and constant variance $\sigma^2$. Applying the proposed algorithm in (2.10)-(2.11) to estimate the parameters of this system, the parameter estimates $\hat{\boldsymbol{\theta}}$ and $\hat{\boldsymbol{\theta}}_s$ and their errors under different noise variances are shown in Tables 1, 2, 3, and 4, and the parameter estimation errors $\delta = \|\hat{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}\|/\|\boldsymbol{\theta}\|$ and $\delta_s = \|\hat{\boldsymbol{\theta}}_s(t) - \boldsymbol{\theta}_s\|/\|\boldsymbol{\theta}_s\|$ versus $t$ are shown in Figures 1 and 2. When $\sigma^2 = 0.50^2$ and $\sigma^2 = 1.50^2$, the corresponding noise-to-signal ratios are $\delta_{ns} = 10.96\%$ and $\delta_{ns} = 32.87\%$, respectively.
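For readers who wish to reproduce a study of this kind, the following self-contained sketch simulates the example system and runs the algorithm of (2.10)-(2.13). It is our own code, not the authors'; the seed, data length, and resulting error values are our choices and will not match the tables exactly.

```python
import numpy as np

# True parameters of the example system (5.1); theta follows the (2.7)
# ordering [a; b1*c; b2*c] with b1 = 1.
rng = np.random.default_rng(0)
N = 3000
a = np.array([-1.35, 0.75])
b = np.array([1.0, 1.68])
c = np.array([1.0, 0.50, 0.20])
theta_true = np.concatenate([a, b[0] * c, b[1] * c])

# Generate data: ubar(t) = u + 0.5 u^2 + 0.2 u^3, noise sigma = 0.50
u = rng.standard_normal(N)
ubar = c[0] * u + c[1] * u**2 + c[2] * u**3
y = np.zeros(N)
for t in range(2, N):
    y[t] = (-a[0] * y[t-1] - a[1] * y[t-2]
            + b[0] * ubar[t-1] + b[1] * ubar[t-2]
            + 0.50 * rng.standard_normal())

# Recursive least squares per (2.13), p0 = 1e6, theta(0) small
theta = np.full(8, 1e-6)
P = 1e6 * np.eye(8)
for t in range(2, N):
    phi = np.array([-y[t-1], -y[t-2],
                    u[t-1], u[t-1]**2, u[t-1]**3,
                    u[t-2], u[t-2]**2, u[t-2]**3])
    Pphi = P @ phi
    L = Pphi / (1.0 + phi @ Pphi)
    theta = theta + L * (y[t] - phi @ theta)
    P = P - np.outer(L, Pphi)

# Relative estimation error delta = ||theta_hat - theta|| / ||theta||
delta = np.linalg.norm(theta - theta_true) / np.linalg.norm(theta_true)
```

With a few thousand samples at this noise level, $\delta$ settles to a few percent, consistent with the qualitative behavior reported in the tables.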

Table 1: The parameter estimates ($\hat{\boldsymbol{\theta}}$) ($\sigma^2 = 0.50^2$, $\delta_{ns} = 10.96\%$).
Table 2: The parameter estimates ($\hat{\boldsymbol{\theta}}_s$) ($\sigma^2 = 0.50^2$, $\delta_{ns} = 10.96\%$).
Table 3: The parameter estimates ($\hat{\boldsymbol{\theta}}$) ($\sigma^2 = 1.50^2$, $\delta_{ns} = 32.87\%$).
Table 4: The parameter estimates ($\hat{\boldsymbol{\theta}}_s$) ($\sigma^2 = 1.50^2$, $\delta_{ns} = 32.87\%$).
Figure 1: The parameter estimation errors $\delta$ versus $t$.
Figure 2: The parameter estimation errors $\delta_s$ versus $t$.

From Tables 1–4 and Figures 1 and 2, we can draw the following conclusions.
(i) The larger the data length is, the smaller the parameter estimation errors become.
(ii) A lower noise level leads to smaller parameter estimation errors for the same data length.
(iii) The estimation errors $\delta$ and $\delta_s$ become smaller (in general) as $t$ increases. This confirms the proposed theorem.

6. Conclusions

The recursive least-squares identification is used to estimate the unknown parameters of input nonlinear CAR and CARARMA systems. The analysis using the martingale convergence theorem indicates that the proposed recursive least squares algorithm gives consistent parameter estimates. It is worth pointing out that the multi-innovation identification theory [26–33], the gradient-based or least-squares-based identification methods [34–41], and other identification methods [42–49] can be used to study the identification problems of this class of nonlinear systems with colored noises.

Acknowledgment

This work was supported by the 111 Project (B12018).

References

  1. M. R. Zakerzadeh, M. Firouzi, H. Sayyaadi, and S. B. Shouraki, “Hysteresis nonlinearity identification using new Preisach model-based artificial neural network approach,” Journal of Applied Mathematics, Article ID 458768, 22 pages, 2011.
  2. X.-X. Li, H. Z. Guo, S. M. Wan, and F. Yang, “Inverse source identification by the modified regularization method on Poisson equation,” Journal of Applied Mathematics, vol. 2012, Article ID 971952, 13 pages, 2012.
  3. Y. Shi and H. Fang, “Kalman filter-based identification for systems with randomly missing measurements in a network environment,” International Journal of Control, vol. 83, no. 3, pp. 538–551, 2010.
  4. Y. Liu, J. Sheng, and R. Ding, “Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems,” Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2615–2627, 2010.
  5. F. Ding, G. Liu, and X. P. Liu, “Parameter estimation with scarce measurements,” Automatica, vol. 47, no. 8, pp. 1646–1655, 2011.
  6. J. Ding, F. Ding, X. P. Liu, and G. Liu, “Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data,” IEEE Transactions on Automatic Control, vol. 56, no. 11, pp. 2677–2683, 2011.
  7. Y. Liu, Y. Xiao, and X. Zhao, “Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model,” Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477–1483, 2009.
  8. J. Ding and F. Ding, “The residual based extended least squares identification method for dual-rate systems,” Computers & Mathematics with Applications, vol. 56, no. 6, pp. 1479–1487, 2008.
  9. L. Han and F. Ding, “Identification for multirate multi-input systems using the multi-innovation identification theory,” Computers & Mathematics with Applications, vol. 57, no. 9, pp. 1438–1449, 2009.
  10. F. Ding, Y. Shi, and T. Chen, “Gradient-based identification methods for Hammerstein nonlinear ARMAX models,” Nonlinear Dynamics, vol. 45, no. 1-2, pp. 31–43, 2006.
  11. F. Ding, T. Chen, and Z. Iwai, “Adaptive digital control of Hammerstein nonlinear systems with limited output sampling,” SIAM Journal on Control and Optimization, vol. 45, no. 6, pp. 2257–2276, 2007.
  12. J. Li and F. Ding, “Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation technique,” Computers & Mathematics with Applications, vol. 62, no. 11, pp. 4170–4177, 2011.
  13. J. Li, F. Ding, and G. Yang, “Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average systems,” Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 442–450, 2012.
  14. F. Ding and T. Chen, “Identification of Hammerstein nonlinear ARMAX systems,” Automatica, vol. 41, no. 9, pp. 1479–1489, 2005.
  15. F. Ding, Y. Shi, and T. Chen, “Auxiliary model-based least-squares identification methods for Hammerstein output-error systems,” Systems & Control Letters, vol. 56, no. 5, pp. 373–380, 2007.
  16. D. Wang and F. Ding, “Extended stochastic gradient identification algorithms for Hammerstein-Wiener ARMAX systems,” Computers & Mathematics with Applications, vol. 56, no. 12, pp. 3157–3164, 2008.
  17. D. Wang, Y. Chu, G. Yang, and F. Ding, “Auxiliary model based recursive generalized least squares parameter estimation for Hammerstein OEAR systems,” Mathematical and Computer Modelling, vol. 52, no. 1-2, pp. 309–317, 2010.
  18. D. Wang, Y. Chu, and F. Ding, “Auxiliary model-based RELS and MI-ELS algorithm for Hammerstein OEMA systems,” Computers & Mathematics with Applications, vol. 59, no. 9, pp. 3092–3098, 2010.
  19. F. Ding, X. P. Liu, and G. Liu, “Identification methods for Hammerstein nonlinear systems,” Digital Signal Processing, vol. 21, no. 2, pp. 215–238, 2011.
  20. D. Wang and F. Ding, “Least squares based and gradient based iterative identification for Wiener nonlinear systems,” Signal Processing, vol. 91, no. 5, pp. 1182–1189, 2011.
  21. W. Fan, F. Ding, and Y. Shi, “Parameter estimation for Hammerstein nonlinear controlled auto-regression models,” in Proceedings of the IEEE International Conference on Automation and Logistics, pp. 1007–1012, Jinan, China, August 2007.
  22. L. Wang, F. Ding, and P. X. Liu, “Convergence of HLS estimation algorithms for multivariable ARX-like systems,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1081–1093, 2007.
  23. G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 1984.
  24. Y. Liu, L. Yu, and F. Ding, “Multi-innovation extended stochastic gradient algorithm and its performance analysis,” Circuits, Systems, and Signal Processing, vol. 29, no. 4, pp. 649–667, 2010.
  25. F. Ding and T. Chen, “Combined parameter and output estimation of dual-rate systems using an auxiliary model,” Automatica, vol. 40, no. 10, p. 1739, 2004.
  26. F. Ding and T. Chen, “Performance analysis of multi-innovation gradient type identification methods,” Automatica, vol. 43, no. 1, pp. 1–14, 2007.
  27. L. Han and F. Ding, “Multi-innovation stochastic gradient algorithms for multi-input multi-output systems,” Digital Signal Processing, vol. 19, no. 4, pp. 545–554, 2009.
  28. F. Ding, “Several multi-innovation identification methods,” Digital Signal Processing, vol. 20, no. 4, pp. 1027–1039, 2010.
  29. D. Wang and F. Ding, “Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems,” Digital Signal Processing, vol. 20, no. 3, pp. 750–762, 2010.
  30. J. Zhang, F. Ding, and Y. Shi, “Self-tuning control based on multi-innovation stochastic gradient parameter estimation,” Systems & Control Letters, vol. 58, no. 1, pp. 69–75, 2009.
  31. F. Ding, H. Chen, and M. Li, “Multi-innovation least squares identification methods based on the auxiliary model for MISO systems,” Applied Mathematics and Computation, vol. 187, no. 2, pp. 658–668, 2007.
  32. L. Xie, Y. J. Liu, H. Z. Yang, and F. Ding, “Modelling and identification for non-uniformly periodically sampled-data systems,” IET Control Theory & Applications, vol. 4, no. 5, pp. 784–794, 2010.
  33. F. Ding, P. X. Liu, and G. Liu, “Multiinnovation least-squares identification for system modeling,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 40, no. 3, Article ID 5299173, pp. 767–778, 2010.
  34. J. Ding, Y. Shi, H. Wang, and F. Ding, “A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems,” Digital Signal Processing, vol. 20, no. 4, pp. 1238–1247, 2010.
  35. F. Ding, P. X. Liu, and H. Yang, “Parameter identification and intersample output estimation for dual-rate systems,” IEEE Transactions on Systems, Man, and Cybernetics A, vol. 38, no. 4, pp. 966–975, 2008.
  36. Y. Liu, D. Wang, and F. Ding, “Least squares based iterative algorithms for identifying Box-Jenkins models with finite measurement data,” Digital Signal Processing, vol. 20, no. 5, pp. 1458–1467, 2010.
  37. D. Wang and F. Ding, “Input-output data filtering based recursive least squares identification for CARARMA systems,” Digital Signal Processing, vol. 20, no. 4, pp. 991–999, 2010.
  38. F. Ding, P. X. Liu, and G. Liu, “Gradient based and least-squares based iterative identification methods for OE and OEMA systems,” Digital Signal Processing, vol. 20, no. 3, pp. 664–677, 2010.
  39. D. Wang, G. Yang, and R. Ding, “Gradient-based iterative parameter estimation for Box-Jenkins systems,” Computers & Mathematics with Applications, vol. 60, no. 5, pp. 1200–1208, 2010.
  40. L. Xie, H. Yang, and F. Ding, “Recursive least squares parameter estimation for non-uniformly sampled systems based on the data filtering,” Mathematical and Computer Modelling, vol. 54, no. 1-2, pp. 315–324, 2011.
  41. F. Ding, Y. Liu, and B. Bao, “Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems,” Proceedings of the Institution of Mechanical Engineers. Part I: Journal of Systems and Control Engineering, vol. 226, no. 1, pp. 43–55, 2012.
  42. F. Ding, “Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling,” Applied Mathematical Modelling. In press.
  43. F. Ding and J. Ding, “Least-squares parameter estimation for systems with irregularly missing data,” International Journal of Adaptive Control and Signal Processing, vol. 24, no. 7, pp. 540–553, 2010.
  44. Y. Liu, L. Xie, and F. Ding, “An auxiliary model based on a recursive least-squares parameter estimation algorithm for non-uniformly sampled multirate systems,” Proceedings of the Institution of Mechanical Engineers. Part I: Journal of Systems and Control Engineering, vol. 223, no. 4, pp. 445–454, 2009.
  45. F. Ding, L. Qiu, and T. Chen, “Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems,” Automatica, vol. 45, no. 2, pp. 324–332, 2009.
  46. F. Ding, G. Liu, and X. P. Liu, “Partially coupled stochastic gradient identification methods for non-uniformly sampled systems,” IEEE Transactions on Automatic Control, vol. 55, no. 8, pp. 1976–1981, 2010.
  47. J. Ding and F. Ding, “Bias compensation-based parameter estimation for output error moving average systems,” International Journal of Adaptive Control and Signal Processing, vol. 25, no. 12, pp. 1100–1111, 2011.
  48. F. Ding and T. Chen, “Performance bounds of forgetting factor least-squares algorithms for time-varying systems with finite measurement data,” IEEE Transactions on Circuits and Systems. I. Regular Papers, vol. 52, no. 3, pp. 555–566, 2005.
  49. F. Ding and T. Chen, “Hierarchical identification of lifted state-space models for general dual-rate systems,” IEEE Transactions on Circuits and Systems. I. Regular Papers, vol. 52, no. 6, pp. 1179–1187, 2005.