#### Abstract

Moving horizon estimation (MHE) is an efficient technique for estimating the parameters and states of constrained dynamical systems; however, the approximation of the arrival cost remains a major challenge and is therefore a popular research topic. The arrival cost is important because it carries information from past measurements into current estimates. In this paper, we approximate and update the parameters of the arrival cost of the moving horizon estimator using an adaptive estimation algorithm. The proposed method is based on the least-squares algorithm but includes a variable forgetting factor derived from the constant information principle and a dead zone that ensures robustness. We show that a sufficiently good approximation of the arrival cost guarantees the convergence and stability of the estimates. Simulations demonstrate the effectiveness of the proposed method and compare it with the classical MHE.

#### 1. Introduction

State estimation by a moving horizon approach has been studied since the 1960s. The success of this estimation technique rests on its ability to explicitly take into account constraints on the estimated variables of a dynamic system [1–3]. In [4], the problem of estimating the state of a linear system over a peer-to-peer network of linear sensors is discussed. The proposed approach is fully distributed and scalable and accounts for constraints on noise and state variables by resorting to the moving horizon estimation paradigm. Each node in the network calculates its local state estimate by minimizing a cost function defined over a sliding window of fixed size. The cost function includes a fused arrival cost, computed in a distributed way by performing a consensus on the local arrival costs. In [5], the authors proposed a novel type of iterative partition-based moving horizon estimator whose estimates approach those of a centralized moving horizon estimator as the number of iterations increases. It uses a deterministic setting and can handle known inputs as well as bounds on the estimated state. They derived conditions on the system, its partitions, and the scalar regularization parameter that guarantee convergence towards the optimal centralized state estimate as well as stability of the estimation error dynamics, even with a finite number of iterations at each time step. The authors of [6] investigated the problem of event-triggered state estimation for networked linear systems. They consider the stochastic system disturbances and noise to be bounded; moving horizon estimation (MHE) is used to handle these constraints and to establish an event-based state estimation mechanism that provides good state estimates while reducing the frequency of both state estimator evaluations and networked communication between the plant and the estimator.
In [7], the authors focus on distributed moving horizon estimation (DMHE) for a class of two-time-scale nonlinear systems described in the framework of singularly perturbed systems. By taking advantage of the time-scale separation property, a two-time-scale system is first decomposed into a reduced-order fast system and a reduced-order slow system. The slow system is further decomposed into several interconnected slow subsystems. In the proposed distributed state estimation scheme, a local estimator is designed for each slow subsystem and for the reduced-order fast system.

In the moving horizon estimation approach, the state estimate is determined by solving an online optimization problem that minimizes the sum of squared errors. At each sampling time, when a new measurement becomes available, the oldest measurement within the estimation window is discarded and the finite-horizon optimization problem is solved again to obtain the new state estimate [8–10]. To account for the historical data outside the estimation window, a term that summarizes the information in these data, called the *arrival cost*, is included in the objective function of the MHE optimization problem. The arrival cost is crucial for the stability and performance of moving horizon estimation; its approximation is therefore an active research topic.

A good approximation of the arrival cost makes it possible to reduce the size of the estimation window, and hence of the optimization problem, while ensuring good performance and robustness. The arrival cost is commonly approximated by a weighted norm of the state at the beginning of the estimation window [8, 11, 12]. In [13], the authors proposed using the Kalman filter to update the arrival cost term (in particular the weighting matrix) for linear systems; the Gaussian assumption underlying the Kalman filter can, however, yield an inadequate approximation of the weighting matrix and hence relatively poor estimates. To overcome this problem, the authors in [14] proposed an iterative scheme for updating the arrival cost using the information on active or inactive constraints from the previous iteration and a quadratic approximation. The idea is to bring the quadratic approximation as close as possible to the optimal arrival cost. The fundamental assumption behind this updating scheme is that the active set of constraints remains unchanged after a certain time despite the growth of the estimation horizon. For a large estimation window, this assumption holds and gives good results; however, if some constraints become inactive after smoothing, poorer results are obtained.

In [15], the author uses sampling-based nonlinear filters, including the unscented Kalman filter (deterministic sampling), the particle filter (random sampling), and the cell filter (aggregate Markov chain), to update the arrival cost of the moving horizon estimator and demonstrates their advantages over the traditional extended Kalman filter approach. However, the computation time is relatively long, and the better performance also relies on a fairly large horizon length.

In more recent works, as in [16], the authors proposed a moving horizon estimation algorithm based on multiple estimation windows. The major advantage of this algorithm is that it reduces the size of the optimization problem while maintaining the performance and stability properties of the estimator by exploiting the inactivity of constraints. In [17], the authors developed a two-step design strategy, namely a decoupling step and a convergence step, for networked linear systems with unknown inputs under dynamic quantization effects. In the decoupling step, the decoupling parameter of the moving horizon estimator is designed based on certain assumptions on the system and quantization parameters. In the convergence step, by employing a special observability decomposition scheme, the convergence parameters of the moving horizon estimator are chosen such that the estimation error dynamics is ultimately bounded. The conventional moving horizon estimation strategy turns out to be unable to guarantee satisfactory performance in that setting, as the estimation error depends on external disturbances. In [18], a moving horizon estimation strategy is formulated and the corresponding analytical solution for the state estimate is derived by completing the square. Based on a lifting approach, the dynamics of the estimation error covariance is analyzed and the desired estimator parameters are obtained by solving a set of linear matrix inequalities.

In this work, the weighting matrix is updated using an adaptive estimation algorithm: a least-squares algorithm including a *dead zone* and a *variable forgetting factor* [19, 20]. The dead zone provides robustness, and the forgetting factor, based on the constant information principle, suits process models with unmodeled uncertainties or external disturbances. Such uncertainties can cause the estimates to drift when the standard recursive least-squares algorithm is used; including the dead zone in the estimation algorithm prevents this. The weighting matrix is therefore computed in closed loop, unlike in standard estimation techniques. The effectiveness of the proposed approach is evaluated through simulation studies on benchmark problems from the literature.

The main contributions of this work are to (i) show that an adaptive estimation method can be implemented to approximate the arrival cost with much better efficiency than the traditional MHE, (ii) show that the proposed scheme makes the estimator robust to sudden disturbances, and (iii) show that the bounds of the estimates are crucial when updating the arrival cost. The paper is organized as follows. In Section 2, we present the moving horizon estimation problem, followed by the arrival cost update mechanism in Section 3. Section 4 presents a stability analysis; the results are discussed in Section 5; and Section 6 is devoted to the conclusions.

#### 2. Preliminaries and Problem Statement

*Notation 1. *In the following, we will consider the following notations: $x$ is a column vector and $x^\top$ is its transpose. $\{x_k\}_{k=0}^{T}$ denotes a finite sequence of vectors over a given index up to $T$. Let $A$ be a matrix and $A^\top$ its transpose. If $\det(A) \neq 0$, then $A^{-1}$ denotes its inverse. A symmetric real matrix $P$ is said to be positive definite ($P \succ 0$) if $x^\top P x > 0$ for all $x \neq 0$. The Euclidean matrix or vector norm is denoted by $\|\cdot\|$. For instance, if $P$ and $Q$ are positive definite, it follows that $P + Q$ is positive definite.

Consider the linear time-invariant discrete system

$$x_{k+1} = A x_k + G w_k, \qquad y_k = C x_k + v_k,$$

where $x_k$ is the state vector, $w_k$ is the process noise vector, $y_k$ is the measurement vector, and $v_k$ is the measurement noise vector. The process noise and the measurement noise are assumed to be bounded and otherwise unknown, i.e., $x_k \in \mathcal{X}$, $w_k \in \mathcal{W}$, $y_k \in \mathcal{Y}$, and $v_k \in \mathcal{V}$ for all $k \geq 0$, with $\mathcal{X}$, $\mathcal{W}$, $\mathcal{Y}$, and $\mathcal{V}$ being compact and convex sets considered closed and containing the origin.
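To make the setting concrete, the bounded-noise model above can be simulated as follows. This is a minimal sketch: the matrices `A`, `C` and the noise bounds are illustrative placeholders, not the example systems used later in the paper.

```python
import numpy as np

# Illustrative 2-state system; A, C and the noise bounds are placeholders.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])

def simulate(x0, T, w_bound=0.1, v_bound=0.05, seed=0):
    """Simulate x_{k+1} = A x_k + w_k, y_k = C x_k + v_k, clipping the
    Gaussian noises so they remain in compact sets, as assumed above."""
    rng = np.random.default_rng(seed)
    xs, ys = [np.asarray(x0, dtype=float)], []
    for _ in range(T):
        w = np.clip(rng.normal(0.0, 0.05, size=2), -w_bound, w_bound)
        v = np.clip(rng.normal(0.0, 0.02, size=1), -v_bound, v_bound)
        ys.append(C @ xs[-1] + v)
        xs.append(A @ xs[-1] + w)
    return np.array(xs), np.array(ys)
```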

Using all measurements, as the full information estimator does to estimate the system states, makes the size of the optimization problem grow over time and leads to a heavy computational burden.

To overcome this problem, the moving horizon estimation was developed to take into account a certain amount of data and possibly to make dynamic updates of certain parameters in an estimation window moving in time.

A penalty term called the arrival cost is essential in the formulation of the MHE because it takes past measurements into account when estimating current states. The MHE problem aims to determine, at each time, estimates of the current state and process noises that solve the optimization problem below.

For $T < N$, the full information problem arises:

$$\min_{x_0,\,\{w_k\}} \; \Gamma(x_0) + \sum_{k=0}^{T-1} \|w_k\|_{Q^{-1}}^2 + \sum_{k=0}^{T} \|v_k\|_{R^{-1}}^2 .$$

However, for all $T \geq N$, the optimization problem becomes

$$\min_{x_{T-N},\,\{w_k\}} \; Z_{T-N}(x_{T-N}) + \sum_{k=T-N}^{T-1} \|w_k\|_{Q^{-1}}^2 + \sum_{k=T-N}^{T} \|v_k\|_{R^{-1}}^2 ,$$

where the optimal estimate is $\hat{x}_{T|T}$, $\hat{w}_{k|T}$ is the process noise estimate based on the measurements available at time $T$, $v_k = y_k - C\hat{x}_{k|T}$ are the estimation residuals, and $Z_{T-N}(\cdot)$ is the arrival cost. To ensure fairly good robust stability of the MHE, a judicious and adequate choice of $Z_{T-N}$ and its parameters is necessary [21].

Usually, a relatively long estimation window is used to compensate for a poor approximation of the arrival cost [22]; this is typically the case in the conventional MHE. Conversely, for a short estimation window, a good approximation of the optimal cost function is required to properly incorporate past information into the problem; otherwise, poor estimates result.

In many dynamical systems, in particular linear ones, a Gaussian distribution is assumed and the arrival cost is updated using a Kalman filter [23]. Indeed, for unconstrained linear systems under the same Gaussian assumptions, the MHE is equivalent to the Kalman filter. In the following, we present the proposed mechanism for updating the arrival cost parameters of the MHE.

#### 3. Arrival Cost Update Algorithm

For the constrained MHE problem, it is difficult to find an exact analytical expression of the arrival cost; therefore, the quadratic approximation used for the unconstrained problem will be adopted:

$$Z_k(x) = (x - \bar{x}_k)^\top P_k^{-1} (x - \bar{x}_k),$$

where $P_k$ is the weighting matrix and $\bar{x}_k$ is the initial state of the window. Both define the approximation of the arrival cost function. The proposed approach for updating the arrival cost in this work, based on an adaptive algorithm, is formulated as an update of both elements: $\bar{x}_k$ and $P_k$.
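Evaluating this quadratic approximation is straightforward; a minimal helper (hypothetical naming):

```python
import numpy as np

def arrival_cost(x, xbar, P):
    """Evaluate the quadratic arrival-cost approximation
    (x - xbar)^T P^{-1} (x - xbar), without forming P^{-1} explicitly."""
    d = np.asarray(x, dtype=float) - np.asarray(xbar, dtype=float)
    return float(d @ np.linalg.solve(P, d))
```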

The update of the initial state $\bar{x}$ is made by a smoothed update, i.e., the smoothed estimate is adopted once it leaves the estimation horizon, which relies on the assumption that, after that point, the state of the constraints does not change [14, 24].

Regarding the update of the weighting matrix, note first that, in a stochastic interpretation of the MHE problem, the weighting matrix can be seen as the covariance of the state [8]. For unconstrained linear systems, analytical solutions exist for computing the weighting matrix; for constrained linear systems, on the contrary, approximate solutions are derived, which are fundamentally based on a recursive process of updating the useful information available from the system at a given time. In the proposed adaptive estimation method, an update based on a recursive process relying on the states and measurements is used [19, 25]:

with $0 < \lambda_k \leq 1$, where $\lambda_k$ is the forgetting factor and $\alpha_k$ is a function varying between 0 and 1 called *the dead zone*. Equation (6) produces a recursive real-time estimate of $P_k$ by updating the previous estimate with a dynamic temporal average of the incoming information. We can regard this updating mechanism as a dynamic temporal filter whose initial condition is $P_0$ and whose inputs are the new data.

Lemma 1. *If a matrix $P \succ 0$, then there exists an invertible matrix $S$ such that $S^\top P S = I$, the identity matrix. Moreover, $P^{-1}$ exists.*

*Proof. *See Lemma 7 in [26].

Note that the forgetting factor and the dead zone function each play a different role. When $\lambda_k$ is close to 1, the algorithm tends to keep the old data; in this case, all information is retained, improving the estimation. On the contrary, when $\lambda_k$ is close to 0, the algorithm tends to discard old information, relying mainly on recent data for the estimate [27]. This prevents large amounts of old data from degrading the estimator performance. The dead zone function is used to ensure robustness during estimation.

By the matrix inversion lemma, we can rewrite (6) in the recursive form (7), which propagates $P_k$ directly. A feedback loop is thereby introduced into the MHE problem to compute $P_k$; this makes it possible to ensure the convergence of the estimates and to adjust the update according to the amount of available information. The adaptive algorithm uses a variable forgetting factor $\lambda_k$, varying between 0 and 1 under certain conditions, and a dead zone $\alpha_k$, both updated according to [19]; there, $\Sigma_0$ is a sufficiently large positive design parameter and the dead-zone thresholds are further positive design parameters.
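The exact update rules for the forgetting factor and the dead zone in [19] are design-dependent; the sketch below uses simple stand-in rules (a relay-type dead zone and an error-driven forgetting factor) purely to illustrate the structure of the recursion (6), not the paper's exact expressions.

```python
import numpy as np

def update_P(P, phi, err, lam_min=0.2, sigma0=100.0, delta=0.05):
    """One adaptive update of the arrival-cost weighting matrix.
    The precision recursion P^{-1} <- lam * P^{-1} + alpha * phi phi^T is
    the standard least-squares form; `alpha` and `lam` below are
    illustrative stand-ins for the rules of the cited adaptive algorithm."""
    # Dead zone: freeze the update while the error is inside the noise band.
    alpha = 0.0 if err ** 2 <= delta else 1.0
    # Variable forgetting: discard old data faster when the error is large.
    lam = max(lam_min, 1.0 - alpha * err ** 2 / sigma0)
    P_inv = lam * np.linalg.inv(P) + alpha * np.outer(phi, phi)
    return np.linalg.inv(P_inv), lam, alpha
```

Note that with `alpha = 0` and `lam = 1` the matrix is left untouched, which is exactly the robustness mechanism described above.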

Any admissible disturbance sequence is bounded by some constant; thus, the size of the dead zone is also bounded, because the estimation error is bounded too.

The forgetting factor proposed in [19] is based on the constant information principle, which is adequate for external disturbances and unmodeled system uncertainties.

The following algorithm summarizes the proposed moving horizon estimator based on the adaptive estimation method for the arrival cost update.
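The algorithm listing itself is not reproduced here; based on the preceding sections, its structure is approximately:

```
Adaptive-arrival-cost MHE (sketch):
  initialize P, xbar, horizon length N
  for each time T = N, N+1, ...
    1. collect the window measurements y_{T-N}, ..., y_T
    2. solve the MHE problem with arrival cost
       Z(x) = (x - xbar)^T P^{-1} (x - xbar)
    3. compute the estimation error e_T from the residuals
    4. update the dead zone alpha_T and forgetting factor lambda_T from e_T  [19]
    5. update the weighting matrix, cf. (6):
       P^{-1} <- lambda_T * P^{-1} + alpha_T * (new information term)
    6. smooth-update xbar with the estimate leaving the window
```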

#### 4. Stability Analysis

The stability of the estimator requires that the prediction error converge to zero for the nominal system $x_{k+1} = A x_k$, $y_k = C x_k$, i.e., with no measurement and state noise. In [8], the authors require that the system evolve according to the constraints because, in the constrained case, a poor choice of constraint ranges can prevent convergence to the real values of the system.

Lemma 2. *Assume in the estimation algorithm that the disturbances and the system uncertainty vector are bounded; then, the following properties are valid:*(i)* $P_k$ converges and is uniformly bounded*(ii)*the normalized estimation error remains bounded*

*Proof. *By recursive calculation and the matrix inversion lemma, we obtain from (7) an explicit expression for $P_k^{-1}$. It follows that $P_k^{-1}$ is uniformly bounded; by monotonicity, it is then easily proven from (11) that $P_k$ converges uniformly.

*Remark 1. *The matrix $P_k$ admits a steady-state solution $P_\infty$.

*Proof. *The update expression of $P_k$ given by equation (7) can be seen as a particular case of the Riccati equation. Such a recursion is stable if and only if the eigenvalues of the associated transfer matrix lie strictly inside the unit circle of the complex plane. All eigenvalues of the matrix in question are greater than 0 and bounded by 1; hence, the recursion is stable and the matrix $P_k$ therefore has a steady-state solution $P_\infty$.
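Remark 1 can be checked numerically: with a constant forgetting factor in (0, 1) and a constant information term, the precision recursion contracts to a fixed point. The values below are illustrative (in the adaptive scheme, the forgetting factor and the information term vary with the data).

```python
import numpy as np

# With constant lam in (0,1) and a constant rank-one information term,
# the recursion P^{-1} <- lam * P^{-1} + phi phi^T is a contraction
# whose fixed point is phi phi^T / (1 - lam).
lam = 0.9
phi = np.array([1.0, 0.5])
P_inv = np.eye(2)
for _ in range(200):
    P_inv = lam * P_inv + np.outer(phi, phi)

P_inv_star = np.outer(phi, phi) / (1.0 - lam)  # analytic fixed point
```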

Theorem 1. *Let $V_k = e_k^\top P_k^{-1} e_k$ be a Lyapunov function for the moving horizon estimator with the adaptive arrival cost, where $e_k$ is the estimation error and the matrix $P_k$ is given by equation (6).*

*Proof. *Computing the difference $V_{k+1} - V_k$, we find that it is determined by matrices that are symmetric positive definite and symmetric positive semidefinite, respectively, for all $k$; then, if the nominal system (10) is observable, it follows that the MHE with the proposed adaptive arrival cost update is an asymptotically stable observer.

Regarding bounded stability, we can note that the arrival cost satisfies uniform upper and lower bounds. The estimator with adaptive arrival cost update is robustly globally asymptotically stable under bounded disturbances; moreover, if the process and measurement disturbances converge to zero, so does the estimation error.

*Proof. *cf. Theorem 7 in [28].

#### 5. Simulation Results

To demonstrate the performance of the proposed method, we adopt discrete-time models used in [13, 29].

##### 5.1. Example 1

where the process noise is a sequence of independent, normally distributed random values with zero mean and fixed covariance, and the measurement noise is likewise an independent, zero-mean, normally distributed sequence. The initial state is also normally distributed with zero mean and unit covariance. The constrained estimation problem is formulated with bounds on the states and noises, and the design parameters of the adaptive algorithm are fixed accordingly.

The proposed algorithm is compared with the classical MHE, which propagates the initial state and the arrival cost estimate using a Kalman filter. The sum of squared estimation errors is used as a benchmark, where the subscript denotes the $j$th component of the state vector. The average sum of squared estimation errors was computed over repeated trials. The results are shown in Table 1, where it can be seen that the performance of the proposed estimator is superior to the classical MHE.
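The benchmark metric, summed over time and state components, can be computed as follows (a trivial sketch):

```python
import numpy as np

def sum_square_error(x_true, x_est):
    """Sum over time steps and state components of the squared
    estimation error, i.e. sum_k sum_j (x_{j,k} - xhat_{j,k})^2."""
    x_true = np.asarray(x_true, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    return float(np.sum((x_true - x_est) ** 2))
```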

Figures 1(a) and 1(b) show the states and their estimates for both estimators with the chosen horizon length.


We can see that despite the relatively short horizon length chosen, the proposed estimator stays closer to the true states than the classical MHE, and its estimates track the states well. On the contrary, for the classical estimator, we observe divergent behavior on the second state. The squared error at every sample time is shown in Figures 1(c) and 1(d); the error of the proposed method is lower, and the performance of the classical MHE is quite degraded, probably because of the relatively short horizon length.

In Figures 2(a) and 2(b), we can observe the evolution of the forgetting factor and the dead zone function, respectively.


The forgetting factor varies between 0 and 1, providing a good adaptation capability and allowing the proposed algorithm to weight pertinent information in the data and follow sudden variations. The dead zone function switches between its two levels to allow a rapid adaptation of the weighting matrix when a sudden change occurs. We can note that when the estimation error is large, the dead zone function is equal to 1, and when the error is relatively low, it takes its lower value.

Figures 3(a) and 3(b) show the evolution of the trace of the weighting matrix for both estimators.


The classical MHE weighting matrix converges to its steady-state value and no longer changes, while the proposed estimator adapts its weighting matrix according to variations of the process noise.

A particular case is studied and presented in Figure 4, where the process noise covariance varies with time, taking a different value on each of five successive intervals. We can see in Figures 4(a) and 4(b) that the proposed approach still performs better than the conventional MHE despite the time variation of the covariance. We can also note that, compared with the previous scenario, the estimates are worse for both estimators, which can be explained by the fact that the estimators have no knowledge of this variation. Figures 4(c) and 4(d) show the trace of the weighting matrix used by each estimator. The classical weighting matrix converges to a steady value and no longer changes, while the proposed method adapts its weighting matrix according to the variations of the process noise.


It can be seen that the trace of the proposed estimator's weighting matrix tends to stabilize when the error is small. On the contrary, it becomes smaller when the difference between the estimate and the real state grows, ensuring good adaptation despite the influence of varying disturbances.

In Figures 4(e) and 4(f), we can see the evolution of the forgetting factor and the dead zone, respectively.

We can see that the evolution of the forgetting factor is independent of the dead zone; it evolves according to the variation of the process noise. We also notice that the forgetting factor adapts to the variations while keeping its value close to 1. The behavior of the dead zone, by contrast, is such that when the estimation error is small, the dead zone function tends to its lower value, while when the error grows, it is equal to 1, allowing good adaptation and an acceptable estimation.

Figure 5 shows the performance of the proposed estimator with different values of the horizon length. We can notice that, for small horizon lengths, the estimation error is not negligible, while for large values the error is appreciably smaller. Table 2 allows us to assess the results quantitatively.


##### 5.2. Example 2

Still with the aim of demonstrating the performance of the proposed approach, a second example system is considered.

Here, the disturbances are modeled with the same distribution as in Example 1, and the constrained estimation problem is formulated with the corresponding constraint sets.

The system is driven by a control input and a disturbance, the latter modeled through the Heaviside function. We again choose the same horizon length as in this example and fix the remaining design parameters. Figures 6(a)–6(c) show the states and their estimates, and Figures 6(d)–6(f) show the squared estimation errors. From these figures, we can see that the proposed method gives a good estimate compared with the conventional MHE; this is because, in this example, the inputs of the system are bounded and persistent, so the estimates can properly track the states. The divergence of the conventional MHE is due to the inclusion of a large, sudden disturbance; on the contrary, the proposed algorithm adapts to slow and/or sudden changes. This can be explained by the fact that the crucial parameters of the proposed algorithm can follow slow and/or sudden changes or disturbances and provide good adaptability.

Figures 7(a) and 7(b) show the evolution of the forgetting factor and the dead zone function, respectively. The same remarks as in the previous example apply here. The best performance of the proposed method occurred when the forgetting factor was close to 1 and the dead zone function equal to 0, i.e., when the disturbances were less important. When the disturbances were considerable, the forgetting factor adjusted to them.

Figures 8(a) and 8(b) allow appreciating the evolution of the trace of the weighting matrix for both estimators. The trace of the classical MHE weighting matrix has difficulty reaching steady state, unlike in the previous example, which explains the weak reconstruction; on the contrary, the evolution of the weighting matrix of the proposed estimator is smoother: when the disturbances are smaller, the trace of its weighting matrix behaves as in a classical constrained optimization problem.

When the noise increases, the estimator momentarily lags the system, and the forgetting factor decreases and moves away from 1 to allow a fairly fast adaptation of the weighting matrix. The average sum of squared estimation errors over repeated trials is shown in Table 3. Likewise, in Figure 9, we show the performance of the proposed estimator with different values of the horizon length. It can be seen that the errors do not change significantly with horizon size, showing that this algorithm is a good method for the arrival cost approximation even for short horizons. Table 4 allows us to assess the results quantitatively.


#### 6. Conclusion

In this paper, an alternative method for approximating the arrival cost in moving horizon estimation has been studied for linear systems. The proposed approach is based on the least-squares algorithm but includes a variable forgetting factor derived from the constant information principle and a dead zone that ensures robustness. The CasADi toolbox [30] was used to implement the optimization algorithm for real-time operation. The results obtained show that the adaptive estimation algorithm allows a good approximation of the arrival cost for moving horizon estimation. This can be explained by the fact that the crucial parameters of the proposed method can follow slow and/or sudden changes or disturbances and provide good adaptability. Since the update mechanism does not depend on the model, the proposed method can be extended to nonlinear systems.

#### Notations

FIE: | Full information estimator |

$x$: | State vector |

$y$: | Measurement outputs |

$J$: | Objective function |

$G$: | Noise weighting matrix |

$w$: | Process noise vector |

$v$: | Measurement noise vector |

$f$: | Process vector |

$h$: | Measurement vector |

$A$: | Jacobian matrix of $f$ with respect to $x$ |

$C$: | Jacobian matrix of $h$ with respect to $x$ |

$N$: | Horizon length |

$Z$: | Arrival cost |

$Q$: | Weighting matrix representing the confidence in the dynamic model |

$R$: | Weighting matrix representing the confidence in the measurements |

$\hat{x}$: | Estimate vector |

$\hat{x}^{\mathrm{FIE}}$: | Estimate vector in the FIE problem |

$x_0$: | Initial state |

$\bar{x}_0$: | Initial state (prior) |

$P$: | Weighting matrix |

$\lambda$: | Forgetting factor |

$\alpha$: | Dead zone function |

$\Sigma_0$: | Design parameter |

$e$: | Estimation error |

$\delta$, $\epsilon$: | Design parameters. |

#### Data Availability

No data were used to support the findings of this study.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.