Complexity
Volume 2018, Article ID 4789142, 11 pages
https://doi.org/10.1155/2018/4789142
Research Article

Multiple-Model Adaptive Estimation with a New Weighting Algorithm

School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China

Correspondence should be addressed to Weicun Zhang; weicunzhang@263.net

Received 1 March 2018; Revised 5 May 2018; Accepted 20 May 2018; Published 13 June 2018

Academic Editor: Wenbo Wang

Copyright © 2018 Weicun Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The state estimation of a complex dynamic stochastic system is considered; the system is described by a discrete-time state-space model with large parameter uncertainties (including the covariance matrices of the system noises and measurement noises). A new weighted multiple-model adaptive estimation (MMAE) scheme is presented, in which the classical weighting algorithm is replaced by a new one that reduces the computational burden and relaxes the convergence conditions. Simulation results verify the effectiveness of the proposed MMAE scheme for each type of parameter uncertainty.

1. Introduction

The Kalman filter (KF) [1] can be viewed as a sensor fusion or data fusion algorithm. It has many applications in information technology and engineering, such as the guidance, navigation, and control of vehicles, particularly aircraft and spacecraft [2]. It is also widely applied in signal processing and econometrics. Currently, Kalman filtering is one of the main topics in the field of robotic motion planning and control.

Practical implementation of the Kalman filter is often difficult due to the uncertainties and nonlinearities of the modeling of dynamic systems. Extensive research has been done to address the modeling uncertainty and nonlinearity problems in state estimation, filtering, and control. Among others, the multiple model adaptive estimation (MMAE) and multiple model adaptive control (MMAC) schemes have received much attention. The multiple model concept coincides with the logic of “divide and conquer.”

The idea of using multiple models for adaptive estimation originated with Magill [3]. Later on, Lainiotis [4], Athans et al. [5, 6], Anderson and Moore [7], and Li and Bar-Shalom [8] studied MMAE/MMAC for various application purposes.

Since the late 1990s, MMAC has been an important research direction in adaptive control, including switching MMAC [9–14] and weighted MMAC [15–20]. With MMAC, it is expected that the limitations of classical adaptive control (mainly self-tuning control and model reference adaptive control), such as poor transient performance and the stabilizability of an online estimated model, can be overcome. At the same time, a growing body of research has been conducted in the field of MMAE [21–23].

There are three main tasks in designing an MMAE system: first, construct a “local” model set to cover the parameter uncertainty or nonlinearity of the system as described in (1); second, design a local KF set according to the local model set, with one local KF designed for each local model; and third, design a weighting algorithm to calculate the weights of the local KFs. In operation, each local KF generates its own state estimate and a corresponding output error (residual) that feeds the weighting algorithm. The “global” MMAE state estimate is then a weighted summation of the local KF estimates. We use Figure 1, borrowed from reference [22], to describe the logic structure of an MMAE system. Figure 1 shows the state of the system, the control input, the system output, the unmeasurable measurement noise with uncertain covariance, and the unmeasurable system noise with uncertain covariance.
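The weighted-summation step can be sketched as follows. This is a minimal illustration in Python rather than the paper's MATLAB code; the weights are assumed to come from the weighting algorithm described later.

```python
import numpy as np

def mmae_estimate(local_estimates, weights):
    """Global MMAE estimate: weighted sum of the local KF state estimates.

    local_estimates: list of state vectors, one per local KF
    weights: nonnegative weights summing to 1 (from the weighting algorithm)
    """
    x = np.zeros_like(np.asarray(local_estimates[0], dtype=float))
    for xj, pj in zip(local_estimates, weights):
        x += pj * np.asarray(xj, dtype=float)
    return x

# Example: three local estimates, weight concentrated on the second model
estimates = [np.array([1.0, 0.0]), np.array([2.0, 1.0]), np.array([3.0, -1.0])]
weights = [0.1, 0.8, 0.1]
print(mmae_estimate(estimates, weights))  # -> [2.  0.7]
```

If one weight converges to 1 and the others to 0, the global estimate converges to that single filter's estimate, which is exactly the behavior the convergence theorems below establish.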

Figure 1: The block diagram of an MMAE [16].

In this paper, a new weighting algorithm adapted from reference [19] is adopted for two purposes: one is to simplify the classical weighting algorithm, which depends on the dynamic hypothesis test concept and the Bayesian formula [3, 16]; the other is to relax the convergence condition of the classical weighting algorithm.

It should be noted that a preliminary version of this manuscript was published in the proceedings of the 2018 International Conference on Artificial Life and Robotics [24]. In this augmented version, the following changes have been made: (1) the weighting algorithm was further improved; that is, weighting Algorithm 2 was presented to achieve a faster convergence rate than weighting Algorithm 1; (2) the proof of the convergence of the MMAE system was polished in detail; and (3) simulation results were presented to support the theoretical analysis.

The remainder of this paper is organized as follows. Section 2 provides a brief description of an MMAE system; Section 3 develops two weighting algorithms and analyzes their convergence; Section 4 gives the main results on the performance (convergence) of the MMAE system; Section 5 presents simulation results for four cases; and finally, Section 6 presents conclusions and future work.

It should also be noted that all limit operations in this paper are in the sense of probability one.

2. The Multiple-Model Adaptive Estimator

Consider a discrete-time system described by the following state-space equation:

The system matrices and noise covariance matrices in (1) are assumed piecewise continuous and uniformly bounded in time, and they contain unknown constant parameters collected in a parameter vector. The initial condition of (1) is assumed deterministic but unknown. Consider a finite set of candidate values of the parameter vector, one per model in the model set.

The MMAE can be described by (2): the global estimate of the state at each time instant is a weighted sum of the local estimates, with time-varying weights generated by the weighting algorithm given in the next section. Each local state estimate in (2) is generated by a corresponding local KF, described by the standard KF recursion with the matrices of the corresponding candidate model.
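Each local KF is the standard discrete-time predict/update recursion. The sketch below shows one cycle; the matrices A, C, Q, R are placeholders for one candidate model's parameters (the control input term is omitted for brevity), and the returned residual is the signal that feeds the weighting algorithm.

```python
import numpy as np

def kf_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a discrete-time Kalman filter,
    i.e., one local filter in the bank (the B*u term is omitted)."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Residual (innovation) fed to the weighting algorithm
    e = y - C @ x_pred
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    # Update
    x_new = x_pred + K @ e
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, e

# One scalar step with illustrative numbers
x, P = np.array([0.0]), np.array([[1.0]])
A = C = np.array([[1.0]])
Q = R = np.array([[0.1]])
x, P, e = kf_step(x, P, np.array([1.0]), A, C, Q, R)
```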

We expect that if the jth model in the model set is (or is close to) the real plant model, then the corresponding jth KF will generate the optimal state estimate. In addition, if the jth weight converges to 1 and the others converge to 0, then the state estimate of the MMAE will converge to that of the jth KF.

Thus, the key for an MMAE system is to construct an effective weighting algorithm as well as an appropriate model set that includes the real model, or the model closest to the plant. The weighting algorithm is described in the next section.

3. Weighting Algorithm

First of all, we give the residual/error signal of each local KF.

The classical weighting algorithm can be described by the following equations, which involve the residual of the jth Kalman filter, its steady-state residual covariance matrix, a constant scaling factor, and the number of measurements. For more details on the design and convergence analysis of the classical weighting algorithm, see references [3, 16].
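As a hedged illustration of this classical scheme, the sketch below scales each weight by the Gaussian likelihood of that filter's residual and renormalizes; the paper's exact equations may differ in details such as the scaling factor.

```python
import numpy as np

def bayesian_weights(prev_weights, residuals, S_list):
    """Classical MMAE weight update (dynamic hypothesis testing form).

    Each weight is multiplied by the Gaussian likelihood of that filter's
    residual, then the set is renormalized. S_list holds the (assumed
    steady-state) residual covariance matrix of each filter.
    """
    new = []
    for p, e, S in zip(prev_weights, residuals, S_list):
        e = np.atleast_1d(e)
        S = np.atleast_2d(S)
        m = len(e)
        beta = 1.0 / np.sqrt(((2 * np.pi) ** m) * np.linalg.det(S))
        lik = beta * np.exp(-0.5 * e @ np.linalg.inv(S) @ e)
        new.append(p * lik)
    new = np.array(new)
    return new / new.sum()

# Two filters with unit residual covariance; the small residual wins weight
w = bayesian_weights([0.5, 0.5], [np.array([0.0]), np.array([2.0])],
                     [np.eye(1), np.eye(1)])
```

Note that this update relies on the Gaussian likelihood, which is where the ergodicity/stationarity assumptions of the classical analysis enter.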

To relax the convergence condition of the classical weighting algorithm, a novel weighting algorithm was put forward in [19] for MMAC systems, to replace the classical weighting algorithm.

Algorithm 1 (the norm below denotes the Euclidean norm).
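The display equations (6)–(10) are not reproduced above, so the following is only an illustrative stand-in with the same qualitative behavior: it accumulates each filter's squared residual norm and applies a soft-min rule, so the weight of the filter with the smallest cumulative cost tends to 1. The actual recursions in [19] differ in form; no Gaussian or ergodicity assumptions are needed here.

```python
import numpy as np

def update_weights_alg1(J, residuals):
    """Illustrative norm-based weighting in the spirit of Algorithm 1.

    J: list of accumulated squared residual norms, one per local KF
    residuals: current residual vector of each local KF
    Returns updated costs and normalized weights; the filter with the
    smallest cumulative cost receives a weight approaching 1.
    """
    for j, e in enumerate(residuals):
        J[j] += float(np.linalg.norm(e) ** 2)
    s = np.exp(-(np.asarray(J) - min(J)))  # soft-min over cumulative cost
    return J, s / s.sum()

# Repeated updates: filter 0 has persistently smaller residuals
J = [0.0, 0.0]
for _ in range(10):
    J, w = update_weights_alg1(J, [np.array([0.1]), np.array([1.0])])
```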

According to [19], we have the following convergence result for the weighting algorithm (6)–(10). For more details on the proof of the theorem, see Lemma A.1 in the appendix.

Theorem 1. Suppose the jth model is the model closest to the true plant in the following sense with probability one, where the lower bound is a constant and the limit may be a constant or infinity.

Then, the weighting algorithm (6)–(10) leads to

It is worth pointing out that the convergence condition for the weighting algorithm (6)–(10) is weaker than that for the classical weighting algorithm. Specifically, the convergence condition (11) requires only the discriminability of the residuals, whereas the convergence conditions for the classical weighting algorithm additionally include ergodicity and stationarity; for more details, see reference [16].

In order to obtain a faster convergence rate, reference [25] proposed another weighting algorithm.

Algorithm 2, in which the ceiling function returns the smallest integer not less than its argument, that is,
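As a quick illustration of the ceiling operation used in Algorithm 2:

```python
import math

# The ceiling function: the smallest integer not less than the argument
print(math.ceil(2.3), math.ceil(5), math.ceil(-1.7))  # -> 3 5 -1
```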

According to [25], we have the following convergence result for the weighting algorithm (13)–(18). For more details on the proof of the theorem, see Lemma A.2 in the appendix.

Theorem 2. Suppose the jth model is the model closest to the true plant in the following sense with probability one,

Then, the weighting algorithm (13)–(18) leads to

Remarks. Both algorithms, (6)–(10) and (13)–(18), can be used in MMAE; the choice between them should be made according to the specific engineering conditions, such as software and hardware configurations.

4. Main Results

We consider only the situation in which the model set includes the unique real model of system (1); more complicated situations are left for future research. We have the following results on the convergence of the proposed MMAE system.

Theorem 3. If the following conditions are satisfied:
(1) The jth model is the only real model of system (1) in the following sense with probability one, where the lower bound is a constant and the limit may be a constant or infinity.
(2) Each Kalman filter is designed with assurance of stability; that is, the state estimates of each Kalman filter are bounded.

Then, the state estimates of the MMAE with the weighting algorithm (6)–(10) will converge to the optimal estimates given by the jth KF corresponding to the real model, that is,

Proof. According to Theorem 1, condition (1) of Theorem 3, that is, (22), leads to (24). Further, condition (2) of Theorem 3 guarantees (25). Then, based on (24) and (25), the claimed convergence follows. That completes the proof.

Theorem 4. If the following conditions are satisfied:
(1) The jth model is the only real model of system (1) in the following sense with probability one. (2) Each Kalman filter is designed with assurance of stability; that is, the state estimates of each Kalman filter are bounded.

Then, the state estimates of the MMAE with the weighting algorithm (13)–(18) will converge to the optimal estimates given by the jth KF corresponding to the real model, that is,

Proof. According to Theorem 2, condition (1) of Theorem 4, that is, (27), leads to (29). Further, condition (2) of Theorem 4 guarantees (30). Then, based on (29) and (30), the claimed convergence follows. That completes the proof.

5. Simulation Results

To test the effectiveness of the proposed MMAE scheme, and specifically of the weighting algorithms, four cases of simulation were conducted with MATLAB® R2014a. The simulation program was coded as an M-file.

In system (1), we adopt the following settings for all four cases.
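The concrete system matrices and model sets below are given as display equations not reproduced here, so the following self-contained Python sketch illustrates the shape of such an experiment on a hypothetical scalar plant: a bank of four local KFs, one per candidate parameter, with a soft-min weighting over accumulated residual energy standing in for the paper's Algorithms 1 and 2. All numerical values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar plant standing in for system (1):
#   x(k+1) = a*x(k) + w(k),   y(k) = x(k) + v(k),   true a = 0.9
a_true, q, r = 0.9, 0.04, 0.01
candidates = [0.1, 0.5, 0.9, 1.3]   # model set; index 2 is the true model

x = 1.0
xs = np.zeros(len(candidates))      # local KF state estimates
Ps = np.ones(len(candidates))       # local KF error covariances
J = np.zeros(len(candidates))       # accumulated squared residuals

for k in range(1000):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))    # plant state
    y = x + rng.normal(0.0, np.sqrt(r))             # measurement
    for j, a in enumerate(candidates):              # bank of local KFs
        xp = a * xs[j]
        Pp = a * Ps[j] * a + q
        e = y - xp                                  # residual
        S = Pp + r
        K = Pp / S
        xs[j] = xp + K * e
        Ps[j] = (1.0 - K) * Pp
        J[j] += e * e

# Soft-min weighting over cumulative residual energy (illustrative stand-in
# for the paper's weighting algorithms):
s = np.exp(-(J - J.min()))
w = s / s.sum()
print(w.round(3))
```

With well-separated candidate parameters, the weight of the filter built on the true parameter comes to dominate, mirroring the convergence behavior reported in the four cases below.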

Case 1. One group of system parameters is uncertain, and the other parameters are constants. The real model of the plant is in the model set.

In the model set, we have 4 models:

Model 1.

Model 2.

Model 3.

Model 4.

And the initial state is known.

The simulation results of the weight signals obtained by three weighting algorithms are shown in Figure 2.

Case 2. One group of system parameters is uncertain, and the other parameters are constants. The real model of the plant is in the model set.

Figure 2: The weight signals of three weighting algorithms in Case 1.

In the model set, we have 4 models:

Model 1.

Model 2.

Model 3.

Model 4.

And the initial state is known.

The simulation results of the weight signals obtained by three weighting algorithms are shown in Figure 3.

Case 3. The covariance matrix of the system noises is uncertain, and the other parameters are constants. The real model of the plant is in the model set.

Figure 3: The weight signals of three weighting algorithms in Case 2.

In the model set, we have 4 models:

Model 1.

Model 2.

Model 3.

Model 4.

And the initial state is known.

The simulation results of the weight signals obtained by three weighting algorithms are shown in Figure 4.

Case 4. The covariance matrix of the measurement noises is uncertain, and the other parameters are constants. The real model of the plant is in the model set.

Figure 4: The weight signals of three weighting algorithms in Case 3.

In the model set, we have 4 models:

Model 1.

Model 2.

Model 3.

Model 4.

And the initial state is known.

The simulation results of the weight signals obtained by three weighting algorithms are shown in Figure 5.

Figure 5: The weight signals of three weighting algorithms in Case 4.

From the simulation results, in all four cases the weight signals converge correctly and identify the correct local KF.

Another observation is that the larger the difference between the real (or closest) model and the other models in the set, the faster the weights converge.

6. Conclusions

A new MMAE scheme is proposed with improved weighting algorithms adapted from those of MMAC systems. Both theoretical analysis and simulation results verify the effectiveness of the proposed MMAE scheme. In the future, our research will focus on three aspects: (1) improving the weighting algorithm to achieve a faster convergence rate and better disturbance rejection; (2) adapting the weighting algorithm to time-varying and nonlinear systems; and (3) considering the situation in which the true model of the plant is not included in the model set, that is, adding an online self-tuning model based on a machine learning algorithm (traditionally also known as a parameter identification algorithm). It is worth pointing out that the weighting algorithms adopted in this paper are themselves online machine learning algorithms.

Appendix

Lemma A.1. Consider the weighting algorithm (6)–(10). Suppose the jth model is the closest in the model set to the true plant in the following sense with probability one, beyond some unknown finite time instant, where the lower bound is a constant and the limit may be a constant or infinity.
Then, we have

Proof. It is not difficult to see that the algorithm (6)–(10), together with (A.1), guarantees with probability one that (A.2) holds. Further, considering (A.2), we have (A.4) and (A.5). Putting (A.4), (A.5), and (9) together, we obtain the desired limit of the accumulated costs; then, from (10), the stated limits of the weights follow. That completes the proof of Lemma A.1.

Lemma A.2. Consider the weighting algorithm (13)–(18). Suppose there is a model, say the jth one, which is closest to the true plant in the following sense with probability one, beyond some unknown finite time instant.
Then, the weighting algorithm (13)–(18) guarantees

Proof. It is not difficult to see that the algorithm (13)–(18), together with (A.8), guarantees with probability one that (A.10) holds. Further, if the two stated conditions hold, then we still have (A.12). Putting (A.10), (A.12), and (17) together, we obtain the desired limit; thus, from (18), the stated limits of the weights follow. That completes the proof of Lemma A.2.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their helpful comments and constructive suggestions for the revision of the paper. This work was supported by the National Key Technologies Research and Development Program of China (No. 2013BAB02B07) and the National Natural Science Foundation of China (No. 61520106010, No. 61741302).

References

  1. R. E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
  2. P. Zarchan and H. Musoff, Fundamentals of Kalman Filtering: A Practical Approach, American Institute of Aeronautics and Astronautics, 4th edition, 2015.
  3. D. Magill, “Optimal adaptive estimation of sampled stochastic processes,” IEEE Transactions on Automatic Control, vol. 10, no. 4, pp. 434–439, 1965.
  4. D. G. Lainiotis, “Partitioning: a unifying framework for adaptive systems, II: Control,” Proceedings of the IEEE, vol. 64, no. 8, pp. 1182–1198, 1976.
  5. M. Athans, D. Castanon, K. Dunn et al., “The stochastic control of the F-8C aircraft using a multiple model adaptive control (MMAC) method, part I: equilibrium flight,” IEEE Transactions on Automatic Control, vol. 22, no. 5, pp. 768–780, 1977.
  6. M. Athans, Y. Baram, D. Castanon et al., “Investigation of the multiple method adaptive control (MMAC) method for flight control systems,” Technical Report NASA-CR-2916, NASA, 1979.
  7. B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, Upper Saddle River, NJ, USA, 1979.
  8. X.-R. Li and Y. Bar-Shalom, “Multiple-model estimation with variable structure,” IEEE Transactions on Automatic Control, vol. 41, no. 4, pp. 478–493, 1996.
  9. B. D. O. Anderson, T. S. Brinsmead, F. de Bruyne, J. Hespanha, D. Liberzon, and A. S. Morse, “Multiple model adaptive control. Part 1: finite controller coverings,” International Journal of Robust and Nonlinear Control, vol. 10, no. 11-12, pp. 909–929, 2000.
  10. J. Hespanha, D. Liberzon, A. S. Morse, B. D. O. Anderson, T. S. Brinsmead, and F. De Bruyne, “Multiple model adaptive control. Part 2: switching,” International Journal of Robust and Nonlinear Control, vol. 11, no. 5, pp. 479–496, 2001.
  11. A. S. Morse, “Supervisory control of families of linear set-point controllers, part I: exact matching,” IEEE Transactions on Automatic Control, vol. 41, no. 10, pp. 1413–1431, 1996.
  12. A. S. Morse, “Supervisory control of families of linear set-point controllers, part II: robustness,” IEEE Transactions on Automatic Control, vol. 42, no. 11, pp. 1500–1515, 1997.
  13. K. S. Narendra and J. Balakrishnan, “Adaptive control using multiple models,” IEEE Transactions on Automatic Control, vol. 42, no. 2, pp. 171–187, 1997.
  14. K. S. Narendra and O. A. Driollet, “Stochastic adaptive control using multiple models for improved performance in the presence of random disturbances,” International Journal of Adaptive Control and Signal Processing, vol. 15, no. 3, pp. 287–317, 2001.
  15. M. Athans, S. Fekri, and A. Pascoal, “Issues on robust adaptive feedback control,” IFAC Proceedings Volumes, vol. 38, no. 1, pp. 547–577, 2005.
  16. S. Fekri, M. Athans, and A. Pascoal, “Issues, progress and new results in robust adaptive control,” International Journal of Adaptive Control and Signal Processing, vol. 20, no. 10, pp. 519–579, 2006.
  17. S. Fekri, M. Athans, and A. Pascoal, “Robust multiple model adaptive control (RMMAC): a case study,” International Journal of Adaptive Control and Signal Processing, vol. 21, no. 1, pp. 1–30, 2007.
  18. G. J. Schiller and P. S. Maybeck, “Control of a large space structure using MMAE/MMAC techniques,” IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 4, pp. 1122–1131, 1997.
  19. W. Zhang, “Stable weighted multiple model adaptive control: discrete-time stochastic plant,” International Journal of Adaptive Control and Signal Processing, vol. 27, no. 7, pp. 562–581, 2013.
  20. M. Huang, X. Wang, and Z. Wang, “Multiple model self-tuning control for a class of nonlinear systems,” International Journal of Control, vol. 88, no. 10, pp. 1984–1994, 2015.
  21. A. P. Aguiar, M. Athans, and A. M. Pascoal, “Convergence properties of a continuous-time multiple-model adaptive estimator,” in 2007 European Control Conference (ECC), pp. 1530–1536, Kos, Greece, July 2007.
  22. A. P. Aguiar, V. Hassani, A. M. Pascoal, and M. Athans, “Identification and convergence analysis of a class of continuous-time multiple-model adaptive estimators,” IFAC Proceedings Volumes, vol. 41, no. 2, pp. 8605–8610, 2008.
  23. J. Bernat and S. Stepien, “Multi-modelling as new estimation schema for high-gain observers,” International Journal of Control, vol. 88, no. 6, pp. 1209–1222, 2015.
  24. W. Zhang, S. Wang, and Y. Zhang, “Multiple-model adaptive estimation with a new weighting algorithm,” in Proceedings of the 2018 International Conference on Artificial Life and Robotics, pp. 708–711, Beppu, Japan, 2018.
  25. W. Zhang, “Further results on stable weighted multiple model adaptive control: discrete-time stochastic plant,” International Journal of Adaptive Control and Signal Processing, vol. 29, no. 12, pp. 1497–1514, 2015.