Advances in Numerical Analysis
Volume 2011 (2011), Article ID 184314, 15 pages
http://dx.doi.org/10.1155/2011/184314
Research Article

Refinement Methods for State Estimation via Sylvester-Observer Equation

H. Saberi Najafi and A. H. Refahi Sheikhani

Department of Applied Mathematics, Faculty of Mathematical Sciences, University of Guilan, P.O. Box 1914, Rasht, Iran

Received 6 December 2010; Revised 26 April 2011; Accepted 14 June 2011

Academic Editor: William J. Layton

Copyright © 2011 H. Saberi Najafi and A. H. Refahi Sheikhani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present new iterative methods based on a refinement process for solving large sparse Sylvester-observer equations arising in state estimation of a continuous-time system. These methods use projection to produce low-dimensional Sylvester-observer matrix equations that are then solved by direct methods. Moreover, the refinement process described in this paper has the capability of improving the results obtained by any other method. Some numerical results are reported to illustrate the efficiency of the proposed methods.

1. Introduction

Consider the continuous-time linear system
$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t), \tag{1.1}$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{r \times n}$.

It is well known that all state-feedback problems, such as feedback stabilization, the LQR problem, and the state-feedback $H_\infty$-control problem, require the state vector explicitly [1]. In most practical situations, the states are not fully accessible and the designer knows only the output and input vectors. The unavailable states then need to be estimated accurately from the knowledge of the matrices $A$, $B$, and $C$, the output vector $y(t)$, and the input vector $u(t)$. There are two common procedures for state estimation: one via eigenvalue assignment (EVA) and the other via solution of the Sylvester-observer equation. The main step in state estimation via solution of the Sylvester-observer equation is solving an equation of the form
$$XA - FX = GC, \tag{1.2}$$
where $A \in \mathbb{R}^{n \times n}$ and $C \in \mathbb{R}^{r \times n}$ are given, and $X \in \mathbb{R}^{n \times n}$, $F \in \mathbb{R}^{n \times n}$, and $G \in \mathbb{R}^{n \times r}$ are to be determined. Sylvester-observer equations (1.2) play vital roles in a number of applications such as control and communications theory [1], model reduction [2–4], the numerical solution of matrix differential Riccati equations [5], and image processing [6].

The analytical solution of the matrix equation (1.2) has been considered by many authors; see [7, 8]. Direct methods for solving the matrix equation (1.2) are attractive if the matrices are of small size. These methods are based on the Schur decomposition, by which the original equation is transformed into a form that is easy to solve by forward substitution. Moreover, the matrices of large practical problems are very sparse. Since the standard methods for solving Sylvester equations destroy the sparsity of the problem, they are applicable only to matrices of small size; see [9–11]. Iterative projection methods for solving large Sylvester-observer matrix equations have been developed during the past years; see [12–16]. These methods use the Arnoldi process to compute an orthonormal basis of a certain Krylov subspace. In this paper, we extend this idea and propose a new projection method for solving (1.2) based on a weighted block Krylov subspace process. These methods are based on the reduction of the large sparse Sylvester-observer equation, by orthogonal similarity, to a low-dimensional problem that is then solved by direct methods. Moreover, the refinement process presented in Sections 4 and 5 has the capability of improving the results obtained by any other method.

The remainder of the paper is organized as follows. In Section 2, we describe some fundamental results about control theory. Then, we discuss how the states of a continuous-time system can be estimated in Section 3. In Sections 4 and 5, we introduce two new iterative methods for solving large sparse Sylvester-observer equations. These methods are based on the reduction of the large sparse Sylvester-observer equation to a low-dimensional problem. Section 6 is devoted to some numerical tests. Some concluding remarks are given in Section 7.

2. Some Fundamental Results

The two basic concepts in control theory are controllability and observability of a control system. In this section, we will state some well-known facts about controllability and observability for convenient use later in the paper. For an excellent account of controllability and observability and related results, see [1].

Definition 2.1. The system (1.1) is said to be controllable if, starting from any initial state $x(0)$, the system can be driven to any final state $x_f$ in some finite time $t_f$, choosing the input vector $u(t)$, $0 \le t \le t_f$, appropriately.

Observability is the dual concept of controllability.

Definition 2.2. The system (1.1) is said to be observable if there exists $t_1 > 0$ such that the initial state $x(0)$ can be uniquely determined from the knowledge of $u(t)$ and $y(t)$ for all $t \in [0, t_1]$.

Remark 2.3. Since the matrix $C$ does not have any role in the definition of controllability, the controllability of the system (1.1) (see [1]) is often referred to as the controllability of the pair $(A, B)$. Similarly, since the matrix $B$ does not have any role in the definition of observability, the observability of the system (1.1) is often referred to as the observability of the pair $(A, C)$.

Some well-known criteria of controllability and observability are now stated in the next two theorems. The proofs of Theorems 2.4 and 2.5 can be found in [1]. In the following, $A$ is $n \times n$, $B$ is $n \times m$, and $C$ is $r \times n$.

Theorem 2.4. The pair $(A, B)$ is controllable if and only if any one of the following equivalent conditions holds.
(1) The controllability matrix $C_M = [B, AB, A^2B, \ldots, A^{n-1}B]$ has rank $n$.
(2) The matrix $[A - \lambda I, B]$ has rank $n$ for every eigenvalue $\lambda$ of $A$.
(3) If $(\lambda, y)$ is an eigenpair of $A^T$, that is, $y^T A = \lambda y^T$, then $y^T B \neq 0$.

Proof. See [1].

Theorem 2.5. The pair $(A, C)$ is observable if and only if any one of the following equivalent conditions holds.
(1) The observability matrix $O_M = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$ has rank $n$.
(2) The matrix $\begin{bmatrix} A - \lambda I \\ C \end{bmatrix}$ has rank $n$ for every eigenvalue $\lambda$ of $A$.
(3) If $(\lambda, x)$ is an eigenpair of $A$, that is, $Ax = \lambda x$, then $Cx \neq 0$.

Proof. See [1].
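Criterion (1) of Theorems 2.4 and 2.5 is straightforward to check numerically. The following MATLAB fragment is a minimal sketch of such a check; ctrb and obsv are Control System Toolbox functions, and the matrices A, B, C are assumed to be given.

    % Rank tests of Theorems 2.4 and 2.5, criterion (1).
    n = size(A, 1);
    is_controllable = (rank(ctrb(A, B)) == n);   % rank of [B, AB, ..., A^(n-1)B]
    is_observable   = (rank(obsv(A, C)) == n);   % rank of [C; CA; ...; CA^(n-1)]

Note that for badly scaled problems these floating-point rank tests can be unreliable; they are meant only as a quick diagnostic.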

Definition 2.6. A matrix $A$ is called a stable matrix if all of the eigenvalues of $A$ have negative real parts.

3. State Estimation

In this section, we discuss how the states of the continuous-time linear system (1.1) can be estimated. The discussion here applies equally to discrete-time systems, possibly with some minor changes, so we concentrate on the continuous-time case only. We describe two common procedures for state estimation: one via eigenvalue assignment (EVA) and the other via solution of the Sylvester-observer equation.

3.1. State Estimation via Eigenvalue Assignment

Now consider the linear time-invariant system (1.1). Let $\hat{x}(t)$ be an estimate of the state vector $x(t)$. Obviously, we would like to construct the vector $\hat{x}(t)$ in such a way that the error $e(t) = x(t) - \hat{x}(t)$ approaches zero as fast as possible, for all initial states $x(0)$ and for every input $u(t)$. The following theorem shows that the problem of state estimation can be solved by finding a matrix $K$ such that the matrix $A - KC$ has a suitable desired spectrum; see [1].

Theorem 3.1. If $(A, C)$ is observable, then the states of the system (1.1) can be estimated by
$$\dot{\hat{x}}(t) = (A - KC)\hat{x}(t) + Ky(t) + Bu(t),$$
where $K$ is constructed such that $A - KC$ is a stable matrix. The error $e(t) = x(t) - \hat{x}(t)$ is governed by
$$\dot{e}(t) = (A - KC)e(t),$$
and $e(t) \to 0$ as $t \to \infty$.

Proof. See [1].
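In MATLAB, one possible way to construct such a $K$ is by pole placement through duality; this is a sketch assuming the Control System Toolbox function place, with an arbitrarily chosen stable spectrum.

    % Compute K so that A - K*C is stable with prescribed eigenvalues.
    poles = -(1 : size(A, 1));       % assumed desired stable spectrum
    K = place(A', C', poles)';       % duality: eig(A - K*C) = poles
    % Observer of Theorem 3.1:  xhat' = (A - K*C)*xhat + K*y(t) + B*u(t)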

3.2. State Estimation via Sylvester Equation

There is another approach to state estimation. Knowing $A$, $B$, $C$, $y(t)$, and $u(t)$, let us construct the system
$$\dot{z}(t) = Fz(t) + Gy(t) + Pu(t), \tag{3.3}$$
where $F$ is $n \times n$, $G$ is $n \times r$, and $P$ is $n \times m$, in such a way that for some constant nonsingular matrix $X$ the error vector $e(t) = z(t) - Xx(t) \to 0$ as $t \to \infty$ for all $x(0)$, $z(0)$, and for every input $u(t)$. The vector $\hat{x}(t) = X^{-1}z(t)$ will then be an estimate of $x(t)$. The system (3.3) is then said to be a state observer for the system (1.1). D. Luenberger originated the idea, and such a system is hence referred to in control theory as the Luenberger observer; see [1].

We now show that the system (3.3) will be a state observer if the matrices $F$, $G$, $P$, and $X$ satisfy certain requirements.

Theorem 3.2. The system (3.3) is a state observer of the system (1.1), that is, $z(t)$ is an estimate of $Xx(t)$ in the sense that the error $e(t) = z(t) - Xx(t) \to 0$ as $t \to \infty$ for any initial conditions $x(0)$, $z(0)$ and any input $u(t)$, if
(1) $XA - FX = GC$,
(2) $P = XB$,
(3) $F$ is stable.

Proof. See [1].

Definition 3.3. The matrix equation
$$XA - FX = GC,$$
where $A \in \mathbb{R}^{n \times n}$ and $C \in \mathbb{R}^{r \times n}$ are given and $X \in \mathbb{R}^{n \times n}$, $F \in \mathbb{R}^{n \times n}$, and $G \in \mathbb{R}^{n \times r}$ are to be found, is called the Sylvester-observer equation.

Theorem 3.2 suggests the following method for the observer design; see [1].

Algorithm 3.4 (full-order observer design via Sylvester-observer equation). One has the following.
Inputs
The system matrices $A$, $B$, and $C$ of order $n \times n$, $n \times m$, and $r \times n$, respectively.
Output
An estimate $\hat{x}(t)$ of the state vector $x(t)$.
Assumptions. (1) $(A, C)$ is observable.
Step 1. Find a nonsingular solution $X$ of the Sylvester-observer equation (1.2) by choosing $F$ as a stable matrix and choosing $G$ in such a way that the resulting solution $X$ is nonsingular.
Step 2. Compute $P = XB$.
Step 3. Construct the observer by solving the system of differential equations
$$\dot{z}(t) = Fz(t) + Gy(t) + Pu(t), \qquad z(0) = z_0. \tag{3.12}$$
Step 4. Find an estimate $\hat{x}(t)$ of $x(t)$: $\hat{x}(t) = X^{-1}z(t)$.
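A minimal MATLAB sketch of Algorithm 3.4 follows, assuming the Sylvester-observer equation has the form $XA - FX = GC$ as in (1.2). The test dimensions, the random choice of $F$ and $G$, and the unit-step input are placeholder assumptions; lyap and ode23 are the MATLAB functions used elsewhere in this paper.

    % Full-order observer design via the Sylvester-observer equation.
    n = 4; m = 1; r = 2;                     % assumed small test dimensions
    A = randn(n); B = randn(n, m); C = randn(r, n);
    F = -diag(1:n);                          % Step 1: a stable F ...
    G = randn(n, r);                         % ... and a (random) G
    X = lyap(-F, A, -G*C);                   % solves -F*X + X*A - G*C = 0
    assert(rank(X) == n)                     % the solution must be nonsingular
    P = X * B;                               % Step 2
    u = @(t) 1;                              % unit-step input (scalar, m = 1)
    % Step 3: integrate plant and observer together; w = [x; z]
    f = @(t, w) [A*w(1:n) + B*u(t); ...
                 F*w(n+1:2*n) + G*(C*w(1:n)) + P*u(t)];
    [t, w] = ode23(f, [0 10], [ones(n,1); zeros(n,1)]);
    xhat = (X \ w(:, n+1:2*n).').';          % Step 4: xhat = X^{-1} * z

Whatever the data, the error $e(t) = z(t) - Xx(t)$ obeys $\dot{e} = Fe$, so the estimate converges at a rate set by the spectrum of $F$.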

3.3. Characterization of Nonsingular Solutions of the Sylvester Equation

In this section, we describe some necessary conditions for the unique solution of the Sylvester-observer equation to be nonsingular. The following theorem was proved by de Souza and Bhattacharyya [8]; the proof can be found in [1].

Theorem 3.5. Let $A$, $F$, $G$, and $C$, respectively, be of order $n \times n$, $n \times n$, $n \times r$, and $r \times n$. Let $X$ be the unique solution of the Sylvester-observer equation (1.2). Then, necessary conditions for $X$ to be nonsingular are that $(A, C)$ is observable and $(F, G)$ is controllable.

Proof. See [1].

Corollary 3.6. If $C$ is $1 \times n$ and $G$ is $n \times 1$, then necessary and sufficient conditions for the unique solution $X$ of (1.2) to be nonsingular are that $(F, G)$ is controllable and $(A, C)$ is observable.

Proof. See [1].

Remark 3.7. According to Theorem 3.5 and Corollary 3.6, the controllability of $(F, G)$ and the observability of $(A, C)$ guarantee the existence of a nonsingular solution of the Sylvester-observer equation (1.2) in Step 1 of Algorithm 3.4. Moreover, there are other choices for $F$ and $G$ in Step 1 of Algorithm 3.4 provided $(F, G)$ is controllable. Also, we can use Theorems 2.4 and 2.5 for analyzing the controllability of $(F, G)$ and the observability of $(A, C)$.

Example 3.8. In this example we show how the Sylvester-observer equation (1.2) can be applied in state estimation of a continuous-time system (1.1). To this end, we first use the MATLAB function ode23 to solve (1.1) directly, with the input $u(t)$ taken as the unit step function. Then, we apply Algorithm 3.4 for computing the estimate $\hat{x}(t)$; the differential equation (3.12) was likewise solved with the MATLAB function ode23. The comparison of the state and the estimate for this example is shown in Figure 1. The solid line corresponds to the exact state and the dotted line corresponds to the estimated state. According to criterion (1) of Theorem 2.5, the pair $(A, C)$ is observable. Thus, we can use Algorithm 3.4 for state estimation of (1.1).

Figure 1: The (a) first and (b) second variables of the state and estimate for Example 3.8.

Step 1. Choose a stable matrix $F$ and a matrix $G$. According to criterion (1) of Theorem 2.4, the pair $(F, G)$ is controllable. Thus, by Corollary 3.6, the solution $X$ of the Sylvester-observer equation $XA - FX = GC$ is nonsingular (computed by the MATLAB function lyap).

Step 2. Compute $P = XB$.

Step 3. An estimate of $x(t)$ is $\hat{x}(t) = X^{-1}z(t)$, where $z(t)$ is given by the observer equation (3.12).

Remark 3.9. According to Algorithm 3.4, the most important step is Step 1. In the case that $n$ is small, there are many reliable algorithms for solving the Sylvester-observer equation, and the states of a continuous-time system can then be estimated. However, for large and sparse systems, solving the Sylvester-observer equation by the available methods can be costly. In Sections 4 and 5, we introduce two iterative refinement methods for solving large sparse Sylvester-observer equations.

4. Block Refinement Method

As we already mentioned, many numerical methods have been developed by different authors; see [1, 13, 17]. For example, the Hessenberg-Schur method is now widely used as an effective computational method for the Sylvester-observer equation, but the numerical stability of this method has not been investigated. Since iterative methods are very efficient for the solution of large computational problems, it is natural to construct an iterative method for solving the Sylvester-observer equation
$$XA - FX = GC,$$
where $A \in \mathbb{R}^{n \times n}$, $C \in \mathbb{R}^{r \times n}$, $X, F \in \mathbb{R}^{n \times n}$, and $G \in \mathbb{R}^{n \times r}$. In this section we show that the approximate solution of the Sylvester-observer equation obtained by any method can be improved; in other words, the accuracy can be increased. If this idea is applicable, then we have found an iterative method for solving the Sylvester-observer equation.

Theorem 4.1. Let $X_0$ be the approximate solution obtained by an arbitrary method for the matrix equation (1.2), and let $R_0 = GC - X_0A + FX_0$ be the corresponding residual. If $m$ steps of the block Arnoldi process for the matrices $A^T$ and $F$ have been run, producing orthonormal bases $V_m$ and $W_m$ and block Hessenberg matrices $H_m^A$ and $H_m^F$, and $Z$ is the solution of the low-dimensional Sylvester equation
$$Z(H_m^A)^T - H_m^F Z = W_m^T R_0 V_m, \tag{4.1}$$
then the residual $R_1 = GC - X_1A + FX_1$ of the refined solution $X_1 = X_0 + W_m Z V_m^T$ satisfies the Galerkin condition $W_m^T R_1 V_m = 0$.

Proof. Let $V_m$ and $W_m$ be the two orthogonal bases constructed by the block Arnoldi process for the matrices $A^T$ and $F$, respectively. Thus, we have
$$A^T V_m = V_m H_m^A + \hat{V}_{m+1} H_{m+1,m}^A E_m^T, \qquad F W_m = W_m H_m^F + \hat{W}_{m+1} H_{m+1,m}^F E_m^T, \tag{4.3}$$
where $E_m$ denotes the last block column of the identity matrix and $\hat{V}_{m+1}$, $\hat{W}_{m+1}$ are orthogonal to $V_m$ and $W_m$, respectively. Also, the square block Hessenberg matrices $H_m^A \in \mathbb{R}^{ms_1 \times ms_1}$ and $H_m^F \in \mathbb{R}^{ms_2 \times ms_2}$ (where $s_1$ and $s_2$ are the dimensions of the blocks), whose nonzero entries are the blocks $H_{ij}^A$ and $H_{ij}^F$ constructed by the block Arnoldi process, can be expressed as $H_m^A = V_m^T A^T V_m$ and $H_m^F = W_m^T F W_m$.
If we set
$$X_1 = X_0 + W_m Z V_m^T, \tag{4.5}$$
then the corresponding residual satisfies
$$W_m^T R_1 V_m = W_m^T R_0 V_m - \bigl(Z(H_m^A)^T - H_m^F Z\bigr).$$
Since $Z$ is the solution of (4.1), and by using (4.3), we have $W_m^T R_1 V_m = 0$.

According to Theorem 4.1, we can develop an iterative method for solving the Sylvester-observer equation when the matrices $A$, $F$, $G$, and $C$ are large and sparse. To realize this idea, if we choose $X_0 = 0$, then instead of solving (1.2) we can solve (4.1). In other words, in this method we first transform the initial Sylvester-observer equation to another Sylvester equation of lower dimension and then, in each iteration step, solve this matrix equation and extend the obtained solution to a solution of the initial equation by (4.5). The algorithm is as follows (a MATLAB sketch is given after the algorithm).

Algorithm 4.2 (block refinement method). (1) Start: choose an initial approximate solution $X_0$ and a tolerance $\epsilon > 0$.
(2) Select two numbers $s_1$ and $s_2$ for the dimensions of the blocks and set $k = 0$.
(3) Compute $R_k = GC - X_kA + FX_k$.
(4) Construct the orthonormal bases $V_m$ and $W_m$ by the block Arnoldi process, such that (4.3) holds.
(5) Solve the low-dimensional Sylvester-observer equation (4.1) with the Hessenberg-Schur method.
(6) Set $X_{k+1} = X_k + W_m Z V_m^T$.
(7) Compute $R_{k+1} = GC - X_{k+1}A + FX_{k+1}$.
(8) If $\|R_{k+1}\|_F \le \epsilon$, stop; otherwise set $k = k + 1$ and go to step (3).
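The following MATLAB sketch illustrates one sweep of Algorithm 4.2. It is an illustration under stated assumptions rather than the authors' exact implementation: the starting blocks of the two Arnoldi processes are taken from a truncated SVD of the residual (our assumption), the low-dimensional equation (4.1) is solved with lyap rather than a dedicated Hessenberg-Schur routine, and block_arnoldi is a textbook block Arnoldi process.

    function X = block_refine_sweep(A, F, G, C, X, m, p)
    % One sweep of the block refinement method (Algorithm 4.2): project the
    % residual equation onto block Krylov subspaces of A' and F, solve the
    % small Sylvester equation (4.1), and extend the solution back by (4.5).
    % m = number of Arnoldi steps, p = block size (taken equal for the two
    % processes here, for simplicity).
    R = G*C - X*A + F*X;                 % residual of X*A - F*X = G*C
    [U, ~, V] = svds(R, p);              % assumed choice of starting blocks
    [Vm, Ha] = block_arnoldi(A', V, m);  % A'*Vm = Vm*Ha + next-block term
    [Wm, Hf] = block_arnoldi(F,  U, m);  % F*Wm  = Wm*Hf + next-block term
    S = Wm' * R * Vm;                    % projected right-hand side
    Z = lyap(-Hf, Ha', -S);              % solves Z*Ha' - Hf*Z = S, cf. (4.1)
    X = X + Wm * Z * Vm';                % extension step (4.5)
    end

    function [V, H] = block_arnoldi(M, Q, m)
    % Standard block Arnoldi with block modified Gram-Schmidt: returns an
    % orthonormal basis V of span{Q, M*Q, ..., M^(m-1)*Q} and the square
    % block Hessenberg matrix H = V'*M*V.
    s = size(Q, 2);
    [V, ~] = qr(Q, 0);
    H = zeros((m+1)*s, m*s);
    for j = 1:m
        W = M * V(:, (j-1)*s+1 : j*s);
        for i = 1:j
            Vi = V(:, (i-1)*s+1 : i*s);
            H((i-1)*s+1 : i*s, (j-1)*s+1 : j*s) = Vi' * W;
            W = W - Vi * H((i-1)*s+1 : i*s, (j-1)*s+1 : j*s);
        end
        [Qn, Rn] = qr(W, 0);
        H(j*s+1 : (j+1)*s, (j-1)*s+1 : j*s) = Rn;
        V = [V, Qn];
    end
    V = V(:, 1:m*s);                     % keep the first m blocks
    H = H(1:m*s, 1:m*s);                 % drop the last block row
    end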

Remark 4.3. By choosing $X_0 = 0$, Algorithm 4.2 reduces the original large sparse Sylvester-observer equation (1.2) to a low-dimensional Sylvester-observer equation (4.1). In step (5), we solve this low-dimensional matrix equation by any direct method such as the Hessenberg-Schur method. Then, in step (6), by using relation (4.5), we extend the obtained solution to a solution of the original matrix equation. Also, according to Theorem 4.1, Algorithm 4.2 is convergent for any initial matrix $X_0$.

5. Weighted Block Refinement Method

In this section we discuss a new iterative method based upon a modified block refinement method. Instead of the Euclidean scalar product, the new process uses another one, denoted by $\langle \cdot, \cdot \rangle_D$, where $D$ is a chosen diagonal matrix. The idea of changing the inner product is to accelerate the convergence of the components of the residual which are far away from zero. To achieve this, an appropriate weight is associated with each term of the inner product. A natural choice of these weights is the entries of the first residual. The following method is based on the reduction of $A^T$ and $F$ to block Hessenberg form by the weighted block Arnoldi process. Before giving a complete description of the new algorithm, let us define the $D$-scalar product.

If $x$ and $y$ are two vectors of $\mathbb{R}^n$, their $D$-scalar product is
$$\langle x, y \rangle_D = y^T D x = \sum_{i=1}^{n} d_i x_i y_i,$$
where $D = \operatorname{diag}(d_1, d_2, \ldots, d_n)$ is a diagonal matrix.

This inner product is well defined if and only if the matrix $D$ is positive definite, that is, $d_i > 0$ for all $i = 1, \ldots, n$.

In this case, we can define the $D$-norm associated with this inner product by $\|x\|_D = \sqrt{\langle x, x \rangle_D}$.
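Computationally, the weighted process need not be implemented from scratch: since $D$ is diagonal with positive entries, the $D$-inner product becomes the Euclidean one after the change of variables $v \mapsto D^{1/2}v$. The following MATLAB sketch exploits this (it reuses block_arnoldi from the sketch in Section 4; the factorization trick is our assumption, not necessarily the authors' implementation).

    function [V, H] = weighted_block_arnoldi(M, Q, m, d)
    % Block Arnoldi in the D-inner product <x,y>_D = y'*diag(d)*x, d > 0.
    % The change of variables v -> sqrt(D)*v turns the D-inner product into
    % the Euclidean one, so the standard process is reused and the basis is
    % transformed back; the returned V satisfies V'*diag(d)*V = I.
    n  = numel(d);
    Dh = spdiags(sqrt(d(:)), 0, n, n);     % D^(1/2)
    Mw = Dh * M / Dh;                      % similar matrix in new variables
    [Vw, H] = block_arnoldi(Mw, Dh*Q, m);  % standard block Arnoldi
    V = Dh \ Vw;                           % D-orthonormal basis
    end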

Theorem 5.1. Let $X_0$ be the approximate solution obtained by an arbitrary method for the matrix equation (1.2), and let $R_0 = GC - X_0A + FX_0$ be the corresponding residual. If $m$ steps of the weighted block Arnoldi process with the diagonal matrices $D_1$ and $D_2$, respectively, for the matrices $A^T$ and $F$ have been run, producing the $D_1$-orthonormal and $D_2$-orthonormal bases $V_m$ and $W_m$ with block Hessenberg matrices $H_m^A$ and $H_m^F$, and $Z$ is the solution of the low-dimensional Sylvester equation
$$Z(H_m^A)^T - H_m^F Z = W_m^T D_2 R_0 D_1 V_m, \tag{5.3}$$
then the residual $R_1 = GC - X_1A + FX_1$ of the refined solution $X_1 = X_0 + W_m Z V_m^T$ satisfies $W_m^T D_2 R_1 D_1 V_m = 0$.

Proof. By using the weighted block Arnoldi process, we generate the bases $V_m$ and $W_m$ that are, respectively, $D_1$-orthonormal and $D_2$-orthonormal; thus it holds that
$$V_m^T D_1 V_m = I, \qquad W_m^T D_2 W_m = I, \tag{5.5}$$
where $D_1$ and $D_2$ are two diagonal matrices.
Moreover, the square block Hessenberg matrices $H_m^A$ and $H_m^F$, whose nonzero entries are the blocks $H_{ij}^A$ and $H_{ij}^F$ constructed by the weighted block Arnoldi process, can be expressed in the form
$$H_m^A = V_m^T D_1 A^T V_m, \qquad H_m^F = W_m^T D_2 F W_m. \tag{5.6}$$
Now, we set $X_1 = X_0 + W_m Z V_m^T$, where $Z$ is the solution of the Sylvester-observer equation (5.3). Thus, the new residual matrix becomes
$$R_1 = R_0 - \bigl(W_m Z V_m^T A - F W_m Z V_m^T\bigr).$$
Multiplying the above relation on the left by $W_m^T D_2$ and on the right by $D_1 V_m$, we have
$$W_m^T D_2 R_1 D_1 V_m = W_m^T D_2 R_0 D_1 V_m - \bigl(Z V_m^T A D_1 V_m - W_m^T D_2 F W_m Z\bigr).$$
Now, by using (5.3), (5.5), and (5.6), we get $W_m^T D_2 R_1 D_1 V_m = 0$.

In order to get $X_1$, we need to solve the low-dimensional Sylvester equation (5.3). According to these results, we can develop an iterative method for solving the Sylvester-observer equation. The algorithm is as follows (a MATLAB sketch follows the algorithm).

Algorithm 5.2 (weighted block refinement (WBR) method). (1) Start: choose an initial solution $X_0$, a new dimension $m$ smaller than $n$, and a tolerance $\epsilon > 0$; set $k = 0$.
(2) Compute $R_k = GC - X_kA + FX_k$.
(3) Construct the $D_1$-orthonormal basis $V_m$ and the $D_2$-orthonormal basis $W_m$ by the weighted block Arnoldi process, such that (5.5) and (5.6) hold.
(4) Solve the low-dimensional Sylvester-observer equation (5.3) with the Hessenberg-Schur method.
(5) Set $X_{k+1} = X_k + W_m Z V_m^T$.
(6) Compute the residual matrix $R_{k+1} = GC - X_{k+1}A + FX_{k+1}$.
(7) If $\|R_{k+1}\|_F \le \epsilon$, stop; otherwise set $k = k + 1$ and go to step (2).
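For concreteness, here is a sketch of one sweep of Algorithm 5.2, in the same style as the sketch in Section 4. The weights, taken from shifted absolute row and column sums of the current residual, are only one way to realize the idea of weighting by the residual described above; the authors' exact weight choice is not reproduced here.

    function X = wbr_sweep(A, F, G, C, X, m, p)
    % One sweep of the weighted block refinement method (Algorithm 5.2).
    R  = G*C - X*A + F*X;               % residual of X*A - F*X = G*C
    d1 = sum(abs(R), 1).' + 1;          % assumed positive weights (columns)
    d2 = sum(abs(R), 2)  + 1;           % assumed positive weights (rows)
    D1 = spdiags(d1, 0, numel(d1), numel(d1));
    D2 = spdiags(d2, 0, numel(d2), numel(d2));
    [U, ~, V] = svds(R, p);             % assumed starting blocks
    [Vm, Ha] = weighted_block_arnoldi(A', V, m, d1);  % D1-orthonormal Vm
    [Wm, Hf] = weighted_block_arnoldi(F,  U, m, d2);  % D2-orthonormal Wm
    S = Wm' * D2 * R * D1 * Vm;         % weighted projection, cf. (5.3)
    Z = lyap(-Hf, Ha', -S);             % solves Z*Ha' - Hf*Z = S
    X = X + Wm * Z * Vm';               % extension step
    end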

Remark 5.3. By choosing $X_0 = 0$, Algorithm 5.2 reduces the original large sparse Sylvester-observer equation (1.2) to a low-dimensional Sylvester-observer equation (5.3). In step (4), we solve this low-dimensional matrix equation by any direct method such as the Hessenberg-Schur method. Also, according to Theorem 5.1, Algorithm 5.2 is convergent for any initial matrix $X_0$.

6. Numerical Experiments

In this section, we present some numerical examples to illustrate the effectiveness of our new iterative methods for solving large sparse Sylvester-observer equations. In Examples 6.1 and 6.2, we apply Algorithms 4.2 and 5.2 for solving the matrix equation (1.2). In Example 6.3, we compare the Hessenberg-Schur method described in [1] with our new algorithms for solving a large sparse Sylvester-observer equation. In order to show the efficiency of our algorithms, we choose the matrices arbitrarily in these three examples. But in Example 6.4, we use four matrices from the MATLAB matrix collection with large estimated condition numbers.

The initial approximate solution is $X_0 = 0$. The error is monitored by means of the test $\|R_k\|_F \le \epsilon$, with the value of $\epsilon$ depending on the examples. The time is given in seconds for all examples. All numerical tests were performed in MATLAB on a PC with a 2.20 GHz processor and 2 GB of main memory.
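As an illustration of this setup, a possible MATLAB driver is sketched below; tol, maxit, m, and p are placeholders, and block_refine_sweep is the sketch from Section 4 (the weighted variant can be substituted).

    % Refinement loop with the stopping test ||R_k||_F <= tol, starting
    % from the zero initial approximation used in the experiments.
    X = zeros(size(G, 1), size(A, 2));   % X_0 = 0, so R_0 = G*C
    for k = 1:maxit
        X = block_refine_sweep(A, F, G, C, X, m, p);
        if norm(G*C - X*A + F*X, 'fro') <= tol
            break                        % desired accuracy reached
        end
    end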

Example 6.1. For the first test we use two arbitrary matrices $A$ and $F$. We choose the matrices $G$ and $C$ so as to satisfy the controllability requirement of the pair $(F, G)$. We apply the block refinement method (Algorithm 4.2) for solving the Sylvester-observer equation $XA - FX = GC$. In Table 1, we report the results for different values of the block dimensions. The results show that, by increasing the block dimensions, the number of iterations decreases. The last column of Table 1 also shows the decrease in time consumption. Note that the fourth and fifth columns of this table are the errors of the orthogonalization method. The desired accuracy $\epsilon$ has been fixed for this example, but the method works well with any choice of $\epsilon$.

Table 1: Implementation of the block refinement method to solve the Sylvester-observer equation with different values of the block dimensions.

Example 6.2. Consider Example 6.1 again. We apply the weighted block refinement method (Algorithm 5.2) for solving the same equation with the same data. In Table 2, we report the results for different values of the block dimensions.

Table 2: Implementation of the weighted block refinement method to solve the Sylvester-observer equation with different values of the block dimensions.

Example 6.3. According to the results in Tables 1 and 2, we see that the weighted block refinement method performs better than the block refinement method. Now, suppose that $A$ and $F$ are the same matrices used in Example 6.1. We apply our two iterative methods with 2 iterations each and the Hessenberg-Schur method to solve the Sylvester-observer equation when the dimensions of the matrices are large. The results are shown in Table 3.

Table 3: Implementation of the new iterative methods with 2 iterations and the Hessenberg-Schur method for solving the Sylvester equation.

Example 6.4. In this example we show that the convergence of our proposed algorithms is independent of the structure of the matrices. To this end, we use four matrices from the MATLAB matrix collection for the matrix $A$. The first matrix is a sparse, random finite element matrix. The second matrix is a symmetric, positive semidefinite (SPD) Toeplitz matrix composed of the sum of 800 rank-2 SPD Toeplitz matrices. The third matrix is a row diagonally dominant matrix. The last matrix is a sparse, singular, row diagonally dominant matrix resulting from discretizing the Neumann problem with the usual five-point operator on a regular mesh; its estimated condition number is very large. For all of these examples, the matrix $F$ is built from a random, sparse matrix with approximately uniformly distributed nonzero entries. We choose the matrices $G$ and $C$ so as to satisfy the controllability and observability requirements of the pairs $(F, G)$ and $(A, C)$. We apply the Hessenberg-Schur method and the weighted block refinement method (Algorithm 5.2) with 3 iterations for solving the Sylvester-observer equation $XA - FX = GC$. The results are shown in Table 4.
It is also obvious from Table 4 that the performance of the weighted block refinement method is much better than that of the Hessenberg-Schur method, especially for the ill-conditioned matrices.

Table 4: Effectiveness of the Hessenberg-Schur method and the WBR algorithm with 3 iterations for randomly generated matrices.

7. Comments and Conclusion

In this paper, we propose two new iterative algorithms for solving large sparse Sylvester-observer matrix equations. The existing projection methods use the Arnoldi process, but the methods described in this paper are based on the weighted block Arnoldi process. Moreover, the refinement process presented in Sections 4 and 5 has the capability of improving the results obtained by an arbitrary method. The numerical examples show the efficiency of the proposed schemes.

References

  1. B. N. Datta, Numerical Methods for Linear Control Systems: Design and Analysis, Elsevier Academic Press, San Diego, Calif, USA, 2004.
  2. A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, vol. 6 of Advances in Design and Control, SIAM, Philadelphia, Pa, USA, 2005.
  3. U. Baur and P. Benner, “Cross-Gramian based model reduction for data-sparse systems,” Electronic Transactions on Numerical Analysis, vol. 31, pp. 256–270, 2008.
  4. D. C. Sorensen and A. C. Antoulas, “The Sylvester equation and approximate balanced reduction,” Linear Algebra and Its Applications, vol. 351/352, pp. 671–700, 2002.
  5. W. H. Enright, “Improving the efficiency of matrix operations in the numerical solution of stiff ordinary differential equations,” ACM Transactions on Mathematical Software, vol. 4, no. 2, pp. 127–136, 1978.
  6. D. Calvetti and L. Reichel, “Application of ADI iterative methods to the restoration of noisy images,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 1, pp. 165–186, 1996.
  7. J. Z. Hearon, “Nonsingular solutions of TA−BT=C,” Linear Algebra and its Applications, vol. 16, no. 1, pp. 57–63, 1977.
  8. E. de Souza and S. P. Bhattacharyya, “Controllability, observability and the solution of AX−XB=C,” Linear Algebra and its Applications, vol. 39, pp. 167–188, 1981.
  9. G. H. Golub, S. Nash, and C. Van Loan, “A Hessenberg-Schur method for the problem AX + XB = C,” IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 909–913, 1979.
  10. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
  11. A. El Guennouni, K. Jbilou, and A. J. Riquet, “Block Krylov subspace methods for solving large Sylvester equations,” Numerical Algorithms, vol. 29, no. 1–3, pp. 75–96, 2002.
  12. L. Bao, Y. Lin, and Y. Wei, “Krylov subspace methods for the generalized Sylvester equation,” Applied Mathematics and Computation, vol. 175, no. 1, pp. 557–573, 2006.
  13. L. Bao, Y. Lin, and Y. Wei, “A new projection method for solving large Sylvester equations,” Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 521–532, 2007.
  14. D. Calvetti, B. Lewis, and L. Reichel, “On the solution of large Sylvester-observer equations,” Numerical Linear Algebra with Applications, vol. 8, no. 6-7, pp. 435–451, 2001.
  15. K. Jbilou, “ADI preconditioned Krylov methods for large Lyapunov matrix equations,” Linear Algebra and its Applications, vol. 432, no. 10, pp. 2473–2485, 2010.
  16. M. Robbé and M. Sadkane, “A convergence analysis of GMRES and FOM methods for Sylvester equations,” Numerical Algorithms, vol. 30, no. 1, pp. 71–89, 2002.
  17. P. Benner, R.-C. Li, and N. Truhar, “On the ADI method for Sylvester equations,” Journal of Computational and Applied Mathematics, vol. 233, no. 4, pp. 1035–1045, 2009.