Abstract
Some new and simple Gramian-based model order reduction algorithms, namely, SVD methods, are presented for second-order linear dynamical systems. Compared to existing Gramian-based algorithms, that is, balanced truncation methods, they are competitive and more favorable for large-scale systems. Numerical examples show the validity of the algorithms. Error bounds of the error systems are discussed. Some observations are made on the structures of Gramians of second-order linear systems.
1. Introduction
Models of linear dynamical systems are often given in second-order form
$$M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Fu(t),\qquad y(t) = Cx(t), \tag{1}$$
where $M$, $D$, $K$, $F$, and $C$ are given matrices, $u(t)$ is the given vector of inputs, $y(t)$ is the unknown vector of outputs, $x(t)$ is the unknown vector of internal variables, $n$ is the dimension of the system, and $M$ is assumed to be invertible.
Models of mechanical systems in particular are usually of second-order form (1). For such a system, $M$, $D$, and $K$ are respectively the mass, damping, and stiffness matrices, $u(t)$ is the input, $Fu(t)$ is the vector of external forces, and $x(t)$ is the vector of internal variables.
Many applications lead to very large models in which $n$ is very large while the numbers of inputs and outputs are small. Due to limitations on time and computer storage, it is often difficult or impossible to directly simulate or control these large-scale systems. Therefore, it is often desirable to reduce the system to a smaller-order model
$$\hat{M}\ddot{\hat{x}}(t) + \hat{D}\dot{\hat{x}}(t) + \hat{K}\hat{x}(t) = \hat{F}u(t),\qquad \hat{y}(t) = \hat{C}\hat{x}(t),$$
with $\hat{x}(t)$ of dimension $r \ll n$, such that (1) the reduced system approximates (1) well, that is, outputs corresponding to the same inputs are close; (2) the reduced system preserves important system properties such as stability and passivity; (3) the procedure is computationally efficient.
By letting $z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}$, second-order system (1) is then transformed to a corresponding $2n$-dimensional first-order system
$$\dot{z}(t) = Az(t) + Bu(t),\qquad y(t) = \bar{C}z(t),$$
where
$$A = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}D \end{bmatrix},\qquad B = \begin{bmatrix} 0 \\ M^{-1}F \end{bmatrix},\qquad \bar{C} = \begin{bmatrix} C & 0 \end{bmatrix}.$$
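As a concrete sketch, the companion-form linearization above can be coded as follows; the mass-spring-damper chain used as test data (and the specific $M$, $D$, $K$, $F$, $C$ values) is an assumed toy example, not one of the paper's benchmarks.

```python
import numpy as np

def second_to_first_order(M, D, K, F, C):
    """Companion-form linearization: with z = [x; x'], the second-order
    system M x'' + D x' + K x = F u, y = C x becomes
    z' = A z + B u, y = Cbar z (M must be invertible)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ D]])
    B = np.vstack([np.zeros((n, F.shape[1])), Minv @ F])
    Cbar = np.hstack([C, np.zeros((C.shape[0], n))])
    return A, B, Cbar

# toy mass-spring-damper chain: SPD stiffness, Rayleigh damping
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.2 * np.eye(n) + 0.1 * K
M = np.eye(n)
F = np.zeros((n, 1)); F[0, 0] = 1.0   # force acting on the first mass
C = np.zeros((1, n)); C[0, -1] = 1.0  # observe the last position
A, B, Cbar = second_to_first_order(M, D, K, F, C)
assert np.all(np.linalg.eigvals(A).real < 0)  # the toy system is stable
```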
Since algorithms and theory are well developed for first-order model reduction, it is natural to use these techniques to develop algorithms for second-order systems.
Closely related to the first-order system are the two Lyapunov equations
$$AP + PA^T + BB^T = 0,\qquad A^TQ + QA + \bar{C}^T\bar{C} = 0. \tag{5}$$
Under the assumption that the first-order system is stable, it is well known that the reachability and observability Gramians $P$ and $Q$ are the unique solutions to the Lyapunov equations (5), and both $P$ and $Q$ are symmetric positive definite. The Gramian $P$ also has the following variational interpretation, proposed in [1]. Let $J(u) = \int_{-\infty}^{0}\|u(t)\|^2\,dt$. The optimal value of the optimization problem
$$\min\{J(u) : \dot{z} = Az + Bu,\ z(-\infty) = 0,\ z(0) = z_0\}$$
is $z_0^T P^{-1} z_0$, that is, the minimal energy required to steer the state of the system from $0$ to the state $z_0$ at time $0$ is $z_0^T P^{-1} z_0$. Similarly, the optimal value for its dual system is $z_0^T Q^{-1} z_0$.
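A minimal numerical sketch of these Gramian computations, using scipy's Lyapunov solver on an assumed toy companion-form system (with $M = I$; the matrices below are illustrative test data, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# toy stable companion-form realization (M = I)
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.2 * np.eye(n) + 0.1 * K
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
B = np.zeros((2 * n, 1)); B[n, 0] = 1.0            # B = [0; F] with F = e1
Cbar = np.zeros((1, 2 * n)); Cbar[0, n - 1] = 1.0  # Cbar = [C, 0]

# A P + P A^T + B B^T = 0   and   A^T Q + Q A + Cbar^T Cbar = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -Cbar.T @ Cbar)

# for this stable, controllable/observable system both Gramians are SPD
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
assert np.all(np.linalg.eigvalsh((Q + Q.T) / 2) > 0)
```

The position and velocity Gramians discussed next are simply the leading and trailing $n \times n$ diagonal blocks of $P$ (and, dually, of $Q$).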
Partition $P$ and $Q$ into four equal blocks:
$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix},\qquad Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix}.$$
In [2–4], it is shown that the optimal value of the optimization problem $\min\{J(u) : x(0) = x_0\}$ is $x_0^T P_{11}^{-1} x_0$, that is, the minimal energy required to reach the given position $x_0$ over all past inputs and initial velocities. And the optimal value of the optimization problem $\min\{J(u) : \dot{x}(0) = v_0\}$ is $v_0^T P_{22}^{-1} v_0$, that is, the minimal energy required to reach the given velocity $v_0$ over all past inputs and initial positions. In [3, 4], $P_{11}$ and $P_{22}$ are defined to be the position Gramian and the velocity Gramian, respectively. By duality, $Q_{11}$ and $Q_{22}$ are the position Gramian and velocity Gramian, respectively, corresponding to the dual system.
It is well known that
$$P = \int_0^\infty e^{At}BB^Te^{A^Tt}\,dt.$$
Suppose $z(t)$ is the solution of the first-order system for an impulse input. Then $z(t) = e^{At}B$ and $P = \int_0^\infty z(t)z(t)^T\,dt$. For the second-order system and its corresponding first-order system, $z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}$. Therefore,
$$P_{11} = \int_0^\infty x(t)x(t)^T\,dt,\qquad P_{22} = \int_0^\infty \dot{x}(t)\dot{x}(t)^T\,dt,$$
where $P_{11}$ and $P_{22}$ are the position and velocity Gramians, respectively. Similar results apply to $Q_{11}$, $Q_{12}$, and $Q_{22}$ through the dual system.
For Gramian-based structure-preserving model order reduction of second-order systems, some balanced truncation methods were derived in [3, 5]. In this paper, we present two new algorithms based on the SVD rather than balanced truncation. The algorithms are derived from second-order Gramians and the system $H_2$-norm. Compared to existing techniques, they are simple and more suitable for large-scale settings. In Section 4, we apply these methods to three numerical examples; the computational results show the validity of our algorithms. Error bounds and the structures of Gramians of second-order systems are also discussed.
2. Balanced Truncation Method
For a first-order system
$$\dot{z}(t) = Az(t) + Bu(t),\qquad y(t) = Cz(t),$$
the balanced truncation method transforms the system to another coordinate system in which $P$ and $Q$ are equal and diagonal:
$$P = Q = \Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n),$$
where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$. It then truncates the model by keeping the states corresponding to the $r$ largest eigenvalues of the product $PQ$, whose square roots are the $r$ largest Hankel singular values of the system.
The main problem is to find the balancing projection matrices. A numerically stable way to obtain the balanced truncated system is given in [6]: suppose the Cholesky factorizations of $P$ and $Q$ are
$$P = R^TR,\qquad Q = LL^T,$$
where $R$ is an upper triangular and $L$ a lower triangular matrix. Take the SVD of $RL$:
$$RL = U\Sigma V^T.$$
Let $\Sigma_r$ be the first $r \times r$ principal submatrix of $\Sigma$, and let $U_r$ and $V_r$ consist of the first $r$ columns of $U$ and $V$, respectively. Then the balancing projection matrices are
$$T = R^TU_r\Sigma_r^{-1/2},\qquad W = LV_r\Sigma_r^{-1/2}.$$
The reduced system is obtained as
$$\hat{A} = W^TAT,\qquad \hat{B} = W^TB,\qquad \hat{C} = CT.$$
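The square-root procedure above can be sketched in a few lines. This is a generic implementation under the stated Cholesky conventions ($P = R^TR$, $Q = LL^T$), not code from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable first-order system."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = cholesky((P + P.T) / 2)              # P = R^T R, R upper triangular
    L = cholesky((Q + Q.T) / 2, lower=True)  # Q = L L^T, L lower triangular
    U, s, Vt = svd(R @ L)
    Sr = np.diag(s[:r] ** -0.5)
    T = R.T @ U[:, :r] @ Sr                  # right projection
    W = L @ Vt[:r, :].T @ Sr                 # left projection, W^T T = I_r
    return W.T @ A @ T, W.T @ B, C @ T, s
```

A useful sanity check: the reduced realization is itself balanced, so its reachability Gramian equals `diag(s[:r])`.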
The balanced truncation method has a global error bound [1]:
$$\|G - \hat{G}\|_{H_\infty} \le 2\sum_{i=r+1}^{n}\sigma_i.$$
For second-order system (1), there are two balanced truncation methods. One was given in [5]: balance both $P_{11}$ and $Q_{11}$ to form the projection matrices $W_1$ and $T_1$, and then obtain the reduced system by keeping the $r$ largest eigenvalues of $P_{11}Q_{11}$. This is equivalent to performing the following projections on its corresponding first-order system:
$$W = \begin{bmatrix} W_1 & 0 \\ 0 & W_1 \end{bmatrix},\qquad T = \begin{bmatrix} T_1 & 0 \\ 0 & T_1 \end{bmatrix}.$$
The reduction rules are then
$$\hat{M} = W_1^TMT_1,\quad \hat{D} = W_1^TDT_1,\quad \hat{K} = W_1^TKT_1,\quad \hat{F} = W_1^TF,\quad \hat{C} = CT_1.$$
Similar results can be applied to $P_{22}$ and $Q_{22}$. This gives the following algorithm.
Algorithm 1 (balanced truncation method based on $P_{11}$ and $Q_{11}$ (or $P_{22}$ and $Q_{22}$, resp.) [5]). (1) Compute $P_{11}$ and $Q_{11}$. (2) Compute the balanced truncation matrices $W_1$ and $T_1$ by keeping the $r$ largest eigenvalues of $P_{11}Q_{11}$. (3) Perform the projection to (1) and get the reduced system:
$$\hat{M} = W_1^TMT_1,\quad \hat{D} = W_1^TDT_1,\quad \hat{K} = W_1^TKT_1,\quad \hat{F} = W_1^TF,\quad \hat{C} = CT_1.$$
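A sketch of Algorithm 1 for the position-Gramian pair, assuming $M = I$ and denoting the leading Gramian blocks by `P11`, `Q11`; the square-root balancing route and all names here are my assumptions, not the paper's code:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def sobt_position(D, K, F, C, r):
    """Second-order balanced truncation on the position Gramians P11, Q11
    (square-root variant, M = I assumed)."""
    n = K.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
    B = np.vstack([np.zeros((n, F.shape[1])), F])
    Cbar = np.hstack([C, np.zeros((C.shape[0], n))])
    P11 = solve_continuous_lyapunov(A, -B @ B.T)[:n, :n]
    Q11 = solve_continuous_lyapunov(A.T, -Cbar.T @ Cbar)[:n, :n]
    R = cholesky((P11 + P11.T) / 2)              # P11 = R^T R
    L = cholesky((Q11 + Q11.T) / 2, lower=True)  # Q11 = L L^T
    U, s, Vt = svd(R @ L)
    Sr = np.diag(s[:r] ** -0.5)
    T1 = R.T @ U[:, :r] @ Sr
    W1 = L @ Vt[:r, :].T @ Sr                    # W1^T T1 = I_r
    # project the second-order matrices (Mhat = W1^T T1 = I_r)
    return W1.T @ T1, W1.T @ D @ T1, W1.T @ K @ T1, W1.T @ F, C @ T1
```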
In [5], two other similar methods are also given, which balance different pairings of the position and velocity Gramians to obtain the projection matrices.
Another second-order balanced truncation method is given in [3]: balance both $P_{11}$ and $Q_{11}$ to get the projection matrices $W_1$ and $T_1$ by keeping the $r$ largest eigenvalues of $P_{11}Q_{11}$, balance both $P_{22}$ and $Q_{22}$ to get the projection matrices $W_2$ and $T_2$, and then use
$$W = \begin{bmatrix} W_1 & 0 \\ 0 & W_2 \end{bmatrix},\qquad T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix}$$
as projection matrices for the corresponding first-order system. In order to let the reduced system have the companion form of a second-order system, it then applies a further transformation through a matrix $S$ relating the reduced position and velocity coordinates. The corresponding algorithm is as follows.
Algorithm 2 (balanced truncation method based on $P_{11}$, $Q_{11}$ and $P_{22}$, $Q_{22}$ [3]). (1) Compute $P_{11}$, $Q_{11}$, $P_{22}$, and $Q_{22}$. (2) Compute the balanced truncation matrices for $P_{11}$ and $Q_{11}$ to get $W_1$ and $T_1$, and the balanced transformation matrices for $P_{22}$ and $Q_{22}$ to get $W_2$ and $T_2$. (3) Perform the projection, together with the companion-form restoring transformation $S$, to get the reduced system $\hat{D}$, $\hat{K}$, $\hat{F}$, $\hat{C}$.
3. Some New Algorithms Based on SVD
In this section, we propose some new algorithms based directly on the SVD.
First, the $H_2$ norm is an important quantity in bounding the error of linear systems.
Lemma 3 (see [7]). Given a first-order system $\dot{z} = Az + Bu$, $y = Cz$, suppose $P$ and $Q$ are its reachability and observability Gramians. Then the $H_2$ norm of the system can be expressed as
$$\|G\|_{H_2}^2 = \mathrm{trace}(CPC^T) = \mathrm{trace}(B^TQB).$$
The $H_2$ norm of second-order system (1) with $M = I$ can then be stated as in the following proposition.
Proposition 4. Given a second-order system (1) with $M = I$, suppose $\dot{z} = Az + Bu$, $y = \bar{C}z$ is the transformed first-order system, where
$$A = \begin{bmatrix} 0 & I \\ -K & -D \end{bmatrix},\qquad B = \begin{bmatrix} 0 \\ F \end{bmatrix},\qquad \bar{C} = \begin{bmatrix} C & 0 \end{bmatrix}.$$
Assume the reachability and observability Gramians $P$ and $Q$ are divided into four equal blocks, $P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix}$ and $Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix}$. Then the following holds:
$$\|G\|_{H_2}^2 = \mathrm{trace}(CP_{11}C^T) = \mathrm{trace}(F^TQ_{22}F).$$
Proof. From Lemma 3,
$$\|G\|_{H_2}^2 = \mathrm{trace}(\bar{C}P\bar{C}^T) = \mathrm{trace}\!\left(\begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix}\begin{bmatrix} C^T \\ 0 \end{bmatrix}\right) = \mathrm{trace}(CP_{11}C^T),$$
and similarly $\mathrm{trace}(B^TQB) = \mathrm{trace}(F^TQ_{22}F)$.
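Proposition 4 is easy to verify numerically. The sketch below checks the trace identities on an assumed toy system with $M = I$, where $P_{11}$ and $Q_{22}$ are the leading block of $P$ and the trailing block of $Q$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# toy second-order system with M = I (illustrative data)
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.2 * np.eye(n) + 0.1 * K
F = np.zeros((n, 1)); F[0, 0] = 1.0
C = np.zeros((1, n)); C[0, -1] = 1.0
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
B = np.vstack([np.zeros((n, 1)), F])
Cbar = np.hstack([C, np.zeros((1, n))])

P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -Cbar.T @ Cbar)
P11, Q22 = P[:n, :n], Q[n:, n:]

# the squared H2 norm, computed four equivalent ways
h2_P = np.trace(Cbar @ P @ Cbar.T)
h2_Q = np.trace(B.T @ Q @ B)
assert np.isclose(h2_P, np.trace(C @ P11 @ C.T))
assert np.isclose(h2_Q, np.trace(F.T @ Q22 @ F))
assert np.isclose(h2_P, h2_Q)
```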
From (8), we know the positions which are difficult to reach are spanned by the eigenvectors corresponding to small eigenvalues of $P_{11}$. We would like to eliminate those positions in reduced systems. Proposition 4 shows that $P_{11}$ can independently determine the system $H_2$ norm. Therefore, we may get the reduced system by keeping the positions corresponding to the $r$ largest eigenvalues of $P_{11}$. This motivates the following procedure for model order reduction. Suppose the SVD of $P_{11}$ is $P_{11} = U\Sigma V^T$, where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$ and $\sigma_1 \ge \cdots \ge \sigma_n > 0$. So $U$ and $V$ are orthogonal matrices, and $V = U$ since $P_{11}$ is symmetric positive definite. Let the transformation matrices for the corresponding first-order system be
$$W = T = \begin{bmatrix} U & 0 \\ 0 & U \end{bmatrix},$$
and the projection matrices be
$$W_r = T_r = \begin{bmatrix} U_r & 0 \\ 0 & U_r \end{bmatrix},$$
where $U_r$ consists of the first $r$ columns of $U$. For simplicity, we denote $W_r$ and $T_r$ by $W$ and $T$, respectively. Then use $W$ and $T$ as projection matrices for the corresponding first-order system, that is, perform projections as in (22) and (23). The same results can be applied to $Q_{22}$. From the dual system, symmetrically, $P_{22}$ and $Q_{11}$ are also crucial in weighting the system norms independently. This results in the following algorithms.
Algorithm 5 (second-order model reduction: SVD on $P_{11}$ (or $P_{22}$, $Q_{11}$, and $Q_{22}$, resp.)). (1) Compute $P_{11}$ (or $P_{22}$, $Q_{11}$, and $Q_{22}$, resp.), and denote it by $\Pi$ for simplicity. (2) Take the SVD of $\Pi$ to get $\Pi = U\Sigma U^T$. Form the orthogonal projection matrix $U_r$, which consists of the first $r$ columns of $U$. (3) Perform the projection to get the reduced system:
$$\hat{M} = U_r^TMU_r,\quad \hat{D} = U_r^TDU_r,\quad \hat{K} = U_r^TKU_r,\quad \hat{F} = U_r^TF,\quad \hat{C} = CU_r.$$
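A compact sketch of Algorithm 5 with the position Gramian, assuming $M = I$ and the block notation used above; the function and variable names are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def svd_reduce_position(D, K, F, C, r):
    """Algorithm 5 sketch: SVD of the position Gramian P11, then an
    orthogonal projection of the second-order matrices (M = I assumed)."""
    n = K.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
    B = np.vstack([np.zeros((n, F.shape[1])), F])
    P11 = solve_continuous_lyapunov(A, -B @ B.T)[:n, :n]
    U, s, _ = np.linalg.svd((P11 + P11.T) / 2)  # P11 = U diag(s) U^T (SPD)
    Ur = U[:, :r]                               # leading r singular vectors
    return Ur.T @ D @ Ur, Ur.T @ K @ Ur, Ur.T @ F, C @ Ur
```

A pleasant side effect of the one-sided orthogonal projection: when $D$ and $K$ are symmetric positive definite, as in this toy data, their projections stay symmetric positive definite, so the reduced model remains stable.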
Besides eliminating the positions which are difficult to reach, it is also desirable to delete the velocities that are difficult to reach. From (9), these velocities are spanned by the eigenvectors of $P_{22}$ corresponding to small eigenvalues. This motivates the following procedure for model order reduction. Suppose the SVD of $P_{11}$ is $P_{11} = U_1\Sigma_1U_1^T$, where $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$, $\sigma_1 \ge \cdots \ge \sigma_n > 0$, and the SVD of $P_{22}$ is $P_{22} = U_2\Sigma_2U_2^T$, where $\Sigma_2 = \mathrm{diag}(\tau_1, \ldots, \tau_n)$, $\tau_1 \ge \cdots \ge \tau_n > 0$. So $U_1$ and $U_2$ are orthogonal matrices. Let the transformation matrices for the corresponding first-order system be
$$W = T = \begin{bmatrix} U_1 & 0 \\ 0 & U_2 \end{bmatrix},$$
and the projection matrices be
$$W_r = T_r = \begin{bmatrix} U_{1r} & 0 \\ 0 & U_{2r} \end{bmatrix},$$
where $U_{1r}$ and $U_{2r}$ consist of the first $r$ columns of $U_1$ and $U_2$, respectively. Using a similar idea to Algorithm 2, in order to let the reduced system have the companion form of a second-order model, we adopt a matrix $S$ that restores the companion form, and then perform the projections as in (26) and (27). Similar results can be applied to $Q_{11}$ and $Q_{22}$. This results in the following algorithm.
Algorithm 6 (second-order model reduction: SVD on $P_{11}$ and $P_{22}$ (or $Q_{11}$ and $Q_{22}$, resp.)). (1) Compute $P_{11}$ and $P_{22}$. (2) Take the SVD of $P_{11}$ and $P_{22}$ to get $U_1$ and $U_2$, respectively. (3) Perform the projection, together with the companion-form restoring transformation $S$, to get the reduced system $\hat{D}$, $\hat{K}$, $\hat{F}$, $\hat{C}$.
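A sketch of Algorithm 6, assuming $M = I$. The paper's companion-form restoring matrix $S$ is not reproduced here; the coupling $G = U_{1r}^TU_{2r}$ below is my own choice of a transformation that restores the companion form, and it may differ from the paper's construction:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def svd_reduce_pos_vel(D, K, F, C, r):
    """Algorithm 6 sketch: SVD of both P11 (positions) and P22 (velocities),
    then a projection restored to companion (second-order) form. The
    restoring matrix G = U1r^T U2r is an assumption, not the paper's S."""
    n = K.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
    B = np.vstack([np.zeros((n, F.shape[1])), F])
    P = solve_continuous_lyapunov(A, -B @ B.T)
    U1, _, _ = np.linalg.svd((P[:n, :n] + P[:n, :n].T) / 2)  # position Gramian
    U2, _, _ = np.linalg.svd((P[n:, n:] + P[n:, n:].T) / 2)  # velocity Gramian
    U1r, U2r = U1[:, :r], U2[:, :r]
    G = U1r.T @ U2r                    # assumed companion-form coupling
    Dr = G @ U2r.T @ D @ U2r @ np.linalg.inv(G)
    Kr = G @ U2r.T @ K @ U1r
    Fr = G @ U2r.T @ F
    return Dr, Kr, Fr, C @ U1r
```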
Now, consider Algorithm 5 for SISO second-order systems. Note that by taking the SVD, $P_{11} = U\Sigma U^T$ with $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$. Denote $c = CU = (c_1, \ldots, c_n)$, so that $CP_{11}C^T = c\Sigma c^T$. Then from Proposition 4,
$$\|G\|_{H_2}^2 = \mathrm{trace}(CP_{11}C^T) = \sum_{i=1}^{n}\sigma_ic_i^2.$$
In the reduced system, the directions corresponding to $\sigma_{r+1}, \ldots, \sigma_n$ are truncated. Therefore, the retained part of the norm is $\sum_{i=1}^{r}\sigma_ic_i^2$. In many cases, the eigenvalues of $P_{11}$ decay rapidly. Therefore, the reduced system produced by Algorithm 5 captures the main part of the system norm. Actually, we also have
$$\sum_{i=1}^{n}\sigma_ic_i^2 - \sum_{i=1}^{r}\sigma_ic_i^2 = \sum_{i=r+1}^{n}\sigma_ic_i^2.$$
This is not the $H_2$-norm of the error system $G - \hat{G}$, but it gives some information to support Algorithm 5. This result for SISO second-order systems easily extends to MIMO systems. Similar results can be applied to the methods in Algorithm 6.
Now, consider the complexity of the algorithms. In our SVD method on $P_{11}$ in Algorithm 5, after computing $P_{11}$, a single SVD of $P_{11}$ yields the projection matrices. In Algorithm 6, after computing $P_{11}$ and $P_{22}$, two SVDs, one of $P_{11}$ and one of $P_{22}$, yield the projection matrices. In the balanced truncation method in Algorithm 1, after computing $P_{11}$ and $Q_{11}$, there is also a single SVD; besides this, it uses two Cholesky factorizations, of $P_{11}$ and of $Q_{11}$, which take additional $O(n^3)$ time to get the projection matrices. Note that Algorithm 2 has more flops than Algorithm 1. Therefore, our SVD algorithms have lower time cost than the existing balanced truncation methods. There are other techniques for obtaining the balanced truncated system, for example, balancing-free truncation [8], but it is not hard to see that they are also more complex than our SVD methods. This makes our SVD methods more suitable for large-scale problems.
4. Computational Results
In this section, we apply the model reduction methods to three numerical examples: the building model, the clamped beam model, and the international space station model; see [9] for detailed descriptions. In Table 1, four model reduction methods are compared: the SVD method on $P_{11}$ in Algorithm 5 ("SVD1"), the SVD method on $P_{11}$ and $P_{22}$ in Algorithm 6 ("SVD12"), balanced truncation on $P_{11}$ and $Q_{11}$ in Algorithm 1 ("BT1"), and balanced truncation on $P_{11}$, $Q_{11}$ and $P_{22}$, $Q_{22}$ in Algorithm 2 ("BT12"). The comparison is based on the relative error of the $H_2$ norm, $\|G - \hat{G}\|_{H_2}/\|G\|_{H_2}$, where $n$ is the size of $x$, $r$ is the size of $\hat{x}$ in the reduced model, $m$ is the size of the input vector $u$, and $p$ is the size of the output vector $y$. Figures 1 and 2 show the amplitude of the frequency response of the original and reduced systems for the clamped beam model for two reduced dimensions, respectively.
5. Error Bounds and Discussions
Error bounds for first-order model reduction do not apply to second-order systems. Consider second-order system (1) with $M = I$. Suppose the reduced system is obtained by keeping the $r$ largest eigenvalues and the corresponding orthogonal eigenspace of $P_{11}$. Sorensen and Antoulas in [10] provided an a priori error bound showing that the norm of the error system is bounded by a constant times the sum of the neglected singular values of $P_{11}$.
Structured systems are considered in [10]. Let $N(s)$ and $\Delta(s)$ be polynomial matrices in $s$:
$$\Delta(s) = A_0 + A_1s + \cdots + A_ks^k,\qquad N(s) = N_0 + N_1s + \cdots + N_{k-1}s^{k-1},$$
where $A_k$ is invertible, and $N(s)\Delta(s)^{-1}$ is a strictly proper rational matrix. Denote by $N\!\left(\frac{d}{dt}\right)$, $\Delta\!\left(\frac{d}{dt}\right)$ the differential operators obtained by substituting $\frac{d}{dt}$ for $s$. The structured systems are defined by the following equations:
$$\Delta\!\left(\frac{d}{dt}\right)x(t) = Bu(t),\qquad y(t) = N\!\left(\frac{d}{dt}\right)x(t),$$
where the initial conditions are zero. The Gramian is defined as the Gramian of $x$ when the input is an impulse:
$$P_s = \int_0^\infty x(t)x(t)^T\,dt.$$
Let
$$P_s = V\Lambda V^T$$
be the eigensystem of $P_s$, where $V$ is orthogonal and the diagonal elements of $\Lambda$ are in decreasing order:
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0.$$
Partition $V = \begin{bmatrix} V_r & \tilde{V} \end{bmatrix}$, where $V_r$ consists of the first $r$ columns of $V$. The reduced model is derived from
$$V_r^T\,\Delta\!\left(\frac{d}{dt}\right)V_r\hat{x}(t) = V_r^TBu(t),$$
and the reduced system output is then $\hat{y}(t) = N\!\left(\frac{d}{dt}\right)V_r\hat{x}(t)$. Partition $\Lambda$ accordingly, $\Lambda = \mathrm{diag}(\Lambda_r, \tilde{\Lambda})$.
Theorem 7 (see [10]). The reduced model derived from the dominant eigenspace of the Gramian $P_s$ as described above satisfies
$$\|G - \hat{G}\| \le c\,\mathrm{trace}(\tilde{\Lambda}),$$
where $c$ is a modest constant depending on $N$ and $\Delta$, and the diagonal elements of $\tilde{\Lambda}$ are the neglected smallest eigenvalues of $P_s$.
Specially, for second-order system (1) with $M = I$:
$$\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Fu(t),\qquad y(t) = Cx(t),$$
it is easy to see that it can be described by (49) with $\Delta(s) = s^2I + sD + K$ and $N(s) = C$. So there is the following corollary.
Corollary 8. For second-order system (57), suppose $P_{11} = V\Lambda V^T$, where $V$ is orthogonal, $\Lambda = \mathrm{diag}(\Lambda_r, \tilde{\Lambda})$, and the diagonal elements of $\Lambda$ are in decreasing order. Suppose the reduced system is obtained by keeping the $r$ largest eigenvalues of $P_{11}$ as described in (53) and (54). Then
$$\|G - \hat{G}\| \le c\,\mathrm{trace}(\tilde{\Lambda}),$$
where $c$ is a modest constant depending on $C$, $D$, and $K$.
Now, we explain the corollary and what the constant $c$ is. Note that $G(s) = C\Delta(s)^{-1}F$ with $\Delta(s) = s^2I + sD + K$. Let $\hat{\Delta}(s) = V_r^T\Delta(s)V_r$, and partition $V = \begin{bmatrix} V_r & \tilde{V} \end{bmatrix}$ accordingly. Then the error system can be expressed through $\hat{\Delta}(s)^{-1}$ and the neglected eigenvalues. If the reduced system has no poles on the imaginary axis, $\sup_\omega\|\hat{\Delta}(i\omega)^{-1}\|$ is finite, and so $c$ is finite. Therefore, $c$ is a constant depending on $C$, $D$, and $K$. There is a similar result for $Q_{11}$ by duality.
Consider Algorithm 5, the SVD method on $P_{11}$. Note that since $P_{11}$ is symmetric positive definite, its SVD
$$P_{11} = U\Sigma U^T$$
coincides with its eigendecomposition, so $U_r = V_r$ and $\Sigma = \Lambda$. So the reduction rules in Algorithm 5 are exactly the same as (53). A similar result applies for $Q_{11}$. Therefore, the error systems produced by our Algorithm 5, the SVD method on $P_{11}$ or $Q_{11}$, all have good global error bounds. So far, there is no a priori error bound for error systems based on $P_{22}$ or $Q_{22}$.
Even though the balanced truncation methods in Algorithms 1 and 2, and the SVD method on both $P_{11}$ and $P_{22}$ (or $Q_{11}$ and $Q_{22}$) in Algorithm 6, all perform well on the three numerical examples, so far no a priori error bounds are available for these methods.
For second-order system (1) with $M = I$, the following theorem provides some observations on the structure of the Gramians $P$ and $Q$ of the corresponding first-order system.
Theorem 9. Given second-order system (1) with $M = I$, then in any coordinate system, there is no state-space transformation which gives a block-diagonal $Q$.
Proof. From the observability Lyapunov equation
$$A^TQ + QA + \bar{C}^T\bar{C} = 0,$$
one can get
$$\begin{bmatrix} 0 & -K^T \\ I & -D^T \end{bmatrix}\begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix} + \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix}\begin{bmatrix} 0 & I \\ -K & -D \end{bmatrix} + \begin{bmatrix} C^TC & 0 \\ 0 & 0 \end{bmatrix} = 0.$$
Equating each block gives
$$-K^TQ_{12}^T - Q_{12}K + C^TC = 0,\qquad Q_{11} - K^TQ_{22} - Q_{12}D = 0,\qquad Q_{12} + Q_{12}^T - D^TQ_{22} - Q_{22}D = 0.$$
From the first equation, one can see that $Q_{12}$ is not zero; otherwise $C^TC = 0$, and then from (61) the solution of this Lyapunov equation is $Q = 0$. This contradicts $Q$ being symmetric positive definite. So $Q$ is not block diagonal.
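The key step of the proof, that the off-diagonal block of $Q$ cannot vanish for nonzero $C$, is easy to confirm numerically on an assumed toy system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# toy companion-form system (M = I): check that the off-diagonal block
# Q12 of the observability Gramian is nonzero, as the proof requires
n = 3
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.3 * np.eye(n)
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
Cbar = np.hstack([np.ones((1, n)), np.zeros((1, n))])
Q = solve_continuous_lyapunov(A.T, -Cbar.T @ Cbar)
Q12 = Q[:n, n:]
# if Q12 were zero, the (1,1) block equation would force C^T C = 0
assert np.linalg.norm(Q12) > 1e-10
```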
Algorithm 2 [3], that is, the balanced truncation method on $P_{11}$, $Q_{11}$ and $P_{22}$, $Q_{22}$, balances both $P_{11}$ and $Q_{11}$ to get the projection matrices $W_1$ and $T_1$ by keeping the $r$ largest eigenvalues of $P_{11}Q_{11}$, balances both $P_{22}$ and $Q_{22}$ to get the projection matrices $W_2$ and $T_2$, and then uses $W = \mathrm{diag}(W_1, W_2)$ and $T = \mathrm{diag}(T_1, T_2)$ as projection matrices for the corresponding first-order system. This is equivalent to assuming that both $P$ and $Q$ are block diagonal in the corresponding first-order system, that is,
$$P = \begin{bmatrix} P_{11} & 0 \\ 0 & P_{22} \end{bmatrix},\qquad Q = \begin{bmatrix} Q_{11} & 0 \\ 0 & Q_{22} \end{bmatrix},$$
in order to keep the largest eigenvalues of $P_{11}Q_{11}$ and $P_{22}Q_{22}$ in the reduced corresponding first-order system. Similar remarks apply to the method in Algorithm 6, the SVD method on $P_{11}$ and $P_{22}$. From Theorem 9, $Q$ cannot be block diagonal; therefore the reduced system obtained from Algorithm 2 or Algorithm 6 may lose some information. Algorithm 1 [5] (balanced truncation based on $P_{11}$ and $Q_{11}$ (or $P_{22}$ and $Q_{22}$, resp.)) and Algorithm 5 (SVD on $P_{11}$ (or $P_{22}$, $Q_{11}$, $Q_{22}$, resp.)) avoid the drawback of assuming $Q$ is block diagonal, but they only work with either position Gramians or velocity Gramians, while Algorithms 2 and 6 have the advantage of taking into account both position and velocity Gramians.
6. Conclusions
Two Gramian-based second-order model reduction methods, based on the SVD of diagonal blocks of Gramians, are proposed in this paper. Even though they use the well-known SVD, to our knowledge they are the simplest Gramian-based second-order model reduction methods. Two existing balanced truncation methods [3, 5] were presented in 2006 and 2008, respectively. Compared to these two existing techniques, our SVD methods are competitive on numerical examples and more suitable for large-scale problems. By using the result given in [10], there is a global error bound for one method, SVD on $P_{11}$ or $Q_{11}$. Apart from this, so far no a priori error bounds are available for the other second-order model reduction methods, even though they perform well in numerical examples. Interesting future work would be to provide global error bounds for these methods.
Acknowledgment
This work is supported in part by NSFC Research Project 61070230, Shandong Social Science Program of China 11CKJJ16, and Grant no. 20110205 from the Jinan Science and Technology Bureau of China for Returned Overseas Scholars.