Research Article | Open Access

Wenhui Liu, Feiqi Deng, Jiarong Liang, Haijun Liu, "A Class of Transformation Matrices and Its Applications", *Abstract and Applied Analysis*, vol. 2014, Article ID 742098, 11 pages, 2014. https://doi.org/10.1155/2014/742098

# A Class of Transformation Matrices and Its Applications

**Academic Editor:** Turgut Öziş

#### Abstract

This paper studies a class of transformation matrices and its applications. First, we introduce a class of transformation matrices between two different vector operators and establish some of their important properties. Second, we consider two applications. The first improves Qian Jiling's formula, and the second deals with the observability of discrete-time stochastic linear systems with Markovian jump and multiplicative noises. A new necessary and sufficient condition for weak observability is given in the second application.

#### 1. Introduction

The vector operator is an important concept in matrix analysis and an effective mathematical tool in many applications [1]. For the symmetric case, we consider a new vector operator. We find that there exists a class of transformation matrices between the two vector operators, and these matrices can be used to solve many problems. To demonstrate their importance, this paper considers two of their applications in control theory.

Qian Jiling's formula is an important and interesting formula for computing Lyapunov functions [2]. This paper improves Qian Jiling's formula with the help of these transformation matrices.

Observability and detectability are two basic concepts in control theory [3]. These concepts have been extended from deterministic systems to stochastic systems over the past few decades; see, for example, [4–11] and the references therein. In particular, this paper focuses on the observability/detectability of discrete-time stochastic linear systems. The definition of uniform observability/detectability [4] for deterministic discrete-time time-varying linear systems was extended to stochastic linear systems in [5]. In fact, the weak observability/detectability of [5] and the uniform observability/detectability of [4] are consistent. Reference [5] showed that weak detectability is weaker than mean square detectability; still, weak detectability plays the same role in the discussion of algebraic Lyapunov and Riccati equations. Under the same framework, [6] investigated the observability (i.e., the weak observability of [5]) of a class of discrete-time stochastic linear systems subject to Markovian jump and multiplicative noises and obtained some necessary and sufficient conditions. Reference [7] studied the equivalence between two different definitions of observability/detectability for discrete-time stochastic linear systems. These works give a solid account of the observability and detectability of discrete-time stochastic linear systems. Our paper further studies the weak observability of such systems with the help of the transformation matrices. The system in [6] is more complicated than ours, but we obtain some deeper conclusions that are not included in [6]. Although the systems in [5, 7] are two special cases of our system, our results are not parallel to theirs; the transformation matrices are essential for obtaining these new results.

The outline of this paper is as follows. In Section 2, we define a class of transformation matrices and study their important properties. Section 3 considers their applications: Section 3.1 uses them to improve Qian Jiling's formula, and Section 3.2 uses them to deal with the weak observability of discrete-time stochastic linear systems with Markovian jump and multiplicative noises, where a new necessary and sufficient condition for weak observability is given.

*Notations*. is -dimensional real Euclidean space. is -dimensional complex space. , , and , respectively, denote the transpose, determinant, and trace of . means that is a symmetric positive (semipositive) definite matrix. denotes the mathematical expectation of a random variable. . , where is a positive integer. denotes real matrix space with the inner product . with the inner product .

#### 2. Transformation Matrix and Its Properties

##### 2.1. Transformation Matrix between Two Vector Operators

*Definition 1 (see [1]). * is called a vector operator by column, if for any .

*Definition 2. * is called a half vector operator by column, if for any , where .

*Definition 3 (see [1]). *For matrices and , one calls matrix the Kronecker product of matrices and .

It is easy to find that the two operators have many similar properties, such as the following: (a) both are linear operators; (b) ; (c) , where .
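The operator symbols are elided from this copy. Assuming the two operators are the standard column-stacking vec and the half-vectorization vech (an assumption, not confirmed by the surviving text), the linearity in (a) and the usual Kronecker identity vec(AXB) = (Bᵀ ⊗ A) vec(X) can be illustrated with a small numpy sketch:

```python
import numpy as np

def vec(X):
    """Stack the columns of X into one long vector (column-major order)."""
    return X.flatten(order="F")

def vech(X):
    """Half-vectorization: stack the on-and-below-diagonal part, column by column."""
    n = X.shape[0]
    return np.concatenate([X[j:, j] for j in range(n)])

rng = np.random.default_rng(0)
A, X, B = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))

# Kronecker identity: vec(A X B) = (B' kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```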

For convenience, we will adopt the following notations.

denotes a matrix; its -element is , and it is zero elsewhere.

denotes a matrix. Its column vectors are denoted by from left to right and row vectors are denoted by from top to bottom, where It is easy to prove We will denote .

denotes a matrix. Its column vectors are denoted by from left to right and row vectors are denoted by from top to bottom, where . We will denote .

For example, and so forth.

Considering Property 3, we generally call and transformation matrices between and .
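The transformation matrices themselves are elided from this copy. In standard terminology, the matrices mapping vech(X) to vec(X) and back for symmetric X are the duplication matrix Dₙ and the elimination matrix Lₙ; the sketch below constructs them under that assumed reading:

```python
import numpy as np

def duplication_matrix(n):
    """D_n: vec(X) = D_n @ vech(X) for every symmetric n x n matrix X."""
    m = n * (n + 1) // 2
    D = np.zeros((n * n, m))
    k = 0
    for j in range(n):
        for i in range(j, n):          # lower-triangular entries, column by column
            D[j * n + i, k] = 1.0      # position of X[i, j] in vec(X)
            D[i * n + j, k] = 1.0      # position of X[j, i] (same cell if i == j)
            k += 1
    return D

def elimination_matrix(n):
    """L_n: vech(X) = L_n @ vec(X) for every n x n matrix X."""
    m = n * (n + 1) // 2
    L = np.zeros((m, n * n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            L[k, j * n + i] = 1.0
            k += 1
    return L

n = 3
D, L = duplication_matrix(n), elimination_matrix(n)
S = np.array([[1., 2., 3.], [2., 4., 5.], [3., 5., 6.]])   # symmetric test matrix
vech_S = np.concatenate([S[j:, j] for j in range(n)])
assert np.allclose(D @ vech_S, S.flatten(order="F"))       # D rebuilds vec(S)
assert np.allclose(L @ D, np.eye(n * (n + 1) // 2))        # L is a left inverse of D
```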

##### 2.2. Properties of Transformation Matrix

*Property 1. *Consider

*Proof. *For ,
For ,
Thus, .

*Remark 4. *Generally , such as

*Property 2. *Consider

, , .

*Remark 5. *Generally , such as which is not a symmetric matrix. This indicates that is the -generalized inverse matrix of , but not the -generalized inverse matrix of (see [1]).
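Continuing the Dₙ/Lₙ reading (again an assumption, since the symbols are elided here): one product of the two matrices is the identity, while the reverse product is neither the identity nor even symmetric, which matches the one-sided generalized-inverse statement of Remark 5. For n = 2:

```python
import numpy as np

# Assumed standard duplication/elimination matrices for n = 2.
D = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])      # vec(X) = D @ vech(X) for symmetric 2x2 X
L = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 1.]])  # vech(X) = L @ vec(X)

assert np.allclose(L @ D, np.eye(3))    # one-sided identity
DL = D @ L
assert not np.allclose(DL, np.eye(4))   # D @ L is not the identity...
assert not np.allclose(DL, DL.T)        # ...and not symmetric either
```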

*Property 3. *If , then , .

*Proof. *Consider

*Property 4. *For matrices and , we have the following:(a),
(b),
(c),
(d).

*Proof. *Parts (a) and (b) can be found in [1], so we only need to prove (c) and (d).

Obviously, and are symmetric. Therefore

*Remark 6. *If and are defined by row, that is,
we find that Properties 3 and 4 are still true.

*Property 5. *For all , then(a),(b),where denotes the set of all eigenvalues of matrix.

*Proof. *(a) Let
If , then . Thus, .

Next we prove . Obviously , so we only need to prove .

For any , such that , with , with (see [1]). Let ; then
If , then . Actually we always have .

Suppose that ; then for (i.e., for ). Without loss of generality, let ; then , . This contradicts . Then .

Thus, . And because , therefore .

(b) Let
If , then .

Thus, .

Next we prove . Obviously , so we only need to prove .

For any , such that , with , with (see [1]). Let ; then
If , then . Actually, we always have .

Suppose that ; then for (i.e., for ). Without loss of generality, let ; then , . This contradicts . Then .

Thus, .

And because , therefore .
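The statement of Property 5 is elided from this copy. A standard spectral fact consistent with its later use in the proof of Theorem 7 is that the Lyapunov-type operator X ↦ AX + XAᵀ restricted to symmetric matrices, represented by Lₙ(I ⊗ A + A ⊗ I)Dₙ, has spectrum {λᵢ + λⱼ : 1 ≤ i ≤ j ≤ n}. A numeric sketch of that assumed reading:

```python
import numpy as np

def duplication_matrix(n):
    m = n * (n + 1) // 2
    D = np.zeros((n * n, m))
    k = 0
    for j in range(n):
        for i in range(j, n):
            D[j * n + i, k] = 1.0
            D[i * n + j, k] = 1.0
            k += 1
    return D

def elimination_matrix(n):
    m = n * (n + 1) // 2
    L = np.zeros((m, n * n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            L[k, j * n + i] = 1.0
            k += 1
    return L

A = np.array([[1., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])          # triangular, eigenvalues 1, 2, 3
n = A.shape[0]
D, L = duplication_matrix(n), elimination_matrix(n)

# Restriction of X -> A X + X A' to the symmetric subspace, in vech coordinates.
M = L @ (np.kron(np.eye(n), A) + np.kron(A, np.eye(n))) @ D
eig = np.sort(np.linalg.eigvals(M).real)
assert np.allclose(eig, [2., 3., 4., 4., 5., 6.])   # all sums lam_i + lam_j, i <= j
```

For a Hurwitz A, every sum λᵢ + λⱼ has negative real part, so the reduced matrix is invertible; this is exactly what the proof of Theorem 7 needs.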

#### 3. Applications of Transformation Matrix

##### 3.1. Application 1: The Improvement of Qian Jiling's Formula

Qian Jiling's formula is an important and interesting formula to compute the Lyapunov functions [2]. However, the formula can be improved with the help of the above transformation matrices.

Theorem 7. *Assume that the system is asymptotically stable (i.e., is a Hurwitz matrix); then one has the following:*(a)*for any , , there exists a unique solution ,*(b)*for any , there exists a unique quadratic Lyapunov function , such that , and the expression is
**where
*

*Proof. *Part (a) can be found in [2], so we only need to prove (b).

Let . Because and is a Hurwitz matrix, so and
By (a), the quadratic Lyapunov function which satisfies is unique.

Next we prove that can be expressed as (15). Note that ; then

By (a), we have .

By Property 5, we have .

By Cramer's rule, (18) has a unique solution and
where we use to replace 's -column in and denote this new matrix by ('s column vectors are, respectively, called the , -column vector from left to right).

By expanding the determinant, we have
For the uniqueness of , (15) is the desired expression.

*Remark 8. *The dimensions of and in (15) are significantly smaller than those in [2] due to the application of the transformation matrices and .
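Remark 8's dimension reduction can be checked numerically: in vech coordinates the Lyapunov equation AᵀP + PA = −Q becomes a linear system with n(n+1)/2 unknowns instead of n². A sketch under the standard Dₙ/Lₙ assumption (the Cramer-rule expression (15) itself is elided from this copy, so we verify the reduced system directly):

```python
import numpy as np

def duplication_matrix(n):
    m = n * (n + 1) // 2
    D = np.zeros((n * n, m))
    k = 0
    for j in range(n):
        for i in range(j, n):
            D[j * n + i, k] = 1.0
            D[i * n + j, k] = 1.0
            k += 1
    return D

def elimination_matrix(n):
    m = n * (n + 1) // 2
    L = np.zeros((m, n * n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            L[k, j * n + i] = 1.0
            k += 1
    return L

def lyap_via_vech(A, Q):
    """Solve A' P + P A = -Q for symmetric P via the reduced vech system."""
    n = A.shape[0]
    D, L = duplication_matrix(n), elimination_matrix(n)
    # vech(A' P + P A) = L (I kron A' + A' kron I) D vech(P)
    M = L @ (np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))) @ D
    vech_Q = np.concatenate([Q[j:, j] for j in range(n)])
    vech_P = np.linalg.solve(M, -vech_Q)        # n(n+1)/2 unknowns, not n^2
    return (D @ vech_P).reshape(n, n, order="F")

A = np.array([[-1., 1.], [0., -2.]])            # Hurwitz
Q = np.eye(2)
P = lyap_via_vech(A, Q)
assert np.allclose(A.T @ P + P @ A, -Q) and np.allclose(P, P.T)
```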

##### 3.2. Application 2: New Results for the Observability of Stochastic Linear Systems

This subsection considers the observability of discrete-time stochastic linear systems. A new necessary and sufficient condition for the weak observability will be given with the help of these transformation matrices.

###### 3.2.1. Description of the Stochastic Linear Systems

This subsection considers the following discrete-time stochastic linear system with Markovian jump and multiplicative noises: for , where , , and , respectively, denote the system state, control input, and measured output. , , and . is a discrete-time homogeneous Markov chain. Its state space is , its transition probability matrix is , and its initial distribution is . are wide-sense stationary processes, independent of each other, such that and all independent of .

Let .

For the stochastic system (21), its solution and output processes with and are, respectively, denoted by and . We will simply denote , .

If , the stochastic system (21) becomes for .
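The system matrices of (21)/(23) are elided from this copy. As a purely hypothetical illustration, a common form for such Markov jump systems with multiplicative noise is x_{t+1} = (A0(θ_t) + w_t A1(θ_t)) x_t, y_t = C(θ_t) x_t with u ≡ 0; all matrices below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-mode example (the paper's matrices are not available here).
A0 = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.5, 0.0], [0.2, 0.7]])]
A1 = [0.1 * np.eye(2), 0.2 * np.eye(2)]       # multiplicative-noise gains
C  = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
P_trans = np.array([[0.9, 0.1], [0.3, 0.7]])  # Markov transition probabilities

def simulate(x0, i0, T):
    """One sample path of states, outputs, starting in state x0 and mode i0."""
    x, i, xs, ys = x0.astype(float), i0, [], []
    for _ in range(T):
        xs.append(x.copy())
        ys.append(C[i] @ x)
        w = rng.standard_normal()              # zero-mean, unit-variance noise
        x = (A0[i] + w * A1[i]) @ x            # multiplicative-noise dynamics
        i = rng.choice(2, p=P_trans[i])        # Markovian jump of the mode
    return np.array(xs), np.array(ys)

xs, ys = simulate(np.array([1.0, -1.0]), 0, 50)
assert xs.shape == (50, 2) and ys.shape == (50, 1)
```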

For convenience, we will use the following notations:

It is easy to get , , , where . Specifically .

For the system (23), we define several operators in as follows:

and are called dual operators for each other (see Lemma 15).

Also, we define the operator , where Then

###### 3.2.2. Some Preliminaries and Auxiliary Results

Let

Lemma 9. *For the system (23), one has for all , .*

*Proof. *Firstly, it is easy to get for all . Secondly, we assume that for all , . Then, for , for all , we have
By mathematical induction, the claim holds.

Lemma 10. *For the system (23), one has for , .*

*Proof. *It is easy to get the result by mathematical induction, so we will omit the details.

Lemma 11. *For the system (23), for all , for all (i.e., for all ), one has
*

*Proof. *For all , we have
then for .

By induction, we have for .

Lemma 12. *For the system (23), for all , for all , one has
*

*Proof. *Because
therefore

*Remark 13. *There is a physical interpretation. is the accumulated energy of the output process on the interval . The th modal is .
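The accumulated output energy in Remark 13 can be computed without simulation through a Gramian-type recursion. The recursion below is an assumed standard form for the hypothetical model x_{t+1} = (A0(θ_t) + w_t A1(θ_t)) x_t, y_t = C(θ_t) x_t with zero-mean, unit-variance noise, not the paper's elided formula; in that setting, weak observability amounts to W_T(i) > 0 for every mode i and some finite T:

```python
import numpy as np

def weak_obs_gramians(A0, A1, C, P, T):
    """Gramians W_T(i): x0' W_T(i) x0 = E[sum_{t<T} ||y_t||^2 | x_0 = x0, theta_0 = i].

    Assumed recursion (cross terms vanish since the noise has zero mean):
      W_{k+1}(i) = C_i' C_i + sum_j P[i,j] (A0_i' W_k(j) A0_i + A1_i' W_k(j) A1_i)
    """
    N, n = len(A0), A0[0].shape[0]
    W = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(T):
        W = [C[i].T @ C[i]
             + sum(P[i, j] * (A0[i].T @ W[j] @ A0[i] + A1[i].T @ W[j] @ A1[i])
                   for j in range(N))
             for i in range(N)]
    return W

# Hypothetical two-mode data, as in the earlier simulation sketch.
A0 = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.5, 0.0], [0.2, 0.7]])]
A1 = [0.1 * np.eye(2), 0.2 * np.eye(2)]
C  = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
P_trans = np.array([[0.9, 0.1], [0.3, 0.7]])

W = weak_obs_gramians(A0, A1, C, P_trans, 10)
assert all(np.allclose(Wi, Wi.T) for Wi in W)                  # symmetric
assert all(np.linalg.eigvalsh(Wi).min() >= -1e-12 for Wi in W) # positive semidefinite
```

As a sanity check, with a single mode, no noise (A1 = 0), A0 = I, and C = [1 0], the recursion reduces to the classical deterministic observability Gramian.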

Lemma 14. *For the system (23), for all , for all , , one has for .*

*Proof. *
It is easy to get the result by mathematical induction, so we will omit the details.

Lemma 15. *For the system (23), for all , for all , , one has
*

*Proof. *Firstly, it is easy to get
Secondly, we assume that for all . Then, for , for all , we have
By mathematical induction, the claim holds.

Corollary 16. *For the system (23), for all , for all , , one has
*

*Proof. *Consider

In particular, we have when .

Lemma 17 (see [12]). *For any random variable , one has
*

Lemma 18. *For the system (23), for all , for all , one has
*

*Proof. *The proof is omitted.

Lemma 19 (see [1]). *If , then there exist nonzero vectors such that .*

###### 3.2.3. A Useful Formula

Define two operators for and for in .

For convenience, we naturally think when we use in this subsection.

Theorem 20. *For the system (23), for all , one has
**
where
*

*Proof. *When for , then

In the same way, it is easy to get

Consider

Let .

*Remark 21. *For the symmetric case, is more suitable than in applications (such as Lemma 22 and Theorem 26). Without the transformation matrices, it is very difficult to obtain these results. Thus, we say that the transformation matrices are an effective mathematical tool.

Lemma 22. *For the system (23), for all , for all , , one has
*

*Proof. *By Lemma 12, it is easy to get

Obviously, .

We only need to prove :

If , then
Thus, for .

For , there exist by the - theorem, such that

For , we have
where .

Corollary 23. *For the system (23), for all *