Abstract

We consider the weighted low rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. By using the Vandermonde representation, we first transform the problem into an unconstrained optimization problem and then solve it by the nonlinear conjugate gradient algorithm with the Armijo line search. Numerical examples illustrate that the new method is feasible and effective.

1. Introduction

Denote by $\mathbb{R}^{m\times n}$ the set of $m\times n$ real matrices and by $\mathcal{SH}_n^{+}$ the set of $n\times n$ positive semidefinite Hankel matrices. The symbols $\mathcal{V}_{n\times r}$ and $\mathcal{D}_r$ stand for the set of $n\times r$ Vandermonde matrices and $r\times r$ diagonal matrices, respectively. The notations $\operatorname{rank}(A)$ and $\|A\|_F$ refer to the rank and the Frobenius norm of the matrix $A$, respectively.

In this paper, we consider the following weighted low rank approximation problem for positive semidefinite Hankel matrices.

Problem 1. Given nonsingular matrices $W_1, W_2 \in \mathbb{R}^{n\times n}$, a Hankel matrix $H \in \mathbb{R}^{n\times n}$, and an integer $r$, $0 < r < n$, find a positive semidefinite Hankel matrix $\hat H \in \mathcal{SH}_n^{+}$ of rank $r$ such that
$$\|W_1 (H - \hat H) W_2\|_F = \min. \tag{1}$$

Problem (1) arises in certain control and signal processing applications (see [1] for more details), which can be stated as follows. The relationship between the impulse response $\{h_k\}$ of the model and the state-space parameters $(c, A, b)$ is
$$h_k = c\, A^{k-1} b, \quad k = 1, 2, \ldots,$$
where $c$, $A$, and $b$ are matrices of sizes $1 \times n$, $n \times n$, and $n \times 1$, respectively. The infinite Hankel matrix formed from the impulse response sequence is
$$H_\infty = \begin{pmatrix} h_1 & h_2 & h_3 & \cdots \\ h_2 & h_3 & h_4 & \cdots \\ h_3 & h_4 & h_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
For a given transfer function, the triplet $(c, A, b)$ of a minimal realization is unique modulo a similarity (coordinate) transformation. An interesting choice of coordinates leads to a canonical form realization in which the entries of $(c, A, b)$ are the transfer function parameters; this form directly relates the state-space model to the transfer function parameters. The Hankel matrix is an operator from the past input to the future output, and it necessarily has rank $n$. This rank property is due to the fact that an impulse-response sequence admits a finite-dimensional realization of order $n$ if and only if the infinite Hankel matrix formed from the sequence has rank equal to $n$. Usually, we need to find a lower rank positive semidefinite Hankel matrix to approximate the Hankel matrix containing the approximating signal. That is, given a Hankel matrix $H$, we need to find a positive semidefinite Hankel matrix $\hat H$ of given rank $r$ such that $\|H - \hat H\|_F$ is minimized. In practice, in particular, some noise is added to the rank-deficient signal, which leads to the weighted problem
$$\min \|W_1 (H - \hat H) W_2\|_F,$$
where $W_1$ and $W_2$ are nonsingular matrices in $\mathbb{R}^{n\times n}$; that is, problem (1).
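To make the rank property above concrete, the following Matlab sketch (our own illustration, not taken from [1]; the model order p, the random triple (A, b, c), and the number of samples N are assumptions) forms a finite Hankel matrix from an impulse-response sequence and checks that its rank equals the realization order.

% Illustrative sketch: for a random stable state-space triple (A, b, c) of
% order p, the impulse response is h_k = c*A^(k-1)*b and a sufficiently
% large Hankel matrix built from {h_k} has rank p.
p = 3;                               % assumed model order
A = 0.5 * orth(randn(p));            % stable p-by-p matrix (spectral radius 0.5)
b = randn(p, 1);
c = randn(1, p);
N = 10;                              % number of Hankel rows/columns (N >= p)
h = zeros(1, 2*N - 1);
for k = 1:2*N - 1
    h(k) = c * A^(k - 1) * b;        % impulse response h_k = c A^{k-1} b
end
H = hankel(h(1:N), h(N:2*N - 1));    % N-by-N Hankel matrix from the samples
disp(rank(H));                       % equals p, the order of the realization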

In the last few years, there has been a constantly increasing interest in developing the theory and numerical methods for the Hankel matrix approximation problem due to its wide applications in model reduction [2], system identification [3], linear prediction [4], and so forth. Some results for solving the Hankel matrix approximation problem are summarized below.

For the Hankel matrix approximation problem without the rank constraint, MacInnes [5] proposed a method for finding the best approximation of an arbitrary matrix by a Hankel matrix. By preserving the advantages of the projection algorithm and the Newton method, Al-Homidan [6] developed a hybrid method for minimizing the least distance function with positive semidefinite Hankel matrix constraints. When an interior point method is used for the nearest positive semidefinite Hankel matrix approximation problem, it is important for some algorithms to start from within the cone of positive semidefinite Hankel matrices; that is, the initial point must be a positive definite Hankel matrix [7]. Based on the Vandermonde representation and bootstrapping techniques, Varah [8] presented an iterative algorithm for finding an optimally conditioned Hankel matrix of a given order. Recently, Al-Homidan et al. [9] proposed a semidefinite programming approach for the Hankel matrix approximation problem. The method is guaranteed to find an optimally conditioned positive definite Hankel matrix within any desired tolerance.

For the case with the rank constraint, based on the structured total least squares, Park et al. [10] developed a numerical method for the low rank approximation of the Hankel matrix, in which two means of generating a Hankel matrix of the given rank were given. These techniques were extended to solve the Toeplitz low rank approximation problem [11]. By applying the Vandermonde representation and the spectral decomposition, three iterative algorithms for the low rank approximation of the Hankel matrix were proposed by Tang [12], who also made a further study of the low rank and weighted Hankel matrix approximation problem by using an interior point method. In [13], the low rank positive semidefinite Hankel matrix approximation problem was solved by two methods. One formulates it as a nonlinear minimization problem solved by techniques related to filter sequential quadratic programming; the other formulates it as a smooth unconstrained minimization problem solved by the BFGS method.

Although the Hankel matrix approximation problem has been extensively investigated, as far as we know there are few results on the weighted low rank approximation of the positive semidefinite Hankel matrix problem (1). In this paper, we first characterize the feasible set of problem (1) by using the Vandermonde representation. Then, problem (1) is transformed into an unconstrained optimization problem whose objective function is nonlinear in the new variables. We use the nonlinear conjugate gradient algorithm with the Armijo line search to solve the equivalent unconstrained optimization problem; the main difficulty lies in computing the gradient of the objective function, for which we derive an explicit expression. Finally, two numerical examples are tested to illustrate that the new method is feasible for solving problem (1).

2. Main Results

In this section, the problem (1) is transformed into an unconstrained optimization problem by making use of the Vandermonde representation. Then, the conjugate gradient algorithm with Armijo line search is applied to solve the equivalent unconstrained optimization problem. We begin with some definitions and lemmas.

Definition 2 (see [14]). A matrix $H = (h_{ij}) \in \mathbb{R}^{n\times n}$ of order $n$ is called a Hankel matrix if its entries are constant along each antidiagonal, that is, $h_{ij} = h_{i-1, j+1}$, $2 \le i \le n$, $1 \le j \le n-1$, which has the following form:
$$H = \begin{pmatrix} h_1 & h_2 & \cdots & h_n \\ h_2 & h_3 & \cdots & h_{n+1} \\ \vdots & \vdots & & \vdots \\ h_n & h_{n+1} & \cdots & h_{2n-1} \end{pmatrix}.$$

Definition 3 (see [14]). An $n \times r$ matrix $V$ is called a Vandermonde matrix if it has the following form:
$$V = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ v_1 & v_2 & \cdots & v_r \\ \vdots & \vdots & & \vdots \\ v_1^{\,n-1} & v_2^{\,n-1} & \cdots & v_r^{\,n-1} \end{pmatrix},$$
where the $k$th column of the matrix $V$ is a geometric sequence with ratio $v_k$ ($k = 1, \ldots, r$).

Lemma 4 (see [15, 16] (Vandermonde representation)). For any positive semidefinite Hankel matrix $H \in \mathbb{R}^{n\times n}$ of rank $r$, $r \le n$, there exist an $n \times r$ Vandermonde matrix $V$ and an $r \times r$ diagonal matrix $D$ with positive diagonal entries such that
$$H = V D V^T.$$

Combining Lemma 4 and Definitions 2 and 3, we obtain that the entries of the positive semidefinite Hankel matrix $H$ of rank $r$ can be written as
$$h_{ij} = V_{i,:}\, D\, (V_{j,:})^T = \sum_{k=1}^{r} d_k\, v_k^{\,i+j-2}, \quad i, j = 1, \ldots, n, \tag{9}$$
where $V_{i,:}$ denotes the $i$th row of the matrix $V$, $v_k$ denotes the node of the $k$th column of the matrix $V$, and $D = \operatorname{diag}(d_1, \ldots, d_r)$ is a diagonal matrix with positive diagonal entries. Equation (9) is identical to the result (3.4) of [6].
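The representation in Lemma 4 and the entrywise formula (9) are easy to verify numerically. The following Matlab sketch uses assumed sizes, nodes, and weights (not data from the paper) to build $H = V D V^T$ and check that it is a positive semidefinite Hankel matrix of rank $r$ whose entries satisfy (9).

% Minimal sketch of the Vandermonde representation, with assumed n, r,
% nodes v_k, and positive weights d_k: H = V*D*V' is Hankel, positive
% semidefinite, and of rank r.
n = 6; r = 2;
v = [0.9, 0.4];                          % assumed Vandermonde nodes
d = [2.0, 1.5];                          % assumed positive weights
V = zeros(n, r);
for k = 1:r
    V(:, k) = v(k).^(0:n-1)';            % k-th column is a geometric sequence
end
D = diag(d);
H = V * D * V';                          % positive semidefinite Hankel matrix
% entrywise check of (9): H(i,j) = sum_k d_k * v_k^(i+j-2)
i = 2; j = 5;
entry_err = abs(H(i, j) - sum(d .* v.^(i + j - 2)));
hankel_err = norm(H - hankel(H(:, 1), H(end, :)), 'fro');   % 0 if H is Hankel
fprintf('rank = %d, min eig = %.1e, Hankel err = %.1e, entry err = %.1e\n', ...
        rank(H), min(eig(H)), hankel_err, entry_err);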

Now, we begin to characterize the feasible set of the problem (1); that is,
$$\mathcal{S} = \{\hat H \in \mathcal{SH}_n^{+} : \operatorname{rank}(\hat H) = r\}.$$
By using Lemma 4 and (9) together with $\hat H = V D V^T$, we obtain that the entries of the matrix $W_1 \hat H W_2$ are
$$(W_1 \hat H W_2)_{ij} = (W_1)_{i,:}\, V D V^T\, (W_2)_{:,j} = \sum_{k=1}^{r} d_k\, \big((W_1)_{i,:}\,\omega_k\big)\big(\omega_k^T (W_2)_{:,j}\big), \quad i, j = 1, \ldots, n,$$
where $(W_1)_{i,:}$ denotes the $i$th row of the matrix $W_1$, $(W_2)_{:,j}$ denotes the $j$th column of the matrix $W_2$, and $\omega_k = (1, v_k, \ldots, v_k^{\,n-1})^T$ denotes the $k$th column of $V$.

Similarly, the entries of the matrix $W_1 H W_2$ are
$$(W_1 H W_2)_{ij} = (W_1)_{i,:}\, H\, (W_2)_{:,j}, \quad i, j = 1, \ldots, n.$$

Hence, by making use of Lemma 4, the problem (1) can be equivalently stated as the following unconstrained optimization problem.

Problem 5. Given nonsingular matrices $W_1, W_2 \in \mathbb{R}^{n\times n}$, a Hankel matrix $H \in \mathbb{R}^{n\times n}$, and an integer $r$, $0 < r < n$, find a Vandermonde matrix $V \in \mathbb{R}^{n\times r}$ with nodes $v_1, \ldots, v_r$ and a diagonal matrix $D = \operatorname{diag}(d_1, \ldots, d_r)$ with positive diagonal entries such that
$$f(V, D) := \|W_1 (H - V D V^T) W_2\|_F^2 = \min. \tag{13}$$
Set
$$g_{ij}(V, D) = (W_1 H W_2)_{ij} - (W_1 V D V^T W_2)_{ij}, \quad i, j = 1, \ldots, n. \tag{14}$$
Substituting the entrywise expressions above into (14), the objective function can be rewritten as
$$f(V, D) = \sum_{i=1}^{n} \sum_{j=1}^{n} g_{ij}(V, D)^2, \tag{15}$$
where
$$g_{ij}(V, D) = (W_1)_{i,:}\, H\, (W_2)_{:,j} - \sum_{k=1}^{r} d_k\, \big((W_1)_{i,:}\,\omega_k\big)\big(\omega_k^T (W_2)_{:,j}\big),$$
with $\omega_k = (1, v_k, \ldots, v_k^{\,n-1})^T$ denoting the $k$th column of $V$ as before.
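Collecting the unknowns into a vector $x = (v_1, \ldots, v_r, d_1, \ldots, d_r)^T$ (an ordering we choose for illustration), the objective (15) can be evaluated with a few lines of Matlab. The sketch below is one possible implementation; the function name and calling convention are our own choices, not code from the paper.

% A possible implementation of the objective (15) under the parameterization
% x = (v_1,...,v_r,d_1,...,d_r)'; names and conventions are illustrative.
function fval = wlra_objective(x, H, W1, W2)
    n = size(H, 1);
    r = numel(x) / 2;
    v = x(1:r);                      % Vandermonde nodes
    d = x(r+1:2*r);                  % diagonal entries of D
    V = zeros(n, r);
    for k = 1:r
        V(:, k) = v(k).^(0:n-1)';    % build the n-by-r Vandermonde matrix
    end
    R = W1 * (H - V * diag(d) * V') * W2;   % weighted residual
    fval = norm(R, 'fro')^2;                % squared Frobenius norm, as in (15)
end

For instance, with the matrix H built in the sketch after Lemma 4 and identity weights, wlra_objective([0.9; 0.4; 2.0; 1.5], H, eye(6), eye(6)) returns zero up to rounding.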

Next, we use the nonlinear conjugate gradient algorithm with the Armijo line search to solve the equivalent unconstrained optimization problem (13), regarding $f$ as a function of the variable vector $x = (v_1, \ldots, v_r, d_1, \ldots, d_r)^T \in \mathbb{R}^{2r}$. We first give the gradient of the objective function (15) as follows.

Theorem 6. The gradient of the objective function (15) is
$$\nabla f(x) = \left(\frac{\partial f}{\partial v_1}, \ldots, \frac{\partial f}{\partial v_r}, \frac{\partial f}{\partial d_1}, \ldots, \frac{\partial f}{\partial d_r}\right)^T,$$
where, for $k = 1, \ldots, r$,
$$\frac{\partial f}{\partial v_k} = -2 d_k \sum_{i=1}^{n} \sum_{j=1}^{n} g_{ij}(V, D)\, \Big[\big((W_1)_{i,:}\,\omega_k'\big)\big(\omega_k^T (W_2)_{:,j}\big) + \big((W_1)_{i,:}\,\omega_k\big)\big((\omega_k')^T (W_2)_{:,j}\big)\Big],$$
$$\frac{\partial f}{\partial d_k} = -2 \sum_{i=1}^{n} \sum_{j=1}^{n} g_{ij}(V, D)\, \big((W_1)_{i,:}\,\omega_k\big)\big(\omega_k^T (W_2)_{:,j}\big),$$
and $\omega_k' = \mathrm{d}\omega_k / \mathrm{d}v_k = (0, 1, 2 v_k, \ldots, (n-1) v_k^{\,n-2})^T$.

Proof. By the chain rule, the partial derivatives of (15) are
$$\frac{\partial f}{\partial v_k} = 2 \sum_{i=1}^{n} \sum_{j=1}^{n} g_{ij}(V, D)\, \frac{\partial g_{ij}(V, D)}{\partial v_k}, \qquad \frac{\partial f}{\partial d_k} = 2 \sum_{i=1}^{n} \sum_{j=1}^{n} g_{ij}(V, D)\, \frac{\partial g_{ij}(V, D)}{\partial d_k},$$
respectively, and differentiating the expression for $g_{ij}(V, D)$ with respect to $v_k$ and $d_k$ yields the formulas stated in Theorem 6. This completes the proof of Theorem 6.
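Implementations of gradient formulas such as those in Theorem 6 are error-prone, so a common sanity check is to compare them against central finite differences. The Matlab sketch below (the function handles, step size, and names are our own illustrative choices) performs such a check for an objective and gradient routine.

% Sanity check of a gradient implementation against central finite differences.
% obj_fun is assumed to evaluate the objective (15) and grad_fun an
% implementation of the Theorem 6 formulas; both handles are illustrative.
function maxerr = check_gradient(obj_fun, grad_fun, x0)
    h = 1e-6;                               % finite-difference step
    g_analytic = grad_fun(x0);
    g_fd = zeros(size(x0));
    for i = 1:numel(x0)
        e = zeros(size(x0)); e(i) = h;
        g_fd(i) = (obj_fun(x0 + e) - obj_fun(x0 - e)) / (2 * h);
    end
    maxerr = max(abs(g_analytic(:) - g_fd(:)));   % should be small (e.g., 1e-6)
end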

Then, we construct the nonlinear conjugate gradient algorithm with Armijo line search to solve the equivalent unconstrained optimization problem (13).

Algorithm 7 (this algorithm solves problem (13)). We have the following steps; a Matlab sketch of the iteration is given after Step 5.

Step 1. Initialize the line search parameters $\rho \in (0, 1)$ and $\delta \in (0, 1/2)$ and the tolerance $\varepsilon > 0$. Choose the initial iterative vector $x_0 \in \mathbb{R}^{2r}$. Set $k := 0$.

Step 2. Compute $g_k = \nabla f(x_k)$. If $\|g_k\| \le \varepsilon$, stop and output $x_k$.

Step 3. Determine the search direction
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_{k-1} d_{k-1}, & k \ge 1, \end{cases}$$
where $\beta_{k-1}$ is the conjugate gradient update parameter (e.g., of Fletcher-Reeves or Polak-Ribière type).

Step 4. Determine the step length by the Armijo line search; that is, find the smallest nonnegative integer $m$ such that
$$f(x_k + \rho^m d_k) \le f(x_k) + \delta \rho^m g_k^T d_k.$$
Set $\alpha_k = \rho^m$ and $x_{k+1} = x_k + \alpha_k d_k$.

Step 5. Set $k := k + 1$ and turn to Step 2.
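The steps above translate directly into code. The following Matlab sketch of Algorithm 7 is illustrative only: the Fletcher-Reeves formula for $\beta_{k-1}$, the descent-direction restart, and the parameter values $\rho$, $\delta$, and $\varepsilon$ are our own choices and need not coincide with those used in the paper.

% Compact sketch of Algorithm 7; parameter values and the Fletcher-Reeves
% beta are illustrative choices, not necessarily those of the paper.
function [x, fval] = ncg_armijo(obj_fun, grad_fun, x, maxit)
    rho = 0.5; delta = 1e-4; eps_tol = 1e-5;      % Step 1: illustrative parameters
    g = grad_fun(x); d = -g;                      % Step 3 (k = 0): steepest descent
    for k = 1:maxit
        if norm(g) <= eps_tol, break; end         % Step 2: stopping test
        if g' * d >= 0, d = -g; end               % safeguard: restart if not descent
        m = 0;                                    % Step 4: Armijo line search
        while obj_fun(x + rho^m * d) > obj_fun(x) + delta * rho^m * (g' * d)
            m = m + 1;
        end
        x = x + rho^m * d;                        % x_{k+1} = x_k + alpha_k d_k
        g_new = grad_fun(x);
        beta = (g_new' * g_new) / (g' * g);       % Fletcher-Reeves choice (assumed)
        d = -g_new + beta * d;                    % Step 3: next search direction
        g = g_new;                                % Step 5: continue iterating
    end
    fval = obj_fun(x);
end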

By the convergence theorem in [17, page 203], we can establish the following global convergence result for Algorithm 7.

Theorem 8. Suppose that the objective function $f$ in (15) is twice continuously differentiable, that the level set $\Omega = \{x \in \mathbb{R}^{2r} : f(x) \le f(x_0)\}$ is bounded, and that the step length $\alpha_k$ is generated by the Armijo line search. Then the sequence $\{x_k\}$ generated by Algorithm 7 converges globally; that is,
$$\lim_{k \to \infty} \|\nabla f(x_k)\| = 0.$$

3. Numerical Experiments

In this section, two numerical examples are tested to illustrate that Algorithm 7 is feasible for solving the weighted low rank approximation of the positive semidefinite Hankel matrix problem. All experiments are performed in Matlab R2010a on a computer with a 2.70 GHz CPU and 4 GB of memory. We denote the gradient norm by $\|\nabla f(x_k)\|$, where $x_k$ is the $k$th iterate of Algorithm 7. The stopping criterion is $\|\nabla f(x_k)\| \le \varepsilon$ for the prescribed tolerance $\varepsilon$. We choose a random initial vector in the following examples, generated by the Matlab function rand.

Example 1. The given matrices $H$, $W_1$, and $W_2$ are generated randomly. In this example, we use Algorithm 7 to solve problem (13) for several values of $n$ with a given rank $r$. Some experimental results are listed in Table 1, including the number of iterations (denoted by IT), the CPU time (denoted by CPU), the residual error $\|W_1(H - \hat H)W_2\|_F$, and the gradient norm $\|\nabla f(x_k)\|$.
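For concreteness, the following Matlab sketch shows one way a random test instance in the spirit of Example 1 might be set up and solved with the routines sketched above; the sizes, the construction of H, W1, and W2, the initial point, and the use of a finite-difference gradient in place of the Theorem 6 formulas are all our own assumptions, not the paper's experimental code.

% One possible random test instance, solved with the sketches above.
n = 10; r = 3;
W1 = eye(n) + 0.1 * rand(n);                 % nonsingular weight matrices (assumed)
W2 = eye(n) + 0.1 * rand(n);
H  = hankel(rand(n, 1), rand(1, n));         % random n-by-n Hankel data matrix
x0 = [rand(r, 1); rand(r, 1)];               % random initial nodes and weights
obj = @(x) wlra_objective(x, H, W1, W2);
% central finite-difference gradient, standing in for the Theorem 6 formulas
fd_grad = @(x) arrayfun(@(i) (obj(x + 1e-6 * ((1:2*r)' == i)) ...
                            - obj(x - 1e-6 * ((1:2*r)' == i))) / 2e-6, (1:2*r)');
[x, fval] = ncg_armijo(obj, fd_grad, x0, 500);
fprintf('f(x) = %.4e, gradient norm = %.4e\n', fval, norm(fd_grad(x)));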

Example 2. In this example, ten samples of an exponential signal with a given amplitude and frequency were generated, and complex Gaussian white noise was added to them. A 10 × 10 Hankel matrix $H$ was formed from these samples, and the weight matrices $W_1$ and $W_2$ were specified accordingly.

Setting the rank $r = 2$, we use Algorithm 7 to solve problem (13). After 11 iterations, we obtain the minimum of the objective function and hence the closest positive semidefinite Hankel matrix of rank 2 for problem (1).

For the above example, we also use Algorithm 7 to solve problem (13) with different values of the rank $r$. The number of iterations (denoted by IT), the CPU time (denoted by CPU), the residual error $\|W_1(H - \hat H)W_2\|_F$, and the gradient norm $\|\nabla f(x_k)\|$ are listed in Table 2.

Examples 1 and 2 show that Algorithm 7 is feasible and effective for solving the weighted low rank approximation of the positive semidefinite Hankel matrix problem.

4. Conclusion

The Hankel matrix approximation problem is a very popular and interesting problem in signal processing, model reduction, system identification, and linear prediction. This paper studies the weighted low rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. By using the Vandermonde representation, the problem is firstly transformed into an unconstrained optimization problem. Then, we use the nonlinear conjugate gradient algorithm to solve the equivalent unconstrained optimization problem. Finally, two numerical examples are tested to show that our method is feasible and effective.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank Professor Shuqian Shen and the two anonymous referees for providing their valuable comments and constructive suggestions, which have significantly improved the quality of the paper. The work was supported by the National Natural Science Foundation of China (nos. 11101100, 11301107, and 11261014), the Natural Science Foundation of Guangxi Province (nos. 2012GXNSFBA053006 and 2013GXNSFBA019009), the Innovation Project of GUET Graduate Education (GDYCSZ201473), and the Innovation Project of Guangxi Graduate Education (YCSZ2014137).