Abstract

We consider the low-rank approximation problem arising in the generalized Karhunen-Loeve transform. A sufficient condition for the existence of a solution is derived, and the analytical expression of the solution is given. A numerical algorithm is proposed to compute the solution. The new algorithm is illustrated by numerical experiments.

1. Introduction

Throughout this paper, we use $\mathbb{R}^{m\times n}$ to denote the set of $m\times n$ real matrices. We use $A^{T}$ and $A^{\dagger}$ to denote the transpose and the Moore-Penrose generalized inverse of the matrix $A$, respectively. The symbol $\mathcal{OR}^{n\times n}$ stands for the set of all $n\times n$ orthogonal matrices. The symbols $\operatorname{rank}(A)$ and $\|A\|_{F}$ stand for the rank and the Frobenius norm of the matrix $A$, respectively. For $x\in\mathbb{R}^{n}$, the symbol $\|x\|_{2}$ stands for the $2$-norm of the vector $x$, that is, $\|x\|_{2}=\sqrt{x^{T}x}$. The symbol $A^{1/2}$ stands for the square root of the symmetric nonnegative definite matrix $A$, that is, the symmetric nonnegative definite matrix satisfying $A^{1/2}A^{1/2}=A$. For random vectors $x=(x_{1},\dots,x_{m})^{T}$ and $y=(y_{1},\dots,y_{n})^{T}$, we use $E(x_{i})$ to stand for the expected value of the $i$th entry $x_{i}$, and we use $C_{xy}$ to stand for the covariance matrix of $x$ and $y$, where $E(x)=(E(x_{1}),\dots,E(x_{m}))^{T}$ and $C_{xy}=\operatorname{cov}(x,y)=E\big[(x-E(x))(y-E(y))^{T}\big]$.

The generalized Karhunen-Loeve transform is a well-known signal processing technique for data compression and filtering (see [1–4] for more details). A simple description of the generalized Karhunen-Loeve transform is as follows. Given two random vectors $x\in\mathbb{R}^{m}$ and $y\in\mathbb{R}^{n}$, and an integer $k$ ($1\le k\le\min(m,n)$), the generalized Karhunen-Loeve transform is represented by a matrix $T\in\mathbb{R}^{m\times n}$, which is a solution of the following minimization problem (see [1, 4]):
$$\min_{\operatorname{rank}(T)=k}E\big(\|x-Ty\|_{2}^{2}\big). \tag{1}$$
Here the vector $y$ depends on some prior knowledge about the data $x$.
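To make the cost in (1) concrete, the following NumPy sketch estimates $E(\|x-Ty\|_{2}^{2})$ by sampling. The joint model, the dimensions, and all names below are invented for illustration and are not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, n_samples = 4, 6, 100_000

# A toy joint model for the random vectors (x, y).
y = rng.standard_normal((n, n_samples))
x = rng.standard_normal((m, n)) @ y + 0.1 * rng.standard_normal((m, n_samples))

def gklt_cost(T, x, y):
    """Monte Carlo estimate of E||x - T y||_2^2, the cost in (1)."""
    r = x - T @ y
    return np.mean(np.sum(r * r, axis=0))

print(gklt_cost(np.zeros((m, n)), x, y))  # cost of the zero transform, about E||x||^2
```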

Without the rank constraint on $T$, the solution of the minimization problem (1) is
$$T_{0}=C_{xy}C_{yy}^{\dagger}, \tag{2}$$
where $C_{xy}=\operatorname{cov}(x,y)$ and $C_{yy}=\operatorname{cov}(y,y)$. The minimization problem in this case is associated with the well-known concept of Wiener filtering (see [3]).
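As a quick illustration, (2) can be formed directly from sample covariances; the following is a minimal sketch, assuming synthetic data and hypothetical dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, n_samples = 4, 6, 5000

# Toy correlated data standing in for the random vectors x and y.
y = rng.standard_normal((n, n_samples))
x = rng.standard_normal((m, n)) @ y + 0.1 * rng.standard_normal((m, n_samples))
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
C_xy = xc @ yc.T / n_samples
C_yy = yc @ yc.T / n_samples

# The Wiener solution (2); pinv is used because C_yy may be singular.
T0 = C_xy @ np.linalg.pinv(C_yy)
```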

With the rank constraint on $T$, that is, $\operatorname{rank}(T)=k$, we first consider the cost function of the minimization problem (1), $f(T)=E(\|x-Ty\|_{2}^{2})$; for simplicity we assume, as in [1], that $x$ and $y$ have zero mean. By using the fact $C_{xy}C_{yy}^{\dagger}C_{yy}=C_{xy}$ and the four Moore-Penrose equations of $C_{yy}^{\dagger}$, it is easy to verify that (see also [1])
$$f(T)=\big\|\big(T-C_{xy}C_{yy}^{\dagger}\big)C_{yy}^{1/2}\big\|_{F}^{2}+\operatorname{tr}(C_{xx})-\operatorname{tr}\big(C_{xy}C_{yy}^{\dagger}C_{yx}\big). \tag{3}$$
Noting that the covariance matrix $C_{yy}$ is symmetric nonnegative definite, it can be factorized as
$$C_{yy}=C_{yy}^{1/2}C_{yy}^{1/2}. \tag{4}$$
Substituting (4) into (3) gives rise to
$$f(T)=\big\|TC_{yy}^{1/2}-C_{xy}\big(C_{yy}^{1/2}\big)^{\dagger}\big\|_{F}^{2}+\operatorname{tr}(C_{xx})-\operatorname{tr}\big(C_{xy}C_{yy}^{\dagger}C_{yx}\big). \tag{5}$$
Since $\operatorname{tr}(C_{xx})-\operatorname{tr}(C_{xy}C_{yy}^{\dagger}C_{yx})$ is a constant, we have
$$\arg\min_{\operatorname{rank}(T)=k}f(T)=\arg\min_{\operatorname{rank}(T)=k}\big\|TC_{yy}^{1/2}-C_{xy}\big(C_{yy}^{1/2}\big)^{\dagger}\big\|_{F}; \tag{6}$$
that is to say, minimizing $f(T)$ is equivalent to minimizing $\|TC_{yy}^{1/2}-C_{xy}(C_{yy}^{1/2})^{\dagger}\|_{F}$. Therefore, we can find the solution of (1) by solving the minimization problem
$$\min_{\operatorname{rank}(T)=k}\big\|TC_{yy}^{1/2}-C_{xy}\big(C_{yy}^{1/2}\big)^{\dagger}\big\|_{F}, \tag{7}$$
which can be summarized as the following low rank approximation problem.

Problem 1. Given two matrices $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{p\times n}$, and an integer $k$, $1\le k\le\min(m,p)$, find a matrix $X\in\mathbb{R}^{m\times p}$ of rank $k$ such that
$$\|XB-A\|_{F}=\min. \tag{8}$$
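Before moving on, the decomposition (3) underlying Problem 1 can be checked numerically. In the sketch below (illustrative only; all data are random), sample covariances play the roles of $C_{xx}$, $C_{xy}$, and $C_{yy}$; because the samples are centered, these sample covariances satisfy the fact $C_{xy}C_{yy}^{\dagger}C_{yy}=C_{xy}$ exactly, so the two sides of (3) agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, n_samples = 3, 5, 200

# Centered samples, so the sample covariances satisfy C_xy C_yy^+ C_yy = C_xy.
y = rng.standard_normal((n, n_samples))
x = rng.standard_normal((m, n)) @ y + rng.standard_normal((m, n_samples))
x -= x.mean(axis=1, keepdims=True)
y -= y.mean(axis=1, keepdims=True)
C_xx = x @ x.T / n_samples
C_xy = x @ y.T / n_samples
C_yx = C_xy.T
C_yy = y @ y.T / n_samples

Cp = np.linalg.pinv(C_yy)
w, Q = np.linalg.eigh(C_yy)                           # C_yy = Q diag(w) Q^T
Ch = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T  # symmetric square root C_yy^{1/2}

T = rng.standard_normal((m, n))
# f(T) expanded in covariances: tr(C_xx) - 2 tr(T C_yx) + tr(T C_yy T^T).
lhs = np.trace(C_xx) - 2 * np.trace(T @ C_yx) + np.trace(T @ C_yy @ T.T)
# Right-hand side of the decomposition (3).
rhs = (np.linalg.norm((T - C_xy @ Cp) @ Ch, "fro") ** 2
       + np.trace(C_xx) - np.trace(C_xy @ Cp @ C_yx))
assert np.isclose(lhs, rhs)
```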

In the last few years there has been a constantly increasing interest in developing the theory and numerical approaches for low rank approximations of a matrix, due to their wide applications. A well-known method for low rank approximation is the singular value decomposition (SVD) [5, 6]. When the desired rank is relatively low and the matrix is large and sparse, a complete SVD becomes too expensive. Some less expensive alternatives, for example, the Lanczos bidiagonalization process [7] and the Monte Carlo algorithm [8], are available. To speed up the computation of the SVD, random sampling has been employed in [9]. Recently, Ye [10] proposed the generalized low rank approximations of matrices (GLRAM) method, which has been shown to require less computational time than the traditional singular value decomposition-based methods in practical applications. Later, the GLRAM method was revisited and extended by Liu et al. [11] and Liang and Shi [12]. In some applications, we need to emphasize important parts and deemphasize unimportant parts of the data matrix, so weighted low rank approximations have been considered by many authors, and numerical methods such as a Newton-like algorithm [13], the left-versus-right-representations method [14], and an unconstrained optimization method [15] have been proposed. Recently, by using the hierarchical identification principle [16], which regards the known matrix as the system parameter matrix to be identified, Ding et al. and Xie et al. presented gradient-based iterative algorithms [16–21] and least-squares-based iterative algorithms [22, 23] for solving matrix equations; these methods are innovative and computationally efficient.

The common and practical method to tackle the low rank approximation Problem 1 is the singular value decomposition (SVD) (e.g., [1]). We briefly review the SVD method as follows. Minimizing (8) by a rank-$k$ matrix $X$ is known [5, Page 69] to satisfy
$$XB=(A)_{k}, \tag{9}$$
where $(\cdot)_{k}$ denotes rank-$k$ singular value decomposition truncation; that is, if the following SVD holds,
$$A=P\Sigma Q^{T}=\sum_{i}\sigma_{i}p_{i}q_{i}^{T},\qquad \sigma_{1}\ge\sigma_{2}\ge\cdots, \tag{10}$$
then $(A)_{k}=\sum_{i=1}^{k}\sigma_{i}p_{i}q_{i}^{T}$. If the matrix $B$ is square and nonsingular, then by (9) we obtain that the solution of Problem 1 is
$$X=(A)_{k}B^{-1}. \tag{11}$$
The SVD method has two disadvantages: (1) it requires the matrix $B$ to be square and nonsingular; (2) in order to obtain the solution (11), we must compute the inverse matrix of $B$, which is very expensive.
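A direct NumPy transcription of (9)–(11) for square nonsingular $B$ looks as follows; the function name and the test data are ours, for illustration only.

```python
import numpy as np

def svd_method(A, B, k):
    """Rank-k minimizer of ||X B - A||_F for square nonsingular B:
    X = (A)_k B^{-1}, cf. (9)-(11)."""
    P, s, Qt = np.linalg.svd(A, full_matrices=False)
    A_k = (P[:, :k] * s[:k]) @ Qt[:k, :]   # rank-k SVD truncation (A)_k
    return A_k @ np.linalg.inv(B)          # requires B to be nonsingular

rng = np.random.default_rng(0)
A, B = rng.standard_normal((6, 5)), rng.standard_normal((5, 5))
X = svd_method(A, B, k=2)
print(np.linalg.matrix_rank(X))  # 2
```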

In this paper, we develop a new method to solve the low rank approximation Problem 1 which avoids the disadvantages of the SVD method. We first transform Problem 1 into the problem of finding a fixed rank solution of a matrix equation and then use the generalized singular value decomposition (GSVD) to solve the latter. Based on these results, we derive a sufficient condition for the existence of a solution of Problem 1, and the analytical expression of the solution is given. A numerical algorithm is proposed to compute the solution. Numerical examples are used to illustrate the algorithm: the first is artificial and shows that the new algorithm is feasible for solving Problem 1; the second is a simulation, which shows that the new algorithm can be used to realize image compression.

2. Main Results

In this section, we give a sufficient condition and an analytical expression for the solution of Problem 1 by transforming Problem 1 into the problem of finding a fixed rank solution of a matrix equation. Finally, we establish an algorithm for solving Problem 1.

Lemma 2. A matrix $X\in\mathbb{R}^{m\times p}$ is a solution of Problem 1 if and only if it is a solution of the following rank-constrained matrix equation:
$$XBB^{T}=AB^{T},\qquad \operatorname{rank}(X)=k. \tag{12}$$

Proof. It is easy to verify that a matrix $X\in\mathbb{R}^{m\times p}$ is a solution of Problem 1 if and only if $X$ satisfies the following two equalities simultaneously:
$$\|XB-A\|_{F}=\min, \tag{13}$$
$$\operatorname{rank}(X)=k. \tag{14}$$
Since the normal equation of the least squares problem (13) is
$$XBB^{T}=AB^{T}, \tag{15}$$
and noting that the least squares problem (13) and its normal equation (15) have the same solution sets, (13) and (14) can be equivalently written as
$$XBB^{T}=AB^{T},\qquad \operatorname{rank}(X)=k, \tag{16}$$
which also implies that Problem 1 is equivalent to (12).

Remark 3. From Lemma 2 it follows that Problem 1 is equivalent to (12); hence we can solve Problem 1 by finding a fixed rank solution of the matrix equation $XBB^{T}=AB^{T}$.
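A quick numerical sanity check of the step from (13) to (15), with arbitrary random data (illustrative only): the unconstrained least squares minimizer of $\|XB-A\|_{F}$ satisfies the normal equation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
B = rng.standard_normal((8, 5))

# Unconstrained least squares min ||X B - A||_F, solved columnwise
# via the equivalent system B^T X^T = A^T.
X = np.linalg.lstsq(B.T, A.T, rcond=None)[0].T

# X satisfies the normal equation X B B^T = A B^T, cf. (15).
assert np.allclose(X @ B @ B.T, A @ B.T)
```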

Now we will use the generalized singular value decomposition (GSVD) to solve (12). Set
$$M=BB^{T}\in\mathbb{R}^{p\times p},\qquad N=AB^{T}\in\mathbb{R}^{m\times p}. \tag{17}$$
The GSVD of the matrix pair $(M,N)$ is given by (see [24])
$$M=U\Sigma_{M}W,\qquad N=V\Sigma_{N}W, \tag{18}$$
where $U\in\mathcal{OR}^{p\times p}$, $V\in\mathcal{OR}^{m\times m}$, $W\in\mathbb{R}^{p\times p}$ is a nonsingular matrix, $t=\operatorname{rank}\begin{pmatrix}M\\N\end{pmatrix}$, $r=t-\operatorname{rank}(N)$, $s=\operatorname{rank}(M)+\operatorname{rank}(N)-t$, and $\Sigma_{M}\in\mathbb{R}^{p\times p}$ and $\Sigma_{N}\in\mathbb{R}^{m\times p}$ are block matrices, with $I_{r}$ and $I_{t-r-s}$ identity matrices and the $O$'s zero matrices of appropriate sizes:
$$\Sigma_{M}=\begin{pmatrix}I_{r}&O&O\\O&S_{M}&O\\O&O&O\end{pmatrix},\qquad \Sigma_{N}=\begin{pmatrix}O&S_{N}&O&O\\O&O&I_{t-r-s}&O\\O&O&O&O\end{pmatrix}, \tag{19}$$
where the column blocks of $\Sigma_{M}$ have widths $r$, $s$, $p-r-s$, the column blocks of $\Sigma_{N}$ have widths $r$, $s$, $t-r-s$, $p-t$, and
$$S_{M}=\operatorname{diag}(\alpha_{r+1},\dots,\alpha_{r+s}),\qquad S_{N}=\operatorname{diag}(\beta_{r+1},\dots,\beta_{r+s}), \tag{20}$$
with $1>\alpha_{r+1}\ge\cdots\ge\alpha_{r+s}>0$, $0<\beta_{r+1}\le\cdots\le\beta_{r+s}<1$, and $\alpha_{i}^{2}+\beta_{i}^{2}=1$ for $i=r+1,\dots,r+s$.

By (17) and (18), we have
$$XM=N\Longleftrightarrow XU\Sigma_{M}W=V\Sigma_{N}W\Longleftrightarrow\big(V^{T}XU\big)\Sigma_{M}=\Sigma_{N}. \tag{21}$$
Set
$$Z=V^{T}XU, \tag{22}$$
and $Z$ is partitioned as follows:
$$Z=\begin{pmatrix}Z_{1}&Z_{2}&Z_{3}\end{pmatrix},\qquad Z_{1}\in\mathbb{R}^{m\times r},\quad Z_{2}\in\mathbb{R}^{m\times s},\quad Z_{3}\in\mathbb{R}^{m\times(p-r-s)}; \tag{23}$$
then
$$Z\Sigma_{M}=\begin{pmatrix}Z_{1}&Z_{2}S_{M}&O\end{pmatrix}. \tag{24}$$

Therefore, by (21) and (24), we have
$$Z_{1}=O,\qquad Z_{2}S_{M}=\begin{pmatrix}S_{N}\\O\end{pmatrix},\qquad \begin{pmatrix}O&O\\I_{t-r-s}&O\\O&O\end{pmatrix}=O; \tag{25}$$
that is to say, the matrix equation $XM=N$ has a solution if and only if
$$t=r+s, \tag{26}$$
that is, the identity block $I_{t-r-s}$ in $\Sigma_{N}$ is void. (For $M=BB^{T}$ and $N=AB^{T}$, condition (26) is always satisfied, because the normal equation (15) is always consistent.) According to (22), the expression of the solution is
$$X=VZU^{T}, \tag{27}$$
where
$$Z=\begin{pmatrix}O&\begin{pmatrix}S_{N}S_{M}^{-1}\\O\end{pmatrix}&Z_{3}\end{pmatrix} \tag{28}$$
and $Z_{3}\in\mathbb{R}^{m\times(p-r-s)}$ is arbitrary.

Partition
$$Z_{3}=\begin{pmatrix}Z_{13}\\Z_{23}\end{pmatrix},\qquad Z_{13}\in\mathbb{R}^{s\times(p-r-s)},\quad Z_{23}\in\mathbb{R}^{(m-s)\times(p-r-s)}.$$
By (26)–(28), noting that $Z_{13}$ and $Z_{23}$ are arbitrary matrices and that $\operatorname{rank}(X)=\operatorname{rank}(Z)$ (the matrices $U$ and $V$ are orthogonal), we have
$$\operatorname{rank}(X)=\operatorname{rank}\begin{pmatrix}S_{N}S_{M}^{-1}&Z_{13}\\O&Z_{23}\end{pmatrix}=s+\operatorname{rank}(Z_{23})=\operatorname{rank}(AB^{T})+\operatorname{rank}(Z_{23}), \tag{29}$$
and therefore
$$\operatorname{rank}(AB^{T})\le\operatorname{rank}(X)\le\min\{m,\operatorname{rank}(AB^{T})+p-\operatorname{rank}(BB^{T})\}, \tag{30}$$
where we have used the facts that, under (26), $s=\operatorname{rank}(N)=\operatorname{rank}(AB^{T})$ and $p-r-s=p-\operatorname{rank}(M)=p-\operatorname{rank}(BB^{T})$.

Hence, if
$$\operatorname{rank}(AB^{T})\le k\le\min\{m,\operatorname{rank}(AB^{T})+p-\operatorname{rank}(BB^{T})\}, \tag{31}$$
then (12) has a solution, and the expressions of the solution are given by (26)–(28); that is,
$$X=V\begin{pmatrix}O&S_{N}S_{M}^{-1}&Z_{13}\\O&O&Z_{23}\end{pmatrix}U^{T}, \tag{32}$$
where $Z_{13}$ is an arbitrary matrix and $Z_{23}$ is chosen such that
$$\operatorname{rank}(Z_{23})=k-\operatorname{rank}(AB^{T}). \tag{33}$$
Noting that the low rank approximation Problem 1 is equivalent to (12) (i.e., Lemma 2), we obtain the following.

Theorem 4. If
$$\operatorname{rank}(AB^{T})\le k\le\min\{m,\operatorname{rank}(AB^{T})+p-\operatorname{rank}(BB^{T})\}, \tag{34}$$
then Problem 1 has a solution, and the expressions of the solution are given by
$$X=V\begin{pmatrix}O&S_{N}S_{M}^{-1}&Z_{13}\\O&O&Z_{23}\end{pmatrix}U^{T}, \tag{35}$$
where $Z_{13}\in\mathbb{R}^{s\times(p-r-s)}$ is an arbitrary matrix and $Z_{23}\in\mathbb{R}^{(m-s)\times(p-r-s)}$ is chosen such that
$$\operatorname{rank}(Z_{23})=k-\operatorname{rank}(AB^{T}). \tag{36}$$

Remark 5. In contrast with (11), the solution expression (35) does not require the matrix $B$ to be square and nonsingular, and it does not involve computing the inverse of $B$.

Based on Theorem 4, we can establish an algorithm for finding the solution of Problem 1.

Algorithm 6. (1) Input the matrices $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{p\times n}$ and an integer $k$ satisfying (34); (2) compute the GSVD of the matrix pair $(BB^{T},AB^{T})$ according to (18); (3) choose $Z_{13}$ and $Z_{23}$ such that $\operatorname{rank}(Z_{23})=k-\operatorname{rank}(AB^{T})$; (4) compute the solution $X$ according to (35).
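NumPy and SciPy ship no GSVD routine (LAPACK provides one, e.g., dggsvd3, but it is not exposed), so the sketch below realizes steps (2)–(4) of Algorithm 6 through an equivalent construction: since (12) is consistent, its general solution is $X=NM^{\dagger}+Z(I-MM^{\dagger})$ with $M=BB^{T}$, $N=AB^{T}$, and $Z$ arbitrary, and, as in (29), the free part lifts the rank from $\operatorname{rank}(AB^{T})$ to $k$. This is a sketch under our stated assumptions, not the paper's GSVD implementation; all names are ours.

```python
import numpy as np

def low_rank_solution(A, B, k, seed=None):
    """Rank-k minimizer of ||X B - A||_F via the normal equation
    X (B B^T) = A B^T, cf. (12); the GSVD of Theorem 4 is replaced by
    the equivalent general solution X = N M^+ + Z (I - M M^+)."""
    rng = np.random.default_rng(seed)
    m, p = A.shape[0], B.shape[0]
    M, N = B @ B.T, A @ B.T                 # cf. (17)
    Mp = np.linalg.pinv(M)
    rN = np.linalg.matrix_rank(N)
    rM = np.linalg.matrix_rank(M)
    if not (rN <= k <= min(m, rN + p - rM)):
        raise ValueError("condition (34) is violated")
    X0 = N @ Mp                             # minimum-norm solution, rank rN
    # Free part lives in the null space of M and lifts the rank
    # from rN to k, cf. (29).
    proj = np.eye(p) - M @ Mp
    Z = rng.standard_normal((m, k - rN)) @ rng.standard_normal((k - rN, p))
    return X0 + Z @ proj

rng = np.random.default_rng(0)
A, B = rng.standard_normal((6, 5)), rng.standard_normal((8, 5))
X = low_rank_solution(A, B, k=6, seed=1)
print(np.linalg.matrix_rank(X))            # 6
print(np.allclose(X @ B @ B.T, A @ B.T))   # True: X solves (12)
```

Note that, as in Theorem 4, the rank-lifting part contributes nothing to the residual, because it is annihilated by $M$.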

3. Numerical Experiments

In this section, we first use a simple artificial example to illustrate that Algorithm 6 is feasible for solving Problem 1, and then we use a simulation to show that Algorithm 6 can be used to realize image compression. The experiments were done with MATLAB 7.6 on a 64-bit Intel Pentium Xeon 2.66 GHz machine.

Example 7. Consider Problem 1 with

We compute the GSVD of the matrix pair $(BB^{T},AB^{T})$ according to (18) as follows: where

It is easy to verify that that is, if , then the conditions of Theorem 4 are satisfied. Setting , according to (35), we obtain that the solution of Problem 1 is

Setting , according to (35), we obtain that the solution of Problem 1 is

Example 7 shows that Algorithm 6 is feasible for solving Problem 1. However, the SVD method in [1] cannot be used to solve Example 7, because $B$ is not a square matrix.

Example 8. We will use the generalized Karhunen-Loeve transform, based on Algorithm 6 and the SVD method in [1], respectively, to realize image compression. Figure 1(a) is the test image. We separate it into 1024 blocks such that each block has 64 pixels. Let $g_{ij}$ and $n_{ij}$ ($i=1,\dots,64$; $j=1,\dots,1024$) be the values of the image and of a Gaussian noise (generated by the MATLAB function randn) at the $i$th pixel in the $j$th block, respectively.
The test image is processed block by block. Therefore, we can assume that the blocked image space is the 64-dimensional real vector space $\mathbb{R}^{64}$. The $j$th block of the original image is expressed by the $j$th vector
$$x_{j}=(g_{1j},g_{2j},\dots,g_{64,j})^{T},\qquad j=1,\dots,1024. \tag{47}$$
Hence the original image is expressed by the 1024 64-dimensional vectors $x_{1},\dots,x_{1024}$. The noise is similarly expressed by $n_{1},\dots,n_{1024}$, where
$$n_{j}=(n_{1j},n_{2j},\dots,n_{64,j})^{T}. \tag{48}$$
Figure 1(b) is the noisy image $y_{1},\dots,y_{1024}$, where
$$y_{j}=x_{j}+n_{j}. \tag{49}$$
By (47), (49), (2), (4), and the definition of the covariance matrix, we get the matrices $C_{yy}^{1/2}$ and $C_{xy}(C_{yy}^{1/2})^{\dagger}$ of (7). Then we use Algorithm 6 and the SVD method in [1] to realize the image compression, respectively; the experimental results are shown in Figures 2 and 3 and in Table 1.
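For the data pipeline of this example, a rough NumPy sketch follows. A random array stands in for the test image, and the 8 by 8 block shape and the noise level are our assumptions; only the structure of 1024 blocks of 64 pixels comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the test image: a 256x256 array splits into
# 1024 blocks of 64 pixels (8x8 blocks are our assumption).
img = rng.integers(0, 256, size=(256, 256)).astype(float)

# Columns of X are the block vectors x_j of (47).
blocks = img.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3).reshape(1024, 64)
X = blocks.T                                   # 64 x 1024
N_noise = 10.0 * rng.standard_normal(X.shape)  # n_j of (48), randn-like noise
Y = X + N_noise                                # y_j of (49)

# Sample covariances, then the two matrices appearing in (7).
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
C_xy = Xc @ Yc.T / X.shape[1]
C_yy = Yc @ Yc.T / X.shape[1]
w, Q = np.linalg.eigh(C_yy)
B = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T  # C_yy^{1/2}
A = C_xy @ np.linalg.pinv(B)                         # C_xy (C_yy^{1/2})^+
```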
Figure 2 illustrates that Algorithm 6 can be used to realize image compression. Although it is difficult to see the difference between Figure 2 and Figure 3 (the latter compressed by the SVD method in [1]), Table 1 shows that the execution time of Algorithm 6 is less than that of the SVD method at the same rank. This shows that our algorithm outperforms the SVD method in execution time.

4. Conclusion

The low rank approximation Problem 1 arising in the generalized Karhunen-Loeve transform has been studied in this paper. We first transformed Problem 1 into the problem of finding a fixed rank solution of a matrix equation and then used the generalized singular value decomposition (GSVD) to solve it. Based on these results, we derived a sufficient condition for the existence of a solution, and the analytical expression of the solution was also given. Finally, numerical experiments showed that the new algorithm is feasible and effective.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (11101100; 11226323; 11261014; and 11171205), the Natural Science Foundation of Guangxi Province (2012GXNSFBA053006; 2013GXNSFBA019009; and 2011GXNSFA018138), the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the Natural Science Foundation of Shanghai (11ZR1412500), the Ph.D. Programs Foundation of Ministry of Education of China (20093108110001), the Discipline Project at the corresponding level of Shanghai (A. 13-0101-12-005), and Shanghai Leading Academic Discipline Project (J50101).