Abstract

Kronecker product decomposition is widely applied in fields such as particle physics, signal processing, image processing, semidefinite programming, quantum computing, and matrix time series analysis. In this paper, a new method of Kronecker product decomposition is proposed. Theoretical results ensure that the new method is convergent and stable. Simulation results show that the new method is far faster than the existing optimization method. The new method is well suited for exact decomposition, fast decomposition, big-matrix decomposition, and online decomposition of Kronecker products. Finally, directions for extending the new method are discussed.

1. Introduction

The Kronecker product, as a special case of the tensor product, is a concept originating in group theory that has been successfully applied in various fields such as particle physics, signal processing, image processing, semidefinite programming, and quantum computing [1]. Ford and Tyrtyshnikov [2] combined the discrete wavelet transform approximation with the approximation by a sum of Kronecker products to enable the solution of very large dense linear systems by an iterative technique using a Kronecker product approximation represented in a wavelet basis. Yang et al. [3] studied the generalized Kronecker product linear system associated with a class of consecutive-rank-descending matrices arising from bivariate interpolation problems. Muñoz-Matute et al. [4] introduced an algorithm to speed up the computation of the φ-function action over vectors for two-dimensional (2D) matrices expressed as a Kronecker sum using Kronecker products of one-dimensional matrices. For more literature, see Rifa and Zinoviev [5], Enríquez and Rosas-Ortiz [6], Hao et al. [7], Marco et al. [8], Chen and Kressner [9], and the references cited therein.

Definition 1 (see [10]). Assume matrices A = (a_ij) of dimension m × n and B of dimension p × q; then the mp × nq block matrix whose (i, j) block is a_ij B is called the Kronecker product of A and B, denoted by A ⊗ B, that is,

A ⊗ B = (a_ij B), i = 1, …, m, j = 1, …, n.

The Kronecker product decomposition of a matrix C is the factorization of C into the Kronecker product of two matrices, C = A ⊗ B, where the dimensions of A and B are given.
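To make Definition 1 concrete, the block structure can be computed directly from the formula a_ij B. The sketch below uses plain Python nested lists; the function name kron is ours, not from the paper (NumPy users would call numpy.kron).

```python
def kron(A, B):
    """Kronecker product of nested-list matrices.

    If A is m x n and B is p x q, the result is mp x nq, with
    block (i, j) equal to A[i][j] * B, as in Definition 1.
    """
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    C = [[0] * (n * q) for _ in range(m * p)]
    for i in range(m):
        for j in range(n):
            for s in range(p):
                for t in range(q):
                    C[i * p + s][j * q + t] = A[i][j] * B[s][t]
    return C

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(kron(A, B))
# -> [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
```

Each 2 × 2 block of the output is a_ij B, so the top-left block is 1·B and the top-right block is 2·B.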
As the theory of matrix time series analysis develops, we often need to deal with Kronecker product decomposition; see Chen et al. [11] and Wu and Hua [12]. For example, consider the first-order autoregressive model for a centralized matrix time series in the bilinear form

X_t = A X_{t-1} B' + E_t,

where X_t is an m × n matrix time series, E_t is an m × n matrix white noise, and A (m × m) and B (n × n) are two constant matrices. Vectorizing by columns gives vec(X_t) = (B ⊗ A) vec(X_{t-1}) + vec(E_t), so the moment estimation method readily yields an estimate Φ̂ of the coefficient matrix B ⊗ A, where vec(·) is the vectorization of a matrix by columns and T is the length of the observation sequence. Then, we need to solve for A and B, which is a problem of Kronecker product decomposition. That is, we need to solve for A and B such that

B ⊗ A = Φ̂. (5)

As far as we know, there is one existing method to solve (5), namely the optimization method [11]. That is, (5) is transformed into the following minimum problem on matrices:

min_{A,B} ||Φ̂ − B ⊗ A||_F, (6)

where ||·||_F is the Frobenius norm of a matrix. However, it is very slow to solve (6) when Φ̂ is a big matrix.
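The reason the moment estimate targets a Kronecker product is the standard identity vec(A X B') = (B ⊗ A) vec(X) for column-major vectorization. A quick numeric check of this identity in plain Python (the helper names are ours):

```python
def matmul(P, Q):
    # Standard matrix product of nested-list matrices.
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def vec(M):
    # Column-major vectorization: stack the columns of M.
    return [M[i][j] for j in range(len(M[0])) for i in range(len(M))]

def kron(A, B):
    # Kronecker product of nested-list matrices.
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

A = [[1, 2], [3, 4]]
X = [[5, 6], [7, 8]]
B = [[1, 0], [2, 1]]
left = vec(matmul(matmul(A, X), transpose(B)))   # vec(A X B')
right = matvec(kron(B, A), vec(X))               # (B ⊗ A) vec(X)
print(left == right)  # -> True
```

Because the bilinear MAR(1) recursion becomes an ordinary VAR(1) in vec(X_t) with coefficient B ⊗ A, any consistent VAR estimate of that coefficient must then be factored back, which is exactly problem (5).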
In this paper, we propose a new method of Kronecker product decomposition. The method is convergent and stable. Moreover, the new method is far simpler and faster than the optimization method (6).

2. The New Method of Kronecker Product Decomposition

In this section, we propose a new method to solve for A and B satisfying

A ⊗ B = C, (7)

where C is given.

For (7) and any nonzero constant k, it follows that

(kA) ⊗ (B/k) = A ⊗ B = C. (8)

That is, the Kronecker product decomposition in (7) is not unique. Thus, we add the constraint condition that the maximum absolute value of the elements of B equals 1, and the first element of B attaining this maximum is positive. (9)

For the sake of convenience, for any matrix M = (m_st) we denote

m(M) = max_{s,t} |m_st|,

that is, m(M) is the maximum of the absolute values of the elements of M.

For example, let B = [[1, -3], [2, 0]]; then m(B) = 3.

For any mp × nq dimensional matrix C, if C = 0, then we can take A = 0 and B = 1, where 1 is the p × q dimensional matrix each element of which is one. Thus, we always assume C ≠ 0 for Kronecker product decomposition.

(i) Steps of Kronecker product decomposition with the constraint condition (9):

(1) Block the matrix C into C = (C_ij) and denote C_ij = (c_st^(ij)), where C_ij has the same dimensions as B for all i = 1, …, m and j = 1, …, n.

(2) Form a matrix D by adding or subtracting the blocks C_ij (the choice of sign for each block is discussed in Appendix C), and denote m = m(D), where m(·) means taking the maximum value of the absolute values of the elements of the matrix.

(3) Take B = D/m, with the overall sign chosen so that the constraint condition (9) holds.

(4) Denote Λ = {(s, t) : b_st ≠ 0}; then a_ij is the average of the ratios c_st^(ij)/b_st over (s, t) ∈ Λ, where the average means taking the arithmetic mean, and c_st^(ij) and b_st are the elements of C_ij and B, respectively, for each i and j.
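The steps above can be sketched as follows. For Steps (2)–(3) we use a simple stand-in: take B as the block of C with the largest absolute entry, normalized so that its maximum absolute value is 1 and the first element attaining it is positive, consistent with the constraint condition (9); Step (4) then averages the elementwise ratios over the nonzero entries of B. This is a minimal sketch under our own simplification of the block-combination rule, not the paper's exact Steps (2)–(3), and the function names are ours.

```python
def kron(A, B):
    # Kronecker product of nested-list matrices (used to build a test case).
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def kron_decompose(C, p, q):
    """Decompose C (mp x nq) as A ⊗ B with max |b_st| = 1 (constraint (9))."""
    m, n = len(C) // p, len(C[0]) // q
    # Step (1): block C into m x n blocks, each of the dimensions of B.
    blocks = {(i, j): [[C[i * p + s][j * q + t] for t in range(q)]
                       for s in range(p)] for i in range(m) for j in range(n)}

    def maxabs(M):
        return max(abs(x) for row in M for x in row)

    # Simplified stand-in for Steps (2)-(3): normalize the dominant block.
    D = max(blocks.values(), key=maxabs)
    mx = maxabs(D)
    # Sign convention: the first element attaining the maximum is positive.
    sign = next(1 if x > 0 else -1
                for row in D for x in row if abs(x) == mx)
    B = [[x / (sign * mx) for x in row] for row in D]

    # Step (4): average the elementwise ratios over the nonzero entries of B.
    Lam = [(s, t) for s in range(p) for t in range(q) if B[s][t] != 0]
    A = [[sum(blocks[i, j][s][t] / B[s][t] for s, t in Lam) / len(Lam)
          for j in range(n)] for i in range(m)]
    return A, B

A0 = [[2, -1], [0, 3]]
B0 = [[1, 2], [0, 4]]
C = kron(A0, B0)
A1, B1 = kron_decompose(C, 2, 2)
# Exact case: the Kronecker product of the recovered factors equals C.
```

Note that the recovered pair is (4·A0, B0/4), a rescaling of the original factors that satisfies (9), and its Kronecker product reproduces C exactly.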

For example, consider decomposing C into the Kronecker product of A and B, where C is a given matrix of the appropriate dimensions. It follows from the steps of Kronecker product decomposition with the constraint condition (9) that

Then,

Thus,

3. Theoretical Properties of the New Method

As for the new method of Kronecker product decomposition, we present some of its properties in this section. First, we show that the new method is always applicable.

Proposition 2. The new method of Kronecker product decomposition is well defined. That is, the quantity m in Step (3) is nonzero, and the index set Λ in Step (4) is not empty.

The proof of the proposition is presented in Appendix A.

Theorem 3. If C = A ⊗ B is not a null matrix, where A and B are m × n and p × q dimensional matrices, respectively, then the new method of Kronecker product decomposition decomposes C into the Kronecker product of two factors whose Kronecker product equals C exactly.

The proof of the theorem is presented in Appendix B.

Corollary 4. If C = A ⊗ B is not a null matrix, where B is a p × q dimensional matrix satisfying the constraint condition (9), then the new method of Kronecker product decomposition recovers exactly the factors A and B.

Theorem 3 shows that the new method of Kronecker product decomposition obtains the exact solution if C exactly equals a Kronecker product of A and B. However, in practice C is obtained by some estimation method and is affected by random disturbance. That is,

C = A ⊗ B + E,

where E is a matrix-valued white noise. That E is a matrix-valued white noise means that vec(E) is a vector-valued white noise.

Theorem 5. If C = A ⊗ B + E is not a null matrix, where A and B are m × n and p × q dimensional matrices, respectively, A is a nonzero matrix, and the disturbance E is sufficiently small, then the factors produced by the new method of Kronecker product decomposition are close to A and B, with errors controlled by E.

The proof of the theorem is presented in Appendix C.

Theorem 5 shows that the new method of Kronecker product decomposition is effective; that is, the results of the decomposition are close to the original matrices as long as the disturbance is not too large.

4. Simulation

For the sake of convenience, we have compiled a MATLAB program for the new method of Kronecker product decomposition in the appendix, named "KronDecomposition.m," which is based on MATLAB R2020b.

4.1. Simulation for Convergence

Consider the following matrix C, which is decomposed into a Kronecker product by our "KronDecomposition.m"; at this time, the Kronecker product decomposition has no error.

In the following, we consider the Kronecker product decomposition of C with a random disturbance, that is, C is replaced by A ⊗ B + δE, where the elements of E follow the uniform distribution or the standard normal distribution. For each disturbance level δ, we simulate repeatedly and compute the mean, standard deviation, and maximum of the absolute value of the maximum error, as well as the running time of the decomposition, where the Kronecker product decomposition is performed by our "KronDecomposition.m"; see Table 1. The corresponding results where the Kronecker product decomposition is performed by the optimization method (6) are presented in Table 2.

Table 1 shows that the mean, standard deviation, and maximum of the absolute value of the maximum error by the new method decrease as the disturbance decreases, whether the disturbance obeys a uniform distribution or a normal distribution, which is consistent with Theorem 5.

Comparing Tables 1 and 2 shows that the maximum absolute errors by the new method are slightly greater than those by the optimization method (6). However, the computing time of the new method is far less than that of the optimization method (6).

4.2. Simulation for Computing Speed

In this subsection, we present a comparison of the new method and the optimization method (6) in terms of computing speed. We consider the Kronecker product decomposition of matrices C with different dimensions as follows:

For the sake of simplicity, we set the dimensions as follows, where one choice is made only to make the optimization method (6) easier to apply. Also, each row of A is [1, 2, 3, 4] and each row of B is [1, 1]. For example,
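Test matrices of this structure are easy to generate: with every row of A equal to [1, 2, 3, 4] and every row of B equal to [1, 1], the product C = A ⊗ B follows directly from the definition. A small sketch in plain Python (the row count k is our own illustrative choice, since the paper's exact dimensions are not reproduced here):

```python
def kron(A, B):
    # Kronecker product of nested-list matrices.
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

k = 3  # number of rows of A and of B; a small illustrative choice
A = [[1, 2, 3, 4] for _ in range(k)]
B = [[1, 1] for _ in range(k)]
C = kron(A, B)  # C has k*k rows and 4*2 columns
print(C[0])
# -> [1, 1, 2, 2, 3, 3, 4, 4]
```

Every row of C repeats each entry of [1, 2, 3, 4] twice, since each element a_ij multiplies a row [1, 1] of B.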

We decompose C into a Kronecker product of A and B by the new method "KronDecomposition.m" and by the optimization method (6) for each dimension setting, and present the maximum errors and running times of the decomposition by both methods in Table 3. We then plot the running times of the new method "KronDecomposition.m" and the optimization method (6) in Figure 1.

Table 3 shows that there is no error when using the new method "KronDecomposition.m" to decompose C into a Kronecker product for all dimension settings, and the error of the optimization method (6) is also very small. Moreover, Table 3 and Figure 1 show that the running time of the new method "KronDecomposition.m" is far less than that of the optimization method (6) in all cases. In summary, for decomposing C into a Kronecker product, the new method "KronDecomposition.m" is much better than the optimization method (6).

5. Applications of Kronecker Product Decomposition

In this section, we consider the daily closing prices and the daily volumes of China Overseas Holdings Group Limited (Stock code: 000046), Shaanxi International Trust Company Limited (Stock code: 000563), and CNPC Capital Company Limited (Stock code: 000617), abbreviated as Stock 000046, Stock 000563, and Stock 000617, respectively. The data are downloaded from the China Stock Market and Accounting Research Database (CSMAR), and the time window is from July 6, 2018 to July 5, 2023, which includes 1205 complete records.

For the sake of clarity, we denote the time series by X_t, where the entries of X_t are the daily closing price and daily volume of Stock 000046, the daily closing price and daily volume of Stock 000563, and the daily closing price and daily volume of Stock 000617.

In the following, we consider the logarithmic rates (log rates) of the daily closing prices and daily volumes of the three stocks. Denote the matrix of log rates by R_t, where each entry is the log rate r_t = ln x_t − ln x_{t−1} of the corresponding series in X_t.
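For a positive series x_t, the log rate is r_t = ln x_t − ln x_{t−1} = ln(x_t / x_{t−1}). A minimal sketch (the function name and the sample values are ours):

```python
import math

def log_rates(x):
    """Logarithmic rates r_t = ln(x_t) - ln(x_{t-1}) of a positive series."""
    return [math.log(cur / prev) for prev, cur in zip(x, x[1:])]

closing_prices = [10.0, 10.5, 10.2, 10.2]  # hypothetical daily closes
r = log_rates(closing_prices)
```

A series of length T yields T − 1 log rates, which is why the time window of 1205 records produces a slightly shorter rate series.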

In order to obtain the first-order matrix autoregressive model, MAR(1), we use the conditional least squares method in Chen et al. [11] and obtain the estimated coefficient matrices, where E_t is a matrix white noise series.

Using the new method of Kronecker product decomposition, we obtain from (32) that

Thus, the MAR(1) model (32) follows in bilinear form, where E_t is a matrix white noise series.

6. Conclusion

A new method of Kronecker product decomposition is proposed, which is simple, convergent, stable, and fast. The new method is well suited for exact decomposition, fast decomposition, big-matrix decomposition, and online decomposition of Kronecker products.

Compared with the known method of Kronecker product decomposition, i.e., the optimization method, the new method computes far faster. If the matrix to be decomposed exactly equals a Kronecker product of two matrices, the new method quickly obtains the exact solution, whereas the known method incurs a small error. If the matrix to be decomposed does not equal a Kronecker product of two matrices, the error of the new method is slightly larger than that of the known method, but the new method computes far faster.

There are many directions in which to extend the scope of the new method. One possible extension is to use a weighted average instead of the arithmetic average in Step (4). Furthermore, the method can be applied in many fields such as group theory, particle physics, matrix time series analysis, and dynamic complex network modeling.

Appendix

A. Proof of Proposition 2

Assume that the quantity m in Step (3) equals zero. Then it follows from Step (2) that

By the recursive method, we can obtain that

Then C = 0, which contradicts the assumption C ≠ 0. That is, the quantity m in Step (3) is nonzero.

Furthermore, it follows from the constraint condition (9) that B has a nonzero element; hence the index set Λ in Step (4) is not empty.

B. Proof of Theorem 3

It follows from Step (1) that C is blocked into the blocks C_ij, where c_st^(ij) is the (s, t) element of C_ij. Without loss of generality, we assume the first element of A is nonzero; otherwise, we consider whether the next element equals zero, and so on. Owing to this, it follows from Step (2) that the signs of the blocks are determined for all i and j, where sgn is the sign function, i.e., sgn(x) = 1 for x > 0, sgn(0) = 0, and sgn(x) = −1 for x < 0.

Thus,

At this point, it follows from Step (3) that

And then,

C. Proof of Theorem 5

First, we block the matrix-valued white noise E into blocks E_ij, where each E_ij is a p × q dimensional matrix for all i and j. It follows from Step (1) that C is blocked accordingly, where e_st^(ij) is the element of E_ij.

Case C.1. The minimum of all elements of is greater than .
Noting that A is a nonzero matrix, we introduce the following notation. When the disturbance is sufficiently small, the stated bounds hold for every element of the corresponding blocks, and the required inequalities follow. In the following, we will show how the addition or subtraction used to compute the matrix in Step (2) is determined.
Denote the following auxiliary quantities. It yields from Step (2) that the stated expression holds, where we stipulate the convention given there. Denote the sign of the first element whose absolute value attains the maximum by s; then it follows from (C.6) that the sign of the first element with the largest absolute value in the combined matrix is also s. Furthermore, using a series of calculations, we can obtain the stated bound, where the penultimate equation uses the smallness of the disturbance. Thus, it yields from (C.13) and Step (3) that the bound on the estimate of B holds. When the disturbance is sufficiently small, it is easy to show the remaining estimates, and the bound on the estimate of A then follows.

Case C.2. The minimum of all elements of equals .
Similar to Case C.1, we can obtain the expression (C.18) from Step (2). In the following, we investigate the more explicit form of the quantity in (C.18).
First, it follows from Step (2) that the stated identity holds, and hence its equivalent form. Denote the auxiliary quantity; when the disturbance is sufficiently small, we have the claimed equality (C.23). In fact, the preliminary inequality is obvious. Noting that the bounds hold for all indices, the sign of each term must be the same as that of the corresponding element. It yields from (C.25) and (C.28) that (C.23) holds. Analogously, we obtain (C.29). Thus, it yields from (C.29) and Step (3) that the bound on the estimate of B holds. When the disturbance is sufficiently small, it is easy to show the remaining estimates, and the bound on the estimate of A then follows.

Data Availability

All data, models, and code generated or used during the study appear in the submitted article.

Conflicts of Interest

The author declares that there are no conflicts of interest.