Abstract

This paper proposes a novel second-order cone programming (SOCP) relaxation for a quadratic program with one quadratic constraint and several linear constraints (QCQP) that arises in various real-life applications. The new SOCP relaxation fully exploits the simultaneous matrix diagonalization technique, which has become an attractive tool in the area of quadratic programming. We first demonstrate that the new SOCP relaxation is as tight as the semidefinite programming (SDP) relaxation for the QCQP when the objective matrix and the constraint matrix are simultaneously diagonalizable. We then derive a spatial branch-and-bound algorithm based on the new SOCP relaxation to obtain a globally optimal solution. Extensive numerical experiments compare the new SOCP relaxation-based branch-and-bound algorithm with the SDP relaxation-based branch-and-bound algorithm. The computational results illustrate that the new SOCP relaxation achieves a good balance between bound quality and computational efficiency and thus leads to a highly efficient global algorithm.

1. Introduction

In this paper, we consider a quadratic program with one quadratic constraint and several linear constraints in the following form: where is the decision variable, is a symmetric matrix, and for , , and . Problem (1) covers many important combinatorial optimization and engineering problems, such as the binary quadratic programming problem [1], the max-cut problem [2], the quadratic knapsack problem [3, 4], the binary least-squares problem [5], the image processing problem [6], the multiuser detection problem [7], the project selection and resource distribution problems [8], the multisensor beamforming problem [9], and the system equilibrium problem [10]. It is known that if and are both positive semidefinite, then the problem becomes convex and can be solved efficiently by SOCP methods [11]. In this paper, we assume that is positive semidefinite while is not, and that the feasible region of (1) is bounded and has a nonempty relative interior.

As a quadratically constrained quadratic program, (1) is typically solved with branch-and-bound methods [12, 13]. Note that the tightness of the convex relaxation of (1) and the efficiency of solving it at each node are critical factors affecting the performance of a branch-and-bound algorithm. Thus, designing an effective convex relaxation of (1) is a hot topic in the literature. To the best of our knowledge, the SDP relaxation has become an attractive approach for obtaining good convex relaxations [14, 15]. Although the SDP relaxation is tight, it tremendously enlarges the dimension of the problem by lifting the original n-dimensional variable vector x to an n × n variable matrix. Consequently, solving the SDP relaxation of a large-scale problem can take a very long time. Hence, designing a convex relaxation that can be solved efficiently even for large-scale problems, while maintaining the strength of the relaxation, has been investigated in the literature [16, 17].

In this paper, we will develop a new SOCP relaxation for (1) employing the simultaneous matrix diagonalization technique. We first give the following definition.

Definition 1. and are called simultaneously diagonalizable (SD) if there exists a nonsingular matrix such that and are both diagonal matrices.
In the past few years, many studies have shown that, for some quadratically constrained quadratic programming problems, embedding the simultaneous matrix diagonalization technique into a convex relaxation may yield a tighter relaxation than one that does not use it. Ben-Tal and Den Hertog [18] showed that a quadratic program with one or two quadratic constraints has a hidden conic quadratic representation if the matrices in the quadratic forms are simultaneously diagonalizable. Jiang and Li [19] also applied simultaneous diagonalization techniques to solve a quadratic programming problem. Zhou and Xu [20] proposed a simultaneous diagonalization-based SOCP relaxation for convex quadratic programs with linear complementarity constraints. They also designed a new SOCP relaxation for the generalized trust-region problem and provided a sufficient condition under which the proposed SOCP relaxation is exact [21].
The main contributions of this paper are twofold.
(1) We first decompose the matrix according to the signs of its eigenvalues such that , where A and B are both positive semidefinite. Then, we propose a new SOCP relaxation via simultaneous diagonalization of the two positive semidefinite matrices and B. We further show that the new SOCP relaxation is as tight as the SDP relaxation when is positive definite or is negative semidefinite.
(2) We derive a spatial branch-and-bound algorithm based on the new SOCP relaxation, and extensive numerical results show that the proposed algorithm outperforms the branch-and-bound algorithm derived from the SDP relaxation. This implies that the new SOCP relaxation balances bound tightness and computing efficiency better than the SDP relaxation.
The rest of the paper is organized as follows. In Section 2, a new SOCP relaxation for (1) is introduced. We also show that the new SOCP relaxation is as tight as the SDP relaxation in certain cases. In Section 3, we propose a spatial branch-and-bound algorithm based on the new SOCP relaxation. Some numerical experiments are conducted to illustrate the effectiveness of the proposed algorithm in Section 4. Finally, some concluding remarks are provided in Section 5.
Notations. For two n × n real matrices and , represents the inverse of A and . For a real symmetric matrix X, and mean that X is positive semidefinite and positive definite, respectively. denotes the n-dimensional identity matrix, and e is the vector with all elements equal to 1. Given a vector , denotes the n × n diagonal matrix whose diagonal elements equal a.

2. A Simultaneous Diagonalization-Based SOCP Relaxation

In this section, we derive a new SOCP relaxation by exploiting a difference-of-convex (DC) decomposition together with the fact that two positive semidefinite matrices are simultaneously diagonalizable. First of all, we present two lemmas concerning simultaneous matrix diagonalization that play a central role in this paper.

Lemma 1 (see [22, 23]). If and are two symmetric matrices and , then there exists a nonsingular matrix such that and are both diagonal matrices.

Lemma 2 (see [22, 23]). If and are two positive semidefinite matrices, then there exists a nonsingular matrix such that and are both diagonal matrices.

Let r be the number of negative eigenvalues of , and without loss of generality, the first r eigenvalues of are supposed to be negative. Thus, where and in which , , are eigenvalues and , , are corresponding eigenvectors of .
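The eigenvalue-sign splitting described above can be sketched in a few lines of numpy (the names `P` and `N` for the two positive semidefinite parts are illustrative only; the paper uses its own symbols for this decomposition):

```python
import numpy as np

def dc_split(Q):
    """Split a symmetric matrix Q as Q = P - N, where P collects the
    nonnegative eigenvalues of Q and N the flipped negative ones, so
    that both P and N are positive semidefinite (a DC decomposition)."""
    w, V = np.linalg.eigh(Q)             # Q = V diag(w) V^T
    P = (V * np.maximum(w, 0.0)) @ V.T   # keep the eigenvalues >= 0
    N = (V * np.maximum(-w, 0.0)) @ V.T  # flip the eigenvalues < 0
    return P, N

Q = np.array([[1.0, 3.0],
              [3.0, -2.0]])              # an indefinite example
P, N = dc_split(Q)
# Q == P - N with both P and N positive semidefinite
```

Broadcasting `V * w` scales the j-th column of `V` by `w[j]`, which is exactly the product `V diag(w)`.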

Since , and , the proof of Lemma 2 [20] provides a method to find a nonsingular matrix such that and with , and , .
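As an illustration of the kind of construction such proofs use, the following numpy sketch handles the simpler case in which the first matrix is positive definite (the setting of Lemma 1 with a definite matrix; the genuinely semidefinite case of Lemma 2 needs extra care and is treated in [20]):

```python
import numpy as np

def simultaneous_diagonalize(A, B):
    """For A positive definite and B symmetric, return a nonsingular F
    with F.T @ A @ F = I and F.T @ B @ F = diag(w)."""
    L = np.linalg.cholesky(A)      # A = L @ L.T, L lower triangular
    Linv = np.linalg.inv(L)
    M = Linv @ B @ Linv.T          # symmetric; congruent to B
    w, V = np.linalg.eigh(M)       # M = V @ diag(w) @ V.T, V orthogonal
    F = Linv.T @ V                 # then F.T A F = V.T V = I
    return F, w

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
B = np.array([[1.0, 2.0], [2.0, -1.0]])  # symmetric, indefinite
F, w = simultaneous_diagonalize(A, B)
# F.T @ A @ F is the identity and F.T @ B @ F is diagonal
```

The congruence `F = L^{-T} V` simultaneously reduces `A` to the identity and `B` to a diagonal matrix, mirroring the role the nonsingular matrix plays in the reformulation below.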

Let , , and ; then, (1) can be reformulated as

Equation (2) still cannot be solved in polynomial time as the quadratic constraint remains nonconvex. Then, we derive a new SOCP relaxation for (1) by introducing auxiliary variables for ,:

In general, the new SOCP relaxation provides a weaker lower bound than the SDP relaxation. However, whereas the SDP relaxation lifts the original n-dimensional variable vector to an n × n variable matrix, the new SOCP relaxation only lifts it to an -dimensional variable vector, which implies that (3) can be solved much more quickly than (4). Therefore, the SOCP relaxation presents greater potential in some real-life applications.

Next, we will show that (3) is as tight as the SDP relaxation under certain circumstances. The SDP relaxation of (1) is [14]:

Suppose that the nonsingular matrix F from (3) also diagonalizes A. This implies that and are simultaneously diagonalizable. In such a case, we have and with for , for , and for according to Lemmas 1 and 2. Consequently, (3) can be reformulated as

Theorem 1. When and are simultaneously diagonalizable, (5) is as tight as (4).

Proof. On the one hand, if is a feasible solution of (5), then let and . It is easy to see that and, consequently, that is a feasible solution of (4). Hence, the optimal objective function value of (4) is no more than that of (5).
On the other hand, if is a feasible solution of (4), then we define , and for . implies for . Also, it follows that is a feasible solution of (5) and that the optimal objective function value of (5) is no more than that of (4).
Therefore, (5) is as tight as (4).

Proposition 1. When is positive definite or is negative semidefinite, (5) is as tight as (4).

Proof. If is positive definite or is negative semidefinite, then and are simultaneously diagonalizable according to Lemmas 1 and 2. Hence, (5) is as tight as (4) according to Theorem 1.

In what follows, we will use two simple examples to show that (5) is as tight as (4) under the conditions of Proposition 1, but (5) can be solved faster than (4).

Example 1. Consider the problem with and :The optimal objective function value of (4) is 7.9022, and the CPU time is 0.0448 seconds. Since is positive definite, we can find , , and . The optimal value of (5) is 7.9022, and the CPU time is 0.0196 seconds.

Example 2. Consider the problem with and :The optimal objective function value of (4) is −1, and the CPU time is 0.1364 seconds. Since is negative definite, we can find , , and . The optimal value of (5) is −1, and the CPU time is 0.0196 seconds.

3. A Spatial Branch-and-Bound Algorithm

In this section, we develop a spatial branch-and-bound algorithm for (1) based on (3). Kim and Kojima [24] pointed out that it is necessary to add appropriate constraints on the auxiliary variables to improve the lower bound. In order to strengthen the relaxation and design a branch-and-bound algorithm, we add reformulation-linearization technique (RLT) constraints to (3). Since the feasible region of (1) is bounded, there must exist a lower bound and an upper bound for each , . Therefore, we can add r RLT constraints for into (3).
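As a concrete illustration, for an auxiliary variable standing in for the square of a variable on a known interval, the standard RLT (secant) over-estimator takes the following form (a generic sketch of the technique; the paper's exact constraint set is not reproduced in this copy):

```python
def secant_upper_bound(y, lo, hi):
    """Standard RLT secant over-estimator of y**2 on [lo, hi]: the chord
    through (lo, lo**2) and (hi, hi**2).  It equals y**2 at both endpoints
    and lies above it in between, so z <= (lo + hi)*y - lo*hi is a valid
    linear constraint for the lifted variable z = y**2."""
    return (lo + hi) * y - lo * hi
```

Together with the conic constraint z ≥ y², this secant traps z between the parabola and its chord over the current box, which is what makes the bound improve as boxes shrink.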

The new SOCP relaxation with RLT constraints is in the form

The initial lower bound and upper bound of , are obtained by solving the following linear programming problems:

Taking Example 1 as an instance, the optimal value of (10) is 15.3976, which is better than the optimal value 7.9022 of (3).

Lemma 3. Let be an optimal solution of (10) over the initial box . If for , then is an optimal solution of (1).

Proof. If for , then y is a feasible solution of (2) whose objective value equals the lower bound given by (10). Consequently, is an optimal solution of (1).

According to Lemma 3, we can see that if is not an optimal solution, then there must exist some satisfying . Thus, we can select the index and split the initial box into two new boxes and with , , , , , and . Consequently, we generate two new nodes over and , respectively, in the branch-and-bound tree.
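The bisection step described above can be sketched as follows (a plain-Python sketch; `lo` and `hi` hold the current box bounds and `j` is the chosen branching index):

```python
def split_box(lo, hi, j):
    """Split the box [lo, hi] at the midpoint of its j-th edge,
    producing the two child boxes used as new branch-and-bound nodes."""
    mid = 0.5 * (lo[j] + hi[j])
    lo_left, hi_left = list(lo), list(hi)
    lo_right, hi_right = list(lo), list(hi)
    hi_left[j] = mid        # left child keeps the lower half of edge j
    lo_right[j] = mid       # right child keeps the upper half of edge j
    return (lo_left, hi_left), (lo_right, hi_right)
```

Only the j-th edge is halved; all other edges are inherited unchanged, so the two children partition the parent box exactly.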

Before describing the spatial branch-and-bound algorithm, we give the following definition.

Definition 2. For a given and a vector , let . If , then x is called an ε-feasible solution of (1). Let be the optimal objective function value of (1). If the ε-feasible solution x also satisfies , then x is called an ε-optimal solution of (1). Define the function
The proposed algorithm is presented in Algorithm 1.
Although the branch-and-bound algorithm based on (5) is not detailed here, it can be easily obtained by replacing (3) with (5) in the proposed algorithm.
Next, we will prove that Algorithm 1 converges after exploring finite nodes and returns an ε-optimal solution.

Require: An instance of (1) and a given error tolerance . Set iteration step , the upper bound and .
(1)Solve (11) for and .
(2)If (11) is infeasible, then
(3) (1) is infeasible and terminate.
(4)end if
(5)Solve (10) over for its optimal objective function value and optimal solution . Let and .
(6)Construct a set and insert into it.
(7)loop
(8)if then
(9)  return and terminate.
(10)end if
(11) Choose a node from , denoted as such that and remove it from .
(12)if , then
(13)  return and terminate.
(14)end if
(15) Set .
(16) Choose .
(17) Construct the box by setting , , and construct the box by setting , , .
(18)if (10) over is feasible, then
(19)  Solve (10) over for its optimal objective function value and optimal solution . Denote .
(20)  if , then
(21)    and .
(22)  end if
(23)  if , then
(24)   insert into .
(25)  end if
(26)end if
(27)if (10) over is feasible, then
(28)  Solve (10) over for its optimal objective function value and optimal solution . Denote .
(29)  if , then
(30)    and .
(31)  end if
(32)  if , then
(33)   insert into .
(34)  end if
(35)end if
(36)end loop
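Stripped of its specific subproblems, the control flow of Algorithm 1 is a best-first spatial branch-and-bound loop. The following Python skeleton is a hedged sketch of that control flow only: `lower_bound`, `feasible_value`, and `branch` are placeholder callables standing in for solving (10) over a box, evaluating a feasible point, and the box-splitting step, respectively.

```python
import heapq

def branch_and_bound(root_box, lower_bound, feasible_value, branch, eps=1e-6):
    """Generic best-first spatial branch-and-bound skeleton.
    lower_bound(box) -> (bound, point) from a convex relaxation;
    feasible_value(point) -> objective value of a feasible point (or None);
    branch(box, point) -> iterable of child boxes.
    Returns the incumbent value and point once the gap closes to eps."""
    ub, best = float("inf"), None
    lb0, pt0 = lower_bound(root_box)
    heap = [(lb0, 0, root_box, pt0)]   # counter breaks ties in the heap
    tic = 1
    while heap:
        lb, _, box, pt = heapq.heappop(heap)
        if ub - lb <= eps:             # smallest bound left: gap closed
            break
        val = feasible_value(pt)
        if val is not None and val < ub:
            ub, best = val, pt         # update the incumbent
        for child in branch(box, pt):
            clb, cpt = lower_bound(child)
            if clb < ub - eps:         # prune nodes that cannot improve
                heapq.heappush(heap, (clb, tic, child, cpt))
                tic += 1
    return ub, best
```

Because the heap always pops the node with the smallest lower bound, the popped bound is a valid global lower bound, so the loop may stop as soon as it is within ε of the incumbent.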

Lemma 4. Suppose that the node is chosen from in Line 11 of Algorithm 1 such that and . For any , there exists a such that if , then Algorithm 1 terminates in Line 13.

Proof. Let . For any , set ; then is an ε-feasible solution. Moreover, it follows that is an ε-optimal solution by Definition 2, and Algorithm 1 terminates in Line 13.

The proof of Lemma 4 implies that if , then is an ε-feasible solution. Thus, we aim to reduce in the branch-and-bound algorithm. Hence, we choose as the variable selection strategy.

Theorem 2. Algorithm 1 returns an ε-optimal solution after exploring at most nodes.

Proof. Algorithm 1 shows that if the algorithm does not terminate in Line 13, then a chosen box is split in half and two new boxes are generated. After exploring k nodes, the initial box would be split into boxes. If Algorithm 1 does not obtain an ε-optimal solution in Line 13, it is easy to check that, for each box among those boxes, with for . Otherwise, if there exists a such that and the -th edge has been selected as a branching direction, then, by Lemma 4, we conclude that is an ε-optimal solution. This contradicts the assumption that Algorithm 1 does not obtain an ε-optimal solution in Line 13 at node k. Hence, the volume of each box is no smaller than . Since the total volume of all the boxes is no more than that of the initial box , it is easy to check that, after exploring nodes, Algorithm 1 returns an ε-optimal solution.

4. Numerical Experiments

In this section, we compare the proposed algorithm with Algorithm 1 in [25], an SDP relaxation-based branch-and-bound algorithm (SDP_BB). Both algorithms are implemented in MATLAB R2013b on a PC with Windows 7, a 2.50 GHz Intel dual-core CPU, and 8 GB RAM. The SDP relaxations are solved by SeDuMi 1.02 [26], and the SOCP relaxations are solved by CPLEX 12.6.3. The error tolerance is set to .

The instances for the experiments are generated as follows. The entries of are integers uniformly drawn at random from the intervals for each . The elements of the matrices are integers uniformly drawn at random from the intervals (see [17]). Then, and are replaced by their symmetric parts. is decomposed as , where D is the diagonal matrix of eigenvalues and V is the orthogonal matrix whose columns are the corresponding eigenvectors. Let . We generate a vector with the entries of a uniformly drawn at random from the intervals . Then, we let , , , , and . Five instances are generated for each given problem size in Table 1. The number of explored nodes (exp nodes) and the CPU time in seconds (CPU time) are displayed for each algorithm in Table 1. The symbol “—” means the instance cannot be solved within the given maximum computing time of 10000 seconds.

Table 1 shows the following:
(1) The average time cost per explored node, i.e., , of the proposed algorithm SOCP_BB is much lower than that of SDP_BB for each instance, and this gap widens as n grows. This further illustrates that solving the SOCP relaxation is computationally cheaper than solving the SDP relaxation.
(2) For most instances, both the number of explored nodes and the CPU time of the proposed algorithm SOCP_BB are smaller than those of SDP_BB. This implies that the proposed SOCP relaxation offers a good trade-off between computing efficiency and bound quality for (1).

5. Conclusion

By exploiting the simultaneous diagonalization technique, we derive a new SOCP relaxation for (1). We then prove that the SOCP relaxation is as tight as the well-known SDP relaxation in certain cases. Furthermore, we design a spatial branch-and-bound algorithm based on the new SOCP relaxation. Finally, numerical experiments comparing it with the SDP relaxation-based branch-and-bound algorithm demonstrate the efficiency of the proposed method, and the promising results illustrate that the simultaneous diagonalization-based SOCP relaxation indeed balances bound quality and computing time well.

Data Availability

The author declares that all data and material in the paper are available and verifiable.

Conflicts of Interest

The author declares no conflicts of interest.

Acknowledgments

This research has been supported by the National Natural Science Foundation of China (Grant no. 11701512).