Abstract

Recent studies on nonconvex optimization have recognized the special importance of Difference of Convex (DC) programming. In this work, a class of DC programs derived from portfolio selection problems is studied. The most popular method for solving such problems is the Branch-and-Bound (B&B) algorithm; however, "the curse of dimensionality" degrades its performance. The DC Algorithm (DCA) is an efficient method for obtaining a local optimal solution and has been applied to many practical problems, especially large-scale ones. We propose a B&B-DCA algorithm that embeds DCA into the B&B framework; the new algorithm improves computational performance while still obtaining a global optimal solution. Computational results show that the proposed B&B-DCA algorithm outperforms general B&B in both branch number and computational time. The nice features of DCA (inexpensiveness, reliability, robustness, globality of computed solutions, etc.) provide crucial support to the combined B&B-DCA for accelerating the convergence of B&B.

1. Introduction

DC programming is an important subject in optimization. This paper studies one class of DC programs, originally derived from portfolio investment problems.

Consider the following problem:
$$\min_x\; f(x) = p(x) + \phi(x) \quad \text{s.t.}\; x \in \mathcal{V},\ l \le x \le u, \tag{Q}$$
where $p(x) = \frac{1}{2}x^T H x + c^T x$ with $H$ a positive definite matrix, and $\phi(x) = \sum_{i=1}^{n} \phi_i(x_i)$ with each $\phi_i(x_i)$ concave for $i = 1, 2, \ldots, n$. The decision vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathcal{V}$, where $\mathcal{V} = \{x \mid Ax \le b\}$ is a polyhedron with $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. In addition, $x$ is restricted by the lower bounds $l = (l_1, l_2, \ldots, l_n)^T$ and the upper bounds $u = (u_1, u_2, \ldots, u_n)^T$.
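For concreteness, here is a minimal Python/NumPy sketch of one instance of problem (Q). The paper's own experiments use MATLAB with CPLEX; the logarithmic concave part anticipates the choice made in Section 4, and all coefficients here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Convex part p(x) = 0.5 x^T H x + c^T x with H positive definite.
B = rng.normal(size=(n, n))
H = B @ B.T + np.eye(n)            # positive definite by construction
c = rng.uniform(-1.0, 1.0, size=n)

# Separable concave part phi(x) = sum_i theta_i * ln(x_i + gamma_i)
# (the logarithmic form used later in Section 4; coefficients illustrative).
theta = rng.uniform(2.0, 3.0, size=n)
gamma = rng.uniform(3.0, 5.0, size=n)

def p(x):
    return 0.5 * x @ H @ x + c @ x

def phi(x):
    return np.sum(theta * np.log(x + gamma))

def f(x):
    return p(x) + phi(x)           # the DC objective of (Q)

x = np.full(n, 1.0 / n)            # a point with e^T x = 1 and 0 <= x <= 1
print(f(x))
```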

Falk and Soland propose a B&B algorithm for separable nonconvex programming problems in [1], where the objective function is a separable nonconvex function. Phong et al. give a decomposition B&B method for globally solving linearly constrained indefinite quadratic minimization problems in [2, 3], where the objective function is $p(x) + \phi(y)$ with $p(x)$ a convex quadratic function; the concave part $\phi(y)$, however, is a function of $y$ rather than of $x$. Konno and Wijayanayake [4] propose a B&B algorithm for portfolio optimization problems under concave transaction costs; the algorithm introduces linear underestimating functions for the concave transaction cost functions and is successfully used to solve optimal portfolio selection problems with 200 assets. Honggang and Chengxian give a B&B algorithm for this class of DC programs using their proposed largest distance bisection in [5], and tests show the efficiency of the method for problems of 20–160 dimensions. More generally, a convex minorant of the DC function $f(x) = p(x) + \phi(x)$ is given by $p + \mathrm{co}(\phi)$, where $\mathrm{co}(\phi)$ denotes the convex envelope of the concave function $\phi$ over the underlying convex set; this is called DC relaxation, which has been thoroughly studied in [6] and is important for nonconvex programming problems. The performance of B&B depends on the branching strategy and the bounding technique. The main concern of the above B&B algorithms is to solve the underestimated problem of the initial problem to obtain an upper bound and a lower bound for the optimal value, divide the problem into two subproblems according to some rule, and repeat these steps on a selected subproblem. By constantly decreasing the upper bound and increasing the lower bound for the optimal value, a global optimal solution is obtained. The main contribution of our work is to improve the upper bound for the optimal value by applying a local optimization algorithm to the DC program itself rather than to the underestimated problem.

DCA is an effective local optimization method based on local optimality and duality for solving DC programs, especially large-scale ones. DCA was first proposed by Tao and An [7] in its preliminary form in 1985 as an extension of subgradient algorithms to DC programming. The method has since become classic and increasingly popular owing to the joint work of An and Tao since 1994. Crucial developments and improvements of DCA, from both the theoretical and the computational points of view, have been completed; see [7–12] and the references therein. In particular, the work by An et al. [12] investigates DC programming and DCA for solving large-scale (up to 400,000 dimensions) nonconvex quadratic programming. Since solving large-scale convex quadratic programs is expensive, the combination of DCA and an interior point method (IP) reduces the computation time and outperforms the reference code LOQO by Vanderbei (Princeton). DCA has been applied to many different nonconvex optimization problems, such as the trust region subproblem and nonconvex quadratic programming (see [6, 13–17]), and in many cases it actually provides a global optimal solution [11]. A very complete reference, including a list of real-world applications of DC programming and DCA, has been compiled by An and Tao (see the website [18]).

DCA is an efficient method for DC programming that allows solving large-scale instances. In this paper, we obtain a local optimal solution by the DCA method; its objective value is also an upper bound for the optimal value of problem (Q), and in most cases the local optimal solution is in fact globally optimal. Embedding this upper bound into the B&B algorithm improves the convergence speed and guarantees the global optimality of the computed solution of problem (Q). Computational tests are conducted on general problems and, in particular, on the portfolio selection problem; the results show that the proposed B&B-DCA algorithm solves the problems efficiently and outperforms general B&B in both branch number and computational time.

The rest of the paper is organized as follows. The local optimization method DCA for problem (Q) is described in Section 2. The B&B method embedded with the DCA algorithm is given in Section 3. Computational tests are reported in Section 4. Conclusions and future research are given in Section 5.

2. Local Optimization Method DCA

2.1. DCA for General DC Programming

Consider the following general DC programming problem:
$$\gamma = \inf\{F(x) = g(x) - h(x) : x \in \mathbb{R}^n\}, \tag{P}$$
where $g(x)$ and $h(x)$ are lower semicontinuous proper convex functions on $\mathbb{R}^n$. Such a function $F(x)$ is called a DC function, $g(x) - h(x)$ is called a DC decomposition of $F(x)$, and $g(x)$ and $h(x)$ are DC components of $F(x)$. In addition, a constrained DC program whose feasible set $\mathcal{C}$ is convex can be transformed into an unconstrained DC program by adding the indicator function of $\mathcal{C}$ (equal to $0$ on $\mathcal{C}$, $+\infty$ elsewhere) to the first DC component $g(x)$.

Let $g^*(y) = \sup\{\langle x, y\rangle - g(x) : x \in \mathbb{R}^n\}$ be the conjugate function of $g(x)$. Then the dual program of (P) can be expressed as
$$\gamma_D = \inf\{h^*(y) - g^*(y) : y \in \mathbb{R}^n\}. \tag{P_D}$$
A perfect symmetry exists between the primal program (P) and the dual program (P_D): the dual of (P_D) is exactly (P). Remark that if the optimal value $\gamma$ is finite, we have $\operatorname{dom}(g) \subset \operatorname{dom}(h)$ and $\operatorname{dom}(h^*) \subset \operatorname{dom}(g^*)$, where $\operatorname{dom}(g) = \{x \in \mathbb{R}^n : g(x) < +\infty\}$. Such inclusions will be assumed throughout the paper.

The necessary local optimality condition [6] for the primal problem (P) is
$$\emptyset \ne \partial h(x^*) \subset \partial g(x^*),$$
and a point $x^*$ satisfying $\partial g(x^*) \cap \partial h(x^*) \ne \emptyset$ is called a critical point of $g(x) - h(x)$, where the subdifferential [19] of $h(x)$ at $x_0$ is denoted by
$$\partial h(x_0) = \{y \in \mathbb{R}^n : h(x) \ge h(x_0) + \langle x - x_0, y\rangle,\ \forall x \in \mathbb{R}^n\}.$$

Let $\mathcal{P}$ and $\mathcal{D}$ denote the global solution sets of problems (P) and (P_D), respectively. According to the work by Toland [20], the relationship between them is
$$\bigcup_{x^* \in \mathcal{P}} \partial h(x^*) \subset \mathcal{D}, \qquad \bigcup_{y^* \in \mathcal{D}} \partial g^*(y^*) \subset \mathcal{P}.$$
Under technical conditions, this transportation also holds for local optimal solutions of problems (P) and (P_D); more details can be found in [8–11].

Based on local optimality conditions and duality in DC programming, the DCA consists in the construction of two sequences $\{x^k\}$ and $\{y^k\}$ [17] such that the sequences $\{g(x^k) - h(x^k)\}$ and $\{h^*(y^k) - g^*(y^k)\}$ are decreasing, and $\{x^k\}$ (resp., $\{y^k\}$) converges to a primal feasible solution $\tilde{x}$ (resp., a dual feasible solution $\tilde{y}$) satisfying the local optimality conditions $\tilde{x} \in \partial g^*(\tilde{y})$ and $\tilde{y} \in \partial h(\tilde{x})$.

Then the basic scheme of DCA can be expressed as follows:
$$y^k \in \partial h(x^k), \qquad x^{k+1} \in \partial g^*(y^k).$$
In other words, $x^{k+1}$ and $y^{k+1}$ are solutions of the convex programs $(P_k)$ and $(P_{D_k})$, respectively:
$$(P_k)\qquad \min\{g(x) - h(x^k) - \langle x - x^k, y^k\rangle : x \in \mathbb{R}^n\},$$
$$(P_{D_k})\qquad \min\{h^*(y) - g^*(y^k) - \langle y - y^k, x^{k+1}\rangle : y \in \mathbb{R}^n\}.$$

In the following, we list the main convergence properties of DCA, which have been proposed and proven in [10, 11, 17]. First, let $\mathbb{C}$ (resp., $\mathbb{D}$) denote a convex set containing the sequence $\{x^k\}$ (resp., $\{y^k\}$), and let $\rho(g, \mathbb{C})$ (or $\rho(g)$ if $\mathbb{C} = \mathbb{R}^n$) denote the modulus of strong convexity of the function $g(x)$ on $\mathbb{C}$:
$$\rho(g, \mathbb{C}) = \sup\left\{\rho \ge 0 : g(x) - \frac{\rho}{2}\|x\|^2 \text{ is convex on } \mathbb{C}\right\}. \tag{2.8}$$
(1) The sequences $\{g(x^k) - h(x^k)\}$ and $\{h^*(y^k) - g^*(y^k)\}$ are decreasing, and
(i) $g(x^{k+1}) - h(x^{k+1}) = g(x^k) - h(x^k)$ if $y^k \in \partial g(x^k) \cap \partial h(x^k)$, $y^k \in \partial g(x^{k+1}) \cap \partial h(x^{k+1})$, and $[\rho(g, \mathbb{C}) + \rho(h, \mathbb{C})]\,\|x^{k+1} - x^k\| = 0$. Furthermore, if $g$ or $h$ is strictly convex on $\mathbb{C}$, then $x^{k+1} = x^k$.
(ii) $h^*(y^{k+1}) - g^*(y^{k+1}) = h^*(y^k) - g^*(y^k)$ if $x^{k+1} \in \partial g^*(y^k) \cap \partial h^*(y^k)$, $x^{k+1} \in \partial g^*(y^{k+1}) \cap \partial h^*(y^{k+1})$, and $[\rho(g^*, \mathbb{D}) + \rho(h^*, \mathbb{D})]\,\|y^{k+1} - y^k\| = 0$. Furthermore, if $g^*$ or $h^*$ is strictly convex on $\mathbb{D}$, then $y^{k+1} = y^k$.
In such cases, DCA terminates at the $k$th iteration (finite convergence of DCA).
(2) If $\rho(g, \mathbb{C}) + \rho(h, \mathbb{C}) > 0$ (resp., $\rho(g^*, \mathbb{D}) + \rho(h^*, \mathbb{D}) > 0$), then the series $\{\|x^{k+1} - x^k\|^2\}$ (resp., $\{\|y^{k+1} - y^k\|^2\}$) converges.
(3) If the optimal value $\gamma$ of the primal problem (P) is finite and the infinite sequences $\{x^k\}$ and $\{y^k\}$ are bounded, then every limit point $\tilde{x}$ (resp., $\tilde{y}$) of the sequence $\{x^k\}$ (resp., $\{y^k\}$) is a critical point of $g(x) - h(x)$ (resp., $h^*(y) - g^*(y)$).
(4) DCA has a linear convergence rate for general DC programs.

2.2. DCA Applied for Solving Problem (𝑄)

Problem (Q) is a special form of the general DC program (P), with $g(x) = p(x)$ and $h(x) = -\phi(x)$. According to the description of DCA in Section 2.1, we need to compute $\partial(-\phi)(x)$ and $\partial p^*(y)$. From modern convex analysis, we have $\partial(-\phi)(x) = \partial(-\phi_1)(x_1) \times \cdots \times \partial(-\phi_n)(x_n)$.

As can be seen, if and only if $(-\phi)(x)$ is differentiable at $x$, $\partial(-\phi)(x)$ reduces to the singleton $\{\nabla(-\phi)(x) = ((-\phi_1)'(x_1), \ldots, (-\phi_n)'(x_n))^T\}$. For the computation of $\partial p^*(y)$, we need to solve the following convex quadratic program:
$$\min_x\{p(x) - \langle x, y\rangle : x \in \mathcal{V},\ l \le x \le u\}, \tag{2.9}$$
because its solution is exactly $\partial p^*(y) = \{\nabla p^*(y)\}$. Finally, DCA applied to problem (Q) can be described as follows.

Algorithm 2.1 (DCA for Problem (Q)). 1° Initialization
Let $\epsilon$ be a sufficiently small positive number. Select an initial point $x^0 \in \mathbb{R}^n$. Set $t = 0$ and go to 2°.
2° Iteration
Set $y^t = \nabla(-\phi)(x^t)$, that is, $y_i^t = (-\phi_i)'(x_i^t)$, $i = 1, 2, \ldots, n$, and then solve the following quadratic program:
$$\min_x\{p(x) - \langle x, y^t\rangle : x \in \mathcal{V},\ l \le x \le u\}. \tag{2.10}$$
Denote the solution of (2.10) by $x^{t+1}$ and go to 3°.
3° Stop Criterion
If $\|x^{t+1} - x^t\| \le \epsilon$, stop; we obtain a local optimal solution $x^* = x^{t+1}$. Otherwise, set $t = t + 1$ and go to 2°.
Algorithm 2.1 efficiently yields a local optimal solution of problem (Q) for instances of different dimensions.
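The following is a minimal sketch of Algorithm 2.1 in Python with NumPy/SciPy. The paper's implementation is MATLAB with CPLEX; here the convex QP (2.10) is solved by SciPy's SLSQP instead, and `grad_neg_phi` is an assumed user-supplied gradient of $-\phi$ (assumed differentiable, so the subdifferential is a singleton).

```python
import numpy as np
from scipy.optimize import minimize

def dca(H, c, grad_neg_phi, A, b, l, u, x0, eps=1e-5, max_iter=500):
    """Sketch of Algorithm 2.1 (DCA for problem (Q))."""
    bounds = list(zip(l, u))
    cons = [{'type': 'ineq', 'fun': lambda z: b - A @ z}]  # V = {A x <= b}
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_neg_phi(x)                     # 2°: y^t = grad(-phi)(x^t)
        # Convex QP (2.10): min p(z) - <z, y>  s.t.  z in V, l <= z <= u
        res = minimize(lambda z: 0.5 * z @ H @ z + c @ z - z @ y, x,
                       jac=lambda z: H @ z + c - y,
                       bounds=bounds, constraints=cons, method='SLSQP')
        x_next = res.x
        if np.linalg.norm(x_next - x) <= eps:   # 3°: stop criterion
            return x_next
        x = x_next                              # otherwise iterate again
    return x

# For the logarithmic phi of Section 4, phi_i(x_i) = theta_i*ln(x_i + gamma_i),
# one would pass grad_neg_phi = lambda x: -theta / (x + gamma).
```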

3. B&B Algorithm Embedded with DCA Methods

In most cases, B&B methods [5] are used to obtain a global optimal solution of problem (Q). The main concern of the existing B&B algorithms is to solve the underestimated problem of problem (Q). However, the computational cost of the algorithm grows rapidly with the dimension of the problem. In this section, we improve the upper bound for the optimal value by applying the local optimization algorithm DCA to the DC program itself rather than to the underestimated problem. DCA is embedded into the B&B algorithm to accelerate the convergence of B&B.

3.1. The Description of the B&B-DCA Algorithm

In this subsection, we present the B&B-DCA method for problem (Q). Let $\mathcal{S}^0 = \{x \mid l \le x \le u\}$ be the initial rectangle to be branched. We replace each concave function $\phi_i(x_i)$ in $\phi(x)$ by its linear underestimating function $\bar{\phi}_i(x_i)$ over the set $\mathcal{S}^0$:
$$\bar{\phi}_i(x_i) = a_i + b_i x_i, \tag{3.1}$$
where
$$b_i = \frac{\phi_i(u_i) - \phi_i(l_i)}{u_i - l_i}, \qquad a_i = \phi_i(l_i) - b_i l_i, \quad i = 1, 2, \ldots, n. \tag{3.2}$$
Then we let
$$\bar{f}(x) = p(x) + \bar{\phi}(x) = p(x) + \sum_{i=1}^{n} \bar{\phi}_i(x_i) \tag{3.3}$$
be the underestimating function of the objective $f(x) = p(x) + \phi(x)$ of problem (Q), and we solve the following quadratic program:
$$\min_x\; \bar{f}(x) = p(x) + \bar{\phi}(x) \quad \text{s.t.}\; x \in \mathcal{V},\ l \le x \le u. \tag{$\bar{Q}$}$$
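Under the same assumptions as the earlier sketches, here is a short sketch of the chord coefficients (3.2) and the resulting underestimated objective (3.3); `phi_vec` is an assumed elementwise evaluator of the $\phi_i$:

```python
import numpy as np

def chord_coefficients(phi_vec, l, u):
    """Coefficients of the linear underestimators (3.1)-(3.2): the chord of
    each concave phi_i over [l_i, u_i]; assumes u_i > l_i.  Concavity of
    phi_i guarantees the chord lies below phi_i on the whole interval."""
    b = (phi_vec(u) - phi_vec(l)) / (u - l)    # slopes b_i of (3.2)
    a = phi_vec(l) - b * l                     # intercepts a_i of (3.2)
    return a, b

# Underestimated objective (3.3) of the relaxed QP (Q-bar), with p as before:
# a, b = chord_coefficients(lambda x: theta * np.log(x + gamma), l, u)
# f_bar = lambda x: p(x) + np.sum(a + b * x)
```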

Let $\bar{x}^0$ be an optimal solution of problem $(\bar{Q})$; then $f(\bar{x}^0)$ is an upper bound and $\bar{f}(\bar{x}^0)$ a lower bound for the optimal value $f(x^*)$ (where $x^*$ is a global optimal solution) of the primal problem (Q).

Then Algorithm 2.1 is applied to obtain a local optimal solution, with $\bar{x}^0$ taken as the initial iteration point $x^0$. The solution obtained when Algorithm 2.1 stops is denoted $\tilde{x}^0$. We then set the upper bound for the optimal value $f(x^*)$ (where $x^*$ is a global optimal solution of problem (Q)) as $\alpha^0 = \min\{f(\bar{x}^0), f(\tilde{x}^0)\}$.

Theorem 3.1. Let $\bar{x}^0$ be an optimal solution of problem $(\bar{Q})$, let $\tilde{x}^0$ be a local optimal solution obtained by the DCA method, and let $x^*$ be a global optimal solution of problem (Q). Then
$$\bar{f}(\bar{x}^0) \le f(x^*) \le \alpha^0, \tag{3.4}$$
where $\alpha^0 = \min\{f(\bar{x}^0), f(\tilde{x}^0)\}$.

Proof. The following relationships hold:
$$\bar{f}(\bar{x}^0) = \min\{\bar{f}(x) : x \in \mathcal{V} \cap \mathcal{S}^0\} \le \min\{f(x) : x \in \mathcal{V} \cap \mathcal{S}^0\} = f(x^*) \le \alpha^0 = \min\{f(\bar{x}^0), f(\tilde{x}^0)\}. \tag{3.5}$$
This gives the conclusion.

Before continuing the description of the algorithm, we need the "Rectangle Subdivision Process", that is, dividing the set $\mathcal{S}^0$ into a sequence of subsets $\mathcal{S}^k$ by means of hyperplanes parallel to certain facets [5]. The family of subrectangles can be represented by a tree with root $\mathcal{S}^0$ and subnodes; a node is a successor of another one if and only if it represents a subset of the latter node. An infinite path in the tree corresponds to an infinitely nested sequence of rectangles $\mathcal{S}^k$, $k = 0, 1, \ldots$. The "Rectangle Subdivision Process" plays an important role in the B&B method. In order to ensure the convergence of the algorithm, the concept of "Normal Rectangular Subdivision" (NRS) was introduced in [21].

Definition 3.2 (see [21]). Assume that $\bar{\phi}^k(x) = \sum_{i=1}^{n} \bar{\phi}_i^k(x_i)$ is the linear underestimating function of $\phi(x)$ over the set $\mathcal{S}^k$ and that $\bar{x}^k$ is the optimal solution of the underestimated problem of $(Q^k)$. A nested sequence $\{\mathcal{S}^k\}$ is said to be normal if
$$\lim_{k \to \infty}\left|\phi\left(\bar{x}^k\right) - \bar{\phi}^k\left(\bar{x}^k\right)\right| = 0. \tag{3.6}$$
A rectangular subdivision process is said to be normal if any nested sequence of rectangles generated by the process is normal.

If $\alpha^0 - \bar{f}(\bar{x}^0) \le \delta$, with $\delta$ a given sufficiently small number, then $\bar{x}^0$ (when $f(\bar{x}^0) \le f(\tilde{x}^0)$) or $\tilde{x}^0$ (when $f(\bar{x}^0) > f(\tilde{x}^0)$) is a $\delta$-approximate global optimal solution of problem (Q). Otherwise, problem (Q) is divided into two subproblems:
$$\min\; f(x) = p(x) + \phi(x), \quad x \in \mathcal{V},\ x \in \mathcal{S}^1, \tag{$Q^1$}$$
$$\min\; f(x) = p(x) + \phi(x), \quad x \in \mathcal{V},\ x \in \mathcal{S}^2, \tag{$Q^2$}$$
where
$$\mathcal{S}^1 = \{x \mid l_s \le x_s \le \omega_s,\ l_i \le x_i \le u_i,\ i \ne s,\ i = 1, 2, \ldots, n\}, \qquad \mathcal{S}^2 = \{x \mid \omega_s \le x_s \le u_s,\ l_i \le x_i \le u_i,\ i \ne s,\ i = 1, 2, \ldots, n\}, \tag{3.7}$$
and the index $s$ and the bisection point $\omega_s$ are decided by the NRS process, such as the $\omega$-subdivision.

Similarly to problem (Q), we obtain the underestimated quadratic programs (denoted $(\bar{Q}^1)$ and $(\bar{Q}^2)$) for the subproblems $(Q^1)$ and $(Q^2)$ by replacing the concave part $\phi(x)$ by the respective underestimating functions $\bar{\phi}^1(x)$ and $\bar{\phi}^2(x)$:
$$\min\; \bar{f}^1(x) = p(x) + \bar{\phi}^1(x), \quad x \in \mathcal{V},\ x \in \mathcal{S}^1, \tag{$\bar{Q}^1$}$$
$$\min\; \bar{f}^2(x) = p(x) + \bar{\phi}^2(x), \quad x \in \mathcal{V},\ x \in \mathcal{S}^2. \tag{$\bar{Q}^2$}$$
(1) If either of them is infeasible, then the corresponding subproblem $(Q^1)$ or $(Q^2)$ is infeasible, and we delete it.
(2) If at least one subproblem is feasible, we obtain the optimal solution $\bar{x}^1$ (or $\bar{x}^2$) of the underestimated subproblem $(\bar{Q}^1)$ (or $(\bar{Q}^2)$) for the subproblem $(Q^1)$ (or $(Q^2)$). Let the upper bound be $\alpha^1 = \min\{\alpha^0, f(\bar{x}^1), f(\bar{x}^2)\}$; then delete the subproblem $(Q^1)$ or $(Q^2)$ whose lower bound $\bar{f}^1(\bar{x}^1)$ or $\bar{f}^2(\bar{x}^2)$ is larger than $\alpha^1 - \delta$.

Remarkably, if $f(\bar{x}^1) < \alpha^0 - \delta$ or $f(\bar{x}^2) < \alpha^0 - \delta$, Algorithm 2.1 is used to solve the subproblem $(Q^1)$ or $(Q^2)$; the corresponding local optimal solution is denoted $\tilde{x}^1$ or $\tilde{x}^2$. The upper bound for the optimal value is then updated: $\alpha^1 = \min\{\alpha^1, f(\tilde{x}^1), f(\tilde{x}^2)\}$.

We delete those subproblems whose lower bounds are larger than $\alpha^1 - \delta$. Then we select from $(Q^1)$ and $(Q^2)$ the subproblem with the smaller lower bound and divide it into two subproblems. This process is repeated until no subproblem remains.

In the following, we will give the detailed description of B&B-DCA algorithm.

Algorithm 3.3 (The Combined B&B-DCA Algorithm). 1° Initialization
Set $k = 0$, $l^0 = l$, $u^0 = u$, and give the tolerance $\delta > 0$, a sufficiently small number. Solve the underestimated problem $(\bar{Q})$ to obtain an optimal solution $\bar{x}^0$. Then use Algorithm 2.1 (DCA) to solve problem (Q); the resulting local optimal solution is denoted $\tilde{x}^0$. Set the problem set $\mathbb{M} = \{Q^0 \equiv Q\}$, the upper bound $\alpha^0 = \min\{f(\bar{x}^0), f(\tilde{x}^0)\}$, the lower bound $\beta(Q^0) = \bar{f}(\bar{x}^0)$, and let $x^0$ be the point attaining $\alpha^0$.
2° Stop Criterion
Delete all $Q^i \in \mathbb{M}$ with $\beta(Q^i) > \alpha^k - \delta$. Let $\mathbb{M}$ be the set of remaining subproblems. If $\mathbb{M} = \emptyset$, stop: $x^k$ is a $\delta$-global optimal solution of problem (Q). Otherwise, go to 3°.
3° Branch
Select a problem $(Q^j)$ from the problem set $\mathbb{M}$:
$$\min\; f(x) = p(x) + \phi(x), \quad x \in \mathcal{V},\ x \in \mathcal{S}^j \tag{$Q^j$}$$
with
$$\beta^k = \beta(Q^j) = \min\{\beta(Q^t) : Q^t \in \mathbb{M}\}. \tag{3.8}$$
Then divide $\mathcal{S}^j$ into $\mathcal{S}^{j,1}$ and $\mathcal{S}^{j,2}$ according to an NRS process; the corresponding subproblems are denoted $(Q^{j,1})$ and $(Q^{j,2})$. Set $\mathbb{M} = \mathbb{M} \setminus \{Q^j\}$.
4° Bound
For each subproblem $(Q^{j,m})$, $m = 1, 2$, solve the underestimated subproblem $(\bar{Q}^{j,m})$ to obtain the optimal solution $\bar{x}^{j,m}$. Let $\beta(Q^{j,m}) = \bar{f}^{j,m}(\bar{x}^{j,m})$ and $\alpha^{j,m} = f(\bar{x}^{j,m})$. Then set $\alpha^{k+1} = \min\{\alpha^k, \alpha^{j,1}, \alpha^{j,2}\}$ and let $x^{k+1}$ be the point attaining $\alpha^{k+1}$.
5° Deciding Whether to Call DCA Procedure
For $m = 1, 2$: if $\alpha^{j,m} < \alpha^k - \delta$, Algorithm 2.1 (DCA) is applied to solve the subproblem $(Q^{j,m})$; the resulting solution is denoted $\tilde{x}^{j,m}$. Then set $\alpha^{k+1} = \min\{\alpha^{k+1}, f(\tilde{x}^{j,m})\}$ and update $x^{k+1}$ accordingly. In either case, go to 6°.
6° Iteration
Let $\mathbb{M} = \mathbb{M} \cup \{Q^{j,1}, Q^{j,2}\}$, set $k = k + 1$, and go to 2°.
Since the DCA method is an efficient local optimization method for DC programming, combining DCA with the B&B algorithm guarantees global optimality and accelerates the convergence of the general B&B algorithm (see [5]) for problem (Q). Owing to the decrease of the upper bound $\alpha$, the convergence speed of the B&B algorithm improves. However, we need not run DCA on every subproblem: the DCA procedure is called only when certain conditions are satisfied, so as to prevent overuse of DCA.
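The following Python skeleton sketches the control flow of Algorithm 3.3 under stated assumptions. The helpers `solve_relaxation`, `dca_refine`, and `subdivide` are hypothetical stand-ins for the QP relaxation $(\bar{Q})$, Algorithm 2.1, and an NRS rule, respectively; a heap keyed by lower bounds realizes the best-bound selection of step 3°.

```python
import heapq
import numpy as np

def bb_dca(solve_relaxation, dca_refine, f, subdivide, l0, u0, delta=1e-5):
    """Skeleton of Algorithm 3.3 (B&B-DCA); all helpers are hypothetical.
    solve_relaxation(l, u) -> (x_bar, lower_bound) or None if infeasible;
    dca_refine(x_start) runs Algorithm 2.1 from x_start;
    subdivide(l, u, x_bar) splits [l, u] by an NRS rule (Definition 4.1)."""
    out = solve_relaxation(l0, u0)
    if out is None:
        return None, np.inf
    x_bar, beta0 = out
    x_tilde = dca_refine(x_bar)                    # 1°: DCA at the root
    alpha, best = min([(f(x_bar), x_bar), (f(x_tilde), x_tilde)],
                      key=lambda t: t[0])
    heap, tick = [(beta0, 0, l0, u0, x_bar)], 1    # the subproblem set M
    while heap:                                    # 2°: stop when M is empty
        beta, _, l, u, x_bar = heapq.heappop(heap) # 3°: smallest lower bound
        if beta > alpha - delta:                   # prune against current alpha
            continue
        for lo, hi in subdivide(l, u, x_bar):      # branch into two rectangles
            child = solve_relaxation(lo, hi)       # 4°: bound the child
            if child is None:                      # infeasible child: delete
                continue
            x_child, beta_child = child
            fx = f(x_child)
            call_dca = fx < alpha - delta          # 5°: test against alpha^k
            if fx < alpha:                         # 4°: update upper bound
                alpha, best = fx, x_child
            if call_dca:                           # run DCA only when promising
                x_loc = dca_refine(x_child)
                if f(x_loc) < alpha:
                    alpha, best = f(x_loc), x_loc
            if beta_child <= alpha - delta:        # 6°: keep child in M
                heapq.heappush(heap, (beta_child, tick, lo, hi, x_child))
                tick += 1
    return best, alpha                             # delta-global solution
```

The guard on the DCA call mirrors step 5°, so the local search runs only on children whose relaxation suggests an improved upper bound.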

3.2. The Convergence of B&B-DCA Algorithm

Theorem 3.4. The sequence $\{x^k\}$ generated by the B&B-DCA algorithm converges to a global optimal solution of problem (Q) as $k \to \infty$.

Proof. If the algorithm terminates after finitely many iterations $k$, then $x^k$ is a global optimal solution of problem (Q) by the definitions of $\alpha^k$ and $\beta^k$.
If the algorithm does not stop after finitely many iterations, it must generate an infinite nested sequence $\{\mathcal{S}^{k_t}\}$ of rectangles. From the definitions of the upper bound $\alpha^k$ and the lower bound $\beta^k$, the sequence $\{\alpha^k - \beta^k\}$ is nonincreasing and nonnegative. Since the rectangle bisection process satisfies an NRS process, we have
$$\lim_{t \to \infty}\left|\phi\left(\bar{x}^{k_t}\right) - \bar{\phi}^{k_t}\left(\bar{x}^{k_t}\right)\right| = 0. \tag{3.9}$$
Obviously, this means that
$$\lim_{t \to \infty}\left(\alpha^{k_t} - \beta^{k_t}\right) = 0. \tag{3.10}$$
Then we have
$$\lim_{k \to \infty}\left(\alpha^k - \beta^k\right) = 0. \tag{3.11}$$
Furthermore, $\beta^k \le f(x^*) \le \alpha^k$, so the sequence generated by the algorithm converges to a global optimal solution as $k \to \infty$.

We can see that the NRS process plays an important role in the convergence of the B&B-DCA algorithm.

4. Computational Tests

In this section, we test the performance of the proposed B&B-DCA algorithm on randomly generated datasets, and the results are compared with those of the general B&B algorithm (see [5]) for problem (Q). In addition, the portfolio selection problem with concave transaction costs is studied in Section 4.2. All computational tests are coded in MATLAB (CPLEX is integrated to solve the relevant quadratic programs) and run on a personal computer with a Pentium Dual 2.66 GHz processor and 2 GB of memory.

4.1. Problems with Randomly Generated Datasets

Datasets of different dimensions are generated to test the performance of the B&B-DCA and general B&B algorithms. We conduct numerical experiments with the proposed algorithms on problem (Q) for dimensions from 50 to 400. In the following, we describe the generating process of the datasets and the values of some parameters.

For the objective function $f(x) = p(x) + \phi(x)$ in problem (Q), a separable logarithmic function is used to specify the concave part $\phi(x)$ because of its wide application in economic and financial problems. Let
$$\phi(x) = \sum_{i=1}^{n} \phi_i(x_i) = \sum_{i=1}^{n} \theta_i \ln\left(x_i + \gamma_i\right), \tag{4.1}$$
where $\theta_i$ and $\gamma_i$ are randomly generated in the intervals $[2, 3]$ and $[3, 5]$, respectively, from the uniform distribution.

For the convex part $p(x) = \frac{1}{2}x^T H x + c^T x$, we generate $n$ sequences $q_j \in \mathbb{R}^m$ ($m > n$), $j = 1, 2, \ldots, n$, with entries uniform on $[-1, 1]$, and denote by $H$ the covariance matrix of the $q_j$, which is positive definite. In this way the function $p(x)$ is guaranteed to be convex. The coefficients $c_i$ are randomly generated in $[-1, 1]$, again from the uniform distribution.

The feasible set of problem (Q) is the intersection of
$$\mathcal{V} = \left\{x \;\middle|\; \sum_{i=1}^{n} x_i = 1\right\} \tag{4.2}$$
and
$$l_i \le x_i \le u_i, \quad l_i = 0,\ u_i = 1,\ i = 1, 2, \ldots, n. \tag{4.3}$$
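A NumPy sketch of this data-generation recipe follows (dimension, seed, and the inequality encoding of $e^T x = 1$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 50, 60                            # m > n so the covariance is nonsingular

# Convex part: H is the covariance matrix of n sequences q_j in R^m,
# entries uniform on [-1, 1]; coefficients c_i uniform on [-1, 1].
Q = rng.uniform(-1.0, 1.0, size=(m, n))  # columns are the sequences q_j
H = np.cov(Q, rowvar=False)              # n x n, positive definite (a.s.)
c = rng.uniform(-1.0, 1.0, size=n)

# Concave part (4.1): theta_i in [2, 3], gamma_i in [3, 5].
theta = rng.uniform(2.0, 3.0, size=n)
gamma = rng.uniform(3.0, 5.0, size=n)

# Feasible set: simplex constraint (4.2) and bounds (4.3).
A = np.vstack([np.ones(n), -np.ones(n)]) # encode e^T x = 1 as the pair
b = np.array([1.0, -1.0])                # of inequalities A x <= b
l, u = np.zeros(n), np.ones(n)
```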

The tolerances $\epsilon$ and $\delta$ in Algorithms 2.1 and 3.3 are both set to $10^{-5}$.

The B&B-DCA and general B&B algorithms [5] are used to solve problem (Q) on the same datasets and parameters for dimensions 50 to 400. To make the results more reliable, five datasets are randomly generated for each dimension to test both algorithms. The stop criterion of the algorithms is either obtaining a $\delta$-optimal solution or the branch number exceeding 20000.

The NRS process has an important effect on the convergence of the B&B-DCA and general B&B algorithms. To our knowledge, NRS processes include exhaustive bisection, $\omega$-bisection, adaptive bisection, and largest distance bisection [5]. Evidence in [2, 5] shows that $\omega$-bisection and largest distance bisection outperform the other two methods. Since $\omega$-bisection is much simpler to compute and performs similarly to largest distance bisection, it is applied in both our proposed B&B-DCA and the general B&B algorithm for problem (Q).

Definition 4.1 ($\omega$-subdivision [2, 14]). With the $\omega$-subdivision process, the bisection index $s$ is determined by
$$\phi_s\left(\bar{x}^k_s\right) - \bar{\phi}_s\left(\bar{x}^k_s\right) = \max_i\left\{\phi_i\left(\bar{x}^k_i\right) - \bar{\phi}_i\left(\bar{x}^k_i\right),\ i = 1, 2, \ldots, n\right\}, \tag{4.4}$$
where $\bar{x}^k$ is the optimal solution of the underestimated problem of $(Q^k)$. The bisection point is then $\omega^k_s = \bar{x}^k_s$.
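A minimal sketch of this rule follows; it computes the chord coefficients (3.2) inline, selects the index maximizing the gap in (4.4), and returns the two subrectangles $\mathcal{S}^1$ and $\mathcal{S}^2$ of (3.7). `phi_vec` is an assumed elementwise evaluator of the $\phi_i$, and the function matches the `subdivide` signature used in the B&B-DCA skeleton above.

```python
import numpy as np

def omega_subdivision(l, u, x_bar, phi_vec):
    """Sketch of Definition 4.1: split [l, u] at the bisection point
    x_bar[s], where s maximizes phi_i(x_bar_i) - phi_bar_i(x_bar_i)."""
    phil, phiu = phi_vec(l), phi_vec(u)
    b = (phiu - phil) / (u - l)               # chord slopes b_i of (3.2)
    a = phil - b * l                          # chord intercepts a_i of (3.2)
    gap = phi_vec(x_bar) - (a + b * x_bar)    # nonnegative, since the chord
    s = int(np.argmax(gap))                   # underestimates each concave phi_i
    l1, u1 = l.copy(), u.copy(); u1[s] = x_bar[s]   # S^1: x_s <= omega_s
    l2, u2 = l.copy(), u.copy(); l2[s] = x_bar[s]   # S^2: x_s >= omega_s
    return (l1, u1), (l2, u2)
```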

Results and Analysis
We find that the optimal values computed by the two algorithms are equal. Table 1 reports, for each dimension, the average branch number (Avg Bran) and mean CPU time (Time) of the B&B-DCA and general B&B algorithms, together with the average number of calls of the DCA procedure (Num DCA) in the B&B-DCA algorithm. Furthermore, we test the performance of DCA alone on problem (Q): Table 1 also gives the number of runs (Num glob) in which a global optimal solution is obtained after one pass of DCA, as well as the average CPU time (Time) of one pass of the DCA method.

From the results in Table 1, we have the following comments.
(i) The general B&B and the proposed B&B-DCA algorithms can efficiently solve problem (Q) for dimensions 50 to 400. Owing to the nice features of DCA (inexpensiveness, reliability, robustness, globality of computed solutions, etc.), the proposed B&B-DCA algorithm shows great superiority over the general B&B algorithm in both average branch number and mean CPU time. Take dimension $n = 400$, for example: the average branch number of general B&B is 988, whereas that of B&B-DCA is only 666.6; correspondingly, the mean CPU time decreases by nearly 600 seconds with the B&B-DCA algorithm. Embedding DCA into the B&B algorithm is thus well worthwhile.
(ii) The DCA method always gives a good approximation of an optimal solution of problem (Q) within a short CPU time. It can be seen that DCA yields a global optimal solution in all five computational tests for each dimension from 50 to 300. Even so, embedding DCA into the B&B algorithm remains necessary to guarantee global optimality. Furthermore, if a method were available to obtain a tighter lower bound for the optimal value $f(x^*)$ in the bounding step, the proposed B&B-DCA algorithm could compute a global optimal solution within an even shorter time.

4.2. Portfolio Selection with Concave Transaction Costs

In this subsection, the proposed B&B-DCA and general B&B algorithms are applied to solve the portfolio selection problem with concave transaction costs. It has been pointed out that a concave transaction cost function is more realistic [4, 5]. The Mean-Variance (M-V) model can be written as [22]:
$$\min_x\; U(x) = \frac{\lambda}{2} x^T V x - (1 - \lambda)\left(R^T x - C(x)\right) \quad \text{s.t.}\; e^T x = 1,\ Ax \le b,\ l \le x \le u. \tag{PS}$$
The vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ is the decision portfolio, with $x_i$ the investment weight of each asset. $R = (r_1, \ldots, r_n)^T \in \mathbb{R}^n$ denotes the expected return rates, and $V = (\sigma_{ij})_{n \times n} \in \mathbb{R}^{n \times n}$ is the covariance matrix of the asset return rates. Then $x^T V x$ gives the risk (variance) and $R^T x - C(x)$ the net return of the portfolio $x$, where $C(x) = \sum_{i=1}^{n} C_i(x_i)$ denotes the nondecreasing concave transaction cost function; the shape of $C_i(x_i)$ is depicted in Figure 1.

The sum of the investment weights should be one, that is, $e^T x = 1$, where $e \in \mathbb{R}^n$ denotes the vector with all entries equal to 1. $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ define general linear constraints, and $l$ and $u$ are the lower and upper bounds on the investment $x$. The parameter $\lambda \in (0, 1)$ is the risk aversion index chosen by the investor.

In general, the covariance matrix $V$ is symmetric and positive definite. Then $(\lambda/2) x^T V x$ and $(1 - \lambda)(R^T x - C(x))$ are DC components of the function $U(x)$. The proposed B&B-DCA and general B&B algorithms can thus be used to solve problem (PS).
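A sketch of this DC decomposition follows. The square-root transaction cost $C_i(x_i) = k_i\sqrt{x_i}$ is only an assumed concrete instance, since the paper fixes $C_i$ qualitatively (nondecreasing and concave, cf. Figure 1):

```python
import numpy as np

def make_ps_objective(V, R, lam, k):
    """DC decomposition of U(x) in (PS): U = g - h, with
    g(x) = (lam/2) x^T V x        (convex, since V is positive definite) and
    h(x) = (1-lam)(R^T x - C(x))  (convex, since C is concave).
    C_i(x_i) = k_i * sqrt(x_i) is one nondecreasing concave choice."""
    def C(x):
        return np.sum(k * np.sqrt(np.maximum(x, 0.0)))
    def g(x):
        return 0.5 * lam * (x @ V @ x)
    def h(x):
        return (1.0 - lam) * (R @ x - C(x))
    def U(x):
        return g(x) - h(x)
    return U, g, h
```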

Tests are performed on five datasets from the OR-Library (see [23]), a publicly available collection of test datasets for a variety of operations research problems. Each dataset contains 290 weekly return rates for each stock; the data are computed from the component stocks of the Hang Seng, DAX, FTSE, S&P, and Nikkei indices, respectively. From these we compute the expected return rate vector $R$ and the covariance matrix $V$. For each dataset, results are reported for values of $\lambda$ from 0.05 to 0.95 (19 different values of $\lambda$) in Table 2.

As with the randomly generated datasets, Table 2 shows the average branch number (Avg Bran), average CPU time (Time), and average number of DCA calls (Num DCA) for B&B and B&B-DCA, together with the total number of runs (Num glob) in which a global optimal solution is obtained after one pass of DCA.

As can be seen from Table 2, conclusions similar to those of Table 1 can be drawn. First, the proposed B&B-DCA accelerates the convergence of B&B in terms of both branch number and CPU time. Second, DCA computes a global optimal solution in a short time in most cases; however, B&B is needed to confirm the globality of the computed solutions. Even when a global solution is found early, the loose lower bound for the optimal value cannot guarantee fast convergence of the B&B algorithm. How to obtain a tighter lower bound is a challenging and practical line of study.

Additionally, Figure 2 presents the efficient frontiers generated from the M-V portfolio model without transaction costs (transaction cost function $C(x) = 0$) and with concave transaction costs ($C(x)$ a separable nondecreasing concave function). Ignoring concave transaction costs leads to inefficient solutions and would give investors misleading guidance.

5. Conclusions and Future Research

In this paper, a class of DC programs has been studied. General B&B is usually adopted to solve such problems. Based on an existing local optimization method for DC programming, we have proposed a new global method, B&B-DCA, to solve the problem. DCA is an efficient local optimization method based on local optimality and duality for solving DC programs, especially large-scale ones.

Numerical tests on randomly generated datasets show that the proposed B&B-DCA has great superiority over the general B&B algorithm in branch number and computational time across different dimensions. In addition, the portfolio selection problem with concave transaction costs can be solved efficiently. The proposed B&B-DCA can be applied to solve other practical problems that can be modeled by this class of DC programming.

We find that the DCA method almost always provides a global optimal solution, but the lower bound for the optimal value cannot guarantee a fast convergence rate of B&B. If a method can be devised to obtain a tighter lower bound, the proposed B&B-DCA algorithm could solve the problem in much shorter time; this seems significant for solving practical problems. Furthermore, other global optimization methods, such as filled function methods, can be combined with DCA to solve DC programs. Some of these directions are under our current consideration.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grants 10971162, 11101325, and 71171158).