Abstract

We present new algorithms for solving vector DC programming, in which the objective is the difference of C-convex vector functions. Because of the nonconvexity of the objective function, this class of problems is difficult to solve. We propose several proximal point algorithms that exploit the special structure of the problems (i.e., the DC structure). The well-posedness and the global convergence of the proposed algorithms are established. The efficiency of the proposed algorithms is demonstrated by an application to a multicriteria model stemming from lot sizing problems.

1. Introduction

Vector-valued optimization arises in multiobjective programming, multicriteria decision-making, statistics, and cooperative game theory [1]. This class of problems has been widely applied and studied in various decision-making contexts, and many methods have been developed to address such problems [2–4]. Denote by $C \subseteq \mathbb{R}^m$ a convex, closed, and pointed cone with $\operatorname{int} C \neq \emptyset$. The partial order $\preceq_C$ is defined as follows: for any $y_1, y_2 \in \mathbb{R}^m$, $y_1 \preceq_C y_2$ if and only if $y_2 - y_1 \in C$. The relation $\prec_C$ is defined as $y_1 \prec_C y_2$ if and only if $y_2 - y_1 \in \operatorname{int} C$. This paper studies the following vector DC optimization problem with linear constraints:
\[ \min\nolimits_C \; F(x) := f(x) - g(x) \quad \text{subject to } x \in X, \tag{1} \]
where the functions $f$ and $g: \mathbb{R}^n \to \mathbb{R}^m$ are $C$-convex (see the definition in Section 2) and $X \subseteq \mathbb{R}^n$ is a closed and convex set. Note that "$\min_C$" denotes minimization over the cone $C$, that is, weak Pareto efficiency or Pareto efficiency with respect to the cone $C$. When $C = \mathbb{R}^m_+$, the objective function reduces to a multiobjective DC function and problem (1) is the multiobjective DC program studied by Qu et al. [5]. Following the convention adopted in [5], we set $(+\infty) - (+\infty) = +\infty$.
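As a concrete toy instance of this setting (our illustration, not taken from the source), one may take
\[ m = 2, \quad C = \mathbb{R}^2_+, \quad f(x) = \begin{pmatrix} \|x\|^2 \\ \|x - e\|^2 \end{pmatrix}, \quad g(x) = \begin{pmatrix} \tfrac{1}{2}\|x\|^2 \\ 0 \end{pmatrix}, \quad X = \{x \in \mathbb{R}^n : 0 \le x_i \le 1\}, \]
where $e$ denotes the vector of ones; each component of $F = f - g$ is then a DC function and $\preceq_C$ is the usual componentwise order.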

Although vector DC programs have been widely studied, the literature focuses mainly on theoretical aspects. For example, duality theory and optimality conditions for multiobjective optimization were presented by Qu et al. [5], and optimality conditions for DC vector optimization were also studied by Gadhi et al. [6]. However, to the best of our knowledge, there are few studies on how to efficiently solve this class of problems. This paper presents several proximal point algorithms for solving them.

Proximal point algorithms, which originate from the work of [8], have been applied extensively to scalar optimization problems [7]. It has been shown that proximal point algorithms can solve scalar optimization problems efficiently [9, 10]. As an extension, the proximal point algorithm has recently been utilized to solve multicriteria convex optimization problems [11, 12]. However, to the best of our knowledge, little attention has been paid to extending proximal point algorithms to multicriteria optimization problems with DC objective functions. This paper proposes exact and inexact proximal point algorithms for vector DC optimization that exploit the special DC structure of this class of problems.

Our contributions can be summarized as follows. The proximal point methods in both exact and inexact forms are extended to address vector DC optimization. The well-definedness of the proposed methods is presented. The global convergence of the proposed algorithms is proved. An application to lot sizing problems is presented to show the efficiency of the proposed algorithms.

We organize the rest of this paper as follows. Preliminaries are presented in Section 2. The algorithms are proposed in Section 3, where the theoretical analysis of their well-definedness and global convergence is also given. An application of the algorithms to probabilistic lot sizing with service levels is presented in Section 4. The conclusion is given in Section 5.

2. Preliminaries

A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is said to be $C$-convex iff, for any $x, y \in \mathbb{R}^n$ and any $t \in [0, 1]$, $f(tx + (1-t)y) \preceq_C t f(x) + (1-t) f(y)$. A function $f$ is said to be strictly $C$-convex iff, for any $x \neq y$ and any $t \in (0, 1)$, $f(tx + (1-t)y) \prec_C t f(x) + (1-t) f(y)$. Given $\epsilon \ge 0$ and a proper convex function $h: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, the $\epsilon$-subdifferential of $h$ at $x \in \operatorname{dom} h$ is a possibly empty, closed, and convex set defined as $\partial_\epsilon h(x) = \{x^* \in \mathbb{R}^n : h(y) \ge h(x) + \langle x^*, y - x \rangle - \epsilon \text{ for all } y \in \mathbb{R}^n\}$. We write $\partial h(x)$ for $\partial_0 h(x)$, and $\partial_\epsilon h(x)$ is nonempty, convex, and compact (see [13]) if $x \in \operatorname{int}(\operatorname{dom} h)$. When $\epsilon = 0$, $\partial_\epsilon h(x)$ reduces to the usual classical subdifferential [14]. The indicator function $\delta_X$ of a set $X$ is defined by $\delta_X(x) = 0$ if $x \in X$ and $\delta_X(x) = +\infty$ if $x \notin X$. We denote by $N_X(x)$ the normal cone to the set $X$ at $x \in X$, which can be described as $N_X(x) = \{v \in \mathbb{R}^n : \langle v, y - x \rangle \le 0 \text{ for all } y \in X\}$.
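For intuition, a standard one-dimensional example (our illustration): for $h(x) = |x|$ and $x > 0$,
\[ \partial_\epsilon h(x) = \Big[ \max\Big\{-1,\; 1 - \frac{\epsilon}{x}\Big\},\; 1 \Big], \]
which collapses to the classical subdifferential $\{1\}$ as $\epsilon \downarrow 0$ and enlarges as the accuracy requirement is relaxed.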

To facilitate the discussion in this paper, we need the following refined conjugate epigraphical and subdifferential rules from convex analysis, presented in [15].

Lemma 1. Assume that $h_1, h_2: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ are l.s.c., proper, and convex. If $\operatorname{dom} h_1 \cap \operatorname{dom} h_2 \neq \emptyset$, then the following conclusions are equivalent: (i) the set $\operatorname{epi} h_1^* + \operatorname{epi} h_2^*$ is closed in $\mathbb{R}^n \times \mathbb{R}$; (ii) the refined conjugate epigraphical rule holds: $\operatorname{epi} (h_1 + h_2)^* = \operatorname{epi} h_1^* + \operatorname{epi} h_2^*$.

Throughout this paper, the equivalent conditions in Lemma 1 are assumed to hold, which means that the subdifferential sum rule $\partial(h_1 + h_2)(x) = \partial h_1(x) + \partial h_2(x)$ holds for all $x \in \operatorname{dom} h_1 \cap \operatorname{dom} h_2$ [15].

In multiobjective problems, in general no single solution minimizes all objective functions simultaneously; the decision-maker therefore has to trade off objectives: improving some objectives means giving up others. So the notion of an optimum has to be replaced by that of a Pareto optimum, which can be defined as follows: $\bar{x} \in X$ is called a Pareto optimum (PO) of (1) if there is no $x \in X$ such that $F(x) \preceq_C F(\bar{x})$ and $F(x) \neq F(\bar{x})$, and a weak Pareto optimum (WPO) of (1) if there is no $x \in X$ such that $F(x) \prec_C F(\bar{x})$. $\bar{x}$ is named a local PO or a local WPO of (1) if and only if there is a neighborhood $U$ of $\bar{x}$ such that $\bar{x}$ is a PO or a WPO in $X \cap U$.
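A minimal example of this trade-off (our illustration): take
\[ F(x) = (x,\; 1 - x), \quad X = [0, 1], \quad C = \mathbb{R}^2_+ . \]
Every $x \in [0, 1]$ is a Pareto optimum, since decreasing one component of $F$ necessarily increases the other.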

To analyze the optimality conditions of (1), we make the following assumption.

(A1) Assume that the functions $f$ and $g: \mathbb{R}^n \to \mathbb{R}^m$ are $C$-convex. Assume that $\bar{x}$ is a locally Pareto efficient solution of (1) and that $g$ is finite on a neighborhood of $\bar{x}$.

Under assumption (A1), the subdifferentials of $g$ involved below are guaranteed to be nonempty. Assumption (A1) will be used to prove Theorem 2.

Theorem 2 (necessary optimality conditions for (1)). Suppose that assumption (A1) holds and $\bar{x}$ is a locally WPO of (1). Then there exists $\lambda \in C^* \setminus \{0\}$ such that
\[ \partial\big(\langle \lambda, g \rangle\big)(\bar{x}) \subseteq \partial\big(\langle \lambda, f \rangle + \delta_X\big)(\bar{x}). \tag{7} \]

Proof. This theorem can be proved by considering two cases for $\partial(\langle \lambda, g \rangle)(\bar{x})$: either it is empty or it is not. The first case means that the inclusion (7) holds trivially. For the other case, take any $y \in \partial(\langle \lambda, g \rangle)(\bar{x})$ with $\lambda \in C^*$, $\lambda \neq 0$. We claim that $\bar{x}$ is a local optimal solution to the scalar optimization problem (8) of minimizing $\langle \lambda, f(x) \rangle - \langle y, x \rangle$ over $x \in X$. We prove the claim by contradiction; that is, suppose it does not hold. Then, for any given neighborhood $U$ of $\bar{x}$, there is $x \in X \cap U$ at which the objective of (8) takes a strictly smaller value than at $\bar{x}$. Combining this with the subgradient inequality for $y$ yields $\langle \lambda, F(x) \rangle < \langle \lambda, F(\bar{x}) \rangle$, which contradicts the assumption that $\bar{x}$ is a locally WPO; hence $\bar{x}$ is a local optimum of (8). To show that (7) is true, we only need to prove that $y \in \partial(\langle \lambda, f \rangle + \delta_X)(\bar{x})$ for every such $y$. Since the objective of (8) is convex, $\bar{x}$ also globally solves the convex optimization problem (9) of minimizing $\langle \lambda, f(x) \rangle - \langle y, x \rangle + \delta_X(x)$ over $\mathbb{R}^n$. Then the assertion of this theorem follows directly from the necessary optimality conditions of (9).

Usually, it is difficult to use the necessary condition in the above theorem to design solution algorithms for (1). Hence, a different optimality notion is needed, and this paper seeks a critical point of (1), that is, a point $\bar{x} \in X$ such that $\partial(\langle \lambda, g \rangle)(\bar{x}) \cap \partial(\langle \lambda, f \rangle + \delta_X)(\bar{x}) \neq \emptyset$ for some $\lambda \in C^* \setminus \{0\}$. Define $G \subseteq C^* \setminus \{0\}$ to be a compact set such that the cone generated by its convex hull is $C^*$ (for $C = \mathbb{R}^m_+$ one may take the standard unit vectors). Then an alternative characterization of criticality is given in the following result, which is useful in the subsequent discussion.
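For orientation, in the scalar case ($m = 1$, $C = \mathbb{R}_+$) this criticality notion reads, in the form standard in the DC literature,
\[ \partial g(\bar{x}) \cap \partial\big(f + \delta_X\big)(\bar{x}) \neq \emptyset, \]
that is, some subgradient of the subtracted convex part $g$ is also a subgradient of $f + \delta_X$ at $\bar{x}$; this intersection condition is weaker than the inclusion (7) in Theorem 2.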

Theorem 3. $\bar{x}$ is a critical point of (1) if and only if there is a subgradient $y$ of the concave part at $\bar{x}$ such that $\bar{x}$ globally solves the convex min–max problem (11).

Proof. It is obvious that problem (11) is convex. Hence, the proof reduces to showing that $\bar{x}$ is a critical point if and only if $\bar{x}$ is an optimum of (11). We first prove that $\bar{x}$ is critical iff there is $y$ such that $\bar{x}$ is a (globally) WPO of the convex vector optimization problem (12). By the necessary optimality conditions, if $\bar{x}$ is a WPO of (12), then there exists some $\lambda \in C^*$ with $\lambda \neq 0$ for which the corresponding scalarized condition holds; this implies that the claim is true. Moreover, it follows from the definition of $G$ that the cone generated by its convex hull is $C^*$, which means that there are an integer $s$, scalars $\alpha_i \ge 0$, and elements $\lambda_i \in G$, $i = 1, \dots, s$, representing such a $\lambda$. Hence, a WPO satisfies (13). Now the necessary optimality conditions of (11) are analyzed. By the formula for the subdifferential of a maximum of convex functions and the assumption on $G$, if $\bar{x}$ is an optimal solution to (11), there are a positive integer $s$, scalars $\alpha_i \ge 0$, and elements $\lambda_i \in G$ with $\sum_{i=1}^{s} \alpha_i = 1$ such that (14) holds. This result together with (13) leads to the assertion of the theorem.

By Theorem 3, finding a critical point of (1) can be equivalently transformed into solving a min–max problem. Building on this observation, proximal point algorithms for solving (1) are presented in the following section.

3. Main Results

3.1. Algorithm 4

First, a well-defined and globally convergent algorithm for solving (1) is proposed.

Algorithm 4.
Step 0. Choose a small enough tolerance $\epsilon > 0$ and an initial point $x^0 \in X$. Select a positive proximal constant $c$. Let $k = 0$.
Step 1. At the $k$th iteration, calculate a subgradient of the concave part $g$ at $x^k$. Solve the proximal subproblem (15) built from this subgradient and let $x^{k+1}$ be an optimum of it.
Step 2. If $\|x^{k+1} - x^k\| \le \epsilon$, then stop; else set $k = k + 1$ and go to Step 1.
In Step 1, the subproblem (15) is convex and must be solved at every iteration; its first-order optimality conditions follow from the subdifferential sum rule, with the multipliers and subgradients as in Theorem 3. We note that the algorithm does not rely on scalarization approaches, which are usually used in solving vector optimization; that is, the algorithm does not use a priori chosen weighting parameters for the components of the vector objective function.
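Since the form of subproblem (15) depends on the cone and on the chosen subgradient, the following is only a minimal Python sketch (ours) of the scalar instance ($m = 1$, $C = \mathbb{R}_+$) of this proximal DC iteration. `scipy.optimize.minimize` plays the role of the convex-subproblem solver (the paper's experiments use MATLAB's `fmincon`); the function names, the smoothness of $g$, and the unconstrained choice $X = \mathbb{R}^n$ are our simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def dc_proximal_point(f, g_grad, x0, c=1.0, eps=1e-6, max_iter=500):
    # Scalar DC proximal point sketch: minimize f(x) - g(x) with f, g convex.
    # Step 1: linearize g at x_k via a (sub)gradient y_k, then solve the
    # convex proximal subproblem
    #   x_{k+1} = argmin_x f(x) - <y_k, x - x_k> + (c/2)||x - x_k||^2.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        y = g_grad(x)  # subgradient of g at x_k (a gradient, assuming g is smooth)
        sub = lambda z, x=x, y=y: f(z) - y @ (z - x) + 0.5 * c * (z - x) @ (z - x)
        x_new = minimize(sub, x).x            # convex subproblem (Step 1)
        if np.linalg.norm(x_new - x) <= eps:  # stopping rule (Step 2)
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Toy DC instance: f(x) = ||x||^2 (convex), g(x) = ||x - a||^2 / 2 (convex),
# so F(x) = f(x) - g(x) happens to be convex with minimizer x = -a.
a = np.array([1.0, -1.0])
f = lambda z: z @ z
g_grad = lambda z: z - a
x_star, iters = dc_proximal_point(f, g_grad, x0=np.zeros(2))
```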

We note that Algorithm 4 aims at obtaining one Pareto optimal solution. In recent years, classical iterative methods for scalar optimization (e.g., descent-direction-type and proximal point-type methods) have been extended to vector problems in order to obtain one solution [16–18]. In this respect, the method proposed in this paper is similar in spirit. However, when the entire Pareto surface (the Pareto optimal solution set) is needed, this method may fail, as shown by the following example given in Antoni and Giannessi [19].

Example 5. Consider the bilevel optimization problem (17) (the upper level), whose feasible set is the Pareto optimal solution set $S$ of the lower level vector optimization problem (18), where "$\min_C$" marks Pareto optimality with respect to $C$. By a theorem of [12] and a proposition of [20], the Pareto optima of the lower level problem can be characterized explicitly, and the optimal value of the bilevel problem (17) is attained on $S$. When Algorithm 4 is used, it returns a single Pareto optimal point rather than the whole set $S$, and the value of the upper level objective at this point is larger than the true optimal value. Since Algorithm 4 cannot obtain the entire solution set $S$, the method fails to find the optimum of the bilevel problem.

For Algorithm 4, we have the following conclusion.

Theorem 6. Assume that the sequence $\{x^k\}$ is generated by Algorithm 4. If one sets $\epsilon = 0$, then the following conclusions about $\{x^k\}$ hold: (i) either Algorithm 4 stops at a critical point of problem (1), (ii) or there is $\sigma > 0$ such that the sufficient descent estimate $\max_{\lambda \in G} \langle \lambda, F(x^{k+1}) - F(x^k) \rangle \le -\sigma \|x^{k+1} - x^k\|^2$ holds for all $k$.

Proof. If the algorithm stops, then $x^k$ is a critical point of (1) by the definition of criticality. Otherwise, since $x^{k+1}$ is generated as an optimum of the proximal subproblem, the subproblem objective value at $x^{k+1}$ is no larger than that at $x^k$; together with the assumption on the proximal parameter, this implies that there is $\sigma > 0$ such that (21) holds. The subgradient inequality for the concave part implies that, for any $x$, (22) holds. Summing (21) and (22) leads to the desired descent estimate. This estimate, together with the choice $\epsilon = 0$ and the positivity of the proximal parameter, implies that the assertion is true.

Theorem 6 means that Algorithm 4 either generates a descent sequence satisfying the second conclusion or stops at a critical point. To prove global convergence, the following assumptions are proposed:

(A2) The set $X$ is bounded and the iterations do not stop after finitely many steps.

(A3) For all sufficiently large $k$, there is $\lambda \in G$, independent of $k$, such that $\langle \lambda, F(x^{k+1}) \rangle \le \langle \lambda, F(x^k) \rangle$.

Actually, under assumption (A2), the set $X$ is compact and convex. Together with the $C$-convexity of $f$ and $g$, this implies that the WPO set of (1) is nonempty. So, to find a WPO of the problem in this case, it is sufficient to solve $\min_{x \in X} \langle \lambda, F(x) \rangle$ for any fixed $\lambda \in C^* \setminus \{0\}$. Therefore, at each iteration of Algorithm 4, we can replace subproblem (15) with the corresponding scalarized proximal subproblem with fixed $\lambda$, which means that assumption (A3) holds in this case.
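The scalarization fact behind this remark, stated in our notation:
\[ \lambda \in C^* \setminus \{0\}, \quad \bar{x} \in \arg\min_{x \in X} \langle \lambda, F(x) \rangle \;\Longrightarrow\; \bar{x} \text{ is a WPO of (1),} \]
since $F(x) \prec_C F(\bar{x})$ for some $x \in X$ would mean $F(\bar{x}) - F(x) \in \operatorname{int} C$ and hence $\langle \lambda, F(x) \rangle < \langle \lambda, F(\bar{x}) \rangle$, a contradiction.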

Assumption (A3) means that, after finitely many iterations, Algorithm 4 generates a descent sequence for the scalarized function $\langle \lambda, F \rangle$ on $X$. Hence, proving the global convergence of Algorithm 4 is equivalent to showing that the sequence generated by Algorithm 4 converges globally to a critical point of this function. For this purpose, by the conclusion of [20], it is sufficient to show that the sequence generated by Algorithm 4 satisfies three properties: descent, closedness, and boundedness. First, assumption (A3) implies that Algorithm 4 generates descent steps for the function $\langle \lambda, F \rangle$. Second, the compactness of $X$ implies the boundedness of $\{x^k\}$. Finally, the algorithmic map is closed; this follows from the theorem on the composition of closed point-to-set maps due to Zangwill [21] and the assumptions on the functions $f$ and $g$. These facts imply that the following theorem is true.

Theorem 7. Assume that $\{x^k\}$ is generated by Algorithm 4. If assumptions (A2) and (A3) hold, then any accumulation point of $\{x^k\}$ is a critical point of problem (1).

3.2. Algorithm 8

We now present another algorithm for solving (1). Both the well-definedness and global convergence of Algorithm 8 are also established.

Algorithm 8.
Step 0. Choose a sufficiently small tolerance $\epsilon > 0$ and an initial point $x^0 \in X$. Choose a positive constant $c$. Let $k = 0$.
Step 1. At the $k$th iteration, calculate a subgradient of the concave part $g$ at $x^k$. Solve the corresponding proximal subproblem and let $x^{k+1}$ be an optimum of it.
Step 2. If $\|x^{k+1} - x^k\| \le \epsilon$, then stop; else set $k = k + 1$ and go to Step 1.
At every iteration of Algorithm 8, as in Algorithm 4, a convex subproblem is solved; in this case a closed-form expression for its optimum is available. The following two conclusions hold for Algorithm 8.

Theorem 9. If one sets $\epsilon = 0$ and chooses the algorithm parameters appropriately, then the sequence $\{x^k\}$ generated by Algorithm 8 satisfies the following: (i) either the algorithm stops at a critical point of problem (1), (ii) or there is $\sigma > 0$ such that a sufficient descent estimate analogous to that of Theorem 6 holds.

Proof. If the algorithm stops, then it follows from Theorem 2 that $x^k$ is a critical point of (1). Otherwise, according to the generation of $x^{k+1}$, a subproblem inequality analogous to that in the proof of Theorem 6 holds, which means that there is $\sigma > 0$ such that (30) holds. The subgradient definition leads to (31) for any $x$. Summing (30) and (31) yields the desired descent estimate, which together with the parameter choices implies that the assertion is true.

Theorem 9 implies that the sequence generated by Algorithm 8 either satisfies the second conclusion or stops at a critical point. The following assumption is proposed for proving the global convergence:

(A4) For all sufficiently large $k$, there is $\lambda \in G$, independent of $k$, such that the iterates of Algorithm 8 provide descent for the scalarized function $\langle \lambda, F \rangle$.

Similar to the global convergence analysis for Algorithm 4, Algorithm 8 is also globally convergent.

Theorem 10. Assume that $\{x^k\}$ is generated by Algorithm 8. If assumptions (A2) and (A4) hold, then any accumulation point of $\{x^k\}$ is a critical point of problem (1).

3.3. $\epsilon$-Proximal Algorithm

We propose an inexact version of the above algorithms, namely, an $\epsilon$-proximal algorithm for obtaining an $\epsilon$-critical point of problem (1) by utilizing the $\epsilon$-subdifferential. For this purpose, we define an $\epsilon$-critical point of (1) as a point $\bar{x} \in X$ at which the criticality condition holds with the subdifferentials replaced by $\epsilon$-subdifferentials, where $\partial_\epsilon$ is the $\epsilon$-subdifferential defined above. This algorithm is related to the inexact proximal point algorithms proposed in [22].

$\epsilon$-Proximal Algorithm

Step 0. Choose a small enough tolerance $\epsilon > 0$ and an initial point $x^0 \in X$. Let the error tolerances $\{\epsilon_k\}$ and the proximal constant $c > 0$ be given. Set $k = 0$.

Step 1. At the $k$th iteration, calculate an $\epsilon_k$-subgradient of the concave part $g$ at $x^k$. The next iterate $x^{k+1}$ is required to satisfy the corresponding $\epsilon_k$-subdifferential inclusion for the proximal subproblem.

Step 2. Stop if $\|x^{k+1} - x^k\| \le \epsilon$; else set $k = k + 1$ and go to Step 1.
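As a concrete (and simplified) reading of Steps 0–2, the following hedged Python sketch (ours) treats the scalar case, solving the $k$th convex subproblem only to an accuracy $\epsilon_k$ that decreases geometrically, so that $\sum_k \epsilon_k < \infty$; the loose gradient tolerance passed to `scipy.optimize.minimize` stands in for the $\epsilon_k$-subdifferential relaxation, and all names and parameter choices are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def inexact_dc_proximal(f, g_grad, x0, c=1.0, tol=1e-6,
                        eps0=1e-2, theta=0.5, max_iter=500):
    # Inexact (epsilon-proximal) variant of the scalar DC proximal sketch:
    # the k-th convex subproblem is solved only to accuracy eps_k = eps0 * theta**k,
    # giving a summable error sequence (sum_k eps_k < infinity).
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        eps_k = eps0 * theta ** k
        y = g_grad(x)  # (epsilon-)subgradient of g at x_k; a gradient if g is smooth
        sub = lambda z, x=x, y=y: f(z) - y @ (z - x) + 0.5 * c * (z - x) @ (z - x)
        # loose inner tolerance eps_k models the epsilon-subdifferential relaxation
        x_new = minimize(sub, x, method="BFGS", options={"gtol": eps_k}).x
        if np.linalg.norm(x_new - x) <= tol:  # outer stopping rule (Step 2)
            return x_new, k + 1
        x = x_new
    return x, max_iter
```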

This is an approximate version of Algorithm 4 obtained by using the $\epsilon$-subdifferential instead of the subdifferential. An approximate version of Algorithm 8 can be proposed similarly, by replacing the subdifferential with the $\epsilon$-subdifferential, and the corresponding properties can be established analogously. The next two theorems present the descent and global convergence properties of the approximate algorithm, respectively.

Theorem 11. Assume that $\{x^k\}$ is generated by the $\epsilon$-proximal algorithm with parameters $\epsilon_k \ge 0$ and $c_k \ge c > 0$. If the algorithm does not stop after finitely many steps, then $\{x^k\}$ is an $\epsilon_k$-descent sequence; that is, there is $\sigma > 0$ such that $\max_{\lambda \in G} \langle \lambda, F(x^{k+1}) - F(x^k) \rangle \le -\sigma \|x^{k+1} - x^k\|^2 + \epsilon_k$ for all $k$.

Proof. The conclusion follows from a theorem of [23] and Theorem 6.

To show global convergence, the following assumptions are presented:

(A5) For all sufficiently large $k$, there is $\lambda \in G$, independent of $k$, such that the iterates of the $\epsilon$-proximal algorithm provide descent for $\langle \lambda, F \rangle$ up to the tolerance $\epsilon_k$;

(A6) For any $\lambda \in G$, the function $\langle \lambda, F(\cdot) \rangle$ is bounded below on $X$; there is $c > 0$ such that $c_k \ge c$ for all $k$; and the tolerance parameters $\{\epsilon_k\}$ satisfy $\sum_{k=0}^{\infty} \epsilon_k < \infty$.

Hence, similar to a theorem of [24], we can prove that the approximate algorithm has the following convergence results.

Theorem 12. Suppose that $\{x^k\}$ is generated by the $\epsilon$-proximal algorithm. If assumptions (A2), (A5), and (A6) hold, then any accumulation point of $\{x^k\}$ is a critical point of problem (1), and $\{x^k\}$ is asymptotically regular in the following sense: $\lim_{k \to \infty} \|x^{k+1} - x^k\| = 0$.

4. Numerical Tests

4.1. Probabilistic Lot Sizing with Service Levels

In probabilistic lot sizing with service levels, the order quantities are determined at the beginning of the planning horizon so as to simultaneously minimize the expected total cost and maximize the probability that both the inventory balance and the service-level requirements are met. The model is the bi-objective stochastic program (38) described in [25], in which the cost coefficients and service-level targets are parameters, the order quantities and auxiliary inventory quantities are decision variables, and the demands are stochastic variables.

A fundamental difficulty in solving (38) is that evaluating a probability function of the form $p(x) := P\{c(x, \xi) > 0\}$, where $c$ is the constraint function and $\xi$ is the random vector, is nontrivial, and a closed form is rarely available. Therefore, many approximation methods have been proposed. The CVaR approximation presented by Rockafellar and Uryasev is one of them, and it is the best convex conservative approximation [26]. However, Hong et al. pointed out that the CVaR approximation is not a good approximation [27] and presented a DC approach that approximates $p$ better than the CVaR approximation does [27]. Define $[a]_+ := \max\{a, 0\}$. The DC approximation of $p$ can be stated as follows:
\[ \hat{p}_\epsilon(x) = \frac{1}{\epsilon} \Big( E\big[c(x, \xi) + \epsilon\big]_+ - E\big[c(x, \xi)\big]_+ \Big), \tag{39} \]
where $\epsilon > 0$ and both expectations are convex in $x$ whenever $c(\cdot, \xi)$ is convex, so that $\hat{p}_\epsilon$ is a DC function. Note that $\hat{p}_\epsilon(x) \ge p(x)$ for all $x$, so (39) is a conservative approximation.

We use the DC approximation (39) to replace the probability function in (38). We assume that the stochastic variables are defined on a sample space $\Omega$ with probability measure $P$. For an integrable function $\phi$, the Monte Carlo sampling estimate of $E[\phi(\xi)]$ is obtained by taking independent and identically distributed random samples $\xi^1, \dots, \xi^N$ from $P$ and letting $E[\phi(\xi)] \approx \frac{1}{N} \sum_{j=1}^{N} \phi(\xi^j)$. In this paper, we use the resulting sample-average function as the approximation of (39), where the samples are stochastically generated and $\mathbf{1}\{A\}$ denotes the indicator that equals 1 if the event $A$ occurs and 0 otherwise. The feasible set of the approximating problem is defined accordingly.
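A minimal Python sketch (ours) of this sample-average construction, assuming the form of (39) given above; the function name and the toy constraint `c` are our illustrations.

```python
import numpy as np

def dc_prob_estimate(c, x, samples, eps=0.05):
    # Sample-average estimate of the DC approximation (39) of P{c(x, xi) > 0}:
    #   p_eps(x) = (1/eps) * ( E[(c(x, xi) + eps)_+] - E[(c(x, xi))_+] ),
    # a difference of two convex functions of x whenever c(., xi) is convex.
    vals = np.array([c(x, xi) for xi in samples])
    plus = lambda t: np.maximum(t, 0.0)
    return (plus(vals + eps).mean() - plus(vals).mean()) / eps

# Toy check: c(x, xi) = xi - x with xi ~ N(0, 1); P{xi > 0} = 0.5 at x = 0,
# and the estimate approaches 0.5 from above as eps shrinks (conservativeness).
rng = np.random.default_rng(0)
xi_samples = rng.standard_normal(20000)
estimate = dc_prob_estimate(lambda x, xi: xi - x, 0.0, xi_samples)
```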

With the above notation, Algorithm 4, specialized to problem (38), can be described as follows.

Algorithm 13.   

Step 0. Choose a tolerance $\epsilon > 0$, a proximal constant $c > 0$, and an initial point $x^0$. Let $k = 0$.

Step 1. Solve the convex subproblem obtained from (15) by replacing the probability terms with their sample-average DC approximations, where the indicator function equals 1 if its argument holds and 0 otherwise; let $x^{k+1}$ be the optimum of this subproblem.

Step 2. If $\|x^{k+1} - x^k\| \le \epsilon$, then stop; otherwise, set $k = k + 1$ and go to Step 1.

Similarly, we can specialize Algorithm 8 and the $\epsilon$-proximal algorithm to problem (38).

4.2. Numerical Results

To show the efficiency of the proposed methods, this section presents several numerical tests in which Algorithms 4 and 8 and the $\epsilon$-proximal algorithm are used to solve problem (38). The codes are written in Matlab 7.10, with the built-in solver "fmincon" used to solve the convex subproblems. The tests are conducted on a DELL computer with 4.00 GB of memory and an Intel(R) Core(TM) i5-2400 processor (3.10 GHz).

The parameters used in the tests are set as follows: the problem data are stochastically generated; the initial point is stochastically chosen from the feasible region of (38); and the stopping tolerance and the remaining parameters are fixed as described above. The numerical results are reported in Table 1, which gives the computing time in seconds ("CPU") and the number of iterations ("Iter"); the maximal number of iterations is set to 500. From Table 1, we conclude that Algorithm 8 underperforms Algorithm 4 with respect to CPU time and the number of iterations. The $\epsilon$-approximation of Algorithm 8 also underperforms the corresponding $\epsilon$-approximation of Algorithm 4 in both respects. These results confirm the theoretical analysis and show that the proposed algorithms are effective in solving (38).

5. Conclusion

This paper proposes both exact and inexact proximal point algorithms for solving vector DC optimization problems. The proposed algorithms enjoy both well-posedness and global convergence under suitable assumptions. An application to probabilistic lot sizing with service levels is considered: we first show that this model can be equivalently cast as a multiobjective DC optimization problem, and then the proposed algorithms are applied to the resulting problem. The numerical results show that the methods are efficient. Future work includes analyzing the local convergence of the proposed algorithms and constructing methods for vector DC optimization problems with nonsmooth objectives.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Authors’ Contributions

All authors contributed equally and significantly to writing this paper. All authors read and approved the final manuscript.

Acknowledgments

This work is supported by the National Social Science Foundation of China (no. 17BGL083).