Abstract

In this paper, we study the convex optimization problem with linear constraint, and the objective function is composed of m separable convex functions. Considering the special case where the objective function is composed of two separable convex functions, the auxiliary problem principle (APP) is an effective parallel distributed algorithm for solving the special case. Inspired by the principle of APP, a natural idea to solve separable convex optimization problem with m ≥ 3 is to extend the method of APP, resulting in the APP-like algorithm. The convergence of the APP-like algorithm is not clear yet. In this paper, we give a sufficient condition for the convergence of the APP-like algorithm. Specifically, the APP algorithm is a special case of the APP-like algorithm when m = 2. However, simulation results show that the convergence efficiency of the APP-like algorithm is affected by the selection of penalty parameter. Therefore, we propose an improved APP-like algorithm in this paper. Simulation results show that the improved APP-like algorithm is robust to the selection of penalty parameter and that the convergence efficiency of the improved APP-like algorithm is better when compared with the APP-like algorithm.

1. Introduction

For the following convex problem with linear constrain,where and are convex function, and are closed convex sets, and are the given fixed matrices, and is the given fixed vector.

Further analysis of problem (1) shows that the objective function of problem (1) is separable without intersecting variables. Then, a natural idea is whether a splitting algorithm can be adopted to solve problem (1). The alternating direction multiplier method (ADMM) is an effective distributed iteration strategy [1] for solving problem (1) and has been widely used to solve engineering problems [2, 3]. For problem (1), the corresponding ADMM iteration strategy can be expressed as follows [4]:where represents the Lagrange multiplier, represents penalty parameter, and denotes the inner product, i.e., .

In this paper, we further consider the general case of problem (1), the objective functions are composed of m separable convex functions () with linear constraint.

By comparing problem (1) with problem (3), an intuitive idea is to directly apply ADMM to solve problem (3) resulting in an ADMM-like iteration strategy is shown in the following equation:

For problem (1), the ADMM-like method is the classical ADMM on the condition that m = 2. It is clear that the classical ADMM is convergent [1]. However, we cannot prove the convergence of ADMM-like iteration strategy when [5]. To tackle the divergence of ADMM-like method, predictive correction ADMM-like iteration strategies ADM-G (ADM with Gaussian back substitution) [6] and ADBC (alternating direction-based contraction method) [7] are proposed. Specifically, for given , the prediction results is generated by ADMM-like method (4) and the correction results is generated by , where represents the correction step size (The expressions of parameter are different in ADM-G and ADBC). The introduction of correction step ensures the convergence of the ADMM-like iterative strategy, but the correction step needs to calculate the inverse matrix, which greatly increases the complexity of the iterative strategy.

ADM-G and ADBC both belong to the Gaussian Seidel iterative scheme. The Gauss-Seidel iterative scheme can use the latest iteration information in solving each sub-problem to obtain a better convergence rate. At the same time, due to the serial computation form of the Gauss-Seidel iterative scheme, the total time consumption of each iteration in the prediction step is equal to the sum of the time consumption of all sub-problems. According to the above description, the Gauss-Seidel iterative scheme requires a long computing time when the problem scale is large (in other words, when the value of m is large). Then, the parallel form is recommended when the problem scale is large. The auxiliary problem principle (APP), which was proposed by G. Cohen in 1980 [8], is an effective parallel distributed algorithm [9]. The iteration strategy of the auxiliary problem principle for solving (1) can be expressed as follows:where is a sufficient condition for the convergence of auxiliary problem principle iterative strategy [10], other symbols have the same meaning as (4). Reference [11] further discussed APP and derive its O(1/n) convergence rate . In order to facilitate the analysis of the mathematical properties of the auxiliary problem principle, the equivalent form of the APP is given [11].

Inspired by the principle of APP, a natural idea for solving problem (3) is to extend the APP from problem (1) to problem (3), resulting in an APP-like algorithm as follows:

The convergence of APP-like algorithm (7) is not clear yet. In this paper, we prove that is a sufficient condition for the convergence of APP-like algorithm. Specifically, the APP algorithm is a special case of the APP-like algorithm when m = 2. This is the first innovation of this paper.

For APP algorithm, the penalty parameter c is a given positive number. For APP-like algorithm, the setting method of the penalty parameter c is same as that in APP algorithm. Simulation results show that the convergence efficiency of APP-like algorithm is affected by the selection of penalty parameter. Therefore, we propose an improved APP-like algorithm in this paper. Compared with APP-like algorithm, the improved APP-like algorithm is robust in terms of the selection of penalty parameter, and the convergence efficiency of the improved APP-like algorithm is better. This is the second innovation of this paper.

2. Preliminaries

For a general convex optimization problem is as follows:where is the convex function and is the closed convex set. Assuming is the optimal solution of the convex optimization problem (8), it can be shown that point must be in the feasible region, and all feasible directions starting from point are the ascending directions of the convex optimization problem (8). Supposing we use to represent the first derivative of function , using to represent all feasible directions of the function at point x, and using to represent all descending directions of the function at point x:

According to the definition (9 and 10), it can be known that the sufficient and necessary conditions for to be the optimal solution of the convex optimization problem (8) can be expressed as follows:

According to the above description, it can be known that solving the convex optimization problem (8) is equivalent to solving the following variational inequality problem:

It is clear that problem (8) contains no equality constraint condition, and problem (3) contains an equality constraint condition. Problem (3) can be easily transformed into a mathematical form of problem (8) by means of Lagrange function. The symbol in (13) represents the Lagrange multiplier. According to the above description, the variational inequality (VI) can be used to express the first-order optimality condition of the optimization problem (3) [12]. Let , solving (3) is equivalent to solving , which satisfies following inequalities:and the compact form of (13) can be written as follows:whereand is monotonic operator.

Definition 1. For the operator F, if the operator F satisfiesOperator F is monotone operator [13].

3. Convergence Analysis of the APP-Like Algorithm

Lemma 1. Let sequence is generated by APP-like algorithm (7). If , then we obtain:

Proof of Lemma 1. According to the description in section 2, solving the sub-problems in (7) is equivalent to solving , which satisfiesConsideringCombing (19)–(22), we obtainwhereBy setting in (23), we obtainMapping F is monotone, so we haveBy combining (14) and (26), we obtainBy combining (25)–(27), we obtainBy using (28), we obtainAnd (29) can be rewritten as follows:whereConsideringFor (32), it is clear that if , is true only and only if . Therefore, we can say that matrix G is a symmetric positive matrix when and (30) is Fejér monotone [14]. Then, we can obtainBased on above discussion, the proof of Lemma 1 is completed.
According to the description of Lemma 1, it is clear that if all are full column rank matrices, then we can obtain and . In other words, the sequence generated by APP-like algorithm (7) approximates the optimal solution of problem (3) on the premise that all are full column rank matrices and . However, in solving practical problems, we cannot guarantee that all are full column rank matrices. For the general matrix , Lemma 2 is given.

Lemma 2. Let sequence is generated by APP-like algorithm (7). If problem (3) is convex, continuous, differentiable, and satisfies the following equation:then, the sequence converges to the optimal solution of problem (3).

Proof of Lemma 2. For convex optimization problem, according to the Karloch-Kuhn-Tucker condition (KKT condition), it is very convenient to determine whether the current iteration point is the optimal solution of the convex optimization problem. For problem (3), if the current iteration point satisfies the following equality constraints:then the current iteration point is the optimal solution of the optimization problem (3).
The sequence is generated by APP-like algorithm (7). It is clear that satisfy the following equality:Similarly, we can obtainandAccording to the description of Lemma 1, it is clear that,By combining (38)–(41), we can obtainTherefore, we can say that sequence generated by APP-like algorithm (7) converges to the optimal solution of problem (3) when problem (3) is convex, continuous, differentiable, and satisfies (35).
Therefore, the proof of Lemma 2 is completed.

4. The Improved APP-Like Algorithm and Convergence Analysis

In Section 3, we prove the convergence of the proposed APP-like algorithm. In this section, we will discuss the parameter selection of the proposed APP-like algorithm and give the improved APP-like algorithm. In addition, we will prove the convergence of the improved APP-like algorithm.

4.1. The Improved APP-Like Algorithm

For the proposed APP-like algorithm, the objective function contains , we can say that represents the penalty function for the equality constraint of the original optimization problem. The concept of penalty functions reminds us of the concept of exterior penalty function. To be specific, the solution of the exterior penalty function tends to be optimal solution as the penalty parameter tends to infinity. However, when the penalty parameter tends to infinity, it is easy to cause the ill of exterior penalty functions. Fortunately, for the proposed APP-like algorithm, the objective function contains not only the penalty function but also the Lagrange function. This method of combining the punishment function and the Lagrange function is called augmented Lagrange function method, and the obvious advantage is that the penalty parameter tends to infinity is not required as the convergence condition of the proposed APP-like algorithm.

According to the description of (30), the sequence generated by the proposed APP-like algorithm gradually approaches the optimal solution and the penalty functions gradually approach zero, which means the convergence efficiency of the proposed APP-like algorithm becomes progressively worse when the current iteration point approximates the optimal solution. Inspired by the method of exterior penalty function, we propose the iterative strategy of penalty parameter as follows:where is an integer greater than one. According to the description of (44) and considering the initial penalty parameter , it can be known that the sequence has upper and lower bounds, which means the iterative strategy of penalty parameter will not cause the ill-conditioned Hessian matrix of objective function.

For APP-like algorithm with m ≥ 3, the objective function contains and the parameter is set to be in order to ensure the convergence of APP-like algorithm. Based on the above discussion, we can give the improved the APP-like algorithm as follows:

4.2. Convergence Analysis of the Improved APP-Like Algorithm

Lemma 3. Let sequence is generated by improved APP-like algorithm (45). If problem (3) is convex, continuous, and differentiable, then sequence converges to the optimal solution of problem (3).

Proof. of Lemma 3.
For given , is generated by the improved APP-like algorithm. Based on the description of the improved APP-like algorithm, it is clear that and when . According to the proof of Lemma1, if , then we can getwhereIf , the matrix M is a fixed symmetric positive definite matrix. Similar to the analysis of (30), we can obtain (46) Fejér monotone. Then, we can obtainBased on the description of Lemma 2, it is clear that sequence converges to the optimal solution of problem (3) if it is convex, continuous, and differentiable.
Therefore, the proof of Lemma 3 is completed.

5. Application

In this section, we firstly give the mathematical model of the power system dynamic economic dispatching problem (DEDP) and point out that the mathematical model for DEDP can be easily transformed into the multiseparable convex problem with linear constraints studied in this paper. After that, we use the proposed APP-like algorithm and improved APP-like algorithm to solve DEDP and analyze the corresponding simulation results. We coded by the MATLAB 2018b, and all codes were implemented on Dell T5820 Server with Xeon- W-2102 CPU at 2.9 GHz and 16G of memory.

5.1. The Mathematical Model of the Dynamic Economic Dispatching Problem for Power System

The mathematical model of dynamic economic dispatching problem for power system can be expressed as follows:where F represents the objective function, N represents the total number of power generation unit, H represents the total dispatching period of power generation unit, represents the output of the ith power generation unit in the hth dispatching period, ai, bi, and ci are given parameters in objective function, represents the system load in the hth dispatching period, and and are the upper and lower ramp rate limits for the ith generation unit and are given.

For problem (50), the equivalent mathematical model can be rewritten as follows:where

It is clear that in (52) is full column rank matrices. Therefore, we can directly use the APP-like algorithm and improved APP-like algorithm to solve problem (51) in the form of distributed parallel.

5.2. Test System

In this section, a test system [15] containing ten generation units is presented to verify the validity and correctness of the proposed algorithms. The data refer to generation units are shown in Table 1. In addition, the demand of the ten-unit test system was divided into 24 intervals in reference [15]. Reference [16] employs the demand data in 1–8 intervals as the data refer to system demand. In this paper, we also employ the demand data in 1–8 intervals as in reference [16] and the data refer to system demand are shown in Table 2. For the test system, according to the description in Section 5.1, the original optimization problem can be decomposed into ten sub-optimization problems. The information shown in Table 1 shows that the output upper and lower limits, for the 10th generation unit, are both 55. Therefore, in the actual simulation process, we can directly set the output of the 10th generation unit to be 55, and only solve nine sub-optimization problems.

5.3. Simulation Analysis
5.3.1. APP-Like Algorithm for Solving Test System

In this section, we use APP-like algorithm to solve the test system and observe the influence of parameter on the convergence efficiency of APP-like algorithm. We set the maximum number of iterations as 50. For the iterative strategy of penalty parameter shown in (44), we set and . The convergence condition is set to . If the convergence condition is not satisfied when the number of iterations is greater than 50, then we can say that the corresponding algorithm fail to convergence with maximum iterations.

The simulation results based on the APP-like algorithm for the test system are shown in Table 3. In order to verify the influence of different penalty parameter settings on the convergence efficiency of the APP-like algorithm, we designed five different penalty parameter settings named Case1–Case5, respectively, which are shown in the first column in Table 3. After that, the second column represents the initial parameter setting of the APP-like algorithm, the third column represents the number of iterations when convergence conditions are met, and the fourth column represents the value of when convergence conditions is met or the number of iterations reaches the maximum iterations. In addition, the word “Fail” in Table 3 represents the corresponding algorithm fail to convergence with maximum iterations. The curve of changing with the number of iterations is shown in Figure 1.

According to the data in Table 3 and the curve in Figure 1, it can be known that the APP-like algorithm meets the convergence condition only under case5. By comparison, the APP-like algorithm fails to converge or stops convergence prematurely under case1-case4. For the further analysis of Figure 1 and Table 3, we find that the value of gradually approaches the convergence condition with the increase of the penalty parameter. Let's recall the iterative formula (7) of APP-like algorithm. Taking the ith sub-optimization problem as an example, there are two terms ( and ) related to the penalty parameter in the objective function. represents the punishment for the equality constraint. For case1-case4 in Table 3 and Figure 2, at the initial stage of iteration, it is obvious that is large, so can play a good punishment role. However, as approaches zero gradually, the penalty effect of decreases gradually when the penalty parameter is set smaller. This explains the reason why case1-case4 in Table 3 and Figure 2 fail to convergence.

The existence of is to ensure the convergence of the APP-like algorithm. However, we must realize that the existence of also prevents the update of the current iteration point. Therefore, we expect to be as small as possible under the premise of ensuring the convergence of the APP-like algorithm. In fact, for the APP-like algorithm, the value of penalty parameters and is given. That is to say that the APP-like algorithm ignores the role of in the iterative process on the premise of ensuring algorithm convergence.

5.3.2. Improved APP-Like Algorithm for Solving Test System

In this section, we use improved APP-like algorithm to solve the test system and observe the influence of parameter on convergence efficiency of improved APP-like algorithm. In order to comparison with the results of APP algorithm, the initial parameter settings of improved APP-like algorithm are the same as those of the APP algorithm. The simulation results based on improved APP-like algorithm for the test system are shown in Table 4. The columns in Table 4 have the same meaning as the columns in Table 3. The curve of based on improved APP-like algorithm changing with iterations is shown in Figure 2.

According to the information shown in Table 4 and Figure 2, we must admit that the trends of these curves of are not entirely consistent. However, the curves of with different initial penalty parameter values maintain similar trends, and can quickly meet convergence condition. That is to say, for improved APP-like algorithm, the selection of penalty parameters is robust to a certain extent.

The difference between APP-like algorithm and improved APP-like algorithm lies in the updating of penalty parameters. For APP-like algorithm, the penalty parameter is given in advance and remains unchanged in the iterative process. For improved APP-like algorithm, the initial value of penalty parameters is given, but it is updated (The iterative strategy of penalty parameter can be found in (44)) in the iterative process.

According to the analysis of APP-like algorithm in section 5.3.1, it can be found that APP-like algorithm has achieved better convergence effect by setting a larger penalty parameter value to ensure that plays an effective punishment role. Comparatively speaking, improved APP-like algorithm has achieved better convergence effect by updating the penalty parameter to ensure that plays an effective punishment role. Both methods can achieve better convergence characteristics, and we cannot say which algorithm is better.

Let's turn our attention to . APP-like algorithm ignores the role of because the parameters and c are given in advance. In fact, we expect to be as small as possible under the premise of ensuring the convergence of the algorithm.

According to the information shown in Table 4 and Figure 2, obviously, the smaller the initial value of penalty parameter is set, the better the convergence characteristic of improved APP-like algorithm will be. According to the description of iteration strategy (44) for penalty parameters, it can be known that penalty parameter has upper and lower bounds in the iterative process. For the improved APP-like algorithm, if we give a small initial value of the penalty parameter, the value of will be smaller throughout the iteration. This is why, in Figure 2, the smaller the initial value of penalty parameter is, the better the convergence efficiency is.

In the above part, we analyze the influence of penalty parameter on the convergence efficiency of the improved APP-like algorithm. If we further analyze , we will find that the value of is related not only to the punishment parameter but also to the parameter . The simulation results based on APP-like algorithm for the test system with different are shown in Table 5. The corresponding curve of is shown in Figure 3. According to the information shown in Table 5 and Figure 3, it is clear that, for the same initialization penalty parameter, the smaller is, the better the convergence efficiency of APPP algorithm is under the premise of ensuring the convergence of improved APP-like algorithm. The simulation results shown in Table 5 and Figure 3 are in agreement with our theoretical analysis of .

According to the theoretical analysis in Section 4 and the simulation results in Section 5, we suggest using the improved APP-like algorithm. Compared with the APP-like algorithm, the penalty parameter selection of improved APP-like algorithm is robust and the convergence efficiency of improved APP-like algorithm is better. Our suggestions for the initial parameters of improved APP-like algorithm are shown as follows:(1)It is better to select a smaller initial value of the penalty parameter, and 1 or less is suggested as the initial value of the penalty parameter.(2)On the premise of ensuring convergence, it is recommended to select parameter as small as possible, and is suggested, where m presents the objective functions, which is composed of m separable convex functions.

6. Conclusion

In this paper, we study the convex optimization problem with linear constraint and the objective function is composed of m separable convex functions. According to the properties of the objective function, it is natural to consider a distributed iterative strategy. ADM-G and ADBC are efficient algorithms to solve the optimization problem in the form of the Gaussian Seidel iterative scheme. Due to the serial computation form of Gauss-Seidel iterative scheme, ADM-G and ADBC needs more computing time when the problem scale is large (in other words, when the value of m is large). Then, a natural idea is to solve optimization problem in parallel. The auxiliary problem principle (APP) is an effective parallel distributed algorithm for solving separable convex optimization problem with m = 2. Inspired by the principle of APP, a natural idea is to extend the method of APP, resulting in an APP-like algorithm. Therefore, we firstly prove that is a sufficient condition for the convergence of APP-like algorithm. However, simulation results show the convergence efficiency of APP-like algorithm is sensitive to the selection of penalty parameter. In order to overcome the deficiency of APP-like algorithm, we proposed the improved APP-like algorithm with variable penalty parameter. Compared with APP-like algorithm, the improved APP-like algorithm is robust in terms of the selection of penalty parameter, and the convergence efficiency of improved APP-like algorithm is better.

Data Availability

The data in this paper are generated by MATLAB, and the MATLAB codes are available any time if needed from author Yaming Ren([email protected]).

Conflicts of Interest

There are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Guangxi Science and Technology Base and Talent Special Project (grant number: GuiKeAD20159077) and the Foundation of the Guilin University of Technology (grant number: GLUTQD2018001).