Abstract

The multiple-set split feasibility problem requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation will be closest to another family of closed convex sets in the image space. It can be a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator’s range. It generalizes the convex feasibility problem as well as the two-set split feasibility problem. In this paper, we will review and report some recent results on iterative approaches to the multiple-set split feasibility problem.

1. Introduction

1.1. The Multiple-Set Split Feasibility Problem Model

Intensity-modulated radiation therapy (IMRT) has received a great deal of attention recently; for related work, please refer to [1–29]. In IMRT, beamlets of radiation with different intensities are transmitted into the body of the patient. Each voxel within the patient then absorbs a certain dose of radiation from each beamlet. The goal of IMRT is to direct a sufficient dosage to those regions requiring the radiation, those designated as planned target volumes (PTVs), while limiting the dosage received by the other regions, the so-called organs at risk (OARs). The forward problem is to calculate the radiation dose absorbed in the irradiated tissue based on a given distribution of the beamlet intensities. The inverse problem is to find a distribution of beamlet intensities, the radiation intensity map, which will result in a clinically acceptable dose distribution. One important constraint is that the radiation intensity map must be implementable; that is, it must be physically possible to produce such an intensity map, given the machine's design. There will be limits on the change in intensity between two adjacent beamlets, for example.

The equivalent uniform dose (EUD) for tumors is the biologically equivalent dose which, if given uniformly, will lead to the same cell kill within the tumor volume as the actual nonuniform dose. Constraints on the EUD received by each voxel of the body are described in dose space, the space of vectors whose entries are the doses received at each voxel. Constraints on the deliverable radiation intensities of the beamlets are best described in intensity space, the space of vectors whose entries are the intensity levels associated with each of the beamlets. The constraints in dose space will be upper bounds on the dosage received by the OAR and lower bounds on the dosage received by the PTV. The constraints in intensity space are limits on the complexity of the intensity map and on the delivery time, and, obviously, that the intensities be nonnegative. Because the constraints operate in two different domains, it is convenient to formulate the problem using these two domains. This leads to a split feasibility problem.

The split feasibility problem (SFP) is to find an $x$ in a given closed convex subset $C$ of $\mathbb{R}^J$ such that $Ax$ is in a given closed convex subset $Q$ of $\mathbb{R}^I$, where $A$ is a given real $I \times J$ matrix. Because the constraints are best described in terms of several sets in dose space and several sets in intensity space, the SFP model needs to be expanded into the multiple-set SFP. It is not uncommon to find that, once the various constraints have been specified, there is no intensity map that satisfies them all. In such cases, it is desirable to find an intensity map that comes as close as possible to satisfying all the constraints. One way to do this, as we will see, is to minimize a proximity function.

For $i = 1, \ldots, I$ and $j = 1, \ldots, J$, let $b_i \ge 0$ be the dose absorbed by the $i$th voxel of the patient's body, $x_j \ge 0$ the intensity of the $j$th beamlet of radiation, and $A_{ij} \ge 0$ the dose absorbed at the $i$th voxel due to a unit intensity of radiation at the $j$th beamlet. The nonnegative matrix $A$ with entries $A_{ij}$ is the dose influence matrix. Let us assume that we have $M$ constraints in the dose space and $N$ constraints in the intensity space. Let $H_m$ be the set of dose vectors that fulfill the $m$th dose constraint, and let $X_n$ be the set of beamlet intensity vectors that fulfill the $n$th intensity constraint.

In intensity space, we have the obvious constraints that $x_j \ge 0$. In addition, there are implementation constraints; the available treatment machine will impose its own requirements, such as a limit on the difference in intensities between adjacent beamlets. In dose space, there will be a lower bound on the dosage delivered to those regions designated as planned target volumes (PTVs) and an upper bound on the dosage delivered to those regions designated as organs at risk (OARs).

Suppose that $S_t$ is either a PTV or an OAR, and suppose that $S_t$ contains $N_t$ voxels. For each dose vector $b = (b_1, \ldots, b_I)^T$, define the equivalent uniform dose (EUD) function $e_t(b)$ by
\[
e_t(b) = \Bigl(\frac{1}{N_t}\sum_{i \in S_t} b_i^{\alpha}\Bigr)^{1/\alpha}, \tag{1.1}
\]
where $0 < \alpha < 1$ if $S_t$ is a PTV, and $\alpha > 1$ if $S_t$ is an OAR. The function $e_t(b)$ is convex, for $b$ nonnegative, when $S_t$ is an OAR, and $-e_t(b)$ is convex when $S_t$ is a PTV. The constraints in dose space take the form
\[
e_t(b) \le a_t \tag{1.2}
\]
when $S_t$ is an OAR, and
\[
e_t(b) \ge a_t \tag{1.3}
\]
when $S_t$ is a PTV. Therefore, we require that $b = Ax$ lie within the intersection of these convex sets. In summary, we have formulated the constraints in the radiation intensity space $\mathbb{R}^J$ and in the dose space $\mathbb{R}^I$, respectively, and the two spaces are related by the dose influence matrix $A$. This problem, referred to as the multiple-set split feasibility problem (MSSFP), is formulated as follows:
\[
\text{find } x \in \bigcap_{i=1}^{N} X_i \ \text{ such that } \ Ax \in \bigcap_{j=1}^{M} H_j, \tag{1.4}
\]
which was first investigated by Censor et al. [5]. There is a great deal of literature on the MSSFP; see [5, 7, 8, 18, 19, 22, 23]. In the sequel, optimization and variational inequality techniques will be involved; for related references, please see [30–42].
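To make the EUD constraints concrete, here is a minimal numerical sketch of the EUD function (1.1) in Python; the dose vector, voxel indices, and $\alpha$ values are hypothetical choices made for the illustration.

```python
import numpy as np

def eud(b, voxels, alpha):
    """EUD function (1.1): e_t(b) = ((1/N_t) * sum_{i in S_t} b_i**alpha)**(1/alpha)."""
    doses = b[voxels]
    return np.mean(doses ** alpha) ** (1.0 / alpha)

# Hypothetical 5-voxel dose vector; voxels 0-2 form a PTV, voxels 3-4 an OAR.
b = np.array([60.0, 58.0, 62.0, 18.0, 22.0])
print(eud(b, [0, 1, 2], alpha=0.5))  # PTV: 0 < alpha < 1, lower bound e_t(b) >= a_t
print(eud(b, [3, 4], alpha=4.0))     # OAR: alpha > 1, upper bound e_t(b) <= a_t
```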

1.2. Fixed-Point Method

Next, we focus on the multiple-set split feasibility problem (MSSFP), which is to find a point $x^*$ such that
\[
x^* \in C = \bigcap_{i=1}^{N} C_i, \qquad Ax^* \in Q = \bigcap_{j=1}^{M} Q_j, \tag{1.5}
\]
where $N, M \ge 1$ are integers, the $C_i$ ($i = 1, 2, \ldots, N$) are closed convex subsets of $H_1$, the $Q_j$ ($j = 1, 2, \ldots, M$) are closed convex subsets of $H_2$, and $A : H_1 \to H_2$ is a bounded linear operator. Assume that the MSSFP is consistent, that is, solvable, and let $S$ denote its solution set. The case $N = M = 1$, called the split feasibility problem (SFP), was introduced by Censor and Elfving [43] to model phase retrieval and other image restoration problems, and was further studied by many researchers; see, for instance, [2–4, 6, 9–12, 17, 19–21].

We use $\Gamma$ to denote the solution set of the SFP. Let $\gamma > 0$ and assume that $x^* \in \Gamma$. Then $Ax^* \in Q_1$, which implies the equation $(I - P_{Q_1})Ax^* = 0$, which in turn implies the equation $\gamma A^*(I - P_{Q_1})Ax^* = 0$, hence the fixed-point equation $(I - \gamma A^*(I - P_{Q_1})A)x^* = x^*$. Requiring that $x^* \in C_1$, we consider the fixed-point equation
\[
P_{C_1}\bigl(I - \gamma A^*(I - P_{Q_1})A\bigr)x^* = x^*. \tag{1.6}
\]
We will see that the solutions of the fixed-point equation (1.6) are exactly the solutions of the SFP. The following proposition is due to Byrne [4] and Xu [2].

Proposition 1.1. Given $x^* \in H_1$, $x^*$ solves the SFP if and only if $x^*$ solves the fixed-point equation (1.6).

This proposition reminds us that the MSSFP (1.5) is equivalent to a common fixed-point problem of finitely many nonexpansive mappings, as we show below.

Decompose the MSSFP into $N$ subproblems ($1 \le i \le N$):
\[
x_i^* \in C_i, \qquad Ax_i^* \in Q = \bigcap_{j=1}^{M} Q_j. \tag{1.7}
\]
For each $1 \le i \le N$, we define a mapping $T_i$ by
\[
T_i x = P_{C_i}(I - \gamma_i \nabla f)x = P_{C_i}\Bigl(I - \gamma_i \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)x, \tag{1.8}
\]
where $f$ is defined by
\[
f(x) = \frac{1}{2}\sum_{j=1}^{M}\beta_j \|Ax - P_{Q_j}Ax\|^2, \tag{1.9}
\]
with $\beta_j > 0$ for all $1 \le j \le M$. Note that the gradient of $f$ is
\[
\nabla f(x) = \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax, \tag{1.10}
\]
which is Lipschitz continuous with constant
\[
L = \sum_{j=1}^{M}\beta_j \|A\|^2. \tag{1.11}
\]
It is known that if $0 < \gamma_i \le 2/L$, then $T_i$ is nonexpansive. Therefore, fixed-point algorithms for nonexpansive mappings can be applied to the MSSFP (1.5).
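As an illustration of (1.8)–(1.11), the following minimal Python sketch builds one mapping $T_i$ and runs its Picard iterates; the operator $A$, the box $C_i$, the balls $Q_j$, and the weights are all hypothetical data chosen so that the projections have closed forms.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])        # bounded linear operator H1 -> H2
beta = [0.5, 0.5]                      # weights beta_j > 0

def P_box(x, lo=-1.0, hi=1.0):         # projection onto C_i = [-1, 1]^3
    return np.clip(x, lo, hi)

def P_ball(y, center, radius):         # projection onto a ball Q_j
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

Q = [(np.zeros(2), 2.0), (np.array([1.0, 0.0]), 1.5)]  # two balls in H2

def grad_f(x):
    """Gradient (1.10): sum_j beta_j A^T (I - P_{Q_j}) A x."""
    Ax = A @ x
    return sum(b * A.T @ (Ax - P_ball(Ax, c, r)) for b, (c, r) in zip(beta, Q))

L = sum(beta) * np.linalg.norm(A, 2) ** 2   # Lipschitz constant (1.11)
gamma = 1.0 / L                             # 0 < gamma <= 2/L makes T_i nonexpansive

def T(x):                                   # T_i x = P_{C_i}(x - gamma * grad f(x))
    return P_box(x - gamma * grad_f(x))

x = np.array([3.0, -2.0, 0.5])
for _ in range(100):                        # Picard iterates of the nonexpansive map
    x = T(x)
print(x, np.linalg.norm(grad_f(x)))
```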

1.3. Optimization Method

Note that if $x^*$ solves the MSSFP, then $x^*$ satisfies two properties:
(i) the distance from $x^*$ to each $C_i$ is zero;
(ii) the distance from $Ax^*$ to each $Q_j$ is also zero.

This motivates us to consider the proximity function
\[
g(x) = \frac{1}{2}\sum_{i=1}^{N}\alpha_i\|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{M}\beta_j\|Ax - P_{Q_j}Ax\|^2, \tag{1.12}
\]
where $\{\alpha_i\}$ and $\{\beta_j\}$ are positive real numbers, and $P_{C_i}$ and $P_{Q_j}$ are the metric projections onto $C_i$ and $Q_j$, respectively.

Proposition 1.2. $x^*$ is a solution of the MSSFP (1.5) if and only if $g(x^*) = 0$.

Since $g(x) \ge 0$ for all $x \in H_1$, a solution of the MSSFP (1.5) is a minimizer of $g$ over any closed convex subset containing it, with minimum value zero. Note that this proximity function is convex and differentiable with gradient
\[
\nabla g(x) = \sum_{i=1}^{N}\alpha_i(I - P_{C_i})x + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax, \tag{1.13}
\]
where $A^*$ is the adjoint of $A$. Since the gradient $\nabla g$ is Lipschitz continuous with constant
\[
L = \sum_{i=1}^{N}\alpha_i + \sum_{j=1}^{M}\beta_j\|A\|^2, \tag{1.14}
\]
we can use the gradient-projection method to solve the minimization problem
\[
\min_{x \in \Omega} g(x), \tag{1.15}
\]
where $\Omega$ is a closed convex subset of $H_1$ whose intersection with the solution set of the MSSFP is nonempty, and thereby obtain a solution of the so-called constrained multiple-set split feasibility problem (CMSSFP):
\[
\text{find } x^* \in \Omega \ \text{ such that } \ x^* \text{ solves (1.5)}. \tag{1.16}
\]
In this paper, we review and report recent progress on fixed-point and optimization methods for solving the MSSFP.

2. Some Concepts and Tools

Assume $H$ is a Hilbert space and $C$ is a nonempty closed convex subset of $H$. The (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_C x \in C$ such that
\[
\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}. \tag{2.1}
\]

Proposition 2.1. Basic properties of projections:
(i) $\langle x - P_C x, y - P_C x\rangle \le 0$ for all $x \in H$ and $y \in C$;
(ii) $\|x - P_C x\|^2 \le \|x - y\|^2 - \|y - P_C x\|^2$ for all $x \in H$ and $y \in C$;
(iii) $\langle x - y, P_C x - P_C y\rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$, with equality if and only if $x - y = P_C x - P_C y$. In particular, $P_C$ is nonexpansive; that is,
\[
\|P_C x - P_C y\| \le \|x - y\| \quad \text{for all } x, y \in H; \tag{2.2}
\]
(iv) if $C$ is a closed subspace of $H$, then $P_C$ is the orthogonal projection from $H$ onto $C$:
\[
x - P_C x \perp C, \quad \text{or} \quad \langle x - P_C x, y\rangle = 0, \quad x \in H,\ y \in C. \tag{2.3}
\]
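To make Proposition 2.1 concrete, the following small Python check verifies properties (i) and (iii) numerically for the projection onto a Euclidean ball; the set and the test points are arbitrary choices made for the illustration.

```python
import numpy as np

def P_ball(x, radius=1.0):
    """Metric projection onto the closed ball C = {x : ||x|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
Px, Py = P_ball(x), P_ball(y)

# (i): <x - P_C x, y - P_C x> <= 0 for all y in C; here we test y = Py, which lies in C.
print(np.dot(x - Px, Py - Px) <= 1e-12)

# (iii): <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2 (firm nonexpansiveness).
print(np.dot(x - y, Px - Py) >= np.linalg.norm(Px - Py) ** 2 - 1e-12)
```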

Definition 2.2. The operator
\[
Q_\lambda = (1 - \lambda)I + \lambda P_C \tag{2.4}
\]
is called a relaxed projection, where $\lambda \in (0,2)$ and $I$ is the identity operator on $H$.
A mapping $R : H \to H$ is said to be an averaged mapping if $R$ can be written as an average of the identity $I$ and a nonexpansive mapping $T$:
\[
R = (1 - \alpha)I + \alpha T, \tag{2.5}
\]
where $\alpha$ is a number in $(0,1)$ and $T : H \to H$ is nonexpansive.

Proposition 2.1(iii) is equivalent to saying that the operator $S = 2P_C - I$ is nonexpansive. Indeed, we have
\[
\|Sx - Sy\|^2 = \|2(P_C x - P_C y) - (x - y)\|^2 = 4\|P_C x - P_C y\|^2 - 4\langle P_C x - P_C y, x - y\rangle + \|x - y\|^2 \le \|x - y\|^2. \tag{2.6}
\]
Consequently, a projection can be written as the mean average of a nonexpansive mapping and the identity:
\[
P_C = \frac{I + S}{2}. \tag{2.7}
\]
Thus projections are averaged maps with $\alpha = 1/2$. Relaxed projections are also averaged.

Proposition 2.3. Let $T : H \to H$ be a nonexpansive mapping and $R = (1 - \alpha)I + \alpha T$ an averaged map for some $\alpha \in (0,1)$. Assume $T$ has a bounded orbit. Then one has the following.
(1) $R$ is asymptotically regular; that is,
\[
\lim_{n\to\infty}\|R^{n+1}x - R^n x\| = 0 \tag{2.8}
\]
for all $x \in H$.
(2) For any $x \in H$, the sequence $\{R^n x\}$ converges weakly to a fixed point of $T$.

Definition 2.4. Let $A$ be an operator with domain $D(A)$ and range $R(A)$ in $H$.
(i) $A$ is monotone if, for all $x, y \in D(A)$,
\[
\langle Ax - Ay, x - y\rangle \ge 0. \tag{2.9}
\]
(ii) Given a number $\nu > 0$, $A$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) (or cocoercive) if
\[
\langle Ax - Ay, x - y\rangle \ge \nu\|Ax - Ay\|^2, \quad x, y \in H. \tag{2.10}
\]
It is easily seen that a projection $P_C$ is a 1-ism.

Proposition 2.5. Given $T : H \to H$, let $V = I - T$ be the complement of $T$. Given also $S : H \to H$, one has the following.
(i) $T$ is nonexpansive if and only if $V$ is $1/2$-ism.
(ii) If $S$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma S$ is $\nu/\gamma$-ism.
(iii) $S$ is averaged if and only if the complement $I - S$ is $\nu$-ism for some $\nu > 1/2$.
The next proposition collects the basic properties of averaged mappings.

Proposition 2.6. Given operators $S, T, V : H \to H$, one has the following.
(i) If $S = (1 - \alpha)T + \alpha V$ for some $\alpha \in (0,1)$, $T$ is averaged, and $V$ is nonexpansive, then $S$ is averaged.
(ii) $S$ is firmly nonexpansive if and only if the complement $I - S$ is firmly nonexpansive. If $S$ is firmly nonexpansive, then $S$ is averaged.
(iii) If $S = (1 - \alpha)T + \alpha V$ for some $\alpha \in (0,1)$, $T$ is firmly nonexpansive, and $V$ is nonexpansive, then $S$ is averaged.
(iv) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.
(v) If $S$ and $T$ are both averaged and have a common fixed point, then
\[
\mathrm{Fix}(S) \cap \mathrm{Fix}(T) = \mathrm{Fix}(ST). \tag{2.11}
\]

Proposition 2.7. Consider the variational inequality problem (VI): find a point $x^* \in C$ such that
\[
\langle Ax^*, x - x^*\rangle \ge 0, \quad x \in C, \tag{2.12}
\]
where $C$ is a closed convex subset of a Hilbert space $H$ and $A$ is a monotone operator on $H$. Assume that the VI (2.12) has a solution and $A$ is $\nu$-ism. Then, for $0 < \gamma < 2\nu$, the sequence $\{x_n\}$ generated by the algorithm
\[
x_{n+1} = P_C(x_n - \gamma Ax_n), \quad n \ge 0, \tag{2.13}
\]
converges weakly to a solution of the VI (2.12).

An immediate consequence of Proposition 2.7 is the convergence of the gradient-projection algorithm.

Proposition 2.8. Let $f : H \to \mathbb{R}$ be a continuously differentiable function such that the gradient $\nabla f$ is Lipschitz continuous:
\[
\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|, \quad x, y \in H. \tag{2.14}
\]
Assume that the minimization problem
\[
\min_{x \in C} f(x) \tag{2.15}
\]
is consistent, where $C$ is a closed convex subset of $H$. Then, for $0 < \gamma < 2/L$, the sequence $\{x_n\}$ generated by the gradient-projection algorithm
\[
x_{n+1} = P_C\bigl(x_n - \gamma\nabla f(x_n)\bigr) \tag{2.16}
\]
converges weakly to a solution of (2.15).
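A minimal sketch of the gradient-projection algorithm (2.16) on a toy problem, minimizing $f(x) = \frac{1}{2}\|Bx - c\|^2$ over a box $C$; the data $B$, $c$, and $C$ are hypothetical, and $L = \|B\|^2$ bounds the Lipschitz constant of $\nabla f$.

```python
import numpy as np

B = np.array([[2.0, 0.0],
              [1.0, 3.0]])
c = np.array([1.0, -2.0])

grad = lambda x: B.T @ (B @ x - c)      # gradient of f(x) = 0.5 * ||Bx - c||^2
L = np.linalg.norm(B, 2) ** 2           # Lipschitz constant of grad f
P_C = lambda x: np.clip(x, 0.0, 1.0)    # projection onto C = [0, 1]^2

gamma = 1.0 / L                         # any 0 < gamma < 2/L guarantees convergence
x = np.zeros(2)
for _ in range(500):
    x = P_C(x - gamma * grad(x))        # x_{n+1} = P_C(x_n - gamma * grad f(x_n))
print(x)
```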

3. Iterative Methods

In this section, we review and report iterative methods for solving the MSSFP (1.5) that have appeared in the literature.

It is not hard to see that the solution set $S_i$ of the subproblem (1.7) coincides with $\mathrm{Fix}(T_i)$ and that the solution set $S$ of the MSSFP (1.5) coincides with the common fixed-point set of the mappings $T_i$. Further, we have (see [9, 18])
\[
S = \bigcap_{i=1}^{N}\mathrm{Fix}(T_i) = \mathrm{Fix}(T_N \cdots T_2 T_1). \tag{3.1}
\]
Using the fact (3.1), we obtain the corresponding algorithms and convergence theorems for the MSSFP.

Algorithm 3.1. The Picard iterations are
\[
x_{n+1} = T_N \cdots T_1 x_n = P_{C_N}\Bigl(I - \gamma\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)\cdots P_{C_1}\Bigl(I - \gamma\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)x_n, \quad n \ge 0. \tag{3.2}
\]

Theorem 3.2 (see [8]). Assume that the MSSFP (1.5) is consistent. Let $\{x_n\}$ be the sequence generated by Algorithm 3.1, where $0 < \gamma < 2/L$ with $L$ given by (1.11). Then $\{x_n\}$ converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.3. The parallel iterations are
\[
x_{n+1} = \sum_{i=1}^{N}\lambda_i T_i x_n = \sum_{i=1}^{N}\lambda_i P_{C_i}\Bigl(I - \gamma\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)x_n, \quad n \ge 0, \tag{3.3}
\]
where $\lambda_i > 0$ for all $i$ with $\sum_{i=1}^{N}\lambda_i = 1$, and $0 < \gamma < 2/L$ with $L$ given by (1.11).

Theorem 3.4 (see [8]). Assume that the MSSFP (1.5) is consistent. Then the sequence $\{x_n\}$ generated by Algorithm 3.3 converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.5. The cyclic iterations are
\[
x_{n+1} = T_{[n+1]}x_n = P_{C_{[n+1]}}\Bigl(I - \gamma\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)x_n, \quad n \ge 0, \tag{3.4}
\]
where $T_{[n]} = T_{n \bmod N}$, with the mod function taking values in $\{1, 2, \ldots, N\}$.

Theorem 3.6 (see [8]). Assume that the MSSFP (1.5) is consistent. Let $\{x_n\}$ be the sequence generated by Algorithm 3.5, where $0 < \gamma < 2/L$ with $L$ given by (1.11). Then $\{x_n\}$ converges weakly to a solution of the MSSFP (1.5).
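The three schemes above differ only in how the mappings $T_i$ are combined, which the following Python sketch makes explicit on hypothetical data (two boxes $C_1, C_2$ in $\mathbb{R}^2$ and one half-space $Q_1$ in $\mathbb{R}^2$); the sets, weights, and operator are assumptions made for the example.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
beta = [1.0]                                        # single Q-set with weight beta_1

P_C = [lambda x: np.clip(x, 0.0, 2.0),              # C_1 = [0, 2]^2
       lambda x: np.clip(x, -1.0, 1.0)]             # C_2 = [-1, 1]^2

def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):          # Q_1 = {y : <a, y> <= b}
    return y - max(0.0, (a @ y - b) / (a @ a)) * a

def grad_f(x):                                      # (1.10) with a single Q_j
    Ax = A @ x
    return beta[0] * A.T @ (Ax - P_Q(Ax))

L = sum(beta) * np.linalg.norm(A, 2) ** 2           # (1.11)
gamma = 1.0 / L
T = [lambda x, P=P: P(x - gamma * grad_f(x)) for P in P_C]

lam = [0.5, 0.5]                                    # weights for the parallel scheme
x_pic = x_par = x_cyc = np.array([2.0, 2.0])
for n in range(300):
    x_pic = T[1](T[0](x_pic))                       # Picard (3.2): x_{n+1} = T_2 T_1 x_n
    x_par = sum(l * Ti(x_par) for l, Ti in zip(lam, T))  # parallel (3.3)
    x_cyc = T[n % 2](x_cyc)                         # cyclic (3.4): x_{n+1} = T_{[n+1]} x_n
print(x_pic, x_par, x_cyc)
```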

Note that the MSSFP (1.5) can be viewed as a special case of the convex feasibility problem of finding $x^*$ such that
\[
x^* \in \bigcap_{i=1}^{p} C_i. \tag{3.5}
\]
In fact, (1.5) can be rewritten as
\[
x^* \in \bigcap_{i=1}^{N+M} C_i, \tag{3.6}
\]
where $C_{N+j} = \{x \in H_1 : Ax \in Q_j\} = A^{-1}(Q_j)$, $1 \le j \le M$.

However, the methodologies for studying the MSSFP (1.5) differ from those for the convex feasibility problem, since one wants to avoid using the inverse $A^{-1}$. In other words, methods for solving the convex feasibility problem do not apply to the MSSFP (1.5) straightforwardly without involving $A^{-1}$. The CQ algorithm of Byrne [1] is such an example, in which only the operator $A$ (not the inverse $A^{-1}$) is needed.

Since every closed convex subset of a Hilbert space is the fixed-point set of its associated projection, the convex feasibility problem becomes a special case of the common fixed-point problem of finding a point $x^*$ with the property
\[
x^* \in \bigcap_{i=1}^{M}\mathrm{Fix}(T_i). \tag{3.7}
\]
Similarly, the MSSFP (1.5) becomes a special case of the split common fixed-point problem [19] of finding a point $x^*$ with the property
\[
x^* \in \bigcap_{i=1}^{N}\mathrm{Fix}(U_i), \qquad Ax^* \in \bigcap_{j=1}^{M}\mathrm{Fix}(T_j), \tag{3.8}
\]
where $U_i : H_1 \to H_1$ ($i = 1, 2, \ldots, N$) and $T_j : H_2 \to H_2$ ($j = 1, 2, \ldots, M$) are nonlinear operators. Using these facts, Wang and Xu [17] recently presented another cyclic iteration as follows.

Algorithm 3.7 (cyclic iterations). Take an initial guess $x_0 \in H_1$, choose $\gamma \in (0, 2/L)$, and define a sequence $\{x_n\}$ by the iterative procedure
\[
x_{n+1} = P_{C_{[n]}}\bigl(x_n + \gamma A^*(P_{Q_{[n]}} - I)Ax_n\bigr), \quad n \ge 0. \tag{3.9}
\]

Theorem 3.8 (see [17]). The sequence $\{x_n\}$ generated by Algorithm 3.7 converges weakly to a solution of the MSSFP (1.5) whenever its solution set is nonempty.
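A sketch of Algorithm 3.7 on hypothetical data; note that, unlike (3.2)–(3.4), each iteration here touches only one set from each family, cyclically. Taking $\gamma \in (0, 2/\|A\|^2)$, as in the SFP case, is an assumption made for the example.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_C = [lambda x: np.clip(x, 0.0, 2.0),              # C_1
       lambda x: np.clip(x, -1.0, 1.0)]             # C_2
def P_Q1(y, a=np.array([1.0, 1.0]), b=1.0):         # Q_1: half-space
    return y - max(0.0, (a @ y - b) / (a @ a)) * a
P_Q = [P_Q1, lambda y: np.clip(y, -3.0, 3.0)]       # Q_2: box

gamma = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.array([2.0, 2.0])
for n in range(300):
    Ax = A @ x
    # (3.9): x_{n+1} = P_{C_[n]}(x_n + gamma * A^T (P_{Q_[n]} - I) A x_n)
    x = P_C[n % 2](x + gamma * A.T @ (P_Q[n % 2](Ax) - Ax))
print(x)
```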

Since the MSSFP (1.5) is equivalent to the minimization problem (1.15), we have the following gradient-projection algorithm.

Algorithm 3.9. The gradient-projection algorithm is
\[
x_{n+1} = P_\Omega\bigl(x_n - \gamma\nabla g(x_n)\bigr) = P_\Omega\Bigl(x_n - \gamma\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax_n\Bigr)\Bigr), \quad n \ge 0. \tag{3.10}
\]
Censor et al. [5] proved, in finite-dimensional Hilbert spaces, that Algorithm 3.9 converges to a solution of the MSSFP (1.5) in the consistent case. Below is a version of this convergence result in infinite-dimensional Hilbert spaces.

Theorem 3.10 (see [8]). Assume that $0 < \gamma < 2/L$, where $L$ is given by (1.14). The sequence $\{x_n\}$ generated by Algorithm 3.9 converges weakly to a point $z$ which is a solution of the MSSFP (1.5) in the consistent case and a minimizer of the proximity function $g$ over $\Omega$ in the inconsistent case.
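A sketch of Algorithm 3.9 with hypothetical data: two boxes $C_i$ in $\mathbb{R}^2$, one half-space $Q_1$ in $\mathbb{R}^2$, $\Omega$ a large box, and weights $\alpha_i = \beta_j = 1$ chosen arbitrarily.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
alpha, beta = [1.0, 1.0], [1.0]

P_C = [lambda x: np.clip(x, 0.0, 2.0),
       lambda x: np.clip(x, -1.0, 1.0)]
def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):
    return y - max(0.0, (a @ y - b) / (a @ a)) * a
P_Omega = lambda x: np.clip(x, -10.0, 10.0)

def grad_g(x):
    """Gradient (1.13): sum_i alpha_i (I - P_{C_i})x + sum_j beta_j A^T (I - P_{Q_j})Ax."""
    Ax = A @ x
    return (sum(a * (x - P(x)) for a, P in zip(alpha, P_C))
            + beta[0] * A.T @ (Ax - P_Q(Ax)))

L = sum(alpha) + sum(beta) * np.linalg.norm(A, 2) ** 2   # (1.14)
gamma = 1.0 / L
x = np.array([5.0, -5.0])
for _ in range(500):
    x = P_Omega(x - gamma * grad_g(x))                   # (3.10)
print(x)
```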

Subsequently, Lopez et al. [18] considered a variant of Algorithm 3.9 to solve (1.16).

Algorithm 3.11. The gradient-projection algorithm with variable step sizes is
\[
x_{n+1} = P_\Omega\bigl(x_n - \gamma_n\nabla g(x_n)\bigr) = P_\Omega\Bigl(x_n - \gamma_n\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax_n\Bigr)\Bigr), \quad n \ge 0. \tag{3.11}
\]

Theorem 3.12 (see [18]). Assume that $0 < \liminf_{n\to\infty}\gamma_n \le \limsup_{n\to\infty}\gamma_n < 2/L$, where $L$ is given by (1.14). The sequence $\{x_n\}$ generated by Algorithm 3.11 converges weakly to a solution of (1.16).

Remark 3.13. It is obvious that Theorem 3.12 contains Theorem 3.10 as a special case.

Perturbation Techniques
Consider the consistent problem (1.16) and denote by $S$ its nonempty solution set. As pointed out previously, the projection $P_C$, where $C$ is a closed convex subset of $H$, may be difficult to compute unless $C$ has a simple form (e.g., a closed ball or a half-space). Some perturbed methods are therefore presented in order to avoid this inconvenience.
We can use subdifferentials when $\{C_i\}$, $\{Q_j\}$, and $\Omega$ are level sets of convex functionals. Consider
\[
C_i = \{x \in H_1 : c_i(x) \le 0\}, \qquad Q_j = \{y \in H_2 : q_j(y) \le 0\}, \qquad \Omega = \{x \in H_1 : \omega(x) \le 0\}, \tag{3.12}
\]
where $c_i, \omega : H_1 \to \mathbb{R}$ and $q_j : H_2 \to \mathbb{R}$ are convex functionals. We iteratively define a sequence $\{x_n\}$ as follows.

Algorithm 3.14. The initial $x_0 \in H_1$ is arbitrary; once $x_n$ has been defined, we define the $(n+1)$th iterate $x_{n+1}$ by
\[
x_{n+1} = P_{\Omega_n}\Bigl(x_n - \gamma_n\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i^n})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j^n})Ax_n\Bigr)\Bigr), \quad n \ge 0, \tag{3.13}
\]
where
\[
\Omega_n = \{x \in H_1 : \omega(x_n) + \langle \xi_n, x - x_n\rangle \le 0\}, \quad \xi_n \in \partial\omega(x_n),
\]
\[
C_i^n = \{x \in H_1 : c_i(x_n) + \langle \xi_i^n, x - x_n\rangle \le 0\}, \quad \xi_i^n \in \partial c_i(x_n),
\]
\[
Q_j^n = \{y \in H_2 : q_j(Ax_n) + \langle \eta_j^n, y - Ax_n\rangle \le 0\}, \quad \eta_j^n \in \partial q_j(Ax_n). \tag{3.14}
\]

Theorem 3.15 (see [18]). Assume that each of the functions $\{c_i\}_{i=1}^{N}$, $\omega$, and $\{q_j\}_{j=1}^{M}$ is bounded on every bounded subset of $H_1$ and $H_2$, respectively. (Note that this condition is automatically satisfied in a finite-dimensional Hilbert space.) Then the sequence $\{x_n\}$ generated by Algorithm 3.14 converges weakly to a solution of (1.16), provided that the sequence $\{\gamma_n\}$ satisfies
\[
0 < \liminf_{n\to\infty}\gamma_n \le \limsup_{n\to\infty}\gamma_n < \frac{2}{L}, \tag{3.15}
\]
where the constant $L$ is given by (1.14).
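The practical appeal of Algorithm 3.14 is that the sets (3.14) are half-spaces, whose projections have a closed form. The following sketch uses a single level-set constraint $c_1(x) = \|x\|^2 - 1$ in $H_1$ and $q_1(y) = y_1 - 1$ in $H_2$, with $\Omega_n$ taken as the whole space for simplicity; all of these choices are hypothetical.

```python
import numpy as np

def P_halfspace(z, xi, b):
    """Projection onto the half-space {x : <xi, x> <= b} (assumes xi != 0)."""
    return z - max(0.0, (xi @ z - b) / (xi @ xi)) * xi

# Hypothetical level-set constraints: c(x) = ||x||^2 - 1 <= 0 and q(y) = y[0] - 1 <= 0.
c,  dc = lambda x: x @ x - 1.0, lambda x: 2.0 * x
q,  dq = lambda y: y[0] - 1.0,  lambda y: np.array([1.0, 0.0])

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
alpha1, beta1 = 1.0, 1.0
L = alpha1 + beta1 * np.linalg.norm(A, 2) ** 2      # (1.14)
gamma = 1.0 / L

x = np.array([2.0, 2.0])
for _ in range(500):
    Ax = A @ x
    xi, eta = dc(x), dq(Ax)
    # Projections onto the half-spaces C^n and Q^n of (3.14):
    PCx = P_halfspace(x, xi, xi @ x - c(x))
    PQAx = P_halfspace(Ax, eta, eta @ Ax - q(Ax))
    x = x - gamma * (alpha1 * (x - PCx) + beta1 * A.T @ (Ax - PQAx))
print(x, c(x), q(A @ x))
```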

Now consider general perturbation techniques in the direction of the approaches studied in [20–22, 44]. These techniques consist in taking approximate sets, which involve the $\rho$-distance between two closed convex sets $A$ and $B$ of a Hilbert space:
\[
d_\rho(A, B) = \sup\{\|P_A x - P_B x\| : x \in H_1,\ \|x\| \le \rho\}. \tag{3.16}
\]
Let $\{\Omega_n\}$, $\{C_i^n\}$, and $\{Q_j^n\}$ be closed convex sets viewed as perturbations of the closed convex sets $\Omega$, $\{C_i\}$, and $\{Q_j\}$, respectively. Define the function $g_n$ by
\[
g_n(x) = \frac{1}{2}\sum_{i=1}^{N}\alpha_i\|x - P_{C_i^n}x\|^2 + \frac{1}{2}\sum_{j=1}^{M}\beta_j\|Ax - P_{Q_j^n}Ax\|^2. \tag{3.17}
\]
The gradient $\nabla g_n$ of $g_n$ is
\[
\nabla g_n(x) = \sum_{i=1}^{N}\alpha_i(I - P_{C_i^n})x + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j^n})Ax. \tag{3.18}
\]
It is clear that $\nabla g_n$ is Lipschitz continuous with the Lipschitz constant $L$ given by (1.14).

Algorithm 3.16. Let an initial guess $x_0 \in H_1$ be given, and let $\{x_n\}$ be generated by the Krasnosel'skii-Mann iterative algorithm
\[
x_{n+1} = (1 - t_n)x_n + t_n P_{\Omega_n}(I - \gamma\nabla g_n)x_n = (1 - t_n)x_n + t_n P_{\Omega_n}\Bigl(x_n - \gamma\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i^n})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j^n})Ax_n\Bigr)\Bigr), \quad n \ge 0. \tag{3.19}
\]

In [8], Xu proved the following result.

Theorem 3.17 (see [8]). Assume that the following conditions are satisfied:
(i) $0 < \gamma < 2/L$;
(ii) $\sum_{n=1}^{\infty}t_n(1 - t_n) = \infty$;
(iii) for each $\rho > 0$, $1 \le i \le N$, and $1 \le j \le M$, $\sum_{n=1}^{\infty}t_n d_\rho(\Omega_n, \Omega) < \infty$, $\sum_{n=0}^{\infty}t_n d_\rho(C_i^n, C_i) < \infty$, and $\sum_{n=0}^{\infty}t_n d_\rho(Q_j^n, Q_j) < \infty$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.16 converges weakly to a solution of the MSSFP (1.5).

Lopez et al. [18] further obtained a general result by relaxing condition (ii).

Theorem 3.18 (see [18]). Assume that the following conditions are satisfied:
(i) $0 < \gamma < 2/L$;
(ii) $t_n \in [0, 4/(2 + \gamma L)]$ for all $n$ (note that $t_n$ may be larger than one since $0 < \gamma < 2/L$) and
\[
\sum_{n=0}^{\infty}t_n\Bigl(\frac{4}{2 + \gamma L} - t_n\Bigr) = \infty; \tag{3.20}
\]
(iii) for each $\rho > 0$, $1 \le i \le N$, and $1 \le j \le M$, $\sum_{n=1}^{\infty}t_n d_\rho(\Omega_n, \Omega) < \infty$, $\sum_{n=0}^{\infty}t_n d_\rho(C_i^n, C_i) < \infty$, and $\sum_{n=0}^{\infty}t_n d_\rho(Q_j^n, Q_j) < \infty$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.16 converges weakly to a solution of (1.16).

Corollary 3.19. Assume that the following conditions are satisfied:
(i) $0 < \gamma < 2/L$;
(ii) $t_n \in [0, 4/(2 + \gamma L)]$ for all $n$ (note that $t_n$ may be larger than one since $0 < \gamma < 2/L$) and
\[
\sum_{n=0}^{\infty}t_n\Bigl(\frac{4}{2 + \gamma L} - t_n\Bigr) = \infty. \tag{3.21}
\]
Then the sequence $\{x_n\}$ generated by
\[
x_{n+1} = (1 - t_n)x_n + t_n P_\Omega\Bigl(x_n - \gamma\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax_n\Bigr)\Bigr), \quad n \ge 0, \tag{3.22}
\]
converges weakly to a solution of the MSSFP (1.5).
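A sketch of the Krasnosel'skii-Mann iteration (3.22) of Corollary 3.19, with the same kind of hypothetical data as before and a constant relaxation $t_n = t$ satisfying condition (ii).

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_C = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -1.0, 1.0)]
def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):
    return y - max(0.0, (a @ y - b) / (a @ a)) * a
P_Omega = lambda x: np.clip(x, -10.0, 10.0)

def grad_g(x):                                  # (1.13) with unit weights
    Ax = A @ x
    return (x - P_C[0](x)) + (x - P_C[1](x)) + A.T @ (Ax - P_Q(Ax))

L = 2.0 + np.linalg.norm(A, 2) ** 2             # (1.14)
gamma = 1.0 / L                                 # gamma * L = 1, so 4/(2 + gamma*L) = 4/3
t = 1.2                                         # relaxation; may exceed 1, as noted above
x = np.array([5.0, -5.0])
for _ in range(500):
    x = (1 - t) * x + t * P_Omega(x - gamma * grad_g(x))   # (3.22)
print(x)
```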

Note that all of the above algorithms enjoy only weak convergence. Next, we consider some algorithms with strong convergence.

Algorithm 3.20. The Halpern iterations are
\[
x_{n+1} = \alpha_n u + (1 - \alpha_n)T_{[n+1]}x_n = \alpha_n u + (1 - \alpha_n)P_{C_{[n+1]}}\Bigl(I - \gamma\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A\Bigr)x_n, \quad n \ge 0. \tag{3.23}
\]

Theorem 3.21. Assume that the MSSFP (1.5) is consistent, that $0 < \gamma < 2/L$ with $L$ given by (1.11), and that $\{\alpha_n\}$ satisfies the following conditions (for instance, $\alpha_n = 1/n$ for all $n \ge 1$):
(C1) $\lim_{n\to\infty}\alpha_n = 0$;
(C2) $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(C3) $\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}(\alpha_{n+1}/\alpha_n) = 1$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.20 converges strongly to the solution of the MSSFP (1.5) that is closest to $u$ in the solution set of the MSSFP (1.5).
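A sketch of the Halpern-type Algorithm 3.20 with $\alpha_n = 1/(n+1)$, which satisfies (C1)-(C3); the data are hypothetical, and $u$ is the anchor point whose nearest solution the iterates approach in norm.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_C = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -1.0, 1.0)]
def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):
    return y - max(0.0, (a @ y - b) / (a @ a)) * a

def grad_f(x):                               # (1.10) with a single Q_j, beta_1 = 1
    Ax = A @ x
    return A.T @ (Ax - P_Q(Ax))

gamma = 1.0 / np.linalg.norm(A, 2) ** 2      # 0 < gamma < 2/L with L from (1.11)
u = np.zeros(2)                              # anchor point
x = np.array([5.0, 5.0])
for n in range(500):
    a_n = 1.0 / (n + 1)                      # satisfies (C1)-(C3)
    Tx = P_C[n % 2](x - gamma * grad_f(x))   # T_{[n+1]} x_n, cyclic over the C_i
    x = a_n * u + (1 - a_n) * Tx             # (3.23)
print(x)
```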

Next, we consider a perturbation algorithm which has strong convergence.

Algorithm 3.22. Given an initial guess $x_0 \in H_1$, let $\{x_n\}$ be generated by the perturbed iterative algorithm
\[
x_{n+1} = t_n u + (1 - t_n)P_{\Omega_n}\Bigl(x_n - \gamma\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i^n})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j^n})Ax_n\Bigr)\Bigr), \quad n \ge 0. \tag{3.24}
\]

Theorem 3.23 (see [18]). Assume that the following conditions are satisfied:
(i) $0 < \gamma < 2/L$;
(ii) $\lim_{n\to\infty}t_n = 0$ and $\sum_{n=0}^{\infty}t_n = \infty$;
(iii) for each $\rho > 0$, $1 \le i \le N$, and $1 \le j \le M$, $\sum_{n=1}^{\infty}t_n d_\rho(\Omega_n, \Omega) < \infty$, $\sum_{n=0}^{\infty}t_n d_\rho(C_i^n, C_i) < \infty$, and $\sum_{n=0}^{\infty}t_n d_\rho(Q_j^n, Q_j) < \infty$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.22 converges in norm to the solution of (1.16) nearest to $u$.

Corollary 3.24. Assume that the following conditions are satisfied:
(i) $0 < \gamma < 2/L$;
(ii) $\lim_{n\to\infty}t_n = 0$ and $\sum_{n=0}^{\infty}t_n = \infty$.
Then the sequence $\{x_n\}$ generated by
\[
x_{n+1} = t_n u + (1 - t_n)P_\Omega\Bigl(x_n - \gamma\Bigl(\sum_{i=1}^{N}\alpha_i(I - P_{C_i})x_n + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax_n\Bigr)\Bigr), \quad n \ge 0, \tag{3.25}
\]
converges in norm to a solution of the MSSFP (1.5).

Regularized Methods
Consider the following regularization:
\[
g_\alpha(x) = g(x) + \frac{1}{2}\alpha\|x\|^2 = \frac{1}{2}\sum_{i=1}^{N}\alpha_i\|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{M}\beta_j\|Ax - P_{Q_j}Ax\|^2 + \frac{1}{2}\alpha\|x\|^2, \tag{3.26}
\]
where $\alpha > 0$ is the regularization parameter. We can compute the gradient $\nabla g_\alpha$ of $g_\alpha$ as
\[
\nabla g_\alpha = \sum_{i=1}^{N}\alpha_i(I - P_{C_i}) + \sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})A + \alpha I. \tag{3.27}
\]
It is easily seen that $\nabla g_\alpha$ is Lipschitz continuous with constant
\[
L_\alpha = \sum_{i=1}^{N}\alpha_i + \sum_{j=1}^{M}\beta_j\|A\|^2 + \alpha. \tag{3.28}
\]
It is known that $\nabla g_\alpha$ is strongly monotone.
Consider the following regularized minimization problem:
\[
\min_{x \in \Omega} g_\alpha(x), \tag{3.29}
\]
which has a unique solution, denoted by $x_\alpha$.

Theorem 3.25. The strong limit $\lim_{\alpha\to 0}x_\alpha$ exists and equals $\tilde{x}$, the minimum-norm solution of (1.16).

Algorithm 3.26. Given an initial point $x_0 \in \Omega$, define a sequence $\{x_n\}$ by the iterative algorithm
\[
x_{n+1} = P_\Omega(I - \gamma_n\nabla g_{\alpha_n})x_n = P_\Omega\Bigl((1 - \alpha_n\gamma_n)x_n - \gamma_n\sum_{i=1}^{N}\alpha_i(I - P_{C_i})x_n - \gamma_n\sum_{j=1}^{M}\beta_j A^*(I - P_{Q_j})Ax_n\Bigr), \quad n \ge 0. \tag{3.30}
\]

Theorem 3.27 (see [18]). Assume the sequences $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the conditions:
(i) $0 < \gamma_n \le \alpha_n/L_{\alpha_n}^2$ for all (large enough) $n$;
(ii) $\alpha_n \to 0$;
(iii) $\sum_{n=1}^{\infty}\alpha_n\gamma_n = \infty$;
(iv) $(|\gamma_n - \gamma_{n-1}| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|)/(\alpha_n\gamma_n)^2 \to 0$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.26 converges strongly to the minimum-norm solution of (1.16).
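A sketch of the regularized Algorithm 3.26. The parameter choices $\alpha_n = (n+1)^{-1/3}$ and $\gamma_n = \alpha_n/L_{\alpha_n}^2$ are one attempt to mimic conditions (i)-(iv) and are assumptions of this example, not prescriptions from the theorem.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_C = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -1.0, 1.0)]
def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):
    return y - max(0.0, (a @ y - b) / (a @ a)) * a
P_Omega = lambda x: np.clip(x, -10.0, 10.0)

def grad_g(x):                                 # (1.13) with unit weights
    Ax = A @ x
    return (x - P_C[0](x)) + (x - P_C[1](x)) + A.T @ (Ax - P_Q(Ax))

L = 2.0 + np.linalg.norm(A, 2) ** 2            # (1.14)
x = np.array([5.0, -5.0])
for n in range(2000):
    a_n = (n + 1.0) ** (-1.0 / 3.0)            # regularization parameter, -> 0
    L_an = L + a_n                             # Lipschitz constant (3.28) of grad g_{a_n}
    g_n = a_n / L_an ** 2                      # step size, cf. condition (i)
    x = P_Omega((1 - a_n * g_n) * x - g_n * grad_g(x))   # (3.30)
print(x)                                       # tends toward the minimum-norm solution
```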

Self-Adaptive Methods
Consider the following constrained minimization problem:
\[
\min\{g(x) : x \in \Omega\}, \tag{3.31}
\]
where $g(x)$ is defined as in (1.12) and $\Omega \subset \mathbb{R}^N$ is the same auxiliary simple nonempty closed convex set as in (1.16). This optimization problem was proposed by Censor et al. [5] for solving the constrained MSSFP (1.5) in finite-dimensional Hilbert spaces. We know that a point $x^* \in \Omega$ is a stationary point of problem (3.31) if it satisfies
\[
\langle \nabla g(x^*), x - x^*\rangle \ge 0, \quad x \in \Omega. \tag{3.32}
\]
Thus, by Proposition 2.8, we can use the gradient-projection algorithm below, developed by Censor et al. [5, 24], to solve the MSSFP:
\[
x_{n+1} = P_\Omega\bigl(x_n - \gamma\nabla g(x_n)\bigr), \tag{3.33}
\]
where
\[
\gamma \in \Bigl(0, \frac{2}{L}\Bigr). \tag{3.34}
\]
Note that the above method of Censor et al. is the application of the projection method of Goldstein [45] and Levitin and Polyak [46] to the variational inequality problem (3.32), which is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends greatly on the choice of the parameter $\gamma$. If one chooses a small step size to ensure that condition (3.34) holds, which guarantees convergence of the iterative sequence, the recursion converges slowly. On the other hand, if one chooses a large step size to improve the speed of convergence, the generated sequence may fail to converge. In real applications of variational inequality problems, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the MSSFP.
To overcome the difficulty of estimating the Lipschitz constant, He et al. [47] developed a self-adaptive method for solving variational inequality problems, in which the constant step size $\gamma$ of the original Goldstein-Levitin-Polyak method is replaced by a sequence of parameters $\{\gamma_n\}$ selected self-adaptively. The numerical results reported in He et al. [47] show that the self-adaptive strategy is valid and robust for solving variational inequality problems. The efficiency of their modified algorithm is not affected by the initial choice of the parameter; that is, for any given initial choice $\gamma_0$, the algorithm can adjust it and eventually find a “suitable” one. Thus, there is no need to pay much attention to the choice of the step size, as in the original Goldstein-Levitin-Polyak method. Moreover, the computational burden at each iteration is not much larger than that of the original method. Later, their method was extended to a more flexible self-adaptive rule by Han and Sun [25].
Motivated by the self-adaptive strategy, Zhang et al. [23] proposed the following method for solving the MSSFP, using variable step sizes instead of the fixed step sizes of Censor et al. [5, 24].

Algorithm 3.28. (S1) Given a nonnegative sequence $\{\tau_n\}$ with $\sum_{n=0}^{\infty}\tau_n < \infty$, $\delta \in (0,1)$, $\mu \in (0,1)$, $\rho \in (0,1)$, $\epsilon > 0$, $\beta_0 > 0$, and an arbitrary initial point $x_0$, set $\gamma_0 = \beta_0$ and $n = 0$.
(S2) Find the smallest nonnegative integer $l_n$ such that $\beta_{n+1} = \mu^{l_n}\gamma_n$ and
\[
x_{n+1} = P_\Omega\bigl(x_n - \beta_{n+1}\nabla g(x_n)\bigr) \tag{3.35}
\]
satisfies
\[
\beta_{n+1}\|\nabla g(x_n) - \nabla g(x_{n+1})\|^2 \le (2 - \delta)\langle x_n - x_{n+1}, \nabla g(x_n) - \nabla g(x_{n+1})\rangle. \tag{3.36}
\]
(S3) If
\[
\beta_{n+1}\|\nabla g(x_n) - \nabla g(x_{n+1})\|^2 \le \rho\langle x_n - x_{n+1}, \nabla g(x_n) - \nabla g(x_{n+1})\rangle, \tag{3.37}
\]
then set $\gamma_{n+1} = (1 + \tau_{n+1})\beta_{n+1}$; otherwise, set $\gamma_{n+1} = \beta_{n+1}$.
(S4) If $e(x_n, \beta_n) \le \epsilon$, stop; otherwise, set $n = n + 1$ and go to (S2).

Theorem 3.29 (see [23]). The proposed Algorithm 3.28 is globally convergent.

Remark 3.30. This new method is a modification of the projection method proposed by Goldstein [45] and Levitin and Polyak [46], in which the constant step size $\beta$ of the original method is replaced by an automatically selected step size $\beta_n$ at each iteration. This is very important, since it helps us avoid the difficult task of selecting a “suitable” step size.

The following self-adaptive projection method, which adopts Armijo-like searches, was introduced by Zhao and Yang [7] to solve the MSSFP.

Algorithm 3.31. Given constants $\beta > 0$, $\sigma \in (0,1)$, and $\gamma \in (0,1)$, let $x_0$ be arbitrary. For $n = 0, 1, \ldots$, calculate
\[
x_{n+1} = P_\Omega\bigl(x_n - \tau_n\nabla g(x_n)\bigr), \tag{3.38}
\]
where $\tau_n = \beta\gamma^{l_n}$ and $l_n$ is the smallest nonnegative integer $l$ such that
\[
g\bigl(P_\Omega(x_n - \beta\gamma^l\nabla g(x_n))\bigr) - g(x_n) \le -\sigma\bigl\langle \nabla g(x_n),\, x_n - P_\Omega(x_n - \beta\gamma^l\nabla g(x_n))\bigr\rangle. \tag{3.39}
\]

Algorithm 3.31 need not estimate the Lipschitz constant of $\nabla g$ or compute the largest eigenvalue of the matrix $A^T A$; the step size $\tau_n$ is chosen so that the objective function $g(x)$ achieves a sufficient decrease. It is in fact a special case of the standard gradient-projection method with an Armijo-like search for solving the constrained optimization problem (3.31).
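A sketch of the Armijo-like backtracking rule (3.39) inside Algorithm 3.31, on hypothetical data; beta0, sigma, and gam play the roles of $\beta$, $\sigma$, and $\gamma$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_C = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -1.0, 1.0)]
def P_Q(y, a=np.array([1.0, 1.0]), b=1.0):
    return y - max(0.0, (a @ y - b) / (a @ a)) * a
P_Omega = lambda x: np.clip(x, -10.0, 10.0)

def g(x):                                     # proximity function (1.12), unit weights
    Ax = A @ x
    return 0.5 * (np.sum((x - P_C[0](x)) ** 2) + np.sum((x - P_C[1](x)) ** 2)
                  + np.sum((Ax - P_Q(Ax)) ** 2))

def grad_g(x):                                # (1.13)
    Ax = A @ x
    return (x - P_C[0](x)) + (x - P_C[1](x)) + A.T @ (Ax - P_Q(Ax))

beta0, sigma, gam = 2.0, 0.5, 0.5             # backtracking parameters
x = np.array([5.0, -5.0])
for _ in range(200):
    d = grad_g(x)
    tau = beta0
    z = P_Omega(x - tau * d)
    while g(z) - g(x) > -sigma * d @ (x - z):  # shrink tau until (3.39) holds
        tau *= gam
        z = P_Omega(x - tau * d)
    x = z
print(x, g(x))
```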

The following convergence result for the gradient-projection method with Armijo-like searches, applied to the generalized convex optimization problem (3.31), ensures the convergence of Algorithm 3.31.

Theorem 3.32. Let $g \in C^1(\Omega)$ be pseudoconvex, and let $\{x_n\}$ be an infinite sequence generated by the gradient-projection method with Armijo-like searches. Then the following conclusions hold:
(1) $\lim_{n\to\infty}g(x_n) = \inf\{g(x) : x \in \Omega\}$;
(2) $\Omega^*$, the set of optimal solutions to (3.31), is nonempty if and only if there exists at least one limit point of $\{x_n\}$; in this case, $\{x_n\}$ converges to a solution of (3.31).

However, in each iteration step of Algorithm 3.31, it may cost a large amount of work to compute the orthogonal projections $P_{C_i}$ and $P_{Q_j}$. In what follows, we consider the case where these projections are not easily calculated and describe a relaxed self-adaptive projection method for solving the MSSFP. Specifically, the MSSFP and the convex sets $C_i$ and $Q_j$ in this part are assumed to satisfy the following.
(1) The solution set of the constrained MSSFP is nonempty.
(2) The sets $C_i$, $i = 1, 2, \ldots, t$, are given by
\[
C_i = \{x \in \mathbb{R}^N : c_i(x) \le 0\}, \tag{3.40}
\]
where $c_i : \mathbb{R}^N \to \mathbb{R}$ are convex functions. The sets $Q_j$, $j = 1, 2, \ldots, r$, are given by
\[
Q_j = \{y \in \mathbb{R}^M : q_j(y) \le 0\}, \tag{3.41}
\]
where $q_j : \mathbb{R}^M \to \mathbb{R}$ are convex functions.
(3) For any $x \in \mathbb{R}^N$, at least one subgradient $\xi \in \partial c_i(x)$ can be calculated, where $\partial c_i(x)$ is a generalized gradient, called the subdifferential of $c_i$ at $x$, defined as follows:
\[
\partial c_i(x) = \bigl\{\xi_i \in \mathbb{R}^N : c_i(z) \ge c_i(x) + \langle \xi_i, z - x\rangle \ \text{for all } z \in \mathbb{R}^N\bigr\}. \tag{3.42}
\]
For any $y \in \mathbb{R}^M$, at least one subgradient $\eta_j \in \partial q_j(y)$ can be calculated, where $\partial q_j(y)$ is the subdifferential of $q_j$ at $y$, defined as follows:
\[
\partial q_j(y) = \bigl\{\eta_j \in \mathbb{R}^M : q_j(u) \ge q_j(y) + \langle \eta_j, u - y\rangle \ \text{for all } u \in \mathbb{R}^M\bigr\}. \tag{3.43}
\]
At the $n$th iteration, let
\[
C_i^n = \{x \in \mathbb{R}^N : c_i(x_n) + \langle \xi_i^n, x - x_n\rangle \le 0\}, \tag{3.44}
\]
where $\xi_i^n$ is an element of $\partial c_i(x_n)$, and
\[
Q_j^n = \{y \in \mathbb{R}^M : q_j(Ax_n) + \langle \eta_j^n, y - Ax_n\rangle \le 0\}, \tag{3.45}
\]
where $\eta_j^n$ is an element of $\partial q_j(Ax_n)$.

Define
\[
g_n(x) = \frac{1}{2}\sum_{i=1}^{t}\alpha_i\|x - P_{C_i^n}(x)\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\|Ax - P_{Q_j^n}(Ax)\|^2. \tag{3.46}
\]
Obviously,
\[
\nabla g_n(x) = \sum_{i=1}^{t}\alpha_i(I - P_{C_i^n})x + \sum_{j=1}^{r}\beta_j A^T\bigl(Ax - P_{Q_j^n}(Ax)\bigr). \tag{3.47}
\]

Algorithm 3.33. Given $\gamma > 0$, $\rho \in (0,1)$, and $\mu \in (0,1)$, let $x_0$ be arbitrary. For $n = 0, 1, 2, \ldots$, compute
\[
\bar{x}_n = P_\Omega\bigl(x_n - \tau_n\nabla g_n(x_n)\bigr), \tag{3.48}
\]
where $\tau_n = \gamma\rho^{l_n}$ and $l_n$ is the smallest nonnegative integer $l$ such that
\[
\|\nabla g_n(x_n) - \nabla g_n(\bar{x}_n)\| \le \mu\,\frac{\|x_n - \bar{x}_n\|}{\tau_n}. \tag{3.49}
\]
Set
\[
x_{n+1} = P_\Omega\bigl(x_n - \tau_n\nabla g_n(\bar{x}_n)\bigr). \tag{3.50}
\]

Theorem 3.34 (see [7]). The sequence $\{x_n\}$ generated by Algorithm 3.33 converges to a solution of the MSSFP.
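Finally, a sketch of Algorithm 3.33, combining the half-space relaxations (3.44)-(3.45) with the self-adaptive step-size search (3.49); the functions $c_1$, $q_1$, the operator $A$, and all parameters are hypothetical choices made for the example.

```python
import numpy as np

def P_halfspace(z, xi, b):                    # projection onto {x : <xi, x> <= b}
    return z - max(0.0, (xi @ z - b) / (xi @ xi)) * xi

c,  dc = lambda x: x @ x - 1.0, lambda x: 2.0 * x               # C = {c <= 0}
q,  dq = lambda y: y[0] - 1.0,  lambda y: np.array([1.0, 0.0])  # Q = {q <= 0}
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_Omega = lambda x: np.clip(x, -10.0, 10.0)

def grad_gn(z, xn):
    """Gradient (3.47) of the relaxed proximity function built at the point xn."""
    Axn, Az = A @ xn, A @ z
    xi, eta = dc(xn), dq(Axn)
    PCz = P_halfspace(z, xi, xi @ xn - c(xn))         # projection onto C^n, (3.44)
    PQAz = P_halfspace(Az, eta, eta @ Axn - q(Axn))   # projection onto Q^n, (3.45)
    return (z - PCz) + A.T @ (Az - PQAz)

gam, rho, mu = 2.0, 0.5, 0.5
x = np.array([3.0, 3.0])
for _ in range(200):
    gx = grad_gn(x, x)
    tau = gam
    xbar = P_Omega(x - tau * gx)                      # (3.48)
    # shrink tau until (3.49): ||grad(x_n) - grad(xbar_n)|| <= mu ||x_n - xbar_n|| / tau
    while np.linalg.norm(gx - grad_gn(xbar, x)) > mu * np.linalg.norm(x - xbar) / tau:
        tau *= rho
        xbar = P_Omega(x - tau * gx)
    x = P_Omega(x - tau * grad_gn(xbar, x))           # (3.50)
print(x, c(x), q(A @ x))
```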

Acknowledgments

Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279 and NSFC 71161001-G0105. R. Chen was supported in part by NSFC 11071279. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in Excellence Teaching and Learning Plan of Cheng Shiu University, and was supported in part by NSC 100–2221-E-230-012.