Abstract

In this article, we introduce a relaxed self-adaptive projection algorithm for solving the multiple-sets split equality problem. First, we transfer the original problem to the constrained multiple-sets split equality problem and establish a fixed point equation system, and we show that the two formulations are equivalent. Second, we present a relaxed self-adaptive projection algorithm for the fixed point equation system. The advantage of the self-adaptive stepsize is that it can be computed directly from the iterative procedure, without prior knowledge of the operator norms. Furthermore, we prove the convergence of the proposed algorithm. Finally, several numerical results confirm the feasibility and efficiency of the proposed algorithm.

1. Introduction

Let $H_1$, $H_2$, and $H_3$ be real Hilbert spaces. For $i = 1, 2, \ldots, t$ and $j = 1, 2, \ldots, r$, let $C_i$ and $Q_j$ be nonempty closed convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively, and assume that $A\colon H_1 \to H_3$ and $B\colon H_2 \to H_3$ are two bounded linear operators. The multiple-sets split equality problem (MSSEP) is to find $x \in C := \bigcap_{i=1}^{t} C_i$ and $y \in Q := \bigcap_{j=1}^{r} Q_j$ satisfying the property
$$Ax = By. \tag{1}$$

When $H_3 = H_2$ and $B = I$, MSSEP (1) reduces to the multiple-sets split feasibility problem
$$\text{find } x \in C := \bigcap_{i=1}^{t} C_i \ \text{ such that } \ Ax \in Q := \bigcap_{j=1}^{r} Q_j, \tag{2}$$
which is applied to intensity-modulated radiation therapy [1–11], signal processing [12–21], and image reconstruction [22–38]. Censor et al. [39] proposed the proximity function
$$p(x) = \frac{1}{2}\sum_{i=1}^{t} \alpha_i \|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{r} \beta_j \|Ax - P_{Q_j}(Ax)\|^2$$
to measure the distance of a point to all sets, where $\alpha_i > 0$ for all $i$ and $\beta_j > 0$ for all $j$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$. To solve (2), they considered the constrained minimization of $p$ over an auxiliary set and then presented the projection method
$$x^{k+1} = P_{\Omega}\big(x^k - s\,\nabla p(x^k)\big),$$
where $s > 0$ is a suitable stepsize and $\Omega$ is an auxiliary simple nonempty closed convex set with $\Omega \cap \Gamma \neq \emptyset$, where $\Gamma$ denotes the solution set of (2). The convergence of the projection method was obtained under some mild conditions.
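The proximity-function idea can be sketched numerically. The following Python fragment (an illustration only; the paper's experiments use MATLAB, and the helper names here are hypothetical) evaluates a weighted sum of squared distances from a point to a family of convex sets, each set represented by its projection operator:

```python
import numpy as np

def proximity(x, projections, weights):
    """Weighted sum of squared distances from x to convex sets,
    each set given by its projection operator."""
    return 0.5 * sum(w * np.linalg.norm(x - P(x)) ** 2
                     for w, P in zip(weights, projections))

# Two sets in R^2: the nonnegative orthant and the closed unit ball.
P_orthant = lambda z: np.maximum(z, 0.0)
P_ball = lambda z: z if np.linalg.norm(z) <= 1 else z / np.linalg.norm(z)

x = np.array([0.0, 2.0])
# x lies in the orthant (distance 0) and at distance 1 from the ball,
# so with equal weights 0.5 the value is 0.5 * 0.5 * 1 = 0.25.
val = proximity(x, [P_orthant, P_ball], [0.5, 0.5])
```

A point belongs to all the sets exactly when the proximity function vanishes there, which is what makes minimizing it a natural reformulation.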

When $t = r = 1$, MSSEP (1) reduces to the split equality problem, introduced by Moudafi [40] as follows:
$$\text{find } x \in C, \ y \in Q \ \text{ such that } \ Ax = By,$$
which is applied to game theory [41] and to optimal control and approximation theory [42]. The following alternating CQ algorithm (ACQ) was introduced by Moudafi [40]:
$$x^{k+1} = P_C\big(x^k - \gamma_k A^{*}(Ax^k - By^k)\big), \qquad y^{k+1} = P_Q\big(y^k + \gamma_k B^{*}(Ax^{k+1} - By^k)\big),$$
where $\gamma_k \in \big(\varepsilon, \min(\tfrac{1}{\lambda_A}, \tfrac{1}{\lambda_B}) - \varepsilon\big)$ for small enough $\varepsilon > 0$; $A^{*}$ and $B^{*}$ denote the adjoints of $A$ and $B$, respectively, and $\lambda_A$ and $\lambda_B$ are the spectral radii of $A^{*}A$ and $B^{*}B$, respectively. Since the projections onto general closed convex subsets $C$ and $Q$ may be hard to implement, Fukushima [43] suggested a way to compute the projection onto a level set of a convex function by considering a sequence of projections onto half-spaces containing the original level set. Then, Moudafi [44] introduced the following relaxed alternating CQ algorithm (RACQ):
$$x^{k+1} = P_{C_k}\big(x^k - \gamma_k A^{*}(Ax^k - By^k)\big), \qquad y^{k+1} = P_{Q_k}\big(y^k + \gamma_k B^{*}(Ax^{k+1} - By^k)\big),$$
where $\{C_k\}$ and $\{Q_k\}$ are two sequences of closed convex sets containing $C$ and $Q$, respectively.
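The half-space relaxation is attractive precisely because projecting onto a half-space has a closed form. The following Python sketch (illustrative, not from the paper) implements that closed-form projection:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection of x onto the half-space {z : <a, z> <= b}."""
    viol = a @ x - b
    if viol <= 0.0:
        return x.copy()               # x already feasible: projection is x itself
    return x - (viol / (a @ a)) * a   # step along the normal to the boundary

x = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])
p = project_halfspace(x, a, 1.0)      # projects onto {z : z_1 <= 1}
```

Replacing each hard projection $P_C$ by a sequence of such cheap half-space projections is exactly the mechanism behind the relaxed algorithms above.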

Recently, Dang et al. [45] gave the relaxed two-point projection method (9) to solve MSSEP (1), in which two sequences of closed convex sets relax the sets $C_i$ and $Q_j$, respectively, two auxiliary simple sets are employed, and the weights satisfy $\alpha_i > 0$ for all $i$ and $\beta_j > 0$ for all $j$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$. Under some mild conditions, the weak convergence of algorithm (9) was obtained.

Note that the stepsize of algorithm (9) depends on the operator (matrix) norms $\|A\|$ and $\|B\|$. This implies that, to implement the relaxed two-point projection method (9), one first needs to calculate the operator norms of $A$ and $B$, which is in general not an easy task in practice. To overcome this weakness, López et al. [46] and Zhao and Yang [47] introduced self-adaptive methods whose stepsizes do not require prior knowledge of the operator norms. Motivated by them, we introduce a relaxed self-adaptive projection algorithm for solving the multiple-sets split equality problem. First, we transfer the original problem to the constrained multiple-sets split equality problem and establish a fixed point equation system; we then show the equivalence of the constrained multiple-sets split equality problem and the fixed point equation system. Second, based on the fixed point equation system, we present a relaxed self-adaptive projection algorithm for solving the constrained multiple-sets split equality problem and prove its convergence. Finally, several numerical results confirm the feasibility and efficiency of the proposed algorithm.

The remainder of this paper is organized as follows. Section 2 presents some preliminaries and notations used in the subsequent analysis. In Section 3, we transfer the original problem to the constrained multiple-sets split equality problem, establish the fixed point equation system, and propose a relaxed self-adaptive projection algorithm for solving the constrained multiple-sets split equality problem; the convergence of the proposed algorithm is obtained. In Section 4, several numerical results are shown to confirm the effectiveness of our algorithm.

2. Preliminaries

Throughout this paper, we use $\to$ and $\rightharpoonup$ to denote strong convergence and weak convergence, respectively. We write $\omega_w(x^k)$ to indicate the weak $\omega$-limit set of $\{x^k\}$. For any $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - z\| \quad \text{for all } z \in C.$$

It is well known that $P_C$ is nonexpansive and, moreover, firmly nonexpansive. In addition, $P_C$ has the following well-known properties (see, for example, [48]).

Lemma 1. Let $C \subseteq H$ be nonempty, closed, and convex. Then, for all $x, y \in H$ and $z \in C$,
(i) $\langle x - P_C x, z - P_C x \rangle \le 0$;
(ii) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$;
(iii) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|x - P_C x\|^2$.
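These standard projection properties can be verified numerically. The Python sketch below (an illustration, not part of the paper's analysis) checks the variational inequality, firm nonexpansiveness, and the distance-decrease inequality for the projection onto a Euclidean ball:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Euclidean projection onto the closed ball {z : ||z|| <= r}."""
    n = np.linalg.norm(x)
    return x.copy() if n <= r else (r / n) * x

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
px, py = proj_ball(x), proj_ball(y)
z = np.zeros(3)                       # an arbitrary point of the ball

# (i) variational inequality: <x - P_C x, z - P_C x> <= 0 for z in C
assert (x - px) @ (z - px) <= 1e-12
# (ii) firm nonexpansiveness: ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>
assert np.linalg.norm(px - py) ** 2 <= (px - py) @ (x - y) + 1e-12
# (iii) ||P_C x - z||^2 <= ||x - z||^2 - ||x - P_C x||^2
assert (np.linalg.norm(px - z) ** 2
        <= np.linalg.norm(x - z) ** 2 - np.linalg.norm(x - px) ** 2 + 1e-12)
```

Property (ii) implies ordinary nonexpansiveness via the Cauchy–Schwarz inequality, which is the form used repeatedly in the convergence proof below.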

Definition 2. Let $f\colon H \to \mathbb{R}$ be convex. The subdifferential of $f$ at $x$ is defined as
$$\partial f(x) = \{\xi \in H : f(z) \ge f(x) + \langle \xi, z - x \rangle \ \text{for all } z \in H\}.$$
An element of $\partial f(x)$ is said to be a subgradient.

Lemma 3. Suppose $f\colon H \to \mathbb{R}$ is a convex function. Then $f$ is subdifferentiable everywhere, and its subdifferentials are uniformly bounded on bounded subsets of $H$.
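As a concrete illustration of the subgradient inequality and of the uniform boundedness in Lemma 3, consider the (nonsmooth) $\ell_1$ norm; this Python sketch is not part of the paper:

```python
import numpy as np

def f(x):
    return np.abs(x).sum()      # f(x) = ||x||_1, convex, nonsmooth at 0

def subgrad(x):
    """One valid subgradient of the l1 norm: sign(x).
    (At zero coordinates any value in [-1, 1] would also do.)"""
    return np.sign(x)

rng = np.random.default_rng(1)
x = rng.normal(size=4)
xi = subgrad(x)
for _ in range(100):
    z = rng.normal(size=4)
    # subgradient inequality: f(z) >= f(x) + <xi, z - x>
    assert f(z) >= f(x) + xi @ (z - x) - 1e-12
```

Here every subgradient has entries in $[-1, 1]$, an instance of the uniform boundedness asserted by the lemma.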

3. Algorithm and Its Convergence

In this section, we present a relaxed self-adaptive projection algorithm and establish its convergence. Following the idea of Censor et al. [39], we introduce two additional closed convex sets and and consider the constrained multiple-sets split equality problem, where the sets and can be expressed by

and are convex functions for all and , and denotes the solution set of (32). Define where , and where . It is easily seen that and for all . Notice that and are half-spaces, and thus the corresponding projections have closed-form expressions. Hence, we focus on the following constrained multiple-sets split equality problem (CMSSEP):
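The half-space relaxations above are built from subgradients of the convex functions defining the level sets. The Python sketch below (illustrative; the concrete level-set function is an assumption chosen for the example) constructs such a half-space at a current iterate and projects onto it in closed form:

```python
import numpy as np

# Level set C = {z : c(z) <= 0} with c(z) = ||z||^2 - 1 (the closed unit ball).
def c(z):
    return z @ z - 1.0

def subgrad_c(z):
    return 2.0 * z                    # gradient of c, hence a valid subgradient

def relaxed_set_projection(x, xk):
    """Project x onto the half-space
        C_k = {z : c(xk) + <xi, z - xk> <= 0},  xi in dc(xk),
    which contains C by convexity, so the projection is closed-form."""
    xi = subgrad_c(xk)
    viol = c(xk) + xi @ (x - xk)
    if viol <= 0.0 or xi @ xi == 0.0:
        return x.copy()               # x already in the half-space
    return x - (viol / (xi @ xi)) * xi

xk = np.array([2.0, 0.0])             # current iterate, outside the ball
p = relaxed_set_projection(xk, xk)    # project xk onto its own relaxation C_k
```

Because $c$ is convex, the subgradient inequality guarantees $C \subseteq C_k$, so projecting onto $C_k$ never excludes solutions while remaining cheap to compute.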

Now, we define the proximity function , where $\alpha_i > 0$ for all $i = 1, \ldots, t$ and $\beta_j > 0$ for all $j = 1, \ldots, r$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$.

Using the proximity function , we can obtain the following technical lemmas.

Lemma 4. Assume that (16) is consistent (i.e., (16) has a solution) and denote its solution set by . If a point belongs to the solution set, then it solves the fixed point equation system

Proof. To solve problem (16), we consider the minimization problem (19), which leads to the following unconstrained optimization problem, where the indicator function of the constraint set is defined by

Note that and where and denote the normal cones of the convex sets and , respectively. From the optimality conditions of (20), we obtain which means that, for , that is,

Since and , we obtain

Thus, the desired result can be obtained.

The following lemma reveals that CMSSEP (16) is equivalent to the fixed point equation system (18).

Lemma 5. Assume that problem (16) is consistent. Then, a point solves CMSSEP (16) if and only if it solves the fixed point equation system (18).

Proof. Lemma 4 shows that every solution of (16) also solves (18). Next, we prove the converse: if a point solves (18), then it also solves (16). Obviously, one has and . It follows from the properties of the projection that which means

Hence, from Lemma 3, adding the two inequalities, we obtain

Furthermore, from , we deduce

Thus, solves CMSSEP (16). This completes the proof.

Based on (18), we can now introduce a relaxed self-adaptive projection algorithm to solve (16).

Algorithm 6. Let the initial point be arbitrary. We calculate the $(k+1)$th iterate via the following formula, where the stepsize is chosen self-adaptively with the relaxation parameter taken in $(0, 1)$.
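The self-adaptive principle can be illustrated in code. The following Python sketch is a generic Zhao–Yang-type self-adaptive projection iteration for the split equality problem; it is NOT a verbatim transcription of Algorithm 6 (whose exact update and stepsize rule appear in the displayed formulas), and the stepsize rule shown is one standard choice computed entirely from the current iterates, with no operator norms required:

```python
import numpy as np

def self_adaptive_split_equality(A, B, proj_C, proj_Q, x, y,
                                 iters=500, rho=1.0, tol=1e-12):
    """Generic self-adaptive projection iteration for
    'find x in C, y in Q with Ax = By' (a sketch, not the paper's
    exact Algorithm 6). The stepsize is built from the residual."""
    for _ in range(iters):
        w = A @ x - B @ y                 # current equality residual
        gx, gy = A.T @ w, -(B.T @ w)      # partial gradients of ||Ax-By||^2/2
        denom = gx @ gx + gy @ gy
        if denom <= tol:
            break                         # residual direction has vanished
        tau = rho * (w @ w) / denom       # self-adaptive stepsize
        x = proj_C(x - tau * gx)
        y = proj_Q(y - tau * gy)
    return x, y

# Toy instance: A = B = I, C = nonnegative orthant, Q = closed unit ball.
A = B = np.eye(2)
proj_C = lambda z: np.maximum(z, 0.0)
proj_Q = lambda z: z if np.linalg.norm(z) <= 1 else z / np.linalg.norm(z)
x, y = self_adaptive_split_equality(A, B, proj_C, proj_Q,
                                    np.array([2.0, -1.0]),
                                    np.array([-1.0, 2.0]))
```

In a full relaxed variant, `proj_C` and `proj_Q` would be replaced at each step by the half-space projections built from subgradients, as described in the previous subsection.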

Next, we will focus on the convergence analysis of Algorithm 6.

Theorem 7. Assume that , and ; then, the sequence generated by Algorithm 6 converges to a solution of (1).

Proof. Taking , one has

From (30) and the fact that the projection is nonexpansive, we have

Since together with (33), we deduce

Similarly, we have

From (35) and (36), it follows which together with (31) means

Furthermore, it follows from (31) and (38) that

By induction, one has

Hence, and are bounded. Following (31), (36), and (39), we have

Without loss of generality, we can assume that there is such that for all . Setting , together with (41), we have the following inequality

Since the sequence is eventually decreasing, it is convergent. From (42), we have Furthermore,

Furthermore, which with (41), (45), and the assumption on means

Note that we have

(47) and (49) imply

Similarly, we have

Thus, and are asymptotically regular. Notice that which implies that

Moreover, it follows from (22) that which with (43), (45), and the assumption on yields

Similarly, one has

Let and be, respectively, weak cluster points of the sequences and ; then there exist two subsequences (again labeled and ) which converge weakly to and , respectively. Next, we will show that . It follows from (30) that

Since the graphs of the maximal monotone operators are weakly-strongly closed, passing to the limit in the last inclusions, we obtain that and .

On the other hand, from Lemma 1 and the definition of , one has where satisfies for all . The lower semicontinuity of function and (41) assert that

Thus, for . Likewise, we can obtain where satisfies for all . The lower semicontinuity of function and (42) lead to

Thus, for . Moreover, the weak convergence of to and the lower semicontinuity of the squared norm imply hence, . This completes the proof.

4. Numerical Examples

We are now in a position to present numerical examples demonstrating the performance and convergence of Algorithm 6. All programs are written in MATLAB 7.0, and all numerical results are carried out on a Lenovo personal computer with an Intel® Core™ i7-7500U CPU (2.70 GHz) and 4.00 GB of RAM. In what follows, we denote the vector with all elements equal to 1 by .

Example 8. Let. Find such that .

Example 9. Let. Find such that

Example 10. Let and . and , where are all generated randomly; , and are positive integers. Find such that

In this example, we consider and three initial values: (i)Case 1 (ii)Case 2 (iii)Case 3

We take , when the algorithm iterates to step , in Algorithm 6. In the following tables and figures, we denote Algorithm 6 and the algorithm in reference [45] by QSPA and RTPPM, respectively, and we use , , and to denote the number of iterations, the CPU time in seconds, and the final solution, respectively. Init. denotes the initial points, and is used as the stopping criterion. The numerical results can be seen in Tables 1–3 and Figures 1–4. For Figures 3 and 4, we take in Example 10.

From Tables 1–3, we can see that the iteration number and CPU time of Algorithm 6 are smaller than those of algorithm RTPPM. Figures 1–4 indicate that Algorithm 6 is also more stable than RTPPM.

Furthermore, to test the stationarity of the iteration number, we carry out 500 experiments with randomly generated initial points; for instance, for in Example 9, the results can be found in Figure 1.

For another initial point, such as in Example 9, the results can be found in Figure 2.

Similarly, we carry out 500 experiments with randomly generated initial points, such as in Example 10; the results can be found in Figure 3.

For another initial point, such as in Example 10, the results can be found in Figure 4.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors’ Contributions

Each author equally contributed to this paper and read and approved the final manuscript.

Acknowledgments

This project is supported by the Natural Science Foundation of China (11401438, 11671228, 11601261, and 11571120), Shandong Provincial Natural Science Foundation (ZR2019MA022), and Project of Shandong Province Higher Educational Science and Technology Program (Grant No. J14LI52).