Journal of Function Spaces

Volume 2018, Article ID 2409375, 8 pages

https://doi.org/10.1155/2018/2409375

## A Hybrid Proximal Algorithm for the Sum of Monotone Operators with Multivalued Mappings

^{1}Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), 1518 Pracharat 1 Road, Wongsawang, Bangsue, Bangkok 10800, Thailand

^{2}Department of Mathematics, Faculty of Science, Kasetsart University, 50 Ngam Wong Wan Road, Ladyaow, Chatuchak, Bangkok 10900, Thailand

^{3}Nonlinear Dynamic Analysis Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), 1518 Pracharat 1 Road, Wongsawang, Bangsue, Bangkok 10800, Thailand

Correspondence should be addressed to K. Sitthithakerngkiet; kanokwan.s@sci.kmutnb.ac.th

Received 2 February 2018; Accepted 17 April 2018; Published 7 June 2018

Academic Editor: Giuseppe Marino

Copyright © 2018 N. Kaewyong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We modify a hybrid method and a proximal point algorithm to iteratively find a zero point of the sum of two monotone operators and a fixed point of a nonspreading multivalued mapping in a Hilbert space, using the forward-backward splitting technique. A strong convergence theorem is established, and an illustrative numerical example is presented. The results of this paper extend and improve some well-known results in the literature.

#### 1. Introduction

In a Hilbert space $H$, many authors have intensively studied the convergence of methods for finding a zero point of a monotone operator, that is, for finding a point $x \in H$ such that $0 \in Bx$, where $B$ is a monotone operator; the set of zero points of $B$ is denoted by $B^{-1}0$. The first method for finding a zero point was introduced by Martinet [1] in 1970. It is well known as the *proximal point algorithm* (PPA), which generates a sequence $\{x_n\}$ by $x_{n+1} = J_{\lambda_n} x_n$, where $J_{\lambda_n} = (I + \lambda_n B)^{-1}$ is the resolvent operator of the maximal monotone operator $B$, $I$ is the identity mapping, and $\{\lambda_n\} \subset (0, \infty)$ is a regularization sequence. The PPA can be related to many kinds of important problems, such as convex minimization problems, equilibrium problems, and variational inequality problems. The iteration (2) is equivalent to the inclusion $x_n \in x_{n+1} + \lambda_n B x_{n+1}$. It is known that the PPA reduces to $x_{n+1} = \arg\min_{y \in H} \{ f(y) + \frac{1}{2\lambda_n}\|y - x_n\|^2 \}$ if we let $B = \partial f$ for a proper convex and lower semicontinuous function $f$.
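Since the PPA iterates are defined entirely through the resolvent, the scheme is easy to try numerically. The following sketch (an illustration, not taken from the paper) runs the PPA for $B = \partial f$ with $f(x) = |x|$ on the real line, where the resolvent $J_\lambda = (I + \lambda \partial f)^{-1}$ is the well-known soft-thresholding operator:

```python
# Minimal sketch of the proximal point algorithm (PPA) for B = ∂f with
# f(x) = |x| on the real line. The resolvent J_λ = (I + λ∂f)^(-1) is the
# soft-thresholding operator, and x_{n+1} = J_λ x_n drives x_n to the
# unique zero point x* = 0 of ∂f.

def soft_threshold(x, lam):
    """Resolvent of λ∂|·|: shrink x toward 0 by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(x0, lam=0.5, steps=50):
    x = x0
    for _ in range(steps):
        x = soft_threshold(x, lam)  # x_{n+1} = J_λ x_n
    return x

print(proximal_point(10.0))  # reaches 0.0, the unique zero of ∂|·|
```

With a fixed step `lam = 0.5` the iterate decreases by 0.5 per step until it hits the zero point and then stays there, which matches the fixed-point characterization of the resolvent.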

Later, Rockafellar [2] presented an inexact variant of the method: $x_{n+1} = J_{\lambda_n} x_n + e_n$, where $\{e_n\}$ is an error sequence. Rockafellar [2] proved that if the errors vanish quickly enough, in the sense that $\sum_{n=0}^{\infty} \|e_n\| < \infty$, and moreover $\liminf_{n\to\infty} \lambda_n > 0$ and $B^{-1}0 \neq \emptyset$, then the sequence $\{x_n\}$ converges weakly to a zero point of $B$.

In 1979, Lions and Mercier [3] presented splitting algorithms for iteratively finding a zero point of the sum of two nonlinear operators. These algorithms were extended to solve nonlinear equations and to seek a solution of the following inclusion problem: find $x \in H$ such that $0 \in (A + B)x$, where $A$ and $B$ are two monotone operators. The inclusion problem can be formulated to cover many important problems, such as finding a stationary solution of the initial value problem of an evolution equation [3] and the minimization problem [4], which is widely used in image recovery, signal processing, and machine learning, as well as equilibrium problems and variational inequality problems; see [5]. A splitting method for solving the inclusion problem (6) is an iterative method in which each iteration involves only the individual operators $A$ and $B$, but not their sum $A + B$. Lions and Mercier [3] introduced nonlinear splitting iterative algorithms to solve the inclusion problem (6), generated by $x_{n+1} = (2J_\lambda^A - I)(2J_\lambda^B - I)x_n$, where $J_\lambda^A = (I + \lambda A)^{-1}$ and $J_\lambda^B = (I + \lambda B)^{-1}$ are the resolvent operators of the monotone operators $A$ and $B$, respectively, with $\lambda > 0$. The algorithm (7) is called the nonlinear Peaceman-Rachford splitting iterative algorithm. Since $(2J_\lambda^A - I)(2J_\lambda^B - I)$ is merely a nonexpansive operator, the algorithm fails, in general, to converge, but the mean averages of the iterates can be weakly convergent; for more details see [6]. However, the nonlinear Douglas-Rachford splitting iterative algorithm (8), generated by $x_{n+1} = J_\lambda^A(2J_\lambda^B - I)x_n + (I - J_\lambda^B)x_n$, always converges in the weak topology to a point because the operator $J_\lambda^A(2J_\lambda^B - I) + (I - J_\lambda^B)$ is firmly nonexpansive.

Manaka and Takahashi [7] extended the PPA to the sum of two monotone operators by using the forward-backward splitting technique, generating a sequence $\{x_n\}$ defined by $x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S J_{\lambda_n}(x_n - \lambda_n A x_n)$, where $S$ is a nonexpansive mapping on a nonempty closed convex subset $C$ of $H$, $J_{\lambda_n} = (I + \lambda_n B)^{-1}$ is the resolvent of a maximal monotone operator $B$ with $\{\lambda_n\}$ a positive sequence, $A$ is an inverse strongly monotone mapping, and $\{\alpha_n\}$ is a sequence in $(0, 1)$. They showed that the sequence $\{x_n\}$ converges weakly to a common solution provided that the control sequences satisfy suitable conditions.
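The forward-backward structure of this scheme is easy to see in code. The following minimal sketch (with assumed, illustrative data rather than the paper's) takes $A(x) = x + 1$, which is $1$-inverse strongly monotone on $\mathbb{R}$, $B = \partial \iota_{[0,\infty)}$ so that $J_\lambda$ is the projection onto $[0, \infty)$, and the nonexpansive map $S(x) = x/2$; the common solution of $F(S) \cap (A+B)^{-1}0$ is $\{0\}$:

```python
# Sketch of the Manaka-Takahashi forward-backward iteration
# x_{n+1} = a x_n + (1 - a) S J_λ(x_n - λ A x_n) on assumed data:
# A(x) = x + 1 (1-inverse strongly monotone), J_λ = P_[0,∞) (resolvent of
# the normal-cone operator of [0, ∞)), S(x) = x / 2 (nonexpansive).
# The iterates approach the common solution 0.

def forward_backward(x0, lam=1.0, alpha=0.5, steps=60):
    x = x0
    for _ in range(steps):
        u = x - lam * (x + 1.0)                  # forward (explicit) step on A
        u = max(u, 0.0)                          # backward step: J_λ = P_[0,∞)
        x = alpha * x + (1 - alpha) * (u / 2.0)  # relax with S(x) = x / 2
    return x

print(forward_backward(5.0))  # approaches 0.0
```

Note that each iteration only evaluates $A$ and the resolvent of $B$ separately, never the sum $A + B$, which is exactly the point of a splitting method.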

In 2014, Cho et al. [8] presented a strong convergence theorem for the solution set in a Hilbert space by using an iterative scheme whose control sequences lie in $(0, 1)$ and whose step sizes form a positive sequence, involving a strictly pseudo-contractive mapping and a contractive mapping.

Iterative algorithms for finding an approximate solution of the sum of two monotone operators and a fixed point of several types of mappings have received a lot of attention recently; for more details, see [9–11].

On the other hand, Iemoto and Takahashi [12] studied the approximation of common fixed points of a nonexpansive mapping $S$ and a nonspreading mapping $T$ in a Hilbert space by using the iterative scheme $x_{n+1} = \alpha_n x_n + (1 - \alpha_n)(\beta_n S x_n + (1 - \beta_n) T x_n)$, where $\{\alpha_n\}, \{\beta_n\} \subset [0, 1]$. They proved that if $F(S) \cap F(T)$ is nonempty, then the sequence generated by (11) converges weakly to a common fixed point of $S$ and $T$. As an extension of the class of mappings, many authors have studied convergence theorems for multivalued mappings (see [13–15]).

In 2016, Suantai et al. [16] considered iterative schemes for solving split equilibrium problems and fixed point problems of nonspreading multivalued mappings in Hilbert spaces and proved that the modified Mann iteration converges weakly to a common solution of the considered problems.

Inspired by [8, 16], in this paper we present a convergence analysis on the set $\Theta = (A + B)^{-1}0 \cap F(T)$, where $T$ is a nonspreading multivalued mapping in a Hilbert space. The results of this paper extend and improve some well-known results in the literature. Furthermore, an illustrative numerical example is presented.

#### 2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and let $C$ be a nonempty closed convex subset of $H$. For any $x, y \in H$ and $t \in [0, 1]$, we see that $\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2.$

An operator $T : C \to C$ is called a nonexpansive mapping if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$, and $T$ is called a firmly nonexpansive mapping if $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$ for all $x, y \in C$. Clearly, the latter inequality is equivalent to $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2$, where $I$ is the identity mapping. For any point $x \in H$, there exists a unique nearest point of $C$, denoted by $P_C x$, such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. The operator $P_C$ is called the metric projection from $H$ onto $C$. It is known that $P_C$ is a firmly nonexpansive mapping; that is, $\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle$. Furthermore, for any $x \in H$ and $z \in C$, we note that $z = P_C x$ if and only if $\langle x - z, z - y \rangle \ge 0$ for all $y \in C$.
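The variational characterization of the metric projection can be checked numerically on a simple convex set. The sketch below (an illustration under assumed data) projects onto the interval $C = [-1, 1]$, where in one dimension the characterization reads $(x - z)(y - z) \le 0$ for all $y \in C$:

```python
# Small 1D check of the metric projection onto the closed convex set
# C = [-1, 1] and its variational characterization:
# z = P_C(x) if and only if (x - z)(y - z) <= 0 for every y in C.

def proj(x, lo=-1.0, hi=1.0):
    """Metric projection of x onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

x = 3.0
z = proj(x)                            # nearest point of C to x
ys = [-1.0, -0.5, 0.0, 0.5, 1.0]       # sample points of C
print(z, all((x - z) * (y - z) <= 0 for y in ys))  # 1.0 True
```

The clamp `min(max(x, lo), hi)` is exactly the nearest-point map for an interval, and the checked inequality is the 1D form of $\langle x - z, y - z \rangle \le 0$.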

A subset $K$ of a Hilbert space $H$ is said to be proximal if, for all $x \in H$, there exists $y \in K$ such that $\|x - y\| = d(x, K)$, where $d(x, K) = \inf_{k \in K} \|x - k\|$.

In this paper, we denote by $CB(C)$, $K(C)$, and $P(C)$ the families of nonempty closed bounded subsets, nonempty compact subsets, and nonempty proximal subsets of $C$, respectively. The *Hausdorff metric* $H$ on $CB(C)$ is defined by $H(A, B) = \max\{\sup_{x \in A} d(x, B), \sup_{y \in B} d(y, A)\}$, where $d(x, B) = \inf_{b \in B} \|x - b\|$. Let $T$ be a multivalued mapping; an element $p \in C$ is called a *fixed point* of $T$ if $p \in Tp$, and we denote the fixed point set of the multivalued operator $T$ by $F(T)$. A multivalued mapping $T$ is said to be *nonexpansive* if $H(Tx, Ty) \le \|x - y\|$ for all $x, y \in C$ and is said to be *quasi-nonexpansive* if $F(T) \neq \emptyset$ and $H(Tx, Tp) \le \|x - p\|$ for all $x \in C$ and $p \in F(T)$. In this paper, we focus on a *k-nonspreading* multivalued mapping $T$, one that satisfies, for all $x, y \in C$, $H(Tx, Ty)^2 \le k\left(d(y, Tx)^2 + d(x, Ty)^2\right)$ for some $k > 0$.
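For finite sets the suprema and infima in the Hausdorff metric become maxima and minima, so the definition above can be computed directly. A minimal sketch (illustrative data assumed):

```python
# Direct computation of the Hausdorff metric H(A, B) between nonempty
# finite subsets of the real line, following the definition
# H(A, B) = max( sup_{a in A} d(a, B), sup_{b in B} d(b, A) ).

def dist(x, S):
    """d(x, S): distance from the point x to the finite set S."""
    return min(abs(x - s) for s in S)

def hausdorff(A, B):
    return max(max(dist(a, B) for a in A),
               max(dist(b, A) for b in B))

print(hausdorff([0.0, 1.0], [0.0, 3.0]))  # → 2.0
```

Here the one-sided excess of $[0, 1]$ over $[0, 3]$ is $1$, while the excess of $[0, 3]$ over $[0, 1]$ is $2$ (attained at the point $3$), so the metric is the larger value, $2$.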

*Condition (I).* Let $H$ be a Hilbert space and $C$ be a subset of $H$. A multivalued mapping $T$ is said to satisfy *Condition (I)* if $\|x - p\| = d(x, Tp)$ for all $x \in H$ and $p \in F(T)$.

*Remark 1. *It is easy to see that $T$ satisfies Condition (I) if and only if $Tp = \{p\}$ for all $p \in F(T)$. We know that if $T$ is nonexpansive with $F(T) \neq \emptyset$, then $T$ is quasi-nonexpansive. Clearly, if $T$ is $\frac{1}{2}$-nonspreading and $F(T) \neq \emptyset$, then $T$ is quasi-nonexpansive. An example in [16] shows a $\frac{1}{2}$-nonspreading multivalued mapping which is not nonexpansive.

A mapping $A : C \to H$ is called $\alpha$-*inverse strongly monotone* if there exists $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2$ for all $x, y \in C$. We see that if $A$ is $\alpha$-*inverse strongly monotone*, then $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous, that is, $\|Ax - Ay\| \le \frac{1}{\alpha}\|x - y\|$ for all $x, y \in C$. Moreover, for any constant $\lambda > 0$, it is easy to see that $\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2$, where $I$ is the identity mapping. In particular, if $\lambda \in (0, 2\alpha]$, then $I - \lambda A$ is a nonexpansive mapping. For more examples of inverse strongly monotone mappings, see [17, 18].
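The nonexpansiveness of $I - \lambda A$ on the stated step-size range can be observed numerically. In this sketch (with assumed data), $A(x) = x/2$ on $\mathbb{R}$ is $2$-inverse strongly monotone, since $\langle Ax - Ay, x - y \rangle = 2|Ax - Ay|^2$, so $I - \lambda A$ should be nonexpansive for every $\lambda \in (0, 4]$:

```python
# Numerical check (sketch) that I - λA is nonexpansive for λ in (0, 2α]
# when A is α-inverse strongly monotone. Here A(x) = x / 2 is 2-inverse
# strongly monotone on R, so the admissible range is λ in (0, 4].

def T(x, lam):
    """(I - λA)x with A(x) = x / 2."""
    return x - lam * (x / 2.0)

pairs = [(-3.0, 5.0), (0.0, 2.0), (1.5, -0.5)]
ok = all(abs(T(x, lam) - T(y, lam)) <= abs(x - y)
         for lam in (0.5, 2.0, 4.0) for (x, y) in pairs)
print(ok)  # True
```

For $\lambda > 4$ the map scales distances by $|1 - \lambda/2| > 1$ and the check fails, matching the sharpness of the bound $\lambda \le 2\alpha$.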

Let $B$ be a mapping of $H$ into $2^H$; the effective domain of $B$ is denoted by $D(B)$; that is, $D(B) = \{x \in H : Bx \neq \emptyset\}$. A multivalued mapping $B$ is said to be a monotone operator on $H$ if $\langle x - y, u - v \rangle \ge 0$ for all $x, y \in D(B)$, $u \in Bx$, and $v \in By$. A monotone operator $B$ on $H$ is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on $H$. For a maximal monotone operator $B$ on $H$ and $\lambda > 0$, we may define a single-valued operator $J_\lambda = (I + \lambda B)^{-1}$, which is called the resolvent of $B$ for $\lambda$. If we let $A$ be a single-valued operator and let $B$ be a maximal monotone operator in $H$ with $(A + B)^{-1}0 \neq \emptyset$ and $\lambda > 0$, then, using the concept in [19], we have $F(J_\lambda(I - \lambda A)) = (A + B)^{-1}0$.

Lemma 2 (see [16]). *Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and $T : C \to K(C)$ be a $\frac{1}{2}$-nonspreading multivalued mapping with $F(T) \neq \emptyset$. Let $\{x_n\}$ be a sequence in $C$ such that $x_n \rightharpoonup p$ and $\lim_{n \to \infty} d(x_n, Tx_n) = 0$ for some $p \in C$. Then $p \in F(T)$.*

#### 3. Main Results

Theorem 3. *Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $B$ be a maximal monotone operator, $A$ be an $\alpha$-inverse strongly monotone operator, and $T$ be a $\frac{1}{2}$-nonspreading multivalued mapping satisfying Condition (I). Assume that $\Theta = (A + B)^{-1}0 \cap F(T) \neq \emptyset$ and $\{x_n\}$ is the sequence defined by the hybrid projection scheme (26) for all $n \ge 0$, where the step sizes $\{\lambda_n\}$ are positive and $\{\alpha_n\}$ is a real number sequence in $(0, 1)$. Suppose that the following conditions hold:* (a) *the step sizes $\{\lambda_n\}$ remain in a compact subinterval of $(0, 2\alpha)$;* (b) *$\limsup_{n \to \infty} \alpha_n < 1$.* *Then, the sequence $\{x_n\}$ converges strongly to the point $P_\Theta x_0$.*

*Proof. *First, we will show that the claim holds by using mathematical induction. Clearly, it holds for the base case, and assume that it holds for some index. Let be fixed. So, we can obtain that and and since and are nonexpansive mappings, we have
Since , there is such that and then we get
From (27), (28), and Condition (I), it follows that
That is, and so . Therefore, for all .

By the assumptions, we can conclude that is a nonempty closed convex subset of and then . For fixed and from , we obtain that
This implies that the sequence is bounded. Since and , by the properties of the metric projection, we have
for any . Next, we want to show that is a Cauchy sequence. We compute
This implies that
By (31) and (33), we get that
Therefore,
We have from (30) and (35) that exists. For any , by using (31) again, we get
Consequently, we obtain that
Hence, as and , we have
Therefore the sequence is a Cauchy sequence. Without loss of generality, we can assume that
Next, we will prove that for some by dividing the proof into 4 steps.

*Step I*. We will prove that .

Since , from (26), we have
and we obtain
By (38), we conclude that
Consider
Then, by (38) and (42), we obtain that

*Step II*. We will show that .

Note that
It follows that
This implies that
By using (44), , and , we conclude that

*Step III*. We will show that .

Note that
Consider
Then, we get
From (49) and (51), we obtain that
From (52), we have
Then, from (45) and (53), we obtain that
It follows that
From (55), in view of condition (b) and (44), we conclude that

*Step IV*. We will show that for some .

By using Condition (I) and (28), we obtain that
This implies that
From (44), we conclude that
Consider
Therefore, we get from (56) and (59) that
Finally, we will prove that . Since and by (56), we get that also. Since and , by using Lemma 2, we have .

Consider
Since and by (56), we get that
That is, as , which implies that . Therefore, we conclude that , which completes the proof.

#### 4. Numerical Example and Convergence Analysis

In this section, we give the following numerical example to confirm the convergence of Theorem 3 by using the algorithm (26).

*Example 1. *Let and . Define the mappings , , and by the following:

We see that the proposed mappings satisfy the assumptions in Theorem 3. For each , we obtain that . It is easy to see that a point is in the fixed point sets of and ; that is, .

In Figure 1, the initial points are randomly chosen from the set and the optimal solution is found within 20 steps. This indicates that the sequence generated by algorithm (26) converges to the same point as a solution of this example; Figure 1 shows the behaviour of the iterates of algorithm (26) converging to that common solution. Moreover, decreasing the parameter sequence reduces the rate of convergence to the optimal solution, as shown in Figure 2. Figure 3 shows that the iteration squeezes the feasible area until the approximated solution is obtained.
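The "squeezing" behaviour of a shrinking-projection (hybrid) iteration can be sketched on a hypothetical one-dimensional instance; the mappings below are illustrative stand-ins, not the example of this paper. In one dimension each constraint $|y_n - z| \le |x_n - z|$ is a half-line, so every set $C_n$ remains an interval that is easy to intersect and project onto:

```python
# Hypothetical 1D sketch in the spirit of a shrinking-projection hybrid
# scheme. Assumed data: A(x) = x / 2 (2-inverse strongly monotone),
# J_λ = P_[0,∞) (resolvent of the normal-cone operator of [0, ∞)),
# relaxation y_n = a x_n + (1 - a) u_n. Each step intersects
# C_{n+1} = { z in C_n : |y_n - z| <= |x_n - z| } (a half-line, so C_n
# stays an interval) and sets x_{n+1} = P_{C_{n+1}}(x_0).

def hybrid(x0, lo=-100.0, hi=100.0, alpha=0.5, lam=1.0, steps=200):
    x, lo_n, hi_n = x0, lo, hi
    for _ in range(steps):
        u = x - lam * (x / 2.0)          # forward step on A(x) = x / 2
        u = max(u, 0.0)                  # backward step: J_λ = P_[0,∞)
        y = alpha * x + (1 - alpha) * u  # relaxed iterate y_n
        if y != x:                       # |y - z| <= |x - z| cuts at the midpoint
            mid = (x + y) / 2.0
            if y < x:
                hi_n = min(hi_n, mid)
            else:
                lo_n = max(lo_n, mid)
        x = min(max(x0, lo_n), hi_n)     # x_{n+1} = P_{C_{n+1}}(x_0)
    return x

print(hybrid(8.0))  # the shrinking intervals drive x_n toward 0.0
```

Each pass tightens one endpoint of the interval $C_n$, so the iterates are forced toward the solution $0$, which mirrors the way the sets in Figure 3 squeeze the area around the approximated solution.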