Abstract

Total variation image denoising models have received considerable attention in the last two decades. To solve constrained total variation image denoising problems, we utilize the computation of a resolvent operator, which consists of a maximal monotone operator and a composite operator. More precisely, the composite operator consists of a maximal monotone operator and a bounded linear operator. Based on recent work, in this paper we propose a fixed-point approach for computing this resolvent operator. Under mild conditions on the iterative parameters, we prove strong convergence of the iterative sequence, which is based on the classical Krasnoselskii–Mann algorithm in general Hilbert spaces. As a direct application, we obtain an effective iterative algorithm for solving the proximity operator of the sum of two convex functions, one of which is the composition of a convex function with a linear transformation. Numerical experiments on image denoising are presented to illustrate the efficiency and effectiveness of the proposed iterative algorithm. In particular, we report the numerical results for the proposed algorithm with different step sizes and relaxation parameters.

1. Introduction

In the last two decades, the total variation (TV) image denoising model proposed by Rudin, Osher, and Fatemi [1] has received considerable attention. Often referred to as the ROF model, this takes the form
$$\min_{x\in\mathbb{R}^N}\ \tfrac{1}{2}\|x-b\|_2^2+\mu\|x\|_{TV},\tag{1}$$
where $b$ is the observed noisy image, $\|x\|_{TV}$ is the total variation of $x$, and $\mu>0$ is the regularization parameter, which balances the data fidelity term and the total variation regularization. In the following, we denote by $x\in\mathbb{R}^N$ the vectorized image, where $N$ is the total number of pixels. Because TV regularization has the advantage of maintaining image edges when removing noise, it has been extended to many other important image processing problems, including image deblurring [2–4], image inpainting [5], image superresolution [6], image segmentation [7], and image reconstruction [8, 9].

Many efficient iterative algorithms have been proposed to solve the ROF model (1). These include the Chambolle gradient projection algorithm and its variants [10–12], the primal-dual hybrid gradient algorithm [13–15], and the split Bregman algorithm [16, 17]. Isotropic total variation (ITV) and anisotropic total variation (ATV) are the most widely employed forms in the literature. It is worth mentioning that TV includes ITV and ATV, both of which can be viewed as compositions of a convex function $\varphi$ with a linear transformation $B$; that is, $\|x\|_{TV}=(\varphi\circ B)(x)$. For specific examples, see [18, 19]. Based on this, Micchelli et al. [20] extended the ROF model (1) to the general convex optimization problem
$$\min_{x\in\mathbb{R}^N}\ \tfrac{1}{2}\|x-b\|_2^2+\mu(\varphi\circ B)(x),\tag{2}$$
where $\mu>0$, $\varphi$ is a proper lower semicontinuous convex function, and $B$ is a linear transformation. The convex optimization problem (2) is equivalent to computing the proximity operator of the function $\mu(\varphi\circ B)$ at the point $b$. Recall that the proximity operator of a proper lower semicontinuous convex function $f$ is defined as
$$\operatorname{prox}_f(x)=\operatorname*{arg\,min}_{u}\ \tfrac{1}{2}\|u-x\|_2^2+f(u).\tag{3}$$
Micchelli et al. [20] proved that the minimization problem (2) can be solved using a fixed-point equation. The advantage of the fixed-point framework is that it provides a convenient analysis of the convergence of the proposed algorithm and enables the development of efficient numerical algorithms via various fixed-point iterations.
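To make the proximity operator in (3) concrete, the following Python sketch (our own illustration, not taken from [20]; the function name prox_l1 is ours) evaluates $\operatorname{prox}_{\mu\|\cdot\|_1}$, whose closed form is componentwise soft-thresholding, and checks numerically that no perturbation decreases the defining objective.

```python
import numpy as np

def prox_l1(x, mu):
    """Proximity operator of f = mu*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

rng = np.random.default_rng(0)
x, mu = rng.standard_normal(5), 0.3
p = prox_l1(x, mu)

# prox_f(x) minimizes u -> 0.5*||u - x||^2 + mu*||u||_1, so no perturbation of p
# should yield a smaller objective value.
obj = lambda u: 0.5 * np.sum((u - x) ** 2) + mu * np.sum(np.abs(u))
print(all(obj(p) <= obj(p + 0.01 * rng.standard_normal(5)) + 1e-12
          for _ in range(1000)))  # True
```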

Because the pixel values of grayscale images are generally distributed in $[0,255]$ or $[0,1]$, it is natural to incorporate this prior information into the ROF model (1). Thus, we obtain the following constrained ROF model:
$$\min_{x\in C}\ \tfrac{1}{2}\|x-b\|_2^2+\mu\|x\|_{TV},\tag{4}$$
where $C$ is a nonempty closed and convex set, which can be chosen as the pixel range of the images. To solve the constrained ROF model (4), it is sufficient to consider the following constrained convex optimization problem:
$$\min_{x\in C}\ \tfrac{1}{2}\|x-b\|_2^2+\mu(\varphi\circ B)(x).\tag{5}$$
In general, the constrained convex optimization problem (5) can always be reformulated as an unconstrained convex optimization problem. More precisely,
$$\min_{x\in\mathbb{R}^N}\ \tfrac{1}{2}\|x-b\|_2^2+\mu(\varphi\circ B)(x)+\iota_C(x),\tag{6}$$
where $\iota_C$ denotes the indicator function of $C$, which is defined by $\iota_C(x)=0$ if $x\in C$ and $\iota_C(x)=+\infty$ otherwise. As the fixed-point algorithm proposed by Micchelli et al. [20] cannot be applied directly to the problem in (6), this motivated us to develop an efficient iterative algorithm for this purpose, which we then apply to the constrained total variation denoising model (4).

As the indicator function belongs to the class of proper lower semicontinuous convex functions, we are motivated to solve the following general convex optimization problem in general Hilbert spaces:
$$\min_{x\in\mathcal{H}}\ \tfrac{1}{2}\|x-z\|^2+\psi(x)+(\varphi\circ B)(x),\tag{7}$$
where $z\in\mathcal{H}$ is given, $\psi$ and $\varphi$ are proper lower semicontinuous convex functions, and $B$ is a bounded linear operator. To achieve this goal, let us recall the definition of the resolvent operator. Let $\mathcal{H}$ be a real Hilbert space and let $A:\mathcal{H}\to2^{\mathcal{H}}$ be a maximal monotone operator. The resolvent of $A$ is the single-valued operator $J_A=(\operatorname{Id}+A)^{-1}$. Next, let $f$ be a proper lower semicontinuous convex function, let $\partial f$ denote the subdifferential of $f$, and let $\lambda>0$. Then, the resolvent of $\lambda\partial f$ coincides with the proximity operator as follows:
$$J_{\lambda\partial f}=\operatorname{prox}_{\lambda f}.\tag{8}$$
Under certain qualification conditions, the problem considered in (7) can be solved via the resolvent operator of $\partial\psi+B^*(\partial\varphi)B$. Overall, this leads to the problem of computing the resolvent operator of $A+B^*DB$. More precisely, let $z\in\mathcal{H}$ and compute
$$J_{A+B^*DB}(z)=\bigl(\operatorname{Id}+A+B^*DB\bigr)^{-1}(z),\tag{9}$$
where $A:\mathcal{H}\to2^{\mathcal{H}}$ and $D:\mathcal{G}\to2^{\mathcal{G}}$ are two maximal monotone operators and $B$ is a bounded linear operator from the Hilbert space $\mathcal{H}$ to the Hilbert space $\mathcal{G}$. Following this, let $A=\partial\psi$ and $D=\partial\varphi$. Hence, the problem (9) reduces to the convex optimization problem (7), because $J_{\partial\psi+B^*(\partial\varphi)B}=\operatorname{prox}_{\psi+\varphi\circ B}$ under the same qualification conditions.

1.1. Existing Work

Next, let us briefly review some existing work concerning the computation of resolvent operators. Bauschke and Combettes [21] extended the Dykstra algorithm [22] for computing projections onto the intersection of two closed convex sets to compute the resolvent of the sum of two maximal monotone operators. Hence, they obtained an algorithm for finding the proximity operator of the sum of two proper lower semicontinuous convex functions. Combettes [23] proposed two inexact parallel splitting algorithms for computing the resolvent of a weighted sum of maximal monotone operators. The key idea was to reformulate the weighted sum of maximal monotone operators as a sum of two maximal monotone operators in a product space. The two iterative algorithms were based on extensions of the Douglas-Rachford splitting and Dykstra-like methods, respectively. Furthermore, Combettes [23] applied these algorithms when computing the proximity operator of a weighted sum of proper lower semicontinuous convex functions. In more recent work, Artacho and Campoy [24] generalized the averaged alternating modified reflection algorithm [25] to compute the resolvent of the sum of two maximal monotone operators.

In contrast, Moudafi [26] proposed a fixed-point algorithm to compute the resolvent of the composite operator $B^*DB$, where $B$ is a bounded linear operator with adjoint $B^*$, $D$ is a maximal monotone operator, and $\mathcal{H}$ and $\mathcal{G}$ are two Hilbert spaces. When $D$ is the subdifferential of a proper lower semicontinuous convex function, this algorithm reduces to the fixed-point approach to computing the proximity operator proposed by Micchelli et al. [20]. It is clear that the resolvent of the operator $B^*DB$ is a special case of (9), obtained by allowing $A=0$. In addition, Combettes et al. [27] proposed a dual forward-backward splitting algorithm to solve the convex optimization problem in (7) (see also [28]). The main idea in this case was to first derive the dual problem of (7) and then apply the forward-backward splitting algorithm to solve this dual problem. Finally, the convergence of the primal iterative sequence was proved using the connection between the primal optimal solution and the dual optimal solution. However, the authors did not apply the obtained iterative algorithm to solve the constrained ROF model (4). Beck and Teboulle [3] solved the constrained ROF model (4) as a subproblem of image deblurring. Chan et al. [29] applied the alternating direction method of multipliers to solve the constrained total variation image deblurring problem. Their numerical results confirmed that the quality of the restored image could be improved by incorporating prior information of the images as a constraint. However, neither study considered the general resolvent operator of (9).

In this study, we focus on computing the resolvent of the operator $A+B^*DB$ in (9), where $A$ and $D$ are two maximal monotone operators and $B$ is a bounded linear operator. We assume that the operator $A+B^*DB$ is maximally monotone. This is true, for example, under a relative-interior qualification condition involving the domains of $A$ and $D$ and the range of $B$ [30]. If $B=\operatorname{Id}$, then the problem (9) becomes that of computing the resolvent operator of $A+D$. Inspired by the work of Moudafi [26] and Combettes et al. [27], we propose a fixed-point approach to computing the resolvent operator (9). First, we show that the resolvent operator (9) can be characterized using a fixed-point equation. Subsequently, we propose a fixed-point algorithm and prove the strong convergence of its iterative sequence. Next, we employ the proposed iterative algorithm to solve the convex optimization problem (7) arising in the field of image denoising. Finally, we conduct numerical experiments on image denoising to validate the effectiveness of the proposed algorithm. In particular, we show how the performance of the algorithm is influenced by the selection of the step size and relaxation parameters.

The remainder of this paper is organized as follows. In Section 2, we introduce some notation and present useful definitions and lemmas. In Section 3, we present the main fixed-point algorithm and prove its strong convergence. In Section 4, we employ the obtained iterative algorithm to solve a particular convex optimization problem, which is related to the calculation of the resolvent operator (9). In Section 5, we present some numerical experiments on image denoising to illustrate the performance of our proposed algorithm. Finally, we provide some conclusions and ideas for future work in Section 6.

2. Preliminaries

In this section, we review some basic definitions and lemmas in monotone operator theory and convex analysis, which will be used throughout this paper. First, let $\mathcal{H}$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and associated norm $\|\cdot\|$. Let $\operatorname{Id}$ denote the identity operator on $\mathcal{H}$. We use $x_n\rightharpoonup x$ to indicate that the sequence $(x_n)$ converges weakly to $x$ and $x_n\to x$ to indicate that the sequence $(x_n)$ converges strongly to $x$. Let $\mathcal{B}(\mathcal{H},\mathcal{G})$ denote the set of all bounded linear operators from the Hilbert space $\mathcal{H}$ to the Hilbert space $\mathcal{G}$. Let $B\in\mathcal{B}(\mathcal{H},\mathcal{G})$. Then, the adjoint of $B$ is the unique operator $B^*\in\mathcal{B}(\mathcal{G},\mathcal{H})$ such that $\langle Bx,y\rangle=\langle x,B^*y\rangle$ for all $x\in\mathcal{H}$ and $y\in\mathcal{G}$.

Let $A:\mathcal{H}\to2^{\mathcal{H}}$ be a set-valued operator. The sets $\operatorname{dom}A=\{x\in\mathcal{H}:Ax\neq\emptyset\}$, $\operatorname{ran}A=\{u\in\mathcal{H}:(\exists\,x\in\mathcal{H})\ u\in Ax\}$, $\operatorname{gra}A=\{(x,u)\in\mathcal{H}\times\mathcal{H}:u\in Ax\}$, and $\operatorname{zer}A=\{x\in\mathcal{H}:0\in Ax\}$ represent the domain, range, graph, and zero set of $A$, respectively. The inverse of $A$ is the set-valued operator $A^{-1}$ with the graph $\operatorname{gra}A^{-1}=\{(u,x)\in\mathcal{H}\times\mathcal{H}:u\in Ax\}$.

We now introduce some definitions and lemmas, most of which can be found in [28, 31].

Definition 1 (see [28]; maximal monotone operator). Let $A:\mathcal{H}\to2^{\mathcal{H}}$ be a set-valued operator. $A$ is said to be monotone if
$$\langle x-y,\,u-v\rangle\ge0\quad\text{for all }(x,u),(y,v)\in\operatorname{gra}A.$$
Moreover, $A$ is maximally monotone if its graph is not strictly contained in the graph of any other monotone operator.

Definition 2 (see [28]; resolvent and Yosida approximation). Let $A:\mathcal{H}\to2^{\mathcal{H}}$ and $\lambda>0$. The resolvent of $A$ is $J_A=(\operatorname{Id}+A)^{-1}$, and the Yosida approximation of $A$ is $A_\lambda=\tfrac{1}{\lambda}(\operatorname{Id}-J_{\lambda A})$.

For any $\lambda>0$, it is easy to derive that $A_\lambda=(\lambda\operatorname{Id}+A^{-1})^{-1}$.

Lemma 3 (see [28]). Let $A:\mathcal{H}\to2^{\mathcal{H}}$ be maximally monotone and $\lambda>0$. Then, the following hold: (i) $J_{\lambda A}$ and $\operatorname{Id}-J_{\lambda A}$ are firmly nonexpansive and maximally monotone. (ii) The Yosida approximation $A_\lambda$ is $\lambda$-cocoercive and maximally monotone.

Definition 4 (see [28]). Let $\Omega$ be a nonempty subset of $\mathcal{H}$ and $T:\Omega\to\mathcal{H}$. Then (i) $T$ is nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in\Omega$. (ii) $T$ is firmly nonexpansive if $\|Tx-Ty\|^2\le\langle x-y,\,Tx-Ty\rangle$ for all $x,y\in\Omega$. (iii) $T$ is $\alpha$-averaged if there exist a nonexpansive operator $R$ and $\alpha\in(0,1)$ such that $T=(1-\alpha)\operatorname{Id}+\alpha R$, where $\operatorname{Id}$ denotes the identity operator. (iv) $T$ is $\beta$-cocoercive (or $\beta$-inverse strongly monotone) for $\beta>0$ if $\langle x-y,\,Tx-Ty\rangle\ge\beta\|Tx-Ty\|^2$ for all $x,y\in\Omega$.

Remark 5. (i) An equivalent definition of firm nonexpansiveness is that $\|Tx-Ty\|^2+\|(\operatorname{Id}-T)x-(\operatorname{Id}-T)y\|^2\le\|x-y\|^2$ for all $x,y\in\Omega$. It is easy to check that firm nonexpansiveness implies nonexpansiveness. If $T$ is firmly nonexpansive, then $T$ is $1$-cocoercive.
(ii) An equivalent definition of $\alpha$-averagedness is $\|Tx-Ty\|^2\le\|x-y\|^2-\tfrac{1-\alpha}{\alpha}\|(\operatorname{Id}-T)x-(\operatorname{Id}-T)y\|^2$ for all $x,y\in\Omega$. Thus, if $T$ is averaged, then $T$ is also nonexpansive.
(iii) Let $T$ be $\beta$-cocoercive. Then, for any $\gamma>0$, $\gamma T$ is $(\beta/\gamma)$-cocoercive. Furthermore, owing to the Cauchy-Schwarz inequality, $T$ is $\tfrac{1}{\beta}$-Lipschitz continuous; i.e., $\|Tx-Ty\|\le\tfrac{1}{\beta}\|x-y\|$.
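As a quick numerical sanity check of Remark 5(i) (our own illustration, not part of [28]), the projection onto a box is firmly nonexpansive and therefore satisfies the cocoercivity inequality $\langle x-y,\,Tx-Ty\rangle\ge\|Tx-Ty\|^2$; the short Python sketch below verifies this for random points.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=255.0):
    """Projection onto the box [lo, hi]^n (a firmly nonexpansive operator)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    x, y = 600 * rng.standard_normal(10), 600 * rng.standard_normal(10)
    Tx, Ty = proj_box(x), proj_box(y)
    # firm nonexpansiveness  =>  <x - y, Tx - Ty> >= ||Tx - Ty||^2
    ok &= np.dot(x - y, Tx - Ty) >= np.dot(Tx - Ty, Tx - Ty) - 1e-10
print(ok)  # True
```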

The following lemma provides some useful relationships between an operator $T$ and $\operatorname{Id}-T$.

Lemma 6 (see [28]). Let $\Omega$ be a nonempty subset of $\mathcal{H}$ and let $T:\Omega\to\mathcal{H}$. Then (i) $T$ is nonexpansive if and only if $\operatorname{Id}-T$ is $\tfrac12$-cocoercive. (ii) $T$ is firmly nonexpansive if and only if $\operatorname{Id}-T$ is firmly nonexpansive. (iii) $T$ is $\alpha$-averaged if and only if $\operatorname{Id}-T$ is $\beta$-cocoercive, where $\beta=\tfrac{1}{2\alpha}$.

The following lemma shows that the composition of two averaged operators is also averaged. This result first appeared in the work of Ogura and Yamada [32]. Combettes and Yamada [33] subsequently confirmed it with a different proof.

Lemma 7 (see [32]). Let $\Omega$ be a nonempty subset of $\mathcal{H}$. Furthermore, let $T_1:\Omega\to\mathcal{H}$ be $\alpha_1$-averaged and let $T_2:\Omega\to\Omega$ be $\alpha_2$-averaged. Then $T_1T_2$ is averaged.

We also make full use of the following lemma.

Lemma 8 (see [28]). Let $x,y\in\mathcal{H}$ and $\alpha\in\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Then
$$\|\alpha x+(1-\alpha)y\|^2=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2.$$
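For completeness, the identity of Lemma 8 (stated here in the standard form we use) follows from a direct expansion of the squared norm:
$$\begin{aligned}
\|\alpha x+(1-\alpha)y\|^2
&=\alpha^2\|x\|^2+2\alpha(1-\alpha)\langle x,y\rangle+(1-\alpha)^2\|y\|^2\\
&=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\bigl(\|x\|^2-2\langle x,y\rangle+\|y\|^2\bigr)\\
&=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2.
\end{aligned}$$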

We end this section by introducing the Krasnoselskii–Mann algorithm. Theorem 9 provides a fundamental tool for studying the convergence of many operator splitting methods.

Theorem 9 (see [28]; Krasnoselskii–Mann algorithm). Let $\Omega$ be a nonempty closed convex subset of $\mathcal{H}$, and let $T:\Omega\to\Omega$ be a nonexpansive operator such that $\operatorname{Fix}T\neq\emptyset$, where $\operatorname{Fix}T$ denotes the fixed-point set of $T$. Let $(\alpha_n)\subset[0,1]$ be such that $\sum_{n}\alpha_n(1-\alpha_n)=+\infty$. For any $x_0\in\Omega$, define
$$x_{n+1}=x_n+\alpha_n(Tx_n-x_n).$$
Then, the following hold: (i) $(x_n)$ is Fejér monotone with respect to $\operatorname{Fix}T$; i.e., $\|x_{n+1}-x\|\le\|x_n-x\|$ for any $x\in\operatorname{Fix}T$. (ii) $(Tx_n-x_n)$ converges strongly to $0$. (iii) $(x_n)$ converges weakly to a fixed point in $\operatorname{Fix}T$.
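To illustrate Theorem 9, the following self-contained Python toy example (our own; it is not tied to the operators of the later sections) runs the Krasnoselskii–Mann iteration $x_{n+1}=x_n+\alpha_n(Tx_n-x_n)$ with the nonexpansive operator $T=P_{C_1}\circ P_{C_2}$, the composition of two projections. When $C_1\cap C_2\neq\emptyset$, the fixed points of $T$ are exactly the points of $C_1\cap C_2$, and the residual $\|Tx_n-x_n\|$ tends to zero as guaranteed by part (ii).

```python
import numpy as np

def proj_box(x):          # projection onto C1 = [0, 1]^n
    return np.clip(x, 0.0, 1.0)

def proj_hyperplane(x):   # projection onto C2 = {x : sum(x) = 1}
    return x - (x.sum() - 1.0) / x.size

T = lambda x: proj_box(proj_hyperplane(x))   # nonexpansive; Fix T = C1 ∩ C2

x = np.array([3.0, -2.0, 5.0, 0.5])
alpha = 0.5                                   # constant relaxation parameter in [0, 1]
for n in range(200):
    x = x + alpha * (T(x) - x)                # Krasnoselskii–Mann update

print(x, np.linalg.norm(T(x) - x))            # x in [0,1]^4 with sum ~ 1, residual ~ 0
```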

3. Computing the Resolvent Operator (9)

Before presenting our main results, we first introduce some notation. For a fixed $\lambda>0$ and a given $z\in\mathcal{H}$, define the operators $Q:\mathcal{G}\to\mathcal{G}$ and $T_\lambda:\mathcal{G}\to\mathcal{G}$ as follows:
$$Q(v)=-BJ_A(z-B^*v)\tag{13}$$
and
$$T_\lambda(v)=J_{\lambda D^{-1}}\bigl(v-\lambda Q(v)\bigr)=J_{\lambda D^{-1}}\bigl(v+\lambda BJ_A(z-B^*v)\bigr).\tag{14}$$

In addition, the following lemma provides a fixed-point characterization of the resolvent operator (9).

Lemma 10. Let $A:\mathcal{H}\to2^{\mathcal{H}}$ and $D:\mathcal{G}\to2^{\mathcal{G}}$ be two maximal monotone operators. Furthermore, let $B\in\mathcal{B}(\mathcal{H},\mathcal{G})$ and $\lambda>0$. Then, for a given $z\in\mathcal{H}$,
$$x=J_{A+B^*DB}(z)\tag{15}$$
if and only if $x=J_A(z-B^*v)$, where $v\in\mathcal{G}$ is a fixed point of $T_\lambda$.

Proof. It follows from the definition of the resolvent operator that . Thus, we can writeLet satisfy (15). Then we have ; that is . By (16), we have . Thus, . Using the fact that , we deduce thatwhich implies that is a fixed point of .
Conversely, let be a fixed point of . Then, we have thatThis completes the proof.

Next, we prove the following lemma, which characterizes an important property of the operator $Q$.

Lemma 11. Let $A:\mathcal{H}\to2^{\mathcal{H}}$ be a maximal monotone operator, and let $B\in\mathcal{B}(\mathcal{H},\mathcal{G})$. For a given $z\in\mathcal{H}$ and $\lambda>0$, define the operator $Q:\mathcal{G}\to\mathcal{G}$ by $Q(v)=-BJ_A(z-B^*v)$. Then (i) $Q$ is $\tfrac{1}{\|B\|^2}$-cocoercive. (ii) For any $\lambda\in(0,2/\|B\|^2)$, $\operatorname{Id}-\lambda Q$ is averaged.

Proof. (i) Let . Then, we have thatThe first inequality follows from the fact that the resolvent operator is firmly nonexpansive, and the second is trivial. Thus, is cocoercive.
(ii) For any , we know from Lemma 6(iii) that is averaged.

Lemma 11 shows that, for any $\lambda\in(0,2/\|B\|^2)$, $\operatorname{Id}-\lambda Q$ is averaged. On the other hand, $J_{\lambda D^{-1}}$ is firmly nonexpansive and is therefore also averaged. According to Lemma 7, $T_\lambda=J_{\lambda D^{-1}}\circ(\operatorname{Id}-\lambda Q)$ is averaged.

Now, we are ready to present our main results.

Theorem 12. Let $A:\mathcal{H}\to2^{\mathcal{H}}$ and $D:\mathcal{G}\to2^{\mathcal{G}}$ be two maximal monotone operators. Furthermore, let $B\in\mathcal{B}(\mathcal{H},\mathcal{G})$ and $z\in\mathcal{H}$. For any $v_0\in\mathcal{G}$, we define the iterative sequences $(x_n)$ and $(v_n)$ as follows:
$$x_n=J_A(z-B^*v_n),\qquad v_{n+1}=(1-\alpha_n)v_n+\alpha_nJ_{\lambda D^{-1}}(v_n+\lambda Bx_n),\tag{20}$$
where $\lambda\in(0,2/\|B\|^2)$ and the nonnegative relaxation parameters $(\alpha_n)$ satisfy the Krasnoselskii–Mann condition of Theorem 9 applied to the averaged operator $T_\lambda$. Then, the following hold: (i) $(v_n)$ converges weakly to a fixed point of $T_\lambda$. (ii) Suppose that $A+B^*DB$ is maximally monotone. Then, $(x_n)$ converges strongly to the resolvent operator (9).

Proof. (i) Because the resolvent operator exists and is unique, the first part of the Lemma 10 implies that has a fixed point. Let . Then, the iterative sequence can then be rewritten as follows:Because is averaged, there exists a nonexpansive mapping such that and . Therefore, the iterative sequence (21) is equivalent toNotice that and . By Theorem 9, we can conclude the following:(1) is Fejér monotone with respect to .(2) as .(3) converges weakly to a fixed point of .Taking into account the fact that , also converges weakly to a fixed point of , and is Fejér monotone with respect to . Therefore, exists for any . Further, we have that , as .
(ii) Let . By Lemma 11 (ii) and Remark 5 (ii), we then have thatwhere . From Lemma 8, we have thatSubstituting (23) into (24), we derivewhich implies thatBecause , exists, and . Taking in the above inequality (26), we obtainBy Lemma 10, , where . Then, we have thatThe first inequality follows from the fact that the resolvent operator is firmly nonexpansive, and the final inequality is derived from the Cauchy-Schwarz inequality. Taking on the right of (28), we have that . Thus, we can conclude that the iterative sequence converges strongly to the resolvent operator of . This completes the proof.

Remark 13. We observe that the choice $\alpha_n\equiv1$ is admissible for any $\lambda\in(0,2/\|B\|^2)$, because $T_\lambda$ is averaged. Taking $\alpha_n\equiv1$ in (20), we obtain a simple iterative algorithm to compute the resolvent operator problem (9). More precisely, for any $v_0\in\mathcal{G}$ the iterative sequences $(x_n)$ and $(v_n)$ are generated as follows:
$$x_n=J_A(z-B^*v_n),\qquad v_{n+1}=J_{\lambda D^{-1}}(v_n+\lambda Bx_n),\tag{29}$$
where $\lambda\in(0,2/\|B\|^2)$. The iterative algorithm (29) is exactly the Picard iteration for $T_\lambda$, which can easily be applied in practice. In fact, the iterative scheme (29) can be rewritten as
$$v_{n+1}=J_{\lambda D^{-1}}\bigl(v_n+\lambda BJ_A(z-B^*v_n)\bigr).\tag{30}$$
With the help of the relation $J_{\lambda D^{-1}}(u)=u-\lambda J_{\lambda^{-1}D}(u/\lambda)$, we obtain from (30) that
$$v_{n+1}=v_n+\lambda BJ_A(z-B^*v_n)-\lambda J_{\lambda^{-1}D}\bigl(v_n/\lambda+BJ_A(z-B^*v_n)\bigr).\tag{31}$$
Let $w_n=v_n/\lambda$. Then, the iterative scheme (31) is equivalent to
$$w_{n+1}=w_n+BJ_A(z-\lambda B^*w_n)-J_{\lambda^{-1}D}\bigl(w_n+BJ_A(z-\lambda B^*w_n)\bigr).\tag{32}$$

Remark 14. We observe that computing the resolvent operator of $A+B^*DB$ in (9) is equivalent to solving the following monotone inclusion problem: find $x\in\mathcal{H}$ such that
$$z\in x+Ax+B^*D(Bx),\tag{33}$$
where $z$, $A$, $D$, and $B$ are the same as in (9). This monotone inclusion (33) can be solved using existing methods for more general monotone inclusion problems (such as [34–37]). However, these iterative algorithms are not identical to our proposed algorithm. To illustrate this point, we explain the proposed algorithm from the perspective of duality. In fact, the dual monotone inclusion of (33) is: find $v\in\mathcal{G}$ such that
$$0\in D^{-1}v-BJ_A(z-B^*v).\tag{34}$$
By Lemma 11, we know that $Q=-BJ_A(z-B^*\cdot)$ is $\tfrac{1}{\|B\|^2}$-cocoercive. Thus, if $v$ is a solution of the dual monotone inclusion (34), then $x=J_A(z-B^*v)$ is a solution of the primal monotone inclusion (33). Next, we apply the relaxed forward-backward splitting algorithm (see, e.g., Theorem 26.14 of [28]) to solve the dual monotone inclusion (34). More precisely, let $v_0\in\mathcal{G}$ and set
$$v_{n+1}=(1-\alpha_n)v_n+\alpha_nJ_{\lambda D^{-1}}\bigl(v_n+\lambda BJ_A(z-B^*v_n)\bigr),\tag{35}$$
which is equivalent to the iterative algorithm introduced in Theorem 12. For convenience, we summarize the differences between the proposed algorithm and existing algorithms in Table 1.

Let $A=0$ in Theorem 12. Then, we obtain the following corollary.

Corollary 15. Let $D:\mathcal{G}\to2^{\mathcal{G}}$ be a maximal monotone operator and $B\in\mathcal{B}(\mathcal{H},\mathcal{G})$, and let $z\in\mathcal{H}$. For any $v_0\in\mathcal{G}$, we define the iterative sequences $(x_n)$ and $(v_n)$ as follows:
$$x_n=z-B^*v_n,\qquad v_{n+1}=(1-\alpha_n)v_n+\alpha_nJ_{\lambda D^{-1}}(v_n+\lambda Bx_n),\tag{36}$$
where $\lambda$ and $(\alpha_n)$ are the same as in Theorem 12. Then, the following hold: (i) $(v_n)$ converges weakly to a fixed point of $T_\lambda$, where $T_\lambda(v)=J_{\lambda D^{-1}}\bigl(v+\lambda B(z-B^*v)\bigr)$. (ii) If $B^*DB$ is maximally monotone, then $(x_n)$ converges strongly to the resolvent of $B^*DB$ at $z$; that is, $x_n\to J_{B^*DB}(z)$.

Remark 16. Moudafi [26] proposed the following iterative algorithm to solve the resolvent operator . Let . Then, the iterative sequence is defined bywhere and such that . Moudafi proved that , where is the weak limit point of . Furthermore, he presented several applications of the obtained iterative algorithm.
Corollary 15 extends some of the results from [26] in two aspects: (i) the admissible range of the step size $\lambda$ is expanded, and (ii) we obtain strong convergence of the iterative sequence $(x_n)$ (which was not studied in [26]).

Let $B=\operatorname{Id}$ in Theorem 12. Then, we obtain the following corollary.

Corollary 17. Let $A$ and $D$ be two maximal monotone operators on $\mathcal{H}$. In addition, let $z\in\mathcal{H}$. For any $v_0\in\mathcal{H}$, we define the iterative sequences $(x_n)$ and $(v_n)$ as follows:
$$x_n=J_A(z-v_n),\qquad v_{n+1}=(1-\alpha_n)v_n+\alpha_nJ_{\lambda D^{-1}}(v_n+\lambda x_n),\tag{38}$$
where $\lambda\in(0,2)$ and $(\alpha_n)$ is chosen as in Theorem 12. Then, the following hold: (i) $(v_n)$ converges weakly to a fixed point of $T_\lambda$, where $T_\lambda(v)=J_{\lambda D^{-1}}\bigl(v+\lambda J_A(z-v)\bigr)$. (ii) Suppose that $A+D$ is maximally monotone. Then, $(x_n)$ converges strongly to the resolvent operator $J_{A+D}(z)$.

Remark 18. The obtained iterative algorithm (38) for computing the resolvent operator $J_{A+D}$ is different from those proposed by Bauschke and Combettes [21, 23]. In fact, similar to (32), the iterative scheme (38) can be rewritten as
$$x_n=J_A(z-\lambda w_n),\qquad w_{n+1}=w_n+\alpha_n\bigl(x_n-J_{\lambda^{-1}D}(w_n+x_n)\bigr),\tag{39}$$
where $w_n=v_n/\lambda$.
Bauschke and Combettes [21] proposed a Dykstra-like algorithm to compute the resolvent operator . Let , , and . Then, the Dykstra-like algorithm is defined byIn addition, Bauschke and Combettes proved that and generated by (40) converge strongly to . Following some simple calculation, the Dykstra-like algorithm (40) can be rewritten as follows:On the other hand, Combettes [23] proposed an inexact Douglas–Rachford splitting algorithm and an inexact Dykstra-like algorithm for computing the resolvent of the sum of a finite family of maximal monotone operators. For the resolvent of the sum of two maximal monotone operators, the inexact Dykstra-like algorithm without errors coincided with the iterative algorithm (40). For simplicity, we have presented the inexact Douglas-Rachford splitting algorithm without errors for computing the resolvent of the sum of two maximal monotone operators. Let , and define the iterative sequences as follows:where , and such that and . Next, the sequences and converge strongly to . In fact, this iterative algorithm (42) is equivalent to applying the Douglas–Rachford splitting algorithm to the monotone inclusion ofComparing (39), (40), and (42), we find that the obtained iterative sequences generated by all algorithms converge strongly to the resolvent operator . The iterative algorithms (39) and (42) do not have any requirements for the initial value, whereas (40) requires the selection of a fixed initial value. Unlike (39) and (42), the Dykstra-like algorithm (40) does not require tuning of the parameters.

4. Application to Convex Optimization Problem

In this section, we apply the obtained results to solve a particular convex optimization problem that has been studied in the literature.

For convenience, we introduce some notation. A function $f:\mathcal{H}\to(-\infty,+\infty]$ is called proper if $\operatorname{dom}f=\{x\in\mathcal{H}:f(x)<+\infty\}\neq\emptyset$. $\Gamma_0(\mathcal{H})$ denotes the class of proper lower semicontinuous convex functions from $\mathcal{H}$ to $(-\infty,+\infty]$. The Fenchel conjugate of $f$ is defined by $f^*(u)=\sup_{x\in\mathcal{H}}\{\langle x,u\rangle-f(x)\}$.

Problem 19. Let $\mathcal{H}$ and $\mathcal{G}$ be real Hilbert spaces, and let $\psi\in\Gamma_0(\mathcal{H})$ and $\varphi\in\Gamma_0(\mathcal{G})$ be proper lower semicontinuous convex functions. Furthermore, let $B:\mathcal{H}\to\mathcal{G}$ be a nonzero bounded linear operator such that $0\in\operatorname{core}\bigl(B(\operatorname{dom}\psi)-\operatorname{dom}\varphi\bigr)$, where the core of a subset $E$ of $\mathcal{G}$ is defined by $\operatorname{core}E=\{x\in E:\operatorname{cone}(E-x)=\mathcal{G}\}$ (see, e.g., Definition 6.9 of [28]). For a given $z\in\mathcal{H}$, consider the following convex optimization problem:
$$\min_{x\in\mathcal{H}}\ \tfrac{1}{2}\|x-z\|^2+\psi(x)+\varphi(Bx).\tag{44}$$
The minimization problem (44) is equivalent to computing the proximity operator of $\psi+\varphi\circ B$ at $z$. As a direct application of the resolvent operator (9), we obtain the following convergence theorem from Theorem 12.

Theorem 20. Under the conditions of Problem 19, let $v_0\in\mathcal{G}$ and set $(x_n)$ and $(v_n)$ as
$$x_n=\operatorname{prox}_\psi(z-B^*v_n),\qquad v_{n+1}=(1-\alpha_n)v_n+\alpha_n\operatorname{prox}_{\lambda\varphi^*}(v_n+\lambda Bx_n),\tag{45}$$
where $\lambda\in(0,2/\|B\|^2)$ and $(\alpha_n)$ is chosen as in Theorem 12. Then, the iterative sequence $(x_n)$ converges strongly to a solution of (44).

Proof. Let $A=\partial\psi$ and $D=\partial\varphi$ in (9), and let $z$ be as in Problem 19. Because $\psi\in\Gamma_0(\mathcal{H})$ and $\varphi\in\Gamma_0(\mathcal{G})$, $\partial\psi$ and $\partial\varphi$ are maximal monotone operators. Furthermore, note that $J_{\partial\psi}=\operatorname{prox}_\psi$ and $J_{\lambda(\partial\varphi)^{-1}}=J_{\lambda\partial\varphi^*}=\operatorname{prox}_{\lambda\varphi^*}$. Hence, the iterative sequences (45) follow directly from (20). By Theorem 12, we conclude that $(x_n)$ converges strongly to a solution of (44). This completes the proof.

Remark 21. The Moreau identity states that, for any $x\in\mathcal{H}$ and $\lambda>0$, $\operatorname{prox}_{\lambda f}(x)+\lambda\operatorname{prox}_{\lambda^{-1}f^*}(x/\lambda)=x$, where $f^*$ denotes the Fenchel conjugate of $f$. It follows from this that the iterative sequences (45) can be rewritten as
$$x_n=\operatorname{prox}_\psi(z-B^*v_n),\qquad v_{n+1}=v_n+\alpha_n\lambda\bigl(Bx_n-\operatorname{prox}_{\lambda^{-1}\varphi}(v_n/\lambda+Bx_n)\bigr).\tag{46}$$
Let $w_n=v_n/\lambda$. Then, (46) is reduced to
$$x_n=\operatorname{prox}_\psi(z-\lambda B^*w_n),\qquad w_{n+1}=w_n+\alpha_n\bigl(Bx_n-\operatorname{prox}_{\lambda^{-1}\varphi}(w_n+Bx_n)\bigr).\tag{47}$$
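To illustrate the structure of the dual iteration above, the following Python sketch applies the unrelaxed update of (45) to the one-dimensional toy problem $\min_x\ \tfrac12\|x-z\|_2^2+\iota_{[0,1]^N}(x)+\mu\|Bx\|_1$, where $B$ is the forward-difference operator. The concrete functions, step size, and helper names are our own choices for illustration and are not part of the original presentation; here $\operatorname{prox}_\psi$ is the projection onto $[0,1]^N$ and $\operatorname{prox}_{\lambda\varphi^*}$ is the componentwise projection onto $[-\mu,\mu]$, since $\varphi^*$ is an indicator function.

```python
import numpy as np

def forward_diff(x):                 # B: 1-D forward difference, (Bx)_i = x_{i+1} - x_i
    return np.diff(x)

def forward_diff_adj(v):             # B^T, the adjoint of the forward difference
    out = np.zeros(v.size + 1)
    out[:-1] -= v
    out[1:] += v
    return out

def prox_psi(x):                     # prox of psi = indicator of [0, 1]^N  ->  projection
    return np.clip(x, 0.0, 1.0)

def prox_phi_conj(v, mu):            # prox of phi* for phi = mu*||.||_1  ->  projection onto [-mu, mu]
    return np.clip(v, -mu, mu)

def denoise_1d(z, mu, lam=0.4, iters=500):
    """Dual iteration for min_x 0.5*||x - z||^2 + i_[0,1](x) + mu*||Bx||_1 (illustrative sketch).

    lam must satisfy lam < 2/||B||^2; for the 1-D forward difference ||B||^2 <= 4.
    """
    v = np.zeros(z.size - 1)
    for _ in range(iters):
        x = prox_psi(z - forward_diff_adj(v))                 # primal update
        v = prox_phi_conj(v + lam * forward_diff(x), mu)      # dual forward-backward update
    return prox_psi(z - forward_diff_adj(v))

rng = np.random.default_rng(0)
signal = np.repeat([0.2, 0.8, 0.4], 50)                       # piecewise-constant ground truth
noisy = signal + 0.1 * rng.standard_normal(signal.size)
print(np.linalg.norm(denoise_1d(noisy, mu=0.1) - signal))
```

Because $\varphi^*$ is an indicator function, $\operatorname{prox}_{\lambda\varphi^*}$ does not depend on $\lambda$, which makes the dual formulation particularly convenient for TV-type penalties.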

Combettes et al. [27] proposed the following iterative algorithm to solve the optimization problem (44). For any , set and aswhere , , and . Under the conditions that and , the authors proved that converges strongly to a solution of (44).

Remark 22. (1) Comparing (47) with (48), the admissible range of the step size $\lambda$ in (47) is clearly larger than that allowed by the iterative sequence (48) introduced by Combettes et al. [27] when the variable step size in (48) is held constant. In addition, Theorem 20 also recovers the main results of Proposition 28.16 in [28]; however, our condition on $\lambda$ is weaker than the corresponding condition required by Proposition 28.16 in [28].
(2) Although our proposed iterative sequences (45) are error-free, it is not difficult to add error sequences in corresponding locations, as in (48). Because the proof is almost identical to that of Theorem 12, we have omitted it here.

5. Numerical Experiments

In this section, we present numerical experiments to verify the effectiveness of the proposed iterative algorithm for solving the constrained total variation model (4) for image denoising problems. All experiments are conducted on a Lenovo laptop with an Intel 2.3 GHz CPU and 4 GB RAM, and the programs are implemented in MATLAB R2014a.

We select “Barbara,” “Lena,” “Boat,” and “Goldhill” as the test images (see Figure 1). Gaussian noise with zero mean and a fixed standard deviation is added to the clean images.

We use the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) to evaluate the quality of the restored images. These are defined by
$$\mathrm{SNR}=10\log_{10}\frac{\|x\|_2^2}{\|x-\tilde{x}\|_2^2}\tag{49}$$
and
$$\mathrm{PSNR}=10\log_{10}\frac{255^2\,mn}{\|x-\tilde{x}\|_2^2},\tag{50}$$
where $x$ is the original image, $\tilde{x}$ is the restored image, and $m$ and $n$ are the row and column sizes of the image, respectively. The iterative algorithm is stopped when the following criterion is satisfied:
$$\frac{\|x_{n+1}-x_n\|_2}{\|x_{n+1}\|_2}\le\epsilon,\tag{51}$$
where $\epsilon$ is a given small constant. We tuned the regularization parameter $\mu$ to obtain the best denoised image quality.
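For reference, these quality measures and the stopping test can be computed as follows (a minimal Python sketch assuming 8-bit images and the definitions given above; the function names are ours):

```python
import numpy as np

def snr(x, x_rest):
    """Signal-to-noise ratio (dB) between the original x and the restored x_rest."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_rest) ** 2))

def psnr(x, x_rest, peak=255.0):
    """Peak signal-to-noise ratio (dB) for images with pixel range [0, peak]."""
    mse = np.mean((x - x_rest) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def stop(x_new, x_old, eps=1e-4):
    """Relative-change stopping criterion ||x_{n+1} - x_n|| / ||x_{n+1}|| <= eps."""
    return np.linalg.norm(x_new - x_old) / max(np.linalg.norm(x_new), 1e-12) <= eps
```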

We aim mainly to solve the constrained total variation (TV) image denoising problem (4). In particular, we choose the anisotropic total variation as the regularization term in our tests. By using the indicator function, the constrained TV denoising problem (4) can be reformulated as the following unconstrained optimization problem:
$$\min_{x\in\mathbb{R}^N}\ \tfrac{1}{2}\|x-b\|_2^2+\mu\|x\|_{TV}+\iota_C(x),\tag{52}$$
where $\iota_C$ is the indicator function of $C$. It is clear that the optimization problem (52) is a special case of (44). In fact, if $z=b$, $\psi=\iota_C$, and $\varphi\circ B=\mu\|\cdot\|_{TV}$, then the proposed iteration scheme (45) can be employed to solve the constrained TV image denoising problem (4). It is well known that $\|x\|_{TV}=(\varphi\circ B)(x)$, where $B$ is the usual discrete gradient (difference) operator of the total variation (see [10, 20]).

Let $C=\mathbb{R}^N$. Then, (4) (and also (52)) reduces to the well-known ROF model (1). We select $C$ as both the nonnegative set and the bounded set. More precisely, the nonnegative set is defined as $C_1=\{x\in\mathbb{R}^N:x_i\ge0,\ i=1,\dots,N\}$, and the bounded set as $C_2=\{x\in\mathbb{R}^N:0\le x_i\le255,\ i=1,\dots,N\}$. The corresponding minimization problems (52) are referred to as the nonnegative ROF (N-ROF) model and the bounded ROF (B-ROF) model, respectively. It is worth mentioning that the proximity operator of the indicator function $\iota_C$ is the projection operator onto $C$, which has a closed-form solution for both choices. Therefore, in (45) the proximity operator of $\psi=\iota_C$ is the orthogonal projection onto the closed convex set $C$; that is, $\operatorname{prox}_{\iota_C}=P_C$.
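To make these ingredients concrete, the following Python sketch (our own illustration; the discretization layout and names are assumptions, not taken from the paper) implements the projections onto the nonnegative and bounded sets together with a 2-D forward-difference operator $B$ and its adjoint. For this standard discretization $\|B\|^2\le8$, so any step size $\lambda\in(0,2/8)$ is admissible in (45).

```python
import numpy as np

def proj_nonneg(x):                       # P_C for C1 = {x : x >= 0}
    return np.maximum(x, 0.0)

def proj_bounded(x, hi=255.0):            # P_C for C2 = {x : 0 <= x <= hi}
    return np.clip(x, 0.0, hi)

def grad2d(X):
    """Anisotropic discrete gradient B: horizontal and vertical forward differences."""
    dx = np.zeros_like(X); dx[:, :-1] = X[:, 1:] - X[:, :-1]
    dy = np.zeros_like(X); dy[:-1, :] = X[1:, :] - X[:-1, :]
    return dx, dy

def grad2d_adj(dx, dy):
    """Adjoint B^T (negative divergence), matching grad2d above."""
    out = np.zeros_like(dx)
    out[:, :-1] -= dx[:, :-1]; out[:, 1:] += dx[:, :-1]
    out[:-1, :] -= dy[:-1, :]; out[1:, :] += dy[:-1, :]
    return out

# ||B||^2 <= 8 for this discretization, so lam in (0, 0.25) satisfies lam < 2/||B||^2.
```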

5.1. Numerical Results and Discussion

First, we examine the impact of the iterative step size $\lambda$ and the relaxation parameter $\alpha_n$ on the proposed iterative algorithm (45). According to Theorem 20, we can choose any step size $\lambda\in(0,2/\|B\|^2)$ and any admissible relaxation parameter $\alpha_n$. Figures 2, 3, and 4 demonstrate the performance of the proposed iterative algorithm for solving (52) with different choices of $\lambda$ and $\alpha_n$. The test image was “Barbara,” and the stopping criterion (51) was used with a fixed small tolerance $\epsilon$.

As shown in Figures 2–4, when the iterative step size is fixed, a larger relaxation parameter leads to faster convergence of the iterative algorithm. Table 2 presents the corresponding numerical results in terms of the SNR, PSNR, and the number of iterations.

Because the prior pixel information of the image is introduced as a constraint, the performance of the constrained ROF model is superior to that of the unconstrained model, and the numerical results confirm this advantage. We can see from Table 2 that the iterative step size has a significant impact on the performance of the algorithm. Furthermore, the experimental results demonstrate that a larger step size always leads to faster convergence of the iterative algorithm. Thus, in the following comparison we fix the step size and relaxation parameter at the best-performing values from Table 2.

Next, we focus on investigating the performance of the constrained and unconstrained ROF models for the test images in Figure 1. The numerical results are presented in Table 3. We notice that the SNR and PSNR values decrease slowly once the number of iterations becomes large. Because additional iterations do not improve the quality of the image, the iterative algorithm should be stopped at an early stage. Figures 5–8 present the denoised images corresponding to the results in Table 3. From a visual point of view, the images restored by the constrained model are closer to the original images than those restored by the unconstrained model. This further confirms the benefit of introducing constraints into the image recovery model.

6. Conclusions

The total variation can be viewed as a composition of a convex function with a linear transformation. Based on this observation, Micchelli et al. [20] introduced a fixed-point algorithm built on proximity operators to solve the total variation image denoising model (1). Inspired by the work of Moudafi [26], we studied the computation of the resolvent of the sum of a maximal monotone operator and a composite operator (9) in order to solve the constrained total variation model (4). Subsequently, we proposed a fixed-point algorithm for this resolvent operator. Based on the fixed-point theory of nonexpansive mappings, we proved the strong convergence of the obtained iterative sequence. The advantage of the fixed-point approach is that it provides the potential to develop additional fast iterative algorithms. Numerical simulations on image denoising illustrated the performance of the proposed algorithm. In particular, we found that the step size had a significant impact on the convergence speed of the algorithm. In general, when the iterative step size was fixed, larger relaxation parameters resulted in faster convergence of the iterative algorithm. The numerical results also confirmed that the constrained ROF model achieved superior performance compared with the unconstrained ROF model.

Finally, we note that the constrained TV model (4) can also be solved using other iterative algorithms, such as the primal-dual Chambolle–Pock algorithm [15], the alternating direction method of multipliers [29, 38, 39], and the preconditioned primal-dual algorithm [40, 41]. We have not presented the corresponding numerical results here. We will further examine the convergence rate of our proposed iterative algorithm and include these comparative results in future work.

Data Availability

The image data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11661056, 11771198, 11771347, 91730306, 41390454, and 11401293), the China Postdoctoral Science Foundation (2015M571989), and the Jiangxi Province Postdoctoral Science Foundation (2015KY51).