Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 4965262, 13 pages

https://doi.org/10.1155/2017/4965262

## Optimizing Shrinkage Curves and Application in Image Denoising

^{1}School of Information & Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
^{2}College of Computer Engineering, Yangtze Normal University, Chongqing 408000, China
^{3}School of Electrical Engineering, Wuhan University, Wuhan 430072, China
^{4}School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Correspondence should be addressed to Hongyao Deng

Received 9 November 2016; Revised 9 March 2017; Accepted 14 March 2017; Published 30 April 2017

Academic Editor: Bogdan Dumitrescu

Copyright © 2017 Hongyao Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A shrinkage curve optimization is proposed for weighted nuclear norm minimization and is adapted to image denoising. The proposed optimization method employs a penalty function utilizing the difference between a latent matrix and its observation and uses odd polynomials to shrink the singular values of the observation matrix. As a result, the polynomial coefficients fully characterize the shrinkage operator. Furthermore, the Frobenius norm of the penalty function is converted into the corresponding spectral norm, so the parameter optimization problem can be easily solved by using off-the-shelf plain least-squares. In practical application, the proposed denoising method does not work on the whole image at once, but rather on a series of matrices termed Rank-Ordered Similar Matrices (ROSM). Simulation results on 256 noisy images demonstrate the effectiveness of the proposed algorithms.

#### 1. Introduction

Low rank matrix approximation has attracted significant research interest in recent years. It aims to reconstruct latent data from a degraded observation matrix and is frequently applied in many fields, such as machine learning [1], computer vision [2], recommendation systems [3], and image processing [4]. As a branch of this research, the following regularized nuclear norm minimization problem over matrices is widely considered:

$$\min_{X}\; f(X) + \lambda \|X\|_{*}, \tag{1}$$

where $X$ denotes a matrix, the scalar $\lambda > 0$ is a parameter, and $f(X)$ and $\lambda\|X\|_{*}$ are the data fidelity term and the data regularization term, respectively. In this formula, $\|X\|_{*} = \sum_{i} \sigma_i(X)$ is called the nuclear norm of $X$, where $\sigma_i(X)$ denotes the $i$th largest singular value of $X$.

If $f(X)$ is convex, then the objective is a convex function because the nuclear norm is also convex. Thus, problem (1) is a convex optimization problem and can be treated with various classic iterative optimization algorithms, including steepest-descent, conjugate-gradient, and interior-point methods. When the data fidelity term is $f(X) = \|Y - X\|_F^2$, where $Y$ is an observation matrix and $\|\cdot\|_F$ denotes the Frobenius norm, this is the well-known nuclear norm minimization (NNM) problem [5]. It was proved that NNM can be solved by applying a soft-threshold operation to the singular values of $Y$, and the solution can be computed with the Singular Value Thresholding (SVT) algorithm [6].
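As a concrete illustration of the soft-thresholding step behind SVT, the minimal NumPy sketch below (ours, not from the paper) shrinks every singular value of a noisy observation by the same amount:

```python
import numpy as np

def svt_denoise(Y, tau):
    """Soft-threshold the singular values of Y by a single amount tau,
    the shrinkage step used by the SVT algorithm for NNM."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # uniform shrinkage

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 latent matrix
Y = X + 0.1 * rng.standard_normal((8, 8))                      # noisy observation
X_hat = svt_denoise(Y, tau=0.5)
```

Note that every singular value, large or small, loses exactly `tau`; this is the inflexibility the weighted formulation below is designed to remove.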

Despite the success of NNM, it is not flexible enough to handle more complex issues. To preserve convexity, NNM treats each singular value equally; as a result, the soft-thresholding operator shrinks every singular value by the same amount [6]. In principal component analysis, however, different principal directions carry different amounts of information. For example, the large singular values convey the major feature information, such as edges and texture. This implies that, in image denoising, the larger a singular value is, the less it should be shrunk. Obviously, the NNM model and the corresponding solver cannot handle this issue.

To overcome this limitation, a regularized nuclear norm minimization with weights was put forward. The weights may enhance the representation capability of the original nuclear norm. Its form is as follows:

$$\min_{X}\; f(X) + \sum_{i} w_i\, \sigma_i(X), \tag{2}$$

where $w_i \geq 0$ is the weight assigned to $\sigma_i(X)$.

Problem (2) is a nonconvex, nonsmooth low rank minimization problem. Of course, if every $w_i$ is replaced with $\lambda$, problem (2) reverts to problem (1). Solving problem (2) is challenging, or even NP-hard, so researchers have introduced assumptions to handle it. Gu et al. [7] assumed that the weight sequence is nondescending, $0 \leq w_1 \leq w_2 \leq \cdots \leq w_n$, and thus problem (2) becomes tractable and can be solved by a soft-thresholding operation; moreover, the authors devised a solver in which $w_i$ is inversely proportional to $\sigma_i(X)$. Gasso et al. [8] argued that if the objective can be decomposed as a difference of two convex functions, problem (2) can be solved by DC (Difference of Convex functions) programming. Lu et al. [9] assumed that the penalty function is concave and monotonically increasing on $[0, \infty)$ and satisfies a Lipschitz continuity condition; the weights are then taken at a supergradient point of the concave penalty. Based on this assumption, the authors proposed the Iteratively Reweighted Nuclear Norm (IRNN) method. In addition, Hu et al. [10] reported their Truncated Nuclear Norm Regularization (TNNR) method, based on the same assumptions as in [9].

By applying these low rank matrix approximation theories, different image denoising methods have been reported. For example, a method coupling sparse denoising and unmixing with a low rank constraint is proposed for hyperspectral images in [11]; a scheme incorporating iterative support detection into TNNR is presented to reduce white Gaussian noise in [12]; the eigenvectors of the Laplacian are exploited to suppress Gaussian noise in [13]; and a weighted nuclear norm minimization model is presented in [14] and applied to three tasks, namely, image denoising, background subtraction, and image completion. These methods achieve high quality results, mainly because all of them employ a powerful patch-based technique.

Inspired by weighted nuclear norm minimization and the patch-based technique, a parameter optimization method is proposed in this paper. The proposed method utilizes the difference between a latent matrix and its observation to design a penalty function and employs odd polynomials to shrink the singular values of the observation matrix. As a result, the polynomial coefficients fully characterize the shrinkage operator. Furthermore, the Frobenius norm of the penalty function is converted into the corresponding spectral norm, so the parameter optimization can be easily solved by using plain least-squares.

To validate its effectiveness, the optimization theory is applied to image denoising. Since the proposed method optimizes shrinkage curves, it is called the OSC method. In practical application, the OSC method does not work on the whole image at once, but rather on a series of matrices termed Rank-Ordered Similar Matrices (ROSM, see Definition 2). Thirty-two images were tested. Experimental results show that the OSC method achieves better results than the Bilateral Filter; when the noise standard deviation is less than 20, the results achieved by OSC are better than those of BM3D, and when the noise standard deviation varies from 20 to 40, the results of OSC are weaker than those of BM3D.

The contribution of this paper is twofold. Firstly, in the penalty function devised for weighted nuclear norm minimization, the weight representation is replaced by odd polynomials, so the polynomial coefficients fully characterize the role of the weights; furthermore, the Frobenius norm of the penalty function is converted into the corresponding spectral norm. Secondly, the proposed optimization method is adapted to image denoising. Experimental results show that the proposed OSC method outperforms the Bilateral Filter and is also superior to the BM3D method in the low-noise case.

The rest of the paper is organized as follows. In Section 2, the shrinkage curve optimization is formulated. Section 3 describes the image denoising algorithm, and the corresponding analysis follows in Section 4. Section 5 reports the experimental results, and conclusions are drawn in Section 6.

#### 2. Optimizing Shrinkage Curves

In this section, the problem to be discussed is formulated, and the method of optimizing shrinkage curves is then presented.

##### 2.1. Problem Formulation

Let $X$ be an unknown square matrix in $\mathbb{R}^{n \times n}$, and let $Y$ be its observation. The observed matrix is corrupted by additive white Gaussian noise $N$ with standard deviation $\sigma_n$. This is expressed as

$$Y = X + N. \tag{3}$$

To reconstruct the original square matrix from its noisy version, the following weighted nuclear norm minimization with a constraint is considered:

$$\min_{X}\; \sum_{i=1}^{n} w_i\, \sigma_i(X) \quad \text{s.t.} \quad \|Y - X\|_F^2 \leq \varepsilon, \tag{4}$$

where $\varepsilon$ is a threshold, $w_i \geq 0$, and $i = 1, \ldots, n$. In this formula, $\sigma_i(X)$ denotes the $i$th largest singular value of $X$, and $\sum_{i} w_i \sigma_i(X)$ and $\|Y - X\|_F$ are the weighted nuclear norm of $X$ and the Frobenius norm of $Y - X$, respectively. Our aim is to use polynomial coefficients to characterize the weights and obtain the solution of the problem.

##### 2.2. Optimization Method

As in [7], it can be proven that the weighted nuclear norm minimization (4) is solved by imposing a soft-threshold operation on the singular values of the observation matrix. The solution takes the form

$$\hat{X} = U\, \mathcal{S}_{w}(\Sigma)\, V^{T}, \tag{5}$$

where $Y = U \Sigma V^{T}$ is the Singular Value Decomposition (SVD) of $Y$ and $\mathcal{S}_{w}(\cdot)$ is the soft-thresholding operator. This operator with weight vector $w$ shrinks the singular values; $\mathcal{S}_{w}(\Sigma)_{ii} = \max(\Sigma_{ii} - w_i, 0)$. To obtain the thresholds, the following penalty function is employed:

$$\varepsilon = \big\| U\, \mathcal{S}_{w}(\Sigma)\, V^{T} - X \big\|_F^2. \tag{6}$$
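The weighted soft-thresholding solution above can be sketched in a few lines of NumPy (an illustrative sketch of ours; the function name and the particular weight vector are assumptions, not the paper's implementation):

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted soft-thresholding in the SVD domain: the i-th largest
    singular value is shrunk by its own threshold w[i], so large singular
    values (major features) can be shrunk less than small ones (noise)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 6))
w = np.linspace(0.1, 0.6, 6)  # nondescending weights: larger singular value, smaller shrinkage
X_hat = weighted_svt(Y, w)
```

With nondescending weights the shrunk singular values stay in descending order, which is what makes the closed-form solution valid in [7].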

Assuming that the shrinkage operation is applied differently to every singular value in the matrix $\Sigma$, the operator can be broken into

$$\mathcal{S}_{w}(\Sigma) = \operatorname{diag}\big(\varphi_1(\sigma_1), \varphi_2(\sigma_2), \ldots, \varphi_n(\sigma_n)\big), \tag{7}$$

where $\sigma_j$ is the $j$th largest singular value of $Y$ and $\varphi_j(\cdot)$ denotes the $j$th shrinkage operator. In our work, odd polynomials are taken to represent these shrinkage operators, so that the polynomial coefficients fully characterize the shrinkage. Thus, the $j$th shrinkage operator is expressed as

$$\varphi_j(\sigma_j) = \sum_{k=1}^{p} c_{j,k}\, \sigma_j^{2k-1}. \tag{8}$$

Substituting (8) into (7), the shrinkage operator can be rewritten as

$$\mathcal{S}_{w}(\Sigma) = \operatorname{diag}\Big(\sum_{k=1}^{p} c_{1,k}\,\sigma_1^{2k-1},\; \ldots,\; \sum_{k=1}^{p} c_{n,k}\,\sigma_n^{2k-1}\Big). \tag{9}$$

Substituting (9) into (6) and considering that $U$ and $V$ are unitary matrices, the penalty function can be rewritten as

$$\varepsilon = \Big\| \operatorname{diag}\Big(\sum_{k=1}^{p} c_{1,k}\,\sigma_1^{2k-1},\; \ldots,\; \sum_{k=1}^{p} c_{n,k}\,\sigma_n^{2k-1}\Big) - U^{T} X V \Big\|_F^2. \tag{10}$$
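The step that pulls $U$ and $V$ out of the Frobenius norm relies on its unitary invariance; the short numerical check below is ours, not part of the paper:

```python
import numpy as np

# Check that ||U A V^T - X||_F == ||A - U^T X V||_F for orthogonal U, V,
# which justifies moving the penalty into the SVD (spectral) domain.
rng = np.random.default_rng(2)
Y = rng.standard_normal((5, 5))
X = rng.standard_normal((5, 5))
U, s, Vt = np.linalg.svd(Y)
S_shrunk = np.diag(0.8 * s)  # any shrunk diagonal matrix stands in for diag(phi(sigma))
lhs = np.linalg.norm(U @ S_shrunk @ Vt - X, "fro")
rhs = np.linalg.norm(S_shrunk - U.T @ X @ Vt.T, "fro")
```

The two norms agree to machine precision, so minimizing the penalty over the polynomial coefficients can be done entirely on the diagonal term and $U^{T} X V$.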

The focus now shifts to the optimization of the penalty function. To obtain the optimal solution by plain least-squares, the Frobenius norm in (10) is converted into a spectral norm. For ease of formulation, a vector $c \in \mathbb{R}^{np}$ is defined that contains all the coefficient series $c_{j,k}$, $j = 1, \ldots, n$, $k = 1, \ldots, p$; that is,

$$c = \big(c_{1,1}, \ldots, c_{1,p},\; c_{2,1}, \ldots, c_{2,p},\; \ldots,\; c_{n,1}, \ldots, c_{n,p}\big)^{T}. \tag{11}$$

A matrix $M$ is also defined that is a block-diagonal matrix with $n$ blocks,

$$M = \operatorname{blkdiag}\big(m_1, m_2, \ldots, m_n\big), \tag{12}$$

where each of the blocks $m_j$ is of size $1 \times p$, with the content

$$m_j = \big(\sigma_j,\; \sigma_j^{3},\; \ldots,\; \sigma_j^{2p-1}\big),$$

so that the $j$th entry of $Mc$ equals $\varphi_j(\sigma_j)$.

In addition, two operators, $\operatorname{vec}(\cdot)$ and $\operatorname{Diag}(\cdot)$, are introduced. The operator $\operatorname{vec}(\cdot)$ returns a vector by concatenating the columns of a matrix; the operator $\operatorname{Diag}(\cdot)$ returns a diagonal matrix by putting the elements of a vector on the main diagonal. For example, if $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then $\operatorname{vec}(A) = (a, c, b, d)^{T}$; if $v = (v_1, v_2)^{T}$, then $\operatorname{Diag}(v)$ is the $2 \times 2$ diagonal matrix whose main diagonal is $(v_1, v_2)$.

Using these notations and operators, the penalty function (10) can be rewritten as

$$\varepsilon = \big\|\operatorname{Diag}(Mc) - B\big\|_F^2 = \|Mc - b\|_2^2 + \sum_{i \neq j} B_{ij}^2, \tag{13}$$

where $B = U^{T} X V$ and $b$ is the vector of the main-diagonal elements of $B$. The second term does not depend on $c$, so minimizing $\varepsilon$ over $c$ reduces to an ordinary least-squares problem.

The optimal set of parameters that defines the shrinkage curves is

$$c^{*} = \arg\min_{c}\, \|Mc - b\|_2^2, \tag{14}$$

which leads to

$$c^{*} = \big(M^{T} M\big)^{-1} M^{T} b. \tag{15}$$
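To make the least-squares step concrete, the toy sketch below (ours) fits a single odd-polynomial shrinkage curve to training pairs of noisy singular values and their desired shrunk values; this is a simplification of the per-index construction above, but the algebra is the same normal-equations solve:

```python
import numpy as np

def fit_shrinkage_poly(sigmas, targets, terms=3):
    """Fit coefficients c of an odd polynomial
    phi(s) = c[0]*s + c[1]*s**3 + c[2]*s**5
    to (noisy singular value, desired output) pairs by plain least squares."""
    powers = np.arange(1, 2 * terms, 2)        # odd powers 1, 3, 5, ...
    A = sigmas[:, None] ** powers[None, :]     # design matrix (rows of M)
    c, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return c, powers

def apply_shrinkage(sigmas, c, powers):
    """Evaluate the fitted shrinkage curve on new singular values."""
    return (sigmas[:, None] ** powers[None, :]) @ c

# Toy training target: an ideal soft-threshold curve with threshold 0.3.
s_train = np.linspace(0.0, 2.0, 200)
t_train = np.maximum(s_train - 0.3, 0.0)
c, powers = fit_shrinkage_poly(s_train, t_train)
pred = apply_shrinkage(s_train, c, powers)
```

Even with only three odd terms, the fitted curve tracks the hinge-shaped target closely, illustrating why a small number of polynomial coefficients suffices to characterize a shrinkage operator.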

#### 3. Application in Image Denoising

In this section, the OSC method for image denoising is introduced, covering the denoising modeling data and the denoising algorithm.

##### 3.1. Denoising Modeling Data

The proposed OSC method does not work on the whole image at once, but rather on a matrix-set in which each matrix contains a fixed number of similar patches extracted from the noisy image. A patch is first defined as follows.

*Definition 1.* Let $I$ denote an image of size $N \times N$ pixels, and let $(x, y)$ be a reference pixel. A block of size $\sqrt{m} \times \sqrt{m}$ is extracted from $I$, where $(x, y)$ resides at the top-left corner. By applying $\operatorname{vec}(\cdot)$ to the block, the block is identified with a vector in $\mathbb{R}^{m}$. The corresponding patch is defined by

$$p_{x,y} = \operatorname{vec}\big(B_{x,y}\big), \tag{16}$$

where $B_{x,y}$ denotes the block whose top-left corner is $(x, y)$, $1 \leq x \leq N - \sqrt{m} + 1$, and $1 \leq y \leq N - \sqrt{m} + 1$. When all reference pixels have been processed, a patch-set in $\mathbb{R}^{m}$ can be built, denoted by $P$.

*Definition 2.* Let $p_{\mathrm{ref}} \in P$ be a reference patch and $\|\cdot\|_2$ denote the Euclidean norm; the distance between $p_{\mathrm{ref}}$ and any patch $p_i \in P$ can be calculated using the metric

$$d_i = \|p_{\mathrm{ref}} - p_i\|_2. \tag{17}$$

These scalar distances are then sorted, and the patches in $P$ are correspondingly ordered,

$$d_{(1)} \leq d_{(2)} \leq \cdots \leq d_{(K)} \;\Longrightarrow\; p_{(1)} \preceq p_{(2)} \preceq \cdots \preceq p_{(K)}, \tag{18}$$

where $d_{(i)}$ denotes the $i$th smallest distance value, $\preceq$ denotes the order relation between patches, and $K$ denotes the number of patches in $P$. The denoising modeling data, termed the Rank-Ordered Similar Matrix (ROSM), are defined as

$$R = \big(p_{(1)}, p_{(2)}, \ldots, p_{(m)}\big), \tag{19}$$

that is, the $m$ most similar patches stacked as columns. Obviously, the ROSM is a square matrix of size $m \times m$, with $R \in \mathbb{R}^{m \times m}$. When all patches in $P$ have served as the reference, a matrix-set in $\mathbb{R}^{m \times m}$ can be built, denoted by $\mathcal{R}$.
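The ROSM construction can be sketched as follows (illustrative NumPy of ours; the function name `build_rosm` and the exhaustive full-image search are assumptions, not the paper's implementation):

```python
import numpy as np

def build_rosm(image, ref_xy, patch_side, num_patches):
    """Build a Rank-Ordered Similar Matrix: vectorize every overlapping
    patch, sort the patches by Euclidean distance to the reference patch,
    and stack the num_patches closest ones as columns."""
    ps = patch_side
    x0, y0 = ref_xy
    ref = image[x0:x0 + ps, y0:y0 + ps].reshape(-1)
    patches, dists = [], []
    H, W = image.shape
    for x in range(H - ps + 1):
        for y in range(W - ps + 1):
            p = image[x:x + ps, y:y + ps].reshape(-1)
            patches.append(p)
            dists.append(np.linalg.norm(ref - p))
    order = np.argsort(dists)[:num_patches]          # rank-ordered by similarity
    return np.stack([patches[i] for i in order], axis=1)  # (ps*ps) x num_patches

img = np.arange(64, dtype=float).reshape(8, 8)
R = build_rosm(img, (0, 0), patch_side=2, num_patches=4)  # 4x4 square ROSM
```

Choosing `num_patches` equal to the patch dimension `ps * ps` yields the square matrix required by the SVD-based shrinkage of Section 2.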

##### 3.2. Denoising Algorithm

The proposed denoising algorithm consists of Algorithms 1 and 2. The former trains the polynomial coefficients $c$, and the latter reduces noise.