Abstract

Digital restoration of images with missing data is a basic need in visual communication and industrial applications. In this paper, making full use of low-rank and nonlocal self-similarity priors, a gradual reweighted regularization is proposed for matrix completion and image restoration. The sparsity-promoting regularization produces a much sparser representation of grouped nonlocal similar image blocks by solving a nonconvex minimization problem. Moreover, an alternating direction method of multipliers algorithm is developed to speed up the iterative solution of this problem. Image block classification further enhances the adaptivity of the proposed method. Experiments on simulated matrices and natural images show that the proposed method obtains better image restoration results, where most of the lost information is recovered and few artifacts are produced.

1. Introduction

In today’s information age, the rapid development of computer technology facilitates the digital recording of rich visual information in pictures, videos, and other media. However, data loss and incomplete image information due to damage and poor transmission negatively affect the communication and study of visual media [13]. In this paper, digital image restoration and enhancement are addressed to restore missing and damaged parts of digital images using the remaining information, the imaging context, and prior knowledge of the image, so that the restored image is as close to the original as possible [49]. Digital restoration of degraded images enables rapid communication, appreciation, and research for people from different fields [3, 4, 10].

As a fundamental ill-posed inverse problem in image processing and low-level vision, digital image restoration aims to reconstruct a latent high-quality image from its degraded observation [7, 11]. Up to now, image restoration technology has made great progress, and many advanced methods have been introduced based on a variety of optimization models, mainly including variational calculus and partial differential equations [4, 5, 8, 12], methods based on priors such as exemplar matching and synthesis [13–15], and sparse representation and low-rank approximation [16–26].

Image restoration methods based on variational calculus and partial differential equations propagate or diffuse local structure information from the exterior of missing areas to the interior based on a smoothness prior [4]. Multiple variants use different models (linear, nonlinear, isotropic, or anisotropic) to extend information along a particular propagation direction or by taking into account geometric information such as the curvature of the local pixel neighborhood [4, 5, 12]. However, a large number of experiments show that these methods have limitations: although successful for images with piecewise smooth structures or small gaps, they are not suitable for textured images, especially when the missing area is large. Moreover, even when designed to preserve edges, after a number of iterations they easily produce excessive smoothness and blur in the repaired areas [7].

In order to restore damaged areas of textured structures, and building on the pioneering work on texture synthesis [27, 28], another class of image restoration methods has been put forward based on exemplar matching and synthesis. For an exemplar to be repaired, a texture repair method searches for its best-matching sample within a certain neighborhood and restores the missing information by sampling or copying the corresponding pixels for exemplar synthesis [13–15]. If enough similar candidate blocks can be found in the image or in an external image database, a better restoration can be achieved. Current research of this kind mainly focuses on the multiscale refinement of exemplar matching and synthesis, improved distance measures for finding matching blocks, faster search for matching blocks, optimized block processing order, and filling unknown pixels from the matching blocks [7].

In the past twenty years, the sparse representation theory of signals and images has been a hot research field in signal processing. With the vigorous development of sparse sampling and compressed sensing, sparse prior knowledge has been introduced into image restoration algorithms [16, 17, 21]. Here, an image is assumed to be a sparse signal under a set of specific transformation bases (the sparsity of the signal depends mainly on the given bases). These bases are composed of atoms stored in a dictionary matrix, which can be obtained by various dictionary learning methods [29]. Recently, as a second-order sparse prior, low-rank matrix approximation has also been introduced to recover a latent low-rank structure of an image from its noisy observation. This research branch has attracted increasing attention in image processing due to its popularity and effectiveness [18, 30, 31]. For example, an efficient filtering algorithm is proposed in [20] to sparsely represent image patches using singular value decomposition (SVD) and to remove image noise by iterative singular value shrinkage. Another algorithm proposed in [26] utilizes nonlocal self-similarity (NSS) and low-rank approximation, and it includes two SVD steps based on a special hard thresholding. The method based on weighted nuclear norm minimization [24] assigns different weights to the singular values so that a more reasonable soft thresholding is carried out, and it has achieved excellent performance in different image processing tasks.

Natural images are usually rich in texture and complex in structure, and thus they are only approximately low rank. In this paper, the nonlocal self-similarity is used to gather similar image blocks, and a nonconvex optimization problem with gradual reweighted regularization is proposed based on the low-rank property of the group matrix of similar blocks. By solving the minimization problem with an alternating direction method of multipliers (ADMM) algorithm, we obtain a much sparser representation of the image due to the sparsity-promoting gradual regularization. The proposed method obtains better image restoration results, where most of the lost information is recovered and few artifacts are produced.

The remainder of the paper is organized as follows. Section 2 reviews related work on low-rank minimization and introduces the gradual reweighted regularization for matrix completion and image restoration. In Section 3, experimental results on matrix completion and image restoration demonstrate the merit of the proposed approach compared with classic methods. Finally, we conclude this paper in Section 4.

2. Gradual Reweighted Regularization

2.1. Low Rank Minimization

Sparse representation and compressed sensing have achieved great success in image processing [29, 32]. Inspired by this idea, low-rank matrix approximation has drawn increasingly broad interest in recent years, where the rank is interpreted as a measure of second-order (matrix) sparsity [33]. Aiming to recover an underlying low-rank matrix from its degraded observation, low-rank matrix approximation can robustly and efficiently handle high-dimensional data with heavy noise or severe corruption, because many types of data (raw or after some nonlinear transform) reside near one or several subspaces [34]. Singular value decomposition is often an effective tool for solving low-rank models via special thresholding operations on the singular values of the observation matrix [24, 31, 35–37].

In this paper, we consider image restoration as the problem of recovering a low-rank signal matrix whose entries are observed in the presence of lost information. More specifically, the objective is to recover an unknown matrix $X$ from its observed degraded data $Y = \mathcal{A}(X) + N$, where $\mathcal{A}$ is a degeneracy operator encoding the spatial positions of information loss in $X$, and $N$ is a noise disruption (although we do not consider noise in this paper). The low-rank matrix approximation estimates $X$ by solving the following nuclear norm minimization problem with an $F$-norm data fidelity [31]:

$$\hat{X} = \arg\min_{X} \|Y - \mathcal{A}(X)\|_F^2 + \lambda \|X\|_*, \quad (1)$$

where $\|X\|_* = \sum_{i=1}^{r} \sigma_i(X)$ is the nuclear norm of $X$, $\sigma(X) = (\sigma_1(X), \dots, \sigma_r(X))$ is the vector of singular values of $X$, $r$ is the rank of $X$, and $\lambda$ is a positive constant.

When $\mathcal{A}$ is the identity operator, a solution of the above problem can be obtained by [31]

$$\hat{X} = U S_{\lambda}(\Sigma) V^{T}, \quad (2)$$

where $Y = U \Sigma V^{T}$ is the SVD of $Y$, and $S_{\lambda}(\Sigma)$ is a soft thresholding function on the diagonal matrix $\Sigma$:

$$\left(S_{\lambda}(\Sigma)\right)_{ii} = \max(\sigma_i - \lambda, 0), \quad (3)$$

where $\sigma_i$ is the $i$-th singular value of $Y$.
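As a concrete illustration of this singular value soft thresholding, the following NumPy sketch (our own, not the authors' MATLAB implementation) applies the shrinkage of (2) and (3) to an observation matrix; the function name and interface are assumptions.

```python
import numpy as np

def svt(Y, lam):
    # Soft thresholding of singular values: shrink each singular value of Y by
    # lam, truncate at zero, and reassemble the matrix (equations (2)-(3)).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```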

2.2. Gradual Reweighted Regularization

In this paper, we employ an iteratively reweighted technique to recover a matrix with second-order sparsity (low rank) in the framework of sparsity-promoting low-rank approximation [24, 38, 39]. To restore a degraded matrix $Y$ with pixels of missing information, we solve the following matrix completion model:

$$\hat{X} = \arg\min_{X} \|P_{\Omega}(Y) - P_{\Omega}(X)\|_F^2 + \sum_{i} w_i \sigma_i(X), \quad (4)$$

where $w_i$ is a nonnegative weight assigned to $\sigma_i(X)$ to emphasize the role of different singular values, $\Omega$ is a binary support indicator matrix with zeros indicating missing entries in the observed matrix $Y$, and $P_{\Omega}$ is a projection operator, $P_{\Omega}(X) = \Omega \odot X$, an element-wise Hadamard product of matrices.

In the process of obtaining the iteratively reweighted solution of the above model in the following algorithm, in order to obtain a sparser solution, we adopt a weighting vector with gradual regularization:

$$w_i^{(k)} = \frac{C}{\sigma_i\left(X^{(k)}\right) + \varepsilon_k}, \qquad \varepsilon_{k+1} = \rho\, \varepsilon_k, \quad (5)$$

where $\{\varepsilon_k\}$ is a gradual regularization sequence converging to zero, $C$ and $\rho \in (0, 1)$ are two constants, and $k$ is the iteration step.

In the strategy of gradual reweighted regularization (GRR), we first use a relatively large $\varepsilon_k$ in (5) and then decrease it gradually during the iteration, as done in [39]. As shown in the following experiments on matrix completion, we obtain sparser solutions and more accurate image restoration results. We attribute the success of the proposed method to the fact that adopting a relatively large $\varepsilon_k$ in the weights during the early iterations results in a weaker singular value shrinkage (see equation (3)) and fills in more of the local minima in the solution basin of the low-rank model (4). During the iteration, once the approximate solution enters the correct basin, decreasing $\varepsilon_k$ allows it to approach the optimal solution more closely, as in [39]. Theorem 1 below also supports this argument.
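To make the schedule concrete, the short sketch below (with assumed values for $C$, $\rho$, and the example singular values, none of which are the paper's settings) prints how the weights in (5) evolve as $\varepsilon_k$ decays: early iterations shrink all singular values almost uniformly, while later iterations penalize the small ones much more strongly.

```python
import numpy as np

# Illustrative gradual reweighting w_i = C / (sigma_i + eps_k); assumed values.
C, rho, eps = 1.0, 0.7, 1.0
sigma = np.array([12.0, 4.0, 0.8, 0.05])   # singular values of a current estimate
for k in range(4):
    w = C / (sigma + eps)                  # large eps: nearly uniform, mild shrinkage
    print(k, round(eps, 3), np.round(w, 3))
    eps *= rho                             # eps_k -> 0: small singular values get
                                           # much larger weights, shrunk harder
```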

We employ an alternating direction method of multipliers (ADMM) algorithm [40] to solve problem (4) by introducing an auxiliary variable $Z$:

$$\min_{X, Z} \|P_{\Omega}(Y) - P_{\Omega}(X)\|_F^2 + \sum_i w_i \sigma_i(Z) \quad \text{s.t.} \quad X = Z,$$

with the augmented Lagrangian

$$\mathcal{L}(X, Z, L) = \|P_{\Omega}(Y) - P_{\Omega}(X)\|_F^2 + \sum_i w_i \sigma_i(Z) + \langle L, X - Z \rangle + \frac{\mu}{2}\|X - Z\|_F^2,$$

where $L$ is a Lagrange multiplier and $\mu$ is a positive constant. We update $X$, $Z$, and $L$ sequentially by solving the following series of subproblems:
(1) Update of $X$: with $Z^{k}$ and $L^{k}$ fixed, we solve

$$X^{k+1} = \arg\min_{X} \|P_{\Omega}(Y) - P_{\Omega}(X)\|_F^2 + \frac{\mu}{2}\left\|X - Z^{k} + \frac{L^{k}}{\mu}\right\|_F^2.$$

Then, we have the element-wise closed-form solution $X^{k+1} = \left(2 P_{\Omega}(Y) + \mu\left(Z^{k} - L^{k}/\mu\right)\right) \oslash \left(2\,\Omega + \mu \mathbf{1}\right)$, where $\oslash$ denotes element-wise division and $\mathbf{1}$ is the all-ones matrix.
(2) Update of $Z$: with $X^{k+1}$ and $L^{k}$ fixed, we solve

$$Z^{k+1} = \arg\min_{Z} \sum_i w_i \sigma_i(Z) + \frac{\mu}{2}\left\|X^{k+1} + \frac{L^{k}}{\mu} - Z\right\|_F^2.$$

Let $Q^{k} = X^{k+1} + L^{k}/\mu$; we first compute a singular value decomposition $Q^{k} = U \Sigma V^{T}$; then, we have $Z^{k+1} = U S_{w/\mu}(\Sigma) V^{T}$, where $\left(S_{w/\mu}(\Sigma)\right)_{ii} = \max\left(\sigma_i - w_i/\mu, 0\right)$.
(3) Update of $L$: with $X^{k+1}$ and $Z^{k+1}$ fixed, $L^{k+1}$ is easily obtained by

$$L^{k+1} = L^{k} + \mu\left(X^{k+1} - Z^{k+1}\right).$$

The above optimization procedure is described in Algorithm 1.

Input: Degraded matrix $Y$, indicator matrix $\Omega$
Output: Restored matrix $\hat{X}$
(1) Initialization: $X^{0} = Z^{0} = P_{\Omega}(Y)$, $L^{0} = 0$;
(2) for $k = 0, 1, \dots, K-1$ do
(3)  $X^{k+1} = \left(2 P_{\Omega}(Y) + \mu\left(Z^{k} - L^{k}/\mu\right)\right) \oslash \left(2\,\Omega + \mu \mathbf{1}\right)$;
(4)  $Q^{k} = X^{k+1} + L^{k}/\mu$;
(5)  $[U, \Sigma, V] = \mathrm{SVD}\left(Q^{k}\right)$;
(6)  $w_i^{(k)} = C/(\sigma_i + \varepsilon_k)$, $\varepsilon_{k+1} = \rho\, \varepsilon_k$;
(7)  $Z^{k+1} = U S_{w/\mu}(\Sigma) V^{T}$;
(8)  Update $L^{k+1} = L^{k} + \mu\left(X^{k+1} - Z^{k+1}\right)$;
(9) end
(10) return $\hat{X} = Z^{K}$
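The following NumPy sketch mirrors the structure of Algorithm 1 under illustrative defaults; the function name, parameter values, and the fixed iteration count are our assumptions and are not meant to reproduce the paper's MATLAB implementation.

```python
import numpy as np

def grr_complete(Y, Omega, C=1.0, mu=1.0, rho=0.9, eps0=1.0, iters=100):
    """Minimal sketch of Algorithm 1 (ADMM with gradual reweighting).
    Y: observed matrix, Omega: 0/1 mask of observed entries.
    Default parameter values are illustrative assumptions."""
    X = Omega * Y                      # initialize with the observed entries
    Z = X.copy()
    L = np.zeros_like(Y, dtype=float)
    eps = eps0
    for _ in range(iters):
        # X-subproblem: element-wise quadratic with a closed-form solution
        X = (2.0 * Omega * Y + mu * (Z - L / mu)) / (2.0 * Omega + mu)
        # Z-subproblem: weighted singular value shrinkage of Q = X + L/mu
        Q = X + L / mu
        U, s, Vt = np.linalg.svd(Q, full_matrices=False)
        w = C / (s + eps)                              # gradual reweighting, equation (5)
        Z = U @ np.diag(np.maximum(s - w / mu, 0.0)) @ Vt
        # multiplier update and decay of the regularization sequence
        L = L + mu * (X - Z)
        eps *= rho
    return Z
```

In practice, the fixed iteration count can be replaced by a stopping test on $\|X^{k+1}-Z^{k+1}\|_F$ and $\|X^{k+1}-X^{k}\|_F$, which Theorem 1 below shows tend to zero.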

Finally, we present a weak convergence result that ensures a rational termination of Algorithm 1.

Theorem 1 (see [24]). If the weight sequence $\{w_i\}$ is arranged in nondescending order, the sequence $\left\{\left(X^{k}, Z^{k}\right)\right\}$ obtained by Algorithm 1 satisfies:
(1) $\lim_{k \to \infty} \left\|X^{k+1} - Z^{k+1}\right\|_F = 0$,
(2) $\lim_{k \to \infty} \left\|X^{k+1} - X^{k}\right\|_F = 0$.

2.3. Gradual Reweighted Regularization for Image Restoration

The nonlocal self-similarity (NSS) prior is widely used in image processing and computer vision [20, 24, 26, 35, 37, 41, 42]. There are many repeated local patterns across a natural image. After overlapping image blocks are extracted from a degraded image, for a given block with information loss, its nonlocal similar blocks can provide supplementary information for a better reconstruction of the degraded image. A simple and effective grouping of mutually similar blocks can be realized by block matching, which finds blocks that exhibit high correlation with a given one. The correlation between matrix rows and columns is naturally associated with the rank of a matrix; thus, the group matrix formed by block matching is more likely to be of low rank than the whole image.

Considering the correlation between local image blocks, we propose an image restoration algorithm called the nonlocal gradual reweighted regularization (NGRR) method, where the singular value thresholding driven by the gradual reweighted regularization provides a highly sparse representation of the image data. Because larger singular values mainly describe image structures (major edge and texture information) [24, 43, 44], while smaller singular values are mainly related to interference and noise, shrinking the smaller singular values allows the useful information of the degraded image to be recovered. The proposed method is an iterative reweighted filtering scheme from image block estimation to pixel estimation based on a sparse low-rank prior, which includes extraction and classification of image blocks, block matching and grouping, singular value shrinkage filtering, aggregation, and iterative diffusion [37].

2.3.1. Block Extraction and Classification

In order to promote a sparse low-rank representation of image data, the nonlocal self-similarity of image blocks is used. According to image features, image blocks are divided into two categories: blocks with texture or edges (rich structural information) and smooth blocks (simple structure). Adaptive filtering of image blocks with different parameters can better recover the structure and details of the image. For simplicity, the standard deviation of the local pixels is measured to distinguish textured blocks from smooth blocks.
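A sketch of this classification rule is given below; the threshold value is a placeholder of our own, since the paper does not state one here.

```python
import numpy as np

def classify_block(block, tau=8.0):
    # Label a block as textured or smooth by its pixel standard deviation;
    # the threshold tau is an assumed value, not one given in the paper.
    return 'textured' if np.std(block) > tau else 'smooth'
```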

2.3.2. Block Matching and Grouping

Overlapping image blocks of a fixed size are extracted from the degraded image and classified as textured or smooth; for each image block, block matching is carried out to assemble a group of similar blocks according to a similarity criterion within a square search window centered at the reference block. The reference block and its most similar blocks are chosen to construct a group matrix, with each similar block as a column (see Figure 1). In this group matrix, the strong correlation among the columns drawn from similar image blocks leads to a lower rank of the group matrix [24, 35, 44]; consequently, a highly sparse representation of the image blocks is obtained by the subsequent singular value shrinkage filtering. The number of selected similar blocks depends on the type of target block: for a texture or edge block, more similar blocks are selected, whereas for a flat block, relatively few similar blocks suffice to form the group matrix.
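A possible realization of this block matching is sketched below in NumPy; the interface (returning both the group matrix and the positions of the selected blocks, which the restoration loop later needs in order to build the corresponding mask columns) is our own simplification.

```python
import numpy as np

def group_similar_blocks(img, top_left, size, m, search_rad):
    """Find the m blocks most similar (Euclidean distance) to the reference
    block within a square search window and stack them as columns of a group
    matrix; names and window handling are illustrative assumptions."""
    i0, j0 = top_left
    ref = img[i0:i0 + size, j0:j0 + size].ravel()
    H, W = img.shape
    candidates = []
    for i in range(max(0, i0 - search_rad), min(H - size, i0 + search_rad) + 1):
        for j in range(max(0, j0 - search_rad), min(W - size, j0 + search_rad) + 1):
            d = np.sum((img[i:i + size, j:j + size].ravel() - ref) ** 2)
            candidates.append((d, (i, j)))
    candidates.sort(key=lambda t: t[0])      # the reference block itself has distance 0
    pos = [p for _, p in candidates[:m]]
    G = np.stack([img[i:i + size, j:j + size].ravel() for i, j in pos], axis=1)
    return G, pos                            # n x m group matrix and block positions
```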

2.3.3. Singular Value Shrinkage Filtering

The singular value decomposition and shrinkage of the above similar-block matrix are carried out within the iterative gradual reweighted regularization (Algorithm 1). Here, the singular energy of the group matrix concentrates on the few leading (larger) singular values, which benefits the recovery of lost information from the observed image.

2.3.4. Aggregation

The proposed NGRR method first estimates latent image blocks; it then estimates each image pixel, which is included in multiple blocks, and recovers the whole image by aggregating the estimated blocks. Specifically, because overlapping image patches are extracted from the degraded image, each pixel belongs to multiple patches, and it can be estimated by averaging the multiple results from the estimated patches.
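This averaging can be implemented by accumulating the block estimates and a per-pixel counter, as in the sketch below (the function name and interface are ours).

```python
import numpy as np

def aggregate(blocks, positions, size, shape):
    # Average overlapping block estimates back into an image: each pixel is
    # the mean of all block estimates that cover it.
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for est, (i, j) in zip(blocks, positions):
        acc[i:i + size, j:j + size] += est.reshape(size, size)
        cnt[i:i + size, j:j + size] += 1.0
    return acc / np.maximum(cnt, 1.0)        # guard against pixels covered by no block
```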

2.3.5. Iterative Diffusion

For an image block located on the boundary of an area with information loss, a good estimate can be obtained through the above four steps. However, for an image block located inside such an area, there is no reliable information for similar-block matching, so the estimate obtained by the above processing is not accurate. To solve this problem, we employ iterative diffusion to spread image information into the target area; that is, the previous output of image restoration is used as the initialization of the next iteration. As the number of iterations increases, image blocks located inside the area with information loss are gradually filled in with surrounding information, step by step.

The above image restoration procedure is described in Algorithm 2.

Input: Degraded image $Y$, indicator matrix $\Omega$
Output: Restored image $\hat{X}$
(1) Initialization: $\hat{X}^{0} = Y$, $t = 1$;
(2) for $t = 1, \dots, T$ do
(3)  Iterative diffusion: $Y^{t} = \hat{X}^{t-1}$;
(4)  Block extraction and classification from $Y^{t}$;
(5)  for each image block $y_i$ do
(6)   Find similar blocks to form the group matrix $Y_i$;
(7)   Obtain the restored group matrix $\hat{X}_i$ using Algorithm 1;
(8)  end
(9)  Aggregate the restored blocks $\{\hat{X}_i\}$ to form the estimated image $\hat{X}^{t}$;
(10) end
(11) return $\hat{X} = \hat{X}^{T}$
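Putting the pieces together, the sketch below composes the earlier helper functions into the outer loop of Algorithm 2; the block stride, the parameter defaults, the reuse of only the reference-block column from each restored group, and the step that keeps observed pixels fixed are simplifying assumptions rather than details stated in the paper.

```python
import numpy as np

def ngrr_restore(Y, Omega, size=8, m_smooth=20, m_tex=45, T=10,
                 search_rad=15, stride=4):
    """Illustrative outer loop of Algorithm 2, assembled from the sketches
    above (classify_block, group_similar_blocks, grr_complete, aggregate)."""
    X = Omega * Y
    H, W = Y.shape
    for _ in range(T):                        # iterative diffusion: reuse the previous output
        blocks, positions = [], []
        for i in range(0, H - size + 1, stride):
            for j in range(0, W - size + 1, stride):
                block = X[i:i + size, j:j + size]
                m = m_tex if classify_block(block) == 'textured' else m_smooth
                G, pos = group_similar_blocks(X, (i, j), size, m, search_rad)
                M = np.stack([Omega[a:a + size, b:b + size].ravel() for a, b in pos], axis=1)
                G_hat = grr_complete(G, M, iters=20)   # Algorithm 1 on the group matrix
                blocks.append(G_hat[:, 0])             # keep the reference-block estimate
                positions.append((i, j))
        X = aggregate(blocks, positions, size, (H, W))
        X = Omega * Y + (1.0 - Omega) * X              # keep observed pixels unchanged
    return X
```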

3. Results and Analysis

As fundamental inverse problems in image processing and low-level vision, matrix completion and image restoration by filling in damaged areas aim to reconstruct a plausible image from the information outside the damaged area. Below, two experiments on matrix completion and image restoration are carried out to verify the rationality and effectiveness of the proposed method using the gradual regularization and the low-rank prior.

3.1. Matrix Completion

First, we compare the performance of the proposed method GRR with the state-of-the-art WNNM-MC method [24] in matrix completion.

In the following experiments, a synthetic low-rank matrix is generated as the product of two low-rank factors, $X = B C^{T}$, where both $B$ and $C$ are of size $n \times r$ and each of their elements is drawn from a Gaussian distribution $\mathcal{N}(0, 1)$; $r$ is used to constrain the rank of $X$. An observation matrix $Y$ is formed from the ground truth matrix $X$ by randomly dropping entries of $X$. To be specific, in the generation of the synthetic low-rank matrix, $r$ controls the rank and the missing ratio controls the proportion of lost entries. The matrix size and the missing ratio are fixed, several values of the rank $r$ are considered in turn, and the decay rate of the regularization sequence is varied from 0.01 to 0.5 with step length 0.01; $\varepsilon_0$ is initialized to 1 and updated downward at this rate at each step. For each parameter setting, we generate ten sets of data as test matrices, and the performance of each method is measured by the mean of the ten results. Experimental results are listed in Tables 1–4 for easy comparison.
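For reproducibility of this setup (with placeholder sizes and missing ratio, since the exact values are not restated here), the data generation and the relative error used for comparison can be sketched as follows, reusing the Algorithm 1 sketch from Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_completion_data(n=400, r=10, p_miss=0.3):
    # Rank-r ground truth X = B C^T with Gaussian factors, and an observation
    # with a fraction p_miss of entries dropped at random; the values of n, r,
    # and p_miss are placeholders, not those used for Tables 1-4.
    B = rng.standard_normal((n, r))
    C = rng.standard_normal((n, r))
    X = B @ C.T
    Omega = (rng.random((n, n)) >= p_miss).astype(float)   # 1 = observed, 0 = missing
    return Omega * X, Omega, X

Y, Omega, X_true = synthetic_completion_data()
X_hat = grr_complete(Y, Omega)
rel_err = np.linalg.norm(X_hat - X_true, 'fro') / np.linalg.norm(X_true, 'fro')
```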

By comparing the relative errors listed in Tables 1–4, one can see that when the regularization sequence $\varepsilon_k$ approaches zero during the iteration, the relative errors of low-rank matrix recovery are the smallest, especially when the matrix rank is larger, where the differences between the relative errors are much more obvious. Thus, we adopt this gradual setting of (5) in the following experiments on image restoration. At the same time, it is clear that the errors of the results by GRR are much smaller than those of WNNM-MC in all cases, which means that GRR has a much better capability of low-rank matrix reconstruction.

3.2. Image Restoration

In order to verify the performance of the proposed method in image restoration, we apply it to various natural images in the task of restoring images with randomly lost pixels. The proposed method (NGRR) is evaluated on the test images shown in Figure 2 by comparing it with several classic image restoration methods, including BPFA [45], TNNR [46], two-stage low-rank approximation (TSLRA) [26], and WNNM-MC [24]. All methods are implemented in MATLAB with a gray scale range from 0 to 255. In the comparison of the different image restoration methods, the default parameter settings suggested by the respective authors are used, as provided in the publicly available source codes of these recovery algorithms.

In order to evaluate the quality of image recovery, we use the peak signal-to-noise ratio (PSNR) and the feature similarity index (FSIM) [47] as quantitative measures. Although PSNR is sometimes inconsistent with human visual perception, it is widely used as a standard quantitative evaluation index of image quality and is closely related to visual perception. Further, based on the fact that the human visual system understands an image mainly according to its low-level features, FSIM combines the phase congruency feature with the gradient magnitude to measure the similarity between two images [47]. Larger PSNR and FSIM values mean higher quality of image recovery. The mathematical formulations of PSNR and FSIM are

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \left(f(i,j) - \hat{f}(i,j)\right)^2}, \qquad \mathrm{FSIM} = \frac{\sum_{x} S_L(x)\, \mathrm{PC}_m(x)}{\sum_{x} \mathrm{PC}_m(x)},$$

where $f(i,j)$ and $\hat{f}(i,j)$ are the real image and its estimate at pixel $(i,j)$, respectively, $S_L(x)$ measures the similarity between the real and estimated images at location $x$, and $\mathrm{PC}_m(x)$ represents the maximum of the two phase congruency indices [47].
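As a small reference implementation of the first index (FSIM requires phase congruency maps and is not reproduced here; see [47]), PSNR can be computed as below.

```python
import numpy as np

def psnr(f, f_hat, peak=255.0):
    # Peak signal-to-noise ratio for gray images in [0, 255].
    mse = np.mean((np.asarray(f, dtype=float) - np.asarray(f_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```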

In order to better demonstrate the advantages of the proposed NGRR method, in the following comparative experiments we try to recover images with three different ratios of missing entries. PSNR and FSIM values of the restored images by the different methods are reported in Tables 5–7. Meanwhile, Figures 3 and 4 visually show the restoration results of the different methods for the Barbara and Peppers images with randomly lost pixels.

First, from Tables 5–7, one can see that the PSNR (dB) and FSIM results of the proposed method exceed those of the other methods on the whole, which verifies the higher performance of NGRR in the restoration of missing data. From the viewpoint of quantitative evaluation, TSLRA and WNNM-MC are also excellent image restoration methods in the case of severe data loss.

Secondly, Figures 3 and 4 allow a visual comparison of the restoration results of the different methods. The BPFA method produces blurry edges with lost textures and artifacts near edges in the restored images. The TNNR method presents wrong linear textures due to its deficiency in restoring fine textures and details. The TSLRA, WNNM-MC, and proposed NGRR methods obtain relatively better results in the recovery of image structures, textures, and details. Through careful observation, one can see that the NGRR method produces fewer artifacts than the WNNM-MC method and relatively sharper edges than the TSLRA method. This observation also verifies the better sparse representation and information recovery ability of the proposed method with the gradual regularization and the nonlocal low-rank minimization. Meanwhile, it is obvious that the results of methods applying low-rank approximation to the whole image matrix are not ideal in the task of recovering randomly missing pixels.

Finally, we discuss the parameter setting of the proposed method in image restoration. Our method includes three main parameters: the block size, the number of similar blocks, and the number of iterations. To evaluate the effect of the block size on image recovery, we test ten natural gray images with randomly missing pixels, adopting different block sizes while fixing the other parameters; Figure 5 reports the recovery quality for the different block sizes, and the best-performing one is adopted. In addition, in the above experiments on natural images, relatively few similar blocks are selected for smooth blocks and more are selected for textured blocks. For the sake of simplicity, the Euclidean distance is used to measure the similarity between two image blocks.

All methods implemented in MATLAB are run on a laptop with an Intel Core i3 CPU and 8 GB RAM. The average running times of the BPFA, TNNR, TSLRA, WNNM-MC, and NGRR methods are 10.1, 2.2, 6.5, 12.7, and 13.8 minutes, respectively.

4. Conclusion

An important need in computer vision is to use advanced image restoration technology to retrieve lost visual information from damaged images. In this paper, we propose a nonconvex minimization model with gradual reweighted regularization to restore missing pixels in digital images, making use of low-rank, nonlocal self-similarity, and sparsity-promoting regularization priors. An alternating direction method of multipliers algorithm is used to solve the minimization problem iteratively. Experiments on simulated and real images have verified the effectiveness of the proposed algorithm. In future work, we will develop an automatic detection algorithm for missing pixels and will combine the proposed method with exemplar synthesis to restore large missing areas.

Data Availability

All test images can be downloaded from http://www.cs.tut.fi/∼foi/GCF-BM3D/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The research has been supported in part by the National Natural Science Foundation of China (61671276, 11971269), the Natural Science Foundation of Shandong Province of China (ZR2019MF045), and the Social Science Planning Research Project of Shandong Province of China. The author would like to thank Shujun Fu and his research group for their help in the preparation of this manuscript and MATLAB implementation.