#### Abstract

In a typical super-resolution (SR) algorithm, the model of the fusion error, which includes both registration error and additive noise, has a great influence on performance. In this letter, we show that the quality of the reconstructed high-resolution image can be improved by adopting a proper model for the fusion error. To this end, we propose to minimize a cost function consisting of locally and adaptively weighted ℓ1- and ℓ2-norms that reflect the error model. Binary weights are used to adaptively select the ℓ1- or ℓ2-norm based on the local error. Simulation results demonstrate that the proposed algorithm overcomes the disadvantages of using either the ℓ1- or the ℓ2-norm alone.

#### 1. Introduction

Super-resolution (SR) is an approach to obtain a high-resolution (HR) image from a set of low-resolution (LR) images. In recent SR algorithms [1, 2], the SR results depend on the model of the fusion error, which includes the registration error and the contaminating noise. As a cost function, the ℓ2- or ℓ1-norm is used to fuse the LR images [1, 2]. The choice of norm depends on the model of the fusion error: whether the error is Gaussian (ℓ2-norm [1]) or Laplacian (ℓ1-norm [2]).

Utilizing the ℓ2-norm to solve the SR problem [1] yields the optimal solution when the fusion error has a Gaussian distribution. Although the additive noise in the image generation process can be modeled as Gaussian in many applications [1], the fusion error, a combination of the additive noise and the registration error, may follow other distributions. In [2], it has been shown that the registration error is better characterized by a Laplacian distribution than by a Gaussian one. However, assuming a Laplacian distribution for regions that do not suffer from registration error is not always proper.

For these reasons, we propose a mixed-norm-based SR algorithm that minimizes a cost function consisting of ℓ1- and ℓ2-norms with adaptive mixing parameters. Since the registration error is space variant, the mixing parameters are chosen based on the local error. In addition, the regularization parameter is adaptively estimated.

#### 2. Problem Description

Assume that K LR images of the same scene, denoted in lexicographical order by **y**_k (k = 1, …, K), each containing M pixels, are observed, and that they are generated from the HR image, denoted by **x**, containing N pixels, where N > M. The observation of the LR images is modeled by the following degradation process [1, 2]:

$$\mathbf{y}_k = \mathbf{D}\mathbf{H}\mathbf{F}_k \mathbf{x} + \mathbf{n}_k, \quad k = 1, \ldots, K, \tag{1}$$

where **F**_k, **H**, and **D** are matrices that represent the motion, the blurring, and the downsampling operators, respectively, **x** is the unknown HR image, and **n**_k is a vector of the same size as **y**_k representing the additive noise for the kth image. The sizes of **F**_k, **H**, and **D** are N × N, N × N, and M × N, respectively. Throughout the letter, we assume that **H** and **D** are known and that the additive noise has a Gaussian distribution.

Since the motion operator **F**_k is not known, it must be estimated from the successive LR frames. This motion estimation is not always accurate, so the estimated motion operator can be written as **F̂**_k = **F**_k + Δ**F**_k, where Δ**F**_k is the registration error matrix. Therefore, utilizing (1), the kth LR image can be described as

$$\mathbf{y}_k = \mathbf{D}\mathbf{H}\hat{\mathbf{F}}_k \mathbf{x} + \boldsymbol{\varepsilon}_k, \tag{2}$$

where ε_k = **n**_k − **DH**Δ**F**_k**x** is the combination of the additive noise and the registration error, and represents the noise to be suppressed in the SR process.
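As a concrete illustration, the degradation process of this section can be simulated numerically. The sketch below is a minimal stand-in: it assumes purely translational motion for the motion operator, a Gaussian kernel for the blur, and decimation by an integer factor r for the downsampling; the specific values used are illustrative assumptions, not the letter's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade(x, dx, dy, sigma_blur, r, noise_std, rng):
    """Simulate one LR observation y_k = D H F_k x + n_k.

    F_k: translational motion (an illustrative stand-in for a general
    motion operator), H: Gaussian blur, D: decimation by factor r,
    n_k: white Gaussian noise.
    """
    moved = shift(x, (dy, dx), order=1, mode="nearest")   # F_k x
    blurred = gaussian_filter(moved, sigma_blur)          # H F_k x
    decimated = blurred[::r, ::r]                         # D H F_k x
    return decimated + rng.normal(0.0, noise_std, decimated.shape)  # + n_k

rng = np.random.default_rng(0)
x = rng.random((64, 64))                  # toy HR image
y = degrade(x, 0.5, -0.25, 1.0, 2, 0.01, rng)
```

Here a decimation factor of 2 turns the 64 × 64 toy HR image into a 32 × 32 LR frame; repeating the call with different shifts produces a set of LR observations.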

#### 3. SR Based on Mixed-Norm

As the Gaussian and Laplacian distributions are the two major candidates for modeling this mixed noise, we propose to use adaptive mixed ℓ1- and ℓ2-norms, where the ℓ1- and ℓ2-norms are locally and adaptively weighted based on the local error.

##### 3.1. Proposed Cost Function

Define **e**_k = **y**_k − **DH** **F̂**_k **x**. Let **W**_{1,k} and **W**_{2,k} be diagonal matrices of size M × M with entries 0 or 1 such that **W**_{1,k} + **W**_{2,k} = **I**. The proposed algorithm is based on the minimization of

$$J(\mathbf{x}) = \sum_{k=1}^{K} \left( \left\| \mathbf{W}_{2,k}\,\mathbf{e}_k \right\|_2^2 + c \left\| \mathbf{W}_{1,k}\,\mathbf{e}_k \right\|_1 \right) + \lambda\, \Upsilon(\mathbf{x}). \tag{3}$$

Matrices **W**_{1,k} and **W**_{2,k} represent the mixing parameters that select between the ℓ1- and ℓ2-norms, Υ(**x**) is the regularization function, λ is the regularization parameter, and c is a scalar used to adjust the convergence rate and the regularization parameter for regions dominated by the ℓ1-norm. When **W**_{1,k} = **I**, (3) reverts to the ℓ1-norm of the error, whereas when **W**_{2,k} = **I**, (3) reverts to the ℓ2-norm of the error. The adaptive adjustment of the mixing matrices thereby provides an algorithm that adaptively switches between the ℓ1-norm, the ℓ2-norm, or an intermediate behavior between the two.
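A minimal sketch of the gradient of a mixed-norm data term may clarify how the two norms combine in a steepest-descent update. The matrix A below stands for the combined operator of one frame, and the vectors w1 and w2 play the role of the diagonal mixing matrices; all concrete values are illustrative assumptions.

```python
import numpy as np

def mixed_norm_grad(A, y, x, w1, w2, c, lam, reg_grad):
    """Gradient of the per-frame mixed-norm cost
        ||W2 (y - A x)||_2^2 + c * ||W1 (y - A x)||_1 + lam * Y(x),
    where W1, W2 are binary diagonal masks (stored as vectors w1, w2).
    The l1 term contributes only through the sign of the masked error,
    which is what lets the update step over high-error locations."""
    e = y - A @ x                       # residual e_k
    g = -2.0 * A.T @ (w2 * e)          # l2 (Gaussian) part
    g += -c * A.T @ (w1 * np.sign(e))  # l1 (Laplacian) part
    return g + lam * reg_grad(x)

# sanity check on a toy linear system: at the true solution the
# residual is zero, so the data-fidelity gradient vanishes
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true
w1 = np.zeros(20); w2 = np.ones(20)    # pure l2 configuration
g = mixed_norm_grad(A, y, x_true, w1, w2, c=1.0, lam=0.0,
                    reg_grad=lambda z: np.zeros_like(z))
```

Because the masks are binary and complementary, each pixel's residual contributes through exactly one of the two terms, which matches the switching behavior described above.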

##### 3.2. Choice of **W**_{1,k} and **W**_{2,k}

The choice of **W**_{1,k} and **W**_{2,k} is a critical issue in the proposed algorithm. Because of the efficiency of the error function (erf) in providing an automatic switching parameter, as demonstrated experimentally in [3], we propose to adopt erf in the mixing matrix. Therefore, the ith diagonal element of the mixing parameter **W**_{1,k} for frame k can be described as

$$\left[\mathbf{W}_{1,k}\right]_{i,i} = \operatorname{erf}\!\left(\frac{\left|e_k^{(0)}(i)\right|}{\sqrt{2}\,\sigma_k}\right), \tag{4}$$

where e_k^{(0)}(i) is the error at the initial estimate of the HR image and σ_k is the standard deviation of the error at frame k. A binary version of (4) can be obtained by thresholding the erf value with a threshold (*θ*), so that the element equals one if the erf value is greater than that threshold and equals zero otherwise. With this choice of the mixing parameters, the corresponding element of **W**_{2,k} will be very small (zero in the binary case) at locations that have high error, which makes the algorithm jump over these locations by updating the HR image with the sign of these errors. Both **W**_{1,k} and **W**_{2,k} can be adapted based on the partially estimated HR image after each iteration. In the rest of this letter, the mixing parameters are used as binary values.
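The erf-based switching can be sketched as follows. The exact scaling of the erf argument (|e|/(√2 σ)) is an assumption; the thresholding step follows the binary scheme described above.

```python
import numpy as np
from scipy.special import erf

def mixing_masks(e0, sigma, theta):
    """Binary mixing parameters from the error at the initial HR estimate.

    Large |error| -> erf close to 1 -> l1 treatment (W1 element = 1);
    small |error| -> l2 treatment (W2 element = 1); W1 + W2 = I.
    The argument scaling |e| / (sqrt(2) * sigma) is an assumption."""
    score = erf(np.abs(e0) / (np.sqrt(2.0) * sigma))
    W1 = (score > theta).astype(float)   # threshold by theta -> binary
    W2 = 1.0 - W1
    return W1, W2

# small errors stay in the l2 branch, large errors switch to l1
e0 = np.array([0.01, 0.02, 5.0, -4.0])
W1, W2 = mixing_masks(e0, sigma=1.0, theta=0.7)
```

In this toy call, only the two large-magnitude errors cross the threshold and are handled by the ℓ1 branch.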

#### 4. Choice of λ

The regularization parameter, λ, adjusts the relative importance between the data fidelity and the smoothness of the solution. Using a relatively high value for λ may lead to a very smooth HR image that deviates from the optimal solution of the inverse problem. Likewise, using a small value for λ may lead to a noisy HR image, which also deviates from the optimal solution. Therefore, λ should be adaptively estimated. Following an approach similar to [4], the regularization parameter is estimated as a function of the HR image as

$$\lambda(\mathbf{x}) = f\!\left(E(\mathbf{x})\right), \tag{5}$$

where f is a monotonically increasing function and E(**x**) denotes the data-fidelity term in (3). Using f as a simple linear function of E(**x**), then

$$\lambda(\mathbf{x}) = \beta\, E(\mathbf{x}). \tag{6}$$

The slope β is chosen so as to preserve the convexity of the cost function, J(**x**). A sufficient condition for the convexity of J(**x**) is given as follows.
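The adaptive regularization parameter can be sketched as below, assuming the data-fidelity value is the summed squared residual over frames (the exact fidelity measure is an assumption):

```python
import numpy as np

def adaptive_lambda(residuals, beta):
    """lambda(x) = beta * E(x): a linear, monotonically increasing
    function of the data-fidelity value E(x). Here E(x) is taken as
    the summed squared residual over all frames (an assumption)."""
    E = sum(float(np.dot(e, e)) for e in residuals)
    return beta * E

# toy residuals for two frames: E = 1 + 4 + 0.25 = 5.25
lam = adaptive_lambda([np.array([1.0, -2.0]), np.array([0.5])], beta=0.1)
```

Recomputing λ from the current residuals each iteration makes the smoothing strength shrink as the reconstruction improves.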

*Proposition 1. The cost function, J(x), is convex under a condition on the slope β in (6).*

The proof of this proposition is given in the Appendix.

#### 5. Simulation Results and Discussion

##### 5.1. Experiment Setup

To demonstrate the efficiency of the proposed algorithm, simulation results for the Car sequence [5] are included in this section. This sequence consists of 64 LR images and contains a moving object captured by a static camera.

In the experiments, the motion operators are estimated from the LR images using a gradient-based registration technique based on an affine motion model [6], and the blurring operator is assumed to be a Gaussian kernel with a variance of 2. Steepest descent optimization is used for all the compared algorithms, with a maximum of 100 iterations, a step size of 0.1 (2 in the case of the ℓ1-norm-based algorithm), and c set to 20. For a fairer comparison, the bilateral total variation (BTV) [2] is used as the regularization function for all the compared algorithms, that is,

$$\Upsilon(\mathbf{x}) = \sum_{l=-P}^{P} \sum_{m=-P}^{P} \alpha^{|l|+|m|} \left\| \mathbf{x} - \mathbf{S}_x^{l}\,\mathbf{S}_y^{m}\,\mathbf{x} \right\|_1,$$

where **S**_y^m is the operator shifting the image by m pixels in the y direction (and **S**_x^l by l pixels in the x direction), P is a factor that determines the smoothness support size, and *α* (0 < *α* < 1) is the bilateral parameter. P and *α* are set to 2 and 0.7, respectively, and the threshold *θ* is set to 0.7. For all the experiments, **x** in (6) is initially set to the bilinearly interpolated reference frame.
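The BTV regularization can be sketched as follows, using circular shifts for the shift operators (a boundary-handling choice, not necessarily the letter's). This sketch uses the full symmetric sum over shifts; some formulations restrict the sum to non-redundant shift pairs.

```python
import numpy as np

def btv(x, P=2, alpha=0.7):
    """Bilateral total variation of a 2-D image:
        Y(x) = sum_{l=-P..P} sum_{m=-P..P} alpha^(|l|+|m|) * ||x - S_x^l S_y^m x||_1,
    with the (l, m) = (0, 0) term skipped (it is identically zero).
    Circular shifts stand in for the shift operators."""
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)  # S_x^l S_y^m x
            total += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return total

img = np.zeros((8, 8)); img[4, 4] = 1.0   # single bright pixel
flat = np.ones((8, 8))                    # constant image
```

A constant image has zero BTV, while any local structure (here a single bright pixel) yields a positive penalty, which is the edge-preserving smoothness behavior the regularizer is chosen for.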

##### 5.2. Results

The results for the Car sequence are shown in Figure 1. The 1st LR image and the corresponding mixing parameter are shown in Figures 1(a) and 1(b), respectively, with two regions of interest marked by red rectangles. Zooms of the regions of interest of the HR image reconstructed using the ℓ2-norm-based algorithm [1] are shown in Figures 1(c) and 2(a). These figures show that the ℓ2-norm can be a good selection in regions dominated by Gaussian noise, such as the smooth regions and the car's plate in Figure 1(c). However, in regions dominated by Laplacian noise (due to high registration error), using the ℓ2-norm can deform the HR image, as shown in Figure 2(a). On the contrary, zoomed regions of interest of the HR image reconstructed using the ℓ1-norm-based algorithm [2] are shown in Figures 1(d) and 2(b). These figures show that the ℓ1-norm is robust against the registration error, as shown in Figure 1(d). However, in regions dominated by Gaussian noise, such as the smooth region and the plate in Figure 2(b), the ℓ1-norm fails to reconstruct the optimal solution. Moreover, small objects can disappear from the reconstructed HR image in some regions, as shown in Figure 1(d). On the other hand, the adaptive mixed-norm overcomes the drawbacks of both norms, as shown in Figures 1(e) and 1(f) for Case I and Case II, respectively. Case I corresponds to the mixed-norm with a fixed mixing matrix, based on the erf of the initial guess of the HR image, while Case II corresponds to the mixed-norm with the mixing matrix updated at every iteration. The results of these two cases for the other zoomed region are shown in Figures 2(c) and 2(d), respectively. Indeed, updating the mixing matrix at every iteration gives better results than using a fixed mixing matrix.

*Figure 1: (a) the 1st LR image; (b) the corresponding mixing parameter; (c) and (d) regions of interest of the HR images reconstructed using the ℓ2-norm [1] and the ℓ1-norm [2], respectively; (e) and (f) the proposed mixed-norm results for Case I and Case II, respectively.*

*Figure 2: zoomed regions of interest of the HR images reconstructed using (a) the ℓ2-norm [1], (b) the ℓ1-norm [2], and (c) and (d) the proposed algorithm for Case I and Case II, respectively.*

Finally, a plot of the normalized cost function, J(**x**), for different image sequences is shown in Figure 3 to justify Proposition 1. It is worth mentioning that, since the mixing matrices are binary diagonal masks, the complexity of the proposed algorithm is of the same order as that of the ℓ1- and ℓ2-norm-based algorithms [1, 2].

#### 6. Conclusion

An algorithm for image SR based on adaptive mixed norms has been presented. The proposed algorithm adaptively and locally weights the ℓ1- and ℓ2-norms based on the local error, and is thus suitable for suppressing both the registration error and the additive noise in the SR process.

#### Appendix

#### Convexity Condition

The nonlinear cost function, J(**x**), can be rewritten as

where the component terms are defined within a convex region. Then, the condition for the convexity of J(**x**) is

$$J\!\left(\eta \mathbf{x}_1 + (1-\eta)\mathbf{x}_2\right) \le \eta\, J(\mathbf{x}_1) + (1-\eta)\, J(\mathbf{x}_2), \tag{A.2}$$

where **x**_1 and **x**_2 are any two points in that region and 0 ≤ η ≤ 1. Using (A.1), the left-hand side of (A.2) can be written as

Similarly, the right-hand side of (A.2) can be rewritten as

Substituting (A.3) and (A.4) into (A.2), (A.2) becomes

With the assumption that the function *f* is linear and monotonically increasing in its argument, (A.5) is simplified to

If the regularization function Υ(**x**) is convex, then the left-hand side of (A.5) becomes

Considering (A.7), the condition in (A.6) implies that

Since is always positive, this condition becomes

which is true if the two factors have the same sign. One case for J(**x**) to be convex is when λ(**x**) is a monotonically increasing function of each pixel x_i regardless of the other axes, that is, ∂λ(**x**)/∂x_i ≥ 0. Since λ(**x**) = β E(**x**), then

Since *f* is monotonically increasing, (A.10) yields the condition of Proposition 1.