
Jingrui Luo, Jie Wang, "Image Demosaicing Based on Generative Adversarial Network", Mathematical Problems in Engineering, vol. 2020, Article ID 7367608, 13 pages, 2020. https://doi.org/10.1155/2020/7367608

Image Demosaicing Based on Generative Adversarial Network

Academic Editor: Javier Martinez Torres
Received: 09 Apr 2020
Accepted: 01 Jun 2020
Published: 16 Jun 2020

Abstract

Digital cameras with a single sensor use a color filter array (CFA) that captures only one color component at each pixel. Noise and artifacts are therefore generated when the full-color image is reconstructed, which reduces the resolution of the image. In this paper, we propose an image demosaicing method based on a generative adversarial network (GAN) to obtain high-quality color images. The proposed network requires no initial interpolation in the data preparation phase, which greatly reduces the computational complexity. The generator of the GAN is designed using the U-net to directly generate the demosaiced images. A dense residual network is used for the discriminator to improve its discriminative ability. We compared the proposed method with several interpolation-based algorithms and the DnCNN. The comparative experiments show that the proposed method eliminates image artifacts more effectively and recovers the color image better.

1. Introduction

Images are widely used in people's daily life. Compared to analog images, digital images offer higher resolution and easier storage, and they are more suitable for computer processing. With the development of computer technology, digital imaging has attracted much attention, and digital cameras have gradually become the mainstream imaging equipment, widely used in intelligent transportation [1, 2], medical imaging [3, 4], remote sensing [5, 6], and other fields. Digital color images are the most commonly used; they contain three color components, namely, red, green, and blue, at each pixel. Ideally, a digital camera with three sensors can capture a full-color image, with each sensor capturing one color component and the three components then combined into a color image. In practice, however, the arrangement of the three color sensors affects the subsequent color synthesis, and three-sensor cameras are usually expensive and relatively large. Therefore, most digital cameras use a single sensor with a color filter array (CFA) placed in front of it. The obtained CFA image must be processed to acquire the full-color image, and this process is known as image demosaicing [7]. As only one color component is captured at each pixel of the CFA, without demosaicing the CFA image can only reflect the general outline of the scene instead of the complete color information, which consequently affects subsequent image processing [8].

CFA image demosaicing is essentially an ill-posed inverse problem [9]. Demosaicing methods generally fall into interpolation-based algorithms and learning-based algorithms. Interpolation methods can achieve high accuracy for smooth areas with approximately uniform colors and gradual brightness changes. In a color image, the red, green, and blue components occupy different color channels. Where the high-frequency signal changes (high-frequency information refers to regions with strong color variation, such as edges and corners), there may be spatial offsets between the color channels, so the interpolated images may display color artifacts and zippering [7]. In addition, some traditional interpolation-based methods ignore the correlation among the color channels, which results in unsmooth images [8]. On the whole, interpolation-based algorithms still have limitations for image demosaicing, especially in high-frequency areas.

In recent years, neural networks have developed rapidly and are widely used in image processing, such as image classification [10, 11], motion recognition [12, 13], and image super-resolution [14, 15]. The generative adversarial network (GAN) [16] has been proposed recently and has rapidly attracted the attention of many researchers. Ledig et al. [17] proposed a super-resolution generative adversarial network (SRGAN), which used a deep residual network for training and can recover image textures well from greatly downsampled images. Inspired by super-resolution image reconstruction and the conditional generative adversarial network (CGAN), Kupyn et al. [18] applied the CGAN to image deblurring and effectively restored clear images. Pan et al. [19] proposed a physics-model-constrained learning algorithm that guides the estimation of the specific task in the conventional GAN framework and can directly solve image restoration problems (such as image deblurring and image denoising).

GAN has been used and has played important roles in several areas; however, it has not been used for image demosaicing. In this paper, we propose a novel learning-based image demosaicing method using GAN to improve color image recovery. Our contributions are as follows:
(1) We proposed a CFA image demosaicing method based on GAN.
(2) We carefully designed each part of the GAN model.
(3) We introduced long jump connections into the improved U-net [20] model to design the generator.
(4) We used a dense residual network, which includes dense residual blocks with long jump links and dense connections, for the discriminator.
(5) We combined the adversarial loss, the feature loss, and the pixel loss to further strengthen the network performance.

In the experimental section, we show the performance of our method using some comparative experiments. The results prove that the proposed method can more effectively remove artifacts and recover the full-color image, especially for some high-frequency areas such as edges and angles.

2. Related Work

2.1. Interpolation-Based Algorithms

There are many interpolation-based methods for image demosaicing. Linear interpolation algorithm is the simplest one. However, this method often causes artifacts and blurring at the image edges [21]. The bilinear interpolation algorithm [22] estimates the unknown pixels from their adjacent pixels. This method often causes color distortion in the reconstructed image. Malvar et al. [23] proposed a high-quality linear (HQL) interpolation algorithm, which can greatly reduce the computational complexity. However, the artifacts still occur at high-frequency components of the image. In order to further reduce the artifacts, different interpolation techniques were proposed.
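As a concrete illustration of the simplest scheme above, here is a minimal NumPy sketch of bilinear demosaicing for an RGGB Bayer mosaic. The function names and the zero-padded convolution are our own simplifications; the kernel weights are the standard bilinear ones, not taken from any specific cited method.

```python
import numpy as np

def conv3x3(img, k):
    """3x3 convolution with zero padding, implemented with shifted sums."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

def bilinear_demosaic(cfa):
    """Bilinear interpolation of an RGGB Bayer mosaic (H, W) -> (H, W, 3)."""
    h, w = cfa.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = cfa[0::2, 0::2]   # red sites: even rows, even cols
    g[0::2, 1::2] = cfa[0::2, 1::2]   # green sites on red rows
    g[1::2, 0::2] = cfa[1::2, 0::2]   # green sites on blue rows
    b[1::2, 1::2] = cfa[1::2, 1::2]   # blue sites: odd rows, odd cols
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue kernel
    return np.dstack([conv3x3(r, k_rb), conv3x3(g, k_g), conv3x3(b, k_rb)])

# A constant gray scene is reconstructed exactly away from the zero-padded border.
cfa = np.full((8, 8), 100.0)
rgb = bilinear_demosaic(cfa)
```

On a flat region every interior pixel is recovered exactly; the artifacts discussed in the text appear only once the scene contains edges.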

Within the gradient-based schemes, Hamilton and Adam [24] proposed the Hamilton–Adam algorithm, which uses the second derivative of the sampled color channels when doing interpolation. Therefore, this method considers the correlation among different color channels and significantly improves the image details. Mukherjee et al. [25] proposed a two-line (TL) interpolation algorithm, which used the homogeneity of the cross-ratios of different spectral components around a small neighborhood to interpolate the pixels lying in the low gradient directions, so as to produce high-quality images.

Within the directional interpolation schemes, Chung and Chan [26] used the prior decision in the horizontal interpolation and the vertical interpolation and got the interpolation result according to the trend of the image edges. This method is prone to producing false colors at tiny edges, especially when the edges are not in the horizontal or vertical directions. Zhang et al. [27] proposed a local directional interpolation and nonlocal adaptive thresholding (LDI-NAT) algorithm. This method used the nonlocal redundancy of the image to improve the local color reproduction and can better reconstruct the edges and reduce color artifacts.

Within the residual interpolation schemes, Kiku et al. [28] proposed a minimized-Laplacian residual interpolation (MLRI) algorithm. This method estimated the tentative pixel values by minimizing the Laplacian energy of the residuals, which can effectively reduce the color artifacts. Monno et al. [29] proposed an adaptive residual interpolation (ARI) algorithm, which adaptively selects a suitable iteration number and combines two different types of residual interpolation algorithms at each pixel. Kiku et al. [30] incorporated the residual interpolation algorithm into the gradient-based threshold free (RI-GBTF) algorithm, and the interpolation accuracy is greatly improved. Besides, L. Zhang and D. Zhang [21] proposed a joint demosaicing-zooming scheme. This method used the correlation of the hyperspectral spatial for the CFA image to calculate the color difference, so as to restore the three color components, which can effectively eliminate color artifacts.

2.2. Learning-Based Algorithms

Recently, neural networks have also been used for image demosaicing. Prakash et al. [31] used a denoising convolutional neural network (DnCNN) to perform demosaicing and denoising independently, which effectively suppressed noise and artifacts. Tan et al. [32] used a deep residual network for image demosaicing and denoising, which also effectively obtained high-resolution color images. Shopovska et al. [33] proposed an improved residual U-net for image demosaicing, which achieved high-quality reconstructed color images for different CFA patterns. Generally, learning-based strategies achieve better performance than traditional interpolation-based methods. However, recovering clearer, higher-resolution color images remains the constant pursuit of image demosaicing, which is why we explore GAN for this task.

3. Problem Formulation

3.1. CFA Image

To obtain a color image that describes the natural scene in detail, the best solution is to use three sensors, each accepting one of the red, green, and blue components for every pixel; the color image can then be synthesized by combining the three components. Considering cost and volume, however, most digital cameras use a single image sensor in their image acquisition systems. The image acquisition of a single-sensor camera is shown in Figure 1. The CFA is placed before the sensor. For a common CFA, such as the Bayer pattern [34] used in this work, the light reaching the sensor mainly consists of the red, green, and blue components, and each pixel of the CFA accepts only one color component. As shown in Figure 1, the obtained Bayer pattern image can only give the approximate gray outline of the scene instead of the complete color information. The color arrangement of the Bayer pattern can be clearly seen in the local zoomed-in area: rows of alternating red and green filters and rows of alternating green and blue filters are used in turn. The green pixels make up 1/2 of the total number of pixels, while the red and blue pixels each make up 1/4. As only one color component is captured at each pixel, the other two components must be recovered from the color information of adjacent pixels to obtain a full-color image from the CFA image. This process is called image demosaicing.
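The 2×2 RGGB sampling pattern described above can be written down directly. The following NumPy sketch (our own illustrative code, not from the paper) builds the per-channel masks and simulates single-sensor capture; it also makes the 1/2 green, 1/4 red, 1/4 blue proportions easy to verify.

```python
import numpy as np

def bayer_mask(h, w):
    """Boolean masks for an RGGB Bayer pattern of size (h, w)."""
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    red   = (rows % 2 == 0) & (cols % 2 == 0)   # one red site per 2x2 cell
    green = (rows % 2) != (cols % 2)            # the two green sites per 2x2 cell
    blue  = (rows % 2 == 1) & (cols % 2 == 1)   # one blue site per 2x2 cell
    return red, green, blue

def mosaic(rgb):
    """Simulate single-sensor capture: keep one color component per pixel."""
    r, g, b = bayer_mask(*rgb.shape[:2])
    return rgb[..., 0] * r + rgb[..., 1] * g + rgb[..., 2] * b
```

Summing the masks over any even-sized image confirms that green covers half the pixels and red and blue a quarter each, with no overlap.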

3.2. Theory of GAN

GAN is a kind of probabilistic generative network, first introduced into the deep learning field by Goodfellow et al. [16]. The general architecture of GAN is shown in Figure 2. The generator G performs inverse transformation sampling of a probability distribution and captures the distribution of the ground truth data x. From noise data z obeying a certain distribution (such as a Gaussian distribution), G generates a fake sample G(z) similar to x. The output of the discriminator D represents the probability that the incoming data are real: if the input is x, D outputs a large probability value; otherwise, it outputs a small one. Training GAN means maximizing the discrimination accuracy by training D while minimizing the difference between the generated sample and the real sample by training G. The training of G and D is thus a min-max game, and the two are improved by alternating optimization until they reach a Nash equilibrium, where the data distribution synthesized by G is similar to that of the ground truth data x. The loss function of the above process is defined as

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))], \quad (1)

where V(D, G) represents the value function [16], x represents ground truth data obeying the real data distribution p_data(x), and z represents noise data obeying a simulated distribution p_z(z) (such as the Gaussian distribution). D(x) and D(G(z)) are the classification outputs of D for the ground truth data x and the generated data G(z), respectively, and \mathbb{E} denotes expectation.
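To make the value function concrete, the following toy NumPy sketch estimates V(D, G) by Monte Carlo for a fixed 1-D "discriminator" and "generator". Both are hand-picked stand-ins, not trained networks; the distributions and parameters are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    """Toy discriminator: sigmoid score, larger for samples near the real mean."""
    return 1.0 / (1.0 + np.exp(-(x - 1.0)))

def G(z):
    """Toy generator: shifts noise toward the real distribution."""
    return z + 0.5

# Monte Carlo estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
x = rng.normal(2.0, 1.0, 10000)   # "real" data samples
z = rng.normal(0.0, 1.0, 10000)   # noise samples
V = np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))
```

Both log terms are negative (each probability lies strictly between 0 and 1), so the estimate is a finite negative number; training D pushes V up, training G pushes it down.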

4. The Proposed Method

In this section, we propose an effective demosaicing algorithm based on GAN. The whole process is shown in Figure 3. The proposed algorithm first extracts the red, green, and blue components from the original CFA image to form the 3-channeled split CFA image. The extracted green component is then further separated into two channels to form the 4-channeled split CFA image. Subsequently, the algorithm keeps only the nonzero pixel values to compress the 4-channeled split CFA image. The compressed 4-channeled image is taken as the input of the generator G in GAN, whose output is the interpolated 3-channeled full-color image. The output images from G and the ground truth images are then fed into the discriminator D, and the parameters of G are optimized according to the output of D. We designed the architectures of G and D and trained on the database through an end-to-end trainable neural network. In addition, the algorithm combines the adversarial loss, the pixel loss, and the feature loss to design the generator loss function and further improve the network performance [35]. In the following, we give a detailed introduction to the different parts of the network.
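The channel-splitting and compression step described above can be sketched as follows. Assuming the RGGB layout, each half-resolution plane keeps only the nonzero samples of one color site; the function name `pack_cfa` is ours.

```python
import numpy as np

def pack_cfa(cfa):
    """Pack an RGGB Bayer image (H, W) into a compressed (H/2, W/2, 4) tensor:
    R, G1 (green on red rows), G2 (green on blue rows), B - the zero entries
    of each sparse color plane are dropped by strided slicing."""
    r  = cfa[0::2, 0::2]
    g1 = cfa[0::2, 1::2]
    g2 = cfa[1::2, 0::2]
    b  = cfa[1::2, 1::2]
    return np.stack([r, g1, g2, b], axis=-1)
```

The packed tensor is what the generator consumes: a quarter of the original pixel count per channel, with no interpolated placeholders.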

4.1. Generator

The purpose of the generator G is to convert the 4-channeled compressed CFA images into 3-channeled full-color output images. The structure of G is shown in Figure 4. We used the improved U-net [20] model for G. Overall, the generator consists of an encoder (the first half) and a decoder (the second half), as shown in Figure 4. One layer in the encoder and the corresponding layer in the decoder form a U-shaped symmetric pair. The long jump links within each symmetric pair of the U-net model reduce information redundancy. Besides, we removed the pooling layers of the U-net, which avoids the loss of useful information in the feature maps and increases the stability of the training process.

The encoder is mainly based on the downsampling operation (i.e., convolution). It analyzes the input data to obtain the most significant features and provides feature mappings to its corresponding layer in the decoder. The activation function of the encoder is the leaky rectified linear unit (LReLU), which is defined as

\mathrm{LReLU}(x) = \begin{cases} x, & x \geq 0, \\ \alpha x, & x < 0, \end{cases} \quad (2)

where \alpha is a positive constant (0 < \alpha < 1) and x represents the input vector for a specific layer of the encoder. In our experiments, we set \alpha to 0.1.

The decoder is mainly based on the upsampling operation (i.e., deconvolution) to restore the full-color images. The activation function of the decoder is the standard rectified linear unit (ReLU), which is defined as

\mathrm{ReLU}(x) = \max(0, x), \quad (3)

where x represents the input vector for a specific layer of the decoder.

For the final layer of the decoder in particular, the activation function is tanh, which is defined as

\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \quad (4)

where x represents the input vector for the final layer of the decoder.
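The three activation functions above are easy to state in code. A NumPy sketch, using the paper's α = 0.1 for the encoder's LReLU:

```python
import numpy as np

def lrelu(x, alpha=0.1):
    """Leaky ReLU used in the encoder (alpha = 0.1 in the paper)."""
    return np.where(x >= 0, x, alpha * x)

def relu(x):
    """Standard ReLU used in the decoder."""
    return np.maximum(0.0, x)

def tanh(x):
    """tanh used in the final decoder layer; maps outputs into (-1, 1)."""
    return np.tanh(x)
```

The tanh output range is why the final layer can produce a bounded image once pixel values are scaled accordingly.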

In order to accelerate convergence and improve the network performance, we introduce a batch normalization (BN) operation after each convolution and deconvolution operation to reduce internal covariate shift and the sensitivity of the network to the initialization weights [36].

Detailed parameters for the convolution and deconvolution layers are shown in Table 1.


Encoder (convolution)
Layer   | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8
Kernel  | 3×3 | 4×4 | 4×4 | 4×4 | 4×4 | 4×4 | 4×4 | 4×4
Stride  | 1×1 | 2×2 | 2×2 | 2×2 | 2×2 | 2×2 | 2×2 | 2×2
Channel | 16  | 32  | 64  | 128 | 256 | 256 | 512 | 512

Decoder (deconvolution)
Layer   | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8
Kernel  | 4×4 | 4×4 | 4×4 | 4×4 | 4×4 | 4×4 | 4×4 | 3×3
Stride  | 2×2 | 2×2 | 2×2 | 2×2 | 2×2 | 2×2 | 2×2 | 1×1
Channel | 512 | 512 | 256 | 256 | 128 | 64  | 32  | 3
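A quick way to sanity-check Table 1 is to track the feature-map size through the layers. The sketch below assumes "same" padding (so a stride-2 convolution halves the spatial size and a stride-2 deconvolution doubles it) and a hypothetical 128 × 128 input; both assumptions are ours, not stated in the paper.

```python
import math

# Encoder/decoder layer specs from Table 1: (kernel, stride, out-channels).
encoder = [(3, 1, 16), (4, 2, 32), (4, 2, 64), (4, 2, 128),
           (4, 2, 256), (4, 2, 256), (4, 2, 512), (4, 2, 512)]
decoder = [(4, 2, 512), (4, 2, 512), (4, 2, 256), (4, 2, 256),
           (4, 2, 128), (4, 2, 64), (4, 2, 32), (3, 1, 3)]

def encoder_shapes(size):
    """Spatial size after each 'same'-padded conv: out = ceil(in / stride)."""
    shapes = []
    for _, stride, ch in encoder:
        size = math.ceil(size / stride)
        shapes.append((size, ch))
    return shapes

def decoder_shapes(size):
    """Each 'same'-padded stride-s deconvolution multiplies the size by s."""
    shapes = []
    for _, stride, ch in decoder:
        size = size * stride
        shapes.append((size, ch))
    return shapes
```

With these assumptions the seven stride-2 stages shrink a 128 × 128 input to a 1 × 1 bottleneck with 512 channels, and the decoder mirrors it back to 128 × 128 with 3 output channels, matching the U-shaped symmetry described in the text.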

4.2. Discriminator

We used a dense residual network, inspired by ResNet [36], for the discriminator D. ResNet is formed by stacking multiple consecutive residual blocks (RBs). In order to improve the network performance and counter gradient vanishing and gradient dispersion during training, we used an improved residual dense block (RDB). The structure of D is shown in Figure 5. The long jump connection after each RDB transfers the output of that RDB to the final convolution layer. Within each RDB there are several units, each consisting of the ReLU activation function, a convolution layer, and the BN operation; dense connections of different distances link these units. The output of the final convolution layer is mapped to a value between 0 and 1 by the sigmoid activation function, which normalizes the discriminant result into a probability and is defined as

\sigma(x) = \frac{1}{1 + e^{-x}}, \quad (5)

where x represents the input vector for the sigmoid function.
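The dense connections and the long jump connection of an RDB can be sketched abstractly. The toy version below uses random matrices in place of convolutions and a final 1 × 1 fusion back to the input width, so it only illustrates the wiring (each unit sees the concatenation of the block input and all previous unit outputs), not the paper's exact block.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdb(x, n_units=3, growth=4):
    """Wiring sketch of a residual dense block on a channels-last tensor."""
    feats = [x]
    for _ in range(n_units):
        inp = np.concatenate(feats, axis=-1)                 # dense connections
        w = rng.normal(size=(inp.shape[-1], growth)) * 0.1   # stand-in for conv
        feats.append(np.maximum(0.0, inp @ w))               # ReLU unit
    fuse = np.concatenate(feats, axis=-1)
    w_fuse = rng.normal(size=(fuse.shape[-1], x.shape[-1])) * 0.1  # 1x1 fusion
    return x + fuse @ w_fuse                                 # long jump (residual)
```

The residual addition at the end is what lets gradients bypass the stacked units, which is the motivation given in the text.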

For the convolution layers in D, the kernel size is set to 3 × 3, the stride is set to 1 × 1, and the number of output channels is 64.

4.3. Loss Function

Denote the ground truth images as x_n, n = 1, 2, …, N, where N represents the number of images. After a series of operations, the CFA images are transformed into the corresponding 4-channeled compressed CFA images, denoted as z_n, which are regarded as the input of G. Following the loss function of Alsaiari et al. [35], we combine the adversarial loss, the feature loss, and the pixel loss with appropriate weights to form the final loss function for the generator. The adversarial loss function L_A is expressed as

L_A = \frac{1}{N} \sum_{n=1}^{N} -\log D(G(z_n)), \quad (6)

where z_n represents the 4-channeled compressed CFA image. The 3-channeled color images G(z_n) are produced to fool the discriminator D by minimizing Equation (6).

The feature loss function L_F is defined as

L_F = \frac{1}{N} \sum_{n=1}^{N} \left\| \phi(G(z_n)) - \phi(x_n) \right\|_2^2, \quad (7)

where \phi represents the feature mapping extracted from the pretrained VGG network [35] and \| \cdot \|_2 represents the L2 norm. Using Equation (7), we can extract the image features and restore the image details by comparing the feature data of the generated image G(z_n) and the ground truth image x_n.

The pixel loss (pixel-to-pixel Euclidean distance) function L_P is defined as

L_P = \frac{1}{N} \sum_{n=1}^{N} \left\| G(z_n) - x_n \right\|_2^2 + \lambda R, \quad (8)

where R is the regularization item and \lambda represents the regularization weight. Using Equation (8), we can correctly restore the image information by comparing the generated image G(z_n) and the ground truth image x_n pixel by pixel.

We combine L_A, L_F, and L_P with appropriate weights to form the final loss function for the generator, which is defined as

L_G = w_A L_A + w_F L_F + w_P L_P, \quad (9)

where w_A, w_F, and w_P represent predefined positive weights set according to the empirical values in [35].
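Putting the three terms together, here is a NumPy sketch of the combined generator loss. The weights and all inputs are placeholder stand-ins (random arrays in place of discriminator scores, VGG features, and images), since the paper takes its weights from empirical values in [35].

```python
import numpy as np

rng = np.random.default_rng(0)

def generator_loss(d_fake, feat_fake, feat_real, img_fake, img_real,
                   w_a=1.0, w_f=1.0, w_p=1.0):
    """Weighted sum of adversarial, feature, and pixel losses (placeholder weights)."""
    l_adv  = -np.mean(np.log(d_fake))               # fool D: push D(G(z)) toward 1
    l_feat = np.mean((feat_fake - feat_real) ** 2)  # VGG-feature distance
    l_pix  = np.mean((img_fake - img_real) ** 2)    # per-pixel distance
    return w_a * l_adv + w_f * l_feat + w_p * l_pix

# Toy call with random stand-ins for D outputs, features, and images.
loss = generator_loss(rng.uniform(0.1, 0.9, 8),
                      rng.normal(size=(8, 64)), rng.normal(size=(8, 64)),
                      rng.normal(size=(8, 32, 32, 3)), rng.normal(size=(8, 32, 32, 3)))
```

All three terms are nonnegative here (the adversarial term because every discriminator score is below 1), so the combined loss is a positive scalar the optimizer can drive down.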

According to Equation (1), the discriminator D updates its parameters by maximizing

L_D = \frac{1}{N} \sum_{n=1}^{N} \left[ \log D(x_n) + \log\left(1 - D(G(z_n))\right) \right]. \quad (10)

For the ground truth image x_n, the output probability D(x_n) is pushed close to 1; for the generated image G(z_n), the output probability D(G(z_n)) is pushed close to 0.

Based on the above strategy, the generator and the discriminator will be alternately optimized.

Based on the above introduction, we give the whole pipeline in Figure 6 to clearly describe the proposed method. The real scene is captured by the camera and converted to the CFA image. The obtained CFA image is further converted to the 4-channeled compressed CFA image, which is fed into the generator designed with the U-net model. The output of the generator and the ground truth image are then input into the discriminator designed with the dense residual network. Through the network training, the generator finally produces a near-real demosaiced image.

5. Experiments

In this section, we demonstrate the performance of the proposed network with numerical experiments. The network training is carried out under the TensorFlow environment, which is installed on a PC with Nvidia GeForce® MX250 GPU and Intel Core i5-8265U CPU. The training sets are created beforehand and then uploaded into TensorFlow.

5.1. Training Details

The training database used in this paper is from the Waterloo Exploration database (WED) [37], which contains 4744 pristine natural images. We first randomly selected 400 images to create the training set. For the training set, we used data augmentation operations such as cropping and rotations to increase the number of images. To be more specific, we first scaled down each selected image by 1, 0.9, 0.8, and 0.7 times and then used a sliding window to crop the scaled images into patches with a size of pixels. The sliding step-lengths in the horizontal and vertical directions are both 20 pixels. Subsequently, the obtained patches are sequentially vertically and horizontally flipped and rotated 90°, 180°, and 270°, respectively, as shown in Figure 7. Through the above data augmentation operations, we obtained 86400 training images. These images are input in batches during the network training process to reduce the calculation and avoid local extreme value problems.
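The flip/rotation augmentations can be sketched as follows. The exact set of combinations the paper applies is not fully specified, so the six variants below are our reading of "vertically and horizontally flipped and rotated 90°, 180°, and 270°".

```python
import numpy as np

def augment(patch):
    """Flip/rotation variants of a patch: the original, its vertical and
    horizontal flips, and its 90/180/270-degree rotations (our reading of
    the paper's description; the exact combination set is an assumption)."""
    return [patch,
            np.flipud(patch),       # vertical flip
            np.fliplr(patch),       # horizontal flip
            np.rot90(patch, 1),     # 90 degrees
            np.rot90(patch, 2),     # 180 degrees
            np.rot90(patch, 3)]     # 270 degrees
```

Note that a 180° rotation equals the composition of both flips, so these six operations come from the same dihedral symmetry group; together with the four scales they multiply the patch count as described above.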

During the training, the weighting parameters in the loss functions are set according to the empirical values in [35]. The batch size is set to 256, and the whole network is trained for 200 iterations. We used a variable learning rate: the initial learning rate is set to 0.01 and is reduced by a factor of 10 every 40 iterations. The trained network is tested on the Kodak database and the McMaster database [38]. The Kodak database consists of 24 images with the size of pixels. The McMaster database consists of 18 images with the size of pixels.

In order to quantitatively evaluate the performance of the proposed network, we used the color peak signal-to-noise ratio (CPSNR) and the structural similarity index (SSIM) as measurement standards for the demosaicing results. The CPSNR value is calculated as

\mathrm{CPSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{3HW} \sum_{c=1}^{3} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( I(i, j, c) - \hat{I}(i, j, c) \right)^2}, \quad (11)

where I(i, j, c) and \hat{I}(i, j, c) represent the pixel values of the ground truth image and the demosaiced image for color channel c, respectively, and H and W represent the height and width of the image.

The SSIM measures the similarity between two images and is defined as

\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}, \quad (12)

where \mu_x and \sigma_x represent the mean intensity and the standard deviation of the ground truth image x, \mu_y and \sigma_y represent the mean intensity and the standard deviation of the demosaiced image y, and \sigma_{xy} is the covariance between x and y. C_1 = (K_1 L)^2 and C_2 = (K_2 L)^2 are two constants used to keep the equation stable, with K_1 = 0.01, K_2 = 0.03, and L = 255.
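Both metrics are straightforward to implement. The sketch below uses a peak value of 255 for CPSNR and a single global window for SSIM, which is a simplification of the usual sliding-window form; the constants are the ones given above.

```python
import numpy as np

def cpsnr(gt, rec, peak=255.0):
    """Color PSNR: 10*log10(peak^2 / MSE), MSE taken over all pixels and 3 channels."""
    mse = np.mean((gt.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, k1=0.01, k2=0.03, L=255.0):
    """Single-window (global) SSIM - a simplification of the sliding-window form."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

For identical inputs the SSIM is exactly 1, and larger pixel errors lower the CPSNR logarithmically, which matches how the tables below should be read (higher is better for both).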

5.2. Image Demosaicing Test

In this section, we prove the effectiveness of the proposed method by comparing different demosaicing methods. The methods used for comparison are the Bilinear [22], TL [25], HQL [23], Zhang’s [21], LDI-NAT [27], ARI [29], MLRI [28], RI-GBTF [30], and DnCNN [31] methods, as well as the proposed method.

Table 2 shows the CPSNR and SSIM of the test results on the Kodak database for the different methods, and Figure 8 shows the corresponding box plots of the CPSNR and SSIM for easier comparison. Table 3 shows the CPSNR and SSIM of the test results on the McMaster database, with the corresponding box plots in Figure 9. We can see that the proposed method yields higher CPSNR and SSIM values, indicating better performance than the other methods.


Table 2: CPSNR (dB)/SSIM of the test results on the Kodak database.

Images 1–8:
Method   | 1            | 2            | 3            | 4            | 5            | 6            | 7            | 8
Bilinear | 33.55/0.8060 | 38.66/0.8984 | 39.59/0.9333 | 38.56/0.9118 | 34.22/0.8777 | 34.94/0.8443 | 39.16/0.9538 | 33.19/0.8243
TL       | 32.98/0.8081 | 36.91/0.8564 | 38.64/0.9201 | 37.16/0.8788 | 33.76/0.8500 | 34.14/0.8364 | 37.06/0.9288 | 32.75/0.8323
HQL      | 36.02/0.9505 | 41.39/0.9602 | 43.13/0.9761 | 42.32/0.9709 | 37.30/0.9701 | 37.40/0.9566 | 42.47/0.9827 | 35.17/0.9502
Zhang's  | 38.79/0.9881 | 40.46/0.9754 | 41.45/0.9861 | 40.11/0.9793 | 37.05/0.9859 | 40.23/0.9886 | 40.25/0.9877 | 35.99/0.9865
LDI-NAT  | 33.69/0.9633 | 39.01/0.9683 | 40.40/0.9802 | 39.02/0.9718 | 34.77/0.9797 | 34.90/0.9693 | 40.93/0.9869 | 32.01/0.9676
ARI      | 38.84/0.9863 | 39.69/0.9704 | 42.75/0.9874 | 40.66/0.0904 | 38.29/0.9887 | 40.55/0.9885 | 42.74/0.9888 | 34.99/0.9801
MLRI     | 36.86/0.9818 | 41.49/0.9807 | 42.95/0.9883 | 41.99/0.9842 | 37.68/0.9885 | 38.99/0.9865 | 42.79/0.9904 | 35.17/0.9825
RI-GBTF  | 39.03/0.9880 | 40.08/0.9753 | 41.51/0.9870 | 40.49/0.9803 | 37.81/0.9878 | 40.89/0.9899 | 40.93/0.9869 | 32.01/0.9676
DnCNN    | 41.96/0.9874 | 43.73/0.9732 | 42.18/0.9869 | 43.26/0.9867 | 37.77/0.9849 | 40.26/0.9873 | 42.08/0.9873 | 36.49/0.9738
Proposed | 42.07/0.9882 | 43.83/0.9757 | 42.22/0.9876 | 43.93/0.9880 | 39.49/0.9897 | 41.77/0.9901 | 43.53/0.9894 | 36.88/0.9869

Images 9–16:
Method   | 9            | 10           | 11           | 12           | 13           | 14           | 15           | 16
Bilinear | 38.58/0.9198 | 38.82/0.9191 | 36.02/0.8680 | 38.89/0.9063 | 32.51/0.7690 | 35.54/0.8709 | 38.76/0.9174 | 36.88/0.8748
TL       | 38.18/0.9156 | 38.07/0.9083 | 35.34/0.8545 | 37.80/0.8975 | 32.32/0.7867 | 34.58/0.8467 | 37.97/0.9058 | 35.80/0.8601
HQL      | 41.59/0.9702 | 42.41/0.9722 | 38.75/0.9611 | 42.41/0.9697 | 35.03/0.9420 | 38.54/0.9616 | 41.74/0.9659 | 39.87/0.9629
Zhang's  | 42.27/0.9848 | 42.04/0.9858 | 39.85/0.9868 | 42.83/0.9868 | 35.01/0.9821 | 35.52/0.9806 | 39.12/0.9758 | 43.73/0.9900
LDI-NAT  | 40.11/0.9744 | 39.81/0.9778 | 36.13/0.9728 | 40.72/0.9777 | 29.64/0.9461 | 35.41/0.9718 | 37.58/0.9671 | 38.56/0.9746
ARI      | 41.96/0.9714 | 41.67/0.9819 | 39.67/0.9843 | 43.25/0.9869 | 35.22/0.9804 | 37.57/0.9836 | 38.64/0.9715 | 43.33/0.9892
MLRI     | 42.36/0.9817 | 42.44/0.9847 | 39.36/0.9863 | 43.37/0.9868 | 33.15/0.9732 | 37.75/0.9834 | 39.85/0.9777 | 42.84/0.9885
RI-GBTF  | 42.54/0.9848 | 42.61/0.9866 | 40.24/0.9880 | 43.39/0.9876 | 35.32/0.9824 | 36.31/0.9817 | 38.90/0.9738 | 44.38/0.9901
DnCNN    | 42.76/0.9857 | 40.04/0.9836 | 42.32/0.9875 | 42.15/0.9870 | 36.94/0.9837 | 39.50/0.9803 | 40.95/0.9725 | 44.65/0.9913
Proposed | 43.43/0.9883 | 43.60/0.9871 | 42.66/0.9901 | 43.99/0.9894 | 37.03/0.9861 | 40.02/0.9845 | 42.36/0.9779 | 44.96/0.9904

Images 17–24:
Method   | 17           | 18           | 19           | 20           | 21           | 22           | 23           | 24
Bilinear | 38.58/0.9244 | 35.55/0.8730 | 36.01/0.8750 | 38.53/0.9198 | 35.81/0.8878 | 36.65/0.8816 | 41.32/0.9551 | 35.53/0.8774
TL       | 37.82/0.9136 | 34.84/0.8595 | 35.46/0.8798 | 38.04/0.9132 | 35.24/0.8720 | 36.01/0.8779 | 40.03/0.9399 | 35.24/0.8790
HQL      | 42.17/0.9760 | 38.50/0.9612 | 38.20/0.9619 | 41.27/0.9651 | 38.42/0.9640 | 37.78/0.9606 | 44.46/0.9793 | 38.54/0.9663
Zhang's  | 41.67/0.9882 | 37.13/0.9794 | 40.68/0.9856 | 40.94/0.9769 | 39.12/0.9824 | 37.96/0.9741 | 41.99/0.9820 | 35.38/0.9853
LDI-NAT  | 38.31/0.9789 | 33.89/0.9653 | 37.22/0.9692 | 38.31/0.9672 | 35.22/0.9686 | 37.11/0.9719 | 41.80/0.9811 | 31.77/0.9731
ARI      | 41.32/0.9868 | 36.91/0.9753 | 40.65/0.9821 | 40.89/0.9760 | 39.28/0.9748 | 38.08/0.9728 | 43.24/0.9814 | 35.36/0.9849
MLRI     | 40.69/0.9865 | 36.55/0.9782 | 40.03/0.9823 | 40.71/0.9770 | 38.17/0.9802 | 38.55/0.9772 | 43.67/0.9857 | 34.64/0.9842
RI-GBTF  | 41.75/0.9890 | 37.38/0.9805 | 41.12/0.9860 | 41.21/0.9780 | 39.63/0.9833 | 38.36/0.9763 | 41.96/0.9816 | 35.01/0.9858
DnCNN    | 42.18/0.9849 | 38.96/0.9815 | 42.74/0.9847 | 40.55/0.9873 | 40.08/0.9825 | 39.79/0.9805 | 44.67/0.9843 | 39.13/0.9867
Proposed | 42.25/0.9891 | 39.47/0.9983 | 42.96/0.9926 | 42.86/0.9877 | 40.87/0.9894 | 40.37/0.9862 | 44.98/0.9916 | 40.07/0.9954


Table 3: CPSNR (dB)/SSIM of the test results on the McMaster database.

Images 1–9:
Method   | 1            | 2            | 3            | 4            | 5            | 6            | 7            | 8            | 9
Bilinear | 29.58/0.8939 | 33.39/0.9252 | 28.99/0.9169 | 31.55/0.9564 | 34.65/0.9374 | 37.83/0.9599 | 31.78/0.8882 | 33.10/0.9486 | 34.76/0.9498
TL       | 33.67/0.8261 | 35.72/0.8740 | 33.96/0.8753 | 35.96/0.9422 | 35.91/0.8715 | 36.29/0.8887 | 34.81/0.8662 | 37.39/0.9254 | 37.03/0.9047
HQL      | 34.10/0.8733 | 37.33/0.9172 | 35.72/0.9501 | 37.93/0.9784 | 37.66/0.9239 | 38.23/0.9389 | 38.22/0.9537 | 39.73/0.9607 | 38.76/0.9373
Zhang's  | 25.97/0.8484 | 32.89/0.9135 | 31.99/0.9503 | 33.76/0.9811 | 29.89/0.8999 | 31.85/0.9000 | 38.71/0.9734 | 36.73/0.9610 | 33.14/0.9237
LDI-NAT  | 27.47/0.8760 | 33.98/0.9271 | 31.66/0.9542 | 34.86/0.9842 | 32.58/0.9285 | 35.61/0.9420 | 35.14/0.9546 | 36.65/0.9656 | 35.51/0.9449
ARI      | 29.63/0.9243 | 35.23/0.9455 | 34.65/0.9728 | 37.92/0.9894 | 35.44/0.9625 | 38.79/0.9626 | 39.75/0.9792 | 39.55/0.9784 | 37.90/0.9630
MLRI     | 28.98/0.9135 | 35.06/0.9417 | 33.85/0.9697 | 37.66/0.9893 | 33.94/0.9533 | 38.29/0.9660 | 37.50/0.9705 | 37.05/0.9710 | 36.49/0.9588
RI-GBTF  | 27.44/0.8869 | 33.64/0.9235 | 32.71/0.9603 | 34.85/0.9843 | 31.43/0.9322 | 34.47/0.9431 | 39.11/0.9754 | 36.36/0.9640 | 34.10/0.9429
DnCNN    | 29.39/0.9236 | 36.66/0.9358 | 35.86/0.9673 | 37.74/0.9921 | 36.28/0.9579 | 37.95/0.9642 | 39.30/0.9735 | 38.45/0.9691 | 38.51/0.9582
Proposed | 34.29/0.9367 | 37.67/0.9486 | 35.88/0.9738 | 38.06/0.9980 | 37.70/0.9635 | 38.93/0.9693 | 39.73/0.9798 | 39.87/0.9804 | 39.74/0.9679

Images 10–18:
Method   | 10           | 11           | 12           | 13           | 14           | 15           | 16           | 17           | 18
Bilinear | 36.86/0.9543 | 36.95/0.9495 | 34.62/0.9520 | 39.04/0.9606 | 37.55/0.9487 | 37.66/0.9447 | 31.45/0.9278 | 34.75/0.9375 | 31.46/0.9098
TL       | 37.10/0.8960 | 38.44/0.9206 | 37.40/0.9021 | 39.64/0.9153 | 38.79/0.9147 | 39.10/0.9172 | 34.79/0.8799 | 34.32/0.8727 | 35.56/0.8756
HQL      | 40.17/0.9539 | 40.51/0.9451 | 40.06/0.9517 | 40.04/0.9434 | 40.79/0.9426 | 40.77/0.9360 | 36.33/0.9051 | 36.12/0.8963 | 37.32/0.9361
Zhang's  | 35.01/0.9402 | 35.91/0.9273 | 35.58/0.9446 | 37.73/0.9381 | 36.49/0.9368 | 36.36/0.9257 | 29.03/0.8675 | 27.65/0.8324 | 33.24/0.9379
LDI-NAT  | 37.24/0.9560 | 37.57/0.9427 | 37.33/0.9547 | 39.72/0.9488 | 37.76/0.9471 | 37.83/0.9414 | 31.01/0.8913 | 30.81/0.9031 | 33.71/0.9397
ARI      | 39.28/0.9723 | 40.23/0.9730 | 40.07/0.9647 | 40.64/0.9507 | 39.05/0.9575 | 39.41/0.9593 | 35.64/0.9673 | 34.71/0.9616 | 36.43/0.9666
MLRI     | 38.65/0.9688 | 39.98/0.9714 | 39.65/0.9636 | 40.63/0.9531 | 38.81/0.9552 | 38.92/0.9528 | 35.16/0.9632 | 32.58/0.9415 | 36.12/0.9649
RI-GBTF  | 36.27/0.9552 | 37.85/0.9610 | 38.13/0.9579 | 38.76/0.9448 | 37.27/0.9431 | 36.65/0.9309 | 32.19/0.9442 | 29.06/0.9392 | 35.28/0.9590
DnCNN    | 39.04/0.9678 | 39.47/0.9746 | 40.03/0.9614 | 39.87/0.9525 | 39.36/0.9579 | 40.18/0.9487 | 35.94/0.9603 | 33.89/0.9589 | 37.21/0.9631
Proposed | 40.35/0.9755 | 40.62/0.9787 | 40.11/0.9675 | 40.71/0.9616 | 40.95/0.9670 | 40.93/0.9603 | 36.82/0.9679 | 34.64/0.9677 | 37.65/0.9690

For additional comparison, Figure 10 shows the reconstructed images using different methods for the 19th image in the Kodak database. The marked portion of the image within the black box (the fence) is enlarged for clearer comparison. It can be seen that this part has obvious vertical textures and it is prone to artifacts. Residual images (i.e., difference between the ground truth image and the demosaiced images) for the enlarged portion are also shown for easier comparison. From the reconstructed images and the residual images, we can see that, compared with other methods, the proposed method can more effectively suppress the artifact phenomenon, especially for some tiny edges and angle areas.

Figure 11 shows the reconstructed images using different methods for the 22nd image in the Kodak database. The marked portion in the black box (the window) is enlarged. This part is prone to color stripes and zippering. The residual images are also shown for easier comparison. From the reconstructed images and the residual images, we can see that most methods obtain satisfactory results in the smooth areas, while wrong colors may appear at the edges. In comparison, the results of the proposed method show relatively fewer artifacts and color stripes.

Figures 12 and 13 show the reconstructed images using different methods for the 1st and 12th images in the McMaster database, respectively. Similarly, the marked portions in the images are enlarged and the residual images for the enlarged portion are shown for clear comparisons. It can be seen that, compared with other methods, the proposed method can better recover the images with fewer artifacts, especially at some tiny edges, which proved the validity and performance of the proposed method.

6. Discussion

In this work, we proposed a new method for image demosaicing based on GAN, which aims to reconstruct the full-color image more effectively. One of the challenges of this task is the recovery of high-frequency information in the image, such as edges and corners. Many related algorithms handle the smooth parts of the image well; however, artifacts, zippering, and color stripes still appear in the high-frequency parts. In the current work, we redesigned the generator and discriminator of GAN and combined the adversarial loss, the feature loss, and the pixel loss to further improve the network performance. Numerical experiments showed that the proposed algorithm can effectively reduce the artifacts at the edges and produce near-real reconstructed images, which can serve as the basis for subsequent image processing, such as image recognition and image transmission.

The proposed method produces better recovered color images; however, the learning-based strategy is relatively time-consuming in the training phase, so improving the efficiency of network training is an important direction for further enhancing learning-based technology. In practice, there are many kinds of CFA patterns; we used the Bayer pattern in this paper. Different CFA patterns may affect the reconstructed image differently, so we will test the network on CFA images with different designs in the near future. We also assumed that the CFA images are noiseless, whereas images from real cameras are affected by noise; we will therefore try combining image demosaicing and denoising in the future. Finally, the current work focuses on directly generating the demosaiced images with a neural network; we will explore combining traditional demosaicing algorithms with the neural network in future work.

7. Conclusions

In this paper, we proposed an image demosaicing method based on GAN. The generator uses an improved U-net architecture to directly generate the demosaiced images. For the discriminator, we used a dense residual network containing dense residual blocks with long skip connections and dense connections, which alleviates gradient vanishing and dispersion during training and improves the discriminant ability of the network. In addition, we combined the adversarial loss, the pixel loss, and the feature loss into the loss function. The network was trained on images from the Waterloo Exploration database and tested on the Kodak and McMaster databases. Comparisons among different image demosaicing methods showed that the proposed method better eliminates artifacts in the reconstructed image and, in particular, better restores high-frequency features such as edges and corners of the image.
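The combined objective can be written as a weighted sum of the three terms. The following sketch shows one common way such a generator loss is assembled; the specific forms (MSE for the pixel and feature terms, a log-likelihood adversarial term) and the weights `w_adv`, `w_pix`, `w_feat` are illustrative assumptions, not the exact values used in this paper.

```python
import numpy as np

def combined_loss(fake, real, d_fake, feat_fake, feat_real,
                  w_adv=1e-3, w_pix=1.0, w_feat=0.1):
    """Weighted sum of adversarial, pixel, and feature losses for the
    generator. `fake`/`real` are generated and ground-truth images,
    `d_fake` holds discriminator scores in (0, 1) for the generated
    images, and `feat_*` are feature maps from a fixed extractor.
    The weights are illustrative placeholders."""
    l_pix = np.mean((fake - real) ** 2)             # pixel (MSE) loss
    l_feat = np.mean((feat_fake - feat_real) ** 2)  # feature loss
    l_adv = -np.mean(np.log(d_fake + 1e-12))        # adversarial loss
    return w_adv * l_adv + w_pix * l_pix + w_feat * l_feat
```

When the generated image matches the ground truth and fools the discriminator, all three terms approach zero; the weights balance fidelity to the reference image against realism as judged by the discriminator.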

Data Availability

The data used to support the findings of this study are open datasets that are freely available on public websites; they are also available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors acknowledge the National Natural Science Foundation of China (Grant no. 41704118) and the Natural Science Basic Research Plan in Shaanxi Province of China (Grant no. 2020JM-446).

References

  1. Y. Yuan and F. Y. Wang, “Towards blockchain-based intelligent transportation systems,” in Proceedings of the IEEE International Conference on Intelligent Transportation Systems, pp. 2663–2668, IEEE, Rio de Janeiro, Brazil, November 2016.
  2. C. N. E. Anagnostopoulos, I. E. Anagnostopoulos, V. Loumos, and E. Kayafas, “A license plate-recognition algorithm for intelligent transportation system applications,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 3, pp. 377–392, 2006.
  3. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Optics Letters, vol. 32, no. 8, pp. 912–914, 2007.
  4. E. R. Hunt, M. Cavigelli, C. S. T. Daughtry, J. E. Mcmurtrey, and C. L. Walthall, “Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status,” Precision Agriculture, vol. 6, no. 4, pp. 359–378, 2005.
  5. A. D. Richardson, B. H. Braswell, D. Y. Hollinger, J. P. Jenkins, and S. V. Ollinger, “Near-surface remote sensing of spatial and temporal variation in canopy phenology,” Ecological Applications, vol. 19, no. 6, pp. 1417–1428, 2009.
  6. A. Rango, A. Laliberte, J. E. Herrick, C. Winters, and D. Browning, “Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management,” Journal of Applied Remote Sensing, vol. 3, no. 1, Article ID 033542, 2009.
  7. T. Yamaguchi and M. Ikehara, “Image demosaicking via chrominance images with parallel convolutional neural networks,” in Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1702–1706, Brighton, UK, May 2019.
  8. D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 4968–4981, 2014.
  9. J. Wang, C. Zhang, and P. Hao, “New color filter arrays of high light sensitivity and high demosaicking performance,” in Proceedings of the 2011 IEEE International Conference on Image Processing, pp. 3153–3156, Brussels, Belgium, September 2011.
  10. Y. Han, T. Jiang, Y. Ma, and C. Xu, “Pretraining convolutional neural networks for image-based vehicle classification,” Advances in Multimedia, vol. 2018, Article ID 3138278, 10 pages, 2018.
  11. C. Tang, Q. Zhu, W. Wu, W. Huang, C. Hong, and X. Niu, “PLANET: improved convolutional neural networks with image enhancement for image classification,” Mathematical Problems in Engineering, vol. 2020, Article ID 1245924, 10 pages, 2020.
  12. P. Wang, W. Li, Z. Gao, J. Zhang, C. Tang, and P. O. Ogunbona, “Action recognition from depth maps using deep convolutional neural networks,” IEEE Transactions on Human-Machine Systems, vol. 46, no. 4, pp. 498–509, 2016.
  13. Y. Hou, Z. Li, P. Wang, and W. Li, “Skeleton optical spectra-based action recognition using convolutional neural networks,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 3, pp. 807–811, 2018.
  14. Z. Hua, H. Zhang, and J. Li, “Image super resolution using fractal coding and residual network,” Complexity, vol. 2019, Article ID 9419107, 14 pages, 2019.
  15. X. Zhu, L. Zhang, L. Zhang, X. Liu, Y. Shen, and S. Zhao, “GAN-based image super-resolution with a novel quality loss,” Mathematical Problems in Engineering, vol. 2020, Article ID 9419107, 12 pages, 2020.
  16. I. Goodfellow, J. Pouget-Abadie, M. Mirza et al., “Generative adversarial nets,” in Proceedings of the Advances in Neural Information Processing Systems, pp. 2672–2680, Montreal, Canada, 2014.
  17. C. Ledig, L. Theis, F. Huszár et al., “Photo-realistic single image super-resolution using a generative adversarial network,” 2017, https://arxiv.org/abs/1609.04802.
  18. O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: blind motion deblurring using conditional adversarial networks,” 2017, https://arxiv.org/abs/1711.07064.
  19. J. Pan, Y. Liu, and J. Dong, “Physics-based generative adversarial models for image restoration and beyond,” 2018, https://arxiv.org/abs/1808.00605.
  20. O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Munich, Germany, October 2015.
  21. L. Zhang and D. Zhang, “A joint demosaicking–zooming scheme for single chip digital color cameras,” Computer Vision and Image Understanding, vol. 107, no. 1-2, pp. 14–25, 2007.
  22. S. C. Pei and I. K. Tam, “Effective color interpolation in CCD color filter arrays using signal correlation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 6, pp. 503–513, 2003.
  23. H. S. Malvar, L. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” in Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 485–488, Montreal, Canada, May 2004.
  24. J. F. Hamilton and J. E. Adam, “Adaptive color plane interpolation in single sensor color electronic camera,” 1997, US Patent No. 5629734.
  25. J. Mukherjee, M. S. Moore, and S. K. Mitra, “Color demosaicing with constrained buffering,” in Proceedings of the Sixth International Symposium on Signal Processing and Its Applications, pp. 52–55, Kuala Lumpur, Malaysia, August 2001.
  26. K.-H. Chung and Y.-H. Chan, “Color demosaicing using variance of color differences,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2944–2955, 2006.
  27. L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” Journal of Electronic Imaging, vol. 20, no. 2, Article ID 023016, 2011.
  28. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” in Proceedings of the SPIE, San Francisco, CA, USA, March 2014.
  29. Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 3861–3865, Quebec, Canada, September 2015.
  30. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Beyond color difference: residual interpolation for color image demosaicking,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1288–1300, 2016.
  31. V. Prakash, K. S. Prasad, and T. J. C. Prasad, “Deep learning approach for image denoising and image demosaicing,” International Journal of Computer Applications, vol. 168, no. 9, pp. 18–26, 2017.
  32. H. Tan, H. Xiao, S. Lai, Y. Liu, and M. Zhang, “Deep residual learning for image demosaicing and blind denoising,” Pattern Recognition Letters, 2018.
  33. I. Shopovska, L. Jovanov, and W. Philips, “RGB-NIR demosaicing using deep residual U-Net,” in Proceedings of the 2018 26th Telecommunications Forum (TELFOR), pp. 1–4, Belgrade, Serbia, November 2018.
  34. B. E. Bayer, “Color imaging array,” 1976, US Patent No. 3971065.
  35. A. Alsaiari, R. Rustagi, M. M. Thomas, and A. G. Forbes, “Image denoising using a generative adversarial network,” in Proceedings of the 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT), pp. 126–132, Kahului, HI, USA, March 2019.
  36. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
  37. K. Ma, Z. Duanmu, Q. Wu et al., “Waterloo exploration database: new challenges for image quality assessment models,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 1004–1016, 2016.
  38. D. S. Tan, W.-Y. Chen, and K.-L. Hua, “DeepDemosaicking: adaptive image demosaicking via multiple deep fully convolutional networks,” IEEE Transactions on Image Processing, vol. 27, no. 5, pp. 2408–2419, 2018.

Copyright © 2020 Jingrui Luo and Jie Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

