
Mali Yu, Hai Zhang, "High Dynamic Range Imaging Based on Bidirectional Structural Similarities and Weighted Low-Rank Matrix Completion", Advances in Multimedia, vol. 2019, Article ID 8459896, 8 pages, 2019. https://doi.org/10.1155/2019/8459896

High Dynamic Range Imaging Based on Bidirectional Structural Similarities and Weighted Low-Rank Matrix Completion

Academic Editor: Constantine Kotropoulos
Received: 05 Aug 2019
Accepted: 05 Dec 2019
Published: 26 Dec 2019

Abstract

High dynamic range (HDR) imaging, which increases the dynamic range of an image by merging multiexposure images, has attracted much attention. Ghosts are often observed in the resultant image due to camera motion and object motion in the scene. Low-rank matrix completion (LRMC) provides an effective tool for ghost removal; however, it requires user specification of the included or excluded regions. In this paper, we propose a novel HDR imaging method based on bidirectional structural similarities and weighted low-rank matrix completion. We first propose bidirectional structural similarities, comprising forward-projection structural similarity (FPSS) and backward-projection structural similarity (BPSS), to divide each image into four groups: motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions. Then, the weight maps and motion maps constructed from FPSS and BPSS are introduced into the weighted LRMC model to reconstruct the background irradiance maps. Experiments on several challenging image sets with complex scenes show that the proposed method outperforms three current state-of-the-art methods and Photoshop CS6 and is robust to the choice of reference image.

1. Introduction

Typical digital cameras capture images with 8 bits per pixel per color channel, which is much lower than the dynamic range of real-world scenes. Thus, details of the dark or bright parts of a scene are missing in a single image. This problem can be addressed by merging images captured under different exposure settings, because different regional information is captured at each specific exposure [1].

Some methods generate a high dynamic range (HDR) image as the weighted sum of the estimated irradiance images, after recovering the camera response function [2, 3], while others directly generate an HDR-like low dynamic range (LDR) image as the weighted sum of the input LDR images by appropriately adjusting weights [4–6]. These methods perform well if the scene is static. However, ghosts are often observed in the resultant image, because motion is hard to avoid in practice. Thus, ghost removal is essential in HDR imaging [7–9].
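The first strategy above, a weighted sum of estimated irradiance images, can be sketched as follows. This is a minimal illustration rather than any specific published method: it assumes a known logarithmic inverse camera response and a hat-shaped weight, and the function name `merge_hdr` is ours.

```python
import numpy as np

def merge_hdr(images, times, g=np.log):
    """Merge multi-exposure images into one irradiance map (illustrative).

    images: list of float arrays in [0, 1]; times: exposure times.
    g is the (assumed known) inverse camera response in the log domain.
    A hat-shaped weight down-weights under-/over-exposed pixels.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    eps = 1e-6
    for z, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * z - 1.0)      # hat weight, 0 at the extremes
        num += w * (g(z + eps) - np.log(t))  # per-image log-irradiance
        den += w
    return np.exp(num / (den + eps))         # weighted-average irradiance

# a flat scene photographed at two exposures should give consistent irradiance
imgs = [np.full((4, 4), 0.25), np.full((4, 4), 0.5)]
irradiance = merge_hdr(imgs, times=[1.0, 2.0])
```

With a linear response, halving the exposure time halves the recorded intensity, so both inputs agree on an irradiance of 0.25 here.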

Recently, most studies focus on correcting object motion in the scene, because camera motion can be avoided by fixing the camera or applying global registration methods [10–12]. Existing object motion removal methods fall into two categories: selection-based methods and correction-based ones. Selection-based methods [13–16] generate the resultant image as the weighted sum of all input images, or of all gradient images, where zero or a small weight is assigned to motion pixels. These methods perform well in some cases; however, they rely heavily on accurate detection of motion pixels.

Correction-based methods reconstruct the motion regions and the saturated regions based on correlation. For example, Sen et al. [17] and Hu et al. [18] exploited patch matching to search for the closest patches for each pixel, which were used to correct the motion regions and the saturated regions. However, mismatches appear in the saturated regions and blurring exists in the fused image. Zimmer et al. [19] proposed using optical flow to find dense correspondences, from which an HDR image is reconstructed. However, the correspondence fails for large displacements.

Rank minimization provides an effective tool in image recovery [20–22]. Based on the assumption that image intensity is linear in scene irradiance, Oh et al. [23] first introduced rank minimization into HDR imaging to detect motion, using the estimated sparse error to determine the weight maps. Bhardwaj and Raman [24] modified the soft thresholding function in the original robust principal component analysis (RPCA) algorithm to recover the low-rank matrix, which was then fused by the pyramid-based method [4] to obtain the resultant background irradiance map. Lee et al. [25] improved the model by introducing low-rank matrix completion (LRMC). However, these methods also suffer from the problems of the selection-based methods. To handle this problem, Oh et al. [26] introduced a rank-1 constraint into the LRMC and replaced the nuclear norm with the partial sum of singular values; the estimated low-rank matrix is the background irradiance map. Lee and Lam [27] employed truncated nuclear norm minimization to accelerate the algorithm. However, the performance of these methods relies heavily on the selection of the missing regions: in [26, 27], part of the missing regions requires user specification, which is hard for the user when the scene is complex.

To address the limitations of the LRMC-based HDR imaging methods, we present a novel HDR imaging method based on bidirectional structural similarities and a weighted LRMC model. First, we propose the bidirectional structural similarities to segment an image into four groups: motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions. Similarity measurements insensitive to luminance variation, such as local entropy [28], zero-mean normalized cross-correlation [29, 30], interconsistency and intraconsistency [31], and the direction of the signal structure component [32], have been employed to detect motion regions. These methods perform well in many cases but are prone to mistaking well-exposed regions of the source image that correspond to saturated regions in the reference image for object motion. Since the images must be transformed to the same luminance level prior to the similarity check, we observe that structural variation is bidirectional in the motion regions and unidirectional in the saturated regions. To facilitate the later discussion, the projection from the reference image to the source image is termed the forward projection (FP) and the reverse projection the backward projection (BP). The structure in a motion region changes under both FP and BP; the structure in a saturated region of the source image changes under BP but not FP; the structure in a saturated region of the reference image changes under FP but not BP. Therefore, we propose bidirectional structural similarities, comprising FP structural similarity (FPSS) and BP structural similarity (BPSS), to more accurately detect the motion regions and the saturated regions. Then, we construct the motion maps and the weight maps from FPSS and BPSS and introduce them into the weighted LRMC-based method.
The proposed method requires no user specification of the missing regions and is robust to the reference image.

The rest of the paper is organized as follows. The proposed method based on the bidirectional structural similarities and the weighted LRMC-based method is described in Section 2. Section 3 discusses the experiments and results, followed by conclusions in Section 4.

2. HDR Imaging Based on Bidirectional Structural Similarities and Weighted LRMC

In this section, a novel HDR imaging method based on bidirectional structural similarities and weighted LRMC is described. Figure 1 illustrates an overview of the proposed method. Given n differently exposed LDR images {I1, I2, …, In}, each with m pixels, one image Ir (1 ≤ r ≤ n) is chosen as the reference image and the others are the source images. We assume that the n images are globally aligned by applying a global registration method [12]. First, for each source image, we measure the bidirectional structural similarities, FPSS and BPSS. Then, all pixels are classified into four groups: motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions. Because noise could lead to incorrect region detection, we introduce FPSS and BPSS into graph cuts to generate the final motion maps and weight maps, which are integrated into the weighted LRMC model. The low-rank matrix of the weighted LRMC model corresponds to the background irradiance.

2.1. Bidirectional Structural Similarities

In previous methods, FP is usually applied before measuring the similarity between two differently exposed images. As a result, an unsaturated region in the source image that corresponds to a saturated region in the reference image is mistaken for a motion region. The unsaturated region has richer detail than the saturated region. When unsaturated intensities are projected to saturated intensities, the intensity differences are compressed and detail is lost, so the projected region resembles the saturated region. By contrast, when saturated intensities are projected to unsaturated intensities, the compressed intensity differences cannot be recovered, so the structural similarity is low. Therefore, pixels in the saturated regions of the reference image have small FPSS and large BPSS, while pixels in the saturated regions of the source image have small BPSS and large FPSS. Pixels in the motion regions have both small FPSS and small BPSS.

Figure 2 illustrates the structural similarities between two images under FP and BP, where Figure 2(a) is the reference image and Figure 2(b) is the source image. Figure 2(c) is the backward-projected image of (b), and Figure 2(d) is the forward-projected image of (a). Figures 2(e) and 2(f) show the BPSS and FPSS between (a) and (b). The three regions marked with red boxes represent the saturated region in the source image, the saturated region in the reference image, and the motion region, respectively. From Figures 2(e) and 2(f), we can see that region 1 has small BPSS and large FPSS, region 2 has large BPSS and small FPSS, and region 3 has both small BPSS and small FPSS. Based on this observation, each image can be segmented into four groups: motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions.

As stated in [32], patch-based structural similarity is expected to best represent structural similarity. Unlike the previous method, all color channels are considered jointly: we use the color channel with the largest structural change to determine the structural similarity. Therefore, the desired structural similarity between two differently exposed images Ii and Ij is determined by the smallest structural similarity over all color channels, where R, G, and B denote the red, green, and blue channels of a color image, respectively, vark,p(∙) is the variance in the 9 × 9 window around p in channel k, covk,p(∙, ∙) is the corresponding covariance, and c1 is a small constant (set to 0.03) that prevents the numerator and denominator from being zero.
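The per-channel similarity and the min over channels can be sketched as follows. The exact equation is not reproduced here, so we assume the standard SSIM contrast/structure term with the stated constant c1 = 0.03; the function names are ours, and for brevity the statistics are computed over a whole patch rather than sliding 9 × 9 windows.

```python
import numpy as np

C1 = 0.03  # small stabilizing constant; value taken from the paper

def patch_structural_similarity(p1, p2):
    """SSIM-like structural similarity of two aligned single-channel patches.

    Assumed form: (2*cov + C1) / (var1 + var2 + C1); equals 1 for
    identical structure, drops toward/below 0 when structure differs.
    """
    v1, v2 = p1.var(), p2.var()
    cov = ((p1 - p1.mean()) * (p2 - p2.mean())).mean()
    return (2.0 * cov + C1) / (v1 + v2 + C1)

def color_structural_similarity(a, b):
    """Smallest similarity over R, G, B: the channel with the largest change."""
    return min(patch_structural_similarity(a[..., k], b[..., k]) for k in range(3))

rng = np.random.default_rng(0)
base = rng.uniform(0.2, 0.8, size=(9, 9, 3))
same = color_structural_similarity(base, base)          # identical structure
flipped = color_structural_similarity(base, 1.0 - base)  # inverted structure
```

Taking the minimum over channels makes the measure conservative: a structural change confined to one channel still lowers the score.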

For smooth regions, the intensity similarity of all color channels stands in for the structural similarity. A straightforward form of this relationship is employed, where μk,p(∙) is the mean value in the window around p in channel k and c2 is a small constant (set to 0.01) that prevents the numerator and denominator from being zero.

For saturated regions, the intensity differences are compressed. When a saturated region and an unsaturated region have the same intensity difference, the similarity of the saturated region is lower than that of the unsaturated region. Thus, we introduce the well-exposedness [4], which measures how far an intensity is from the saturated intensities, to express this similarity, where f(∙) is the well-exposedness measurement function.
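A common choice for the well-exposedness function, following Mertens et al. [4], is a Gaussian centered at mid-gray; we assume this form (and a product for the pairwise similarity) since the paper's exact definition is not reproduced here.

```python
import math

SIGMA = 0.2  # Gaussian spread; 0.2 is the value used by Mertens et al. [4]

def well_exposedness(z):
    """How far intensity z in [0, 1] is from the saturated extremes:
    close to 1 near mid-gray, close to 0 near 0 or 1."""
    return math.exp(-((z - 0.5) ** 2) / (2.0 * SIGMA ** 2))

def well_exposedness_similarity(z1, z2):
    """Pairwise well-exposedness; the product form is our assumption."""
    return well_exposedness(z1) * well_exposedness(z2)
```

Under this choice, a pixel pair in which either member is near saturation receives a small similarity, which is exactly the discount the text motivates.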

For each source image Ii (1 ≤ i ≤ n, i ≠ r), we define FPSS and BPSS accordingly, where FP(∙) and BP(∙) denote the forward projection and the backward projection based on the histogram projection algorithm, respectively.
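The histogram projection step can be approximated by classical histogram matching. The sketch below (our assumption, not necessarily the paper's exact algorithm) maps each source pixel to the reference value at the same rank, transferring the reference's luminance level while preserving the source's spatial structure.

```python
import numpy as np

def project(src, ref):
    """Map src's intensities onto ref's distribution (histogram matching).

    Each src pixel is replaced by the ref value at the same quantile,
    so the output has ref's histogram but src's spatial layout.
    """
    order = np.argsort(src, axis=None)         # ranks of src pixels
    out = np.empty_like(src)
    out.flat[order] = np.sort(ref, axis=None)  # assign ref values by rank
    return out

dark = np.linspace(0.0, 0.4, 16).reshape(4, 4)    # underexposed image
bright = np.linspace(0.3, 0.9, 16).reshape(4, 4)  # long-exposure image
fp = project(dark, bright)                        # forward projection
```

Because the two synthetic images above have identical pixel orderings, the projection reproduces the bright image exactly; on real pairs only the histogram is transferred.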

2.2. Motion Map and Weight Map Construction

With FPSS and BPSS, the source image is divided into four groups. Let 0, 1, 2, and 3 denote the labels of motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions, respectively. For a pixel p, the probability of belonging to the l-th group is defined in terms of the distance to Cl, the center of the l-th group. As stated in Section 2.1, the centers of motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions are [0,0], [1,0], [0,1], and [1,1], respectively. The initial segmentation then assigns each pixel the label with the maximum probability.
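A minimal sketch of this initial labeling: treat each pixel's (FPSS, BPSS) pair as a point and score it against the four group centers. The Gaussian-of-distance probability below is our assumption; only the centers and the argmax rule come from the text.

```python
import numpy as np

# group centers in (FPSS, BPSS) space: 0 motion, 1 saturated in source,
# 2 saturated in reference, 3 static and unsaturated
CENTERS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def group_probabilities(fpss, bpss):
    """P(l) proportional to exp(-||(FPSS, BPSS) - C_l||^2), normalized
    over the four groups (assumed functional form)."""
    d2 = np.sum((np.array([fpss, bpss]) - CENTERS) ** 2, axis=1)
    p = np.exp(-d2)
    return p / p.sum()

def classify(fpss, bpss):
    """Initial label = group with maximum probability."""
    return int(np.argmax(group_probabilities(fpss, bpss)))
```

For example, a pixel with small FPSS and large BPSS lands nearest [0, 1] and is labeled 2, a saturated region of the reference image.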

Because noise could make the segmentation unreliable, we employ the graph cuts algorithm [33, 34]. The energy function consists of a data term and a smoothness term, where σ is the variance of the whole image.

We use the segmentation results of equation (7) to define the motion maps and the weight maps. First, pixels labeled 0 in the source images are included in the motion regions. Saturated regions may be connected with motion regions; thus, regions labeled 1 or 2 that are connected with motion regions are also treated as motion regions. The remaining regions are regarded as static regions, where the weight maps are based on FPSS and BPSS. For each source image, the weight of each pixel in the static regions labeled 2 or 3 is set to 1, while the weight of each pixel in the static regions labeled 1 is proportional to BPSSp(Ir, BP(Ii)). For the reference image, the weights of all pixels are set to 1, except that the weights in the saturated regions are proportional to the minimum of FPSSp(FP(Ir), Ii) over all source images. The weight map for each pixel is defined accordingly.
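The weighting rules for a source image can be sketched as follows; the connected-component merging of labels 1 and 2 into motion regions is omitted for brevity, and the function name is ours.

```python
import numpy as np

# labels: 0 motion, 1 saturated in source, 2 saturated in reference,
# 3 static and unsaturated
def weight_map(labels, bpss):
    """Per-pixel weights for one source image (sketch of Section 2.2's rules).

    Motion pixels keep weight 0 (excluded); static pixels labeled 2 or 3
    get weight 1; static pixels labeled 1 (saturated in the source) are
    weighted by BPSS, i.e. by how much structure survives projection.
    """
    w = np.zeros_like(bpss)
    w[(labels == 2) | (labels == 3)] = 1.0
    mask = labels == 1
    w[mask] = bpss[mask]
    return w

labels = np.array([[0, 3], [1, 2]])
bpss = np.array([[0.1, 0.9], [0.4, 0.2]])
w = weight_map(labels, bpss)
```

Here the motion pixel gets weight 0, the two reliable static pixels get weight 1, and the source-saturated pixel inherits its BPSS value of 0.4.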

2.3. Weighted LRMC-Based HDR Imaging

Let I = [vec(I1), vec(I2), …, vec(In)] ∈ Rm×n, where vec(∙) reshapes a matrix into a vector. For each image, the corresponding irradiance image Di (i = 1, 2, …, n) is estimated via the camera response function. The irradiance matrix D is represented by the background matrix L, with rank equal to 1, plus the sparse error matrix E. The effective region is the static region constructed by the method discussed in Section 2.2. Within the effective region, the information in the unsaturated regions is more reliable than that in the saturated regions. Therefore, we penalize the sparse error with small weights in the saturated regions and large weights in the unsaturated regions. We propose the weighted LRMC-based HDR imaging model accordingly, where ‖∙‖* is the nuclear norm, ‖E‖0 is the l0-norm of the matrix E, ∘ denotes the elementwise (Hadamard) product, and P(∙) is the projection onto the effective region.

Inspired by Oh et al. [26], the nuclear norm is replaced by the partial sum of the singular values of L, and the l0-norm is replaced by the l1-norm; equation (9) is then rewritten as equation (11).

The optimization of equation (11) can be solved by the augmented Lagrange multiplier method with alternating direction updates. Finally, the resulting HDR irradiance map is obtained by averaging the recovered low-rank matrix L over its columns.
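A simplified sketch of the alternating scheme: with rank(L) fixed at 1, the L-update is a rank-1 SVD truncation and the E-update a weighted soft-thresholding. This illustrates the mechanism only; it is not the authors' ALM solver with the partial sum of singular values, and all names are ours.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise shrinkage, the proximal operator of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rank1_lrmc(D, W, lam=0.1, iters=200):
    """Alternate a rank-1 fit L and a weighted-sparse residual E so that
    L + E approximates D; larger weights in W shrink E harder, forcing L
    to fit the data at those (reliable) pixels."""
    E = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - E, full_matrices=False)
        L = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 projection
        E = soft_threshold(D - L, lam * W)    # weighted shrinkage
    return L, E

# rank-1 background plus one large outlier; uniform weights
D = np.outer(np.arange(1.0, 5.0), np.array([1.0, 2.0, 3.0]))
D[0, 0] += 5.0                                # simulated "motion" pixel
L, E = rank1_lrmc(D, np.ones_like(D), lam=0.5)
```

The outlier ends up absorbed by the sparse term E, while the rank-1 L recovers the consistent background, which is exactly the role the background irradiance map plays in the model.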

3. Experimental Results

In this section, the performance of the proposed method is evaluated both subjectively and objectively by comparison with Oh et al. [26], Hu et al. [18], Ma et al. [32], and Photoshop CS6 on challenging image sets with complex scenes (downloaded from http://user.ceng.metu.edu.tr/∼akyuz/files/eg2015/), where each image set contains five LDR images and the third image is chosen as the reference image for all methods. Oh et al. [26] is the state-of-the-art LRMC-based HDR imaging method; Hu et al. [18] is the most competitive state-of-the-art correction-based HDR imaging method; Ma et al. [32] is the state-of-the-art exposure fusion method; and Photoshop CS6 is commercial software.

For subjective evaluation, Reinhard's tone-mapping method [35] is used to display the HDR images generated by the proposed method and by Oh et al. [26]. Hu et al. [18] outputs a set of latent images, which are merged using the method of Mertens et al. [4]. Ma et al. [32] and Photoshop CS6 directly generate displayable images. All experiments are carried out in Matlab R2016b (64-bit) under Windows 7.

Figure 3 shows the results generated by the proposed method, Oh et al. [26], Hu et al. [18], Ma et al. [32], and Photoshop CS6. Hu et al. [18] and Ma et al. [32] perform well in ghost removal. However, the details of the man inside the cafe are missing, as shown in Figures 3(h) and 3(i). Oh et al. [26] and Photoshop CS6 preserve the details in the dark and bright regions but have problems removing ghosts, as shown in Figures 3(g) and 3(j); in particular, Oh et al. [26] cannot handle the large overlapped region of the man and the sitting women. In contrast, the proposed method successfully removes the ghost and preserves the details in both dark and bright regions.

Figure 4 gives another comparison of the proposed method, Oh et al. [26], Hu et al. [18], Ma et al. [32], and Photoshop CS6. This scene is very complex, with large and irregular overlapped regions across images, making the regions difficult for the user to specify. Ghosts are very obvious in the results of Oh et al. [26] and Photoshop CS6. Hu et al. [18] and Ma et al. [32] generate results completely without ghosts, but the results look unnatural: Figure 4(c) shows that the color around the light is distorted, and Figure 4(d) shows halos around the edges. The proposed method provides the best performance in ghost removal and detail preservation, with no additional artifacts. Similar results can be seen in Figure 5.

Owing to the lack of a ground-truth reference image, we applied a blind image quality assessment index, the HDR image gradient-based evaluator (HIGRADE) [36], to objectively evaluate performance. For the HIGRADE index, a higher value represents higher visual quality. Table 1 shows the HIGRADE scores of the proposed method and the three state-of-the-art methods. In most cases, the proposed method achieves the highest HIGRADE score, indicating that it achieves natural appearance and preserves rich details.


Table 1: HIGRADE scores of the compared methods.

Image sequence    Proposed method    Oh et al. [26]    Hu et al. [18]    Ma et al. [32]
café                  0.4205            −0.5244           −0.0026           0.3531
FastCars              0.5094             0.0388            0.4533           0.5509
Shop2                 0.4830            −0.4278            0.2203           0.2410
Walkpeople            0.5869            −0.1342            0.3153           0.1092
Average               0.5000            −0.2619            0.2466           0.3136

Figure 6 shows the results of the proposed method with different reference images. The performance of the proposed method relies on the motion map and the weight map. As shown in Figures 6(f)–6(i), ghosts are removed successfully and details of the dark and bright regions are preserved. Detail loss on the road appears in Figure 6(j), because the saturated regions in the reference image (Figure 6(e)) are too large.

4. Conclusions and Discussion

In this paper, a novel HDR imaging method based on bidirectional structural similarities and weighted LRMC is proposed. We observe that structural variation is bidirectional in the motion regions and unidirectional in the saturated regions. Therefore, we propose bidirectional structural similarities, comprising FPSS and BPSS, to segment an image into four groups: motion regions, saturated regions in the source image, saturated regions in the reference image, and static and unsaturated regions. The graph cuts algorithm is then employed to suppress noise. Finally, the motion maps and the weight maps based on FPSS and BPSS are introduced into the weighted LRMC-based method. Unlike previous LRMC-based methods, the proposed method requires no user specification of the missing regions.

Experiments are conducted on several challenging image sets with complex scenes, and the proposed method is compared with three current state-of-the-art algorithms and Photoshop CS6. The results show that the proposed method preserves more details in the dark and bright regions while simultaneously removing ghosts. In particular, the proposed method is robust to the choice of reference image.

Data Availability

All data used to support the findings of this study were downloaded from http://user.ceng.metu.edu.tr/∼akyuz/files/eg2015/, which has been referenced within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been supported in part by the National Natural Science Foundation of China (Grant nos. 61562047, 61562048, 61462048, and 61862044), Science and Technology Project of the Education Department of Jiangxi Province (no. 151084), and Science Foundation of Jiujiang University (nos. 2014KJYB029 and 2015LGY831).

References

  1. E. Reinhard, G. Ward, S. Pattanaik, and P. E. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, Morgan Kaufmann, Burlington, MA, USA, 2nd edition, 2005.
  2. P. E. Debevec, “Recovering high dynamic range radiance maps from images,” in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’97), pp. 369–378, New York, NY, USA, 1997.
  3. M. D. Grossberg and S. K. Nayar, “Determining the camera response from images: what is knowable?” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 11, pp. 1455–1467, 2003.
  4. T. Mertens, J. Kautz, and F. V. Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photography,” Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
  5. M. Song, D. Tao, C. Chen et al., “Probabilistic exposure fusion,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, 2012.
  6. K. Ma and Z. Wang, “Multi-exposure image fusion: a patch-wise approach,” in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 1717–1721, Quebec City, Canada, September 2015.
  7. K. Karađuzović-Hadžiabdić, J. H. Telalović, and R. K. Mantiuk, “Assessment of multi-exposure HDR image deghosting methods,” Computers & Graphics, vol. 63, pp. 1–17, 2017.
  8. A. Srikantha and D. Sidibé, “Ghost detection and removal for high dynamic range images: recent advances,” Signal Processing: Image Communication, vol. 27, no. 6, pp. 650–662, 2012.
  9. O. T. Tursun, A. O. Akyüz, A. Erdem, and E. Erdem, “The state of the art in HDR deghosting: a survey and evaluation,” Computer Graphics Forum, vol. 34, no. 2, pp. 683–707, 2015.
  10. A. Eden, M. Uyttendaele, and R. Szeliski, “Seamless image stitching of scenes with large motions and exposure differences,” in Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2498–2505, New York, NY, USA, June 2006.
  11. J. Im, S. Lee, and J. Paik, “Improved elastic registration for removing ghost artifacts in high dynamic imaging,” IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 932–935, 2011.
  12. G. Ward, “Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures,” Journal of Graphics Tools, vol. 8, no. 2, pp. 17–30, 2003.
  13. D. Sidibe, W. Puech, and O. Strauss, “Ghost detection and removal in high dynamic range images,” in Proceedings of the 17th European Signal Processing Conference, pp. 1–5, Glasgow, UK, August 2009.
  14. E. A. Khan, A. O. Akyuz, and E. Reinhard, “Ghost removal in high dynamic range images,” in Proceedings of the International Conference on Image Processing (ICIP), pp. 2005–2008, Atlanta, GA, USA, October 2006.
  15. W. Zhang and W.-K. Cham, “Gradient-directed multiexposure composition,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, 2012.
  16. O. Gallo, N. Gelfand, W.-C. Chen, M. Tico, and K. Pulli, “Artifact-free high dynamic range imaging,” in Proceedings of the 2009 IEEE International Conference on Computational Photography (ICCP), pp. 1–7, San Francisco, CA, USA, April 2009.
  17. P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, “Robust patch-based HDR reconstruction of dynamic scenes,” ACM Transactions on Graphics, vol. 31, no. 6, pp. 1–11, 2012.
  18. J. Hu, O. Gallo, K. Pulli, and X. Sun, “HDR deghosting: how to deal with saturation?” in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1163–1170, Portland, OR, USA, June 2013.
  19. H. Zimmer, A. Bruhn, and J. Weickert, “Freehand HDR imaging of moving scenes with simultaneous resolution enhancement,” Computer Graphics Forum, vol. 30, no. 2, pp. 405–414, 2011.
  20. T. Bouwmans, A. Sobral, S. Javed, S. K. Jung, and E.-H. Zahzah, “Decomposition into low-rank plus additive matrices for background/foreground separation: a review for a comparative evaluation with a large-scale dataset,” Computer Science Review, vol. 23, pp. 1–71, 2017.
  21. T. Bouwmans, S. Javed, H. Zhang, Z. Lin, and R. Otazo, “On the applications of robust PCA in image and video processing,” Proceedings of the IEEE, vol. 106, no. 8, pp. 1427–1457, 2018.
  22. N. Vaswani and P. Narayanamurthy, “Static and dynamic robust PCA and matrix completion: a review,” Proceedings of the IEEE, vol. 106, no. 8, pp. 1359–1379, 2018.
  23. T. H. Oh, J. Y. Lee, and I. Kweon, “High dynamic range imaging by a rank-1 constraint,” in Proceedings of the 2013 IEEE International Conference on Image Processing, pp. 790–794, Melbourne, Australia, September 2013.
  24. A. Bhardwaj and S. Raman, “Robust PCA-based solution to image composition using augmented Lagrange multiplier (ALM),” The Visual Computer, vol. 32, no. 5, pp. 591–600, 2015.
  25. C. Lee, Y. Li, and V. Monga, “Ghost-free high dynamic range imaging via rank minimization,” IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1045–1049, 2014.
  26. T.-H. Oh, J.-Y. Lee, Y.-W. Tai, and I. S. Kweon, “Robust high dynamic range imaging by rank minimization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 6, pp. 1219–1232, 2015.
  27. C. Lee and E. Y. Lam, “Computationally efficient truncated nuclear norm minimization for high dynamic range imaging,” IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4145–4157, 2016.
  28. K. Jacobs, C. Loscos, and G. Ward, “Automatic high-dynamic range image generation for dynamic scenes,” IEEE Computer Graphics and Applications, vol. 28, no. 2, pp. 1–15, 2008.
  29. J. An, S. J. Ha, and N. I. Cho, “Probabilistic motion pixel detection for the reduction of ghost artifacts in high dynamic range images from multiple exposures,” EURASIP Journal on Image and Video Processing, vol. 2014, no. 1, pp. 1–15, 2014.
  30. J. An, S. H. Lee, J. G. Kuk, and N. I. Cho, “A multi-exposure image fusion algorithm without ghost effect,” in Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1565–1568, Prague, Czech Republic, May 2011.
  31. W. Zhang, S. Hu, K. Liu, and J. Yao, “Motion-free exposure fusion based on inter-consistency and intra-consistency,” Information Sciences, vol. 376, pp. 190–201, 2017.
  32. K. Ma, H. Li, H. Yong, Z. Wang, D. Meng, and L. Zhang, “Robust multi-exposure image fusion: a structural patch decomposition approach,” IEEE Transactions on Image Processing, vol. 26, no. 5, pp. 2519–2532, 2017.
  33. V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, pp. 147–159, 2004.
  34. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 377–384, Kerkyra, Greece, September 1999.
  35. E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 267–276, 2002.
  36. D. Kundu, D. Ghadiyaram, A. C. Bovik, and B. L. Evans, “No-reference quality assessment of tone-mapped HDR pictures,” IEEE Transactions on Image Processing, vol. 26, no. 6, pp. 2957–2971, 2017.

Copyright © 2019 Mali Yu and Hai Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

