Advances in Astronomy
Volume 2016 (2016), Article ID 8645650, 12 pages
http://dx.doi.org/10.1155/2016/8645650
Research Article

Realization of High Dynamic Range Imaging in the GLORIA Network and Its Effect on Astronomical Measurement

Stanislav Vítek and Petr Páta

Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, 166 27 Prague, Czech Republic

Received 15 September 2015; Revised 15 February 2016; Accepted 16 February 2016

Academic Editor: Ronald Mennickent

Copyright © 2016 Stanislav Vítek and Petr Páta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The citizen science project GLORIA (GLObal Robotic-telescopes Intelligent Array) is the first free, open-access network of robotic telescopes in the world. It provides a web-based environment where users can do research in astronomy by observing with robotic telescopes and/or by analyzing data that other users have acquired with GLORIA or obtained from other free-access databases. The network of 17 telescopes allows users to control selected telescopes in real time or to schedule more demanding observations. This paper deals with the new opportunities that the GLORIA project provides to teachers and students at various levels of education. At the moment, educational materials have been prepared for events such as a solar eclipse (measuring local atmospheric changes), Aurora Borealis (calculating the height of the Northern Lights), or a transit of Venus (measuring the Earth-Sun distance). Students can learn the principles of CCD imaging, spectral analysis, basic calibration such as dark frame subtraction, or advanced methods of noise suppression. Every user of the network can design their own experiment. We propose an advanced experiment aimed at obtaining astronomical image data with high dynamic range. We also introduce methods of objective image quality evaluation in order to determine how HDR methods affect astronomical measurements.

1. Introduction

The GLORIA (GLObal Robotic-telescopes Intelligent Array) project is targeted at both professional and amateur scientists. While the first group has a particular interest in the scientific content of image data, the second group may also be interested in imaging itself, including imaging with high dynamic range. High dynamic range imaging (HDRi) is today a very active area of research and of the consumer electronics market. HDRi methods, which may be used for both multimedia and scientific image data [1, 2], could help observers discover unexpected attractions in their images. HDRi is primarily used in scenes whose radiance covers a range that exceeds what can be captured in a single film or digital exposure.

Astrophotography would seem to be a natural fit for this technique, since the objects we photograph cover an enormous range of brightness, from the Sun (magnitude −26) to faint nebulae and galaxies (magnitude 10 and beyond). Extended objects like nebulae or galaxies often hide stars that, due to the low dynamic range of the image sensor, disappear in the many times brighter object. This problem can be solved by using a shorter exposure time, but the shape and details of the object itself will then most probably be lost. Another frequently used technique is stacking, which combines frames taken with the same exposure time [3]. The main goal of stacking is to increase the signal-to-noise ratio in order to detect faint sources [4].

Although a high dynamic range image can be captured directly using special, expensive high dynamic range sensors, these sensors are not very suitable for scientific purposes, since the end user has no control over the processes inside the camera. Therefore, various HDR synthesis methods combining multiple low dynamic range (LDR) images of the same scene taken with different exposure times have been introduced over the last couple of years. Among these methods, exposure bracketing [5, 6] is the most widely used, as no hardware modification is required. The LDR frames record different ranges of light intensities and should be acquired within a short period of time to minimize changes in the lighting conditions as well as object movement. Then, provided the radiometric calibration of the nonlinear camera is known, an HDR radiance map is recovered by using the best-exposed information from the different LDR frames.

The paper is organized as follows. Section 2 gives a brief overview of the GLORIA network and the experiments currently available. Section 3 introduces the high dynamic range technique and a method of camera response function estimation. Sections 4 and 5 present the results of measurements on processed images, and finally Section 6 concludes the paper.

2. GLORIA Network

As mentioned in the Introduction, GLORIA aims to become the first free-access global network of robotic telescopes, allowing everybody around the world to do research in astronomy [7]. The project partners put their 17 robotic telescopes (see Figure 1) into this network as a seed, in order to do ICT (Information and Communication Technologies) research toward helping GLORIA grow in the future and continue operating after the life of the project. This should allow all citizens to do astronomical research by giving them free access to a worldwide infrastructure of expensive robotic telescopes and a free database of astronomical images taken by the members of the GLORIA robotic telescope network. Users who love astronomy thus have the opportunity to do research, perhaps generate new knowledge and make it available to everybody, and learn and teach astronomy.

Figure 1: Distribution of the GLORIA telescopes.

With the provided software, anybody can design a web application for conducting research into a specific astronomical issue. Applications can directly access individual components of the telescopes—cameras, focusers, and filter wheels. Citizen scientists may therefore implement advanced methods of image processing, such as the HDRi technique, which is the subject of this paper.

3. High Dynamic Range Imaging in Astronomy

Astronomical image data have very specific features which make common multimedia-oriented HDR methods almost unusable:
(i) The exposure values are typically very high; that is, noise in the image is increased.
(ii) The noise has a significant component based on photon counting (Poisson noise), the signal-to-noise ratio may differ greatly in different regions of the image, and stellar objects may be distorted differently in different frames.
(iii) The image content is affected by atmospheric turbulence and cosmic rays.
(iv) The telescopes (or lenses) tend to have long focal lengths; that is, the photographed stellar object must be well guided during the exposure.
(v) Wide-field lenses tend to have a spatially variant point spread function.
(vi) The typically used 16-bit quantization increases numerical complexity and decreases the efficiency of postprocessing methods.
(vii) The spatial distribution of pixel intensity is nonhomogeneous, and pixels are typically grouped in isolated spots—stellar objects.

3.1. Image Acquisition

HDRi is based on the construction of a radiance map of the imaged scene from a set of frames with different exposures; for a detailed overview see Figure 2. There are many approaches to choosing the exposure sequence [8, 9]. The quality of the radiance map is strongly affected by the choice of LDR frames [10] and by their number and exposure times [11, 12]. Combining the frames leads to better contrast mapping of the scene, and it changes the noise level in the synthesized HDR image.

Figure 2: Process of high dynamic range image creation.

Let us have a set of $N$ images of the same scene obtained with exposures $t_i$, $i = 1, \ldots, N$, that is, each image differing only in its relative exposure ratio $k_i$. Thus, a set of functions

$$f_i(x) = f\big(k_i q(x)\big), \quad i = 1, \ldots, N, \tag{1}$$

where $k_i$ is a scalar constant for each photoquantity $q(x)$, is known as a Wyckoff set [13]. For each image in a Wyckoff set there exists $f_i(x, y)$ representing the pixel at location $(x, y)$ of the image with the $i$th exposure.

The photoquantity $q$ is the measure of the quantity of light integrated on a sensor element of the camera system. The photoquantity is weighted by the sensor response, and thus it accurately contains all the information needed to recover a mapping that is linear-responsive to the quantity of light arriving at the sensor. The mapping $f$ is known as the camera response function.
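To make the notion of a Wyckoff set concrete, the following Python sketch (our addition, not part of the GLORIA tooling) simulates a set of frames $f_i = f(k_i q)$ from a common photoquantity map; the gamma-like response and the exposure constants are illustrative assumptions.

```python
import numpy as np

def make_wyckoff_set(q, ratios, crf=lambda x: np.clip(x, 0.0, 1.0) ** (1 / 2.2)):
    """Simulate a Wyckoff set: frames of one photoquantity map q,
    differing only in the exposure constant k_i, all seen through a
    common (here: gamma-like) camera response function."""
    return [crf(k * q) for k in ratios]

# toy photoquantity map: a faint background with two "stars"
q = 0.001 * np.ones((64, 64))
q[20, 20], q[40, 45] = 0.5, 0.05

frames = make_wyckoff_set(q, ratios=[1, 2, 4, 8])   # constant exposure ratio
for k, f in zip([1, 2, 4, 8], frames):
    print(f"k={k}: brightest pixel {f.max():.3f}")
```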

3.2. Camera Response Function

Let an image $Z$ captured by a digital camera be a nonlinear function of the sensor exposure $X$, defined as the product of the sensor irradiance $E$ and the exposure time $\Delta t$ (see Figure 3). Then, the set of images of the same scene with different exposure values can be described as

$$Z_{ij}^{(k)} = f\big(E_{ij}\,\Delta t_k\big), \tag{2}$$

where $f$ is the camera response function (CRF), $i, j$ are the indexes of a pixel in the image, and $k$ is the index of a frame in the set.

Figure 3: Basic principle of the image formation.

To map pixel values $Z_{ij}^{(k)}$ to irradiance values $E_{ij}$, we need to find the inverse function $f^{-1}$ so that the sensor exposure

$$X_{ij} = f^{-1}\big(Z_{ij}^{(k)}\big) = E_{ij}\,\Delta t_k. \tag{3}$$

If the exposure time $\Delta t_k$ is known, one can obtain the irradiance $E_{ij}$ using (2).

The function $f$ or $f^{-1}$, respectively, may be obtained by precise measurement of the digital camera, for example, using a uniformly illuminated chart containing patches of known reflectance [14]. However, this process is quite complicated and can be performed only under laboratory conditions. Numerous papers have shown that a set of differently exposed images of the same scene contains enough information to recover the response from the images themselves [15, 16].

Mann et al., authors of the papers regarded as seed papers for the area of quantigraphic HDR, assumed that the CRF is a gamma function [8]:

$$f(q) = \alpha + \beta q^{\gamma}, \tag{4}$$

based on the density curve of photographic film. The parameter $\alpha$ can be estimated by taking an image with the lens covered and then subtracting it from the other images. Equation (4) is solved by using comparametric equations [17]; when the function $f$ is monotonic, the comparametric plot can be expressed as a monotonic function $g(f(q)) = f(kq)$ not involving $q$. The idea of comparametric image compositing is widely recognized as a computationally efficient HDRi method [18].
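Under Mann's model, two dark-subtracted frames whose exposures differ by a known ratio $k$ satisfy $f(kq) = k^{\gamma} f(q)$, so $\gamma$ can be read off the comparametric plot. A minimal sketch, assuming two registered frames and a known ratio (the function name and thresholds are our choices):

```python
import numpy as np

def estimate_gamma(z1, z2, k, dark=0.0, lo=0.05, hi=0.95):
    """Estimate the exponent gamma of Mann's response f(q) = alpha + beta*q**gamma
    from two dark-subtracted frames whose exposures differ by a known ratio k.
    After offset removal f(k*q) = k**gamma * f(q), so the comparametric plot of
    z2 against z1 is a straight line in log-log coordinates with slope gamma."""
    a = z1.astype(float) - dark
    b = z2.astype(float) - dark
    # keep only well-exposed, unsaturated pixel pairs
    m = (a > lo * a.max()) & (a < hi * a.max()) & (b > 0) & (b < hi * b.max())
    return np.median(np.log(b[m] / a[m])) / np.log(k)
```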

Grossberg and Nayar showed that the recovery of $f$ exhibits a fractal ambiguity [19]. This ambiguity can only be broken by adopting certain assumptions. Debevec and Malik [20] showed that the reconstruction of the function $f$ depends only on the determination of several functional values of $f^{-1}$. Mitsunaga and Nayar showed that $f^{-1}$ can be modeled by a polynomial [21].

Once the CRF is estimated, the irradiance map can be expressed as

$$E_{ij} = \frac{\sum_{k=1}^{P} w\big(Z_{ij}^{(k)}\big)\, f^{-1}\big(Z_{ij}^{(k)}\big)/\Delta t_k}{\sum_{k=1}^{P} w\big(Z_{ij}^{(k)}\big)}, \tag{5}$$

where $w$ is a weighting function introduced to emphasize the smoothness and fitting terms toward the middle of the CRF curve. The selection of the weighting function will be further discussed in Section 4; a comprehensive overview of various weighting functions may be found in [22].
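A direct transcription of (5) might look as follows; `inv_crf` and the weighting function `w` are assumed to be already estimated (e.g., by the methods of this section), and the guard against zero total weight is our addition:

```python
import numpy as np

def merge_hdr(frames, times, inv_crf, w):
    """Weighted radiance-map recovery following (5): each LDR frame is
    linearized with the inverse CRF, divided by its exposure time, and the
    per-pixel irradiance estimates are averaged with weights w(Z)."""
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros_like(num)
    for z, t in zip(frames, times):
        wz = w(z)
        num += wz * inv_crf(z) / t
        den += wz
    return num / np.maximum(den, 1e-12)   # guard pixels with zero weight
```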

3.3. The Signal-to-Noise Ratio in HDR Imaging

Noise is always present in captured images, and its influence can be described by well-known parameters [23]. The noise and image defects have many components, and they depend on the sensor quality, exposure value, electronic circuits, and the photon noise present in the photon flux from the detected scene. Image acquisition in astronomy is a never-ending compromise among exposure value, noise occurrence, and the imaging of details in a high-contrast scene [24]. The distribution of values can be evaluated by known parameters. The first one is the root mean square error (RMSE) of the $k$th frame from the set:

$$\mathrm{RMSE}_k = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\Big(Z_{ij}^{(k)} - \bar{Z}^{(k)}\Big)^2}, \tag{6}$$

where the symbol $\bar{Z}^{(k)}$ is used for the mean value of the image and $M \times N$ is the image size. The noise influence is described by the signal-to-noise ratio (SNR):

$$\mathrm{SNR}_k = 20\log_{10}\frac{\bar{Z}^{(k)}}{\mathrm{RMSE}_k}\ \text{[dB]}. \tag{7}$$

The SNR of the HDR radiance can be expressed as

$$\mathrm{SNR}_{\mathrm{HDR}} = \frac{A\,E\sum_{k=1}^{P}\Delta t_k}{\sqrt{\sum_{k=1}^{P}\sigma_k^2}}, \tag{8}$$

where $P$ is the number of images in the LDR set, $A$ is the active area of the CCD pixel, $E$ is the radiance, and $\sigma_k^2$ is the variance of the $k$th frame in the set.

These relations can be simplified when a linear camera response function is assumed. For more details about SNR in HDR, see [25, 26].
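For reference, (6) and (7) reduce to a few lines of NumPy; this is a sketch of the metrics as reconstructed above, not code from the paper:

```python
import numpy as np

def rmse(frame):
    """RMSE of a frame with respect to its own mean value, as in (6);
    for a single frame this reduces to its standard deviation."""
    return np.sqrt(np.mean((frame - frame.mean()) ** 2))

def snr_db(frame):
    """Signal-to-noise ratio of a frame in decibels, as in (7)."""
    return 20.0 * np.log10(frame.mean() / rmse(frame))
```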

3.4. Image Aligning

To combine a set of differently exposed images of the same scene into one HDR frame, it is necessary to find the correspondence between pixels in the individual exposures. Basically, aligning two or more images requires choosing one of the images as a reference and finding the geometric transformation that spatially registers the other images to the reference image. Registered images are often convolved with an appropriate kernel to degrade the point spread function of the registered image to match that of the reference image. The convolved images are also scaled to match the intensity and background sky level of the reference image [27]. Registration is typically performed with subpixel accuracy [28]. Phillips and Davis provide details on how the alignment can be done with the IRAF package [29]. There are also recent works on simultaneous HDR reconstruction and optic flow computation [30–32].

For our purposes, we can assume that the single frames in a dataset will be acquired within a relatively short time (i.e., in one observing session or as one batch) using one GLORIA telescope; that is, the PSF will be constant.
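Where registration is nevertheless required, a minimal translational alignment can be sketched with phase correlation, assuming scikit-image and SciPy are available; a real astronomical pipeline would add rotation, PSF matching, and intensity scaling as discussed above:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_to_reference(reference, frames, upsample=20):
    """Register each frame to the reference with subpixel accuracy using
    phase correlation, then resample it onto the reference pixel grid."""
    aligned = []
    for frame in frames:
        offset, _, _ = phase_cross_correlation(reference, frame,
                                               upsample_factor=upsample)
        aligned.append(nd_shift(frame, offset, order=3, mode="nearest"))
    return aligned
```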

3.5. Tone Mapping

Since a reconstructed HDR image has a much larger dynamic range than most display devices, tone mapping operators (TMO) are involved in the process of image reproduction. These operators, global or local, transform the dynamic range of the image to the dynamic range of a particular display device; more about TMO design can be found in [5].
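As a simple illustration, two global operators are sketched below: Reinhard's $L/(1+L)$ compression [5] and an asinh stretch commonly used for astronomical display; the scaling constants are illustrative assumptions:

```python
import numpy as np

def tonemap_global(radiance, method="reinhard", scale=1.0):
    """Two simple global TMOs: Reinhard's L/(1+L) compression and an
    asinh stretch often used for astronomical images. The output is
    normalized to [0, 1] for display."""
    L = scale * radiance / radiance.max()
    if method == "reinhard":
        out = L / (1.0 + L)
    else:  # asinh stretch
        out = np.arcsinh(10.0 * L) / np.arcsinh(10.0)
    return out / out.max()
```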

4. Experiments

For the purpose of measurement, we used the GLORIA interactive experiment with the BART telescope located in Ondřejov, Czech Republic [33]. BART (an acronym for Burst Alert Robotic Telescope) is one of the oldest fully autonomous robotic telescopes in the world—it has been in operation since early 2000. It currently uses a German-type Losmandy Titan mount, which holds two optical telescopes: the narrow-field (NF), a 0.25 m Schmidt-Cassegrain from Meade, and the wide-field (WF), a 0.1 m Maksutov-Cassegrain from Rubinar.

We used the wide-field camera G2-1600 (equipped with the Kodak KAF-1603ME full-frame CCD image sensor) from Moravian Instruments to obtain two sets of LDR images with different exposure times: 32 s, 64 s, 128 s, 256 s, and 512 s. Since there is a constant ratio between successive exposure times, the images can be considered a Wyckoff set. For galaxy M33, see Figure 4; for galaxy M101, see Figure 5. Four LDR frames (32 s, 64 s, 128 s, and 256 s) of both image sets were used for the reconstruction of the HDR image, which was later compared with the 512-second exposure. All single frames were calibrated with proper dark and flat frames.

Figure 4: Galaxy M33 captured with different exposure times (BART, ASU-CAS, Ondřejov, Czech Republic).
Figure 5: Galaxy M101 captured with different exposure times (BART, ASU-CAS, Ondřejov, Czech Republic).
4.1. Camera Response Function Estimation

Pixel selection for the purpose of camera response calculation is a delicate matter. We need at least two differently exposed images; more are better. Selected pixels should meet the following requirements:
(i) Having a good spatial distribution.
(ii) Covering the entire irradiance range in the image as completely as possible.
(iii) Being selected from zones with small irradiance variance.

Some sources state that, for a range of 256 values, typical for a color channel of a multimedia image, it is sufficient to use 50 pixels to obtain an overdetermined system of equations [34]. As mentioned before, a typical astronomical image has 16-bit resolution, and the spatial distribution of its pixel intensities differs from that of multimedia image data. Therefore, due to the large variance of pixel values over the LDR set, we propose choosing multiple patches across the frames, containing stellar objects of different sizes and fluxes, and calculating the CRF as a median. Figure 6 shows the estimation of the function $f^{-1}$ from the observed images using the procedure proposed by Reinhard et al. [5] in comparison to the method proposed by Mann.

Figure 6: (a) CRF estimation from LDR images of M33 and (b) CRF estimation from LDR images of M101.
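For readers who want to reproduce such an estimate, a Debevec-Malik style least-squares recovery of $g = \ln f^{-1}$ [20] is sketched below; binning 16-bit data to 256 levels and running the solver per patch are our assumptions, chosen to match the patch-and-median proposal above:

```python
import numpy as np

def gsolve(Z, ln_t, lam=50.0, n=256):
    """Debevec-Malik style recovery of g = ln f^{-1} over n quantized levels.
    Z: (pixels, frames) integer array with values in [0, n); ln_t: log
    exposure times; lam: smoothness weight. 16-bit data should be binned
    to n levels first (e.g., Z16 >> 8)."""
    w = lambda z: np.minimum(z, n - 1 - z) + 1           # hat weighting
    P, F = Z.shape
    A = np.zeros((P * F + n - 1, n + P))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(P):                                    # data-fitting rows
        for j in range(F):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * ln_t[j]
            k += 1
    A[k, n // 2] = 1.0                                    # fix g(mid) = 0
    k += 1
    for z in range(1, n - 1):                             # smoothness rows
        A[k, z - 1:z + 2] = lam * w(z) * np.array([1.0, -2.0, 1.0])
        k += 1
    return np.linalg.lstsq(A, b, rcond=None)[0][:n]       # ln f^{-1} per level
```

The median-based proposal then amounts to calling `gsolve` once per selected patch and taking `np.median` over the recovered curves.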
4.2. HDR Image Reconstruction

For the reconstruction of the radiance map, we used the procedure described by (5). An important point of the procedure is obviously the choice of the weighting function mentioned in Section 3.2. Debevec and Malik proposed in [20] a hat function that assigns higher weights to middle-gray values, as they are farthest from both underexposed and saturated outputs. Given the nature of astronomical data, an irradiance map obtained with this weighting function is not very useful. For this reason, we used a weighting function based on the standard deviation estimated directly from the LDR frames [35]. We also included a term reducing the significance of saturated pixels.
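Our reading of this weighting scheme can be sketched as follows; `sigma` stands for a per-level noise model estimated from the LDR frames in the spirit of [35], and the smooth saturation roll-off width is an illustrative choice, not the paper's exact term:

```python
import numpy as np

def weight_std(z, sigma, sat_level, soft=0.02):
    """Noise-aware weighting: pixels are weighted by the inverse of their
    estimated variance, and a smooth roll-off suppresses values near the
    saturation level. sigma(z) is a noise model estimated from the LDR
    frames; soft sets the (assumed) width of the roll-off."""
    w = 1.0 / np.maximum(sigma(z) ** 2, 1e-12)
    rolloff = np.clip((sat_level - z) / (soft * sat_level), 0.0, 1.0)
    return w * rolloff
```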

The central part of the final HDR frame can be seen in Figure 7(b), in comparison with an LDR frame captured with an exposure time of 512 s. The galaxy and bright stars have lower brightness in the HDR frame, and the structure of the galaxy is more visible.

Figure 7: Details of M101 galaxy captured at 512 s exposure (a) and reconstructed with high dynamic range (b).
4.3. Stellar Profile Analysis

Methods of objective quality evaluation can be divided into two major groups. The first group consists of classic objective metrics, for example, the SNR or RMSE established in Section 3. The second group consists of methods based on data-evaluation algorithms. These methods include fitting star profiles (measuring the point spread function (PSF) of the stars), aperture photometry, astrometry (position error), or successful detection of the stars.

There are two common functions for fitting star profiles, according to Bendinelli et al. [36]. The goal is to match a star's profile as well as possible with a Gaussian or Moffat profile. If a star were ideal, it would be represented by a small dot. But because of many different distortions, including the application of generally nonlinear postprocessing algorithms, the dot is "blurred" all around, and the star's profile is close to a Gaussian function. The Gaussian function is

$$I(r) = I_0 \exp\left(-\frac{r^2}{2\sigma^2}\right), \tag{9}$$

and the Moffat function is

$$I(r) = I_0 \left[1 + \left(\frac{r}{\alpha}\right)^2\right]^{-\beta}, \tag{10}$$

where $r$ stands for the radial distance from the middle of the star (with maximal brightness $I_0$), $\sigma$ is the standard deviation, $I(r)$ is the brightness at radial distance $r$, and $\alpha$, $\beta$ are the Moffat coefficients. The center of a star usually has a profile closer to the Gaussian function, the more distant parts closer to the Moffat, though. So an ideal fitting function combines Gaussian and Moffat. For evaluation, the parameter FWHM (Full Width at Half Maximum) is typically used, defined as twice the value of $r$ at which $I(r) = I_0/2$.
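A fit of these profiles to a star cutout, and the FWHM derived from the Gaussian fit ($\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma$), can be sketched with SciPy; the cutout and centroid handling here are simplified assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, i0, sigma):
    """Radial Gaussian profile I(r) = I0 * exp(-r^2 / (2 sigma^2)), as in (9)."""
    return i0 * np.exp(-r**2 / (2.0 * sigma**2))

def moffat(r, i0, alpha, beta):
    """Radial Moffat profile I(r) = I0 * (1 + (r/alpha)^2)^(-beta), as in (10)."""
    return i0 * (1.0 + (r / alpha) ** 2) ** (-beta)

def fit_fwhm(cutout, cx, cy):
    """Fit a Gaussian to the radial profile of a star cutout around (cx, cy)
    and return FWHM = 2*sqrt(2*ln 2)*sigma, i.e., twice the r at which the
    profile drops to half its peak value."""
    yy, xx = np.indices(cutout.shape)
    r = np.hypot(xx - cx, yy - cy).ravel()
    i = cutout.ravel().astype(float)
    (i0, sigma), _ = curve_fit(gaussian, r, i, p0=[i.max(), 2.0])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
```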

Figure 9 shows how the high dynamic range reconstruction changes the shape of the stellar objects. Although this change usually does not affect the results of algorithms searching for stellar objects, the HDR algorithm may evidently be further optimized for wide-field astronomical image data. Object ellipticity seems to be statistically unchanged; for details see Figure 10.

Figure 11 shows the distribution of stellar object fluxes. Fluxes grow with longer exposure times, and objects tend to become overexposed. Reconstruction with high dynamic range provides approximately the same number of objects, but the objects are not overexposed; for a detailed analysis of two selected stars, see Figure 8.

Figure 8: Profile of selected stellar objects. Green line for 32 s exposure, red line for 512 s exposure (overexposed), and blue line for HDR recovered shape.
Figure 9: Normalized histograms of objects FWHM (M101).
Figure 10: Normalized histograms of objects ellipticity (M101).
Figure 11: Normalized histograms of objects fluxes (M101).

5. Results

In order to evaluate the obtained high dynamic range frame, various tests were performed. Primarily these are global statistics, which are summarized in Table 1 for the M33 image set and in Table 2 for the M101 image set: the number of objects detected by SExtractor [37], the median of the FWHM values (Gaussian fit), the median of the stellar objects' ellipticity, and the signal-to-noise ratio (SNR).

Table 1: Images of galaxy M33, overall statistics.
Table 2: Images of galaxy M101, overall statistics.
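A scriptable approximation of such statistics can be obtained with the `sep` Python library, which exposes the SExtractor core used in [37]; treating $1 - b/a$ (from the fitted semi-axes) as ellipticity and reusing the SNR definition of (7) are our assumptions:

```python
import numpy as np
import sep  # Python library exposing the SExtractor source-extraction core

def frame_statistics(data):
    """Global statistics in the spirit of Tables 1 and 2: object count,
    median ellipticity, and a simple SNR estimate following (7)."""
    data = data.astype(np.float64)
    bkg = sep.Background(data)                 # spatially varying background
    sub = data - bkg
    objects = sep.extract(sub, thresh=1.5, err=bkg.globalrms)
    ellipticity = 1.0 - objects["b"] / objects["a"]   # from semi-axes a, b
    snr_db = 20.0 * np.log10(data.mean() / data.std())
    return {"n_objects": len(objects),
            "median_ellipticity": float(np.median(ellipticity)),
            "snr_db": float(snr_db)}
```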

Figure 12 shows the relationship between the number of stellar objects found in an image and the signal-to-noise ratio of the image. The effect of the HDRi approach is noticeable—the ratio between the number of detected objects and the SNR is better for the HDR image than for any of the LDR frames.

Figure 12: Relationship between number of stellar objects found in the image and its SNR.

According to (8), the HDR frames of M33 and M101 should each reach a certain theoretical SNR in the ideal case. Although these theoretical values were not achieved, the SNR of the HDR frame is better than the SNR of the 512-second exposure for M33, and likewise the HDR SNR is better than the SNR of the 512-second exposure for M101.

6. Conclusions

High dynamic range imaging presents a new opportunity for users of GLORIA, the global network of robotic telescopes. In this paper, we presented an evaluation of an irradiance map reconstructed from a set of frames with astronomical content. We compared two methods of camera response function estimation and proposed a simple rule for selecting pixels for the estimation from astronomical image data. The results of both methods are very similar; the estimation of the CRF strongly depends on the data present in the dataset. This is mostly because it is impossible to select identical samples from different sets of LDR frames. It should be noted that the small differences between the estimated CRFs have no significant impact; the choice of weighting function has the biggest influence on the quality of the resulting image.

According to the performed tests, high dynamic range reconstruction may have a positive impact on the objective quality of image data. Although HDR reconstruction is a nonlinear operation, the reconstructed images kept their nature in terms of photometric and astrometric properties. Moreover, the SNR of the reconstructed irradiance map is significantly better than the SNR of any LDR image. The structure of the galaxy in the HDR image is clearly visible, while bright stars are not saturated. The distortion of the shape of stellar objects that is evident in the resulting HDR image may be due to the fact that the BART telescope has no guiding, so longer exposures are slightly blurred. Further tests are undoubtedly needed, such as an inspection of noise models in LDR frames and a comparison between the noise ratios of stacked images and HDR-reconstructed images.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was supported by Grant no. 14-25251S "Nonlinear Imaging Systems with Spatially Variant Point Spread Function" of the Czech Science Foundation.

References

1. J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging, vol. 26, John Wiley & Sons, 2011.
2. K. Fliegel, S. Vítek, M. Klíma, and P. Páta, "Open source database of images DEIMOS: high dynamic range and stereoscopic content," in Applications of Digital Image Processing XXXIV, 81351T, Proceedings of SPIE, International Society for Optics and Photonics, September 2011.
3. R. L. White, D. J. Helfand, R. H. Becker, E. Glikman, and W. de Vries, "Signals from the noise: image stacking for quasars in the FIRST survey," The Astrophysical Journal, vol. 654, no. 1, pp. 99–114, 2007.
4. P. Kurczynski and E. Gawiser, "A simultaneous stacking and deblending algorithm for astronomical images," The Astronomical Journal, vol. 139, no. 4, pp. 1592–1599, 2010.
5. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, Morgan Kaufmann, Boston, Mass, USA, 2010.
6. B. Hoefflinger, High-Dynamic-Range (HDR) Vision, Springer, Berlin, Germany, 2007.
7. A. J. Castro-Tirado, F. M. Sánchez Moreno, C. Pérez del Pulgar et al., "The GLObal Robotic telescopes Intelligent Array for e-science (GLORIA)," Revista Mexicana de Astronomía y Astrofísica, vol. 45, pp. 104–109, 2014.
8. S. Mann and R. W. Picard, "On being 'undigital' with digital cameras: extending dynamic range by combining differently exposed pictures," in Proceedings of IS&T, pp. 442–448, 1995.
9. M. Gupta, D. Iso, and S. K. Nayar, "Fibonacci exposure bracketing for high dynamic range imaging," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1473–1480, IEEE, Sydney, Australia, December 2013.
10. M. D. Grossberg and S. K. Nayar, "High dynamic range from multiple images: which exposures to combine," in Proceedings of the ICCV Workshop on Color and Photometric Methods in Computer Vision (CPMCV '03), vol. 3, October 2003.
11. K. Hirakawa and P. J. Wolfe, "Optimal exposure control for high dynamic range imaging," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 3137–3140, Hong Kong, September 2010.
12. B. Guthier, S. Kopf, and W. Effelsberg, "Optimal shutter speed sequences for real-time HDR video," in Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST '12), pp. 303–308, IEEE, Manchester, UK, July 2012.
13. S. Mann, "Joint parameter estimation in both domain and range of functions in same orbit of the projective Wyckoff group," in Proceedings of the International Conference on Image Processing, vol. 3, pp. 193–196, IEEE, Lausanne, Switzerland, September 1996.
14. Y.-C. Chang and J. F. Reid, "RGB calibration for color image analysis in machine vision," IEEE Transactions on Image Processing, vol. 5, no. 10, pp. 1414–1422, 1996.
15. S. De Neve, B. Goossens, H. Luong, and W. Philips, "An improved HDR image synthesis algorithm," in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 1545–1548, IEEE, Cairo, Egypt, November 2009.
16. S. K. Nayar and T. Mitsunaga, "High dynamic range imaging: spatially varying pixel exposures," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), vol. 1, pp. 472–479, IEEE, June 2000.
17. S. Mann, "Comparametric equations with practical applications in quantigraphic image processing," IEEE Transactions on Image Processing, vol. 9, no. 8, pp. 1389–1406, 2000.
18. M. A. Ali and S. Mann, "Comparametric image compositing: computationally efficient high dynamic range imaging," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '12), pp. 913–916, IEEE, Kyoto, Japan, March 2012.
19. M. D. Grossberg and S. K. Nayar, "What can be known about the radiometric response from images?" in Computer Vision—ECCV 2002, pp. 189–205, Springer, Berlin, Germany, 2002.
20. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the ACM SIGGRAPH 2008 Classes (SIGGRAPH '08), p. 31, ACM, Los Angeles, Calif, USA, August 2008.
21. T. Mitsunaga and S. K. Nayar, "Radiometric self calibration," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, IEEE, Fort Collins, Colo, USA, June 1999.
22. M. Granados, B. Ajdin, M. Wand, C. Theobalt, H.-P. Seidel, and H. P. A. Lensch, "Optimal HDR reconstruction with linear digital cameras," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 215–222, IEEE, San Francisco, Calif, USA, June 2010.
23. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2002.
24. B. Anconelli, M. Bertero, P. Boccacci, M. Carbillet, and H. Lanteri, "Iterative methods for the reconstruction of astronomical images with high dynamic range," Journal of Computational and Applied Mathematics, vol. 198, no. 2, pp. 321–331, 2007.
25. N. Barakat, T. E. Darcie, and A. N. Hone, "The tradeoff between SNR and exposure-set size in HDR imaging," in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 1848–1851, IEEE, San Diego, Calif, USA, October 2008.
26. K. Seshadrinathan, S. H. Park, and O. Nestares, "Noise and dynamic range optimal computational imaging," in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP '12), pp. 2785–2788, Orlando, Fla, USA, October 2012.
27. F. J. Masci, D. Makovoz, and M. Moshir, "A robust algorithm for the pointing refinement and registration of astronomical images," Publications of the Astronomical Society of the Pacific, vol. 116, no. 823, pp. 842–858, 2004.
28. D. Gratadour, L. M. Mugnier, and D. Rouan, "Sub-pixel image registration with a maximum likelihood estimator—application to the first adaptive optics observations of Arp 220 in the L' band," Astronomy & Astrophysics, vol. 443, no. 1, pp. 357–365, 2005.
29. A. C. Phillips and L. E. Davis, "Registering, PSF-matching and intensity-matching images in IRAF," in Astronomical Data Analysis Software and Systems IV, vol. 77 of ASP Conference Series, p. 297, ASP, 1995.
30. D. Hafner, O. Demetz, and J. Weickert, "Simultaneous HDR and optic flow computation," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 2065–2070, IEEE, Stockholm, Sweden, August 2014.
31. T. Bengtsson, T. McKelvey, and K. Lindström, "Optical flow estimation on image sequences with differently exposed frames," Optical Engineering, vol. 54, no. 9, article 093103, 2015.
32. H. Q. Luong, B. Goossens, A. Pižurica, and W. Philips, "Joint photometric and geometric image registration in the total least square sense," Pattern Recognition Letters, vol. 32, no. 15, pp. 2061–2067, 2011.
33. BART telescope, GLORIA project, September 2015, http://gloria-project.eu/tag/bart-en/.
34. A. M. Sá, P. C. Carvalho, and L. Velho, "High dynamic range image reconstruction," Synthesis Lectures on Computer Graphics and Animation, vol. 2, no. 1, pp. 1–54, 2008.
35. Y. Tsin, V. Ramesh, and T. Kanade, "Statistical calibration of CCD imaging process," in Proceedings of the 8th IEEE International Conference on Computer Vision (ICCV '01), vol. 1, pp. 480–487, Vancouver, Canada, July 2001.
36. O. Bendinelli, G. Parmeggiani, and F. Zavatti, "CCD star images: on the determination of Moffat's PSF shape parameters," Journal of Astrophysics and Astronomy, vol. 9, no. 1, pp. 17–24, 1988.
37. E. Bertin and S. Arnouts, "SExtractor: software for source extraction," Astronomy and Astrophysics Supplement Series, vol. 117, no. 2, pp. 393–404, 1996.