International Journal of Computer Games Technology
Volume 2014, Article ID 308629, 13 pages
Research Article

Foggy Scene Rendering Based on Transmission Map Estimation

School of Information Science and Engineering, Central South University, Changsha, Hunan 410083, China

Received 4 May 2014; Revised 3 September 2014; Accepted 29 September 2014; Published 19 October 2014

Academic Editor: Mark Green

Copyright © 2014 Fan Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Realistic rendering of foggy scenes is important in game development and virtual reality. Traditional methods have many parameters to control or require a long time to compute, and they are usually limited to depicting a homogeneous fog without considering scenes with heterogeneous fog. In this paper, a new rendering method based on transmission map estimation is proposed. We first generate a Perlin noise image as the density distribution texture of heterogeneous fog. Then we estimate the transmission map using a Markov random field (MRF) model and a bilateral filter. Finally, the virtual foggy scene is realistically rendered with the generated Perlin noise image and the transmission map according to the atmospheric scattering model. Experimental results show that the rendered results of our approach are quite satisfactory.

1. Introduction

Visualization of atmospheric phenomena is an important use of computer graphics, and among these phenomena, fog is one of the most common elements in game scenes and virtual reality. Realistic fog greatly improves the realism of virtual scenes and adds attractiveness to the generated scenes. Thus, simulated fog finds many applications in computer games, virtual reality, special effects in movies and TV, and so forth.

Real-time simulation is usually limited to depicting a homogeneous fog with a constant density distribution. In the natural world, however, the phenomenon is more complex: fog is a participating medium made of many nonhomogeneous layers moving with the wind and undergoing turbulent motion. This paper therefore discusses the problem of fog simulation and visualization, with special emphasis on generating heterogeneous fog in real time using image processing methods.

The organization of this paper is as follows. We begin by reviewing existing work on foggy scene simulation. In Section 3 we introduce the atmospheric scattering model and Perlin noise. In Section 4 we propose the foggy scene rendering algorithm based on the estimated transmission map. In Section 5 we present some experimental results. Finally, in Section 6 we give some concluding remarks.

2. Related Works

There are several related works concerning the simulation of foggy scenes. Existing methods can be divided into two categories: software-based methods and model-based methods. Software-based methods generate rendering effects using software tools such as Photoshop, 3ds Max, and OpenGL. Although these tools can convert an input no-fog image into a virtual foggy image, the procedure has many steps, each requiring user involvement, which makes it hard to control and demands good skills.

A more feasible and effective rendering approach is the model-based method. Modeling the physical mechanism of fog makes it possible to simulate foggy conditions. Max [3] introduced a single scattering model for light diffusion to generate haze in the atmosphere. Yamamoto et al. [4] proposed a fast rendering method for atmospheric scattering effects using graphics hardware. Jackèl and Walter [5] simulated Mie scattering in 1997, and Nishita et al. [6] used an improved sky illumination to render fog scenes. But this approach of calculating the scattering effect for air particles is quite time-consuming. Sun et al. [1] proposed a practical analytic method to render single scattering in fog, mist, and haze in real time. Wang et al. [7] proposed a sky light model considering atmospheric scattering and refraction; based on the model, various sky scenes, including foggy days and other conditions, are realistically rendered in real time. Cao et al. [2] presented a new atmospheric point spread function to simulate foggy scenes for natural images. However, the above methods have many parameters to control, owing to the complex physical models they use, or require a number of advanced computational steps and a long time to compute. To reduce the algorithm parameters and computational complexity, a simple and elegant model proposed by Koschmieder [8, 9] is applied to solve the foggy scene simulation problem. Zdrojewska [10] used the Koschmieder model (also called the atmospheric scattering model) to synthesize the fog phenomenon with hardware acceleration in the form of a GPU. Dong et al. [11] proposed an algorithm to simulate the foggy scene for a real image: the method first segments the input image into several semantic regions, then estimates depths for these regions, and synthesizes virtual fog using the Koschmieder model.

For the rendering of a foggy image based on the Koschmieder model, the most important step is to estimate the transmission map of the model. Besides the color, texture, and geometric position used in Dong's method [11], there are many other features that can be taken as depth cues for transmission map estimation, including focus/defocus, motion parallax, relative height/size, and texture gradient. For example, Hoiem et al. [12] generated maps using machine learning algorithms, with the transmission values of scene and object assigned according to the trained classes. Park and Kim [13] determined the distance from the focal plane by using the blurriness of low depth-of-field in optical physics. Jung et al. [14] assigned transmission maps by using edge information and prior depth knowledge. Han and Hong [15] generated the transmission map employing both vanishing points and superpixels as geometric and texture cues. However, the above methods are unreliable when the selected cues are weak in an input image. Yang et al. [16] generated a depth map using a local depth hypothesis based on the structural information of the input image and salient regions, but this method requires user interaction.

Generating a transmission map from a single image is an ill-posed problem: not all depth cues can be retrieved from an image. To overcome this challenge, we present a novel method to automatically generate a pseudo transmission map which reflects the depth information in the scene of the input image. Based on the estimated map, a virtual foggy scene can be simulated for both game scenes and natural images. To improve the realism of the simulated phenomenon, Perlin noise is also introduced in our method to generate heterogeneous fog. Experimental results indicate that the proposed algorithm generates realistic rendered results.

3. Physical Characters of Foggy Scenes

To render the foggy scene, we first analyze the physical characters of a foggy image, including the model that describes the formation of a foggy image and the noise used for generating heterogeneous fog.

3.1. Atmospheric Scattering Model

Light passing through a scattering medium is attenuated and redistributed to other directions. This can happen anywhere along the path and leads to a combination of radiances incident towards the camera, as shown in Figure 1. The atmospheric scattering model widely used to describe the formation of a foggy image is as follows [8, 9]:

    I(x) = J(x) t(x) + A (1 − t(x)),    (1)

where this equation is defined on the three RGB color channels. I(x) stands for the observed image, A is the airlight color vector, and J(x) is the surface radiance vector at the intersection point of the scene and the real-world ray corresponding to the pixel x. t(x) is called the transmission map and expresses the relative portion of light that manages to survive the entire path between the observer and a surface point in the scene without being scattered. Theoretically, the goal of fog scene rendering is to calculate I(x) from the no-fog image J(x), the estimated transmission t(x), and the airlight A.

Figure 1: Scattering of light by atmospheric particles.
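The per-channel compositing in (1) can be sketched in a few lines; the following is a minimal illustration in Python with NumPy, not the paper's implementation, and the array layout (H × W × 3, values in [0, 255]) is our assumption:

```python
import numpy as np

def apply_fog(J, t, A=255.0):
    """Composite fog onto a no-fog image J via the atmospheric
    scattering model I = J*t + A*(1 - t), applied per RGB channel.
    J: HxWx3 float array in [0, 255]; t: HxW transmission in [0, 1]."""
    t3 = t[..., None]                 # broadcast t over the color channels
    return J * t3 + A * (1.0 - t3)

# toy example: a 2x2 gray image with transmission falling off with "depth"
J = np.full((2, 2, 3), 100.0)
t = np.array([[1.0, 0.5],
              [0.25, 0.0]])
I = apply_fog(J, t)   # t=1 keeps the surface color; t=0 gives pure airlight
```

Note how the two limiting cases behave: where t(x) = 1 the observed pixel equals the surface radiance, and where t(x) = 0 it equals the airlight.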
3.2. Perlin Noise for Generating Heterogeneous Fog

Natural phenomena usually do not change in regular ways but are characterized by a large degree of randomness. This feature is also present in fog, which is a volume object of variable density, taking irregular shapes due to wind and air turbulence. Because of that, a random noise function seems like a good candidate to help simulate it. The argument to the function is the two- or three-dimensional coordinates of a point in space, and the result is a pseudorandom value of the fog's density at those coordinates. Noise generated by such a function has a high frequency and hence displays rapid changes between contrasting values, which is not typical for fog density distribution. Therefore, fog should rather be modeled with smooth noise created by interpolating a sequence of random value samples. A method of generating such noise was proposed by Perlin in [17].

The noise function is characterized by two properties: amplitude and frequency. As seen in Figure 2(a), amplitude is the difference between the maximum and minimum noise values, and frequency is the reciprocal of the distance between noise samples. The main ideas behind simulating the fog phenomenon are to adjust the atmospheric scattering coefficient of the transmission map in the atmospheric scattering model and to add several noises with various frequencies and amplitudes together using Perlin's turbulence texture (see Figure 2(b)).

Figure 2: Perlin noise for generating foggy scene. (a) Amplitude and frequency of the noise function. (b) Perlin’s turbulence texture.
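The octave-summation idea above can be sketched as follows. This is a simplified value-noise turbulence (bilinearly interpolated random lattices, each octave doubling the frequency and halving the amplitude), which approximates Perlin's gradient-noise turbulence but is not identical to it; all function names are ours:

```python
import numpy as np

def smooth_noise(h, w, freq, rng):
    """Value noise: random samples on a coarse (freq x freq) lattice,
    bilinearly interpolated up to an h x w image."""
    lattice = rng.random((freq + 1, freq + 1))
    ys = np.linspace(0, freq, h, endpoint=False)
    xs = np.linspace(0, freq, w, endpoint=False)
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    a = lattice[np.ix_(y0, x0)]
    b = lattice[np.ix_(y0, x0 + 1)]
    c = lattice[np.ix_(y0 + 1, x0)]
    d = lattice[np.ix_(y0 + 1, x0 + 1)]
    top = a * (1 - fx) + b * fx
    bot = c * (1 - fx) + d * fx
    return top * (1 - fy) + bot * fy

def turbulence(h, w, octaves=5, seed=0):
    """Sum octaves: each doubles the frequency and halves the amplitude,
    giving the low-frequency mass plus fine detail typical of fog."""
    rng = np.random.default_rng(seed)
    out = np.zeros((h, w))
    amp, freq = 1.0, 2
    for _ in range(octaves):
        out += amp * smooth_noise(h, w, freq, rng)
        amp *= 0.5
        freq *= 2
    return out / out.max()   # normalize to [0, 1]

tex = turbulence(64, 64)     # density distribution texture
```

The normalization to [0, 1] makes the texture directly usable as a density weight later in the pipeline.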

4. Rendering of Foggy Scenes

4.1. Algorithm Flowchart

Specifically, the proposed algorithm has three steps to render a foggy scene: the first one is generating Perlin noise image as the density distribution texture of heterogeneous fog. The second step is computing the transmission map with the Markov random field (MRF) model and the bilateral filter. The goal of this step is assigning the accurate pixel label using the graph-cut based α-expansion and removing the redundant details using the bilateral filter. Finally, with the generated Perlin noise image and transmission map, the foggy scene can be rendered according to the atmospheric scattering model. The flowchart of the proposed method is depicted in Figure 3.

Figure 3: Flowchart of the algorithm.
4.2. Transmission Map Estimation

The estimation of the transmission map is the most important step in foggy scene rendering and consists of image segmentation, initial map estimation based on MRF, and map refinement using a bilateral filter.

4.2.1. Image Segmentation

The transmission map describes the portion of the light that is not scattered and reaches the camera. Since the map is a continuous function of depth, it reflects the depth information in the scene. Koenderink [18] experimentally measured the human ability to infer depth from an image, showing that people cannot determine the relative depth of two points unless some visible and monotonic surface connects them. Therefore, an image segmentation technique is used here for estimating the transmission map and enhancing our knowledge of the image structure. The advantage of this technique is that it can often group large homogeneous regions of the image together while dividing heterogeneous regions into many smaller parts.

The mean-shift (MS) algorithm, a classical image segmentation technique, is a robust feature-space analysis approach. It can significantly reduce the number of basic image entities, and due to its good discontinuity-preserving filtering characteristic, the salient features of the overall image are retained. Besides, it is particularly well suited to partitioning images in which only a few distinct regions represent different scene elements such as sky, mountain, building, lake, and animal, whereas other information within a region is often less important and can be neglected [19]. All these features are very useful for acquiring the relative depth information of scene objects. Thus, the MS method is used as the first step of transmission map estimation. Figure 4 shows an example of an image segmentation result obtained by the MS method.

Figure 4: Image segmentation result. (a) Original image. (b) Its corresponding segmentation result.
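To make the feature-space clustering idea concrete, here is a toy 1-D mean shift over gray intensities in pure NumPy. This is our simplification for illustration only: the actual method of [19] clusters joint spatial-range features and is far more scalable (the brute-force distance matrix below is O(n²)):

```python
import numpy as np

def mean_shift_modes(values, bandwidth=0.1, iters=30):
    """1-D mean shift with a flat kernel: every point repeatedly moves
    to the mean of the data values within `bandwidth` of it, climbing
    to the nearest density mode."""
    pts = values.astype(float).copy()
    for _ in range(iters):
        diff = np.abs(values[None, :] - pts[:, None])
        mask = diff < bandwidth
        pts = (mask * values[None, :]).sum(axis=1) / mask.sum(axis=1)
    return pts

def segment(gray, bandwidth=0.1):
    """Label each pixel by the density mode its intensity converges to;
    modes closer than half a bandwidth are merged into one region."""
    flat = gray.ravel()
    modes = mean_shift_modes(flat, bandwidth)
    centers, labels = [], np.empty(flat.shape, dtype=int)
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if abs(m - c) < bandwidth / 2:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels.reshape(gray.shape), centers

# two clear intensity clusters -> two regions
gray = np.array([[0.10, 0.12],
                 [0.90, 0.88]])
labels, centers = segment(gray, bandwidth=0.2)
```

The key property used later is exactly the one exploited by the paper: pixels with similar features end up in the same region, so a region can be treated as one depth unit.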
4.2.2. Initial Map Estimation Using MRF

After obtaining the segmentation result, we use the graph-cut based α-expansion method to estimate the map, as it is able to handle regularization and optimization problems and has a good track record with vision-specific energy functions [20]. Specifically, each element p of the transmission map is associated with a label l_p, where the set of labels L represents the transmission values. Before labeling, we first convert the input RGB image to a gray-level image. Thus, the number of labels is 32, since the labeling unit of pixel value is set to be 8 and 256/8 = 32. The most probable labeling minimizes the associated energy function:

    E(l) = Σ_{p∈P} D_p(l_p) + Σ_{(p,q)∈N} V_{pq}(l_p, l_q),    (2)

where P is the set of pixels in the unknown transmission map and N is the set of pairs of pixels defined over the standard four-connected neighborhood. The unary function D_p(l_p) is the data term representing the possibility of pixel p having the transmission associated with label l_p. The smooth term V_{pq}(l_p, l_q) encodes the expectation that neighboring pixels should have similar depth.

For the data function D_p(l_p), which represents the possibility of pixel p having the transmission associated with label l_p, we first convert the input RGB image to a gray-level image and then compute the absolute difference between each pixel value and the label value. The process can be written as

    D_p(l_p) = θ |I_g(p) − v(l_p)|.    (3)

In (3), I_g(p) is the intensity of pixel p in the gray-level image I_g, and v(l_p) is the gray value associated with each element of the label set L. The parameter θ is introduced to ensure that D_p and V_{pq} have the same order of magnitude.

The smooth function V_{pq}(l_p, l_q) encodes the expectation that neighboring pixels should have similar depth. Inspired by the work in [21], we use a linear cost function, which is solved by α-expansion:

    V_{pq}(l_p, l_q) = λ |l_p − l_q|.    (4)

According to our daily experience, objects which appear closer to the top of the image are usually farther away. Thus, if we consider two pixels p and q, where p is directly above q, we have d(p) ≥ d(q). We can therefore deduce that the transmission of pixel p must be less than or equal to the transmission of pixel q, that is, t(p) ≤ t(q). For any pair of labels which violates this trend, a cost κ can be assigned to punish the pattern. Thus, the smooth function in (4) can be written as

    V_{pq}(l_p, l_q) = λ |l_p − l_q| if l_p ≤ l_q,  and  κ otherwise.    (5)

The parameters λ and κ are used to control the aspect of the rendering effect. The value of λ controls the strength of the image detail display and is usually set to 0.01; the cost κ controls the strength of the image color display and is usually set to 100. Besides, the weights associated with the graph edges should be determined: if the difference between two neighboring pixel values in the input image is less than 15, the two pixels have a high possibility of sharing the same transmission value, and the cost of such a labeling is increased by a factor of 15 to minimize artifacts due to depth discontinuities. Substituting the data function (3) and the smooth function (5) into the energy function (2), the pixel labels of the transmission map can be estimated using graph-cut based α-expansion. In our method, the gco-v3.0 library [22] developed by O. Veksler and A. Delong is adopted for optimizing multilabel energies via α-expansion; it supports energies with any combination of unary, pairwise, and label cost terms [20, 23]. Thus, we use the library to estimate each pixel label in the initial transmission map. The pseudocode of the estimation process using the gco-v3.0 library is presented in Pseudocode 1.

Pseudocode 1: The pseudocode of the label assignment using gco-v3.0 library.

In Pseudocode 1, H and W are the height and width of the input image. By using the functions (e.g., GCO_SetDataCost, GCO_SetSmoothCost, and GCO_GetLabeling) defined in the optimization library, we can obtain each pixel label l_p. Then, a proper intensity value of the initial transmission map can be assigned to each image pixel. Specifically, for each label l_p, the assigned intensity is 8 l_p, that is,

    t(p) = 8 l_p / 255    (6)

after normalization to [0, 1].
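The paper optimizes the energy with gco-v3.0's α-expansion; as a self-contained stand-in, the sketch below builds data and smoothness costs following our reading of (3)-(5) (label gray value 8·l, λ = 0.01, κ = 100 are taken from the text; the ICM sweeps are a much weaker optimizer than α-expansion and are used here only so the example runs without the library):

```python
import numpy as np

N_LABELS = 32          # labels 0..31, each covering 8 gray levels
LAM = 0.01             # linear smoothness weight (lambda in eq. (4))
KAPPA = 100.0          # penalty when the top-down ordering is violated

def data_cost(gray):
    """D_p(l) = |I_g(p) - v(l)| with v(l) = 8*l (assumed label value)."""
    v = 8.0 * np.arange(N_LABELS)
    return np.abs(gray[..., None] - v)        # H x W x N_LABELS

def vert_cost(l_up, l_down):
    """Vertical neighbors: linear cost if the upper pixel's transmission
    label does not exceed the lower one's, KAPPA otherwise (eq. (5))."""
    return LAM * abs(l_up - l_down) if l_up <= l_down else KAPPA

def icm(gray, sweeps=5):
    """Greedy ICM sweeps as a stand-in for alpha-expansion."""
    D = data_cost(gray)
    labels = D.argmin(axis=2)                 # init from the data term
    H, W = gray.shape
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                best, best_e = labels[y, x], np.inf
                for l in range(N_LABELS):
                    e = D[y, x, l]
                    if y > 0:     e += vert_cost(labels[y - 1, x], l)
                    if y < H - 1: e += vert_cost(l, labels[y + 1, x])
                    # horizontal neighbors: plain linear cost (eq. (4))
                    if x > 0:     e += LAM * abs(labels[y, x - 1] - l)
                    if x < W - 1: e += LAM * abs(l - labels[y, x + 1])
                    if e < best_e:
                        best, best_e = l, e
                labels[y, x] = best
    return labels

# dark (far) top row, bright (near) bottom row
gray = np.array([[16.0, 16.0],
                 [240.0, 240.0]])
labels = icm(gray)
```

On this toy image the data term alone already respects the top-down ordering (label 2 above label 30), so the smoothness term leaves it unchanged.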

In Figure 5, we show a synthetic example where the image consists of 5 gray-level regions. The image is accurately segmented into 5 label regions by the proposed MRF method, with the 5 labels represented by 5 intensity values, as shown in Figure 5(b).

Figure 5: Synthetic example. (a) Input gray-level image. (b) Output multilabel image.

The MRF-based algorithm can also be applied to estimate the initial transmission map for real-world images. An illustrative example is shown in Figure 6: Figure 6(b) shows the initial transmission map estimated using the algorithm presented above, and its corresponding rendered result is shown in Figure 6(c). One can clearly see that the fog-like feeling given by the simulated foggy image is not very strong, owing to the redundant image details.

Figure 6: Example of real world image. (a) Input image. (b) Initial transmission map. (c) Rendered result obtained using (b). (d) Bilateral filter to (b). (e) Rendered result obtained using (d).
4.2.3. Refined Map Estimation Using Bilateral Filter

As shown in Figure 6, there is an obvious deficiency in the rendered image at the discontinuities of the transmission map obtained by the MRF model. For example, the trees or mountains should have the same depth values; however, as shown in Figure 6(b), one can clearly see different intensity layers of trees or mountains in the transmission map estimated by the MRF-based algorithm. To handle these discontinuities, we apply the bilateral filter in our algorithm, since this filter can smooth images while preserving edges [24]. The redundant details of the estimated transmission map can thus be effectively removed, which improves the rendered result with a better fog-like feeling. This process can be written as

    t̂(p) = (1 / W(p)) Σ_{q∈Ω(p)} f_s(‖p − q‖) f_r(|t(p) − t(q)|) t(q),    (7)

where t(p) is the initial transmission map value at pixel p and Ω(p) is the neighborhood of p. The spatial-domain similarity function f_s is a Gaussian with standard deviation σ_s, the intensity similarity function f_r is a Gaussian with standard deviation σ_r, and the normalization factor is defined as W(p) = Σ_{q∈Ω(p)} f_s(‖p − q‖) f_r(|t(p) − t(q)|). In our experiments, the values of σ_s and σ_r are set to 3 and 0.4, respectively. We thus obtain the final refined transmission map, shown in Figure 6(d); Figure 6(e) is the rendered result obtained using the refined map. From Figure 6(e), one can see that the rendered result obtained with the bilateral filter has a better fog-like feeling than the result obtained without it (Figure 6(c)). However, the refined transmission map is in fact a pseudo depth map rather than a recovery of real depth information: it reflects the relative positions between scene objects and their neighboring regions. In our experiments we find that the pseudo depth map produces realistic foggy scenes without creating significant errors in the rendered image.
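The edge-preserving smoothing step can be sketched as a brute-force bilateral filter; a minimal illustration (the wrap-around of np.roll at the borders is a simplification of proper boundary handling):

```python
import numpy as np

def bilateral(t, sigma_s=3.0, sigma_r=0.4, radius=None):
    """Bilateral filtering of a transmission map t in [0, 1]:
    spatial Gaussian (sigma_s) times range Gaussian (sigma_r),
    normalized per pixel. Brute force; fine for small maps."""
    if radius is None:
        radius = int(3 * sigma_s)
    out = np.zeros_like(t)
    norm = np.zeros_like(t)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(t, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                 * np.exp(-((t - shifted) ** 2) / (2 * sigma_r ** 2)))
            out += w * shifted
            norm += w
    return out / norm

flat = bilateral(np.full((8, 8), 0.5), sigma_s=1.0)   # flat map unchanged
step = np.zeros((8, 8))
step[:, 4:] = 1.0
filtered = bilateral(step, sigma_s=1.0, sigma_r=0.1)  # sharp edge survives
```

The range Gaussian is what distinguishes this from plain Gaussian blurring: pixels across a large intensity jump get near-zero weight, so depth discontinuities between regions are preserved while stair-step artifacts inside a region are smoothed away.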

4.3. Fog Scene Synthesis

Since we now know the input no-fog image J(x) and the final refined transmission map t(x), and the airlight A is set to 255, we can obtain the simulated foggy image according to the atmospheric scattering model (see (1)). The simulation result is calculated by

    I_F(x) = J(x) t(x) + A (1 − t(x)).    (8)

In (8), the value of the extinction coefficient β is set to 0.1, and the transmission map combines the geometric distance d(x) and the medium extinction coefficient β (the net loss from scattering and absorption) into a single variable [25]:

    t(x) = e^{−β d(x)}.    (9)

Experimental results show that simulated foggy scenes with different fog densities can be created by adjusting the coefficient β with a parameter k in (8), that is, t(x) = e^{−kβd(x)}. However, using the atmospheric scattering model on the input no-fog image, we can only generate rendered results with a homogeneous fog added, whereas fog is not always perfectly homogeneous in real situations. Therefore, Perlin noise is also introduced in our method to generate heterogeneous fog, which appears more natural to human visual perception. This process can be written as

    F(x) = I_F(x) + ω N(x).    (10)

In (10), I_F(x) is the rendered result obtained by (8), N(x) is Perlin's noise, and F(x) is the simulated foggy scene with heterogeneous fog. The parameter ω is used to control the appearance of Perlin's turbulence texture in our rendered result; we fixed it to 0.15 for all the results reported in this paper. An illustrative example is shown in Figure 7: Figure 7(a) is the input no-fog image, Figure 7(b) is the transmission map estimated by the proposed method, Figure 7(c) is the simulated foggy image with uniform fog, and Figure 7(d) is the rendered heterogeneous fog result. One can clearly see that the simulated result in Figure 7(d) is more natural.

Figure 7: Example of fog scene synthesis. (a) Input no-fog image. (b) Our refined transmission map. (c) Generated foggy scene with homogeneous fog. (d) Generated foggy scene with heterogeneous fog.
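Putting the synthesis steps together, a hedged sketch: the k-scaling enters as t^k (since t = e^{−βd} implies e^{−kβd} = t^k), and the additive noise blend with weight ω is our reading of (10); names and the clipping to [0, 255] are our assumptions:

```python
import numpy as np

def render_fog(J, t, k=1.0, A=255.0, omega=0.15, noise=None):
    """Eq. (8) with the extinction scaled by k (t**k, since
    t = exp(-beta*d)), then an omega-weighted noise texture added
    for heterogeneous fog (our assumed reading of eq. (10))."""
    tk = t ** k
    t3 = tk[..., None]
    F = J * t3 + A * (1.0 - t3)           # homogeneous fog, eq. (8)
    if noise is not None:
        F = F + omega * noise[..., None]  # heterogeneous fog, eq. (10)
    return np.clip(F, 0.0, 255.0)

J = np.full((2, 2, 3), 100.0)
t = np.array([[1.0, 0.5],
              [1.0, 0.5]])
homog = render_fog(J, t)          # k = 1: plain eq. (8)
dense = render_fog(J, t, k=2.0)   # larger k -> smaller t^k -> denser fog
```

Raising t to the power k rather than recomputing e^{−kβd} lets the same refined transmission map drive every fog density, which is why one estimation pass suffices for all the weather conditions discussed in Section 5.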

5. Experimental Results

Our proposed rendering method works well for a wide variety of game scenes and real-world images. In the experiments, we implemented the algorithm in Matlab and ran it on a PC with a 3.00 GHz Intel Pentium Dual-Core processor.

5.1. Parameter Adjustment for Different Foggy Conditions

Since the parameter k used to change the value of the medium extinction coefficient controls the fog density of the simulated scene, we can obtain the desired rendered foggy image by tuning this parameter. Figure 8 shows some fog simulation results with different values of k: the larger the value of k is, the denser the fog will be.

Figure 8: Simulated foggy images with different values of k.
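The role of k can be made concrete with a small numeric sketch (β = 0.1 as in Section 4.3; the distances are the target distances mentioned for the benchmark scene of Figure 9):

```python
import numpy as np

def transmission(d, k, beta=0.1):
    """t(x) = exp(-k * beta * d): k scales the extinction coefficient."""
    return np.exp(-k * beta * d)

d = np.array([33.0, 80.0, 200.0])    # target distances in meters
t_light = transmission(d, 0.4)       # thin fog: targets still visible
t_mid   = transmission(d, 1.0)
t_heavy = transmission(d, 20.0)      # dense fog: transmission ~ 0
```

Because the transmission decays exponentially in k·d, even the nearest target effectively vanishes under the largest k, matching the qualitative behavior reported for Figure 8.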

Therefore, by changing the value of β through the fog density factor k, it is possible to control the visibility distance or the global field of view, which decreases as k rises, and vice versa. Table 1 gives the ten levels of the visibility grades and their corresponding extinction coefficient values widely used in meteorology [26].

Table 1: International visibility grades with their medium extinction coefficients.

From Table 1, one can clearly see that different foggy weather conditions are related to different visibility distances and extinction coefficients. Thus, the value of the parameter k can be selected according to the extinction coefficient values of the ten visibility grades to produce different foggy effects. To verify the effectiveness of the proposed method, a virtual scene in clear weather (see Figure 9(a)) is used as a benchmark image, containing targets at different distances, such as 33 m, 80 m, and 200 m. As can be seen in Figure 9, these targets gradually become blurred as the value of k increases from 0.4 to 20. Since different visibility distances correspond to different foggy weather according to the international visibility grades shown in Table 1, we can simulate various fog effects by selecting proper parameter values. Table 2 gives the parameter values and the corresponding weather conditions for Figure 9. Note that the intrinsic and extrinsic parameters of the camera that created the virtual scene are unknown for the transmission map generation, so the distance values of the targets shown in Figure 9 only reflect the relative relationship of visibility. Thus, the weather type of the simulated scene is mainly determined by our subjective judgment.

Table 2: The parameter values and the corresponding weather conditions for Figure 9.
Figure 9: Fog simulation for different relative visibility distance.

To further validate the proposed technique for foggy weather simulation, we apply it to several synthetic images. An illustrative example is shown in Figure 10, and its corresponding parameter values and weather conditions are given in Table 3. One can clearly see that the simulation results are consistent with human visual perception when the value of k is set according to the extinction coefficient values in the visibility grades. This confirms our observations in Figures 9 and 10.

Table 3: The parameter values and corresponding weather conditions for Figure 10.
Figure 10: Different foggy weather simulations.
5.2. Visual Quality Comparison
5.2.1. Image Test Results

The algorithm proposed in this paper works well for a wide variety of synthetic images and real captured images. Figure 11 shows some examples of the rendering effects for game scene images; the parameter k is set to different values for the simulated haze and thin fog scenes. The transmission maps estimated from the input original images are shown in Figure 11(b). Note that the maps do not recover the real depth information, but the generated foggy scenes are still consistent with our daily experience. From the figures, we can see that the generated foggy scenes (see Figure 11(c)) look natural due to the differentiation of the fog's density.

Figure 11: Rendered results for game scenes.

We also apply our method to real-world images; the results for real captured images are shown in Figure 12. The heterogeneous fog effect for the scenes is presented in the figure, and the coefficient k is set to 1 and 0.5, respectively, to create the haze effect. From these results, we can see that our simulated results are realistic.

Figure 12: Rendered results for real world images.
5.2.2. Qualitative Comparison

To verify the effectiveness of the proposed algorithm, we evaluate the visual quality of our rendered images by comparison with the results obtained by commercial software and by state-of-the-art rendering methods.

Figure 13 shows a comparison between our rendered foggy scenes and a real captured foggy image. It can be seen from the figure that there are some differences between our simulation results and the real captured image, especially in the image details; neither of the simulated foggy scenes (homogeneous fog or heterogeneous fog) is visually close to the real foggy situation. The reason is that the changes of real fog's shape, density, and color are complicated and irregular, so the fog density and color at each pixel of an image are hard to describe with regular formulas. What we can do is add more descriptions of fog detail transformation in the rendering algorithm to provide viewers with a more realistic sense of fog in the simulated scenes.

Figure 13: Comparison between our rendered foggy scenes and real foggy image.

The comparison between our rendered foggy scenes and the 3ds Max result is shown in Figure 14. One can clearly see that the fog effect created by 3ds Max (see Figure 14(b)) looks good but not natural enough, because the fog density produced by the software is constant over the whole scene, which is not always true in real situations. A more natural way is to let scene objects gradually disappear into the distant fog; thus the atmospheric scattering model is used here for generating the homogeneous fog effect. Thanks to the depth information provided by the estimated transmission map, the realism of the simulated phenomenon is improved by the distribution of the fog, as shown in Figure 14(c). Compared with the homogeneous foggy scene, a much more realistic fog effect can be produced by using Perlin noise and turbulence. As can be seen in Figure 14(d), the visibility is lower at points far away from the camera and the fog's density is differentiated by the noise textures, just as in the real world.

Figure 14: Comparison between our rendered foggy scenes and 3ds max result.

Figures 15 and 16 allow the comparison of our results with three state-of-the-art rendering methods: OpenGL, which simply blends the fog color with the object color based on the distance to the viewer; Sun's work [1], which is based on a single scattering light transport equation; and Cao's work [2], which is based on an atmosphere point spread function (APSF). Notice that the object outlines in the OpenGL results remain sharp after rendering the original scene with the built-in OpenGL functions, which is not always true for natural foggy images. Both Sun's work and Cao's work can produce impressive rendering results; in particular, the glows around light sources in Sun's result and the blurriness caused by contrast reduction in Cao's result are visually compelling. However, many parameters are involved in these methods, which leads to more user interaction and makes the final simulation effect hard to control. Compared with these existing methods, the proposed algorithm produces comparable results with fewer parameters and less user involvement.

Figure 15: Comparison with OpenGL result and Sun’s work [1].
Figure 16: Comparison with OpenGL result and Cao’s work [2].
5.3. Computational Time

For computational efficiency, the complexity of the OpenGL method is O(WH) for an image of size W × H; that is, it is a linear function of the number of input image pixels, due to the point-to-point operations of the built-in OpenGL functions. Thanks to programmable graphics hardware, Sun's method precomputes the numerical functions and stores them as 2D texture maps; the entire analytic computation and the table lookups are then implemented in simple pixel or vertex shaders on the graphics hardware [1]. Thus, Sun's method is fast enough to achieve real-time rates. The computational time of Cao's method is much longer than Sun's, since the APSF used in Cao's algorithm needs a convolution operation, so its computing speed is associated with the product of the convolution mask size and the input image size, whose time complexity is relatively high. Our proposed method is faster than Cao's method, needing less time to obtain a comparable rendered result of the same size in the Matlab environment. Table 4 shows the computing speeds of the three methods on an image of the same size.

Table 4: Running time of different rendering methods for an image of size .

6. Conclusions and Future Work

We propose a new algorithm to simulate foggy scenes for input no-fog images. First, we generate a Perlin noise image as the density distribution texture of heterogeneous fog. Then, we estimate the transmission map using the MRF model and the bilateral filter. Finally, the virtual foggy scene is realistically rendered with the generated Perlin noise image and transmission map according to the atmospheric scattering model. The proposed method provides a new way to simulate foggy scenes with fewer algorithm parameters and less user interaction compared with existing techniques. In the future, we intend to further accelerate the rendering speed by exploiting GPU techniques.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the National Natural Science Foundation of China (91220301), the China Postdoctoral Science Foundation (no. 2014M552154), the Hunan Planned Projects for Postdoctoral Research Funds (no. 2014RS4026), and the Postdoctoral Science Foundation of Central South University (no. 126648).


References

  1. B. Sun, R. Ramamoorthi, S. G. Narasimhan, and S. K. Nayar, “A practical analytic single scattering model for real time rendering,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 1040–1049, 2005.
  2. Y. Cao, W. Shi, and S. Fang, “Simulation of high realism foggy image for natural scene,” Journal of System Simulation, vol. 24, no. 6, pp. 1247–1253, 2012 (in Chinese).
  3. N. Max, “Atmospheric illumination and shadows,” Computer Graphics, vol. 20, no. 4, pp. 117–124, 1986.
  4. T. Yamamoto, Y. Dobashi, and T. Nishita, “Interactive rendering method for displaying shafts of light,” in Proceedings of the 8th Pacific Conference on Computer Graphics and Applications (PG '00), pp. 31–37, IEEE Press, January 2000.
  5. D. Jackèl and B. Walter, “Modeling and rendering of the atmosphere using Mie-scattering,” Computer Graphics Forum, vol. 16, no. 4, pp. 201–210, 1997.
  6. T. Nishita, Y. Dobashi, and E. Nakamae, “Display of clouds taking into account multiple anisotropic scattering and sky light,” in Proceedings of the Computer Graphics Conference (SIGGRAPH '96), pp. 379–386, August 1996.
  7. C. B. Wang, Z. Y. Wang, and Q. S. Peng, “Real-time rendering of sky scene considering scattering and refraction,” Computer Animation and Virtual Worlds, vol. 18, no. 4-5, pp. 539–548, 2007.
  8. H. Koschmieder, “Theorie der horizontalen Sichtweite,” Beiträge zur Physik der freien Atmosphäre, pp. 171–181, 1924.
  9. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” International Journal of Computer Vision, vol. 48, no. 3, pp. 233–254, 2002.
  10. D. Zdrojewska, “Real time rendering of heterogenous fog based on the graphics hardware acceleration,” in Proceedings of the Central European Seminar on Computer Graphics for Students, 2004.
  11. J. Dong, K. Liu, and J. Wang, “Simulation of the foggy scene under outdoor natural scenes,” Journal of Computer-Aided Design and Computer Graphics, vol. 25, no. 3, pp. 397–409, 2013 (in Chinese).
  12. D. Hoiem, A. A. Efros, and M. Hebert, “Automatic photo pop-up,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 577–584, 2005.
  13. J. Park and C. Kim, “Extracting focused object from low depth-of-field image sequences,” in The International Society for Optical Engineering, vol. 6077 of Proceedings of SPIE, San Jose, Calif, USA, January 2006. View at Publisher · View at Google Scholar
  14. Y. J. Jung, A. Baik, J. Kim, and D. Park, “A novel 2D-to-3D conversion technique based on relative height depth cue,” in Stereoscopic Displays and Applications XX, vol. 7237 of Proceedings of SPIE, p. 8, The International Society for Optical Engineering, San Jose, Calif, USA, January 2009. View at Publisher · View at Google Scholar
  15. K. Han and K. Hong, “Geometric and texture cue based depth-map estimation for 2D to 3D image conversion,” in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE '11), pp. 651–652, IEEE, January 2011. View at Publisher · View at Google Scholar · View at Scopus
  16. N. E. Yang, J. W. Lee, and R. H. Park, “Depth map generation from a single image using local depth hypothesis,” in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE '12), pp. 311–312, IEEE Press, January 2012. View at Publisher · View at Google Scholar · View at Scopus
  17. K. Perlin, “An image synthesizer,” Computer Graphics, vol. 19, no. 3, pp. 287–296, 1985. View at Publisher · View at Google Scholar · View at Scopus
  18. J. J. Koenderink, “Pictorial relief,” The Royal Society of London Philosophical Transactions Series A: Mathematical, Physical and Engineering Sciences, vol. 356, no. 1740, pp. 1071–1086, 1998. View at Publisher · View at Google Scholar · View at MathSciNet
  19. W. B. Tao, H. Jin, and Y. M. Zhang, “Color image segmentation based on mean shift and normalized cuts,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 5, pp. 1382–1389, 2007. View at Publisher · View at Google Scholar · View at Scopus
  20. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1222–1239, 2001. View at Publisher · View at Google Scholar · View at Scopus
  21. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, IEEE Press, June 2008. View at Publisher · View at Google Scholar · View at Scopus
  22. “The gco-v3.0 library (gco-v3.0),”
  23. A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” International Journal of Computer Vision, vol. 96, no. 1, pp. 1–27, 2012. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  24. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE 6th International Conference on Computer Vision, pp. 839–846, IEEE Press, January 1998. View at Scopus
  25. M. C. W. van Rossum and T. M. Nieuwenhuizen, “Multiple scattering of classical waves: microscopy, mesoscopy, and diffusion,” Reviews of Modern Physics, vol. 71, no. 1, pp. 313–371, 1999. View at Publisher · View at Google Scholar · View at Scopus
  26. C. H. Yang, Handbook of Laser and Infrared Technology, National Defense Industry Press, 1990, (Chinese).