Journal of Sensors / 2020 / Article

Special Issue: Sensor Physical Interpretation, Signal and Artificial Intelligence Processing

Research Article | Open Access

Volume 2020 | Article ID 8841811 | https://doi.org/10.1155/2020/8841811

SeHee Jung, SungMin Yang, Eunseok Lee, YongHak Lee, Jisun Ko, Sungjae Lee, JunSang Cho, Jaehwa Lee, SungHwan Kim, "Estimation of Particulate Levels Using Deep Dehazing Network and Temporal Prior", Journal of Sensors, vol. 2020, Article ID 8841811, 9 pages, 2020. https://doi.org/10.1155/2020/8841811

Estimation of Particulate Levels Using Deep Dehazing Network and Temporal Prior

Academic Editor: Bin Gao
Received: 07 May 2020
Revised: 02 Jun 2020
Accepted: 11 Jun 2020
Published: 07 Jul 2020

Abstract

Particulate matter (PM) has become one of the major pollutants deteriorating public health. Since PM is ubiquitous in the atmosphere, it is closely related to quality of life in many different ways. Thus, a system to accurately monitor PM in diverse environments is imperative. Previous studies using digital images have relied on individual atmospheric images, not benefiting from both the spatial and temporal effects of image sequences. This weakness undermined predictive power. To address this drawback, we propose a predictive model using a deep dehazing cascaded CNN and temporal priors. The temporal prior accommodates instantaneous visual motion and estimates PM concentration from residuals between the original and dehazed images. The present method also provides, as a by-product, high-quality dehazed image sequences superior to those of nontemporal methods. The improvements are supported by various experiments under a range of simulation scenarios and assessments using standard metrics.

1. Introduction

Particulate matter (PM) consists of small particles suspended in the air, generally having an aerodynamic diameter smaller than or equal to 10 μm (micrometers). PM originates from anthropogenic activities (e.g., combustion of fossil fuel, dust) as well as natural sources (e.g., mineral dust, volcanic ash). PM measurements are commonly made for particles with an aerodynamic diameter smaller than or equal to 2.5 μm (PM2.5) and 10 μm (PM10). The size of PM is directly associated with health problems [1]. Inhaling the small particles is known to be hazardous, as they can infiltrate deep into the respiratory system [2]. In this regard, PM2.5 has been widely used as a key indicator of the air quality index, and thus we focus on PM2.5 in this study (hereafter abbreviated as PM for brevity). Many experts point out that the recent increases of PM in many parts of the world are attributable to the rapid growth in global energy consumption [3]. Over the years, efforts have been made to identify adverse effects of PM on public health and the environment [4–6]. Strikingly, the large-scale retrospective cohort study of lung cancer by the World Cancer Institute reported that PM is ascertained as a primary carcinogen, as the risk of lung cancer increased by 22% for an increase of PM by 10 μg/m3 [7]. Air pollution from PM has been one of the most controversial issues over East Asia. It is no longer negligible, and media as well as researchers try to inform the public of its detrimental effects [8]. In particular, it was reported that the annual mean PM concentration in South Korea is twice as high as in the Organisation for Economic Co-operation and Development (OECD; https://www.oecd.org/) countries [9]. Under these circumstances, it is imperative to build an accurate air monitoring system that facilitates public alerts and prevention. The Korean government has drastically expanded the PM monitoring network to improve the PM forecast service.
However, there are still not enough PM monitoring stations to cover the whole country. Moreover, most of the stations are distributed in urban areas (e.g., Seoul, Busan), leaving many suburban and rural areas undermonitored.

Two different types of approaches can be used to estimate PM concentrations: sensor-based approaches and vision-based approaches.

1.1. Sensor-Based

Improvements in PM measurement using sensor-based approaches have been made by developing more precise sensor units [10]. There are two types of devices [11], i.e., microbalance PM monitoring stations (accurate but expensive [12]) and portable light-scattering-based PM monitors [13, 14]. For instance, the Korea Meteorological Administration (KMA) now operates 475 measuring stations and publicly reports PM concentration levels (limited to the station vicinity) every hour [15]. Most of the instruments operated by KMA are Tapered Element Oscillating Microbalance (TEOM) devices, which directly weigh PM on a filter [11]. Although highly accurate, TEOM is relatively expensive to install and maintain (approximately 200K USD per year) [16] and is bound to space limitations, thereby undermining practicality. The light-scattering method is relatively affordable. In [11, 17], PM collected via an airflow structure is measured by densely deployed sensors. However, both methods rely on large-scale sensing nodes and inevitably suffer from expensive maintenance costs for high coverage and reliability. Recently, several newly developed devices for mobile platforms (e.g., balloons and drones) are very interesting [10, 18, 19], but carrying sensors to acquire data is still highly energy-intensive and less practical.

1.2. Vision-Based

This approach is less explored than the sensor-based approach, so there is much room for improvement. To the best of our knowledge, all vision-based studies exploit only individual images (e.g., [20–22]). In this case, estimation is fairly sensitive to motion blurring frequently caused by camera or subject movements. Specifically, Liu et al. [20] manually determined several regions of interest targeting distant objects to derive the transmission map. The explanatory power of the transmission map for PM estimation has proven efficient [20–23]. However, the need to select regions of interest is a pain point. Li et al. [21] leveraged heterogeneous data composed of GPS, camera lens, magnetic sensor, official station, and image data. Combining these multiple sources, they generated high-dimensional features using kernel methods. Pan et al. [22] extracted haze effects using the Adaptive Transmission Map [24] and passed the derived features to a deep neural network designed on the basis of the well-known Boltzmann machine [25]. Since transmission values in a local patch (a.k.a. window) are assumed to be the same constant, the dehazed images inevitably contain blocking artifacts [22, 26, 27]. The rest of this paper is organized as follows. In Sections 2 and 3, we introduce related works and the proposed method, respectively. Section 4 describes the numerical experiments.

We discuss the results and conclude in Section 5.

2. Related Works

According to the atmospheric scattering model (ASM), two factors are involved in the formation of a haze image: (1) direct attenuation and (2) airlight. When we take a photo, reflected radiance coming from objects is attenuated while reaching the camera. This is due to the effect of atmospheric absorption (a.k.a. direct attenuation), and the intensity of attenuation is proportional to camera distance (scene depth). In addition, there is another light, called airlight, resulting from the scattering of neighboring light sources (e.g., the sun) by haze [28]. Importantly, airlight is known to shift the color range of the object, while direct attenuation describes the scene radiance and its decay. Figure 1 illustrates how image degradation occurs under haze conditions. With a little algebra, this process can be formulated as

I(x) = J(x)t(x) + A(1 − t(x)), (1)

where x denotes the pixel, I(x) is the observed haze image in RGB channels, J(x) is the haze-free image, t(x) is the medium transmission describing the portion of the scene radiance that reaches the camera, and A is the global atmospheric light. Note that A is assumed to be homogeneous throughout the image and is determined using empirical techniques. The first term on the right-hand side of equation (1) represents direct attenuation, and the second term corresponds to airlight. The farther the distance between the camera and objects, the thicker the atmospheric layer between them:

t(x) = e^(−βd(x)), (2)

where β is the scattering coefficient of the atmosphere and d(x) is the distance from the object to the camera. Equation (2) indicates that the scene radiance is attenuated exponentially with the scene depth d(x). Here, it is intuitively understandable that some haze effect H(x) deteriorates the haze-free image J(x), causing the haze image I(x). Starting from this simple idea, we can consider the following relation:

H(x) = I(x) − J(x). (3)
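As a quick illustration of equations (1) and (2), the following sketch synthesizes a hazy frame from a haze-free one. The atmospheric light A and scattering coefficient beta below are illustrative values chosen for the example, not parameters taken from this paper.

```python
import numpy as np

def apply_haze(J, depth, A=0.9, beta=1.2):
    """Synthesize a hazy image I from a haze-free image J via the ASM:
    I = J*t + A*(1 - t), with transmission t = exp(-beta * d).
    J: HxWx3 array in [0, 1]; depth: HxW scene depth (arbitrary units).
    A and beta are illustrative, not values from the paper."""
    t = np.exp(-beta * depth)[..., None]  # transmission map, equation (2)
    return J * t + A * (1.0 - t)          # direct attenuation + airlight, equation (1)

# Toy example: a flat gray scene whose depth grows from left to right,
# so haze thickens toward the right edge of the frame.
J = np.full((4, 4, 3), 0.5)
depth = np.tile(np.linspace(0.0, 3.0, 4), (4, 1))
I = apply_haze(J, depth)
```

At zero depth the transmission is 1 and the pixel is unchanged; at larger depths the pixel is pulled toward the atmospheric light A, which is exactly the color shift the airlight term describes.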

Related to image dehazing (a.k.a. haze removal), we restore J(x), t(x), and A from the observed haze image I(x).

3. Proposed Method

Measuring PM via image sequences (i.e., video clips) is the ultimate goal of our study, with the deep neural network serving as a feature extractor. To achieve this goal, we first train the deep dehazing network to extract informative features strongly correlated with PM concentration levels. In this section, we introduce two strategies that work together in the network: (1) deep compression for energy efficiency in light of the model architecture and (2) temporal priors to capture haze-related features in image sequences. The specific design of our dehazing network is presented in Figure 2.

3.1. Network Architectures in Favor of Feature Extraction
3.1.1. Network Pruning

Network depth in a CNN plays a decisive role in extracting various levels of features [29]. Inspired by this, we formulate the feature extraction network (FEN) at the front of our dehazing network with deep cascaded convolutional layers of 436 kernels, as in Figure 2. However, going deep with convolutions involves too many parameters and high computational cost. We thus seek to simplify the network without loss of accuracy. To address this hurdle, we compress the FEN using network pruning (NP), which has been widely used to reduce network complexity and prevent overfitting [30–32]. We thereby eliminate unnecessary parameters in our network without performance loss. Since the local feature is more important than the global feature in image restoration [33], we reduced the number of kernels successively (see Table 1). The pruned network outperforms the original network, which has far more connections in a plain and dense architecture. This in turn makes the FEN less complex, yet still deep enough to extract the features necessary for PM measurements. Based on the deep cascaded CNN architecture, the FEN grasps all levels of features from local to global.
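Network pruning can be sketched as magnitude-based filter pruning in the spirit of [30, 31]: filters whose weights carry little magnitude are dropped, shrinking the layer without retraining from scratch. The layer shape and the 64-to-36 kernel reduction below are hypothetical, not the paper's actual FEN configuration.

```python
import numpy as np

def prune_filters(weights, keep):
    """Magnitude-based filter pruning (a simplified sketch of network
    pruning): keep the `keep` filters with the largest L1 norm from a
    conv weight tensor of shape (out_ch, in_ch, k, k). Returns the
    pruned weights and the kept indices, so the next layer's input
    channels can be pruned consistently."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    idx = np.argsort(norms)[::-1][:keep]   # indices of the strongest filters
    idx = np.sort(idx)                     # preserve original filter order
    return weights[idx], idx

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))         # a hypothetical fen-layer weight tensor
w_pruned, kept = prune_filters(w, keep=36) # e.g., 64 -> 36 kernels
```

Every kept filter has an L1 norm at least as large as every dropped one, which is the criterion this sketch uses as a proxy for filter importance.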


Layer | Kernel | Input size | Output size

fen1
fen2
fen3
fen4
fen5
fen6
fen7
concat
irn1
irn21
irn22
concat
irn3
irn4

3.1.2. Parallelized CNN Layer

Second, we adopt parallelized CNN layers (P1CL, a.k.a. Network in Network [34]) in the image restoration network (IRN) that forms the second half of our dehazing network in Figure 2. The P1CL has proven its ability to enhance the representational power of neural networks [34]. GoogLeNet fully exploits this technique, not only to make the network deep via the inception module but also to reduce the dimensions inside these modules [35]. Inspired by GoogLeNet, we place the P1CL in front of the IRN to reduce the dimension of the feature maps accumulated through the FEN. This is not merely dimension reduction: in doing so, we compress the feature maps so as to narrow down the scope of the features involved. Besides, the P1CL also includes the use of rectified linear activation [36]. In general, each P1CL consists of one or more convolutions followed by a nonlinear activation function (e.g., ReLU), which adds nonlinearity and thereby helps approximate a highly nonlinear function such as the ASM. Taken together, we form the IRN approximating the following equation, derived from equation (3) with a little algebra:

H(x) = J(x)(t(x) − 1) + A(1 − t(x)), (4)

where H(x) denotes the haze effect, and the two terms J(x)(t(x) − 1) and A(1 − t(x)) correspond to the lost scene radiance (e.g., by scattering or absorption) and the shift of the scene color, respectively. The main reason for parallel processing in the IRN is to divide the computational tasks and thereby reduce the burden on the network. The upper and lower units estimate the lost scene radiance and the shift of the scene color, respectively. Thanks to the parallel architecture, the IRN can expedite the work in a simultaneous processing fashion, and the reduced network complexity also helps prevent overfitting. The residual between the original and dehazed images can serve as an explanatory variable related to haze effects for predicting PM levels.
Motivated by this, we model the relationship between the haze-effect variables and PM levels. Using the components stored in the estimated haze effects (Figure 3), we can estimate PM concentration levels.
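The haze-effect decomposition described above (lost scene radiance plus color shift) can be checked numerically: the two terms the parallel IRN units target sum exactly to the residual between the hazy and haze-free images. All values below are synthetic and purely illustrative.

```python
import numpy as np

def haze_effect(J, t, A):
    """Decompose the haze effect H(x) = I(x) - J(x) into the two terms
    the parallel IRN units estimate: the lost scene radiance J(x)(t(x)-1)
    and the color shift A(1 - t(x))."""
    lost_radiance = J * (t - 1.0)   # target of the upper unit
    color_shift = A * (1.0 - t)     # target of the lower unit
    return lost_radiance + color_shift

rng = np.random.default_rng(1)
J = rng.uniform(0, 1, size=(8, 8, 3))   # synthetic haze-free image
t = rng.uniform(0.2, 1.0, size=(8, 8, 1))  # synthetic transmission map
A = 0.9                                  # illustrative atmospheric light
I = J * t + A * (1.0 - t)                # hazy image from the ASM
H = haze_effect(J, t, A)                 # equals the residual I - J
```

Since I = J*t + A*(1 - t), subtracting J gives J*(t - 1) + A*(1 - t) term by term, which is exactly what the function returns.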

3.2. Temporal Prior

Since PM consists of infinitesimal particles floating in the air, it can be characterized by transport phenomena such as flow motions in video. Although we cannot directly identify the flow motion of PM by tracking all particles, we can indirectly discover its effects through variability across multiple image sequences in video. As introduced by Kim et al. [37], differences between consecutive original frames under low and high PM levels are clearly distinctive. This previous work reinforces the plausibility of the hypothesis that distinctions across images significantly reflect PM concentration levels. This is motivated by the fact that moving objects happen to create subtle changes. Stepping beyond the previous work, and inspired by priors in Bayesian statistics, we impose the fluid flow model proposed by Xie et al. [38] in order to additionally accommodate feature variations:

u_t = W(u_{t−1}) + P_t, t ∈ 𝒯, (5)

where 𝒯 is the time domain and u, W, and P denote the fluid, the transport operator (a.k.a. advection or warp operator), and the temporal prior, respectively. Note that v_t, an element of u_t, is the vector carrying the velocity and flow direction field. For brevity, we replace the existing physical prior P_t with x̄_t, where x̄_t denotes the average over a period of time prior to t. More precisely, Figure 4 describes how the proposed temporal prior is built from sequential frames over the prior length, with each pixel taking the average RGB values across image frames. Taken together, we combine the original RGB (3 channels) and its corresponding temporal priors (3 channels) into augmented inputs of six channels in total (see the first half of the network in Figure 2). When exploiting priors that are nearly consistent across image sequences, the network is expected to be superior in producing consistent haze effects, compared to the model with no temporal prior.
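The temporal prior construction described above (per-pixel RGB averages over the frames preceding t, stacked with the current frame into a six-channel input) can be sketched as follows. The prior length of 5 frames is a hypothetical choice; the paper leaves the optimal length open as future work.

```python
import numpy as np

def with_temporal_prior(frames, t, prior_len=5):
    """Build the 6-channel network input for frame t: the original RGB
    frame stacked with its temporal prior, i.e., the per-pixel RGB mean
    over the `prior_len` frames preceding t. `prior_len` is a
    hypothetical choice, not a value fixed by the paper.
    frames: (N, H, W, 3) sequence; returns an (H, W, 6) array."""
    start = max(0, t - prior_len)
    prior = frames[start:t].mean(axis=0)                  # HxWx3 temporal prior
    return np.concatenate([frames[t], prior], axis=-1)    # HxWx6 augmented input

# Toy sequence of 10 RGB frames
frames = np.random.default_rng(2).uniform(0, 1, size=(10, 16, 16, 3))
x = with_temporal_prior(frames, t=7)
```

The first three channels carry the instantaneous appearance; the last three are nearly constant across neighboring frames, which is what lets the network produce temporally consistent haze effects.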

4. Numerical Experiments

4.1. Datasets

In this section, we describe the haze video datasets. One challenge in creating real video datasets is that consecutive haze frames and the corresponding haze-free frames are supposed to be perfectly matched. To partially circumvent this, artificial haze video datasets based on existing videos can be used to train dehazing networks [39–41]. Yet synthetic haze effects hardly accommodate the natural flow motions that disperse light, and thereby wrongly distort pixel values. Instead, we collected datasets from various environments, including both indoor and outdoor areas (refer to the sample haze images in the Supplementary Material (available here)). One note of caution: since indoor environments are not directly exposed to the outside atmosphere, it is difficult to collect indoor video clips with high PM levels. Therefore, we had to open all the windows when PM was high, or sometimes directly generate PM by burning incense or spraying potassium chloride. The thumbnail images are shown in Table 2. The video clips were recorded with the Raspberry Pi Camera Module V2 (Raspberry Pi Foundation; http://www.raspberrypi.org/) employing a low-cost CMOS sensor. To build the datasets, we first mounted the Pi cameras on tripods for image stabilization. While recording video clips, we concurrently measured PM levels using the Aerocet 831 Handheld Particle Counter (Met One Instruments; http://www.metone.com/), a high-precision device. With the help of the Aerocet, we collected haze-free images, under atmospheres where PM levels are less than or equal to 15, as the target images for training the network. Finally, we applied optical flow [42] to the captured video clips to remove any spatial variances. All the datasets and the codes are available at the author’s website (http://www.hifiai.pe.kr/).


Category | Thumbnail | # of video clips

Indoor environments | (thumbnail) | 2,000 for each site

Outdoor environments | (thumbnail) | 3,000 for each site

Experimental chamber | (thumbnail) | 1,000 for each experimental condition

4.2. Results

To measure PM levels, the estimated haze effects per frame (a matrix) are converted to statistics (scalars) such as the mean, entropy, and variance [20, 43]. To assess predictive power, we compare the true haze effects with the predicted haze effects. To do so, we fit three regression-type models: (1) random forest regression (RFR), (2) support vector regression (SVR) with the radial basis function (RBF) kernel, and (3) multilayer perceptron regression (MLPR). Prediction accuracy is evaluated by comparing s and ŝ, where s and ŝ refer to a statistic (e.g., the mean, entropy, or variance of the haze effect) computed from the ground truth and from the proposed model, respectively. Table 3 summarizes the accuracy results for the test sets; all indoor and outdoor scenarios are equally assigned.


Indoor | Nonprior | Prior
       | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.6903 | 0.5872 | 0.7546 | 0.7299 | 0.6139 | 0.7953
Variance | 0.6645 | 0.8101 | 0.8152 | 0.6682 | 0.8239 | 0.8258
Entropy | 0.6949 | 0.8247 | 0.8240 | 0.7054 | 0.8388 | 0.8672

Outdoor | Nonprior | Prior
        | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.6228 | 0.4599 | 0.5882 | 0.6917 | 0.5197 | 0.6969
Variance | 0.6157 | 0.6843 | 0.6891 | 0.7005 | 0.7760 | 0.7753
Entropy | 0.6193 | 0.7216 | 0.7135 | 0.7127 | 0.8245 | 0.8132
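The matrix-to-scalar conversion that produces the regression inputs above can be sketched as follows. The 32-bin histogram used for the entropy is an assumption of this sketch; the paper does not specify the binning.

```python
import numpy as np

def haze_statistics(H, bins=32):
    """Reduce an estimated haze-effect map H (a per-pixel matrix) to the
    three scalar summaries used as regression inputs: mean, variance,
    and Shannon entropy of the intensity histogram. The 32-bin histogram
    is an assumption, not a value stated in the paper."""
    counts, _ = np.histogram(H, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                               # drop empty bins before the log
    entropy = -(p * np.log2(p)).sum()          # Shannon entropy in bits
    return {"mean": float(H.mean()),
            "variance": float(H.var()),
            "entropy": float(entropy)}

H = np.random.default_rng(3).uniform(0, 1, size=(64, 64))  # synthetic haze effect
feats = haze_statistics(H)
```

Each video frame thus yields one feature vector (mean, variance, entropy), which is then fed to RFR, SVR, or MLPR against the concurrently measured PM level.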

4.2.1. Indoor and Outdoor Environment

We collected over 2,000 video clips from an indoor office and corridors at Konkuk University over several months. Here, the proposed model achieves its best accuracy, 86.72%, when adopting MLPR with the prior and the entropy benchmark. Especially notable is that accuracy tends to increase when applying the temporal prior across all scenarios. In this sense, it is confirmed that temporal priors can provide the network with additional haze-related information. In addition, indoor environments are believed to be less sensitive to environmental factors, so it is understandable that indoor experiments outperform outdoor ones as a whole. For the outdoor experiments, we selected two populated locations in South Korea, in the midst of residential areas and a building complex. Over several months, we collected more than 3,000 video clips at each location. For reliability, we made sure that the outdoor data cover a widespread range of PM levels. In Table 3, SVR with entropy presents 72.16% and 82.45% for nonprior and prior, respectively. Interestingly, priors in the outdoor data allow considerable accuracy gains compared to the indoor scenarios. The results imply that prior information can be especially effective in outdoor environments (e.g., windiness) in the midst of particulate flows.

4.2.2. Experimental Chamber

In simulations, it is essential to assess diverse environmental conditions. To this end, we designed an experimental chamber to implement the conditions of interest. The experiments were carried out with respect to four factors: wind, temperature, humidity, and illuminance. In the course of the experiments, the other confounding factors were held fixed in the chamber. We gathered approximately 1,000 video clips across all experimental conditions. In Table 4, the results show that MLPR with entropy consistently presents high accuracy, over 80%, for almost all scenarios. Therefore, it is clearly confirmed that the prior-based model serves to adequately control environmental confounding factors that may intervene in haze effects.


Nonprior | Normal state | Windiness
         | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.7586 | 0.6412 | 0.7431 | 0.6403 | 0.5986 | 0.6653
Variance | 0.7116 | 0.7805 | 0.7470 | 0.6319 | 0.6951 | 0.7249
Entropy | 0.6511 | 0.7523 | 0.7528 | 0.7074 | 0.7780 | 0.6057

Prior | Normal state | Windiness
      | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.7620 | 0.6441 | 0.7899 | 0.6951 | 0.4670 | 0.7235
Variance | 0.6728 | 0.7995 | 0.8093 | 0.6357 | 0.6428 | 0.7764
Entropy | 0.6934 | 0.8674 | 0.8699 | 0.7186 | 0.8428 | 0.8101

Nonprior | Temperature (low, 20°C) | Temperature (high, 40°C)
         | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.6906 | 0.5189 | 0.7254 | 0.6105 | 0.5143 | 0.6856
Variance | 0.7101 | 0.7567 | 0.7674 | 0.7083 | 0.7410 | 0.7528
Entropy | 0.7517 | 0.7781 | 0.7056 | 0.6505 | 0.7696 | 0.7387

Prior | Temperature (low, 20°C) | Temperature (high, 40°C)
      | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.7154 | 0.5162 | 0.8123 | 0.7636 | 0.5810 | 0.8094
Variance | 0.6510 | 0.7854 | 0.8518 | 0.6883 | 0.7681 | 0.8082
Entropy | 0.7215 | 0.8304 | 0.8319 | 0.7271 | 0.8330 | 0.8334

Nonprior | Humidity (low, 25%) | Humidity (high, 50%)
         | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.6910 | 0.6397 | 0.6666 | 0.6593 | 0.6317 | 0.7234
Variance | 0.7218 | 0.7581 | 0.6977 | 0.6745 | 0.6612 | 0.6715
Entropy | 0.7203 | 0.7827 | 0.6660 | 0.6679 | 0.7674 | 0.6869

Prior | Humidity (low, 25%) | Humidity (high, 50%)
      | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.7319 | 0.5523 | 0.7854 | 0.7462 | 0.5699 | 0.7734
Variance | 0.6821 | 0.8149 | 0.7724 | 0.6770 | 0.7675 | 0.7379
Entropy | 0.7466 | 0.8054 | 0.8524 | 0.7451 | 0.8174 | 0.8468

Nonprior | Illuminance (low, 100 lx) | Illuminance (high, 300 lx)
         | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.6659 | 0.5286 | 0.7038 | 0.6562 | 0.5634 | 0.7140
Variance | 0.6433 | 0.7024 | 0.6046 | 0.6450 | 0.7513 | 0.6685
Entropy | 0.6588 | 0.7821 | 0.6280 | 0.6615 | 0.7764 | 0.6800

Prior | Illuminance (low, 100 lx) | Illuminance (high, 300 lx)
      | RFR | SVR | MLPR | RFR | SVR | MLPR
Mean | 0.7131 | 0.5506 | 0.6905 | 0.6672 | 0.5185 | 0.7522
Variance | 0.6241 | 0.7861 | 0.7757 | 0.7384 | 0.7135 | 0.8104
Entropy | 0.6825 | 0.8018 | 0.8031 | 0.6926 | 0.8355 | 0.8764

5. Discussion

Undoubtedly, the latest AI has mainly focused on vision-based techniques (e.g., RGB and lidar). Nonetheless, in the AI domain, infinitesimal materials still remain underexplored due to their invisible nature. In this regard, vision-based PM measurement features many advantages in flexibility and accessibility, in view of real-time air quality monitoring and extension to spatial scales. Given our findings, even with a low-cost optical sensor, the proposed method can offer further benefits in business and cost savings in practical aspects. Methodologically, it also serves as a predictive model using the deep cascaded CNN and temporal prior. Compared to existing vision-based predictive models, the proposed model stretches to accommodate additional temporal prior features among image sequences, aiming at improving predictive power by virtue of data augmentation. With various simulation designs (e.g., real data and experimental chamber data), we confirm that the proposed models are superior to the traditional models without temporal priors, showing outstanding predictive power. Further improvements can be made by exploiting the optimal length of frames in the context of optimizing predictive power. This effort can facilitate exporting the method to a gauging device and promote practical utility. Since all vision-based models strongly depend on well-controlled radiance, which considerably discourages vision-based measurement techniques at times, we plan to develop alternative prediction models that account for particulate-related features only. We leave these topics for future research.

Data Availability

All the datasets and the codes are available at the author’s website (http://www.hifiai.pe.kr/).

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This research was supported by Konkuk University Researcher Fund in 2019, Konkuk University Researcher Fund in 2020, and the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2020R1C1C1A01005229).

Supplementary Materials

Fig. S1: the sample haze images. (a) Konkuk University office. (b) The residential area. (c) The experimental chamber.

References

  1. K. H. Kim, E. Kabir, and S. Kabir, “A review on the human health impact of airborne particulate matter,” Environment International, vol. 74, pp. 136–143, 2015.
  2. February 2020, http://www.nbrienvis.nic.in/Database/1_2463.aspx.
  3. Y. Wang, “The analysis of the impacts of energy consumption on environment and public health in China,” Energy, vol. 35, no. 11, pp. 4473–4479, 2010.
  4. J. D. Sacks, L. W. Stanek, T. J. Luben et al., “Particulate matter–induced health effects: who is susceptible?” Environmental Health Perspectives, vol. 119, no. 4, pp. 446–454, 2011.
  5. A. Mukherjee and M. Agrawal, “World air particulate matter: sources, distribution and health effects,” Environmental Chemistry Letters, vol. 15, no. 2, pp. 283–309, 2017.
  6. Q. Zhang, Y. Niu, Y. Xia et al., “The acute effects of fine particulate matter constituents on circulating inflammatory biomarkers in healthy adults,” Science of the Total Environment, vol. 707, article 135989, 2020.
  7. O. Raaschou-Nielsen, Z. J. Andersen, R. Beelen et al., “Air pollution and lung cancer incidence in 17 European cohorts: prospective analyses from the European Study of Cohorts for Air Pollution Effects (ESCAPE),” The Lancet Oncology, vol. 14, no. 9, pp. 813–822, 2013.
  8. H. C. Kim, S. Kim, B.-U. Kim et al., “Recent increase of surface particulate matter concentrations in the Seoul Metropolitan Area, Korea,” Scientific Reports, vol. 7, no. 1, pp. 1–7, 2017.
  9. OECD, OECD Economic Surveys: Korea 2018, OECD Publishing, 2018.
  10. Y. Yang, Z. Hu, K. Bian, and L. Song, “ImgSensingNet: UAV vision guided aerial-ground air quality sensing system,” in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp. 1207–1215, Paris, France, 2019.
  11. Y. Cheng, X. Li, Z. Li et al., “AirCloud: a cloud-based air-quality monitoring system for everyone,” in Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems - SenSys '14, pp. 251–265, Memphis, Tennessee, USA, 2014.
  12. March 2020, https://www.thermofisher.com/order/catalog/product/TEOM1405.
  13. March 2020, https://metone.com/products/aerocet-831-handheld-particle-counter.
  14. March 2020, http://www.dylosproducts.com/dcproairqumo.html.
  15. March 2020, https://www.airkorea.or.kr/web/stationInfo?pMENUNO=93.
  16. Y. Zheng, F. Liu, and H. P. Hsieh, “U-Air: when urban air quality inference meets big data,” in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '13, pp. 1436–1444, Chicago, Illinois, USA, 2013.
  17. Y. Gao, W. Dong, K. Guo et al., “Mosaic: a low-cost mobile sensing system for urban air quality monitoring,” in IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, pp. 1–9, San Francisco, CA, USA, 2016.
  18. J. Li, Q. Fu, J. Huo et al., “Tethered balloon-based black carbon profiles within the lower troposphere of Shanghai in the 2013 East China smog,” Atmospheric Environment, vol. 123, pp. 327–338, 2015.
  19. K. Weber, G. Heweling, C. Fischer, and M. Lange, “The use of an octocopter UAV for the determination of air pollutants–a case study of the traffic induced pollution plume around a river bridge in Duesseldorf, Germany,” International Journal of Education and Learning Systems, vol. 2, 2017.
  20. C. Liu, F. Tsow, Y. Zou, and N. Tao, “Particle pollution estimation based on image analysis,” PLoS One, vol. 11, no. 2, article e0145955, 2016.
  21. S. Li, T. Xi, Y. Tian, and W. Wang, “Inferring fine-grained PM2.5 with Bayesian based kernel method for crowdsourcing system,” in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, pp. 1–6, Singapore, 2017.
  22. Z. Pan, H. Yu, C. Miao, and C. Leung, “Crowdsensing air quality with camera-enabled mobile devices,” in Twenty-Ninth IAAI Conference, San Francisco, California, USA, 2017.
  23. H. Wang, X. Yuan, X. Wang, Y. Zhang, and Q. Dai, “Real-time air quality estimation based on color image processing,” in 2014 IEEE Visual Communications and Image Processing Conference, pp. 326–329, Valletta, Malta, 2014.
  24. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228–242, 2007.
  25. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
  26. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
  27. C. Li, J. Guo, F. Porikli, H. Fu, and Y. Pang, “A cascaded convolutional neural network for single image dehazing,” IEEE Access, vol. 6, pp. 24877–24887, 2018.
  28. V. Natarajan, “Enhanced single image uniform and heterogeneous fog removal using guided filter,” in Artificial Intelligence and Evolutionary Computations in Engineering Systems, S. Dash, K. Vijayakumar, B. Panigrahi, and S. Das, Eds., vol. 517 of Advances in Intelligent Systems and Computing, pp. 453–463, Springer, Singapore, 2017.
  29. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, Las Vegas, NV, USA, 2016.
  30. S. Han, H. Mao, and W. J. Dally, “Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding,” 2015, https://arxiv.org/abs/1510.00149.
  31. S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems, pp. 1135–1143, Curran Associates, Inc., 2015.
  32. J. Yamanaka, S. Kuwashima, and T. Kurita, “Fast and accurate image super resolution by deep CNN with skip connection and network in network,” in Neural Information Processing, D. Liu, S. Xie, Y. Li, D. Zhao, and E. S. El-Alfy, Eds., vol. 10635 of ICONIP 2017, Lecture Notes in Computer Science, pp. 217–225, Springer, Cham, 2017.
  33. D. M. Strong, P. Blomgren, and T. F. Chan, “Spatially adaptive local-feature-driven total variation minimizing image restoration,” in Statistical and Stochastic Methods in Image Processing II, vol. 3167, pp. 222–233, San Diego, CA, USA, 1997.
  34. M. Lin, Q. Chen, and S. Yan, “Network in network,” 2013, https://arxiv.org/abs/1312.4400.
  35. C. Szegedy, W. Liu, Y. Jia et al., “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, Boston, MA, USA, 2015.
  36. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, Curran Associates, Inc., 2012.
  37. S. Kim, S. Jung, S. Yang et al., “Vision-based deep Q-learning network models to predict particulate matter concentration levels using temporal digital image data,” Journal of Sensors, vol. 2019, 10 pages, 2019.
  38. Y. Xie, E. Franz, M. Chu, and N. Thuerey, “tempoGAN: a temporally coherent, volumetric GAN for super-resolution fluid flow,” ACM Transactions on Graphics, vol. 37, no. 4, pp. 1–15, 2018.
  39. B. Cai, X. Xu, and D. Tao, “Real-time video dehazing based on spatiotemporal MRF,” in Advances in Multimedia Information Processing - PCM 2016, E. Chen, Y. Gong, and Y. Tie, Eds., vol. 9917 of Lecture Notes in Computer Science, pp. 315–325, Springer, Cham, 2016.
  40. W. Ren and X. Cao, “Deep video dehazing,” in Advances in Multimedia Information Processing - PCM 2017, B. Zeng, Q. Huang, A. Saddik, H. Li, S. Jiang, and X. Fan, Eds., vol. 10735 of Lecture Notes in Computer Science, pp. 14–24, Springer, Cham, 2017.
  41. B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “End-to-end united video dehazing and detection,” in Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, 2018.
  42. D. Sun, S. Roth, and M. J. Black, “Secrets of optical flow estimation and their principles,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2432–2439, San Francisco, CA, USA, 2010.
  43. W. Yuchi, E. Gombojav, B. Boldbaatar et al., “Evaluation of random forest regression and multiple linear regression for predicting indoor fine particulate matter concentrations in a highly polluted city,” Environmental Pollution, vol. 245, pp. 746–753, 2019.

Copyright © 2020 SeHee Jung et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

