Research Article  Open Access
Xiaoli Zhao, Guozhong Wang, Jiaqi Zhang, Xiang Zhang, "Scene Understanding Based on High-Order Potentials and Generative Adversarial Networks", Advances in Multimedia, vol. 2018, Article ID 8207201, 8 pages, 2018. https://doi.org/10.1155/2018/8207201
Scene Understanding Based on High-Order Potentials and Generative Adversarial Networks
Abstract
Scene understanding is the task of predicting a class label at each pixel of an image. In this study, we propose a semantic segmentation framework based on classic generative adversarial nets (GAN) that trains a fully convolutional semantic segmentation model along with an adversarial network. To improve the consistency of the segmented image, high-order potentials, instead of unary or pairwise potentials, are adopted. We realize the high-order potentials by substituting an adversarial network for the CRF model; the adversarial network continuously improves the consistency and details of the segmented semantic image until it cannot discriminate the segmented result from the ground truth. A number of experiments are conducted on the PASCAL VOC 2012 and Cityscapes datasets, and quantitative and qualitative assessments show the effectiveness of the proposed approach.
1. Introduction
Scene understanding, based on semantic segmentation, is a core problem in computer vision that has been applied to 2D images, video, and even volumetric data. Its goal is to assign each pixel a label and thereby provide a complete understanding of a scene. Two examples of scene understanding are shown in Figure 1. The importance of scene understanding is highlighted by its growing range of applications, such as autonomous driving [1], human-computer interaction [2], robotics, and augmented reality, to name a few.
The earliest scene parsing work [3] classifies 33 scene categories over 2688 images of the LMO dataset, adopting label-transfer technology to establish dense correspondences between the input image and each of its nearest neighbors with the SIFT flow algorithm. State-of-the-art scene parsing frameworks are mostly based on the fully convolutional network (FCN) [4]. FCN transforms the well-known networks (AlexNet, VGG, GoogLeNet, and ResNet) into fully convolutional ones by replacing the fully connected layers with convolutional ones. The key insight of FCN is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly sized output with efficient inference and learning, realizing an end-to-end, image-to-image deep learning system. For these reasons and other contributions, FCN is considered a milestone of deep learning. Although its many pooling operations enlarge the receptive fields of the convolution kernels, they lose detailed location information, resulting in coarse segmentation results that hinder further application.
In order to refine the segmentation result, a postprocessing stage using a conditional random field (CRF) can be applied to the output of the system [5]; it uses a fully connected pairwise CRF to capture the dependencies between pixels and recover fine local details. Dilated convolution is a generalization of Kronecker-factored convolutional filters [6] that expands receptive fields exponentially without losing resolution by removing some pooling layers. Works that use this technique [7] allow dense feature extraction at arbitrary resolution and then combine dilated convolutions of different scales to obtain wider receptive fields at no additional cost. Combining a CRF with dilated convolution, Chen et al. [8] propose the DeepLab system, which enlarges the receptive fields of filters at multiple scales and overcomes the loss of localization accuracy by applying a fully connected CRF to the response of the final network layer. To make a dense CRF with pairwise potentials an integral part of the network, Zheng et al. [9] propose a model called CRF-as-RNN that refines the segmentation of an FCN; it fully integrates the CRF with the FCN so that the whole network can be trained end to end. Although a CRF that accounts for pixel correlations improves segmentation accuracy, it also increases computational complexity. To incorporate suitable global features, Zhao et al. [10] propose the pyramid scene parsing network (PSPNet), which extends pixel-level features with a specially designed pyramid pooling module in addition to traditional dilated convolution. This algorithm won the ImageNet scene parsing challenge 2016.
A common property of the abovementioned algorithms is that all label variables are predicted using either unary potentials, as in FCN, or pairwise potentials, as in CRF-based methods. Although pairwise potentials refine segmentation accuracy, they only consider the correlation between two pixels. In an image, many pixels exhibit consistency across superpixels, so high-order potentials should be effective in refining segmentation accuracy. Arnab et al. [11] integrate specific classes of high-order potentials into CNN-based segmentation models. Each specific class, such as an object or a superpixel, requires a different energy function to compute its high-order potentials, which makes the computation complicated.
The generative adversarial nets (GAN) proposed by Goodfellow et al. [12] in 2014 train a pair of networks in competition with each other; the adversarial network allows the generative model to be estimated without approximating many intractable probability computations. Because no Markov chains or unrolled approximate inference networks are needed, GAN has drawn many researchers' attention in domains such as super-resolution [13], image-to-image translation [14, 15], and image synthesis [16, 17]. We are interested in higher-order consistency without being confined to a certain class, and we also wish to avoid complex probability or inference computation. Motivated by the many variants of GAN, we propose a semantic segmentation framework based on GAN, which consists of two components: a generative network and an adversarial network. The former generates the segmented image, and the latter encourages the segmentation model to continuously improve the semantic segmentation result until it cannot be distinguished from the ground truth according to the value of the loss function. Unlike the classic GAN, we take the original image as the input of the generative network and either the output of the generative network or the corresponding ground truth as the input of the adversarial network; the adversarial network then discriminates between the two inputs. If the loss value of the framework is large, backpropagation is performed to adjust the network parameters; once the loss satisfies the termination criterion, the output of the generative network is the final semantic segmentation result. The framework is shown in Figure 2. This approach takes the high-order potentials of an image into account because it measures the similarity between the segmented image and the corresponding ground truth over the whole image.
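The alternating scheme described above can be sketched as a minimal training skeleton. The update functions, stop threshold, and loop bound below are hypothetical placeholders standing in for one gradient step on the respective losses; this is an illustration of the control flow, not the authors' implementation.

```python
# Minimal sketch of the alternating adversarial training loop described above.
# `update_discriminator` and `update_generator` are hypothetical callables,
# each standing in for one gradient step on the corresponding loss.

def train_adversarial(batches, update_discriminator, update_generator,
                      loss_fn, threshold=0.1, max_iters=2000):
    """Alternate D and G updates until the loss meets the stop criterion."""
    loss = float("inf")
    for i, (image, ground_truth) in enumerate(batches):
        if i >= max_iters or loss <= threshold:
            break
        # D learns to tell the segmented output from the ground truth.
        update_discriminator(image, ground_truth)
        # G learns to fool D while matching the ground-truth labels.
        update_generator(image, ground_truth)
        loss = loss_fn(image, ground_truth)
    return loss
```

The termination criterion mirrors the text: training stops once the framework's loss says the discriminator can no longer separate the generator's output from the ground truth.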
2. The Proposed Semantic Segmentation Approach
The aim of the proposed framework is to generate the semantic image from an original image x. To achieve this goal, we design a generator network G and an adversarial network D. The generator is trained as a network parameterized by θ_G; these parameters denote the weights and are obtained by minimizing the loss function. The output of the generator and the ground truth are then fed into the adversarial network, parameterized by θ_D, in which the discriminator is trained to distinguish real from fake inputs. To achieve the desired result, it is important to design the network architecture and the loss function carefully.
2.1. The Architecture of Networks
Some works have shown that a deeper network model can improve segmentation performance but also makes the network architecture more complex and therefore harder to train [18]. We make a compromise between the depth of the network and the performance of the algorithm.
The generative network, shown in the first row of Figure 3, consists of two modules: convolution and deconvolution. The role of the convolution module, which consists of 10 layers, is to extract the feature maps of an image. Each layer is composed of convolution, an activation function, and batch normalization. The convolution is performed with kernels producing 64 feature maps, followed by a ReLU layer as the activation function, which performs the nonlinear operation. Batch normalization is applied in each layer to avoid overfitting. Although pooling operations enlarge the receptive field of the network, they also reduce segmentation accuracy. To preserve the fine details of the feature maps, the last three pooling outputs are fused into one, on which deconvolution is performed to produce an output of the same size as the original image.
To discriminate the ground truth from the segmented image, we train a discriminator network, illustrated in the second row of Figure 3. This architecture follows [13] and solves (4) in an alternating manner along with the generator. It contains eight convolution layers and uses LeakyReLU as the activation function. The convolutions produce final feature maps with 512 channels, which are followed by two dense layers and a final sigmoid activation to output a probability for classification.
2.2. Loss Function
In information theory, cross entropy measures the similarity of two distributions: the more similar the distributions of two variables, the smaller the cross entropy, so we adopt cross entropy as the loss function. Cross entropy is defined as

H(p, q) = -Σ_i p_i log q_i, (1)

where p and q are the real and predicted distributions. Equation (1) reduces to the Shannon entropy when p and q are equal. In the multiclass classification task, we use one-hot encoded cross entropy, so (1) can be rewritten as

ℓ(y, ŷ) = -Σ_c y_c log ŷ_c, (2)

where y specifies one pixel of the ground truth and each component y_c is 0 or 1.
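As a concrete numerical check of the one-hot cross entropy above, only the true class's log-probability contributes to the sum. A small NumPy sketch (illustrative only; the clipping constant is our own guard, not part of the paper):

```python
import numpy as np

def cross_entropy_onehot(y_onehot, y_pred, eps=1e-12):
    """One-hot cross entropy: only the true class's log-probability survives."""
    y_pred = np.clip(y_pred, eps, 1.0)  # guard against log(0)
    return -np.sum(y_onehot * np.log(y_pred))

# A pixel whose true class is index 1, with predictions over 3 classes:
y = np.array([0.0, 1.0, 0.0])
p = np.array([0.2, 0.7, 0.1])
loss = cross_entropy_onehot(y, p)  # -log(0.7) ≈ 0.357
```

As the text notes, the loss shrinks as the predicted distribution approaches the one-hot ground truth: a prediction of 1.0 on the true class gives zero loss.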
The loss function of the proposed networks is a weighted sum of two terms. The first is a multiclass cross entropy term for the generator that encourages the segmented output to be similar to the ground truth. We use G(x) to denote the class probability map over C classes that the segmentation model generates given an input image x of size H × W. This segmentation model predicts the class label at each pixel independently:

ℓ_mce(G(x), y) = -Σ_{i=1}^{H×W} Σ_{c=1}^{C} y_{ic} log G(x)_{ic}, (3)

where ℓ_mce represents the multiclass cross entropy loss over an image of size H × W, in which the per-pixel class probability is predicted as G(x)_{ic}.
The second loss term represents the loss of the adversarial network. If the adversarial network can distinguish the output of the generator from the ground truth, the loss value is large; otherwise, it is small. Because this loss is computed over the whole image, or a large portion of it, dissimilarity in high-order statistics can be penalized by the adversarial loss term. We denote the output of the adversarial network by D(·). Training the adversarial model is equivalent to minimizing the following binary classification loss:

ℓ_adv = ℓ_bce(D(y), 1) + ℓ_bce(D(G(x)), 0), (4)

where ℓ_bce(ẑ, z) = -[z log ẑ + (1 - z) log(1 - ẑ)] denotes the binary cross entropy loss, and 1 and 0 represent the label maps of the adversarial network when its input is the ground truth y or the output of the generator G(x), respectively.
Given a dataset of original images x_n and corresponding ground truths y_n, n = 1, …, N, we define the total loss function of the proposed GAN-based semantic segmentation networks as

L = Σ_{n=1}^{N} ℓ_mce(G(x_n), y_n) + λ ℓ_bce(D(G(x_n)), 1), (5)

where λ denotes the weight factor. In this paper, we set it to 0.01.
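The weighted combination in (5) can be sketched numerically as follows. The per-term losses are the cross entropies defined above; the sign convention of the adversarial term (the generator is rewarded when the discriminator labels its output as real) is an assumption on our part, since the original equation image is not available.

```python
import numpy as np

def bce(z_pred, z_true, eps=1e-12):
    """Binary cross entropy for the discriminator's scalar output."""
    z_pred = np.clip(z_pred, eps, 1.0 - eps)
    return -(z_true * np.log(z_pred) + (1 - z_true) * np.log(1 - z_pred))

def total_loss(mce_term, d_on_fake, lam=0.01):
    """Segmentation loss plus a lambda-weighted adversarial term (assumed form):
    the generator wants D to output 1 ("real") on its segmented image."""
    return mce_term + lam * bce(d_on_fake, 1.0)

# Hypothetical values: a segmentation loss of 0.8 and D(G(x)) = 0.4.
loss = total_loss(mce_term=0.8, d_on_fake=0.4, lam=0.01)
```

With λ = 0.01, as in the paper, the adversarial term acts as a small regularizer on top of the pixel-wise segmentation loss rather than dominating it.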
3. Experiments
To evaluate the proposed scene understanding algorithm based on GAN, we conduct experiments on two widely used datasets, PASCAL VOC 2012 [19] and the urban scene understanding dataset Cityscapes [1]. We train the networks on an NVIDIA Tesla K40 GPU and an Intel Xeon E5 CPU with 2000 iterations and a batch size of 16.
To quantitatively assess the accuracy of scene parsing, four performance indices are adopted: pixel accuracy (PA), mean pixel accuracy (MPA), mean intersection over union (MeanIoU), and frequency weighted intersection over union (FWIoU), whose formulations [20] are given in (6)−(9). We assume a total of k classes and let p_ij be the number of pixels of class i inferred to belong to class j, so that p_ii denotes the true positives, while p_ij (i ≠ j) and p_ji are usually interpreted as false positives and false negatives, respectively:

PA = Σ_i p_ii / Σ_i Σ_j p_ij, (6)
MPA = (1/k) Σ_i (p_ii / Σ_j p_ij), (7)
MeanIoU = (1/k) Σ_i p_ii / (Σ_j p_ij + Σ_j p_ji − p_ii), (8)
FWIoU = (Σ_i Σ_j p_ij)^{-1} Σ_i (Σ_j p_ij) p_ii / (Σ_j p_ij + Σ_j p_ji − p_ii). (9)
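Under the notation above, all four indices can be computed from a single k × k confusion matrix. A NumPy sketch (ours, for illustration; rows index the true class i, columns the predicted class j):

```python
import numpy as np

def segmentation_metrics(p):
    """PA, MPA, MeanIoU, FWIoU from a k-by-k confusion matrix p,
    where p[i, j] counts pixels of true class i predicted as class j."""
    p = p.astype(float)
    tp = np.diag(p)          # true positives per class (p_ii)
    row = p.sum(axis=1)      # pixels of each true class (tp + false negatives)
    col = p.sum(axis=0)      # pixels predicted as each class (tp + false positives)
    union = row + col - tp   # denominator of the per-class IoU
    pa = tp.sum() / p.sum()                       # Eq. (6)
    mpa = np.mean(tp / row)                       # Eq. (7)
    miou = np.mean(tp / union)                    # Eq. (8)
    fwiou = np.sum(row * (tp / union)) / p.sum()  # Eq. (9)
    return pa, mpa, miou, fwiou

# Two classes, with one misclassified pixel in each direction:
p = np.array([[3, 1],
              [1, 3]])
pa, mpa, miou, fwiou = segmentation_metrics(p)  # pa = 0.75, miou = 0.6
```

FWIoU differs from MeanIoU only in weighting each class's IoU by how often that class actually occurs, so rare classes influence it less.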
We use adaptive moment estimation (Adam) [21] to optimize the algorithm because it requires little parameter tuning; β1 and β2 are set to 0.9 and 0.999, respectively. We have also compared the convergence of the algorithm under different learning rates to select the optimal value, as shown in Figure 4. Based on this figure, we select the optimal learning rate for these experiments.
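For reference, a single Adam update with the stated β1 = 0.9 and β2 = 0.999 looks like the following plain NumPy sketch of the optimizer in [21]; the learning rate of 0.01 and the toy objective are our own illustrative choices, not the paper's settings.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias corrections (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 1001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because the update divides by the running gradient magnitude, the effective step size is roughly bounded by the learning rate, which is why Adam needs little per-problem tuning.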
3.1. Experiment 1: PASCAL VOC 2012
We carry out experiments on the PASCAL VOC 2012 segmentation dataset, which contains 20 object categories and 1 background class. Its augmented version [22] includes 10582, 1449, and 1456 images for training, validation, and testing, respectively. We compare our method with the classic FCN [4] and the popular DeepLab [5]; the per-class accuracy is shown in Table 1. Except for the bicycle class, our approach achieves the highest accuracy on the other 20 classes. Table 2 lists the four performance indices, PA, MPA, MeanIoU, and FWIoU, for the different algorithms. From the left column to the right, the accuracy of the algorithms gradually increases, and the proposed approach attains the highest score on all four indices.


To qualitatively validate the proposed method, several examples are exhibited in Figure 5. For the "cat" in row one, our method segments the cat in accordance with the ground truth, whereas FCN and DeepLab also segment noisy regions. For the "cow" and "child" in rows two and five, details such as legs are segmented by our method but missed by the other two methods. In the fourth image, the small cow and the person are segmented with finer contours than by the other two methods. In short, the subjective quality of the segmentation using DeepLab is better than that using FCN, and the results of our method outperform both.
3.2. Experiment 2: Cityscapes
Cityscapes [1] is a dataset for semantic urban scene understanding released in 2016. It contains 5000 high-quality, finely annotated pixel-level images collected from 50 cities in different seasons. The images, which consist of 2975, 500, and 1524 images for training, validation, and testing, respectively, are divided into 19 categories. Because this dataset was only recently released, previous algorithms have not published code for it, so we provide only a subjective comparison on Cityscapes between our method and FCN.
Several examples are shown in Figure 6. It is clear that our proposed method outperforms FCN, recovering more detail and distinguishing roads, buildings, cars, etc.
4. Conclusion
In this paper, we propose a scene understanding framework based on generative adversarial networks, which trains a fully convolutional semantic segmentation network with an adversarial network and adopts high-order potentials to achieve fine details and consistency in the segmented semantic image. We perform a number of experiments on two well-known datasets, PASCAL VOC 2012 and Cityscapes, analyzing not only per-class accuracy but also four accuracy indices across different semantic segmentation algorithms. The quantitative and qualitative assessments show that our proposed method achieves the best accuracy among all compared algorithms. In the future, we will conduct more experiments on the Cityscapes dataset and address the misclassification caused by class imbalance.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by Shanghai Science and Technology Committee (no. 15590501300).
References
[1] M. Cordts, M. Omran, S. Ramos et al., "The Cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), pp. 3213–3223, USA, July 2016.
[2] M. Oberweger, P. Wohlhart, and V. Lepetit, "Hands deep in deep learning for hand pose estimation," Computer Science, 2015.
[3] C. Liu, J. Yuen, and A. Torralba, "Nonparametric scene parsing via label transfer," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2368–2382, 2011.
[4] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 3431–3440, USA, June 2015.
[5] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "Semantic image segmentation with deep convolutional nets and fully connected CRFs," Computer Science, vol. 4, pp. 357–361, 2014.
[6] S. Zhou, J. N. Wu, Y. Wu, and X. Zhou, "Exploiting local structures with the Kronecker layer in convolutional networks," 2015.
[7] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," arXiv preprint, 2015.
[8] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
[9] S. Zheng, S. Jayasumana, B. Romera-Paredes et al., "Conditional random fields as recurrent neural networks," in Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015), pp. 1529–1537, Chile, December 2015.
[10] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 6230–6239, Honolulu, HI, July 2017.
[11] A. Arnab, S. Jayasumana, S. Zheng, and P. H. Torr, "Higher order conditional random fields in deep neural networks," in Computer Vision – ECCV 2016, vol. 9906 of Lecture Notes in Computer Science, pp. 524–540, Springer International Publishing, Cham, 2016.
[12] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza et al., "Generative adversarial nets," in Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014), pp. 2672–2680, Canada, December 2014.
[13] C. Ledig, L. Theis, F. Huszar et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 105–114, Honolulu, HI, July 2017.
[14] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 5967–5976, Honolulu, HI, July 2017.
[15] J. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision (ICCV 2017), pp. 2242–2251, Venice, October 2017.
[16] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie, "Stacked generative adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 1866–1875, Honolulu, HI, July 2017.
[17] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, "Generative adversarial text to image synthesis," arXiv preprint, 2016.
[18] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 1–9, Boston, MA, USA, June 2015.
[19] M. Everingham, L. van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.
[20] A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, P. Martinez-Gonzalez, and J. Garcia-Rodriguez, "A survey on deep learning techniques for image and video semantic segmentation," Applied Soft Computing, vol. 70, pp. 41–65, 2018.
[21] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," arXiv preprint, 2014.
[22] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik, "Semantic contours from inverse detectors," in Proceedings of the IEEE International Conference on Computer Vision (ICCV 2011), pp. 991–998, Spain, November 2011.
Copyright
Copyright © 2018 Xiaoli Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.