Abstract

In recent years, methods based on neural networks have achieved excellent performance in image segmentation. However, segmentation around edge areas remains unsatisfactory when boundaries are complex. This paper proposes an edge prior semantic segmentation architecture based on a Bayesian framework. The framework is composed of three networks: a likelihood network and an edge prior network at the front, followed by a constraint network. The likelihood network produces a rough segmentation result, which is then refined with edge prior information, including the edge map and the edge distance. For the constraint network, a modified domain transform method is proposed, in which the diffusion direction is revised through a newly defined distance map and additional constraint conditions. Experiments comparing the proposed approach with several contrastive methods show that it performs well, outperforming FCN by 0.0209 in average accuracy on the ESAR data set.

1. Introduction

1.1. Background

Semantic segmentation of images is an important task in computer vision; it aims to classify each pixel in an image and can be applied to autonomous driving, 3D reconstruction, and other fields. Synthetic Aperture Radar (SAR) image semantic segmentation is widely utilized in military and civilian applications because SAR can acquire images at any time of day or night, independently of weather conditions. Traditional segmentation methods can be divided into three steps: segmentation into superpixel blocks, feature extraction, and classifier selection. Methods like Meanshift [1] and Watershed [2] are typical for extracting superpixel blocks. For classification, Support Vector Machine [3], naive Bayes [4], and Maximum Likelihood [5] are commonly used algorithms. Markov Random Field (MRF) [6] and Conditional Random Field (CRF) models were later introduced to take the information of surrounding pixels into account.

However, owing to the special imaging mechanism of SAR imagery, including polarization characteristic expression [7] and multiplicative non-Gaussian noise, simply applying traditional strategies designed for optical images to SAR imagery is no longer suitable. Many methods for SAR image segmentation have been proposed. Liu et al. [8] proposed a SAR image segmentation method with a reaction-diffusion level set evolution equation in an active contour model. Zhang et al. [9] developed a semisupervised SAR image segmentation method based on a hierarchical CRF model. Liu et al. [10] proffered a new SAR image segmentation approach via hierarchical visual semantics and an adaptive neighborhood multinomial latent model.

Meanwhile, the color information of polarized SAR imagery has also begun to attract researchers' attention. Through methods like Pauli decomposition [11] based on the scattering matrix, pseudocolor and texture features in SAR imagery can be extracted, so that many segmentation methods designed mainly for optical images can be utilized. Chen et al. [12] proposed a multifeature segmentation method for high-resolution polarimetric SAR images based on the fractal net evolution approach. Lang et al. [13] developed the generalized mean shift algorithm for polarimetric SAR image segmentation. In recent years, since convolutional neural networks (CNN) have achieved remarkable performance in image classification, various networks suitable for semantic segmentation, such as Fully Convolutional Networks (FCN) [14], DeepLab [15] (deep convolutional nets, atrous convolution, and fully connected CRFs), and CRF as Recurrent Neural Networks (CRFasRNN) [16], were proposed based on VGG-Net.

CNN automatically learns multilevel features due to its multilayer structure; it can judge well what kinds of objects an image contains and has achieved remarkable performance in image classification, but accurate segmentation is difficult because of the loss of object edge details. FCN modifies CNN to obtain a classification result for each pixel and thus implements semantic segmentation. In general, CRF serves as a postprocessing step for FCN [17, 18] to improve the segmentation results, and DeepLab is a typical case. DeepLab performs semantic segmentation with atrous convolution, deep convolutional nets, and fully connected CRFs. The atrous convolution enlarges the receptive field without additional downsampling, and the fully connected CRF overcomes the loss of localization accuracy caused by the invariance of deep features. For better edge correction, Chen et al. [19] replaced the fully connected CRF with the Domain Transform (DT) [20] to improve efficiency. Though DT is traditionally used for general image processing, it is still effective in filtering rough segmentation results, making object edges precise and integrated.

With complex boundaries in SAR imagery, segmentation near object edges usually has a high error rate. Considering the limitations of traditional classifiers for image classification, FCN8, the most accurate of the FCN variants, is chosen to improve the accuracy of the initial classification. FCN is responsible for convolutional feature extraction and works directly on the RGB vector of each pixel. However, the broad receptive field of the convolution kernels and the dimension reduction in the pooling layers blur the output of a single pixel, limiting the segmentation accuracy. Therefore, DT and the fully connected CRF [21] are introduced to take the influence of surrounding pixels into consideration. Traditional edge feature extraction methods include Canny [22], LSD [23], and the ratio of exponentially weighted averages (ROEWA) edge extraction [24]. In recent years, many new edge detection methods have been proposed. Zhang et al. [25] proposed an edge detector using structured random forests as the classifier, which makes full use of RGB-D image information from Kinect. Tabb and Ahuja [26] described an algorithm for image segmentation at multiple scales based on edge and region detection. Mandal et al. [27] presented a novel scheme for segmenting the image boundary in optoacoustic images. Marmanis et al. [28] proposed a boundary-aware semantic segmentation algorithm that explicitly represents and extracts the boundaries between regions of different semantic classes; the class boundaries improve the deep convolutional neural network for semantic segmentation.

DT can be seen as a recurrent neural network (RNN) [19]. The internal DT, combining the edge information and the score map, is applied for internal edge correction. The modified DT is utilized for external edge correction by combining the edge distance map and the rough segmentation result. Meanwhile, the holistically nested edge detection (HED) network [29] is employed to provide edge prior information and improve segmentation accuracy.

The contributions of the proposed approach are as follows:
(i) On the basis of the Bayesian framework, this study presents a parallel network architecture, composed of two parallel networks, the likelihood network (FCN8) and the edge prior network (HED), followed by a constraint network (the directed DT).
(ii) To achieve accurate edge detection, the edge map obtained from HED is utilized as edge prior information to sharpen blurred edge-pixel categories.
(iii) Considering that the edge distribution in traditional DT is not completely trustworthy and some edges may be lost, the Directed Domain Transform (DDT) is proposed for image classification with complex edges. In this method, an edge distance map is defined to limit the diffusion direction and avoid the spreading of erroneous labels. Besides, the gradient descent method is adopted for parameter determination, and the difference vectors of color and class probability between adjacent pixels are considered to deal with misclassification.

The remainder of this paper is organized as follows: Section 2 introduces the algorithms involved in each step, including HED, the fully connected CRF, internal fusion, and modified DT, and Section 3 shows the whole framework of our proposed method. In Section 4, the proposed method is tested, and its performance is compared with other semantic segmentation methods. Finally, this paper is concluded in Section 5.

2. Framework

The framework of our proposed method is shown in Figure 1. The upper part of the structure produces the edge prior information, including the edge map obtained by HED and the corresponding edge distance map. In the lower part of the structure, the internal DT filters the rough FCN8 segmentation result with the fused edge prior information; the output of the resulting FCN8+DT model is input into the fully connected CRF. Finally, the Directed DT combines the coarse segmentation result, the edge map, and the edge distance map to obtain the fine segmentation result.

The specific process of our proposed method mainly consists of two phases, training and testing. These two phases are described in the following.

Training: (1) based on VGG-Net, training FCN8 with the ESAR data. (2) Adding the CRF as postprocessing of FCN8 to train the CRF parameters. (3) Extracting the edge maps from the internal FCN8 network and from HED. (4) Generating the fused edge map through internal edge fusion. (5) Fine-tuning the entire model after adding the DDT.

Testing: (1) the test image is input to FCN8 to obtain the score map, which has the same size as the original image; at the same time, HED derives the corresponding reference edge map. (2) The internal edge map from FCN8 is fused with the HED edge map. (3) The fused edge map is input into the DT and the fully connected CRF. (4) The DDT filters the raw output, guided by the edge distance map.
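To make the data flow concrete, the test phase can be read as a simple function composition. The Python snippet below is a minimal sketch only; every callable (`fcn8`, `hed`, `fuse`, and so on) is a hypothetical stand-in for the corresponding stage, not the authors' actual implementation.

```python
import numpy as np

def segment(image, fcn8, hed, internal_edge, fuse, dt_crf, ddt, distance_map):
    """Test-phase pipeline sketch; all callables are hypothetical stand-ins."""
    score_map = fcn8(image)                        # (H, W, C) per-pixel class scores
    hed_edges = hed(image)                         # (H, W) edge strengths in [0, 1]
    fused = fuse(internal_edge(image), hed_edges)  # step (2): internal edge fusion
    coarse = dt_crf(score_map, fused)              # step (3): internal DT + fully connected CRF
    dist = distance_map(hed_edges)                 # distance of each pixel to its nearest edge
    fine = ddt(coarse, fused, dist)                # step (4): directed DT filtering
    return np.argmax(fine, axis=-1)                # final per-pixel class labels
```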

3. Methodology

3.1. HED

HED is an end-to-end edge detection algorithm based on fully convolutional neural networks and deeply supervised nets; see [29, 30]. By relying on the weight updates of the convolution kernels, HED automatically learns rich hierarchical features and determines which features matter for abstracting object edges. The network architecture of HED is shown in Figure 2: five side-output layers are inserted after convolutional layers, and deep supervision is imposed at each side-output layer so that the results are steered toward edge detection. Finally, the output of HED is obtained through a weighted fusion of the multiscale side outputs. As can be seen from Figure 2, HED builds on VGG-Net with two modifications. First, HED connects a side-output layer to the last convolutional layer of each stage, respectively conv1_2, conv2_2, conv3_3, conv4_3, and conv5_3, to obtain multiple prediction results, and then fuses all the edge maps. The receptive field of each such convolutional layer is the same as that of the corresponding side-output layer, since the side-output layer is implemented as a convolution layer with a 1×1 kernel and one output channel. Second, HED cuts the fifth pooling layer and all the fully connected layers of VGG-Net, to obtain meaningful side-output results and to reduce cost. The main goal of HED is to perform whole-image training and prediction and to achieve nested multilevel feature learning.
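As a structural illustration, the side-output branches and the weighted fusion can be written compactly in PyTorch. This is a sketch under the assumptions stated above (1×1 convolutions, five side outputs); the class names and the bilinear upsampling choice are ours, not the original implementation's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutput(nn.Module):
    """One HED side-output branch: 1x1 conv to a single channel, then upsample."""
    def __init__(self, in_channels: int, scale: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.scale = scale

    def forward(self, feat):
        s = self.score(feat)
        return F.interpolate(s, scale_factor=self.scale,
                             mode='bilinear', align_corners=False)

class WeightedFusion(nn.Module):
    """Weighted fusion of the side outputs; the weights h_m are learned."""
    def __init__(self, num_sides: int = 5):
        super().__init__()
        self.fuse = nn.Conv2d(num_sides, 1, kernel_size=1, bias=False)

    def forward(self, side_outputs):
        return self.fuse(torch.cat(side_outputs, dim=1))
```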

HED is a network which can learn features to produce edge maps approaching the ground truth. In the training process, all of the standard network parameters are denoted as $\mathbf{W}$. Suppose the network has $M$ side-output layers; a classifier is connected to each side-output layer, and the weights of the corresponding classifiers are denoted as $\mathbf{w} = (\mathbf{w}^{(1)}, \ldots, \mathbf{w}^{(M)})$. Then, the image-level loss function is defined by this formulation:

$$\mathcal{L}_{\text{side}}(\mathbf{W}, \mathbf{w}) = \sum_{m=1}^{M} \alpha_m \, \ell_{\text{side}}^{(m)}(\mathbf{W}, \mathbf{w}^{(m)})$$

where $\alpha_m$ is the weight of the $m$-th side-output loss.

For normal images, the number of edge points is much smaller than that of nonedge points; only 10% of the ground truth consists of edge pixels. Therefore, HED takes a simple strategy to solve the bias problem between edge and nonedge pixels. Let $X$ represent the original image and $Y$ the corresponding ground truth, and denote the edge and nonedge pixel sets as $Y_+$ and $Y_-$, respectively. The class-balanced cross-entropy loss function is defined as follows:

$$\ell_{\text{side}}^{(m)}(\mathbf{W}, \mathbf{w}^{(m)}) = -\beta \sum_{j \in Y_+} \log \Pr(y_j = 1 \mid X; \mathbf{W}, \mathbf{w}^{(m)}) - (1 - \beta) \sum_{j \in Y_-} \log \Pr(y_j = 0 \mid X; \mathbf{W}, \mathbf{w}^{(m)})$$

where $\beta = |Y_-| / |Y|$ represents a class-balancing weight and $1 - \beta = |Y_+| / |Y|$. $\Pr(y_j = 1 \mid X) = \sigma(a_j^{(m)})$ is calculated by the sigmoid function $\sigma(\cdot)$ on the activation value $a_j^{(m)}$ at pixel $j$.
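The class-balanced loss translates directly into code. The sketch below assumes PyTorch and per-pixel logits; the function name is ours.

```python
import torch
import torch.nn.functional as F

def class_balanced_bce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Class-balanced cross-entropy: edge pixels weighted by beta = |Y-|/|Y|,
    nonedge pixels by 1 - beta = |Y+|/|Y|. target holds 1 for edge pixels."""
    num_pos = target.sum()
    beta = 1.0 - num_pos / target.numel()          # fraction of nonedge pixels
    weights = beta * target + (1.0 - beta) * (1.0 - target)
    return F.binary_cross_entropy_with_logits(
        logits, target, weight=weights, reduction='sum')
```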

A "weighted-fusion" layer is introduced to connect all prediction results of the side-output layers; the fusion weights, obtained through training, are denoted as $\mathbf{h} = (h_1, \ldots, h_M)$. Then, the loss function of the fusion layer is as follows:

$$\mathcal{L}_{\text{fuse}}(\mathbf{W}, \mathbf{w}, \mathbf{h}) = \text{Dist}\left(Y, \; \sigma\!\left(\sum_{m=1}^{M} h_m \hat{A}_{\text{side}}^{(m)}\right)\right)$$

where $\text{Dist}(\cdot, \cdot)$ is the cross-entropy distance between the fused prediction and the ground truth.

The activations of the side output of layer $m$ are $\hat{A}_{\text{side}}^{(m)} = \{a_j^{(m)}, \; j = 1, \ldots, |Y|\}$, where $m = 1, \ldots, M$. Then, the total loss function is as follows:

$$(\mathbf{W}, \mathbf{w}, \mathbf{h})^{*} = \arg\min \left( \mathcal{L}_{\text{side}}(\mathbf{W}, \mathbf{w}) + \mathcal{L}_{\text{fuse}}(\mathbf{W}, \mathbf{w}, \mathbf{h}) \right)$$

The greyscale edge map is obtained through HED and it can be utilized as edge prior information.

3.2. Fully Connected CRFs

Probabilistic graphical models have been proven to effectively improve image classification accuracy. CRF performs well in image classification and segmentation; it can refine fuzzy and indistinct pixel-level category annotations into sharp edge distributions and fine segmentation results. Therefore, CRF can be used to correct the classification errors caused by the fuzzy output of FCN.

Suppose $x_i$ represents the category label of point $i$, taking values from the category set $\mathcal{L} = \{l_1, l_2, \ldots, l_L\}$. The variable $X$ is a random vector composed of $x_1, x_2, \ldots, x_N$, where $N$ represents the number of pixels in the image. The original image is denoted as $I$, and $Z(I)$ represents a normalization function. CRF describes the above relationship through a Gibbs distribution:

$$P(X = x \mid I) = \frac{1}{Z(I)} \exp\left(-E(x \mid I)\right)$$

In the fully connected CRF model, the energy of a labeling $x$ is expressed by the following formula:

$$E(x) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j)$$

where $\psi_u(x_i)$ is a unary energy term representing the energy of labeling pixel $i$ as $x_i$, and $\psi_p(x_i, x_j)$ is the energy of simultaneously labeling pixel $i$ as $x_i$ and pixel $j$ as $x_j$. The value of the unary energy term is derived from FCN. FCN does not consider the smoothness and continuity of the point class assignment; the pairwise potential provides smoothing rules based on the image data and encourages points with the same attributes to be assigned the same category. The pairwise potential is modeled with Gaussian kernels as follows:

$$\psi_p(x_i, x_j) = \mu(x_i, x_j) \sum_{m=1}^{K} w^{(m)} k^{(m)}(\mathbf{f}_i, \mathbf{f}_j)$$

In this formulation, if $x_i \neq x_j$, $\mu(x_i, x_j) = 1$; otherwise, $\mu(x_i, x_j) = 0$. $K$ is the number of Gaussian kernels $k^{(m)}$ applied to the feature vectors, and $w^{(m)}$ are their weights. $\mathbf{f}_i$ represents the feature vector of point $i$, such as the RGB color value or the two-dimensional coordinates. Minimizing the CRF energy function yields the class labels.
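In practice, inference in the fully connected CRF of [21] is available in the third-party pydensecrf package. The sketch below shows a typical usage; the kernel parameters are illustrative assumptions of ours, not values taken from this paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image: np.ndarray, probs: np.ndarray, iters: int = 5) -> np.ndarray:
    """image: (H, W, 3) uint8 Pauli pseudocolor; probs: (C, H, W) FCN8 softmax."""
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))    # unary terms = -log probability
    d.addPairwiseGaussian(sxy=3, compat=3)         # smoothness kernel (coordinates only)
    d.addPairwiseBilateral(sxy=60, srgb=10,        # appearance kernel (coordinates + color)
                           rgbim=np.ascontiguousarray(image), compat=5)
    q = d.inference(iters)                         # mean-field approximate inference
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)
```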

3.3. Internal DT
3.3.1. Internal Edge Fusion

The internal edge map is output by the internal edge net, which utilizes the features obtained from the intermediate layers of FCN8. In the internal edge net, these feature layers are resized to the same spatial scale before being stacked. A convolutional layer with a 1×1 convolution kernel and one output channel is introduced to predict the edge strength, and a ReLU layer is added to limit the output to a range of 0 to 1. Since the ground truth does not contain reference edges, the implicitly generated edge map might be inaccurate. HED, by contrast, predicts object edges by training on various contour types instead of calculating gradients for edge discrimination, so there is a direct connection between multilevel features. In the proposed approach, the edge map from HED is therefore fused with the internal one, as expressed in Equation (8):

$$g = \max\left(0, \; g_1 + w \, g_2\right) \tag{8}$$

where $g_1$ is the internal edge map and $g_2$ represents the one from HED; $g$ denotes the fused edge map used as the new input of the DT; and $w$ is the fusion weight, with an initial value of 0.5. Generally, $w$ is calculated using the gradient descent algorithm. The contributions of $g_1$ and $g_2$ in backpropagation are as follows:

$$\frac{\partial L}{\partial g_1} = \frac{\partial L}{\partial g} \cdot \mathbb{1}[g > 0], \qquad \frac{\partial L}{\partial g_2} = w \, \frac{\partial L}{\partial g} \cdot \mathbb{1}[g > 0] \tag{9}$$

where $L$ is the loss function. With $g_1$ and $g_2$ fixed, $w$ is updated through Equation (10):

$$w' = w - \eta \, \frac{\partial L}{\partial w}, \qquad \frac{\partial L}{\partial w} = \sum_j \frac{\partial L}{\partial g_j} \, g_{2,j} \tag{10}$$

where $w$ and $w'$ represent the current and updated values, respectively, and $\eta$ is the learning rate. In practical application, a one-layer convolutional network is defined to train the parameter, as shown in Equation (11): the network consists of a convolution kernel with a single channel, an eltwise layer for the add operation, and a ReLU layer. $w$ ranges from zero to infinity and is initialized to 0.5.
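Numerically, the fusion layer is tiny. Below is a NumPy sketch of the forward pass and the weight gradient, under our reconstruction of Equation (8); the function names are ours.

```python
import numpy as np

def fuse_edges(g_internal: np.ndarray, g_hed: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Eltwise add of the internal map and the w-weighted HED map, clamped by ReLU."""
    return np.maximum(0.0, g_internal + w * g_hed)

def grad_w(dL_dg: np.ndarray, g_fused: np.ndarray, g_hed: np.ndarray) -> float:
    """Gradient of the loss with respect to w; the ReLU gates each pixel's contribution."""
    return float(np.sum(dL_dg * (g_fused > 0) * g_hed))
```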

3.3.2. Standard DT

In the Domain Transform [19] (DT) network, the raw signal and a positive "domain transform density" are the inputs, and the filtered signal is the output. For 1-D signals, the output is calculated recursively as follows:

$$y_i = (1 - w_i) \, x_i + w_i \, y_{i-1} \tag{12}$$

where $x_i$ is the input signal and $y_i$ the filtered output.

The weight $w_i$ is related to the density $d_i$ and defined as follows:

$$w_i = \exp\left(-\frac{\sqrt{2} \, d_i}{\sigma_s}\right), \qquad d_i = 1 + \frac{\sigma_s}{\sigma_r} \, g_i \tag{13}$$

where $g_i$ is the edge map obtained by exploiting features from the intermediate network layers, and $\sigma_s$ and $\sigma_r$ are the standard deviations of the filter kernel over the spatial domain and the edge map, respectively. The details of $d_i$, $\sigma_s$, and $\sigma_r$ can be found in DeepLab DT [19].

Filtering from one side by Equation (12) is asymmetric. Therefore, DT filters 1-D signals from both sides, left-to-right and right-to-left. For 2-D signals, DT employs 1-D filtering in each dimension: a horizontal pass (left-to-right and right-to-left) is performed along each row, followed by a vertical pass (top-to-bottom and bottom-to-top) along each column.
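A direct NumPy transcription of Equations (12) and (13) is shown below: a minimal sketch of one horizontal pass, with sigma_s and sigma_r left as free parameters. The full filter repeats this pass along columns as well, and typically over several iterations.

```python
import numpy as np

def dt_weights(edges: np.ndarray, sigma_s: float, sigma_r: float) -> np.ndarray:
    """Eq. (13): density from the edge map, then the recursion weight."""
    d = 1.0 + (sigma_s / sigma_r) * edges
    return np.exp(-np.sqrt(2.0) * d / sigma_s)

def dt_row_pass(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One horizontal pass of Eq. (12) over each row: left-to-right, then right-to-left."""
    y = x.astype(np.float64).copy()
    h, n = y.shape[:2]
    for r in range(h):
        for i in range(1, n):                  # left-to-right recursion
            y[r, i] = (1 - w[r, i]) * y[r, i] + w[r, i] * y[r, i - 1]
        for i in range(n - 2, -1, -1):         # right-to-left recursion
            y[r, i] = (1 - w[r, i]) * y[r, i] + w[r, i] * y[r, i + 1]
    return y
```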

Actually, the edge map is implicitly generated by the DT network and does not work as well as an explicit edge network for complex boundaries. Besides, some reverse diffusion occurs in the four-direction filtering, spreading erroneous categories. Aimed at these shortcomings of the DT network, the corresponding improvements in the proposed method are listed below:
(1) Introduce the HED network to explicitly obtain the edge map, which truly reflects the edge strength of different connection types.
(2) Define an edge distance map and obtain the positive diffusion direction, that is, towards the nearest edge. Besides, the diffusion coefficients are adjusted by the color vector and the class probability vector to deal with misclassification.

3.4. Directed DT

To acquire a more precise edge map, an improved DT algorithm—DDT—is adopted in the presented approach, as shown in Figure 3. First, the edge distance map is defined to eliminate inappropriate diffusion. In four-direction filtering, the diffusion coefficient is adjusted by two parameters. For pixels that satisfy the diffusion conditions, the point is diffused in four directions until the iteration ends. Finally, an area-filling method is employed to fill holes.

In the distance map, the edge distance is measured by the minimum distance from each pixel to its nearest edge, as shown in formula (14):

$$D_i = \min_{e \in E} \left\| p_i - p_e \right\|_2 \tag{14}$$

where $D_i$ is the minimum distance, $E$ denotes all boundary pixels in the edge map, and $p_i$ is the coordinate of pixel $i$.
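The distance map of formula (14) is a standard Euclidean distance transform; with SciPy it is a single call. The binarization threshold below is an illustrative assumption of ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_map(edge_map: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Distance from every pixel to its nearest edge pixel (strength >= thresh).
    distance_transform_edt measures the distance to the nearest zero entry,
    so edge pixels are encoded as zeros here."""
    return distance_transform_edt(edge_map < thresh)
```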

In Figure 3(c), the dashed line means the actual edge and the solid one denotes the edge to be filtered. A and B are different class labels. The correct diffusion direction is from pixels far away to pixels near the actual edge.

As shown in Equation (15), the diffusion potential between adjacent pixels $i$ and $j$ is defined from the distance map:

$$V_{ij} = D_i - D_j \tag{15}$$

For adjacent pixels with different classes, the diffusion condition from $i$ to $j$ is that $D_i$ should be larger than $D_j$; that is, labels may only spread towards the nearest edge.

Since a large color gradient within the same category may produce false edges, and different classes with a small gradient may produce leaked edges, the weight in Equation (12) is adapted as follows:

$$w_i = \exp\left(-\frac{\sqrt{2}}{\sigma_s}\left(1 + \lambda_1 \left\| \Delta \mathbf{c}_i \right\| + \lambda_2 \left\| \Delta \mathbf{P}_i \right\|\right)\right) \tag{16}$$

The weight $w_i$ is determined by the color difference vector $\Delta \mathbf{c}_i$ and the class probability difference vector $\Delta \mathbf{P}_i$ of adjacent pixels. Two parameters, $\lambda_1$ and $\lambda_2$, are used to balance the spreading strength and are determined through the gradient descent method. Equation (12) is unrolled to compute the derivatives with respect to $\lambda_1$ and $\lambda_2$:

$$G_k' = G_k + \sum_i \delta_i \, \frac{\partial L}{\partial w_i} \frac{\partial w_i}{\partial \lambda_k}, \qquad k = 1, 2$$

where $\delta_i$ represents whether area $i$ has a diffusion or not, $L$ is the loss function, and $G_k$ and $G_k'$ denote the current and updated gradients, respectively.

The segmentation error of the DDT output is backpropagated to $\lambda_1$ and $\lambda_2$.

After the gradients of $\lambda_1$ and $\lambda_2$ are updated, the gradient of the weight $w_i$ can be further obtained.
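Putting the pieces together, a 1-D sketch of one directed pass is given below. The blocking rule, zeroing the weight whenever the upstream pixel is not farther from the edge, is our reading of the diffusion condition above; the weights are assumed to come from the adapted Equation (16).

```python
import numpy as np

def ddt_pass_1d(x: np.ndarray, w: np.ndarray, dist: np.ndarray) -> np.ndarray:
    """One directed left-to-right pass: pixel i-1 may diffuse into pixel i only
    when it lies farther from the nearest edge (dist[i-1] > dist[i])."""
    y = x.astype(np.float64).copy()
    for i in range(1, len(y)):
        wi = w[i] if dist[i - 1] > dist[i] else 0.0   # block reverse diffusion
        y[i] = (1 - wi) * y[i] + wi * y[i - 1]
    return y
```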

4. Experiment

4.1. Experiment Data

The first experimental data set in this paper is an ESAR L-band PolInSAR image from the German Aerospace Center, with a dimension of pixels. The spatial resolution of the SAR imagery is . There are five classes, namely, building, forest, farmland, road, and others, as shown in Figure 4. The second experimental data set is an L-band fully polarized image of the Foulum area in Denmark, acquired on April 17, 1998. This data set mainly consists of farmland, forest, residential area, higher crops, shrubs, and untilled glebe, as shown in Figure 5.

For these two data sets, the entire image is divided into four subimages. In each round, one subimage is used for testing, and the remaining three are slid in four directions for training. After four rounds, the four segmented images are spliced together as the final result.

4.2. Experiment Result

Through HED, the edge map of ESAR data is obtained. The edge map and edge distance map are shown in Figure 6.

The proposed algorithm is evaluated on the ESAR data against seven methods in total: (1) a semantic segmentation algorithm based on Meanshift and SVM, (2) a CRF based on Meanshift and a Potts prior, (3) FCN8, (4) DeepLab, (5) FCN8+DT+the fully connected CRF, (6) FCN8+HED+DT+the fully connected CRF, and (7) the proposed method. The traditional methods first use Meanshift to obtain coarsely divided pixel blocks and extract intensity features, texture features, and polarization decomposition features for each block [31]; they then either classify these features directly with SVM [32] or introduce neighborhood information with a Potts prior in a CRF [31]. These are the first and second methods of our comparative experiment. The third comparison method is FCN8, the most accurate of the FCN variants. The fourth comparative method is DeepLab, which employs the fully connected CRF as postprocessing. Among the remaining three methods, the last one is our proposed method, the sixth method removes the DDT from our proposed method, and the fifth inserts DT between FCN8 and the fully connected CRF. The results of the seven methods are shown in Figure 7.

To further verify the effectiveness of our method, five methods related to neural networks from the above seven methods are applied on the second dataset. The results are shown in Figure 8.

4.2.1. Confusion Matrix

As an important measuring index, the average accuracy of the different methods is assessed. Since objects of the same class may have quite different colors in the Pauli SAR image, the color continuity-based traditional methods are largely limited. As presented in Table 1, the deep learning-based algorithms all exhibit better performance. The confusion matrix of the five algorithms on data set 2 is shown in Table 2.

Compared with the traditional methods, FCN8 improves greatly, mainly because FCN8 learns features directly from images and achieves high accuracy in image classification. DeepLab is equivalent to adding a fully connected CRF as postprocessing after FCN8; the classification information of surrounding pixels is introduced as a reference to further improve the classification accuracy. For data set 1, compared with DeepLab, the average accuracy of DeepLab DT increases only slightly, revealing that DT with internal edge extraction has almost the same effect as the postprocessing CRF. For the sixth method, the edge-weighted input obtained from the FCN-HED edge and the internal edge network is trained in the DT, but the accuracy is not greatly improved. This can be attributed to two causes: (1) DT has many unsolved problems; (2) the actual image category distribution is complex. For data set 1, the proposed approach achieves the highest average accuracy, outperforming DeepLab DT, which is attributed to the internal edge fusion, the edge prior information, and the directed DT. The accuracy of each class is improved, especially for the nonbackground categories. For data set 2, the average accuracy of our proposed method is also higher than that of FCN.

4.2.2. Edge Improvement

The results, including the segmentation images and some zoomed-in details, are shown in Figure 9. It is clear that the upper right area is labeled as building (water blue in Figure 4(b)), which corresponds to an uneven color in the Pauli image. Besides, the road (pink in the ground truth) shows two different colors in the original image (black and dark green in Figure 4(a)). Since the comparative methods other than the traditional algorithm are network based, like the proposed one, they achieve better performance. To show the improvement brought by DDT, some details of the segmentation results are magnified in Figure 9; the segmentation images produced by FCN8+DT+the fully connected CRF and by the presented method are quite similar, but in the four zoomed-in areas, it is obvious that the result of the proposed approach matches the ground truth more closely.

The average accuracy, which reflects the improvement over the whole image (edge area and nonedge area), is insufficient to measure the edge enhancement. To verify the contributions of the proposed approach, all improved pixels in the marginal area are counted in Table 3 and Table 4. The number of boundary points is much smaller than that of the total pixels; thus, the contribution is evaluated by the ratio of edge-improved points to all boundary pixels. With DeepLab as the comparison basis, the edge accuracy of FCN8+DT+the fully connected CRF and of the proposed algorithm both increase for data set 1, and the edge accuracy of these two methods also increases for data set 2. This indicates that DDT guided by prior information does have a good effect on edge segmentation.

4.3. Discussion

There are three reasons for the improvement of our proposed method. (1) Based on the Bayesian framework, HED is introduced to provide the edge prior information; in the segmentation process, the edge map and the edge distance map play an important role in improving the segmentation accuracy. (2) FCN8 is chosen to improve the initial classification accuracy, replacing traditional classifiers, and the fully connected CRF is employed to improve the point-wise classification accuracy, since it introduces the prior information of the surrounding pixels' classification categories. (3) The internal DT and the modified DT (DDT) contribute to improving the segmentation accuracy. As a constraint network, DDT addresses the core problem of the original DT and improves it: DDT combines the improved diffusion method and the hole-filling method for external fusion, which enhances the correctness of the edge distribution, improves the classification accuracy of points near the edge, and further improves the overall accuracy.

5. Conclusions

An edge prior Bayesian semantic segmentation network for SAR images is proposed in this study. The raw segmentation result is first derived from the likelihood network (FCN8). Afterwards, the edge map from the HED network is fused with the map from intermediate FCN8 layers to obtain more accurate boundaries as prior information. In the last stage, the newly defined edge distance is utilized to eliminate inappropriate diffusion directions. Besides, the DDT method, which redefines the domain transform density, is proposed to improve the segmentation performance. Experiments with the proposed approach and six comparative methods are conducted on two data sets. The experimental results demonstrate that the proposed method has a good effect on edge correction and improves the overall accuracy as well. However, the Pauli SAR image is segmented directly without preprocessing in this paper. In future work, we will focus on preprocessing the SAR image before segmentation to improve the accuracy.

Data Availability

The image data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Authors’ Contributions

Chu He and Zishan Shi conceived and designed the experiments. Dehui Xiong and Zishan Shi performed the experiments and analyzed the results. Chu He and Peizhang Fang wrote the paper. Bokun He and Mingsheng Liao revised the paper.

Acknowledgments

The authors would like to thank the National Key Research and Development Program of China (grant number 2016YFC0803000), the National Natural Science Foundation of China (grant numbers 41371342 and 61331016), and the Hubei Innovation Group (grant number 2018CFA006).