Research Article | Open Access

Hui Huang, Yan Ma, "A Hybrid Clustering Approach for Bag-of-Words Image Categorization", Mathematical Problems in Engineering, vol. 2019, Article ID 4275720, 11 pages, 2019. https://doi.org/10.1155/2019/4275720

A Hybrid Clustering Approach for Bag-of-Words Image Categorization

Academic Editor: Piotr Jędrzejowicz
Received: 29 Dec 2018
Revised: 17 Jan 2019
Accepted: 28 Jan 2019
Published: 11 Feb 2019

Abstract

The Bag-of-Words (BoW) model is a well-known image categorization technique. However, in conventional BoW, neither the vocabulary size nor the visual words can be determined automatically. To overcome these problems, we propose a hybrid clustering approach that combines an improved hierarchical clustering algorithm with K-means. We present a cluster validity index for the hierarchical clustering algorithm that adaptively determines both when the algorithm should terminate and the optimal number of clusters. Furthermore, we improve the max-min distance method to optimize the initial cluster centers. The optimal number of clusters and the initial cluster centers are fed into K-means, which finally yields the vocabulary size and the visual words. The proposed approach is extensively evaluated on two visual datasets. The experimental results show that the proposed method outperforms the conventional BoW model in categorization accuracy, demonstrating its feasibility and effectiveness.

1. Introduction

Bag-of-Words (BoW), which was originally implemented in the field of text categorization, has been widely used for image categorization. The BoW model for image categorization includes three essential steps: extracting local features from images; constructing a visual vocabulary by clustering the local features to visual words; mapping each local feature to a visual word and representing an image as a vector containing the count of each visual word in that image. Therefore, in BoW, the image is described as a vector, which reduces the time complexity of image categorization [1]. However, a visual vocabulary designed in this manner is not necessarily effective for categorization, and the vocabulary size will influence the level of effectiveness. Therefore, obtaining a suitable vocabulary with an appropriate size and visual words is quite important. In fact, the vocabulary size and visual words correspond to the number of clusters and cluster centers, respectively, so the essence of the aforementioned problem is to determine the optimal number of clusters and cluster centers.
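The third step above — quantizing each local feature to its nearest visual word and counting the words — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the `bow_histogram` helper and the toy 2-D features are ours for exposition.

```python
import numpy as np

def bow_histogram(features, vocabulary):
    """Map each local feature to its nearest visual word and return
    the normalized word-count histogram for one image."""
    # pairwise distances: (num_features, num_words)
    d = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)  # index of the nearest visual word per feature
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()  # normalize so histograms of different images are comparable

# toy example: 4 local features quantized against a 3-word vocabulary
vocab = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
feats = np.array([[0.1, 0.0], [0.9, 1.1], [1.1, 0.9], [2.0, 1.9]])
h = bow_histogram(feats, vocab)  # -> [0.25, 0.5, 0.25]
```

In a real system the features would be high-dimensional descriptors (e.g., 128-D SIFT vectors) and the vocabulary the cluster centers produced by the clustering stage.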

A fundamental problem of cluster analysis is determining the best estimate for the number of clusters, as this has a significant effect on the clustering results. The most widely used clustering algorithms are K-means and hierarchical clustering. K-means is efficient and simple, but the initial centers are randomly chosen and the number of clusters K is not known beforehand. Hierarchical clustering achieves its final results by iteratively splitting or merging clusters [2]. This method is simple to implement and can solve multiscale spatial clustering problems, although its termination condition is difficult to determine. Many improved methods that estimate the number of clusters K in BoW have been developed. A novel algorithm combining Gaussian Mixture Models and the Bayesian Information Criterion claims to obtain the true number of visual words in the vocabulary [3], and the number of visual words has been computed by a data mining method based on a co-occurrence matrix [4]. By removing inconsistent edges, the minimum-spanning-tree-based clustering method can identify clusters of arbitrary shape [5]. A clustering quality curve has been introduced to estimate the number of clusters in hierarchical clustering [6], and the original cluster centers have been selected dynamically by constructing a minimum spanning tree [7]. The true number of clusters can also be determined using a clustering validity index [8].

In this paper, we present a novel clustering method that combines hierarchical clustering and K-means. The proposed clustering method is adopted in BoW to complete the process of image categorization. First, in terms of hierarchical clustering, a new cluster validity index is introduced to determine the vocabulary size, i.e., the optimal number of clusters for a set of image features. After the optimal number of clusters has been determined, the optimal initial cluster centers are calculated by an improved max-min method. The best number of clusters and initial centers are then fed into K-means, and the final result for the cluster centers (that is, the vocabulary) is achieved. Finally, the images are categorized according to histogram vectors that represent images with BoW. In short, the vocabulary size for the BoW is adaptively adjusted to reduce errors introduced by image quantization, leading to better image categorization performance.

The rest of this paper is organized as follows: Section 2 reviews hierarchical clustering algorithms and presents the proposed algorithm, which includes a cluster validity index and an improved max-min distance method. Our experimental results and analysis then evaluate the performance of the proposed method in Section 3. Finally, Section 4 draws together some conclusions from this study.

2. Cluster Validity Index and Initial Cluster Centers

2.1. Hierarchical Clustering

Hierarchical clustering creates a hierarchy of clusters that can be represented in a tree structure. There are two types of hierarchical clustering: agglomerative and divisive. Agglomerative hierarchical clustering algorithms work by merging the nearest clusters in a bottom-up fashion. Each individual data point is first assigned to a cluster, and then the two clusters that are closest to each other are merged into a new cluster. This process continues until all clusters have merged into one. The divisive hierarchical clustering algorithm works by splitting clusters into separate clusters in a top-down fashion. A cluster containing all data is created, and this is then divided into two clusters with respect to the amount of separation between data. This process is repeated until the final clusters contain only one data point [9]. The two types of hierarchical clustering are illustrated in Figure 1.

The agglomerative method has lower computational complexity than the divisive approach and can achieve better classification performance. For these reasons, we adopt the agglomerative form of hierarchical clustering.

The typical agglomerative hierarchical clustering approach consists of the following steps:
(1) For a dataset of n samples, regard each sample as a separate cluster.
(2) Calculate the distances between all pairs of clusters and record them in a distance matrix.
(3) Find the minimum distance in the distance matrix, then merge the two corresponding clusters.
(4) Repeat Steps (2) and (3) until all n samples have been merged into one cluster.
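The steps above can be sketched with SciPy's hierarchical clustering routines (assuming SciPy is available; single linkage is shown as one common choice of between-cluster distance, not necessarily the one used in the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# two well-separated groups of 2-D samples
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                  rng.normal(5.0, 0.1, (10, 2))])

# Steps (1)-(4): start from singleton clusters and repeatedly merge the
# closest pair; `linkage` records the whole merge tree (the dendrogram).
tree = linkage(data, method='single')  # single linkage: min pairwise distance

# cutting the tree at two clusters recovers the two groups
labels = fcluster(tree, t=2, criterion='maxclust')
```

Each row of `tree` records one merge, so every level of the hierarchy — and hence every candidate number of clusters — can be read off the same structure.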

Hierarchical clustering algorithms are simple to implement and can solve multiscale spatial clustering problems. However, they have some limitations; in particular, the terminal condition is difficult to determine.

2.2. Cluster Validity Index

The quality of clustering is measured by the cluster validity index. Furthermore, the optimal number of clusters k is estimated based on the cluster validity index. It has been verified that there is a knee point in the validity index curves representing the correct number of clusters. Various cluster validity indices have been developed, such as Calinski–Harabasz (CH) [10], Davies–Bouldin (DB) [11], and Krzanowski–Lai (KL) [12]. These indices are implemented to evaluate the clustering results according to the dataset itself and the statistical properties of the clustering results. However, the CH method may present unstable results, with the number of clusters varying with the search range. The DB method is only appropriate for well-separated clusters, and the KL method is designed for datasets whose structure can be easily estimated. Thus, to obtain better clustering results, we propose a novel cluster validity index for an agglomerative hierarchical clustering algorithm. The index is described in more detail below.

Let the dataset be X = {x1, x2, …, xn}, and record the distances between all pairs of data samples in a matrix D. As the data samples are clustered by agglomerative hierarchical clustering, a hierarchical clustering tree with n levels is formed. Any level of the tree consists of k clusters C1, C2, …, Ck that together contain all n samples. Compactness and separation are used to measure the similarity of within-cluster and between-cluster samples based on the Euclidean distance [13].

Definition 1. The within-cluster compactness Com(i) is the longest edge of the minimum spanning tree formed by all samples in the cluster, that is, the maximum edge weight. In detail, let a level of the hierarchical clustering tree consist of clusters C1, C2, …, Ck. The within-cluster compactness for the ith cluster Ci is then defined as

Com(i) = max { w(e) : e ∈ MST(Ci) },

where w(e) denotes the weight of edge e in the minimum spanning tree MST(Ci) formed by all samples in the ith cluster.

Definition 2. The between-cluster separation Sep(i) is the minimum distance from a data sample in the cluster to the closest sample in another cluster. In detail, let a level of the hierarchical clustering tree consist of clusters C1, C2, …, Ck. The between-cluster separation for the ith cluster Ci is then defined as

Sep(i) = min { d(x, y) : x ∈ Ci, y ∈ Cj, j ≠ i },

where d(x, y) represents the Euclidean distance between sample x in cluster Ci and sample y in cluster Cj.

Definition 3. Let a level of the hierarchical clustering tree consist of clusters C1, C2, …, Ck. The cluster validity index for the ith cluster is the ratio of the within-cluster compactness to the between-cluster separation, that is,

V(i) = Com(i) / Sep(i).

The index reflects the cluster validity for the clustered samples in a dataset; lower values denote better clustering results. For a dataset, we can therefore analyze a clustering result by averaging the index over all clusters, with a lower average indicating a better partition. Writing Vavg(k) = (1/k) Σ_{i=1..k} V(i) for the average of the indices over k clusters, the optimal number of clusters K is

K = argmin_k Vavg(k).

The criterion for judging the cluster validity considers both within-cluster compactness and between-cluster separation. In the case of within-cluster compactness, a smaller distance between pairs of samples in the ith cluster denotes a better clustering result. However, no single maximum or minimum pairwise distance is representative on its own, whereas the samples in a cluster naturally form a minimum spanning tree. It is therefore reasonable to measure the within-cluster compactness by the maximum edge weight of this minimum spanning tree. In the case of between-cluster separation, a greater distance between cluster i and its nearest neighboring cluster j suggests a better clustering result, so it is natural to use the minimum distance between a pair of clusters to measure the between-cluster separation. The index synthesizes these two factors: a smaller value implies a better clustering result, i.e., a smaller within-cluster compactness and a greater between-cluster separation. The best partition (that is, the optimal number of clusters K) is obtained when the average index reaches its minimum.

In determining the optimal number of clusters using agglomerative hierarchical clustering, we search the effective range [Kmin, Kmax]. We set Kmin to 2 and choose Kmax = sqrt(n) as the upper bound, as related research shows that the number of clusters does not exceed sqrt(n), where n is the number of samples in the dataset [14]. The optimal partition can then be obtained using the proposed index, as follows:
(1) For each candidate number of clusters k from Kmin to Kmax:
(a) Calculate the cluster validity index V(i) for each of the k clusters at the corresponding level of the tree;
(b) Calculate the average index Vavg(k);
(2) Output the optimal number of clusters K = argmin_k Vavg(k).
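Under our reading of Definitions 1-3, the index can be sketched in NumPy as follows; the function names and the MST construction via Prim's algorithm are our own illustration, not the authors' code:

```python
import numpy as np

def mst_max_edge(points):
    """Longest edge of the minimum spanning tree over `points` (Prim's algorithm)."""
    n = len(points)
    if n < 2:
        return 0.0
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()      # cheapest edge from the tree to each sample
    longest = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = best.argmin()      # next sample to attach to the tree
        longest = max(longest, float(best[j]))
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return longest

def validity(data, labels):
    """Average Com(i)/Sep(i) over all clusters; lower is better."""
    ratios = []
    for c in np.unique(labels):
        inside, outside = data[labels == c], data[labels != c]
        com = mst_max_edge(inside)                     # within-cluster compactness
        sep = np.linalg.norm(inside[:, None] - outside[None, :],
                             axis=2).min()             # between-cluster separation
        ratios.append(com / sep)                       # per-cluster index
    return float(np.mean(ratios))
```

On a toy dataset of two tight, well-separated groups, the correct two-cluster partition scores a much lower (better) average index than a partition that mixes the groups, which is exactly the behavior the search over k exploits.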

The effectiveness of the proposed index is demonstrated through the following experiments. Three datasets were selected from the UCI database [14]. We applied the DB, CH, KL, and proposed indexes to the three datasets and obtained the optimal number of clusters under each index. Table 1 lists the values of these validity indexes with the datasets divided into the same numbers of clusters.


Table 1: Validity index values (each cell lists the values for the Iris / Wine / Breast-Cancer-Wisconsin datasets).

K | DB | CH | KL | Proposed
2 | 0.0666/0.4400/115.4636 | 0.2149/0.4099/28.1092 | 11.4453/3.1191/1.0830 | 6.3420/2.4650/4.6660
3 | 0.4640/0.4962/63.9422 | 1.0816/0.7709/14.0356 | 60.6343/7.0889/3.5436 | 1.9846/0.6920/28.3850
4 | 1.1963/0.6330/40.3679 | 0.9327/0.7344/27.4826 | 1.4063/0.6718/0.4943 | 7.9528/2.4210/28.4230
5 | 1.5125/0.6583/61.6671 | 0.7129/1.1670/20.5832 | 0.9956/5.9125/1.1797 | 8.8753/2.9340/28.8990
6 | 2.6920/1.5296/62.6160 | 0.5731/1.1521/16.4438 | 1.9325/0.6848/1.1472 | 10.9862/2.9940/28.9480
7 | 7.6907/1.4995/60.0437 | 0.4929/1.3889/13.6843 | 0.1244/2.1073/1.1246 | 12.7995/3.3610/28.9420
8 | 20.9400/2.9274/50.3963 | 0.6470/1.0086/11.7129 | 1.7109/4.9006/1.1081 | 13.7742/3.6750/29.0550

A detailed description of these datasets can be found in the UCI database. The size, number of attributes (dimension of the data), real number of clusters in the datasets, and proper number of clusters obtained with the four aforementioned methods are listed in Table 2.


Dataset | Size | Attributes | True clusters | DB | CH | KL | Proposed
Iris | 150 | 4 | 3 | 2 | 3 | 3 | 3
Wine | 178 | 13 | 3 | 2 | 7 | 3 | 3
Breast-Cancer-Wisconsin | 699 | 11 | 2 | 4 | 2 | 3 | 2

Tables 1 and 2 indicate that the DB, CH, and KL clustering validity indices make errors: the number of clusters they select differs from the true number on at least one dataset. In contrast, the proposed index selects a number of clusters consistent with the true number in every case, indicating its effectiveness and feasibility for evaluating clustering results.

2.3. Initial Cluster Centers

The basic principle of the K-means algorithm is to use initial cluster centers and classify the samples to the closest cluster center. The cluster centers are then updated iteratively until a convergence condition is satisfied [15]. A detailed description of K-means is as follows:
(1) Randomly select K samples from the dataset as the initial cluster centers.
(2) Calculate the distance between each sample and the cluster centers, and assign the sample to the closest cluster.
(3) Recalculate and update the cluster centers from the new assignments.
(4) Repeat Steps (2) and (3) until the cluster assignments no longer change.
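The four steps above are the standard Lloyd iteration; a minimal NumPy sketch follows (our own illustration; the empty-cluster guard is an added safeguard, not part of the classical description):

```python
import numpy as np

def kmeans(data, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # (1) random initial centers drawn from the data
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # (2) assign every sample to its closest center
        labels = np.linalg.norm(data[:, None] - centers[None], axis=2).argmin(axis=1)
        # (3) recompute each center as the mean of its cluster
        #     (an emptied cluster keeps its old center as a safeguard)
        new = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        # (4) stop once the centers (and hence the assignments) are stable
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
centers, labels = kmeans(pts, 2)
```

Even on this easy toy set, the random initialization in step (1) is the weak point the rest of this section addresses.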

K-means is popular because of its simplicity and efficiency. Despite its advantages, K-means suffers from some limitations, particularly that the number of clusters and the initial cluster centers must be determined a priori. Poor quality initialization can lead to a poor solution. Therefore, it is important to determine an appropriate number of clusters and initial cluster centers. The method proposed in this paper can be exploited to solve the problem of the number of clusters. In the case of initial cluster centers, different centers will lead to different clustering results. The max-min distance method is introduced to optimize the initial cluster centers [16].

The core idea of the max-min distance method is to select cluster centers according to the maximum distance and to classify samples according to the minimum distance, thus preventing the initial cluster centers from being too close together. In the max-min distance method, the number of clusters is unknown in advance, and a proportional coefficient serves as the stopping condition. To determine a new center, we calculate the distances between each sample that has not been selected as a center and the predetermined cluster centers, and find, for each such sample, the minimum of these distances. The sample whose minimum distance is largest is taken as the next center, provided this max-min distance exceeds the proportional coefficient times the distance between the first two centers. However, it is difficult to determine the value of this coefficient.

In this paper, the max-min distance method is extended to solve the aforementioned problem and optimize the initial cluster centers under the condition that the number of clusters is known a priori. The proposed method to determine the initial cluster centers can be summarized as follows:
(1) Calculate the average of all samples and select the sample closest to the average as the first cluster center c1.
(2) When the number of clusters is 2, select the unselected sample farthest from c1 as the second cluster center c2.
(3) When the number of clusters is 3, calculate the distances between each sample not yet selected as a center and the two centers c1 and c2; for each sample, find the minimum of these distances, and choose the sample with the largest such minimum distance as the third center c3.
(4) When the number of clusters is k > 3, likewise calculate the distances between the unselected samples and the determined centers, find for each sample the minimum distance to any center, and choose the sample maximizing this minimum distance as the next center. Repeat until k centers have been chosen, and output the initial cluster centers.
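The improved initialization can be sketched as follows (a NumPy illustration of our reading of Steps (1)-(4); the function name `maxmin_centers` is hypothetical):

```python
import numpy as np

def maxmin_centers(data, k):
    """Pick k initial centers: the first is the sample nearest the global mean;
    each subsequent center is the sample whose minimum distance to the
    already-chosen centers is maximal."""
    # Step (1): sample closest to the mean of all samples
    idx = [int(np.linalg.norm(data - data.mean(axis=0), axis=1).argmin())]
    # Steps (2)-(4): grow the center set by the max-min rule
    while len(idx) < k:
        d = np.linalg.norm(data[:, None] - data[idx][None], axis=2).min(axis=1)
        idx.append(int(d.argmax()))
    return data[idx]

# toy data: four corners of a square plus its center
data = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
centers = maxmin_centers(data, 3)  # first center is [5, 5], the mean sample
```

Because k is supplied by the validity index of Section 2.2, no proportional coefficient is needed: the loop simply runs until k centers have been chosen.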

In this section, the proposed algorithm is evaluated to illustrate its performance in automatically determining the number of clusters and the cluster centers. The experiments were implemented on the Iris dataset by two methods: the conventional max-min distance algorithm and the improved version. For the former, we set the proportional coefficient to 0.7, 0.4, and 0.1, obtaining corresponding values of K of 2, 3, and 5, respectively. For the improved max-min method, we calculated K = 3 with the cluster validity index and obtained appropriate initial cluster centers. Figure 2 illustrates the corresponding clustering results and the ROC (Receiver Operating Characteristic) curves.

The clustering results for the different values of the proportional coefficient are shown in Figures 2(a)-2(c), and Figure 2(d) illustrates the clustering results given by the improved max-min distance method. As shown in Figure 2(a), the data samples are partitioned into two clusters, which does not conform to the actual situation. The number of clusters is correct in Figure 2(b), but the clustering results are not ideal. Neither the number of clusters nor the clustering results are ideal in Figure 2(c). Compared with these results, Figure 2(d) shows that the improved max-min distance method enhances the clustering results, with the correct number of clusters and a more concentrated data distribution. The right-hand side of Figure 2 illustrates the ROC curves, which reflect the performance of the classification method: a greater area under the ROC curve (AUC) signifies better classification performance. The AUC in Figures 2(a)-2(c) is lower than in Figure 2(d), which indicates that the improved max-min approach generates a better clustering result and thus a better classification of the dataset in the clustering process.

2.4. Proposed Method

The above sections have described the cluster validity index for the optimal number of clusters and the improved max-min distance method for initial cluster centers to achieve the optimal clustering result. Based on the aforementioned methods, it is possible to improve the conventional BoW: we combine the agglomerative hierarchical clustering algorithm with K-means to automatically determine the optimal number of clusters and the initial cluster centers, i.e., the vocabulary size and the visual words. Figure 3 gives an overview of the proposed approach.

We can summarize the proposed clustering algorithm as follows:
(1) Cluster the dataset by agglomerative hierarchical clustering and generate a hierarchical clustering tree.
(2) Evaluate the clustering results at each level of the tree with the proposed cluster validity index, and determine the optimal number of clusters K.
(3) Apply the improved max-min distance algorithm to determine the initial cluster centers C according to K.
(4) Input K and C into the K-means algorithm and cluster the dataset again to refine the results of the agglomerative hierarchical clustering. The final output is the optimal clustering result, i.e., the partitioned dataset.

3. Experimental Results and Analysis

To evaluate the proposed method, we used the Caltech 101 [17] and 15 Scenes [18] datasets. Caltech 101 includes 101 object categories, whereas 15 Scenes includes the 15 scene categories of Store, Office, Tall building, Street, Open country, Mountain, Inside city, Highway, Forest, Coast, Living room, Kitchen, Industrial, Suburb, and Bedroom.

We used sixteen object categories from Caltech 101, as shown in Figure 4, and six categories from 15 Scenes (bedroom, CALsuburb, industrial, MITcoast, MITforest, and MITinsidecity) for the experiments. We chose 20 images per category for training and 10 per category for testing. For convenience, all images were resized to 100 × 100 pixels. Each training image was represented as a BoW. First, scale-invariant feature transform (SIFT) descriptors extracted from the training images were chosen as the visual features. Second, the dataset constructed from these visual features was clustered by the proposed clustering approach; the size of this dataset depends on the total number of SIFT descriptors, and in our experiments it contained 34328 feature vectors. Finally, the BoW representation was fed into a support vector machine (SVM) [19], implemented with LIBSVM [20], for classification. In this paper, we analyze the BoW model with both known and unknown vocabulary sizes.

(1) Known Vocabulary Size. We fixed the vocabulary size to 150. While clustering the image data, the value of K was set to 150 and the SVM was applied to classify the image. Figures 5 and 6 show the categorization results (the first eight images are selected), where red blocks denote the misclassified images. Tables 3 and 4 list the precision and recall [21] corresponding to the two databases, respectively.


Table 3: Precision and recall for the sixteen Caltech 101 categories (vocabulary size fixed at 150).

Class | airplanes | bonsai | brontosaurus | butterfly | camera | chair | cup | dolphin
Precision (%) | 100.00 | 87.50 | 100.00 | 87.50 | 87.50 | 100.00 | 100.00 | 87.50
Recall (%) | 80.00 | 60.00 | 60.00 | 50.00 | 60.00 | 70.00 | 60.00 | 50.00

Class | grand-piano | kangaroo | ketch | lotus | panda | rooster | scissors | starfish
Precision (%) | 87.50 | 87.50 | 100.00 | 75.00 | 100.00 | 100.00 | 100.00 | 100.00
Recall (%) | 60.00 | 65.00 | 80.00 | 50.00 | 80.00 | 85.00 | 85.00 | 80.00


Table 4: Precision and recall for the six 15 Scenes categories (vocabulary size fixed at 150).

Class | bedroom | CALsuburb | industrial | MITcoast | MITforest | MITinsidecity
Precision (%) | 87.50 | 87.50 | 75.00 | 100.00 | 87.50 | 100.00
Recall (%) | 70.00 | 70.00 | 60.00 | 100.00 | 70.00 | 100.00

Figure 5 shows that a large number of the Caltech 101 images were misclassified: there are two misclassified images each for the bonsai, brontosaurus, and camera classes, and three for the butterfly class. There was generally one misclassified image in each of the 15 Scenes classes. That is, different levels of misclassification exist. From Tables 3 and 4, it can be seen that the precision and recall corresponding to both databases were not high.
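For reference, the precision and recall values in Tables 3 and 4 follow the usual per-class definitions in terms of true positives (tp), false positives (fp), and false negatives (fn). A small illustrative computation (the counts below are hypothetical, chosen to match typical table entries for a class with 10 test images):

```python
def precision_recall(tp, fp, fn):
    """Per-class precision and recall from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# a class with 10 test images: 7 retrieved correctly, 3 missed (recall 70%),
# and 1 image of another class wrongly assigned to it (precision 87.5%)
p, r = precision_recall(tp=7, fp=1, fn=3)
```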

(2) Unknown Vocabulary Size. In the case of an unknown BoW vocabulary size, we used the proposed method to automatically determine the optimal number of clusters and the initial cluster centers. We then performed a clustering analysis to categorize the images and expressed the images according to the distribution of visual words. By calculating the number of clusters, we found that the vocabulary sizes for Caltech 101 and 15 Scenes were 475 and 264, respectively. The SVM classifier was then used for the classification task. Figures 7 and 8 show the categorization results (the first eight images are selected), where red blocks denote the misclassified images. Tables 5 and 6 list the precision and recall corresponding to the two databases, respectively.


Table 5: Precision and recall for the sixteen Caltech 101 categories (vocabulary size determined automatically).

Class | airplanes | bonsai | brontosaurus | butterfly | camera | chair | cup | dolphin
Precision (%) | 100.00 | 100.00 | 75.00 | 87.50 | 87.50 | 100.00 | 100.00 | 87.50
Recall (%) | 80.00 | 80.00 | 60.00 | 70.00 | 70.00 | 80.00 | 80.00 | 60.00

Class | grand-piano | kangaroo | ketch | lotus | panda | rooster | scissors | starfish
Precision (%) | 87.50 | 95.00 | 100.00 | 85.00 | 100.00 | 90.00 | 100.00 | 85.00
Recall (%) | 60.00 | 80.00 | 70.00 | 75.00 | 80.00 | 70.00 | 80.00 | 70.00


Table 6: Precision and recall for the six 15 Scenes categories (vocabulary size determined automatically).

Class | bedroom | CALsuburb | industrial | MITcoast | MITforest | MITinsidecity
Precision (%) | 100.00 | 87.50 | 87.50 | 100.00 | 100.00 | 100.00
Recall (%) | 80.00 | 70.00 | 70.00 | 80.00 | 80.00 | 80.00

Figures 7 and 8 show that the image categorization results for both databases have been improved, and the number of misclassified images has decreased to at most one. From Tables 5 and 6, it can be seen that the precision and recall have improved compared to the case of a preset vocabulary size, which illustrates that the improved BoW model can express images more accurately and automatically determine the vocabulary size. Hence, the proposed method obtains improved categorization performance.

4. Conclusions

In this paper, we have presented an improved method that automatically determines the vocabulary size of the BoW model. By combining K-means with agglomerative hierarchical clustering, we performed a clustering analysis on image datasets, determining the optimal number of clusters with a cluster validity index and the initial cluster centers with an improved max-min distance method. The experimental results on three UCI datasets validated the effectiveness and feasibility of the proposed approach. To evaluate its categorization performance, experiments with known and unknown vocabulary sizes were conducted on the Caltech 101 and 15 Scenes datasets. The results demonstrated that the proposed BoW model, based on the hybrid of hierarchical clustering and K-means, classifies images more accurately and achieves better categorization performance.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61501297 and 61373004).

References

  1. Y.-X. Wang, H. Guo, C.-Q. He, Z. Feng, and Q. Jia, "Bag of spatial visual words model for scene classification," Computer Science, vol. 38, no. 8, p. 265, 2011.
  2. R. C. de Amorim, "Feature relevance in Ward's hierarchical clustering using the Lp norm," Journal of Classification, vol. 32, no. 1, pp. 46–62, 2015.
  3. M. Xu, J. Han, and L. Guo, "Determine word number of visual Bag-of-Words model by model selection method," Computer Engineering and Applications, vol. 31, no. 47, pp. 148–150, 2011.
  4. K. Koonsanit, C. Jaruskulchai, and A. Eiumnoh, "Determination of the initialization number of clusters in K-means clustering application using co-occurrence statistics techniques for multispectral satellite imagery," International Journal of Information and Electronics Engineering, vol. 2, no. 5, pp. 785–789, 2012.
  5. X. Lv, Y. Ma, X. He, H. Huang, and J. Yang, "CciMST: A clustering algorithm based on minimum spanning tree and cluster centers," Mathematical Problems in Engineering, vol. 2018, Article ID 8451796, 14 pages, 2018.
  6. N. Chen, Z.-S. Xu, and M.-S. Xia, "Hierarchical hesitant fuzzy K-means clustering algorithm," Applied Mathematics-A Journal of Chinese Universities Series B, vol. 29, no. 1, pp. 1–17, 2014.
  7. X.-J. Tong, F.-R. Meng, and Z.-X. Wang, "Optimization to k-means initial cluster centers," Computer Engineering and Design, vol. 32, no. 8, pp. 2721–2723, 2011.
  8. J. Yang, Y. Ma, X. Zhang, S. Li, and Y. Zhang, "An initialization method based on hybrid distance for k-means algorithm," Neural Computation, vol. 29, no. 11, pp. 3094–3117, 2017.
  9. J. Wu, H. Xiong, and J. Chen, "Towards understanding hierarchical clustering: A data distribution perspective," Neurocomputing, vol. 72, no. 10-12, pp. 2319–2330, 2009.
  10. T. Calinski and J. Harabasz, "A dendrite method for cluster analysis," Communications in Statistics-Theory and Methods, vol. 3, no. 1, pp. 1–27, 1974.
  11. D. L. Davies and D. W. Bouldin, "A cluster separation measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 2, pp. 224–227, 1979.
  12. S. Dudoit and J. Fridlyand, "A prediction-based resampling method for estimating the number of clusters in a dataset," Genome Biology, vol. 3, no. 7, Article ID research0036.1, 2002.
  13. H. He and Y. Tan, "A two-stage genetic algorithm for automatic clustering," Neurocomputing, vol. 81, pp. 49–59, 2012.
  14. UCI Machine Learning Repository: Data Sets, 2018, http://archive.ics.uci.edu/ml/datasets.html.
  15. Z. Jing and D. Fu, "Improved k-means algorithm with meliorated initial centers," Computer Engineering and Design, vol. 34, no. 5, pp. 1691–1694, 2013.
  16. D. Li, J. Shen, and H. Chen, "A fast K-means clustering algorithm based on grid data reduction," in Proceedings of the 2008 IEEE Aerospace Conference, pp. 1–6, Big Sky, MT, USA, 2008.
  17. Available: http://www.vision.caltech.edu/Image_Datasets/Caltech%20101/.
  18. Available: http://archive.ics.uci.edu/ml/index.php.
  19. X.-Y. Wang, Y.-W. Li, H.-Y. Yang, and J.-W. Chen, "An image retrieval scheme with relevance feedback using feature reconstruction and SVM reclassification," Neurocomputing, vol. 127, pp. 214–230, 2014.
  20. C.-C. Chang and C.-J. Lin, "LIBSVM: A Library for Support Vector Machines," 2018, https://www.csie.ntu.edu.tw/~cjlin/libsvm/.
  21. J. Zhang, L. Sui, L. Zhuo, Z. Li, and Y. Yang, "An approach of bag-of-words based on visual attention model for pornographic images recognition in compressed domain," Neurocomputing, vol. 110, pp. 145–152, 2013.

Copyright © 2019 Hui Huang and Yan Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

