Abstract

Learning a proper distance metric for histogram data plays a crucial role in many computer vision tasks. The chi-squared distance is a nonlinear metric that is widely used to compare histograms. In this paper, we show how to learn a general form of chi-squared distance based on the nearest neighbor model. In our method, the margin of a sample is first defined with respect to its nearest hits (nearest neighbors from the same class) and nearest misses (nearest neighbors from different classes), and the simplex-preserving linear transformation is then trained by maximizing the margin while minimizing the distance between each sample and its nearest hits. Because the iterative projected gradient method is used for optimization, the ℓ2,1 norm regularization can be naturally introduced into the proposed method for sparse metric learning. Comparative studies with state-of-the-art approaches on five real-world datasets verify the effectiveness of the proposed method.

1. Introduction

Histograms are frequently used in natural language processing and various computer vision tasks, including image retrieval, image classification, shape matching, and object recognition, to represent texture and color features or to characterize rich information in local/global regions of objects. In statistics, a histogram is the frequency distribution of a set of measurements over discrete intervals. For many computer vision tasks, each object of interest can be represented as a histogram by using visual descriptors such as SIFT [1], SURF [2], GIST [3], and HOG [4]. The resulting histogram inherits some merits of these descriptors, for example, rotation, scale, and translation invariance, which make it an excellent representation for performing classification and recognition of objects.

When histogram representations are adopted, the choice of histogram distance metric has a great impact on the classification performance or recognition accuracy of the specific task. Since a histogram can be considered a vector of probabilities, many metrics, such as the ℓ1 and ℓ2 distances, the chi-squared distance, and the Kullback-Leibler (KL) divergence, can be used directly. These metrics, however, only account for the difference between corresponding bins and are hence sensitive to distortions in visual descriptors as well as quantization effects [5]. To mitigate these problems, many cross-bin distances have been proposed. Rubner et al. [6] propose the Earth Mover's Distance (EMD), which is defined as the minimal cost that must be paid to transform one histogram into the other and thereby takes cross-bin information into account. The diffusion distance [5] exploits the idea of a diffusion process and models the difference between two histograms as a temperature field. The Quadratic-Chi distances (QCS and QCN) [7] take cross-bin relationships into account while reducing the effect of large bins. Much of the work on cross-bin distances focuses on improving the EMD, and many variants have been proposed. EMD-L1 [8] uses the ℓ1 distance as the ground distance and significantly simplifies the original linear programming formulation of the EMD. Pele and Werman [9] propose a different formulation of the EMD with a linear-time algorithm for nonnormalized histograms. FastEMD [10] adopts a robust thresholded ground distance and was shown to outperform the EMD in both accuracy and speed. TEMD [11] uses a tangent vector to represent each global transformation. In all of the methods mentioned above, the metric is determined by a priori knowledge of the features or is handcrafted. However, a distance metric is problem-specific, and designing a good distance metric manually is extremely difficult. To address this problem, some researchers have attempted to learn a proper distance metric from histogram training data. Considering that the ground distance, which is the unique variable of the EMD, should be chosen according to the problem at hand, Cuturi and Avis [12] propose a ground metric learning algorithm that learns the ground metric adaptively from the training data. Subsequently, EMDL [13] formulates ground metric learning as an optimization problem in which a ground distance matrix and a flow network for the EMD are learned jointly based on a partial ordering of histogram distances. Noh [14] uses a convex optimization method to perform chi-squared metric learning with relaxation. χ²-LMNN [15] employs a large-margin framework to learn a generalized chi-squared distance for histogram data and obtains a significant improvement over standard histogram metrics and state-of-the-art metric learning algorithms. Le and Cuturi [16] adopt the generalized Aitchison embedding to compare histograms by mapping the probability simplex onto a suitable Euclidean space.

In this paper, we present a novel nearest neighbor-based nonlinear metric learning method, chi-squared distance metric learning (CDML), for normalized histogram data. CDML learns a simplex-preserving linear transformation by maximizing the margin while minimizing the distance between each sample and its k-nearest hits. In the original space, the learned metric can be considered a cross-bin metric. For sparse metric learning, an ℓ2,1 norm regularization term is further introduced to enforce row sparsity on the learned linear transformation matrix. Two solving strategies, the iterative projected gradient method and the soft-max method, are used to induce the linear transformation. We demonstrate that our algorithms perform better than the state-of-the-art ones in terms of classification performance.

The remainder of this paper is organized as follows. Section 2 provides a review of supervised metric learning algorithms. Section 3 describes the proposed distance metric learning method. The experimental results on five real-world datasets are given in Section 4, where we also discuss the difference between our method and χ²-LMNN in detail. Section 5 concludes the paper.

2. Related Work

In this section, we review the related work on supervised distance metric learning. Since the seminal work of Xing et al. [17], which formulates metric learning as an optimization problem, supervised metric learning has been extensively studied in the machine learning community and various algorithms have been proposed. In general, these methods can be roughly cast into three categories: Mahalanobis metric learning, local metric learning, and nonlinear metric learning.

The main characteristic of Mahalanobis metric learning is to learn a linear transformation or a positive semidefinite matrix from training data under the Mahalanobis distance metric. Representative methods include neighborhood component analysis [18], large-margin nearest neighbor [19], and information-theoretic metric learning [20]. Neighborhood component analysis [18] learns a linear transformation by directly maximizing a stochastic variant of the expected leave-one-out classification accuracy on the training set. Large-margin nearest neighbor (LMNN) [19] formulates distance metric learning as a semidefinite programming problem by requiring that the k-nearest neighbors of each training sample belong to the same class while examples from different classes are separated by a large margin. Information-theoretic metric learning (ITML) [20] formulates distance metric learning as a particular Bregman optimization problem by minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function. Bian and Tao [21] formulate metric learning as a constrained empirical risk minimization problem. Wang et al. [22] propose a general kernel classification framework that unifies many representative and state-of-the-art Mahalanobis metric learning algorithms, such as LMNN and ITML. Chang [23] uses a boosting algorithm to learn a Mahalanobis distance metric. Shen et al. [24] propose an efficient and scalable approach to Mahalanobis metric learning based on the Lagrange dual formulation. Yang et al. [25] propose a novel multitask framework for metric learning using a common subspace.

The motivation of local metric learning is to increase the expressiveness of the learned metrics so that more complex problems, such as heterogeneous data, can be better handled. Because it involves more learning parameters than its global counterpart, local metric learning is prone to overfitting. One of the early local metric learning algorithms is discriminant adaptive nearest neighbor classification (DANN) [26], which estimates local metrics by shrinking neighborhoods in directions orthogonal to the local decision boundaries and enlarging the neighborhoods parallel to the boundaries. Multiple-metrics LMNN [19] learns multiple locally linear transformations in different parts of the sample space under the large-margin framework. By using an approximation error bound of the metric matrix function, Wang et al. [27] formulate local metric learning as learning linear combinations of basis metrics defined on anchor points over different regions of the instance space. Mu et al. [28] propose a new local discriminative distance metrics algorithm to learn multiple distance metrics.

For nonlinear metric learning, there are two main strategies. One is to use the kernel trick to learn a linear metric in the high-dimensional nonlinear feature space induced by a kernel function.
The kernelized variants of many Mahalanobis metric learning methods, such as KLFDA [29] and large-margin component analysis [30], have been shown to be effective in capturing complicated nonlinear relationships between data. Soleymani Baghshah and Bagheri Shouraki [31] formulate nonlinear metric learning as constrained trace ratio problems by using both positive and negative constraints. By combining metric learning and multiple kernel learning, Wang et al. [32] propose a general framework for learning a linear combination of a number of predefined kernels. The other strategy is to learn nonlinear forms of metrics directly. Based on a convolutional neural network, Chopra et al. [33] propose learning a nonlinear function such that the ℓ1 norm in the target space approximates the semantic distance in the input space. GB-LMNN [15] learns a nonlinear mapping directly in function space with gradient boosted regression trees. Support vector metric learning [34] learns a metric for the radial basis function kernel by minimizing the validation error of the SVM prediction while training the SVM classifier. For a comprehensive review of metric learning and its applications, we refer the reader to [35–37].

Although metric learning for the Mahalanobis distance has been widely studied, metric learning for the chi-squared distance remains largely unexplored. Unlike the Mahalanobis distance, the chi-squared distance is a nonlinear metric and its general form requires the learned linear transformation to be simplex-preserving. Therefore, existing linear metric learning algorithms cannot be applied to the chi-squared distance directly. χ²-LMNN adopts the LMNN model to learn a chi-squared distance, but its additional margin hyperparameter is sensitive to the data and needs to be evaluated on a hold-out set. In addition, it uses the soft-max method to optimize the objective function, which prevents regularizers from being introduced naturally. The proposed method uses the margin of a sample to construct the objective function and adopts the iterative projected gradient method for optimization, thereby overcoming these weaknesses of χ²-LMNN. Regularizers can be incorporated into our model naturally, and no additional parameter needs to be tuned compared to χ²-LMNN.

3. Chi-Squared Distance Metric Learning

In this section, we propose a metric learning algorithm termed chi-squared distance metric learning (CDML). The algorithm uses the margin of a sample to construct the objective function, which makes it well suited to metric learning for histogram data.

In the following, we first introduce the definition of the margin of a sample. Then the motivation and the objective function of CDML are presented. Finally, the optimization methods are discussed.

3.1. The Margin of a Sample

Let the training data be $\{(x_i, y_i)\}_{i=1}^{n}$, where each $x_i$ is sampled from the probability simplex $\Delta = \{x \in \mathbb{R}^{d} : x \geq 0,\ \mathbf{1}_d^{T} x = 1\}$ and $y_i$ is the associated class label; the symbol $\mathbf{1}_d$ denotes a $d$-dimensional column vector all of whose components are one. The chi-squared distance between two samples $x_i$ and $x_j$ can be computed by

\[
\chi^{2}(x_i, x_j) = \frac{1}{2}\sum_{k=1}^{d} \frac{\bigl(x_i^{k} - x_j^{k}\bigr)^{2}}{x_i^{k} + x_j^{k}}, \tag{1}
\]

where $x_i^{k}$ indicates the $k$th feature of the sample $x_i$.

For each instance $x$ in the original $d$-dimensional input space, we can map it into an $r$-dimensional probability simplex space by performing a simplex-preserving linear transformation $x \mapsto Lx$, where $L$ is an element-wise nonnegative matrix of size $r \times d$ ($r \leq d$) and each of its columns sums to one. In particular, the set of such simplex-preserving linear transformations can be defined as $\mathcal{S} = \{L \in \mathbb{R}^{r \times d} : L \geq 0,\ \mathbf{1}_r^{T} L = \mathbf{1}_d^{T}\}$. With the linear transformation matrix $L$, the chi-squared distance between two instances $x_i$ and $x_j$ in the transformed space can be written as

\[
\chi^{2}_{L}(x_i, x_j) = \chi^{2}(Lx_i, Lx_j) = \frac{1}{2}\sum_{k=1}^{r} \frac{\bigl((Lx_i)^{k} - (Lx_j)^{k}\bigr)^{2}}{(Lx_i)^{k} + (Lx_j)^{k}}. \tag{2}
\]
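To make the formulas above concrete, the following NumPy sketch (an illustration under our notation, not the authors' released C++ implementation) computes the chi-squared distance of (1) and its generalized form of (2) under a simplex-preserving matrix L; the small epsilon that guards against empty bins is our own addition.

```python
import numpy as np

def chi2_distance(x, y, eps=1e-12):
    """Chi-squared distance (1): 0.5 * sum_k (x_k - y_k)^2 / (x_k + y_k)."""
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))  # eps avoids division by zero

def chi2_distance_L(L, x, y, eps=1e-12):
    """Generalized chi-squared distance (2): chi2(Lx, Ly)."""
    return chi2_distance(L @ x, L @ y, eps)

# Toy usage: two 4-bin histograms and a 3x4 simplex-preserving matrix
# (nonnegative entries, each column sums to one).
x = np.array([0.1, 0.4, 0.3, 0.2])
y = np.array([0.3, 0.3, 0.2, 0.2])
L = np.array([[0.6, 0.2, 0.0, 0.1],
              [0.3, 0.5, 0.9, 0.1],
              [0.1, 0.3, 0.1, 0.8]])
print(chi2_distance(x, y), chi2_distance_L(L, x, y))
```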

For each sample $x_i$, we call $x_j$ ($j \neq i$) a hit if it has the same class label as $x_i$, and the nearest hit is defined as the hit with the minimum distance to $x_i$. Similarly, we call $x_j$ a miss if its class label differs from that of $x_i$, and the nearest miss is defined as the miss with the minimum distance to $x_i$. Let $h_j(x_i)$ and $m_l(x_i)$ be the $j$th nearest hit and the $l$th nearest miss of $x_i$, respectively. The margin of the sample $x_i$ [38] with respect to its $j$th nearest hit and $l$th nearest miss is defined as

\[
\rho_{j,l}(x_i) = \chi^{2}_{L}\bigl(x_i, m_l(x_i)\bigr) - \chi^{2}_{L}\bigl(x_i, h_j(x_i)\bigr), \tag{3}
\]

where $j = 1, \ldots, k_1$ and $l = 1, \ldots, k_2$ index the nearest hits and misses considered. Note that $h_j(x_i)$ and $m_l(x_i)$ are determined by the generalized chi-squared distance, and the transformation matrix $L$ affects the margin through the distance metric.
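As an illustration of the margin in (3), the helper below (our own sketch, reusing chi2_distance_L from the previous listing) locates the jth nearest hit and the lth nearest miss of a sample under the current transformation and returns the corresponding margin; a positive value means the miss lies farther away than the hit.

```python
import numpy as np

def margin(X, y, i, L, j=1, l=1):
    """Margin (3) of sample i w.r.t. its j-th nearest hit and l-th nearest miss
    under the transformed chi-squared distance (j and l are 1-based)."""
    y = np.asarray(y)
    n = len(X)
    d = np.array([chi2_distance_L(L, X[i], X[t]) if t != i else np.inf
                  for t in range(n)])
    hits = np.where((y == y[i]) & (np.arange(n) != i))[0]
    misses = np.where(y != y[i])[0]
    h = hits[np.argsort(d[hits])[j - 1]]      # index of the j-th nearest hit
    m = misses[np.argsort(d[misses])[l - 1]]  # index of the l-th nearest miss
    return d[m] - d[h]
```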

3.2. The Objective Function

Similar to many metric learning algorithms for the Mahalanobis distance, the goal of our algorithm is to learn a simplex-preserving linear transformation that optimizes kNN classification. Given an unclassified sample point $x$, kNN first finds its k-nearest neighbors in the training set and then assigns the label of the class that appears most frequently among these neighbors. Therefore, for robust kNN classification, each training sample should have the same label as its k-nearest neighbors. Obviously, if the margins of all samples in the training set are greater than zero, then robust kNN classification is obtained. By maximizing the margins of all training samples, our distance metric learning problem can be formulated as follows:

\[
\min_{L \in \mathcal{S}} \; \sum_{i=1}^{n} \sum_{j=1}^{k_1} \sum_{l=1}^{k_2} u\bigl(\rho_{j,l}(x_i)\bigr). \tag{4}
\]

Here, the utility function $u(\cdot)$ is used to control the contribution of each margin term to the objective function, and the constraint $L \in \mathcal{S}$ ensures that the chi-squared distance in the transformed space is still a well-defined metric.

Note that in (4) large margins can also be attained by simultaneously increasing the distances between each sample and its nearest hits and the distances to its nearest misses, as long as the latter increase much more. However, we expect each training sample and its nearest hits to form a compact cluster. Therefore, we further introduce a term constraining the distances between each sample and its nearest hits and obtain the following optimization problem:

\[
\min_{L \in \mathcal{S}} \; \sum_{i=1}^{n} \sum_{j=1}^{k_1} \sum_{l=1}^{k_2} u\bigl(\rho_{j,l}(x_i)\bigr) + \lambda \sum_{i=1}^{n} \sum_{j=1}^{k_1} \chi^{2}_{L}\bigl(x_i, h_j(x_i)\bigr), \tag{5}
\]

where $\lambda$ is a balance parameter trading off the effect of the two terms.

Moreover, considering the sparseness of some high-dimensional histogram data, directly learning the transformation matrix is likely to overfit the training data, resulting in poor generalization performance. To address this problem, we introduce the $\ell_{2,1}$ norm regularizer to control the model complexity. With the $\ell_{2,1}$ norm regularization, the metric learning problem can be written as

\[
\min_{L \in \mathcal{S}} \; \sum_{i=1}^{n} \sum_{j=1}^{k_1} \sum_{l=1}^{k_2} u\bigl(\rho_{j,l}(x_i)\bigr) + \lambda \sum_{i=1}^{n} \sum_{j=1}^{k_1} \chi^{2}_{L}\bigl(x_i, h_j(x_i)\bigr) + \mu \lVert L \rVert_{2,1}, \tag{6}
\]

where the regularization term $\lVert L \rVert_{2,1} = \sum_{k=1}^{r} \lVert L_{k\cdot} \rVert_{2}$ guarantees that the parameter matrix $L$ is sparse in rows and $\mu$ is a nonnegative regularization parameter.
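The sketch below evaluates the regularized objective (6) under our stated assumptions, namely a logistic utility u(ρ) = log(1 + exp(−ρ)) and the ℓ2,1 row-norm regularizer; it reuses chi2_distance_L from the earlier listing and is meant only to fix ideas, not to reproduce the authors' implementation.

```python
import numpy as np

def l21_norm(L):
    """l2,1 norm of L: the sum of the l2 norms of its rows (row sparsity)."""
    return np.sum(np.sqrt(np.sum(L ** 2, axis=1)))

def cdml_objective(X, y, L, k1=3, k2=3, lam=0.5, mu=1.0):
    """Objective (6): logistic loss of the margins + lam * hit compactness
    + mu * ||L||_{2,1}; k1/k2 are the numbers of nearest hits/misses used."""
    y = np.asarray(y)
    n = len(X)
    loss, compact = 0.0, 0.0
    for i in range(n):
        d = np.array([chi2_distance_L(L, X[i], X[t]) if t != i else np.inf
                      for t in range(n)])
        hits = np.where((y == y[i]) & (np.arange(n) != i))[0]
        misses = np.where(y != y[i])[0]
        d_hit = np.sort(d[hits])[:k1]
        d_miss = np.sort(d[misses])[:k2]
        for dh in d_hit:
            for dm in d_miss:
                loss += np.log1p(np.exp(-(dm - dh)))  # logistic utility of margin
        compact += d_hit.sum()                        # pull the nearest hits closer
    return loss + lam * compact + mu * l21_norm(L)
```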

3.3. The Optimization Method

For the constrained optimization problem in (5), there are two methods that can be used to solve it. The first strategy is the iterative projected gradient method, which uses a gradient descent step to decrease the objective, followed by iterative projections to ensure that $L$ remains a simplex-preserving linear transformation matrix. Specifically, on each iteration we take a gradient step $L \leftarrow L - \eta \nabla_{L} f$ and then project $L$ onto the set $\mathcal{S}$, where $\eta$ is a learning rate and $\nabla_{L} f$ is the gradient of the objective function with respect to the matrix parameter $L$. Note that the constraints on $L$ can be seen as separate probabilistic simplex constraints on each column of $L$. Therefore, the projection onto the set $\mathcal{S}$ can be done by performing a probabilistic simplex projection, which can be efficiently implemented with a complexity of $O(r \log r)$ [39], on each column of $L$. In addition, in order to compute the gradient $\nabla_{L} f$, we need the partial derivative of the chi-squared distance in (2). Let $p = L(x_i - x_j)$ and $q = L(x_i + x_j)$; the partial derivative of $\chi^{2}_{L}(x_i, x_j)$ with respect to the matrix $L$ is given by

\[
\frac{\partial \chi^{2}_{L}(x_i, x_j)}{\partial L} = \Bigl(\frac{p}{q}\Bigr)(x_i - x_j)^{T} - \frac{1}{2}\Bigl(\frac{p^{2}}{q^{2}}\Bigr)(x_i + x_j)^{T}, \tag{7}
\]

where the division and squaring of the vectors are element-wise and each term is an outer product. Generally speaking, the iterative projected gradient method needs an $r \times d$ matrix to initialize the linear transformation matrix $L$; in our work, the rectangular identity matrix is always used for this purpose. In particular, when the iterative projected gradient method is used, various regularizers, such as the Frobenius norm regularization and the $\ell_{2,1}$ norm regularization, can be naturally incorporated into the objective function in (5) without affecting how the problem is solved.
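The two computational pieces of this strategy, the column-wise projection onto the probability simplex and the distance gradient of (7), can be sketched as follows; the sorting-based projection is one standard O(r log r) routine consistent with the description above, and the listing is an illustration rather than the authors' exact code.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex, O(r log r)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_to_S(L):
    """Project every column of L onto the simplex so that L stays in the set S."""
    return np.apply_along_axis(project_simplex, 0, L)

def chi2_grad_L(L, x_i, x_j, eps=1e-12):
    """Gradient (7) of chi2_L(x_i, x_j) with respect to L (two outer products)."""
    p = L @ (x_i - x_j)
    q = L @ (x_i + x_j) + eps
    return np.outer(p / q, x_i - x_j) - 0.5 * np.outer((p / q) ** 2, x_i + x_j)

# One iteration of the method, given the objective gradient G and step size eta:
# L = project_to_S(L - eta * G)
```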

Another strategy is to first transform the constrained optimization problem in (5) into an unconstrained version by introducing a soft-max function and then apply the steepest gradient descent method for learning. Here the soft-max function is defined as

\[
L_{kl} = \frac{\exp(A_{kl})}{\sum_{s=1}^{r} \exp(A_{sl})}, \quad k = 1, \ldots, r, \; l = 1, \ldots, d, \tag{8}
\]

where the matrix $A$ is an assistant parameter. Obviously, the matrix $L$ is always in the set $\mathcal{S}$ for any choice of $A$. Thus, we can use the gradient of the objective function with respect to the matrix $A$ to minimize (5). In particular, by the chain rule, the partial derivative of the chi-squared distance in (2) with respect to the matrix $A$ can be computed by

\[
\frac{\partial \chi^{2}_{L}(x_i, x_j)}{\partial A_{kl}} = L_{kl}\left(\frac{\partial \chi^{2}_{L}(x_i, x_j)}{\partial L_{kl}} - \sum_{s=1}^{r} L_{sl}\,\frac{\partial \chi^{2}_{L}(x_i, x_j)}{\partial L_{sl}}\right), \tag{9}
\]

which is used to compute the gradient $\nabla_{A} f$. The initial value of the matrix $A$ used for optimization is constructed from the rectangular identity matrix and the matrix of all ones. This solving strategy is named the soft-max method. In particular, when the soft-max method is used for optimization, it is not easy to introduce the $\ell_{2,1}$ regularization directly. With either solving method, the proposed algorithm can perform both metric learning and dimensionality reduction.
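For comparison, the column-wise soft-max parameterization of (8) and its chain-rule gradient (9) can be sketched as follows; the max-shift inside the soft-max is a standard numerical-stability trick of our own and does not change the result.

```python
import numpy as np

def softmax_columns(A):
    """Map the assistant matrix A to L in the set S via the column-wise soft-max (8)."""
    E = np.exp(A - A.max(axis=0, keepdims=True))  # shift each column for stability
    return E / E.sum(axis=0, keepdims=True)

def grad_wrt_A(A, grad_L):
    """Chain rule (9): gradient w.r.t. A, given the gradient grad_L w.r.t. L."""
    L = softmax_columns(A)
    col_dot = np.sum(L * grad_L, axis=0, keepdims=True)  # sum_s L_sl * dF/dL_sl
    return L * (grad_L - col_dot)
```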

4. Experiments

In this section, we perform a number of experiments on five real-world image datasets to evaluate the proposed methods. In the first experiment, the two solving strategies, the iterative projected gradient method and the soft-max method, are compared in terms of training time and classification error. In the second experiment, we compare the proposed method with the state-of-the-art methods, including four histogram metrics (χ², QCN (available at http://www.ariel.ac.il/sites/ofirpele/QC/), QCS (available at http://www.ariel.ac.il/sites/ofirpele/QC/), and FastEMD (available at http://www.ariel.ac.il/sites/ofirpele/FastEMD/code/)) and three metric learning methods (ITML (available at http://www.cs.utexas.edu/~pjain/itml/), LMNN (available at http://www.cse.wustl.edu/~kilian/code/files/mLMNN2.4.zip), and GB-LMNN (available at http://www.cse.wustl.edu/~kilian/code/files/mLMNN2.4.zip)), on the image retrieval dataset corel. As the source code of the closely related method χ²-LMNN [15] is not publicly available, we further perform full-rank and low-rank metric learning experiments on the four datasets dslr, webcam, amazon, and caltech. Since χ²-LMNN has also been tested on these datasets, we can make a direct comparison. There are several parameters to be set in our model. The numbers of nearest hits and misses are empirically set according to 10% × #NumberofTrainingSamples/#NumberofClasses. We fix the balance parameter and the maximum number of iterations to 0.5 and 50 in our experiments, respectively. Moreover, the regularization parameter is set to 1 if the ℓ2,1 regularization is used. The proposed methods are implemented in standard C++. In our work, all experiments are executed on a PC with 8 Intel(R) Xeon(R) E5-1620 CPUs (3.6 GHz) and 8 GB main memory.

4.1. Datasets

Table 1 summarizes the basic information of the five histogram datasets used in our experiments. The corel dataset is often used in the evaluation of histogram distance metrics [7, 10, 11]. It contains 773 landscape images in 10 different classes: people in Africa, beaches, outdoor buildings, buses, dinosaurs, elephants, flowers, horses, mountains, and food, with 50 to 100 images per class. All images have two types of representation: SIFT and CSIFT. For SIFT, the Harris-affine detector [1] is used to extract orientation histogram descriptors. The second representation, CSIFT, is a SIFT-like descriptor for color images; it takes color edges into account when computing the SIFT and skips the normalization step to preserve more distinctive information. The size of the final histogram descriptor is 384 in both cases. For more detailed information, readers are referred to [10]. As in [7], for each kind of description we select 5 images from each class to construct a test set of 50 samples and use the remaining images as training data. Moreover, each 384-dimensional histogram descriptor is further normalized to sum to one.

The remaining four datasets all contain the same 10 common object categories (back_pack, bike, calculator, headphones, keyboard, laptop_computer, monitor, mouse, mug, and projector) and are often used in the study of domain adaptation [40, 41]. Specifically, dslr contains high-resolution images captured with a digital SLR camera in an office; webcam consists of low-resolution images taken with a web camera; amazon contains medium-resolution images downloaded from online merchants; and the images of caltech all come from the Caltech-256 database [42]. Figure 1 shows several example images from the projector category of the four datasets. Following the same protocol as in previous work [40], we first resized all images to the same width and converted them to grayscale. The local scale-invariant descriptor detector SURF [2], with a Hessian threshold of 1000, was then used to extract 64-dimensional SURF descriptors. Subsequently, we used the k-means clustering algorithm to construct a codebook of size 800 based on a randomly chosen descriptor subset of the amazon dataset. Finally, each image is represented by a bag of keypoints, which corresponds to a histogram counting the occurrences of each visual codebook entry in the image. As with corel, each histogram is further normalized to sum to one.
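The bag-of-visual-words pipeline described above can be summarized by the sketch below, which assumes the 64-dimensional SURF descriptors have already been extracted; the codebook size of 800 and the sum-to-one normalization follow the protocol above, while the function names and the use of SciPy's generic k-means routine are our own choices.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq  # any k-means implementation would do

def build_codebook(descriptors, k=800):
    """Cluster a random subset of SURF descriptors into k visual words."""
    centers, _ = kmeans2(descriptors.astype(float), k, minit='points')
    return centers

def image_histogram(image_descriptors, codebook):
    """Quantize one image's descriptors and return its normalized k-bin histogram."""
    words, _ = vq(image_descriptors.astype(float), codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalize to sum to one
```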

4.2. Comparison of the Two Solving Strategies

In this subsection, we first evaluate the computational efficiency of the two solving strategies: the iterative projected gradient method and the soft-max method. For a fair comparison, we adopt the same stopping criterion and adaptive step-size adjustment strategy in the implementation of both methods. Figure 2 presents the training time of the two solving strategies under different projection dimensions on the corel dataset with its two kinds of descriptors, SIFT and CSIFT. It can be observed from the figure that the iterative projected gradient method is always several times faster than the soft-max method. This result is not surprising considering that the soft-max method requires a more complex gradient computation than the former, according to (7) and (9). Although the iterative projected gradient method needs to perform the projection step on each iteration, the soft-max method also requires calculating the matrix L from the matrix A, which involves computing the exponential function r × d times.

We further compare the kNN classification error based on the distance metrics learned by the two solving strategies on the corel dataset. The experimental results are given in Figure 3, where the number of nearest neighbors of kNN is set to 3. From Figure 3, it can be seen that the classification error of the iterative projected gradient method is lower than that of the soft-max method in most cases. One possible reason is that the matrices in the feasible set of simplex-preserving transformations are less restricted than those generated by the soft-max parameterization in (8). Considering both training time and classification error, we hereafter use the iterative projected gradient method as the default solving strategy of CDML.
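For reference, the kNN evaluation used throughout the experiments (with k = 3, as above) can be re-implemented in a few lines; this sketch reuses chi2_distance_L from the earlier listing and is not the benchmarking code itself.

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, L, k=3):
    """Classify x_test by majority vote among its k nearest neighbors under chi2_L."""
    y_train = np.asarray(y_train)
    d = np.array([chi2_distance_L(L, x_test, x) for x in X_train])
    nn = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]
```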

4.3. Image Retrieval Results

In the image retrieval task, we compare the performance of the proposed method with four histogram metrics (χ², QCN, QCS, and FastEMD) and three metric learning methods (ITML, LMNN, and GB-LMNN) on the corel dataset. As in [10], we take the images in the test set of corel as the query images. The 50 nearest neighbors of each query image are retrieved under the different metrics. For the four metric learning methods, CDML, GB-LMNN, LMNN, and ITML, we use the predefined training set to train the metrics. Specifically, LMNN is initialized with the PCA matrix, and GB-LMNN is initialized by the output matrix of LMNN. The regularization parameter of CDML is set to 1. The retrieval results are given in Figure 4. We can see that CDML achieves better performance than the competing methods: it performs best on SIFT and ranks second on CSIFT. One key observation is that the retrieval results of GB-LMNN are significantly better than those of the other methods on the CSIFT descriptor, which shows the effectiveness of the nonlinear transformation approach. Moreover, it should be noted that the χ² metric always performs better than QCN, QCS, and FastEMD; one important reason is that the latter three methods are mainly designed for unnormalized histograms.

Figure 5 compares the training times of the four metric learning methods, that is, CDML, GB-LMNN, LMNN, and ITML. It can be seen that the computational efficiency of CDML ranks second among the four methods. In particular, CDML is on average about 9 times faster than the nonlinear metric learning method GB-LMNN. Note that the implementations of LMNN and GB-LMNN use OpenMP parallelization, whereas that of CDML does not. Figure 6 compares the kNN classification errors of χ², QCN, QCS, FastEMD, CDML, GB-LMNN, LMNN, and ITML on the test set of corel. Clearly, CDML always achieves the lowest classification error, whereas the classification performance of GB-LMNN is unstable.

4.4. Object Classification Results

To investigate the ability of the proposed method in both the full-rank and the low-rank metric learning cases, we further performed experiments comparing it with seven algorithms, χ², QCS, QCN, ITML, LMNN, GB-LMNN, and χ²-LMNN, on the four object classification datasets dslr, webcam, amazon, and caltech. For each dataset, we adopt exactly the same experimental setup as in [15]: the results of CDML are obtained by averaging over 5 runs on randomly generated 80%/20% splits for training and test. Therefore, a direct comparison of CDML with the other methods can be made. In what follows, the reported results of the seven algorithms χ², QCS, QCN, ITML, LMNN, GB-LMNN, and χ²-LMNN all come from [15]. Table 2 shows the performance of our method against the methods mentioned above in the full-rank case. Since χ²-LMNN does not use a regularizer, we set the regularization parameter of CDML to 0 for a fair comparison. From the table, it can be observed that CDML is the clear winner against χ², QCS, QCN, ITML, LMNN, GB-LMNN, and χ²-LMNN in terms of classification error. In particular, although CDML and χ²-LMNN are very similar in their learning models, the former shows a significant performance boost on three datasets (dslr, webcam, and caltech) compared with the latter. Moreover, for each dataset χ²-LMNN needs to perform an additional evaluation on a hold-out set to determine its adaptive margin parameter, while CDML does not.

Figure 7 compares the classification performance of the four metric learning methods LMNN, GB-LMNN, χ²-LMNN, and CDML in the low-rank metric learning case. One can see that, for all datasets, CDML consistently shows the best performance under different projection dimensions among the four metric learning algorithms. The results verify the effectiveness of the proposed method. Moreover, the low classification error of CDML under the projection dimensions 10, 20, 40, and 80 also demonstrates that dimensionality reduction is highly effective for histogram data.

4.5. Comparison with χ²-LMNN

Since the proposed method is very similar to χ²-LMNN, in this section we discuss the differences between them. CDML differs from χ²-LMNN in three major aspects. First, χ²-LMNN adopts the hinge-loss to construct the objective function, while CDML uses the logistic-loss. Second, χ²-LMNN uses the soft-max method as its solving strategy, while CDML adopts the iterative projected gradient method. Third, in χ²-LMNN, the target neighbors of each training sample are determined by the k-nearest neighbors in the original metric space and do not change during the learning process, whereas the nearest hits of CDML, which play the role of target neighbors, are dynamically updated according to the new distance metric on each iteration. It is therefore interesting to investigate the performance of χ²-LMNN when its target neighbors are dynamically changed.
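For concreteness, one standard way to write the two utility functions being contrasted is given below (an assumption on our part, with ρ denoting the margin of (3) and γ the additional margin parameter required by χ²-LMNN):

\[
u_{\text{hinge}}(\rho) = \max(0, \gamma - \rho), \qquad u_{\text{logistic}}(\rho) = \log\bigl(1 + \exp(-\rho)\bigr).
\]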

In order to evaluate the differences between χ²-LMNN and CDML, we implement the following four algorithms in standard C++.
(i) χ²-LMNN (Soft-Max). This is the original χ²-LMNN, which uses the soft-max method to solve for the simplex-preserving transformation matrix. The number of target neighbors is set to 3.
(ii) χ²-LMNN (Projection). This is χ²-LMNN using the iterative projected gradient method as the solving strategy. The number of target neighbors is set to 3.
(iii) χ²-LMNN (Dynamic). This is χ²-LMNN using the iterative projected gradient method as the solving strategy. The target neighbors of each sample are dynamically updated after the new simplex-preserving transformation matrix is obtained on each iteration; they are the k-nearest neighbors of each sample under the new chi-squared distance metric. The number of target neighbors is set to 3.
(iv) CDML-Margin. This is CDML using the hinge-loss, with an additional margin parameter as in [15], as the utility function instead of the logistic-loss. The nearest neighbor parameter settings are the same as those of CDML.

On the four histogram datasets dslr, webcam, amazon, and caltech, we conducted low-rank fivefold cross-validation experiments to evaluate the methods mentioned above. The four algorithms χ²-LMNN (soft-max), χ²-LMNN (projection), χ²-LMNN (dynamic), and CDML-Margin all require specifying a margin parameter. In our experiments, the margin parameter takes the values 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.15, and 0.2, and the projection dimension is set to 20. Figure 8 shows the effect of the margin parameter on classification error; the reported results are obtained by averaging over 5 runs. For comparison, the classification errors of CDML are also given. From the figure, we can see that the margin parameter has a significant effect on the performance of the margin-based methods, and different methods and datasets require distinct margin parameters. It can also be observed that CDML is the clear winner compared to the four margin-based methods χ²-LMNN (soft-max), χ²-LMNN (projection), χ²-LMNN (dynamic), and CDML-Margin. This indicates that the logistic-loss is better suited than the hinge-loss to metric learning for histogram data. To explain this, we further compare the logistic-loss and the hinge-loss in Figure 9. Evidently, the logistic-loss is more suitable for histogram data since the chi-squared distance margins between histogram data are often very small. Moreover, χ²-LMNN (soft-max) shows the worst performance on all datasets, and CDML-Margin outperforms χ²-LMNN (projection) in most cases. χ²-LMNN (dynamic) performs better than χ²-LMNN (projection) in some cases but not in others, which implies that introducing dynamic target neighbors does not guarantee a performance boost for χ²-LMNN (projection); one possible reason is that the datasets used here contain little noise. From Figure 8, we conclude that the promising performance of CDML against χ²-LMNN can be attributed to three reasons: (1) maintaining the same margin for all histogram data is unsuitable; (2) the iterative projected gradient method is more reasonable than the soft-max method; and (3) in CDML, the nearest hits and misses are adopted as the target neighbors. Indeed, even when the same hinge-loss and dynamic neighbor adjustment are adopted, CDML-Margin outperforms χ²-LMNN (dynamic) in most cases, which indicates that margins defined based on nearest hits and misses generally result in lower classification error for object classification.

To compare CDML and χ²-LMNN on training sets of varying size, we perform a further experiment on the webcam dataset. A random subset with 5, 10, 15, or 20 samples per class is taken to form the training set, and the rest of the dataset is used as the test set. For each training set size, we average the results over 5 random splits. In particular, we use the Euclidean distance and the chi-squared distance as benchmarks. The projection dimension of CDML and χ²-LMNN is set to 80. Table 3 shows the classification errors. As can be seen, the Euclidean distance performs the worst, and the classification performance of CDML is significantly better than that of χ²-LMNN, which indicates that the latter is more sensitive than CDML to the size of the training set.

5. Conclusion

To address the matching of histogram data, we propose a novel nearest neighbor-based algorithm that efficiently learns a chi-squared distance by maximizing the margin while maintaining the compactness between each training sample and its nearest hits. The proposed method learns a simplex-preserving linear transformation, which makes the learned metric a chi-squared distance in the transformed space. Two solving strategies, the iterative projected gradient method and the soft-max method, can be used to optimize our model; experimental results show that the former is more efficient. With the iterative projected gradient method, the ℓ2,1 regularizer can be introduced naturally. In comparative experiments on five real-world histogram datasets, the proposed method demonstrates very promising performance in both classification error and efficiency compared with the state-of-the-art methods. In the future, we will investigate other choices of the objective function [18, 43] and consider robustness against cross-bin distortion to design proper regularization terms. The C++ source code of CDML is freely available at https://sites.google.com/site/codeofcdml/.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to acknowledge that this research was supported in part by the Foundation of Henan Educational Committee of China under Grant nos. 14A520027 and 14A520041.