Mathematical Problems in Engineering
Volume 2018, Article ID 4652526, 7 pages
https://doi.org/10.1155/2018/4652526
Research Article

A Color Distance Model Based on Visual Recognition

Jingqin Lv1 and Jiangxiong Fang2

1School of Software, East China Jiaotong University, Nanchang 330013, China
2School of Geophysics and Measure Control Technology, East China University of Technology, Nanchang 330011, China

Correspondence should be addressed to Jingqin Lv; lvjingqin@ecjtu.edu.cn

Received 16 November 2017; Revised 13 March 2018; Accepted 27 March 2018; Published 20 May 2018

Academic Editor: Nazrul Islam

Copyright © 2018 Jingqin Lv and Jiangxiong Fang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In computer vision, Euclidean Distance is generally used to measure the distance between two colors, and how to deal with illumination change is still an important research topic. However, our evaluation results demonstrate that Euclidean Distance does not perform well under illumination change. Since human eyes can recognize similar or irrelevant colors under illumination change, a novel color distance model based on visual recognition is proposed. First, we find that various colors are distributed complexly in color spaces, so we propose to divide the HSV space into three less complex subspaces and study a specific distance model for each. Then a novel hue distance is modeled based on visual recognition, and the chromatic distance model is proposed in line with our visual color distance principles. Finally, the gray distance model and the dark distance model are studied according to the natures of their subspaces, respectively. Experimental results show that the proposed model outperforms Euclidean Distance and the related methods and achieves a good distance measure against illumination change. In addition, the proposed model obtains good performance for matching patches of pedestrian images. The proposed model can be applied to image segmentation, pedestrian reidentification, visual tracking, and other patch- or superpixel-based tasks.

1. Introduction

Nowadays, computer vision has achieved great progress and provides many useful technologies, such as image segmentation, image retrieval, object tracking, and video surveillance. In many applications, illumination change happens frequently, as shown in Figure 1 (selected from the Berkeley segmentation dataset [1] and the CUHK01 Pedestrian Dataset [2]). However, illumination change is still a difficulty that has not been well resolved [2–4].

Figure 1: Illustration of objects under illumination variations. The mean HSV saturations and values of the patches are given.

Generally, color distances are computed by Euclidean Distance in RGB or CIELAB [5–8]. In order to evaluate its performance, a simple test is carried out. First, we generate four chromatic color pairs and six gray colors as shown in Figure 2. The two colors of every chromatic pair share the same hue, with saturations of 0.7 and 0.4 and values of 1 and 0.7 in HSV space, and each pair is assumed to come from the same object under a moderate illumination change. Then the distances between each chromatic color and all the other thirteen colors are calculated by Euclidean Distance in RGB and in CIELAB, respectively. For each chromatic color, the corresponding thirteen distances are sorted in ascending order. Figure 2 shows the front seven distances corresponding to the dark orange color and the yellow color, respectively. We observe that the distance between the colors of the yellow or orange pair is not the shortest one and is even larger than six irrelevant color distances. Therefore, Euclidean Distance cannot work well under illumination change. In addition, the results of the proposed color distance model based on visual recognition (VR) are given for comparison. Obviously, the VR model measures the distances between these colors effectively.

Figure 2: Illustration of the front color distances. A value in blue is the distance of its corresponding color pair.
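The gap between perceived similarity and RGB Euclidean Distance can be sketched in a few lines. The HSV triples below are illustrative assumptions in the spirit of the generated pairs, not the exact colors of Figure 2:

```python
import colorsys
import math

def euclid_rgb(hsv1, hsv2):
    """Euclidean Distance between two HSV colors, measured in RGB."""
    return math.dist(colorsys.hsv_to_rgb(*hsv1), colorsys.hsv_to_rgb(*hsv2))

# A yellow pair assumed to come from one object under a moderate
# illumination change: the shaded copy has lower saturation and value.
yellow_bright = (1/6, 0.7, 1.0)
yellow_shaded = (1/6, 0.4, 0.7)
# An orange with a clearly different hue under full illumination.
orange_bright = (1/12, 0.7, 1.0)

d_pair = euclid_rgb(yellow_bright, yellow_shaded)        # same object
d_irrelevant = euclid_rgb(yellow_bright, orange_bright)  # different hue
```

With these particular values the irrelevant orange ends up closer to the bright yellow than the yellow's own shaded copy, mirroring the ranking failure described above.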

A few researchers have proposed different distance measures, each aiming at a specific goal. Since the nice perceptual uniformity of the LAB space remains in effect only within a radius of a few just-noticeable differences, Sharma et al. [9] devised a very sophisticated metric, CIEDE2000, which provides a closer fit to perceptual judgments. Mojsilović [10] proposed a complicated distance metric for naming input colors by eye. In order to achieve shading invariance, Wesolkowski and Fieguth [11] proposed a vector-angle-based distance measure in RGB, which decreases the weights of shaded colors. However, the distance between two pixels becomes smaller when either one is shaded. Vertan et al. [12] studied a parallel-coordinate distance in RGB for edge detection, in which the differences of the three channels are used. Lee and Plataniotis [13] combined a hue direction comparison term with a chroma comparison term in LAB for computing image difference; their comparison terms are computed using the product and the quadratic sum of two hue or chroma values.

For patch- or superpixel-based methods [14, 15], the color distance between two patches can be measured by their histogram distance. However, given a pixel corresponding to a bin, if the illumination changes, the pixel will usually vote for another bin, so the histogram distance is strongly influenced by the illumination. To obtain illumination invariance, many color descriptors have been proposed, such as opponent histograms [16], color moment invariants [16], and shape-context-based color signatures [17]. The color name method [18], which assigns a color to a language term, also displays a certain amount of photometric invariance. However, these descriptors and color names are intended for feature description or image recognition and are not suitable for measuring color distance. On the other hand, the state-of-the-art LOMO feature [15] applies the multiscale Retinex algorithm to preprocess pedestrian images before extracting color histograms. The Retinex algorithm, which handles both color constancy and dynamic range compression automatically, is useful for handling illumination variations to some extent.

Since human eyes can easily recognize similar or irrelevant colors under illumination change, a novel color distance model based on visual recognition is proposed. First, we find that various colors are distributed complexly in color spaces, so a single model cannot achieve a good color distance. Therefore, we propose to divide the HSV space into three less complex subspaces, and a specific distance model is studied for each. In addition, the principles of our visual color distance are introduced, while the related works [11–13] were proposed without explicit principles. Then a novel hue distance is modeled and trained based on visual recognition, and the chromatic distance model is proposed in line with the introduced principles. Finally, the gray distance model and the dark distance model are studied according to the natures of their subspaces, respectively. Experimental results show that the proposed model outperforms Euclidean Distance and the related methods and achieves a good distance measure against illumination change. In addition, our model obtains good performance for matching patches of pedestrian images under illumination variations.

2. The Proposed Method

If every color belonged to the same category, it would be easy to measure the similarities or distances between colors. However, colors are visually perceived as various categories. The primary color categories include white, gray, black, red, orange, yellow, green, cyan, blue, and purple. Furthermore, in a color space most colors are distributed between two or more primary colors, such as yellowish green, dark red, light cyan, and gray-blue. On the other hand, colors are represented with only three values. Therefore, we consider that various colors are distributed complexly in a three-dimensional color space, and it is difficult to recognize two arbitrary colors as similar or irrelevant with linear distance models over three color channels. Consequently, Euclidean Distances in RGB or CIELAB cannot measure color distance correctly, as shown in Figure 2.

Human eyes can recognize similar colors and discriminate various colors under illumination change. This ability can be helpful to many computer vision technologies, such as object segmentation, visual tracking, and pedestrian reidentification. We therefore propose to study a color distance model based on visual recognition and begin with a definition of a nonmetric visual color distance. The visual distance between two colors should satisfy the following principles as well as possible:

(1) If one color pair looks more similar than another pair, its distance should be less than the other pair's distance.

(2) If two colors look completely different, their distance should take the maximum value or at least exceed the distance of any similar pair.

(3) If two colors belonging to the same object are under a moderate or strong illumination change, their distance should be less than their distances to irrelevant colors.

In HSV space, the hue describes the various chromatic colors, the value defines the intensity of a color, and the saturation defines the chromatic degree of a color. Since the HSV space is intuitive to humans and well suited to color description, any two colors can be recognized as similar or irrelevant by using hue, saturation, and value, so we study our visual-recognition-based color distance model in the HSV space. The primary color categories can be grouped into chromatic ones and achromatic ones. If the illumination is heavily decreased, many chromatic colors may appear black to some extent, and an undertint color with low saturation may resemble a gray or a white color.

To reduce the complexity and achieve a desirable distance model, the HSV space is simply divided into three overlapping subspaces according to the above analyses, as shown in Figure 3. For each subspace, a specific color distance model needs to be studied. The chromatic subspace is composed of general colors, undertint colors, and dusky colors. If the hue values of some colors in this subspace are close enough, the colors will look similar or may belong to the same object. For this subspace, the key ability of the model is to discriminate various hues.

Figure 3: Illustration of the specific subspaces.

The gray subspace and the dark subspace, in turn, focus on near-gray colors and near-dark colors, respectively. Each includes a focused region and an adjacent region that overlaps the chromatic subspace. Apparently, colors in an adjacent region may look similar to colors in the corresponding focused region. The distance model of each subspace is expected to be able to compute the distance between colors from its focused region and its adjacent region.
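As a rough sketch of this division, a color can be assigned to its overlapped subspaces by thresholding saturation and value. The thresholds and the overlap margin below are illustrative assumptions, not the paper's trained settings:

```python
def subspaces(hsv, t_s=0.15, t_v=0.15, margin=0.1):
    """Return the set of overlapped subspaces an HSV color falls in.
    t_s, t_v, and margin are illustrative assumptions."""
    h, s, v = hsv
    spaces = set()
    if s > t_s and v > t_v:        # general, undertint, or dusky colors
        spaces.add("chromatic")
    if s < t_s + margin:           # focused gray region + adjacent band
        spaces.add("gray")
    if v < t_v + margin:           # focused dark region + adjacent band
        spaces.add("dark")
    return spaces
```

A saturated but dim color, for instance, lands in both the chromatic subspace and the dark subspace, which is exactly the overlap the text describes.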

3. The Chromatic Distance Model

In the chromatic subspace, every color is visually recognized by its hue, saturation, and value. Given two colors, the differences of these features, that is, Δh, Δs, and Δv, are natural distance measures. However, Euclidean Distance, which combines these differences, does not perform well in the above test (see Figure 2). Obviously, saturation and value are simple features for discriminating white, gray, or dark colors and colors with different saturations, while hue is the key feature for recognizing the various chromatic colors. We notice that the maximum of the circular hue difference Δh is 0.5, while there are seven primary hues and many mixed hues. Consequently, Δh alone cannot effectively tell whether two hues are similar or different.
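The 0.5 maximum comes from the circular nature of hue, which a minimal helper makes explicit:

```python
def hue_diff(h1, h2):
    """Circular hue difference for h in [0, 1); the maximum is 0.5,
    since hue wraps around (red sits at both 0 and 1)."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)
```

Two near-red hues on opposite sides of the wrap point, such as 0.95 and 0.05, are thus correctly close.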

As humans can recognize various hues, we propose to train a novel hue distance based on visual recognition, which is a regression task. First, a hue distance of 1 is taken as the boundary, because two colors whose hues differ beyond it should be recognized as completely different. About seven hundred hue pairs are then generated by randomly sampling the whole hue range. Since it is difficult to give a precise distance value to a hue pair, a rough distance value (i.e., 0.2, 0.4, 0.6, 0.8, 1.1, or 1.4) is assigned by eye to each hue pair, as shown in Figure 4. These default distance values correspond to similar hue pairs (distance < 1, label = 1) or irrelevant hue pairs (distance > 1, label = −1).

Figure 4: Illustration of some training hue pairs.

To achieve good hue recognition performance, a logistic function, of the kind used to model the activation of a neuron in a neural network, is adopted to model the visual perception of hue distance. One parameter is fixed to an appropriate constant, and another can then be determined by enforcing the hue distance to vanish when the hue difference is zero. In the whole hue range, the primary hues are not distributed linearly with respect to human perception, so a hinge loss function is utilized to learn the remaining parameters by gradient descent. The color distance is then calculated by combining the hue distance with the saturation and value differences. However, such a combination does not keep in line with the third principle, because Δv usually becomes large when the illumination change is heavy, and Δs is also increased to some extent. In fact, irrelevant colors can already be discriminated by the most discrepant of Δs and Δv, and the color distance is improved accordingly. Consequently, the chromatic model gives an effective color distance under illumination change in the chromatic subspace.
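The exact logistic form and training setup are not reproduced here; the sketch below shows one plausible instantiation under stated assumptions: an upper asymptote of 2, labels +1/−1 for similar/irrelevant pairs, and a hinge loss that is active only when a pair falls on the wrong side of distance 1. The initial parameters and learning rate are also assumptions:

```python
import math

def hue_distance(dh, a, b, c=2.0):
    """Logistic model of perceived hue distance for a hue
    difference dh; the upper asymptote c is an assumed constant."""
    return c / (1.0 + math.exp(-(a * dh + b)))

def train(pairs, a=10.0, b=-2.0, lr=0.1, epochs=500):
    """Hinge-loss gradient descent on (dh, y) pairs, with y = +1
    for similar hues (target distance below 1) and y = -1 for
    irrelevant ones (target distance above 1)."""
    for _ in range(epochs):
        for dh, y in pairs:
            d = hue_distance(dh, a, b)
            if y * (d - 1.0) > 0.0:      # pair sits on the wrong side
                g = d * (1.0 - d / 2.0)  # dD/dz of the logistic, c = 2
                a -= lr * y * g * dh
                b -= lr * y * g
    return a, b
```

After training on a handful of labeled hue differences, similar pairs stay below 1 and irrelevant pairs end up above it.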

4. The Gray Distance Model and the Dark Distance Model

In the other two subspaces, achromatic categories, that is, white, gray, and black, are added, so their complexities are much larger than that of the pure chromatic subspace. Therefore, it is more difficult to measure color distances in these two subspaces. For example, the distances between the dark orange and some grays are lower than the orange pair's distance in Figure 2.

Obviously, a gray or a white color only looks similar to colors whose saturation is below a low threshold. The gray color distance is therefore introduced by combining the value difference with the chromatic distance for all the colors in the gray subspace. The key of this distance model is how to compute an appropriate gray weight. For a chromatic color, the saturation may vary under illumination change, while the saturation of a gray-like color is limited to a very small and low range. Therefore, the gray weight is calculated from the lower saturation of the two colors, with a parameter that defines the range of the gray-like colors.

For a black color, the chromatic and saturation information is lost, and its hue and saturation will be dominated by noise. Thus, hue and saturation should be neglected when measuring the distance between a black color and other colors. In the dark subspace, the color distance is therefore modeled on the value difference, where a reference value is set to that of an appropriate dark color. Like the gray weight above, the dark weight is defined in the same manner, with a parameter that defines the range of the dark colors.
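A compact sketch of both models follows, under the assumption that each blends the chromatic distance with the value difference through its weight (one plausible reading of the text; the 0.15 parameter values follow the experimental settings quoted later):

```python
def gray_weight(c1, c2, t_g=0.15):
    """1 for fully gray-like pairs, 0 for clearly chromatic ones;
    computed from the lower of the two saturations."""
    return max(0.0, 1.0 - min(c1[1], c2[1]) / t_g)

def dark_weight(c1, c2, t_d=0.15):
    """Defined in the same manner from the lower of the two values."""
    return max(0.0, 1.0 - min(c1[2], c2[2]) / t_d)

def blended_distance(c1, c2, chroma_dist, weight):
    """Blend the chromatic distance with the value difference:
    hue and saturation are ignored in proportion to the weight."""
    (h1, s1, v1), (h2, s2, v2) = c1, c2
    w = weight(c1, c2)
    return (1.0 - w) * chroma_dist(c1, c2) + w * abs(v1 - v2)
```

For a pair of near-grays the weight saturates at 1 and only the value difference matters, while a clearly chromatic pair falls back entirely on the chromatic model.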

5. Experimental Results

The related methods [11–13] were evaluated for their specific goals, such as image segmentation, edge detection, or computing image difference, instead of color distance measurement. Since it is practically impossible to give appropriate distance values for most color pairs, we carry out two experiments by comparing the distances of the related color pairs with the distances between various irrelevant colors. In addition, we test our model by matching patches of pedestrian images. Evidently, four of the parameters of our model should lie in the range (0, 1). In the following experiments, the five parameters are simply set to 2, 0.15, 0.2, 0.15, and 0.15, respectively.

5.1. Tests for the Chromatic Distance Model

To evaluate the performance of the chromatic distance model, enough testing colors covering fundamental illumination variations are needed. Usually, the value difference of an object under illumination change may be significant, while the hue difference is small and limited. In Figure 1, the hue differences of the five patch pairs are 0.02, 0.02, 0.03, 0.01, and 0.01, respectively. Since the illumination change cases of these patch pairs are typical for many computer vision tasks, ten color query tests are conducted, one with each color of these patch pairs.

For a test, twelve query colors are generated by using the saturation and value of one patch and twelve hues sampled uniformly over the whole hue range. In addition, for each query color, three related colors are generated by using three hue differences (i.e., −0.02, 0, and 0.02) and the saturation and value of the related patch, which belongs to the same object as the first patch. As a result, a query color and its three related colors can be regarded as coming from the same object under illumination variations, while the other 44 colors are irrelevant to the query color.

The proposed model, Euclidean Distances in three color spaces, and the distance measures [11–13] are evaluated by comparing the distances between each query color and all the other 47 colors of a test. The three related color distances between a query color and its related colors should be shorter than all the 44 irrelevant color distances between the query color and the irrelevant colors. Thus, a query error is defined as the number of irrelevant color distances shorter than a related color distance. Finally, the average query error over all the related colors is employed to evaluate the performance of a test.
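The query-error metric can be written down directly; per the definition above, it counts the irrelevant distances that sneak in ahead of each related distance:

```python
def query_errors(related_dists, irrelevant_dists):
    """For each related color distance, count the irrelevant color
    distances that are shorter; a perfect measure scores all zeros."""
    return [sum(1 for di in irrelevant_dists if di < dr)
            for dr in related_dists]

def average_query_error(related_dists, irrelevant_dists):
    """Average query error over all the related colors."""
    errs = query_errors(related_dists, irrelevant_dists)
    return sum(errs) / len(errs)
```
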

The average query errors of these ten tests are given in Table 1. Most distance measures except the measure [11] and ours perform badly to different extents. The main reason could be that these distance measures cannot deal with the complexity of their color spaces. The measure [11], based on vector angles in RGB, achieves only a few zero query errors; because a vector angle depends on both saturation and hue, it obtains good results only when the saturation of the query color is high enough. Obviously, our model achieves zero query errors under different illumination variations. In our model, the distances of hue, saturation, and value are all considered properly, and the hue distance is studied in line with the principles. Therefore, the proposed model implements a good distance measure under various color variations.

Table 1: The testing results corresponding to Figure 1.

5.2. Tests for the Gray Distance Model

Five color query tests are carried out with the five color pairs shown in Figure 5, which are gray or undertint color pairs. Thirty hues, ten saturations, and fifteen values are uniformly sampled at intervals of 0.033, 0.02, and 0.05, respectively. In total, 4500 chromatic colors generated with these sampled data are assumed to be dissimilar to all ten testing colors in Figure 5.

Figure 5: Illustration of undertint objects under illumination variations.

In each color pair, the color with the smaller saturation is used as the query color. Its distances from its related color and from the 4500 chromatic colors are calculated by the proposed model, the above Euclidean Distances, and the distance measures [11–13], respectively. Then the number of chromatic color distances smaller than the related color distance is employed as the query error and recorded in Table 2. Since most of the chromatic colors are truly irrelevant to each query color, a very small number relative to 4500 indicates the correctness of a distance measure. The results in Table 2 show that it is difficult for the Euclidean Distances to measure the color distance correctly in the gray subspace.

Table 2: The testing results corresponding to Figure 5.

The saturations of the five pairs are 0.01, 0.18, 0.28, 0.11, and 0.24, respectively. As for the “Pink” and “Leg” pairs, their saturations are large, and their chromatic information, that is, hue, is relatively nontrivial, so our results on these two pairs are reasonably correct. Though four pairs have large saturations, the measure [11] and our model achieve good performance. Therefore, we consider that our model can be applied to measure distance under illumination variations in the gray subspace.

5.3. Matching Patches for Pedestrian Reidentification

As shown in Figure 6, 128 pairs of 9 × 9 patches, each uniform in color, are cropped from 128 pairs of pedestrian images in the CUHK01 Dataset [2]. Many pairs are under strong illumination change, while more are under moderate illumination variations. We assign a hue label to each pair; the labels include pink, red, orange, yellow, green, cyan, and blue.

Figure 6: Two patch pairs and their matching results.

Every query patch is matched with all the other 255 patches by calculating their color distances with several color histogram distances and with our model, respectively. These distances are sorted in ascending order, and a threshold is used to exclude patches dissimilar to the query patch and reserve patches with the same label as the query. In each patch list in Figure 6, the query patch, highlighted by a dark yellow hoop, is shown at the left, and all the reserved patches are listed after it. The related patch of a query patch is highlighted by a yellow hoop, while a patch with a hue label different from the query patch's is viewed as a wrong match and highlighted by a red hoop. The recall is defined as the rate at which the related patch of a query patch is reserved, and the error rate is computed from all the matching lists.
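The recall and error-rate bookkeeping can be sketched as follows; the query data structure here is illustrative, not the paper's format, and the error rate is computed as the fraction of wrong-label patches among all reserved patches, which is one plausible reading of the text:

```python
def match_stats(queries, threshold):
    """Recall and error rate for threshold-based patch matching.
    Each query is (related_distance, other_matches), where
    other_matches is a list of (distance, same_label) pairs for
    the remaining patches."""
    hits, reserved, wrong = 0, 0, 0
    for related_d, others in queries:
        if related_d <= threshold:          # related patch reserved
            hits += 1
            reserved += 1
        for d, same_label in others:
            if d <= threshold:              # patch survives the cut
                reserved += 1
                if not same_label:
                    wrong += 1              # wrong-label match
    recall = hits / len(queries)
    error_rate = wrong / reserved if reserved else 0.0
    return recall, error_rate
```
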

Our model, LOMO [15], and other color histograms used by recent pedestrian reidentification works [19, 20] are evaluated. LOMO is one of the state-of-the-art features and is adopted by many current works [21, 22]. Although it extracts color histograms and texture features and applies a postprocess to address viewpoint changes of pedestrians, only a histogram is extracted as the color feature for a single patch. In our experiments, we found that the Retinex preprocess of LOMO [15] improves the matching performance of every color histogram. Thus, Table 3 reports the results of the histograms marked with (R) or (LOMO), for which the Retinex preprocess is applied; only one baseline histogram and our model are tested without it.

Table 3: The results of the matching tests.

For a histogram, each input color is discretized into one of 8 or 512 bins. If the illumination changes, colors usually move to other bins. Thus, the related HSV (LOMO) distance of a patch pair may be very large or even equal to the maximum value 2, as shown in Figure 7. On the other hand, colors with different primary hues may vote for the same bin, which leads to matching mistakes. For example, in Figure 6, the two HSV (LOMO) lists include two red patches and three cyan ones, respectively. Although the illumination variations are handled by the Retinex preprocess, the best recall of these histograms is 66% at an error rate of 34%. Therefore, color histograms are not good at measuring color distance and are sensitive to illumination change.
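The bin-jump effect is easy to demonstrate with a toy one-channel histogram: two visually near-identical patches whose value shifts slightly across a bin boundary reach the maximum L1 distance of 2.

```python
def value_hist(values, bins=8):
    """Toy L1-normalized histogram over the value channel."""
    hist = [0.0] * bins
    for v in values:
        hist[min(int(v * bins), bins - 1)] += 1.0
    total = sum(hist)
    return [x / total for x in hist]

def l1_dist(h1, h2):
    """L1 histogram distance; the maximum for normalized histograms is 2."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# A uniform 9 x 9 patch and a barely shaded copy: the 0.02 value
# shift crosses a bin boundary, so the histograms share no mass.
patch = [0.49] * 81
shaded = [0.51] * 81
```

Here the underlying pixel difference is only 0.02, yet the histogram distance jumps straight to its maximum.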

Figure 7: The distributions of the related color distances of the 128 patch pairs.

Obviously, our model achieves desirable performance in this test. In Figure 7, the related distances, which are all lower than 0.7, indicate that the distances of the three channels are effectively combined and that the model keeps in line with the third principle. Since the hue labels given to the testing patch pairs are discretized outputs, a few matching mistakes are inevitable. Consequently, we consider that our model can serve as an effective color distance measure for pedestrian reidentification.

6. Conclusion

In this paper, a color distance model based on visual recognition is proposed. To reduce its complexity, the HSV space is divided into three subspaces. Then a novel hue distance is modeled based on visual recognition, and the chromatic distance model is studied in line with the proposed principles. Finally, the gray distance model and the dark distance model are proposed according to the natures of their subspaces, respectively. Experimental results show that the proposed model outperforms Euclidean Distance and the related methods and achieves a good distance measure against illumination change. Therefore, the proposed model can be applied to image segmentation, color-based detection, and image retrieval. In addition, our model obtains good performance for matching patches of pedestrian images. As many patches or superpixels consist of two or three kinds of colors, an effective color clustering method needs to be studied in future work so that the proposed model can be applied to pedestrian reidentification, visual tracking, and other patch- or superpixel-based tasks.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by NSFC, China (nos. 61463005, 61463017, 61563016, and 61702226), the Scientific Program of the Education Department of Jiangxi Province (no. GJJ14400), the Key Research and Development Program of Jiangxi Province (20161BBE53006), the Natural Science Foundation of Jiangsu Province (Grant no. BK20170200), and the Fundamental Research Funds for the Central Universities (Grant no. JUSRP11854).

References

  1. D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision, pp. 416–423, July 2001.
  2. R. Zhao, W. Ouyang, and X. Wang, “Person re-identification by salience matching,” in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 2528–2535, Australia, December 2013.
  3. L. Wen, Z. Cai, Z. Lei, D. Yi, and S. Z. Li, “Robust online learned spatio-temporal context model for visual tracking,” IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 785–796, 2014.
  4. A. Andreopoulos and J. K. Tsotsos, “50 years of object recognition: directions forward,” Computer Vision and Image Understanding, vol. 117, no. 8, pp. 827–891, 2013.
  5. L. Wen, D. Du, Z. Lei, S. Z. Li, and M.-H. Yang, “JOTS: joint online tracking and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), pp. 2226–2234, USA, June 2015.
  6. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2281, 2012.
  7. E. Lobacheva, O. Veksler, and Y. Boykov, “Joint optimization of segmentation and color clustering,” in Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV '15), pp. 1626–1634, Chile, December 2015.
  8. S. Lin, D. Ritchie, M. Fisher, and P. Hanrahan, “Probabilistic color-by-numbers: suggesting pattern colorizations using factor graphs,” ACM Transactions on Graphics, vol. 32, no. 4, article 37, 2013.
  9. G. Sharma, W. Wu, and E. N. Dalal, “The CIEDE2000 color-difference formula: implementation notes, supplementary test data, and mathematical observations,” Color Research & Application, vol. 30, no. 1, pp. 21–30, 2005.
  10. A. Mojsilović, “A computational model for color naming and describing color composition of images,” IEEE Transactions on Image Processing, vol. 14, no. 5, pp. 690–699, 2005.
  11. S. Wesolkowski and P. Fieguth, “A probabilistic shading invariant color distance measure,” in Proceedings of the European Conference on Signal Processing (EUSIPCO), pp. 1907–1911, 2007.
  12. C. Vertan, B. Ionescu, and M. Ciuc, “Color image edge detection by parallel coordinates-based color distance,” in Proceedings of the 8th International Conference on Communications (COMM '10), pp. 141–144, Romania, June 2010.
  13. D. Lee and K. N. Plataniotis, “Towards a novel perceptual color difference metric using circular processing of hue components,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '14), pp. 166–170, Italy, May 2014.
  14. S. Ardeshir, K. M. Collins-Sibley, and M. Shah, “Geo-semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), pp. 2792–2799, USA, June 2015.
  15. S. Liao, Y. Hu, X. Zhu, and S. Z. Li, “Person re-identification by local maximal occurrence representation and metric learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), pp. 2197–2206, USA, June 2015.
  16. K. van de Sande, T. Gevers, and C. G. M. Snoek, “Evaluating color descriptors for object and scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582–1596, 2010.
  17. I. Kviatkovsky, A. Adam, and E. Rivlin, “Color invariants for person reidentification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 7, pp. 1622–1634, 2013.
  18. J. van de Weijer, C. Schmid, J. Verbeek, and D. Larlus, “Learning color names for real-world applications,” IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1512–1523, 2009.
  19. D. Chen, Z. Yuan, B. Chen, and N. Zheng, “Similarity learning with spatial constraints for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '16), pp. 1268–1277, Las Vegas, NV, USA, June 2016.
  20. S. Bak and P. Carr, “One-shot metric learning for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '17), pp. 1571–1580, Honolulu, HI, July 2017.
  21. J. Chen, Y. Wang, J. Qin, L. Liu, and L. Shao, “Fast person re-identification via cross-camera semantic binary transformation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '17), pp. 5330–5339, Honolulu, HI, July 2017.
  22. Z. Zhong, L. Zheng, D. Cao, and S. Li, “Re-ranking person re-identification with k-reciprocal encoding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '17), pp. 3652–3661, Honolulu, HI, July 2017.