Research Article | Open Access


Guangyi Xie, Zhe Huang, Baolong Guo, Yan Zheng, Yunyi Yan, "Image Retrieval Based on the Combination of Region and Orientation Correlation Descriptors", Journal of Sensors, vol. 2020, Article ID 6068759, 15 pages, 2020. https://doi.org/10.1155/2020/6068759

Image Retrieval Based on the Combination of Region and Orientation Correlation Descriptors

Academic Editor: Eduard Llobet
Received: 23 Nov 2019
Revised: 02 Feb 2020
Accepted: 13 May 2020
Published: 10 Jun 2020

Abstract

The rapidly growing volume of digital images requires effective retrieval, but the trade-off between accuracy and speed is a tricky problem. This paper proposes a lightweight and efficient image retrieval approach that combines region and orientation correlation descriptors (CROCD). The region color correlation pattern and the orientation color correlation pattern are extracted by the region descriptor and the orientation descriptor, respectively, and the feature vector of the image is computed from the two correlation patterns. The proposed algorithm combines the advantages of statistical and structural texture description methods and can represent the spatial correlation of color and texture. The feature vector has only 80 dimensions for a full-color image, so retrieval is very efficient. The proposed algorithm is extensively tested on three datasets in terms of precision and recall. The experimental results demonstrate that it outperforms other state-of-the-art algorithms.

1. Introduction

The rapid and massive growth of digital images requires effective retrieval methods, which motivates research into effective image storage, indexing, and retrieval technologies [1–4]. Image retrieval and indexing have been applied in many fields, such as the internet, media, advertising, art, architecture, education, medicine, and biology. Text-based image retrieval first manually labels the image with text and then uses keywords to retrieve it. Retrieving an image by the degree of character matching in its description is time-consuming and subjective. Content-based image retrieval overcomes the shortcomings of the text-based method: it starts from the visual characteristics of the image (color, texture, shape, etc.) and finds similar images in the image library (search range). According to the working principle of general image retrieval, there are three keys to content-based image retrieval: selecting appropriate image features, adopting effective feature extraction methods, and using accurate feature matching strategies.

Texture is an important and difficult-to-describe feature of images. Aerial and remote sensing pictures, fabric patterns, complex natural landscapes, and animals and plants all contain textures. Generally speaking, local irregularity combined with macroscopic regularity in an image is called texture, and areas with repetitiveness, simple shapes, and consistent intensity are regarded as texture elements. After the local binary pattern (LBP) [5], many similar methods have been proposed in recent years, e.g., local tridirectional patterns [6], local energy-oriented pattern [7], 3D local transform patterns [8], local structure cooccurrence pattern [9], and local neighborhood difference pattern [10].

The color histogram is the most commonly used and most basic color feature; however, it loses the correlation between pixels. To solve this problem, many researchers have come up with their own visual models. The color correlogram [11] and the color coherence vector (CCV) [12] characterize the color distributions of pixels and the spatial correlation between pairs of colors. The gray cooccurrence matrix [13, 14] describes the cooccurrence relationship between the values of two pixels. Mehmood et al. present an image representation based on the weighted average of triangular histograms (WATH) of visual words [15]. This approach adds the image's spatial contents to the inverted index of the bag-of-visual-words (BoVW) model.

1.1. Related Works

Color, texture, and shape are prominent features of an image, but a single feature usually has limitations. To overcome these problems, some researchers have proposed multifeature fusion methods, which utilize two or more features simultaneously. In [16], Pavithra et al. proposed an efficient framework for image retrieval using color, texture, and edge features. Fadaei et al. proposed a new content-based image retrieval (CBIR) scheme based on an optimized combination of color and texture features to enhance retrieval precision [17]. Reta et al. put forward the color uniformity descriptor (CUD) in the Lab color space [18]. Color difference histograms (CDH) count the perceptually uniform color difference between two points under different backgrounds with regard to colors and edge orientations in the Lab color space [19]. A multiregion-based diagonal texture structure descriptor for image retrieval is proposed in the HSV space [20]. In [21], Feng et al. proposed multifactor correlation (MFC) to describe the image, which includes structure element correlation (SEC), gradient value correlation (GVC), and gradient orientation correlation (GDC). Wang and Wang proposed SED [22], which integrates the advantages of both statistical and structural texture description methods and can represent the spatial correlation of color and texture. Singh et al. proposed BDIP+BVLC+CH (BBC) [23], a combination of the texture features block difference of inverse probabilities (BDIP) and block variation of local correlation coefficients (BVLC) with color histograms. In [24], the visual contents of the images are extracted using block-level discrete cosine transformation (DCT) and the gray level cooccurrence matrix (GLCM) in the RGB channels, respectively; this can be represented as DCT+GLCM. In addition, a local extrema cooccurrence pattern for color and texture image retrieval is proposed in [25].

According to the texton theory proposed by Julesz [26], many scholars have proposed texton-based algorithms. The texton cooccurrence matrix (TCM) [27], a combination of the à trous wavelet transform (AWT) and Julesz's texton elements, is used to generate the texton image. The texton cooccurrence matrix is then obtained from the texton image and used for feature extraction and retrieval of images from a natural image database. The multitexton histogram (MTH) integrates the advantages of the cooccurrence matrix and the histogram, and it has good discrimination power for color, texture, and shape features [28]. Correlated primary visual texton histogram features (CPV-THF) are proposed for image retrieval in [29]. The square texton histogram (STH) is derived from the correlation between texture orientation and color information [30].

1.2. Main Contributions

Considering that color, texture, and uniformity features are of relevant importance in the recognition of visual patterns [17–21], the algorithm proposed in this paper combines region and orientation correlation descriptors (CROCD). This method entails two compact descriptors that characterize the image content by analyzing similar color regions and four orientations of color edges in the image. It is based on the HSV color space since it agrees better with visual assessments [20]. In contrast to other approaches, CROCD features have the advantage of balancing operation speed and accuracy.

The rest of the paper is organized as follows. In Section 2, the overall introduction and workflow of the algorithm are presented. Section 3 explains the proposed algorithm in detail. Experimental results are presented in Section 4. Finally, the whole work is concluded in Section 5.

2. Region Correlation and Orientation Correlation Descriptors

There are different objects in an image. The same object is usually an area made up of the same or similar colors, which constitutes the texture of the object's interior. The edges of an object differ distinctly in color from their surroundings, while the edge pixels of a single object are the same or similar in color. Based on this analysis, this paper presents a method combining a region color correlation descriptor and an orientation color correlation descriptor; it is an effective way of combining color, texture, and edges to retrieve images. Firstly, the color image is quantized and coded; then the region color correlation pattern is calculated by the region descriptor, from which the region correlation vector is computed. Secondly, the orientation color correlation pattern is obtained by the orientation descriptor, the color correlation histograms of the four orientations are computed from this pattern, and the orientation color correlation vector of the image is calculated. The feature vector of the image is obtained by concatenating the region and orientation color correlation vectors. Finally, a similarity distance measure compares the query feature vector with the feature vectors of the database; the distances are sorted, and the images corresponding to the best-matching vectors are produced as the final results. The workflow of the proposed algorithm is shown in Figure 1.

3. The Algorithm Process

3.1. Image Color Quantization

Common color spaces for images are RGB, HSV, and Lab. Among them, the HSV space is a uniform quantized space that mimics human color perception well; thus, many researchers use it for image processing [17, 20–22, 25]. The HSV color space is defined in terms of three components: hue (H), saturation (S), and value (V). The H component describes the color type and ranges from 0 to 360. The S component refers to the relative purity, or how much the color is diluted with white, and ranges from 0 to 1. The V component represents the amount of black mixed with a hue, i.e., the brightness of the color, and also ranges from 0 to 1.

Image color quantization is a common step in image processing, especially in image retrieval. Even when the same object is detected, its color varies slightly with lighting, environment, and background. These effects can be eliminated by quantization with appropriate bins. Quantization also simplifies subsequent operations and reduces the operation time.

Therefore, given a color image I(x, y) in the HSV space, the quantization proceeds as follows [22]: (1) Nonuniformly quantize the H, S, and V channels into 8, 3, and 3 bins, respectively, as in equations (1), (2), and (3), so that H ∈ {0, ..., 7}, S ∈ {0, 1, 2}, and V ∈ {0, 1, 2}. (2) Calculate the combined value of every point according to formula (4): C = H × Q_S × Q_V + S × Q_V + V, where Q_S and Q_V are the numbers of quantization bins of colors S and V, respectively. As mentioned above, both S and V are quantized into 3 bins, so Q_S = Q_V = 3. Substituting them into equation (4) gives C = 9H + 3S + V. (3) Obtain the quantized color image, denoted by C(x, y), with C(x, y) ∈ {0, 1, ..., 71}.

These quantized values will be used for the color statistics of the region and orientation descriptors, respectively, and the number of quantization levels of the quantized image is denoted by bins (here, bins = 72).
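The quantization above can be sketched in Python. The combining formula C = 9H + 3S + V follows equation (4); the specific nonuniform bin edges below (`H_EDGES`, `S_EDGES`, `V_EDGES`) are illustrative assumptions, since the paper adopts the scheme of SED [22] whose exact boundaries are given in equations (1)-(3):

```python
from bisect import bisect_right

# Assumed, illustrative nonuniform bin edges; the paper follows the
# quantization scheme of SED [22], whose exact edges may differ.
H_EDGES = [24, 45, 80, 160, 200, 265, 315]   # hue in [0, 360) -> 8 bins
S_EDGES = [0.15, 0.8]                        # saturation in [0, 1] -> 3 bins
V_EDGES = [0.15, 0.8]                        # value in [0, 1] -> 3 bins

def quantize_pixel(h, s, v):
    """Return the combined bin C = 9*H + 3*S + V in {0, ..., 71}."""
    H = bisect_right(H_EDGES, h)   # 0..7
    S = bisect_right(S_EDGES, s)   # 0..2
    V = bisect_right(V_EDGES, v)   # 0..2
    return 9 * H + 3 * S + V

def quantize_image(hsv_pixels):
    """Quantize a 2-D grid of (h, s, v) tuples into a grid of bin indices."""
    return [[quantize_pixel(*px) for px in row] for row in hsv_pixels]
```

With any valid edges, the combined index always stays within the 72-bin range, which is what Sections 3.2 and 3.3 rely on.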

3.2. Region Correlation Descriptor

The concept of texton element is proposed by Julesz [26]. Texton is an important concept in texture analysis. In general, textons are defined as a set of blobs or emergent patterns sharing a common property all over the image.

The features of an image are closely related to the distribution of textons; different textons form different images. If the textons in the image are small and the tone difference between adjacent textons is large, the image may have a smooth texture. If the textons are large and composed of multiple points, the image may have a rough texture. Smoothness or roughness is also determined by the proportion of textons. If the textons in the image are large and of only a few types, distinct shapes may be formed. In fact, textons can be simply expressed by region correlation descriptors [19]. Five region correlation templates are presented here, as shown in Figure 2. The shaded portion of each grid indicates that those values must be the same.

The process of extracting the region color correlation pattern is shown in Figure 3. Figure 3(a) is a schematic diagram of a descriptor. The template moves from top to bottom and left to right, in steps of two pixels, over the whole quantized image. When the values of the image in the shaded cells of the template are all the same, those pixels form a color correlation region. The remaining templates are applied successively to obtain the result pattern of each template. The parts of the quantized image corresponding to the shaded cells of the five templates are retained, and the rest are left blank, yielding the region color correlation pattern shown in Figure 3(c). Its histogram is then calculated to constitute the region color correlation vector.
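The region descriptor can be sketched as follows. The exact shapes of the five 2×2 templates are assumptions inferred from Figure 2 and SED [22] (horizontal pair, vertical pair, diagonal, antidiagonal, and the full block); the sliding step of two pixels and the keep-or-blank rule follow the text above:

```python
# Assumed template shapes: coordinate sets within a 2x2 block whose
# quantized values must all be equal for the pixels to be retained.
TEMPLATES = [
    [(0, 0), (0, 1)],                    # horizontal pair
    [(0, 0), (1, 0)],                    # vertical pair
    [(0, 0), (1, 1)],                    # diagonal pair
    [(0, 1), (1, 0)],                    # antidiagonal pair
    [(0, 0), (0, 1), (1, 0), (1, 1)],    # whole 2x2 block
]

def region_pattern(C):
    """Keep pixels of the quantized image C that satisfy any template;
    everything else is left blank (None)."""
    rows, cols = len(C), len(C[0])
    P = [[None] * cols for _ in range(rows)]
    for y in range(0, rows - 1, 2):          # step of two pixels
        for x in range(0, cols - 1, 2):
            for tpl in TEMPLATES:
                vals = {C[y + dy][x + dx] for dy, dx in tpl}
                if len(vals) == 1:           # all template cells equal
                    for dy, dx in tpl:
                        P[y + dy][x + dx] = C[y + dy][x + dx]
    return P

def region_vector(P, bins=72):
    """72-bin histogram of the retained pixels: the region correlation vector."""
    hist = [0] * bins
    for row in P:
        for v in row:
            if v is not None:
                hist[v] += 1
    return hist
```

For example, on the 2×2 block [[1, 1], [2, 3]] only the horizontal template fires, so only the two pixels of value 1 are retained and counted.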

3.3. Orientation Correlation Descriptor

The orientation templates are shown in Figure 4; they detect lines of the same color in the horizontal, vertical, diagonal, and antidiagonal orientations, respectively. In other words, they detect the edge information of an image. Figure 5 shows the operation of the horizontal, vertical, diagonal, and antidiagonal descriptors from top to bottom. These templates move through the whole image from top to bottom and left to right, in steps of two pixels. When the values in the shaded cells where the image and template coincide are the same, the two pixels are color correlation pixels of that orientation. The parts corresponding to the shaded cells of the four orientation templates are retained in the quantization pattern, and the rest is left blank, as shown in Figure 5(d). Then, the quantization histogram of each orientation is counted, and the color correlation vector of the orientation is calculated. For ease of illustration, only three quantization elements are used in Figure 5; in practice, the quantized values of the image lie in [0, bins − 1]. The specific steps are as follows: (1) Construct a statistical matrix of size 4 × bins. Each row of the matrix corresponds to one orientation (horizontal, vertical, diagonal, and antidiagonal), and the number of columns equals the quantization bins. (2) In the orientation color correlation pattern, whenever a pixel pair satisfies one of the orientation descriptor conditions, add 1 to the corresponding quantization value in that orientation's row of the matrix.

(3) Calculate the mean and standard deviation of each orientation descriptor according to equations (7) and (8):

μ_i = (1 / bins) Σ_{j=0}^{bins−1} M(i, j)    (7)

σ_i = sqrt( (1 / bins) Σ_{j=0}^{bins−1} (M(i, j) − μ_i)² )    (8)

where i indexes the orientation descriptors and j indexes the quantization values of the statistical matrix M. The resulting 8-dimensional vector (μ_1, σ_1, μ_2, σ_2, μ_3, σ_3, μ_4, σ_4) is the orientation correlation vector of the image. According to the above steps, the orientation correlation vector obtained in Figure 5 is (3, 2.65, 3.33, 2.52, 2, 1, 3.67, 2.52).
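Step (3) can be sketched as follows, where `M` stands for the 4 × bins statistical matrix built in steps (1) and (2); the function name is illustrative:

```python
import math

def orientation_vector(M):
    """Summarize each orientation row of the 4 x bins matrix M by its
    mean and (population) standard deviation, per equations (7)-(8).
    Returns (mean_1, std_1, mean_2, std_2, mean_3, std_3, mean_4, std_4)."""
    vec = []
    for row in M:
        n = len(row)
        mu = sum(row) / n
        sigma = math.sqrt(sum((x - mu) ** 2 for x in row) / n)
        vec.extend([mu, sigma])
    return vec
```

For instance, with three quantization elements (as in the Figure 5 example), a row [0, 0, 3] yields mean 1 and standard deviation √2.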

3.4. Composition of Feature Vector

Objects may have the same texture while their edge characteristics differ, so the two factors complement each other and improve retrieval accuracy. The region correlation descriptor represents the texture features of an object, mainly the textures of areas inside the object, and contributes 72 dimensions. The orientation correlation descriptor represents the edge characteristics of the object; different objects usually have different edge distributions. By taking the mean and standard deviation of the colors along the horizontal, vertical, diagonal, and antidiagonal edges, the average color value and color spread in the four edge directions are expressed, so the object edge features are represented by only an 8-dimensional vector, which improves retrieval efficiency. Of the two descriptors, the region correlation descriptor contributes more, as the experiments later confirm.

In Section 4.4, experiments demonstrate that nonuniformly quantizing the HSV color space into 72 color bins suits the proposed algorithm well. The histogram of the region correlation image obtained by the region correlation descriptor therefore yields a 72-dimensional vector, and the orientation correlation descriptor yields an 8-dimensional vector. Finally, the two vectors are concatenated into a single 80-dimensional vector representing the image. Figure 6 shows two images and their CROCD feature vectors.
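A minimal sketch of the concatenation; the function name is hypothetical:

```python
def crocd_feature(region_hist, orientation_vec):
    """Concatenate the 72-D region histogram and the 8-D orientation
    vector into the final 80-D CROCD feature vector."""
    assert len(region_hist) == 72 and len(orientation_vec) == 8
    return list(region_hist) + list(orientation_vec)
```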

4. Experimental Results

4.1. Experimental Dataset

For the purpose of experimentation and verification, experiments are conducted on the benchmark Corel-1K, Corel-5K, and Corel-10K datasets. (1) The Corel-1K dataset (shown in Figure 7(a)) contains 10 categories (African people, beaches, buildings, buses, dinosaurs, elephants, flowers, horses, mountains, and food), with 100 images per category and 1,000 images in total. (2) The Corel-5K dataset (shown in Figure 7(b)) contains 50 categories, including lions, bears, vegetables, women, castles, and fireworks, with 100 images per category and 5,000 images in total. (3) The Corel-10K dataset (shown in Figure 7(c)) contains 100 categories, including flags, stamps, ships, motorcycles, sailboats, airplanes, and furniture, with 100 images per category and 10,000 images in total. In this section, we evaluate the performance of our method on these Corel datasets.

4.2. Performance Evaluation Metrics

The performance of an image retrieval system is normally measured using precision and recall over the top N retrieved images, defined by formulas (9) and (10), respectively:

P = n / N    (9)

R = n / M    (10)

where n is the number of relevant images retrieved in the top N positions and M is the total number of images in the dataset that are similar to the query image. Precision describes the accuracy of a query, and recall describes its comprehensiveness; the higher both are, the better the algorithm performs. Precision and recall are the most widely used criteria for evaluating query algorithms.

In these experiments, we randomly select 10 images from each category; that is, 100, 500, and 1,000 images are selected randomly from the three datasets, respectively, as query images.
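The two metrics above can be computed as follows; function and parameter names are illustrative (in the Corel sets, M = 100 per category):

```python
def precision_recall(retrieved, relevant, top_n):
    """Precision = n / N and Recall = n / M per formulas (9) and (10).

    retrieved: ranked list of returned image ids.
    relevant:  set of ids in the dataset similar to the query (|relevant| = M).
    top_n:     N, the number of top positions considered.
    """
    hits = sum(1 for img in retrieved[:top_n] if img in relevant)
    return hits / top_n, hits / len(relevant)
```

For example, if 5 of the top 10 returned images are among 5 relevant ones, precision is 0.5 and recall is 1.0.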

4.3. Similarity Measure

In a content-based image retrieval system, the retrieval precision and recall depend not only on the extracted features but also on the similarity measurement, so choosing an appropriate measure for our algorithm is a key step. In this experiment, we compared several common similarity criteria: the Euclidean, L1, weighted L1, Canberra, and χ² distances.

Given two feature vectors T = (t_1, t_2, ..., t_L) and Q = (q_1, q_2, ..., q_L) extracted from images, the similarity measures can be expressed as

Euclidean:    D(T, Q) = sqrt( Σ_{i=1}^{L} (t_i − q_i)² )

L1:           D(T, Q) = Σ_{i=1}^{L} |t_i − q_i|

Weighted L1:  D(T, Q) = Σ_{i=1}^{L} |t_i − q_i| / (1 + t_i + q_i)

Canberra:     D(T, Q) = Σ_{i=1}^{L} |t_i − q_i| / (|t_i| + |q_i|)

χ²:           D(T, Q) = Σ_{i=1}^{L} (t_i − q_i)² / (t_i + q_i)
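The five measures compared in Table 1 might be implemented as follows; the 1/(1 + t_i + q_i) weighting is the common CBIR form and is an assumption here, and zero-denominator terms are skipped as is conventional for Canberra and χ²:

```python
import math

def euclidean(T, Q):
    return math.sqrt(sum((t - q) ** 2 for t, q in zip(T, Q)))

def l1(T, Q):
    return sum(abs(t - q) for t, q in zip(T, Q))

def weighted_l1(T, Q):
    # Assumed weighting; smaller distance means more similar images.
    return sum(abs(t - q) / (1 + t + q) for t, q in zip(T, Q))

def canberra(T, Q):
    return sum(abs(t - q) / (abs(t) + abs(q)) for t, q in zip(T, Q) if t or q)

def chi_square(T, Q):
    return sum((t - q) ** 2 / (t + q) for t, q in zip(T, Q) if t + q)
```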

The distance is calculated according to the above formulas and sorted from smallest to largest: the smaller the value, the more similar the two images. Table 1 shows the comparison of the different distance measures. The test dataset is Corel-1K, and precision and recall are recorded as the number of returned images ranges from 10 to 30. It can be seen that the commonly used Euclidean distance performs poorly, while the weighted L1 distance is the best.


Similarity measure   Precision (%)                      Recall (%)
                     10     15     20     25     30     10     15     20     25     30

Weighted L1          83.20  78.07  74.60  71.96  69.80  8.32   11.71  14.92  17.99  20.94
L1                   75.80  71.93  69.00  66.24  64.10  7.58   10.79  13.80  16.56  19.23
Euclidean            68.40  64.60  61.70  59.32  57.70  6.84   9.69   12.34  14.83  17.31
Canberra             80.90  76.07  72.15  69.00  66.54  8.09   11.41  14.43  17.25  19.96
χ²                   77.50  73.60  70.85  68.80  66.67  7.75   11.04  14.17  17.20  20.00

The best retrieval results are shown in bold; the weighted L1 measure performs best under every condition.
4.4. Retrieval Performance

Different color spaces and quantization methods are both used to evaluate the performance of the proposed algorithm. Experimental results reveal why the HSV space and nonuniform quantization are chosen.

The average precision and recall for the HSV, RGB, and Lab spaces are shown in Table 2, with the number of returned images ranging from 10 to 30. When the color quantization is increased from 45 to 225 bins in the Lab color space, the precision and recall of the proposed method both increase on the whole; the same trend holds in the other two color spaces. On the other hand, finer quantization also amplifies noise; thus, the precision and recall of the proposed method both decrease at 225 bins in the Lab color space. The highest top-10 precision is 79.2% in the RGB space and 71.5% in the Lab space. The best results occur in the HSV space, where top-10 precision ranges from 78.7% to 83.2%. Since the precision of uniform quantization never exceeds 81%, we chose the 72-bin nonuniform quantization in the HSV space.


Color space  Bins              Precision (%)                      Recall (%)
                               10     15     20     25     30     10     15     20     25     30

HSV          72 (nonuniform)   83.20  78.07  74.60  71.96  69.80  8.320  11.71  14.92  17.99  20.94
             72                80.60  77.60  73.75  70.56  68.77  8.060  11.64  14.75  17.64  20.63
             108               79.50  74.73  71.45  68.80  67.07  7.950  11.21  14.29  17.20  20.12
             128               81.00  75.80  72.90  70.40  68.03  8.100  11.37  14.58  17.60  20.41
             192               78.70  73.93  71.15  69.24  67.27  7.870  11.09  14.23  17.31  20.18

RGB          16                72.90  68.13  65.10  62.20  59.70  7.290  10.22  13.02  15.55  17.91
             32                79.20  74.74  71.55  68.48  66.43  7.920  11.21  14.31  17.12  19.93
             64                77.90  73.93  70.25  67.48  65.20  7.790  11.09  14.05  16.87  19.56
             128               78.50  74.47  70.60  67.92  65.57  7.850  11.17  14.12  16.98  19.67

Lab          45                64.50  60.80  57.00  54.60  51.67  6.450  9.120  11.40  13.65  15.50
             90                71.50  65.47  62.10  59.24  56.67  7.150  9.820  12.42  14.81  17.00
             180               69.70  65.33  61.90  59.64  57.13  6.970  9.800  12.38  14.91  17.14
             225               69.80  65.13  62.30  59.52  57.10  6.980  9.770  12.46  14.88  17.13

The best retrieval results are shown in bold; the 72-bin nonuniform quantization in the HSV space performs best.

To test the proposed algorithm, we compared it with CDH [19], SED [22], BBC [23], DCT+GLCM [24], TCM [27], and MTH [28] on Corel-1K, comparing the retrieval precision and recall of the 10 categories when the top 15 images are retrieved, as shown in Table 3. The proposed method is the best in five of the ten classes, and its average precision and recall are clearly higher than those of the other algorithms.


Category   Precision (%)
           CDH     SED     BBC     DCT+GLCM  MTH     TCM     CROCD
African    78.67   78.67   72.67   60.00     72.67   76.00   84.67
Beach      46.67   33.33   42.67   46.00     40.67   58.00   44.67
Building   68.67   78.00   63.33   48.00     76.67   57.33   81.33
Bus        77.33   88.67   84.00   71.33     80.67   86.67   90.67
Dinosaur   96.00   99.33   96.00   97.33     98.00   96.00   99.33
Elephant   56.00   56.67   46.67   52.00     70.00   48.00   62.00
Flower     90.67   97.33   98.00   94.00     84.00   88.67   94.67
Horse      68.67   83.33   88.00   96.00     90.00   78.00   93.33
Mountain   22.67   46.00   54.67   46.00     52.00   46.67   48.00
Food       78.00   66.67   79.33   54.00     73.33   75.33   82.00
Average    68.33   72.80   72.53   66.47     73.80   71.07   78.07

Category   Recall (%)
           CDH     SED     BBC     DCT+GLCM  MTH     TCM     CROCD
African    11.80   11.80   10.90   9.00      10.90   11.40   12.70
Beach      7.00    5.00    6.40    6.90      6.10    8.70    6.70
Building   10.30   11.70   9.50    7.20      11.50   8.60    12.20
Bus        11.60   13.30   12.60   10.70     12.10   13.00   13.60
Dinosaur   14.40   14.90   14.40   14.60     14.70   14.40   14.90
Elephant   8.40    8.50    7.00    7.80      10.50   7.20    9.30
Flower     13.60   14.60   14.70   14.10     12.60   13.30   14.20
Horse      10.30   12.50   13.20   14.40     13.50   11.70   14.00
Mountain   3.40    6.90    8.20    6.90      7.80    7.00    7.20
Food       11.70   10.00   11.90   8.10      11.00   11.30   12.30
Average    10.25   10.92   10.88   9.97      11.07   10.66   11.71

The best retrieval results are shown in bold.

In addition, the average precision and recall curves of the proposed algorithm and the other algorithms on the Corel-1K dataset are shown in Figure 8. When the top 15 images are retrieved, the average precision of the proposed algorithm improves on DCT+GLCM, CDH, TCM, BBC, SED, and MTH by 11.6%, 9.74%, 7%, 5.54%, 5.27%, and 4.27%, respectively. Moreover, the area enclosed by the P-R curve of the proposed algorithm is the largest. Therefore, the precision and recall of the proposed algorithm are higher than those of the other six algorithms, and the method is more robust.

To illustrate the universality of the algorithm, the precision and recall of the proposed and other algorithms on the Corel-5K and Corel-10K datasets are shown in Tables 4 and 5, respectively. On Corel-5K and Corel-10K, the top-10 precision of the proposed method is 60.20% and 50.02%, respectively, superior to the other six algorithms. For an intuitive view, Figure 9 shows the P-R curves of the seven algorithms; it can also be seen from the figure that the algorithm proposed in this paper performs best.


Methods     Precision (%)                      Recall (%)
            10     15     20     25     30     10     15     20     25     30

MTH         54.26  48.85  45.33  42.66  40.37  5.43   7.33   9.07   10.66  12.11
TCM         54.71  49.34  46.04  43.35  40.98  5.47   7.40   9.21   10.84  12.29
CDH         52.52  47.76  44.48  42.16  40.13  5.25   7.16   8.90   10.54  12.04
SED         58.84  52.91  49.29  46.20  43.71  5.88   7.94   9.86   11.55  13.11
BBC         58.40  52.51  48.78  45.91  43.79  5.84   7.88   9.76   11.48  13.14
DCT+GLCM    54.32  49.43  45.75  42.30  39.33  5.43   7.41   9.15   10.57  11.80
CROCD       60.20  54.60  50.75  47.83  45.23  6.02   8.19   10.15  11.96  13.57

The best retrieval results are shown in bold.

Methods     Precision (%)                      Recall (%)
            10     15     20     25     30     10     15     20     25     30

MTH         43.48  37.97  34.77  32.11  30.15  4.35   5.70   6.95   8.03   9.04
TCM         44.27  39.21  35.88  33.38  31.49  4.43   5.88   7.18   8.35   9.45
CDH         43.71  38.27  34.86  32.53  30.55  4.37   5.74   6.97   8.13   9.17
SED         49.08  42.89  39.09  36.20  33.93  4.91   6.43   7.82   9.05   10.18
BBC         47.05  41.35  37.80  35.09  33.13  4.71   6.20   7.56   8.77   9.94
DCT+GLCM    45.20  39.99  36.39  33.12  30.68  4.52   6.00   7.28   8.28   9.20
CROCD       50.02  44.51  40.67  37.86  35.66  5.00   6.68   8.13   9.46   10.70

The best retrieval results are shown in bold.

The region correlation descriptor (RCD) and the orientation correlation descriptor (OCD) in the CROCD algorithm contribute differently to the retrieval results. Table 6 shows the retrieval results of the region correlation vector, the orientation correlation vector, and their combination (CROCD) on Corel-1K, Corel-5K, and Corel-10K when 15 images are returned. On Corel-1K, the precision of RCD and OCD is 71.42% and 38.54%, respectively; their combination, CROCD, reaches 78.07%, an increase of 6.65%. On Corel-5K and Corel-10K, the precision of CROCD increases by 5.49% and 5.43%, respectively, over the better of RCD and OCD. Thus, of the two vectors, the region correlation vector makes the major contribution to the final retrieval result. The orientation correlation vector alone does not perform well, but combined with the region correlation vector, the proposed algorithm outperforms other state-of-the-art retrieval methods. For an intuitive display, the contents of Table 6 are plotted in Figure 10.


Methods   Precision (%)                   Recall (%)
          Corel-1K  Corel-5K  Corel-10K   Corel-1K  Corel-5K  Corel-10K

OCD       38.54     19.04     14.89       5.78      2.86      2.23
RCD       71.42     49.11     39.08       10.71     7.37      5.86
CROCD     78.07     54.60     44.51       11.71     8.19      6.68

The best retrieval results are shown in bold.

Figure 11 shows four images retrieved by CROCD from the Corel-10K dataset, listing the first 30 returned images ranked by similarity to the query. All of the first 30 images returned for the tree branch (Figure 11(a)) and dinosaur (Figure 11(b)) queries are relevant. Of course, not every query image in these two categories performs this well, but it shows that the proposed algorithm excels for objects with distinctive color and texture against similar backgrounds. Of the 30 images returned for the snow mountain category (Figure 11(c)), 27 are correct; the three incorrect images (enclosed by rectangular boxes) are billow images whose colors and textures resemble snow mountains. The machinery category (Figure 11(d)) also yields 27 correct returns; the three incorrectly returned images (enclosed by rectangular boxes) have textures and colors similar to the query image.

4.5. Computational Complexity

The complexity of the proposed algorithm consists of the amount of calculation required to complete a retrieval, which is divided into three parts: feature extraction for the query and database images, similarity measurement, and ranking and retrieval.

As for feature extraction, the calculation amount of extracting the region correlation features is about 5WH (five templates), and that of extracting the orientation correlation features is about 4WH (four templates), for a total of 9WH, which is O(WH), where W and H are the width and height of the image. bins denotes the dimension of the image color quantization space, and n represents the total number of images in the dataset.

As for similarity measurement, the weighted L1 criterion is adopted; the calculation amount is nL, that is, of order O(n), where L = 80 is the dimension of the feature vector.

As for sorting and searching, the quick sort method is used; the calculation amount for sorting and searching the relevant images from the dataset is O(n log n) [24].

The total amount of calculation is therefore O(WH) + O(nL) + O(n log n), i.e., O(WH + nL + n log n).
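The retrieval stage can be illustrated with a sketch matching the analysis above: one O(nL) distance pass over the database followed by an O(n log n) sort. Function and variable names are illustrative; `weighted_l1` stands for the measure of Section 4.3 with the common 1/(1 + t + q) weighting assumed:

```python
def weighted_l1(T, Q):
    """Weighted L1 distance between two feature vectors (assumed form)."""
    return sum(abs(t - q) / (1 + t + q) for t, q in zip(T, Q))

def retrieve(query_vec, database, top_n=30):
    """database: list of (image_id, feature_vector) pairs.
    Returns the ids of the top_n most similar images."""
    scored = [(weighted_l1(query_vec, vec), img_id)      # O(n * L)
              for img_id, vec in database]
    scored.sort()                                        # O(n log n)
    return [img_id for _, img_id in scored[:top_n]]
```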

The speed of extracting images similar to the query depends on the length of the image's feature vector: a lengthy feature vector takes more time when computing the differences between the query image and the database images. The feature vector lengths of the proposed and other methods are compared in Table 7 for speed evaluation, along with the feature extraction time for one image for all methods. These experiments are conducted on the Corel-10K dataset with Matlab R2016b on a Windows 10 machine equipped with an Intel i7-9700 CPU at 3.0 GHz and 16 GB of RAM.


Method      Feature vector length   Feature extraction time (s)   Image retrieval time (s)   Total time (s)

MTH         -                       0.1955                        3.2475                     3.443
TCM         -                       0.4728                        3.2521                     3.725
CDH         -                       0.3330                        3.2980                     3.631
SED         -                       0.1419                        3.3106                     3.453
BBC         -                       0.5949                        3.2679                     3.863
DCT+GLCM    -                       0.2635                        3.2267                     3.490
CROCD       80                      0.1627                        3.2472                     3.410

The best retrieval results are shown in bold.

As demonstrated in the table, the proposed method is slightly slower than SED but faster than the other methods at feature extraction. The feature vector of the proposed method is slightly longer than that of DCT+GLCM but shorter than those of the other methods. Moreover, the proposed method outperforms the other methods in accuracy, as shown on the different datasets.

5. Conclusions

In this paper, an effective approach for color, texture, and edge image retrieval is proposed. Firstly, the color image is quantized into 72 bins, and the region color correlation pattern is calculated using the region descriptor. The orientation color correlation pattern, which reflects the edges of objects in an image, is obtained using the orientation descriptor. Furthermore, the color correlation histograms of the four orientations are computed from the correlation pattern, and the orientation color correlation vector is calculated. The feature vector of the image is obtained by combining the region and orientation vectors. Finally, the similarity ranking for a query image is obtained by similarity comparison. Experiments show that the proposed method balances high speed and high precision better than comparable algorithms. It is often difficult to extract a single closed shape from a natural image, but natural images contain many partial contours of objects. If an efficient description of partial contours could extract this feature information and be integrated with the proposed method, retrieval performance should improve further. The next step is to combine color, texture, and shape features for retrieval, so as to further improve the retrieval results. Besides, voting-based scoring, ranking on manifolds [31], or other ranking methods [32] will be explored instead of purely distance-based measurement criteria.

Data Availability

Data are available on request. Please contact Guangyi Xie to request the data.

Conflicts of Interest

The authors declare no conflict of interest.

Authors’ Contributions

All the authors contributed to this study. G.X. performed the conceptualization, writing of the original draft, and editing; Z.H. did the investigation and designed the network and experiments; Y.Z. analyzed the data and investigation; B.G. and Y.Y. contributed to funding acquisition, project administration, and instruction.

Acknowledgments

This research is supported financially by the National Natural Science Foundation of China (Grant Nos. 61571346 and 61671357).

References

  1. W. Zhou, H. Li, J. Sun, and Q. Tian, “Collaborative index embedding for image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 5, pp. 1154–1166, 2018.
  2. C. Iakovidou, N. Anagnostopoulos, M. Lux, K. Christodoulou, Y. Boutalis, and S. A. Chatzichristofis, “Composite description based on salient contours and color information for CBIR tasks,” IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 3115–3129, 2019.
  3. Z. Shabbir, A. Irtaza, A. Javed, and M. T. Mahmood, “Tetragonal local octa-pattern (T-LOP) based image retrieval using genetically optimized support vector machines,” Multimedia Tools and Applications, vol. 78, no. 16, pp. 23617–23638, 2019.
  4. Y. Zheng, B. Guo, Y. Yan, and W. He, “O2O method for fast 2D shape retrieval,” IEEE Transactions on Image Processing, vol. 28, no. 11, pp. 1–5378, 2019.
  5. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  6. M. Verma and B. Raman, “Local tri-directional patterns: a new texture feature descriptor for image retrieval,” Digital Signal Processing, vol. 51, pp. 62–72, 2016.
  7. G. M. Galshetwar, L. M. Waghmare, A. B. Gonde, and S. Murala, “Local energy oriented pattern for image indexing and retrieval,” Journal of Visual Communication and Image Representation, vol. 64, article 102615, 2019.
  8. A. B. Gonde, S. Murala, S. K. Vipparthi, R. Maheshwari, and R. Balasubramanian, “3D local transform patterns: a new feature descriptor for image retrieval,” in Proceedings of International Conference on Computer Vision and Image Processing, CVIP 2016, pp. 495–507, Roorkee, India, February 2016.
  9. K. Zhang, F. Zhang, J. Lu, Y. Lu, J. Kong, and M. Zhang, “Local structure co-occurrence pattern for image retrieval,” Journal of Electronic Imaging, vol. 25, no. 2, article 023030, 2016.
  10. M. Verma and B. Raman, “Local neighborhood difference pattern: a new feature descriptor for natural and texture image retrieval,” Multimedia Tools and Applications, vol. 77, no. 10, pp. 11843–11866, 2018.
  11. J. Huang, S. R. Kumar, M. Mitra, W. J. Zhu, and R. Zabih, “Image indexing using color correlograms,” in Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 762–768, San Juan, PR, USA, February 1997.
  12. G. Pass, R. Zabih, and J. Miller, “Comparing images using color coherence vectors,” in Proceedings of the 1996 4th ACM International Multimedia Conference, pp. 65–73, Boston, MA, USA, November 1996.
  13. R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 6, pp. 610–621, 1973.
  14. D. Srivastava, B. Rajitha, S. Agarwal, and S. Singh, “Pattern-based image retrieval using GLCM,” Neural Computing and Applications, pp. 1–14, 2018.
  15. Z. Mehmood, T. Mahmood, and M. A. Javid, “Content-based image retrieval and semantic automatic image annotation based on the weighted average of triangular histograms using support vector machine,” Applied Intelligence, vol. 48, no. 1, pp. 166–181, 2018.
  16. L. K. Pavithra and T. S. Sharmila, “An efficient framework for image retrieval using color, texture and edge features,” Computers & Electrical Engineering, vol. 70, pp. 580–593, 2018.
  17. S. Fadaei, R. Amirfattahi, and M. R. Ahmadzadeh, “New content-based image retrieval system based on optimised integration of DCD, wavelet and curvelet features,” IET Image Processing, vol. 11, no. 2, pp. 89–98, 2017.
  18. C. Reta, J. A. Cantoral-Ceballos, I. Solis-Moreno, J. A. Gonzalez, R. Alvarez-Vargas, and N. Delgadillo-Checa, “Color uniformity descriptor: an efficient contextual color representation for image indexing and retrieval,” Journal of Visual Communication and Image Representation, vol. 54, pp. 39–50, 2018.
  19. G. H. Liu and J. Y. Yang, “Content-based image retrieval using color difference histogram,” Pattern Recognition, vol. 46, no. 1, pp. 188–198, 2013.
  20. W. Song, Y. Zhang, F. Liu et al., “Taking advantage of multi-regions-based diagonal texture structure descriptor for image retrieval,” Expert Systems with Applications, vol. 96, pp. 347–357, 2018.
  21. L. Feng, J. Wu, S. Liu, and H. Zhang, “Global correlation descriptor: a novel image representation for image retrieval,” Journal of Visual Communication and Image Representation, vol. 33, pp. 104–114, 2015.
  22. X. Wang and Z. Wang, “A novel method for image retrieval based on structure elements’ descriptor,” Journal of Visual Communication and Image Representation, vol. 24, no. 1, pp. 63–74, 2013.
  23. C. Singh and K. Preet Kaur, “A fast and efficient image retrieval system based on color and texture features,” Journal of Visual Communication and Image Representation, vol. 41, pp. 225–238, 2016.
  24. N. Varish and A. K. Pal, “A novel image retrieval scheme using gray level co-occurrence matrix descriptors of discrete cosine transform based residual image,” Applied Intelligence, vol. 48, no. 9, pp. 2930–2953, 2018.
  25. M. Verma, B. Raman, and S. Murala, “Local extrema co-occurrence pattern for color and texture image retrieval,” Neurocomputing, vol. 165, pp. 255–269, 2015.
  26. B. Julesz, “Textons, the elements of texture perception, and their interactions,” Nature, vol. 290, no. 5802, pp. 91–97, 1981.
  27. A. B. Gonde, R. P. Maheshwari, and R. Balasubramanian, “Texton co-occurrence matrix: a new feature for image retrieval,” in Proceedings of the 2010 Annual IEEE India Conference: Green Energy, Computing and Communication, pp. 1–5, Kolkata, 2010.
  28. G. H. Liu, L. Zhang, Y. K. Hou, Z. Y. Li, and J. Y. Yang, “Image retrieval based on multi-texton histogram,” Pattern Recognition, vol. 43, no. 7, pp. 2380–2389, 2010.
  29. A. Raza, H. Dawood, H. Dawood, S. Shabbir, R. Mehboob, and A. Banjar, “Correlated primary visual texton histogram features for content base image retrieval,” IEEE Access, vol. 6, pp. 46595–46616, 2018.
  30. A. Raza, T. Nawaz, H. Dawood, and H. Dawood, “Square texton histogram features for image retrieval,” Multimedia Tools and Applications, vol. 78, no. 3, pp. 2719–2746, 2019.
  31. S. Liu, J. Wu, L. Feng et al., “Perceptual uniform descriptor and ranking on manifold for image retrieval,” Information Sciences, vol. 424, pp. 235–249, 2018.
  32. Z. Liu, S. Wang, L. Zheng, and Q. Tian, “Robust ImageGraph: rank-level feature fusion for image search,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3128–3141, 2017.

Copyright © 2020 Guangyi Xie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

