Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2016 |Article ID 8179670 | 16 pages | https://doi.org/10.1155/2016/8179670

A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm

Academic Editor: Michael Schmuker
Received: 20 Aug 2016
Revised: 09 Nov 2016
Accepted: 15 Nov 2016
Published: 19 Dec 2016

Abstract

Texture image classification is an important topic in many machine vision and image analysis applications. Extracting texture features from the original image with a “Tuned” mask is one of the simplest and most effective methods. However, hill-climbing-based training cannot reliably acquire a satisfactory mask in a single run, while commonly used evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) easily fall into local optima. This paper details a novel approach to texture image classification, exemplified by the recognition of residential areas. In the proposed approach, learning the “Tuned” mask is viewed as a constrained optimization problem, and the optimal mask is acquired by maximizing the texture energy via the recently proposed gravitational search algorithm (GSA); the optimal “Tuned” mask is obtained upon convergence of GSA. The proposed approach is tested on public texture images and on remote sensing images, and the results are compared with those of GA, PSO, honey-bee mating optimization (HBMO), and the artificial immune algorithm (AIA). Features extracted by Gabor wavelets are also used for a further comparison. Experimental results show that the proposed method is robust and adaptive and outperforms the other methods considered in terms of fitness value and classification accuracy.

1. Introduction

Texture [1] is an important characteristic of the appearance of objects in natural scenes and a powerful visual cue, used by both humans and machines to describe and recognize objects in the real world. Texture image classification [2], the task of identifying a texture sample as one of several possible classes with a reliable classifier, is a vital topic in machine vision and image analysis and plays an important role in a wide range of applications. Real-world textures vary with orientation, scale, and other aspects of visual appearance; as a result, many texture feature extraction and classification methods have been proposed over the years. For instance, Xu et al. [3] developed dynamic fractal analysis for dynamic texture (DT) classification, which provides a rich description of DT and strong robustness to environmental changes. Liu et al. [4] presented a simple yet powerful approach to robust rotation-invariant texture classification based on random projection, which is computationally efficient and low-dimensional. Celik and Tjahjadi [5] proposed a supervised multiscale Bayesian texture classifier that obtains complex-valued multiscale representations of training samples for each texture class. Zhang et al. [6] used normalized local-oriented energies to generate local feature vectors that describe local structures distinctively and are less sensitive to imaging conditions. Thakare and Patil [7] presented an improved method for texture image classification and retrieval using the gray level co-occurrence matrix (GLCM) and self-organizing maps (SOM). Riaz et al. [8] and Li et al. [9] introduced rotation- and scale-invariant texture classification techniques based on Gabor wavelet features, which can collapse the filter responses according to the scale and orientation of the texture. Liu et al. [10] and Zhao et al. [11] presented approaches to texture classification that generalize the well-known local binary pattern (LBP); their experiments showed robustness to noise and impressive classification accuracy. Gai et al. [12] and Soulard and Carré [13] studied wavelet transforms (WT) with one shift-invariant magnitude and three angle phases at each scale for texture image analysis; their experiments demonstrated robustness and satisfactory accuracy. Texture is an especially significant cue for remote sensing image classification. For instance, residential areas are among the most important landscape elements, and their extraction from remote sensing images has become the favored technique for monitoring urban expansion and the environment, which is significant for regional sustainable development. Several studies have focused on residential area recognition by texture features: residential area information was extracted from airborne SAR aided by GLCM texture features [14]; Wang et al. [15] proposed a Gabor-filtering-based method to recognize residential areas in remotely sensed imagery; Jin et al. [16] presented a residential area recognition method based on the Fourier and Hough transforms; and Shi et al. [17] proposed an extended oscillatory correlation algorithm for unsupervised scene recognition of residential areas in hyperspectral imagery.
Experiments demonstrated the utility of these methods for residential area recognition. However, many traditional techniques require numerous features to complete the texture classification task, which costs considerable CPU time to extract, and the excessive features also decrease classification efficiency. Although some methods need only a few features, they have difficulty stably achieving high classification accuracy.

In order to extract texture features efficiently and effectively, texture classification techniques based on texture masks have drawn considerable interest in recent years [18]. Among them, Laws’ mask [19] is one of the most commonly used for classifying different types of texture. However, the basic form of Laws’ mask is relatively fixed, and a fixed mask has difficulty adapting to various types of texture [20]. Thus, You and Cohen [21] developed an adaptive texture feature extraction method called the “Tuned” mask, invariant to changes in the rotation and scale of the texture image, and proved its validity. To obtain the optimal texture mask, they used a search strategy combining gradient estimation and random search with heuristic learning, which may lead to high time complexity and can become trapped in local optima [21].

In essence, obtaining the optimal texture mask is a combinatorial optimization problem that can be handled by evolutionary and swarm intelligence algorithms. For instance, Zheng et al. [22] proposed a mask approach optimized by the artificial immune algorithm (AIA) to detect texture objects in satellite images. H. Zheng and Z. Zheng [23] employed a genetic algorithm (GA) guided search to obtain the optimal “Tuned” mask and produced rather good results. Ye et al. [24] explained the principle and steps of producing the texture “Tuned” mask with particle swarm optimization (PSO) and illustrated in detail how to train the mask with the proposed method. Zheng [25] introduced a honey-bee model and provided a new method of producing a better “Tuned” mask with honey-bee mating optimization (HBMO), applied to texture classification of aerial images; the experiments showed that the method improved both mask quality and classification accuracy. In short, AIA, GA, PSO, and HBMO can obtain good “Tuned” masks; however, this is a very hard, high-dimensional optimization problem in which each dimension may take a real value over a wide continuous range, so the algorithms above cannot guarantee the optimal solution, and it is worth trying more evolutionary and swarm-intelligence-based algorithms on this topic.

The gravitational search algorithm (GSA) [26] is a recently proposed stochastic global search algorithm that has been widely used in diverse applications. For example, Yazdani et al. [27] utilized GSA to find multiple solutions to multimodal problems. Kumar and Sahoo [28] presented a compendious survey of GSA and its applications and highlighted its applicability to data clustering and classification. Duman et al. [29] used GSA to solve the optimal power flow (OPF) problem in a power system. In the field of classification, GSA was used to build a prototype classifier for instances in multiclass datasets [30]. Sarafrazi and Nezamabadi-pour [31] hybridized GSA with the support vector machine (SVM) into a GSA-SVM system to improve classification accuracy in binary problems. Further, several variants and modifications of GSA exist: Rashedi et al. [32] proposed a binary-coded GSA (BGSA) and applied it to benchmark functions; a modified GSA with a moving strategy was used for the path planning of an uninhabited aerial vehicle (UAV) [33]; and Li and Duan [34] proposed a chaotic GSA (CGSA) for the parameter identification of chaotic systems, which performed better than the standard GSA. Nevertheless, the standard GSA remains by far the most widely used, and learning the optimal “Tuned” mask is a combinatorial optimization problem that GSA can solve. Hence, this paper proposes a novel residential area recognition technique combining the “Tuned” mask with the standard GSA.

The rest of this paper is structured as follows. Section 2 illustrates the basic principle of the gravitational search algorithm. The proposed approach to producing the optimal “Tuned” mask is detailed in Section 3. Section 4 presents the experimental results and discussion. Finally, Section 5 concludes the paper.

2. The Basic Principle of Gravitational Search Algorithm

In 2009, Rashedi et al. developed a new swarm intelligence algorithm, the gravitational search algorithm (GSA), based on the Newtonian law of gravity and mass interactions, which has great potential for solving combinatorial optimization problems [26]. In this algorithm, agents are considered objects and their performance is evaluated by their masses; each object is a solution to the problem. Objects attract each other through the gravity force, which causes a global movement of all objects toward those with heavier masses [35]. Because heavier masses correspond to better solutions, they are more likely to lead to the optimal solution, and they move more slowly than lighter masses, which represent worse solutions. In GSA, each mass has four attributes: position, inertial mass, active gravitational mass, and passive gravitational mass [35]. The position represents a solution of the problem, and the gravitational and inertial masses are determined by a fitness function.

Assume a system with N agents (objects); the position of the i-th agent is defined as

X_i = (x_i^1, ..., x_i^d, ..., x_i^n),  i = 1, 2, ..., N, (1)

where x_i^d represents the position of the i-th object in the d-th dimension and n is the dimension of the search space. According to the theory of GSA, the gravitational force acting on object i from object j at iteration t is defined by

F_ij^d(t) = G(t) · (M_pi(t) · M_aj(t)) / (R_ij(t) + ε) · (x_j^d(t) − x_i^d(t)), (2)

where M_aj(t) is the active gravitational mass of object j, M_pi(t) is the passive gravitational mass of object i, ε is a small constant, R_ij(t) is the Euclidean distance between objects i and j at iteration t, and G(t) is the gravitational variable at iteration t, defined as

G(t) = G_0 · e^(−α t / T), (3)

where G_0 is the initial value of G, α is a manually set constant, t is the current iteration, and T is the maximum iteration number.

Moreover, the total gravitational force acting on the i-th object in dimension d is a randomly weighted sum of the forces from the other objects, computed as

F_i^d(t) = Σ_{j ∈ Kbest, j ≠ i} rand_j · F_ij^d(t), (4)

where rand_j is a uniform random variable in the interval [0, 1] and Kbest is the set of the first K agents with the best fitness values and biggest masses; K is a function of time, initialized to K_0 = N at the beginning and decreased with iterations.

By the law of motion, the acceleration of the i-th object in dimension d at iteration t is calculated as

a_i^d(t) = F_i^d(t) / M_ii(t), (5)

where M_ii(t) is the inertial mass of the i-th object.

The velocity of an object at iteration t + 1 is the sum of a fraction of its velocity and its acceleration at iteration t. Therefore, the new velocity and position at iteration t + 1 are calculated as

v_i^d(t + 1) = rand_i · v_i^d(t) + a_i^d(t),
x_i^d(t + 1) = x_i^d(t) + v_i^d(t + 1), (6)

where rand_i is a random number in the interval [0, 1].

Gravitational and inertial masses are simply computed from the fitness value. A heavier mass corresponds to a better solution, meaning that the better object attracts more strongly and moves more slowly. Assuming the gravitational and inertial masses are equal, the values of the masses are calculated from the fitness map and updated by the following equations:

M_ai = M_pi = M_ii = M_i,  i = 1, 2, ..., N,
m_i(t) = (fit_i(t) − worst(t)) / (best(t) − worst(t)),
M_i(t) = m_i(t) / Σ_{j=1}^{N} m_j(t), (7)

where fit_i(t) is the fitness value of object i at iteration t, and best(t) and worst(t) are defined as follows (for a minimization problem):

best(t) = min_{j ∈ {1, ..., N}} fit_j(t), (8)
worst(t) = max_{j ∈ {1, ..., N}} fit_j(t). (9)

It is clear that, for a maximization problem, (8) and (9) are replaced by (10) and (11), respectively:

best(t) = max_{j ∈ {1, ..., N}} fit_j(t), (10)
worst(t) = min_{j ∈ {1, ..., N}} fit_j(t). (11)

As GSA is applied to a combinatorial optimization problem, each object occupies a position in the search space representing a candidate solution at each iteration. The objects are then updated, and the next positions and velocities are calculated by (6). The other parameters of GSA, namely the gravitational variable G(t), the active gravitational mass M_a, the passive gravitational mass M_p, the inertial mass M_i, and the acceleration a_i^d, are computed by their respective equations above. The basic procedure of GSA is described in Pseudocode 1 [26].

Begin
  Generate initial population with N objects
  While (the current iteration t < the maximum iteration T)
    Compute the fitness value of each object by the objective function
    Update the gravitational variable G(t), and best(t) and worst(t) of the population
    Calculate the active gravitational mass M_a, the passive gravitational mass M_p, the inertial mass M_i, and the acceleration a_i^d for each object
    Update the velocity and position of each object by using (6)
    If (the fitness value of the current position is better)
      Replace the object by the new position
    End if
  End while
  Post-process results and visualization
End
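The steps above can be sketched as a short program. This is an illustrative, unofficial re-implementation of the standard GSA on a toy minimization problem; the sphere objective, bounds, and the parameter values (n_agents, g0, alpha, max_iter) are assumptions for demonstration only, not the settings used in the paper's experiments.

```python
import numpy as np

def gsa_minimize(objective, dim=2, n_agents=20, max_iter=100,
                 g0=100.0, alpha=10.0, lo=-5.0, hi=5.0, seed=0):
    """Minimal sketch of standard GSA for minimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_agents, dim))   # positions, eq. (1)
    v = np.zeros_like(x)                            # velocities
    eps = 1e-12
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        fit = np.array([objective(xi) for xi in x])
        i_best = int(fit.argmin())
        if fit[i_best] < best_f:
            best_f, best_x = float(fit[i_best]), x[i_best].copy()
        # Masses from fitness (minimization): the best agent is heaviest
        worst, best = fit.max(), fit.min()
        m = (fit - worst) / (best - worst + eps)
        M = m / (m.sum() + eps)
        # Gravitational variable decays with iteration: G(t) = G0 * e^(-alpha t / T)
        G = g0 * np.exp(-alpha * t / max_iter)
        # Kbest shrinks linearly from N agents to 1
        k = max(1, round(n_agents * (1 - t / max_iter)))
        kbest = fit.argsort()[:k]
        # Randomly weighted total force; a_i = F_i / M_i cancels the agent's
        # own mass, leaving G * M_j * (x_j - x_i) / (R_ij + eps)
        a = np.zeros_like(x)
        for i in range(n_agents):
            for j in kbest:
                if j == i:
                    continue
                r = np.linalg.norm(x[i] - x[j])
                a[i] += rng.random() * G * M[j] * (x[j] - x[i]) / (r + eps)
        v = rng.random(size=v.shape) * v + a        # eq. (6)
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

sphere = lambda z: float(np.sum(z ** 2))
xb, fb = gsa_minimize(sphere)
```

On this toy sphere function the swarm collapses toward the origin as G(t) decays and Kbest shrinks, which mirrors the exploration-to-exploitation transition described above.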

3. The Proposed Method

In this section, an efficient texture feature classification method with the “Tuned” mask is expounded, which learns the parameters of the “Tuned” mask as a combinatorial optimization problem using GSA. The goal of the proposed method is to maximize the classification accuracy using only one feature. The main procedure of the proposed method is explained as follows.

3.1. The Fundamental of “Tuned” Mask

In order to utilize an optimal texture mask and accurately classify different texture features, You and Cohen [21] suggested extending Laws’ scheme by abandoning the traditional masks with constant entries and replacing them with variables, so as to improve classification accuracy and reliability. In this method, a single mask is produced that extracts a common feature of a single texture at different rotations and scales while, to a large extent, discriminating this feature from those of other textures. The new mask is called a “Tuned” or adaptive mask, and the whole process of texture feature classification is very simple. In principle, the procedure for capturing the texture characterization comprises two steps. The first step is to convolve the whole image I with the “Tuned” mask M. Experimental results showed that a symmetric mask with zero sum reduces the computational cost while having almost no effect on the mask’s performance [23]; thus, the whole mask can be composed of only 10 parameters. The 2D convolution of the original image I with the 5 × 5 mask M is computed as

TI(x, y) = (I * M)(x, y) = Σ_{i = −k}^{k} Σ_{j = −k}^{k} I(x + i, y + j) × M(i, j), (12)

where “*” denotes the convolution operation, “×” denotes the multiplication operation, TI is the image after transformation, i and j are, respectively, the horizontal and vertical translation variables, and k is a constant (k = 2 for a 5 × 5 mask).

The second step is to compute a statistic within a w × w window centered at pixel (x, y). The “texture energy” is calculated as the variance statistic within this macro-window W(x, y) in the training stage, defined as [36]

μ(x, y) = (1 / w²) Σ_{(i, j) ∈ W(x, y)} TI(i, j), (13)

E(x, y) = (1 / w²) Σ_{(i, j) ∈ W(x, y)} (TI(i, j) − μ(x, y))². (14)

It is apparent that the value of the texture energy is decided by the mask; the optimal “Tuned” mask provides favorable discriminating ability. In this paper, the newly proposed evolutionary algorithm GSA is employed to generate a robust “Tuned” mask and classify different texture images.
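The two-step feature extraction (mask filtering followed by local variance) can be sketched as follows. This is a hedged illustration: the random image, the random zero-sum mask, and the window size w are made-up inputs, and the hand-rolled filtering loop stands in for whatever optimized convolution routine an implementation would actually use.

```python
import numpy as np

def filter2d(img, mask):
    """Slide the mask over the (reflect-padded) image; the kernel flip of a
    strict convolution is omitted, which is harmless since the mask is learned."""
    n, m = img.shape
    k = mask.shape[0] // 2
    pad = np.pad(img, k, mode='reflect')
    out = np.empty((n, m), dtype=float)
    for x in range(n):
        for y in range(m):
            out[x, y] = np.sum(pad[x:x + 2 * k + 1, y:y + 2 * k + 1] * mask)
    return out

def texture_energy(ti, w=15):
    """Variance of the transformed image within a w x w macro-window
    centered at each pixel (the 'texture energy')."""
    k = w // 2
    pad = np.pad(ti, k, mode='reflect')
    e = np.empty_like(ti)
    for x in range(ti.shape[0]):
        for y in range(ti.shape[1]):
            e[x, y] = pad[x:x + w, y:y + w].var()
    return e

rng = np.random.default_rng(1)
img = rng.random((32, 32))                 # stand-in for a texture image
mask = rng.uniform(-1.0, 1.0, (5, 5))
mask -= mask.mean()                        # enforce the zero-sum property
energy = texture_energy(filter2d(img, mask), w=15)
```

The resulting per-pixel energy map is the single feature the rest of the method works with.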

3.2. The Encoding Schema

The key issue in applying GSA is the representation of the problem, that is, how to map a problem solution to each agent (object) of GSA. In this paper, the search space for a mask has 25 dimensions, each taking continuous or integer values. H. Zheng and Z. Zheng suggested employing a symmetric mask with zero sum to avoid excessive computation [23]. Therefore, the “Tuned” mask is defined as a symmetric, zero-sum 5 × 5 matrix whose independent entries are the 10 parameters m1, m2, ..., m10. (15)

Since the “Tuned” mask is 5 × 5 and required to be symmetric with zero sum, only the 10 parameters m1, m2, ..., m10 of a mask need to be encoded. In a “Tuned” mask, the layout of the parameters plays a more important role in texture image classification than their actual values. Because decimal code can be used directly in GSA, the parameters m1, ..., m10 are encoded as decimal numbers within a bounded range for simplicity [23].

3.3. The Objective Function

In order to evaluate the optimization ability of GSA and the other evolutionary algorithms, a suitable objective function must be chosen. Since residential area recognition can be treated as a binary classification problem, residential areas are regarded as one category and the other texture areas as another. Fisher’s criterion performs well for binary classification: it tries to maximize the interclass difference and minimize the intraclass difference, so as to precisely separate the target category from the other [37]. Therefore, in this paper, the objective function based on Fisher’s criterion is defined as

F = (μ_1 − μ_2)² / (σ_1² + σ_2²), (16)

where μ_1 and σ_1² are, respectively, the average and variance of the eigenvalues in the first category, and μ_2 and σ_2² are those of the second category. A larger value of the fitness function indicates a better-quality “Tuned” mask.
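The Fisher-criterion fitness is a one-liner in code. In this sketch the two arrays of "eigenvalues" (texture energies of training samples from each class) are made-up numbers purely for illustration:

```python
import numpy as np

def fisher_fitness(class1, class2):
    """Fisher's criterion: (mean difference)^2 / (sum of variances).
    Larger values mean the two classes are better separated."""
    m1, m2 = np.mean(class1), np.mean(class2)
    s1, s2 = np.var(class1), np.var(class2)
    return (m1 - m2) ** 2 / (s1 + s2 + 1e-12)  # guard against zero variance

# Made-up texture-energy samples: well separated -> large fitness
residential = np.array([5.1, 5.4, 4.9, 5.2])
background  = np.array([2.0, 2.3, 1.8, 2.1])
score = fisher_fitness(residential, background)
```

A mask whose energies overlap across the two classes would score near zero, which is exactly why maximizing this quantity drives GSA toward discriminative masks.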

3.4. Implementation of the Proposed Method

The proposed method is simple and easy to implement. The main process of learning the “Tuned” mask based on GSA for texture feature classification is given in Pseudocode 2.

Begin
  Input training sample texture images
  Set the parameters of GSA and generate the initial population
  For each agent (object), generate a “Tuned” mask by using (15) (the position of the agent is used directly as the element values of the mask), convolve the training images with the “Tuned” mask, and output the eigenvalues
  While (the current iteration t < the maximum iteration T)
    Compute the fitness value of each object by using (16)
    Update the gravitational variable G(t), and best(t) and worst(t) of the population
    Calculate the active gravitational mass M_a, the passive gravitational mass M_p, the inertial mass M_i, and the acceleration a_i^d for each object
    Update the velocity and position of each object by using (6)
    If (the fitness value of the current position is better)
      Replace the object by the new position
    End if
  End while
  Output the optimal “Tuned” mask according to (15)
End

4. Simulation Results and Discussion

The proposed method is implemented in MATLAB 2014b on a personal computer with a 2.30 GHz CPU and 8.00 GB RAM under Windows 8.

In order to evaluate the performance of the proposed residential area recognition method, 3 texture images from a public texture database and 5 remote sensing images are used in this section. The objective function is defined as (16); a higher fitness value indicates better optimization ability.

To make a fair comparison, the number of function evaluations is used as the termination criterion: all algorithms stop when the number of function evaluations reaches 1000, and all algorithms are run 50 times independently. This section presents contrastive experimental results, including illustrative examples and performance tables, which demonstrate the merits of the proposed method. All algorithms are evaluated with the same objective function. Our primary interest is the optimal “Tuned” mask, measured by the fitness value of the objective function defined in (16), and the classification accuracy achieved with the optimal mask.

4.1. Parameters Setting for Different Algorithms

According to the operational process of evolutionary computation algorithms, the computational results of GSA depend to some extent on the parameter settings; fine-tuning the parameters can produce better results. Table 1 shows the parameters used in GSA.


Parameter | Value

Number of agents (objects), N | 20
Initial value of the gravitational variable, G_0 | 100
User-specified constant, α | 10

Some commonly used evolutionary or swarm-intelligence-based texture feature classification methods are also carried out for comparison. As illustrated in Section 2, the standard GSA is used in this paper. Existing “Tuned” mask techniques proposed by Zheng (GA [23], HBMO [25]) and Ye et al. (PSO [24]) are used for comparison. In addition, Zheng et al. utilized another texture energy function to detect texture objects [22], with experimentally demonstrated validity, so Zheng’s mask [22] is also used here and optimized by AIA and GSA, respectively. Furthermore, the commonly used Gabor wavelet feature [8, 9] is also included in the comparison, comprising 56 features in total (7 scales and 8 orientations). Although many variants of GA, PSO, AIA, and HBMO exist, to keep the comparison fair all four algorithms are used in their standard forms. Tables 2–5 show the parameter settings of GA [38], PSO [39], AIA [40], and HBMO [41].


Parameter | Value

Number of individuals | 20
Selection ratio | 0.9
Crossover ratio | 0.8
Mutation ratio | 0.01


Parameter | Value

Number of particles | 20
Positive acceleration constants | 2.0
Random numbers | uniform in [0, 1]


Parameter | Value

Number of antibodies | 20
Antibody elimination rate | 0.3
Crossover ratio | 0.8
Mutation ratio | 0.01


Parameter | Value

Number of queens | 1
Number of drones | 20
Number of broods | 10
Decreasing factor | 0.98

4.2. Experiments on Public Texture Images

Here, a preliminary test of the proposed texture feature classification technique is conducted on 3 texture images, named “Brick,” “Rock,” and “Tile,” from a public texture database (http://www.textures.com). Thirty training samples, all extracted from the original images, are used for classification. Once the optimal “Tuned” mask is obtained, each pixel of the original image is classified with the minimum distance classifier. Table 6 shows the fitness value and classification accuracy of the “Tuned” mask optimized by the different algorithms. Furthermore, Table 7 shows the classification accuracy using Zheng’s mask and the Gabor-wavelet-based feature, and the testing images and recognition results are given in Figures 1–3.
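The per-pixel labeling step uses a minimum distance classifier, i.e., each pixel's texture-energy feature is assigned to the class whose training mean is nearest. A hedged sketch, with entirely illustrative feature values and class means:

```python
import numpy as np

def min_distance_classify(features, class_means):
    """Assign each 1-D feature value to the index of the nearest class mean."""
    features = np.asarray(features, dtype=float)
    means = np.asarray(class_means, dtype=float)
    d = np.abs(features[:, None] - means[None, :])  # (n_pixels, n_classes)
    return d.argmin(axis=1)

# Made-up training means for class 0 (residential) and class 1 (background)
means = [5.0, 2.0]
pixels = np.array([4.8, 2.1, 3.6, 5.3])  # made-up per-pixel texture energies
labels = min_distance_classify(pixels, means)  # -> [0, 1, 0, 0]
```

With only one feature per pixel this reduces to thresholding halfway between the class means, which is why a mask that separates the two energy distributions well yields high pixel accuracy.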


Dataset | Meas. | GA | PSO | AIA | HBMO | GSA

Brick | Avg | 28.8825 | 30.4275 | 31.5182 | 31.7380 | 32.6939
Brick | Std | 2.2992 | 1.4374 | 0.8639 | 0.8022 | 0.6479
Brick | Accuracy (%) | 93.4493 | 94.5665 | 95.3433 | 95.7774 | 96.3742
Brick | Time (s) | 0.2753 | 0.2728 | 0.2780 | 0.2782 | 0.2746

Rock | Avg | 27.0690 | 27.9645 | 28.8741 | 28.9564 | 29.4108
Rock | Std | 1.7956 | 0.8883 | 0.8296 | 0.7970 | 0.6602
Rock | Accuracy (%) | 89.0483 | 90.1657 | 91.0628 | 91.3764 | 92.0350
Rock | Time (s) | 0.2767 | 0.2735 | 0.2794 | 0.2910 | 0.2759

Tile | Avg | 48.0013 | 48.7289 | 49.4354 | 49.5988 | 50.1986
Tile | Std | 2.4916 | 1.8612 | 1.5016 | 1.3950 | 1.0025
Tile | Accuracy (%) | 93.6011 | 94.1028 | 94.5532 | 94.7149 | 95.0289
Tile | Time (s) | 0.2814 | 0.2755 | 0.2847 | 0.2948 | 0.2783


Dataset | Meas. | “Tuned”-AIA | “Tuned”-GSA | Zheng-AIA | Zheng-GSA | Gabor

Brick | Accuracy (%) | 95.3433 | 96.3742 | 88.2992 | 91.3456 | 90.9904
Brick | Time (s) | 0.2780 | 0.2746 | 0.3061 | 0.2947 | 1.3642

Rock | Accuracy (%) | 91.0628 | 92.0350 | 83.7705 | 87.1574 | 86.2962
Rock | Time (s) | 0.2794 | 0.2759 | 0.3084 | 0.2977 | 1.3851

Tile | Accuracy (%) | 94.5532 | 95.0289 | 86.9648 | 89.2396 | 88.4899
Tile | Time (s) | 0.2847 | 0.2783 | 0.3152 | 0.2998 | 1.4661

In Tables 6 and 7, Avg and Std denote, respectively, the average and standard deviation of the fitness value over 50 independent runs; Accuracy is the average classification accuracy over the 50 runs; and Time is the CPU time per iteration, in seconds. “Tuned”-AIA and “Tuned”-GSA denote the recognition results using the “Tuned” mask optimized by AIA and GSA, respectively; Zheng-AIA and Zheng-GSA denote the results using Zheng’s mask optimized by AIA and GSA; and Gabor indicates the result using the Gabor wavelet feature. According to Table 6, the classification accuracies of the five algorithms are close: the maximum difference is less than 3%, and for the “Tile” image only 1.6%. Nevertheless, GSA has the best optimization ability of the five algorithms: its average fitness value is the highest for all 3 images, and its average classification accuracy exceeds 92%. Although the average fitness values of GSA and HBMO are very similar, the standard deviation of the fitness value with GSA is the smallest for all 3 images, showing that GSA converges to the optimal solution more stably. Regarding computational efficiency, PSO and GSA converge faster than the other three algorithms, with a CPU-time difference between them of less than 0.03 s per iteration; however, the fitness value obtained by GSA is clearly better than that of PSO, with an average above 29 for all 3 images. According to Figures 1–3, although the Gabor-wavelet-based features allow a rough recognition of the object, the edge delineation is distinctly worse than with the proposed method. Table 7 reveals that Zheng’s mask [22] costs more time and that its classification accuracy is distinctly lower than that of the “Tuned” mask, with a gap reaching 5% for the “Brick” and “Tile” images.
Consequently, it can be deduced that the proposed method is widely applicable to the recognition of different texture areas.

4.3. Experiments on Remote Sensing Images

As illustrated in Section 4.2, the proposed method yields good classification results on the public texture dataset, showing that it is suitable for texture feature classification. In this section, 5 remote sensing images containing residential areas, named RS1, RS2, RS3, RS4, and RS5, are used for further experiments. The training samples are all extracted from the original images. Once the optimal “Tuned” mask is obtained, each pixel of the original image is classified with the minimum distance classifier. Table 8 shows the fitness value and classification accuracy of the “Tuned” mask optimized by the different algorithms. Table 9 shows the classification accuracy with Zheng’s mask and the Gabor-wavelet-based feature, and the recognition results are given in Figures 4–8.


Dataset | Meas. | GA | PSO | AIA | HBMO | GSA

RS1 | Avg | 34.2731 | 36.7696 | 36.7657 | 37.0125 | 38.0333
RS1 | Std | 5.1505 | 4.2538 | 4.4363 | 3.0899 | 2.2603
RS1 | Accuracy (%) | 93.9797 | 95.4275 | 95.1400 | 96.0333 | 97.1932
RS1 | Time (s) | 0.2779 | 0.2652 | 0.2793 | 0.2915 | 0.2729

RS2 | Avg | 46.9603 | 47.4975 | 48.3295 | 48.4530 | 49.0828
RS2 | Std | 1.4241 | 0.9470 | 0.8591 | 0.8315 | 0.4004
RS2 | Accuracy (%) | 93.4484 | 94.2963 | 95.1594 | 95.5927 | 96.2379
RS2 | Time (s) | 0.2770 | 0.2639 | 0.2782 | 0.2897 | 0.2719

RS3 | Avg | 5.3701 | 5.4712 | 5.5026 | 5.5160 | 5.5837
RS3 | Std | 0.1162 | 0.0883 | 0.0702 | 0.0639 | 0.0506
RS3 | Accuracy (%) | 85.1405 | 87.3292 | 88.4858 | 89.1554 | 90.7826
RS3 | Time (s) | 0.2763 | 0.2634 | 0.2775 | 0.2865 | 0.2700

RS4 | Avg | 6.9304 | 7.0397 | 7.0858 | 7.1006 | 7.1720
RS4 | Std | 0.1520 | 0.1177 | 0.0960 | 0.0926 | 0.0855
RS4 | Accuracy (%) | 82.4348 | 83.9884 | 84.7993 | 85.2154 | 86.6071
RS4 | Time (s) | 0.2768 | 0.2638 | 0.2780 | 0.2893 | 0.2717

RS5 | Avg | 4.3903 | 4.5223 | 4.7030 | 4.6988 | 4.8958
RS5 | Std | 0.1463 | 0.1307 | 0.1274 | 0.1144 | 0.0963
RS5 | Accuracy (%) | 83.1671 | 85.0394 | 87.6705 | 87.1144 | 89.9387
RS5 | Time (s) | 0.2762 | 0.2631 | 0.2771 | 0.2861 | 0.2698


Dataset | Meas. | “Tuned”-AIA | “Tuned”-GSA | Zheng-AIA | Zheng-GSA | Gabor

RS1 | Accuracy (%) | 95.1400 | 97.1932 | 84.7560 | 90.2846 | 85.6994
RS1 | Time (s) | 0.2793 | 0.2729 | 0.3040 | 0.2920 | 1.4166

RS2 | Accuracy (%) | 95.1594 | 96.2379 | 83.0675 | 88.1354 | 84.9965
RS2 | Time (s) | 0.2782 | 0.2719 | 0.3030 | 0.2914 | 1.3696

RS3 | Accuracy (%) | 88.5848 | 90.7836 | 78.5296 | 81.4144 | 75.1974
RS3 | Time (s) | 0.2775 | 0.2700 | 0.3014 | 0.2902 | 1.2956

RS4 | Accuracy (%) | 84.7993 | 86.6071 | 72.9157 | 79.3633 | 74.0210
RS4 | Time (s) | 0.2780 | 0.2717 | 0.3032 | 0.2917 | 1.3554

RS5 | Accuracy (%) | 87.6705 | 89.9378 | 69.6710 | 76.5925 | 72.4010
RS5 | Time (s) | 0.2771 | 0.2698 | 0.2995 | 0.2876 | 1.2829

In Tables 8 and 9, Avg and Std denote, respectively, the average and standard deviation of the fitness value over 50 independent runs; Accuracy is the average classification accuracy over the 50 runs; and Time is the CPU time per iteration, in seconds. The meanings of “Tuned”-AIA, “Tuned”-GSA, Zheng-AIA, Zheng-GSA, and Gabor are the same as in Table 7. Because the texture of remote sensing images is more random, the fitness values for the RS3, RS4, and RS5 images are obviously lower than for the other images; residential area recognition is more complex there, so the target areas are easily misidentified. According to Table 8, GSA obtains the highest average fitness value; for the RS2 image, its average fitness exceeds 49, illustrating that the optimization ability of GSA has a distinct advantage over the other 4 algorithms and that the different categories are clearly separable. Moreover, the standard deviation of the fitness value with GSA is the smallest; for RS3, RS4, and RS5 it is below 0.1, a small range with almost no volatility, showing that the algorithm stably converges to a satisfactory solution in each independent run. The classification accuracy with GSA exceeds 86% for all 5 images and reaches 97.1932% for the RS1 image in particular, a satisfactory accuracy for practical applications, with the residential areas generally recognized. As for convergence efficiency, as in the last section, PSO and GSA both quickly converge to the optimal solution, with a difference of only 0.08 s per iteration; however, the fitness value obtained by GSA is distinctly better than that of PSO, with a maximum difference in average fitness reaching 1.5.
Meanwhile, the mask technique outperforms the Gabor-wavelet-based method. In addition, the classification accuracy of Zheng’s mask [22] is only 72.9157% and 69.6710% for the RS4 and RS5 images, apparently worse than with the “Tuned” mask; moreover, the “Tuned” mask costs less CPU time at the same time. The experimental results demonstrate that GSA has better optimization ability than the other 4 algorithms and that the “Tuned” mask is a feasible approach to texture feature classification: it needs only a few parameters and achieves satisfactory classification accuracy, and for residential area recognition in particular its accuracy is clearly better than that of Zheng’s mask [22].

5. Conclusion

In conclusion, a residential area recognition method based on the “Tuned” mask optimized with the gravitational search algorithm (GSA) has been detailed. Three texture images from a public texture database and 5 remote sensing images were used to evaluate the proposed method, and the results were compared with other mask-based classification techniques optimized by GA, PSO, AIA, and HBMO. In general, evolutionary and swarm intelligence algorithms can be used effectively for texture feature classification. Among these algorithms, GSA performs best: its average fitness value is higher than that of the other 4 algorithms, so GSA is more appropriate than GA, PSO, AIA, and HBMO for obtaining the optimal “Tuned” mask. Moreover, in terms of CPU time, GSA quickly converges to the optimal solution, fast enough for real-time applications. For a more comprehensive comparison, Gabor-wavelet-based features and the mask technique proposed by Zheng et al. [22] (optimized by AIA and GSA) were also evaluated; the results show that the proposed method performs better, with satisfactory classification accuracy. In sum, the “Tuned” mask performs stably for texture feature classification in most cases, and its disadvantage of heavy computation is largely overcome when it is combined with GSA. The proposed method keeps a good balance between efficiency and classification accuracy, making it well suited to texture feature classification applications.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is funded by the 863 High Technology Program of China under Grant no. 2013AA122104, the National Science & Technology Pillar Program under Grant no. 2014BAL05B07, the National Natural Science Foundation of China under Grant no. 41301371, and the State Key Laboratory of Geo-Information Engineering under Grant no. SKLGIE2014-M-3-3.

References

  1. R. M. Haralick, “Statistical and structural approaches to texture,” Proceedings of the IEEE, vol. 67, no. 5, pp. 786–804, 1979. View at: Publisher Site | Google Scholar
  2. T. Randen and J. H. Husøy, “Filtering for texture classification: a comparative study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 4, pp. 291–310, 1999. View at: Publisher Site | Google Scholar
  3. Y. Xu, Y. Quan, H. Ling, and H. Ji, “Dynamic texture classification using dynamic fractal analysis,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 1219–1226, IEEE, Barcelona, Spain, November 2011. View at: Publisher Site | Google Scholar
  4. L. Liu, P. Fieguth, D. Clausi, and G. Kuang, “Sorted random projections for robust rotation-invariant texture classification,” Pattern Recognition, vol. 45, no. 6, pp. 2405–2418, 2012. View at: Publisher Site | Google Scholar
  5. T. Celik and T. Tjahjadi, “Bayesian texture classification and retrieval based on multiscale feature vector,” Pattern Recognition Letters, vol. 32, no. 2, pp. 159–167, 2011. View at: Publisher Site | Google Scholar
  6. J. Zhang, J. Liang, and H. Zhao, “Local energy pattern for texture classification using self-adaptive quantization thresholds,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 31–42, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  7. V. S. Thakare and N. N. Patil, “Classification of texture using gray level co-occurrence matrix and self-organizing map,” in Proceedings of the International Conference on Electronic Systems, Signal Processing, and Computing Technologies (ICESC '14), pp. 350–355, Nagpur, India, January 2014. View at: Publisher Site | Google Scholar
  8. F. Riaz, A. Hassan, S. Rehman, and U. Qamar, “Texture classification using rotation- and scale-invariant Gabor texture features,” IEEE Signal Processing Letters, vol. 20, no. 6, pp. 607–610, 2013. View at: Publisher Site | Google Scholar
  9. C. Li, G. Duan, and F. Zhong, “Rotation invariant texture retrieval considering the scale dependence of Gabor wavelet,” IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2344–2354, 2015. View at: Publisher Site | Google Scholar | MathSciNet
  10. L. Liu, L. Zhao, Y. Long, G. Kuang, and P. Fieguth, “Extended local binary patterns for texture classification,” Image and Vision Computing, vol. 30, no. 2, pp. 86–99, 2012. View at: Publisher Site | Google Scholar
  11. Y. Zhao, W. Jia, R.-X. Hu, and H. Min, “Completed robust local binary pattern for texture classification,” Neurocomputing, vol. 106, pp. 68–76, 2013. View at: Publisher Site | Google Scholar
  12. S. Gai, G. Yang, and S. Zhang, “Multiscale texture classification using reduced quaternion wavelet transform,” AEU—International Journal of Electronics and Communications, vol. 67, no. 3, pp. 233–241, 2013. View at: Publisher Site | Google Scholar
  13. R. Soulard and P. Carré, “Quaternionic wavelets for texture classification,” Pattern Recognition Letters, vol. 32, no. 13, pp. 1669–1678, 2011. View at: Publisher Site | Google Scholar
  14. F. Wu, W. Chao, and Z. Hong, “Residential area information extraction by combining China airborne SAR and optical images,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium Proceedings: Science for Society: Exploring and Managing a Changing Planet (IGARSS '04), pp. 2568–2570, September 2004. View at: Google Scholar
  15. M. Wang, S. Jiang, and X. Yang, “Residential area recognizing with Gabor filtering from high spatial resolution remotely sensed imagery,” Geo-information Science, vol. 10, no. 3, pp. 308–313, 2010. View at: Google Scholar
  16. F. Jin, Z. Zhang, and J. Rui, “Residential area extraction from remote sensing image based on texture principal directions,” Science of Surveying and Mapping, vol. 35, no. 4, pp. 139–141, 2010. View at: Google Scholar
  17. B. Shi, C. Liu, W. Sun, and H. Wu, “Residential area recognition using oscillatory correlation segmentation of hyperspectral imagery,” in Proceedings of the International Symposium on Image and Data Fusion (ISIDF '11), pp. 1–4, IEEE, Yunnan, China, August 2011. View at: Publisher Site | Google Scholar
  18. H. A. Jalab and R. W. Ibrahim, “Texture feature extraction based on fractional mask convolution with Cesàro means for content-based image retrieval,” in PRICAI 2012: Trends in Artificial Intelligence, pp. 170–179, Springer, Berlin, Germany, 2012. View at: Google Scholar
  19. K. I. Laws, Textured Image Segmentation, Image Processing Institute University of Southern California, Los Angeles, Calif, USA, 1980.
  20. U. R. Acharya, S. V. Sree, M. M. R. Krishnan et al., “Atherosclerotic risk stratification strategy for carotid arteries using texture-based features,” Ultrasound in Medicine & Biology, vol. 38, no. 6, pp. 899–915, 2012. View at: Publisher Site | Google Scholar
  21. J. You and H. A. Cohen, “Classification and segmentation of rotated and scaled textured images using texture ‘tuned’ masks,” Pattern Recognition, vol. 26, no. 2, pp. 245–258, 1993. View at: Publisher Site | Google Scholar
  22. H. Zheng, J. Zhang, and S. Nahavandi, “Learning to detect texture objects by artificial immune approaches,” Future Generation Computer Systems, vol. 20, no. 7, pp. 1197–1208, 2004. View at: Publisher Site | Google Scholar
  23. H. Zheng and Z. Zheng, “Robust texture feature extraction using two dimension genetic algorithms,” in Proceedings of the 5th International Conference on Signal Processing (WCCC-ICSP '00), vol. 3, pp. 1580–1584, IEEE, 2000. View at: Google Scholar
  24. Z. Ye, X. Zhou, Z. Zheng, and X. Lai, “Chaotic particle swarm optimization algorithm for producing texture ‘tuned’ masks,” Geomatics and Information Science of Wuhan University, vol. 38, no. 1, pp. 10–14, 2013. View at: Google Scholar
  25. Z. Zheng, “Honey-bee mating optimization algorithm for producing better ‘Tuned’ masks,” Geomatics and Information Science of Wuhan University, vol. 34, no. 4, pp. 387–390, 2009. View at: Google Scholar
  26. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009. View at: Publisher Site | Google Scholar
  27. S. Yazdani, H. Nezamabadi-Pour, and S. Kamyab, “A gravitational search algorithm for multimodal optimization,” Swarm and Evolutionary Computation, vol. 14, pp. 1–14, 2014. View at: Publisher Site | Google Scholar
  28. Y. Kumar and G. Sahoo, “A review on gravitational search algorithm and its applications to data clustering & classification,” International Journal of Intelligent Systems and Applications, vol. 6, no. 6, pp. 79–93, 2014. View at: Publisher Site | Google Scholar
  29. S. Duman, U. Güvenç, Y. Sönmez, and N. Yörükeren, “Optimal power flow using gravitational search algorithm,” Energy Conversion and Management, vol. 59, pp. 86–95, 2012. View at: Publisher Site | Google Scholar
  30. A. Bahrololoum, H. Nezamabadi-Pour, H. Bahrololoum, and M. Saeed, “A prototype classifier based on gravitational search algorithm,” Applied Soft Computing, vol. 12, no. 2, pp. 819–825, 2012. View at: Publisher Site | Google Scholar
  31. S. Sarafrazi and H. Nezamabadi-pour, “Facing the classification of binary problems with a GSA-SVM hybrid system,” Mathematical and Computer Modelling, vol. 57, no. 1-2, pp. 270–278, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  32. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “BGSA: binary gravitational search algorithm,” Natural Computing, vol. 9, no. 3, pp. 727–745, 2010. View at: Publisher Site | Google Scholar | MathSciNet
  33. P. Li and H. B. Duan, “Path planning of unmanned aerial vehicle based on improved gravitational search algorithm,” Science China Technological Sciences, vol. 55, no. 10, pp. 2712–2719, 2012. View at: Publisher Site | Google Scholar
  34. C. Li, J. Zhou, J. Xiao, and H. Xiao, “Parameters identification of chaotic system by chaotic gravitational search algorithm,” Chaos, Solitons & Fractals, vol. 45, no. 4, pp. 539–547, 2012. View at: Publisher Site | Google Scholar
  35. L. Ma and L. Liu, “Analysis and improvement of gravitational search algorithm,” Microelectronics & Computer, vol. 32, no. 9, pp. 76–80, 2015. View at: Google Scholar
  36. A. S. Setiawan, J. Wesley, and Y. Purnama, “Mammogram classification using Laws’ texture energy measure and neural networks,” in Procedia Computer Science, vol. 59, pp. 92–97, 2015. View at: Publisher Site | Google Scholar
  37. Y. Xu, J.-Y. Yang, and Z. Jin, “A novel method for Fisher discriminant analysis,” Pattern Recognition, vol. 37, no. 2, pp. 381–384, 2004. View at: Publisher Site | Google Scholar
  38. S. A. Kazarlis, A. G. Bakirtzis, and V. Petridis, “A genetic algorithm solution to the unit commitment problem,” IEEE Transactions on Power Systems, vol. 11, no. 1, pp. 83–92, 1996. View at: Publisher Site | Google Scholar
  39. R. Poli, J. Kennedy, and T. Blackwell, “Particle swarm optimization,” Swarm Intelligence, vol. 1, no. 1, pp. 33–57, 2007. View at: Google Scholar
  40. I. Aydin, M. Karakose, and E. Akin, “A multi-objective artificial immune algorithm for parameter optimization in support vector machine,” Applied Soft Computing, vol. 11, no. 1, pp. 120–129, 2011. View at: Publisher Site | Google Scholar
  41. M.-H. Horng, “Multilevel minimum cross entropy threshold selection based on the honey bee mating optimization,” Expert Systems with Applications, vol. 37, no. 6, pp. 4580–4592, 2010. View at: Publisher Site | Google Scholar

Copyright © 2016 Youchuan Wan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
