Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 1848471, 9 pages

http://dx.doi.org/10.1155/2016/1848471

## Target Matching Recognition for Satellite Images Based on the Improved FREAK Algorithm

^{1}Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China

^{2}University of Chinese Academy of Sciences, Beijing 100049, China

Received 4 June 2016; Accepted 7 September 2016

Academic Editor: Yakov Strelniker

Copyright © 2016 Yantong Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Satellite remote sensing image target matching recognition exhibits poor robustness and accuracy because of unsuitable feature extractors and the large quantity of image data. To address this problem, we propose a new feature extraction algorithm for fast target matching recognition that combines an improved features from accelerated segment test (FAST) feature detector with a binary fast retina key point (FREAK) feature descriptor. To improve robustness, we extend the FAST feature detector by applying scale space theory, and we transform the feature vector acquired by the FREAK descriptor from decimal into binary form. Operating in this binary space reduces the quantity of data to be stored and improves matching accuracy. Simulation test results show that our algorithm outperforms other relevant methods in terms of robustness and accuracy.

#### 1. Introduction

Matching recognition, in which targets are recognised by matching their features, has many practical applications and is an important issue in the pattern recognition field. The increasing resolution of satellite remote sensing images yields detailed image features, thereby enabling target matching recognition [1]. Satellite image target matching recognition exhibits higher stability and accuracy than traditional methods [2].

However, satellite remote sensing image target matching recognition is a complex process. The complex background of satellite images leads to poor robustness of feature extraction, and the large volume of satellite image data causes processing difficulties and matching errors. Therefore, a matching algorithm with high robustness and accuracy is required. As a core step in satellite image target matching recognition, feature extraction may be classified into global and local invariant feature extraction according to the amount of target information utilised. Traditional target recognition methods frequently extract the target shape and other global invariant features, including invariant moments [3] and transform-domain invariants [4]. In [5], the principal component analysis (PCA) algorithm is applied to an image segmented by the method of Otsu [6] to estimate the main direction, and the targets are recognised via template matching. In [7], Hu invariant moments are applied to recognise aircraft in satellite images: four features are extracted and combined to build the global invariant feature. However, these methods assume that the edges of the target can be perfectly extracted, which is difficult to achieve in practice. To address this problem, we use local invariant features, which are robust to noise and partial occlusion.

A local invariant feature comprises feature detection and feature description, both of which have drawn the attention of many researchers. Introduced by Lowe [8], the scale-invariant feature transform (SIFT) uses a difference of Gaussian (DoG) detector and a SIFT descriptor to extract a robust and highly discriminative local invariant feature vector. Introduced by Bay et al. [9], the speeded up robust feature (SURF) uses a fast Hessian feature detector and applies Haar-like features to create a feature vector. Although simpler than SIFT, SURF performs poorly at extracting scale-invariant features. Ke and Sukthankar [10] proposed the PCA-SIFT descriptor, which describes key points more efficiently than SIFT by using the PCA algorithm. Although the GLOH [11] and DAISY [12] algorithms apply linear dimension reduction to improve the SIFT descriptor, both involve a complex computation process. Bit operations present a favourable alternative for reducing computational complexity; however, they require the feature vector to be transformed into binary form. In [13], the binary robust independent elementary feature (BRIEF) approach is proposed; this method uses a FAST detector and bit operations for matching and is considerably more suitable for real-time applications. However, despite its obvious speed advantage, BRIEF exhibits poor reliability and robustness because of its minimal tolerance to image distortions and transformations, particularly rotation and scale change. The fast retina key point (FREAK) descriptor [14] simulates the human vision system and uses a simple binary transform to accelerate the matching process, but its accuracy is relatively low.

Given these limitations, we propose a new efficient method for satellite remote sensing image target matching recognition based on the FREAK feature extraction algorithm. In particular, we extend the FAST detector by applying scale space theory to improve its robustness against complex satellite image backgrounds. We also create a binary data space to transform the high-dimensional FREAK descriptor from a decimal feature vector into a binary feature vector, improving its accuracy. The rest of this paper is organised as follows. Section 2.1 presents the scale-invariant FAST feature detector. Section 2.2 describes the FREAK descriptor and the binary data space for transforming the feature vector into binary data. Section 3 presents the simulation results. Section 4 concludes the study and offers directions for future work.

#### 2. Improved FREAK Algorithm

A feature extraction algorithm typically comprises feature detection and feature description. The FREAK feature extraction algorithm comprises a FAST detector and a FREAK descriptor. However, the FAST detector is not scale invariant, and the FREAK descriptor requires a large quantity of data storage. We therefore improve the FREAK feature extraction algorithm as follows.

##### 2.1. Scale-Invariant FAST Feature Detector

###### 2.1.1. Scale Space Theory

Feature detection begins by identifying locations and scales that can be repeated under different views of the same target. Detecting locations involves searching for stable features across all possible scales, so establishing scale invariance is the key step in feature detection. Scale space theory regards scale as a free parameter and adds it to the signal. Lindeberg [15] identified the Gaussian function as the only possible scale-space kernel under various reasonable assumptions. Therefore, the scale space of an image is defined as a function $L(x, y, \sigma)$, produced by convolving a variable-scale Gaussian $G(x, y, \sigma)$ with an input image $I(x, y)$:

$$L(x, y, \sigma) = G(x, y, \sigma) \ast I(x, y),$$

where $\ast$ is the convolution operation, and

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2} + y^{2})/2\sigma^{2}}.$$

To detect features in the scale space effectively, we use the DoG function of Lowe [8]: the image is convolved with Gaussians at two nearby scales separated by a constant multiplicative factor $k$, and the results are subtracted:

$$D(x, y, \sigma) = \bigl(G(x, y, k\sigma) - G(x, y, \sigma)\bigr) \ast I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma).$$

The function $L$ captures the scale space features of the image at every scale, and $D$ can be computed efficiently by simple image subtraction. The difference-of-Gaussian function closely approximates the scale-normalised Laplacian of Gaussian $\sigma^{2}\nabla^{2}G$, which is truly scale invariant but complex to compute. Mikolajczyk and Schmid [11] found that the Laplacian of Gaussian provides more stable image features than the gradient, Hessian, or Harris corner functions. The relationship between the difference of Gaussian and the Laplacian of Gaussian follows from the heat diffusion equation

$$\frac{\partial G}{\partial \sigma} = \sigma \nabla^{2} G,$$

where $\partial G / \partial \sigma$ can be approximated by the finite difference at scales $k\sigma$ and $\sigma$; that is,

$$\sigma \nabla^{2} G = \frac{\partial G}{\partial \sigma} \approx \frac{G(x, y, k\sigma) - G(x, y, \sigma)}{k\sigma - \sigma}.$$

Therefore,

$$G(x, y, k\sigma) - G(x, y, \sigma) \approx (k - 1)\,\sigma^{2}\nabla^{2}G.$$

However, when $k$ is equal to 1, the difference vanishes and the approximation no longer yields a usable response. The factor $(k - 1)$ is constant over all scales and therefore does not influence the location or stability of the detected features. Consequently, $k$ can be fixed at a constant value such as $k = \sqrt{2}$.
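For illustration, the Gaussian scale space and DoG response described above can be sketched in a few lines of Python (a minimal sketch using NumPy and SciPy; the synthetic image and parameter values are arbitrary choices, not settings from this paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma, k=np.sqrt(2)):
    """Difference-of-Gaussian response D = L(k*sigma) - L(sigma),
    approximating (k - 1) * sigma^2 * (Laplacian of Gaussian)."""
    img = image.astype(np.float64)
    L_low = gaussian_filter(img, sigma)       # L(x, y, sigma)
    L_high = gaussian_filter(img, k * sigma)  # L(x, y, k * sigma)
    return L_high - L_low

# A small bright blob produces a strong DoG response at its centre,
# while flat regions far from the blob give essentially zero response.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
D = dog(img, sigma=2.0)
```

Note that $D$ is computed purely by image subtraction, which is why the DoG is preferred over directly evaluating the Laplacian of Gaussian.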

###### 2.1.2. Scale-Invariant FAST Detector

The FAST detector is built on the smallest univalue segment assimilating nucleus (SUSAN) detector [16] but is faster and more accurate, making it suitable for satellite remote sensing images. The SUSAN detector operates on a circular window whose centre is defined as the nucleus. The univalue segment assimilating nucleus (USAN) area comprises the points whose brightness is close to that of the nucleus, whereas the unassimilated area comprises the remaining points; the window is thus divided into two regions. The USAN area is large when the window lies in a uniform region around the nucleus, covers roughly half the window on an edge, and is small at a corner, so the nucleus is detected as a corner when the USAN area is small. Following this principle, the FAST detector can be expressed as

$$c(\vec{r}, \vec{r}_{0}) = \begin{cases} 1, & \left|I(\vec{r}) - I(\vec{r}_{0})\right| \le t, \\ 0, & \left|I(\vec{r}) - I(\vec{r}_{0})\right| > t, \end{cases}$$

where $\vec{r}_{0}$ is the position of the nucleus in the image, $\vec{r}$ is the position of any other point in the circular window, $I(\vec{r}_{0})$ is the brightness of the nucleus, $I(\vec{r})$ is the brightness of any other point, $t$ is the brightness difference threshold, and $c$ is the output of the comparison. Every pixel in the window is compared with the nucleus, and the total result is

$$n(\vec{r}_{0}) = \sum_{\vec{r}} c(\vec{r}, \vec{r}_{0}),$$

which is the size of the USAN area, that is, the number of pixels within this area. This result is then minimised. The threshold $t$ determines the minimum contrast of the features to be detected and the maximum amount of noise to be ignored.
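The USAN computation can be sketched as follows (an illustrative NumPy version; the window radius, threshold, and test image are our own choices, and a real implementation would vectorise this loop):

```python
import numpy as np

def usan_count(image, x0, y0, t=25, radius=3):
    """Count the pixels in a circular window whose brightness lies
    within t of the nucleus at (x0, y0): n = sum over r of c(r, r0)."""
    h, w = image.shape
    nucleus = float(image[y0, x0])
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue  # skip the nucleus itself
            if dx * dx + dy * dy > radius * radius:
                continue  # outside the circular window
            y, x = y0 + dy, x0 + dx
            if 0 <= y < h and 0 <= x < w:
                # c(r, r0) = 1 when |I(r) - I(r0)| <= t, else 0
                n += abs(float(image[y, x]) - nucleus) <= t
    return n

# At the corner of a bright square the USAN area is small;
# in a flat region it covers the whole circular window.
img = np.zeros((20, 20), dtype=np.uint8)
img[10:, 10:] = 200
```

Calling `usan_count(img, 10, 10)` at the corner yields a much smaller USAN area than `usan_count(img, 3, 3)` in the flat region, which is exactly the property the corner response exploits.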

The initial feature is then detected using the rule

$$R(\vec{r}_{0}) = \begin{cases} g - n(\vec{r}_{0}), & n(\vec{r}_{0}) < g, \\ 0, & \text{otherwise}, \end{cases}$$

where $R(\vec{r}_{0})$ is the response of the initial detected feature and $g$ is the geometric threshold compared with $n(\vec{r}_{0})$, set to $g = n_{\max}/2$, where $n_{\max}$ is the maximum value of $n$. This value is obtained by analysing the detector's response to noise.

Despite providing a favourable result, the aforementioned method can still be improved. The FAST detector uses the following comparison function, which is more stable and reasonable than (7):

$$c(\vec{r}, \vec{r}_{0}) = \exp\left(-\left(\frac{I(\vec{r}) - I(\vec{r}_{0})}{t}\right)^{6}\right).$$

The preceding equation is smoother than that of the SUSAN detector and tolerates slight changes in the brightness of each pixel, thereby permitting the new function to build a search decision tree for feature detection. The brightness of each candidate pixel must be specified and compared with that of the eight nearby pixels. Consequently, the configuration space contains three states, namely, "darker," "brighter," and "similar." The search tree assigns states as

$$S_{p \rightarrow x} = \begin{cases} d, & I_{p \rightarrow x} \le I_{p} - t & (\text{darker}), \\ s, & I_{p} - t < I_{p \rightarrow x} < I_{p} + t & (\text{similar}), \\ b, & I_{p} + t \le I_{p \rightarrow x} & (\text{brighter}), \end{cases}$$

where $I_{p}$ is the brightness of the candidate pixel $p$ and $I_{p \rightarrow x}$ is the brightness of a nearby pixel $x$.

Given its sensitivity to scale change, we add scale space to the FAST detector. The image is represented in a Gaussian scale space through the convolution process described above, and the FAST response is computed at each scale. We mark the extreme points within a 3 × 3 template based on the computed responses. If a corner point is not an extreme point, we reinterpolate the image coordinates between the patches in the layers adjacent to the determined scale. The scale-invariant FAST detector can thus be expressed as

$$R\left(\vec{r}_{0}, \sigma_{i}\right) = \max_{\vec{r} \in N\left(\vec{r}_{0}\right),\; j \in \{i-1,\, i,\, i+1\}} R\left(\vec{r}, \sigma_{j}\right),$$

where $N(\vec{r}_{0})$ is the 3 × 3 neighbourhood of $\vec{r}_{0}$ (including $\vec{r}_{0}$ itself) and $\sigma_{i}$ is the scale at which the feature is detected.

Direction estimation is another important process in feature detection. The points on the boundary between the two regions of the USAN area form a line, whose direction is the direction of the edge. The edge direction is therefore calculated by finding the longest axis of symmetry of the USAN area, using the sums

$$\overline{(x - x_{0})^{2}}\,(\vec{r}_{0}) = \sum_{\vec{r}} (x - x_{0})^{2}\, c(\vec{r}, \vec{r}_{0}),$$

$$\overline{(y - y_{0})^{2}}\,(\vec{r}_{0}) = \sum_{\vec{r}} (y - y_{0})^{2}\, c(\vec{r}, \vec{r}_{0}),$$

$$\overline{(x - x_{0})(y - y_{0})}\,(\vec{r}_{0}) = \sum_{\vec{r}} (x - x_{0})(y - y_{0})\, c(\vec{r}, \vec{r}_{0}).$$

We use the ratio of $\overline{(y - y_{0})^{2}}$ to $\overline{(x - x_{0})^{2}}$ to determine the orientation of the edge and the sign of $\overline{(x - x_{0})(y - y_{0})}$ to determine whether a diagonal edge has a positive or negative gradient. The feature direction is estimated in this manner.
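The second-moment sums and the sign test for the edge direction can be sketched as follows (an illustrative implementation on a synthetic diagonal-edge patch; the threshold and patch size are arbitrary choices):

```python
import numpy as np

def edge_direction_moments(patch, t=25):
    """Second-order moments of the USAN area around the central nucleus:
    returns (sum (x-x0)^2 c,  sum (y-y0)^2 c,  sum (x-x0)(y-y0) c)."""
    h, w = patch.shape
    y0, x0 = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    # c(r, r0) = 1 where the brightness is within t of the nucleus
    c = np.abs(patch.astype(float) - float(patch[y0, x0])) <= t
    mxx = np.sum((xs - x0) ** 2 * c)
    myy = np.sum((ys - y0) ** 2 * c)
    mxy = np.sum((xs - x0) * (ys - y0) * c)
    return mxx, myy, mxy

# A diagonal step edge along x + y = const: the USAN area is the
# lower-left triangle, so the cross moment mxy comes out negative.
patch = np.fromfunction(lambda y, x: (x + y >= 7) * 200, (7, 7))
mxx, myy, mxy = edge_direction_moments(patch)
```

Here the ratio `myy / mxx` equals 1 (the edge makes a 45° angle), and the negative sign of `mxy` identifies the gradient orientation of the diagonal edge.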

##### 2.2. Binary FREAK Descriptor

###### 2.2.1. FREAK Descriptor

The previous operations detect the scale, location, and orientation of the features in an image. We then compute a descriptor for each feature that remains invariant and stable under different conditions, such as changes in scale, viewpoint, and illumination.

The FREAK descriptor simulates the human retinal vision system, which processes information in several stages. First, light stimulates the retina and excites the optic nerve cells. Second, the optic nerve transfers the information to the lateral geniculate nucleus (LGN) for decoding. Third, central and peripheral visual cells in the LGN extract detail information and contour features, respectively. Fourth, the central nervous system transfers the processed information to the primary visual region of the brain. The visual information is fully recognised by integrating the information obtained from the different cortical regions.

The FREAK descriptor builds its fast retina key point sampling model according to this retinal imaging principle. The model consists of seven concentric circles, as shown in Figure 1. Each circle carries six sampling points, imitating the relationship between central and peripheral visual cells: the sampling circles near the centre extract texture features, whereas the peripheral points extract outline features.
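For illustration, the coordinates of such a retinal sampling grid can be generated as follows. This is a sketch only: the ring radii and phase offsets below are assumptions for visualisation, not the exact FREAK pattern of Figure 1, although the point count matches FREAK's 43 sampling points (7 rings of 6 points plus the centre):

```python
import numpy as np

def retinal_pattern(n_rings=7, points_per_ring=6, r_min=1.0, r_max=22.0):
    """FREAK-style sampling grid: n_rings concentric circles of
    points_per_ring points each, plus one central sampling point.
    Ring radii grow outward (exponential spacing assumed here)."""
    pts = [(0.0, 0.0)]  # central sampling point
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    for i, r in enumerate(radii):
        # offset alternate rings so neighbouring rings interleave
        phase = (np.pi / points_per_ring) * (i % 2)
        for j in range(points_per_ring):
            a = phase + 2 * np.pi * j / points_per_ring
            pts.append((r * np.cos(a), r * np.sin(a)))
    return np.array(pts)

pattern = retinal_pattern()  # shape (43, 2): centre plus 7 rings of 6 points
```

The denser inner points mimic the high-resolution central visual cells (texture), while the sparse outer points mimic the peripheral cells (outline), matching the division of roles described above.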