Research Article  Open Access
YunHua Wu, LinLin Ge, Feng Wang, Bing Hua, ZhiMing Chen, Feng Yu, "Fast Image Registration for Spacecraft Autonomous Navigation Using Natural Landmarks", International Journal of Aerospace Engineering, vol. 2018, Article ID 8324298, 12 pages, 2018. https://doi.org/10.1155/2018/8324298
Fast Image Registration for Spacecraft Autonomous Navigation Using Natural Landmarks
Abstract
In order to satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded-up robust features) is proposed to improve the speed of the image registration process without loss of repeatability. It combines the chessboard segmentation algorithm (CSA) with SURF. Here, SURF is used to extract features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, a combination of principal component analysis and SURF, is also analyzed in this paper for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for a further accuracy improvement. The simulation results show that the proposed strategy obtains good results, especially under scaling and rotation variation. Moreover, compared with the SURF algorithm, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability. The proposed method is therefore an alternative for image registration in spacecraft autonomous navigation using natural landmarks.
1. Introduction
Spacecraft autonomous navigation requires only regular checks of the spacecraft working conditions and eliminates complex ground-based navigation computing tasks, which greatly reduces the manpower and ground-facility requirements and the cost of space projects [1]. Furthermore, ground stations can be destroyed during wartime, whereas a spacecraft with an autonomous navigation system can still work well when ground communication is interrupted. Among the many kinds of autonomous navigation systems, navigation using natural landmarks based on machine vision is a newly proposed method with potential applications in future space missions.
For landmark-based autonomous navigation, landmarks serve as reference and measuring objects. First, the landmarks, together with their position information, are gathered and stored onboard the satellite. In orbit, the camera can capture ground targets that have been stored onboard. While the satellite passes over a target, several images from different view angles are captured and used as the inputs of the image matching algorithm. Once matching succeeds, the location information corresponding to the landmark is used to determine the position of the satellite, and the orbit is then estimated from a sequence of such positions. Figure 1 shows the schematic of autonomous navigation using natural landmarks.
Image registration is a vital technology for landmark-based spacecraft autonomous navigation. However, an autonomous navigation system must be stable and meet real-time constraints, so the image registration process must be both fast and reliable. This paper proposes an alternative method to improve these performances.
To improve the correctness and real-time performance of image matching, Sha et al. proposed a fast matching algorithm based on image gray-degree clustering, which was robust and fast under nonlinear changes of local lighting, noise, target matching of irregular shapes, and even complex backgrounds [2]. However, how to quickly obtain a template that precisely reflects the main features of the matching object still needs further research. Xu et al. proposed a method called DFOB for the detection, orientation computation, and description of feature points. The method was computationally efficient as it was implemented with integral images; compared with the SIFT and SURF algorithms, its computational cost was much lower [3]. However, it does not work well under large affine and perspective deformations and therefore performs poorly in wide-baseline matching.
Zhao et al. studied a method based on principal component analysis (PCA) [4] to speed up the image registration process; the resulting computing time was reduced to 60% of that of single gray-level normalized cross-correlation matching. He and Jiang proposed a fast image matching algorithm based on the discrete Hartley transform (DHT) [5], which reduced data calculation and storage while also improving matching accuracy and efficiency. The PCA-based method greatly improved the speed performance; however, it performs poorly when the rotation is larger than 20 deg. The DHT-based algorithm is also faster than the traditional algorithm based on the fast Fourier transform (FFT), but its computation time reached 18 s, which cannot satisfy the real-time requirement of natural-landmark-based autonomous navigation. To overcome these shortcomings, a fast image registration algorithm based on chessboard segmentation is proposed. Furthermore, the RANSAC algorithm is applied to remove mismatched key points for a further repeatability improvement.
The remainder of this paper is organized as follows. Section 1 introduces the purpose of this research. Section 2 reviews the basic theory of the PCA-SURF algorithm. Section 3 presents the chessboard segmentation algorithm used to speed up the image registration process; the random sample consensus algorithm is also presented in this section to obtain statistical registration results on the numbers of matches and mismatches. In Section 4, the time consumption of the different methods is compared to verify the advantages of our algorithm. In Section 5, metrics are defined to evaluate the repeatability of the registration results, and several tests are designed to evaluate the performance of the proposed method. Finally, Section 6 summarizes the contributions of this work.
2. Review of PCA-SURF
Principal component analysis (PCA) is a classical feature extraction and data representation technique widely used in the area of computer vision [6, 7]. PCA-SURF, a combination of PCA and SURF [8], aims to reduce the computation by compressing the data dimension.
2.1. PCA-SURF Description
Suppose that $x_i$ is a 64 × 1 SURF description vector of a reference image and $y_j$ is a 64 × 1 description vector of an object image. Let $X = [x_1, \ldots, x_m, y_1, \ldots, y_n]$ denote all SURF description vectors in the reference image and the object image, where $m$ and $n$ are the numbers of SURF features in the reference and object images, respectively. Now, a group of orthogonal projection directions must be found in order to project $X$ into a lower-dimensional space. The covariance matrix of the descriptors is
$$C = \frac{1}{m+n}\sum_{k=1}^{m+n}\left(X_k - \bar{X}\right)\left(X_k - \bar{X}\right)^{\mathrm{T}},$$
where $X_k$ is the $k$th column of $X$ and $\bar{X}$ is the mean descriptor.
Let $\lambda_i$ and $u_i$ denote the $i$th eigenvalue of $C$ and its corresponding eigenvector. The value of $\lambda_i$ reflects the amount of information carried along the direction $u_i$: a larger eigenvalue means more information. The eigenvectors are assembled into a matrix
$$U = [u_1, u_2, \ldots, u_{64}].$$
Here, it is important that $U$ be an orthogonal matrix: only under an orthogonal transformation do the original vectors keep their Euclidean distances and angles with each other. Moreover, in order to compress the dimension with the least information loss, the eigenvalues should be ordered so that
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{64}.$$
Then, with the orthogonal transformation, we have
$$z = U^{\mathrm{T}} x,$$
where $z$ is the feature in the new space corresponding to the original SURF feature $x$. Each column of $U$ is a principal component. To compress the dimension of the descriptor, the components corresponding to the smallest eigenvalues can be removed sequentially, because the smaller an eigenvalue is, the less information its component carries.
2.2. Application to Image Registration
In practical applications, two quantities called the contribution rate and the cumulative contribution rate should be defined:
$$\eta_i = \frac{\lambda_i}{\sum_{j=1}^{64} \lambda_j}, \qquad \eta(k) = \sum_{i=1}^{k} \eta_i.$$
The image registration task is then completed using the first $k$ principal components, which occupy a large proportion of the contribution rate. In order to find the best value of the cumulative contribution rate, some simulation results are presented in the following.
Figure 2 shows the repeatability of PCA-SURF with different cumulative contribution rates; the different lines represent object images with different rotations. Figure 3 presents the remaining dimensions for different cumulative contribution rates. From this comparison, it can be seen that PCA-SURF performs poorly when the rotation is larger than 20 degrees. When the rotation is less than 20 degrees, the cumulative contribution rate can be chosen as 0.95, which largely compresses the dimension of the SURF descriptor with little accuracy loss.
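The dimension-compression step described above can be sketched in a few lines. This is an illustrative Python sketch under the stated assumptions, not the authors' implementation; `pca_compress` is a hypothetical name, and `descriptors` is assumed to be the stacked 64-D SURF descriptors of both images:

```python
import numpy as np

def pca_compress(descriptors, cum_rate=0.95):
    """Compress 64-D SURF descriptors by PCA, keeping just enough
    principal components to reach the given cumulative contribution rate."""
    X = np.asarray(descriptors, dtype=float)        # shape (num_features, 64)
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)              # 64 x 64 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]               # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = np.cumsum(eigvals) / eigvals.sum()     # cumulative contribution rate
    k = int(np.searchsorted(ratios, cum_rate)) + 1  # smallest k reaching the rate
    return (X - mean) @ eigvecs[:, :k], k           # projected descriptors
```

Choosing `cum_rate = 0.95` corresponds to the setting recommended above for rotations below 20 degrees.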
3. CSA and RANSAC
3.1. Chessboard Segmentation Algorithm
According to [9–11], three strategies can be used to speed up the image registration process: decrease the dimension of the key-point descriptors, decompose the task and process the parts with multiple threads, and decrease the number of key points. In this paper, the latter two strategies are used to improve the speed performance.
The chessboard segmentation algorithm, which combines these two ideas, can greatly improve the speed without losing repeatability.
Figure 4 shows an island image that has been segmented into blocks; the number in each block represents the number of SURF features it contains. Following the second idea, the feature extraction task of each block is allocated to a different thread. Following the third idea, only some representative blocks, rather than all of them, are selected. Several factors must be considered here: for example, once the block with the largest number of SURF features has been selected, the weights of the blocks around it should be decreased so that the regional distribution of selected blocks is more even. A block selection model is therefore established. The coordinate of each block is defined in Figure 4, and $n_{ij}$ denotes the number of SURF features in block $(i, j)$.
Because every image is different, the SURF feature counts of different images vary widely, so a normalization is defined as
$$\bar{n}_{ij} = \frac{n_{ij} - n_{\min}}{n_{\max} - n_{\min}},$$
where $n_{\min}$ and $n_{\max}$ are the minimum and maximum values of all $n_{ij}$, and $\bar{n}_{ij}$ is the normalization of $n_{ij}$. The normalized value $w_{ij} = \bar{n}_{ij}$ is taken as the weight of importance of each block, giving the weight matrix $W = [w_{ij}]$.
The local field attenuation effect models the requirement that the importance of the blocks around a selected block be decreased. The diagram of the local field attenuation effect is also shown in Figure 4. Suppose that block $(i, j)$ has been selected; then the normalized weight of each neighbouring block in the local field (shown in white) is decayed. With an attenuation threshold $T$, the weights are updated according to
$$w_{kl} \leftarrow w_{kl} - T$$
for every block $(k, l)$ in the local field of block $(i, j)$.
The overall flow chart is presented in Figure 5.
The major steps of the chessboard segmentation algorithm are as follows.
(1) Image Segmentation. First, CSA splits the source image into blocks, and the SURF features of each block are extracted in parallel.
(2) Data Normalization. The number of SURF features in each block is normalized to establish the weight matrix $W$.
(3) Block Selection. In each step, the block with the maximum weight in the candidate set is selected and removed from the candidate set.
(4) Local Attenuation Effect. After a block has been selected, the weight coefficients of the surrounding blocks are reduced by a threshold to simulate the local attenuation effect.
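The steps above can be sketched as follows. This is a hypothetical Python sketch of the selection loop only (the parallel feature extraction is omitted); the function name, the 8-neighbour local field, and the stopping rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def select_blocks(counts, num_blocks, threshold=None):
    """Greedy chessboard block selection with local field attenuation.

    counts: 2-D array of SURF feature counts per block.
    num_blocks: how many representative blocks to select.
    threshold: attenuation subtracted from the neighbours of each
        selected block; if None, the standard deviation of the weights
        is used (the automatic rule described below).
    """
    counts = np.asarray(counts, dtype=float)
    lo, hi = counts.min(), counts.max()
    w = (counts - lo) / (hi - lo) if hi > lo else np.zeros_like(counts)
    if threshold is None:
        threshold = float(w.std())                  # automatic threshold
    selected = []
    for _ in range(num_blocks):
        i, j = np.unravel_index(np.argmax(w), w.shape)
        if w[i, j] <= 0:                            # no informative block left
            break
        selected.append((int(i), int(j)))
        w[i, j] = -np.inf                           # remove from candidate set
        # attenuate the surrounding local field
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < w.shape[0] and 0 <= nj < w.shape[1]:
                    w[ni, nj] -= threshold
    return selected
```

Note how the attenuation pushes the second selection away from the first: after a block is chosen, a dense neighbour must beat the attenuated weight to be selected next.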
Two parameters affect the stability of CSA: the numbers of blocks in the $x$ and $y$ directions. The experimental determination of the block counts that maximize the stability of CSA is shown in Figure 6, based on an image registration task using a collection of different natural landmarks. In this figure, $N_x$ and $N_y$ denote the numbers of blocks in the $x$ and $y$ directions. The size of the object image to be matched with the reference image is 800 × 684, and the origin of the image coordinate system is located at the upper left corner of the image.
Figure 6 shows the repeatability for different combinations of $N_x$ and $N_y$; the red marker indicates the best result, corresponding to 100% repeatability. According to the experimental results, CSA works best when $N_x$ and $N_y$ are chosen near this marked combination.
Besides, the CSA threshold $T$ represents the strength of the local attenuation effect and is another important parameter that affects the CSA result. A solution is proposed to calculate the threshold automatically. A dataset $D = \{w_1, w_2, \ldots, w_N\}$ is defined, where the $w_i$ are the weight coefficients in the weight matrix. The threshold is then set to the standard deviation of the weight coefficients,
$$T = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(w_i - \bar{w}\right)^2},$$
where $\bar{w}$ is the mean of the dataset $D$. With this equation, a suitable threshold can be obtained.
Figures 7–9 present the results of CSA with different thresholds; the selected blocks are marked in green. In Figure 7, the threshold $T = 0$ disables the local attenuation effect, so every block whose feature count is nonzero is selected. Figure 8 shows the CSA result with a manually chosen threshold of 0.2, and Figure 9 shows the result with the automatically determined threshold. Figure 9 yields the most representative blocks with the smallest number of selections, demonstrating the benefit of automatic threshold determination.
3.2. Random Sample Consensus Algorithm
Once the result of image registration is obtained, it is hard to analyze the match ratio statistically because the result may contain hundreds of matched key points. In this section, the random sample consensus algorithm [12–14] is introduced to deal with this problem; it can also be used to remove wrongly matched key points [15].
3.2.1. Perspective Transformation
Perspective transformation is a commonly used model to represent the relationship between two images from different views.
Figure 10 is a demonstration of perspective transformation. $P$ is a point in plane 1 with coordinates $(x, y)$, and $P'$ is the corresponding point in plane 2 with coordinates $(x', y')$. To relate $P$ and $P'$, a perspective transformation matrix is defined as
$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}.$$
Then the relationship between $P$ and $P'$ is
$$s \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},$$
where $s$ is a scaling factor. Dividing the first and second rows by the third row gives
$$x' = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + h_{33}}, \qquad y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + h_{33}}.$$
Then, assuming that $h_{33} = 1$ and transferring the above equations into matrix form, we obtain
$$\begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -x' x & -x' y \\ 0 & 0 & 0 & x & y & 1 & -y' x & -y' y \end{bmatrix} h = \begin{bmatrix} x' \\ y' \end{bmatrix},$$
where $h = [h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}]^{\mathrm{T}}$.
To solve for $h$, four or more corresponding point pairs are required. Suppose that four corresponding pairs $(x_i, y_i)$ and $(x'_i, y'_i)$, $i = 1, \ldots, 4$, are known; stacking the two equations of each pair yields the linear system
$$A h = b,$$
where $A$ is the coefficient matrix built from the four pairs and $b$ collects the target coordinates.
Because of distortion, noise, or other reasons, the above system may be inconsistent. Therefore, the least squares method is applied to find the best-fitting solution
$$h = \left(A^{\mathrm{T}} A\right)^{-1} A^{\mathrm{T}} b.$$
Eventually, the perspective transformation matrix $H$ can be recovered from $h$.
Assume that $p_i$ is a key point in the first image, $q_i$ is the corresponding matched key point in the second image, and $H$ is the perspective transformation matrix between the two images. Then, in homogeneous coordinates,
$$\tilde{q}_i = H p_i,$$
where $\tilde{q}_i$ is the perspective projection of key point $p_i$. The perspective projection error is then defined as
$$e_i = \left\lVert q_i - \tilde{q}_i \right\rVert.$$
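The least-squares fit and the projection error can be sketched as follows, assuming NumPy; `solve_homography` and `projection_error` are illustrative names, not identifiers from the paper:

```python
import numpy as np

def solve_homography(src, dst):
    """Estimate the 3x3 perspective matrix H (with h33 = 1) from four
    or more point pairs by least squares."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def projection_error(H, p, q):
    """Euclidean distance between q and the perspective projection of p."""
    v = H @ np.array([p[0], p[1], 1.0])
    proj = v[:2] / v[2]                 # de-homogenise
    return float(np.linalg.norm(proj - np.asarray(q, dtype=float)))
```

With exactly four pairs the system is solved exactly; with more pairs `lstsq` returns the least-squares estimate described above.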
3.2.2. RANSAC Algorithm Process
Figure 11 shows the diagram flow of RANSAC algorithm. The main stages of RANSAC algorithm are described as follows:
(1) Initialization. The threshold of the minimum perspective projection error is initialized first, together with the maximum number of iterations.
(2) Perspective Transformation Solving. The perspective transformation is solved by the least squares method using four matched key-point pairs randomly selected from the set of putative correspondences between the source and target images.
(3) Perspective Matrix Validation. Once a new model has been obtained in an iteration, a validation is performed: the perspective projection error between the source-image key points and the target-image key points under the current model is calculated, and if it is smaller than the smallest error stored so far, the minimum perspective projection error is updated.
(4) Loop Termination. If the minimum error is still larger than the threshold and the maximum number of iterations has not been reached, the loop goes back to step 2; otherwise, the loop is terminated.
4. Time Consumption Comparison
To verify the advantages of the proposed CSA-SURF algorithm, the SURF and PCA-SURF algorithms are used for comparison. The feature extraction and matching processes differ among the three algorithms; their time complexities are discussed in the following.
4.1. Time Consumption for Feature Extraction
It is well known that the SIFT and SURF algorithms use the sliding-window method to detect local extrema. PCA-SURF and CSA-SURF, which share this mechanism, are improved algorithms based on SURF, but there are differences among them. To describe them more clearly, let $W$ and $H$ be the width and height of the image and $s$ the step size of the sliding window. Let $t_e$ be the time for solving the eigenvalues and eigenvectors, $t_t$ the time for transferring one feature from the old space to the new space, and $t_f$ the time for computing one local feature. The extraction times of SURF, PCA-SURF, and CSA-SURF are denoted $T_{\mathrm{SURF}}$, $T_{\mathrm{PCA}}$, and $T_{\mathrm{CSA}}$, respectively. The feature extraction procedures of the three algorithms are presented as Pseudocodes 1, 2, and 3.
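The structural difference among the three extraction procedures can be sketched in Python. Here `detect` and `project` are hypothetical stand-ins for the SURF local-feature computation and the PCA projection, and the step size of 8 is an arbitrary illustrative value:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_surf(image, detect, step=8):
    """SURF-style extraction: slide a window over the whole image;
    detect(image, x, y) stands in for the local feature computation."""
    h, w = len(image), len(image[0])
    feats = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            f = detect(image, x, y)
            if f is not None:
                feats.append(f)
    return feats

def extract_pca_surf(image, detect, project):
    """PCA-SURF: same sliding-window extraction, plus the per-feature
    cost of projecting every descriptor into the compressed space
    (the one-off eigendecomposition cost t_e is not modelled here)."""
    return [project(f) for f in extract_surf(image, detect)]

def extract_csa_surf(blocks, detect):
    """CSA-SURF: extract only on the selected blocks, in parallel."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda b: extract_surf(b, detect), blocks)
    return [f for feats in results for f in feats]
```

The sketch makes the complexity argument below concrete: PCA-SURF adds work per feature on top of SURF, while CSA-SURF both reduces the area scanned and parallelizes it.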



The extraction times satisfy
$$T_{\mathrm{CSA}} \le T_{\mathrm{SURF}} < T_{\mathrm{PCA}}.$$
SURF computes roughly $\lfloor W/s \rfloor \cdot \lfloor H/s \rfloor$ local features, where $\lfloor \cdot \rfloor$ is the rounding-down function, so $T_{\mathrm{SURF}} \approx \lfloor W/s \rfloor \lfloor H/s \rfloor \, t_f$. PCA-SURF performs the same sliding-window extraction and additionally spends $t_e$ on the eigendecomposition plus $t_t$ per feature on projecting the descriptors, so $T_{\mathrm{PCA}} > T_{\mathrm{SURF}}$. CSA-SURF extracts features only in the selected blocks and processes the blocks in parallel, so its extraction time is bounded by that of the largest selected block and is therefore no greater than $T_{\mathrm{SURF}}$. Hence the ordering above holds.
4.2. Time Consumption of Matching
In CSA-SURF, the image is divided into $N_x$ blocks horizontally and $N_y$ blocks vertically. Let $N$ be the number of features extracted by SURF and PCA-SURF and $N'$ the number extracted by CSA-SURF, with $N' \ll N$. The dimension of a PCA-SURF descriptor is $d < 64$, and the time to match one pair of features is proportional to the descriptor dimension. Matching two images therefore costs on the order of $64 N^2$ for SURF, $d N^2$ for PCA-SURF, and $64 N'^2$ for CSA-SURF. Since $N' \ll N$, the CSA-SURF matching time is immediately smaller than that of SURF; moreover, it is smaller than that of PCA-SURF whenever $64 N'^2 < d N^2$, that is, whenever $N'/N < \sqrt{d/64}$. For the spacecraft autonomous navigation problem using natural landmarks, only 5 to 10 features satisfying certain relative-distance constraints are required, so this condition is easily guaranteed.
From the above comparison, the proposed CSA-SURF method consumes the least time.
5. Evaluation
5.1. Evaluation Metrics
To describe the repeatability performance, the following evaluation metrics are defined. A positive key point is one that can be matched; a negative key point is one that cannot.
Suppose that $TP$ is the number of positive key points that are correctly matched, $FP$ is the number of negative key points that are falsely matched, $TN$ is the number of negative key points that are correctly left unmatched, and $FN$ is the number of positive key points that are falsely left unmatched. Figure 12 shows the relationships among these symbols.
Then we have the following definitions:
$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}.$$
Precision is the proportion of correct matches among all reported matches, and recall is the proportion of positive key points that are correctly matched. Precision and recall are conflicting, so a recall versus precision graph helps in analyzing the repeatability performance.
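As a minimal sketch of these metrics (the function name is illustrative), with guards for the empty-denominator cases:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP): fraction of reported matches that are
    correct.  Recall = TP/(TP+FN): fraction of true correspondences
    that were recovered."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```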
5.2. Speed Test
The following experiments were run on a laptop with an Intel Core i5 2.3 GHz CPU and 6 GB RAM under the Windows 7 operating system; VS2013 and OpenCV 2.4 were used to carry out all the experiments. All datasets are from the open-source UC Merced Land Use Dataset, with an image resolution of 256 × 256.
Figures 13–15 show the matching comparison results of fifty images for SURF, PCA-SURF, and CSA-SURF. It can be seen that PCA-SURF consumes much more time than SURF and CSA-SURF during feature extraction, while CSA-SURF is faster than both SURF and PCA-SURF in both the feature extraction and matching processes.
5.3. Repeatability Test
Figure 16 shows the repeatability of the different methods under different conditions. Because recall and precision may conflict in some conditions, a recall versus precision graph is used to analyze the repeatability performance. The red line is the test under scaling: CSA-SURF performs worse at large scale factors, is not stable over part of the scaling range, and works well for moderate scale factors. The green line represents the test of the CSA-SURF algorithm under rotation; its performance is much the same as in the scaling case, with its own unstable region. The blue line represents the test of the CSA-SURF algorithm under salt-and-pepper noise [16] with a noise density of 0.35. CSA-SURF is not stable under salt-and-pepper noise but still keeps its recall above 0.7.
Furthermore, the dataset of fifty images is also used to test the repeatability performance of the three methods. Figure 17 shows the results of the different methods on these images. The results show that the repeatability of SURF and CSA-SURF is very close; therefore, CSA-SURF speeds up the registration process without losing repeatability. PCA-SURF, however, is unstable, as it cannot obtain correct matches under some conditions.
5.4. Application on Natural Landmark Registration
The major purpose of this paper is to apply the proposed method to the natural landmark registration task. Five kinds of natural landmarks, including islands, airports, railways, rivers, and coastlines, are collected for demonstration testing.
Figure 18 presents the registration results for the Hawaiian islands with the different algorithms: SURF and CSA-SURF work well, but PCA-SURF produces a wrong match. Figure 19 shows the matching results for two images of O'Hare International Airport, indicating that the PCA-SURF algorithm is unstable, as it cannot find any matched key points in this example. Figures 20–22 are the matching results for Qingzang railway, river, and coastline images with the different algorithms.
The above testing results show that the CSA-SURF algorithm works well for images with significant contour features. The proposed CSA-SURF algorithm, improved from SURF, keeps the repeatability of SURF unchanged while improving the image matching speed. Many other pictures have also been tested; their results are not presented due to the page limitation of the paper.
6. Conclusion
Because of the restricted computational resources and harsh environment of an autonomous navigation system, traditional image registration methods cannot satisfy the practical speed requirements. A novel algorithm based on chessboard segmentation is proposed to solve this problem. Because the new method is built on SURF features, it inherits their advantages, such as scale and rotation invariance. To verify the improvement of the proposed algorithm, the recently proposed PCA-SURF algorithm is also presented in this paper for comparison. Besides, the RANSAC algorithm is applied to remove false matches and further improve the accuracy of the proposed algorithm. Thorough experiments have been carried out to demonstrate the performance of the proposed method; the simulation results show a great improvement in image registration speed without any loss of repeatability. Finally, the CSA-SURF algorithm is applied to the natural landmark registration task and works well. The proposed method is a good candidate for image registration in spacecraft autonomous navigation based on natural landmarks.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors gratefully acknowledge the support provided by the National Natural Science Foundation of China (nos. 61403197 and 61673212), the Natural Science Foundation of Jiangsu Province (no. BK20140830), the National Key Research and Development Plan (no. 2016YFB0500901), and the Open Fund of National Defense Key Discipline Laboratory of Micro-Spacecraft Technology (no. HIT.KLOF.MST.201705).
References
[1] A. C. Vigneron, A. H. J. de Ruiter, B. V. Burlton, and W. K. H. Soh, “Nonlinear filtering for autonomous navigation of spacecraft in highly elliptical orbit,” Acta Astronautica, vol. 126, pp. 138–149, 2016.
[2] S. Sha, C. Jianer, and L. Sanding, “A fast matching algorithm based on K-degree template,” in 2009 4th International Conference on Computer Science & Education, pp. 1967–1971, Nanning, China, 2009.
[3] Z. Xu, Y. Liu, S. Du, P. Wu, and J. Li, “DFOB: detecting and describing features by octagon filter bank for fast image matching,” Signal Processing: Image Communication, vol. 41, pp. 61–71, 2016.
[4] P. Zhao, Z. Bai, and W. Fan, “Research of fast image matching based on PCA,” Computer Technology and Its Applications, vol. 4, pp. 132–134, 2010.
[5] D. He and P. Jiang, “Fast image matching algorithm based on discrete Hartley transform,” Modern Defense Technology, vol. 44, no. 5, pp. 61–65, 2016.
[6] Y. Ke and R. Sukthankar, “PCA-SIFT: a more distinctive representation for local image descriptors,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), pp. II-506–II-513, Washington, DC, USA, 2004.
[7] J. Yang, D. Zhang, A. F. Frangi, and J.-y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[8] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[9] M. Bleyer and M. Gelautz, “Graph-cut-based stereo matching using image segmentation with symmetrical treatment of occlusions,” Signal Processing: Image Communication, vol. 22, no. 2, pp. 127–143, 2007.
[10] A. Pancham, D. Withey, and G. Bright, “Tracking image features with PCA-SURF descriptors,” in 14th IAPR International Conference on Machine Vision Applications (MVA), pp. 365–368, Tokyo, Japan, 2015.
[11] E. E. Maraş, M. Caniberk, and H. H. Maraş, “Automatic coastline detection using image enhancement and segmentation algorithms,” Polish Journal of Environmental Studies, vol. 25, no. 6, pp. 2519–2525, 2016.
[12] Y. Wang, J. Zheng, Q. Z. Xu, B. Li, and H. M. Hu, “An improved RANSAC based on the scale variation homogeneity,” Journal of Visual Communication and Image Representation, vol. 40, pp. 751–764, 2016.
[13] Y. Chen, Q. Sun, H. Xu, and L. Geng, “Matching method of remote sensing images based on SURF algorithm and RANSAC algorithm,” Jisuanji Kexue yu Tansuo, vol. 6, no. 9, pp. 822–828, 2012.
[14] Y. Zhao, R. Hong, and J. Jiang, “Visual summarization of image collections by fast RANSAC,” Neurocomputing, vol. 172, pp. 48–52, 2016.
[15] F. Yang, J. Guo, and J. Wang, “Image mismatching eliminating algorithm using structural similarity and geometric constraint,” Journal of Signal Processing, vol. 32, no. 1, pp. 83–90, 2016.
[16] R. H. Chan, C.-W. Ho, and M. Nikolova, “Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1479–1485, 2005.
Copyright
Copyright © 2018 YunHua Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.