Research Article  Open Access
Kezhao Li, "Vision Autonomous Relative Positioning and Orientating Algorithm for Distributed Micro/Nanosatellite Earth Observation System Based on Dual Quaternion", International Journal of Navigation and Observation, vol. 2010, Article ID 172679, 6 pages, 2010. https://doi.org/10.1155/2010/172679
Vision Autonomous Relative Positioning and Orientating Algorithm for Distributed Micro/Nanosatellite Earth Observation System Based on Dual Quaternion
Abstract
It is an effective way to analyze the real-time movement of space objects by using a distributed satellite earth observation system, which can provide stereographic images through the collaboration of its satellites. Relative position and pose estimation is one of the key technologies for such a distributed micro/nanosatellite earth observation system (DMSEOS). In this paper, on the basis of spacecraft attitude dynamics and the theory of machine vision, an autonomous positioning and orientating algorithm for distributed micro/nanosatellites based on dual quaternions and extended Kalman filtering (EKF) is proposed. First, the representation of a line transform unit by a dual quaternion is introduced. Then, the feature line point of the line transform unit is defined. Next, on the basis of spacecraft attitude dynamics and EKF theory, the state and observation equations are built. Finally, simulations show that this algorithm is an accurate and valid method for positioning and orientating in a distributed micro/nanosatellite earth observation system.
1. Introduction
After the Sichuan Wenchuan earthquake, many scientific problems remain for surveyors to ponder. Anomalous variations of terrain surface objects, for example, can be monitored dynamically by orbiting satellites to forecast earthquake or weather disasters. Unfortunately, traditional earth observation methods only provide post hoc analysis and currently cannot forecast or mitigate disasters.
Networked spacecraft systems, which depend on the collaboration of each spacecraft in the system, are a novel way to improve the efficiency of earth observation and to solve some difficult space problems. In recent years, with the development of modern science and technology, new space activities involving autonomous rendezvous and docking (AR&D) operations, such as capturing, maintaining, assembling, and attacking, have been considered by several countries, and substantial progress, including the Demonstration of Autonomous Rendezvous Technology (DART) [1], XSS-11 [2], Orbital Express (OE) [3], and the NASA Deep Space Network (DSN) [4], has been made by the USA. Autonomous positioning and orientating is one of the key technologies in all these space activities, and machine-vision-based approaches are an active research direction worldwide. However, some traditional algorithms suffer from disadvantages such as complicated descriptions, a heavy computational burden, and a lack of real-time capability.
Fortunately, the dual quaternion, introduced by Clifford [5], can represent the rigid motion of 3D objects and can handle the rotation and translation transformations simultaneously, so it is well suited to positioning and orientating for a DMSEOS.
Because the dual quaternion is used here for relative orientating and positioning based on a feature line transform unit, we first introduce the representation of line entities and their motion by dual quaternions. Then, the feature line point of the line transform unit is defined. Next, on the basis of spacecraft attitude dynamics and EKF theory, we build the state and observation equations. Finally, simulations show that this algorithm is an accurate and valid method for vision-based positioning and orientating of spacecraft.
2. Representation of Line Entities and Their Motion by Dual Quaternion
According to geometric algebra theory [4], the dual quaternion q̂ = q + εq′, with dual unit ε satisfying ε² = 0, can be found in a special even subalgebra and is spanned by the following basis:

{1, i, j, k, ε, εi, εj, εk}.  (1)
Since, by Euler's theorem, a rigid motion consists of a rotation and a translation, a simple rotor in its Euler representation for a rotation by an angle θ is

R = e^(−(θ/2)l) = cos(θ/2) − l·sin(θ/2),  (2)

where l = l₁e₂₃ + l₂e₃₁ + l₃e₁₂ is the unit 3D bivector of the rotation axis, spanned by the bivector basis e₂₃, e₃₁, and e₁₂.
In the dual quaternion framework, a translation t is represented by the spinor T = 1 + (ε/2)t. Applying T from the left and its conjugate T̃ from the right to the rotor R in formula (2), we get the modified rotor

M = T R T̃ = R + (ε/2)(tR − Rt).  (3)
As we know, a 3D line can be represented in Plücker coordinates as

L = l + εm, with m = p × l,  (4)

where l is the unit direction vector of the line, p is any point on it, and m is its moment.
Hence the Euclidean transformation of the 3D line L by the modified rotor can be represented as

L′ = M L M̃.  (5)
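For concreteness, the rigid motion of a line in formula (5) can be evaluated numerically. The sketch below applies a rotation and a translation to a line in Plücker form; it is an illustrative sketch rather than the paper's implementation, and the quaternion convention (w, x, y, z) and the helper names are our own.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) convention."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Rotate a 3-vector v by the unit quaternion q (sandwich product)."""
    vq = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(q, vq), quat_conj(q))[1:]

def transform_line(q, t, l, m):
    """Rigid motion of a Plücker line (l, m): rotate direction and moment
    by q, then account for the translation t via the cross product."""
    l2 = rotate(q, l)
    m2 = rotate(q, m) + np.cross(t, l2)
    return l2, m2
```

For example, a line along the x-axis through the origin, rotated by 90° about z and translated by (1, 0, 0), becomes a line along the y-axis with moment (0, 0, 1); this direct Plücker update is equivalent to the dual quaternion sandwich of formula (5).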
3. Visual Autonomous Positioning and Orientating Algorithm Based on Dual Quaternion for Distributed Micro/Nanosatellite Earth Observation System
3.1. Camera Modelling and the Relation with Object Frame
Figure 1 shows the relation of the projective line and the projective plane to the space object frame. The object frame is the body frame of the target satellite in this paper; the other two frames are the camera frame and the image frame.
In Figure 1, a 3D object line and its projective line are shown, and the transformation between them can be represented as in formula (5). But how can this transformation be calculated from image coordinates and object coordinates? To process the data simply, we first give the following definition.
The feature line point is the intersection of the projective line with the perpendicular to it that passes through the origin of the image frame; this point is unique.
In the camera frame, the projective plane, shown in grey, can be represented as

ax + by + cz = 0,  (6)

where a² + b² + c² = 1.
From Figure 1, we can see that the projective line lies in both the projective plane and the image plane. When z = f (where f is the focal length of the camera), the equation of the projective line in the image plane can be described as

au + bv + cf = 0.  (7)
Thus the normal vector of the projective plane, along whose in-plane projection the feature line point lies, is

n = (a, b, c)ᵀ.  (8)
Then the coordinates of the feature line point are described as

( −acf/(a² + b²), −bcf/(a² + b²), f ).  (9)
From formula (9), we can see that the feature line point coordinates all depend on a, b, and c. How can a, b, and c be calculated from the observation values? According to [6], the relevant formulas are given as follows.
Consider two points p₁ and p₂ on the projective line; then

(a, b, c) ∝ ( v₁ − v₂, u₂ − u₁, (u₁v₂ − u₂v₁)/f ),  (10)

where (u₁, v₁) and (u₂, v₂) are the image coordinates of p₁ and p₂, which are the observation values, and a² + b² + c² = 1 is the constraint condition.
The corresponding points of p₁ and p₂ are two points P₁ and P₂ on the 3D line, and their camera-frame representations follow from the Euclidean transformation of formula (5).
Thus the relation between the 3D line and its projective line, as in formula (5), is built by using the known coordinates and the observation values.
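The feature-line-point computation can be sketched numerically as follows; the parametrization a·u + b·v + c·f = 0 for the projected line and the function names are illustrative assumptions rather than the paper's exact notation.

```python
import numpy as np

def line_from_points(p1, p2, f):
    """Unit-normalized coefficients (a, b, c) of the projected line
    a*u + b*v + c*f = 0 through image points p1 = (u1, v1), p2 = (u2, v2)."""
    u1, v1 = p1
    u2, v2 = p2
    abc = np.array([v1 - v2, u2 - u1, (u1 * v2 - u2 * v1) / f])
    return abc / np.linalg.norm(abc)   # enforce a^2 + b^2 + c^2 = 1

def feature_line_point(a, b, c, f):
    """Foot of the perpendicular from the image-frame origin to the
    projected line, lifted to the image plane z = f: the unique
    feature line point."""
    s = a * a + b * b
    return np.array([-a * c * f / s, -b * c * f / s, f])
```

For instance, the line through image points (1, 0) and (0, 1) with f = 1 yields the feature line point (0.5, 0.5, 1), which indeed lies on the line and on the perpendicular from the origin.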
3.2. State Equation of the Algorithm
To solve the dynamic estimation problem with the EKF, the state equation must be built, and the first question is how to select the state variables. Since the filter computation time grows with the number of state variables, fewer variables are desirable. Following the approach of [7–9], thirteen variables could be used; in those applications the angular velocity vector is regarded as constant. In positioning and orientating for spacecraft, however, the relative angular velocity vector is variable. In this paper we estimate it in advance and treat it as a time-varying input, which reduces the number of state variables while still handling the practical problem well. Thus the state variables are three relative position variables, three relative velocity variables, and four relative attitude (quaternion) variables, where the position and velocity vectors are defined in the orbital frame of the target spacecraft and the attitude quaternion is defined with respect to the body frame of the active satellite. However, vision positioning and orientating is based on the camera frame, so we must relate the camera frame, the orbital frame, and the body frame to one another; this is studied in Section 3.3.
As mentioned above, the state vector assignment is

x = [x, y, z, ẋ, ẏ, ż, q₀, q₁, q₂, q₃]ᵀ.  (12)

Without disturbances, according to [10], the nonlinear continuous equation of x, formula (13), couples the relative orbital dynamics of the translational states with the quaternion attitude kinematics, where ω₀ is the orbital angular velocity of the target satellite and ω = [ωₓ, ω_y, ω_z]ᵀ is the relative angular velocity vector treated as an input. From formula (13), we can get the linearization matrix of the state equation.
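The filter built on this state equation follows the standard EKF cycle. The sketch below shows one generic predict/update step; the dynamics f, measurement model h, and their Jacobians F and H are placeholders standing in for the paper's specific equations, not its exact expressions.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One extended Kalman filter cycle: propagate the state estimate x and
    covariance P through the (linearized) dynamics, then correct them with
    the measurement z."""
    # Prediction with the nonlinear dynamics f and its Jacobian F
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update with the nonlinear measurement model h and its Jacobian H
    innov = z - h(x_pred)                 # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Note that in a filter whose state contains a quaternion, as here, the quaternion components should be renormalized to unit length after each update.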
3.3. Observation Equation of the Algorithm
From formulas (9) and (12), we can see that the state variables we selected are not directly related to the image coordinates, so we must build the relationship between them before the observation equation is built. Below, we first discuss the transformation of the relative position vector, and then the transformation of the relative attitude quaternion.
3.3.1. The Transformation of the Relative Position Vector
(a) Transform from the orbital frame of the target spacecraft to the inertial frame of the active spacecraft, using the transformation matrix from the target spacecraft's orbital frame to its inertial frame and the transformation matrix from the target spacecraft's second orbital frame to its geocentric orbital frame.
(b) Transform from the inertial frame of the active spacecraft to its body frame, using the transformation matrix from the active spacecraft's second orbital frame to its body frame, the transformation matrix from the active spacecraft's geocentric orbital frame to its second orbital frame, and the transformation matrix from the inertial frame of the active spacecraft to its geocentric orbital frame.
(c) Transform from the body frame of the active spacecraft to the camera frame, using the rotation (attitude) transformation matrix and the translation vector from the body frame of the active spacecraft to the camera frame; both can be designed or measured.
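Steps (a)–(c) amount to composing direction-cosine matrices with one translation. A schematic sketch follows, with hypothetical matrix names (C_oi, C_ib, C_bc, t_bc) since the paper's symbols are not reproduced here, and with the translation expressed in the camera frame as one possible convention.

```python
import numpy as np

def rot_z(a):
    """Elementary direction-cosine matrix for a frame rotation about z."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def to_camera(p_orb_target, C_oi, C_ib, C_bc, t_bc):
    """Carry a point from the target's orbital frame into the camera frame:
    orbital -> inertial -> active body -> camera (steps (a)-(c))."""
    p_inertial = C_oi @ p_orb_target
    p_body = C_ib @ p_inertial
    return C_bc @ p_body + t_bc
```

With identity matrices for the first two legs and a 90° camera mounting rotation about z, the point (1, 0, 0) maps to (0, −1, 0) in the camera frame.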
3.3.2. The Transformation of the Relative Attitude Quaternion
We can translate the relative attitude into quaternion form according to the standard conversion between the direction cosine matrix and the quaternion.
We can then transform the attitude from the body frame of the active spacecraft to the camera frame.
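The cosine-matrix/quaternion conversion used here is standard; one common scalar-first form is sketched below. This is a generic formula, not specific to the paper.

```python
import numpy as np

def quat_to_dcm(q):
    """Direction-cosine matrix equivalent to the unit quaternion
    q = (q0, q1, q2, q3), scalar part first (frame-rotation convention)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
```

The resulting matrix is orthonormal; for a 90° rotation about z it carries the coordinates of a fixed x-axis vector to (0, −1, 0), consistent with the frame-rotation (passive) convention common in attitude dynamics.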
Thus the relationship between the state variables and the observation variables is built. Obviously, formula (9) is a nonlinear equation of the state, and it must be linearized for visual position and pose estimation based on the EKF and dual quaternions. On the basis of [8], the partial derivatives of each feature line point in formula (9) need to be computed.
From formula (5), the line parameters can be reduced to functions of the state, as in formula (20), where the matrices involved are the transformation matrix from the active satellite's body frame to the camera frame, the transformation matrix from the passive satellite's body frame to the active satellite's body frame, and the transformation matrix from the passive spacecraft's orbital frame to the camera frame; the remaining quantities are the coordinates of the active satellite's center in the passive satellite's orbital frame and the translation vector from the active satellite's center to the camera origin.
Thus the partial derivatives of the observations with respect to each state variable can now be calculated according to formula (20), and the linearization matrix of the observation equation is obtained.
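The partial derivatives forming the linearization matrix can also be verified (or approximated) by central finite differences; a generic sketch, independent of the paper's analytic expressions:

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Central-difference Jacobian of a vector-valued function h at x,
    useful for checking an analytically derived linearization matrix."""
    x = np.asarray(x, dtype=float)
    h0 = np.asarray(h(x))
    J = np.zeros((h0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(h(x + dx)) - np.asarray(h(x - dx))) / (2 * eps)
    return J
```

For example, for h(x) = (x₀², x₀x₁) at x = (2, 3), the routine recovers the analytic Jacobian [[4, 0], [3, 2]] to numerical precision.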
4. Simulations and Analyses
On the basis of the algorithm above, let the camera focal length be fixed and the target spacecraft be a cube, with the body-frame coordinates of its eight feature points given. Table 1 lists the initial parameters of the simulations. The simulation time is three orbital periods. Figures 2 and 3 show the simulation results.

Since quaternion components are not intuitive to interpret, the attitude results are presented in their Euler-angle form.
From Figures 2 and 3, we can see that the algorithm converges. Figure 2 shows that the relative position errors are within ±0.045 m before about 1000 seconds of simulation time; after 1000 seconds the relative position errors decrease and approach zero. Figure 3 shows that the relative yaw angle errors are about −1.6 to 0 mrad before about 400 seconds; after 400 seconds they decrease and approach zero. The relative pitch error is about −0.9 to 0.1 mrad before 800 seconds, after which it stays within ±0.1 mrad. The relative roll angle errors are about −0.6 to 1.2 mrad before 800 seconds, after which they decrease to within −0.2 to 0 mrad. These simulation results indicate that the proposed algorithm can meet the requirements of high-precision missions in a distributed earth observation system.
5. Conclusions
On the basis of the theories of dual quaternions, extended Kalman filtering (EKF), and spacecraft attitude dynamics, we have put forward an autonomous positioning and orientating algorithm for a distributed micro/nanosatellite earth observation system based on dual quaternions. The simulation results show that this algorithm is an accurate and valid method for positioning and orientating distributed satellites. In future work, we will consider disturbance factors acting on the satellites in order to improve the practicality of this algorithm.
Acknowledgments
The author wishes to thank Jikun Ou, a rigorous and kind professor at the Institute of Geodesy and Geophysics, Chinese Academy of Sciences. This work is supported by Projects of the Ministry of Land and Resources, P.R.C. (Grant no. 1212010914015), the Doctor Foundation of Henan Polytechnic University (Grant no. 648296), and the National Natural Science Fund for Distinguished Young Scholars (Grant no. 50525414).
References
B. Iannotta, “DART aims at space rendezvous,” Aerospace America, vol. 43, no. 3, pp. 26–30, 2005.
A. Tatsch, N. Fitz-Coy, and S. Gladun, “On-orbit servicing: a brief survey,” in Proceedings of the IEEE International Workshop on Safety, Security, and Rescue Robotics (SSRR '06), pp. 276–281, Gaithersburg, Md, USA, August 2006.
J. W. Evans and T. A. Mulder, “Autonomous rendezvous guidance and navigation for orbital express and beyond,” in Proceedings of the 16th AAS/AIAA Space Flight Mechanics Conference, Tampa, Fla, USA, January 2006, Paper AAS 06-194.
 About the Deep Space Network, 2010, http://deepspace.jpl.nasa.gov/dsn/about.html.
Y. Zhang, G. Sommer, and E. Bayro-Corrochano, The Motor Extended Kalman Filter for Dynamic Rigid Motion Estimation from Line Observations, 2006, http://www.ks.informatik.unikiel.de/~vision/doc/Publications/geocom/zhang_sommer.pdf.
Y. C. Liu, T. S. Huang, and O. D. Faugeras, “Determination of camera location from 2D to 3D line and point correspondences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 28–37, 1990.
T. J. Broida, S. Chandrashekhar, and R. Chellappa, “Recursive 3D motion estimation from a monocular image sequence,” IEEE Transactions on Aerospace and Electronic Systems, vol. 26, no. 4, pp. 639–656, 1990.
J. S. Goddard, Pose and Motion Estimation from Vision Using Dual Quaternion-Based Extended Kalman Filtering, Ph.D. thesis, The University of Tennessee, Knoxville, Tenn, USA, 1997.
 D. F. Malan, 3D tracking between satellites using monocular computer vision, M.S. thesis, Department of Electrical & Electronic Engineering, University of Stellenbosch, Stellenbosch, South Africa, December 2004.
Y. L. Xiao, Foundation of Flight Dynamics: Modeling of Aerospace Vehicle Motion, BeiHang University Press, Beijing, China, 2003.
Copyright
Copyright © 2010 Kezhao Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.