Journal of Sensors

Volume 2018, Article ID 6308530, 10 pages

https://doi.org/10.1155/2018/6308530

## Optimal Attitude Determination from Vector Sensors Using Fast Analytical Singular Value Decomposition

^{1}State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
^{2}Beijing University of Posts and Telecommunications, Beijing 100876, China
^{3}State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
^{4}School of Automation, University of Electronic Science and Technology of China, Chengdu, China

Correspondence should be addressed to Xiangyang Gong; xygong@bupt.edu.cn and Jin Wu; jin_wu_uestc@hotmail.com

Received 19 August 2017; Accepted 26 November 2017; Published 27 June 2018

Academic Editor: Francesco Dell'Olio

Copyright © 2018 Zhuohua Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A novel algorithm is proposed in this paper to solve the optimal attitude determination formulation from vector observation pairs, that is, the Wahba problem. We propose a fast analytical singular value decomposition (SVD) approach to obtain the optimal attitude matrix. The derivations and necessary proofs are presented to clarify the theory and support its feasibility. The proposed algorithm is validated through simulation experiments. The results show that it maintains the same attitude determination accuracy and robustness as conventional methodologies but significantly reduces the computation time.

#### 1. Introduction

In aerial applications, strapdown vector sensors are often required to directly give the rotation matrix between the body frame and inertial frame [1]. With the development of modern global navigation satellite systems (GNSS), GNSS-based attitude determination has become a vital part of spacecraft instrumentation and measurement [2]. This leads to different algorithms of attitude determination on various platforms, for example, remote sensing and marine navigation [3, 4]. In such applications, the attitude determination from vector observation pairs is usually an effective solution [5, 6]. This approach is referred to as the Wahba problem, posed in 1965, which minimizes the following loss function:

*L*(**C**) = (1/2)∑_{i=1}^{n} *a*_{i}‖**b**_{i} − **Cr**_{i}‖²

in which **b**_{i} and **r**_{i} are the *i*th pair of vector observations obtained in the body frame and reference frame, respectively [7]. *a*_{i} is the positive weight of the *i*th pair, and the weights sum to 1. Here, **C** is the 3-dimensional direction cosine matrix (DCM) belonging to SO(3) [8]. Initial research on this problem mainly focused on the exact solution rather than its robustness [9]. For instance, the above loss function can be explicitly expanded into the augmented rotation equations. Brute-force calculation of such a system requires a large matrix memory and a sophisticated generalized inverse. Such a solution is not only time and space consuming, but it also breaks the orthogonality of the DCM, which in turn demands a later matrix correction that degrades the optimality of the solution. A robust framework was proposed by Davenport in 1968, who analytically converted the Wahba problem into an eigenvalue problem [9]. The estimation target is accordingly shifted from the DCM to an attitude quaternion. Following this formulation, many famous attitude estimators were proposed over the next 30 years, including Shuster's QUaternion ESTimator (QUEST), Markley's Fast Optimal Attitude Matrix (FOAM), Mortari's EStimator of the Optimal Quaternion (ESOQ), and Wu's Fast Linear Attitude Estimator (FLAE) [10–13].
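As a concrete reading of the loss function above, the following is a minimal Python sketch (our illustration, not part of the paper's algorithm) that evaluates *L*(**C**) for given observation pairs and weights:

```python
import math

def mat_vec(C, v):
    """3x3 matrix times 3-vector."""
    return [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]

def wahba_loss(C, pairs, weights):
    """L(C) = 1/2 * sum_i a_i * ||b_i - C r_i||^2."""
    total = 0.0
    for (b, r), a in zip(pairs, weights):
        Cr = mat_vec(C, r)
        total += a * sum((b[i] - Cr[i]) ** 2 for i in range(3))
    return 0.5 * total

# Noise-free example: with C = I and b_i = r_i, the loss is exactly zero.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pairs = [((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
         ((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))]
print(wahba_loss(I3, pairs, [0.5, 0.5]))  # 0.0
```

Any misalignment between the rotated reference vectors and the body-frame observations increases the loss, which is what the estimators discussed below minimize.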

The robust solutions to Wahba's problem provide the possibility of accurate attitude determination from sensors such as the sun sensor, nadir sensor, star tracker, magnetometer, and so on [14]. They can significantly improve the attitude estimation results when combined with gyroscopes [15–17]. Apart from Davenport's method, it is also notable that the singular value decomposition (SVD) can be effective. Markley proposed the SVD-based Wahba solution in 1988, where the DCM is composed of the decomposition results from the SVD [18]. The SVD algorithm has proven to be very robust: even when facing critical cases of sensor combinations, its accuracy is maintained. However, conventional SVD computation is very complicated. Could there be a fast analytical SVD algorithm that can boost its execution on embedded platforms?

The answer is positive. As a matter of fact, the SVD of a 3 × 3 matrix can be computed much faster than in the general case. In this paper, we focus on presenting a novel fast SVD algorithm. Derivations are given to show why it is fast. Simulations are then carried out to compare the proposed algorithm with representative ones on accuracy, robustness, and time consumption, which reflects its superiority in real applications.

This paper is structured as follows: Section 2 contains our fast SVD theory and its relationship with optimal attitude determination. Section 3 includes the experimental results. Section 4 consists of concluding remarks.

#### 2. Our Method

The matrix theory introduced in this paper is mainly referenced from [19]. Given an arbitrary nonsingular real square matrix **A**, the matrix **G** = **A**^{T}**A** is real symmetric. One can compute **A**'s singular values from **G**'s eigenvalues via *σ*_{i} = √*λ*_{i}, *i* = 1, 2, 3. The Householder transform and QR factorization are usually adopted for SVD. However, in real applications, they are proven to be time consuming. Here, we define that the matrix **A** is decomposed by **A** = **UΣV**^{T}, where **Σ** = diag(*σ*_{1}, *σ*_{2}, *σ*_{3}) and **U** and **V** are 3 × 3 orthonormal matrices. It follows that **G** = **A**^{T}**A** = **VΣ**^{2}**V**^{T}; that is to say, the factorization of **A** can be significantly simplified once the eigendecomposition of **G** is completed. Actually, such a technique is very common in scientific computation, using the QR decomposition or the Jacobi method imposed on **G**.
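The relation *σ*_{i} = √*λ*_{i} can be checked on a constructed example. The following Python sketch (our illustration: **A** is deliberately built as a rotation times a diagonal matrix, so **V** = **I** and **G** comes out diagonal) verifies it numerically:

```python
import math

# Toy check of sigma_i = sqrt(lambda_i(A^T A)): take A = R * D with R a
# rotation about z by 0.3 rad and D = diag(3, 2, 1), so A's singular
# values are 3, 2, 1 by construction.
c, s = math.cos(0.3), math.sin(0.3)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
D = [3.0, 2.0, 1.0]
A = [[R[i][j] * D[j] for j in range(3)] for i in range(3)]

# G = A^T A; since R^T R = I here, G = D^2 = diag(9, 4, 1) and V = I,
# so the eigenvalues sit on the diagonal.
G = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
sigmas = [math.sqrt(G[i][i]) for i in range(3)]
print(sigmas)  # ~[3.0, 2.0, 1.0]
```

For a general **A**, the eigenvalues of **G** are of course not on the diagonal; the analytical eigendecomposition below recovers them.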

Now we propose the analytic eigenvalue decomposition of **G**. The characteristic polynomial of **G** is given by

det(*λ***I** − **G**) = *λ*³ + *c*_{2}*λ*² + *c*_{1}*λ* + *c*_{0} = 0

where

*c*_{2} = −(*G*_{11} + *G*_{22} + *G*_{33}), *c*_{1} = *G*_{11}*G*_{22} + *G*_{11}*G*_{33} + *G*_{22}*G*_{33} − *G*_{12}² − *G*_{13}² − *G*_{23}², *c*_{0} = −det(**G**)

with *G*_{ij} denoting the element of **G** in the *i*th row and *j*th column. Letting *λ* = *t* − *c*_{2}/3, the characteristic polynomial can be converted to the depressed cubic

*t*³ + *pt* + *q* = 0

where

*p* = *c*_{1} − *c*_{2}²/3, *q* = (2*c*_{2}³)/27 − (*c*_{2}*c*_{1})/3 + *c*_{0}

The discriminant is established by

Δ = (*q*/2)² + (*p*/3)³
As far as the sign of the discriminant is concerned, there are three specific situations:

(1) Δ > 0: there are one real root and a pair of conjugate complex roots,

*t*_{1} = *u* + *v*, *t*_{2,3} = −(*u* + *v*)/2 ± (√3/2)(*u* − *v*)*i*

where *u* = ∛(−*q*/2 + √Δ) and *v* = ∛(−*q*/2 − √Δ), in which *i* denotes the unitary imaginary number.

(2) Δ = 0: there are three real roots, two of which are equal to each other,

*t*_{1} = 3*q*/*p*, *t*_{2} = *t*_{3} = −3*q*/(2*p*)

Especially, when *p* = *q* = 0, all the roots are zeros.

(3) Δ < 0: there are three different real roots,

*t*_{k} = 2√(−*p*/3) cos(*φ*/3 − 2π*k*/3), *k* = 0, 1, 2

where *φ* = arccos[(3*q*)/(2*p*) · √(−3/*p*)].
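Case (3) is the one used most in practice. A small Python sketch of the trigonometric root formula (our illustration, tested on a cubic with known roots) is:

```python
import math

def depressed_cubic_roots(p, q):
    """Three real roots of t^3 + p*t + q = 0 when the discriminant
    (q/2)^2 + (p/3)^3 is negative (case (3) above)."""
    m = 2.0 * math.sqrt(-p / 3.0)
    phi = math.acos(3.0 * q / (2.0 * p) * math.sqrt(-3.0 / p))
    return [m * math.cos(phi / 3.0 - 2.0 * math.pi * k / 3.0)
            for k in range(3)]

# (t - 1)(t - 2)(t + 3) = t^3 - 7t + 6: roots 2, 1, -3 (k = 0 gives the largest).
print(depressed_cubic_roots(-7.0, 6.0))
```

Note that k = 0 always yields the largest root, which is convenient for producing the eigenvalues directly in descending order.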

Based on the three conditions and inserting the roots into *λ* = *t* − *c*_{2}/3, **G**'s eigenvalues can be obtained. Here, we assume that they are sorted in descending order, *λ*_{1} ≥ *λ*_{2} ≥ *λ*_{3}. Note that **G** is real symmetric, so its eigenvalues are real numbers; that is, the roots of the characteristic polynomial are real as well. Hence, only the abovementioned cases (2) and (3) can occur. Apart from this, according to Vieta's theorem [20], the eigenvalues are associated with

*λ*_{1} + *λ*_{2} + *λ*_{3} = −*c*_{2}, *λ*_{1}*λ*_{2} + *λ*_{1}*λ*_{3} + *λ*_{2}*λ*_{3} = *c*_{1}, *λ*_{1}*λ*_{2}*λ*_{3} = −*c*_{0}

Since the eigenvalues are assumed to be real, the maximum condition number could be that of *λ*_{3}, while the minimum one is that of *λ*_{1}. Now we define the condition number of the characteristic polynomial equation as

*κ* = *λ*_{1}/*λ*_{3}

Before computation, it is suggested that users check the condition number first. If the condition number is too large, the numerical stability of the calculated results is not satisfactory. In such circumstances, we may resort to a more robust method, for example, the Jacobi method.

Based on the matrix theory, the orthogonal eigenvectors corresponding to the above-computed eigenvalues can be determined as follows:
(1) The eigenvalues have a multiplicity of 3 (*λ*_{1} = *λ*_{2} = *λ*_{3}). Then **G** = *λ*_{1}**I**, and three orthonormal eigenvectors can be immediately established by

**v**_{1} = (1, 0, 0)^{T}, **v**_{2} = (0, 1, 0)^{T}, **v**_{3} = (0, 0, 1)^{T}
(2) The eigenvalues have a multiplicity of 2 (denote the repeated eigenvalue by *λ*_{d} and the simple one by *λ*_{s}). The eigenvector **v** associated with the eigenvalue *λ*_{d} meets

(**G** − *λ*_{d}**I**)**v** = **0**

Letting **F** = **G** − *λ*_{d}**I**, we have rank(**F**) = 1; that is, the three row vectors of **F** are linearly correlated, and each nonzero row is proportional to the eigenvector of the simple eigenvalue *λ*_{s}. Assuming that the maximum-magnitude element lies in the row **f**_{k} = (*f*_{k1}, *f*_{k2}, *f*_{k3}), the eigenspace of *λ*_{d} is the plane **f**_{k}^{T}**v** = 0, where **f**_{i} is the *i*th row vector of the matrix **F**. If the maximum-magnitude element is *f*_{k1} or *f*_{k2}, an eigenvector can be directly established by

**v** = (*f*_{k2}, −*f*_{k1}, 0)^{T}/√(*f*_{k1}² + *f*_{k2}²)

And if it is *f*_{k3}, we have

**v** = (0, *f*_{k3}, −*f*_{k2})^{T}/√(*f*_{k2}² + *f*_{k3}²)

A second eigenvector of *λ*_{d} is **w** = (**f**_{k}/‖**f**_{k}‖) × **v**, while **f**_{k}/‖**f**_{k}‖ itself is the eigenvector of *λ*_{s}. When *λ*_{1} = *λ*_{2}, the eigenvectors are composed by

**V** = (**v**, **w**, **f**_{k}/‖**f**_{k}‖)

while when *λ*_{2} = *λ*_{3}, the eigenvectors are composed by

**V** = (**f**_{k}/‖**f**_{k}‖, **v**, **w**)
(3) The eigenvalues are all different. The eigenvector **v** associated with the eigenvalue *λ* meets

(**G** − *λ***I**)**v** = **0**

Letting **F** = **G** − *λ***I**, we have det(**F**) = 0, which describes that the three row vectors **f**_{1}, **f**_{2}, **f**_{3} of **F** should be on the same plane, with the preliminary definition that the zero vector can be on an arbitrary plane. Note that at least two of them are not parallel with each other. Here, we define

**p**_{1} = **f**_{1} × **f**_{2}, **p**_{2} = **f**_{2} × **f**_{3}, **p**_{3} = **f**_{3} × **f**_{1}

Theoretically, **p**_{1}, **p**_{2}, and **p**_{3} are parallel. Every one of them can be the eigenvector provided that it is not a zero vector. In engineering practice, the one with the maximum norm is usually chosen for computing the eigenvector **v**_{1} belonging to the eigenvalue *λ*_{1}. Using the same technique, the eigenvector **v**_{2} can be computed. As for **v**_{3}, it can be given by **v**_{3} = **v**_{1} × **v**_{2}.
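A minimal Python sketch of this cross-product construction (on an illustrative symmetric matrix of our own, not from the paper) is:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# Illustrative symmetric matrix with distinct eigenvalues; lam is the largest
# one (the 2x2 block [[4, 1], [1, 3]] has eigenvalues (7 +/- sqrt(5))/2).
G = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 1.0]]
lam = (7.0 + math.sqrt(5.0)) / 2.0

F = [[G[i][j] - (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
# The three cross products of the rows of F are parallel (some may be zero);
# pick the one with the largest norm and normalize it.
cands = [cross(F[0], F[1]), cross(F[1], F[2]), cross(F[2], F[0])]
v = max(cands, key=lambda u: sum(x * x for x in u))
n = math.sqrt(sum(x * x for x in v))
v = [x / n for x in v]

# Check the eigenpair: G v = lam v.
Gv = [sum(G[i][j] * v[j] for j in range(3)) for i in range(3)]
print(max(abs(Gv[i] - lam * v[i]) for i in range(3)))  # ~0
```

In this example the first cross product happens to vanish, which is exactly why the maximum-norm selection matters in practice.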

Finally, the eigenvectors are normalized. Letting **V** = (**v**_{1}, **v**_{2}, **v**_{3}), one can write

**G** = **A**^{T}**A** = **VΣ**^{2}**V**^{T}

in which

**Σ** = diag(*σ*_{1}, *σ*_{2}, *σ*_{3}) = diag(√*λ*_{1}, √*λ*_{2}, √*λ*_{3})

Inserting **A** = **UΣV**^{T} and **G** = **VΣ**^{2}**V**^{T} into the decomposition, we obtain **AV** = **UΣ**, which is transformed into

**U** = **AVΣ**^{−1}, that is, **u**_{i} = **Av**_{i}/*σ*_{i}, *i* = 1, 2, 3

Extracting **U** in this way, we can easily verify that **U** is an orthogonal matrix, namely, **U**^{T}**U** = **UU**^{T} = **I**. Now, we have finished the presentation of the proposed SVD method.
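The column-wise construction **u**_{i} = **Av**_{i}/*σ*_{i} and the resulting orthogonality of **U** can be sanity-checked in Python (again on a constructed example where **V** = **I**, so the true **U** is the rotation **R**; our illustration only):

```python
import math

# A = R * diag(3, 2, 1): here V = I and sigma = (3, 2, 1), so the columns
# u_i = A v_i / sigma_i should recover the columns of R.
c, s = math.cos(0.3), math.sin(0.3)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
sigma = [3.0, 2.0, 1.0]
A = [[R[i][j] * sigma[j] for j in range(3)] for i in range(3)]
V = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Assemble U column by column: U = A V Sigma^{-1}.
U = [[sum(A[i][k] * V[k][j] for k in range(3)) / sigma[j]
      for j in range(3)] for i in range(3)]

# U^T U should be the identity.
UtU = [[sum(U[k][i] * U[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]
print(max(abs(UtU[i][j] - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3)))  # ~0
```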

The proposed SVD method can be applied to products in which computational resources are highly restricted. Relating to attitude determination, we hereinafter introduce two propositions.

Proposition 1. *For the 3 × 3 nonsingular matrix* **A**, *if its SVD is* **A** = **UΣV**^{T}, *then* **W** = **UV**^{T} *is one of the closest orthonormal matrices to* **A**, *such that*

**W** = arg min_{**W**^{T}**W**=**I**} ‖**A** − **W**‖_{F}²

*Proof 1*. The above statement is equivalent to minimizing

‖**A** − **W**‖_{F}² = tr[(**A** − **W**)^{T}(**A** − **W**)] = tr(**A**^{T}**A**) + 3 − 2tr(**W**^{T}**A**)

With the consideration of **W**^{T}**W** = **I**, introducing Lagrange multipliers, the Lagrangian is developed by

*L*(**W**, **Λ**) = ‖**A** − **W**‖_{F}² + tr[**Λ**(**W**^{T}**W** − **I**)]

where **Λ** is the symmetric Lagrange multiplier matrix. Using the differential rules of matrix calculus,

∂*L*/∂**W** = −2(**A** − **W**) + **W**(**Λ** + **Λ**^{T}) = −2**A** + 2**W**(**I** + **Λ**)

The above Lagrangian is optimized by

∂*L*/∂**W** = **0**

producing

**A** = **W**(**I** + **Λ**)

Then we have

**A**^{T}**A** = (**I** + **Λ**)**W**^{T}**W**(**I** + **Λ**) = (**I** + **Λ**)²

which arrives at

**I** + **Λ** = ±(**A**^{T}**A**)^{1/2} = ±**VΣV**^{T}

Inserting this back, **W** is determined by

**W** = **A**(**I** + **Λ**)^{−1} = ±**UΣV**^{T}**VΣ**^{−1}**V**^{T} = ±**UV**^{T}

When **A** approximates an attitude matrix, that is, det(**A**) > 0, we choose the positive symbol, which gives

**W** = **UV**^{T}

If we rewrite the SVD into the following form, **A** = (**UV**^{T})(**VΣV**^{T}) = **WP**, where **W** = **UV**^{T} is orthonormal and **P** = **VΣV**^{T} is a symmetric positive definite matrix, that is, **P** = **P**^{T} > 0, then this represents the orthonormal–symmetric (polar) decomposition.

Proposition 2. *For the 3 × 3 nonsingular matrix* **A**, *if its SVD is* **A** = **UΣV**^{T}, *the choice* **W** = **UV**^{T} *makes the following function maximum:*

*J*(**W**) = tr(**W**^{T}**A**), **W**^{T}**W** = **I**

*Proof 2*. The above function is equivalent to

tr(**W**^{T}**A**) = tr(**W**^{T}**UΣV**^{T}) = tr(**V**^{T}**W**^{T}**UΣ**) = tr(**ZΣ**)

where

**Z** = **V**^{T}**W**^{T}**U**

is orthonormal. Then we have

tr(**ZΣ**) = ∑_{i=1}^{3} *Z*_{ii}*σ*_{i} ≤ ∑_{i=1}^{3} *σ*_{i}

Since |*Z*_{ii}| ≤ 1, the above "equal" holds if and only if **Z** = **I**, which corresponds to **W** = **UV**^{T}.

Finally, we arrive at

max tr(**W**^{T}**A**) = *σ*_{1} + *σ*_{2} + *σ*_{3}

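Both propositions can be probed numerically. The following Python sketch (our illustration) builds **A** = **R** diag(3, 2, 1), for which **UV**^{T} = **R**, and checks it against randomly sampled rotations:

```python
import math
import random

def rot(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    x, y, z = axis
    c, s, t = math.cos(angle), math.sin(angle), 1.0 - math.cos(angle)
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]

def trace_prod(W, A):
    """tr(W^T A)."""
    return sum(W[k][i] * A[k][i] for k in range(3) for i in range(3))

def frob_dist(A, W):
    """||A - W||_F."""
    return math.sqrt(sum((A[i][j] - W[i][j]) ** 2
                         for i in range(3) for j in range(3)))

# Constructed A = R * diag(3, 2, 1): its SVD has U = R and V = I, so U V^T = R.
R = rot([0.0, 0.0, 1.0], 0.3)
A = [[R[i][j] * [3.0, 2.0, 1.0][j] for j in range(3)] for i in range(3)]

# Over random rotations W, R should maximize tr(W^T A) (Proposition 2)
# and minimize ||A - W||_F (Proposition 1).
random.seed(0)
best_tr, best_d = trace_prod(R, A), frob_dist(A, R)
for _ in range(1000):
    ax = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(v * v for v in ax))
    W = rot([v / n for v in ax], random.uniform(0, 2 * math.pi))
    assert trace_prod(W, A) <= best_tr + 1e-9
    assert frob_dist(A, W) >= best_d - 1e-9
print("U V^T attains", best_tr, "and", best_d)
```

Here the maximum trace equals *σ*_{1} + *σ*_{2} + *σ*_{3} = 6, in agreement with the result above.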
It has been proposed by Arun et al. in 1987 [21] and Markley in 1988 [18], respectively, that for the equal-weight and normal-weight cases, the optimal DCM can be computed with the SVD **B** = **UΣV**^{T} of the given matrix

**B** = ∑_{i=1}^{n} *a*_{i}**b**_{i}**r**_{i}^{T}
The Wahba problem can be shifted to maximizing [22]

tr(**C**^{T}**B**)

According to the above propositions, together with the constraint det(**C**) = +1, the optimal DCM is calculated by

**C**_{opt} = **U** diag(1, 1, det(**U**)det(**V**)) **V**^{T}

The proposed fast SVD is now summarized in MATLAB code (see Algorithm 1).
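Since Algorithm 1 itself is not reproduced here, the following is our own Python re-implementation sketch of the whole pipeline under the notation assumed above, covering only the distinct-eigenvalue case (3); it is an illustration, not the paper's verbatim algorithm:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def svd_attitude(pairs, weights):
    """Optimal DCM from weighted vector pairs via the analytical SVD of
    B = sum_i a_i * b_i r_i^T (distinct-eigenvalue case only)."""
    B = [[sum(a * b[i] * r[j] for (b, r), a in zip(pairs, weights))
          for j in range(3)] for i in range(3)]
    # G = B^T B is symmetric positive definite for nonsingular B.
    G = [[sum(B[k][i] * B[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]

    # Characteristic polynomial l^3 + c2*l^2 + c1*l + c0 of G.
    c2 = -(G[0][0] + G[1][1] + G[2][2])
    c1 = (G[0][0]*G[1][1] + G[0][0]*G[2][2] + G[1][1]*G[2][2]
          - G[0][1]**2 - G[0][2]**2 - G[1][2]**2)
    c0 = -(G[0][0]*(G[1][1]*G[2][2] - G[1][2]**2)
           - G[0][1]*(G[0][1]*G[2][2] - G[1][2]*G[0][2])
           + G[0][2]*(G[0][1]*G[1][2] - G[1][1]*G[0][2]))

    # Depressed cubic t^3 + p*t + q (l = t - c2/3); trigonometric roots.
    p = c1 - c2 * c2 / 3.0
    q = 2.0 * c2**3 / 27.0 - c2 * c1 / 3.0 + c0
    m = 2.0 * math.sqrt(-p / 3.0)
    phi = math.acos(max(-1.0, min(1.0, 3.0*q/(2.0*p) * math.sqrt(-3.0/p))))
    lams = sorted((m * math.cos(phi/3.0 - 2.0*math.pi*k/3.0) - c2/3.0
                   for k in range(3)), reverse=True)

    # Eigenvectors of G: longest cross product of the rows of G - lam*I.
    V = []
    for lam in lams[:2]:
        F = [[G[i][j] - (lam if i == j else 0.0) for j in range(3)]
             for i in range(3)]
        cands = [cross(F[0], F[1]), cross(F[1], F[2]), cross(F[2], F[0])]
        v = max(cands, key=lambda u: sum(x * x for x in u))
        n = math.sqrt(sum(x * x for x in v))
        V.append([x / n for x in v])
    V.append(cross(V[0], V[1]))  # v3 = v1 x v2

    # U columns: u_i = B v_i / sigma_i, with sigma_i = sqrt(lambda_i).
    sig = [math.sqrt(lam) for lam in lams]
    U = [[sum(B[i][k] * V[j][k] for k in range(3)) / sig[j]
          for j in range(3)] for i in range(3)]

    def det(M):
        return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
                - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
                + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

    d = det(U) * det([[V[j][i] for j in range(3)] for i in range(3)])
    # C_opt = U diag(1, 1, det(U)det(V)) V^T.
    return [[sum(U[i][k] * (d if k == 2 else 1.0) * V[k][j]
                 for k in range(3)) for j in range(3)] for i in range(3)]

# Noise-free demo: with b_i = r_i for all pairs, the optimal DCM is the identity.
rs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0 / math.sqrt(3.0),) * 3]
C = svd_attitude([(r, r) for r in rs], [0.5, 0.3, 0.2])
print([[round(x, 6) for x in row] for row in C])
```

A production version would additionally dispatch on the discriminant and condition number as discussed in Section 2, falling back to the Jacobi method in ill-conditioned cases.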