Abstract

A novel algorithm is proposed in this paper to solve the optimal attitude determination problem from vector observation pairs, that is, Wahba's problem. We propose a fast analytic singular value decomposition (SVD) approach to obtain the optimal attitude matrix. The derivations and necessary proofs are presented to clarify the theory and support its feasibility. The proposed algorithm is validated through simulation experiments. The results show that it maintains the same attitude determination accuracy and robustness as conventional methodologies while significantly reducing the computation time.

1. Introduction

In aerial applications, strapdown vector sensors are often required to directly give the rotation matrix between the body frame and the inertial frame [1]. With the development of modern global navigation satellite systems (GNSS), GNSS-based attitude determination has become a vital part of spacecraft instrumentation and measurement [2]. This has led to different attitude determination algorithms on various platforms, for example, remote sensing and marine navigation [3, 4]. In such applications, attitude determination from vector observation pairs is usually an effective solution [5, 6]. This approach is referred to as Wahba's problem, posed in 1965, which minimizes the loss function

L(\mathbf{C}) = \frac{1}{2}\sum_{i=1}^{n} a_i \left\| \mathbf{b}_i - \mathbf{C}\mathbf{r}_i \right\|^2

in which \mathbf{b}_i and \mathbf{r}_i are the ith pair of vector observations obtained in the body frame and the reference frame, respectively [7]. a_i is the positive weight of the ith pair, with the weights summing to 1. Here, \mathbf{C} is the 3-dimensional direction cosine matrix (DCM) belonging to SO(3) [8]. Initial research on this problem mainly focused on the exact solution rather than its robustness [9]. For instance, the above loss function can be explicitly expanded into the augmented rotation equations. Brute-force solution of such a system requires a large matrix memory and a sophisticated generalized inverse. Such a solution is not only time and space consuming but also causes nonorthogonality of the DCM, which in turn demands a later matrix correction that degrades the optimality of the solution. A robust framework was proposed by Davenport in 1968, who analytically converted Wahba's problem into an eigenvalue problem [9]. The target, that is, the DCM, is accordingly shifted to an attitude quaternion. Following this formulation, many famous attitude estimators were proposed over the next 30 years, including Shuster's QUaternion ESTimator (QUEST), Markley's Fast Optimal Attitude Matrix (FOAM), Mortari's ESOQ, and Wu's FLAE [10–13].

The robust solutions to Wahba's problem provide the possibility of accurate attitude determination from sensors such as the sun sensor, nadir sensor, star tracker, magnetometer, and so on [14]. Combined with gyroscopes, they can significantly improve attitude estimation results [15–17]. Apart from Davenport's method, it is also notable that the singular value decomposition (SVD) can be efficient. Markley proposed the SVD-based Wahba solution in 1988, where the DCM is composed from the decomposition results of the SVD [18]. The SVD algorithm is proven to be very robust; hence, when facing critical cases of sensor combinations, its accuracy is always maintained. However, conventional SVD computation is very complicated. Could there be a fast analytical SVD algorithm that can boost its execution on embedded platforms?

The answer is positive. As a matter of fact, the SVD of a 3 × 3 matrix can be given with much faster speed. In this paper, we focus on presenting a novel fast SVD algorithm. Derivations are given to show why it is fast, and simulations are carried out to compare the proposed algorithm with representative ones on accuracy, robustness, and time consumption, which reflect its superiority in real applications.

This paper is structured as follows: Section 2 contains our fast SVD theory and its relationship with optimal attitude determination. Section 3 includes the experimental results. Section 4 consists of concluding remarks.

2. Our Method

The matrix theory introduced in this paper is mainly referenced from [19]. Given an arbitrary nonsingular real square matrix \mathbf{A}, the matrix \mathbf{B} = \mathbf{A}^T\mathbf{A} is real symmetric. One can compute \mathbf{A}'s singular values from \mathbf{B}'s eigenvalues via \sigma_i = \sqrt{\lambda_i}, i = 1, 2, 3. The Householder transform and QR factorization are usually adopted for SVD. However, in real applications, they are proven to be time-consuming. Here, we define the decomposition \mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, where \boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3) and \mathbf{U} and \mathbf{V} are 3 × 3 orthonormal matrices. Since \mathbf{B} = \mathbf{V}\boldsymbol{\Sigma}^2\mathbf{V}^T, the factorization of \mathbf{A} can be significantly simplified once the preliminary eigendecomposition of \mathbf{B} is completed. Actually, in scientific computing, such a technique is very common, using the QR decomposition or the Jacobi method imposed on \mathbf{B}.
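As a quick numerical sanity check of the relation above, the following illustrative sketch (in Python with NumPy, not part of the proposed algorithm) verifies that the singular values of a nonsingular 3 × 3 matrix equal the square roots of the eigenvalues of \mathbf{B} = \mathbf{A}^T\mathbf{A}:

```python
# Sketch: sigma_i(A) = sqrt(lambda_i(A^T A)) for a nonsingular 3x3 matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = A.T @ A                                  # real symmetric, positive definite

lam = np.sort(np.linalg.eigvalsh(B))[::-1]   # eigenvalues, descending
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending

assert np.allclose(sigma, np.sqrt(lam))
```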

Now we propose the analytic eigenvalue decomposition of \mathbf{B}. The characteristic polynomial of \mathbf{B} is given by

\det(\lambda\mathbf{I} - \mathbf{B}) = \lambda^3 + a\lambda^2 + b\lambda + c = 0

where

a = -(B_{11} + B_{22} + B_{33})
b = B_{11}B_{22} + B_{11}B_{33} + B_{22}B_{33} - B_{23}^2 - B_{12}^2 - B_{13}^2
c = -(B_{11}B_{22}B_{33} - B_{11}B_{23}^2 - B_{12}^2 B_{33} + 2B_{12}B_{23}B_{13} - B_{13}^2 B_{22})

with B_{ij} denoting the element of \mathbf{B} in the ith row and jth column. Letting \lambda = x - a/3, the polynomial can be converted to the depressed cubic

x^3 + px + q = 0

where

p = b - \frac{a^2}{3}, \quad q = \frac{2a^3 - 9ab + 27c}{27}

The discriminant is established by

\Delta = \frac{q^2}{4} + \frac{p^3}{27}

As far as the sign of the discriminant is concerned, there are three specific situations:

(1) \Delta > 0: there are one real root and a pair of conjugate complex roots,

x_1 = \sqrt[3]{-\frac{q}{2} + \sqrt{\Delta}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\Delta}}, \quad x_{2,3} = -\frac{x_1}{2} \pm \frac{\sqrt{3}}{2}\left( \sqrt[3]{-\frac{q}{2} + \sqrt{\Delta}} - \sqrt[3]{-\frac{q}{2} - \sqrt{\Delta}} \right) j

in which j denotes the unit imaginary number.

(2) \Delta = 0: there are three real roots, two of which are equal to each other,

x_1 = \frac{3q}{p}, \quad x_2 = x_3 = -\frac{3q}{2p}

Especially when p = q = 0, the roots are all zeros.

(3) \Delta < 0: there are three different real roots,

x_k = 2\sqrt{-\frac{p}{3}} \cos\left( \frac{\theta}{3} - \frac{2\pi (k-1)}{3} \right), \quad k = 1, 2, 3

where

\theta = \arccos\left( \frac{3q}{2p}\sqrt{-\frac{3}{p}} \right)
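The trigonometric formulas of case (3) can be checked numerically. The following sketch (Python/NumPy, for illustration only; the sample coefficients p and q are arbitrary values satisfying \Delta < 0) compares them against a general polynomial root finder:

```python
# Sketch of case (3): solve x^3 + p*x + q = 0 via the trigonometric
# (Cardano/Viete) formulas and compare against numpy's root finder.
import numpy as np

def depressed_cubic_roots(p, q):
    # valid only when the discriminant q^2/4 + p^3/27 < 0 (three real roots)
    theta = np.arccos(3.0 * q / (2.0 * p) * np.sqrt(-3.0 / p))
    r = 2.0 * np.sqrt(-p / 3.0)
    k = np.array([0.0, 1.0, 2.0])
    return r * np.cos(theta / 3.0 - 2.0 * np.pi * k / 3.0)

p, q = -7.0, 3.0                 # sample coefficients with Delta < 0
x = np.sort(depressed_cubic_roots(p, q))
ref = np.sort(np.roots([1.0, 0.0, p, q]).real)
assert np.allclose(x, ref)
```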

Based on the three cases and substituting the roots back via \lambda_k = x_k - a/3, \mathbf{B}'s eigenvalues can be obtained. Here we assume that they are sorted in descending order, \lambda_1 \geq \lambda_2 \geq \lambda_3. Note that \mathbf{B} is real symmetric; its eigenvalues are therefore real numbers, that is, the roots of the characteristic polynomial are real as well. Hence only the abovementioned cases (2) and (3) can occur. Apart from this, according to Vieta's theorem [20], the eigenvalues satisfy

\lambda_1 + \lambda_2 + \lambda_3 = -a, \quad \lambda_1\lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3 = b, \quad \lambda_1\lambda_2\lambda_3 = -c

Since the eigenvalues are real and positive for nonsingular \mathbf{A}, the maximum one is \lambda_1 while the minimum one is \lambda_3. Now we define the condition number of the characteristic polynomial equation as

\kappa = \frac{\lambda_1}{\lambda_3}

Before any computation, it is suggested that users check the condition number first. If the condition number is too large, the numerical stability of the calculated results is not satisfactory. In such circumstances, we may resort to a more robust method, for example, the Jacobi method.

Based on the matrix theory, the orthonormal eigenvectors corresponding to the above-computed eigenvalues can be determined as follows:

(1) The eigenvalue has a multiplicity of 3 (\lambda_1 = \lambda_2 = \lambda_3 = \lambda): then \mathbf{B} = \lambda\mathbf{I}, and three orthonormal eigenvectors can be immediately established by

\mathbf{v}_1 = (1, 0, 0)^T, \quad \mathbf{v}_2 = (0, 1, 0)^T, \quad \mathbf{v}_3 = (0, 0, 1)^T

(2) The eigenvalue has a multiplicity of 2: the eigenvectors associated with the repeated eigenvalue \lambda satisfy

(\mathbf{B} - \lambda\mathbf{I})\mathbf{v} = \mathbf{0}

Letting \mathbf{G} = \mathbf{B} - \lambda\mathbf{I}, we have \mathrm{rank}(\mathbf{G}) = 1, that is, the three row vectors of \mathbf{G} are linearly dependent. Assuming that \mathbf{g} = (g_1, g_2, g_3) is the row of \mathbf{G} with the maximum norm, the eigenspace of \lambda is the plane \mathbf{g}\mathbf{v} = 0. If g_1 \neq 0 or g_2 \neq 0, two orthonormal eigenvectors can be directly established by

\mathbf{v}_a = \frac{1}{\sqrt{g_1^2 + g_2^2}} (g_2, -g_1, 0)^T, \quad \mathbf{v}_b = \frac{\mathbf{g}^T \times \mathbf{v}_a}{\|\mathbf{g}\|}

And if g_1 = g_2 = 0, we may directly take \mathbf{v}_a = (1, 0, 0)^T and \mathbf{v}_b = (0, 1, 0)^T. When \lambda = \lambda_1 = \lambda_2, the eigenvectors are composed by \mathbf{v}_1 = \mathbf{v}_a, \mathbf{v}_2 = \mathbf{v}_b, \mathbf{v}_3 = \mathbf{v}_1 \times \mathbf{v}_2, while when \lambda = \lambda_2 = \lambda_3, the eigenvectors are composed by \mathbf{v}_2 = \mathbf{v}_a, \mathbf{v}_3 = \mathbf{v}_b, \mathbf{v}_1 = \mathbf{v}_2 \times \mathbf{v}_3.

(3) The eigenvalues are all different: the eigenvector \mathbf{v}_i associated with the eigenvalue \lambda_i satisfies

(\mathbf{B} - \lambda_i\mathbf{I})\mathbf{v}_i = \mathbf{0}

Letting \mathbf{G}_i = \mathbf{B} - \lambda_i\mathbf{I}, we have \mathrm{rank}(\mathbf{G}_i) = 2, which means that the three row vectors \mathbf{g}_{i,1}, \mathbf{g}_{i,2}, \mathbf{g}_{i,3} of \mathbf{G}_i lie in the same plane, with the preliminary convention that the zero vector can lie in an arbitrary plane. Note that at least two of them are not parallel to each other. Here we define

\mathbf{y}_1 = \mathbf{g}_{i,1} \times \mathbf{g}_{i,2}, \quad \mathbf{y}_2 = \mathbf{g}_{i,1} \times \mathbf{g}_{i,3}, \quad \mathbf{y}_3 = \mathbf{g}_{i,2} \times \mathbf{g}_{i,3}

Theoretically, \mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3 are parallel, all being normal to that plane. Any one of them can serve as the eigenvector provided it is not a zero vector. In engineering practice, the one with the maximum norm is usually chosen for computing the eigenvector \mathbf{v}_1 belonging to the eigenvalue \lambda_1. Using the same technique, the eigenvector \mathbf{v}_2 can be computed, while \mathbf{v}_3 can be given by \mathbf{v}_3 = \mathbf{v}_1 \times \mathbf{v}_2.
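The cross-product construction for distinct eigenvalues can be illustrated numerically. The following sketch (Python/NumPy, for illustration only) forms \mathbf{G} = \mathbf{B} - \lambda_1\mathbf{I}, takes the cross products of its rows, keeps the candidate with the maximum norm, and checks the eigenvector equation:

```python
# Sketch: for a distinct eigenvalue lambda of symmetric B, G = B - lambda*I
# has rank 2, so the cross product of two non-parallel rows of G is
# (up to scale) the corresponding eigenvector.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = A.T @ A
lam = np.sort(np.linalg.eigvalsh(B))[::-1]       # descending eigenvalues

G = B - lam[0] * np.eye(3)
candidates = [np.cross(G[0], G[1]), np.cross(G[0], G[2]), np.cross(G[1], G[2])]
v = max(candidates, key=np.linalg.norm)          # pick the max-norm candidate
v = v / np.linalg.norm(v)

assert np.allclose(B @ v, lam[0] * v)
```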

Finally, the eigenvectors are normalized. Letting

\mathbf{V} = (\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3), \quad \boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3)

one can write

\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T

in which \sigma_i = \sqrt{\lambda_i}, i = 1, 2, 3.

Right-multiplying the above decomposition by \mathbf{V}, we obtain \mathbf{A}\mathbf{V} = \mathbf{U}\boldsymbol{\Sigma}, which is transformed into

\mathbf{U} = \mathbf{A}\mathbf{V}\boldsymbol{\Sigma}^{-1}

With \mathbf{U} so extracted, we can easily verify that it is an orthogonal matrix, namely, \mathbf{U}^T\mathbf{U} = \boldsymbol{\Sigma}^{-1}\mathbf{V}^T\mathbf{B}\mathbf{V}\boldsymbol{\Sigma}^{-1} = \boldsymbol{\Sigma}^{-1}\boldsymbol{\Sigma}^2\boldsymbol{\Sigma}^{-1} = \mathbf{I}. Now, we have finished the presentation of the proposed SVD method.
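The assembly step above can be sketched end to end (Python/NumPy, for illustration; the eigendecomposition here stands in for the analytic one derived above):

```python
# Sketch: with V (eigenvectors of B = A^T A) and Sigma = diag(sqrt(lambda_i)),
# U = A V Sigma^{-1} completes the SVD, and U^T U = I follows from
# V^T B V = Sigma^2.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = A.T @ A
lam, V = np.linalg.eigh(B)
order = np.argsort(lam)[::-1]            # sort eigenpairs descending
lam, V = lam[order], V[:, order]
Sigma = np.diag(np.sqrt(lam))

U = A @ V @ np.linalg.inv(Sigma)
assert np.allclose(U.T @ U, np.eye(3))   # U is orthogonal
assert np.allclose(U @ Sigma @ V.T, A)   # the SVD reproduces A
```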

The proposed SVD method can be applied in products in which computational resources are highly restricted. Relating it to attitude determination, we hereinafter introduce two propositions.

Proposition 1. For a 3 × 3 nonsingular matrix \mathbf{A}, if its SVD is \mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, then \mathbf{U}\mathbf{V}^T is one of the closest orthonormal matrices to \mathbf{A}, such that

\mathbf{U}\mathbf{V}^T = \arg\min_{\mathbf{C}\mathbf{C}^T = \mathbf{I}} \|\mathbf{C} - \mathbf{A}\|_F^2

Proof 1. The objective in Proposition 1 is equivalent to maximizing \mathrm{tr}(\mathbf{C}\mathbf{A}^T), since

\|\mathbf{C} - \mathbf{A}\|_F^2 = \mathrm{tr}\left[(\mathbf{C} - \mathbf{A})(\mathbf{C} - \mathbf{A})^T\right] = 3 + \mathrm{tr}(\mathbf{A}\mathbf{A}^T) - 2\,\mathrm{tr}(\mathbf{C}\mathbf{A}^T)

With the consideration of the constraint \mathbf{C}\mathbf{C}^T = \mathbf{I}, introducing Lagrange multipliers, the Lagrangian is developed as

\mathcal{L}(\mathbf{C}) = \|\mathbf{C} - \mathbf{A}\|_F^2 + \mathrm{tr}\left[\boldsymbol{\Lambda}(\mathbf{C}\mathbf{C}^T - \mathbf{I})\right]

where \boldsymbol{\Lambda} is the symmetric Lagrange multiplier matrix. Using the differential rules of matrix traces, the Lagrangian is optimized by

\frac{\partial \mathcal{L}}{\partial \mathbf{C}} = 2(\mathbf{C} - \mathbf{A}) + 2\boldsymbol{\Lambda}\mathbf{C} = \mathbf{0}

producing (\mathbf{I} + \boldsymbol{\Lambda})\mathbf{C} = \mathbf{A}. Then we have

(\mathbf{I} + \boldsymbol{\Lambda})^2 = (\mathbf{I} + \boldsymbol{\Lambda})\mathbf{C}\mathbf{C}^T(\mathbf{I} + \boldsymbol{\Lambda}) = \mathbf{A}\mathbf{A}^T

which arrives at \mathbf{I} + \boldsymbol{\Lambda} = \pm(\mathbf{A}\mathbf{A}^T)^{1/2}. Inserting this back, \mathbf{C} is determined by

\mathbf{C} = \pm(\mathbf{A}\mathbf{A}^T)^{-1/2}\mathbf{A}

When \mathbf{A} approximates an attitude matrix, that is, \det(\mathbf{A}) > 0, we choose the positive sign, which gives

\mathbf{C} = (\mathbf{A}\mathbf{A}^T)^{-1/2}\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}^{-1}\mathbf{U}^T\,\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T = \mathbf{U}\mathbf{V}^T

If we rewrite the SVD in the form

\mathbf{A} = (\mathbf{U}\mathbf{V}^T)(\mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^T) = \mathbf{W}\mathbf{S}

where \mathbf{W} = \mathbf{U}\mathbf{V}^T is orthonormal and \mathbf{S} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^T is a symmetric positive definite matrix, that is, \mathbf{S} = (\mathbf{A}^T\mathbf{A})^{1/2}, then this represents the orthonormal-symmetric decomposition.
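Proposition 1 can be spot-checked numerically. The following sketch (Python/NumPy, for illustration) compares the Frobenius distance of \mathbf{U}\mathbf{V}^T against many random orthonormal matrices:

```python
# Sketch checking Proposition 1: U V^T is the orthonormal matrix closest
# to A in the Frobenius norm; random orthonormal matrices never beat it.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)
C_star = U @ Vt
best = np.linalg.norm(C_star - A)

for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthonormal Q
    assert np.linalg.norm(Q - A) >= best - 1e-12
```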

Proposition 2. For a 3 × 3 nonsingular matrix \mathbf{A}, if its SVD is \mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, the choice \mathbf{C} = \mathbf{U}\mathbf{V}^T makes the following function maximum over orthonormal \mathbf{C}:

f(\mathbf{C}) = \mathrm{tr}(\mathbf{C}\mathbf{A}^T)

Proof 2. The objective function is equivalent to

\mathrm{tr}(\mathbf{C}\mathbf{A}^T) = \mathrm{tr}(\mathbf{C}\mathbf{V}\boldsymbol{\Sigma}\mathbf{U}^T) = \mathrm{tr}(\mathbf{U}^T\mathbf{C}\mathbf{V}\boldsymbol{\Sigma}) = \sum_{i=1}^{3} \sigma_i z_{ii}

where \mathbf{Z} = \mathbf{U}^T\mathbf{C}\mathbf{V} is orthonormal. Then we have

\mathrm{tr}(\mathbf{C}\mathbf{A}^T) \leq \sigma_1 + \sigma_2 + \sigma_3

Since |z_{ii}| \leq 1, the above equality holds if and only if \mathbf{Z} = \mathbf{I}, which corresponds to \mathbf{C} = \mathbf{U}\mathbf{V}^T.
Finally, the maximum arrives at

\max_{\mathbf{C}\mathbf{C}^T = \mathbf{I}} \mathrm{tr}(\mathbf{C}\mathbf{A}^T) = \sigma_1 + \sigma_2 + \sigma_3
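Proposition 2 admits the same kind of numerical spot check (Python/NumPy sketch, for illustration only):

```python
# Sketch checking Proposition 2: over orthonormal C, tr(C A^T) is
# maximized by C = U V^T, with maximum sigma1 + sigma2 + sigma3.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)
C_star = U @ Vt

assert np.isclose(np.trace(C_star @ A.T), s.sum())
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthonormal Q
    assert np.trace(Q @ A.T) <= s.sum() + 1e-12
```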

It was shown by Arun et al. in 1987 [21] and Markley in 1988 [18], respectively, that for equal-weight and normal-weight cases the optimal DCM can be computed with

\mathbf{C}_{\mathrm{opt}} = \mathbf{U}\,\mathrm{diag}\left(1, 1, \det(\mathbf{U})\det(\mathbf{V})\right)\mathbf{V}^T

with the given attitude profile matrix

\mathbf{B} = \sum_{i=1}^{n} a_i \mathbf{b}_i \mathbf{r}_i^T

whose SVD \mathbf{B} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T is computed by the proposed method.

The Wahba problem can thus be shifted to maximizing [22]

\mathrm{tr}(\mathbf{C}\mathbf{B}^T)

According to the above propositions, provided \det(\mathbf{B}) > 0, the optimal DCM is calculated by

\mathbf{C}_{\mathrm{opt}} = \mathbf{U}\mathbf{V}^T
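The whole pipeline can be sketched on noise-free data (Python/NumPy, illustrative only; the vectors, weights, and reference rotation are synthetic, and NumPy's SVD stands in for the proposed fast SVD):

```python
# Sketch: build the profile matrix B = sum_i a_i * b_i r_i^T from
# noise-free vector pairs, take its SVD, and recover the reference DCM
# via U diag(1, 1, det(U)det(V)) V^T.
import numpy as np

rng = np.random.default_rng(5)
C_ref, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # reference rotation
if np.linalg.det(C_ref) < 0:
    C_ref[:, 2] *= -1                                 # force det = +1

r = rng.standard_normal((5, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)         # reference-frame vectors
b = r @ C_ref.T                                       # body-frame observations
a = np.full(5, 1.0 / 5.0)                             # equal weights, sum 1

Bm = sum(w * np.outer(bi, ri) for w, bi, ri in zip(a, b, r))
U, _, Vt = np.linalg.svd(Bm)
d = np.linalg.det(U) * np.linalg.det(Vt)
C_opt = U @ np.diag([1.0, 1.0, d]) @ Vt

assert np.allclose(C_opt, C_ref)
```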

The proposed fast SVD is now summarized in the MATLAB code (see Algorithm 1).

  % Optimal Attitude Determination Algorithm Using Fast Singular Value Decomposition
  % author: liuzhuohua@bupt.edu.cn; jin_wu_uestc@hotmail.com
  % Note: this routine implements the case of three distinct real roots (Delta < 0).
  function [u1, u2, u3, ...
          d1, d2, d3, ...
          v1, v2, v3] = svd3_fast(A, ...
                    A11, A12, A13, ...
                    A21, A22, A23, ...
                    A31, A32, A33)
    % elements of the symmetric matrix B = A' * A
    B11 = A11 * A11 + A21 * A21 + A31 * A31;
    B12 = A11 * A12 + A21 * A22 + A31 * A32;
    B13 = A11 * A13 + A21 * A23 + A31 * A33;
    B22 = A12 * A12 + A22 * A22 + A32 * A32;
    B23 = A12 * A13 + A22 * A23 + A32 * A33;
    B33 = A13 * A13 + A23 * A23 + A33 * A33;
    B12B23 = B12 * B23;
    B13B22 = B13 * B22;
    B13B12 = B13 * B12;
    B11B23 = B11 * B23;
    B12B12 = B12 * B12;
    B11B22 = B11 * B22;
    % coefficients of the characteristic polynomial of B
    a = -(B11 + B22 + B33);
    b = B11B22 + B11 * B33 + B22 * B33 - B23^2 - B12B12 - B13^2;
    c = -(B11B22 * B33 - B11B23 * B23 - B12B12 * B33 + 2 * B12B23 * B13 - B13 * B13B22);
    % depressed cubic x^3 + p*x + q = 0 and its trigonometric roots
    a3 = a/3;
    p = b - a * a3;
    q = (2 * a^3 - 9 * a * b + 27 * c)/27;
    p2 = p * p;
    T = -q * sqrt(-27 * p)/2/p2;
    theta3 = acos(T)/3;
    pi23 = 2.094395102393195;    % 2*pi/3
    x0 = 2 * sqrt(-p/3);
    x1 = x0 * cos(theta3);
    x2 = x0 * cos(theta3 - pi23);
    x3 = x0 * cos(theta3 + pi23);
    rs1 = x1 - a3;
    rs2 = x2 - a3;
    rs3 = x3 - a3;
    % sort the eigenvalues in descending order
    if rs1 < rs2, tmp = rs2; rs2 = rs1; rs1 = tmp; end
    if rs1 < rs3, tmp = rs3; rs3 = rs1; rs1 = tmp; end
    if rs2 < rs3, tmp = rs2; rs2 = rs3; rs3 = tmp; end
    % eigenvectors via cross products of the rows of B - lambda*I
    v1 = [B12B23 - B13B22 + rs1 * B13
        B13B12 - B11B23 + rs1 * B23
        (rs1 - B11) * (rs1 - B22) - B12B12];
    v2 = [B12B23 - B13B22 + rs2 * B13
        B13B12 - B11B23 + rs2 * B23
        (rs2 - B11) * (rs2 - B22) - B12B12];
    v1 = v1/norm(v1);
    v2 = v2/norm(v2);
    v3 = [v1(2) * v2(3) - v1(3) * v2(2);
        v1(3) * v2(1) - v1(1) * v2(3);
        v1(1) * v2(2) - v1(2) * v2(1)];
    % singular values and left singular vectors
    d1 = sqrt(rs1);
    d2 = sqrt(rs2);
    d3 = sqrt(rs3);
    u1 = A * v1/d1;
    u2 = A * v2/d2;
    u3 = A * v3/d3;
end
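For readers without MATLAB, the listing above can be transliterated as follows (an illustrative Python/NumPy sketch of the same three-distinct-roots branch, cross-checked against a library SVD; not part of the original paper):

```python
# Sketch: Python port of the svd3_fast logic above (Delta < 0 branch only),
# verified against numpy's SVD.
import numpy as np

def svd3_fast(A):
    B = A.T @ A
    # characteristic polynomial coefficients: lambda^3 + a*lambda^2 + b*lambda + c
    a = -np.trace(B)
    b = (B[0, 0] * B[1, 1] + B[0, 0] * B[2, 2] + B[1, 1] * B[2, 2]
         - B[1, 2] ** 2 - B[0, 1] ** 2 - B[0, 2] ** 2)
    c = -np.linalg.det(B)
    a3 = a / 3.0
    p = b - a * a3
    q = (2.0 * a ** 3 - 9.0 * a * b + 27.0 * c) / 27.0
    # trigonometric roots of the depressed cubic, shifted back by a/3
    theta3 = np.arccos(-q * np.sqrt(-27.0 * p) / (2.0 * p * p)) / 3.0
    x0 = 2.0 * np.sqrt(-p / 3.0)
    lam = x0 * np.cos(theta3 - 2.0 * np.pi * np.array([0, 1, 2]) / 3.0) - a3
    lam = np.sort(lam)[::-1]                       # descending eigenvalues
    V = np.empty((3, 3))
    for i in range(2):
        G = B - lam[i] * np.eye(3)
        v = np.cross(G[0], G[1])                   # row cross product
        V[:, i] = v / np.linalg.norm(v)
    V[:, 2] = np.cross(V[:, 0], V[:, 1])
    d = np.sqrt(lam)                               # singular values
    U = A @ V / d                                  # u_i = A v_i / d_i
    return U, d, V

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
U, d, V = svd3_fast(A)
assert np.allclose(U @ np.diag(d) @ V.T, A)
assert np.allclose(d, np.linalg.svd(A, compute_uv=False))
```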

3. Simulation Results

In this section, we conduct two simulations to verify the accuracy and efficiency of the proposed fast SVD method.

3.1. Static Simulation

The vector observation pairs are simulated with

\mathbf{b}_i = \mathbf{C}\mathbf{r}_i + \mathbf{n}_i

in which the noise terms \mathbf{n}_i with different indexes are independent from each other and subject to a Gaussian distribution. The reference DCM \mathbf{C} is chosen to correspond to the Euler angles of roll: −14°, pitch: 75°, yaw: 24°. Here, the standard deviation of the noise is set to 0.01 for each axis.
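This measurement model can be sketched as follows (Python/NumPy, for illustration; the reference rotation here is random rather than the specific Euler-angle DCM of the paper):

```python
# Sketch of the simulation model: b_i = C r_i + n_i with independent
# zero-mean Gaussian noise, sigma = 0.01 per axis.
import numpy as np

rng = np.random.default_rng(7)
C, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # some rotation matrix
if np.linalg.det(C) < 0:
    C[:, 2] *= -1                                 # force det = +1

sigma = 0.01
r = rng.standard_normal((3, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)     # 3 reference-frame vectors
b = r @ C.T + sigma * rng.standard_normal((3, 3)) # noisy body-frame observations

# each observation stays close to the noise-free prediction C r_i
assert np.all(np.abs(b - r @ C.T) < 10 * sigma)
```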

With this model, 10,000 groups of 3 pairs of vector observations are simulated for testing. FLAE, QUEST, and Markley's SVD method are employed for comparison. For FLAE, we use the Newton iteration to compute the eigenvalues of the Davenport matrix K, while for QUEST, we use both Newton iteration and direct factorization with the Eigen math library. The Eigen library is also adopted for the calculation of Markley's SVD method. We use MATLAB to evaluate the results, which are shown in Figure 1. We can see from the figure that the tested results of the representative algorithms and the proposed method are all the same. Actually, since we have not broken the conventional framework of Wahba's problem, the final accuracy is the same for all methods. However, as is known, the SVD is recognized as a robust computational tool, and its numerical stability is superior to that of eigenvalue factorization.

By taking more cases into account, detailed time consumption results are tested. The designed cases from Markley are employed (see Table 1, [5]). The results are categorized in Tables 2–4. The presented materials show that the proposed algorithm has the same attitude accuracy as the compared representative ones.

3.2. Dynamic Simulation

In this subsection, the attitude is simulated using an attitude and heading reference system (AHRS): a MicroStrain 3DM-GX3-25 module is attached firmly to the platform. With a gimbal equipped with angle encoders, the platform is rotated, and the raw sensor data along with the reference angles are logged at 500 Hz. The reference angles are converted to DCMs that are inserted into the measurement model of Section 3.1 to simulate vector observations. With the abovementioned compared algorithms, the results are evaluated and plotted in Figure 2. The overall tests show that the proposed fast SVD method does not differ from the representative methods in accuracy. The superiority of the proposed method is introduced in Section 3.3.

3.3. Computation Complexity

Several algorithms are tested with simulated pairs of vector observations. In each round, there are 5 pairs of vector observations. The execution times of the different implementations are collected with the internal timer of MATLAB and are shown in Figure 3. It is obvious that the proposed fast SVD is the fastest of the compared methods. This is because the proposed algorithm exploits the simplest analytical form of the SVD of a 3 × 3 matrix.

The low time consumption of the proposed method can significantly free up CPU time in the attitude determination system, especially on embedded platforms. The reserved time can be used for emergency backup to enhance overall stability [23].

4. Conclusion

In this paper, an analytical singular value decomposition (SVD) of a 3 × 3 matrix is proposed. The algorithm is based on the SVD of a real symmetric matrix and the roots of its characteristic polynomial. Detailed derivations of the roots of this cubic equation and the further composition of the eigenvectors are introduced. With the presented theory, any 3 × 3 real matrix can be factorized with much faster speed. This is then applied to the optimal attitude determination problem. Simulations are carried out comprising static and dynamic performance tests. The attitude accuracy results indicate that the proposed fast SVD approach has the same accuracy as conventional ones. However, since the computations are significantly reduced, the execution time of the algorithm is much shorter than that of other methods. It is believed that this algorithm may be introduced in embedded attitude determination systems in the future. However, when dealing with some critical cases, for example, nearly singular matrices, the proposed method may not be usable, as the divisions by small singular values may produce NaN values.

Conflicts of Interest

The authors declare no conflict of interest regarding the content and publication of this paper.

Acknowledgments

The authors would like to thank Prof. Zebo Zhou for his constructive suggestions. This research is supported by the National Natural Science Foundation of China under Grant Nos. 61602053 and 41604025, the Fundamental Research Funds for Central Universities, and the Foundation of Key Laboratory (2017-MS-14).