Abstract

Estimating the 3D pose of a space object from a single image is an important but challenging task. Most existing methods estimate the 3D pose of known space objects and assume that the detailed geometry of the specific object is available; they are therefore not applicable to unknown objects whose geometry is not given. In contrast to previous works, this paper is devoted to estimating the 3D pose of an unknown space object from a single image. Our method estimates not only the pose but also the shape of the unknown object. A hierarchical shape model is proposed to represent the prior structural information of typical space objects, and on this basis the pose and shape parameters of unknown space objects are estimated simultaneously. Experimental results demonstrate the effectiveness of our method in estimating the 3D pose and inferring the geometry of unknown typical space objects from a single image, and show its advantage over methods that rely on the known geometry of the object.

1. Introduction

The pose estimation of uncooperative space objects is a key technology for many space missions such as target recognition and on-orbit servicing [1, 2]. In contrast to other state-of-the-art sensing systems, a monocular camera enables pose estimation with low power consumption and low hardware complexity [3]. However, limited by the observation conditions, it is sometimes hard for a monocular camera to provide a clear image sequence. It is therefore necessary to study pose estimation of space objects from a single image.

3D pose estimation from a single image is an important but challenging task. For space objects, most existing methods rely on the known geometry of the object and can be broadly classified into model-based and template-based methods. For model-based methods, the 3D pose is determined by an iterative algorithm that minimizes a fit error between the features detected in the input image and the corresponding features of a known reference 3D model [4–8]. For template-based methods, the input image is searched for specific features that can be matched to an assigned template, and the 3D pose is obtained from the best match against a prebuilt template library of the object [9–13]. More recently, with the development of deep learning, a pose determination method based on convolutional neural networks (CNN) was proposed in [14]. All of the above methods rely on the known geometry of the object and work only for that specific object.

However, the detailed geometry of the object is not always available, as is the case for unknown space objects. Even for an object with known geometry, there may be differences between the reference model and the actual object, because the structure of a space object can change across working modes, for example through the rotation of its solar panels. Compared with known objects, pose estimation of unknown objects is more difficult: methods relying on known object geometry are not applicable, and not only the pose but also the shape must be estimated. Few works in the literature address pose estimation of unknown space objects. In [15, 16], the feature points of the unknown space object are accumulated during continuous observation, and the pose is determined by matching these feature points. However, stable and continuous observation is often hard to obtain due to the limitations of observation conditions. To our knowledge, there is at present no effective method for estimating the 3D pose of an unknown space object from a single image.

This paper is devoted to estimating the 3D pose of uncooperative unknown space objects from a single image. Although this is an ill-posed problem, it can be made tractable by introducing prior information about the object structure. There has in fact been research on this problem in computer vision: a prior shape model is built from the structural similarity of objects within a category, and pose estimation of an unknown object is reduced to a 3D-to-2D shape fitting problem in which the shape and pose parameters are estimated simultaneously [17–19]. However, this solution is not practicable for space objects. Compared with common objects such as cars and chairs, different space objects exhibit large structural differences; for example, the number of solar panels, the shape of the main body, and the size of the antenna all vary across space objects. As a result, the structures of different space objects are hard to represent with a single unified model, which makes it difficult to introduce prior structural information into the parameter estimation of an unknown space object.

This paper presents an efficient method to estimate the 3D pose of unknown space objects from a single image. Although space objects have large structural variability, there are plenty of constraints among the components of typical space objects; for example, the solar panels of one object usually have the same size and a spatially symmetric arrangement. These constraints are valuable for estimating the pose and shape of unknown space objects. We therefore propose a hierarchical shape model of space objects to describe the prior information of object geometry, which represents the constraints among object components in the form of probability distributions. Compared with traditional prior shape models in computer vision such as wireframes [17–19] and meshes [20], the proposed hierarchical shape model is able to describe objects with large structural variability. With its support, the pose of unknown space objects can be estimated from a single image.

The main contribution of this paper is a pose estimation method for unknown space objects from a single image. Specifically, we establish a hierarchical shape model to describe the prior structures of typical space objects, and on this basis we present a parameter estimation method for unknown space objects.

2. Overview

Our method consists of two parts. First, the hierarchical shape model of typical space objects is built offline. Second, the pose and shape of the unknown object are estimated simultaneously from a single image with the support of the hierarchical shape model. The framework of our method is shown in Figure 1.

Section 3 describes the construction of the hierarchical shape model of space objects, and Section 4 presents the estimation of pose and shape.

3. Prebuilt Hierarchical Shape Model

This paper builds the hierarchical shape model offline to describe the prior structures of typical space objects. The prebuilt hierarchical shape model is derived from the structural regularities of space objects and is defined as a 2-tuple $G = (V, E)$, where $V$ represents the types of object components and $E$ indicates the constraints among object components. Each node $v \in V$ is a type of basic shape corresponding to one type of object component, and each edge $e \in E$ is a set of constraints between specific components. $V$ and $E$ are detailed as follows.

3.1. Object Components

$V$ represents the component types of space objects. The components of a space object can usually be simplified to basic shapes. In this paper, $V$ consists of four types of basic shapes:

$V = \{\text{rectangle}, \text{cylinder}, \text{cube}, \text{paraboloid}\},$

where the rectangle is used to represent the solar panels, the cylinder and the cube represent the main body of the space object, and the paraboloid represents the antenna. Admittedly, these basic shapes cannot represent all components of space objects. Nevertheless, they capture the general character of space objects and are able to describe the main structure of typical space objects. Besides, the set of basic shapes in our model is extensible. Each $v \in V$ has a set of attributes that describe its position, size, and pose, respectively, as below.

The attributes of a component are written as $(\mathbf{t}, \mathbf{s}, \boldsymbol{\theta})$, where $\mathbf{t} = (x, y, z)$ indicates the position of the component in the object frame, $\mathbf{s} = (l, w, h)$ represents the size of the component in the object frame, and $\boldsymbol{\theta}$ is the rotation angle between the object frame and the camera frame. Figure 2 shows the reference frames of our method.
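To make the component representation concrete, the following Python sketch shows one possible encoding of the node types and their attributes; the class and field names (ShapeType, Component) and the example sizes are illustrative assumptions, not the implementation used in the paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class ShapeType(Enum):
    """The four basic shape types serving as nodes of the hierarchical model."""
    RECTANGLE = "rectangle"    # solar panel
    CYLINDER = "cylinder"      # cylindrical main body
    CUBE = "cube"              # cuboid main body
    PARABOLOID = "paraboloid"  # antenna dish


@dataclass
class Component:
    """One object component: a basic shape plus its position, size, and pose attributes."""
    shape: ShapeType
    position: Tuple[float, float, float]  # centre in the object frame
    size: Tuple[float, float, float]      # extent along each axis of the object frame
    rotation: Tuple[float, float, float]  # rotation of the object frame w.r.t. the camera frame


# Example instance: a cuboid main body with two rectangular solar panels.
body = Component(ShapeType.CUBE, (0.0, 0.0, 0.0), (1.0, 1.0, 1.5), (0.0, 0.0, 0.0))
panel_left = Component(ShapeType.RECTANGLE, (-2.0, 0.0, 0.0), (2.5, 1.0, 0.02), (0.0, 0.0, 0.0))
panel_right = Component(ShapeType.RECTANGLE, (2.0, 0.0, 0.0), (2.5, 1.0, 0.02), (0.0, 0.0, 0.0))
```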

3.2. Constraints among Object Components

$E$ represents the constraints among the components of space objects. The constraints considered in this paper are shown in Figure 3 and exist commonly in typical space objects.

Each $e \in E$ consists of a set of constraints $c$, each defined as

$c = (f, P),$ (4)

where $f$ is a function over the attributes of the corresponding components of the object and $P$ is the probability distribution of the responses of $f$.

To explain in detail, Table 1 elaborates one constraint $e \in E$ as an example, which indicates the relation of reflective symmetry between two components, where $\text{axis}_i$ denotes the major axis of object component $i$ and $d(\text{axis}_1, \text{axis}_2)$ indicates the distance between the axes of the two components. The functions listed there measure the parallelism, conformance of size, and coaxiality between the two components, respectively. Each constraint in $E$ can be represented in this way.
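To illustrate how a constraint couples a function over component attributes with a probability distribution over its response, the sketch below encodes a size-conformance constraint between two solar panels as a Gaussian over the relative size difference; the function names and the Gaussian parameters are assumptions made for this example, not values from the paper.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable


@dataclass
class Constraint:
    """A constraint pairs a response function over component attributes
    with (the log of) a probability distribution over that response."""
    response: Callable[..., float]
    log_prob: Callable[[float], float]


def gaussian_log_prob(mean: float, sigma: float) -> Callable[[float], float]:
    """Log-density of a 1-D Gaussian used as the response distribution."""
    def lp(x: float) -> float:
        return -0.5 * ((x - mean) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return lp


def size_conformance(size_a, size_b) -> float:
    """Relative size difference between two components (0 when identical)."""
    sa, sb = np.asarray(size_a, float), np.asarray(size_b, float)
    return float(np.linalg.norm(sa - sb) / np.linalg.norm(sa))


# Two symmetric solar panels are expected to have (nearly) the same size.
same_size = Constraint(response=size_conformance,
                       log_prob=gaussian_log_prob(mean=0.0, sigma=0.05))
score = same_size.log_prob(same_size.response((2.5, 1.0, 0.02), (2.51, 1.0, 0.02)))
```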

4. Pose and Shape Estimation

On the basis of the prebuilt hierarchical shape model, the pose and shape of the unknown space object can be estimated from the input image. The estimation process is depicted in the lower part of Figure 1.

First, image features are extracted from the input image, and the candidate regions of object components are determined from these features, as detailed in Section 4.1. Second, based on the image features and the component regions, the pose and shape are estimated with the support of the prebuilt hierarchical shape model of space objects, as detailed in Section 4.2. Since this parameter estimation is a complicated optimization problem, the optimization strategy is given in Section 4.3.

4.1. Image Processing

For the input image, the first step is feature extraction. In our method, the features extracted from the input image are lines and arcs, because the components of a space object can usually be treated as simple basic shapes such as cylinders and cuboids, which produce distinctive lines and arcs in the image.

The feature extraction method in this paper is inspired by [21], where connected pixels are extracted from the gradient magnitudes of the image and lines are then detected by aggregating these pixels. Similarly to [21], we extract lines and arcs from the input image by defining corresponding aggregation rules. Figure 4 shows the extraction of image features; the image features used in our method are the set of detected lines and arcs. Experiments in Section 5.3 verify the advantage of the image features used in our method.
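For readers who want to experiment, the sketch below extracts line segments and, as a rough stand-in for elliptical arcs, circles with standard OpenCV detectors; it approximates the kind of line-and-arc features described above but is not the gradient-aggregation scheme of [21] used in the paper.

```python
import cv2
import numpy as np


def extract_line_and_arc_features(image_path: str):
    """Detect line segments and circles as simple image features.

    Hough detectors are used here only as an assumed stand-in for the
    aggregation-based extraction described in the paper.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)

    # Line segments via the probabilistic Hough transform on the edge map.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    lines = np.empty((0, 4)) if lines is None else lines.reshape(-1, 4)

    # Circles as a coarse proxy for arcs (cylinder rims, antenna dish).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=150, param2=40, minRadius=10, maxRadius=200)
    circles = np.empty((0, 3)) if circles is None else circles.reshape(-1, 3)

    return lines, circles
```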

After feature extraction, the candidate regions of object components are determined based on the detected lines and arcs. For example, the candidate region of a solar panel contains long parallel lines, the candidate region of a cylindrical main body contains parallel lines and arcs, the candidate region of a cuboid main body contains several sets of parallel lines, and the candidate region of a paraboloid antenna contains arcs.
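As a minimal illustration of this grouping step, the sketch below pairs long, nearly parallel line segments into candidate solar-panel regions; the length and angle thresholds are arbitrary assumptions, not values taken from the paper.

```python
import numpy as np


def solar_panel_candidates(lines, angle_tol_deg=5.0, min_length=40.0):
    """Group long, nearly parallel segments (x1, y1, x2, y2) into candidate
    panel regions, returned as bounding boxes (x_min, y_min, x_max, y_max)."""
    segs = np.asarray(lines, dtype=float).reshape(-1, 4)
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1],
                                   segs[:, 2] - segs[:, 0])) % 180.0
    segs, angles = segs[lengths >= min_length], angles[lengths >= min_length]

    candidates = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            diff = abs(angles[i] - angles[j])
            if min(diff, 180.0 - diff) <= angle_tol_deg:
                pts = np.concatenate([segs[i].reshape(2, 2), segs[j].reshape(2, 2)])
                candidates.append((*pts.min(axis=0), *pts.max(axis=0)))
    return candidates
```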

4.2. Estimation of Pose and Shape

Under the hierarchical shape model, an arbitrary object can be represented as an instance $O = (V_O, E_O)$ of $G$, where $V_O$ is the set of component instances drawn from $V$ and $E_O$ is the set of constraints from $E$ that hold among them. With the support of the hierarchical shape model, estimation of the pose and shape is converted into determining the instance $O$ corresponding to the input image.

When a single image $I$ is input, the nodes of the hierarchical shape model are activated. The object instance $O$ is obtained by maximizing the a posteriori probability:

$O^{*} = \arg\max_{O} P(O \mid I) = \arg\max_{O} P(O)\,P(I \mid O),$ (5)

where $P(O)$ is the probability distribution of the activated nodes in the hierarchical shape model, which is the product of the probability $P(E_O)$ of the constraint types and the probability $P(V_O \mid E_O)$ of the component attributes under those constraints. The conditional probability $P(I \mid O)$ measures the matching degree between the inferred object and the input image. Consequently, (5) can be represented as

$O^{*} = \arg\max_{O} P(E_O)\,P(V_O \mid E_O)\,P(I \mid O).$ (6)

The first term in (6), $P(E_O)$, indicates the probability of each type of constraint and is regarded as a constant in this paper.

According to the definition of the constraints among the components in (4), the second term in (6) can be expressed as

$P(V_O \mid E_O) = \prod_{c \in E_O} P_c\big(f_c(\cdot)\big),$ (7)

i.e., the product over all activated constraints of the probability of each constraint response.

The last term in (6) measures the matching degree between the inferred object and the input image. For the input image, lines and arcs are detected as image features, and $P(I \mid O)$ is defined through the matching degree between the distance map of the image features and the edges of the projection of the inferred object $O$:

$P(I \mid O) \propto \exp\big(-d\big(T(I), \Pi(O)\big)\big),$ (8)

where $T(I)$ is the distance map of the image features, $\Pi(O)$ is the projected edge of the inferred object, and $d(\cdot,\cdot)$ measures the difference between the distance map and the projected edge. At this point, the constraints among the components of space objects have been represented as probability distributions.
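A hedged sketch of how the matching term in (8) can be evaluated: the distance transform of the detected edge map is sampled at the projected edge points of the hypothesised object, and the mean distance is converted into a log-likelihood. The exponential form and the scale parameter follow the reconstruction above and are assumptions for illustration; the projection of the object is taken as given.

```python
import cv2
import numpy as np


def edge_distance_map(edges: np.ndarray) -> np.ndarray:
    """Distance (in pixels) from every pixel to the nearest detected edge."""
    # distanceTransform measures distance to zero-valued pixels, so the
    # binary edge map is inverted first (edges become zeros).
    inverted = np.where(edges > 0, 0, 255).astype(np.uint8)
    return cv2.distanceTransform(inverted, cv2.DIST_L2, 3)


def matching_log_likelihood(dist_map: np.ndarray, projected_pts: np.ndarray,
                            scale: float = 2.0) -> float:
    """Log of the assumed matching score exp(-mean_distance / scale), where
    `projected_pts` are integer (row, col) coordinates of the projected edges
    of the hypothesised object."""
    r = np.clip(projected_pts[:, 0].astype(int), 0, dist_map.shape[0] - 1)
    c = np.clip(projected_pts[:, 1].astype(int), 0, dist_map.shape[1] - 1)
    return -float(dist_map[r, c].mean()) / scale
```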

4.3. Optimization Strategy

Substituting (7) and (8) into (6), we have

$O^{*} = \arg\max_{O}\ \prod_{c \in E_O} P_c\big(f_c(\cdot)\big)\ \exp\big(-d\big(T(I), \Pi(O)\big)\big).$ (9)
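In log form, the reconstructed objective (9) is simply the sum of the constraint log-probabilities and the image-matching term; the small sketch below shows that combination, with the constant constraint-type prior dropped and the inputs assumed to be precomputed.

```python
import numpy as np


def log_posterior(constraint_log_probs, mean_edge_distance, scale=2.0):
    """Log of (9): shape prior (sum of constraint log-probabilities) plus the
    image-matching term -mean_edge_distance / scale."""
    return float(np.sum(constraint_log_probs)) - mean_edge_distance / scale


# Example: two well-satisfied constraints and a mean projected-edge distance
# of 3 pixels for the current pose/shape hypothesis.
print(log_posterior([-0.1, -0.3], mean_edge_distance=3.0))
```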

To solve this complicated optimization problem, the model inference strategy is as follows: (1) the pose of the main body of the space object is estimated first and used as the initial value for the other components, because the main body can usually be treated as a cylinder or cuboid, whose image features differ significantly across poses; (2) pruning is used to remove incompatible proposals during inference of the hierarchical shape model; and (3) particle swarm optimization [22] is used in the model inference, as sketched below.
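The following is a compact particle swarm optimisation loop of the kind referred to in point (3); the inertia and acceleration coefficients are common textbook values, and the objective is any callable mapping a pose-and-shape parameter vector to the log-posterior, so this is a generic sketch rather than the exact solver used in the paper.

```python
import numpy as np


def pso_maximize(objective, lower, upper, n_particles=40, n_iters=100, seed=0):
    """Maximise `objective` over the box [lower, upper] with a basic PSO
    (inertia w = 0.7, cognitive/social weights c1 = c2 = 1.5)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pos = rng.uniform(lower, upper, size=(n_particles, lower.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest, gbest_val = pbest[pbest_val.argmax()].copy(), pbest_val.max()

    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.max() > gbest_val:
            gbest_val, gbest = vals.max(), pos[vals.argmax()].copy()
    return gbest, gbest_val


# Toy usage: recover a 3-D rotation vector that maximises a dummy score.
target = np.array([0.3, -0.5, 1.0])
best, score = pso_maximize(lambda p: -np.linalg.norm(p - target),
                           lower=[-np.pi] * 3, upper=[np.pi] * 3)
```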

5. Experiment

The experiments consist of three parts. In Section 5.1, the proposed pose estimation method is evaluated on four types of unknown space objects, with the purpose of verifying its ability to estimate the 3D pose of unknown space objects. In Section 5.2, the accuracy of pose estimation on an object with a changed structure is compared between our method and a mainstream method that relies on the known geometry of the object; this part reflects the advantage of our method over approaches applicable only to known objects. In Section 5.3, the performance of the image features described in Section 4.1 is compared with other common features within the framework of our method, in order to prove the validity of the image features extracted in our method for space objects.

5.1. Pose Estimation of Unknown Objects

The performance of our method is evaluated on four types of unknown space objects, each from a single image. The experimental data are produced by computer simulation. For each object, 30 test images are sampled from different views, as shown in Figure 5.

The error of pose estimation is measured as the deviation between the predicted rotation vector and the ground-truth rotation vector.
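The exact error formula is not reproduced here; a common choice, assumed only for illustration, is the geodesic angle of the relative rotation between the predicted and ground-truth rotation vectors, which the short sketch below computes with SciPy.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def rotation_error_deg(rotvec_pred, rotvec_gt) -> float:
    """Geodesic angle (degrees) of the relative rotation between two rotation vectors."""
    r_pred, r_gt = R.from_rotvec(rotvec_pred), R.from_rotvec(rotvec_gt)
    return float(np.degrees((r_pred * r_gt.inv()).magnitude()))


# Example: two rotations differing by 0.05 rad about the z-axis (~2.9 degrees).
print(rotation_error_deg([0.0, 0.0, 0.50], [0.0, 0.0, 0.55]))
```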

The quantitative results of four objects are shown sequentially in Figure 6.

The qualitative results are shown in Figure 7.

In our method, shape inference and pose estimation are conducted simultaneously, and the inferred object shape is represented as a combination of basic shapes. The experiments show the effectiveness of our method in estimating the 3D pose of unknown space objects.

5.2. Pose Estimation of the Object with Structure Changed

Most existing methods rely on the known geometry of the space object; however, there is often a difference between the known reference model and the actual object. For pose estimation of an object with a changed structure, the performance of our method is compared with the method in [10], which estimates the pose by matching against a template library reflecting the object features in different poses and is representative of current approaches. The experimental object in this part is the first object in Section 5.1. The angle deviation between the solar panels and the main body is set to 0°, 15°, 30°, 45°, and 60°, respectively. The experimental results are shown in Figure 8.

It can be seen that as the angle deviation increases, the accuracy of the method in [10] decreases significantly, whereas our method performs robustly under changes of the object structure. The result shows the limitation of methods relying on known object geometry and the necessity of estimating the pose of unknown objects.

5.3. Comparison of Image Features

In our method, the features extracted from the input image are lines and arcs. To verify the validity of the features extracted in this paper, the following experiment is designed: within the framework of our method, the performance of our image features is compared with several common features, including edges [23], lines [21], and HOG [24]. These features are displayed in Figure 9.

The quantitative comparison between the image feature in our method and other features is shown in Figure 10.

From the results, the image features extracted in our method perform better than the other features, because they reflect the characteristics of space objects well.

6. Conclusion

This paper presents a pose estimation method for uncooperative unknown space objects from a single image. A hierarchical shape model is established to represent the prior structural information of space objects, and a model inference procedure is presented to estimate the shape and pose parameters simultaneously. The experiments verify the effectiveness of our method and show the advantage of the proposed approach over methods that rely on the known geometry of the object. Our research contributes to understanding the 3D pose and geometry of unknown objects from a single image.

Data Availability

The underlying data supporting the results of our study are public 3D models of space targets obtained from the Internet.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This work was supported by the National Key Laboratory Foundation (grant number 61404130316).