Abstract

Compared to ground-based observation, space-based observation is an effective approach to cataloging and monitoring the growing population of space objects. In this paper, space object detection in a video satellite image with a star image background is studied. A new detection algorithm using motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. The effect of satellite attitude motion on the image is analyzed quantitatively and shown to decompose into translation and rotation. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and the detection of the previous frame is used to segment each single-frame image. The algorithm then uses the correlation of object motion across multiple frames and the satellite attitude motion information to detect the object. Experimental results on a video image from the Tiantuo-2 satellite show that the algorithm detects space objects effectively.

1. Introduction

On September 8, 2014, Tiantuo-2 (TT-2), independently designed by the National University of Defense Technology, was successfully launched into orbit. It is the first Chinese interactive Earth observation microsatellite using a video imaging system, with a mass of 67 kg and an orbital altitude of 490 km. Experiments based on interactive control strategies with a human in the loop were carried out to realize continuous tracking and monitoring of moving objects.

Video satellites are now widely used by researchers at home and abroad, and several have been launched into orbit, such as SkySat-1 and SkySat-2 by Skybox Imaging, the TUBSAT series by the Technical University of Berlin [1], and the video satellite by Chang Guang Satellite Technology. They obtain video images with different on-orbit performances.

The growing number of space objects has an increasing impact on human space activities, and cataloging and monitoring them has become a hot issue in the field of space environment [2–5]. Compared to ground-based observation, space-based observation is not restricted by weather or geographical location and avoids the disturbance of the atmosphere on the object's signal, which gives it a unique advantage [2, 3, 6]. Using a video satellite to observe space objects is thus an effective approach.

Object detection and tracking in satellite video images is an important part of space-based observation. For the general problem of object detection and tracking in video, optical flow [7], block matching [8], template detection [9, 10], and other methods have been proposed. However, most of these methods are based on gray level and require rich texture information [11]. In fact, small object detection in optical images with a star background presents two main difficulties: (1) the object occupies only one or a few pixels in the image, so its shape is not available; (2) owing to background stars and the noise introduced by the space environment and the detection equipment, the object is almost submerged in complex background bright spots, which greatly increases the difficulty of detection.

Aiming at these difficulties, many scholars have proposed various algorithms, mainly falling into two categories: track before detect (TBD) and detect before track (DBT). Multistage hypothesis testing (MHT) [12] and dynamic programming-based algorithms [13–15] can be classified as TBD, which is effective when the image signal-to-noise ratio is very low but suffers from high computational complexity and hard thresholding [16]. In practice, DBT is usually adopted for object detection in star images [17]. Some DBT algorithms are discussed below.

An object trace acquisition algorithm for sequence star images with a moving background was put forward in [18] to detect the discontinuous trace of a small space object; it used the centroid of a group of the brightest stars in the sequence images to match images and filter background stars. A space object detection algorithm based on the principle of star map recognition was put forward in [19], which used a triangle algorithm to accomplish image registration and detected the space object in the registered image series. Based on the analysis of a star image model, Han and Liu [20] proposed an image registration method that extracts feature points and then matches triangles. Zhang et al. [21] used a triangle algorithm to match background stars in image series, classified stars and potential targets, and then utilized the 3-frame nearest neighboring correlation method to detect targets coarsely, filtering false targets with a multiframe back-and-forth searching method. An algorithm based on iterative optimization distance classification was proposed in [22] to detect small visible optical space objects; it used the classification method to filter background stars and utilized the continuity of object motion to achieve trajectory association.

The core idea of the DBT algorithms above is to match the sequence images based on the relative invariance of star positions and to filter the background stars; they therefore also involve a large amount of computation and have poor real-time performance. A real-time detection algorithm for small space objects based on FPGA and DSP was presented in [23], but the instability and motion of the observation platform were not considered. In addition, the image sequences used to validate the algorithms in the literature above come from ground photoelectric telescopes in [15, 17, 18, 21–23] or are generated by simulation in [19, 20]; the characteristics of image sequences from space-based observation have not been considered.

This paper studies space object detection in a video satellite image with a star image background, and a new detection algorithm based on motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. Experimental results on a video image from TT-2 demonstrate the effectiveness of the algorithm.

2. Characteristic Analysis of Satellite Video Images

A video satellite image of space contains the deep space background, stars, objects, and noise introduced by imaging devices and cosmic rays. The mathematical model is given by [24] as follows:

$$f(x, y, t) = f_B(x, y, t) + f_S(x, y, t) + f_O(x, y, t) + f_N(x, y, t) \qquad (1)$$

where $f_B$ is the gray level of the deep space background, $f_S$ is the gray level of stars, $f_O$ is the gray level of objects, and $f_N$ is the gray level of noise; $(x, y)$ is the pixel coordinate in the image, and $t$ is the time of the video.

In the video image, stars and weak small objects occupy only one or a few pixels, and it is difficult to distinguish an object from background stars by morphological characteristics or photometric features. Moreover, attitude motion of the object leads to changing brightness, and the object may even be lost for several frames. Therefore, it is almost impossible to detect the object in a single-frame image, and it is necessary to use the continuity of object motion across multiframe images.

Figure 1 is a local image of a video from TT-2, where Figure 1(a) is a star and Figure 1(b) is an object (debris). It is impossible to distinguish them by morphological characteristics or photometric features.

3. Object Detection Using Motion Information

When the attitude motion of a video satellite is stabilized, background stars move extremely slowly in the video and can be considered static over several frames. At the same time, noise is random (dead pixels appear at fixed positions), and only objects move continuously. This is the most important distinction between the motion characteristics of their images. Because of platform jitter, however, objects cannot be detected by a simple frame difference method. A new object detection algorithm using motion information (NODAMI) is therefore proposed. The motion information includes the known satellite attitude motion information: when the attitude changes, the video image varies accordingly, and this case can be transformed into the case of attitude stabilization by compensating the image for the attitude motion. The complexity of the detection algorithm is thus greatly reduced, since no image registration is required.

The procedure of NODAMI is shown in Figure 2.

First, the effect of satellite attitude motion on the image is analyzed and the attitude motion compensation formula is derived. Then, each step of NODAMI is described in detail.

3.1. Compensation of Satellite Attitude Motion

The inertial frame is defined as $O_I x_I y_I z_I$, originating at the center of the Earth, with the $z_I$-axis aligned with the Earth's North Pole and the $x_I$-axis aligned with the vernal equinox. The satellite body frame is defined as $O_b x_b y_b z_b$, originating at the mass center of the satellite, with the three axes aligned with the principal axes of the satellite. The pixel frame is defined as $ouv$, originating at the upper left corner of the image plane and using the pixel as the coordinate unit; $u$ and $v$ represent the number of columns and rows, respectively, of the pixel in the image. The image frame is defined as $oxy$, originating at the intersection of the optical axis and the image plane, with the $x$-axis and the $y$-axis aligned with $u$ and $v$, respectively. The camera frame is defined as $O_c x_c y_c z_c$, originating at the optical center of the camera, with the $z_c$-axis aligned with the optical axis, the $x_c$-axis aligned with $x$, and the $y_c$-axis aligned with $y$. All the 3D frames defined above are right-handed.

In the inertial frame, let the coordinate of the object be $\mathbf{r}_T$ and the coordinate of the satellite be $\mathbf{r}_S$. Assuming that the 3-2-1 Euler angle of the satellite body frame with respect to the inertial frame is $(\psi, \theta, \varphi)$, the coordinate of the object in the satellite body frame is given as follows:

$$\mathbf{r}_b = A_{bI}\,(\mathbf{r}_T - \mathbf{r}_S), \qquad A_{bI} = R_x(\varphi)\,R_y(\theta)\,R_z(\psi) \qquad (2)$$

where $A_{bI}$ is the attitude matrix of the satellite body frame with respect to the inertial frame and $R_x(\theta)$, $R_y(\theta)$, and $R_z(\theta)$ are the coordinate transformation matrices after a rotation around the x-axis, y-axis, and z-axis, respectively, with the angle $\theta$:

$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}, \quad R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}, \quad R_z(\theta) = \begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
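
For concreteness, a minimal numerical sketch of (2) and (3) follows; the function names are ours, and the sign conventions are those of the matrices above.

```python
# A minimal sketch of equations (2) and (3) in Python/NumPy; function names
# and sign conventions follow the reconstruction above.
import numpy as np

def R_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def body_coordinates(r_T, r_S, psi, theta, phi):
    """Object position in the satellite body frame, equation (2):
    3-2-1 sequence, rotating about z (psi), then y (theta), then x (phi)."""
    A_bI = R_x(phi) @ R_y(theta) @ R_z(psi)
    return A_bI @ (np.asarray(r_T) - np.asarray(r_S))
```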

Assuming that the camera is fixed on the satellite and the 3-2-1 Euler angle of the camera frame with respect to the satellite body frame is $(\psi_c, \theta_c, \varphi_c)$, which is constantly determined by the design of the satellite structure, the attitude matrix $A_{cb}$ of the camera frame with respect to the satellite body frame can be derived. Let the coordinate of the optical center $O_c$ in the satellite body frame be $\mathbf{r}_{cb}$; then, the coordinate of the object in the camera frame is given as follows:

$$\mathbf{r}_c = A_{cb}\,(\mathbf{r}_b - \mathbf{r}_{cb}) \qquad (4)$$

The design of the video satellite tries to make $(\psi_c, \theta_c, \varphi_c)$ and $\mathbf{r}_{cb}$ zero; in practice, they are small quantities, and $\|\mathbf{r}_{cb}\|$ is much smaller than $\|\mathbf{r}_b\|$. Without loss of generality, let $A_{cb} = I$ and $\mathbf{r}_{cb} = \mathbf{0}$; that is, the camera frame and the satellite body frame coincide.

As shown in Figure 3, the coordinate of the object in the image frame is given as follows:

$$x = f\,\frac{x_c}{z_c}, \qquad y = f\,\frac{y_c}{z_c} \qquad (5)$$

where $f$ is the focal length of the camera and $\mathbf{r}_c = (x_c, y_c, z_c)^T$.

If the field of view of the camera is FOV, then $\max(|x|, |y|) \le f\tan(\mathrm{FOV}/2)$, so $x/f$ and $y/f$ are small quantities for a narrow-field camera.

The coordinate of the object in the pixel frame is given as follows:

$$u = \frac{x}{d} + u_c, \qquad v = \frac{y}{d} + v_c \qquad (6)$$

where $d$ is the size of each pixel and $(u_c, v_c)$ is the coordinate of the image center.
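
The projection chain (4)–(6) can be sketched as follows, under the simplification of the previous paragraphs ($A_{cb} = I$, $\mathbf{r}_{cb} = \mathbf{0}$); the function and parameter names are ours.

```python
# Sketch of the projection chain (5)-(6): camera-frame coordinates to image
# and pixel coordinates. f, d, and the image center (u_c, v_c) are camera
# constants; names are ours.
def camera_to_pixel(r_c, f, d, u_c, v_c):
    """r_c: (x_c, y_c, z_c) in the camera frame; returns (u, v)."""
    x_c, y_c, z_c = r_c
    x = f * x_c / z_c            # image frame, equation (5)
    y = f * y_c / z_c
    u = x / d + u_c              # pixel frame, equation (6)
    v = y / d + v_c
    return u, v
```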

The video satellite can achieve tracking imaging of the object based on interactive attitude adjustment with a human in the loop. Attitude adjustment is applied to the attitude of the satellite body frame with respect to the inertial frame. Assuming that the 3-2-1 Euler angle of the attitude adjustment is $(\psi_a, \theta_a, \varphi_a)$, the coordinates of the point $P$ in the camera frame, the image frame, and the pixel frame are $\mathbf{r}_c$, $(x, y)$, and $(u, v)$, respectively, before the attitude adjustment and $\mathbf{r}_c'$, $(x', y')$, and $(u', v')$, respectively, after the attitude adjustment.

Ignoring the orbit motion of the satellite during the attitude adjustment, which is reasonable when the adjustment is instantaneous, gives

$$\mathbf{r}_c' = R_x(\varphi_a)\,R_y(\theta_a)\,R_z(\psi_a)\,\mathbf{r}_c \qquad (7)$$

Denote $R_x(\varphi_a)\,R_y(\theta_a)\,R_z(\psi_a)$ as $A_a$ and the transformation functions determined by $R_x(\varphi_a)$, $R_y(\theta_a)$, and $R_z(\psi_a)$ as $T_x$, $T_y$, and $T_z$, that is,

$$T_x(\mathbf{r}) = R_x(\varphi_a)\,\mathbf{r}, \qquad T_y(\mathbf{r}) = R_y(\theta_a)\,\mathbf{r}, \qquad T_z(\mathbf{r}) = R_z(\psi_a)\,\mathbf{r} \qquad (8)$$

Then,

$$\mathbf{r}_c' = A_a\,\mathbf{r}_c = T_x(T_y(T_z(\mathbf{r}_c))) \qquad (9)$$

Denote $(x, y)$ as $\mathbf{p}$ and the mapping from the camera frame to the image frame as $G$, that is,

$$\mathbf{p} = G(\mathbf{r}_c) = \left( f\,\frac{x_c}{z_c},\ f\,\frac{y_c}{z_c} \right) \qquad (10)$$

Denote the transformation function of the image frame determined by $T_i$ as $H_i$, that is,

$$H_i(G(\mathbf{r})) = G(T_i(\mathbf{r})), \qquad i \in \{x, y, z\} \qquad (11)$$

Then,

$$\mathbf{p}' = H_x(H_y(H_z(\mathbf{p}))) \qquad (12)$$

Equation (12) shows that the transformation of the image frame caused by attitude adjustment can be decomposed into a composition of transformation functions, each determined by a rotation around one of the satellite body frame axes.

Obviously,

$$H_z(x, y) = \left( x\cos\psi_a + y\sin\psi_a,\ -x\sin\psi_a + y\cos\psi_a \right)$$
$$H_y(x, y) = \left( f\,\frac{x\cos\theta_a - f\sin\theta_a}{x\sin\theta_a + f\cos\theta_a},\ \frac{f\,y}{x\sin\theta_a + f\cos\theta_a} \right)$$
$$H_x(x, y) = \left( \frac{f\,x}{f\cos\varphi_a - y\sin\varphi_a},\ f\,\frac{y\cos\varphi_a + f\sin\varphi_a}{f\cos\varphi_a - y\sin\varphi_a} \right) \qquad (13)$$

When $\psi_a$, $\theta_a$, and $\varphi_a$ are small quantities, $H_x$, $H_y$, and $H_z$ can be approximated as follows:

$$H_z(x, y) \approx \left( x\cos\psi_a + y\sin\psi_a,\ -x\sin\psi_a + y\cos\psi_a \right), \qquad H_y(x, y) \approx (x - f\theta_a,\ y), \qquad H_x(x, y) \approx (x,\ y + f\varphi_a) \qquad (14)$$

Thus, the conclusion can be drawn that the effect of satellite attitude motion on the image can be decomposed into translation and rotation: small roll and pitch rotations lead to image translation, whereas yaw induces image rotation.

In particular, when $\psi_a = 0$, the attitude motion compensation formula can be given as follows:

$$u' = u - \frac{f\,\theta_a}{d}, \qquad v' = v + \frac{f\,\varphi_a}{d} \qquad (15)$$

In practice, onboard sensors, such as the fiber optic gyroscope, give the angular rate of attitude motion $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)^T$. Supposing that the time interval per frame is $\Delta t$, when $\Delta t$ is a small quantity, the angular rate within $\Delta t$ can be regarded as constant. Let

$$\boldsymbol{\phi} = \boldsymbol{\omega}\,\Delta t = (\phi_x, \phi_y, \phi_z)^T, \qquad \phi = \|\boldsymbol{\phi}\| \qquad (16)$$

Then, the attitude adjustment matrix of the current frame moment with respect to the previous frame moment is given as follows:

$$A_a = I - \frac{\sin\phi}{\phi}\,[\boldsymbol{\phi}]_\times + \frac{1 - \cos\phi}{\phi^2}\,[\boldsymbol{\phi}]_\times^2 \qquad (17)$$

where

$$[\boldsymbol{\phi}]_\times = \begin{pmatrix} 0 & -\phi_z & \phi_y \\ \phi_z & 0 & -\phi_x \\ -\phi_y & \phi_x & 0 \end{pmatrix} \qquad (18)$$

The 3-2-1 Euler angle can be solved as follows:

$$\psi_a = \arctan\frac{A_a(1,2)}{A_a(1,1)}, \qquad \theta_a = -\arcsin A_a(1,3), \qquad \varphi_a = \arctan\frac{A_a(2,3)}{A_a(3,3)} \qquad (19)$$

Generally, attitude adjustment does not rotate the image; that is, $\psi_a$ can be approximated as 0. And $\theta_a$ and $\varphi_a$ can be approximated as follows:

$$\theta_a \approx \omega_y\,\Delta t, \qquad \varphi_a \approx \omega_x\,\Delta t \qquad (20)$$

since they are small quantities.

Substituting $\psi_a = 0$ and (20) into (15) gives

$$u' = u - \frac{f\,\omega_y\,\Delta t}{d}, \qquad v' = v + \frac{f\,\omega_x\,\Delta t}{d} \qquad (21)$$
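
As an illustration of (21), the following sketch computes the expected inter-frame image shift from the gyro rates; the function and parameter names are ours, and the signs assume the axis conventions of the derivation above, which would need to be checked against the actual camera mounting.

```python
# Sketch of the attitude motion compensation of equation (21): the expected
# inter-frame image translation computed from measured gyro rates.
def pixel_shift(omega_x, omega_y, dt, f, d):
    """Return (du, dv), the image shift over one frame interval dt [s],
    for roll/pitch rates omega_x, omega_y [rad/s], focal length f and
    pixel size d (in the same length unit)."""
    du = -f * omega_y * dt / d   # small pitch rotation translates the image in u
    dv = f * omega_x * dt / d    # small roll rotation translates the image in v
    return du, dv
```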

3.2. Image Denoising

Noise in the video image mainly includes space radiation noise, space background noise, and CCD dark current noise. A bilateral filter can be used for denoising; it is a simple, noniterative scheme for edge-preserving smoothing [25]. The weights of the filter have two components: the first is the same weighting used by the Gaussian filter, and the second takes into account the difference in intensity between the neighboring pixels and the evaluated one. The diameter of the filter is set to 5, and the weights are given as follows:

$$w(i, j) = \frac{1}{C}\,\exp\!\left( -\frac{(i - k)^2 + (j - l)^2}{2\sigma_d^2} - \frac{(f(i, j) - f(k, l))^2}{2\sigma_r^2} \right) \qquad (22)$$

where $C$ is the normalization constant, $(k, l)$ is the center of the filter, $f(i, j)$ is the gray level at $(i, j)$, $\sigma_d$ is set to 10, and $\sigma_r$ is set to 75.
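
For reference, this is how such a filter can be applied with OpenCV; a sketch, assuming the values 10 and 75 map to the spatial and intensity standard deviations, respectively.

```python
# Denoising one frame with OpenCV's bilateral filter.
import cv2

def denoise(frame_gray):
    # d = 5: filter diameter; sigmaColor: intensity sigma; sigmaSpace: spatial sigma
    return cv2.bilateralFilter(frame_gray, d=5, sigmaColor=75, sigmaSpace=10)
```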

3.3. Single-Frame Binary Segmentation

In order to distinguish between stars and objects, it is necessary to remove the background in each frame image, but stray light leads to an uneven gray level distribution of the background. Figure 4(a) is a single frame of the video image, and Figure 4(b) is its gray level histogram. The single-peak shape of the histogram shows that the classical global threshold method, traditionally used to segment star images [15, 18, 20–22, 24], cannot be used to segment the video image.

Whether it is a star or an object, its gray level is greater than that of the pixels in its neighborhood. Consider using variable thresholding based on local image properties to segment the image. Calculate the standard deviation $\sigma_{xy}$ and mean value $m_{xy}$ of the neighborhood of every point $(x, y)$ in the image, which are descriptors of the local contrast and average gray level. Then, the variable thresholding based on local contrast and average gray level is given as follows:

$$T_{xy} = a\,\sigma_{xy} + b\,m_{xy} \qquad (23)$$

where $a$ and $b$ are constants greater than 0; $b$ weights the contribution of the local average gray level and can be set to 1, while $a$ weights the contribution of the local contrast and is the main parameter to set according to the object characteristic.

On the other hand, the brightness of a space moving object sometimes changes with its changing attitude. If $a$ were set to a constant, the object would be lost in some frames. Considering the continuity of object motion, if $(u_0, v_0)$ in the current frame is detected as a probable object coordinate, the object detection probability is much greater in the 5 × 5 window of $(u_0, v_0)$ in the next frame. Integrated with the continuity of object brightness change, if no probable object is detected in the 5 × 5 window of $(u_0, v_0)$ in the next frame, $a$ can be reduced by a factor, that is, $a_{k+1} = \beta\,a_k$ with $0 < \beta < 1$; and the probable object coordinate detected in the 5 × 5 window of $(u_0, v_0)$ in the next frame is stored as the new $(u_0, v_0)$. So the variable thresholding based on local image properties and the detection of the previous frame is given as follows:

$$T_{xy}^{(k)} = a_k\,\sigma_{xy} + b\,m_{xy} \qquad (24)$$

The difference between (23) and (24) is the variable coefficient $a_k$ based on the detection of the previous frame. $a_k$ is initially set a little greater than 1 and is reset to the initial value when the gray level at $(u_0, v_0)$ becomes large again (greater than 150).

The image binary segmentation algorithm is given as follows:

$$g(x, y) = \begin{cases} 1, & f(x, y) > T_{xy}^{(k)} \\ 0, & f(x, y) \le T_{xy}^{(k)} \end{cases} \qquad (25)$$

where $f(x, y)$ is the gray level of the original image at $(x, y)$ and $g(x, y)$ is the gray level of the segmented image at $(x, y)$.
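
A compact sketch of (23)–(25) follows, computing the local statistics over the 7 × 7 neighborhood used in Section 4; the variable names and the box-filter implementation are our assumptions.

```python
# Sketch of the single-frame segmentation of equations (23)-(25): a per-pixel
# threshold a_k*sigma + b*m from local mean and standard deviation.
import cv2
import numpy as np

def segment(frame_gray, a_k=1.1, b=1.0, win=7):
    f = frame_gray.astype(np.float32)
    m = cv2.blur(f, (win, win))                   # local mean m_xy
    sq = cv2.blur(f * f, (win, win))
    sigma = np.sqrt(np.maximum(sq - m * m, 0.0))  # local standard deviation sigma_xy
    T = a_k * sigma + b * m                       # equations (23)/(24)
    return (f > T).astype(np.uint8)               # equation (25): 1 = candidate pixel
```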

3.4. Coordinate Extraction

In an ideal optical system, a point object in the CCD focal plane occupies one pixel, but in practical imaging conditions, circular aperture diffraction causes the object to diffuse over multiple pixels in the focal plane. So the coordinate of the object in the pixel frame is determined by the position of the center of the gray level. The simple gray-weighted centroid algorithm is used to calculate the coordinates, with positioning accuracy up to 0.1~0.3 pixels [26], as follows:

$$u_0 = \frac{\sum_{(u, v) \in S} u\,f(u, v)}{\sum_{(u, v) \in S} f(u, v)}, \qquad v_0 = \frac{\sum_{(u, v) \in S} v\,f(u, v)}{\sum_{(u, v) \in S} f(u, v)} \qquad (26)$$

where $S$ is the object area after segmentation, $f(u, v)$ is the gray level of the original image at $(u, v)$, and $(u_0, v_0)$ is the coordinate of the object.
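
A direct sketch of (26), with our own variable names:

```python
# Gray-weighted centroid of one segmented object area S, equation (26).
import numpy as np

def centroid(frame_gray, mask):
    """mask: binary image, nonzero inside one object area S after segmentation."""
    v_idx, u_idx = np.nonzero(mask)      # rows correspond to v, columns to u
    w = frame_gray[v_idx, u_idx].astype(np.float64)
    u0 = np.sum(u_idx * w) / np.sum(w)
    v0 = np.sum(v_idx * w) / np.sum(w)
    return u0, v0
```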

These coordinates are fed back into the single-frame binary segmentation of the next frame.

3.5. Trajectory Association

After processing a frame image, some probable object coordinates are obtained. They are associated with existing trajectories, or new trajectories are generated, by the nearest neighborhood filter (see the sketch below). The radius of the neighborhood is determined by the object characteristic: it must be greater than the distance the object image moves in one frame, while tolerating the loss of the object for several frames.
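
The following sketch shows one plausible form of the nearest neighborhood filter, gating candidates around the predicted trajectory positions described next; the interface is our assumption.

```python
# Nearest-neighborhood association: each predicted trajectory position is
# matched to the closest candidate coordinate within radius r
# (r = 5 pixels in Section 4).
import numpy as np

def associate(candidates, predictions, r=5.0):
    """candidates: (M, 2) array of probable object coordinates in this frame;
    predictions: (N, 2) array of predicted trajectory positions.
    Returns a list of (trajectory index, candidate index) pairs."""
    pairs = []
    if len(candidates) == 0:
        return pairs
    candidates = np.asarray(candidates, dtype=np.float64)
    for j, p in enumerate(np.asarray(predictions, dtype=np.float64)):
        dist = np.linalg.norm(candidates - p, axis=1)
        i = int(np.argmin(dist))
        if dist[i] <= r:
            pairs.append((j, i))
    return pairs
```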

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used here to predict the probable object coordinate, which determines where to apply the nearest neighborhood filter in the current frame.

Assuming that the state vector of the object at the $k$th frame is $\mathbf{x}_k = (u_k, v_k, \dot{u}_k, \dot{v}_k)^T$, that is, its coordinate and velocity in the pixel frame, the system equation is

$$\mathbf{x}_k = F\,\mathbf{x}_{k-1} + B\,\mathbf{u}_k + \mathbf{w}_k, \qquad \mathbf{z}_k = H\,\mathbf{x}_k + \mathbf{v}_k \qquad (27)$$

where $F$ is the state transition matrix, $B$ is the control-input matrix, $\mathbf{u}_k$ is the control vector, $H$ is the measurement matrix, $\mathbf{z}_k$ is the measurement vector, that is, the coordinate associated with the trajectory by the nearest neighborhood filter at the $k$th frame, $\mathbf{w}_k$ is the process noise, assumed to be zero-mean Gaussian white noise with covariance $Q$, denoted as $\mathbf{w}_k \sim N(0, Q)$, and $\mathbf{v}_k$ is the measurement noise, assumed to be zero-mean Gaussian white noise with covariance $R$, denoted as $\mathbf{v}_k \sim N(0, R)$.

In fact, $B\,\mathbf{u}_k$ uses the result of (21) and realizes the compensation of attitude motion when the attitude is being adjusted. In the case of attitude stabilization, $\mathbf{u}_k = \mathbf{0}$. Equation (27) unifies these two situations through the control vector.
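
A minimal sketch of the filter defined by (27) with a constant-velocity model follows; $Q = 10^{-4} I_4$ and $R = 0.2 I_2$ are the values reported in Section 4, while the matrix layout, the frame-interval time unit, and the folding of $B\,\mathbf{u}_k$ into a single shift vector are our assumptions.

```python
# Constant-velocity Kalman filter for equation (27), with the attitude
# compensation of (21) entering through the control term.
import numpy as np

dt = 1.0                                         # time unit: one frame interval
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])    # state transition matrix
H = np.hstack([np.eye(2), np.zeros((2, 2))])     # measurement matrix
Q = 1e-4 * np.eye(4)                             # process noise covariance
R = 0.2 * np.eye(2)                              # measurement noise covariance

def predict(x, P, du=0.0, dv=0.0):
    """du, dv: per-frame image shift from (21) during attitude adjustment;
    zero when the attitude is stabilized."""
    x = F @ x + np.array([du, dv, 0.0, 0.0])     # B u_k applied as a position shift
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```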

The details of the Kalman filter's prediction and update steps are omitted here, as they are standard.

When a trajectory has 20 points, we judge whether it is an object, using the velocity in the state vector: if the mean velocity of these points is greater than a given threshold, the trajectory is an object; otherwise, it is not. That is, we judge whether the point is moving. The threshold mainly serves to remove image motion caused by the instability of the satellite platform and other noise; it is usually set to 2, as in the sketch below.
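
A sketch of this decision rule, assuming the velocity components are taken from the Kalman state vectors of the trajectory and are expressed in pixels per frame:

```python
# Moving-object test: the mean speed of the trajectory's velocity estimates
# is compared against the threshold (2 here).
import numpy as np

def is_moving_object(velocities, threshold=2.0):
    """velocities: (20, 2) array of (u_dot, v_dot) taken from the Kalman
    state vectors of the trajectory."""
    return float(np.mean(np.linalg.norm(velocities, axis=1))) > threshold
```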

When the satellite attitude is being adjusted, no new trajectories are generated, in order to reduce false objects. Moreover, in practice, adjusting the attitude means that something already in the current frame is of interest, so there is no need to detect new objects.

Thus, objects are detected based on the continuity of motion across multiple frames, and trajectories are updated. Computational complexity is greatly reduced because no matching of stars across frames is required, and the compensation of attitude motion allows objects to be detected well even during attitude adjustment.

4. Experimental Results

NODAMI is verified using a video image from TT-2. TT-2 carries 4 space video sensors, and the video image used in this paper is from the high-resolution camera, whose focal length is 1000 mm and whose FOV is 2°30′. The video is 25 frames per second, and the resolution is 960 × 576.

Firstly, 20 continuous frames taken during attitude stabilization are processed. In (24), $a_k$ is initially set to 1.1 and $b$ is set to 1; the reduction factor $\beta$ is set to 0.8. The neighborhood in image segmentation is set to 7 × 7. For example, 10 candidate object areas are obtained after segmenting the 20th frame (as shown in Figure 5). These areas include objects, stars, and noise, which cannot be distinguished in a single frame.

In (27), $Q$ is set to $10^{-4} I_4$ based on empirical data and $R$ is set to $0.2 I_2$, where $I_n$ is the $n \times n$ identity matrix. The radius of the neighborhood in trajectory association is set to 5. NODAMI detects one object in the 20 frames, as shown in Figure 6. Because the full 960 × 576 image is too large to display, Figure 6 shows the local image of interest, with a white box identifying the object. Figure 6 shows that the brightness of the object varies considerably; NODAMI detects the object well.

Besides, for these 20 frames, if $a_k$ in (24) is held constant, the object is lost in 4 frames, whereas (24) with the variable coefficient detects the object in all frames.

Another, longer video of 40 s taken during attitude stabilization, comprising 1000 frames, is also processed. NODAMI detects one object in the 1000 frames, as shown in Figure 7, which is derived by overlaying the 1000 frames. Because the full 960 × 576 image is too large to display, Figure 7(b) shows the local image of interest, with a white box identifying the object every 50 frames.

Figure 7 shows that the brightness of the object varies considerably, and the naked eye tends to miss the object in quite a few frames. Indeed, if $a_k$ in (24) is held constant, the object is lost in 421 frames, whereas (24) with the variable coefficient detects the object in 947 frames; the probability of detection improves from 42.1% to 94.7%.

Even (24) with the variable coefficient cannot detect the object in all frames, but NODAMI correctly associates the object with the existing trajectory after the object is lost for several frames.

Then, a 30 s video during which the satellite attitude was adjusted is processed. Overlaying its 750 frames gives Figure 8. The trajectory of the object is in the white box of Figure 8; the brightness of the object again varies considerably. The two trajectories on the right belong to stars and are caused by the satellite attitude adjustment: the object image was originally moving towards the lower left but appears to move towards the upper left because of the attitude adjustment, while the star images moved upwards, producing the two trajectories on the right. NODAMI detects the space object trajectory well and abandons the star trajectories. As before, the number of points detected in the trajectory is less than 750, because the brightness of the object varies too much and the object is lost in several frames; but NODAMI correctly associates the object with the existing trajectory after the object is lost for several frames.

Using the known satellite attitude motion information and (21) to compensate for the attitude adjustment gives the trajectory of the object shown in Figure 9, from which it can be seen that the trajectory is recovered well and the effect of the attitude adjustment on the image is removed.

Experimental results show that the algorithm has good performance for moving space point objects even with changing brightness.

5. Conclusion

In this paper, a new algorithm for space object detection in video satellite images using motion information is proposed. The effect of satellite attitude motion on the image is analyzed quantitatively and shown to decompose into translation and rotation, and the attitude motion compensation formula is derived. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and the detection of the previous frame is used to segment each single-frame image. The algorithm then uses the correlation of object motion across multiple frames together with the satellite attitude motion information to detect the object. Computational complexity is greatly reduced because no matching of stars across frames is required, and the compensation of attitude motion allows objects to be detected well even during attitude adjustment. Applying the algorithm to the video image from Tiantuo-2 detects the object and gives its trajectory, showing that the algorithm performs well for moving space point objects even with changing brightness.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their thanks for the support from the Major Innovation Project of NUDT (no. 7-03), the Young Talents Training Project of NUDT, and the High Resolution Major Project (GFZX04010801).