Abstract

The Chinese space station is under construction and is planned to be launched into orbit around 2020. Many orbital replacement units (ORUs) are installed on the space station, and they need to be replaced on orbit by a manipulator. In view of these application requirements, a control method for ORU changeout is designed and verified in this paper. Based on an analysis of the ORU changeout task flow, the requirements on the space station manipulator's control algorithms are presented. An open-loop path planning algorithm, a closed-loop path planning algorithm based on visual feedback, and an impedance control algorithm are studied. To verify the ORU changeout task flow and the corresponding control algorithms, a ground experiment platform is designed, which includes a 6-DOF manipulator with a camera and a force/torque sensor, an end effector with clamp/release and screwing functions, an ORU module, and an ORU store. Finally, the task flow and control algorithms are verified on the test platform. The experiments show that the ORU changeout task flow designed in this paper is reasonable and feasible, and that the control method can be used to drive a manipulator through a complete ORU changeout task.

1. Introduction

On-orbit servicing is a class of space operations that can prolong spacecraft life and extend spacecraft task execution capability. ORU changeout is one of the most important on-orbit maintenance tasks. An astronaut's operating ability and range are greatly limited by the special nature of the space environment. A space manipulator can work under microgravity, extreme temperature, and high radiation, so using a space manipulator to assist or replace the astronaut in ORU changeout missions is of great significance for both economy and safety.

During the construction and service periods of the International Space Station, the Space Station Remote Manipulator System (SSRMS) and the Japanese Experiment Module Remote Manipulator System (JEMRMS), which were launched in the early stage, were not precise enough [1]. The Special Purpose Dexterous Manipulator (SPDM) and the Small Fine Arm (SFA) were therefore launched in 2008 and 2009, respectively; their end accuracies reach 13 mm and 10 mm, so they can perform fine operation tasks such as small exposed-load replacement, cutting, and refueling [2, 3]. When carrying out an ORU replacement task, the SPDM's end effector first gets close to the ORU to be replaced under the guidance of a hand-eye camera, until the ORU's adaptor is within the end effector's capture range. Then, the SPDM's force-moment accommodation (FMA) control mode limits the lateral force and torque to a selectable range [4].

ORU replacement by a manipulator is essentially a peg-in-hole operation, which is a typical problem in manipulator operation. For the ORU replacement task, Backes and Tso divide the task into several subtasks and adopt a control strategy of compliant control and gravity compensation for the different subtasks [5]. Colombina et al. use a similar control method, and a ground test system is used to simulate the ORU replacement operation. Their studies found that the operating strategy has a small motion error and that the manipulator system has a tolerance as small as 5 mm/0.5° [6]. Ozaki et al. established a ground test platform for microsatellite assembly/disassembly, and a heuristic search and assembly strategy is used in their experiments [7]. Jiang et al. propose a modified hybrid impedance control (MHIC) strategy for a redundant robot performing ORU replacements on a ground test bed. Experimental results demonstrate that MHIC can reduce the contact force under uncertainties of the constrained environment [8].

To ensure that a space robot performs its tasks successfully, ground experiments are required to verify the task planning and control algorithms. Because the space manipulator works in a microgravity environment, the following five methods are often used to simulate microgravity: air-bearing tables, neutral buoyancy, parabolic flight or free-fall motion, suspension systems, and hardware-in-the-loop simulation systems [9]. Each method has its advantages and disadvantages, but the hardware-in-the-loop simulation system is well suited to understanding kinematic motion, qualifying specific flight components, and evaluating integrated sensor-controller-actuator system performance [9]. Stewart parallel robots are often used in hardware-in-the-loop simulation systems for the contact dynamics of space operations [10–12]. To verify the contact dynamics performance of the SPDM performing various maintenance tasks, Dubowsky et al. developed a task verification facility. In that facility, a hydraulic robot mimics the space robot performing ORU changeout, and a force sensor installed at the base of the hydraulic manipulator measures the contact force and torque [12].

The Chinese space station is under construction and will be launched into orbit around 2020. It is equipped with an exposure platform outside the cabin, and many exposed payloads are mounted on the platform. During the operation phase, the exposed payloads must be replaced and upgraded periodically. Because of the high frequency and large number of ORU changeout missions, it is necessary to use a manipulator to assist [13]. Compared with cargo transport, astronaut assistance, and cabin inspection, ORU replacement has a higher accuracy requirement, so it is more difficult.

According to the demand for ORU replacement on the Chinese space station, the second section of this paper analyzes the ORU changeout task flow and the specific control algorithm requirements; the third section presents a ground verification platform used to verify the task flow and control method; the corresponding algorithms are then studied in Section 4; in the fifth section, the ORU replacement experiments based on the platform are introduced and the experimental results are discussed; conclusions and follow-up work are given at the end.

2. ORU Changeout Mission Analysis

2.1. Mission Process Analysis

As shown in Figure 1, there are 4 stages in an ORU changeout task flow, which are old ORU extracting, old ORU inserting, new ORU extracting, and new ORU installing.

In the old ORU extracting stage, the space station manipulator grasps and pulls an old ORU out of the ORU store installed on the exposed platform. There are several subtasks. First, the manipulator moves from an initial pose toward the old ORU and stops above the ORU adaptor, so that the adaptor is within the field of view of the manipulator's hand-eye camera. Then, the manipulator approaches the ORU adaptor in a linear motion under the guidance of the camera until the ORU adaptor begins to contact the end effector. The manipulator continues to approach the ORU adaptor under the guidance of the guide blocks on the end effector and the arc surface on the adaptor until the end effector captures the adaptor completely. The geometry of the guide blocks and the arc surface is specially designed to ensure that the end effector can capture the adaptor reliably in the presence of pose errors. After that, the end effector locks the old ORU, the old ORU's connection mechanism unlocks, and the electrical and mechanical connection between the ORU and the ORU store is cut off. The passive part of the connection mechanism is installed at the bottom of the ORU, and the active part is installed in the ORU store. Finally, the manipulator pulls the old ORU out of the ORU store on the exposed platform in a linear motion. The first stage, old ORU extracting, is then complete.

In the old ORU inserting stage, the space station manipulator, holding the captured old ORU, moves from its initial pose, which is the end pose of the first stage, to a position above the ORU store inside the airlock cabin. Then, the manipulator approaches the ORU store under the guidance of the camera until the ORU's edge begins to contact the ORU store's guiding surface. Under the guidance of the specially designed guiding surface on the ORU store, the manipulator inserts the old ORU into the ORU store. The active part of the connecting mechanism, which is installed in the ORU store, locks the passive part of the ORU's connecting mechanism, and the old ORU is connected to the ORU store. Finally, the end effector releases the old ORU's adaptor. The second stage, old ORU inserting, is then complete.

In the third stage, a new ORU is extracted from the ORU store in the airlock cabin; the manipulator's motion and control methods are the same as in the first stage, except for the pose of the new ORU. In the last stage, the manipulator carries the captured new ORU from the airlock cabin to the exposed platform, and the new ORU is inserted into the ORU store on the exposed platform. The whole process is the same as the second stage.

2.2. Control Algorithm Requirements

From the above analysis, the space station manipulator has 3 working modes in each stage: free motion mode, approaching mode, and contact mode.

In every stage, the manipulator first moves from an initial pose to a position above an ORU adaptor or the ORU store; this is the free motion mode. In this mode, the manipulator does not contact the environment, and there is no contact force. The manipulator only needs to move from an initial pose to the vicinity of the module to be replaced or the ORU store. Because the start pose and end pose are known in advance, an open-loop path planning algorithm can be used.

The process of the manipulator's end effector approaching the adaptor of the ORU, or of the manipulator with a captured ORU approaching the ORU store, is called the approaching mode. At the end of this mode, the end effector must be within the adaptor's guidance scope, or the ORU's edge must be within the ORU store's guidance scope. This means that if the end effector approaches the adaptor in a straight line in the presence of pose error, the effector can still contact the adaptor, and that when the manipulator carries the ORU module toward the ORU store along a linear direction in the presence of pose error, the ORU module can still contact the guide surface on the ORU store. This mode therefore requires a higher accuracy than the free motion mode. A closed-loop path planning algorithm based on visual feedback is introduced to improve accuracy, and a hand-eye camera must be installed to obtain the exact pose of the ORU or the ORU store.

Because errors are unavoidable when the end effector contacts the adaptor, or when the ORU edge contacts the ORU store, exterior forces are exerted on the end of the manipulator; this is the contact mode. To eliminate the pose error and insert the ORU into the ORU store slot precisely, a force control method must be integrated into the manipulator's control system. It also prevents damage to the manipulator and the ORUs. A position-based impedance control method is chosen in this paper. A 6-axis wrist force/torque sensor is installed at the end of the manipulator, between the last joint and the end effector, to measure the contact force and torque.

Subtasks and corresponding control algorithms of the old ORU extracting stage and the old ORU inserting stage are illustrated in Table 1. The remaining two stages are the same.
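As an illustration of how the three working modes map onto the subtasks, the following Python sketch shows one possible way a supervisory controller might dispatch the control algorithms listed in Table 1. The mode names, subtask labels, and function names are illustrative only and are not taken from the flight software.

```python
from enum import Enum, auto

class WorkingMode(Enum):
    FREE_MOTION = auto()   # open-loop path planning, no contact expected
    APPROACHING = auto()   # closed-loop path planning with visual feedback
    CONTACT = auto()       # position-based impedance control

# Hypothetical mapping of the extracting-stage subtasks (Section 2.1) to modes.
EXTRACTING_SUBTASKS = [
    ("move above ORU adaptor",         WorkingMode.FREE_MOTION),
    ("approach adaptor under camera",  WorkingMode.APPROACHING),
    ("capture adaptor / pull out ORU", WorkingMode.CONTACT),
]

def run_stage(subtasks, controllers):
    """Dispatch each subtask to the controller registered for its working mode."""
    for name, mode in subtasks:
        print(f"subtask: {name} -> {mode.name}")
        controllers[mode]()  # each controller runs until its own exit condition
```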

3. Design of the Test Bed for Ground Verification

To verify the above task flow and the corresponding control algorithms, a ground experiment platform is designed, as shown in Figure 2. The platform mainly consists of a 6-degree-of-freedom manipulator, an end effector, an adaptor installed on a simulated ORU, two simulated ORU stores, a robot vision system, and an experiment bench. The real space station is large and its manipulator works in a microgravity environment, whereas the experiment platform works under gravity, so the experiment platform is reduced in size. The simulated ORU only contains an adaptor, and there is no connection mechanism on the platform. The real space lighting conditions are not simulated either. Although there are some differences between the real conditions and the experiment conditions, these constraints do not affect the verification of the ORU replacement task flow and control methods.

3.1. The Manipulator

According to the analysis in Section 2, a 6-degree-of-freedom manipulator is sufficient for verifying the ORU changeout task. To reduce the complexity of the system, this paper does not use any ground weightlessness simulator such as an air-bearing platform or an active force follow-up hanging system. The output torque of every joint of the manipulator prototype is the same as that of the real space manipulator. To reduce the gravity load, the joints of the manipulator are arranged in the order shoulder yaw, shoulder roll, shoulder pitch, elbow pitch, wrist pitch, and wrist roll (Figure 3). To minimize the manipulator's envelope, the elbow joint adopts an offset layout, and the other joints adopt a collinear layout. The DH parameters of the manipulator are shown in Table 2. The total length of the manipulator is 1.9 m, the design value of absolute position accuracy is 5 mm/0.5°, and the end load capacity is 2 kg under gravity.

To reduce weight and volume, the joints of the manipulator adopt a mechatronic integrated design (Figure 4). To simplify the transmission mechanism, each joint uses a lightweight CSD-series harmonic reducer with a large center hole. A permanent magnet synchronous motor with a high rated torque and low rated speed is used, and a speed sensor is integrated in the motor. To ensure safety, there is a power-off brake in the joint, and a mechanical limit and an electrical safety limit are also included. A dual-channel high-precision magnetic resolver is integrated to measure the absolute position of the joint, and the measurement accuracy of the resolver is up to 25. The weight of the joint is 4 kg, and the rated output torque is 150 Nm.

3.2. The Vision System

To measure the relative pose between the end of the manipulator and the target, and at the same time to avoid collisions with surrounding objects in approaching mode, a camera is installed at the end of the manipulator. The vision system workflow is shown in Figure 5. First, the pose relation between the end of the manipulator and the camera is calibrated. Second, images of the marker are acquired by the camera, and image processing and pattern recognition algorithms are used to identify and extract the characteristic information of the marker. Then, the relative pose between the target marker and the camera is obtained by relative 3D pose measurement. Finally, the relative pose is sent to the central controller as the input of the closed-loop path planning.

Referring to the marker on the exposed platform of the Japanese Experiment Module of the International Space Station [14], a stereo marker is presented in this paper to improve the measurement accuracy (Figure 6). On the stereo marker, four characteristic circles are painted white. Three of the four characteristic circles lie in one plane, and the fourth is on top of a rod, higher than the other three. The rest of the marker is painted black. The geometric dimensions between the characteristic circles are known in advance. In this paper, the geometric size of the simulated ORU is 120 × 20 × 90 mm. Considering the requirements on recognition distance, measurement accuracy, and the ORU's geometric size, the marker is designed to be 100 × 80 mm, and the characteristic circle diameter is 4 mm.

Because an ORU fitted with a marker can be considered a cooperative target, a monocular camera is sufficient. The experimental system is required to monitor an area of 1 m × 1 m × 1 m when the distance between the camera and the target is 1.5 m, and to achieve a recognition accuracy of 2 mm/0.3° when the distance between the camera and the marker is 200 mm. According to these requirements, the Prosilica GC1380H camera is chosen, with a Computar M1214-MP2 lens whose focal length is 12 mm.

3.3. The End Effector

To complete the ORU changeout task, the end effector must be able to capture, lock, and release; in addition, screwing is a necessary function. According to these functional requirements, the end effector in this paper is similar to the one Nishida designed in 2005 [15]. The end effector has two degrees of freedom, which are used for clamp/release and screwing, respectively. The end effector mainly comprises a ball screw, a ball nut, torque sleeve components, a clamp/release drive assembly, a screwing drive assembly, two guide blocks, and a clamp finger assembly (Figure 7). To measure the contact force and torque, a six-axis force sensor is installed in the end effector.

Even when there is a pose error, the end effector should still capture the ORU reliably, so the guide block is designed with an arc surface to obtain a tolerance larger than 10 mm, as shown in Figure 8. The adaptor also has an arc surface (Figure 9). If the end effector approaches the adaptor in a straight line, the guide block on the effector can contact the arc surface of the adaptor as long as the position error is less than 10 mm. The pose bias is finally eliminated under the guidance of the guide block, and the contact force is controlled based on the feedback of the force sensor. When the end effector captures the target adaptor completely, as in Figure 7, the end effector begins to lock: the clamp/release drive assembly drives the ball screw to rotate, and the two clamp fingers lock the adaptor. When releasing, the clamp/release drive assembly rotates in the opposite direction.

The opening of the ORU store also has a chamfered guide surface, and the chamfer size is 10 mm (Figure 9). The chamfer accommodates pose errors when the ORU contacts the ORU store.

3.4. Control Architecture

A distributed control system is designed to control the manipulator to complete the ORU replacement task, and the control system contains an operation console, a manipulator central controller, a visual controller, six joint controllers, an end effector controller, and a monocular camera (Figure 10).

A teleoperation computer and a monitoring computer are included in the operation console. A graphical user interface (GUI) runs on the teleoperation computer, and the GUI can send control commands to the central controller, including single-joint control commands and point-to-point movement commands (Figure 11). The GUI can also display state messages of the manipulator system. A three-dimensional virtual scene that matches the real operation scene can be shown in the GUI. The virtual scene is constructed with Pro/E and Open Inventor, and its role is to run a presimulation to verify the correctness of commands. The monitoring computer displays the video signal collected by the camera at the end of the manipulator.

The central controller receives task-level control commands in Cartesian space from the teleoperation computer, force signals from the wrist force sensor, and pose data of the ORU from the visual controller. After running the inverse kinematics, path planning, and force control algorithms, it sends commands for the joints and the end effector to their drive controllers through the communication network, and the positions and speeds of the joints and the end effector are controlled.

The visual controller collects the camera's images and runs the vision recognition and measurement algorithms to obtain the pose of the ORU to be operated on relative to the camera. The pose data is then sent to the central controller.

4. Design of Control Algorithms

4.1. Open-Loop Path Planning

Because a space manipulator generally works on a floating base, the pose of the floating base (such as a space station or satellite) changes during the manipulator's motion, and this affects the end pose of the manipulator. Especially when the target is moving relative to the floating base, the pose change of the floating base must be considered. In the space station ORU module replacement task, however, the ORU adaptor, the ORU store, and the manipulator base are all fixed on the space station cabin. Let the ORU adaptor coordinate system be $\Sigma_A$, the base coordinate system of the manipulator be $\Sigma_B$, and the space station coordinate system be $\Sigma_S$. During ORU replacement, the relative poses between $\Sigma_A$ and $\Sigma_S$ and between $\Sigma_B$ and $\Sigma_S$ do not change, so the relative pose between $\Sigma_A$ and $\Sigma_B$ does not change either. This means that, when designing the space station manipulator path planning method for ORU replacement, there is no need to consider the disturbance of the manipulator's movement to the cabin.

In free motion mode, the manipulator only needs to move from an initial pose to the vicinity of the module to be replaced or the ORU store. The initial pose and target pose are known in advance because the space station manipulator works in a structured environment. Let the manipulator's initial pose be $X_0$ and the target pose of the free motion mode be $X_f$. Interpolating between the initial and target poses with polynomial or spline methods, the position and posture of the manipulator at time $t$ are obtained as

$X(t) = X_0 + s(t)\,(X_f - X_0), \quad s(0) = 0, \; s(t_f) = 1,$  (1)

where $s(t)$ is the polynomial or spline interpolation function. The manipulator end velocity at time $t$ is

$\dot{x}(t) = \begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} \dot{p}(t) \\ T\,\dot{\varphi}(t) \end{bmatrix}.$  (2)

In (2), $v$ is the linear velocity and $\omega$ is the angular velocity; $T$ is a transition matrix that converts the Euler angular velocity $\dot{\varphi}$ to the end angular velocity. Every joint's velocity is calculated according to the inverse kinematics

$\dot{q}(t) = J^{-1}(q)\,\dot{x}(t).$  (3)

In (3), $J(q)$ is the Jacobian matrix of the manipulator. Integrating (3), every joint's position is obtained as

$q(t) = q(0) + \int_0^t \dot{q}(\tau)\,\mathrm{d}\tau.$  (4)

At last, all joints’ velocity and position commands are sent to the joint controller.
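The open-loop planner can be summarized in a short numerical sketch. The following Python fragment is a minimal illustration of the interpolate-then-invert-the-Jacobian structure of (1)-(4): it assumes a quintic time-scaling function, treats the pose as a 6-component vector with the Euler-rate transition matrix taken as identity, and uses a hypothetical jacobian(q) placeholder for the manipulator's 6 × 6 Jacobian.

```python
import numpy as np

def quintic_s(t, tf):
    """Quintic time scaling: s(0)=0, s(tf)=1, zero boundary velocity and acceleration."""
    r = np.clip(t / tf, 0.0, 1.0)
    s = 10 * r**3 - 15 * r**4 + 6 * r**5
    s_dot = (30 * r**2 - 60 * r**3 + 30 * r**4) / tf
    return s, s_dot

def open_loop_plan(q0, x0, xf, tf, jacobian, dt=0.02):
    """Generate joint position/velocity commands along a straight Cartesian path.

    q0: initial joint angles; x0, xf: initial/target end poses as 6-vectors;
    jacobian(q): placeholder returning the 6x6 manipulator Jacobian.
    """
    q = q0.copy()
    commands = []
    for t in np.arange(0.0, tf + dt, dt):
        s, s_dot = quintic_s(t, tf)
        x = x0 + s * (xf - x0)                         # interpolated pose, eq. (1)
        x_dot = s_dot * (xf - x0)                      # end velocity, eq. (2) with T = I assumed
        q_dot = np.linalg.solve(jacobian(q), x_dot)    # inverse kinematics, eq. (3)
        q = q + q_dot * dt                             # numerical integration, eq. (4)
        commands.append((t, x, q.copy(), q_dot.copy()))
    return commands
```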

4.2. Closed-Loop Path Planning Based on Visual Feedback

As illustrated in Section 2, visual feedback is introduced when the manipulator is close to an ORU or to the ORU store, in order to obtain higher accuracy. The closed-loop path planning algorithm flow is shown in Figure 12. First, according to the joint angles measured by the absolute position sensors, the homogeneous transformation matrix ${}^{B}T_{E}$ between the end and the mounting base of the manipulator is calculated by the manipulator's forward kinematics. The homogeneous transformation matrix between the end effector and the ORU's adaptor or the ORU store is

${}^{E}T_{O} = {}^{E}T_{C}\,{}^{C}T_{M}\,{}^{M}T_{O}.$  (5)

In (5), ${}^{C}T_{M}$ is the transformation matrix between the camera and the marker near the ORU's adaptor or the ORU store, which can be measured by the camera; the detailed process is described below. ${}^{E}T_{C}$ is the transformation matrix between the end effector and the camera, and ${}^{M}T_{O}$ is the transformation matrix between the marker and the target ORU's adaptor or the ORU store. ${}^{E}T_{C}$ and ${}^{M}T_{O}$ are known in advance from the assembly relationships.
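As a concrete illustration of the chain in (5), the following Python sketch composes 4 × 4 homogeneous transforms; the frame names and numerical values are illustrative only, not calibration data from the platform.

```python
import numpy as np

def transform(R, p):
    """Build a 4x4 homogeneous transform from a rotation matrix R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Assumed fixed calibration/assembly transforms (illustrative values).
T_E_C = transform(np.eye(3), [0.05, 0.0, 0.10])   # end effector -> camera (hand-eye calibration)
T_M_O = transform(np.eye(3), [0.0, 0.0, -0.02])   # marker -> ORU adaptor (assembly drawing)

def target_in_end_frame(T_C_M):
    """Eq. (5): pose of the ORU adaptor/store expressed in the end-effector frame,
    given the camera-to-marker transform measured by the vision system."""
    return T_E_C @ T_C_M @ T_M_O
```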

In approaching mode, the end of the manipulator should move from an initial pose to the pose of the ORU's adaptor (the target pose). Intermediate poses are interpolated between them, and the joint control commands obtained by inverse kinematics are sent to the joint controllers. At the end of each control cycle, the deviation between the real-time pose of the manipulator and the target pose obtained by the camera is computed. If this deviation is small enough, the closed-loop path planning finishes; otherwise, the next control cycle starts.
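The iteration just described can be sketched as a simple position-based visual servo loop. In the Python fragment below, measure_target_pose, plan_step, and send_joint_commands are hypothetical placeholders for the vision measurement, the interpolation plus inverse kinematics, and the joint interface; only the loop structure follows the paper, and the tolerance values are illustrative.

```python
import numpy as np

def closed_loop_approach(measure_target_pose, current_pose, plan_step,
                         send_joint_commands, pos_tol=1e-3, rot_tol=0.3,
                         max_cycles=1000):
    """Iterate: measure the target pose with the camera, step toward it, re-check the error.

    Poses are 6-vectors [x, y, z, rx, ry, rz]; pos_tol in metres, rot_tol in degrees.
    """
    for _ in range(max_cycles):
        target_pose = measure_target_pose()              # from the visual controller, eq. (5)
        pos_err = np.linalg.norm(target_pose[:3] - current_pose[:3])
        rot_err = np.max(np.abs(target_pose[3:] - current_pose[3:]))
        if pos_err < pos_tol and rot_err < rot_tol:
            return current_pose                          # approaching mode finished
        q_cmd = plan_step(current_pose, target_pose)     # interpolation + inverse kinematics
        current_pose = send_joint_commands(q_cmd)        # pose after this control cycle
    raise RuntimeError("approach did not converge within max_cycles")
```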

There are two steps in the visual measurement of the marker's pose: marker recognition and pose measurement. The flow of the marker recognition is shown in Figure 13. A Gaussian filter is used for denoising. A local adaptive threshold method is used for binarization, because the environment around the marker is complicated and the background luminance is not uniform. The basic model of the local adaptive threshold method is [16]

$b(x, y) = \begin{cases} 1, & f(x, y) > T(x, y) \\ 0, & f(x, y) \le T(x, y), \end{cases}$  (6)

where $(x, y)$ are the pixel coordinates, $f(x, y)$ is the gray value of the pixel, $b(x, y)$ is the gray value assigned after the judgment, and $T(x, y)$ is the detection threshold,

$T(x, y) = \mu(x, y) + k\,\sigma(x, y).$  (7)

$\mu(x, y)$ is the local mean value of the image background; $\sigma(x, y)$ is the root mean square value of the image's local random noise; the parameter $k$ is chosen according to the complexity of the image background and the gray value of the object.
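A minimal sketch of this local adaptive thresholding, written with NumPy/SciPy and assuming a square local window; the window size and the constant k are illustrative, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_adaptive_threshold(img, window=31, k=2.0):
    """Binarize a grayscale image with a local mean + k * local-deviation threshold (eqs. (6)-(7)).

    img: 2D array of gray values; window: odd size of the local neighbourhood;
    k: weight on the local standard deviation, used here as the noise estimate.
    """
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=window)                 # local mean  mu(x, y)
    sq_mean = uniform_filter(img * img, size=window)
    sigma = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))     # local RMS deviation  sigma(x, y)
    threshold = mean + k * sigma                            # T(x, y)
    return (img > threshold).astype(np.uint8)               # b(x, y)
```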

Then, the Canny edge detection algorithm is used to extract the outline of every characteristic circle. A bounding rectangle is fitted to the outline's 2D point set, and the center of the bounding rectangle is taken as the center of the characteristic circle [17]. After the centers are extracted, each characteristic circle can be identified from the known geometric characteristics of the marker. The marker's pose transformation matrix relative to the camera is then obtained with a three-dimensional pose solving algorithm. The RPnP algorithm is used in this paper because it gives a relatively stable and accurate solution [18]. The RPnP algorithm's flow chart is shown in Figure 14, and its steps are as follows.

(a) Select the rotation axis. Let the camera's coordinate system be $O_c x_c y_c z_c$ and the marker's coordinate system be $O_m x_m y_m z_m$. As shown in Figure 15, a calibrated camera and $n$ 3D reference points $P_i$ ($i = 1, \dots, n$) are given. The projection of the reference point $P_i$ on the normalized image plane is $p_i$. The edge between any two reference points is $P_i P_j$. The longer its projection $p_i p_j$ is, the smaller the influence of image noise on the direction of $P_i P_j$, so the edge corresponding to the longest projected edge is chosen as the rotation axis $z_a$; the midpoint of that edge is the origin $O_a$.

(b) Determine the least-squares rotation axis. After the rotation axis is selected, a new orthogonal coordinate system $O_a x_a y_a z_a$ can be constructed by the cross-product principle. To express the reference points in $O_a x_a y_a z_a$, the rotation matrix and translation vector from the marker frame to $O_a x_a y_a z_a$ must be determined. To determine the direction of the $z_a$ axis in the camera frame, the depths of the two endpoints of the chosen edge must be known. The depths are solved as follows: the reference points are divided into $n - 2$ three-point subsets, each containing the two endpoints of the rotation axis and one of the remaining points, and each subset yields a fourth-order polynomial from the P3P constraint:

$f_i(x) = a_i x^4 + b_i x^3 + c_i x^2 + d_i x + e_i = 0, \quad i = 1, \dots, n - 2,$  (8)

where the unknown parameter $x$ is the square of the unknown depth of one endpoint of the rotation axis, $f_i(x)$ is the fourth-order polynomial obtained from the P3P constraint, and $a_i, \dots, e_i$ are its coefficients. If a linearization method is used to solve (8), the redundancy of the equations causes inconsistent solutions. In this paper, the least squares method is used to find the local optimal solution of the equations [19]. First, a cost function is defined as

$F(x) = \sum_{i=1}^{n-2} f_i^2(x).$  (9)

The minimum of $F(x)$ is obtained by setting its derivative to zero:

$F'(x) = \sum_{i=1}^{n-2} 2 f_i(x) f_i'(x) = 0.$  (10)

Equation (10) is a seventh-order equation in one variable, and it can be solved by the eigenvalue method. When $x$ and the depth of one endpoint are determined, the depth of the other endpoint is obtained from the P2P constraint, and the unit vector of the $z_a$ axis in the camera coordinate system follows.

(c) Solve the rotation angle and translation vector. Once the rotation axis is determined, the rotation matrix between the new coordinate system and the camera's coordinate system can be expressed as

$R = R_0\,\mathrm{rot}(z, \alpha).$  (11)

In (11), $R_0$ is an arbitrary rotation matrix whose third column is the rotation axis direction, and it must satisfy the orthogonality constraint; $\mathrm{rot}(z, \alpha)$ represents a rotation by the angle $\alpha$ around the $z_a$-axis. The projection of a 3D reference point onto the two-dimensional normalized image plane can then be expressed as

$\lambda_i \,[u_i \;\; v_i \;\; 1]^T = [\,R_0\,\mathrm{rot}(z, \alpha) \;\; t\,]\,[X_i \;\; Y_i \;\; Z_i \;\; 1]^T.$  (12)

In (12), $(u_i, v_i)$ are the normalized coordinates of the image point $p_i$, $t$ is the translation vector, $\lambda_i$ is the normalization factor, and $(X_i, Y_i, Z_i)$ are the three-dimensional position coordinates of the space points. Equation (12) can be arranged as $2n \times 6$ homogeneous linear equations, and the unknown vector $[\cos\alpha \;\; \sin\alpha \;\; t_x \;\; t_y \;\; t_z \;\; 1]^T$ can be calculated from them.

(d) Calculate the relative pose between the marker and the camera. In practical applications, the solution of the homogeneous linear equations (12) may not satisfy the trigonometric constraint because of noise. Therefore, it is necessary to impose an orthogonality constraint on the rotation matrix [20]. First, the reference points' 3D coordinates in the camera coordinate system are estimated from the unnormalized $\cos\alpha$ and $\sin\alpha$, and then the standard 3D alignment method is used to solve for the rotation matrix and translation vector. The cost function is a sum of squared polynomials and contains at least 4 local minima. For each local minimum, the pose of the marker relative to the camera is estimated, and the solution with the smallest reprojection error is chosen as the optimal one.
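The paper uses RPnP; as a runnable stand-in that exercises the same inputs and outputs (known 3D circle centers on the marker, their detected image coordinates, and the camera intrinsics), the sketch below calls OpenCV's generic solvePnP with the EPnP solver instead. The marker dimensions follow Section 3.2, but the exact circle layout, the rod height, and the camera matrix are assumptions made for illustration.

```python
import numpy as np
import cv2

# Assumed 3D positions (metres) of the four characteristic circle centers in the
# marker frame: three coplanar points plus one raised on a rod (see Figure 6).
MARKER_POINTS = np.array([
    [0.00, 0.00, 0.00],
    [0.10, 0.00, 0.00],
    [0.00, 0.08, 0.00],
    [0.05, 0.04, 0.03],   # raised point; height is illustrative
], dtype=np.float64)

def estimate_marker_pose(image_points, camera_matrix, dist_coeffs=None):
    """Estimate the marker pose in the camera frame from detected circle centers.

    image_points: 4x2 array of pixel coordinates in the same order as MARKER_POINTS.
    Returns a 4x4 homogeneous transform from the marker frame to the camera frame.
    Uses cv2.solvePnP (EPnP) as a stand-in for the RPnP solver described in the paper.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS, image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```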

4.3. Impedance Control

In contact mode, an external force and torque act on the end of the manipulator. To eliminate the pose error and insert the ORU into the ORU store slot precisely, a position-based impedance control algorithm is used in this paper, as shown in Figure 16.

In Figure 16, $M_d s^2 + B_d s + K_d$ can be seen as the desired impedance characteristic, where $M_d$, $B_d$, and $K_d$ are the equivalent inertia, damping, and stiffness matrices of the manipulator, respectively. $q_d$, $q$, and $X_d$ are the joints' position command, the joints' actual position, and the manipulator's pose command in Cartesian space, respectively. $F_d$ is the desired contact force vector in the force sensor's coordinate system, and $F_s$ is the actual force vector measured by the force sensor. $F_s$ contains not only the contact force and the gravity of the end effector but may also contain the ORU's gravity when an ORU is captured. Because only the pure contact force vector is needed, the gravity terms must be compensated.

The homogeneous transformation matrix between the force sensor and the base coordinate system is

${}^{B}T_{S} = {}^{B}T_{E}\,{}^{E}T_{S} = \begin{bmatrix} {}^{B}R_{S} & {}^{B}p_{S} \\ 0 & 1 \end{bmatrix}.$  (13)

In (13), ${}^{E}T_{S}$ is the homogeneous transformation matrix between the force sensor and the end effector, which is known in advance from the assembly relationship; ${}^{B}T_{E}$ can be calculated by forward kinematics; ${}^{B}R_{S}$ is the rotation matrix between the force sensor and the manipulator's base coordinate system, and ${}^{B}p_{S}$ is the position vector of the force sensor's coordinate system origin in the manipulator's base coordinate system. If the gravity vector in the base coordinate system is $g$, the pure contact force vector is

$F_e = F_s - \begin{bmatrix} {}^{B}R_{S}^{T}\, m g \\ r_c \times ({}^{B}R_{S}^{T}\, m g) \end{bmatrix}.$  (14)

In (14), $m$ is the mass carried by the sensor. When there is no ORU at the end, $r_c$ is the vector of the centroid of the end effector relative to the origin of the force/torque sensor coordinate system; when there is an ORU at the manipulator's end, $r_c$ is the vector of the centroid of the assembly composed of the end effector and the ORU relative to the origin of the force/torque sensor coordinate system. With the desired impedance characteristic, if there is a contact force, the regulated pose in the Cartesian coordinate system is

$\Delta X(s) = (M_d s^2 + B_d s + K_d)^{-1}\,\bigl(F_e(s) - F_d(s)\bigr).$  (15)
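A minimal sketch of the gravity compensation in (13)-(14), assuming the sensor rotation matrix in the base frame is already available from forward kinematics; the function name and the mass/centroid arguments are illustrative.

```python
import numpy as np

GRAVITY_BASE = np.array([0.0, 0.0, -9.81])   # gravity vector g in the base frame (m/s^2)

def pure_contact_wrench(wrench_meas, R_base_sensor, mass, r_centroid):
    """Remove the gravity of the end effector (plus ORU, if captured) from the sensor reading.

    wrench_meas: 6-vector [Fx, Fy, Fz, Mx, My, Mz] measured in the sensor frame.
    R_base_sensor: 3x3 rotation of the sensor frame expressed in the base frame.
    mass: mass hanging below the sensor; r_centroid: its centroid in the sensor frame.
    """
    g_sensor = R_base_sensor.T @ GRAVITY_BASE          # gravity expressed in the sensor frame
    f_gravity = mass * g_sensor                        # gravity force on the carried mass
    m_gravity = np.cross(r_centroid, f_gravity)        # gravity moment about the sensor origin
    return wrench_meas - np.concatenate([f_gravity, m_gravity])   # eq. (14)
```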

In order to determine reasonable impedance parameters $M_d$, $B_d$, and $K_d$, a one-dimensional impedance control model is analyzed on the basis of (15), and the simplified second-order system model is expressed as

$m_d\,\Delta\ddot{x} + b_d\,\Delta\dot{x} + k_d\,\Delta x = f.$  (16)

$\Delta\ddot{x}$, $\Delta\dot{x}$, and $\Delta x$ are the acceleration, velocity, and position adjustment values in one-dimensional space, respectively; $f$ is the pure disturbance force; $m_d$, $b_d$, and $k_d$ are the expected inertia, damping, and stiffness, respectively. The following transfer function is obtained by applying the Laplace transform to (16):

$\dfrac{\Delta X(s)}{F(s)} = \dfrac{1}{m_d s^2 + b_d s + k_d} = \dfrac{1/m_d}{s^2 + 2\zeta\omega_n s + \omega_n^2}.$  (17)

In (17),

$\zeta = \dfrac{b_d}{2\sqrt{m_d k_d}}, \qquad \omega_n = \sqrt{\dfrac{k_d}{m_d}}.$  (18)

$\zeta$ is the damping ratio and $\omega_n$ is the natural frequency of the impedance control system, and together they determine its dynamic performance. $k_d$ is related to the position deviation of the impedance control system and determines the stiffness of the manipulator: the larger $k_d$ is, the greater the stiffness of the control system and the higher the tracking accuracy. $b_d$ is related to the velocity deviation; a reasonable value reduces overshoot and oscillation and also shortens the response time. $m_d$ is related to the acceleration deviation of the impedance control system. A large expected inertia coefficient causes a larger impact on the environment, increases the trajectory tracking error, and slows the response of the system; therefore, $m_d$ usually takes a very small value in an impedance control system. In this paper, the speed and acceleration of the space station manipulator change very slowly when performing the ORU module replacement task, so the inertia term can be ignored and $m_d$ is set to 0.
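Since the paper sets $m_d = 0$, the one-dimensional impedance law (16) reduces to a first-order filter $b_d\,\Delta\dot{x} + k_d\,\Delta x = f$, which is easy to discretize per control cycle. The Python fragment below is a minimal sketch of that discretization; the sample time and gains are taken from the experiment in Section 5.2 only as plausible example values.

```python
class Impedance1D:
    """Position-based impedance regulator for one Cartesian direction, with m_d = 0."""

    def __init__(self, k_d=1000.0, b_d=50.0, dt=0.02):
        self.k_d = k_d      # desired stiffness, N/m (1 N/mm in the experiment)
        self.b_d = b_d      # desired damping, Ns/m
        self.dt = dt        # control cycle, s (20 ms sensor/control period)
        self.dx = 0.0       # current position adjustment, m

    def update(self, f_contact, f_desired=0.0):
        """One control cycle of  b_d * d(dx)/dt + k_d * dx = f_contact - f_desired."""
        f = f_contact - f_desired
        dx_dot = (f - self.k_d * self.dx) / self.b_d
        self.dx += dx_dot * self.dt
        return self.dx      # added to the Cartesian pose command in this direction

# Example: a steady 5 N contact force settles at about f / k_d = 5 mm adjustment.
reg = Impedance1D()
for _ in range(500):
    offset = reg.update(f_contact=5.0)
```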

When the impedance parameters satisfy the following formula, the transition from free motion to the constrained space can reach stability [21].

In (19), $k_e$ is the environment stiffness. Assume that the ORU inserting and extracting direction is the $z$-axis of the manipulator base coordinate system, so that the $z$-axis is perpendicular to the ORU's mounting surface. At the final stage of the approaching mode, the target pose (the pose of the ORU adaptor or of the ORU store slot) that the manipulator should reach is obtained by the hand-eye camera and can be expressed as $(x_t, y_t, z_t, \theta_{xt}, \theta_{yt}, \theta_{zt})$, but the Cartesian space control command sent by the central controller is $(x_t, y_t, z_t + d, \theta_{xt}, \theta_{yt}, \theta_{zt})$, which means that the end effector is directly above the ORU's adaptor or above the ORU store at the end of the approaching mode. In contact mode, taking ORU insertion as an example, the manipulator carries the ORU along the $z$-axis toward the ORU store slot. In the presence of a pose error, the bottom edge of the ORU contacts the guide surface of the ORU store slot. If impedance control were used in the $z$ direction, with expected stiffness $k_d$ and damping $b_d$, the manipulator would stop moving and stay in a balanced state along the $z$-axis when the position error reached $f_z/k_d$, where $f_z$ is the contact force along the $z$-axis. To ensure that the ORU module can be inserted into the ORU slot along the $z$-axis, the manipulator requires a high stiffness in the $z$-axis, so the manipulator adopts the position control mode in this direction. To prevent an excessive contact force in the $z$ direction, a contact force threshold is set in this direction: when the contact force exceeds the threshold, the manipulator stops moving, or withdraws the ORU along the $z$ direction, so that the contact force does not become too large. In the other directions ($x$, $y$, $\theta_x$, $\theta_y$, $\theta_z$), impedance control is adopted to correct the pose deviation under the guidance of the guide surface of the ORU store slot. In summary, the manipulator is in position control mode in the $z$ direction, moving a fixed distance $\Delta d$ every control cycle until the contact force exceeds the threshold $f_{\max}$; in the other directions, the manipulator is in impedance control mode using the control law in (15). The motion adjustment value is

$\Delta X = [\Delta x \;\; \Delta y \;\; \Delta z \;\; \Delta\theta_x \;\; \Delta\theta_y \;\; \Delta\theta_z]^T,$  (20)

where $\Delta z$ is the fixed step $\Delta d$ commanded along the insertion direction and the remaining components are obtained from (15).

The pose command sent to the manipulator is

$X_c = X_d + \Delta X,$  (21)

where $X_d$ is the current Cartesian pose command and $\Delta X$ is the adjustment in (20).
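The per-cycle update just described can be sketched as follows. It is a minimal illustration that combines the fixed position step along the insertion axis with the first-order impedance adjustment of the other directions, reusing the Impedance1D class from the earlier sketch; the step size is an assumed value, while the 2.5 N force threshold follows the experiment in Section 5.2.

```python
import numpy as np

def contact_mode_step(x_ref, z_cmd, wrench, regulators, insert_axis=2,
                      step=0.0005, f_max=2.5):
    """One control cycle of the contact-mode strategy for ORU insertion.

    x_ref: 6-DOF reference pose at the start of contact mode [x, y, z, rx, ry, rz].
    z_cmd: current position command along the insertion axis.
    wrench: gravity-compensated contact wrench in the same axis order.
    regulators: dict mapping each compliant axis index to an Impedance1D instance.
    Returns (new 6-DOF command, new insertion-axis command, finished flag).
    """
    x_new = np.array(x_ref, dtype=float)
    if abs(wrench[insert_axis]) >= f_max:
        x_new[insert_axis] = z_cmd
        return x_new, z_cmd, True             # threshold reached: stop advancing
    z_cmd -= step                             # position control: fixed step per cycle
    x_new[insert_axis] = z_cmd
    for axis, reg in regulators.items():      # impedance adjustment of the other axes
        x_new[axis] = x_ref[axis] + reg.update(wrench[axis])
    return x_new, z_cmd, False
```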

5. Experiments and Discussion

5.1. Experiment Introduction

In Section 2.1, the process of ORU changeout was divided into old ORU extracting, old ORU inserting, new ORU extracting, and new ORU installing. Because the control method and operation flow are the same for a new and an old ORU, only one ORU's extracting and inserting experiment is described in this paper. As shown in Figure 2, the reachable operating space of the manipulator for extracting the ORU in the inertial coordinate system is

The reachable operating space for inserting the ORU in the inertial coordinate system is

The origin of the inertial coordinate system is the midpoint of the experiment platform. The axis directions of the inertial coordinate system are shown in Figure 2; the two horizontal axes are parallel to the platform's two sides, respectively.

5.2. Results and Discussion

In this section, the control algorithms of Section 4 are verified individually. First, an operator sends a target point to the central controller of the manipulator through the GUI. Using the open-loop path planning algorithm, the manipulator moves from the initial pose to the target pose. Experiment results show that the joints' motion is continuous, and the absolute pose accuracy is 3.2 mm/0.1°, as measured by a laser tracker.

When verifying the visual recognition and measurement algorithm, the end of the manipulator moves toward the marker, and the relative pose of the camera with respect to the marker is measured in real time. At the same time, a laser tracker is used to measure the exact pose of the camera with respect to the marker. Comparing it with the visual measurement results gives the measurement error curve at different distances shown in Figure 17. The error decreases as the distance between the camera and the marker decreases, and the measurement error is less than 1 mm/0.3° when the distance is below 300 mm. The experiment shows that the accuracy can meet the tolerance requirement when the manipulator enters the contact mode.

Because the principle of the impedance control method is the same in each of the 6 directions of the end position and posture, impedance control is applied only in the $x$ direction in this section to verify the impedance control algorithm, and position control mode is set in the $y$ and $z$ directions. The initial contact force along the $x$-axis is 0 N. The initial positions in the $x$, $y$, and $z$ directions are 901 mm, 225 mm, and 311 mm, respectively. The desired stiffness is set to 1 N/mm, and the desired damping is 50 Ns/m. Figure 18 shows the force and position response curves. From 0 s, an external force is applied, so the manipulator moves along the $x$ direction. After about 5 s, the external force is increased to and maintained at 5 N. The position along the $x$ direction changes quickly with the external force and finally settles at about 906 mm. The position change caused by the 5 N external force is about 5 mm, which confirms that the stiffness in the $x$ direction is 1 N/mm. The experimental results verify the correctness of the impedance control method. The position errors in the $y$ and $z$ directions are lower than 0.5 mm, which means that the manipulator remains nearly stationary in those directions.

The whole process of the ORU extracting and inserting experiment is shown in Figure 19, and all working modes and control algorithms are exercised in the experiment. Figure 19(a) is the initial state. After receiving the target pose, the manipulator moves above the ORU adaptor in free motion mode (Figure 19(b)). Then, the manipulator approaches the adaptor under the guidance of the camera (Figure 19(c)). In Figure 19(d), the manipulator continues to close on the adaptor until the end effector captures it completely. In Figure 19(e), the end effector locks the adaptor and extracts the ORU. Then, the manipulator carries the ORU to the ORU store (Figure 19(f)). In Figure 19(g), the deviation of the ORU relative to the ORU store slot is corrected by the guide surface on the ORU store using impedance control. Finally, the ORU is inserted into the ORU store slot completely (Figure 19(h)). In this experiment, the impedance parameters are determined as .

In Figure 19(d), the manipulator is in contact mode, and the impedance control method of Section 4.3 is used. Figure 20 shows the end force and moment during that process; its reference coordinate system is the force sensor's coordinate system. Figure 21 shows the manipulator's end pose during the same process, expressed in the inertial coordinate system. The sampling period of the sensor is 20 ms, and the impedance control cycle is also 20 ms. The force threshold $f_{\max}$ is set to 2.5 N. From 0 s to 3 s, the contact force and moment do not change noticeably because of the impedance control. The position in the insertion direction decreases linearly because the manipulator is in position control mode in that direction, while the positions in the other directions change slightly according to the contact force and moment. At 3 s, the contact force in the insertion direction begins to increase gradually. When this force finally reaches $f_{\max}$, the manipulator stops moving in the insertion direction, which means that the ORU has been inserted completely. These experimental data verify the correctness of the impedance control method in Section 4.3. After 3 s, the ORU module is in contact with the ORU store slot, so the force/torque in the other directions also changes, not only along the insertion axis. In the structural design, there is a small gap between the ORU store slot and the ORU. Because the manipulator is in position control mode in the insertion direction and the manipulator in this paper has a certain flexibility, when the bottom of the ORU module begins to contact the slot, the high stiffness in the insertion direction and the gap mentioned above cause a slight deformation of the manipulator; as a result, other surfaces of the ORU contact the module store slot, which leads to changes in the contact force/torque in the other directions.

6. Conclusions and Future Work

Replacing orbital replacement units with a manipulator is of great significance for a space station. In this paper, the full process of ORU changeout is analyzed first and divided into old ORU extracting, old ORU inserting, new ORU extracting, and new ORU installing. Each stage has 3 working modes: free motion mode, approaching mode, and contact mode. For the different working modes, an open-loop path planning algorithm, a vision-based closed-loop path planning algorithm, and an impedance control algorithm are studied individually. To verify the ORU changeout task flow and the corresponding control algorithms, a ground experiment platform is designed, which includes a 6-DOF manipulator and an end effector with clamp/release and screwing functions. The manipulator has a distributed control system and a monocular camera. Experiments are first carried out to verify each algorithm separately. The results show that the absolute pose accuracy of the manipulator under the open-loop path planning algorithm is 3.2 mm/0.1°, and the visual measurement error is less than 1 mm/0.3° when the distance between the camera and the marker is below 300 mm. The impedance control experiment shows that the force control algorithm based on position and end force feedback is correct. Finally, the whole process of the ORU extracting and inserting experiment is carried out. The experimental results show that the control architecture and control algorithms in this paper are correct. In the future, experiments will be done in a more realistic environment, closer to that of a space station.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research is partially supported by a government grant from the Shanghai Engineering Research Center for Assistive Devices, Shanghai, China.