This paper studies the cooperation between two master-slave modular robots. A cooperative robot system is set up with two modular robots and a dynamic optical measurement system, Optotrak. With Optotrak, the positions of the end effectors are measured as optical position feedback, which is used to adjust the robots' end positions. A tri-layered motion controller is designed for the two cooperative robots. The resolved motion rate control (RMRC) method is adopted to adjust the master robot to the desired position. With the kinematics constraints of the two robots, including position and pose, joint velocity, and acceleration constraints, the two robots can cooperate well. A bolt and nut assembly experiment is executed to verify the methods.

1. Introduction

In many robotic tasks, such as catching irregular objects, assembling complex workpieces, and carrying large-scale objects, a single robot cannot satisfy the requirements. In some operations, dual arms or multiple robot manipulators are needed to complete the operation by coordination. Firstly, complex tasks that are difficult for a single robot to complete are expected to be undertaken by multiple robots in a collaborative fashion. Secondly, the work efficiency of the robot system can be improved through collaboration. Furthermore, even if the robot work environment changes or the robot system has partial malfunctions, the multirobot system can still finish the scheduled task through the intrinsic coordination relations among the system members. The multirobot system composed of two or more robots has attracted tremendous interest and attention. A paired or multiple robot system has the following characteristics which distinguish it from the operation of a single robot: (1) when the paired or multiple robots engage in carriage, assembly, cutting, or sawing operations, the two robots and the operated object form a closed motion chain. The motion or kinematics of the two robot arms must satisfy a group of movement restrictions. Thus, the kinematics of two coordinated robots is more complicated than that of a single robot. (2) The dynamics of two coordinated arm robots is more complicated than that of a single-arm robot. Though the two dynamics equations of the collaborating arm robots can be combined into one unitary dynamics equation, the problem does not become easier: the equation dimension increases and internal forces are introduced. (3) The control structure for two-arm robots is more complicated than for a single-arm robot. Since the two-arm robots face the same operation task, the corresponding control cannot be carried out independently as for a single-arm robot.
In comparison with the control of a single-arm robot, the controller for two-arm robots has a multilayered hierarchical structure. Luh and Zheng [1, 2] studied the motion and kinematics restrictions of two-arm robot cooperation and proposed the master/slave position-to-position collaboration control strategy based on the rigid-body restriction relation when grasping. Furthermore, Zheng and Luh [3] studied the restriction relationship of two collaborating rigid robots when grasping a plier, and deduced the position, pose, and velocity restrictions. Uchiyama et al. [4] introduced the position/force control policy into dual-robot control. They analyzed the kinematics and statics of two-arm robots under position/force control and completed the collaboration control of two 4-DOF B-HAND robots. Kopf and Yabuta [5] compared the master/slave control policy with the position/force control policy experimentally. Tarn et al. [6] proposed dynamics control methods for dual-arm collaboration and studied the dynamics equations of a dual-arm robot operating an object. In [7], a force control strategy was presented for the cooperation of two robots, and the problem of position accuracy in robot cooperation was mentioned. It is assumed there that all the joint variables are the mid values between the upper and lower limits of the joint angles, which means that the end manipulator position is not accurate enough. In [8], two cooperative redundant manipulators were structured as a new single redundant manipulator by using the relative Jacobian. In that research, a force/torque sensing model, using an approximated penetration-depth calculation algorithm, was developed and used to compute a contact force/torque in a graphic assembly simulation. Using the potential function method, each joint trajectory is generated within the joint limits, so the end effectors' position is not accurate because of the feedback method. Dauchez et al. [9] tried to extend the hybrid position/force control of robot manipulators to assembly tasks by humanoid arm manipulators, and raised the open problem of keeping the workspace visible during the task.

Zhao et al. [10] studied the 3-dimensional motion restrictions of two-arm robots. They implemented collision-avoidance track planning for the two arms by constructing a collision belief map using the path and track knowledge of the two robots. Guo et al. [11] presented a dynamic control method of coordinated operation for a team of free-flying space robots (FFSRs) in a micro-gravity space environment, based on the reproducing kernel theory. They completed the dynamic control of multi-FFSR coordinated operation using the dynamic equations of multi-FFSR coordinated operation derived from the reproducing kernel theory. In [12, 13], multiple robot manipulators cooperate in assembly by communicating through a network. In the testbed, a sensing robot mounted in the vibrating environment measures the relative position and orientation of a flexible beam, and the sensed information can be shared by the other two ground robots in the cooperative assembly. They demonstrated algorithms for visual servoing over a wide area with motion prediction using cooperative robots on the ground testbed.

In [14], hybrid kinematics machines combining both parallel and serial kinematics machines were adopted in cooperative assembly. In that work, the positional controller is based on a Cartesian-space PID algorithm with gravity compensation. An impedance controller compensates the position and orientation errors and turns the interaction control into a problem of positional control. The idea was verified in simulation. Hirata et al. [15] proposed a decentralized motion control algorithm for multiple mobile manipulators handling a single object in cooperation with a human, without using the geometric relations among the robots. With the algorithm, each mobile manipulator could handle the object based on an intentional force/moment applied by a human. Zhao et al. [16] established the dynamics between object space and joint space in terms of the object dynamics, individual-arm dynamics, and joint driving torques as multiple manipulators handle a single object. Li and Chen [17] considered multiple mobile manipulators grasping a rigid object in contact with deformable working surfaces whose geometric and physical model is unknown; adaptive neuro-fuzzy (NF) control for coordinated mobile manipulators was proposed for robust force/motion tracking on the constraint surface while it is in motion. In [18], a neural network algorithm was employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. It showed that feedback from the cameras is enough to control the position with desirable accuracy. The disadvantage is that the accuracy is limited by the cameras' resolution and the learning time is not short enough for real-time operation. Hartman [19] studied the real-time or near real-time monitoring of manipulator motion to detect and prevent potential collisions by reserving space ahead of each link in the system. Park et al. [20] introduced a vision/camera system to recognize the position and orientation of the robot manipulators for an assembly job. In [21], an open architecture for real-time sensory feedback control of a dual-arm industrial robotic cell was described. It proposed the possibility of using vision and other sensory feedback control, but no vision-based position or force feedback was shown in a physical experiment.

Most of the work on collaborative robots focuses on the kinematics and the control policy. How to feed back the robots' position or force precisely, and thereby design an effective collaboration policy, is important for the task. While in conventional systems this feedback is usually joint-angle information obtained from rotary encoders fixed to the joints, we adopt an optical measurement system to measure the end-effector position as the direct feedback for the cooperative robots. In this paper, a closed-loop collaborating system is built. The system comprises two modular robots and a dynamic optical measurement system named Optotrak. The optical information obtained from the Optotrak works as feedback for the robots' operation. Compared with a camera, the advantage of using Optotrak is that the robot end position can be measured and fed back directly, so that real-time control is possible.

The collaboration of a pair of arm robots can be classified into object mode and master/slave mode. In the object mode, the grasped object assumes the dominant role. The pose matrices of the two graspers can then be obtained according to the restriction relationship between the object and the two graspers, and the joint parameters can be obtained by solving the inverse kinematics equations. In the object mode, the two robots have equal importance and work in a concerted fashion. In the master/slave mode, one robot is assigned the role of master and the other robot is the slave. The motion of the slave robot can be deduced from the master robot motion according to the cooperation relation between the two robots.

Real-time control is necessary for the pair of arm robots in the operation, and the stable coordination relation between the two robots is expected to hold at all times. To achieve this, the master/slave mode is adopted for the cooperation of the two robots in the experiment: one robot acts as the master arm and the other acts as the slave. During the real-time control of the two robots' cooperation, when the desired motion of the master arm is modified, for example to avoid a collision, the motion of the slave arm must be modified accordingly in real-time, lest the coordinated motion of the two robots be destroyed. In the master/slave mode, the coordination relations of the joint positions and joint velocities are built for the master and slave robots according to the kinematics relationship between the two robots. The coordination relations are established by a set of closed-chain equation constraints. Thus, once the desired motion of the master arm is fixed, the joint positions and velocities of the slave robot can be deduced from the master/slave coordination relationship. With the master/slave mode, the motion of the master and slave robots remains coordinated during the entire operation period. In other words, if the desired motion of the master robot is modified in real-time, the slave robot motion will be adjusted according to the intrinsic coordination relation.

The paper is organized as follows. After a survey of related work, the collaboration system is introduced and the kinematics restrictions of the two modular robots are analyzed in Section 2. A hierarchical kinematics control scheme is designed in Section 3. In Section 4, we present a bolt and nut assembly experiment which is executed successfully with the two-modular-robot system. Finally, we draw conclusions in Section 5 and suggest some issues for further discussion.

2. Kinematics Constraints of the Cooperative Modular Robots

In this paper, a cooperative robot system is designed. It consists of two modular robots and a dynamic optical meter named Optotrak. The system configuration is shown in Figure 1.

2.1. Position and Pose Constraints

The two cooperating robots are shown in Figure 2. is the base coordinate system origin of the slave robot, and represents the coordinate system origin of its end effector. represents the base coordinate system origin of the master robot, and represents the coordinate system origin of its end effector. is the angle of rotating around (the Z axis of coordinate system ), and r is the position vector of under the coordinate system . q1 and q2 are the joint variables of the master and slave robots, respectively.

Using the base coordinate system of the master arm as the reference coordinate system of the whole system, the kinematics formulas of the master and slave arm ends under the reference coordinate system are as follows:

Suppose that represents the position vector of in the reference coordinate system, represents the position vector of in the reference coordinate system, and represent the rotation matrices of and in the reference coordinate system, respectively, and is the rotary transformation matrix between the pose of the slave arm end and the pose of the master robot end. Then we can get

From formulas (2), (3), and (4), we can get the holonomic position constraint between the two arms as follows:

And the full pose constraint is

where

Formulas (5) and (6) show that once the position and pose of the master arm are fixed, the pose and position of the slave arm are also fixed, and the joint variables of the slave arm can then be obtained by the inverse kinematics solution.
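Since the explicit forms of constraints (5) and (6) are not reproduced here, the following sketch assumes the common closed-chain form: the slave end position equals the master end position plus the offset r rotated into the reference frame, and the slave end orientation equals the master end orientation composed with a rotation of angle theta about Z. The function names are illustrative only.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a rotation of theta about the Z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def slave_end_pose(p_m, R_m, r, theta):
    """Given the master end position p_m and rotation R_m in the
    reference frame, plus the fixed offset r and the relative
    rotation theta about Z, return the slave end position and
    rotation (assumed form of constraints (5) and (6))."""
    p_s = p_m + R_m @ r          # position constraint
    R_s = R_m @ rot_z(theta)     # pose (orientation) constraint
    return p_s, R_s
```

Solving the slave arm's inverse kinematics at the resulting pose then yields its joint variables, as described above.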

2.2. Joint Velocity Constraints

If the Jacobian matrix of the robot is , then by differentiating (4) the linear velocity constraints of the end effector can be obtained as follows:

where ; and , respectively, represent the linear velocity and angular velocity of the end effector caused by the joint velocities.

Since the difference between the two arms' angular velocities rotating around is , we can obtain the angular velocity constraints of the two arms' end effectors:

From formula (9) we can get:

where

By combining formulas (8) and (10), we can get

where, .

Equation (11) is the necessary velocity condition on the joint velocities of the two arms when they cooperate. During the arms' motion, the joint velocity of the slave arm can be deduced from , , and . and must be given prior to solving for the slave robot joint velocity.
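The slave joint velocities implied by (11) can be sketched numerically. Since the coupling matrices of (11) are not reproduced here, the sketch below assumes the generic closed-chain form J_s q̇_s = C J_m q̇_m, with C a given twist-coupling matrix (identity for a rigid, axis-aligned grasp); the slave rates are recovered with the Jacobian pseudo-inverse, which also handles a redundant slave arm.

```python
import numpy as np

def slave_joint_velocity(J_m, J_s, qdot_m, C=None):
    """Assumed form of the velocity constraint (11): the slave
    end-effector twist equals the master twist mapped through a
    coupling matrix C, i.e.  J_s qdot_s = C J_m qdot_m.
    Solve for qdot_s with the pseudo-inverse of J_s."""
    twist_m = J_m @ qdot_m                  # master end-effector twist
    if C is None:
        C = np.eye(twist_m.shape[0])        # rigid, axis-aligned coupling
    return np.linalg.pinv(J_s) @ (C @ twist_m)
```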

2.3. Joint Acceleration Constraints

Suppose ,, and are given in advance; . Differentiating (11) on both sides with respect to time, we can get

where, ,

It can be seen from (12) that when , , , , , , , and are known, the joint acceleration of the slave robot can be derived.

2.4. Position and Pose Adjustment for the End Effectors of the Two Modular Robots

When the two modular robots finish their cooperative motion, errors remain between their poses and the desired positions. These errors are due to the linkage variable errors, kinematics errors, and systematic errors. We use the Optotrak 3020 to measure the 3-D motion data of the object in real-time; its measurement precision is 0.1 mm. Using the Optotrak 3020, the actual poses of the two arms' end effectors can be measured and fed back to the control computer RobotPC in real-time. The poses of the two end effectors can then be adjusted according to the measurement. From the data measured by the Optotrak 3020, the 3-D error of the master end coordinate origin relative to the slave arm end coordinate system can be computed. The error is described as , in which , , and are, respectively, the angular displacements or pose errors of the master end coordinate frame rotated from the corresponding slave end coordinate frame. The robots can be adjusted to arrive at the desired positions by adopting the resolved motion rate control (RMRC) method, which is described in Section 3.

3. The Kinematics Control of the Two Modular Robots

The motion controller of the two-arm robots is designed as a layered hierarchical control. We can set a tri-layered cooperation control system for the two modular robots as shown in Figure 3.

The hierarchical control system consists of a cooperation layer, a servo layer, and a position and pose adjustment layer. The cooperation layer mainly carries out the motion planning of the master robot, the on-line computation of the kinematics cooperation relation, and the computation of the desired motion of the slave robot. The servo layer mainly carries out the joint control by the joint controllers and the motion execution by the two modular robots. The position and pose adjustment layer mainly accomplishes the measurement of the two robot end poses and positions, and the adjustment of the two robot ends by adopting RMRC. The RMRC control diagram is shown in Figure 4.

It can be deduced that the relation between the robot operation velocity and the joint velocity is

In the actual computer control, the velocity component can be described as the increment in unit time. Then (14) can be rewritten as

where is the joint angle increment in the same time spacing.

is the Euler angle displacement of the two robot end effectors in a sampling cycle.

Upon obtaining the measurement via the Optotrak 3020, we can get the joint angle increment, , by substituting into (15). This can be realized by the robot joint servo control. After multiple measurements and feedback to the control computer, the master robot end can reach the desired position with a high degree of precision.
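The measure-and-correct cycle described above can be sketched as a small iterative loop, assuming the standard RMRC increment of (15), Δq = J⁺ Δx. Here a forward-kinematics function stands in for the Optotrak measurement, and all names are illustrative.

```python
import numpy as np

def rmrc_adjust(q, fk, jacobian, x_des, tol=1e-4, max_iter=100):
    """Iterative RMRC fine tuning: at each step the measured end
    position error is mapped to a joint increment through the
    Jacobian pseudo-inverse, mimicking the repeated Optotrak-measure
    / joint-servo cycle. fk(q) plays the role of the Optotrak
    measurement of the end position at joint configuration q."""
    for _ in range(max_iter):
        dx = x_des - fk(q)                 # measured position error
        if np.linalg.norm(dx) < tol:       # within tolerance: stop
            break
        q = q + np.linalg.pinv(jacobian(q)) @ dx   # joint increment (15)
    return q
```

In the real system each iteration would re-measure the end position with the Optotrak 3020 rather than evaluate a kinematic model.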

4. Experiments on the Two Modular Robots Cooperation

In this section, we describe an experiment that demonstrates the cooperation principles involving the two robots and an optical feedback meter. Specifically, the experiment explores the cooperation mechanism by which the two robots perform a bolt and nut mating operation.

We begin by describing the experimental set-up. In this experiment, one robot holds a part and assembles it into another part. Such operations exist not only in manufacturing assembly tasks but also in many non-manufacturing tasks. The bolt and nut assembly operation is chosen because it is a very common mechanical operation. The master and slave arms in the experiment are two 6-DOF PowerCube modular robots. By combining numerical methods and the LINGO software, we can obtain the cooperative robots' workspace. The range of the end effectors is , , and .

4.1. Motion Planning of the Bolt and Nut Assembly by Two Robots

The assembly is divided into 5 detailed motion stages and each of these stages is described below.

Stage 1. To avoid the collision between two robots during their motion, the first step is to bring the master robot that holds the bolt to an initial starting position. Next, the slave robot that holds the nut moves to a position that is a certain distance away from the master robot.

Stage 2. Using the cooperative kinematics principles, the master and slave robots assume a position required to perform the assembly. During the motion, the slave robot motion is planned online in real-time according to the master robot motion; the slave robot then moves according to the computed kinematics parameters. In this experiment, the two cooperative modular robots move towards each other in the established direction. The assembly position should satisfy the workspace constraints of the two cooperative robots. At the end of this stage, a position is chosen such that the two cooperating robots can complete the assembly.

Stage 3. The positions of the two robots' graspers are measured in real-time by the Optotrak 3020, and the measurement data are fed back to the control computer RobotPC. Then the position and pose of the graspers are adjusted using RMRC.

Stage 4. The master robot holding the bolt starts to move forward linearly. Meanwhile, the slave robot grasping the nut starts to move in a uniform circular motion. The speeds of the two robot motions must be coordinated so that the master robot smoothly inserts the grasped bolt into the nut grasped by the slave robot.

Stage 5. After the assembly, the slave robot loosens its grip and retreats from the assembly configuration to return to its original state. The master robot holds the assembled parts and moves to the appointed position to release them. After releasing the assembled parts it returns to its original state.

4.2. Speed Cooperation during the Assembly

The thread on the bolt and nut in the experiment is a uniform single-start thread. When the two robots assume the proper assembly position and pose, they can carry out the assembly task with the master robot feeding linearly and the slave robot moving in uniform circular motion. During the assembly, the linear feeding speed of the master robot must match the circular motion speed of the slave robot: for each full revolution of the nut, the bolt moves forward one thread pitch.

In the assembly task, the linear feeding motion of the master robot is at uniform speed. The linear feeding speed is

where is the screw pitch of the bolt and nut, is the linear feeding speed, and T is the period of the slave robot's uniform circular motion.

The angular velocity of the slave robot in uniform circular motion is

It can be derived from (16) and (17) that

The relationship between the linear movement of the master robot grasper and the circular motion of the slave robot grasper is established in (18).

In this experiment, there is a time limit on the uniform circular motion of the slave robot: once the assembly is completed, the uniform circular motion of the slave robot should come to a complete halt. A timer is adopted in the program for this purpose. Suppose the linear motion time of the master robot is 10 s; then the slave robot rotates through three circles with an angular velocity of rad/s.
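As a numerical check of the speed-matching rule, assuming (16)-(18) reduce to v = p/T and ω = 2π/T, the figures given in the experiment (2 mm pitch, as stated in Section 4.3, and three revolutions in 10 s) give:

```python
import math

pitch = 0.002          # thread pitch of the bolt and nut, m
t_total = 10.0         # linear motion time of the master robot, s
revolutions = 3        # turns completed by the slave robot

T = t_total / revolutions      # period of one revolution, s
omega = 2 * math.pi / T        # slave angular velocity, rad/s
v = pitch / T                  # matched master feed speed, m/s

# equivalent form of (18): v = pitch * omega / (2 * pi)
assert math.isclose(v, pitch * omega / (2 * math.pi))
```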

4.3. Experiment

In the assembly experiment, the bolt length is 44 mm, its nominal diameter is 10 mm, its pitch is 2 mm, and the thread is a single-start triangular thread. The master robot holds the bolt and the slave robot holds the nut. The whole assembly process is as follows.

The master robot moves from the original position to a point at (100, 300, 450). The slave robot moves to a position where the distance between the slave and the master robot is 400 mm in the horizontal direction. This movement is performed such that it obeys the constraint relations.

The two robots move linearly in cooperation. The master robot moves forward 100 mm horizontally within 10 s, and the slave robot moves according to the constraint relationship to a point where the distance to the master robot grasper is 50 mm, a little longer than the bolt length.

The configurations of the master and slave robot ends are measured by the Optotrak 3020 and fed back to the control computer. Through multiple fine-tuning steps via RMRC, the master robot can arrive at the desired position for the assembly operation.

The master robot advances slowly and smoothly with a constant velocity of 0.001 m/s. Accordingly, the slave robot rotates with a constant angular velocity. So the two robots cooperate for the bolt and nut assembly.

When the assembly is completed, the slave end grasper unclenches and retreats from its assembly configuration. The master robot grasper holds the assembled parts, moves to the point (, 300, ) to put them down, and then returns to its original configuration.

Figure 5 shows the experiment in process, with the two robots completing the bolt and nut assembly by cooperation.

The lines in Figure 6 show the end position track of the master robot in the X, Y, and Z coordinates during its cooperative motion and the fine-tuning process. The lines in Figure 7 show the end position track of the slave robot in the X, Y, and Z coordinates during its cooperative motion and the fine-tuning process. Line L1 shows the end position, as measured by the Optotrak 3020, when the two robots move to the original position. L2 represents the two robots' end positions during their cooperation process. L3 represents the two robots' end positions during the multiple fine-tuning process. It can be seen from the graph that the original position of the master robot is (66.957, 334.145, 453.763) because the marker cannot be attached at the origin of the end coordinate frame, but is instead attached at the point (33, , ) in the end coordinate reference system. The motion time of L2 is 10 s, during which the master robot moves linearly in the Y direction of the reference coordinate frame.

During the robot assembly, the relative configuration error between the assembly part and the assembly base part is an important factor. The relative pose and position error results from the robot kinematics parameter errors, the clearance of the robot joints, the manufacturing errors of the components, the orientation error of the workpieces, and the orientation error of the graspers. In this paper, we consider the assembly operation of the bolt and nut, so it is required that the non-superposition (misalignment) between the bolt and nut center axes be less than the conjugate clearance of the bolt and nut. Suppose the master robot end grasper center is at in the reference coordinate system, and the slave robot end grasper center is at . If the two graspers' axes are along the Y coordinate direction, their non-superposition can be expressed as follows:

The clearance between the bolt and nut is 0.6 mm, as measured by Vernier calipers. That means that the non-superposition between the master and slave robot end grasper centers, that is, between the bolt and nut center axes, should be

In the experiment, the two marker positions are measured by the Optotrak 3020. Based on these measurements, the non-superposition between the bolt and nut center axes is computed to be 6.7 mm, which does not satisfy (20). Through multiple adjustments using the RMRC method, the non-superposition between the bolt and nut center axes is reduced to 0.22 mm, so that (20) is satisfied and the bolt and nut can be assembled smoothly.
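The tolerance check of (19)-(20) can be sketched as follows, assuming the non-superposition is the offset between the two grasper centers in the plane perpendicular to the Y feed axis; the helper names are hypothetical.

```python
import numpy as np

def axis_misalignment(p_master, p_slave):
    """Non-superposition between the bolt and nut center axes: with
    the feed along the Y axis of the reference frame, only the X and
    Z offsets of the two grasper centers matter (assumed form of
    (19))."""
    d = np.asarray(p_master, dtype=float) - np.asarray(p_slave, dtype=float)
    return float(np.hypot(d[0], d[2]))      # offset in the X-Z plane, mm

def can_assemble(p_master, p_slave, clearance=0.6):
    """True when the axis misalignment is within the measured
    bolt/nut conjugate clearance (0.6 mm in the experiment)."""
    return axis_misalignment(p_master, p_slave) < clearance
```

With these definitions, the initially measured 6.7 mm offset fails the check, while the 0.22 mm offset reached after RMRC fine tuning passes it.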

5. Conclusions and Discussions

In this paper, a cooperative two-modular-robot system is built with optical measurement as feedback. The optical measurement can be used as feedback directly, which makes real-time control easy. For the cooperation between the two modular robots, the constraint conditions on pose, velocity, and acceleration are analyzed, and a tri-layered cooperative control system with RMRC is proposed to adjust the robots' pose and position. Based on the cooperative system hardware platform and the control scheme, a real-time bolt and nut assembly experiment is executed to demonstrate the validity of the cooperation control scheme.

Further research on the cooperation of the two modular robots needs to be done to improve the current results. Firstly, only the kinematics of the system is studied in this paper, for ordinary cooperation tasks. Some cooperation tasks require much higher precision, and the dynamics should be considered in addition to the kinematics.

Secondly, the 3D measurement system Optotrak 3020 is used to feed back the end positions of the two robots. In real industrial robot tasks, it is possible for the robots to contact or collide with each other. In that case, position feedback alone is far from adequate. Consequently, force or torque sensors should be utilized to measure the forces between the robots. We plan to address both of these issues and leave them as subjects for future work.


Acknowledgments

This research is supported by the National Natural Science Foundation of China (project no. 60705036), the Institute of Automation, Chinese Academy of Sciences (project no. 20070104), and the Beijing Municipal Education Commission (project titled "Global Map Construction by Mobile Robot Based on Nature Landmark Extraction"). The authors would like to thank Hugh Durrant-Whyte, Jayant Gupchup, John Pretty, and the anonymous reviewers for commenting on various versions of this research.