Abstract

This paper discusses the design and steering control of an autonomous modular mobile robot. The module is designed with a center-articulated steering joint to minimize the number of actuators used in the chain. We propose a feedback control law that allows steering between configurations in the plane and show its application as a parking control to dock modules together. The control law is designed by Lyapunov techniques and relies on the equations of the robot in polar coordinates. A set of experiments has been carried out to show the performance of the proposed approach. The design is intended to endow individual wheeled modules with the capability to merge into a single snake-like robot and thereby take advantage of the benefits of modular robotics.

1. Motivation

The capability of moving through a wide variety of remote areas has made mobile robots an interesting topic of research. However, since there are many ways to move, selecting a locomotion method is a challenging aspect of mobile robot design.

Inspired by their biological counterparts, mobile robots can walk, slide, and swim; conventional mobile robots, in addition, travel on powered wheels. Wheels are generally efficient, simple, and well suited to flat ground, but the performance of wheeled mobile robots degrades severely in unconventional environments. For instance, wheeled mobile robots are poorly suited to moving over rough terrain, sand, or water.

Chain-based robots are increasingly becoming an alternative to wheeled robots in robotic applications. Whereas legged and wheeled robots cannot effectively enter narrow spaces or climb over obstacles, snake robots with many degrees of freedom can cross a narrow gap, climb over a rock, move over rough terrain or marshland, and even swim. However, snake-like locomotion is neither efficient nor appropriate where traditional wheeled systems can be used.

This contrast motivated us to study the transition between wheeled and modular robotics and to present autonomous mobile robot modules capable of self-assembling to form a chain-like robot. Figure 1 shows the main idea of the work. Each robot module is equipped with a docking connector (connection mechanism) on the front plate and a universal joint in the middle. Note that the modules are designed with an articulated central joint, rather than a traditional steering axle. This design means that no additional actuators are necessary to create a snake-like serial chain when the modules are docked.

2. Introduction

A robotic system can be defined as a collection of members which are employed to do particular tasks. For many applications, it is possible to use a certain structure to complete the tasks. However, in untraditional environments and unexpected situations, it is almost impossible for a fixed-architecture robot to meet all the task requirements.

The work presented in this paper enables mobile robots to overcome more sophisticated tasks and enables modular robots to change the number of their modules to complete a specific task.

We investigate autonomous docking between separate modules, which covers (i) design and construction of a suitable connection mechanism; (ii) investigation of a parking control algorithm to drive the robot modules to a defined position and orientation; (iii) implementation of the system using a localization system.

We have already presented the design details of the connection mechanism in [1]. The proposed mechanism is suitable for our application since it is lightweight, compact, and powerful enough to secure a reliable connection. It overcomes significant alignment errors, and it is considerably power efficient. Here, therefore, we focus on implementation and experiments using a suitable control algorithm and localization system.

This research includes a study of the kinematics of articulated-steering robots, using the common model for center-articulated mobile robots [2] with some modifications. After defining the model, the next step is to develop a stable control law to steer the robot modules from any initial position and orientation to the goal configuration.

The feedback control of center-articulated mobile robots has rarely been addressed in the literature [3]. In articulated steering, the heading of the robot changes by folding the hinged chassis units. Apostolopoulos [4] presented a practical analytical framework for synthesis and optimization of wheeled robotic locomotion configurations.

Choi and Sreenivasan [5] investigated the kinematics of a multimodule vehicle using a numerical approach. The number of actuators in this design can vary from nine in a fully active system to a minimum of three.

Load-haul-dump (LHD) vehicles, which transport ore in underground mines, are articulated-steering vehicles, and their steering kinematics resembles that of center-articulated mobile robots. Corke and Ridley [2] developed a general kinematic model of the vehicle that shows how the heading angle evolves with time as a function of steering angle and velocity.

A path-tracking criterion for LHD trucks is proposed in [6]. Marshall et al. [7] have also investigated localization and steering of an LHD vehicle in a mining network.

In another work, Ridley and Corke [8] derived a linear, state-space, mathematical model of the vehicle, purely from geometric consideration of the vehicle and its desired path.

Visual navigation is increasingly becoming a more attractive method for robot navigation [9]. The field of visual navigation is of particular importance mainly because of the rich perceptual input provided by vision. Montesano et al. [10] have presented a method to relatively localize pairs of robots fusing bearing measurements and the motion of the vehicles.

Dorigo et al. and Mondada et al. [11, 12] presented the Swarm-bot platform. The basic component of the system, called the s-bot, is equipped with eight RGB LEDs distributed around the module and a video graphics array (VGA) omnidirectional camera. The camera can detect s-bots that have activated their LEDs in different colors.

A docking control strategy for recharging security robots has been suggested by Luo et al. [13], based on detecting an artificial landmark. In this configuration, a camera is mounted on top of the robot, and the video signal is captured by an image frame grabber installed inside the main controller.

The works presented in [14, 15] have also reported experiments where a robot with on-board vision docks with another robot.

Many other researchers have also studied other aspects of modelling and reconfiguration of modular mobile robots [16–22].

This paper is organized as follows. Section 2 surveys the related work. Section 3 discusses the parking problem. Section 4 presents the experimental results. Finally, Section 5 concludes the paper, and Section 6 points out some future work.

3. Steering Control

This section addresses the closed-loop steering of the active-joint center-articulated mobile robot. As illustrated in Figure 1, each robot module has a universal joint in the middle, so once the modules are connected, each one adds 2 DOF to the chain. We therefore focus on the steering kinematics of such robots, which in this paper are called “center-articulated.” (In this work, we focus on planar motion only; we designed the robot to have out-of-plane capability, but this is left for future work.)

To avoid confusion between this type of mobile robot and tractor-trailer vehicles [23], we emphasize the term “active-joint.” The modules are required to move and dock to one another; here we call this docking maneuver “parking control.”

We first propose a kinematic model of an active-joint center-articulated mobile robot, and a suitable control law is then derived to stabilize the configuration of the vehicle to a small neighborhood of the goal. The control law is designed by Lyapunov techniques and relies on the equations of the robot in polar coordinates.

As discussed in [1], the designed connection mechanism allows significant misalignment. Therefore, steering the robot module to a small neighborhood of the goal is enough to achieve successful docking.

3.1. Kinematic Model

A center-articulated mobile robot consists of two bodies joined by an active joint. The vehicle is steered only by changing the body angle, since both axles are fixed.

Consider an active-joint center-articulated mobile robot positioned at a nonzero distance with respect to a target frame (Figure 2). The robot’s motion is governed by the combined action of the linear velocity 𝑣 and the angular velocity 𝜔.

The kinematic equations of the robot, which involve the robot’s Cartesian position and the heading angle of the front body (𝑥, 𝑦, 𝜓), can be written as

\dot{x} = v\cos\psi, (1)
\dot{y} = v\sin\psi, (2)
\dot{\psi} = \frac{\sin\varphi}{l_2 + l_1\cos\varphi}\,v + \frac{l_2}{l_2 + l_1\cos\varphi}\,\omega, (3)
\dot{\varphi} = \omega, (4)

where 𝑙1 and 𝑙2 are the lengths of the front and the rear parts of the robot, and 𝜑 is the body angle. Equations (1), (2), and (4) are similar to those of a simple differential-drive mobile robot. Equation (3) can be derived as follows.

The relationship between the front and the rear halves of the robot is given by

\bar{x} + l_2\cos\bar{\psi} + l_1\cos\psi = x,
\bar{y} + l_2\sin\bar{\psi} + l_1\sin\psi = y, (5)

where (\bar{x}, \bar{y}, \bar{\psi}) denote the position and orientation of the rear part of the robot with respect to the target frame (Figure 2).

Taking the time derivative of (5) gives

\dot{\bar{x}} - l_2\dot{\bar{\psi}}\sin\bar{\psi} - l_1\dot{\psi}\sin\psi = \dot{x},
\dot{\bar{y}} + l_2\dot{\bar{\psi}}\cos\bar{\psi} + l_1\dot{\psi}\cos\psi = \dot{y}. (6)

We also know that \bar{\psi} = \psi - \varphi. Therefore, considering (4), we can write

\dot{\bar{\psi}} = \dot{\psi} - \omega. (7)

Substituting (1), (2), and (7) in (6) gives

\dot{\bar{x}} - l_2(\dot{\psi} - \omega)\sin(\psi - \varphi) - l_1\dot{\psi}\sin\psi = v\cos\psi,
\dot{\bar{y}} + l_2(\dot{\psi} - \omega)\cos(\psi - \varphi) + l_1\dot{\psi}\cos\psi = v\sin\psi. (8)

It is also assumed that there can be no motion parallel to the robot’s axles. This constraint on rolling without slipping for the rear part implies that

\dot{\bar{x}}\sin(\psi - \varphi) - \dot{\bar{y}}\cos(\psi - \varphi) = 0. (9)

This equation can be derived simply by projecting \dot{\bar{x}} and \dot{\bar{y}} onto the rear wheels’ axle (Figure 3).

Finally, solving (8) and (9) for \dot{\psi} verifies (3).
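Briefly, substituting \dot{\bar{x}} and \dot{\bar{y}} from (8) into (9) and using \bar{\psi} = \psi - \varphi gives

v\sin(\bar{\psi} - \psi) + l_2(\dot{\psi} - \omega) + l_1\dot{\psi}\cos(\psi - \bar{\psi}) = -v\sin\varphi + (l_2 + l_1\cos\varphi)\,\dot{\psi} - l_2\omega = 0,

which rearranges to (3).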

The kinematic equations can also be written in polar coordinates. From Figure 4 we can write

e = \sqrt{x^2 + y^2}, (10)
x = -e\cos\theta_1, (11)
y = -e\sin\theta_1, (12)

where 𝑒 is the error distance, 𝜃1 is the orientation of the error vector with respect to the target frame, and 𝜃2 is the angle between the distance vector and the linear velocity vector.

The time derivative of (10) can be written as

\dot{e} = \frac{x\dot{x} + y\dot{y}}{e}. (13)

Combining (11) and (12) with (13) yields

\dot{e} = -(\dot{x}\cos\theta_1 + \dot{y}\sin\theta_1). (14)

Substituting (1) and (2) into (14) gives

\dot{e} = -v\cos\psi\cos\theta_1 - v\sin\psi\sin\theta_1. (15)

So,

\dot{e} = -v\cos(\theta_1 - \psi). (16)

As \theta_2 = \theta_1 - \psi, therefore

\dot{e} = -v\cos\theta_2. (17)

Taking the time derivative of (11) and (12) and substituting (1) and (2) in the results yields

v\sin(\theta_1 - \psi) = e\,\dot{\theta}_1. (18)

As \theta_2 = \theta_1 - \psi,

\dot{\theta}_1 = \frac{v\sin\theta_2}{e}. (19)

Considering that \dot{\theta}_2 = \dot{\theta}_1 - \dot{\psi}, from (19) and (3) we obtain

\dot{\theta}_2 = \left(\frac{\sin\theta_2}{e} - \frac{\sin\varphi}{l_2 + l_1\cos\varphi}\right)v - \frac{l_2}{l_2 + l_1\cos\varphi}\,\omega. (20)

Therefore, the kinematic equations of a center-articulated mobile robot in polar coordinates can be summarized as

\begin{bmatrix} \dot{e} \\ \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\varphi} \end{bmatrix} =
\begin{bmatrix}
-\cos\theta_2 & 0 \\
\dfrac{\sin\theta_2}{e} & 0 \\
\dfrac{\sin\theta_2}{e} - \dfrac{\sin\varphi}{l_2 + l_1\cos\varphi} & -\dfrac{l_2}{l_2 + l_1\cos\varphi} \\
0 & 1
\end{bmatrix}
\begin{bmatrix} v \\ \omega \end{bmatrix}. (21)
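As a concrete illustration, a minimal C++ sketch of the state-derivative evaluation in (21) is given below. The struct and function names are illustrative assumptions (they are not taken from the actual implementation), and 𝑒 is assumed strictly positive, since the model is singular at 𝑒 = 0 (see Section 3.3).

#include <cmath>

struct PolarState {
    double e;       // distance error
    double theta1;  // orientation of the error vector in the target frame
    double theta2;  // angle between the error vector and the velocity vector
    double phi;     // body (articulation) angle
};

struct PolarRates { double de, dtheta1, dtheta2, dphi; };

// Evaluates the right-hand side of (21) for the current state and inputs.
// l1 and l2 are the lengths of the front and rear parts of the robot.
PolarRates polarDerivatives(const PolarState& s, double v, double omega,
                            double l1, double l2) {
    const double denom = l2 + l1 * std::cos(s.phi);   // assumed nonzero (Section 3.3)
    PolarRates r;
    r.de      = -v * std::cos(s.theta2);
    r.dtheta1 =  v * std::sin(s.theta2) / s.e;
    r.dtheta2 = (std::sin(s.theta2) / s.e - std::sin(s.phi) / denom) * v
                - (l2 / denom) * omega;
    r.dphi    = omega;
    return r;
}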

It is interesting to note that polar coordinates yield a set of state variables that closely resemble those we use in everyday car driving [24]. In the next section, it will be shown that (21) is suitable for designing an appropriate control law for parking maneuvers.

3.2. Controller Design

The Lyapunov stability theory is a common tool for designing control systems (see, for example, Bullo and Lewis [25] for a general introduction). Here we consider a simple quadratic form as the candidate Lyapunov function.

Let the robot be initially positioned at a nonzero distance from the target frame. The objective of the parking control system is to move the robot so that it is accurately aligned with the target frame.

In other words, it is intended to find a stable control law [𝑣(𝑒,𝜃1,𝜃2,𝜑),𝜔(𝑒,𝜃1,𝜃2,𝜑)] which drives the robot from any initial position (𝑒(0),𝜃1(0),𝜃2(0)) to a small neighborhood of the target, (0,0,0).

Consider the positive definite form

V = \tfrac{1}{2}\lambda_1 e^2 + \tfrac{1}{2}\lambda_2\theta_1^2 + \tfrac{1}{2}\lambda_3\theta_2^2 + \tfrac{1}{2}\lambda_4\varphi^2, \qquad \lambda_1, \lambda_2, \lambda_3, \lambda_4 > 0. (22)

The time derivative of 𝑉 can be expressed as

\dot{V} = \lambda_1 e\dot{e} + \lambda_2\theta_1\dot{\theta}_1 + \lambda_3\theta_2\dot{\theta}_2 + \lambda_4\varphi\dot{\varphi}. (23)

Substituting (21) in (23) gives

\dot{V} = \left[\left(\lambda_2\theta_1 + \lambda_3\theta_2\right)\frac{\sin\theta_2}{e} - \lambda_1 e\cos\theta_2 - \frac{\lambda_3\theta_2\sin\varphi}{l_2 + l_1\cos\varphi}\right]v + \left[\lambda_4\varphi - \frac{\lambda_3\theta_2\, l_2}{l_2 + l_1\cos\varphi}\right]\omega. (24)

It can be seen that letting

v = -\left[\left(\lambda_2\theta_1 + \lambda_3\theta_2\right)\frac{\sin\theta_2}{e} - \lambda_1 e\cos\theta_2 - \frac{\lambda_3\theta_2\sin\varphi}{l_2 + l_1\cos\varphi}\right], (25)

\omega = -\left[\lambda_4\varphi - \frac{\lambda_3\theta_2\, l_2}{l_2 + l_1\cos\varphi}\right] (26)

makes \dot{V} \le 0, which implies stability of the system states. Convergence (asymptotic stability) depends on the choice of the \lambda's, as discussed next.
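A minimal C++ sketch of the control law (25)-(26) follows. The function and parameter names are illustrative assumptions rather than those of the actual implementation, and 𝑒 is assumed strictly positive.

#include <cmath>

struct Command { double v, omega; };

// Computes (v, omega) from the polar state (e, theta1, theta2, phi) according to
// (25) and (26). lambda1..lambda4 > 0 are the controller gains; l1 and l2 are the
// lengths of the front and rear parts of the robot.
Command parkingControl(double e, double theta1, double theta2, double phi,
                       double lambda1, double lambda2, double lambda3, double lambda4,
                       double l1, double l2) {
    const double denom = l2 + l1 * std::cos(phi);
    Command u;
    u.v = -((lambda2 * theta1 + lambda3 * theta2) * std::sin(theta2) / e   // (25)
            - lambda1 * e * std::cos(theta2)
            - lambda3 * theta2 * std::sin(phi) / denom);
    u.omega = -(lambda4 * phi - lambda3 * theta2 * l2 / denom);            // (26)
    return u;
}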

3.3. Stability Analysis

The proposed candidate Lyapunov function 𝑉 is lower bounded. Furthermore, \dot{V} is negative semidefinite and uniformly continuous in time (\ddot{V} is finite). Therefore, according to Barbalat’s lemma [26], \dot{V} \to 0 as t \to \infty.

The time derivative of 𝑉 can be expressed as

\dot{V} = \Lambda_1 + \Lambda_2 = -\left[\left(\lambda_2\theta_1 + \lambda_3\theta_2\right)\frac{\sin\theta_2}{e} - \lambda_1 e\cos\theta_2 - \frac{\lambda_3\theta_2\sin\varphi}{l_2 + l_1\cos\varphi}\right]^2 - \left[\lambda_4\varphi - \frac{\lambda_3\theta_2\, l_2}{l_2 + l_1\cos\varphi}\right]^2. (27)

It is noted that, as \dot{V} \to 0 and both \Lambda_1 and \Lambda_2 are nonpositive squared terms, therefore \Lambda_1 \to 0 and \Lambda_2 \to 0. If \lambda_4 is selected to be very small, \Lambda_2 takes on the form

\Lambda_2 \approx -\left[\frac{\lambda_3\theta_2\, l_2}{l_2 + l_1\cos\varphi}\right]^2. (28)

So, \Lambda_2 \to 0 implies that \theta_2 \to 0.

As \theta_2 \to 0, \Lambda_1 also takes on the simpler form

\Lambda_1 \approx -\left[\frac{\lambda_2\theta_1\theta_2}{e} - \lambda_1 e\right]^2. (29)

Consequently, \Lambda_1 \to 0 gives

\lambda_2\theta_1\theta_2 \to \lambda_1 e^2. (30)

As \theta_2 \to 0, we get e \to 0.

Finally, in the limit where both 𝑒 and 𝜃2 go to zero, \theta_2/e is bounded and (25) gives

\lim_{e,\theta_2 \to 0} v = -\frac{\lambda_2\theta_1\theta_2}{e}. (31)

Therefore, from (19) we obtain

\dot{\theta}_1 = -\lambda_2\left(\frac{\theta_2}{e}\right)^2\theta_1. (32)

As \lambda_2 > 0 and (\theta_2/e)^2 is positive, from (32) it is found that \theta_1 is stable and eventually approaches zero, though it may do so slowly.

Therefore, \dot{V} \to 0 results in (e, \theta_1, \theta_2) \to (0, 0, 0).

Since this system is driftless, Brockett’s condition [25] predicts that a smooth control will not stabilize the system. However, in this case, it is not necessary to stabilize the entire state of the system, because 𝜑 is only the internal body angle, which can always be corrected by lifting one end, thus eliminating the nonslip constraint on one axle. As a result, we work only to steer the triple (e, 𝜃1, 𝜃2) to a small neighborhood of (0, 0, 0), indicating that the robot is in position to dock.

In practice, there is a trade-off in selecting parameter 𝜆4. Setting 𝜆4=0 stabilizes (𝑒,𝜃1,𝜃2) while rendering 𝜑 uncontrollable. In such cases, 𝜑 can take on physically unrealizable values, for example, causing the robot to fold in on itself. By contrast, choosing 𝜆4 large can result in very slow approaches to the origin.

It should also be mentioned that the proposed model for center-articulated mobile robots has a singularity at 𝑒 = 0 since, according to (21), \dot{\theta}_1 and \dot{\theta}_2 are not defined at 𝑒 = 0. The condition 𝑒 = 0 cannot occur in finite time since the approach to zero is asymptotic.

One may also observe another singularity: if 𝑙2 + 𝑙1 cos 𝜑 = 0, then \dot{\theta}_2 (and \dot{\psi}) is not defined. If the robot is designed such that 𝑙2 > 𝑙1, this singularity never happens. If 𝑙2 = 𝑙1, the singularity occurs only at 𝜑 = ±𝜋, but this case cannot arise since it would mean that the robot is fully folded back on itself.

Finally, we note that there is a special case in which the controller is not able to stabilize the configuration of the robot. This special case occurs when both 𝜑 and 𝜃2 are initially zero. As can be observed from (25) and (26), in this situation 𝜔 = 0 and 𝑣 = 𝜆1𝑒; in fact, there is no control on 𝜃1. The controller should recognize this special case and take appropriate action, for instance, by changing the initial body angle to a nonzero value.
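A small C++ sketch of one way a supervisory layer might detect this degenerate start and inject a nonzero body-angle command is shown below. The thresholds and the nudge magnitude are illustrative assumptions, not values from the implementation.

#include <cmath>

// True when phi and theta2 both start at (numerically) zero while theta1 does not,
// i.e., the case in which theta1 is not driven by the control law (Section 3.3).
bool isDegenerateStart(double theta1, double theta2, double phi, double eps = 1e-3) {
    return std::fabs(phi) < eps && std::fabs(theta2) < eps && std::fabs(theta1) > eps;
}

// Returns a body-angle command: a small nudge toward the side of the target in the
// degenerate case, otherwise the current body angle (no change).
double initialBodyAngleCommand(double theta1, double theta2, double phi) {
    const double nudge = 0.1;  // rad, illustrative
    if (isDegenerateStart(theta1, theta2, phi))
        return (theta1 >= 0.0) ? nudge : -nudge;
    return phi;
}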

4. Experiments

In the previous section, we introduced a control law to steer a center-articulated mobile robot in order to achieve successful parking (docking). In our experiments, we use a beacon-based localization system [27] to determine the pose of the robot relative to the target position; however, that is only an implementation detail, and in principle any localization scheme could be used. We include details of our beacon-based localization approach only for completeness of the description of the experimental setup.

The simulation results and the effect of measurement noise are also presented in our previous works [27, 28].

In this section, we first discuss the design details of the robot and the experimental setup. We then present experimental results on the robot system to verify the effectiveness of the proposed approach.

4.1. Robot Design

In order to provide a platform to perform our experiments, we designed and constructed an articulated-steering mobile robot. The robot module consists of a dual-actuated universal joint with servomotors as the joint actuators (Figure 5).

The robot is driven by a single-motor gearbox and the actuators are controlled by a control board. The robot is also equipped with an omnidirectional camera which measures the view angles of the beacons by means of a real-time color detection algorithm implemented on a PC. The PC calculates the control signals (the motor speed and the servo angle) which are transmitted to the control board through serial communication.

In this design, two servomotors are located on the yokes, which turn the axles of the middle piece. Rotating the horizontal axle moves the joint up and down (pitch), and rotating the vertical axle moves the joint from side to side (yaw).

4.2. Visual System

We set up a vision-based localization system using an omnidirectional camera that provides a description of the environment around the robot. The system is composed of a camera pointed upwards at a spherical mirror, mounted on top of the robot (Figure 6). We do not assume knowledge of any camera/mirror calibration parameters (mirror size or focal length).

Three colored objects (red, green, and blue) serve as beacons; they are located on top of the target rover to determine the pose of the target. The colored beacons are detected by the camera, and the images are transferred to an off-board computer for real-time processing.

4.3. Image Processing

Once the images are transferred to the PC, machine-vision software processes the incoming image data and detects the positions of the beacons in the image plane.

In this process, we first perform color filtering (RGB filtering), followed by a closing operation. In RGB filtering, all pixels that are not of the selected color are suppressed. Closing has the effect of filling small and thin holes in objects by connecting nearby objects in the image plane [29].

Then, blob-size filtering is performed to remove objects below a certain size. As a result, each beacon is located as a single blob in the image. The center of gravity of each blob determines the position of each beacon in the image plane.

The image processor outputs the beacons’ positions on the image plane, from which the bearing to each beacon can be computed. The steps of the image-processing algorithm are summarized in Algorithm 1.

Step 1: The image is rotated and flipped horizontally to be aligned
with the actual position of the robot.
Step 2: The image is cropped to only include the beacons area.
Step 3: The raw image is labeled, which allows the image currently being
processed to be referred to again at a later time.
Step 4: RGB filtering is performed to detect the first color.
Step 5: Closing process is performed to connect nearby detected objects.
Step 6: Blob-size filtering is performed to remove the image noise.
Step 7: The center-of-gravity of the detected beacon is marked as
the position of the beacon in the image plane.
Step 8: Steps 4–7 are repeated for the second and third color,
using the labeled raw image.
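For illustration, a minimal C++ sketch of Steps 4–7 for a single beacon color is given below, written against OpenCV. The paper does not name the vision library used; OpenCV, the color thresholds, and the blob-size limit here are assumptions made purely for illustration.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Returns the image-plane centre of gravity of the largest blob within the given
// color range, or (-1, -1) if no blob survives the size filter.
cv::Point2d detectBeacon(const cv::Mat& bgrImage,
                         const cv::Scalar& lower, const cv::Scalar& upper,
                         double minBlobArea) {
    cv::Mat mask;
    cv::inRange(bgrImage, lower, upper, mask);                        // Step 4: color filtering
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);            // Step 5: closing

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = minBlobArea;                                     // Step 6: blob-size filter
    cv::Point2d centre(-1.0, -1.0);
    for (const auto& c : contours) {
        const double area = cv::contourArea(c);
        if (area < bestArea) continue;
        const cv::Moments m = cv::moments(c);
        if (m.m00 <= 0.0) continue;
        bestArea = area;
        centre = cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);            // Step 7: centre of gravity
    }
    return centre;
}

Calling this function once per color (Step 8) yields the three beacon positions from which the view angles are computed.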

4.4. Control Algorithm

In this section, we briefly describe the control algorithm to steer the robot to the neighborhood of the target. The control algorithm is implemented on the PC which receives the measurements of the beacons’ positions and sends the control signals to the control board.

The algorithm starts with the relative bearing measurement, using the data provided by the camera. If a measured angle lies outside the interval [−𝜋, 𝜋], ±2𝜋 is added to the computed angle. It is noted that some parameters, such as the beacons’ positions in the target frame, the robot’s lengths, and the controller gains, are predefined in the algorithm.
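The angle normalization can be written as a small C++ helper; the function name is an illustrative assumption.

#include <cmath>

// Wraps an angle (in radians) into the interval [-pi, pi] by adding or
// subtracting multiples of 2*pi.
double wrapToPi(double angle) {
    const double kPi = 3.14159265358979323846;
    while (angle >  kPi) angle -= 2.0 * kPi;
    while (angle < -kPi) angle += 2.0 * kPi;
    return angle;
}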

Once the angles are determined, the feedback parameters (𝑒,𝜃1,𝜃2) are calculated based on equations described in [27]. Now that the feedback parameters are available, in the next step, the control signals (𝑣,𝜔) are computed according to the equations given in Section 3.2.

The calculated control signals are not applied to the robot directly. As there are limitations on the robot’s speed and body angle, the control signals are restricted within defined upper bounds. Finally, the control signals are scaled to compensate for the effect of the actuator dead band.
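A minimal C++ sketch of this saturation and dead-band compensation is shown below; the limit values and the dead-band width are placeholders, not the values used on the actual robot.

#include <algorithm>
#include <cmath>

// Clips a command to the interval [-limit, limit].
double saturate(double value, double limit) {
    return std::max(-limit, std::min(limit, value));
}

// Rescales a nonzero command so that it clears the actuator dead band, while the
// maximum command still maps to the maximum actuator input.
double compensateDeadBand(double command, double deadBand, double maxCommand) {
    if (command == 0.0) return 0.0;
    const double sign = (command > 0.0) ? 1.0 : -1.0;
    return sign * (deadBand + (maxCommand - deadBand) * std::fabs(command) / maxCommand);
}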

The whole process is performed once a new image frame arrives from the camera. The control algorithm is summarized in Algorithm 2.

Step 1: The 𝛼, 𝛽, and 𝛾 angles are computed, based on the positions of
the beacons received from the image-processing application.
(The angles are confined to [−𝜋, 𝜋].)
Step 2: Feedback parameters (𝑒, 𝜃1, 𝜃2) are computed.
Step 3: Control signals (𝑣, 𝜔) are computed, based on
equations (25), (26).
Step 4: If the control signals exceed their limits, they are set
to their maximum allowable values.
Step 5: The computed control signals are scaled and sent to the
control board.
Step 6: Once a new image frame arrives, Steps 1–5 are repeated.

The simulation results reveal that, in some cases, the generated path passes through the beacon’s region. Since this implies that two robots will be physically occupying the same space, the control algorithm must recognize this situation and resolve it. The solution is as follows, based on locating a set of via points in the robot’s workspace.

As Figure 7(a) indicates, the robot’s workspace can be divided into four parts, considering the sign of 𝛼 and 𝛽 angles. The region where both 𝛼 and 𝛽 are positive is considered as the safe region. This region is called safe since if the robot’s initial position is located in this region, the robot reaches the goal with no need to pass through the beacons.

Therefore, if the control algorithm detects that the robot is not in the safe region, it steers the robot to first pass through a set of predefined via points. Figure 7(b) shows a set of assumed via points in the workspace. According to the figure, the via points are reached in an order such that the robot is finally located in the safe region.

The via points are reached using the same control algorithm summarized in Algorithm 2. For each point, the target frame is redefined at that specific via point. In this approach, the feedback still comes from the beacons located on the target frame, but using a simple translation, the robot is steered to a coordinate system originating at the specified via point. Once the error distance to that via point is small enough, the next via point is followed until the robot is located in the safe region.

This process is done by setting/resetting status flags. The process of following via points is summarized in Algorithm 3.

Step 1: The positions of the via points (for point 𝑉𝑝𝑖, (𝑋𝑉𝑝𝑖, 𝑌𝑉𝑝𝑖))
are predefined in the algorithm (Figure 7(b)). The Detection flag (D-Flag)
and the Point flag (P-Flag) are two status flags that determine the status
of the robot. P-Flag indicates the next via point that should be
reached. D-Flag indicates whether the via points are being followed.
Step 2:
If 𝛼 < 0 and 𝛽 < 0 and D-Flag = 0, then P-Flag = 1 and D-Flag = 1.
If 𝛼 > 0 and 𝛽 < 0 and D-Flag = 0, then P-Flag = 2 and D-Flag = 1.
If 𝛼 < 0 and 𝛽 > 0 and D-Flag = 0, then P-Flag = 5 and D-Flag = 1.
Step 3: If P-Flag = 𝑖, then 𝑋𝑟 = 𝑋𝑟 − 𝑋𝑉𝑝𝑖 and 𝑌𝑟 = 𝑌𝑟 − 𝑌𝑉𝑝𝑖.
So, the calculation of the feedback parameters is changed.
Step 4: The control algorithm (Algorithm 2) is then followed
to reach the specified via point.
Step 5:
If P-Flag = 4, then steer to the actual target frame.
If P-Flag = 3 and 𝑒 is small enough, then P-Flag = 4.
If P-Flag = 6 and 𝑒 is small enough, then P-Flag = 4.
If P-Flag = 2 and 𝑒 is small enough, then P-Flag = 3.
If P-Flag = 5 and 𝑒 is small enough, then P-Flag = 6.
If P-Flag = 1 and 𝑒 is small enough, then P-Flag = 2.
Step 6: Steps 2–5 are repeated once a new image frame arrives.
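A compact C++ sketch of this flag bookkeeping is given below. The structure, the names, and the arrival-threshold parameter are illustrative assumptions; only the flag transitions follow Algorithm 3.

#include <array>

struct ViaPoint { double x, y; };

struct ViaPointTracker {
    std::array<ViaPoint, 6> points;  // Vp1..Vp6, predefined in the workspace (Figure 7(b))
    int pFlag = 0;                   // next via point to reach (0 = none, 4 = go to target)
    bool dFlag = false;              // true once via-point following has started

    // Step 2: choose the first via point from the signs of alpha and beta.
    void start(double alpha, double beta) {
        if (dFlag) return;
        if (alpha < 0.0 && beta < 0.0) pFlag = 1;
        else if (alpha > 0.0 && beta < 0.0) pFlag = 2;
        else if (alpha < 0.0 && beta > 0.0) pFlag = 5;
        dFlag = (pFlag != 0);
    }

    // Step 3: translate the target coordinates to the current via point.
    void offsetTarget(double& xr, double& yr) const {
        if (pFlag == 0 || pFlag == 4) return;  // steer to the actual target frame
        xr -= points[pFlag - 1].x;
        yr -= points[pFlag - 1].y;
    }

    // Step 5: advance the flag once the current via point has been reached.
    void update(double e, double arrivalThreshold) {
        if (pFlag == 0 || e > arrivalThreshold) return;
        if (pFlag == 1) pFlag = 2;
        else if (pFlag == 2) pFlag = 3;
        else if (pFlag == 3) pFlag = 4;
        else if (pFlag == 5) pFlag = 6;
        else if (pFlag == 6) pFlag = 4;
    }
};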

4.5. Robot Construction

Based on the design approach described above, a prototype of the robot has been implemented, including the docking connector and the visual system. Figure 8 presents a general view of the built robot. The prototype weighs less than 1.0 kg, and the total length of the robot is 24 cm.

The universal joint’s actuators are two HiTec HSR-5995TG Ultra Torque servomotors, which give a maximum standing torque of 3.8 Nm. The robot is driven by a twin-motor gearbox, consisting of two small DC motors, both connected to a single gearbox (ratio 344 : 1). The robot’s actuators are controlled by the main control board, which is equipped with an HCS12 Motorola microcontroller and a MicroCore-11 motor drive. The control board communicates with the PC through RS232 serial communication.

4.6. Experimental Setup

To test the proposed approach, an experimental setup has been designed in which the performance of the built articulated-steering robot is examined. The experimental setup is illustrated in Figures 9 and 10. The experiments were conducted in a small field made of white styrofoam blocks.

Figure 11(a) illustrates an image frame received from the camera (the image is cropped). Figures 11(b)–11(d) show the results of the image-processing algorithm. The size of the cropped image is 420 × 400 pixels.

The software platform used to implement the control algorithm is Microsoft Visual Studio 9.0. The code is written in C++ and executed on a 3.0 GHz Intel dual-core processor with 1 GB of RAM. The software calculates the control signals (the motor speed and the servo angle), which are transmitted to the control board through RS232 serial communication.

A power supply provides 12 V DC at 1 A to the control board, whose HCS12 Motorola microcontroller is programmed to generate appropriate PWM signals for the actuators.

We perform experiments while a fixed digital camera records the behavior of the robot. Image processing is then performed off-line using MATLAB Image Processing Toolbox to yield the robot’s true position.

4.7. Experimental Results

A set of experiments was carried out to show the performance of the proposed algorithms. The experimental results are shown graphically in the following figures. In all experiments, the distance between the centers of the beacons is taken as the unit of distance, “B”. So, the beacon distances are 𝑎 = 𝑏 = 1 B, and as the beacons are located on an equilateral triangle, 𝛿 = 𝜋/3 (see the Localization section in [27]). The lengths of the robot are 𝑙1 = 𝑙2 = 0.5 B.

The controller gains are chosen to be 𝜆1 = 0.8, 𝜆2 = 1.05, 𝜆3 = 0.95, and 𝜆4 = 0.02. These gains were obtained by trial and error. It is noted that choosing positive gains is enough to achieve a stable control law (see Section 3.2); fine adjustment to improve the performance is done considering the actuators’ limits.
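As a usage illustration of the parkingControl sketch from Section 3.2 (again, an illustrative sketch rather than the actual implementation), a single control update with these gains, the body lengths 𝑙1 = 𝑙2 = 0.5 B, and the initial state (7.5, 𝜋/8, 𝜋/4, 0) of one of the experiments below might look as follows:

#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // Gains lambda1..lambda4 from the experiments; state (e, theta1, theta2, phi).
    Command u = parkingControl(7.5, pi / 8.0, pi / 4.0, 0.0,
                               0.8, 1.05, 0.95, 0.02,
                               0.5, 0.5);
    std::printf("v = %f, omega = %f\n", u.v, u.omega);  // forward speed and joint rate
    return 0;
}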

Figure 12(a) shows the snapshots of an experiment where the robot is positioned at (𝑒,𝜃1,𝜃2,𝜑) = (6,0,0,0). It is noted that the distance unit is in B. Figure 12(b) illustrates the traveled path of the robot for this experiment.

Figure 13 shows the changes in 𝛼 and 𝛽 angles (the inputs to the controller), sent from the image processing algorithm, as well as the control signals 𝑣 and 𝜔 (the outputs of the controller), applied to the robot.

Figures 14(a), 14(b), and 15–19 show the results of some other experiments, with the robot located in different positions and orientations relative to the target frame (the red and the green beacons are located on the 𝑋-axis of the target frame). For each experiment, snapshots and the actual traveled path are presented; the figures also show the inputs and outputs of the controller. In these experiments, the robot starts at (7, 5𝜋/8, 0, 0), (7.5, 𝜋/8, 𝜋/4, 0), and (8, 5𝜋/8, 𝜋, 0), respectively.

As mentioned in Section 4.4, in case the generated path passes through the beacon’s region, the control system steers the robot through a set of predefined via points. The via points are reached in an order such that the robot is finally located in the safe region (Figure 7(b)).

Finally, Figure 20 shows the snapshots of an experiment where the initial position of the robot is not in the safe region. As can be seen, the robot has been steered appropriately and parked at the target.

Because our docking joint permits wide initial positioning errors (±40 degrees of yaw and 11 mm of offset), once the robot falls below a threshold speed, it suffices to drive forward in a straight line in order to accomplish docking. In these experiments, the robot docked successfully in every trial.

We also performed a set of experiments with the actual docking joints [1] to visualize the connection maneuver. As Figure 21 shows, the connector pieces are mounted on the front of the robot as well as at the origin of the target frame. This experiment was repeated ten times, and in every trial the robot successfully completed the connection maneuver.

5. Conclusion

This research work introduced the idea of a new robotic system including a team of wheeled mobile robots which are able to autonomously assemble and form a chain-like robot. The goal is to improve the performance of mobile robots by autonomously changing their locomotion method.

We proposed the design of a center-articulated module to form the building block of a robot team that can form itself into a serial chain where required by terrain constraints.

Next, we proposed a kinematic model of an active-joint center-articulated mobile robot in polar coordinates, and a control law was then derived, using Lyapunov techniques, to stabilize the configuration of the vehicle to a small neighborhood of the goal. The results reveal that choosing a suitable state model allows a simple Lyapunov function to be used to achieve parking control, even when the feedback is noisy.

We finally designed and constructed a center-articulated mobile robot equipped with a beacon-based localization system to verify our approach. The experimental results show the effectiveness of the proposed approach.

6. Future Work

An important extension to this research work would be investigating self-reconfiguration, using a set of autonomous wheeled mobile robots and the proposed approach. The next step could also be maneuvering the chain to overcome tasks such as passing through a narrow space and climbing up steps. A fully functional modular team of these units will also require the intelligence to examine surrounding terrain to determine when docking is necessary.

There are a large number of open research topics in repairable systems. The problem of designing a robot team with repair capabilities is complex and not easily solved [30]. The proposed configuration can be extended to answer some of the questions in this area.

Investigating the dynamic docking problem can also be considered as future work. In dynamic docking, the robot modules find and connect to one another while they are all moving. Dynamic docking offers faster docking times, since the modules do not need to stop and perform a parking maneuver. As the target is in motion, vector tracking should be investigated rather than the regulation problem considered in parking control.