Abstract

We propose a wirelessly teleoperated and autonomous mobile security robot based on a multisensor system to monitor the ship/cabin environment. With this robot, the pilot in charge of monitoring can stay away from the scene while still feeling present at the site, monitoring and responding to any potential safety problem. The robot can also serve as a supplementary device for cabin crew members who are too busy or too tired to respond properly to a crisis, making a single crew member on cabin duty a practical option. When the robot detects something unusual in the cabin, it notifies the pilot, who can then teleoperate the robot to respond as needed. As a result, a cabin without any crew member on duty can be achieved through this type of robot/system.

1. Introduction

To reduce the manpower and energy used aboard, modern ships are getting bigger while carrying fewer crew members. This is possible because current technology keeps getting smarter. For example, the equipment built into a cabin can monitor many kinds of data for safer and more efficient navigation, including temperature, pressure, and ocean currents. Consequently, a single crew member on duty becomes possible [1]. In fact, the most advanced ships, under conditions regulated by the adopted amendments to the International Convention for the Safety of Life at Sea (SOLAS) [2], can be operated without any crew member in the cabin, provided that any unusual problem triggers a notification to a crew member and the pilot, and that the next available crew member is notified when the previous one fails to respond. However, when the crew members are tired or fail to respond, the pilot cannot tackle the problem directly. To solve this problem, we propose a wirelessly teleoperated and autonomous mobile security robot based on a multisensor system.

The autonomous robot has become a subject of growing interest and has been deployed widely in many popular areas. Traditionally, robots have been used to help people in many industrial and scientific fields, such as semiconductor factories. In recent years, robots have been carefully designed with the expectation of providing all kinds of services in human daily life [3–6]. Basically, a service robot developed for security work provides the following functions: autonomous navigation, supervision through the Internet, a remote operation utility, a vision system, human-robot interaction (HRI), and so on.

Many researchers and institutions have developed various kinds of multifunction robots. Famous R&D groups such as MIT and iRobot in America and SONY and HONDA in Japan have invested in advanced robotics such as rescue robots, military robots, and service robots. More and more research organizations have continued to publish robotics developments in recent years. For example, ASIMO [7] is made by HONDA, and the University of Tokyo proposed the humanoid robot HRP [8]. In Japan, security robots such as ALSOK [9], FSR [10], and SECOM [11] have become increasingly popular with people and companies.

Figure 1 shows the wirelessly teleoperated and autonomous mobile security robot platform developed in this investigation, and Figure 2 shows its four major systems. Through these four systems, the mobile security robot can execute tasks autonomously or be teleoperated by the pilot to respond as needed. The top-left blocks form the vision system: when the robot executes patrolling tasks in a ship environment, the CCD camera acquires images with its pan-tilt-zoom utility. The top-right blocks form the environment sensing system: smoke and fire sensors are installed on the robot, and when they detect a smoke or fire event, the robot sounds an alarm and sends a message to the pilot/console. Furthermore, if the robot is equipped with a fire extinguisher, it may directly extinguish the fire via remote teleoperation, saving valuable time. The bottom-left blocks form the user interface system, and the bottom-right blocks form the navigation system. The robot is designed with both a teleoperated mode and an autonomous mode. Since the robot is expected to patrol the ship/cabin environment, it must locate its own position and plan its path to the destination automatically.

2. Sensory Configuration of Mobile Security Robot

2.1. Ultrasonic Sensor

In the motion planning system, obstacle detection is a key issue. The mobile robot uses the ultrasonic sensors shown in Figure 3 to detect obstacles in the environment.

Eight Polaroid 6500 ultrasonic range sensors are distributed around the mobile robot for near-obstacle detection and are controlled by a microprocessor. The beam width of each sensor is nearly 90 degrees, and the range to an object is determined by the time of flight of an acoustic signal generated by the transducer and reflected by the object. The robot can use these sensors to obtain range information from 50 cm to 300 cm.
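
For illustration, the time-of-flight range computation can be sketched in Python as follows. This is a minimal sketch: the temperature-dependent speed-of-sound model and the function names are our own assumptions, not part of the Polaroid 6500 interface.

```python
def tof_range_cm(echo_time_s: float, temp_c: float = 20.0) -> float:
    """Estimate range (cm) from an ultrasonic echo's round-trip time.

    The speed of sound in air depends weakly on temperature:
    c ~ 331.3 + 0.606 * T (m/s), with T in degrees Celsius.
    The pulse travels to the object and back, so the one-way
    distance is half the total distance traveled.
    """
    speed_of_sound = 331.3 + 0.606 * temp_c      # m/s
    return (speed_of_sound * echo_time_s / 2.0) * 100.0  # m -> cm


def in_sensor_window(range_cm: float) -> bool:
    """Only trust readings inside the 50-300 cm sensing window."""
    return 50.0 <= range_cm <= 300.0
```

For example, a 10 ms round trip at 20 °C corresponds to roughly 1.7 m, well inside the sensing window.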

2.2. Laser Range Finder

The mobile security robot is equipped with a SICK LMS100 laser ranger [12] with a detection range from 50 cm to 20 m. The field of view is 270° with 0.5° resolution. Because of its wide field of view and high resolution, the laser ranger is used for primary environment sensing and self-localization mapping, as shown in Figure 4(a).

2.3. PTZ Vision Camera

The CCD camera we used, shown in Figure 4(b), is the Sony EVI-D70 communication color video camera. Its features are listed below: (i) 216x zoom ratio (18x optical, 12x digital); (ii) pan angle: −170° to +170° (max. pan speed: 100°/s); (iii) tilt angle: −30° to +90° (max. tilt speed: 90°/s); (iv) max. resolution: 768 × 494.

The CCD camera is connected to an image transformer that captures real-time PAL color images at 768 × 576 pixels. These images are processed to extract useful information, after which commands can be sent to control the camera's pan, tilt, and zoom. For example, when the mobile robot notifies the pilot of the location of a fire event, or when the robot detects something unusual in the cabin, the pilot/remote user can send commands to pan, tilt, and zoom the CCD camera through the ship's wireless network.

2.4. Fire Detection Sensor

In the fire detection system, multiple different sensors are needed to detect a fire accident, and their sensory data are fused to generate a reliable fire-detection signal. Specifically, we employ a smoke sensor, a flame sensor, and a temperature sensor. Even if one sensor fails, the data from the others can be used to isolate the faulty sensor signal and still obtain the right result through an adaptive sensory fusion method. In the first stage, we use the ionization smoke sensor shown in Figure 5(a): a small radioactive source ionizes the air between two charged plates, and smoke particles entering the chamber reduce the ion current, signaling the presence of smoke.

Figure 5(b) shows the R2868 flame sensor, a UV TRON ultraviolet detector that exploits the photoelectric effect of metal and the gas multiplication effect. It has a narrow spectral sensitivity of 185 to 260 nm and is completely insensitive to visible light. The temperature sensor, shown in Figure 5(c), is the DS1821, which can work as a standalone thermostat with an operating temperature range from −55°C to +125°C. The sensor signals are converted to binary digital levels by comparison circuits; an embedded system translates the analog signals to digital signals and sends them to the host, as shown in Figure 6.

3. Adaptive Sensory Fusion on Fire Detection

The adaptive data fusion method was proposed by Ansari [13–16]. The computation of its reinforcement updating rules is very complex and difficult to implement as hardware modules, so a modified updating rule based on the Taylor expansion [17] is used, as shown in Figure 7. S1, S2, and S3 represent the smoke sensor, flame sensor, and temperature sensor; when these sensors detect a fire event, their signals go high or low accordingly.

The updating rule of the fusion algorithm adjusts each sensor weight at every step, where w_new and w_old denote the weight values after and before each update, respectively.
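
To illustrate the weighted-fusion idea, the following Python sketch uses a simple reinforcement-style weight update: sensors that agree with the fused decision are reinforced, and sensors that disagree are penalized. This is not the exact Ansari/Taylor-expansion rule (whose constants are not reproduced here); the function names and the learning rate are our own assumptions.

```python
def fuse(readings, weights, threshold=0.5):
    """Weighted vote over binary sensor readings (1 = fire detected)."""
    total = sum(w * r for w, r in zip(weights, readings))
    return 1 if total / sum(weights) >= threshold else 0


def update_weights(readings, weights, decision, eta=0.1):
    """Reinforce sensors agreeing with the fused decision,
    penalize disagreeing ones, then renormalize the weights."""
    new = [w + eta if r == decision else max(w - eta, 1e-3)
           for w, r in zip(weights, readings)]
    total = sum(new)
    return [w / total for w in new]


# smoke, flame, and temperature sensors; the flame sensor is faulty (reads 0)
readings = [1, 0, 1]
weights = [1 / 3, 1 / 3, 1 / 3]
decision = fuse(readings, weights)                      # majority says fire
weights = update_weights(readings, weights, decision)   # faulty sensor demoted
```

After the update, the faulty flame sensor carries less weight than the agreeing sensors, so its future readings influence the fused result less.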

4. Autonomous Navigation of Mobile Security Robot

4.1. Local Obstacle Avoidance Using Tangent Bug

The tangent bug algorithm [18, 19] is an obstacle avoidance algorithm. The robot keeps moving toward the goal until it encounters an obstacle; it then follows the edge of the obstacle until it can head toward the goal without crossing the obstacle, and again moves toward the goal until the next obstacle is encountered. Figure 8 shows the representation, where O_i denotes a discontinuity point of the sensed obstacle boundary, R is the circular sensing scope, and x is the center of the robot.

The details of this algorithm are described in Algorithm 1.

Input:  A robot with a range sensor
Output:  A path to the goal, or a conclusion that no such path exists
while True do
repeat
  Continuously move toward the point n ∈ {O_i} which minimizes d(x, n) + d(n, Goal)
until
   the goal is encountered, or
   the direction that minimizes d(x, n) + d(n, Goal) begins to increase
    d(x, Goal), that is, the robot detects a "local minimum" of d(·, Goal)
Choose a boundary-following direction which continues in the same direction as the
  most recent motion-to-goal direction.
repeat
  Continuously update d_reached, d_followed, and {O_i}.
  Continuously move toward n ∈ {O_i} that is in the chosen boundary direction.
until
   the goal is reached, or
   the robot completes a cycle around the obstacle, in which case the goal cannot be achieved, or
   d_reached < d_followed
end while

Figures 9 and 10 show the mobile robot exhibiting different behaviors with different sensor ranges. Figure 9 demonstrates the edge-following behavior. In Figure 10, the sensing range is much longer, so the tangent bug can make a better choice for its own path planning.
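
The motion-to-goal step of the tangent bug, which selects the sensed discontinuity point O_i minimizing d(x, O_i) + d(O_i, Goal), can be sketched in Python as follows; the coordinates and candidate points below are illustrative assumptions.

```python
import math


def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def best_subgoal(x, goal, discontinuities):
    """Pick the obstacle-boundary discontinuity point O_i that
    minimizes the tangent-bug heuristic d(x, O_i) + d(O_i, goal)."""
    return min(discontinuities, key=lambda o: dist(x, o) + dist(o, goal))


robot = (0.0, 0.0)
goal = (10.0, 0.0)
# two discontinuity points on the visible edge of an obstacle
candidates = [(4.0, 3.0), (4.0, -1.0)]
subgoal = best_subgoal(robot, goal, candidates)  # the lower detour wins
```

The robot heads toward the chosen subgoal; when the heuristic stops decreasing, it switches to the boundary-following behavior described in Algorithm 1.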

4.2. Global Path Planning Using D*

The purpose of a path planning algorithm is to navigate a robot to given goal coordinates in a map-based environment. D* makes assumptions about the unknown part of the terrain and finds a shortest path from the current coordinates to the goal coordinates under these assumptions. Furthermore, the key feature of D* is that it supports incremental replanning (Figure 11). This is important if, while moving, the robot discovers that the world differs from its map: if a route turns out to have a higher cost than expected or is completely blocked, the robot can incrementally replan to find a better path.

The original D* [20], by Stentz, is an informed incremental search algorithm. Focused D* [21] is an informed incremental heuristic search algorithm, also by Stentz, that combines ideas of A* [22] and the original D*; it resulted from a further development of the original D*. D* Lite [23] is an incremental heuristic search algorithm by Koenig and Likhachev that builds on LPA* [23], an incremental heuristic search algorithm combining ideas of A* and DynamicSWSF-FP [24].
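
All members of the D* family build on the priority-queue search of A*. As background, a minimal A* on a 4-connected grid can be sketched as follows; this illustrates only the underlying search, not the incremental replanning machinery of D* itself, and the grid encoding and unit-cost model are assumptions.

```python
import heapq


def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1             # uniform step cost
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None
```

D* and D* Lite maintain essentially this search structure but repair only the affected portion of the solution when edge costs change, instead of replanning from scratch.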

4.3. Landmark Extraction from Laser Ranger

The data set of a 2D range sensor is represented by points p_i = (r_i, θ_i), sequentially acquired by the laser range sensor at a given angle and distance in polar coordinates. These points can be transformed to Cartesian coordinates as (x_i, y_i) = (r_i cos θ_i, r_i sin θ_i). For environment line feature extraction, the first step is to apply the Iterative End Point Fit (IEPF) [25] to the data set S. IEPF recursively splits S into two subsets S1 and S2 whenever the distance from some point to the virtual line segment joining the two endpoints exceeds a validation threshold. Through this iteration, the IEPF function returns all segment endpoints. Figure 12(a) shows the IEPF result when the laser scan is near a corner: because the vertex of the corner lies beyond the distance measurement, IEPF yields three line segments, and the shortest segment is obviously not a real feature candidate. In this work, a modified IEPF with a weighting threshold N_t = θ/φ is used, where θ is the angle between the segment endpoints and φ is the angular resolution of the laser ranger. When the number of points between the endpoints exceeds 80% of the threshold N_t, the line segment extracted by IEPF is valid; otherwise, the segment is discarded. Figure 12(b) shows the result with the weighting threshold.
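
The recursive split step of IEPF can be sketched in Python as follows; the distance threshold and the example scan are illustrative assumptions, and the weighting-threshold test is omitted for brevity.

```python
import math


def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(y2 - y1, x2 - x1)


def iepf(points, threshold):
    """Iterative End Point Fit: recursively split the scan at the point
    farthest from the endpoint chord until every point lies within
    `threshold` of its segment. Returns a list of (start, end) pairs."""
    a, b = points[0], points[-1]
    dists = [point_line_dist(p, a, b) for p in points[1:-1]]
    if not dists or max(dists) <= threshold:
        return [(a, b)]
    split = dists.index(max(dists)) + 1  # index of the farthest point
    return iepf(points[:split + 1], threshold) + iepf(points[split:], threshold)
```

For an L-shaped scan, the split lands on the corner point and the routine returns the two wall segments, matching the corner case discussed above for Figure 12(a).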

4.4. Robust Particle Filter Localization

The particle filter, also called CONDENSATION (conditional density propagation) [26], is a method based on Monte-Carlo sampling and Bayesian estimation. Unlike the Kalman filter [27], the particle filter is a nonlinear filter. Because of its random-sampling property, it is well suited to tracking and localization problems. Its advantage is robustness to noise, including background noise; however, it requires considerable computational time for prediction. For mobile robots, we distinguish two types of data: perceptual data such as laser range measurements, and odometry (control) data, which carry information about robot motion. Denoting the former by z and the latter by u, the data stream up to time t is d_{0…t} = {u_1, z_1, …, u_t, z_t}. Without loss of generality, we assume that observations and actions occur in an alternating sequence; the most recent perception in d_{0…t} is z_t, and the most recent control/odometry reading is u_t.

Bayes filters estimate the belief Bel(x_t) = p(x_t | d_{0…t}) recursively. The initial belief Bel(x_0) characterizes the initial knowledge about the system state. In the absence of such knowledge (e.g., in global localization), it is typically initialized as a uniform distribution over the state space.

To derive a recursive update equation, we transform p(x_t | d_{0…t}) by Bayes rule:

Bel(x_t) = p(x_t | d_{0…t}) = η p(z_t | x_t, d_{0…t−1}) p(x_t | d_{0…t−1}).

The Markov assumption states that measurements z_t are conditionally independent of past measurements and odometry readings given knowledge of the state x_t:

p(z_t | x_t, d_{0…t−1}) = p(z_t | x_t),

which conveniently simplifies the expression above to

Bel(x_t) = η p(z_t | x_t) p(x_t | d_{0…t−1}).

To obtain the final recursive form, we now integrate out the pose x_{t−1} at time t − 1, which yields

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t−1}, d_{0…t−1}) p(x_{t−1} | d_{0…t−1}) dx_{t−1}.

The Markov assumption also implies that, given knowledge of x_{t−1} and u_t, the state x_t is conditionally independent of measurements and odometry readings up to time t − 1:

p(x_t | x_{t−1}, d_{0…t−1}) = p(x_t | x_{t−1}, u_t).

Using the definition of the belief Bel, we obtain the recursive estimator known as the Bayes filter:

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t−1}, u_t) Bel(x_{t−1}) dx_{t−1},

where η is a normalizing constant. The particle filter procedure for mobile robot self-localization is shown in Algorithm 2, where X_t is the particle set representing the robot posture at time t, u_t is the robot motion command, and z_t is the robot measurement.

Input: X_{t−1}, u_t, z_t, map
Output: X_t
for m = 1 to M do
     x_t[m] ← sample_motion_model(u_t, x_{t−1}[m])
     w_t[m] ← measurement_model(z_t, x_t[m], map)
     add ⟨x_t[m], w_t[m]⟩ to X̄_t
endfor
for m = 1 to M do
      draw index i with probability proportional to w_t[i]
      add x_t[i] to X_t
endfor
return X_t
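
A minimal one-dimensional particle filter with the same sample/weight/resample structure as Algorithm 2 can be sketched in Python as follows; the Gaussian motion and measurement models and all parameters are simplified assumptions, not the robot's actual models.

```python
import math
import random

random.seed(0)  # deterministic run for illustration


def particle_filter_step(particles, u, z, motion_noise=0.1, meas_noise=0.5):
    """One MCL update: propagate each particle through the motion model,
    weight it by the measurement likelihood, then resample."""
    # 1. sample_motion_model: move each particle by u plus Gaussian noise
    moved = [x + u + random.gauss(0, motion_noise) for x in particles]
    # 2. measurement_model: Gaussian likelihood of the range reading z
    weights = [math.exp(-((z - x) ** 2) / (2 * meas_noise ** 2)) for x in moved]
    # 3. resample particles with probability proportional to their weights
    return random.choices(moved, weights=weights, k=len(particles))


# a stationary robot (u = 0) repeatedly measuring a landmark at x = 3.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, u=0.0, z=3.0)
estimate = sum(particles) / len(particles)
```

Starting from a uniform prior over [0, 10] (as in global localization), the particle cloud collapses around the true position after a few updates, mirroring the behavior reported for Figure 19.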

5. Remote Telepresence Operation

The wirelessly teleoperated and autonomous mobile security robot is designed with both a teleoperated mode and an autonomous mode. To implement teleoperation, the pilot/operator must be able to sense the remote environment and send commands according to the transmitted image sequence or sensor information. A joystick provides a convenient interface for motion control, and the remote view is displayed to the operator. Using the remote-site images captured by the robot, the pilot can freely drive it to detect and monitor anything unusual in the cabin. Remote images are compressed using the H.263 codec, which provides a low-bitrate compressed video format with acceptable video quality; H.263 was originally developed by the ITU-T Video Coding Experts Group (VCEG) and is a member of the H.26x family of video coding standards in the domain of the ITU-T. Since time delay and packet loss are important issues, one of the greatest challenges in remote operation is coping with communication delay. The Real-time Transport Protocol (RTP) is a transport protocol that meets the requirements of voice and other real-time data.

The basic features of RTP are as follows: (1) sequence numbers, used to reorder packets; (2) timestamps, for synchronization: they tell the receiver the corresponding display time and provide the timing relationship between different media streams; (3) multiple source identifiers, supporting multiple media bit streams simultaneously.

The user interface for the remote teleoperated mode is shown in Figure 13. Several studies have shown that the RGB color model is sensitive to lighting and therefore unsuitable for color thresholding; from our experimental experience, the HSI color model yields a higher detection ratio.
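
The RGB-to-HSI conversion underlying such color thresholding can be sketched in Python as follows. This is a standard formulation of the conversion; the specific threshold values used in this work are not reproduced here.

```python
import math


def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in 0..1) to HSI.
    H is in degrees [0, 360); S and I are in [0, 1]."""
    i = (r + g + b) / 3.0                      # intensity
    if i == 0:
        return 0.0, 0.0, 0.0                   # pure black
    s = 1.0 - min(r, g, b) / i                 # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                # achromatic: hue undefined
    else:
        # clamp guards against floating-point drift outside [-1, 1]
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

Thresholding on H and S rather than raw RGB makes the detector far less sensitive to overall illumination changes, since brightness is isolated in the I channel.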

6. Experiments

6.1. Remote Telepresence Operation and Environment Sensing

The remote camera pan-tilt control and the laser-ranger environment sensing of the mobile security robot platform are shown in Figures 14 and 15. In the experiment scenario, the security robot moves through free space and detects a fire event, as shown in Figure 16. When a fire event starts, the security robot detects it and sends a warning message immediately. If the fire detection is confirmed, the robot sounds the alarm and sends a message to notify the pilot/remote security guard, so that the pilot can teleoperate the robot to respond as needed.

6.2. The Comparison of Mobile Robot Path Planning

Figure 17 shows the path planning result of the tangent bug. The start position is at (40, 10) and the goal is at (30, 30). For the same map space, Figure 18 shows the D* path planning result. Obviously, D* yields the shortest path without obstacle following.

6.3. Mobile Robot Self-Localization

For the particle filter localization experiment, we apply a particle filter estimator [28] configured with 1000 particles. Figure 19(a) shows the particles (green points) initially distributed uniformly over the map space, in which the identifiable landmark features are marked (black diamonds). By timestamp 20, all the particles are located near the true position, as shown in Figure 19(d).

Figure 20(a) shows the ground-truth (blue) and estimated (red) trajectories of the robot. Moreover, after timestamp 20, the localization deviation stays below 0.5 m.

6.4. Particle Filter Performance Comparison

In this experiment, the particle filter performance is compared for various particle counts, as shown in Figure 21. The ground-truth (blue) and estimated (red) trajectories are plotted over 1000 simulation steps. With fewer than 100 particles, as in Figures 21(a) and 21(b), the localization result consistently shows errors. With more than 400 particles, the localization results become more accurate, but the computational time grows nearly exponentially with the particle number, as in Figures 21(d)–21(f). One interesting property can be noticed: with more particles, the initial localization converges to the actual position faster. Thus, one may first increase the particle number in the initial localization step for fast convergence and then reduce it to save computational time.

Figure 22 shows the localization results from different range sensor uncertainties by using the same particle number (particle no. = 500). In Figure 22(a), the range sensor uncertainty is set as  m (distance) and (angle). In Figure 22(b), the range sensor uncertainty is set as  m (distance) and . Obviously, with higher range sensor resolution, the particle filter localization result will be closer to the actual position.

7. Conclusions and Future Works

Past data show that most shipping accidents occur because of improper responses from crew members, causing irreversible damage and/or injuries. To avoid further damage and/or injuries, a remote mobile security system can be installed to mitigate these problems. In this paper, we propose a wirelessly teleoperated mobile security robot based on a multisensor system to monitor the ship/cabin environment. With this robot, the pilot and the crew members in charge of monitoring can stay away from the scene while still feeling present at the site, monitoring and responding to any potential safety problem. The robot can also serve as a supplementary device for crew members who are too busy or too tired to respond properly to a crisis, making a single person on cabin duty a practical option. When the robot detects something unusual in the cabin, it can also be teleoperated to respond as needed. As a result, when carefully designed, a cabin without any crew member on duty becomes possible. With this capability, the robot does not affect any current safety procedure; on the contrary, it serves as a supplementary device for a safer cabin environment.

For future research, the robot can be equipped with toxic gas detectors to detect any leakage problems.