A team of autonomous robots working in parallel has great potential to accomplish an assigned task faster than a single robot. To realize this potential, two fundamental challenges need to be addressed. The first is assigning actions to the robots, coordinated across the team, so that the task can be completed more effectively. The second is mapping the environment for robot navigation and path planning in unknown environments. For example, dynamic task assignment in a multirobot system may be achieved with a self-organizing-map-based approach, yielding real-time collision-free robot paths from sensor measurements of the environment. Therefore, the development of an algorithm that incorporates task assignment, path planning, and tracking control of a multirobot system is an indispensable mission for autonomous robots.
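As an illustrative sketch of what such a self-organizing-map-based assignment might look like (all function names, parameters, and update rules below are assumptions for exposition, not taken from any particular paper in this issue), robots can be treated as SOM neurons that compete for task locations and are pulled toward the tasks they win:

    import numpy as np

    # Minimal sketch of self-organizing-map (SOM) style task assignment.
    # Names and parameters are illustrative assumptions only.
    def som_assign(robot_positions, task_positions, workload_weight=0.5,
                   learning_rate=0.3, neighborhood=1.0):
        """Assign each task to a robot and nudge robots toward their tasks.

        robot_positions: (R, 2) array of robot coordinates (updated in place).
        task_positions:  (T, 2) array of task coordinates.
        Returns a list of (task_index, winning_robot_index) pairs.
        """
        workloads = np.zeros(len(robot_positions))
        assignments = []
        for t, task in enumerate(task_positions):
            # Winner selection: distance penalized by accumulated workload,
            # so busy robots are less likely to win additional tasks.
            dists = np.linalg.norm(robot_positions - task, axis=1)
            winner = int(np.argmin(dists + workload_weight * workloads))
            assignments.append((t, winner))
            workloads[winner] += dists[winner]

            # Neighborhood update: the winner (and, with a weaker pull,
            # nearby robots) moves toward the task location.
            for r in range(len(robot_positions)):
                lateral = np.exp(-np.linalg.norm(robot_positions[r]
                                 - robot_positions[winner]) ** 2
                                 / (2 * neighborhood ** 2))
                robot_positions[r] += learning_rate * lateral * (task - robot_positions[r])
        return assignments

    # Example: three robots competing for four task locations.
    robots = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
    tasks = np.array([[1.0, 1.0], [4.0, 1.0], [1.0, 4.0], [4.0, 4.0]])
    print(som_assign(robots, tasks))

The workload penalty in the winner-selection step is one simple way to keep the assignment balanced across the team; without it, the nearest robot would win every task.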

This special issue contains original research articles that address problems in both conventional and emerging human-robot interaction fields.

The paper “A Swarm Robotic Exploration Strategy Based on an Improved Random Walk Method” by B. Pang et al. presents an improved random-walk method in which each robot adjusts its step size adaptively to reduce the number of repeated searches. An environment can be searched far more efficiently if an appropriate search strategy is used. Because of the limited abilities of individual swarm robots, namely local sensing and low processing power, random searching is the main search strategy used in swarm robotics. The most commonly used random-walk methods are Brownian motion and Lévy flight, both of which mimic the self-organized behavior of social insects. However, both methods are somewhat limited when applied to swarm robotics, because repeated searching of the same areas makes exploration highly inefficient. Therefore, by analyzing the characteristics of swarm robotic exploration, this paper proposes an improved random-walk method based on the density of robots in the environment: each robot adjusts its step size adaptively to reach other areas, and the proposed method distributes the robots uniformly in the environment to reduce the number of repeated searches.
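A minimal sketch of this density-based idea (with assumed parameters and a simplified update rule, not the authors' exact method) scales each robot's step length by the fraction of neighbors it senses nearby:

    import numpy as np

    def adaptive_step(position, neighbor_positions, sensing_radius=2.0,
                      base_step=0.5, max_step=3.0, rng=np.random.default_rng()):
        """Return the next position of one robot.

        The step length grows with the local robot density, so robots in
        crowded regions take longer jumps toward unexplored areas, while
        isolated robots keep short, Brownian-like steps.
        """
        # Local density: fraction of neighbors within the sensing radius.
        if len(neighbor_positions):
            dists = np.linalg.norm(neighbor_positions - position, axis=1)
            density = np.mean(dists < sensing_radius)
        else:
            density = 0.0

        # Step size interpolates between the base step and the maximum step.
        step = base_step + density * (max_step - base_step)
        heading = rng.uniform(0.0, 2.0 * np.pi)   # uniformly random heading
        return position + step * np.array([np.cos(heading), np.sin(heading)])

    # A robot surrounded by two close neighbors takes a larger step.
    p = np.array([0.0, 0.0])
    neighbors = np.array([[0.5, 0.0], [0.0, 0.5]])
    print(adaptive_step(p, neighbors))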

The paper “Toward Dynamic Monitoring and Suppressing Uncertainty in Wildfire by Multiple Unmanned Air Vehicle System” by S. Rabinovich et al. presents an efficient response and persistent monitoring method for wildfires. A crucial aspect is the ability to search for the boundaries of the wildfire by exploring a wide area. However, even as wildfires are increasing today, the number of available monitoring systems that can provide support is decreasing, creating an operational gap and slow responses in such urgent situations. The objective of this work is to estimate a propagating boundary and create an autonomous system that works in real time. It proposes a coordination strategy with a new methodology for estimating the periphery of a propagating phenomenon from limited observations. The complete system design, tested in a high-fidelity simulation, demonstrates that steering the vehicles towards the points of highest perpendicular uncertainty generates effective predictions. The results indicate that the new coordination scheme has a large beneficial impact on uncertainty suppression. This study thus suggests that an efficient solution for suppressing uncertainty in monitoring a wildfire is to use a fleet of low-cost unmanned aerial vehicles that can be deployed quickly.
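The following sketch conveys the general flavor of such uncertainty-driven coordination, with a discretized boundary whose per-segment uncertainty grows until a vehicle revisits it; the segment count, growth rate, and greedy assignment below are illustrative assumptions rather than the authors' design:

    import numpy as np

    def assign_vehicles(segment_points, uncertainty, vehicle_positions):
        """Greedily send each vehicle to the currently most uncertain segment."""
        targets = []
        remaining = uncertainty.copy()
        for _ in vehicle_positions:
            idx = int(np.argmax(remaining))
            targets.append(segment_points[idx])
            remaining[idx] = -np.inf   # do not assign two vehicles to one segment
        return np.array(targets)

    def update_uncertainty(uncertainty, visited_indices, growth=0.1):
        """Uncertainty grows everywhere and resets where a vehicle observed."""
        uncertainty = uncertainty + growth
        uncertainty[visited_indices] = 0.0
        return uncertainty

    # Example: an 8-segment boundary estimate monitored by two vehicles.
    boundary = np.array([[np.cos(a), np.sin(a)]
                         for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
    unc = np.ones(8)
    vehicles = np.array([[0.0, 0.0], [2.0, 0.0]])
    targets = assign_vehicles(boundary, unc, vehicles)
    unc = update_uncertainty(unc, visited_indices=[0, 4])
    print(targets, unc)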

The paper “Optimal Skipping Rates: Training Agents with Fine-Grained Control Using Deep Reinforcement Learning” by A. Khan et al. presents a study of how the number of skipped frames influences the learning process, employing convolutional deep neural networks (CDNNs) with Q-learning and experience replay in a game-based learning environment known as VizDoom. Game AI is one of the emerging and most active research areas in artificial intelligence, because computer games are excellent test-beds for evaluating theoretical ideas in AI before applying them in the real world. VizDoom is an artificial intelligence research platform based on Doom, used for visual deep reinforcement learning in 3D game environments such as first-person shooters (FPS). During learning, the speed of the learning agent depends greatly on the number of frames the agent is permitted to skip. The agent is trained and tested on Doom’s basic scenarios, and the results are found to be 10% better than existing state-of-the-art work on Doom-based agents.
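The frame-skipping mechanism itself can be sketched independently of the network architecture; the stand-in environment and random policy below are assumptions for illustration and are not the VizDoom API or the authors' agent:

    import random

    class DummyEnv:
        """Stand-in environment with a gym-like reset()/step() interface."""
        def reset(self):
            self.t = 0
            return 0.0                                    # initial observation
        def step(self, action):
            self.t += 1
            done = self.t >= 100
            return float(self.t), random.random(), done   # obs, reward, done

    def run_episode(env, choose_action, store_transition, skip=4):
        """Play one episode, repeating each chosen action for `skip` frames."""
        obs = env.reset()
        done = False
        while not done:
            action = choose_action(obs)
            total_reward = 0.0
            for _ in range(skip):            # repeat the action on skipped frames
                next_obs, reward, done = env.step(action)
                total_reward += reward
                if done:
                    break
            store_transition(obs, action, total_reward, next_obs, done)
            obs = next_obs

    # Random policy; one replay-buffer entry per decision step (not per frame).
    buffer = []
    run_episode(DummyEnv(),
                choose_action=lambda o: random.choice([0, 1, 2]),
                store_transition=lambda *tr: buffer.append(tr))
    print(len(buffer), "transitions stored")

Larger skip values mean fewer, coarser decisions per episode, which speeds up training but reduces the agent's control granularity; the trade-off between these two effects is the subject of the paper.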

The paper “Human-Machine Interface for a Smart Wheelchair” by A. Hartman and V. K. Nandikolla presents the integration of hardware and software with sensor technology and computer processing to develop a next-generation intelligent wheelchair. The focus is a computer-cluster design for testing high-performance computing for smart wheelchair operation and human interaction. The LabVIEW cluster is developed for real-time autonomous path planning and sensor data processing. Four small-form-factor computers are connected over a Gigabit Ethernet local area network to form the computer cluster. Autonomous programs are distributed across the cluster for increased task parallelism and improved processing time.
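The wheelchair's cluster is implemented in LabVIEW, but the underlying task-parallel pattern can be illustrated with an ordinary process pool; the sketch below uses hypothetical function names and fans independent sensor-processing jobs out to four workers, loosely analogous to the four cluster nodes:

    from multiprocessing import Pool

    def process_scan(scan):
        """Placeholder for a per-sensor workload (e.g., filtering one lidar scan)."""
        return sum(scan) / len(scan)

    if __name__ == "__main__":
        # Eight independent scans processed in parallel by four worker processes.
        scans = [[float(i + j) for j in range(360)] for i in range(8)]
        with Pool(processes=4) as pool:
            results = pool.map(process_scan, scans)
        print(results)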

The paper “Particle Filter and Finite Impulse Response Filter Fusion and Hector SLAM to Improve the Performance of Robot Positioning” by A. Bassiri et al. presents a hybrid (PF/FIR) algorithm for robot positioning in harsh environments characterized by higher noise and sudden changes. The hybrid filter integrates a particle filter (PF) algorithm and a finite impulse response (FIR) filter algorithm in an indoor positioning system for robot navigation, ensuring the continuity of the positioning solution. Additionally, the Hector Simultaneous Localisation and Mapping (Hector SLAM) algorithm is used to map the environment and improve the accuracy of the navigation.
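A minimal one-dimensional sketch of this kind of PF/FIR switching (with assumed noise models, thresholds, and a simple averaging FIR fallback, rather than the authors' formulation) is shown below; the particle filter provides the estimate in normal operation, and a finite-window estimate takes over when the particle set degenerates:

    import numpy as np

    def pf_fir_fusion(measurements, n_particles=200, window=5,
                      process_noise=0.5, meas_noise=1.0, ess_threshold=0.5,
                      rng=np.random.default_rng(0)):
        particles = rng.normal(measurements[0], meas_noise, n_particles)
        estimates = []
        for k, z in enumerate(measurements):
            # Particle filter predict and update steps.
            particles += rng.normal(0.0, process_noise, n_particles)
            weights = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
            weights /= weights.sum()
            ess = 1.0 / np.sum(weights ** 2)          # effective sample size

            if ess / n_particles >= ess_threshold:
                est = float(np.sum(weights * particles))
                # Resampling keeps the particle set healthy.
                idx = rng.choice(n_particles, n_particles, p=weights)
                particles = particles[idx]
            else:
                # FIR fallback: estimate from a finite window of recent measurements.
                start = max(0, k - window + 1)
                est = float(np.mean(measurements[start:k + 1]))
                particles = rng.normal(est, meas_noise, n_particles)  # re-seed the PF
            estimates.append(est)
        return estimates

    # Noisy measurements of a robot moving at constant speed, with one outlier burst.
    truth = np.arange(0, 30, 1.0)
    z = truth + np.random.default_rng(1).normal(0, 1.0, truth.size)
    z[15:18] += 10.0
    print(np.round(pf_fir_fusion(z), 2)[:10])

The FIR fallback is attractive in harsh conditions because it uses only a finite window of recent measurements and therefore recovers quickly after sudden changes, whereas a degenerated particle set can take many steps to re-converge.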

Conflicts of Interest

The editors declare that they have no conflicts of interest regarding the publication of this special issue.

Hsiung-Cheng Lin
Ling-Ling Li
Vincent C. S. Lee