The Scientific World Journal
Volume 2014, Article ID 864180, 6 pages
http://dx.doi.org/10.1155/2014/864180
Research Article

Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor

1School of Computer and Information, Hefei University of Technology, Hefei 230009, China
2College of Electronic and Information Engineering, Ningbo University of Technology, Ningbo 315016, China

Received 24 June 2014; Accepted 10 July 2014; Published 23 July 2014

Academic Editor: Yunqiang Yin

Copyright © 2014 Baofu Fang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Multirobot task allocation is an active topic in robotics research. A new emotional model is applied to self-interested robots, providing a way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is introduced into the self-interested robot and is updated according to emotional decay and external stimuli. On this basis, a multirobot pursuit task allocation algorithm is proposed. Combined with a two-step auction that recruits team leaders and team cooperators, pursuit teams are set up, and a pursuit strategy is then used to complete the task. To verify the effectiveness of the algorithm, comparative experiments were conducted against an instantaneous greedy optimal auction algorithm; the results show that both the total pursuit time and the total team gain are improved by the proposed algorithm.

1. Introduction

In recent years, the multirobot task allocation problem has attracted widespread attention and arises in robot soccer, robot rescue, intelligent warehouses, and other applications [1–3]. To solve it, Elango develops a simulation model involving task priority and robot utilization, referred to as the balanced multirobot prioritized task allocation (BMRPTA) problem [4]. The model seeks a balanced exploration path for each robot such that the average waiting time and total completion time of all tasks are minimized. This allocation method assumes that robots are completely rational and therefore does not consider the benefit of the individual robot. Auction [5, 6] is a fast and effective negotiation method; applied to multirobot task allocation, it lets robots maximize the overall benefit while also considering their own. Eric and Ofear [7] depart from traditional auction methods that pursue an optimal allocation strategy and instead study how to use the robots in each team efficiently, how to prevent robots from interfering with each other during task execution, and how to execute the allocated tasks efficiently based on auctions. Nanjanath and Gini [8] pay more attention to the impact of robot failures and environmental uncertainty in auctions, aiming at a compromise among computational complexity, quality of allocations, and ability to adapt: if a robot finds an unexpected obstacle, experiences any other delay, loses communication, or is otherwise disabled, the rest of the team continues to operate.
The literature above takes the self-interest of robots into account, but it does not examine self-interested robots in depth: their abilities, their individual willingness to cooperate in different situations, and how that willingness changes while tasks are being executed.

Psychology shows that emotion plays an essential role in work: negative emotions may make people's actions slow and insensitive, while positive emotions are conducive to work [9]. Delgado-Mata and Ibanez-Martinez [10] add an emotional mechanism to robots and use emotion to reflect their self-interested features. Robots with negative emotions, such as sadness and fear, tend to keep the status quo and do not participate in the task, while robots with positive emotions choose to participate.

To measure the individual cooperative willingness of self-interested robots in the multirobot task allocation problem, a multirobot pursuit task allocation algorithm based on an emotional cooperation factor is proposed. We build an emotional cooperation factor for self-interested robots, use it to measure their individual willingness to cooperate, and determine whether a robot is suited to participate in the pursuit task. The factor is updated according to emotional decay and external stimuli. Combined with a two-step auction that recruits team leaders and team cooperators, pursuit teams are set up, and a pursuit strategy is then used to complete the task.

2. Emotional Cooperation Factor

2.1. Definition

To describe the intensity of a robot's willingness to participate in the pursuit, this paper extends an emotion model constructed in our earlier work [11]. First, the emotional cooperation factor c, representing the robot's emotional willingness to cooperate, can be defined as

c = ω · (T I),

where the basic emotion status is described by a set E = {e1, e2, e3, e4} corresponding to happiness, anger, sadness, and joy; I = (I1, I2, I3, I4) is the emotional intensity vector, with Ii the emotional intensity of basic emotion ei; T is the transform matrix from the basic emotions to the PAD model, where pleasure-arousal-dominance (PAD), a three-dimensional model of emotion, is used following Chen and Long [12]; the value of T is decided by the type of the basic emotion status; and ω = (ωp, ωa, ωd) holds the weights of the three PAD dimensions, with ωp + ωa + ωd = 1.

Definition 1. ζ is a justice factor for robot cooperation, which is calculated to judge whether the robot is willing to attend pursuit task allocation:

ζ = 1 if c ≥ c0, and ζ = 0 otherwise,

where c0 is a justice parameter (threshold) of the emotional cooperation factor. If the value of the emotional cooperation factor is above c0, the robot's emotion is pleased and stable, whereas if it is below c0, the robot is variable and unstable.
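As a minimal sketch of the two definitions above, the following Python computes a cooperation factor by mapping basic-emotion intensities into PAD space and weighting the three dimensions, then thresholds it. The function names and all numerical values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical stand-ins for the paper's symbols: I (intensity vector),
# T (basic-emotion -> PAD transform matrix), w (PAD dimension weights).
EMOTIONS = ["happiness", "anger", "sadness", "joy"]

def cooperation_factor(I, T, w):
    """c = w . (T @ I): project the four basic-emotion intensities into
    PAD space, then weight pleasure, arousal, and dominance."""
    pad = T @ I            # 3-vector in PAD space
    return float(w @ pad)

def justice_factor(c, c0):
    """Justice factor: 1 if the robot is willing to attend allocation
    (c >= threshold c0), else 0."""
    return 1 if c >= c0 else 0

# Illustrative numbers only.
I = np.array([0.6, 0.1, 0.1, 0.7])                    # emotion intensities
T = np.random.default_rng(0).uniform(-1, 1, (3, 4))   # assumed transform
w = np.array([0.3, 0.2, 0.5])                         # pleasure, arousal, dominance
willing = justice_factor(cooperation_factor(I, T, w), c0=0.5)
```

In a real run, T would be the fitted matrix from [11] and w the ordered weights discussed in Section 4.1.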

2.2. Several Impacts on Emotional Cooperation Factor
2.2.1. Emotional Decay

Emotional intensity decreases as time passes, which directly changes the emotional intensity vector I and in turn influences the value of the emotional cooperation factor. Based on the third law of emotional intensity in psychophysics, emotional decay curves follow an exponential function. To simplify the decay process, we only consider the impact of the previous moment's decay on the emotion of the current moment. The decay function can therefore be expressed as

I(t) = λ I(t − Δt),

where I(t − Δt) is the emotional intensity vector at the previous check, Δt is the checking period for emotional status, and λ ∈ (0, 1) is the emotional decay factor.

2.2.2. Emotional Stimuli

External stimulation is another important factor affecting emotional status. Pursuing different evaders yields different rewards: the higher the reward, the greater the stimulation intensity produced. During the pursuit, if a robot is in a depressed mood, or emotional decay leaves it unwilling to complete the task, different stimulation intensities must be imposed on different types of robots to ensure that each robot completes its pursuit.

Using a stimulus vector s to represent a real-world stimulation, a stimulus of a given kind and intensity can be written relative to the maximum emotional intensity Imax. Simulating robot emotions and personality, Wang and Teng [13] define a stimulation matrix M; assuming stimulation is instantaneous, without considering time, the emotional intensity after a stimulus can be expressed as

I′ = I + M s.

In summary, the value of the emotional cooperation factor is influenced by both emotional decay and emotional stimuli. The two effects can be combined into a single update of the emotional intensity vector I:

I(t) = f(λ I(t − Δt) + M s), applied when the stimulus intensity exceeds the activation threshold α,

where f is a limiting function [14] that confines the emotional intensity within its valid range; α is the activation threshold of a stimulus, and its value corresponds to the type of robot being stimulated, so that different types of stimuli produce different emotional changes.
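The combined update above can be sketched as follows. The clamp bound I_MAX = 1.0, the matrix M, and the threshold handling are assumptions for illustration; the paper's limiting function comes from [14].

```python
import numpy as np

I_MAX = 1.0   # assumed upper bound on emotional intensity

def limit(x, lo=0.0, hi=I_MAX):
    """Limiting function f: clamp intensities into [lo, hi]."""
    return np.clip(x, lo, hi)

def update_intensity(I_prev, lam, M=None, s=None, alpha=0.0):
    """One check period: exponential decay I <- lambda * I, then an
    optional stimulus M @ s applied only above the threshold alpha."""
    I = lam * I_prev
    if M is not None and s is not None and np.max(s) >= alpha:
        I = I + M @ s
    return limit(I)
```

Calling this once per checking period Δt reproduces both the pure-decay case (no stimulus) and the stimulated case in one place.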

3. Multirobot Pursuit Task Allocation Algorithm

The contract net protocol was first proposed by Smith [15] in 1980 and is a typical market method originating from the contractual mechanisms of business activity. In the pursuit task allocation problem, emotional robots choose appropriate evaders to pursue using the contract net protocol. This paper provides a multirobot pursuit task allocation algorithm based on the emotional cooperation factor to select appropriate pursuers and form optimized cooperation teams.

3.1. Basic Definition

Definition 2. Define T = {t1, t2, …, tm} as a set of evaders, where each evader tj is represented by four elements (pos_j, type_j, cap_j, R_j): pos_j indicates the position of the evader, represented by its coordinates; type_j is the evader's type, of which there may be several; and cap_j indicates the capability required to capture the evader. Since evaders have different types, catching them needs different types and numbers of robots; therefore, the key point in constructing a pursuit team is to decide the number and the types of the pursuers. R_j is the reward for completing the pursuit task; the pursuers gain different rewards for capturing different types of evaders.

Definition 3. Define the pursuer set P = {p1, p2, …, pn}. Each pursuer pi has five elements (pos_i, type_i, cap_i, I_i, c_i), where pos_i is the pursuer's position, given by its coordinates; type_i is the pursuer's type; cap_i indicates the pursuer's capability, which embodies the heterogeneity of the different kinds of pursuers; I_i is the pursuer's emotional intensity, a vector over the basic emotions; and c_i is the value of the emotional cooperation factor introduced above, which shows the pursuer's willingness to complete its task.

Definition 4. Use U_k to represent the gain of team k after capturing an evader. After evader tj is captured, the gain of team k is calculated from the cost of pursuit C_k and the reward of the task R_j:

U_k = R_j − C_k.

During the pursuit, the cost of the team mainly consists of the distance cost C_dist and the emotional cost C_emo:

C_k = C_dist + C_emo,

where C_dist is the sum of the distances between each robot of team k and the target evader tj. Supposing n_k is the number of pursuers in team k, the distance cost can be written as

C_dist = Σ_{i=1}^{n_k} d(p_i, t_j),

where d(p_i, t_j) is the distance from pursuer p_i to the evader. The emotional cost is carried by mood swings. In the allocation stage, we need to predict the number of pursuers who may run an emotional risk: first calculate the time a pursuer needs to move to the evader, then estimate its emotional status after that time and the value of its cooperation factor c; if c falls below the threshold c0, the pursuer may have an emotional risk, otherwise it does not. The emotional cost can then be represented as

C_emo = n_risk · c_risk,

where n_risk is the number of robots that may have an emotional risk and c_risk is the unit emotional-risk cost.
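Definition 4 can be sketched directly in code. The helper names and the flat per-robot risk cost are assumptions consistent with the definitions above, not the paper's implementation.

```python
import math

def distance_cost(pursuer_positions, evader_position):
    """Sum of Euclidean distances from each team member to the evader."""
    return sum(math.dist(p, evader_position) for p in pursuer_positions)

def emotional_cost(n_risky, unit_risk_cost):
    """n_risky: pursuers predicted to fall below the cooperation
    threshold before reaching the evader."""
    return n_risky * unit_risk_cost

def team_gain(reward, pursuer_positions, evader_position,
              n_risky, unit_risk_cost):
    """U_k = R_j - (C_dist + C_emo)."""
    return (reward
            - distance_cost(pursuer_positions, evader_position)
            - emotional_cost(n_risky, unit_risk_cost))
```

For example, a team at (0, 0) and (3, 4) chasing an evader at the origin with reward 10 and one at-risk robot (unit cost 2) has gain 10 − 5 − 2 = 3.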

3.2. Each Pursuit Team Leader Confirmed

In this paper, a hybrid architecture is used to assign the pursuit task: there is a central administrator and some distributed robots in the system. The functions of the central administrator include collecting evader information, publishing it, receiving bids, and awarding contracts. The first auction recruits a leader for each pursuit team, who acts like a practical project contractor. When a team does not have enough pursuers, the administrator takes charge of recruiting more. The algorithm for recruiting team leaders is as follows:
(a) The central administrator sends a recruiting message to the robots, tells them the number of leaders it needs, and publishes the detailed information of all evaders.
(b) Each pursuer receives the message, calculates its own justice factor ζ, and sends it to the central administrator.
(c) The central administrator counts the number of robots whose ζ qualifies and compares it with the number of leaders to recruit. If there are too few, the administrator sends a stimulation command to each robot to update its emotion; each pursuit robot accepts the command, recalculates its ζ, sends it back, and the procedure returns to step (b). Otherwise it continues.
(d) Each pursuer checks the evader information of each pursuit task, calculates its gain for each task, uses each gain as a bid value, and sends the bids to the central administrator.
(e) The central administrator views the tenders submitted by the pursuers and selects the most appropriate pursuer for each task based on the bid values, so that the team leaders have the shortest total distance to the evaders. This can be transformed into a classic assignment problem, for which the Hungarian algorithm [16] is the most appropriate.
(f) Contracts are awarded to the winning pursuit robots. Once a pursuer receives its contract, the contract becomes effective; the pursuer becomes the leader of its team, and the team is formally established.
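Step (e) reduces to a classic assignment problem. The paper solves it with the Hungarian algorithm; the dependency-free sketch below brute-forces the same optimum over permutations, which is fine for the small leader counts here (for real sizes, `scipy.optimize.linear_sum_assignment` implements the Hungarian-style solver). The cost-matrix layout is an assumption for illustration.

```python
from itertools import permutations

def assign_leaders(cost):
    """cost[i][j] = bid cost of pursuer i leading the team for evader j.
    Returns (minimal total cost, tuple of pursuer indices, one per evader).
    Exhaustive search: optimal but only practical for small inputs."""
    n_pursuers, n_evaders = len(cost), len(cost[0])
    best = None
    for perm in permutations(range(n_pursuers), n_evaders):
        total = sum(cost[p][j] for j, p in enumerate(perm))
        if best is None or total < best[0]:
            best = (total, perm)
    return best
```

With cost matrix [[1, 10], [10, 1], [5, 5]], pursuers 0 and 1 are chosen as leaders at total cost 2, matching what the Hungarian algorithm would return.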

According to Definition 2, a different capability is needed to capture each evader. If the leader is fully capable of catching the evader, it will do so alone; otherwise, a leader with insufficient capability needs cooperators and should recruit other pursuers.

3.3. Recruiting Cooperators

At first, each leader with insufficient capability sends its team's message to the other free pursuers and chooses appropriate members to join its team. The algorithm is listed below.
(a) Each leader checks whether its team's capability meets the requirement. If not, it publishes the team's information to recruit collaborators, including the evader's position, the capability still needed, the reward, and the leader's information.
(b) Each free pursuer receives the recruitment information, calculates its justice factor ζ, and sends it to the leaders with the corresponding recruitment information.
(c) If a pursuer's cooperation factor is above the threshold c0, which shows it is willing to cooperate with other robots, the pursuer computes its gain as a bid value and sends it to the leader.
(d) Each leader views the tenders received and selects the most appropriate cooperators based on the bid values, so that the pursuit teams have the biggest total gains. This can also be transformed into a classic assignment problem, solved by the Hungarian algorithm [16].
(e) Each leader counts the team's total capability after new cooperators join. If it reaches the total capability required to complete the task, the team can be established; if it does not meet the requirement, the leader sends an emotional stimulus to the pursuers and jumps back to step (a).
(f) When all teams have enough capability for their tasks, an allocation matrix is generated and contracts are sent from each leader to its cooperators. If some teams still need more cooperators while there are no free robots, they must wait until an eligible robot completes its current task.
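Steps (a)–(e) from a single leader's point of view can be sketched as a greedy loop: filter the free pursuers by willingness, then recruit in order of bid until the capability requirement is covered. The data layout (dicts with `capability`, `c`, `gain`) is a hypothetical simplification of Definitions 1 and 3.

```python
def recruit_cooperators(needed_capability, free_pursuers, c0):
    """free_pursuers: list of dicts with 'capability', 'c' (cooperation
    factor), and 'gain' (bid value). Returns the recruited list, or None
    if the willing pursuers cannot cover the needed capability."""
    # Step (c): only pursuers whose cooperation factor clears c0 bid.
    willing = [p for p in free_pursuers if p["c"] >= c0]
    # Step (d), simplified: take the best bids first.
    willing.sort(key=lambda p: p["gain"], reverse=True)
    team, total = [], 0.0
    for p in willing:
        if total >= needed_capability:     # step (e): capability met
            break
        team.append(p)
        total += p["capability"]
    return team if total >= needed_capability else None
```

Returning None corresponds to the branch where the leader must stimulate the pursuers (or wait for a free robot) and retry from step (a).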

After the contracts are received by cooperators, the pursuit team will be established.

3.4. Pursuit Strategy

To verify the effectiveness of the multirobot task allocation algorithm proposed in this paper, we combine it with a pursuit algorithm to complete the pursuit task. The pursuit algorithm is as follows.

Assume there is a virtual force field in the pursuing process: the force from the pursuers on the evader is a repulsion, whereas the force from the evader on the pursuers is an attraction. The pursuers move according to the attraction, and the evader runs according to the repulsion. We assume the magnitudes of attraction and repulsion depend on distance and define them as

F_att = η / d(p_i, t_j),    F_rep = η / d(t_j, p_i),

where η is a scale factor and d(p_i, t_j) is the distance from pursuer p_i to evader t_j, taken over each pursuer in the pursuit team and its own evader. Draw a unit circle centered at the evader's position and split its circumference into m parts; the evader's direction is chosen from the directions running from the center to the dividing points. Compute the total repulsion on the evader for each division and choose the direction with the least repulsion as its motion direction. In the same way, each pursuer chooses the direction with the maximum attraction as its motion direction.
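The evader's side of this strategy, together with the capture test of Definition 5 below, can be sketched as follows. The inverse-distance force and the parameters η (scale) and m (circle divisions) are illustrative assumptions consistent with the description above.

```python
import math

def evader_direction(evader, pursuers, eta=1.0, m=16):
    """Split the unit circle around the evader into m candidate
    directions; return the angle whose candidate point receives the
    least total repulsion (force ~ eta / distance) from the pursuers."""
    best_dir, best_force = None, float("inf")
    for k in range(m):
        theta = 2 * math.pi * k / m
        cand = (evader[0] + math.cos(theta), evader[1] + math.sin(theta))
        force = sum(eta / max(math.dist(cand, p), 1e-9) for p in pursuers)
        if force < best_force:
            best_force, best_dir = force, theta
    return best_dir

def captured(pursuers, evader, d_c=0.8):
    """Definition 5: captured once any teammate is within distance d_c
    (0.8 m in the experiments of Section 4)."""
    return any(math.dist(p, evader) <= d_c for p in pursuers)
```

With a single pursuer at the origin and the evader at (2, 0), the least-repulsion direction is straight away from the pursuer (angle 0), as expected.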

Definition 5. Use φ as the capturing-distance judgment factor, which predicts whether an evader is captured:

φ = 1 if d(p_i, t_j) ≤ d_c for some pursuer p_i in the pursuit team, and φ = 0 otherwise,

where d_c is the available distance for capture; when the distance between a pursuer and the evader of the same pursuit team falls below d_c, the evader is captured.

4. Experiments

4.1. Relevant Parameters Setting

In the experiments, we take three basic emotions, selecting dread, anger, and joy. Following [11], the transformation matrix from the basic emotions to the PAD emotion model is obtained; its numerical values are given there.

From [14], we find that "dominance" in the PAD model has the maximum impact on the emotional cooperation factor, "pleasure" follows, and "arousal" has the minimal impact; the three weights are therefore set accordingly, with ωd > ωp > ωa. With the passage of time, emotional intensity tends to zero, and the justice parameter c0 is selected to match. A pursuer is not willing to participate in the pursuit when c < c0; on the contrary (c ≥ c0), it is willing to finish the task.

Define m = 4 evaders and n = 12 pursuers, so 12 pursuers pursue 4 evaders, and generate the initial positions of evaders and pursuers in a square area. To determine the influence of the emotional decay factor, we randomly ran 1200 sets of experiments without emotional decay; the results are shown in Figure 1.

Figure 1: 1200 sets of experiments without emotional decay.

Figure 1 shows that total pursuit times between 20 and 30 seconds are the most frequent, so we randomly selected 100 sets of data from the 1200 initial sets obtained above and used them in the following experiments. In these experiments the emotional decay type of all robots was the same; that is, the decay factor λ was identical across robots. We set Δt = 0.1 s, so the emotional status of each pursuit robot is checked every 0.1 second. When λ is too small, emotional decay is too fast, which is not conducive to the experiment, so λ was varied between 0.5 and 1. Using the 100 groups of initial data, experiments were carried out under different values of λ to obtain the total pursuit time, and the average time to finish all pursuits was calculated for each λ. The result is shown in Figure 2.

Figure 2: The effect of on pursuing time.

Figure 2 shows that when λ is between 0.5 and 0.85, the average time is strongly affected by λ: as λ increases, the average time tends to increase, but with high volatility. When λ is between 0.85 and 0.93, the pursuit time decreases markedly. When λ is between 0.93 and 1, λ has little effect on the pursuit time. To ensure a stable experiment while still reflecting the impact of λ on pursuit time, we keep λ between 0.85 and 0.93.

In the pursuit strategy, the attraction and repulsion forces only serve to determine the motion directions of the evaders and pursuers; making their values larger or smaller does not change the chosen direction, so to simplify the experiment the scale factor is fixed. Evaders select the direction producing the smallest repulsive force. We take d_c = 0.8 m: when the distance between a pursuit robot and an evader is less than or equal to 0.8 m, the evader is considered captured.

4.2. Experiment and Data Analysis

To verify the effectiveness of the algorithm in this paper, we made a comparison with the instantaneous greedy optimal auction algorithm [17]. Two hundred experiments were run under the same scenes. As Figures 3 and 4 show, compared with the instantaneous greedy optimal auction algorithm in the same scenarios, the total pursuit time was shorter in 86.5% of the runs and the total gain was higher in 77.5%. When the instantaneous greedy optimal auction algorithm establishes the pursuit teams, all robots are self-interested, so each robot joins the team that gives itself the greatest gain; this may leave some teams with excess capacity and others short of capacity, which is not conducive to pursuit task allocation. In our algorithm, all pursuers have emotions and consider their personal gain while taking the global gain of the pursuit team into account. In the allocation stage, when pursuers meet a conflict between the global gain and their own gain, they sacrifice their own gain to obtain a greater global gain. Our algorithm can therefore optimize the total pursuit time and the global gain. Analyzing the experiments with poor total pursuit time or global gain, we found that in those runs the initial positions of the robots were very favorable to the instantaneous greedy optimal auction algorithm.

Figure 3: The comparison chart of total cost time of pursuit.
Figure 4: The comparison chart of total gain for pursuing teams.

5. Conclusions

In this paper we study self-interested robots in depth: their abilities, their individual willingness to cooperate in different situations, and how that willingness changes while tasks are executed. We proposed a multirobot pursuit task allocation algorithm based on an emotional cooperation factor. To measure the individual cooperative willingness of self-interested robots in the multirobot task allocation problem, the emotional cooperation factor, a value updated according to emotional decay and external stimuli during the pursuit, is introduced into the emotional robot. A two-step auction then selects several subteams with their leaders and cooperators, and a pursuit strategy is chosen to complete the task. The experiments show that our algorithm optimizes the total pursuit time and the total team gain, demonstrating its effectiveness.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the Natural Science Foundation of China under Grants 61175051, 61175033, National High Technology Research and Development Program of China (863 Program) under Grant 2012AA011005, Natural Science Foundation of Anhui Province (1308085QF108), Doctorate Personnel Special Fund of Hefei University (JZ2014HGBZ0014), Natural Science Foundation of China under Grant (61203360), and Zhejiang Provincial Natural Science Foundation of China under Grant (LQ12F03001).

References

  1. T. Otsuka, K. Murata, K. Nakadai et al., “Incremental polyphonic audio to score alignment using beat tracking for singer robots,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS '09), pp. 2289–2296, October 2009.
  2. Z. W. Liang and J. Shen, “Task allocation algorithm based on auction in RoboCup rescue robot simulation,” ROBOT, vol. 33, no. 4, pp. 410–416, 2013 (Chinese).
  3. Y. Guo, Auction-Based Multi-Agent Task Allocation in Smart Logistic Center, Harbin Institute of Technology, 2010 (Chinese).
  4. M. Elango, S. Nachiappan, and M. K. Tiwari, “Balancing task allocation in multi-robot systems using K-means clustering and auction based mechanisms,” Expert Systems with Applications, vol. 38, no. 6, pp. 6486–6491, 2011.
  5. Z. L. Ling and S. Katia, “Competitive analysis of repeated greedy auction algorithm for online multi-robot task assignment,” in Proceedings of the International Conference on Robotics and Automation (ICRA '12), pp. 4792–4799, IEEE, Saint Paul, Minn, USA, 2012.
  6. G. Wu, P. Ren, and Q. Du, “Dynamic spectrum auction with time optimization in cognitive radio networks,” in Proceedings of the 76th IEEE Vehicular Technology Conference (VTC '12), pp. 1–5, September 2012.
  7. S. Eric and B. Ofear, “An empirical evaluation of auction-based task allocation in multi-robot teams,” in Proceedings of the 13th International Conference on Autonomous Agents and Multi-Agent Systems, pp. 1443–1444, International Foundation for Autonomous Agents and Multiagent Systems, 2014.
  8. M. Nanjanath and M. Gini, “Repeated auctions for robust task execution by a robot team,” Robotics and Autonomous Systems, vol. 58, no. 7, pp. 900–909, 2010.
  9. H. L. Cheng and W. Z. Chen, “The impacts of emotion regulation on job burnout,” Advances in Psychological Science, vol. 18, no. 6, pp. 971–979, 2010 (Chinese).
  10. C. Delgado-Mata and J. Ibanez-Martinez, “An emotion affected Action Selection Mechanism for multiple virtual agents,” in Proceedings of the 16th International Conference on Artificial Reality and Telexistence—Workshops (ICAT '06), pp. 57–63, Hangzhou, China, November 2006.
  11. H. Wang, Q. Zhang, B. Fang, and S. Fang, “Maximum similarity matching emotion model based on mapping between state space and probability space,” Pattern Recognition and Artificial Intelligence, vol. 26, no. 6, pp. 552–560, 2013 (Chinese).
  12. Y. X. Chen and R. T. Long, “Trainable emotional speech synthesis based on PAD,” Pattern Recognition and Artificial Intelligence, vol. 26, no. 11, pp. 1019–1025, 2013 (Chinese).
  13. G. J. Wang and S. D. Teng, “Simulating emotion and personality for intelligent agent,” in Proceedings of the International Conference on Intelligent Computation Technology and Automation, pp. 304–308, Changsha, China, 2010, (Chinese).
  14. G. Niu, D. Hu, and Q. Gao, “Robot emotion generation and decision-making model construction based on personality,” Robot, vol. 33, no. 6, pp. 706–718, 2011 (Chinese).
  15. R. G. Smith, “The contract-net protocol: high-level communication and control in a distributed problem solver,” IEEE Transactions on Computers, vol. 19, no. 12, pp. 1104–1113, 1980.
  16. Y. Liu and M. A. Tong, “An application of Hungarian algorithm to the multi-target assignment,” Fire Control & Command Control, vol. 27, no. 4, pp. 34–37, 2002 (Chinese).
  17. Q. Y. Zhang, Research on Task Allocation of Multi Emotional Robot, Hefei University of Technology, Anhui, China, 2013 (Chinese).