Abstract

This paper investigates the flocking and coordinative control problems of multiple mobile agents under collision-avoidance rules. We propose a set of control laws that use hysteresis in adding new links and a new potential function to guarantee that fragmentation of the network is avoided. Under these laws, all agents approach a common velocity vector, the interagent distances asymptotically converge to fixed values, and collisions between agents are avoided throughout the motion. Furthermore, we extend the flocking algorithm to the case of a group with a virtual leader agent: the laws make all agents asymptotically track the virtual leader while avoiding collisions during the motion evolution. Finally, numerical simulations are presented to illustrate the theoretical results.

1. Introduction

In recent years, the problem of coordinated motion of multiple mobile agents, especially flocking control [13], has attracted a lot of attention among researchers. Researchers have constructed distributed models that simulate collective behaviors such as flocks, herds, and schools [3–21]. The wide attention to this issue has led to applications in biology, social behavior, statistical physics, control engineering, and other fields [2–11]. Most previous work on flocking algorithms is based on the assumption that the network topology is always connected [12, 13]. In fact, however, it is difficult to ensure that the network remains connected at all times, even if the initial network is connected [14]. It is therefore of great practical significance to find an algorithm that guarantees that the connectivity requirement of the network is always satisfied. Connectivity-preserving flocking algorithms with distributed control were studied in [14–17]. Under the condition that the initial network is connected, the authors apply appropriate weights to the network edges so that the network maintains connectivity. In [2, 16], a method of measuring the local network was applied in order to maintain connectivity, and in [17] a potential function method was considered. However, these methods were designed for single-integrator dynamics. In [13], the authors first proposed a distributed connectivity-preserving algorithm for second-order network systems, under which collisions between agents are avoided during the motion evolution.

In this work, we consider how to maintain network connectivity by constructing an artificial potential function, and we investigate a flocking algorithm with collision-avoidance rules. The algorithm uses hysteresis in adding new links [14] and a new potential function method to maintain network connectivity. The major difference from the algorithms considered in [13, 15] is the choice of the new potential function. Furthermore, the situation of multiagent systems with a virtual leader is investigated. We conclude that, provided the initial network is connected, the laws make all agents approach a common velocity vector, the interagent distances asymptotically converge to fixed values, and collisions between agents are avoided throughout the motion.

The rest of this paper is organized as follows. Section 2 defines the multiagent flocking problem and presents some background on graphs, the Laplacian, and the Reynolds model. Section 3 proposes a new flocking algorithm and analyzes its results. Numerical simulation examples are presented in Section 4, and conclusions are drawn in Section 5.

2. Problem Formulation

2.1. Background

In order to improve the understanding of the flocking control problem and to facilitate the presentation in the following sections, some basic notions, including the undirected graph $G(t)$, the Laplacian matrix, and the Reynolds model, are introduced below.

2.1.1. Algebraic Graph Theory

Assume that each agent has the same sensing radius $r$, and let $\epsilon$ be a given constant satisfying $0 < \epsilon < r$. Consider each agent as a node and connect a node by an undirected edge to every other node that it can sense. Thus, the structure consisting of the set of nodes $\mathcal{V}$ and the time-varying set of edges $\mathcal{E}(t)$ can be expressed as an undirected graph $G(t) = (\mathcal{V}, \mathcal{E}(t))$ at time $t$, whose detailed meaning is as follows [15, 22]:
(i) the initial edge set satisfies $\mathcal{E}(0) = \{(i, j) : \|q_i(0) - q_j(0)\| < r - \epsilon,\ i \neq j\}$;
(ii) if the inequality $\|q_i(t) - q_j(t)\| \geq r$ holds, then $(i, j) \notin \mathcal{E}(t)$;
(iii) if $(i, j) \notin \mathcal{E}(t^{-})$ and $\|q_i(t) - q_j(t)\| < r - \epsilon$, a new edge $(i, j)$ is generated between agent $i$ and agent $j$.

Here, $q_i(t)$ denotes the position vector of agent $i$, and $\|\cdot\|$ denotes the Euclidean norm. Whether there exists an edge between agent $i$ and agent $j$ at time $t$ can be described by a symmetric indicator function $\chi_{ij}(t)$, defined as follows:
$$\chi_{ij}(t) = \begin{cases} 1, & (i, j) \in \mathcal{E}(t), \\ 0, & (i, j) \notin \mathcal{E}(t). \end{cases}$$
Thus, $\chi_{ij}(t) = 0$ means that there is no edge between agent $i$ and agent $j$ at time $t$, while $\chi_{ij}(t) = 1$ means that such an edge exists.

The above implies that a new edge is added only when the distance between two agents becomes less than $r - \epsilon$, which introduces hysteresis into the process of adding new links.
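As a concrete illustration (not part of the original algorithm statement), the following is a minimal sketch of this hysteresis rule, assuming positions are stored as NumPy arrays and edges as a set of index pairs with $i < j$; the function name is ours.

```python
import numpy as np

def update_edges(q, edges, r, eps):
    """Hysteresis rule: drop an edge only when the distance reaches r,
    add a new edge only when the distance falls below r - eps."""
    n = len(q)
    new_edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(q[i] - q[j])
            if (i, j) in edges:
                if d < r:            # keep an existing edge until the distance reaches r
                    new_edges.add((i, j))
            elif d < r - eps:        # add a new edge only inside the hysteresis band
                new_edges.add((i, j))
    return new_edges
```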

2.1.2. Laplacian

Define the Laplacian of the graph $G(t)$ as $L(t) = D(t) - A(t)$, where $A(t) = [a_{ij}(t)]$ is the adjacency matrix of the graph $G(t)$ such that $a_{ij}(t) = 1$ if $(i, j) \in \mathcal{E}(t)$ and $a_{ij}(t) = 0$ otherwise [13], and $D(t) = \mathrm{diag}(d_1(t), \ldots, d_N(t))$, where $d_i(t) = \sum_{j} a_{ij}(t)$ is the sum of the $i$th row of the matrix $A(t)$. Obviously, the sum of the elements in each row of the Laplacian is zero. Furthermore, the Laplacian is a positive semidefinite matrix that satisfies the following sum-of-squares property [14]:
$$z^{T} L(t)\, z = \frac{1}{2} \sum_{i, j} a_{ij}(t)\, \bigl(z_j - z_i\bigr)^{2}.$$

For a given undirected graph $G(t)$, the Laplacian matrix $L(t)$ is symmetric, and the nonnegative adjacency matrix satisfies $a_{ij}(t) = a_{ji}(t)$.
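For illustration only, a small sketch that builds the adjacency matrix and Laplacian from an edge set of the kind produced above (the helper name build_laplacian is ours):

```python
import numpy as np

def build_laplacian(n, edges):
    """Adjacency matrix A and Laplacian L = D - A for an undirected, unweighted edge set."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0           # symmetric adjacency: a_ij = a_ji = 1
    D = np.diag(A.sum(axis=1))            # degree matrix: row sums of A
    return A, D - A

# Sum-of-squares property: for any vector z, z @ L @ z equals
# 0.5 * sum_{i,j} a_ij * (z[i] - z[j])**2, and every row of L sums to zero.
```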

2.1.3. Reynolds Model

In 1986, Reynolds introduced three heuristic rules that led to the creation of the first computer animation of flocking [3]. Researchers have since used the name Reynolds model to describe these rules. The flocking algorithm proposed below is based on these rules. Here, we list the three flocking rules of Reynolds [4]:
(i) Flock centering: attempt to stay close to nearby agents.
(ii) Velocity matching: attempt to match velocity with nearby agents.
(iii) Collision avoidance: avoid collisions with nearby agents.

2.2. Problem Description

As noted in the background, the Reynolds model is formulated in terms of the positions and velocities of the agents. Consider a group of $N$ mobile agents moving in an $n$-dimensional Euclidean space. Let $q_i \in \mathbb{R}^n$ and $p_i \in \mathbb{R}^n$ denote the position vector and the velocity vector of agent $i$, $i = 1, 2, \ldots, N$, respectively. The motion of each agent is described by the following double integrator (4):
$$\dot{q}_i = p_i, \qquad \dot{p}_i = u_i, \quad i = 1, 2, \ldots, N,$$
where $u_i$ denotes the control input of agent $i$. The position vectors and velocity vectors of all agents are stored in stacked (matrix) form; define $q = (q_1^{T}, q_2^{T}, \ldots, q_N^{T})^{T}$ and $p = (p_1^{T}, p_2^{T}, \ldots, p_N^{T})^{T}$. Then, the flocking control problem is to design the control inputs so that the motion of all agents satisfies the three rules of Reynolds.

The control objective of this work is to make all agents approach a uniform velocity and asymptotically converge to fixed interagent distances, while collisions among agents are avoided during the motion evolution. Namely, for all $i, j$, we require $\|p_i(t) - p_j(t)\| \to 0$ and $\|q_i(t) - q_j(t)\| \to d_{ij}$ as $t \to \infty$, where the $d_{ij}$ are offset constants. In general, we want the final desired formation to satisfy $\|q_i - q_j\| = d_{ij}$. In order to achieve this goal, the control input (6) for the leaderless case is designed as the combination of a gradient-based term and a velocity consensus (damping) term.

For the situation of a group with a virtual leader agent, the control input (7) additionally contains a navigational feedback term. In (7), the gradient-based term enforces that each agent asymptotically converges to the fixed interagent distances, ensures network connectivity, and avoids collisions between agents; the middle term acts as a damping force, which makes every agent attempt to match velocity with its nearby agents; and the navigational feedback term makes all agents aware of the virtual leader.
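As an illustration only, under standard assumptions for this class of algorithms (an artificial potential $V(\cdot)$ acting over the neighbor set $N_i(t)$, adjacency weights $a_{ij}(t)$, a virtual leader with state $(q_\gamma, p_\gamma)$, feedback gains $c_1, c_2 > 0$, and indicators $h_i \in \{0, 1\}$; these symbols are assumptions of this sketch rather than the paper's exact notation), control inputs with the structure of (6) and (7) are typically written as
$$u_i = -\sum_{j \in N_i(t)} \nabla_{q_i} V\bigl(\|q_i - q_j\|\bigr) - \sum_{j \in N_i(t)} a_{ij}(t)\,\bigl(p_i - p_j\bigr),$$
$$u_i = -\sum_{j \in N_i(t)} \nabla_{q_i} V\bigl(\|q_i - q_j\|\bigr) - \sum_{j \in N_i(t)} a_{ij}(t)\,\bigl(p_i - p_j\bigr) - h_i\bigl[c_1\bigl(q_i - q_\gamma\bigr) + c_2\bigl(p_i - p_\gamma\bigr)\bigr],$$
respectively, where the first sum is the gradient-based term, the second sum is the damping (velocity consensus) term, and the last bracket is the navigational feedback toward the virtual leader.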

3. Flocking Algorithms

3.1. Without a Virtual Leader

We have assumed that the mobile agents move in an $n$-dimensional Euclidean space. Because of the limited awareness of each agent, an agent can only sense other agents within its neighborhood; we therefore define $N_i(t) = \{ j \in \mathcal{V} : (i, j) \in \mathcal{E}(t) \}$ as the neighborhood of agent $i$ at time $t$, determined by the sensing radius and the hysteresis rule above. The control input (6) for agent $i$ is governed by (8), which sums the gradient of a potential function and a velocity consensus term over the neighborhood $N_i(t)$ with nonnegative weights. The function $V(\|q_i - q_j\|)$ in (8) is a nonnegative artificial potential function whose independent variable is the distance between agent $i$ and agent $j$, $(i, j) \in \mathcal{E}(t)$. It has the following properties:
(i) the potential function achieves its unique minimum value when the interagent distance reaches a desired distance;
(ii) as $\|q_i - q_j\| \to 0$ or $\|q_i - q_j\| \to r$, the potential function $V(\|q_i - q_j\|) \to \infty$.

Consequently, no interagent distance corresponding to an existing edge will tend to $0$ or to $r$ at any time, which implies that neighboring agents attract each other at large distances and repel each other at short distances, so collisions can be avoided by constructing $V$ appropriately. One example of an artificial potential function meeting the above requirements is given below.
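For illustration, one common choice satisfying properties (i) and (ii) above (the specific expression is an assumption of this sketch, not necessarily the exact function adopted in this paper) is
$$V\bigl(\|q_{ij}\|\bigr) = \frac{1}{\|q_{ij}\|^{2}} + \frac{1}{\bigl(r - \|q_{ij}\|\bigr)^{2}}, \qquad q_{ij} = q_i - q_j, \quad 0 < \|q_{ij}\| < r,$$
which attains its unique minimum at $\|q_{ij}\| = r/2$ and grows unboundedly as $\|q_{ij}\| \to 0$ or $\|q_{ij}\| \to r$.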

Motivated by the physical properties of the dynamic agents, we define the total energy $Q(t)$ of the agent system as the sum of the total potential energy and the total relative kinetic energy among the agents. Denote the initial energy by $Q(0)$; it is easy to see that $Q(t)$ is a positive semidefinite function.
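In this class of flocking algorithms, such an energy function is commonly written as (the symbol $Q$ and this grouping are our notation for illustration; some papers measure the kinetic term relative to the center of mass instead of using absolute velocities)
$$Q(t) = \frac{1}{2} \sum_{i=1}^{N} \Biggl( \sum_{j \in N_i(t)} V\bigl(\|q_i - q_j\|\bigr) + p_i^{T} p_i \Biggr),$$
where the first sum collects the pairwise potential energy and the second term the kinetic energy used in the analysis below.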

For convenience of description, denote the position and velocity of the center of mass (COM) of all agents as follows [22]:
$$\bar{q}(t) = \frac{1}{N} \sum_{i=1}^{N} q_i(t), \qquad \bar{p}(t) = \frac{1}{N} \sum_{i=1}^{N} p_i(t).$$

Considering a group of agents with dynamics (4) steered by protocol (8) in combination with the artificial potential function put forward above, the following conclusion can be drawn from the above description.

Theorem 1. Assume that the initial energy $Q(0)$ is finite and the initial network $G(0)$ is connected. Then, one has the following results: (i) the network $G(t)$ maintains connectivity for all $t \geq 0$; (ii) the velocity of the COM remains constant for all $t \geq 0$; (iii) all agents approach a uniform velocity and asymptotically converge to fixed interagent distances; (iv) collisions among agents are avoided.

Proof. Part (i) of Theorem 1 is proved first. Suppose that the network $G(t)$ switches at times $t_1, t_2, \ldots$, and let $t_0 = 0$, so that the topology of $G(t)$ is fixed on each interval $[t_{k-1}, t_k)$. Since the initial energy $Q(0)$ is finite, we compute the time derivative of the energy function on $[t_0, t_1)$ and find that $\dot{Q}(t) \leq 0$ there, because the Laplacian is positive semidefinite [14]. This means that $Q(t) \leq Q(t_0) = Q(0) < \infty$ holds on $[t_0, t_1)$. According to the definition of the artificial potential function, $V(\|q_i - q_j\|) \leq Q(t) \leq Q(0)$ for every existing edge. Therefore, no existing edge-distance will tend to $r$ on $[t_0, t_1)$, which implies that fragmentation of existing edges is avoided before time $t_1$. Hence, the switch at $t_1$ can only increase the number of edges of the interaction network. The potential energy contributed by the new edges is finite because of the effect of hysteresis: a new edge is added only when the corresponding distance is below $r - \epsilon$, where $V$ is bounded.
Using a similar analysis, we can compute the time derivative of $Q(t)$ on the interval $[t_1, t_2)$ and obtain $\dot{Q}(t) \leq 0$ there as well. Based on this fact, the inequality $Q(t) \leq Q(t_1) < \infty$ holds on $[t_1, t_2)$. It follows that no existing edge-distance tends to $r$ on $[t_1, t_2)$, which implies that fragmentation of existing edges is avoided before time $t_2$. Repeating this argument on every interval, $Q(t)$ remains finite for all $t \geq 0$.
According to the above analysis, $G(0)$ is connected and no edge of $G(t)$ is ever lost, which proves that $G(t)$ maintains connectivity for all $t \geq 0$.
In what follows, we will give the proof procedure of parts (ii) and (iii).
Let us consider that new edges are added to the dynamic network at a switching time $t_k$. Since each new edge is created at a distance below $r - \epsilon$, its potential energy is bounded, and hence $Q(t_k) \leq Q(t_k^{-}) + M$ for some finite constant $M$ determined by the number of added edges. Combined with the fact that $\dot{Q}(t) \leq 0$ between switching times, $Q(t)$ is clearly bounded for all time $t \geq 0$. Hence, following the above analysis, we can define the positive invariant set $\Omega = \{(q, p) : Q(q, p) \leq Q_{\max}\}$, where $Q_{\max}$ denotes the finite upper bound of $Q(t)$.
The inequality $V(\|q_i - q_j\|) \leq Q_{\max}$ holds for all agents $i$ and $j$ connected by an edge, under the fact that $G(t)$ always maintains connectivity. Hence the distances of existing edges stay bounded away from $0$ and $r$, and the kinetic energy term bounds the velocities, which means that the positive invariant set $\Omega$ is compact. Note that the network eventually has a fixed topology; combining this with LaSalle's invariance principle [23], every solution starting in $\Omega$ tends to the largest invariant set contained in $\{(q, p) \in \Omega : \dot{Q} = 0\}$. It is easy to see that $\dot{Q} = 0$ if and only if $p_1 = p_2 = \cdots = p_N$, which shows that all agents approach a uniform velocity.
Summing the control input (8) over all agents, the gradient terms and the velocity consensus terms cancel in pairs by the symmetry of the potential and of the adjacency weights, so that $\dot{\bar{p}}(t) = \frac{1}{N} \sum_{i=1}^{N} u_i = 0$. This implies that the velocity of the COM remains constant, so part (ii) of Theorem 1 is established.
From the control input (8), in the steady state all velocities are equal, so clearly $u_i = \dot{p}_i = 0$, which implies that $\sum_{j \in N_i} \nabla_{q_i} V(\|q_i - q_j\|) = 0$ for all $i$. This set of conditions can be rewritten in matrix form in terms of a weighted Laplacian of the final network. For a given connected initial network, and combining this with the definition of the Laplacian matrix, the configuration converges to an extremum of the total potential, so the interagent distances converge to fixed values, and part (iii) is established.
Finally, we prove part (iv). According to the definition of the set $\Omega$, $V(\|q_i - q_j\|) \leq Q_{\max} < \infty$ for all $t \geq 0$. Hence, combining this with the definition of the artificial potential function, no interagent distance can tend to zero. Thus, collisions between any two agents are avoided.
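To make the structure of a protocol-(8)-style input concrete, here is a minimal sketch assuming the gradient-plus-consensus form discussed above, unit adjacency weights, and the illustrative potential $V(s) = 1/s^{2} + 1/(r - s)^{2}$; the function names and these choices are ours, not the paper's.

```python
import numpy as np

def grad_potential(s, r):
    """Derivative dV/ds of the illustrative potential V(s) = 1/s**2 + 1/(r - s)**2."""
    return -2.0 / s**3 + 2.0 / (r - s)**3

def control_inputs(q, p, edges, r):
    """Gradient-plus-consensus control inputs in the spirit of protocol (8)."""
    n, dim = q.shape
    u = np.zeros((n, dim))
    for i, j in edges:
        qij = q[i] - q[j]
        s = np.linalg.norm(qij)
        grad = grad_potential(s, r) * qij / s   # nabla_{q_i} V(||q_i - q_j||)
        u[i] -= grad + (p[i] - p[j])            # agent i: potential gradient + velocity consensus
        u[j] += grad - (p[j] - p[i])            # symmetric contribution for agent j
    return u
```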

3.2. With a Virtual Leader

The situation of multiagent systems (4) with a virtual leader is considered in this subsection. The control input is governed by (24), which augments (8) with a navigational feedback term. Here $p_\gamma$ denotes the velocity of the virtual leader agent and is a constant vector. If $h_i = 1$, agent $i$ has the information of the virtual leader; otherwise $h_i = 0$. The last term in (24) plays the same role as the navigational feedback term in (7).
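Continuing the earlier sketch, the navigational feedback of a (24)-style protocol could be layered on top of the leaderless inputs as follows (the gains c1 and c2, the array informed, and the helper control_inputs are assumptions carried over from the earlier sketch):

```python
import numpy as np

def control_inputs_with_leader(q, p, edges, r, q_lead, p_lead, informed, c1=1.0, c2=1.0):
    """Leaderless gradient + consensus inputs plus navigational feedback toward the leader."""
    u = control_inputs(q, p, edges, r)      # reuse the leaderless sketch above
    for i in np.flatnonzero(informed):      # informed[i] == 1 if agent i knows the leader
        u[i] -= c1 * (q[i] - q_lead) + c2 * (p[i] - p_lead)
    return u
```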

We again define the total energy of the agent system, now given by (25) and expressed in terms of the position and velocity errors relative to the virtual leader.

It shows that the total energy (25) is the sum of the total relative kinetic energy and the total potential energy between the virtual leader and the agents of the dynamic network, and it is a positive semidefinite function. In this work, we choose an artificial potential function that satisfies the properties defined above.

Similar to the analysis of flocking behavior without the virtual leader, considering a group of agents with dynamics (4) steered by protocol (24) in combination with the artificial potential function put forward above, we can draw the following conclusion.

Theorem 2. Assume that the initial energy $Q(0)$ is finite and the initial network $G(0)$ is connected. Then, the following hold: (i) the network $G(t)$ maintains connectivity for all $t \geq 0$; (ii) all agents approach a uniform velocity and asymptotically converge to fixed interagent distances; (iii) collisions between any agents are avoided; (iv) the velocity of the COM exponentially converges to the desired velocity $p_\gamma$, and the group then keeps moving with this velocity for all subsequent time.

Proof. Part (i) of Theorem 2 is proved first.
Let $\tilde{q}_i = q_i - q_\gamma$ denote the position difference and $\tilde{p}_i = p_i - p_\gamma$ the velocity difference between agent $i$ and the virtual leader. Since $p_\gamma$ is constant, the dynamics of agent $i$ become $\dot{\tilde{q}}_i = \tilde{p}_i$, $\dot{\tilde{p}}_i = u_i$, and $\|\tilde{q}_i - \tilde{q}_j\| = \|q_i - q_j\|$; therefore, the control input (24) and the energy function (25) can be rewritten in terms of the error variables $\tilde{q}_i$ and $\tilde{p}_i$. Referencing the proof of part (i) of Theorem 1, we compute the time derivative of the energy function on each interval between switching times and find that $\dot{Q}(t) \leq 0$, which implies that $Q(t) \leq Q(t_k) < \infty$ on each interval $[t_k, t_{k+1})$. Hence, no existing edge-distance will tend to the radius $r$ on any interval, which implies that fragmentation of existing edges is avoided at every switching time, and $Q(t)$ remains finite. We conclude that $G(0)$ is connected and no edge of $G(t)$ is ever lost, which ensures that $G(t)$ maintains connectivity for all $t \geq 0$.
Then, we give the proof of part (ii). As in the proof of Theorem 1, the set $\Omega = \{(\tilde{q}, \tilde{p}) : Q(\tilde{q}, \tilde{p}) \leq Q_{\max}\}$ is positively invariant and compact, where $Q_{\max}$ is the finite upper bound of $Q(t)$, $\tilde{q} = (\tilde{q}_1^{T}, \ldots, \tilde{q}_N^{T})^{T}$, and $\tilde{p} = (\tilde{p}_1^{T}, \ldots, \tilde{p}_N^{T})^{T}$.
Combining this with LaSalle's invariance principle [23], every solution converges to the largest invariant set contained in $\{(\tilde{q}, \tilde{p}) \in \Omega : \dot{Q} = 0\}$. Hence $\dot{Q} = 0$ in the limit, which means that the velocity differences $\tilde{p}_1, \tilde{p}_2, \ldots, \tilde{p}_N$ coincide, and this implies that all agents approach a uniform velocity.
We now prove part (iii) of Theorem 2. According to the definition of the set $\Omega$, $V(\|q_i - q_j\|) \leq Q_{\max} < \infty$ for all $t \geq 0$. From the definition of the artificial potential function, we deduce that no interagent distance can tend to zero. Thus collisions among agents are avoided.
At the end of this section, we prove part (iv). Summing the control protocol (24) over all agents, the gradient terms and the velocity consensus terms cancel by symmetry, leaving a differential equation for the COM velocity error driven only by the navigational feedback of the informed agents. Solving this linear differential equation shows that the velocity error decays exponentially, which implies that the velocity of the COM exponentially converges to the desired velocity $p_\gamma$.
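As a hedged illustration, if every agent were informed ($h_i = 1$ for all $i$) and the navigational feedback acted only on the velocity error with gain $c_2$ (assumptions of this sketch), summing the (24)-style inputs over the agents would give
$$\dot{\bar{p}}(t) = -c_2 \bigl(\bar{p}(t) - p_\gamma\bigr), \qquad \bar{p}(t) = p_\gamma + e^{-c_2 t}\bigl(\bar{p}(0) - p_\gamma\bigr),$$
so the COM velocity converges exponentially to $p_\gamma$ at rate $c_2$.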

4. Numerical Simulation

In this section, several numerical examples of the proposed control laws are presented to illustrate the rationality of the theoretical analysis.

4.1. Flocking without a Virtual Leader

The simulation is performed with 10 agents moving in a two-dimensional Euclidean space under the control protocol (8). All initial positions are chosen randomly within a bounded region of the plane, and the initial velocities of the 10 agents are set with arbitrary directions and magnitudes. The potential function selected for the control protocol (8) satisfies the properties stated in Section 3.1; the sensing radius $r$ and the switching threshold $\epsilon$ are fixed constants, and $r - \epsilon$ is the threshold for adding new edges. The simulation results are shown in the figures. The initial states of the agents are shown in Figure 1; the black lines represent existing neighboring relations between agents at the initial time, and the blue points and the solid lines with arrows represent the agents and the directions of their velocities, respectively. Figure 2 shows the final states of the 10 agents under the control protocol (8), which shows that all agents eventually move in the same direction. The paths and final states of the agents' motion evolving according to the control protocol (8) are captured in Figure 3, where the dotted lines represent the paths of the agents. Figures 4 and 5 describe the convergence of the agents' velocities along the $x$-axis and the $y$-axis; it is clear that all agents approach a uniform velocity. The convergence of the positions along the $x$-axis and the $y$-axis is demonstrated in Figures 6 and 7, from which it is clear that the interagent distances asymptotically converge to fixed values.
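For reproducing the qualitative behavior only (the number of agents, region size, radius, time step, and duration below are illustrative choices of ours, not the paper's settings), a minimal closed-loop simulation using the sketches above could look like the following.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim, r, eps = 10, 2, 1.2, 0.2           # illustrative parameters
dt, steps = 0.01, 3000

q = rng.uniform(0.0, 2.0, size=(N, dim))   # random initial positions
p = rng.uniform(-0.5, 0.5, size=(N, dim))  # random initial velocities
edges = update_edges(q, set(), r, eps)     # initial edge set (assumed connected)

for _ in range(steps):
    u = control_inputs(q, p, edges, r)     # protocol-(8)-style inputs
    p = p + dt * u                         # double-integrator dynamics (4),
    q = q + dt * p                         # integrated with explicit Euler
    edges = update_edges(q, edges, r, eps) # hysteresis edge update
```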

4.2. Flocking with a Virtual Leader

This simulation is performed with a virtual leader and 10 agents moving in a two-dimensional Euclidean space under the control protocol (24). The initial positions and initial velocities of the 10 agents are chosen in the same way as in the simulation without a virtual leader, and the initial position and velocity of the virtual leader are chosen randomly. The gains in (24) and the radius for interagent sensing and communication are set to fixed values for the simulation. The initial states of the agents are shown in Figure 8; the pink hexagon represents the virtual leader, and its velocity vector is depicted by the solid line with an arrow. Figure 9 shows the final states of the 10 agents under the control protocol (24), which shows that all agents eventually move in the same direction as the virtual leader. The paths and final states of the agents' motion evolving according to the control protocol (24) are captured in Figure 10. Figures 11 and 12 describe the convergence of the agents' velocities along the $x$-axis and the $y$-axis, where the dotted lines represent the ten agents and the red solid line represents the virtual leader; it is clear that all agents approach the velocity of the virtual leader and that the group keeps moving at this velocity thereafter. The convergence of the positions along the $x$-axis and the $y$-axis is demonstrated in Figures 13 and 14. These figures are in close agreement with the theoretical predictions of Theorem 2.

5. Conclusion

In this paper, we have investigated the flocking and coordinative control problems of mobile autonomous agents with preserved network connectivity and proposed a flocking algorithm with collision-avoidance rules. The algorithm uses hysteresis in adding new links and a new potential function method to ensure that the network always stays connected and that collisions between agents are avoided. The extension of the flocking algorithm to the case of a virtual leader has also been investigated. The simulations illustrate that the proposed laws make all agents approach a common velocity vector, drive the interagent distances asymptotically to fixed values, and avoid collisions between any agents throughout the motion. The laws also handle the situation in which a virtual leader exists in the group, and it is proved that the desired velocity of the group equals that of the virtual leader. Future work will address how to make the network topology become connected when the initial network is not connected.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China, Grant no. 61473129.