Abstract

In this paper, the suboptimal event-triggered consensus problem of multiagent systems is investigated. With the combinational measurement approach, each agent updates its control input only at its own event time instants; thus the total number of events and the number of controller updates can be significantly reduced in practice. Motivated by the tradeoff between increasing the consensus rate and reducing the number of triggering events, we propose a time-average cost for the agent system and develop a suboptimal approach to determine the triggering condition. The effectiveness of the proposed strategy is illustrated by numerical examples.

1. Introduction

In practical applications of networked dynamical systems, individual subsystems such as robots, vehicles, or mobile sensors are required to work cooperatively to accomplish complex tasks. This has motivated research in recent years on the analysis and synthesis of networked dynamical systems and on the distributed coordination control of multiagent systems. Typical research directions in this field include, but are not limited to, networked systems with unreliable communication links and quantized measurements [1–3], multiagent consensus [4–6], distributed tracking [7], formation control [8], connectivity preservation [9], agent flocking [10–12], rendezvous [13, 14], and coverage and deployment [15–17].

To reduce the total cost in practical systems, when implementing the communication and controller actuation schemes for multiagent systems, a possible design equips each agent with a small embedded microprocessor and simple communication and actuation modules. However, these low-cost processors and modules usually have only limited energy and capabilities. As a result, event-triggered schemes for practical control systems on digital platforms have been proposed; see [18–21]. Recently, event-based distributed control strategies have also been proposed for multiagent systems in [22–25]. Using the deterministic strategy introduced in [19], the control input of each agent is updated only when the measurement error magnitude exceeds a certain threshold. It is also proved that the lower bounds on the interevent time intervals are strictly positive, which ensures that there is no Zeno behavior. Other works on event-triggered control of multiagent systems include decentralized control over wireless sensor networks [26], event-triggered consensus with second-order dynamics [27], and event-based leader-follower tracking [28].

The motivation for employing event-triggered control in multiagent systems is to reduce the costs of communication and controller updates, so as to meet hardware limitations and to save energy. However, in the existing event-triggered control of multiagent systems, the performance of the event-triggered controller has not been studied yet [22–25]. In this paper, we first present a short review of the combinational measurement approach proposed in [29]. Compared with existing approaches, this approach allows the controller of each agent to be updated only at its own event times, which reduces the frequency of controller updates in practice. Then, based on this control approach, we investigate the optimal control problem of the event-triggered control system by formulating the average cost of the system. It is noted that the combinational measurement approach can be utilized to decouple the costs of different agents. With this cost-decoupling strategy, a suboptimal approach to determine the triggering condition is proposed. Numerical examples show that the proposed approach can reduce the total cost of the agent system during consensus tasks.

The contributions of this work are as follows. First, we propose a formulation of the time-average cost for multiagent systems with event-based controllers. This cost describes the tradeoff between increasing the consensus rate and reducing the resource consumption. To the best of our knowledge, there have been very few works addressing this issue for event-triggered multiagent systems so far. Second, we decouple the costs of different agents and then derive an upper bound on the cost of each agent. With this approach, we are able to propose a distributed suboptimal controller for the multiagent consensus problem.

The rest of this paper is organized as follows. Section 2 presents the event-triggered controller design for the multiagent system and its convergence results. In Section 3, the average cost of the system is formulated and the suboptimal triggering condition is obtained. In Section 4, simulations are provided to illustrate the proposed strategies. Finally, the paper is concluded in Section 5.

2. Event-Triggered Consensus

In this section we provide a review of the event-triggered control with combinational measurement proposed in [29]. Consider a multiagent system with agents, labeled by , which are required to achieve a consensus task. The agent states at time are represented by . The dynamics of agent are given in (1). The communication links among agents are undirected, and the communication topology of the system is represented by an undirected graph , where is the vertex set and is the edge set. Agent is said to be a neighbor of agent if and only if (or ). All the neighbors of agent constitute the neighbor set .
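
To fix ideas, the following sketch builds a small communication graph and its Laplacian matrix, assuming single-integrator agents moving in the plane. The specific dynamics in (1), the number of agents, and the notation cannot be recovered from the text, so everything below is an illustrative assumption rather than the paper's exact setup.

```python
# Illustrative setup only: a hypothetical 6-agent undirected network and its
# Laplacian. Single-integrator dynamics x_i' = u_i are assumed for agent i.
import numpy as np

n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # undirected edge set

adjacency = np.zeros((n, n))
for i, j in edges:
    adjacency[i, j] = adjacency[j, i] = 1.0                # bidirectional links

laplacian = np.diag(adjacency.sum(axis=1)) - adjacency     # symmetric for undirected graphs
neighbors = [list(np.flatnonzero(adjacency[i])) for i in range(n)]

print(neighbors[0])   # neighbor set of agent 0, here [1, 5]
```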

An event-triggering mechanism is introduced into the control of each agent. The control input of agent remains fixed until the next triggering event occurs. Assume that the triggering time sequence of agent is , where is always a default triggering time. In the agent group, each agent can obtain the state information of its communication neighbors. When , the control input of agent depends on the states of itself and its neighbors at time .

To develop a decentralized control, agent 's local coordinate system is introduced, with its origin located at . The real-time average state of agent and all its neighbors in this local coordinate system is given by the combinational measurement. At each triggering time, agent measures this average state and takes the measurement as its target; see Figure 1 for an illustration in a 2D plane. This target state remains fixed until the next triggering time. Thus the target state of agent for is the measurement taken at . For , the control law for agent proposed in [29] is (4), where is a positive real number to be determined. The measurement error of agent is the deviation of the held target from the real-time measurement. Since is a triggering time instant, one has . In the sequel we show how to use this error to determine the triggering events that guarantee the consensus of the agent group.
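
As a concrete but hedged reading of the construction above, the sketch below computes the combinational measurement as the average of the relative states of agent i and its neighbors in agent i's local frame, holds it between events as the target, and forms the measurement error as the deviation of the held target from its real-time value. The gain name `a` and these exact expressions are assumptions for illustration only.

```python
# Hedged sketch of the combinational measurement and the held control input.
import numpy as np

def combinational_measurement(x, i, nbrs_i):
    """Average relative state of agent i and its neighbors (agent i's local frame)."""
    members = list(nbrs_i) + [i]
    return np.mean(x[members] - x[i], axis=0)

def held_input(x_at_event, i, nbrs_i, a=1.0):
    """Control input computed at agent i's last event time and held until its next event."""
    return a * combinational_measurement(x_at_event, i, nbrs_i)

def measurement_error(x_now, x_at_event, i, nbrs_i):
    """Deviation of the held target from the real-time combinational measurement."""
    return (combinational_measurement(x_at_event, i, nbrs_i)
            - combinational_measurement(x_now, i, nbrs_i))
```

At an event time the two arguments of the error coincide, so the error is zero, which matches the reset behavior described below.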

We denote as the augmented state of the system and also denote . From (4) and (5) one has Let be the Laplacian matrix of the underlying graph . Also let and . Then the compact form of the system equation is given by where . Since the communication is bidirectional, graph is undirected and hence is symmetric [5]. Consider the candidate Lyapunov function One has Let and Then one has From (9), Note that for any and any , one always has . Thus Enforcing yields Thus if . From this, one has and . Notice that, when , reaches its maximum . Also notice that Thus (14) can be rewritten as Then (15) becomes The triggering function for agent is given in (19), and an event of agent is triggered when condition (20) is satisfied. Notice that when an event is triggered, the control input changes and the error is automatically reset to zero.
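
The exact triggering function (19) and condition (20) are not reproduced above, so the check below assumes a state-dependent threshold of the common form ||e_i|| >= sigma ||z_i||, with sigma a design parameter; it is meant only to illustrate how an event check and the accompanying error reset would be implemented.

```python
# Hedged sketch of an event check in the spirit of (19)-(20).
import numpy as np

def event_triggered(e_i, z_i, sigma=0.5):
    """True when the measurement error reaches the assumed state-dependent threshold."""
    return np.linalg.norm(e_i) >= sigma * np.linalg.norm(z_i)

# At an event: agent i re-measures its combinational measurement, refreshes its
# held control input, and the error is automatically reset to zero because the
# new held target equals the current measurement.
```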

It is noted that the triggering mechanism is designed in such a way that the time derivative of the Lyapunov function is enforced by (17) to be nonpositive. However, this alone does not guarantee the convergence of the closed-loop system. In a hybrid system, the interevent times may become shorter and shorter as the event index increases, such that infinitely many events are triggered in a finite time interval. Such an execution of a hybrid system is called Zeno; see [30] and the references therein for more details. Generally speaking, in controller design one expects the agents to be triggered regularly, with no Zeno behavior. In fact, a comprehensive analysis of the triggering behavior has been provided in [29], and we have the following lemma.

Lemma 1. Consider an agent with a nonempty neighbor set . Its kinematics are given in (1) and its controller is the event-triggered control (4), with (20) being the triggering condition. If exists and , agent will only exhibit regular triggering behavior for all .

Proof. The proof of this lemma follows directly from Lemmas 2, 3, and 4 in [29].

We are now in a position to present the consensus result of the proposed event-triggered controller.

Theorem 2 (see [29]). Consider a group of agents moving in the working space . The dynamics of each agent are given in (1). Assume that the communication graph is fixed and connected. If no agent is located at the average state of its neighbors, the group will achieve consensus asymptotically under the event-triggered control law (4) with the triggering condition (20).

Remark 3. We note that the agent group may not achieve consensus if more than one agent is located at the average of all its neighbors. One strategy for solving this problem is to use a subset of ; for example, , to compute and . Then, at , when agent is no longer at the average state of all its neighbors, the controller is switched back to use .

3. Suboptimal Triggering

In a practical multiagent system, fast achievement of the coordination tasks with the least resource consumption is often expected. For the consensus problem discussed in this work, one may expect the highest consensus rate with the fewest events and controller updates. However, there is a tradeoff between these two factors. On the one hand, to achieve fast consensus and precise control, one may require the norm of the measurement error to be as small as possible, which may call for a high frequency of event triggering and controller updates. On the other hand, to save energy and communication bandwidth, one may reduce the triggering frequency and thus the number of controller updates, which conflicts with the consensus rate expectation mentioned above. The goal of this section is to balance the tradeoff between increasing the consensus rate and reducing the number of events and controller updates.

It is noted that, if , agent takes the center of all its neighbors as the target point. Then it may achieve consensus faster than using , since is the real-time neighborhood center. Thus can be considered as the measurement cost of agent . The smaller this cost is, the faster the consensus rate can be. However, directly using as the measurement cost is not a good choice since, when all the agents are very close to each other, goes to and cannot reflect the consensus rate well. To solve this problem, we let the measurement cost of agent be This definition represents the measurement deviation of the real-time neighborhood center from its true value. It is better than directly using as the measurement cost, since it remains well defined to reflect the consensus rate when tends to as time goes to infinity. The lower this cost is, the faster the consensus rate can be.
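
The following tiny numeric illustration, under the same assumptions as the earlier sketches, shows why the norm of the real-time neighborhood center alone is a poor cost near consensus: it is close to zero regardless of how the group behaves, whereas the deviation between the held (event-time) measurement and its real-time value still carries information about how stale the target is. The specific numbers are placeholders, and this reading of the cost is an assumption.

```python
# Placeholder numbers; the cost expression itself is an assumed reading of the text.
import numpy as np

def center(x, i, nbrs):
    members = list(nbrs) + [i]
    return np.mean(x[members] - x[i], axis=0)

nbrs = [1, 2]
x_event = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # states at agent 0's last event
x_now = np.array([[0.30, 0.30], [0.34, 0.31], [0.31, 0.35]])  # group nearly at consensus

print(np.linalg.norm(center(x_now, 0, nbrs)))        # ~0.03: near zero, uninformative
print(np.linalg.norm(center(x_event, 0, nbrs)
                     - center(x_now, 0, nbrs)))      # ~0.45: deviation of the held target
```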

To formulate the above mentioned tradeoff, we should also find a way to count the number of triggering events of all the agents. Let the entire triggering cost of agent be its total number of triggering events. Then, during the time interval , the triggering cost of agent is . Thus we can define the time-average triggering cost as This definition implies that the total cost in a single event time interval is always ; that is, The lower the cost is, the fewer triggering events and controller updates occur, and thus the lower the resource consumption can be.

Then we can define the comprehensive time-average cost of agent , that is, the per-period cost, as where is the cost coupling strength. The objective is to find a balance between the estimation error and the triggering frequency. Namely, we aim to find a set of optimal triggering policies that minimize the average cost of the agent group, which is defined by where is the collection of triggering functions. Since all the agents are coupled by the event-triggered control, the whole group exhibits the behavior of a complex hybrid system. Thus, solving the problem of minimizing by designing is rather challenging. However, based on the behavior analysis presented in [29], one can find a suboptimal solution to this problem.
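
Since the displayed cost definitions are not recoverable above, the sketch below uses an assumed concrete form of the per-period cost of agent i over a horizon [0, T]: a time-averaged squared measurement error plus a weighted count of events per unit time, with the weight `beta` standing in for the cost coupling strength. Both the squared-error form and the symbol `beta` are assumptions for illustration.

```python
# Hedged sketch of the per-agent and group time-average costs.
import numpy as np

def per_period_cost(errors, event_times, T, dt, beta=1.0):
    """errors: measurement errors e_i sampled every dt; event_times: agent i's event instants."""
    meas = sum(float(np.dot(e, e)) for e in errors) * dt / T   # ~ (1/T) * integral ||e_i||^2 dt
    trig = len(event_times) / T                                # time-average triggering cost
    return meas + beta * trig

def group_cost(per_agent_costs):
    """Average cost of the agent group, here taken as the mean of the per-agent costs."""
    return float(np.mean(per_agent_costs))
```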

Denote where is the average cost of agent . Let and be two event instants of agent with , and let We consider this cost over a sufficiently long time period . From (25) one notices that the finite-time form of agent 's average cost over the time interval is From (17) one has , and they are equal only when . Then from (23) one has with being the number of event time intervals on . Thus, when , the average cost of agent can be upper bounded by where is the average length of the triggering time intervals of agent . It is difficult to obtain an estimate of . However, one may consider the average cost of agent on the time interval when is sufficiently large. In this case, will be lower bounded by 's limit defined in [29]. Consider where . Thus one has The right-hand side reaches its minimum if and only if . If all agents take the same , this condition becomes Thus a suboptimal triggering condition is given by if .
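
The exact bound and condition (33) cannot be recovered from the text, so the following sketch only illustrates the mechanism: assume an upper bound of the generic form U(sigma) = A*sigma^2 + beta*B/sigma, where the first term grows with the triggering threshold (a stale target hurts the consensus rate) and the second shrinks with it (fewer events). Minimizing this assumed bound gives a closed-form threshold that grows with beta, consistent with Remark 4 below; A, B, and the bound's shape are purely hypothetical.

```python
# Hedged sketch: minimize an assumed cost bound A*sigma^2 + beta*B/sigma.
import numpy as np

def suboptimal_threshold(A, B, beta):
    """Stationary point of the assumed bound: sigma* = (beta*B / (2*A))**(1/3)."""
    return (beta * B / (2.0 * A)) ** (1.0 / 3.0)

# Quick numeric sanity check against a grid search over sigma.
A, B, beta = 1.0, 1.0, 2.0
grid = np.linspace(1e-3, 5.0, 5001)
numeric = grid[np.argmin(A * grid**2 + beta * B / grid)]
print(suboptimal_threshold(A, B, beta), numeric)   # both close to 1.0
```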

Remark 4. Equation (33) shows the relationship between the triggering executions and the relative importance of the measurement and triggering costs. For example, a larger means that reducing the triggering cost is more important. Then one obtains a larger , which may lengthen the time between consecutive triggering executions and reduce the triggering cost.

4. Simulations

In this section some simulations are provided to illustrate the proposed event-triggered control strategy. Consider a group of agents in the working space . Each agent has the dynamics (1) and the controller (4). The parameters in the control input (4) and the triggering function (19) are given by and for all agents. The initial states of the agents are randomly selected as follows: The communication graph is shown in the first subfigure of Figure 2. In the simulation, when the sum of the distances from the agents to the group average is smaller than , that is, , the group is considered to have achieved consensus.
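
For readers who wish to reproduce the qualitative behavior, the following self-contained sketch simulates event-triggered consensus for single-integrator agents on a ring graph, with the combinational measurement held between events and an assumed relative threshold ||e_i|| >= sigma*||z_i||. The gains, threshold, initial states, graph, and stopping tolerance below are placeholders, not the values used in the paper or shown in Figures 2-5.

```python
# Hedged closed-loop sketch; all parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, dim, a, sigma = 6, 2, 1.0, 0.3
dt, T_max, tol = 1e-3, 60.0, 1e-2

edges = [(i, (i + 1) % n) for i in range(n)]                # ring topology
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

def z(x, i):
    """Combinational measurement of agent i (average relative state, local frame)."""
    members = neighbors[i] + [i]
    return np.mean(x[members] - x[i], axis=0)

x = rng.uniform(-5.0, 5.0, size=(n, dim))                   # random initial states
z_held = np.array([z(x, i) for i in range(n)])              # event-time measurements
event_log = [[0.0] for _ in range(n)]                       # t = 0 is a default event time

t, converged = 0.0, False
while t < T_max and not converged:
    x = x + dt * (a * z_held)                               # Euler step with held inputs
    t += dt
    for i in range(n):
        z_now = z(x, i)
        if np.linalg.norm(z_held[i] - z_now) >= sigma * np.linalg.norm(z_now):
            z_held[i] = z_now                               # event: refresh held target,
            event_log[i].append(t)                          # measurement error resets to zero
    deviation = np.sum(np.linalg.norm(x - x.mean(axis=0), axis=1))
    converged = deviation < tol

print(f"t = {t:.2f}, converged = {converged}, "
      f"events per agent = {[len(ev) for ev in event_log]}")
```

The qualitative features reported in Figures 2-5 (piecewise-constant inputs, errors resetting at event times, regular triggering) can be checked by logging x, z_held, and event_log during the loop.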

The trajectories of the agents in the simulation are shown in Figure 2. The agents are represented by small circles and their trajectories by solid lines. Notice that the agent group eventually achieves consensus at under the proposed control law. One can also see from the trajectories that the control inputs of all the agents, which are shown in Figure 3, are fixed during each interevent time interval.

Figure 4 shows the evolution of the error norms of all the agents. From (20) one concludes that these error norms stay below the threshold . The error increases within each triggering time interval and is then automatically reset to zero when an event occurs.

The event time instants of all the agents are shown in Figure 5. From this figure one can observe that all agents are triggered regularly and that the interevent time intervals have strictly positive lengths. This implies that there is no Zeno behavior in the system evolution. Moreover, the input of each agent is updated only when its own event occurs.

To verify the proposed suboptimal triggering approach, a set of similar simulations is carried out with different parameter selections. The initial conditions of the agents are the same as in Figure 2. The results are listed in Table 1. The table shows that, following the proposed strategy, the cost can be noticeably reduced compared with the costs under other choices that appear appropriate. Simulations also show that, in some cases, the suboptimal choices are very close to the optimal ones.

5. Conclusions

In this paper, the suboptimal event-triggered consensus problem for multiagent systems has been considered. The event design is based on a measurement error determined by a combinational measurement of the states of the agent and its neighbors. As a result, each agent updates its controller only at its own event times, which reduces the amount of interagent communication and the number of controller updates in practice. We have then proposed a novel time-average cost for the agent system and developed a suboptimal triggering approach to determine the event condition. It has been shown that the proposed approach is effective in reducing the average cost of the system. Future work includes extending the proposed approach to multiagent systems with directed communication networks and developing improved optimization approaches to further reduce the system cost.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work described in this paper was partially supported by grants from the National Natural Science Foundation of China (no. 61203027), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20123401120011), and the Anhui Provincial Natural Science Foundation (no. 1208085QF108).