Abstract

This paper discusses the event-triggered consensus problem of multiagent systems. To investigate distributed event-triggering strategies applicable to general linear dynamics, we employ a dynamic controller that converts the general linear dynamics to a single-integrator model by a change of variable. The consensus value of these new states is a constant, so a distributed event-triggering scheme is obtained under periodic event detection, in which agents with general linear dynamics require knowledge only of the states relative to their neighbors. Further, an event-triggered observer is proposed to address the case where only relative output information is available. Hence, consensus in both the state-based and observer-based cases is achieved by the distributed event-triggered dynamic controller. Finally, numerical simulations are provided to demonstrate the effectiveness of the theoretical results.

1. Introduction

Distributed coordination of multiagent systems has recently attracted considerable attention due to its extensive applications [1, 2]. The implementation of distributed algorithms is significant for multiagent systems with limited resources in real applications. In general, sampled-data control is natural for implementation on a digital platform. Traditionally, periodic sampled control performs well as long as the sampling frequency is high enough. However, this time-triggered approach is not suitable for large-scale multiagent systems where energy, computation, and communication constraints must be explicitly addressed. By contrast, event-triggered control limits sensing, control computation, and/or communication to instances when the system needs attention. It offers clear advantages such as reduced information transmission and fewer control updates, while still guaranteeing a certain level of performance.

Motivated by these advantages, event-triggered control strategies have been studied for both general dynamic systems and distributed networked dynamic systems. Reference [3] provided an introductory overview of recent work in these areas. In [4] the author proposed an execution rule for the control task based on an ISS-Lyapunov function for the closed-loop system, which became the main idea of event-triggered control in many subsequent works. The author of [5] provided a unifying Lyapunov-based framework for the event-triggered control of nonlinear systems, modeled as hybrid systems, covering the case where the Lyapunov function need not decrease monotonically. In [6] output-based event-triggered control was considered, since full state measurements are often unavailable for feedback in practice. Moreover, if the event-triggering condition must be monitored continuously, the original resource-saving purpose of event-based control is defeated. As a result, periodic event-triggered control was proposed in [7], where event detection occurs periodically. Model-based event-triggered control was proposed in [8]; it provides a larger bound on the minimum intersampling time than a zero-order hold. Reference [9] extended this to model-based periodic event-triggered control with observers.

Resource limitations are even more critical in distributed networked dynamic systems [10–13]. Typically, multiagent systems equipped with resource-limited microprocessors have motivated event-triggering strategies for actuating control updates [14–26], among others. Most existing works focus on distributed event-triggered control for single-integrator multiagent systems. In [14], a distributed event-triggered scheme was proposed for a single-integrator model; this scheme was then improved in [15–18] and extended to double-integrator models [19] and general linear models [20, 21]. From these results, distributed event-triggered consensus schemes applicable to double-integrator models by the Lyapunov method are scarce, let alone for general linear models. This is because the consensus state of double-integrator agents is no longer a constant; that is, it is impossible, without global information, to find a measurable error with which to design the triggering condition via a Lyapunov function. The triggering thresholds of the measurement errors in most of the aforementioned references are state dependent, which is natural and convenient for constructing the ISS-Lyapunov function. On the other hand, [15, 20] proposed event-triggering schemes taking a constant or a time-dependent variable as the triggering threshold. Such schemes with state-independent triggering thresholds cannot reflect the evolution of the states; however, they do not require monitoring the neighbors' states for event triggering and can easily be extended to double-integrator [15] or general linear models [20].

With the exceptions of [16, 22, 23], most references defined the measurement error in the conventional, absolute form. In [16] the author proposed a combinational measuring approach to event design, by which the control input of each agent was piecewise constant between its own successive events. References [22, 23] exploited the relative state errors for the event design, known as edge events. In large-scale multiagent systems, absolute state measurements are unavailable or expensive; consequently, it is more reasonable to define a relative state-based measurement error instead of the conventional measurement error.

Given the limitations mentioned above, the objective of this paper is to find a solution of event design for general linear dynamics, using relative information in a distributed fashion, to achieve consensus. Firstly, it is notable that the consensus value of general linear models is in general not a constant [27]. Accordingly, the consensus value of general linear models can be represented as a constant by a change of variable. By virtue of a dynamic controller, the new variable evolves according to the single-integrator model. Subsequently, a distributed event-triggering scheme can be achieved based on the relative information of neighbors. This idea of a dynamic controller is inspired by [24] and was also introduced into event design by [25, 26]. Nevertheless, [25] considered the absolute measurement error and a state-independent triggering threshold; moreover, its event-triggering scheme requires an exact model of the real system. By also applying a variable-substitution method, [26] solved the consensus problem of a special type of high-order linear multiagent systems via event-triggered control. Secondly, since the full states are not available in practice, a distributed event-triggered observer is proposed using relative output information under a mild assumption. Meanwhile, the event triggering of the dynamic controller is independent of that of the observer. Thirdly, the detections of the triggering conditions of all agents occur at a sequence of times without requiring continuous monitoring.

The rest of this paper is organized as follows. Section 2 presents a formal problem description. In Section 3, a distributed event-triggered dynamic controller is given, which will be extended to the case of output feedback by proposing a distributed event-triggered observer in Section 4. Section 5 gives the simulations to validate the theoretical results. Conclusions are given in Section 6.

Notation. Throughout the paper, the sets and denote the -dimensional vectors and matrices, respectively. The notation refers to the Euclidean norm for vectors and the induced 2-norm for matrices. The superscript “ ” stands for transposition. means that is symmetric and positive definite, and the symbol represents the identity matrix of dimension . For any matrix , , and denote the minimum eigenvalue, maximum eigenvalue, spectral radius, and maximal singular value of , respectively.

2. Problem Description

This paper addresses sampled-data consensus problems for multiagent systems, taking event-triggered strategies into account.

We use a graph to model the network topology of multiagent systems, where is the node set and is the edge set. An edge in denotes that there is a directed information path from agent to agent . is called undirected if . is the weighted adjacency matrix associated with , such that if , and otherwise, as well as for all . The neighbor set of the th agent is denoted by . is the Laplacian matrix, where . For undirected graphs, the Laplacian matrix is symmetric and positive semidefinite. Hence, the eigenvalues of are real and can be ordered as , with , and is the smallest nonzero eigenvalue for connected graphs. Here, we impose the following assumption.
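These graph-theoretic facts can be checked numerically. The sketch below (assuming numpy; the 4-node path graph is purely illustrative) builds the Laplacian of an undirected weighted graph and verifies the eigenvalue ordering stated above:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A, with D the diagonal degree matrix."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

# A small undirected, connected path graph on 4 nodes (a_ij = a_ji, a_ii = 0).
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = laplacian(A)
# For undirected graphs L is symmetric positive semidefinite, so its
# eigenvalues are real and can be ordered 0 = lambda_1 <= lambda_2 <= ...
eigs = np.sort(np.linalg.eigvalsh(L))
```

For a connected graph the second-smallest eigenvalue (the smallest nonzero one) is strictly positive, which is exactly what the consensus analysis relies on.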

Assumption 1. Graph is fixed, undirected, and connected.
The dynamics of the th agent are described by the general linear time-invariant differential equation where is the state, is the output, and is the input. , , and are real constant matrices with appropriate dimensions, satisfying the following assumption.

Assumption 2. is stabilizable, is detectable, and all the eigenvalues of lie in the closed left-half plane.

We say that multiagent system (1) solves a consensus problem asymptotically under a given protocol if for any initial states and any . When the information transmissions among all agents are continuous, a general consensus protocol for the th agent takes the following form: where is the feedback gain matrix. A majority of references have facilitated the implementation of the above control law by event-triggered strategies; unfortunately, these existing results are generally not applicable to general linear dynamics.

In this paper, we denote by the relative state information of agent and its neighbors and denote by the difference between the controller states of agent and its neighbors. Instead of using continuous event detectors, we implement event-triggered control with discrete event detection. That is, the event detections of all agents occur at a sequence of times denoted by , which are periodic in the sense that , for some properly chosen sampling interval .

Consequently, we discretize (1) with sampling interval and propose the following discrete-time event-triggered dynamic control law for each agent in the case of as where is the state of the dynamic controller, is the control gain matrix to be designed, and where   is the triggering condition to be designed and is the measurement error at time defined as
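The hold-and-trigger mechanism behind this control law can be sketched as follows. Since the concrete triggering function is elided above, a generic norm threshold `f_threshold` stands in for it; the function names are hypothetical:

```python
import numpy as np

def triggered_value(x_now, x_last, f_threshold):
    """Zero-order hold with event triggering: keep using the value
    broadcast at the last event until the measurement error
    e = x_last - x_now violates the (here generic) triggering function."""
    e = x_last - x_now
    if np.linalg.norm(e) > f_threshold:
        return x_now, True    # event: update the broadcast value
    return x_last, False      # no event: hold the previous value
```

Between two successive events the control input computed from `x_last` stays piecewise constant, which is what reduces actuation and communication.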

The main objective of this paper is to design an event-triggered scheme for the dynamic controller (4) with respect to the measurement error detected at the time instants , such that the multiagent system (1) achieves consensus asymptotically.

3. Distributed Event-Triggered Control of Multiagent Systems

In this section, we focus on the case where state feedback is available. According to (3) and (4), we can define and obtain the following equation by combining (3), (4), and (6): Introducing the changes of variables and leads to Thus we obtain the discrete-time system (8) with and in compact form as
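The mechanism behind such a change of variables can be sketched as follows (a standard construction; the paper's exact substitution, whose symbols are elided above, may differ). For the continuous-time dynamics $\dot{x}_i = Ax_i + Bu_i$, define $\xi_i(t) = e^{-At}x_i(t)$; then

```latex
\dot{\xi}_i(t) = -A e^{-At} x_i(t) + e^{-At}\bigl(A x_i(t) + B u_i(t)\bigr)
               = e^{-At} B u_i(t),
```

so the new variable is driven purely by the (transformed) input, integrator-style, and a consensus value that is constant in time becomes attainable for the new states.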

Before giving the main result of this section, a lemma is presented as follows.

Lemma 3. Consider the system (9) under Assumption 1. If the triggering condition holds and then the states of system (9) asymptotically converge to a common value.

Proof. Consider the following ISS-Lyapunov function for system (9): and the time is briefly denoted by .
For any and any , Thus, for any , where .
Using the triggering condition (10) and choosing , for all , we bound as with .
Hence, if and , which implies that the states of system (9) converge to a common value.
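Lemma 3 can be illustrated numerically with a minimal sketch, assuming a 4-node path graph and a simple relative-state threshold in place of the elided condition (10); all parameter values here are illustrative, not the lemma's exact bounds:

```python
import numpy as np

def event_triggered_consensus(L, x0, h=0.1, c=0.5, sigma=0.2, steps=400):
    """Periodic event-triggered consensus for the single-integrator model.
    Between events each agent broadcasts its last triggered state xhat_i;
    at every detection instant t_k = k*h, agent i triggers when its
    measurement error exceeds a relative-state threshold (illustrative,
    not the paper's exact condition (10))."""
    x = np.asarray(x0, dtype=float).copy()
    xhat = x.copy()                       # last broadcast (zero-order hold)
    events = [0] * len(x)
    for _ in range(steps):
        z = L @ xhat                      # local disagreement from held states
        for i in range(len(x)):
            if abs(x[i] - xhat[i]) > sigma * abs(z[i]):
                xhat[i] = x[i]            # event: refresh the broadcast value
                events[i] += 1
        x = x - h * c * (L @ xhat)        # controller uses only held values
    return x, events

# Laplacian of the path graph on 4 nodes.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
xf, events = event_triggered_consensus(L, [1.0, -1.0, 2.0, 0.0])
```

Because the graph is undirected, the row and column sums of the Laplacian are zero, so the state average is preserved and the held states converge to it while triggering far fewer updates than detections.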

Remark 4. It is known that the consensus states of general linear dynamics are , which are not a common value except in the case of single-integrator dynamics. Thus the usual Lyapunov function, defined with regard to of general linear dynamics and exploited in the design of distributed event-triggered controllers, becomes invalid, as it does not converge to zero. Notice that the design of the distributed event-triggering scheme for general linear dynamics is converted to the single-integrator case by virtue of the dynamic controller (4). Nevertheless, whether the consensus of system (9) implies the consensus of the original system (1) depends on the stability of the state matrix A, as the next result makes explicit.

Theorem 5. Consider the system (1) with the dynamic control law (4) and suppose that Assumptions 1 and 2 hold. The triggering condition of (4) is determined by (16) with (11). In addition, , whenever , where denote the eigenvalues of . Then, for any initial states, there exists a matrix such that all agents asymptotically achieve the consensus state .

Proof. There is a gap to fill before the consensus of system (1) can be deduced from that of (9), although the consensus condition of system (9) has been established by Lemma 3. From Lemma 3, , in system (9) exponentially converge to as , where is the left eigenvector of associated with the zero eigenvalue; that is, there exist constants and such that In systems (3) and (4), this means that which implies that the consensus of system (3) depends not only on the network topology and control strategy, but also on the dynamics of the isolated agents. It is known from [28] that there exists a positive number such that, for any , if , , while if . Thus, for agents without exponentially unstable eigenvalues, the convergence rate toward consensus always dominates the instability of the eigenvalues on the imaginary axis. Therefore, we can verify that as and obtain the triggering condition (16) from (10) directly. Based on this fact, in (4) by the definition of , , and . It follows that as long as there exists a matrix such that is stable, which is assured by , whenever , where denote the eigenvalues of [29]. Thus, as , and stable intersample behaviour is also guaranteed, which concludes that as .
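The role of the eigenvalue condition on the sampling period can be illustrated numerically. The sketch below (assuming scipy is available; the double integrator and harmonic oscillator are illustrative stand-ins) shows that the classical pathological-sampling case, where the sampling period resonates with imaginary-axis eigenvalues, destroys controllability of the discretized pair, while non-pathological periods preserve it:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, h):
    """Exact zero-order-hold discretization via the augmented-matrix
    trick: expm([[A, B], [0, 0]] * h) = [[A_d, B_d], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * h)
    return Md[:n, :n], Md[:n, n:]

def controllable(Ad, Bd):
    n = Ad.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(Ad, k) @ Bd for k in range(n)])
    # Absolute tolerance so a numerically-zero B_d counts as rank deficient.
    return np.linalg.matrix_rank(ctrb, tol=1e-9) == n

# Double integrator: eigenvalues 0, 0 -- any h > 0 is non-pathological.
A1, B1 = np.array([[0., 1.], [0., 0.]]), np.array([[0.], [1.]])
# Harmonic oscillator: eigenvalues +-j; h = 2*pi is pathological sampling.
A2, B2 = np.array([[0., 1.], [-1., 0.]]), np.array([[0.], [1.]])
```

At h = 2π the oscillator's discretized input matrix vanishes, so no discrete-time gain can stabilize it; any other small h keeps the pair controllable.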

Remark 6. As seen from the triggering condition (16), we do not need an exact model of the real system for event detection, in contrast to [25], even though the changes of variables involving the system matrix are introduced.

4. Distributed Event-Triggered Observer-Based Control of Multiagent Systems

In many applications, full state measurements are not available for feedback, and absolute output measurements of each agent are also impractical. In this section, we design a distributed event-triggered observer using relative output information under the following assumption.

Assumption 7. There exists at least one agent in the graph that knows its own absolute output information.

Remark 8. Assumption 7 is not a very strong restriction on the system; for example, in practical large-scale multirobot systems it suffices for a single robot to be equipped with high-performance GPS.

We denote by the estimate of the state and by the consequent estimate of the output . Then, we denote by the relative output information of agent and its neighbors and by the estimate of that relative output information. The distributed event-triggered observer is designed in the following form: where where   is the triggering condition to be designed, if agent knows its own absolute output information, and otherwise. And is the measurement error at time defined as Based on the above estimate of , in (4), (6), and (16) can be replaced by . Then, we can derive the following result.

Theorem 9. Consider the system (3) with the observer (19) and suppose that Assumptions 1, 2, and 7 hold. The triggering condition of (19) is determined by where , is identical to the detection period of the controller updates in Theorem 5. Then there exist a matrix F and a coupling gain c such that the estimation error dynamics (23) are asymptotically stable.

Proof. Denote by the state estimation error for agent ; then subtracting (19) from (3) gives the dynamics of the state estimation error in compact form as where , , and , . By Assumption 7, we can easily conclude that all the eigenvalues of the matrix, denoted by , are real and positive.
Since , there exists an orthogonal matrix such that . Introduce the state transformation ; then (23) becomes where , , and .
It is known from [30] that for all eigenvalues , of matrix when the matrix is chosen, and if there exists a covering circle centered at and containing all eigenvalues of with radius such that , where is a solution of the DARE , then   can be concluded.
Now we show that there exists an ISS-Lyapunov function , for the system (24), which satisfies where are class functions and . We know that satisfies (25) with and . And (26) with (24) is equivalent to the following LMI: By the Schur complement, (27) is equivalent to Since , there always exist a and a constant such that Hence, (28) holds provided the chosen is large enough.
Consider (26); we get Applying inequality (30) repeatedly yields According to the triggering condition (22) and condition (25), inequality (31) yields Thus, which concludes that .
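The DARE-based observer design can be sketched for a single agent as follows (a Kalman-style construction, assuming scipy; the paper's DARE additionally involves the graph eigenvalues and a coupling gain, which are elided here, so this is only an illustration of the mechanism):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def observer_gain(Ad, C, Q=None, R=None):
    """Observer gain from the discrete algebraic Riccati equation.
    Q and R are illustrative weighting matrices, not the paper's."""
    n = Ad.shape[0]
    Q = np.eye(n) if Q is None else Q
    R = np.eye(C.shape[0]) if R is None else R
    P = solve_discrete_are(Ad.T, C.T, Q, R)
    return Ad @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# Illustrative single agent: double integrator discretized with h = 0.1,
# with only the position measured.
h = 0.1
Ad = np.array([[1.0, h], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
F = observer_gain(Ad, C)
# Estimation-error dynamics e[k+1] = (Ad - F C) e[k] must be Schur stable.
rho = max(abs(np.linalg.eigvals(Ad - F @ C)))
```

Since the pair is detectable, the resulting error dynamics have spectral radius strictly below one, which is the discrete-time analogue of the asymptotic stability claimed in Theorem 9.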

Corollary 10. Under Assumptions 1, 2, and 7, consider the system (1) with the dynamic control law (4) and its triggering condition (16), where in (4) and (16) is estimated by the observer (19) with its triggering condition (22). Assume that the sampling period of the event detections and the parameters of the triggering conditions satisfy the conditions stated in Theorem 5. Then, for any initial states, there exist matrices and a coupling gain such that all agents achieve consensus asymptotically.

Remark 11. Notice that the events of the dynamic controller and the observer are triggered independently, although both need to measure the states of the observer for event detection. Theorem 9 has proved that the state of each agent can be estimated by the proposed event-triggered observer. Thus, Corollary 10 is established by the separation principle. However, a drawback of triggering condition (22) is that it is not clear how the state-independent triggering function should be designed.

5. Examples

In this section, two agent models, double-integrator dynamics and the linearized dynamics of the Caltech multivehicle wireless testbed vehicles, are considered to illustrate the theoretical results. It will be shown that consensus can be achieved for general linear dynamics by the distributed event-triggered dynamic controller, in both the state-based and observer-based cases.

Example 1. Consider system (1) with six agents and the following system states and matrices: where are the positions of the th agent along the and coordinates. The initial states of the agents are , respectively. Let and , for all , which also apply to the subsequent examples.

The fixed network topology in Figure 1 is chosen. By calculation, ; the sampling period of event detection and the parameters of the triggering conditions for all agents are then chosen as and , which satisfy all the conditions of Theorem 5. The gain matrix is then obtained. Using the dynamic controller (4) and triggering condition (16), the state trajectories of the six agents over the time interval are shown in Figure 2. Event-triggering instants and control inputs of the six agents are shown in Figures 3 and 4, respectively. It can easily be seen that consensus is achieved under discrete-time event detection; moreover, the controller updates of each agent occur only at its own event-triggering instants.

Example 2. A linearized model of the Caltech multivehicle wireless testbed vehicles in [31] is considered here. The system states and matrices of the six agents are described as where are the positions of the th agent along the and coordinates, respectively, and is the orientation of the th agent. The initial states of the agents are , , , , , and , respectively. The network topology, sampling period, and parameters of the triggering conditions are the same as in Example 1, and It should be noted that [26] is incapable of dealing with the above system matrices because they do not satisfy the condition in [26]. The computer simulations illustrate the validity of the proposed dynamic controller (4) and triggering condition (16) on the system (37). The state trajectories of the agents are depicted in Figure 5. Event-triggering instants and control inputs of the six agents are shown in Figures 6 and 7, respectively.

The simulation results in Figures 3 and 6 are summarized in Tables 1 and 2, respectively, which show that both actuation and communication updates are evidently reduced. In addition, a minimum positive interevent interval is guaranteed by the sampling period of the event detectors.

Example 3. Consider the output-feedback case for the dynamics in Example 1 with . From Theorem 9, we obtain and . The parameters of triggering condition (22) are chosen as and . The initial states of the agents, the design of the dynamic controller with its triggering condition, and the network topology are the same as in Example 1. As expected, the simulation results demonstrate consensus. As observed from Figure 8, the convergence rate of the estimation errors is fast, so the evolution of the agent states approximates that of Example 1. As shown in Figure 9, the mean event intervals are only slightly affected by the estimated states, while the number of observer events under triggering condition (22) is considerable. Unfortunately, how to reduce the unnecessary updates triggered by the time-dependent triggering function remains an open problem.

6. Conclusion

This paper studies the distributed event-triggered control of multiagent systems under a fixed undirected network topology. In order to find distributed event-triggering schemes applicable to general linear dynamics, a dynamic controller is employed to convert the general linear dynamics to a single-integrator model by a change of variable. A distributed event-triggering scheme using only the relative state information of neighbors is thereby obtained to update the control law, with the triggering condition detected periodically. The result is then extended to the design of an event-triggered observer-based controller via relative output information. These theoretical results have been verified by simulations. Further work will focus on the event-triggered consensus of general linear dynamics in multiagent systems with switching topologies and/or time delays, and on other issues in multiagent systems such as event-triggered formation and/or containment control.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The work was partly supported by the National Natural Science Foundation of China (Grants nos. 61174140, 61174050, and 61203016).