Abstract

This paper studies a class of event-triggered discrete-time distributed consensus optimization algorithms for a set of agents whose communication topology is described by a sequence of time-varying networks. The communication process is driven by independent trigger conditions observed by the agents; it is fully decentralized and depends only on each agent's own state. At each time step, each agent has access only to its private locally Lipschitz convex objective function. At the next time step, every agent updates its state using its own objective function and the information received from its neighboring agents. Under the assumption that the network topology is uniformly strongly connected and weight-balanced, the proposed event-triggered distributed subgradient algorithm steers the whole network of agents asymptotically to an optimal solution of the convex optimization problem. Finally, a simulation example is given to validate the effectiveness of the proposed algorithm and demonstrate the feasibility of the theoretical analysis.

1. Introduction

In the last decade, multiagent systems have seen notable achievements in theory and application, such as the consensus problem [1–7], the flocking problem [8–10], resource allocation control [11–13], and so on [14–17]. Multiagent systems not only offer resource sharing, good coordination, high distribution, and strong autonomy, but can also cooperatively solve large-scale complex tasks [18–23]. However, as an emerging field, the coordination control of multiagent systems still faces many problems in theoretical research and practical applications. As one of the most important research subjects in the field of coordination control of multiagent systems, the consensus problem has received considerable attention over the past few years due to its expanding applications in the cooperative control of highway systems, mobile multirobot systems, the design of distributed sensor networks, and other areas [24–31]. Generally speaking, consensus means that the states of all agents asymptotically, or even exponentially, reach agreement on a common value by effectively using each agent's local information. Naturally, further study is still needed to guide practical applications and realize greater value.

Among the existing papers, the consensus-based subgradient methods for solving the distributed convex optimization problem have drawn a surge of attention since Nedic and Ozdaglar presented a systematic analysis in [32]. To date, many valuable consensus-based subgradient algorithms have been proposed. A projection-based distributed subgradient algorithm for distributed optimization was put forward by Nedic et al. [33], where each agent was constrained to an individual closed convex set. Convergence to the same optimal solution was proved for the cases when the weights were constant and equal and when the weights were time-varying but all agents had the same constraint set. Distributed algorithms for set-constrained optimization were further investigated by Bianchi and Jakubowicz [34] and Lou et al. [35]. To work out distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian and penalty primal-dual subgradient algorithms were developed by Zhu and Martinez [36] and Towfic and Sayed [37]; both were designed for function-constrained problems. Meanwhile, distributed convex optimization problems over general networks were settled by Lobel et al. [38] and Matei and Baras [39]. Recent works [40–45] have focused on the consensus of multiagent systems by designing control protocols based on event-triggered sampling schemes. Lu and Tang [41] proposed a continuous-time consensus-based zero-gradient-sum (ZGS) algorithm, which is built on the condition that the sum of the local gradients is zero. Seyboth et al. [42] studied a variant of the event-triggered average consensus problem for single-integrator and double-integrator agents, where a novel control strategy for multiagent coordination was employed to simplify the performance and convergence analysis of the method. To handle the more general case in which the mean-square consensus of multiple agents is affected by noise over directed networks, Hu et al. [44] proposed novel centralized and decentralized event-triggered protocols and established their convergence. In more recent literature, Li et al. [45] investigated event-triggered nonlinear consensus in directed multiagent systems with combinational state measurements. In general, event-triggered sampling schemes were introduced into the implementations of the aforementioned methods.

Our method builds on the pioneering work of [33, 46, 47]. Nedic et al. [33] assumed that each agent was constrained to remain in a closed convex set; convergence to the same optimal solution was proved for the cases when the weights were constant and equal and when the weights were time-varying but all agents had the same constraint set. Furthermore, paper [46] developed a broadcast-based algorithm, called the subgradient-push, which guides every agent to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires knowledge of neither the number of agents nor the graph sequence for its implementation. In order to avoid unnecessary communication and to ensure fast and exact convergence, Chen and Ren [47] presented an event-triggered zero-gradient-sum distributed consensus optimization algorithm over directed networks under the general assumption that the objective function is twice continuously differentiable.

Contributions. Inspired by the aforementioned works, this paper proposes a novel distributed subgradient algorithm for multiagent convex optimization with an event-triggered sampling scheme. Previous works leave a gap in the application of such distributed algorithms to multiagent networks; for example, they study discrete-time distributed consensus optimization over time-varying graphs without a trigger condition, or they consider event-based distributed consensus of multiagent systems only over general fixed networks. In contrast, our method integrates the event-triggered scheme with discrete-time distributed consensus optimization over time-varying graphs. Also, we only require that the network topology be uniformly strongly connected and weight-balanced, which broadens the applicability of the algorithm. More precisely, the contribution of this paper is mainly threefold. Firstly, we study the convex optimization problem of discrete-time multiagent systems under a distributed event-triggered sampling control scheme, where the event-triggered control strategy eliminates unnecessary communications among neighboring agents, reducing computation costs and energy consumption in practice. Secondly, based on the assumption that the digraph sequence is weight-balanced and uniformly strongly connected, we introduce a novel distributed subgradient algorithm driven by this event-triggered sampling scheme. Thirdly, we establish the convergence of the algorithm and prove that it achieves an optimal point of the sum of the agents' local objective functions while satisfying the trigger condition.

The remainder of this paper is organized as follows. Some essential concepts and background on graph theory are given, and the problem is formulated, in Section 2. Then, in Section 3, the main results are presented. Furthermore, the effectiveness of the algorithm is verified by a numerical example in Section 4. Finally, the conclusion is drawn in Section 5.

2. Preliminaries and Concepts

In this section, we present some important mathematical preliminaries, including algebraic graph theory, notation, and the problem formulation (see [48, 49]).

2.1. Algebraic Graph Theory

We always employ a graph to describe the information exchange between nodes. The information exchange among nodes in an interaction topology can be modeled as a weighted directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, where $\mathcal{V} = \{1, 2, \ldots, n\}$ is the set of vertices, with $i$ representing vertex $i$, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The graph is assumed to be simple, that is, with no repeated edges or self-loops. The weighted adjacency matrix of $\mathcal{G}$ is denoted by $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{n \times n}$ with nonnegative adjacency elements $a_{ij} \ge 0$ and zero diagonal elements. Note that the diagonal elements satisfy $a_{ii} = 0$ and $\mathcal{A}$ is generally an asymmetric matrix. A directed edge denoted by a pair $(j, i) \in \mathcal{E}$ implies that node $j$ can send information to node $i$, or equivalently node $i$ can receive information from node $j$. If an edge $(j, i) \in \mathcal{E}$, then node $j$ is called a neighbor of node $i$ and $a_{ij} > 0$. The neighbor index set of node $i$ is denoted by $N_i = \{j \in \mathcal{V} : (j, i) \in \mathcal{E}\}$, while $|N_i|$ denotes the number of neighbors of node $i$. The Laplacian matrix $L = [l_{ij}]$ of the digraph $\mathcal{G}$ associated with the adjacency matrix $\mathcal{A}$ is defined by $l_{ij} = -a_{ij}$, $i \ne j$; $l_{ii} = \sum_{j \ne i} a_{ij}$, which ensures that $\sum_{j=1}^{n} l_{ij} = 0$. The in-degree and out-degree of node $i$ can be defined through the adjacency matrix as $\deg_{\mathrm{in}}(i) = \sum_{j=1}^{n} a_{ij}$ and $\deg_{\mathrm{out}}(i) = \sum_{j=1}^{n} a_{ji}$, respectively. A directed path from node $i_1$ to node $i_p$ is a sequence of edges $(i_1, i_2), (i_2, i_3), \ldots, (i_{p-1}, i_p)$ in the directed graph with distinct nodes $i_k$, $k = 1, \ldots, p$. A directed graph is strongly connected if and only if, for any two distinct nodes $i$ and $j$ in the set $\mathcal{V}$, there exists a directed path from node $i$ to node $j$. A graph is called an out-degree (or in-degree) balanced graph if the out-degrees (or in-degrees) of all nodes in the directed graph are equal. A directed graph with $n$ nodes is called a directed tree if it has $n - 1$ edges and there exists a root node with directed paths to every other node. For a directed graph, a directed tree is regarded as a directed spanning tree if it contains all network nodes.
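As a concrete illustration of these notions, the following Python sketch builds the adjacency and Laplacian matrices of a small digraph and checks the balance property; the 5-node directed ring used here is an illustrative assumption, not the example graph of Section 4:

```python
import numpy as np

# Illustrative 5-node directed ring: A[i, j] = a_ij > 0 when node i
# receives information from node j (an assumed graph, for illustration).
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]], dtype=float)

in_deg = A.sum(axis=1)            # in-degree of node i:  sum_j a_ij
out_deg = A.sum(axis=0)           # out-degree of node i: sum_j a_ji
L = np.diag(in_deg) - A           # Laplacian, built so that L @ 1 = 0

print(np.allclose(L @ np.ones(5), 0))   # True: row sums of L are zero
print(np.allclose(in_deg, out_deg))     # True: the ring is degree-balanced
```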

2.2. Notations

Some standard mathematical notations used throughout this paper are listed in the following. $\mathbb{R}$, $\mathbb{R}^m$, and $\mathbb{R}^{m \times n}$ refer to the set of real numbers, the set of real $m$-dimensional vectors, and the set of real $m \times n$ matrices, respectively. $I_n$ and $0_{m \times n}$ denote the identity matrix and the zero matrix, respectively. Let $\mathbf{1}$ and $\mathbf{0}$ refer to the vectors with all entries equal to one and to zero, respectively. We let $x^T$ or $A^T$ denote the transpose of a vector $x$ or a matrix $A$. For a vector $x$, we denote its $i$th entry by $x_i$, while $\|x\| = \sqrt{x^T x}$ is the standard Euclidean norm in the Euclidean space. $\nabla F(x)$ denotes the gradient of $F$ at $x$. For a matrix $A$, we write $[A]_{ij}$ or $a_{ij}$ to denote its $(i, j)$th entry.

2.3. Distributed Optimization Problem

In this subsection, we consider a network of $n$ nodes whose objective is to solve the following distributed minimization problem:

$$\min_{x \in \mathbb{R}^m} f(x) = \sum_{i=1}^{n} f_i(x), \tag{1}$$

where $f_i : \mathbb{R}^m \to \mathbb{R}$ is the convex objective function of agent $i$ and $x \in \mathbb{R}^m$ is a global decision vector. Assume that $f_i$ is known only by agent $i$ and that the local objectives are possibly different. Under the condition that the set of optimal solutions $X^* = \arg\min_{x \in \mathbb{R}^m} f(x)$ is nonempty, we denote by $f^*$ the optimal value of $f$ and by $x^* \in X^*$ an optimizer of (1).

In this paper, we do not assume differentiability of the local objective functions $f_i$. At the points where a function is not differentiable, the subgradient plays the role of the gradient. For a given convex function $F : \mathbb{R}^m \to \mathbb{R}$ and a point $\bar{x} \in \mathbb{R}^m$, a subgradient of the function $F$ at $\bar{x}$ is a vector $g \in \mathbb{R}^m$ such that the following subgradient inequality holds for any $x \in \mathbb{R}^m$:

$$F(x) \ge F(\bar{x}) + g^{T}(x - \bar{x}). \tag{2}$$
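As a quick sanity check of inequality (2), the following sketch verifies numerically that $g = 0.5$ is a subgradient of the nondifferentiable convex function $f(x) = |x|$ at $\bar{x} = 0$ (any $g \in [-1, 1]$ works there):

```python
import numpy as np

# Check the subgradient inequality (2) for f(x) = |x| at x_bar = 0,
# where any g in [-1, 1] is a valid subgradient.
f = np.abs
x_bar, g = 0.0, 0.5
xs = np.linspace(-2.0, 2.0, 401)
assert np.all(f(xs) >= f(x_bar) + g * (xs - x_bar))   # inequality (2) holds
print("g = 0.5 is a subgradient of |x| at 0")
```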

The following assumptions are needed in the analysis of the distributed optimization algorithm throughout this paper.

Assumption 1 (weight-balanced). $\mathcal{G}(k)$ is weight-balanced if $\sum_{j=1}^{n} a_{ij}(k) = \sum_{j=1}^{n} a_{ji}(k)$ for all $i \in \mathcal{V}$ and all $k \ge 0$.

Assumption 2 (uniformly strongly connected). Let $E_\infty = \{(j, i) : (j, i) \in E(k) \text{ for infinitely many } k\}$. There exists an integer $B \ge 1$ such that, for every $(j, i) \in E_\infty$, agent $j$ sends its information to the neighboring agent $i$ at least once every $B$ consecutive time slots, that is, at time $k$ or at time $k + 1$ and so on until (at last) at time $k + B - 1$ for any $k \ge 0$. In other words, the graph sequence $\{\mathcal{G}(k)\}$ is uniformly strongly connected.

Assumption 3 (subgradient boundedness). The subgradients of each $f_i$ are uniformly bounded; that is, there exists $L > 0$ such that $\|d_i\| \le L$ for all subgradients $d_i$ of $f_i$, for all $i$ and all points.

3. Main Results

In this section, motivated by [35, 36], we provide a novel distributed subgradient algorithm to solve the optimization problem (1), followed by its convergence properties. To this end, we consider a group of $n$ agents with the communication topology described by a sequence of uniformly strongly connected time-varying digraphs $\{\mathcal{G}(k)\}$ as before. The distributed subgradient algorithm is a discrete-time dynamical system, which is described as follows.

3.1. Distributed Subgradient Algorithm

Consider a set of $n$ agents. Formally, at each iteration $k \ge 0$, agent $i$ updates its state according to the following law:

$$x_i(k+1) = x_i(k) + h \sum_{j=1}^{n} a_{ij}(k)\big(\hat{x}_j(k) - \hat{x}_i(k)\big) - \alpha_k d_i(k), \tag{3}$$

where $k$ is the iteration number, $h > 0$ is the control gain, $t^i_{k'}$ denotes the instant when the $k'$th event happens for agent $i$, $\hat{x}_i(k) = x_i(t^i_{k'})$ for $k \in [t^i_{k'}, t^i_{k'+1})$, the positive scalars $\alpha_k$ are step-sizes, the scalars $a_{ij}(k)$ are nonnegative weights with an upper bound $\bar{a} > 0$, and the vector $d_i(k)$ is a subgradient of agent $i$'s objective function $f_i$ at $x_i(k)$.
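A minimal sketch of one step of update (3), assuming scalar agent states; `x_hat` holds the last event-sampled (broadcast) states $\hat{x}_i(k)$, and the helper names (`step`, `subgrad`) are illustrative rather than fixed by the paper:

```python
import numpy as np

# One iteration of update (3) for scalar states. subgrad(i, v) should
# return a subgradient of f_i at v; A_k is the weight matrix [a_ij(k)].
def step(x, x_hat, A_k, h, alpha_k, subgrad):
    x_next = x.copy()
    for i in range(len(x)):
        # consensus term driven by the last *broadcast* states
        consensus = h * np.dot(A_k[i], x_hat - x_hat[i])
        # subgradient step on agent i's private objective
        x_next[i] = x[i] + consensus - alpha_k * subgrad(i, x[i])
    return x_next
```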

Remark 4. Denote the measurement error as

$$e_i(k) = x_i(t^i_{k'}) - x_i(k), \qquad k \in [t^i_{k'}, t^i_{k'+1}). \tag{4}$$

Then, we can rewrite algorithm (3) into the following form:

$$x_i(k+1) = x_i(k) + h \sum_{j=1}^{n} a_{ij}(k)\big(x_j(k) - x_i(k)\big) - \alpha_k d_i(k) + \varepsilon_i(k), \tag{5}$$

where $\varepsilon_i(k)$ is an error term. Due to (5), it follows that

$$\varepsilon_i(k) = h \sum_{j=1}^{n} a_{ij}(k)\big(e_j(k) - e_i(k)\big). \tag{6}$$

Modifying the second term on the right-hand side of (5), we then have

$$x_i(k+1) = \Big(1 - h \sum_{j=1}^{n} a_{ij}(k)\Big) x_i(k) + h \sum_{j=1}^{n} a_{ij}(k)\, x_j(k) - \alpha_k d_i(k) + \varepsilon_i(k). \tag{7}$$

Letting $w_{ii}(k) = 1 - h \sum_{j=1}^{n} a_{ij}(k)$ and $w_{ij}(k) = h\, a_{ij}(k)$ for $j \ne i$, one has

$$x_i(k+1) = \sum_{j=1}^{n} w_{ij}(k)\, x_j(k) - \alpha_k d_i(k) + \varepsilon_i(k). \tag{8}$$

Since the graph is weight-balanced (Assumption 1), we then have that $\sum_{i=1}^{n} w_{ij}(k) = 1$ and $\sum_{j=1}^{n} w_{ij}(k) = 1$ hold if $h$ satisfies $0 < h < 1/(n\bar{a})$.

Before giving some supporting lemmas, we need the following assumptions on the sequence of step-sizes $\{\alpha_k\}$ and the weights $\{w_{ij}(k)\}$.

Assumption 5 (step-size assumption). The step-sizes $\{\alpha_k\}$ form a positive sequence satisfying

$$\sum_{k=0}^{\infty} \alpha_k = \infty, \qquad \sum_{k=0}^{\infty} \alpha_k^2 < \infty. \tag{9}$$
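For instance, the classical diminishing step-size rule satisfies both conditions of Assumption 5, as the following worked check shows:

```latex
% Example: \alpha_k = 1/(k+1) satisfies Assumption 5, since
\sum_{k=0}^{\infty} \alpha_k = \sum_{k=0}^{\infty} \frac{1}{k+1} = \infty
\quad\text{(harmonic series)}, \qquad
\sum_{k=0}^{\infty} \alpha_k^2 = \sum_{k=0}^{\infty} \frac{1}{(k+1)^2}
= \frac{\pi^2}{6} < \infty.
```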

Assumption 6 (nondegeneracy). There exists a constant $\eta$ with $0 < \eta < 1$ such that, for all $k \ge 0$, (a) $w_{ii}(k) \ge \eta$ for all $i$ and (b) if $a_{ij}(k) > 0$, then $w_{ij}(k) \ge \eta$.
Next, the transition matrices are introduced as follows:

$$\Phi(k, s) = W(k)\, W(k-1) \cdots W(s+1)\, W(s), \qquad k \ge s \ge 0, \tag{10}$$

where $W(k) = [w_{ij}(k)]$ for all $k \ge 0$. The $j$th column of $\Phi(k, s)$ is defined as $[\Phi(k, s)]^j$. Meanwhile, the entry in the $j$th column and $i$th row of $\Phi(k, s)$ is described as $[\Phi(k, s)]^j_i$. A crucial property of the transition matrices is given in the following lemma, which plays a key role in our analysis of the algorithm.
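A quick numerical illustration of the convergence property formalized in Lemma 7 below; the static doubly stochastic ring weights and the gain $h = 0.4$ are assumptions chosen for the demonstration:

```python
import numpy as np

# Products of doubly stochastic weight matrices approach (1/n) * ones(n, n).
n, h = 5, 0.4
A = np.roll(np.eye(n), 1, axis=1)                  # directed ring adjacency
W = np.eye(n) - h * (np.diag(A.sum(axis=1)) - A)   # W = I - h L, doubly stochastic
Phi = np.linalg.matrix_power(W, 100)               # Phi(k, s) with k - s = 100
print(np.abs(Phi - 1.0 / n).max())                 # tiny: geometric convergence
```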

Lemma 7 (see [35]). Let the weight-balanced Assumption 1, the uniformly strongly connected Assumption 2, and the nondegeneracy Assumption 6 hold. Then the entries of the transition matrices converge to $1/n$ as $k \to \infty$ at a uniform geometric rate with respect to $i$ and $j$; that is, for all $i$, $j$,

$$\Big| [\Phi(k, s)]^j_i - \frac{1}{n} \Big| \le 2\, \frac{1 + \eta^{-B_0}}{1 - \eta^{B_0}} \big(1 - \eta^{B_0}\big)^{(k-s)/B_0}, \qquad k \ge s \ge 0, \tag{11}$$

where $\eta$ is the lower bound of the nondegeneracy Assumption 6, $n$ is the number of agents, $B_0 = (n-1)B$, and $B$ is given in the uniformly strongly connected Assumption 2.

Proof. We refer the reader to the papers [35, 37] for proofs of this and similar assertions.

Before moving on, it is important to introduce the following lemma.

Lemma 8 (see [33]). Let $0 < \sigma < 1$ and let $\{\gamma_k\}$ be a positive scalar sequence. Suppose $\lim_{k \to \infty} \gamma_k = 0$. Then

$$\lim_{k \to \infty} \sum_{r=0}^{k} \sigma^{k-r} \gamma_r = 0. \tag{12}$$

Furthermore, if $\sum_{k=0}^{\infty} \gamma_k < \infty$, then

$$\sum_{k=0}^{\infty} \sum_{r=0}^{k} \sigma^{k-r} \gamma_r < \infty. \tag{13}$$

Proof. We do not give the proof of Lemma 8 since it is almost identical to that of Lemma  7 in [33].
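A numerical check of the two conclusions of Lemma 8, with the illustrative choices $\sigma = 0.9$ and $\gamma_r = 1/(r+1)^2$ (a summable sequence):

```python
import numpy as np

# Convolution sums from Lemma 8 with sigma = 0.9 and gamma_r = 1/(r+1)^2.
sigma, K = 0.9, 2000
gamma = 1.0 / (np.arange(K) + 1.0) ** 2
conv = [sum(sigma ** (k - r) * gamma[r] for r in range(k + 1)) for k in range(K)]
print(conv[-1])    # close to 0: the weighted sum (12) vanishes as k grows
print(sum(conv))   # stays bounded: the double sum (13) is finite
```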

Chen and Ren [47] provided an event-triggered consensus algorithm for multiagent systems. Inspired by this work, we design a new event-triggered scheme, applied as follows.

We now define the triggering time sequence $\{t^i_{k'}\}$ of agent $i$ by

$$t^i_{k'+1} = \inf\big\{ k > t^i_{k'} : f_i(k, e_i(k)) > 0 \big\}, \tag{14}$$

where

$$f_i(k, e_i(k)) = \|e_i(k)\| - c_1 \lambda^k \tag{15}$$

is referred to as the trigger function for all $i$, $c_1 > 0$, and $0 < \lambda < 1$. Therefore, we can conclude that $e_i(k) = 0$ at $k = t^i_{k'}$.
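The trigger test can be sketched as follows, under the exponential-threshold form used in (15); `c1` and `lam` are illustrative design parameters:

```python
import numpy as np

# Event-trigger test for agent i at time k, following the form of (15).
def should_trigger(x_i, x_hat_i, k, c1=0.5, lam=0.95):
    e_i = x_hat_i - x_i                          # measurement error (4)
    return np.linalg.norm(e_i) > c1 * lam ** k   # trigger function (15) > 0

# When the test fires, agent i broadcasts its current state and resets
# x_hat_i = x_i, so that e_i = 0 at the triggering instant.
```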

Lemma 9. Under the trigger condition (15), the error term in (8) satisfies

$$\lim_{k \to \infty} \|\varepsilon_i(k)\| = 0, \qquad i = 1, \ldots, n. \tag{16}$$

Proof. Since $f_i(k, e_i(k)) \le 0$ holds between consecutive triggering instants, we have for all $i$ and all $k \ge 0$

$$\|e_i(k)\| \le c_1 \lambda^k. \tag{17}$$

By the definition (6) of $\varepsilon_i(k)$ and the upper bound $\bar{a}$ of the weights, we therefore have

$$\|\varepsilon_i(k)\| \le h \sum_{j=1}^{n} a_{ij}(k) \big( \|e_j(k)\| + \|e_i(k)\| \big) \le 2 h \bar{a} n \max_{j} \|e_j(k)\|. \tag{18}$$

Substituting (15) into (18) yields

$$\|\varepsilon_i(k)\| \le 2 h \bar{a} n c_1 \lambda^k. \tag{19}$$

Taking the limits of both sides of (19) as $k \to \infty$, one can see that $\lim_{k \to \infty} \|\varepsilon_i(k)\| = 0$. Thus, the proof is completed.

Remark 10. Lemma 9 shows that the error term in system (8) decays to zero; as established in Theorem 11 below, the differences between the states of all the agents then vanish asymptotically.

3.2. Convergence Analysis

We next establish that the agents reach a consensus asymptotically, which means that the agent estimates converge to the same point as $k$ goes to infinity.

Theorem 11 (consensus). Let the weight-balanced Assumption 1, the uniformly strongly connected Assumption 2, the subgradient boundedness Assumption 3, the step-size Assumption 5, and the nondegeneracy Assumption 6 hold. Consider the sequences $\{x_i(k)\}$ produced by the distributed subgradient algorithm (3). We then have, for all $i$,

$$\lim_{k \to \infty} \|x_i(k) - \bar{x}(k)\| = 0, \qquad \bar{x}(k) = \frac{1}{n} \sum_{j=1}^{n} x_j(k). \tag{20}$$

Proof. It is obtained from (8) that, for $k \ge 1$,

$$x_i(k+1) = \sum_{j=1}^{n} w_{ij}(k) \Big( \sum_{l=1}^{n} w_{jl}(k-1)\, x_l(k-1) - \alpha_{k-1} d_j(k-1) + \varepsilon_j(k-1) \Big) - \alpha_k d_i(k) + \varepsilon_i(k). \tag{21}$$

Using the transition matrices $\Phi(k, s)$, we can write, for all $i$ and all $k \ge s \ge 0$,

$$x_i(k+1) = \sum_{j=1}^{n} [\Phi(k, s)]^j_i\, x_j(s) + \sum_{r=s+1}^{k} \sum_{j=1}^{n} [\Phi(k, r)]^j_i \big( \varepsilon_j(r-1) - \alpha_{r-1} d_j(r-1) \big) + \varepsilon_i(k) - \alpha_k d_i(k). \tag{22}$$

Therefore, taking $s = 0$, we have the following:

$$x_i(k+1) = \sum_{j=1}^{n} [\Phi(k, 0)]^j_i\, x_j(0) + \sum_{r=1}^{k} \sum_{j=1}^{n} [\Phi(k, r)]^j_i \big( \varepsilon_j(r-1) - \alpha_{r-1} d_j(r-1) \big) + \varepsilon_i(k) - \alpha_k d_i(k). \tag{23}$$

Then, using (8) and $\sum_{i=1}^{n} w_{ij}(k) = 1$, we can achieve that

$$\bar{x}(k+1) = \bar{x}(k) + \frac{1}{n} \sum_{j=1}^{n} \big( \varepsilon_j(k) - \alpha_k d_j(k) \big) = \bar{x}(0) + \frac{1}{n} \sum_{r=1}^{k+1} \sum_{j=1}^{n} \big( \varepsilon_j(r-1) - \alpha_{r-1} d_j(r-1) \big). \tag{24}$$

Together with (23), it yields that

$$x_i(k+1) - \bar{x}(k+1) = \sum_{j=1}^{n} \Big( [\Phi(k,0)]^j_i - \frac{1}{n} \Big) x_j(0) + \sum_{r=1}^{k} \sum_{j=1}^{n} \Big( [\Phi(k,r)]^j_i - \frac{1}{n} \Big) \big( \varepsilon_j(r-1) - \alpha_{r-1} d_j(r-1) \big) + \varepsilon_i(k) - \frac{1}{n} \sum_{j=1}^{n} \varepsilon_j(k) - \alpha_k d_i(k) + \frac{\alpha_k}{n} \sum_{j=1}^{n} d_j(k). \tag{25}$$

According to the properties of the standard Euclidean norm, we then have

$$\|x_i(k+1) - \bar{x}(k+1)\| \le \sum_{j=1}^{n} \Big| [\Phi(k,0)]^j_i - \frac{1}{n} \Big| \|x_j(0)\| + \sum_{r=1}^{k} \sum_{j=1}^{n} \Big| [\Phi(k,r)]^j_i - \frac{1}{n} \Big| \big( \|\varepsilon_j(r-1)\| + \alpha_{r-1} L \big) + 2 \max_j \|\varepsilon_j(k)\| + 2 \alpha_k L. \tag{26}$$

Letting $C_0 = \max_j \|x_j(0)\|$ and $\beta_r = \max_j \|\varepsilon_j(r)\|$, the following inequality can be built:

$$\|x_i(k+1) - \bar{x}(k+1)\| \le n C_0 \max_j \Big| [\Phi(k,0)]^j_i - \frac{1}{n} \Big| + n \sum_{r=1}^{k} \max_j \Big| [\Phi(k,r)]^j_i - \frac{1}{n} \Big| \big( \beta_{r-1} + \alpha_{r-1} L \big) + 2 \beta_k + 2 \alpha_k L. \tag{27}$$

Using the upper bound estimate for $[\Phi(k,s)]^j_i$ of Lemma 7, we get

$$\Big| [\Phi(k,s)]^j_i - \frac{1}{n} \Big| \le C \mu^{k-s} \tag{28}$$

for all $k \ge s \ge 0$, with $C = 2(1 + \eta^{-B_0})/(1 - \eta^{B_0})$ and $\mu = (1 - \eta^{B_0})^{1/B_0}$. Therefore, this together with (27) results in

$$\|x_i(k+1) - \bar{x}(k+1)\| \le n C_0 C \mu^{k} + n C \sum_{r=1}^{k} \mu^{k-r} \big( \beta_{r-1} + \alpha_{r-1} L \big) + 2 \beta_k + 2 \alpha_k L. \tag{29}$$

By Lemmas 8 and 9 and taking limits on both sides of (29), one can finally have $\lim_{k \to \infty} \|x_i(k) - \bar{x}(k)\| = 0$ for all $i$.

We now introduce a well-known convergence result, which is shown in the following lemma.

Lemma 12 (see [38]). Let $\{a_k\}$ be a nonnegative scalar sequence such that

$$a_{k+1} \le (1 + b_k)\, a_k - u_k + c_k, \tag{30}$$

where $b_k \ge 0$, $u_k \ge 0$, and $c_k \ge 0$ for all $k \ge 0$ with $\sum_{k=0}^{\infty} b_k < \infty$ and $\sum_{k=0}^{\infty} c_k < \infty$. Then, the sequence $\{a_k\}$ converges to some $a \ge 0$ and $\sum_{k=0}^{\infty} u_k < \infty$.

Proof. The proof follows the same lines as in [38], and thus it is omitted.
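A quick numerical illustration of recursion (30), with the illustrative summable sequences $b_k = c_k = 1/(k+1)^2$ and the nonnegative progress term $u_k = a_k/(k+1)$:

```python
# Iterating (30): a_{k+1} <= (1 + b_k) a_k - u_k + c_k with summable b_k, c_k.
a = 1.0
for k in range(5000):
    b = c = 1.0 / (k + 1) ** 2
    u = a / (k + 1)                  # nonnegative "progress" term
    a = (1 + b) * a - u + c
print(a)   # the iterates settle to a limit (here near 0), as Lemma 12 predicts
```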

In what follows, we present a key lemma, which is important in the analysis of the distributed optimization algorithm. Thereafter, we study the convergence behavior of the subgradient algorithm and show that an optimal solution is asymptotically reached.

Lemma 13 (see [46]). Consider an optimization problem $\min_{x \in \mathbb{R}^m} F(x)$, where $F$ is a continuous objective function. Assume that the optimal solution set $X^*$ of the optimization problem is nonempty. Let $\{x(k)\}$ be a sequence such that, for all $x^* \in X^*$ and all $k \ge 0$,

$$\|x(k+1) - x^*\|^2 \le (1 + b_k) \|x(k) - x^*\|^2 - u_k \big( F(x(k)) - F(x^*) \big) + c_k, \tag{31}$$

where $b_k \ge 0$, $u_k \ge 0$, and $c_k \ge 0$ for all $k \ge 0$ with $\sum_{k=0}^{\infty} b_k < \infty$, $\sum_{k=0}^{\infty} u_k = \infty$, and $\sum_{k=0}^{\infty} c_k < \infty$. Then the sequence $\{x(k)\}$ converges to some optimal solution $x^* \in X^*$.

Proof. The proof follows the same lines as that of Lemma 7 in [46], and thus it is omitted.

Theorem 14 (optimization). Let the weight-balanced Assumption 1, the uniformly strongly connected Assumption 2, the subgradient boundedness Assumption 3, the step-size Assumption 5, and the nondegeneracy Assumption 6 hold. For problem (1), consider the sequences $\{x_i(k)\}$ being updated by the distributed subgradient algorithm (3). Then there exists an optimal solution $x^* \in X^*$ such that

$$\lim_{k \to \infty} x_i(k) = x^*, \qquad i = 1, \ldots, n. \tag{32}$$

Proof. Applying (3) to the average process, it is obtained that

$$\bar{x}(k+1) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}(k)\, x_j(k) - \frac{\alpha_k}{n} \sum_{i=1}^{n} d_i(k) + \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i(k). \tag{33}$$

Using $\sum_{i=1}^{n} w_{ij}(k) = 1$, the control law (33) can be rewritten as

$$\bar{x}(k+1) = \bar{x}(k) - \frac{\alpha_k}{n} \sum_{i=1}^{n} d_i(k) + \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i(k). \tag{34}$$

Now, let $z \in \mathbb{R}^m$ be an arbitrary vector; we have for all $k$ that

$$\|\bar{x}(k+1) - z\|^2 = \Big\| \bar{x}(k) - z - \frac{\alpha_k}{n} \sum_{i=1}^{n} d_i(k) + \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i(k) \Big\|^2. \tag{35}$$

Since the subgradient of each $f_i$ is uniformly bounded by $L$ and $\|\varepsilon_i(k)\| \le \beta_k$ with $\beta_k = \max_i \|\varepsilon_i(k)\|$ as before, it follows from $2 \beta_k \|\bar{x}(k) - z\| \le \beta_k (1 + \|\bar{x}(k) - z\|^2)$ that, for all $k$,

$$\|\bar{x}(k+1) - z\|^2 \le (1 + \beta_k) \|\bar{x}(k) - z\|^2 - \frac{2\alpha_k}{n} \sum_{i=1}^{n} d_i(k)^T \big( \bar{x}(k) - z \big) + \beta_k + 2 \alpha_k^2 L^2 + 2 \beta_k^2. \tag{36}$$

We next study the cross-term in (36). For this term, we write

$$\sum_{i=1}^{n} d_i(k)^T \big( \bar{x}(k) - z \big) = \sum_{i=1}^{n} d_i(k)^T \big( \bar{x}(k) - x_i(k) \big) + \sum_{i=1}^{n} d_i(k)^T \big( x_i(k) - z \big). \tag{37}$$

Using the subgradient boundedness, we can lower-bound the first term as

$$\sum_{i=1}^{n} d_i(k)^T \big( \bar{x}(k) - x_i(k) \big) \ge -L \sum_{i=1}^{n} \|x_i(k) - \bar{x}(k)\|. \tag{38}$$

As for the second term, we use the convexity of $f_i$ to obtain

$$\sum_{i=1}^{n} d_i(k)^T \big( x_i(k) - z \big) \ge \sum_{i=1}^{n} \big( f_i(x_i(k)) - f_i(z) \big), \tag{39}$$

from which, by adding and subtracting $f_i(\bar{x}(k))$ and using the Lipschitz continuity of $f_i$ (implied by the subgradient boundedness), we further obtain

$$\sum_{i=1}^{n} d_i(k)^T \big( x_i(k) - z \big) \ge f(\bar{x}(k)) - f(z) - L \sum_{i=1}^{n} \|x_i(k) - \bar{x}(k)\|. \tag{40}$$

Substituting (38) and (40) into (36) yields

$$\|\bar{x}(k+1) - z\|^2 \le (1 + \beta_k) \|\bar{x}(k) - z\|^2 - \frac{2\alpha_k}{n} \big( f(\bar{x}(k)) - f(z) \big) + \frac{4 \alpha_k L}{n} \sum_{i=1}^{n} \|x_i(k) - \bar{x}(k)\| + \beta_k + 2 \alpha_k^2 L^2 + 2 \beta_k^2. \tag{41}$$

Now, employing (41) with $z = x^*$ for an arbitrary $x^* \in X^*$, we obtain

$$\|\bar{x}(k+1) - x^*\|^2 \le (1 + \beta_k) \|\bar{x}(k) - x^*\|^2 - \frac{2\alpha_k}{n} \big( f(\bar{x}(k)) - f^* \big) + \frac{4 \alpha_k L}{n} \sum_{i=1}^{n} \|x_i(k) - \bar{x}(k)\| + \beta_k + 2 \alpha_k^2 L^2 + 2 \beta_k^2, \tag{42}$$

where $f^*$ is the optimal value. In view of Lemmas 12 and 13, it remains to verify that the perturbation terms in (42) are summable. Under the step-size Assumption 5, we immediately obtain

$$\sum_{k=0}^{\infty} 2 \alpha_k^2 L^2 < \infty, \tag{43}$$

and, since $\beta_k \le 2 h \bar{a} n c_1 \lambda^k$ by (19), we also have

$$\sum_{k=0}^{\infty} \big( \beta_k + 2 \beta_k^2 \big) < \infty. \tag{44}$$

Thus, we will place emphasis on the remaining term of (42). Recalling (29), we get, for all $k \ge 1$,

$$\|x_i(k) - \bar{x}(k)\| \le n C_0 C \mu^{k-1} + n C \sum_{r=1}^{k-1} \mu^{k-1-r} \big( \beta_{r-1} + \alpha_{r-1} L \big) + 2 \beta_{k-1} + 2 \alpha_{k-1} L. \tag{45}$$

Multiplying the above relation by $\alpha_k$ and using $2 \alpha_k \alpha_r \le \alpha_k^2 + \alpha_r^2$ together with $2 \alpha_k \beta_r \le \alpha_k^2 + \beta_r^2$, we have

$$\alpha_k \|x_i(k) - \bar{x}(k)\| \le n C_0 C\, \alpha_k \mu^{k-1} + \frac{n C}{2} \sum_{r=1}^{k-1} \mu^{k-1-r} \big( (1 + L) \alpha_k^2 + \beta_{r-1}^2 + L \alpha_{r-1}^2 \big) + (1 + L) \alpha_k^2 + \beta_{k-1}^2 + L \alpha_{k-1}^2. \tag{46}$$

Therefore, by summing (46) over $k$, rearranging some of the items, and applying Lemma 8 together with (43), (44), and $\sum_{r=1}^{k-1} \mu^{k-1-r} \le 1/(1 - \mu)$, it can be derived that, for all $i$,

$$\sum_{k=1}^{\infty} \alpha_k \|x_i(k) - \bar{x}(k)\| < \infty. \tag{47}$$

Combining (43), (44), and (47), all the perturbation terms in (42) are summable, so Lemma 12 applies to (42) and yields

$$\sum_{k=0}^{\infty} \alpha_k \big( f(\bar{x}(k)) - f^* \big) < \infty. \tag{48}$$

Since $\sum_{k=0}^{\infty} \alpha_k = \infty$ and $f(\bar{x}(k)) - f^* \ge 0$ (because $f^*$ is the optimal value), we obtain

$$\liminf_{k \to \infty} f(\bar{x}(k)) = f^*. \tag{49}$$

Thus, from (42), we can deduce that the conditions of Lemma 13 are established. Under this lemma, we conclude that the average sequence $\{\bar{x}(k)\}$ converges to a solution $x^* \in X^*$. Recalling Theorem 11, it yields that each sequence $\{x_i(k)\}$, $i = 1, \ldots, n$, converges to the same solution $x^*$. The proof is thus completed.

4. Numerical Example

In this section, a numerical example is presented to verify the feasibility of the proposed algorithm and the correctness of our theoretical analysis. Consider the time-varying communication graph sequence $\{\mathcal{G}(k)\}$, which switches cyclically among three directed graphs $\mathcal{G}_1$, $\mathcal{G}_2$, and $\mathcal{G}_3$ on five nodes: $\mathcal{G}(k) = \mathcal{G}_1$ when $k \bmod 3 = 0$, $\mathcal{G}(k) = \mathcal{G}_2$ when $k \bmod 3 = 1$, and $\mathcal{G}(k) = \mathcal{G}_3$ when $k \bmod 3 = 2$. Obviously, $\{\mathcal{G}(k)\}$ is a directed balanced graph sequence, and the union of the three graphs over every three consecutive time slots is strongly connected, so Assumptions 1 and 2 hold. In the following example, the graph sequence described above will be employed.

Example 1. Consider the optimization problem (1) with $n = 5$ agents, where $f_i$ is the convex objective function of agent $i$ and $x$ is a global decision vector. Moreover, we use algorithm (3) with the triggering function (15). In the simulation, we select the design parameters $h$, $c_1$, and $\lambda$ and the step-sizes $\alpha_k$ so that Assumptions 5 and 6 hold. The simulation results of the distributed subgradient algorithm (3) are shown in Figures 1–4. The state evolutions of all agents are shown in Figure 1, from which we can observe that all the agents asymptotically achieve the optimal solution $x^*$. Figure 2 shows that the distributed control input tends to zero when consensus is achieved. The event-triggered sampling time instants of each agent are described in Figure 3, from which we can observe that the updates of the control inputs are asynchronous; according to the statistics, the sampling counts of the five agents, and hence the average update rate of the control inputs, are significantly lower than under time-triggered sampling. In Figure 4, for agent 1, it is explicit that the norm of the measurement error $\|e_1(k)\|$ is asymptotically reduced to zero.
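The following self-contained sketch mimics the structure of Example 1: five agents, a cyclically switching balanced digraph sequence, update (3), and trigger test (15). The specific graphs, the quadratic objectives $f_i(x) = \frac{1}{2}(x - i)^2$ for agents $i = 0, \ldots, 4$, and all parameter values are illustrative assumptions, since the paper's exact data could not be recovered from the text:

```python
import numpy as np

# Event-triggered distributed subgradient method, Example 1-style setup.
n, T, h, c1, lam = 5, 3000, 0.2, 0.5, 0.98
graphs = [np.roll(np.eye(n), s, axis=1) for s in (1, 2, 3)]  # switching rings
targets = np.arange(n, dtype=float)   # f_i(x) = 0.5 (x - i)^2, so x* = 2
x = np.random.default_rng(0).uniform(-5, 5, n)               # initial states
x_hat = x.copy()                                             # broadcast states
for k in range(T):
    A_k = graphs[k % 3]
    alpha = 1.0 / (k + 1)                       # step-size per Assumption 5
    trig = np.abs(x_hat - x) > c1 * lam ** k    # trigger test (15)
    x_hat[trig] = x[trig]           # triggered agents broadcast and reset
    grad = x - targets              # (sub)gradient of each f_i at x_i(k)
    x = x + h * (A_k @ x_hat - A_k.sum(axis=1) * x_hat) - alpha * grad
print(x)   # all entries close to 2, the minimizer of sum_i f_i
```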

5. Conclusion and Future Work

In this paper, a novel consensus-based event-triggered algorithm for solving the distributed convex optimization problem over time-varying directed networks has been analyzed in detail. We have proved that, based on the designed distributed event-triggered scheme and the uniformly strongly connected communication graph sequence $\{\mathcal{G}(k)\}$, the algorithm succeeds in making all the nodes converge asymptotically to the optimal point. Moreover, the theoretical results are demonstrated through a numerical example. Future work will concentrate on event-triggered algorithms for the constrained convex optimization problem; the convergence rate of the algorithm introduced in this paper also deserves further investigation.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work described in this paper was supported in part by the Visiting Scholarship of State Key Laboratory of Power Transmission Equipment & System Security and New Technology (Chongqing University) under Grant 2007DA10512716421, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2016B016, in part by the Natural Science Foundation Project of Chongqing CSTC under Grant cstc2014jcyjA40016, in part by the China Postdoctoral Science Foundation under Grant 2016M590852, and in part by the Natural Science Foundation of China under Grant 61403314.