Abstract

This paper studies distributed optimization with flocking behavior and local constraint sets. Multiagent systems with continuous-time, second-order dynamics are studied. Each agent has a local constraint set and a local objective function, both of which are known only to that agent. The objective is for the agents to optimize the sum of the local functions using only local interaction and information. First, a bounded potential function used to construct the controller is given, and a distributed optimization algorithm that makes the group of agents avoid collisions during the evolution is presented. Then, it is proved that all agents track the optimal velocity while avoiding collisions. The proof of the main result is divided into three steps: global set convergence, consensus analysis, and optimal set convergence. Finally, a simulation is included to illustrate the results.

1. Introduction

The study of distributed optimization of multiagent systems has attracted extensive attention in recent years. The objective of distributed optimization is to optimize the sum of local functions using only local interaction and information. Both unconstrained and constrained models of distributed optimization problems have been studied. Nedic et al. [1] first gave a distributed subgradient algorithm for unconstrained distributed optimization problems and proved that all agents optimize the sum of the local functions over a time-varying graph. Wang et al. [2] first introduced a continuous-time algorithm with undirected topology. Rahili et al. [3] studied a distributed optimization problem with first-order and second-order dynamics, in which the local functions are time-varying. Zhao [4] studied a distributed continuous-time optimization problem for general linear multiagent systems; under edge-based and node-based frameworks, respectively, two adaptive algorithms were developed to minimize the team performance function. Yang et al. [5] studied distributed unconstrained optimization with flocking and proposed a distributed adaptive protocol for multiagent systems to realize flocking behavior. For distributed constrained optimization problems, there are also some results. Nedic et al. [6] presented a distributed projected subgradient algorithm for the constrained optimization problem and studied its convergence properties. Some modified or extended models of distributed constrained optimization were also given [7–16]. Qiu et al. [8] studied distributed convex optimization of continuous-time dynamics with a common state set constraint; they showed that if the time-varying gains of the gradients satisfy a persistence condition, the states of all agents converge to the optimal point within the set constraint. Lin et al. [9] proposed a nonuniform gradient gain control method and a finite-time control method for distributed constrained optimization. Zeng et al. [10] studied a nonsmooth convex optimization problem and proposed a distributed continuous-time algorithm combining primal-dual methods for saddle-point seeking with projection methods for set constraints. Lin [11] addressed distributed optimization problems with nonuniform convex constraint sets and nonuniform step sizes. Liu and Wang [12] proposed a group of coupled two-layer projection networks with bounded constraints. Li et al. [13] gave a distributed discrete-time control law for solving a nonconvex problem with inequality constraints, transforming the nonconvex problem into a sequence of strongly convex subproblems through a successive convex approximation technique. Zhang et al. [14] optimized a sum of convex functions defined over a graph, where every edge in the graph carries a linear equality constraint. Hong et al. [15] studied two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm and the Gradient Alternating Direction Method of Multipliers, for solving a class of linearly constrained nonconvex optimization problems. Gu et al. [16] proposed a solution tool for distributed convex problems with coupling equality constraints; the proposed algorithm was implemented over time-varying directed networks.

As is well known, there are a great number of results on distributed optimization. However, distributed optimization with flocking behavior has rarely been considered. The flocking problem is a significant issue considered by many researchers [17–25]. Its purpose is to control a group of agents so that they move using only local information while maintaining connectivity, avoiding collisions, and reaching a common velocity. Nevertheless, the above results cannot be directly applied to more complex flocking problems. In this paper, a distributed optimization problem that also accounts for flocking behavior is studied. The aim of this paper is to solve that problem with local constraint sets. Due to the coexistence of constraint sets, flocking behavior, and optimization objectives, the analysis poses significant challenges. There are three major contributions in this manuscript. First, a bounded potential function is used to construct the controller, which makes the group of agents avoid collisions during the evolution. Second, the proposed control law drives each agent's velocity state into its local constraint set in finite time. Third, the convergence of the control law is proved in three steps.

An outline of the paper is as follows. The notation and some essential concepts used in this paper are given in Section 2. In Section 3, we formulate the distributed constrained optimization problem with flocking behavior. In Section 4, we present the main result and prove it in three steps. In Section 5, a simulation example is presented. Finally, conclusions are drawn in Section 6.

2. Notations and Preliminaries

Notations. The identity matrix in $\mathbb{R}^{n \times n}$ is denoted by $I_n$. The index set $\{1, 2, \dots, n\}$ is denoted by $\mathcal{I}$. $A \otimes B$ is the Kronecker product of $A$ and $B$. $\operatorname{sgn}(x)$ is the componentwise sign function of $x$. The gradient of $f$ at $x$ is denoted by $\nabla f(x)$. The Euclidean norm of a vector $x$ is denoted by $\|x\|$, and $\|x\|_1$ denotes the 1-norm of $x$. $P_X(x)$ denotes the projection of the vector $x$ onto the closed convex set $X$; i.e., $P_X(x) = \arg\min_{y \in X} \|x - y\|$.

A dynamic undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is considered in this paper, with node set $\mathcal{V} = \{1, \dots, n\}$ and set of links $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. $(i, j) \in \mathcal{E}$ means that node $j$ is a neighbor of node $i$. The neighbors of vertex $i$ are given by $N_i = \{j \in \mathcal{V} : (i, j) \in \mathcal{E}\}$. The adjacency matrix of the graph is $A = [a_{ij}]$, where $a_{ij} = 1$ if $(i, j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. By arbitrarily assigning an orientation to the edges in $\mathcal{E}$, $D = [d_{ik}]$ denotes the incidence matrix associated with the graph $\mathcal{G}$, where $d_{ik} = 1$ if the edge $e_k$ leaves node $i$, $d_{ik} = -1$ if it enters node $i$, and $d_{ik} = 0$ otherwise. The Laplacian of $\mathcal{G}$ is denoted by $L = \Delta - A$, where $\Delta$ is the degree matrix of $\mathcal{G}$ with $\Delta_{ii} = \sum_{j} a_{ij}$ for $i \in \mathcal{I}$. Note that, for undirected graphs, the Laplacian matrix is symmetric and satisfies $L = D D^{T}$, and when the graph is connected, the eigenvalues of $L$ can be ordered as $0 = \lambda_1 < \lambda_2 \le \cdots \le \lambda_n$.

Lemma 1 (see [26]). Suppose that $X$ is a closed convex set in $\mathbb{R}^m$. Then, for any $x \in \mathbb{R}^m$, $P_X(x)$ is continuous with respect to $x$ and $\|P_X(x) - P_X(y)\| \le \|x - y\|$ for any $y \in \mathbb{R}^m$.
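For illustration, the nonexpansiveness of the Euclidean projection onto a closed convex set can be checked numerically. The following sketch assumes a box-shaped set purely for convenience; the set, the variable names, and the sampled points are illustrative and are not part of Lemma 1.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Numerical check of nonexpansiveness: ||P_X(x) - P_X(y)|| <= ||x - y||.
rng = np.random.default_rng(0)
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
for _ in range(5):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert (np.linalg.norm(project_box(x, lo, hi) - project_box(y, lo, hi))
            <= np.linalg.norm(x - y) + 1e-12)
```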

Lemma 2 (see [27]). Let $f : \mathbb{R}^m \to \mathbb{R}$ be a differentiable and convex function. Then, for any $x, y \in \mathbb{R}^m$,
$$f(y) \ge f(x) + \nabla f(x)^{T} (y - x).$$
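For instance, for the quadratic function $f(x) = \|x\|^2$ (an example of ours, not taken from [27]), the inequality of Lemma 2 can be verified directly:
$$f(y) - f(x) - \nabla f(x)^{T}(y - x) = \|y\|^{2} - \|x\|^{2} - 2x^{T}(y - x) = \|y - x\|^{2} \ge 0.$$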

3. Problem Formulation

We consider $n$ agents operating in $\mathbb{R}^m$, with dynamics expressed by the double integrators
$$\dot{x}_i = v_i, \qquad \dot{v}_i = u_i, \qquad i \in \mathcal{I}, \tag{2}$$
where $x_i \in \mathbb{R}^m$ is the position vector of agent $i$, $v_i \in \mathbb{R}^m$ is the velocity vector, and $u_i \in \mathbb{R}^m$ is the control input acting on agent $i$. A local cost function $f_i$ is assigned to agent $i$ ($i \in \mathcal{I}$), and it is known only to agent $i$. The global cost function is denoted by
$$f(v) = \sum_{i=1}^{n} f_i(v).$$
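For simulation purposes, system (2) can be propagated numerically; the explicit Euler scheme and the step size below are assumptions of ours, since the paper does not prescribe an integration method.

```python
import numpy as np

def euler_step(x, v, u, dt=0.01):
    """One explicit Euler step of the double integrator: x_dot = v, v_dot = u."""
    return x + dt * v, v + dt * u
```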

The topology graph studied in this paper is dynamic. In the remainder of this paper, the dynamic undirected graph, the adjacency matrix, the incidence matrix, and the Laplacian matrix at time $t$ are simply written as $\mathcal{G}(t)$, $A(t)$, $D(t)$, and $L(t)$, respectively. In the dynamic graph, we assume $(i, j) \in \mathcal{E}(t)$ if and only if $\|x_i(t) - x_j(t)\| < r$, where $r > 0$ is the communication radius of an agent.
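A minimal sketch of how the time-varying adjacency and Laplacian matrices can be computed from the agents' positions and the communication radius $r$ is given below; the function and variable names are ours.

```python
import numpy as np

def proximity_graph(positions, r):
    """Adjacency matrix A(t) and Laplacian L(t) of the dynamic graph:
    (i, j) is an edge iff ||x_i - x_j|| < r and i != j."""
    n = positions.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < r:
                A[i, j] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return A, L
```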

The aim of this paper is to design a controller for system (2), using the local function and local information gathered from neighbors, such that all agents track the optimal velocity while maintaining connectivity and avoiding collisions. The optimal velocity $v^*$ satisfies
$$v^* \in \arg\min_{v \in \Omega} \sum_{i=1}^{n} f_i(v), \qquad \Omega = \bigcap_{i=1}^{n} \Omega_i, \tag{4}$$
where $\Omega_i \subseteq \mathbb{R}^m$ is the local constraint set of agent $i$, and it is closed and convex. Let $\Omega^*$ denote the optimal set of problem (4).

The problem defined in (4) is equivalent to
$$\min \sum_{i=1}^{n} f_i(v_i), \qquad \text{s.t. } v_i \in \Omega_i, \quad v_i = v_j, \quad i, j \in \mathcal{I}.$$
To ensure that problem (4) is well posed, the following assumption is needed.

Assumption 3. Each function $f_i$ is strictly convex and differentiable. Let $\nabla f_i(v) = \varphi_i(v)$, where $\varphi_i : \mathbb{R}^m \to \mathbb{R}^m$ is a continuous function satisfying $\|\varphi_i(x) - \varphi_i(y)\| \le \theta \|x - y\|$ for a certain positive number $\theta$ and all $x, y \in \mathbb{R}^m$.

Because $\Omega_i$ is a bounded closed convex set, there is a constant $K > 0$ such that $\|v\| \le K$ for all $v \in \Omega_i$, $i \in \mathcal{I}$. Moreover, since each function $f_i$ is convex and differentiable and the constraint set is convex and closed, the optimal solution set $\Omega^*$ of problem (4) is nonempty, closed, and bounded.

In order to smooth the controller, we adopt the $\sigma$-norm first presented in [21]:
$$\|z\|_{\sigma} = \frac{1}{\epsilon}\left[\sqrt{1 + \epsilon \|z\|^{2}} - 1\right], \qquad \epsilon > 0.$$
The function $\|z\|_{\sigma}$, unlike the norm $\|z\|$, which is not differentiable at $z = 0$, is differentiable everywhere. The gradient of $\|z\|_{\sigma}$ is given by
$$\sigma_{\epsilon}(z) = \nabla \|z\|_{\sigma} = \frac{z}{\sqrt{1 + \epsilon\|z\|^{2}}} = \frac{z}{1 + \epsilon \|z\|_{\sigma}}.$$
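A minimal sketch of the $\sigma$-norm and its gradient, following [21], is given below; the parameter value $\epsilon = 0.1$ is an illustrative assumption.

```python
import numpy as np

def sigma_norm(z, eps=0.1):
    """sigma-norm of [21]: (1/eps) * (sqrt(1 + eps * ||z||^2) - 1); smooth at z = 0."""
    return (np.sqrt(1.0 + eps * np.dot(z, z)) - 1.0) / eps

def sigma_grad(z, eps=0.1):
    """Gradient of the sigma-norm: z / sqrt(1 + eps * ||z||^2)."""
    return z / np.sqrt(1.0 + eps * np.dot(z, z))
```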

To propose a control law for collision avoidance, we need to give a smooth collective potential function $\psi$.

Definition 4. The potential function $\psi$ is a differentiable nonnegative function of $\|x_j - x_i\|_{\sigma}$ which satisfies the following conditions.
(1) $\psi$ has a unique minimum at $\|x_j - x_i\|_{\sigma} = d_{\sigma}$, where $d$ is the desired distance between agents $i$ and $j$, $d_{\sigma} = \|d\|_{\sigma}$, and $d$ is a constant.
(2) The derivative of $\psi$ is continuous.

Remark 5. There are many potential functions satisfying Definition 4, such as the potential functions defined in [21].

Remark 6. From the characteristics of the function $\psi$, especially its continuity on a bounded closed interval, we know that the function $\psi$ is bounded. Hence $\psi(\|x_j - x_i\|_{\sigma})$ is also bounded for all $i, j \in \mathcal{I}$. Thus, there is a constant $\bar{\psi} > 0$ such that $\psi(\|x_j - x_i\|_{\sigma}) \le \bar{\psi}$.

To achieve the objective of this paper, we present algorithm (9), where $\alpha$ and $\beta$ are positive constants and $P_{\Omega_i}(v_i)$ is the projection of $v_i$ onto $\Omega_i$. It is worth pointing out that $P_{\Omega_i}(v_i)$ depends only on agent $i$'s velocity.

Remark 7. In (9), the first term is used to regulate the positions between agent $i$ and its neighbors; this term is responsible for collision avoidance and cohesion in the group. The second term drives the desired velocity alignment. The third term is the negative gradient of the local cost function $f_i$. The fourth term is used to pull the velocity vector $v_i$ onto $\Omega_i$. A minimal sketch of a controller with this four-term structure is given below.
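Since the explicit form of (9) is not reproduced here, the following sketch only mirrors the four-term structure described in Remark 7; the unit velocity-alignment weights, the gains $\alpha$ and $\beta$, and the signum form of the fourth term are assumptions of ours rather than the exact algorithm (9).

```python
import numpy as np

def control_input(i, x, v, neighbors, grad_fi, project_i, psi_grad, alpha=1.0, beta=1.0):
    """Four-term controller in the spirit of Remark 7 (a sketch, not the exact algorithm (9)).

    x, v      : (n, m) arrays of agent positions and velocities
    neighbors : indices of agent i's current neighbors
    grad_fi   : gradient of the local cost f_i
    project_i : Euclidean projection onto the local constraint set Omega_i
    psi_grad  : gradient of the pairwise potential with respect to x_i
    """
    m = x.shape[1]
    term1 = np.zeros(m)  # collision avoidance and cohesion via the potential gradient
    term2 = np.zeros(m)  # velocity alignment with neighbors
    for j in neighbors:
        term1 -= psi_grad(x[i], x[j])
        term2 -= v[i] - v[j]
    term3 = -alpha * grad_fi(v[i])                   # descent on the local cost
    term4 = -beta * np.sign(v[i] - project_i(v[i]))  # pulls v_i onto Omega_i
    return term1 + term2 + term3 + term4
```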

4. Main Theorem and Convergence Analysis

We give the main theorem of this paper and establish the convergence of the control law in this section. First, we state the main result as follows.

Theorem 8. Assume that the graph $\mathcal{G}(t)$ is connected for all $t \ge 0$ and that Assumption 3 holds. For system (2) with algorithm (9), if the positive constants $\alpha$ and $\beta$ are appropriately chosen, the agents' velocities in the group track the optimal velocity and the agents avoid interagent collisions.

In the following, we prove Theorem 8. To do so, we first verify global set convergence, then give the consensus analysis, and finally prove that all agents converge to the optimal set.

Proof. First, to verify global set convergence, let us consider a Lyapunov function candidate based on the distance between $v_i$ and the set $\Omega_i$; in the derivation we use Lemma 1 and the properties of the projection operator. From (11), it follows that the time derivative of this function admits a negative upper bound, and integrating both sides of this inequality shows that the distance converges to zero in finite time. Namely, there is a time $T_1 > 0$ such that $v_i(t) \in \Omega_i$ for all $t \ge T_1$. That is, under control (9), $v_i(t) \in \Omega_i$ for all $i \in \mathcal{I}$ and $t \ge T_1$.
Second, we present the consensus analysis. From the above proof, $v_i(t) \in \Omega_i$ for all $t \ge T_1$. From Assumption 3, $\nabla f_i$ is continuous. Since $\Omega_i$ is a bounded closed convex set, by the property of continuous functions on closed bounded sets, $\nabla f_i$ is upper bounded on $\Omega_i$; that is, there is a constant $\bar{\varphi} > 0$ such that $\|\nabla f_i(v)\| \le \bar{\varphi}$ for all $v \in \Omega_i$. For all $t \ge T_1$, consider a Lyapunov function candidate composed of the pairwise potentials and the velocity disagreement. Note, however, that, due to the symmetric nature of the potential function $\psi$, its cross terms can be regrouped. Taking the time derivative of this function along (2) and (9), and using the fact that $v_i(t) \in \Omega_i$ for $t \ge T_1$, inequality (18) can be rewritten in terms of the Laplacian $L(t)$. Since the potential function and the gradients are upper bounded, the derivative is nonpositive. By LaSalle's invariance principle, we get $v_i - v_j \to 0$ as $t \to \infty$, which implies that the velocities of all agents in system (2) asymptotically become the same.

From the above proof, the velocity disagreement converges to zero as $t \to \infty$. So, for any $\varepsilon > 0$, there exists a time $T_2$ (with $T_2 \ge T_1$) such that, for all $t \ge T_2$, $\|v_i(t) - v_j(t)\| \le \varepsilon$ for all $i, j \in \mathcal{I}$.

From the convexity of the functions $f_i$ and (15), we obtain (24).

So, for all $t \ge T_2$, (14) and (24) hold.

For $t \ge T_2$, denote the following function.

Similar to the proof above, the derivative of this function is nonpositive. In view of the arbitrariness of $\varepsilon$, let $\varepsilon \to 0$. By LaSalle's invariance principle and the unique global minimum of $f$ on $\Omega$, we can get $v_i \to v^*$ as $t \to \infty$ for all $i \in \mathcal{I}$. That is, the global cost function (4) is minimized as $t \to \infty$.

Remark 9. Compared with distributed optimization with a common constraint [18–20], we extend the control law to distributed optimization with local constraints. Besides, the result of [8] requires the time-varying gains of the gradients to satisfy a persistence condition, whereas the gain conditions of this paper are relatively easy to satisfy. Compared with the control laws in [10, 12], the control law in this paper is relatively simple. Moreover, this paper addresses the distributed optimization problem with both flocking behavior and local constraint sets.

5. Simulation

In this section, a numerical example is presented to verify the feasibility of the proposed algorithm and the correctness of our theoretical analysis.

To construct the potential function, we choose an action function that satisfies the conditions in Definition 4, together with the corresponding repulsive potential function. The parameters of these functions remain fixed throughout the simulation, and the resulting potential function has the shape shown in Figure 1.

Then, the potential function satisfying Definition 4 is built from these components.
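Since the explicit action and potential functions used in the simulation are not reproduced here, the following sketch shows one common construction satisfying Definition 4, following [21]; the parameter values (eps, h, a, b, d, r) are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

# Illustrative parameters (not the paper's values).
eps, h = 0.1, 0.2                      # sigma-norm parameter and bump-function threshold
a, b = 5.0, 5.0                        # sigmoid gains, 0 < a <= b
c = abs(a - b) / np.sqrt(4 * a * b)    # shift so that the action vanishes at the desired distance
d, r = 7.0, 8.4                        # desired distance and interaction range

def sig_norm_scalar(z):
    """sigma-norm of a scalar z."""
    return (np.sqrt(1.0 + eps * z ** 2) - 1.0) / eps

d_sig, r_sig = sig_norm_scalar(d), sig_norm_scalar(r)

def rho(z):
    """Bump function: 1 on [0, h), smooth decay to 0 on [h, 1], 0 afterwards."""
    if z < h:
        return 1.0
    if z <= 1.0:
        return 0.5 * (1.0 + np.cos(np.pi * (z - h) / (1.0 - h)))
    return 0.0

def action(z):
    """Action function: bounded, zero beyond the interaction range and at z = d_sig."""
    s = (z - d_sig + c) / np.sqrt(1.0 + (z - d_sig + c) ** 2)
    return rho(z / r_sig) * 0.5 * ((a + b) * s + (a - b))

def potential(z, n_steps=200):
    """Bounded nonnegative pairwise potential: integral of the action function from d_sig to z."""
    xs = np.linspace(d_sig, z, n_steps)
    ys = np.array([action(x) for x in xs])
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))
```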

In the illustration, we consider eight agents in a 2D plane. The task of the multiagent system is to make the agents' velocities minimize the total cost function $f = \sum_{i=1}^{8} f_i$, where the optimization variable is the common two-dimensional velocity. We consider the second-order dynamic system (2) with the control algorithm (9). The eight local cost functions are those used in [9]. We assume that each local constraint set $\Omega_i$ is a square. It is observed from these local cost functions and local constraint sets that Assumption 3 is satisfied. The local constraint sets and their intersection are plotted in Figure 2. Through simple calculations, the unconstrained minimizer of the global function lies outside the intersection $\Omega$, so the objective function (4) must attain at least one optimal point on the boundary of $\Omega$. By evaluating the global cost along the boundary of the constraint set $\Omega$, we obtain the optimal point of the objective function (4).
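Because the local cost functions from [9] and the exact square constraint are not reproduced here, the sketch below uses hypothetical quadratic local costs and a hypothetical square $[-1, 1]^2$ purely for illustration; it shows how the constrained optimum of the global cost in (4) can be located numerically by projected gradient descent. With these hypothetical data, the unconstrained minimizer lies outside the square, so the constrained optimum is attained on its boundary, as in the discussion above.

```python
import numpy as np

# Hypothetical local costs f_i(v) = ||v - c_i||^2 for eight agents (illustration only).
centers = np.array([[2.0, 3.0], [3.0, -1.0], [-2.0, 2.0], [1.0, 1.0],
                    [4.0, 0.0], [-1.0, -2.0], [0.0, 3.0], [2.0, -3.0]])

def grad_global(v):
    """Gradient of the global cost f(v) = sum_i ||v - c_i||^2."""
    return 2.0 * np.sum(v - centers, axis=0)

def project_square(v, half_side=1.0):
    """Projection onto the hypothetical square constraint [-half_side, half_side]^2."""
    return np.clip(v, -half_side, half_side)

# Projected gradient descent converges to the constrained optimal velocity.
v = np.zeros(2)
for _ in range(2000):
    v = project_square(v - 0.01 * grad_global(v))
print("constrained optimal velocity (sketch):", v)
```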

The simulation results are shown in Figures 3–5. For simplicity, the coefficients $\alpha$ and $\beta$ in algorithm (9) are chosen as fixed positive constants. Figure 3 shows the initial state of the group, including the initial positions and velocities. All initial positions are set on a line, and all initial velocities are set with arbitrary directions and magnitudes within a bounded range (in m/s). Figure 4 gives the final steady-state configuration and the final velocities of the agent group, where the solid lines represent the neighboring relations between agents and the dotted arrows represent the velocities of all agents. Figure 5 plots the velocity trajectories. It is easy to see that the flocking motion is obtained and the velocities of all agents converge to the optimal velocity.

6. Conclusion

We studied distributed optimization with flocking behavior and local constraint sets in this paper. Multiagent systems with continuous-time, second-order dynamics were considered. Each agent has a local constraint set and a local objective function, both of which are known only to that agent. The objective is for multiple agents to optimize the sum of the local functions using only local interaction and information. First, a bounded potential function used to construct the controller was given, and a distributed constrained optimization algorithm that makes all agents avoid collisions during the evolution was presented. Then, it was proved that all agents track the optimal velocity while avoiding interagent collisions. The proof of the main result was divided into three steps: global set convergence, consensus analysis, and optimal set convergence. Finally, a simulation was included to illustrate the results.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61573199) and the Basic Research Projects of High Education (3122015C025).