International Journal of Aerospace Engineering

Volume 2018, Article ID 4865745, 12 pages

https://doi.org/10.1155/2018/4865745

## Consensus Control of Time-Varying Delayed Multiagent Systems with High-Order Iterative Learning Control

^{1}Equipment Management and Unmanned Aerial Vehicle Engineering College, Air Force Engineering University, Xi’an 710051, China
^{2}Theory Training Department, Air Force Harbin Flight Academy, Harbin 150001, China

Correspondence should be addressed to Xiuxia Sun; moc.621@xxsyxcg

Received 17 March 2018; Revised 30 May 2018; Accepted 20 June 2018; Published 5 August 2018

Academic Editor: Giovanni Palmerini

Copyright © 2018 Xiongfeng Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We address the consensus control problem for time-varying delayed multiagent systems with a directed communication topology. The model of each agent includes time-varying nonlinear dynamics and an external disturbance, where the time-varying nonlinear term satisfies a global Lipschitz condition and the disturbance term satisfies a norm-bounded condition. An improved control protocol, namely, a high-order iterative learning control scheme, is applied to cope with the consensus tracking problem, where the desired trajectory is generated by a virtual leader agent. Through theoretical analysis, the improved control protocol guarantees that the tracking errors converge asymptotically to a sufficiently small interval under the given convergence conditions. Furthermore, when the bounds of the initial state differences and disturbances tend to zero, the bound of the tracking errors also tends to zero. Finally, several cases are provided to illustrate the effectiveness of the theoretical analysis.

#### 1. Introduction

In the past few decades, cooperative control of multiagent systems has attracted considerable attention. One of the main reasons is its wide application in various fields, such as the military, aerospace, and industrial fields [1]. The research directions include formation control, containment control, tracking control, swarm control, and flocking control [2–6]. Generally speaking, the fundamental problem in cooperative control is consensus, where the objective is to design an appropriate control law so that the states of a group of agents eventually reach agreement.

In the study of the cooperative problem of multiagent systems, diverse control strategies have been proposed in the literature. For example, the output regulation method and a distributed containment control law for the containment control of linear multiagent systems were investigated in [7, 8], while an advanced clustering with frequency sweeping method was used for general linear multiagent systems with communication and input delays in [9]. Nevertheless, it should be emphasized that almost all practical systems involve nonlinear dynamics. In [10, 11], a distributed adaptive control protocol and a pinning control algorithm for the consensus of a class of second-order leader-following nonlinear multiagent systems with directed topology were considered. In [12], a distributed cooperative method was applied to deal with the dynamic task planning of multiple satellites. Additionally, neural network algorithms were also introduced to study the consensus problem of nonlinear multiagent systems [13, 14].

In practice, time delays always exist owing to the physical limitations of sensors. Therefore, it is of great significance to study time-delay systems. In [15], a distributed consensus control protocol with decaying gains for linear discrete-time multiagent systems with delays and noises was considered, and the cases of communication delay and input delay were studied in [16, 17]. For the problem of time-varying delays, Xiao and Wang [18] investigated the state control problem of multiagent systems with bounded time-varying communication delays, and Chen et al. [19] presented a robust control law for a class of time-varying delayed multiagent systems in a noisy environment.

As is well known, iterative learning control is based on the idea that the performance of a system that performs the same task repeatedly can be improved by learning from previous iterations [20]. In early works, iterative learning control was considered for various issues. In [21], an iterative learning control scheme was proposed to make the tracking error trajectory converge to a prespecified error trajectory. In [22], an adaptive iterative learning updating law was applied to solve the tracking problem of high-precision motion systems. In [23], the packet dropout problem of nonlinear systems with iterative learning control was investigated, and Wu et al. [24] solved high-precision satellite attitude tracking control by using iterative learning control. Additionally, a high-order iterative learning identification scheme was used to extract a projectile’s aerodynamic drag coefficient curve from radar-measured velocity data in [25], and the stability analysis of high-order iterative learning control for discrete-time systems and nonlinear switched systems was studied in [26, 27], respectively. In recent years, the application of iterative learning control has been extended to multiagent systems. In [28], a D-type iterative learning control scheme was presented to deal with the tracking problem of multiagent systems. In [29], a distributed adaptive fuzzy iterative learning control algorithm was designed for the coordination control problem of leader-following multiagent systems with unknown dynamics and nonrepeatable input disturbances, and Meng and Jia [30] developed a finite-time consensus control protocol with iterative learning control for a class of multiagent systems. In addition, iterative learning control was also used to achieve formation control of multiagent systems [31, 32].
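The core learning-from-repetition idea can be sketched with a toy example (the plant, gain, and task below are illustrative assumptions, not from the paper): the controller never models the plant, yet the repeated correction drives the error to zero.

```python
# A minimal sketch of the iterative learning idea: perform the same task
# repeatedly and correct the input using the previous iteration's error.
def plant(u):
    return 2.0 * u            # static map, unknown to the controller

y_d = 1.0                     # desired output for the repeated task
u, gamma = 0.0, 0.4           # initial input and learning gain
for _ in range(20):
    e = y_d - plant(u)        # error observed in this iteration
    u = u + gamma * e         # P-type update: u^{k+1} = u^k + gamma * e^k
print(abs(y_d - plant(u)))    # residual error after 20 iterations is tiny
```

Here the per-iteration error contracts by the factor |1 − 2γ| = 0.2, so convergence is geometric.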

From the above literature, it is not difficult to find that there exist few results on the consensus problem of time-varying delayed multiagent systems with iterative learning control. Moreover, the application of high-order iterative learning control to multiagent systems is also seldom described in the existing papers. These facts inspired our current study. In this work, we turn our attention to the consensus control of time-varying delayed multiagent systems with high-order iterative learning control. The main contributions of this work are summarized as follows: (i) a class of time-varying delayed multiagent systems with directed communication topology is considered in this paper. The dynamics of each agent contain a time-varying nonlinear term and an external disturbance, where the nonlinear term satisfies a global Lipschitz condition and the disturbance term satisfies a norm-bounded condition; (ii) different from [26, 27], an improved high-order iterative learning control scheme is applied to guarantee tracking error convergence in the iteration domain. Furthermore, we show that the bound of the tracking errors tends to zero when the bounds of the initial state differences and disturbances tend to zero; and (iii) it is proven that the improved control protocol can effectively handle the consensus problem of multiagent systems with time-varying delays and external disturbances.

The rest of this paper is organized as follows. Some necessary preliminaries are introduced in Section 2. The problem formulation and the main results on time-varying delayed multiagent systems with high-order iterative learning control are discussed in Sections 3 and 4, respectively. In Section 5, the effectiveness of the proposed control protocol is illustrated by simulations, and conclusions are drawn in Section 6.

#### 2. Preliminaries

In this section, the graph theory, definition, and lemma that will be utilized in this paper are briefly introduced.

##### 2.1. Graph Theory

For a multiagent system with $N$ agents, the information exchange among agents can be modeled as an interaction graph with $N$ nodes. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote a directed graph with node set $\mathcal{V} = \{v_1, v_2, \dots, v_N\}$ and directed edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. The adjacency matrix $A = [a_{ij}]$ of the directed graph is defined by $a_{ij} > 0$ if and only if $(v_j, v_i) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. Moreover, it is assumed that $a_{ii} = 0$. For agent $i$, the set of neighbors is denoted by $N_i = \{v_j \in \mathcal{V} : (v_j, v_i) \in \mathcal{E}\}$. The Laplacian matrix is denoted by $L = D - A$, where $D = \mathrm{diag}\{d_1, d_2, \dots, d_N\}$ with $d_i = \sum_{j \in N_i} a_{ij}$.

In this work, an augmented graph $\bar{\mathcal{G}}$ is considered, which consists of the $N$ agents and one virtual leader agent. The communication between the agents and the virtual agent is described by the matrix $B = \mathrm{diag}\{b_1, b_2, \dots, b_N\}$. If agent $i$ can obtain the information of the virtual agent, then $b_i > 0$; otherwise, $b_i = 0$.
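As a concrete illustration of these definitions, the following sketch builds the Laplacian and leader-adjacency matrices for a hypothetical five-agent directed ring with one pinned agent (the actual topology of Figure 1 may differ; this is only an assumption for illustration):

```python
import numpy as np

# Hypothetical directed topology for 5 agents; A[i, j] > 0 means agent i
# receives information from agent j (a directed ring here).
A = np.array([
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # in-degree matrix, d_i = sum_j a_ij
L = D - A                    # graph Laplacian, L = D - A

# Leader-adjacency (pinning) matrix: b_i > 0 iff agent i sees the leader.
B = np.diag([1.0, 0.0, 0.0, 0.0, 0.0])

print(L)
```

By construction, every row of the Laplacian sums to zero, which is what makes consensus states lie in its null space.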

##### 2.2. Definition and Lemma

*Definition 1 (see [33]). *For a function $f : [0, T] \to \mathbb{R}^n$, the $\lambda$-norm is defined by
$$\left\| f(\cdot) \right\|_\lambda = \sup_{t \in [0, T]} e^{-\lambda t} \left\| f(t) \right\|, \quad \lambda > 0.$$
The following property of the *λ*-norm holds.

*Property 1. *For a piecewise continuous function $g : [0, T] \to \mathbb{R}^n$, if $f(t) = \int_0^t g(\tau) \, d\tau$, then
$$\left\| f(\cdot) \right\|_\lambda \le \frac{1 - e^{-\lambda T}}{\lambda} \left\| g(\cdot) \right\|_\lambda,$$
where $\lambda > 0$.
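The λ-norm can be approximated numerically on a grid, which is useful for checking convergence estimates in simulation (the grid size and test function below are illustrative assumptions):

```python
import numpy as np

def lambda_norm(f, T=1.0, lam=5.0, n=1000):
    """Numerically approximate the lambda-norm
    sup_{t in [0, T]} e^{-lam * t} * |f(t)| on a uniform grid."""
    t = np.linspace(0.0, T, n)
    return float(np.max(np.exp(-lam * t) * np.abs(f(t))))

# For f(t) = e^{lam * t} the weighted signal is constant, so the norm is 1.
print(lambda_norm(lambda t: np.exp(5.0 * t), T=1.0, lam=5.0))
```

The exponential weight is exactly why the λ-norm is convenient in iterative learning proofs: it discounts late-time growth so that contraction arguments close.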

Lemma 1 (see [34]). *Let the real positive series $\{a_k\}$ satisfy
$$a_k \le \rho_1 a_{k-1} + \rho_2 a_{k-2} + \cdots + \rho_N a_{k-N} + \varepsilon, \quad k \ge N,$$
where $\rho_i \ge 0$ for $i = 1, \dots, N$ and $\varepsilon \ge 0$, and suppose there exists $\rho = \sum_{i=1}^N \rho_i < 1$; then
$$\limsup_{k \to \infty} a_k \le \frac{\varepsilon}{1 - \rho}.$$*
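Assuming the standard second-order instance of this series bound, the lemma can be checked numerically (the coefficients below are illustrative assumptions):

```python
# Numerical check of the series bound: a_k <= rho1*a_{k-1} + rho2*a_{k-2} + eps
# with rho1 + rho2 < 1 implies limsup a_k <= eps / (1 - rho1 - rho2).
rho1, rho2, eps = 0.4, 0.3, 0.1      # rho1 + rho2 = 0.7 < 1
a = [1.0, 1.0]                       # arbitrary positive initial terms
for _ in range(200):
    a.append(rho1 * a[-1] + rho2 * a[-2] + eps)   # worst case: equality
limit_bound = eps / (1.0 - (rho1 + rho2))         # = 1/3
print(a[-1], limit_bound)
```

Even in the worst case (equality at every step), the series settles at exactly the bound, which is what makes the lemma tight for high-order learning schemes.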

#### 3. Construction of the Control Protocol for the Consensus Problem

In this section, the main works are carried out to formulate the consensus problem and construct the consensus control protocol. In addition, some assumptions are also provided.

Consider a multiagent system consisting of identical agents with time-varying delays and nonlinear dynamics. The dynamics of the agent at iteration are described by where , , and are the state, control input, and output of the system, respectively; is a bounded disturbance; the time is given; are time delays; and . The functions and are piecewise continuous in ; and is differentiable in and , with and . In addition, it is assumed that if .

The matrix form of (6) is expressed as where , , , , , , and .

The dynamics of the virtual leader agent are given as which generate a given bounded desired trajectory ; there exist a unique bounded input and such that when , the virtual agent has a unique bounded state and output .

Similar to the expression of (7), the matrix form of (8) is expressed as where , , , , , and ; , is the identity matrix of appropriate dimension, and represents the Kronecker product.

Given the desired output trajectory of the virtual agent , the goal is to find a consensus control law such that when , the outputs of all agents track the desired output trajectory as closely as possible.

According to (7) and (9), the consensus tracking error of the agent at the iteration is defined as and we have where , , , , and .
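The neighborhood-based tracking error can be sketched as follows. This assumes the standard distributed error definition used in leader-following consensus (the paper's exact expression (11) is not reproduced above): each agent combines relative errors to its neighbors with a pinning term toward the leader.

```python
import numpy as np

def consensus_errors(y, y_d, A, b):
    """Distributed consensus tracking error for each agent (a sketch):
    xi_i = sum_j a_ij * (y_j - y_i) + b_i * (y_d - y_i),
    where y holds agent outputs, y_d is the leader output, A is the
    adjacency matrix, and b the pinning gains toward the leader."""
    xi = np.zeros(len(y))
    for i in range(len(y)):
        xi[i] = np.dot(A[i], y - y[i]) + b[i] * (y_d - y[i])
    return xi

y = np.array([1.0, 2.0, 3.0])
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
b = np.array([1.0, 0.0, 0.0])
print(consensus_errors(y, 1.0, A, b))   # nonzero until outputs agree
```

In vector form this is xi = −(L + B)(y − y_d·1), so the errors vanish exactly when every output equals the leader's.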

Before introducing a high-order iterative learning control scheme, we give the following assumptions.

*Assumption 1. *The functions , , and and the partial derivatives and are uniformly globally Lipschitz in on the interval . That is, there exist constants , , and such that
where .

*Assumption 2. *The functions , , , and are uniformly bounded with bounds , , , and which are denoted by

We now present the following improved high-order iterative learning control scheme for the multiagent systems (7) and (9): where , , is the given initial control input, the integer is the order of the iterative learning control scheme, and is a weighting parameter that prevents large fluctuations of the input at the beginning of the iterative operation. The term may be allowed to vary with the iteration, but it is kept fixed here for simplicity. is a time-varying matrix that regulates the rate of convergence and needs to satisfy a certain convergence condition (introduced below). As in [25, 27], , , and are learning matrices. In addition, it is assumed that , , and are bounded, with bounds , , and , respectively; for instance,
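Since equation (14) itself is not reproduced in this extraction, the following sketch implements a generic high-order PD-type update consistent with the description: a forgetting factor blends the previous input with the initial input, and learning gains act on the errors of the last N iterations. All names and gains here are illustrative assumptions, not the paper's exact law.

```python
import numpy as np

def high_order_ilc_update(u_prev, xi_hist, dxi_hist, P, Q, u0, beta=0.9):
    """One generic high-order ILC step (a sketch of a law of the form (14)):
    blend the previous input with the initial input via the weighting
    parameter beta, then add P- and D-type corrections built from the
    tracking errors of the last N = len(P) iterations."""
    u_next = beta * u_prev + (1.0 - beta) * u0
    for n in range(1, len(P) + 1):
        # xi_hist[-n] is the tracking error of iteration k + 1 - n
        u_next = u_next + P[n - 1] * xi_hist[-n] + Q[n - 1] * dxi_hist[-n]
    return u_next

# Scalar illustration with second-order (N = 2) error memory:
u_new = high_order_ilc_update(
    u_prev=np.array([1.0]),
    xi_hist=[np.array([3.0]), np.array([2.0])],   # errors of iterations k-1, k
    dxi_hist=[np.array([0.0]), np.array([0.5])],
    P=[0.5, 0.2], Q=[0.1, 0.0], u0=np.array([0.0]), beta=0.8,
)
print(u_new)
```

Keeping beta below one near the first iterations damps the large input fluctuations the text warns about, at the cost of slightly slower learning.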

*Remark 1. *If in (14), the iterative learning control scheme may be regarded as a PID-type control law similar to the conventional PID controller; and if , it can be seen as consisting of dual or multiple PID controllers. As more past learning errors are considered, the high-order iterative learning control scheme is expected to achieve a better control effect than the first-order one. Generally speaking, the higher the order of the control scheme, the better the expected iterative learning performance. However, in practice the choice of is normally not more than 3.

*Remark 2. *As with the choice of parameters in the traditional PID algorithm, the choice of , , and is usually based on a trial-and-error method. Considering that the purpose of is to increase the convergence speed of the system, of to eliminate the steady-state error, and of to suppress oscillation and improve the stability of the system, their weights can be tuned according to the control requirements. In addition, control approaches such as fuzzy adaptive methods, neural network algorithms, and particle swarm optimization can also be applied to obtain optimal learning matrices.

#### 4. Convergence Analysis

In this section, we analyze the convergence of the time-varying delayed multiagent systems (7) and (9) under the high-order iterative learning control scheme (14). The convergence conditions are given in the following theorem.

For clarity in the remaining discussion, the variable will be omitted, and the following notations will be introduced. That is, where represents the function concerned.

The partial derivatives of are expressed by

Hence, it is easy to find that , where is the Lipschitz constant of the function on the interval . We now state the main result of this paper.

Theorem 1. *Let the multiagent systems (7) and the virtual agent dynamics (9) satisfy Assumptions 1 and 2, and suppose that the initial state difference is bounded. If there exist and positive numbers satisfying
and then the bounds of the tracking errors and converge asymptotically to a sufficiently small interval as for . Furthermore, when the bounds of the initial state difference and disturbances tend to zero, the bound of the tracking errors also tends to zero.*

*Proof. *According to the multiagent systems (7) and (9), we have
and then
where .

Also, where and .

Substituting (11) into (14) yields

Furthermore, substituting (18), (20), (21), and (22) into (23) yields

Considering condition (19) and taking norms on both sides of (24), we obtain where

Writing the integral expression for and taking norms, we get

In the light of Definition 1 and the fact that , we have

Considering (28) and taking norms on both sides of (27), we obtain where

Obviously, for a sufficiently large , we then have

Similarly, taking norms on both sides of (25), we have where

Substituting (31) into (32) yields where

Considering condition (19) again, we have for a sufficiently large , and then . Hence, according to Lemma 1, we obtain

Furthermore, we also have

From (31), (36), and (38), it can be observed that the tracking errors , , and are all related to the initial state error and the disturbance bound . Furthermore, if , , and tend to zero, the bound of the tracking errors also tends to zero through the iterative operations of the learning algorithm. The proof is completed.

*Remark 3. *Under the convergence conditions (18) and (19), one finds that the convergence of iterative learning control is not influenced by the disturbances, the initial state difference, or even the choice of . However, the bound of the final tracking errors is directly influenced by those factors. In addition, the iterative learning control algorithm alone cannot eliminate the disturbances or the initial state difference, but it can realize perfect tracking once those factors no longer appear in subsequent iterations.
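Remark 3 can be illustrated with a minimal first-order ILC sketch on a scalar plant subject to an iteration-invariant disturbance (all numbers are illustrative assumptions, not the paper's simulation): because the disturbance repeats identically every iteration, the learned input absorbs it and the tracking error shrinks toward zero.

```python
import numpy as np

T = 50
tt = np.arange(T)
y_d = np.sin(0.2 * tt)            # desired trajectory (assumed)
d = 0.3 * np.cos(0.1 * tt)        # disturbance, identical every iteration
gamma = 0.5                       # learning gain
u = np.zeros(T)

def run_iteration(u):
    """Simulate one pass of the scalar plant x(t+1) = 0.3 x(t) + u(t) + d(t)."""
    x, y = 0.0, np.zeros(T)
    for k in range(T - 1):
        x = 0.3 * x + u[k] + d[k]
        y[k + 1] = x
    return y

errs = []
for _ in range(50):
    y = run_iteration(u)
    e = y_d - y
    errs.append(float(np.max(np.abs(e[1:]))))
    u[:-1] = u[:-1] + gamma * e[1:]   # u^{k+1}(t) = u^k(t) + gamma * e^k(t+1)
print(errs[0], errs[-1])              # error contracts across iterations
```

The sup-norm error contracts by roughly (1 − γ) + aγ/(1 − a) ≈ 0.71 per iteration for this plant, so fifty iterations leave a negligible residual, consistent with the remark's claim about repeating factors.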

#### 5. Simulation Analysis

In this section, we provide some cases to illustrate the effectiveness of the high-order iterative learning control scheme (14). Consider a time-varying delayed multiagent system with five agents labeled 1, 2, 3, 4, and 5. The virtual leader agent is labeled 0. The communication graph with the five agents and the virtual agent is shown in Figure 1.