Abstract

The distributed discrete-time coordinated tracking control problem is investigated for multiagent systems in the ideal (delay- and noise-free) case, where a group of agents with a fixed graph joins a leader-following group, extending the function of the original system in certain scenarios. The modified union switching topology is derived by applying a set of Markov chains to the edges of the graph through a novel mapping. The question of how to guarantee that all agents track the leader is answered through a PD-like consensus algorithm. The admissible sampling period and a feasible control gain are calculated by means of trigonometric function theory, and a mean-square bound on the tracking errors is provided. A simulation example is presented to demonstrate the validity of the theoretical results.

1. Introduction

Inspired by potential applications in engineering, such as networked autonomous vehicles, sensor networks [1], and formation control [2], distributed coordination of multiagent systems has attracted much attention from researchers [3, 4]. Recently, consensus problems have been studied extensively; see [5, 6] and the references therein. Many methods have been developed to deal with the consensus problem, such as linear system theory [7], impulsive control [8], and convex optimization [9]. Due to the complexity of networks, the control of multiagent systems is challenging and difficult. Therefore, how to design consensus control protocols for multiagent systems has become a significant research focus.

In most cases, the connectivity of the graph is not fixed, which may degrade the system performance and even cause instability. Therefore, many existing results concentrate on multiagent systems described by dynamic topologies [10, 11]. Among those works, some main results and progress on distributed coordination control were given, and systems under dynamic topologies were addressed through various methods [12, 13]. For example, the distributed consensus problem was studied for discrete-time multiagent systems with switching graphs, where each agent's velocity was constrained to lie in a nonconvex set [14]. Moreover, two consensus problems were solved under a switching topology that was only assumed to be uniformly connected [15]. Furthermore, the aforementioned works not only focused on first-order and second-order systems [16], but also on Euler-Lagrange models [17, 18], and even took time delays and noises into account [19, 20].

Due to random link failures, demand-driven reconfigurations, and sudden environmental disturbances, some dynamical systems can be modeled as Markovian switching systems, which have been developing rapidly [21–23]. The leader-following consensus problem was studied for sampled-data multiagent systems under Markovian switching topologies [24], and a more interesting case with multiple dynamic leaders was considered in [25]. In [26], under a switching topology governed by Markov chains, the consensus seeking problem was solved through a guaranteed cost control method; it was unnecessary for the Markov chain to be ergodic, since each topology had a spanning tree. In addition, it is often difficult to obtain all the elements of the transition rate matrix, and some of the elements are not necessary to guarantee system stability. A Markovian switching model with partially unknown transition rates was considered in [27], where no knowledge of the unknown elements was needed in the design procedure of the finite-time synchronization controller.

However, many practical systems must be treated as dynamic models, for instance when broken agents in a group are replaced or when the function of the original system is expanded. Even if the time-varying reference is tracked reliably in the original system, it remains open whether the union system can still achieve consensus after a set of followers with a fixed graph is attached. Besides, as far as we know, Markov chains are usually applied to the modes of the topologies, so that all of the subgraphs and the transition rate matrix must be fully or partly known. In contrast, Markov chains are applied to the edges of the graph in this paper, so that the union system can be described through a novel mapping; the distributed tracking problem for such a union system is a valuable topic to be researched.

The main purpose of this paper is to establish the Markovian switching topologies for the union system with two subgraphs. Through a novel mapping, the Markovian switching topologies are governed by a set of Markov chains applied to the edges of the graph. On this basis, the distributed coordinated tracking control problem is solved via a PD-like consensus algorithm adopted from [16]. Different from [16], a sufficient condition for system stability is obtained based on trigonometric function theory. As shown in Theorem 7, the tracking errors are ultimately bounded, and the bound is partly determined by the bounded changing rate of the reference and the number of agents. A simulation example further demonstrates the effectiveness of the strategy.

The rest of this paper is organized as follows. In Section 2, graph theory based on a novel Markov process is given and the PD-like consensus algorithm is introduced. In Section 3, the stability analysis and main results are provided. A simulation example is presented in Section 4, and the paper is concluded in Section 5.

2. Preliminaries

2.1. Graph Theory

Define a directed leader-following graph $\mathcal{G}_1 = (\mathcal{V}_1, \mathcal{E}_1)$ with one leader labeled as node 0 and $n$ followers, where $\mathcal{V}_1$ is a nonempty finite set of nodes and $\mathcal{E}_1 \subseteq \mathcal{V}_1 \times \mathcal{V}_1$ is a set of edges. For an edge $(j, i) \in \mathcal{E}_1$, if node $i$ can obtain information from node $j$, then $j$ is a neighbor of node $i$. A directed path is a sequence of edges of the form $(i_1, i_2), (i_2, i_3), \ldots$, where $i_k \in \mathcal{V}_1$. The adjacency matrix $\mathcal{A}_1 = [a_{ij}]$ is associated with $\mathcal{G}_1$, where $a_{ij} > 0$ if agent $i$ can obtain information from agent $j$ and $a_{ij} = 0$ otherwise. Assume $a_{ii} = 0$ and that the leader does not receive information from the followers. Thus, the adjacency matrix of $\mathcal{G}_1$ is denoted by

$$\mathcal{A}_1 = \begin{bmatrix} 0 & \mathbf{0}_{1 \times n} \\ \mathbf{a}_0 & \mathcal{A}_f \end{bmatrix}, \quad (1)$$

where $\mathbf{a}_0$ collects the weights from the leader to the followers and $\mathcal{A}_f$ is the adjacency matrix among the followers. Let $\mathcal{G}_2$ be a fixed and directed graph with $n_2$ agents. The adjacency matrix of $\mathcal{G}_2$ is given by $\mathcal{A}_2$. The union graph is denoted by $\mathcal{G} = \mathcal{G}_1 \cup \mathcal{G}_2$ with the node set $\mathcal{V} = \mathcal{V}_1 \cup \mathcal{V}_2$ and the edge set $\mathcal{E} = \mathcal{E}_1 \cup \mathcal{E}_2 \cup \mathcal{E}_{12}$. Hence

$$\mathcal{A}(\theta(k)) = \begin{bmatrix} \mathcal{A}_1 & \mathcal{A}_{12}(\theta(k)) \\ \mathcal{A}_{21}(\theta(k)) & \mathcal{A}_2 \end{bmatrix}, \quad (2)$$

where $\mathcal{A}_2$ is the adjacency matrix of $\mathcal{G}_2$, $\mathcal{A}_{12}(\theta(k))$ and $\mathcal{A}_{21}(\theta(k))$ are parts of the switching matrices among the nodes of $\mathcal{G}_1$ and $\mathcal{G}_2$, and $\theta(k)$ (for brevity, denoted by $\theta$) is a finite homogeneous Markov process, which will be detailed in the following section.
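To make the block structure of (2) concrete, the following minimal Python sketch assembles the union adjacency matrix from its four blocks; the argument names mirror the notation above, and the only structural constraint assumed is the leader's zero row.

```python
import numpy as np

def union_adjacency(A1, A2, A12, A21):
    """Assemble the union adjacency matrix of (2) from its blocks.
    A1:  ((n+1) x (n+1)) adjacency of the leader-following subgraph G1
         (row 0, the leader's row, is all zeros).
    A2:  (n2 x n2) adjacency of the fixed subgraph G2.
    A12, A21: Markov-switched interconnection blocks for one mode."""
    assert not A1[0].any() and not A12[0].any(), "leader receives nothing"
    return np.block([[A1, A12], [A21, A2]])
```

For each mode of the Markov process, A12 and A21 are filled in according to the current states of the switching edges.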

2.2. Markov Chains

Define a finite set $\mathbb{S} = \{0, 1\}$ and let $m$ denote the number of switching links between $\mathcal{G}_1$ and $\mathcal{G}_2$. The configuration of these links is collected in

$$\Lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m), \quad \lambda_l \in \mathbb{S}, \; l = 1, 2, \ldots, m, \quad (3)$$

so that each tuple $\Lambda$ determines the switching blocks $\mathcal{A}_{12}(\theta)$ and $\mathcal{A}_{21}(\theta)$ elementwise. Meanwhile, introduce the mapping $\varphi$ with

$$\theta = \varphi(\Lambda) = 1 + \sum_{l=1}^{m} \lambda_l 2^{\,l-1}. \quad (4)$$

Then, the mapping $\varphi$ is a bijection from $\mathbb{S}^m$ to $\{1, 2, \ldots, 2^m\}$.
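As a quick illustration of the bijection (4), the tuple of binary edge states can be encoded as a mode index and decoded back. This is a sketch only; the bit order and the unit offset are assumptions consistent with, but not dictated by, the text.

```python
from itertools import product

def edge_states_to_mode(states):
    """Encode a tuple (lambda_1, ..., lambda_m) of binary edge states as a
    mode index theta in {1, ..., 2**m}, following the form assumed in (4)."""
    return 1 + sum(s << l for l, s in enumerate(states))

def mode_to_edge_states(theta, m):
    """Inverse mapping: recover the m edge states from the mode index."""
    return tuple(((theta - 1) >> l) & 1 for l in range(m))

# A round trip over all tuples confirms the mapping is a bijection.
m = 3
assert all(mode_to_edge_states(edge_states_to_mode(t), m) == t
           for t in product((0, 1), repeat=m))
```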

Remark 1. Based on the bijection in (4), the transition probability can be derived as follows.
Firstly, for the switching block $\mathcal{A}_{12}(\theta)$, each mode $\theta$ corresponds to the only matrix determined by the link configuration $\varphi^{-1}(\theta)$. In the modes $\theta = p$ and $\theta = q$, the corresponding edge configurations $\varphi^{-1}(p)$ and $\varphi^{-1}(q)$ are obtained.

Assume that each edge of $\mathcal{E}_{12}$ takes value in the set $\mathbb{S} = \{0, 1\}$ with unequal probabilities. The transition rate matrix of a single edge $l$ is given by

$$P_l = \begin{bmatrix} p_{00}^{(l)} & p_{01}^{(l)} \\ p_{10}^{(l)} & p_{11}^{(l)} \end{bmatrix},$$

where $p_{rs}^{(l)}$ is the probability that edge $l$ moves from state $r$ to state $s$ in one step.

In addition, each Markov chain is ergodic throughout this paper. It is obvious that $p_{r0}^{(l)} + p_{r1}^{(l)} = 1$ for $r \in \{0, 1\}$, where $p_{rs}^{(l)}$ represents the transition probability from one mode to another. Let the transition probability of edge $l$ be $p_{r_l s_l}^{(l)}$ while the edge moves from state $r_l$ to state $s_l$; then the one-step behavior of every switching link is fully specified.

The same procedure is applied to the matrix $\mathcal{A}_{21}(\theta)$ and the remaining switching blocks. Considering all the links together, for brevity, the total probability is denoted as

$$\Pr\{\theta(k+1) = q \mid \theta(k) = p\} = \prod_{l=1}^{m} p_{r_l s_l}^{(l)},$$

since the edge chains are mutually independent.

Finally, the total number of system modes is $2^m$, and the overall transition rate matrix is

$$P = P_1 \otimes P_2 \otimes \cdots \otimes P_m,$$

which follows from the mutual independence of the edge chains.
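Under the independence assumption on the edge chains, the overall transition matrix over all $2^m$ modes is the Kronecker product of the per-edge matrices. A minimal numerical check (the mode indexing induced by the Kronecker product may differ from the mapping (4) by a fixed permutation):

```python
import numpy as np
from functools import reduce

def total_transition_matrix(edge_chains):
    """Joint transition matrix of m mutually independent two-state edge
    chains: the Kronecker product of the per-edge (2 x 2) matrices."""
    return reduce(np.kron, edge_chains)

# Example: three switching links with different transition matrices.
P1 = np.array([[0.9, 0.1], [0.2, 0.8]])
P2 = np.array([[0.7, 0.3], [0.4, 0.6]])
P3 = np.array([[0.5, 0.5], [0.5, 0.5]])
P = total_transition_matrix([P1, P2, P3])   # shape (8, 8): 2**3 modes
assert np.allclose(P.sum(axis=1), 1.0)      # rows still sum to one
```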

2.3. PD-Like Consensus Algorithm

Suppose the discrete-time dynamics of follower $i$ is

$$x_i[k+1] = x_i[k] + T u_i[k], \quad (12)$$

where $x_i[k]$ is the state at time $kT$, $k$ is the discrete-time index, $T$ is the sampling period, and $u_i[k]$ is the control input.

Let the reference state be $x^r[k]$. Consider the discrete-time coordinated tracking algorithm adopted from [16], together with the Markovian parameter $\theta$; consensus algorithm (13) will be applied to the agents in graph $\mathcal{G}$:

$$u_i[k] = \frac{1}{\sum_{j=0}^{N} a_{ij}(\theta)} \sum_{j=0}^{N} a_{ij}(\theta) \left[ \frac{x_j[k] - x_j[k-1]}{T} - \gamma \left( x_i[k] - x_j[k] \right) \right], \quad (13)$$

where $x_0[k] = x^r[k]$, $N$ is the total number of followers in $\mathcal{G}$, $a_{ij}(\theta)$ is the $(i, j)$ entry of $\mathcal{A}(\theta)$, and $\gamma$ is a positive constant. Suppose that each follower has at least one neighbor; thus $\sum_{j=0}^{N} a_{ij}(\theta) > 0$ for $i = 1, \ldots, N$. Applying (12) and (13) yields the closed-loop dynamics (14). Define the tracking error $\tilde{x}_i[k] = x_i[k] - x^r[k]$; let $\tilde{x}[k] = [\tilde{x}_1[k], \ldots, \tilde{x}_N[k]]^T$ and $z[k] = [\tilde{x}^T[k], \tilde{x}^T[k-1]]^T$; it follows that

$$z[k+1] = M(\theta(k)) z[k] + b[k], \quad (15)$$

where the mode matrix $M(\theta)$ and the reference-driven term $b[k]$ are defined in (16).
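The update (12)-(13) can be summarized in a few lines. The sketch below follows the discrete-time PD-like algorithm of [16] with the leader stacked as agent 0; the exact weighting is an assumption based on that reference rather than a verbatim restatement.

```python
import numpy as np

def pd_like_step(x, x_prev, xr, xr_prev, A, T, gamma):
    """One step of the PD-like tracking update, a sketch of (12)-(13).
    x, x_prev: follower states at steps k and k-1 (length-N arrays).
    xr, xr_prev: reference state at steps k and k-1 (agent 0).
    A: (N x (N+1)) rows [a_i0, a_i1, ..., a_iN] of A(theta) at step k.
    T: sampling period; gamma: positive control gain."""
    z = np.concatenate(([xr], x))          # stacked states, leader first
    z_prev = np.concatenate(([xr_prev], x_prev))
    n = len(x)
    x_next = np.empty(n)
    for i in range(n):
        eta = A[i].sum()                   # assumed > 0 (>= 1 neighbor)
        # weighted derivative estimate of neighbors plus proportional term
        u = (A[i] @ ((z - z_prev) / T - gamma * (x[i] - z))) / eta
        x_next[i] = x[i] + T * u           # single-integrator dynamics (12)
    return x_next
```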

$z[k]$ is not a Markov process, but the joint process $(z[k], \theta(k))$ is. Assume that the reference trajectory $x^r[k]$ is a deterministic signal instead of a random one. The initial state of the joint process is denoted as $(z[0], \theta(0))$. It follows that the solution of (15) is

$$z[k] = M(\theta(k-1)) \cdots M(\theta(0))\, z[0] + \sum_{s=0}^{k-1} M(\theta(k-1)) \cdots M(\theta(s+1))\, b[s].$$

Note that the eigenvalues of $M(\theta)$ play an important role in determining $z[k]$ as $k \to \infty$.

3. Convergence Analysis

Theorem 2. Suppose that the leader has directed paths to all followers 1 to $N$ in $\mathcal{G}$; then $D^{-1}\mathcal{A}_f(\theta)$ has all eigenvalues within the unit circle, where $\mathcal{A}_f(\theta)$ denotes the follower-to-follower block of $\mathcal{A}(\theta)$ and $D^{-1}$ is denoted as the inverse of $D = \operatorname{diag}\{\sum_{j=0}^{N} a_{1j}(\theta), \ldots, \sum_{j=0}^{N} a_{Nj}(\theta)\}$.

Proof. There exists $D_1$ (resp., $D_2$) corresponding to $\mathcal{G}_1$ (resp., $\mathcal{G}_2$) as denoted in (16); it follows from (3) that the normalized matrix admits the block decomposition (17). Then, it is obvious that (18) holds. All the elements of the four blocks in (18) are less than 1. Based on Lemma 3.1 in [5], (18) has all eigenvalues within the unit circle.

Lemma 3 ([28], Proposition 3.6). Let $\mathbb{A} = (P^{T} \otimes I)\,\operatorname{diag}\{M_1 \otimes M_1, \ldots, M_{2^m} \otimes M_{2^m}\}$, where $P$ is the transition matrix of $\theta$ and $M_s$ is defined in (16); let $\otimes$ represent the Kronecker product of matrices. If $\rho(\mathbb{A}) < 1$, then system (15) with $b[k] \equiv 0$ is mean-square stable, where $\rho(\cdot)$ denotes the matrix spectral radius.
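Lemma 3 can be checked numerically. The sketch below builds one common form of the second-moment matrix of a discrete-time Markov jump linear system, namely $(P^T \otimes I)\,\operatorname{diag}(M_s \otimes M_s)$; the exact block ordering in [28] may differ, so this construction is an assumption rather than a restatement of that reference.

```python
import numpy as np
from scipy.linalg import block_diag

def second_moment_matrix(P, Ms):
    """Second-moment operator of x[k+1] = M_{theta[k]} x[k].
    P:  (S x S) mode transition matrix.
    Ms: list of S mode matrices (here, the M_s of (16) as NumPy arrays)."""
    blocks = [np.kron(M, M) for M in Ms]       # one d^2 x d^2 block per mode
    D = block_diag(*blocks)
    d = Ms[0].shape[0] ** 2
    return np.kron(P.T, np.eye(d)) @ D

def mean_square_stable(P, Ms):
    """Lemma 3 check: spectral radius strictly below one."""
    A = second_moment_matrix(P, Ms)
    return np.max(np.abs(np.linalg.eigvals(A))) < 1.0
```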

Theorem 4. Suppose that the leader has directed paths to all nodes in the union graph $\mathcal{G}$, so that Theorem 2 holds obviously. If the positive scalars $T$ and $\gamma$ satisfy condition (21), with the quantities therein defined in (22), then $M(\theta)$ has all eigenvalues within the unit circle for every mode.

Proof.
Step 1. The matrix $M(\theta)$ has $2^m$ modes based on the analysis in Section 2. If the leader has directed paths to all followers, it follows from Theorem 2 that (18) has all eigenvalues within the unit circle. It will be shown through perturbation arguments that the same can be enforced for the closed-loop matrix. Hence $M(\theta)$ can be written in a decomposed form; applying an elementary transformation block matrix, the similarity transformation of $M(\theta)$ can be calculated. The issue is thereby converted into finding conditions ensuring that all eigenvalues of the transformed matrix are within the unit circle.
Step 2. The characteristic polynomial of the transformed matrix is analyzed next. Note that $\mu$ is an eigenvalue of (18), which lies within the unit circle. Define $\mu = \sigma e^{i\phi}$, where $\sigma$ is the modulus of $\mu$, $\phi \in [0, 2\pi)$ is its argument, and $i$ is the imaginary unit. The roots of the characteristic polynomial therefore satisfy a quadratic equation in which $\mu$ enters through $\sigma$ and $\phi$; in particular, (28) can be noted. Let $\lambda_1$ and $\lambda_2$ denote the two roots, written as in (29). Based on (28), (30) is obtained, and it follows from (30) that (31) holds. Using (29) and (31), after some manipulation, (31) can be rewritten in a trigonometric form. To prove that $\lambda_1$ and $\lambda_2$ are within the unit circle, impose $|\lambda_1| < 1$ and $|\lambda_2| < 1$; then the following holds: (34). To obtain the condition on $T$, (34) is transformed into (35). With the limit conditions on $\sigma$ and $\phi$ together with (35), the range of $T$ can be obtained.
Firstly, in the analysis of the first inequality, a standard trigonometric substitution is made; after some manipulation, this yields the first admissible range of $T$, where $T$ satisfies (22) with the corresponding parameters.
Then, for the second inequality, proceeding in the same manner, a second admissible range is obtained; similarly, $T$ satisfies (22) with the corresponding parameters.
Finally, sufficient condition (21) is proved exactly. It follows from Lemma 3.1 in [5] that $M(\theta)$ has all eigenvalues within the unit circle. Thus, under (21), the system tracking errors converge stably.
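Because conditions (21) and (22) bound the admissible pair $(T, \gamma)$ analytically, a candidate pair can also be cross-checked numerically: Theorem 4 requires every mode matrix to be Schur stable. In the sketch below, `build_modes` is a hypothetical user-supplied helper returning the matrices $M_s$ of (16) for given $T$ and $\gamma$; the bisection further assumes stability is monotone in $T$, which the trigonometric bounds above suggest but which is not proved here.

```python
import numpy as np

def all_modes_schur(mode_matrices, tol=1e-9):
    """Numerical counterpart of Theorem 4: every mode matrix M_s must have
    all eigenvalues strictly inside the unit circle (Schur stability)."""
    return all(np.max(np.abs(np.linalg.eigvals(M))) < 1.0 - tol
               for M in mode_matrices)

def largest_sampling_period(build_modes, gamma, T_hi=1.0, iters=40):
    """Bisect for the largest admissible T, assuming stability holds for
    all T below some threshold.  build_modes(T, gamma) is a hypothetical
    helper returning the list of mode matrices M_s of (16)."""
    lo, hi = 0.0, T_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if all_modes_schur(build_modes(mid, gamma)):
            lo = mid
        else:
            hi = mid
    return lo
```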

Remark 5. The Markov chains are required to be ergodic; therefore it can be ensured that the leader has directed paths to all followers in $\mathcal{G}$. Certainly, the results can be extended to the case where all the links are governed by Markov chains; through the mapping in (4), the resulting Markovian switching topologies can still be addressed. In this way, a large number of system modes can be described, and the traditional Markovian switching topologies can be recovered through adjustment of the link modes and the transition rate matrix. However, this magnifies the calculation load, and the case of unknown or partly unknown transition probabilities will be considered in the future.

Lemma 6 ([16], Lemma 3.5). Let $\mathbb{A}$ be defined as in Lemma 3. For small enough $T$, $\rho(\mathbb{A}) < 1$ if and only if the leader has directed paths to all followers in the union graph rather than in the subgraphs alone.

Theorem 7. Assume that the reference $x^r[k]$ has a bounded changing rate; that is,

$$\left| \frac{x^r[k+1] - x^r[k]}{T} \right| \le \bar{x}, \quad (41)$$

and that the leader has directed paths to all followers 1 to $N$ in the union graph. When Theorem 4 holds, using algorithm (13), there exist $\beta \ge 1$ and $0 < \zeta < 1$ such that the tracking errors of the agents are ultimately mean-square bounded as in (42).

Proof. It follows from (15) that the error evolution can be expanded over the switching sequence. Noting that $x^r[k]$ is deterministic, based on (41), the forcing term $b[k]$ is bounded in norm. Based on Lemmas 3.4 and 3.5 and Theorem 3.9 in [28], there exist $\beta \ge 1$ and $0 < \zeta < 1$ yielding an exponential mean-square bound on the homogeneous part. Noting that $0 < \zeta < 1$, after some manipulation, it follows that (42) holds. Therefore, as $T \to 0$, the bound tends to zero. The same as in Theorem 3.2 of [16], the tracking errors will ultimately go to zero as $T \to 0$. But for the original interaction topology $\mathcal{G}_1$, the ultimate mean-square bound obtained through the same method is smaller than that of the union system.
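The bound of Theorem 7 can be estimated empirically by averaging the squared tracking error over many realizations of the switching sequence; `simulate_error` below is a hypothetical closure standing in for one run of system (15).

```python
import numpy as np

def ultimate_mean_square_bound(simulate_error, runs=500):
    """Monte Carlo estimate of lim sup_k E[ ||x_tilde[k]||^2 ].
    simulate_error() is a user-supplied closure (hypothetical here)
    returning the stacked tracking-error vector at a large final step
    of one realization of the Markovian switching sequence."""
    samples = [np.sum(simulate_error() ** 2) for _ in range(runs)]
    return float(np.mean(samples))
```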

4. Simulation Results

In this section, a simulation example is given to verify the effectiveness of the theoretical results. For brevity, let $a_{ij} = 1$ if agent $i$ can obtain information from agent $j$, $i, j \in \mathcal{V}$, and $a_{ij} = 0$ otherwise. The subgraphs $\mathcal{G}_1$ and $\mathcal{G}_2$ are shown in Figure 1.

It follows from Section 2.2 that each edge Markov chain has two modes, meaning that each switching link is either connected or disconnected, and the transition rate matrices are considered as in Table 1.

As an example, some modes of the edges are shown in Figure 2.

For the PD-like discrete-time consensus algorithm, the initial states of the agents in $\mathcal{G}_1$ and $\mathcal{G}_2$ are specified, and the initial reference state $x^r[0]$ is also defined. Distributed controller (13) is implemented with the parameters in the following four cases.

Case 1. A baseline sampling period $T$ and control gain $\gamma$.

Case 2. A smaller sampling period $T$ than in Case 1, with the same control gain $\gamma$.

Case 3. The same sampling period $T$ as in Case 2, with a larger control gain $\gamma$.

Case 4. A sampling period $T$ that violates the sufficient condition of Theorem 4.

Simulation results are shown in Figures 3–6.

Figure 3 shows the plots of the system states and tracking errors with a time-varying reference under the Case 1 parameters. Once the two fixed groups are combined at half of the simulation time, all the followers can still track the reference. More specifically, the system state and tracking error curves are smooth in the first half of the time; but in the remaining time, as shown in the partially enlarged details, there is noticeable jitter on the plots for all agents, since the links between agents in $\mathcal{G}_1$ and $\mathcal{G}_2$ are governed by the random Markov chains.

Under the same reference, compared with Figure 3, the system states track more effectively and the tracking errors are smaller under Case 2. Furthermore, as shown in Figures 4 and 5, a bigger control gain with the same $T$ produces a quicker response and smaller ultimate tracking errors. But for Figure 6, under Case 4, the behavior is unpredictable, because $T$ does not meet the condition of Theorem 4; it can be noted that the tracking errors become unbounded in this case. What should be stressed is that, based on Theorem 4, the largest admissible value of $T$ is approximately 0.44. Moreover, a quantitative comparison among the four cases is given in Table 2, which shows the mean and standard deviation of the tracking errors in the second half of the simulation time. The comparison results show that the tracking errors clearly depend on $T$ and $\gamma$.
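The exact graphs of Figure 1 and the numerical case parameters are not reproduced here, so the following self-contained sketch uses an illustrative four-follower chain with one Markov-switched link and placeholder $(T, \gamma)$ pairs; it mimics the structure of the experiment (the PD-like update combined with randomly switching links) rather than replicating Figures 3–6.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative topology (not Figure 1): rows are followers 1..4, columns
# are [leader, follower 1, ..., follower 4]; the leader reaches everyone
# through the chain 0 -> 1 -> 2 -> 3 -> 4.
A_fixed = np.array([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
], dtype=float)

P_edge = np.array([[0.9, 0.1],      # assumed two-state chain for the
                   [0.3, 0.7]])     # single switching link 4 -> 2

def simulate(T, gamma, steps=400):
    n = A_fixed.shape[0]
    x = rng.uniform(-1.0, 1.0, n)
    x_prev = x.copy()
    xr = lambda k: np.sin(0.05 * k * T)            # assumed reference signal
    edge, errs = 0, []
    for k in range(steps):
        edge = int(rng.choice(2, p=P_edge[edge]))  # sample the edge chain
        A = A_fixed.copy()
        A[1, 4] = float(edge)                      # toggle link 4 -> 2
        z = np.concatenate(([xr(k)], x))           # leader stacked first
        z_prev = np.concatenate(([xr(k - 1)], x_prev))
        x_next = np.empty(n)
        for i in range(n):
            eta = A[i].sum()                       # >= 1 by construction
            u = (A[i] @ ((z - z_prev) / T - gamma * (x[i] - z))) / eta
            x_next[i] = x[i] + T * u               # dynamics (12)
        x_prev, x = x, x_next
        errs.append(np.max(np.abs(x - xr(k + 1))))
    return errs

# Placeholder parameter pairs echoing the four cases qualitatively.
for T, gamma in [(0.2, 1.0), (0.1, 1.0), (0.1, 2.0), (0.6, 1.0)]:
    tail = float(np.mean(simulate(T, gamma)[-100:]))
    print(f"T={T:4.2f}, gamma={gamma:3.1f}: mean tail error {tail:.4f}")
```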

5. Conclusion

In this paper, distributed discrete-time coordinated tracking control for multiagent systems is investigated on a union graph with Markov chains. Based on a novel mapping, Markovian switching topologies are redesigned by applying Markov chains to the edge set. The PD-like discrete-time consensus algorithm is applied to deal with the time-varying reference. A sufficient condition on the admissible sampling period and a feasible control gain is obtained in terms of trigonometric function theory. Both the theoretical and simulation results show that the ultimate tracking errors are related to the sampling period. Although we focus on discrete-time multiagent systems with an ideal communication network, an extended analysis may be considered for the case with time delays, which will be addressed in our future work.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61473248 and the Natural Science Foundation of Hebei Province under Grant F2016203496.