Journal of Control Science and Engineering
Volume 2012, Article ID 313716, 8 pages
http://dx.doi.org/10.1155/2012/313716
Research Article

Distributed Model Predictive Control of Multi-Agent Systems with Improved Control Performance

College of Automation, Chongqing University, Chongqing 400044, China

Received 8 October 2011; Revised 22 December 2011; Accepted 19 January 2012

Academic Editor: Yugeng Xi

Copyright © 2012 Wei Shanbi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper addresses a distributed model predictive control (DMPC) scheme for multiagent systems with improved control performance. In order to penalize the deviation of the computed state trajectory from the assumed state trajectory, a deviation punishment term is included in the local cost function of each agent. Closed-loop stability is guaranteed when the weight on the deviation punishment is large; however, a large weight incurs a considerable loss of control performance. Hence, time-varying compatibility constraints are designed for each agent to balance closed-loop stability against control performance, so that closed-loop stability is achieved with a small weight on the deviation punishment. A numerical example is given to illustrate the effectiveness of the proposed scheme.

1. Introduction

Interest in the cooperative control of multiagent systems has grown significantly in recent years. The main motivation is the wide range of military and civilian applications, including formation flight of UAVs and automated traffic systems. Compared with traditional approaches, model predictive control (MPC), or receding horizon control (RHC), has the ability to redefine cost functions and constraints as needed to reflect changes in the system and/or the environment. Therefore, MPC is extensively applied to the cooperative control of multiagent systems, where it allows the agents to operate close to the constraint boundaries and to obtain better performance than traditional approaches [13]. Moreover, owing to its computational advantages and convenient communication requirements, distributed MPC (DMPC) is recognized as a natural technique for addressing trajectory optimization problems for multiagent systems.

One of the challenges for distributed control is to ensure that local control actions remain consistent with the actions of the other agents [4, 5]. For coupled systems, [6] solves the local optimization problem based on the states of the neighbors at each sampling instant using a Nash-optimization technique; since the local controllers lack communication and cooperation, the local control actions cannot remain consistent. [7, 8] require each local controller to exchange information with all other local controllers to improve optimality and consistency, which relies on sufficient communication. For decoupled systems, [9] exploits an estimation of the predicted state trajectories of the neighbors, while [10] treats the predicted state trajectories of the neighboring agents as a bounded disturbance and solves a min-max optimization problem for each agent with respect to the worst-case disturbance. In [11, 12], the optimization variables of the local problem comprise the control actions of the agent itself and of its neighbors, which are coupled through collision avoidance constraints and the cost function. Obviously, the deviation between what an agent actually does and what its neighbors estimate it will do affects the control performance; sometimes consistency and collision avoidance cannot be achieved, and the feasibility and stability of such a scheme cannot be guaranteed. [13] proposes a distributed MPC with a fixed compatibility constraint to restrict the deviation: when the bound of this constraint is sufficiently small, the closed-loop system state enters a neighborhood of the objective state. [14, 15] give an improvement over [13] by adding a deviation punishment term to penalize the deviation of the computed state trajectory from the assumed state trajectory; closed-loop exponential stability follows if the weight on the deviation punishment term is large enough, but a large weight leads to a loss of control performance.

A contribution of this paper is an approach to reducing the adverse effect of the deviation punishment on the control performance. At each sampling instant, the value of the compatibility constraint is set to the maximum deviation observed at the previous sampling instant. We give a stability condition that guarantees exponential stability of the global closed-loop system with a small weight on the deviation punishment term; it is obtained by splitting the centralized stability constraint in the manner of [16, 17]. The effectiveness of the scheme is also demonstrated by a numerical example.

Notations
$x^i_k$ is the value of vector $x^i$ at time $k$; $x^i_{k,t}$ is the value of vector $x^i$ at the future time $k+t$, predicted at time $k$; $|x| = [|x_1|, |x_2|, \ldots, |x_N|]^T$ is the componentwise absolute value of $x$. For a vector $x$ and a positive-definite matrix $Q$, $\|x\|^2_Q = x^T Q x$.
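As a concrete reference for this notation, the following minimal Python helper (our illustration, not code from the paper) evaluates the weighted norm that appears in every cost expression below.

```python
import numpy as np

def weighted_norm_sq(x, Q):
    """Return ||x||_Q^2 = x^T Q x for a vector x and positive-definite Q."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Example: with Q = I the weighted norm reduces to the squared Euclidean norm.
x = np.array([3.0, 4.0])
assert np.isclose(weighted_norm_sq(x, np.eye(2)), 25.0)
```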

2. Problem Statement

Let us consider a system composed of $N_a$ agents. The dynamics of agent $i$ [11] are
$$x^i_{k+1} = f^i\left(x^i_k, u^i_k\right), \tag{1}$$
where $u^i_k \in \mathbb{R}^{m_i}$, $x^i_k \in \mathbb{R}^{n_i}$, and $f^i : \mathbb{R}^{n_i} \times \mathbb{R}^{m_i} \to \mathbb{R}^{n_i}$ are the input, state, and state transition function of agent $i$, respectively; $u^i_k = [u^i_{k,1}, \ldots, u^i_{k,m_i}]^T$ and $x^i_k = [x^i_{k,1}, \ldots, x^i_{k,n_i}]^T$. The sets of feasible inputs and states of agent $i$ are denoted $\mathcal{U}^i \subseteq \mathbb{R}^{m_i}$ and $\mathcal{X}^i \subseteq \mathbb{R}^{n_i}$, respectively; that is,
$$u^i_k \in \mathcal{U}^i, \quad x^i_k \in \mathcal{X}^i, \quad \forall k \ge 0. \tag{2}$$
At each time $k$, the control objective is [18] to minimize
$$J_k = \sum_{t=0}^{\infty}\left(\left\|x_{k,t}\right\|^2_Q + \left\|u_{k,t}\right\|^2_R\right) \tag{3}$$
with respect to $u_{k,t}$, $t \ge 0$, where $x = [(x^1)^T, \ldots, (x^{N_a})^T]^T$, $u = [(u^1)^T, \ldots, (u^{N_a})^T]^T$; $x^i_{k,t+1} = f^i(x^i_{k,t}, u^i_{k,t})$, $x^i_{k,0} = x^i_k$; $Q = Q^T > 0$, $R = R^T > 0$; $u \in \mathbb{R}^m$, $m = \sum_i m_i$, and $x \in \mathbb{R}^n$, $n = \sum_i n_i$. Then,
$$x_{k+1} = f\left(x_k, u_k\right), \tag{4}$$
where $f = [f^1, f^2, \ldots, f^{N_a}]^T$, $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$. $(x^i_e, u^i_e)$ is the equilibrium point of agent $i$, and $(x_e, u_e)$ is the corresponding equilibrium point of all agents; $\mathcal{X} = \mathcal{X}^1 \times \mathcal{X}^2 \times \cdots \times \mathcal{X}^{N_a}$ and $\mathcal{U} = \mathcal{U}^1 \times \mathcal{U}^2 \times \cdots \times \mathcal{U}^{N_a}$. The models of the agents are completely decoupled. Coupling between agents arises because they operate in the same environment and because the "cooperative" objective is imposed on each agent by the cost function. Hence, there are a coupled cost function and coupling constraints [19]. The coupling constraints can be transformed directly into coupled cost function terms or handled as decoupled constraints using the technique of [15]; in the present paper we do not consider this issue.
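To make the structure of (1)–(4) concrete, the following Python sketch (our illustration; the function names and the finite truncation of the infinite horizon are assumptions, not part of the paper) stacks decoupled agent models and evaluates an approximation of the global cost (3).

```python
import numpy as np

def simulate_global_cost(f_list, x0_list, u_seq_list, Q, R, horizon=50):
    """f_list[i]: callable (x_i, u_i) -> next state of agent i, as in (1).
    u_seq_list[i][t]: input of agent i at step t. The infinite sum in (3)
    is truncated at `horizon` for illustration."""
    x_list = [np.asarray(x0, dtype=float) for x0 in x0_list]
    J = 0.0
    for t in range(horizon):
        x = np.concatenate(x_list)                        # stacked global state
        u = np.concatenate([u_seq_list[i][t] for i in range(len(f_list))])
        J += x @ Q @ x + u @ R @ u                        # stage cost of (3)
        x_list = [f_list[i](x_list[i], u_seq_list[i][t])  # decoupled updates (1)
                  for i in range(len(f_list))]
    return J
```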

The control objective for the overall system is to cooperatively and asymptotically stabilize all agents to an equilibrium point $(x_e, u_e)$ of (4). In this paper we assume that $(x_e, u_e) = (0, 0)$ and $f(x_e, u_e) = 0$; the corresponding equilibrium point of each agent is $(x^i_e, u^i_e) = (0, 0)$ with $f^i(x^i_e, u^i_e) = 0$. The assumption $f^i(0, 0) = 0$ is not restrictive, since if $(x^i_e, u^i_e) \neq (0, 0)$ one can always shift the origin of the system to it.

The control law minimizing (3) can be implemented in a centralized way. However, the existing methods for centralized MPC are computationally tractable only for small-scale systems, and the communication cost of implementing a centralized receding horizon control law may be high. Hence, by means of decomposition, $J_k$ is divided into terms $J^i_k$ such that the minimization of (3) is implemented in a distributed manner, with
$$J^i_k = \sum_{t=0}^{\infty}\left(\left\|z^i_{k,t}\right\|^2_{Q_i} + \left\|u^i_{k,t}\right\|^2_{R_i}\right), \quad J_k = \sum_{i=1}^{N_a} J^i_k, \tag{5}$$
where $z^i_{k,t} = [(x^i_{k,t})^T \ (x^{-i}_{k,t})^T]^T$ and $x^{-i}_{k,t}$ collects the states of the neighbors. The set of neighbors of agent $i$ is denoted $\mathcal{N}_i$; $x^{-i}_k = \{x^j_k \mid j \in \mathcal{N}_i\}$, $x^{-i}_k \in \mathbb{R}^{n_{-i}}$, $n_{-i} = \sum_{j \in \mathcal{N}_i} n_j$. For each agent $i$, the control objective is to stabilize it to the equilibrium point $(x^i_e, u^i_e)$; $Q_i = Q_i^T > 0$, $R_i = R_i^T > 0$. $Q_i$ is obtained by dividing $Q$ using the technique of [19]. Since the agents have decoupled dynamics, couplings of the control moves across the overall system are not considered; $R$ is a diagonal matrix, and $R_i$ is obtained directly.

Under the networked environment, the bandwidth limitation can restrict the amount of information exchange [17]. It is thus appropriate to allow agents to exchange information only once in each sampling interval. We assume that the connectivity of the interagent communication network is sufficient for agents to obtain information regarding all the variables that appear in their local problems.

In the receding horizon control manner, a finite-horizon cost function is exploited to approximate $J^i_k$. According to (5), the prediction of the control moves over the prediction horizon for agent $i$ is based on an estimate of the neighbors' state trajectories $x^{-i}_{k,t}$, $t \le N$, which are substituted by the assumed state trajectories $\hat{x}^{-i}_{k,t}$, $t \le N$, as in [11]. In each control interval, the information transmitted between agents consists of the assumed state trajectories. Since the deviation of the computed state trajectory from the assumed state trajectory degrades the cooperative consistency and the efficiency of the distributed control moves, it is appropriate to penalize it by adding a deviation punishment term to the local cost function.

Define
$$u^i_{k,t} = F^i(k)\, x^i_{k,t}, \quad t \ge N. \tag{6}$$
$F^i(k)$ is the gain of the distributed state feedback controller.

Consider
$$\breve{J}^i_k = \sum_{t=0}^{N-1}\left(\left\|\hat{z}^i_{k,t}\right\|^2_{Q_i} + \left\|u^i_{k,t}\right\|^2_{R_i} + \left\|x^i_{k,t} - \hat{x}^i_{k,t}\right\|^2_{T_i}\right) + \sum_{t=N}^{\infty}\left(\left\|x^i_{k,t}\right\|^2_{\bar{Q}_i} + \left\|u^i_{k,t}\right\|^2_{\bar{R}_i}\right), \tag{7}$$
where
$$\hat{z}^i_{k,t} = \left[\left(x^i_{k,t}\right)^T \ \left(\hat{x}^{-i}_{k,t}\right)^T\right]^T, \quad \hat{x}^i_{k,0} = x^i_k, \tag{8}$$
and $\hat{x}^{-i}_{k,t}$ collects the assumed states of the neighbors. $\bar{Q}_i = \bar{Q}_i^T > 0$ and $\bar{R}_i = \bar{R}_i^T = R_i$ satisfy
$$\operatorname{diag}\left\{\bar{Q}_1, \bar{Q}_2, \ldots, \bar{Q}_{N_a}\right\} \ge Q, \quad \operatorname{diag}\left\{\bar{R}_1, \bar{R}_2, \ldots, \bar{R}_{N_a}\right\} = R. \tag{9}$$
Obviously, $\bar{Q}_i$ is designed to stabilize agent $i$ to the local equilibrium point independently, whereas $Q_i$ is designed to stabilize agent $i$ to the local equilibrium point cooperatively with its neighboring agents. $T_i$ is the weight on the deviation punishment term, which penalizes the deviation of the computed state trajectory from the assumed state trajectory.
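The following Python sketch (our illustration, with hypothetical argument names) evaluates the finite-horizon part of (7), with the infinite tail replaced by the terminal cost $\|x^i_{k,N}\|^2_{P_i}$ in the manner of (16) in Section 3.

```python
import numpy as np

def local_cost(x_pred, x_nbr_hat, x_self_hat, u_pred, Q_i, R_i, T_i, P_i):
    """Finite-horizon surrogate of (7). x_pred[t] is x^i_{k,t} (t = 0..N),
    x_nbr_hat[t] the assumed neighbor states hat x^{-i}_{k,t}, x_self_hat[t]
    the agent's own assumed trajectory hat x^i_{k,t}, u_pred[t] the inputs."""
    N = len(u_pred)
    J = 0.0
    for t in range(N):
        z_hat = np.concatenate([x_pred[t], x_nbr_hat[t]])   # hat z^i_{k,t} of (8)
        dev = x_pred[t] - x_self_hat[t]                     # deviation from assumed
        J += z_hat @ Q_i @ z_hat + u_pred[t] @ R_i @ u_pred[t] + dev @ T_i @ dev
    J += x_pred[N] @ P_i @ x_pred[N]                        # terminal cost
    return J
```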

At each time $k$, the optimization problem for distributed MPC becomes
$$\min_{U^i_k,\, F^i(k)} \breve{J}^i_k, \quad \text{s.t. } (1), (2), (6), (7), \tag{10}$$
where $U^i_k = [(u^i_{k,0})^T, (u^i_{k,1})^T, \ldots, (u^i_{k,N-1})^T]^T$. Only $u^i_k = u^i_{k,0}$ is implemented, and problem (10) is solved again at time $k+1$.

Remark 1. The local deviation punishment applied by each agent affects the control performance, that is, it incurs a loss of optimality.

3. Stability of Distributed MPC

This section discusses the stability of distributed MPC obtained by simply applying the procedure of centralized MPC. The compact and convex terminal set $\Omega_i$ is defined as
$$\Omega_i = \left\{x^i \in \mathbb{R}^{n_i} \mid \left(x^i\right)^T P_i\, x^i \le \alpha_i\right\}, \tag{11}$$
where $P_i > 0$ and $\alpha_i > 0$ are specified such that $\Omega_i$ is a control invariant set. Following the idea of [20, 21], one simultaneously determines a linear feedback such that $\Omega_i$ is positively invariant under this feedback.

Define the local linearization at the equilibrium point,
$$A_i = \frac{\partial f^i}{\partial x^i}(0, 0), \quad B_i = \frac{\partial f^i}{\partial u^i}(0, 0), \tag{12}$$
and assume that $(A_i, B_i)$ is stabilizable. When $x^i_{k,N+t}$, $t \ge 0$, enters the terminal set $\Omega_i$, the local linear feedback control law is taken as $u^i_{k,N+t} = F^i(k)\, x^i_{k,N+t} = K_i\, x^i_{k,N+t}$, where $K_i$ is a constant matrix calculated offline as follows.

3.1. Design of the Local Control Law

The following inequality is required for achieving closed-loop stability:
$$\left\|x^i_{k,N+t+1}\right\|^2_{P_i} - \left\|x^i_{k,N+t}\right\|^2_{P_i} \le -\left\|x^i_{k,N+t}\right\|^2_{\bar{Q}_i} - \left\|u^i_{k,N+t}\right\|^2_{\bar{R}_i}, \quad t \ge 0. \tag{13}$$

Lemma 1. Suppose that there exist $\bar{Q}_i > 0$, $\bar{R}_i > 0$, and $P_i > 0$ which satisfy the Lyapunov equation
$$\left(A_i + B_i K_i\right)^T P_i \left(A_i + B_i K_i\right) - P_i = -\kappa_i P_i - \bar{Q}_i - K_i^T \bar{R}_i K_i \tag{14}$$
for some $\kappa_i > 0$. Then there exists a constant $\alpha_i > 0$ such that $\Omega_i$ defined in (11) satisfies (13).
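One possible offline construction of $K_i$ and $P_i$ satisfying (14) is sketched below in Python (our recipe under the assumption that $\kappa_i \in (0,1)$ and that the closed-loop spectral radius satisfies $\rho(A_i + B_i K_i) < \sqrt{1-\kappa_i}$; the paper does not prescribe this particular construction).

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def terminal_ingredients(A, B, Q_bar, R_bar, kappa):
    """Compute a feedback K by discrete LQR, then P from the kappa-shifted
    Lyapunov equation (14):
        (A+BK)^T P (A+BK) - P = -kappa*P - Q_bar - K^T R_bar K."""
    # LQR feedback for the local linearization (12).
    X = solve_discrete_are(A, B, Q_bar, R_bar)
    K = -np.linalg.solve(R_bar + B.T @ X @ B, B.T @ X @ A)
    Ak = A + B @ K
    # Rewrite (14) as Ak'^T P Ak' - P = -S with Ak' = Ak/sqrt(1-kappa) and
    # S = (Q_bar + K^T R_bar K)/(1-kappa); requires rho(Ak) < sqrt(1-kappa).
    S = (Q_bar + K.T @ R_bar @ K) / (1.0 - kappa)
    P = solve_discrete_lyapunov(Ak.T / np.sqrt(1.0 - kappa), S)
    return K, P
```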

Remark 2. Lemma 1 is obtained directly from Lemma 1 in [21]. For MPC, the stability margin can be adjusted by tuning the value of $\kappa_i$ according to Lemma 1. With regard to DMPC, [11] adjusts the stability margin by tuning the weight in the local cost function. The control objective is to asymptotically stabilize the closed-loop system, so that $x^i_{k,\infty} = 0$ and $u^i_{k,\infty} = 0$. Summing (13) over $t = 0, \ldots, \infty$ gives
$$\sum_{t=N}^{\infty}\left(\left\|x^i_{k,t}\right\|^2_{\bar{Q}_i} + \left\|u^i_{k,t}\right\|^2_{\bar{R}_i}\right) \le \left\|x^i_{k,N}\right\|^2_{P_i}. \tag{15}$$
Considering both (7) and (15) yields
$$\breve{J}^i_k \le J^i_k = \sum_{t=0}^{N-1}\left(\left\|\hat{z}^i_{k,t}\right\|^2_{Q_i} + \left\|u^i_{k,t}\right\|^2_{R_i} + \left\|x^i_{k,t} - \hat{x}^i_{k,t}\right\|^2_{T_i}\right) + \left\|x^i_{k,N}\right\|^2_{P_i}, \tag{16}$$
where $J^i_k$ is a finite-horizon cost function consisting of a finite-horizon standard cost, which specifies the desired control performance, and a terminal cost, which penalizes the state at the end of the finite horizon.
The terminal region $\Omega_i$ for agent $i$ is designed so that it is invariant for the nonlinear system controlled by a local linear state feedback. The quadratic terminal cost $\|x^i_{k,N}\|^2_{P_i}$ bounds the infinite-horizon cost of the nonlinear system starting from $\Omega_i$ and controlled by the local linear state feedback.

3.2. Compatibility Constraint for Stability

As in [18], we define the two terms $\xi^i = x^i - \hat{x}^i$ and $\xi^{-i} = x^{-i} - \hat{x}^{-i}$, partition
$$Q_i = \begin{bmatrix} Q_i^{1} & Q_i^{12} \\ \left(Q_i^{12}\right)^T & Q_i^{3} \end{bmatrix},$$
and let
$$C_x(k) = \sum_{i=1}^{N_a}\sum_{t=1}^{N-1}\left(2\left(x^i_{k,t}\right)^T Q_i^{12}\, \xi^{-i}_{k,t} + 2\left(\hat{x}^{-i}_{k,t}\right)^T Q_i^{3}\, \xi^{-i}_{k,t} + \left(\xi^{-i}_{k,t}\right)^T Q_i^{3}\, \xi^{-i}_{k,t}\right), \quad C_\xi(k) = \sum_{i=1}^{N_a}\sum_{t=1}^{N-1}\left(\xi^i_{k,t}\right)^T T_i\, \xi^i_{k,t}. \tag{17}$$

Lemma 2. Suppose that (9) holds and that there exists $\rho(k)$ such that, for all $k > 0$,
$$0 \le \rho(k) \le 1, \quad \rho(k)\sum_{i=1}^{N_a}\left(\left\|\left[\left(x^i_k\right)^T, \left(\hat{x}^{-i}_k\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^i_{k,0}\right\|^2_{R_i}\right) + C_x(k) - C_\xi(k) \le 0. \tag{18}$$

Then, by solving the receding-horizon optimization problem
$$\min_{U^i_k} J^i_k, \quad \text{s.t. } (1), (2), (14), (16), \quad u^i_{k,N} = K_i x^i_{k,N}, \quad x^i_{k,N} \in \Omega_i, \tag{19}$$
and implementing $u^i_{k,0}$, the stability of the global closed-loop system is guaranteed once a feasible solution at time $k = 0$ is found.

Proof. Define $J(k) = \sum_{i=1}^{N_a} J^i_k$. Suppose that, at time $k$, there are optimal solutions $U^{i*}_k$, $i \in \{1, \ldots, N_a\}$, which yield
$$J^*(k) = \sum_{i=1}^{N_a}\left(\left\|\left[\left(x^i_k\right)^T, \left(\hat{x}^{-i}_k\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,0}\right\|^2_{R_i}\right) + \sum_{i=1}^{N_a}\sum_{t=1}^{N-1}\left(\left\|\left[\left(x^{i*}_{k,t}\right)^T, \left(\hat{x}^{-i}_{k,t}\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,t}\right\|^2_{R_i} + \left\|x^{i*}_{k,t} - \hat{x}^i_{k,t}\right\|^2_{T_i}\right) + \sum_{i=1}^{N_a}\left\|x^{i*}_{k,N}\right\|^2_{P_i}. \tag{20}$$
At time $k+1$, according to Lemma 2, $U^i_{k+1} = \{u^{i*}_{k,1}, \ldots, u^{i*}_{k,N-1}, K_i x^{i*}_{k,N}\}$ is feasible, which yields
$$J(k+1) = \sum_{i=1}^{N_a}\sum_{t=1}^{N}\left(\left\|\left[\left(x^{i*}_{k,t}\right)^T, \left(x^{-i*}_{k,t}\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,t}\right\|^2_{R_i}\right) + \sum_{i=1}^{N_a}\left\|x^{i*}_{k,N+1}\right\|^2_{P_i} = \sum_{i=1}^{N_a}\sum_{t=1}^{N-1}\left(\left\|\left[\left(x^{i*}_{k,t}\right)^T, \left(x^{-i*}_{k,t}\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,t}\right\|^2_{R_i}\right) + \left\|x^*_{k,N}\right\|^2_{Q} + \left\|u^*_{k,N}\right\|^2_{R} + \left\|x^*_{k,N+1}\right\|^2_{P}, \tag{21}$$
where $P = \operatorname{diag}\{P_1, P_2, \ldots, P_{N_a}\}$. By applying (9) and Lemma 2, (11) guarantees that
$$\left\|x^*_{k,N+1}\right\|^2_{P} - \left\|x^*_{k,N}\right\|^2_{P} \le -\left\|x^*_{k,N}\right\|^2_{Q} - \left\|u^*_{k,N}\right\|^2_{R}. \tag{22}$$
Substituting (22) into $J(k+1)$ yields
$$J(k+1) \le \sum_{i=1}^{N_a}\sum_{t=1}^{N-1}\left(\left\|\left[\left(x^{i*}_{k,t}\right)^T, \left(x^{-i*}_{k,t}\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,t}\right\|^2_{R_i}\right) + \sum_{i=1}^{N_a}\left\|x^{i*}_{k,N}\right\|^2_{P_i}. \tag{23}$$
By applying (17)–(19),
$$J(k+1) - J^*(k) \le -(1 - \rho(k))\sum_{i=1}^{N_a}\left(\left\|\left[\left(x^i_k\right)^T, \left(\hat{x}^{-i}_k\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^{i*}_{k,0}\right\|^2_{R_i}\right) \le -(1 - \rho(k))\left\|x_k\right\|^2_Q. \tag{24}$$
At time $k+1$, by reoptimization, $J^*(k+1) \le J(k+1)$. Hence,
$$J^*(k+1) - J^*(k) \le -(1 - \rho(k))\left\|x_k\right\|^2_Q \le -(1 - \rho(k))\,\lambda_{\min}(Q)\left\|x_k\right\|^2, \tag{25}$$
where $\lambda_{\min}(Q)$ is the minimum eigenvalue of $Q$. This indicates that the closed-loop system is exponentially stable.
Satisfaction of (18) indicates that all $x^i_{k,t}$ should not deviate too far from their assumed values $\hat{x}^i_{k,t}$ [13]. Hence, (18) can be taken as a new version of the compatibility condition. This condition is derived from a single compatibility condition that collects all the states (whether predicted or assumed) within the switching horizon and is then disassembled across the agents in a distributed manner, which results in a local compatibility constraint for each agent.

3.3. Synthesis Approach of Distributed MPC

In the synthesis approach, the local optimization problem incorporates the above compatibility condition. Since $x^i_{k,t}$ of each agent $i$ is coupled with those of the other agents through (18), it is necessary to assign the constraint to each agent so that (18) is satisfied along the optimization. The subsequent discussion on stability depends on the handling of (18).

Denote $\xi^i_k = [\xi^i_{k,1}, \ldots, \xi^i_{k,n_i}]^T$ and $\xi^{-i}_k = \{\xi^j_k \mid j \in \mathcal{N}_i\}$. At time $k > 0$, after solving the optimization problem, a parameter $\epsilon^i_{k,l}$ exists for each element $\xi^i_{k,l}$, $l = 1, \ldots, n_i$.

Define
$$\epsilon^i_{k,l} = \max_t\left|\xi^i_{k-1,t,l}\right|, \tag{26}$$
and denote $\epsilon^i_k = [\epsilon^i_{k,1}, \ldots, \epsilon^i_{k,n_i}]^T$ and $\epsilon^{-i}_k = \{\epsilon^j_k \mid j \in \mathcal{N}_i\}$. At time $k+1 > 0$, the following constraint is set for each agent $i$:
$$\left|x^i_{k+1,t} - \hat{x}^i_{k+1,t}\right| < \epsilon^i_k. \tag{27}$$
From (26) and (27), it follows that $|\xi^i_{k+1,t}| < \epsilon^i_k$ and $|\xi^{-i}_{k+1,t}| < \epsilon^{-i}_k$.
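A minimal Python sketch of the update (26) and the check (27) follows (our illustration, assuming the deviations $\xi^i_{k-1,t}$ of the previous optimization are stored row-wise in an array):

```python
import numpy as np

def compatibility_bound(xi_prev):
    """Eq. (26): componentwise maximum of |xi^i_{k-1,t}| over the horizon.
    xi_prev has shape (N, n_i): row t is xi^i_{k-1,t} = x^i_{k-1,t} - hat x^i_{k-1,t}."""
    return np.abs(xi_prev).max(axis=0)

def satisfies_constraint(x_pred, x_hat, eps):
    """Eq. (27): |x^i_{k+1,t} - hat x^i_{k+1,t}| < eps, componentwise, for all t."""
    return bool(np.all(np.abs(x_pred - x_hat) < eps))
```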

Denote
$$C^i_x(k) = \sum_{t=1}^{N-1}\left(2\left|x^i_{k,t}\right|^T \left|Q_i^{12}\right| \epsilon^{-i}_k + 2\left|\hat{x}^{-i}_{k,t}\right|^T \left|Q_i^{3}\right| \epsilon^{-i}_k + \left(\epsilon^{-i}_k\right)^T \left|Q_i^{3}\right| \epsilon^{-i}_k\right), \tag{28}$$
$$C^i_\xi(k) = \sum_{t=1}^{N-1}\left(\xi^i_{k,t}\right)^T T_i\, \xi^i_{k,t}. \tag{29}$$
Then $C_x(k) \le \sum_{i=1}^{N_a} C^i_x(k)$ and $C_\xi(k) = \sum_{i=1}^{N_a} C^i_\xi(k)$.

By applying (26)–(29), condition (18) is guaranteed by assigning
$$0 \le \rho_i(k) \le 1, \quad \sum_{i=1}^{N_a}\rho_i(k)\left(\left\|\left[\left(x^i_k\right)^T, \left(\hat{x}^{-i}_k\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^i_{k,0}\right\|^2_{R_i}\right) + \sum_{i=1}^{N_a} C^i_x(k) - \sum_{i=1}^{N_a} C^i_\xi(k) \le 0, \tag{30}$$
which is dispensed to agent $i$ as
$$0 \le \rho_i(k) \le 1, \quad \sum_{t=1}^{N-1}\left\|\xi^i_{k,t}\right\|^2_{T_i} \ge \rho_i(k)\left(\left\|\left[\left(x^i_k\right)^T, \left(\hat{x}^{-i}_k\right)^T\right]^T\right\|^2_{Q_i} + \left\|u^i_{k,0}\right\|^2_{R_i}\right) + C^i_x(k). \tag{31}$$
Since conservativeness is introduced by using (26)–(28), (31) is more stringent than (18).
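For illustration, the per-agent test (31) can be evaluated as follows (a sketch with hypothetical argument names; $C^i_x(k)$ is assumed to be precomputed from (28)):

```python
import numpy as np

def check_condition_31(xi_pred, T_i, rho_i, z0, Q_i, u0, R_i, Cx_i):
    """Our sketch of (31): the accumulated deviation penalty over
    t = 1..N-1 must dominate rho_i * (stage cost at t = 0) + C^i_x(k).
    xi_pred[t] is xi^i_{k,t}; z0 stacks [x^i_k, hat x^{-i}_k]."""
    lhs = sum(xi @ T_i @ xi for xi in xi_pred[1:])
    rhs = rho_i * (z0 @ Q_i @ z0 + u0 @ R_i @ u0) + Cx_i
    return 0.0 <= rho_i <= 1.0 and lhs >= rhs
```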

Remark 3. By adding the deviation punishment term to the local cost function, closed-loop stability follows with a large weight, but a larger weight means a greater loss of performance [14, 19]. For a small value of $T_i$, we can adjust the value of $\rho_i(k)$ to obtain exponential stability. As $\rho_i(k)$ is set by the optimization, this scheme has more freedom in tuning parameters to balance closed-loop stability against control performance.

Remark 4. According to (31), the maximum and minimum admissible values of $T_i$ can be calculated by considering the range of each variable; we choose the middle value for $T_i$. Obviously, $T_i$ is time varying and is denoted $T_i(k)$.

4. Control Strategy

For practical implementation, distributed MPC is formulated in the following algorithm.

Algorithm
Off-line stage:
(i) Set the value of the prediction horizon $N$.
(ii) According to (3), (5), and (9), find $Q_i$, $R_i$, $\bar{Q}_i$, $\bar{R}_i$, $t = 0, \ldots, N-1$, for all agents.
(iii) Set the value of the compatibility constraint for all agents: $\epsilon^j(0) = +\infty$, $j \in \mathcal{N}_i$.
(iv) Calculate the terminal weight $P_i$, the local linear feedback control gain $K_i$, and the terminal set $\Omega_i$.
On-line stage. For agent $i$, perform the following steps at $k = 0$:
(i) Take the measurement of $x^i_0$. Set $T_i = 0$.
(ii) Send $x^i_0$ to the neighbors $j$, $j \in \mathcal{N}_i$, of agent $i$. Receive $x^j_0$.
(iii) Set $\hat{x}^j_{0,t} = x^j_0$, $j \in \mathcal{N}_i$, $t = 0, \ldots, N-1$, and $\hat{x}^i_{0,t} = x^i_0$.
(iv) Solve problem (19).
(v) Implement $u^i_0 = u^i_{0,0}$.
(vi) Get $\hat{x}^i_{0,t}$ and the value of the compatibility constraint $\epsilon^i(1)$.
(vii) Send $\hat{x}^i_{0,t}$ and $\epsilon^i(1)$ to the neighbors $j$, $j \in \mathcal{N}_i$. Receive $\hat{x}^j_{0,t}$ and $\epsilon^j(1)$. Calculate $T_i(k)$.
For agent $i$, perform the following steps at $k > 0$:
(i) Take the measurement of $x^i_k$.
(ii) Solve problem (19).
(iii) Implement $u^i_k = u^i_{k,0}$.
(iv) Get $\hat{x}^i_{k,t}$ and the new value of the compatibility constraint $\epsilon^i(k+1)$.
(v) Send $\hat{x}^i_{k,t}$ and $\epsilon^i(k+1)$ to the neighbors $j$, $j \in \mathcal{N}_i$. Receive $\hat{x}^j_{k,t}$ and $\epsilon^j(k+1)$.
(vi) Calculate $T_i(k)$.
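The on-line stage can be organized as in the following Python skeleton (our paraphrase of the steps above; all injected callables are hypothetical placeholders for measurement, the solution of problem (19), neighbor communication, and the update of $T_i(k)$):

```python
import math

def run_agent(i, neighbors, measure, solve_problem_19, exchange, update_weight, k_max):
    """Yields the implemented input u^i_{k,0} at each sampling instant k.
    measure(i, k) -> x^i_k; solve_problem_19(...) -> (input sequence,
    assumed trajectory, next compatibility bound); exchange(...) ->
    (neighbor trajectories, neighbor bounds)."""
    eps = {j: math.inf for j in neighbors}   # compatibility bounds, eps^j(0) = +inf
    T_i = 0.0                                # deviation weight, initialized to 0
    x_hat_nbr = None
    for k in range(k_max):
        x_i = measure(i, k)                                    # measure x^i_k
        if k == 0:                                             # bootstrap assumed states
            x_hat_nbr, eps = exchange(i, neighbors, (x_i, eps))
        u_seq, x_hat_i, eps_next = solve_problem_19(x_i, x_hat_nbr, eps, T_i)
        yield u_seq[0]                                         # implement u^i_{k,0}
        x_hat_nbr, eps = exchange(i, neighbors, (x_hat_i, eps_next))
        T_i = update_weight(eps, x_hat_i)                      # recompute T_i(k)
```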

5. Numerical Example

We consider the model of agent $i$ [22],
$$x^i_{k+1} = \begin{bmatrix} I_2 & I_2 \\ 0 & I_2 \end{bmatrix} x^i_k + \begin{bmatrix} 0.5\, I_2 \\ I_2 \end{bmatrix} u^i_k, \tag{32}$$

which is obtained by discretizing the continuous-time model
$$\dot{x}^i = \begin{bmatrix} 0 & I_2 \\ 0 & 0 \end{bmatrix} x^i + \begin{bmatrix} 0 \\ I_2 \end{bmatrix} u^i \tag{33}$$

with a sampling interval of 0.5 s. Here $x^i_k = [q^i_{k,x}, q^i_{k,y}, v^i_{k,x}, v^i_{k,y}]^T$, where $q^i_{k,x}$ and $q^i_{k,y}$ are the positions in the horizontal and vertical directions, respectively, and $v^i_{k,x}$ and $v^i_{k,y}$ are the velocities in the horizontal and vertical directions, respectively. There are four agents, and a set of positions of the four agents constitutes a formation.
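For reference, the model matrices in (32) can be constructed as follows (a short sketch; variable names are ours):

```python
import numpy as np

# Discrete-time double-integrator model (32); state ordering [q_x, q_y, v_x, v_y].
I2 = np.eye(2)
A = np.block([[I2, I2], [np.zeros((2, 2)), I2]])   # [[I2, I2], [0, I2]]
B = np.vstack([0.5 * I2, I2])                      # [[0.5*I2], [I2]]

# One-step update x^i_{k+1} = A x^i_k + B u^i_k:
x = np.array([0.0, 2.0, 0.0, 0.0])   # agent 1's initial state (position [0, 2], at rest)
u = np.array([1.0, 0.0])
x_next = A @ x + B @ u
```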

The initial positions of the four agents are
$$\left(q^1_{0,x}, q^1_{0,y}\right) = [0, 2], \quad \left(q^2_{0,x}, q^2_{0,y}\right) = [2, 0], \tag{34}$$
$$\left(q^3_{0,x}, q^3_{0,y}\right) = [0, 3], \quad \left(q^4_{0,x}, q^4_{0,y}\right) = [-2, 0]. \tag{35}$$

The linear constraints on the states and inputs are
$$\left|x^i\right| \le \left[100, 100, 15, 15\right]^T, \quad \left|u^i\right| \le \left[2, 2\right]^T. \tag{36}$$

Agents $i$, $i = 1, 2, 3$, are selected as the core agents of the formation, and $\mathcal{A}_0$ is designed as $\mathcal{A}_0 = \{(1,2), (1,3), (2,4)\}$. If all agents achieve the desired formation and the core agents cooperatively cover the virtual leader, then $u^i_{k,x} = 0$ and $u^i_{k,y} = 0$. The global cost function is
$$J(k) = \sum_{t=0}^{\infty}\Big(\left\|q^1_{k,t} - q^2_{k,t} + c_{12}\right\|^2 + \left\|q^1_{k,t} - q^3_{k,t} + c_{13}\right\|^2 + \left\|q^2_{k,t} - q^4_{k,t} + c_{24}\right\|^2 + \frac{1}{9}\left\|q^1_{k,t} + q^2_{k,t} + q^3_{k,t} - 3 q_c\right\|^2 + \left\|v^1_{k,t}\right\|^2 + \left\|v^2_{k,t}\right\|^2 + \left\|v^3_{k,t}\right\|^2 + \left\|v^4_{k,t}\right\|^2 + \left\|u_{k,t}\right\|^2\Big). \tag{37}$$
The agents cooperatively track the virtual leader, whose reference is $q_c = (0.5k, 0)$. The desired distances between agents are $c_{12} = (2, 1)$, $c_{13} = (-2, 1)$, $c_{24} = (2, 1)$. Choose $\mathcal{N}_1 = \{2\}$, $\mathcal{N}_2 = \{1\}$, $\mathcal{N}_3 = \{1\}$, $\mathcal{N}_4 = \{2\}$. Then, with the state ordering $[q^1, q^2, q^3, q^4, v^1, v^2, v^3, v^4]$,
$$Q = \begin{bmatrix} \frac{19}{9} I_2 & -\frac{8}{9} I_2 & -\frac{8}{9} I_2 & 0 & 0 & 0 & 0 & 0 \\ -\frac{8}{9} I_2 & \frac{19}{9} I_2 & \frac{1}{9} I_2 & -I_2 & 0 & 0 & 0 & 0 \\ -\frac{8}{9} I_2 & \frac{1}{9} I_2 & \frac{10}{9} I_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & -I_2 & 0 & I_2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & I_2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_2 \end{bmatrix}, \quad R = I_8, \tag{38}$$
and $Q_1$, $Q_2$, $Q_3$, $Q_4$ are obtained by dividing $Q$ using the technique of [19].
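The position block of $Q$ in (38) follows mechanically from the quadratic terms of (37); the short sketch below (our verification, not the paper's code) rebuilds it and checks one entry. The constant offsets $c_{ij}$ and $q_c$ drop out of the Hessian.

```python
import numpy as np

def e(i, n=4):
    v = np.zeros(n); v[i] = 1.0
    return v

pairs = [(0, 1), (0, 2), (1, 3)]                    # (q1-q2), (q1-q3), (q2-q4)
D = sum(np.outer(e(i) - e(j), e(i) - e(j)) for i, j in pairs)
centroid = e(0) + e(1) + e(2)                       # the (1/9)||q1+q2+q3-3q_c||^2 term
D = D + np.outer(centroid, centroid) / 9.0
Q_pos = np.kron(D, np.eye(2))                       # position block of Q
Q = np.block([[Q_pos, np.zeros((8, 8))],
              [np.zeros((8, 8)), np.eye(8)]])       # velocity weights are identity

assert np.isclose(Q_pos[0, 0], 19 / 9)              # matches the (1,1) entry of (38)
```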

Moreover, $R_i = I_2$ for $i \in \{1, 2, 3, 4\}$. Choose $\bar{Q}_i = 6.85\, I_4$, $\bar{R}_i = I_2$, $i \in \{1, 2, 3, 4\}$, and $N = 10$. The terminal set is given by $\alpha_i = 0.22$. The above choices of model, cost, and constraints allow us to rewrite problem (19) as a quadratic program with quadratic constraints. To solve the optimal control problems numerically, the package NPSOL 5.02 is used. From top to bottom, the first subplot of Figure 1 shows the evolution of the formation under centralized MPC; the second subplot shows the evolution under distributed MPC with the time-varying compatibility constraint; the third subplot shows the evolution under distributed MPC with a fixed compatibility constraint.

Figure 1: Evolutions of the formation with different control schemes.

With all three control schemes, the formation of the agents is achieved. The obtained values of $J_{\text{true}}$ are $2.5779 \times 10^6$, $4.8725 \times 10^6$, and $5.654 \times 10^6$, respectively. Compared with the second subplot, the third subplot shows a large overshoot at the time instant $k = 9$ (near the position $(3, 0)$). The distributed MPC with the time-varying compatibility constraint thus yields a better control process than the one with the fixed compatibility constraint. The values of $\rho_i(k)$ are shown in Figure 2: "*" for agent 1, "O" for agent 2, ">" for agent 3, and "<" for agent 4.

Figure 2: The value of 𝜌𝑖(𝑘).

Remark 5. For the second simulation, the values of the time-varying compatibility constraint are calculated according to the state deviations at the previous sampling instant. For the third simulation, the value of the fixed compatibility constraint is 0.2.

6. Conclusions

In this paper, we have proposed an improved distributed MPC scheme for multiagent systems based on deviation punishment. One feature of the proposed scheme is that the cost function of each agent penalizes the deviation between the predicted state trajectory and the assumed state trajectory, which improves consistency and the optimality of the control trajectory. At each sampling instant, the value of the compatibility constraint is set by the deviation observed at the previous sampling instant. Closed-loop stability is guaranteed with a small value of the weight on the deviation punishment term. Furthermore, the effectiveness of the scheme has been demonstrated by a numerical example. Future work will focus on the feasibility of the optimization.

Acknowledgment

This work was supported by a grant from the Fundamental Research Funds for the Central Universities of China, no. CDJZR10170006.

References

1. J. A. Primbs, "The analysis of optimization based controllers," Automatica, vol. 37, no. 6, pp. 933–938, 2001.
2. J. M. Maciejowski, Predictive Control with Constraints, Prentice Hall, Englewood Cliffs, NJ, USA, 2002.
3. J. A. Rossiter, Model-Based Predictive Control: A Practical Approach, CRC Press, Boca Raton, Fla, USA, 2003.
4. Y. Kuwata, A. Richards, T. Schouwenaars, and J. P. How, "Distributed robust receding horizon control for multivehicle guidance," IEEE Transactions on Control Systems Technology, vol. 15, no. 4, pp. 627–641, 2007.
5. E. Camponogara, D. Jia, B. H. Krogh, and S. Talukdar, "Distributed model predictive control," IEEE Control Systems Magazine, vol. 22, no. 1, pp. 44–52, 2002.
6. S. Li, Y. Zhang, and Q. Zhu, "Nash-optimization enhanced distributed model predictive control applied to the Shell benchmark problem," Information Sciences, vol. 170, no. 2–4, pp. 329–349, 2005.
7. A. N. Venkat, J. B. Rawlings, and S. J. Wright, "Stability and optimality of distributed model predictive control," in Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference (CDC-ECC '05), pp. 6680–6685, Seville, Spain, December 2005.
8. A. N. Venkat, J. B. Rawlings, and S. J. Wright, "Distributed model predictive control of large-scale systems," in Assessment and Future Directions of Nonlinear Model Predictive Control, vol. 358 of Lecture Notes in Control and Information Sciences, pp. 591–605, 2007.
9. M. Mercangöz and F. J. Doyle III, "Distributed model predictive control of an experimental four-tank system," Journal of Process Control, vol. 17, no. 3, pp. 297–308, 2007.
10. D. Jia and B. Krogh, "Min-max feedback model predictive control for distributed control with communication," in Proceedings of the American Control Conference, pp. 4507–4512, May 2002.
11. T. Keviczky, F. Borrelli, and G. J. Balas, "Decentralized receding horizon control for large scale dynamically decoupled systems," Automatica, vol. 42, no. 12, pp. 2105–2115, 2006.
12. F. Borrelli, T. Keviczky, G. J. Balas, G. Stewart, K. Fregene, and D. Godbole, "Hybrid decentralized control of large scale systems," in Hybrid Systems: Computation and Control, vol. 3414 of Lecture Notes in Computer Science, pp. 168–183, Springer, 2005.
13. W. B. Dunbar and R. M. Murray, "Distributed receding horizon control for multi-vehicle formation stabilization," Automatica, vol. 42, no. 4, pp. 549–558, 2006.
14. W. B. Dunbar, "Distributed receding horizon control of cost coupled systems," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 2510–2515, December 2007.
15. S. Wei, Y. Chai, and B. Ding, "Distributed model predictive control for multiagent systems with improved consistency," Journal of Control Theory and Applications, vol. 8, no. 1, pp. 117–122, 2010.
16. B. Ding, "Distributed robust MPC for constrained systems with polytopic description," Asian Journal of Control, vol. 13, no. 1, pp. 198–212, 2011.
17. B. Ding, L. Xie, and W. Cai, "Distributed model predictive control for constrained linear systems," International Journal of Robust and Nonlinear Control, vol. 20, no. 11, pp. 1285–1298, 2010.
18. B. Ding and S. Y. Li, "Design and analysis of constrained nonlinear quadratic regulator," ISA Transactions, vol. 42, no. 2, pp. 251–258, 2003.
19. W. Shanbi, B. Ding, C. Gang, and C. Yi, "Distributed model predictive control for multi-agent systems with coupling constraints," International Journal of Modelling, Identification and Control, vol. 10, no. 3-4, pp. 238–245, 2010.
20. H. Chen and F. Allgöwer, "A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability," Automatica, vol. 34, no. 10, pp. 1205–1217, 1998.
21. T. A. Johansen, "Approximate explicit receding horizon control of constrained nonlinear systems," Automatica, vol. 40, no. 2, pp. 293–300, 2004.
22. W. B. Dunbar, Distributed receding horizon control for multiagent systems, Ph.D. thesis, California Institute of Technology, Pasadena, Calif, USA, 2004.