Abstract

The growing complexity of optimization problems in distributed systems (DSs) has motivated computer scientists to search for efficient approaches. This paper presents a novel cooperative algorithm inspired by the chaos–order transition in a chaotic ant swarm (CAS). This work analyzes the basic dynamic characteristics of a DS in light of a networked multiagent system at the microlevel and, guided by system theory, models a mapping from the state set to the self-organization mechanism set at the macrolevel. A collaborative optimization algorithm (COA) for DSs based on the chaos–order transition of CAS is then devised. To verify the validity of the proposed model and algorithm, we solve a locality-based task allocation problem in a networked multiagent system using COA. Simulations show that our algorithm is feasible and effective compared with previous task allocation approaches, confirming the soundness of our design.

1. Introduction

A distributed system (DS) is composed of many distributed computational components/units that communicate and cooperate over networks. Owing to developments in computer networks, communication, and distributed programming technology, DSs have become widespread. Ad hoc networks [1], peer-to-peer systems [2], and pervasive computing systems [3] are all types of DS. In general, DSs share the following common features.

(1) Decentralized Control. DSs are large in scale and have dynamic structures, so the system as a whole has no global control node, and the actions of nodes are random. An example is the routing protocol for resource-constrained opportunistic networks [4], in which intermittently connected nodes transfer messages from one node to another based on the dynamic topology.

(2) Sharing of Local Resources. In this paper, resources refer to the buffer of nodes in opportunistic networks, replicas of popular web objects, computational capacity of nodes in wireless sensor networks, and so forth. All these resources are distributed among a group of nodes and are shared by these nodes [5].

(3) Openness, Dynamics, and Unreliability of DS. Most DSs are open with a dynamic structure, so the links between nodes may be unreliable [6]. Moreover, owing to the autonomy of nodes, each node communicates with its neighboring nodes to maximize the total utility [7].

(4) Constraints of DS. In a DS, each task is executed on available nodes. However, in a real network, the resources of each node, such as energy, channels, and bandwidth, are limited. Moreover, a DS may involve a variety of conditions and numerous intertwined relationships (including those between nonlinear dynamics and unknown mechanisms) that are difficult to decompose into a combination of simple relationships.

Recently, DSs have received considerable attention because of their many applications, such as task and resource allocation in grids, performance optimization in multi-agent-based e-commerce systems, data fusion in distributed sensor networks, and logistics in enterprise resource planning. However, applications of DSs must contend with the four common features described above. Traditional optimization methods therefore often fail to allocate resources rationally or to achieve optimal DS performance. For instance, Asynchronous Distributed Constraint Optimization [8] and Optimal Asynchronous Partial Overlay [9] require a global topological structure to ensure optimal performance, but acquiring the global topology of a DS is difficult or even impossible.

Cooperative optimization approaches for DSs have attracted the attention of many scholars. Existing methods can be divided into two categories. The first category designs a distributed algorithm in which the objective function of a DS is the sum of the nodes' functions and the function value of each node is obtained by exchanging information with its neighbors. Bertsekas and Tsitsiklis [10, 11] put forward a parallel and distributed computing framework for a set of processor scheduling problems. Nedić and Ozdaglar [12] propose an average subgradient consensus algorithm to solve the constrained convex optimization problem of a DS and further present an extended subgradient algorithm in light of the constraints on a set of nodes. Srivastava and Nedić [13, 14] devise a distributed optimization algorithm with asynchronous communication. Zhu and Martinez [15] use the primal dual subgradient algorithm to solve the optimization problem with constraints described by a set of global equalities and inequalities. These studies are closely related to networked utility maximization [16, 17]. The second category comprises heuristic algorithms based on a specific behavior. The consensus-based decentralized auction algorithm (CBAA) [18] is a distributed task allocation algorithm based on auction and consistency, where each iteration runs an auction phase and a consensus phase and the algorithm uniformly converges to the optimal solution. The market-based algorithm (MBA) [19] is a distributed task allocation algorithm based on the market mechanism, where each agent can buy a task, trade with other agents, and minimize its purchase price. Swarm-GAP (SGAP) [20] is an approximate algorithm for distributed task allocation based on the principle of division of labor in social insects, where an individual selects a task based on the task's stimulus and its own preference for the task.

However, none of the aforementioned studies consider the autonomy of the individual, especially self-organization, self-adaptation, and other advanced functions in DSs. Consequently, these algorithms cannot embody the advanced features of DSs. The current paper aims to employ these advanced functions and construct a distributed cooperative optimization algorithm (COA).

Importantly, the idea of the chaos–order transition in the foraging behavior of ants [21] provides a vital spark of inspiration for solving cooperative optimization problems. The entire foraging process of chaotic ants is controlled by three successive strategies, namely, hunting, homing, and path building. During this process, the behavior of ants transforms from chaos to order under the influence of the ants' intelligence and experience.

This paper investigates the dynamic characteristics of an agent with autonomy at the microlevel and devises a cooperative optimization model for DSs at the macrolevel. On the basis of our studies on chaotic ant swarm (CAS) [22–24], we propose a COA. This paper focuses on the following perspectives.

The autonomy of agents in DSs plays a key role in representing the functions of DSs, where autonomy means that agents exhibit spontaneous, proactive behaviors.

A mechanism is shown to bridge the microlevel and macrolevel.

Specifically, our main contributions are summarized as follows.

Our collaborative optimization model can be applied to describe the dynamic evolution of self-organization in DS.

Our algorithm is parallel because the individuals in the proposed algorithm evolve dynamically and in parallel, with little dependence on problem scale.

Our algorithm fully embodies the autonomy of individuals and the interactivity of the system.

The remainder of the paper is organized as follows. Section 2 analyzes the basic dynamic characteristics of an autonomous agent in a multiagent network from the point of view of dynamics and explores the collaborative optimization model of DSs. Section 3 describes how a CAS can be used to derive an effective COA. Section 4 presents the application to locality-based task allocation. Finally, Section 5 concludes the paper and outlines future research.

2. Cooperative Optimization Model

To construct our cooperative optimization model, we first analyze the dynamic characteristics at the microlevel and then devise the mapping method at the macrolevel.

2.1. Dynamic Characteristics of an Autonomous Agent

Many researchers use the networked multiagent system based on local information structures to study DSs and have achieved good results [25, 26]. We also apply the networked multiagent system to analyze the dynamic characteristics of DSs and to set up the general dynamic characteristics of an agent.

The networked multiagent system can be expressed by a graph, where a vertex denotes an agent and an edge represents the relationship between two agents. A networked multiagent system is shown in Figure 1, where agent i is under the influence of agent j.

From a multiagent system perspective, a DS can be regarded as a large-scale multiagent system. This system lacks a global control unit; each agent adjusts its state according to its own local information. Because a large number of agents can communicate with one another, the dynamic evolution of the agents tends to produce globally consistent emergent behavior [27].

We now consider the dynamic characteristics of an agent with autonomy in a DS. In a networked multiagent system, the dynamic characteristics of autonomous agents form a set of differential or partial differential equations subject to various nonlinear constraint conditions on time and space. From the perspective of relationship characteristics, the dynamic expression of a single autonomous agent is determined jointly by its current state, environmental factors, and control information. Therefore, the dynamic characteristics of agent i at the current time t are written as (1), which expresses the output and the output entropy flow of autonomous agent i as a nonlinear function of its current state; its input information, including normal control information and the regulation imposed on agent i; and its environmental input, including the normal input that the environment imposes on agent i and the input that agent i is aware of.

Equation (1) shows that the dynamic characteristics of an autonomous agent are influenced both by the interaction among individuals and by the macro functions of the DS. Furthermore, the structure of a single autonomous agent is described in light of the characteristics of a multiliving agent [28]. Thus, the dynamic characteristics of autonomous agent i can be expressed as (2) in terms of the number of dimensions; the state set of agent i at time t; the input set of agent i, which comprises the normal control set and the regulation imposed on agent i by its neighbors; the output set of agent i; the environmental input set of agent i, which comprises the normal environmental input set and the environmental information that agent i is aware of; and the output entropy of agent i from the perspective of information theory. One operator generates the input sequence and is expressed as (3); another operator generates the output sequence and can be written as (4). An autonomous agent of this form is depicted in Figure 2.

The seven-tuple is a general structure that depicts an autonomous agent. For example, the output flow is determined by the state information, input information, and environmental information of autonomous agent i at time t. The input information consists of the regulation imposed by agent i's neighbors at time t and the normal input flow: if the regulation is stronger than the normal input, agent i is dominated by the regulation; otherwise, it is governed by the normal input. Accordingly, from the individual point of view, an autonomous agent can spontaneously adjust its state and output information with respect to its input and control information. Through continuous interaction, the agents gradually cooperate and ultimately achieve consistency.
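To make the seven-tuple concrete, the following minimal Python sketch models an autonomous agent with a state, a normal control input, a regulation input from neighbors, and environmental inputs, and applies the dominance rule described above. All names, the update rule, and the strength measure are illustrative assumptions rather than the paper's exact formulation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutonomousAgent:
    """Sketch of a seven-tuple-style agent: state, inputs, environment, output."""
    state: List[float]                                        # current state set of the agent
    normal_input: List[float] = field(default_factory=list)   # normal control information
    regulation: List[float] = field(default_factory=list)     # regulation imposed by neighbors
    env_normal: List[float] = field(default_factory=list)     # normal environmental input
    env_aware: List[float] = field(default_factory=list)      # environmental information the agent is aware of

    def step(self) -> List[float]:
        """One update: the stronger of regulation vs. normal input dominates (assumed rule)."""
        reg = sum(abs(v) for v in self.regulation)
        ctl = sum(abs(v) for v in self.normal_input)
        drive = self.regulation if reg > ctl else self.normal_input
        drive = drive or [0.0] * len(self.state)
        env = self.env_aware or [0.0] * len(self.state)
        # Output: a simple blend of the current state, the dominant input, and the aware environment.
        self.state = [s + 0.1 * d + 0.05 * e for s, d, e in zip(self.state, drive, env)]
        return list(self.state)

agent = AutonomousAgent(state=[1.0, 2.0], normal_input=[0.2, 0.1], regulation=[1.0, -1.0])
print(agent.step())
```

In this reading, the step method plays the role of the output operator, and whichever input dominates decides whether the agent follows its neighbors' regulation or its normal control.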

The seven-tuple concisely captures the idea that each autonomous agent in a DS possesses complex relationships. The tuple also reflects the microscopic characteristics of autonomous agents.

To depict the various relations in a DS, certain mappings and operations must be constructed according to the specific problem, by extracting the main relationship or merging similar relationships. Establishing these reasonable and correct relationships is therefore key to solving cooperative optimization problems in a DS.

2.2. Description of a Cooperative Optimization Model

Explaining the characterization of autonomous agents at the macrolevel requires an analysis of the self-organization mechanism of the DS and the provision of an abstract cooperative optimization model. A mapping from the state set of autonomous agents to the self-organization mechanism of the DS is shown in Figure 3. The state set is composed of a number of interacting relationships among autonomous agents, and this interaction can be represented by an internal operator of the state set. For instance, the self-organization mechanism of an open dissipative system is a coherent effect of important interacting characteristics. The self-organization mechanism set consists of various interacting elements, and this interaction can be represented by the internal operator ⊙. The mapping model from the state set to the self-organization mechanism set can be described as follows.

First, the elements of the state set are combined by the internal operator of the state set. Second, the results of this step are mapped to the self-organization mechanism set by the mapping operator. Finally, the self-organization mechanism set carries out its internal operator ⊙, and the various characteristics of the self-organization mechanism are formed. The mapping operator is the core of this model because the self-organization mechanism is established under the control of constraints. Figure 3 illustrates this mapping from the state set to the self-organization mechanism set.

Let the state set with its internal operator at time t be given, where its elements are the states of the autonomous agents. Likewise, let the self-organization mechanism set with its internal operator ⊙ at time t be given, where its elements are the components of the self-organization mechanism. Consequently, the internal and external operators of these two sets can be expressed as follows.

Internal operator of the state set, given by (5).

External operator of the self-organization mechanism set, given by (6), whose right-hand side is the mapping of the state elements into the self-organization mechanism set.

Internal operator ⊙ of the self-organization mechanism set, given by (7).

As a consequence of (5), (6), and (7), the mapping model reflects the macroscopic process of the self-organization mechanism of the DS. In other words, cooperative optimization is the mapping from the state set to the self-organization mechanism set under the influence of the two internal operators and the mapping operator. Through this mapping, the parts of a DS become interdependent and interact; accordingly, the whole system evolves spontaneously, forming ordered structures and generating overall synergy.
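As a rough illustration of this three-operator pipeline, the sketch below composes three plain Python functions: an internal operator on the state set, a mapping operator into the self-organization mechanism set, and the internal operator ⊙ on that set. The concrete function bodies are placeholders chosen only to show the structure of (5)–(7), not their actual definitions.

```python
from typing import List

State = List[float]

def internal_state_op(states: List[State]) -> List[State]:
    """Internal operator of the state set: let interacting agent states influence each other (placeholder)."""
    mean = [sum(col) / len(col) for col in zip(*states)]
    return [[0.5 * (s + m) for s, m in zip(state, mean)] for state in states]

def mapping_op(states: List[State]) -> List[State]:
    """Mapping operator: project state elements into the self-organization mechanism set (placeholder)."""
    return [[round(v, 3) for v in state] for state in states]

def internal_mechanism_op(mechanisms: List[State]) -> List[State]:
    """Internal operator ⊙: interaction inside the self-organization mechanism set (placeholder)."""
    best = min(mechanisms, key=lambda m: sum(v * v for v in m))   # pull elements toward a best element
    return [[0.9 * v + 0.1 * b for v, b in zip(m, best)] for m in mechanisms]

def cooperative_step(states: List[State]) -> List[State]:
    """One macrolevel step: state set -> internal operator -> mapping operator -> internal operator ⊙."""
    return internal_mechanism_op(mapping_op(internal_state_op(states)))

if __name__ == "__main__":
    print(cooperative_step([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]))
```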

In the succeeding sections, we devise a COA combined with the novel CAS according to the above model.

3. Description of COA

Based on the above dynamic characteristics of autonomous agents and the cooperative optimization model, a COA combined with CAS is proposed.

3.1. Chaotic Ant Swarm

Chaotic ant swarm (CAS) [29] is a global optimization algorithm developed in light of two ant behaviors, namely, the chaos of a single ant and the self-organization of the whole colony. CAS is constructed according to chaotic dynamics and swarm intelligence. It has been successfully applied to real-world problems [23, 30, 31], such as parameter identification [30], clustering [31], the traveling salesman problem [23], and power optimization [32]. Li et al. [21] further investigate the foraging behavior of ants and point out that efficient foraging is accomplished under the guidance of their nest, physical abilities, and experience. They demonstrate this viewpoint of the transition from chaos to order by means of CAS. In other words, the core idea of CAS is the transition from chaos to order, that is, the self-organization of the ant colony arising from the ants' pheromones, nest, physical abilities, and experience. To reduce complexity, CAS subsumes these four factors into pheromones. A brief description of CAS follows.

During foraging, an individual ant shows low-dimensional deterministic chaos, whereas the colony displays periodic macroscopic behavior. Assuming that the search space of CAS coincides with the domain of a specific problem, the search space is a continuous space of the problem's dimension, and each point in it is a feasible solution. The position of each ant is a point in this space, and foraging is the process of searching for the optimum, that is, minimizing the objective function. The mathematical model of the CAS algorithm is formulated as (8) and (9), whose quantities are the current step and the previous step; the organizational variable of ant i at the current step, which reflects the effect of pheromone; the dth dimensional position of ant i; the best position found by ant i and its neighbors at the previous step; a sufficiently large positive constant and another positive constant; the organizational factor of ant i, which affects the convergence of CAS; and the dth control parameter of ant i, which adjusts the search space to the concrete problem. For a more detailed description of the parameters, see [29].

To understand CAS more clearly, we explain how it is constructed. Equation (8) shows how to generate the organizational variables, which are used to achieve the gradual change from chaos to order. Equation (9) is built on a chaotic-system prototype, which is transformed by a change of variables [29, 30]. The last term of (9) is designed according to optimal adaptive control and involves the dth dimensional best position of ant i, which is calculated by means of a given strategy.

Consequently, the CAS algorithm describes the search process of an ant colony. The organizational variables control the chaotic motion of the ants: their control is weak at the beginning of the search process but becomes stronger as time evolves. Finally, chaos vanishes and an ordered state forms under the combined influence of the organizational variables and the best positions. At this point, it is clear how CAS depicts the transition from chaos to order.
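The following sketch illustrates the overall structure of such a chaos-to-order search: each ant moves chaotically, an organizational variable grows at every step and damps the chaos, and an attraction toward the best neighboring position gradually takes over. The specific update rules, parameter values, and the name chaos_order_search are simplified stand-ins, not the exact forms of (8) and (9).

```python
import math
import random

def chaos_order_search(objective, dim, n_ants=20, iters=200, radius=5.0, seed=0):
    """Simplified chaos-to-order swarm search (a stand-in for CAS-style updates)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-radius, radius) for _ in range(dim)] for _ in range(n_ants)]
    org = [1e-4] * n_ants                      # organizational variable, grows every step
    r = 0.1                                    # organizational factor (assumed constant)

    def best_neighbor(i):
        """Best position among ant i and the ants within a fixed Euclidean radius."""
        cand = [j for j in range(n_ants) if math.dist(pos[i], pos[j]) <= radius]
        return min(cand, key=lambda j: objective(pos[j]))

    for _ in range(iters):
        for i in range(n_ants):
            org[i] *= (1.0 + r)                # self-organization strengthens over time
            p_best = pos[best_neighbor(i)]
            damp = math.exp(-org[i])           # chaos fades as the organizational variable grows
            pos[i] = [x + damp * rng.uniform(-1.0, 1.0)      # chaotic exploration
                      + (1.0 - damp) * (b - x)               # attraction toward the best neighbor
                      for x, b in zip(pos[i], p_best)]
    return min(pos, key=objective)

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    print(chaos_order_search(sphere, dim=3))
```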

Chiang et al. [16] do not give a definition of an ant's neighbors. In general, the neighbors of ant i are the ants within a limited distance of ant i, and we use the Euclidean distance between two ants' positions as the criterion.
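A minimal helper for this neighbor criterion, mirroring the one used in the sketch above, with an assumed neighborhood radius:

```python
import math

def neighbors(index, positions, radius=10.0):
    """Indices of ants within the Euclidean radius of ant `index` (excluding itself)."""
    return [j for j, p in enumerate(positions)
            if j != index and math.dist(positions[index], p) <= radius]

# Example: three ants in the plane; only the second lies within radius 10 of the first.
print(neighbors(0, [(0.0, 0.0), (3.0, 4.0), (20.0, 0.0)]))  # -> [1]
```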

3.2. Cooperative Mechanism of CAS

In this subsection, we analyze the self-organization mechanism of the ant colony and the dynamic characteristics of an individual ant in the CAS algorithm to illustrate that CAS can solve cooperative optimization problems of DSs.

First, the dynamic characteristics of the ants are analyzed in combination with the dynamic characteristics of an autonomous agent. If each ant is treated as an autonomous agent, then the dth dimensional dynamic characteristic of ant i can be represented as in Figure 4.

We then analyze the correspondence between the symbols of CAS and those of the dynamic characteristics of an ant. The state set corresponds to the feasible region of ant i at time t. The input set comprises the normal control, which corresponds to the position of ant i at time t, and the regulation, which corresponds to the best position of ant i at time t and readjusts that position. The normal environmental input corresponds to the organizational variable of ant i at time t; CAS does not consider environmental information that the ants perceive. The output set corresponds to the position of ant i at the next time step, while the output entropy corresponds to the entropy change of ant i over this step. The input operator corresponds to chaos under the influence of pheromone, and the output operator corresponds to the selection strategy for the best neighbor. Accordingly, the dynamic characteristics of ant i are expressed as (11) and (12).

Comparing the above seven-tuple with the CAS algorithm, we can see that (11) corresponds to (8) and that (12) corresponds to (9) together with the selection strategy of an ant for its best neighbor.

Second, the self-organization mechanism of CAS is analyzed based on the cooperative optimization model. Figure 5 shows that CAS is an iterative process of optimization with three operators, namely, updating the self-organization ability, generating chaotic behavior of a single ant, and selecting the best neighbor.

We can now derive the correspondences between the cooperative optimization model and CAS. The state set corresponds to the feasible region of CAS. The internal operator of the state set can be seen as the chaotic behavior of the ants, that is, the underlying chaotic system. The self-organization mechanism set corresponds to the position set of the ants after the chaotic behavior. The internal operator ⊙ is deemed to be the selection strategy for the best neighbor. The mapping operator from the state set to the self-organization mechanism set can be considered as continuously transforming a chaotic position into the appropriate location under the influence of pheromone. Note that the calculations in (9) include both the mapping operator and the internal operator ⊙.

The above discussion shows that a chaotic ant colony is an instance of a DS. As an effective algorithm, CAS can solve the cooperative optimization problems in DSs.

However, an issue arises if the cooperative optimization problem is tackled directly: (8) of CAS is established under the hypothesis that the density of pheromone gradually increases. Although this hypothesis is reasonable in CAS and reflects the chaos–order law, it can neither embody the mutual influence among individual ants nor show the autonomy of an ant through its perception of local environmental information. The next subsection provides an approach to calculating the self-organization ability of ants based on the data field theory of cognitive physics; that is, CAS is converted into COA.

3.3. Framework of COA

Assume that two nearby ants, i and j, exist in the domain at their respective positions. The potential energy of ant i with respect to ant j is given by (13), in which the perceived effect that ant j exerts on ant i corresponds to the environmental-awareness factor of the seven-tuple and expresses the strength that the external environmental information imposes on the agent, and a regulator proportional to the influential radius of ant i controls the range of the potential. The neighboring ants of ant i can then be defined as in (14). Accordingly, the potential energy imposed on ant i by the other ants within its neighborhood can be expressed as (15). Consequently, the probability distribution of ant i's potential is (16), where the summation index runs over the ants in the neighborhood of ant i.

According to the definition of Shannon entropy, the order degree of ant i at time t is defined as (17). The smaller this entropy, the greater the order degree of the ant colony and the stronger its self-organization ability. Dynamic information theory [33] points out that an open system generates the self-organization function only when the negative entropy flow is greater than the positive entropy flow. Therefore, we determine whether an ant has the self-organization function according to the change of ant i's entropy with respect to its neighborhood. Assuming that a number of ants exist within the neighborhood of ant i, the entropy of ant i is given by (18). As a result, the self-organization ability of ant i at time t can be computed from (18). Thus, our COA can be expressed as (19) and (20).

Note that COA differs from CAS in two respects. First, (19) not only reflects the mutual relationships between an ant and its neighbors but also quantifies the self-organization ability of the ants; importantly, the sensing information of an autonomous agent is added to (13) as a feedback term, which embodies the autonomy of an agent. Second, (20) shows how the state set is generated by chaotic behavior and how it is mapped to the self-organization mechanism under the control of the self-organization ability.
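The entropy-based feedback can be sketched as follows: each neighbor contributes a distance-decaying potential, the potentials are normalized into a probability distribution, and the Shannon entropy of that distribution serves as the order degree, with its change over time standing in for the self-organization ability. The Gaussian-shaped potential and all parameter values are assumptions for illustration, since (13)–(18) are only summarized in prose here.

```python
import math

def potential(xi, xj, sigma=3.0):
    """Distance-decaying potential of ant j felt by ant i (Gaussian data-field form, assumed)."""
    return math.exp(-(math.dist(xi, xj) / sigma) ** 2)

def order_entropy(i, positions, radius=10.0, sigma=3.0):
    """Shannon entropy of the normalized neighbor potentials of ant i (smaller = more ordered)."""
    neigh = [j for j in range(len(positions))
             if j != i and math.dist(positions[i], positions[j]) <= radius]
    if not neigh:
        return 0.0
    pots = [potential(positions[i], positions[j], sigma) for j in neigh]
    total = sum(pots)
    probs = [p / total for p in pots]
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def self_organization_ability(i, old_positions, new_positions, **kw):
    """Entropy change between two time steps; a negative change suggests growing order."""
    return order_entropy(i, new_positions, **kw) - order_entropy(i, old_positions, **kw)

if __name__ == "__main__":
    old = [(0.0, 0.0), (8.0, 0.0), (0.0, 9.0), (30.0, 30.0)]
    new = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (30.0, 30.0)]
    print(self_organization_ability(0, old, new))
```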

3.4. Process of COA and Its Complexity

Based on the above analysis, the cooperative optimization algorithm proceeds as follows.

Step 1. Initialize the population size, the algorithm parameters, the iteration counter, and each ant's initial position randomly within the domain, and then calculate the fitness value of each ant.

Step 2. Construct the neighborhood of ant i, and then calculate the self-organization variable and the position of ant i according to (19) and (20).

Step 3. If any ant has not yet been updated, jump to Step 2.

Step 4. Calculate the current fitness value of each ant, and then obtain the best position, which guides the moving direction of each ant in the next iteration.

Step 5. Increase the iteration counter; when the maximum number of iterations is reached, exit the process; otherwise jump to Step 2.

In the COA process, the complexity of the algorithm is dominated by the calculation of the organizational variables of the ants. The complexity of computing the self-organization variable of one ant is proportional to the number of its neighbors. Therefore, the complexity of COA grows with the number of iterations, the population size, and the neighborhood size. Clearly, the ants gradually reach optimal or suboptimal positions under the influence of self-organization. When every ant is a neighbor of every other ant, the complexity of the proposed algorithm reaches its maximum.
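The five steps can be outlined as the loop below, in which the neighborhood entropy modulates how quickly chaotic exploration gives way to attraction toward the best neighboring position. Every update rule, every parameter, and the function name coa are illustrative stand-ins for (19) and (20), not the exact algorithm.

```python
import math
import random

def coa(objective, dim, n_ants=20, iters=200, radius=10.0, seed=1):
    """Simplified COA loop: neighborhood entropy modulates the chaos-to-order update."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-radius, radius) for _ in range(dim)] for _ in range(n_ants)]

    def neighbors(i):
        return [j for j in range(n_ants)
                if j != i and math.dist(pos[i], pos[j]) <= radius]

    def order_entropy(i):
        neigh = neighbors(i)
        if not neigh:
            return 0.0
        pots = [math.exp(-math.dist(pos[i], pos[j])) for j in neigh]
        total = sum(pots)
        return -sum((p / total) * math.log(p / total) for p in pots)

    for t in range(1, iters + 1):
        for i in range(n_ants):                                 # Step 2: per-ant update
            cand = neighbors(i) + [i]
            best = min(cand, key=lambda j: objective(pos[j]))   # Step 4: best position so far
            ability = 1.0 / (1.0 + order_entropy(i))            # entropy-based self-organization ability
            damp = math.exp(-ability * t / 20.0)                # chaos fades as ability and time grow
            pos[i] = [x + damp * rng.uniform(-1.0, 1.0) + (1.0 - damp) * (b - x)
                      for x, b in zip(pos[i], pos[best])]
    return min(pos, key=objective)                              # Step 5 reached after `iters` iterations

if __name__ == "__main__":
    print(coa(lambda x: sum(v * v for v in x), dim=2))
```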

4. Simulation Experiments

To verify the effectiveness of COA, a locality-based task allocation problem in networked multiagent systems is solved using COA.

We first explain locality-based task allocation in detail. When a task is executed on several agents, the locations of these agents matter because of communication delay and energy consumption, both of which are known to be proportional to the distance between two agents. Consequently, selecting appropriate agents according to their locations is crucial for task allocation. For example, data transmission between two nodes in a wireless sensor network must consider the positions of the related nodes, because the greater the distance between two nodes, the more energy is consumed. The effectiveness of task allocation, such as the balance of the network load, is also an important criterion for evaluating an allocation scheme; it can therefore be treated as feedback on an allocation scheme that reflects agent autonomy. Task allocation in networked multiagent systems can thus be divided into three parts: location awareness, the task allocation strategy, and the effectiveness of task allocation.

To evaluate the performance of COA fairly, it is compared with other distributed task allocation algorithms: CBAA [18], MBA [19], and SGAP [20]. Without loss of generality, the simulation environment is set as follows. A networked multiagent system consists of 50 agents deployed in a plane area, and each agent can perceive changes in its neighbors' positions. The task is transferring large amounts of data from a source agent to a destination agent. The task set contains five tasks whose sizes increase gradually from 1 MB to 5 MB, and each task may be divided into several parallel subtasks. COA runs 50 times, the number of iterations is 200, the population size of COA is 20, and the neighborhood radius is 10. The other parameters are set according to CAS, and the parameters of the comparative algorithms are set in accordance with the related references. The comparative algorithms also run 50 times, and all algorithms are terminated after the same runtime.

4.1. Location-Aware Method of an Agent

Location is important for two agents that need to communicate with each other because of the communication cost. Therefore, the centrality of the related agents is regarded as the measure for location awareness. Assuming that a group of agents exists and that the distance between any two agents in the group is given, the centrality of an agent is defined as follows:
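Since the exact formula is not reproduced here, the sketch below uses a closeness-style centrality, the reciprocal of an agent's total distance to the other agents in the group, as one plausible instantiation; both the formula and the function name are assumptions.

```python
import math

def closeness_centrality(k, group_positions):
    """Closeness-style centrality of agent k within a group (assumed form: 1 / total distance)."""
    total = sum(math.dist(group_positions[k], p)
                for idx, p in enumerate(group_positions) if idx != k)
    return 1.0 / total if total > 0 else float("inf")

# Example: the fourth agent is far from the others and receives the lowest centrality.
agents = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print([round(closeness_centrality(k, agents), 3) for k in range(len(agents))])
```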

4.2. Quantitative Effectiveness of Task Allocation

In task allocation, if the amount of tasks allocated to an agent is determined only by its computing ability, then long waiting times can result, which affects the efficiency of the system. Accordingly, the waiting time of a task is used to measure the effectiveness of task allocation.

Assume that the waiting time depends on the queue length of waiting tasks, and consider a task set, its required resources, and the group of allocated agents. The task queue of each agent determines its occupied resources and waiting time. Given that tasks in a DS are organized and scheduled in parallel, the total waiting time is

For task allocation, the waiting-time feedback is incorporated into the potential term of (15) to reflect the effectiveness of task allocation, thereby embodying the autonomy of agents.

Based on the aforementioned, the centrality of task allocation is taken as the objective function:

4.3. Task Representation

Considering that CAS is a continuous optimization algorithm, the position of an ant is real-valued and does not directly represent a task allocation. To represent a task by the position of an ant, the absolute value of each position component is taken, and the resulting positive real number is transformed into an integer between 1 and the number of agents using a rounding-up function. The resulting integral position of an ant indicates the agent to which each subtask of the task is allocated.
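One way to realize the described rounding-up mapping is sketched below: each coordinate of an ant's position is converted into an agent index between 1 and the number of agents, so that the dth coordinate assigns the dth subtask. The wrap-around used to keep indices in range is an assumption.

```python
import math

def position_to_allocation(position, n_agents):
    """Map a real-valued ant position to agent indices: subtask d goes to agent indices[d]."""
    indices = []
    for z in position:
        idx = math.ceil(abs(z))            # rounding-up of the absolute value
        idx = ((idx - 1) % n_agents) + 1   # keep the index within 1..n_agents (assumed wrap-around)
        indices.append(idx)
    return indices

# Example: a 4-subtask task allocated over 50 agents
print(position_to_allocation([12.3, -0.4, 49.9, 77.2], n_agents=50))
```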

4.4. Simulation Analysis

To evaluate the performance of COA, we design four measures of the simulation results for a comparative analysis of COA, CBAA, MBA, and SGAP: the occupancy rate of agents, the satisfaction rate of task allocation, the fairness (eccentricity) rate of task allocation, and the task execution time. Furthermore, two variants are considered: COA with feedback (COAf) and COA without feedback (COAnf). "Without feedback" means that the influence strength between ants is fixed during evolution and thus cannot dynamically reflect the autonomy of the ants; the influence strength of each ant is therefore set to a fixed value, here 1.

The occupancy rate of agents (OR) is the ratio of the number of allocated agents to the total number of agents in the networked multiagent system.

The satisfaction rate of task allocation (SR) is the average distance between any two neighboring allocated agents.

For a given networked multiagent system, the total communication cost of an agent is the sum of its distances to all other agents. The center of the networked multiagent system is then defined as the agent with the minimum total communication cost.

The eccentricity rate of the allocated agents (FR) is defined in terms of the total distance between the allocated agents and the center of the networked multiagent system.

Average execution time (AET) is the mean time needed to obtain the optimal allocation scheme. The total execution time includes task processing time and communication time; the task processing time is fixed for a given task.
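The first three measures can be sketched as follows; because the displayed formulas are only described in prose here, the pairwise averaging in SR and the choice of the minimum-cost agent as the network center in FR are assumptions.

```python
import math

def occupancy_rate(n_allocated, n_total):
    """OR: fraction of agents that received at least one subtask."""
    return n_allocated / n_total

def satisfaction_rate(allocated_positions):
    """SR: average pairwise distance between allocated agents (assumed to average over all pairs)."""
    pairs = [(i, j) for i in range(len(allocated_positions))
             for j in range(i + 1, len(allocated_positions))]
    if not pairs:
        return 0.0
    return sum(math.dist(allocated_positions[i], allocated_positions[j])
               for i, j in pairs) / len(pairs)

def eccentricity_rate(allocated_positions, all_positions):
    """FR: total distance from allocated agents to the network center (assumed min total-cost agent)."""
    cost = lambda p: sum(math.dist(p, q) for q in all_positions)
    center = min(all_positions, key=cost)
    return sum(math.dist(p, center) for p in allocated_positions)

agents = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (10.0, 10.0)]
allocated = [agents[0], agents[1], agents[2]]
print(occupancy_rate(len(allocated), len(agents)),
      round(satisfaction_rate(allocated), 3),
      round(eccentricity_rate(allocated, agents), 3))
```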

The performance of all algorithms is then compared using the above four criteria.

Figure 6 shows that the occupancy rates of the algorithms differ little for small tasks and that the occupancy rate of every algorithm increases with task size, with that of MBA rising fastest. COAf performs better than the other algorithms because it allocates resources reasonably: the autonomy of the ants leads to a better resource allocation scheme, whereas the market behavior of MBA is highly random.

The satisfaction rates of all algorithms are shown in Figure 7. The satisfaction rate of COAf is the best, whereas that of COAnf is the worst. These results indicate that the feedback in COAf effectively enhances the autonomy of individual ants and the purposefulness of the colony's search; as a result, its satisfaction rate is better than that of the other algorithms. These results also confirm that feedback is necessary for COAf to effectively adjust the input information flow.

The eccentricity rate for each task is shown in Figure 8. According to the definition of the eccentricity rate, the smaller the total distance between the allocated agents and the central agent, the smaller the eccentricity rate and the lower the communication cost our algorithm requires (as usual, wireless communication cost is proportional to the distance between two nodes). Figure 8 shows that the larger the task, the smaller the eccentricity rate, and COAf has the lowest eccentricity rate among the algorithms for the same task. These results show that the larger the task, the more agents become involved, the more evenly the allocated agents are located, and the greater the decrease in communication cost.

The mean execution time of the five tasks for each algorithm is shown in Figure 9. The figure indicates that the mean execution time of COAf is the shortest among the algorithms and that the larger the task, the longer the mean execution time. These results show that large tasks can be allocated evenly and that COAf performs best.

Figures 6 to 9 show that our algorithm outperforms the other algorithms on all four measures. Furthermore, the autonomy of agents in a DS improves the purposefulness of the swarm search and the awareness of efficient solutions, thereby enhancing the efficiency of our algorithm and dramatically decreasing the communication cost.

To illustrate the evolution of COAf for task allocation, the convergence of our algorithm on each task is shown in Figure 10.

Figure 10 shows that COAf converges to a steady state for task allocation and that the larger the task, the greater the centrality of the agents. For task allocation, our algorithm can effectively allocate the subtasks to the autonomous agents because the interaction of the related agents causes them to converge to the best solution.

In summary, the above experimental results verify the effectiveness and correctness of our cooperative optimization model.

5. Conclusion

In this paper, the application requirements of DSs are used as a starting point, and a distributed collaborative optimization model is put forward in light of the transition from chaos to order. A COA is devised based on chaotic behavior and the self-organization mechanism. Finally, the proposed algorithm is applied to task allocation in networked multiagent systems. This paper highlights the idea that the autonomy of agents can play a role in constructing a collaborative optimization model and COA. Specifically, we attribute the orderly evolution to autonomy.

Further research must be conducted to deepen the study of the autonomy and self-organizational evolution mechanisms of dynamic DSs, to set up a general autonomous multiagent coordination mechanism, and to promote their applications in distributed systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the anonymous reviewers for their helpful comments and feedback that helped to improve this paper. This work is supported by the National Natural Science Foundation of China (Grant no. 61070220), the Anhui Provincial Natural Science Foundation (Grants nos. 1408085MF130 and 1308085MF101), and the Education Department of Anhui Province Natural Science Foundation (Grants nos. KJ2013A229 and KJ2013Z281).