Discrete Dynamics in Nature and Society

Volume 2012 (2012), Article ID 904815, 24 pages

http://dx.doi.org/10.1155/2012/904815

## A Multiswarm Optimizer for Distributed Decision Making in Virtual Enterprise Risk Management

^{1}College of Information Science and Engineering, Shenyang University, Shenyang 110044, China
^{2}School of New Energy Engineering, Shenyang University of Technology, Shenyang 110036, China
^{3}Science and Technology Agency, Shenyang University, Shenyang 110036, China
^{4}Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China

Received 27 October 2011; Accepted 16 February 2012

Academic Editor: Leonid Shaikhet

Copyright © 2012 Yichuan Shao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We develop an optimization model for risk management in a virtual enterprise environment based on a novel multiswarm particle swarm optimizer called PS^{2}O. The main idea of PS^{2}O is to extend the single-population PSO to an interacting multiswarm model by constructing a hierarchical interaction topology and enhanced dynamical update equations. With the hierarchical interaction topology, a suitable diversity in the whole population can be maintained. At the same time, the enhanced dynamical update rule significantly speeds up convergence of the multiswarm to the global optimum. On five mathematical benchmark functions, PS^{2}O is shown to have considerable potential for solving complex optimization problems. PS^{2}O is then applied to risk management in a virtual enterprise environment. Simulation results demonstrate that the PS^{2}O algorithm is more feasible and efficient than the PSO algorithm in solving this real-world problem.

#### 1. Introduction

Swarm Intelligence (SI) is an innovative artificial intelligence technique for solving complex optimization problems. This discipline is inspired by the collective behaviors of social animals such as fish schools, bird flocks, and ant colonies. In SI systems, there are many simple individuals who can interact locally with one another and with their environments. Although such systems are decentralized, local interactions between individuals lead to the emergence of global behavior and properties.

In recent years, many algorithmic SI methods have been designed to deal with practical problems [1–5]. Among them, the most successful is Particle Swarm Optimization (PSO), which drew inspiration from the biological swarming behaviors observed in flocks of birds, schools of fish, and even human social behavior [6–8]. PSO is a population-based optimization tool, which can be implemented and applied easily to solve various function optimization problems. As a problem-solving technique, the main strength of PSO is its fast convergence, which compares favorably with Evolutionary Algorithms (EAs) and other global optimization algorithms [9–12]. However, when solving complex multimodal problems, PSO suffers from the following drawback [13]: as the population evolves, all individuals converge prematurely to a local optimum within the first generations. This leads to low population diversity and adaptation stagnation in successive generations. Such loss of population diversity is not observed in natural systems, however. Because populations of species interact with one another in natural ecosystems, these species form biological communities, which are large social systems typically consisting of both heterogeneous and homogeneous aspects. The interaction between species and the complexity of their relationships in these communities exemplify what is meant by the term "symbiosis." According to the different symbiotic interrelationships, symbiotic coevolution can be classified into several categories: mutualism, commensalism, parasitism, and competition. We found that all of these types are suitable for incorporation into the standard PSO model to improve PSO's performance on complex optimization problems. The aim is a general extension of PSO that accurately represents as many different forms of symbiotic coevolution as possible.

Thus, inspired by the symbiotic cooperation (i.e., mutualism coevolution) phenomenon in nature, this paper proposes a novel multiswarm particle swarm optimizer called PS^{2}O, which extends the single-population PSO to an interacting multiswarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. In PS^{2}O, we implement a hierarchical interaction topology that consists of two levels (i.e., the individual level and the swarm level), in which information exchanges take place permanently. Each individual of the proposed model evolves based on the knowledge integration of itself (associated with the individual's own cognition), its swarm members (associated with social interaction within each swarm), and its symbiotic partners from other swarms (associated with heterogeneous cooperation between different swarms). That is, we extend the control law (i.e., the dynamic update equation) of the canonical PSO model by adding a significant ingredient, which takes into account the symbiotic coevolution (or heterogeneous cooperation) between different swarms. By incorporating this new degree of complexity, PS^{2}O acquires considerable potential for solving more complex problems. Here we provide some initial insights into this potential by evaluating PS^{2}O on both mathematical benchmark functions and a complex real-world problem: risk management in a virtual enterprise (VE). The 5 benchmark functions used in our experiments have been widely employed by other researchers to evaluate their algorithms [14–16]. In this paper, the risk management problem in a VE is modeled as a distributed decision-making (DDM) system. This novel risk management model is a complex hierarchical optimization problem with two levels, namely, the top model and the base model, which take care of the continuous decision variables and the discrete ones, respectively.
The simulation results, which are compared to other methods, are reported in this paper to show the merits of the proposed algorithm.

The paper is organized as follows. Section 2 gives a review of the canonical PSO algorithm and several multi-swarm PSO variants. Section 3 describes the proposed multi-swarm coevolution algorithm. In Section 4, it will be shown that PS^{2}O outperforms the canonical PSO and its variants on 5 benchmark test functions. Section 5 describes the risk management optimization model in VE and a detailed design algorithm of risk management by PS^{2}O. The simulation result of risk management in a VE based on PS^{2}O compared with canonical PSO is also presented in this section. Finally, conclusions are drawn in Section 6.

#### 2. Review of Canonical Particle Swarm Optimization

The canonical PSO is a population-based technique, similar in some respects to evolutionary algorithms, except that potential solutions (particles) move, rather than evolve, through the search space. The rules (or particle dynamics) that govern this movement are inspired by models of swarming and flocking [7]. Each particle has a position and a velocity, and experiences linear spring-like attractions towards two attractors:
(i) its previous best position;
(ii) the best position of its neighbors.

In mathematical terms, the $i$th particle is represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ in the $D$-dimensional space, where $x_{id} \in [l_d, u_d]$, $d = 1, 2, \ldots, D$, and $l_d$ and $u_d$ are the lower and upper bounds for the $d$th dimension, respectively. The velocity of particle $i$ is represented as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ and is clamped to a maximum velocity $V_{\max}$, which is specified by the user. In each time step $t$, the particles are manipulated according to the following equations:
$$V_i(t+1) = \chi\left(V_i(t) + c_1 r_1 \left(P_i - X_i(t)\right) + c_2 r_2 \left(P_g - X_i(t)\right)\right), \tag{2.1}$$
$$X_i(t+1) = X_i(t) + V_i(t+1), \tag{2.2}$$
where $r_1$ and $r_2$ are random values between 0 and 1, $c_1$ and $c_2$ are learning rates, which control how far a particle will move in a single iteration, $P_i$ is the best position found so far by the $i$th particle, $P_g$ is the best position found by any particle in its neighborhood, and $\chi$ is called the constriction factor [17], given by
$$\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \tag{2.3}$$
where $\varphi = c_1 + c_2 > 4$.
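The update rules (2.1)–(2.3) can be sketched in a few lines of Python. This is a minimal illustration and not the authors' implementation: the objective, search bounds, and iteration budget are our own assumptions, and velocity clamping is omitted for brevity.

```python
import math
import random

def sphere(x):
    # Unimodal test objective: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def canonical_pso(f, dim=5, n_particles=20, iters=300, c1=2.05, c2=2.05, seed=1):
    """Global-best PSO with Clerc's constriction factor, following (2.1)-(2.3).
    A sketch: neighborhood topologies and velocity clamping are left out."""
    rnd = random.Random(seed)
    phi = c1 + c2                                        # phi > 4 is required
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    x = [[rnd.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [row[:] for row in x]                        # personal best positions
    pbest_f = [f(p) for p in pbest]
    gi = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                v[i][d] = chi * (v[i][d]
                                 + c1 * r1 * (pbest[i][d] - x[i][d])
                                 + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

With $c_1 = c_2 = 2.05$, the sum $\varphi = 4.1$ yields $\chi \approx 0.729$, the standard constricted setting.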

Kennedy and Eberhart [18] proposed a binary PSO in which a particle moves in a state space restricted to zero and one on each dimension, in terms of changes in the probability that a bit will be in one state or the other. The velocity formula (2.1) remains unchanged except that $P_i$, $P_g$, and $X_i$ take integer values in $\{0, 1\}$, and the velocity must be mapped to the interval $[0.0, 1.0]$. This can be accomplished by introducing a sigmoid function $S(\cdot)$, and the new particle position is calculated using the following rule:
$$x_{id}(t+1) = \begin{cases} 1 & \text{if } \mathrm{rand} < S\left(v_{id}(t+1)\right), \\ 0 & \text{otherwise}, \end{cases} \tag{2.4}$$
where rand is a random number selected from a uniform distribution in $[0.0, 1.0]$ and the function $S(\cdot)$ is the sigmoid limiting transformation
$$S(v_{id}) = \frac{1}{1 + e^{-v_{id}}}. \tag{2.5}$$
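The probabilistic position rule (2.4)–(2.5) is compact enough to show directly; a short sketch (nothing here beyond the two formulas above):

```python
import math
import random

def sigmoid(v):
    # S(v) = 1 / (1 + e^{-v}) maps a velocity to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(velocities, rnd=random):
    """Kennedy-Eberhart binary position rule (2.4): each bit becomes 1 with
    probability S(v) and 0 otherwise."""
    return [1 if rnd.random() < sigmoid(v) else 0 for v in velocities]
```

Large positive velocities drive a bit towards 1 almost surely, large negative ones towards 0, and a zero velocity leaves the bit uniformly random.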

#### 3. PS^{2}O Algorithm

The canonical PSO uses the analogy of a single-species population, together with a suitable definition of the particle dynamics and the particle information network (interaction topology), to reflect social evolution in the population. However, the situation in nature is much more complex than this simple metaphor suggests. Indeed, in biological populations there is a continuous interplay between individuals of the same species, and also encounters and interactions of various kinds with other species [19]. The points at issue can be clearly seen when one observes such ecological systems as symbiosis, host-parasite systems, and prey-predator systems, in which two organisms mutually support each other, one exploits the other, or they fight against each other. For instance, mutualistic relations between plants and fungi are very common. The fungus invades and lives among the cortex cells of the secondary roots and, in turn, helps the host plant absorb minerals from the soil. Another well-known example is the "association" between the Nile crocodile and the Egyptian plover, a bird that feeds on any leeches attached to the crocodile's gums, thus keeping them clean. This kind of "cleaning symbiosis" is also common in fish.

Inspired by the mutualism phenomenon, we extend the single-population PSO to an interacting multi-swarm model by constructing hierarchical information networks and enhanced particle dynamics. In our multi-swarm approach, interaction occurs not only between the particles within each swarm but also between different swarms. That is, information exchanges on a hierarchical topology with two levels (i.e., the individual level and the swarm level). Many patterns of connection can be used at the different levels of our model. The most common ones are rings, two-dimensional and three-dimensional lattices, stars, and hypercubes. Two example hierarchical topologies are illustrated in Figure 1. In Figure 1(a), four swarms at the upper level are connected by a ring, while each swarm (possessing four particles at the lower level) is structured as a star. In Figure 1(b), both levels are structured as rings. We then suggest in the proposed model that each individual moving through the solution space should be influenced by three attractors:
(i) its own previous best position;
(ii) the best position of its neighbors within its own swarm;
(iii) the best position of its neighbor swarms.
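A two-level topology like Figure 1(b) is just two nested neighbor structures. The sketch below builds ring neighborhoods on both levels; the function names are ours, for illustration only:

```python
def ring_neighbors(n):
    # For each of n nodes arranged in a ring, the indices of its two neighbors.
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def hierarchical_ring_topology(n_swarms, swarm_size):
    """Two-level interaction network with rings on both levels, in the spirit
    of Figure 1(b): which swarms each swarm listens to, and which particles
    each particle listens to inside its own swarm."""
    swarm_level = ring_neighbors(n_swarms)
    particle_level = {k: ring_neighbors(swarm_size) for k in range(n_swarms)}
    return swarm_level, particle_level
```

Swapping `ring_neighbors` for a star or lattice builder at either level reproduces the other topology variants mentioned above.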

In mathematical terms, our multi-swarm model is defined as a triplet $(S, T, C)$, where $S = \{S_1, S_2, \ldots, S_M\}$ is a collection of $M$ swarms, each swarm possessing a member set of $N$ individuals; $T$ is the hierarchical topology of the multi-swarm; and $C$ is the enhanced control law of the particle dynamics, which can be formulated as
$$v_{ki}(t+1) = \chi\left(v_{ki}(t) + c_1 r_1 \left(p_{ki} - x_{ki}(t)\right) + c_2 r_2 \left(p_k - x_{ki}(t)\right) + c_3 r_3 \left(g_c - x_{ki}(t)\right)\right), \tag{3.1}$$
$$x_{ki}(t+1) = x_{ki}(t) + v_{ki}(t+1), \tag{3.2}$$
where $x_{ki}$ represents the position of the $i$th particle of the $k$th swarm, $p_{ki}$ is the personal best position found so far by $x_{ki}$, $p_k$ is the best position found so far by this particle's neighbors within swarm $k$, $g_c$ is the best position found so far by the other swarms in the neighborhood of swarm $k$ (here $c$ is the index of the swarm to which that best position belongs), $c_1$ is the individual learning rate, $c_2$ is the social learning rate between particles within each swarm, $c_3$ is the social learning rate between different swarms, and $r_1$, $r_2$, $r_3$ are random vectors uniformly distributed in $[0, 1]$. Here, the term $c_1 r_1 (p_{ki} - x_{ki})$ is associated with cognition, since it takes into account the individual's own experience; the term $c_2 r_2 (p_k - x_{ki})$ represents the social interaction within swarm $k$; and the term $c_3 r_3 (g_c - x_{ki})$ takes into account the symbiotic coevolution between dissimilar swarms.

When the constriction factor is implemented as in the canonical PSO above, $\chi$ is calculated from the values of the acceleration coefficients (i.e., the learning rates); importantly, it is the sum of these coefficients that determines what $\chi$ to use [17]. This fact implies that the particle's velocity can be adjusted by any number of terms, as long as the acceleration coefficients sum to an appropriate value. Thus, the constriction factor in the velocity formula of PS^{2}O can be calculated by
$$\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \tag{3.3}$$
where $\varphi = c_1 + c_2 + c_3$, $\varphi > 4$. The algorithm will then behave properly, at least as far as its convergence and explosion characteristics are concerned, whether all of $\varphi$ is allocated to one term, or it is divided into thirds, fourths, and so forth.
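A one-dimensional sketch of the three-attractor update (3.1), with the constriction factor computed from the sum $\varphi = c_1 + c_2 + c_3$ (here split into equal thirds of 4.1, one of the allocations the text allows):

```python
import math
import random

def ps2o_velocity(v, x, pbest, sbest, cbest,
                  c=(4.1 / 3, 4.1 / 3, 4.1 / 3), rnd=random):
    """One-dimensional form of the PS^2O velocity update: a cognition term
    (pbest), a within-swarm social term (sbest), and a between-swarm symbiotic
    term (cbest), all scaled by one constriction factor from phi = c1+c2+c3 > 4."""
    c1, c2, c3 = c
    phi = c1 + c2 + c3
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    return chi * (v
                  + c1 * rnd.random() * (pbest - x)
                  + c2 * rnd.random() * (sbest - x)
                  + c3 * rnd.random() * (cbest - x))
```

Note that $\chi$ depends only on the sum 4.1, so it equals the familiar value of about 0.729 regardless of how the sum is divided among the three terms.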

We should note that, for solving discrete problems, we still use (2.4) and (2.5) to discretize the position vectors in the PS^{2}O algorithm. The pseudocode for the PS^{2}O algorithm is listed in Table 1. The flowchart of the PS^{2}O algorithm is presented in Figure 2, and the corresponding variables used in PS^{2}O are summarized in Table 2.

#### 4. Benchmark Test

##### 4.1. Test Function

A set of 5 benchmark functions, commonly used in the evolutionary computation literature [16, 20] to assess solution quality and convergence rate, was employed to evaluate the PS^{2}O algorithm in comparison to others. The first problem is the unimodal Sphere function, which is easy to solve. The second problem is the Rosenbrock function, which has a narrow valley from the perceived local optima to the global optimum and can be treated as a multimodal problem. The remaining three functions are multimodal problems. Griewank's function has a component causing linkages among variables, thereby making it difficult to reach the global optimum. The Weierstrass function is continuous but differentiable only on a set of points. The composition functions are a set of novel, challenging problems, constructed from basic benchmark functions with a randomly located global optimum and several randomly located deep local optima. A Gaussian function is used to combine the basic functions and blur the function structures. CF1 is constructed using 10 Sphere functions; it is an asymmetrical multimodal function with 1 global optimum and 9 local optima (the landscape of CF1 is illustrated in Figure 3). The variables of the CF1 formulation can be found in [20]. The formulas of these functions are presented below.

(1) Sphere function:
$$f_1(x) = \sum_{i=1}^{D} x_i^2.$$

(2) Rosenbrock function:
$$f_2(x) = \sum_{i=1}^{D-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right].$$

(3) Griewank function:
$$f_3(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1.$$

(4) Weierstrass function:
$$f_4(x) = \sum_{i=1}^{D}\left(\sum_{k=0}^{k_{\max}} a^k \cos\left(2\pi b^k \left(x_i + 0.5\right)\right)\right) - D\sum_{k=0}^{k_{\max}} a^k \cos\left(\pi b^k\right),$$
where $a = 0.5$, $b = 3$, $k_{\max} = 20$.

(5) Composition function 1:
$$F(x) = \sum_{i=1}^{n} w_i \left[\hat{f}_i\left(\left(x - o_i\right)/\lambda_i \cdot M_i\right) + \mathrm{bias}_i\right],$$

where $n$ is the number of basic functions, $w_i$ is the weight value for each $f_i$, $f_i$ is the $i$th basic function used to construct the composition function (here $f_1$–$f_{10}$: Sphere function), $o_i$ is the new shifted optimum position for each $f_i$, $\lambda_i$ is used to stretch or compress the function, $M_i$ is the orthogonal rotation matrix for each $f_i$, and $\mathrm{bias}_i$ defines which optimum is the global optimum.
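For concreteness, the first four benchmarks translate directly into code. This is a sketch of the standard definitions above; the composition function is omitted because its construction data (shift vectors, rotation matrices, weights) are taken from [20]:

```python
import math

def sphere(x):
    # f1: unimodal, minimum 0 at the origin
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    # f2: narrow curved valley, minimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    # f3: product term links the variables together
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def weierstrass(x, a=0.5, b=3.0, k_max=20):
    # f4: continuous but differentiable only on a set of points
    d = len(x)
    s = sum(a ** k * math.cos(2.0 * math.pi * b ** k * (xi + 0.5))
            for xi in x for k in range(k_max + 1))
    offset = d * sum(a ** k * math.cos(math.pi * b ** k)
                     for k in range(k_max + 1))
    return s - offset
```

Each function attains its global minimum value of 0, which makes the "mean best fitness" columns of the result tables directly comparable across functions.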

##### 4.2. Experimental Setting

Experiments were conducted with PS^{2}O compared with four successful variants of PSO:
(i) local version of PSO with constriction factor (CPSO) [21];
(ii) fully informed particle swarm (FIPS) [22];
(iii) unified particle swarm (UPSO) [23];
(iv) fitness-distance-ratio-based PSO (FDR-PSO) [24].

Among these variations, UPSO combined the global version and local version PSO together to construct a unified particle swarm optimizer; FIPS used all the neighbors’ knowledge of the particle to update the velocity; the FDR-PSO selects one other particle, which has a higher fitness value and is nearer to the particle being updated, to update each velocity dimension.

The number of swarms needs to be tuned. Three 10-D functions, namely, Sphere, Rosenbrock, and Griewank, are used to investigate the impact of this parameter. Experiments were executed by changing the number of swarms while fixing each swarm size at 10. The average test results obtained from 30 runs are plotted in Figure 4. From Figure 4, we can observe that the performance of PS^{2}O is influenced by the number of swarms: as it increases, we obtain faster convergence and better results.

For a fair comparison, the population size of all algorithms used in our experiments was set at 100 (all the swarms of PS^{2}O contain the same number of particles, 10). The maximum velocity of all PSO variants was set to 5% of the search space for unimodal functions and 50% for multimodal functions. For the canonical PSO and UPSO, the learning rates $c_1$ and $c_2$ were both 2.05 and the constriction factor $\chi = 0.729$. For FIPS, the constriction factor equals 0.729 and the U-ring topology that achieved the highest success rate is used. For FDR-PSO, the inertia weight started at 0.9 and ended at 0.5, and the coefficient setting recommended in [24] was adopted. For PS^{2}O, the interaction topology illustrated in Figure 1(b) is used; the constriction factor is also used, computed according to Clerc's method; correspondingly, the coefficients must sum to 4.1, and then the learning rates are $c_1 = c_2 = c_3 = 4.1/3$.

##### 4.3. Simulation Results

The experiment was run 50 times for each algorithm on each benchmark function in 30 dimensions. The number of generations was set to 10000. The representative results obtained are presented in Table 3, including the best, worst, mean, and standard deviation of the function values found in the 50 runs. Figures 5, 6, 7, 8, and 9 present the evolution process for all algorithms according to the results reported in Table 3.

From the results, we can observe that the PS^{2}O algorithm achieves remarkably good performance. It is clear that PS^{2}O converged considerably faster, and to significantly better results, than the other PSO variants in both the unimodal and multimodal cases. It should be mentioned that PS^{2}O was the only algorithm able to consistently find the minimum of the Sphere function, Griewank's function, the Weierstrass function, and Composition function 1, while the other algorithms generated poorer results on them. The result on Rosenbrock obtained by PS^{2}O is also very good. Since a result within 40.0 on the 30-D Rosenbrock function is considered good in other EA and SI works, the PS^{2}O algorithm's performance on the Rosenbrock function is remarkable.

With the hierarchical interaction topology, a suitable diversity in the whole population can be maintained. At the same time, the enhanced dynamical update rule significantly speeds up the multi-swarm to converge to the global optimum. Because of this, the PS^{2}O performs considerably better than many PSO variants.

#### 5. Virtual Enterprise Risk Management Based on PS^{2}O

A virtual enterprise (VE) [25] is a temporary consortium of autonomous, diverse, and possibly geographically dispersed organizations that pool their resources to meet short-term objectives and exploit fast-changing market trends. A VE is a dynamic alliance of member companies (owner and partners), which join to take advantage of a market opportunity. Each member company provides its own core competencies in areas such as marketing, engineering, and manufacturing to the VE. When the market opportunity has passed, the VE is dissolved. In a VE environment, there are various sources of risk that may threaten the security of the VE, such as market risk, credit risk, operational risk, and others [26]. Recently, risk management of VEs has attracted much research attention.

##### 5.1. The Two-Level Optimization Model for Risk Management in a Virtual Enterprise

In this paper, the two-level risk management model suggested by Huang and Lu [27] is employed to evaluate the performance of the proposed PS^{2}O algorithm. This model can be described as a two-level Distributed Decision Making (DDM) system, as depicted in Figure 10(a).

In the top level, the decision maker is the owner, who allocates the budget (i.e., the risk cost investment) to each member of the VE. The decision variables are therefore given by $B = (B_0, B_1, \ldots, B_n)$. Here $B_0$ denotes the budget to the owner and $B_j$ ($j = 1, 2, \ldots, n$) represents the budget to Partner $j$. That is, there are $n + 1$ members in the VE. The top-level objective of risk management in a VE is then to best allocate the budget of each member so as to minimize the total risk level of the VE. The top-level model can be formulated as the following continuous optimization problem:
$$\min_{B} \; \sum_{j=0}^{n} w_j R_j(B_j) \quad \text{s.t.} \quad \sum_{j=0}^{n} B_j \le B_{\mathrm{total}}, \qquad R_j(B_j) \le \bar{R}, \quad j = 0, 1, \ldots, n,$$
where $R_j(B_j)$ is the risk level of the $j$th member under risk cost investment $B_j$, $w_j$ represents the weight of member $j$, $B_{\mathrm{total}}$ is the maximum total investment budget, and $\bar{R}$ stands for the maximum risk level allowed for each member in the VE.

In the base level, the partners of the VE make their decisions in view of the top level's instruction (i.e., the budgets allocated to the partners). In base-level risk management, the decision makers select the optimal series of risk control actions $A_j = (a_{j1}, a_{j2}, \ldots, a_{jq})$ for each partner $j$ to minimize its risk level with respect to the allocated budget $B_j$. Here $q$ is the number of risk factors that affect each partner's security. The base-level model can then be formulated as the following discrete optimization problem:
$$\min_{A_j} \; R_j\left(A_j \mid B_j\right) \quad \text{s.t.} \quad \sum_{m=1}^{q} c_{jm}(a_{jm}) \le B_j, \qquad a_{jm} \in \{0, 1, \ldots, s\},$$
where $R_j(A_j \mid B_j)$ is the risk level of the $j$th partner under risk control action $A_j$ with respect to the top-level investment budget $B_j$, $c_{jm}(a_{jm})$ represents the cost to partner $j$ of the risk control action $a_{jm}$ for risk factor $m$, and $s$ stands for the number of available actions for each risk factor of each partner.
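The interplay between the two levels can be illustrated with a deliberately tiny instance, solved here by brute force rather than by PS^{2}O: the top level enumerates budget splits, and for each split the base level picks the cheapest-risk action it can afford. All numbers below are invented for illustration and are not the paper's case study.

```python
# Toy two-level instance: one risk factor per partner, actions 0..2,
# where action 0 means "do nothing" (zero cost, full risk).
RISK = {0: [1.0, 0.6, 0.3], 1: [1.0, 0.5, 0.2]}   # risk level per action
COST = {0: [0.0, 2.0, 5.0], 1: [0.0, 3.0, 6.0]}   # cost per action

def base_level(j, budget):
    """Base level: choose the action minimizing partner j's risk within budget."""
    feasible = [a for a in range(3) if COST[j][a] <= budget]
    best = min(feasible, key=lambda a: RISK[j][a])
    return best, RISK[j][best]

def top_level(total_budget=8.0, step=1.0):
    """Top level: enumerate budget splits and score each split by the
    base-level reaction it induces (equal member weights of 0.5)."""
    best_split, best_risk = None, float("inf")
    b = 0.0
    while b <= total_budget:
        _, r0 = base_level(0, b)
        _, r1 = base_level(1, total_budget - b)
        total = 0.5 * r0 + 0.5 * r1
        if total < best_risk:
            best_split, best_risk = (b, total_budget - b), total
        b += step
    return best_split, best_risk
```

In the paper both enumerations are replaced by swarms: a continuous PS^{2}O searches the budget splits and a discrete PS^{2}O searches the action combinations, but the top-level instruction / base-level reaction loop is the same.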

##### 5.2. Risk Management in VE Based on PS^{2}O

The detailed design of the risk management algorithm based on PS^{2}O is introduced in this section. Since the optimization problem has a two-level hierarchical structure, the risk management algorithm is composed of two types of swarms that search the two levels, respectively: the upper swarms and the lower swarms. The algorithm design reflects a two-phase searching process, as Figure 10(b) illustrates. In the top-level searching process, the upper swarms, designed based on the continuous PS^{2}O, search a continuous space for the investment budget allocation of all VE members, while the lower swarms, designed based on the discrete PS^{2}O, receive information from the upper swarms and search a discrete space for the best action combination for risk management of all partners. The overall searching process can be described as follows.

*(1) Particle Representation*

*(a) Definition of Continuous Particle*

In each upper swarm, each particle has a dimension equal to $n + 1$ (i.e., the number of VE members). Each particle has a real-number representation and is a possible allocation of the investment budget over all members. The $i$th particle of the $k$th upper swarm is defined as follows:
$$x_{ki} = \left(B_0, B_1, \ldots, B_n\right).$$

For example, a real-number particle (286.55, 678.33, 456.78, 701.21, 567.62) is a possible allocation of the investment budget over 5 VE members. The first bit means that the owner receives an investment of 286.55 units. Bits 2 to 5 mean that the amounts of investment allocated to partners 1 to 4 are 678.33, 456.78, 701.21, and 567.62, respectively.

*(b) Definition of Discrete Particle*

For the lower swarms, in order to appropriately represent the action combination by a particle, we design an "action-to-risk-to-partner" representation for the discrete particle. Each discrete particle in each lower swarm has a dimension equal to $s \times q \times n$, where $s$ is the number of available actions for each risk factor, $q$ is the number of risk factors of each partner, and $n$ is the number of VE partners. The $i$th particle of the $k$th lower swarm is defined as follows:
$$x_{ki} = \left(x_{11}^{1}, \ldots, x_{11}^{s}, x_{12}^{1}, \ldots, x_{1q}^{s}, \ldots, x_{nq}^{s}\right),$$
where $x_{jm}^{a}$ equals 1 if the $m$th risk factor of VE partner $j$ is solved by the $a$th action and 0 otherwise. For example, set $s = 4$, $q = 4$, and $n = 2$, and suppose the action combination of the two partners is (2314, 2401), where 0 stands for no action being selected for the third risk factor of the second partner. By our definition, we have $x_{11}^{2} = x_{12}^{3} = x_{13}^{1} = x_{14}^{4} = x_{21}^{2} = x_{22}^{4} = x_{24}^{1} = 1$ and all other $x_{jm}^{a} = 0$ (see Figure 11).
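This encoding can be exercised with a short helper pair, mirroring the example above (the two functions are our own illustrative additions, not part of the paper's algorithm):

```python
def encode_actions(actions, s):
    """Flatten an action combination into the 'action-to-risk-to-partner'
    bit string: bit (j, m, a) is 1 iff risk factor m of partner j uses
    action a (a = 1..s); an all-zero block means 'do nothing'."""
    bits = []
    for partner in actions:                 # e.g. (2, 3, 1, 4) for one partner
        for a_selected in partner:
            bits.extend(1 if a == a_selected else 0 for a in range(1, s + 1))
    return bits

def decode_actions(bits, s, q, n):
    """Inverse mapping: recover, per partner, the chosen action index per
    risk factor (0 means no action selected)."""
    actions = []
    for j in range(n):
        row = []
        for m in range(q):
            chunk = bits[(j * q + m) * s:(j * q + m + 1) * s]
            row.append(chunk.index(1) + 1 if 1 in chunk else 0)
        actions.append(tuple(row))
    return actions
```

For the example (2314, 2401) with $s = 4$, the encoded particle has $4 \times 4 \times 2 = 32$ bits, of which exactly 7 are set (the second partner's third factor contributes an all-zero block).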

*(2) Risk Management Procedure*

The processing performed by this algorithm is best illustrated by the diagram given in Figure 12.

*Step 1.* The first step in the top level is to randomly initialize the upper swarms, each possessing a set of particles. Each particle in the top level is an *instruction* and is communicated to the base level to drive a base-level search process (Steps 2–4).

*Step 2.* For each top-level *instruction*, the base level randomly initializes the lower swarms, each possessing a set of particles. At each iteration in the base level, evaluate the fitness of each particle (i.e., the $i$th particle of the $k$th lower swarm) using the base-level optimization function as follows:
where is the weight of the risk factor, is the value corresponding to the risk rating, *l* is the number of risk ratings, and is the punishment coefficient. The selected action is defined as the position index of the 1 in the corresponding block of bits; for example, if the block is (0, 0, 1, 0), the value is 3. The risk level is a convex decreasing function, which is approximated by
to assess the probability of risk occurrence at a given risk rating under a given action. Here the parameter is used to describe the effects of different risk factors under different risk ratings. The cost of the action is assumed to be a concave increasing function of the corresponding action, which is approximated by
and the parameter describes the effects of different risk factors of different partners. The notation is defined as follows:
*Step 3.* Compare the evaluated fitness values and select *pbest*, *sbest*, and *cbest* for each lower swarm. Then update the velocity of each base-level particle according to (3.1). For our problem, each partner can select only one action for each risk factor, or do nothing about that factor. To handle this constraint, for each particle, an action is selected for each risk factor of each partner according to the following probability:
Then the position of each base-level particle is updated by Algorithm 1.

*Step 4.* The base-level search process is repeated until the maximum number of base-level iterations is reached. Then the last best base-level decision variable is sent to the top level for the fitness computation of the top-level particle.

*Step 5.* With the base-level reaction, each top-level particle is evaluated by the following top-level fitness function:
where and are the punishment coefficients, and the risk level of the owner is approximated by a convex decreasing function as follows:

*Step 6.* Compare the evaluated fitness values and select *pbest*, *sbest*, and *cbest* for each upper swarm. Then update the velocity and position of each top-level particle according to (3.1) and (3.2). The top-level computation is repeated until the maximum number of top-level iterations is reached.
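The one-action-per-risk-factor rule of Step 3 can be sketched as roulette-wheel selection over sigmoid-transformed velocities. This is an assumption on our part: the paper's exact selection probability is given by its Step 3 formula, which is not reproduced here.

```python
import math
import random

def select_action(velocities, rnd=random):
    """Pick exactly one action index (1-based) for a risk factor, with
    probability proportional to S(v_a) = 1/(1+e^{-v_a}) over the s
    candidate action velocities. A sketch, not the paper's exact rule."""
    probs = [1.0 / (1.0 + math.exp(-v)) for v in velocities]
    total = sum(probs)
    r = rnd.random() * total
    acc = 0.0
    for a, p in enumerate(probs, start=1):
        acc += p
        if r <= acc:
            return a
    return len(probs)       # numerical safety net
```

Because exactly one index is returned per factor, the resulting bit block contains at most a single 1, which keeps the "action-to-risk-to-partner" encoding consistent.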

##### 5.3. An Illustrative Example

In this section, a numerical example of a VE is conducted to validate the capability of VE risk management based on the proposed PS^{2}O. In order to show the superiority of PS^{2}O, the risk management algorithm designed by canonical PSO is also applied to the same case.

In this case, the VE is constructed by one owner and four partners (i.e., ) and the total investment is ; 10 risk factors are considered for each partner and 4 actions can be selected for each risk factor (i.e., and ); the number of risk ratings and the value of each rating is , , and , respectively (according to the values of ratings, the criterion of risk rating is shown in Table 4); the maximum risk level , which means that the risk level of each member must be below the medium level; the weight of risk level of each VE member is and the weights of each risk factors for each partner are listed in Table 5; the values of the parameter and are presented in Tables 6 and 7, respectively; the punishment coefficient , , and are given as 1.5, 28 and 0.2.

In applying PS^{2}O and PSO to this case, the continuous and binary algorithms are used in top level and base level of the optimization model respectively. For the top-level algorithms, the maximum generation in each execution for each algorithm is 100; the initialized population size of 10 particles is the same for PS^{2}O and PSO, while the whole population is divided into 2 swarms (each possesses 5 individuals) for PS^{2}O in the initialization step; the interaction topology illustrated in Figure 1(a) is used for continuous PS^{2}O; the other parameters of continuous PS^{2}O and PSO were set to the same values as in Section 4. For the base-level algorithms, the maximum generation for each algorithm is 100; the initialized population size of 20 particles is the same for PS^{2}O and PSO, while the whole population is divided into 4 swarms (each possesses 5 individuals) for PS^{2}O in the initialization step; the interaction topology illustrated in Figure 1(b) is used for binary PS^{2}O; the other parameters of binary PS^{2}O and PSO were set to the same values as in Section 4. The experiment runs 30 times, respectively, for each algorithm.

The top-level and base-level search progress of the averaged best-so-far fitness values over the runs is shown in Figures 13 and 14, respectively. It should be noted that the total iteration count of the base-level search is 100 (base-level maximum generations) × 10 (top-level population size) × 10 (top-level maximum generations) = 10^{4}. That is, the base-level algorithms are restarted after every 100 iterations. From the figures, we can see that PS^{2}O converges with a higher speed than PSO and obtains better results in both levels of the search.

The average solutions over 30 runs obtained by PS^{2}O and PSO are summarized in Table 8. Before the risk management procedure, the risk levels are one for both the VE and the partners, which is a high risk level. Table 8 shows that the resulting risk levels of the VE and the owner are at the low risk level, while all the partners are at the medium risk level. Therefore, the budget and the actions selected by the owner and the partners are very effective at reducing the risk of the VE.

To fully demonstrate the risk management performance of the PS^{2}O algorithm, the risk investment budget and risk level controlling processes of each VE member based on PS^{2}O are shown in Figure 15. Generally, an effective action sequence corresponds to a higher cost and a lower risk level. From the figures, it can be concluded that the additional cost of selecting effective actions results in a remarkable decrease in the risk level.

#### 6. Conclusions

In this paper, we developed an optimization model for minimizing the risks of a virtual enterprise based on a novel multiswarm optimizer, PS^{2}O. In PS^{2}O, the hierarchical interaction topology consists of two levels (i.e., the individual level and the swarm level), in which information exchanges take place not only between the particles within each swarm but also between different swarms. The dynamical update equations of our multiswarm approach are enhanced by a significant ingredient that takes into account the symbiotic coevolution (or heterogeneous cooperation) between different swarms. Because of this, each individual of the proposed model evolves based on the knowledge integration of itself (the individual's own cognition), its swarm members (social interaction within each swarm), and its symbiotic partners from other swarms (heterogeneous cooperation between different swarms). On five mathematical benchmark functions, PS^{2}O is shown to have significantly better performance than four successful variants of PSO.

In the proposed risk management model of a VE, a two-level optimization scheme was introduced to describe the decision processes of the owner and the partners. This DDM model considers the situation in which the owner allocates the budget to each member of the VE in order to minimize the risk level of the VE. Accordingly, a transfer optimization model, which allows EA and SI algorithms to be readily applied to the risk management problem in a VE, was developed. PS^{2}O was then employed to solve this real-world VE risk management problem. The simulation studies, compared with the canonical PSO algorithm, show that PS^{2}O obtains better risk management solutions than PSO in terms of optimization accuracy and convergence speed.

#### Acknowledgments

This work is supported by the Natural Science Foundation of Liaoning Province of China under Grant 20082006, the Support Program for the Outstanding Technological Person in Liaoning Province of China under Grant lr2011035, and the National Natural Science Foundation of China under Grants 61105067 and 61174164.
