Research Article  Open Access
Virtual Enterprise Risk Management Using Artificial Intelligence
Abstract
Virtual enterprises (VEs) must manage risk effectively in order to guarantee their profit. However, restricting the risk in a VE to an acceptable level is difficult due to the agility and diversity of its distributed characteristics. In this paper, an optimization model for VE risk management based on a distributed decision making (DDM) model is first introduced. This optimization model has two levels, namely, the top model and the base model, which describe the decision processes of the owner and the partners of the VE, respectively. In order to solve the proposed model effectively, this work then applies two powerful artificial intelligence optimization techniques known as evolutionary algorithms (EA) and swarm intelligence (SI). Experiments present comparative studies on the VE risk management problem for one EA and three state-of-the-art SI algorithms. All of the algorithms are evaluated against test scenarios in which the VE is constructed by one owner and different numbers of partners. The simulation results show that the PS^{2}O algorithm, a recently developed SI paradigm simulating symbiotic coevolution behavior in nature, obtains superior solutions for the VE risk management problem compared with the other algorithms in terms of optimization accuracy and computation robustness.
1. Introduction
A virtual enterprise (VE) [1] is a dynamic alliance of autonomous, diverse, and possibly geographically dispersed member companies, composed of one owner and several partners, that pool their resources to take advantage of a market opportunity. Each member company provides its own core competencies in areas such as marketing, engineering, and manufacturing to the VE. When the market opportunity has passed, the VE is dissolved. With the rapidly increasing competitiveness in the global manufacturing arena, the VE is becoming an essential approach to meeting the market's requirements for quality, responsiveness, and customer satisfaction. As the VE environment continues to grow in size and complexity, the importance of managing such complexity increases. In the VE environment, there are various sources of risk that may threaten the success of projects, such as market risk, credit risk, operational risk, and others [2]. Therefore, an effective approach to risk measurement and management is a major concern in a VE.
To date, risk management of VEs has received considerable research attention. Various models and algorithms have been developed to provide a more scientific and effective way of managing the risk of a VE. Ma and Zhang [3] analyzed the various kinds of risk arising during the organization of a VE. They then proposed defensive measures against the established risk of a VE in order to offer a reference for risk management of this new form of enterprise organization. Huang et al. [4] introduced a fuzzy synthetic evaluation model for evaluating the risk of a VE, which focuses on the project mode and the uncertain characteristics of the VE. Ip et al. [1] proposed a risk-based partner selection model, which considers minimizing the risk in selecting partners while ensuring the due date of a project in a VE. By exploring the characteristics of the problem and the knowledge of project scheduling, a rule-based genetic algorithm with embedded project scheduling was developed to solve the problem. Sun et al. [5] employed a constructional distributed decision making (DDM) model for risk management of a VE that focuses on the situation of team or enforced-team relationships between partners. A tabu search algorithm was designed to solve the model. Lu et al. [6] introduced a DDM model for VE risk management that has two levels, namely, the top-model and the base-model, which describe the decision processes of the owner and the partners, respectively. A particle swarm optimization approach was then designed to solve the resulting optimization problem.
Nature serves as a fertile source of concepts, principles, and mechanisms for designing artificial computation systems to tackle complex computational problems. In the past few decades, many nature-inspired computational techniques have been designed to deal with practical problems. Among them, the most successful are evolutionary algorithms (EA) and swarm intelligence (SI). Evolutionary algorithms are search methods that take their inspiration from natural selection and survival of the fittest in the biological world. Several different types of EA methods were developed independently. These include genetic programming (GP) [7], evolutionary programming (EP) [8], evolution strategies (ES) [9], and the genetic algorithm (GA) [10]. Swarm intelligence, which is inspired by the collective behavior of social systems (such as fish schools, bird flocks, and ant colonies), is an innovative computational way to solve hard optimization problems. Currently, SI includes several different algorithms, namely, ant colony optimization (ACO) [11], particle swarm optimization (PSO) [12, 13], the bacterial foraging algorithm (BFA) [14–16], and the artificial bee colony algorithm (ABC) [17]. In our previous works [18, 19], we also proposed a novel hierarchical swarm optimization algorithm called PS^{2}O, which extends the single-population PSO to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. By incorporating this new degree of complexity, PS^{2}O can avoid the premature convergence drawback of traditional SI algorithms and offers considerable potential for solving more complex problems.
In this paper, we develop an optimization model for distributed decision making in VE risk management based on evolutionary and swarm-based methods. The VE risk management problem is described and formulated as a two-level DDM model, with the aim of minimizing the aggregate risk level of the VE to a reasonably low level. Then, in order to solve this complex problem effectively and efficiently, an optimization procedure based on EA and SI systems is developed. Experiments are performed on three VE risk management cases of different scales. In the experiments, a comprehensive comparative study of the performances of four well-known evolutionary and swarm-based algorithms, namely, GA, PSO, ABC, and the recently proposed PS^{2}O, is presented. Results show that the performance of PS^{2}O is better than or similar to those of the other EA and SI algorithms, with the advantage of maintaining suitable diversity of the whole population during the optimization process.
The paper is organized as follows. Section 2 describes the two-level DDM model of VE risk management. In Section 3, the GA, PSO, ABC, and PS^{2}O algorithms are summarized. Section 4 describes the detailed design of the optimization procedure for risk management in a VE using evolutionary and swarm-based algorithms. In Section 5, the simulation results obtained are presented and discussed. Finally, Section 6 outlines the conclusions.
2. Problem Formulation of Risk Management in a VE
In this paper, the two-level risk management model suggested by Lu et al. [6] is employed to evaluate the performance of the proposed methods. This model can be described as a two-level distributed decision making (DDM) system, as depicted in Figure 1.
In the top-level, the decision maker is the owner, who allocates the budget (i.e., the risk cost investment) to each member of the VE. The decision variables are therefore given by $x = (x_0, x_1, \ldots, x_n)$. Here $x_0$ denotes the budget to the owner and $x_i$ represents the budget to Partner $i$ ($i = 1, \ldots, n$). That is, there are $n + 1$ members in the VE. The top-level objective of risk management in a VE is then to allocate the optimal budget to each member in order to minimize the total risk level of the VE. The top-level model can be formulated as a continuous optimization problem as follows:

$$\min \; f(x) = \sum_{i=0}^{n} w_i\, r_i(x_i), \tag{2.1}$$
$$\text{s.t.} \quad \sum_{i=0}^{n} x_i \le B, \tag{2.2}$$
$$r_i(x_i) \le R_{\max}, \quad i = 0, 1, \ldots, n, \tag{2.3}$$

where $r_i(x_i)$ is the risk level of the $i$th member under risk cost investment $x_i$, $w_i$ represents the weight of member $i$, $B$ is the maximum total investment budget, and $R_{\max}$ stands for the maximum risk level for each member in the VE.
In the base-level, the partners of the VE make their decisions according to the top-level's instruction (i.e., the budget allocated to the partners). In base-level risk management, the decision maker selects the optimal series of risk control actions $a_i = (a_{i1}, a_{i2}, \ldots, a_{im})$ for each partner $i$ to minimize its risk level with respect to the allocated budget $x_i$. Here $m$ is the number of risk factors that affect each partner's security. The base-level model can then be formulated as a discrete optimization problem as follows:

$$\min \; r_i(a_i, x_i), \quad \text{s.t.} \quad \sum_{j=1}^{m} c_{ij}(a_{ij}) \le x_i, \quad a_{ij} \in \{0, 1, \ldots, W\}, \tag{2.4}$$

where $r_i(a_i, x_i)$ is the risk level of the $i$th partner under risk control actions $a_i$ with respect to the top-level investment budget $x_i$, $c_{ij}(a_{ij})$ represents the cost to partner $i$ of taking risk control action $a_{ij}$ for risk factor $j$, and $W$ stands for the number of available actions for each risk factor of each partner.
3. Description of the Involved Evolutionary and Swarm Intelligence Algorithms
3.1. The Genetic Algorithm
The genetic algorithm is a particular class of evolutionary algorithm that uses techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and crossover. A basic GA consists of five components: a random number generator, a fitness evaluation unit, and genetic operators for the reproduction, crossover, and mutation operations. The basic algorithm is summarized in Algorithm 1.

At the start of the algorithm, the population initialization step randomly generates a set of number strings. Each string is a representation of a solution to the optimization problem being addressed. Continuous and discrete strings are both commonly employed. Associated with each string is a fitness value computed by the evaluation unit. The reproduction operator performs a natural selection function known as seeded selection. Individual strings are copied from one set (representing a generation of solutions) to the next according to their fitness values; the better the fitness value, the greater the probability of a string being selected for the next generation. The crossover operator chooses pairs of strings at random and produces new pairs. The simplest crossover operation is to cut the original parent strings at a randomly selected point and to exchange their tails. The number of crossover operations is governed by a crossover rate. The mutation operator, which is determined by a mutation rate, randomly mutates or reverses the values of bits in a string. A phase of the algorithm consists of applying the evaluation, reproduction, crossover, and mutation operations. A new generation of solutions is produced with each phase of the algorithm [20].
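The cycle of evaluation, roulette-wheel reproduction, one-point crossover, and bit-flip mutation described above can be sketched as follows. This is a minimal illustrative implementation, not the exact GA used in the experiments; the OneMax objective and all parameter defaults are assumptions chosen only for demonstration.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.8, mutation_rate=0.01, seed=1):
    """Minimal binary GA: roulette selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Fitness-proportionate (roulette-wheel) selection.
        fits = [fitness(ind) for ind in pop]
        total = sum(fits) or 1e-12

        def select():
            r, acc = rng.uniform(0, total), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < crossover_rate:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):                # bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
        best = max(pop + [best], key=fitness)          # keep the best seen so far
    return best

# OneMax toy objective: maximize the number of 1-bits in the string.
best = genetic_algorithm(sum)
```

With the OneMax objective, successive generations steadily accumulate 1-bits, illustrating how selection pressure plus crossover and mutation drive the search.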
3.2. The Artificial Bee Colony Algorithm
The artificial bee colony (ABC) algorithm is one of the most recently introduced SI algorithms. ABC simulates the intelligent foraging behavior of a honeybee swarm. In the ABC model, the foraging bees are classified into three categories: employed bees, onlookers, and scout bees. The main steps of the algorithm are shown in Algorithm 2.

ABC starts by associating each employed bee with a randomly generated food source (solution). In mathematical terms, $SN$ is the total number of food sources; the $i$th food source position can be represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ in the $D$-dimensional space, and $F(X_i)$ refers to the nectar amount of the food source located at $X_i$. In each iteration, every employed bee determines a food source in the neighborhood of its current food source and evaluates its nectar amount (fitness). The production of a neighbor food source around $X_i$ is controlled by the following equation:

$$v_{ij} = x_{ij} + \phi_{ij} \left( x_{ij} - x_{kj} \right), \tag{3.1}$$

where $k \in \{1, 2, \ldots, SN\}$ ($k \ne i$) and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes and $\phi_{ij}$ is a uniform random number in $[-1, 1]$. If the new food source's fitness value is better than the best fitness value achieved so far, then the bee moves to this new food source, abandoning the old one; otherwise it remains at its old food source. When all employed bees have finished this process, they share the fitness information with the onlookers, each of which selects a food source according to a probability defined as

$$p_i = \frac{F(X_i)}{\sum_{n=1}^{SN} F(X_n)}. \tag{3.2}$$

Each onlooker that selects the food source $X_i$ finds a new neighborhood food source in the vicinity of $X_i$ by using (3.1), and the same greedy selection mechanism is employed as the selection operation between the old and the new food sources. With this scheme, good food sources attract more onlookers than bad ones. In ABC, if a food source position cannot be improved further through a predetermined number of cycles, then that food source is assumed to be abandoned and a scout bee randomly chooses a new food source position in the search space.
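The employed, onlooker, and scout phases just described can be sketched in a compact minimization loop. This is an illustrative sketch of the basic ABC, not the exact configuration used later in the paper; the sphere test function and all parameter values are assumptions.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, cycles=100, seed=3):
    """Sketch of the basic ABC loop: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1): the eq. (3.1) move.
        k = rng.choice([n for n in range(n_food) if n != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        return v

    def try_improve(i):
        v = neighbor(i)
        if f(v) < f(foods[i]):            # greedy selection between old and new
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):           # employed-bee phase
            try_improve(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        total = sum(fits)
        for _ in range(n_food):           # onlooker phase: roulette on fitness
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_food):           # scout phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)

best = abc_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

Note how the neighbor step shrinks automatically as the food sources cluster, since the move size is proportional to the distance between two sources.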
3.3. The Particle Swarm Optimization
The canonical PSO is a successful SI-based technique. In the PSO model, the rules that govern the particles' movements are inspired by models of fish schooling and bird flocking [21]. Each particle has a position and a velocity and experiences linear spring-like attractions towards two attractors: (i) its own previous best position; (ii) the best position of its neighbors.
In mathematical terms, the $i$th particle is represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ in the $D$-dimensional space, where $x_{id} \in [l_d, u_d]$ and $l_d$, $u_d$ are the lower and upper bounds for the $d$th dimension, respectively. The velocity of particle $i$ is represented as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$, clamped to a maximum velocity $V_{\max}$ specified by the user. In each time step $t$, the particles are manipulated according to the following equations:

$$v_{id}(t+1) = \chi \left[ v_{id}(t) + c_1 r_1 \left( p_{id} - x_{id}(t) \right) + c_2 r_2 \left( p_{gd} - x_{id}(t) \right) \right], \tag{3.3}$$
$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \tag{3.4}$$

where $r_1$ and $r_2$ are random values between 0 and 1, $c_1$ and $c_2$ are learning rates, which control how far a particle moves in a single iteration, $p_{id}$ is the best position found so far by the particle, $p_{gd}$ is the best position of any particle in its neighborhood, and $\chi$ is called the constriction factor, given by

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|},$$

where $\varphi = c_1 + c_2$, $\varphi > 4$. The main steps of the PSO procedure are shown in Algorithm 3.
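The constricted velocity and position updates can be sketched as follows. For brevity this sketch uses a global-best neighborhood rather than the ring topology mentioned later; the sphere objective and parameter defaults are illustrative assumptions.

```python
import math
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200, seed=7):
    """Canonical constricted PSO (global-best variant, for simplicity)."""
    rng = random.Random(seed)
    lo, hi = bounds
    c1 = c2 = 2.05
    phi = c1 + c2                                       # phi = 4.1 > 4
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                          # personal bests
    gbest = min(pbest, key=f)[:]                        # neighborhood best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-10, 10))
```

With $c_1 = c_2 = 2.05$ the constriction factor works out to roughly 0.7298, which damps the velocities and makes explicit velocity clamping less critical.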

Kennedy and Eberhart [13] proposed a binary PSO in which a particle moves in a state space restricted to zero and one on each dimension, in terms of changes in the probability that a bit will be in one state or the other. The velocity update formula (3.3) remains unchanged, except that $x_{id}$, $p_{id}$, and $p_{gd}$ are integers in $\{0, 1\}$ and $v_{id}$ must be constrained to the interval $[-V_{\max}, V_{\max}]$. This can be accomplished by introducing a sigmoid function $S(\cdot)$, so that the new particle position is calculated using the following rule:

$$x_{id} = \begin{cases} 1 & \text{if } \mathrm{rand} < S(v_{id}), \\ 0 & \text{otherwise}, \end{cases} \tag{3.5}$$

where rand is a random number selected from a uniform distribution in $[0, 1]$ and $S(\cdot)$ is a sigmoid limiting transformation:

$$S(v) = \frac{1}{1 + e^{-v}}. \tag{3.6}$$
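The sigmoid-based position rule can be sketched directly: each bit becomes 1 with probability equal to the sigmoid of its velocity component. This is an illustrative fragment, not a full binary PSO.

```python
import math
import random

def sigmoid(v):
    """Sigmoid limiting transformation S(v) = 1 / (1 + e^{-v})."""
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(velocity, rng):
    """Each bit is set to 1 with probability S(v_id), 0 otherwise."""
    return [1 if rng.random() < sigmoid(v) else 0 for v in velocity]

rng = random.Random(0)
# A strongly negative velocity makes the bit very likely 0, a strongly
# positive one very likely 1, and zero velocity gives an even coin flip.
bits = binary_position_update([-6.0, 0.0, 6.0], rng)
```

Clamping velocities to $[-V_{\max}, V_{\max}]$ keeps the bit-flip probabilities away from 0 and 1, which preserves some exploration.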
3.4. The Multi-Swarm Optimizer: PS^{2}O
The canonical PSO uses the analogy of a single-species population, together with suitable definitions of the particle dynamics and the particle information network (interaction topology), to reflect social evolution in the population. However, the situation in nature is much more complex than this simple metaphor suggests. Indeed, in biological populations there is a continuous interplay between individuals of the same species, as well as encounters and interactions of various kinds with other species [22]. This can be clearly seen in such ecological systems as symbiosis, host-parasite systems, and prey-predator systems, in which two organisms mutually support each other, one exploits the other, or they fight against each other. For instance, mutualistic relations between plants and fungi are very common: the fungus invades and lives among the cortex cells of the secondary roots and, in turn, helps the host plant absorb minerals from the soil. Another well-known example is the "association" between the Nile crocodile and the Egyptian plover, a bird that feeds on any leeches attached to the crocodile's gums, thus keeping them clean. This kind of "cleaning symbiosis" is also common in fish.
Inspired by the mutualism phenomenon, in previous works [18, 19] we extended the single-population PSO to an interacting multi-swarm model by constructing hierarchical information networks and enhanced particle dynamics. In our multi-swarm approach, interaction occurs not only between the particles within each swarm but also between different swarms. That is, information is exchanged on a hierarchical topology of two levels (i.e., the individual level and the swarm level). Many patterns of connection can be used at the different levels of our model; the most common ones are rings, two-dimensional and three-dimensional lattices, stars, and hypercubes. Two example hierarchical topologies are illustrated in Figure 2. In Figure 2(a), four swarms at the upper level are connected by a ring, while each swarm (possessing four individual particles at the lower level) is structured as a star. In Figure 2(b), both levels are structured as rings. In the proposed model, each individual moving through the solution space is influenced by three attractors: (i) its own previous best position; (ii) the best position of its neighbors within its own swarm; (iii) the best position of its neighbor swarms.
In mathematical terms, our multi-swarm model is defined as a triplet $(P, T, C)$, where $P = \{P_1, P_2, \ldots, P_M\}$ is a collection of $M$ swarms, each swarm possessing a member set of $N$ individuals; $T$ is the hierarchical topology of the multi-swarm; and $C$ is the enhanced control law of the particle dynamics, which can be formulated as

$$v_i^k(t+1) = \chi \left[ v_i^k(t) + c_1 r_1 \left( p_i^k - x_i^k(t) \right) + c_2 r_2 \left( g^k - x_i^k(t) \right) + c_3 r_3 \left( g^{s_k} - x_i^k(t) \right) \right], \tag{3.7}$$
$$x_i^k(t+1) = x_i^k(t) + v_i^k(t+1), \tag{3.8}$$

where $x_i^k$ represents the position of the $i$th particle of the $k$th swarm, $p_i^k$ is the personal best position found so far by that particle, $g^k$ is the best position found so far by this particle's neighbors within swarm $k$, $g^{s_k}$ is the best position found so far by the other swarms in the neighborhood of swarm $k$ (here $s_k$ is the index of the swarm to which that best position belongs), $c_1$ is the individual learning rate, $c_2$ is the social learning rate between particles within each swarm, $c_3$ is the social learning rate between different swarms, and $r_1$, $r_2$, $r_3$ are random vectors uniformly distributed in $[0, 1]$. The constriction factor $\chi$ is calculated by

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}, \tag{3.9}$$

where $\varphi = c_1 + c_2 + c_3$, $\varphi > 4$. Here, the term $c_1 r_1 (p_i^k - x_i^k)$ is associated with cognition, since it takes into account the individual's own experiences; the term $c_2 r_2 (g^k - x_i^k)$ represents the social interaction within swarm $k$; the term $c_3 r_3 (g^{s_k} - x_i^k)$ takes into account the symbiotic coevolution between dissimilar swarms. The pseudocode for the PS^{2}O algorithm is listed in Algorithm 4.

Note that, for solving discrete problems, we still use (3.5) and (3.6) to discretize the position vectors in the PS^{2}O algorithm.
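The three-attractor dynamics can be sketched as a continuous minimizer with swarms arranged on a ring, so that each swarm is also pulled toward the best of its two neighboring swarms. This is an illustrative sketch of the PS^{2}O idea, not the authors' implementation; the equal learning rates, swarm counts, and sphere objective are assumptions.

```python
import math
import random

def ps2o_minimize(f, dim, bounds, n_swarms=4, swarm_size=5, iters=150, seed=11):
    """Sketch of the PS^2O update: each particle is attracted to its personal
    best, its own swarm's best, and the best of its ring-neighbor swarms."""
    rng = random.Random(seed)
    lo, hi = bounds
    c1 = c2 = c3 = 4.1 / 3.0                # phi = c1 + c2 + c3 = 4.1 > 4
    phi = c1 + c2 + c3
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    X = [[[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
         for _ in range(n_swarms)]
    V = [[[0.0] * dim for _ in range(swarm_size)] for _ in range(n_swarms)]
    P = [[x[:] for x in swarm] for swarm in X]          # personal bests
    G = [min(swarm, key=f)[:] for swarm in P]           # per-swarm bests
    for _ in range(iters):
        for k in range(n_swarms):
            # Best position among the two ring neighbors of swarm k.
            nb = min((G[(k - 1) % n_swarms], G[(k + 1) % n_swarms]), key=f)
            for i in range(swarm_size):
                for d in range(dim):
                    r1, r2, r3 = rng.random(), rng.random(), rng.random()
                    V[k][i][d] = chi * (V[k][i][d]
                                        + c1 * r1 * (P[k][i][d] - X[k][i][d])
                                        + c2 * r2 * (G[k][d] - X[k][i][d])
                                        + c3 * r3 * (nb[d] - X[k][i][d]))
                    X[k][i][d] = min(max(X[k][i][d] + V[k][i][d], lo), hi)
                if f(X[k][i]) < f(P[k][i]):
                    P[k][i] = X[k][i][:]
                    if f(P[k][i]) < f(G[k]):
                        G[k] = P[k][i][:]
    return min(G, key=f)

best = ps2o_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-10, 10))
```

Because each swarm keeps its own attractor, the swarms do not all collapse onto one point at once, which is the diversity-preserving effect discussed above.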
4. Risk Management in a VE Based on Evolutionary and SI Algorithms
The detailed design of the risk management algorithm based on EA and SI algorithms is introduced in this section. Since the risk management model described in Section 2 has a two-level hierarchical structure, the proposed EA- and SI-based risk management algorithm is composed of two types of evolving population that search at different levels, namely, the upper-population and the lower-population. The designed algorithm reflects a two-phase search process, that is, a top-level searching phase and a base-level searching phase. In the top-level searching phase, the upper-population searches a continuous space for the optimal investment budget allocation by the owner to all VE members. In the base-level searching phase, the lower-population receives information from the upper-population and searches a discrete space for the best action combination for risk management of all VE partners.
4.1. Chromosome Representation Scheme and Model Transformation
4.1.1. Definition of Continuous Individual
In the upper-population, each individual has a dimension equal to $n + 1$ (i.e., the number of VE members). Each individual is a possible allocation of the investment budget to all members and has a real-number representation. The $i$th individual of the upper-population is defined as follows:

$$X_i = (x_{i0}, x_{i1}, \ldots, x_{in}).$$
For example, the real-number particle (286.55, 678.33, 456.78, 701.21, 567.62) is a possible investment budget allocation for a VE consisting of 5 members. The first bit means that the owner receives an investment of 286.55 units. Bits 2 to 5 mean that the amounts of investment allocated to Partners 1 to 4 are 678.33, 456.78, 701.21, and 567.62, respectively.
4.1.2. Definition of Discrete Individual
For the lower-population, in order to appropriately represent the action combination by a particle, we design an "action-to-risk-to-partner" representation for the discrete individual. Each discrete individual in the lower-population has a dimension equal to $W \times m \times n$, where $W$ is the number of available actions for each risk factor, $m$ is the number of risk factors of each partner, and $n$ is the number of VE partners. The $i$th individual of the lower-population is defined as a binary vector of components $y_{jkw}$, where $y_{jkw}$ equals 1 if the $k$th risk factor of VE partner $j$ is addressed by the $w$th action, and 0 otherwise. That is, each partner can select only one action for each risk factor, or do nothing about that factor. For example, set $W = 4$, $m = 4$, and $n = 2$, and suppose that the action combination of the two partners is (2314, 2401), where 0 stands for "no action" and is selected for the third risk factor of the second partner in the VE. By this definition, the bits corresponding to the selected actions equal 1 and all other bits equal 0 (see Figure 3).
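The encoding just described can be sketched as a small helper that flattens an action combination into the binary individual. The function name and index layout (partner-major, then factor, then action slot) are illustrative assumptions; the paper's Figure 3 may order the bits differently.

```python
def encode_actions(action_combo, W):
    """Sketch of the 'action-to-risk-to-partner' encoding: action_combo[j][k]
    is the action index (0 = no action) chosen for risk factor k of partner j.
    Returns a flat 0/1 vector of length W * m * n."""
    n, m = len(action_combo), len(action_combo[0])
    bits = [0] * (W * m * n)
    for j, partner in enumerate(action_combo):
        for k, action in enumerate(partner):
            if action > 0:                 # 0 means "do nothing with this factor"
                bits[(j * m + k) * W + (action - 1)] = 1
    return bits

# Two partners, four risk factors, four actions each: the combination (2314, 2401).
bits = encode_actions([[2, 3, 1, 4], [2, 4, 0, 1]], W=4)
```

Each group of $W$ consecutive bits holds at most a single 1, which is exactly the one-action-per-factor constraint the representation must respect.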
4.1.3. Model Transformation
The base-level objectives, that is, (2.4), formulated in Section 2 are then transformed into an equivalent unconstrained discrete optimization problem, (4.3)–(4.6), in which the budget constraint is handled by a penalty term. In this formulation, $w_{jk}$ is the weight of risk factor $k$ of partner $j$, $v_r$ is the value corresponding to risk rating $r$, $l$ is the number of risk ratings, and $\lambda$ is the punishment coefficient. The function $\mathrm{pos}(y_{jk})$ is defined as the position index of the 1 in the bit group $y_{jk}$; for example, if $y_{jk} = (0010)$, the value of $\mathrm{pos}(y_{jk})$ is 3. The probability of risk occurrence at a risk rating under a given action is approximated by a convex decreasing exponential function, whose parameter describes the effects of different risk factors under different risk ratings. The cost of an action is assumed to be a concave increasing function of the corresponding action, whose parameter describes the effects of the different risk factors of the different partners.
Similarly, the top-level objectives, that is, (2.1)–(2.3), formulated in Section 2 are transformed into an equivalent unconstrained optimization problem, (4.7)-(4.8), in which two punishment coefficients penalize violations of the budget and risk-level constraints and the base-level reaction is taken to be the last best base-level decision. Here the risk level of the owner is approximated by a convex decreasing function of the owner's budget.
Although the rewritten model, that is, (4.3)–(4.8), is more complex than the original model, that is, (2.1)–(2.4), since more variables are described in it, it allows EA and SI algorithms to be applied to the risk management problem in a VE in a straightforward way.
4.2. Risk Management Procedure
The overall risk management process based on EA and SI algorithms can be described as follows.
Step 1. The first step in the top-level is to randomly initialize the EA- and SI-based upper-population. Each individual in the top-level is an instruction and is communicated to the base-level to drive a base-level search process (Steps 2–4).
Step 2. For each top-level instruction, the base-level randomly initializes a corresponding lower-population. At each iteration in the base-level, the fitness of each particle is evaluated using the base-level optimization function, that is, (4.3).
Step 3. Compare the evaluated fitness values of all individuals in the lower-population. Then update each base-level individual by the updating rules of the selected EA or SI algorithm. For our problem, each partner can select only one action for each risk factor, or do nothing about that factor. To enforce this, for each particle, one action is selected for each risk factor of each partner according to a normalized selection probability. The position of each base-level particle is then updated by Algorithm 5.
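The one-action-per-factor repair step in Step 3 can be sketched as a roulette selection over each factor's candidate options. This is a hypothetical reconstruction (the original probability formula was not preserved); here the selection weight of each option is assumed proportional to the sigmoid of its velocity component, in keeping with the binary particle rule of Section 3.

```python
import math
import random

def select_actions(velocities, rng):
    """For each risk factor, choose exactly one of the W+1 options (W actions
    plus 'no action'), with probability proportional to sigmoid(velocity)."""
    choices = []
    for v_factor in velocities:           # one velocity per candidate option
        weights = [1.0 / (1.0 + math.exp(-v)) for v in v_factor]
        total = sum(weights)
        r, acc, w = rng.uniform(0, total), 0.0, 0
        for w, wt in enumerate(weights):  # roulette wheel over the options
            acc += wt
            if acc >= r:
                break
        choices.append(w)                 # index 0 can encode "no action"
    return choices

rng = random.Random(2)
# Four risk factors, five options each (index 0 = no action).
actions = select_actions([[0.5, -2.0, 3.0, 0.0, -1.0]] * 4, rng)
```

Selecting exactly one option per factor guarantees that every decoded individual is feasible, so no separate repair pass is needed after the velocity update.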

Step 4. The base-level search process is repeated until the maximum number of base-level iterations is met. Then the last best base-level decision variable is sent to the top-level for the fitness computation of the corresponding top-level individual.
Step 5. With the base-level reaction, each top-level individual is evaluated by the top-level fitness function, that is, (4.7).
Step 6. Compare the evaluated fitness values of all individuals in the upper-population. Then update each top-level individual by the updating rules of the selected EA or SI algorithm. The top-level computation is repeated until the maximum number of top-level iterations is met.
The flowchart of this risk management process is illustrated in the diagram given in Figure 4.
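The nested structure of Steps 1–6 can be sketched end to end: every top-level candidate budget triggers a full base-level search whose reaction feeds the top-level fitness. All names and the toy objective below are illustrative; the population-update step is deliberately left as random re-sampling, since any of the EA/SI update rules from Section 3 can be plugged in there.

```python
import math
import random

def two_level_optimize(top_fitness, base_solve, dim, budget,
                       top_iters=30, pop_size=10, seed=5):
    """Sketch of the two-phase search: each top-level candidate (a budget
    allocation x) drives a base-level solve, and the base-level reaction is
    used to evaluate the top-level fitness (Steps 2-5 above)."""
    rng = random.Random(seed)

    def random_alloc():
        xs = [rng.random() + 1e-9 for _ in range(dim)]
        s = sum(xs)
        return [budget * v / s for v in xs]   # allocations respect the total budget

    best_x, best_fit = None, float("inf")
    for _ in range(top_iters):
        for x in [random_alloc() for _ in range(pop_size)]:
            reaction = base_solve(x)          # base-level reaction for this budget
            fit = top_fitness(x, reaction)
            if fit < best_fit:
                best_x, best_fit = x[:], fit
    return best_x, best_fit

# Toy stand-ins: the base level returns each member's (convex, decreasing)
# risk level under its budget; the top level simply sums those risk levels.
alloc, risk = two_level_optimize(
    top_fitness=lambda x, r: sum(r),
    base_solve=lambda x: [math.exp(-0.01 * xi) for xi in x],
    dim=3, budget=1000.0)
```

The key cost driver is visible here: every top-level evaluation pays for a full base-level solve, which is why the base-level generation and population limits in Section 5 are kept modest.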
5. Experiments Analysis
In this section, a numerical example of a VE is studied to validate the capability of the proposed VE risk management method. Experiments were conducted with four EA- and SI-based algorithms, namely, GA, PSO, ABC, and PS^{2}O, to fully evaluate the performance of the proposed optimization model.
5.1. Illustrative Examples
In this example, the total investment budget is fixed; 10 risk factors are considered for each partner and 4 actions can be selected for each risk factor (i.e., $m = 10$ and $W = 4$); the number of risk ratings is $l = 3$ (according to the values of the ratings, the criterion of risk rating is shown in Table 1); the maximum risk level is set so that the risk level of each member must be below the medium level; the weight of the risk level of each VE member is specified, and the weights of each risk factor for each partner are listed in Table 2; the values of the two approximation-function parameters are presented in Tables 3 and 4, respectively; the three punishment coefficients are given as 1.5, 28, and 0.2.




This simulated VE environment is constructed from one owner and different numbers of partners. For scalability study purposes, all involved algorithms are tested on three illustrative VE examples with 2, 4, and 9 partners (i.e., n = 2, 4, 9), respectively.
5.2. Settings for Involved Algorithms
In applying the EA and SI algorithms to this case, the continuous and binary versions of these algorithms are used in the top-level and base-level of the DDM optimization model, respectively. For the top-level algorithms, the maximum number of generations in each execution is 50 for each algorithm; the initial population size of 10 individuals is the same for all involved algorithms, while for PS^{2}O the whole population is divided into 2 swarms (each possessing 5 individuals) in the initialization step. For the base-level algorithms, the maximum number of generations is 100 for each algorithm; the initial population size of 20 particles is the same for all involved algorithms, while for PS^{2}O the whole population is divided into 4 swarms (each possessing 5 individuals) in the initialization step. Each algorithm is run 30 times. The other algorithm-specific parameters are given below.
GA Settings
The experiment employed a binary-coded standard GA with random selection, crossover, mutation, and elite units. Stochastic uniform sampling was the chosen selection method. A single-point crossover operation with a rate of 0.8 was employed. Mutation restores genetic diversity lost during the application of reproduction and crossover; the mutation rate in the experiment was set to 0.01.
PSO Settings
For the continuous PSO, the learning rates $c_1$ and $c_2$ were both 2.05, giving a constriction factor $\chi \approx 0.729$; for the binary PSO, the learning rates and the maximum velocity $V_{\max}$ were set accordingly. The ring topology was used for both versions of PSO.
ABC Settings
The basic ABC is used in this study. Since, to our knowledge, there is as yet no literature using ABC for discrete optimization, this experiment simply used a crossover operation to update individuals in the ABC population. That is, for discrete problems, the ABC position update (3.1) is replaced by a crossover-based update, (5.1).
The limit parameter of ABC is set to $SN \times D$ in both continuous and discrete search, where $D$ is the dimension of the problem and $SN$ is the number of employed bees.
PS^{2}O Settings
For the continuous PS^{2}O, the three learning rates were set to equal values with $\varphi = c_1 + c_2 + c_3 > 4$, and $\chi$ was then calculated by (3.9); the interaction topology illustrated in Figure 2(a) was used. For the discrete PS^{2}O, the corresponding learning rates and maximum velocity were set accordingly; the interaction topology illustrated in Figure 2(b) was used.
All algorithms are tested on the risk management problems with 3, 5, and 10 VE members. The representative results obtained are presented in Table 5, including the best, worst, mean, and standard deviation of the risk values of the VE found in 30 runs. Figures 5(a)–5(c) present the evolution process of all algorithms for the minimization of the VE risks at the 3 different scales. For each trial, we can see that, before the EA- and SI-based risk management procedure is applied, the risk levels of the VE are very high. Table 5 shows that the resulting risk levels of the VE fall into the lower risk band. Therefore, the budget and the actions selected by the four EA and SI algorithms for the owner and the partners are very effective in reducing the risks of the VE. From the results, it is clear that the PS^{2}O algorithm consistently converges to better results than the other three algorithms for all test cases. PS^{2}O is also the fastest at finding good results, within relatively few generations.

In this experiment, an analysis of variance (ANOVA) test was also carried out to validate the efficacy of the four tested EA and SI methods. The graphical statistical analyses are presented through box plots. A box plot is a graphical tool that provides an excellent visual summary of many important aspects of a distribution. The box stretches from the lower hinge (defined as the 25th percentile) to the upper hinge (the 75th percentile) and therefore contains the middle half of the scores in the distribution. The median is shown as a line across the box; thus one-fourth of the distribution lies between this line and the top of the box, and one-fourth between this line and the bottom of the box.
First, the box plots for the results presented in Table 5 are shown in Figures 6(a)–6(c). Figure 6 gives a graphical representation of the performance of all algorithms over 30 runs. From this box plot representation, it is clearly visible that PS^{2}O provides better results for all the test cases than the other three algorithms.
Second, to compare the robustness of the involved algorithms on the risk management problem, the experiment can be statistically treated as a one-factor experiment, in which the optimization result is the response variable and the scale of the VE is the factor, with 3 levels: 3, 5, and 10. The results of the ANOVA for VE risk management at the different scales using these algorithms are presented in Figure 7 as box plots. From Figure 7, we can observe that the main effects of the problem scale are not significant for the optimization results obtained by the PS^{2}O algorithm. Therefore, against the scale variation of the tested VE cases, the robustness of PS^{2}O is much better than that of PSO, GA, and ABC.
6. Conclusions
In this paper, we developed an optimization model for minimizing the risks of a virtual enterprise based on evolutionary and swarm intelligence methods. First, a two-level risk management model was introduced to describe the decision processes of the owner and the partners. This DDM model considers the situation in which the owner allocates the budget to each member of the VE in order to minimize the risk level of the VE. Accordingly, a transformed optimization model, to which EA and SI algorithms can easily be applied for the VE risk management problem, was elaborately developed. We should note that the proposed optimization model is generic and extendible: the model does not depend on the optimization algorithm used, and other evolutionary and swarm intelligence techniques could equally well be adopted, which enables a comparison of various algorithms on the same application scenario.
The experiments present comparative studies on the VE risk management problem for GA, PSO, ABC, and PS^{2}O. The simulation results show that the PS^{2}O algorithm obtains superior solutions on the three test cases compared with the other algorithms in terms of optimization accuracy and computation robustness. That is, in PS^{2}O, the hierarchical interaction topology maintains a suitable diversity in the whole population while, at the same time, the enhanced dynamical update rule significantly speeds up the convergence of the multi-swarm to the global optimum.
Acknowledgments
This work is supported by the National 863 Plan projects of China under Grants nos. 2006AA04A117 and 08H2010201. The first author would like to thank Dr. Fuqing Lu for helpful discussions and constructive comments.
References
[1] W. H. Ip, M. Huang, K. L. Yung, and D. Wang, “Genetic algorithm solution for a risk-based partner selection problem in a virtual enterprise,” Computers & Operations Research, vol. 30, no. 2, pp. 213–231, 2003.
[2] R. L. Kliem and I. S. Ludin, Reducing Project Risk, Gower, Hampshire, UK, 1997.
[3] J. Ma and Q. Zhang, “The search on the established risk of enterprise dynamic alliance,” in Proceedings of the International Conference on Management Science and Engineering, pp. 727–731, 2002.
[4] M. Huang, H.-M. Yang, and X.-W. Wang, “Genetic algorithm and fuzzy synthetic evaluation based risk programming for virtual enterprise,” Acta Automatica Sinica, vol. 30, no. 3, pp. 449–454, 2004.
[5] X. Sun, M. Huang, and X. Wang, “Tabu search based distributed risk management for virtual enterprise,” in Proceedings of the 2nd IEEE Conference on Industrial Electronics and Applications (ICIEA '07), pp. 2366–2370, Harbin, China, May 2007.
[6] F.-Q. Lu, M. Huang, W.-K. Ching, X.-W. Wang, and X.-L. Sun, “Multi-swarm particle swarm optimization based risk management model for virtual enterprise,” in Proceedings of the 1st ACM/SIGEVO Summit on Genetic and Evolutionary Computation (GEC '09), pp. 387–392, Shanghai, China, June 2009.
[7] J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
[8] X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
[9] T. Bäck and H.-P. Schwefel, “Evolution strategies I: variants and their computational implementation,” in Genetic Algorithms in Engineering and Computer Science, pp. 111–126, Wiley, Chichester, UK, 1995.
[10] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[11] M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
[12] R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
[13] J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle swarm algorithm,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 4104–4108, Orlando, Fla, USA, October 1997.
[14] K. M. Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52–67, 2002.
[15] H. Chen, Y. Zhu, and K. Hu, “Cooperative bacterial foraging optimization,” Discrete Dynamics in Nature and Society, vol. 2009, Article ID 815247, 17 pages, 2009.
[16] H. Chen, Y. Zhu, and K. Hu, “Multi-colony bacteria foraging optimization with cell-to-cell communication for RFID network planning,” Applied Soft Computing, vol. 10, no. 2, pp. 539–547, 2010.
[17] D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Computer Engineering Department, Engineering Faculty, Erciyes University, 2005.
[18] H. Chen and Y. Zhu, “Optimization based on symbiotic multi-species coevolution,” Applied Mathematics and Computation, vol. 205, no. 1, pp. 47–60, 2008.
[19] H. Chen, Y. Zhu, K. Hu, and X. He, “Hierarchical swarm model: a new approach to optimization,” Discrete Dynamics in Nature and Society, in press.
[20] D. T. Pham and D. Karaboga, “Optimum design of fuzzy logic controllers using genetic algorithms,” Journal of Systems Engineering, pp. 114–118, 1991.
[21] J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
[22] M. Tomassini, Spatially Structured Evolutionary Algorithms: Artificial Evolution in Space and Time, Natural Computing Series, Springer, Berlin, Germany, 2005.
Copyright
Copyright © 2010 Hanning Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.