Mathematical Problems in Engineering

Special Issue: Mathematical Tools of Soft Computing 2014

Research Article | Open Access

Volume 2014 | Article ID 569580 | 8 pages | https://doi.org/10.1155/2014/569580

A Multiobjective Optimization Algorithm Based on Discrete Bacterial Colony Chemotaxis

Academic Editor: Yang Xu
Received: 18 Feb 2014
Accepted: 26 Jun 2014
Published: 14 Jul 2014

Abstract

The bacterial colony chemotaxis (BCC) algorithm was originally developed for optimization problems in continuous spaces. In this paper the discrete bacterial colony chemotaxis (DBCC) algorithm is developed to solve multiobjective optimization problems. The basic DBCC algorithm has the disadvantage of being easily trapped in local minima. Therefore, some improvements are adopted in the new algorithm, such as adding a chaos transfer mechanism when the bacteria choose their next locations and a crowding distance operation to maintain the population diversity in the Pareto front. The chaos transfer mechanism is used to retain the elite solutions produced during the operation, and the crowding distance is used to guide the bacteria toward deterministic variation, thus enabling the algorithm to obtain well-distributed solutions in the Pareto optimal set. The convergence properties of the DBCC strategy are tested on several test functions. Finally, some numerical results are given to demonstrate the effectiveness of the new algorithm.

1. Introduction

In the field of optimization, many researchers have been inspired by biological processes such as evolution [1, 2] or the food-searching behavior of ants [3] to develop new optimization methods such as evolutionary algorithms or ant algorithms. These techniques have been found to perform better than classical heuristic or gradient-based optimization methods, especially when faced with the problem of optimizing multimodal, nondifferentiable, or discontinuous functions. Examples of applying these biologically inspired strategies in the field of engineering range from aerodynamic design [4] to job-shop scheduling problems [5]. Another biologically inspired optimization method is the chemotaxis algorithm, pioneered by Bremermann [6] and his coworkers [7, 8], which was proposed by analogy to the way bacteria react to chemoattractants in concentration gradients. This algorithm was analyzed for chemotaxis in a three-dimensional (3D) gradient by Bremermann [6] and employed for the training of neural networks [7, 8]. The strategy has been evaluated on a number of test functions for local and global optimization, compared with other optimization techniques, and applied to the problem of inverse airfoil design [9]. A similar algorithm is the guided accelerated random search technique [10], which was applied to optimize parameters in flight control systems [11] and to optimize perceptrons [12]. The bacterial colony chemotaxis (BCC) algorithm is a newer colony intelligence optimization algorithm, introduced in [13]. This algorithm considers not only the chemotactic strategy but also the communication between colony members, and its performance is greatly improved as a result.

Most real-world optimization problems require making decisions involving two or more goals. When these goals are the minimization or maximization of objective functions, such problems are typically referred to as multiobjective optimization (MO). Since the 1950s, a variety of so-called classical methods have been developed for the solution of multiobjective optimization problems (MOP). These methods are based on formal logic or mathematical programming [14]. Another interesting biological process that has already been implemented as an optimization technique is the BCC algorithm. This technique exposed the potential of implementing bacterial chemotaxis as a distributed optimization process, recognizing that in natural colonies it is the interaction and communication between bacteria that enables them to develop biologically advantageous patterns. Many real-world binary-discrete optimization problems require making decisions involving two or more goals that are typically in contradiction with each other. So in this paper the DBCC algorithm is developed to solve MOP. The basic DBCC algorithm has the disadvantages of being trapped in local minima and of being unable to maintain the population diversity in the Pareto optimal set. Therefore, some improvements are adopted in the new algorithm, such as adding an arithmetic operator when the bacteria choose their next locations and taking into account the chaos transfer mechanism.

This paper is organized as follows. Section 2 describes the basic theorem of MOP. The model is shown in Section 3. The convergence properties of the developed DBCC strategy which is applied on test functions are given by comparing with other algorithms in Section 4. Finally, some conclusions are drawn in Section 5.

2. Preparative Theorem of MOP

A MOP [15] is defined as the problem of finding a vector of decision variables that satisfies some constraints and optimizes a vector function whose elements represent the objective values. A MOP may be formulated as follows [16]:

min F(x) = (f_1(x), f_2(x), ..., f_k(x))
s.t.  g_i(x) <= 0,  i = 1, ..., m,
      h_j(x) = 0,   j = 1, ..., p,

where x = (x_1, ..., x_n) is the vector of discrete decision variables and f_1, ..., f_k are the objective functions. The inequalities g_i(x) <= 0 and the equalities h_j(x) = 0 are known as the constraint functions.

For a MOP, instead of a single optimal solution, there is a set of optimal solutions known as the Pareto optimal front (POF). Any solution of this set represents a balance between the objective functions; therefore, no other solution in the search space is superior to it when all objectives are considered. For a minimization MOP, Pareto optimality can be mathematically defined as follows [17].

Definition 1 (Pareto dominance). Given two candidate solutions x and y from Ω, where Ω is the feasible region, the vector x is said to dominate the vector y (denoted x ≺ y) if and only if f_i(x) <= f_i(y) for all i in {1, ..., k} and f_j(x) < f_j(y) for at least one j. The vector x is said to weakly dominate y (denoted x ⪯ y) if and only if f_i(x) <= f_i(y) for all i.

Definition 2 (Pareto optimality). A candidate solution x* in Ω is Pareto optimal if and only if there is no x in Ω such that x ≺ x*.

Definition 3 (Pareto optimal set)
The set of all Pareto optimal solutions is known as the Pareto optimal set, and the fitness values corresponding to these solutions form the Pareto front, or trade-off surface, in objective space. In most cases it is not easy to find an analytical expression for the line or curve that contains the POF; thus, one commonly computes a set of optimal solution points and the objective function values at them. In order to find optimal solutions, there are two goals that any multiobjective optimization algorithm (MOA) seeks to achieve [18]: (1) guide the search towards the global Pareto optimal region, and (2) maintain the population diversity in the Pareto optimal front.
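The dominance relations above translate directly into code. The sketch below (Python with NumPy; `dominates` and `nondominated` are illustrative helper names, not from the paper) implements Definitions 1 and 3 for minimization:

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def nondominated(F):
    """Indices of the nondominated rows of the objective matrix F (N x k)."""
    F = [np.asarray(f) for f in F]
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For example, among the objective vectors (1, 4), (2, 3), and (3, 5), the first two are mutually nondominated while the third is dominated by both of them.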

3. Multiobjective Optimal Algorithm Based on DBCC (MOADBCC)

In this section, we present further improvements to the algorithm. MOADBCC can be summarized in the following steps:

The strategy parameters T_0, b, and t_c are chosen according to the required calculation precision ε.

Step 1. Initialize variables.

Generate the bacteria randomly using a binary-discrete encoding and decode them to real values. Set the velocity v; in this model the velocity is assumed to be a scalar constant. Evaluate the fitness of the bacteria and store the nondominated solutions in the Pareto optimal set.

Step 2. Individual bacterial optimization: find the new position x'.

(a) Compute the duration τ of the trajectory by drawing from a random variable with the exponential probability density function

p(τ) = (1/T) exp(−τ/T).

The mean time T is given by

T = T_0                         if f_pr ≥ 0,
T = T_0 (1 + b |f_pr / l_pr|)   if f_pr < 0,

where T_0 is the minimal mean time, x_pr and x_a are the positions of a bacterium in the previous and the current step, respectively, f_pr = f(x_a) − f(x_pr) is the difference between the actual and the previous function value, l_pr is the length of the vector connecting the previous and the actual position in the parameter space, and b is a dimensionless parameter.
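Sub-step (a) can be sketched as follows; the function name, argument names, and the two-branch form of the mean time follow the bacterial chemotaxis model of [9] and are assumptions, not the paper's exact notation:

```python
import numpy as np

def run_duration(T0, b, f_pr, l_pr, rng):
    """Sample the duration tau of the next straight run.
    T0: minimal mean time; b: dimensionless parameter;
    f_pr: change of the objective value over the previous run;
    l_pr: length of the previous run."""
    if f_pr < 0 and l_pr > 0:
        # the previous run went downhill: lengthen the mean run time
        T = T0 * (1.0 + b * abs(f_pr / l_pr))
    else:
        # uphill or flat: keep runs short
        T = T0
    # exponential probability density p(tau) = (1/T) * exp(-tau / T)
    return rng.exponential(T)
```

Downhill steps thus produce longer runs on average, which is what biases the random walk toward decreasing function values.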

(b) The position and motion of a bacterium are described in spherical coordinates, with a radius r and angles φ_1, φ_2, ..., φ_{n−1}.

The angle θ between the previous and the new direction obeys a Gaussian distribution, N(ν, σ²) or N(−ν, σ²) for turning right or left, respectively, where the expectation value is ν = 62°(1 − cos φ) and the standard deviation is σ = 26°(1 − cos φ), with cos φ = exp(−t_c τ_pr), where t_c is the correlation time and τ_pr is the duration of the previous step.

(c) Compute the new position of the bacterium. The length of the path is given by l = v τ.

The normalized new direction vector n_u, with ‖n_u‖ = 1, is multiplied by l to obtain the displacement vector d = l n_u, so that the new location of the bacterium is x' = x + d.
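Sub-steps (a)-(c) combine into one individual move. A two-dimensional sketch follows (the paper works in n-dimensional spherical coordinates; the 62°/26° turning-angle constants are taken from the bacterial model in [9] and are assumptions here):

```python
import numpy as np

def chemotaxis_step_2d(x_old, prev_dir, v, tau, tau_pr, Tc, rng):
    """Turn by a Gaussian angle, then move a distance l = v * tau."""
    cos_phi = np.exp(-Tc * tau_pr)              # correlation with previous run
    mu = np.deg2rad(62.0) * (1.0 - cos_phi)     # expected turning angle
    sigma = np.deg2rad(26.0) * (1.0 - cos_phi)  # its standard deviation
    theta = rng.normal(mu, sigma) * rng.choice([-1.0, 1.0])  # turn left/right
    c, s = np.cos(theta), np.sin(theta)
    new_dir = np.array([c * prev_dir[0] - s * prev_dir[1],
                        s * prev_dir[0] + c * prev_dir[1]])
    new_dir /= np.linalg.norm(new_dir)          # normalized direction, |n_u| = 1
    return x_old + v * tau * new_dir, new_dir   # displacement d = l * n_u
```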

Step 3. Optimization by the bacterial colony: find the new position x''.

Each bacterium acquires information about its environment and computes the centre x_center of its best neighbours. The new position is obtained by

x'' = x' + r (x_center − x'),

where r is a random number within the interval [0, 1] and x_center − x' is the vector connecting x' and x_center.
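The colony move in Step 3 is a random attraction toward the best neighbour centre; a minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def colony_step(x_ind, x_center, rng):
    """x'' = x' + r * (x_center - x'), with r uniform in [0, 1]."""
    r = rng.uniform(0.0, 1.0)
    return x_ind + r * (x_center - x_ind)
```

The returned point always lies on the segment between the bacterium and the centre, so the colony contracts toward promising regions without collapsing onto a single point in one step.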

Step 4. Optimization by the crowding distance operator: find the new position x'''.

The new position of an individual bacterium under the crowding distance operator is given by

x''' = x'' + λ (x_cd − x''),

where λ is assumed to be a real number and x_cd is the position of the bacterium with the biggest crowding distance in the previous step. The number of bacteria updated in this way is limited to a preset maximum.
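The crowding distance used to pick x_cd can be computed as in NSGA-II [26]; the helper below is a standard implementation of that metric, not code from the paper:

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each row of the objective matrix F (N x k).
    Boundary points of each objective get infinite distance."""
    F = np.asarray(F, dtype=float)
    N, k = F.shape
    d = np.zeros(N)
    for m in range(k):
        order = np.argsort(F[:, m])
        d[order[0]] = d[order[-1]] = np.inf   # extremes are always kept
        span = F[order[-1], m] - F[order[0], m]
        if span == 0.0:
            continue
        for i in range(1, N - 1):
            # normalized width of the gap around the i-th point
            d[order[i]] += (F[order[i + 1], m] - F[order[i - 1], m]) / span
    return d
```

Bacteria in sparse regions of the front receive large distances, so moving toward x_cd pushes the population into under-covered parts of the POF.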

Step 5. The reference colony is produced and the bacteria move to their new locations [19].

Step 6. If the chosen target precision is not reached, then go to Step 2; otherwise end.

In practical optimization, the calculation precision can be adjusted adaptively with ε_{k+1} = ε_k / κ, where κ is the precision update constant.

As the initial precision is reached, the parameters are adapted to another precision (defined by the number of parameter changes) and the search continues until this new precision is reached. A given precision is reached if the difference between function values found by the bacterium is smaller than ε for a given number of consecutive steps.
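The precision schedule can be sketched as a simple geometric sequence, assuming the update rule ε_{k+1} = ε_k / κ suggested by the settings in Section 4:

```python
def precision_schedule(eps0, kappa, levels):
    """Successive target precisions: eps0, eps0/kappa, eps0/kappa^2, ..."""
    eps, out = eps0, []
    for _ in range(levels):
        out.append(eps)
        eps /= kappa
    return out
```

With an initial precision of 2.0 and κ = 2, `precision_schedule(2.0, 2.0, 3)` yields `[2.0, 1.0, 0.5]`.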

When the difference between the actual and the previous function value has stayed below the threshold for a preset number of consecutive steps, the bacterial colony has to move to other positions, neglecting the information it held in the previous steps. This is called bacterial migration. We modify the migration with a chaos transfer mechanism (CTM).

Therefore, the migration positions are generated by a chaotic sequence instead of purely at random, so the elite solutions produced so far are retained and the algorithm can escape from the local optimal region.
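The paper does not state the chaotic map explicitly; the logistic map is a common choice for such mechanisms and is used here purely as an assumption for illustration:

```python
def chaos_migration(lo, hi, z, steps=1):
    """Generate a migration position in [lo, hi] from a logistic chaotic
    sequence z <- 4 z (1 - z); returns the position and the updated state."""
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)   # logistic map, fully chaotic at r = 4
    return lo + z * (hi - lo), z
```

Because the logistic sequence is deterministic yet non-repeating, consecutive migrations cover the search box more evenly than independent uniform draws.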

On the other hand, sometimes a bacterium jumps out of the area of the global optimum and is never able to get back. To prevent the strategy from leaving the global optimal regions, the algorithm searches each of the bacterial migration positions, compares the function values with those in the POF, and takes the better ones into the POF. A flowchart of the MOADBCC algorithm is given in Figure 1.

4. Tests Results of the MOADBCC

MOADBCC with operators, MOADBCC without operators, and NSGA-II are applied to the simulations; NSGA-II uses binary coding to obtain the Pareto optimal solution set. The parameter selection of the MOADBCC algorithm follows [6]. The initial precision is set to 2.0, together with the precision update constant. The crossover probability of NSGA-II is 0.9 and the mutation probability is 0.01. The colony size of all three algorithms is 100, and each test function is iterated 100 times, except for the ZDT series functions, which are iterated 200 times. The test environment was an AMD Athlon(tm) II X2 255 processor with 2 GB of memory running Microsoft Windows XP; the procedure was implemented in Matlab 7.1.

This paper applies five test functions to examine the performance of the algorithm: SCH of Schaffer [20], DEB, ZDT1, and ZDT2 of Deb [21–23], and the constrained test function SRN of Srinivas [24, 25]. The test functions are binary coded and discretized so that the optimization is conducted in the discrete domain. ZDT1, ZDT2, and DEB use 10-bit coding, while SRN and SCH use 20-bit coding. Table 1 shows the test functions.


Table 1: Test functions (standard formulations from the cited references).

Problem (n)    Variable bounds     Objective functions (minimize)
SCH (n = 1)    x in [-10^3, 10^3]  f1 = x^2
                                   f2 = (x - 2)^2
DEB (n = 2)    x_i in [0, 1]       f1 = x1
                                   f2 = g(x2)/x1, with
                                   g(x2) = 2 - exp(-((x2 - 0.2)/0.004)^2) - 0.8 exp(-((x2 - 0.6)/0.4)^2)
ZDT1 (n = 30)  x_i in [0, 1]       f1 = x1
                                   f2 = g (1 - sqrt(f1/g)), with g = 1 + 9 (sum_{i=2}^{n} x_i)/(n - 1)
ZDT2 (n = 30)  x_i in [0, 1]       f1 = x1
                                   f2 = g (1 - (f1/g)^2), with g as in ZDT1
SRN (n = 2)    x_i in [-20, 20]    f1 = 2 + (x1 - 2)^2 + (x2 - 1)^2
                                   f2 = 9 x1 - (x2 - 1)^2
                                   subject to x1^2 + x2^2 <= 225 and x1 - 3 x2 + 10 <= 0
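Two of the test functions, written from their standard formulations in the literature [20, 21] (function names are illustrative):

```python
import numpy as np

def sch(x):
    """Schaffer's SCH (n = 1): f1 = x^2, f2 = (x - 2)^2."""
    return np.array([x ** 2, (x - 2.0) ** 2])

def zdt1(x):
    """ZDT1 (n = 30): f1 = x1, f2 = g * (1 - sqrt(f1 / g)),
    with g = 1 + 9 * sum(x[1:]) / (n - 1)."""
    x = np.asarray(x, dtype=float)
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    return np.array([x[0], g * (1.0 - np.sqrt(x[0] / g))])
```

On ZDT1 the true Pareto front is attained when x_2 = ... = x_n = 0, giving g = 1 and f2 = 1 - sqrt(f1).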



In order to evaluate the comprehensive performance of the algorithm, the CM and RVI performance evaluation indexes are applied in this paper to measure the advantages of the algorithms.

The convergence index (CM) [26] measures how close the solutions obtained by the algorithm are to the true solution set: it is the mean Euclidean distance from each obtained solution to the nearest point of the true POF. The smaller the CM value, the closer the obtained solutions are to the real POF.

The effective solution proportion index (RVI) is the proportion of nondominated solutions among all solutions, and it is used to examine the convergence degree of the final solutions. The larger the RVI value, the better the algorithm performs. When counting the nondominated solutions, bacteria occupying the same location are counted only once (the number of nondominated solutions minus the number of bacteria at duplicate locations).
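The two indexes can be sketched as follows, assuming CM is the mean nearest-point distance to the true front and RVI counts distinct nondominated solutions once; the function names are illustrative:

```python
import numpy as np

def cm_index(F, F_true):
    """Mean Euclidean distance from each obtained objective vector (rows
    of F) to its nearest point on the true Pareto front F_true."""
    F, F_true = np.asarray(F, float), np.asarray(F_true, float)
    dists = np.linalg.norm(F[:, None, :] - F_true[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def rvi_index(F, total):
    """Fraction of distinct nondominated objective vectors among `total`
    solutions; duplicate locations are counted only once."""
    F = np.unique(np.asarray(F, float), axis=0)   # drop duplicate locations
    def dominated(f):
        return any(np.all(g <= f) and np.any(g < f)
                   for g in F if not np.array_equal(g, f))
    return sum(not dominated(f) for f in F) / total
```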

Each algorithm is run 30 times independently to obtain the mean (Mean) and standard deviation (Stdev) of each index. Tables 2 and 3 show the Mean and Stdev of the CM and RVI indexes for MOADBCC with operators, MOADBCC without operators, and NSGA-II, respectively.


Table 2: CM index on the standard test functions.

Algorithm                      SCH        ZDT1       ZDT2
MOADBCC (with operators)
  Mean                         0.003100   0.000909   0.000816
  Stdev                        0.000178   0.000458   0.005400
MOADBCC (without operators)
  Mean                         0.003300   0.003400   0.005600
  Stdev                        0.000258   0.005400   0.007100
NSGA-II
  Mean                         0.003200   0.104700   0.187300
  Stdev                        0.000150   0.197700   0.229800


Table 3: RVI index on the standard test functions.

Algorithm                      SCH        ZDT1       ZDT2
MOADBCC (with operators)
  Mean                         1.000000   0.970000   0.965000
  Stdev                        0.007200   0.012900   0.011100
MOADBCC (without operators)
  Mean                         0.990000   0.720000   0.665000
  Stdev                        0.029100   0.043600   0.056900
NSGA-II
  Mean                         1.000000   1.000000   1.000000
  Stdev                        0.000000   0.000000   0.122100

From the comparison tables, all indexes of MOADBCC with operators are better than those of MOADBCC without operators. The performance of MOADBCC and NSGA-II is comparable in terms of the RVI index; however, MOADBCC is better than NSGA-II in terms of the CM index, indicating that it has better overall performance than the NSGA-II algorithm. This is because the operators proposed in this paper make it easier for the bacteria to jump out of local convergence and accelerate the convergence of the algorithm, thus enabling it to obtain more effective solutions within the same number of iterations and increasing the RVI value.

In order to further compare the performances of the three algorithms, this paper shows the simulation results of the three algorithms on the SCH, DEB, and FON test functions in Figures 2, 3, 4, 5, 6, and 7. The figures show that MOADBCC with operators is better than the other two algorithms on these test problems, as it converges to the Pareto front more effectively and obtains more well-distributed solutions.

5. Conclusions

In this paper, a discrete bacterial colony chemotaxis (DBCC) algorithm is developed using a chaos transfer mechanism and a crowding distance strategy and is used to solve multiobjective optimization problems. An additional term containing the chaos transfer mechanism draws the search away from local optima. Incorporation of the crowding distance strategy also improves performance. The results on the test functions demonstrate the validity of the proposed algorithm. In fact, the results given in this paper not only show the effectiveness of DBCC on the test functions but also generally support the use of DBCC for other practical problems. Future work includes tuning the parameter settings and applying the proposed algorithmic framework to other optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the National Natural Science Foundation of China (Project 61374098).

References

  1. H. P. Schwefel, Evolution and Optimum Seeking, Wiley, New York, NY, USA, 1995.
  2. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
  3. M. Dorigo, Learning and natural algorithms [Ph.D. dissertation], Dipartimento di Elettronica e Informazione, Politecnico di Milano, Milano, Italy, 1992.
  4. S. Obayashi, “Pareto genetic algorithm for aerodynamic design using the Navier-Stokes equations,” in Genetic Algorithms in Engineering and Computer Science, Wiley, New York, NY, USA, 1997.
  5. A. Colorni, M. Dorigo, V. Maniezzo, and M. Trubian, “Ant system for job-shop scheduling,” Belgian Journal of Operations Research, Statistics and Computer Science, vol. 34, no. 1, pp. 39–53, 1994.
  6. H. J. Bremermann, “Chemotaxis and optimization,” Journal of the Franklin Institute, vol. 297, no. 5, pp. 397–404, 1974.
  7. H. J. Bremermann and R. W. Anderson, “How the brain adjusts synapses-maybe,” in Automated Reasoning: Essays in Honor of Woody Bledsoe, R. S. Boyer, Ed., pp. 119–147, Kluwer, Norwell, Mass, USA, 1991.
  8. R. W. Anderson, “Biased random-walk learning: a neurobiological correlate to trial-and-error,” in Neural Networks and Pattern Recognition, O. M. Omidvar and J. Dayhoff, Eds., pp. 221–244, Academic Press, New York, NY, USA, 1998.
  9. S. Müller, S. Airaghi, and J. Marchetto, “Optimization based on bacterial chemotaxis,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 16–29, 2002.
  10. R. L. Barron, “Self-organizing and learning control systems,” in Proceedings of the Cybernetic Problems in Bionics. Bionics Symposium, pp. 147–203, Gordon and Breach, New York, NY, USA, May 1966.
  11. “Neuromine nets as the basis for predictive component of robot brains,” in Proceedings of the 4th Annual Symposium American Society of Cybernetics—Cybernetics, Artificial Intelligence, and Ecology, H. W. Robinson and D. E. Knight, Eds., pp. 159–193, Spartan, Washington, DC, USA, 1972.
  12. A. N. Mucciardi, “Adaptive flight control systems,” in Proceedings of the Principles and Practice of Bionics—NATO AGARD Bionics Symposium, pp. 119–167, Brussels, Belgium, September 1968.
  13. W. W. Li, H. Wang, and Z. J. Zou, “Function optimization method based on bacterial colony chemotaxis,” Chinese Journal of Circuits and Systems, vol. 10, no. 1, pp. 58–63, 2005 (Chinese).
  14. G. B. Dantzig and M. N. Thapa, Linear Programming 1: Introduction, Springer Series in Operations Research, Springer, New York, NY, USA, 1997.
  15. M. A. Guzmán, A. Delgado, and J. de Carvalho, “A novel multiobjective optimization algorithm based on bacterial chemotaxis,” Engineering Applications of Artificial Intelligence, vol. 23, no. 3, pp. 292–301, 2010.
  16. K. Deb, Multi-objective Optimization using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2009.
  17. A. Chinchuluun and P. M. Pardalos, “A survey of recent developments in multiobjective optimization,” Annals of Operations Research, vol. 154, pp. 29–50, 2007.
  18. K. Deb, “Evolutionary algorithms for multi-criterion optimization in engineering design,” in Proceedings of the Evolutionary Algorithms in Engineering and Computer Science (EUROGEN '99), K. Miettinen, M. Mäkelä, P. Neittaanmäki, and J. Périaux, Eds., pp. 135–161, Jyväskylä, Finland, 1999.
  19. C. Huilin, L. Zhigang, and S. Songqiang, “Multiobjective optimization using bacterial colony chemotaxis,” in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, 2011.
  20. J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithms,” in Proceedings of the 1st International Conference on Genetic Algorithms, pp. 93–100, 1985.
  21. K. Deb, “Multi-objective genetic algorithms: problem difficulties and construction of test problems,” Evolutionary Computation, vol. 7, no. 3, pp. 205–230, 1999.
  22. K. Deb, Scalable Test Problems for Evolutionary Multi-Objective Optimization, Springer, Berlin, Germany, 2005.
  23. G. G. Yen and H. Lu, “Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density estimation,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 3, pp. 253–274, 2003.
  24. N. Srinivas and K. Deb, “Multiobjective function optimization using nondominated sorting genetic algorithms,” Evolutionary Computation, vol. 2, pp. 221–248, 1994.
  25. T. T. Binh and U. Korn, “MOBES: a multiobjective evolution strategy for constrained optimization problems,” in Proceedings of the IMACS World Congress on Scientific Computation, pp. 357–362, Berlin, Germany, 1997.
  26. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

Copyright © 2014 Zhigang Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

