Abstract

To address the shortcomings of premature convergence and the slow convergence rate of the artificial bee colony algorithm, an improved algorithm was proposed. A chaotic reverse learning strategy was used to initialize the swarm in order to improve the global search ability of the algorithm and maintain its diversity; the similarity degree between individuals was used to characterize the population diversity; the population diversity measure was used as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence were effectively avoided; and a dual-population search mechanism was introduced into the search stage of the algorithm, whose parallel search considerably improved the convergence rate. Simulation experiments on 10 standard test functions and comparisons with other algorithms showed that the improved algorithm has a faster convergence rate and a stronger capability of jumping out of local optima.

1. Introduction

The artificial bee colony (ABC) algorithm is a heuristic optimization algorithm proposed in recent years by Karaboga [1]. References [2, 3] pointed out, by comparing its optimization performance with that of the differential evolution algorithm [4] and the particle swarm algorithm [5], that the ABC algorithm obtains more favorable test results and is one of the most outstanding function optimization methods; it has become a hot topic at the forefront of heuristic algorithm research at home and abroad.

However, similar to other intelligent algorithms, the standard ABC algorithm also has the disadvantages of easily falling prematurely into local optima and of a slow convergence rate in the later stage. In this regard, a number of scholars have made corresponding improvements. Reference [6] proposed an artificial bee colony algorithm for solving the minimum spanning tree problem; [7] proposed an improved algorithm combining the particle swarm algorithm, which to some extent accelerated the local convergence rate; and [8] proposed an improved algorithm combining the differential evolution algorithm with the artificial bee colony algorithm, which effectively improved the search accuracy and, to some extent, the convergence rate. However, the improved algorithms above did not effectively improve the convergence rate and avoid premature convergence at the same time.

In order to overcome premature convergence and improve the convergence speed and optimization accuracy, this paper proposes a new improved artificial bee colony algorithm. First, a chaotic reverse learning strategy is proposed and introduced into the initialization phase of the artificial bee colony algorithm, making the initial population uniformly distributed in the search space, so as to improve the quality of the initial solutions and thus speed up the global convergence of the algorithm. Second, according to the ideas of separate optimization and survival of the fittest, two populations are filtered and optimized using a dual-population structure, so that the optimization process is accelerated while population diversity is maintained. Third, a dynamic self-adaptive idea is introduced into the algorithm and the nectar update formula is improved based on the comparison of population diversity measure values, which solves the problem of the algorithm falling into local optima. Simulation results on 10 widely used standard test functions show that, compared with two existing artificial bee colony algorithms, the proposed algorithm has better optimization accuracy, convergence rate, and robustness.

2. Algorithm Description

2.1. Artificial Bee Colony Algorithm

The artificial bee colony algorithm simulates the mechanism of bees collecting nectar to perform function optimization. In the artificial bee colony algorithm, the bees are divided into three categories, that is, employed bees, onlookers, and scouts [9]. The main task of employed bees and onlookers is to search for and exploit nectar, while scouts search for and compare nectar sources so that the colony does not become restricted to too few sources. The location and quantity of a nectar source correspond to a solution of the function optimization problem and its objective value. The process of searching for the optimal nectar is as follows: employed bees find nectar sources, memorize them, and search for new nectar in the vicinity of each source; at the same time, employed bees release information proportional to the quality of the marked nectar to attract onlookers. Onlookers select a marked nectar source under a certain selection mechanism, search for new nectar in its vicinity, and compare it with the selected source; the nectar of better quality is kept as the marked source, and the best nectar is found by repeating this cycle. If, during the process of collecting nectar, a nectar source remains unchanged after several searches, the corresponding employed bee becomes a scout and randomly searches for a new source [9–11]. The function optimization problem can be expressed as
$$\min\ \operatorname{fit}(X),\qquad X=(x_1,x_2,\ldots,x_D),\quad x_j\in[L_j,U_j],\ j=1,2,\ldots,D,\tag{1}$$
where $\operatorname{fit}$ represents the objective function, $X$ is a $D$-dimensional variable, and $[L_j,U_j]$ is the range of the $j$th dimensional variable. In the ABC algorithm, the numbers of nectar sources, employed bees, and onlookers are all set to $N$. The specific steps of the ABC algorithm are as follows (a code sketch of the full cycle is given after Step 7).

Step 1. Initial positions are generated randomly by the following formula:
$$x_{ij}=L_j+\operatorname{rand}(0,1)\,(U_j-L_j),\tag{2}$$
where $x_{ij}$ is the search position of the $i$th bee in the $j$th dimension and $U_j$, $L_j$ are the upper and lower bounds of the $j$th dimensional variable; the $N$ positions with the lowest fitness values are selected as the nectar positions.

Step 2. Employed bees search for and update nectar in the vicinity of each nectar source according to
$$v_{ij}=x_{ij}+\varphi_{ij}\,(x_{ij}-x_{kj}),\tag{3}$$
where $V_i$ is the position of the new nectar, $x_{ij}$ is the $j$th dimensional position of nectar $i$, $x_{kj}$ is the $j$th dimensional position of a randomly selected nectar $k$ with $k\ne i$, and $\varphi_{ij}$ is a random number in $[-1,1]$.

Step 3. Compare the new nectar with the previous one; if the nectar found by the search is superior, it replaces the previous nectar.

Step 4. According to the roulette-wheel method and the nectar information released by the employed bees, onlookers select nectar sources; the selection probability is
$$p_i=\frac{\operatorname{fit}_i}{\sum_{n=1}^{N}\operatorname{fit}_n},\tag{4}$$
where $\operatorname{fit}_i$ is the fitness value of the $i$th nectar source.

Step 5. Onlookers search for new nectar in accordance with (3) and compare it with the nectar found by the employed bees; the $N$ positions with more nectar are taken as the positions of the employed bees, and the rest are the positions of the onlookers.

Step 6. If some nectar is unchanged after limit cycles, the nectar is given up, the corresponding employed bee turns into a scout, and new nectar is randomly generated according to (2).

Step 7. Record the location of the best nectar source and return to Step 2 until the termination condition is met.
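
For concreteness, the following Python sketch puts Steps 1–7 together (random initialization, employed-bee search, roulette selection by the onlookers, and scout replacement). It is a minimal illustration rather than the authors' implementation; the Sphere objective, the parameter values, and the helper names are chosen only for the example, and the fitness-to-probability mapping in Step 4 assumes nonnegative objective values.

```python
import numpy as np

def sphere(x):
    """Example objective for minimization; any fit() function can be used."""
    return np.sum(x ** 2)

def abc_minimize(fit, lower, upper, n=20, limit=40, max_cycles=300, rng=None):
    """Minimal sketch of the standard ABC cycle (Steps 1-7)."""
    rng = rng or np.random.default_rng()
    dim = lower.size
    # Step 1: random initialization of n nectar positions, formula (2)
    x = lower + rng.random((n, dim)) * (upper - lower)
    f = np.array([fit(xi) for xi in x])
    trials = np.zeros(n)

    def neighbour(i):
        # Formula (3): perturb one dimension towards/away from a random partner k != i
        k = rng.choice([j for j in range(n) if j != i])
        d = rng.integers(dim)
        v = x[i].copy()
        v[d] += rng.uniform(-1.0, 1.0) * (x[i, d] - x[k, d])
        return np.clip(v, lower, upper)

    def greedy(i, v):
        fv = fit(v)
        if fv < f[i]:                    # Step 3: keep the better nectar
            x[i], f[i] = v, fv
            trials[i] = 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n):               # Step 2: employed bees
            greedy(i, neighbour(i))
        quality = 1.0 / (1.0 + f)        # Step 4: roulette probabilities (assumes f >= 0)
        p = quality / quality.sum()
        for _ in range(n):               # Step 5: onlookers
            i = rng.choice(n, p=p)
            greedy(i, neighbour(i))
        worn = np.argmax(trials)         # Step 6: a scout replaces an exhausted nectar source
        if trials[worn] > limit:
            x[worn] = lower + rng.random(dim) * (upper - lower)
            f[worn] = fit(x[worn])
            trials[worn] = 0
    best = np.argmin(f)                  # Step 7: best nectar found so far
    return x[best], f[best]
```

For example, `abc_minimize(sphere, np.full(30, -100.0), np.full(30, 100.0))` returns the best position and objective value found.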

2.2. Particle Swarm Algorithm

The mathematical description of the particle swarm optimization algorithm [5] is as follows. Suppose that, in a $D$-dimensional target space, $m$ particles representing potential solutions form a swarm, where the $i$th particle is represented as a $D$-dimensional vector $X_i=(x_{i1},x_{i2},\ldots,x_{iD})$, which is its position in the $D$-dimensional search space; its flight velocity is $V_i=(v_{i1},v_{i2},\ldots,v_{iD})$; $P_i=(p_{i1},p_{i2},\ldots,p_{iD})$ is the personal best position found by the $i$th particle so far; and $P_g=(p_{g1},p_{g2},\ldots,p_{gD})$ is the global best position found by the swarm so far. In each iteration, particles update velocity and position in accordance with
$$v_{id}^{t+1}=\omega v_{id}^{t}+c_1 r_1\bigl(p_{id}-x_{id}^{t}\bigr)+c_2 r_2\bigl(p_{gd}-x_{id}^{t}\bigr),\qquad x_{id}^{t+1}=x_{id}^{t}+v_{id}^{t+1},\tag{5}$$
where $d=1,2,\ldots,D$, $t$ is the iteration number, $\omega$ is the inertia coefficient, $c_1$, $c_2$ are learning factors (suitable $c_1$, $c_2$ speed up convergence and make it less likely to fall into a local optimum), and $r_1$, $r_2$ are random numbers in $[0,1]$. Particles find $P_g$, the global optimal solution, through constant learning and updating [5, 12–14]. The particle swarm algorithm is applied in the search phase of the proposed algorithm; its main steps are as follows (a sketch of the update step is given after Step 5).

Step 1. Determine the parameters: the number of particles $m$, the inertia factor $\omega$, and the maximum number of iterations.

Step 2. Randomly generate a population of particles.

Step 3. Update the velocity and position of each particle using (5).

Step 4. The global optimal solution is obtained by calculating and comparing the fitness function values; this solution is recorded as the current global best position $P_g$.

Step 5. Determine whether the loop termination condition is met; if it is, record the global optimal solution, otherwise return to Step 3.
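
The velocity and position update of (5) can be sketched as a single iteration step; the inertia and learning factors below are illustrative defaults, not values taken from the paper. This step is reused later in the dual-population sketch of Section 3.3.

```python
import numpy as np

def pso_step(x, v, pbest, pbest_f, gbest, fit, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One iteration of formula (5): update velocities, positions, and the best records."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                   # position update
    f = np.array([fit(xi) for xi in x])
    improved = f < pbest_f                                      # refresh personal bests
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]                           # refresh global best
    return x, v, pbest, pbest_f, gbest
```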

3. Improved Artificial Bee Colony Algorithm

3.1. Chaos Reverse Learning Strategies

Population initialization is particularly important in intelligent algorithms, because the quality of initialization directly affects the global convergence speed of the algorithm and the quality of the corresponding solutions. Under normal circumstances, due to the lack of a priori information, random initialization is often used to generate the initial solutions. Reference [15] proposed a chaotic initialization method in the course of research on the particle swarm algorithm, while [16] proposed a reverse (opposition-based) learning initialization method. On this basis, this paper proposes a chaotic reverse learning strategy that combines these two initialization methods, and the strategy is used to initialize the ABC algorithm; the concrete steps are as follows.

(1) Set the maximum chaotic iteration step and the population size $N$, and generate candidate solutions in the chaotic phase and the reverse learning phase; the flowcharts of the two phases are given in Figures 1 and 2.

(2) From the generated candidates, select the $N$ particles with the best fitness values as the initial bee swarm.
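
One possible reading of these steps, sketched below in Python, iterates a logistic chaotic map to spread candidates over the search space, forms their reverse (opposition) solutions, and keeps the $N$ best of the combined set; the logistic map and its parameters are assumptions made for illustration, since the paper gives the details only in Figures 1 and 2.

```python
import numpy as np

def chaotic_reverse_init(fit, lower, upper, n, chaos_steps=300, rng=None):
    """Sketch of chaotic reverse-learning initialization (one possible reading)."""
    rng = rng or np.random.default_rng()
    dim = lower.size
    # Chaotic phase (assumed logistic map z <- 4 z (1 - z)) spreads points over (0, 1)
    z = rng.uniform(1e-3, 1.0 - 1e-3, size=(n, dim))
    for _ in range(chaos_steps):
        z = 4.0 * z * (1.0 - z)
    chaotic = lower + z * (upper - lower)
    # Reverse-learning phase: the opposition point of x is lower + upper - x
    reverse = lower + upper - chaotic
    # Keep the n candidates with the best (lowest) fitness from the combined set
    pool = np.vstack([chaotic, reverse])
    scores = np.array([fit(p) for p in pool])
    return pool[np.argsort(scores)[:n]]
```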

3.2. Dynamic Self-Adaptive Nectar Update Strategy

In the process of searching for nectar, employed bees often choose the nectar source with the larger amount of nectar, but when many employed bees select the same source, the information about this source is reinforced needlessly, causing too many employed bees to concentrate on one nectar source and leading to congestion or stagnation. When solving optimization problems, this manifests itself as premature and local convergence [17–20]. In order to solve this problem, a new dynamic self-adaptive nectar update strategy is proposed. This strategy introduces the concept of a population diversity measure and uses it to redefine the nectar update formula, in order to improve the search capability of the algorithm.

Reference [21] pointed out that average particle distance and particle fitness are commonly used to characterize population diversity. On the basis of analyzing the disadvantages of this approach, the similarity degree of the individuals in the population is used here to characterize the population diversity, and it is introduced into the nectar update formula, that is, formula (3). Let the number of individuals in the ABC algorithm be $N$, and let the $i$th individual of the $t$th generation bee colony be $X_i^t=(x_{i1}^t,x_{i2}^t,\ldots,x_{iD}^t)$, where $D$ is the dimension of the nectar solution; the historical best nectar position of the $i$th individual is $P_i^t=(p_{i1}^t,p_{i2}^t,\ldots,p_{iD}^t)$. Combining the individual's current nectar position and its historical best position gives $Z_i^t=(X_i^t,P_i^t)$, so all individuals in the bee colony form a matrix of order $N\times 2D$. After normalization, a matrix $R=(r_{ij})_{N\times 2D}$ with $r_{ij}\in[0,1]$ is obtained, in which each row vector $R_i$ can be seen as a fuzzy set expressing the membership degrees of each component of the current nectar position and the historical best position found by the $i$th employed bee or onlooker. The similarity degree of any two rows $R_i$, $R_j$ can then be expressed by their nearness $\sigma(R_i,R_j)$.

The bee colony diversity measure can then be expressed by the population average nearness, that is,
$$E(t)=\frac{2}{N(N-1)}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\sigma\bigl(R_i,R_j\bigr).$$

The value of $E(t)$ is computed from the definition above; if all individuals in the bee colony are identical, the diversity is the worst, and $E(t)$ attains its maximum value 1.
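
Since the nearness formula itself is not reproduced above, the sketch below uses one common choice, a Hamming-type nearness $1-\operatorname{mean}\lvert r_{ik}-r_{jk}\rvert$ on min–max normalized rows; this particular formula is an assumption, but it has the stated property that identical individuals give the maximum value $E(t)=1$.

```python
import numpy as np

def diversity_measure(positions, personal_best):
    """Population diversity E(t) as the average pairwise nearness of normalized rows."""
    z = np.hstack([positions, personal_best])                  # N x 2D matrix of (X_i, P_i)
    span = np.ptp(z, axis=0)
    r = (z - z.min(axis=0)) / np.where(span > 0, span, 1.0)    # min-max normalization to [0, 1]
    n = r.shape[0]
    total, pairs = 0.0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            total += 1.0 - np.mean(np.abs(r[i] - r[j]))        # Hamming-type nearness (assumed form)
            pairs += 1
    return total / pairs                                       # equals 1 when all rows coincide
```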

In the nectar update formula (3) of the ABC algorithm, the coefficient $\varphi_{ij}$ is a randomly generated value in $[-1,1]$, so the relationship between the nectar source and the diversity of the employed bees and the bee colony is ignored. Therefore, the update coefficient is adjusted adaptively: let $\varphi_t$ denote the update coefficient of the $t$th generation bee colony, determined from $E(t)$, the diversity measure of the $t$th generation, together with two constants $a$ and $b$. The improved nectar update formula (10) is obtained by replacing the fixed random coefficient in (3) with the adaptive coefficient $\varphi_t$.
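
A sketch of the adaptive update follows, reusing the diversity measure above. The exact mapping from $E(t)$ to $\varphi_t$ and the constants $a$ and $b$ are assumptions (here the perturbation grows as the colony becomes more similar, which matches the stated goal of escaping premature clustering); only the overall structure, replacing the fixed random coefficient of (3), is taken from the text.

```python
import numpy as np

def adaptive_phi(diversity, a=0.5, b=1.0, rng=None):
    """Assumed mapping: the scale grows as the diversity measure E(t) approaches 1."""
    rng = rng or np.random.default_rng()
    scale = a + b * diversity            # phi_t, the update coefficient of generation t
    return scale * rng.uniform(-1.0, 1.0)

def improved_neighbour(x, i, diversity, lower, upper, rng=None):
    """Formula (10) sketch: formula (3) with the fixed random coefficient replaced by phi_t."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    k = rng.choice([j for j in range(n) if j != i])
    d = rng.integers(dim)
    v = x[i].copy()
    v[d] += adaptive_phi(diversity, rng=rng) * (x[i, d] - x[k, d])
    return np.clip(v, lower, upper)
```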

3.3. Dual Population Search Strategy

Since the ABC and PSO algorithms update individuals in different ways and adopt different optimization strategies, their optimization effects also differ. In order to improve the population diversity of the ABC algorithm and accelerate the search for the optimal solution, and inspired by [8, 22], the advantages of the particle swarm algorithm, namely its simple structure, ease of implementation, small number of parameters, and fast convergence in the early stage, are combined with those of the ABC algorithm to form a dual-population search strategy. The main idea of the strategy is to randomly divide the population into two groups, each group using a different optimization strategy to find the optimal solution; the better solution is selected as the algorithm's optimal solution after comparison. The specific process is as follows.

Step 1. Initialize the population using the chaotic reverse learning strategy described above.

Step 2. The initialized population is randomly divided into two groups; one group uses the improved ABC algorithm described above, updating nectar according to formula (10), and the other group uses the particle swarm algorithm, updating individuals according to formula (5).

Step 3. The two populations search for the optimal solution according to their respective strategies until the termination condition of the algorithm is met. Based on the idea of survival of the fittest, the optimal solutions obtained by the two populations are compared, and the position of the better solution is recorded.

Population diversity is ensured by maintaining the two populations and letting them search in parallel; at the same time, the convergence rate is improved to a large extent, so that the algorithm achieves a higher convergence rate at a reasonable computational complexity.
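
Putting the pieces together, the dual-population strategy can be sketched roughly as follows, reusing the `chaotic_reverse_init`, `abc_minimize`, and `pso_step` sketches given earlier; the split into two equal groups and the cycle counts are illustrative choices.

```python
import numpy as np

def dual_population_search(fit, lower, upper, n=40, cycles=300, rng=None):
    """Sketch of the dual-population strategy: one ABC group, one PSO group, best of both."""
    rng = rng or np.random.default_rng()
    swarm = chaotic_reverse_init(fit, lower, upper, n, rng=rng)   # Step 1: chaotic reverse init
    rng.shuffle(swarm)
    abc_group, pso_group = swarm[: n // 2], swarm[n // 2:]        # Step 2: random split

    # Group 1: improved ABC; shown schematically by running the abc_minimize sketch with this
    # group's size (a full implementation would start from abc_group and use formula (10)).
    abc_best, abc_val = abc_minimize(fit, lower, upper, n=len(abc_group),
                                     max_cycles=cycles, rng=rng)

    # Group 2: particle swarm search using the update of formula (5)
    x = pso_group.copy()
    v = np.zeros_like(x)
    f = np.array([fit(xi) for xi in x])
    pbest, pbest_f = x.copy(), f.copy()
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(cycles):
        x, v, pbest, pbest_f, gbest = pso_step(x, v, pbest, pbest_f, gbest, fit, rng=rng)

    # Step 3: survival of the fittest - keep the better of the two groups' best solutions
    pso_val = fit(gbest)
    return (abc_best, abc_val) if abc_val < pso_val else (gbest, pso_val)
```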

4. Convergence Analysis

The convergence of the IMABC algorithm proposed in this paper is analyzed according to the methods given in the literature [23–25].

4.1. Convergence Criteria

Consider the optimization problem $\langle S,f\rangle$, where $S$ is the solution space and $f$ is the fitness function. If the result of the $k$th iteration of algorithm $A$ is $x_k$, then the next iterate is $x_{k+1}=A(x_k,\xi)$, where $\xi$ is the solution that the algorithm has found during the search. The set $R_{\varepsilon}=\{x\in S\mid f(x)<\inf_{z\in S}f(z)+\varepsilon\}$ is defined as the optimal region; if the algorithm finds a point in $R_{\varepsilon}$, then it can be considered to have found the optimal or an approximately optimal solution.

Condition 1. $f\bigl(A(x,\xi)\bigr)\le f(x)$; if $\xi\in S$, then $f\bigl(A(x,\xi)\bigr)\le f(\xi)$. If the algorithm satisfies this condition, the fitness sequence it generates is nonincreasing.

Condition 2. For any Borel subset $B$ of $S$ whose Lebesgue measure satisfies $v(B)>0$, $\prod_{k=0}^{\infty}\bigl(1-\mu_k(B)\bigr)=0$, where $\mu_k(B)$ is the probability measure assigned to $B$ by the $k$th iteration of the algorithm. If the algorithm satisfies this condition, then after the bee colony searches without limit, the probability that the global optimum cannot be found is 0.

Theorem 1. Suppose that $f$ is measurable, $S$ is a measurable subset of $\mathbb{R}^{D}$, the algorithm $A$ satisfies Conditions 1 and 2, and $\{x_k\}_{k=0}^{\infty}$ is the solution sequence generated by algorithm $A$; then $\lim_{k\to\infty}P\bigl(x_k\in R_{\varepsilon}\bigr)=1$.

4.2. Algorithm Convergence Analysis

Lemma 2. The IMABC algorithm satisfies Condition 1.

Proof. The algorithm uses the chaotic reverse learning strategy to initialize the population, dual-population search is conducted in each iteration, and the best value found so far is always saved; that is, Condition 1 is met.

Definition 3. Assuming that the optimal solution is $x^{*}$, the optimal solution set is defined as $M=\{x\in S\mid f(x)=f(x^{*})\}$, and the optimal state set $G$ is the set of bee colony states that contain at least one element of $M$.

Theorem 4 (see [13, 23–25]). In the algorithm, for the bee colony state sequence $\{s(t);\ t\ge 0\}$, the optimal state set $G$ is a closed set in the state space $\Omega$.

Proof. Let $s_i\in G$ and $s_j\notin G$. For any transfer step length $l\ge 1$, the probability of the bee colony state transferring from $s_i$ to $s_j$ in $l$ steps is obtained from the bee colony algorithm as
$$p_{ij}(l)=\sum_{s_{r_1}\in\Omega}\cdots\sum_{s_{r_{l-1}}\in\Omega}p_{i r_1}\,p_{r_1 r_2}\cdots p_{r_{l-1} j},\tag{13}$$
where $p_{uv}$ is the one-step probability of the bee colony state transferring from $s_u$ to $s_v$, and this probability is determined by the transition probabilities of the individual bees, that is, $p_{uv}=\prod_{k=1}^{n}p\bigl(x_k^{u}\to x_k^{v}\bigr)$, where $n$ is the number of bees. Every product in (13) contains a factor whose target state lies outside $G$; since $s_i\in G$ and the algorithm always retains the best solution found so far, a state containing the global optimum cannot transfer to a state that does not, so at least one such factor equals 0 and hence $p_{ij}(l)=0$. Therefore $G$ is a closed set in the state space $\Omega$.

Theorem 5 (see [23–25]). The bee colony state space $\Omega$ does not contain a nonempty closed set $B$ such that $B\cap G=\emptyset$.

Proof. Assume that there exists a nonempty closed set $B$ with $B\cap G=\emptyset$. Take $s_j\in B$ and $s_i\in G$. Since, after a finite number of iterations, the probability of a scout transferring the state from $s_j$ to $s_i$ is greater than 0, when the step length $l$ is large enough there must be a product expression in the expansion of $p_{ji}(l)$ that is greater than 0; that is, $p_{ji}(l)>0$. By (13) in Theorem 4, the state $s_i\notin B$ can therefore be reached from $s_j\in B$, so $B$ is not a closed set, which contradicts the assumption. Therefore, the state space contains no nonempty closed set disjoint from $G$.

Theorem 6 (see [24]). Assume that a Markov chain has a nonempty closed set $C$ and that there is no other nonempty closed set $D$ with $C\cap D=\emptyset$; then $\lim_{t\to\infty}P\bigl(s(t)=j\bigr)=0$ when $j\notin C$, and $\lim_{t\to\infty}P\bigl(s(t)\in C\bigr)=1$.

Theorem 7. When the bee colony iterates and optimizes without limit, its state sequence enters the optimal state set $G$ with probability 1. Theorem 7 follows directly from Theorems 4–6.

Lemma 8. The IMABC algorithm satisfies Condition 2.

Proof. By Theorem 7, after the bee colony searches without limit, the probability that the global optimum is not found is 0; hence Condition 2 is satisfied.

Theorem 9. The IMABC algorithm converges to the global optimum.

Since the bee colony algorithm satisfies Conditions 1 and 2 (Lemmas 2 and 8), it follows from Theorem 1 that it converges to the global optimum. The algorithm in this paper uses two optimization algorithms and records the optimal solutions obtained by the two independent populations. As long as the probability that one of the two optimization algorithms finds the global optimal solution converges to 1, the probability that the entire algorithm finds the global optimal solution also converges to 1. Therefore, the proposed algorithm is a globally convergent algorithm.

5. Simulation Experiment and Results Analysis

In order to verify the validity of the above analysis and the performance of the improved algorithm, comparison experiments were carried out on the improved algorithm (abbreviated as IMABC), the traditional ABC algorithm, and the improved hybrid algorithm (abbreviated as PABC) proposed in [7], which combines the ABC algorithm with the PSO algorithm. In the simulation experiments, 10 test functions [26–29] were selected; the test set contains high-dimensional functions, both unimodal and multimodal, as well as two-dimensional functions. Table 1 lists the names, dimensions, definitions, ranges, and theoretical global optima of these test functions. In the experiments, the population sizes of the three algorithms are all 40, limit is 40, and the corresponding maximum number of iterations is 5000; the parameter settings of the particle swarm part of the dual-population strategy follow [27]. For each test function, every algorithm is independently run 30 times to obtain the best value, the worst value, the average, and the standard deviation. The best and worst values reflect the solution quality; the average indicates the accuracy that the algorithm can achieve under a given number of function evaluations, reflecting the convergence rate; the standard deviation reflects the stability and robustness of the algorithm. The results are shown in Table 2.
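
The reported statistics can be reproduced in the obvious way: run each algorithm 30 times per function and record the best, worst, mean, and standard deviation of the returned objective values. A small sketch, with the algorithm handle and bounds as placeholders:

```python
import numpy as np

def summarize(algorithm, fit, lower, upper, runs=30):
    """Best / worst / mean / standard deviation of the objective value over independent runs."""
    values = np.array([algorithm(fit, lower, upper)[1] for _ in range(runs)])
    return values.min(), values.max(), values.mean(), values.std()

# e.g. summarize(dual_population_search, sphere, np.full(50, -100.0), np.full(50, 100.0))
```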

As can be seen from the data comparison in Table 2, on most of the standard test functions the IMABC algorithm is greatly improved over the PABC and ABC algorithms, whether in solution quality, convergence accuracy, or stability. On two of the functions, although the best value, worst value, average, and standard deviation of the IMABC algorithm are slightly worse than those of the PABC algorithm, a significant improvement is still achieved compared with the standard ABC algorithm. On the remaining functions, the improved algorithm is significantly better than the standard ABC and PABC algorithms in all test results; for several of these functions, the IMABC algorithm not only obtains good test results but also converges to the optimal solution, showing good search performance.

In order to compare the optimization effect more visually, the IMABC, PABC, and ABC algorithms are compared on the test function convergence curves given in Figures 3 to 12. According to the figures, because of the new initialization method, the dual-population parallel search strategy, and the dynamic self-adaptive update of the nectar position, the improved artificial bee colony algorithm can jump out of local optima and gradually converge to the global optimal solution when processing multimodal functions, and it has a faster convergence rate when processing unimodal and low-dimensional functions. As can be seen from Figures 3, 4, 7, and 8, since the corresponding standard test functions have high complexity in 50 dimensions, none of the three algorithms is able to converge to the optimal solution, but the improved algorithm has a faster convergence rate and significantly better convergence accuracy than the ABC and PABC algorithms. As can be seen from Figures 5, 6, 9, and 12, compared with the other two algorithms, the IMABC algorithm has higher convergence accuracy and converges to the global optimal solution faster and more stably. As can be seen from Figure 10, the proposed algorithm gradually approaches the function's optimal solution as the number of iterations increases; although it does not converge, its degree of approach and search accuracy are significantly better than those of the other two algorithms. As can be seen from Figure 11, the IMABC algorithm and the other two algorithms all approach the function's optimal solution as the number of iterations increases; when the number of iterations exceeds 60, this algorithm differs little from the PABC algorithm, but below 60 iterations the IMABC algorithm clearly has a higher convergence rate. Therefore, it can be concluded that the overall optimization performance of the proposed IMABC algorithm is superior to that of the standard ABC algorithm and the PABC improved algorithm proposed in [7].

6. Conclusion

In order to avoid premature convergence to local optima and to improve the convergence rate of the ABC algorithm, an improved artificial bee colony algorithm based on multistrategy optimization was proposed. To improve the global search ability and maintain population diversity, the improved algorithm uses a chaotic reverse learning initialization method built on existing research results; to prevent the algorithm from falling into local optima, a dual-population search mechanism is introduced into the search phase, merging the advantages of the particle swarm algorithm and the standard ABC algorithm while increasing the convergence rate. In addition, to improve the population diversity and global search capability, the concept of population similarity degree is introduced, and a population diversity measure is proposed as an indicator for dynamic self-adaptive adjustment of the nectar position. Experimental results on the optimization of 10 standard test functions show that the proposed algorithm improves considerably on the standard ABC algorithm and PABC in optimization efficiency, optimization performance, and robustness.

Furthermore, the improved algorithm also has certain limitations: although its optimization performance is improved, its complexity increases to a certain extent. How to ensure that the algorithm jumps out of local optima and maintains a high convergence rate while keeping the computational complexity low will be the focus of the next step of research.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Scientific Research Program of the Higher Education Institution of XinJiang (no. XJEDU2010S48).