Abstract

An advanced chemical reaction optimization algorithm based on balanced local and global search is proposed, which combines the advantages of adaptive chemical reaction optimization (ACRO) and particle swarm optimization (PSO) to solve continuous optimization problems. The new algorithm is mainly based on the framework of ACRO, with PSO's global search operator applied as part of ACRO's neighborhood search operator. Moreover, a "finish" operator is added to ACRO's structure, and the search operator is evolved by an adaptive scheme. The algorithm was tested on a set of twenty-three benchmark functions and compared with a recently proposed hybrid algorithm based on chemical reaction optimization (CRO) and particle swarm optimization (denoted as HP-CRO). The final comparison results show a superior performance improvement over HP-CRO in most experiments.

1. Introduction

We often encounter optimization problems in scientific and technological research and development. Over the past decades, a series of optimization algorithms have been proposed: genetic algorithm (GA) (see, e.g., [1]), simulated annealing (SA) (see, e.g., [2]), ant colony optimization (ACO) (see, e.g., [3]), particle swarm optimization (PSO) (see, e.g., [4]), chemical reaction optimization (CRO) (see, e.g., [5]), and others. The optimization algorithms mentioned above all share the same objective, that is, to find the best (or optimal) solution.

In general, when faced with an optimization problem, we can always formalize it as follows: a solution $x$ and a solution space $S$, an objective function $f$, and a set of constraints (assume there are $m$ constraints) $g_1, g_2, \dots, g_m$, which confine the search region. A particular solution is usually represented by a vector of variables $x = (x_1, x_2, \dots, x_n)$, where $n$ corresponds to the problem dimension.

The optimality of $x$ is evaluated by the objective function $f$ and its output value $f(x)$. Our objective can be either to maximize or to minimize $f$. In this paper we assume the latter. Then our goal is to find the minimum solution $x^*$ where $f(x^*) \le f(x)$ for all $x \in S$. A minimization problem can thus be described as follows:
$$\min_{x \in S} f(x) \quad \text{subject to} \quad h_i(x) = 0, \ i \in E, \qquad g_j(x) \le 0, \ j \in I, \qquad x \in \mathbb{R}^n,$$
where $\mathbb{R}$, $E$, and $I$ represent the real number set, the index set for equalities, and the index set for inequalities, respectively. The No-Free-Lunch theorem states that all metaheuristics which search for extrema perform exactly the same when averaged over all possible objective functions (see, e.g., [6]). In other words, when one works excellently on a certain class of problems, it will be outperformed by the others on other classes.
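To make the formulation concrete, the following minimal Python sketch represents a candidate solution, an objective, and an inequality constraint (the sphere objective and ball constraint are illustrative choices, not functions from this paper):

```python
import numpy as np

def f(x):
    # Illustrative objective to minimize: the sphere function.
    return float(np.sum(x ** 2))

def g(x):
    # Illustrative inequality constraint g(x) <= 0: stay inside a ball.
    return float(np.linalg.norm(x)) - 10.0

n = 30                                    # problem dimension
lower, upper = -100.0, 100.0              # bounds defining the solution space S
x = np.random.uniform(lower, upper, n)    # a candidate solution in S

print(f(x), "feasible" if g(x) <= 0 else "infeasible")
```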

In recent years, CRO has been proposed and has attracted increasing interest from the optimization community; a variety of improved algorithms based on CRO have been suggested (see, e.g., [7–10]) and it has been applied in many fields (see, e.g., [11–15]). Of all these algorithms, adaptive chemical reaction optimization (ACRO) stands out and shows strong superiority. PSO has been applied in a variety of fields and shows a high convergence speed. However, it usually converges to a local minimum quickly and loses the opportunity to find a better one.

From the discussion above, ACRO appears to be a well-performing optimization algorithm. However, similar to CRO, ACRO is still lacking in convergence speed. In order to avoid the weaknesses of ACRO and PSO, we propose a new algorithm that combines the advantages of both (denoted as AACRO).

The rest of the paper is organized as follows. Section 2 briefly outlines the related works and gives the inspiration for our proposed algorithm. We explain the modifications of ACRO and introduce the basic framework of AACRO in Section 3. In Section 4, we present the proof of convergence and provide the convergence speed analysis. In Section 5, we describe the 23 benchmark problems. In Section 6, we present the simulation results and compare the results of AACRO and HP-CRO; in particular, Section 6.1 presents the experimental environment, parameter settings are shown in Section 6.2, and Section 6.3 gives the comparison results. We conclude this paper and give some potential future works in Section 7.

2. Related Works

A good optimization algorithm must have good global search performance as well as good local search performance. However, global and local search performance always constrain each other in practice: if an optimization algorithm is good at global search, it tends to be poor at local search, and vice versa. To achieve the best performance, the two abilities should be well balanced.

2.1. The Adaptive Chemical Reaction Algorithm

ACRO was proposed by Yu et al. in 2015 (see, e.g., [9]) as an improvement of CRO, and it substantially inherits the standard CRO's structure. ACRO reduces the number of parameters defined in canonical CRO from eight (initial population size (iniPopSize), initial molecular kinetic energy (iniKE), initial central energy buffer (iniBuffer), molecular collision rate (CollRate), energy loss rate (LossRate), decomposition occurrence threshold (DecThres), synthesis occurrence threshold (SynThres), and perturbation step size (StepSize)) to three classes (energy-related, reaction-related, and real-coded-related) and makes the occurrence of the elementary reactions adaptive.

The energy-related class includes iniKE, iniBuffer, and LossRate. These parameters are set adaptively from the molecules with the largest and the smallest $PE$, and the value of LossRate for each molecule is approximated by a modified folded normal distribution, generated from a normal distribution with a mean value of 0 and a standard deviation $\sigma$.

The reaction-related class includes CollRate, SynThres, and DecThres. A new parameter, ChangeRate, is introduced to replace the original parameters SynThres and DecThres and to control the frequency of decompositions and syntheses. In order to control the number of molecules, a population feedback term is also introduced into the system, based on the current population size (curPopSize) and the initial population size (iniPopSize); the feedback determines the probabilities that the current iteration performs a decomposition or a synthesis, respectively.
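The exact feedback formula is not reproduced here; the sketch below is only one plausible form, assuming the decomposition probability is damped as the population grows and the synthesis probability is damped as it shrinks:

```python
def reaction_probabilities(cur_pop_size, ini_pop_size, change_rate):
    # Illustrative assumption, not the paper's exact formula: scale the
    # base rate ChangeRate by the population ratio so the population
    # is steered back toward iniPopSize.
    ratio = cur_pop_size / ini_pop_size
    p_dec = change_rate / ratio   # decomposition grows the population: damp it when crowded
    p_syn = change_rate * ratio   # synthesis shrinks the population: damp it when depleted
    return p_dec, p_syn
```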

The real-coded-related class includes StepSize. In order to solve continuous optimization problems, the parameter StepSize needs to keep changing over the iterations. The modification of StepSize is twofold.

(1) Initial Value of StepSize. The initial StepSize is derived from the bounds of each element:
$$\text{StepSize}_i = \frac{u_i - l_i}{2},$$
where $\text{StepSize}_i$ is the initial StepSize for the $i$th element of the solution and $u_i$ and $l_i$ are the upper and lower bounds for that element, respectively.

(2) Evolution of StepSize. The strategy “1/5 success rule” (see, e.g., [16]) has been adopted to modify StepSize in the course of searching. This rule was originally stated as follows.

After every $n$ mutations, check how many successes have occurred over the preceding $10n$ mutations. If this number is less than $2n$, multiply the step lengths by the factor 0.85; divide them by 0.85 if more than $2n$ successes occurred.
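A minimal Python sketch of this rule follows (the function name is ours; the factor 0.85 and the $2n$ threshold come from the rule as stated above):

```python
def adapt_step_size(step_size, successes, n, factor=0.85):
    # successes = number of improving mutations over the preceding 10*n
    # mutations; the rule is applied after every n mutations.
    if successes < 2 * n:
        step_size *= factor       # too few successes: shrink the steps
    elif successes > 2 * n:
        step_size /= factor       # many successes: enlarge the steps
    return step_size              # exactly 2*n: leave unchanged
```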

Besides these parameter modifications, the framework of ACRO is similar to canonical CRO. ACRO also satisfies the two laws of thermodynamics and retains the four elementary reactions (i.e., on-wall ineffective collision, decomposition, intermolecular ineffective collision, and synthesis).

2.1.1. On-Wall Ineffective Collision Operator

An on-wall ineffective collision represents the situation when a molecule collides with a wall of the container and bounces away (i.e., $M \rightarrow M'$). This is done by picking the new molecule $M'$ in the neighborhood of $M$, so it only leads to a small change in the previous molecule's structure. As this elementary reaction occurs more often, more and more $KE$ is transferred to the buffer, which offers the energy needed for decomposition. This process can be regarded as a local search of the solution space.

2.1.2. Intermolecular Ineffective Collision Operator

An intermolecular ineffective collision takes place when molecules collide with each other and then bounce away. The number of molecules (assume two) remains unchanged before and after the process (i.e., $M_1 + M_2 \rightarrow M_1' + M_2'$). This elementary reaction is very similar to the on-wall ineffective collision, but no external buffer is involved. The new molecules $M_1'$ and $M_2'$ are generated from the neighborhoods of their predecessors. This process can also be regarded as a kind of local search.

2.1.3. Decomposition Operator

Decomposition refers to the situation when a molecule hits a wall and breaks into several parts (for simplicity, we only consider two parts, i.e., $M \rightarrow M_1' + M_2'$). The idea of decomposition is to allow the system to explore other regions of the solution space after enough local search by the ineffective collisions, so this process can be regarded as a local search over a wider range.

2.1.4. Synthesis Operator

A synthesis happens when multiple (assume two) molecules collide with each other and then fuse together (i.e., $M_1 + M_2 \rightarrow M'$). The resulting molecule has a higher "ability" to explore new solution regions; in other words, it may appear in a region farther away from the existing ones in the solution space. The idea behind synthesis is thus a kind of global search.

Based on the framework introduced above, we can formulate the ACRO algorithm as Algorithm 1.

(1) Input: Objective function f, constraints
(2) \\ Initialization
(3) Set iniPopSize, iniKE, buffer, CollRate, iter, ChangeRate and StepSize
(4) Create PopSize number of molecules
(5) \\ Iterations
(6) while (the stopping criteria not met) do
(7)   if iter mod n == 0 and iter/n > 10 then
(8)     if totalsuccess > 2n then
(9)       StepSize = StepSize/0.85
(10)    else
(11)      StepSize = StepSize × 0.85
(12)    end if
(13)  end if
(14)  Generate b ∈ [0, 1]
(15)  if b ≤ CollRate then
(16)    Randomly select one molecule
(17)    if Decomposition criterion met then
(18)      Trigger Decomposition
(19)    else
(20)      Trigger On-wall Ineffective Collision
(21)    end if
(22)  else
(23)    Randomly select two molecules
(24)    if Synthesis criterion met then
(25)      Trigger Synthesis
(26)    else
(27)      Trigger Inter-molecular Ineffective Collision
(28)    end if
(29)  end if
(30)  Check for any new minimum solution
(31) end while
(32) \\ The final stage
(33) Output the best solution found and its objective function value
2.2. Particle Swarm Optimization Algorithm

Similar to CRO, PSO searches the solution space by using a swarm of particles, which are randomly distributed in the initial search space. Each particle $i$ has three attributes: its position vector $x_i$, its velocity $v_i$, and the best location it has found so far, $pBest_i$. The position of each particle is determined by its own flight experience as well as the swarm's optimal position $gBest$. Based on the rules of PSO, the update process from iteration $t$ to $t+1$ is the following:
$$v_i^{t+1} = w v_i^t + c_1 r_1 \left(pBest_i - x_i^t\right) + c_2 r_2 \left(gBest - x_i^t\right),$$
$$x_i^{t+1} = x_i^t + v_i^{t+1},$$
where $v_i^t$ denotes the velocity of particle $i$ in iteration $t$; $x_i^t$ denotes the position of particle $i$ in iteration $t$; $pBest_i$ is the optimal solution found by particle $i$; $gBest$ is the global optimal solution; $w$ is the inertia weight; $c_1$ is the cognitive weight and $c_2$ is the social weight; $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$.
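For reference, a minimal Python sketch of this update is given below (the default parameter values are typical choices assumed for illustration, not the paper's settings):

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, rng=None):
    # One PSO step: blend inertia, the cognitive pull toward the particle's
    # own best position, and the social pull toward the swarm's best.
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```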

2.3. Inspiration of Advanced Adaptive Chemical Reaction Optimization Algorithm

PSO is famous for its high convergence speed. However, high convergence speed may lead to inadequate local search and a high probability of falling into a local optimum. CRO, by contrast, demonstrates strong local search ability. As an advanced algorithm based on CRO, ACRO greatly simplifies CRO's structure and makes the StepSize adaptive, which further improves CRO's performance.

CRO's strong local search performance and PSO's excellent global search performance make the combination of the two algorithms a natural direction. The HP-CRO algorithm (see, e.g., [17]) proposed by Nguyen et al. combines both and achieves good results.

However, HP-CRO simply replaces CRO's decomposition and synthesis operators with PSO's search operator, so it has the same global optimization operator as PSO, which means that the accuracy of the optimization depends largely on the parameter settings of the PSO algorithm. In other words, if the parameters are set incorrectly, the optimization result may be poor, which greatly weakens the performance of the HP-CRO algorithm. Moreover, without an adaptive scheme, the fixed parameter settings of the CRO part greatly limit the accuracy of the optimization.

In order to overcome the shortcomings mentioned above, AACRO is proposed. The detailed design of AACRO is given in the next section.

3. The Advanced Adaptive Chemical Reaction Optimization Algorithm

This section focuses on the infrastructure and basic principles of the AACRO algorithm.

3.1. Basic Modifications

In order to combine ACRO and PSO organically, we modify two parameters of ACRO and introduce a finish operator. The details are as follows.

3.1.1. Changing ACRO's Neighborhood Operator by Adding PSO's Search Operator ($pBest$ and $gBest$)

As described in Section 2.2, $pBest$ and $gBest$ refer to a particle's own optimal solution and the global best solution, respectively. PSO's high convergence speed is closely related to these two terms, so adding them to ACRO's search operator will also lead to a high convergence speed. However, if we simply adopt them directly, ACRO's search operator becomes the same as PSO's, which leads to premature convergence. To solve this, we define a new parameter w_global to control whether a global search or a local search is performed. The ACRO neighborhood operator then becomes the following:
$$N(x) = x + w\,\delta\,\text{StepSize} + (1 - w)\left[c_1 r_1 \left(pBest - x\right) + c_2 r_2 \left(gBest - x\right)\right], \quad (6)$$
where the value of $w$ is mainly dependent on a randomly generated number $r$ uniformly distributed in $[0, 1]$: if $r$ is larger than w_global, we set $w = 1$; otherwise $w = 0$. $\delta$ is a random number generated from a Gaussian distribution, and $N(\cdot)$ is the neighborhood operator. Equation (6) combines the global search operator and the local search operator with each other; the value of the parameter w_global can be set manually and changed on a schedule, which increases the flexibility of the algorithm.
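A minimal sketch of this hybrid operator, under the reconstruction in (6), could look as follows (the blending form and coefficient defaults are assumptions, not the paper's exact code):

```python
import numpy as np

def neighborhood(x, step_size, p_best, g_best, w_global,
                 c1=2.0, c2=2.0, rng=None):
    rng = rng or np.random.default_rng()
    # w = 1 selects the ACRO-style local perturbation, w = 0 the PSO-style
    # global move, as controlled by w_global.
    w = 1.0 if rng.random() > w_global else 0.0
    delta = rng.standard_normal(x.shape)      # Gaussian perturbation
    local = delta * step_size                 # ACRO-style local move
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    glob = c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # PSO-style move
    return x + w * local + (1.0 - w) * glob
```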

3.1.2. Modifying Synthesis and Decomposition Criterion

With the change of the neighborhood search operator, AACRO has a higher convergence speed. If the synthesis operator fires too frequently, population diversity cannot be maintained; on the other hand, if there are too many decomposition reactions, the total number of molecules shoots up, which turns the algorithm into an unordered search. Therefore, the synthesis operator is suspended if the number of molecules falls below one-half of the original amount, and the decomposition operator is prohibited if the number of molecules more than doubles.
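These guards can be sketched as two simple predicates (the function names are ours):

```python
def synthesis_allowed(cur_pop_size, ini_pop_size):
    # Suspend synthesis once the population falls below half its
    # original size, so diversity is preserved.
    return cur_pop_size >= ini_pop_size / 2

def decomposition_allowed(cur_pop_size, ini_pop_size):
    # Prohibit decomposition once the population has more than doubled,
    # preventing an unordered search.
    return cur_pop_size <= 2 * ini_pop_size
```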

3.1.3. Introducing a “Finish” Operator

At the end of the iterations, we introduce a finish operator. If there is more than one molecule, we repeatedly choose two molecules at random from the existing population and apply the synthesis operator; the optimal molecule is obtained once the number of molecules has been reduced to one. The difference from an ordinary synthesis is that the new molecule produced by the operator is reserved only if its $PE$ is less than the global minimum; otherwise it is given up. The finish operator is somewhat similar to the crossover operator of a genetic algorithm; the difference is that the finish operator incorporates the "survival of the fittest" idea from Darwin's theory of evolution (see, e.g., [18]) and only selects the final optimum solution. The steps of the finish operator are described in Algorithm 2.

(1) Get current PopSize
(2) while PopSize > 1 do
(3)   choose two molecules randomly, conduct synthesis reaction
(4)   if PE of the new molecule < MinPE then
(5)     reserve the new molecule and update MinPE
(6)   end if
(7)   update molecules, update PopSize
(8) end while
(9) Output optimal solution
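A minimal Python sketch of Algorithm 2 follows; `synthesis` is assumed to merge two molecules into one candidate and `pe` to return a molecule's potential energy, and keeping the better parent when the child does not beat the global minimum is our interpretation of "give up":

```python
import numpy as np

def finish(molecules, pe, synthesis, rng=None):
    rng = rng or np.random.default_rng()
    min_pe = min(pe(m) for m in molecules)     # global minimum PE so far
    while len(molecules) > 1:
        i, j = rng.choice(len(molecules), size=2, replace=False)
        m1, m2 = molecules[i], molecules[j]
        child = synthesis(m1, m2)
        # Reserve the synthesized molecule only if it beats the global minimum.
        keep = child if pe(child) < min_pe else min(m1, m2, key=pe)
        min_pe = min(min_pe, pe(keep))
        # Remove both parents, insert the surviving molecule.
        molecules = [m for k, m in enumerate(molecules) if k not in (i, j)]
        molecules.append(keep)
    return molecules[0]                        # the final optimal molecule
```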
3.2. The Framework of AACRO

After these structural changes, we establish the framework of AACRO. Similar to other optimization algorithms, there are three stages in AACRO: initialization, iteration, and the final stage. Figure 1 shows the flow chart.

In the first stage, all parameters are initialized. The state space and constraints are defined first; then we produce the molecule swarm by randomly generating PopSize solutions in the solution space; at last, the $PE$ and $KE$ of each molecule are initialized.

In the iteration stage, a number of iterations are performed. In each iteration, we first determine whether a unimolecular collision or an intermolecular collision happens by randomly generating a number $b$ in the interval $[0, 1]$ and comparing it with CollRate. If $b$ is larger than CollRate, an intermolecular collision results and two molecules are randomly chosen; if both of them satisfy the synthesis criterion, they combine through synthesis, or else an intermolecular ineffective collision takes place. Otherwise, a unimolecular collision is triggered and one molecule is randomly chosen; if it satisfies the decomposition criterion (no new local minimum solution has been obtained within the duration tolerance), the molecule experiences a decomposition, or else an on-wall ineffective collision takes place.

In the final stage, the finish operator is triggered. The existing molecules continuously undergo synthesis reactions until the number of molecules is reduced to one. After each synthesis reaction, the new molecule is updated and the old ones are abandoned.

We provide the source code (see, e.g., [19]), and the detailed pseudocode of AACRO is given in Algorithm 3.

(1) Input: Objective function f and the parameter values
(2) Initialization
(3) Set PopSize, w_global, ChangeRate, CollRate, LossRate and totaliters
(4) Create Swarm, PE, KE, StepSize, success(iter), n and MinPE
(5) Iterations
(6) while (the stopping criteria not met) do
(7)   if (step size change rule met) then
(8)     if ("1/5 success rule" met) then
(9)       StepSize = StepSize/0.85
(10)    else
(11)      StepSize = StepSize × 0.85
(12)    end if
(13)  end if
(14)  Generate b ∈ [0, 1]
(15)  if b ≤ CollRate then
(16)    Randomly select one molecule
(17)    if (Decomposition criterion met) then
(18)      Trigger Decomposition
(19)    else
(20)      Trigger On-wall Ineffective Collision
(21)    end if
(22)  else
(23)    Randomly select two molecules
(24)    if (Synthesis criterion met) then
(25)      Trigger Synthesis
(26)    else
(27)      Trigger Inter-molecular Ineffective Collision
(28)    end if
(29)  end if
(30)  check for any new minimum solution
(31) end while
(32) Trigger Finish operator
(33) Obtain the global optimum
(34) Output the best solution found and its objective function value
3.3. The Differences between the CRO Versions

As can be seen from Section 2, the ACRO version was proposed as an improvement of the canonical CRO version, and in this section we have further optimized the structure of ACRO, so the AACRO version is in fact also an improvement of the canonical CRO version. We therefore summarize the modification experience and analyze the differences between the canonical CRO, ACRO, and AACRO versions.

As the initial version, canonical CRO builds the basic framework and shows relatively balanced global and local search abilities. However, it adopts a fixed step size and unreasonable collision criteria, which results in low optimization efficiency. The improved version ACRO greatly streamlines the structure of canonical CRO and adopts several adaptive strategies, which makes canonical CRO adaptive and speeds up its optimization efficiency to some extent.

The AACRO version inherits most of the adaptive strategies used in the ACRO version and makes some further improvements. To address the low efficiency at the beginning of the optimization process, we adopt PSO's global search operator as part of ACRO's neighborhood operator and use a new parameter to control whether a global or local search is performed, which greatly enhances ACRO's optimization efficiency. However, high optimization efficiency may lead to premature convergence. To prevent this, we modify ACRO's synthesis and decomposition criteria again and impose molecule-count restrictions, while retaining a relatively high convergence speed.

We can therefore conclude that the ACRO and AACRO versions both employ adaptive strategies; moreover, the AACRO version has a higher convergence speed and a better search ability.

4. Convergence Proof and Convergence Speed Analysis

Similar to CRO, the operation of the AACRO algorithm is a process of repeated application of the on-wall ineffective collision, intermolecular ineffective collision, decomposition, and synthesis operators. Each iteration depends only on the state of the current population. Therefore, the AACRO process can be modeled as a Markov chain (see, e.g., [20]), and the convergence of AACRO can be proved by using the characteristics of the Markov chain. However, it is worth noting that the search domain (i.e., the state space $S$) of a continuous constrained problem is often infinite; that is, convergence cannot be proved directly by using Markov chain properties.

To solve the problem above, we define a minimal step size $\varepsilon$: if the StepSize generated by the neighborhood operator is smaller than $\varepsilon$, StepSize is set equal to $\varepsilon$. Furthermore, StepSize in the ACRO algorithm slowly decreases with the iterations, which gradually reduces the actual search domain. With this feature, we can simplify the infinite search domain into a finite state space; then the convergence can be proved.
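In code this safeguard is a one-line clamp (the name `epsilon` for the minimal step size is ours):

```python
def clamp_step_size(step_size, epsilon=1e-8):
    # Never let the adaptive StepSize fall below the minimal step size,
    # which keeps the effective state space finite for the Markov model.
    return max(step_size, epsilon)
```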

4.1. Algorithm Convergence Proof

Before the proof process, we first provide some basic definitions, assumptions, and corresponding inferences.

Definition 1 (pseudomolecule). A pseudomolecule is an imaginary molecule with no attributes (i.e., it possesses neither $PE$ nor $KE$).

The purpose of introducing the pseudomolecule is to keep the number of molecules constant.

Definition 2 (state space). Given a problem $f$, a state of AACRO can be described as follows:
$$X = (m_1, m_2, \dots, m_N) \in S^N,$$
where $S$ denotes the state space of a molecule and $N$ is the maximum population size.

Definition 3 (the best-so-far solution). For a problem $f$, the best-so-far solution $x_t^*$ is the optimal solution found up to the current iteration $t$, where
$$f(x_t^*) = \min\{f(x) \mid x \text{ has been generated up to iteration } t\}.$$

Definition 4 (absorbing Markov chain). Let $Y^*$ be the set of absorbing states, from which the probability of remaining absorbed is 1. Then a Markov chain $\{Y_t\}$ is absorbing if it satisfies
$$P(Y_{t+1} \in Y^* \mid Y_t \in Y^*) = 1.$$

Theorem 5. The optimizing process of AACRO on solving problem $f$ can be modeled by a Markov chain $\{X_t\}$.

Proof. The state at time $t+1$ depends only on the state at time $t$; namely, we have
$$P(X_{t+1} = x_{t+1} \mid X_t = x_t, X_{t-1} = x_{t-1}, \dots, X_0 = x_0) = P(X_{t+1} = x_{t+1} \mid X_t = x_t),$$
where $P$ is the transition probability and $X_t$ is the state on problem $f$. The equation above is the Markov property, so $\{X_t\}$ is indeed a Markov chain with state space $S^N$. Moreover, we can model AACRO as an absorbing Markov chain by appending the best-so-far solution as an additional term: the state of iteration $t$ becomes $Y_t = (X_t, x_t^*)$, and then $P(Y_{t+1} \in Y^* \mid Y_t \in Y^*) = 1$ for the set $Y^*$ of states whose best-so-far solution is optimal, which makes AACRO an absorbing Markov chain.

Definition 6 (nonattenuation sequence). Let $\{a_t\}$ be a sequence, where $a_t \in [0, 1]$, $t = 1, 2, \dots$. The sequence is nonattenuation if it satisfies $\sum_{t=1}^{\infty} a_t = \infty$.

Lemma 7. Given an absorbing Markov chain $\{Y_t\}$, there exists a nonattenuation sequence $\{a_t\}$ such that $P(Y_t \in Y^* \mid Y_{t-1} \notin Y^*) \ge a_t$ for all $t$ if and only if the Markov chain is convergent.

Proof. We first prove the sufficiency part. Suppose $P(Y_t \in Y^* \mid Y_{t-1} \notin Y^*) \ge a_t$ for all $t$, where $\{a_t\}$ is a nonattenuation sequence. Define
$$p_t = P(Y_t \notin Y^*).$$
Then, we have
$$p_t = P(Y_t \notin Y^* \mid Y_{t-1} \notin Y^*)\,p_{t-1} + P(Y_t \notin Y^* \mid Y_{t-1} \in Y^*)\,(1 - p_{t-1}).$$
Obviously, $0 \le p_t \le 1$ for all $t$. From the property of the absorbing Markov chain, we have
$$P(Y_t \notin Y^* \mid Y_{t-1} \in Y^*) = 0.$$
Then
$$p_t = \left[1 - P(Y_t \in Y^* \mid Y_{t-1} \notin Y^*)\right]p_{t-1} \le (1 - a_t)\,p_{t-1}.$$
Therefore,
$$p_t \le p_0 \prod_{i=1}^{t} (1 - a_i) \le \exp\!\left(-\sum_{i=1}^{t} a_i\right).$$
As $\sum_{i=1}^{\infty} a_i = \infty$, we have $\lim_{t \to \infty} p_t = 0$. Then
$$\lim_{t \to \infty} P(Y_t \in Y^*) = 1.$$
Therefore, the algorithm will reach the optimal state with probability 1 as the iteration time tends to infinity.

Necessity Part. If the Markov chain is convergent, we can see from Definition 4 that, as time tends to infinity, the probability that the state converges to the optimal state set is 1; that is, $\lim_{t \to \infty} P(Y_t \in Y^*) = 1$, which is equivalent to
$$\lim_{t \to \infty} p_t = 0.$$
According to the recursion above, we have
$$p_t = \left[1 - P(Y_t \in Y^* \mid Y_{t-1} \notin Y^*)\right] p_{t-1}.$$
Therefore,
$$\lim_{t \to \infty} \prod_{i=1}^{t} \left[1 - P(Y_i \in Y^* \mid Y_{i-1} \notin Y^*)\right] = 0.$$

Let $a_t = P(Y_t \in Y^* \mid Y_{t-1} \notin Y^*)$; then $\prod_{i=1}^{\infty}(1 - a_i) = 0$, which implies $\sum_{t=1}^{\infty} a_t = \infty$. By Definition 6, $\{a_t\}$ is a nonattenuation sequence.

As can be seen from the proof of Lemma 7, an absorbing Markov chain satisfying the condition of the lemma will reach the optimal state with probability 1 as the iteration time tends to infinity. Furthermore, the AACRO algorithm can be modeled as an absorbing Markov chain according to Theorem 5. Hence AACRO can reach the optimal state with probability 1 as long as the time allowed for evolution is sufficiently long.

4.2. Convergence Speed Analysis

Following Definitions 11 and 12 in (see, e.g., [21]), the convergence rate at time $t$ is defined as
$$\lambda_t = 1 - \frac{P(Y_t \notin Y^*)}{P(Y_{t-1} \notin Y^*)}, \quad (20)$$
where $Y^*$ represents the optimal state set. The first hitting time in Definition 12 is denoted as
$$\tau = \min\{t \ge 0 \mid Y_t \in Y^*\}. \quad (21)$$

From (20) and (21), we can see that the first hitting time and the convergence rate at time $t$ are closely related. Intuitively speaking, the faster the convergence rate, the shorter the first hitting time should be.

Although it is difficult to derive the exact convergence rate and first hitting time from (20) and (21), they provide a good mathematical basis for analyzing the convergence rate of AACRO.

This paper mainly studies the effect of StepSize on the convergence rate. For simplicity, it is assumed that the neighborhood operator moves exactly the given step size, so that the continuous domain optimization problem becomes a discrete one; that is, the state space is finite.

Since the molecules are randomly generated in the state space, the probability that any molecule lands in a region of width StepSize around the optimal solution is as follows:
$$P = \prod_{i=1}^{n} \frac{\text{StepSize}_i}{u_i - l_i}, \quad (22)$$
where $u_i$ and $l_i$ correspond to the upper bound and lower bound of the state space in dimension $i$, respectively, the region is centered on the optimal solution space $S^*$, and $n$ is the problem dimension.

It can be seen from (22) that if the StepSize is equal to the difference between the upper and lower bounds, the algorithm degenerates into a completely random search process, and its efficiency is greatly reduced. So the maximum value of the step is generally half of the difference between the upper and lower bounds. Moreover, if the StepSize is large, the probability of reaching the optimal solution from the initial state space is larger; that is, the convergence rate is larger. However, if the StepSize then remains unchanged, the probability of obtaining a worse solution increases, since each iteration takes a rather large step; we may have
$$\lambda_{t+1} < \lambda_t, \quad (23)$$
which results in a reduction of the current convergence speed.

If we change StepSize by the "1/5 success rule," we actually obtain a smaller search space (state space); the state space and the convergence rate then change as follows:
$$|S_t| = |S_0| \cdot \text{rate}^{t}, \qquad \lambda_t = \frac{\lambda_0}{\text{rate}^{t}}, \quad (24)$$
where rate ($< 1$) denotes the change rate of the StepSize and $\lambda_0$ represents the initial convergence speed.

We can see from (24) that $\lambda_t$ keeps increasing, so the search maintains a high speed. What is more, the neighborhood operator of AACRO incorporates PSO's search operator, which further enhances the convergence rate.

It can be seen from the above analysis that the "1/5 success rule" strategy can greatly improve the convergence speed of the algorithm while the algorithm is still guaranteed to converge to the global optimal solution.

5. Test Problems

In order to compare the performance of our proposed AACRO algorithm with the HP-CRO algorithm, we use the set of standard benchmark functions used in (see, e.g., [8]). The benchmark functions are shown in Table 1, which contains 23 benchmark functions with different dimension sizes, solution spaces, and of course different global minima; they can be classified into three categories according to their characteristics.

(1) Unimodal Functions. Functions $f_1$–$f_7$ are unimodal, and their problem dimension sizes are all 30. These functions are relatively easy to solve since there is only one global minimum in each function.

(2) High-Dimensional Multimodal Functions. Functions $f_8$–$f_{13}$ are multimodal. Their problem dimension sizes are also 30, but there are many local minima in each function, so these functions are considered the most difficult problems among the 23 benchmark functions.

(3) Low-Dimensional Multimodal Functions. Functions $f_{14}$–$f_{23}$ are low-dimensional multimodal functions. These functions have the lowest problem dimensions and the smallest search spaces, and there exist some local minima in each function, so they are more difficult than the unimodal functions and easier than the high-dimensional multimodal functions. A detailed introduction to these benchmark functions can be found in [22].

6. Simulation Results

6.1. Experimental Environment

Both AACRO and HP-CRO are implemented in Matlab 7.8. All simulations are performed on a computer with an Intel Core i5-4590 @ 3.3 GHz CPU and 4 GB of RAM in a Windows 7 environment.

6.2. Parameter Setting

In order to achieve the best results, different test functions are set with different parameters, and each simulation terminates when a certain number of function evaluations (NFEs) has been performed. The NFE limits of AACRO for the different test functions are listed in Table 2; the parameter settings, including the local weight and global weight of PSO, are listed in Table 3.

6.3. Comparisons

This paper also provides the results of three RCCRO versions (RCCRO1, RCCRO2, and RCCRO4) and two HP-CRO versions (HP-CRO1 and HP-CRO2). All versions are tested on the 23 standard test functions. For each function, we run AACRO 25 times and obtain the averaged computed minimum value (mean) and the standard deviation (StdDev). We rank the results from the lowest mean to the highest and compute the average rank. Finally, we order the average ranks and obtain the overall rank.
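The comparison procedure for a single function can be sketched as follows (illustrative only; `results` is assumed to map an algorithm name to its 25 computed minima):

```python
import numpy as np

def summarize(results):
    # Mean and standard deviation of the 25 runs per algorithm.
    stats = {name: (np.mean(v), np.std(v)) for name, v in results.items()}
    # Rank algorithms from the lowest mean (rank 1) to the highest.
    order = sorted(stats, key=lambda name: stats[name][0])
    ranks = {name: order.index(name) + 1 for name in order}
    return stats, ranks
```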

As shown in Tables 4–6, the overall rank of AACRO is the best and the HP-CRO versions rank 2nd, followed by the RCCRO versions. It is worth noting that each version has its specialty; in other words, no algorithm works best on all functions. The AACRO version performs best on eighteen of the functions, the HP-CRO versions on five, and the RCCRO versions on three.

Table 4 gives the results for the unimodal functions. From Table 4, we can see that AACRO outperforms the rest of the algorithms: it behaves best on three of the functions, the HP-CRO2 version performs better than the other algorithms on two functions, and the RCCRO2 version ranks first on one. For the remaining function, all versions have the same performance, so they all rank first. The standard deviation of AACRO is also smaller than those of the other versions.

For this table, AACRO ranks first, the 2nd highest overall rank goes to HP-CRO, and the RCCRO versions have the worst performance. In general, AACRO is efficient in solving unimodal functions.

Table 5 gives the results for the high-dimensional multimodal functions (Category II). AACRO again has the best performance: it outperforms the HP-CRO and RCCRO versions on all but one function, on which the HP-CRO2 version performs best. The performances of the RCCRO versions are all unsatisfactory, except that RCCRO4 attains a 2nd rank on one function.

For this table, AACRO also ranks first, followed by the HP-CRO versions, and the RCCRO versions rank last. Therefore, we can conclude that AACRO handles high-dimensional multimodal functions well.

Table 6 gives the results for the low-dimensional multimodal functions. From the overall rank we can see that AACRO has a smaller standard deviation on all but one function. AACRO outperforms the rest on eight of the functions and attains a 2nd rank on another. The RCCRO versions perform best on one function, where the 1st, 2nd, and 3rd ranks all go to RCCRO versions, HP-CRO2 ranks 4th, AACRO ranks 5th, and HP-CRO1 ranks 6th. In general, AACRO is also efficient in solving low-dimensional multimodal functions.

From Tables 4–6, AACRO ranks 1st and the HP-CRO versions rank 2nd, followed by the RCCRO versions. The details of AACRO are also presented, in Tables 7 and 8.

We can see from Tables 7 and 8 that, when dealing with unimodal problems, our proposed AACRO algorithm takes less average computation time than on the other problems. This is expected, since this kind of problem has only one global minimum in each function and it is relatively "easy" to obtain the optimal solution. For the Hartman family of problems, AACRO takes the longest average computation time, due to a "narrow" solution space and a relatively complicated exponential function.

For a more detailed comparison of the AACRO proposed in this paper with the HP-CRO version, we give experimental results for some functions: the results of the executions on four representative functions are shown in Figure 2, and the convergence curves on four functions are shown in Figure 3.

It can be observed that the performance of AACRO is better than HP-CRO in Figures 2(a) and 2(d). It is worth noting that HP-CRO can sometimes perform better, as in Figure 2(c), while AACRO is better than HP-CRO in most cases in Figure 2(b). Moreover, we can see from Figure 3 that AACRO converges faster than HP-CRO.

7. Concluding and Future Work

In this paper, a new algorithm, AACRO, based on balanced local and global search has been proposed. The algorithm retains the features of ACRO, incorporates the optimality operator of the PSO algorithm, and adds a weighting factor to control the ratio of local to global search. This structure allows the algorithm to switch between global and local search seamlessly and efficiently, making it easier to find optimal values.

What is more, we give the convergence proof and the convergence speed analysis, and conclude that the AACRO algorithm keeps converging at a high speed.

Finally, the algorithm is simulated and compared with the HP-CRO and RCCRO versions. The results show that the algorithm can solve optimization problems efficiently.

Our future work will focus on investigating AACRO's parameters and figuring out the impact of each parameter; the structure of the algorithm also still needs to be streamlined. We expect to combine the improved algorithm with engineering practice, which is another key issue for the near future.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.