Abstract

The firefly algorithm (FA) is a metaheuristic for global optimization. In this paper, we address the practical testing of a heuristic-based FA (HBFA) for computing optima of discrete nonlinear optimization problems, where the discrete variables are of binary type. An important issue in FA is the formulation of the attractiveness of each firefly, which in turn affects its movement in the search space. Dynamic updating schemes are proposed for two parameters, one from the attractiveness term and the other from the randomization term. Three simple heuristics capable of transforming real continuous variables into binary ones are analyzed. A new sigmoid "erf" function is proposed. In the context of FA, three different implementations that incorporate the heuristics for binary variables into the algorithm are proposed. Based on a set of benchmark problems, a comparison is carried out with other metaheuristics that handle binary variables. The results demonstrate that the proposed HBFA is efficient and outperforms binary versions of differential evolution (DE) and particle swarm optimization (PSO). The HBFA also compares very favorably with the angle modulated versions of DE and PSO. It is shown that the variant of HBFA based on the sigmoid "erf" function with "movement on continuous space" is the best, in terms of both computational requirements and accuracy.

1. Introduction

This paper aims to analyze the merit, in terms of performance, of a heuristic-based firefly algorithm (HBFA) for computing the optimal and binary solution of bound constrained nonlinear optimization problems. The problem to be addressed has the form
\[ \min_{x \in \Omega} f(x), \quad \Omega = \left\{ x \in \mathbb{R}^n : x_i \in \{0, 1\}, \; i = 1, \dots, n \right\}, \tag{1} \]
where $f$ is a continuous function. Due to the compactness of $\Omega$, we also have $\Omega \subseteq [l, u]$, where $l$ and $u$ are the vectors of the lower and upper bounds, respectively. We do not assume that $f$ is differentiable or convex. Instead of searching for just a local (nonglobal) solution, we want the globally best binary point. Direct search methods might seem suitable, since we do not assume differentiability. However, they are only local optimization procedures, so there is no guarantee that a global solution is reached. For global optimization, stochastic methods are generally used; they aim to explore the search space and converge to a global solution. Metaheuristics are higher-level procedures or heuristics designed to find good solutions, known as near-optimal solutions, with less computational effort and time than more classical algorithms. They are usually nondeterministic and their behavior does not depend on the problem's properties. Population-based metaheuristics have been used to solve a variety of optimization problems, from continuous to combinatorial ones.

Metaheuristics are common for solving discrete binary optimization problems [1-10]. Many approaches have been developed that aim to solve nonlinear programming problems with mixed-discrete variables by transforming the discrete problem into a continuous one [11]. The simplest and most used approach solves the continuous relaxed problem and then discretizes the obtained solution by using a rounding scheme. This type of approach works well on simple, small-dimension academic and benchmark problems but may be of limited use in some real-world applications.

Recently, a metaheuristic optimization algorithm termed the firefly algorithm (FA), which mimics the social behavior of fireflies based on their flashing and attraction characteristics, has been developed [12, 13]. It is a swarm intelligence optimization algorithm capable of competing with the most well-known algorithms, like ant colony optimization, particle swarm optimization, artificial bee colony, artificial fish swarm, and cuckoo search.

FA performance is controlled by three parameters: the randomization parameter $\alpha$, the attractiveness $\beta$, and the absorption coefficient $\gamma$. Authors have argued that its efficiency is due to its capability of subdividing the population into subgroups (since local attraction is stronger than long-distance attraction) and to its ability to adapt the search to the problem landscape by controlling the parameter $\gamma$ [14, 15]. Several variants of the firefly algorithm already exist in the literature, and a classification scheme based on the settings of their parameters has appeared. Gaussian FA [16], hybrid FA with harmony search [17], hybrid genetic algorithm with FA [18], self-adaptive step FA [15], and the modified FA in [19] are just a few examples. Further improvements have been made aiming to accelerate convergence (see, e.g., [20-22]). A practical convergence analysis of FA with different parameter sets is presented in [23]. FA has become popular and widely used in recent years in many applications, like economic dispatch problems [24] and mixed variable optimization problems [25]. The extension of FA to multiobjective continuous optimization has already been investigated [26]. A recent review of firefly algorithms is available in [14].

Based on the effectiveness of FA in continuous optimization, it is expected to perform well on discrete optimization problems too. Discrete versions of FA are available for solving discrete NP-hard optimization problems [27, 28].

The main purpose of this study is to incorporate into the firefly algorithm heuristics that deal with binary variables, for solving nonlinear optimization problems with binary variables. The implemented binary dealing methods are adaptations of well-known heuristics for defining 0 and 1 bit strings from real variables. Furthermore, a new sigmoid function aiming to constrain a real valued variable to the range $[0, 1]$ is also proposed. Three different implementations that incorporate the heuristics for binary variables and adapt FA to binary optimization are proposed. We apply the proposed heuristic strategies to solve a set of benchmark nonlinear problems and show that the newly developed HBFA is effective in binary nonlinear programming.

The remaining part of the paper is organized as follows. Section 2 reviews the standard FA and presents new dynamic updates for some FA parameters, and Section 3 describes different heuristic strategies and reports on their implementations to adapt FA to binary optimization. All the heuristic approaches are validated with tests on a set of well-known bound constrained problems. These results and a comparison with other methods in the literature are shown in Section 4. Finally, the conclusions and ideas for future work are listed in Section 5.

2. Firefly Algorithm

Firefly algorithm is a bioinspired metaheuristic algorithm that is able to compute a solution to an optimization problem. It is inspired by the flashing behavior of fireflies at night. According to [12, 13, 19], the three main rules used to construct the standard algorithm are the following:
(i) all fireflies are unisex, meaning that any firefly can be attracted to any other brighter one;
(ii) the attractiveness of a firefly is determined by its brightness, which is associated with the encoded objective function;
(iii) attractiveness is directly proportional to brightness but decreases with distance.

Throughout this paper, we let $\|\cdot\|$ represent the Euclidean norm of a vector. We use the vector $x = (x_1, \dots, x_n)^T$ to represent the position of a firefly in the search space; the position of firefly $i$ is represented by $x_i$. We assume that the size of the population of fireflies is $m$. In the context of problem (1), firefly $j$ is brighter than firefly $i$ if $f(x_j) < f(x_i)$.

2.1. Standard FA

First, in the standard FA, the positions of the fireflies are randomly generated in the search space as follows:
\[ x_i = l + \textrm{rand} \, (u - l), \quad i = 1, \dots, m, \tag{2} \]
where rand $\sim U[0, 1]$ is a uniformly distributed random number in $[0, 1]$. The movement of a firefly $i$ attracted to another brighter firefly $j$ is given by
\[ x_i = x_i + \beta \left( x_j - x_i \right) + \alpha \, s \, \left( \textrm{rand} - 0.5 \right), \tag{3} \]
where $\alpha \in [0, 1]$ is the randomization parameter, $s$ is a problem dependent vector of scaling parameters, and
\[ \beta = \beta_0 \, e^{-\gamma r_{ij}^2}, \quad \text{with } r_{ij} = \| x_i - x_j \|, \tag{4} \]
gives the attractiveness of a firefly, which varies with the light intensity/brightness seen by adjacent fireflies and the distance between them; $\beta_0$ is the attraction parameter when the distance is zero [12, 13, 22, 29]. Besides the presented "exp" function, any monotonically decreasing function could be used. The parameter $\gamma$, which characterizes the variation of the attractiveness, is the light absorption coefficient and is crucial to determine the speed of convergence of the algorithm. In theory, $\gamma$ could take any value in the set $[0, \infty)$. When $\gamma \to 0$, the attractiveness is constant ($\beta = \beta_0$), meaning that a flashing firefly can be seen anywhere in the search space. This is an ideal case for a problem with a single (usually global) optimum, since it can easily be reached. On the other hand, when $\gamma \to \infty$, the attractiveness is almost zero in the sight of other fireflies and each firefly moves in a random way. In particular, when $\beta_0 = 0$, the algorithm behaves like a random search method [13, 22]. The randomization term can be extended to the normal distribution or to any other distribution [15]. Algorithm 1 presents the main steps of the standard FA (on continuous space); a runnable sketch follows the algorithm box.

Data:  $k_{\max}$, $m$, $f^*$, $\varepsilon$
Set  $k = 1$;
Randomly generate $x_i$, $i = 1, \dots, m$;
Evaluate $f(x_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
while  $k \le k_{\max}$  and  $|f(x_1) - f^*| > \varepsilon$  do
  forall  $x_i$  such that  $i = 2, \dots, m$  do
   forall  $x_j$  such that  $f(x_j) < f(x_i)$  do
     Compute randomization term $\alpha \, s \, (\textrm{rand} - 0.5)$;
     Compute attractiveness $\beta$ using (4);
     Move firefly $i$ towards $j$ using (3);
  Evaluate $f(x_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
  Set  $k = k + 1$;

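To make the above concrete, the following is a minimal, self-contained Python/NumPy sketch of Algorithm 1. The sphere objective, the box $[-5, 5]^{10}$, and the fixed parameter values are illustrative assumptions, not the settings used in the experiments reported later.

```python
import numpy as np

def firefly_continuous(f, lower, upper, m=20, k_max=100,
                       alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = lower.size
    s = upper - lower                          # problem dependent scaling, eq. (3)
    x = lower + rng.random((m, n)) * s         # random initialization, eq. (2)
    fx = np.array([f(xi) for xi in x])
    for _ in range(k_max):
        order = np.argsort(fx)                 # rank from lowest (brightest) up
        x, fx = x[order], fx[order]
        for i in range(m):
            for j in range(m):
                if fx[j] < fx[i]:              # firefly j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)            # eq. (4)
                    x[i] += beta * (x[j] - x[i]) \
                            + alpha * s * (rng.random(n) - 0.5)   # eq. (3)
                    x[i] = np.clip(x[i], lower, upper)
                    fx[i] = f(x[i])
    best = np.argmin(fx)
    return x[best], fx[best]

# example: minimize the sphere function on [-5, 5]^10
xb, fb = firefly_continuous(lambda x: np.sum(x ** 2),
                            np.full(10, -5.0), np.full(10, 5.0))
```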
2.2. Dynamic Updates of $\alpha$ and $\gamma$

The relative values of the parameters $\alpha$ and $\gamma$ affect the performance of FA. The parameter $\alpha$ controls the randomness or, to some extent, the diversity of solutions. The parameter $\gamma$ aims to scale the attraction power of the algorithm. Small values of $\gamma$ with large values of $\alpha$ can increase the number of iterations required to converge to an optimal solution. Experience has shown that $\alpha$ must take large values at the beginning of the iterative process to enforce the algorithm to increase the diversity of solutions. However, small values of $\alpha$ combined with small values of $\gamma$ in the final iterations improve the fine-tuning of solutions, since the effort is focused on exploitation. Thus, it is possible to improve the quality of the solution by reducing the randomness. Convergence can be improved by varying the randomization parameter so that it decreases gradually as the optimum solution is approached [22, 24, 26, 29]. In order to improve convergence speed and solution accuracy, dynamic updates of the parameters $\alpha$ and $\gamma$ of FA, which depend on the iteration counter $k$ of the algorithm, are implemented as follows.

Similarly to the factor $F$, which controls the amplification of differential variations in the differential evolution (DE) metaheuristic [5], the inertia weight in particle swarm optimization (PSO) [29, 30], and the pitch adjusting rate in the harmony search (HS) algorithm [31], we allow the value of $\alpha$ to decrease linearly with $k$, from an upper level $\alpha_{\max}$ to a lower level $\alpha_{\min}$:
\[ \alpha^k = \alpha_{\max} - \left( \alpha_{\max} - \alpha_{\min} \right) \frac{k}{k_{\max}}, \tag{5} \]
where $k_{\max}$ is the maximum number of allowed iterations. To increase the attractiveness with $k$, the parameter $\gamma$ is dynamically updated by
\[ \gamma^k = \gamma_{\max} \, e^{(k / k_{\max}) \ln (\gamma_{\min} / \gamma_{\max})}, \tag{6} \]
where $\gamma_{\min}$ and $\gamma_{\max}$ are the minimum variation and maximum variation of attractiveness, respectively.
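The two schedules can be coded directly. The bound values below are illustrative placeholders, not the tuned settings of the experiments:

```python
import numpy as np

def alpha_k(k, k_max, a_min=0.01, a_max=0.5):
    # eq. (5): alpha decreases linearly from a_max to a_min
    return a_max - (a_max - a_min) * k / k_max

def gamma_k(k, k_max, g_min=0.1, g_max=10.0):
    # eq. (6): gamma decays from g_max to g_min, so the attractiveness
    # beta0 * exp(-gamma * r^2) of eq. (4) grows as iterations proceed
    return g_max * np.exp((k / k_max) * np.log(g_min / g_max))
```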

2.3. Lévy Dependent Randomization Term

We remark that our implementation of the randomization term in the proposed dynamic FA considers the Lévy distribution. Based on the attractiveness $\beta = \beta_0 e^{-\gamma r_{ij}^2}$ in (4), the equation for the movement of firefly $i$ towards a brighter firefly $j$ can be written as follows:
\[ x_i = x_i + \beta \left( x_j - x_i \right) + \alpha^k \, s \, L, \tag{7} \]
where $L$ is a random number from the Lévy distribution centered at $x_{\text{best}}$, the position of the brightest firefly, with a unitary standard deviation. The vector $s$ represents the variation around $x_{\text{best}}$ (based on the real position $x_i$):
\[ s = \left| x_i - x_{\text{best}} \right|. \tag{8} \]
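The text specifies only that the randomization term follows a Lévy distribution centered at the brightest firefly. One common way to draw such steps in practice is Mantegna's algorithm, used in the sketch below with an assumed exponent lam = 1.5; both the sampler and the exponent are implementation assumptions, not details given in the paper.

```python
import math
import numpy as np

def levy_steps(n, lam=1.5, rng=np.random.default_rng()):
    # Mantegna's algorithm for Levy-stable step lengths with exponent lam
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))
               ) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / lam)

def move_levy(x_i, x_j, x_best, beta, alpha_k):
    # eq. (7): attraction towards the brighter firefly j plus a Levy-driven
    # perturbation scaled by the variation around the brightest firefly, eq. (8)
    s = np.abs(x_i - x_best)
    return x_i + beta * (x_j - x_i) + alpha_k * s * levy_steps(x_i.size)
```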

3. Dealing with Binary Variables

The standard FA is a real-coded algorithm, so some modifications are needed to enable it to handle discrete optimization problems. This section describes the implementation of some heuristics within FA for binary nonlinear optimization problems. In the context of the proposed HBFA, three different heuristics to transform a continuous real variable into a binary one are presented. Furthermore, to extend FA to binary optimization, different implementations that incorporate the heuristic strategies into FA are described. We use the term "discretization" for the process that transforms a continuous real variable, represented, for example, by $y$, into a binary one, represented by $b$.

3.1. Sigmoid Logistic Function

This discretization methodology is the most common in the literature when population-based stochastic algorithms are considered in binary optimization, namely, PSO [6, 8, 9], DE [3], HS [1, 32], artificial fish swarm [33], and artificial bee colony [4, 7, 10].

When $x_i$ moves towards $x_j$, the likelihood is that the components of $x_i$ change from binary numbers to real ones. To transform a real number $y$ into a binary one, the following sigmoid logistic function constrains the real value to the interval $[0, 1]$:
\[ \textrm{sig}(y) = \frac{1}{1 + e^{-y}}, \tag{9} \]
where $y$, in the context of FA, is a component of the position vector $x_i$ (of firefly $i$) after the movement (recall (7) and (4)). Equation (9) interprets the floating-point components of a solution as a set of probabilities. These are then used to assign appropriate binary values by using
\[ b = \begin{cases} 1, & \text{if rand} \le \textrm{sig}(y), \\ 0, & \text{otherwise}, \end{cases} \tag{10} \]
where $\textrm{sig}(y)$ gives the probability that the component itself is 0 or 1 [28] and rand $\sim U[0, 1]$. We note that during the iterative process the firefly positions $x_i$ are not allowed to move outside the search space $[l, u]$.
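As a sketch, the discretization in (9)-(10) amounts to using the logistic value as the probability of setting each bit to 1:

```python
import numpy as np

def discretize_logistic(y, rng=np.random.default_rng()):
    # y: array of continuous position components
    p = 1.0 / (1.0 + np.exp(-y))                     # eq. (9)
    return (rng.random(y.shape) <= p).astype(int)    # eq. (10)
```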

3.2. Proposed Sigmoid Function

The error function is a special function whose shape appears mainly in probability and statistics contexts. Denoted by "erf," it is defined by the integral
\[ \textrm{erf}(y) = \frac{2}{\sqrt{\pi}} \int_0^y e^{-t^2} \, dt, \tag{11} \]
satisfies the properties
\[ \textrm{erf}(0) = 0, \quad \textrm{erf}(\pm\infty) = \pm 1, \quad \textrm{erf}(-y) = -\textrm{erf}(y), \tag{12} \]
and has a close relation with the normal distribution probabilities. When a series of measurements is described by a normal distribution with mean 0 and standard deviation $\sigma$, the erf function evaluated at $a / (\sigma \sqrt{2})$, for a positive $a$, gives the probability that the error of a single measurement lies in the interval $[-a, a]$. The derivative of the erf function follows immediately from its definition:
\[ \frac{d}{dy} \, \textrm{erf}(y) = \frac{2}{\sqrt{\pi}} \, e^{-y^2}. \tag{13} \]
The good properties of the erf function are thus used to define a new sigmoid function, the sigmoid erf function:
\[ \textrm{erf}_s(y) = \frac{1}{2} \left( 1 + \textrm{erf}(y) \right), \tag{14} \]
which is a bounded differentiable real function defined for all real $y$ and has a positive derivative at each point. A comparison of functions (9) and (14) is depicted in Figure 1. Note that the slope at the origin of the sigmoid erf function in (14) is $1 / \sqrt{\pi} \approx 0.5641895$, while that of function (9) is 0.25, thus yielding a faster growth from 0 to 1.
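A minimal comparison of the two sigmoids; Python's math.erf makes (14) a one-liner, and the assertion checks the slope value quoted above:

```python
from math import erf, exp, pi, sqrt

def sig_logistic(y):
    return 1.0 / (1.0 + exp(-y))     # eq. (9): slope 0.25 at the origin

def sig_erf(y):
    return 0.5 * (1.0 + erf(y))      # eq. (14): slope 1/sqrt(pi) at the origin

assert abs(2.0 / sqrt(pi) * 0.5 - 0.5641895) < 1e-6   # derivative of (14) at 0
```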

3.3. Rounding to Integer Part

The simplest discretization procedure transforms a continuous component of a point into a 0/1 bit using the rounding to the integer part function, known as the floor function, as described in [34]. Each continuous value $y$ is transformed into a binary one, $b \in \{0, 1\}$, in the following way:
\[ b = \left\lfloor \, \left| y \bmod 2 \right| \, \right\rfloor, \tag{15} \]
where $\lfloor z \rfloor$ represents the floor function of $z$ and gives the largest integer not greater than $z$. The floating-point value is first divided by 2 and then the absolute value of the remainder is floored. The obtained integer number is the bit value of the component.
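A sketch of (15) using math.fmod, which keeps the convention described above (divide by 2, take the absolute value of the remainder, floor):

```python
import math

def bit_floor(y):
    # eq. (15): |y mod 2| lies in [0, 2); flooring yields a 0 or 1 bit
    return math.floor(abs(math.fmod(y, 2.0)))

assert bit_floor(2.7) == 0 and bit_floor(3.4) == 1 and bit_floor(-1.3) == 1
```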

3.4. Heuristics Implementation

In this study, three methods capable of computing global solutions to binary optimization problems using FA are proposed.

3.4.1. Movement on Continuous Space

In this implementation of the previously described heuristics, denoted by "movement on continuous space" (mCS), the movement of each firefly is made on the continuous space and its attractiveness term is updated considering the real position vector. The real position of firefly $i$ is discretized only after all movements towards brighter fireflies have been carried out. We note that the fitness evaluation of each firefly, used for firefly ranking, is always based on the binary position. Algorithm 2 presents the main steps of HBFA with mCS; a runnable sketch is given after the algorithm box.

Data:  $k_{\max}$, $m$, $f^*$, $\varepsilon$
Set  $k = 1$;
Randomly generate $x_i$, $i = 1, \dots, m$;
Discretize position of firefly $i$: $b_i$, $i = 1, \dots, m$;
Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
while  $k \le k_{\max}$  and  $|f(b_1) - f^*| > \varepsilon$  do
  forall  $x_i$  such that  $i = 2, \dots, m$  do
   forall  $x_j$  such that  $f(b_j) < f(b_i)$  do
     Compute randomization term;
     Compute attractiveness $\beta$ using (4);
     Move position $x_i$ of firefly $i$ towards $x_j$ using (7);
  Discretize positions: $b_i$, $i = 1, \dots, m$;
  Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
  Set  $k = k + 1$;

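Putting the pieces together, the following is a condensed Python sketch of HBFA with mCS (Algorithm 2), using the erf-based discretization of (14) with (10). The box $[-1, 1]^n$, the schedule bounds, and the Gaussian perturbation (used for brevity in place of the Lévy steps sketched earlier, which can be dropped in) are assumptions, not the paper's exact settings.

```python
import numpy as np
from math import erf

def hbfa_mcs(f, n, m=40, k_max=500, beta0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    verf = np.vectorize(erf)
    x = rng.uniform(-1.0, 1.0, (m, n))        # real positions; box is assumed

    def discretize(xr):                       # eq. (14) with (10)
        p = 0.5 * (1.0 + verf(xr))
        return (rng.random(xr.shape) <= p).astype(int)

    b = discretize(x)
    fb = np.array([f(bi) for bi in b])        # fitness on binary positions
    for k in range(1, k_max + 1):
        order = np.argsort(fb)
        x, b, fb = x[order], b[order], fb[order]
        alpha = 0.5 - (0.5 - 0.01) * k / k_max                    # eq. (5)
        gamma = 10.0 * np.exp((k / k_max) * np.log(0.1 / 10.0))   # eq. (6)
        for i in range(m):
            for j in range(m):
                if fb[j] < fb[i]:             # move i towards brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    s = np.abs(x[i] - x[0])   # variation around the best, eq. (8)
                    # Gaussian noise for brevity; the paper uses Levy steps
                    x[i] += beta * (x[j] - x[i]) + alpha * s * rng.standard_normal(n)
                    x[i] = np.clip(x[i], -1.0, 1.0)
        b = discretize(x)                     # discretize only after all moves
        fb = np.array([f(bi) for bi in b])
    best = np.argmin(fb)
    return b[best], fb[best]

# e.g., minimize the number of 1 bits: the expected solution is the zero vector
bits, val = hbfa_mcs(lambda bi: float(np.sum(bi)), n=10)
```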
3.4.2. Movement on Binary Space

This implementation, denoted by "movement on binary space" (mBS), moves the binary position of each firefly towards the binary positions of brighter fireflies; that is, each movement is made on the binary space, although the resulting position may fail to be a 0/1 bit string and must be discretized before the attractiveness is updated. Here, fitness is also based on the binary positions. Algorithm 3 presents the main steps of HBFA with mBS.

Data:  $k_{\max}$, $m$, $f^*$, $\varepsilon$
Set  $k = 1$;
Randomly generate $x_i$, $i = 1, \dots, m$;
Discretize position of firefly $i$: $b_i$, $i = 1, \dots, m$;
Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
while  $k \le k_{\max}$  and  $|f(b_1) - f^*| > \varepsilon$  do
  forall  $b_i$  such that  $i = 2, \dots, m$  do
   forall  $b_j$  such that  $f(b_j) < f(b_i)$  do
     Compute randomization term;
     Compute attractiveness $\beta$ based on distance $\| b_i - b_j \|$;
     Move binary position $b_i$ of firefly $i$ towards $b_j$
     using (7) applied to the binary positions;
     Discretize position of firefly $i$: $b_i$;
  Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
  Set  $k = k + 1$;

3.4.3. Probability for Binary Component

For this implementation, named "probability for binary component" (pBC), we borrow the concept from the binary PSO [6, 9, 35], where each component of the velocity vector is directly used to compute the probability that the corresponding component of the particle position is 0 or 1. Similarly, in the FA algorithm, we do not interpret the vector $\Delta_i = \beta (b_j - b_i) + \alpha^k s L$ in (7) as a step size but rather as a means to compute the probability that each component of the position vector of firefly $i$ is 0 or 1. Thus, we define
\[ b_{il} = \begin{cases} 1, & \text{if rand} \le S(\Delta_{il}), \\ 0, & \text{otherwise}, \end{cases} \quad l = 1, \dots, n, \tag{16} \]
where $S$ represents a sigmoid function. Algorithm 4 is the pseudocode of HBFA with pBC; a small sketch of this update follows the algorithm box.

Data:  $k_{\max}$, $m$, $f^*$, $\varepsilon$
Set  $k = 1$;
Randomly generate $x_i$, $i = 1, \dots, m$;
Discretize position of firefly $i$: $b_i$, $i = 1, \dots, m$;
Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
while  $k \le k_{\max}$  and  $|f(b_1) - f^*| > \varepsilon$  do
  forall  $b_i$  such that  $i = 2, \dots, m$  do
   forall  $b_j$  such that  $f(b_j) < f(b_i)$  do
     Compute randomization term;
     Compute attractiveness $\beta$ based on distance $\| b_i - b_j \|$;
     Compute $\Delta_i$ using the binary positions (see (7));
     Discretize and define $b_i$ using (16);
  Compute $f(b_i)$, $i = 1, \dots, m$, rank fireflies (from lowest to largest $f$);
  Set  $k = k + 1$;

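A sketch of the pBC update (16), assuming the erf-based sigmoid of (14) as $S$ (the text allows any sigmoid here):

```python
import numpy as np
from math import erf

def pbc_update(delta, rng=np.random.default_rng()):
    # delta: movement vector computed from the binary positions, cf. (7)
    p = 0.5 * (1.0 + np.vectorize(erf)(delta))          # S(Delta), here (14)
    return (rng.random(delta.shape) <= p).astype(int)   # eq. (16)
```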
4. Numerical Experiments

In this section, we present the computational results that were obtained with HBFA—Algorithms 2, 3, and 4, using (9), (14), or (15)—aiming to investigate its performance when solving a set of binary nonlinear optimization problems. Two small 0-1 knapsack problems are also used to test the algorithms’ behavior on linear problems with 0/1 variables.

The numerical experiments were carried out on a PC with an Intel Core 2 Duo E7500 processor (2.9 GHz) and 4 GB of memory. The algorithms were coded in Matlab Version 8.0.0.783 (R2012b).

4.1. Experimental Setting

Each experiment was conducted 30 times. The size of the population is made to depend on the problem's dimension, up to a maximum of $m = 40$ fireflies. Some experiments were carried out to tune certain parameters of the algorithms. In the proposed FA with dynamic $\alpha$ and $\gamma$, the values of $\alpha_{\min}$, $\alpha_{\max}$, $\gamma_{\min}$, $\gamma_{\max}$, and $\beta_0$ were fixed after this tuning. In Algorithms 2 (mCS), 3 (mBS), and 4 (pBC), iterations were limited to $k_{\max} = 500$ and a tolerance $\varepsilon$ is used to identify a good quality solution. The reported results are averages (over the 30 runs) of the best function values, the number of function evaluations, and the number of iterations.

4.2. Experimental Results

First, we use a set of ten benchmark nonlinear functions with different dimensions and characteristics: five functions are unimodal and the remaining five multimodal [3, 9, 10, 36]. They are displayed in Table 1. Although they are widely used in continuous optimization, we now aim to converge to a 0/1 bit string solution.

First, we aim to compare with the results reported in [3, 9, 10]. Due to poor results, the authors in [10] do not recommend the use of ABC to solve binary-valued problems. The other metaheuristics implemented therein are the following:
(i) angle modulated PSO (AMPSO) and angle modulated DE (AMDE), which incorporate a trigonometric function as a bit string generator into the classic PSO and DE algorithms, respectively;
(ii) binary DE and binary PSO based on the sigmoid logistic function and (10), denoted by binDE and binPSO, respectively.

We noticed that the problems Foxholes, Griewank, Rosenbrock, Schaffer, and Step are not correctly described in [3, 9, 10]. Table 2 shows the averaged best function values (over the 30 runs), $f_{\text{avg}}$, with the St.D. in parentheses, and the averaged number of function evaluations, $nfe_{\text{avg}}$, obtained with the sigmoid logistic function sig in (9), together with (10), while using the three implementations: mCS, mBS, and pBC. The results obtained for these ten functions indicate that our HBFA proposal produces high quality solutions and outperforms the binary versions binPSO and binDE, as well as AMPSO and AMDE. We also note that mCS has the best "$nfe_{\text{avg}}$" values on 6 problems, mBS is better on 3 problems (one being a tie with mCS), and pBC on 2 problems. Thus, the performance of mCS is the best when compared with those of mBS and pBC. The latter is the least efficient of all, in particular on the large dimensional problems.

To analyze the statistical significance of the results, we perform a Friedman test. This is a nonparametric statistical test that detects significant differences in mean rank for one independent variable with two or more levels (also denoted as treatments) and a dependent variable (the matched groups, taken here as the problems). The null hypothesis in this test is that the mean ranks assigned to the treatments under testing are the same. Since all three implementations are able to reach the solutions within the error tolerance on 9 out of 10 problems, the statistical analysis is based on the performance criterion "$nfe_{\text{avg}}$." In this hypothesis test, we have three treatments and ten groups. Friedman's chi-square has a value of 2.737 (with a $p$ value of 0.255). For the chi-square reference distribution with 2 degrees of freedom, the critical value at a significance level of 5% is 5.99. Hence, since 2.737 < 5.99, the null hypothesis is not rejected and we conclude that there is no evidence that the three mean ranks have statistically significant differences.
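For reference, this test is available in SciPy; the arrays below are placeholder $nfe$ values for ten problems under the three implementations, not the paper's data:

```python
from scipy.stats import friedmanchisquare

# placeholder nfe values for ten problems under the three implementations
nfe_mcs = [120, 340, 90, 210, 560, 80, 150, 300, 400, 220]
nfe_mbs = [130, 360, 95, 250, 600, 85, 140, 310, 420, 230]
nfe_pbc = [200, 500, 150, 400, 900, 120, 260, 450, 700, 380]

stat, p = friedmanchisquare(nfe_mcs, nfe_mbs, nfe_pbc)
print(f"chi-square = {stat:.3f}, p value = {p:.3f}")
# reject the null hypothesis of equal mean ranks when p < 0.05
```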

To further compare the sigmoid functions with the rounding to integer strategy, we include in Table 3 the results obtained by the "erf" function in (14), together with (10), and by the floor function in (15). Only the implementations mCS and mBS are tested. The table also shows the averaged number of iterations, $nit_{\text{avg}}$. The results illustrate that implementation mCS (Algorithm 2) works very well with the strategies based on (14), together with (10), and on (15). The success rate for all the problems is 100%, meaning that, in all runs, the algorithms stop because the value at the position of the best/brightest firefly is within the tolerance $\varepsilon$ of the optimal solution $f^*$. Furthermore, mBS (Algorithm 3) works better when the discretization of the variables is carried out by (15). Overall, mCS based on (14) produces the best results on 6 problems, mCS based on (15) gives the best results on 7 problems (including 4 ties with the former), mBS based on (14) wins only on one problem, and mBS based on (15) wins on 3 problems (all ties with mCS based on (15)).

Furthermore, when performing the Friedman test on the four distributions of "$nfe_{\text{avg}}$" values, the chi-square statistic is 13.747 (with a $p$ value of 0.0033). From the chi-square distribution table, the critical value for a significance level of 5% and 3 degrees of freedom is 7.81. Since 13.747 > 7.81, the null hypothesis is rejected and we conclude that the observed differences among the four distributions are statistically significant.

We now introduce into the statistical analysis the results reported in Tables 2 and 3 concerning both implementations mCS and mBS. Six distributions of "$nfe_{\text{avg}}$" values are now compared. Friedman's chi-square value is 18.175 ($p$ value = 0.0027). The critical value of the chi-square distribution for a significance level of 5% and 5 degrees of freedom is 11.07. Thus, the null hypothesis of "no significant differences in mean ranks" is rejected and there is evidence that the six distributions of "$nfe_{\text{avg}}$" values have statistically significant differences. Multiple comparisons (two at a time) may be carried out to determine which mean ranks are significantly different. The estimates of the 95% confidence intervals are shown in the graph of Figure 2 for each case under testing. Two compared distributions of "$nfe_{\text{avg}}$" are significantly different if their intervals are disjoint and are not significantly different if their intervals overlap. Hence, from the six cases, we conclude that the mean ranks produced by mCS based on (15) are significantly different from those of mBS based on (9) and of mBS based on (14). For the remaining pairs of comparison there are no significant differences in the mean ranks.

For comparative purposes, we include in Table 4 the results obtained by using the proposed Lévy (L) distribution in the randomization term, as shown in (7), and those produced by the Uniform (U) distribution, as shown in (3). The reported tests use implementation mCS (described in Algorithm 2) with the two heuristics for binary variables: (i) the "erf" function in (14), together with (10), and (ii) the floor function in (15). It is shown that the performance of HBFA with the Uniform distribution is very sensitive to the dimension of the problem: the efficiency is good when $n$ is small but deteriorates as $n$ grows. Thus, we have shown that the Lévy distribution is a very good choice.

We add to some problems from Table 1 (Ackley, Griewank, Rastrigin, Rosenbrock, and Spherical) three other functions, Schwefel 2.22, Schwefel 2.26, and Sum of Different Power, to compare our results with those reported in [1]. Schwefel 2.22 is unimodal and its binary solution is $x^* = (0, \dots, 0)$ with $f^* = 0$; Schwefel 2.26 is multimodal and its binary solution is $x^* = (1, \dots, 1)$; Sum of Different Power is unimodal and its minimum is 0 at $x^* = (0, \dots, 0)$. For the results of Table 5, we use HBFA based on mCS, with both the "erf" function in (14), together with (10), and the floor function in (15). The table reports the average function values, the average number of function evaluations, and the success rate (SR). Here, 50 independent runs were carried out to allow the comparison with the results shown in [1], where the maximum number of function evaluations was 90000. It is shown that our HBFA outperforms the adaptive binary harmony search (ABHS) proposed in [1].

4.3. Effect of Problem’s Dimension on HBFA Performance

We now consider six problems with varied dimensions from the previous set to analyze the effect of the problem's dimension on the HBFA performance. We test three dimensions: $n = 50$, $n = 100$, and $n = 200$. The algorithm's parameters are set as previously defined. We remark that the size of the population for all the tested problems and dimensions is 40 points.

Table 6 contains the results for comparison, based on averaged best function values, $f_{\text{avg}}$, number of function evaluations, and number of iterations. The "St.D." of the best function values is also displayed. Since the implementation mCS, shown in Algorithm 2, performs better and shows more consistent results than the other two, we tested only mCS based on (14) and mCS based on (15).

Besides testing for significant differences in the mean ranks produced by the two treatments, mCS based on (14) and mCS based on (15), we also want to determine whether the differences in mean ranks produced by the problem's dimension (50, 100, and 200) are statistically significant at a significance level of 5%. Hence, we analyze the effects of two factors, "A" and "B": "A" is the HBFA implementation (with two levels) and "B" is the problem's dimension (with three levels). For this purpose, the results obtained for the six problems for each combination of the levels of "A" and "B" are considered as replications. When performing the Friedman test for factor "A," the chi-square statistic is 1.225 ($p$ value = 0.2685) with 1 degree of freedom. The critical value of the chi-square distribution for a significance level of 5% and 1 degree of freedom is 3.84, so there is no evidence of statistically significant differences. From the Friedman test for factor "B," we also conclude that there is no evidence of statistically significant differences, since the chi-square statistic is 0.746 ($p$ value = 0.6886) with 2 degrees of freedom. (The critical value of the chi-square distribution for a significance level of 5% and 2 degrees of freedom is 5.99.) Hence, we conclude that the dimension of the problem does not affect the algorithm's performance. Only with problem Quartic does the efficiency of mCS based on (14) get worse as the dimension increases. Overall, both tested strategies are rather effective when binary solutions are required for small as well as large nonlinear optimization problems.

4.4. Solving 0-1 Knapsack Problems

Finally, we aim to analyze the behavior of our best tested strategies when solving well-known binary and linear optimization problems. For this preliminary experiment, we selected two small knapsack problems. The 0-1 knapsack problem (KP) can be described as follows. Let $n$ be the number of items, from which we have to select some to be carried in a knapsack. Let $w_j$ and $v_j$ be the weight and the value of item $j$, respectively, and let $W$ be the knapsack's capacity. It is usually assumed that all weights and values are nonnegative. The objective is to maximize the total value of the knapsack under the constraint of the knapsack's capacity:
\[ \max_{x \in \{0,1\}^n} \sum_{j=1}^n v_j x_j \quad \text{subject to} \quad \sum_{j=1}^n w_j x_j \le W. \tag{17} \]
If item $j$ is selected, $x_j = 1$; otherwise, $x_j = 0$. Using a penalty function, this problem can be transformed into
\[ \min_{x \in \{0,1\}^n} \; - \sum_{j=1}^n v_j x_j + \mu \, \max \left\{ 0, \; \sum_{j=1}^n w_j x_j - W \right\}, \tag{18} \]
where $\mu$ is the penalty parameter, which was set to 100 in this experiment.
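A sketch of the penalized objective; the max-violation penalty form shown is one standard choice consistent with (18), and the 4-item instance is illustrative, not necessarily the Case 1 data below:

```python
import numpy as np

def knapsack_penalized(x, v, w, W, mu=100.0):
    # eq. (18): negative knapsack value plus mu times the capacity violation
    value = np.dot(v, x)
    violation = max(0.0, np.dot(w, x) - W)
    return -value + mu * violation

# illustrative 4-item instance (not necessarily the paper's Case 1 data)
v = np.array([40.0, 15.0, 20.0, 10.0])
w = np.array([4.0, 2.0, 3.0, 3.0])
print(knapsack_penalized(np.array([1, 1, 0, 0]), v, w, W=6.0))   # -> -55.0
```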

Case 1 (an instance of a 0-1 KP with 4 items). The knapsack's capacity is $W$ and the vectors of values and weights are $v$ and $w$, respectively. Based on the above-mentioned parameters, the HBFA with mCS based on (14) was run 30 times and the averaged results were the following. With a success rate of 100%, items 1 and 2 are included in the knapsack and items 3 and 4 are excluded, with a maximum value of 55 (St.D. = 0.0e+00). On average, the runs required 0.8 iterations and 29.3 function evaluations. With a success rate of 23%, the heuristic based on the floor function, mCS based on (15), reached an averaged best value below the optimum (St.D. = 4.0e+00) after an average of 6161.1 function evaluations and an average of 384.1 iterations.

Case 2 (an instance of a 0-1 KP with 8 items). The maximum capacity of the knapsack is set to 8 and the vectors of values and weights are $v$ and $w$, respectively. The results are averaged over the 30 runs. After 8.7 iterations and 386.7 function evaluations, the maximum value produced by the strategy mCS based on (14) is 286 (St.D. = 0.0e+00), with a success rate of 100%. Items 1, 4, 5, and 6 are included in the knapsack and the others are excluded. The heuristic mCS based on (15) did not reach the optimal solution: all runs required 500 iterations and 20040 function evaluations, and the averaged best function value fell short of the optimum, with St.D. = 3.14e+01.

5. Conclusions and Future Work

In this work, we have implemented several heuristics to compute a global optimal binary solution of bound constrained nonlinear optimization problems and incorporated them into FA, yielding the herein called HBFA. The problems addressed in this study have a bounded continuous search space. Our FA proposal uses dynamic updating schemes for two parameters, $\gamma$ from the attractiveness term and $\alpha$ from the randomization term, and considers the Lévy distribution to create randomness in the firefly movement. The performance behavior of the proposed heuristics has been investigated. Three simple heuristics capable of transforming real continuous variables into binary ones are implemented. A new sigmoid "erf" function is proposed. In the context of the firefly algorithm, three different implementations that incorporate the heuristics for binary variables into FA are proposed (mCS, mBS, and pBC). Based on a set of benchmark problems, a comparison is carried out with other metaheuristics that handle binary variables, namely, AMPSO, binPSO, binDE, and AMDE. The experimental results show that the implementation denoted by mCS, when combined with either the new sigmoid "erf" function or the rounding scheme based on the floor function, is quite efficient and superior to the other methods in comparison. The statistical analysis carried out on the results shows evidence of statistically significant differences in efficiency, measured by the number of function evaluations, between the implementation mCS based on the floor function approach and the implementation mBS based on either of the tested sigmoid functions. We have also investigated the effect of the problem's dimension on the performance of our algorithm. Using the Friedman statistical test, we conclude that the differences in efficiency are not statistically significant. Another simple experiment has shown that the implementation mCS with the sigmoid "erf" function is effective in solving two small 0-1 KP.

The performance of this simple heuristic strategy will be further analyzed on large and multidimensional 0-1 KP. Future developments concerning the HBFA will consider its extension to integer variables in nonlinear optimization problems. Different heuristics to transform continuous real variables into integer variables will be investigated, and challenging mixed-integer nonconvex nonlinear problems will be solved.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank two anonymous referees for their valuable suggestions to improve the paper. This work has been supported by FCT (Fundação para a Ciência e Tecnologia, Portugal) in the scope of the Projects PEst-OE/MAT/UI0013/2014 and PEst-OE/EEI/UI0319/2014.