Abstract

Constrained optimization plays an important role in many decision-making problems and various real-world applications. Over the last two decades, a variety of evolutionary algorithms (EAs) has been developed, and new ones are still being developed, under the umbrella of evolutionary computation. EAs are broadly categorized into nature-inspired and swarm-intelligence- (SI-) based paradigms, and each of the developed algorithms has its own merits and demerits. Particle swarm optimization (PSO), the firefly algorithm (FA), ant colony optimization (ACO), and the bat algorithm (BA) have gained much popularity and have successfully tackled various test suites of benchmark functions and real-world problems. These SI-based algorithms follow social and interactive principles in their search process while approximating solutions to the given problems. In this paper, a multiswarm-intelligence-based algorithm (MSIA) is developed to cope with bound constrained functions. The suggested algorithm integrates SI-based algorithms to evolve the population and to balance exploration against exploitation. Twenty-eight bound constrained benchmark functions, recently designed for the special session on EA competition of the IEEE Congress on Evolutionary Computation (IEEE-CEC′13), are used to evaluate the performance of the proposed algorithm. The suggested algorithm approximates promising solutions with good convergence and diversity maintenance for most of the used bound constrained single-objective optimization problems.

1. Introduction

Constrained optimization has indispensable applications, and it is currently hard to name a single industry that does not use some optimization process. In general, bound constrained optimization is the mathematical process of finding the minimum or maximum value of an objective function subject to various constraints imposed on the search space of the problem to be solved. The roots of the optimization process trace back to the early work of Euler and Lagrange in the calculus of variations. A general constrained optimization problem can be described mathematically as follows:

minimize f(x) = (f₁(x), …, f_M(x)),
subject to g_j(x) ≤ 0, j = 1, …, p,
h_k(x) = 0, k = 1, …, q,
x_i^L ≤ x_i ≤ x_i^U, i = 1, …, D,     (1)

where x = (x₁, …, x_D) is the candidate solution with D decision variables/real parameters, f is the function with M objectives, g_j are the inequality constraint functions, h_k are the equality constraint functions, and x_i^L ≤ x_i ≤ x_i^U are the bound constraints over the search space S ⊂ ℝ^D.

Problem (1) is said to be a linear programming (LP) or linear optimization problem if all objective functions and all the corresponding constraint functions are linear. Likewise, if all objective functions are quadratic and all the corresponding constraint functions are linear, then problem (1) is a quadratic programming problem. Furthermore, if M = 1, problem (1) becomes a single-objective optimization problem, and if M ≥ 2, problem (1) is said to be a multiobjective optimization problem (MOP). If all objective functions and constraint functions are real-valued, then problem (1) is a continuous MOP.

A linear programming (LP) or linear optimization problem is comparatively easy to solve compared with a nonlinear programming (NLP) problem. The presence of equality constraints makes nonlinear programming problems considerably more difficult and challenging, since one must guess which constraints are active and which are not [1].

In the last two decades, various optimization methods were developed, as outlined in Figure 1. Among them, linear programming was first introduced by the Soviet mathematician and economist Leonid Kantorovich around World War II, while designing a systematic plan of expenditures for the Soviet army aimed at increasing the losses of the opposition. In 1947, the American mathematician George Dantzig generalized the model and developed the simplex method, which was used, among other applications, to schedule training of the US Air Force [2]. LP models, however, assume that the objective and constraints are linear, which is quite unrealistic because most problems in science and engineering are naturally modeled as nonlinear problems [3].

Nonlinear optimization approaches are further subdivided into local and global optimization methods [4]. Local search methods provide a local optimum at low computational cost compared with global search methods. Newton's method and sequential quadratic programming are commonly used local optimization methods. These gradient-based methods can be employed as local search optimizers; however, they are not very effective on complex optimization problems with high multimodality, discontinuities, and noisy landscapes. In general, the objective functions of real-world problems are often nondifferentiable, discontinuous, nonlinear, noisy, flat, or high-dimensional, or comprise many local minima. Traditional optimization techniques are mostly unable to find a quality solution for problems with nonlinear, multimodal, and high-dimensional structures.

Over the last two decades, various global optimization methods were developed and successfully tackled various complex optimization and search problems [5, 6]. Evolutionary algorithms (EAs) are advanced and promising alternatives to gradient-based optimization methods. EAs are population-based techniques and, unlike traditional mathematical programming techniques, provide good approximate solutions in a single simulation run. EAs are population-based metaheuristics [7–9] and are generally divided into nine different groups: biology-based [10, 11], physics-based [12, 13], social-based [14–17], music-based [18], chemistry-based [19], sport-based [20], mathematics-based [21], swarm-based [22–28], and hybrid methods [29–36]. The genetic algorithm (GA) [37], evolution strategies (ES) [38], evolutionary programming (EP) [39, 40], and genetic programming (GP) [41] are the classical paradigms of evolutionary computing [8]. Differential evolution (DE) [42, 43], particle swarm optimization (PSO) [23], ant colony optimization (ACO) [44, 45], cuckoo search (CS) [46], and the teaching-learning-based optimization (TLBO) algorithm [47] are among the most popular recently developed EAs [17].

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, whether natural or artificial, and the concept is widely employed in work on artificial intelligence. The expression was introduced by Beni and Wang in 1989 [48] in the context of cellular robotic systems. Swarm-intelligence- (SI-) based algorithms, mainly inspired by the behavior of real swarms or insect colonies, have shown improved performance on large-scale optimization and decision-making problems [49]. Such agents work collectively by following a few specific rules; decentralization and self-organization are the main sources of interaction among the swarm agents [26]. SI has received great interest and attention in the literature. In the communities of optimization, computational intelligence, and computer science, bioinspired algorithms, and especially swarm-intelligence-based algorithms, have gained much popularity [50]. Nature-inspired metaheuristic algorithms are now among the most widely used algorithms for optimization and computational intelligence [28]. For example, the need for cohesion requires individual members to stay in close proximity to one another while avoiding conflicts or collisions with other members of the group; whenever the group moves, all members should move together.
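The cohesion and collision-avoidance rules mentioned above can be illustrated with a short sketch (a hypothetical, simplified flocking update; the function name `flock_step` and the weights `w_coh`, `w_sep` and radius `r_sep` are illustrative choices, not part of any algorithm in this paper):

```python
import numpy as np

def flock_step(positions, velocities, r_sep=1.0, w_coh=0.05, w_sep=0.2):
    """One update of two classic swarm rules: steer toward the group
    centre (cohesion) and away from neighbours closer than r_sep
    (separation / collision avoidance)."""
    centre = positions.mean(axis=0)
    vel = velocities + w_coh * (centre - positions)      # cohesion pull
    for i in range(len(positions)):
        diff = positions[i] - positions                  # vectors away from others
        dist = np.linalg.norm(diff, axis=1)
        near = (dist > 0) & (dist < r_sep)               # too-close neighbours
        if near.any():
            vel[i] += w_sep * diff[near].sum(axis=0)     # separation push
    return positions + vel, vel
```

Applying this step repeatedly makes distant members drift toward the group centre while members inside `r_sep` of each other are pushed apart, so the group moves together without collisions.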

The use of ensemble strategies within the frameworks of existing evolutionary algorithms has attracted the attention of researchers in academia and industry because of their capabilities. Evolutionary computation researchers have shown great interest in employing multiple search operators, such as one-point, two-point, uniform, and simplex crossover operators; tournament, ranking, stochastic, and uniform sampling selection operators; clearing, crowding, and sharing-based niching techniques; and adaptive penalty, epsilon, and superiority-of-feasible constraint handling approaches, when solving complicated test suites of benchmark functions and important real-world problems with high complexity, noisy environments, imprecision, uncertainty, and vagueness in their structures [9]. It is often useful to use different strategies at different stages of population evolution. Recently, many ensemble strategies were proposed that benefit both from the availability of diverse approaches and from adaptive tuning of the associated intrinsic parameters [51]. Many studies have shown the general applicability of the ensemble strategy to diverse problems using different population-based optimization algorithms [29, 52, 53]. The performance of the suggested MSIA was evaluated on the benchmark functions recently designed for the special session on evolutionary algorithm competition of the 2013 IEEE Congress on Evolutionary Computation (IEEE-CEC′13) [54]. The suggested algorithm found effective and promising near-optimal solutions for most of the used test problems. The numerical results provided by our proposed algorithm are promising in terms of proximity and convergence toward the optimal solutions of problems with large-scale search space regions.
The main contributions of this paper are as follows:
(i) A multiswarm intelligence algorithm (MSIA) is proposed to improve the exploration and exploitation capabilities of the baseline SI algorithms.
(ii) Ant colony optimization (ACO) and the firefly algorithm (FA) are employed as constituent algorithms in the framework of the proposed algorithm.
(iii) The performance of MSIA is evaluated on the benchmark functions recently designed for the special session of the IEEE Congress on Evolutionary Computation (IEEE-CEC-2013) [54].
(iv) Extensive experimental results indicate that the proposed MSIA performs promisingly in the parlance of the evolutionary computing community.

The rest of the paper is organized as follows: Section 2 introduces the framework of the proposed multiswarm-intelligence-based algorithm; Section 3 presents the characteristics of the used benchmark functions and the experimental results; Section 4 concludes the paper.

2. Multiswarm-Intelligence-Based Algorithm

In the last two decades, nature-inspired algorithms (NIA), including artificial neural networks (ANN), fuzzy systems (FS), evolutionary computing (EC), and swarm intelligence (SI), have received much attention in both academia and industrial applications [56]. Among them, SI-based methods are mainly inspired by the behavior of real swarms or insect colonies and have shown great success on large-scale optimization and decision-making problems [49]. These algorithms collectively follow a few basic and specific rules; decentralization and self-organization are the main sources of interaction among the swarm agents [57]. Ant colony optimization (ACO), the bee algorithms, particle swarm optimization (PSO), cuckoo search, the bat algorithm, and the firefly algorithm offer great advantages over conventional algorithms [28]. We propose a multiswarm-intelligence-based algorithm that integrates ant colony optimization (ACO) and the firefly algorithm (FA) to solve bound constrained optimization problems. The flowchart of the proposed algorithm is given in Figure 2. The proposed multiswarm intelligence algorithm, as outlined in Algorithm 1, employs different ants roaming in the search space of the problem at hand. During this roaming, the population of ants is evolved by integrating ACO and FA, where FA works as a local search to refine the positions found by the ants. Secondly, the performance of FA is improved by reducing its randomization parameter so that it decreases gradually as the optimum is approached. Finally, the performance of the suggested methodology is examined using the benchmark functions designed for the special session on evolutionary computation of CEC2013 [54]. The numerical results approximated by the proposed algorithm demonstrate its ability to find near-global-optimal solutions for most of the used benchmark functions.
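The firefly refinement step with a gradually reduced randomization parameter, as described above, can be sketched as follows (a minimal illustration of the standard FA movement rule; the coefficients `beta0`, `gamma`, `alpha0`, and the geometric decay factor `theta` are illustrative placeholders, not the tuned values used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_move(xi, xj, t, beta0=1.0, gamma=1.0, alpha0=0.5, theta=0.97):
    """Move firefly xi toward a brighter (better) firefly xj.
    The randomization weight alpha decays geometrically with the
    generation counter t, so exploration shrinks near the optimum."""
    r2 = np.sum((xi - xj) ** 2)            # squared distance between fireflies
    beta = beta0 * np.exp(-gamma * r2)     # attractiveness, fading with distance
    alpha_t = alpha0 * theta ** t          # gradually reduced randomization
    step = alpha_t * (rng.random(xi.shape) - 0.5)
    return xi + beta * (xj - xi) + step
```

At early generations the random term keeps the search exploratory; as t grows, `alpha_t` vanishes and the move becomes an almost deterministic pull toward the brighter firefly.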

Initialization Process of the Multiswarm Intelligence Algorithm
(1) NP: number of swarms/size of the population set;
(2) D: number of decision variables (dimension of the search space);
(3) x^L: lower bound of the search space;
(4) x^U: upper bound of the search space;
(5) T: maximum number of generations;
(6) t: generation counter;
(7) Generate the initial population P = {x_1, …, x_NP} uniformly at random in [x^L, x^U]^D;
(8) Evaluate the initial population, f(x_i), i = 1, …, NP;
(9) Select the best solution x_best among the NP solutions based on the minimum objective function value.
Evolutionary Process of the Multiswarm-Intelligence-Based Algorithm
(10) t = 1;
(11) while t ≤ T do
(12)  for i = 1 to NP do
(13)   if [the ACO search branch is selected] then
(14)    construct an offspring u_i by ACO pheromone-guided sampling around the promising solutions
(15)   else
(16)    move x_i toward the brighter (better) fireflies using the FA attraction rule, with a randomization term whose weight decreases gradually with t, to obtain the offspring u_i
(17)   end if
(18)   Evaluate the offspring, f(u_i);
(19)   if f(u_i) ≤ f(x_i) then
(20)    x_i ← u_i
(21)   else
(22)    retain x_i
(23)   end if
(24)   Update x_best for the next generation of population evolution.
(25)  end for
(26)  t = t + 1;
(27) end while
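Since the update formulas in Algorithm 1 are given only schematically, the overall loop can be illustrated by the following simplified sketch (a hypothetical continuous interpretation: Gaussian, ACO_R-style sampling around the best-so-far solution stands in for the pheromone model, and a firefly-style pull with decaying randomization stands in for the FA refinement; the function name `msia_sketch` and all parameter values are illustrative, not the paper's settings):

```python
import numpy as np

def msia_sketch(f, lb, ub, dim=10, n_pop=20, t_max=200, seed=1):
    """Simplified sketch of the MSIA loop: ACO-style sampling around the
    best-so-far solution, an FA-style pull with decaying randomization,
    and greedy selection between parent and offspring."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_pop, dim))          # initial population
    fit = np.apply_along_axis(f, 1, pop)             # initial evaluation
    for t in range(1, t_max + 1):
        best = pop[fit.argmin()]                     # best-so-far solution
        sigma = (ub - lb) * 0.1 * 0.99 ** t          # shrinking pheromone spread
        alpha = 0.5 * 0.97 ** t                      # decaying FA randomization
        for i in range(n_pop):
            cand = best + rng.normal(0.0, sigma, dim)              # ACO-style sampling
            cand += 0.5 * (best - cand) + alpha * (rng.random(dim) - 0.5)  # FA-style pull
            cand = np.clip(cand, lb, ub)             # respect the bound constraints
            fc = f(cand)
            if fc <= fit[i]:                         # greedy replacement
                pop[i], fit[i] = cand, fc
    return pop[fit.argmin()], fit.min()
```

On a smooth test function this sketch converges toward the global optimum because both the sampling spread and the randomization weight decay over the generations, mirroring the exploration-to-exploitation transition described in Section 2.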

3. Benchmark Functions and Experimental Results

Due to the flurry of recently developed EAs, their performance is mainly evaluated using different test suites of optimization and search problems. In the last few years, several bound constrained test suites of benchmark functions have been designed for the EA competitions held in special sessions of the IEEE Congress on Evolutionary Computation [54]. In this study, we have used twenty-eight different bound constrained continuous test functions, whose characteristics are given in Table 1. The first five test problems, f1 to f5, are unimodal functions. The problems f6 to f20 are multimodal functions that contain more than one local minimum; these functions offer great challenges and can trap an evolutionary algorithm in a local basin of attraction, so they are much more difficult to deal with than unimodal functions. The functions f21 to f28 are composition functions.
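As an illustration of these categories, two standard test functions can be contrasted: the sphere function is unimodal, while the Rastrigin function is multimodal (these are the classic textbook examples of the two classes, not necessarily the exact shifted and rotated CEC′13 definitions):

```python
import numpy as np

def sphere(x):
    """Unimodal: a single basin of attraction with minimum 0 at the origin."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    """Multimodal: a regular lattice of local minima surrounding the
    global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

A gradient-based local method started near an integer lattice point of the Rastrigin function stalls in that local basin, whereas the same method solves the sphere function from any starting point, which is why multimodal functions are the harder test of an EA's exploration ability.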

3.1. PC Configuration to Perform Experiments

The proposed algorithm was coded in the MATLAB environment, and a PC with a 3.0 GHz Intel Core 2 Duo processor, 4.00 GB of RAM, and the 64-bit Windows 7 Professional operating system was used to carry out the experiments. We executed the proposed multiswarm intelligence algorithm (MSIA) and the baseline algorithms, namely, ant colony optimization (ACO) and the firefly algorithm (FA), 25 times independently with different random seeds by using rand('state', sum(100*clock)) in the MATLAB environment. To validate the performance of a stochastic algorithm, the existing literature of evolutionary computation recommends executing it at least 10 to 25 times to establish a fair judgment and comparison against competitors.
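The practice of independent, differently seeded runs can be sketched as follows (a hypothetical Python analogue of re-seeding MATLAB's generator before each run; the function name `independent_runs` and its parameters are illustrative):

```python
import numpy as np

def independent_runs(algorithm, n_runs=25, base_seed=2013):
    """Run a stochastic optimizer n_runs times with distinct but
    reproducible seeds (the analogue of calling
    rand('state', sum(100*clock)) before every run in MATLAB)
    and summarize the best objective values found."""
    results = [algorithm(np.random.default_rng(base_seed + k))
               for k in range(n_runs)]
    results = np.array(results, dtype=float)
    return {"min": results.min(), "mean": results.mean(),
            "max": results.max(), "std": results.std(ddof=1)}
```

Deriving each run's generator from `base_seed + k` keeps the runs statistically independent while making the whole experiment exactly repeatable, which is what the min/mean/max/std columns of Tables 2–7 require.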

3.2. Parameter Settings Used in the Experiments

The lower bound x^L = −100 and upper bound x^U = 100 are used for the benchmark functions (the standard CEC′13 search range); NP is the size of the initially generated set of solutions; D = 10, 30, and 50 are the different dimensions of the search space; the maximum number of function evaluations is fixed accordingly for each dimension; γ is the light absorption coefficient, β0 is the attraction coefficient, and α is the mutation (randomization) coefficient, defined over the uniform mutation range.

Tables 2 and 3 show that the proposed MSIA performed better than ACO and FA, providing the minimum objective function values for the benchmark functions denoted by , , , , , , , , and . However, ACO discovered better numerical results than MSIA and FA in view of the minimum function value obtained for problem . FA discovered better solutions than the proposed MSIA and ACO in terms of the minimum values for the problems , , , , , , , , , , , , , , , and . Similarly, ACO, FA, and MSIA obtained almost the same numerical solutions, in terms of minimum value, for the problems denoted by . The average values found by MSIA are better than those of ACO and FA, as shown in the second columns of Tables 2 and 3, while the standard deviation values of ACO are better. Similarly, comparing the maximum values, MSIA performed better on the test problems denoted by , , , , , , , , and . These experimental results demonstrate that the proposed algorithm is very competitive on most of the benchmark functions [54].

Tables 4 and 5 clearly exhibit that the proposed MSIA performed better than ACO and FA in approximating the minimum objective function values of the benchmark functions referred to as , , , , , , , , , , , , , and . ACO contributed better results for the problems , , , , , , , , and . FA provided a better minimum objective function value for the benchmark . For problems and , PSO and MSIA obtained the same result in terms of minimum values. The average values of MSIA are better than those of ACO and FA, as shown in the second columns of the above tables, while in the case of standard deviation, ACO gives better results. Similarly, comparing the maximum values, MSIA shows better performance on problems , , , , , , , , , and ; ACO contributes better maximum values for the problems , , , , , , , , , and ; and FA performed better than ACO and MSIA in terms of maximum values for the problems , , , , , , , and .

Tables 6 and 7 clearly demonstrate that the proposed MSIA performed better than ACO and FA in terms of the minimum values on the benchmark functions , , , , , , , , , , , , , , , and . ACO is better than FA and MSIA in terms of minimum values for the problems , , , F7, , , , , , , and . The average values of MSIA are better than those of ACO and FA for the test problems denoted by , , , , , , , , , , , , , , and . In the case of standard deviation, ACO gives better performance. Similarly, the maximum values obtained by ACO are better for the problems , , , , , , , , , , , and . FA showed good performance on the test problems , , , , , , , , and .

Figures 3, 4, and 5 show the convergence graphs obtained by the proposed algorithms on the benchmark functions in the different dimensions D = 10, 30, and 50. In each figure, the first panel presents the convergence graphs obtained with 10 decision variables, the second panel those with 30 decision variables, and the third panel those with 50 decision variables. In each panel, the upper curves depict the evolution of the maximum objective function values, the middle curves the evolution of the average objective function values, and the lower curves the evolution of the minimum objective function values of the corresponding benchmark functions. Overall, these convergence graphs demonstrate that MSIA tackled most of the test problems promisingly compared with its competitors over 25 independent runs within the same allowed resources and parameter settings.

4. Conclusions

In the last two decades, evolutionary algorithms based on swarm intelligence have been developed to solve various test suites of benchmark functions and real-world problems. Swarm-intelligence-based approaches perform their search process through mutual interactions. They are mainly inspired by the collective and cooperative movement of ant colonies, bird flocking, hawk hunting, animal herding, bacterial growth, fish schooling, and microbial intelligence. In the firefly algorithm, fireflies follow the brightness of individual fireflies during the exploration process to effectively search the decision space of the given problem. ACO is another of the most important SI-based algorithms, inspired by the foraging behavior of real ant colonies. In this paper, we have proposed a multiswarm-intelligence-based algorithm (MSIA) employing ant colony optimization (ACO) and the firefly algorithm (FA) as constituent algorithms for population evolution. The proposed algorithm was tested using the benchmark functions recently designed for the special session of the 2013 IEEE Congress on Evolutionary Computation. The experimental results provided by MSIA are promising in terms of proximity and diversity.

In the future, we intend to investigate the performance of the suggested algorithm by using the latest IEEE-CEC series benchmark functions and real-world problems.

Data Availability

The data used in this article are freely available upon citing the article. The MATLAB code of the benchmark functions and related data are available at https://www3.ntu.edu.sg/home/epnsugan/.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors thankfully acknowledge the support of the Qatar National Library, Qatar, and Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Education City, Qatar Foundation, Doha, Qatar.