Large-scale global optimization problems are ambitious and quite difficult to handle with deterministic methods, so stochastic optimization techniques are a natural choice for dealing with them. Nature-inspired algorithms (NIAs) are stochastic, computer-based, and quite easy to implement due to their population-based nature. The Grey Wolf Optimizer (GWO) and Teaching-Learning-Based Optimization (TLBO) are two recently developed and well-known NIAs: GWO is based on the preying strategies of grey wolves, while TLBO is based on the influence of a teacher on the output of learners in a class. NIAs quite often get stuck in local basins of attraction due to an improper balance of exploration versus exploitation. In this paper, an advanced amalgam of nature-inspired algorithms (ANIA) is developed by employing GWO and TLBO as constituent algorithms. Initially, an equal number of solutions is assigned to both NIAs to perform their search process of population evolution; in later iterations, the number of solutions allocated to each constituent algorithm is based on its individual performance and the achievements gained in the previous iteration. The performance of an algorithm is determined at the end of each iteration by calculating the ratio of updated solutions to the solutions assigned to it in the amalgam. The proposed strategy effectively balances the exploration-versus-exploitation dilemma by compelling the parent algorithms to show continuous improvement during the whole course of the optimization process. The performance of the proposed algorithm, ANIA, is evaluated on recently designed benchmark functions for large-scale global optimization problems. The approximate results found by the proposed algorithm are promising compared to state-of-the-art evolutionary algorithms, including GWO and TLBO, in terms of diversity and proximity. The proposed ANIA has tackled most of the benchmark functions efficiently in the parlance of the evolutionary computing communities.

1. Introduction

Optimization is the utilization of available resources in an effective manner that drives the objective function at hand to its desired optimum value with optimized values of the decision variables. Optimization has a wide range of applications in the natural sciences, physical sciences, social sciences, and various engineering technologies. Since the 1940s, optimization has been considered an essential part of the applied sciences. In the early 1950s, simulation of optimization algorithms on digital computing machines was not widely known or common. Later on, it became possible to solve complex optimization problems which were considered difficult to solve manually [1]. Examples of complex and hard problems include vehicle routing and scheduling, knapsack problems, investment problems, optimal control problems, and power grid estimation problems [2–4].

Optimization problems are classified on various grounds. For instance, the path followed splits optimization problems into deterministic and stochastic: the former deals with problems where a unique, fixed path is tracked in various attempts for the same inputs, while in the latter, following a unique, fixed path is not considered essential for the same inputs in various trials. Likewise, the absence or presence of constraints splits an optimization problem into unconstrained or constrained. An unconstrained optimization problem (UCOP) with a single objective, in the sense of minimization, is described as below [5]:

$$\min_{\mathbf{x}} f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \dots, x_n), \quad x_j^{l} \le x_j \le x_j^{u}, \; j = 1, 2, \dots, n. \tag{1}$$

In the above-defined problem, $\mathbf{x}$ is the $n$-dimensional decision-variable vector, $f(\mathbf{x})$ is the objective function to be minimized, and $x_j^{l}$ and $x_j^{u}$ are the lower and upper limits of the $j$-th dimension of the search space, respectively.

In the last two decades or so, many optimization algorithms have been developed for solving various optimization problems [6–12]. Among them, Nature-Inspired Algorithms (NIAs) are stochastic optimization techniques that simulate various sophisticated systems of biological or physical organisms existing in nature [13]. They have successfully tackled distinct benchmark functions and many real-world optimization problems [14–18]. Within the framework of NIAs, the integration of various operators categorizes them into Evolutionary Algorithms (EAs) and Swarm Intelligence (SI) approaches for global optimization [19–22]. EAs originate from Darwin's principle of "survival of the fittest," while the main source of learning for SI approaches is the information shared among their swarm members [23]. In the recent past, stochastic optimization algorithms have become an emerging area of research in the evolutionary communities [24]. Recently developed algorithms include the improved Consensus-Based Optimization (CBO) [25], the Algorithm of the Innovative Gunner (AIG) [26], and Bayesian Optimization with Deep Partitioning Tree (DPT-BO) [27].

In evolutionary communities, SI-based algorithms are of special interest to researchers due to their simplicity, extreme flexibility, and adaptivity for optimization problems. SI algorithms perform promisingly and robustly on global optimization problems. Some key algorithms are Teaching-Learning-Based Optimization (TLBO) [28, 29], the Grey Wolf Optimizer (GWO) [30, 31], Particle Swarm Optimization (PSO) [32, 33], and the Naked Mole-Rat (NMR) algorithm [34]. GWO is an SI-based algorithm that emulates the preying strategies grey wolves use for hunting. GWO was proposed for solving unconstrained optimization problems in canonical form. The main characteristics used in its mimicry process are searching for, encircling, and attacking the prey. In GWO, solutions are symbolized as grey wolves. They live in a pack, which constitutes the population for GWO. The leaders of the pack are nominated based on the fitness values of the wolves (solutions). The top three solutions are named alpha (α), beta (β), and delta (δ), respectively, while the rest are omegas (ω). TLBO is also an SI-based algorithm; it imitates the teaching-learning process through which students gain knowledge and improve their grades/results accordingly. Initially, TLBO was designed for solving unconstrained optimization problems. It consists of two phases, teacher and learner. In TLBO, students are treated as solutions, the selected courses as dimensions, and the class as the population. The role of the teacher, in the teacher phase, is played by the best student, i.e., the best solution in the previous iteration. In the mentioned NIAs, solutions are initially generated randomly in some defined search space. Later on, the corresponding formulae of the discussed NIAs are used for finding new positions, and the selection operator yields the updated positions. The discussed algorithms are iterative in nature and are repeated until the provided budget is consumed.

In recent years, GWO and TLBO have gained much popularity in solving optimization problems of low dimensionality. During the optimization process, GWO solutions sometimes get stuck in local minima due to low exploration and high exploitation, which ultimately leads the algorithm to converge prematurely; this is the major weakness in the framework of GWO. TLBO is highly explorative due to the presence of two phases, which consumes more function evaluations and slows down its convergence rate. As a remedy for the discussed conundrums, this paper presents a performance-based amalgam of GWO and TLBO, abbreviated ANIA. Initially, GWO and TLBO are allocated an equal number of solutions. The proposed ANIA then adjusts the number of solutions allocated to each constituent algorithm based on its individual performance over the whole course of the optimization.

The rest of the paper is organized as follows. Section 2 reviews the literature on GWO and TLBO along with the motivation for the current research work. Section 3 presents the proposed algorithm and its novelty. Section 4 demonstrates the numerical results, the parameter settings for conducting the simulations, and the comparison and ranking of the proposed algorithm with its parent algorithms, along with computational time complexity. Section 5 discusses the performance of the compared algorithms. A short summary of conclusions and future work is included in Section 6.

2. Reviewed Literature

This section presents the literature on the main algorithms, GWO and TLBO. The weaknesses in GWO and TLBO, along with the strategies used for diminishing them, motivated the current work.

2.1. Grey Wolf Optimizer (GWO)

GWO was introduced by Mirjalili et al. [30] in 2014 for solving unconstrained optimization problems. It is an SI-based algorithm that has successfully produced competitive results on different optimization problems. GWO mimics the hunting strategies and leadership characteristics of grey wolves. They live and work as one pack and follow a strict dominance hierarchy. Different wolves are categorized as α, β, δ, and ω. The α is the most dominant and fittest wolf, called the leader of the pack. The β is placed second to the α, while the δ follows the orders of the leaders; the β and δ are direct subordinates to the α. The remaining wolves are categorized as ω wolves. In GWO, solutions (wolves) are generated randomly in the given search region, and then their positions are updated through three main phases: searching for prey, encircling prey, and attacking prey. Grey wolves search for and encircle the prey using Eqs. (2)–(5):

$$\mathbf{D} = \lvert \mathbf{C} \cdot \mathbf{X}_p(t) - \mathbf{X}(t) \rvert, \tag{2}$$
$$\mathbf{X}(t+1) = \mathbf{X}_p(t) - \mathbf{A} \cdot \mathbf{D}, \tag{3}$$
$$\mathbf{A} = 2a \cdot \mathbf{r}_1 - a, \tag{4}$$
$$\mathbf{C} = 2\mathbf{r}_2, \tag{5}$$

where $\mathbf{X}_p$ represents the position of the prey, the wolf position is denoted by $\mathbf{X}$, and $\mathbf{A}$ and $\mathbf{C}$ are coefficient vectors. $\mathbf{r}_1$ and $\mathbf{r}_2$ are stochastically generated vectors in the interval (0, 1), and $a$ is a real number decreasing linearly from 2 to 0. The next phase is to hunt the prey. Primarily, the α wolf guides the pack in hunting; however, if desired, the β and δ wolves also take part in the hunt. Thus, in the mathematical modeling of hunting, the α, β, and δ wolves are declared leaders based on their fitness values and are followed while searching for the prey; their highest fitness values ensure a good prediction of the location of the prey. Hence, the top three fittest agents are recorded as the α, β, and δ wolves, while the rest are ω. The ω wolves update their positions according to the locations of their leaders. The equations used for finding new positions are given below:

$$\mathbf{D}_\alpha = \lvert \mathbf{C}_1 \cdot \mathbf{X}_\alpha - \mathbf{X} \rvert, \quad \mathbf{D}_\beta = \lvert \mathbf{C}_2 \cdot \mathbf{X}_\beta - \mathbf{X} \rvert, \quad \mathbf{D}_\delta = \lvert \mathbf{C}_3 \cdot \mathbf{X}_\delta - \mathbf{X} \rvert, \tag{6}$$
$$\mathbf{X}_1 = \mathbf{X}_\alpha - \mathbf{A}_1 \cdot \mathbf{D}_\alpha, \quad \mathbf{X}_2 = \mathbf{X}_\beta - \mathbf{A}_2 \cdot \mathbf{D}_\beta, \quad \mathbf{X}_3 = \mathbf{X}_\delta - \mathbf{A}_3 \cdot \mathbf{D}_\delta, \tag{7}$$
$$\mathbf{X}(t+1) = \frac{\mathbf{X}_1 + \mathbf{X}_2 + \mathbf{X}_3}{3}. \tag{8}$$

When attacking, the grey wolves harass the prey until it is too tired to move. This behavior of the wolves is mathematically formulated by decreasing the value of the scaling parameter $a$ linearly from 2 to 0 across the iterations. During this stage, the value of $\lvert \mathbf{A} \rvert$ is taken to be less than 1 in order to attack the prey, while searching for the prey is promoted by taking $\lvert \mathbf{A} \rvert > 1$.
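The encircling and hunting rules above can be sketched in Python as follows. This is a minimal, hedged illustration of the canonical GWO position update; the variable names (`alpha`, `beta`, `delta`, the parameter `a`) follow the description above and are not taken from any specific reference implementation:

```python
import numpy as np

def gwo_update(wolves, alpha, beta, delta, a, rng):
    """One GWO position update: every wolf moves toward the average of
    the positions suggested by the alpha, beta, and delta leaders."""
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        suggested = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2 * a * r1 - a           # Eq. (4): |A| < 1 -> attack, |A| > 1 -> search
            C = 2 * r2                   # Eq. (5)
            D = np.abs(C * leader - x)   # Eqs. (2)/(6): distance to the leader
            suggested.append(leader - A * D)   # Eqs. (3)/(7)
        new_wolves[i] = np.mean(suggested, axis=0)  # Eq. (8)
    return new_wolves

# usage: 20 wolves in a 5-dimensional space; in a full run, a would
# decrease linearly from 2 to 0 across iterations
rng = np.random.default_rng(0)
wolves = rng.uniform(-100, 100, size=(20, 5))
fitness = (wolves ** 2).sum(axis=1)      # e.g., the sphere function
order = np.argsort(fitness)
alpha, beta, delta = wolves[order[:3]]
new_positions = gwo_update(wolves, alpha, beta, delta, a=1.0, rng=rng)
```

In a full optimizer, this update would be followed by greedy selection against the old positions, and the leaders would be re-nominated each iteration.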

2.2. Teaching Learning based Optimization (TLBO)

TLBO is an SI-based nature-inspired algorithm introduced by R. Venkata Rao et al. [28]. It is among the most prominent algorithms in the field of optimization, due to its large range of usage in many areas, such as engineering and industry. TLBO simulates the basic concept of the teaching-learning process, which consists of two phases, i.e., teacher and learner. In TLBO, learners improve their results/grades by passing through the two phases, teacher and learner, in each iteration. In the initial stage of TLBO, swarm members are stochastically generated in the desired region; during the search process, swarm members obtain new positions in succeeding iterations.

The mathematical formulation adopted in the teacher phase is given as [29]:

$$\mathbf{X}_{new} = \mathbf{X} + r \cdot (\mathbf{X}_{teacher} - T_F \cdot \mathbf{M}). \tag{9}$$

In the above equation, $\mathbf{X}$ is the current solution, $r$ is a stochastically generated real number in (0, 1), $\mathbf{X}_{teacher}$ is the fittest candidate solution, symbolized as the teacher, $T_F$ is the teaching factor, which is randomly taken as either 1 or 2, $\mathbf{M}$ is the mean position of the current swarm, and $\mathbf{X}_{new}$ is the new position of $\mathbf{X}$.

The mathematical modelling of the learner phase of TLBO is given by [29]:

$$\mathbf{X}_{new} = \begin{cases} \mathbf{X} + r \cdot (\mathbf{X} - \mathbf{X}_j), & f(\mathbf{X}) < f(\mathbf{X}_j), \\ \mathbf{X} + r \cdot (\mathbf{X}_j - \mathbf{X}), & \text{otherwise}, \end{cases} \tag{10}$$

where $\mathbf{X}$ is the current solution, $r$ is a stochastically generated number in the interval (0, 1), $\mathbf{X}_j$ is a randomly picked solution other than $\mathbf{X}$ from the swarm, $f(\mathbf{X})$ and $f(\mathbf{X}_j)$ are the costs of the mentioned solutions, and $\mathbf{X}_{new}$ is the new position of $\mathbf{X}$.
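The two phases can be sketched together in Python. This is a hedged sketch of one canonical TLBO iteration with greedy selection after each phase; the function names and the sphere objective in the usage example are assumptions for illustration:

```python
import numpy as np

def tlbo_step(swarm, fitness, f, rng):
    """One TLBO iteration: teacher phase, then learner phase,
    each followed by greedy selection."""
    n, dim = swarm.shape
    # --- teacher phase: move every learner toward the teacher, Eq. (9) ---
    teacher = swarm[np.argmin(fitness)]
    mean = swarm.mean(axis=0)
    for i in range(n):
        T_F = rng.integers(1, 3)          # teaching factor, randomly 1 or 2
        r = rng.random(dim)
        candidate = swarm[i] + r * (teacher - T_F * mean)
        fc = f(candidate)
        if fc < fitness[i]:               # keep only improving moves
            swarm[i], fitness[i] = candidate, fc
    # --- learner phase: interact with a random peer, Eq. (10) ---
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        r = rng.random(dim)
        if fitness[i] < fitness[j]:
            candidate = swarm[i] + r * (swarm[i] - swarm[j])
        else:
            candidate = swarm[i] + r * (swarm[j] - swarm[i])
        fc = f(candidate)
        if fc < fitness[i]:
            swarm[i], fitness[i] = candidate, fc
    return swarm, fitness

# usage on the sphere function
rng = np.random.default_rng(1)
f = lambda x: float((x ** 2).sum())
swarm = rng.uniform(-100, 100, size=(20, 5))
fitness = np.array([f(x) for x in swarm])
best_before = fitness.min()
swarm, fitness = tlbo_step(swarm, fitness, f, rng)
```

Because of the greedy selection, the best fitness in the swarm can never worsen across an iteration.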

2.3. Motivation for the current research work

Almost all NIAs encounter hurdles while solving optimization problems, such as stagnation, premature convergence, and a low convergence rate. These are faced due to weak exploration/exploitation or their improper balancing. These hurdles also exist in GWO and TLBO, as in other NIAs. Avoiding these phenomena during the optimization process is a popular research field nowadays, upon which the current research work is based. To tackle the prescribed hurdles and solve various types of optimization problems, researchers have constructed many improved versions of NIAs or their amalgam/hybrid algorithms. In [35], GWO is improved by adaptively adjusting its tunable parameters. In [18], a hybrid algorithm, HTLBO, is introduced by improving the teacher phase of basic TLBO, and it produced remarkable results among the compared algorithms. Information-sharing-based hybridization of PSO and GWO is presented in [36], where it is shown that the hybrid performs better than its parents. The Surrogate-Assisted Hybrid Optimization (SAHO) algorithm is introduced in [37], combining TLBO and Differential Evolution (DE) [38], with the remark that SAHO is supreme among its competitors. Hybridization of various stochastic optimization algorithms based on different strategies is done in [39–44], with the bottom line that if NIAs are combined through an efficient procedure, the exploration and exploitation will mature and their participation in the optimization process will be balanced, which ultimately results in a robust hybrid algorithm. The prescribed situation motivated this work to seek an economical strategy for constructing the advanced amalgam.

3. Proposed Algorithm with Novelty

To diminish the occurrence of premature convergence and to avoid getting stuck in local minima, various strategies have been introduced by researchers in the literature for making hybrid algorithms from existing NIAs. For hybridization, the combination of a highly exploitative and a highly explorative algorithm is the best option. Once two algorithms with these characteristics are selected, the remaining task is to introduce an efficient hybridization strategy. This research work innovates one such novel strategy for hybridization, to tackle the prescribed hurdles and combine GWO and TLBO efficiently. GWO and TLBO are selected for the advanced amalgam since GWO is highly exploitative and TLBO is highly explorative in nature. To examine the future of anything, its current status is studied; the same idea is used as the hybridization strategy in this work. In this paper, an advanced amalgam of the well-known NIAs GWO and TLBO is introduced, based on their solution-update ratios in the current iteration, which is the bottom line of this research. In the amalgam, the number of solutions assigned to an algorithm for the next iteration depends upon its update ratio (total updated/total assigned) in the current iteration. At any iteration, the update ratio of an algorithm reflects its performance in the amalgam. If the update ratio of one algorithm is greater than the other's, then in the next iteration it is favoured and more solutions are assigned to it. This strategy compels GWO and TLBO to show continuous relative performance, or else their share will be diminished. The proposed unconstrained algorithm, the advanced amalgam of the NIAs GWO and TLBO, is named ANIA. Its step-by-step procedure is illustrated in the pseudocode given in Algorithm 1, while the flowchart of the proposed algorithm is shown in Figure 1.

1. Define parameters: population size PS, maximum function evaluations MaxFEs;
2. Initialize the Swarm of size PS uniformly at random;
3. Start the counter of function evaluations, FEs = 0;
4. Determine the fitness value f(x_i) for each x_i ∈ Swarm;
5. Assign FEs = FEs + PS;
6. Nominate the Teacher, X_α, X_β, and X_δ as the top three solutions of the Swarm;
7. Set n_GWO = round(PS/2) and n_TLBO = PS − n_GWO;
8. while FEs ≤ MaxFEs do
9.  Assign n_GWO randomly chosen solutions from the Swarm to GWO;
10.  Find new positions using the GWO equations for the assigned solutions;
11.  Determine the fitness values of the new positions;
12.  Update positions by comparing the assigned with the new positions;
13.  Find the update ratio R_GWO = (updated solutions)/n_GWO;
14.  Assign the unassigned n_TLBO solutions of the Swarm to TLBO;
15.  Select randomly round(n_TLBO/2) solutions from those assigned;
16.  Find new positions via the teacher-phase equation for the selected solutions;
17.  Determine the fitness values of the new positions;
18.  Update positions by comparing the selected with the new positions;
19.  Assign the unselected solutions to the learner phase;
20.  Find new positions via the learner-phase equation for the assigned solutions;
21.  Determine the fitness values of the new positions;
22.  Update positions by comparing the assigned with the new positions;
23.  Find the update ratio R_TLBO = (updated solutions)/n_TLBO;
24.  Nominate the Teacher, X_α, X_β, and X_δ as the top three solutions of the Swarm;
25.  Assign FEs = FEs + PS;
26.  if R_GWO = 0 and R_TLBO = 0 then
27.   Set n_GWO = round(PS/2) and n_TLBO = PS − n_GWO;
28.  else if R_GWO > R_TLBO then
29.   n_GWO = round(PS · R_GWO/(R_GWO + R_TLBO));
30.   if n_GWO ≥ PS then
31.    n_GWO = PS − 1;
32.   end if
33.   n_TLBO = PS − n_GWO;
34.  else if R_TLBO > R_GWO then
35.   n_TLBO = round(PS · R_TLBO/(R_GWO + R_TLBO));
36.   if n_TLBO ≥ PS then
37.    n_TLBO = PS − 1;
38.   end if
39.   n_GWO = PS − n_TLBO;
40.  else
41.   n_GWO = round(PS/2);
42.   n_TLBO = PS − n_GWO;
43.  end if
44. end while
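The core of the allocation step above can be sketched in Python. This is a hedged illustration of the update-ratio-based reallocation, assuming a proportional split between the two algorithms; the function name `reallocate` and the exact bounding of the allocation are assumptions for the sketch, not a verbatim transcription of Algorithm 1:

```python
def reallocate(pop_size, r_gwo, r_tlbo):
    """Split pop_size solutions between GWO and TLBO in proportion to
    their update ratios (updated / assigned) in the last iteration.
    If neither algorithm updated anything, fall back to an even split."""
    if r_gwo == 0 and r_tlbo == 0:
        n_gwo = round(pop_size / 2)
    else:
        n_gwo = round(pop_size * r_gwo / (r_gwo + r_tlbo))
        n_gwo = min(max(n_gwo, 1), pop_size - 1)  # keep both algorithms alive
    return n_gwo, pop_size - n_gwo

# usage: GWO updated 30 of its 50 assigned solutions, TLBO 10 of 50,
# so GWO receives the larger share in the next iteration
n_gwo, n_tlbo = reallocate(100, 30 / 50, 10 / 50)
```

The lower and upper bounds ensure that neither constituent algorithm is ever starved of solutions entirely, so a temporarily weak algorithm can still recover its share later.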

4. Numerical Results

For the performance evaluation of GWO, TLBO, and ANIA, the unconstrained suite CEC-2014 [5] is taken. It consists of 30 non-linear unimodal, multimodal, hybrid, and composite benchmark functions with rotated, translated, separable, non-separable, and asymmetrical characteristics. The nature of the functions classifies the tested suite as follows:
(i) Unimodal functions: F1 to F3 in the tested suite.
(ii) Simple multimodal functions: F4 to F16 in the tested suite.
(iii) Hybrid functions: F17 to F22 in the tested suite.
(iv) Composite functions: F23 to F30 in the tested suite.

4.1. System Specification and Parameters Setting for Conducting Simulations

Experiments are conducted on a system with Windows 10, an Intel(R) Core(TM) i3-7100U CPU @ 2.40 GHz, and 8 GB RAM; the simulation environment is MATLAB R2013a. Parameters are set as follows: population size = 100, dimension (D) of the problems = 10 and 30, maximum function evaluations = 10000 × D, and 51 independent runs for each problem at each taken dimension.

4.2. Comparison via Rank Points and Convergence Graphs

All algorithms are compared based on the statistics Min, Max, Median, Mean, and Std, which are, respectively, the minimum, maximum, median, mean, and standard deviation of the 51 best fitness values obtained, one per run, for a problem. Algorithms are ranked based on these statistics. For the compared algorithms, rank points are displayed in parentheses beside each statistic in Tables 1–4, which reflect the comparison and ranking of the competitors GWO, TLBO, and ANIA via the mentioned statistics. Table 5 shows the positions of the algorithms via the sum of rank points for unimodal functions. Table 6 illustrates the positions of the competitors via the sum of rank points for multimodal functions. Table 7 demonstrates the positions of the contestants via the sum of rank points for hybrid functions. Table 8 exhibits the positions of the rivals via the sum of rank points for composite functions. Table 9 presents the sum of rank points corresponding to the various statistics used for analysis with D10, D30, and overall. Convergence graphs for each problem of the tested suite are included in Figures 2–6, which exhibit the average progress of the opponents through the optimization process.

4.3. Computational Time Complexity

The time taken by an algorithm while simulating the experiments is called its computational time complexity; an increase in it reduces the importance of the algorithm. For various suites, different calculation procedures are proposed. In the current research, the benchmark function suite CEC-2014 is taken, for which the computational time complexity has been evaluated based on various time factors, illustrated as follows:
(i) T0: the time taken by the used system while processing Algorithm 2.
(ii) T1: the computation time of 200000 function evaluations of benchmark function F18 for the tested dimensions.
(iii) T2: the computation time of the algorithm while consuming 200000 function evaluations on F18 for the attempted dimensions.
(iv) T̂2: repeat the calculation of T2 five times and nominate the mean as T̂2.

1. x = 0.55;
2. for j = 1:1000000
3.  x = x + x;
4.  x = x/2;
5.  x = x × x;
6.  x = sqrt(x);
7.  x = log(x);
8.  x = exp(x);
9.  x = x/(x + 2);
10. end for

Table 10 reflects the computational time complexity of the compared algorithms via the discussed factors.
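Under the usual CEC timing protocol, these factors combine into a single complexity measure, (T̂2 − T1)/T0. The following Python sketch illustrates how the factors are collected; the combining formula, the simplified (numerically stable) stand-in for Algorithm 2's loop, and the placeholder sphere objective are all assumptions for illustration:

```python
import time

def timed(fn, *args):
    """Wall-clock time of a single call to fn."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def t0_program():
    # simplified, numerically stable stand-in for the timing loop (Algorithm 2)
    x = 0.55
    for _ in range(1_000_000):
        x = x + x; x = x / 2; x = x * x
        x = x ** 0.5; x = x / (x + 2); x = x + 0.55
    return x

def evaluate_200k(f, dim):
    # T1: cost of 200000 plain evaluations of the benchmark function
    point = [0.0] * dim
    for _ in range(200_000):
        f(point)

# hypothetical objective standing in for the suite's test function
sphere = lambda x: sum(v * v for v in x)

T0 = timed(t0_program)
T1 = timed(evaluate_200k, sphere, 30)
# T2 would time the full algorithm for 200000 evaluations; the plain
# evaluation loop stands in for it here
T2_runs = [timed(evaluate_200k, sphere, 30) for _ in range(5)]
T2_hat = sum(T2_runs) / len(T2_runs)
complexity = (T2_hat - T1) / T0
```

In a real measurement, `evaluate_200k` in the T2 runs would be replaced by a full run of GWO, TLBO, or ANIA consuming 200000 function evaluations.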

5. Discussion

In this paper, a performance-based advanced amalgam of the NIAs GWO and TLBO, named ANIA, is proposed for solving the single-objective large-scale global optimization problems of CEC-2014. ANIA is compared and ranked via various statistics against its parent algorithms, i.e., GWO and TLBO. For discussion purposes, the functions are classified into unimodal, multimodal, hybrid, and composite, in order to judge the efficiency and robustness of the contestants on various types of function. Discussion for D10, D30, overall, the convergence graphs, and the computational time complexity follows.

For the unimodal functions of CEC-2014, GWO, TLBO, and ANIA secured 45, 26, and 19 rank points in total for D10 and hence placed 3rd, 2nd, and 1st, respectively. For D30, GWO, TLBO, and ANIA got 45, 23, and 22 rank points in total and hence secured the 3rd, 2nd, and 1st positions, respectively. The overall view for these problems is that GWO, TLBO, and ANIA acquired 90, 49, and 41 rank points in total and hence stood 3rd, 2nd, and 1st, respectively. This can be checked from Table 5.

For the multimodal functions of CEC-2014, GWO, TLBO, and ANIA obtained 166, 141, and 80 rank points in total for D10 and hence ranked 3rd, 2nd, and 1st, respectively. For D30, GWO, TLBO, and ANIA acquired 142, 147, and 100 rank points in total and hence gained the 2nd, 3rd, and 1st positions, respectively. The overall view for these problems is that GWO, TLBO, and ANIA secured 308, 288, and 180 rank points in total and hence came in 3rd, 2nd, and 1st, respectively. This can be demonstrated from Table 6.

For the hybrid functions of CEC-2014, GWO, TLBO, and ANIA obtained 87, 44, and 49 rank points in total for D10 and hence stood 3rd, 1st, and 2nd, respectively. For D30, GWO, TLBO, and ANIA attained 82, 57, and 41 rank points in total and hence secured the 3rd, 2nd, and 1st positions, respectively. The overall view for these problems is that GWO, TLBO, and ANIA acquired 169, 101, and 90 rank points in total and hence obtained the 3rd, 2nd, and 1st positions, respectively. This can be analyzed from Table 7.

For the composite functions of CEC-2014, GWO, TLBO, and ANIA gained 107, 71, and 57 rank points in total for D10 and hence placed 3rd, 2nd, and 1st, respectively. For D30, GWO, TLBO, and ANIA secured 92, 72, and 63 rank points in total and hence came in 3rd, 2nd, and 1st, respectively. The overall view for these problems is that GWO, TLBO, and ANIA acquired 199, 143, and 121 rank points in total and hence stood 3rd, 2nd, and 1st, respectively. This can be examined from Table 8.

A dimension-wise comparison based on the total rank points of the opponents for CEC-2014 demonstrates that GWO, TLBO, and ANIA acquired 404, 282, and 205 rank points for D10 and 361, 299, and 227 for D30, and hence placed 3rd, 2nd, and 1st at all attempted dimensions, respectively. The overall positions of GWO, TLBO, and ANIA are 3rd, 2nd, and 1st, since they attained 766, 581, and 432 rank points in total. This can be seen from Table 9.

The convergence graphs displayed by the proposed ANIA in solving the benchmark functions F01, F05, F08–F16, F19, F25, F28, and F30 in n = 10 dimensions are promising in terms of proximity and diversity. Similarly, the proposed algorithm tackled the benchmark functions F01, F05, F08, F09, F11, F12, F16–F19, F22, and F30 and produced efficient convergence graphs in thirty dimensions. Moreover, the proposed ANIA provided a better convergence graph than its constituent algorithms in solving each of the used benchmark functions in n = 50 dimensions. This can be seen from Figures 2–4.

Computational time complexity is reflected in Table 10, with the remark that GWO, TLBO, and ANIA are ranked 1st, 2nd, and 3rd at both tested dimensions, respectively; that is, the computational time complexity of ANIA is slightly higher than that of the other competitors.

6. Conclusion and Future Work

Since the inception of the first Genetic Algorithm, many diverse types of evolutionary algorithms (EAs) have been suggested in the literature on evolutionary computing. However, different EAs perform differently on various types of benchmark functions and real-world problems. To overcome the drawbacks of stand-alone algorithms, the use of multiple EAs in a single framework is a trending idea in the current generation of EC. In this paper, we proposed an advanced amalgam of nature-inspired algorithms (ANIA) for solving high-dimensional global optimization problems. The performance of ANIA and its parent algorithms, i.e., GWO and TLBO, was examined on the test suite of benchmark functions designed for IEEE CEC-2014. These challenging problems were solved by the competitors with D10 and D30. ANIA performed significantly better on most of the used benchmark functions in terms of fast convergence and sufficient diversity. The numerical results obtained by ANIA enable us to conclude that the proposed algorithm is better than GWO and TLBO, reflecting the importance of the proposed concept of the amalgam.

In future research work, we intend to examine the efficiency of the proposed algorithm on other unconstrained test suites and on engineering problems. The idea of ANIA can be extended to constrained optimization. Furthermore, we will apply the current concept of the amalgam to combinations of two or more other NIAs to check its general applicability.

Data Availability

The data used in this paper are freely available; please cite this paper when using them.

Conflicts of Interest

The authors declare that no conflicts of interest exist.


Acknowledgments

The authors are thankful to the Directorate of ORIC KUST for the award of the project titled "Advanced Soft Computing Methods for Large-Scale Global Optimization Problems." The authors are also thankful to the Higher Education Commission, Pakistan, for the research award of the National Research Program for Universities project [NRPU ID 15332], titled "Advanced Computational Methods for Solving Complex Computer Science and Mathematical Engineering Problems."