Abstract

The setup of heuristics and metaheuristics, that is, the fine-tuning of their parameters, exerts a strong influence on both the solution process and the quality of results of optimization problems. Finding the best fit for these algorithms is an important task and a major research challenge in the field of metaheuristics. The fine-tuning process requires a robust statistical approach, in order to aid in understanding the process and in choosing effective settings, as well as an efficient algorithm that can streamline the search process. This paper presents an approach combining design of experiments (DOE) techniques and racing algorithms to improve the performance of different algorithms on classical optimization problems. We compare results obtained with the default settings of the metaheuristics against those obtained with the settings suggested by the fine-tuning procedure. Broadly, the statistical results suggest that the fine-tuning process improves the quality of solutions for different instances of the studied problems. Therefore, this study indicates that DOE techniques combined with racing algorithms may be a promising and powerful tool to assist in the investigation and fine-tuning of different algorithms. However, additional studies must be conducted to verify the effectiveness of the proposed methodology.

1. Introduction

The fine-tuning of heuristics and metaheuristics is usually tedious and laborious work for most researchers. However, it exerts a strong influence on the solution process and on the quality of results of optimization problems. Some researchers view this process as itself an optimization problem with many variables (e.g., the parameters to be set) subject to several constraints (e.g., the ranges of the parameters), where the choice of inappropriate values may result in poor algorithm performance and/or low-quality solutions.

Research on metaheuristics is constantly evolving and covers theoretical developments, new algorithms, and enhancement techniques to assist researchers. Since the last decade there has been growing interest in methods that assist the tuning of these algorithms and reduce the work related to this activity. Therefore, the fine-tuning of metaheuristics is an important field of research, both in the development of these algorithms and in the evaluation of problems from areas such as Operations Research and Engineering.

Since the last decade, many researchers (e.g., [1–6] and many others) have been studying different methodologies to streamline this process. Broadly, there is a consensus that the fine-tuning of algorithms requires a robust statistical approach, supported by efficient algorithmic methods, to aid both in understanding the process and in choosing effective settings. In that context, two approaches must be highlighted: the design of experiments (DOE) methodology, a framework of statistical techniques that enables the simultaneous study of multiple parameters and their interactions, and the concept of racing algorithms, a prominent method that automates the fine-tuning of algorithms by streamlining the assessment of alternatives (candidate configurations) and discarding those that appear less promising during the evaluation process.

This paper presents an approach combining DOE techniques and racing algorithms to improve the performance of metaheuristics of distinct natures, such as the genetic algorithm (GA) and simulated annealing (SA), whose main difference is the way their search patterns are implemented. Our approach is illustrated by means of a case study, where a set of parameters of both algorithms is studied simultaneously through the response surface methodology (RSM), in order to define the search space (i.e., the candidate configurations) of each one, followed by a racing algorithm to define the best fit of each algorithm. The quality of the proposed settings for GA and SA is evaluated by applying the selected metaheuristics to classical optimization problems, namely, the travelling salesman problem (TSP) and the problem of scheduling to minimize the total weighted tardiness on a single machine (TWTP), and by comparing the results obtained with the default settings against those suggested by the fine-tuning study.

The rest of the paper is structured as follows: Section 2 presents an overview of the studied problems (TSP and TWTP), as well as the algorithms used in the case study. The problem of fine-tuning algorithms is presented in Section 3, together with our approach combining RSM and a racing algorithm to fine-tune different algorithms. The proposed approach is applied in a case study (Section 4) with different parameters of GA and SA. Section 5 presents the results of the case study and their analysis. Our final considerations are in Section 6.

2. Considered Problems and Algorithms

Many optimization problems, especially those related to the real world (resource allocation, facility location, vehicle routing, etc.), cannot be solved exactly within realistic time limits. Essentially, these problems consist in finding an absolute extreme (maximum or minimum), called the optimum, of an objective function with many local extremes (Figure 1).

Generally, such problems are inherently complex and therefore require considerable computing effort. Some of them, such as travelling salesman and scheduling, are classics of the Operations Research field and involve a significant number of publications in the specialized literature [7–11].

The travelling salesman problem (TSP) is a classical optimization problem whose goal is to find the shortest route through a set of given cities, starting and ending at the same city, such that each city is visited exactly once. A TSP instance consists of a set of cities $\{c_1, c_2, \ldots, c_n\}$ and the corresponding distance $d(c_i, c_j)$ of each pair of cities, such that $d(c_i, c_j) \geq 0$. The problem is classified as symmetric if $d(c_i, c_j) = d(c_j, c_i)$ for all pairs $i, j$, or asymmetric if $d(c_i, c_j) \neq d(c_j, c_i)$ for some pair $i, j$ [8].
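As a concrete illustration of the TSP objective, the following minimal Python sketch computes the length of a closed tour over a symmetric distance matrix; the instance data and names are illustrative assumptions, not part of the original study.

```python
# Minimal sketch of the TSP objective, assuming a symmetric distance
# matrix `dist` (dist[i][j] == dist[j][i]).
import math

def tour_length(tour, dist):
    """Total length of a closed tour visiting every city exactly once."""
    n = len(tour)
    # Sum each leg, wrapping around from the last city to the first.
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Toy example: four cities on the corners of a unit square.
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in coords] for a in coords]
print(tour_length([0, 1, 2, 3], dist))  # 4.0, the optimal square tour
```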

Scheduling is another classical optimization problem, involving tasks that must be arranged on machines ($m \geq 1$) subject to restrictions, in order to optimize an objective function. In the contemporary literature, one of the most widely studied goals relates to due dates, which is especially significant for industry because of the need to meet deadlines. Among the problems with date restrictions is the minimization of total weighted tardiness on a single machine [12–14]. These problems consider a set of $n$ tasks and only one machine, which processes at most one task at a time. Once the processing of a task has started, it cannot be stopped. Each task $j$ requires a processing time $p_j$ (in time units), positive and continuous, and has a weight $w_j$, a start time $s_j$, and a due date $d_j$. Broadly, the tardiness is the difference between the task's due date and its effective completion. The tardiness is computed as $T_j = \max\{0, C_j - d_j\}$, where $C_j$ is the completion time of task $j$. The purpose of this problem is to organize the tasks into an optimal sequence such that the total weighted tardiness $\sum_{j=1}^{n} w_j T_j$ is minimized.
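The TWTP objective for a given task sequence can be sketched analogously; the toy instance below is an illustrative assumption only.

```python
# Minimal sketch of the single-machine TWTP objective: the weighted
# tardiness sum w[j] * max(0, C[j] - d[j]) over a task sequence.
def total_weighted_tardiness(sequence, p, w, d):
    t, total = 0, 0
    for j in sequence:
        t += p[j]                         # completion time C[j] of task j
        total += w[j] * max(0, t - d[j])  # zero tardiness if j is on time
    return total

# Toy instance with three tasks (processing times, weights, due dates).
p, w, d = [3, 2, 4], [1, 2, 1], [4, 3, 9]
print(total_weighted_tardiness([1, 0, 2], p, w, d))  # 1 for sequence 1-0-2
```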

Both problems are known to be NP-hard [15]; thus, obtaining optimal solutions requires great computing effort and demands the use of efficient algorithms.

Metaheuristics are among the best known approaches for solving problems for which no specific and/or efficient algorithm exists. Many metaheuristics are inspired by metaphors from different knowledge areas, such as biology (genetic algorithms and neural networks) and physics (particle swarm optimization and simulated annealing). Usually, these algorithms differ from each other in their search patterns but offer accurate and balanced methods for diversification (exploration of the search space) and intensification (exploitation of a promising region), share features such as the use of stochastic components (randomness of variables), and have a variety of parameters that must be set according to the problem under study.

The genetic algorithm (GA) is a population-based method introduced by Holland [16], inspired by the survival principles of Darwin's theory of evolution. GA simulates an evolutionary process in which the fitness of individuals (parents) is crucial to generating new individuals (children). Simulated annealing (SA), in turn, is a probabilistic method proposed by Kirkpatrick et al. [17] and Černý [18] to find the global minimum of an objective function with numerous local minima. Widely applied to optimization problems, SA simulates a physical process in which a solid is cooled slowly, so that the final product becomes a homogeneous mass that reaches a minimum-energy configuration [19].

The main difference between these algorithms is the way their search methods are implemented. GA operates on a population of solutions, where new generations (offspring) are generated from the fittest individuals of previous generations (parents). This feature (the survival principle) promotes an increase in the quality of solutions as new generations are created. SA, on the other hand, moves repeatedly from one solution ($s$) to another ($s'$) according to a predefined neighborhood structure. SA uses a probabilistic test to accept new solutions, and this feature sometimes allows low-quality solutions to be considered during the search process.
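For illustration, SA's probabilistic acceptance test is commonly implemented as the Metropolis criterion, sketched below; this is a generic textbook form, not the exact implementation used in this study.

```python
# Sketch of SA's acceptance test (Metropolis criterion) for minimization.
import math, random

def accept(cost_current, cost_candidate, temperature):
    """Always accept improvements; accept worse moves with probability
    exp(-delta / temperature), which shrinks as the system cools."""
    delta = cost_candidate - cost_current
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```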

Both GA and SA have a wide range of parameters (e.g., crossover and mutation rates and population size for GA; initial temperature, its rate of decrease, and number of iterations for SA) that must be tuned before starting to solve a problem. Since metaheuristics are extremely dependent on the values assigned to these parameters, the parameters must be carefully studied during the fine-tuning process, as they can determine the success of the algorithm.

3. Improving the Performance of Metaheuristics

Let $A$ be an algorithm with a set of parameters applied to different problems $P$. The problem of fine-tuning an algorithm can then be summarized as a search space
$$S = R_1 \times R_2 \times \cdots \times R_k, \quad (1)$$
where $\theta_1, \theta_2, \ldots, \theta_k$ are the parameters of algorithm $A$ for a given problem $P$ and $R_1, R_2, \ldots, R_k$ are the finite ranges of values assumed by each parameter. The number of parameters, as well as their ranges, can vary widely according to the $A$ and $P$ under study, so that possibly a large number $|S| = |R_1| \times |R_2| \times \cdots \times |R_k|$ of combinations of $A$ on $P$ would have to be tested.
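To make the growth of this search space concrete, the sketch below enumerates the Cartesian product of discretized ranges for four parameters; all candidate values are illustrative assumptions.

```python
# Sketch of the search space S as a Cartesian product of parameter ranges.
from itertools import product

ranges = {
    "pCros": [0.5, 0.6, 0.7],   # hypothetical candidate values
    "pMut":  [0.1, 0.2, 0.3],
    "sPop":  [50, 100, 150],
    "nGen":  [100, 200, 300],
}
S = list(product(*ranges.values()))
print(len(S))  # 3^4 = 81 configurations to test on every instance
```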

Our approach to fine-tuning algorithms can be expressed as a procedure that begins with an arbitrary selection of instances from a class of optimization problems, followed by the definition of ranges for each parameter of the algorithm, in order to apply a full factorial design and study the response (effect) of multiple parameters (factors) [20–22]. By means of full factorial designs it is possible to establish a cause-and-effect relationship between factors and response, usually represented by an empirical regression model (linear or quadratic) of the process under analysis, such as
$$y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j + \epsilon, \quad (2)$$
where $y$ is the response, the $\beta$ are the coefficients, the $x_i$ are the factors, and $\epsilon$ is the experimental error.
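A 2^k full factorial design of this kind is straightforward to generate; the sketch below uses coded units (-1 for the low level, +1 for the high level) with hypothetical factor names.

```python
# Sketch of a 2^4 full factorial design in coded units.
from itertools import product

factors = ["pCros", "pMut", "sPop", "nGen"]
design = list(product([-1, +1], repeat=len(factors)))  # 2^4 = 16 runs
for run in design:
    print(dict(zip(factors, run)))
# Each run is replicated on the selected instances; the responses are
# then fed to ANOVA/regression to estimate the beta coefficients in (2).
```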

Factorial designs are useful for identifying the factors that influence the response. However, when the interest is in defining the settings of the factors that optimize the process and produce factor values closer to the optimum, (2) can be insufficient to represent the relationship between factors and response. Thus, the next stage of our approach employs the response surface methodology (RSM) as a fine-tuning tool, to get closer to regions with promising settings. RSM is a framework of statistical and mathematical techniques that employs factorial designs, regression analysis, and optimization methods in situations where several input parameters influence the performance of a process [23, 24]. Roughly, RSM works by exploring the neighborhood around a promising region.

The result of RSM is a second-order model [22], given by
$$y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \epsilon, \quad (3)$$
which represents the relationship between response and factors in the new region, together with a three-dimensional contour showing the shape of the surface (Figure 2).
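As an illustration of how (3) can be estimated, the sketch below fits a second-order model for two coded factors by ordinary least squares on synthetic data; it is a minimal stand-in, not the fitting procedure used in the study.

```python
# Sketch: fit the second-order model (3) for two factors with NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))  # 30 runs of coded factors x1, x2
y = 1 + 0.5*X[:, 0] - 0.3*X[:, 1] + 0.8*X[:, 0]**2 + rng.normal(0, 0.05, 30)

# Design matrix: intercept, linear, pure quadratic, and interaction terms.
D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
print(beta)  # estimates of beta_0, beta_1, beta_2, beta_11, beta_22, beta_12
```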

The final stage of our approach consists in applying a racing algorithm to define the setup of the algorithms. Racing was introduced by Maron and Moore [25]. Unlike a brute-force technique, where all alternatives must be tested an equal number of times, the idea of racing is to reduce the number of alternatives (candidate configurations) as soon as there is statistical evidence to discard them. Thus, at the beginning a very large set of alternatives may exist, which shrinks as the worst ones are eliminated. The racing concept was studied by Birattari et al. [26, 27] by means of F-Race, a racing algorithm in which candidate configurations are eliminated by means of the Friedman statistic. In [28] the authors propose an iterated application of F-Race. Racing and sharpening are combined in [29] in order to increase the efficiency of the search. An extension of the F-Race procedure by means of a package implemented in R is presented in [30]. Different sampling techniques for F-Race are suggested in [31, 32].
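A simplified race in the spirit of F-Race is sketched below; `evaluate(candidate, instance)` is a placeholder for running a candidate configuration and returning its Dev, and the elimination rule (dropping candidates with above-average mean rank once the Friedman test is significant) is a simplification of the original method.

```python
# Simplified racing loop inspired by F-Race.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def race(candidates, instances, evaluate, alpha=0.05, min_blocks=3):
    alive = list(candidates)
    results = {c: [] for c in alive}   # Dev values per surviving candidate
    for block, inst in enumerate(instances, start=1):
        for c in alive:
            results[c].append(evaluate(c, inst))
        # The Friedman test needs at least 3 candidates and a few blocks.
        if len(alive) > 2 and block >= min_blocks:
            _, p = friedmanchisquare(*(results[c] for c in alive))
            if p < alpha:  # statistical evidence that candidates differ
                # Rank candidates within each block (lower Dev is better)
                # and discard those with an above-average mean rank.
                ranks = np.array([rankdata([results[c][b] for c in alive])
                                  for b in range(block)])
                mean_rank = ranks.mean(axis=0)
                alive = [c for c, r in zip(alive, mean_rank)
                         if r <= mean_rank.mean()]
                results = {c: results[c] for c in alive}
    # The winner is the surviving candidate with the lowest mean Dev.
    return min(alive, key=lambda c: float(np.mean(results[c])))

# Toy demo: scalar "configurations"; values near 0.3 perform best.
import random
winner = race([0.1, 0.2, 0.3, 0.6, 0.9], range(10),
              lambda c, i: abs(c - 0.3) + random.gauss(0, 0.05))
print(winner)
```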

4. Case Study

To illustrate our approach to fine-tuning algorithms, we selected a set of parameters that intuitively seem to influence the performance of the metaheuristics GA and SA, regardless of the studied problem. For GA, we chose the following parameters: mutation probability (pMut), crossover probability (pCros), size of the population (sPop), and number of generations to be computed (nGen). For SA, the considered parameters are the initial probability of accepting a solution (pIni), the number of temperature stages (iExt), the number of iterations during one temperature stage (iInt), and the temperature decrease rate (dTem).

To generalize our results and make them comparable, we use the relative deviation from the optimum, given by
$$Dev = \frac{S - S^*}{S^*}, \quad (4)$$
where $S$ is our computed solution and $S^*$ is the best known solution of the problem. Thus, the lower the value of Dev, the better the performance of the algorithm.
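In code, (4) is a one-liner; the example values below are purely illustrative.

```python
# Relative deviation (4): S is the computed solution value,
# S_best the best known solution.
def dev(S, S_best):
    return (S - S_best) / S_best

print(dev(7700, 7542))  # ~0.021 for a hypothetical tour of length 7700
```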

The parameters and their corresponding levels (low and high) required by a full factorial design in the first stage of our approach are in Table 1.

4.1. TSP

The fine-tuning of the metaheuristics GA and SA on the TSP uses four arbitrarily chosen instances from TSPLIB (URL: http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/).

The full factorial design was used here to identify the factors that influence the response. Thus, from the first stage of this approach we can conclude, for both GA and SA, the following:
(i) All four factors studied are significant for the process, regardless of the instance selected.
(ii) There are differences between the interactions of the factors according to the instance studied.

The next stage consists in applying RSM to explore the neighborhood around a promising region and obtain values for each parameter according to the studied instance. The procedure consists in the simultaneous study of all four parameters of each algorithm until ANOVA shows statistical significance and suggests an empirical model explaining the relationships between them. The direct search algorithm based on the simplex of Nelder and Mead [33] was used to optimize the empirical model. The resulting parameter values should yield the best responses.
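For illustration, the optimization of a fitted second-order model with the Nelder-Mead simplex can be sketched as follows, here using SciPy's implementation and hypothetical coefficients.

```python
# Sketch: minimize a fitted second-order model with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

beta = np.array([1.0, 0.5, -0.3, 0.8, 0.2, -0.1])  # hypothetical fit

def model(x):
    x1, x2 = x
    return (beta[0] + beta[1]*x1 + beta[2]*x2 +
            beta[3]*x1**2 + beta[4]*x2**2 + beta[5]*x1*x2)

res = minimize(model, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # coded factor settings predicted to minimize the response
```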

After RSM, the next stage uses a racing algorithm inspired by F-Race [26, 27] to define the setup of the algorithms. The main contribution of our approach is combining the power of RSM (a framework of mathematical and statistical techniques employed in the modeling and optimization of problems where the response is affected by many factors) with the efficiency of racing algorithms. In this context, RSM acts as a sampling technique that accurately locates the best fit of each parameter. That is, from the results of RSM (Tables 2 and 3) we define a range of values between the minimum and maximum of each parameter, which forms a search space of candidate configurations. Thus, the ranges of GA are as follows:
(i) pCros between 0.54 and 0.63;
(ii) pMut between 0.59 and 0.86;
(iii) sPop between 110 and 130;
(iv) nGen between 176 and 230.

The same methodology produces the following ranges for SA:
(i) pIni between 0.74 and 0.76;
(ii) iExt between 64 and 65;
(iii) iInt between 1490 and 1500;
(iv) dTem between 0.51 and 0.55.

The idea of using a racing algorithm is to select a configuration as good as possible from a large set of options. For this study, discrete values of pCros, pMut, sPop, and nGen (for GA) and of pIni, iExt, iInt, and dTem (for SA) were sampled within the ranges above (see the sketch below); each possible combination leads to a different algorithm setting, such that our search space comprised 36 different parameter settings for each algorithm. After applying the racing algorithm to this search space, we reached the best setting for each algorithm to solve the TSP (Table 4). This table also presents the default values of each metaheuristic in Scilab.
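For illustration, such a candidate grid can be built directly from the RSM ranges above; since only the total of 36 settings is reported, the level counts per parameter (3 × 3 × 2 × 2) are an assumption made here.

```python
# Sketch: build the GA candidate grid for the race from the RSM ranges.
from itertools import product
import numpy as np

grid = {
    "pCros": np.linspace(0.54, 0.63, 3),        # ranges from the RSM stage
    "pMut":  np.linspace(0.59, 0.86, 3),
    "sPop":  np.linspace(110, 130, 2, dtype=int),
    "nGen":  np.linspace(176, 230, 2, dtype=int),
}
candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(candidates))  # 3 * 3 * 2 * 2 = 36 GA settings entering the race
```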

4.2. TWTP

This approach begins, as previously presented, with the arbitrary selection of four instances of "wt40," a TWTP benchmark with 40 tasks from the OR Library (URL: http://people.brunel.ac.uk/~mastjjb/jeb/info.html).

Here, all four studied factors are also significant for the process, but it is possible to identify differences between their interactions. The results of applying RSM to these instances are presented in Tables 5 and 6.

Once again, we applied the racing algorithm considering the search space of candidate configurations built from the minimum and maximum of each parameter (Tables 5 and 6). For this problem, the ranges of GA are as follows:
(i) pCros between 0.51 and 0.55;
(ii) pMut between 0.54 and 0.58;
(iii) sPop between 110 and 140;
(iv) nGen between 120 and 140.

The same methodology suggests the following ranges for SA:
(i) pIni between 0.71 and 0.76;
(ii) iExt between 61 and 64;
(iii) iInt between 1400 and 1480;
(iv) dTem between 0.53 and 0.58.

Here, as before, discrete values of each parameter of GA and SA were sampled within these ranges, such that our search space again comprised 36 different parameter settings for each algorithm. Running the race, we reached the setup for each metaheuristic shown in Table 7.

5. Analysis of Results

Our results were collected using the scientific software Scilab (http://www.scilab.org) on an Intel Core i5 1.8 GHz machine with 6 GB of memory and a 1 TB hard disk, running Windows 8 64-bit.

All results presented in this section were computed by means of (4). For the comparisons, results were first collected before the fine-tuning process (labeled GA and SA), using the default settings of each metaheuristic in Scilab, and then (labeled GA′ and SA′) with the algorithms set according to Table 4 for the TSP and Table 7 for the TWTP.

It should be noted that the procedure to improve the performance of metaheuristics (Section 3) can be adapted to many algorithms of distinct natures (e.g., tabu search, ant colony optimization, particle swarm optimization); thus, for the purposes of this paper, we consider the use of the two distinct metaheuristics GA and SA sufficient.

5.1. TSP

The first set of results (Tables 8 and 9) corresponds to 10 runs of the metaheuristics GA and SA on ten instances of the symmetric TSP benchmark from TSPLIB. In these tables, column AD is the arithmetic mean of (4) over the 10 runs, that is, $AD = \frac{1}{10}\sum_{r=1}^{10} Dev_r$; NOpt is the number of times the algorithm reached the optimum for each instance; and $\mu$ and $\sigma$ are the arithmetic mean and standard deviation, respectively, over all instances.

The statistics of GA′ and SA′ (Tables 8 and 9) reveal an increase in the performance of both algorithms after the fine-tuning (about 65% and 85% for GA′ and SA′, resp.). It should be highlighted that both metaheuristics become more promising for almost all instances. These results (Tables 8 and 9) also suggest that the fine-tuning makes the process more stable for SA, since $\sigma_{SA'} < \sigma_{SA}$, whereas the same does not hold for GA.

When we analyze the time series of both algorithms (Figure 3) on a single instance (Berlin52) of the TSP, SA has the best performance compared with GA, but both performances increase with the fine-tuning. From this graph it should be highlighted that the performance of GA′ is very similar to that of SA before the fine-tuning process. Moreover, comparing each metaheuristic with itself, the fine-tuning process (dashed lines on the graph) yields an improvement in the quality of solutions for both algorithms.

The substantial increase in the performance of both algorithms can also be noted when we analyze the normalized execution times (Figure 4), where SA and SA′ show better results. That is, with the fine-tuning process, SA′ attains good performance in less time. Nevertheless, the decrease in time is relatively smaller than the increase in the quality of solutions. For GA′, there is also an improvement in the quality of solutions, but its execution time is slightly larger.

These results (Tables 8 and 9), supported by the graphical analysis, highlight that the fine-tuning process produces better results (Figure 5).

5.2. TWTP

Tables 10 and 11 present the results of 10 runs of GA and SA on the first ten instances of the OR Library benchmark "wt40." In those tables, the columns AD, NOpt, $\mu$, and $\sigma$ have the same meaning as previously described (Section 5.1).

The statistics of GA′ (Table 10) suggest an improvement in the quality of results for all instances. GA′ stands out on instance 5, where about 50% of the runs reach the optimum. SA′ also shows its best results on instance 5; however, its optimum is reached in 70% of the runs (Table 11). In general, it is also noted that SA′ reaches the optimum more frequently. The statistics (Tables 10 and 11) suggest that SA′ produces more stable results than GA′, since $\sigma_{SA'} < \sigma_{GA'}$.

Analyzing the time series of both algorithms (Figure 6) on a single instance (instance 1) of the TWTP, it can be noted that after the fine-tuning process (dashed lines on the graph), there is an improvement of about 90% in the performance of both metaheuristics.

As previously noted (Section 5.1), SA also presents better results in terms of execution time (Figure 7). Through the fine-tuning, SA′ reaches the best results in less time, whereas GA′ also reaches the best results, but with a larger execution time.

The statistics (Tables 10 and 11) also suggest that the fine-tuning process improves the performance of the algorithms and produces better results, closer to the optimum (Figure 8).

6. Final Considerations

This paper presented a study on the fine-tuning of different metaheuristics through a statistical approach combining RSM and a racing algorithm. In a case study, we investigated the influence of different parameters of GA and SA, applied to different instances of classical optimization problems, namely, the TSP and the TWTP. The quality of the settings for GA and SA was evaluated by comparing the default algorithm settings with those suggested by the fine-tuning study.

The use of RSM allows defining the search space of candidate configurations, which is then explored by means of a racing algorithm to reach the best fit of each parameter of the metaheuristics for different instances of the studied problems.

In the case study we collected different results for the TSP and TWTP before and after the fine-tuning of the algorithms. In general, regardless of the nature of the metaheuristic, the fine-tuning process improves the quality of solutions and allows both GA and SA to achieve better results for different instances of the problems. Comparing the tuned algorithms with each other, SA has the best results for the TSP (quality of solution), whereas for the TWTP both have similar performance, with SA slightly better than GA. In terms of execution time, SA generally achieves good performance in less time for both studied problems, whereas GA is slower in finding better results.

It is important to note that the metaheuristics GA and SA, as well as the problems TSP and TWTP, were used in this work only to demonstrate our approach combining RSM and a racing algorithm. The aim was to verify the effectiveness of the proposed methodology applied to these cases. Our results suggest that the proposed approach may be a promising and powerful tool to assist in the fine-tuning of different algorithms. However, additional studies must be conducted to verify the effectiveness of the proposed methodology, mainly when applied to already well-configured algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.