Abstract

The Machine-Part Cell Formation Problem (MPCFP) is an NP-Hard optimization problem that consists of grouping machines and parts into a set of cells so that each cell can operate independently and the intercell movements are minimized. This problem has largely been tackled in the literature by using different techniques, ranging from classic methods such as linear programming to more modern nature-inspired metaheuristics. In this paper, we present an efficient parallel version of the Migrating Birds Optimization metaheuristic for solving the MPCFP. Migrating Birds Optimization is a population metaheuristic based on the V-shaped flight formation of migrating birds, which is proven to be an effective formation in energy saving. This approach is enhanced by the smart incorporation of parallel procedures that notably improve the performance of the several sorting processes performed by the metaheuristic. We perform computational experiments on 1080 benchmarks resulting from the combination of 90 well-known MPCFP instances with 12 sorting configurations, with and without threads. We illustrate promising results where the proposal is able to reach the global optimum in all instances, while the solving time with respect to a nonparallel approach is notably reduced.

1. Introduction

The Machine-Part Cell Formation Problem (MPCFP) is based on the well-known Group Technology (GT) [1], widely used in the manufacturing industry. The goal of the MPCFP is to organize a manufacturing plant as a set of cells, each containing a limited number of machines and parts, while minimizing the intercell movement of parts. The purpose is to reduce costs and increase productivity. The MPCFP is known to be NP-Hard, so there is always the challenge of producing high-quality solutions in a limited time interval. In recent years, several techniques have successfully been applied to this problem, from mathematical methods such as linear programming [2] to classic metaheuristics such as particle swarm optimization [3] and more modern ones such as artificial fish swarms [4].

In this paper, we present a new and efficient parallel version of the Migrating Birds Optimization metaheuristic for solving the MPCFP. Migrating Birds Optimization (MBO) is a population-based metaheuristic inspired by the V-shaped flight formation employed by birds when they migrate, which is known to be very effective for saving energy during flight. This approach is enhanced by precisely integrating parallel procedures, particularly for the efficient sorting of birds and neighboring solutions, resulting in a notable performance improvement of the whole solving process. We perform computational experiments by using 90 well-known MPCFP instances, 2 well-known sorting algorithms, 1 sequential configuration, and 5 thread configurations (1, 4, 8, 16, and 32), resulting in a total of 1080 benchmarks. The obtained results are encouraging: the proposal is able to reach the global optimum in all instances, while the solving time with respect to a nonparallel approach is notably reduced.

The outline of the study is as follows. In Section 2, we give some information on the related work. Section 3 describes and models the MPCFP. Section 4 gives an overview of MBO. Section 5 describes how to solve the MPCFP using MBO with parallel sorting. Section 6 presents and discusses the experimental results. Section 7 concludes and provides guidelines for future work. Finally, the Appendix provides detailed results for the 1080 benchmarks, presented in tables of 90 test instances each.

2. Related Work

Several investigations have been carried out for the MPCFP, such as linear programming [5], a goal programming model [6], a hybrid genetic algorithm [7], Boolean satisfiability [8], constraint programming [9], and an artificial fish swarm algorithm [4]. An analysis of the evolution of the cell formation problem can be found in [10, 11]. As the MPCFP is an NP-Hard problem, many researchers have considered applying approximate methods such as metaheuristics. Much research can be found in the field of metaheuristics, in which the search is performed by iterative procedures that allow moving from one solution to another in the search space. One type of metaheuristic performs movements in the neighborhood of the current solution, which means it has a perturbative nature; these are the trajectory-based metaheuristics, among which we have simulated annealing [12-14], tabu search [15-17], iterated local search [18, 19], and variable neighborhood search [20]. There is also another type of metaheuristic, based on populations. Population metaheuristics maintain a set of individuals, where each individual encodes a temporary solution. These metaheuristics use a fitness function to evaluate each individual, with the aim of retaining the best values of the population that solve an optimization problem. In addition, perturbations in the algorithm direct the individuals of the population, or part of them, toward possible solutions with better fitness. Among these population-based metaheuristics, ant colony optimization [21, 22], particle swarm optimization [3, 23, 24], cat swarm optimization [25, 26], and Migrating Birds Optimization [27] can be found, applied to problems like hybrid flowshop scheduling with total flowtime minimisation [28], the closed loop layout with exact distances in flexible manufacturing systems [29], and preliminary results for the MPCFP [30]. A brief review of nature-inspired algorithms for optimization can be found in [31].

In the area of manufacturing and industrial applications, parallel implementations of metaheuristics have been addressed, such as GRASP and grid computing to solve the location area problem [32], a performance analysis of coarse-grained parallel genetic algorithms on the multicore Sun UltraSPARC T1 [33], a parallel genetic algorithm for the multilevel unconstrained lot-sizing problem [34], and a memetic algorithm and a parallel hyperheuristic island-based model for a 2D packing problem [35]. We can also find research using parallel metaheuristics, such as a cooperative parallel metaheuristic for the capacitated vehicle routing problem [36] and the optimization of shared-memory hyperheuristics on top of parameterized metaheuristics [37]; finally, a review of recent progress in implementing parallelism with metaheuristics can be found in [38].

Different studies have addressed parallel sorting algorithms, such as massively parallel sort-merge joins in main memory multicore database systems [39], an efficient parallel merge sort for fixed and variable length keys [40], a randomized parallel sorting algorithm with an experimental study [41], a sorting algorithm for many-core architectures based on adaptive bitonic sort [42], a performance comparison of sequential quick sort and parallel quick sort algorithms [43], the time profit obtained by parallelization of the quick sort algorithm used for numerical sorting [44], an efficient massively parallel quick sort [45], and, finally, a fast parallel implementation of quick sort and its performance evaluation on the SUN Enterprise 10000 [46].

In this paper, we have chosen the MBO metaheuristic because it has a friendly design, composed of two linked lists for the two sides of the flock of birds and a node for the leader bird. Furthermore, MBO has been applied to various optimization problems with promising results [47-49]. We focus on an efficient parallel sorting for MBO when solving the MPCFP, which to our knowledge has not yet been reported. We have chosen to parallelize the sorting algorithms in MBO because 4 out of the 8 stages of the MBO metaheuristic use sorting algorithms, and therefore we believe that applying parallel sorting in these stages can reduce the execution time of the MBO algorithm.

3. Machine-Part Cell Formation Problems

The Machine-Part Cell Formation Problem (MPCFP) is an NP-Hard optimization problem whose main objective is to group machines and parts into cells so that the number of intercell transportations of parts is minimized. Therefore, the initial incidence matrix (see Table 1) must be converted into a matrix that has a block diagonal structure, in which the cells are easily visible (see Table 2). This example corresponds to an MPCFP with the following parameters: 5 machines, 7 parts, and $M_{\max} = 3$ for 2 cells. The optimum value obtained is 0, and the final incidence matrix is constructed from the results of the matrices Y and Z (see Tables 3 and 4).

A rigorous mathematical formulation of the MPCFP is given by Boctor [2]. The problem is represented by the following mathematical model:

(i) Parameters, indices, and sets:
(a) $A = [a_{ij}]$, the binary machine-part incidence matrix, where $a_{ij} = 1$ if machine $i$ processes part $j$, and $a_{ij} = 0$ otherwise,
(b) $M$, the number of machines,
(c) $P$, the number of parts,
(d) $C$, the number of cells,
(e) $i$, the index of machines ($i = 1, \dots, M$),
(f) $j$, the index of parts ($j = 1, \dots, P$),
(g) $k$, the index of cells ($k = 1, \dots, C$),
(h) $M_{\max}$, the maximum number of machines per cell.

(ii) Variables and domains:
(a) $Y = [y_{ik}]$, the machine-cell matrix, where $y_{ik} = 1$ if machine $i$ belongs to cell $k$, and $y_{ik} = 0$ otherwise,
(b) $Z = [z_{jk}]$, the part-cell matrix, where $z_{jk} = 1$ if part $j$ belongs to cell $k$, and $z_{jk} = 0$ otherwise.

(iii) Objective function:
$$\min \sum_{k=1}^{C} \sum_{i=1}^{M} \sum_{j=1}^{P} a_{ij}\, z_{jk}\, (1 - y_{ik})$$

(iv) Being subject to the following constraints:
$$\sum_{k=1}^{C} y_{ik} = 1 \quad \forall i, \qquad \sum_{k=1}^{C} z_{jk} = 1 \quad \forall j, \qquad \sum_{i=1}^{M} y_{ik} \le M_{\max} \quad \forall k.$$
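
To make the objective concrete, the following minimal Java sketch evaluates the number of intercell movements of a candidate solution under the formulation above. This is illustrative code written for this presentation, not the implementation used in the experiments, and all names are ours.

// Illustrative sketch: evaluates the MPCFP objective, that is, the number
// of intercell movements, for a candidate solution (A, Y, Z).
public final class MpcfpObjective {
    // a: binary machine-part incidence matrix (M x P)
    // y: machine-cell matrix (M x C), y[i][k] = 1 if machine i is in cell k
    // z: part-cell matrix (P x C), z[j][k] = 1 if part j is in cell k
    public static int intercellMovements(int[][] a, int[][] y, int[][] z) {
        int m = a.length, p = a[0].length, c = y[0].length;
        int moves = 0;
        for (int k = 0; k < c; k++)
            for (int i = 0; i < m; i++)
                for (int j = 0; j < p; j++)
                    // a part assigned to cell k but processed by a machine
                    // outside cell k produces an intercell movement
                    moves += a[i][j] * z[j][k] * (1 - y[i][k]);
        return moves;
    }
}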

4. Migrating Birds Optimization

Migrating Birds Optimization (MBO) is a nature-inspired population metaheuristic based on the V-shaped flight of migrating birds [50, 51], which is proven to be an effective formation in energy saving [52]. In the V-formation (shown in Figure 1), parameters like the wing-tip spacing (WTS), the maximum width of the wing, the angle of the V-formation, the depth, and the wing span are important to form an effective V-formation [53, 54]. The conceptual similarity between the parameters of the MBO algorithm and the actual migration of birds in V-flight formation is investigated in Duman et al. [27]. This conceptual similarity is described in Table 5.

MBO starts with a number of initial solutions corresponding to birds in a V-flight formation. These solutions are generated randomly, one solution is chosen as the leader bird, and the remaining birds are alternately distributed to each side of the flock. In the next step, for a number of tours, the leader solution is improved by generating and evaluating its neighbors. Each solution in the flock (except the leader) is then improved by evaluating its own neighbors together with the unused best neighbors passed from the solution in front of it. The leader solution is then moved to the end of a line and one of the solutions following it is forwarded to the leader position. Finally, when the iterations are over, the best solution in the flock is returned. Algorithm 1 depicts the classic procedure of the MBO algorithm.

(1)   Generate n initial solutions in a V-formation
(2)   i = 0
(3)   while i < K do
(4)    for j = 0; j < m; j++ do
(5)     Improve the leading solution by generating and evaluating k neighbors of it
(6)     i = i + k
(7)     for each solution s in the flock (except leader) do
(8)      Try to improve s by evaluating (k - x) neighbors of it and x unused best
         neighbors from the solution in the front
(9)      i = i + (k - x)
(10)    end
(11)   end
(12)   Move the leader solution to the end and forward one of the solutions following it
       to the leader position
(13)  end
(14)  Return the best solution in the flock

5. Solving the MPCFP Using MBO with Parallel Sort

The first step in solving the MPCFP is a proper integration with the MBO metaheuristic. In this sense, we conceptualize every bird of the flock (including the leader) as a possible solution to the MPCFP (see Figure 2). Each bird is composed of a matrix Y and a matrix Z. These matrices are the ones modified as the iterations elapse. In addition, they are used for the calculation of the fitness for the MPCFP.
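
As an illustration, a bird can be sketched in Java as follows. This is our illustrative code, not the paper's implementation; it reuses the objective sketch given in Section 3, and all names are assumptions.

// Illustrative sketch: a bird encodes one candidate MPCFP solution.
// MpcfpObjective refers to the objective sketch shown in Section 3.
public final class Bird {
    final int[][] y;   // machine-cell matrix Y
    final int[][] z;   // part-cell matrix Z
    final int fitness; // number of intercell movements (lower is better)

    Bird(int[][] a, int[][] y, int[][] z) {
        this.y = y;
        this.z = z;
        this.fitness = MpcfpObjective.intercellMovements(a, y, z);
    }
}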

5.1. Generate Initial Solutions

For generating initial solutions, consider the following inputs: machines = 4, parts = 5, cells = 2, $M_{\max} = 2$, and a matrix A (see Table 6).

Considering the data in Table 6, the steps to generate an initial solution are as follows.

Step 1. Generate random allocations in the matrix Y with some random method. This matrix (see Table 7) satisfies the constraints that each machine exists in only one cell and that the number of machines in an entire cell is less than or equal to $M_{\max} = 2$.

Step 2. Generate, with a manual method, the matrix Z from the matrix Y. For this, a connection is established between the incidence matrix A and the solution matrix Y. In lightface, machines correspond to cell number 1 and, in bold, machines correspond to cell number 2 (see Table 8).

The manual method for determining the matrix Z involves taking each value 1 at a position $(i, j)$ of the matrix A and adding it into a temporary array of sums, under the column of the cell to which machine $i$ belongs. We perform this operation for every position $(i, j)$ of the matrix A (see Table 9).

Example 1. A value 1 of A located in the row of a machine assigned to cell 1 is added to the corresponding row of the sums array under cell 1; likewise, a value 1 located in the row of a machine assigned to cell 2 is added to the corresponding row under cell 2. This is repeated for every nonzero position of A considered in Table 9.

Step 3. From the results of Table 9, we build a partial solution to the matrix Z (see Table 10).

Step 4. Find the possible solutions for Z.

We must choose the positions of the temporary matrix that hold the largest value of each row. Subsequently, we replace the larger values by assigning them the value 1, and we assign the value 0 to the other values in the row (see Table 11). In the case of a draw, that is, when an entire row has equal numbers, we keep the row unchanged if the value of its positions is 1; when the tied numbers are greater than 1, we assign the value 1 to each of them.

We consider the rows having tied values equal to 1; in this case, they correspond to the rows pertaining to machines 3 and 4. From this point, we generate all possible combinations of matrices Z and randomly choose one (see Table 12).
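
The derivation of the matrix Z from Y described in Steps 2-4 can be sketched in Java as follows. This is our illustrative code, not the paper's implementation; the tie handling is simplified to taking the first maximal cell instead of enumerating all combinations.

// Illustrative sketch: derive the part-cell matrix Z from the machine-cell
// matrix Y and the incidence matrix A with the manual method of Section 5.1.
// Simplification: ties are broken by the first cell with the maximal sum.
public final class InitialSolution {
    static int[][] derivePartCellMatrix(int[][] a, int[][] y) {
        int m = a.length, p = a[0].length, c = y[0].length;
        int[][] sums = new int[p][c];          // temporary array of sums (Table 9)
        for (int i = 0; i < m; i++)
            for (int j = 0; j < p; j++)
                if (a[i][j] == 1)
                    for (int k = 0; k < c; k++)
                        sums[j][k] += y[i][k]; // count under the cell of machine i
        int[][] z = new int[p][c];
        for (int j = 0; j < p; j++) {          // keep the row maximum, zero the rest
            int best = 0;
            for (int k = 1; k < c; k++)
                if (sums[j][k] > sums[j][best]) best = k;
            z[j][best] = 1;
        }
        return z;
    }
}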

5.2. Generating Neighboring Solutions

To generate a neighboring solution, we consider the following inputs: machines = 4, parts = 5, cells = 2, $M_{\max} = 2$, matrix A, matrix Y, and matrix Z (see Table 13).

Step 1. Randomly choose a position in the matrix Y that has a value of 1 and work with the machine associated with that row. This is the same as selecting a machine and working with its row. In this case, we randomly chose a position of the matrix Y with a value of 1 corresponding to machine 2 (see Table 14).

Step 2. Then, on machine 2, we randomly pick a cell whose entry is 0. In this case, we choose that position of the matrix Y, replace its value 0 with 1, and finally replace the former value 1 of that row with 0. The result of this operation can be seen in Table 15. In the case that the constraints become inconsistent, the procedure must be repeated from Step 1.

Step 3. We carry out the construction of the matrix Z using the manual method explained in Section 5.1.
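
The whole neighborhood move can thus be sketched as a single machine-to-cell reassignment. The following is our illustrative Java code, not the paper's implementation; it makes the constraint check and the retry of Step 1 explicit.

// Illustrative sketch: move one randomly chosen machine to another cell
// (Steps 1 and 2 of Section 5.2); Z is then rebuilt with the manual method.
public final class NeighborMove {
    static int[][] neighborOf(int[][] y, int mMax, java.util.Random rnd) {
        int m = y.length, c = y[0].length;
        while (true) {                                  // repeat from Step 1 on failure
            int i = rnd.nextInt(m);                     // pick a machine (row of Y)
            int from = 0;
            while (y[i][from] == 0) from++;             // its current cell
            int to = rnd.nextInt(c);
            if (to == from) continue;                   // need a cell whose entry is 0
            int[][] yn = java.util.Arrays.stream(y).map(int[]::clone).toArray(int[][]::new);
            yn[i][from] = 0;                            // replace the former 1 with 0 ...
            yn[i][to] = 1;                              // ... and the chosen 0 with 1
            int count = 0;
            for (int r = 0; r < m; r++) count += yn[r][to];
            if (count <= mMax) return yn;               // M_max constraint satisfied
        }
    }
}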

5.3. Flowchart of Migrating Birds Optimization

This section describes the stages of the MBO algorithm, how it integrates with the MPCFP, and in which stages the parallel sorting algorithms are applied. For a better understanding, a flowchart has been included (see Figure 3), which is divided into 4 main phases: algorithm initialization, a tour process, leader replacement, and algorithm finalization.

5.3.1. Algorithm Initialization

Parameter Initialization (Stage 1 in Figure 3). At this stage, all the values required by the MBO metaheuristic are initialized, together with all the values required by the MPCFP and the type of sorting algorithm to be used in the metaheuristic.

Among the metaheuristic values, there are (i) the number of birds (n), (ii) the number of neighbors (k), (iii) the number of tours (m), (iv) the number of shared solutions (x), and (v) the number of iterations (K).

Among the complementary metaheuristic parameters, we can include (i) the leader exchange mode, where there are two types of leader change: exchanging the leader with its successor and exchanging the leader with the best bird; (ii) the sort type applied to the initial flock, where the birds are either placed randomly in the flock or ordered according to their fitness.

Among the parameters required for the MPCFP, we have (i) the incidence matrix (A), (ii) the number of machines (M), (iii) the number of parts (P), (iv) the number of cells (C), and (v) the maximum number of machines in a cell ($M_{\max}$).

Finally, for the types of sorting applied to MBO, we have the following parameters: (i) the type of sorting algorithm: quick sort or merge sort; (ii) the sorting technique: sequential or with threads; (iii) the number of threads, if required.

Generate Initial Solutions (Stage 2 in Figure 3). At this stage, we generate initial solutions according to the method described in Section 5.1. These solutions must satisfy the three constraints of the MPCFP model.

Create Initial Flock (Stage 3 in Figure 3). From the initial solutions, we create an array of birds, and the ordered array finally forms the conceptual V-formation of birds. For this, we assign the first position (first solution) in the array as the leader of the flock, and we distribute the remaining birds alternately between the two sides of the flock. We must consider that the birds are always sorted by fitness from low to high, the bird with the lowest fitness being the best solution to the problem (the MPCFP is a minimization problem). Furthermore, this is one of the stages in which we incorporate parallel sorting by quick sort and merge sort.
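
The construction of the flock can be sketched as follows. This is our illustrative Java code, not the paper's implementation; the sort call is the point where a sequential or parallel sorting algorithm is plugged in, and the Bird class is the sketch given at the beginning of Section 5.

import java.util.Comparator;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: build the V-formation from the initial solutions.
public final class FlockBuilder {
    static Bird buildFlock(List<Bird> birds, Deque<Bird> left, Deque<Bird> right) {
        birds.sort(Comparator.comparingInt(b -> b.fitness)); // lowest fitness first
        for (int i = 1; i < birds.size(); i++) {             // alternate the two sides
            if (i % 2 == 1) left.addLast(birds.get(i));
            else right.addLast(birds.get(i));
        }
        return birds.get(0);                                 // the leader of the flock
    }
}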

5.3.2. A Tour Process

This phase has two internal subphases: the evolution of the leader bird and the evolution of the other birds. The evolution of the leader bird has one stage (generating neighboring solutions for the leader), and the evolution of the other birds has two stages (generating neighboring solutions for the other birds and sharing the neighboring solutions that are not used by the leader with the other birds).

Generate Neighboring Solutions for the Leader (Stage 4 in Figure 3). This stage consists of generating neighboring solutions according to the steps outlined in Section 5.2. Subsequently, these neighboring solutions are sorted using a parallel sorting algorithm; see line (5) of the MBO algorithm (Algorithm 1).

Generate Neighboring Solutions for the Other Birds (Stage 5 in Figure 3). In this stage, we create neighboring solutions for each bird of the flock; these neighbors are then sorted by a parallel sorting algorithm; see lines (7) and (8) of the MBO algorithm (Algorithm 1).

Share Neighboring Solutions That Are Not Used by the Leader with the Other Birds (Stage 6 in Figure 3). The remaining neighboring solutions of the leader are shared with the two sides of the V-formation, alternating from left to right or from right to left. After sharing the neighboring solutions, we perform a sort, using a parallel sorting algorithm, on all the neighboring solutions held by every bird in the flock (not including the leader).

Leader Replacement (Stage 7 in Figure 3). At this stage, we perform a replacement of the leader; for this, we choose the nearest successor, that is, one of the birds closest to the leader. The successor takes its place and the leader moves to the end of the flock. We must take into consideration that if we choose a successor belonging to the left section of the flock, the leader passes to the end of the left section; the same procedure applies to the right section of the flock (see Algorithm 2).

(1)   sortType = Get_Type_Of_Sort()
(2)   boolean side = Random_Boolean()
(3)   Main Function Replace_Leader_With_Successor() is
(4)    if side then
(5)     Change_Leader_With_Left_Birds()
(6)     Sort_Left_Birds(sortType)
(7)    else
(8)     Change_Leader_With_Right_Birds()
(9)     Sort_Right_Birds(sortType)
(10)   end
(11)   side = not side
(12)  end

Algorithm Finalization (Stage 8 in Figure 3). In this step, the fitness of the leader is obtained; in addition, a procedure to build an explicit response matrix is performed; that is, we construct an incidence matrix according to the order given by the solution matrices Y and Z, ordering machines and parts by cell index from the lowest to the highest (see Algorithm 3).

(1)   Main Function Convert_to_final_matrix(A, Y, Z) is
(2)    row = 0, col = 0, B = [], F = []
(3)    /* Sort machines */
(4)    for k = 0; k < C; k++ do
(5)     for i = 0; i < M; i++ do
(6)      if Y[i][k] = 1 then
(7)       for j = 0; j < P; j++ do
(8)        B[row][j] = A[i][j]
(9)       end
(10)      row = row + 1
(11)     end
(12)    end
(13)   end
(14)   /* Sort parts */
(15)   for k = 0; k < C; k++ do
(16)    for j = 0; j < P; j++ do
(17)     if Z[j][k] = 1 then
(18)      for i = 0; i < M; i++ do
(19)       F[i][col] = B[i][j]
(20)      end
(21)      col = col + 1
(22)     end
(23)    end
(24)   end
(25)   return F
(26)  end

5.3.3. Sorting Algorithm

For MBO, we use two sorting algorithms, merge sort and quick sort, which we executed both sequentially and in parallel using threads.

Merge Sort. It was developed in 1945 by John von Neumann. It is a sorting algorithm of complexity $O(n \log n)$ (see Table 16). Algorithm 4 illustrates a general pseudocode for merge sort.

(1)   Main Function MergeSort(list) is
(2)    Object result, left, right
(3)    if length(list) <= 1 then
(4)     return list
(5)    else
(6)     middle = length(list) / 2
(7)     for i = 0 to middle - 1 do
(8)      add list[i] to left
(9)     end
(10)    for i = middle to length(list) - 1 do
(11)     add list[i] to right
(12)    end
(13)    left = MergeSort(left)
(14)    right = MergeSort(right)
(15)    result = Merge(left, right)
(16)    return result
(17)   end
(18)  end
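
As a concrete reference, the following is a minimal Java sketch of a threaded merge sort in the spirit of the configurations tested in this paper. It is our illustrative code, not the paper's implementation: the two halves are sorted in parallel while a thread budget remains.

import java.util.Arrays;

// Illustrative sketch of a threaded merge sort: each call sorts the two
// halves in parallel as long as a thread budget remains.
public final class ParallelMergeSort {
    public static void sort(int[] a, int threadBudget) {
        if (a.length < 2) return;
        int[] left = Arrays.copyOfRange(a, 0, a.length / 2);
        int[] right = Arrays.copyOfRange(a, a.length / 2, a.length);
        if (threadBudget > 1) {                    // spend part of the budget on the left half
            Thread t = new Thread(() -> sort(left, threadBudget / 2));
            t.start();
            sort(right, threadBudget - threadBudget / 2);
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        } else {                                   // no budget left: plain sequential recursion
            sort(left, 1);
            sort(right, 1);
        }
        merge(left, right, a);                     // merge the two sorted halves into a
    }

    private static void merge(int[] l, int[] r, int[] out) {
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length) out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
    }
}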

Quick Sort. It was developed in 1959 by Charles Richard Hoare. It is a sorting algorithm of average complexity $O(n \log n)$ (see Table 16). Algorithm 5 illustrates a general pseudocode for quick sort.

(1)   Main Function QuickSort(A, low, high) is
(2)    if low < high then
(3)     p = Partition(A, low, high)
(4)     QuickSort(A, low, p - 1)
(5)     QuickSort(A, p + 1, high)
(6)    end
(7)   end
(8)   Function Partition(A, low, high) is
(9)    Object pivot = A[low]
(10)   leftwall = low
(11)   for i = low + 1 to high do
(12)    if A[i] < pivot then
(13)     Swap(A[i], A[leftwall + 1])
(14)     leftwall = leftwall + 1
(15)    end
(16)   end
(17)   Swap(A[low], A[leftwall])
(18)   return leftwall
(19)  end
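
Likewise, a parallel quick sort can be sketched with Java's fork/join framework. Again, this is our illustrative code, not the paper's implementation: the partition step of Algorithm 5 stays sequential, while the two sides are sorted as parallel subtasks.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Illustrative fork/join sketch of a parallel quick sort; the partition
// scheme follows Algorithm 5 (pivot at a[low], leftwall tracks smaller items).
public final class ParallelQuickSort extends RecursiveAction {
    private final int[] a;
    private final int low, high;

    ParallelQuickSort(int[] a, int low, int high) {
        this.a = a; this.low = low; this.high = high;
    }

    @Override
    protected void compute() {
        if (low < high) {
            int p = partition(a, low, high);
            invokeAll(new ParallelQuickSort(a, low, p - 1),   // sort both sides in parallel
                      new ParallelQuickSort(a, p + 1, high));
        }
    }

    private static int partition(int[] a, int low, int high) {
        int pivot = a[low], leftwall = low;
        for (int i = low + 1; i <= high; i++) {
            if (a[i] < pivot) {
                leftwall++;
                int tmp = a[i]; a[i] = a[leftwall]; a[leftwall] = tmp;
            }
        }
        int tmp = a[low]; a[low] = a[leftwall]; a[leftwall] = tmp;
        return leftwall;
    }

    public static void sort(int[] a, int parallelism) {
        new ForkJoinPool(parallelism).invoke(new ParallelQuickSort(a, 0, a.length - 1));
    }
}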

6. Computational Experiments

6.1. Configuration Benchmarks

The MPCFP performance was evaluated experimentally using 90 test instances from Boctor's experiments, composed of 10 problems considering 5 values of $M_{\max}$ with 2 cells and the same 10 problems considering 4 values of $M_{\max}$ with 3 cells. The test data are available in [55]. The 90 instances have been executed 31 times using 12 different configurations of sequential and parallel sorting (see Table 18). We must also consider that the sorting algorithms (sequential and parallel) were applied in the following stages of MBO (see Figure 3): (i) stage 3, create initial flock; (ii) stage 4, generate neighboring solutions for the leader; (iii) stage 5, generate neighboring solutions for the other birds; (iv) stage 6, share neighboring solutions that are not used by the leader with the other birds.

The optimization algorithm using parallel sorting was coded in Java version 1.8.0_40 using the Eclipse IDE for Java Developers, Mars.1 Release (4.5.1). For running MBO with parallel sorting, we used the Java(TM) SE Runtime Environment (build 1.8.0_40-b27). The parameter values of MBO are described in Table 17. Finally, the instances were run on a MacBook Pro (Retina, 13-inch, late 2013) with an Intel Core i5 (processor speed = 2.4 GHz; number of processors = 1; total number of cores = 2; L2 cache per core = 256 KB; L3 cache = 3 MB), 4 GB of 1600 MHz DDR3 RAM, and an Intel Iris 1536 MB video card, running OS X El Capitan version 10.11.2 (15C50).

6.2. Results

Tables 19 and 20 summarize the results provided in Appendix A [55]. These summary tables describe the average results over the 90 instances. Column 1 (threads) corresponds to the number of threads used in the search algorithm. Column 2 (execution time-BET) shows the best execution time obtained, in milliseconds. Column 3 (execution time-AVG) describes the average execution time of MBO, in milliseconds. Column 4 (execution time-ST) represents the standard deviation of the execution times obtained by MBO. Column 5 (cycle-BC) shows the cycle in which the optimal value was found. Column 6 (cycle-AVG) describes the average such cycle. Column 7 (cycle-ST) represents its standard deviation. Column 8 (time cycle-BTC) shows the best cycle time in which the optimal value was found, in milliseconds. Column 9 (time cycle-AVG) describes the average cycle time, in milliseconds. Column 10 (time cycle-ST) represents its standard deviation, in milliseconds. Finally, we note that all tests achieved the global optimum.

Execution Time Analysis. The execution time is the total time during which the metaheuristic was exploring the search space for the best solution. In this sense, according to Tables 19 and 20, the merge sort algorithm using 8 threads had the lowest average time (152.7333 ms) to run the 90 instances (see column 2: execution time-BET in Tables 19 and 20). In addition, when we compare the running times of the sequential metaheuristic (without threads, using the merge sort algorithm) with the best threaded configuration (8 threads), we can see that using 8 threads improves the execution time of the metaheuristic. This behavior can also be seen in column 3 (execution time-AVG) of Tables 19 and 20.

Cycle Analysis. A cycle is completed after the leader of the flock is replaced (stage 7 in Figure 3). With the iteration parameter set to 2591 (K), 38 cycles are achieved; consequently, the leader of the flock changes 38 times during the algorithm. Essentially, the cycles in which the best value was obtained before the execution finished give us information for reducing the iteration limit assigned as an input parameter of the metaheuristic. Considering the values of the cycles (see column 6: cycle-AVG in Tables 19 and 20), we can estimate an average of 9 cycles to find a global optimum, using both merge sort and quick sort, with and without threads. This suggests reducing the value of the iteration limit K. However, we must also consider the random factor present in the metaheuristic (see stages 2, 4, and 5 in Figure 3); this randomness opens the possibility of finding a global optimum with fewer cycles (see column 5: cycle-BC in Tables 19 and 20). Therefore, this is a recommended change, but it is not enough to always find a global optimum. The quick sort algorithm using threads shows irregular times and is sometimes even much worse than quick sort used sequentially.

Time Cycle Analysis. In relation to the average cycle time in which the best optimum was found, we note that, according to the tests, the merge sort algorithm with 8 threads used the least time (see column 9: time cycle-AVG in Tables 19 and 20). These results are interesting, as better times were achieved using parallel sort versus sequential sort, mainly by using merge sort with 4, 8, and 16 threads and quick sort with 16 threads (see column 8: time cycle-BTC in Tables 19 and 20).

7. Conclusions

In this paper, we use a Migrating Birds Optimization algorithm with parallel sorting to solve Machine-Part Cell Formation Problems. Computational experiments using conventional hardware resources have been conducted on 90 test instances, 2 well-known sorting algorithms, 1 sequential configuration, and 5 thread configurations (1, 4, 8, 16, and 32), giving a total of 1080 benchmarks. Among the main results, we highlight a new configuration of the MBO parameters to solve the MPCFP, concerning the number of birds (n), the number of neighbors (k), the number of tours (m), the number of shared solutions (x), and the number of iterations (K), with the values reported in Table 17. We observed a decrease in cycles from 38 to 9, which means that by the time the leader of the flock reaches cycle number 9 there is a very high probability of having found the global optimum; therefore, the iteration limit K can be reduced accordingly. This observation about the number of cycles applies to all benchmarks.

In addition, it is important to note that the quick sort algorithm has variable times in the execution of MBO (see column 9: time cycle-AVG in Table 20). There were cases in which running the quick sort algorithm without threads was even more favorable than running it with threads. With the merge sort algorithm, on the other hand, we found promising results: the merge sort algorithm using 8 threads shows a reduction in execution time compared to the sequential sorting algorithm.

Among the guidelines for future research, we can point to applying different ways of generating initial solutions as well as neighboring solutions. We can also compare other sorting algorithms implemented in MBO.

Appendix

A. Details of the Benchmarks

A.1. Boctor’s Problems

Table 21 describes the configuration of the 90 test instances. The table is given by the following attributes: Column 1 (ID) corresponds to the identifier assigned to each instance. Column 2 (Boctor's problem) represents the identifier of the 10 Boctor's problems. Column 3 (cell) is the number of cells. Column 4 ($M_{\max}$) corresponds to the maximum number of machines per cell. Column 5 (OPT) depicts the global optimum for the given problem.

A.2. Experimental Results

Tables 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, and 33 contrast the times obtained by using the different sorting algorithms. These tables have the same headers, which are described below: Column 1 (ID) corresponds to the identifier assigned to each instance. Column 2 (OPT) depicts the optimum value for the given problem. Column 3 (optimum-MBO) depicts the best value reached by using Migrating Birds Optimization. Column 4 (optimum-AVG) depicts the average value of 31 executions. Column 5 (optimum-SD) represents the standard deviation. Column 6 (optimum-RPD%) represents the difference between the best known optimum value and the best optimum value reached by MBO, in percentage terms. Column 7 (execution time-BET) shows the best execution time obtained, in milliseconds. Column 8 (execution time-AVG) describes the average execution time of MBO, in milliseconds. Column 9 (execution time-ST) represents the standard deviation of the execution times obtained by MBO. Column 10 (cycle-BC) shows the cycle in which the optimal value was found. Column 11 (cycle-AVG) describes the average such cycle. Column 12 (cycle-ST) represents its standard deviation. Column 13 (time cycle-BTC) shows the best cycle time in which the optimal value was found, in milliseconds. Column 14 (time cycle-AVG) describes the average cycle time, in milliseconds. Column 15 (time cycle-ST) represents its standard deviation, in milliseconds. Finally, each table reports the average for each column.

Competing Interests

The authors declare that they have no competing interests regarding the publication of this paper.

Acknowledgments

Boris Almonacid is supported by Postgraduate Grant Pontificia Universidad Católica de Valparaíso 2015 (INF-PUCV 2015). Ricardo Soto is supported by Grant CONICYT/FONDECYT/INICIACION/11130459. Broderick Crawford is supported by Grant CONICYT/FONDECYT/REGULAR/1140897. Fernando Paredes is supported by Grant CONICYT/FONDECYT/1130455.