Abstract

In the real world, there are many optimization problems whose search space is restricted to binary values; at the same time, there are many continuous metaheuristics with good results in continuous search spaces. These algorithms must be adapted to solve binary problems. This paper surveys articles focused on the binarization of metaheuristics designed for continuous optimization.

1. Introduction

In practically any activity that is performed, resources are scarce; thus, we must utilize them properly. To this end, we can use optimization techniques. Such problems are common in engineering, economics, machine learning, agriculture, and other areas. We find applications in learning automata in dynamic environments [1], optimum design of structures [2], load dispatch problems [3], optimization of directional overcurrent relay times [4], two-dimensional intermittent search processes [5], and control and risk monitoring [6], among other real problems in industry.

Some models are logical only if the variables take on values from a discrete set, often a subset of integers, whereas other models contain variables that can take on any real value. Models with discrete variables are discrete optimization problems, and models with continuous variables are continuous optimization problems. In general, continuous optimization problems tend to be easier to solve than discrete optimization problems; the smoothness of the functions means that the objective function and constraint values at a point $x$ can be used to deduce information about points in a neighborhood of $x$. However, improvements in algorithms and in computing technology have dramatically increased the size and complexity of discrete optimization problems that can be solved efficiently.

Discrete optimization, that is, the identification of the best arrangement or selection of a finite number of discrete possibilities [7], has its origin in the economic challenge of efficiently utilizing scarce resources and effectively planning and managing operations. The decision problems in the field of operations management were among the first to be modeled as discrete optimization problems, for example, the sequencing of machines, the scheduling of production, or the design and layout of production facilities [8]. Today, discrete optimization problems are recognized in all areas of management when referring to the minimization of cost, time, or risk or the maximization of profit, quality, or efficiency [9]. Typical examples of such problems are variants of assignment and scheduling problems, location problems, facility layout problems, set partitioning and set covering problems, inventory control, and traveling salesman or vehicle routing problems [10], among others.

The difficulty level of such optimization problems is conceptualized by the theory of computational complexity [11, 12]. In this context, two complexity classes are of particular interest: P and NP (whereby the inclusion $\mathrm{P} \subseteq \mathrm{NP}$ holds). The problem class P contains all decision problems that can be solved in polynomial time in the size of the input on a deterministic sequential machine. These problems are considered to be easy and efficiently solvable [13]. The class NP contains all decision problems that can be solved in polynomial time on a nondeterministic machine.

A nondeterministic machine has two stages: the first is the guessing stage and the second is the checking stage. For problems in class NP, this checking stage is computable in polynomial time. A subclass of problems in NP is called NP-complete. A problem $L$ is said to be NP-complete if it belongs to class NP and, additionally, any other problem $L' \in \mathrm{NP}$ is polynomial-time reducible to $L$. Finally, an NP-hard problem is one that does not necessarily belong to NP but to which any problem in NP is polynomial-time reducible.

Many important discrete optimization problems are known to be NP-hard; that is, in the worst case, the time required to solve a problem instance to optimality increases exponentially with its size; hence, these problems are easy to describe and understand but are difficult to solve. Even for problems of moderate size, it is practically impossible to determine all possibilities to identify the optimum. Consequently, heuristic approaches, that is, approximate solution algorithms, are considered to be the only reasonable way to solve difficult discrete optimization problems. Accordingly, there is a vast and still growing body of research on metaheuristics for discrete optimization that aim at balancing the trade-off between computation time and solution quality [14].

Metaheuristics provide general frameworks for the creation of heuristic algorithms based on principles borrowed from classical heuristics, artificial intelligence, biological evolution, nervous systems, mathematical and physical sciences, and statistical mechanics. Although metaheuristics have proven their potential to identify high-quality solutions for many complex real-life discrete optimization problems from different domains, the effectiveness of any heuristic strongly depends on its specific design [15]. Hence, the abilities of researchers and practitioners to construct and parameterize heuristic algorithms strongly impact algorithmic performance in terms of solution quality and computation times. Consequently, there is a need for a deeper understanding of how heuristics need to be designed such that they achieve high effectiveness when searching the solution spaces of discrete optimization problems.

However, many of the well-known metaheuristics originally work on continuous spaces because they can be formulated naturally in a real domain; examples of these metaheuristics are particle swarm optimization (PSO) [16], the magnetic optimization algorithm (MOA) [17], cuckoo search (CS) [18], the firefly algorithm (FA) [19], galaxy-based search (GS) [20], earthworm optimization (EW) [21], lightning search (LS) [22], moth-flame optimization (MF) [23], sine cosine (SC) [24], and black hole (BH) [25]. Nevertheless, researchers have been developing binary versions that make these metaheuristics capable of operating in binary spaces. There are different methods for developing the binary version of a continuous heuristic algorithm while preserving the principles inspiring the search process. Examples of such binarizations are harmony search (HS) [26], the differential evolution algorithm (DE) [27–30], particle swarm optimization (PSO) [31], the magnetic optimization algorithm (MOA) [32], the gravitational search algorithm (GSA) [33], the firefly algorithm (FA) [34], the shuffled frog leaping algorithm (FLA) [35], the fruit fly optimization algorithm (FFA) [36], the cuckoo search algorithm (CSA) [37], the cat swarm optimization algorithm (CSOA) [38], the bat algorithm [39], the black hole algorithm [40], the algae algorithm (AA) [41], and fireworks [42].

In contrast to the continuous-to-binary approaches, we also find in the literature the inverse transformation, that is, from discrete techniques to continuous ones [43, 44]. This inverse approach uses the concept of a probability density function and its associated cumulative distribution function: applying the inverse of the cumulative distribution function to uniformly distributed real numbers produces samples from the corresponding distribution. This process is a general way to transform discrete metaheuristics into continuous metaheuristics. Typical probability density distributions were proposed in [45, 46].

This article is a review of the main binarization methods used to put continuous metaheuristics to work in binary search spaces. The remainder of this paper is organized as follows. In Section 2, we present the main optimization problem definitions, the principal optimization techniques, and the different types of variables. Section 3 provides a definition of metaheuristics. In Section 4, we describe the main criteria for transforming continuous metaheuristics into discrete metaheuristics. Section 5 presents the most frequently used techniques for binarizing continuous metaheuristics. In the discussion in Section 6, we summarize and analyze the techniques and problems in terms of the number of articles published. The conclusions are outlined in Section 7, which presents a summary table comparing metaheuristics and discretization or binarization techniques.

2. Concepts and Notations

This section establishes the definitions and notations required for understanding the discretization and binarization techniques. For this purpose, we need to define some basic concepts.

2.1. Optimization Problem

The main goal of optimization metaheuristics is to solve an optimization problem. An optimization problem corresponds to a pair of a search space $S$ and an objective function $f$. This pair is denoted as $(S, f)$, where $S$ is generally a vector space and $f: S \to \mathbb{R}$. Let $x \in S$ be a feasible solution of the optimization problem. When minimizing, solving the optimization problem corresponds to finding a solution $x^* \in S$ such that $f(x^*) \le f(x)$ for all $x \in S$. A maximization problem can be transformed into a minimization problem by multiplying the objective function by $-1$.

2.1.1. Search Space

A search space, $S$, is the set of all possible points or solutions of the optimization problem that satisfy the problem’s constraints. When we classify the parameters that make up each point of the solution, there are two groups. The first group corresponds to parameters with an unordered domain. These parameters do not have an exploitable structure; that is, they do not naturally have a metric, an order, or a partial order, and therefore it is not feasible to use optimization methods to find optimal values. The only option in these cases is to use sampling [47]. The second group corresponds to parameters that naturally have a structure such as a metric, an order, or a partial order. In this case, we can take advantage of this structure and use optimization methods to find optimal values. Within this second group, we often find parameters of real, discrete, or binary type. In terms of these parameters, optimization problems can be classified as real optimization problems ($S \subseteq \mathbb{R}^n$), discrete optimization problems ($S \subseteq \mathbb{Z}^n$), binary problems ($S \subseteq \{0, 1\}^n$), and mixed problems. Since our review is directly linked to continuous, discrete, and binary optimization methods, from now on we focus on these types of parameters.

2.1.2. Neighborhood

Let $(S, f)$ be an optimization problem. A neighborhood structure is a function $N: S \to 2^S$, where $2^S$ is the power set of $S$. The function $N$ assigns to each element $x \in S$ a set $N(x) \subseteq S$, where $N(x)$ is the neighborhood of $x$.

2.1.3. Local Optimum

Let $(S, f)$ be an optimization problem and $N(x)$ be the neighborhood of $x$, $x \in S$. A point $\hat{x} \in S$ is a local optimum (minimum) if it satisfies the following inequality: $f(\hat{x}) \le f(x)$ for all $x \in N(\hat{x})$.

2.2. Optimization Techniques

There are several optimization techniques, as shown in the overview in Figure 1. We can group them into complete techniques and approximate or incomplete techniques. Without pretending to be exhaustive in the classification, we mention what are, to our understanding, the main techniques, giving more detail to the complete techniques of integer programming due to their proximity to combinatorial problems. The exact or complete techniques are those that find an optimal result regardless of the processing time. For integer programming, the typical techniques are branch-and-cut and branch-and-bound. Many combinatorial optimization problems can be formulated as mixed integer linear programming problems. They can then be solved using branch-and-cut or branch-and-bound methods; branch-and-cut is an exact algorithm that combines a cutting plane method with a branch-and-bound algorithm. These methods work by solving a sequence of linear programming relaxations of the integer programming problem. Cutting plane methods improve the relaxation so that it more closely approximates the integer programming problem, and branch-and-bound algorithms proceed by a sophisticated divide-and-conquer approach. Unfortunately, when the problems are NP-hard and the size of the instance grows, these algorithms do not provide good results. On the other hand, the incomplete techniques are those that find a good solution, not necessarily the best, in a short processing time. These techniques better fit the actual conditions of real problems since, in daily life, solutions are required within a given time. Within the approximate or incomplete techniques, we find metaheuristics.

In general terms, a metaheuristic attempts to find values for the variables that will provide an optimum value of the objective function.

2.3. Search Space

As our focus is on continuous, discrete, and binary search spaces (see Section 2.1, search space), a solution in this context can be classified into three categories, as follows [48]:
(i) Continuous Variables. The variable can take any value in a given interval.
(ii) Discrete Variables. The variable may take only integer or binary values.
(iii) Mixed Variables. The variables combine real, integer, or binary values; such a problem is called a mixed problem.

Discrete variables arise in many optimization problems, for example, in manufacturing [49], cutting and packing problems [50], integer programming [51], and quadratic assignment [52]. A common reason for discrete variables occurring is when the resources of interest are quantified in terms of integers or Boolean values, for example, in production lines, scheduling processes, or resource assignments. There is a set of classic problems that can be treated in binary form, such as the well-known knapsack problem [53], the set covering problem [54], and the traveling salesman problem [55].

For example, in the knapsack problem, the $i$th item has weight $w_i$ and value $v_i$. The objective is to maximize the total value of the items placed in the knapsack subject to the constraint that the weight of the items does not exceed a limit $W$. To formulate this problem, one can let $x_i$ be the binary variable such that $x_i = 1$ if item $i$ is placed in the knapsack and $x_i = 0$ otherwise.

Then, we want to maximize $\sum_{i=1}^{n} v_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le W$, where $x_i \in \{0, 1\}$.
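To make the formulation concrete, the following is a minimal Python sketch of evaluating a candidate binary solution for the knapsack model above; the weights, values, and capacity are illustrative placeholders, not data from any cited study.

```python
# Minimal sketch of the 0/1 knapsack formulation described above.
# The weights, values, and capacity below are illustrative placeholders.

def knapsack_objective(x, values, weights, capacity):
    """Return the total value of a binary solution x, or None if infeasible."""
    total_weight = sum(w * xi for w, xi in zip(weights, x))
    if total_weight > capacity:
        return None  # the weight constraint is violated
    return sum(v * xi for v, xi in zip(values, x))

values = [60, 100, 120]   # v_i: value of item i (illustrative)
weights = [10, 20, 30]    # w_i: weight of item i (illustrative)
capacity = 50             # W: knapsack capacity (illustrative)

print(knapsack_objective([1, 1, 0], values, weights, capacity))  # feasible: 160
print(knapsack_objective([1, 1, 1], values, weights, capacity))  # weight 60 > 50: None
```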

Another situation where the use of discrete variables is appropriate is when we need to manage constraints that involve logical conditions. For example, suppose that we want at least one of two linear constraints, $g_1(x) \le 0$ and $g_2(x) \le 0$, to hold and that we also want to preserve the linearity of the problem. This can be achieved by including the linear constraints $g_1(x) \le My$ and $g_2(x) \le M(1 - y)$, where $y$ is a binary variable and $M$ is a sufficiently large positive number that does not affect the feasibility of the problem. By this definition of $y$, if $y = 0$, then we will have $g_1(x) \le 0$ and $g_2(x) \le M$, whereas if $y = 1$, we will have $g_1(x) \le M$ and $g_2(x) \le 0$.

Another common situation that requires integer variables is when the problem involves set-up costs. As an example, consider a generator that supplies electricity to a local region with $n$ nodes over $T$ periods. Suppose that at period $t$ the generator incurs a cost $a_t$ when it is turned on, a cost $b_t$ for producing electricity after it is turned on, a cost $c_{jt}$ for supplying electricity to node $j$ after it is turned on, and a cost $e_t$ for shutting it down. For $t = 1, \ldots, T$ and $j = 1, \ldots, n$, let $u_t$, $v_t$, $y_{jt}$, and $z_t$ denote the binary variables that take the value 1 when, respectively, the generator is turned on, produces electricity, supplies node $j$, or is shut down in period $t$, and 0 otherwise.

If we let $p_{jt}$ be variables that represent the percentage of the generator’s capacity for node $j$ that is used in period $t$, then the total cost incurred would be $\sum_{t=1}^{T} \big( a_t u_t + b_t v_t + e_t z_t + \sum_{j=1}^{n} c_{jt} y_{jt} \big)$. The objective is to minimize the total costs.

3. Metaheuristics

A metaheuristic is formally defined as an iterative generation process that guides a subordinate heuristic by intelligently combining different concepts for exploring and exploiting the search space, using learning strategies to structure information in order to efficiently find near-optimal solutions [14, 56]. The fundamental properties that characterize the set of metaheuristic algorithms are as follows:
(i) Metaheuristics are higher level strategies that guide the search process.
(ii) The goal is to efficiently explore the search space to find (quasi)optimal solutions.
(iii) Metaheuristic algorithms are approximate and generally nondeterministic.
(iv) The basic concepts of metaheuristics permit an abstract level of description.
(v) Metaheuristics are not problem specific.
(vi) Metaheuristics may utilize domain-specific knowledge in the form of heuristics that are controlled by the upper level strategy.
(vii) Today, more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

3.1. Metaheuristic Classification

(i) Nature Inspired versus Non-Nature Inspired Algorithms. This is perhaps the most natural way to classify metaheuristics since it is based on the origins of the algorithm, taking into account whether the model has been inspired by nature. There are bioinspired algorithms, such as genetic algorithms (GAs) and ant colony optimization (ACO), and non-nature inspired ones, such as tabu search (TS) and iterated local search (ILS). This classification has become less meaningful with the emergence of hybrid algorithms.
(ii) Population-Based versus Single-Point Search (Trajectory). In this case, the characteristic used for the classification is the number of solutions used at the same time. On the one hand, single-point search algorithms work on a single solution, describing a trajectory in the search space during the search process. They encompass local search-based metaheuristics, such as variable neighborhood search (VNS), tabu search (TS), and iterated local search (ILS). On the other hand, population-based methods work on a set of solutions (points) called a population.
(iii) Static versus Dynamic Objective Function. Algorithms that keep the objective function given in the problem throughout the entire process are called metaheuristics with static objective functions. However, there are other algorithms with dynamic objective functions, such as guided local search (GLS), which modify the fitness function during the search, incorporating information collected during the search process in order to escape from local optima.

Many metaheuristic techniques are formulated in a real vector space [16, 57–61]; as such, they cannot directly solve discrete or binary optimization problems. Many methods have been proposed that allow the use of a real-valued optimization metaheuristic on discrete or binary problems. These methods are called discretization if the method adapts the real-valued technique to solve integer problems and binarization if the method solves binary optimization problems. In the next sections, we propose and explain a grouping of the main discretization and binarization techniques.

4. Discretization of Continuous Metaheuristics

There are many problems that require discrete search spaces [62–64]. While investigating these techniques, we found many names. However, they can be classified into three main groups:
(i) Rounding-off generic techniques.
(ii) Priority position techniques associated with scheduling problems.
(iii) Specific techniques associated with metaheuristic discretizations.

4.1. Rounding off Generic Techniques

This approach is one of the most commonly used approaches for managing discrete variables due to its simplicity and low computational cost. It is based on the strategy of rounding to the nearest integer. It was first used in [65] in the optimization of reactive power and voltage control as a discretization method.

The rounding-off operator transforms a continuous solution into a discrete one by rounding each component to the nearest integer. The metaheuristic operators are used without modifications, and two strategies exist to implement the discretization. The first strategy applies the round-to-nearest-integer operation to the solution in every iteration. In the second approach, it is applied only at the end of the optimization process.
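A minimal sketch of the rounding-off operator, assuming a generic continuous solution vector; the two strategies above differ only in where this function is called.

```python
# Minimal sketch of the rounding-off discretization, assuming a generic
# continuous metaheuristic whose solutions live in R^n.

def round_off(x):
    """Map a continuous solution to the nearest integer point."""
    return [int(round(xi)) for xi in x]

# Strategy 1: round in every iteration (the discrete point feeds the next move).
# Strategy 2: keep the search continuous and round only the final solution.
x_continuous = [2.7, -0.3, 4.49]   # illustrative continuous solution
print(round_off(x_continuous))     # -> [3, 0, 4]
```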

There are multiple problems that use this method, for example, the optimization of a transport aircraft wing [64], the task assignment problem [62], and distribution system reconfiguration [63]. The main metaheuristics that use the round-off method are ant colony optimization [63], PSO [62, 64, 65], firefly [66], and artificial bee colony [66, 67]. The disadvantages of this method include the possibility that the rounded solution lies in an infeasible region. Moreover, the fitness value at the rounded point can be very different from that at the original point.

4.2. Priority Position Techniques: Random-Key or Small Value Position

The random-key encoding scheme is used to transform a position in a continuous space into a position in a discrete space. The random-key was first used in a scheduling problem in [68], where the solutions are elements of a continuous space that are decoded into permutations.

Let us start with a solution of $n$ dimensions. Each position is assigned a random number in $[0, 1]$, obtaining a real random-key solution. To decode the real random-key solution into a discrete space, the positions are visited in ascending order of their values, generating a discrete solution. An example is shown in Table 1.
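A minimal sketch of the random-key decoding described above; the key values are generated randomly here for illustration.

```python
import random

# Minimal sketch of random-key decoding: positions are visited in ascending
# order of their key values, producing a permutation (discrete solution).

def random_key_decode(keys):
    """Return the permutation obtained by sorting positions by key value."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

keys = [random.random() for _ in range(5)]   # continuous random-key solution
print(keys)
print(random_key_decode(keys))               # e.g. [3, 0, 4, 1, 2]
```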

This method has been used with the gravitational search algorithm [68, 69] to solve the traveling salesman problem and a scheduling problem. In both cases, the result of the gravitational algorithm is interpreted as a random-key, where small values map to the top positions.

The same method, but called small value position (SVP), was first used in [70] to solve the single-machine total weighted tardiness problem using a PSO algorithm. Later, [71] utilized SVP with the firefly algorithm to schedule jobs on grid computing, and [72] used SVP within the cuckoo search algorithm for the design optimization of reliable embedded systems. Additionally, we find this method in the first steps of great value priority; this binarization technique is explained in Section 5.

4.3. Metaheuristic Discretization

This method has been used in [73] to solve distribution system reconfiguration in the presence of distributed generation using a teaching-learning metaheuristic.

Let us start with a continuous metaheuristic that produces values in $[0, 1]$, or we can adapt a metaheuristic using functions that map $\mathbb{R}$ into $[0, 1]$, for example, a transfer function (see Section 5).

Let $m$ be the number of elements of our problem, which corresponds to the dimension of the search space. Let $x \in [0, 1]$ be a component of a result of the metaheuristic; multiplying by $m$, we obtain $mx \in [0, m]$. This real number is then discretized by rounding it to an integer.

This procedure allows us to map each component to discrete values between 1 and $m$. It has been applied to smart grids using teaching-learning-based optimization [73].
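The following sketch illustrates this mapping, assuming the discretization simply scales the value by $m$ and rounds up to the nearest integer; the exact rounding rule used in [73] may differ.

```python
import math

# Minimal sketch of the discretization described above, assuming a value x
# in [0, 1] is scaled by m and rounded up to an integer in {1, ..., m}.
# The exact rounding rule of the original equation may differ.

def discretize(x, m):
    """Map x in [0, 1] to an integer between 1 and m."""
    return max(1, math.ceil(x * m))

m = 10
for x in (0.0, 0.05, 0.37, 1.0):
    print(x, "->", discretize(x, m))   # 0.0->1, 0.05->1, 0.37->4, 1.0->10
```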

5. Binarization of Continuous Metaheuristics

In our study and conceptualization of binarization techniques, we found two main groups. The first group we call two-step binarization; these techniques allow working with the continuous metaheuristic without operator modifications and add two steps after the original continuous iteration, which perform the binarization of the continuous solution. The second group we call continuous-binary operator transformation; it redefines the algebra of the search space, thereby reformulating the operators.

5.1. Two-Step Binarization Technique

This technique works with the continuous operators without modifications. To perform the binarization, two additional steps are applied. The first step introduces operators that transform the solution from $\mathbb{R}^n$ into an intermediate space. For example, in great value priority, the intermediate space is the set of permutations of $\{1, \ldots, n\}$; in the case of a transfer function (TF), it is $[0, 1]^n$; and in angle modulation it is a space of trigonometric functions. The second step transforms the intermediate solution into a binary solution in $\{0, 1\}^n$. A general scheme is presented in Figure 2.

5.1.1. Transfer Function and Binarization

Transfer Function. In this technique, the first step corresponds to the transfer function, which is the most widely used normalization method and was introduced in [31]. The transfer function is a very cheap operator; its range provides probability values and attempts to model the transition of particle positions. This function is responsible for the first step of the binarization method, mapping $\mathbb{R}^n$ solutions into $[0, 1]^n$ solutions.

In our review, we found that two types of functions have been used in the literature: S-shaped functions [74–77], given in (7) to (10) and shown in Figure 3(a), and V-shaped functions [54, 78, 79], given in (11) to (14) and shown in Figure 3(b).

Let $x \in \mathbb{R}^n$ be a feasible solution to the problem; the transfer function $T$ is applied to each dimension $x_i$, obtaining an intermediate value $T(x_i) \in [0, 1]$. This transfer function defines the probability of changing a position (an assignment of a value to a variable or enumeration). According to [80], some concepts should be taken into account when selecting a transfer function to map velocity values to probabilities. Intuitively, an S-shaped transfer function should provide a high probability of changing the position for a large absolute value of the velocity; particles that have large absolute values for their velocities are probably far from the best solution and thus should switch their positions in the next iteration. It should also present a small probability of changing the position for a small absolute value of the velocity.

Binarization. The second step is the binarization step, in which the intermediate solution is transformed into a binary solution by applying a binarization rule. In the literature, we found the binarization rules given in (15) to (18). In the following, rand is a random number in $[0, 1]$ (a small sketch of the transfer function together with the standard rule is given after the list).
(i) Standard. If the condition rand $< T(x_i)$ is satisfied, the second-step operator returns the value 1, independent of the previous value; otherwise, it returns 0.
(ii) Complement. If the condition is satisfied, the second-step operator returns the complement of the current value.
(iii) Static Probability. A static probability transition value is generated and evaluated together with the transfer function.
(iv) Elitist. This binarization rule takes the value of the best individual of the population.
(v) Elitist Roulette. This rule, also known as Monte Carlo, selects the new value randomly among the best individuals of the population, as shown in Figure 4, with a probability proportional to their fitness.
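A minimal sketch of the two-step binarization, using the common sigmoid S-shaped and |tanh| V-shaped transfer functions and the standard rule; the specific functions (7)–(14) in the cited works may differ in their constants.

```python
import math
import random

# Minimal sketch of two-step binarization with a transfer function.
# The sigmoid S-shape and |tanh| V-shape are two commonly used transfer
# functions; the binarization follows the "standard" rule described above.

def s_shape(v):
    """S-shaped transfer function: maps a real value to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def v_shape(v):
    """V-shaped transfer function: high probability for large |v|."""
    return abs(math.tanh(v))

def standard_rule(prob):
    """Standard binarization rule: 1 if rand < prob, else 0."""
    return 1 if random.random() < prob else 0

velocity = [2.1, -0.4, 0.0, -3.3]                   # illustrative continuous solution
binary = [standard_rule(s_shape(v)) for v in velocity]
print(binary)                                        # e.g. [1, 0, 1, 0]
```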

In particle swarm optimization, this approach was first used in [31]; in [81], it was used to optimize the sizing of capacitor banks in radial distribution feeders; in [82], it was used for the analysis of a bulk power system; and, in [83], it was used for network reconfiguration. This approach was also used by Crawford et al. to solve the set covering problem with the binary firefly algorithm [54], the artificial bee colony algorithm [84], and the cuckoo search algorithm [37]. To solve the unit commitment problem, [77] used the firefly and PSO algorithms. The knapsack cryptosystem was approached in [74], the network and reliability constrained problem was solved in [78], and knapsack problems were solved in [41], all using the firefly algorithm. In [85], a teaching-learning optimization algorithm was used for designing plasmonic nanobipyramids based on the absorption coefficient.

5.1.2. Great Value Priority and Mapping

Great Value Priority. Great value priority (GVP) was introduced in [86] to solve a quadratic assignment problem by applying a particle swarm optimization (PSO) algorithm. This method encodes a continuous space into the binary space and has two principal properties: it is an injective mapping, and it reflects a priority order relation, which is suitable for assignment problems.

Let us start with the solution $x \in \mathbb{R}^n$; then, as a first step, we obtain a permutation $p$ of the positions $\{1, \ldots, n\}$. The GVP rule chooses the largest element of $x$ and places its position in the first element of $p$. Among the remaining elements, it chooses the largest and places its position in the second element of $p$, and so on until all the elements of the original vector have been visited.
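A minimal sketch of this first GVP step (the permutation by decreasing value); the second step, the mapping rule (19), is not reproduced here.

```python
# Minimal sketch of the first GVP step described above: the positions of the
# elements, visited from largest to smallest value, form a permutation.
# (The second step then maps this permutation to a binary vector via the
# rule referenced as (19) in the text, which is not reproduced here.)

def great_value_priority(x):
    """Return the permutation of positions ordered by decreasing value of x."""
    return sorted(range(len(x)), key=lambda i: x[i], reverse=True)

x = [0.3, 1.7, -0.2, 0.9]          # illustrative continuous solution
print(great_value_priority(x))     # -> [1, 3, 0, 2]
```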

Binarization. In this technique, the second step maps the permutation $p$ to a binary vector in $\{0, 1\}^n$. To obtain it, we apply the mapping rule shown in (19). The result is a binary solution of $n$ dimensions. A concrete example is presented in Table 2.

This technique has been used in other types of binary problems; for example, in [87], it was used to solve the antenna positioning (AP) problem using a binary bat algorithm. In this solution, the algorithm preserves the original operators and the Euclidean geometry of the space and adds a new module that maps real-valued solutions into binary ones. Unlike the quadratic assignment case, where the priority is intrinsic to the problem, in the AP problem the use of priority is clearly not suitable. The results were not conclusive; there were good and bad solutions for different instances.

5.1.3. Angle Modulation and Rule

Angle Modulation (AM). This approach was used in the telecommunications industry for phase and frequency modulation of the signal [88]. This method uses a trigonometric function that has four parameters, and these parameters control the frequency and shift of the trigonometric function.

In binary heuristic optimization, this method was first applied in PSO using a set of benchmark functions [89].

Consider an $n$-dimensional binary problem, and let $x \in \{0, 1\}^n$ be a solution. We start with a four-dimensional search space, where each dimension represents a coefficient of the trigonometric generating function in (20).

In the first step, from the four-dimensional space we obtain a function; specifically, every solution in this space is mapped to a trigonometric function that lies in a function space.

Binarization. In the second step, for each single element $x_j$, rule (24) is applied, obtaining an $n$-dimensional binary solution.

Then, for each initial 4-dimensional solution, we obtain a binary $n$-dimensional solution that is a feasible solution of our $n$-dimensional binary problem.
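A minimal sketch of angle modulation, assuming the four-coefficient generating function commonly used in the angle modulation literature and a sign-threshold rule; the exact forms of (20) and rule (24) may differ.

```python
import math

# Minimal sketch of angle modulation binarization. The four-coefficient
# trigonometric generating function below is the one commonly used in the
# angle modulation literature; the coefficient names (a, b, c, d) and the
# sign-threshold rule stand in for (20) and rule (24) in the text.

def generating_function(t, a, b, c, d):
    return math.sin(2 * math.pi * (t - a) * b * math.cos(2 * math.pi * (t - a) * c)) + d

def angle_modulation_binarize(coeffs, n):
    """Sample the generating function at n evenly spaced points and threshold at 0."""
    a, b, c, d = coeffs
    return [1 if generating_function(j, a, b, c, d) >= 0 else 0 for j in range(n)]

coeffs = (0.0, 0.5, 0.8, -0.1)          # illustrative 4-dimensional solution
print(angle_modulation_binarize(coeffs, 8))
```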

The angle modulation technique was applied to network reconfiguration problems in [90] using a binary PSO method. This technique was also applied in [91] to a multiuser detection technique using a binary adaptive evolution algorithm. The antenna positioning problem was solved in [87] using an angle modulation binary bat algorithm. Using PSO, [92] solved the $N$-queens problem, and, in [90], Liu et al. addressed large-scale power systems. In [93], a differential evolution algorithm was applied to knapsack problems, and, in [94], it was applied to binary problems. Artificial bee colony was applied to feature selection in [95] and to binary problems in [96].

5.2. Continuous-Binary Operator Transformation

These methods are characterized by redefining the operators of the metaheuristic, and there are two main groups. We call the first group modified algebraic operations; in this group, the algebraic operations of the search space are modified, and examples include the Boolean approach and the set-based approach. The second group is called promising regions; here the operators are restructured in terms of promising regions selected in the search space. This selection is performed using a probability vector. Examples of this group include the quantum-based binary approach and the binary method based on the estimation of distribution.

5.2.1. Modified Algebraic Operations

Boolean Approach (BA). This method belongs to the modified algebraic operations group. The real-valued operators are transformed into binary operators; this transformation is performed using Boolean operations, and the resulting operators act directly over binary solutions. This approach emerged as a binarization technique for particle swarm optimization [97]; subsequently, [98] incorporated the inertia weight.

In Figure 5, we show a concrete example of applying the velocity Boolean equation (22) to the position of a selected particle. The Boolean notation is as follows: “XOR” = $\oplus$, “AND” = $\wedge$, and “OR” = $\vee$. The velocity and position Boolean equations with inertia weight are presented in (22) and (23), respectively; $pbest$ and $gbest$ denote the best position of the selected particle and the position of the global best solution, respectively, and $r_1$ and $r_2$ are random binary vectors. $v^t$ corresponds to the velocity at time $t$.
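A minimal sketch of the Boolean velocity and position updates, written bitwise per dimension; it follows a common form of the equations with inertia weight and is only an approximation of (22) and (23).

```python
import random

# Minimal sketch of the Boolean approach, bitwise per dimension.
# This follows a common form of the Boolean velocity and position equations
# with inertia weight; the exact equations (22)-(23) in the text may differ.

def boolean_velocity(v, x, pbest, gbest, w, r1, r2):
    """v' = (w AND v) OR (r1 AND (pbest XOR x)) OR (r2 AND (gbest XOR x))."""
    return [(wi & vi) | (r1i & (pi ^ xi)) | (r2i & (gi ^ xi))
            for vi, xi, pi, gi, wi, r1i, r2i in zip(v, x, pbest, gbest, w, r1, r2)]

def boolean_position(x, v):
    """x' = x XOR v."""
    return [xi ^ vi for xi, vi in zip(x, v)]

n = 6
x = [random.randint(0, 1) for _ in range(n)]        # current binary position
v = [0] * n                                          # initial velocity
pbest = [random.randint(0, 1) for _ in range(n)]     # particle best (illustrative)
gbest = [random.randint(0, 1) for _ in range(n)]     # global best (illustrative)
w = [random.randint(0, 1) for _ in range(n)]         # binary inertia vector
r1 = [random.randint(0, 1) for _ in range(n)]        # random binary vectors
r2 = [random.randint(0, 1) for _ in range(n)]

v = boolean_velocity(v, x, pbest, gbest, w, r1, r2)
x = boolean_position(x, v)
print(v, x)
```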

This method has been applied to different binary optimization problems using the particle swarm method [97–99]. In [100], it was used with bitwise operations applied to optimization problems. The Boolean approach also introduced an efficient velocity bounding strategy based on the negative thymic selection of T-cells.

Set-Based Approach (SBA). The set-based approach is a technique that belongs to the modified algebraic operations group. It is a good framework for discrete problems. In this approach, we eliminate all structure of the space (vector or metric) and work with a pure set. In a set, we have the standard set operations: union, intersection, complement, and so forth. Then, we need to redefine the sums, multiplications, and other operations of the vector space in terms of set operations. Consequently, we have to reformulate the operators as discrete operators.

In the literature, there are numerous set frameworks applied to PSO algorithms. Reference [101] proposed a generic set-based algorithm, which had problems with the size of the positions and velocities. In [102], a generic set PSO algorithm was proposed, but its performance is inferior to that of other algorithms. In [103], an algorithm called S-PSO was proposed that can be used to adapt a continuous PSO to a discrete one. In [104], an SBPSO was proposed.

In a general framework, it is necessary to define some operations. A position is a subset $X$ of a universe $U$, and a velocity $V$ is a set of pairs $(\pm, e)$ with $e \in U$, where the sign indicates whether the element is added or removed by the operation; the velocity is thus a transformation that maps a position to new positions. The following operations are needed (a small sketch of these operations is given after the list):
(1) Addition of two velocities: $V_1 \oplus V_2 = V_1 \cup V_2$.
(2) Difference between two positions: a mapping that yields the velocity $X_1 \ominus X_2 = \{(+, e) : e \in X_1 \setminus X_2\} \cup \{(-, e) : e \in X_2 \setminus X_1\}$.
(3) Multiplication of a velocity by a scalar $\eta$: $\eta \otimes V$, defined by picking a subset of elements at random from the velocity $V$.
(4) Addition of velocity and position: $X \boxplus V$, obtained by applying each operation pair of $V$ to the position $X$.
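A minimal sketch of these set operations, under the pair-based velocity representation assumed above; the exact definitions vary between the cited set-based variants.

```python
import random

# Minimal sketch of the set-based operations listed above. A position is a
# subset of the universe, and a velocity is a set of (sign, element) pairs
# indicating whether an element is to be added ('+') or removed ('-').
# The exact definitions vary between the cited set-based PSO variants.

def add_velocities(v1, v2):
    """Addition of two velocities: the union of their operation pairs."""
    return v1 | v2

def position_difference(x1, x2):
    """Difference of two positions: add what x1 has and x2 lacks, remove the converse."""
    return {('+', e) for e in x1 - x2} | {('-', e) for e in x2 - x1}

def scale_velocity(eta, v):
    """Multiplication by a scalar: keep a random subset of round(eta*|v|) pairs."""
    k = min(len(v), int(round(eta * len(v))))
    return set(random.sample(sorted(v), k))

def apply_velocity(x, v):
    """Addition of velocity and position: apply each operation pair to x."""
    x = set(x)
    for sign, e in v:
        if sign == '+':
            x.add(e)
        else:
            x.discard(e)
    return x

x1, x2 = {1, 2, 3}, {2, 4}
v = position_difference(x1, x2)      # {('+', 1), ('+', 3), ('-', 4)}
print(apply_velocity(x2, v))         # -> {1, 2, 3}
```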

Using these operations, the velocity and position update equations of the metaheuristic must be rewritten accordingly, as in (25).

This method modifies the velocity and position operators, and the construction is not simple. There are many variations of (25) [101, 103, 105], and in most cases they are applied to PSO. In PSO, this technique has been used to solve the traveling salesman problem [106], the multidimensional knapsack problem, and vehicle routing problems [101]. The Boolean approach is a particular case of the set-based approach.

5.2.2. Promising Regions

Quantum Binary Approach. This approach, which belongs to the promising-regions group, was developed in PSO [107, 108]. It is inspired by the uncertainty principle, according to which position and velocity cannot be determined simultaneously. Therefore, for individual particles, the PSO algorithm works in a different fashion, and the operators need to be rewritten.

In the quantum approach, each feasible solution has a position $x \in \{0, 1\}^n$ and a quantum vector $Q \in [0, 1]^n$, where $Q_j$ represents the probability that $x_j$ takes the value 1. For each dimension $j$, a random number $r$ between 0 and 1 is generated and compared with $Q_j$; if $r < Q_j$, then $x_j = 1$; otherwise, $x_j = 0$.

Then, the new personal best and global best positions are calculated using the objective function. Finally, the transition probability $Q$ is updated using the corresponding update equation.
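A minimal sketch of the quantum binary sampling step; the convex-combination update toward the personal and global best bits is an illustrative stand-in for the actual update equation, and the weights w, c1, and c2 are assumed parameter names.

```python
import random

# Minimal sketch of the quantum binary approach: each particle keeps a
# probability vector Q used to sample binary positions. The convex-combination
# update toward pbest and gbest below is an illustrative stand-in for the
# actual transition-probability update equation in the text.

def sample_position(Q):
    """Set x_j = 1 with probability Q_j, else 0."""
    return [1 if random.random() < q else 0 for q in Q]

def update_Q(Q, pbest, gbest, w=0.5, c1=0.25, c2=0.25):
    """Pull each probability toward the personal and global best bits (w+c1+c2=1)."""
    return [w * q + c1 * p + c2 * g for q, p, g in zip(Q, pbest, gbest)]

n = 5
Q = [0.5] * n                         # initial transition probabilities
pbest = [1, 0, 1, 1, 0]               # illustrative personal best
gbest = [1, 1, 1, 0, 0]               # illustrative global best
x = sample_position(Q)
Q = update_Q(Q, pbest, gbest)
print(x, Q)
```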

The quantum method has been applied to a swarm optimization algorithm in combinatorial optimization [109], a cooperative approach [110], knapsack problems [108], and power quality monitoring [111]. In differential evolution, it has been applied to knapsack problems [112], combinatorial problems [113], and image thresholding [114]. The cuckoo search metaheuristic has been used for 0-1 knapsack problems [115] and the bin packing problem [116]. In ant colony optimization, it has been applied to image thresholding [114]. The harmony search [117] and monkey [118] algorithms were applied to knapsack problems.

Binary Method Based on Estimation of Distribution. Estimation of distribution algorithms (EDAs) belong to the promising-regions group. They are optimization methods based on probabilistic models. These methods guide the search for the optimum by building a distribution over promising candidates and sampling from it [119].

The new solutions are obtained by sampling the search space using EDA. After each iteration, the distribution is reestimated using the new candidates.

In the case of binary optimization, [120] used a univariate marginal distribution (UMD) to obtain a binary method.

Let $n$ be the space dimension, $N$ be the number of candidates, $pbest$ be the best position found by the particle, and $gbest$ be the global best position; $pbest_j$ and $gbest_j$ are binary variables.

We want to obtain a particle $P$, where $P_j$ is the probability that the $j$th dimension of a solution takes the value 1. Let $j$ be a specific dimension; to initialize the particle $P$, we apply the rule

With the particle $P$, for an element $x_j$, where $j \in \{1, \ldots, n\}$, we apply the following decision: if random() $< P_j$, then $x_j = 1$; else $x_j = 0$.

With this rule, we obtain $x_j$ for $j = 1, \ldots, n$. The next step is to update the particle $P$. We use the rule

This method was constructed for a particle swarm optimization technique; however, it is easy to adapt to other metaheuristics. The advantage of this procedure is its adaptation at each iteration; however, it needs to adjust its parameters and to compute the distribution in each iteration. In PSO, it was applied to solve knapsack problems [120]. In differential evolution, it was applied to optimization problems [121, 122]. For genetic algorithms, it was used for economic dispatch problems [106]. Finally, local search [123] and memetic [123] metaheuristics were used to solve the traveling salesman problem.
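A minimal sketch of this UMD-style binarization; the update rule and the parameter names alpha and beta are illustrative assumptions standing in for the rules referenced above.

```python
import random

# Minimal sketch of the UMD-based binarization: a probability vector P drives
# the sampling of binary solutions and is re-estimated each iteration.
# The learning-rate update toward pbest and gbest, and the parameter names
# alpha and beta, are illustrative assumptions.

def initialize_P(n):
    """Start with no preference for 0 or 1 in any dimension."""
    return [0.5] * n

def sample(P):
    """x_j = 1 if random() < P_j, else 0."""
    return [1 if random.random() < p else 0 for p in P]

def update_P(P, pbest, gbest, alpha=0.3, beta=0.5):
    """Move each P_j toward a mix of the personal and global best bits."""
    return [(1 - alpha) * p + alpha * (beta * pb + (1 - beta) * gb)
            for p, pb, gb in zip(P, pbest, gbest)]

n = 6
P = initialize_P(n)
pbest = [1, 0, 1, 1, 0, 1]            # illustrative personal best
gbest = [1, 1, 0, 1, 0, 1]            # illustrative global best
x = sample(P)
P = update_P(P, pbest, gbest)
print(x, P)
```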

6. Discussion

This section aims to summarize the techniques and problems recently addressed. Additionally, with the information obtained from the articles along with our experience in the area, we want to capture our vision of the trends in binarization. This last point is difficult to assess; it is not intended as a quantitative analysis but rather reflects our view of the area.

From 65 papers, we have summarized, reviewed, and classified techniques that allow transforming continuous metaheuristics into discrete or binary metaheuristics. Figure 6 shows how the articles are distributed over the different binarization techniques. The most reviewed technique was the transfer function. From the 19 articles read about this technique, it is observed that it offers a general and simple mechanism for performing binarizations. However, the results are not always suitable and depend on the choice of the transfer function. In this sense, the greatest challenge, in our view, is to develop a methodology for choosing the transfer function (not simply trial and error), where this selection could be dynamic as the system evolves.

Another technique that appeared quite often was the quantum binary approach (QBA). From the articles read, it is observed that the implementations are particular to each metaheuristic, with quite good results. From our point of view, there is a line of research associated with designing a general quantum mechanism that allows binarizing any continuous metaheuristic. Another important point to work on is the development of a methodology for the selection of the parameters associated with binarization.

For the case of angle modulation, there is room to perform binarizations on new metaheuristics, where different variations of angle modulation can be explored. This exploration of new binarizations is very powerful if it is accompanied by an analysis of the positions and velocities of the particles of the system in order to understand the conditions under which angle modulation works properly. As a suggestion, we propose reading the work done in [124] on PSO.

From the point of view of problems, the greatest number of problems addressed corresponds to classic problems such as the knapsack problem (KS), the set covering problem (SCP), and the traveling salesman problem (TSP). The summary is shown in Figure 7. With the generation of large amounts of data and the incorporation of the Internet of Things, there is a large space to use metaheuristics associated with combinatorial problems in the areas of image processing and feature selection [125], deep learning parameter tuning [126], data-intensive applications [127–130], and networks and complex systems [131]. In this context, feature selection (FS) has been addressed using angle modulation and the set-based approach, and image thresholding (IT) has been addressed using the quantum binary approach.

7. Conclusion

This work surveyed the main discretization and binarization methods for continuous metaheuristics. Within the binarization group, we propose two main classifications. The first group we call two-step binarization methods, which use an intermediate space from which the binarization is mapped. The second group we call continuous-binary operator transformation, where the metaheuristic operators are adapted to a binary problem. When we analyzed the operator adaptation, we found methods that transform the algebraic operations and methods that use a probability for performing the transition in the search space. Table 3 summarizes these results.

Moreover, we provide a summary of the main discretization techniques, indicating the metaheuristic that was used and which problem was solved. This summary is shown in Table 4.

Additionally, we investigated which specific metaheuristics use these binarization techniques. The conclusion is that the most frequently used method is the transfer function, belonging to two-step binarization. Furthermore, we examined what types of optimization problems were solved by the different techniques. The summary is shown in Tables 5 and 6.

The principal research challenge in this area is to understand, in a general way, how exploration and exploitation properties are mapped from continuous metaheuristics to discrete or binary metaheuristics. This would allow improving the results of the metaheuristics and enlarging the spectrum of discrete or binary problems that can be solved. This compilation of discretization and binarization techniques allows us to conclude that no general technique exists that allows for efficient discretization.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Broderick Crawford is supported by Grant CONICYT/FONDECYT/REGULAR/1171243, Ricardo Soto is supported by Grant CONICYT/FONDECYT/REGULAR/1160455, José García is supported by INF-PUCV 2016, and Gino Astorga is supported by Postgraduate Grant, Pontificia Universidad Católica de Valparaíso, 2015.