Research Article  Open Access
Vojtěch Uher, Petr Gajdoš, Michal Radecký, Václav Snášel, "Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds", Computational Intelligence and Neuroscience, vol. 2016, Article ID 6329530, 14 pages, 2016. https://doi.org/10.1155/2016/6329530
Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds
Abstract
The Differential Evolution (DE) is a widely used bio-inspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. The algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has mostly been used for combinatorial problems over a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices that meet specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE to discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot simply be ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds.
1. Introduction
The Differential Evolution (DE) has been successfully applied to many continuous, combinatorial, and design optimization problems. Measuring devices, cameras, laser devices, and sensors produce discrete multidimensional vertices [1–3]. Big spatial data are analyzed in research areas like robotics, pattern recognition, and computer vision. In most of these areas, good results have been achieved with the DE (see, e.g., [1, 4–7]). This paper proposes a novel DE-based algorithm solving combinatorial tasks over discrete vertices. The abilities of the discrete Differential Evolution to search for optimal combinatorial solutions in multidimensional discrete point clouds (PCs) are discussed. Our modified method, called the Multidimensional Discrete Differential Evolution (MDDE), uses a vertex hashing function to strengthen the local properties of an n-dimensional discrete dataset.
The Differential Evolution was introduced by Storn and Price [8]. It is an evolutionary method which has become popular for its simplicity, robustness, and good convergence properties [9]. It is based on a population of individuals which represent temporary solutions that are iteratively refined over the generations. Each individual consists of several variables. The quality of individuals is evaluated by an objective function. After the successful application of the DE to real-valued problems in a continuous space, combinatorial and design optimization applications on integer or discrete-valued problems were presented, such as the load dispatch problem [10], the unit commitment problem [11], the 0-1 knapsack problem [12], the generalized traveling salesman problem [13], different NP-hard scheduling problems [14–18], form-finding of tensegrity structures [19], the assembly line balancing problem [20], and robot path planning problems [21, 22]. Surveys of discrete-valued problems and applications of evolutionary algorithms were published in [23–25].
There are several basic categories of variables according to [26]. The discrete integer variables bounded within a range are primarily discussed in this paper; we will call this category the discrete-valued variables. The value of such a variable is an integer pointer addressing an enumerative sample from a set of discrete elements. The elements should be arranged to get better convergence of the population [9]; otherwise the DE degenerates to a random search. The existing discrete methods can be divided into (a) indirect and (b) direct methods. The indirect methods operate with standard real-valued variables. The values are progressively recalculated to/from integer ones by some transformation function (see, e.g., [27–30]). The direct methods operate directly with integer values without any transformation, which eliminates the rounding error. The advantage of the indirect methods is that they utilize the robustness of the real-coded DE and require minimal intervention in the original DE. In the paper by Lampinen and Zelinka [27], a simple truncation of the real-valued parameters is proposed, but this simple approach worsens the diversity of the population and the robustness of the algorithm [31]. Other methods using improved rounding techniques involving additional conditions, constraints, and thresholds were published by Angira and Babu [32], Liao [26], and Schmidt and Thierauf [33]. Tasgetiren et al. introduced several approaches using a discrete DE algorithm for a flow shop scheduling problem [14, 34]. A novel indirect method called the Discrete Differential Evolution (DDE) [29] was proposed by Davendra and Onwubolu. In this case, the whole evolution is managed with integer values that are transformed into real ones only for the mutation phase of the DE. This approach uses the Forward Backward Transformation and has better convergence properties than the simple real/integer rounding techniques [29].
Datta and Figueira described a new mutation operator for discrete-valued variables [35]. Their approach, called the ridDE, is a direct method based on a bit mutation of integer values that avoids the real/integer transformation.
This paper primarily aims at problems addressing optimization in sparse discrete data represented by distributed vertices in a vector space. The Differential Evolution is often used for pattern recognition [7, 36], clustering [37], classification, and feature extraction [38]. All these disciplines find utilization in bio-inspired systems and robot automation [4, 5, 22] or computer vision [36, 39]. The article [38] summarizes different applications of evolutionary algorithms in pattern recognition and machine learning, including the Differential Evolution. The DE has been utilized for human body pose estimation from point clouds [6, 36, 40], circle detection [7], ellipse detection [41], recognition of leukocytes in images, and 3D face model reconstruction utilizing multiview 2D images [42]. Most of the referenced algorithms analytically optimize a temporary pattern shape or deformable or active shape models. The intersection rate between the proposed model and the vertices represents the quality of a solution. However, this means a complete pass through the whole dataset every time a solution is evaluated by the objective function. Our further vision is to apply our novel approach to direct pattern or feature recognition, where an optimized set of discrete vertices represents the required pattern or its estimate.
To do so, some modifications have to be made in the discrete-coded DE. This paper introduces the basic model of the MDDE. The multidimensional vertices are numbered by their indices in memory. The discrete-valued variables of individuals store the integer indices addressing the vertices in memory. Thus, the stochastic optimization iteratively refines the vertex indices to find the required combination of vertices. The local searching abilities of the MDDE in static point clouds are examined. The DE can efficiently handle nonlinear and nondifferentiable objective functions; thus, it is expected to be applicable to global optimization problems in sparse point clouds as well. The main problem is that the discrete vertices are unordered, making the optimization very slow and unstable [9]. The 1D enumerative datasets can be ordered by their values, but in the multidimensional space it is necessary to define a hashing function for the n-dimensional vertices. Three space filling curves (SFCs) are tested for the vertex hashing to obtain partly sequenced spatial data (Section 2.2).
First, the used real-coded Differential Evolution and the selected SFCs are introduced in Section 2. Section 3 describes the whole method, its input parameters, and the utilization of the SFCs. It also solves the problem of duplicate indices generated during the evolution. Section 4 tests the proposed method on several optimization problems and datasets. It proves that our novel MDDE works efficiently on spatial discrete data and that the more sophisticated SFCs considerably improve the convergence of a population.
2. Related Work
First, the reference model of the Differential Evolution is reviewed (Section 2.1). Next, several types of space filling curves (SFCs) are described in Section 2.2.
2.1. Real-Coded Differential Evolution
The first Differential Evolution algorithm was presented by Storn and Price in 1995 [43] and then improved in 1997 [8]. It is a simple evolution strategy for global optimization problems [8, 43]. The paper [44] defines several variants of the DE; the DE/best/1/bin variant is explained here, because it provides better results for most of the tested optimization problems. The basic algorithm is briefly described as follows.
A population consists of individuals representing potential solutions of the selected optimization problem. The objective function evaluating the quality (objective value) of an individual is defined as $f: \mathbb{R}^D \rightarrow \mathbb{R}$. An individual consists of $D$ real-valued variables represented by a vector $x_i = (x_{i,1}, \dots, x_{i,D})$. Problem-dependent constraints defining the search space and limiting the values of the variables can be established as well [26, 45, 46]. Mostly, the minimal value is searched. The process of the evolution is done by generating a new population of $NP$ individuals with improved objective values. The normalized objective value is usually called the fitness value. The number of generations is limited and labeled $G_{max}$. The individual with the minimum objective value found during the $G_{max}$ generations is returned as the result of the optimization. The appropriate setup of the DE input parameters is discussed in [8, 44]. The process of the DE/best/1/bin algorithm can be described as follows:
(1) At the beginning of the DE, a random population respecting the defined constraints is generated.
(2) $x_{i,j,G}$ is a variable value of an individual from the actual population, where $i \in \{1, \dots, NP\}$, $j \in \{1, \dots, D\}$, and $G$ is a generation counter ($G \in \{1, \dots, G_{max}\}$).
(3) For each individual $x_{i,G}$ of generation $G$:
(a) two different individuals $x_{r1,G}$ and $x_{r2,G}$ are selected from the population randomly, where $r1 \neq r2 \neq i$, and the third one is $x_{best,G}$, which represents the best known solution so far;
(b) a mutant vector is computed by the mutation operator: $v_{j,G+1} = x_{best,j,G} + F \cdot (x_{r1,j,G} - x_{r2,j,G})$, where $F$ is the mutational factor and $F \in (0, 2]$;
(c) a new individual is computed from the mutant vector by the crossover operator: $u_{j,G+1} = v_{j,G+1}$ if $r(j) \leq CR$ or $j = k$; otherwise, $u_{j,G+1} = x_{i,j,G}$, where $j \in \{1, \dots, D\}$, $CR$ is the crossover constant ($CR \in [0, 1]$), $r(j)$ is a random number ($r(j) \in [0, 1]$), and $k$ is a randomly chosen index of an individual variable;
(d) $x_{i,G+1} = u_{G+1}$ if $f(u_{G+1}) \leq f(x_{i,G})$; otherwise, $x_{i,G+1} = x_{i,G}$, where $x_{i,G+1}$ is an individual of the next population, $x_{i,G}$ is an individual of the actual population, and $u_{G+1}$ is the new proposed individual.
(4) Step (3) is repeated $G_{max}$ times.
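As an illustration, the steps above can be sketched in Python. This is a minimal sketch with our own function and parameter names; the sphere function used for the demonstration is our choice and is not part of the paper.

```python
import random

def de_best_1_bin(f, bounds, NP=20, F=0.8, CR=0.9, G_max=200, seed=1):
    """Minimal DE/best/1/bin sketch following the steps above.
    f: objective on a list of D reals; bounds: list of (lo, hi) per variable."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    best = min(pop, key=f)
    for _ in range(G_max):
        new_pop = []
        for i, x in enumerate(pop):
            # (a) two distinct random individuals r1 != r2 != i, plus the best one
            r1, r2 = rng.sample([j for j in range(NP) if j != i], 2)
            # (b) mutation: v = x_best + F * (x_r1 - x_r2)
            v = [best[j] + F * (pop[r1][j] - pop[r2][j]) for j in range(D)]
            # (c) binomial crossover; index k is always taken from the mutant
            k = rng.randrange(D)
            u = [v[j] if (rng.random() <= CR or j == k) else x[j] for j in range(D)]
            u = [min(max(u[j], bounds[j][0]), bounds[j][1]) for j in range(D)]
            # (d) greedy selection between the trial and the current individual
            chosen = u if f(u) <= f(x) else x
            new_pop.append(chosen)
            if f(chosen) < f(best):
                best = list(chosen)
        pop = new_pop
    return best

sphere = lambda x: sum(v * v for v in x)  # simple test function (our choice)
best = de_best_1_bin(sphere, [(-5.0, 5.0)] * 3)
```

The greedy selection in step (d) guarantees that the population quality never worsens between generations.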
2.2. Space Filling Curves
The algorithm proposed in this paper uses space filling curves (SFCs) to represent the multidimensional discrete data. Three variants of the SFCs were selected: linear indexing (C-curve), Z-order, and Hilbert curve (see, e.g., [47, 48] and Figure 1). Generally, SFCs connect points that are close to each other in space and thus transform a general n-dimensional problem into a one-dimensional (1D) one. An SFC is usually based on a bounded space division. The bounding box of the dataset is computed. For each vertex, a code representing its location in the subspace hierarchy is computed, and the vertices are sorted according to these codes. Thus, an ordered linear array grouping the discrete vertices with a similar spatial character is created. All three mentioned SFCs are based on the Octree structure, so they are universally applicable to the n-dimensional space. The construction of the SFCs is described in [49, 50]. The SFCs are very straightforward and efficient methods for sparse space clustering [51]. The C-curve is the basic approach for the linearization of n-dimensional data. It can be simply constructed, but its local properties are very basic in comparison with the other two SFCs. The Z-order curve is a very popular curve with good local properties and fast construction times. The Hilbert curve fills the space conveniently without any unnecessary crossings or space leaps (see Figure 1), and thus it is considered to be one of the best Octree-based SFCs (see [49, 51]).
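For illustration, Z-order hashing of a point cloud can be sketched as follows. The quantization onto a power-of-two grid over the bounding box and the function names are our assumptions; the principle (interleave the bits of quantized coordinates, then sort by the resulting Morton code) matches the description above.

```python
def interleave_bits(coords, bits=21):
    """Z-order (Morton) code: interleave the bits of quantized coordinates.
    With 3 coordinates and 21 bits each, the code fits a 64-bit integer."""
    code = 0
    n = len(coords)
    for b in range(bits):
        for d, c in enumerate(coords):
            code |= ((c >> b) & 1) << (b * n + d)
    return code

def zorder_sort(points, bits=21):
    """Quantize points onto a 2^bits grid over their bounding box and sort
    them by Morton code, so that array-adjacent points are spatially close."""
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    scale = [(2 ** bits - 1) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
             for d in range(dim)]
    def code(p):
        q = [int((p[d] - lo[d]) * scale[d]) for d in range(dim)]
        return interleave_bits(q, bits)
    return sorted(points, key=code)
```

The 64-bit limit mentioned later in the experiments corresponds to the `bits * dim <= 64` budget visible in `interleave_bits`.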
3. Discrete Differential Evolution in n-Dimensional Space
This section describes a novel approach based on the DE for discrete multidimensional data analysis. The method is explained on the DE/best/1/bin variant (described in Section 2.1), because it seems to be efficient for distance function minimization, but any other variant can be used [44]. Two discrete-coded methods were tested with spatial data: the DDE by Davendra and Onwubolu [29] and the ridDE by Datta and Figueira [35]. However, the ridDE cannot be parametrized conveniently; thus the DDE was selected as the reference model, as introduced in Section 3.1. The problem of discrete vertex optimization is described in Section 3.2. The Multidimensional Discrete Differential Evolution (MDDE) utilized for the distinct solutions search in spatial data is explained in detail in Section 3.3.
3.1. Utilized Discrete Model of the DE
The DDE by Davendra and Onwubolu [29] was selected as the reference discrete model, because it works with individuals that consist of discrete-valued variables. The internal crossover and mutation operators invariably change any applied value to a real number, which leads to infeasible solutions. Therefore, it is necessary to progressively convert the values from integers to reals and then back to integers. The DDE uses the so-called Forward Backward Transformation of values. The Forward (integer/real) Transformation is computed only for the mutation and crossover phases of the DE, so that the operators are applied to real values. The variable values of the new individual are then transformed back to integers by the Backward (real/integer) Transformation, and the evolution continues with the integer values. This model is very convenient for combinatorial problems, where real values make no sense, and for the detection and elimination of duplicate values within an individual. The individual is represented by a vector $x_i = (x_{i,1}, \dots, x_{i,D})$. The Forward Transformation is defined as
$$x'_{i,j} = -1 + \alpha \cdot x_{i,j}, \quad (1)$$
and the Backward Transformation is defined as
$$x_{i,j} = \mathrm{round}\left(\frac{1 + x'_{i,j}}{\alpha}\right), \quad (2)$$
where $x_{i,j}$ is an integer value, $x'_{i,j}$ is the corresponding real value for $j \in \{1, \dots, D\}$, and $\alpha$ is a scaling constant. The constants were established after extensive experimentation [29]. The transformations (1) and (2) are mutually inverse.
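A sketch of the transformation pair might look as follows. The scaling constant `ALPHA` is a placeholder of our own, not the experimentally established constant from [29]; any small positive value preserves the mutual-inverse property on integers.

```python
ALPHA = 1.0e-3  # placeholder scaling constant; the paper's constants come from [29]

def forward(x_int):
    """Forward (integer -> real) transformation, used only for the
    mutation and crossover phases."""
    return -1.0 + ALPHA * x_int

def backward(x_real):
    """Backward (real -> integer) transformation; rounding makes the
    pair mutually inverse on integers."""
    return int(round((1.0 + x_real) / ALPHA))
```

The rounding in `backward` is what absorbs the small real-valued perturbations produced by mutation and crossover.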
3.2. Direct MDDE
The modified Multidimensional Discrete Differential Evolution (MDDE) is very similar to the DE from Section 2.1. The most important differences are in the mutation and evaluation parts. The MDDE optimizes a set of indices addressing the static vertices of the dataset. The vertices are stored in a linear array in memory. An individual consists of discrete-valued variables. The final solution is defined as a combination of indices addressing the vertices meeting the required conditions. The conditions depend on the specific optimization problem. The objective function can be formulated as a distance function defining some vertex distribution representing, for example, the outline of a required shape.
The main problem is that real discrete datasets are nonuniformly distributed in space. Thus, the indices addressing the vertices in the array carry no information about the spatial character of the vertices. Application of the DDE model to a set of unordered vertices leads to a random search. The dataset has to be ordered to get better convergence of the population. However, this is not straightforward in the n-dimensional space; thus, smart vertex hashing has to be applied. Three space filling curves are tested in this paper (Section 2.2). An SFC makes the n-dimensional discrete data partly sequenced, so that close indices address spatially close vertices. The specific vertex order affects the diversity of the population and the robustness of the algorithm (see Section 4). The order of vertices is primarily important for the mutational phase of the evolution.
As the MDDE is a randomized algorithm, it is possible that a newly generated individual contains duplicate indices. Generally, a resulting solution consisting of distinct vertex indices is expected, to obtain the set of vertices representing the searched pattern or feature. The duplicities have to be eliminated to obtain duplicity-free individuals at the end of every generation. The basic algorithm works as follows:
(1) The input parameters and data are set.
(2) The SFC representation of a point cloud is computed.
(3) The initial population of individuals is generated. Each individual consists of discrete-valued variables, which are randomly initialized so that there are no duplicities.
(4) All individuals are evaluated by the objective function.
(5) For each individual of a population:
(a) three different individuals are randomly selected from the current population;
(b) the best known individual and two of the randomly selected individuals are combined:
(i) the Forward Transformation (1) of the variable values is computed for all parent individuals;
(ii) the mutation operator and the crossover operator are applied to the corresponding variables;
(iii) the variable values of the new individual are transformed to integers by the Backward Transformation (2) and validated afterwards;
(c) the duplicate variable values of the new individual have to be resolved; the duplicities are replaced by distinct values from the third randomly selected individual;
(d) an individual is evaluated by the objective function according to the total objective value (e.g., the sum of separate distances); the new individual is compared with the corresponding one from the current population, and the better one is selected for the new population;
(e) the best known solution is compared with the new individual and replaced eventually.
(6) Step (5) is repeated in each of the $G_{max}$ generations.
(7) Finally, the resulting vertices are read according to the found integer indices stored in the discrete-valued variables of the best found individual.
3.3. The Distinct Solutions Search
This section describes the parts of the MDDE algorithm in more detail. The utilization of the SFCs, the mutation, and the duplicity elimination are explained here.
3.3.1. Initialization
The input parameters of the MDDE are almost the same as those mentioned in Section 2.1:
$N$: number of vertices in the dataset
$f$: total objective function
$f_s$: separate objective function
$NP$: number of individuals of a population
$D$: number of individual variables
$n$: dimension of the discrete vertices and the separate objective function
$G_{max}$: maximum number of generations
$F$: constant mutational factor, $F \in (0, 2]$
$CR$: crossover constant, $CR \in [0, 1]$
3.3.2. Individual Representation
Each individual of the population consists of discrete-valued variables storing the vertex indices. One array containing all the individuals is allocated. The alternation of populations is done by double buffering of the individuals, and the populations are switched simply by exchanging the pointers addressing the current and the new individual. The individual variables are aligned in memory as well; thus the $D$ values (32-bit integers) of an individual are stored in a row.
3.3.3. Initial Population
The first duplicity-free population has to be generated. The range of the vertex indices is divided into $NP \cdot D$ blocks. One random index is selected from each block; thus $NP \cdot D$ different initial values are generated randomly for the $NP$ individuals. A random permutation of the values is computed afterwards. Therefore, the variable values of all individuals are completely distinct.
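Under the assumption that the index range [0, N) is split into NP·D equal blocks (our reading of the description above), the duplicity-free initialization could be sketched as:

```python
import random

def init_population(N, NP, D, seed=0):
    """Duplicity-free initialization sketch: split the index range [0, N)
    into NP*D equal blocks, draw one random index per block, then permute
    the drawn indices among the individuals (assumes N >= NP*D)."""
    rng = random.Random(seed)
    total = NP * D
    block = N // total
    picks = [rng.randrange(i * block, (i + 1) * block) for i in range(total)]
    rng.shuffle(picks)  # random permutation of the distinct indices
    return [picks[i * D:(i + 1) * D] for i in range(NP)]
```

Drawing one index per block guarantees distinctness without any rejection loop, and the shuffle removes the block-to-individual correlation.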
3.3.4. Evaluation
The evaluation of the objective function on an individual is done similarly to the case of ordinary 1D discrete data. The whole MDDE works with the vertex indices assigned by the SFC. The separate objective function is called with the vertex addressed by the integer index. If an individual consists of multiple variables, a multidimensional objective function is utilized. Generally, the variables are evaluated by the separate objective function, and the sum of the particular objective values is used to compute the total objective value of an individual. However, this can be done only if the particular objective value converges by itself (e.g., a Euclidean distance). Otherwise, a more sophisticated objective function must be used.
3.3.5. Mutation Operator
The MDDE operates with vertex indices addressing the ordered vertices on the SFC (Figure 2). The mutation operator computes a mutant vector as a linear combination of three different individuals (Section 2.1): two from the current population and the best known one (see Figure 2). According to the DDE model, the mutation operator already operates on the transformed real values. The computation of the mutant vector is done for each individual variable:
$$v_{j,G+1} = x'_{best,j,G} + F \cdot (x'_{r1,j,G} - x'_{r2,j,G}), \quad (3)$$
where $r1 \neq r2 \neq i$, $j \in \{1, \dots, D\}$, and $G$ is a generation counter. Obviously, the mutation operator can be simply reformulated to, for example, the DE/rand/1/bin and other variants [44] if needed. Figure 2 shows that the order of vertices is crucial for the convergence of the population. The SFC better ensures that the mutant index computed from the parent indices (3) addresses a vertex that is placed near the vertices addressed by the parent indices $x_{best}$, $x_{r1}$, and $x_{r2}$. In the case of unordered point clouds, the mutation would practically lead to a random selection of a mutant vector without any spatial logic (see Figure 2).
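Combined with the transformations of Section 3.1, a single mutant index might be computed as in the following sketch. `ALPHA` is again our placeholder constant, and out-of-range results are regenerated randomly as described for the validation step in Section 3.3.6.

```python
import random

ALPHA = 1.0e-3  # placeholder scaling constant (stands in for the constants of [29])

def mutate_index(i_best, i_r1, i_r2, F, N):
    """Mutant index on the SFC: forward-transform the three parent indices,
    apply v = best + F * (r1 - r2) on the real values, transform back, and
    regenerate randomly if the result leaves the valid index range [0, N)."""
    fwd = lambda i: -1.0 + ALPHA * i
    v = fwd(i_best) + F * (fwd(i_r1) - fwd(i_r2))
    i_v = int(round((1.0 + v) / ALPHA))
    return i_v if 0 <= i_v < N else random.randrange(N)
```

Because the forward transformation is affine, the mutant index effectively equals `i_best + F * (i_r1 - i_r2)` rounded to the nearest integer, so nearby parent indices (and, via the SFC, nearby vertices) yield a nearby mutant vertex.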
3.3.6. Crossover Operator
The traditional crossover operator described in Section 2.1 is applied. A proposed (mutant) value is accepted with the probability $CR$. If $D > 1$, the operator will be applied separately for each variable. The variable values of the new individual are transformed to integers by (2). Additional constraints and the condition that the values (indices) belong to $[0, N-1]$ have to be validated afterwards. If a variable value falls outside the interval, a random value in the interval will be selected.
3.3.7. Separate Selection Operator
If $D > 1$ and it is possible to assess the quality of the variable values separately, the selection can be made on the level of separate variables. This prevents the averaged results generated by the simple optimization of the sum of values and improves the convergence of the population. For example, the vertex distance from the proposed pattern can be used as a separate metric.
3.3.8. Elimination of Duplicities
The various combinations of distinct variable values (vertex indices) may lead to the same resulting value due to the convergence to the global optimum. The duplicities have to be found and replaced to get better diversity of a discrete solution. A point cloud is a finite set of vertices; thus a subset of sufficient vertices can fulfil a condition resulting in some pattern or feature recognition. Therefore, the duplicity free solutions are required. All individuals are checked for duplicities before the final individual selection to preserve this demand for the new population.
For each newly generated individual $u_{i,G+1}$, another individual $x_{k,G}$ is randomly selected from the current population (the new one is not finished yet), where $G$ is a generation number and $k \neq i$. The new individual is checked for duplicities first, and the number of recurrences $c$ is obtained, where $c < D$. The mentioned facts mean that $u_{i,G+1}$ and $x_{k,G}$ can have at most $D - c$ identical indices after the elimination of the recurrences from $u_{i,G+1}$. Thus, having the certainty of a duplicity-free $x_{k,G}$, its remaining indices can replace the recurrences of $u_{i,G+1}$.
The implementation of this algorithm is based on convenient flagging of the indices followed by their sorting (Figure 3). The index arrays of both individuals are copied into a temporary array of $2D$ indices one by one: $u_{i,G+1}$ is stored first, followed by $x_{k,G}$. Another array holds the corresponding flags of the indices. The flagging is done by sequential comparison of the unmarked indices. The indices of $u_{i,G+1}$ are flagged first, and the recurrences are found. The unique indices are flagged with 1 and the duplicities with 3. Next, the distinct indices have to be found in $x_{k,G}$, so the indices of $x_{k,G}$ are compared with the preceding ones. The unique indices are flagged with 2, and the search terminates when $c$ such indices are found. The remaining indices are flagged with 3. The indices are then sorted by the Quicksort algorithm according to the flags; thus, the first $D$ indices represent the new duplicity-free individual. The flagging can also be used for penalization of undesired solutions, so that the penalized indices are sorted out.
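The flag-and-sort elimination described above can be sketched as follows. Two simplifications of ours: Python's stable `sorted` stands in for Quicksort over the flags, and the donor individual is assumed to contain enough indices distinct from the new one.

```python
def eliminate_duplicates(x_new, x_rand):
    """Flag-and-sort duplicity elimination. x_new may contain repeated
    indices; x_rand is a duplicity-free donor individual from the current
    population (assumed to hold enough indices distinct from x_new)."""
    D = len(x_new)
    merged = list(x_new) + list(x_rand)
    flags = [0] * (2 * D)
    seen = set()
    needed = 0
    for i in range(D):                    # flag x_new: 1 = unique, 3 = duplicate
        if merged[i] in seen:
            flags[i] = 3
            needed += 1
        else:
            flags[i] = 1
            seen.add(merged[i])
    found = 0
    for i in range(D, 2 * D):             # flag donor: 2 = usable replacement
        if merged[i] not in seen and found < needed:
            flags[i] = 2
            seen.add(merged[i])
            found += 1
        else:
            flags[i] = 3
    # stable sort by flag; the first D entries form the duplicity-free result
    order = sorted(range(2 * D), key=lambda i: flags[i])
    return [merged[i] for i in order[:D]]
```

Sorting by flag moves the unique indices (flag 1) and the donated replacements (flag 2) to the front, while the duplicities and unused donor indices (flag 3) are sorted out.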
3.3.9. Final Remark
The new proposed individual is compared with the best known one. The total objective value is used to assess the best ascertained individual. The whole computation is terminated after $G_{max}$ generations, or when a terminating condition is met. The ascertained individual with the best total objective value is returned.
4. Experiments and Discussion
In this section, the proposed MDDE method is tested. The main aim of the experiments is to test the local behaviour of the MDDE on the three space filling curves (SFCs) and its convergence to the global optimum in discrete point clouds (PCs). The C-curve was selected as a naive vertex hashing algorithm for comparison, to show that the MDDE running on more complex SFCs with better local properties converges faster to the searched extreme. There seems to be no comparable method addressing combinatorial problems on the level of discrete multidimensional vertices. The SFCs are constructed by hierarchical vertex hashing followed by sorting of the vertices according to the hashes/codes (see Section 2.2). A code represents an octant that contains the hashed vertex. The order of the octants written to the code distinguishes the different variants of the SFCs. The codes are usually represented by a bit sequence of octant coordinates. The SFCs of all the tests and datasets were constructed for the maximum hierarchical level allowed by a 64-bit integer. The bit length of the hash is the main limitation of our method: the greater the dimension of the discrete vertices, the lower the maximum level of clustering and the weaker the ability of the SFCs to distinguish the locations of two close vertices. That is why the experiments are focused on 2D and 3D problems and datasets. However, the MDDE is generally applicable to n-dimensional spaces if longer hashes are used.
This paper primarily aims at problems addressing optimization in sparse discrete data represented by distributed vertices in a vector space. It is assumed that the observed property or pattern is locally bound to the spatial data. Several discrete methods were tested, but the DDE by Davendra and Onwubolu [29] was chosen. In comparison with the ridDE [35], the DDE provides the option of parameter setting that allows one to define the sampling step of the evolution. All the tests were performed with the DE/best/1/bin variant, as it seems to be the best one after extensive experimentation.
4.1. The Definition of the Tested Problems
The algorithm was tested on several common optimization problems:
(1) the point-to-point and point-to-line distance minimization problem;
(2) discrete optimization of the Schwefel and Rastrigin functions;
(3) the maximum distance search in 3D datasets.
These problems have been selected, because they are applicable for all kinds of point clouds and space dimensions and they mostly represent the basic tasks in the area of the spatial data analysis. They can be precisely solved analytically by the brute force vertex comparison as well; thus it is possible to compare the results of the analytical and the evolutionary approaches. The problems are described in the following subsections.
4.1.1. Point-to-Point and Point-to-Line Distance Minimization
The objective function of the point-to-point problem is defined as the Euclidean distance between a randomly chosen vertex from the dataset and the vertices proposed by the evolution. The objective function of the point-to-line problem is defined as the Euclidean distance between the line constructed from two different vertices randomly chosen from the dataset and the vertices proposed by the evolution [52]. The distance is the basic metric that is generally minimized to recognize some shape or pattern. The evolution converges locally to the global extreme in this case; thus, it is a good example that can be tested with the MDDE. Obviously, the randomly selected vertices have to remain consistent during the whole evolution process. The distances of the vertices of each individual are optimized separately, and the total fitness (objective) value of an individual is computed as the sum of the distances. In both cases, zero-distance solutions are heavily penalized in order to provide a comparative rating between the analytical and the evolutionary approach.
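The two objective functions can be sketched as follows (function names are ours; the zero-distance penalization is omitted for brevity):

```python
import math

def point_to_point(v, ref):
    """Euclidean distance between a candidate vertex and the reference vertex."""
    return math.dist(v, ref)

def point_to_line(v, a, b):
    """Distance from vertex v to the line through a and b, via projection."""
    n = len(a)
    ab = [b[i] - a[i] for i in range(n)]
    av = [v[i] - a[i] for i in range(n)]
    t = sum(p * q for p, q in zip(ab, av)) / sum(p * p for p in ab)
    closest = [a[i] + t * ab[i] for i in range(n)]
    return math.dist(v, closest)

def total_objective(individual, vertices, ref):
    """Total objective of an individual: sum of separate per-vertex distances."""
    return sum(point_to_point(vertices[idx], ref) for idx in individual)
```

Both separate metrics converge by themselves, which is the condition stated in Section 3.3.4 for summing them into the total objective value.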
4.1.2. Discrete Optimizations of Test Functions
Evolutionary algorithms are usually checked on several continuous test functions [53]. The well-known Schwefel and Rastrigin functions were selected for the tests of the MDDE, because they are both very complex functions with many local minima and they are applicable in any dimension (see [53]). These continuous functions represent the corresponding objective functions evaluating the quality of the ascertained vertices. The discrete vertices of dimension $n$ are randomly generated in the typical input domains defined, for example, in [53]. Thus, the optimization is based on the search for the distinct vertices with the minimal objective value.
Two different distributions of random samples were tested to better distinguish the properties of the space filling curves (see Figure 4). The Gaussian distribution consists of vertices sampled randomly according to the standard normal distribution, recalculated to the intervals of the input domain. Similarly, the Gaussian islands are ten randomly placed vertex groups distributed according to the standard normal distribution (Figure 4(b)), together containing the same total number of vertices. The distributions are the same for all measurements.
4.1.3. Maximum Distance Search
The problem is defined as a search for the two most distant vertices of the dataset. This can be used, for example, as an approximate solution of the minimum sphere problem, which is defined as a search for the minimum sphere containing all the vertices of the dataset [54]. The minimum sphere problem is more complex, because the maximum Euclidean distance used as a diameter of the sphere does not guarantee that all the vertices are contained inside the sphere. However, in many cases the maximum distance can be used as a good estimate of the minimum sphere problem solution, which can be further improved. We reformulated it to a minimization problem, so that the difference
$$d = l_{diag} - d_{max} \quad (4)$$
is minimized, where $l_{diag}$ is the diagonal length of the bounding box and $d_{max}$ is the maximum distance between two vertices found in the dataset. The bounding box diagonal represents the maximum possible distance of two vertices; thus, $d$ is always nonnegative.
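A sketch of the reformulated maximum-distance objective, with the bounding-box diagonal computed once in advance (function names are ours):

```python
import math

def bbox_diagonal(vertices):
    """Diagonal length of the axis-aligned bounding box of the dataset."""
    dim = len(vertices[0])
    lo = [min(v[d] for v in vertices) for d in range(dim)]
    hi = [max(v[d] for v in vertices) for d in range(dim)]
    return math.dist(lo, hi)

def max_distance_objective(pair, vertices, diag):
    """Minimized difference between the bounding-box diagonal (the largest
    possible distance) and the distance of the candidate vertex pair."""
    a, b = vertices[pair[0]], vertices[pair[1]]
    return diag - math.dist(a, b)
```

Subtracting from the fixed diagonal turns the maximization of the pair distance into a standard minimization, matching the rest of the MDDE machinery.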
This problem differs from the others, which converge locally to the extremes. However, the maximum distance can be found by a local search of two distant areas, which leads to finding greater distances. Therefore, the MDDE algorithm converges to the global extreme as well.
4.2. Achieved Results
This section discusses the results of the MDDE tested on the defined problems. Three artificial and three real standard datasets were chosen for the tests, as listed in Table 1. The random Gaussian datasets were generated according to the standard normal distribution. The Gaussian islands were explained in Section 4.1.2. For all optimization problems and datasets, the best solutions were computed analytically in advance.
Abbreviations in Table 1: point-to-point (PP); point-to-line (PL). The real datasets are from The Stanford 3D Scanning Repository [55].
4.2.1. Sufficient Solution Search
First, the point-to-point and point-to-line problems were tested (see Section 4.1.1). The corresponding DE parameters for both problems are listed in Table 1; they were established after extensive experimentation. Figure 5 compares the SFCs on six different 3D datasets. These tests measure the number of DE generations needed to obtain a sufficient result, in which all vertices have a sufficient distance. The sufficient result has to meet the condition f_j(g) <= a * f_best for j = 1, ..., N_I, where N_I is the number of individual indices, g is a generation counter, f_j is a separate objective function that returns the distance of the jth individual vertex from the reference point, f_best is the best analytically computed solution, and a is the corresponding accuracy rate according to Table 1. Each measurement was performed 50 times for different randomly selected vertices, which define the reference vertex or line. The graphs thus represent convergence metrics examining various areas of the distributed datasets. Figure 5 shows that the MDDE utilizing the Z-order and the Hilbert curve converges faster to the global optimum than with the C-curve. The Z-order generally yields better results than the Hilbert curve, especially in sparse and nonuniformly distributed datasets.
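The Z-order hashing that drives this comparison can be sketched as a standard 3D Morton encoding. This is an illustrative version assuming coordinates already quantized to a 10-bit integer grid; the function names and the bit width are our assumptions, not the paper's implementation:

```python
def part1by2(x):
    # Spread the 10 low bits of x so each lands on every third bit position.
    x &= 0x000003FF
    x = (x ^ (x << 16)) & 0xFF0000FF
    x = (x ^ (x << 8)) & 0x0300F00F
    x = (x ^ (x << 4)) & 0x030C30C3
    x = (x ^ (x << 2)) & 0x09249249
    return x

def morton3(ix, iy, iz):
    # Interleave three 10-bit grid coordinates into one 30-bit Z-order key.
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

# Sorting vertex indices by their Morton keys yields a linear ordering of the
# point cloud; nearby keys tend to correspond to spatially nearby vertices.
grid = [(1, 0, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
order = sorted(range(len(grid)), key=lambda i: morton3(*grid[i]))
```

The bit length of such keys also illustrates the precision limit discussed in the conclusion: with b bits per axis in dimension d, a key needs b*d bits, so higher dimensions force coarser grids.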
4.2.2. Convergence Tests
The next measurements focus on the convergence of the evolution over the generations. Figures 6, 7, 8, and 9 show the MDDE progress measured on different problems, datasets, and dimensions. These measurements are visualized by ribbon plots or curves of medians constructed from 20 performed measurements. The vertical axis represents the corresponding fitness value expressed as a multiple of the best analytical solution.
Figure 6 compares the ribbon plots displaying the median, the first, and the third quartile of the measured fitness for the point-to-point and point-to-line problems. These tests were performed on the artificial Gaussian-distributed datasets according to the parameters in Table 1. The Z-order again shows the best results; the C-curve has the worst convergence in this measurement. The accuracy is much better in the case of the point-to-point distance problem, because the line crosses the whole point cloud and thus there are many very close vertices. The vertices with a zero distance metric are eliminated in both cases.
Figures 7 and 8 show similar convergence metrics for the Rastrigin (Figure 7) and Schwefel (Figure 8) test functions. Only the medians are displayed for better legibility of the plots. Table 2 summarizes the MDDE parameters for all tests. The tests on both functions were performed on artificial datasets with the Gaussian distribution and the Gaussian islands, as explained in Section 4.1.2. The results of the SFCs are closer to each other here than for the distance functions, especially in the case of the Gaussian islands. However, the Z-order mostly shows the fastest convergence and the best accuracy in comparison with the other SFCs.
Finally, Figure 9 presents the convergence metrics of the maximum distance problem reformulated as a minimization problem (see Section 4.1.3). These tests were performed on the three Stanford datasets mentioned in Table 1 according to the parameters in Table 2. The plots show the progress of the fitness rate during 100 generations. The results are quite comparable again, but the Z-order converges faster than the Hilbert curve and the C-curve.
4.2.3. Completeness Tests
The MDDE returns a vector of vertex indices as the result of the optimization. In a dataset with a finite number of vertices, the discrete optimal solution can be found analytically, so the intersection of the stochastically found solution and the best solution can be computed. The completeness is thus defined by the rate C = N_C / N_I, where N_C is the number of correctly found vertex indices of an individual and N_I is the total number of individual indices.
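The completeness rate described above amounts to a set intersection over index vectors. A minimal sketch, with hypothetical index sets (in the MDDE the reference set comes from the analytical solution):

```python
def completeness(found_indices, best_indices):
    # Rate of correctly found vertex indices: |found ∩ best| / N_I,
    # where N_I is the total number of indices in an individual.
    correct = len(set(found_indices) & set(best_indices))
    return correct / len(found_indices)
```

Because solutions are sets of distinct vertices, the order of indices within an individual does not matter, which is why the intersection is the natural measure.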
The completeness was measured after 100 generations of the evolution on the Rastrigin (Table 3) and the Schwefel (Table 4) test functions, because they are very complex functions with many local minima. The measurements were performed with the DE parameters summarized in Table 2. Tables 3 and 4 compare the completeness for the three SFCs and two vertex distributions. The tables show that the completeness is better in the case of Gaussian islands and 3D space. The same number of vertices distributed in the 2D space leads to a greater sampling density; thus there are more vertices with good fitness than in the 3D space, where the distances between samples are greater. Distinguishing two very close solutions is therefore complicated for such a bio-inspired method. However, the results are still very good, especially in the case of the Z-order and Hilbert curves.


4.2.4. Performance Tests
This section briefly presents the performance of the proposed MDDE algorithm. The evolution times of 100 generations, including the duplicity elimination, are summarized in Table 5. Each measurement was performed 50 times on all the mentioned optimization problems according to the DE parameters in Tables 1 and 2. The computation times depend mainly on the population size, the number of individual variables, the dimension, and the objective function. All experiments ran on the following hardware: Intel Core i5 760 @ 2.8 GHz, 16 GB RAM, Windows 7 64-bit.

5. Conclusion
A novel modification of the DE called the Multidimensional Discrete Differential Evolution (MDDE), addressing combinatorial problems in n-dimensional point clouds, was presented. Our method aims at discrete-valued problems where a combination of multidimensional vertices represents the required solution. The convergence of the evolution is improved by linearizing the spatial data with space filling curves (SFCs). The algorithm efficiently eliminates the problem of duplicate values in an individual. The paper examined the local searching abilities of the MDDE and its convergence to the global extreme in discrete point clouds. The method was tested on several spatial optimization problems and three SFCs (Z-order, Hilbert, and C-curve). The tests of convergence and completeness of the discrete solution show that the Z-order curve can be recommended as the best of the tested SFCs. The completeness of the best found solutions mostly ranges between 60% and 100% depending on the used SFC. The evolution converges fast, especially during the first 50 generations. The computation times of 100 generations measured on the test problems are at most a few milliseconds. Our MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds. The main disadvantage of the MDDE is the limited precision of the SFCs, which are constrained by the bit length of the vertex hashes. This is significant especially in higher dimensions.
The MDDE represents a basic discrete model for pattern recognition and feature extraction, especially in 2D and 3D discrete datasets. Formulating real-world problems for the MDDE is difficult; this will be the direction of our future work. We have promising results in the area of primitives detection, where the MDDE can accelerate the convergence of the evolution.
Competing Interests
The authors declare that the grant, scholarship, and/or funding do not lead to any conflict of interest. Additionally, the authors declare that there is no conflict of interest regarding the publication of this manuscript.
Acknowledgments
This work was supported by SGS project, VŠB-Technical University of Ostrava, under Grant no. SP2016/97. This work was also supported by the Grant Agency of the Czech Republic, under Grant no. GACR GA15-06700S: Unconventional Control of Complex Systems.
References
 A. Harrison and P. Newman, “High quality 3D laser ranging under general vehicle motion,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '08), pp. 7–12, Pasadena, Calif, USA, May 2008.
 V. Uher, P. Gajdoš, T. Ježowicz, and V. Snášel, “Application of hexagonal coordinate systems for searching the KNN in 2D space,” in Innovations in Bio-Inspired Computing and Applications: Proceedings of the 6th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2015), Kochi, India, December 2015, vol. 424 of Advances in Intelligent Systems and Computing, pp. 209–220, Springer, Berlin, Germany, 2016.
 P. Núñez, R. Vázquez-Martín, J. C. del Toro, A. Bandera, and F. Sandoval, “Feature extraction from laser scan data based on curvature estimation for mobile robotics,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 1167–1172, IEEE, Orlando, Fla, USA, May 2006.
 A. Qing, Differential Evolution: Fundamentals and Applications in Electrical Engineering, John Wiley & Sons, New York, NY, USA, 2009.
 H. Mo and Z. Li, “Biogeography based differential evolution for robot path planning,” in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 1–6, IEEE, Shenyang, China, June 2012.
 R. Ugolotti and S. Cagnoni, “Differential evolution based human body pose estimation from point clouds,” in Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1389–1396, ACM, Amsterdam, The Netherlands, July 2013.
 E. Cuevas, D. Zaldivar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, “Circle detection using discrete differential evolution optimization,” Pattern Analysis and Applications, vol. 14, no. 1, pp. 93–107, 2011.
 R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
 G. C. Onwubolu and D. Davendra, Differential Evolution: A Handbook for Global Permutation-Based Combinatorial Optimization, Springer, Berlin, Germany, 1st edition, 2009.
 R. Balamurugan and S. Subramanian, “Hybrid integer coded differential evolution—dynamic programming approach for economic load dispatch with multiple fuel options,” Energy Conversion and Management, vol. 49, no. 4, pp. 608–614, 2008.
 A. Ş. Uyar, B. Türkay, and A. Keleş, “A novel differential evolution application to short-term electrical power generation scheduling,” International Journal of Electrical Power and Energy Systems, vol. 33, no. 6, pp. 1236–1242, 2011.
 C.-S. Deng, B.-Y. Zhao, A.-Y. Deng, and C.-Y. Liang, “Hybrid-coding binary differential evolution algorithm with application to 0-1 knapsack problems,” in Proceedings of the International Conference on Computer Science and Software Engineering (CSSE '08), vol. 1, pp. 317–320, December 2008.
 M. F. Tasgetiren, P. N. Suganthan, and Q.-K. Pan, “An ensemble of discrete differential evolution algorithms for solving the generalized traveling salesman problem,” Applied Mathematics and Computation, vol. 215, no. 9, pp. 3356–3368, 2010.
 M. F. Tasgetiren, Q.-K. Pan, and Y.-C. Liang, “A discrete differential evolution algorithm for the single machine total weighted tardiness problem with sequence dependent setup times,” Computers and Operations Research, vol. 36, no. 6, pp. 1900–1915, 2009.
 J. Zhang, Y. Wu, Y. Guo, B. Wang, H. Wang, and H. Liu, “A hybrid harmony search algorithm with differential evolution for day-ahead scheduling problem of a microgrid with consideration of power flow constraints,” Applied Energy, vol. 183, pp. 791–804, 2016.
 S. Sivasubramani and K. S. Swarup, “Multiagent based differential evolution approach to optimal power flow,” Applied Soft Computing Journal, vol. 12, no. 2, pp. 735–740, 2012.
 P. Yan, G. Wang, A. Che, and Y. Li, “Hybrid discrete differential evolution algorithm for biobjective cyclic hoist scheduling with reentrance,” Computers & Operations Research, vol. 76, pp. 155–166, 2016.
 K. Ma, P. Yan, and W. Dai, “A hybrid discrete differential evolution algorithm for dynamic scheduling in robotic cells,” in Proceedings of the 13th International Conference on Service Systems and Service Management (ICSSSM '16), Kunming, China, June 2016.
 D. T. Do, S. Lee, and J. Lee, “A modified differential evolution algorithm for tensegrity structures,” Composite Structures, vol. 158, pp. 11–19, 2016.
 H. Zhang, Q. Yan, Y. Liu, and Z. Jiang, “An integer-coded differential evolution algorithm for simple assembly line balancing problem of type 2,” Assembly Automation, vol. 36, no. 3, pp. 246–261, 2016.
 J. Chakraborty, A. Konar, U. K. Chakraborty, and L. C. Jain, “Distributed cooperative multi-robot path planning using differential evolution,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 718–725, Hong Kong, June 2008.
 L. D. S. Coelho, N. Nedjah, and L. D. M. Mourelle, “Mobile robots: the evolutionary approach,” in Differential Evolution Approach Using Chaotic Sequences Applied to Planning of Mobile Robot in a Static Environment with Obstacles, pp. 3–22, Springer, Berlin, Germany, 2007.
 D. Lichtblau, “Differential evolution in discrete optimization,” International Journal of Swarm Intelligence and Evolutionary Computation, vol. 1, 10 pages, 2012.
 S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
 Y. Zhang, S. Wang, and G. Ji, “A comprehensive survey on particle swarm optimization algorithm and its applications,” Mathematical Problems in Engineering, vol. 2015, Article ID 931256, 38 pages, 2015.
 T. W. Liao, “Two hybrid differential evolution algorithms for engineering design optimization,” Applied Soft Computing Journal, vol. 10, no. 4, pp. 1188–1199, 2010.
 J. Lampinen and I. Zelinka, “Mixed integer-discrete-continuous optimization by differential evolution,” in Proceedings of the 5th International Conference on Soft Computing, pp. 77–81, 1999.
 J. Lampinen and I. Zelinka, “Mixed integer-discrete-continuous optimization by differential evolution—part 2: a practical example,” in Proceedings of the 5th International Mendel Conference on Soft Computing (MENDEL '99), pp. 77–81, Brno University of Technology, Brno, Czech Republic, June 1999.
 D. Davendra and G. Onwubolu, “Forward backward transformation,” in Differential Evolution: A Handbook for Global Permutation-Based Combinatorial Optimization, pp. 35–80, Springer, Berlin, Germany, 2009.
 P. Gajdoš and I. Zelinka, “On the influence of different number generators on results of the symbolic regression,” Soft Computing, vol. 18, no. 4, pp. 641–650, 2014.
 X. Yuan, A. Su, H. Nie, Y. Yuan, and L. Wang, “Application of enhanced discrete differential evolution approach to unit commitment problem,” Energy Conversion and Management, vol. 50, no. 9, pp. 2449–2456, 2009.
 R. Angira and B. V. Babu, “Optimization of process synthesis and design problems: a modified differential evolution approach,” Chemical Engineering Science, vol. 61, no. 14, pp. 4707–4721, 2006.
 H. Schmidt and G. Thierauf, “A combined heuristic optimization technique,” Advances in Engineering Software, vol. 36, no. 1, pp. 11–19, 2005.
 M. F. Tasgetiren, Q.-K. Pan, P. N. Suganthan, and Y.-C. Liang, “A discrete differential evolution algorithm for the no-wait flowshop scheduling problem with total flowtime criterion,” in Proceedings of the IEEE Symposium on Computational Intelligence in Scheduling (SCIS '07), pp. 251–258, April 2007.
 D. Datta and J. R. Figueira, “A real-integer-discrete-coded differential evolution,” Applied Soft Computing Journal, vol. 13, no. 9, pp. 3884–3893, 2013.
 R. Ugolotti, G. Micconi, J. Aleotti, and S. Cagnoni, “GPU-based point cloud recognition using evolutionary algorithms,” in Applications of Evolutionary Computation: 17th European Conference (EvoApplications 2014), Granada, Spain, April 2014, Revised Selected Papers, pp. 489–500, Springer, Berlin, Germany, 2014.
 S. Das, A. Abraham, and A. Konar, “Automatic clustering using an improved differential evolution algorithm,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 1, pp. 218–237, 2008.
 L. G. Fraga and C. A. Coello Coello, “A review of applications of evolutionary algorithms in pattern recognition,” in Pattern Recognition, Machine Intelligence and Biometrics, P. S. P. Wang, Ed., pp. 3–28, Springer, Berlin, Germany, 2011.
 E. Corrochano and J. Eklundh, Eds., Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 14th Iberoamerican Conference on Pattern Recognition (CIARP 2009), Guadalajara, Jalisco, México, November 2009, vol. 5856 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 2009.
 R. Ugolotti, Y. S. G. Nashed, P. Mesejo, S. Ivekovič, L. Mussi, and S. Cagnoni, “Particle swarm optimization and differential evolution for model-based object detection,” Applied Soft Computing, vol. 13, no. 6, pp. 3092–3150, 2013.
 E. Cuevas, M. González, D. Zaldívar, and M. Pérez-Cisneros, “Multi-ellipses detection on images inspired by collective animal behavior,” Neural Computing and Applications, vol. 24, no. 5, pp. 1019–1033, 2014.
 K. P. Chandar and T. S. Savithri, “3D face model estimation based on similarity transform using differential evolution optimization,” Procedia Computer Science, vol. 54, pp. 621–630, 2015.
 R. Storn and K. Price, “Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces,” Tech. Rep., 1995.
 E. Mezura-Montes, J. Velázquez-Reyes, and C. A. Coello Coello, “A comparative study of differential evolution variants for global optimization,” in Proceedings of the 8th Annual Genetic and Evolutionary Computation Conference (GECCO '06), pp. 485–492, ACM, Seattle, Wash, USA, July 2006.
 S. Koziel and Z. Michalewicz, “Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization,” Evolutionary Computation, vol. 7, no. 1, pp. 19–44, 1999.
 Y. Gao, Y. Sun, and J. Wu, “Difference-genetic co-evolutionary algorithm for nonlinear mixed integer programming problems,” Journal of Nonlinear Science and Its Applications, vol. 9, no. 3, pp. 1261–1284, 2016.
 A. R. Butz, “Convergence with Hilbert's space filling curve,” Journal of Computer and System Sciences, vol. 3, no. 2, pp. 128–146, 1969.
 G. Breinholt and C. Schierz, “Algorithm 781: generating Hilbert's space-filling curve by recursion,” ACM Transactions on Mathematical Software, vol. 24, no. 2, pp. 184–189, 1998.
 J. K. Lawder and P. J. H. King, in Advances in Databases: 17th British National Conference on Databases (BNCOD 17), Exeter, UK, July 2000, Lecture Notes in Computer Science, Springer, Berlin, Germany, 2000.
 J. Skilling, “Programming the Hilbert curve,” in Proceedings of the Bayesian Inference and Maximum Entropy Methods, vol. 707 of AIP Conference Proceedings, pp. 381–387, Jackson Hole, Wyo, USA, 2004.
 T. Skopal, M. Krátký, J. Pokorný, and V. Snášel, “A new range query algorithm for universal B-trees,” Information Systems, vol. 31, no. 6, pp. 489–511, 2006.
 D. Eberly, “Distance between point and line, ray, or line segment,” April 2016, http://www.geometrictools.com/Source/Distance3D.html.
 “Appendix A—test function benchmarks for global optimization,” in Nature-Inspired Optimization Algorithms, X.-S. Yang, Ed., Elsevier, Oxford, UK, 2014.
 K. Fischer, B. Gärtner, and M. Kutz, “Fast smallest-enclosing-ball computation in high dimensions,” in Algorithms—ESA 2003, G. Di Battista and U. Zwick, Eds., vol. 2832 of Lecture Notes in Computer Science, pp. 630–641, Springer, Berlin, Germany, 2003.
 The Stanford 3D Scanning Repository, 2016, http://graphics.stanford.edu/data/3Dscanrep/.
Copyright
Copyright © 2016 Vojtěch Uher et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.