Abstract

Discovering and utilizing problem domain knowledge is a promising direction towards improving the efficiency of evolutionary algorithms (EAs) when solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained optimization functions with first-order derivatives more efficiently. VRS originates from the knowledge that, in such a function, the optimal solution is located at an extreme point at which the partial derivative with respect to each variable equals zero. From this collection of partial derivative equations, some quantitative relations among different variables can be obtained. These variable relations have to be satisfied at the optimal solution. With the use of such relations, VRS can reduce the number of variables and shrink the solution space when EAs are used to deal with the optimization function, thus improving both the optimization speed and the solution quality. To apply VRS to an optimization problem, we only need to modify the way the objective function is calculated; therefore, in practice, it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS.

1. Introduction

Optimization, including continuous optimization and discrete optimization, plays an important role in scientific research, management, industry, and so forth, given the fact that many problems in the real world are essentially optimization tasks. Evolutionary algorithms (EAs), such as genetic algorithms (GAs), ant colony optimization (ACO), and particle swarm optimization (PSO), have shown competitive performance when solving complex and large-scale optimization problems. To improve the efficiency of EAs, two aspects are important and deserve investigation. The first is the search capability of the EA itself, including both its exploitation and exploration capabilities. The other is how to effectively integrate domain knowledge about the optimization problem into EAs [1].

Previously, more attention was paid to the design of generic EA variants with higher search capability. Take PSO as an example. In recent decades, many enhanced PSO versions have been developed, such as comprehensive learning PSO [2], memetic fitness Euclidean-distance PSO [3], orthogonal learning PSO [4], and PSO with local search [5]. According to the no free lunch theorem [6], no algorithm is effective for all optimization problems. It is therefore hard to design an efficient EA that is suitable for all kinds of optimization problems. However, if we can make use of valuable domain knowledge implied in optimization problems, we may improve the efficiency of EAs by reducing the complexity of the optimization problems.

In the area of discrete optimization, problem domain knowledge has started to attract researchers' attention. For instance, the incorporation of knowledge-based strategies into the heuristics of swarm optimization has been demonstrated to be effective [7]. Note that the problem domain knowledge in discrete optimization (e.g., the scheduling problem [8, 9] and the spatial geoinformation services composition problem [10, 11]) depends on the concrete problems considered, and the knowledge extraction and discovery process is relatively subjective.

In comparison, in the area of continuous optimization, such as function optimization, problem domain knowledge is seldom considered. One may consider the integration of PSO with a gradient-based search technique as an instance in which problem domain knowledge is incorporated into EAs [12], as the gradient-based search technique utilizes the gradient information implied in the optimization problem to guide the search direction of the EA. It is believed that there should be some relations among the different variables at the optima of an optimization problem. In particular, in [1], the problem domain knowledge of variable symmetry was formulated. Based on this knowledge, an inner variable learning (IVL) strategy was proposed and incorporated into PSO, and a new PSO variant named PSO-IVL was developed. PSO-IVL demonstrates the effectiveness and potential of integrating problem domain knowledge into EAs. However, it is not a generic algorithm because it is only suitable for optimization problems with symmetric variables.

Therefore, we may wonder whether there exist more general methods for finding relations among the variables of an optimization problem; if so, we can utilize the knowledge of such relations to simplify the original optimization problem and improve the efficiency of EAs. As we know, a variable relation reflects variable dependence, which can be exploited to shrink the solution space of the original optimization problem.

Driven by this motivation, we investigate a method to discover underlying variable relations in an unconstrained optimization function with a first-order derivative. We find that the discovered variable relations can be used effectively to reduce the number of variables included in the optimization function when applying PSO (other EAs should be suitable as well) to that function optimization problem. Consequently, a variable reduction strategy (VRS) is developed and integrated into PSO variants. Experimental tests on several benchmark optimization functions and a real-world optimization problem demonstrate that VRS can reduce the complexity of the optimization functions and help PSO find high-quality solutions more efficiently.

2. Variable Reduction Strategy

Assume that $f(X)$, $X = (x_1, x_2, \ldots, x_n)$, has a first-order derivative. For the corresponding unconstrained optimization problem $\min f(X)$, the optimal solution arises from the relationships

$$\frac{\partial f(X)}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n. \quad (1)$$

It may sometimes be difficult to solve the equations above to obtain exact values of their variables. There are two reasons for this. The first is that the equations may be nonlinear and complex, which makes it difficult to obtain a complete analytical solution. The second is that there might be many extreme points for a multimodal optimization function; that is to say, the solutions of the equations related to such an optimization function are not unique. However, some quantitative and explicit relations among the variables can still be determined from (1). We do not have to find all the relations among the variables: if we can obtain just some variable relations from (1), we can reduce the number of variables and shrink the solution space, thus decreasing the complexity of the original optimization function.

For example, if from (1) we can form a relation described as

$$x_k = g(x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n), \quad (2)$$

we say that $x_k$ can be expressed by the remaining variables. This variable relation has to be satisfied at the optimal solution. Under this condition, in the course of using an EA (e.g., PSO) to solve the optimization function, the value of $x_k$ can be calculated directly from (2) and the values of the other variables. As a result, in the problem-solving process, variable $x_k$ can be reduced.
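To make this mechanism concrete, the following minimal Python sketch (our illustration, not part of the original formulation; the names `make_reduced_objective`, `relation`, and `k` are ours) wraps an objective function so that the reduced variable is computed from the core variables through a known relation:

```python
# Minimal sketch: wrap an objective so that one reduced variable is
# computed from the core variables via a known relation of form (2).

def make_reduced_objective(f, relation, k):
    """Return an objective defined over the core variables only.

    f        : original objective taking the full variable list
    relation : function computing the reduced variable from the core variables
    k        : index at which the reduced variable sits in the full list
    """
    def reduced_f(core):
        x = list(core)
        x.insert(k, relation(core))  # reconstruct the full candidate solution
        return f(x)
    return reduced_f
```

An EA then evolves only the core variables; the objective evaluation itself reconstructs the full solution, which is exactly why VRS can be combined with any EA.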

To give an intuitive illustration, let us now consider a two-variable optimization problem $\min f(x_1, x_2)$ as an example. The solution space of this optimization problem is illustrated in Figure 1(a). Setting the derivative of the optimization function with respect to $x_2$ equal to zero, we obtain the following:

$$\frac{\partial f(x_1, x_2)}{\partial x_2} = 0. \quad (3)$$

From (3), we get a relation $x_2 = g(x_1)$. With this relation, variable $x_2$ can be reduced and the original optimization problem is changed into the one-variable problem $\min f(x_1, g(x_1))$. The solution space of the optimization problem after variable reduction changes accordingly, as displayed in Figure 1(b). As a result, the original two-variable optimization problem is transformed into a one-variable optimization problem after variable reduction. In addition, the solution space shrinks from two dimensions to one. Therefore, with variable reduction, the complexity of this optimization problem is reduced noticeably.
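As a hypothetical stand-in for such an example (the concrete function below is ours, chosen only for illustration), take $f(x_1, x_2) = (x_1 - 1)^2 + (x_2 - x_1)^2$: setting $\partial f/\partial x_2 = 2(x_2 - x_1) = 0$ gives the relation $x_2 = x_1$, and the sketch below reduces it with the wrapper defined above:

```python
# Hypothetical stand-in example: f(x1, x2) = (x1 - 1)^2 + (x2 - x1)^2.
# Setting df/dx2 = 2*(x2 - x1) = 0 yields the relation x2 = x1.

def f(x):
    x1, x2 = x
    return (x1 - 1.0) ** 2 + (x2 - x1) ** 2

# x2 (index k=1) is reduced; the core variable is x1.
reduced_f = make_reduced_objective(f, relation=lambda core: core[0], k=1)

print(reduced_f([1.0]))  # 0.0 -- the optimum of the reduced one-variable problem
```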

It is clear that if more variable relations can be found, then more variables can be reduced. Let us introduce several essential definitions.
(i) Core variable: a variable that is used to represent other variables.
(ii) Reduced variable: a variable that can be represented by core variables.
(iii) Optimization variable core: the collection of all core variables present in an optimization function. We denote it by $X_c$, $X_c \subseteq \{x_1, x_2, \ldots, x_n\}$.

Obviously, the fewer core variables and the more reduced variables we obtain, the more the complexity of the original optimization function is reduced. Therefore, the task of the variable reduction strategy (VRS) is to obtain an optimization variable core of minimum cardinality. A general theory and method for finding a minimum set of core variables is still an open problem, since the equations described in (1) may be too complex. However, we can at least safely conclude that if a variable appears in a differentiable optimization function with order less than or equal to three, this variable can be reduced: setting the corresponding partial derivative to zero yields a polynomial equation of degree at most two in that variable, which can be solved analytically.
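As a sketch of how such low-order relations can be extracted automatically, symbolic differentiation can be applied; the example below uses the sympy library (an implementation choice of ours, not a method prescribed above) on the same hypothetical stand-in function:

```python
# Illustrative sketch: derive a variable relation by solving one
# stationarity equation symbolically with sympy.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = (x1 - 1)**2 + (x2 - x1)**2  # hypothetical stand-in objective

# Solve df/dx2 = 0 for x2; this succeeds whenever x2 appears with
# sufficiently low order in f.
relation = sp.solve(sp.diff(f, x2), x2)
print(relation)  # [x1] -> the relation x2 = x1, so x2 can be reduced
```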

In general, the performance of EAs degrades noticeably as the dimensionality of an optimization problem increases. VRS can help alleviate this problem.

3. Experimental Study on Benchmark Optimization Problems

3.1. Experimental Setting

VRS is integrated into the basic version of PSO [13] to obtain a new PSO variant called PSO-VRS. We apply PSO-VRS to several benchmark optimization functions to test the effectiveness of VRS. The Rosenbrock function [2], the variably dimensioned function [14], the Wood function [14], and the Ackley function [2] are selected as the benchmark optimization functions. Each function was optimized by several state-of-the-art PSO variants as well as PSO-VRS. We present the details of the variable reduction procedure for each function, provide the computational results of each algorithm, and show the evolutionary process of PSO-VRS. The PSO variants used in the comparative study are listed below:
(i) PSO with inertia weight (PSO-w) [15];
(ii) PSO with constriction factor (PSO-cf) [16];
(iii) unified PSO (UPSO) [17];
(iv) fully informed particle swarm (FIPS) [18];
(v) FDR-PSO [19];
(vi) CPSO-H [20];
(vii) CLPSO [2];
(viii) PSO-VRS.

It should be noted that the parameter settings of PSO-w, PSO-cf, UPSO, FDR-PSO, FIPS, CPSO-H, and CLPSO are the same as those in [2]. The related parameters of PSO-VRS are set as follows: the inertia weight, the maximum number of function evaluations, the acceleration coefficients, and the number of particles.
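For reference, a minimal global-best PSO loop in Python is sketched below; the parameter defaults shown are illustrative assumptions rather than the exact experimental settings, and the only VRS-specific aspect is that the objective passed in is the reduced one:

```python
# Minimal global-best PSO sketch (parameter defaults are illustrative
# assumptions). VRS requires no change to this loop: only the objective
# function f is replaced by its reduced counterpart.
import random

def pso_minimize(f, dim, bounds, n_particles=20, max_fes=20000,
                 w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [row[:] for row in X]          # personal best positions
    Pf = [f(x) for x in X]             # personal best fitness values
    g = min(range(n_particles), key=lambda i: Pf[i])
    gbest, gf = P[g][:], Pf[g]         # global best position and fitness
    fes = n_particles
    while fes < max_fes:
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            fes += 1
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf
```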

3.2. Variable Reduction Process of Test Optimization Functions
3.2.1. Rosenbrock Function

This function is formulated as

$$f(X) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right], \quad (4)$$

which is multimodal and nonseparable and exhibits a very narrow valley leading from a local optimum to the global optimum [21]. Setting the partial derivatives to zero, we have

$$\frac{\partial f}{\partial x_1} = -400 x_1\left(x_2 - x_1^2\right) + 2(x_1 - 1) = 0, \quad (5)$$

$$\frac{\partial f}{\partial x_i} = 200\left(x_i - x_{i-1}^2\right) - 400 x_i\left(x_{i+1} - x_i^2\right) + 2(x_i - 1) = 0, \quad i = 2, \ldots, n-1, \quad (6)$$

$$\frac{\partial f}{\partial x_n} = 200\left(x_n - x_{n-1}^2\right) = 0. \quad (7)$$

From (5), we have

$$x_2 = x_1^2 + \frac{x_1 - 1}{200 x_1}. \quad (8)$$

Subsequently, from (6), we have

$$x_{i+1} = x_i^2 + \frac{x_i - x_{i-1}^2}{2 x_i} + \frac{x_i - 1}{200 x_i}, \quad i = 2, \ldots, n-1. \quad (9)$$

Furthermore, from (7),

$$x_n = x_{n-1}^2. \quad (10)$$

We can observe from the expressions (8)–(10) above that all the other variables in the Rosenbrock function can be calculated recursively from $x_1$, such that the objective function can be evaluated with the aid of the value of $x_1$ and (8)–(10) alone. Therefore, $x_1$ is the only core variable of this optimization function. As a result, the original multivariable optimization problem is in fact transformed into a one-variable optimization problem. The Rosenbrock function with 10 variables was optimized by each PSO variant. The search range of each variable is $[-3, 3]$.
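A sketch of the resulting reduced objective is given below (our illustration; the guard against division by a near-zero $x_i$ is an implementation assumption, and the forward recursion uses (8) and (9), while (10) serves only as a consistency condition at the optimum):

```python
# Sketch of the reduced Rosenbrock objective: all variables follow from
# the core variable x1 through relations (8) and (9).
def rosenbrock_reduced(core, n=10):
    x = [core[0]]
    for i in range(n - 1):
        xi = x[-1]
        if abs(xi) < 1e-9:
            return 1e9  # guard: the relations divide by x_i
        if i == 0:
            nxt = xi * xi + (xi - 1.0) / (200.0 * xi)                 # relation (8)
        else:
            nxt = (xi * xi + (xi - x[-2] ** 2) / (2.0 * xi)
                   + (xi - 1.0) / (200.0 * xi))                       # relation (9)
        x.append(nxt)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(n - 1))
```

With the PSO sketch from Section 3.1, the reduced problem is then solved by a call such as `pso_minimize(rosenbrock_reduced, dim=1, bounds=(-3, 3))`.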

3.2.2. Variably Dimensioned Function

This function is described as

$$f(X) = \sum_{j=1}^{n}(x_j - 1)^2 + \left[\sum_{j=1}^{n} j\,(x_j - 1)\right]^2 + \left[\sum_{j=1}^{n} j\,(x_j - 1)\right]^4. \quad (11)$$

With regard to this function, we obtain

$$\frac{\partial f(X)}{\partial x_i} = 2(x_i - 1) + 2i\,S + 4i\,S^3 = 0, \quad i = 1, \ldots, n, \quad \text{where } S = \sum_{j=1}^{n} j\,(x_j - 1). \quad (12)$$

According to (12), for any two variables $x_i$ and $x_j$ we have

$$\frac{x_i - 1}{i} = \frac{x_j - 1}{j} = -\left(S + 2S^3\right). \quad (13)$$

From (13), we get

$$x_j = 1 + \frac{j}{i}\,(x_i - 1). \quad (14)$$

In the sequel, letting $i = 1$ in (14), we can obtain $x_j = 1 + j\,(x_1 - 1)$ for $j = 2, \ldots, n$. As a result, all the other variables can be represented by $x_1$, which is the core variable of this optimization function.
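The reduced objective then becomes a one-variable function, as in the sketch below (ours; the dimension $n = 10$ is an illustrative choice):

```python
# Sketch of the reduced variably dimensioned objective: relation (14)
# with i = 1 gives x_j = 1 + j*(x1 - 1), so x1 is the only core variable.
def var_dim_reduced(core, n=10):
    x1 = core[0]
    x = [1.0 + j * (x1 - 1.0) for j in range(1, n + 1)]   # x_j = 1 + j(x1 - 1)
    s = sum(j * (x[j - 1] - 1.0) for j in range(1, n + 1))
    return sum((xj - 1.0) ** 2 for xj in x) + s ** 2 + s ** 4
```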

3.2.3. Wood Function

This function comes in the form

$$\begin{aligned} f(X) = {}& 100\left(x_1^2 - x_2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 \\ &+ 10.1\left[(x_2 - 1)^2 + (x_4 - 1)^2\right] + 19.8\,(x_2 - 1)(x_4 - 1). \end{aligned} \quad (15)$$

We determine the related derivatives and set them to zero as shown below:

$$\begin{aligned} \frac{\partial f}{\partial x_1} &= 400 x_1\left(x_1^2 - x_2\right) + 2(x_1 - 1) = 0, \\ \frac{\partial f}{\partial x_2} &= -200\left(x_1^2 - x_2\right) + 20.2\,(x_2 - 1) + 19.8\,(x_4 - 1) = 0, \\ \frac{\partial f}{\partial x_3} &= 360 x_3\left(x_3^2 - x_4\right) + 2(x_3 - 1) = 0, \\ \frac{\partial f}{\partial x_4} &= -180\left(x_3^2 - x_4\right) + 20.2\,(x_4 - 1) + 19.8\,(x_2 - 1) = 0. \end{aligned} \quad (16)$$

From (16), one has

$$x_2 = x_1^2 + \frac{x_1 - 1}{200 x_1}, \quad (17)$$

$$x_4 = 1 + \frac{200\left(x_1^2 - x_2\right) - 20.2\,(x_2 - 1)}{19.8}, \quad (18)$$

$$x_3^2 = x_4 + \frac{20.2\,(x_4 - 1) + 19.8\,(x_2 - 1)}{180}, \quad (19)$$

$$x_3 = \sqrt{x_4 + \frac{20.2\,(x_4 - 1) + 19.8\,(x_2 - 1)}{180}}. \quad (20)$$

We can see from (17)–(20) that $x_2$ can be calculated from $x_1$; $x_4$ can be calculated with the use of both $x_1$ and $x_2$; and $x_3$ can be calculated based on $x_2$ and $x_4$. In fact, $x_2$, $x_3$, and $x_4$ can all be computed from $x_1$, which is therefore the only core variable of this optimization function. Note that, in the search process of PSO-VRS, the value under the square root in (20) may be smaller than zero. Under this condition, the function fitness is penalized and set to 1000, to drive the solution back to the feasible area.
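A sketch of the reduced Wood objective, including the penalty of 1000 described above (the positive square root in (20) and the guard for $x_1$ near zero are implementation assumptions of ours), could read:

```python
import math

# Sketch of the reduced Wood objective: x2, x4, and x3 follow from x1 via
# relations (17)-(20); an infeasible square-root argument is penalized.
def wood_reduced(core):
    x1 = core[0]
    if abs(x1) < 1e-9:
        return 1000.0  # guard: relation (17) divides by x1
    x2 = x1 ** 2 + (x1 - 1.0) / (200.0 * x1)                          # (17)
    x4 = 1.0 + (200.0 * (x1 ** 2 - x2) - 20.2 * (x2 - 1.0)) / 19.8    # (18)
    arg = x4 + (20.2 * (x4 - 1.0) + 19.8 * (x2 - 1.0)) / 180.0        # (19)
    if arg < 0.0:
        return 1000.0  # penalty drives the search back to the feasible area
    x3 = math.sqrt(arg)  # positive root of (20), consistent with x3 = 1
    return (100.0 * (x1 ** 2 - x2) ** 2 + (x1 - 1.0) ** 2 + (x3 - 1.0) ** 2
            + 90.0 * (x3 ** 2 - x4) ** 2
            + 10.1 * ((x2 - 1.0) ** 2 + (x4 - 1.0) ** 2)
            + 19.8 * (x2 - 1.0) * (x4 - 1.0))
```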

3.2.4. Ackley Function

This function reads as

$$f(X) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos\left(2\pi x_i\right)\right) + 20 + e.$$

Regarding this function, the partial derivatives set to zero are

$$\frac{\partial f}{\partial x_i} = \frac{4 x_i}{n\sqrt{\frac{1}{n}\sum_{j=1}^{n} x_j^2}}\,\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{j=1}^{n} x_j^2}\right) + \frac{2\pi}{n}\,\sin\left(2\pi x_i\right)\exp\left(\frac{1}{n}\sum_{j=1}^{n}\cos\left(2\pi x_j\right)\right) = 0, \quad i = 1, \ldots, n. \quad (21)$$

From (21), we obtain the following relationship between any two variables $x_i$ and $x_j$:

$$\frac{x_i}{\sin\left(2\pi x_i\right)} = \frac{x_j}{\sin\left(2\pi x_j\right)}. \quad (22)$$

The simplest solution satisfying (22) is $x_2 = x_3 = \cdots = x_n = x_1$. Therefore, variable $x_1$ can be taken as the only core variable.
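With all variables tied to $x_1$, the averages inside the exponentials collapse and the reduced Ackley objective becomes a one-variable function, as in the sketch below (ours):

```python
import math

# Sketch of the reduced Ackley objective with x_i = x1 for all i:
# mean(x_i^2) = x1^2 and mean(cos(2*pi*x_i)) = cos(2*pi*x1).
def ackley_reduced(core):
    x1 = core[0]
    return (-20.0 * math.exp(-0.2 * abs(x1))   # sqrt of mean square = |x1|
            - math.exp(math.cos(2.0 * math.pi * x1))
            + 20.0 + math.e)
```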

3.3. Computational Results and Comparative Study

To solve each test optimization function, each algorithm was run 30 times. The corresponding computational results are listed in Table 1, where Mean is the mean fitness value over the 30 runs, Std is the corresponding standard deviation, and FEs is the mean number of function evaluations needed to obtain the results. The evolution of the best fitness value of each function obtained by PSO-VRS is displayed in Figure 2.

From the results for the Rosenbrock function in Table 1, we can see that it is generally difficult for the other PSO variants to find an optimal or near-optimal solution of the Rosenbrock function. However, with the aid of VRS, PSO-VRS can use the basic PSO to find the optimal solution efficiently, within about 12,000 fitness evaluations on average. Moreover, Figure 2(a) demonstrates that VRS enables PSO to converge to high-quality solutions quickly.

The results for the variably dimensioned function listed in Table 1 reveal that it is hard for most of the comparative PSO variants without the integration of VRS to generate a solution of high quality. In contrast, PSO-VRS always produces the optimal solution in a relatively small number of function evaluations (about 12,684 on average). Figure 2(b) demonstrates that PSO-VRS converges to the optimal solution quickly. The variables of the variably dimensioned function are highly interrelated, which causes a great deal of difficulty for typical PSO variants in forming satisfactory solutions. On the other hand, VRS utilizes the underlying variable relations and translates the original problem into a one-variable optimization task. This indicates that VRS significantly reduces the complexity of the variably dimensioned function.

We can see from Table 1 that, compared with the other PSO variants, PSO-VRS generates the best (and optimal) result for the Wood function within no more than 13,100 function evaluations on average. Though the variable relations described in (17)–(20) are more complex, they effectively support PSO in finding the optimal solution. Figure 2(c) underlines that PSO-VRS converges quickly.

It can also be observed from Table 1 that, for the Ackley function, both CLPSO and PSO-VRS always find the best results compared to the other PSO variants. The advantage of PSO-VRS over CLPSO is that PSO-VRS obtains the optimal solution at the cost of far fewer function evaluations. The fast convergence of PSO-VRS when solving the Ackley function is shown in Figure 2(d).

4. Experimental Study on a Real-World Optimization Problem

Frequency-modulated (FM) sound wave synthesis plays an important role in several modern music systems, and optimizing the parameters of an FM synthesizer is a six-dimensional optimization problem in which the vector to be optimized is $X = (a_1, \omega_1, a_2, \omega_2, a_3, \omega_3)$ [22, 23]. This problem is a highly complex multimodal one having strong epistasis, with minimum value $f(X^*) = 0$ [24]. It has frequently been solved by EAs or taken as a benchmark real-world optimization problem to test the performance of new EA variants [25, 26]. The optimization problem is formulated as follows [22]:

$$\begin{aligned} y(t) &= a_1 \sin\left(\omega_1 t\theta + a_2 \sin\left(\omega_2 t\theta + a_3 \sin\left(\omega_3 t\theta\right)\right)\right), \\ y_0(t) &= 1.0\,\sin\left(5.0\,t\theta - 1.5\,\sin\left(4.8\,t\theta + 2.0\,\sin\left(4.9\,t\theta\right)\right)\right), \end{aligned} \quad (23)$$

where $\theta = 2\pi/100$ and $y_0(t)$ is the target sound wave.

The objective function is the sum of the squared errors between the estimated wave and the target wave:

$$f(X) = \sum_{t=0}^{100}\left(y(t) - y_0(t)\right)^2. \quad (24)$$

To use the variable reduction strategy, we let the derivative of the objective function with respect to variable $a_1$ equal zero and obtain

$$\frac{\partial f(X)}{\partial a_1} = 2\sum_{t=0}^{100}\left(a_1 s(t) - y_0(t)\right) s(t) = 0, \quad (25)$$

where $s(t) = \sin\left(\omega_1 t\theta + a_2 \sin\left(\omega_2 t\theta + a_3 \sin\left(\omega_3 t\theta\right)\right)\right)$.

From (25), we have

$$a_1 = \frac{\sum_{t=0}^{100} y_0(t)\, s(t)}{\sum_{t=0}^{100} s(t)^2}. \quad (26)$$

According to (26), variable $a_1$ can be calculated from the other five variables. Therefore, $a_1$ is the reduced variable and the collection $\{\omega_1, a_2, \omega_2, a_3, \omega_3\}$ is the corresponding optimization variable core. With VRS, the solution space shrinks from six dimensions to five.
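A sketch of the resulting reduced objective (ours; the target-wave constants follow the benchmark definition in (23), and the guard against a vanishing denominator in (26) is an implementation assumption) could read:

```python
import math

# Sketch of the reduced FM-synthesis objective: relation (26) recovers a1
# from the five core variables, so the search space is five-dimensional.
THETA = 2.0 * math.pi / 100.0

def y0(t):
    # Target sound wave from (23).
    return math.sin(5.0 * t * THETA
                    - 1.5 * math.sin(4.8 * t * THETA
                                     + 2.0 * math.sin(4.9 * t * THETA)))

def fm_reduced(core):
    w1, a2, w2, a3, w3 = core
    s = [math.sin(w1 * t * THETA
                  + a2 * math.sin(w2 * t * THETA
                                  + a3 * math.sin(w3 * t * THETA)))
         for t in range(101)]
    denom = sum(v * v for v in s)
    if denom < 1e-12:
        return 1e9  # guard: relation (26) divides by sum of s(t)^2
    a1 = sum(y0(t) * s[t] for t in range(101)) / denom  # relation (26)
    return sum((a1 * s[t] - y0(t)) ** 2 for t in range(101))
```

At the target parameters $(\omega_1, a_2, \omega_2, a_3, \omega_3) = (5.0, -1.5, 4.8, 2.0, 4.9)$, relation (26) yields $a_1 = 1.0$ and the objective evaluates to zero.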

To evaluate the impact of VRS on this optimization problem, we use PSO-w, PSO-cf, UPSO, FDR-PSO, FIPS, CPSO-H, and CLPSO, each with and without the integration of VRS, to solve the problem. Each algorithm is run 30 times.

From Table 2, we can see that, for every PSO variant, the results produced with the integration of VRS are better than those produced without it. This improvement is especially significant when FIPS and CLPSO are taken as the solvers. These results demonstrate the potential of applying VRS to real-world optimization problems. Moreover, in this optimization problem only one variable is reduced, which indicates that, when applying VRS to optimization problems, we do not have to find all the quantitative variable relations or reduce most of the variables; even the reduction of a small number of variables can be beneficial and improve the efficiency of EAs.

5. Conclusions

The utilization of domain knowledge associated with an optimization problem can reduce the complexity of the original problem and facilitate the solution search of EAs. In this study, we investigate the underlying knowledge of quantitative variable relations that have to be satisfied at the optimal solutions of an unconstrained optimization function with a first-order derivative. Based on these relations, we propose a variable reduction strategy (VRS). The essence of VRS is to find an optimization variable core with the minimum number of core variables. Computational results and comparative studies carried out for several benchmark optimization functions and a real-life optimization problem demonstrate that VRS can significantly improve the efficiency of PSO variants. At present, we cannot guarantee that VRS is applicable to every unconstrained optimization problem. However, it is worthwhile to check the variable relations and use VRS when applying EAs to unconstrained optimization problems. VRS is expected to have broad application potential in real-world optimization problems.

Future research can be carried out in four directions. First, although the variable reduction strategy is generic and effective, on some occasions it might be very difficult to obtain the variable relations from the partial derivatives of an optimization function because of the complexity of these derivatives. To construct a solid and comprehensive theory of variable reduction, we may investigate whether there are generic and formal theories for finding explicit variable relations from a group of equations. Second, we may also consider formulating the underlying variable relations by approximate methods, such as neural networks. The third direction is to further study the variable reduction strategy so as to apply it to constrained optimization problems. The fourth is to test the efficiency and effectiveness of the variable reduction strategy on more real-world optimization problems.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC, 51178193 and 41001220). Guohua Wu is supported by the China Scholarship Council under Grant no. 201206110082. The authors thank Dr. Ponnuthurai Nagaratnam Suganthan for providing them with the source codes of the comparative algorithms and for giving insightful suggestions and comments.