Abstract

Mobile edge computing (MEC) provides physical resources closer to end users and has become a valuable complement to cloud computing. The rapid growth of MEC gives rise to many multiobjective optimization (MOO) problems. This paper proposes a MOO algorithm called SAMOACOMV, which offers a new option for solving the MOO problems of MEC. We improve the ACOMV algorithm, which is only suitable for solving mixed-variable single-objective optimization (SOO) problems, and propose a MOACOMV algorithm suitable for solving mixed-variable MOO problems. Because the performance of MOACOMV depends heavily on its parameter settings, we further propose the SAMOACOMV algorithm, which uses a self-adaptive parameter-setting scheme. Furthermore, the paper designs some mixed-variable MOO benchmark problems to test and compare the performance of the SAMOACOMV algorithm. The experiments indicate that the SAMOACOMV algorithm has excellent overall performance and is an attractive choice for solving mixed-variable MOO problems.

1. Introduction

In recent years, mobile edge computing (MEC), as a powerful computing paradigm, provides sufficient computing resources for the internet of things (IoT) [1]. Edge computing extends traditional cloud services to the edge of the network and closer to users and is suitable for network services with low latency requirements. There are many multiobjective optimization (MOO) problems in MEC, and the research on MOO for MEC is also a hot topic. Liu et al. [1] propose a multiobjective resource allocation method, named MRAM, and the method is leveraged to optimize the time cost of IoT applications, load balance, and energy consumption of MEC servers. Huang et al. [2] present a multiobjective whale optimization algorithm (MOWOA) based on time and energy consumption to solve the optimal offloading mechanism of computation offloading in MEC. Fan et al. [3] propose an algorithm based on particle swarm optimization (PSO) to solve the MOO of the container-based microservice scheduling, aiming to optimize network latency among microservices, reliability of microservice applications, and load balancing of the cluster.

Xu et al. [4] present a multiobjective computation offloading method (MOC) for the internet of vehicles (IoV) in MEC, which jointly decreases the load-balancing rate, reduces the energy consumption of ECDs, and shortens the time needed to process computing tasks.

This paper studies the multiobjective optimization algorithm, which provides a new choice for MOO in MEC. The classic MOO algorithm converts the multiple objective function values into a single value according to certain rules and then applies single-objective optimization algorithms to solve the result [5]. There are three common converting rules [6]: computing a weighted sum of the objective function values; calculating the distance between the objective-value vector and a given decision vector; and finding the maximum relative difference between each objective value and its corresponding given value. The classic MOO algorithm is essentially a single-objective optimization algorithm, which cannot truly solve the MOO problem. Most modern MOO algorithms are heuristic algorithms that can find the Pareto solution set. Some well-known algorithms are NSGA-II [7], SPEA2 [8], PAES [9], and NSGA-III [10] based on evolutionary algorithms; SMPSO [11] and OMOPSO [12] based on particle swarm optimization; GDE3 [13], MOEAD [14], and MOEA/D-IEpsilon [15] based on differential evolution; MOACO [16], P-ACO [17], MACS [18], Monaco [19], and SACO [20] based on the ant colony algorithm; and so on. Other heuristic MOO algorithms include MOO algorithms based on simulated annealing, tabu search, and immune algorithms, as well as new algorithms obtained by improving or hybridizing various algorithms. According to the no free lunch (NFL) theorems in [21], the average performance of all algorithms over all MOO problems is the same, but different algorithms show different performances on different optimization problems. Therefore, it is another active research direction to study which algorithms suit a specific optimization problem, or which problems suit a given optimization algorithm.

Referring to the classification of optimization problems in [22], MOO problems can be divided into four categories according to whether their variable domains are continuous or not:
(i) Continuous-variable (CV) MOO: the range of every variable is a continuous domain. These continuous variables are usually mapped to real numbers.
(ii) Pseudo-discrete-variable (PDV) MOO: the range of every variable is an ordered discrete domain, meaning the variable values are arranged in ascending or descending order according to certain rules. Pseudo-discrete variables are usually mapped to integers.
(iii) Real-discrete-variable (RDV) MOO: the range of every variable is a disordered discrete domain, meaning the variable values cannot be arranged according to certain rules. Such discrete variables are usually called categorical variables.
(iv) Mixed-variable MOO: the ranges of the variables include both continuous and discrete domains.

According to the NFL theorem, in order to obtain better optimization performance, different types of MOO problems should use different types of optimization algorithms. The research on continuous-variable MOO and pseudo-discrete variable MOO is relatively mature. Most of the aforementioned heuristic algorithms or their variants are suitable for solving these two types of problems. There are a few studies on mixed-variable MOO.

Manson et al. [23] present a novel Bayesian multiobjective algorithm (MVMOO) capable of simultaneously optimizing both discrete and continuous input variables. The algorithm utilizes Gaussian processes as surrogates in combination with a novel distance metric based upon Gower similarity. MVMOO was able to perform competitively when compared to NSGA-II with a substantially reduced experimental budget, providing a viable, efficient option when optimizing expensive mixed-variable multiobjective optimization problems.

Li et al. [24] propose an improved version of OLAR-PSO-d named OLAR-PSO-DE. The OLAR-PSO-DE utilizes a modified stagnation strategy and a dynamic hybridization strategy. The OLAR-PSO-DE is employed to optimize the design of the engine hood, which is a high-dimensional, multiobjective, and mixed-variable optimization problem. The comparative study and final hood optimization results prove that the proposed method can effectively solve complicated engineering problems.

Khokhar et al. [25] modify the continuous-variable version of the PSP algorithm to handle mixed variables. The performance of PSP was tested using a set of quality indicators with a benchmark test suite. And the performance was compared with the state-of-the-art multiobjective optimization algorithms. The modified PSP is found to be competitive when the total number of function evaluations is limited but faces an increased computational challenge when the number of design variables increases.

However, there are relatively few studies on discrete-variable MOO and mixed-variable MOO, even though such MOO problems are often encountered in engineering. Therefore, research on these two types of MOO algorithms is of great significance. This paper proposes the SAMOACOMV algorithm by improving the ACOMV algorithm [26]. The main work of the authors is as follows:
(i) Improve the ACOMV algorithm, originally designed for mixed-variable SOO problems, to make it suitable for solving mixed-variable MOO problems.
(ii) Propose a self-adaptive parameter-setting scheme for the algorithm and verify its superiority by comparison with a manual parameter-adjustment scheme.
(iii) Design some mixed-variable MOO benchmark problems to test and compare the performance of the SAMOACOMV algorithm.
(iv) Apply the SAMOACOMV algorithm to a spring design engineering problem and compare its performance with other well-known MOO algorithms.

2. Materials and Methods

2.1. ACOMV Algorithm

The ACOMV algorithm [26] is an ant colony optimization algorithm proposed by K. Socha and M. Dorigo for solving mixed-variable problems. The algorithm has excellent comprehensive performance when dealing with mixed-variable optimization problems, but for pure continuous optimization or pure discrete optimization, it has weaker performance than some specialized algorithms.

The basic process of the ACOMV algorithm is as follows: the first step is to initialize the solution archive by randomly creating some solutions and storing them in it. In the second step, the ants construct new solutions based on the solution archive; techniques such as local search and gradient descent can be used to construct new solutions and improve their quality. The third step is to refresh the solution archive with the new solutions, keeping the best solutions in the archive. Steps 2 and 3 are repeated until the termination criteria are met.

2.1.1. The Structure, Initialization, and Refresh of Solution Archive

ACOMV maintains a solution archive T, whose size |T| = k can be set in advance. For an n-dimensional continuous optimization problem with k archived feasible solutions, ACOMV stores the n variable values of each feasible solution and its objective function value in the solution archive. Figure 1 depicts the structure of the solution archive, where s_j^i represents the value of the i-th variable of the j-th solution and ω_j represents the weight of the j-th solution. The solutions in the archive are sorted by quality (such as the value of the objective function), so the position of a solution in the archive reflects its preference (pheromone).

Before the algorithm starts, k solutions are randomly generated and stored in the solution archive T. In each iteration of the algorithm, m ants generate m new solutions. These new solutions and the solutions from archive T form a set of k + m solutions, and the k best-quality solutions (e.g., by objective function value) are taken from this set to refresh archive T. The solutions in the archive are always sorted by quality, with the best solution at the top. In this way, the search process always tends toward the best-quality solutions, thereby solving the optimization problem.
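The archive maintenance described above can be sketched as follows. This is a minimal illustration for a single-objective minimization problem; the function names and the (solution, objective) tuple layout are ours, not the paper's:

```python
import random

def init_archive(k, bounds, objective):
    """Randomly create k solutions within the given variable bounds and
    store each with its objective value, best (lowest) objective on top."""
    archive = []
    for _ in range(k):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        archive.append((x, objective(x)))
    archive.sort(key=lambda s: s[1])
    return archive

def refresh_archive(archive, new_solutions, objective, k):
    """Merge the m new ant solutions with the archive (k + m candidates)
    and keep only the k best, sorted by quality."""
    pool = archive + [(x, objective(x)) for x in new_solutions]
    pool.sort(key=lambda s: s[1])
    return pool[:k]
```

A full ACOMV loop would alternate between ant-based construction of new solutions and `refresh_archive` until a termination criterion is met.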

2.1.2. Constructing New Solutions Probabilistically

Each ant constructs a new solution incrementally, that is, it selects the values of the solution variables one by one. First, the ant selects a solution from the solution archive based on the selection probability. The selection probability of the j-th solution is

p_j = ω_j / Σ_{r=1}^{k} ω_r, (1)

where the weight ω_j can be calculated using various formulas. In this paper, the Gaussian function is selected:

ω_j = (1 / (q k √(2π))) · exp(−(j − 1)² / (2 q² k²)), (2)

where q is an algorithm parameter and k is the number of solutions in the solution archive.
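The rank-based selection can be sketched as follows. This is a hedged reconstruction that uses the Gaussian rank weights of equation (2) as reported in the ACOR/ACOMV literature; the function names are ours:

```python
import math, random

def rank_weights(k, q):
    """Gaussian weight of the j-th ranked solution (j = 1 is the best):
    w_j = exp(-(j-1)^2 / (2 q^2 k^2)) / (q k sqrt(2 pi))."""
    return [math.exp(-(j - 1) ** 2 / (2.0 * q * q * k * k)) /
            (q * k * math.sqrt(2.0 * math.pi)) for j in range(1, k + 1)]

def selection_probabilities(k, q):
    """Normalize the rank weights into the selection probabilities p_j."""
    w = rank_weights(k, q)
    total = sum(w)
    return [wj / total for wj in w]

def select_solution_index(k, q):
    """Roulette-wheel pick of an archive index according to p_j."""
    p = selection_probabilities(k, q)
    r, acc = random.random(), 0.0
    for j, pj in enumerate(p):
        acc += pj
        if r <= acc:
            return j
    return k - 1
```

Smaller q concentrates the probability mass on the top-ranked solutions, which is what makes the archive position act as pheromone.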

Then, construct a new solution based on the selected solution. According to the probability density function P(x) for each dimension variable of the solution, the ant probabilistically extracts a new value in the neighborhood of the variable value of the solution, and these new values form a new solution. For different types of variables, the structure of the probability density function is different.

The P(x) of continuous variables is a Gaussian probability density function:

P(x) = g(x; μ, σ), (3)

where g(x; μ, σ) represents the Gaussian function of the variable x, μ is the mean value (taken as the selected solution's value of this variable), σ is the standard deviation, and ξ is the algorithm parameter used to scale σ.
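Sampling from (3) can be sketched as follows, using the standard ACOR choice σ = ξ · Σ_r |s_r − s_j| / (k − 1); treating that σ formula as an assumption carried over from the ACOMV literature, since the extracted text omits it:

```python
import random

def sample_continuous(archive_values, j, xi):
    """Sample a new value for one continuous variable around the j-th
    solution's value: mean mu = s_j, standard deviation
    sigma = xi * sum_r |s_r - s_j| / (k - 1)."""
    k = len(archive_values)
    mu = archive_values[j]
    sigma = xi * sum(abs(v - mu) for v in archive_values) / (k - 1)
    return random.gauss(mu, sigma)
```

Larger ξ widens the sampling neighborhood around the chosen solution, slowing convergence but encouraging exploration.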

The P(x) of ordered discrete variables is the same as (3), but with two modifications:
(i) The variable x is the index of the ordered discrete value within its range. If the range of x is {large, medium, small}, then x = 1 when the value is "large", x = 2 when the value is "medium", and x = 3 when the value is "small".
(ii) The new value drawn according to P(x) is rounded to the closest index in the domain. For example, a drawn value of 2.3 is rounded to 2, which corresponds to "medium."
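The two modifications above amount to sampling on the index axis and snapping back to a valid index, which can be sketched as (function name ours; same assumed σ formula as for the continuous case):

```python
import random

def sample_ordered_discrete(index_values, j, xi, domain):
    """Sample on the 1-based index axis as for a continuous variable,
    then round and clamp to a valid index of the ordered domain."""
    k = len(index_values)
    mu = index_values[j]
    sigma = xi * sum(abs(v - mu) for v in index_values) / (k - 1)
    idx = round(random.gauss(mu, sigma))
    idx = min(max(idx, 1), len(domain))   # indices run from 1 to |domain|
    return domain[idx - 1]
```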

The probability density function of disordered discrete variables is

o_l^i = w_l / Σ_{r=1}^{t_i} w_r, (4)

where o_l^i represents the probability of selecting the l-th variable value from the domain of the i-th variable of the solution, t_i is the number of available values of that variable, and w_l is the weight associated with the l-th available value. It is calculated as

w_l = ω_{j_l} / u_l^i + q / η, (5)

where ω_{j_l} is the weight corresponding to the best-quality solution in the solution archive whose value of this dimension variable is not empty, calculated as in (2). In particular, if this dimension variable is empty in all solutions, then ω_{j_l} is taken as 0. u_l^i is the number of solutions in the archive whose value of this dimension variable is not empty. q is an algorithm parameter, the same as in (2), and η is the number of unused values in the domain of the dimension variable; the q/η term applies only when η > 0.
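A sketch of the categorical weighting in (4) and (5) follows. It assumes a best-first sorted archive column, so the first occurrence of a value corresponds to the best-ranked solution using it; the function names and `rank_weight` callback are ours:

```python
def categorical_probabilities(column, domain, rank_weight, q):
    """Probability of picking each value of one categorical variable.
    column: the values this variable takes in the archive (top = best rank);
    rank_weight(j): Gaussian weight of 1-based rank j, as in equation (2).
    For value l: w_l = rank_weight(best rank using l) / u_l, plus q / eta
    when eta > 0, where u_l counts archive solutions using value l and
    eta counts domain values unused by the archive."""
    counts = {v: column.count(v) for v in domain}
    eta = sum(1 for v in domain if counts[v] == 0)
    w = []
    for v in domain:
        if counts[v] > 0:
            best_rank = column.index(v) + 1   # first = best, archive sorted
            wl = rank_weight(best_rank) / counts[v]
        else:
            wl = 0.0
        if eta > 0:
            wl += q / eta                     # keep unused values reachable
        w.append(wl)
    total = sum(w)
    return [wl / total for wl in w]
```

The q/η term gives every unused categorical value a nonzero chance, preventing premature loss of parts of the domain.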

2.2. MOACOMV Algorithm

We improve the single-objective optimization algorithm ACOMV to obtain the MOACOMV algorithm, which is suitable for solving MOO problems. The main improvement is to introduce the Pareto set, i.e., the non-inferior solution set [27], into the solution archive. Specifically, the solutions in the archive are sorted according to their Pareto characteristics, with the best-quality solutions placed at the top. After this improvement, good solutions are selected with higher probability, so the MOACOMV algorithm can find non-inferior solutions.

The solutions in the solution archive are arranged according to the following two rules:
(i) The solutions are sorted by non-inferior order, and solutions with a smaller order value are placed at the top of the archive. Following reference [28], the non-inferior order of a solution is given in Definition 1.
(ii) Solutions with the same non-inferior order are sorted by congestion degree, and the solution with a lower congestion degree is placed higher in the archive. Following reference [9], the congestion degree of a solution is given in Definition 2.

In the above two rules, the first rule ensures that the algorithm can find non-inferior solutions, and the second rule ensures that the distribution of these non-inferior solutions is as uniform as possible. The MOACOMV algorithm designed according to the above rules has excellent comprehensive performance.

Definition 1. Non-inferior order NIO(s) of a solution s: in the solution set T, take out its non-inferior solutions to form a solution set TU(z) with sequence number z = 0, and let the remaining solutions refresh the set T; repeat this process until T is empty, incrementing z by 1 on each repetition. Then NIO(s) is the sequence number z of the non-inferior solution set TU(z) that contains s.

Definition 2. Congestion degree CD(s) of a solution s: in the solution set T, let F(s) be the objective vector corresponding to s. Calculate the distance between F(s) and the objective vectors of the other solutions in T and take the minimum distance d_min. Then CD(s) is d_min multiplied by an adjustment coefficient α.
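The front-peeling procedure of Definition 1 can be sketched as follows (a minimal illustration for minimization; function names are ours):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_inferior_order(objs):
    """Definition 1: peel off successive non-dominated fronts; a solution's
    order z is the index of the front it belongs to (0 = non-dominated)."""
    order = [None] * len(objs)
    remaining = set(range(len(objs)))
    z = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            order[i] = z
        remaining -= front
        z += 1
    return order
```

Sorting the archive by this order (ties broken by congestion degree, per Definition 2) yields the arrangement described by the two rules above.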

2.3. SAMOACOMV Algorithm

The ant colony algorithm needs to set some parameters, which have a huge impact on the performance of the algorithm. Since the convergence speed of the algorithm and the diversity of the solution are always contradictory, how to obtain a compromised excellent performance through proper parameter settings is the purpose of studying parameter settings.

This paper adopts a self-adaptive parameter control method to adjust the parameters of the MOACOMV algorithm according to the quality of the solution archive and the convergence speed of the algorithm. We call this MOO algorithm the SAMOACOMV algorithm.

The SAMOACOMV algorithm needs to set four parameters: the convergence speed ξ, the size of the searched archive area q, the number of ants m, and the solution archive size k. In this paper, to balance the diversity and convergence abilities of SAMOACOMV, two adjustment methods for these four parameters are proposed.

2.3.1. Set Method for Parameters ξ and q

The parameter ξ is used to adjust the convergence speed of the algorithm, and the parameter q is used to change the size of the search area. These two parameters are in conflict. When the search area increases or the convergence speed decreases, more Pareto solutions can be found with higher probability, but the calculation time becomes longer, and vice versa. In order to obtain a good Pareto solution archive with reasonable calculation time, we calculate the quality index of the solution archive and adjust the parameter ξ and q according to the value of the quality index. The set method for parameters ξ and q is shown in Algorithm 1:

Input: P_i(T), P_{i−1}(T), ξ_i, q_i, ξ_{i−1}, q_{i−1};
(1) compute P_i(T)
(2) ΔP_i(T) = P_i(T) − P_{i−1}(T)
(3) Δξ_i = ξ_i − ξ_{i−1}
(4) Δq_i = q_i − q_{i−1}
(5) ξ_{i+1} = ξ_i − B · r · ΔP_i(T) · Δξ_i
(6) q_{i+1} = q_i − B · r · ΔP_i(T) · Δq_i

In Algorithm 1, the quality index P_i(T) of the solution archive for the i-th iteration is calculated first. P_i(T) is the mean of the weighted sum of the objective function values and congestion degree over all solutions in the archive. Next, the quality index's increment ΔP_i(T) and the parameter increments Δξ_i and Δq_i are calculated. Finally, the new parameter values are obtained by subtracting the product of ΔP_i(T), Δξ_i (or Δq_i), the step-size constant B, and a random number r from the old parameter values.
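One iteration of this update could look like the following sketch. Because the extracted listing is garbled, the step-size constant B and the exact update form are inferred from the surrounding description and should be treated as assumptions:

```python
import random

def adapt_xi_q(P_hist, xi_hist, q_hist, B=0.1):
    """One step of the Algorithm 1 update described above:
        xi_{i+1} = xi_i - B * r * dP_i * dxi_i
        q_{i+1}  = q_i  - B * r * dP_i * dq_i
    where dP_i, dxi_i, dq_i are the latest increments of the archive
    quality index and of the two parameters, and r is random in [0, 1)."""
    dP = P_hist[-1] - P_hist[-2]
    dxi = xi_hist[-1] - xi_hist[-2]
    dq = q_hist[-1] - q_hist[-2]
    r = random.random()
    xi_next = xi_hist[-1] - B * r * dP * dxi
    q_next = q_hist[-1] - B * r * dP * dq
    return xi_next, q_next
```

Note the sign logic: when a recent parameter increase coincided with an improving quality index, the same direction of change is reinforced, and otherwise it is reversed.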

2.3.2. Set Method for Parameters m and k

The parameter m is the number of ants, and the parameter k is the solution archive size. The larger these two parameters are, the higher the probability of obtaining more Pareto solutions, but larger values also bring more computation and increase the running time. We set ENUM, the expected number of non-inferior solutions in the solution archive, according to the complexity of the problem and then adjust these two parameters in real time according to the difference between ENUM and the actual number of Pareto solutions. The set method for parameters m and k is shown in Algorithm 2.

Input: the solution archive T, m_i, k_i;
(1) for j = 1 to k do
  if NIO(s_j) == 0
    num++;
(2) rateArchive = k_i / num;
(3) rateAnt = m_i / num;
(4) k_{i+1} = C · rateArchive · ENUM;
(5) m_{i+1} = C · rateAnt · ENUM;

In Algorithm 2, the number of solutions whose non-inferior order is zero in the solution archive is counted first. Then the ratio factors rateArchive and rateAnt are calculated; they represent the archive size and the number of ants needed to produce one non-inferior solution, respectively. Finally, the new parameter values are set to the product of the ratio factors, the adjustment coefficient C, and the expected number ENUM.
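The counting-and-rescaling step can be sketched as follows (a hedged reading of Algorithm 2; the default value of the adjustment coefficient C is our assumption):

```python
def adapt_m_k(orders, m_i, k_i, enum, C=1.0):
    """One step of Algorithm 2: count archive solutions with non-inferior
    order 0, derive how large an archive (and how many ants) one
    non-inferior solution currently costs, and rescale both toward the
    expected number ENUM of non-inferior solutions."""
    num = sum(1 for z in orders if z == 0)
    rate_archive = k_i / num   # archive slots per non-inferior solution
    rate_ant = m_i / num       # ants per non-inferior solution
    k_next = round(C * rate_archive * enum)
    m_next = round(C * rate_ant * enum)
    return m_next, k_next
```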

3. Experiment Results and Discussion

The application field and performance of an algorithm are usually studied by comparing the performance of different MOO algorithms on benchmark problems. Referring to some existing mixed-variable MOO algorithms [29–33], this paper designs some problems for algorithm experiments and compares SAMOACOMV with other well-known MOO algorithms to verify its performance.

3.1. Experimental Environment

The operating environment of the experiments is as follows: a ThinkPad T470p computer; Core i7-7700HQ CPU (4 cores); 24 GB memory; 512 GB solid-state disk; and Windows 10 operating system. The programming tool is Microsoft Visual Studio 2017, and the programming language is C#.

3.2. Benchmark Problem

In this paper, we select eight well-known benchmark problems to evaluate MOO algorithms, that is, Schaffer, Fonseca, Kursawe, the ZDT problems, Viennet2, and Viennet3 [34]. These benchmark problems have two objectives (Schaffer, Fonseca, Kursawe, and the ZDT family) or three objectives (Viennet2 and Viennet3), and they exhibit different properties: separability, unimodality/multimodality, convexity, linearity, nonconvexity, continuity, discontinuity, bias, Pareto many-to-one, and so on.

The problem name, variable count (N), variable bounds, designed variables, and objective functions are shown in Table 1.

The variables of the eight benchmark problems are all continuous. In order to test MOO algorithms with mixed variables, we modify the problems to make some variables PDV and some RDV, so the continuous problems become mixed problems. PDV and RDV are calculated by the following equations, where N is the number of equal divisions of the value range. In order to make 0 a possible variable value, N is taken as a positive even number. RND(N) is a random nonnegative integer not greater than N. In order to enlarge the distribution range, every number in the range is used exactly once. The domain of a PDV is a set of N + 1 ordered discrete values increasing from the lower bound to the upper bound, and the domain of an RDV is a set of N + 1 disordered discrete values between the lower and upper bounds.
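Because the discretization equations were not recoverable from the extracted text, the following is only a plausible sketch of the construction described above: the variable range is split into N equal steps, giving N + 1 grid values, ordered for PDV and shuffled (each value used once) for RDV:

```python
import random

def make_pdv_domain(lo, hi, N):
    """N + 1 ordered discrete values spanning [lo, hi] in N equal steps."""
    step = (hi - lo) / N
    return [lo + i * step for i in range(N + 1)]

def make_rdv_domain(lo, hi, N, seed=0):
    """The same N + 1 grid values, shuffled so the domain is disordered,
    with every grid value used exactly once."""
    values = make_pdv_domain(lo, hi, N)
    rng = random.Random(seed)
    rng.shuffle(values)
    return values
```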

If N is large enough, the Pareto set of the mixed problems is similar to the Pareto set of the continuous problems.

3.3. Performance Metrics

Convergence and diversity are usually the two most important criteria for the evaluation of MOO algorithms. The convergence refers to the distance from the non-dominated front generated by the optimization algorithm to the true Pareto front; the diversity involves coverage area and uniformity; and a front with wide coverage and good uniformity is always pursued.

We have used generational distance (GD) [35] and inverted generational distance plus (IGD+) [36] to measure convergence, and generalized spread to measure coverage.

GD: let Z = {z_1, …, z_|Z|} be a set of uniformly distributed Pareto-optimal points on the true PF (TPF), and let T be a non-dominated front obtained for the problem. The GD of T is the average distance from each solution in T to the nearest reference point:

GD(T) = (1/|T|) Σ_{a∈T} min_{z∈Z} d(F(a), z),

where F(a) is the objective vector corresponding to the solution a, and d(F(a), z) is the Euclidean distance between F(a) and z.
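The GD computation is a direct nearest-neighbor average in objective space; a minimal sketch:

```python
import math

def generational_distance(front, reference):
    """GD: average Euclidean distance from each obtained objective vector
    to its nearest reference point on the true Pareto front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(f, z) for z in reference) for f in front) / len(front)
```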

IGD+: the IGD+ of T is the average distance from each reference point of the true-PF reference set to the nearest solution in T.

In IGD+, the distance between a reference point z and a solution a is calculated in the objective space for an m-objective minimization problem as follows:

d⁺(z, a) = √( Σ_{i=1}^{m} (max{a_i − z_i, 0})² ),

where a_i is the i-th objective value of the solution a.
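IGD+ differs from GD in two ways: it averages over the reference points, and only objective-wise deficits (a_i exceeding z_i) contribute to the distance. A minimal sketch:

```python
import math

def igd_plus(front, reference):
    """IGD+: average over reference points of the modified distance
    d+(z, a) = sqrt(sum_i max(a_i - z_i, 0)^2) to the nearest obtained
    point (minimization)."""
    def d_plus(z, a):
        return math.sqrt(sum(max(ai - zi, 0.0) ** 2 for zi, ai in zip(z, a)))
    return sum(min(d_plus(z, f) for f in front) for z in reference) / len(reference)
```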

Generalized spread (see [36]): the generalized spread is an indicator that measures the distribution and spread of the obtained non-dominated front for problems with two or more objectives, where {e_1, e_2, …, e_m} are the m extreme solutions of the reference set.

3.4. Performance Improvement of SAMOACOMV

In order to test the performance of SAMOACOMV, some experiments are carried out under the same conditions; for example, for the modified Fonseca problem, the maximum number of algorithm iterations is kept the same. Table 2 lists the parameter-setting schemes used in the six experiments. The first five experiments test the performance of the MOACOMV algorithm with different values of ξ and q and the same values of m and k. The sixth experiment tests the performance of the SAMOACOMV algorithm.

Table 3 shows the performance of the 6 experiments for the Fonseca test problem. For each major cell of Table 3, the first column indicates the mean of 25 runs, the second column indicates the standard deviation, and the third column indicates the rank.

Figure 2 shows the Pareto points obtained with reference to the true Pareto frontier graphically using results from 1 of the 25 runs. MOACOMV5 generates only a few Pareto points, so it is not shown in the figure.

It can be seen from the figure and the table that:
(i) The figure shows that the Pareto points generated by the SAMOACOMV algorithm lie right on the TPF, and the table shows that the overall rank value of the SAMOACOMV algorithm is the minimum; that is, the performance of the SAMOACOMV algorithm is the best among all experiments.
(ii) When the MOACOMV algorithm adopts setting schemes 3 and 4, its performance is basically the same as that of the SAMOACOMV algorithm, but with the other schemes its performance is very poor, which shows that the performance of the MOACOMV algorithm relies heavily on parameter settings.

More experiments show that the performance of the SAMOACOMV algorithm is better than that of the MOACOMV algorithm; this advantage is especially obvious when the values of m and k are small.

3.5. Performance Comparison Using Benchmark Problems

In order to test the performance of the algorithm, this paper compares the SAMOACOMV algorithm with the well-known MOO algorithms NSGA-II, SPEA2, SMPSO, MOEAD, NSGA-III, and MOEA/D-IEpsilon. These algorithm implementations come from jMetal [30]; note that these algorithms can only deal with CV MOO problems.

In order to compare the multiobjective optimization algorithms, each algorithm is allowed to run for the test problems for a constant number of function evaluations. The performance metrics are calculated for each algorithm run. This procedure is repeated for 20 runs, and the mean and standard deviation of the performance metrics are recorded for each algorithm.

3.5.1. Results Based on Schaffer, Fonseca, and Kursawe Problems

Tables 4–6 show the mean and standard deviation of generational distance, inverted generational distance plus, and generalized spread for the different algorithms, respectively. SAMOACOMV achieves good performance metric values on the Schaffer problem, while the other algorithms obtain no or only a few Pareto points. This may be because the only variable of the Schaffer problem is changed to a discrete variable, and the other algorithms cannot solve pure discrete-variable problems. For the Fonseca and Kursawe problems, compared with the other techniques, SAMOACOMV obtains excellent GD and IGD+ values, only slightly weaker than MOEA/D-IEpsilon, but a relatively poor generalized spread value.

Figures 3–5 provide a graphical visualization of the Pareto points obtained for the Schaffer, Fonseca, and Kursawe problems, respectively. For the Schaffer problem, no algorithm other than SAMOACOMV was able to produce any Pareto points close to the TPF. For the Fonseca problem, the performance of each algorithm is very good, and the generated Pareto points lie right on the TPF. For the Kursawe problem, the performance of each algorithm is also very good, except that some Pareto points generated by SMPSO and MOEAD deviate slightly from the TPF.

3.5.2. Results Based on ZDT (ZDT1–ZDT3) Problems

From Tables 4 and 5, SAMOACOMV ranks first for the ZDT problems, which means that it outperforms the other algorithms on the performance metrics GD and IGD+. From Table 6, SAMOACOMV performs slightly worse on generalized spread for the ZDT problems, ranking third.

From Figures 6–8, all the algorithms perform well, and the obtained Pareto fronts are basically consistent with the TPF. Some algorithms do not perform well on certain problems; for example, SPEA2 and MOEAD produce some points that deviate slightly from the TPF on the ZDT1 and ZDT2 problems.

3.5.3. Results Based on Viennet2 and Viennet3 Problems

As shown in Table 4, the means of GD of SAMOACOMV for the Viennet2 and Viennet3 problems are about 0.000032 and 0.000021, respectively, which are only slightly worse than the mean values of NSGA-III but far better than the corresponding metric values of the other algorithms. It can be seen from Table 5 that, similarly to GD, SAMOACOMV has almost the best IGD+ means for the Viennet2 and Viennet3 problems, around 0.0007 and 0.0005, respectively, only slightly worse than those of NSGA-III. From Table 6, SAMOACOMV performs worse on generalized spread for the Viennet2 and Viennet3 problems, ranking fourth and third, respectively.

In Figures 9 and 10, the approximated Viennet2 and Viennet3 fronts of each algorithm are shown. It is clear that SAMOACOMV obtained many more Pareto points; they converge well to the TPF and are widely and uniformly distributed along it, which illustrates that SAMOACOMV has better convergence and diversity than the other algorithms.

In summary, with GD, IGD+, and generalized spread taken into consideration, SAMOACOMV is quite a competitive algorithm in terms of the convergence of the generated Pareto solution set, with an overall rank of 1. However, SAMOACOMV is slightly weaker than the other algorithms in coverage performance, with an overall rank of 3.

4. Experiment Results on Spring Design Problem

The spring design problem is a common engineering practice problem and a widely used example for verifying the performance of MOO algorithms [37, 38]; it is a mixed-variable MOO problem containing continuous and discrete variables. We use the spring design problem to test the performance of the SAMOACOMV algorithm in this paper.

4.1. Problem Description

The spring design problem consists of two discrete variables and one continuous variable. The objectives are to minimize the volume of the spring and to minimize the stress developed by applying a load. The variables are the diameter of the wire (d), the diameter of the spring (D), and the number of turns (N). Denoting the variable vector x = (x_1, x_2, x_3) = (N, d, D), the formulation of this problem with two objectives and eight constraints is as follows [38], where x_1 is an integer, x_2 is a discrete variable, and x_3 is a continuous variable.

The parameters used are as follows:

The 42 discrete values of d are given below:

5. Experiment Results

From Table 7, the means of GD, IGD+, and generalized spread of SAMOACOMV for the spring design problem are about 0.0014, 0.064, and 0.3532, respectively, much smaller than those of the other algorithms. The values of all three performance metrics of SAMOACOMV for the spring design problem rank first, and its overall rank is also first, which shows that SAMOACOMV is optimal in both convergence and coverage.

As shown in Table 8, for the spring design problem, the number of archive points and the number of Pareto points of SAMOACOMV rank 2, and the percentage of Pareto points in archive ranks 1, which means that SAMOACOMV has the highest comprehensive efficiency in finding Pareto points.

The obtained Pareto frontier is plotted in Figure 11. Here the TPF represents the set of non-inferior solutions obtained by merging all experimental results from all independent runs of all algorithms and removing the inferior solutions. SMPSO can only obtain a few Pareto points, so it is not shown in the figure. It can be seen from Figure 11 that many points of NSGA-II, SPEA2, and GDE3 do not converge to the TPF, and some points of SPEA2 and GDE3 are far away from it. The Pareto points of NSGA-II, SPEA2, and GDE3 have poor distributions, and SPEA2 and GDE3 cover only part of the TPF. In contrast, the Pareto points obtained by SAMOACOMV are widely and uniformly distributed along the TPF, which illustrates that it has better convergence and diversity than the other algorithms.

6. Conclusion

In this work, we have modified the single-objective optimization algorithm ACOMV to handle mixed-variable MOO problems and proposed a self-adaptive parameter-setting scheme. The performance of SAMOACOMV was then thoroughly tested using a set of performance metrics on a well-designed benchmark test suite, and it was compared with state-of-the-art multiobjective optimization algorithms. For all benchmark problems, the SAMOACOMV algorithm has good convergence performance, and its GD and IGD+ are almost the best. However, the generalized spread of SAMOACOMV is slightly worse, which means that its coverage performance is slightly weaker than that of the other algorithms. For the spring design problem, the SAMOACOMV algorithm obtains a widely and uniformly distributed Pareto front, and it has the best convergence and coverage performance.

In general, the SAMOACOMV algorithm is an excellent MOO algorithm, which adds a new choice for solving MOO problems.

Data Availability

Some or all data, models, or codes that support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

The approach proposed in this paper was published at the 2020 IEEE International Congress on Cybermatics (iThings/GreenCom/CPSCom/SmartData/Blockchain-2020) [39]. Based on the conference paper, this paper mainly expands as follows: a new congestion degree of the solution is defined to rank the solutions in the archive; the self-adaptive strategy for setting the parameters m and k of the SAMOACOMV algorithm is modified; and some new mixed-variable MOO benchmark problems are designed to test and compare the performance of the SAMOACOMV algorithm. New performance metrics such as GD, IGD+, and generalized spread are used to evaluate the algorithms. All experiments are redone, and the corresponding text, figures, and tables of experimental results are updated.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National Key Research and Development Program of China (Grant no. 2018YFC1405700) and Industry University Research Cooperation Project of Jiangsu Province (Grant no. BY2019005).