Computational Intelligence and Neuroscience


Research Article | Open Access


Jiangnan Zhang, Kewen Xia, Ziping He, Shurui Fan, "Dynamic Multi-Swarm Differential Learning Quantum Bird Swarm Algorithm and Its Application in Random Forest Classification Model", Computational Intelligence and Neuroscience, vol. 2020, Article ID 6858541, 24 pages, 2020. https://doi.org/10.1155/2020/6858541

Dynamic Multi-Swarm Differential Learning Quantum Bird Swarm Algorithm and Its Application in Random Forest Classification Model

Academic Editor: Raşit Köker
Received: 14 Aug 2019
Revised: 21 Jun 2020
Accepted: 24 Jun 2020
Published: 07 Aug 2020

Abstract

The bird swarm algorithm (BSA) is one of the recently proposed swarm intelligence algorithms. However, the original bird swarm algorithm has some drawbacks, such as a tendency to fall into local optima and slow convergence. To overcome these shortcomings, a dynamic multi-swarm differential learning quantum bird swarm algorithm combining three hybrid strategies was established. First, a dynamic multi-swarm bird swarm algorithm was established, and a differential evolution strategy was adopted to enhance the randomness of the foraging behavior's movement, which gives the bird swarm algorithm a stronger global exploration capability. Next, quantum behavior was introduced into the bird swarm algorithm to search the solution space more efficiently. Then, the improved bird swarm algorithm was used to optimize the number of decision trees and the number of predictor variables of the random forest classification model. In the experiments, 18 benchmark functions, 30 CEC2014 functions, and 8 UCI datasets were tested to show that the improved algorithm and model are very competitive and outperform the other algorithms and models. Finally, the resulting random forest classification model was applied to actual oil logging prediction. As the experimental results show, the three strategies can significantly boost the performance of the bird swarm algorithm, and the proposed learning scheme yields a more stable random forest classification model with higher accuracy and efficiency than its counterparts.

1. Introduction

The concept of swarm intelligence was first proposed by Hackwood and Beni in 1992 [1]. Swarm intelligence algorithms have been proved able to solve nondifferentiable problems, NP-hard problems, and difficult nonlinear problems that traditional techniques cannot handle. For this reason, swarm intelligence algorithms are a hot research topic in computer science and have been updated from generation to generation. Among the classic swarm intelligence algorithms, particle swarm optimization (PSO) [2] defines the basic principles and equations of the field. In recent years, many new swarm intelligence algorithms have been proposed: the artificial bee colony (ABC) algorithm [3] is inspired by the food-locating behavior of bees; the artificial fish school algorithm (AFSA) [4] and the firefly algorithm (FA) [5] are inspired by the foraging processes of fish and fireflies; and cat swarm optimization (CSO) [6] is developed from the vigilance and foraging behavior of cats in nature. Based on the foraging behavior, vigilance behavior, and flight behavior of bird swarms in nature, Meng et al. proposed a novel swarm intelligence algorithm called the bird swarm algorithm (BSA) [7]. Owing to the advantages above, swarm intelligence algorithms have been applied to optimization in various fields, such as PSO for mutation testing problems [8], the genetic algorithm (GA) for convolutional neural network parameters [9], FA for convolutional neural network problems [10], and the whale optimization algorithm (WOA) for cloud computing environments [11]. Likewise, BSA, which is used in this paper, has been widely applied to engineering optimization problems.

However, the original swarm intelligence algorithms have limitations in solving some practical problems. Hybridization, one of the main research directions for improving the performance of swarm intelligence algorithms, has become a research hotspot in machine learning. Tuba and Bacanin [12] modified the exploitation process of the original seeker optimization algorithm (SOA) by hybridizing it with FA, which overcame its shortcomings and outperformed other algorithms. Strumberger et al. [13] also proposed a dynamic search tree growth algorithm (TGA) and hybridized elephant herding optimization (EHO) with ABC, and the simulation results showed that the proposed approach was viable and effective. Yang [14] analyzed swarm intelligence algorithms using differential evolution, dynamic systems, self-organization, and a Markov chain framework; the discussion demonstrates that hybrid algorithms have some advantages over traditional ones. Bacanin and Tuba [15] proposed a modified ABC based on GA, and the obtained results show that the hybrid ABC provides competitive results and outperforms other counterparts. Liu et al. [16] presented a multistrategy brain storm optimization (BSO) with dynamic parameter adjustment, which is more competitive than other related algorithms. Peng et al. [17] proposed FA with a luciferase inhibition mechanism to improve the effectiveness of selection, and the simulation results showed that the proposed approach has the best performance on some complex functions. Peng et al. [18] also developed a hybrid approach that uses the best-neighbor-guided solution search strategy in the ABC algorithm; the experimental results indicate that the proposed ABC is very competitive and outperforms the other algorithms. It can be seen that hybridization successfully improves swarm intelligence algorithms, so BSA is improved by hybrid strategies in this paper.

Similarly, BSA can be applied to multiple fields, especially parameter estimation, and hybridization is also the main route to improving BSA. In 2017, Xu et al. [19] proposed an improved boundary BSA (IBBSA) for chaotic system optimization of the Lorenz system and the coupling motor system. However, the improved boundary learning strategy involves randomness, which limits the generalization performance of IBBSA. Yang and Liu [20] introduced a dynamic weight into the foraging formula of BSA (IBSA), which provides a solution to the anti-same-frequency interference problem of shipborne radar. However, the dynamic weight is introduced only into the foraging formula, and IBSA ignores the impact of population initialization. Wang et al. [21] designed a strategy named "disturbing the local optimum" to help the original BSA converge to the global optimal solution faster and more stably. However, "disturbing the local optimum" also involves randomness, which limits the generalization performance of the improved BSA.

Like many swarm intelligence algorithms, BSA faces the problems of being trapped in local optima and slow convergence, and these disadvantages limit its wider application. In this paper, a dynamic multi-swarm differential learning quantum BSA called DMSDL-QBSA is proposed, which introduces three hybrid strategies into the original BSA to improve its effectiveness. Motivated by the insufficient generalization ability observed in the literature [19, 21], we first establish a dynamic multi-swarm bird swarm algorithm (DMS-BSA) and merge the differential evolution operator into each sub-swarm of DMS-BSA, which improves both the local and the global search capability of the foraging behavior. Second, since the literature [20] neglected the impact of population initialization, and quantum behavior has been used in particle swarm optimization to obtain a good ability to jump out of local optima, we use a quantum system to initialize the search space of the birds. Consequently, this improves the convergence rate of the whole population and keeps BSA from falling into local optima. To validate the effectiveness of the proposed method, we have evaluated the performance of DMSDL-QBSA on classical benchmark functions and CEC2014 functions, including unimodal and multimodal functions, in comparison with state-of-the-art methods and new popular algorithms. The experimental results show that the three improvement strategies significantly boost the performance of BSA.

Based on the DMSDL-QBSA, an effective hybrid random forest (RF) model, called the DMSDL-QBSA-RF approach, is established for actual oil logging prediction. RF is nonlinear and resistant to interference [22]. In addition, it can decrease the possibility of overfitting, which often occurs in actual logging. RF has been widely used in various classification problems, but it has not yet been applied to the field of actual logging. Parameter estimation is a prerequisite for building the RF classification model: the two key parameters of RF are the number of decision trees and the number of predictor variables. Meanwhile, parameter estimation of the model is a complex optimization problem that traditional methods might fail to solve, and many works have proposed using swarm intelligence algorithms to find the best parameters of the RF model. Ma and Fan [23] adopted AFSA and PSO to optimize the parameters of RF. Hou et al. [24] used DE to obtain an optimal set of initial parameters for RF. Liu et al. [25] compared genetic algorithms, simulated annealing, and hill climbing for optimizing the parameters of RF. From these papers, we can see that metaheuristic algorithms are well suited to this problem. In this study, the DMSDL-QBSA is used to optimize the two key parameters, which can improve the accuracy of RF without overfitting. The performance of the DMSDL-QBSA-RF classification model is investigated against three swarm intelligence algorithm-based RF methods on 8 two-dimensional UCI datasets. As the experimental results show, the proposed learning scheme can guarantee a more stable RF classification model with higher predictive accuracy than the other counterparts.
The main contributions of this paper are as follows:
(i) In order to achieve a better balance between efficiency and speed for BSA, we have studied the effects of the different hybrid strategies of the dynamic multi-swarm method, differential evolution, and quantum behavior on the performance of BSA.
(ii) The proposed DMSDL-QBSA has successfully solved the parameter-setting problem of RF. The resulting hybrid classification model has been rigorously evaluated on oil logging prediction.
(iii) The proposed hybrid classification model delivers better classification performance and offers more accurate and faster results than other swarm intelligence algorithm-based RF models.
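Because the optimizer searches a continuous 2-D space while RF takes integer parameters, a decoding step sits between the two. The following is a minimal Python sketch of that wrapper; the bounds, the names `NTREE_RANGE` and `MTRY_RANGE`, and the pluggable `evaluate_rf` callable are illustrative assumptions rather than values from the paper (in scikit-learn terms the two parameters would map to `n_estimators` and `max_features`).

```python
# Hypothetical search bounds for the two RF parameters (not stated
# numerically in this excerpt): number of decision trees and number of
# predictor variables tried at each split.
NTREE_RANGE = (10, 500)
MTRY_RANGE = (1, 20)

def decode_position(pos):
    """Map a bird's continuous 2-D position to valid integer RF
    parameters by clamping to the bounds and rounding."""
    lo_t, hi_t = NTREE_RANGE
    lo_m, hi_m = MTRY_RANGE
    ntree = int(round(min(max(pos[0], lo_t), hi_t)))
    mtry = int(round(min(max(pos[1], lo_m), hi_m)))
    return ntree, mtry

def rf_fitness(pos, evaluate_rf):
    """Fitness of a candidate position: the classification error of an
    RF trained with the decoded parameters. `evaluate_rf` is a
    user-supplied callable returning, e.g., a cross-validated accuracy."""
    ntree, mtry = decode_position(pos)
    accuracy = evaluate_rf(ntree, mtry)
    return 1.0 - accuracy  # the optimizer minimizes the error
```

Minimizing `rf_fitness` over the 2-D box then yields the best (tree count, predictor count) pair.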

2. Bird Swarm Algorithm and Its Improvement

2.1. Bird Swarm Algorithm Principle

BSA, as proposed by Meng et al. in 2015, is a new intelligent bionic algorithm based on multigroup and multisearch methods; it mimics birds' foraging behavior, vigilance behavior, and flight behavior, and employs this swarm intelligence to solve optimization problems. The bird swarm algorithm can be simplified by five rules:
Rule 1: each bird can switch between vigilance behavior and foraging behavior, and whether a bird forages or keeps vigilance is modeled as a random decision.
Rule 2: when foraging, each bird records and updates its own best previous experience and the swarm's best previous experience with food patches. This experience can also be used to search for food, and social information is shared instantly across the swarm.
Rule 3: when keeping vigilance, each bird tries to move towards the center of the swarm. This behavior may be influenced by disturbances caused by swarm competition. Birds with higher reserves are more likely to be near the swarm's center than birds with the lowest reserves.
Rule 4: birds fly to another place regularly. When flying to another site, birds often switch between producing and scrounging. The bird with the highest reserves is a producer, and the bird with the lowest is a scrounger; the other birds, whose reserves lie between the highest and the lowest, randomly choose to be producers or scroungers.
Rule 5: producers actively seek food, and scroungers randomly follow a producer in looking for food.

According to Rule 1, we define the time interval of each bird's flight behavior FQ, the probability of foraging behavior P (P ∈ (0, 1)), and a uniform random number rand(0, 1).
(1) Foraging behavior. If the iteration number t is not a multiple of FQ and rand(0, 1) < P, the bird performs the foraging behavior. Rule 2 can be written mathematically as follows:

x_{i,j}^{t+1} = x_{i,j}^t + (p_{i,j} − x_{i,j}^t) × C × rand(0, 1) + (g_j − x_{i,j}^t) × S × rand(0, 1),  (1)

where C and S are two positive numbers; the former is called the cognitive accelerated coefficient, and the latter is called the social accelerated coefficient. Here, p_i is the i-th bird's best previous position and g is the best previous position of the swarm.
(2) Vigilance behavior. If t is not a multiple of FQ and rand(0, 1) ≥ P, the bird performs the vigilance behavior. Rule 3 can be written mathematically as follows:

x_{i,j}^{t+1} = x_{i,j}^t + A1 × (mean_j − x_{i,j}^t) × rand(0, 1) + A2 × (p_{k,j} − x_{i,j}^t) × rand(−1, 1),  (2)
A1 = a1 × exp(−pFit_i × N / (sumFit + ε)),  (3)
A2 = a2 × exp(((pFit_i − pFit_k) / (|pFit_k − pFit_i| + ε)) × N × pFit_k / (sumFit + ε)),  (4)

where a1 and a2 are two positive constants in [0, 2], pFit_i is the best fitness value of the i-th bird, and sumFit is the sum of the swarm's best fitness values. Here, ε, which is used to avoid the zero-division error, is the smallest constant in the computer, mean_j denotes the j-th element of the whole swarm's average position, and k (k ≠ i) is a randomly chosen bird.
(3) Flight behavior. If t is a multiple of FQ, the bird performs the flight behavior, which is divided into the behaviors of producers and scroungers by fitness. Rules 4 and 5 can be written mathematically as follows:

x_{i,j}^{t+1} = x_{i,j}^t + randn(0, 1) × x_{i,j}^t,  (5)
x_{i,j}^{t+1} = x_{i,j}^t + (x_{k,j}^t − x_{i,j}^t) × FL × rand(0, 1),  (6)

where randn(0, 1) denotes a Gaussian-distributed random number and FL (FL ∈ [0, 2]) means that the scrounger follows the producer to search for food.
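As a concrete illustration, the three behaviors can be sketched in Python following the update rules of the original BSA paper [7]; the default coefficients (`C`, `S`, `FL`) and the per-dimension uniform/Gaussian random factors are assumptions based on that formulation, not values fixed by this paper.

```python
import random

def foraging(x, p, g, C=1.5, S=1.5):
    """Rule 2: move toward the bird's own best p and the swarm best g."""
    return [xj + (pj - xj) * C * random.random() + (gj - xj) * S * random.random()
            for xj, pj, gj in zip(x, p, g)]

def vigilance(x, mean, pk, A1, A2):
    """Rule 3: move toward the swarm centre `mean`, disturbed by a
    randomly chosen neighbour's best position pk (A1, A2 precomputed)."""
    return [xj + A1 * (mj - xj) * random.random() + A2 * (pkj - xj) * random.uniform(-1, 1)
            for xj, mj, pkj in zip(x, mean, pk)]

def flight(x, other, is_producer, FL=0.5):
    """Rules 4-5: producers search randomly; scroungers follow a producer."""
    if is_producer:
        return [xj + random.gauss(0, 1) * xj for xj in x]
    return [xj + (oj - xj) * FL * random.random() for xj, oj in zip(x, other)]
```

Each helper operates per dimension, mirroring the element-wise form of the update equations.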

2.2. The Bird Swarm Algorithm Based on Dynamic Multi-Swarm Method, Differential Evolution, and Quantum Behavior
2.2.1. The Foraging Behavior Based on Dynamic Multi-Swarm Method

The dynamic multi-swarm method has been widely used in real-world applications because it is efficient and easy to implement. In addition, it is common in improvements of swarm intelligence optimizers, such as a coevolutionary algorithm [26], a framework for evolutionary algorithms [27], multiobjective particle swarm optimization [28], hybrid dynamic robust optimization [29], and PSO variants [30, 31]. However, the PSO algorithm easily falls into local optima, and its generalization performance is limited. Consequently, motivated by these literature studies, we establish a dynamic multi-swarm bird swarm algorithm (DMS-BSA), which improves the local search capability of the foraging behavior.

In DMS-PSO, the whole population is divided into many small swarms, which are frequently regrouped under various reorganization schedules to exchange information. The velocity update strategy is

v_{i,j} = w × v_{i,j} + c1 × rand(0, 1) × (p_{i,j} − x_{i,j}) + c2 × rand(0, 1) × (l_{i,j} − x_{i,j}),  (7)

where l_i is the best historical position achieved within the local sub-swarm of the i-th particle.

According to the characteristic of equation (1), the foraging behavior formula of BSA is similar to the particle velocity update formula of PSO. So, following the characteristic of equation (7), we can get the improved foraging behavior formula as follows:

x_{i,j}^{t+1} = x_{i,j}^t + (p_{i,j} − x_{i,j}^t) × C × rand(0, 1) + (G_{i,j} − x_{i,j}^t) × S × rand(0, 1),  (8)

where G_i is called the guiding vector.

The dynamic multi-swarm method is used to improve the local search capability, while the guiding vector can enhance the global search capability of foraging behavior. Obviously, we need to build a good guiding vector.
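The dynamic multi-swarm bookkeeping described above can be sketched as follows, assuming a population indexed 0..N−1 and a fitness lookup; the sub-swarm count and regrouping period are left to the caller, and the function names are illustrative:

```python
import random

def regroup(indices, n_subswarms):
    """Randomly repartition the population indices into sub-swarms
    (the dynamic multi-swarm reorganisation step)."""
    idx = list(indices)
    random.shuffle(idx)
    size = len(idx) // n_subswarms
    return [idx[k * size:(k + 1) * size] for k in range(n_subswarms)]

def local_best(subswarm, fitness):
    """Index of the sub-swarm's best member: the lbest that replaces the
    single global best in the sub-swarm update (minimisation assumed)."""
    return min(subswarm, key=lambda i: fitness[i])
```

Calling `regroup` every few generations lets information migrate between sub-swarms while each sub-swarm still converges locally around its own `local_best`.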

2.2.2. The Guiding Vector Based on Differential Evolution

Differential evolution (DE) is a powerful evolutionary algorithm with three operators for solving tough global optimization problems [32]. Besides, thanks to its excellent global search capability, DE has received more and more attention from scholars in evolutionary computation, with variants such as hybrid multiple crossover operations [33] and DE/neighbor/1 [34]. Since DE has a good global search capability, we establish the guiding vector G_i based on the differential evolution operators to improve the global search capability of the foraging behavior. The detailed construction of G_i is as follows:
(1) Differential mutation. According to the characteristic of equation (8), the "DE/best/1", "DE/best/2", and "DE/current-to-best/1" mutation strategies are suitable. The experiments in the literature [31] showed that the "DE/best/1" mutation strategy is the most suitable in DMS-PSO, so we choose this mutation strategy in BSA. Applied within each sub-swarm, the "DE/lbest/1" mutation strategy can be written as follows:

DE/lbest/1: v_i^t = lbest^t + F × (x_{r1}^t − x_{r2}^t),  (9)

where F is the scaling factor and r1 and r2 are distinct random indices within the sub-swarm. Note that some components of the mutant vector v_i may violate predefined boundary constraints. In this case, boundary processing is used, which can be expressed as follows:

v_{i,j}^t = rand(x_{min,j}, x_{max,j}), if v_{i,j}^t < x_{min,j} or v_{i,j}^t > x_{max,j}.  (10)

(2) Crossover. After differential mutation, a binomial crossover operation exchanges some components of the mutant vector v_i with the best previous position p_i to generate the target vector u_i. The process can be expressed as

u_{i,j}^t = v_{i,j}^t, if rand(0, 1) ≤ CR or j = j_rand; otherwise u_{i,j}^t = p_{i,j}^t,  (11)

where CR is the crossover rate and j_rand is a randomly selected dimension.
(3) Selection. Because the purpose of BSA is to find the best fitness, a selection operation chooses the vector with the better fitness to enter the next generation, namely, the guiding vector G_i:

G_i^t = u_i^t, if f(u_i^t) ≤ f(p_i^t); otherwise G_i^t = p_i^t.  (12)
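The mutation, boundary handling, crossover, and selection steps above can be sketched in Python; the defaults `F = 0.5` and `CR = 0.9` and the uniform reinitialisation of out-of-bound components are assumptions consistent with standard DE practice rather than values stated in this excerpt:

```python
import random

def mutate_lbest1(lbest, xr1, xr2, lo, hi, F=0.5):
    """'DE/lbest/1' mutation, with random reinitialisation of any
    component that violates the [lo, hi] bounds."""
    v = [lb + F * (a - b) for lb, a, b in zip(lbest, xr1, xr2)]
    return [vj if lo <= vj <= hi else random.uniform(lo, hi) for vj in v]

def crossover(v, p, CR=0.9):
    """Binomial crossover between mutant v and personal best p; one
    component (jrand) is always inherited from v."""
    jrand = random.randrange(len(v))
    return [vj if (random.random() < CR or j == jrand) else pj
            for j, (vj, pj) in enumerate(zip(v, p))]

def select(u, p, fit):
    """Keep the vector with the better (smaller) fitness: the guiding vector."""
    return u if fit(u) <= fit(p) else p
```

Chaining the three helpers per bird per generation produces the guiding vector fed back into the foraging update.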

2.2.3. The Initialization of Search Space Based on Quantum Behavior

Quantum behavior is a nonlinear superposition system with excellent properties. Owing to its simple and effective characteristics and good performance in global optimization, it has been applied to optimize many algorithms, such as particle swarm optimization [35] and the pigeon-inspired optimization algorithm [36]. Consequently, motivated by these literature studies and its excellent global optimization performance, we use a quantum system to initialize the search space of the birds.

Quantum-behaved particle positions can be written mathematically as follows:

q_{i,j} = φ × p_{i,j} + (1 − φ) × g_j, φ = rand(0, 1),  (13)
mbest_j = (1/N) Σ_{i=1}^{N} p_{i,j},  (14)
x_{i,j}^{t+1} = q_{i,j} ± β × |mbest_j − x_{i,j}^t| × ln(1/u), u = rand(0, 1).  (15)

According to the characteristics of equations (13)–(15), we can get the improved search space initialization formulas as follows:

x_{i,j} = q_{i,j} + β × |mbest_j − x_{i,j}| × ln(1/u), if rand(0, 1) ≥ 0.5,  (16)
x_{i,j} = q_{i,j} − β × |mbest_j − x_{i,j}| × ln(1/u), if rand(0, 1) < 0.5,  (17)

where β is a positive number called the contraction-expansion factor. Here, x_{i,j} is the position of the bird at the previous moment and mbest is the average of the best previous positions of all the birds (Algorithm 1).
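A QPSO-style sketch of this quantum-behaved sampling is given below; the attractor/`mbest` formulation and the default contraction-expansion factor `beta = 0.75` are assumptions based on standard quantum-behaved PSO, not values stated in this excerpt:

```python
import math
import random

def quantum_position(attractor_j, mbest_j, x_j, beta=0.75):
    """Sample one coordinate around the local attractor; the spread is
    proportional to the distance from the mean best position mbest.
    u is drawn from (0, 1] so that log(1/u) is always finite."""
    u = 1.0 - random.random()
    L = beta * abs(mbest_j - x_j) * math.log(1.0 / u)
    return attractor_j + L if random.random() < 0.5 else attractor_j - L

def init_positions(n, dim, lo, hi):
    """Uniform seed positions; applying quantum_position to each
    coordinate afterwards spreads the initial swarm over the space."""
    return [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
```

The heavy-tailed `log(1/u)` term occasionally places birds far from the attractor, which is what widens the initial search space.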

2.2.4. Procedures of the DMSDL-QBSA
Variable setting: number of iterations: t; bird positions: x_i; local optima: p_i; global optimal position: g; global optimum: f(g); and fitness of the sub-swarm optimal positions: lbest;
Input: population size: N; dimension size: D; number of function evaluations: FEs; time interval of each bird's flight behavior: FQ; probability of foraging behavior: P; constant parameters: C, S, a1, a2, F, CR, FL; and contraction-expansion factor: β;
Output: global optimal position: g; fitness of the global optimal position: f(g);
(1)Begin
(2)Initialize the positions of the N birds using equations (16)-(17): x_i;
(3)Calculate fitness f(x_i); set p_i to be x_i and find g;
(4)While the number of function evaluations is less than FEs do
(5)/∗ Dynamic sub-swarm∗/
(6)Regroup the positions x_i and local optima p_i of the sub-swarms randomly;
(7)Sort the fitness values and refine the first best lbest of each sub-swarm;
(8)Update the corresponding p_i;
(9) /∗Differential learning scheme∗/
(10)For each sub-swarm
(11)  Construct the sub-swarm's lbest using DMS-BSA;
(12)  Differential mutation: construct v_i using equations (9)-(10);
(13)  Crossover: construct u_i using equation (11);
(14)  Selection: construct G_i using equation (12);
(15)End For
(16) /∗Birds position adjusting∗/
(17)If t mod FQ ≠ 0
(18)  For i = 1 : N
(19)   If rand(0, 1) < P
(20)    Foraging behavior: update the position of birds using equation (8);
(21)   Else
(22)    Vigilance behavior: update the position of birds using equation (2);
(23)   End If
(24)  End For
(25)Else
(26)  Flight behavior is divided into producers and scroungers;
(27)  For i = 1 : N
(28)   If bird i is a producer
(29)   Producers: update the position of birds using equation (5);
(30)  Else
(31)   Scroungers: update the position of birds using equation (6);
(32)  End If
(33)End For
(34)End If
(35) Evaluate f(x_i);
(36) Update p_i and g;
(37) t = t + 1;
(38)End While
(39)End

In Sections 2.2.1–2.2.3, in order to improve the local and global search capabilities of BSA, this paper has improved BSA in three parts:
(1)To improve the local search capability of the foraging behavior, we put forward equation (8) based on the dynamic multi-swarm method.
(2)To obtain the guiding vector and improve the global search capability of the foraging behavior, we put forward equations (9), (11), and (12) based on differential evolution.
(3)To expand the initialization search space of the birds and improve the global search capability, we put forward equations (16) and (17) based on quantum behavior.

Finally, the steps of DMSDL-QBSA are shown in Algorithm 1.

2.3. Simulation Experiment and Analysis

This section presents the evaluation of DMSDL-QBSA through a series of experiments on benchmark functions and CEC2014 test functions. All experiments in this paper are implemented using MATLAB R2014b on Windows 7 (64-bit) with an Intel(R) Core(TM) i5-2450M CPU @ 2.50 GHz and 4.00 GB RAM. To obtain fair results, all experiments were conducted under the same conditions: the population size is set to 30 in all algorithms, and each algorithm is run 30 times independently on each function.
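The reported statistics (Max, Min, Mean, Var over 30 independent runs) can be collected with a small harness like the following sketch; whether the paper's "Var" is the population or sample variance is not stated, so population variance is assumed here:

```python
import statistics

def run_trials(optimiser, n_runs=30):
    """Run an optimiser n_runs times with independent seeds and collect
    the Max/Min/Mean/Var statistics reported in the result tables.
    `optimiser(seed)` should return the best fitness found in one run."""
    best = [optimiser(seed) for seed in range(n_runs)]
    return {
        "Max": max(best),
        "Min": min(best),
        "Mean": statistics.mean(best),
        "Var": statistics.pvariance(best),  # population variance assumed
    }
```

Feeding each algorithm-function pair through `run_trials` reproduces one cell group of the comparison tables.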

2.3.1. Benchmark Functions and CEC 2014 Test Functions

When investigating the effectiveness and generality of DMSDL-QBSA compared with several hybrid and popular algorithms, 18 benchmark functions and 30 CEC2014 test functions are applied. To test the effectiveness of the proposed DMSDL-QBSA, 18 benchmark functions [37] are adopted, all of which have an optimal value of 0. The benchmark functions and their search ranges are shown in Table 1. In this test suite, the first nine are unimodal functions, which are usually used to investigate whether the proposed algorithm has good convergence performance. The remaining nine are multimodal functions, which are used to test the global search capability of the proposed algorithm. The smaller the fitness value of a function, the better the algorithm performs. Furthermore, to verify the comprehensive performance of DMSDL-QBSA, another 30 complex CEC2014 benchmarks are used; they are briefly described in Table 2.


Name | Test function | Range

Sphere
Schwefel P2.22
Schwefel P1.2
Generalized Rosenbrock
Step
Noise
SumSquares
Zakharov
Schaffer

Generalized Schwefel 2.26
Generalized Rastrigin
Ackley
Generalized Griewank
Generalized Penalized 1
Generalized Penalized 2
Alpine
Booth
Levy


Type | No. | Functions | Fi* = Fi(x*)

Unimodal functions | F1 | Rotated High-Conditioned Elliptic Function | 100
Unimodal functions | F2 | Rotated Bent Cigar Function | 200
Unimodal functions | F3 | Rotated Discus Function | 300

Simple multimodal functions | F4 | Shifted and Rotated Rosenbrock's Function | 400
Simple multimodal functions | F5 | Shifted and Rotated Ackley's Function | 500
Simple multimodal functions | F6 | Shifted and Rotated Weierstrass Function | 600
Simple multimodal functions | F7 | Shifted and Rotated Griewank's Function | 700
Simple multimodal functions | F8 | Shifted Rastrigin's Function | 800
Simple multimodal functions | F9 | Shifted and Rotated Rastrigin's Function | 900
Simple multimodal functions | F10 | Shifted Schwefel's Function | 1000
Simple multimodal functions | F11 | Shifted and Rotated Schwefel's Function | 1100
Simple multimodal functions | F12 | Shifted and Rotated Katsuura Function | 1200
Simple multimodal functions | F13 | Shifted and Rotated HappyCat Function | 1300
Simple multimodal functions | F14 | Shifted and Rotated HGBat Function | 1400
Simple multimodal functions | F15 | Shifted and Rotated Expanded Griewank's Plus Rosenbrock's Function | 1500
Simple multimodal functions | F16 | Shifted and Rotated Expanded Scaffer's F6 Function | 1600

Hybrid functions | F17 | Hybrid function 1 (N = 3) | 1700
Hybrid functions | F18 | Hybrid function 2 (N = 3) | 1800
Hybrid functions | F19 | Hybrid function 3 (N = 4) | 1900
Hybrid functions | F20 | Hybrid function 4 (N = 4) | 2000
Hybrid functions | F21 | Hybrid function 5 (N = 5) | 2100
Hybrid functions | F22 | Hybrid function 6 (N = 5) | 2200

Composition functions | F23 | Composition function 1 (N = 5) | 2300
Composition functions | F24 | Composition function 2 (N = 3) | 2400
Composition functions | F25 | Composition function 3 (N = 3) | 2500
Composition functions | F26 | Composition function 4 (N = 5) | 2600
Composition functions | F27 | Composition function 5 (N = 5) | 2700
Composition functions | F28 | Composition function 6 (N = 5) | 2800
Composition functions | F29 | Composition function 7 (N = 3) | 2900
Composition functions | F30 | Composition function 8 (N = 3) | 3000

Search range: [−100, 100]^D.

2.3.2. Parameter Settings

To verify the effectiveness and generalization of the proposed DMSDL-QBSA, it is compared with several hybrid algorithms: BSA [7], DE [32], DMSDL-PSO [31], and DMSDL-BSA. Another five popular intelligence algorithms, the grey wolf optimizer (GWO) [38], the whale optimization algorithm (WOA) [39], the sine cosine algorithm (SCA) [40], the grasshopper optimization algorithm (GOA) [41], and the sparrow search algorithm (SSA) [42], are also compared with DMSDL-QBSA. These state-of-the-art algorithms allow the performance of DMSDL-QBSA to be verified more comprehensively. For fair comparison, the population size of every algorithm is set to 30, and the other parameters of all algorithms are set according to their original papers. The parameter settings of the involved algorithms are shown in Table 3 in detail.


Algorithm | Parameter settings

BSA, ,
DE,
DMSDL-PSO, , , sub-swarm = 10
DMSDL-BSA, , , , , sub-swarm = 10
DMSDL-QBSA, , , , , sub-swarm = 10

2.3.3. Comparison on Benchmark Functions with Hybrid Algorithms

According to Section 2.2, three hybrid strategies (the dynamic multi-swarm method, DE, and quantum behavior) have been combined with the basic BSA. To investigate the effectiveness of DMSDL-QBSA, it is compared with several hybrid algorithms, namely, BSA, DE, DMSDL-PSO, and DMSDL-BSA, on the 18 benchmark functions. Compared with DMSDL-QBSA, the dynamic multi-swarm differential learning bird swarm algorithm (DMSDL-BSA) is the same algorithm without quantum behavior. The number of function evaluations (FEs) is 10000. We selected two different dimension sizes (Dim): Dim = 10 is a typical dimensionality for the benchmark functions, and Dim = 2 is used because RF has two parameters to be optimized, which makes the optimization function 2-dimensional.

The fitness value curves of a run of the algorithms on eight different functions are shown in Figures 1 and 2, where the horizontal axis represents the number of iterations and the vertical axis represents the fitness value. The convergence speeds of the different algorithms can be seen clearly. The maximum value (Max), minimum value (Min), mean value (Mean), and variance (Var) obtained by the algorithms are shown in Tables 4–7, where the best results are marked in bold. Tables 4 and 5 show the performance of the algorithms on unimodal functions for Dim = 10 and 2, and Tables 6 and 7 show their performance on multimodal functions for Dim = 10 and 2.


Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

Max | 1.0317E+04 | 1.2466E+04 | 1.7232E+04 | 1.0874E+04 | 7.4436E+01
Min | 0 | 4.0214E+03 | 1.6580E−02 | 0 | 0
Mean | 6.6202E+00 | 4.6622E+03 | 6.0643E+01 | 6.3444E+00 | 3.3920E−02
Var | 2.0892E+02 | 8.2293E+02 | 7.3868E+02 | 1.9784E+02 | 1.2343E+00

Max | 3.3278E+02 | 1.8153E+02 | 8.0436E+01 | 4.9554E+02 | 2.9845E+01
Min | 4.9286E−182 | 1.5889E+01 | 1.0536E−01 | 3.0074E−182 | 0
Mean | 5.7340E−02 | 1.8349E+01 | 2.9779E+00 | 6.9700E−02 | 1.1220E−02
Var | 3.5768E+00 | 4.8296E+00 | 2.4966E+00 | 5.1243E+00 | 4.2864E−01

Max | 1.3078E+04 | 1.3949E+04 | 1.9382E+04 | 1.2899E+04 | 8.4935E+01
Min | 3.4873E−250 | 4.0327E+03 | 6.5860E−02 | 1.6352E−249 | 0
Mean | 7.6735E+00 | 4.6130E+03 | 8.6149E+01 | 7.5623E+00 | 3.3260E−02
Var | 2.4929E+02 | 8.4876E+02 | 8.1698E+02 | 2.4169E+02 | 1.2827E+00

Max | 2.5311E+09 | 2.3900E+09 | 4.8639E+09 | 3.7041E+09 | 3.5739E+08
Min | 5.2310E+00 | 2.7690E+08 | 8.4802E+00 | 5.0021E+00 | 8.9799E+00
Mean | 6.9192E+05 | 3.3334E+08 | 1.1841E+07 | 9.9162E+05 | 6.8518E+04
Var | 3.6005E+07 | 1.4428E+08 | 1.8149E+08 | 4.5261E+07 | 4.0563E+06

Max | 1.1619E+04 | 1.3773E+04 | 1.6188E+04 | 1.3194E+04 | 5.1960E+03
Min | 5.5043E−15 | 5.6109E+03 | 1.1894E−02 | 4.2090E−15 | 1.5157E+00
Mean | 5.9547E+00 | 6.3278E+03 | 5.2064E+01 | 6.5198E+00 | 3.5440E+00
Var | 2.0533E+02 | 9.6605E+02 | 6.2095E+02 | 2.2457E+02 | 7.8983E+01

Max | 3.2456E+00 | 7.3566E+00 | 8.9320E+00 | 2.8822E+00 | 1.4244E+00
Min | 1.3994E−04 | 1.2186E+00 | 2.2482E−03 | 8.2911E−05 | 1.0911E−05
Mean | 2.1509E−03 | 1.4021E+00 | 1.1982E−01 | 1.9200E−03 | 6.1476E−04
Var | 5.3780E−02 | 3.8482E−01 | 3.5554E−01 | 5.0940E−02 | 1.9880E−02

Max | 4.7215E+02 | 6.7534E+02 | 5.6753E+02 | 5.3090E+02 | 2.3468E+02
Min | 0 | 2.2001E+02 | 5.6300E−02 | 0 | 0
Mean | 2.4908E−01 | 2.3377E+02 | 9.2909E+00 | 3.0558E−01 | 9.4500E−02
Var | 8.5433E+00 | 3.3856E+01 | 2.2424E+01 | 1.0089E+01 | 3.5569E+00

Max | 3.2500E+02 | 2.4690E+02 | 2.7226E+02 | 2.8001E+02 | 1.7249E+02
Min | 1.4678E−239 | 8.3483E+01 | 5.9820E−02 | 8.9624E−239 | 0
Mean | 1.9072E−01 | 9.1050E+01 | 7.9923E+00 | 2.3232E−01 | 8.1580E−02
Var | 6.3211E+00 | 1.3811E+01 | 1.7349E+01 | 6.4400E+00 | 2.9531E+00


Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

Max | 1.8411E+02 | 3.4713E+02 | 2.9347E+02 | 1.6918E+02 | 1.6789E+00
Min | 0 | 5.7879E−01 | 0 | 0 | 3.2988E−238
Mean | 5.5890E−02 | 1.1150E+00 | 1.6095E−01 | 3.6580E−02 | 4.2590E−03
Var | 2.6628E+00 | 5.7218E+00 | 5.4561E+00 | 2.0867E+00 | 7.5858E−02

Max | 2.2980E+00 | 2.1935E+00 | 3.2363E+00 | 3.1492E+00 | 1.1020E+00
Min | 5.5923E−266 | 8.2690E−02 | 2.9096E−242 | 3.4367E−241 | 0
Mean | 9.2769E−04 | 9.3960E−02 | 7.4900E−03 | 1.2045E−03 | 2.1565E−04
Var | 3.3310E−02 | 6.9080E−02 | 4.0100E−02 | 4.2190E−02 | 1.3130E−02

Max | 1.0647E+02 | 1.3245E+02 | 3.6203E+02 | 2.3793E+02 | 1.3089E+02
Min | 0 | 5.9950E−01 | 0 | 0 | 0
Mean | 2.3040E−02 | 8.5959E−01 | 2.5020E−01 | 6.3560E−02 | 1.7170E−02
Var | 1.2892E+00 | 2.8747E+00 | 7.5569E+00 | 2.9203E+00 | 1.3518E+00

Max | 1.7097E+02 | 6.1375E+01 | 6.8210E+01 | 5.9141E+01 | 1.4726E+01
Min | 1.6325E−21 | 4.0940E−02 | 8.2726E−13 | 3.4830E−25 | 0
Mean | 2.2480E−02 | 9.3940E−02 | 1.5730E−02 | 1.1020E−02 | 1.4308E−02
Var | 1.7987E+00 | 7.4859E−01 | 7.6015E−01 | 6.4984E−01 | 2.7598E−01

Max | 1.5719E+02 | 2.2513E+02 | 3.3938E+02 | 1.8946E+02 | 8.7078E+01
Min | 0 | 7.0367E−01 | 0 | 0 | 0
Mean | 3.4380E−02 | 1.8850E+00 | 1.7082E−01 | 5.0090E−02 | 1.1880E−02
Var | 1.9018E+00 | 5.6163E+00 | 5.9868E+00 | 2.4994E+00 | 9.1749E−01

Max | 1.5887E−01 | 1.5649E−01 | 1.5919E−01 | 1.3461E−01 | 1.0139E−01
Min | 2.5412E−05 | 4.5060E−04 | 5.9140E−05 | 4.1588E−05 | 7.3524E−06
Mean | 2.3437E−04 | 1.3328E−03 | 6.0989E−04 | 2.3462E−04 | 9.2394E−05
Var | 2.4301E−03 | 3.6700E−03 | 3.5200E−03 | 1.9117E−03 | 1.4664E−03

Max | 3.5804E+00 | 2.8236E+00 | 1.7372E+00 | 2.7513E+00 | 1.9411E+00
Min | 0 | 7.6633E−03 | 0 | 0 | 0
Mean | 8.5474E−04 | 1.6590E−02 | 8.6701E−04 | 7.6781E−04 | 3.1439E−04
Var | 4.4630E−02 | 6.0390E−02 | 2.4090E−02 | 3.7520E−02 | 2.2333E−02

Max | 4.3247E+00 | 2.1924E+00 | 5.3555E+00 | 3.3944E+00 | 5.5079E−01
Min | 0 | 8.6132E−03 | 0 | 0 | 0
Mean | 1.1649E−03 | 1.9330E−02 | 1.7145E−03 | 7.3418E−04 | 6.9138E−05
Var | 5.9280E−02 | 4.7800E−02 | 7.5810E−02 | 4.1414E−02 | 5.7489E−03

Max | 2.7030E−02 | 3.5200E−02 | 1.7240E−02 | 4.0480E−02 | 2.5230E−02
Min | 0 | 5.0732E−03 | 0 | 0 | 0
Mean | 6.1701E−05 | 6.2500E−03 | 8.8947E−04 | 8.4870E−05 | 2.7362E−05
Var | 7.6990E−04 | 1.3062E−03 | 2.0400E−03 | 9.6160E−04 | 5.5610E−04

(1) Unimodal functionsFrom the numerical testing results on 8 unimodal functions in Table 4, we can see that DMSDL-QBSA can find the optimal solution for all unimodal functions and get the minimum value of 0 on , , , , and . Both DMSDL-QBSA and DMSDL-BSA can find the minimum value. However, DMSDL-QBSA has the best mean value and variance on each function. The main reason is that DMSDL-QBSA has better population diversity during the initialization period. In summary, the DMSDL-QBSA has best performance on unimodal functions compared to the other algorithms when Dim = 10. Obviously, the DMSDL-QBSA has a relatively well convergence speed.The evolution curves of these algorithms on four unimodal functions , , , and are drawn in Figure 1. It can be detected from the figure that the curve of DMSDL-QBSA descends fastest in the number of iterations that are far less than 10000 times. For , , and case, DMSDL-QBSA has the fastest convergence speed compared with other algorithms. However, the original BSA got the worst solution because it is trapped in the local optimum prematurely. For function , these algorithms did not find the value 0. However, the convergence speed of the DMSDL-QBSA is significantly faster than other algorithms in the early stage and the solution eventually found is the best. Overall, owing to enhance the diversity of population, DMSDL-QBSA has a relatively excellent convergence speed when Dim = 2.According to the results of Table 5, DMSDL-QBSA gets the best performance on these 8 unimodal functions when Dim = 2. DMSDL-QBSA finds the minimum value of 0 on , , , , , , , and . DMSDL-QBSA has better performance on , , and compared with DMSDL-BSA and DMSDL-PSO. DE does not perform well on these functions, but BSA performs relatively well on , , , , , and . The main reason is that BSA has a better convergence performance in the early search. 
Obviously, DMSDL-QBSA can find the best two parameters for RF that need to be optimized.

(2) Multimodal functions. From the numerical testing results on the 8 multimodal functions in Table 6, we can see that DMSDL-QBSA can find the optimal solution for all multimodal functions and reaches the minimum value of 0 on three of them. DMSDL-QBSA has the best performance on six functions, BSA works best on one, and DMSDL-PSO does not perform very well. DMSDL-QBSA also has the best mean value and variance on most functions. The main reason is that DMSDL-QBSA has a stronger global exploration capability based on the dynamic multi-swarm method and differential evolution. In summary, DMSDL-QBSA performs well on multimodal functions compared to the other algorithms when Dim = 10, which shows that it has a relatively good global search capability.

Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

Max | 2.8498E+03 | 2.8226E+03 | 3.0564E+03 | 2.7739E+03 | 2.8795E+03
Min | 1.2553E+02 | 1.8214E+03 | 1.2922E+03 | 1.6446E+02 | 1.1634E+03
Mean | 2.5861E+02 | 1.9229E+03 | 1.3185E+03 | 3.1119E+02 | 1.2729E+03
Var | 2.4093E+02 | 1.2066E+02 | 1.2663E+02 | 2.5060E+02 | 1.2998E+02

Max | 1.2550E+02 | 1.0899E+02 | 1.1806E+02 | 1.1243E+02 | 9.1376E+01
Min | 0 | 6.3502E+01 | 1.0751E+01 | 0 | 0
Mean | 2.0417E-01 | 6.7394E+01 | 3.9864E+01 | 1.3732E-01 | 6.8060E-02
Var | 3.5886E+00 | 5.8621E+00 | 1.3570E+01 | 3.0325E+00 | 2.0567E+00

Max | 2.0021E+01 | 1.9910E+01 | 1.9748E+01 | 1.9254E+01 | 1.8118E+01
Min | 8.8818E-16 | 1.6575E+01 | 7.1700E-02 | 8.8818E-16 | 8.8818E-16
Mean | 3.0500E-02 | 1.7157E+01 | 3.0367E+00 | 3.8520E-02 | 1.3420E-02
Var | 5.8820E-01 | 5.2968E-01 | 1.6585E+00 | 6.4822E-01 | 4.2888E-01

Max | 1.0431E+02 | 1.3266E+02 | 1.5115E+02 | 1.2017E+02 | 6.1996E+01
Min | 0 | 4.5742E+01 | 2.1198E-01 | 0 | 0
Mean | 6.1050E-02 | 5.2056E+01 | 3.0613E+00 | 6.9340E-02 | 2.9700E-02
Var | 1.8258E+00 | 8.3141E+00 | 1.5058E+01 | 2.2452E+00 | 1.0425E+00

Max | 8.4576E+06 | 3.0442E+07 | 5.3508E+07 | 6.2509E+07 | 8.5231E+06
Min | 1.7658E-13 | 1.9816E+06 | 4.5685E-05 | 1.6961E-13 | 5.1104E-01
Mean | 1.3266E+03 | 3.1857E+06 | 6.8165E+04 | 8.8667E+03 | 1.1326E+03
Var | 9.7405E+04 | 1.4876E+06 | 1.4622E+06 | 6.4328E+05 | 8.7645E+04

Max | 1.8310E+08 | 1.4389E+08 | 1.8502E+08 | 1.4578E+08 | 2.5680E+07
Min | 1.7942E-11 | 1.0497E+07 | 2.4500E-03 | 1.1248E-11 | 9.9870E-01
Mean | 3.7089E+04 | 1.5974E+07 | 2.0226E+05 | 3.8852E+04 | 3.5739E+03
Var | 2.0633E+06 | 1.0724E+07 | 4.6539E+06 | 2.1133E+06 | 2.6488E+05

Max | 1.3876E+01 | 1.4988E+01 | 1.4849E+01 | 1.3506E+01 | 9.3280E+00
Min | 4.2410E-174 | 6.8743E+00 | 2.5133E-02 | 7.3524E-176 | 0
Mean | 1.3633E-02 | 7.2408E+00 | 2.5045E+00 | 1.3900E-02 | 5.1800E-03
Var | 3.3567E-01 | 7.7774E-01 | 1.0219E+00 | 3.4678E-01 | 1.7952E-01

Max | 3.6704E+01 | 3.6950E+01 | 2.8458E+01 | 2.6869E+01 | 2.4435E+01
Min | 2.0914E-11 | 9.7737E+00 | 3.3997E-03 | 5.9165E-12 | 7.5806E-01
Mean | 6.5733E-02 | 1.2351E+01 | 6.7478E-01 | 5.6520E-02 | 7.9392E-01
Var | 6.7543E-01 | 2.8057E+00 | 1.4666E+00 | 6.4874E-01 | 3.6928E-01


Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

Max | 2.0101E+02 | 2.8292E+02 | 2.9899E+02 | 2.4244E+02 | 2.8533E+02
Min | 2.5455E-05 | 3.7717E+00 | 2.3690E+01 | 2.5455E-05 | 9.4751E+01
Mean | 3.4882E-01 | 8.0980E+00 | 2.5222E+01 | 4.2346E-01 | 9.4816E+01
Var | 5.7922E+00 | 1.0853E+01 | 1.5533E+01 | 5.6138E+00 | 2.8160E+00

Max | 4.7662E+00 | 4.7784E+00 | 1.1067E+01 | 8.7792E+00 | 8.1665E+00
Min | 0 | 3.8174E-01 | 0 | 0 | 0
Mean | 2.7200E-03 | 6.0050E-01 | 3.1540E-02 | 4.2587E-03 | 3.7800E-03
Var | 8.6860E-02 | 3.1980E-01 | 2.4862E-01 | 1.2032E-01 | 1.3420E-01

Max | 9.6893E+00 | 8.1811E+00 | 1.1635E+01 | 9.1576E+00 | 8.4720E+00
Min | 8.8818E-16 | 5.1646E-01 | 8.8818E-16 | 8.8818E-16 | 8.8818E-16
Mean | 9.5600E-03 | 6.9734E-01 | 3.4540E-02 | 9.9600E-03 | 2.8548E-03
Var | 2.1936E-01 | 6.1050E-01 | 2.5816E-01 | 2.1556E-01 | 1.1804E-01

Max | 4.4609E+00 | 4.9215E+00 | 4.1160E+00 | 1.9020E+00 | 1.6875E+00
Min | 0 | 1.3718E-01 | 0 | 0 | 0
Mean | 1.9200E-03 | 1.7032E-01 | 1.8240E-02 | 1.4800E-03 | 5.7618E-04
Var | 6.6900E-02 | 1.3032E-01 | 1.8202E-01 | 3.3900E-02 | 2.2360E-02

Max | 1.0045E+01 | 1.9266E+03 | 1.9212E+01 | 5.7939E+02 | 8.2650E+00
Min | 2.3558E-31 | 1.3188E-01 | 2.3558E-31 | 2.3558E-31 | 2.3558E-31
Mean | 4.1600E-03 | 3.5402E-01 | 1.0840E-02 | 6.1420E-02 | 1.3924E-03
Var | 1.7174E-01 | 1.9427E+01 | 3.9528E-01 | 5.8445E+00 | 8.7160E-02

Max | 6.5797E+04 | 4.4041E+03 | 1.4412E+05 | 8.6107E+03 | 2.6372E+00
Min | 1.3498E-31 | 9.1580E-02 | 1.3498E-31 | 1.3498E-31 | 7.7800E-03
Mean | 7.1736E+00 | 8.9370E-01 | 1.7440E+01 | 9.0066E-01 | 8.2551E-03
Var | 6.7678E+02 | 5.4800E+01 | 1.4742E+03 | 8.7683E+01 | 2.8820E-02

Max | 6.2468E-01 | 6.4488E-01 | 5.1564E-01 | 8.4452E-01 | 3.9560E-01
Min | 6.9981E-08 | 2.5000E-03 | 1.5518E-240 | 2.7655E-07 | 0
Mean | 2.7062E-04 | 6.9400E-03 | 6.8555E-04 | 2.0497E-04 | 6.1996E-05
Var | 1.0380E-02 | 1.7520E-02 | 8.4600E-03 | 1.0140E-02 | 4.4000E-03

Max | 5.1946E+00 | 3.6014E+00 | 2.3463E+00 | 6.9106E+00 | 1.2521E+00
Min | 2.6445E-11 | 2.6739E-02 | 0 | 1.0855E-10 | 0
Mean | 1.9343E-03 | 5.1800E-02 | 1.2245E-03 | 2.8193E-03 | 1.5138E-04
Var | 7.3540E-02 | 1.2590E-01 | 4.1620E-02 | 1.1506E-01 | 1.2699E-02

Max | 5.0214E-01 | 3.4034E-01 | 4.1400E-01 | 3.7422E-01 | 4.0295E-01
Min | 1.4998E-32 | 1.9167E-03 | 1.4998E-32 | 1.4998E-32 | 1.4998E-32
Mean | 1.0967E-04 | 4.1000E-03 | 1.8998E-04 | 1.4147E-04 | 6.0718E-05
Var | 6.1800E-03 | 1.0500E-02 | 6.5200E-03 | 5.7200E-03 | 4.4014E-03

The evolution curves of these algorithms on four multimodal functions when Dim = 2 are depicted in Figure 2. We can see that DMSDL-QBSA can find the optimal solution within the same number of iterations. On two of the functions, DMSDL-QBSA continues to decline, whereas the original BSA and DE produce flat curves because of their poor global convergence ability. On the other two functions, although DMSDL-QBSA is also trapped in a local optimum, it finds the minimum value compared to the other algorithms. Obviously, the convergence speed of DMSDL-QBSA is significantly faster than that of the other algorithms in the early stage, and the solution it eventually finds is the best. In general, owing to the enhanced population diversity, DMSDL-QBSA has a relatively balanced global search capability when Dim = 2.

Furthermore, from the numerical testing results on the nine multimodal functions in Table 7, we can see that DMSDL-QBSA has the best performance on seven functions and attains the minimum value of 0 on four of them. BSA attains the minimum value of 0 on three functions, DE does not attain the minimum value of 0 on any function, and DMSDL-BSA attains it on two. In summary, DMSDL-QBSA has a superior global search capability on most multimodal functions when Dim = 2. Because of this global search capability, DMSDL-QBSA can find the best two parameters for RF that need to be optimized.

In this section, it can be seen from Figures 1 and 2 and Tables 4–7 that DMSDL-QBSA obtains the best function values in most cases. This indicates that the hybrid strategies of BSA, the dynamic multi-swarm method, DE, and the quantum behavior operators lead the birds to move towards the best solutions. DMSDL-QBSA is therefore well suited to searching for the best two RF parameters with high accuracy and efficiency.
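The Max/Min/Mean/Var statistics reported in the tables above come from repeated independent runs of each optimizer on a benchmark function. The following is a minimal sketch of that evaluation protocol, with plain random search standing in for DMSDL-QBSA and the sphere function as the benchmark; all names, seeds, and budgets here are illustrative assumptions, not the paper's setup.

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, evals, lo=-100.0, hi=100.0, seed=0):
    """Stand-in optimizer: pure random search, used only to illustrate how
    the best objective value of one run is obtained."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best

def run_statistics(f, dim=10, evals=1000, runs=30):
    """Repeat independent runs and report Max, Min, Mean, Var of the best values,
    mirroring the four terms tabulated in the paper."""
    bests = [random_search(f, dim, evals, seed=s) for s in range(runs)]
    mean = sum(bests) / runs
    var = sum((b - mean) ** 2 for b in bests) / runs
    return {"Max": max(bests), "Min": min(bests), "Mean": mean, "Var": var}

stats = run_statistics(sphere)
```

Substituting any of the compared algorithms for `random_search` reproduces one row group of the tables for one function.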

2.3.4. Comparison on Benchmark Functions with Popular Algorithms

To compare the timeliness and applicability of DMSDL-QBSA with several popular algorithms, such as GWO, WOA, SCA, GOA, and SSA, the 18 benchmark functions are applied; GWO, WOA, GOA, and SSA are swarm intelligence algorithms. In this experiment, the dimension of these functions is 10 and the number of function evaluations (FEs) is 100000. The maximum value (Max), the minimum value (Min), the mean value (Mean), and the variance (Var) obtained by the different algorithms are shown in Tables 8 and 9, where the best results are marked in bold.


Function | Term | GWO | WOA | SCA | GOA | SSA | DMSDL-QBSA

Max | 1.3396E+04 | 1.4767E+04 | 1.3310E+04 | 2.0099E+01 | 4.8745E+03 | 9.8570E-01
Min | 0 | 0 | 4.2905E-293 | 8.6468E-17 | 0 | 0
Mean | 3.5990E+00 | 4.7621E+00 | 1.4014E+02 | 7.0100E-02 | 4.5864E+00 | 1.0483E-04
Var | 1.7645E+02 | 2.0419E+02 | 8.5054E+02 | 4.4200E-01 | 1.4148E+02 | 9.8725E-03

Max | 3.6021E+02 | 2.5789E+03 | 6.5027E+01 | 9.3479E+01 | 3.4359E+01 | 3.2313E-01
Min | 0 | 0 | 9.8354E-192 | 2.8954E-03 | 2.4642E-181 | 0
Mean | 5.0667E-02 | 2.9480E-01 | 4.0760E-01 | 3.1406E+00 | 1.7000E-02 | 1.1278E-04
Var | 3.7270E+00 | 2.6091E+01 | 2.2746E+00 | 3.9264E+00 | 5.4370E-01 | 4.8000E-03

Max | 1.8041E+04 | 1.6789E+04 | 2.4921E+04 | 6.5697E+03 | 1.1382E+04 | 4.0855E+01
Min | 0 | 1.0581E-18 | 7.6116E-133 | 2.8796E-01 | 3.2956E-253 | 1.5918E-264
Mean | 7.4511E+00 | 2.8838E+02 | 4.0693E+02 | 4.8472E+02 | 9.2062E+00 | 4.8381E-03
Var | 2.6124E+02 | 1.4642E+03 | 1.5913E+03 | 7.1786E+02 | 2.9107E+02 | 4.1383E-01

Max | 2.1812E+09 | 5.4706E+09 | 8.4019E+09 | 1.1942E+09 | 4.9386E+08 | 7.5188E+01
Min | 4.9125E+00 | 3.5695E+00 | 5.9559E+00 | 2.2249E+02 | 4.9806E+00 | 2.9279E-13
Mean | 4.9592E+05 | 2.4802E+06 | 4.4489E+07 | 5.0021E+06 | 3.3374E+05 | 2.4033E-02
Var | 2.9484E+07 | 1.1616E+08 | 4.5682E+08 | 3.9698E+07 | 1.1952E+07 | 9.7253E-01

Max | 1.8222E+04 | 1.5374E+04 | 1.5874E+04 | 1.2132E+03 | 1.6361E+04 | 1.8007E+00
Min | 1.1334E-08 | 8.3228E-09 | 2.3971E-01 | 2.7566E-10 | 2.6159E-16 | 1.0272E-33
Mean | 5.1332E+00 | 5.9967E+00 | 1.2620E+02 | 5.8321E+01 | 8.8985E+00 | 2.3963E-04
Var | 2.3617E+02 | 2.3285E+02 | 8.8155E+02 | 1.0872E+02 | 2.9986E+02 | 1.8500E-02

Max | 7.4088E+00 | 8.3047E+00 | 8.8101E+00 | 6.8900E-01 | 4.4298E+00 | 2.5787E-01
Min | 1.8112E-05 | 3.9349E-05 | 4.8350E-05 | 8.9528E-02 | 4.0807E-05 | 1.0734E-04
Mean | 1.8333E-03 | 3.2667E-03 | 4.8400E-02 | 9.3300E-02 | 2.1000E-03 | 5.2825E-04
Var | 9.4267E-02 | 1.0077E-01 | 2.9410E-01 | 3.5900E-02 | 6.5500E-02 | 4.6000E-03

Max | 7.3626E+02 | 6.8488E+02 | 8.0796E+02 | 3.9241E+02 | 8.2036E+02 | 1.4770E+01
Min | 0 | 0 | 1.9441E-292 | 7.9956E-07 | 0 | 0
Mean | 2.0490E-01 | 2.8060E-01 | 4.9889E+00 | 1.6572E+01 | 2.7290E-01 | 1.8081E-03
Var | 9.5155E+00 | 1.0152E+01 | 3.5531E+01 | 2.3058E+01 | 1.0581E+01 | 1.6033E-01

Max | 1.2749E+03 | 5.9740E+02 | 3.2527E+02 | 2.3425E+02 | 2.0300E+02 | 2.0423E+01
Min | 0 | 4.3596E-35 | 1.5241E-160 | 3.6588E-05 | 1.0239E-244 | 0
Mean | 3.1317E-01 | 1.0582E+01 | 1.0457E+01 | 1.2497E+01 | 2.1870E-01 | 2.6947E-03
Var | 1.4416E+01 | 4.3485E+01 | 3.5021E+01 | 2.5766E+01 | 6.2362E+00 | 2.1290E-01


Function | Term | GWO | WOA | SCA | GOA | SSA | DMSDL-QBSA

Max | 2.9544E+03 | 2.9903E+03 | 2.7629E+03 | 2.9445E+03 | 3.2180E+03 | 3.0032E+03
Min | 1.3037E+03 | 8.6892E-04 | 1.5713E+03 | 1.3438E+03 | 1.2839E-04 | 1.2922E+03
Mean | 1.6053E+03 | 1.4339E+01 | 1.7860E+03 | 1.8562E+03 | 2.5055E+02 | 1.2960E+03
Var | 2.6594E+02 | 1.2243E+02 | 1.6564E+02 | 5.1605E+02 | 3.3099E+02 | 5.8691E+01

Max | 1.3792E+02 | 1.2293E+02 | 1.2313E+02 | 1.1249E+02 | 3.2180E+03 | 1.8455E+01
Min | 0 | 0 | 0 | 1.2437E+01 | 1.2839E-04 | 0
Mean | 4.4220E-01 | 1.1252E+00 | 9.9316E+00 | 2.8378E+01 | 2.5055E+02 | 3.2676E-03
Var | 4.3784E+00 | 6.5162E+00 | 1.9180E+01 | 1.6240E+01 | 3.3099E+02 | 1.9867E-01

Max | 2.0257E+01 | 2.0043E+01 | 1.9440E+01 | 1.6623E+01 | 3.2180E+03 | 1.9113E+00
Min | 4.4409E-15 | 3.2567E-15 | 3.2567E-15 | 2.3168E+00 | 1.2839E-04 | 8.8818E-16
Mean | 1.7200E-02 | 4.2200E-02 | 8.8870E-01 | 5.5339E+00 | 2.5055E+02 | 3.1275E-04
Var | 4.7080E-01 | 6.4937E-01 | 3.0887E+00 | 2.8866E+00 | 3.3099E+02 | 2.2433E-02

Max | 1.5246E+02 | 1.6106E+02 | 1.1187E+02 | 6.1505E+01 | 3.2180E+03 | 3.1560E-01
Min | 3.3000E-03 | 0 | 0 | 2.4147E-01 | 1.2839E-04 | 0
Mean | 4.6733E-02 | 9.3867E-02 | 1.2094E+00 | 3.7540E+00 | 2.5055E+02 | 3.3660E-05
Var | 1.9297E+00 | 2.6570E+00 | 6.8476E+00 | 4.1936E+00 | 3.3099E+02 | 3.1721E-03

Max | 9.5993E+07 | 9.9026E+07 | 5.9355E+07 | 6.1674E+06 | 3.2180E+03 | 6.4903E-01
Min | 3.8394E-09 | 1.1749E-08 | 9.6787E-03 | 1.8099E-04 | 1.2839E-04 | 4.7116E-32
Mean | 1.2033E+04 | 3.5007E+04 | 4.8303E+05 | 1.0465E+04 | 2.5055E+02 | 8.9321E-05
Var | 9.8272E+05 | 1.5889E+06 | 4.0068E+06 | 1.9887E+05 | 3.3099E+02 | 6.8667E-03

Max | 2.2691E+08 | 2.4717E+08 | 1.1346E+08 | 2.8101E+07 | 3.2180E+03 | 1.6407E-01
Min | 3.2467E-02 | 4.5345E-08 | 1.1922E-01 | 3.5465E-05 | 1.2839E-04 | 1.3498E-32
Mean | 2.9011E+04 | 4.3873E+04 | 6.5529E+05 | 7.2504E+04 | 2.5055E+02 | 6.7357E-05
Var | 2.3526E+06 | 2.7453E+06 | 7.1864E+06 | 1.2814E+06 | 3.3099E+02 | 2.4333E-03

Max | 1.7692E+01 | 1.7142E+01 | 1.6087E+01 | 8.7570E+00 | 3.2180E+03 | 1.0959E+00
Min | 2.6210E-07 | 0.0000E+00 | 6.2663E-155 | 1.0497E-02 | 1.2839E-04 | 0
Mean | 1.0133E-02 | 3.9073E-01 | 5.9003E-01 | 2.4770E+00 | 2.5055E+02 | 1.5200E-04
Var | 3.0110E-01 | 9.6267E-01 | 1.4701E+00 | 1.9985E+00 | 3.3099E+02 | 1.1633E-02

Max | 4.4776E+01 | 4.3588E+01 | 3.9095E+01 | 1.7041E+01 | 3.2180E+03 | 6.5613E-01
Min | 1.9360E-01 | 9.4058E-08 | 2.2666E-01 | 3.9111E+00 | 1.2839E-04 | 1.4998E-32
Mean | 2.1563E-01 | 4.7800E-02 | 9.8357E-01 | 5.4021E+00 | 2.5055E+02 | 9.4518E-05
Var | 5.8130E-01 | 1.0434E+00 | 3.0643E+00 | 1.6674E+00 | 3.3099E+02 | 7.0251E-03

From the test results in Table 8, we can see that DMSDL-QBSA has the best performance on each unimodal function. GWO finds the value 0 on five functions, WOA obtains 0 on three, and SSA works best on two. With the experiment of multimodal function evaluations, Table 9 shows that DMSDL-QBSA has the best performance on seven of the multimodal functions, SSA has the best performance on one, GWO attains the minimum on one, and WOA and SCA obtain the optimal value on two others. Obviously, compared with these popular algorithms, DMSDL-QBSA is a competitive algorithm for solving these functions, and the swarm intelligence algorithms perform better than the other algorithms. The results of Tables 8 and 9 show that DMSDL-QBSA has the best performance on most of the test benchmark functions.

2.3.5. Comparison on CEC2014 Test Functions with Hybrid Algorithms

To compare the comprehensive performance of the proposed DMSDL-QBSA with several hybrid algorithms, such as BSA, DE, DMSDL-PSO, and DMSDL-BSA, the 30 CEC2014 test functions are applied. In this experiment, the dimension (Dim) is set to 10 and the number of function evaluations (FEs) is 100000. The maximum value (Max), the minimum value (Min), the mean value (Mean), and the variance (Var) are given in Tables 10 and 11, where the best results are marked in bold.


Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

F1 | Max | 2.7411E+08 | 3.6316E+09 | 6.8993E+08 | 3.9664E+08 | 9.6209E+08
F1 | Min | 1.4794E+07 | 3.1738E+09 | 1.3205E+06 | 1.6929E+05 | 1.7687E+05
F1 | Mean | 3.1107E+07 | 3.2020E+09 | 6.7081E+06 | 1.3567E+06 | 1.9320E+06
F1 | Var | 1.2900E+07 | 4.0848E+07 | 2.6376E+07 | 1.0236E+07 | 1.3093E+07

F2 | Max | 8.7763E+09 | 1.3597E+10 | 1.3515E+10 | 1.6907E+10 | 1.7326E+10
F2 | Min | 1.7455E+09 | 1.1878E+10 | 3.6982E+04 | 2.1268E+05 | 1.4074E+05
F2 | Mean | 1.9206E+09 | 1.1951E+10 | 1.7449E+08 | 5.9354E+07 | 4.8412E+07
F2 | Var | 1.9900E+08 | 1.5615E+08 | 9.2365E+08 | 5.1642E+08 | 4.2984E+08

F3 | Max | 4.8974E+05 | 8.5863E+04 | 2.6323E+06 | 2.4742E+06 | 5.3828E+06
F3 | Min | 1.1067E+04 | 1.4616E+04 | 1.8878E+03 | 1.2967E+03 | 6.3986E+02
F3 | Mean | 1.3286E+04 | 1.4976E+04 | 1.0451E+04 | 3.5184E+03 | 3.0848E+03
F3 | Var | 6.5283E+03 | 1.9988E+03 | 4.5736E+04 | 4.1580E+04 | 8.0572E+04

F4 | Max | 3.5252E+03 | 9.7330E+03 | 4.5679E+03 | 4.9730E+03 | 4.3338E+03
F4 | Min | 5.0446E+02 | 8.4482E+03 | 4.0222E+02 | 4.0843E+02 | 4.1267E+02
F4 | Mean | 5.8061E+02 | 8.5355E+03 | 4.4199E+02 | 4.3908E+02 | 4.3621E+02
F4 | Var | 1.4590E+02 | 1.3305E+02 | 1.6766E+02 | 1.2212E+02 | 8.5548E+01

F5 | Max | 5.2111E+02 | 5.2106E+02 | 5.2075E+02 | 5.2098E+02 | 5.2110E+02
F5 | Min | 5.2001E+02 | 5.2038E+02 | 5.2027E+02 | 5.2006E+02 | 5.2007E+02
F5 | Mean | 5.2003E+02 | 5.2041E+02 | 5.2033E+02 | 5.2014E+02 | 5.2014E+02
F5 | Var | 7.8380E-02 | 5.7620E-02 | 5.6433E-02 | 1.0577E-01 | 9.9700E-02

F6 | Max | 6.1243E+02 | 6.1299E+02 | 6.1374E+02 | 6.1424E+02 | 6.1569E+02
F6 | Min | 6.0881E+02 | 6.1157E+02 | 6.0514E+02 | 6.0288E+02 | 6.0257E+02
F6 | Mean | 6.0904E+02 | 6.1164E+02 | 6.0604E+02 | 6.0401E+02 | 6.0358E+02
F6 | Var | 2.5608E-01 | 1.2632E-01 | 1.0434E+00 | 1.1717E+00 | 1.3117E+00

F7 | Max | 8.7895E+02 | 1.0459E+03 | 9.1355E+02 | 1.0029E+03 | 9.3907E+02
F7 | Min | 7.4203E+02 | 1.0238E+03 | 7.0013E+02 | 7.0081E+02 | 7.0069E+02
F7 | Mean | 7.4322E+02 | 1.0253E+03 | 7.1184E+02 | 7.0332E+02 | 7.0290E+02
F7 | Var | 4.3160E+00 | 2.4258E+00 | 2.8075E+01 | 1.5135E+01 | 1.3815E+01

F8 | Max | 8.9972E+02 | 9.1783E+02 | 9.6259E+02 | 9.2720E+02 | 9.3391E+02
F8 | Min | 8.4904E+02 | 8.8172E+02 | 8.3615E+02 | 8.1042E+02 | 8.0877E+02
F8 | Mean | 8.5087E+02 | 8.8406E+02 | 8.5213E+02 | 8.1888E+02 | 8.1676E+02
F8 | Var | 2.1797E+00 | 4.4494E+00 | 8.6249E+00 | 9.2595E+00 | 9.7968E+00

F9 | Max | 1.0082E+03 | 9.9538E+02 | 1.0239E+03 | 1.0146E+03 | 1.0366E+03
F9 | Min | 9.3598E+02 | 9.6805E+02 | 9.2725E+02 | 9.2062E+02 | 9.1537E+02
F9 | Mean | 9.3902E+02 | 9.7034E+02 | 9.4290E+02 | 9.2754E+02 | 9.2335E+02
F9 | Var | 3.4172E+00 | 3.2502E+00 | 8.1321E+00 | 9.0492E+00 | 9.3860E+00

F10 | Max | 3.1087E+03 | 3.5943E+03 | 3.9105E+03 | 3.9116E+03 | 3.4795E+03
F10 | Min | 2.2958E+03 | 2.8792E+03 | 1.5807E+03 | 1.4802E+03 | 1.4316E+03
F10 | Mean | 1.8668E+03 | 2.9172E+03 | 1.7627E+03 | 1.6336E+03 | 1.6111E+03
F10 | Var | 3.2703E+01 | 6.1787E+01 | 2.2359E+02 | 1.9535E+02 | 2.2195E+02

F11 | Max | 3.6641E+03 | 3.4628E+03 | 4.0593E+03 | 3.5263E+03 | 3.7357E+03
F11 | Min | 2.4726E+03 | 2.8210E+03 | 1.7855E+03 | 1.4012E+03 | 1.2811E+03
F11 | Mean | 2.5553E+03 | 2.8614E+03 | 1.8532E+03 | 1.5790E+03 | 1.4869E+03
F11 | Var | 8.4488E+01 | 5.6351E+01 | 1.3201E+02 | 2.1085E+02 | 2.6682E+02

F12 | Max | 1.2044E+03 | 1.2051E+03 | 1.2053E+03 | 1.2055E+03 | 1.2044E+03
F12 | Min | 1.2006E+03 | 1.2014E+03 | 1.2004E+03 | 1.2003E+03 | 1.2017E+03
F12 | Mean | 1.2007E+03 | 1.2017E+03 | 1.2007E+03 | 1.2003E+03 | 1.2018E+03
F12 | Var | 1.4482E-01 | 3.5583E-01 | 2.5873E-01 | 2.4643E-01 | 2.1603E-01

F13 | Max | 1.3049E+03 | 1.3072E+03 | 1.3073E+03 | 1.3056E+03 | 1.3061E+03
F13 | Min | 1.3005E+03 | 1.3067E+03 | 1.3009E+03 | 1.3003E+03 | 1.3004E+03
F13 | Mean | 1.3006E+03 | 1.3068E+03 | 1.3011E+03 | 1.3005E+03 | 1.3006E+03
F13 | Var | 3.2018E-01 | 5.0767E-02 | 4.2173E-01 | 3.6570E-01 | 2.9913E-01

F14 | Max | 1.4395E+03 | 1.4565E+03 | 1.4775E+03 | 1.4749E+03 | 1.4493E+03
F14 | Min | 1.4067E+03 | 1.4522E+03 | 1.4009E+03 | 1.4003E+03 | 1.4005E+03
F14 | Mean | 1.4079E+03 | 1.4529E+03 | 1.4024E+03 | 1.4008E+03 | 1.4009E+03
F14 | Var | 2.1699E+00 | 6.3013E-01 | 5.3198E+00 | 3.2578E+00 | 2.8527E+00

F15 | Max | 5.4068E+04 | 3.3586E+04 | 4.8370E+05 | 4.0007E+05 | 1.9050E+05
F15 | Min | 1.9611E+03 | 2.1347E+04 | 1.5029E+03 | 1.5027E+03 | 1.5026E+03
F15 | Mean | 1.9933E+03 | 2.2417E+04 | 2.7920E+03 | 1.5860E+03 | 1.5816E+03
F15 | Var | 6.1622E+02 | 1.2832E+03 | 1.7802E+04 | 4.4091E+03 | 2.7233E+03


Function | Term | BSA | DE | DMSDL-PSO | DMSDL-BSA | DMSDL-QBSA

F16 | Max | 1.6044E+03 | 1.6044E+03 | 1.6046E+03 | 1.6044E+03 | 1.6046E+03
F16 | Min | 1.6034E+03 | 1.6041E+03 | 1.6028E+03 | 1.6021E+03 | 1.6019E+03
F16 | Mean | 1.6034E+03 | 1.6041E+03 | 1.6032E+03 | 1.6024E+03 | 1.6024E+03
F16 | Var | 3.3300E-02 | 3.5900E-02 | 2.5510E-01 | 3.7540E-01 | 3.6883E-01

F17 | Max | 1.3711E+07 | 1.6481E+06 | 3.3071E+07 | 4.6525E+07 | 8.4770E+06
F17 | Min | 1.2526E+05 | 4.7216E+05 | 2.7261E+03 | 2.0177E+03 | 2.0097E+03
F17 | Mean | 1.6342E+05 | 4.8499E+05 | 8.7769E+04 | 3.1084E+04 | 2.1095E+04
F17 | Var | 2.0194E+05 | 2.9949E+04 | 1.2284E+06 | 8.7638E+05 | 2.0329E+05

F18 | Max | 7.7173E+07 | 3.5371E+07 | 6.1684E+08 | 1.4216E+08 | 6.6050E+08
F18 | Min | 4.1934E+03 | 2.6103E+06 | 2.2743E+03 | 1.8192E+03 | 1.8288E+03
F18 | Mean | 1.6509E+04 | 3.4523E+06 | 1.2227E+06 | 1.0781E+05 | 1.9475E+05
F18 | Var | 7.9108E+05 | 1.4888E+06 | 2.1334E+07 | 3.6818E+06 | 1.0139E+07

F19 | Max | 1.9851E+03 | 2.5657E+03 | 2.0875E+03 | 1.9872E+03 | 1.9555E+03
F19 | Min | 1.9292E+03 | 2.4816E+03 | 1.9027E+03 | 1.9023E+03 | 1.9028E+03
F19 | Mean | 1.9299E+03 | 2.4834E+03 | 1.9044E+03 | 1.9032E+03 | 1.9036E+03
F19 | Var | 1.0820E+00 | 3.8009E+00 | 1.1111E+01 | 3.3514E+00 | 1.4209E+00

F20 | Max | 8.3021E+06 | 2.0160E+07 | 1.0350E+07 | 6.1162E+07 | 1.2708E+08
F20 | Min | 5.6288E+03 | 1.7838E+06 | 2.1570E+03 | 2.0408E+03 | 2.0241E+03
F20 | Mean | 1.2260E+04 | 1.8138E+06 | 5.6957E+03 | 1.1337E+04 | 4.5834E+04
F20 | Var | 1.0918E+05 | 5.9134E+05 | 1.3819E+05 | 6.4167E+05 | 2.1988E+06

F21 | Max | 2.4495E+06 | 1.7278E+09 | 2.0322E+06 | 1.3473E+07 | 2.6897E+07
F21 | Min | 4.9016E+03 | 1.4049E+09 | 3.3699E+03 | 2.1842E+03 | 2.2314E+03
F21 | Mean | 6.6613E+03 | 1.4153E+09 | 9.9472E+03 | 1.3972E+04 | 7.4587E+03
F21 | Var | 4.1702E+04 | 2.2557E+07 | 3.3942E+04 | 2.5098E+05 | 2.8735E+05

F22 | Max | 2.8304E+03 | 4.9894E+03 | 3.1817E+03 | 3.1865E+03 | 3.2211E+03
F22 | Min | 2.5070E+03 | 4.1011E+03 | 2.3492E+03 | 2.2442E+03 | 2.2314E+03
F22 | Mean | 2.5081E+03 | 4.1175E+03 | 2.3694E+03 | 2.2962E+03 | 2.2687E+03
F22 | Var | 5.4064E+00 | 3.8524E+01 | 4.3029E+01 | 4.8006E+01 | 5.1234E+01

F23 | Max | 2.8890E+03 | 2.6834E+03 | 2.8758E+03 | 3.0065E+03 | 2.9923E+03
F23 | Min | 2.5000E+03 | 2.6031E+03 | 2.4870E+03 | 2.5000E+03 | 2.5000E+03
F23 | Mean | 2.5004E+03 | 2.6088E+03 | 2.5326E+03 | 2.5010E+03 | 2.5015E+03
F23 | Var | 9.9486E+00 | 8.6432E+00 | 5.7045E+01 | 1.4213E+01 | 1.7156E+01

F24 | Max | 2.6293E+03 | 2.6074E+03 | 2.6565E+03 | 2.6491E+03 | 2.6369E+03
F24 | Min | 2.5816E+03 | 2.6049E+03 | 2.5557E+03 | 2.5246E+03 | 2.5251E+03
F24 | Mean | 2.5829E+03 | 2.6052E+03 | 2.5671E+03 | 2.5337E+03 | 2.5338E+03
F24 | Var | 1.3640E+00 | 3.3490E-01 | 8.9434E+00 | 1.3050E+01 | 1.1715E+01

F25 | Max | 2.7133E+03 | 2.7014E+03 | 2.7445E+03 | 2.7122E+03 | 2.7327E+03
F25 | Min | 2.6996E+03 | 2.7003E+03 | 2.6784E+03 | 2.6789E+03 | 2.6635E+03
F25 | Mean | 2.6996E+03 | 2.7004E+03 | 2.6831E+03 | 2.6903E+03 | 2.6894E+03
F25 | Var | 2.3283E-01 | 9.9333E-02 | 4.9609E+00 | 7.1175E+00 | 1.2571E+01

F26 | Max | 2.7056E+03 | 2.8003E+03 | 2.7058E+03 | 2.7447E+03 | 2.7116E+03
F26 | Min | 2.7008E+03 | 2.8000E+03 | 2.7003E+03 | 2.7002E+03 | 2.7002E+03
F26 | Mean | 2.7010E+03 | 2.8000E+03 | 2.7005E+03 | 2.7003E+03 | 2.7003E+03
F26 | Var | 3.1316E-01 | 1.6500E-02 | 3.9700E-01 | 1.1168E+00 | 3.5003E-01

F27 | Max | 3.4052E+03 | 5.7614E+03 | 3.3229E+03 | 3.4188E+03 | 3.4144E+03
F27 | Min | 2.9000E+03 | 3.9113E+03 | 2.9698E+03 | 2.8347E+03 | 2.7054E+03
F27 | Mean | 2.9038E+03 | 4.0351E+03 | 2.9816E+03 | 2.8668E+03 | 2.7219E+03
F27 | Var | 3.2449E+01 | 2.0673E+02 | 3.4400E+01 | 6.2696E+01 | 6.1325E+01

F28 | Max | 4.4333E+03 | 5.4138E+03 | 4.5480E+03 | 4.3490E+03 | 4.8154E+03
F28 | Min | 3.0000E+03 | 4.0092E+03 | 3.4908E+03 | 3.0000E+03 | 3.0000E+03
F28 | Mean | 3.0079E+03 | 4.0606E+03 | 3.6004E+03 | 3.0063E+03 | 3.0065E+03
F28 | Var | 6.8101E+01 | 9.2507E+01 | 8.5705E+01 | 4.2080E+01 | 5.3483E+01

F29 | Max | 2.5038E+07 | 1.6181E+08 | 5.9096E+07 | 7.0928E+07 | 6.4392E+07
F29 | Min | 3.2066E+03 | 1.0014E+08 | 3.1755E+03 | 3.1005E+03 | 3.3287E+03
F29 | Mean | 5.7057E+04 | 1.0476E+08 | 8.5591E+04 | 6.8388E+04 | 2.8663E+04
F29 | Var | 4.9839E+05 | 6.7995E+06 | 1.7272E+06 | 1.5808E+06 | 8.6740E+05

F30 | Max | 5.2623E+05 | 2.8922E+07 | 1.1938E+06 | 1.2245E+06 | 1.1393E+06
F30 | Min | 5.9886E+03 | 1.9017E+07 | 3.7874E+03 | 3.6290E+03 | 3.6416E+03
F30 | Mean | 7.4148E+03 | 2.0002E+07 | 5.5468E+03 | 4.3605E+03 | 4.1746E+03
F30 | Var | 1.1434E+04 | 1.1968E+06 | 3.2255E+04 | 2.2987E+04 | 1.8202E+04

Based on the mean value (Mean) on the CEC2014 test functions, DMSDL-QBSA has the best performance on F2, F3, F4, F6, F7, F8, F9, F10, F11, F15, F16, F17, F21, F26, F27, F29, and F30, while DMSDL-BSA shows an advantage on F1, F12, F13, F14, F18, F19, F20, F24, F25, and F28. According to the results, DMSDL-QBSA can find the minimal value on 17 CEC2014 test functions, DMSDL-BSA attains the minimum value on F1, F12, F13, F14, F18, F19, F24, and F30, and DMSDL-PSO obtains the minimum value on F4, F7, and F23. Owing to its enhanced exploitation capability, DMSDL-QBSA is better than DMSDL-BSA and DMSDL-PSO on most functions. From these tests, it can be seen that DMSDL-QBSA performs better than BSA, DE, DMSDL-PSO, and DMSDL-BSA, and it can be concluded that DMSDL-QBSA has better global search ability and better robustness on these test suites.

3. Optimize RF Classification Model Based on Improved BSA Algorithm

3.1. RF Classification Model

RF, as proposed by Breiman et al., is an ensemble learning model based on bagging and random subspace methods. The whole modeling process comprises building decision trees and making decisions. An RF is mainly composed of decision trees, each of which consists of nonleaf nodes and leaf nodes; a leaf node is a child node of a node branch. Suppose the dataset has M attributes. When a leaf node of a decision tree needs to be split, attributes are randomly selected from the M attributes as the reselected splitting variables of this node. This process can be defined as follows, where is the splitting variable of the -th leaf node of the decision tree, and is the probability that reselected attributes are selected as the splitting attribute of the node.
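The random subspace step described above can be sketched as follows; this is a hedged illustration, and the function name, the seed, and the particular choice of 6 candidate attributes out of 32 are ours, not the paper's.

```python
import random

def select_split_attributes(M, m, rng=None):
    """Random subspace step: when a leaf node is to be split, draw m of the
    M attributes uniformly at random without replacement; only these
    candidates compete to become the node's splitting variable."""
    rng = rng or random.Random()
    return rng.sample(range(M), m)

# e.g. a dataset with M = 32 attributes and m = 6 candidates per node
candidates = select_split_attributes(32, 6, random.Random(0))
```

Because each node draws a fresh random subset, different trees (and different nodes of the same tree) split on different attributes, which is what gives the forest its diversity.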

The nonleaf node is a parent node that classifies training data into a left or right child node. The function of the -th decision tree is as follows, where , and the symbol 0 indicates that the -th row of data is classified with a negative label while the symbol 1 indicates that the -th row of data is classified with a positive label. Here, is the training function of the -th decision tree based on the splitting variable , and is the -th row of data in the dataset obtained by random sampling with replacement. The symbol is a positive constant used as the threshold value of the training decision.

When the decision processes are trained, each row of data is input into a leaf node of each decision tree, and the average of the decision tree classification results is used as the final classification result. This process can be written mathematically as follows, where is the number of decision trees that judged the -th row of data as .
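The decision step above can be sketched as averaging the trees' 0/1 outputs for one row and thresholding the result. This is a simplified illustration: the paper only states that the average of the tree results is taken, so the explicit 0.5 threshold and the function name are our assumptions.

```python
def forest_predict(tree_outputs, threshold=0.5):
    """Average the 0/1 classifications of all decision trees for one data row;
    the row receives the positive label when the fraction of positive votes
    reaches the (assumed) threshold."""
    positive_fraction = sum(tree_outputs) / len(tree_outputs)
    return 1 if positive_fraction >= threshold else 0

label = forest_predict([1, 0, 1, 1, 0])  # 3 of 5 trees vote positive
```

With a 0.5 threshold this averaging is equivalent to a majority vote over the trees.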

From the above principle, we can see that two parameters, the number of decision trees and the number of predictor variables, mainly need to be determined in the RF modeling process. To verify the influence of these two parameters on the classification accuracy of the RF classification model, the Ionosphere dataset is used to test their influence on the performance of the RF model, as shown in Figure 3, where the horizontal axes represent the number of decision trees and the number of predictor variables, respectively, and the vertical axis represents the accuracy of the RF classification model.

(1) Parameter analysis of the number of decision trees. When the number of predictor variables is set to 6, the number of decision trees is cyclically set from 0 to 1000 at intervals of 20, and the resulting curve of RF classification model accuracy is shown in Figure 3(a). From the curve, we can see that the accuracy of RF gradually improves as the number N of decision trees increases. However, once the number of decision trees exceeds a certain value, RF performance flattens out without obvious improvement, while the running time becomes longer.

(2) Parameter analysis of the number of predictor variables. When the number of decision trees is set to 500, the number of predictor variables is cyclically set from 1 to 32; the limit is set to 32 because the number of attributes of the Ionosphere dataset is 32. The obtained curve of RF classification model accuracy is shown in Figure 3(b). We can see that as more splitting attributes are selected, the classification performance of RF gradually improves, but when the number of predictor variables is greater than 9, RF overfits and its accuracy begins to decrease. The main reason is that selecting too many split attributes causes a large number of decision trees to share the same splitting attributes, which reduces the diversity of the decision trees.

In summary, for the RF classification model to obtain the ideal optimal solution, the selection of the number of decision trees and the number of predictor variables is very important, and the classification accuracy of the RF classification model can only be maximized by optimizing these two parameters jointly. It is therefore necessary to use the proposed algorithm to find a suitable set of RF parameters. Next, we optimize the RF classification model with the improved BSA proposed in Section 2.
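To make the joint dependence on the two parameters concrete, the sketch below evaluates a toy accuracy surface shaped like Figure 3 (saturating in the number of trees, peaking near 9 predictor variables) and picks the best pair by exhaustive grid search. The surface and all names are our illustrative assumptions; in the paper, DMSDL-QBSA replaces this brute-force search.

```python
def toy_accuracy(n_trees, m_attrs):
    """Stand-in for the measured accuracy surface of Figure 3: accuracy rises
    with n_trees but saturates, and peaks near m_attrs = 9 before overfitting."""
    tree_term = n_trees / (n_trees + 100.0)          # diminishing returns in N
    attr_term = 1.0 - ((m_attrs - 9) / 32.0) ** 2    # peak at 9 predictor variables
    return 0.5 + 0.5 * tree_term * attr_term

def grid_search(n_grid, m_grid):
    """Exhaustive counterpart of what DMSDL-QBSA searches adaptively:
    return the (N, m) pair with the highest accuracy."""
    return max(((n, m) for n in n_grid for m in m_grid),
               key=lambda nm: toy_accuracy(*nm))

best_n, best_m = grid_search(range(20, 1001, 20), range(1, 33))
```

Even on this smooth toy surface the grid needs 1600 evaluations; on the real model, where each evaluation means training a forest, an adaptive search such as DMSDL-QBSA is far cheaper.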

3.2. RF Model Based on an Improved Bird Swarm Algorithm

The improved bird swarm algorithm optimized RF classification model (DMSDL-QBSA-RF) uses the improved bird swarm algorithm to optimize the RF classification model: the training dataset is introduced into the training process of the RF classification model, finally yielding the DMSDL-QBSA-RF classification model. The main idea is to construct a two-dimensional fitness function containing RF's two parameters as the optimization target of DMSDL-QBSA, so as to obtain a set of parameters that gives the RF classification model the best classification accuracy. The specific steps are shown in Algorithm 2.
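The fitness construction can be sketched as mapping each bird's two-dimensional position to the pair of RF parameters and scoring it with the out-of-bag (OOB) error that Algorithm 2 uses. The [0, 1] position encoding, the parameter ranges, and the toy error surface below are our assumptions for illustration, not the paper's exact choices.

```python
def decode_position(pos, n_range=(10, 1000), m_range=(1, 32)):
    """Map a bird's 2-D position in [0, 1]^2 to integer RF parameters:
    the number of decision trees and the number of predictor variables.
    The ranges are illustrative bounds, not the paper's."""
    n = int(round(n_range[0] + pos[0] * (n_range[1] - n_range[0])))
    m = int(round(m_range[0] + pos[1] * (m_range[1] - m_range[0])))
    return n, m

def fitness(pos, oob_error_of):
    """The quantity DMSDL-QBSA minimizes: the OOB error of an RF trained with
    the decoded parameters. `oob_error_of` is a caller-supplied trainer."""
    n, m = decode_position(pos)
    return oob_error_of(n, m)

# toy error surface standing in for a real OOB estimate
err = fitness([0.5, 0.25], lambda n, m: abs(n - 500) / 1000 + abs(m - 9) / 32)
```

In the full model, `oob_error_of` would train an RF on the bootstrap samples and evaluate it on the held-out (out-of-bag) rows, so no separate validation set is needed inside the optimization loop.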

Variable setting: number of iterations: , bird positions: , local optimum: , global optimal position: , global optimum: , and out of bag error: OOB error;
Input: training dataset: , , test dataset: , the parameters of DMSDL-QBSA;
Output: the label of test dataset: , the final classification model: DMSDL-QBSA-RF;
(1)Begin
(2)/∗Build classification model based on DMSDL-QBSA-RF∗/
(3)Initialize the positions of N birds using equations (16) and (17): ;
(4) Calculate fitness: ; set to be and find ;
(5)While do
(6)  For
(7)   Give each tree a training set of size by random sampling with replacement based on Bootstrap;
(8)  Select attributes randomly at each leaf node, compare the attributes, and select the best one;
(9)   Recursively generate each decision tree without pruning operations;
(10)  End For
(11)  Update classification accuracy of RF: Evaluate ;
(12)  Update and ;
(13)