Abstract

This paper proposes a novel hybrid algorithm named Adaptive Cat Swarm Optimization (ACSO). It combines the benefits of two swarm intelligence algorithms, CSO and APSO, and delivers better search results. Firstly, several strategies are introduced to improve the performance of the proposed hybrid algorithm: the tracing radius of the cat group is limited, the random number parameter r is adaptively adjusted, and a scaling-factor update method, called the memory factor y, is added. Together, these strategies help the algorithm escape local optima and speed up global convergence. Secondly, the proposed algorithm is compared with PSO, APSO, and CSO in simulation experiments on 23 benchmark functions, which consist of unimodal, multimodal, and fixed-dimension multimodal functions. The results show the effectiveness and efficiency of the innovative hybrid algorithm. Lastly, the proposed ACSO is applied to the Vehicle Routing Problem (VRP). Experimental findings also reveal the practicability of ACSO through a comparison with certain existing methods.

1. Introduction

In recent decades, Evolutionary Computation (EC) has become a very popular research topic, and great progress has been made in both theory and practice. Swarm Intelligence is a branch of Evolutionary Computation and one of the research hotspots in the field. Many metaheuristic algorithms have been proposed, including but not limited to the Differential Evolution (DE) algorithm [1–3], Genetic Algorithm (GA) [4, 5], Particle Swarm Optimization (PSO) [6–8], Cat Swarm Optimization (CSO) [9–11], Ant Colony Optimization (ACO) [12–15], and Bat Algorithm (BA) [16, 17]. Such intelligent computing techniques show great promise in many practical application scenarios, for example, wind power [18, 19], the vehicle routing problem [20–23], and wireless sensors [24–31]. All of the above applications have successfully used evolutionary computation to improve their performance, and it is also one of the effective approaches to traffic problems. However, by the “no free lunch” theorem, no single algorithm suits every problem, which has inspired researchers to propose more valuable algorithms.

PSO is considered to have the following advantages: few control parameters, easy implementation, and convenient use. However, it easily stagnates at local optima and converges earlier than anticipated. Therefore, avoiding local optimal solutions and accelerating the rate of convergence are two important issues for intelligent algorithms. Many variants of PSO were later derived. One of them, proposed by Zhan et al. [32], defines four states according to the distances between particles and adaptively adjusts the parameters.

CSO has two submodes and is only suitable for small-scale population optimization: when the population size increases, the convergence rate slows down. To alleviate this disadvantage, a balance between population diversity and convergence speed is crucial. By improving the cat swarm algorithm, the parameter adaptivity of ACSO can increase the diversity and flexibility of the population.

ACSO is an optimization algorithm that aims to improve the convergence and search capabilities of the cat swarm and particle swarm algorithms. ACSO builds on the respective benefits and disadvantages of APSO and CSO: CSO is applicable to smaller populations, while APSO, thanks to its evolutionary state strategy, is good at avoiding entrapment in a local optimum. According to the experimental results, ACSO shows clear advantages over the other evolutionary algorithms compared here.

The paper’s structure is detailed below. Section 2 introduces related research works: PSO, APSO, and CSO. ACSO’s adaptive parameter settings are discussed in Section 3. Then, the performances of the ACSO, PSO, APSO, and CSO algorithms are verified on 23 typical benchmark functions in Section 4, and the algorithm is applied to VRP in Section 5. Finally, Section 6 summarizes the work of this paper and describes suggestions for future work.

2. Related Work

In this section, the basic theory of several traditional algorithms is briefly reviewed, namely, Cat Swarm Optimization (CSO), Particle Swarm Optimization (PSO), and a variant of PSO named Adaptive Particle Swarm Optimization (APSO). APSO greatly improves convergence speed and search capability. Reviewing the APSO and CSO algorithms raised the question of whether the two could be combined to upgrade CSO.

2.1. Particle Swarm Optimization

Originally devised in 1995 by Kennedy and Eberhart [6, 33], PSO is inspired by the social behaviour of fish schools and bird flocks. It has attracted wide attention and has good development prospects. In the PSO algorithm, each solution in the optimization space is considered a “particle” with neither volume nor mass. All particles act on their own cognitive learning, solo flight experience, and companions’ flight experience. Therefore, when particles seek the best position in the search space, their flights are adjusted through information exchange.

The population is randomly initialized at first, and then each particle follows the current individual local optimum and the group global optimum toward the optimal solution. While the program is running, each particle represents a point in the decision space and carries two basic pieces of information: position and velocity. Particles update their velocity and displacement through the following state transition equations:

The primary PSO is simple, with only a few adjustable parameters: vid indicates the velocity value of the ith particle in the dth dimension before the update; xid denotes the position of the ith particle before the update; pid represents the best position found so far by the ith particle (its local optimum); pgd is the global optimal position found by the swarm; ω is the inertia weight of the particle; c1 and c2, also called the acceleration coefficients, extend the velocity of the particle moving in the solution space and are usually set to 2.05; r1 and r2 are two random values uniformly generated in the range [0, 1].
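As a concrete sketch, the standard PSO state transition above can be written as follows (a minimal NumPy implementation; the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def pso_step(v, x, pbest, gbest, w=0.9, c1=2.05, c2=2.05, rng=None):
    """One PSO velocity/position update for a whole swarm.

    v, x   : (n, d) arrays of current velocities and positions
    pbest  : (n, d) each particle's best position found so far
    gbest  : (d,)   best position found by the entire swarm
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 uniform in [0, 1], fresh each step
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return v, x + v           # new velocity, new position
```

Note that when a particle sits exactly at both its personal best and the global best, only the inertia term w * v remains, so the particle coasts rather than being pulled anywhere.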

2.2. Adaptive Particle Swarm Optimization

Traditional PSO still has some shortcomings in global search and convergence. The algorithm has attracted the strong interest of many scholars committed to improving its performance. APSO, suggested by Zhan et al. [32], further accelerates the convergence speed. The population distribution of the algorithm is classified into four evolutionary states, namely, exploration, exploitation, convergence, and jumping-out. During operation, the inertia weight, acceleration coefficients, and other parameters are automatically controlled, so the search efficiency and the convergence speed are effectively improved. As the evolution progresses, particles may cluster together and converge to a local optimum. At that point, an elite learning strategy guides the globally best particle to escape from the local optimum. The algorithm goes beyond PSO by detecting the distribution information of the population and using it to evaluate the evolutionary state; the steps are shown below:
Step 1: the distribution information is described by the average Euclidean distance between each particle i and the other particles:
where n and d, respectively, represent the population size and the dimension.
Step 2: compare the distance of the globally best particle to the other particles with the maximum dmax and minimum dmin distances. The “evolutionary factor” f is defined as follows:
In the exploration phase, the f value is large; in the exploitation phase, the f value decreases rapidly; after the environment changes, the algorithm reaches the convergence phase; as the number of iterations continues to increase, the particles jump out, causing the f value to become larger again. The cycle then repeats.
Step 3: the four states S1, S2, S3, and S4 are divided according to f. In order, they represent exploration, exploitation, convergence, and jumping-out, respectively.
Generally, a larger inertia weight ω is set during exploration, and a smaller value is set during exploitation. Since the evolutionary factor f shares some characteristics with the inertia weight ω, ω can be made to follow changes in f. The four strategies are summarized in Table 1.

At the same time, when the denominator c1 + c2 is greater than 3.0, the values of c1 and c2 are normalized:
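A minimal sketch of this evolutionary-state machinery, assuming the standard APSO formulation of Zhan et al.: the mean pairwise distance per particle, the evolutionary factor f = (d_g − d_min)/(d_max − d_min), and the normalization of the acceleration coefficients when their sum exceeds the bound (the bound value of 3.0 is taken from the text above and may differ in the original equations):

```python
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """Mean Euclidean distance of each particle to the others, then
    f = (d_g - d_min) / (d_max - d_min), following APSO's Step 1-2."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))   # pairwise distance matrix
    d = dist.sum(1) / (n - 1)             # average distance per particle
    dg, dmin, dmax = d[gbest_index], d.min(), d.max()
    return (dg - dmin) / (dmax - dmin)

def normalize_acceleration(c1, c2, bound=3.0):
    """If c1 + c2 exceeds the permitted bound, rescale both so that
    their sum equals the bound (bound value is an assumption)."""
    s = c1 + c2
    if s > bound:
        c1, c2 = c1 * bound / s, c2 * bound / s
    return c1, c2
```

When the globally best particle is closest to the rest of the swarm, f is near 0 (convergence); when it is farthest, f is near 1 (exploration).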

2.3. Cat Swarm Optimization

CSO is a heuristic global optimization method first presented by the Taiwanese scholar Chu et al. in 2006 [9]. CSO was proposed by imitating the behaviour of cats. It has been observed that cats spend most of their time observing their surroundings rather than hunting. Before a hunt, they alternate between moving slowly and resting at a location; this is called the seeking mode. The other submode is called the tracing mode. The CSO algorithm relies on the cooperation of these two states to obtain the optimal solution. As in PSO, every cat has its own velocity and position. The mixture ratio MR defines how many cats in the whole group enter the seeking mode and how many enter the tracing mode, and a flag identifies which mode a cat is in. The optimal solution is described by a fitness value FS, representing the cat’s fit to the fitness function, together with the best position the cat has reached. The algorithm’s specific stages are as follows:
(i) Initialize the population; each cat has D-dimensional coordinate values
(ii) Randomly initialize the velocity of each dimension of each cat
(iii) According to the mixture ratio MR, randomly divide the population into seeking and tracing modes
(iv) On the basis of each cat’s flag bit, perform the corresponding position update
(v) Evaluate and record the fitness function value of each cat and keep the cat with the best fitness
(vi) Terminate the algorithm if the conditions are met; otherwise, return to step (iii)
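The stages above can be sketched as a minimal main loop (illustrative Python, assuming minimization; `seek` and `trace` stand in for the two sub-mode updates described next, and all names are ours, not the paper's):

```python
import random

def cso_loop(fobj, cats, mr=0.2, max_iter=50, seek=None, trace=None):
    """Skeleton of the CSO main loop.

    fobj  : fitness function (lower is better)
    cats  : list of candidate solutions (each a list of coordinates)
    mr    : mixture ratio -- fraction of cats placed in tracing mode
    seek, trace : sub-mode position updates (identity if not given)
    """
    seek = seek or (lambda cat, best: cat)
    trace = trace or (lambda cat, best: cat)
    best = min(cats, key=fobj)
    for _ in range(max_iter):
        n_trace = int(mr * len(cats))
        flags = [1] * n_trace + [0] * (len(cats) - n_trace)
        random.shuffle(flags)                   # step (iii): random mode split
        cats = [trace(c, best) if f else seek(c, best)
                for c, f in zip(cats, flags)]   # step (iv): per-mode update
        best = min(cats + [best], key=fobj)     # step (v): keep the best cat
    return best
```

With identity sub-mode updates the loop simply returns the best initial cat; real behaviour comes from plugging in the seeking and tracing updates of Sections 2.3.1 and 2.3.2.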

2.3.1. Seeking Mode

The mode in which cats look around to find targets is called the seeking mode. There are four key parameters:
Seeking Memory Pool (SMP) defines the search memory size of each cat, representing the number of candidate positions the cat will consider.
Seeking Range of the selected Dimension (SRD) represents the change rate of the selected dimensions; the change range of each dimension is bounded by SRD.
Counts of Dimension to Change (CDC) refers to the number of dimensions a single cat will mutate; its value is a random value between 0 and the maximum dimension.
Self-Position Consideration (SPC) is a Boolean variable indicating whether the cat’s current position will be kept as one of the candidate positions to move to.

The process is described below:
(i) Make j = SMP copies of the cat’s current position. If the value of SPC is true, let j = SMP − 1 and retain the current position as one of the candidate solutions.
(ii) For each copy in the memory pool, according to the size of CDC, randomly add or subtract SRD percent of the current value, and replace the old value with the updated one.
(iii) Calculate the fitness function value FS of all candidate solutions in the memory pool.
(iv) If the fitness values FS are not all equal, calculate the selection probability of each candidate solution by equation (5).
(v) Select the candidate point with the highest fitness value from the memory pool to replace the current cat position, thereby completing the position update:

If the objective is to find the minimum of the problem, let FSb = FSmax; otherwise, let FSb = FSmin.
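Assuming equation (5) is the standard CSO selection probability, Pi = |FSi − FSb| / (FSmax − FSmin), step (iv) can be sketched as follows (a hedged reconstruction, since the equation itself is not reproduced here):

```python
def selection_probabilities(fs, minimize=True):
    """Candidate selection weights for CSO's seeking mode:
    P_i = |FS_i - FS_b| / (FS_max - FS_min),
    with FS_b = FS_max when minimizing and FS_b = FS_min otherwise."""
    fs_max, fs_min = max(fs), min(fs)
    if fs_max == fs_min:          # all fitness values equal: uniform choice
        return [1.0] * len(fs)
    fs_b = fs_max if minimize else fs_min
    return [abs(f - fs_b) / (fs_max - fs_min) for f in fs]
```

Under minimization, the candidate with the lowest fitness gets the highest weight (1.0) and the worst candidate gets 0.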

2.3.2. Tracing Mode

The mode in which a cat traces a target after spotting it is called the tracing mode. This action can be briefly described in three steps.

The process is as follows:
(i) Update the velocity of each dimension according to (6), using the best position passed by the entire cat group, that is, the optimal solution found so far:
(ii) Check whether the velocities are within the maximum velocity range.
(iii) Update the cat’s position according to (7):
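The three steps above can be sketched for one cat as follows, assuming equations (6) and (7) take the usual CSO form v ← v + r·c·(xbest − x) with velocity clipping (the parameter names and clip bound are illustrative):

```python
import random

def tracing_update(v, x, xbest, c=2.05, vmax=10.0):
    """Tracing-mode update for one cat:
    (i)  v_d <- v_d + r * c * (xbest_d - x_d), r uniform in [0, 1]
    (ii) clip each v_d to [-vmax, vmax]
    (iii) x_d <- x_d + v_d."""
    r = random.random()
    v = [max(-vmax, min(vmax, vd + r * c * (bd - xd)))
         for vd, xd, bd in zip(v, x, xbest)]
    x = [xd + vd for xd, vd in zip(x, v)]
    return v, x
```

A cat already at the group's best position with zero velocity stays put, which is the expected fixed point of the update.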

3. Adaptive Cat Swarm Optimization

This paper proposes a new cat swarm algorithm with an adaptive strategy based on the traditional APSO and CSO algorithms. The improvement not only advances the efficiency and convergence of the algorithm but also maintains the uniformity of the population distribution. The specific content and innovations can be summarized as follows.

3.1. Increase Adaptive Parameters

Based on parameter self-adaptation, the generation of the random numbers is adjusted and an adaptive strategy is added. In the early stage of iteration, the cat swarm obtains a strong global optimization ability:

Make the particle swarm converge to the optimal position later in the iteration:

When the initial values of c1 and c2 are set relatively small, a correction term is added to avoid generating negative values:

The changes in equations (8)–(10) benefit the global search ability in the early stage of particle iteration and local refinement in later iterations, improving the accuracy of the solution.
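Since equations (8)–(10) are not reproduced here, the sketch below only illustrates the stated intent with one common scheme: the random parameter r is biased toward larger values early (global search) and smaller values late (local refinement), and is kept non-negative. The exact formula is our assumption, not the paper's:

```python
import random

def adaptive_r(t, t_max):
    """Illustrative adaptive random parameter: the upper bound of r
    decays linearly from 1.0 to 0.1 over the run, so early iterations
    favour exploration and later ones favour refinement.
    (The paper's actual equations (8)-(10) may differ.)"""
    upper = 1.0 - 0.9 * t / t_max   # shrinks as iterations proceed
    return max(0.0, random.uniform(0.0, upper))
```
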

3.2. A Radius Range Is Added to the Search Position

When the distance between the cat’s position and the global best position gbest is less than the radius, the individual moves toward gbest. However, when the distance between the cat’s position and the historical best position pbest is less than the radius, the individual deviates from pbest. The values of the elements are defined by the following equations:
where f and e are the weights of attraction toward the global optimal solution and of deviation from the local optimal solution, respectively, and Fi and Ei indicate the food source and the enemy of the ith individual.
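Mirroring the corresponding branch of Algorithm 2 below, the two radius-gated terms can be computed as follows (an illustrative sketch; the function name and the use of plain lists are ours):

```python
def attraction_terms(pos, gbest, pbest, radius):
    """Attraction term F (toward gbest) and deviation term E (from pbest)
    for the tracing-mode update; each term is active only when the
    corresponding distance is within the given radius, else zero."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    n = len(pos)
    F = ([g - p for g, p in zip(gbest, pos)]
         if dist(pos, gbest) <= radius else [0.0] * n)
    E = ([q + p for q, p in zip(pbest, pos)]
         if dist(pos, pbest) <= radius else [0.0] * n)
    return F, E
```

The F = gbest − pos and E = pbest + pos expressions follow the pseudocode of Algorithm 2 exactly; the weights f and e of equations (11) and (12) are then applied outside this function.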

3.3. Increase Memory Factor y

Particles learn from each other to obtain the most useful information in their respective neighbourhoods. A memory factor y is added for each searcher: a lower memory weight is given to the previous position and a higher memory weight to the current position when updating the historical best position of each searcher:
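Since the update equation itself is not reproduced here, the sketch below only illustrates the stated weighting idea: the historical best position is blended from the current position (higher weight y) and the previous one (lower weight 1 − y). The exact form is our assumption:

```python
def memory_update(hist_best, current, y=0.7):
    """Illustrative memory-factor blend: current position weighted by y,
    previous historical best by 1 - y (the paper's exact update may differ)."""
    return [y * c + (1.0 - y) * h for h, c in zip(hist_best, current)]
```
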

To better explain the process of ACSO, the complete flow chart is shown in Figure 1. Firstly, each cat is randomly initialized. Then, the fitness value is calculated. Finally, the parameters are adaptively adjusted in the tracing mode.

The pseudocode of ACSO seeking mode function is exhibited in Algorithm 1.

Create N cats
Initialize related parameters
Calculate the fitness of each cat
Divide cats into two modes based on flag
SMP = the search memory pool size
if flag = = 0 then
for i = 1 to SMP do
  fitness = fobj(catCopy(i).Posi)
end for
else
for i = 2 to SMP do
  fitness = fobj(catCopy(i).Posi)
end for
end if

The pseudocode of ACSO tracing mode function is exhibited in Algorithm 2.

while t < = Maxiteration do
 Adaptive adjustment parameter r by equations (8)∼(10)
 Compare the distance between Cat.Posi and cat’s gbest.Posi and pbest.Posi
 Dist2gbest = distances (Cat.Posi, gbest.Posi)
if Dist2gbest < = radius then
  F = gbest.Posi − Cat.Posi
else
  F = 0
end if
 Dist2pbest = distances (Cat.Posi, pbest.Posi)
if Dist2pbest < = radius then
  E = pbest.Posi + Cat.Posi
else
  E = 0
end if
 Update the position and velocity by equations (11) and (12)
end while
return gbest

4. Experiment and Result Analysis

This section mainly verifies the performance of the proposed algorithm: 23 mathematical optimization functions were used to compare ACSO with PSO, APSO, and CSO. These typical test functions, listed in Tables 2–4, are used by many scholars [34]. Three of their attributes need to be declared, Space, Dim, and Fmin, which denote the boundary of the function’s search space, the dimension of the function, and the optimal solution, respectively.

4.1. Experimental Results

To verify the results, CSO, APSO, and PSO are compared with the proposed ACSO algorithm. 23 benchmark functions are used to evaluate the performance of ACSO on real-parameter optimization. A benchmark function is also commonly described as a mathematical test function; the 2D versions of the test functions used are illustrated in Figures 2–4. The relevant test parameters are listed in Table 5. To ensure fair competition, each optimization algorithm was run 10 times on each test function to obtain the average and standard deviation. The population size of each algorithm is set to 100, and the maximum number of iterations is 500. Table 6 shows the comparison of the four algorithms by average (Ave) and standard deviation (Std).

4.2. Experiment Analysis

From Figures 5–7, the solution quality and speed of PSO, APSO, CSO, and ACSO on the 23 benchmark functions can be observed. The horizontal axis in the figures stands for the number of iterations during program execution, and the vertical axis gives the corresponding fitness values. They show the simulation results on the unimodal, multimodal, and fixed-dimension multimodal mathematical test functions.

A unimodal function is a continuous function with only one extreme point in the domain; in other words, it has only one global optimum and no local optima, so it benchmarks the algorithms’ convergence speed. The performance of ACSO is superior to that of PSO, APSO, and CSO. As Figure 5 shows, because these functions contain only one global optimal solution, the algorithms converge quickly on them.

A multimodal function contains multiple locally or globally optimal solutions; its purpose is to detect whether the tested algorithm avoids local optima. Relatively speaking, the improved algorithm is better than the other algorithms, as shown in Figure 6. Premature stagnation of the optimal solution appears on F9 and F11, and ACSO is better than the other algorithms on the benchmark functions F10 and F13. The PSO algorithm, however, is known to suffer from premature convergence on multimodal optimization problems, owing to the lack of enough momentum for particles to explore or exploit when the algorithm is nearing its end, so it has the worst solution curve.

For the fixed-dimension multimodal functions, Figure 7 shows the simulation result curves of all four algorithms, where the curves of APSO, CSO, and ACSO almost overlap on F17. From Table 6 and Figures 5–7, ACSO not only avoids getting trapped in local optima but also converges quickly and eventually finds a global optimal solution. Therefore, we conclude that the proposed algorithm is noticeably superior to the comparison algorithms.

5. Adaptive Cat Swarm Algorithm Application in Vehicle Routing Problem

In this section, the proposed algorithm is applied to the Vehicle Routing Problem (VRP). VRP, with cargo distribution as its core link, has always been one of the fundamental problems in logistics. It was originally proposed by Dantzig and Ramser in 1959 [35]. Traditionally, VRP assumes that the location of the distribution center and the position and demand of every customer are known, and seeks the best routes under precise constraints for visiting each customer at the lowest transportation cost. The VRP has been studied for many years in mathematics and computer science. It has the characteristics of nonlinearity, nonconvexity, complexity, and constraints that are difficult to coordinate with each other. In practice, its solution procedure is quite complicated; therefore, there are certain advantages to solving the problem with an intelligent heuristic algorithm. Many scholars have devoted themselves to this problem and its derivatives. The best known results for VRP have been obtained using Tabu Search (TS) [36, 37] or Simulated Annealing (SA) [38, 39]. In the present paper, VRP is chosen as the application objective of the ACSO algorithm to further validate the algorithm’s practicability.

5.1. Description of Constraints

(i) The total cargo carried by each vehicle must meet its maximum load limit
(ii) Each vehicle may serve multiple customers, but each customer can only be served by one vehicle
(iii) Vehicle k should go to the next customer i or return to the distribution center immediately after serving customer j

5.2. Definition of Parameters

The vehicle routing problem is defined on a directed network with a vertex set V = {0, 1, …, n}, where vertex 0 represents the distribution center, C = {1, …, n} indicates the set of customers in the directed graph, E is the edge set, and cij represents the attribute value of each edge.
(i) C is the collection of all customers
(ii) K is the collection of all delivery vehicles
(iii) c0 is the unit distance cost
(iv) dij is the distance between the two points i and j
(v) cij is the transportation cost from point i to point j, with cij = c0dij
(vi) ri is the customer demand for goods
(vii) W is the maximum load capacity
(viii) Sk is the set of customer points served by vehicle k

The objectives and restrictions of the VRP are then described as follows:

As can be seen from the above model, equation (15) is the objective function, aiming to minimize the vehicle distribution cost. The level of distribution cost is the basic measure of whether the economic benefit of the distribution process is maximized.

For a process with supply and demand, formulas (16)–(19) denote the following: equation (16) ensures that each customer is visited and that only one vehicle supplies each customer; equation (17) is a cargo flow constraint, requiring that the load of a vehicle cannot exceed its maximum load; equation (18) constrains the continuity of the vehicle allocation process, that is, the vehicle must depart from customer i after serving it; equation (19) restricts each vehicle’s path from containing subloops.
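The objective and the two main constraints can be sketched as a fitness evaluation, which is what a metaheuristic like ACSO would call for each candidate routing (an illustrative sketch; function names, the Euclidean distance, and the depot index 0 follow the model above):

```python
import math

def route_cost(routes, coords, c0=1.0):
    """Objective (15): total cost with c_ij = c0 * d_ij, where every
    route starts and ends at the distribution center (index 0)."""
    def d(i, j):
        (xi, yi), (xj, yj) = coords[i], coords[j]
        return math.hypot(xi - xj, yi - yj)
    total = 0.0
    for r in routes:
        path = [0] + list(r) + [0]   # depot -> customers -> depot
        total += sum(d(a, b) for a, b in zip(path, path[1:]))
    return c0 * total

def feasible(routes, demand, capacity, n_customers):
    """Constraints (16)-(17): each customer is served exactly once,
    and no vehicle's load exceeds the maximum load W."""
    visited = [c for r in routes for c in r]
    served_once = sorted(visited) == list(range(1, n_customers + 1))
    loads_ok = all(sum(demand[c] for c in r) <= capacity for r in routes)
    return served_once and loads_ok
```

In a metaheuristic, infeasible candidates are typically either repaired or penalized by adding a large constant to the cost returned by route_cost.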

5.3. Analysis of Experimental Results

To verify the effectiveness of the ACSO algorithm for solving VRP, experiments were performed using MATLAB R2018b. The simulation environment is a PC with an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz 2.00 GHz running the Windows 10 operating system. The number of iterations is set to 300, and the statistical results of ACSO and the three comparison algorithms, CSO, APSO, and PSO, for solving VRP are recorded in Table 7.

The Case column denotes the various calculation examples, the Best Cost column represents the optimal solution value, and the Time/s column gives the total running time of the algorithm. The comparison above shows intuitively that, compared with the CSO, APSO, and PSO algorithms, the ACSO algorithm achieves better results and has certain advantages. However, the results also show that as the number of nodes in the transport network increases, the running time of ACSO increases, which indicates to a certain extent that the algorithm still has room for improvement.

Table 8 presents the optimal result of ACSO on the case n20-k7. Seven vehicles set off from the distribution center, and each of them found an optimal path under the given constraints, together traversing twenty customers to meet their cargo needs. Here, point zero represents the distribution center, and the numbers from one to twenty denote the customer numbers. The optimal path graph is shown in Figure 8, where the star represents the distribution center and the points indicate the coordinates of the customer locations.

6. Conclusions

In this study, based on the benefits of CSO and APSO, an Adaptive Cat Swarm Optimization (ACSO) algorithm is proposed. In the tracing mode, the parameters of the cat swarm are adaptively adjusted. The effectiveness of ACSO has been tested on 23 benchmark functions. The experimental results indicate that ACSO performs better than other existing heuristics in both exploration and exploitation.

Finally, ACSO is applied to VRP. Numerical assessments of the four algorithms (ACSO, CSO, APSO, and PSO) reveal that the best result comes from the proposed ACSO algorithm, which further confirms its practicability and effectiveness. However, because the evolutionary state must be evaluated from distances in order to adjust the adaptive parameters in the tracing mode, the algorithm requires much more processing time for the related parameter adjustments. Therefore, future work will aim to reduce the running time of the algorithm without affecting the swarm’s ability to find the optimal solution.

Data Availability

All data are included within the tables of this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61872085), the Natural Science Foundation of Fujian Province (2018J01638), and the Fujian Provincial Department of Science and Technology (2018Y3001).