Abstract

The aim of this research work is to obtain the numerical solution of Fisher’s equation using a radial basis function (RBF) pseudospectral method (RBF-PS). Two optimization techniques, particle swarm optimization (PSO) and artificial bee colony (ABC), are employed to find the shape parameter of the RBF, and their numerical results are compared in terms of errors. Two problems of Fisher’s equation are presented to test the accuracy of the method, and the obtained numerical results are compared to verify the effectiveness of this novel approach. The calculation of the error norms leads to the conclusion that PSO performs better than the ABC algorithm at minimizing the error for the shape parameter in a given range.

1. Introduction

To obtain numerical solutions with optimized results in a variety of scientific and engineering disciplines, researchers have developed various methods. Algorithms based on swarm intelligence have great potential in the field of numerical optimization, according to Yagmahan and Yenisey [1], Eberhart and Kennedy [2], Price et al. [3], and Vesterstrom and Thomsen [4]. Swarm intelligence-based algorithms [5] and evolutionary algorithms [6] are two significant categories of population-based algorithms in the field of optimization. Optimization is the process of increasing the advantages of a mathematical model or function while minimizing its disadvantages; it is a collection of techniques that enables us to improve the output of a system. The primary goal of optimization is to find an optimal or nearly optimal solution with the least amount of computational work, and the key to finding such a solution is to optimize the parameters connected to the mathematical model.

Several research fields adopt optimization approaches for the numerical simulation of linear and nonlinear partial differential equations (PDEs) and for optimizing the parameters of problematic models. Among the well-known optimization approaches, ant colony optimization (ACO) [7] is a meta-heuristic algorithm inspired by the foraging behaviour of ants and how they find the shortest path between their nest and a food source. While it is commonly used for combinatorial optimization problems, it can also be adapted for numerical solutions of PDEs. Particle swarm optimization (PSO) [2] is a heuristic algorithm inspired by the social behaviour of birds and fish, where individuals in a group (particles) cooperate and communicate to find optimal solutions to a problem. Bacteria foraging optimization (BFO) [8] is stimulated by the foraging behaviour of Escherichia coli (E. coli) bacteria: it mimics the way bacteria forage for nutrients in their environment to find the optimal solution for a given optimization problem. When adapted for the numerical solution of PDEs, BFO can effectively explore the solution space for parameter settings that yield accurate and efficient numerical solutions. These nature-inspired meta-heuristic optimization algorithms have recently gained popularity for developing effective search algorithms.

Exploration and exploitation are two major determinants in the development of successful optimization algorithms for search mechanisms. A meta-heuristic optimization algorithm effectively explores the solution space, balancing exploration (global search) and exploitation (local refinement) to find near-optimal or optimal solutions for a variety of optimization problems. Exploration involves the search for new, unexplored regions of the solution space; it aims to discover potential solutions that might be superior to the current ones. Exploitation involves focusing on known promising regions of the solution space to improve the quality of solutions; it aims to refine and optimize the current solutions based on the information available. Researchers are motivated to develop such population-based optimization algorithms because nature offers an abundance of models to draw on. These population-based optimization methods assess fitness and provide near-optimal solutions to complex optimization problems.

Swarm intelligence (SI) is a field of study inspired by the collective behaviour of social insect colonies and other animal societies. It explores the principles and models of behaviour that emerge from the interactions of simple individuals within a group. The connection between SI and optimization lies in leveraging the collective behaviour observed in natural swarms to create effective optimization algorithms and strategies. According to Bonabeau et al. [9], SI uses social insect behaviour to create algorithms or distributed problem-solving tools; Bonabeau studied only social insects such as termites, bees, wasps, and ants. Social species first developed swarm intelligence through trial and error, and SI simulates self-organizing swarms of interacting agents: an immune system, an ant colony, or a flock of birds are all swarm systems, and bees swarming around their hives illustrate swarm intelligence. Based on the social intelligence of honey bee swarms, the artificial bee colony (ABC) algorithm was first described by Karaboga [10] in 2005, and in 1995 Kennedy and Eberhart [2] proposed PSO for solving numerical optimization problems. Honey bees’ search for nutritious food shaped the procedure of ABC, while the process of PSO was inspired by the behaviour of social animals. These population-based stochastic search methods are simple and fast, and they can solve complex, continuous, and unbounded optimization problems, both multimodal and unimodal.

SI-based meta-heuristic algorithms are popular for solving various optimization models. For example, [11] employed a novel adaptive artificial bee colony (A-ABC) algorithm that selects the best search equation for the current situation in order to predict the transport energy demand (TED) more precisely; [12] used four different meta-heuristic algorithms for natural gas demand forecasting based on meteorological indicators in Turkey; and [13] proposed a modified artificial bee colony (M-ABC) method that calculates Turkey’s energy usage more precisely by adaptively choosing an optimal search equation. Many more examples in biology, physics, evolution, and human behaviour inspire nature-inspired algorithms, including ant colony optimization, artificial bee colony, the firefly method, particle swarm optimization, brain storm optimization, sine and cosine algorithms, and genetic algorithms. With inspiration from SI, many researchers have applied these meta-heuristic algorithms to the numerical simulation of various PDEs by optimizing over the solution space.

For numerically simulating ordinary and partial differential equations, the RBF has proven to be a useful basis function. In this study, numerical solutions to a nonlinear partial differential equation are found by employing a mesh-free method based on radial basis functions (RBFs) together with the ABC and PSO optimization techniques. Both optimization strategies are used to determine the shape parameter (ε) related to the RBF.
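To make the role of the shape parameter concrete, the following is a minimal sketch of RBF interpolation in Python (the paper itself works in MATLAB). The multiquadric basis is used here purely as an illustration because its shape-parameter dependence is explicit; the function names and the test function sin(2πx) are illustrative choices, not taken from the paper.

```python
import numpy as np

def mq(r, eps):
    # Multiquadric RBF: phi(r) = sqrt(1 + (eps*r)^2); eps is the shape parameter
    return np.sqrt(1.0 + (eps * r) ** 2)

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps):
    # Interpolation matrix A_ij = phi(|x_i - x_j|); solve A c = f for the weights
    A = mq(np.abs(x_nodes[:, None] - x_nodes[None, :]), eps)
    c = np.linalg.solve(A, f_nodes)
    # Evaluate the interpolant s(x) = sum_j c_j * phi(|x - x_j|)
    B = mq(np.abs(x_eval[:, None] - x_nodes[None, :]), eps)
    return B @ c

x = np.linspace(0.0, 1.0, 11)          # interpolation nodes
xe = np.linspace(0.0, 1.0, 101)        # evaluation grid
f = np.sin(2 * np.pi * x)
s = rbf_interpolate(x, f, xe, eps=3.0)
err = np.max(np.abs(s - np.sin(2 * np.pi * xe)))
print(err)  # the error depends strongly on the choice of eps
```

Varying `eps` changes `err` considerably, which is exactly why the paper treats the shape parameter as a quantity to be optimized.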

The reaction-diffusion equation is one of the most intriguing equations in physical processes. We concentrate on the reaction-diffusion form known as Fisher’s equation,

∂u/∂t = ν ∂²u/∂x² + F(u),  x ∈ [a, b], t > 0, (1)

whose boundary and initial conditions are as follows:

u(a, t) = g₁(t), u(b, t) = g₂(t), t > 0;  u(x, 0) = u₀(x), x ∈ [a, b].

Many chemical and biological processes use F(u) = au(1 − u). Fisher [14] introduced this equation to model the rate of kinetic advance of an advantageous gene. Fisher’s equation describes population evolution through opposing physical phenomena, and it dominates genetics, tissue engineering, growth models, and more in science and engineering. Fisher’s equation was first simulated using the pseudospectral method developed by Gazdag and Canosa in 1974 [15]. Since then, many different approaches have been developed to solve it, such as the Petrov–Galerkin finite element method processed by Tang and Weber [16], the Tanh method by Wazwaz [17], and the homotopy analysis method proposed by Tan et al. [18]. Other methods that have been used to solve this problem include the alternating iterative method by Sahimi and Evans [19], the central finite difference algorithm by Hagstrom and Keller [20], the explicit and implicit finite difference algorithms by Parekh and Puri [21], the collocation of cubic B-splines by Mittal and Arora [22], and the pseudospectral approach by Bhatia and Arora [23].

In this paper, the ABC and PSO algorithms with RBF applied to Fisher’s partial differential equation are used to find the best shape parameter of the RBF by minimizing the error, and the RBF-PS method is used for the numerical simulation of Fisher’s equation by converting it into a system of ordinary differential equations (ODEs). MATLAB is used for optimizing the parameter ε and for the numerical approximation of Fisher’s equation. The results obtained by the present hybrid approach, in the form of L2 and L∞ error norms and shape parameter values at different time intervals, compare favourably with the results available in the literature. The errors obtained by PSO are smaller than those obtained by the ABC algorithm; thus, the PSO values of the shape parameter are better than ABC’s results.
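The reduction of Fisher’s equation to an ODE system (the method of lines) can be sketched as follows. This is a hedged illustration, not the paper’s RBF-PS code: the spatial second derivative is approximated with a simple finite-difference stand-in rather than the RBF differentiation matrix, and the parameters, boundary data (u = 0 at both ends), and half-sine initial profile are illustrative choices.

```python
import numpy as np

# Illustrative setup (not the paper's test problems): nu = a = 1,
# homogeneous Dirichlet boundary conditions on [0, 1].
N, nu, a = 21, 1.0, 1.0
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = 0.5 * np.sin(np.pi * x)      # illustrative initial condition
dt, steps = 1e-4, 1000           # integrate to t = 0.1 with explicit Euler

for _ in range(steps):
    uxx = np.zeros(N)
    # Second-derivative stand-in for the RBF-PS differentiation matrix
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    # Semi-discrete Fisher's equation: du/dt = nu*u_xx + a*u*(1 - u)
    u = u + dt * (nu * uxx + a * u * (1.0 - u))
    u[0] = u[-1] = 0.0           # enforce Dirichlet boundary conditions

print(u.max())
```

In the paper’s approach, the finite-difference operator above would be replaced by the RBF differentiation matrix, whose entries depend on the shape parameter ε being optimized.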

The structure of the paper is as follows. Section 2 describes the ABC algorithm in detail, with pseudocodes for all the phases, and also presents the complete process of ABC. Section 3 presents the explanation of the PSO algorithm with pseudocode, and the obtained results of the two problems of Fisher’s equation by the novel hybrid approach are discussed and compared in Section 4. Section 5 concludes the present article with the details of key findings and the future scope.

2. Artificial Bee Colony (ABC) Algorithm

The artificial bee colony (ABC) algorithm was introduced by Karaboga [10] in 2005. The algorithm seeks the nectar-rich flower region (the optimal solution). In the ABC algorithm, the swarm is structured into employed bees, onlooker bees, and scouts. The swarm member that finds the best food source is more likely to be followed by the others. Like PSO, ABC remembers the best location found so far: a bee travels to a new location and compares it with its current favourite; if the new site is better, the old one is forgotten and the new one is remembered, otherwise the memory remains unchanged. The algorithm begins by deploying the bees and dispersing them to different locations. Employed bees are those actively foraging for nectar or pollen; they bring food information back to the hive and share it with the onlooker bees, which wait in the hive for reports of new food sources. To share food-source information, an employed bee dances in a designated area, and its dance depends on the nectar content of the food source. Onlooker bees monitor the dances and choose a food source according to its quality, so better food sources attract more bees than poor ones. When a food source is exhausted, its employed bee becomes a scout. Scout bees pursue exploration, while employed and onlooker bees pursue exploitation.

In this algorithm, each food source is a potential solution to the problem, and its nectar amount indicates the quality (fitness value) of that solution. For each food source there is exactly one employed bee, so the total number of food sources equals the total number of employed bees. An onlooker bee selects a food source based on the following probability value:

p_i = fit_i / Σ_{n=1}^{Np} fit_n,

where fit_i is the fitness (derived from the objective function value) of the ith solution evaluated by the employed bees and Np is the number of food sources. Through this process, the employed bees share all their information with the onlooker bees, and a roulette wheel selection method is applied using these probability values: the more nectar an employed bee reports, the higher its likelihood of being chosen by the onlooker bees. With the aid of the chosen employed bee, the onlooker bee travels to a new site using the following formula:

v_ij = x_ij + φ_ij (x_ij − x_kj).

Here, x_ij denotes the present position, x_kj is the position of a randomly selected employed bee (k ≠ i), and φ_ij is selected randomly from [−1, 1] for finding food sources in the region of x_ij. Any bee that is not able to find a better food source after several iterations is replaced with a scout bee. The scout bee summoned to replace the unsuccessful bee flies to a random, uncharted area to investigate its surroundings using the following equation:

x_ij = lb_j + rand(0, 1) × (ub_j − lb_j).

Here, ub and lb are the upper and lower bounds, respectively, and the random number lies between 0 and 1. The process is then repeated with the employed bees. The parameter limit (a fixed number of cycles) identifies a location that cannot be improved within the fixed cycles. The three phases of the ABC algorithm, with their pseudocodes, are as follows.

2.1. Employed Bee Phase

For the generation of new solutions in the employed bee phase, the following points are to be remembered:

(1) The number of employed bees is equal to the number of food sources.
(2) Every solution gets an opportunity to be updated.
(3) A partner is randomly selected for the generation of a new solution.
(4) The current solution and the partner must not be the same.
(5) A randomly selected variable of the current solution is modified to generate the new solution:

x_new,j = x_ij + φ (x_ij − x_kj),

where x_new,j is the jth variable of the new solution, x_ij is the jth variable of the current solution, x_kj is the jth variable of the partner, and the random variable φ lies between −1 and 1.
(6) The newly generated solution is bounded within [lb, ub].

After generating a new solution within the boundaries, we evaluate its objective function value and find its fitness via the relation fit = 1/(1 + f) if f ≥ 0 and fit = 1 + |f| otherwise. To update the current solution, we greedily select the newly generated solution. A trial counter tracks failures for each solution: if the new solution is worse, the trial counter is increased by one; if it is better, the counter is reset to zero and the better solution enters the population. The pseudocode for the employed bee phase is given as follows:

Input: objective function f, fit, trial, lb, ub, Np = s/2 (number of food sources = number of employed or onlooker bees), s = swarm size
for i = 1 to Np
  Select a partner p randomly such that p ≠ i
  Select a variable j randomly and update the jth variable
  Bound the modified jth variable
  Evaluate the objective function (fnew) and fitness (fitnew)
  If fitnew > fit(i), accept Xnew and set trial(i) = 0; else increase trial(i) by one
End
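The employed bee phase described above can be sketched in Python as follows. This is a hedged illustration with hypothetical names (`employed_bee_phase`, `sphere`); the fitness map fit = 1/(1 + f) for f ≥ 0 and 1 + |f| otherwise is the standard ABC convention, assumed here rather than quoted from the paper.

```python
import numpy as np

def employed_bee_phase(pop, f_vals, fit, trial, func, lb, ub, rng):
    """One employed-bee pass: each solution perturbs one random
    dimension relative to a random partner, with greedy acceptance."""
    Np, d = pop.shape
    for i in range(Np):
        k = rng.choice([n for n in range(Np) if n != i])  # partner, k != i
        j = rng.integers(d)                               # random dimension
        phi = rng.uniform(-1.0, 1.0)
        new = pop[i].copy()
        new[j] = np.clip(new[j] + phi * (new[j] - pop[k, j]), lb, ub)
        f_new = func(new)
        # Standard ABC fitness: 1/(1+f) for f >= 0, else 1 + |f|
        fit_new = 1.0 / (1.0 + f_new) if f_new >= 0 else 1.0 + abs(f_new)
        if fit_new > fit[i]:   # greedy selection: keep the better solution
            pop[i], f_vals[i], fit[i], trial[i] = new, f_new, fit_new, 0
        else:
            trial[i] += 1      # record one more failed trial

# Small driver on a toy objective (illustrative only)
rng = np.random.default_rng(1)
pop = rng.uniform(0.0, 1.0, (8, 2))
sphere = lambda z: float(np.sum((z - 0.5) ** 2))
f_vals = np.array([sphere(p) for p in pop])
fit = np.array([1.0 / (1.0 + v) for v in f_vals])
trial = np.zeros(8, dtype=int)
fit_before = fit.copy()
employed_bee_phase(pop, f_vals, fit, trial, sphere, 0.0, 1.0, rng)
```

Because acceptance is greedy, no solution’s fitness can decrease during this phase.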

2.2. Onlooker Bee Phase

In this phase, a probability condition determines whether a bee exploits a particular food source. Knowing the fitness of each food source, we calculate for each one the probability

p_i = fit_i / Σ_{n=1}^{Np} fit_n,

where fit_i and p_i are the fitness and probability of the ith solution, respectively. A solution with a higher fitness value has a higher probability. The pseudocode for the onlooker bee phase is as follows:

Input: objective function f, p, fit, trial, lb, ub, Np = s/2, prob
Set m = 0 and n = 1 (m counts onlooker bees, n indexes food sources)
While m < Np
  Generate a random number r
  If r < prob(n)
    Randomly select a partner p such that n ≠ p
    Select a variable j randomly and update the jth variable
    Bound the modified jth variable
    Evaluate the objective function (fnew) and fitness (fitnew)
    If fitnew > fit(n), accept Xnew and set trial(n) = 0; else increase trial(n) by one
    m = m + 1
  End
  n = n + 1; if n > Np, reset n = 1
End
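The probability rule and the roulette-wheel selection it feeds can be sketched as follows (function names are illustrative):

```python
import numpy as np

def selection_probabilities(fit):
    # p_i = fit_i / sum_n fit_n : fitter food sources attract more onlookers
    fit = np.asarray(fit, dtype=float)
    return fit / fit.sum()

def roulette_pick(prob, rng):
    # Roulette-wheel selection: index i is drawn with probability prob[i]
    return rng.choice(len(prob), p=prob)

rng = np.random.default_rng(0)
p = selection_probabilities([0.9, 0.05, 0.05])
picks = [roulette_pick(p, rng) for _ in range(1000)]
```

With fitness values 0.9, 0.05, 0.05, the first food source receives roughly 90% of the onlooker visits, illustrating how better sources attract more bees.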

2.3. Scout Bee Phase

As every solution has an associated trial counter, we specify a parameter limit, a user-defined integer value. A solution enters this phase when its trial value exceeds the limit, and the trial counter of the abandoned solution is then reset to zero. Not every solution passes through the scout phase. For a d-dimensional problem space, the limit can be calculated as Np × d. The scout phase occurs only when the trial counter of at least one solution is greater than the limit. The pseudocode for the scout bee phase is as follows:

Input: objective function f, p, fit, trial, lb, ub, limit
Identify the food source t whose trial(t) > limit
Replace Xt with a random solution: Xt,j = lb_j + rand(0, 1) × (ub_j − lb_j)
Evaluate the objective function (ft) and assign fitness (fitt)
Reset trial(t) = 0

2.4. Complete Pseudocode for the ABC Algorithm

ABC initializes the bee swarm and repeats the three phases until the stopping criterion is met, optimizing iteratively. Employed and onlooker bees carry out exploitation, while exploration is performed by the scout bees.

Initialise input parameters: fit, t, lb, ub, limit, Np
(1) Initialize the population parameter p randomly.
(2) Calculate the objective function values f and the fitness values fit.
(3) Set the trial counters to 0.
(4) for i = 1 to t
  Execute the employed bee phase.
  Determine the probabilities p_i.
  Apply the onlooker bee phase for generating food sources.
  Memorise the best food source.
  If trial > limit
    Enter the scout bee phase
  End
End
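Putting the three phases together, the complete loop can be sketched as a compact, self-contained Python function. This is a hedged illustration (the name `abc_minimize` and all parameter values are assumptions, not the paper’s settings); the greedy update, probability rule, and trial bookkeeping follow the pseudocode above.

```python
import numpy as np

def abc_minimize(func, lb, ub, d=1, Np=10, limit=20, iters=100, seed=0):
    """Minimal ABC sketch: employed, onlooker, and scout phases
    with greedy selection and a per-source trial counter."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (Np, d))
    f = np.array([func(p) for p in pop])
    fitness = lambda v: 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)
    fit = np.array([fitness(v) for v in f])
    trial = np.zeros(Np, dtype=int)

    def try_neighbor(i):
        # Perturb one random dimension toward/away from a random partner
        k = rng.choice([n for n in range(Np) if n != i])
        j = rng.integers(d)
        new = pop[i].copy()
        new[j] = np.clip(new[j] + rng.uniform(-1, 1) * (new[j] - pop[k, j]), lb, ub)
        f_new = func(new)
        fit_new = fitness(f_new)
        if fit_new > fit[i]:                       # greedy acceptance
            pop[i], f[i], fit[i], trial[i] = new, f_new, fit_new, 0
        else:
            trial[i] += 1

    best_x, best_f = pop[f.argmin()].copy(), f.min()
    for _ in range(iters):
        for i in range(Np):                        # employed bee phase
            try_neighbor(i)
        prob = fit / fit.sum()                     # onlooker bee phase
        for _ in range(Np):
            try_neighbor(rng.choice(Np, p=prob))
        t = trial.argmax()                         # scout bee phase
        if trial[t] > limit:
            pop[t] = rng.uniform(lb, ub, d)
            f[t] = func(pop[t]); fit[t] = fitness(f[t]); trial[t] = 0
        if f.min() < best_f:
            best_f = f.min(); best_x = pop[f.argmin()].copy()
    return best_x, best_f

# Toy 1-D run on [0, 1], mirroring the bounds used for the shape parameter
x_best, f_best = abc_minimize(lambda z: float(np.sum((z - 0.3) ** 2)), 0.0, 1.0)
```

On this toy quadratic the loop converges close to the minimizer 0.3; in the paper’s setting, `func` would instead be the error norm of the RBF-PS solution as a function of ε.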

3. Particle Swarm Algorithm (PSO)

Kennedy and Eberhart [2] invented PSO, a popular swarm intelligence technique, in 1995. PSO is effective at solving optimization problems by changing the trajectories of particles; particle movement combines stochastic and deterministic components. Social animals, flocks of birds, marine animal communities, and swarms inspired this optimization strategy. In this nature-based swarm optimization technique, the population of particles transfers information to improve the search and discover the global optimum. Each particle remembers the best position it has experienced (its personal best), and the swarm remembers the best position any particle has found (the global best). When a particle finds a position better than all previous ones, it updates its best position, so over the iterations each of the n particles refines its best solution and the method converges on the best solution among all candidates. This process continues until the set number of iterations is exhausted or the goal is met. In this algorithm, the velocity and position of the ith particle are updated as follows:

v_i^{k+1} = w v_i^k + c1 r1 (pbest_i − x_i^k) + c2 r2 (gbest − x_i^k),
x_i^{k+1} = x_i^k + v_i^{k+1},

where v_i^k is the velocity of particle i at the kth iteration, w, c1, and c2 are real parameters, r1 and r2 are random numbers whose values lie between 0 and 1, x_i^k is the position of particle i at the kth iteration, pbest_i is the personal best position of particle i, and gbest is the global best position over the whole search space.

3.1. Pseudocode for the PSO Algorithm

Enter the values of the parameters: w, c1, c2, fitness, lb, ub, Np, and t.
(1) Initialize the population P randomly along with the velocity of each particle i.
(2) Calculate the objective function f.
(3) Assign the personal bests as P and their fitness values as f.
(4) Evaluate the fittest solution and allocate its position to gbest and its fitness to fgbest.
for k = 1 to t
  for i = 1 to Np
    Determine the velocity v_i
    Determine the new position x_i
    Bound x_i
    Find the objective function value f_i
    Update the population by including x_i and f_i
    Update pbest_i and its fitness
    Update gbest and fgbest
  End
End
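The pseudocode above can be sketched as a compact Python function implementing the standard velocity and position updates. The function name `pso_minimize` and the parameter values (w = 0.7, c1 = c2 = 1.5) are illustrative assumptions, not the paper’s settings; the 1-D search over [0, 1] mirrors the bounds used later for the shape parameter.

```python
import numpy as np

def pso_minimize(func, lb, ub, d=1, Np=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (Np, d))
    v = np.zeros((Np, d))
    pbest, pbest_f = x.copy(), np.array([func(p) for p in x])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random((Np, d)), rng.random((Np, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)               # bound the new positions
        fx = np.array([func(p) for p in x])
        improved = fx < pbest_f                  # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest_f.argmin()                     # update the global best
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

# Toy 1-D objective standing in for the error-vs-shape-parameter curve
eps_best, err_best = pso_minimize(lambda e: (e[0] - 0.25) ** 2, 0.0, 1.0)
```

In the paper’s setting, the objective would be the error norm of the RBF-PS solution of Fisher’s equation evaluated at a candidate shape parameter ε.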

PSO is a computational technique that iteratively optimizes a problem to reduce error; it is a stochastic method used to determine parameter values. To find the optimal answer to an optimization problem, the particles communicate, share their knowledge, and follow simple rules. It is an innovative way of evaluating the best shape parameter value of the RBF for the nonlinear partial differential equation: a global search strategy that explores numerous regions of the parameter space.

4. Numerical Applications

In this section, the numerical solution of Fisher’s equation using the RBF pseudospectral method is obtained as an application of the above novel approach. Two problems of Fisher’s equation are solved numerically using the present approach, and their results are reported using the different error norms, L2, L∞, and absolute errors, along with the shape parameter values. A comparison of the obtained results is presented to show the effectiveness and applicability of the proposed method. First, derivatives are approximated using the RBF, and then solutions are determined in MATLAB R2022a using both algorithms on an Intel(R) Pentium processor with the Windows 7 operating system. The cubic radial basis function is taken as the basis for the numerical simulation of the equation. The initial parameters for both algorithms are as follows: the number of decision variables is 1, and the lower and upper bounds of the decision variable are 0 and 1, respectively. The same set of values is used for obtaining the shape parameter that minimizes the errors in both problems of Fisher’s equation.

The process of the proposed approach is shown graphically in Figure 1.

The following formulae are used for the calculation of the errors:

L2 = sqrt( Σ_{j=1}^{N} (u_j^exact − u_j^num)² ),
L∞ = max_j |u_j^exact − u_j^num|,
absolute error = |u_j^exact − u_j^num| at each node j.
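These norms can be computed as in the sketch below, which uses one standard convention (whether the paper’s L2 norm carries a grid-spacing factor is not shown, so that detail is an assumption):

```python
import numpy as np

def error_norms(u_exact, u_num):
    """Return the L2 norm, the L-infinity norm, and the pointwise
    absolute errors of a numerical solution against the exact one."""
    e = np.abs(np.asarray(u_exact, dtype=float) - np.asarray(u_num, dtype=float))
    return np.sqrt(np.sum(e ** 2)), np.max(e), e

# Tiny worked example with hand-checkable values
L2, Linf, abs_err = error_norms([1.0, 0.5, 0.0], [1.0, 0.4, 0.1])
```

Here the pointwise errors are 0, 0.1, and 0.1, so L∞ = 0.1 and L2 = sqrt(0.02) ≈ 0.1414.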

4.1. Test Problem 1

Taking the one-dimensional Fisher’s equation (1) on the domain [0, 1], the initial condition and exact solution are, respectively,

The numerical simulation of problem 1 is carried out by taking Δt = 0.0001 and N = 21 at the time levels 0.2, 0.5, and 1. Table 1 presents the L2 and L∞ error norms computed by the ABC and PSO algorithms, and a comparison is carried out with the results in the literature [24, 25]. It can be seen from the results that the errors from the PSO algorithm are smaller than those from the ABC approach. The results in Table 1 are also compared with results calculated by other numerical methods available in the literature. By optimizing the errors, the shape parameter takes the value 0.018062 using PSO and 0.065821 using ABC, at which the best errors occur. The values of the shape parameter at different times T for Δt = 0.0001 and N = 21 are given in Table 2. A comparative analysis of the absolute errors from both algorithms at N = 21 and Δt = 0.0001 for the times 0.01, 0.02, and 0.03 is shown in Table 3, which confirms that the errors from the PSO algorithm are smaller than those from ABC. Figure 2 demonstrates the graphical solution for 21 node points with Δt = 0.0001 on the domain [0, 1].

4.2. Test Problem 2

We consider the generalized Fisher’s equation (1) on the domain [0, 1], whose exact solution and initial condition are given, respectively, as follows:

Table 4 presents a comparison of the results obtained by the ABC and PSO algorithms with the results in the literature [26] for a = 1 with Δt = 0.0001 and N = 21, along with the exact solutions. As shown in Table 5, a comparison of the different error norms calculated by both algorithms at Δt = 0.00001 and N = 21 for the time levels 0.001, 0.002, 0.003, and 0.004 with iteration = 71 is similar to the results in the literature [23]. The analysis of the shape parameter values is carried out in Table 6, which shows the PSO algorithm to be the better optimizer for the error, giving a shape parameter value of 0.251490. Table 7 presents a comparative analysis of the absolute errors of the problem obtained by ABC against [23] at different time levels, which appear better than the available results; here, the optimized shape parameter value is 0.246477. The numerical solution is presented in Figure 3 for N = 21 and Δt = 0.0001 at various time levels.

Using the current hybrid approach, two test problems of Fisher’s equation are solved numerically, and the results are reported in terms of various error norms, including absolute errors, together with the shape parameter values. The comparison of the obtained results is performed and presented to test the efficacy and applicability of this novel approach.

5. Conclusion

In this paper, a novel hybrid technique is proposed for computing the numerical solution of Fisher’s equation using the PSO and ABC optimization algorithms with RBF-PS. The concept of an optimal shape parameter is adopted, using the PSO and ABC optimization algorithms, because of the trade-off between numerical stability and accuracy when using different radial basis functions. To show the accuracy and efficiency of the method, two problems are solved numerically. Based on their error norms and shape parameter values, the obtained results are compared with the results available in the literature and are found to be more accurate. From the results, it can be concluded that PSO gives more accurate results than the ABC algorithm in terms of smaller errors. Furthermore, the present work can be extended with various other optimization algorithms, such as the genetic algorithm, the ant colony optimization algorithm, the bacteria foraging optimization algorithm, and the firefly algorithm. Thus, the work has scope to solve partial differential equations arising in various other fields with minimum errors.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank their affiliations for facilitating the publication of this paper through their support.