Abstract

Precise estimation of isotherm model parameters and selection of an appropriate isotherm from measured data are essential for modeling the fate and transport of toxic contaminants in the environment. Nonlinear least-square techniques are widely used for fitting isotherm models to experimental data. However, such conventional techniques pose several limitations in parameter estimation and in the choice of an appropriate isotherm model, as shown in this paper. It is demonstrated in the present work that the classical deterministic techniques are sensitive to the initial guess and that their performance is therefore impeded by the presence of local optima. A novel solver based on a modified artificial bee-colony (MABC) algorithm is proposed in this work for the selection and configuration of appropriate sorption isotherms. The performance of the proposed solver is compared with three other solvers based on swarm intelligence for model parameter estimation using measured data from 21 soils. Performance comparison of the developed solvers on the measured data reveals that the proposed solver has excellent convergence capabilities due to its superior exploration-exploitation balance. The best-fit solutions estimated by the proposed solver are almost identical to the mean fitness values obtained over 20 independent runs. The advantages of the proposed solver are presented.

1. Introduction

The fate and transport of heavy metals in soils are a serious concern because of their potential impact on the environment. These reactive substances undergo complex interactions in soils that depend on the soil properties, the type of clay minerals, and the pore-fluid parameters. Identifying the fate and transport of these substances in soils requires an understanding of the various sorption mechanisms of species in the soil environment [1]. The sorption reactions in soils are, therefore, important in contaminant transport modeling in the subsurface and in containment facilities such as landfills. The sorption of different solutes in soils is generally described using isotherms. Sorption isotherms relate the concentration of the sorbed solute to the solute concentration in solution at equilibrium. Several linear and nonlinear isotherms are commonly used in various fields of research to understand the sorption mechanism between the sorbent (soil solids) and the solute [24]. Precise selection and configuration of a sorption isotherm for a given sorbent-solute combination are essential in contaminant risk assessment. The selection and configuration depend on the chosen isotherm model, the experimental representation of the environmental conditions, and the parameter estimation method [5]. Estimation of the sorption model parameters and selection of the appropriate isotherm model are accomplished by fitting isotherm models to the experimental data using regression [6]. Nonlinear least-square techniques are widely used for this purpose [4, 5, 7, 8]. However, it will be shown in this work that classical gradient-based techniques suffer from several limitations.

Nature-inspired algorithms, such as those based on swarm intelligence (SI), are global search techniques that are widely used for many optimization problems in engineering. Particle swarm optimization (PSO) [9] and artificial bee colony (ABC) [10] are commonly used SI techniques. These algorithms overcome several limitations of conventional, deterministic techniques and are therefore widely applied to optimization problems in geotechnical engineering [11–17]. However, these techniques often converge prematurely to a local optimum and may not generate the same solution in different runs due to the stochastic nature of the algorithms [18]. A hybrid method based on particle swarm optimization and a gradient-based algorithm is often used for the configuration of sorption isotherms [19]. However, the limitations associated with the slow convergence rate in the early stage of the search and the lack of diversity in the later stage of the search may remain in the basic particle swarm optimization [20, 21]. Therefore, hybrid methods may not compensate for the limitations of their individual components. Such hybrid methods are also computationally expensive, as they require both gradient information and a larger number of objective function evaluations.

It will be shown that the performance of conventional optimization techniques and of the PSO algorithm is greatly hampered by the presence of several local optima on the functional terrain. An inverse model based on the artificial bee-colony (ABC) optimization method is proposed in this work for the selection and configuration of appropriate sorption isotherms from experimental sorption data. The proposed solver is further improved by introducing several modifications to the ABC algorithm for better performance. The proposed technique is highly robust; it accurately estimates the sorption model parameters and identifies the appropriate isotherm for the experimental data.

2. Theory

Sorption of different chemical constituents in soils is routinely measured in the laboratory under controlled temperature, pressure, and pH. The measured sorption data are generally described by either an equilibrium or a kinetic retention process [2]. The present study is limited to the modeling of the equilibrium retention process. The measured equilibrium sorption data are represented as the sorbed concentration, S, against the equilibrium concentration in the aqueous phase, C. Sorption isotherms describe the mathematical dependency of the sorbed concentration on the equilibrium concentration. The determination of the sorption model parameters from these data is essential for the design of containment facilities. Further, configuration of the observed data using available models is important in the modeling of contaminant transport and sorption processes for containment applications. Sorption isotherms are classified into S-curve, L-curve, H-curve, and C-curve types based on the initial slope of the curve [22–24]. A thorough description of various sorption isotherm models is given elsewhere [24–27]. Some of the isotherms commonly used in geoenvironmental engineering applications are given below.

2.1. Freundlich Isotherm

The Freundlich isotherm is widely used for describing nonideal and reversible adsorption [27, 28]. This isotherm is applied to heterogeneous systems such as soils. The Freundlich isotherm can be expressed as

S = K_d C^b,    (1)

where S is the amount of solute retained by the soil (μg/g), C is the solute concentration in solution (μg/mL), K_d is the distribution coefficient (mL/g), and the exponent b is a dimensionless constant. The distribution coefficient describes the partitioning of solute species between the solid and liquid phases [2]. The linear sorption equation is a particular case of the Freundlich expression and can be derived by substituting b = 1.

2.2. Langmuir Isotherm

The Langmuir model is another popular sorption model, originally developed by Langmuir [29] to describe the adsorption of gases by solids. This isotherm model is widely used in geoenvironmental applications; for example, it has recently been applied to study phosphorus contamination in soils [30], ammonium removal using zeolites [31], and phenol adsorption in natural soils [32]. The advantage of this model is that a maximum sorptive capacity is included, which is regarded as a measure of the available sorption sites on the solid phase. The Langmuir isotherm is defined as

S = S_max K_L C / (1 + K_L C),    (2)

where K_L and S_max are the fitting parameters. The parameter K_L (mL/g) is a measure of the bond strength with which the sorbed solutes are held on the soil surface, and S_max (μg/g) is the maximum sorption capacity. This isotherm is extensively used to describe the sorption of solutes by soils [5]. Various modifications of (1) and (2) are often used [23], as some sorption data do not follow the standard isotherms. These modified forms are presented in the following subsections.

2.3. Freundlich-Langmuir Model

The Freundlich-Langmuir (FL) model is a power function based on the assumption of continuously distributed affinity coefficients [4, 23]. This model can be described as

S = S_max K_L C^n / (1 + K_L C^n),    (3)

where n is a fitting parameter (0 < n ≤ 1). The Langmuir isotherm is a special case of this model and can be derived by substituting n = 1. The FL model is widely used for simulating the sorption behaviour of heavy metals on clay minerals [33, 34].

2.4. Two-Site Langmuir Model

The two-site Langmuir model is based on the assumption that sorption occurs on two types of surface sites, each with a different bonding energy. One site has a high bonding energy and reacts rapidly, while the other has a lower bonding energy and reacts more slowly [35]. The modified Langmuir model with two sorbing components is given by

S = S_max1 K_1 C / (1 + K_1 C) + S_max2 K_2 C / (1 + K_2 C),    (4)

where S_max1 and S_max2 are the maximum quantities of solute that can be sorbed by the two types of sites, and K_1 and K_2 are constants related to the bonding energies of the components. Several researchers have obtained excellent fits of phosphate sorption data with the two-site Langmuir model [36], as well as of metal sorption data, for example, lanthanides on smectite clays [37, 38]. Several other isotherm models, such as Farley–Dzombak–Morel [39] and Tóth [40], are also used, but their modeling is outside the scope of this work.
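To make equations (1) through (4) concrete, the following sketch implements the four isotherms as functions of the equilibrium concentration C in Python/NumPy. The function and argument names (kd, b, smax, kl, n, and so on) are illustrative choices for the notation used above, not identifiers from the original study.

import numpy as np

def freundlich(c, kd, b):
    # Freundlich isotherm, equation (1): S = Kd * C**b
    return kd * np.asarray(c, dtype=float) ** b

def langmuir(c, smax, kl):
    # Langmuir isotherm, equation (2): S = Smax*Kl*C / (1 + Kl*C)
    c = np.asarray(c, dtype=float)
    return smax * kl * c / (1.0 + kl * c)

def freundlich_langmuir(c, smax, kl, n):
    # Freundlich-Langmuir isotherm, equation (3); reduces to the Langmuir model for n = 1
    c = np.asarray(c, dtype=float)
    return smax * kl * c ** n / (1.0 + kl * c ** n)

def two_site_langmuir(c, smax1, kl1, smax2, kl2):
    # Two-site Langmuir isotherm, equation (4): sum of two Langmuir terms
    c = np.asarray(c, dtype=float)
    return smax1 * kl1 * c / (1.0 + kl1 * c) + smax2 * kl2 * c / (1.0 + kl2 * c)

Each function maps measured equilibrium concentrations to theoretical sorbed concentrations for a candidate parameter set and is reused by the fitting sketches below.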

3. Estimation of Model Parameters

Accurate estimation of the model parameters of (1) through (4) is required for predicting the fate and transport of contaminants in the environment. These model parameters are frequently estimated by nonlinear least-square approaches that minimize an objective function. The objective function is an error measure used to quantify how well a given model describes the measured data. Several error measures, such as the coefficient of determination (R^2), the sum of squared deviations (SSD), the mean squared error (MSE), and the root mean square error (RMSE), are commonly used in inverse analysis [27]. The RMSE is often used to verify the appropriateness of a model for the observed data and is given by

RMSE = sqrt[ (1/N) * sum_{i=1}^{N} (S_i^obs - S_i^fit)^2 ],    (5)

where S_i^obs and S_i^fit are the observed and fitted sorption values, respectively, and N is the number of observed data points.
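A minimal sketch of the objective function in equation (5), assuming the isotherm functions defined above; model_func is any of those functions and theta its parameter vector.

import numpy as np

def rmse(theta, model_func, c_obs, s_obs):
    # Root mean square error between observed and fitted sorption values, equation (5)
    s_fit = model_func(np.asarray(c_obs, dtype=float), *theta)
    return np.sqrt(np.mean((np.asarray(s_obs, dtype=float) - s_fit) ** 2))

All the solvers discussed below minimize this error over the admissible parameter ranges of the chosen isotherm.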

As the aforementioned theoretical isotherms are nonlinear, optimization techniques are used to estimate the model parameters of a given isotherm and to determine the appropriate isotherm for a given data set. Nonlinear regression techniques such as the Levenberg–Marquardt method are widely used in the literature for this parameterization [34, 41, 42]. A brief description of the gradient-based technique is given here.

3.1. Gradient-Based Approach

Gradient-based techniques are the conventional optimization tools for parameter estimation in many engineering applications. The principle behind such techniques is to find a point in the search space where the derivatives of the objective function with respect to the model parameters (θ) become zero. In these methods, the unknown model parameter vector θ is determined iteratively. The direction of descent for the gradient-based method is based on

θ_{k+1} = θ_k + α_k d_k,  with  d_k = -∇F(θ_k),    (6)

where α_k is a scalar that determines the step length in the direction d_k. Several variants of gradient-based algorithms are used that may also require the second derivatives of the objective function (the Hessian matrix) for a faster convergence rate.
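The iteration in equation (6) can be sketched as follows using a forward-difference gradient; the step length, tolerance, and iteration limit are illustrative values, not settings from this study.

import numpy as np

def numerical_gradient(f, theta, eps=1e-6):
    # Forward-difference approximation of the gradient of f at theta
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    f0 = f(theta)
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        grad[j] = (f(theta + step) - f0) / eps
    return grad

def gradient_descent(f, theta0, alpha=0.01, max_iter=1000, tol=1e-8):
    # Steepest-descent iteration of equation (6): theta_{k+1} = theta_k - alpha * grad F(theta_k)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = numerical_gradient(f, theta)
        if np.linalg.norm(g) < tol:
            break
        theta = theta - alpha * g
    return theta

Because the iteration follows only the local gradient, the returned solution depends strongly on theta0, which is exactly the sensitivity examined in Section 5.2.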

It was found in this work that these techniques suffer from several limitations. These conventional algorithms are highly sensitive to the fitness function (error measure) and to the initial guess. Of late, nature-inspired or swarm intelligence search techniques have been used as an alternative to the conventional methods for solving several optimization problems in geoenvironmental engineering [11–13, 15–18].

3.2. Swarm Intelligence

Swarm intelligence (SI) is a field of research that develops computational techniques inspired by nature for solving optimization problems. Genetic algorithms (GA), particle swarm optimization (PSO), and artificial bee colony (ABC) are some of the techniques developed by mimicking natural processes to determine global optimum solutions to inverse problems. In this work, several solvers based on SI techniques were developed for predicting the model parameters of the isotherm equations and for determining the appropriate isotherm for the measured data. The following SI techniques were used in developing the solvers.

3.2.1. Particle Swarm Optimization

PSO is a stochastic, derivative-free, population-based method. The working principle of PSO is inspired by the social behaviour of animals, such as the flocking of birds and the schooling of fish [9]. In PSO, each particle (i.e., agent) of the population (i.e., swarm) is regarded as a collision-free bird in a D-dimensional search space, where D is the number of model parameters to be optimized. The PSO algorithm starts by initializing a predefined number of particles in the search space represented by the objective function. Each particle is assigned a position and a velocity that are updated at every iteration. The position and velocity are updated based on the flying experience (movement) of the individual particles and their neighbors to achieve a better position in the search space. Further, the best location of each individual particle in its history (all previous iterations) and the best location experienced by the whole swarm in any iteration are memorized. Therefore, the movement of a particle is an aggregated acceleration towards its best previously visited location and towards the best individual of a topological neighborhood [43]. The velocity (v) and position (x) of the ith particle in the dth dimension are updated using [44, 45]

v_id^(t+1) = v_id^t + c_1 r_1 (p_id - x_id^t) + c_2 r_2 (p_gd - x_id^t),    (7)
x_id^(t+1) = x_id^t + v_id^(t+1),    (8)

where p_id is the historical best position of the individual particle; p_gd is the global-best position; c_1 and c_2 are the acceleration constants that accelerate the movement of the particles towards the individual-best and global-best positions, respectively; and r_1 and r_2 are two independent random numbers, uniformly distributed between 0 and 1, that stochastically vary the relative pull towards p_id and p_gd. The randomness introduced by these terms provides the unpredictable behaviour of the swarm that is useful for preventing premature convergence. This iterative process continues, generation by generation, until the predefined number of iterations is reached. Individual particles are drawn towards the success of each other during the search process, which results in the formation of a cluster of particles in the global optimum region.

PSO is widely used for solving optimization problems in many engineering disciplines because of its simplicity. Several experimental studies on the application of the PSO algorithm to benchmark functions and optimization problems revealed that the swarm explodes quickly when the first (velocity) term in (7) increases incessantly [45]. Notable improvements to the update equation are the introduction of an inertia factor to control the velocity of the particles [44] and a constriction factor that constrains the magnitude of the velocity [45]. The modified form of the velocity-update equation is [12, 44, 45]

v_id^(t+1) = χ [ w v_id^t + c_1 r_1 (p_id - x_id^t) + c_2 r_2 (p_gd - x_id^t) ],    (9)

where χ is the constriction factor and w is the inertia weight introduced to improve the performance of the PSO algorithm. The first term in (9) is the inertia term that restricts the particles from changing their flight direction drastically. Considerable improvements to (9) are available in the literature for improving the performance of the PSO algorithm. The present work uses the perturbed PSO algorithm in developing the solver.
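A compact sketch of the inertia-weight PSO described by (8) and (9), assuming a unit constriction factor and simple clipping at the box bounds; the parameter values shown are placeholders rather than the tuned settings of Section 4.2.

import numpy as np

def pso_minimize(f, lb, ub, n_particles=20, n_iter=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Basic inertia-weight PSO, equations (8) and (9), minimizing f over box bounds
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    dim = lb.size
    x = lb + rng.random((n_particles, dim)) * (ub - lb)          # initial positions
    v = np.zeros((n_particles, dim))                             # initial velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (9)
        x = np.clip(x + v, lb, ub)                                  # equation (8)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

For the present problem, f would be the rmse objective evaluated for a chosen isotherm, and lb, ub the parameter bounds of that isotherm.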

3.2.2. Variant of PSO Algorithm

The classical PSO algorithm and many of its variants suffer from a few limitations during the search process. The major drawback of these algorithms is that they converge to suboptimal solutions for several problems because the same movement-update equation is applied to the global-best particle [18]. Bharat et al. [18] proposed the perturbed PSO (PPSO) algorithm, which instead applies a perturbation equation (10), governed by a constant perturbation coefficient, to the global-best particle.

Equation (10) randomly perturbs the global-best particle in a suboptimal region to avoid premature convergence. The performance of the PPSO algorithm was further improved by introducing catfish particles [46] into the swarm [14]. The particles are arranged in descending order of their fitness values after every few generations (catIter). A known percentage of the low-fit solutions is chosen as catfish particles, and these are repositioned in the D-dimensional search space based on opposition-based learning (OBL). The OBL technique is based on the use of the opposite numbers of the current positions in the search space [47]; it is explained in the following subsections. The catfish particles improve the efficiency by exploring new feasible regions of the search space. This optimization algorithm is referred to as the PCPSO algorithm in this work. The PCPSO algorithm is believed to alleviate premature convergence of the swarm, even with a small population, due to the unique combination of the perturbation equation for the global-best particle and the exploration capabilities of the catfish particles [14]. Two other solvers based on SI algorithms are also developed in this work.
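The catfish step can be sketched as follows, under the assumption that the selected worst-fit particles are simply mapped to their opposition-based positions (equation (16), defined in Section 3.2.4); the 20% fraction follows the setting reported in Section 4.2.

import numpy as np

def reposition_catfish(x, objective, lb, ub, frac=0.2):
    # Replace the worst `frac` of the swarm with opposition-based (OBL) positions
    n_catfish = max(1, int(frac * x.shape[0]))
    worst = np.argsort(objective)[-n_catfish:]   # largest objective (RMSE) values = worst fit
    x = x.copy()
    x[worst] = lb + ub - x[worst]                # opposite point in every dimension
    return x

In the PCPSO solver this repositioning is applied every catIter iterations, after which the search continues with the usual velocity and position updates.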

3.2.3. Artificial Bee-Colony (ABC) Algorithm

The ABC algorithm is a recent addition to the class of global optimization algorithms. The ABC algorithm [10] is inspired by the foraging behaviour of honey bees. It has recently been shown on over 50 benchmark functions (unimodal and multimodal, separable and nonseparable) that the ABC algorithm outperforms other popular techniques such as the genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO) [48] due to a better trade-off between exploration and exploitation.

The algorithm uses several steps inspired by the collective behaviour of bees in collecting and processing nectar. The basic ABC algorithm uses three classes of bees, that is, employed bees, onlooker bees, and scout bees. Employed bees are responsible for locating nectar sources and sharing this information with the onlookers on the dance area of the hive. Onlooker bees select a nectar source with a probability based on its quality (objective function); therefore, better food sources attract a larger number of onlookers. The task of the employed and onlooker bees is to explore the search space. Scout bees, similar to the catfish particles, discover new food sources in the search space by a random search process; employed bees whose food sources are abandoned become scout bees. Each food source (a set of model parameters) is a feasible solution of the problem, and the nectar amount of a food source represents the quality of the solution, that is, the objective function. The quality of the solution is represented by the fitness value

fit_i = 1 / (1 + f(θ_i)),    (11)

where f is the objective function (5) and θ_i is the model parameter vector of the ith food source. Half of the colony is assumed to consist of employed bees and the other half of onlookers. Each food source is exploited by only one employed bee; therefore, the number of employed (or onlooker) bees equals the number of food sources. The following four steps are used in the algorithm; a compact code sketch of the complete loop is given after the list.

(i) Initialization Phase. A predefined number of food sources are randomly generated in the D-dimensional parametric space, where D is the dimension of the parametric space, that is, the number of parameters to be optimized. The food sources are generated using

x_ij = x_j^min + rand(0, 1) (x_j^max - x_j^min),    (12)

where i = 1, 2, ..., SN (SN is the number of food sources and equals half of the colony size); j = 1, 2, ..., D; and x_j^min and x_j^max are the lower and upper bounds of the jth parameter. The fitness of all the food sources is evaluated using a suitable fitness norm. Further, the counters that store the number of trials of each solution are reset to 0 in this phase.

(ii) Employed Bees Phase. Each employed bee produces a modification to the solution in its memory depending on the local (visual) information. The modification to the present food source is determined using

v_ij = w x_ij + φ_ij (x_ij - x_kj),    (13)

where w is the inertia factor (w = 1 in the canonical ABC), x_k is a randomly selected food source different from x_i, and φ_ij is a uniformly distributed random number in the range [-1, 1]. The fitness of the newly discovered solution is computed, and a greedy selection is applied between the new (v_i) and the original (x_i) solution; the better solution is stored in memory. The trial count for this food source is reset to 0 if the solution is improved; otherwise, it is increased by 1. After this phase, the employed bees share the nectar (solution) information with the onlooker bees on the dance area.

(iii) Onlooker Bees Phase. In this phase, the food sources are chosen with a probability based on their nectar amount (quality of the objective function). A larger number of onlooker bees swarm around a food source when its fitness is high. The probability is calculated according to

p_i = fit_i / sum_{n=1}^{SN} fit_n,    (14)

where the fitness of each agent is calculated according to (11). After this stage, each onlooker bee finds a new food source in the neighborhood based on (13), similar to the employed bees. The new position replaces the old one in memory when its quality is better than that of the previous one; otherwise, the bee keeps the old position in memory and the trial count is increased by 1.

(iv) Scout Bees Phase. A food source is abandoned and a new one is selected when the food source does not improve in quality within a predetermined number of iterations, called the "limit." The new food source is generated randomly in the search space using (12).
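A condensed sketch of the four phases above for a box-constrained objective f; the greedy selection, trial counters, and the limit rule follow the description, while implementation details such as the single-coordinate perturbation are common ABC conventions rather than statements from this paper.

import numpy as np

def abc_minimize(f, lb, ub, n_food=10, n_iter=50, limit=40, seed=0):
    # Basic ABC: employed, onlooker, and scout phases over n_food food sources
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    dim = lb.size
    x = lb + rng.random((n_food, dim)) * (ub - lb)          # equation (12)
    obj = np.array([f(p) for p in x])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        # Equation (13) with w = 1: perturb one coordinate towards a random partner source
        k = rng.choice([m for m in range(n_food) if m != i])
        j = rng.integers(dim)
        v = x[i].copy()
        v[j] = v[j] + rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(v, lb, ub)

    def greedy(i, v):
        # Keep the better of the old and new food sources; update the trial counter
        fv = f(v)
        if fv < obj[i]:
            x[i], obj[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                              # employed bee phase
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + obj)                              # equation (11)
        prob = fit / fit.sum()                               # equation (14)
        for i in rng.choice(n_food, size=n_food, p=prob):    # onlooker bee phase
            greedy(i, neighbour(i))
        for i in np.where(trials > limit)[0]:                # scout bee phase
            x[i] = lb + rng.random(dim) * (ub - lb)
            obj[i], trials[i] = f(x[i]), 0
    best = np.argmin(obj)
    return x[best], obj[best]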

Several modifications of the original ABC algorithm have been studied on benchmark functions. One modification of the basic ABC alters the position-update equation; Zhu and Kwong [49] recently applied such a gbest-guided update to six benchmark functions and reported favorable results. However, the convergence towards the gbest solution cannot be controlled in the approach of Zhu and Kwong [49] because of the lack of acceleration factors. Apart from the modification of the position-update equation, the frequency and magnitude of the perturbation have also been modified [50, 51]. OBL was used by Gao and Liu [52] in the initialization phase on benchmark functions. Hybrid techniques combining the crossover operations of GA with ABC were recently used by Karaboga and Kaya [53].

3.2.4. Modified ABC (MABC) Algorithm

The exploration-exploitation qualities of the classical ABC algorithm are improved in the present work. The following position-update equation is used for the employed bees in the modified algorithm:

v_ij = w x_ij + φ_ij (x_ij - x_kj) + C ψ_ij (y_j - x_ij),    (15)

where φ_ij and ψ_ij are random numbers varying within [-1, 1], w is the inertia weight, C is the acceleration coefficient, and y denotes the best food source found so far. The perturbation in (15) produces solutions around the present solution in the randomly chosen direction of another food source. Fitness ranking of the whole population is not required in this method, as the partner food sources are randomly selected to generate the new solutions.

The movement of the employed bees under the proposed position-update equation follows the resultant direction of the best-fit food source (position) and a randomly chosen food source location. The inertia weight and the acceleration coefficient improve the exploration process. The position-update equation for the onlooker bees is based on (13) with an adaptive inertia factor, which helps the exploitation of better food sources around the present positions. Further, an opposition-based concept is used to generate new food sources for the scout bees.
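The exact algebraic form of equation (15) is not reproduced above, so the sketch below encodes one plausible reading of the description: an inertia-weighted copy of the current source, the random-partner term of (13), and a term towards the best food source scaled by the acceleration coefficient C. Both the formula and the variable names should be treated as assumptions for illustration, not as the authors' exact update.

import numpy as np

def mabc_employed_update(x, i, x_best, w, c_accel, lb, ub, rng):
    # One possible reading of the MABC employed-bee update (equation (15))
    n_food, dim = x.shape
    k = rng.choice([m for m in range(n_food) if m != i])   # random partner food source
    phi = rng.uniform(-1.0, 1.0, size=dim)                 # random numbers in [-1, 1]
    psi = rng.uniform(-1.0, 1.0, size=dim)
    v = (w * x[i]
         + phi * (x[i] - x[k])                             # canonical ABC term, as in (13)
         + c_accel * psi * (x_best - x[i]))                # pull towards the best food source
    return np.clip(v, lb, ub)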

OBL is based on the use of the opposite numbers of the current positions in the search space [47]. If x = (x_1, x_2, ..., x_D) is a food source position in the D-dimensional space with x_j ∈ [x_j^min, x_j^max] for j = 1, 2, ..., D, the new position based on OBL is the opposite point of x, whose jth component is defined as

x_j' = x_j^min + x_j^max - x_j,    (16)

where x_j^min and x_j^max are the lower and upper limits of the jth dimension.

4. Inverse Analysis

The performance of all the SI techniques depends on their tuning parameters, and these tuning parameters are very sensitive to the nature of the parametric space [14, 18, 54]. The tuning parameters of the four algorithms and the measured data are described in the following subsections.

4.1. Experimental Sorption Data

Observed sorption data of metal species, that is, Cu, Zn, Ca, and Pb, on various soils were taken from the literature for the selection and configuration of the described isotherms. Natural soils with various clay contents, different clay minerals, and pH values were chosen. The physical properties of the soils, as reported in the literature, are presented in Table 1. The observed data contain the sorbed concentrations of the metal species against the corresponding equilibrium concentrations. The details of the experimental data, as reported in the literature, are given in Table 2.

4.2. Parameter Setting

Twenty independent runs were used to test the performance of the proposed solvers. The number of iterations was fixed at 150 for all the algorithms and all the isotherms. However, the number of agents was varied based on the number of model parameters, that is, the size of the dimensional (parametric) space. The algorithms used 10, 20, and 30 agents for fitting the experimental data with the Freundlich isotherm (2-dimensional), the Freundlich-Langmuir isotherm (3-dimensional), and the two-site Langmuir isotherm (4-dimensional), respectively.

The parameter settings of the different algorithms were determined empirically on synthetic test data for which the optima were known. The tuning parameters of both PSO algorithms, such as the acceleration coefficients, inertia factor, and constriction coefficient, are problem dependent [11–14, 18]. The coefficient values that provided the best performance for the present problem were adopted, and the same parameter setting was used for both PSO algorithms. The perturbation coefficient value recommended in [14, 18] provided good convergence for the algorithms. The PCPSO algorithm required additional tuning parameters, namely, catIter and the number of catfish particles. The 20% worst-fit particles were considered as catfish particles and were repositioned using OBL every 10 iterations.

The colony size in the ABC and MABC algorithms was twice the number of agents considered for the other algorithms; in other words, the numbers of employed bees and onlooker bees were equal and the same as the number of agents in the other algorithms. In these algorithms, the objective function is evaluated up to three times per food source in each iteration, so the number of iterations was fixed at 50 to keep the number of function evaluations per agent the same in all the algorithms. The scout limit was set equal to the colony size multiplied by the number of dimensions of the problem. The inertia parameter, w, was decreased from 2.0 at the beginning of the search to 0.5 at the end. The acceleration coefficient, C, in the MABC algorithm was fixed at 3.0 based on the synthetic tests. Agents that moved outside the boundaries of the D-dimensional parametric space were repositioned randomly in the search space.
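The settings described above can be collected as follows; only values stated in the text are filled in, and the field names are illustrative rather than taken from the original implementation.

# Tuning parameters reported in Section 4.2 (values not stated in the text are omitted)
settings = {
    "independent_runs": 20,
    "iterations_pso_pcpso": 150,
    "iterations_abc_mabc": 50,       # three evaluations per food source per iteration
    "agents": {"freundlich": 10, "freundlich_langmuir": 20, "two_site_langmuir": 30},
    "pcpso": {"catfish_fraction": 0.2, "cat_iter": 10},
    "abc_mabc": {"scout_limit": "colony size x problem dimension"},
    "mabc": {"inertia_w_range": (2.0, 0.5),   # decreased from 2.0 to 0.5 over the run
             "acceleration_C": 3.0},
}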

The pseudocode of the proposed MABC algorithm is given in the Appendix.

5. Results and Discussion

5.1. Parametric Space

The parametric space for the experimental data of Cr sorption on Alligator soil was computed for the nonlinear isotherm equations. The parametric spaces obtained using the two-parameter Freundlich and Langmuir isotherms (the latter obtained by substituting n = 1 in the Freundlich-Langmuir isotherm) are shown in Figures 1 and 2, respectively. The RMSE (5) between the experimentally measured data and the theoretical data computed from 2,250,000 sets of model parameters (K_d, b or K_L, S_max) was used. The parametric spaces appear fairly different in the two cases, although the same experimental data were used. Further, these functional terrains contain a unique global optimum solution, several local optima, and a nearly flat surface with almost identical RMSE values. The parametric space also varies with the measurement errors in the data and the number of data points, and the complexity of the functional terrain increases further with the parametric dimension. Conventional algorithms may converge to one of these local optima when the initial guess lies in such a region of the parametric space. The application of robust algorithms for determining the global optimum solution of the present problem is, therefore, justified.
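A sketch of how such a two-parameter error surface can be generated for the Freundlich model, reusing the freundlich and rmse helpers defined earlier; the grid resolution and parameter ranges are placeholders, not those used for the surfaces reported here.

import numpy as np

def rmse_surface(c_obs, s_obs, kd_range, b_range, n_grid=300):
    # Evaluate the Freundlich RMSE surface, equation (5), over a (Kd, b) grid
    kd_values = np.linspace(kd_range[0], kd_range[1], n_grid)
    b_values = np.linspace(b_range[0], b_range[1], n_grid)
    surface = np.empty((n_grid, n_grid))
    for i, kd in enumerate(kd_values):
        for j, b in enumerate(b_values):
            surface[i, j] = rmse((kd, b), freundlich, c_obs, s_obs)
    return kd_values, b_values, surface

Plotting such a surface exposes the local optima and the nearly flat region described above, which explains why a gradient-based search started in those regions stalls.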

5.2. Gradient-Based Algorithm

The performance of gradient-based algorithms depends on the user-supplied initial solution (guess) and on the type of error measure. The sensitivity of these algorithms to the initial guess was examined for the present problem. The "fmincon" MATLAB® routine, which performs constrained optimization of a multivariable function using an interior-point technique, was used to find the isotherm model parameters that best fit the theoretical sorption data to the measured data. The interior-point technique was found to have better convergence than the trust-region-reflective or other available techniques. A finite-difference approximation was used to determine the first- and second-order derivatives of the objective function with respect to the model parameters. The tolerance on the objective function was fixed at 1 × 10^-8, and a very large value was used for the maximum number of iterations, which served as the stopping criterion. Different initial guesses were used to obtain the converged solutions. The performance of the gradient-based algorithm in fitting the two-site Langmuir isotherm to the experimental data of Cd sorption on Molokai soil is presented in Table 3. The solutions converged to several suboptimal and local solutions for different initial guesses. The predicted model parameters were fairly different from each other, yet the RMSE values were very close, so finding the global-best solution is fairly complex in such situations. It is evident from these results that the performance of the gradient-based algorithm is highly sensitive to the initial guess and that convergence is impeded by the presence of local minima.

5.3. Parameter Estimation by SI Based Solvers

Model parameters of the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms were estimated from the experimental data using the developed solvers. Because the SI techniques are stochastic and may produce different solutions in different runs, the performance of the solvers was evaluated using the mean fitness value, the best-fit solution, and the standard deviation (STD) computed from 20 independent runs. The robustness of an algorithm is evaluated by comparing the mean fitness value with the best fitness (i.e., the minimum error between the theoretical and experimental data) and by examining the standard deviation; a robust solver yields a best-fit solution very close to the mean fitness solution and a standard deviation close to zero. The model parameters obtained by the different algorithms for all 21 experiments using the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms are given separately in Tables 4(a), 5(a), and 6(a), respectively. The best and worst solutions obtained in the 20 independent runs, based on the objective function values, are provided. The Langmuir isotherm was not included in the comparison, as it is a special case of the Freundlich-Langmuir isotherm with n = 1.

The mean fitness, best-fit solution, and standard deviation obtained by the different solvers when fitting the Freundlich isotherm to the experimental data are presented in Table 4(b), with the best results highlighted in bold. Comparison of the PSO and PCPSO algorithms shows that the PCPSO algorithm improved the mean fitness and standard deviation appreciably without compromising the best solution; the modifications to the PSO algorithm through the perturbation equation and catfish swarm helped the PCPSO algorithm achieve repeatability of the best-fit solution in different runs. Comparison of the PCPSO and ABC algorithms shows that their performance was fairly similar in finding the best (global) solutions on several experimental data sets. However, the ABC algorithm produced better mean fitness values and standard deviations, suggesting that it outperformed the two PSO algorithms in fitting the sorption isotherms. Comparison of ABC and MABC shows that the performance of the MABC algorithm was superior to that of all the other algorithms; the optimizer produced identical mean-fit and best-fit solutions on all the experimental data, so a zero standard deviation was achieved in all the tests. The performance order of the algorithms for the 2-parameter Freundlich isotherm was MABC > ABC > PCPSO > PSO.

The mean fit, best fit, and standard deviation computed by the different algorithms on the experimental data using the Freundlich-Langmuir isotherm are presented in Table 5(b), with the best solution in each case highlighted in bold. The PCPSO algorithm improved the mean fitness and outperformed the classical PSO algorithm. However, the basic PSO algorithm converged to the global-best solutions for more experimental data sets than the PCPSO algorithm. The convergence of the PCPSO algorithm is relatively slow when fitting the Freundlich-Langmuir isotherm because of the additional exploitation and exploration steps introduced by the perturbation equation and the catfish swarm, respectively. Similarly, the ABC algorithm improved the mean fitness but failed to converge to the global solution in many experiments. On the other hand, the MABC algorithm predicted the global-best solution for all the experimental data without compromising the mean fitness value, and the computed standard deviations for all the test cases were close to zero. The MABC algorithm, therefore, balanced the exploration and exploitation components efficiently. The performance order of the algorithms in improving the mean fitness was MABC > ABC > PCPSO > PSO for the Freundlich-Langmuir isotherm, whereas the performance order in obtaining global-best solutions was MABC > PSO > ABC ≥ PCPSO. The relative convergence behaviour of the different solvers in fitting the experimental sorption data of Cu on McLaren soil is presented in Figure 3. The PSO and PCPSO algorithms converged quickly in the first few iterations and became trapped at local optima, whereas the convergence profiles of the ABC and MABC algorithms were fairly similar and the search continued until termination.

The performance measures obtained by the different solvers on the experimental data using the two-site Langmuir isotherm over 20 independent runs are presented in Table 6(b), with the predicted best solutions highlighted in bold. The results show that the mean fitness values of the PCPSO and ABC algorithms were only slightly better than those of the PSO algorithm; this improvement was not significant for fitting the two-site Langmuir isotherm to the experimental data. Conversely, the PCPSO and ABC algorithms were inferior to the PSO algorithm in finding the best-fit solutions. The convergence behaviour of the agents against the number of iterations for the PSO, PCPSO, and ABC algorithms over 10 independent runs is given in Figures 4–6, respectively, using the observed sorption data of Zn on Kula soil. PSO poorly established the repeatability of the best-fit solutions (a very thick band of fitness solutions) and converged to multiple solutions in different runs; in some runs the optimizer prematurely converged to local optima. The convergence of the PCPSO and ABC algorithms was similar and produced an improved mean fitness solution, but these algorithms were not robust in converging to global-best solutions. The MABC algorithm obtained global-best solutions on all the data without compromising the predicted mean best fitness. In the cases of Pb sorption on the Cecil and Lafitte soils and Cd sorption on the Calciorthid soil, PSO outperformed the MABC algorithm in predicting the best-fit solutions; however, the predicted model parameters and RMSE values were practically identical. The convergence behaviour of the agents with respect to the number of iterations for the MABC algorithm for Zn sorption on Kula soil over 10 independent runs is presented in Figure 7. The figure shows that the agents quickly converged to the global-best solution in all runs, establishing a small mean fitness value, and the computed standard deviations for all the test cases were nearly zero. The performance order of the algorithms in improving the mean fitness was MABC > ABC = PCPSO = PSO, whereas the performance order in locating the global solutions was MABC > PSO > ABC ≥ PCPSO. The relative convergence behaviour of the different optimizers on the experimental sorption data of Zn on Alligator soil is presented in Figure 8. The PSO and PCPSO algorithms converged quickly in the beginning but were trapped in local optimum solutions, whereas the convergence profiles of the ABC and MABC optimizers were similar and the search continued until termination; the MABC solver outperformed all the other solvers and provided excellent convergence. Figure 9 compares the theoretical sorption data obtained using the isotherm parameters predicted by the different solvers for the observed sorption data of Zn on Alligator soil; the comparison clearly shows that the MABC optimizer provided an excellent theoretical fit to the observed data. Theoretical fits based on the model parameters predicted by the MABC algorithm are compared with the experimental data for Pb sorption on Cecil soil and Cr sorption on Alligator soil in Figures 10 and 11, respectively. Thus, the proposed MABC solver provided excellent performance in fitting all three isotherms to all the experimental data.

6. Isotherm Selection

A performance comparison of the different solvers for the estimation of isotherm model parameters from the experimental data was presented earlier. In contaminant transport studies in soils, suitable isotherms are often selected by comparing the optimized fitness values obtained by fitting different isotherms to the sorption data. The best-fit isotherms for the observed data, based on the mean fitness values computed by the different solvers, are presented in Table 7; the suitable isotherms, which provided the lowest RMSE values, are highlighted in bold. It is clear that all the solvers except the MABC algorithm failed to select the appropriate isotherm for some data sets, owing to premature convergence while estimating the model parameters. The MABC algorithm selected suitable isotherms for all the experimental data because of its excellent performance in parameter estimation, which demonstrates the robustness of the MABC algorithm in isotherm selection. Similarly, the selection of the appropriate isotherm based on the best-fit values obtained over 20 independent runs is presented in Table 8. The results are similar to the former case: the algorithms other than MABC failed to identify the appropriate isotherm for some experimental data. The results indicate that the MABC algorithm outperformed all the other solvers in finding accurate model parameters and in selecting the appropriate isotherm for the observed data. Further, the solver is robust on problems with different parametric dimensions.

7. Summary and Conclusions

In this paper, conventional and stochastic optimization algorithms were analyzed for fitting nonlinear isotherm models to experimental sorption data. The disadvantages of conventional optimization techniques were demonstrated for the present problem: these techniques are highly sensitive to the initial solution, and their performance is impeded by the presence of local minima. Inverse models based on swarm intelligence were developed for fitting the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherm models to the observed sorption data after tuning the algorithm parameters. However, the estimated solutions vary between runs because of the stochastic nature of these algorithms. It was observed that the tested optimization algorithms taken from the literature converge relatively slowly, as they do not balance the exploitation and exploration components effectively. A modified bee-colony optimization algorithm is therefore proposed in this work. Comparative analysis reveals that the proposed solver, MABC, produces excellent results on all the experiments. The results show that the proposed solver has superior convergence capabilities, due to well-balanced exploration-exploitation components, in terms of the estimated global-best solutions, mean fitness values, and standard deviations. Further, the proposed solver proved robust on various functional terrains. The best-fit isotherms for the observed data, based on the mean fitness values computed by the different solvers, were also obtained. The improved performance of the proposed solver makes it more reliable for studying the fate of contaminants in soils; therefore, great caution should be exercised when existing solvers are used for the selection and configuration of nonlinear isotherms.

Appendix

See Pseudocode 1.

Set Iteration = 0;
INITIALIZE
    Initialize food sources (equation (12));
WHILE (the termination conditions are not met)
    % Employed Bee phase
    Evaluate nectar amount (fitness) of food sources (equations (5) and (11));
    FOR (each employed bee)
        Produce a new food source using equation (15);
        Evaluate the fitness of the new food source (equations (5) and (11));
        Greedy selection to choose the better food source;
    ENDFOR
    Compute probability p_i for each food source (equation (14));
    % Onlooker Bee phase
    FOR (each onlooker bee)
        Send the onlooker bee to a food source selected with probability p_i;
        Produce a new food source using equation (13);
        Evaluate the fitness of the new food source (equations (5) and (11));
        Greedy selection to choose the better food source;
    ENDFOR
    % Scout Bee phase
    IF (any employed bee becomes a scout bee)
        Send the scout bee to a new food source based on OBL (equation (16));
    ENDIF
    Memorize the best solution achieved so far;
    Iteration = Iteration + 1;
ENDWHILE

Competing Interests

The author declares that they have no competing interests.