Research Article  Open Access
Tadikonda Venkata Bharat, "Selection and Configuration of Sorption Isotherm Models in Soils Using Artificial Bees Guided by the Particle Swarm", Advances in Artificial Intelligence, vol. 2017, Article ID 3497652, 22 pages, 2017. https://doi.org/10.1155/2017/3497652
Selection and Configuration of Sorption Isotherm Models in Soils Using Artificial Bees Guided by the Particle Swarm
Abstract
A precise estimation of isotherm model parameters and selection of isotherms from the measured data are essential for predicting the fate and transport of toxic contaminants in the environment. Nonlinear least-square techniques are widely used for fitting isotherm models to experimental data. However, such conventional techniques pose several limitations in the parameter estimation and the choice of an appropriate isotherm model, as shown in this paper. It is demonstrated in the present work that the classical deterministic techniques are sensitive to the initial guess and thus the performance is impeded by the presence of local optima. A novel solver based on a modified artificial bee-colony (MABC) algorithm is proposed in this work for the selection and configuration of appropriate sorption isotherms. The performance of the proposed solver is compared with three other solvers based on swarm intelligence for model parameter estimation using measured data from 21 soils. Performance comparison of the developed solvers on the measured data reveals that the proposed solver demonstrates excellent convergence capabilities due to its superior exploration-exploitation abilities. The estimated solutions by the proposed solver are almost identical to the mean fitness values obtained over 20 independent runs. The advantages of the proposed solver are presented.
1. Introduction
The fate and transport of heavy metals in soils are a serious concern due to their potential impact on the environment. These reactive substances in soils undergo complex interactions based on the soil properties, type of clay minerals, and pore-fluid parameters. The problem of identifying the fate and transport of these substances in soils requires the understanding of various sorption mechanisms of species in the soil environment [1]. The sorption reactions in soils are, therefore, important in contaminant transport modeling in the subsurface and containment facilities such as landfills. The contaminant sorption process of different solutes in soils is generally described using isotherms. Sorption isotherms relate the concentration of the sorbed solute to the equilibrium solute concentration in solution. Several linear and nonlinear isotherms are commonly used in various fields of research to understand the sorption mechanism between sorbent (soil solids) and solute [2–4]. Precise selection and configuration of a sorption isotherm for a given sorbent-solute combination are essential in contaminant risk assessment. The selection and configuration depend on the chosen isotherm model, experimental representation of the environmental conditions, and parameter estimation method [5]. The sorption model parameters and selection of appropriate isotherm models are accomplished by fitting isotherm models to the experimental data using regression [6]. Nonlinear least-square techniques are widely used for this purpose [4, 5, 7, 8]. However, it will be shown in this work that classical gradient-based techniques suffer from several limitations.
Nature-inspired algorithms such as those based on swarm intelligence (SI) are global search techniques and are widely used for many optimization problems in engineering. Particle swarm optimization (PSO) [9] and artificial bee colony (ABC) [10] are commonly used SI techniques. These algorithms overcome several limitations of conventional, deterministic techniques. Thus, these global search techniques are widely applied in geotechnical engineering for several optimization problems [11–17]. However, these techniques often converge prematurely to a local optimum solution and may not generate the same solution in different runs due to the stochastic nature of the algorithm [18]. A hybrid method based on particle swarm optimization and a gradient-based algorithm is often used for configuration of sorption isotherms [19]. However, the limitations associated with the slow convergence rate in the early stage of the search and the lack of diversity in the later stage of the search may remain in the basic particle swarm optimization [20, 21]. Therefore, hybrid methods may not offset each other's limitations. Such hybrid methods are also computationally expensive as they require both gradient information and a larger number of objective function evaluations.
It will be shown that the performance of conventional optimization techniques and the PSO algorithm is greatly hampered by the presence of several local optima on the functional terrain. An inverse model based on the artificial bee-colony (ABC) optimization method is proposed in this work for selection and configuration of appropriate sorption isotherms using the experimental sorption data. The proposed solver is further improved by introducing several modifications to the ABC algorithm for better performance. The proposed technique is highly robust; it accurately estimates the sorption model parameters and identifies the appropriate isotherm for the experimental data.
2. Theory
Sorption of different chemical constituents in soils is routinely measured in the laboratory under controlled temperature, pressure, and pH. The measured sorption data is generally described by either an equilibrium or a kinetic retention process [2]. The present study is limited to modeling of the equilibrium retention process only. The measured, equilibrium sorption data is represented as the sorbed concentration, S, against the equilibrium concentration in the aqueous phase, C. Sorption isotherms describe the mathematical dependency of the sorbed concentration on the equilibrium concentration. The determination of the sorption model parameters from this sorption data is essential for the design of containment facilities. Further, configuration of the observed data using available models is important in the modeling of contaminant transport and sorption processes for containment applications. Sorption isotherms are classified into S-curve, L-curve, H-curve, and C-curve based on the initial slope of the curve [22–24]. A thorough description of various sorption isotherm models is given elsewhere [24–27]. Some of the commonly used isotherms in geoenvironmental engineering applications are given below.
2.1. Freundlich Isotherm
The Freundlich isotherm is widely used for describing nonideal and reversible adsorption [27, 28]. This isotherm is applied in heterogeneous systems such as soils. The Freundlich isotherm can be expressed as

S = K_d C^N, (1)

where S is the amount of solute retained by the soil (μg/g), C is the solute concentration in solution (μg/mL), K_d is the distribution coefficient (mL/g), and the parameter N is a dimensionless constant. The distribution coefficient describes the partitioning of solute species between solid and liquid phases [2]. The linear sorption equation is a particular case of the Freundlich expression and can be derived by substituting N = 1.
2.2. Langmuir Isotherm
The Langmuir model is another popularly used sorption model, originally developed by Langmuir [29] to describe the adsorption of gases by solids. This isotherm model is widely used in geoenvironmental applications. For example, the model was recently applied for understanding phosphorus contamination in soils [30], ammonium removal using zeolites [31], and phenol adsorption in natural soils [32]. The advantage of this model is that a maximum sorptive capacity is used in the model, which is regarded as a measure of the available sorption sites on the solid phase. The Langmuir isotherm is defined as

S = k S_max C / (1 + k C), (2)

where k and S_max are the fitting parameters. The parameter k (mL/g) is a measure of the bond strength at which the sorbed solutes are held on the soil surface and S_max (μg/g) is the maximum sorption capacity. This isotherm is extensively used to describe the sorption of solutes by soils [5]. Often various modifications to (1) and (2) are used [23] as some sorption data do not follow the standard isotherms. These modified forms are presented in the following subsections.
2.3. Freundlich-Langmuir Model
The Freundlich-Langmuir (FL) model is a power function based on the assumption of continuously distributed affinity coefficients [4, 23]. This model can be described as

S = k S_max C^m / (1 + k C^m), (3)

where m is the fitting parameter (0 < m ≤ 1). The Langmuir isotherm is a special case of this model and can be derived by substituting m = 1. The FL model is widely used for simulating the sorption behaviour of heavy metals on clay minerals [33, 34].
2.4. Two-Site Langmuir Model
The two-site Langmuir model is based on the assumption that sorption occurs on two types of surface sites, each with different bonding energies. One site contains a high bonding energy and reacts rapidly, while the other contains a lower bonding energy and reacts more slowly [35]. The modified Langmuir model with two sorbing components is given by

S = k_1 S_max1 C / (1 + k_1 C) + k_2 S_max2 C / (1 + k_2 C), (4)

where S_max1 and S_max2 are the maximum quantities of solute that can be sorbed by the two sites, and k_1 and k_2 are constants related to the bonding energies of the components. Several researchers obtained excellent fits of phosphate sorption data with the two-site Langmuir model [36] and of metal sorption, for example, lanthanide sorption on smectite clays [37, 38]. Several other isotherm models such as Farley–Dzombak–Morel [39] and Tóth [40] are often used, but their modeling is out of the scope of this work.
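For concreteness, the four isotherms above can be collected into plain functions. A minimal sketch is given below; the symbols follow the definitions in the text, while the function names themselves are illustrative.

```python
def freundlich(C, Kd, N):
    """Freundlich isotherm, Eq. (1): S = Kd * C^N."""
    return Kd * C ** N

def langmuir(C, k, Smax):
    """Langmuir isotherm, Eq. (2); k is the bond-strength parameter,
    Smax the maximum sorption capacity."""
    return k * Smax * C / (1.0 + k * C)

def freundlich_langmuir(C, k, Smax, m):
    """Freundlich-Langmuir (FL) isotherm, Eq. (3); reduces to Langmuir at m = 1."""
    return k * Smax * C ** m / (1.0 + k * C ** m)

def two_site_langmuir(C, k1, Smax1, k2, Smax2):
    """Two-site Langmuir isotherm, Eq. (4): sum of two Langmuir terms
    with different bonding energies."""
    return (k1 * Smax1 * C / (1.0 + k1 * C)
            + k2 * Smax2 * C / (1.0 + k2 * C))
```

Note that the linear model is recovered from the Freundlich function at N = 1, and the FL function at m = 1 reproduces the Langmuir function, mirroring the special cases stated in the text.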
3. Estimation of Model Parameters
Accurate estimation of the model parameters of (1) through (4) is required for the prediction of fate and transport of contaminants in the environment. These model parameters are frequently estimated by nonlinear, least-square approaches by minimizing an objective function. The objective function is an error measure which is used to determine the accuracy of a given model in describing the measured data. Several error measures, such as the coefficient of determination (R²), sum of squared deviations (SSD), mean squared error (MSE), and root mean square error (RMSE), are commonly used in the inverse analysis [27]. The RMSE is often used to verify the appropriateness of the model to the observed data and is given by

RMSE = sqrt( Σ_{i=1}^{n} (S_i^obs − S_i^fit)² / n ), (5)

where S_i^obs and S_i^fit are the observed and fitted sorption values, respectively, and n is the number of observed data.
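The RMSE of Eq. (5) is straightforward to compute; a minimal sketch:

```python
import math

def rmse(observed, fitted):
    """Root mean square error between observed and fitted sorption
    values, per Eq. (5)."""
    n = len(observed)
    return math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, fitted)) / n)
```

This is the objective function minimized by all the solvers discussed in the following sections.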
As the aforementioned theoretical isotherms are nonlinear, optimization techniques are used to estimate the model parameters of a given model isotherm and to determine the appropriate isotherm for a given data. Nonlinear regression techniques such as Levenberg–Marquardt method are widely used in the literature for parameterization [34, 41, 42]. A brief description of gradientbased technique is given here.
3.1. Gradient-Based Approach
Gradient-based techniques are the conventional optimization tools for parameter estimation in many engineering applications. The principle behind such techniques is determining an optimum point on the search space where the derivatives of the objective function with respect to the model parameters (b) become zero. The unknown model parameter vector, b, in these methods is determined iteratively. The iteration of the gradient-based method is based on

b^{k+1} = b^k + α_k d_k, (6)

where α_k is a scalar which determines the step length in the direction of descent d_k (e.g., d_k = −∇f(b^k) for steepest descent). Several variants of gradient-based algorithms are used that may require the information of the second derivative of the objective function (Hessian matrix) for a faster convergence rate.
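The iteration of Eq. (6) can be sketched with a finite-difference gradient (this is an illustrative steepest-descent sketch with a fixed step length α, not the full quasi-Newton machinery of production solvers):

```python
def numerical_gradient(f, b, h=1e-6):
    """Central finite-difference approximation of the gradient of f at b."""
    g = []
    for j in range(len(b)):
        bp, bm = list(b), list(b)
        bp[j] += h
        bm[j] -= h
        g.append((f(bp) - f(bm)) / (2 * h))
    return g

def gradient_descent(f, b0, alpha=0.1, iters=200):
    """Iterate b^{k+1} = b^k + alpha * d_k with d_k = -grad f(b^k), Eq. (6)."""
    b = list(b0)
    for _ in range(iters):
        g = numerical_gradient(f, b)
        b = [bj - alpha * gj for bj, gj in zip(b, g)]
    return b
```

Started near a local minimum, this iteration converges to that minimum, which is exactly the initial-guess sensitivity discussed in the next paragraph.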
It is shown in this work that these techniques suffer from several limitations. These conventional algorithms are highly sensitive to the fitness function (error measure) and to the initial guess. Of late, nature-inspired or swarm intelligence search techniques have been used as an alternative to the conventional methods for solving several optimization problems in geoenvironmental engineering [11–13, 15–18].
3.2. Swarm Intelligence
Swarm intelligence (SI) is a field of research which develops computational techniques by taking inspiration from nature for solving optimization problems. Genetic algorithms (GA), particle swarm optimization (PSO), and artificial bee colony (ABC) are some of the techniques developed by mimicking nature for the determination of global optimum solutions to inverse problems. In this paper, several solvers based on SI techniques were developed for predicting the model parameters of the isotherm equations and for determining the appropriate isotherms for the measured data. The following SI techniques were used in developing the solvers.
3.2.1. Particle Swarm Optimization
PSO is a class of stochastic, derivative-free, population-based methods. The working principle of the PSO method is inspired by the social behaviour of animals such as flocking of birds and schooling of fishes [9]. In PSO, each particle (i.e., agent) of the population (i.e., swarm) is thought of as a collision-free bird in a D-dimensional search space, where D is the number of model parameters to be optimized. The PSO algorithm starts by initializing a predefined number of particles in the search space represented by the objective function. Each particle is updated with a position and velocity at any given iteration in the search space. The position and velocity are updated based on the flying experience (movement) of the individual particles and their neighbors to achieve a better position in the search space. Further, the best location of an individual particle in its history (all the previous iterations) and the best location experienced by all the particles in any given iteration are memorized. Therefore, the movement of the particle is an aggregated acceleration towards its best previously visited location and towards the best individual of a topological neighborhood [43]. The velocity (v) and movement (x) of the ith particle in the dth dimension are updated using [44, 45]

v_id^{t+1} = v_id^t + c1 r1 (p_id − x_id^t) + c2 r2 (p_gd − x_id^t), (7)
x_id^{t+1} = x_id^t + v_id^{t+1}, (8)

where p_id is the history-best position of each individual particle; p_gd is the global-best position; c1 and c2 are the acceleration constants that accelerate movement of the particles towards the individual and global-best positions, respectively; and r1 and r2 are two independent random numbers, uniformly distributed between 0 and 1, used to stochastically vary the relative pull between p_id and p_gd. The randomness is introduced using these constants to provide unpredictable behaviour of the swarm, which is useful for preventing premature convergence. This iterative process continues until the predefined number of iterations is reached.
Individual particles are drawn towards the success of each other during the search process, which results in the formation of a cluster of particles in the global optimum region.
PSO is widely used for solving optimization problems in many engineering disciplines due to its simplicity. Several experimental studies on the application of the PSO algorithm to benchmark functions and optimization problems revealed that the swarm explodes quickly as the first (velocity) term in (7) increases incessantly [45]. The notable works on improving the update equation are the introduction of the inertia factor to control the velocity of the particles [44] and the constriction factor which constrains the magnitude of the velocity [45]. The modified form of the velocity-update equation is [12, 44, 45]

v_id^{t+1} = χ [ω v_id^t + c1 r1 (p_id − x_id^t) + c2 r2 (p_gd − x_id^t)], (9)

where χ is the constriction factor and ω is the inertia weight introduced to improve the performance of the PSO algorithm. The first term in (9) is the inertia term which restricts the particles from changing the flight direction drastically. Considerable improvements to (9) are available in the literature for improving the performance of the PSO algorithm. The present work uses the perturbed PSO algorithm for developing the solver.
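The update equations (8) and (9) can be sketched as a minimal PSO loop. This is an illustrative implementation under assumed default settings (χ = 0.729, c1 = c2 = 2.05 are common literature values, not the tuned coefficients of this paper):

```python
import random

def pso(f, bounds, n_particles=20, iters=150, chi=0.729, w=0.9, c1=2.05, c2=2.05):
    """Minimal PSO minimizer using the inertia-weight/constriction
    velocity update of Eq. (9) and the movement update of Eq. (8)."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal-best positions (p_i)
    pbest = [f(x) for x in X]                  # personal-best fitness values
    gi = min(range(n_particles), key=lambda i: pbest[i])
    g, gbest = P[gi][:], pbest[gi]             # global-best position (p_g) and fitness
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = chi * (w * V[i][d]
                                 + c1 * r1 * (P[i][d] - X[i][d])
                                 + c2 * r2 * (g[d] - X[i][d]))   # Eq. (9)
                X[i][d] += V[i][d]                               # Eq. (8)
            fx = f(X[i])
            if fx < pbest[i]:                  # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:                 # update global best
                    gbest, g = fx, X[i][:]
    return g, gbest
```

For the isotherm-fitting problem, f would be the RMSE of Eq. (5) evaluated over the candidate model parameters, with bounds on each parameter.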
3.2.2. Variant of PSO Algorithm
The classical PSO algorithm and many of its variants suffer from a few limitations during the search process. The major drawback of these algorithms is that they converge to suboptimal solutions for several problems due to the application of the same movement update equation (8) to the global-best particle [18]. Bharat et al. [18] proposed the perturbed PSO (PPSO) algorithm, which uses the following perturbation equation for the global-best particle:

x_gd^{t+1} = p_gd + c3 (r3 − 0.5), (10)

where c3 is a constant and r3 is a uniformly distributed random number in [0, 1].
Equation (10) is used to randomly perturb the global-best particle in the suboptimal region to avoid premature convergence. The performance of the PPSO algorithm is further improved by introducing catfish particles [46] into the swarm [14]. The particles are arranged in descending order of their fitness values after a few generations (every catIter iterations). A known percentage of low-fit solutions are chosen as catfish particles and are repositioned in the D-dimensional search space based on opposition-based learning (OBL). The OBL technique is based on the utilization of the opposite numbers of the current positions in the search space [47]. This technique is explained in the following subsections. The catfish particles improve the efficiency by exploring new feasible regions in the search space. This optimization algorithm is referred to as the PCPSO algorithm in this work. The PCPSO algorithm is believed to alleviate premature convergence of the swarm even with a small population due to the unique combination of the perturbation equation for the global-best particle and the exploration capabilities of the catfish particles [14]. Two other solvers based on SI algorithms are also developed in this work.
3.2.3. Artificial BeeColony (ABC) Algorithm
The ABC algorithm is a recent addition to the class of global optimization algorithms. The ABC algorithm [10] is inspired by the foraging behaviour of bees. It has recently been shown on over 50 benchmark functions (unimodal to multimodal, separable to nonseparable) that the ABC algorithm outperforms other popular SI techniques such as the genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO) [48] due to a better trade-off between exploration and exploitation.
The algorithm uses several steps inspired by the collective behaviour of bees in the collection and processing of nectar. The basic ABC algorithm uses three classes of bees, that is, employed bees, onlooker bees, and scout bees. Employed bees are responsible for locating the nectar sources and sharing this information with the onlookers on the dance area of the hive. Onlooker bees select a nectar source based on its quality (objective function) with some probability. Therefore, the probability of a higher number of onlookers at a better quality food source is higher. The task of the employed and onlooker bees is to explore the search space. Scout bees are similar to the catfish particles and discover new food sources in the search space by a random search process. Further, the employed bees with abandoned food sources become the scout bees. Each food source (a set of model parameters) is a feasible solution of the problem and the nectar amount of a food source represents the quality of the solution, that is, the objective function. The quality of the solution is represented by the fitness value, given by

fit_i = 1 / (1 + f_i), (11)

where f_i is the objective function (5) evaluated at x_i, the model parameter vector of the problem. Half of the colony is assumed to be comprised of employed bees and the other half of onlookers. Each food source is exploited by only one employed bee. Therefore, the number of employed or onlooker bees is equal to the number of food sources. The following four steps are used in the algorithm.
(i) Initialization Phase. A predefined number of food sources are randomly generated in the D-dimensional parametric space, where "D" is the dimension of the parametric space representing the number of parameters to be optimized. The food sources are generated using

x_ij = x_j^min + r(0, 1) (x_j^max − x_j^min), (12)

where i = 1, ..., SN (SN is the number of food sources and equals half of the colony size); j = 1, ..., D (D is the dimension of the problem); x_j^min and x_j^max are the lower and upper bounds of the jth parameter; and x_ij is the jth parameter of the ith food source. The fitness of all the food sources is evaluated using a suitable fitness norm. Further, the counters which store the number of trials of the solutions are reset to 0 in this phase.
(ii) Employed Bees Phase. Each employed bee produces a modification to the solution in its memory depending on the local (visual) information. The modification to the present food source is determined using the following equation:

v_ij = x_ij + w φ_ij (x_ij − x_kj), (13)

where w represents the inertia factor (w = 1 in the canonical ABC), x_kj represents a randomly selected food source different from x_ij (k ≠ i), and φ_ij is a uniformly distributed random number in the range [−1, 1]. The fitness of the newly discovered solution is computed. A greedy selection process is applied between the new (v_i) and the original (x_i) solution. The better solution is stored in the memory. The trial count for this food source is reset to 0 if the solution is improved; otherwise, its value is increased by 1. Employed bees share the nectar (solution) information with the onlooker bees on the dance area after this phase.
(iii) Onlooker Bees Phase. The food sources are chosen based on a probability proportional to their nectar amount (quality of the objective function) in this phase. A high number of onlooker bees swarm around the same food source when the fitness of the source is high. The probability is calculated according to

p_i = fit_i / Σ_{j=1}^{SN} fit_j, (14)

where the fitness of each agent is calculated according to (11). After this stage, each onlooker bee finds a new food source in the neighborhood based on (13), similar to the employed bees. The new position replaces the old one in the memory when the quality of the new position is better than that of the previous one. Otherwise, the bee keeps the old position in memory and the trial count is raised by 1.
(iv) Scout Bees Phase. A food source is abandoned and a new one is selected when the food source does not improve in quality within a predetermined number of iterations, called the "limit." The new food source is selected randomly in the search space using (12).
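The four phases above can be sketched as a compact minimizer. This is an illustrative, stdlib-only sketch of the canonical ABC (w = 1), assuming a nonnegative objective so that the fitness of Eq. (11) applies directly:

```python
import random

def abc_minimize(f, bounds, n_food=10, iters=50, limit=None):
    """Minimal artificial bee colony: initialization, employed,
    onlooker, and scout phases, per Eqs. (11)-(14)."""
    D = len(bounds)
    limit = limit or n_food * D                       # scout "limit"
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_food)]
    fit = [1.0 / (1.0 + f(x)) for x in X]             # Eq. (11), assuming f >= 0
    trials = [0] * n_food

    def try_neighbor(i):
        """Neighborhood move of Eq. (13) with greedy selection."""
        k = random.choice([j for j in range(n_food) if j != i])
        j = random.randrange(D)
        v = X[i][:]
        v[j] = X[i][j] + random.uniform(-1, 1) * (X[i][j] - X[k][j])
        lo, hi = bounds[j]
        v[j] = min(max(v[j], lo), hi)
        fv = 1.0 / (1.0 + f(v))
        if fv > fit[i]:
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                       # employed bee phase
            try_neighbor(i)
        total = sum(fit)
        for _ in range(n_food):                       # onlooker phase, Eq. (14)
            r, acc, i = random.uniform(0, total), 0.0, 0
            for i, fi in enumerate(fit):
                acc += fi
                if acc >= r:
                    break
            try_neighbor(i)
        for i in range(n_food):                       # scout bee phase
            if trials[i] > limit:
                X[i] = [random.uniform(lo, hi) for lo, hi in bounds]
                fit[i], trials[i] = 1.0 / (1.0 + f(X[i])), 0
    return min(X, key=f)
```

As in the text, the number of employed bees equals the number of food sources, and the roulette-wheel selection in the onlooker phase concentrates the search around high-fitness sources.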
Some modifications to the original ABC algorithm have been studied on benchmark functions. One modification of the basic ABC involves the position-update equation, which was used by Zhu and Kwong [49] on six benchmark functions with favorable results. However, the convergence towards the gbest solution cannot be controlled in the Zhu and Kwong [49] formulation due to the lack of acceleration factors. Apart from the modification to the position-update equation, the frequency and magnitude of the perturbation have also been modified [50, 51]. OBL was used by Gao and Liu [52] in the initialization phase on benchmark functions. Hybrid techniques combining crossover operations of GA with ABC were recently used by Karaboga and Kaya [53].
3.2.4. Modified ABC (MABC) Algorithm
The exploration-exploitation qualities of the classical ABC algorithm are improved in the present work. The following position-update equation is used for the employed bees in the modified algorithm:

v_ij = x_ij + w φ_ij (x_ij − x_kj) + C ψ_ij (y_j − x_ij), (15)

where φ_ij and ψ_ij are random numbers varying within [−1, 1] and [0, 1], respectively, w is a constant inertia weight, C is the acceleration coefficient, and y_j is the jth component of the global-best food source. The perturbation equation in (15) produces perturbed solutions around the present solution in the randomly chosen direction of another food source. Ranking of the solutions by fitness is not required in this method as the food sources are randomly selected to generate the new solutions.
The movement of the employed bees under the proposed position-update equation is based on the resultant direction of the best-fit food source (position) and the randomly chosen food source location. The inertia weight and acceleration coefficient improve the exploration process. The position-update equation for the onlooker bees is based on (13) with an adaptive inertia factor. This helps in carrying out the exploitation process around the better food sources near the present positions. Further, the opposition-based concept is used to generate new food sources for the scout bees.
OBL is based on the utilization of the opposite numbers of the current positions in the search space [47]. If x = (x_1, ..., x_D) is a food source position in the D-dimensional space, with x_j ∈ [lb_j, ub_j] for j = 1, ..., D, the new position based on the OBL is the opposite point x̄ of x, which is defined as

x̄_j = lb_j + ub_j − x_j, (16)

where lb_j and ub_j are the lower and upper limits of the jth dimension.
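Equation (16) amounts to reflecting each coordinate through the midpoint of its bounds; a minimal sketch:

```python
def opposite_position(x, lower, upper):
    """Opposition-based learning, Eq. (16): the opposite point of x,
    computed per dimension as lb_j + ub_j - x_j."""
    return [lo + hi - xj for xj, lo, hi in zip(x, lower, upper)]
```

This is the repositioning rule used for the catfish particles in PCPSO and for the scout bees in MABC: a solution near one boundary is mapped to the symmetric location near the opposite boundary, forcing exploration of the unvisited half of the search space.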
4. Inverse Analysis
The performance of all the SI techniques is based on the tuning parameters. The tuning parameters of these algorithms are very sensitive to the nature of the parametric space [14, 18, 54]. The tuning parameters of all the four algorithms and the measured data are defined in the following subsection.
4.1. Experimental Sorption Data
Observed sorption data of metal substances, that is, Cu, Zn, Ca, and Pb, on various soils were taken from the literature for selection and configuration of the described isotherms. Natural soils having various percentages of clay content, different clay minerals, and pH were chosen. The physical properties of the soils as reported in the literature are presented in Table 1. The observed data contain the sorbed concentration of the metal species against the corresponding equilibrium concentrations. The details of the experimental data as reported in the literature are given in Table 2.


4.2. Parameter Setting
Twenty independent runs were used to test the performance of the proposed solvers. The number of iterations was fixed to 150 for all the algorithms and for all the isotherms. However, the number of agents was varied based on the number of model parameters, that is, the size of the dimensional (parametric) space. The algorithms used 10, 20, and 30 agents for fitting the experimental data with the Freundlich isotherm (2-dimensional), Freundlich-Langmuir isotherm (3-dimensional), and two-site Langmuir isotherm (4-dimensional), respectively.
The parameter setting for the different algorithms was determined empirically on synthetic test data for which the optima were known. The tuning parameters of both PSO algorithms, such as the acceleration coefficients, inertia factor, and constriction coefficient, are dependent on the given problem [11–14, 18]. The coefficient values that provided the best performance for the present problem were adopted from these tests. The same parameter setting was used for both PSO algorithms. The perturbation coefficient c3 in (10) was chosen to provide good convergence for the algorithms [14, 18]. The PCPSO algorithm required additional tuning parameters, namely, catIter and the number of catfish particles. The 20% worst-fit particles were considered as the catfish particles and repositioned using OBL every 10 iterations.
The colony size in the ABC and MABC algorithms was twice the number of agents considered for the other algorithms. In other words, the number of employed bees and onlooker bees was the same and equal to the number of agents for the other algorithms. In these algorithms, the function was evaluated three times in each iteration. Therefore, the number of iterations was fixed to 50 to use the same number of function evaluations per agent in all the algorithms. The scout limit was equal to the colony size multiplied by the number of dimensions of the problem. The parameter w was decreased gradually from 2.0 at the beginning of the iterations to 0.5 at the end. The acceleration coefficient, C, in the MABC algorithm was fixed to 3.0 for the best performance based on the synthetic tests. The agents were repositioned randomly in the search space when they moved out of the boundaries of the dimensional parametric space.
The pseudocode of the proposed MABC algorithm is given in the Appendix.
5. Results and Discussion
5.1. Parametric Space
The parametric space for the experimental data of Cr sorption on Alligator soil using the nonlinear isotherm equations was computed. The parametric spaces were obtained using the two-parameter models of Freundlich and Langmuir isotherms (the latter obtained by substituting m = 1 in the Freundlich-Langmuir isotherm) and are shown in Figures 1 and 2, respectively. The RMSE (5) between the experimentally measured data and theoretically computed data from 2,250,000 sets of model parameters (K_d, N or k, S_max) was used. The parametric spaces appear fairly different in these two cases although the same experimental data was used. Further, these functional terrains contain unique global optimum solutions, several local optima, and a plane surface having the same RMSE values. The parametric space also varies with measurement errors in the data and the number of data points. The complexity of the functional terrains increases further with the parametric dimension. The conventional algorithms may converge to one of these local optima when the initial guess lies in such regions of the parametric space. The application of robust algorithms for determining global optimum solutions for the present problem is, therefore, justified.
5.2. GradientBased Algorithm
The performance of gradient-based algorithms is dependent on the user-supplied initial solution (guess) and the type of error measure. The sensitivity of these algorithms to the initial guess was verified qualitatively for the present problem. The "fmincon" MATLAB® routine, featuring constrained optimization of a multivariable function using the interior-point technique, was used for finding the isotherm model parameters that provided the best-fit theoretical sorption data to the measured data. The interior-point technique was found to have better convergence abilities than the trust-region-reflective or any other available technique. A finite-difference numerical solution was used to determine the first- and second-order derivatives of the objective function with respect to the model parameters. The tolerance value of the objective function was fixed to 1 × 10^{−8} and a very large value was used for the maximum number of iterations, which served as the stopping criterion. Different initial guess solutions were used to obtain the converged solutions. The performance of the gradient-based algorithm for fitting the two-site Langmuir isotherm to the experimental data of Cd sorption on Molokai soil is presented in Table 3. The solutions converged to several suboptimal and local solutions for different initial guesses. The predicted model parameters were fairly different from each other, but the RMSEs were very close to each other. Thus, finding a global-best solution is fairly complex in such situations. It is evident from these results that the performance of the gradient-based algorithm is highly sensitive to the initial guess and the convergence is impeded by the presence of local minima.
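The initial-guess sensitivity can be reproduced on a toy one-dimensional multimodal error surface (an illustrative stand-in, not the fmincon setup or the isotherm objective used above): plain descent started from different guesses settles into different minima with similar-looking convergence behaviour.

```python
def descend(f, x0, alpha=0.01, iters=2000, h=1e-6):
    """Plain gradient descent in one dimension with a
    finite-difference derivative."""
    x = x0
    for _ in range(iters):
        x -= alpha * (f(x + h) - f(x - h)) / (2 * h)
    return x

def error_surface(x):
    """A toy multimodal objective with two unequal minima,
    near x = -1.02 (global) and x = 0.98 (local)."""
    return x ** 4 - 2 * x ** 2 + 0.2 * x

left = descend(error_surface, -2.0)    # converges to the global minimum
right = descend(error_surface, 2.0)    # converges to the local minimum
```

Both runs terminate normally, yet only one reaches the global minimum, mirroring the behaviour reported in Table 3.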

5.3. Parameter Estimation by SI Based Solvers
Model parameters of the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms for the experimental data were predicted using the developed solvers. The performance of the solvers was determined based on the mean fitness value, best-fit solution, and standard deviation (STD) computed from 20 independent runs, as the SI techniques are stochastic in nature and may produce different solutions in different runs. The robustness of the algorithms is evaluated by comparing the mean fitness value with the best fitness (i.e., minimum error between theoretical and experimental data) and by determining the standard deviation. A robust solver predicts a best-fit solution very close to the mean fitness solution and yields the smallest standard deviation, that is, close to zero. The model parameters obtained by the different algorithms for all 21 experiments using the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms are given separately in Tables 4(a), 5(a), and 6(a), respectively. The best and worst solutions obtained in the 20 independent runs based on the objective function values are provided. The Langmuir isotherm was not used in the comparison as it is the special case of the Freundlich-Langmuir isotherm with m = 1.